---
abstract: |
We discuss the problem of experimentally evaluating linear-time temporal logic (LTL) synthesis tools for reactive systems. We first survey previous such work for the currently publicly available synthesis tools, and then draw conclusions by deriving useful schemes for future such evaluations.
In particular, we explain why previous tools have incompatible scopes and semantics and provide a framework that reduces the impact of this problem for future experimental comparisons of such tools. Furthermore, we discuss which difficulties the complex workflows that begin to appear in modern synthesis tools induce on experimental evaluations and give answers to the question how convincing such evaluations can still be performed in such a setting.
author:
- Rüdiger Ehlers
bibliography:
- 'bib.bib'
title: Experimental Aspects of Synthesis
---
Introduction
============
The problem of synthesizing reactive systems from linear-time temporal logic (LTL) specifications [@DBLP:conf/focs/Pnueli77; @DBLP:conf/icalp/AbadiLW89; @DBLP:conf/popl/PnueliR89; @DBLP:journals/bsl/KupfermanV99; @Vardi1995] has attracted many researchers in the past, leading to a tremendous number of results in this area. Broadly, works contributing to the progress of its solution can be classified into two sorts.
On the theory side, major breakthroughs have been obtained by establishing the 2EXPTIME-hardness of this problem [@DBLP:conf/icalp/PnueliR89; @DBLP:conf/stoc/VardiS85; @DBLP:journals/jcss/FischerL79], along with asymptotically optimal automata-theoretic constructions for solving it [@DBLP:conf/popl/PnueliR89; @DBLP:conf/icalp/AbadiLW89; @Vardi1995]. Recent works are concerned with making these constructions easier [@DBLP:conf/focs/KupfermanV05; @DBLP:conf/atva/ScheweF07a; @DBLP:conf/vmcai/PitermanPS06] or enhancing the scope of the algorithms and hardness-results known to, e.g., distributed systems [@DBLP:conf/lics/KupfermanV01; @DBLP:conf/lics/FinkbeinerS05].
On the practical side, many works deal with the construction of sophisticated algorithms that aim at improving the scalability of current synthesis techniques [@DBLP:conf/fmcad/JobstmannB06; @DBLP:conf/fmcad/SohailS09; @DBLP:conf/cav/FiliotJR09; @DBLP:conf/atva/FiliotJR10; @DBLP:conf/vmcai/PitermanPS06; @DBLP:conf/cav/Ehlers10]. While the 2EXPTIME-hardness of the LTL synthesis problem induces a limit on the effectiveness of any approach, this line of research is motivated by the observation that “typical” specifications in practice have a structure that can be exploited [@DBLP:conf/lics/Kupferman06; @DBLP:conf/vmcai/SohailSR08; @DBLP:journals/corr/abs-1006-1408; @DBLP:conf/atva/FiliotJR10; @DBLP:conf/fmcad/SohailS09; @DBLP:conf/vmcai/PitermanPS06; @DBLP:conf/cav/Ehlers10; @DBLP:conf/cav/KupfermanPV06].
While many works fall into both categories, contributions to the latter sort typically contain proofs of the usefulness of the proposed techniques obtained by experimentally evaluating a prototype implementation. This is commonly done by taking some example specifications (the so-called [benchmarks]{}) and showing that the prototype is able to handle these in reasonable time.
The situation is similar to the one for the problem of [satisfiability]{} (SAT) testing (see, e.g., [@Biere2009]), where despite its NP-completeness, an active area of research has emerged, witnessing its progress by the fact that practical problems with millions of variables are nowadays routinely solved by modern SAT solvers. One of the key factors for this success is the possibility to perform meaningful benchmarking, which drives the development of new solution heuristics into those directions that appear to be most promising. Thousands of example problem instances can easily be used to obtain, optimise and test new approaches. As a result, the annual SAT solving competitions typically draw a lot of interest, for example 19 submissions to the main track in the 2010 SAT-Race [@SATRace2010].
For the synthesis of reactive systems from linear-time specifications, however, there appears to be far less interest in tools. At the time of writing, to the best of our knowledge, there are only four publicly available synthesis tools[^1], namely [<span style="font-variant:small-caps;">Anzu</span> [@AnzuTool; @DBLP:conf/cav/JobstmannGWB07; @DBLP:conf/date/BloemGJPPW07; @DBLP:journals/entcs/BloemGJPPW07]]{}, [<span style="font-variant:small-caps;">Lily</span> [@LilyTool; @DBLP:conf/fmcad/JobstmannB06]]{}, [<span style="font-variant:small-caps;">Acacia</span> [@AcaciaTool; @DBLP:conf/cav/FiliotJR09; @DBLP:conf/atva/FiliotJR10]]{}, and [<span style="font-variant:small-caps;">Unbeast</span> [@UnbeastTool; @DBLP:conf/cav/Ehlers10]]{}. Equally unfortunately, the number of benchmarks available is rather low. One can identify (at least) three reasons for this state of affairs:
1. A factor contributing to this difference is the fact that while rudimentary SAT solvers can be written in the order of hours, creating even a simple synthesis tool requires the implementation of comparably complex operations. As an example, for approaches working with deterministic automata, a construction similar to Safra’s determinisation procedure [@phd-safra] needs to be performed, which has been argued to be notoriously complex to implement [@DBLP:conf/csl/HenzingerP06; @DBLP:conf/tacas/HardingRS05; @DBLP:conf/fmcad/JobstmannB06; @DBLP:conf/lics/Kupferman06; @DBLP:conf/cav/KupfermanPV06].
2. As a second reason, the publication schemes of the formal methods and SAT communities are different. While the latter appreciates work whose primary concern is to improve the scalability of current SAT solving techniques (see, e.g., [@DBLP:conf/sat/Kottler10; @DBLP:conf/sat/NadelR10; @POS2010] for recent such publications from 2010), works in the formal methods area are typically built around some appealing new concept, which is mostly only evaluated briefly on some prototype implementation (see, e.g., [@DBLP:conf/spin/GhafariHR10] for a pointer to a typical such case in the area of software model checking, or simply compare recent practical synthesis papers [@DBLP:conf/atva/FiliotJR10; @DBLP:conf/cav/Ehlers10; @DBLP:conf/cav/FiliotJR09; @DBLP:conf/fmcad/JobstmannB06; @DBLP:conf/vmcai/PitermanPS06; @DBLP:conf/fmcad/SohailS09; @DBLP:conf/vmcai/SohailSR08]).
Arguably, one of the main reasons for this difference is the fact that the roots of formal methods lie in theoretical computer science, where, given technical correctness and a sufficiently general style of writing, the main merits of a paper are seen in the significance of the conceptual contribution to the field [@DBLP:journals/iandc/Parberry94]. As a result, papers that propose improvements to current techniques but lack a major theoretical insight have a small chance of being accepted at major conferences, even if the proposed techniques lead to significant speed-ups in the synthesis process. Consequently, time is typically only invested in writing synthesis tools after a new idea has been developed that is *both* theoretically compelling and gives the impression that it will significantly improve upon the performance of previous techniques.
3. Third, even if time has been invested in writing a synthesis tool, a new technique still has to be shown to be competitive with earlier techniques. Typically, benchmarking is used for this purpose. In the scope of synthesis, however, it can be observed that this is by no means a trivial task – all four currently available synthesis tools have different scopes and semantics. Also, the number of benchmarks available is extremely low and the benchmarks in previous evaluations have often been rewritten to be compatible with the improvements proposed. This puts a high burden on the quality of future works in this area: comparability to previous works must be maintained in order to give the experimental evaluation high credibility. At the same time, as improvements to synthesis techniques often introduce additional details that must be taken care of in the evaluation (e.g., having two semi-algorithms running in parallel in [@DBLP:conf/cav/FiliotJR09; @DBLP:conf/cav/Ehlers10] or the assumption dropping heuristic from [@DBLP:conf/atva/FiliotJR10]), it is very hard to produce an appealing evaluation without spending too much space in a publication on the details.
Recently, the first of these problems has been attenuated by the availability of good LTL-to-Büchi translation tools [@DBLP:journals/sttt/RozierV10; @DBLP:conf/cav/GastinO01; @DBLP:conf/mascots/Duret-LutzP04] and determinisation and optimisation tools for $\omega$-automata [@DBLP:journals/tcs/KleinB06; @DBLP:conf/sat/Ehlers10; @DBLP:conf/spin/EhlersF10] on the one hand, and the availability of efficient binary decision diagram (BDD) libraries [@Somenzi98cudd:cu; @DBLP:journals/tc/Bryant86] and satisfiability modulo theory (SMT) or SAT-solvers [@Biere2009] as reasoning backbones on the other hand. Thus, developers of new synthesis tools can build their implementations on top of such previous work. The second problem will hopefully lose impact over time when more interest in the practical side of reactive system synthesis is aroused.
In this paper, we approach the remaining third problem by giving insights into why experimental evaluations in the context of synthesis are notoriously harder than, e.g., in the SAT context, and by proposing “standardised” evaluation schemes for synthesis tools that aim at simplifying further work in this area. We hope that our discussion helps interested observers of the advances in the practical approaches to synthesis (by providing a survey on the problem of evaluating synthesis tools, with special consideration of work already done in this area) as well as authors of future synthesis tools (by giving inspirations and in particular justification for the choice of their experimental settings) and paper reviewers in the field (by explaining the difficulties of performing an experimental evaluation for synthesis tools).
We start by giving a definition of the LTL (open) synthesis problem in Section \[sec:Problem\]. In Section \[sec:tools\], we review the synthesis approaches of the synthesis tools that were publicly available at the time of writing. Afterwards, we give some observations on these approaches (and their experimental evaluations). In Section \[sec:evaluationFramework\], we analyse the observations and propose a framework for future synthesis tool experimental evaluations. We conclude with a summary.
The LTL open synthesis problem {#sec:Problem}
==============================
We start by giving a problem description of reactive system synthesis that focusses on those aspects that require special attention when comparing synthesis approaches. Formally, a synthesis problem instance is a triple $\langle {\mathsf{AP}}_I, {\mathsf{AP}}_O, \psi \rangle$, where ${\mathsf{AP}}_I$ is a set of input atomic propositions, ${\mathsf{AP}}_O$ is a set of output atomic propositions, and $\psi$ is a formula in linear-time temporal logic (LTL) [@DBLP:conf/focs/Pnueli77] over ${\mathsf{AP}}_I \uplus {\mathsf{AP}}_O$. For the scope of this paper, we denote the LTL temporal operators for “globally”, “finally” and “next-time” by $\mathsf{G}$, $\mathsf{F}$, and $\mathsf{X}$. For ease of reading, we sometimes call the atomic propositions simply variables or bits.
We say that a triple $\langle {\mathsf{AP}}_I, {\mathsf{AP}}_O, \psi \rangle$ represents a [realisable]{} specification in the [Mealy-type semantics]{} if there exists some function $f : (2^{{\mathsf{AP}}_I})^+ \rightarrow 2^{{\mathsf{AP}}_O}$ such that for all $w = w_0 w_1 \ldots \in (2^{{\mathsf{AP}}_I})^\omega$, we have $(w_0 \cup f(w_0)), (w_1 \cup f(w_0 w_1)), (w_2 \cup f(w_0 w_1 w_2)), \ldots \models \psi$. Likewise, we say that $\langle {\mathsf{AP}}_I, {\mathsf{AP}}_O, \psi \rangle$ is realisable in the [Moore-type semantics]{} if there exists some function $f : (2^{{\mathsf{AP}}_I})^* \rightarrow 2^{{\mathsf{AP}}_O}$ such that for all $w = w_0 w_1 \ldots \in (2^{{\mathsf{AP}}_I})^\omega$, we have $(w_0 \cup f(\epsilon)), (w_1 \cup f(w_0)), (w_2 \cup f(w_0 w_1)), \ldots \models \psi$ for $\epsilon$ denoting the empty word. Specifications that are not realisable are called unrealisable in the respective semantics.
Typically, realisability checking is performed by building a game between a [system player]{} and an [environment player]{}. In this setting, a function $f$ satisfying the constraints stated above is called a [winning strategy]{}. Details on the game-based view to synthesis can be found in [@DBLP:conf/dagstuhl/2001automata].
It is well-known that whenever there exists some winning strategy for one of the semantics above, there also exists a finite representation of it. For the Mealy-type semantics, this representation is typically given as a Mealy automaton, whereas for the Moore-type semantics, Moore automata serve this purpose (see, e.g., [@Mueller2000]).
Intuitively, the Mealy- and Moore-type semantics differ in the order of input and output. As an example, for the LTL formula $\psi = \mathsf{G}(r \leftrightarrow g)$, the specification $\langle \{r\}, \{g\}, \psi \rangle$ is realisable for the Mealy-type semantics but not for the Moore-type semantics. The reason is that in the Mealy-type semantics, the system already knows the input in the respective computation cycle when having to choose an output, whereas in the Moore-type semantics, the roles are swapped and thus the system has to guess whether $r$ is set or not when choosing whether $g$ should be set.
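The difference can be made concrete with a small sketch (in Python, with naming of our own; this code does not stem from any of the tools discussed): a Mealy-type strategy for $\langle \{r\}, \{g\}, \mathsf{G}(r \leftrightarrow g) \rangle$ can simply copy the current input, whereas a Moore-type strategy must commit to an output before seeing it and can therefore always be falsified.

```python
# Toy illustration of Mealy- vs. Moore-type semantics for
# psi = G(r <-> g) with AP_I = {r} and AP_O = {g}.

def mealy_strategy(inputs_so_far):
    # Mealy: the output may depend on the current input, so the
    # system simply copies r to g.
    return {"g"} if "r" in inputs_so_far[-1] else set()

def moore_strategy(inputs_so_far):
    # Moore: the output is chosen before the current input is seen;
    # any fixed choice (here: never raise g) can be contradicted.
    return set()

def satisfies_r_iff_g(trace):
    return all(("r" in step) == ("g" in step) for step in trace)

inputs = [{"r"}, set(), {"r"}, {"r"}]

# Mealy-type play: the output at step i sees inputs w_0 .. w_i.
mealy_trace = [step | mealy_strategy(inputs[: i + 1])
               for i, step in enumerate(inputs)]
assert satisfies_r_iff_g(mealy_trace)

# Moore-type play: the output at step i sees only inputs w_0 .. w_{i-1}.
moore_trace = [step | moore_strategy(inputs[:i])
               for i, step in enumerate(inputs)]
assert not satisfies_r_iff_g(moore_trace)
```

The second assertion holds for any fixed Moore strategy on this specification, since the environment can always choose $r$ after the output has been committed.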
Synthesis tools {#sec:tools}
===============
We briefly recapitulate the ideas behind the synthesis tools [<span style="font-variant:small-caps;">Anzu</span> [@DBLP:conf/cav/JobstmannGWB07]]{}, [<span style="font-variant:small-caps;">Lily</span> [@LilyTool; @DBLP:conf/fmcad/JobstmannB06]]{}, [<span style="font-variant:small-caps;">Acacia</span> [@AcaciaTool; @DBLP:conf/cav/FiliotJR09; @DBLP:conf/atva/FiliotJR10]]{}, and [<span style="font-variant:small-caps;">Unbeast</span> [@UnbeastTool; @DBLP:conf/cav/Ehlers10]]{}. We use the terminology from the respective papers and refer readers unfamiliar with the terms used hereafter to those papers.
[<span style="font-variant:small-caps;">Anzu</span>]{} {#ref:AnzuScopeDescription}
------------------------------------------------------
#### Scope:
This tool implements the concept of generalised reactivity(1) [@DBLP:conf/vmcai/PitermanPS06] synthesis (abbreviated as GR(1) synthesis) in the Mealy-type semantics. Here, the specification is restricted to be of the form $(a_1 \wedge a_2 \wedge \ldots \wedge a_n) \rightarrow (g_1 \wedge g_2 \wedge \ldots \wedge g_m)$ for some sets of assumptions $\{a_1, \ldots, a_n\}$ and guarantees $\{g_1, \ldots, g_m\}$. Every assumption is of one of the following forms:
1. $\psi_I$
2. $\mathsf{G}(\psi \rightarrow \mathsf{X}(\psi_I))$
3. $\mathsf{GF}(\psi)$
where $\psi$ is an LTL-formula over ${\mathsf{AP}}_I \cup {\mathsf{AP}}_O$ free of temporal operators and $\psi_I$ is an LTL-formula over ${\mathsf{AP}}_I$ free of temporal operators. Likewise, all guarantees are of one of the following forms:
1. $\psi_O$
2. $\mathsf{G}(\psi \rightarrow \mathsf{X}(\psi_O))$
3. $\mathsf{GF}(\psi)$
where $\psi_O$ is an LTL-formula over ${\mathsf{AP}}_O$ free of temporal operators. [<span style="font-variant:small-caps;">Anzu</span>]{} requires the assumptions and guarantees to be in [Property Specification Language]{} (PSL) [@PSLBook] syntax and checks for [strong realisability]{} of the given overall specification, where, unlike in normal realisability checking, safety guarantee violations are not tolerated even in cases in which no safety assumption violation has been witnessed yet but the system has a strategy to ensure that the overall input and output will not satisfy all assumptions (see, e.g., [@Klein2010]). Specifications of the $(\bigwedge \text{assumptions}) \rightarrow (\bigwedge \text{guarantees})$ form, as used in [<span style="font-variant:small-caps;">Anzu</span>]{}, typically occur when a part of a larger system is to be synthesized: the assumptions represent the behaviour that this part can assume about the other parts of the system, whereas the guarantees describe the requirements on the behaviour of the part to be synthesized.
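As an illustration, the three admissible shapes of assumptions and guarantees can be told apart syntactically. The following sketch uses a simplified string notation of our own rather than <span style="font-variant:small-caps;">Anzu</span>'s actual PSL syntax:

```python
import re

# Classify a GR(1) assumption/guarantee (our own toy notation, not PSL)
# into the three admissible shapes: an initial condition psi, a
# transition formula G(psi -> X(psi')), or a liveness formula GF(psi).
def classify(formula):
    if re.fullmatch(r"G\s*F\s*\(.*\)", formula):
        return "liveness"        # GF(psi)
    if re.fullmatch(r"G\s*\(.*->\s*X\s*\(.*\)\)", formula):
        return "transition"      # G(psi -> X(psi'))
    return "initial"             # temporal-operator-free psi

assumptions = ["r", "G(r -> X(!r))", "GF(ready)"]
print([classify(a) for a in assumptions])
# -> ['initial', 'transition', 'liveness']
```

A real parser would of course have to reject formulas outside these shapes rather than defaulting to "initial"; this sketch only shows how the classification drives the encoding described next.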
#### Techniques:
[<span style="font-variant:small-caps;">Anzu</span>]{} implements generalised reactivity(1) synthesis [@DBLP:conf/vmcai/PitermanPS06] in a symbolic manner using binary decision diagrams (BDDs) [@DBLP:journals/tc/Bryant86; @Somenzi98cudd:cu] as reasoning backbone. Here, a synthesis game is built whose state space consists of all variable valuations to the input and output atomic propositions. The LTL assumptions and guarantees of the forms $\psi_I$ and $\psi_O$ are encoded into the set of initial positions of the game, whereas assumptions and guarantees of the form $\mathsf{G}(\psi \rightarrow \mathsf{X}(\psi_I))$ and $\mathsf{G}(\psi \rightarrow \mathsf{X}(\psi_O))$ are encoded into its transition relation (describing the possible moves of the players). Then, a symbolic algorithm is used in which the system player tries to satisfy the specification $(a'_1 \wedge a'_2 \wedge \ldots \wedge a'_{n'}) \rightarrow (g'_1 \wedge g'_2 \wedge \ldots \wedge g'_{m'})$ in this so-called [game arena]{} (consisting of the game positions and the transition relation), where $\{a'_1, \ldots, a'_{n'}\}$ are the assumptions of the form $\mathsf{GF}(\psi)$ and $\{g'_1, \ldots, g'_{m'}\}$ are the guarantees of the form $\mathsf{GF}(\psi)$. Implementations are extracted by building circuits for computing the outputs from the BDD representation of the winning state set while performing a care-set optimisation to the BDD after every step [@DBLP:conf/date/BloemGJPPW07; @DBLP:journals/entcs/BloemGJPPW07].
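The backward fixed-point computation underlying such symbolic game solving can be sketched as follows. This is a deliberately simplified, explicit-state reachability attractor over a toy game of our own; <span style="font-variant:small-caps;">Anzu</span> instead operates on BDDs over the variable valuations, and GR(1) solving uses nested fixed points rather than this plain attractor:

```python
# Explicit-state sketch of a backward fixed point: the set of positions
# from which the system player can force reaching `targets`.
def system_attractor(positions, moves, targets):
    """moves[p] maps each environment input to the set of positions the
    system player can choose to move to from p under that input."""
    win = set(targets)
    changed = True
    while changed:
        changed = False
        for p in positions:
            if p in win:
                continue
            # The system wins from p if, for every environment input,
            # some available successor is already known to be winning.
            if all(any(q in win for q in succs)
                   for succs in moves[p].values()):
                win.add(p)
                changed = True
    return win

positions = {"s0", "s1", "s2"}
moves = {
    "s0": {"i0": {"s1"}, "i1": {"s1", "s2"}},
    "s1": {"i0": {"s2"}, "i1": {"s2"}},
    "s2": {"i0": {"s2"}, "i1": {"s2"}},
}
print(sorted(system_attractor(positions, moves, {"s2"})))
# -> ['s0', 's1', 's2']
```

In the symbolic setting, the inner `all`/`any` quantifications become BDD operations (universal quantification over inputs, existential over outputs) on the transition relation.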
#### Experimental evaluation:
The usefulness of the tool [<span style="font-variant:small-caps;">Anzu</span>]{} has been shown on two case studies: an AMBA AHB arbiter [@AMBA1999] specification (simplified by leaving out bus splits and early burst terminations) and a generalised Buffer (GenBuf) controller that has been described by IBM for tutorial purposes [@IBMTutorial]. Both case studies contain several assumptions and guarantees and are scalable in the number of clients.
[<span style="font-variant:small-caps;">Lily</span>]{}
------------------------------------------------------
#### Scope:
The tool [<span style="font-variant:small-caps;">Lily</span>]{} accepts arbitrary LTL formulas in PSL syntax as specifications. If multiple LTL formulas are found in the input file, they are treated in a conjunctive manner, except for those that are preceded by an `assume` keyword, which are used as assumptions for the overall specification.
#### Techniques:
[<span style="font-variant:small-caps;">Lily</span>]{} implements the optimisations to the Safraless synthesis [@DBLP:conf/focs/KupfermanV05] approach presented in [@DBLP:conf/fmcad/JobstmannB06]. In the first step, the negation of the specification is converted to a node-labelled nondeterministic Büchi word (NBW) automaton using the LTL-to-Büchi translator [<span style="font-variant:small-caps;">Wring</span> [@DBLP:conf/cav/SomenziB00]]{}. The Büchi automaton is then converted to a universal co-Büchi tree automaton (UCT) that checks for the satisfaction of the original specification along all computation tree paths, which is in turn tested for emptiness using a construction proposed by Kupferman and Vardi [@DBLP:conf/focs/KupfermanV05], utilising alternating weak tree automata (AWT) and non-deterministic Büchi tree (NBT) automata. Jobstmann and Bloem [@DBLP:conf/fmcad/JobstmannB06] add various optimisations to these steps. The conversion of the UCT to the AWT is parametrised by some constant $k$, which influences the size of the NBT produced and thus the running time of the overall algorithm. While low values for $k$ typically suffice for realisable specifications in practice, a relatively large value of $k$, exponential in the size of the NBW (or, alternatively, doubly-exponential in the length of the LTL specification), is needed to have the emptiness of the language of the NBT imply the emptiness of the UCT language (except if the UCT turns out to be weak), and thus prove unrealisability of the given specification. To avoid this problem, [<span style="font-variant:small-caps;">Lily</span>]{} can also be run in a special unrealisability detection mode. In this case, it checks the realisability of the negated specification with swapped inputs and outputs (and a slight modification of the resulting specification to convert its Mealy-type semantics to Moore-type again). 
Then, for unrealisable specifications, only a small value of $k$ is typically needed in practice to identify them as such.
#### Experimental evaluation:
In [@DBLP:conf/fmcad/JobstmannB06], the performance of [<span style="font-variant:small-caps;">Lily</span>]{} is evaluated on some specifications written by the authors of that paper, representing mostly arbiter variations and traffic light controllers. The evaluation is focussed on proving that the optimisations proposed in that paper contribute significantly to having low running times of the tool.
[<span style="font-variant:small-caps;">Acacia</span>]{}
--------------------------------------------------------
#### Scope:
The tool [<span style="font-variant:small-caps;">Acacia</span>]{} has the same input syntax as [<span style="font-variant:small-caps;">Lily</span>]{} and also uses Moore-type semantics. There exist two versions of [<span style="font-variant:small-caps;">Acacia</span>]{}. [<span style="font-variant:small-caps;">Acacia</span>]{} 2009 implements the techniques described in [@DBLP:conf/cav/FiliotJR09], while [<span style="font-variant:small-caps;">Acacia</span>]{} 2010 also implements those of [@DBLP:conf/atva/FiliotJR10]. The latter version includes support for making assumptions local to some set of guarantees. The specification then consists of a conjunction of sub-specifications of the form $(\bigwedge \text{assumptions}) \rightarrow (\bigwedge \text{guarantees})$. All assumptions and guarantees of the conjuncts are assumed to be given separately.
#### Techniques:
[<span style="font-variant:small-caps;">Acacia</span>]{} is based on the concept of bounded synthesis [@DBLP:conf/atva/ScheweF07a; @DBLP:conf/cav/FiliotJR09], a refinement of the Safraless synthesis techniques proposed in [@DBLP:conf/focs/KupfermanV05]. Here, as in [<span style="font-variant:small-caps;">Lily</span>]{}, the specification is first negated and then converted to a node-labelled Büchi automaton. Like [<span style="font-variant:small-caps;">Lily</span>]{}, [<span style="font-variant:small-caps;">Acacia</span>]{} uses [<span style="font-variant:small-caps;">Wring</span> [@DBLP:conf/cav/SomenziB00]]{} for this purpose. Afterwards, the Büchi automaton is converted to a universal co-Büchi tree automaton (UCT) that checks for the satisfaction of the specification along all paths of a computation tree. This UCT is used as a basis for building a series of synthesis safety games, where for a successively increasing [bound value]{} $k$, for every state $q$ in the UCT, the maximum number of visits to rejecting states in the UCT from its initial state along some path to $q$ for the input/output played in the game so far is encoded into the game positions. Once one of these counters exceeds the value $k$, the game is lost for the system player. The main idea of [<span style="font-variant:small-caps;">Acacia</span>]{} is to use anti-chains as an efficient representation of the [frontier sets]{} (i.e., pre-fixed points of winning positions) occurring during the safety game solving process. This representation makes use of the fact that the set of possible future behaviours of the system player in a position $p_1$ can only be larger than in a game position $p_2$ if all counters in $p_1$ are less than or equal to those in $p_2$. Thus, by storing only states whose counter vectors are not [dominated]{} by the counter vectors of other states in the pre-fixed point during the solving process, redundancies can be avoided.
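The domination idea can be sketched as follows. This is a minimal encoding of our own (the representation of positions as plain counter tuples, the direction of the order, and the insertion routine are simplifications, not <span style="font-variant:small-caps;">Acacia</span>'s actual data structures): since winning with larger counter values implies winning with smaller ones, a set of winning counter vectors is fully described by its maximal elements.

```python
# Sketch of an anti-chain of counter vectors (one counter per UCT state).
def dominates(p, q):
    """p dominates q if q <= p component-wise: winning from the larger
    counter values p implies winning from the smaller values q."""
    return all(b <= a for a, b in zip(p, q))

def insert_into_antichain(antichain, vec):
    """Keep only maximal vectors; dominated vectors are redundant."""
    if any(dominates(p, vec) for p in antichain):
        return antichain                       # vec carries no new info
    return [p for p in antichain if not dominates(vec, p)] + [vec]

frontier = []
for v in [(2, 3), (1, 3), (2, 2), (3, 3), (1, 1)]:
    frontier = insert_into_antichain(frontier, v)
print(frontier)
# -> [(3, 3)]
```

In this toy run, $(3,3)$ dominates every other vector, so the anti-chain collapses to a single element; in general it retains all pairwise-incomparable maxima.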
As in [<span style="font-variant:small-caps;">Lily</span>]{}, the value of $k$ required to conclude the unrealisability of a specification is exponential in the size of the UCT or doubly-exponential in the length of the LTL specification. The authors thus propose to run [<span style="font-variant:small-caps;">Acacia</span>]{} two times in parallel, where in the first run, realisability is checked and in the second run, unrealisability is tested. As in [<span style="font-variant:small-caps;">Lily</span>]{}, in the latter case, the specification is negated, a conversion between Mealy-type and Moore-type semantics takes place, and the inputs and outputs are swapped.
[<span style="font-variant:small-caps;">Acacia</span>]{} 2010 adds some additional features. Here, the game solving process is made compositional. Recall that in [<span style="font-variant:small-caps;">Acacia</span>]{} 2010, the specification is supposed to consist of a conjunction of sub-specifications of the form $(\bigwedge \text{assumptions}) \rightarrow (\bigwedge \text{guarantees})$. In this setting, the safety games for the synthesis process can be built separately, preliminarily solved independently and finally composed on-the-fly during the solving process for the game representing the overall specification. [<span style="font-variant:small-caps;">Acacia</span>]{} 2010 furthermore adds the possibility to use the <span style="font-variant:small-caps;">OTFUR</span> mixed forward-backward game solving algorithm [@DBLP:conf/icalp/LiuS98; @DBLP:conf/concur/CassezDFLL05] instead of the classical backward safety game solving algorithm. Finally, for cases in which the specification is only a single formula of the form $(\bigwedge \text{assumptions}) \rightarrow (\bigwedge \text{guarantees})$, [<span style="font-variant:small-caps;">Acacia</span>]{} can rewrite this specification into the form $\bigwedge_{g \in \text{guarantees}}\left((\bigwedge \text{assumptions}) \rightarrow g \right)$ in order to benefit from the compositional algorithms implemented. In this case, an assumption dropping heuristic is used to remove some assumption copies in this formula, which reduces the problem that the assumptions are replicated for all guarantees in this setting. However, using the heuristic makes the approach incomplete.
#### Experimental evaluation:
In [@DBLP:conf/cav/FiliotJR09], the focus of the experimental evaluation lies on proving that [<span style="font-variant:small-caps;">Acacia</span>]{} improves upon the performance of [<span style="font-variant:small-caps;">Lily</span>]{}, using the fact that the semantics are compatible. The authors of [@DBLP:conf/cav/FiliotJR09] show that on the examples from [@DBLP:conf/fmcad/JobstmannB06], using the anti-chains approach typically results in lower computation times and that the Büchi automaton building time surprisingly dominates the overall synthesis time. One of the example specifications is made scalable and it is shown that the anti-chains approach is much faster here. Another set of variations of one of the [<span style="font-variant:small-caps;">Lily</span>]{} examples is used as a further benchmark set.
In [@DBLP:conf/atva/FiliotJR10], the 2010 version of [<span style="font-variant:small-caps;">Acacia</span>]{} is evaluated with several different choices for (1) whether game solving should be performed backwards or in a forward-backward manner, (2) whether monolithic or compositional synthesis should be performed, and (3) whether the assumption dropping heuristic should be used (only in the compositional case). Apart from the benchmarks also used in [@DBLP:conf/cav/FiliotJR09], the generalised Buffer (GenBuf) controller [@IBMTutorial] specification that was also used for benchmarking [<span style="font-variant:small-caps;">Anzu</span>]{} has been formulated in a way such that the assumptions to the environment are local to some sets of guarantees, such that compositional synthesis can be performed directly. This benchmark is used to show the benefits of the compositional approach.
[<span style="font-variant:small-caps;">Unbeast</span>]{}
---------------------------------------------------------
#### Scope:
The tool [<span style="font-variant:small-caps;">Unbeast</span> [@UnbeastTool; @DBLP:conf/cav/Ehlers10]]{} focusses on specifications of the form $(\bigwedge \text{assumptions}) \rightarrow (\bigwedge \text{guarantees})$ and uses Mealy-type semantics. By using an input language based on XML, incorrect presumptions by the user about precedences of temporal operators in LTL are avoided. The assumptions and guarantees are given separately in the XML input file.
#### Techniques:
The [<span style="font-variant:small-caps;">Unbeast</span>]{} tool implements the synthesis techniques presented in [@DBLP:conf/atva/ScheweF07a; @DBLP:conf/cav/Ehlers10]. The library [<span style="font-variant:small-caps;">CuDD</span>]{} [@Somenzi98cudd:cu] is used for constructing and manipulating BDDs during the synthesis process.
The first step is to determine which of the given assumptions and guarantees are safety formulas. In order to also detect simple cases of [pathological safety]{} [@DBLP:journals/fmsd/KupfermanV01], this is done by computing an equivalent Büchi automaton using an external LTL-to-Büchi converter such as [<span style="font-variant:small-caps;">ltl2ba</span> [@DBLP:conf/cav/GastinO01]]{} or [<span style="font-variant:small-caps;">spot</span>’s <span style="font-variant:small-caps;">ltl2tgba</span> [@DBLP:conf/mascots/Duret-LutzP04]]{}, and examining whether all maximal strongly connected components in the computed automaton do not have infinite non-accepting paths. Special care is taken of so-called [bounded look-ahead safety formulas]{}.
In a second step, for the set of bounded look-ahead assumptions and the set of such guarantees, safety automata for their respective conjunctions are built. Both of them are represented in a symbolic way using BDDs. For the remaining safety assumptions and guarantees, safety automata are built by taking the Büchi automata computed in the previous step and applying a subset construction for determinisation in a symbolic manner. For the remaining non-safety parts of the specification, a combined universal co-Büchi automaton is computed by calling the external LTL-to-Büchi tool again.
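The subset construction mentioned above can be sketched explicitly as follows ([<span style="font-variant:small-caps;">Unbeast</span>]{} performs it symbolically with BDDs; the automaton and all names here are purely illustrative, of our own making):

```python
# Explicit-state sketch of the subset construction: determinise a
# non-deterministic safety automaton given by delta.
def subset_construction(initial, delta, alphabet):
    """delta maps (state, letter) to a set of successor states."""
    start = frozenset([initial])
    states, todo, trans = {start}, [start], {}
    while todo:
        s = todo.pop()
        for a in alphabet:
            succ = frozenset(q2 for q in s
                             for q2 in delta.get((q, a), ()))
            trans[(s, a)] = succ
            if succ not in states:
                states.add(succ)
                todo.append(succ)
    return states, trans

# Toy non-deterministic safety automaton; an empty successor set would
# act as the rejecting sink of the resulting deterministic automaton.
delta = {
    ("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"},
    ("q1", "b"): {"q1"},       # no "a"-successor for q1
}
states, trans = subset_construction("q0", delta, ["a", "b"])
print(len(states))
# -> 2
```

Representing the reached subsets symbolically (one BDD variable per automaton state) avoids enumerating them one by one, which is what makes the symbolic variant scale.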
In the next phase, the given specification is checked for realisability. This is done almost as in [<span style="font-variant:small-caps;">Acacia</span>]{} 2009, i.e., for a successively increasing so-called *bound value*, the bounded synthesis approach [@DBLP:conf/atva/ScheweF07a; @DBLP:conf/cav/FiliotJR09] is performed by building a safety automaton from the co-Büchi automaton for the non-safety part of the specification and solving the safety games induced by a special product of the automata involved [@DBLP:conf/cav/Ehlers10]. However, instead of anti-chains, BDDs are used.
Finally, if the specification is found to be realisable (i.e., the game computed in the previous phase is winning for the player representing the system to be synthesised), the symbolic representation of the winning states of the system is used to compute a prototype implementation satisfying the specification in a fully symbolic way, using a slight simplification of the algorithm from [@DBLP:conf/cav/KukulaS00]. However, the implementations generated are typically relatively large. As with [<span style="font-variant:small-caps;">Acacia</span>]{}, [<span style="font-variant:small-caps;">Unbeast</span>]{} needs to be run twice in parallel in order to also detect unrealisable specifications.
#### Experimental evaluation:
[<span style="font-variant:small-caps;">Unbeast</span>]{} was evaluated on the specifications defined in [@DBLP:conf/fmcad/JobstmannB06] as well as on those given in [@DBLP:conf/cav/FiliotJR09]. The Moore-type semantics from these examples have been adapted to the Mealy-type semantics of [<span style="font-variant:small-caps;">Unbeast</span>]{} by prefixing all occurrences of input atomic propositions with an LTL next-time operator (see, e.g., [@DBLP:conf/fmcad/JobstmannB06]).
Additionally, a scalable load balancing case study is presented in [@DBLP:conf/cav/Ehlers10], having a Mealy-type semantics. For comparison and usage with [<span style="font-variant:small-caps;">Acacia</span>]{} and [<span style="font-variant:small-caps;">Lily</span>]{}, the examples have been transformed to Moore-type semantics by prefixing all occurrences of output atomic propositions with an LTL next-time operator.
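The semantic adaptations used in both directions can be sketched as a simple formula rewriting: prefixing every occurrence of an input proposition with a next-time operator converts Moore-type to Mealy-type semantics, and prefixing the outputs converts in the other direction. The prefix-style tuple representation of LTL formulas below is an assumption made purely for illustration:

```python
def shift_propositions(formula, props):
    """Rewrite an LTL formula, given as a nested tuple such as
    ('G', ('->', 'r', 'g')), so that every atomic proposition in
    `props` is wrapped in a next-time operator ('X', p)."""
    if isinstance(formula, str):                 # atomic proposition
        return ('X', formula) if formula in props else formula
    op, *args = formula
    return (op, *(shift_propositions(a, props) for a in args))
```

For example, with input set `{'r'}`, the Moore-type guarantee $\mathsf{G}(r \rightarrow g)$ becomes $\mathsf{G}(\mathsf{X} r \rightarrow g)$ for a Mealy-type tool.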
Observations on the differences and similarities of the synthesis tools {#sec:observations}
=======================================================================
We continue with a discussion of the similarities and differences between the synthesis tools. The arguments to follow form the foundation of the experimental evaluation frameworks proposed in Section \[sec:evaluationFramework\].
The incomparable scopes & semantics of the tools
------------------------------------------------
When comparing the tools considered in this paper, it is striking that all four tools have incompatible specification languages. Only [<span style="font-variant:small-caps;">Acacia</span>]{} 2009 and [<span style="font-variant:small-caps;">Lily</span>]{} have the same input specification format (however, [<span style="font-variant:small-caps;">Acacia</span>]{} 2010 adds local assumptions to the input language which cannot be interpreted correctly by [<span style="font-variant:small-caps;">Lily</span>]{}).
It is fair to raise the question why this is the case, given that the number of benchmark sets for synthesis is rather low, so one would expect compatible scopes and semantics in order to have as many benchmarks available as possible. In this paper, we conjecture that the reason for this situation is that *the scopes of the tools are strongly adapted to the techniques implemented* and, whenever the choice does not matter, kept as close to the literature as possible.
We begin our discussion of this observation with the tool [<span style="font-variant:small-caps;">Anzu</span>]{}. The generalised reactivity(1) synthesis technique is implementable in both Mealy and Moore semantics. The tool [<span style="font-variant:small-caps;">Anzu</span>]{} uses Mealy semantics, as does the description of the synthesis algorithm in [@DBLP:conf/vmcai/PitermanPS06]. The assumptions and guarantees allowed are precisely those that can be processed by the algorithm without giving up the idea that the state space of the underlying game is the set of input and output variable valuations. This can be seen from the fact that, of the formula types given in Section \[ref:AnzuScopeDescription\], type (1) is only an initial state condition, and type (2) can be encoded into the transition relation of a game with such a structure. Formulas of type (3) are precisely those that can then be given to the actual solving process as liveness parameters.
In contrast to [<span style="font-variant:small-caps;">Anzu</span>]{}, [<span style="font-variant:small-caps;">Lily</span>]{} uses Moore-type semantics and a PSL-like input file syntax. Since [<span style="font-variant:small-caps;">Lily</span>]{} is based on tree-automaton techniques, this is not surprising: using a Mealy-type semantics in the context of tree automata would require that the labelling of the initial node of a computation tree (which is either accepted or rejected by the tree automaton) be ignored, as the node labels represent the output of the system. On a theoretical level, such a definition would look unnecessarily awkward, which is why Moore-type semantics are usually preferred in this context.
Like [<span style="font-variant:small-caps;">Lily</span>]{}, [<span style="font-variant:small-caps;">Acacia</span>]{} uses Moore-type semantics, and it shares [<span style="font-variant:small-caps;">Lily</span>]{}’s syntax. According to [@DBLP:conf/cav/FiliotJR09], the authors wanted to keep [<span style="font-variant:small-caps;">Acacia</span>]{} 2009 comparable to [<span style="font-variant:small-caps;">Lily</span>]{}. An additional reason for keeping the Moore-type semantics is the better applicability of the results: specifications that are found to be realisable in the Moore-type semantics are also realisable in the Mealy-type semantics, but not vice versa. Only in [<span style="font-variant:small-caps;">Acacia</span>]{} 2010 is the input language extended in order to accommodate the new features proposed in [@DBLP:conf/atva/FiliotJR10].
[<span style="font-variant:small-caps;">Unbeast</span>]{}, on the other hand, uses a Mealy-type semantics and has its own XML-based input file format. In [@DBLP:conf/cav/Ehlers10], it has been argued that specifications often become shorter and thus the (edge-labelled) Büchi automata become smaller in the Mealy setting, which is beneficial for a BDD-based approach. For example, when specifying some immediate output consequences of some input such as $\mathsf{G}(r \rightarrow g)$ for some input set $\{r\}$ and output set $\{g\}$, taking the Moore semantics would require the introduction of a next-time operator into the formula, which would be reflected in the automaton size. The XML-based input language was chosen both to avoid the need for a complicated formula parser and to make the operator precedences explicit.
Comparability of the examples {#sec:ComparabilityOfExamples}
-----------------------------
The arguments from the preceding subsection explain why the scopes and semantics of the tools are different. Nevertheless, they do not explain why different specification sets have been used, as benchmarks for [<span style="font-variant:small-caps;">Anzu</span>]{} could be converted to benchmarks for the other tools and the conversion between Mealy- and Moore-type semantics is rather simple. Still, the only case in which benchmarks were converted for an experimental evaluation was in [@DBLP:conf/cav/Ehlers10], where [<span style="font-variant:small-caps;">Lily</span>]{}’s and [<span style="font-variant:small-caps;">Acacia</span>]{}’s examples were used for evaluating [<span style="font-variant:small-caps;">Unbeast</span>]{} (and vice versa). In some cases, benchmarks have been rewritten, e.g., the IBM generalised buffer specification, which was used to evaluate [<span style="font-variant:small-caps;">Anzu</span>]{}, has been altered in [@DBLP:conf/atva/FiliotJR10] to a form in which the assumptions were made local. Also, there are two other publications [@DBLP:conf/fmcad/SohailS09; @Morg10] reporting on experimental results for synthesis approaches using generalized parity games. In both of them, the feasibility of the approaches is shown using different reformulations of the AMBA arbiter example that was also used for [<span style="font-variant:small-caps;">Anzu</span>]{}.
An explanation for this fact was given in [@DBLP:conf/vmcai/SohailSR08; @DBLP:conf/fmcad/SohailS09; @DBLP:conf/cav/BloemCGHJ10]. In fact, the GR(1) synthesis approach implemented in [<span style="font-variant:small-caps;">Anzu</span>]{} can accommodate all types of assumptions and guarantees that are representable as deterministic Büchi automata (DBA) [@DBLP:conf/cav/BloemCGHJ10; @DBLP:conf/sat/Ehlers10]. In order to fit into the input language of [<span style="font-variant:small-caps;">Anzu</span>]{}, however, the output bit set of the system to be synthesized has to be extended by state bits of the automaton. Somenzi and Sohail coined the term “pre-synthesis” for such an encoding, as converting an assumption or guarantee to such an automaton and encoding it into some output bits in a good way is a problem on its own for BDD-based techniques (see, e.g., [@DBLP:journals/amcs/GostiVSS07; @DBLP:conf/aspdac/ForthM00]). Thus, a lot of effort has been put into a good reformulation of the problem description before checking realisability. An equivalent approach to using DBAs is to introduce so-called [auxiliary signals]{} (or auxiliary variables) into the design [@DBLP:journals/corr/abs-1001-2811; @DBLP:journals/entcs/BloemGJPPW07; @DBLP:conf/date/BloemGJPPW07]. It has been noted that rewriting a specification using different signals can significantly speed up the synthesis process [@DBLP:journals/corr/abs-1001-2811; @DBLP:journals/entcs/BloemGJPPW07] and for the AMBA AHB specification, this has also been done. 
As a consequence, it is not surprising that neither the authors of [<span style="font-variant:small-caps;">Acacia</span>]{} or [<span style="font-variant:small-caps;">Unbeast</span>]{} nor the authors of the works using generalized parity automata [@DBLP:conf/fmcad/SohailS09; @Morg10] used the AMBA AHB arbiter specifications for comparisons in the form provided with the [<span style="font-variant:small-caps;">Anzu</span>]{} tool, for which pre-synthesis was performed and which were optimized towards [<span style="font-variant:small-caps;">Anzu</span>]{}.
With respect to the fact that the IBM Generalised Buffer example has been altered for usage with [<span style="font-variant:small-caps;">Acacia</span>]{} 2010, the situation is similar: in the original specification, the assumptions were not localised; defining the scope of the assumptions was simply not an issue in this case. As soon as techniques are introduced that can make use of such local assumptions, the situation changes.
Complexity of the workflows
---------------------------
Except for [<span style="font-variant:small-caps;">Anzu</span>]{}, the workflows, i.e., the numbers and orders of computation steps in the realisability checking process, of the tools discussed here are rather complicated. [<span style="font-variant:small-caps;">Lily</span>]{} and [<span style="font-variant:small-caps;">Acacia</span>]{} 2009 first convert the specification to a universal Büchi automaton and then perform, for some successively increasing bound value, a realisability check over this automaton. In order to also detect unrealisable specifications, the check must additionally be run in parallel for the negated specification, with a conversion between the two semantics types. The workflow of [<span style="font-variant:small-caps;">Unbeast</span>]{} is similar. In contrast to many other experimental evaluations in formal methods, this whole process is relatively complicated and might easily appear less compelling than the simpler schemes that are used in, for example, SAT solvers.
It is fair to conjecture that future workflows will be even more complicated. Take, for example, [<span style="font-variant:small-caps;">Anzu</span>]{}, which has a relatively straightforward workflow. As it has been argued that the generalised reactivity(1) synthesis approach used in this tool can handle all assumptions and guarantees that are representable by deterministic Büchi automata, developing a preprocessor that takes LTL specifications of this kind and produces equivalent [<span style="font-variant:small-caps;">Anzu</span>]{} specifications appears worthwhile. However, such a preprocessor would have a very complicated workflow. After converting the assumptions and guarantees to Büchi automata, these have to be determinised (whenever possible), using an external tool like <span style="font-variant:small-caps;">ltl2dstar</span> [@DBLP:journals/tcs/KleinB06]. Afterwards, it is possibly wise to try some exhaustive minimisation method for these automata [@DBLP:conf/sat/Ehlers10]. Then, the automata also have to be symbolically encoded [@DBLP:journals/amcs/GostiVSS07; @DBLP:conf/aspdac/ForthM00]. Furthermore, the time spent on optimising the automata has to be balanced against the overall computation time in order to avoid running out of time in the automaton optimisation step.[^2] All in all, these aspects make the whole synthesis process quite complicated and, arguably, less compelling than other approaches, which ultimately reduces the publishability of any result on such a workflow, which in turn leads to little incentive to perform research or write tools in this area.
Providing a framework for future evaluations {#sec:evaluationFramework}
============================================
The preceding sections discussed the difficulties of composing meaningful experimental evaluations of synthesis tools. Nevertheless, as benchmarking is often considered to be the only way to distinguish promising ideas from the ones that are likely not to be useful (see, e.g., [@DBLP:journals/computer/Tichy98]), in this section, we propose *three evaluation schemes* for each of the two problems of *using appropriate benchmarks* and *dealing with complex workflows*, which respect the difficulties discussed earlier. The schemes are ordered from the minimum requirement for showing that a new technique is worth considering to the “superior” scheme that demonstrates clear advantages over previous techniques.
Benchmarking
------------
### Comparison using the home field advantage
It is fair to say that a new approach should beat older approaches at least in the cases in which it has a natural advantage. This is typically shown by taking some example specification that falls into the class of systems the new approach is intended to be applied to, applying a prototype implementation of the approach to it, and showing that previous tools perform worse using an automatic, unoptimised conversion to the semantics/scopes of the previous tools. This means, in particular, converting between Mealy- and Moore-type semantics if applicable. Competitor tools that can only handle a subset of the language of the new prototype tool need not be considered.
### Comparison from a neutral view-point
One problem of benchmarking tools with different scopes and semantics against the same examples is that *specifications are typically geared towards the usage with a certain tool*. A typical example is the pre-synthesis process discussed in Section \[sec:ComparabilityOfExamples\] that ensures that the specification of a system falls into the class handled by the GR(1) synthesis tools. After this has been done, the specification is not only suitable but also optimised for such a tool. As many signalling bits are introduced in the process, tools like [<span style="font-variant:small-caps;">Lily</span>]{} and [<span style="font-variant:small-caps;">Acacia</span>]{} 2009 that represent input and output bit valuations explicitly have problems handling such pre-synthesized specifications even in cases in which they can deal with the non-pre-synthesized versions.
A similar situation arises for example when localising the assumptions (as discussed in Section \[sec:ComparabilityOfExamples\]): doing so is beneficial for [<span style="font-variant:small-caps;">Acacia</span>]{} 2010, but renders the optimisations of [<span style="font-variant:small-caps;">Unbeast</span>]{} unusable as the input is then no longer in the $(\bigwedge \text{assumptions}) \rightarrow (\bigwedge \text{guarantees})$ form.
As a solution to this problem, we propose the following scheme: given a setting, the specification is written for all tools to be compared individually, taking care of their specialities. If the prototype tool of a new approach performs better in such a situation than previous tools, it is clear that the techniques proposed have their merits if used correctly when modelling a specification. It should be noted, however, that this scheme favours tools that require some form of pre-synthesis: by rewriting the specification for the simpler tool in a smart way, its performance can often greatly be increased. As an example, we refer to the work on rewriting the AMBA AHB bus arbiter specification [@DBLP:journals/corr/abs-1001-2811].
### Beating the other tools where they have a natural advantage
As a third scheme, we propose that if a prototype implementation of some approach can beat other tools on benchmark suites on which they have a natural advantage, this should suffice to show the merits of a new approach without doubt. In order to do so, one would typically use an automatic converter between the scope and semantics (if necessary) of the other tool and the scope and semantics of the new prototype tool to import benchmarks originally written for the other tool. The converter must not apply sophisticated optimisations on the specification. As an example, converting the LTL formula $\mathsf{G}\mathsf{F} \mathsf{X} p$ to $\mathsf{G}\mathsf{F} p$ for some atomic proposition $p$ during the adaptation of the Mealy/Moore-type semantics should be considered fair, whereas rewriting a guarantee into a simpler one that is only equivalent if the given assumptions also hold probably goes beyond what this scheme permits.
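The fair simplification $\mathsf{G}\mathsf{F}\mathsf{X} p \equiv \mathsf{G}\mathsf{F} p$ from the example can be sketched as a rewrite rule on LTL formulas encoded as nested tuples (an encoding chosen purely for illustration):

```python
def drop_next_under_gf(formula):
    """Remove next-time operators directly below a G F prefix, using
    the equivalence G F X phi = G F phi. Formulas are nested tuples
    such as ('G', ('F', ('X', 'p')))."""
    if isinstance(formula, str):                 # atomic proposition
        return formula
    op, *args = formula
    if op == 'G' and args and isinstance(args[0], tuple) \
            and args[0][0] == 'F':
        inner = args[0][1]
        while isinstance(inner, tuple) and inner[0] == 'X':
            inner = inner[1]                     # strip the X prefix
        return ('G', ('F', drop_next_under_gf(inner)))
    return (op, *(drop_next_under_gf(a) for a in args))
```

Note that the rule is only applied where the equivalence holds unconditionally; rewrites whose correctness depends on the assumptions would fall outside this scheme.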
Complex workflows
-----------------
### Basic scheme
In order to combat the problem of having workflows that involve multiple steps that can be skipped without obstructing the subsequent steps (like, for example, automaton optimisations), we propose the following scheme: for successively increasing timeout values (using a reasonable granularity), the synthesis approach is performed using the given timeout value for *all* individual sub-steps involved until the respective tool execution yields an answer. If the least timeout value that leads to a result for the new technique is lower than the least such timeout value for previous approaches, it is shown that the new approach has some merits.
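The basic scheme can be expressed as a small driver. The sketch below assumes a callback that runs the entire workflow with a given per-step timeout and reports whether an answer was produced; how the timeout is enforced within each sub-step is tool-specific:

```python
def least_sufficient_timeout(run_with_step_timeout,
                             timeouts=(1, 2, 5, 10, 30, 60, 120)):
    """`run_with_step_timeout(t)` runs the whole workflow with the
    timeout `t` (in seconds) applied to *every* sub-step and returns
    True iff the run yields an answer. Returns the least sufficient
    timeout from the (reasonably granular) candidate list, or None
    if all values fail."""
    for t in timeouts:
        if run_with_step_timeout(t):
            return t
    return None
```

The least sufficient timeout of the new tool is then compared against those of the previous tools on the same benchmark set.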
### Advanced scheme
As an extension to the basic scheme, it is worthwhile to show that the timeout value obtained in the basic scheme does not make the old approaches look bad unnecessarily. Let $A$ be the least timeout value (for every step of the workflow) tried such that the prototype tool of the new approach terminates with an answer. Let $B$ be the overall running time of the process. If it can be shown that the other approaches do not even terminate with a timeout of $B$ for each step, additional justification for the new approach is obtained. Of course, many intermediate variations between the basic and advanced schemes are possible.
### Simple scheme
Probably the most convincing way to solve the problem of having complex workflows is to set static timeouts for the individual steps of the workflow and to just measure the overall running time. If it is better than those of other tools, this clearly shows the efficiency of the new approach. Obviously, this scheme is hard to follow when comparing against another approach that itself has many steps, which introduce a need for individual timeouts if the author of the other tool has not provided good values for these. Additionally, special care must be taken not to “overfit” [@DBLP:journals/heuristics/Falkenauer98] the timeouts for the individual steps – tuning these values for the new prototype tool against a benchmark set and then evaluating on the same set against the other tools in a publication is not fair and can be considered scientifically unsound.
Conclusion
==========
In this paper, we discussed the problems of experimentally evaluating a synthesis tool. We discussed three major issues: the incomparable semantics and scopes of the tools, the poor comparability of the tools with respect to the available benchmarks, and the complexity of the workflows. Three evaluation schemes to combat the first two of these problems and three schemes to account for the complex workflows have been presented.
While the workflow evaluation schemes cannot fully remove the problem that experimental evaluations using them are often not fully compelling to the reader of a scientific publication, they introduce means of comparing tools whose workflows contain parts that may time out without prohibiting later steps (like, e.g., automaton optimisation). We must admit that currently, there is no synthesis tool that performs such steps. However, this is an interesting fact on its own: despite the immense set of techniques proposed in the literature for reducing the sizes and numbers of automata representing the overall specification, operations such as determining whether a guarantee is actually necessary in a specification, or more complicated automaton minimisation techniques, are not used in current tools yet, even though the theory behind these operations has been established [@DBLP:conf/icalp/GreimelBJV08; @DBLP:conf/sat/Ehlers10; @DBLP:conf/spin/EhlersF10; @DBLP:conf/concur/ClementeM10]. Thus, we hope that the three workflow evaluation schemes proposed help to level the way to further advances in this area.
As a final note, we would like to defend the argumentation in this paper against the point of view that establishing a common file format with a clearly defined semantics and scope, and requiring all future tools to use it as a basis, is a way to fight the benchmarking problem. We have shown in Section \[sec:observations\] that the choice of techniques affects the choice of the semantics of a tool. Thus, picking one particular scope and semantics would drive the evolution of the tools, and thus also the theory, into a certain direction while ruling out techniques not suitable for the scope and semantics agreed upon. Moreover, even leaving this consideration aside, as the form of a “typical” specification in practice (conjunction of guarantees [@DBLP:conf/atva/FiliotJR10; @DBLP:conf/cav/KupfermanPV06; @Morg10] vs. $(\bigwedge \text{assumptions}) \rightarrow (\bigwedge \text{guarantees})$ form [@DBLP:conf/date/BloemGJPPW07; @DBLP:conf/cav/Ehlers10; @DBLP:journals/entcs/BloemGJPPW07; @DBLP:conf/cav/BloemCGHJ10; @DBLP:journals/corr/abs-1001-2811]) is not agreed upon, it is fair to argue that consensus will not be reached within the next few years. Also, an implicit or explicit requirement that tools with a complex workflow should always be evaluated in a way similar to the simple scheme proposed here is highly problematic: due to the low number of meaningful benchmarks, fixing good timeout values for the intermediate steps without overfitting to the concrete set of benchmarks in the evaluation is hardly possible. As a result, such a requirement would basically rule out complex optimisations a priori (or require an author to cheat by overfitting), which is highly questionable for practical approaches to a problem that is, after all, still 2EXPTIME-complete.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by the German Research Foundation (DFG) as part of the Transregional Collaborative Research Center “Automatic Verification and Analysis of Complex Systems” (SFB/TR 14 AVACS). The author wants to thank Barbara Jobstmann and Emmanuel Filiot for helpful comments on the descriptions of the techniques employed in [<span style="font-variant:small-caps;">Anzu</span>]{}, [<span style="font-variant:small-caps;">Lily</span>]{} and [<span style="font-variant:small-caps;">Acacia</span>]{}.
[^1]: For the scope of this paper, we exclude all tools that aim at pure game solving, as here, the synthesis functionalities and the possibility to start from linear-time temporal logic is missing (which is typically not merely a preprocessing step). There is another tool called [<span style="font-variant:small-caps;">Ratsy</span> [@DBLP:conf/cav/BloemCGHKRSS10]]{}, which we excluded as its synthesis functionality is mainly a reinterpretation of [<span style="font-variant:small-caps;">Anzu</span>]{} (plus some preliminary implementation of the bounded synthesis approach [@DBLP:conf/atva/ScheweF07a; @DBLP:conf/cav/FiliotJR09]) and the tool aims at providing an environment for specification engineering rather than being only a synthesis tool. Consequently, no experimental evaluation of the synthesis performance has been given in [@DBLP:conf/cav/BloemCGHKRSS10]. The <span style="font-variant:small-caps;">Jtlv</span> [@DBLP:conf/cav/PnueliSZ10] scripting environment that also has synthesis procedures has been excluded as no stand-alone synthesis tool exists and benchmarks comparisons are not available.
[^2]: In benchmark comparisons, it is customary to restrict the running times of the tools. Such a time restriction is the typical answer to the problem that in most experimental evaluations, there are some benchmark/tool combinations that do not yield a result even after days or weeks of computation time.
---
author:
- 'P. Altmann'
- 'M. Kohda'
- 'C. Reichl'
- 'W. Wegscheider'
- 'G. Salis'
title: Transition of a 2D spin mode to a helical state by lateral confinement
---
[**Spin-orbit interaction (SOI) leads to spin precession about a momentum-dependent spin-orbit field. In a diffusive two-dimensional (2D) electron gas, the spin orientation at a given spatial position depends on which trajectory the electron travels to that position. In the transition to a 1D system with increasing lateral confinement, the spin orientation becomes more and more independent of the trajectory. It is predicted that a long-lived helical spin mode emerges [@Malshukov2000; @Kiselev2000]. Here we visualize this transition experimentally in a GaAs quantum-well structure with isotropic SOI. Spatially resolved measurements show the formation of a helical mode already for non-quantized and non-ballistic channels. We find a spin-lifetime enhancement that is in excellent agreement with theoretical predictions. Lateral confinement of a 2D electron gas provides an easy-to-implement technique for achieving high spin lifetimes in the presence of strong SOI for a wide range of material systems.** ]{}
In a diffusive electron system with intrinsic SOI (e.g., of Rashba or Dresselhaus type), the effective spin-orbit field changes after each scattering event. This leads to a randomization of spin polarization that is described by the Dyakonov-Perel (DP) spin-dephasing mechanism [@Dyakonov1972], in the case of an initially homogeneous spin excitation. Given a local spin excitation, a spin mode emerges that is described by the Green’s function of the spin diffusion equation [@Froltsov2001; @Stanescu_2007; @Liu2012]. For a 2D system in the weak SOI limit, analytical solutions exist for a few special situations, such as for the persistent spin helix case with equal Rashba and Dresselhaus SOI [@Schliemann2003; @Bernevig2006; @Koralek2009; @Kohda2012; @Walser2012]. In the isotropic limit (either only Rashba or only linear Dresselhaus SOI), the spin mode is described by a Bessel-type oscillation in space (see Fig. \[fig0\]b) [@Froltsov2001]. The spin lifetime of such a mode is only slightly enhanced [@Froltsov2001] compared with the DP time because rotations about varying precession axes (see Figs. \[fig0\]c-e) do not commute and therefore the spin polarization at a given position depends on the trajectory on which the electron reaches that position. If the electron motion is laterally confined by a channel structure of width $w$, the spin motion is restricted to a ring on the Bloch sphere (see Figs. \[fig0\]g and \[fig0\]h). In this situation, the spins collectively precess along the channel direction (Fig. \[fig0\]f) [@Malshukov2000; @Kiselev2000; @Kettemann2007]. This extends even into the 2D diffusive regime as long as the cumulative spin rotations attributed to the lateral motion are small, i.e., as long as $w q^0 < 1$, where $q^0$ is the lateral wave number of the 2D spin mode. As a consequence, for a 2D diffusive system, increasing lateral confinement is predicted to result in an enhanced spin lifetime proportional to $(q^0 w)^2$ [@Malshukov2000; @Kiselev2000]. 
This effect could be highly relevant for spintronics applications because it circumvents the conventional trade-off between a long spin lifetime and strong SOI. It has been experimentally explored in different ways, including measurements of weak-antilocalization [@Schaepers2006; @Kunihashi2009], the inverse spin-Hall effect [@Wunderlich2010], and time-resolved Kerr rotation [@Holleitner2006]. None of these works were able to resolve the spin dynamics both spatially and temporally, and a quantitative investigation of the spin mode in the confined channel is still lacking.
We experimentally explore the dynamics and spatial evolution of electron spins in a 2D electron gas hosted in a symmetrically confined, 12-nm-wide GaAs/AlGaAs quantum well where the linear Dresselhaus SOI is much larger than the Rashba or the cubic Dresselhaus SOI, thus providing an almost isotropic SOI. To study the transition from 2D to 1D, we have lithographically defined wire structures along the \[1$\bar{1}$0\] ($x$) and \[110\] ($y$) directions with the channel width $w$ ranging from 0.7 to 79 $\mu$m.


Figure \[fig0\]a shows a sketch explaining the measurement principle. Spins polarized along the out-of-plane direction, $z$, are locally excited at time $t=0$ by a focused, circularly polarized pump laser pulse, which has a Gaussian intensity profile of a sigma-width of 1.1 $\mu$m. A second, linearly polarized probe pulse measures the out-of-plane component, $S_z$, of the local spin density using the magneto-optical Kerr effect. The spatial evolution of the spin packet is mapped out along the channel direction for various time delays, $t$, between pump and probe pulse. All measurements have been performed at a sample temperature of 20K.
A measurement of spatially resolved spin dynamics in a channel in the 2D limit ($w = 19$$\mu$m) is shown in Fig. \[fig1\]a. Spins are excited at $t=0$ and at $x=y=0$ and traced as a function of $y$ and $t$. At $y = 0$, $S_z$ simply decays in time. It reverses its sign after $t > 400~$ps for electrons that diffused along $y$ by more than $\approx 4 ~\mu$m, seen as a faint blue color in the Figure. The situation is different in the $0.7$-$\mu$m-wide channel (Fig. \[fig1\]b). Here, spin decay is strongly suppressed and $S_z$ reverses its sign multiple times along $y$ at later times. Note that the pattern is overlaid with the spin texture that survived from the previous pump pulse at $t = -12.6$ ns. Figure \[fig1\]c shows measured data of $S_z(y)$ for the 19-$\mu$m and the 0.7-$\mu$m-wide channels taken at $t = 1.5$ns. The comparison of the two curves clearly shows an enhanced $S_z$ and strong oscillations along $y$ in the narrow channel. This indicates a helical spin mode in the 1D case. The helical nature is further supported by measured maps where an external magnetic field is applied along the $x$ direction, rotating the helix as a function of time, see supplementary information.
For a deeper analysis, it is advantageous to Fourier-transform $S_z (x, y, t)$ into momentum space. Thereby one obtains Fourier components $S_z (q_x, q_y, t)$ at wave numbers $q_x$ and $q_y$ that according to theory decay biexponentially in time [@Stanescu_2007]. For channels narrower than 15$\mu$m, the spin modes exhibit a pronounced structure only along the channel direction, and we therefore analyze the 1D Fourier transformation along this direction. For wider channels, we obtain the 2D Fourier transformation from 1D scans of $S_z$ by assuming a radially symmetric spin mode, see supplementary information for details. This is justified because we observe a similar dependence of $S_z$ along the $x$ and $y$ directions, as seen from the values obtained for wavenumbers $q_x^0$ and $q_y^0$ later in the text.
Figures \[fig1\]d and \[fig1\]e show $S_z (q_y, t)$ for the 19- and $0.7-\mu$m wires, respectively. The Gaussian distribution of $S_z (q_y,t)$ decays in time with very different rates for varying $q_y$, which are minimal at a finite wavenumber, $q^0_y$. Figure \[fig1\]f shows traces at $q_y \approx q^0_y$ for the two cases. For $t > 500$ps, we fit each trace with a single exponential decay to obtain the momentum-dependent lifetime $\tau (q_y)$ of the longer-lived spin mode [@Koralek2009; @Stanescu_2007]. The decay rates, $1/\tau (q_y)$ are shown in Fig. \[fig2\]a. In both the 1D and the 2D case, $1/\tau$ vs $q_y$ can be well approximated close to $q_y^0$ by the parabolic function [@Liu2012; @Stanescu_2007]
$$1/\tau = 1/\tau^0 + D_s (q_y - q_y^0)^2,
\label{parabola}$$
where $D_s$ is the spin diffusion constant [^1]. Figures \[fig2\]b and \[fig2\]c plot the values obtained for $q_y^0$ ($q_x^0$) and $\tau^0$, respectively, for channels along the $y$ ($x$) direction and of various widths.
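The extraction of $\tau^0$, $q_y^0$ and $D_s$ from Eq. (\[parabola\]) amounts to an ordinary quadratic fit of the measured rates. A minimal sketch with synthetic data (the parameter values are illustrative, taken from the ranges quoted in this paper, not from an actual trace):

```python
import numpy as np

# Synthetic decay rates following Eq. (parabola); illustrative values:
# tau0 = 1 ns, D_s = 5 um^2/ns (= 0.005 m^2/s), q_y^0 = 0.7 um^-1.
tau0_true, Ds_true, q0_true = 1.0, 5.0, 0.7
qy = np.linspace(0.2, 1.2, 25)                    # wavenumbers in um^-1
rate = 1.0/tau0_true + Ds_true*(qy - q0_true)**2  # decay rates in ns^-1

# Expanding Eq. (parabola) gives
#   1/tau = Ds*q^2 - 2*Ds*q0*q + (1/tau0 + Ds*q0^2),
# so a quadratic fit recovers all three parameters:
c2, c1, c0 = np.polyfit(qy, rate, 2)
Ds = c2                            # um^2/ns
q0 = -c1/(2.0*c2)                  # um^-1
tau0 = 1.0/(c0 - c1**2/(4.0*c2))   # ns
print(Ds, q0, tau0)
```

In practice the fit would of course be restricted to the region close to $q_y^0$ where the parabolic approximation holds.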

![**Lifetime enhancement for various $\alpha / \beta$.** Lifetimes for the 1D and 2D situation as determined from Monte-Carlo simulations for various ratios of $-1.1 < \alpha / \beta < 1.1$. Data is obtained by fitting $S_z(y,t)$ with a model that includes a diffusive dilution proportional to either $1/t$ or $1/\sqrt t$. The former is used in the 2D case for $|\alpha|\approx|\beta|$ (diamonds), the latter for isotropic SOI in the 2D case (rectangles) and for the 1D case (crosses). The solid and dashed lines are theoretical curves (see supplementary information). The lifetime enhancement under lateral confinement is largest for $\alpha = 0$ (arrow). For both the 2D case at $|\alpha| = |\beta|$ and the 1D case, the lifetime is limited by the same value given by the cubic SOI only. \[fig3\] ](Figure4.pdf)
Comparing $q_x^0$ and $q_y^0$ in Fig. \[fig2\]b, we observe a slight anisotropy characterized by $q_x^0 > q_y^0$. This means that the SOI is stronger for electrons that move along $x$ and indicates a remaining Rashba field due to a slight asymmetry in the quantum well. The SOI coefficients, $\alpha$ and $\beta$, are obtained from $q_{y}^0$ and $q_{x}^0$ measured in the 1D limit by using the expressions
$$\begin{aligned}
q_\textrm{y}^0 = \Big| \frac{2 m^*}{\hbar^2} \left( \alpha - \beta \right) \Big| \approx 0.7~\mu \textrm{m}^{-1} ,~\textrm{and}\\
\: q_\textrm{x}^0 = \Big| \frac{2 m^*}{\hbar^2} \left( \alpha + \beta \right) \Big| \approx 0.8~\mu \textrm{m}^{-1}.
\label{q0s}\end{aligned}$$
Here, $m^*$ is the effective electron mass, $\hbar$ is the reduced Planck constant, $\alpha$ is the SOI parameter of the Rashba field, and $\beta = \beta_1 - \beta_3$ that of the Dresselhaus field, where $\beta_1$ and $\beta_3$ characterize the linear and cubic Dresselhaus fields, respectively. Values for $\alpha$, $\beta_1$, $\beta_3$ and $D_s$ are given in the supplementary information.
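Inverting these two relations yields the SOI coefficients directly from the measured wavenumbers. A small sketch, assuming the standard GaAs effective mass $m^* = 0.067\,m_e$ and the assignment of the larger wavenumber to $|\alpha+\beta|$ as printed in Eq. (\[q0s\]); the sign of $\alpha$ is not fixed by the magnitudes alone but follows from the magnetic-field maps discussed in the supplementary information:

```python
import numpy as np

hbar = 1.0545718e-34            # J s
me = 9.1093837e-31              # kg
mstar = 0.067*me                # GaAs effective mass (assumed standard value)
eV = 1.602176634e-19            # J

qy0 = 0.7e6                     # measured |2 m* (alpha - beta)|/hbar^2, in 1/m
qx0 = 0.8e6                     # measured |2 m* (alpha + beta)|/hbar^2, in 1/m

apb = hbar**2*qx0/(2*mstar)     # |alpha + beta| in J m
amb = hbar**2*qy0/(2*mstar)     # |alpha - beta| in J m
beta = (apb + amb)/(2*eV)       # eV m (taking beta > |alpha|)
alpha = (apb - amb)/(2*eV)      # magnitude of alpha in eV m
print(beta*1e13, alpha*1e13)    # roughly 4.3 and 0.3, in units of 1e-13 eV m
```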
The dependence of $q_x^0$ on $w$ is rather flat, whereas $q_y^0$ decreases for increasing $w$. This is in agreement with the prediction that $q_y^0$ of the 2D spin mode is smaller for slightly anisotropic SOI than expected from Eq. (\[q0s\]) [@Stanescu_2007; @Poshakinskiy2015]. Close to the persistent spin helix situation, the same effect leads to a suppression of precession along $y$.
The lifetime, $\tau^0$, however, behaves almost identically for both wire directions and increases by about one order of magnitude from $w = 19$ to $0.7~\mu$m. Theory provides expressions for the lifetime in the 2D limit [@Froltsov2001; @Stanescu_2007], $\tau_\textrm{2D}$, and in the intermediate regime [@Kiselev2000; @Malshukov2000], $\tau_\textrm{IM}$, see supplementary information. For very narrow channels, the lifetime $\tau_\textrm{1D}$ is limited by cubic Dresselhaus SOI only, and as we will show later, is the same as in the completely balanced spin-helix case [@Salis2014]. The theoretically expected values are plotted in Fig. \[fig2\]c as black lines. The interpolation between $\tau_\textrm{2D}$, $\tau_\textrm{IM}$ and $\tau_\textrm{1D}$ (yellow line in Fig. \[fig2\]c) is in very good quantitative agreement with the experimental data. Although $\tau^0$ has not yet saturated towards smaller $w$, it is possible to project that cubic SOI will limit the lifetime.
The lifetime enhancement achievable by channel confinement depends strongly on the ratio $\alpha / \beta$. Figure \[fig3\] shows lifetimes determined by Monte-Carlo simulations for $-1.1 < \alpha / \beta < 1.1$. For this analysis, we determined $\tau^0$ directly from the decay of $S_z (y=0, t)$, see supplementary information. Without Fourier transformation one has to account for a diffusion factor that reduces the amplitude, in addition to an exponential decay term. The diffusive dilution of electrons in 2D scales with $1/t$ and in 1D with $1/\sqrt{t}$. Interestingly, the spins in a 2D system also decay with $1/\sqrt{t}$ in the isotropic SOI case [@Stanescu_2007]. Solid and dashed lines are the theoretically expected values of $\tau^0$ for 2D and 1D spin modes ($\tau_\textrm{1D}$, $\tau_\textrm{2D}$, $\tau_\textrm{PSH}$), as well as for the DP case ($\tau_\textrm{DP}$).
We find that in a narrow channel, $\tau^0$ does not depend on $\alpha$ or $\beta_1$ and is limited by cubic SOI ($\beta_3$) only. The same limit is reached in the 2D situation at $\alpha = \beta$, i.e., when the system is tuned to the persistent spin helix symmetry. For given SOI coefficients, the maximal lifetime enhancement under lateral confinement in the diffusive limit occurs for the isotropic case ($\alpha=0$). Close to $\alpha = \beta$, the lifetime enhancement is small, but a reduction of diffusive dilution was observed [@Altmann2014].
In conclusion, we measured the evolution of a local spin excitation in a GaAs/AlGaAs quantum well dominated by linear Dresselhaus SOI. Because of SOI, the lateral confinement leads to an increased correlation between electron position and spin precession. Using a real-space mapping of the spin distribution, we observe a helical spin mode accompanied by an enhanced lifetime for decreasing channel width. The analysis in momentum space shows that the long-lived components decay exponentially with a minimum rate at a finite $q^0$. Both the precession length and the lifetime are in quantitative agreement with theory for the 2D limit, the 1D limit and also for the intermediate regime. The narrowest channel in our study is still 10 times wider than the mean free path (including electron-electron scattering) and 100 times wider than the Fermi wavelength of the electrons. At those smaller length scales, a reduction of the cubic SOI contribution to spin decay has also been predicted [@Wenk2011].
These findings illuminate an interesting path for studying spin-related phenomena. Lateral confinement provides a straightforward method for achieving spin lifetimes that are otherwise only possible by careful tuning of SOI to the persistent spin helix symmetry. This facilitates the use of spins in materials with stronger SOI, such as InAs or GaSb, but also in group-IV semiconductors, like Si and Ge. Extending the presented method to 1D systems in the quantized limit will be relevant for the quest for Majorana fermions when combined with superconductors [@Lutchyn2010; @Oreg2010; @Alicea2010; @Mourik2012]. Furthermore, the results are important for transport studies and transistor applications [@Schliemann2003; @Kunihashi2012APL; @Chuang2014] using SOI in 1D or quasi-1D systems.
We acknowledge financial support from the NCCR QSIT and from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT) in Grant-in-Aid for Scientific Research Nos. 15H02099 and 25220604. We thank R. Allenspach, A. Fuhrer, T. Henn, A. V. Poshakinskiy, F. Valmorra, M. Walser, and R. J. Warburton for helpful discussions, and U. Drechsler for technical assistance.
Supplement
==========
**Measurement Setup**
We use a time- and spatially-resolved pump-probe technique to map out spin dynamics. Two Ti:sapphire lasers are used to generate the pump and probe laser pulses at 785 nm and 802 nm, respectively. Pulse lengths are on the order of 1 ps and the repetition rate is 79.1 MHz, which corresponds to 12.6 ns between two pulses. The pulses of the two lasers are electronically synchronized. The delay between pump and probe pulses is controlled by a mechanical delay line. The linearly polarized probe beam is chopped at a frequency of 186 Hz. The polarization of the pump is modulated by a photo-elastic modulator between plus and minus circular polarization at a frequency of 50 kHz. The sample is located inside an optical cryostat at a temperature of 20 K. Both laser beams are focused onto the sample surface with a lens inside the cryostat. The Gaussian sigma width of the intensity profile is $\approx 1.1~\mu$m for both spots. The power of the pump beam is $100~\mu$W and that of the probe beam is $50~\mu$W. The pump spot is positioned onto the sample relative to the fixed probe spot by a scanning mirror. After being reflected from the sample, the pump beam is blocked with a suitable edge filter, whereas the probe beam is sent to a detection line, where its polarization is monitored by a balanced photodiode bridge and lock-in amplifiers.
**Sample preparation**
A GaAs quantum well is grown on a (001) GaAs substrate by molecular beam epitaxy. The barrier material is Al$_{0.3}$Ga$_{0.7}$As. Front and back Si $\delta$-doping layers are positioned such that the electric field perpendicular to the quantum-well plane is very small. A sheet density of $3.5 \times 10^{15}$ m$^{-2}$ and a transport mobility of $7.0 \times 10^5$ cm$^2$(Vs)$^{-1}$ were determined at 4 K after illumination by a van-der-Pauw measurement. A $5 \times 5$ mm$^2$ piece was cleaved out of the 2-inch wafer and processed with photo-lithography. Wires of variable width were etched by wet-chemical etching. The effective widths of the wires as given in the main text were determined from scanning electron microscopy images, measuring the width of the top surface.
**Theory**
The measured values of $q_y^0$ and $q_x^0$ in the 1D limit allow the determination of $\alpha$ and $\beta$, as described in the main text. Additionally, the knowledge of the electron density, $n_s$, allows the calculation of the cubic Dresselhaus coefficient via $\beta_3 = -\gamma k_F^2/4$, where $k_F = \sqrt{2 \pi n_s}$ is the Fermi wavenumber and $\gamma =-11 \times 10^{-30}$ eVm$^3$ [@Walser2012PRB]. Equation (\[parabola\]) allows the determination of $D_s$. The structure investigated is described by the following parameters.
  $D_s$           $\alpha$                     $\beta_1$                   $\beta_3$
  --------------- ---------------------------- --------------------------- ---------------------------
  0.005 m$^2$/s   $-0.3\times10^{-13}$ eVm     $4.9\times10^{-13}$ eVm     $0.6\times10^{-13}$ eVm
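As a consistency check, the cubic coefficient can be reproduced from the bulk Dresselhaus constant and the sheet density. The sketch below assumes the relation $\beta_3 = -\gamma k_F^2/4$ with $k_F^2 = 2\pi n_s$ for a spin-degenerate 2DEG; with the numbers quoted in this supplement it reproduces the tabulated $\beta_3$:

```python
import numpy as np

gamma = -11e-30       # eV m^3, bulk Dresselhaus coefficient (from the text)
ns = 3.5e15           # m^-2, sheet density (from the sample section)
kF2 = 2*np.pi*ns      # Fermi wavenumber squared of a spin-degenerate 2DEG
beta3 = -gamma*kF2/4  # eV m
print(beta3*1e13)     # ~0.6, consistent with the table above
```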
The spin-dephasing time in the 2D limit for Dresselhaus fields only is given by [@Stanescu_2007]
$$\begin{aligned}
\begin{split}
\tau_\textrm{2D} = \Bigg[ 2 \frac{D_s m^{*2}}{\hbar^4} \Bigg( \beta^2 + 3 \beta_3^2 - \frac{\left( \beta^2 + \beta_3^2 \right)^2}{8 \beta^2} \Bigg) \Bigg]^{-1} ~.
\end{split}
\label{tau2D}\end{aligned}$$
The spin-dephasing time in the 2D limit close to the persistent spin helix symmetry is given by [@Bernevig2006; @Salis2014]
$$\tau_{\textrm{PSH}} = \left\{ 2 D_s \frac{m^{*2}}{\hbar^4} \left[ (\alpha - \beta)^2 + 3 \beta_3^2 \right] \right\}^{-1} \,.
\label{tauPSH}$$
The spin-dephasing time in the 1D limit is given by [@Chang2009; @Wenk2011; @Salis2014]
$$\tau_\textrm{1D} = \left[ 6 \frac{D_s m^{*2}}{\hbar^4} \beta_3^2 \right]^{-1} ~.$$
The Dyakonov-Perel spin-dephasing time for out-of-plane spin polarization is given by
$$\tau_\textrm{DP} = \left[ 8 \frac{D_s m^{*2}}{\hbar^4} \left( \alpha^2 + \beta^2 + \beta_3^2 \right) \right]^{-1} ~.
\label{tauDP}$$
The following scaling behavior is expected for the intermediate regime between the 1D and the 2D limit [@Chang2009]:
$$\tau_\textrm{IM} = 48 \tau_\textrm{DP} \left( q_0 w \right)^{-2} ~.$$
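For the parameters in the table above, these limiting lifetimes can be evaluated numerically. A sketch assuming $m^* = 0.067\,m_e$ and reading Eq. (\[tau2D\]) in its dimensionally consistent form, $1/\tau_\textrm{2D} = 2 (D_s m^{*2}/\hbar^4)\left[\beta^2 + 3\beta_3^2 - (\beta^2+\beta_3^2)^2/(8\beta^2)\right]$:

```python
import numpy as np

hbar, me, eV = 1.0545718e-34, 9.1093837e-31, 1.602176634e-19
mstar = 0.067*me                       # assumed GaAs effective mass
Ds = 0.005                             # m^2/s, from the parameter table
alpha = -0.3e-13*eV                    # J m
beta1, beta3 = 4.9e-13*eV, 0.6e-13*eV  # J m
beta = beta1 - beta3

c = Ds*mstar**2/hbar**4                # common prefactor of all decay rates
tau_1d = 1.0/(6*c*beta3**2)
tau_psh = 1.0/(2*c*((alpha - beta)**2 + 3*beta3**2))
tau_dp = 1.0/(8*c*(alpha**2 + beta**2 + beta3**2))
tau_2d = 1.0/(2*c*(beta**2 + 3*beta3**2 - (beta**2 + beta3**2)**2/(8*beta**2)))
print(tau_1d*1e9, tau_2d*1e9, tau_dp*1e9)  # in ns
```

The resulting hierarchy $\tau_\textrm{1D} > \tau_\textrm{2D} > \tau_\textrm{DP}$ mirrors the order-of-magnitude lifetime enhancement seen in Fig. \[fig2\]c.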
**Real-space evaluation**
An analysis of the spin dynamics is, in principle, also possible in the real-space representation $S_z (y, t)$ and a model can be fitted to the full data. It contains the spatial variation of the spin mode (Bessel function in 2D, cosine in 1D), a Gaussian envelope, which originates in the Gaussian shape of the initial spin density profile, the diffusive broadening of this Gaussian envelope, and an exponential decay [@Altmann2014]. A diffusive decay term accounts for the diffusive dilution of the spin density. As mentioned in the main text when discussing the evaluation of $\tau^0$ from Monte-Carlo simulations, this term is proportional to $1/\sqrt{t}$ in the 1D situation. In the 2D case, however, it is either $1/t$ for $\alpha \approx \beta$ or $1/\sqrt{t}$ for the isotropic case, i.e., when $\alpha \beta = 0$ [@Stanescu_2007]. An additional complication arises because the initial spin excitation has a finite spatial distribution owing to the pump-laser spot size. This can be accounted for by a convolution of the exact solution for a $\delta$-peak excitation with a Gaussian function that is itself a convolution of the intensity profile of both the pump and the probe laser spots. Figure \[figSupReal\] shows such fits for the 19-$\mu$m and 0.7-$\mu$m wires along the $y$ direction. This procedure is numerically more demanding than the fits of the exponentially decaying Fourier components of $S_z(y)$, and the fit parameter $\tau^0$ is obtained more indirectly. We therefore prefer to evaluate the Fourier-transformed $S_z$.
![**Fits of $S_z (y, t)$. a,** Fit of a 2D real-space model, as described in the text, to the data of the 19-$\mu$m channel. **b,** Fit of a 1D real-space model, as described in the text, to the data of the 0.7-$\mu$m channel. The values determined by this method for $\tau^0$ agree with the evaluation of $S_z (q_y, t)$ in momentum space. \[figSupReal\] ](FigSupp_RealSpace.pdf)
**Fourier transformation**
To obtain the decay dynamics at specific wave-vectors, the spatial spin pattern $S_z(x,y)$ needs to be Fourier-transformed. If we consider a 1D system defined by a channel along $y$, $S_z$ only varies along $y$, which allows us to apply a 1D Fourier transformation:
$$S_z (q_y,t) = \int_{- \infty}^\infty \cos (q_y y) S_z (y, t) \mathrm{d}y ~.$$
In the 2D situation, the Fourier transformation in principle requires full knowledge of $S_z(x,y)$. Because of the almost isotropic SOI, we can assume a radially symmetric mode and obtain the Fourier component $S_z(q_y,t)$ from scans of $S_z$ along $y$:
$$S_z (q_y,t) = \int_0^\infty 2 \pi y \mathrm{J}_0 (q_y y) S_z (y, t) \mathrm{d}y ~.$$
Here, $\mathrm{J}_0$ is the zeroth-order Bessel function. When the channel width $w$ is gradually reduced, the system undergoes a transition from the 2D to the 1D situation. We have Fourier-transformed all data sets with both methods and fitted them as described in the main text. Figure \[figSup1\] shows $\tau^0$ along the $y$-direction determined with both methods. The values are very similar. In the main text, we therefore plot the 2D transformation for $w \geq 15~\mu$m and the 1D transformation for narrower channels.
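Both transformations are straightforward to evaluate numerically on a sampled profile. A self-contained sketch with a synthetic helical mode ($q^0$ and the envelope width are illustrative values, not fitted data); the Bessel function is computed from its integral representation to avoid external dependencies:

```python
import numpy as np

def j0(x):
    # J0(x) = (1/pi) * int_0^pi cos(x sin t) dt, midpoint rule
    t = (np.arange(200) + 0.5)*np.pi/200
    return np.cos(np.outer(x, np.sin(t))).mean(axis=1)

# Synthetic snapshot: cosine mode with q0 = 0.7 um^-1 under a Gaussian envelope
q0, sigma = 0.7, 3.0                             # um^-1 and um
y = np.linspace(0.0, 30.0, 400)                  # um
dy = y[1] - y[0]
Sz = np.cos(q0*y)*np.exp(-y**2/(2*sigma**2))

q = np.linspace(0.05, 2.0, 120)                  # um^-1
# 1D (channel) transform: S(q) = int cos(q y) S_z(y) dy
S1d = np.array([(np.cos(qi*y)*Sz).sum()*dy for qi in q])
# radially symmetric 2D transform: S(q) = int 2 pi y J0(q y) S_z(y) dy
S2d = np.array([(2*np.pi*y*j0(qi*y)*Sz).sum()*dy for qi in q])

print(q[np.argmax(S1d)], q[np.argmax(S2d)])      # both peak near q0
```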
![**Comparison of radial and linear Fourier transformation.** Experimental values of $\tau^0$ as obtained by 2D (diamonds) and 1D (circles) Fourier transformations of $S_z (y, t)$. Also shown are the theoretical values of $\tau_\textrm{1D}$, $\tau_\textrm{2D}$ and $\tau_\textrm{IM}$, as well as their interpolation. \[figSup1\] ](FigSupp_Fourier.pdf)
**The sign of $\alpha$**
While $\beta$ can only be positive, the sign of $\alpha$ depends on the direction of the perpendicular electric field with respect to the growth direction. To determine the sign of $\alpha$, we measure spatial spin maps also at an applied external magnetic field. Figure \[figSupTilt\] shows such measurements in a channel with $w = 1.7~\mu$m for an external magnetic field of $B_\textrm{ext} = +1$ T perpendicular to the wire direction. Evaluating these measurements as done in [@Walser2012], we can conclude that $\alpha < 0$. Moreover, the continuous lines of constant spin orientation in these measurements demonstrate the helical nature of the ground mode.
![**Spin maps at an external magnetic field. a,** $S_z (y, t)$ in a 1.7-$\mu$m-wide wire along the $y$ direction at an external magnetic field of $B_\textrm{ext} = + 1 $ T. The field is perpendicular to the wire direction. The sample temperature is 30 K. **b,** $S_z (x, t)$ in a 1.7-$\mu$m-wide wire along the $y$ direction at an external magnetic field of $B_\textrm{ext} = + 1 $ T. The field is perpendicular to the wire direction. The sample temperature is 10 K. Because of the additional spin precession induced by the magnetic field, lines of constant phase are tilted, showing the helical nature of the spin mode. From the opposite sign of the tilts in the $(x, t)$ and the $(y, t)$ planes, it is concluded that $\alpha$ is negative. \[figSupTilt\] ](FigSupp_Tilt.pdf)
**Monte-Carlo simulations**
Spin dynamics in a laterally confined 2D electron gas are calculated numerically using a Monte-Carlo method where the positions and spin orientations of $3 \times 10^5$ electrons are updated in time steps of 0.1 ps. Electrons are distributed on a Fermi circle and scatter isotropically, with the mean scattering time given by $\tau=2 D/v_F^2$, where $v_F=\hbar k_F/m^*$ is the Fermi velocity. Each electron moves with the Fermi velocity and sees an individual spin-orbit field as defined in the supplementary information of Ref. [@Walser2012] that depends on its velocity direction. The real-space coordinates and the corresponding spin dynamics are calculated semiclassically. We initialize the electrons at $t=0$ all with their spins oriented along the $z$ direction and distribute their coordinates in a Gaussian probability distribution with a center at $x=y=0$ and a $\sigma$ width of 0.5 $\mu$m. Histograms of the electron density and the spin orientations are recorded every 5 ps, and the simulation is run until $t=5$ ns is reached. We obtain the spin polarization at $x=y=0$ versus $t$ from the spin-density maps using a convolution with an assumed Gaussian probe spot size of 0.5 $\mu$m. We determine the spin lifetimes $\tau^0$ by fitting the transients with a function proportional to $1/t\times\exp(-t/\tau^0)$ or $1/\sqrt{t} \times \exp(-t/\tau^0)$ in a window 800 ps $<t<$ 4000 ps, where additional spin decay is negligible because of the small spot sizes [@Salis2014]. For the data shown in Fig. 4, we have used the following parameters: $D_s=0.004$ m$^2$/s, $n_s=3.4\times10^{15}$ m$^{-2}$, $\beta_1 = 4.9\times10^{-13}$ eVm and $\beta_3=0.6\times10^{-13}$ eVm. Lateral confinement was implemented by assuming specular scattering at the channel edges. For the 1D case, $w=0.4~\mu$m was used.
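The core precession-scattering loop of such a simulation can be condensed into a few lines. The sketch below is a heavily reduced version for illustration only: it omits real-space coordinates, lateral confinement and the cubic SOI term, uses a much smaller ensemble and a shorter run, and therefore reproduces only the uniform-mode Dyakonov-Perel decay of $S_z$; the parameter values are taken from this supplement.

```python
import numpy as np

rng = np.random.default_rng(1)

hbar, me, eV = 1.0545718e-34, 9.1093837e-31, 1.602176634e-19
mstar = 0.067*me                       # assumed GaAs effective mass
Ds, ns = 0.005, 3.5e15                 # m^2/s and m^-2
alpha, beta = -0.3e-13*eV, 4.3e-13*eV  # linear SOI in J m; cubic term omitted

kF = np.sqrt(2*np.pi*ns)
vF = hbar*kF/mstar
tau_p = 2*Ds/vF**2                     # mean scattering time, tau = 2 D / vF^2

qp = 2*mstar*(beta + alpha)/hbar**2    # precession wavenumber for x motion
qm = 2*mstar*(beta - alpha)/hbar**2    # precession wavenumber for y motion

N, dt, steps = 2000, 1e-13, 1500       # 0.1 ps steps, 150 ps total
theta = rng.uniform(0, 2*np.pi, N)     # velocity angles on the Fermi circle
S = np.zeros((N, 3)); S[:, 2] = 1.0    # spins initialized along z

sz = []
for _ in range(steps):
    # in-plane SO precession vector, depending on the velocity direction
    w = np.stack([qm*vF*np.sin(theta),
                  qp*vF*np.cos(theta),
                  np.zeros(N)], axis=1)
    wn = np.linalg.norm(w, axis=1, keepdims=True)
    k = w/wn
    ang = (wn*dt).ravel()
    c, s = np.cos(ang)[:, None], np.sin(ang)[:, None]
    # Rodrigues rotation of each spin about its precession axis
    S = S*c + np.cross(k, S)*s + k*np.sum(k*S, axis=1, keepdims=True)*(1 - c)
    # isotropic momentum scattering with mean time tau_p
    hit = rng.random(N) < dt/tau_p
    theta[hit] = rng.uniform(0, 2*np.pi, hit.sum())
    sz.append(S[:, 2].mean())

print(sz[0], sz[-1])   # average S_z decays by Dyakonov-Perel dephasing
```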
[^1]: Note that the spin diffusion constant differs from the electron diffusion constant measured by transport measurements because it is sensitive to electron-electron scattering.
---
abstract: 'Harmonic emission from cluster nanoplasmas subject to short intense infrared laser pulses is studied. In a previous publication \[M. Kundu [*et al.*]{}, , 033201 (2007)\] we reported particle-in-cell simulation results showing resonant enhancements of low-order harmonics when the Mie plasma frequency of the ionizing and expanding cluster resonates with the respective harmonic frequency. Simultaneously we found that high-order harmonics were barely present in the spectrum, even at high intensities. The current paper is focused on the analytical modeling of the process. We show that dynamical stochasticity owing to nonlinear resonance inhibits the emission of high order harmonics.'
author:
- 'S.V. Popruzhenko'
- 'M. Kundu'
- 'D.F. Zaretsky'
- 'D. Bauer'
date:
-
-
title: Harmonic emission from cluster nanoplasmas subject to intense short laser pulses
---
Introduction
============
The study of rare gas and metal clusters interacting with intense infrared, optical, and ultraviolet laser pulses has emerged as a new promising research area in strong field physics (see [@rost06; @krainov02] for reviews). The generation of fast electrons and ions, the production of high charge states, the generation of x-rays and nuclear fusion (in the case of deuterium-enriched clusters) was observed using cluster targets in strong laser fields of intensities up to $10^{20}$W/cm$^2$. Hot and dense nonuniform, nonequilibrium and nonstationary plasmas produced under such conditions and confined on a femto- or picosecond time scale to nanometer sizes—so-called [*nanoplasmas*]{}—are new physical objects with unusual properties.
One of the most important features of nanoplasmas is the very efficient energy transfer from light to charged particles, which is much higher (per particle) than for an atomic gas of the same average density [@ditmire97]. Although the particular mechanisms responsible for the laser energy deposition in clusters still remain debated [@kuhl; @last; @korn04; @milchberg; @kost05; @mulser05; @kundu06; @devis; @korn07], the pivotal role of collective plasma dynamics (in particular the excitation of surface plasmons) and of nonlinear resonance (a definition is given below in Sec. \[reoloh\]) in the energy transfer from laser light to the electrons in the nanoplasma and the subsequent outer ionization was proved in experiments, simulations, and simple analytical models [@korn04; @mulser05; @kost05; @antonsen05; @doeppner; @kundu06].
While the energy absorption from intense laser pulses by nanoplasmas has been widely studied both in experiments and theory, much less attention has been paid to harmonic emission from such systems. Naturally the question arises whether laser-driven clusters, being highly nonlinear systems, may be a source of high-order harmonics as efficient as atoms in the gas phase (or even more efficient?). There are at least two essentially different physical mechanisms which could be responsible for emission of harmonics from such a system.
First, the mechanism based on recombination of the virtually ionized electron with its parent ion [@corkum], well-studied in gaseous atomic targets, may also work in clusters where it can be modified by the fact that the atoms are closer to each other, so that the electron’s motion between ionization and recombination can be distorted by the field of the other ions. Moreover, the electron may recombine with an ion different from its parent. This leads to modifications in the single-electron dynamics and in the phase matching conditions while the physical origin of harmonic generation (HG) remains the same as in a gas jet. Modifications of the atomic recombination mechanism in clusters have been considered in [@rost00; @maquet01]. However, in high intensity fields, where each atom is losing one or several electrons already during the leading edge of the laser pulse, the recombination mechanism can hardly be efficient.
Second, as a dense electron plasma is produced inside a cluster, its coherent motion may be an efficient source of radiation. In a collisionless plasma, as it is generated by intense laser fields inside small clusters, individual electron-electron and electron-ion collisions cannot destroy the coherency of the collective electron motion. This coherent, collective electron motion may cause HG if it is nonlinear. Recently, several experiments on high harmonic emission from plasma surfaces illuminated by short laser pulses of intensity $10^{17}$W/cm$^2$ and higher were reported (see, e.g., [@nph06; @nph07]). The physical picture which is behind such [*plasma harmonics*]{} appears to be more complicated and diverse than the recombination mechanism in atomic HG.
In macroplasmas the magnetic component of the Lorentz force is a typical source of nonlinearity. In this case the nonlinearity parameter is $v/c$ where $v$ is the typical velocity related to collective oscillations of the electron plasma and $c$ is the speed of light. As a consequence, generation of harmonics from dense macroplasmas may require relativistic intensities [@nph06]. As compared to macroplasmas, clusters introduce an extra source of nonlinearity due to their small spatial size, namely $X/R_0$, where $R_0$ is the cluster radius and $X$ is the amplitude with which the electron cloud oscillates under the action of the laser field [@fom03]. Therefore, one could expect strongly nonlinear electron motion even in the nonrelativistic regime. It should be noted, however, that although individual collisions may not be important, there are other effects which can spoil the coherency required for efficient radiation. Most of these undesirable effects can be attributed to dynamical instabilities induced by the interaction of particles with the mean self-consistent field (in the presence of the laser field). Therefore, the examination of [*plasma harmonics*]{} usually requires not only the analysis of the collective motion but that of individual electron trajectories as well.
Up to now only a few experiments on HG from clusters are known. In Refs. [@ditmire96; @ditmire97a; @vozzi05] HG from rare-gas clusters irradiated by infrared pulses of moderate ($\simeq 10^{13}-10^{14}$W/cm$^2$) intensity was studied. It was shown that under such conditions harmonics can be generated up to higher orders and with a higher saturation intensity than in a gas jet. In addition other interesting properties including different power laws in the intensity-dependent harmonic yield for particular harmonics for clusters and atoms were reported [@ditmire96]. However, in Refs. [@ditmire96; @ditmire97a; @vozzi05] the applied intensities were too low to create a dense nanoplasma inside clusters and the main features of the recorded spectra (like a plateau followed by a cutoff) were found to be quite similar to the case of a gaseous target. This allows one to attribute the observed effects to HG along the standard atomic recombination mechanism as described above.
Recently the experimental observation of the third harmonic (TH) generation from argon clusters in a strong laser field was reported [@ditmire07]. The laser intensity was varied between $10^{14}$ and $10^{16}$W/cm$^2$ and was thus sufficiently high for the creation of a dense nanoplasma. At such intensities HG along the atomic recombination mechanism relevant in gases is essentially suppressed because of saturated single-electron ionization so that the observed TH signal can be fully attributed to the nonlinearity of the laser-driven nanoplasma. A resonant enhancement of the TH yield (when the Mie frequency of the expanding cluster approaches three times the laser frequency) has been measured using a pump-probe setup. The TH enhancement of the single-cluster response, as studied in theory before [@fom03; @fomytski04; @fom05], is, however, masked in the experiment by phase matching effects whose optimization at high average atomic density necessary to create clusters has been shown to be more intricate than for rare gas jets. The latter complication makes an experimental study of nanoplasma radiation a difficult task while in computations it can be simplified by first examining the single-cluster response and, second, analyzing propagation effects.
In the recent paper [@kundu07], we considered the radiation emitted by a single cluster exposed to a strong laser pulse. We computed harmonic spectra from argon clusters in short 800-nm pulses of intensity $2.5\times 10^{14}-7.5\times 10^{17}$W/cm$^2$ using a 3D particle-in-cell (PIC) code (applied before to the study of collisionless energy absorption in laser-driven nanoplasmas [@kundu06]). The most intriguing outcome of our study was the absence of high-order harmonics in the computed spectra, even at high intensities, for all cluster sizes we considered. We attributed this effect to the above-mentioned dynamical instability in the motion of individual electrons.
In this paper we introduce two analytical models which describe the collective and single-electron dynamics of a laser-driven nanoplasma, respectively, and apply them for the explanation of both the numerical results [@kundu07] and the data [@ditmire07]. We show that the numerical results can be well described and understood within the rigid sphere model (RSM) but only for low harmonic orders $3,5,7$ while for higher harmonics the RSM yields qualitatively wrong predictions. Using a simple 1D model we describe a stochastic, resonant single-particle electron dynamics which suppresses the emission of high harmonics.
The paper is organized as follows. In Sec. \[softpapr\] we formulate the statement of the problem and describe the numerical method and the PIC results. In Sec. \[reoloh\] the spectra extracted from the PIC simulations are compared with the predictions of the RSM. In Sec. \[sohhvsboie\] we introduce a model for the description of the single-electron dynamics and radiation and use it to explain the suppression of the high harmonic yield. The last section contains the conclusions.
Statement of the problem and previous results {#softpapr}
=============================================
A cluster is converted into a dense electron plasma almost promptly if the laser pulse is intense enough to ionize the cluster constituents. We refer to this process as inner ionization, which should not be confused with outer ionization, when the nanoplasma electrons leave the cluster. Several competitive processes govern the evolution of this plasma during the interaction with the pulse and later, until the cluster becomes dissolved due to Coulomb explosion or hydrodynamic expansion. The electron density increases because of further ionization of atoms and ions by the local electric field which may differ essentially from the applied laser field. With increasing plasma density due to inner ionization, the oscillating electric field of an infrared laser is screened, so that its amplitude inside the cluster may be a few times or even an order of magnitude less than the amplitude of the incident wave (see more explanations on screening in Sec. \[sohhvsboie\] below Eq.(\[E0\])). Simultaneously, as soon as a sizeable fraction of electrons have left the cluster, a quasistatic space charge field is built up which may become strong enough to induce further inner ionization (“ionization ignition”, [@rose97; @bauer03]). On the other hand, both outer ionization and the expansion of the cluster reduce the electron density. The net result of this competition is very sensitive to all parameters, including laser intensity, pulse duration, cluster size and type of atoms. However, for the vast majority of parameters a significant part of the electrons remains confined within the expanding ionic core. During this stage of the cluster evolution, until the laser pulse is off, the nonlinear motion of the laser-driven nanoplasma may cause the emission of laser harmonics.
We have observed the above described scenario in PIC simulations of laser-cluster interaction, as reported in [@kundu06; @kundu07]. The dynamics and the radiation of nanoplasmas were studied for ${\rm Ar}_N$ clusters (with the number of atoms $N\approx 10^4$–$10^5$ and radii $R_0\approx 6$–$10$nm), irradiated by linearly polarized, 8-cycle sin$^2$-laser pulses with an electric field ${{\bf E}}_l(t)={{\bf E}}_0\sin^2(\omega_l t/2n)\cos(\omega_l t)$ and the wavelength $\lambda = 800$ nm. Here $\omega_l$ is the carrier frequency and $n=8$ is the number of optical cycles in the pulse. The contribution of electrons remaining bound in atoms (ions) during the interaction was not taken into account. Results were reported in Ref. [@kundu07], where more details about the numerical simulation may also be found. Here we restrict ourselves to briefly summarizing the main results:
- The relative yields of low order harmonics depend on the laser intensity and the cluster size.
- The time-frequency analysis of harmonic spectra shows that low order harmonic enhancements occur when multiples of the laser frequency resonate with the transient Mie frequency.
- Even for the very high intensity $7.5\times 10^{17}$W/cm$^2$ no distinguishable high harmonics (higher than the 7th) appear in the spectra (see Fig. 1 in Ref. [@kundu07]).
- Only a part of the nanoplasma deeply bound inside the ion core contributes to (low order) harmonic generation.
Resonant enhancement of low-order harmonics {#reoloh}
===========================================
The enhancements of particular low-order harmonics predicted in [@fom03; @fomytski04] were later studied numerically for small clusters using molecular dynamics simulations [@fom05] and were finally observed experimentally for the case of the third harmonic [@ditmire07]. It is well established by now that the physical origin of the enhancements is the resonance between the harmonic frequency and the Mie frequency of the expanding nanoplasma. The time-frequency analysis of the radiation, as calculated from the PIC results for the total acceleration, further confirmed this statement [@kundu07]. Typical TF diagrams, showing which frequencies are emitted when, are reproduced in Fig. \[fig1\]. In the same plots we show the scaled time-dependent effective Mie frequency $\omega_{\rm Mie}(t)/\omega_l$. The standard definition of the Mie frequency, $\omega_{\rm Mie}=\sqrt{4\pi\overline{z}n_0/3}$, where $\overline{z}$ is the average ion charge and $n_0$ is the atom density, is only appropriate for the case of an almost homogeneous charge distribution (in this section atomic units $\hbar=m=e=1$ are used). However, because of the ignition effect and the cluster expansion the average ion charge $\overline{z}$ depends on the ion position. As we found in [@kundu07], the low-order harmonics are emitted mainly by electrons confined within the ion core and with excursion amplitudes comparable to the initial cluster radius $R_0$ or less. In the following we will refer to these electrons as the [*deeply bound*]{} electrons, and we define the effective Mie frequency as $\omega_{\rm Mie}(t) = \sqrt{Q_0(t)/R_0^3}$ with $Q_0(t)$ the total ionic charge inside the sphere of radius $R_0$ within which the cloud of deeply bound electrons oscillates.
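To make the resonance condition concrete, the following sketch evaluates the effective Mie frequency $\omega_{\rm Mie}(t)=\sqrt{Q_0(t)/R_0^3}$ (atomic units) for a *hypothetical* linear ramp of the core charge $Q_0(t)$ (the ramp and the final charge `q_max` are illustrative assumptions, not taken from the PIC data) and locates the times at which multiples of the laser frequency are crossed:

```python
import math

def mie_frequency(q0, r0):
    """Effective Mie frequency (atomic units): omega_Mie = sqrt(Q0 / R0^3)."""
    return math.sqrt(q0 / r0**3)

def resonance_times(q_max, tau, r0, omega_l, orders=(3, 5), steps=100000):
    """First times at which omega_Mie(t) reaches m*omega_l, for a
    hypothetical linear charge buildup Q0(t) = q_max * t / tau."""
    times = {}
    for k in range(1, steps + 1):
        t = tau * k / steps
        w = mie_frequency(q_max * t / tau, r0)
        for m in orders:
            if m not in times and w >= m * omega_l:
                times[m] = t
    return times

# R0 = 117 a.u. (~6.2 nm), 800 nm laser -> omega_l ~ 0.057 a.u.;
# q_max = 2e5 is an assumed final core charge chosen so both crossings occur.
t_res = resonance_times(q_max=2.0e5, tau=1.0, r0=117.0, omega_l=0.057)
```

Because $\omega_{\rm Mie}\propto\sqrt{Q_0}$, the $m$-th crossing occurs at $t_m=m^2\omega_l^2R_0^3\tau/q_{\max}$, so the third-harmonic resonance is met well before the fifth.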
![(Color online) Time-frequency diagrams for an ${\rm Ar}_{17256}$ cluster at laser intensities $2.5\times 10^{15}$W/cm$^2$ (a), $7.5\times 10^{17}$W/cm$^2$ (b). Panel (c) shows the TF diagram for an ${\rm Ar}_{92096}$ cluster at the intensity $7.5\times 10^{17}$W/cm$^2$. The scaled time-dependent Mie frequency $\omega_{\rm Mie}(t)/\omega_l$ is included in all plots (solid lines). \[fig1\]](Fig1-final-reduced.eps){width="40.00000%"}
It is clearly seen from the TF diagrams of Fig. \[fig1\] that the enhancements of the third and fifth harmonic must be attributed to the respective high-order nonlinear resonances between the laser frequency and the Mie frequency, $\omega_{\rm Mie}(t) = m \omega_l$ (with $m=3,5$). Here we call the resonance [*nonlinear*]{} since it appears in a [*nonlinear*]{} system. This nonlinearity (i.e., anharmonicity in the effective potential of the electron-ion interaction) is a necessary condition for the emission of harmonics. The physical origin of this nonlinearity may be either the electrons jutting out of the core during their motion, thus sensing the Coulomb tail [@fom03], or the inhomogeneous charge distribution within the ion core [@fomytski04]. Note that the appearance of a significant signal at frequencies different from odd multiples of the laser frequency (namely, the second harmonic in Fig. \[fig1\](a) and the fourth harmonic in Fig. \[fig1\](c)) should be attributed to the excitation of eigenoscillations of the electron cloud with the time-dependent eigenfrequency $\omega_{\rm Mie}(t)$. These oscillations are of significance only in short pulses, as used in the PIC simulations.
The results of the TF analysis in Ref. [@kundu07] not only explain, at least qualitatively, the resonant enhancement of the TH observed in [@ditmire07] but also confirm the idea formulated there that with a proper adjustment of parameters harmonics of higher order (5th, maybe 7th) could also be resonantly enhanced.
It was shown in earlier studies [@fom03; @fom05] that the resonant enhancements of low-order harmonics may be reasonably described using a very simple rigid sphere model (RSM). Here we show that, while for low-order harmonics the RSM works well, it fails even qualitatively to reproduce the high-energy part of the spectrum. In an RSM [@parks; @mulser05; @kundu06] it is usually assumed that both ions and electrons form homogeneous, rigid spheres with sharp boundaries. In this case the electron and ion charge density distributions are $$\rho_{e(i)}(r)=\mp\overline{z}n_0\,\theta(R_0-r), \label{rho0}$$ where $n_0$ is the atom density in the cluster and $\theta(x)$ is the Heaviside step function. The signs $\mp$ correspond to electrons and ions, respectively. Within this model the restoring force ${{\bf F}}_{\rm ei}\equiv\omega_l^2R_0{{\bf f}}$ depends upon the displacement ${{\bf X}}\equiv R_0{{\bf y}}$ of the electron cloud as [@parks] $${{\bf f}}({{\bf y}})=-\frac{{{\bf y}}}{y}\,g_0(y),$$ $$g_0(y)=\left(\frac{\omega_{\rm Mie}}{\omega_l}\right)^2\begin{cases} y-\dfrac{9}{16}\,y^2+\dfrac{1}{32}\,y^4, & 0\le y\le 2,\\[6pt] \dfrac{1}{y^2}, & y\ge 2. \end{cases} \label{g0}$$ Here the dimensionless coordinate ${{\bf y}}$ and force ${{\bf f}}$ are introduced. The equation of motion reads $$\frac{d^2{{\bf y}}}{d\varphi^2}={{\bf f}}({{\bf y}})-\gamma\frac{d{{\bf y}}}{d\varphi}-\frac{{{\bf E}}_l(\varphi)}{\omega_l^2R_0}, \label{NE0}$$ where $\varphi=\omega_l t$, ${{\bf E}}_l(\varphi)$ is the electric field of the laser wave and $\gamma$ is the effective damping constant which can be estimated assuming a collisionless damping mechanism [@korn04; @korn07] (in the calculations we use $\gamma=0.1$, in accordance with such estimates).
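A quick numerical check of the piecewise restoring force is instructive. The sketch below assumes the standard overlapping-uniform-spheres polynomial $y-\tfrac{9}{16}y^2+\tfrac{1}{32}y^4$ inside $y\le 2$ (with the frequency ratio $\omega_{\rm Mie}/\omega_l$ set to unity for simplicity) and verifies that the force and its first derivative are continuous at the matching point $y=2$:

```python
def g0(y, ratio=1.0):
    """Dimensionless RSM restoring force; ratio = omega_Mie / omega_l.
    Uniform-spheres overlap polynomial inside, Coulomb tail outside."""
    if y <= 2.0:
        return ratio**2 * (y - 9.0 / 16.0 * y**2 + y**4 / 32.0)
    return ratio**2 / y**2

# Continuity of the force and of its slope at the matching point y = 2
eps = 1e-7
inside, outside = g0(2.0 - eps), g0(2.0 + eps)
slope_in = (g0(2.0) - g0(2.0 - eps)) / eps
slope_out = (g0(2.0 + 2 * eps) - g0(2.0 + eps)) / eps
```

Both one-sided values equal $1/4$ at $y=2$ and both one-sided slopes equal $-1/4$, so the model force is smooth where the two spheres separate.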
Using the RSM, significant insights into the absorption mechanisms in clusters [@kundu06] and TH emission [@fom03] have been gained. However, an essential shortcoming of the RSM in the form specified by Eqs. (\[rho0\]),(\[g0\]) is the appearance of even powers of $y$ in (\[g0\]) at $y\le 2$. Physically it is the result of the discontinuous charge distributions (\[rho0\]) at $r=R_0$. As a consequence, in a perturbative treatment of harmonic generation the intensity of the $s$-th harmonic is proportional to $E_0^{2s-2}$ instead of $E_0^{2s}$. For example, for the third harmonic one obtains with the RSM $P_{3\omega}\sim E_0^4$, which is obviously unphysical [@fom03]. To improve the model it is sufficient to assume a smoothed charge distribution for the electrons, while the positive charge density still may be described by (\[rho0\]). We use a Gaussian of characteristic width $R_e$ for the electron charge density, $$\rho_e(r)=-\overline{z}n_0\exp(-r^2/R_e^2), \label{rho1}$$ and assume that the net charge density is zero at the cluster center. From (\[rho1\]) the total number of electrons is $N_e=\pi^{3/2}\overline{z}n_0R_e^3$. The restriction $N_e\le\overline{z}N$ gives $R_e\le (4/3\sqrt{\pi})^{1/3}R_0\approx 0.91R_0$. The equation of motion has now the same form (\[NE0\]) but with a modified restoring force: $$g_0(y)\to g_1(y)=\frac{\omega_{\rm Mie}^2}{\omega_l^2\,y^2}\left\{\frac{1}{2}\Big\lbrack(y^3+1)\,{\rm erf}[a(y+1)]-(y^3-1)\,{\rm erf}[a(y-1)]\Big\rbrack\right.$$ $$\left.+\frac{1}{4\sqrt{\pi}a^3}\Big\lbrack e^{-a^2(y-1)^2}[1-2a^2(1+y+y^2)]-e^{-a^2(y+1)^2}[1-2a^2(1-y+y^2)]\Big\rbrack\right\}, \label{g1}$$ where $a=R_0/R_e>1.1$ and ${\rm erf}(x)=2/\sqrt{\pi}\int_0^x\exp(-z^2)dz$ is the error function. Contrary to Eq. (\[g0\]), a decomposition of the restoring force (\[g1\]) contains only odd powers of $y$, while it has the same asymptotic Coulomb behavior for large displacements. The respective asymptotic expansions have the form \[compare with (\[g0\])\] $$g_1(y)=\left(\frac{\omega_{\rm Mie}}{\omega_l}\right)^2\begin{cases} A_1y+A_3y^3+\dots, & y\ll 1,\\[4pt] \dfrac{1}{y^2}+O(e^{-a^2y^2}), & y\gg 1, \end{cases} \label{g1as}$$ with $A_1={\rm erf}(a)-(2a/\sqrt{\pi})\exp(-a^2)>0$ and $A_3=-(4a^5/5\sqrt{\pi})\exp(-a^2)$. It is clear that $A_1\to 1$ if $a\to\infty$, i.e., when the tail of the electron density distribution does not stick out of the ion core. In this case the eigenfrequency of small oscillations $\sqrt{A_1}\omega_{\rm Mie}$ is equal to $\omega_{\rm Mie}$. For a finite $a$ the spread of the electron cloud beyond the ion core reduces the eigenfrequency.
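The smoothed restoring force is straightforward to evaluate. The sketch below implements it (frequency ratio $\omega_{\rm Mie}/\omega_l$ again set to unity) and checks both limits of the asymptotic expansion: the linear coefficient $A_1$ at small displacement and the Coulomb tail $1/y^2$ at large displacement:

```python
import math

def g1(y, a=2.0, ratio=1.0):
    """Restoring force for a Gaussian electron cloud in a uniform ion sphere.
    a = R0/Re (> 1.1), ratio = omega_Mie/omega_l."""
    erf, exp = math.erf, math.exp
    bracket = 0.5 * ((y**3 + 1) * erf(a * (y + 1)) - (y**3 - 1) * erf(a * (y - 1)))
    gauss = (exp(-a**2 * (y - 1) ** 2) * (1 - 2 * a**2 * (1 + y + y**2))
             - exp(-a**2 * (y + 1) ** 2) * (1 - 2 * a**2 * (1 - y + y**2)))
    return ratio**2 / y**2 * (bracket + gauss / (4 * math.sqrt(math.pi) * a**3))

# Small-oscillation coefficient A1 = erf(a) - (2a/sqrt(pi)) exp(-a^2) for a = 2
A1 = math.erf(2.0) - (4.0 / math.sqrt(math.pi)) * math.exp(-4.0)
```

At $a=2$ one finds $A_1\approx 0.954$, i.e., the spill-out of the Gaussian tail lowers the small-oscillation eigenfrequency by a few percent, in line with the statement above.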
Note that within the model described above the restoring force is nonlinear because a part of the electron cloud spreads out of the ion core. In this case the leading term in the nonlinear part of the force can be estimated as $\omega_{\rm Mie}^2{{\bf X}}X^2/R_0R_e$. As a result, the third-harmonic signal scales as $R_0^4$ with the cluster radius, in contrast to the standard $R_0^6$ law expected for a volume bulk effect. This shows that the above-discussed low-order harmonic emission from the nanoplasma is a [*surface*]{} effect which becomes relatively less pronounced with increasing cluster size. In small cold metal clusters subject to infrared laser pulses of moderate intensity the $R_0^4$ dependence was observed in [@lippitz], though the experiment was not entirely conclusive (see also [@zar06] for the respective theoretical analysis).
Figure \[fig2\] shows harmonic spectra $P(\omega)$ calculated from the above-described RSM. Within the model, the dynamics and radiation of the electron cloud are governed mainly by the values of the two parameters $\omega_{\rm Mie}/\omega_l$ and $E_0/\omega_l^2R_0$, determining the possibility of resonant enhancements for particular harmonics and the number of excited harmonics in the spectrum. The eigenfrequency of the electron cloud $\omega_{\rm eff}$ depends upon the amplitude of the oscillations (the maximum value ${\rm max}[\omega_{\rm eff}]=\sqrt{A_1}\omega_{\rm Mie}$ corresponds to small harmonic oscillations) and, therefore, upon the laser intensity. As a result, at fixed cluster parameters the resonant enhancements appear at certain values of the intensity. For the spectra of Fig.\[fig2\](a,d) and (c,f) the parameters were chosen such that the maximum value of the eigenfrequency exceeds the integers 3 and 5, respectively, so that, with increasing intensity resonant enhancements of the third and the fifth harmonic appear. These enhancements are clearly seen from the comparison of these spectra with the ones calculated for the case shown in Fig.\[fig2\](b,e) when the maximum eigenfrequency ${\rm max}[\omega_{\rm eff}]\approx 4.4$ is far from both resonances. In panels (a-c) the intensities are chosen such that the sphere’s oscillation amplitudes are almost the same in each case so that the relative difference in the strength of the third and the fifth harmonic (about three orders in magnitude) shows the efficiency of the resonant enhancement. For the higher intensities in panels (d-f) these enhancements are even more pronounced.
A comparison between the RSM and the PIC results shows that for the low-order harmonics the RSM provides a qualitatively correct description of the spectrum, including the effect of resonant enhancement. This gives an additional justification of the RSM for applications to laser-driven nanoplasmas. However, for relatively high harmonics (ninth or higher) the results predicted by the RSM appear to be qualitatively wrong. Indeed, the RSM predicts the appearance of higher harmonics with increasing laser intensity, as seen in Fig. \[fig2\](f) where the spectrum ends at the 13th harmonic. By a moderate variation of the parameters inherent to the model, namely $\omega_{\rm Mie}/\omega_l$ and $a$, one may obtain even higher harmonics for the same intensities used for the spectra of Fig. \[fig2\]. As expected for a classical system, no signature of a plateau emerges in the spectra. However, the PIC simulations reported in [@kundu07] do not show any harmonics above the seventh within the studied domain of parameters, including the very high intensity of $7.5\times 10^{17}$W/cm$^2$. The same effect was found in [@antonsen05], where no harmonics above the ninth have been observed for intensities up to $10^{17}$W/cm$^2$. In Ref. [@kundu07] we concluded from the inspection of individual PIC electron trajectories that a dynamical instability induced by the resonant interaction of electrons with the time-dependent self-consistent field is responsible for the suppression of high harmonics. Obviously, such a mechanism is beyond the RSM since the latter accounts for collective electron dynamics only. In the next section we introduce a single-particle model suited for the analysis of the motion and the radiation of individual electrons, so that the instability responsible for the suppression of high harmonics from cluster nanoplasmas under the above-described conditions can be studied.
Suppression of high harmonics via stochastic behavior of individual electrons {#sohhvsboie}
=============================================================================
It is known from numerical studies (see examples in [@fomytski04; @antonsen05; @last; @brabec]) that in a laser-driven cluster the electron population separates into a dense core with a radius comparable to the initial cluster radius $R_0$ and a rarefied halo with a typical size of several $R_0$. This subdivision is equivalent to a separation of quasifree electrons into deeply and weakly bound electrons, correspondingly. A decrease of the electron density due to outer ionization is compensated by the inner ionization of atoms, provided inner ionization is not yet depleted, so that the cycle-averaged electron density distributions both in the core and the halo evolve rather slowly in time. The core oscillates with relatively small deformations so that the RSM seems to be applicable to the deeply bound electron’s dynamics while electron trajectories in the halo are strongly disturbed by the laser field and cannot be captured by the RSM.
In Ref. [@kundu07] the radiation of individual PIC electrons has been considered (see Fig. 3 there). It was shown that electrons radiate harmonics as long as they move inside the dense core. Once liberated from the core, electrons leave the cluster vicinity almost promptly, usually within a laser period. This means that the halo contains basically no permanent population, but consists almost entirely of electrons on their way out of the cluster. Hence the electron density in the halo determines the rate of outer ionization from the cluster. During the ejection, each electron emits an intense flash of radiation with an almost continuous spectrum that extends up to significantly higher frequencies than present in the net harmonic spectrum. In Ref. [@kundu07] we argued that these flashes add up incoherently in the total emission amplitude. Here we introduce a simple analytical 1D model which helps to illuminate the physical origin of this incoherency. Despite its simplicity, our model is able to describe, at least qualitatively, all essential features seen in the simulations.
Model
-----
Let us suppose frozen ions and a fixed number of nanoplasma electrons, so that in the absence of the laser field each electron moves in the time-independent self-consistent potential $U(x)$. The laser field excites oscillations of the electron cloud which induce the ac part of the space charge field $E^{(I)}_{sc}(t)$. The net oscillating field inside the system can be written as $${\cal E}(t)=E_l(t)+E^{(I)}_{sc}(t)\equiv{\cal E}_0f(t)\cos(\omega_l t+\alpha), \label{field}$$ where $f(t)$ is the time-dependent pulse envelope. If the Mie frequency notably exceeds the laser frequency, $\omega_{\rm Mie}>\omega_l$, as is the case for infrared lasers, the laser field and the ac space-charge field $E^{(I)}_{sc}(t)$ essentially compensate each other. In this case the amplitude of the net oscillating field inside the cluster, ${\cal E}_0$, is related to the amplitude of the laser field $E_0$ according to $${\cal E}_0\approx\frac{\omega_l^2}{\omega_{\rm Mie}^2}\,E_0. \label{E0}$$ This result is commonly referred to as [*screening*]{} of a low-frequency laser field inside clusters (see, e.g., the review [@krainov02]). There is no contradiction between this screening of the laser field for individual electrons and the fact that the whole electron cloud feels the unscreened field. Indeed, in the RSM there are two forces acting on the electron cloud: one due to the interaction with the ion core, the other due to the laser force, see Eq. (\[NE0\]). In the single-electron description we should also take into account the interaction between the electron under consideration and all other electrons in the cloud. Within the model we assume that the electron cloud undergoes small, slightly nonlinear oscillations, so that this extra force is almost homogeneous inside the cluster and oscillates in time with the frequency $\omega_l$.
Within the RSM and under the condition $\omega_l\ll\omega_{\rm Mie}$ we assume throughout the paper that the electron cloud displacement ${{\bf X}}(t)$ reads $${{\bf X}}(t)\approx-\frac{eE_0}{m(\omega_{\rm Mie}^2-\omega_l^2)}f(t)\cos(\omega_lt+\alpha).$$ Calculating the electric field induced inside the cluster due to this displacement and summing it up with the laser field (\[field\]), one obtains the estimate (\[E0\]) for the amplitude of the net oscillating field. This type of screening results from the coherent superposition of the applied and the self-consistent field and has nothing in common with the damping of electromagnetic waves in macroplasmas. The latter occurs on the spatial scale of the skin depth, which is in general much bigger than the typical cluster size we consider.
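Spelled out, the screening estimate follows in two lines (a sketch under the stated assumption that the homogeneously displaced cloud is driven by the unscreened laser field):

```latex
% ac space-charge field of the displaced cloud, and the net field inside:
E^{(I)}_{sc}(t)=\frac{m\omega_{\rm Mie}^2}{e}\,X(t)
  =-\frac{\omega_{\rm Mie}^2}{\omega_{\rm Mie}^2-\omega_l^2}\,E_l(t),
\qquad
{\cal E}(t)=E_l(t)+E^{(I)}_{sc}(t)
  =-\frac{\omega_l^2}{\omega_{\rm Mie}^2-\omega_l^2}\,E_l(t),
% so that for omega_Mie >> omega_l:
\vert{\cal E}_0\vert\approx\left(\frac{\omega_l}{\omega_{\rm Mie}}\right)^2 E_0 .
```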
Taking a cluster consisting of $N\approx 1.7\times 10^4$ Ar atoms ($R_0\approx 6.2$nm) with the average ion charge ${\overline z}\approx 6$ and the degree of outer ionization $\eta\approx 0.5$, one can estimate $\hbar\omega_{\rm Mie}\approx 6$eV. In a Ti:Sa laser pulse of intensity $5\times 10^{17}$W/cm$^2$ ($E_0\approx 3$a.u.) the amplitude of the oscillating field inside the cluster is according to (\[E0\]) ${\cal E}_0\approx 0.2$a.u., i.e., more than one order of magnitude below the amplitude of the applied laser field. The quasistatic part of the space-charge field $E_{sc}^{(II)}=E_{sc}-E_{sc}^{(I)}$ which traps electrons within the ion core can also be estimated for the assumed values of $\eta$ and ${\overline z}$. Namely, the field near the cluster edge is $E^{(II)}_{sc}\approx\eta N{\overline z}e/R_0^2\approx 3$a.u., i.e., more than one order of magnitude above the oscillating field amplitude. From this estimate we conclude that the oscillating field inside the cluster usually remains small compared to the quasistatic space-charge field.
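The numbers quoted above are reproduced by a few lines of arithmetic (atomic units throughout; the conversion factors are standard, all cluster parameters are taken from the text):

```python
import math

EV_TO_AU = 1.0 / 27.2114      # Hartree energy in eV
NM_TO_AU = 1.0 / 0.0529177    # Bohr radius in nm

omega_mie = 6.0 * EV_TO_AU    # hbar*omega_Mie ~ 6 eV (estimate from the text)
omega_l = 1.55 * EV_TO_AU     # Ti:Sa photon energy
E0 = 3.0                      # laser field amplitude in a.u. (5e17 W/cm^2)

# Screened oscillating field inside the cluster
E_osc = (omega_l / omega_mie) ** 2 * E0

# Quasistatic space-charge field near the cluster edge: eta*N*zbar*e/R0^2
eta, N, zbar = 0.5, 1.7e4, 6.0
R0 = 6.2 * NM_TO_AU
E_static = eta * N * zbar / R0**2
```

One obtains `E_osc` of about $0.2$ a.u. and `E_static` of a few atomic units, confirming that the oscillating field stays more than an order of magnitude below the quasistatic trapping field.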
Within the model the electron’s evolution is governed by the Hamiltonian $$H(p,x,t)=\frac{p^2}{2m}+U(x)-e{\cal E}(t)x\equiv H_0(p,x)-e{\cal E}(t)x \label{H}$$ and the corresponding Newton equation $$\dot{p}=m\ddot{x}=-\frac{dU}{dx}+e{\cal E}(t)\equiv-eE^{(II)}_{sc}(x)+e{\cal E}(t), \label{NE}$$ where $m$ and $e$ are the electron mass and charge. The well $U(x)$ is created by the quasistatic part of the space charge. We model it by the function $$U(x)=U_0\left[1-\frac{1}{\sqrt{1+x^2/R_0^2}}\right], \label{c}$$ where the values $R_0$ and $U_0$ are the cluster radius and the depth of the self-consistent well, respectively. Here we choose the energy minimum at $\epsilon=0$, so that $\epsilon=U_0$ is the continuum threshold. For small excursion amplitudes this well describes a nonlinear oscillator, while for large excursions it has the desired Coulomb behavior. According to the estimates given above we assume the inequality $$\mu\equiv\frac{e{\cal E}_0}{{\cal F}_0}\ll 1,\qquad {\cal F}_0=\frac{U_0}{R_0} \label{mu}$$ to be satisfied, where ${\cal F}_0$ has the meaning of a characteristic quasistatic force trapping the electron.
Dynamics
--------
The dynamics of the unperturbed system with the Hamiltonian $H_0$ is characterized by the energy dependence of the eigenfrequency $$\Omega(\epsilon)=\frac{2\pi}{T(\epsilon)},\qquad T(\epsilon)=\sqrt{2m}\int_a^b\frac{dx}{\sqrt{\epsilon-U(x)}}, \label{T}$$ where $T(\epsilon)$ is the oscillation period, $a(\epsilon)$, $b(\epsilon)$ are the turning points, and $\epsilon>0$ is the total energy [@goldstein]. The energy-dependent parameter $$\beta(\epsilon)=\frac{\epsilon}{\Omega}\frac{d\Omega}{d\epsilon} \label{beta}$$ characterizes the nonlinearity of the unperturbed system and thus its potential capability to emit harmonics. Figure \[fig3\] shows the energy dependence of the scaled eigenfrequency $\Omega(\epsilon)/\Omega(0)$ and the parameter (\[beta\]). In cluster potentials the period $T(\epsilon)$ increases with increasing energy, so that $d\Omega/d\epsilon<0$, and the resonances with the driving field, $m\Omega(\epsilon_m)=\omega_l$, occur at a discrete set of energies $\epsilon_m$ with the corresponding actions $I_m$. Passing to action-angle variables $(I,\theta)$ and averaging over the fast motion near the first-order resonance, one arrives at the standard pendulum-like resonance Hamiltonian [@chirikov] $$\tilde{H}(P,\psi)=\frac{\Omega_1^{\prime}}{2}P^2+B\cos\psi,\qquad P=I-I_1,\quad \psi=\theta-\omega_lt+\alpha,\quad B=e{\cal E}_0x_1, \label{Hfinal}$$ with $\Omega_1^{\prime}=d\Omega/d\epsilon|_{\epsilon=\epsilon_1}$ and $x_1$ the amplitude of the first Fourier harmonic of the unperturbed trajectory with the energy $\epsilon_1$. The electron behavior can thus be qualitatively described as nonlinear oscillations in $(P,\psi)$ space, known as [*phase oscillations*]{} [@liber; @sagdeev; @chirikov]. In the new canonical variables, which have the formal status of momentum $(P)$ and coordinate $(\psi)$, the phase space of the Hamiltonian (\[Hfinal\]) splits into domains of finite and infinite motion (see Fig. \[fig4\]). Finite motion corresponds to a particle trapped by the resonance, while particles moving infinitely do not intersect with the resonance.
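The eigenfrequency for the model well (\[c\]) is easily obtained by quadrature. The sketch below (with the dimensionless parameters used later in this section, $m=\omega_l=R_0=1$, $U_0=5$; the substitution $x=b\sin\theta$ removes the turning-point singularity) reproduces the monotonic decrease of $\Omega(\epsilon)$ and locates the first-order resonance near $\epsilon_1\approx 0.48\,U_0$:

```python
import math

U0, R0, m = 5.0, 1.0, 1.0

def U(x):
    """Soft-core self-consistent well."""
    return U0 * (1.0 - 1.0 / math.sqrt(1.0 + (x / R0) ** 2))

def Omega(eps, n=4000):
    """Omega = 2*pi/T with T = sqrt(2m) * int_{-b}^{b} dx / sqrt(eps - U(x)).
    Midpoint rule in theta after x = b*sin(theta) (symmetric well, a = -b)."""
    b = R0 * math.sqrt((U0 / (U0 - eps)) ** 2 - 1.0)   # turning point U(b) = eps
    half = 0.0
    for k in range(n):
        theta = (k + 0.5) * (math.pi / 2) / n
        x = b * math.sin(theta)
        half += b * math.cos(theta) / math.sqrt(eps - U(x))
    T = 2.0 * math.sqrt(2.0 * m) * half * (math.pi / 2) / n
    return 2.0 * math.pi / T
```

In the small-amplitude limit $\Omega\to\sqrt{U_0}/R_0\approx 2.24$, and $\Omega(\epsilon)$ falls through $\omega_l=1$ near $\epsilon\approx 0.48\,U_0$, in agreement with the resonance position quoted below.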
The separatrix of (\[Hfinal\]) is the boundary between these two domains. The motion near separatrices is unstable, so that even a small variation in the initial conditions may entirely change a trajectory. As a result, particles approaching a separatrix may penetrate from one domain to another or may be trapped by a resonance. The parameters characterizing the motion of trapped particles are the maximum deviation from the resonant action $I_1$ and the frequency of small phase oscillations: $$P_{\rm max}=2\sqrt{\left\vert\frac{B}{\Omega_1^{\prime}}\right\vert},\qquad \Omega_{\rm ph}=\sqrt{\vert B\Omega_1^{\prime}\vert}\sim\sqrt{\mu}\,\omega_l. \label{omph}$$ In energy space the positions of the separatrices of the first-order resonance are determined by $$\epsilon_1^{\pm}=\epsilon(I_1\pm P_{\rm max})\approx\epsilon_1\pm\Delta\epsilon_1,\qquad \Delta\epsilon_1\sim\sqrt{\mu}\,U_0. \label{threshold}$$ If the energy intersects a respective threshold, so that $\vert\epsilon-\epsilon_1\vert\le\Delta\epsilon_1$, the electron becomes trapped by this resonance domain and experiences phase oscillations with the frequency and amplitude both proportional to $\sqrt{\mu}$. Due to the appearance of a new time scale given by the frequency of the phase oscillations (\[omph\]) the motion becomes aperiodic and highly nonlinear. It should be emphasized that the perturbation is characterized by the small parameter $\mu$ far from a resonance but by $\sqrt{\mu}\gg\mu$ close to it, so that in weak fields the near-resonant motion appears to be much more perturbed than the off-resonant one.
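The phase oscillations are just pendulum dynamics. Assuming the standard pendulum form of the resonance Hamiltonian, $\tilde H=\tfrac{1}{2}\Omega_1^{\prime}P^2+B\cos\psi$ [@chirikov], a quick integration confirms the small-amplitude phase-oscillation frequency $\sqrt{\vert B\Omega_1^{\prime}\vert}$ (the values of $B$ and $\Omega_1^{\prime}$ below are purely illustrative, not fitted to cluster parameters):

```python
import math

B, dOmega = 0.5, -0.8   # illustrative resonance parameters (Omega_1' < 0)

def rhs(P, psi):
    """Hamilton's equations for H = 0.5*dOmega*P**2 + B*cos(psi):
    dP/dt = B*sin(psi), dpsi/dt = dOmega*P (stable point at psi = 0)."""
    return B * math.sin(psi), dOmega * P

def small_oscillation_period(psi0=0.05, dt=1e-4, tmax=50.0):
    """Midpoint (RK2) integration from a small displacement about psi = 0;
    the period follows from successive zero crossings of psi."""
    P, psi, t = 0.0, psi0, 0.0
    crossings = []
    while t < tmax and len(crossings) < 3:
        dP1, dpsi1 = rhs(P, psi)
        dP2, dpsi2 = rhs(P + 0.5 * dt * dP1, psi + 0.5 * dt * dpsi1)
        new_psi = psi + dt * dpsi2
        if (psi > 0.0 >= new_psi) or (psi < 0.0 <= new_psi):
            crossings.append(t)
        P, psi, t = P + dt * dP2, new_psi, t + dt
    return 2.0 * (crossings[2] - crossings[1])
```

The measured period agrees with $2\pi/\sqrt{\vert B\Omega_1^{\prime}\vert}$ to well below a percent for small amplitudes.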
The interaction with an isolated resonance cannot lead to ionization since the electron remains trapped for an, in principle, infinite time [@comm3]. However, the higher-order resonances lying above may come into play. As soon as the separatrices of neighboring resonances intersect, the particle, captured by the first resonance, may jump to the third, etc. Because the volume of the accessible phase space is increasing with energy this inter-resonance motion will have a predominant direction, namely towards higher energies. This leads to a fast liberation of the electron from the system, known as [*stochastic ionization*]{} [@chirikov; @sagdeev; @kost05]. For realistic parameters of laser-cluster interaction this overlap of resonances is realized with almost 100% probability so that cases where the particle remains trapped by a resonance are rare while almost prompt ionization occurs as soon as the first-order resonance is reached.
We visualize the above-described scenario by solving Eq. (\[NE\]) numerically. In the calculation we take $\omega_l=R_0=1.0$ and $U_0=5.0$, so that even the amplitude ${\cal E}_0=1.0$ ($\mu=0.2$) still corresponds to the weak-field regime, as defined above. These parameters are not chosen arbitrarily. Indeed, a solution of Eq. (\[NE\]) with the well (\[c\]) and the field (\[field\]) with a slowly varying envelope depends on four dimensionless parameters, $\mu$, ${\cal F}_0/m\omega_l^2 R_0$, $\epsilon_0/U_0$ and $\alpha$, the last two of which define the initial conditions. To recalculate all the parameters for a real system we should assume specific values for the cluster radius and the laser frequency, which then define all other parameters. For typical values, say $R_0=5$nm and $\hbar\omega_l=1.55$eV, and for the dimensionless parameters of Fig. \[fig5\] one may check that the resulting strengths of the quasistatic and the oscillating parts of the self-consistent field indeed correspond to the estimates given below Eq. (\[E0\]). The positions of the most important first- and third-order resonances are $\epsilon_1\approx 0.48U_0$ and $\epsilon_3\approx 0.77U_0$, respectively. A particle with the initial energy $\epsilon_0<\epsilon_1$ starts its motion at $x=0$ and $\varphi\equiv\omega_lt=-40$ when the field ${\cal E}(t)$ is negligibly small. Then the field (with a Gaussian envelope) increases, and the electron propagation under the action of the full force is calculated until $\varphi=+40$. By choosing different phases $\alpha$ of the field we model different initial conditions for the particle at the fixed initial energy $\epsilon_0$. The results are summarized in Fig. \[fig5\].
Figure \[fig5\]a corresponds to the “perturbative” regime of interaction. The initial energy is far enough from the first (lower) resonance, so that the trajectory in energy space does not intersect the respective lower separatrix, or just touches it. As a result, the trajectory remains weakly disturbed, and its shape is well described as a superposition of oscillations with the frequencies $\Omega(\epsilon_0)$ and $\omega_l$. By choosing different initial conditions we obtain trajectories simply shifted in time by the value of $\alpha$. Figures \[fig5\](b,c) correspond to the “resonant” regime of interaction, where the initial energy is high enough or the field is strong enough to cause penetration of the particle into the vicinity of the first resonance. The five trajectories plotted in Figs. \[fig5\](b,c) show that the near-resonant motion is very sensitive both to the initial conditions and to the field amplitude, so that a particular trajectory appears to be unpredictable. Usually the particle is emitted from the system, while in rare cases it remains in a bound state after the pulse is off, being trapped by a resonance (see the trajectories in Fig. \[fig5\](b,c)). From these observations we may conclude that at parameters typical for intense laser-nanoplasma interactions the particle behavior in the “resonant” regime becomes stochastic, as it is expected to be according to the general theory [@liber; @sagdeev; @chirikov].
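This numerical experiment is easy to reproduce in outline. The sketch below integrates the single-particle Newton equation in the soft-core well with a Gaussian-envelope field by a standard RK4 scheme ($\omega_l=R_0=1$, $U_0=5$ follow the text; the envelope width $f=\exp[-(\varphi/15)^2]$ is an assumption of this sketch):

```python
import math

U0, R0, omega = 5.0, 1.0, 1.0

def force(x, t, amp, alpha):
    """-dU/dx plus the oscillating field with envelope f = exp(-(t/15)^2)."""
    dUdx = U0 * x / R0**2 / (1.0 + (x / R0) ** 2) ** 1.5
    field = amp * math.exp(-((t / 15.0) ** 2)) * math.cos(omega * t + alpha)
    return -dUdx + field

def final_energy(eps0, amp, alpha, dt=0.002):
    """RK4 from phase -40 to +40, starting at x = 0 with energy eps0;
    returns the unperturbed energy H0 after the pulse."""
    x, p, t = 0.0, math.sqrt(2.0 * eps0), -40.0
    while t < 40.0:
        k1x, k1p = p, force(x, t, amp, alpha)
        k2x, k2p = p + 0.5 * dt * k1p, force(x + 0.5 * dt * k1x, t + 0.5 * dt, amp, alpha)
        k3x, k3p = p + 0.5 * dt * k2p, force(x + 0.5 * dt * k2x, t + 0.5 * dt, amp, alpha)
        k4x, k4p = p + dt * k3p, force(x + dt * k3x, t + dt, amp, alpha)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        p += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
        t += dt
    return 0.5 * p * p + U0 * (1.0 - 1.0 / math.sqrt(1.0 + (x / R0) ** 2))

# Perturbative regime: energy far below the resonance, weak field
eps_pert = final_energy(eps0=0.5, amp=0.2, alpha=0.0)

# Resonant regime: scan initial phases; outcomes scatter strongly
eps_res = [final_energy(eps0=2.0, amp=1.0, alpha=a) for a in (0.0, 1.0, 2.0, 3.0, 4.0)]
```

In the perturbative case the final energy stays close to its initial value, while in the resonant case the final energies scatter widely with the phase $\alpha$ and typically reach or exceed the continuum threshold $U_0$.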
In classical ionization, a particle has to overcome the potential barrier, i.e., its total energy must exceed the maximum value of the potential energy suppressed by the field at some time instant. Obviously, the trajectories of Fig. \[fig5\](b,c) satisfy this condition, while both trajectories of Fig. \[fig5\]a do not. In Fig. \[fig6\] we show the trajectories in energy space evaluated for half the laser frequency, $\omega_l^{\prime}=\omega_l/2=0.5$, and all other parameters the same as in Fig. \[fig5\]b.
The plots show that with decreasing laser frequency the time-dependent energy gain from the field to the particle decreases down to a perturbative level, the total energy remains always below the maximum of the time-dependent potential energy, and no ionization or excitation occurs. Within the resonant picture exploited here, the qualitative difference between the trajectories of Figs. \[fig5\](b,c) and \[fig6\] appears because with decreasing frequency the first resonant level is shifted up, and no penetration into the resonant area between the separatrices takes place anymore.

It is instructive to show the connection between the single-electron model specified by Eqs. (\[field\]),(\[H\]),(\[NE\]),(\[c\]) and a model which describes collective motion, as the RSM of Sec. \[reoloh\] does. The RSM deals with the electron cloud displacement ${{\bf X}}(t)$ whose Fourier transform is directly related to the spectrum. This quantity can also be calculated within the single-electron picture as $${{\bf X}}(t)=\int d\epsilon_0\int d\alpha\,{{\bf r}}(\epsilon_0,\alpha,t)F(\epsilon_0,\alpha), \label{Xt}$$ where ${{\bf r}}(\epsilon_0,\alpha,t)$ is the individual trajectory with the initial energy $\epsilon_0$ and the initial condition $\alpha$, and $F(\epsilon_0,\alpha)$ is the distribution function for the electrons before the field is on. One should note that, contrary to the RSM, the spatial distribution of the electrons in the presence of the field is not known unless one calculates all individual trajectories. Calculating a trajectory ${{\bf r}}(\epsilon_0,\alpha,t)$ analytically is possible only within perturbation theory with respect to the external field, where one may easily derive (\[NE0\]) with the linear part of the restoring force only from (\[Xt\]). The derivation of nonlinear corrections, although doable, requires very cumbersome algebra.
The single-particle model is appropriate for a qualitative description of the stochastic resonant behavior but hardly applicable to the study of the slightly anharmonic motion of the deeply bound electrons.
Radiation
---------
The analysis of the previous subsection gives a direct explanation of the radiation spectra extracted from the PIC results in Ref. [@kundu07], both for individual electrons and for whole clusters. Deeply bound electrons with energies below the first resonant separatrix, $\epsilon\le\epsilon_1-\Delta\epsilon_1$, move along slightly perturbed regular trajectories. This causes harmonic generation (HG) with a rapidly decreasing yield as a function of the harmonic order, so that even the seventh harmonic is barely present in the corresponding spectrum of Fig. \[fig7\]a. An individual electron, while passing the resonance and being trapped by it or leaving the cluster potential, emits radiation due to its strong acceleration, seen as a flash in the TF spectrograms of Fig. \[fig7\]b,c. These spectrograms should be compared with the ones extracted from our PIC results (see Fig. 3 in [@kundu07]). Exactly because of the stochastic nature of the nonlinear resonance, the electrons’ trajectories are very sensitive to the initial conditions with which the nonlinear resonance is entered, as is clearly seen from Fig. \[fig5\]c where the solid black, dashed black and gray trajectories correspond to the same initial energy and the same field amplitude but different phases of the external field. As a result, flashes from different electrons are incoherent (the corresponding amplitudes have nearly random phases) and, being added up, vanish in the total dipole acceleration.
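The cancellation argument itself is elementary: $N$ equal-amplitude contributions add up to $N$ when phase-locked but only to $O(\sqrt{N})$ with random phases. A short numerical check (uniformly random phases standing in for the nearly random flash phases):

```python
import cmath
import math
import random

random.seed(1)
N = 10000

# Phase-locked ("coherent") flashes: amplitudes add linearly, |sum| = N
coherent = abs(sum(cmath.exp(0j) for _ in range(N)))

# Random-phase ("incoherent") flashes: only an O(sqrt(N)) residue survives
incoherent = abs(sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
                     for _ in range(N)))
```

For $N=10^4$ the incoherent sum is of order $10^2$, i.e., the incoherent flashes are suppressed by roughly $\sqrt{N}$ relative to a hypothetical phase-locked emission.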
![TF diagrams corresponding to three (the black curve of Fig. \[fig5\]a and both curves of Fig. \[fig5\]b) electron trajectories.[]{data-label="fig7"}](Fig7-final.eps){width="40.00000%"}
This shows that exactly the same mechanism behind efficient energy absorption by and outer ionization from clusters, namely nonlinear resonance [@mulser05; @kost05; @antonsen05; @kundu06], restricts HG from them by breaking the coherent electron motion once it becomes strongly anharmonic. Only well-bound electrons trapped inside the ionic core with energies far from the resonance contribute to the net, coherent radiation of the cluster. A similar behavior was observed in classical ensemble simulations of atomic HG [@bandarage].
Instabilities induced by the resonant interaction grow in time (usually with an exponential rate). Thus the picture depends also on the pulse duration. With all other parameters fixed, for longer pulses, more and more trajectories from the vicinity of the separatrix experience a stochastic behavior. As a consequence, an increase of the pulse duration should lead, in general, to a further loss of coherency.
Conclusions
===========
Although laser-irradiated cluster nanoplasmas emit low-order harmonics efficiently, no significant yield of high harmonics can be expected even for very high laser intensities. This is a consequence of dynamical stochasticity, inherent to nonlinear dynamical systems driven by weak, time-dependent forces such as the screened electric field inside a cluster.
Increasing the laser intensity does not help much because the self-consistent field trapping the electrons inside the ion core increases too. As a consequence the electron population always splits into deeply bound electrons and a halo, while the Mie frequency and the screening increase, so that the physical picture remains almost insensitive to the intensity of the applied field.
Another option we did not consider above is to use relatively long pulses, where the first-order resonance with the Mie frequency can be reached because of the ion core expansion. If the Mie resonance is met, the ac electric field inside the cluster is greatly enhanced and nearly all electron trajectories appear to be strongly disturbed. In this case an analysis of near-resonant stochastic behavior as given above is inappropriate. It seems that a direct numerical study is the only option under these conditions. An analysis based on classical Vlasov simulations was performed in Refs. \[14\].
Acknowledgment
==============
We are grateful to W. Becker for valuable discussions. This work was supported by the Deutsche Forschungsgemeinschaft and the Russian Foundation for Basic Research (project No.06-02-04006).
[99]{} U. Saalmann, Ch. Siedschlag and J.M. Rost, J. Phys. B: At. Mol. Opt. Phys. [**39**]{}, R39, (2006).
V.P. Krainov, M.B. Smirnov, Phys. Rep. [**370**]{}, 237 (2002).
T. Ditmire, R.A. Smith, J.W.G. Tisch, and M.H.R. Hutchinson, , 3121 (1997).
F. Greschik, L.Dimou and H.-J. Kull, Laser Part. Beams [**22**]{}, 137 (2004).
I. Last and J. Jortner, J. Chem. Phys. [**120**]{}, 1348 (2004).
D.F. Zaretsky, Ph.A. Korneev, S.V. Popruzhenko and W. Becker, , 4817 (2004); Ph. Korneev, D.F. Zaretsky, S.V. Popruzhenko and W. Becker, Laser Phys. Lett. [**2**]{}, 452 (2005).
T. Taguchi, Th.M. Antonsen, Jr., H.M. Milchberg, , 205003 (2004).
P. Mulser and M. Kanapathipillai, , 063201 (2005); P. Mulser, M. Kanapathipillai, and D.H.H. Hoffmann, , 103401 (2005).
I.Yu. Kostyukov, JETP [**100**]{}, 903 (2005).
M. Kundu and D. Bauer, , 123401 (2006); M. Kundu and D. Bauer, , [**74**]{}, 063202 (2006).
J. Davis, G.M. Petrov and A. Velikovich, Phys. Plasmas [**14**]{}, 060701 (2007).
D.F. Zaretsky, Ph.A. Korneev, S.V. Popruzhenko, Quant. Electr. [**37**]{}, 565 (2007).
Th.M. Antonsen, Phys. Plasmas [**12**]{}, 056703 (2005).
T. Döppner, Th. Fennel, Th. Diederich, J. Tiggesbäumker, and K.H. Meiwes-Broer, , 013401 (2005); T. Döppner, Th. Fennel, P. Radcliffe, J. Tiggesbäumker, and K.H. Meiwes-Broer, , 031202(R) (2006); Th. Fennel, T. Döppner, J. Passig, Ch. Schaal, J. Tiggesbäumker, and K.H. Meiwes-Broer, , 143401 (2007).
P.B. Corkum, , 1994 (1993).
Carla Figueira de Morisson Faria and Jan-Michael Rost, , 051402(R) (2000).
Valérie Véniard, Richard Taïeb, and Alfred Maquet, , 013202 (2001).
B. Dromey, M. Zepf, A. Gopal, [*et al.*]{}, Nature Phys. [**2**]{}, 338 (2006).
C. Thaury, F. Quéré, J.-P. Geindre, [*et al.*]{}, Nature Phys. [**3**]{}, 595 (2007).
S.V. Fomichev, S.V. Popruzhenko, D.F. Zaretsky, and W. Becker, , 3817 (2003).
T. D. Donnelly, T. Ditmire, K. Neuman, M.D. Perry, and R.W. Falcone, , 2472 (1996).
J.W.G. Tisch, T. Ditmire, D.J. Fraser, N. Hay, M.B. Mason, E. Springate, J.P. Marangos, and M.H.R. Hutchinson, , 709, (1997).
C. Vozzi, M. Nisoli, J-P. Caumes, G. Sansone, S. Stagira, S. De Silvestri, M. Vecchiocattivi, D. Bassi, M. Pascolini, L. Poletto, P. Villoresi, and G. Tondello, Appl. Phys. Lett. [**86**]{}, 111121 (2005).
B. Shim, G. Hays, R. Zgadzaj, T. Ditmire, and M. C. Downer, , 123902 (2007).
M.V. Fomyts’kyi, B.N. Breizman, A.V. Arefiev, and C. Chiu, Phys. Plasmas [**11**]{}, 3349 (2004).
S.V. Fomichev, D.F. Zaretsky and W. Becker, , L175 (2004); S.V. Fomichev, D.F. Zaretsky, D. Bauer and W. Becker, , 13201 (2005).
M. Kundu, S.V. Popruzhenko, and D. Bauer, , 033201 (2007).
C. Rose-Petruck, K.J. Schafer, K.R. Wilson, and C.P.J.Barty, , 1182 (1997).
D. Bauer and A. Macchi, , 33201 (2003); D. Bauer, , 3085 (2004).
P.B. Parks, T.E. Cowan, R.B. Stephens, and E.M. Campbell, , 063203 (2001).
M. Lippitz, M.A. van Dijk and M. Orrit, Nano Lett. [**5**]{}, 799 (2005).
S.V. Popruzhenko, D.F. Zaretsky and W. Becker, , 4933 (2006).
C. Jungreuthmayer, M. Giessler, J. Zanghellini and T. Brabec, , 133401 (2004).
L.D. Landau and E.M. Lifshitz, [*Mechanics*]{}, 3rd ed., Butterworth-Heinemann, Oxford, 1976; H. Goldstein, [*Classical Mechanics*]{}, 2nd ed., Addison Wesley, 1980.
R.Z. Sagdeev, D.A. Usikov, G.M. Zaslavsky [*Nonlinear Physics: from the Pendulum to Turbulence and Chaos*]{}, Harwood Academic Publishers, Chur, Switzerland, 1992.
A.J. Lichtenberg and M.A. Lieberman [*Regular and Chaotic Dynamics*]{}, 2nd ed., Applied Mathematical Sciences, Vol. 38, New York, NY: Springer-Verlag, 1992.
G.M. Zaslavsky, B.V. Chirikov, Sov. Phys. Uspekhi [**105**]{}, 3 (1971).
Unless the amplitude of phase oscillations around the first resonance is already enough to lift the electron above the continuum threshold.
G. Bandarage, A. Maquet, and J. Cooper, , 1744 (1990); G. Bandarage, A. Maquet, Th. Ménis, R. Taïeb, V. Véniard, J. Cooper, , 390 (1992).
---
author:
- 'M. Pietrow[^1]'
bibliography:
- 'references.bib'
title: 'Cellular Automaton-Like Model of Arising Physical-Like Properties'
---
#### Abstract
A simple relation of the order of $n$ abstract objects generates an $(n-2)$-dimensional basis of three-dimensional vectors. A cellular automaton-like model of the evolution of this system is postulated. During this evolution, some quantities stabilise with time and form a discrete spectrum of values. The presented model may have some general aspects in common with a cellular automaton representation of a quantum system.
#### Introduction:
Cellular automata (CA) are used to describe the behaviour of systems with a wide range of complexity, from physics to biology [@Wolfram02]. Mostly, the description is functional rather than structural (CA rules describe some aspects of a system at a high structural level without reference to the rules governing the deeper level of subsystems).\
The aim of this presentation is, to some extent, the opposite. One does not require here compatibility of the introduced model with any particular real system. Instead, it was assumed that there exists a set of abstract objects and a general quantity, an order, characterising each member of this set. Based on this, a matrix was constructed which keeps the relations between these objects. The properties of this matrix were examined and some arising similarities to physical properties were brought into focus. Furthermore, the presented model keeps compatibility with CA ideas to some extent and attempts to describe a set of physical objects from the structural point of view.\
Although most of the ideas here are postulated rather than derived from something deeper, it would be promising to consider the Author’s idea of a simple relation between some elementary objects and an introductory model of how physical properties arise from it.
#### Relation matrix ($mrel$):
A fundamental feature of a system of basic objects (thought of here as abstract entities, not physical ones; called here *elementary objects*) is the relation between them. One of the simplest relations seems to be an ’order’ of these objects. For example, for three objects there are $3!$ possible arrangements.\
Now, consider a set of $n$ identical elementary objects. Let us define an $n\times n$ matrix $mrel_{i,j}$ (*relation matrix*), which describes the distance (in the meaning of this order) of the $i$-th relative to the $j$-th object. For example, $mrel_{1,2}$=-1 because the $1^{\textrm{st}}$ object is one step before the $2^{\textrm{nd}}$ one (it precedes object $2$). The $mrel$ for three particles of the order $\{1,2,3\}$ is $$\left(
\begin{array}{ccc}
0 & -1 & -2 \\
1 & 0 & -1 \\
2 & 1 & 0
\end{array}
\right),$$ whereas for the $\{1,3,2\}$ order we have $$\left(
\begin{array}{ccc}
0 & -2 & -1 \\
2 & 0 & 1 \\
1 & -1 & 0
\end{array}
\right).$$
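To make the construction concrete, the position-difference rule can be sketched in a few lines of Python (an illustrative sketch under the convention implied by the examples above; the function name `mrel` is chosen merely to match the text):

```python
import numpy as np

# Build the relation matrix for a given arrangement of n elementary objects,
# using the rule implied by the examples: mrel[i,j] = pos(i) - pos(j),
# where pos(k) is the position of object k in the arrangement.
def mrel(order):
    pos = np.empty(len(order), dtype=int)
    for p, obj in enumerate(order, start=1):
        pos[obj - 1] = p  # object `obj` sits at position p
    return pos[:, None] - pos[None, :]

print(mrel([1, 2, 3]))  # reproduces the first matrix displayed above
print(mrel([1, 3, 2]))  # reproduces the second one
```

Every matrix built this way is antisymmetric, in agreement with the property noted in point 5 below.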
#### The Eigensystem of the $mrel$:
$Mrel$s have interesting properties. Consider a $5\times 5$ $mrel$ for an arrangement $\{1,2,3,4,5\}$ as an example. Its eigenvalues are $$\{\lambda_1=5i\sqrt{2},\lambda_1^{\ast},0,0,0\},$$ whereas the corresponding eigenvectors are $$\begin{aligned}
v_1= & \{\frac{1}{3}(-1+2i\sqrt{2}),\frac{i}{\sqrt{2}},\frac{1}{3}(1+i\sqrt{2}),\frac{1}{6}(4+i\sqrt{2}),1\},\nonumber\\
v_2= & v_1^{\ast},\\
v_3= & \{3,-4,0,0,1\}, v_4=\{2,-3,0,1,0\}, v_5=\{1,-2,1,0,0\},\nonumber\end{aligned}$$ where $^{\ast}$ denotes a complex conjugation.\
The following is an interesting general rule for $mrel$, no matter what its dimension $n$ is: $n-2$ of its eigenvalues are equal to $0$, whereas the corresponding eigenvectors *always have non-zero values in 3 dimensions only*. These vectors span the $(n-2)$-dimensional space $physV$.\
The $physV$ seems to be a promising representation of $n-2$ physical objects in a common physical space, i.e. these $n-2$ eigenvectors can describe the basic objects in three-dimensional sub-spaces of a common space. Let us call these vectors with the zero eigenvalues the *physical vectors*.
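This rule is straightforward to verify numerically. The sketch below (an illustration; for the identity arrangement the position-difference rule gives $mrel_{i,j}=i-j$) checks the quoted eigensystem for $n=5$:

```python
import numpy as np

# mrel for the arrangement {1,2,3,4,5}: mrel[i,j] = i - j
idx = np.arange(1, 6)
M = np.subtract.outer(idx, idx).astype(float)

ev = np.linalg.eigvals(M)
print(np.round(ev, 6))  # two eigenvalues of magnitude 5*sqrt(2), three zeros

# the three quoted physical vectors, each with exactly three non-zero
# components, are annihilated by M
for v in ([3, -4, 0, 0, 1], [2, -3, 0, 1, 0], [1, -2, 1, 0, 0]):
    print(M @ np.array(v, dtype=float))  # zero vector in each case
```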
#### Other properties of $mrel$:
Some other interesting properties of $mrel$ are listed below.
1. Permutation of related elementary objects does not change the eigenvalues of $mrel$, whereas the eigenvectors do change.
2. The physical vectors are independent of $a_0$ in the case of generalisation of the relation definition in $mrel$ as $$\begin{gathered}
\cdots, -1\rightarrow a_0-1, 0\rightarrow a_0, 1\rightarrow a_0+1,\\
2\rightarrow a_0+2, etc..\end{gathered}$$
3. Any $mrel$’s sub-matrix of dimension $n'$ has $n'-2$ physical vectors.
4. $2\times 2$ $mrel$s (for the arrangements $\{1,2\}$ and $\{2,1\}$) have no physical vectors. Normalised eigenvectors of these matrices resemble spin vectors for a spin-$\frac{1}{2}$ particle $$\{-\frac{i}{\sqrt{2}},\frac{1}{\sqrt{2}}\},\quad \{\frac{i}{\sqrt{2}},\frac{1}{\sqrt{2}}\},$$ whereas these $mrel$s are proportional to one of the Pauli matrices, $\sigma_2$: $mrel(\{1,2\})=i\sigma_2$ and $mrel(\{2,1\})=-i\sigma_2$.\
For three elementary objects in the relation, there is one physical vector as an eigenvector[^2]. In this case, all $2\times 2$ sub-matrices of all three-dimensional $mrel$s generated from permutations of the order $\{1,2,3\}$ give the eigenvalues from the set $\{-2i,-i,1,2\}$ and these sub-matrices are a simple combination of the Pauli matrices. For $dim(mrel)>3$ (two or more physical vectors present) the expansion into the Pauli matrices becomes less trivial.\
To generalise, the $2 \times 2$ (sub)-$mrel$s seem to be promising operators for spin description.\
The time evolution of such a system is postulated below.
5. $Mrel$s are antihermitian (antisymmetric). Some sets of $mrel$s form a linearly independent set (for example, a subset of three $mrel$s generated by permutations of elementary objects). According to the general theory [@Antihermitian], they form a Lie algebra of generators related to some unitary matrices. This suggests a possibility of description of quantum-like evolution [@Greiner94] by these matrices.
6. \[AlaMaKota\] Another scheme of time evolution (called a *second kind*) of a system described by $mrel$ could be suggested by the case of 2-dim $mrel$s, which have been linked with spin. Each of the Pauli matrices can be derived from one of them by elementary operations known from linear algebra (switching two lines[^3], multiplying a line by a number). Thus, the evolution of $mrel$ in a general sense could be identified with elementary operations. In general, swapping lines is not equivalent to permutations of the elementary objects.\
In the simplest case, one may consider a $mrel$ at each step in which some two lines could be randomly swapped. However, a more complicated algorithm could be used as a generator of the current $mrel$. A new $mrel$ could be considered as a product of the $mrel$s obtained so far, which could additionally change at some steps by swaps of lines.\
On the other hand, continuing the idea of relations, for a system of three elementary objects as an example, their states $A$, $B$, $C$ are influenced by each state from all these objects in the set. Thus, it could be written $$\begin{aligned}
A\ =\ & m_{1,1}\ A+m_{1,2}\ B+m_{1,3}\ C,\nonumber\\
B\ =\ & m_{2,1}\ A+m_{2,2}\ B+m_{2,3}\ C,\label{eq:IntoSelf}\\
C\ =\ & m_{3,1}\ A+m_{3,2}\ B+m_{3,3}\ C.\nonumber\end{aligned}$$ The matrix $m_{ij}$ here could be identified with $mrel$.\
More generally, for a set of consecutive steps $t$, eq. (\[eq:IntoSelf\]) gives $$\begin{bmatrix}
A\\
B\\
C
\end{bmatrix}=
(mrel)^t\times
\begin{bmatrix}
A\\
B\\
C
\end{bmatrix}.
\label{eq:Evolution}$$ The equation above is, in fact, a requirement to find a vector $[A,B,C]^T$ which is left unchanged by the action of the $mrel^t$ operator. Vectors which solve (\[eq:Evolution\]) have interesting properties.\
As an example, consider the $mrel$ for three elementary objects. Calculate $B$ and $C$ as functions of time (because the rank of any $mrel$ is 2, $B$ and $C$ are $A$–dependent here). These functions are shown in fig. \[fig:Evolution\].
Additionally, the physical vectors *do not* change with steps, whereas the rest of the eigenvector set oscillates within some set of values.\
The non-zero eigenvalues of $mrel^t$ rise logarithmically with steps when the system evolves without swaps of lines inside the matrix in between – fig. \[fig:LogEigenvalues\].
However, when swaps of lines take place, the non-zero eigenvalues rise much faster than logarithmically.\
One may consider the evolution made one step more complicated. If one makes some swaps of lines and *then* solves eq. (\[eq:Evolution\]), the result for $B$ and $C$ will asymptotically approach some value – fig. \[fig:EvolutionFlipped\].
\
On the other hand, if one makes a swap of matrix lines between some steps of evolution and solves eq. (\[eq:Evolution\]) after each step then one observes switches to some other value for some time (fig. \[fig:EvolutionFlippedDiscreteLevels\]). The interesting feature of this evolution is that the spectrum of values is discrete (they form a multiplet).
Generally, the changes of values do not coincide with the moment of the swap of the matrix lines.\
A discrete spectrum of $B(t)$ and $C(t)$ is also obtained when one calculates it for any sub-matrix of a larger $mrel$ under evolution.\
The evolution of a second kind destroys the antisymmetry of a $mrel$ and is thus a considerably different scheme. However, the antisymmetry returns after some swaps.
7. If the evolution consists of swapping lines, the number ($n-2$) of physical vectors does not change. Also, if one considers the evolution (\[eq:Evolution\]) with $mrel^t$, the number of the physical vectors remains constant.
\
It is interesting to consider the physical vectors relating to $mrel$s representing all permutations of $n$ elementary objects. These vectors form sets with non-zero values at different triples of the $n$ positions: $\zeta_1: \{[x, y, z, 0, ...]\}$, $\zeta_2: \{[x, y, 0, z, 0, ...]\}$, $\zeta_3: \{[x, y, 0, 0, z, 0, ...]\}$, etc.. Each $\zeta_i$ points to the same network of points located in the plane $x+y+z=0$ (blue points in fig. \[fig:physVnetwork\]; vectors of any length are possible). The number of points increases with $n$ (all points generated by a smaller set of elementary objects are generated by a larger one, too). Furthermore, any swaps of $mrel$’s lines produce physical vectors which are a subset of the network given by permutations of elementary objects (e.g. red points in fig. \[fig:physVnetwork\]). Moreover, the multiplication of $mrel$ mentioned in eq. (\[eq:Evolution\]) does not give additional points beyond those generated by permutations. Permutations and $mrel$ powering (no matter which is done first) give the points from the regular structure whose initial part is depicted in fig. \[fig:physVnetwork\].\
In fact, eigenvectors of $mrel$s of any length are possible. If one restricts to normalised vectors only, the set of possible points forms part of a circle centred at $(0, 0, 0)$ with radius 1 and normal vector pointing in the direction of $[1, 1, 1]$ (blue points in fig. \[fig:physV\_norm\]).
\
Let us follow the position of the points described by $\zeta_1$ at each step of the evolution consisting of randomly swapping lines or powering the matrix. If $n>3$, then the position of the point can change randomly from step to step along a semi-circle of possible points. However, if $n=3$ (there is only one physical vector), only jumps between the points given in red in fig. \[fig:physV\_norm\] are possible.\
To generalise, the physical vectors point to a net of places in a three-dimensional sub-space for each of the $n-2$ objects. The structure of the network (positions of allowed points) is the same for each of these physical vectors. According to this, each $\zeta_i$ has its own (’internal’) net of possible states. Although each physical vector is represented in its own subspace, in this model the coordinates of each possible point obey the equation $x+y+z'=0$, where the $x$- and $y$-coordinates can be regarded as common ones whereas the $z'$-coordinate is set individually for each vector.\
Further work by the Author will be devoted to checking whether the jumps through the network (for the one-particle case, in particular) could describe a space-time motion of elementary objects in some way.
The evolution described in point \[AlaMaKota\] above resembles, in general, the rules obeyed by CA [@Wolfram02]. Its algorithm is an application of a simple rule (\[eq:Evolution\]) at each step (however, when swaps of matrix lines take place, randomness of choice is added as a generalisation of CA rules). The analogue of cells in a CA would be matrix elements (or lines) here. Each matrix element changes by application of a rule that requires other elements (not only neighbouring ones). Additionally, in both cases, the $mrel$ evolution and the CA, some values can form a complex pattern of changes in ’time’. Such behaviour is maintained by the non-zero eigenvalues of $mrel$s (fig. \[fig:Evolution6\_FlipsRandomLambda\]).
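The invariance of the number of physical vectors claimed in point 7 can also be checked numerically. The sketch below (assuming, as an illustration, the identity-order $mrel$ built from position differences) verifies that both powering and random swaps of lines preserve the count $n-2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
idx = np.arange(1, n + 1)
M0 = np.subtract.outer(idx, idx).astype(float)  # mrel for {1,...,6}

def n_physical(A):
    # number of physical vectors = dimension of the null space
    return A.shape[0] - np.linalg.matrix_rank(A)

assert n_physical(M0) == n - 2

# powering, as in the evolution (eq:Evolution), preserves the count
for t in (2, 3, 4):
    assert n_physical(np.linalg.matrix_power(M0, t)) == n - 2

# random swaps of lines (second-kind evolution steps) preserve it as well,
# since elementary row operations do not change the rank
M = M0.copy()
for _ in range(20):
    i, j = rng.choice(n, size=2, replace=False)
    M[[i, j]] = M[[j, i]]
assert n_physical(M) == n - 2
print("physical vectors throughout:", n - 2)
```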
#### Conclusions:
This paper presents a collection of statements and hypotheses concerning a relation between basic physical properties, e.g. the number of dimensions of the space containing physical objects or an evolution process, and the properties of relation matrices, some characteristics of which have been investigated. From the point of view of the model presented above, the $mrel$ resembles operators in quantum mechanics. Possibly, a permutation group would help to find a link. An interesting consequence would be that the spin-like vector may originate from two-dimensional $mrel$ eigensystems, which differ only in dimensionality from the three-dimensional physical vectors originating from larger $mrel$s.\
The statements do not form a consistent view of linked concepts, but the Author’s hope is that the interesting properties of $mrel$s do reveal a structure resembling a CA with quantum-like properties and could be developed into a useful description of physical many-body systems.
[^1]: e-mail: [email protected]
[^2]: When rearrangement of the elementary objects takes place the components of this vector $$\frac{1}{\sqrt{6}}\ \{1,-2,1\}$$ interchange.
[^3]: rows or columns, optionally
---
abstract: 'The sky-averaged (global) 21-cm signal is a powerful probe of the intergalactic medium (IGM) prior to the completion of reionization. However, it has so far been unclear whether, even in the best-case scenario in which the signal is accurately extracted from the foregrounds, it will provide more than crude estimates of when the Universe’s first stars and black holes form. In contrast to previous work, which has focused on predicting the 21-cm signatures of the first luminous objects, we investigate an arbitrary realization of the signal, and attempt to translate its features to the physical properties of the IGM. Within a simplified global framework, the 21-cm signal yields quantitative constraints on the $\Lya$ background intensity, net heat deposition, ionized fraction, and their time derivatives, without invoking models for the astrophysical sources themselves. The 21-cm absorption signal is most easily interpreted, setting strong limits on the heating rate density of the Universe with a measurement of its redshift alone, independent of the ionization history or details of the $\Lya$ background evolution. In a companion paper we extend these results, focusing on the confidence with which one can infer source emissivities from IGM properties.'
author:
- 'Jordan Mirocha$^{\dagger}$, Geraint J.A. Harker, Jack O. Burns'
bibliography:
- 'references.bib'
title: |
Interpreting the Global 21-cm Signal from High Redshifts. I.\
Model Independent Constraints
---
INTRODUCTION {#sec:Introduction}
============
Nearly all of our knowledge about the early Universe comes from the observable signatures of two phase transitions: the Cosmic Microwave Background (CMB), a byproduct of cosmological recombination at $z \sim$ 1100 [@Spergel2003; @Komatsu2011], and Gunn-Peterson troughs in the spectra of high-$z$ quasars [@Gunn1965], a sign that cosmological reionization is complete by $z
\gtrsim 6$. The intervening $\sim$Gyr, in which the first stars, black holes, and galaxies form, is very poorly understood.
Observations with *The Hubble Space Telescope* (HST) have begun to directly constrain galaxies well into the Epoch of Reionization (EoR) at redshifts possibly as high as $z \sim 10$ [e.g. @Oesch2010; @Bouwens2011; @Oesch2012; @Zheng2012; @Coe2013; @Ellis2013], and upcoming facilities such as *The James Webb Space Telescope* (JWST) promise to extend this view even further, likely to $z
\gtrsim 10-15$ [e.g. @Johnson2009; @Zackrisson2012]. However, directly observing luminous sources at high-$z$ is not equivalent to constraining their impact on the intergalactic medium (IGM) [@Pritchard2007], be it in the form of ionization, heating, or more subtle radiative processes (e.g. the Wouthuysen-Field effect). The most promising probe of the IGM in the pre-reionization epoch is the redshifted 21-cm signal from neutral hydrogen. Its evolution over cosmic time encodes the history of heating, ionization, and $\Lya$ emission, meaning in principle it is a probe of the background intensity at photon energies ranging from the $\Lya$ resonance to hard X-rays [for a recent review, see @FurlanettoOhBriggs2006].
At stake in the quest to observe the Universe in its infancy is an understanding of galaxy formation, which currently rests upon a theoretically reasonable but virtually unconstrained foundation. The first stars are expected to be very massive [$M \gtrsim 100\ \Msun$; e.g. @Haiman1996; @Tegmark1997; @Bromm1999; @Abel2002b] resulting in surface temperatures of order $10^5$ K [@Tumlinson2000; @Bromm2001; @Schaerer2002], though evidence for such objects is currently limited to abundance patterns in low-mass stars in the Milky Way [e.g. @Salvadori2007; @Rollinde2009]. Whether or not such massive stars ever form is a vital piece of the galaxy formation puzzle, as their brief existence is expected to dramatically alter the physical conditions for subsequent star formation: first, through an intense soft-UV radiation field, which photo-ionizes (dissociates) atomic (molecular) hydrogen, and presumably via metal enrichment and thermal feedback following a supernovae explosion [see review by @Bromm2009].
Even if the first stars are $\sim 100 \ \Msun$ and leave behind remnant black holes of comparable mass, it is difficult to reconcile the existence of $z
\gtrsim 6-7$ quasars [@Fan2006; @Mortlock2011], whose luminosities imply accretion onto super-massive black holes (SMBHs) with masses $\Mbh \gtrsim
10^9 \ \Msun$, with models of growth via Eddington-limited accretion. The difficulty of growing SMBHs from modest seeds has inspired direct-collapse models [@Begelman2006; @Begelman2008], which predict the formation of BHs with $\Mbh \gtrsim 10^3 \ \Msun$ in massive, atomic-cooling dark matter halos via dynamical instabilities. These models alleviate the requirement of continual Eddington-limited accretion throughout the reionization epoch, but remain unconstrained.
*JWST* may be able to detect clusters of PopIII stars at $2 \lesssim z
\lesssim 7$ [@Johnson2010], PopIII galaxies and quasistars at $z \sim
10-15$ [@Zackrisson2011; @Johnson2012], and PopIII supernovae at $z \sim
15-20$ [@Whalen2013SNIIn; @Whalen2013PISN], depending on their masses, emission properties, etc. However, the prospects for constraining the *first* generations of stars and black holes via direct detection, which likely form at higher redshifts, are bleak. The prospects for constraining the first stars and black holes *indirectly*, however, are encouraging at low radio frequencies, regardless of their detailed properties.
While the long term goal is to map the 21-cm fluctuations from the ground [a task on the horizon at $z \lesssim 10$; e.g. LOFAR, MWA, PAPER, GMRT, SKA; @Harker2010; @vanHaarlem2013; @Bowman2013; @Parsons2010; @Paciga2013; @Carilli2004; @Mellema2013]) or space [e.g. the Lunar Radio Array (LRA); @Jester2009] using large interferometers, in the near term, the entire $10 \lesssim z \lesssim 40$ window is likely to be accessible only to all-sky 21-cm experiments. Several challenges remain, however, from both observational and theoretical perspectives. The Earth is a sub-optimal platform for observations at the relevant frequencies ($\nu \lesssim 200 \ \mathrm{MHz}$) due to radio-frequency interference (RFI) and ionospheric variability [@Vedantham2013], making the lunar farside a particularly appealing destination for future observatories [e.g. the LRA, *The Dark Ages Radio Explorer* (DARE); @Burns2012]. Some foregrounds cannot be escaped even from the lunar farside (e.g. synchrotron emission from our own galaxy), and must be removed in post-processing using sophisticated fitting algorithms [e.g. @Harker2012; @Liu2013]. To date, ground based 21-cm efforts have largely focused on the end of the EoR ($100 \lesssim [\nu / \mathrm{MHz}]
\lesssim 200$), including lower limits on the duration of reionization [via the single-element EDGES instrument; @Bowman2010], and constraints on the thermal and ionization history with single dish telescopes and multi-element interferometers [e.g. @Paciga2013; @Parsons2013]. Extending this view to “cosmic dawn” requires observations below 100 MHz, a frequency range most easily explored from the radio-quiet, ionosphere-free[^1], lunar farside.
Even if the astrophysical signal is extracted from the foregrounds perfectly, it is not clear that one could glean more than gross estimates of the timing of first star and black hole formation. While simply knowing the redshift at which the first stars and black holes form would be an enormous achievement, ultimately it is their properties that are of interest. Were the Universe’s first stars very massive? Did all SMBHs in the local Universe form via direct collapse at high-$z$? Could the global 21-cm signal alone rule out models for the formation of the first stars and black holes? What if independent measurements from JWST and/or other facilities were available?
Motivated by such questions we turn our attention to the final stage of any 21-cm pipeline: interpreting the measurement. Rather than formulating astrophysical models and studying 21-cm realizations that result, we focus on an *arbitrary* realization of the signal, and attempt to recover the properties of the Universe in which it was observed. We defer a detailed discussion of how these properties of the Universe (e.g. the temperature, ionized fraction, etc.) relate to astrophysical sources to Paper II (Mirocha et al., in prep).
The outline of this paper is as follows. In Section 2, we introduce the physical processes that give rise to the 21-cm signal. In Section 3, we step through the three expected astrophysical features of the signal, focusing on how observational measures translate to physical properties of the Universe. Discussion and conclusions are presented in Sections 4 and 5, respectively.
We adopt a cosmology with $\Omnow=0.272$, $\Obnow=0.044$, $\OLnow=0.728$, and $\Hnow=70.2 \ \mathrm{km} \ \mathrm{s}^{-1} \ \mathrm{Mpc}^{-1}$ throughout.
Formalism {#sec:Formalism}
=========
Magnitude of the 21-cm Signal
-----------------------------
The 21-cm transition results from hyperfine splitting in the $1\mathrm{S}$ ground state of the hydrogen atom when the magnetic moments of the proton and electron flip between aligned (triplet state) and anti-aligned (singlet state). The HI brightness temperature depends sensitively on the “spin temperature,” $\TS$, a 21-cm specific excitation temperature which characterizes the number of hydrogen atoms in the triplet and singlet states, $(n_1/n_0) = (g_1/g_0) \exp(-\Tstar/\TS)$, where $g_1$ and $g_0$ are the degeneracies of the triplet and singlet hyperfine states, respectively, and $\Tstar = 0.068$ K is the temperature corresponding to the energy difference between hyperfine levels.
The redshift evolution of the 21-cm signal, $\dTb(z)$, as measured relative to the CMB, also depends on the mean hydrogen ionized fraction, $\xibar$, and in general on the baryon over-density and proper motions along the line of sight, though the last two effects should be negligible for studies of the all-sky spectrum, leaving [e.g. @FurlanettoOhBriggs2006], $$\dTb \simeq 27 (1 - \xibar) \left(\frac{\Obnow h^2}{0.023} \right) \left(\frac{0.15}{\Omnow h^2} \frac{1 + z}{10} \right)^{1/2} \left(1 - \frac{\Tcmb}{T_{\mathrm{S}}} \right) , \label{eq:dTb}$$ where $h$ is the Hubble parameter today in units of $100 \ \mathrm{km} \
\mathrm{s}^{-1} \ \mathrm{Mpc}^{-1}$, and $\Obnow$ and $\Omnow$ are the fractional contributions of baryons and matter to the critical energy density, respectively.
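For orientation, Equation \[eq:dTb\] is easily evaluated numerically. The following Python sketch adopts the cosmological parameters of this work and assumes $\Tcmbnow = 2.725$ K (a standard value, not quoted above):

```python
import numpy as np

h, Ob, Om = 0.702, 0.044, 0.272  # cosmology adopted in this work
Tg0 = 2.725                      # assumed CMB temperature today [K]

def dTb(z, xHII, TS):
    """Differential brightness temperature [mK], Eq. (eq:dTb)."""
    Tcmb = Tg0 * (1.0 + z)
    return (27.0 * (1.0 - xHII)
            * (Ob * h**2 / 0.023)
            * np.sqrt(0.15 / (Om * h**2) * (1.0 + z) / 10.0)
            * (1.0 - Tcmb / TS))

print(dTb(20.0, 0.0, 10.0))  # TS < Tcmb: strong absorption (negative)
print(dTb(9.0, 0.0, 1e9))    # TS >> Tcmb: saturated emission, ~27 mK
print(dTb(9.0, 1.0, 1e9))    # fully ionized gas: no signal
```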
Whether the signal is seen in emission or absorption against the CMB depends entirely on the spin temperature, which is determined by the strength of collisional coupling and presence of background radiation fields, $$T_S^{-1} \approx \frac{T_{\gamma}^{-1} + x_c T_K^{-1} + x_{\alpha} T_{\alpha}^{-1}}{1 + x_c + x_{\alpha}} \label{eq:SpinTemperature}$$ where $\Tcmb = \Tcmbnow(1+z)$ is the CMB temperature, $\TK$ is the kinetic temperature, and $T_{\alpha} \approx \TK$ is the UV color temperature.
In general, the collisional coupling is a sum over collision partners, $$x_c = \sum_i \frac{n_i \kappa_{10}^i}{A_{10}} \frac{\Tstar}{\Tcmb} \label{eq:xc}$$ where $n_i$ is the number density of species $i$, and $\kappa_{10}^i =
\kappa_{10}^i(\TK)$ is the rate coefficient for spin de-excitation via collisions with species $i$. In a neutral gas, collisional coupling is dominated by hydrogen-hydrogen collisions [@Allison1969; @Zygelman2005; @Sigurdson2006], though hydrogen-electron collisions can become important as the ionized fraction and temperature grow [@FurlanettoFurlanetto2007a]. We neglect collisional coupling due to all other species[^2].
The remaining coupling coefficient, $x_{\alpha}$, characterizes the strength of Wouthuysen-Field coupling [@Wouthuysen1952; @Field1958], $$x_{\alpha} = \frac{S_{\alpha}}{1+z} \frac{\Jhat}{\overline{J}_{\alpha}} \label{eq:Jalpha}$$ where $$\overline{J}_{\alpha} \equiv \frac{16\pi^2 \Tstar e^2 f_{\alpha}}{27 A_{10} \Tcmbnow m_e c} .$$ $\Jhat$ is the angle-averaged intensity of $\Lya$ photons in units of $\intensityunitsnumber$, $S_{\alpha}$ is a correction factor that accounts for variations in the background intensity near line-center [@Chen2004; @FurlanettoPritchard2006; @Hirata2006], $m_e$ and $e$ are the electron mass and charge, respectively, $f_{\alpha}$ is the $\Lya$ oscillator strength, and $A_{10}$ is the Einstein A coefficient for the 21-cm transition.
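Equation \[eq:SpinTemperature\] is a weighted harmonic mean: the spin temperature interpolates between $\Tcmb$ and $\TK$ as the couplings grow. A minimal sketch (with $T_{\alpha} \approx \TK$, and the coupling coefficients treated as given inputs rather than computed from Equations \[eq:xc\]–\[eq:Jalpha\]):

```python
def spin_temperature(Tcmb, TK, x_c, x_alpha, Talpha=None):
    """Eq. (eq:SpinTemperature), with T_alpha ~ T_K unless given."""
    Talpha = TK if Talpha is None else Talpha
    inv_TS = (1.0 / Tcmb + x_c / TK + x_alpha / Talpha) / (1.0 + x_c + x_alpha)
    return 1.0 / inv_TS

Tcmb, TK = 57.2, 10.0  # e.g. z ~ 20, adiabatically cooled neutral gas
print(spin_temperature(Tcmb, TK, 0.0, 0.0))   # no coupling: TS -> Tcmb
print(spin_temperature(Tcmb, TK, 0.0, 1e6))   # strong Ly-a coupling: TS -> TK
print(spin_temperature(Tcmb, TK, 0.01, 0.1))  # weak coupling: in between
```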
Slope of the 21-cm Signal
-------------------------
Models for the global 21-cm signal generally result in a curve with five extrema[^3], three of which are labeled in Figure \[fig:global\_signal\], roughly corresponding to the formation of the first stars (B), black holes (C), and beginning of the EoR (D). Due to the presence of strong [but spectrally smooth in principle; see @Petrovic2011] foregrounds, the “turning points” are likely the only pieces of the signal that can be reliably extracted [e.g. @Pritchard2010a; @Harker2012]. Our primary goal in §\[sec:CriticalPoints\] will be to determine the quantitative physical meaning of each feature in turn.
![An example global 21-cm spectrum (top), its derivative (middle), and corresponding thermal evolution (bottom) for a model in which reionization is driven by PopII stars, and the X-ray emissivity of the Universe is dominated by high-mass X-ray binaries.[]{data-label="fig:global_signal"}](global_signal.eps){width="48.00000%"}
In preparation, we differentiate Equation \[eq:dTb\], $$\begin{aligned}
\frac{d}{d\nu} & \bigg[\dTb \bigg] \simeq 0.1 \left(\frac{1 - \xibar}{0.5}\right) \left(\frac{1 + z}{10}\right)^{3/2} \left\{\left(\frac{\Tcmb}{\TS} \right) \left[1 + \frac{3}{2}\frac{d\log T_S}{d\log t}\right] \right. \nonumber \\
& \left. - \frac{1}{2(1 - \xibar)} \left(1 - \frac{\Tcmb}{\TS} \right) \left[1 - \xibar \left(1 - 3 \frac{d\log \xibar}{d\log t}\right) \right] \right\} \mathrm{mK} \ \mathrm{MHz}^{-1}
\label{eq:dTbdnu} .\end{aligned}$$ making it clear that at an extremum, the following condition must be satisfied: $$\frac{d\log T_S}{d\log t} = \frac{1}{3(1 - \xibar)} \left(\frac{\TS}{\Tcmb} - 1\right)\left[1 - \xibar\left(1 - 3 \frac{d\log \xibar}{d\log t}\right)\right] - \frac{2}{3} \label{eq:TurningPoint}$$ We can obtain a second independent equation for the spin-temperature rate of change by differentiating Equation \[eq:SpinTemperature\], $$\begin{aligned}
\frac{d\log T_S}{d\log t} & = \left[1 + x_{\tot} \left(\frac{\Tcmb}{\TK}\right) \right]^{-1} \left\{\frac{x_{\tot}}{(1 + x_{\tot})} \frac{d\log x_{\tot}}{d\log t} \left[1 - \left(\frac{\Tcmb}{\TK}\right)\right] \right. \nonumber \\
& \left. + x_{\tot} \frac{d\log T_K}{d\log t} \left(\frac{\Tcmb}{T_K}\right) - \frac{2}{3} \right\} \label{eq:dlogTs} .\end{aligned}$$ where $\xtot = x_c + x_{\alpha}$, such that $$\frac{d\log x_{\tot}}{d\log t} = x_{\tot}^{-1} \left[\sum_i x_c^i \frac{d\log x_c^i}{d\log t} + x_{\alpha}\frac{d\log x_{\alpha}}{d\log t} \right] \label{eq:dlogxtot} .$$ Expanding out the derivatives of the coupling terms, we have $$\frac{d\log x_{\alpha}}{d\log t} = \frac{d\log \Jhat}{d\log t} + \frac{d\log S_{\alpha}}{d\log \TK} \frac{d\log \TK}{d\log t} + \frac{2}{3} \label{eq:dlogxa}$$ and $$\frac{d\log x_c^i}{d\log t} = \frac{d\log \kappa_{10}^i}{d\log \TK} \frac{d\log \TK}{d\log t} \pm \frac{d\log x_e}{d\log t} - \frac{4}{3} \label{eq:dlogxc}$$ where the second-to-last term is positive for H-$e^-$ collisions (the coupling scales with the electron density) and negative for H-H collisions (the coupling scales with the neutral density).
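Because this chain of logarithmic derivatives is easy to mis-transcribe, a minimal numerical sketch of Equation \[eq:dlogTs\] is useful (Python; argument names are ours). Its two limiting behaviors provide a check: for $x_{\tot} \rightarrow 0$ the spin temperature tracks the CMB ($d\log \TS/d\log t = -2/3$), while for strong coupling it tracks the gas ($d\log \TS/d\log t \rightarrow d\log \TK/d\log t$):

```python
def dlog_Ts_dlog_t(x_tot, Tcmb_over_Tk, dlog_xtot_dlog_t, dlog_Tk_dlog_t):
    """Logarithmic derivative of the spin temperature, Eq. [eq:dlogTs]."""
    r = Tcmb_over_Tk
    prefactor = 1.0 / (1.0 + x_tot * r)
    coupling_term = x_tot / (1.0 + x_tot) * dlog_xtot_dlog_t * (1.0 - r)
    kinetic_term = x_tot * dlog_Tk_dlog_t * r
    return prefactor * (coupling_term + kinetic_term - 2.0 / 3.0)
```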
As in @Furlanetto2006 and @Pritchard2007, we adopt a two-zone model in which the volume filling fraction of HII regions, $x_i$, is treated separately from the ionization in the bulk IGM, parameterized by $x_e$. The mean ionized fraction is then $\xibar = x_i + (1 - x_i) x_e$. This treatment is motivated[^4] by the fact that $\dTb=0$ in HII regions, thus eliminating the need for a detailed treatment of the temperature and ionization evolution, but beyond HII regions, the gas is warm and only partially ionized (at least at early times) so we must track both the kinetic temperature and electron density in order to compute the spin temperature.
CRITICAL POINTS IN THE 21-CM HISTORY {#sec:CriticalPoints}
====================================
From the equations of §\[sec:Formalism\], it is clear that in general, turning points in the 21-cm signal probe a set of eight quantities, $\boldsymbol{\theta} = \{x_i, x_e, \TK, \Jhat, x_i^{\prime}, x_e^{\prime},
\TK^{\prime}, \Jhat^{\prime}\}$, where primes represent logarithmic time derivatives. Given a perfect measurement of the redshift and brightness temperature, $(z, \dTb)$, at a turning point, the system is severely underdetermined with two equations (Eqs. \[eq:dTb\] and \[eq:TurningPoint\]) and eight unknowns. Without independent measurements of the thermal and/or ionization history and/or $\Lya$ background intensity, no single element of $\boldsymbol{\theta}$ can be constrained unless one or more assumptions are made to reduce the dimensionality of the problem.
The most reasonable assumptions at our disposal are:
1. The volume filling factor of HII regions, $x_i$, and the ionized fraction in the bulk IGM, $x_e$, are both negligible, as are their time derivatives, such that $\xibar = d\log \xibar/d\log t = 0$.
2. There are no heat sources, such that the Universe’s temperature is governed by pure adiabatic cooling after decoupling at $\zdec \simeq 150$ [@Peebles1993], i.e. $d\log \TK / d\log t = -4/3$.
3. $\Lya$ coupling is strong, i.e. $x_{\alpha} \gtrsim 1$, such that $\TS \rightarrow \TK$, and the dependencies on $\Jhat$ no longer need be considered.
These assumptions are expected to be valid at $z \gtrsim$ 10, $z \gtrsim 20$, and $z \lesssim 10$, respectively, according to typical models [e.g. @Furlanetto2006; @Pritchard2010a]. But, since it may be impossible to verify their validity from the 21-cm signal alone, we will take care in the following sections to state explicitly how each assumption affects inferred values of $\boldsymbol{\theta}$. We will now examine each feature of the signal in turn.
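Assumption 2 has a simple closed form used repeatedly below: after decoupling, $\TK(z) = \Tcmbnow (1+z)^2 / (1+\zdec)$. A short sketch (Python; $\Tcmbnow = 2.725$ K and $\zdec = 150$ are assumed values):

```python
T_CMB_0 = 2.725   # K, present-day CMB temperature (assumed value)
Z_DEC = 150.0     # thermal decoupling redshift (assumption 2 above)

def T_cmb(z):
    """CMB temperature at redshift z."""
    return T_CMB_0 * (1.0 + z)

def T_k_adiabatic(z):
    """Kinetic temperature under pure adiabatic cooling after decoupling:
    matches T_cmb at z_dec, then falls as (1+z)^2."""
    return T_CMB_0 * (1.0 + z)**2 / (1.0 + Z_DEC)
```

At $z = 20$, for example, the gas ($\approx 8$ K) is much colder than the CMB ($\approx 57$ K), which is what makes a deep absorption signal possible in the first place.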
Turning Point B: End of the Dark Ages {#sec:B}
-------------------------------------
Prior to the formation of the first stars, the Universe is neutral to a part in $\sim 10^4$ [e.g. `RECFAST`, `HyRec`, `CosmoRec`; @Seager1999; @Seager2000; @AliHamoud2010; @Chluba2011], such that a measurement of $\dTb$ probes $\TS$ directly via Equation \[eq:dTb\], $$\TS \leq \Tcmb \left[1 - \frac{\dTb}{9 \ \mathrm{mK}} (1 + z)^{-1/2} \right]^{-1} \label{eq:TurningPointNeutral_TS}$$ where the $\leq$ symbol accounts for the possibility that $\xibar > 0$ (a non-zero ionized fraction always acts to reduce the amplitude of the signal). For the first generation of objects, we can safely assume $\xibar \ll 1$, and interpret a measurement of the brightness temperature as a proper constraint on $\TS$ (rather than an upper limit). We will relax this requirement in §\[sec:C\].
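A sketch of Equation \[eq:TurningPointNeutral\_TS\] (Python; $\Tcmbnow = 2.725$ K is an assumed value, and the 9 mK normalization is read off the equation above):

```python
import math

def T_spin_max(z, dTb_mK, T_cmb_0=2.725):
    """Upper limit on T_S from Eq. [eq:TurningPointNeutral_TS], valid for a
    neutral IGM; dTb_mK is the measured brightness temperature in mK."""
    T_cmb = T_cmb_0 * (1.0 + z)
    return T_cmb / (1.0 - (dTb_mK / 9.0) / math.sqrt(1.0 + z))
```

For the example turning point B adopted later in the text, $(\zB, \dTb) = (30.2, -4.8 \ \mathrm{mK})$, this gives $\TS \lesssim 78$ K, below the CMB temperature at that redshift, as required for absorption.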
If $\TS$ and $\TK$ are both known, Equation \[eq:SpinTemperature\] yields the total coupling strength, $\xtot$. But, the contribution from collisional coupling is known as a function of redshift for a neutral adiabatically-cooling gas, and can simply be subtracted from $\xtot$ to yield $x_{\alpha}$, and thus $\Jhat$ (via Eq. \[eq:Jalpha\]). The top panel of Figure \[fig:tpB\] shows lines of constant $\log_{10} (J_{\alpha} /
J_{21})$, where $J_{\alpha} = h\nuLya \Jhat$ and $J_{21} = \Jtwoone$, given the redshift and brightness temperature of turning point B, $\dTb(\zB)$. From Equations \[eq:TurningPoint\] and \[eq:dlogTs\], we can also constrain the rate of change in the background $\Lya$ intensity (Eq. \[eq:dlogxa\]), as shown in the bottom panel of Figure \[fig:tpB\].
In the event that heating has already begun (rendering $\TK(z)$ unknown), interpreting turning point B becomes more complicated[^5]. Now, $x_{\alpha}$ will be overestimated, given that a larger (unknown) fraction of $\xtot$ is due to collisional coupling. Uncertainty in $\TK$ propagates to $S_{\alpha}$, meaning $x_{\alpha}$ can only be considered to provide an upper limit on the product $S_{\alpha} \Jhat$, rather than $\Jhat$ alone. Interpretation of the turning point condition (Eq. \[eq:TurningPoint\]) becomes similarly complicated if no knowledge of $\TK(z)$ is assumed.
![Values $J_{\alpha} = h\nuLya \Jhat$ and $d\log J_{\alpha} /
d\log t$ that give rise to turning point B at position $(\zB, \dTb(\zB))$. The color scale shows the value of $J_{\alpha}$ (top panel, in units of $J_{21} =
\Jtwoone$), and $d\log \Jhat / d\log t$ (bottom panel) required for turning point B to appear at the corresponding position in the $(\zB,
\dTb(\zB))$ plane, under the assumptions given in Section 3.1. The gray shaded region is excluded unless heating occurs in the dark ages. For reference, the highlighted black contours represent $\Lya$ fluxes (assuming a flat spectral energy distribution at energies between $\Lya$ and the Lyman-limit, $h\nuLya \leq h\nu \leq h\nu_{\mathrm{LL}}$), corresponding to Lyman-Werner band fluxes of $J_{\mathrm{LW}} / J_{21} = \{10^{-2}, 10^{-1}\}$ (from top to bottom), which roughly bracket the range of fluxes expected to induce negative feedback in minihalos at $z \sim 30$ [@Haiman2000].[]{data-label="fig:tpB"}](tpB_2panel.eps){width="48.00000%"}
Turning Point C: Heating Epoch {#sec:C}
------------------------------
In the general case where Hubble cooling and heating from astrophysical sources must both be considered, the temperature evolution can be written as $$\frac{d\log \TK}{d\log t} = \frac{\tau_H}{\tau_X} -
\mathcal{C} \label{eq:ThermalEvolution}$$ where we’ve defined a characteristic heating timescale $\tau_X^{-1} \equiv
\eheat / \eint$, in which $\eint$ is the gas internal energy density and $\eheat$ is the heating rate density, $\mathcal{C}$ is the dimensionless cooling rate (equal to $4/3$ in the limit of pure adiabatic cooling), and $\tau_H^{-1} = 3 \Hofz / 2$ is the inverse age of a matter-dominated Universe at redshift $z$.
![Cooling rate of the Universe under different assumptions. The black line is an approximate analytic solution [@Peebles1993], while the blue and green lines are numerical solutions. The blue curve considers cooling via radiative recombination, collisional excitation and ionization, and the Hubble expansion, and heating via Compton scattering. The green line is an even more detailed numerical solution obtained with the `CosmoRec` code [@Chluba2011], which includes a multi-level atom treatment and many radiative transfer effects.[]{data-label="fig:thermal_evolution"}](thermal_evolution.eps){width="48.00000%"}
In a neutral medium, the solution to Equation \[eq:ThermalEvolution\] for an arbitrary $\eheat$ is $$\TK(z) = \mathcal{C}_1^{-1} \int_{z}^{\infty} \eheat(z^{\prime}) \frac{dt}{dz^{\prime}} dz^{\prime} + \Tcmbnow \frac{(1 + z)^2}{1+\zdec} \label{eq:TemperatureSolution}$$ where $\mathcal{C}_1 \equiv 3 \nHbar (1 + y) \kB / 2$, $\kB$ is Boltzmann’s constant, $\nHbar$ is the hydrogen number density today, $y$ is the primordial helium abundance (by number), and the second term represents the adiabatic cooling limit.
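Equation \[eq:TemperatureSolution\] is straightforward to evaluate numerically. The sketch below (Python) integrates a constant co-moving heating rate density switched on at $z_{\mathrm{on}}$; the cosmological parameters ($H_0 = 70 \ \mathrm{km \, s^{-1} \, Mpc^{-1}}$, $\Omega_m = 0.3$, $\nHbar \approx 1.9 \times 10^{-7} \ \mathrm{cm^{-3}}$, $y = 0.08$) are assumptions, not values from the text:

```python
import math

# Assumed cosmology and constants (CGS), not values from the text
H0 = 2.268e-18          # 1/s  (70 km/s/Mpc)
OMEGA_M = 0.3
N_H0 = 1.9e-7           # 1/cm^3, co-moving hydrogen number density
Y_HE = 0.08             # primordial helium abundance by number
K_B = 1.380649e-16      # erg/K, Boltzmann constant
CM_PER_MPC = 3.0857e24
T_CMB_0, Z_DEC = 2.725, 150.0

def hubble(z):
    """H(z) in the matter-dominated (high-z) approximation."""
    return H0 * math.sqrt(OMEGA_M) * (1.0 + z)**1.5

def T_k(z, eps_heat, z_on=60.0, n=5000):
    """Eq. [eq:TemperatureSolution] for a constant co-moving heating rate
    density eps_heat [erg/s/cMpc^3] switched on at redshift z_on."""
    c1 = 1.5 * N_H0 * (1.0 + Y_HE) * K_B       # erg/K per co-moving cm^3
    eps_cgs = eps_heat / CM_PER_MPC**3         # erg/s per co-moving cm^3
    dz = (z_on - z) / n
    heat = 0.0                                 # midpoint-rule integral of eps*dt
    for i in range(n):
        zp = z + (i + 0.5) * dz
        heat += eps_cgs / ((1.0 + zp) * hubble(zp)) * dz
    return heat / c1 + T_CMB_0 * (1.0 + z)**2 / (1.0 + Z_DEC)
```

With $\eheat = 10^{37} \ \mathrm{erg \, s^{-1} \, cMpc^{-3}}$, this raises $\TK$ at $z = 20$ by a few tens of Kelvin above the adiabatic floor, comparable to the heating rates inferred near turning point C below.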
To move forward analytically we again adopt the maximal cooling rate, $\mathcal{C} = 4/3$. Detailed calculations with `CosmoRec` indicate that such a cooling rate is not achieved until $z \lesssim 10$ in the absence of heat sources, which means we *overestimate* the cooling rate, and thus *underestimate* $\TK$ at all redshifts. This lower bound on the temperature is verified in Figure \[fig:thermal\_evolution\], in which we compare three different solutions for the cooling rate density evolution.
In order for the 21-cm signal to approach emission, the temperature must be increasing relative to the CMB[^6], i.e. $\tau_H/\tau_X > 4/3$, meaning the existence of turning point C, at redshift $\zC$, alone gives us a lower limit on $\eheat(\zC)$. Detection of the absorption signal (regardless of its amplitude) also requires the kinetic temperature to be cooler than the CMB temperature. If we assume a ‘burst’ of heating, $\eheat \rightarrow \eheat
\delta(z - \zC)$, where $\delta$ is the Dirac Delta function, and require $\TK
< \Tcmb$, we can solve Equation \[eq:ThermalEvolution\] and obtain an upper limit on the co-moving heating rate density. The bottom panel of Figure \[fig:tpC\_heat\_constraints\] shows the upper and lower limits on $\eheat$ as a function of $\zC$ alone.
A stronger upper limit on $\eheat(\zC)$ is within reach, however, if we can measure the brightness temperature of turning point C accurately. Given that $\dTb(\zC)$ provides an upper limit on $\TS$ for all values of $\xibar$ (Eq. \[eq:TurningPointNeutral\_TS\]), and an absorption signal requires $\TK <
\TS < \Tcmb$, we can solve Equation \[eq:TemperatureSolution\] assuming $\TK < \TS$, and once again assume a burst of heating to get a revised upper limit on $\eheat(\zC)$.
In general, turning point C yields an upper limit (again because we’ve assumed $\mathcal{C} = 4/3$) on the *integral* of the heating rate density (Eq. \[eq:TemperatureSolution\]), which is seen in the upper panel of Figure \[fig:tpC\_heat\_constraints\][^7]. This upper limit is independent of the ionization history, since any ionization reduces the amplitude of $\TS$, thus lessening the amount of heating required to explain an absorption feature of a given depth. The only observational constraints available to date are consistent with X-ray heating of the IGM at $z \gtrsim 8$ [@Parsons2013].
![*Top:* Constraints on the cumulative energy deposition as a function of the redshift and brightness temperature of turning point C. The gray region is disallowed because it requires cooling to be more rapid than Hubble (adiabatic) cooling. *Bottom:* Constraints on the co-moving heating rate density ($\mathrm{cMpc}^{-3}$ means co-moving $\mathrm{Mpc}^{-3}$) as a function of $\zC$ alone. The blue region includes heating rate densities insufficient to overcome the Hubble cooling, while the red region is inconsistent with the existence of an absorption feature at $\zC$ because such heating rates would instantaneously heat $\TK$ above $\Tcmb$. The triangles, plotted in increments of $50$ mK between $\dTb = \{-250,-50\}$ mK show how a measurement of $\dTb(\zC)$, as opposed to $\zC$ alone, enables more stringent upper limits on the heating rate density.[]{data-label="fig:tpC_heat_constraints"}](tpC_2panel.eps){width="48.00000%"}
### From Absorption to Emission {#sec:trans}
If heating persists, and the Universe is not yet reionized, the 21-cm signal will eventually transition from absorption to emission. At this time, coupling is expected to be strong such that at the precise redshift of the transition, $z_{\mathrm{trans}}$, Equation \[eq:dTbdnu\] takes a special form since $\TS \simeq \TK = \Tcmb$, $$\begin{aligned}
\frac{d}{d\nu}\bigg[\dTb \bigg] & \simeq 0.1 \left(\frac{1 - x_i}{0.5}\right) \left(\frac{1 + z_{\mathrm{trans}}}{10}\right)^{3/2} \nonumber \\
& \times \left[1 + \frac{3}{2}\frac{d\log \TK}{d\log t}\right] \mathrm{mK} \ \mathrm{MHz}^{-1} . \label{eq:TransitionSlope}\end{aligned}$$ That is, if we can measure the slope at the absorption-emission transition, we obtain a lower limit on the heating rate density. Our inferred heating rate density would be exact if $\xibar$ were identically zero, but for $\xibar >
0$, the slope provides a lower limit. This is illustrated in Figure \[fig:slope\_constraints\].
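Equation \[eq:TransitionSlope\] can be inverted for the heating rate whenever the slope at $\ztrans$ is measured; setting $x_i = 0$ then gives the lower limit described above. A sketch (Python; function names are ours):

```python
def transition_slope(z_trans, dlog_Tk_dlog_t, x_i=0.0):
    """Slope of the signal at the absorption-emission transition,
    Eq. [eq:TransitionSlope], in mK/MHz."""
    prefactor = 0.1 * ((1.0 - x_i) / 0.5) * ((1.0 + z_trans) / 10.0)**1.5
    return prefactor * (1.0 + 1.5 * dlog_Tk_dlog_t)

def dlog_Tk_from_slope(z_trans, slope_mK_MHz, x_i=0.0):
    """Invert Eq. [eq:TransitionSlope] for d log T_K / d log t; with x_i = 0
    this is the lower limit on the heating rate discussed above."""
    prefactor = 0.1 * ((1.0 - x_i) / 0.5) * ((1.0 + z_trans) / 10.0)**1.5
    return (slope_mK_MHz / prefactor - 1.0) / 1.5
```

For the example adopted later in the text ($\ztrans = 15$, slope of $4.3 \ \mathrm{mK \, MHz^{-1}}$), the inversion gives $d\log \TK / d\log t \approx 6.4$, i.e. rapid heating at the transition.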
![Constraints on the co-moving heating rate density (once again $\mathrm{cMpc}^{-3}$ means co-moving $\mathrm{Mpc}^{-3}$) as a function of the absorption-emission transition redshift, $\ztrans$, and the slope of the 21-cm signal at that redshift. As in Figure \[fig:tpC\_heat\_constraints\], the blue region indicates heating rates insufficient to overcome the Hubble cooling, while the red region denotes heating rates that would instantaneously heat $\TK$ above $\Tcmb$. The triangles show how measuring the slope of the signal at $z_{\mathrm{trans}}$ can provide a lower limit on $\eheat$.[]{data-label="fig:slope_constraints"}](trans_1panel_heat.eps){width="48.00000%"}
### Could the Absorption Feature be Ionization-Driven? {#sec:tpC_ion}
\[sec:Cion\] The absorption feature of the all-sky 21-cm signal is generally expected to occur when X-rays begin heating the IGM [e.g. @Ricotti2005; @Ciardi2010]. However, this feature could also be produced given sufficient ionization, which similarly acts to drive the signal toward emission (albeit by reducing the absolute value of $\dTb$ rather than increasing $\TS$). We now assess whether or not such a scenario could produce turning point C while remaining consistent with current constraints from the Thomson optical depth to the CMB [$\tau_e$; @Dunkley2009; @Larson2011; @Bennett2012].
We assume that coupling is strong, $\TS \simeq \TK$, and that the Universe cools adiabatically (i.e. the extreme case where turning point C is *entirely* due to ionization), so that a measurement of $\dTb$ is a direct proxy for the ionization fraction (via Eq. \[eq:dTb\]). If we adopt a $\mathrm{tanh}$ model of reionization, parameterized by the midpoint of reionization, $\zrei$, and its duration, $\Delta \zrei$, we can solve Equation \[eq:dTb\] at a given $\dTb(\zC)$ for $\xibar(\zC)$. Then, we can determine the ($\zrei$, $\Delta \zrei$) pair, and thus the entire ionization history, $\xibar(z)$, consistent with our measure of $\xibar(\zC)$. Computing the Thomson optical depth is straightforward once $\xibar(z)$ is in hand – we assume HeIII reionization occurs at $z = 3$, and that HeII and hydrogen reionization occur simultaneously.
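A sketch of the optical-depth calculation (Python; the cosmological parameters are assumptions, and for simplicity we ionize HeII with hydrogen and neglect the small HeIII contribution at $z = 3$):

```python
import math

# Assumed parameters (CGS), not values from the text
SIGMA_T = 6.6524e-25     # cm^2, Thomson cross section
C_CGS = 2.99792458e10    # cm/s, speed of light
N_H0 = 1.9e-7            # 1/cm^3, co-moving hydrogen number density
Y_HE = 0.08              # helium abundance by number (HeII tracks HII)
H0 = 2.268e-18           # 1/s (70 km/s/Mpc)
OMEGA_M, OMEGA_L = 0.3, 0.7

def x_ion(z, z_rei, dz_rei):
    """tanh model for the mean ionized fraction."""
    return 0.5 * (1.0 + math.tanh((z_rei - z) / dz_rei))

def tau_e(z_rei, dz_rei, z_max=40.0, n=4000):
    """Thomson optical depth for a tanh ionization history:
    tau = sigma_T c int n_e(z) |dt/dz| dz, via a midpoint rule."""
    prefactor = SIGMA_T * C_CGS * N_H0 * (1.0 + Y_HE)
    dz = z_max / n
    tau = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        H = H0 * math.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_L)
        tau += prefactor * x_ion(z, z_rei, dz_rei) * (1.0 + z)**2 / H * dz
    return tau
```

With a midpoint of $\zrei \approx 10.6$ and a short duration, this yields $\tau_e \approx 0.08$, in the neighborhood of the WMAP constraints discussed below; later ionization-heavy histories for turning point C push $\tau_e$ upward.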
At a turning point, however, Equation \[eq:TurningPoint\] must also be satisfied. This results in a unique track through $(z, \dTb)$ space corresponding to values of $\zC$ and $\dTb(\zC)$ that are consistent with both $\xibar(\zC)$ and its time derivative for a given *tanh* model. Figure \[fig:tpC\_ion\] shows the joint ionization and 21-cm histories consistent with WMAP 9 constraints on $\tau_e$ [@Bennett2012].
This technique is limited because it assumes a functional form for the ionization history that may be incorrect, in addition to the fact that we are only using two points in the fit – the first being $\zrei$, at which point $\xibar = 0.5$ (by definition), and the second being $\xibar(\zC)$ as inferred from $\dTb(\zC)$. However, it does show that reasonable reionization scenarios could produce turning point C, although at later times (lower redshifts) than typical models (where turning point C is a byproduct of heating) predict.
{width="98.00000%"}
Turning Point D: Reionization {#sec:D}
-----------------------------
In principle, turning point D could be due to a sudden decline in the $\Lya$ background intensity, which would cause $\TS$ to decouple from $\TK$ and re-couple to the CMB. Alternatively, turning point D could occur if heating subsided enough for the Universe to cool back down to the CMB temperature. However, the more plausible scenario is that coupling continues between $\TS$ and $\TK$, heating persists, and the signal “saturates,” i.e. $1 - \Tcmb/\TS \approx 1$, in which case the brightness temperature is a direct proxy for the volume filling factor of HII regions[^8].
If saturated, Equation \[eq:TurningPoint\] becomes $$\frac{\xibar}{1-\xibar} \frac{d\log \xibar}{d\log t} \simeq \left(\frac{\Tcmb}{\TK}\right) \frac{d\log \TK}{d\log t} - \frac{1}{3} . \label{eq:Saturated}$$ Even in the saturated regime, the first term on the right-hand side cannot be discarded: although $\Tcmb/\TK$ is small, we have assumed nothing about $d\log \TK / d\log t$, which may be correspondingly large.
Many authors have highlighted the 21-cm emission signal as a probe of the ionization history during the EoR [e.g. @Pritchard2010b; @Morandi2012]. Rather than dwell on it, we simply note that if 21-cm measurements of the EoR signal are accompanied by independent measures of $\xibar$, in principle one could glean insights into the thermal history from turning point D as well.
DISCUSSION {#sec:Discussion}
==========
A Shift in Methodology
----------------------
The redshifted 21-cm signal has been studied by numerous authors in the last 10-15 years. Efforts have concentrated on identifying probable sources of $\Lya$, Lyman-continuum, and X-ray photons at high-$z$, and then solving for their combined influence on the thermal and ionization state of gas surrounding individual objects [e.g. @Madau1997; @Thomas2008; @Chen2008; @Venkatesan2011], or the impact of populations of sources on the global properties of the IGM [e.g. @Choudhury2005; @Furlanetto2006; @Pritchard2010a]. It has been cited as a probe of the first stars [@Barkana2004], stellar-mass black holes and active galactic nuclei [e.g. @Mirabel2011; @McQuinn2012b; @Tanaka2012; @Fragos2013; @Mesinger2013], which primarily influence the thermal history through X-ray heating, but could contribute non-negligibly to reionization [e.g. @Dijkstra2004; @Pritchard2010b; @Morandi2012]. More recently, more subtle effects have come into focus, such as the relative velocity difference between baryons and dark matter, which delays the formation of the first luminous objects [@Tseliakhovich2010; @McQuinn2012; @Fialkov2012].
Forward modeling of this sort, where the input is a set of astrophysical parameters and the output is a synthetic global 21-cm spectrum, is valuable because it 1) identifies the processes that most affect the signal, 2) has so far shown that a 21-cm signal should exist given reasonable models for early structure formation, and 3) shows that the signal exhibits the same qualitative features over a large subset of parameter space. However, this methodology yields no information about how unique a given model is.
We have taken the opposite approach. Rather than starting from an astrophysical model and computing the resulting 21-cm spectrum, we begin with an arbitrary signal characterized by its extrema, and identify the IGM properties that would be consistent with its observation. The advantage is that 1) we have a mathematical basis to accompany our intuition about which physical processes give rise to each feature of the signal, 2) we can see how reliably IGM properties can be constrained given a perfect measurement of the signal, and 3) we can predict which models will be degenerate without even computing a synthetic 21-cm spectrum.
An Example History {#sec:ExampleHistory}
------------------
In our analysis, we have found that the 21-cm signal provides more than coarse estimates of when the first stars and black holes form. Turning points B, C, and D constrain (quantitatively) the background $\Lya$ intensity, cumulative energy deposition, and mean ionized fraction, respectively, as well as their time derivatives, as summarized in Table \[tab:SignalFeatures\]. For concreteness, we will now revisit each feature of the signal for an assumed realization of the 21-cm spectrum, and demonstrate how each can be interpreted in terms of model-independent IGM properties.
[clclccc]{} Feature & Measurement & Assumptions & Constraint & Section & Equation(s) & Figure(s)\
B & $\zB$ & ... & lower limit on redshift of first star formation & \[sec:B\] & ... & ...\
B & $\dTb(\zB)$ & $\xibar = \eheat = 0$ & $\Jhat(\zB)$, $\Jhat^{\prime}(\zB)$ & \[sec:B\] & \[eq:dTb\]-\[eq:Jalpha\], \[eq:TurningPoint\]-\[eq:dlogxc\] & \[fig:tpB\]\
C & $\zC$ & ... & upper limit on $\eheat(\zC)$ & \[sec:C\] & \[eq:TemperatureSolution\] & \[fig:tpC\_heat\_constraints\]\
C & $\zC$ & $\xibar = 0$ & lower limit on redshift of first X-ray source formation & \[sec:C\] & ... & ...\
C & $\zC$ & $\xibar = 0$ & lower limit on $\eheat(\zC)$ & \[sec:C\] & \[eq:ThermalEvolution\] & \[fig:thermal\_evolution\], \[fig:tpC\_heat\_constraints\]\
C & $\dTb(\zC)$ & ... & improved upper limit on $\eheat(\zC)$ & \[sec:C\] & \[eq:dTb\], \[eq:TurningPoint\], \[eq:ThermalEvolution\], \[eq:TemperatureSolution\] & \[fig:tpC\_heat\_constraints\]\
C & $\dTb(\zC)$ & $\eheat = 0$ & rule out reionization scenario? & \[sec:Cion\] & \[eq:TurningPoint\] & \[fig:tpC\_ion\]\
transition & $\ztrans$ & $\TS = \TK$ & upper limit on $\int \eheat dt$ & \[sec:trans\] & \[eq:TemperatureSolution\] & \[fig:slope\_constraints\]\
transition & $\frac{d}{d\nu}\left[\delta T_b\right](\ztrans)$ & $\TS = \TK$ & lower limit on $\eheat(\ztrans)$ & \[sec:trans\] & \[eq:TransitionSlope\] & \[fig:slope\_constraints\]\
D & $\zD$ & ... & start of EoR & \[sec:D\] & ... & ...\
D & $\dTb(\zD)$ & ... & upper limit on $\xibar(\zD)$ & \[sec:D\] & \[eq:dTb\] & ...\
D & $\dTb(\zD)$ & $\TS = \TK \gg \Tcmb$ & $\xibar(\zD)$, joint constraint on $\xibar^{\prime}(\zD)$, $\TK(\zD)$, and $\TK^{\prime}(\zD)$ & \[sec:D\] & \[eq:dTb\], \[eq:TurningPoint\], \[eq:Saturated\] & ... \[tab:SignalFeatures\]
We will assume the same realization of the signal as is shown in Figure \[fig:global\_signal\], with turning points B, C, and D at $(z, \dTb /
\mathrm{mK})$ of [$(30.2, -4.8)$]{}, [$(21.1, -112)$]{}, and [$(13.5, 24.5)$]{}, respectively, and an absorption-emission transition at $\ztrans=15$, where $d(\dTb)/d\nu = 4.3 \ \mathrm{mK} \
\mathrm{MHz}^{-1}$. At a glance, the 21-cm realization shown in Figure \[fig:global\_signal\] indicates that the Universe’s first stars form at $z
\gtrsim 30$, the first black holes form at $z \gtrsim 21$, and that reionization has begun by $z \gtrsim 13.5$. Global feedback models such as those presented in @Tanaka2012 are inconsistent with this realization of the signal, as they predict $\TK > \Tcmb$ at $z \gtrsim 20$.
More quantitatively, from Figure \[fig:tpB\] we have a measure of the $\Lya$ background intensity, $\Jhat(\zB) \simeq 10^{-11.1}
\intensityunitsnumber$, and of its time rate-of-change, $d\log \Jhat / d\log t
\simeq 11.2$. Moving on to turning point C (Figure \[fig:tpC\_heat\_constraints\]), the kinetic temperature is constrained to the range $9 \lesssim \TK / \mathrm{K} \lesssim 16$, meaning the cumulative energy deposition must be $\int \eheat dt \leq 10^{51.9} \mathrm{erg} \
\mathrm{cMpc}^{-3}$. In the absence of any ionization, a minimum heating rate density of $\eheat \geq 10^{36.1} \coheatingdensity$ is required to produce turning point C, and a maximum of $\eheat \leq 10^{38.2} \coheatingdensity$ is imposed given the existence of the absorption feature.
The slope of the signal as it crosses $\dTb = 0$ is $\dTb^{\prime} = 4.3
\mathrm{mK} \ \mathrm{MHz}^{-1}$, corresponding to a lower limit on the heating rate density of $\eheat \geq 10^{37.6} \coheatingdensity$ (Figure \[fig:slope\_constraints\]). Finally, at turning point D, the ionized fraction must be $\xibar \leq 0.24$ (Eq. \[eq:dTb\] when $\TS
\gg \Tcmb$). An ionization-driven turning point C can be ruled out by Figure \[fig:tpC\_ion\], since the amount of ionization required to produce $(\zC,
\dTb(\zC)) =$ [$(21.1, -112)$]{} leads to $\tau_e$ values inconsistent with WMAP at the $> 3\sigma$ level, for *tanh* models with $8 \leq \zrei \leq 12$.
With limits on $\Jhat$, $\eheat$, $\xibar$, and their derivatives, the next step is to determine how each quantity relates to astrophysical quantities. Typically, models for the global 21-cm signal relate the emissivity of the Universe to the cosmic star-formation rate density (SFRD) via simple scalings of the form $\hat{\upepsilon}_{i,\nu}(z) \propto f_i \rhostardot(z)
I_{\nu}$ [e.g. @Furlanetto2006; @Pritchard2010a], in which case the parameters of interest are $f_i$, which converts a star formation rate into a bolometric energy output in band $i$ (generally split between $\Lya$, soft-UV, and X-ray photons), the SFRD itself, $\rhostardot$, and the spectral energy distribution (SED) of luminous sources being modeled, $I_{\nu}$.
Given that soft-UV photons have very short mean-free-paths in a neutral medium, a determination of $d\log \xibar / d\log t$ is likely to be an accurate tracer of the soft-UV ionizing emissivity of the Universe, $\eion$. However, the same is not true of photons emitted between $\Lyn$ resonances and hard X-ray photons, which can travel large distances before being absorbed, where they predominantly contribute to Wouthuysen-Field coupling and heating, respectively. Because of this, translating $\Jhat$ and $\eheat$ measurements to their corresponding emissivities, $\ealpha$ and $\eX$, is non-trivial. In general, the accuracy with which one can convert $\Jhat$ ($\eheat$) to $\ealpha$ ($\eX$) depends on the redshift-evolution of the co-moving bolometric luminosity and the SED of sources, $I_{\nu}$.
For a zeroth order estimate, we will assume that sources have a flat spectrum between the $\Lya$ resonance and the Lyman limit, and neglect “injected photons,” i.e. those that redshift into a higher $\Lyn$ resonance and (possibly) cascade through the $\Lya$ resonance. If $\ealpha \propto N_{\alpha} \rhostardot$, where $N_{\alpha}$ is the number of photons emitted between $\nuLya \leq \nu \leq \nuLL$ per baryon, then $$\rhostardot(z) \approx 10^{-5} \left(\frac{9690}{N_{\alpha}}\right) \left(\frac{J_{\alpha}}{J_{21}} \right) \left(\frac{1 + z}{30} \right)^{-1/2} \Msun \ \mathrm{yr}^{-1} \ \mathrm{cMpc}^{-3}$$ where we’ve scaled $N_{\alpha}$ to a value appropriate for low-mass PopII stars [@Barkana2004].
Similarly, if we assume that a fraction $\fXh = 0.2$ of the X-ray emissivity is deposited as heat [appropriate for the $E \gtrsim 0.1$ keV limit in a neutral medium; @Shull1985], and normalize by the local $L_X$-SFR relationship [e.g. @Mineo2012 who found $L_{0.5-8 \mathrm{keV}} = 2.61 \times 10^{39} \ \mathrm{erg} \ \mathrm{s}^{-1} \ (\Msun \ \mathrm{yr}^{-1})^{-1}$], we have $$\begin{aligned}
\rhostardot(z) & \approx 2 \times 10^{-2} \fX^{-1} \left(\frac{0.2}{f_{\mathrm{X,h}}} \right) \nonumber \\
& \times \left(\frac{\eheat}{10^{37} \ \coheatingdensity} \right) \ \Msun \ \mathrm{yr}^{-1} \ \mathrm{cMpc}^{-3}\end{aligned}$$ where we subsume all uncertainty in the normalization between $L_X$ and $\rhostardot$, the SED of X-ray sources, and radiative transfer effects into the factor $\fX$.
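Both scaling relations are trivial to encode; the sketch below (Python; function names are ours, and the normalizations are exactly those quoted above):

```python
def sfrd_from_Ja(z, J_ratio, N_alpha=9690.0):
    """SFRD [Msun/yr/cMpc^3] implied by a Ly-alpha background with
    J_alpha / J_21 = J_ratio, per the scaling above."""
    return 1e-5 * (9690.0 / N_alpha) * J_ratio * ((1.0 + z) / 30.0)**-0.5

def sfrd_from_heating(eps_heat, f_X=1.0, f_Xh=0.2):
    """SFRD [Msun/yr/cMpc^3] implied by a co-moving heating rate density
    eps_heat [erg/s/cMpc^3], per the scaling above."""
    return 2e-2 / f_X * (0.2 / f_Xh) * (eps_heat / 1e37)
```

Both functions are linear in the measured quantity, so the uncertainties in $N_{\alpha}$ and $\fX$ translate directly into the 2D degeneracies discussed in the next paragraph.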
If these approximate treatments are sufficient, then measures of $J_{\alpha}$ provide 2D constraints on $\rhostardot$ and $N_{\alpha}$, and measures of $\eheat$ constrain $\rhostardot$ and $\fX$[^9]. However, given the long mean free paths of X-rays and photons in the $\nuLya \leq \nu \leq \nuLL$ band, the estimates above are likely to be inadequate. It is the primary goal of a forthcoming paper (Mirocha et al., in prep) to characterize uncertainties in these estimates that arise due to two major unknowns: 1) redshift evolution in the ionizing emissivity of UV and X-ray sources, and 2) their spectral energy distributions.
Synergies with Upcoming Facilities
----------------------------------
The prospects for synergies are most promising for turning point D, which is predicted to occur at $z \lesssim 15$, coinciding with the JWST window and current and upcoming campaigns to measure the 21-cm power spectrum. JWST will probe the high-$z$ galaxy population even more sensitively than HST [e.g. @Robertson2013], which may allow degeneracies between the star-formation history and other parameters to be broken (e.g. the $f_i$ normalization factors). However, our focus in this paper is on model-independent quantities – the issue of degeneracy among astrophysical parameters will be discussed in Paper II.
In terms of model-independent quantities, current and upcoming facilities will benefit global 21-cm measurements by constraining the ionization history. For example, one can constrain $\xibar(z)$ via observations of $\Lya$-emitters [LAEs, e.g. @Malhotra2006; @McQuinn2007; @Mesinger2008], the CMB through $\tau_e$ and the kinetic Sunyaev-Zeldovich effect [@Zahn2012], or via measurements of the 21-cm power spectrum, which reliably peaks when $\xibar
\simeq 0.5$ [@Lidz2008]. However, like the global signal, power spectrum measurements yield upper limits on $\xibar$, since they assume $\TS \gg
\Tcmb$, which may not be the case. Constraints from LAEs require no such assumption, and instead set lower limits on $\xibar$, since our ability to see $\Lya$ emission from galaxies at high-$z$ depends on the *minimum* size of an HII region required for $\Lya$ photons to escape. Limits on $\xibar(z)$ out to $z \sim 10-15$ would yield a prediction for the amplitude of turning point D, which, in conjunction with a global 21-cm measurement could validate or invalidate the $\TS \gg \Tcmb$ assumption often adopted for EoR work. In addition, one could determine if ionization-driven absorption features are even remotely feasible (§\[sec:tpC\_ion\]).
Caveats
-------
Simple models for the global 21-cm signal rely on the assumption that the IGM is well approximated as a two-phase medium, one phase representing HII regions, and the other representing the bulk IGM. As reionization progresses, the distinction between these two phases will become tenuous, owing to a warming and increasingly ionized IGM whose properties differ little from an HII region. Even prior to reionization the global approximation may be inadequate depending on the distribution of luminous sources. If exceedingly rare sources dominate ionization and heating, we would require a more detailed treatment [a problem recently addressed in the context of helium reionization by @Davies2012].
Eventually, simple models must also be calibrated by more sophisticated simulations. This has been done to some extent already in the context of 21-cm fluctuations, with good agreement so far between semi-analytic and numerical models [@Zahn2011]. However, analogous comparisons for the global signal have yet to be performed rigorously. The limiting factor is that a large volume must be simulated in order to avoid cosmic variance, but the spatial resolution required to simultaneously resolve the first galaxies becomes computationally restrictive.
Finally, although we included an analysis of the absorption-emission transition point, $\ztrans$, in practice the slope measured from this feature will be correlated with the positions of the turning points. The most promising foreground removal studies rely on parameterizing the signal as a simple function (e.g. a spline), meaning the slope at $\ztrans$ is completely determined by the positions of the turning points and the function used to represent the astrophysical signal.
CONCLUSIONS
===========
In this paper we have addressed one tier of the 21-cm interpretation problem: identifying the physical properties of the IGM that can be constrained uniquely from a measurement of the all-sky 21-cm signal. Our main conclusions are:
- The first feature of the global signal, turning point B, provides a lower limit on the redshift at which the Universe’s first stars formed. But, more quantitatively, its position in $(z, \dTb)$ space measures the background $\Lya$ intensity, $\Jhat$, and its time derivative, respectively, assuming a neutral, adiabatically-cooling medium.
- The absorption feature, turning point C, is most likely a probe of accretion onto compact objects considering the $\tau_e$ constraint from the CMB. As a result, it provides a lower limit on the redshift when the first X-ray emitting objects formed. Even if the magnitude of the absorption trough cannot be accurately measured, a determination of $\zC$ alone sets strong upper and lower limits on the heating rate density of the Universe, $\eheat(\zC)$. If the absorption feature is deep ($\dTb(\zC) \lesssim -200$ mK) and occurs late ($z \lesssim 15$), it could be a byproduct of reionization.
- The final feature, turning point D, indicates the start of the EoR, and traces the mean ionized fraction of the Universe and its time derivative. In general, it also depends on the spin-temperature evolution, though it is expected that at this stage the signal is fully saturated. Without independent constraints on the thermal history, $\dTb(\zD)$ provides an upper limit on the mean ionized fraction, $\overline{x}_i$.
In general, the relationship between IGM diagnostics (such as $\Jhat$ and $\eheat$) and the properties of the astrophysical sources themselves (like $\rhostardot$, $\Nalpha$, and $\fX$) is expected to be complex. In a forthcoming paper, we compare simple analytic arguments (e.g. those used in §\[sec:ExampleHistory\]) with the results of detailed numerical solutions to the cosmological radiative transfer equation in order to assess how accurately the global 21-cm signal can constrain the Universe’s luminous sources.
The authors thank the anonymous referee, whose suggestions helped improve the quality of this manuscript, and acknowledge the LUNAR consortium[^10], headquartered at the University of Colorado, which is funded by the NASA Lunar Science Institute (via Cooperative Agreement NNA09DB30A) to investigate concepts for astrophysical observatories on the Moon.
[^1]: The Moon is not truly devoid of an ionosphere – its atmosphere is characterized as a surface-bounded exosphere, whose constituents are primarily metal ions liberated by interactions with energetic particles and radiation from the Sun [e.g. @Stern1999]. However, it is tenuous enough to be neglected at frequencies $\nu \gtrsim 1 \ \mathrm{MHz}$.
[^2]: @FurlanettoFurlanetto2007b investigated the effects of hydrogen-proton collisions on $\TS$ and found that they could account for up to $\sim 2$% of the collisional coupling at $z \approx 20$, and would dominate the coupling at $z\approx 10$ in the absence of heat sources. However, an early $\Lya$ background is expected to couple $\TS\rightarrow \TK$ prior to $z = 20$, and heating is expected prior to $z = 10$, so protons are generally neglected in 21-cm calculations. Collisions with neutral helium atoms in the triplet state could also induce spin-exchange [@Hirata2007], though the cold high-$z$ IGM lacks the energy required to excite atoms to the triplet state. We also neglect hydrogen-deuterium collisions, whose rarity prevents any real effect on $\TS$, even though $\kappa_{10}^{\mathrm{HD}} > \kappa_{10}^{\mathrm{HH}}$ at low temperatures [@Sigurdson2006]. Lastly, we neglect velocity-dependent effects [@Hirata2007], which introduce an uncertainty of up to a few % in the mean signal.
[^3]: We neglect the first and last features of the signal in this paper. The lowest redshift feature marks the end of reionization, and while its frequency derivative is zero, so is its amplitude, making its precise location difficult to pinpoint. The highest redshift feature is neglected because it is well understood theoretically, and should occur well before the formation of the first luminous objects [though exotic physics such as dark-matter annihilation could complicate this, e.g. @FurlanettoOh2006].
[^4]: Our motivation for the logarithmic derivative convention is primarily compactness, though the non-dimensionalization of derivatives is convenient for comparing the rate at which disparate quantities evolve. For reference, the logarithmic derivative of a generic function of redshift with respect to time, $d\log w/d\log t = b$, implies $w(z) \propto (1 + z)^{-3b/2}$ under the high-$z$ approximation, $H(z) \approx H_0 \Omnow^{1/2} (1 +
z)^{3/2}$, which is accurate to better than $\sim 0.5$% for all $z > 6$. For example, the CMB cools as $d\log \Tcmb/d\log t = -2/3$.
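The accuracy claim in this footnote can be checked numerically. The sketch below (my addition, not the paper's code) compares the full flat-$\Lambda$CDM Hubble rate with the high-$z$ approximation $H(z) \approx H_0 \Omega_m^{1/2}(1+z)^{3/2}$, assuming Planck-like density parameters $\Omega_m \simeq 0.3089$, $\Omega_\Lambda \simeq 0.6911$:

```python
import math

# Hedged numerical check of the high-z approximation in footnote 4,
# assuming Planck-like parameters (the paper's exact values may differ).
OM, OL = 0.3089, 0.6911

def hubble_ratio(z):
    """Fractional error of H0*sqrt(OM)*(1+z)^1.5 relative to full flat-LCDM."""
    full = math.sqrt(OM * (1 + z)**3 + OL)
    approx = math.sqrt(OM) * (1 + z)**1.5
    return abs(approx / full - 1)

# The error is largest at the low-z end of the quoted range, z = 6,
# and it stays below the quoted ~0.5% there.
print(f"error at z=6: {100 * hubble_ratio(6.0):.2f}%")
print(f"error at z=20: {100 * hubble_ratio(20.0):.4f}%")
```

The error decreases rapidly with redshift, consistent with the footnote's statement that the approximation is good to better than $\sim 0.5$% for all $z > 6$.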
[^5]: We deem such a scenario “exotic” because it requires heat sources prior to the formation of the first stars. Heating via dark matter annihilation is one example of such a heating mechanism [@FurlanettoOh2006].
[^6]: Though see §\[sec:Cion\] for an alternative scenario.
[^7]: We express our results in units of $\mathrm{erg} \ \mathrm{cMpc}^{-3}$ to ease the conversion between $\eheat$ and the X-ray emissivity, $\eX$ (see §\[sec:ExampleHistory\]). For reference, $10^{51} \ \mathrm{erg} \ \mathrm{cMpc}^{-3} \simeq 10^{-4} \ \mathrm{eV} \ \mathrm{baryon}^{-1}$.
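The unit conversion quoted in this footnote follows from the mean comoving baryon density. A back-of-the-envelope sketch (my addition, assuming a Planck-like $\Omega_b h^2 \simeq 0.022$; the paper's exact cosmology may differ slightly):

```python
# Hedged check of footnote 7: 1e51 erg/cMpc^3 ~ 1e-4 eV/baryon,
# assuming Omega_b * h^2 ~ 0.022 (not stated explicitly in the text).
OMEGA_B_H2 = 0.022
RHO_CRIT_H2 = 1.879e-29      # critical density / h^2  [g cm^-3]
M_P = 1.673e-24              # proton mass [g]
CM_PER_MPC = 3.086e24        # cm per Mpc
ERG_PER_EV = 1.602e-12       # erg per eV

n_b = OMEGA_B_H2 * RHO_CRIT_H2 / M_P        # comoving baryons per cm^3
n_b_mpc3 = n_b * CM_PER_MPC**3              # baryons per comoving Mpc^3
ev_per_baryon = (1e51 / ERG_PER_EV) / n_b_mpc3
print(f"1e51 erg/cMpc^3 = {ev_per_baryon:.1e} eV/baryon")  # ~1e-4
```

The result is within a factor of order unity of the quoted $10^{-4} \ \mathrm{eV} \ \mathrm{baryon}^{-1}$.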
[^8]: If the signal is not yet saturated, a measurement of turning point D instead yields an upper limit on $\xibar$.
[^9]: Here we have assumed that high-mass X-ray binaries are the only source of X-rays, when in reality the heating may be induced by a variety of sources. Other candidates include X-rays from “miniquasars” [e.g. @Kuhlen2005], inverse-Compton scattering of CMB photons off high-energy electrons accelerated in supernova remnants [@Oh2001], or shock heating [e.g. @Gnedin2004; @Furlanetto2004].
[^10]: http://lunar.colorado.edu
---
abstract: 'Black holes with masses of $\rm 10^6-10^9~M_{\odot}$ dwell in the centers of most galaxies, but their formation mechanisms are not well known. A subdominant dissipative component of dark matter with similar properties to the ordinary baryons, known as mirror dark matter, may collapse to form massive black holes during the epoch of first galaxy formation. In this study, we explore the possibility of massive black hole formation via this alternative scenario. We perform three-dimensional cosmological simulations for four distinct halos and compare their thermal, chemical and dynamical evolution in both the ordinary and the mirror sectors. We find that the collapse of halos is significantly delayed in the mirror sector due to the lack of $\rm H_2$ cooling and only halos with masses $\rm \geq 10^7~M_{\odot}$ are formed. Overall, the mass inflow rates are $\rm \geq 10^{-2}~M_{\odot}/yr$ and there is less fragmentation. This suggests that the conditions for the formation of massive objects, including black holes, are more favorable in the mirror sector.'
author:
- |
M. A. Latif[^1]$^1$, A. Lupi$^2$, D. R. G. Schleicher$^3$, G. D’Amico$^{4,6}$, P. Panci$^{4,5}$, S. Bovino$^3$\
$^1$Physics Department, College of Science, United Arab Emirates University, PO Box 15551, Al-Ain, UAE\
$^2$Scuola Normale Superiore, Piazza dei Cavalieri, 7, 56126, PISA, Italy\
$^3$Departamento de Astronomía, Facultad Ciencias Físicas y Matemáticas, Universidad de Concepción,\
Av. Esteban Iturra s/n Barrio Universitario, Casilla 160-C, Chile\
$^4$CERN Theoretical Physics Department, Case C01600, CH-1211 Geneve, Switzerland\
$^5$Laboratori Nazionali del Gran Sasso, Via G. Acitelli, 22, I-67100 Assergi (AQ), Italy\
$^6$Stanford Institute for Theoretical Physics, Stanford University, Stanford, CA 94306, USA
bibliography:
- 'smbhs.bib'
---
\[firstpage\]
methods: numerical – cosmology: theory – early Universe – high redshift quasars – black hole physics – galaxies: formation
Introduction
============
Most galaxies today, if not all, harbor supermassive black holes (SMBHs) of a few million to a few billion solar masses [@Kormendy2013], and their presence has also been revealed by observations of quasars at $z \geq 7$, a few hundred million years after the Big Bang [@Fan2003; @Willott2007; @Jiang2009; @MOrtlock2011; @Venemans2015; @Wu2015; @Banados18; @Schleicher18]. The existence of such massive objects at early epochs poses a challenge to our understanding of structure formation in the Universe. How they formed and how they grew are still open questions.
Various models of black hole (BH) formation have been proposed in the literature, which include the collapse of stellar remnants, runaway collisions in stellar clusters and the collapse of a giant gas cloud into a massive black hole, i.e. the so-called direct collapse model. These models provide seed BHs of $\rm 10-10^5~M_{\odot}$, and these seeds have to grow efficiently to reach the observed masses within the first billion years. Population III stars, depending upon their mass, may collapse into a BH of a few hundred solar masses. However, they have to continuously grow at the Eddington limit to reach the observed masses. The feedback from BHs halts the accretion onto them [@Johnson2007; @Alvarez2009; @Smith17] and they may require a few episodes of super-Eddington accretion [@Madau14; @Mayer2015; @Lupi16; @Inayoshi16] to grow to a billion solar masses. The mass of BHs resulting from runaway collisions in dense stellar clusters depends on the density, metallicity and the initial mass of the cluster. Under optimal conditions, seed BHs from this scenario can have masses of about a thousand solar masses and have to form within the first 2-3 million years [@Zwart2002; @Devecchi2012; @Katz2015; @Reinoso18a; @Sakurai18; @Reinoso18b]. Particularly, the potential interaction between stellar collisions and gas accretion is important. The latter may enhance the black hole mass formed in the first stellar clusters [@Boekholt18], or trigger the formation of runaway mergers in clusters of stellar mass black holes [@Davies11; @Lupi14]. The direct collapse model, on the other hand, provides BH seeds with masses of about $\rm 10^5~M_{\odot}$, but requires large inflow rates of about $\rm 0.1~M_{\odot}/yr$ [@Schleicher13; @LatifViscous2015; @Latif2016]. Such conditions can be achieved in metal-free halos illuminated by strong UV radiation [@Chon17]. However, it is still not clear what the number density of direct collapse black holes would be.
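The $\rm \sim 0.1~M_{\odot}/yr$ threshold quoted for direct collapse can be motivated by the standard order-of-magnitude estimate $\dot{M} \sim c_s^3/G$ for an isothermally collapsing cloud. The sketch below (my addition, not the paper's own calculation) evaluates this for atomically cooled gas, assuming $T \approx 8000$ K and a mean molecular weight $\mu \approx 1.22$:

```python
import math

# Hedged order-of-magnitude estimate (standard isothermal-collapse argument):
# Mdot ~ c_s^3 / G for atomically cooled gas. T and mu are assumptions.
K_B = 1.381e-16      # Boltzmann constant [erg/K]
M_H = 1.673e-24      # hydrogen mass [g]
G = 6.674e-8         # gravitational constant [cgs]
M_SUN = 1.989e33     # solar mass [g]
YR = 3.156e7         # seconds per year

T, MU, GAMMA = 8000.0, 1.22, 5.0 / 3.0
c_s = math.sqrt(GAMMA * K_B * T / (MU * M_H))   # adiabatic sound speed [cm/s]
mdot = c_s**3 / G / M_SUN * YR                  # inflow rate [Msun/yr]
print(f"c_s ~ {c_s/1e5:.1f} km/s, Mdot ~ {mdot:.2f} Msun/yr")
```

This lands at $\sim 0.1$–$0.2~\rm M_{\odot}/yr$, consistent with the inflow rates quoted for atomic-cooling halos in the text.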
Even the massive seeds would require rather special conditions to grow efficiently, see [@Latif18] and [@Regan18]. Each model has its pros and cons; further details about the models can be found in dedicated reviews on this topic [@Volonteri2012; @Haiman2013; @Latif16PASA; @Woods18].
In this article, we take an alternative approach and explore the possibility of forming massive BHs via dissipative Dark Matter (DM). [@Damico18] (hereafter called D18) have proposed that a small component of mirror matter (i.e. an elegant model of dissipative DM; see e.g. [@Berezhiani05; @Foot:2004pa; @Blinnikov:1983gh; @Khlopov:1989fj]) may form intermediate mass BHs. They have shown, using one-zone models, that the thermal and the chemical evolution of the mirror $\mathcal{M}$ and the ordinary $\mathcal{O}$ sectors differ due to the lower $\mathcal{M}$ radiation temperature. Furthermore, they found that the thermal evolution in the $\mathcal{M}$ sector depends on the virial temperature of the halo and 3D simulations are required to investigate this effect. They also pointed out that, in the presence of this dissipative DM sector, the BHs are expected to grow at a faster rate with respect to the ordinary case as they can accrete both collapsed $\mathcal{O}$ and $\mathcal{M}$ matter. Motivated by the work of D18, we perform 3D cosmological simulations of the first minihalos forming at $z=20-30$ and explore the impact of hydrodynamics and the collapse dynamics on the thermal and chemical properties of the halos in the $\mathcal{M}$ sector. We compare our results with the $\mathcal{O}$ sector and also assess the inflow rates, as well as study fragmentation properties of these halos. Our findings suggest that halos forming in the $\mathcal{M}$ sector are about an order of magnitude more massive and may have important implications for the formation of the first structures in the Universe.
Our article is organized as follows. In section two, we describe the model, the numerical methods and the initial conditions. We present our results in section three and our conclusions in section four.
Setting the stage
=================
A dissipative DM Model
----------------------
In terms of microphysical properties, a symmetric mirror sector (the parity symmetry which exchanges the $\mathcal{M}$ and $\mathcal{O}$ fields is not broken) is identical to the standard model of particle physics; for details see D18 and references therein. The $\mathcal{M}$ sector differs from the $\mathcal{O}$ one only in two macroscopic quantities: $i)$ the ratio of abundances $\beta = \Omega^{'}_{\rm b} /\Omega_{\rm b}$ and $ii)$ the ratio of radiation temperatures $x = T^{'}_{ \gamma} /T_{ \gamma}$. The $'$ symbol denotes mirror matter. To avoid stringent BBN and CMB limits, the $\mathcal{M}$ sector must be colder than the ordinary one ($x\lesssim 0.3$ [@Berezhiani:2000gw; @Berezhiani05; @Foot:2014uba]).
In our simulations, we assume $\beta = 1$ and $x=0.01$. The rest of DM is in the form of standard cold and collisionless DM. These values could result from a $\mathcal{M}$ sector with a broken mirror parity (see e.g. [@Berezhiani:1995am]) or minimal mirror twin Higgs (see e.g. [@Barbieri:2016zxn]). Although one cannot easily predict typical values for these parameters, we expect that our benchmark model exhibits macrophysical behavior common to several microphysical realizations of mirror DM. In addition to the $\beta$ and $x$ parameters, we assume for simplicity that the chemistry is the same in the two sectors. However, one can easily imagine scenarios in which the mirror chemistry makes production of $\rm H_2$ more difficult [@Rosenberg17]. We cannot study these at the moment, but we expect that our benchmark model exhibits similar qualitative features.
We neglect the baryonic component when adding the $\mathcal{M}$ sector, effectively including only conventional DM and mirror DM in the simulations. These components only interact gravitationally (and via renormalizable portals that we set to zero), so we assume their impact is weak: on larger scales gravity is dominated by the DM, and on smaller scales self-gravity takes over. To solve the chemical and thermal evolution of $\mathcal{M}$ matter in the early Universe, we follow the approach of D18. They have shown that due to the faster recombination in the $\mathcal{M}$ sector, the fraction of free electrons is lower, which results in a suppressed abundance of $\mathcal{M}$ molecular hydrogen.
Numerical methods
-----------------
We employ the publicly available code Enzo [@Enzo2014] to conduct hydrodynamical cosmological simulations. Enzo is an open source, adaptive mesh refinement (AMR), parallel, multi-physics simulation code which can run and scale well on various platforms using the message passing interface (MPI). Hydrodynamics is solved using the piece-wise parabolic method (PPM) and the DM dynamics is computed with a particle-mesh based N-body solver. For self-gravity calculations, we use a multi-grid Poisson solver.
We make use of the MUSIC package [@Hahn2011] to generate cosmological initial conditions, typically at $z \geq 100$, using the Planck 2016 data with $\Omega_{\rm M}=0.3089$, $\Omega_{\Lambda}=0.6911$, $h=0.6774$ [@Planck2016]. Our cosmological volume (simulation box) has a comoving size of 1 Mpc/h; we select the most massive halo forming in our computational domain at $z \ge 20$ and place it at the center of the box. We employ nested grid initial conditions with a top-level grid resolution of $\rm128^3$ cells and an equal number of DM particles. We subsequently employ two additional nested grids, each with the same grid size as the top grid. In addition to this, we further employ 18 additional levels of refinement during the course of the simulations, which provide us with an effective resolution of about 200 AU. We ensure a Jeans resolution of at least four cells during the simulations. In total, we employ 5767168 DM particles to solve the N-body dynamics, which provides an effective DM mass resolution of a few hundred solar masses. Our refinement criterion is based on the baryonic overdensity and the DM mass resolution. A cell is marked for refinement when it exceeds four times the cosmic mean density or a DM particle density of 0.0625 times $\rho_{DM}r^{\ell \alpha}$, where $\rho_{DM}$ is the dark matter density, $r = 2$ is the refinement factor, $\ell$ is the refinement level, and $\alpha = -0.3$ makes the refinement super-Lagrangian.
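The quoted effective resolution can be reconstructed from the grid setup. The sketch below (my addition; it assumes factor-2 refinement per level and evaluates the physical size at a representative mirror-sector collapse redshift, both assumptions on my part):

```python
# Hedged reconstruction of the quoted ~200 AU effective resolution:
# a 1 Mpc/h box, a 128^3 top grid, two nested grids, and 18 further AMR
# levels give 20 factor-2 refinements in total (assumption: r = 2 per level).
H = 0.6774              # dimensionless Hubble parameter
AU_PER_MPC = 2.063e11   # astronomical units per Mpc

cell_mpc_h = 1.0 / (128 * 2**20)           # comoving cell size [Mpc/h]
cell_au = cell_mpc_h / H * AU_PER_MPC      # comoving cell size [AU]
z_collapse = 14.0                           # typical M-sector collapse redshift
cell_au_phys = cell_au / (1 + z_collapse)   # physical cell size [AU]
print(f"comoving cell ~ {cell_au:.0f} AU, physical ~ {cell_au_phys:.0f} AU")
```

This reproduces the quoted "about 200 AU" to within a factor of order unity, supporting the reading that the figure refers to the physical (proper) cell size at collapse.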
We use the KROME package [@Grassi2014] to self-consistently solve the thermal and chemical evolution of nine primordial species ($\rm H,~H^+,~H^-,~He,~He^+,~He^{++},~H_2,~H_2^+,~e^-$)[^2] in cosmological simulations. Our chemical model includes the most important gas phase reactions and processes, including the formation of molecular hydrogen. It also includes the cooling and heating processes due to collisional excitation, collisional ionization, radiative recombination, collision-induced emission, $\rm H_2$ cooling and chemical heating/cooling.
For the $\mathcal{M}$ sector with $x=0.01$, the $\mathcal{M}$ helium fraction is almost negligible [@Berezhiani05]. More importantly, the initial abundance of free $\mathcal{M}$ electrons at $z=100$ is about four orders of magnitude smaller than the ordinary one (see Fig. 1 of D18) and we scale the fractions of other species accordingly.
[| c | c | c | c |c |]{}
Model & Halo Mass$^{\mathcal{O}}$ & Collapse redshift$^{\mathcal{O}}$ & Halo Mass$^{\mathcal{M}}$ & Collapse redshift$^{\mathcal{M}}$\
No & $\rm M_{\odot} $ & $z$ & $\rm M_{\odot}$ & $z$\
\
1 & $\rm 3.5 \times 10^{5}$ & 23.3 & $\rm 1.01 \times 10^{7}$ & 13.79\
2 & $\rm 1.3 \times 10^{6}$ & 20 & $\rm 2.69 \times 10^{7}$ & 14.05\
3 & $\rm 7.8 \times 10^{5}$ & 24.5 & $\rm 1.3 \times 10^{7}$ & 13.35\
4 & $\rm 6.7 \times 10^{5}$ & 25.4 & $\rm 1.2 \times 10^{7}$ & 16\
\[table1\]
Results
=======
We present the main findings of the present work in this section. In total, we have performed eight cosmological simulations for four different halos in both sectors. The properties of the simulated halos are listed in table \[table1\]. In the coming subsections, we discuss and compare the thermal and chemical evolution of the halos in both sectors. We also point out the differences in the fragmentation properties of the halos.
Time evolution of a reference run in the mirror and baryonic sector
-------------------------------------------------------------------
We take halo 1 as the reference case and compare its thermal evolution in both sectors, as shown in figure \[fig1\]. The temperature of the gas in the $\mathcal{M}$ sector at $z=100$ is lower than in the $\mathcal{O}$ sector due to inefficient Compton heating. As the collapse proceeds, gas falls into the DM potential and is heated up to about 1000 K by virialization shocks. In the $\mathcal{O}$ sector, gas starts to cool and collapse after reaching the molecular hydrogen cooling threshold (a few times $\rm 10^5~M_{\odot}$) at $z=23$. In the $\mathcal{M}$ sector, due to the inefficient production of molecular hydrogen, gas cannot cool until the halo mass reaches the atomic cooling limit. Consequently, the halo virial temperature reaches around $\rm 10^4~K$ and the halo mass $\rm \geq 10^7~M_{\odot}$. For the $\mathcal{M}$ sector, strong shocks during the virialization of an atomic cooling halo catalyze $\rm H_2$ formation by enhancing the electron fraction, and as a result the $\rm H_2$ abundance is significantly increased.
Once sufficient $\mathcal M$ H$_2$ has formed, the gas temperature is brought down to about a few hundred Kelvin in the core of the atomic cooling halo. Overall, the temperature in the center of $\mathcal{M}$ halos is about a factor of two higher compared to the $\mathcal{O}$ sector. The halo mass is about a factor of twenty larger and the collapse is delayed until $z \sim 14$, see table \[table1\]. This suggests that the first halos forming in the $\mathcal{M}$ sector are atomic cooling halos, while in the $\mathcal{O}$ sector minihalos form first. This will have important implications for early structure formation, see our discussion below.
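The consistency between the $\rm \sim 10^7~M_{\odot}$ halo masses and the $\rm \sim 10^4~K$ virial temperatures can be checked with the standard virial temperature-mass relation. The sketch below (my addition) uses a simplified form of the relation of Barkana & Loeb (2001), dropping order-unity overdensity factors and assuming $\mu = 1.22$ for neutral gas and $h = 0.6774$:

```python
# Hedged consistency check (simplified Barkana & Loeb-style relation;
# mu, h, and the dropped overdensity factors are assumptions):
#   Tvir ~ 1.98e4 K * (mu/0.6) * (M h / 1e8 Msun)^(2/3) * ((1+z)/10)
H, MU = 0.6774, 1.22

def t_vir(mass_msun, z):
    """Approximate halo virial temperature [K] for mass in Msun at redshift z."""
    return 1.98e4 * (MU / 0.6) * (mass_msun * H / 1e8) ** (2.0 / 3.0) * (1 + z) / 10.0

# Halo 1 in the mirror sector: M ~ 1.0e7 Msun collapsing at z ~ 13.8
print(f"Tvir ~ {t_vir(1.0e7, 13.8):.2e} K")
```

The result lands close to the $\rm 10^4~K$ atomic-cooling limit, in line with the collapse masses and redshifts listed in table \[table1\].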
Comparison of different halos and fragmentation study
-----------------------------------------------------
We here compare the chemical and thermal evolution of the four different halos in the $\mathcal{M}$ sector only. The plots of the $\rm H_2$ and $\rm HII$ mass fractions and the temperature against the gas density at the collapse redshifts of the halos are shown in figure \[fig2\]. The HII fraction at low densities is about $\rm 10^{-7}$, a few orders of magnitude lower compared to the $\mathcal{O}$ sector, and reaches up to $\rm \sim 10^{-4}$ at densities between $\rm 10^{-26}-10^{-24}~g/cm^3$ due to the strong virialization shocks. At higher densities, the $\rm HII$ fraction starts to decline due to the Lyman alpha and molecular hydrogen cooling which bring the temperature down. This trend is observed for all halos. The enhanced electron fraction during the process of virialization acts as a catalyst for the formation of molecular hydrogen, and as a result the $\rm H_2$ fraction is boosted by about six orders of magnitude. After the halo has virialized, the $\rm H_2$ fraction continues to increase and is further enhanced by three-body reactions. The typical abundance of $\rm H_2$ in the core of the halo is a few times $\rm 10^{-2}$, and this holds for all halos. Overall, the halos with higher virial masses have higher HII and $\rm H_2$ fractions.
Contrary to the $\mathcal{O}$ sector, the temperature of the halo continues to increase until it reaches the atomic cooling regime, due to the low $\rm H_2$ fraction at earlier times. By that time, the halo mass is above $\rm 10^7~M_{\odot}$ and Lyman alpha cooling becomes effective at densities above $\rm 10^{-24}~g/cm^3$. After reaching the atomic cooling limit, the molecular hydrogen formed during the virialization brings the gas temperature down to a few hundred K. Consequently, the formation of minihalos remains suppressed in the $\mathcal{M}$ sector. To further clarify the differences between the two sectors, we show the averaged radial profiles of the $\rm H_2$ and $\rm HII$ mass fractions in figure \[fig3\]. Although the initial abundances of $\rm H_2$, $\rm HII$, HI and electrons are a few orders of magnitude lower in the $\mathcal{M}$ sector, they become very similar to those in the $\mathcal{O}$ sector after virialization. Due to the larger halo mass in the $\mathcal{M}$ sector, stronger virialization shocks boost the abundances of these species and reduce the differences between the two sectors.
To compare the dynamical properties of the halos, we show the profiles of the temperature, density, enclosed mass and the mass inflow rates in figure \[fig4\]. The density in the outskirts of the halo is $\rm 10^{-24}~g/cm^3$ and increases up to $\rm 10^{-16}~g/cm^3$ in the core, with small bumps due to the presence of substructure. The density profiles are very similar in both sectors for all halos and follow the $\sim R^{-2.1}$ behaviour expected for $\rm H_2$-cooled gas. Consequently, the enclosed gas mass profiles are similar in both sectors, but almost an order of magnitude larger around 100 pc in the $\mathcal{M}$ sector. This difference comes from the larger halo masses in the $\mathcal{M}$ sector. The temperature profiles show that the differences between the two sectors are very prominent above 10 pc. For instance, at 100 pc the temperature in the $\mathcal{M}$ sector is about $\rm 8000-10^4$ K while in the $\mathcal{O}$ sector the temperature does not exceed $\rm 10^3$ K. These differences are again due to the larger halo masses in the $\mathcal{M}$ sector. In general, gas in the centre of the halos is warmer in the $\mathcal{M}$ sector. The average mass inflow rates for both sectors are between $\rm 0.001-0.1~M_{\odot}/yr$ and they are generally lower in the $\mathcal{O}$ sector.
The morphology of the halos at their collapse redshifts is shown in figure \[fig5\]. The density structure in the central region of the halo is different in the two sectors, and this trend is observed for all halos. In general, in the $\mathcal{O}$ sector the density structure is denser, more filamentary and more compact compared to the $\mathcal{M}$ one. Moreover, there is more than one gas clump, and they are well separated, while in the $\mathcal{M}$ sector structures are more spherical and fluffy. This comes from the fact that the molecular hydrogen is mainly concentrated in the core of the halo in the $\mathcal{M}$ sector, while in the outskirts of the halo $\rm H_2$ is below the universal value (i.e. $\rm 10^{-3}$). Based on these indications, more fragmentation is expected in the $\mathcal{O}$ sector. However, the possibility of fragmentation at later stages of collapse in the $\mathcal{M}$ sector cannot be completely ruled out.
Discussion and conclusions
==========================
In this study, we have investigated the possibility of BH formation in the context of dissipative DM. Previous works (D18) suggest that a small component of DM similar to the baryonic matter may collapse to form massive BHs. D18 show, employing one-zone models, that the evolution in the $\mathcal{M}$ sector is very different from the $\mathcal{O}$ one. Motivated by the work of D18, we have performed 3D cosmological simulations for four different halos to explore the impact of hydrodynamics and collapse dynamics on the thermal and chemical evolution as well as their implications for structure formation.
In our simulations of the $\mathcal{O}$ sector, we include only ordinary baryons and collisionless DM, to compare with the simulations of the $\mathcal{M}$ one. For the $\mathcal{O}$ sector, the mirror component is expected to collapse later than the baryonic component and therefore to have a negligible effect. For the $\mathcal{M}$ sector, the potential influence of the baryons is only via gravity. Hence we expect that, due to the inefficient cooling, the cloud will not be able to collapse even if we take into account the additional gravitational effect of the baryons.
In general our results are in agreement with the findings of D18. We show that the formation of minihalos remains suppressed in the $\mathcal{M}$ sector due to the deficiency of molecular hydrogen, and the gravitational collapse is significantly delayed, by $\Delta z \sim 10$. Consequently, gas keeps collapsing until the halo mass reaches the atomic cooling limit, with typical halo masses of $\rm 10^{7}~M_{\odot}$ and virial temperatures around $\rm 10^4~K$. Before the virialization of the halos, the abundances of $\rm H_2$ and $\rm HII$ are a few orders of magnitude lower in the $\mathcal{M}$ sector, but become comparable to the $\mathcal{O}$ one after virialization because of strong shocks. In general, the $\rm H_2$ mass fraction is about a factor of two lower and the temperature is about a factor of two higher. The mass inflow rate is $\rm \geq 10^{-2}~M_{\odot}/yr$, a factor of a few higher than in the baryonic sector. The degree of fragmentation is also very low. Overall, halos in the $\mathcal{M}$ sector are very similar to halos irradiated by a moderate background UV flux in the standard scenario [@Schleicher10; @Latif2016dust].
These factors suggest that the conditions for the formation of massive objects, including BHs, are more favourable in the $\mathcal{M}$ sector. Furthermore, once BHs form, their accretion rate is strongly boosted because they can accrete a substantial portion of the dissipative DM. Our results reinforce the findings of D18 and provide a viable alternative for massive BH formation at high redshift.
Acknowledgments {#acknowledgments .unnumbered}
===============
ML thanks the UAEU for funding via startup grant No..... AL acknowledges support from the European Research Council project No. 740120 ’INTERSTELLAR’. DRGS thanks for funding via Conicyt PIA ACT172033, Fondecyt regular (project code 1161247), the ”Concurso Proyectos Internacionales de Investigación, Convocatoria 2015” (project code PII20150171) and the BASAL Centro de Astrofísica y Tecnologías Afines (CATA) PFB-06/2007. GDA is supported by the Simons Foundation Origins of the Universe program (Modern Inflationary Cosmology collaboration). This work has made use of the Horizon Cluster, hosted by Institut d’Astrophysique de Paris, to carry out and analyse the presented simulations. SB is financially supported by Fondecyt Iniciacion (project code 11170268), CONICYT programa de Astronomia Fondo Quimal 2017 QUIMAL170001, and BASAL Centro de Astrofisica y Tecnologias Afines (CATA) AFB-17002.
[^1]: Corresponding author: [email protected]
[^2]: From now on, instead of using chemical notation for $\rm H^+$ and $\rm H$, we use HII and HI, respectively.
---
abstract: 'A recently proposed relation between conformal field theories in two dimensions and supersymmetric gauge theories in four dimensions predicts the existence of a distinguished basis in the space of local fields in CFT. This basis has a number of remarkable properties; one of them is the complete factorization of the coefficients of the operator product expansion. We consider a particular case of the $U(r)$ gauge theory on $\mathbb{C}^{2}/\mathbb{Z}_{p}$ which corresponds to a certain coset conformal field theory and describe the properties of this basis. We argue that in the case $p=2$, $r=2$ there exist different bases. We give an explicit construction of one of them. For another basis we propose the formula for matrix elements.'
author:
- |
A. A. Belavin$^{1}$, M. A. Bershtein$^{1,2}$, B. L. Feigin$^{1,2,3}$,\
A. V. Litvinov$^{1,4}$ and G. M. Tarnopolsky$^{1,4}$\
$^1$ \
$^2$ \
$^3$ \
$^4$
bibliography:
- 'MyBib.bib'
title: '**Instanton moduli spaces and bases in coset conformal field theory**'
---
Introduction
============
Two-dimensional conformal field theories and $\mathcal{N}=2$ supersymmetric gauge theories in four dimensions were developed independently through years. However, it was observed in the paper by Alday, Gaiotto and Tachikawa [@Alday:2009aq] that the instanton part of the partition function in $\mathcal{N}=2$ gauge theory coincides with the conformal block in 2d conformal field theory.
The relation between these two different types of theories is carried out through the intermediate object — moduli space of instantons $\mathcal{M}$: $$\label{scheme}
\begin{picture}(30,75)(160,10)
\Thicklines
\unitlength 2.3pt
\put(20,15){\vector(1,1){10}}
\put(20,15){\vector(-1,-1){10}}
\put(120,15){\vector(1,-1){10}}
\put(120,15){\vector(-1,1){10}}
\put(34,28){\mbox{\large{Instanton moduli space $\mathcal{M}$}}}
\put(-2,-5){\mbox{\large{CFT}}}
\put(105,-5){\mbox{\large{$\mathcal{N}=2$ gauge theory}}}
\end{picture}
\vspace*{1cm}$$ The right arrow on this picture symbolises that the path integral for the partition function in $\mathcal{N}=2$ supersymmetric gauge theory is localized and can be reduced to the integral over the manifold $\mathcal{M}$ (the manifold $\mathcal{M}$ is disconnected; its connected components are labeled by some topological characteristics of instantons). The last integral is divergent due to the non-compactness of the manifold $\mathcal{M}$. However, one can introduce a proper regularization in the gauge theory [@Nekrasov:2002qd] which breaks Lorentzian symmetry, but preserves some of the supersymmetries and makes it possible to apply the localization technique. The regularized integral is localized at the fixed points of an abelian group (torus) which acts on $\mathcal{M}$ by the space-time rotations that survive the breaking of Lorentzian symmetry and by the gauge transformations at infinity. The advantage of using the deformed theory is that the fixed points of the torus are isolated. Hence the partition function is given by the sum of the fixed-point contributions. The partition function defined in such a way is usually referred to as the Nekrasov partition function.
The non-trivial part of the scheme is represented by the left arrow, which means that there is a natural action of the symmetry algebra $\mathcal{A}$ of some conformal field theory on the equivariant cohomology of $\mathcal{M}$ (see Nakajima’s papers [@Nakajima_1995; @0970.17017] for basic examples of such an action). A basis in the (localized) equivariant cohomology space can be labeled by the fixed points of the torus [@Atiyah_Bott_1984]. Thus the geometrical construction gives some special basis of states in the highest weight representations $\pi_{\mathcal{A}}$ of the algebra $\mathcal{A}$. This basis is already remarkable just because of its geometrical origin and possesses many nice properties. Let us list some of them:
- To every torus fixed point $p\in\mathcal{M}$ there corresponds a basis vector $v_p \in \pi_{\mathcal{A}}$. Moreover, if $p \in\mathcal{M}_N$, where $N$ is a topological number, then the vector $v_p$ has degree $N$.
- There is a geometrically constructed scalar product on $\pi_{\mathcal{A}}$. The basis $v_p$ is orthogonal with respect to this product, and the norm of the vector $v_p$ equals the determinant of the vector field $v$ in the tangent space at $p$. The latter expression is also denoted by $Z^{-1}_{\textsf{vec}}$ (contribution of the vector multiplet).
- Matrix elements of geometrically defined vertex operators have a completely factorized form. These expressions are also denoted by $Z_{\textsf{bif}}$ (contribution of the bifundamental multiplet).
- There is a commutative algebra (Integrals of Motion) which is diagonalized in the basis $v_p$. Geometrically this algebra arises from multiplication by cohomology classes.
Knowledge of the functions $Z_{\textsf{vec}}$ and $Z_{\textsf{bif}}$ allows one to compute multi-point conformal blocks on surfaces of genus $0$ and $1$. In CFT they give explicit and remarkably simple expressions for the coefficients of the operator product expansion.
In this paper we consider a particular case of the scheme described above. Namely, we consider the case when $\mathcal{M}$ is the moduli space of $U(r)$ instantons on $\mathbb{C}^2/\mathbb{Z}_p$, where $\mathbb{Z}_p$ acts by the formula ($z_{1}$ and $z_{2}$ are coordinates on $\mathbb{C}^{2}$) $$(z_1,z_2) \mapsto (\omega z_1,\omega^{-1}z_2), \quad \text{where}\quad
\omega^p=1.$$ There are several smooth partial compactifications of this space. One of them can be constructed as follows. Denote by $\mathcal{M}(r,N)$ the smooth compactified moduli space of $U(r)$ instantons on $\mathbb{C}^2$ with topological number $N$. The set $\mathcal{M}(r,N)^{\mathbb{Z}_{p}}$ of $\mathbb{Z}_p$-invariant points of $\mathcal{M}(r,N)$ is a smooth compactification of the space of instantons on $\mathbb{C}^2/\mathbb{Z}_p$. The torus action on $\mathcal{M}(r,N)^{\mathbb{Z}_{p}}$ is induced by the actions on $\mathbb{C}^{2}$ and on the framing at infinity. The fixed points of this torus are labeled by $r$-tuples $(Y_{1},\dots,Y_{r})$ of Young diagrams colored in $p$ colors. Then there should be a basis labeled by $(Y_{1},\dots, Y_{r})$ in a representation of some algebra $\mathcal{A}$.
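As a small combinatorial illustration (our sketch, not part of the paper): under $(z_1,z_2)\mapsto(\omega z_1,\omega^{-1}z_2)$ a box $(i,j)$ of a Young diagram carries torus weight $\omega^{i-j}$, so one common convention colors it by $(i-j)\bmod p$; counting boxes of each color is the basic bookkeeping behind $p$-colored diagrams.

```python
# Illustrative helper (not from the paper): color the boxes of a Young diagram
# Y (a non-increasing tuple of row lengths) by content mod p and count them.
def color_counts(Y, p):
    counts = {c: 0 for c in range(p)}
    for i, row in enumerate(Y):
        for j in range(row):
            counts[(i - j) % p] += 1   # box (i, j) carries weight omega^(i-j)
    return counts
```

For example, `color_counts((3, 1), 2)` gives two boxes of each color.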
It was suggested in [@Belavin:2011pp] that the instanton manifold $\mathcal{M}=\bigsqcup_{N}\mathcal{M}(r,N)^{\mathbb{Z}_{p}}$ corresponds to the coset conformal field theory $$\label{BF-coset}
\mathcal{A}(r,p)\overset{\text{def}}{=}\frac{\widehat{\mathfrak{gl}}(n)_r}{\widehat{\mathfrak{gl}}(n-p)_r},$$ where the parameter $n$ is related to the equivariant parameters and in general can be an arbitrary complex number. Using the well-known level-rank duality this coset can be rewritten as $$\mathcal{A}(r,p)=
\widehat{\mathfrak{gl}}(p)_r\times \frac{\widehat{\mathfrak{gl}}(n)_r}{\widehat{\mathfrak{gl}}(p)_r \times \widehat{\mathfrak{gl}}(n-p)_r}=
\mathcal{H}
\times \widehat{\mathfrak{sl}}(p)_r\times \frac{\widehat{\mathfrak{sl}}(r)_p \times \widehat{\mathfrak{sl}}(r)_{n-p}}{\widehat{\mathfrak{sl}}(r)_n},$$ where $\mathcal{H}$ is the Heisenberg algebra. Taking into account the construction of [@Goddard:1986ee], some of these algebras can be rewritten as
\[scheme-12\] $$\label{scheme-1}
\begin{picture}(300,80)(0,40)
\Thicklines
\unitlength 1.6pt
\put(0,0){\line(1,0){200}}
\put(0,20){\line(1,0){200}}
\put(0,40){\line(1,0){200}}
\put(0,60){\line(1,0){200}}
\put(0,0){\line(0,1){75}}
\put(60,0){\line(0,1){75}}
\put(120,0){\line(0,1){75}}
\put(180,0){\line(0,1){75}}
\put(27,8){\mbox{\small{$\mathcal{H}$}}}
\put(77,8){\mbox{\small{$\mathcal{H}\oplus\mathsf{Vir}$}}}
\put(137,8){\mbox{\small{$\mathcal{H}\oplus\mathsf{W}_{3}$}}}
\put(15,28){\mbox{\small{$\mathcal{H} \oplus \widehat{\mathfrak{sl}}(2)_1$}}}
\put(15,48){\mbox{\small{$\mathcal{H} \oplus \widehat{\mathfrak{sl}}(3)_1$}}}
\put(64,28){\mbox{\small{$\mathcal{H} \oplus \widehat{\mathfrak{sl}}(2)_2 \oplus \textsf{NSR}$}}}
\put(-22,8){\mbox{\small{$p=1$}}}
\put(-22,28){\mbox{\small{$p=2$}}}
\put(-22,48){\mbox{\small{$p=3$}}}
\put(22,-8){\mbox{\small{$r=1$}}}
\put(82,-8){\mbox{\small{$r=2$}}}
\put(142,-8){\mbox{\small{$r=3$}}}
\put(25,70){\circle*{1}}
\put(30,70){\circle*{1}}
\put(35,70){\circle*{1}}
\put(85,50){\circle*{1}}
\put(90,50){\circle*{1}}
\put(95,50){\circle*{1}}
\put(85,70){\circle*{1}}
\put(90,70){\circle*{1}}
\put(95,70){\circle*{1}}
\put(145,30){\circle*{1}}
\put(150,30){\circle*{1}}
\put(155,30){\circle*{1}}
\put(145,50){\circle*{1}}
\put(150,50){\circle*{1}}
\put(155,50){\circle*{1}}
\put(145,70){\circle*{1}}
\put(150,70){\circle*{1}}
\put(155,70){\circle*{1}}
\put(195,50){\circle*{1}}
\put(200,50){\circle*{1}}
\put(205,50){\circle*{1}}
\put(195,30){\circle*{1}}
\put(200,30){\circle*{1}}
\put(205,30){\circle*{1}}
\put(195,10){\circle*{1}}
\put(200,10){\circle*{1}}
\put(205,10){\circle*{1}}
\end{picture}
\vspace*{1.9cm}$$ where $\mathsf{Vir}$ is the Virasoro algebra, $\mathsf{W}_{3}$ is the $\mathfrak{sl}(3)$ $W$ algebra and $\mathsf{NSR}$ is the Neveu–Schwarz–Ramond algebra, the $N=1$ superanalogue of the Virasoro algebra. Using the free-field representation of the algebras $\widehat{\mathfrak{sl}}(2)_{1}$, $\widehat{\mathfrak{sl}}(2)_{2}$ and $\widehat{\mathfrak{sl}}(3)_{1}$ and restricting to some components of $\mathcal{M}$, this table can be rewritten as $$\label{scheme-2}
\begin{picture}(300,80)(0,40)
\Thicklines
\unitlength 1.6pt
\put(0,0){\line(1,0){200}}
\put(0,20){\line(1,0){200}}
\put(0,40){\line(1,0){200}}
\put(0,60){\line(1,0){200}}
\put(0,0){\line(0,1){75}}
\put(60,0){\line(0,1){75}}
\put(120,0){\line(0,1){75}}
\put(180,0){\line(0,1){75}}
\put(27,8){\mbox{\small{$\mathcal{H}$}}}
\put(77,8){\mbox{\small{$\mathcal{H}\oplus\mathsf{Vir}$}}}
\put(137,8){\mbox{\small{$\mathcal{H}\oplus\mathsf{W}_{3}$}}}
\put(20,28){\mbox{\small{$ \mathcal{H} \oplus \mathcal{H}$}}}
\put(13,48){\mbox{\small{$ \mathcal{H} \oplus \mathcal{H}\oplus \mathcal{H}$}}}
\put(62,28){\mbox{\small{$\mathcal{H}\oplus\mathcal{H}\oplus\mathcal{F}\oplus \textsf{NSR}$}}}
\put(-22,8){\mbox{\small{$p=1$}}}
\put(-22,28){\mbox{\small{$p=2$}}}
\put(-22,48){\mbox{\small{$p=3$}}}
\put(22,-8){\mbox{\small{$r=1$}}}
\put(82,-8){\mbox{\small{$r=2$}}}
\put(142,-8){\mbox{\small{$r=3$}}}
\put(25,70){\circle*{1}}
\put(30,70){\circle*{1}}
\put(35,70){\circle*{1}}
\put(85,50){\circle*{1}}
\put(90,50){\circle*{1}}
\put(95,50){\circle*{1}}
\put(85,70){\circle*{1}}
\put(90,70){\circle*{1}}
\put(95,70){\circle*{1}}
\put(145,30){\circle*{1}}
\put(150,30){\circle*{1}}
\put(155,30){\circle*{1}}
\put(145,50){\circle*{1}}
\put(150,50){\circle*{1}}
\put(155,50){\circle*{1}}
\put(145,70){\circle*{1}}
\put(150,70){\circle*{1}}
\put(155,70){\circle*{1}}
\put(195,50){\circle*{1}}
\put(200,50){\circle*{1}}
\put(205,50){\circle*{1}}
\put(195,30){\circle*{1}}
\put(200,30){\circle*{1}}
\put(205,30){\circle*{1}}
\put(195,10){\circle*{1}}
\put(200,10){\circle*{1}}
\put(205,10){\circle*{1}}
\end{picture}
\vspace*{1.9cm}$$
where $\mathcal{F}$ is the Majorana fermion algebra.
In the language of the scheme above, the conjecture of [@Belavin:2011pp] implies that there exists a construction of a geometrical action of the coset algebra $\mathcal{A}(r,p)$ on the equivariant cohomology of $\mathcal{M}=\bigsqcup_{N}\mathcal{M}(r,N)^{\mathbb{Z}_{p}}$. This action was constructed explicitly only in the case of rank one ($r=1$) in [@0970.17017]. For higher ranks $r>1$ a similar construction has not been developed so far. However, it can be obtained as a limit of the geometrical action of a more general algebra constructed by Nakajima in [@Nakajima:fk]. To be more precise, the author in [@Nakajima:fk] constructed the action of the so-called $\mathfrak{gl}_p$-toroidal algebra of level $r$ on the equivariant $K$-theory of the space $\mathcal{M}=\bigsqcup_{N}\mathcal{M}(r,N)^{\mathbb{Z}_{p}}$. In some limit the equivariant $K$-theory degenerates to equivariant cohomology and the toroidal algebra degenerates to the vertex operator algebra related to the coset $\mathcal{A}(r,p)$[^1]. The construction based on a limit of the toroidal algebra is difficult to accomplish (for the $p=1$ case see [@Awata:2011fk]). However, using geometrical intuition one can predict the properties of the basis quoted above. This gives expressions for the conformal blocks which can be compared to the expressions obtained in the standard CFT framework. Below we list the main up-to-date achievements in this direction.
- In the case $p=1$, $r=1$ Nakajima [@Nakajima_1995] defined the geometrical action of the Heisenberg algebra. The fixed-point basis corresponds to Jack polynomials, see e.g. [@Li:uq]. Carlsson and Okounkov gave a geometrical construction of the vertex operator in [@Carlsson:2008fk].
- The case $p=1$, $r=2$ was considered in the paper [@Alday:2009aq]. The authors conjectured the expression for the multipoint conformal blocks in terms of the Nekrasov instanton partition functions. Alday and Tachikawa in [@Alday:2010vg] conjectured the existence of the basis which explains these expressions. In [@Alba:2010qc] explicit algebraic construction of this basis was given.
- The case $p=1$, $r>2$ was considered along the lines of [@Alday:2009aq] by Wyllard [@Wyllard:2009hg] (see also [@Mironov:2009by]). The construction of the basis was done in [@Fateev:2011hq].
- For the case $p=2$, $r=2$ V. Belavin and the third author proposed an expression for the Whittaker limit of the four-point superconformal block in the Neveu–Schwarz sector in terms of Nekrasov instanton partition functions [@Belavin:2011pp]. This result was generalized in [@Belavin:2011tb] to the general four-point conformal block. For results in the Ramond sector see [@Ito:2011mw].
- For $p>2$, the check of the central charges of the coset CFT $\widehat{\mathfrak{sl}}(r)_p \times \widehat{\mathfrak{sl}}(r)_{n-p}\Bigl/\widehat{\mathfrak{sl}}(r)_n$ from $M$-theory considerations was performed in [@Nishioka:2011jk]. Wyllard [@Wyllard:2011mn] considered the Whittaker limit in the case $p=4$, $r=2$. Some further checks for this case were made in [@Alfimov:2011ju]. In the case of generic $p$ and $r$ some non-trivial checks were done in [@Wyllard:2011mn] by use of the Kac determinant of the coset CFT.
There exists another compactification of the space of instantons on $\mathbb{C}^2/\mathbb{Z}_p$. Denote by $X_p$ the minimal resolution of $\mathbb{C}^2/\mathbb{Z}_p$. The moduli space $\mathcal{M}(X_{p},r,N)$ of framed torsion-free sheaves of rank $r$ on $X_p$ is a smooth compactification of the space of instantons on $\mathbb{C}^2/\mathbb{Z}_p$. The torus action on $\mathcal{M}(X_{p},r,N)$ is induced by the torus action on $X_p$ and the action on the framing at infinity. The fixed points are labelled by $p$ $r$-tuples of Young diagrams and $p-1$ vectors $(k^{i}_1,k^{i}_2\dots,k^{i}_r)$, $1 \leq i \leq p-1$, of integers. Note that this combinatorial description differs from the description of the torus fixed points on $\mathcal{M}(r,N)^{\mathbb{Z}_p}$ in terms of Young diagrams colored in $p$ colors. It is natural to assume that similar algebras act on the equivariant cohomology of $\mathcal{M}(X_{p},r,N)$. In [@Bonelli:2011jx; @Bonelli:2011kv] the authors used the space $\mathcal{M}(X_{2},2,N)$ for Nekrasov-type expressions of the conformal blocks in the superconformal field theory.
The symmetry algebra for the coset models $$\label{BF-coset-2}
\frac{\widehat{\mathfrak{sl}}(r)_p \times \widehat{\mathfrak{sl}}(r)_{n-p}}{\widehat{\mathfrak{sl}}(r)_n}$$ with generic $r$ and $p$ is not known in explicit form. For example, for $r=2$ and generic $p$ the symmetry algebra is generated by the current $G(z)$ of fractional spin $(p+4)/(p+2)$ [@Argyres:1990aq]. This current is non-abelianly braided, i.e. the operator product of $G(z)$ with itself contains singularities with incommensurable powers. This fact makes it difficult to study such models. The situation simplifies in three cases: $p=1$, which corresponds to the Virasoro algebra, $p=2$, which corresponds to the Neveu–Schwarz–Ramond algebra, and $p=4$, which can be expressed through the abelianly braided model called the spin $4/3$ parafermionic CFT [@Fateev:1985ig; @Pogosian:1988ar]. For higher ranks the algebraic treatment of the coset model becomes even more problematic. Already in the case $p=1$ the commutation relations of the corresponding algebra (the $\mathsf{W}_{r}$ algebra in this case) are known explicitly only for small ranks. Remarkably, such obstructions do not appear on the geometrical side of the relation, and the case of generic $p$ and $r$ can be studied in its entirety.
In this paper we continue the study of the case $p=2$, $r=2$ as the next example (after $p=1$, $r=2$) where the algebraic treatment is relatively simple[^2]. The general philosophy suggests the existence of the basis in the representation of the algebra $\mathcal{H}\oplus\mathcal{H}\oplus\mathcal{F}\oplus \textsf{NSR}$ (see table \[scheme-2\]). This basis has a geometric origin and gives the expressions for the conformal blocks mentioned before. Moreover, the different manifolds $\mathcal{M}(X_{2},2,N)$ and $\mathcal{M}(2,N)^{\mathbb{Z}_2}$ might correspond to different bases.
The appearance of different bases is a new effect of the case $p>1$ compared to $p=1$. Geometrically this is related to the fact that the manifolds $\mathcal{M}(X_{2},2,N)$ and $\mathcal{M}(2,N)^{\mathbb{Z}_2}$ are $\mathbb{C}^*$-diffeomorphic, but not $\left(\mathbb{C}^*\right)^2$-diffeomorphic. Algebraically this leads to the fact that the formulae in [@Belavin:2011pp; @Belavin:2011tb] on the one hand and in [@Bonelli:2011jx; @Bonelli:2011kv] on the other hand are different. They give the same result because the manifolds $\mathcal{M}(X_{2},2,N)$ and $\mathcal{M}(2,N)^{\mathbb{Z}_2}$ are compactifications of the same manifold and hence the integrals are equal. In other words, these two compactifications give two ways to compute the integral, and the equality of the results amounts to a nontrivial combinatorial identity.
In section \[SAGT\] we construct the basis which corresponds to the manifold $\mathcal{M}(X_{2},2,N)$ (to be more precise, to its component with $c_1=0$). This basis gives the expressions of [@Bonelli:2011jx; @Bonelli:2011kv] for the conformal blocks in the superconformal field theory. As the main tool we use the subalgebra $$\left(\mathcal{H} \oplus \textsf{Vir} \right) \oplus \left(\mathcal{H} \oplus \textsf{Vir} \right) \subset
\left(\mathcal{H}\oplus\mathcal{H}\oplus\mathcal{F}\oplus\mathsf{NSR}\right).$$ In other words, we use an embedding of the direct sum of two algebras for $p=1$ into the algebra for $p=2$ (see \[scheme-12\]). Geometrically the appearance of this subalgebra is related to the existence of two points on $X_{2}$ invariant under the torus action. The algebraic explanation is based on the coset formula $$\frac{\widehat{\mathfrak{gl}}(n)_r}{\widehat{\mathfrak{gl}}(n-1)_r}
\times \frac{\widehat{\mathfrak{gl}}(n-1)_r}{\widehat{\mathfrak{gl}}(n-2)_r} \subset \frac{\widehat{\mathfrak{gl}}(n)_r}{\widehat{\mathfrak{gl}}(n-2)_r}.$$ Using this subalgebra we reduce the basis problem to the $p=1$ case and use the construction of [@Alba:2010qc].
In section \[SAGT-collored\] we study the basis corresponding to the manifold $\mathcal{M}(2,N)^{\mathbb{Z}_2}$ (to be more precise, only one connected component for each $N$). We could not give an explicit construction of this basis, but we conjecture a factorized formula for the matrix elements of vertex operators ($Z_{\textsf{bif}}$) in this basis. We checked this formula by comparing two evaluations of the five-point conformal block: in the first we use the formula mentioned above, connected with the hypothetical basis corresponding to the manifold $\mathcal{M}(2,N)^{\mathbb{Z}_{2}}$; in the second we use the basis constructed in section \[SAGT\], which corresponds to the manifold $\mathcal{M}(X_{2},2,N)$.
In the second part of section \[SAGT-collored\] we study all connected components of $\mathcal{M}(1,N)^{\mathbb{Z}_2}$. In other words, we consider the algebra $\mathcal{H}\oplus\widehat{\mathfrak{sl}}(2)_{1}$ from table \[scheme-1\] instead of the algebra $\mathcal{H}\oplus\mathcal{H}$ from table \[scheme-2\]. We will see that there are several classes of connected components labeled by an integer $d$, and that different classes correspond to different bases. The basis constructed in section \[SAGT\] arises as the limit $d \rightarrow \infty$.
The plan of the paper is the following. In section \[AFLT\] we reproduce all known facts about the basis in the case $p=1$. The content of sections \[SAGT\] and \[SAGT-collored\] was described above. In section \[Concl\] we formulate some obvious open questions. In appendix \[2Liouville\] we discuss the embedding $\mathsf{Vir}\oplus\mathsf{Vir}\subset\mathcal{F}\oplus\mathsf{NSR}$ in more detail. In appendices \[Highest-weight\] and \[Bersh-app\] we present some explicit formulae used in sections \[SAGT\] and \[SAGT-collored\].
The case $p=1$ {#AFLT}
==============
In this section we review the construction of the basis in the case $p=1$ and arbitrary rank $r$. This example is used to illustrate the general scheme formulated in the Introduction. Moreover, some of these constructions will be used below in section \[SAGT\].
Geometrical setup
-----------------
In this case the geometrical object under consideration is the manifold $\mathcal{M}=\bigsqcup_{N}\mathcal{M}(r,N)$, where $\mathcal{M}(r,N)$ is the compactified moduli space of $U(r)$ instantons on $\mathbb{C}^{2}$ with instanton number $N$ (see [@0949.14001] Ch. 2 or [@2003math.....11058N] Ch. 3) $$\label{ADHM-def}
\mathcal{M}(r,N)\cong\left\{
(B_{1},B_{2},I,J)\left|
\begin{aligned}
&(\mathrm{i})\quad[B_{1},B_{2}]+IJ=0\\
&(\mathrm{ii})\quad
\begin{minipage}{.44\textwidth}
There is no subspace $S \varsubsetneq \mathbb{C}^N$, such that $B_\sigma S \subset S$ ($\sigma= 1,2$) and $I_1,\dots, I_r \in S$
\end{minipage}
\end{aligned}
\right\}\right.\Biggl/\mathrm{GL_{N}},$$ where $B_{j}$, $I$ and $J$ are $N\times N$, $N\times r$ and $r\times N$ complex matrices with the action of $\mathrm{GL}_{N}$ given by $$g\cdot(B_{1},B_{2},I,J)=(gB_{1}g^{-1},gB_{2}g^{-1},gI,Jg^{-1}),$$ for $g\in\mathrm{GL}_{N}$. Here $I_{1},\dots,I_{r}$ denote the columns of the matrix $I$. The torus $T=(\mathbb{C}^*)^2\times (\mathbb{C}^*)^{r}$ acts on the manifold $\mathcal{M}$. The $(\mathbb{C}^*)^2$ action arises from the two rotations of $\mathbb{C}^2$, and the $(\mathbb{C}^*)^r$ action arises from the action on the framing at infinity. The exact formula reads $$\label{torus-action}
B_1\mapsto t_1 B_1 ; \, \, \, \, B_2\mapsto t_2 B_2 ; \, \, \, \, I \mapsto It; \, \, \,
\, J\mapsto t_1 t_2 t^{-1}J,$$ where $(t_1,t_2,t)\in \mathbb{C}^*\times \mathbb{C}^*\times (\mathbb{C}^*)^r=T$. The fixed points under the torus action are labeled by $r$-tuples of Young diagrams $\vec{Y}=(Y_1,\dots,Y_{r})$, and $T$ acts on the tangent space of any fixed point $p_{\scriptscriptstyle{\vec{Y}}}=p_{\scriptscriptstyle{Y_1},\dots,\scriptscriptstyle{Y_r}}$. For any element $v=(\epsilon_1,\epsilon_2,a)\in\textit{Lie}(T)$, where $\epsilon_{1},\epsilon_{2}\in\mathbb{C}$ and $a$ is the diagonal matrix $a=\textrm{diag}(a_{1},\dots,a_{r})$, the determinant of $v$ on the tangent space of $p_{\scriptscriptstyle{\vec{Y}}}$ reads [@Flume:2002az; @2003math......6198N] $$\label{det-vp}
\det v\Bigl|_{p_{\scriptscriptstyle{\vec{Y}}}}=\prod_{i,j=1}^{r}
\prod_{s\in \scriptscriptstyle{Y_{i}}}
E_{\scriptscriptstyle{Y_{i}},\scriptscriptstyle{Y_{j}}}(a_{i}-a_{j}|s)
\bigl(\epsilon_{1}+\epsilon_{2}-E_{\scriptscriptstyle{Y_{i}},\scriptscriptstyle{Y_{j}}}(a_{i}-a_{j}|s)\bigr),$$ where
$$\label{E-def}
E_{\scriptscriptstyle{Y},\scriptscriptstyle{W}}(x|s)=x-\epsilon_{1}\,\mathrm{l}_{\scriptscriptstyle{W}}(s)+\epsilon_{2}(\mathrm{a}_{\scriptscriptstyle{Y}}(s)+1).$$
Here $\mathrm{a}_{\scriptscriptstyle{Y}}(s)$ and $\mathrm{l}_{\scriptscriptstyle{W}}(s)$ are respectively the arm length of the box $s$ in the partition $Y$ and the leg length of the box $s$ in the partition $W$. The inverse of the determinant is usually called the contribution of the vector multiplet and is denoted by $$\label{Zvec-def}
Z^{(r)}_{\textsf{vec}}(\vec{a},\vec{Y}|\epsilon_{1},\epsilon_{2})\overset{\text{def}}{=}
\prod_{i,j=1}^{r}
\prod_{s\in \scriptscriptstyle{Y_{i}}}
\Bigl(E_{\scriptscriptstyle{Y_{i}},\scriptscriptstyle{Y_{j}}}(a_{i}-a_{j}|s)
\bigl(\epsilon_{1}+\epsilon_{2}-E_{\scriptscriptstyle{Y_{i}},\scriptscriptstyle{Y_{j}}}(a_{i}-a_{j}|s)\bigr)\Bigr)^{-1},$$ where $\vec{a}=(a_{1},\dots,a_{r})$. This quantity enters into the instanton part of the Nekrasov partition function for the pure $U(r)$ gauge theory (without matter) $$\label{Zpure-def}
Z^{(r)}_{\text{pure}}(\vec{a},\epsilon_{1},\epsilon_{2}|\Lambda)=
1+\sum_{k=1}^{\infty}\sum_{|\scriptscriptstyle{\vec{Y}}\scriptstyle|=k} Z^{(r)}_{\textsf{vec}}(\vec{a},\vec{Y}|\epsilon_{1},\epsilon_{2})\,\Lambda^{4k},$$ where $\vec{a}=(a_{1},\dots,a_{r})$ is interpreted as the vacuum expectation value of the scalar field and $\Lambda$ is the scale of the gauge theory.
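As a numerical sanity check (our sketch, not part of the paper), the sum above can be evaluated directly from the combinatorial definition of $Z_{\textsf{vec}}$ for $r=1$, $a=0$. For $U(1)$ the series is known to exponentiate, so the coefficient of $\Lambda^{4k}$ is $1/(k!\,(\epsilon_{1}\epsilon_{2})^{k})$; we verify $k=1,2$ in exact rational arithmetic.

```python
# Brute-force check of the U(1) Nekrasov sum: the coefficient of Lambda^{4k}
# equals 1/(k! (e1*e2)^k). All arithmetic is exact (Fractions).
from fractions import Fraction

def partitions(n, max_part=None):
    """All integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def arm(Y, i, j):        # boxes in the same row to the right of (i, j)
    return Y[i] - j - 1

def leg(Y, i, j):        # boxes in the same column below (i, j)
    return sum(1 for row in Y if row > j) - i - 1

def z_vec_u1(Y, e1, e2):
    """Z_vec for r = 1, a = 0: product over boxes of 1/(E (e1+e2-E))."""
    z = Fraction(1)
    for i, row in enumerate(Y):
        for j in range(row):
            E = -e1 * leg(Y, i, j) + e2 * (arm(Y, i, j) + 1)
            z /= E * (e1 + e2 - E)
    return z

e1, e2 = Fraction(2, 3), Fraction(5, 7)   # generic rational equivariant parameters
coeff = {k: sum(z_vec_u1(Y, e1, e2) for Y in partitions(k)) for k in (1, 2)}
```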
An important quantity is the contribution of the bifundamental matter hypermultiplet [@Fucito:2004gi; @Flume:2002az; @Shadchin:2005cc]. This quantity is defined geometrically and is given by the determinant of the vector field in a fiber of a certain bundle over a fixed point[^3] of the torus on $\mathcal{M}(r,N)\times\mathcal{M}(r,N')$ $$\label{Zbif-def}
Z^{(r)}_{\text{\sf{bif}}}(m;\vec{a}',\vec{W};\vec{a},\vec{Y}|\epsilon_{1},\epsilon_{2})=\prod_{i,j=1}^{r}
\prod_{s\in \scriptscriptstyle{Y_{i}}}\left(\epsilon_{1}+\epsilon_{2}-E_{\scriptscriptstyle{Y_{i}},\scriptscriptstyle{W_{j}}}(a_{i}-a'_{j}|s)-m\right)
\prod_{t\in \scriptscriptstyle{W_{j}}}\left(E_{\scriptscriptstyle{W_{j}},\scriptscriptstyle{Y_{i}}}(a'_{j}-a_{i}|t)-m\right),$$ where the parameter $m$ coincides with the mass of the bifundamental hypermultiplet. As all the expressions $Z_{\textsf{vec}}$ and $Z_{\textsf{bif}}$ are homogeneous under $a_{i}\rightarrow\lambda a_{i}$, $m\rightarrow\lambda m$ and $\epsilon_{j}\rightarrow\lambda\epsilon_{j}$, one can fix this freedom by demanding that $\epsilon_{1}\epsilon_{2}=1$. We adopt the notation common in the CFT literature $$\epsilon_{1}=b,\qquad
\epsilon_{2}=b^{-1}.$$ Moreover, we assume that $\sum_{j=1}^{r}a_{j}=0$. In particular, below we consider in detail the cases $r=1$ and $r=2$. For $r=2$ it will be convenient to introduce $$\label{Zbif-def-2}
\mathbb{F}(\alpha|P',\vec{W};P,\vec{Y})\overset{\text{def}}{=}
Z^{(2)}_{\text{\sf{bif}}}(\alpha;(P',-P'),\vec{W};(P,-P),\vec{Y}|b,1/b).$$ and $$\label{Nvec-def-2}
\mathbb{N}(P,\vec{Y})\overset{\text{def}}{=}
Z_{\textsf{vec}}^{(2)}((P,-P),\vec{Y}|b,1/b).$$
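A similar sketch for $r=2$ (ours, not part of the paper): implementing $\mathbb{N}(P,\vec{Y})=Z^{(2)}_{\textsf{vec}}((P,-P),\vec{Y}|b,1/b)$ from the definitions above, one can check by hand that at level one $\mathbb{N}(P,([1],\varnothing))=-1/(2P(2P+Q))$ and that the two level-one contributions sum to $1/(2\Delta)$ with $\Delta=Q^{2}/4-P^{2}$, the familiar first coefficient of the pure $SU(2)$ (Whittaker) partition function; the code below reproduces both statements numerically.

```python
# Numerical evaluation of Z^{(2)}_vec and the norm factor N(P, Y) at level one.
def arm(Y, i, j):
    return Y[i] - j - 1

def leg(Y, i, j):                    # may be negative when (i, j) lies outside Y
    return sum(1 for row in Y if row > j) - i - 1

def z_vec_r2(a, Ys, e1, e2):
    """Z^{(2)}_vec(a, Y | e1, e2): product over i, j and boxes s in Y_i,
    with arm taken in Y_i and leg taken in Y_j."""
    z = 1.0
    for i in range(2):
        for j in range(2):
            for bi, row in enumerate(Ys[i]):
                for bj in range(row):
                    E = (a[i] - a[j]) - e1 * leg(Ys[j], bi, bj) \
                        + e2 * (arm(Ys[i], bi, bj) + 1)
                    z /= E * (e1 + e2 - E)
    return z

def norm_N(P, Ys, b):                # N(P, Y) = Z^{(2)}_vec((P, -P), Y | b, 1/b)
    return z_vec_r2((P, -P), Ys, b, 1.0 / b)

b, P = 0.8, 0.37
Q = b + 1.0 / b
level1 = norm_N(P, ([1], ()), b) + norm_N(P, ((), [1]), b)
```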
Algebraic setup
---------------
In this case the conformal field theory under consideration has the symmetry algebra $\mathcal{H}\oplus\textsf{W}_{r}$. There is a special basis of states in the highest weight representation of this algebra corresponding to the fixed points of the vector field acting on $\mathcal{M}$. This basis of states diagonalizes an infinite system of commuting quantities (Integrals of Motion) $\mathbf{I}_{k}$ $$\label{II-commute}
[\mathbf{I}_{k},\mathbf{I}_{l}]=0,$$ which are elements of the universal enveloping algebra of $\mathcal{H}\oplus\textsf{W}_{r}$. We review the construction of the basis of states in two particular cases, $r=1$ and $r=2$. For the case of general rank see [@Fateev:2011hq].
### Case $r=1$
Our algebra is the Heisenberg algebra with generators $\mathtt{a}_{k}$ and commutation relations[^4] $$\label{H-relat}
[\mathtt{a}_{n},\mathtt{a}_{m}]=n\,\delta_{n+m,0}.$$ The highest weight representation of this algebra (Fock module) is defined by the vacuum state $|0\rangle$ $$\mathtt{a}_{n}|0\rangle=0\quad\text{for}\quad n>0,$$ and spanned by the vectors of the form $$\mathtt{a}_{-k_{1}}\dots\mathtt{a}_{-k_{n}}|0\rangle,\qquad
k_{1}\geq k_{2}\geq\dots\geq k_{n}.$$ One can define another basis $$\label{Jack-basis-1}
|Y\rangle\overset{\text{def}}{=}\jac_{\scriptscriptstyle{Y}}^{\scriptscriptstyle{(1/g)}}(x)|0\rangle,$$ where $\jac_{\scriptscriptstyle{Y}}^{\scriptscriptstyle{(1/g)}}(x)$ is the Jack polynomial in integral normalization [@Macdonald] with parameter $g=-b^{2}$ associated to the partition $Y$ and the following identification is made $$\mathtt{a}_{-k}=-ib\,p_{k},$$ where $p_{k}$ are power-sum symmetric polynomials $$p_{k}=p_{k}(x)=\sum_{j}x_{j}^{k}.$$ The basis of states $|Y\rangle$ is usually called the Jack basis for transparent reasons. There exists a system of Integrals of Motion $\mathbf{I}_{k}$ which acts diagonally in the Jack basis. The first two representatives of this family are (here $Q=b+1/b$) $$\label{I21-components}
\begin{aligned}
&\mathbf{I}_{1}=\sum_{k>0}\mathtt{a}_{-k}\mathtt{a}_{k},\\
&\mathbf{I}_{2}=iQ\sum_{k>0}k\mathtt{a}_{-k}\mathtt{a}_{k}+\frac{1}{3}\sum_{i+j+k=0}\mathtt{a}_{i}\mathtt{a}_{j}\mathtt{a}_{k}.
\end{aligned}$$
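A quick sketch (ours, not part of the paper) that $\mathbf{I}_{2}$ is indeed diagonal on the Jack basis at level $2$: with the identification $p_{k}=(i/b)\,\mathtt{a}_{-k}$ and $g=-b^{2}$, the level-2 Jack states are proportional to $\mathtt{a}_{-1}^{2}+(i/b)\,\mathtt{a}_{-2}$ and $\mathtt{a}_{-1}^{2}+ib\,\mathtt{a}_{-2}$. We realize the Fock space as dictionaries mapping tuples of created modes to coefficients, and read the cubic term of $\mathbf{I}_{2}$ in its normal-ordered form $\sum_{m,n>0}(\mathtt{a}_{-m-n}\mathtt{a}_{m}\mathtt{a}_{n}+\mathtt{a}_{-m}\mathtt{a}_{-n}\mathtt{a}_{m+n})$; both states then come out as eigenvectors.

```python
def a_create(k, state):
    """Act with a_{-k} (k > 0): append a created mode k to each monomial."""
    out = {}
    for part, co in state.items():
        key = tuple(sorted(part + (k,), reverse=True))
        out[key] = out.get(key, 0) + co
    return out

def a_annih(k, state):
    """Act with a_k (k > 0), using [a_k, a_{-k}] = k and a_k |0> = 0."""
    out = {}
    for part, co in state.items():
        mult = part.count(k)
        if mult:
            rest = list(part)
            rest.remove(k)
            out[tuple(rest)] = out.get(tuple(rest), 0) + co * k * mult
    return out

def add(acc, state, scale=1):
    out = dict(acc)
    for part, co in state.items():
        out[part] = out.get(part, 0) + scale * co
    return out

def I2(state, b, cutoff=6):
    """I_2 = iQ sum_k k a_{-k}a_k + sum_{m,n>0}(a_{-m-n}a_m a_n + a_{-m}a_{-n}a_{m+n})."""
    Q = b + 1.0 / b
    out = {}
    for k in range(1, cutoff + 1):
        out = add(out, a_create(k, a_annih(k, state)), 1j * Q * k)
    for m in range(1, cutoff):
        for n in range(1, cutoff - m + 1):
            out = add(out, a_create(m + n, a_annih(m, a_annih(n, state))))
            out = add(out, a_create(m, a_create(n, a_annih(m + n, state))))
    return out

b = 0.8
residuals = []
for c in (1j / b, 1j * b):            # level-2 Jack states, Y = (2) and (1,1)
    v = {(1, 1): 1.0, (2,): c}
    w = I2(v, b)
    ratio = w[(1, 1)] / v[(1, 1)]     # candidate eigenvalue
    residuals.append(max(abs(w.get(p, 0) - ratio * v.get(p, 0))
                         for p in set(w) | set(v)))
```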
Another important property of the Jack basis was pointed out in [@Carlsson:2008fk]. Namely, consider the vertex operator $$\label{vertex-CO}
\mathsf{V}_{\alpha}=
e^{(\alpha-Q)\varphi_{-}(1)}e^{\alpha\varphi_{+}(1)},$$ with $\varphi_{+}(z)=i\sum_{n>0}\frac{\mathtt{a}_{n}}{n}z^{-n}$ and $\varphi_{-}(z)=i\sum_{n<0}\frac{\mathtt{a}_{n}}{n}z^{-n}$. Define also the dual basis $\langle W|$, which is orthogonal to the Jack basis with respect to the usual scalar product in the Heisenberg algebra. It was proved in [@Carlsson:2008fk] that $$\label{matrix-element-CO}
\langle W|\mathsf{V}_{\alpha}|Y\rangle=
\prod_{s\in \scriptscriptstyle{Y}}
\Bigl(b\,\bigl(\mathrm{l}_{\scriptscriptstyle{W}}(s)+1\bigr)-b^{-1}\mathrm{a}_{\scriptscriptstyle{Y}}(s)-\alpha\Bigr)
\prod_{t\in \scriptscriptstyle{W}}
\Bigl(b^{-1}\,\bigl(\mathrm{a}_{\scriptscriptstyle{W}}(t)+1\bigr)-b\,\mathrm{l}_{\scriptscriptstyle{Y}}(t)-\alpha\Bigr).$$
We stress that the Jack basis $|Y\rangle$ is interpreted as the basis of fixed points $p_{\scriptscriptstyle{Y}}$ of the vector field on the instanton manifold $\mathcal{M}$ (in the case of rank one and $\epsilon_{1}=b$, $\epsilon_{2}=1/b$) [@Li:uq]. The Integrals of Motion are interpreted as operators of multiplication by cohomology classes. We note that the r.h.s. of \[matrix-element-CO\] coincides with \[Zbif-def\] in the case $r=1$, $a=a'=0$, $m=\alpha$ and $\epsilon_{1}=b$, $\epsilon_{2}=1/b$: $$\langle W|\mathsf{V}_{\alpha}|Y\rangle=
Z^{(1)}_{\text{\sf{bif}}}(\alpha;0,W;0,Y|b,b^{-1})$$
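This identity can be checked mechanically (our sketch, not part of the paper): evaluating both sides for all pairs of diagrams with at most three boxes, the factorized matrix element and $Z^{(1)}_{\textsf{bif}}$ agree.

```python
# Check that the factorized <W| V_alpha |Y> equals Z^{(1)}_bif(alpha; 0, W; 0, Y | b, 1/b)
# for all pairs of Young diagrams with at most 3 boxes each.
def arm(Y, i, j):
    return Y[i] - j - 1

def leg(Y, i, j):                     # may be negative if box (i, j) lies outside Y
    return sum(1 for row in Y if row > j) - i - 1

def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def boxes(Y):
    return [(i, j) for i, row in enumerate(Y) for j in range(row)]

def z_bif_r1(alpha, W, Y, b):
    """Z^{(1)}_bif via the E-function: arm in the first diagram, leg in the second."""
    e1, e2 = b, 1.0 / b
    def E(A, B, i, j):
        return -e1 * leg(B, i, j) + e2 * (arm(A, i, j) + 1)
    z = 1.0
    for (i, j) in boxes(Y):
        z *= e1 + e2 - E(Y, W, i, j) - alpha
    for (i, j) in boxes(W):
        z *= E(W, Y, i, j) - alpha
    return z

def matrix_element(alpha, W, Y, b):
    """The factorized expression for <W| V_alpha |Y> quoted above."""
    z = 1.0
    for (i, j) in boxes(Y):
        z *= b * (leg(W, i, j) + 1) - arm(Y, i, j) / b - alpha
    for (i, j) in boxes(W):
        z *= (arm(W, i, j) + 1) / b - b * leg(Y, i, j) - alpha
    return z

b, alpha = 0.8, 0.41
diffs = [abs(matrix_element(alpha, W, Y, b) - z_bif_r1(alpha, W, Y, b))
         for nY in range(4) for nW in range(4)
         for Y in partitions(nY) for W in partitions(nW)]
```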
### Case $r=2$
We consider the conformal field theory whose symmetry algebra is $\mathcal{A}=\mathcal{H}\oplus\text{\sf Vir}$ (we use conventions which are specific to this case: there is a factor $1/2$ in the commutation relations for the $a_{k}$ generators compared to \[H-relat\]) $$\label{Vir-relat}
\begin{aligned}
&[L_{n},L_{m}]=(n-m)L_{n+m}+\frac{c}{12}(n^{3}-n)\,\delta_{n+m,0},\\
&[a_{n},a_{m}]=\frac{n}{2}\,\delta_{n+m,0},\qquad [L_{n},a_{m}]=0.
\end{aligned}$$ We will parametrize the central charge $c$ of the Virasoro algebra in a Liouville manner as $$c=1+6Q^{2},\qquad\text{where}\quad Q=b+\frac{1}{b}.$$ We also need to introduce the operators $$\label{primary}
V_{\alpha}\overset{\text{def}}{=}\mathcal{V}_{\alpha}\cdot V_{\alpha}^{\scriptscriptstyle{\textsf{Vir}}},$$ where $V_{\alpha}^{\scriptscriptstyle{\textsf{Vir}}}$ is the primary field of the Virasoro algebra with conformal dimension $$\label{Delta-Vir}
\Delta(\alpha,b)=\alpha(Q-\alpha)$$ and $\mathcal{V}_{\alpha}$ is a free exponential $$\label{vertex}
\mathcal{V}_{\alpha}=e^{2(\alpha-Q)\varphi_{-}}e^{2\alpha\varphi_{+}},$$ with $\varphi_{+}(z)=i\sum_{n>0}\frac{a_{n}}{n}z^{-n}$ and $\varphi_{-}(z)=i\sum_{n<0}\frac{a_{n}}{n}z^{-n}$.
Let us consider the highest weight representation of the algebra $\mathcal{H}\oplus\mathsf{Vir}$ parameterized by the momenta $P$ and defined by the vacuum state $|P\rangle$: $$L_{n}|P\rangle=a_{n}|P\rangle=0,\quad\text{for}\quad n>0,\qquad
L_{0}|P\rangle=\Delta(P)|P\rangle,\qquad \langle P|P\rangle=1.$$ The Virasoro conformal dimension of the state $|P\rangle$ is expressed through the momenta $P$ as $$\Delta(P)=\frac{Q^{2}}{4}-P^{2}.$$ Then the highest weight representation is spanned by the vectors of the form $$\label{naive-basis}
\begin{gathered}
a_{-l_{m}}\dots a_{-l_{1}}L_{-k_{n}}\dots L_{-k_{1}}|P\rangle,\\
k=(k_{1}\geq k_{2}\geq\dots\geq k_{n}),\quad
l=(l_{1}\geq l_{2}\geq\dots\geq l_{m}).
\end{gathered}$$ This representation is irreducible for general values of the momenta $P$.
In principle, one can choose another basis different from the naive one \[naive-basis\]. Among the possible bases there is one which is of special interest for us. The defining property of this basis is formulated in the following proposition, proved in [@Alba:2010qc].
\[AFLT-prop\] There exists a unique orthogonal basis $|P\rangle_{\vec{\scriptscriptstyle{Y}}}$ such that $$\label{matrix-elements}
\frac{ _{\vec{\scriptscriptstyle{W}}}\langle P'|V_{\alpha}|P\rangle_{\vec{\scriptscriptstyle{Y}}}}
{\langle P'|V_{\alpha}|P\rangle}=\mathbb{F}(\alpha|P',\vec{W};P,\vec{Y}).$$
In proposition \[AFLT-prop\] we denoted the elements of this basis by $|P\rangle_{\scriptscriptstyle{\vec{Y}}}$, where $\vec{Y}=(Y_{1},Y_{2})$ stands for a pair of Young diagrams. The function $\mathbb{F}(\alpha|P',\vec{W};P,\vec{Y})$ is defined by \[Zbif-def\] and \[Zbif-def-2\]. We note that in the geometrical language the basis state $|P\rangle_{\vec{\scriptscriptstyle{Y}}}$ corresponds to the fixed point $p_{\vec{\scriptscriptstyle{Y}}}$ of the vector field. It follows from Proposition \[AFLT-prop\] that the states $|P\rangle_{\vec{\scriptscriptstyle{Y}}}$ form an orthogonal basis $$_{\vec{\scriptscriptstyle{W}}}\langle P|P\rangle_{\vec{\scriptscriptstyle{Y}}}=
\frac{\delta_{\vec{\scriptscriptstyle{Y}},\vec{\scriptscriptstyle{W}}}}{\mathbb{N}(P,\vec{Y})},$$ where $\delta_{\vec{\scriptscriptstyle{Y}},\vec{\scriptscriptstyle{W}}}=0$ if $\vec{Y}\neq\vec{W}$, $\delta_{\vec{\scriptscriptstyle{Y}},\vec{\scriptscriptstyle{Y}}}=1$, and the function $\mathbb{N}(P,\vec{Y})$ is defined by \[Nvec-def-2\].
It will be convenient below to introduce operators $X_{\vec{\scriptscriptstyle{Y}}}(P,b)$: $$\label{X-def}
|P\rangle_{\vec{\scriptscriptstyle{Y}}}\overset{\text{def}}{=}X_{\vec{\scriptscriptstyle{Y}}}(P,b)|P\rangle,$$ and such that $X_{\vec{\scriptscriptstyle{Y}}}(P,b)$ does not contain positive components of $\mathcal{A}$, i.e. $$X_{\vec{\scriptscriptstyle{Y}}}(P,b)=\sum_{\scriptscriptstyle{l+k}=|\scriptscriptstyle{Y}|}C_{\vec{\scriptscriptstyle{Y}}}^{\scriptscriptstyle{\vec{l}},\scriptscriptstyle{\vec{k}}}(P,b)\,a_{-l_{m}}\dots a_{-l_{1}}L_{-k_{n}}\dots L_{-k_{1}},$$ where $l=\sum l_{i}$ and $k=\sum k_{j}$. It can be shown that all the coefficients $C_{\vec{\scriptscriptstyle{Y}}}^{\scriptscriptstyle{\vec{l}},\scriptscriptstyle{\vec{k}}}(P,b)$ are some polynomials in the momenta $P$ (see examples in [@Alba:2010qc]).
The system of Integrals of Motion which acts diagonally in the basis $|P\rangle_{\vec{\scriptscriptstyle{Y}}}$ was constructed in [@Alba:2010qc]. The first two representatives of this system are $$\label{I1I2}
\begin{aligned}
&\mathbf{I}_{1}=L_{0}+2\sum_{k>0}a_{-k}a_{k},\\
&\mathbf{I}_{2}=
\sum_{k\neq0}a_{-k}L_{k}+2iQ\sum_{k>0}^{\infty}ka_{-k}a_{k}+\frac{1}{3}\sum_{i+j+k=0}a_{i}a_{j}a_{k}.
\end{aligned}$$ This integrable system was studied in [@Alba:2010qc; @Belavin:2011js; @Estienne:2011qk]. In particular, it was noticed that the basis of eigenstates is very similar to the Jack basis studied above. The states $|P\rangle_{\scriptscriptstyle{Y,\varnothing}}$ as well as the states $|P\rangle_{\scriptscriptstyle{\varnothing,Y}}$ become the Jack states if one expresses the Virasoro generators $L_{n}$ in terms of bosons. In fact, there are two ways to do it $$\label{Bosonization}
\begin{gathered}
L_{n}=\sum_{k\neq0,n}c_{k}c_{n-k}+i(nQ\mp2\mathcal{P})c_{n},\quad L_{0}=\frac{Q^{2}}{4}-\mathcal{P}^{2}+2\sum_{k>0}c_{-k}c_{k},\\
[c_{n},c_{m}]=\frac{n}{2}\,\delta_{n+m,0},\quad[\mathcal{P},c_{n}]=0,\quad\mathcal{P}|P\rangle=P|P\rangle,\quad\langle P|\mathcal{P}=-P\langle P|.
\end{gathered}$$ corresponding to the choice of sign in front of the zero-mode operator $\mathcal{P}$. These two choices define two different sets of bosons $c_{k}$, which are related by a unitary transform, also called the reflection operator [@Zamolodchikov:1995aa]. The sign “$-$” works for the states $|P\rangle_{\scriptscriptstyle{Y,\varnothing}}$ while “$+$” works for $|P\rangle_{\scriptscriptstyle{\varnothing,Y}}$. For example, taking “$-$” in one can show that $$\label{Jack-basis}
|P\rangle_{\scriptscriptstyle{Y,\varnothing}}=\Omega_{\scriptscriptstyle{Y}}(P)\,\jac_{\scriptscriptstyle{Y}}^{\scriptscriptstyle{(1/g)}}(x)|P\rangle,$$ where $\jac_{\scriptscriptstyle{Y}}^{\scriptscriptstyle{(1/g)}}(x)$ is the Jack polynomial with $g=-b^{2}$, $$a_{-k}-c_{-k}=-ib\,p_{k}(x),$$ and $\Omega_{\scriptscriptstyle{Y}}(P)$ is the normalization factor, whose explicit form can be found in [@Alba:2010qc]. A statement similar to is valid for the state $|P\rangle_{\scriptscriptstyle{\varnothing,Y}}$ if one takes the sign “$+$” in . At the value $Q=0$ these two sets of bosons differ by a sign and a general state $|P\rangle_{\scriptscriptstyle{\vec{Y}}}$ can be written as a tensor product of two Jack states [@Belavin:2011js]. Remarkably, the fact that some of the states become Jack states after bosonization is valid for any $r$ (see [@Fateev:2011hq]). Using this fact and the “bootstrap” equations suggested in [@Alba:2010qc; @Fateev:2011hq] one can recursively construct all the basis states.
Supersymmetric case ($p=2$, $r=2$) {#SAGT}
==================================
In this section we construct the basis corresponding to the case $p=2$, $r=2$ from the general scheme. On the algebraic side we expect to deal with the algebra $\mathcal{A}=\mathcal{H}\oplus\mathcal{H}\oplus\mathcal{F}\oplus \textsf{NSR}$.
Geometrical setup
-----------------
By $X_2$ we denote the ALE space, which is the minimal resolution of the factor space $\mathbb{C}^{2}/\mathbb{Z}_{2}$. This space can be constructed by gluing two charts $\mathbb{C}^2$ with coordinates: $$\text{1}: \quad \mathbb{C}^2\; (u_1,v_1)\quad u_2=v_1^{-1},\,\; v_2=u_1v_1^2 \qquad\qquad
\text{2}: \quad \mathbb{C}^2\; (u_2,v_2)\quad u_1=u_2^2v_2,\,\; v_1=u_2^{-1}$$ There is a map $\mathbb{C}^2\backslash \{0\} \rightarrow X_2$ given in coordinates $u_1=z_1^2, v_1=z_2/z_1$ in the first chart and $u_2=z_1/z_2, v_2=z_2^2$ in the second chart. Points $(z_1,z_2)$ and $(-z_1,-z_2)$ have the same image under this map. Hence we obtain the projection $$\pi\colon X_2 \rightarrow \mathbb{C}^{2}/\mathbb{Z}_{2},$$ which turns out to be the minimal resolution of the singularity. The preimage of $(0,0)\in \mathbb{C}^2$ is the exceptional divisor $C \subset X_2$. In the first and the second charts $C$ is given by the equations $u_1=0$ and $v_2=0$ respectively.
The torus action on $X_2$ arises from the torus action on $\mathbb{C}^2$: $$\text{1:}\quad (u_1,v_1)\mapsto (t_1^2u_1,t^{-1}_1t_2v_1); \qquad \text{2:}\quad (u_2,v_2)\mapsto (t_1t^{-1}_2u_2,t_2^2v_2).$$ There are two points invariant under the torus action, namely $p_1$ and $p_2$, the origins of the first and second charts respectively.
Let $\mathcal{M}=\bigsqcup_{N}\mathcal{M}(X_{2},2,N)$ be the moduli space of framed torsion free sheaves on $X_{2}$ of rank $2$ with Chern classes $c_{1}=0$, $c_{2}=N$ [@1166.14007]. The torus $T=(\mathbb{C}^*)^2\times (\mathbb{C}^*)^{2}$ acts on the manifold $\mathcal{M}$. The action of the first $(\mathbb{C}^*)^2$ arises from the two rotations of $\mathbb{C}^2$, while the second $(\mathbb{C}^*)^{2}$ acts on the framing at infinity.
The fixed points of the torus action were described in [@2011CMaPh.304..395B]. They are labeled by a pair of pairs of Young diagrams $\vec{Y}^{\scriptscriptstyle{(\sigma)}}=(Y_{1}^{\scriptscriptstyle{(\sigma)}},Y_{2}^{\scriptscriptstyle{(\sigma)}})$, $\sigma=1,2$, and an integer $k\in\mathbb{Z}$. The pair of Young diagrams $\vec{Y}^{\scriptscriptstyle{(\sigma)}}$ describes the corresponding sheaf $\mathcal{E}_{\scriptscriptstyle\vec{Y}^{\scriptscriptstyle{(\sigma)}},k}$ near the invariant point $p_\sigma$, while $k$ indicates that $\mathcal{E}_{\scriptscriptstyle\vec{Y}^{\scriptscriptstyle{(\sigma)}},k}$ is a subsheaf of $\mathcal{O}(kC)+\mathcal{O}(-kC)$.
The determinant of the vector field $v=(\epsilon_{1},\epsilon_{2},a)$ at the fixed point $p_{\scriptscriptstyle\vec{Y}^{\scriptscriptstyle{(\sigma)}},k}$ equals [@2011CMaPh.304..395B] $$\label{Zvec-def-2}
\det v\Bigl|_{p_{\scriptscriptstyle\vec{Y}^{\scriptscriptstyle{(\sigma)}},k}}=\frac{l_{\vec{k}}(\vec{a}|\epsilon_{1},\epsilon_{2})}
{Z_{\textsf{vec}}^{(2)}(\vec{a}+\epsilon_{1}\vec{k},\vec{Y}^{\scriptscriptstyle{(1)}}|2\epsilon_{1},\epsilon_{2}-\epsilon_{1})
Z_{\textsf{vec}}^{(2)}(\vec{a}+\epsilon_{2}\vec{k},\vec{Y}^{\scriptscriptstyle{(2)}}|\epsilon_{1}-\epsilon_{2},2\epsilon_{2})},$$ where $\vec{k}=(k,-k)$, function $Z_{\textsf{vec}}^{(2)}(\vec{a},\vec{Y}|\epsilon_{1},\epsilon_{2})$ is given by and the factor $l_{\vec{k}}(\vec{a}|\epsilon_{1},\epsilon_{2})$ is $$\label{l-vec}
l_{\vec{k}}(\vec{a}|\epsilon_{1},\epsilon_{2})=(-1)^{k}\times
\begin{cases}
l(2a,k)l(\epsilon_{1}+\epsilon_{2}+2a,k)\quad\qquad\:\,\text{if}\quad k>0,\\
l(-2a,-k)l(\epsilon_{1}+\epsilon_{2}-2a,-k)\quad\text{if}\quad k<0,
\end{cases}$$ where $$l(x,n)=\prod_{\substack{i,j\geq1,\;i+j\leq2n\\i+j\equiv0\mod 2}}\hspace*{-10pt}(x+(i-1)\epsilon_{1}+(j-1)\epsilon_{2}).$$ The two factors $Z_{\textsf{vec}}^{(2)}$ in arise from the points $p_1, p_2 \in X_2$ invariant under the torus action. The factor $l_{\vec{k}}$ arises from the exceptional divisor; we will call it the blow-up factor.
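As a quick sanity check, the product defining $l(x,n)$ can be evaluated exactly for small $n$ by enumerating the admissible pairs $(i,j)$. The sketch below is our own illustration (the names `l`, `eps1`, `eps2` are ours, not the paper's notation made into an API):

```python
from fractions import Fraction

def l(x, n, eps1, eps2):
    """Product over pairs i,j >= 1 with i+j <= 2n and i+j even
    of the factors x + (i-1)*eps1 + (j-1)*eps2."""
    result = Fraction(1)
    for i in range(1, 2 * n):
        for j in range(1, 2 * n):
            if i + j <= 2 * n and (i + j) % 2 == 0:
                result *= x + (i - 1) * eps1 + (j - 1) * eps2
    return result

x, e1, e2 = Fraction(1), Fraction(2, 3), Fraction(5, 7)
# n=1: only the pair (1,1) contributes, so l(x,1) = x
assert l(x, 1, e1, e2) == x
# n=2: pairs (1,1), (1,3), (3,1), (2,2) contribute
assert l(x, 2, e1, e2) == x * (x + 2 * e2) * (x + 2 * e1) * (x + e1 + e2)
```

For $n=2$ the four admissible pairs reproduce the four linear factors expected from the definition, as the assertions verify with exact rational arithmetic.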
The instanton part of the Nekrasov partition function for the pure $U(2)$ gauge theory on $X_{2}$ can be written as [@Bonelli:2011jx] $$\label{BMT-formula}
Z_{\text{pure}}^{(2,X_{2})}(\vec{a},\epsilon_{1},\epsilon_{2}|\Lambda)=\sum_{k\in\mathbb{Z}}
\frac{\Lambda^{2k^{2}}}{l_{\vec{k}}(\vec{a}|\epsilon_{1},\epsilon_{2})}
Z_{\text{pure}}^{(2)}(\vec{a}+\epsilon_{1}\vec{k},2\epsilon_{1},\epsilon_{2}-\epsilon_{1}|\Lambda)
Z_{\text{pure}}^{(2)}(\vec{a}+\epsilon_{2}\vec{k},\epsilon_{1}-\epsilon_{2},2\epsilon_{2}|\Lambda),$$ where $Z_{\text{pure}}^{(2)}(\vec{a},\epsilon_{1},\epsilon_{2}|\Lambda)$ is given by . Equations and give some hint about the structure of the basis of states in this case. Namely, the r.h.s. of is expressed in terms of two partition functions (corresponding to the case $p=1$, $r=2$ from our scheme) with parameters $$\begin{aligned}
&\epsilon_{1}^{\scriptscriptstyle{(1)}}=2\epsilon_{1},&\qquad
&\epsilon_{2}^{\scriptscriptstyle{(1)}}=\epsilon_{2}-\epsilon_{1},\\
&\epsilon_{1}^{\scriptscriptstyle{(2)}}=\epsilon_{1}-\epsilon_{2},&\qquad
&\epsilon_{2}^{\scriptscriptstyle{(2)}}=2\epsilon_{2}.
\end{aligned}$$ We note that if we define CFT parameters $b^{\scriptscriptstyle{(\sigma)}}$ by $$(b^{\scriptscriptstyle{(\sigma)}})^{2}=\frac{\epsilon_{1}^{\scriptscriptstyle{(\sigma)}}}{\epsilon_{2}^{\scriptscriptstyle{(\sigma)}}},$$ then they are subject to the relation $$\label{bb-relat-1}
(b^{\scriptscriptstyle{(1)}})^{2}+(b^{\scriptscriptstyle{(2)}})^{-2}=-2.$$ One can propose that a similar relation should hold in CFT terms too. Namely, in algebraic language we expect that in the algebra $\mathcal{H}\oplus\mathcal{H}\oplus\mathcal{F}\oplus \textsf{NSR}$ there are two commuting subalgebras $\mathcal{H}\oplus\mathsf{Vir}$ with the parameters $b^{\scriptscriptstyle{(1)}}$ and $b^{\scriptscriptstyle{(2)}}$ satisfying . In the next subsection we give an explicit construction of these two subalgebras.
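The relation $(b^{\scriptscriptstyle{(1)}})^{2}+(b^{\scriptscriptstyle{(2)}})^{-2}=-2$ follows identically from $(b^{\scriptscriptstyle{(\sigma)}})^{2}=\epsilon_{1}^{\scriptscriptstyle{(\sigma)}}/\epsilon_{2}^{\scriptscriptstyle{(\sigma)}}$ with the shifted equivariant parameters listed above. A minimal exact-arithmetic check (our own sketch; the function name is ours):

```python
from fractions import Fraction

def relation(eps1, eps2):
    """(b^(1))^2 + (b^(2))^(-2) for the shifted equivariant parameters
    eps^(1) = (2*eps1, eps2-eps1), eps^(2) = (eps1-eps2, 2*eps2)."""
    b1_sq = 2 * eps1 / (eps2 - eps1)
    b2_sq = (eps1 - eps2) / (2 * eps2)
    return b1_sq + 1 / b2_sq

# The identity holds for arbitrary eps1 != eps2, eps2 != 0; spot-check exactly:
for e1, e2 in [(1, 3), (2, 7), (-5, 4)]:
    assert relation(Fraction(e1), Fraction(e2)) == -2
```

Algebraically, $2\epsilon_{1}/(\epsilon_{2}-\epsilon_{1})+2\epsilon_{2}/(\epsilon_{1}-\epsilon_{2})=-2$, which is what the loop confirms at sample rational points.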
Algebraic setup
---------------
As claimed above, this case corresponds to the algebra $\mathcal{A}=\mathcal{H}\oplus\mathcal{H}\oplus\mathcal{F}\oplus \textsf{NSR}$. Let us first introduce the notation. The commutation relations of the Neveu-Schwarz-Ramond algebra are known to be $$\label{NS-comm-relat}
\begin{aligned}
&[L_{n},L_{m}]=(n-m)L_{n+m}+\frac{c_{\textsf{\tiny{NSR}}}}{8}(n^{3}-n)\delta_{n+m},\\
&\{G_{r},G_{s}\}= 2L_{r+s}+\frac{1}{2}c_{\textsf{\tiny{NSR}}}(r^{2}-\frac{1}{4})\delta_{r+s,0},\\
&[L_{n},G_{r}]=\left(\frac{1}{2}n-r\right)G_{n+r}.
\end{aligned}$$ The central charge $c_{\scriptscriptstyle{\textsf{NSR}}}$ is parameterized as follows $$c_{\textsf{\tiny{NSR}}}=1+2Q^{2}, \quad Q=b+\frac{1}{b}.$$ The indices $r$ and $s$ in are either integer (the Ramond sector) or half-odd-integer (the Neveu-Schwarz sector). Below we will consider the Neveu-Schwarz sector. The highest weight representation in this case is defined by the vacuum state $|P\rangle_{\scriptscriptstyle{\textsf{NS}}}$ $$\label{NS-vac}
L_{n}|P\rangle_{\scriptscriptstyle{\textsf{NS}}}=G_{r}|P\rangle_{\scriptscriptstyle{\textsf{NS}}}=0\quad\text{for}\quad n,r>0,\qquad
L_{0}|P\rangle_{\scriptscriptstyle{\textsf{NS}}}=\Delta_{\textsf{\tiny{NS}}}(Q/2+P,b)|P\rangle_{\scriptscriptstyle{\textsf{NS}}},$$ where $$\label{Delta-NS}
\Delta_{\textsf{\tiny{NS}}}(\alpha,b)=\frac{1}{2}\alpha(Q-\alpha).$$
### Two commutative Virasoro algebras
We will extend our algebra by two additional Heisenberg algebras $\mathcal{H}$ and one fermion algebra $\mathcal{F}$. Let us first adjoin an additional free fermion (in the Neveu-Schwarz sector) $$\label{free-fermion}
\{f_{r},f_{s}\}=\delta_{r+s,0},\quad
r,s\in\mathbb{Z}+\frac{1}{2}$$ and we also assume that it anticommutes with the generators $G_{r}$ $$\{G_{r},f_{s}\}=0.$$ It was pointed out in [@Crnkovic:1989gy; @Crnkovic:1989ug; @Lashkevich:1992sb] that there exists a non-trivial embedding of two commuting Virasoro algebras in $\mathcal{F}\oplus\mathsf{NSR}$, which will be an essential point of our construction[^5]. Following [@Crnkovic:1989gy; @Crnkovic:1989ug; @Lashkevich:1992sb] we can notice that the combinations $$\label{2-Vir}
\begin{aligned}
&L_{n}^{\scriptscriptstyle{(1)}}= \frac{1}{1-b^{2}}L_{n} -\frac{1+2b^{2}}{2(1-b^{2})}\sum_{r=-\infty}^{\infty}r:f_{n-r}f_{r}:+\frac{b}{1-b^{2}}\sum_{r=-\infty}^{\infty}f_{n-r}G_{r} ,\\
&L_{n}^{\scriptscriptstyle{(2)}}= \frac{1}{1-b^{-2}}L_{n} -\frac{1+2b^{-2}}{2(1-b^{-2})}\sum_{r=-\infty}^{\infty}r:f_{n-r}f_{r}:+\frac{b^{-1}}{1-b^{-2}}\sum_{r=-\infty}^{\infty}f_{n-r}G_{r},
\end{aligned}$$ commute with each other and satisfy the Virasoro commutation relations i.e. $$\label{two-Virasoro}
\begin{gathered}
[L_{n}^{\scriptscriptstyle{(1)}},L_{m}^{\scriptscriptstyle{(2)}}]=0,\\
[L_{n}^{\scriptscriptstyle{(\sigma)}},L_{m}^{\scriptscriptstyle{(\sigma)}}]=(n-m)L_{n+m}^{\scriptscriptstyle{(\sigma)}}+\frac{c^{\scriptscriptstyle{(\sigma)}}}{12}(n^{3}-n)\,\delta_{n+m,0},
\end{gathered}$$ with
$$\label{newbQ}
c^{\scriptscriptstyle{(\sigma)}}=1+6Q^{(\sigma)}\,^{2}, \quad
Q^{\scriptscriptstyle{(\sigma)}}=b^{(\sigma)}+1/b^{(\sigma)}\quad\text{and}\quad
b^{\scriptscriptstyle{(1)}} =\frac{2b}{\sqrt{2-2b^{2}}},\quad
(b^{\scriptscriptstyle{(2)}})^{-1}=\frac{2b^{-1}}{\sqrt{2-2b^{-2}}}.$$
We note that the parameters $b^{\scriptscriptstyle{(1)}}$ and $b^{\scriptscriptstyle{(2)}}$ satisfy the relation .
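Squaring the definitions of $b^{\scriptscriptstyle{(1)}}$ and $(b^{\scriptscriptstyle{(2)}})^{-1}$ gives rational functions of $b^{2}$, so the relation can also be verified here with exact arithmetic. A short sketch of our own (function names are ours):

```python
from fractions import Fraction

def b1_sq(b):
    # (b^(1))^2 = 4*b^2 / (2 - 2*b^2), from b^(1) = 2b / sqrt(2-2b^2)
    return 4 * b**2 / (2 - 2 * b**2)

def b2_inv_sq(b):
    # (b^(2))^(-2) = 4*b^(-2) / (2 - 2*b^(-2)), from (b^(2))^(-1) = 2b^(-1) / sqrt(2-2b^(-2))
    return (4 / b**2) / (2 - 2 / b**2)

# check (b^(1))^2 + (b^(2))^(-2) = -2 at sample rational values of b
for b in [Fraction(3, 2), Fraction(1, 5), Fraction(-7, 4)]:
    assert b1_sq(b) + b2_inv_sq(b) == -2
```

Since only squares of the two parameters enter, the check is insensitive to the branch of the square roots in the definitions.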
Consider the highest weight representation $\pi_{\scriptscriptstyle{\mathcal{F}\oplus\mathsf{NSR}}}=\pi_{\scriptscriptstyle{\mathcal{F}}}\otimes\pi_{\scriptscriptstyle{\mathsf{NSR}}}$ of the algebra $\mathcal{F}\oplus\mathsf{NSR}$. In other words we extend the definition of the highest weight vector by demanding that $$f_{r}|P\rangle_{\scriptscriptstyle{\textsf{NS}}}=0,\qquad\text{for}\quad r>0.$$ For general values of the momenta $P$ the highest weight representation $\pi_{\scriptscriptstyle\mathcal{F}\oplus\mathsf{NSR}}$ is irreducible. Its character is given by $$\label{Char-NSfer}
\chi_{\scriptscriptstyle\mathcal{F}\oplus\mathsf{NSR}}(q)=\chi_{\mathcal{F}}(q)^{2}\chi_{\mathcal{B}}(q),$$ where $$\chi_{\mathcal{F}}(q)=\prod_{k>0}(1+q^{k-\frac{1}{2}}),\qquad
\chi_{\mathcal{B}}(q)=\prod_{k>0}\frac{1}{(1-q^{k})}$$ are fermionic and bosonic characters[^6].
We see from that there is a natural action of the two Virasoro algebras in the representation $\pi_{\scriptscriptstyle\mathcal{F}\oplus\mathsf{NSR}}$. As a representation of $\mathsf{Vir}\oplus\mathsf{Vir}$ it is no longer irreducible and for general values of the momenta $P$ can be decomposed into a direct sum of Verma modules $\pi_{\scriptscriptstyle\mathsf{Vir}\oplus \mathsf{Vir}}$ over the algebra $\mathsf{Vir}\oplus\mathsf{Vir}$. The character of each of the $\pi_{\scriptscriptstyle\mathsf{Vir}\oplus \mathsf{Vir}}$ is given by $$\label{Char-VirVir}
\chi_{\scriptscriptstyle\mathsf{Vir}\oplus \mathsf{Vir}}(q)=\chi_{\mathcal{B}}(q)^{2}.$$ Using a consequence of the Jacobi triple product identity $$\prod_{k>0}(1+q^{k-\frac{1}{2}})^{2}(1-q^{k})=\sum_{k\in\mathbb{Z}}q^{\frac{k^{2}}{2}}=1+2q^{\frac{1}{2}}+2q^{2}+2q^{\frac{9}{2}}+\dots$$ we see that $$\label{char-decomposition}
\chi_{\scriptscriptstyle\mathcal{F}\oplus\mathsf{NSR}}(q)=
\sum_{k\in\mathbb{Z}}q^{\frac{k^{2}}{2}}\chi_{\scriptscriptstyle\mathsf{Vir}\oplus \mathsf{Vir}}(q),$$ which implies the decomposition (see fig. \[NS-decompos-pic\])
![Decomposition of an irreducible representation of the algebra $\mathcal{F}\oplus\mathsf{NSR}$ into a direct sum of representations of the algebra $\mathsf{Vir}\oplus\mathsf{Vir}$. Each interior angle corresponds to a Verma module $\pi^{k}_{\scriptscriptstyle\mathsf{Vir}\oplus \mathsf{Vir}}$ over the algebra $\mathsf{Vir}\oplus\mathsf{Vir}$ whose conformal dimension is shifted by $k^{2}/2$ as in .[]{data-label="NS-decompos-pic"}](NSrep.eps){width=".6\textwidth"}
$$\label{NS-decompos}
\pi_{\scriptscriptstyle\mathcal{F}\oplus\mathsf{NSR}}=\bigoplus_{k\in\mathbb{Z}} \pi^{k}_{\scriptscriptstyle\mathsf{Vir}\oplus \mathsf{Vir}},$$
where $\pi^{k}_{\scriptscriptstyle\mathsf{Vir}\oplus \mathsf{Vir}}$ is the Verma module of $\mathsf{Vir}\oplus\mathsf{Vir}$ with the highest weight $|P,k\rangle$. The highest weight state $|P,k\rangle$ is defined as $$\label{high-weight-def}
\begin{gathered}
L_{n}^{\scriptscriptstyle{(1)}}|P,k\rangle=L_{n}^{\scriptscriptstyle{(2)}}|P,k\rangle=0\qquad\text{for}\qquad n>0,\\
L_{0}^{\scriptscriptstyle{(1)}}|P,k\rangle=\Delta^{\scriptscriptstyle{(1)}}(P,k)|P,k\rangle,\qquad
L_{0}^{\scriptscriptstyle{(2)}}|P,k\rangle=\Delta^{\scriptscriptstyle{(2)}}(P,k)|P,k\rangle,
\end{gathered}$$ where the conformal dimensions $\Delta^{\scriptscriptstyle{(1)}}(P,k)$ and $\Delta^{\scriptscriptstyle{(2)}}(P,k)$ satisfy the relation $$\label{Delta-shifted}
\Delta^{\scriptscriptstyle{(1)}}(P,k)+\Delta^{\scriptscriptstyle{(2)}}(P,k)=\Delta_{\scriptscriptstyle{\textsf{NS}}}(Q/2+P,b)+\frac{k^{2}}{2}.$$ Equation follows from the relation $$L_{0}^{\scriptscriptstyle{(1)}}+L_{0}^{\scriptscriptstyle{(2)}}=L_{0}+L_{0}^{\textrm{f}},$$ where $L_{0}^{\textrm{f}}$ is the zeroth component of the stress-energy tensor for the free fermion $$L_{0}^{\textrm{f}}=\sum_{r=1/2}^{\infty}r f_{-r}f_{r}.$$
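The triple-product identity used in the character decomposition above can be checked order by order. Working in the variable $t=q^{1/2}$, so that all exponents are integers, a truncated-series sketch of our own reads:

```python
# Check prod_{k>0} (1+q^{k-1/2})^2 (1-q^k) = sum_{k in Z} q^{k^2/2}
# in the variable t = q^{1/2}, as lists of series coefficients.
N = 40  # truncation order in t

def mul(p, shift, sign):
    """Multiply the truncated series p by (1 + sign * t^shift)."""
    out = p[:]
    for d in range(N - shift + 1):
        out[d + shift] += sign * p[d]
    return out

lhs = [0] * (N + 1)
lhs[0] = 1
for k in range(1, N + 1):
    lhs = mul(lhs, 2 * k - 1, +1)   # (1 + t^{2k-1}) ...
    lhs = mul(lhs, 2 * k - 1, +1)   # ... squared
    lhs = mul(lhs, 2 * k, -1)       # (1 - t^{2k})

rhs = [0] * (N + 1)
k = 0
while k * k <= N:
    rhs[k * k] += 1 if k == 0 else 2   # t^{k^2} from k and -k
    k += 1

assert lhs == rhs
assert lhs[:5] == [1, 2, 0, 0, 2]   # i.e. 1 + 2q^{1/2} + 2q^2 + ...
```

The first coefficients reproduce the expansion $1+2q^{1/2}+2q^{2}+2q^{9/2}+\dots$ quoted with the identity.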
In order to construct the highest weight states $|P,k\rangle$ in more explicit terms and to compute the conformal dimensions $\Delta^{\scriptscriptstyle{(1)}}(P,k)$ and $\Delta^{\scriptscriptstyle{(2)}}(P,k)$ we consider a free-field representation for the algebra. There exist two alternative free-field representations (corresponding to the choice of sign in front of the operator $\mathcal{P}$) $$\label{NS-bosonization}
\begin{aligned}
&L_{n}= \frac{1}{2}\sum_{k\neq 0,n}c_{k}c_{n-k}+\frac{1}{2}\sum_{r}(r-\frac{n}{2})\psi_{n-r}\psi_{r}+\frac{i}{2}(Qn\mp 2\mathcal{P})c_{n},\\
&L_{0}=\sum_{k>0}c_{-k}c_{k}+\sum_{r>0}r\psi_{-r}\psi_{r}+\frac{1}{2}\big(\frac{Q^{2}}{4}-\mathcal{P}^{2}\big),\\
&G_{r}= \sum_{n\neq 0}c_{n}\psi_{r-n}+i(Qr\mp \mathcal{P})\psi_{r},\qquad
\mathcal{P}|P\rangle_{\scriptscriptstyle{\textsf{NS}}}=P|P\rangle_{\scriptscriptstyle{\textsf{NS}}},
\end{aligned}$$ where the operator of zero mode $\mathcal{P}$, bosonic components $c_{n}$ and fermionic components $\psi_{r}$ satisfy commutation relations $$\label{NS-bosonization-comm-relat}
\begin{gathered}
[c_{n},c_{m}]= n\,\delta_{n+m,0},\quad
\{\psi_{r},\psi_{s}\} =\delta_{r+s,0},\\
[\mathcal{P},c_{n}]=[\mathcal{P},\psi_{r}]=0.
\end{gathered}$$ It is convenient to introduce the combinations $$\chi_{r}=f_{r}-i\psi_{r},$$ then one can show that the state $$\label{high-weight-def2}
|P,k\rangle=\Omega_{k}(P)\,\chi_{-\frac{1}{2}}\chi_{-\frac{3}{2}}\dots\chi_{-\frac{2|k|-1}{2}}|\text{\textrm{vac}}\rangle,$$ is the highest weight vector, i.e. it satisfies the conditions , and $|\text{\textrm{vac}}\rangle$ is the vacuum state defined by $$c_{n}|\text{\textrm{vac}}\rangle=\psi_{r}|\text{\textrm{vac}}\rangle=f_{r}|\text{\textrm{vac}}\rangle=0,\quad\text{for}\quad n,r>0.$$ The last statement can be derived using the relations $$\begin{aligned}
&[L_{n}^{\scriptscriptstyle{(1)}}+L_{n}^{\scriptscriptstyle{(2)}},\chi_{r}]=-\left(\frac{n}{2}+r\right)\chi_{r+n},\\
&[bL_{n}^{\scriptscriptstyle{(1)}}+b^{-1}L_{n}^{\scriptscriptstyle{(2)}},\chi_{r}]=-\left((n+r)Q\mp\mathcal{P}\right)\chi_{r+n}+
i\sum_{m \neq 0}c_{m}\chi_{r+n-m}.
\end{aligned}$$ The choice of sign in front of the zero-mode operator $\mathcal{P}$ in corresponds to $k>0$ or $k<0$ in . Choosing “$\mp$” in we define two different sets of generators $c_{k}$ and $\psi_{r}$. As in the bosonic case, they are related by a unitary transform (in particular, if $Q=0$ they just differ by a sign).
Using one can compute $$\Delta^{\scriptscriptstyle{(1)}}(P,k)=\frac{(Q^{\scriptscriptstyle{(1)}})^{2}}{4}-\left(P^{\scriptscriptstyle{(1)}}
+\frac{kb^{\scriptscriptstyle{(1)}}}{2}\right)^{2},\quad
\Delta^{\scriptscriptstyle{(2)}}(P,k)=\frac{(Q^{\scriptscriptstyle{(2)}})^{2}}{4}-\left(P^{\scriptscriptstyle{(2)}}
+\frac{k}{2b^{\scriptscriptstyle{(2)}}}\right)^{2},$$ where parameters $b^{\scriptscriptstyle{(\sigma)}}$ and $Q^{\scriptscriptstyle{(\sigma)}}$ are given by and $$\label{newP}
P^{\scriptscriptstyle{(1)}}=\frac{P}{\sqrt{2-2b^{2}}}\quad\text{and\quad}P^{\scriptscriptstyle{(2)}}=\frac{P}{\sqrt{2-2b^{-2}}}.$$ One can also define the conjugate state $\langle k',P'|$: $$\label{high-weight-def2-dual}
\langle k',P'|=\Omega_{k'}(P')\langle\text{\textrm{vac}}|\chi_{\frac{2|k'|-1}{2}}\dots\chi_{\frac{1}{2}}.$$ This choice is consistent with the following conjugation $f_{r}^{+}=-f_{-r}$. We chose the normalization factors $\Omega_{k}(P)$ in and such that $$|P,k\rangle=\left(\bigl(G_{-\frac{1}{2}}\bigr)^{k^{2}}+\dots\right)|P\rangle,\qquad
\langle k',P'|=\langle P'|\left(\bigl(G_{\frac{1}{2}}\bigr)^{k'^{2}}+\dots\right),$$ where the omitted terms have smaller degree in $G$. One can find that $$\Omega_{k}(P)=\frac{1}{2}\,\prod_{m+n\leq 2|k|}(2P+mb+nb^{-1}).$$ This normalization is standard in CFT and, on the other hand, it coincides with the geometric normalization. The norm of the state $|P,k\rangle$ equals the determinant of the vector field[^7] $$\langle k,P|P,k\rangle=\det v\Bigl|_{p_{\scriptscriptstyle{(\varnothing,\varnothing),(\varnothing,\varnothing)},k}}$$ and coincides with the factor .
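The sum rule $\Delta^{\scriptscriptstyle{(1)}}(P,k)+\Delta^{\scriptscriptstyle{(2)}}(P,k)=\Delta_{\scriptscriptstyle{\textsf{NS}}}(Q/2+P,b)+k^{2}/2$ can be verified exactly: after expanding the squares, every term of $\Delta^{\scriptscriptstyle{(1)}}+\Delta^{\scriptscriptstyle{(2)}}$ is a rational function of $b$ and $P$. A sketch of our own with exact rational arithmetic (all variable names are ours):

```python
from fractions import Fraction

def delta_sum(P, k, b):
    """Delta^(1)(P,k) + Delta^(2)(P,k), expanded into rational terms."""
    b1sq = 4 * b**2 / (2 - 2 * b**2)      # (b^(1))^2
    b2sq = (b**2 - 1) / 2                 # (b^(2))^2
    P1sq = P**2 / (2 - 2 * b**2)          # (P^(1))^2
    P2sq = P**2 * b**2 / (2 * b**2 - 2)   # (P^(2))^2
    P1b1 = 2 * b * P / (2 - 2 * b**2)     # P^(1) * b^(1)
    P2ob2 = b * P / (b**2 - 1)            # P^(2) / b^(2)
    Q1sq = b1sq + 2 + 1 / b1sq            # (Q^(1))^2
    Q2sq = b2sq + 2 + 1 / b2sq            # (Q^(2))^2
    d1 = Q1sq / 4 - P1sq - k * P1b1 - k**2 * b1sq / 4
    d2 = Q2sq / 4 - P2sq - k * P2ob2 - k**2 / (4 * b2sq)
    return d1 + d2

b, P = Fraction(3, 2), Fraction(2, 5)
Q = b + 1 / b
for k in range(-3, 4):
    delta_ns = Fraction(1, 2) * (Q**2 / 4 - P**2)   # Delta_NS(Q/2 + P, b)
    assert delta_sum(P, k, b) == delta_ns + Fraction(k**2, 2)
```

The $k$-linear terms cancel and the $k^{2}$ terms combine via $(b^{\scriptscriptstyle{(1)}})^{2}+(b^{\scriptscriptstyle{(2)}})^{-2}=-2$ into exactly $k^{2}/2$, which the loop confirms at sample rational points.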
### Construction of the basis
Now we can multiply our algebra $\mathcal{F}\oplus\mathsf{NSR}$ by two additional Heisenberg algebras $\mathcal{H}\oplus\mathcal{H}$ with generators $h_{n}$ and $w_{n}$ $$[h_{n},h_{m}]=[w_{n},w_{m}]=n\delta_{n+m,0},\qquad
[h_{n},w_{m}]=0.$$ The sets of bosons $w_{n}$ and $h_{n}$ are of a different nature. In particular, the bosons $w_{n}$ are analogous to the bosons $\mathtt{a}_{n}$ and $a_{n}$ considered in section \[AFLT\] and enter the vertex operators in a non-symmetric way (see e.g. – and compare to and ). On the contrary, the bosons $h_{n}$ always enter the vertex operators in a symmetric way (see ). From the point of view of the scheme the bosons $w_{n}$ correspond to the factor $\mathcal{H}$ in $\mathcal{H}\oplus\widehat{\mathfrak{sl}}(2)_{2}\oplus\mathsf{NSR}$, while the bosons $h_{n}$ belong to the free-field representation of the $\widehat{\mathfrak{sl}}(2)_{2}$ algebra.
We define also another set of generators $$\label{rotated-bosons}
a_{n}^{\scriptscriptstyle{(1)}}=\frac{1}{\sqrt{2-2b^{2}}}(w_{n}-ibh_{n}),\qquad
a_{n}^{\scriptscriptstyle{(2)}}=\frac{1}{\sqrt{2-2b^{-2}}}(w_{n}-ib^{-1}h_{n}),$$ such that $$\label{rotated-bosons-comm-relat}
[a_{n}^{\scriptscriptstyle{(\sigma)}},a_{m}^{\scriptscriptstyle{(\rho)}}]=\frac{n}{2}\delta_{n+m,0}\,\delta_{\sigma,\rho},\qquad
\sigma,\rho=1,2.$$ Thus in the algebra $\mathcal{H}\oplus\mathcal{H}\oplus\mathcal{F}\oplus \textsf{NSR}$ we have two subalgebras $\mathcal{H}\oplus\mathsf{Vir}$ with generators $a_{n}^{\scriptscriptstyle{(\sigma)}}$ and $L_{n}^{\scriptscriptstyle{(\sigma)}}$ for $\sigma=1,2$ which satisfy , and the obvious relations $$[L_{n}^{\scriptscriptstyle{(\sigma)}},a_{m}^{\scriptscriptstyle{(\rho)}}]=0.$$ We note that the bosons $a_{n}^{\scriptscriptstyle{(1)}}$ and $a_{n}^{\scriptscriptstyle{(2)}}$ enter our construction in a completely symmetric way (together with the symmetry $b\rightarrow1/b$). For each of these subalgebras we can define the integrable system : $$\label{I1I2-new}
\begin{aligned}
&\mathbf{I}_{1}^{\scriptscriptstyle{(\sigma)}}=L_{0}^{\scriptscriptstyle{(\sigma)}}+2\sum_{k>0}a_{-k}^{\scriptscriptstyle{(\sigma)}}a_{k}^{\scriptscriptstyle{(\sigma)}},\\
&\mathbf{I}_{2}^{\scriptscriptstyle{(\sigma)}}= \sum_{k\neq0}a_{-k}^{\scriptscriptstyle{(\sigma)}}L_{k}^{\scriptscriptstyle{(\sigma)}}+2iQ\sum_{k>0}^{\infty}ka_{-k}^{\scriptscriptstyle{(\sigma)}}a_{k}^{\scriptscriptstyle{(\sigma)}}+
\frac{1}{3}\sum_{i+j+k=0}a_{i}^{\scriptscriptstyle{(\sigma)}}a_{j}^{\scriptscriptstyle{(\sigma)}}a_{k}^{\scriptscriptstyle{(\sigma)}}.
\end{aligned}$$ The eigenvectors for this integrable system can be easily found. First, we redefine the highest weight states by demanding that $$h_{n}|P,k\rangle=w_{n}|P,k\rangle=0\quad\text{for}\quad n>0.$$ Then the eigenvectors can be written in the form $$\label{new-eigenfunctions}
|P,k\rangle_{\scriptscriptstyle{\vec{Y}}^{(1)},\scriptscriptstyle{\vec{Y}}^{(2)}}\overset{\text{def}}{=}
X_{\scriptscriptstyle{\vec{Y}}^{(1)}}
\Bigl(P^{\scriptscriptstyle{(1)}}+\frac{kb^{\scriptscriptstyle{(1)}}}{2},b^{\scriptscriptstyle{(1)}} \Bigr)
X_{\scriptscriptstyle{\vec{Y}}^{(2)}}
\Bigl(P^{\scriptscriptstyle{(2)}}+\frac{k}{2b^{\scriptscriptstyle{(2)}}},b^{\scriptscriptstyle{(2)}} \Bigr)|P,k\rangle,$$ where $\vec{Y}^{\scriptscriptstyle{(1)}}$ and $\vec{Y}^{\scriptscriptstyle{(2)}}$ are two pairs of Young diagrams and the parameters $b^{(\sigma)}$ and $P^{(\sigma)}$ are given by and . The operators $X_{\scriptscriptstyle{\vec{Y}^{(\sigma)}}}(P^{\scriptscriptstyle{(\sigma)}},b^{\scriptscriptstyle{(\sigma)}})$ in are given by and consist of the generators $L_{-n}^{\scriptscriptstyle{(\sigma)}}$ and $a_{-n}^{\scriptscriptstyle{(\sigma)}}$.
We claim that the basis factorizes the matrix elements of certain primary operators analogous to . It is remarkable that, compared to the case $p=1$, we have infinitely many of them $$\label{super-primary-k}
\mathbb{V}_{\alpha}^{(m)}\qquad m\in\mathbb{Z},$$ which correspond to the highest weight states $|P,m\rangle$ due to the operator–state correspondence. Only the field $\mathbb{V}_{\alpha}^{\scriptscriptstyle{(0)}}$ corresponds to a primary field of the algebra; the rest correspond to descendant fields with conformal dimensions under the “total” stress-energy tensor $T(z)+T^{\textrm{f}}(z)$ $$\Delta_{\scriptscriptstyle{\textsf{NS}}}(\alpha)+\frac{m^{2}}{2},$$ where $T^{\textrm{f}}(z)$ is the stress-energy tensor for the Majorana fermion $f_{r}$. The first few examples of the fields $\mathbb{V}_{\alpha}^{(m)}$ can be easily calculated: $$\label{super-primary}
\begin{aligned}
&\mathbb{V}_{\alpha}^{\scriptscriptstyle{(0)}}(z)=\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z)\cdot \mathcal{W}_{\alpha}(z),\\
&\mathbb{V}_{\alpha}^{\scriptscriptstyle{(1)}}(z)=\bigl(\alpha f(z)\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z)+
\Psi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z)\bigr)\,e^{i\phi(z)}\,\mathcal{W}_{\alpha}(z),\\
&\mathbb{V}_{\alpha}^{\scriptscriptstyle{(-1)}}(z)=\bigl((Q-\alpha) f(z)\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z)+
\Psi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z)\bigr)\,e^{-i\phi(z)}\,\mathcal{W}_{\alpha}(z),
\end{aligned}$$ where $\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}$ is the primary field of the algebra with conformal dimension $\Delta(\alpha)=\frac{1}{2}\alpha(Q-\alpha)$, $\Psi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}$ its super partner with the dimension $\Delta(\alpha)+1/2$, $$f(z)=\sum_{r}f_{r}z^{r+1/2},\qquad \phi(z)=i\sum_{n\neq0}\frac{h_{n}}{n}z^{-n}$$ and $\mathcal{W}_{\alpha}$ is a free exponential $$\label{vertex-super}
\mathcal{W}_{\alpha}= e^{(\alpha-Q)\varphi_{-}}e^{\alpha\varphi_{+}},$$ with $\varphi_{+}(z)=i\sum_{n>0}\frac{w_{n}}{n}z^{-n}$ and $\varphi_{-}(z)=i\sum_{n<0}\frac{w_{n}}{n}z^{-n}$. For general $m$ the field $\mathbb{V}_{\alpha}^{(m)}$ has the form $$\label{super-primary-m}
\mathbb{V}_{\alpha}^{(m)}=D^{m}[\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z),f(z)]\,
e^{im\phi(z)}\,\mathcal{W}_{\alpha}(z),$$ where $D^{m}[\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z),f(z)]$ is some descendant field at level $m^{2}/2$.[^8]
The commutation relations of the primary fields $\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}$, $\Psi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}$ and $\mathcal{W}_{\alpha}$ with generators $L_{n}$, $a_{n}$, $w_{n}$, $G_{r}$ and $f_{r}$ can be summarized as $$\begin{aligned}
&[L_{n},\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}] =
(z^{n+1}\partial_{z}+(n+1)\Delta(\alpha)z^{n})\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}},\\
& [L_{n},\Psi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}] =
(z^{n+1}\partial_{z}+(n+1)(\Delta(\alpha)+1/2)z^{n})\Psi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}},\\
&[G_{r},\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}] =
z^{r+1/2}\Psi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}},\\
&\{G_{r},\Psi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}\}=
(z^{r+1/2}\partial_{z}+(2r+1)\Delta(\alpha)z^{r-1/2})\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}},\\
& [w_{n},\mathcal{W}_{\alpha}(z)]=-i\alpha z^{n}\mathcal{W}_{\alpha}, \quad\qquad \textrm{for}\; n<0,\\
& [w_{n},\mathcal{W}_{\alpha}(z)]=i(Q-\alpha)z^{n}\mathcal{W}_{\alpha}, \quad \textrm{for}\; n>0.
\end{aligned}$$ Let us consider the matrix elements $$\mathfrak{F}(\alpha,m|P',k',\vec{W}^{\scriptscriptstyle{(1)}},\vec{W}^{\scriptscriptstyle{(2)}};
P,k,\vec{Y}^{\scriptscriptstyle{(1)}},\vec{Y}^{\scriptscriptstyle{(2)}})\overset{\text{def}}{=}\frac
{_{\scriptscriptstyle{\vec{W}^{(1)}},\scriptscriptstyle{\vec{W}^{(2)}}}\langle k',P'|
\mathbb{V}_{\alpha}^{(m)}|P,k\rangle_{\scriptscriptstyle{\vec{Y}}^{(1)},\scriptscriptstyle{\vec{Y}}^{(2)}}}
{\langle k',P'|\mathbb{V}_{\alpha}^{(m)}|P,k\rangle}.$$
\[main-prop\] We propose that $$\begin{gathered}
\label{main-prop-equality}
\mathfrak{F}(\alpha,m|P',k',\vec{W}^{\scriptscriptstyle{(1)}},\vec{W}^{\scriptscriptstyle{(2)}};
P,k,\vec{Y}^{\scriptscriptstyle{(1)}},\vec{Y}^{\scriptscriptstyle{(2)}})=
\mathbb{F}\Bigl(\alpha^{\scriptscriptstyle{(1)}}+\frac{mb^{\scriptscriptstyle{(1)}}}{2},b^{\scriptscriptstyle{(1)}} \Bigl|P'_{1}+
\frac{k'b^{\scriptscriptstyle{(1)}} }{2},\vec{W}^{\scriptscriptstyle{(1)}},P_{1}+\frac{kb^{\scriptscriptstyle{(1)}} }{2},
\vec{Y}^{\scriptscriptstyle{(1)}}\Bigr)
\times\\\times
\mathbb{F}\Bigl(\alpha^{\scriptscriptstyle{(2)}}+\frac{m}{2b^{\scriptscriptstyle{(2)}}},b^{\scriptscriptstyle{(2)}}\Bigr|
P'_{2}+\frac{k'}{2b^{\scriptscriptstyle{(2)}}},\vec{W}^{\scriptscriptstyle{(2)}},
P_{2}+\frac{k}{2b^{\scriptscriptstyle{(2)}}},\vec{Y}^{\scriptscriptstyle{(2)}}\Bigr),\end{gathered}$$ where $$\alpha^{\scriptscriptstyle{(1)}}=\frac{\alpha}{\sqrt{2-2b^{2}}},\quad
\alpha^{\scriptscriptstyle{(2)}}=\frac{\alpha}{\sqrt{2-2b^{-2}}};$$ and the parameters $b^{\scriptscriptstyle{(\sigma)}}$ and $P^{\scriptscriptstyle{(\sigma)}}$ are given by and , and the function $\mathbb{F}$ by –.
We note that Proposition \[main-prop\] suggests the following identification $$\label{strange-relation}
\mathbb{V}_{\alpha}^{(m)}(z)=V_{\alpha^{\scriptscriptstyle{(1)}}+mb^{\scriptscriptstyle{(1)}} /2}^{\scriptscriptstyle{(1)}}(z)\cdot V_{\alpha^{\scriptscriptstyle{(2)}}+m/2b^{\scriptscriptstyle{(2)}}}^{\scriptscriptstyle{(2)}}(z),$$ where by $V_{\alpha}^{\scriptscriptstyle{(\sigma)}}$ for $\sigma=1,2$ we denote the primary operator constructed for one of the two subalgebras $\mathcal{H}\oplus\textsf{Vir}$: $$(\mathcal{H}\oplus\textsf{Vir})_{\sigma}\subset\mathcal{H}\oplus\mathcal{H}\oplus\mathcal{F}\oplus \textsf{NSR}.$$ We have checked equality by explicit computations at lower levels. For further confirmation see appendix \[2Liouville\].
For practical purposes it is also useful to compute the ratio of the matrix elements (blow-up factors) $$\label{blowup-factors-def}
l(\alpha,m|P',k',P,k)\overset{\text{def}}{=}
\begin{cases}
\frac{\langle k',P'|\mathbb{V}_{\alpha}^{(m)}|P,k\rangle}{\langle P'|\mathbb{V}_{\alpha}^{(0)}|P\rangle},\quad\text{if}\quad k+k'+m=2n,\\
\frac{\langle k',P'|\mathbb{V}_{\alpha}^{(m)}|P,k\rangle}{\langle P'|\mathbb{V}_{\alpha}^{(\pm1)}|P\rangle},\quad\text{if}\quad k+k'+m=2n+1.
\end{cases}$$
\[blowup-Proposition\] The factors are given by $$l(\alpha,m|P',k',P,k)=
\begin{cases}
\prod_{i,j}
s_{\textrm{even}}\left(\alpha+P'_{i}+P_{j},\frac{m+k'_{i}+k_{j}}{2}\right)\quad \qquad\;\;\,\text{if}\quad m+k+k'\quad\text{is even}\\
\prod_{i,j}
s_{\textrm{odd}}\left(\alpha+P'_{i}+P_{j},\mathrm{int}\,\Bigl(\frac{m+k'_{i}+k_{j}}{2}\Bigr)\right),
\quad \text{if}\quad m+k+k'\quad\text{is odd}
\end{cases}$$ where $\vec{P}=(P,-P)$, $\vec{k}=(k,-k)$, $\vec{P}'=(P',-P')$, $\vec{k}'=(k',-k')$ and $\mathrm{int}(x)=\textrm{sgn}(x)\lfloor|x|\rfloor$ is the integer part of $x$ and for $n\geq0$ $$\begin{aligned}
&s_{\textrm{even}}(x,n)=2^{-\frac{n^{2}}{2}}\hspace*{-10pt}
\prod_{\substack{i,j\geq1,\;i+j\leq2n\\i+j\equiv0\mod 2}}\hspace*{-10pt}(x+(i-1)b+(j-1)b^{-1}),\\
&s_{\textrm{odd}}(x,n)=2^{-\frac{n(n+1)}{2}}\hspace*{-10pt}
\prod_{\substack{i,j\geq1,\;i+j\leq2n+1\\i+j\equiv1\mod 2}}\hspace*{-10pt}(x+(i-1)b+(j-1)b^{-1}),
\end{aligned}$$ while for $n<0$ we have $$s_{\textrm{even}}(x,n)=(-1)^{n}\,s_{\textrm{even}}(Q-x,-n),\quad
s_{\textrm{odd}}(x,n)=s_{\textrm{odd}}(Q-x,-n).$$
The proof of this proposition can be done by the Coulomb integral method and will be published elsewhere (see also appendix \[2Liouville\]).
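For concreteness, the factors $s_{\textrm{even}}$ and $s_{\textrm{odd}}$ of Proposition \[blowup-Proposition\] can be coded directly from their definitions. The sketch below is our own illustration (function names are ours) and spot-checks the smallest nontrivial cases:

```python
import math

def _prod(x, b, bound, parity):
    """Product of x + (i-1)*b + (j-1)/b over i,j >= 1 with
    i + j <= bound and i + j of the given parity."""
    p = 1.0
    for i in range(1, bound):
        for j in range(1, bound):
            if i + j <= bound and (i + j) % 2 == parity:
                p *= x + (i - 1) * b + (j - 1) / b
    return p

def s_even(x, b, n):
    if n < 0:  # reflection rule: s_even(x,n) = (-1)^n s_even(Q-x,-n), Q = b + 1/b
        return (-1) ** n * s_even(b + 1 / b - x, b, -n)
    return 2.0 ** (-n * n / 2) * _prod(x, b, 2 * n, 0)

def s_odd(x, b, n):
    if n < 0:  # reflection rule: s_odd(x,n) = s_odd(Q-x,-n)
        return s_odd(b + 1 / b - x, b, -n)
    return 2.0 ** (-n * (n + 1) / 2) * _prod(x, b, 2 * n + 1, 1)

x, b = 1.0, 2.0
# n=1, odd parity: pairs (1,2),(2,1) give s_odd(x,1) = (x + 1/b)(x + b)/2
assert math.isclose(s_odd(x, b, 1), (x + 1 / b) * (x + b) / 2)
# n=2, even parity: pairs (1,1),(1,3),(3,1),(2,2) give
# s_even(x,2) = x (x + 2/b)(x + 2b)(x + b + 1/b)/4
assert math.isclose(s_even(x, b, 2), x * (x + 2 / b) * (x + 2 * b) * (x + b + 1 / b) / 4)
```

Floating point is used here because the prefactor $2^{-n^{2}/2}$ is irrational for odd $n$; the product part itself is polynomial in $x$, $b$ and $b^{-1}$.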
Supersymmetric case: another compactification {#SAGT-collored}
=============================================
The basis constructed in section \[SAGT\] corresponds to the moduli space of framed torsion free sheaves on $X_2$. As was mentioned in the Introduction, there is another partial compactification of the moduli space of instantons on $\mathbb{C}^{2}/{\mathbb{Z}_{2}}$. This compactification will be explored in this section.
Another compactification
------------------------
Recall that $\mathcal{M}(r,N)$ denotes the compactified moduli space of $U(r)$ instantons on $\mathbb{C}^{2}$ with instanton number $N$. For any numbers $q_1,q_2,\dots,q_r\in\{0,1\}$ there is a natural action of $\mathbb{Z}_2$ on $\mathcal{M}(r,N)$: $$\begin{aligned}
B_{1} \mapsto -B_{1};\quad
B_{2}\mapsto -B_{2};\quad
I \mapsto Iq;\quad J\mapsto qJ,
\end{aligned}$$ where $q=\textrm{diag}((-1)^{q_{1}},\dots,(-1)^{q_{r}})$. Denote by $\mathcal{M}(r,N)^{\mathbb{Z}_2}$ the $\mathbb{Z}_2$-invariant part of $\mathcal{M}(r,N)$.
The manifold $\mathcal{M}(r,N)^{\mathbb{Z}_2}$ is smooth but not connected. In order to describe the connected components, consider the $N$–dimensional tautological vector bundle $\mathcal{V}$ on $\mathcal{M}(r,N)$. Its fiber at the point $p=(B_1,B_2,I,J)$ coincides with the vector space $V$ obtained from the vectors $I_1,\dots,I_r$ by the action of the algebra generated by the operators $B_1$ and $B_2$. If $p \in \mathcal{M}(r,N)^{\mathbb{Z}_2}$ then $\mathbb{Z}_2$ acts on the fiber of $\mathcal{V}$ at $p$. Then $V$ decomposes as $V_+\oplus V_-$, where $V_+$ is the trivial representation and $V_-$ is the sign representation of $\mathbb{Z}_2$. Two points $p$, $q$ belong to the same component if the dimensions of $V_+$ at these points coincide. We denote the connected components by $\mathcal{M}(r,d,N)$, where $d=N_+-N_-$ and $N_+$, $N_-$ equal the ranks of the bundles $\mathcal{V}_+$ and $\mathcal{V}_-$ respectively[^9]. It is evident that $d \equiv N\, (\mathrm{mod}\, 2)$.
The torus action on $\mathcal{M}(r,N)^{\mathbb{Z}_2}$ is given by formula . Points $p_{\scriptscriptstyle{\vec{W}}}$ fixed under the torus action are labeled by $r$-tuples of Young diagrams $\vec{W}=(W_1,\dots,W_{r})$. It is convenient to color these diagrams as follows: the box $s\in W_k$ with coordinates $(i,j)$ is white if $i-j+q_k \equiv 0\,(\mathrm{mod}\,2)$ and black otherwise. The numbers $N_+$ and $N_-$ equal the numbers of white and black boxes respectively.
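The coloring rule is purely combinatorial and easy to mechanize. The following sketch (our own illustrative code; a diagram is encoded as a list of row lengths, with a single color shift $q$) computes $N_{+}$, $N_{-}$ and $d=N_{+}-N_{-}$:

```python
def color_counts(W, q=0):
    """Count white/black boxes of a Young diagram W = [row lengths].
    The box (i, j) (1-based) is white iff i - j + q is even."""
    n_plus = n_minus = 0
    for i, row in enumerate(W, start=1):
        for j in range(1, row + 1):
            if (i - j + q) % 2 == 0:
                n_plus += 1
            else:
                n_minus += 1
    return n_plus, n_minus

def d_of(W, q=0):
    """d = N_+ - N_-; note that d has the same parity as |W| = N_+ + N_-."""
    n_plus, n_minus = color_counts(W, q)
    return n_plus - n_minus
```

For example, a single box has $d=1$, while the staircases $[2,1]$, $[3,2,1]$, $[4,3,2,1]$ have $d=-1,2,-2$ respectively.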
The determinant of the vector field $v=(\epsilon_{1},\epsilon_{2},a)$ at the fixed point $p_{\scriptscriptstyle{\vec{W}}}$ equals [@Fucito:2004ry; @Fucito:2006kn] $$\label{eq det2 color}
\det v\Bigl|_{p_{\scriptscriptstyle{\vec{W}}}}= Z_{\textsf{vec}}^{\diamond}(\vec{a},\vec{W}|\epsilon_{1},\epsilon_{2})^{-1}=
\prod_{i,j=1}^{2}
\prod_{s\in \scriptscriptstyle{W_{i}^{\diamond}}}
E_{\scriptscriptstyle{W_{i}},\scriptscriptstyle{W_{j}}}(a_{i}-a_{j}|s)
\bigl(\epsilon_{1}+\epsilon_{2}-E_{\scriptscriptstyle{W_{i}},\scriptscriptstyle{W_{j}}}(a_{i}-a_{j}|s)\bigr),$$ where the superscript $\diamond$ means that the product goes over boxes $s\in W_{i}$ satisfying $$\mathrm{a}_{\scriptscriptstyle{W_{i}}}(s)+\mathrm{l}_{\scriptscriptstyle{W_{j}}}(s)+1+q_i-q_j \equiv 0 \, (\mathrm{mod}\, 2).$$
In this subsection we consider the $r=2$ case. Following [@Belavin:2011pp] we choose components $\mathcal{M}(2,0,N)$ for $(q_1,q_2)=(0,0)$ and $\mathcal{M}(2,-1,N)$ for $(q_1,q_2)=(1,1)$[^10]. One can compute the Nekrasov partition function for the pure $U(r)$ gauge theory on $\mathbb{C}^2/\mathbb{Z}_2$ using these components: $$\label{Zpure-def-col}
Z_{\textrm{pure}}^{\diamond}(\vec{a},\epsilon_{1},\epsilon_{2}|\Lambda)=
\sum_{k=0}^{\infty}\sum_{\diamond} Z_{\textsf{vec}}^{\diamond}(\vec{a},\vec{W}|\epsilon_{1},\epsilon_{2})\,\Lambda^{2k},$$ where the second sum goes over pairs of diagrams $\vec{W}$ with $|W|=k$, $N_+=N_-$ and with white corners, or over pairs of diagrams with $|W|=k$, $N_+=N_--1$ and with black corners. As was conjectured and checked in [@Belavin:2011pp], this function coincides with the Whittaker limit of the four-point conformal block in the $\mathcal{N}=1$ supersymmetric conformal field theory.
On the other side, it was conjectured and checked in [@Bonelli:2011jx] that the function $Z_{\textrm{pure}}^{(2,X_{2})}(\vec{a},\epsilon_{1},\epsilon_{2}|q)$ defined by coincides with the same conformal block as well. Hence, these partition functions are equal to each other $$\begin{aligned}
Z_{\textrm{pure}}^{\diamond}(\vec{a},\epsilon_{1},\epsilon_{2}|q)= Z_{\textrm{pure}}^{(2,X_{2})}(\vec{a},\epsilon_{1},\epsilon_{2}|q). \label{eq Z=Z}\end{aligned}$$ Summands on the left hand side are labeled by pairs of colored Young diagrams $W_1,W_2$. Summands on the right hand side are labeled by a pair of pairs of Young diagrams $\vec{Y}^{\scriptscriptstyle{(\sigma)}}=(Y_{1}^{\scriptscriptstyle{(\sigma)}},Y_{2}^{\scriptscriptstyle{(\sigma)}})$, $\sigma=1,2$ and one integer number $k\in\mathbb{Z}$. There exists a bijection between these two types of combinatorial data (see for example [@Macdonald Sec 1.1 Ex. 8] or [@Fucito:2006kn]). However, the sets of summands on the left hand side and on the right hand side of are different (see appendix \[Bersh-app\]). The identity is nontrivial: it states the equality of sums of different rational functions.
The formula follows from the fact that for $N\in \mathbb{Z}$ the manifolds $\mathcal{M}(X_{2},2,N)$ and $\mathcal{M}(2,0,2N)$ are compactifications of the same manifold (the moduli space of instantons on $\mathbb{C}^2/\mathbb{Z}_2$). Hence the integrals of the equivariant forms should be equal. Similarly, for $N \in \mathbb{Z}+\frac12$ the integrals over $\mathcal{M}(X_{2},2,N)$ and $\mathcal{M}(2,-1,2N)$ should be equal (see also [@1166.14007] and [@Nagao_2007]).
Geometrical arguments from the Introduction suggest the existence of a basis labeled by pairs of colored Young diagrams in the representation of the algebra $\mathcal{H}\oplus\mathcal{H}\oplus\mathcal{F}\oplus \textsf{NSR}$. In the notation for this basis we use the superscript $\diamond$: $|P\rangle^{\diamond}_{\vec{W}}$. The norm of the vector $|P\rangle^{\diamond}_{\vec{W}}$ should equal $Z_{\textsf{vec}}^{\diamond}(\vec{a},\vec{W}|\epsilon_{1},\epsilon_{2})^{-1}$. The basis $|P\rangle^{\diamond}_{\scriptscriptstyle{\vec{W}}}$ differs from the basis $|P,k\rangle_{\scriptscriptstyle{\vec{Y}^{1},\vec{Y}^{2}}}$ constructed in Section 3 since the sets of summands in are different.
Although we do not have an explicit construction of such a basis, we suggest a formula for the matrix elements of the vertex operator $\mathbb{V}^{(0)}_{\alpha}$ in this basis $$\begin{aligned}
\frac{ _{\vec{\scriptscriptstyle{W}}}^{\diamond}\langle P'|\mathbb{V}^{(0)}_{\alpha}|P\rangle_{ \vec{\scriptscriptstyle{Y}}}^{\diamond}}
{^{\diamond}\langle P'|\mathbb{V}^{(0)}_{\alpha}|P\rangle^{\diamond}}=
Z_{\textsf{bif}}^\diamond(\alpha;\vec{P}',\vec{W};\vec{P},\vec{Y}|b,b^{-1}), \label{eq-matr-elem-color}\end{aligned}$$ where $$\begin{aligned}
Z_{\textsf{bif}}^\diamond(m;\vec{a}',\vec{W};\vec{a},\vec{Y}|\epsilon_{1},\epsilon_{2})=\prod_{i,j=1}^{r}
\prod_{\diamond}\left(\epsilon_{1}+\epsilon_{2}-E_{\scriptscriptstyle{Y_{i}},\scriptscriptstyle{W_{j}}}(a_{i}-a'_{j}|s)-m\right)
\prod_{\diamond}\left(E_{\scriptscriptstyle{W_{j}},\scriptscriptstyle{Y_{i}}}(a'_{j}-a_{i}|t)-m\right)\end{aligned}$$ and the product goes over boxes $s\in Y_{i}$ and $t\in W_{j}$ satisfying $$\mathrm{a}_{\scriptscriptstyle{Y_{i}}}(s)+\mathrm{l}_{\scriptscriptstyle{W_{j}}}(s)+1+q_{\scriptscriptstyle{Y_i}}-
q_{\scriptscriptstyle{W_j}} \equiv 0 \, (\mathrm{mod}\, 2); \quad
\mathrm{a}_{\scriptscriptstyle{W_j}}(t)+\mathrm{l}_{\scriptscriptstyle{Y_{i}}}(t)+1+
q_{\scriptscriptstyle{W_j}}-q_{\scriptscriptstyle{Y_i}} \equiv 0 \, (\mathrm{mod}\, 2).$$ We have checked the formula by computing the five-point conformal block $$\langle P'|\mathbb{V}^{(0)}_{\alpha}(q_1)\mathbb{V}^{(0)}_{\alpha}(q_1q_2)\mathbb{V}^{(0)}_{\alpha}(1)|P\rangle,$$ using two different bases, i.e. comparing in the lowest orders in $q_{1}$ and $q_{2}$ the results obtained with the help of and .
We note that can be considered as a system of equations for the unknown basis vectors $|P\rangle_{ \vec{\scriptscriptstyle{Y}}}^{\diamond}$. Unfortunately, the solution of this system is not unique. This is closely related to the fact that the vertex operator $\mathbb{V}^{(0)}_{\alpha}$ does not depend on the $\widehat{\mathfrak{sl}}(2)_{2}$ bosons $h_{n}$. Additional constraints could come from explicit expressions for matrix elements of operators different from $\mathbb{V}_{\alpha}^{(0)}$. It is unlikely that the matrix elements of the operators $\mathbb{V}_{\alpha}^{(m)}$ introduced in section \[SAGT\] have a nice factorized form similar to for $m\neq0$.
Note that if $\epsilon_1+\epsilon_2=0$ (in CFT notations $Q=0$) the equality becomes trivial. Geometrically this is related to the fact that the manifolds $\mathcal{M}(X_{2},2,N)$ and $\mathcal{M}(2,0,2N)$ are $\mathbb{C}^*$–diffeomorphic, where $\mathbb{C}^*$ acts on $\mathbb{C}^2$ by the formula $(z_1,z_2) \mapsto (wz_1,w^{-1}z_2)$. However, these manifolds are not diffeomorphic as $\left(\mathbb{C}^*\right)^2$–manifolds because the determinants at the fixed points are different.
The $r=1$ case
--------------
In this subsection we discuss the phenomenon of the existence of different bases mentioned above. For simplicity we restrict ourselves to the case $r=1$.
Denote by $\mathcal{M}(X_{2},1,N)$ the moduli space of framed torsion free sheaves on $X_{2}$ of rank $1$ with Chern classes $c_{1}=0$, $c_{2}=N$. Torus fixed points are labeled by pairs of Young diagrams $(Y^{\scriptscriptstyle{(1)}},Y^{\scriptscriptstyle{(2)}})$, $|Y^{\scriptscriptstyle{(1)}}|+|Y^{\scriptscriptstyle{(2)}}|=N$, and the determinant of the vector field $v=(\epsilon_{1},\epsilon_{2},a)$ at the fixed point $p_{\scriptscriptstyle Y^{\scriptscriptstyle{(1)}},\scriptscriptstyle Y^{\scriptscriptstyle{(2)}}}$ equals (see [@2011CMaPh.304..395B]): $$\label{eq-Zvec2}
\det v\Bigl|_{p_{\scriptscriptstyle Y^{\scriptscriptstyle{(1)}},\scriptscriptstyle Y^{\scriptscriptstyle{(2)}}}}=
Z_{\textsf{vec}}(Y^{\scriptscriptstyle{(1)}},Y^{\scriptscriptstyle{(2)}}|\epsilon_{1},\epsilon_{2})^{-1} =
Z_{\textsf{vec}}(Y^{\scriptscriptstyle{(1)}}|2\epsilon_{1},\epsilon_{2}-\epsilon_{1})^{-1}
Z_{\textsf{vec}}(Y^{\scriptscriptstyle{(2)}}|\epsilon_{1}-\epsilon_{2},2\epsilon_{2})^{-1},$$ where $Z_{\textsf{vec}}$ is given in and we omit $\vec{a}$ since in $r=1$ case $\vec{a}$ doesn’t appear in formulas. Denote by $$\mathcal{Z}_N=\sum_{|Y^{\scriptscriptstyle{(1)}}|+|Y^{\scriptscriptstyle{(2)}}|=N}
Z_{\textsf{vec}}(Y^{\scriptscriptstyle{(1)}},Y^{\scriptscriptstyle{(2)}}|\epsilon_{1},\epsilon_{2}).$$ the coefficient in the Nekrasov partition function. The expression $\mathcal{Z}_{N}$ equals the integral over the moduli space $\mathcal{M}(X_{2},1,N)$. From the general scheme it follows that there should be a basis labeled by $(Y^{\scriptscriptstyle{(1)}},Y^{\scriptscriptstyle{(2)}})$ in the representation of the algebra $ \mathcal{H} \oplus \mathcal{H}$ (see ). The algebraic construction of this basis is similar to the one given in Section \[SAGT\].
From the colored partition side, consider all components $\mathcal{M}(1,d,N)$ (with $q_1=0$). The torus fixed points $p_{\scriptscriptstyle W} \in \mathcal{M}(1,d,N)$ are labeled by colored Young diagrams $W$ with $d(W)=d$, $|W|=N$. The determinant of the vector field $v=(\epsilon_{1},\epsilon_{2},a)$ at the fixed point $p_{\scriptscriptstyle W}$ equals [@Fucito:2004ry; @Fucito:2006kn] $$\label{eq det1 color}
\det v\Bigl|_{ p_{\scriptscriptstyle W}}= Z_{\textsf{vec}}^{\diamond}(a,\vec{W}|\epsilon_{1},\epsilon_{2})^{-1}=
\prod_{s\in \scriptscriptstyle{W^{\diamond}}}
E_{\scriptscriptstyle{W},\scriptscriptstyle{W}}(0|s)
\bigl(\epsilon_{1}+\epsilon_{2}-E_{\scriptscriptstyle{W},\scriptscriptstyle{W}}(0|s)\bigr),$$ where the product goes over boxes $s\in W$ satisfying $\mathrm{a}_{\scriptscriptstyle{W}}(s)+\mathrm{l}_{\scriptscriptstyle{W}}(s)+1 \equiv 0 \, (\mathrm{mod}\, 2).$
Vectors $v_{\scriptscriptstyle W}$ corresponding to $p_{\scriptscriptstyle{W}}$ form a basis in the representation of the algebra $\mathcal{H}\oplus\widehat{\mathfrak{sl}}(2)_1$ (see ). The combinatorial gradings $d(W)$ and $|W|$ coincide with the $h_0$ grading and the principal grading of the representation of this algebra. The structure of the representation of the algebra $\mathcal{H}\oplus\widehat{\mathfrak{sl}}(2)_1$ is shown in fig. \[CP-decompos-pic\].
![The colored partition basis in the representation of $\mathcal{H}\oplus \widehat{\mathfrak{sl}}(2)_1$. The interior of each angle corresponds to the representation of $\mathcal{H}\oplus\mathcal{H}\subset\mathcal{H}\oplus\widehat{\mathfrak{sl}}(2)_{1}$ with given value of $h_{0}$. Each colored diagram represents a vector in this representation.[]{data-label="CP-decompos-pic"}](Hrep.eps){width=".7\textwidth"}
Generators $e_{i}$ from $\widehat{\mathfrak{sl}}(2)_{1}$ shift $d$ by $+1$, generators $f_{i}$ shift it by $-1$, and generators $h_{i}$ act in the subspace with given $d$. The elements $h_{i}$ generate the Heisenberg algebra $\mathcal{H}\subset\widehat{\mathfrak{sl}}(2)_{1}$.
The vectors $v_{\scriptscriptstyle W}$ with given $d(W)=d$ form a basis in the representation of the algebra $ \mathcal{H} \oplus \mathcal{H}$. It is easy to see that the smallest diagram $W_0$ with $d(W_0)=d$ consists of $2d^2-d$ boxes and has a “triangular” form with edge length $2|d|$ for $d \leq 0$ and $2d-1$ for $d>0$ $$\label{2triangles}
\unitlength 2.3pt
\begin{picture}(130,20)(0,15)
\Thicklines
\path(0,0)(0,30)(30,30)(30,25)(25,25)(25,20)(20,20)(20,15)(15,15)(15,10)(10,10)(10,5)(5,5)(5,0)(0,0)
\put(-7,19){\vector(0,1){11}}
\put(-10,14){\mbox{\small{$2|d|$}}}
\put(-7,11){\vector(0,-1){11}}
\put(0,-11){\mbox{for $d<0$}}
\blacken\path(0,0)(0,5)(5,5)(5,0)(0,0)
\blacken\path(5,5)(5,10)(10,10)(10,5)(5,5)
\blacken\path(10,10)(10,15)(15,15)(15,10)(10,10)
\blacken\path(15,15)(15,20)(20,20)(20,15)(15,15)
\blacken\path(20,20)(20,25)(25,25)(25,20)(20,20)
\blacken\path(25,25)(25,30)(30,30)(30,25)(25,25)
\blacken\path(0,10)(0,15)(5,15)(5,10)(0,10)
\blacken\path(5,15)(5,20)(10,20)(10,15)(5,15)
\blacken\path(10,20)(10,25)(15,25)(15,20)(10,20)
\blacken\path(15,25)(15,30)(20,30)(20,25)(15,25)
\blacken\path(0,20)(0,25)(5,25)(5,20)(0,20)
\blacken\path(5,25)(5,30)(10,30)(10,25)(5,25)
\path(100,5)(100,30)(125,30)(125,25)(120,25)(120,20)(115,20)(115,15)(110,15)(110,10)(105,10)(105,5)(100,5)
\put(93,21){\vector(0,1){9}}
\put(86,16){\mbox{\small{$2d-1$}}}
\put(93,13){\vector(0,-1){9}}
\put(100,-11){\mbox{for $d>0$}}
\blacken\path(100,10)(100,15)(105,15)(105,10)(100,10)
\blacken\path(105,15)(105,20)(110,20)(110,15)(110,15)
\blacken\path(110,20)(110,25)(115,25)(115,20)(115,20)
\blacken\path(115,25)(115,30)(120,30)(120,25)(120,25)
\blacken\path(100,20)(100,25)(105,25)(105,20)(100,20)
\blacken\path(105,25)(105,30)(110,30)(110,25)(110,25)
\end{picture}
\vspace*{2.5cm}$$
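The claim about $W_{0}$ can be confirmed by exhaustive search over small partitions (our own illustrative sketch; `partitions` enumerates Young diagrams as weakly decreasing lists of row lengths and `d_of` recomputes the color difference $N_{+}-N_{-}$):

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing lists of row lengths."""
    if n == 0:
        yield []
        return
    if max_part is None or max_part > n:
        max_part = n
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def d_of(W):
    """d(W) = N_+ - N_- for the coloring where box (i, j) is white iff i - j is even."""
    return sum(1 if (i - j) % 2 == 0 else -1
               for i, row in enumerate(W, 1) for j in range(1, row + 1))

def minimal_size(d, limit=30):
    """Smallest |W| with d(W) = d, found by exhaustive search up to `limit` boxes."""
    for n in range(limit + 1):
        if any(d_of(W) == d for W in partitions(n)):
            return n
    return None
```

The search reproduces $|W_{0}|=2d^{2}-d$ and finds the staircase shape as the unique minimizer, e.g. $[3,2,1]$ for $d=2$.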
Denote by $$Z_{d,N}=\sum_{W,\, d(W)=d,\, |W|=N} Z_{\textsf{vec}}^{\diamond}(W|\epsilon_{1},\epsilon_{2})$$ the coefficient in the Nekrasov partition function. The expression $Z_{d,N}$ equals the integral over the moduli space $\mathcal{M}(1,d,N)$.
For any integer $d$ $$\begin{aligned}
Z_{d,2d^2-d+2N} \label{eq Z=Z=Z}=Z_{0,2N}=\mathcal{Z}_N\end{aligned}$$
This proposition follows from the fact that the manifolds $\mathcal{M}(1,d,2d^2-d+2N)$ and $\mathcal{M}(X_{2},1,N)$ are birationally isomorphic to the Hilbert scheme of $N$ points on $\mathbb{C}^2/\mathbb{Z}_2$.
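The first equalities of this kind can be tested in exact rational arithmetic. The sketch below is our own illustrative code: it assumes the standard $r=1$ conventions $E_{\lambda,\lambda}(0|s)=-\epsilon_{1}\mathrm{l}(s)+\epsilon_{2}(\mathrm{a}(s)+1)$ and $Z_{\textsf{vec}}(Y|\epsilon_{1},\epsilon_{2})^{-1}=\prod_{s\in Y}E(s)\bigl(\epsilon_{1}+\epsilon_{2}-E(s)\bigr)$; the checked equalities are insensitive to the common alternative convention with $\epsilon_{1}\leftrightarrow\epsilon_{2}$:

```python
from fractions import Fraction

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing lists of row lengths."""
    if n == 0:
        yield []
        return
    if max_part is None or max_part > n:
        max_part = n
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def arm(W, i, j):  # boxes are 1-based
    return W[i - 1] - j

def leg(W, i, j):
    return sum(1 for r in W[i:] if r >= j)

def Zvec(W, e1, e2):
    """Plain r = 1 vector-multiplet contribution (product over all boxes)."""
    z = Fraction(1)
    for i in range(1, len(W) + 1):
        for j in range(1, W[i - 1] + 1):
            E = -e1 * leg(W, i, j) + e2 * (arm(W, i, j) + 1)
            z *= E * (e1 + e2 - E)
    return 1 / z

def Zvec_col(W, e1, e2):
    """Colored version: product restricted to boxes with even hook length."""
    z = Fraction(1)
    for i in range(1, len(W) + 1):
        for j in range(1, W[i - 1] + 1):
            if (arm(W, i, j) + leg(W, i, j) + 1) % 2 == 0:
                E = -e1 * leg(W, i, j) + e2 * (arm(W, i, j) + 1)
                z *= E * (e1 + e2 - E)
    return 1 / z

def d_of(W):
    return sum(1 if (i - j) % 2 == 0 else -1
               for i, r in enumerate(W, 1) for j in range(1, r + 1))

def Z_cal(N, e1, e2):
    """\\mathcal{Z}_N via the factorized formula with shifted equivariant parameters."""
    total = Fraction(0)
    for k in range(N + 1):
        for Y1 in partitions(k):
            for Y2 in partitions(N - k):
                total += Zvec(Y1, 2 * e1, e2 - e1) * Zvec(Y2, e1 - e2, 2 * e2)
    return total

def Z_dN(d, N, e1, e2):
    """Z_{d,N} as a sum over colored diagrams with d(W) = d, |W| = N."""
    return sum((Zvec_col(W, e1, e2) for W in partitions(N) if d_of(W) == d),
               Fraction(0))
```

With generic rational $\epsilon_{1},\epsilon_{2}$ this reproduces, e.g., $Z_{0,2}=Z_{1,3}=\mathcal{Z}_{1}=1/(2\epsilon_{1}\epsilon_{2})$ exactly.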
The equality is an equality of sums. The number of summands on the left hand side and on the right hand side is the same (this follows from the bijection mentioned above). We will write $\sum \equiv \sum$ if the sums are equal and moreover the sets of summands on both sides are the same. Correspondingly, we will write $\sum \not\equiv \sum$ if the sums are equal but the sets of summands are different. Direct calculations show: $$Z_{0,0}\equiv Z_{1,1} \equiv Z_{-1,3} \equiv Z_{2,6} \equiv Z_{-2,10} \equiv \mathcal{Z}_0.$$ $$Z_{0,2}\equiv Z_{1,3} \equiv Z_{-1,5} \equiv Z_{2,8} \equiv Z_{-2,12} \equiv \mathcal{Z}_1.$$ $$Z_{0,4}\equiv Z_{1,5} \equiv Z_{-1,7} \equiv Z_{2,10} \equiv Z_{-2,14} \equiv \mathcal{Z}_2.$$ $$Z_{0,6}\not\equiv Z_{1,7},\quad Z_{1,7}\equiv Z_{-1,9} \equiv Z_{2,12} \equiv Z_{-2,16} \equiv \mathcal{Z}_3.$$ $$Z_{0,8}\not\equiv Z_{1,9},\quad Z_{0,8}\not\equiv Z_{-1,11},\quad Z_{1,9}\not\equiv Z_{-1,11},\quad Z_{-1,11}\equiv Z_{2,14} \equiv Z_{-2,18} \equiv \mathcal{Z}_4.$$ $$\begin{gathered}
Z_{0,10}\not\equiv Z_{1,11},\quad Z_{0,10}\not\equiv Z_{-1,13},\quad Z_{1,11}\not\equiv Z_{-1,13},\\ Z_{0,10}\not\equiv Z_{2,16},\quad Z_{1,11}\not\equiv Z_{2,16},\quad Z_{-1,13}\not\equiv Z_{2,16}, \quad Z_{2,16}\equiv Z_{-2,20} \equiv Z_{3,25} \equiv \mathcal{Z}_{5}.\end{gathered}$$ These results suggest the following proposition[^11]
- For any $d_1,d_2$ there exists $N$ such that $Z_{d_1,2d_1^2-d_1+2N}\not\equiv Z_{d_2,2d_2^2-d_2+2N}$
- For any $N$ there exists $D$ such that $Z_{d,2d^2-d+2N}\equiv\mathcal{Z}_N$ for any $d,\, |d|\geq D$.
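The equality of the numbers of summands underlying all these identities (the bijection of [@Macdonald Sec 1.1 Ex. 8], i.e. 2-cores and 2-quotients) is easy to confirm by brute force; the code below is our own illustrative check:

```python
def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing lists of row lengths."""
    if n == 0:
        yield []
        return
    if max_part is None or max_part > n:
        max_part = n
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def d_of(W):
    """N_+ - N_- for the standard two-coloring (box (i, j) white iff i - j even)."""
    return sum(1 if (i - j) % 2 == 0 else -1
               for i, row in enumerate(W, 1) for j in range(1, row + 1))

def num_colored(d, size):
    """Number of colored diagrams W with d(W) = d and |W| = size."""
    return sum(1 for W in partitions(size) if d_of(W) == d)

def num_pairs(N):
    """Number of pairs (Y1, Y2) with |Y1| + |Y2| = N."""
    p = [sum(1 for _ in partitions(k)) for k in range(N + 1)]
    return sum(p[k] * p[N - k] for k in range(N + 1))
```

For every $d$ the number of diagrams $W$ with $d(W)=d$, $|W|=2d^{2}-d+2N$ matches the number of pairs $(Y^{\scriptscriptstyle{(1)}},Y^{\scriptscriptstyle{(2)}})$ with $|Y^{\scriptscriptstyle{(1)}}|+|Y^{\scriptscriptstyle{(2)}}|=N$.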
In terms of bases this proposition means that there exist infinitely many different bases in the representation of the algebra $\mathcal{H} \oplus \mathcal{H}$. These bases are numbered by an integer $d$. Basis vectors in the $d$-th basis are labeled by Young diagrams $W$ with $d(W)=d$. The basis labeled by pairs of Young diagrams $Y^{\scriptscriptstyle{(1)}},Y^{\scriptscriptstyle{(2)}}$ appears in the limit $d \rightarrow \infty$.
We prove the second assertion:
If $|d| \geq N$, then $$\begin{aligned}
Z_{d,2d^2-d+2N}\equiv\mathcal{Z}_N\label{eq Z=Z limit}.\end{aligned}$$
The proof is based on an explicit bijection: for any pair of Young diagrams $Y^{\scriptscriptstyle{(1)}},Y^{\scriptscriptstyle{(2)}}$ with $|Y^{\scriptscriptstyle{(1)}}|+|Y^{\scriptscriptstyle{(2)}}|=N$ we construct a colored Young diagram $W$ with $|W|=2d^2-d+2N$, $d(W)=d$ such that $$\label{eq Z=Z biject}
Z_{\textsf{vec}}^{\diamond}(W|\epsilon_{1},\epsilon_{2})=
Z_{\textsf{vec}}(Y^{\scriptscriptstyle{(1)}},Y^{\scriptscriptstyle{(2)}}|\epsilon_{1},\epsilon_{2}).$$
The bijection goes as follows. Denote by $W_0$ the minimal Young diagram with $d(W_0)=d$. Then $|W_0|=2d^2-d$ and $W_0$ has the “triangular” form . By $\widetilde{Y}^{\scriptscriptstyle{(1)}}$ denote the diagram obtained from $Y^{\scriptscriptstyle{(1)}}$ by doubling all columns. Similarly, by $\widetilde{Y}^{\scriptscriptstyle{(2)}}$ denote the diagram obtained from $Y^{\scriptscriptstyle{(2)}}$ by doubling all rows. Then $W$ is obtained by adding the diagrams $\widetilde{Y}^{\scriptscriptstyle{(1)}}$ and $\widetilde{Y}^{\scriptscriptstyle{(2)}}$ to the bottom and to the right of the diagram $W_0$ respectively (see fig. \[bijection\]).
![Bijection between the pair $(Y^{\scriptscriptstyle{(1)}},Y^{\scriptscriptstyle{(2)}})$ and $W$.[]{data-label="bijection"}](bijection.eps){width=".86\textwidth"}
The added diagrams $\widetilde{Y}^{\scriptscriptstyle{(1)}}$ and $\widetilde{Y}^{\scriptscriptstyle{(2)}}$ do not interact since $|d| \geq N$. Then, the identity follows from easy combinatorics. $\square$
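Our reading of this construction can be made explicit in code. The sketch below is illustrative only: the encoding of “doubling columns/rows” as doubling the column heights of $Y^{\scriptscriptstyle{(1)}}$ and the row lengths of $Y^{\scriptscriptstyle{(2)}}$, as well as all function names, are our own interpretation of the figure. It checks that the glued diagram is a valid partition with $d(W)=d$ and $|W|=2d^{2}-d+2N$, and that the map is injective:

```python
def partitions(n, max_part=None):
    if n == 0:
        yield []
        return
    if max_part is None or max_part > n:
        max_part = n
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def d_of(W):
    return sum(1 if (i - j) % 2 == 0 else -1
               for i, row in enumerate(W, 1) for j in range(1, row + 1))

def conj(W):
    """Conjugate (transpose) of a partition given by its row lengths."""
    return [sum(1 for r in W if r >= j) for j in range(1, W[0] + 1)] if W else []

def staircase(d):
    """Minimal diagram W_0 with d(W_0) = d (edge 2d-1 for d > 0, 2|d| for d < 0)."""
    edge = 2 * d - 1 if d > 0 else 2 * abs(d)
    return list(range(edge, 0, -1))

def glue(d, Y1, Y2):
    """Attach Y2 with doubled row lengths to the right of W_0, then Y1 with
    doubled column heights to the bottom (assumes |d| >= |Y1| + |Y2|)."""
    W0 = staircase(d)
    n = max(len(W0), len(Y2))
    rows = [(W0[i] if i < len(W0) else 0) + 2 * (Y2[i] if i < len(Y2) else 0)
            for i in range(n)]
    cols, Y1t = conj(rows), conj(Y1)
    m = max(len(cols), len(Y1t))
    cols = [(cols[j] if j < len(cols) else 0) + 2 * (Y1t[j] if j < len(Y1t) else 0)
            for j in range(m)]
    return conj(cols)
```

For example, with $d=1$ the pair $([1],\varnothing)$ maps to $[1,1,1]$ and $(\varnothing,[1])$ maps to $[3]$, which are precisely the two diagrams with $d(W)=1$, $|W|=3$.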
In this subsection we considered the $r=1$ case only. For general $r$ the situation is quite similar: there should be a sequence of bases labelled by an integer $d$. The basis corresponding to $\mathcal{M}_2(r,N)$ appears in the limit $d \rightarrow \infty$.
Concluding remarks {#Concl}
==================
1. It would be interesting to give an explicit construction of the basis labeled by colored partitions. As we saw in section \[SAGT-collored\], this basis is not determined by the formula for the matrix element .
2. It would be interesting to generalize results of sections \[SAGT\] and \[SAGT-collored\] for the general case $p>2$. Note that on the instanton moduli side this case is very similar to the $p=2$ case.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Giulio Bonelli, Ivan Cherednik, Vladimir Fateev, Anatol Kirillov, Alexander Kuznetsov, Rubik Poghossian and Alessandro Tanzini for discussions. We also thank Slava Pugai and Rubik Poghossian for critical reading of the manuscript and very useful comments. Some of the authors (A.B., M.B. and A.L.) are grateful to the organizers of the workshop “Low dimensional physics and gauge principles” held in Nor Amberd, Armenia in September 2011 for hospitality and a stimulating scientific atmosphere. A.B. thanks Boris Dubrovin for kind hospitality during his visit to SISSA and for interesting discussions.
This research was held within the framework of the Federal programs “Scientific and Scientific-Pedagogical Personnel of Innovational Russia” on 2009-2013 (state contracts No. P1339 and No. 02.740.11.5165) and was supported by RFBR grants 12-01-00836 and 12-02-01092 and by Russian Ministry of Science and Technology under the Scientific Schools grant 6501.2010.2. The research of A.L. and G.T. was also supported in part by the National Science Foundation under Grant No. NSF PHY05-51164 and by Dynasty foundation.
More on two Virasoro algebras in $\mathcal{F}\oplus\mathsf{NSR}$ {#2Liouville}
================================================================
In section \[SAGT\] we observed a “strange” relation, which is equivalent to[^12] $$\label{des-fields}
\begin{aligned}
&\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z)\simeq
V_{\alpha^{\scriptscriptstyle{(1)}}}^{\scriptscriptstyle{\textsf{Vir}_{1}}}(z)\cdot V_{\alpha^{\scriptscriptstyle{(2)}}}^{\scriptscriptstyle{\textsf{Vir}_{2}}}(z),\\
&\alpha f(z)\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z)+
\Psi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z)\simeq
V_{\alpha^{\scriptscriptstyle{(1)}}+b^{\scriptscriptstyle{(1)}} /2}^{\scriptscriptstyle{\textsf{Vir}_{1}}}(z)\cdot V_{\alpha^{\scriptscriptstyle{(2)}}+1/2b^{\scriptscriptstyle{(2)}} }^{\scriptscriptstyle{\textsf{Vir}_{2}}}(z),\\
&(Q-\alpha) f(z)\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z)+
\Psi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z)\simeq
V_{\alpha^{\scriptscriptstyle{(1)}}-b^{\scriptscriptstyle{(1)}} /2}^{\scriptscriptstyle{\textsf{Vir}_{1}}}(z)\cdot V_{\alpha^{\scriptscriptstyle{(2)}}-1/2b^{\scriptscriptstyle{(2)}}}^{\scriptscriptstyle{\textsf{Vir}_{2}}}(z),\\
&\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots
\end{aligned}$$ i.e. for any $m\in\mathbb{Z}$ the product $$V_{\alpha^{\scriptscriptstyle{(1)}}+mb^{\scriptscriptstyle{(1)}} /2}^{\scriptscriptstyle{\textsf{Vir}_{1}}}(z)\cdot V_{\alpha^{\scriptscriptstyle{(2)}}+m/2b^{\scriptscriptstyle{(2)}} }^{\scriptscriptstyle{\textsf{Vir}_{2}}}(z)$$ of two primary fields in the two CFT’s $\textsf{Vir}_{1}$ and $\textsf{Vir}_{2}$ with parameters satisfying is equal, up to normalization, to the descendant field on level $m^{2}/2$ of the field $\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z)$ in the $\mathcal{F}\oplus\textsf{NSR}$ theory. In operator language this descendant field corresponds to the highest weight vector . The first check we can perform is to compare conformal dimensions. One can easily find that $$\Delta(\alpha^{\scriptscriptstyle{(1)}}+mb^{\scriptscriptstyle{(1)}} /2,b^{\scriptscriptstyle{(1)}} )+
\Delta(\alpha^{\scriptscriptstyle{(2)}}+m/2b^{\scriptscriptstyle{(2)}} ,b^{\scriptscriptstyle{(2)}} )=\Delta_{\textsf{\tiny{NS}}}(\alpha,b)+\frac{m^{2}}{2},$$ where $\Delta(\alpha,b)$ and $\Delta_{\textsf{\tiny{NS}}}(\alpha,b)$ are the conformal dimensions parameterized in Virasoro (Liouville) and NSR (Super Liouville) manners.
Another more concrete check would be to compare three-point correlation functions. We consider the relation (other relations can be treated similarly) $$\label{FF-VV}
\Phi_{\alpha}^{\scriptscriptstyle{\textsf{NS}}}(z)\simeq
V_{\alpha^{\scriptscriptstyle{(1)}}}^{\scriptscriptstyle{\textsf{Vir}_{1}}}(z)\cdot V_{\alpha^{\scriptscriptstyle{(2)}}}^{\scriptscriptstyle{\textsf{Vir}_{2}}}(z).$$ The right hand side of is given by the product of two primary operators in the two CFT’s $\textsf{Vir}_{1}$ and $\textsf{Vir}_{2}$ with central charges $c^{\scriptscriptstyle{(1)}}$ and $c^{\scriptscriptstyle{(2)}}$ parameterized as $$c^{\scriptscriptstyle{(\sigma)}}=1+6\left(b^{\scriptscriptstyle{(\sigma)}}+\frac{1}{b^{\scriptscriptstyle{(\sigma)}}}\right)^{2},$$ where $b^{\scriptscriptstyle{(\sigma)}}$ are given by $$b^{\scriptscriptstyle{(1)}} =\frac{2b}{\sqrt{2-2b^{2}}},\quad
(b^{\scriptscriptstyle{(2)}})^{-1}=\frac{2b^{-1}}{\sqrt{2-2b^{-2}}}.$$ Let us consider the region $b<1$. In this case $b^{\scriptscriptstyle{(1)}} $ is real while $b^{\scriptscriptstyle{(2)}} $ is imaginary. For general values of all the parameters we can treat the theories $\textsf{Vir}_{1}$ and $\textsf{Vir}_{2}$ as the Liouville field theory [@Zamolodchikov:1995aa] with coupling constant $b^{\scriptscriptstyle{(1)}}$ and the generalized minimal model [@Zamolodchikov:2005fy] (time-like Liouville field theory) with coupling constant $\hat{b}^{\scriptscriptstyle{(2)}}$ (we have fixed the branch cut as $b^{\scriptscriptstyle{(2)}}=-i\hat{b}^{\scriptscriptstyle{(2)}}$). Consider the three-point functions in the two theories $$\begin{aligned}
&C(\alpha_{1}^{\scriptscriptstyle{(1)}},\alpha_{2}^{\scriptscriptstyle{(1)}},\alpha_{3}^{\scriptscriptstyle{(1)}}|b^{\scriptscriptstyle{(1)}})
\overset{\text{def}}{=}
\langle V_{\alpha_{1}^{\scriptscriptstyle{(1)}}}(0)V_{\alpha_{2}^{\scriptscriptstyle{(1)}}}(1)V_{\alpha_{3}^{\scriptscriptstyle{(1)}}}(\infty)
\rangle_{b^{\scriptscriptstyle{(1)}}},\\
&\hat{C}(\hat{\alpha}_{1}^{\scriptscriptstyle{(2)}},\hat{\alpha}_{2}^{\scriptscriptstyle{(2)}},\hat{\alpha}_{3}^{\scriptscriptstyle{(2)}}
|\hat{b}^{\scriptscriptstyle{(2)}})\overset{\text{def}}{=}
\langle V_{\hat{\alpha}_{1}^{\scriptscriptstyle{(2)}}}(0)V_{\hat{\alpha}_{2}^{\scriptscriptstyle{(2)}}}(1)V_{\hat{\alpha}_{3}^{\scriptscriptstyle{(2)}}}(\infty)\rangle_{\hat{b}^{\scriptscriptstyle{(2)}}},
\end{aligned}$$ where $$\label{alpha-sigma}
\begin{aligned}
&b^{\scriptscriptstyle{(1)}} =\frac{2b}{\sqrt{2-2b^{2}}},&\quad &\alpha^{\scriptscriptstyle{(1)}}=\frac{\alpha}{\sqrt{2-2b^{2}}},\\
&(\hat{b}^{\scriptscriptstyle{(2)}})^{-1}=\frac{2}{\sqrt{2-2b^{2}}},&\quad&\hat{\alpha}^{\scriptscriptstyle{(2)}}=\frac{b\alpha}{\sqrt{2-2b^{2}}}.
\end{aligned}$$ These three-point functions are known in explicit form [@Zamolodchikov:1995aa]
\[C\] $$\begin{gathered}
\label{CL}
C(\alpha_{1},\alpha_{2},\alpha_{3}|b)=
\frac{\Upsilon_{b}(b)\Upsilon_{b}(2\alpha_{1})\Upsilon_{b}(2\alpha_{2})\Upsilon_{b}(2\alpha_{3})}
{\Upsilon_{b}(\alpha_{1}+\alpha_{2}+\alpha_{3}-Q)\Upsilon_{b}(\alpha_{1}+\alpha_{2}-\alpha_{3})
\Upsilon_{b}(\alpha_{1}+\alpha_{3}-\alpha_{2})
\Upsilon_{b}(\alpha_{2}+\alpha_{3}-\alpha_{1})}\end{gathered}$$ and [@Zamolodchikov:2005fy] $$\begin{gathered}
\label{CM}
\hat{C}(\alpha_{1},\alpha_{2},\alpha_{3}|b)=\\=
\frac{\Upsilon_{b}(b)\Upsilon_{b}(\alpha_{1}+\alpha_{2}+\alpha_{3}-b^{-1}+2b)\Upsilon_{b}(\alpha_{1}+\alpha_{2}-\alpha_{3}+b)
\Upsilon_{b}(\alpha_{1}+\alpha_{3}-\alpha_{2}+b)
\Upsilon_{b}(\alpha_{2}+\alpha_{3}-\alpha_{1}+b)}
{\Upsilon_{b}(2\alpha_{1}+b)\Upsilon_{b}(2\alpha_{2}+b)\Upsilon_{b}(2\alpha_{3}+b)},\end{gathered}$$
where $\Upsilon_{b}(x)$ is the entire self-dual function (with respect to the transformation $b\rightarrow1/b$), which was defined in [@Zamolodchikov:1995aa] by the integral representation $$\log\Upsilon_{b}(x)=\int_{0}^{\infty}\frac{dt}{t}
\left[\left(\frac{b+b^{-1}}{2}-x\right)^2e^{-t}-\frac
{\sinh^2\left(\frac{b+b^{-1}}{2}-x\right)\frac{t}{2}}
{\sinh\frac{bt}{2}\sinh\frac{t}{2b}}
\right].$$ Equations are written up to factors which can be eliminated by changing the normalization of the primary operators, which is always at our disposal. For the fields on the left hand side of we can define the three-point function $$C_{\scriptscriptstyle{\textsf{NS}}}(\alpha_{1},\alpha_{2},\alpha_{3})\overset{\text{def}}{=}
\langle \Phi_{\alpha_{1}}^{\scriptscriptstyle{\textsf{NSR}}}(0)
\Phi_{\alpha_{2}}^{\scriptscriptstyle{\textsf{NSR}}}(1)\Phi_{\alpha_{3}}^{\scriptscriptstyle{\textsf{NSR}}}(\infty)
\rangle_{b},$$ where the average is understood as an average in the Super-Liouville field theory with coupling constant $b$. Following [@Rashkov:1996jx; @Poghosian:1996dw] it has the following explicit form (again up to normalization of primary fields) $$C_{\scriptscriptstyle{\textsf{NS}}}(\alpha_{1},\alpha_{2},\alpha_{3})=
\frac{\Upsilon^{\scriptscriptstyle{\textsf{NS}}}_{b}(2\alpha_{1})\Upsilon^{\scriptscriptstyle{\textsf{NS}}}_{b}(2\alpha_{2})
\Upsilon^{\scriptscriptstyle{\textsf{NS}}}_{b}(2\alpha_{3})}
{\Upsilon^{\scriptscriptstyle{\textsf{NS}}}_{b}(\alpha_{1}+\alpha_{2}+\alpha_{3}-Q)
\Upsilon^{\scriptscriptstyle{\textsf{NS}}}_{b}(\alpha_{1}+\alpha_{2}-\alpha_{3})
\Upsilon^{\scriptscriptstyle{\textsf{NS}}}_{b}(\alpha_{1}+\alpha_{3}-\alpha_{2})
\Upsilon^{\scriptscriptstyle{\textsf{NS}}}_{b}(\alpha_{2}+\alpha_{3}-\alpha_{1})},$$ where $$\Upsilon^{\scriptscriptstyle{\textsf{NS}}}_{b}(x)\overset{\text{def}}{=}
\Upsilon_{b}\left(\frac{x}{2}\right)\Upsilon_{b}\left(\frac{x+Q}{2}\right).$$ Using the relation[^13] $$\frac{\Upsilon_{b^{\scriptscriptstyle{(1)}}}(\alpha^{\scriptscriptstyle{(1)}})}
{\Upsilon_{\hat{b}^{\scriptscriptstyle{(2)}}}(\hat{\alpha}^{\scriptscriptstyle{(2)}}+\hat{b}^{\scriptscriptstyle{(2)}})}=
\frac{\Upsilon_{b^{\scriptscriptstyle{(1)}}}(b^{\scriptscriptstyle{(1)}})}
{\Upsilon_{\hat{b}^{\scriptscriptstyle{(2)}}}(\hat{b}^{\scriptscriptstyle{(2)}})\Upsilon_{b}(b)}\,b^{\frac{b^{2}\alpha(Q-\alpha)}{2-2b^{2}}}
\left(\frac{1-b^{2}}{2}\right)^{\frac{\alpha(Q-\alpha)}{4}-\frac{1}{2}}
\Upsilon^{\scriptscriptstyle{\textsf{NS}}}_{b}(\alpha),$$ one can check that $$\label{CC=C}
C(\alpha_{1}^{\scriptscriptstyle{(1)}},\alpha_{2}^{\scriptscriptstyle{(1)}},\alpha_{3}^{\scriptscriptstyle{(1)}}|b^{\scriptscriptstyle{(1)}})
\hat{C}(\hat{\alpha}_{1}^{\scriptscriptstyle{(2)}},\hat{\alpha}_{2}^{\scriptscriptstyle{(2)}},\hat{\alpha}_{3}^{\scriptscriptstyle{(2)}}
|\hat{b}^{\scriptscriptstyle{(2)}})\simeq
C_{\scriptscriptstyle{\textsf{NS}}}(\alpha_{1},\alpha_{2},\alpha_{3}).$$ We note that choosing an appropriate normalization of the fields one can always set the coefficient of proportionality in equal to $1$.
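As a numerical sanity check of the integral representation above, one can test the standard shift relation $\Upsilon_{b}(x+b)=\gamma(bx)\,b^{1-2bx}\Upsilon_{b}(x)$ with $\gamma(x)=\Gamma(x)/\Gamma(1-x)$, known from [@Zamolodchikov:1995aa]. The sketch below is illustrative only: the quadrature grid and the sample values of $b$, $x$ are ad hoc, and $0<x<Q$ is assumed for convergence of the integral:

```python
import math

def log_upsilon(x, b, t_max=60.0, n=60000):
    """log Upsilon_b(x) from the integral representation, evaluated with a
    composite trapezoid rule on [eps, t_max]; assumes 0 < x < Q = b + 1/b."""
    a = (b + 1 / b) / 2 - x  # Q/2 - x
    def f(t):
        return (a * a * math.exp(-t)
                - math.sinh(a * t / 2) ** 2
                / (math.sinh(b * t / 2) * math.sinh(t / (2 * b)))) / t
    eps = 1e-6  # the integrand is finite at t -> 0, so a small cutoff suffices
    h = (t_max - eps) / n
    s = 0.5 * (f(eps) + f(t_max))
    s += sum(f(eps + k * h) for k in range(1, n))
    return s * h
```

In logarithmic form the shift relation reads $\log\Upsilon_{b}(x+b)-\log\Upsilon_{b}(x)=\log\gamma(bx)+(1-2bx)\log b$, which the quadrature reproduces to high accuracy for generic $b$, $x$.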
The ratio of the matrix elements can also be interpreted within this framework. Namely, let us assume that $m+k+k'$ is an even number, then $$\begin{gathered}
\label{l=CC}
l(\alpha,m|P',k',P,k)^{2}\simeq
\frac{C(\alpha_{1}^{\scriptscriptstyle{(1)}}+kb^{\scriptscriptstyle{(1)}}/2,\alpha_{2}^{\scriptscriptstyle{(1)}}+k'b^{\scriptscriptstyle{(1)}}/2,
\alpha^{\scriptscriptstyle{(1)}}+mb^{\scriptscriptstyle{(1)}}/2|b^{\scriptscriptstyle{(1)}})}
{C(\alpha_{1}^{\scriptscriptstyle{(1)}},\alpha_{2}^{\scriptscriptstyle{(1)}},\alpha^{\scriptscriptstyle{(1)}}|b^{\scriptscriptstyle{(1)}})}
\times\\\times
\frac{\hat{C}(\hat{\alpha}_{1}^{\scriptscriptstyle{(2)}}+k/2\hat{b}^{\scriptscriptstyle{(2)}},\hat{\alpha}_{2}^{\scriptscriptstyle{(2)}}+k'/2\hat{b}^{\scriptscriptstyle{(2)}},
\hat{\alpha}^{\scriptscriptstyle{(2)}}+m/2\hat{b}^{\scriptscriptstyle{(2)}}
|\hat{b}^{\scriptscriptstyle{(2)}})}
{\hat{C}(\hat{\alpha}_{1}^{\scriptscriptstyle{(2)}},\hat{\alpha}_{2}^{\scriptscriptstyle{(2)}},\hat{\alpha}^{\scriptscriptstyle{(2)}}
|\hat{b}^{\scriptscriptstyle{(2)}})},\end{gathered}$$ where $$\alpha_{1}=\frac{Q}{2}+P,\qquad
\alpha_{2}=\frac{Q}{2}+P',$$ and the sets $(\alpha_{1}^{\scriptscriptstyle{(\sigma)}},\alpha_{2}^{\scriptscriptstyle{(\sigma)}},\alpha^{\scriptscriptstyle{(\sigma)}})$ and $(\hat{\alpha}_{1}^{\scriptscriptstyle{(\sigma)}},\hat{\alpha}_{2}^{\scriptscriptstyle{(\sigma)}},\hat{\alpha}^{\scriptscriptstyle{(\sigma)}})$ are related to $(\alpha_{1},\alpha_{2},\alpha)$ as in . Equation can be checked (again up to normalization of the fields) using the relation $$\begin{gathered}
\frac{\Upsilon_{b^{\scriptscriptstyle{(1)}}}(\alpha^{\scriptscriptstyle{(1)}})}
{\Upsilon_{b^{\scriptscriptstyle{(1)}}}(\alpha^{\scriptscriptstyle{(1)}}+nb^{\scriptscriptstyle{(1)}})}
\frac{\Upsilon_{\hat{b}^{\scriptscriptstyle{(2)}}}(\hat{\alpha}^{\scriptscriptstyle{(2)}}+\hat{b}^{\scriptscriptstyle{(2)}}+n/\hat{b}^{\scriptscriptstyle{(2)}})}
{\Upsilon_{\hat{b}^{\scriptscriptstyle{(2)}}}(\hat{\alpha}^{\scriptscriptstyle{(2)}}+\hat{b}^{\scriptscriptstyle{(2)}})}=
\frac{(-1)^{n}}{(2-2b^{2})^{n^{2}}}\,
b^{\frac{2bn}{(1-b^{2})}(x+nb^{-1}-Q/2)}
\times\\\times
\hspace*{-10pt}
\prod_{\substack{i,j\geq1,\;i+j\leq2n\\i+j\equiv0\mod 2}}\hspace*{-10pt}(\alpha+(i-1)b+(j-1)b^{-1})^{2}.\end{gathered}$$ The case when $m+k+k'$ is an odd number can be treated similarly.
Highest weight vectors {#Highest-weight}
======================
In this appendix we give explicit expressions for the highest weight vectors $|P,k\rangle$ defined by with $$\Delta^{\scriptscriptstyle{(1)}}(P,k)=\frac{(Q^{\scriptscriptstyle{(1)}})^{2}}{4}-\left(P^{\scriptscriptstyle{(1)}}
+\frac{kb^{\scriptscriptstyle{(1)}}}{2}\right)^{2},\quad
\Delta^{\scriptscriptstyle{(2)}}(P,k)=\frac{(Q^{\scriptscriptstyle{(2)}})^{2}}{4}-\left(P^{\scriptscriptstyle{(2)}}
+\frac{k}{2b^{\scriptscriptstyle{(2)}}}\right)^{2}.$$ The state $|P,k\rangle$ belongs to the level $k^{2}/2$ of the highest weight representation $|P\rangle$ of the algebra $\mathcal{F}\oplus\textsf{NSR}$. For each value of $k^{2}/2$ there are exactly two states $|P,k\rangle$ and $|P,-k\rangle$ orthogonal to each other. For example, on the level $1/2$ we have $$\begin{aligned}
&|P,1\rangle=\left(G_{-\frac{1}{2}}+(Q/2+P)f_{-\frac{1}{2}}\right)|P\rangle_{\scriptscriptstyle{\textsf{NS}}},\\
&|P,-1\rangle=\left(G_{-\frac{1}{2}}+(Q/2-P)f_{-\frac{1}{2}}\right)|P\rangle_{\scriptscriptstyle{\textsf{NS}}}
\end{aligned}$$ and on the level $2$
$$\begin{gathered}
|P,2\rangle=\bigl(G_{-\frac{1}{2}}^{4}+(Q/2+P)^{2}G_{-\frac{1}{2}}G_{-\frac{3}{2}}-
(Q/2+P+b)(Q/2+P+b^{-1})G_{-\frac{3}{2}}G_{-\frac{1}{2}}
-2(Q+P)G_{-\frac{1}{2}}^{3}f_{-\frac{1}{2}}-\\
-2(Q/2+P)^{2}(Q+P)G_{-\frac{3}{2}}f_{-\frac{1}{2}}+ 2(Q/2+P+b)(Q/2+P+b^{-1})(Q+P)G_{-\frac{1}{2}}f_{-\frac{3}{2}}+\\+
2(Q/2+P)(Q/2+P+b)(Q/2+P+b^{-1})(Q+P)f_{-\frac{1}{2}}f_{-\frac{3}{2}}\bigr)|P\rangle_{\scriptscriptstyle{\textsf{NS}}},\end{gathered}$$
$$\begin{gathered}
|P,-2\rangle=\bigl(G_{-\frac{1}{2}}^{4}+(Q/2-P)^{2}G_{-\frac{1}{2}}G_{-\frac{3}{2}}-
(Q/2-P+b)(Q/2-P+b^{-1})G_{-\frac{3}{2}}G_{-\frac{1}{2}}
-2(Q-P)G_{-\frac{1}{2}}^{3}f_{-\frac{1}{2}}-\\
-2(Q/2-P)^{2}(Q-P)G_{-\frac{3}{2}}f_{-\frac{1}{2}}+ 2(Q/2-P+b)(Q/2-P+b^{-1})(Q-P)G_{-\frac{1}{2}}f_{-\frac{3}{2}}+\\+
2(Q/2-P)(Q/2-P+b)(Q/2-P+b^{-1})(Q-P)f_{-\frac{1}{2}}f_{-\frac{3}{2}}\bigr)|P\rangle_{\scriptscriptstyle{\textsf{NS}}}.\end{gathered}$$
We note that there is an obvious relation $$\label{k-k}
|P,k\rangle=|-P,-k\rangle.$$ For a general integer $k$ one can construct the state $|P,k\rangle$ as described in section \[SAGT\]. Due to it is enough to consider only the case $k>0$. We can then look for the vector $|P,k\rangle$ in the form $$\label{high-weight-def1-app}
|P,k\rangle=(G_{-\frac{1}{2}}^{k^{2}}+C_{1}(P)G_{-\frac{1}{2}}^{k^{2}-3}G_{-\frac{3}{2}}+\dots)|P\rangle_{\scriptscriptstyle{\textsf{NS}}},$$ where $C_{1}(P),\dots$ are the coefficients to be determined. As was explained in section \[SAGT\], the state $|P,k\rangle$ has a nice representation in terms of free fields. This means that if we express the generators $G_{r}$ as in (for $k>0$ we have to take the sign “$-$” in ) and use the commutation relations, we will have $$\label{high-weight-def2-app}
|P,k\rangle=\Omega_{k}(P)\,\chi_{-\frac{1}{2}}\chi_{-\frac{3}{2}}\dots\chi_{-\frac{2|k|-1}{2}}|\text{\textrm{vac}}\rangle,$$ where $$\chi_{r}=f_{r}-i\psi_{r}.$$ Comparing and we find all the coefficients $C_{j}(P)$ unambiguously.
Comparison of $Z_{\textrm{pure}}^{X_{2}}$ and $Z_{\textrm{pure}}^{\diamond}$ {#Bersh-app}
===========================================================================
We claimed in section \[SAGT-collored\] that the sets of summands on the left-hand side and on the right-hand side of the identity are different. In this appendix we give an example of this phenomenon. The expressions in first differ in the coefficient of $\Lambda^{8}$ in the $\Lambda$ expansion. For brevity we will use the following notation: $$\eps_{i,\,j}=i\eps_1+j\eps_2,\quad a_{i,\,j}=2a+i\eps_1+j\eps_2.$$ The left-hand side of can be computed using the formula (we omit $\vec{a}, \eps_1, \eps_2$ in the notation). At order $\Lambda^{8}$ the result reads: $$\begin{aligned}
&Z_{\textsf{vec}}^{\diamond}((4),\,\varnothing)+ Z_{\textsf{vec}}^{\diamond}((3,1),\,\varnothing)+ Z_{\textsf{vec}}^{\diamond}((2,2),\,\varnothing)+ Z_{\textsf{vec}}^{\diamond}((2,1,1),\,\varnothing)+\\ & Z_{\textsf{vec}}^{\diamond}((1,1,1,1),\,\varnothing)+Z_{\textsf{vec}}^{\diamond}((2,1),\,(1))+ Z_{\textsf{vec}}^{\diamond}((2),\,(2))+ Z_{\textsf{vec}}^{\diamond}((2),\,(1,1))+\\ & Z_{\textsf{vec}}^{\diamond}((1,1),\,(2))+ Z_{\textsf{vec}}^{\diamond}((1,1),\,(1,1))+ Z_{\textsf{vec}}^{\diamond}((1),\,(2,1))+ Z_{\textsf{vec}}^{\diamond}(\varnothing,\,(4))+\\ & Z_{\textsf{vec}}^{\diamond}(\varnothing,\,(3,1))+ Z_{\textsf{vec}}^{\diamond}(\varnothing,\,(2,2))+ Z_{\textsf{vec}}^{\diamond}(\varnothing,\,(2,1,1))+ Z_{\textsf{vec}}^{\diamond}(\varnothing,\,(1,1,1,1))=\\ &\frac1{\eps_{1,\,-3}\eps_{0,\,4}\eps_{1,\,-1}\eps_{0,\,2} a_{1,\,1}a_{0,\,0}a_{1,\,3}a_{0,\,2}}+
\frac1{\eps_{2,\,-2}\eps_{-1,\,3}\eps_{1,\,-1}\eps_{0,\,2} a_{1,\,3}a_{0,\,2}a_{1,\,1}a_{0,\,0}}+
\\& \frac1{\eps_{2,\,0}\eps_{-1,\,1}\eps_{1,\,-1}\eps_{0,\,2} a_{2,\,2}a_{1,\,1}a_{1,\,1}a_{0,\,0}}+
\frac1{\eps_{3,\,-1}\eps_{-2,\,2}\eps_{2,\,0}\eps_{-1,\,1} a_{3,\,1}a_{2,\,0}a_{1,\,1}a_{0,\,0}}+
\\& \frac1{\eps_{4,\,0}\eps_{-3,\,1}\eps_{2,\,0}\eps_{-1,\,1} a_{3,\,1}a_{2,\,0}a_{1,\,1}a_{0,\,0}}+
\frac1{a_{2,\,0}a_{1,\,-1}a_{1,\,1}a_{0,\,0} a_{1,\,1}a_{0,\,0}a_{-1,\,1}a_{0,\,2}}+\\
&\frac1{\eps_{1,\,-1}\eps_{0,\,2}\eps_{1,\,-1}\eps_{0,\,2} a_{1,\,-1}a_{0,\,-2}a_{-1,\,1}a_{0,\,2}}+
\frac1{\eps_{1,\,-1}\eps_{0,\,2}\eps_{2,\,0}\eps_{-1,\,1} a_{1,\,1}a_{0,\,0}a_{1,\,1}a_{0,\,0}}+
\\& \frac1{\eps_{1,\,-1}\eps_{0,\,2}\eps_{2,\,0}\eps_{-1,\,1} a_{1,\,1}a_{0,\,0}a_{1,\,1}a_{0,\,0}}+
\frac1{\eps_{2,\,0}\eps_{-1,\,1}\eps_{2,\,0}\eps_{-1,\,1} a_{2,\,0}a_{1,\,-1}a_{-2,\,0}a_{-1,\,1}}+
\\& \frac1{a_{1,\,-1}a_{0,\,2}a_{-2,\,0}a_{-1,\,1} a_{-1,\,-1}a_{0,\,0}a_{-1,\,-1}a_{0,\,0}}+
\frac1{\eps_{1,\,-3}\eps_{0,\,4}\eps_{1,\,-1}\eps_{0,\,2} a_{-1,\,-1}a_{0,\,0}a_{-1,\,-3}a_{0,\,-2}}+\\
&\frac1{\eps_{2,\,-2}\eps_{-1,\,3}\eps_{1,\,-1}\eps_{0,\,2} a_{-1,\,-3}a_{0,\,-2}a_{-1,\,-1}a_{0,\,0}}+
\frac1{\eps_{2,\,0}\eps_{-1,\,1}\eps_{1,\,-1}\eps_{0,\,2} a_{-2,\,-2}a_{-1,\,-1}a_{-1,\,-1}a_{0,\,0}}+\\
& \frac1{\eps_{3,\,-1}\eps_{-2,\,2}\eps_{2,\,0}\eps_{-1,\,1} a_{-3,\,-1}a_{-2,\,0}a_{-1,\,-1}a_{0,\,0}}+
\frac1{\eps_{4,\,0}\eps_{-3,\,1}\eps_{2,\,0}\eps_{-1,\,1} a_{-3,\,-1}a_{-2,\,0}a_{-1,\,-1}a_{0,\,0}}=
\\& \frac{16 a^4-52 a^2 \epsilon _1^2+36 \epsilon _1^4-92 a^2 \epsilon _1 \epsilon _2+177 \epsilon _1^3 \epsilon _2-52 a^2 \epsilon _2^2+294 \epsilon _1^2 \epsilon _2^2+177 \epsilon _1 \epsilon _2^3+36 \epsilon _2^4}{2 \eps_1 \eps_2 a_{-1,\,-1} a_{1,\,1} a_{-2,\,-2} a_{2,\,2} a_{-3,\,-1} a_{-1,\,-3} a_{1,\,3} a_{3,\,1}}\;.\end{aligned}$$ The right hand side of can be computed using the formula $$\begin{aligned}
&Z_{\textsf{vec}}(\{\varnothing,\,\varnothing\},\,\{\varnothing,\,\varnothing\},\,-2)+
Z_{\textsf{vec}}(\{(2),\,\varnothing\},\,\{\varnothing,\,\varnothing\},\,0)+ Z_{\textsf{vec}}(\{(1,1),\,\varnothing\},\,\{\varnothing,\,\varnothing\},\,0)+ \\ & Z_{\textsf{vec}}(\{(1),\,(1)\},\,\{\varnothing,\,\varnothing\},\,0)+ Z_{\textsf{vec}}(\{\varnothing,\,(2)\},\,\{\varnothing,\,\varnothing\},\,0)+ Z_{\textsf{vec}}(\{\varnothing,\,(1,1)\},\,\{\varnothing,\,\varnothing\},\,0)+ \\ & Z_{\textsf{vec}}(\{(1),\,\varnothing\},\,\{(1),\,\varnothing\},\,0)+ Z_{\textsf{vec}}(\{(1),\,\varnothing\},\,\{\varnothing,\,(1)\},\,0)+
Z_{\textsf{vec}}(\{\varnothing,\,(1)\},\,\{(1),\,\varnothing\},\,0)+ \\ & Z_{\textsf{vec}}(\{\varnothing,\,(1)\},\,\{\varnothing,\,(1)\},\,0)+ Z_{\textsf{vec}}(\{\varnothing,\,\varnothing\},\,\{(2),\,\varnothing\},\,0)+ Z_{\textsf{vec}}(\{\varnothing,\,\varnothing\},\,\{(1,1),\,\varnothing\},\,0)+ \\ & Z_{\textsf{vec}}(\{\varnothing,\,\varnothing\},\,\{(1),\,(1)\},\,0)+ Z_{\textsf{vec}}(\{\varnothing,\,\varnothing\},\,\{\varnothing,\,(2)\},\,0)+ Z_{\textsf{vec}}(\{\varnothing,\,\varnothing\},\,\{\varnothing,\,(1,1)\},\,0)+ \\ & Z_{\textsf{vec}}(\{\varnothing,\,\varnothing\},\,\{\varnothing,\,\varnothing\},\,2)=\\
&\frac1{a_{0,\,0}a_{-2,\,0}a_{0,\,-2} a_{-1,\,-1}a_{-1,\,-1} a_{-1,\,-3} a_{-3,\,-1}a_{-2,\,-2}}+
\frac1{\eps_{3,\,-1}\eps_{-2,\,2}\eps_{2,\,0}\eps_{-1,\,1} a_{1,\,1}a_{0,\,0}a_{0,\,2}a_{-1,\,1}}+\\
& \frac1{\eps_{4,\,0}\eps_{-3,\,1}\eps_{2,\,0}\eps_{-1,\,1} a_{3,\,1}a_{2,\,0}a_{1,\,1}a_{0,\,0}}+
\frac1{\eps_{2,\,0}\eps_{-1,\,1}\eps_{2,\,0}\eps_{-1,\,1} a_{2,\,0}a_{1,\,-1}a_{-2,\,0}a_{-1,\,1}}+\\
& \frac1{\eps_{3,\,-1}\eps_{-2,\,2}\eps_{2,\,0}\eps_{-1,\,1} a_{-1,\,-1}a_{0,\,0}a_{0,\,-2}a_{1,\,-1}}+
\frac1{\eps_{4,\,0}\eps_{-3,\,1}\eps_{2,\,0}\eps_{-1,\,1} a_{-3,\,-1}a_{-2,\,0}a_{-1,\,-1}a_{0,\,0}}+\\
& \frac1{\eps_{2,\,0}\eps_{-1,\,1}\eps_{1,\,-1}\eps_{0,\,2} a_{1,\,1}a_{0,\,0}a_{1,\,1}a_{0,\,0}}+
\frac1{\eps_{2,\,0}\eps_{-1,\,1}\eps_{1,\,-1}\eps_{0,\,2} a_{1,\,1}a_{0,\,0}a_{-1,\,-1}a_{0,\,0}}+\\
& \frac1{\eps_{-1,\,1}\eps_{0,\,2}\eps_{2,\,0}\eps_{-1,\,1} a_{1,\,1}a_{0,\,0}a_{-1,\,-1}a_{0,\,0}}+
\frac1{\eps_{2,\,0}\eps_{-1,\,1}\eps_{1,\,-1}\eps_{0,\,2} a_{-1,\,-1}a_{0,\,0}a_{-1,\,-1}a_{0,\,0}}+\end{aligned}$$ $$\begin{aligned}
& \frac1{\eps_{1,\,-3}\eps_{0,\,4}\eps_{1,\,-1}\eps_{0,\,2} a_{1,\,1}a_{0,\,0}a_{1,\,3}a_{0,\,2}}+
\frac1{\eps_{2,\,-2}\eps_{-1,\,3}\eps_{1,\,-1}\eps_{0,\,2} a_{2,\,0}a_{1,\,-1}a_{1,\,1}a_{0,\,0}}+\\
& \frac1{\eps_{1,\,-1}\eps_{0,\,2}\eps_{1,\,-1}\eps_{0,\,2} a_{1,\,-1}a_{0,\,-2}a_{-1,\,1}a_{0,\,2}}+
\frac1{\eps_{1,\,-3}\eps_{0,\,4}\eps_{1,\,-1}\eps_{0,\,2} a_{-1,\,-1}a_{0,\,0}a_{-1,\,-3}a_{0,\,-2}}+\\
& \frac1{\eps_{2,\,-2}\eps_{-1,\,3}\eps_{1,\,-1}\eps_{0,\,2} a_{-2,\,0}a_{-1,\,1}a_{-1,\,-1}a_{0,\,0}}+
\frac1{a_{0,\,0}a_{2,\,0}a_{0,\,2} a_{1,\,1}a_{1,\,1} a_{1,\,3} a_{3,\,1}a_{2,\,2}}=\\
&\frac{16 a^4-52 a^2 \epsilon _1^2+36 \epsilon _1^4-92 a^2 \epsilon _1 \epsilon _2+177 \epsilon _1^3 \epsilon _2-52 a^2 \epsilon _2^2+294 \epsilon _1^2 \epsilon _2^2+177 \epsilon _1 \epsilon _2^3+36 \epsilon _2^4}{2 \eps_1 \eps_2 a_{-1,\,-1} a_{1,\,1} a_{-2,\,-2} a_{2,\,2} a_{-3,\,-1} a_{-1,\,-3} a_{1,\,3} a_{3,\,1}}\;.\end{aligned}$$ We see that the results are the same but the sets of summands are different. For example, on each side there are only two summands of degree $8$ in the variable $a$, and these summands are different.
[^1]: An algebraic construction of such a limit of the toroidal algebra is given for $r=1$ in [@10.1063/1.2823979; @2010arXiv1002.2485F] and for $r>1$ in [@Feigin-unpub]. The geometrical interpretation of the obtained coset algebras remains rather implicit.
[^2]: Some analysis of the case $p=4$ and $r=2$ was done in [@Wyllard:2011mn; @Alfimov:2011ju].
[^3]: This fixed point is labeled by the pair of $r$-tuples of Young diagrams $\vec{Y}$ and $\vec{W}$.
[^4]: Here and below we assume that our Heisenberg algebra has no zero mode, since it plays an artificial role in our construction. In other words, we assume that we are considering highest weight representations such that $a_{0}|0\rangle=0$.
[^5]: The possibility of using the construction [@Crnkovic:1989gy; @Crnkovic:1989ug; @Lashkevich:1992sb] in this context was also suggested by Wyllard in [@Wyllard:2011mn].
[^6]: Usually, the character, which is defined as $\textrm{Tr}\,q^{L_{0}}\bigl|_{\pi_{\Delta}}$, is proportional to $q^{\Delta}$. We have omitted these factors for simplicity.
[^7]: Note that states $|P,k\rangle$ and $\langle k',P'|$ cannot be represented in form and simultaneously.
[^8]: The geometrical definition of the vertex operator in [@Carlsson:2008fk] (for the case of Hilbert schemes) depends on a line bundle on the surface. It is natural to expect that the vertex operator $\mathbb{V}_{\alpha}^{(m)}$ corresponds to the line bundle $\mathcal{O}(mC)$ on the surface $X_2$.
[^9]: The connectedness of $\mathcal{M}(r,d,N)$ follows from its description in terms of Nakajima quiver varieties.
[^10]: Such components satisfy the condition $q_1+q_2+2(N_+-N_-)=0$ which can be interpreted as the vanishing of the first Chern class [@Fucito:2004ry].
[^11]: The same phenomenon was independently noticed by R. Poghossian [@Pog-cite].
[^12]: For $m=0$ this relation was noticed in [@Crnkovic:1989ug].
[^13]: We note that this relation is very similar to the relation used in ref. [@Bershtein:2010wz], where the connection between the parafermionic Liouville theory and the three-exponential model [@Fateev:1996ea] was studied.
---
author:
- |
\
Los Alamos National Laboratory\
E-mail:
- |
Rubén López-Coto\
Max-Planck Institute for Nuclear Physics\
E-mail:
- |
Francisco Salesa Greus\
The Henryk Niewiadomski Institute of Nuclear Physics, Polish Academy of Sciences\
E-mail:
- 'For the HAWC Collaboration[^1]'
title: 'Constraining the Diffusion Coefficient with HAWC TeV Gamma-Ray Observations of Two Nearby Pulsar Wind Nebulae'
---
Introduction
============
Launched into orbit in 2006, the PAMELA detector discovered an excess in the positron fraction at energies above 10 GeV as compared to theoretical models of positron production [@pamela]. This anomalous observation has been confirmed with high precision by the Fermi Large Area Telescope (Fermi-LAT) [@fermi_e+] and the Alpha Magnetic Spectrometer (AMS) [@ams]. It was proposed that this overabundance of positrons could be a consequence of the annihilation or decay of dark matter, but an alternative explanation is that the positron excess is due to nearby electron/positron accelerators. Pulsar wind nebulae (PWNe), known as efficient electron/positron accelerators, were postulated as sources of the positron excess [@yuksel][@hooper]. At 250 pc and 288 pc, Geminga (PSR J0633+1746) and PSR B0656+14 are two of the nearest pulsars to Earth, and this proximity combined with their relatively advanced age makes them important candidates for contributing to the locally measured electron and positron flux.
Ultra-relativistic electrons and positrons cool down via inverse Compton scattering (ICS) and synchrotron radiation. TeV gamma rays can be produced through ICS off lower-energy photons, e.g. the cosmic microwave background. An extended TeV PWN with a size of $2.8^\circ \pm 0.8^\circ$ around Geminga was reported by the Milagro experiment [@milagro]. Weak evidence ($2.2\sigma$) of this large nebula emission was reported by the Tibet air shower array [@tibet], but IACT observations using standard analysis techniques have only provided upper limits. In Fermi-LAT data, the Geminga pulsar is one of the brightest sources in the GeV sky, but there is no unambiguous evidence of the existence of a surrounding nebula at GeV energies.
In this contribution, we will demonstrate the method of morphological analysis of the extended TeV emission around Geminga and PSR B0656+14 in order to constrain the particle diffusion and to estimate the contribution from these sources to the local flux of electrons and positrons measured near Earth.
Diffusion Coefficient
=====================
Both Geminga and PSR B0656+14 are middle-aged pulsars with relatively low magnetic fields, and the modulation of the surrounding interstellar medium (ISM) due to particle acceleration by these sources is low. Therefore we consider the scenario in which the accelerated particles diffuse isotropically into the ISM. In order to constrain the electron and positron flux that reaches the Earth from these two pulsars, we need to know how fast these particles diffuse in the ISM, quantified by the diffusion coefficient. It is a property of the medium and depends on the energy of the particles,
$$D(E_e) = D_0(E_e/10\,\textrm{GeV})^\delta
\label{equ:dc}$$
where the typical value of the diffusion index $\delta$ is 1/3 and $D_0$ is the diffusion coefficient at 10 GeV. The diffusion coefficient has been measured from the Boron-Carbon ratio in hadronic cosmic rays. Figure \[fig:dcbc\] summarizes the diffusion coefficients as a function of energy from different measurements.
![Diffusion coefficients from different measurements: blue [@strong], green [@delahaye], black [@yin], red [@yuksel], and yellow [@adriani].[]{data-label="fig:dcbc"}](D_comparation.pdf){width="0.7\linewidth"}
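For orientation, equation \[equ:dc\] can be evaluated directly. In the sketch below the normalization $D_0$ is a placeholder of the order of the Boron-Carbon values shown in the figure, not a measured quantity:

```python
def diffusion_coefficient(E_e_GeV, D0=1e28, delta=1.0 / 3.0):
    """Eq. (equ:dc): D(E) = D0 * (E / 10 GeV)^delta, in cm^2/s.

    D0 is an assumed normalization at 10 GeV (placeholder value only).
    """
    return D0 * (E_e_GeV / 10.0) ** delta

# With delta = 1/3, going from 10 GeV to 100 TeV (10^5 GeV) increases D
# by a factor of (10^4)^(1/3), i.e. about 21.5.
D_100TeV = diffusion_coefficient(1e5)
```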
However, the diffusion coefficient measured from the Boron-Carbon ratio is the average value encountered over the very long lifetime of hadronic cosmic rays, which are expected to have spent much of that time in the Galactic halo. The local diffusion coefficient could be different. The measurement of the source size around these two nearby pulsars will provide a constraint on the local diffusion coefficient in the ISM.
HAWC Observations
=================
The High Altitude Water Cherenkov (HAWC) Observatory, located in central Mexico at 4100 m above sea level, is sensitive to gamma rays between 100 GeV and 100 TeV. Thanks to its large field of view of 2 steradians, HAWC has a good sensitivity to extended sources such as pulsar wind nebulae. With 17 months of data, two very extended sources are detected spatially coincident with Geminga and PSR B0656+14 [@2hwc]. Figure \[fig:map\] shows the significance map of the region around these two pulsars convolved with the point spread function.
![The significance map convolved with point spread function in the region around Geminga and PSR B0656+14 with 17 months of HAWC data.[]{data-label="fig:map"}](geminga_psf.pdf){width="0.5\linewidth"}
From Figure \[fig:map\], there is clearly extended emission beyond the point spread function of HAWC around both pulsars, showing evidence of electrons and positrons diffusing away from the sources into the ISM. We can use the size of the extended sources to constrain the diffusion coefficient. In studies of extended sources, the most commonly used morphological models are the disk and Gaussian models. However, with these models there is no direct connection between the model parameters and the physical parameters. In this work, we develop a particle diffusion model and apply it to the HAWC data.
Diffusion Model
===============
The TeV gamma-ray morphology is determined assuming a model where electrons and positrons diffuse isotropically away from the source into the ISM. They produce TeV gamma rays through ICS off low-energy photons in the ISM, i.e. cosmic microwave background (CMB), infrared, and optical photons. We hereafter obtain an approximate formula for the gamma-ray emission of the diffusing electron-positron cloud, which we will use to constrain the size of both HAWC sources.
In the case of continuous injection of electrons (and positrons) from a point source at a constant rate $Q_0 E_e^{-\Gamma}$, the radial distribution of the electrons with energy $E_e$ at an instant $t$ and distance $r$ from the source is given by equation 21 of [@diffusion],
$$f(t,r,E_e) = \frac{Q_0 E_e^{-\Gamma}}{4\pi D(E_e)r}\,\mathrm{erfc}\left(\frac{r}{r_d}\right)
\label{equ:diffusion}$$
where $D(E_e)$ is the diffusion coefficient as a function of the electron energy $E_e$ and $r_d$ is the diffusion radius, up to which the electrons efficiently diffuse. They are defined as
$$D(E_e) = D_0(E_e/10\,\textrm{GeV})^\delta\,,\qquad
r_d = 2\sqrt{D(E_e)t_E}
\label{equ:dr}$$
The typical diffusion index $\delta$ is 1/3 and is fixed in this analysis. $t_E$ is the smaller of two timescales: the injection time $t$ (in this case the age of the pulsar) and the electron cooling time $t_{cool}$, which is a function of the electron energy and the target photon energy,
$$t_{cool} = \frac{m_e c^2}{4/3 c \sigma_T \gamma} \cdot \frac{1}{\mu_B + \mu_{ph}/(1+4\gamma \epsilon_0)^{3/2}}
\label{equ:tcool}$$
where $\sigma_T$ is the Thomson cross section, $\gamma$ is the Lorentz factor of the electrons, $\mu_B = B^2/8\pi$ is the energy density of the magnetic field, and $\mu_{ph}$ is the energy density of the target photon field with average energy $\epsilon_0$ per photon. Equation \[equ:tcool\] takes into account the energy loss of electrons due to both synchrotron radiation in the magnetic field and ICS off low-energy photons. For the electrons that produce TeV gamma rays through ICS, where the Klein-Nishina effects are important [@kn], ICS off infrared and optical photons in the ISM is highly suppressed, leaving only CMB photons as important targets. At these energies, the cooling time is much shorter than the age of these two pulsars. Therefore $t_E$ in equation \[equ:dr\] can be replaced by $t_{cool}$.
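As an illustration, the cooling time of equation \[equ:tcool\] can be evaluated numerically. The sketch below is not part of the analysis; the magnetic field (3 $\mu$G) and the CMB energy density and mean photon energy are typical assumed ISM numbers, and $\epsilon_0$ is expressed in units of $m_ec^2$ so that the Klein-Nishina factor reads $(1+4\gamma\epsilon_0)^{3/2}$:

```python
import math

# Illustrative evaluation of t_cool (equ:tcool) in cgs units.
# Assumed values (not from the paper): B = 3 uG, CMB energy density
# 0.26 eV/cm^3, mean CMB photon energy 6.3e-4 eV.
MEC2_ERG = 8.187e-7      # electron rest energy, erg
SIGMA_T = 6.652e-25      # Thomson cross section, cm^2
C_CGS = 2.998e10         # speed of light, cm/s
ERG_PER_EV = 1.602e-12

def t_cool(E_e_eV, B_gauss=3e-6, u_cmb_eV=0.26, eps0_eV=6.3e-4):
    """Cooling time in seconds against synchrotron + ICS (CMB only)."""
    gamma = E_e_eV * ERG_PER_EV / MEC2_ERG
    mu_B = B_gauss**2 / (8 * math.pi)                      # erg/cm^3
    mu_ph = u_cmb_eV * ERG_PER_EV                          # erg/cm^3
    kn = (1 + 4 * gamma * eps0_eV * ERG_PER_EV / MEC2_ERG) ** 1.5
    return MEC2_ERG / ((4.0 / 3.0) * C_CGS * SIGMA_T * gamma
                       * (mu_B + mu_ph / kn))

# 100 TeV electrons cool on a ~10 kyr timescale, much shorter than the
# ~3e5 yr age of these pulsars, so t_E = t_cool in equ:dr.
tc_100TeV = t_cool(1e14)
```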
Integrating the energy distribution of electrons and positrons along the observer’s line of sight, and considering the gamma rays produced through ICS, we obtain the morphology of the gamma rays as a function of the distance $d$. An analytical approximation to this numerical integral is found,
$$f_d = \frac{1.2154}{\pi^{3/2} r_d(d+0.06r_d)}\exp\left(-\frac{d^2}{r_d^2}\right)
\label{equ:f2d}$$
With a source distance of $d_{src}$, equation \[equ:f2d\] becomes,
$$f_\theta = \frac{1.2154}{\pi^{3/2} \theta_d(\theta+0.06\theta_d)}\exp\left(-\frac{\theta^2}{\theta_d^2}\right)
\label{equ:f2d}$$
where $\theta$ is the angular distance from the pulsar and $\theta_d = r_d/d_{src} \cdot 180^\circ/\pi$ is the diffusion angle. Combining equations \[equ:f2d\], \[equ:dr\], and \[equ:tcool\], the diffusion angle is a function of the electron energy $E_e$ and the target field,
$$\theta_d = \theta_0 (\frac{E_e}{E_{e0}})^\frac{\delta-1}{2}\sqrt{\frac{B^2/8\pi+\mu_{ph}/(1+4\epsilon_0E_{e0}/m_ec^2)^{3/2}}{B^2/8\pi+\mu_{ph}/(1+4\epsilon_0E_{e}/m_ec^2)^{3/2}}}
\label{equ:da}$$
where $\theta_0$ is the diffusion angle at the pivot energy $E_{e0}$. The relation between the mean electron and gamma-ray energies is given by [@eg],
$$\langle E_e\rangle \approx 17\, \langle E_\gamma\rangle^{0.54+0.046\log_{10}(\langle E_\gamma\rangle/\textrm{TeV})}
\label{equ:eg}$$
The pivot energy $E_{e0}$ is chosen to be 100 TeV in this analysis, which corresponds to $\sim 20$ TeV gamma rays. Figure \[fig:3models\] compares the radial profiles as a function of the distance from the pulsar for the three morphological models.
![Radial profiles of three morphological models: disk, Gaussian, and diffusion model.[]{data-label="fig:3models"}](3models.pdf){width="0.7\linewidth"}
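Two quick numerical checks of the formulas above can be sketched in Python (nothing here is taken from the HAWC analysis itself): the constant 1.2154 in equation \[equ:f2d\] normalises the profile to unit integral over the sky plane, and equation \[equ:eg\] maps $\sim 20$ TeV gamma rays to the $\sim 100$ TeV pivot electron energy quoted in the text.

```python
import math

def f_profile(d, r_d):
    """Eq. (equ:f2d): projected diffusion profile at distance d."""
    return (1.2154 / (math.pi**1.5 * r_d * (d + 0.06 * r_d))
            * math.exp(-(d / r_d) ** 2))

def sky_integral(r_d, d_max_factor=10.0, n=100000):
    """Midpoint-rule integral of f_profile * 2*pi*d over the plane."""
    h = d_max_factor * r_d / n
    return sum(
        f_profile((i + 0.5) * h, r_d) * 2 * math.pi * (i + 0.5) * h * h
        for i in range(n)
    )

def mean_electron_energy_TeV(E_gamma_TeV):
    """Eq. (equ:eg): mean ICS electron energy for a given gamma-ray energy."""
    return 17.0 * E_gamma_TeV ** (0.54 + 0.046 * math.log10(E_gamma_TeV))

# sky_integral(1.0) is close to 1 (1.2154 is a normalization constant),
# and mean_electron_energy_TeV(20.0) is close to 100 TeV.
```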
We then fit the gamma-ray emission around Geminga and PSR B0656+14 with the diffusion model defined in equations \[equ:f2d\] and \[equ:da\] using the Multi-Mission Maximum Likelihood framework [@3ml], and calculate the diffusion coefficient of 100 TeV electrons based on the diffusion angle obtained from the likelihood fit.
Results and Discussion
======================
HAWC observations reveal two very extended TeV gamma-ray sources spatially coincident with Geminga and PSR B0656+14, suggesting that ultra-relativistic electrons and positrons are accelerated in our neighborhood. The results of the analysis of these two sources and the obtained diffusion coefficient will be presented at ICRC2017. The implications for the positron contribution of these sources to the local flux can be found in two other proceedings for this conference [@paco] [@ruben].
Acknowledgments {#acknowledgments .unnumbered}
===============
We acknowledge the support from: the US National Science Foundation (NSF); the US Department of Energy Office of High-Energy Physics; the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory; Consejo Nacional de Ciencia y Tecnología (CONACyT), M[é]{}xico (grants 271051, 232656, 260378, 179588, 239762, 254964, 271737, 258865, 243290, 132197), Laboratorio Nacional HAWC de rayos gamma; L’OREAL Fellowship for Women in Science 2014; Red HAWC, M[é]{}xico; DGAPA-UNAM (grants IG100317, IN111315, IN111716-3, IA102715, 109916, IA102917); VIEP-BUAP; PIFI 2012, 2013, PROFOCIE 2014, 2015; the University of Wisconsin Alumni Research Foundation; the Institute of Geophysics, Planetary Physics, and Signatures at Los Alamos National Laboratory; Polish Science Centre grant DEC-2014/13/B/ST9/945; Coordinaci[ó]{}n de la Investigaci[ó]{}n Científica de la Universidad Michoacana. Thanks to Luciano Díaz and Eduardo Murrieta for technical support.
[99]{}
Adriani et al., *An anomalous positron abundance in cosmic rays with energies 1.5-100 GeV*, Nature **458** (2009) 607
Abdo, A. A. et al., *Measurement of the Cosmic Ray $e^+ + e^-$ Spectrum from 20 GeV to 1 TeV with the Fermi Large Area Telescope*, Phys. Rev. Lett. **102** (2009) 181101 \[0905.0025\]
Aguilar et al., *First Result from the Alpha Magnetic Spectrometer on the International Space Station: Precision Measurement of the Positron Fraction in Primary Cosmic Rays of 0.5-350 GeV*, Phys. Rev. Lett. **110** (2013) 141102
Yüksel, H., Kistler, M. D., & Stanev, T., *TeV Gamma Rays from Geminga and the Origin of the GeV Positron Excess*, Phys. Rev. Lett. **103** (2009) 051101 \[0810.2784\]
Hooper, D., Blasi, P., & Dario Serpico, P. , *Pulsars as the sources of high energy cosmic ray positrons*, Cosmology Astropart. Phys. **1** (2009) 025 \[0810.1527\]
Abdo, A. A. et al., *TeV Gamma-Ray Sources from a Survey of the Galactic Plane with Milagro*, Astrophys. J. **664** (2007) 91
Amenomori, M. et al., *Observation of TeV Gamma Rays from the Fermi Bright Galactic Sources with the Tibet Air Shower Array*, Astrophys. J. Lett. **709** (2010) 6
Strong, A. W. & Moskalenko, I. V., *Propagation of cosmic-ray nucleons in the Galaxy*, Astrophys. J **212** (1998) 228 \[astro-ph/9807150\]
Delahaye, T. et al., *Positrons from dark matter annihilation in the galactic halo: Theoretical uncertainties*, Phys. Rev. D **77** (2008) 063527 \[0712.2312\]
Yin, P. et al., *Pulsar interpretation for the AMS-02 result*, Phys. Rev. D **88** (2013) 023001 \[1304.4128\]
Adriani, O. et al., *Measurement of Boron and Carbon Fluxes in Cosmic Rays with the PAMELA Experiment*, Astrophys. J. **791** (2014) 93 \[1407.1657\]
**The HAWC Collaboration**, Abeysekara, A. U. et al., *The 2HWC HAWC Observatory Gamma-Ray Catalog*, Astrophys. J. **843** (2017) 40 \[1702.02992\]
Atoyan, A. M., Aharonian, F. A. & Völk, H. J., *Electrons and positrons in the galactic cosmic rays*, Phys. Rev. D **52** (1995) 3265
Moderksi, R. M. et al., *Klein-Nishina effects in the spectra of non-thermal sources immersed in external radiation fields*, Mon. Not. Roy. Astron. Soc. **364** (2005) 1488 \[astro-ph/0504388\]
Aharonian, F. A., *Very high energy cosmic gamma radiation : A crucial window on the extreme Universe*, World Scientific Publishing Co. Pte. Ltd. (2004) ISBN 9789812561732
Vianello, G. et al., *The Multi-Mission Maximum Likelihood framework (3ML)*, in *Proceedings of ICRC2015* (2015) \[1507.08343\]
Salesa Greus, F. et al., *Constraining the Origin of Local Positrons with HAWC TeV Gamma-Ray Observations of Two Nearby Pulsar Wind Nebulae*, in *Proceedings of ICRC2017* (2017)
Lopez-Coto, R. et al., in *Proceedings of ICRC2017* (2017)
[^1]: For a complete author list, see http://www.hawc-observatory.org/collaboration/icrc2017.php
---
abstract: 'The Mercury Orbiter Radio science Experiment (MORE) is one of the experiments on-board the ESA/JAXA BepiColombo mission to Mercury, to be launched in October 2018. Thanks to full on-board and on-ground instrumentation performing very precise tracking from the Earth, MORE will have the chance to determine with very high accuracy the Mercury-centric orbit of the spacecraft and the heliocentric orbit of Mercury. This will make it possible to undertake an accurate test of relativistic theories of gravitation (relativity experiment), which consists in improving the knowledge of some post-Newtonian and related parameters, whose values are predicted by General Relativity. This paper focuses on two critical aspects of the BepiColombo relativity experiment. First of all, we address the delicate issue of determining the orbits of Mercury and the Earth-Moon barycenter at the level of accuracy required by the purposes of the experiment and we discuss a strategy to cure the rank deficiencies that appear in the problem. Secondly, we introduce and discuss the role of the solar Lense-Thirring effect in the Mercury orbit determination problem and in the relativistic parameter estimation.'
address:
- 'IFAC-CNR, Via Madonna del Piano 10, 50019 Sesto Fiorentino (FI), Italy'
- 'Dipartimento di Matematica, Largo B. Pontecorvo 5, 56127 Pisa, Italy'
author:
- Giulia Schettino
- Daniele Serra
- Giacomo Tommei
- Andrea Milani
title: Addressing some critical aspects of the BepiColombo MORE relativity experiment
---
Radio science, Mercury, BepiColombo mission, General Relativity tests
Introduction {#intro}
============
BepiColombo is a space mission for the exploration of the planet Mercury, jointly developed by the European Space Agency (ESA) and the Japan Aerospace eXploration Agency (JAXA). The mission includes two spacecraft: the ESA-led Mercury Planetary Orbiter (MPO), mainly dedicated to the study of the surface and the internal composition of the planet [@Benk], and the JAXA-led Mercury Magnetospheric Orbiter (MMO), designed for the study of the planetary magnetosphere [@Mukai]. The two orbiters will be launched together in October 2018 on an Ariane 5 launch vehicle from Kourou and they will be carried to Mercury by a common Mercury Transfer Module (MTM) using solar-electric propulsion. The arrival at Mercury is foreseen for December 2025, after 7.2 years of cruise. After the arrival, the orbiters will be inserted into two different polar orbits: the MPO into a $480\times 1500\,$km orbit with a period of 2.3 hours, and the MMO into a $590\times 11639\,$km orbit. The nominal duration of the mission in orbit is one year, with a possible one year extension.
The Mercury Orbiter Radio science Experiment (MORE) is one of the experiments on-board the MPO spacecraft. The scientific goals of MORE concern both fundamental physics and, specifically, the geodesy and geophysics of Mercury. The radio science experiment will provide the determination of the gravity field of Mercury and its rotational state, in order to constrain the planet’s internal structure (*gravimetry* and *rotation experiments*). Details can be found, e.g., in [@Mil_01; @Sanchez; @Iess_09; @Cic_12; @Cic_16; @G_sait; @G_17]. Moreover, taking advantage of the fact that Mercury is the best-placed planet to investigate the gravitational effects of the Sun, MORE will allow an accurate test of relativistic theories of gravitation (*relativity experiment*; see, e.g., [@Mil_02; @Mil_10; @G_15; @Schu; @universe]). The global experiment consists in a very precise determination of both the orbit of the MPO around Mercury and the orbits of Mercury and the Earth around the Solar System Barycenter (SSB), performed by means of state-of-the-art on-board and on-ground instrumentation [@Iess_01]. In particular, the on-board transponder will collect the radio tracking observables (range, range-rate) up to a goal accuracy (in Ka-band) of about $\sigma_r=15\,$cm at 300 s for one-way range and $\sigma_{\dot{r}}=1.5\times 10^{-4}\,$cm/s at 1000 s for one-way range-rate [@Iess_01]. The radio observations will be further supported by the on-board Italian Spring Accelerometer (ISA; see, e.g., [@Iaf]). Thanks to the very accurate radio tracking, together with the state vectors (position and velocity) of the spacecraft, Mercury and the Earth, the experiment will be able to determine, by means of a global non-linear least squares fit (see, e.g., [@mil_gron]), the following quantities of general interest:
- coefficients of the expansion of Mercury gravity field in spherical harmonics with a signal-to-noise ratio better than 10 up to, at least, degree and order 25 and Love number $k_2$ [@Kozai];
- parameters defining the model of Mercury’s rotation;
- the post-Newtonian (PN) parameters $\gamma$, $\beta$, $\eta$, $\alpha_1$ and $\alpha_2$, which characterise the expansion of the space-time metric in the limit of slow motion and weak field (see, e.g., [@Will_93]), together with some related parameters, such as the oblateness of the Sun $J_{2\odot}$, the solar gravitational factor $\mu_{\odot}=GM_{\odot}$ (where $G$ is the gravitational constant and $M_{\odot}$ the mass of the Sun) and possibly its time derivative $\zeta=(1/\mu_{\odot})d\mu_{\odot}/dt$.
The aim of the present paper is to address two critical issues which affect the BepiColombo relativity experiment and to introduce a suitable strategy to handle these aspects. The first issue concerns the determination of two PN parameters, the Eddington parameter $\beta$ and the Nordtvedt parameter $\eta$. The criticality of determining these parameters by ranging to a satellite around Mercury has already been pointed out in the past (see, e.g., the discussion in [@Mil_01] and [@ashby]). More recently, in [@DeMarchi] the issue of how the lack of knowledge in the Solar System ephemerides can affect, in particular, the determination of $\eta$ has been discussed. Moreover, in [@GiuliaMA2016] the authors considered the downgrading effect on the estimate of PN parameters due to uncalibrated systematic effects in the radio observables and concluded that these effects turn out to be particularly detrimental for the determination of $\beta$ and $\eta$. Aside from these remarks, we observed that the accuracy with which $\beta$ and $\eta$ can be determined turns out to be very sensitive to changes in the epoch of the experiment in orbit. Indeed, in recent years the simulations of the radio science experiment in orbit have been performed assuming different scenarios and epochs, due to the repeated postponement of the launch date of the mission because of technical problems. As will be described in the following, a deeper analysis reveals that the observed sensitivity to the epoch of the estimate is related to the rank deficiencies found in simultaneously solving the Earth and Mercury orbit determination problem, which affect in particular the estimate of $\beta$ and $\eta$.
The second critical aspect we investigated concerns how the solar Lense-Thirring (LT) effect affects the Mercury orbit determination problem. The general relativistic LT effect on the orbit of Mercury due to the Sun’s angular momentum [@lense] is expected to be relevant at the level of accuracy of our tests [@iorio2] and was not included previously in our dynamical model (see a brief discussion on this issue in [@universe]). Due to the resulting high correlation between the Sun’s angular momentum and its quadrupole moment, we will discuss how the mismodelling deriving from neglecting this effect can affect specifically the determination of $J_{2\odot}$.
The paper is organised as follows: in Sect. \[sec:1\] we describe the mathematical background underlying our analysis, focusing on the two highlighted critical issues. In Sect. \[sec:2\] we describe how these issues can be handled in the framework of the orbit determination software ORBIT14, developed by the Celestial Mechanics group of the University of Pisa, and we outline the simulation scenario and assumptions. In Sect. \[sec:3\] we present the results of our simulations and some sensitivity studies to strengthen the confidence in our findings. Finally, in Sect. \[concl\] we draw some conclusions and final remarks.
Mathematical background {#sec:1}
=======================
The challenging scientific goals of MORE can be fulfilled only by performing a very accurate orbit determination of the spacecraft, of Mercury and of the Earth-Moon barycenter (EMB)[^1]. Starting from the radio observations, i.e. the distance (range) and the radial velocity (range-rate) between the MORE on-board transponder and one or more on-ground antennas, we perform the orbit determination together with the parameter estimation by means of an iterative procedure based on a classical non-linear least squares (LS) fit.
The differential correction method
----------------------------------
Following, e.g., [@mil_gron] - Chap. 5, the non-linear LS fit aims at determining a set of parameters $\mathbf{u}$ which minimises the target function: $$Q(\mathbf{u})=\frac{1}{m}\boldsymbol{\xi}^T(\mathbf{u})W\boldsymbol{\xi}(\mathbf{u})\,,$$ where $m$ is the number of observations, $W$ is the matrix of the observation weights and $\boldsymbol{\xi}(\mathbf{u})=\mathcal{O}-\mathcal{C}(\mathbf{u})$ is the vector of the residuals, namely the difference between the observations $\mathcal{O}$ (i.e. the tracking data) and the predictions $\mathcal{C}(\mathbf{u})$, resulting from the light-time computation as a function of all the parameters $\mathbf{u}$ (see [@lt] for all the details).
The procedure to compute the set of minimising parameters $\mathbf{u}^\star$ is based on a modified Newton’s method called [ *differential correction method*]{}. Let us define the design matrix $B$ and the normal matrix $C$: $$B=\frac{\partial \boldsymbol{\xi}}{\partial\mathbf{u}}(\mathbf{u})\,,\,\,\,\,\,\,C=B^TWB\,.$$ The stationary points of the target function are the solution of the normal equation: $$C\Delta\mathbf{u}^{\star}=-B^TW\boldsymbol{\xi}\,,
\label{norm_eq}$$ where $\Delta\mathbf{u}^{\star}=\mathbf{u}^{\star}-\mathbf{u}$. The method consists of iteratively applying the correction: $$\Delta\mathbf{u}=\mathbf{u}_{k+1}-\mathbf{u}_k=-C^{-1}B^TW\boldsymbol{\xi}$$ until at least one of the following conditions is met: $Q$ does not change significantly between two consecutive iterations; $\Delta\mathbf{u}$ becomes smaller than a given tolerance. In particular, the inverse of the normal matrix, $\Gamma=C^{-1}$, can be interpreted as the covariance matrix of the vector $\mathbf{u}^\star$ (see, e.g., [@mil_gron] - Chap. 3), carrying information on the attainable accuracy of the estimated parameters.
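For concreteness, the iteration just described can be sketched in a few lines of Python. This is a toy illustration of the differential correction scheme only, not ORBIT14 code; all function and variable names are ours:

```python
import numpy as np

def differential_correction(u0, residuals, design, W, tol=1e-12, max_iter=20):
    """Iterate u_{k+1} = u_k - C^{-1} B^T W xi until the target function Q
    stops changing or the correction becomes smaller than the tolerance."""
    u = np.asarray(u0, dtype=float)
    m = len(residuals(u))
    Q_prev = np.inf
    for _ in range(max_iter):
        xi = residuals(u)                       # xi(u) = O - C(u)
        B = design(u)                           # design matrix, d xi / d u
        C = B.T @ W @ B                         # normal matrix
        du = np.linalg.solve(C, -B.T @ W @ xi)  # normal equation
        u = u + du
        Q = xi @ W @ xi / m                     # target function
        if abs(Q_prev - Q) < tol or np.linalg.norm(du) < tol:
            break
        Q_prev = Q
    Gamma = np.linalg.inv(C)                    # covariance of the estimate
    return u, Gamma
```

For a linear model the loop converges in a single step to the usual weighted least-squares solution, and the diagonal of $\Gamma$ carries the formal accuracies of the estimated parameters.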
The task of inverting the normal matrix $C$ can be made more difficult by the presence of symmetries in the parameter space. A group $G$ of transformations of such space is called a group of *exact symmetries* if, for every $g\in G$, the residuals remain unchanged under the action of $g$ on $\mathbf{u}$, namely: $$\boldsymbol{\xi}(g[\mathbf{u}])=\boldsymbol{\xi}(\mathbf{u}).$$ It can be easily shown that if the latter holds, the normal matrix is singular. In practical cases, the symmetry is usually *approximate*, that is, there exists a small parameter $s$ such that: $$\boldsymbol{\xi}(g[\mathbf{u}])=\boldsymbol{\xi}(\mathbf{u})+O(s^2)\,,$$ leading to an ill-conditioned, though still invertible, normal matrix $C$. When this happens, solving for all the parameters involved in the symmetry leads to a significant degradation of the results. Possible solutions will be described in Sect. \[subsec:1.1\].
The dynamical model
-------------------
To achieve the scientific goals of MORE, both the Mercury-centric dynamics of the probe and the heliocentric dynamics of Mercury and the EMB need to be modelled to a high level of accuracy. On the one hand, the MPO orbit around Mercury is expected to have a period of about 2.3 hours; on the other hand, the motion of Mercury around the Sun takes place over 88 days. Thus, due to the completely different time scales, we can handle the two dynamics separately. This means that, although we are dealing with a unique set of measurements, we can conceptually distinguish between the gravimetry-rotation experiments on one side, mainly based on range-rate observations, and the relativity experiment on the other, performed ultimately with range measurements. Comparing the goal accuracies for range and range-rate, scaled over the same integration time according to Gaussian statistics, we indeed find that $\sigma_r/\sigma_{\dot{r}}\sim 10^5\,$s. As a result, range measurements are more accurate when observing phenomena with periodicity longer than $10^5\,$s, like relativistic phenomena, whose effects become significant over months or years. On the contrary, since the gravity and rotational state of Mercury show variability over time scales of the order of hours or days, the determination of the related parameters is mainly based on range-rate observations.
All the details on the Mercury-centric dynamical model of the MPO orbiter can be found in [@Cic_16] and [@universe], including the effects due to the gravity field of the planet up to degree and order 25, the tidal effects of the Sun on Mercury (Love potential; see, e.g., [@Kozai]), a semi-empirical model for the planet’s rotation (see [@Cic_12]), the third-body perturbations from the other planets, the solar non-gravitational perturbations, like the solar radiation pressure, and some non-negligible relativistic effects (see, e.g., [@Moyer] and [@Cic_16]). In the following we will focus on the relativity experiment, hence on the heliocentric dynamics of Mercury and the EMB. In the slow-motion, weak-field limit, known as Post-Newtonian (PN) approximation, the space-time metric can be written as an expansion about the Minkowski metric in terms of dimensionless gravitational potentials. In the parametrised PN formalism, each potential term in the metric is characterised by a specific parameter, which measures a general property of the metric (see, e.g., [@will_LR]). Each PN parameter assumes a well defined value (0 or 1) in General Relativity (GR). The effect of each term on the motion can be isolated, therefore the value of the associated PN parameter can be constrained within some accuracy threshold, testing any agreement (or not) with GR. The PN parameters that will be estimated are the Eddington parameters $\beta$ and $\gamma$ ($\beta=\gamma=1$ in GR), the Nordtvedt parameter $\eta$ [@nordt] ($\eta=0$ in GR) and the preferred frame effects parameters $\alpha_1$ and $\alpha_2$ ($\alpha_1=\alpha_2=0$ in GR). Moreover, we include in the solve-for list a few additional parameters, whose effect on the orbital motion can be comparable with that induced by some PN parameters [@Mil_02]: the oblateness factor of the Sun $J_{2\odot}$, the gravitational parameter of the Sun $\mu_\odot$ and its time derivative $\zeta=(1/\mu_\odot)\, d\mu_\odot/dt$.
The modification of the space-time metric due to a single PN parameter affects both the propagation of the tracking signal and the equations of motion. As regards the observables, they must be computed in a coherent relativistic background. This implies accounting for the curvature of the space-time metric along the propagation of radio signals (Shapiro effect [@shap]) and for the proper times of different events, such as the transmission and reception times of the signals. All the details concerning the relativistic computation of the observables can be found in [@lt]. A relativistic model for the motion of Mercury is necessary in order to accurately determine its orbit and, hence, constrain the PN and related parameters. The complete description of the relativistic setting can be found in [@Mil_02; @universe].
Determination of Mercury and EMB orbits {#subsec:1.1}
---------------------------------------
As already pointed out, the relativity experiment is based on a very accurate determination of the heliocentric orbits of Mercury and the EMB, that is, we estimate the corresponding state vectors (position and velocity) w.r.t. the SSB at a given reference epoch. A natural choice is to determine the state vectors at the central epoch of the orbital mission, whose duration is supposed to be one year. In this way the propagation of the orbits is performed backward for the first six months of the mission and forward for the remaining six months, thus minimising the numerical errors due to propagation. Of course, the determination of the PN and related parameters should not depend significantly on the choice of the epoch of the estimate. To verify this point, in Figure \[eta\_NO\] we show the behaviour of the accuracy of $\beta$ (left) and $\eta$ (right), obtained from the diagonal terms of the covariance matrix, as a function of the epoch of the estimate, from the beginning of the orbital mission (Modified Julian Date (MJD) 61114, corresponding to 15 March 2026) to the end (MJD 61487, corresponding to 23 March 2027).
![Formal accuracy of $\beta$ (left) and $\eta$ (right) as a function of the epoch of the estimate (in MJD) over the mission time span. In red the value of the accuracy for the estimate at central epoch.[]{data-label="eta_NO"}](beta_no.eps "fig:"){width="50.00000%"} ![Formal accuracy of $\beta$ (left) and $\eta$ (right) as a function of the epoch of the estimate (in MJD) over the mission time span. In red the value of the accuracy for the estimate at central epoch.[]{data-label="eta_NO"}](eta_no.eps "fig:"){width="50.00000%"}
The value of the formal accuracy at the central epoch (MJD 61303, corresponding to 20 September 2026) is highlighted in red. It is clear that there is a strong dependency of the achievable accuracy on the epoch of the estimate. If the planetary orbits are determined at MJD 61183 (23 May 2026), the accuracy of $\eta$ turns out to be $\sigma(\eta)\simeq 2.3\times 10^{-6}$, whereas estimating at MJD 61291 (8 September 2026) results in $\sigma(\eta)\simeq 1.1\times
10^{-4}$, almost two orders of magnitude larger. On the contrary, the uncertainty of the other PN parameters showed very little variability with the epoch of the estimate.
Such behaviour indicates the presence of some weak directions in orbit determination, possibly connected to the strategy adopted until now for the MORE relativity experiment: we determine only 8 out of 12 components of the initial conditions of Mercury and the EMB. This assumption, first introduced in [@Mil_02], addresses an approximate rank deficiency of order 4, arising when we try to determine the orbits of Mercury and the Earth (or, similarly, the EMB as in our problem) w.r.t. the Sun only by means of relative observations. Indeed, if there were only the Sun, Mercury and the Earth, and the Sun were perfectly spherical ($J_{2\odot}=0$), there would be an exact symmetry of order 3 represented by the rotation group $SO(3)$ applied to the state vectors of Mercury and the Earth. Because of the coupling with the other planets and due to the non-zero oblateness of the Sun, the symmetry is broken but only by a small amount, of the order of the relative size of the perturbations of the other planets on the orbits of Mercury and the Earth and of the order of $J_{2\odot}$.
Moreover, there is another approximate symmetry for scaling. The symmetry would be exact if there were only the Sun, Mercury and the Earth: if we change all the lengths involved in the problem by a factor $\lambda$, all the masses by a factor $\mu$ and all the times by a factor $\tau$, with the factors related by $\lambda^3=\tau^2\mu$ (Kepler’s third law), then the equation of motion of the gravitational 3-body problem would remain unchanged. Since we can assume [^2] that $\tau=1$, the symmetry for scaling involves the state vectors of Mercury and the Earth (i.e. the “lengths” involved in the problem) and the gravitational mass of the Sun, which is among the solve-for parameters. The symmetry for scaling can also be expressed by the well known fact that it is not possible to solve simultaneously for the mass of the Sun and the value of the astronomical unit. Since the state vectors of the other planets, perturbing the orbits of Mercury and the Earth, are given by the planetary ephemerides and thus they cannot be rescaled, the symmetry is broken but, again, only by a small amount. In conclusion, an approximate rank deficiency of order 4 occurs in the orbit determination problem we want to solve. Solving for all the 12 components of the initial conditions and the mass of the Sun would result in considerable loss of accuracy for all the parameters of the relativity experiment, as will be quantified in Sect. \[sec:3\].
The only solution in case of rank deficiency is to change the problem. When no additional observations breaking the symmetry are available, a convenient solution is to remove some parameters from the solve-for list. Starting from $N$ parameters to be solved, in case of a rank deficiency of order $d$, we can select a new set of $N-d$ parameters to be solved, in such a way that the new normal matrix $\bar C$, with dimensions $(N-d)\times (N-d)$ instead of $N\times N$, has rank $N-d$. The remaining $d$ parameters can be set at some nominal value (*consider parameters*). This solution has been applied up to now in the MORE relativity experiment (see, e.g., [@universe]): the three position components and the out-of-plane velocity component of the EMB orbit, for a total of 4 parameters, have been removed from the solve-for list, curing in this way the rank deficiency of order 4.
Another option can be investigated: the use of a priori observations. When some information on one or more of the parameters involved in the symmetry is already available – for instance from previous experiments – it can be taken into account in our experiment and could lead to an improvement of the results. In this case the search for the minimum of the target function is restricted to the vector of parameters fulfilling a set of a priori equations. In practice, we add to the observations a set of a priori constraints, $\mathbf{u}=\mathbf{u}^P$, on the value of the parameters, with given a priori standard deviation $\sigma_i$ ($i=1,..N$) on each constraint $u_i=u_i^P$. This is equivalent to adding to the normal equation in Eq. (\[norm\_eq\]) an a priori normal equation of the form: $$C^P\mathbf{u}=C^P\mathbf{u}^P\,,$$ with $C^P=\mbox{diag}[\sigma_i^{-2}]$. In this way, an “a priori penalty” is added to the target function: $$Q(\mathbf{u})=\frac{1}{N+m}[(\mathbf{u}-\mathbf{u}^P)^TC^P(\mathbf{u}-\mathbf{u}^P)+\boldsymbol{\xi}^T(\mathbf{u})W\boldsymbol{\xi}(\mathbf{u})]$$ and the complete normal equation becomes: $$(C^P+C)\Delta\mathbf{u}=-B^TW\boldsymbol{\xi}+C^P(\mathbf{u}^P-\mathbf{u}_k)\,.$$ If the a priori uncertainties $\sigma_i$ are small enough, the new normal matrix $\bar{C}=C^P+C$ has rank $N$ and the complete orbit determination problem can be solved.
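A minimal numerical sketch of this mechanism (hypothetical Python, not ORBIT14 code) shows how a diagonal a priori matrix $C^P$ restores the invertibility of a rank-deficient normal matrix:

```python
import numpy as np

def solve_with_apriori(C, rhs, u_prior, sigma_prior, u_current):
    """Add the a priori normal equation C^P u = C^P u^P, with
    C^P = diag(1/sigma_i^2), to the normal equation C du = rhs."""
    CP = np.diag(1.0 / np.asarray(sigma_prior, dtype=float) ** 2)
    C_bar = C + CP                           # complete normal matrix
    rhs_bar = rhs + CP @ (np.asarray(u_prior, dtype=float)
                          - np.asarray(u_current, dtype=float))
    return np.linalg.solve(C_bar, rhs_bar), C_bar
```

If only a linear combination of two parameters is observable, $C$ is singular; with finite a priori $\sigma_i$ the augmented matrix $\bar C$ has full rank, and the solution is pulled toward the a priori values along the otherwise unobservable direction.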
In our problem, the a priori information is represented by four constraint equations which inhibit the symmetry for rotation and scaling, to be added to the LS fit as a priori observations. A complete description of the form that the constraint equations assume will be given in Sect. \[subsec:2.1\].
The solar Lense-Thirring effect {#subsec:1.2}
-------------------------------
In [@universe] we pointed out that the Lense-Thirring (LT) effect on the orbit of Mercury due to the angular momentum of the Sun had been neglected, in order to simplify the development and implementation of the dynamical model. In fact, the solar LT effect is expected to be relevant at the level of accuracy of our tests [@iorio2]. As will become clear in Sect. \[subsec:3.2\], the mismodelling resulting from neglecting this effect affects specifically the determination of the oblateness of the Sun, $J_{2\odot}$.
More specifically, the general relativistic LT effect induces a precession of the argument of the pericenter of Mercury in the gravity field of the Sun at the level of $\dot{\omega}_{\mathrm{LT}}=-2\,$milliarcsec/century, according to GR [@iorio3]. We modelled the effect as an additional perturbative acceleration in the heliocentric equation of motion of Mercury (see, e.g. [@Moyer]): $$\mathbf{a}_{\mathrm{LT}}=\frac{(1+\gamma)\,GS_{\odot}}{c^2\,r^3}\left[-\hat{\mathbf s} \times \dot{\mathbf r}+ 3\,\frac{(\hat{\mathbf s} \cdot \mathbf r)\,(\mathbf r \times \dot{\mathbf r})}{r^2} \right]\,,
\label{acc_LT}$$ where $\mathbf{S}_{\odot}=S_{\odot}\hat {\mathbf{s}}$ is the angular momentum of the Sun ($\hat{\mathbf{s}}$ is assumed along the rotation axis of the Sun). To assess the role of the solar LT effect in the dynamics, in Figure \[delta\_range\] we plot the effect of the solar LT acceleration, given by Eq. (\[acc\_LT\]), on the simulated range of the orbiter. In other words, this is the difference between the simulated range with and without the LT effect over the one-year mission time span. As can be seen, the mismodelling due to neglecting the solar LT perturbation in the dynamical model can be as high as some meters. This result is in very good agreement with Fig. 1 in [@iorio2], which shows the numerically integrated EMB-Mercury ranges with and without the perturbation due to the solar Lense-Thirring field over two years in the ICRF/J2000.0 reference frame, with the mean equinox of the reference epoch and the reference $x-y$ plane rotated from the mean ecliptic of the epoch to the Sun’s equator, centered at the SSB.
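Eq. (\[acc\_LT\]) is straightforward to evaluate numerically. The following Python sketch (our own illustrative code; the value $S_{\odot}\simeq 1.9\times 10^{41}\,$kg m$^2$/s is an approximate assumption, not a fitted quantity) yields an acceleration of order $10^{-13}\,$m/s$^2$ at Mercury's heliocentric distance:

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
S_sun = 1.9e41     # solar angular momentum, kg m^2/s (assumed value)

def lense_thirring_acc(r_vec, v_vec, s_hat, gamma=1.0):
    """Perturbative LT acceleration of Eq. (acc_LT) on the heliocentric
    motion of Mercury; s_hat is the unit vector of the Sun's spin axis."""
    r = np.linalg.norm(r_vec)
    term = (-np.cross(s_hat, v_vec)
            + 3.0 * np.dot(s_hat, r_vec) * np.cross(r_vec, v_vec) / r ** 2)
    return (1.0 + gamma) * G * S_sun / (c ** 2 * r ** 3) * term
```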
![Difference (in cm) of simulated spacecraft range with and without solar LT perturbation in the dynamical model, over one-year mission time span.[]{data-label="delta_range"}](Delta_range.eps){width="\textwidth"}
The ORBIT14 software {#sec:2}
====================
Since 2007, the Celestial Mechanics Group of the University of Pisa has developed[^3] a complete software system, ORBIT14, dedicated to the BepiColombo and Juno radio science experiments [@tommei_J; @serra], which is now ready for use. All the code is written in Fortran90. The software includes a data simulator, which generates the simulated observables and the nominal values for the orbital elements of the Mercury-centric orbit of the MPO and the heliocentric orbits of Mercury and the EMB, and the differential corrector, which is the core of the code, solving for the parameters of interest by a global non-linear LS fit, within a constrained multi-arc strategy [@Alessi]. The general structure of the software is described, e.g., in [@universe].
Handling the a priori constraints {#subsec:2.1}
---------------------------------
The equations needed to a priori constrain the LS solution are given as an input to the differential corrector. In general, the $n$-th constraint has the expression: $f_n(\mathbf{u})=0$. ORBIT14 has been designed to handle only linear constraints. Thus, the equation for the $n$-th constraint, involving $d$ parameters to be determined, reads: $$f_n(\mathbf{u})=\sum_{i=1}^{d}a_i(x_i-\theta_i)=N(0,\mbox{diag}[\sigma_i])\,,$$ where $\sigma_i$ are the weights associated with each parameter involved in the constraint, assuming a Gaussian distribution with zero mean. Following the notation of Sect. \[subsec:1.1\], its contribution to the normal matrix is given by: $$C_n^{P}=\left(\frac{\partial f_n}{\partial \mathbf{u}}\right)^T\,W\,\frac{\partial f_n}{\partial \mathbf{u}}\,,$$ and to the right-hand side of the normal equation by: $$D_n^{P}=\left(\frac{\partial f_n}{\partial \mathbf{u}}\right)^T\,W\,f_n\,,$$ where $W=\mbox{diag}[\sigma_i^{-2}]$.
In order to write the linear constraint equations of our orbit determination problem, let us introduce the following notation for the components of the state vectors of Mercury and the EMB:
- $\bf{M}$, $\dot{\bf{M}}$: position and velocity of Mercury at the reference epoch from ephemerides (nominal values); $\bf{m}$, $\dot{\bf{m}}$: estimated position and velocity of Mercury; $\Delta\bf{M}=\bf{M}-\bf{m}$, $\Delta\dot{\bf{M}}=\dot{\bf{M}}-\dot{\bf{m}}$: deviation between ephemerides and estimate.
- $\bf{E}$, $\dot{\bf{E}}$: position and velocity of EMB at the reference epoch from ephemerides; $\bf{e}$, $\dot{\bf{e}}$: estimated position and velocity of EMB; $\Delta\bf{E}=\bf{E}-\bf{e}$, $\Delta\dot{\bf{E}}=\dot{\bf{E}}-\dot{\bf{e}}$: deviation between ephemerides and estimate.
#### Symmetry for rotations.
The symmetry for rotation is described by a three-parameter group, whose generators are, for example, the rotations around three orthogonal axes $(x,y,z)$ of the reference frame used for orbit propagation. The constraint equation which inhibits an infinitesimal rotation by an angle $s$ around the $x$-axis has the expression: $$\begin{split}
& \left. \frac{\Delta\mathbf{M}}{|\mathbf{m}|}\cdot\frac{\partial (R_{s,\hat{x}}\hat{\mathbf{M}})}{\partial s}\right\arrowvert_{s=0}+\left. \frac{\Delta\mathbf{E}}{|\mathbf{e}|}\cdot\frac{\partial (R_{s,\hat{x}}\hat{\mathbf{E}})}{\partial s}\right\arrowvert_{s=0}+\left. \frac{\Delta\dot{\mathbf{M}}}{|\dot{\mathbf{m}}|}\cdot\frac{\partial (R_{s,\hat{x}}\hat{\dot{\mathbf{M}}})}{\partial s}\right\arrowvert_{s=0}+ \\
& + \left. \frac{\Delta\dot{\mathbf{E}}}{|\dot{\mathbf{e}}|}\cdot\frac{\partial (R_{s,\hat{x}}\hat{\dot{\mathbf{E}}})}{\partial s}\right\arrowvert_{s=0}=N(0,\mbox{diag}[\sigma_i])\,,
\end{split}
\label{eq_rot}$$ where $\sigma_i$ are the weights for the state vectors components, $N$ represents a Gaussian distribution with zero mean, $R_{s,\hat{x}}$ is the rotation matrix by an angle $s$ around the $x$-axis: $$R_{s,\hat{x}}=
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos s & -\sin s \\
0 & \sin s & \cos s \\
\end{pmatrix}$$ and $(\partial R_{s}/\partial s) \arrowvert_{s=0}$ is a generator of the Lie algebra of the rotation group $SO(3)$. Two similar equations hold for the rotations by an angle $s$ around the $y$ and $z$ axes.
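The generators $(\partial R_{s}/\partial s)\arrowvert_{s=0}$ are the usual skew-symmetric matrices, so each constraint row can be assembled as in the following Python sketch (our own illustrative code with hypothetical names; for simplicity it normalises by the nominal norms rather than the estimated ones):

```python
import numpy as np

def rotation_generator(axis):
    """Generator of an infinitesimal rotation about `axis`:
    d(R_{s,axis} v)/ds at s=0 equals axis x v."""
    a = np.asarray(axis, dtype=float)
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def rotation_constraint_row(dM, M, dE, E, dMdot, Mdot, dEdot, Edot, axis):
    """Left-hand side of Eq. (eq_rot): normalised deviations projected on
    the infinitesimal rotation of the nominal (unit) vectors."""
    L = rotation_generator(axis)
    f = 0.0
    for dv, v in ((dM, M), (dE, E), (dMdot, Mdot), (dEdot, Edot)):
        v = np.asarray(v, dtype=float)
        dv = np.asarray(dv, dtype=float)
        f += dv @ (L @ (v / np.linalg.norm(v))) / np.linalg.norm(v)
    return f
```

When the deviations are themselves an infinitesimal rotation of the nominal vectors, the constraint value is proportional to the rotation angle, which is exactly the direction the a priori observation is meant to inhibit.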
#### Symmetry for scaling.
To find the constraint equation for scaling, we can start from the simple planar two-body problem of a planet around the Sun, with the non-linear dependency of the mean motion $n$ upon the semi-major axis $a$, under the hypothesis of circular motion: $$\frac{da}{dt}=0,\,\,\,\,\,\frac{d\lambda}{dt}=n(a)=\frac{k}{a^{3/2}}\,,$$ where $k^2=GM_{\odot}=\mu_{\odot}$, with solution given by: $$a(t)=a_0,\,\,\,\,\,\lambda (t)=\frac{k}{a_0^{3/2}}t+\lambda_0\,.$$ This problem has a symmetry with multiplicative parameter $w\in \mathbb{R}^+$: $$k \mapsto w^3 k,\,\,\,\,\, a_0\mapsto w^2a_0\,,$$ leaving $n=k/a^{3/2}$ invariant. The symmetry can be represented by means of an additive parameter $s$ by setting $w=e^{s}$. The derivative of the symmetry group action with respect to $s$ is: $$\frac{da_0}{ds}=2w^2a_0,\,\,\,\,\,\frac{dk}{ds}=3w^3k\,.$$ Finally, the constraint takes the form: $$-3\,\frac{\Delta a}{a_0}\,\left.\frac{da_0}{ds}\right\arrowvert_{s=0}+2\,\frac{\Delta k}{k_0}\,\left.\frac{dk}{ds}\right\arrowvert_{s=0}=0\,.$$
In our fit, we estimate the parameter $\mu_{\odot}$, that is $k^2$. Since we need to deal with linear constraints, we linearize the problem by expanding the non-linear equation to first order around the nominal value. In this way, the final expression adopted for the scaling constraint reads: $$\sum_{j=1}^3\left [ \frac{\Delta M_j}{|\mathbf{M}|}\,M_j+\frac{\Delta \dot{M}_j}{|\dot{\mathbf{M}}|}\,\dot{M}_j+\frac{\Delta E_j}{|\mathbf{E}|}\,E_j+\frac{\Delta \dot{E}_j}{|\dot{\mathbf{E}}|}\,\dot{E}_j\right ]+3\Delta\mu_{\odot}=N(0,\mbox{diag}[\sigma_i])\,,
\label{eq_scal}$$ where $j=1,2,3$ refers to the three orthogonal directions $x,y,z$.
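The invariance of the mean motion under the scaling $k\mapsto w^3k$, $a_0\mapsto w^2a_0$ is easy to verify numerically, e.g. in Python (nominal values for Mercury are rough illustrative figures):

```python
import math

mu_sun = 1.32712440018e20        # GM_sun in m^3/s^2
k = math.sqrt(mu_sun)            # k^2 = mu_sun
a0 = 5.79e10                     # ~Mercury semi-major axis, m
n0 = k / a0 ** 1.5               # mean motion, rad/s

def scaled_mean_motion(w):
    """Mean motion after the scaling k -> w^3 k, a0 -> w^2 a0."""
    return (w ** 3 * k) / (w ** 2 * a0) ** 1.5
```

For any $w>0$ the rescaled mean motion coincides with $n_0$ (about $8.3\times 10^{-7}\,$rad/s, i.e. an 88-day period), which is why relative observations alone cannot fix the overall scale of the problem.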
#### Setting the weights $\sigma_i$.
Together with the constraint equations given in input to the differential corrector, it is also necessary to provide the a priori standard deviations $\sigma_i$ by which the involved parameters are constrained. The strength of the weights $\sigma_i$ is the result of a trade-off between two opposite trends: on the one hand, the tighter the constraint, the less the solution is affected by the corresponding rank deficiency; on the other hand, if the constraint is too tight, the approach becomes equivalent to descoping, i.e. the involved parameters are handled as consider parameters.
The formulation given in Eqs. (\[eq\_rot\]) and (\[eq\_scal\]) implies the employment of dimensionless weights, which constrain the relative accuracy of each involved parameter. To find suitable values for the weights, we start from a standard simulation of the relativity experiment, obtained by estimating only 8 out of the 12 components of the orbits of Mercury and the EMB, and we consider the ratio between the formal accuracy of each component and the corresponding estimated value. The results are shown in Table \[confr\], where we also included the ratio of the accuracy over the estimated value of $\mu_{\odot}$.
[ccccccccc]{} $\frac{\sigma(x_M)}{x_M}$ & $\frac{\sigma(y_M)}{y_M}$ & $\frac{\sigma(z_M)}{z_M}$ & $\frac{\sigma(\dot{x}_M)}{\dot{x}_M}$ & $\frac{\sigma(\dot{y}_M)}{\dot{y}_M}$ & $\frac{\sigma(\dot{z}_M)}{\dot{z}_M}$ & $\frac{\sigma(\dot{x}_E)}{\dot{x}_E}$ & $\frac{\sigma(\dot{y}_E)}{\dot{y}_E}$ & $\frac{\sigma(\mu_{\odot})}{\mu_{\odot}}$\
$0.22$ & $0.61$ & $3.1$ & $0.23$ & $0.26$ & $3.2$ & $4.4$ & $0.29$ & $0.78$\
All the values range between $10^{-12}$ and $10^{-13}$, thus suitable values to adopt are $\sigma_i\sim 10^{-13}-10^{-14}$. In the following, we will adopt a relative weight $\sigma_i=10^{-14}$ for each parameter involved in the a priori constraints. Nevertheless, we checked that, adopting $\sigma_i=10^{-13}$ for each parameter, the global solution worsens only negligibly.
Simulation scenario {#subsec:2.2}
-------------------
To perform a global simulation of the radio science experiment, we make use of some assumptions both at the simulation stage and during the differential correction process, which are briefly described in the following.
#### Error models.
To simulate the observables in a realistic way, we need to make some assumptions concerning the error sources which unavoidably affect the observations. We assume that the radio tracking observables are affected only by random effects, with standard deviations of $\sigma_{r}=15\,$cm at 300 s and $\sigma_{\dot{r}}=1.5\times 10^{-4}\,$cm/s at 1000 s, respectively, for Ka-band observations. The software can also include a systematic component in the range error model and calibrate for it[^4], but we did not account for this detrimental effect, partially discussed in [@GiuliaMA2016], in the present work.
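These goal accuracies also make the crossover time $\sigma_r/\sigma_{\dot r}\sim 10^5\,$s quoted in Sect. \[sec:1\] explicit: scaling the range noise to the same 1000 s integration time with the white-noise (Gaussian) rule $\sigma(T)=\sigma(T_0)\sqrt{T_0/T}$ gives, in Python:

```python
import math

def scale_sigma(sigma, t_from, t_to):
    """White-noise (Gaussian) scaling of an accuracy with integration time."""
    return sigma * math.sqrt(t_from / t_to)

sigma_r = scale_sigma(15.0, 300.0, 1000.0)   # range accuracy in cm at 1000 s
sigma_rr = 1.5e-4                            # range-rate accuracy, cm/s at 1000 s
ratio = sigma_r / sigma_rr                   # in seconds, ~5.5e4
```

The resulting ratio of about $5\times 10^4\,$s is consistent with the order-of-magnitude estimate $\sim 10^5\,$s used to separate the regimes of range and range-rate observations.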
The accelerometer readings themselves suffer from errors of both random and systematic origin, which can significantly bias the results of the orbit determination. Systematic effects due to the accelerometer readings turn out to be particularly detrimental for the purposes of gravimetry and rotation (see, e.g., the discussion in [@Cic_16]), while they induce a minor loss in accuracy for what concerns the relativity experiment (see, e.g., [@G_15] and [@GiuliaMA2016]). The adopted accelerometer error model, provided by the ISA team (private communications) and the digital calibration method applied during the differential correction process have been extensively discussed in [@Cic_16].
#### Additional rank deficiencies in the problem.
A critical issue which significantly affects the success of the relativity experiment concerns the high correlation between the Eddington parameter $\beta$ and the solar oblateness $J_{2\odot}$. Indeed, from a geometrical point of view the main orbital effect of $\beta$ is a precession of the argument of perihelion, which is a displacement taking place in the plane of the orbit of Mercury, while $J_{2\odot}$ affects the precession of the longitude of the node, producing a displacement in the plane of the solar equator. Since the angle between the two planes is almost zero, the two effects blend into each other and the parameters turn out to be highly correlated, causing a deterioration of the solution. A meaningful solution to the problem is to link the PN parameters through the Nordtvedt equation [@nordt]: $$\eta=4(\beta-1)-(\gamma-1)-\alpha_1-\frac{2}{3}\alpha_2\,
\label{eq_N}$$ and add this relation as an a priori constraint to the LS fit. In this way, $\beta$ is mainly determined by the values of $\eta$ and $\gamma$, removing the correlation with $J_{2\odot}$. This assumption corresponds to the hypothesis that gravity is a metric theory.
Moreover, a solar superior conjunction experiment (SCE) for the determination of the PN parameter $\gamma$ is expected during the cruise phase of the BepiColombo mission (see, e.g., the description in [@Mil_02]), similar to the one performed by Cassini [@bert]. The resulting estimate of $\gamma$ will be adopted as an a priori constraint on the parameter in the experiment in orbit. The complete results and a thorough discussion on the simulations of SCE with ORBIT14 will be presented in a future paper; however, we include in the fit a constraint on the value of $\gamma$ given by: $\gamma=1\pm 5\times 10^{-6}$, coming from our cruise simulations. In this way, from Eq. (\[eq\_N\]) it turns out that $\beta$ is mainly determined from $\eta$, with a ratio $1:4$ in the corresponding accuracies and a near-one correlation between the two parameters. Indeed, this fact was already clear from Fig. \[eta\_NO\]: the accuracy of the two parameters shows exactly the same behaviour as a function of the epoch of the estimate and, at each given epoch, the ratio of the accuracies is around 4.
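The origin of the $1:4$ accuracy ratio can be made explicit with a one-line Python encoding of Eq. (\[eq\_N\]):

```python
def nordtvedt_eta(beta, gamma, alpha1, alpha2):
    """Nordtvedt relation, Eq. (eq_N): eta in terms of the PN parameters."""
    return 4.0 * (beta - 1.0) - (gamma - 1.0) - alpha1 - (2.0 / 3.0) * alpha2
```

With $\gamma$, $\alpha_1$ and $\alpha_2$ tightly constrained, the relation reduces to $\beta-1=\eta/4$, so a deviation $\Delta\beta$ propagates into $\Delta\eta=4\Delta\beta$: this is the origin of both the $1:4$ ratio between the accuracies of $\beta$ and $\eta$ and their near-one correlation.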
#### Solve-for list.
The latest mission scenario consists of a one-year orbital phase, with a possible extension to another year, starting from 15 March 2026. The orbital elements of the initial Mercury-centric orbit of the MPO orbiter are: $$1500\times 480\,\mbox{km},\,\,\,i=90^{\circ},\,\,\,\Omega=67.8^{\circ},\,\,\,\omega=16^{\circ}.$$ We assume that only one ground station is available for tracking, at the Goldstone Deep Space Communications Complex in California (USA), providing observations in the Ka-band. We solved for a total of almost 5000 parameters simultaneously in the non-linear LS fit adopting a constrained multi-arc strategy and accounting for the correlations. The list of solve-for parameters includes:
- state vector (position and velocity) of the Mercury-centric orbit of the spacecraft at each arc and of Mercury and Earth-Moon barycenter at the central epoch of the mission, in the Ecliptic Reference frame at epoch J2000;
- the PN parameters $\beta$, $\gamma$, $\eta$, $\alpha_1$, $\alpha_2$ and the related parameters $J_{2\odot}$, $\mu_{\odot}$, $\zeta$ and $GS_{\odot}$ of the Sun;
- the calibration coefficients for the accelerometer readings at each arc (six parameters per arc).
Numerical results {#sec:3}
=================
In this Section we describe the results of the numerical simulations of the MORE relativity experiment. In Sect. \[subsec:3.1\] we compare the two possible strategies described in Sect. \[subsec:1.1\] to remove the rank deficiency of order 4 due to the symmetry for rotation and scaling. Then, in Sect. \[subsec:3.2\] we discuss the effects on the solution due to the addition of the solar LT effect in the dynamical model, with a particular attention on the estimate of $J_{2\odot}$.
Removing the planetary rank deficiency {#subsec:3.1}
--------------------------------------
We briefly recall the two possible strategies to remove the approximate rank deficiency of order 4 found when we try to solve simultaneously for the orbits of Mercury and the EMB (12 parameters) and the solar gravitational mass $\mu_{\odot}$:
- strategy I (descoping)[^5]: we remove 4 out of the 13 parameters from the solve-for list (the three position components of the EMB and the $z$-component of the velocity of the EMB);
- strategy II: we solve simultaneously for the 13 parameters by adding 4 a priori constraint equations in the LS fit.
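Mechanically, strategy II amounts to adding rank-one a priori terms to the normal matrix of the LS fit, lifting the nearly null directions without touching the well-determined ones. A toy 2-parameter sketch (hypothetical numbers, not the actual MORE normal system):

```python
import numpy as np

# A normal matrix N that is nearly rank-deficient along one direction is
# regularised by an a priori constraint equation a^T x = c with sigma s:
# N_c = N + a a^T / s^2 (strategy II in miniature).
N = np.array([[1.0, 0.999],
              [0.999, 1.0]]) * 1e6       # nearly degenerate 2x2 normal matrix
sigma_free = np.sqrt(np.diag(np.linalg.inv(N)))
a, s = np.array([1.0, -1.0]), 1e-3       # constrain the weak combination
N_c = N + np.outer(a, a) / s**2
sigma_con = np.sqrt(np.diag(np.linalg.inv(N_c)))
print(sigma_free)   # large: the weak direction pollutes both parameters
print(sigma_con)    # much smaller after the a priori constraint
```

The constrained formal accuracies improve by more than an order of magnitude, mirroring the "No constraints" versus "Strategy II" columns of the tables below.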
In Table \[res\_centro\] the expected accuracies for the PN and related parameters obtained following both strategies are compared with the current knowledge of the same parameters. Table \[sv\_centro\] provides the achievable accuracies for the state vector components. For all parameters, the reference date for the estimate is the central epoch of the mission. In both tables, the last column contains the accuracies that would be obtained if all the state vector components and $\mu_{\odot}$ were determined simultaneously without any a priori constraint whatsoever. Because of the approximate rank deficiency of order 4 (described in Sect. \[subsec:1.1\]), the normal matrix is still invertible, yet the global solution is highly downgraded. Indeed, we note a loss in accuracy of up to 2-3 orders of magnitude in the components of the planetary state vectors, while an order of magnitude is lost in the solution for $\beta$ and $\eta$. As far as the other relativistic parameters are concerned, it turns out that knowing the orbits of Mercury and the EMB at the level of some meters is sufficient to determine their values at the goal level of accuracy of MORE.
[cllcl]{} Parameter & Strategy I & Strategy II & Current knowledge & No constraints\
$\beta$ & $2.4\times 10^{-6}$ & $2.9\times 10^{-6}$ & $7\times 10^{-5}$[@fienga],$ \ 3.9\times 10^{-5}$[@park] & $2.7\times 10^{-5}$\
$\gamma$ & $7.6\times 10^{-7}$ & $7.6\times 10^{-7}$ & $2.3\times 10^{-5}$[@bert] & $ \ 7.7\times 10^{-7}$\
$\eta$ & $9.3\times 10^{-6}$ & $1.1\times 10^{-5}$ & $4.5\times 10^{-4}$[@williams] & $1.1\times 10^{-4}$\
$\alpha_1$ & $4.9\times 10^{-7}$ & $4.8\times 10^{-7}$ & $6.0\times 10^{-6}$[@iorio1] & $7.5\times 10^{-7}$\
$\alpha_2$ & $1.1\times 10^{-7}$ & $1.1\times 10^{-7}$ & $3.5\times 10^{-5}$[@iorio1] & $1.2\times 10^{-7}$\
$\mu_{\odot}$ & $1.0\times 10^{14}$ & $1.1\times 10^{14}$ & $8\times 10^{15}$ [@jpl] & $1.9\times 10^{14}$\
$J_{2\odot}$ & $4.9\times 10^{-9}$ & $5.0\times 10^{-9}$ & $1.2\times 10^{-8}$[@fienga],$\,9\times 10^{-9}$[@park] & $5.5\times 10^{-9}$\
$\zeta$ & $3.2\times 10^{-14}$ & $3.3\times 10^{-14}$ & $4.3\times 10^{-14}$ [@pit] & $3.5\times 10^{-14}$\
[clll]{} Parameter & Strategy I & Strategy II & No constraints\
$x_M$ & 0.81 & 0.65 & $4.50\times 10^2$\
$y_M$ & 3.6 & 3.0 & $2.68\times 10^2$\
$z_M$ & 4.2 & 2.2 & $1.633\times 10^3$\
$x_E$ & – & 0.70 & $5.89\times 10^1$\
$y_E$ & – & 2.8 & $1.093\times 10^3$\
$z_E$ & – & 4.3 & $4.275\times 10^3$\
$\dot{x}_M$ & $7.3\times 10^{-7}$ & $3.6\times 10^{-7}$ & $2.83\times 10^{-4}$\
$\dot{y}_M$ & $6.1\times 10^{-7}$ & $2.4\times 10^{-7}$ & $2.58\times 10^{-4}$\
$\dot{z}_M$ & $1.5\times 10^{-6}$ & $8.1\times 10^{-7}$ & $1.01\times 10^{-3}$\
$\dot{x}_E$ & $5.0\times 10^{-7}$ & $1.2\times 10^{-7}$ & $2.16\times 10^{-4}$\
$\dot{y}_E$ & $8.6\times 10^{-7}$ & $9.2\times 10^{-7}$ & $8.26\times 10^{-6}$\
$\dot{z}_E$ & – & $7.3\times 10^{-7}$ & $6.27\times 10^{-4}$\
The results achievable with strategies I and II are broadly comparable and represent a significant improvement with respect to the current knowledge (see the discussion in [@universe] for a comparison with the present knowledge), provided that orbit determination is performed at the central epoch of the orbital mission. Indeed, from Fig. \[eta\_NO\] we have seen that, adopting strategy I, there is a strong dependence of the solution on the epoch. In Fig. \[eta\_SI\] we compare the behaviour of the formal accuracy of $\beta$ (on the left) and $\eta$ (on the right) adopting strategy I (blue curve) and strategy II (green curve). The red circle refers to the estimate at the central epoch (MJD 61303).
![Comparison of the formal accuracy of $\beta$ (left) and $\eta$ (right) as a function of the epoch of the estimate (in MJD) over the mission time span adopting strategy I (blue curve) and strategy II (green curve). In red the value of the accuracy for the estimate at central epoch.[]{data-label="eta_SI"}](beta_si_new2.eps "fig:"){width="50.00000%"} ![Comparison of the formal accuracy of $\beta$ (left) and $\eta$ (right) as a function of the epoch of the estimate (in MJD) over the mission time span adopting strategy I (blue curve) and strategy II (green curve). In red the value of the accuracy for the estimate at central epoch.[]{data-label="eta_SI"}](eta_si_new2.eps "fig:"){width="50.00000%"}
Choosing the second strategy, we observe that the dependence of the accuracy on the epoch of the estimate is markedly reduced. If we consider the evolution of the formal accuracy of $\eta$ from the beginning of the orbital mission up to MJD 61350, the variability in the case of strategy I spans from a minimum of $\sigma(\eta)=2.3\times 10^{-6}$ to a maximum of $\sigma(\eta)=1.1\times 10^{-4}$, while adopting strategy II the formal accuracy ranges from a minimum of $\sigma(\eta)=3.5\times 10^{-6}$ to a maximum of $\sigma(\eta)=1.4\times 10^{-5}$, with an overall variability of only a factor of 4 instead of a factor of 50. If the orbits of Mercury and the EMB are determined in the second part of the mission, we observe a stronger variability in the accuracies with the second approach. Such behaviour could suggest that some degeneracy is still affecting the orbit determination problem; this issue will be investigated in the future. Nevertheless, the standard strategy of orbit determination codes is to adopt the initial or the central date as the reference epoch, thus for the purpose of our simulations we can ignore the behaviour of the curves in the second half of the mission time span.
Of course, if the mission scenario is exactly the one adopted in our simulations, i.e. assuming the beginning of scientific operations on 15 March 2026 and the end on 21 March 2027 (corresponding to 365 observed arcs[^6]), choosing strategy I or II does not lead to significant differences in the solution. However, Fig. \[eta\_SI\] shows that strategy II provides a more stable solution. Indeed, as an example, in Table \[2weeks\] we show the results for the accuracy of the relativity parameters under the hypothesis of moving the beginning of the orbital experiment forward by approximately two weeks, to 3 March 2026, while keeping the one-year duration.
[cllll]{} Parameter & Strategy I & Strategy II & Ratio I & Ratio II\
$\beta$ & $3.0\times 10^{-5}$ & $2.3\times 10^{-6}$ & $12.5$ & $0.79$\
$\gamma$ & $6.4\times 10^{-7}$ & $7.7\times 10^{-7}$ & $0.84$ & $1.0$\
$\eta$ & $1.2\times 10^{-4}$ & $8.7\times 10^{-6}$ & $12.5$ & $0.79$\
$\alpha_1$ & $7.6\times 10^{-7}$ & $4.4\times 10^{-7}$ & $1.5$ & $0.92$\
$\alpha_2$ & $9.6\times 10^{-8}$ & $8.0\times 10^{-8}$ & $0.87$ & $0.73$\
$\mu_{\odot}$ & $1.8\times 10^{14}$ & $7.9\times 10^{13}$ & $1.8$ & $0.72$\
$J_{2\odot}$ & $4.7\times 10^{-9}$ & $4.3\times 10^{-9}$ & $0.9$ & $0.86$\
$\zeta$ & $2.6\times 10^{-14}$ & $2.7\times 10^{-14}$ & $0.81$ & $0.82$\
The last two columns of Table \[2weeks\] show the ratio between the accuracy achieved for each parameter in the 3 March 2026 scenario and that of the 15 March 2026 scenario, for strategies I and II, respectively. It is clear that, adopting the second approach, a slight variation in the initial date of the mission in orbit leads only to slight variations in the accuracy of the relativity parameters, as expected. Conversely, in the case of the first strategy the solution turns out to be less stable. Indeed, the accuracy of $\beta$ and $\eta$ varies by an order of magnitude between the two scenarios, weakening the reliability of the achieved results.
For completeness, Table \[corr\] shows the correlations between PN and related parameters in the case of strategy I (top) and strategy II (bottom). Values higher than 0.8 have been highlighted.
[lcccccccc]{} & $\beta$ & $\gamma$ & $\eta$ & $\alpha_1$ & $\alpha_2$ & $\mu_{\odot}$ & $J_{2\odot}$ & $\zeta$\
$\zeta$ & $<0.1$ & $0.12$ & $<0.1$ & $0.12$ & $0.49$ & $0.74$ & $0.76$ & –\
$J_{2\odot}$ & $<0.1$ & $< 0.1$ & $<0.1$ & $<0.1$ & $0.26$ & $\mathbf{0.86}$ & –\
$\mu_{\odot}$ & $0.22$ & $< 0.1$ & $0.22$ & $0.42$ & $0.38$ & –\
$\alpha_2$ & $0.44$ & $0.14$ & $0.46$ & $0.28$ & –\
$\alpha_1$ & $0.25$ & $0.12$ & $0.21$ & –\
$\eta$ & $\mathbf{0.99}$ & $0.56$ & –\
$\gamma$ & $0.62$ & –\
$\beta$ & –\
& $\beta$ & $\gamma$ & $\eta$ & $\alpha_1$ & $\alpha_2$ & $\mu_{\odot}$ & $J_{2\odot}$ & $\zeta$\
$\zeta$ & $0.63$ & $0.11$ & $0.64$ & $< 0.1$ & $0.51$ & $0.76$ & $0.77$ & –\
$J_{2\odot}$ & $0.54$ & $< 0.1$ & $0.55$ & $0.11$ & $0.29$ & $\mathbf{0.86}$ & –\
$\mu_{\odot}$ & $0.76$ & $< 0.1$ & $0.76$ & $0.35$ & $0.42$ & –\
$\alpha_2$ & $0.73$ & $0.13$ & $0.73$ & $0.24$ & –\
$\alpha_1$ & $0.36$ & $0.17$ & $0.31$ & –\
$\eta$ & $\mathbf{0.99}$ & $< 0.1$ & –\
$\gamma$ & $0.16$ & –\
$\beta$ & –\
In both cases we find a high correlation only between the two physical parameters of the Sun, i.e. $\mu_{\odot}$ and $J_{2\odot}$, and between $\beta$ and $\eta$, whose correlation is near 1 due to the assumption that the PN parameters are linked through the Nordtvedt equation. In general, correlations between the parameters, although moderate, are higher in the case of strategy II. This fact was expected since, from Table \[res\_centro\], formal accuracies at the central epoch are slightly worse than those obtained with strategy I. Nevertheless, except for the $\mu_\odot$-$J_{2\odot}$ and $\beta$-$\eta$ pairs, the correlations are always lower than 0.8.
Solar LT effect and the determination of $J_{2\odot}$ {#subsec:3.2}
-----------------------------------------------------
In Sect. \[subsec:1.2\] we showed that the solar LT effect on Mercury produces a signal with a peak-to-peak amplitude of up to about ten meters after one year, hence it should be taken into account in the BepiColombo radio science data processing, otherwise it would alias the recovery of other effects, as already pointed out in [@iorio2]. In that paper it was also underlined that the measurement of the solar quadrupole $J_{2\odot}$ at the $1\%$ level or better, which is one of the goals of MORE, cannot be achieved without accounting for the solar LT effect; neglecting the gravitomagnetic field of the Sun may indeed affect the determination of $J_{2\odot}$ at the $12\%$ level. Moreover, in [@park] the authors observe that, processing three years of ranging data to MESSENGER by explicitly modelling the gravitomagnetic field of the Sun, the small precession of the perihelion of Mercury induced by the solar LT effect turns out to be highly correlated with the precession due to $J_{2\odot}$.
In this section we investigate two different aspects of the problem with BepiColombo MORE: firstly, we measure the impact on the estimated value of $J_{2\odot}$ if we do not include the solar LT in the dynamical model; secondly, we check whether solving for $GS_{\odot}$ introduces some weakness in the orbit determination problem, for instance deteriorating the formal uncertainties of the other parameters, especially $J_{2\odot}$.
In order to address the first matter, we simulated one year of BepiColombo observations including the solar LT effect and then we applied the differential corrections in two different cases: (i) we included solar LT in the corrector model; (ii) we did not include solar LT in the differential corrections. The set of estimated parameters is the same as in Sect. \[subsec:2.2\], except for the solar angular momentum, which is assumed at the nominal value $S_{\odot}=1.92\times 10^{48}\,$g$\,$cm$^2$/s [@iorio4]. The results for the estimated value and formal accuracy of $J_{2\odot}$ in the two cases are shown in Table \[J2\_1\].
[lll]{} Case & Estimated value & $\sigma(J_{2\odot})$\
LT ON & $1.992\times 10^{-7}$ & $6.0\times 10^{-10}$\
LT OFF & $1.837\times 10^{-7}$ & $6.0\times 10^{-10}$\
As expected, the formal accuracy is the same in both cases, while the estimated value of $J_{2\odot}$ at convergence is different[^7]. More precisely, we observe that neglecting the solar LT effect on the orbit of Mercury (second simulation) introduces a bias in the estimated value of $J_{2\odot}$ as large as $27\sigma$. The effect on the other parameters is only marginally relevant: we found a bias of $\sim 5\sigma$ in $\mu_{\odot}$ and biases of the same order in some components of the orbit of Mercury. On the contrary, in the first simulation the estimated value of $J_{2\odot}$ lies within $1.3\sigma$ of the nominal value. In conclusion, this test confirms that the solar LT acceleration produces effects on the orbit of Mercury which can be absorbed by $J_{2\odot}$, if not properly modelled. Under no circumstances should the LT effect be neglected for the BepiColombo MORE experiment.
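This absorption mechanism can be reproduced with a deliberately simplified linear model (hypothetical signals, nothing from the actual MORE dynamics): two correlated signatures are simulated, and omitting one of them from the fit biases the coefficient of the other, just as the unmodelled LT signal biases $J_{2\odot}$.

```python
import numpy as np

# Toy aliasing demo: the "J2-like" signature f1 and the "LT-like"
# signature f2 are correlated; fitting the simulated data without f2
# biases the estimate of the f1 coefficient.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
f1 = np.cos(2 * np.pi * t)
f2 = np.cos(2 * np.pi * t + 0.3)        # correlated with f1
data = 1.0 * f1 + 0.2 * f2 + 1e-3 * rng.standard_normal(t.size)
full = np.linalg.lstsq(np.column_stack([f1, f2]), data, rcond=None)[0]
trunc = np.linalg.lstsq(f1[:, None], data, rcond=None)[0]
print(full[0])   # ~1.0: unbiased when both effects are modelled
print(trunc[0])  # pulled away from 1.0 by the omitted-term aliasing
```

The formal accuracy of the truncated fit gives no warning of the bias, which is exactly why the LT effect must be modelled explicitly.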
Having proved that it is crucial to include the gravitomagnetic acceleration due to the Sun in the dynamical model, we go on to discuss the second point. We introduce the Lense-Thirring parameter $GS_{\odot}$ in the solve-for list: due to the high correlation with $J_{2\odot}$, we expect to find a significant worsening in the solution for the solar oblateness. A similar behaviour was already found in the case of the Juno mission and described in [@serra]. We considered three illustrative cases: (i) $J_{2\odot}$ and $GS_{\odot}$ are determined simultaneously without any a priori information on their values (same setup as Sect. \[subsec:2.2\]); (ii) the value of $J_{2\odot}$ is a priori constrained to its present knowledge, $(2\pm 0.12)\times 10^{-7}$ (cf. [@fienga]); (iii) the value of $GS_{\odot}$ is a priori constrained at the $10\%$ level[^8].
[lccc]{} & Case (i) & Case (ii) & Case (iii)\
$\sigma(J_{2\odot})$ & $5.0\times 10^{-9}$ & $4.6\times 10^{-9}$ & $1.7\times 10^{-9}$\
correlation with $GS_{\odot}$ & 0.9928 & 0.9919 & 0.9354\
The results are shown in Table \[GS\]. The simultaneous determination of $J_{2\odot}$ and $GS_\odot$ without any a priori information (case (i)) leads to a 0.99 correlation between the two parameters, as expected. As a result, the solution is downgraded by almost an order of magnitude with respect to the first row of Table \[J2\_1\]. Adding an a priori constraint on $J_{2\odot}$ at the level of the current knowledge (case (ii)) does not change the result much, as the correlation between $J_{2\odot}$ and $GS_{\odot}$ does not decrease significantly. Conversely, a rather weak constraint on $GS_{\odot}$ (case (iii)) significantly improves the solution, breaking the correlation between the two parameters (from 0.99 to 0.93). A tighter constraint on $GS_{\odot}$ would provide a further improvement of the results. In conclusion, we can state that the achievable accuracy on $J_{2\odot}$ will be mainly limited by the knowledge of the solar angular momentum.
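The asymmetry between cases (ii) and (iii) can be sketched with a hypothetical two-parameter covariance exercise ($J_{2\odot}$ and $GS_{\odot}$ in arbitrary units): starting from a 0.99 correlation, a prior on $GS_{\odot}$ alone both shrinks $\sigma(J_{2\odot})$ and weakens the correlation, qualitatively as in Table \[GS\].

```python
import numpy as np

# Build a normal matrix whose unconstrained covariance has unit sigmas
# and correlation 0.99, then add a prior on parameter 2 (GS) only.
N = np.linalg.inv(np.array([[1.0, 0.99],
                            [0.99, 1.0]]))

def stats(M):
    """Return (sigma of parameter 1, correlation) from a normal matrix."""
    C = np.linalg.inv(M)
    return np.sqrt(C[0, 0]), C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])

prior = np.zeros((2, 2))
prior[1, 1] = 1.0 / 0.3**2              # illustrative 30% prior on GS
s_free, r_free = stats(N)
s_con, r_con = stats(N + prior)
print(s_free, r_free)                   # 1.0 and 0.99: prior-free case
print(s_con, r_con)                     # smaller sigma, weaker correlation
```

A prior on the already-correlated $J_{2\odot}$ instead leaves the near-degenerate direction untouched, which is why case (ii) barely helps.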
Conclusions and remarks {#concl}
=======================
The present paper addresses two critical aspects of the BepiColombo relativity experiment. The first one concerns the approximate rank deficiency of order 4 found in the Earth and Mercury orbit determination problem. In particular, we highlighted that, depending on how the rank deficiency is cured, the dependence of the PN parameters $\beta$ and $\eta$ on the epoch of the estimate can be highly pronounced. As a consequence, the reliability of the solution can be compromised. We considered two possible strategies: the set of 13 critical parameters (initial conditions of Mercury and the EMB and the gravitational mass of the Sun) can be reduced to only 9 parameters to be determined, as done up to now in the relativity experiment settings, or we can solve for the whole set of parameters by providing 4 a priori constraint equations in input to the differential correction process. We concluded that, although by chance the present mission scenario does not imply considerable differences between the two strategies, the second strategy leads to a more stable solution and is, thus, the more advisable approach.
Secondly, we studied the impact on the determination of the solar oblateness parameter $J_{2\odot}$ of a failure to include the solar LT perturbation in Mercury’s dynamical model. The parameter $J_{2\odot}$ turns out to be highly correlated with the LT parameter $GS_{\odot}$, containing the solar angular momentum. We pointed out that neglecting the solar LT effect leads to a considerable bias in the estimated value of $J_{2\odot}$, and to an illusorily high accuracy in the determination of the same parameter. Nevertheless, we have shown that including in the LS fit some reasonable a priori information on $GS_{\odot}$ can help contain the deterioration of the solution for $J_{2\odot}$.
The results of the research presented in this paper have been performed within the scope of the Addendum n. I/080/09/1 of the contract n. I/080/09/0 with the Italian Space Agency.
E.M. Alessi, S. Cicalò, A. Milani and G. Tommei, Desaturation manoeuvres and precise orbit determination for the BepiColombo mission, Mon. Not. R. Astron. Soc., 423, 2270-2278 (2012).
N. Ashby, P.L. Bender and J.M. Wahr, Future gravitational physics tests from ranging to the BepiColombo Mercury planetary orbiter, Phys. Rev. D, 75, 022001 (2007).
J. Benkhoff et al., BepiColombo - Comprehensive exploration of Mercury: Mission overview and science goals, Plan. Space. Sci., 58, 2-10 (2010).
B. Bertotti, L. Iess and P. Tortora, A test of general relativity using radio links with the Cassini spacecraft, Nature, 425, 374-376 (2003).
S. Cicalò and A. Milani, Determination of the rotation of Mercury from satellite gravimetry, Mon. Not. R. Astron. Soc., 427, 468-482 (2012).
S. Cicalò et al., The BepiColombo MORE gravimetry and rotation experiments with the ORBIT14 software, Mon. Not. R. Astron. Soc., 457, 1507-1521 (2016).
F. De Marchi, G. Tommei, A. Milani and G. Schettino, Constraining the Nordtvedt parameter with the BepiColombo Radioscience experiment, Phys. Rev. D, 93, 123014 (2016).
A. Fienga, J. Laskar, P. Exertier, H. Manche and M. Gastineau, Numerical estimation of the sensitivity of INPOP planetary ephemerides to general relativity parameters, Celest. Mech. Dyn. Astron., 123, 325-349 (2015).
V. Iafolla et al., Italian Spring Accelerometer (ISA): A fundamental support to BepiColombo Radio Science Experiments, Planet. Space Sci., 58 300-308, (2010).
L. Iess and G. Boscagli, Advanced radio science instrumentation for the mission BepiColombo to Mercury, Planet. Space Sci., 49, 1597-1608 (2001).
L. Iess, S. Asmar and P. Tortora, MORE: An advanced tracking experiment for the exploration of Mercury with the mission BepiColombo, Acta Astron., 65, 666-675 (2009).
L. Iorio, H.I.M. Lichtenegger, M.L. Ruggiero and C. Corda, Phenomenology of the Lense-Thirring effect in the solar system, Astrophys. Space Sci., 331, 351-395 (2011).
L. Iorio, Constraining the Angular Momentum of the Sun with Planetary Orbital Motions and General Relativity, Solar Phys., 281, 815–826 (2012).
L. Iorio, Constraining the Preferred-Frame $\alpha_1$, $\alpha_2$ Parameters from Solar System Planetary Precessions , Int. J. Mod. Phys. D, 23, 1450006 (2014).
L. Iorio, arXiv:1601.01382 (2016).
Y. Kozai, Effects of the Tidal Deformation of the Earth on the Motion of Close Earth Satellites, Publ. Astron. Soc. Jpn., 17, 395-402 (1965).
J. Lense, H. Thirring, Über den Einfluß der Eigenrotation der Zentralkörper auf die Bewegung der Planeten und Monde nach der Einsteinschen Gravitationstheorie, Phys. Z., 19, 156 (1918).
A. Milani et al., Gravity field and rotation state of Mercury from the BepiColombo Radio Science Experiments, Plan. Space Sci., 49, 1579-1596 (2001).
A. Milani et al., Testing general relativity with the BepiColombo radio science experiment, Phys. Rev. D, 66, 082001 (2002).
A. Milani et al., Relativistic models for the BepiColombo radioscience experiment, in *Relativity in Fundamental Astronomy: Dynamics, Reference Frames, and Data Analysis*, Proceedings of the International Astronomical Union, IAU Symposium, 261, 356-365 (2010).
A. Milani and G.F. Gronchi, Theory of orbit determination, Cambridge Univ. Press, Cambridge, UK (2010).
T.D. Moyer, Formulation for Observed and Computed Values of Deep Space Network Data Types of Navigation, NASA JPL, Deep Space Communications and Navigation Series, Monograph 2 (2000).
T. Mukai et al., Present status of the BepiColombo/Mercury magnetospheric orbiter, Adv. Space Res., 38 578-582 (2006).
K.J. Nordtvedt, Post-Newtonian Metric for a General Class of Scalar-Tensor Gravitational Theories and Observational Consequences, Astrophys. J., 161, 1059-1067 (1970).
R.S. Park et al., Precession of Mercury’s Perihelion from Ranging to the MESSENGER Spacecraft, Astron. J., 153, 121 (2017).
F.P. Pijpers, Helioseismic determination of the solar gravitational quadrupole moment, Mon. Not. R. Astron. Soc., 297, L76-L80 (1998).
E.V. Pitjeva and N.P. Pitjev, Relativistic effects and dark matter in the Solar system from observations of planets and spacecraft, Mon. Not. R. Astron. Soc., 432, 3431-3437 (2013).
N. Sanchez Ortiz, M. Belló Mora and R. Jehn, BepiColombo mission: Estimation of Mercury gravity field and rotation parameters, Acta Astron., 58, 236-242 (2006).
G. Schettino, S. Cicalò, S. Di Ruzza and G. Tommei, The relativity experiment of MORE: global full-cycle simulation and results, in Proceedings of the IEEE Metrology for Aerospace (MetroAeroSpace), Benevento, Italy, 2015, 4-5 June, pp. 141-145.
G. Schettino et al., The radio science experiment with BepiColombo mission to Mercury, Mem. SAIt, 87, 24-29 (2016).
G. Schettino, L. Imperi, L. Iess and G. Tommei, Sensitivity study of systematic errors in the BepiColombo relativity experiment, in Proceedings of the IEEE Metrology for Aerospace (MetroAeroSpace), Florence, Italy, 2016, 22-23 June, pp. 533-537.
G. Schettino and G. Tommei, Testing General Relativity with the Radio Science Experiment of the BepiColombo mission to Mercury, Universe, 2, 21 (2016).
G. Schettino, S. Cicalò, G. Tommei and A. Milani, Determining the amplitude of Mercury’s long period librations with the BepiColombo radio science experiment, Eur. Phys. J. Plus, 132, 218, 6 pp (2017).
A.K. Schuster, R. Jehn and E. Montagnon, Spacecraft design impacts on the post-Newtonian parameter estimation, in Proceedings of the IEEE Metrology for Aerospace (MetroAeroSpace), Benevento, Italy, 2015, 4-5 June, pp. 82-87.
D. Serra, L. Dimare, G. Tommei and A. Milani, Gravimetry, rotation and angular momentum of Jupiter from the Juno Radio Science Experiment, Plan. Space Sci., 134, 100-111 (2016).
I.I. Shapiro, Fourth Test of General Relativity, Phys. Rev. Lett., 13, 789-791 (1964).
G. Tommei, A. Milani and D. Vokrouhlicky, Light-time computations for the BepiColombo Radio Science Experiment, Celest. Mech. Dyn. Astron., 107, 285-298 (2010).
G. Tommei, L. Dimare, D. Serra and A. Milani, On the Juno Radio Science Experiment: models, algorithms and sensitivity analysis, Mon. Not. R. Astron. Soc., 446, 3089-3099 (2015).
C.M. Will, Theory and Experiment in Gravitational Physics, Cambridge Univ. Press, Cambridge, UK (1993).
C.M. Will, The Confrontation between General Relativity and Experiment, Living Rev. Relativ., 17, 1-117 (2014).
J.G. Williams, S.G. Turyshev and D.H. Boggs, Lunar Laser Ranging Tests of the Equivalence Principle with the Earth and Moon, Int. J. Mod. Phys. D, 18, 1129-1175 (2009).
Value from latest JPL ephemerides publicly available online at: http://ssd.jpl.nasa.gov/?constants (accessed 22.08.2017).
[^1]: The strategy adopted in our orbit determination code is to determine the EMB orbit instead of the Earth orbit.
[^2]: There are accurate definitions of the time scales based upon atomic clocks.
[^3]: under an Italian Space Agency commission.
[^4]: Two additional parameters, estimating a possible bias and rate over time in the range observations, can be added to the solve-for list to avoid biases in the solution due to systematic errors in ranging.
[^5]: Strategy I has been adopted until now for the MORE relativity experiment.
[^6]: For the definition of observed arc see, e.g., [@Cic_16].
[^7]: The nominal value of $J_{2\odot}$ in simulation has been set to $2.0\times 10^{-7}$.
[^8]: From helioseismology, the angular momentum of the Sun can be constrained significantly better than the 10% level (see, e.g., [@pijpers]), thus our assumption is fully acceptable and is consistent with what is done in [@park].
**Dismantlability of weakly systolic complexes and applications**
[Victor Chepoi$^{\small 1}$]{} and [Damian Osajda]{}$^{\small 2}$
$^{1}$Laboratoire d’Informatique Fondamentale,
Université d’Aix-Marseille,
Faculté des Sciences de Luminy,
F-13288 Marseille Cedex 9, France
[email protected]
$^2$ Instytut Matematyczny, Uniwersytet Wroc[ł]{}awski
pl. Grunwaldzki 2/4, 50-384 Wroc[ł]{}aw, Poland
and
Université de Lille I, Laboratoire Paul Painlevé
F-59655 Villeneuve d’Ascq, France
[email protected]
[**Abstract.**]{} In this paper, we investigate the structural properties of weakly systolic complexes introduced recently by the second author and of their 1-skeletons, the weakly bridged graphs. We present several characterizations of weakly systolic complexes and weakly bridged graphs. Then we prove that weakly bridged graphs are dismantlable. Using this, we establish the fixed point theorem for weakly systolic complexes. As a consequence, we get results about conjugacy classes of finite subgroups and classifying spaces for finite subgroups of weakly systolic groups. As immediate corollaries, we obtain new results on systolic complexes and systolic groups.
Introduction {#intro}
============
In his seminal paper [@G], among many other results, Gromov gave a pretty combinatorial characterization of CAT(0) cubical complexes as simply connected cubical complexes in which the links of vertices are simplicial flag complexes. Based on this result, [@Ch_CAT; @Rol] established a bijection between the 1-skeletons of CAT(0) cubical complexes and the median graphs, well-known in metric graph theory [@BaCh_survey]. A similar combinatorial characterization of CAT(0) simplicial complexes having regular Euclidean simplices as cells seems to be out of reach. Nevertheless, [@Ch_CAT] characterized the bridged complexes (i.e., the simplicial complexes having bridged graphs as 1-skeletons) as the simply connected simplicial complexes in which the links of vertices are flag complexes without embedded 4- and 5-cycles; the bridged graphs are exactly the graphs which satisfy one of the basic feature of CAT(0) spaces: the balls around convex sets are convex. Bridged graphs have been introduced and characterized in [@FaJa; @SoCh] as graphs without embedded isometric cycles of length greater than 3 and have been further investigated in several graph-theoretical and algebraic papers; cf. [@AnFa; @BaCh_weak; @Ch_bridged; @Po; @Po1] and the survey [@BaCh_survey]. Januszkiewicz-Swiatkowski [@JanSwi] and Haglund [@Ha] rediscovered this class of simplicial complexes (they call them [*systolic complexes*]{}) using them (and groups acting on them geometrically—[*systolic groups*]{}) fruitfully in the context of geometric group theory. Systolic complexes and groups turned out to be good combinatorial analogs of CAT(0) (nonpositively curved) metric spaces and groups; cf. [@Ha; @JanSwi; @O-ciscg; @OsPr; @Pr2; @Pr3].
One of the characteristic features of systolic complexes, related with convexity of balls around convex sets, is the following $SD_n(\sigma^*)$ property introduced in [@Osajda]: [*if a simplex $\sigma$ of a simplicial complex $\bf X$ is located in the sphere of radius $n+1$ centered at some simplex $\sigma^*$ of $\bf X$, then the set of all vertices $x$ such that $\sigma\cup\{ x\}$ is a simplex and $x$ has distance $n$ to $\sigma^*$ is a nonempty simplex $\sigma_0$ of $\bf X$.*]{} Relaxing this condition, Osajda [@Osajda] called a simplicial complex $\bf X$ [*weakly systolic*]{} if the property $SD_n(\sigma^*)$ holds whenever $\sigma^*$ is a vertex (i.e., a 0-dimensional simplex) of $\bf X$. He further showed that this $SD_n$ property is equivalent to the $SD_n(\sigma^*)$ property in which $\sigma^*$ is a vertex and $\sigma$ is a vertex or an edge (i.e., a 1-dimensional simplex) of $\bf X$. Finally, it is shown in [@Osajda] that weakly systolic complexes can be characterized as simply connected simplicial complexes satisfying some local combinatorial conditions, cf. also Theorem A below. This is analogous to the cases of $CAT(0)$ cubical complexes and systolic complexes. In graph-theoretical terms, the 1-skeletons of weakly systolic complexes (which we call [*weakly bridged graphs*]{}) satisfy the so-called triangle and quadrangle conditions [@BaCh_weak], i.e., like median and bridged graphs, the weakly bridged graphs are weakly modular graphs. As is shown in [@Osajda] and in this paper, the properties of weakly systolic complexes closely resemble properties of spaces of non-positive curvature.
The initial motivation of [@Osajda] for introducing weakly systolic complexes was to exhibit a class of simplicial complexes with some kind of simplicial nonpositive curvature that would include the systolic complexes and some other classes of complexes appearing in the context of geometric group theory. As we noticed already, systolic complexes are weakly systolic. Moreover, for every simply connected locally $5$-large cubical complex (i.e. $CAT(-1)$ cubical complex [@G]) there exists a canonically associated simplicial complex, which is weakly systolic [@Osajda]. In particular, the class of [*weakly systolic groups*]{}, i.e., groups acting geometrically by automorphisms on weakly systolic complexes, contains the class of $CAT(-1)$ cubical groups and is therefore essentially bigger than the class of systolic groups; cf. [@O-ciscg]. Other classes of weakly systolic groups are presented in [@Osajda]. The ideas and results from [@Osajda] allowed the construction in [@O2] of new examples of Gromov hyperbolic groups of arbitrarily large (virtual) cohomological dimension. Furthermore, Osajda [@Osajda] and Osajda-Świątkowski [@OS] provide new examples of high dimensional groups with interesting asphericity properties. On the other hand, as we will show below, the class of weakly systolic complexes seems also to appear naturally in the context of graph theory and has not been studied before from this point of view.
In this paper, we present further characterizations and properties of weakly systolic complexes and their 1-skeletons, weakly bridged graphs. Relying on techniques from graph theory, we establish dismantlability of locally finite weakly bridged graphs. This result is used to show some interesting nonpositive-curvature-like properties of weakly systolic complexes and groups (see [@Osajda] for other properties of this kind). As corollaries, we also get new results about systolic complexes and groups. We conclude this introductory section with the formulation of our main results (see the respective sections for all missing definitions and notations as well as for other related results).
We start with a characterization of weakly systolic complexes proved in Section \[char\]:
[**Theorem A.**]{}
*For a flag simplicial complex $\bold X$ the following conditions are equivalent:*
- ${\bold X}$ is weakly systolic;
- the 1-skeleton of ${\bold X}$ is a weakly modular graph without induced $C_4$;
- the 1-skeleton of ${\bold X}$ is a weakly modular graph with convex balls;
- the 1-skeleton of ${\bold X}$ is a graph with convex balls in which any $C_5$ is included in a 5-wheel $W_5$;
- $\bold X$ is simply connected, satisfies the $\widehat{W}_5$-condition, and does not contain induced $C_4.$
In Section \[dismantlability\] we prove the following result:
[**Theorem B.**]{} [*Any LexBFS ordering of vertices of a locally finite weakly systolic complex $\bf X$ is a dismantling ordering of its 1-skeleton.*]{}
This dismantlability result has several consequences presented in Section \[dismantlability\]. This result also allows us to prove in Section \[fixedpt\] the following fixed point theorem concerning group actions:
[**Theorem C.**]{} [*Let $G$ be a finite group acting by simplicial automorphisms on a locally finite weakly systolic complex ${\bold X}$. Then there exists a simplex $\sigma \in {\bold X}$ which is invariant under the action of $G$.*]{}
The barycenter of an invariant simplex is a point fixed by $G$. An analogous theorem holds in the case of $CAT(0)$ spaces; cf. [@BrHa Corollary 2.8]. As a direct corollary of Theorem C, we get the fixed point theorem for systolic complexes. This was conjectured by Januszkiewicz-Świątkowski (personal communication) and Wise [@Wi], and later was formulated in the collection of open questions [@Chat Conjecture 40.1 on page 115]. A partial result in the systolic case was proved by Przytycki [@Pr2]. In fact, in Section \[final\], based on a result of Polat [@Po] for bridged graphs, we prove an even stronger version of the fixed point theorem in this case.
There are several important group theoretical consequences of Theorem C. The first one follows directly from this theorem and [@Pr2 Remarks 7.7$\&$7.8].
[**Theorem D.**]{} [*Let $k\geq 6$. Free products of $k$-systolic groups amalgamated over finite subgroups are $k$-systolic. HNN extensions of $k$-systolic groups over finite subgroups are $k$-systolic.*]{}
The following result (Corollary \[conj\] below) has also its $CAT(0)$ counterpart; cf. [@BrHa Corollary 2.8]:
[**Corollary.**]{} [*Let $G$ be a weakly systolic group. Then $G$ contains only finitely many conjugacy classes of finite subgroups.*]{}
The next important consequence of the fixed point theorem concerns classifying spaces for proper group actions. Recall that if a group $G$ acts properly on a space $\bf X$ such that the fixed point set for any finite subgroup of $G$ is contractible (and therefore non-empty), then we say that $\bf X$ is a *model for ${\underline EG}$*—the classifying space for the family of finite subgroups. If additionally the action is cocompact, then $\bf X$ is a *finite model for ${\underline EG}$*. A (finite) model for ${\underline EG}$ is in a sense a “universal” $G$-space (see [@Lu] for details). The following theorem is a direct consequence of Theorem C and Proposition \[inv set contr\] below.
[**Theorem E.**]{} [*Let $G$ act properly by simplicial automorphisms on a finite dimensional weakly systolic complex $\bf X$. Then $\bf X$ is a finite dimensional model for ${\underline EG}$. If, moreover, the action of $G$ on $\bf X$ is cocompact, then $\bf X$ is a finite model for ${\underline EG}$.*]{}
As an immediate consequence we get an analogous result about ${\underline EG}$ for systolic groups. This was conjectured in [@Chat Chapter 40]. Przytycki [@Pr3] showed that the Rips complex (with the constant at least $5$) of a systolic complex is an ${\underline EG}$ space. Our result gives a systolic—and thus much nicer—model of ${\underline EG}$ in that case.
In the final Section \[final\] we present some further results about systolic complexes and groups. Besides a stronger version of the fixed point theorem mentioned above, we remark on another approach to this theorem initiated by Zawiślak [@Z] and Przytycki [@Pr2]. In particular, our Proposition \[round\] proves their conjecture about round complexes; cf. [@Z Conjecture 3.3.1] and [@Pr2 Remark 8.1]. Finally, we show (cf. Claim \[Z\]) how our results about ${\underline EG}$ apply to the questions of existence of particular boundaries of systolic groups (and thus to the Novikov conjecture for systolic groups with torsion). This relies on earlier results of Osajda-Przytycki [@OsPr].
Preliminaries
=============
Graphs and simplicial complexes
-------------------------------
We continue with basic definitions used in this paper concerning graphs and simplicial complexes. All graphs $G=(V,E)$ occurring here are undirected, connected, and without loops or multiple edges. The [*distance*]{} $d(u,v)$ between two vertices $u$ and $v$ is the length of a shortest $(u,v)$-path, and the [*interval*]{} $I(u,v)$ between $u$ and $v$ consists of all vertices on shortest $(u,v)$-paths, that is, of all vertices (metrically) [*between*]{} $u$ and $v$: $$I(u,v)=\{ x\in V: d(u,x)+d(x,v)=d(u,v)\}.$$ An induced subgraph of $G$ (or the corresponding vertex set $A$) is called [*convex*]{} if it includes the interval of $G$ between any of its vertices. By the [*convex hull*]{} conv$(W)$ of $W$ in $G$ we mean the smallest convex subset of $V$ (or induced subgraph of $G$) that contains $W.$ An [*isometric subgraph*]{} of $G$ is an induced subgraph in which the distances between any two vertices are the same as in $G.$ In particular, convex subgraphs are isometric. The [*ball*]{} (or disk) $B_r(x)$ of center $x$ and radius $r\ge 0$ consists of all vertices of $G$ at distance at most $r$ from $x.$ In particular, the unit ball $B_1(x)$ comprises $x$ and the neighborhood $N(x)$ of $x.$ The [*sphere*]{} $S_r(x)$ of center $x$ and radius $r\ge 0$ consists of all vertices of $G$ at distance exactly $r$ from $x.$ The ball $B_r(S)$ centered at a convex set $S$ is the union of all balls $B_r(x)$ with centers $x$ from $S.$ The [*sphere*]{} $S_r(S)$ of center $S$ and radius $r\ge 0$ consists of all vertices of $G$ at distance exactly $r$ from $S.$
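For readers who wish to experiment with these metric notions, they translate directly into short computations on finite graphs. The following Python sketch is our own illustration (the function names and the encoding of graphs as dictionaries mapping each vertex to its set of neighbors are ad hoc, not taken from the cited literature); it computes intervals $I(u,v)$ and tests convexity of a vertex set.

```python
from collections import deque
from itertools import combinations

def all_dists(adj):
    """All-pairs shortest-path distances by BFS.
    adj: {vertex: set of neighbors} of a connected graph."""
    d = {}
    for s in adj:
        d[s] = {s: 0}
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in d[s]:
                    d[s][y] = d[s][x] + 1
                    q.append(y)
    return d

def interval(d, u, v):
    """I(u, v): all vertices metrically between u and v."""
    return {x for x in d if d[u][x] + d[x][v] == d[u][v]}

def is_convex(adj, A):
    """A is convex iff it contains the interval between any two of its vertices."""
    d = all_dists(adj)
    A = set(A)
    return all(interval(d, u, v) <= A for u, v in combinations(A, 2))
```

For instance, in the 4-cycle $C_4$ the unit ball $B_1(x)$ around any vertex fails to be convex, a fact implicitly used throughout the characterizations below.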
A graph $G$ is called [*thin*]{} if for any two nonadjacent vertices $u,v$ of $G$ any two neighbors of $v$ in the interval $I(u,v)$ are adjacent. A graph $G$ is [*weakly modular*]{} [@BaCh_weak; @BaCh_survey] if its distance function $d$ satisfies the following conditions:
[*Triangle condition*]{} (T): for any three vertices $u,v,w$ with $1=d(v,w)<d(u,v)=d(u,w)$ there exists a common neighbor $x$ of $v$ and $w$ such that $d(u,x)=d(u,v)-1.$
[*Quadrangle condition*]{} (Q): for any four vertices $u,v,w,z$ with $d(v,z)=d(w,z)=1$ and $2=d(v,w)\le d(u,v)=d(u,w)=d(u,z)-1,$ there exists a common neighbor $x$ of $v$ and $w$ such that $d(u,x)=d(u,v)-1.$
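On a finite graph the triangle and quadrangle conditions can be checked by brute force over all vertex tuples. The Python sketch below is our own illustration (far from optimal, and not code from the cited works); it follows the two conditions literally.

```python
from collections import deque
from itertools import product

def bfs_dists(adj):
    """All-pairs distances by BFS; adj: {vertex: set of neighbors}."""
    d = {}
    for s in adj:
        d[s] = {s: 0}
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in d[s]:
                    d[s][y] = d[s][x] + 1
                    q.append(y)
    return d

def is_weakly_modular(adj):
    d = bfs_dists(adj)
    V = list(adj)
    # Triangle condition (T): 1 = d(v,w) < d(u,v) = d(u,w)
    for u, v, w in product(V, repeat=3):
        if d[v][w] == 1 and d[u][v] == d[u][w] > 1:
            if not any(x in adj[v] and x in adj[w] and d[u][x] == d[u][v] - 1
                       for x in V):
                return False
    # Quadrangle condition (Q): d(v,z) = d(w,z) = 1, 2 = d(v,w) <= d(u,v) = d(u,w) = d(u,z) - 1
    for u, v, w, z in product(V, repeat=4):
        if (d[v][z] == 1 and d[w][z] == 1 and d[v][w] == 2
                and 2 <= d[u][v] == d[u][w] == d[u][z] - 1):
            if not any(x in adj[v] and x in adj[w] and d[u][x] == d[u][v] - 1
                       for x in V):
                return False
    return True
```

Note that the 4-cycle $C_4$ is weakly modular (though of course excluded from weakly bridged graphs by $C_4$-freeness), while the 5-cycle $C_5$ already violates the triangle condition.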
An abstract [*simplicial complex*]{} ${\bold X}$ is a collection of sets (called [*simplices*]{}) such that $\sigma\in {\bold X}$ and $\sigma'\subseteq \sigma$ implies $\sigma'\in {\bold X}.$ The [*geometric realization*]{} $\vert {\bold X}\vert$ of a simplicial complex is the polyhedral complex obtained by replacing every face $\sigma$ of $\bf X$ by a “solid" regular simplex $|\sigma|$ of the same dimension such that realization commutes with intersection, that is, $|\sigma'|\cap |\sigma''|=|\sigma'\cap \sigma''|$ for any two simplices $\sigma'$ and $\sigma''.$ Then $\vert {\bold X}\vert=\bigcup\{ |\sigma|:\sigma\in {\bold X}\}.$ $\bold X$ is called [*simply connected*]{} if it is connected and if every continuous mapping of the 1-dimensional sphere $S^1$ into $|{\bold X}|$ can be extended to a continuous mapping of the disk $D^2$ with boundary $S^1$ into $|{\bold X}|$.
For a simplicial complex $\bold X$, denote by $V({\bold X})$ and $E({\bold X})$ the [*vertex set*]{} and the [*edge set*]{} of ${\bold X},$ namely, the set of all 0-dimensional and 1-dimensional simplices of ${\bold X}.$ The pair $(V({\bold X}),E({\bold X}))$ is called the [*(underlying) graph*]{} or the [*1-skeleton*]{} of ${\bold X}$ and is denoted by $G({\bold X})$. Conversely, for a graph $G$ one can derive a simplicial complex ${\bold X}(G)$ (the [*clique complex*]{} of $G$) by taking all complete subgraphs (cliques) as simplices of the complex. A simplicial complex $\bold X$ is a [*flag complex*]{} (or a [*clique complex*]{}) if any set of vertices is included in a face of $\bold X$ whenever each pair of its vertices is contained in a face of ${\bold X}$ (in the theory of hypergraphs this condition is called conformality). A flag complex can therefore be recovered from its underlying graph $G({\bold X})$: the complete subgraphs of $G({\bold X})$ are exactly the simplices of ${\bold X}.$ The [*link*]{} of a simplex $\sigma$ in ${\bold X},$ denoted lk$(\sigma,{\bold X}),$ is the simplicial complex consisting of all simplices $\sigma'$ such that $\sigma \cap \sigma'=\emptyset$ and $\sigma\cup\sigma'\in {\bold X}.$ For a simplicial complex $\bold X$ and a vertex $v$ not belonging to ${\bold X},$ the [*cone*]{} with apex $v$ and base $\bold X$ is the simplicial complex $v\ast {\bold X}={\bold X}\cup \{ \sigma\cup \{ v\}: \sigma\in {\bold X}\}.$
For a simplicial complex $\bold X$ and any $k\ge 1,$ the [*Rips complex*]{} ${\bold X}_k$ is a simplicial complex with the same set of vertices as $\bold X$ and with a simplex spanned by any subset $S\subset V({\bold X})$ such that $d(u,v)\le k$ in $G({\bold X})$ for each pair of vertices $u,v\in S$ (i.e., $S$ has diameter $\le k$ in the graph $G({\bold X})$). It follows immediately from the definition that the Rips complex of any complex is a flag complex. Alternatively, the Rips complex ${\bold X}_k$ can be viewed as the clique complex ${\bold X}(G^k({\bold X}))$ of the $k$th power of the graph of $\bold X$ (the [*$k$th power*]{} $G^k$ of a graph $G$ has the same set of vertices as $G$ and two vertices $u,v$ are adjacent in $G^k$ if and only if $d(u,v)\le k$ in $G$).
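As an illustration of the second description, the $k$th power of a finite graph can be computed with one depth-bounded BFS per vertex; this is our own sketch, not code from any of the cited works.

```python
from collections import deque

def graph_power(adj, k):
    """k-th power of a graph: u, v become adjacent iff 0 < d(u, v) <= k.
    adj: {vertex: set of neighbors}; returns the power in the same format."""
    power = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            x = q.popleft()
            if dist[x] == k:      # do not explore beyond depth k
                continue
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        power[s] = set(dist) - {s}
    return power
```

The Rips complex ${\bold X}_k$ is then the clique complex of the returned graph.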
$SD_n$ property and weakly systolic complexes
---------------------------------------------
The following generalization of systolic complexes has been presented by Osajda [@Osajda]. A flag simplicial complex ${\bold X}$ satisfies the property of [*simple descent on balls*]{} of radii at most $n$ centered at a simplex $\sigma^*$ (*property $SD_n(\sigma^*)$* [@Osajda]) if for each $i=0,1,2,...,n$ and each simplex $\sigma$ located in the sphere $S_{i+1}(\sigma^*)$ the set $\sigma_0:=V({\mathrm{lk}}(\sigma,{\bf X}))\cap B_i(\sigma^*)$ spans a non-empty simplex of $\bf X$. Systolic complexes are exactly the flag complexes which satisfy the $SD_n(\sigma^*)$ property for all simplices $\sigma^*$ and all natural numbers $n$. On the other hand, the 5-wheel is an example of a (2-dimensional) simplicial complex which satisfies the $SD_2$ property for vertices and triangles but not for edges. In view of this analogy and of subsequent results, we call [*weakly systolic*]{} a flag simplicial complex $\bold X$ which satisfies the $SD_n(v)$ property for all vertices $v\in V({\bold X})$ and for all natural numbers $n$. We also call [*weakly bridged*]{} the underlying graphs of weakly systolic complexes. It can be shown (cf. Theorem \[weakly-systolic\]) that $\bold X$ is a weakly systolic complex if and only if for each vertex $v$ and every $i$ it satisfies the following two conditions:
[*Vertex condition*]{} (V): for every vertex $w \in S_{i+1}(v),$ the intersection $V({\mathrm{lk}}(w,{\bf X}))\cap B_i(v)$ is a single simplex;
[*Edge condition*]{} (E): for every edge $e \in S_{i+1}(v),$ the intersection $V({\mathrm{lk}}(e,{\bf X}))\cap B_i(v)$ is nonempty.
In fact, this is the original definition of a weakly systolic complex given in [@Osajda]. Notice that these two conditions imply that weakly systolic complexes are exactly the flag complexes whose underlying graphs are thin and satisfy the triangle condition.
Dismantlability of graphs and LC-contractibility of complexes {#dislc}
-------------------------------------------------------------
Let $G=(V,E)$ be a graph and $u,v$ two vertices of $G$ such that any neighbor of $v$ (including $v$ itself) is also a neighbor of $u$. Then there is a retraction of $G$ to $G-v$ taking $v$ to $u$. Following [@HeNe], we call this retraction a [*fold*]{} and we say that $v$ is [*dominated*]{} by $u.$ A finite graph $G$ is [*dismantlable*]{} if it can be reduced, by a sequence of folds, to a single vertex. In other words, an $n$-vertex graph $G=(V,E)$ is dismantlable if its vertices can be ordered $v_1,\ldots,v_n$ so that for each vertex $v_i, 1\le i<n,$ there exists another vertex $v_j$ with $j>i,$ such that $N_1(v_i)\cap V_i\subseteq N_1(v_j)\cap V_i,$ where $V_i:=\{ v_i,v_{i+1},\ldots,v_n\}.$ This order is called a [*dismantling order*]{}. Consider now for simplicial complexes $\bold X$ the analogue of dismantlability investigated in the papers [@CiYa; @Ma]. A vertex $v$ of $\bold X$ is [*LC-removable*]{} if lk$(v,{\bold X})$ is a cone. If $v$ is an LC-removable vertex of $\bold X$, then ${\bold X}-v:= \{ \sigma\in {\bold X}: v\notin \sigma\}$ is obtained from $\bold X$ by an [*elementary LC-reduction*]{} (link-cone reduction) [@Ma]. Then $\bold X$ is called [*LC-contractible*]{} [@CiYa] if there is a sequence of elementary LC-reductions transforming $\bold X$ to one vertex. For flag simplicial complexes, the LC-contractibility of $\bold X$ is equivalent to dismantlability of its graph $G({\bold X})$ because an LC-removable vertex $v$ is dominated by the apex of the cone lk$(v,{\bold X})$ and, vice versa, the link of any dominated vertex $v$ is a cone having the vertex dominating $v$ as its apex. It is clear that LC-contractible simplicial complexes are collapsible (see also [@CiYa Corollary 6.5]).
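Since removing a dominated vertex preserves dismantlability, a finite graph can be tested for dismantlability by greedily folding dominated vertices. The following Python sketch is our own illustration of this greedy test (its correctness rests on the folklore fact just mentioned, cf. the cop-win characterization below, and it makes no attempt at efficiency).

```python
def is_dismantlable(adj):
    """Greedy dismantlability test for a finite graph {vertex: set of neighbors}.
    A vertex v is dominated by u != v if the closed neighborhood N1(v)
    is contained in the closed neighborhood N1(u)."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    while len(adj) > 1:
        for v in list(adj):
            closed_v = adj[v] | {v}
            if any(closed_v <= adj[u] | {u} for u in adj if u != v):
                for u in adj[v]:                  # fold: delete v
                    adj[u].discard(v)
                del adj[v]
                break
        else:
            return False    # no dominated vertex: not dismantlable
    return True
```

For example, trees and complete graphs are dismantlable, while the 4-cycle has no dominated vertex at all.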
Dismantlable graphs are closed under retracts and direct products, i.e., they constitute a variety [@NowWin]. Nowakowski and Winkler [@NowWin] and Quilliot [@Qui83] characterized the dismantlable graphs as cop-win graphs, i.e., graphs in which a single cop captures the robber after a finite number of moves for all possible initial positions of the two players. The cops and robber game is a pursuit-evasion game played on finite (or infinite) undirected graphs in which the two players move alternately starting from their initial positions, where a move is to slide along an edge or to stay at the same vertex. The objective of the cop is to capture the robber, i.e., to be at some moment of time at the same vertex as the robber. The objective of the robber is to evade the cop forever.
The simplest algorithmic way to order the vertices of a graph is to apply the [*Breadth-First Search*]{} (BFS) starting from a root vertex (base point) $u.$ We number with 1 the vertex $u$ and put it on the initially empty queue. We repeatedly remove the vertex $v$ at the head of the queue and consecutively number and place onto the queue all still unnumbered neighbors of $v$. BFS constructs a spanning tree $T_u$ of $G$ with the vertex $u$ as a root. Then a vertex $v$ is the [*father*]{} in $T_u$ of any of its neighbors $w$ in $G$ placed onto the queue when $v$ is removed (notation $f(w)=v$). The procedure is called once for each vertex $v$ and processes $v$ in $O(\deg(v))$ time, so the total complexity of its implementation is linear. Notice that the distance from any vertex $v$ to the root $u$ is the same in $G$ and in $T_u.$ Another method to order the vertices of a graph in linear time is the [*Lexicographic Breadth-First Search*]{} (LexBFS) proposed by Rose, Tarjan, and Lueker [@RoTaLu]. According to LexBFS, the vertices of a graph $G$ are numbered from $n$ to 1 in decreasing order. The [*label*]{} $L(w)$ of an unnumbered vertex $w$ is the list of the numbers of its numbered neighbors, sorted in decreasing order. As the next vertex to be numbered, select the vertex with the lexicographically largest label, breaking ties arbitrarily. LexBFS is a particular instance of BFS, i.e., every ordering produced by LexBFS can also be generated by BFS.
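A straightforward rendering of LexBFS (quadratic-time, rather than the linear-time partition-refinement implementation of Rose, Tarjan, and Lueker) reads as follows; this is our own illustrative code.

```python
def lexbfs(adj, start):
    """LexBFS ordering of a finite connected graph {vertex: set of neighbors}.
    Vertices receive numbers n, n-1, ..., 1; the returned list gives them in
    decreasing order of their numbers, beginning with `start`."""
    n = len(adj)
    label = {v: [] for v in adj}      # decreasing lists of numbered neighbors
    label[start] = [n + 1]            # force `start` to be numbered first
    number = {}
    order = []
    for k in range(n, 0, -1):
        # pick an unnumbered vertex with lexicographically largest label;
        # Python compares lists lexicographically, and since numbers are
        # appended in decreasing order every label stays sorted decreasingly
        v = max((w for w in adj if w not in number), key=lambda w: label[w])
        number[v] = k
        order.append(v)
        for w in adj[v]:
            if w not in number:
                label[w].append(k)
    return order
```

On a path this reduces to ordinary BFS; ties on more complicated graphs may be broken arbitrarily, as in the description above.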
Anstee and Farber [@AnFa] established that bridged graphs are cop-win graphs. Chepoi [@Ch_bridged] noticed that any order of a bridged graph returned by BFS is a dismantling order. Namely, he showed a stronger result: [*for any two adjacent vertices $v_i,v_j$ with $i<j,$ their fathers $f(v_i),f(v_j)$ either coincide or are adjacent and moreover $f(v_j)$ is adjacent to $v_i$.*]{} This property implies that bridged graphs admit a geodesic 1-combing and that the shortest paths participating in this combing are the paths to the root $u$ of the BFS tree $T_u$ [@Ch_CAT]. Similar results have been established in [@Ch_dpo] for larger classes of weakly modular graphs by using LexBFS instead of BFS.
Notice that the notions of dismantlable graph, BFS, and LexBFS can be extended in a straightforward way to all locally finite graphs. Polat [@Po; @Po1] defined dismantlability and BFS for arbitrary (not necessarily locally finite) graphs and extended the results of [@AnFa; @Ch_bridged] to all bridged graphs.
Group actions on simplicial complexes
-------------------------------------
Let $G$ be a group acting by automorphisms on a simplicial complex $\bold X$. By $\mathrm{Fix}_G{\bold X}$ we denote the *fixed point set* of the action of $G$ on $\bf X$, i.e., $\mathrm{Fix}_G{\bf X}=\left\{ x\in {\bf X} \mid Gx=\{x\}\right\}$. Recall that the action is *cocompact* if the orbit space $G\backslash {\bf X}$ is compact. The action of $G$ on a locally finite simplicial complex $\bf X$ is *properly discontinuous* if stabilizers of simplices are finite. Finally, the action is *geometric* (or $G$ *acts geometrically* on $\bf X$) if it is cocompact and properly discontinuous.
Characterizations of weakly systolic complexes {#char}
==============================================
We continue with the characterizations of weakly systolic complexes and their underlying graphs; some of those characterizations have been presented also in [@Osajda]. We denote by $C_k$ an induced $k$-cycle and by $W_k$ an induced $k$-wheel, i.e., an induced $k$-cycle $x_1,\ldots,x_k$ plus a central vertex $c$ adjacent to all vertices of $C_k.$ $W_k$ can also be viewed as a 2-dimensional simplicial complex consisting of $k$ triangles $\sigma_1,\ldots,\sigma_k$ sharing a common vertex $c$ and such that $\sigma_i$ and $\sigma_j$ intersect in an edge $x_ic$ exactly when $|j-i|=1\ ({\mathrm{mod}}\ k).$ In other words, lk$(c,{\bold X})=C_k.$ By $\widehat{W}_k$ we denote a $k$-wheel $W_k$ plus a triangle $ax_ix_{i+1}$ for some $i<k$ (we suppose that $a\ne c$ and that $a$ is not adjacent to any other vertex of $W_k$). We continue with a condition which basically characterizes weakly systolic complexes among simply connected flag simplicial complexes:
[$\widehat{W}_5$-condition]{}: [*for any $\widehat{W}_5,$ there exists a vertex $v\notin \widehat{W}_5$ such that $\widehat{W}_5$ is included in $\rm{lk}(v,{\bold X}),$ i.e., $v$ is adjacent in $G({\bold X})$ to all vertices of $\widehat{W}_5$*]{} (see Fig. \[5-wheel\]).
\[weakly-systolic\] For a flag simplicial complex $\bold X$ the following conditions are equivalent:
- ${\bold X}$ is weakly systolic;
- $\bold X$ satisfies the vertex condition (V) and the edge condition (E);
- $G({\bold X})$ is a weakly modular thin graph;
- $G({\bold X})$ is a weakly modular graph without induced $C_4$;
- $G({\bold X})$ is a weakly modular graph with convex balls;
- $G({\bold X})$ is a graph with convex balls in which any $C_5$ is included in a 5-wheel $W_5$;
- $\bold X$ is simply connected, satisfies the $\widehat{W}_5$-condition, and does not contain induced $C_4.$
The implications (i)$\Rightarrow$(ii) and (iii)$\Rightarrow$(iv) are obvious.
(ii)$\Rightarrow$(iii): The condition (V) implies that all vertices of $I(u,v)$ adjacent to $v$ are pairwise adjacent, i.e., that $G({\bold X})$ is thin. On the other hand, from the condition (E) we conclude that if $1=d(v,w)<d(u,v)=d(u,w)=i+1,$ then $v$ and $w$ have a common neighbor $x$ in the sphere $S_{i}(u),$ implying the triangle condition. Finally, in thin graphs the quadrangle condition is automatically satisfied. This shows that $G({\bold X})$ is a weakly modular thin graph.
(iv)$\Rightarrow$(v): To show that any ball $B_i(u)$ is convex in $G({\bold X}),$ since $G({\bold X})$ is weakly modular and $B_i(u)$ induces a connected subgraph, according to [@Ch_triangle] it suffices to show that the ball $B_i(u)$ is locally convex, i.e., [*if $x,y\in B_i(u)$ and $d(x,y)=2,$ then $I(x,y)\subseteq B_i(u).$*]{} Suppose by way of contradiction that $z\in I(x,y)\setminus B_i(u).$ Then necessarily $d(x,u)=d(y,u)=i$ and $d(z,u)=i+1.$ Applying the quadrangle condition, we infer that there exists a vertex $z'$ adjacent to $x$ and $y$ at distance $i-1$ from $u.$ As a result, the vertices $x,z,y,z'$ induce a forbidden 4-cycle, a contradiction.
(v)$\Rightarrow$(vi): Pick a 5-cycle induced by the vertices $x_1,x_2,x_3,x_4,x_5.$ Since $d(x_4,x_1)=d(x_4,x_2)=2,$ by the triangle condition there exists a vertex $y$ adjacent to $x_1,x_2,$ and $x_4.$ Since $G({\bold X})$ does not contain induced 4-cycles, necessarily $y$ must be also adjacent to $x_3$ and $x_5,$ yielding a 5-wheel.
(v)$\Rightarrow$(i): Pick a simplex $\sigma$ in the sphere $S_{i+1}(u).$ Denote by $\sigma_0$ the set of all vertices $x\in S_i(u)$ such that $\sigma\cup \{ x\}$ is a simplex of $\bold X$. Since the balls of $G$ are convex, necessarily either $\sigma_0$ is empty or the vertices of $\sigma_0$ are pairwise adjacent, thus $\sigma_0$ and $\sigma\cup \sigma_0$ induce complete subgraphs of $G({\bold X}).$ Since $\bold X$ is a flag complex, $\sigma_0$ and $\sigma\cup \sigma_0$ are simplices. Notice that obviously $\sigma'\subseteq \sigma_0$ holds for any other simplex $\sigma'\subseteq S_i(u)$ such that $\sigma\cup\sigma'\in {\bold X}.$ Therefore, it remains to show that $\sigma_0$ is non-empty. Let $x$ be a vertex of $S_i(u)$ which is adjacent to the maximum number of vertices of $\sigma.$ Since $G({\bold X})$ is weakly modular and $\sigma$ is contained in $S_{i+1}(u)$, the vertex $x$ must be adjacent to at least two vertices of $\sigma.$ Suppose by way of contradiction that $x$ is not adjacent to a vertex $v\in \sigma.$ Pick any neighbor $w$ of $x$ in $\sigma.$ By the triangle condition, there exists a vertex $y\in S_i(u)$ adjacent to $v$ and $w.$ Since $w$ is adjacent to $x,y\in S_i(u)$ and $w\in S_{i+1}(u),$ the convexity of $B_i(u)$ implies that $x$ and $y$ are adjacent. Pick any other vertex $w'$ of $\sigma$ adjacent to $x.$ Since $x$ is not adjacent to $v$ and $G({\bold X})$ does not contain induced 4-cycles, the vertices $y$ and $w'$ must be adjacent. Hence, $y$ is adjacent to $v\in \sigma$ and to all neighbors of $x$ in $\sigma,$ contrary to the choice of $x.$ Thus $x$ is adjacent to all vertices of $\sigma,$ i.e., $\sigma_0\ne \emptyset.$ This shows that $\bold X$ satisfies the $SD_n(u)$ property.
(vi)$\Rightarrow$(vii): To show that a flag complex ${\bold X}$ is simply connected, it suffices to prove that every simple cycle in the underlying graph of $\bold X$ is a modulo 2 sum of its triangular faces. Notice that the isometric cycles of an arbitrary graph $G$ constitute a basis of cycles of $G$. Since $G({\bold X})$ is a graph with convex balls, the isometric cycles of $G({\bold X})$ have length 3 or 5 [@FaJa; @SoCh]. By (vi), any 5-cycle $C$ of $G({\bold X})$ extends to a 5-wheel, thus $C$ is a modulo 2 sum of triangles. Hence ${\bold X}$ is indeed simply connected. That $\bold X$ does not contain induced 4-cycles and 4-wheels follows from the convexity of balls. Finally, pick an extended 5-wheel $\widehat{W}_5:$ let $x_1,x_2,x_3,x_4,x_5$ be the vertices of the 5-cycle, $y$ be the center of the 5-wheel, and $x_1,x_2,z$ be the vertices of the pendant triangle. Since $x_3$ and $x_5$ are not adjacent and the balls of $G({\bold X})$ are convex, necessarily $d(z,x_4)=2.$ Let $u$ be a common neighbor of $z$ and $x_4.$ If $u$ is adjacent to one of the vertices $x_2$ and $x_3,$ then by convexity of balls it will also be adjacent to the second vertex and to $y$. But if $u$ is adjacent to $y,$ then it will be adjacent to $x_1$ and therefore to $x_5$ as well. Hence, in this case $u$ will be adjacent to all vertices $x_1,x_2,x_3,x_4,x_5,$ and $y,$ and we are done. So, we can suppose that $u$ is not adjacent to any of the vertices $x_1,x_2,x_3,x_5,$ and $y.$ As a result, we obtain two 5-cycles induced by the vertices $z,x_2,x_3,x_4,u$ and $z,x_1,x_5,x_4,u.$ Each of these cycles extends to a 5-wheel. Let $v$ be the center of the 5-wheel extending the first cycle. To avoid a 4-cycle induced by the vertices $x_2,v,x_4,y,$ the vertices $v$ and $y$ must be adjacent. Subsequently, to avoid a 4-cycle induced by the vertices $y,v,z,x_1,$ the vertices $v$ and $x_1$ must be adjacent.
Finally, to avoid a 4-cycle induced by $x_1,v,x_4,x_5,$ the vertices $v$ and $x_5$ must be adjacent. This way, we obtain that $v$ is adjacent to all six vertices of $\widehat{W}_5$, establishing the $\widehat{W}_5$-condition.
(vii)$\Rightarrow$(iv): To prove this implication, as in [@Ch_CAT], we will use minimal disk diagrams. Let ${\mathcal D}$ and ${\bold X}$ be two simplicial complexes. A map $\varphi:V({\mathcal D})\rightarrow V({\bold X})$ is called [*simplicial*]{} if $\varphi(\sigma)\in {\bold X}$ for all $\sigma\in {\mathcal D}.$ If ${\mathcal D}$ is a planar triangulation (i.e., the 1–skeleton of ${\mathcal D}$ is an embedded planar graph all of whose interior 2–faces are triangles) and $C=\varphi(\partial {\mathcal D}),$ then $({\mathcal D},\varphi)$ is called a [*singular disk diagram*]{} (or Van Kampen diagram) for $C$ (for more details see [@LySch Chapter V]). According to Van Kampen’s lemma ([@LySch], pp. 150–151), for every cycle $C$ of a simply connected simplicial complex one can construct a singular disk diagram. A singular disk diagram with no cut vertices (i.e., its 1–skeleton is 2–connected) is called a [*disk diagram.*]{} A [*minimal (singular) disk*]{} of $C$ is a (singular) disk diagram ${\mathcal D}$ of $C$ with a minimum number of 2–faces. This number is called the [*(combinatorial) area*]{} of $C$ and is denoted Area$(C).$ The minimal disk diagrams $({\mathcal D},\varphi)$ of simple cycles $C$ in 1–skeletons of simply connected simplicial complexes have the following properties [@Ch_CAT]: (1) $\varphi$ bijectively maps $\partial {\mathcal D}$ to $C$ and (2) the image of a 2–simplex of $\mathcal D$ under $\varphi$ is a 2–simplex, and two adjacent 2–simplices of $\mathcal D$ have distinct images under $\varphi.$
Let $C$ be a simple cycle in the underlying graph $G({\bold X})$ of a flag simplicial complex $\bold X$ satisfying the condition (vii).
[**Claim 1:**]{} [*If $C$ has length 5, then the minimal disk diagram of $C$ is a 5-wheel. Otherwise, $C$ admits a minimal disk diagram $\mathcal D$ which is a systolic complex, i.e., a plane triangulation all of whose inner vertices have degree $\ge 6.$*]{}
[**Proof of Claim 1:**]{} First we show that any minimal disk diagram $\mathcal D$ of $C$ does not contain interior vertices of degrees 3 and 4. Let $x$ be any interior vertex of $\mathcal D$. Let $x_1,\ldots,x_k$ be the neighbors of $x,$ where $\sigma_i=xx_ix_{i+1\,({\mathrm{mod}}\ k)}$ $(i=1,\ldots,k)$ are the faces incident to $x.$ Trivially, $k\geq 3.$ Suppose by way of contradiction that $k\leq 4.$ By properties of minimal disk diagrams, $\varphi(\sigma_1),\ldots,\varphi(\sigma_k)$ are distinct 2–simplices of ${\bold X}.$ If $k=3$ then the 2-simplices $\varphi(\sigma_1),\varphi(\sigma_2),\varphi(\sigma_3)$ of $\bold X$ intersect in $\varphi(x)$ and pairwise share an edge of ${\bold X}.$ Therefore they are contained in a 3–simplex of ${\bold X}.$ This implies that $\delta=\varphi(x_1)\varphi(x_2)\varphi(x_3)$ is a 2–face of ${\bold X}.$ Let ${\mathcal D}'$ be a disk triangulation obtained from ${\mathcal D}$ by deleting the vertex $x$ and the triangles $\sigma_1, \sigma_2,\sigma_3,$ and adding the 2–simplex $x_1x_2x_3.$ The map $\varphi: V({\mathcal D}')\rightarrow V({\bold X})$ is simplicial, because it maps $x_1x_2x_3$ to $\delta.$ Therefore $({\mathcal D}',\varphi)$ is a disk diagram for $C,$ contrary to the minimality of ${\mathcal D}.$ Now suppose that $x$ has four neighbors. The cycle $(x_1,x_2,x_3,x_4,x_1)$ is sent to a 4–cycle of lk$(\varphi(x),{\bold X}),$ in which two opposite vertices, say $\varphi(x_1)$ and $\varphi(x_3),$ are adjacent. Consequently, $\delta'=\varphi(x_1)\varphi(x_3)\varphi(x_2)$ and $\delta''=\varphi(x_1)\varphi(x_3)\varphi(x_4)$ are 2–faces of ${\bold X}.$ Let ${\mathcal D}'$ be a disk triangulation obtained from ${\mathcal D}$ by deleting the vertex $x$ and the triangles $\sigma_i (i=1,\ldots,4),$ and adding the 2–simplices $\sigma'=x_1x_3x_2$ and $\sigma''=x_1x_3x_4.$ The map $\varphi$ remains simplicial, since it sends $\sigma',\sigma''$ to $\delta',\delta'',$ respectively, contrary to the minimality of ${\mathcal D}.$ This establishes that the degree of each interior vertex $x$ of any minimal disk diagram is $\ge 5.$
Suppose now additionally that $\mathcal D$ is a minimal disk diagram of $C$ having a minimum number of inner vertices of degree 5. With some abuse of notation, we will denote the vertices of $\mathcal D$ and their images in $\bold X$ under $\varphi$ by the same symbols. Let $x$ be any interior vertex of $\mathcal D$ of degree 5 and let $x_1,\ldots,x_5$ be the neighbors of $x.$ If $C=(x_1,x_2,x_3,x_4,x_5,x_1),$ then we are done because $\mathcal D$ is a 5-wheel. Now suppose that one of the edges of the 5-cycle $(x_1,x_2,x_3,x_4,x_5,x_1),$ say $x_1x_2$, belongs in ${\mathcal D}$ to a second triangle $x_1x_2x_6.$ The minimality of $\mathcal D$ implies that $x_1,x_2,x_3,x_4,x_5,x_6$ induce in $\bold X$ a $\widehat{W}_5.$ By the $\widehat{W}_5$-condition, there exists a vertex $y$ of $\bold X$ which is adjacent to all vertices of this $\widehat{W}_5.$ Let ${\mathcal D}'$ be a disk triangulation obtained from ${\mathcal D}$ by deleting the vertex $x$ and the five triangles incident to $x$ as well as the triangle $x_1x_2x_6$ and replacing them by the six triangles of the resulting 6-wheel centered at $y$ (we call this operation a [*flip*]{}). The resulting map $\varphi$ remains simplicial. ${\mathcal D}'$ has the same number of triangles as $\mathcal D,$ therefore ${\mathcal D}'$ is also a minimal disk diagram for $C.$ The flip replaces the vertex $x$ of degree 5 by the vertex $y$ of degree 6; it preserves the degrees of all other vertices except the vertices $x_1$ and $x_2,$ whose degrees decrease by 1. If the degrees of $x_1$ and $x_2$ in $\mathcal D$ are $\ge 7,$ then we will obtain a contradiction with the minimality choice of $\mathcal D$. The same contradiction is obtained when the degree of $x_1$ and/or $x_2$ is at most 6 but the respective vertex belongs to $\partial {\mathcal D}$. So suppose that $x_1$ is an interior vertex of $\mathcal D$ and that its degree is at most 6.
If the degree of $x_1$ is 5, then in ${\mathcal D}'$ the degree of $x_1$ will be 4, which is impossible by what has been shown above because ${\mathcal D}'$ is also a minimal disk diagram and $x_1$ is an interior vertex of ${\mathcal D}'.$ Hence the degree of $x_1$ in $\mathcal D$ is 6 and its neighbors constitute an induced 6-cycle $(x_6,x_2,x,x_5,u,v,x_6).$ Using the fact established above that minimal disk diagrams for $C$ do not contain interior vertices of degrees 3 and 4, and the fact that $\bold X$ does not contain induced $C_4,$ it can be easily shown that the vertices $u,v,x_6,x_2,x_3,x_4,x_5,x_1,x$ induce in $\bold X$ the same subgraph as in $\mathcal D:$ a 6-wheel centered at $x_1$ plus a 5-wheel centered at $x$ which share the two triangles $xx_1x_5$ and $xx_1x_2.$ The images in $\bold X$ of the vertices $x_5,y,x_6,v,u,x_1,x_4$ induce a $\widehat{W}_5$ constituted by a 5-wheel centered at $x_1$ and the pendant triangle $x_4yx_5.$ By the $\widehat{W}_5$-condition, there exists a vertex $z$ of $\bold X$ which is adjacent to all vertices of $\widehat{W}_5.$ If $z$ is adjacent in $\bold X$ to all vertices of the 7-cycle $(u,v,x_6,x_2,x_3,x_4,x_5,u),$ then replacing in $\mathcal D$ the 9 triangles incident to $x$ and $x_1$ by the 7 triangles of $\bold X$ incident to $z,$ we will obtain a disk diagram ${\mathcal D}''$ for $C$ having fewer triangles than $\mathcal D$, contrary to the minimality of $\mathcal D$.
Therefore $z$ is different from $x$ and is not adjacent to one of the vertices $x_2,x_3.$ Since $x_1$ and $x_4$ are not adjacent and both $x$ and $z$ are adjacent to $x_1,x_4,$ to avoid an induced $C_4$ we conclude that $z$ is adjacent in $\bold X$ to $x.$ If $z$ is not adjacent to $x_2,$ then, since $x$ and $x_6$ are not adjacent, we obtain a $C_4$ induced by $x,z,x_6,x_2.$ Thus $z$ is adjacent to $x_2,$ and therefore $z$ is not adjacent to $x_3.$ Since both $z$ and $x_3$ are adjacent to the nonadjacent vertices $x_2$ and $x_4,$ we obtain a $C_4$ induced by $z,x_2,x_3,x_4.$ This final contradiction shows that all interior vertices of $\mathcal D$ have degrees $\ge 6,$ establishing Claim 1.
From Claim 1 we deduce that any simple cycle $C$ of the underlying graph of $\bold X$ admits a minimal disk diagram $\mathcal D$ which is either a 5-wheel or a systolic plane triangulation. We will call a [*corner*]{} of $\mathcal D$ any vertex $v$ of $\partial {\mathcal D}$ which belongs in $\mathcal D$ either to a unique triangle (first type) or to two triangles (second type). The corners of first type are the boundary vertices of degree two. The corners of second type are boundary vertices of degree three. In the first case, the two neighbors of $v$ are adjacent. In the second case, $v$ and its neighbors in $\partial {\mathcal D}$ are adjacent to the third neighbor of $v.$ From the Gauss–Bonnet formula and Claim 1 we infer that $\mathcal D$ contains at least three corners, and if $\mathcal D$ has exactly three corners then they are all of first type. Furthermore, if $\mathcal D$ contains four corners, then at least two of them are corners of first type.
Next we show that $G({\bold X})$ is weakly modular. To verify the triangle condition, pick three vertices $u,v,w$ with $1=d(v,w)<d(u,v)=d(u,w)=k.$ We claim that if $I(u,v)\cap I(u,w)=\{ u\},$ then $k=1.$ Suppose not. Pick two shortest paths $P'$ and $P''$ joining the pairs $u,v$ and $u,w,$ respectively, such that the cycle $C$ composed of $P',P''$ and the edge $vw$ has minimum Area$(C)$ (the choice of $v,w$ implies that $C$ is a simple cycle). Let $\mathcal D$ be a minimal disk diagram of $C$ satisfying Claim 1. Then either $\mathcal D$ has a corner $x$ different from $u,v,w$ or the vertices $u,v,w$ are the only corners of $\mathcal D.$ In the second case, $u,v,w$ are all three corners of first type, therefore the two neighbors of $v$ in $C$ will be adjacent. This means that $w$ will be adjacent to the neighbor of $v$ in $P',$ contrary to $I(u,v)\cap I(u,w)=\{ u\}.$ So, suppose that the corner $x$ exists; without loss of generality, let $x\in P'.$ Notice that $x$ is a corner of second type, otherwise its neighbors $y,z$ in $P'$ would be adjacent, contrary to the assumption that $P'$ is a shortest path. Let $p$ be the vertex of $\mathcal D$ adjacent to $x,y,z.$ If we replace in $P'$ the vertex $x$ by $p,$ we obtain a new shortest path between $u$ and $v.$ Together with $P''$ and the edge $vw$ this path forms a cycle $C'$ whose area is strictly smaller than Area$(C),$ contrary to the choice of $C.$ This establishes the triangle condition. As to the quadrangle condition, suppose by way of contradiction that we can find distinct vertices $u,v,w,z$ such that $v,w\in I(u,z)$ are neighbors of $z$ and $I(u,v)\cap I(u,w)=\{u\},$ while $u$ is adjacent to neither $v$ nor $w.$ Again, select two shortest paths $P'$ and $P''$ between $u,v$ and $u,w,$ respectively, so that the cycle $C$ composed of $P',P''$ and the edges $vz$ and $zw$ has minimum area. Choose a minimal disk diagram $\mathcal D$ of $C$ as in Claim 1.
From the initial hypothesis concerning the vertices $u,v,w,z$ we deduce that $\mathcal D$ has at most one corner of first type, located at $u.$ Since $\mathcal D$ cannot have only three corners (they would all be of first type) or exactly four corners (at least two of them would be of first type), $\mathcal D$ contains at least four corners of second type. Since at least one such corner $x$ is distinct from $u,v,w,z,$ proceeding in the same way as before we obtain a contradiction with the choice of the paths $P',P''.$ This shows that $u$ is adjacent to $v,w,$ establishing the quadrangle condition. Hence $G({\bold X})$ is a weakly modular graph without induced $C_4,$ concluding the proof of the implication (vii)$\Rightarrow$(iv) and of the theorem.
Properties of weakly systolic complexes {#prop}
=======================================
In this section, we establish some further combinatorial and geometrical properties of weakly systolic complexes, which are well known for systolic complexes [@Ha; @JanSwi]. In particular, we show that weakly systolic complexes $\bold X$ satisfy the $SD_n(\sigma^*)$ property for facets $\sigma^*$ (inclusion-maximal simplices of $\bold X$).
\[edge\_desc\] Let $\sigma$ be a simplex of a weakly systolic complex $\bold X$. Let $e=zz'$ be an edge contained in the sphere $S_i(\sigma).$ Then there exists a vertex $w\in \sigma$ and a vertex $v\in B_{i-1}(\sigma)$ such that $v$ is adjacent to $z,z'$ and $d(v,w)=i-1.$
If there exists a vertex $w\in \sigma$ such that $z,z'\in S_i(w),$ then the assertion follows from the $SD_n$ property. Therefore suppose that no such vertex exists. Let $w,w'$ be two vertices of $\sigma$ with $d(w,z)=d(w',z')=i.$ Since $d(w',z)=d(w,z')=i+1,$ we conclude that $z\in I(w,z').$ Since $w,z'\in B_i(w')$ and $z\notin B_i(w'),$ this contradicts the convexity of $B_i(w').$
\[bbac\] Let $\sigma$ be a simplex of a weakly systolic complex $\bold X$ and let $i\ge 2.$ Then the ball $B_i(\sigma)$ is convex. In particular, $B_i(\sigma)\cap N(z)$ is a simplex for any vertex $z\in S_{i+1}(\sigma)$.
Since $G({\bold X})$ is weakly modular and $B_i(\sigma)$ induces a connected subgraph, according to [@Ch_triangle] it suffices to show that $B_i(\sigma)$ is locally convex, i.e., if $x,y\in B_i(\sigma),$ $d(x,y)=2,$ and $z$ is a common neighbor of $x$ and $y,$ then $z\in B_i(\sigma)$. Suppose by way of contradiction that $z\in S_{i+1}(\sigma).$ Let $u$ and $v$ be vertices of $\sigma$ located at distance $i$ from $x$ and $y,$ respectively. If $u=v,$ then $x,y\in I(u,z),$ thus $x$ and $y$ must be adjacent because $G({\bold X})$ is thin. So, suppose that $u\ne v,$ i.e., $d(y,u)=d(z,u)=i+1$ holds. By the triangle condition there exists a common neighbor $w$ of $z$ and $y$ having distance $i$ to $u$. Since $x,w\in I(z,u)$ and $G({\bold X})$ is thin, the vertices $x$ and $w$ are adjacent; moreover, by the triangle condition, there exists a common neighbor $u'$ of $w$ and $x$ having distance $i-1$ to $u.$ If $d(w,v)=i+1,$ then $y,u'\in I(w,v),$ thus $y$ and $u'$ must be adjacent because $G({\bold X})$ is thin. As a result, we obtain a 4-cycle defined by $x,z,y,u'.$ Since $d(z,u)=i+1$ and $d(u',u)=i-1,$ the vertices $z$ and $u'$ cannot be adjacent, thus this 4-cycle must be induced, which is impossible. Hence $d(w,v)=i.$ Let $u''$ be a neighbor of $u$ in the interval $I(u,u')$ (it is possible that $u''=u'$). Since $d(y,u)=i+1,$ $d(u',u)=i-1,$ and $d(u',y)=2,$ we conclude that $u'\in I(y,u),$ yielding $u''\in I(u,u')\subset I(u,y).$ Since $v$ also belongs to $I(u,y)$ and $G({\bold X})$ is thin, the vertices $u''$ and $v$ must be adjacent. But in this case $d(x,v)\le 1+d(u',u'')+1=i,$ contrary to the assumption that $d(x,v)=i+1.$ This contradiction shows that $B_i(\sigma)$ is convex for any $i\ge 2.$
\[SD\_n-max\] A weakly systolic complex $\bold X$ satisfies the property $SD_n(\sigma^*)$ for any maximal simplex $\sigma^*$ of $\bold X$.
Let $\sigma$ be a simplex of $\bold X$ located in the sphere $S_{i+1}(\sigma^*).$ For each vertex $v\in \sigma$ denote by $\sigma^*(v)$ the metric projection of $v$ in $\sigma^*,$ i.e., the set of all vertices of $\sigma^*$ located at distance $i+1$ from $v.$ Notice that the sets $\sigma^*(v)$ $(v\in \sigma)$ can be linearly ordered by inclusion. Indeed, if we suppose the contrary, then there exist two vertices $v',v''\in \sigma$ and vertices $u'\in \sigma^*(v')\setminus \sigma^*(v'')$ and $u''\in \sigma^*(v'')\setminus \sigma^*(v').$ This is impossible because in this case $v''\in I(v',u'')\setminus B_{i+1}(u')$ and $v',u''\in B_{i+1}(u'),$ contrary to the convexity of $B_{i+1}(u').$ This means that $\sigma\subset S_{i+1}(u)$ holds for any vertex $u$ belonging to the intersection $\sigma^*_0=\cap\{ \sigma^*(v): v\in \sigma\}$ of all metric projections. Applying the $SD_n(u)$ property to $\sigma$ we conclude that the set of all vertices $x\in S_i(u)\subseteq S_i(\sigma^*)$ adjacent to all vertices of $\sigma$ is a non-empty simplex. Pick two vertices $x,y\in S_i(\sigma^*)$ adjacent to all vertices of $\sigma.$ Let $x\in S_i(u)$ and $y\in S_i(w)$ for $u,w\in \sigma^*_0.$ We assert that $x$ and $y$ are adjacent. Let $v$ be a vertex of $\sigma$ whose projection $\sigma^*(v)$ is maximal by inclusion. If $\sigma^*(v)=\sigma^*,$ then applying the $SD_n(v)$ property we conclude that there exists a vertex $v'$ at distance $i$ from $v$ and adjacent to all vertices of $\sigma^*,$ contrary to the maximality of $\sigma^*.$ Hence $\sigma^*(v)$ is a proper face of $\sigma^*.$ Let $s\in \sigma^*\setminus \sigma^*(v).$ Then $x,y\in I(v,s)$ and, since $G({\bold X})$ is thin, the vertices $x$ and $y$ must be adjacent.
We conclude this section by showing that the systolic complexes are exactly the flag complexes satisfying $SD_n(\sigma^*)$ for all simplices $\sigma^*.$
\[systolic\] A simplicial flag complex $\bold X$ is systolic if and only if $\bold X$ satisfies the property $SD_n(\sigma^*)$ for all simplices $\sigma^*$ of ${\bold X}$ and all $n\ge 0.$
If $\bold X$ satisfies the property $SD_n(v)$ for all vertices, then $\bold X$ is weakly systolic by Theorem \[weakly-systolic\]. Since $\bold X$ satisfies the property $SD_n(e)$ for all edges, $\bold X$ does not contain 5-wheels. Hence $\bold X$ is systolic. Conversely, let $\sigma^*$ be an arbitrary simplex of a systolic complex $\bold X$ and let $\sigma$ be a simplex belonging to $S_{i+1}(\sigma^*).$ Since $B_i({\sigma^*})$ is convex because $G({\bold X})$ is bridged, the set $\sigma_0$ of all vertices $x\in B_i(\sigma^*)$ such that $\sigma\cup\{ x\}\in {\bold X},$ if nonempty, is necessarily a simplex. Thus it suffices to show that $\sigma_0\ne\emptyset.$ As in the previous proof, for each vertex $v\in \sigma$ denote by $\sigma^*(v)$ the metric projection of $v$ in $\sigma^*.$ Then, as we showed in the proof of Proposition \[SD\_n-max\], the sets $\sigma^*(v)$ can be linearly ordered by inclusion. Therefore there exists a vertex $u\in \sigma^*$ belonging to all projections $\sigma^*(v), v\in \sigma.$ Then $\sigma\subset S_{i+1}(u),$ whence $\sigma_0$ is nonempty because of the $SD_n(u)$ property.
Dismantlability of weakly bridged graphs {#dismantlability}
========================================
In this section, we show that the underlying graphs of weakly systolic complexes are dismantlable and that a dismantling order can be obtained using LexBFS. Then we use this result to deduce several consequences about combings of weakly bridged graphs and about the collapsibility of weakly systolic complexes. Other consequences of dismantling are given in subsequent sections.
\[dismantle\] Any LexBFS ordering of a locally finite weakly bridged graph $G=G({\bold X})$ is a dismantling ordering. In particular, locally finite weakly systolic complexes ${\bold X}$ and their Rips complexes ${\bold X}_k$ are LC-contractible and therefore collapsible.
We will establish the result for finite weakly bridged graphs and finite weakly systolic complexes. The proof in the locally finite case is completely similar. Let $v_n,\ldots,v_1$ be the total order returned by LexBFS starting from the basepoint $u=v_n.$ Let $G_i$ be the subgraph of $G$ induced by the vertices $v_n,\ldots,v_i.$ For a vertex $v\ne u$ of $G,$ denote by $f(v)$ its father in the LexBFS tree $T_u,$ by $L(v)$ the list of all neighbors of $v$ labeled before $v,$ and by $\alpha(v)$ the label of $v$ (i.e., if $v=v_i,$ then $\alpha(v)=i$). We decompose the list $L(v)$ of each vertex $v$ into two parts $L'(v)$ and $L''(v):$ if $d(v,u)=i,$ then $L'(v)=L(v)\cap S_{i-1}(u)$ and $L''(v)=L(v)\cap S_i(u).$ Notice that in the lexicographic order of $L(v),$ all vertices of $L'(v)$ precede the vertices of $L''(v);$ in particular, the father of $v$ belongs to $L'(v).$ The proof of the theorem is a consequence of the following assertion, which we call the [*fellow traveler property*]{}:
[**Fellow Traveler Property:**]{} [*If $v,w$ are adjacent vertices of $G,$ then their fathers $v'=f(v)$ and $w'=f(w)$ either coincide or are adjacent. If $v'$ and $w'$ are adjacent and $\alpha(w)<\alpha(v),$ then $w'$ is adjacent to $v$ and $v'$ is not adjacent to $w.$*]{}
Indeed, if this assertion holds, then we claim that $v_n,\ldots,v_1$ is a dismantling order. To see this, it suffices to show that any vertex $v_i$ is dominated in $G_i$ by its father $f(v_i)$ in the LexBFS tree $T_u.$ Pick any neighbor $v_j$ of $v_i$ in $G_i.$ We assert that $v_j$ coincides with or is adjacent to $f(v_i).$ This is obviously true if $f(v_j)=f(v_i).$ Otherwise, if $f(v_i)\ne f(v_j),$ then the Fellow Traveler Property implies that $f(v_i)$ and $f(v_j)$ are adjacent and, since $i<j,$ that $v_j$ is adjacent to $f(v_i).$ This shows that indeed any LexBFS order is a dismantling order.
We will now establish the Fellow Traveler Property by induction on $i+1:=\max\{ d(u,v),d(u,w)\}.$ First suppose that $d(u,v)<d(u,w).$ Then $v,w'\in I(w,u)$ and, since $G$ is thin, we conclude that $v$ and $w'$ either coincide or are adjacent. In the first case we are done because $v$ (and therefore $w'$) is adjacent to its father $v'=f(v)$. If $v$ and $w'$ are adjacent, since $i=d(u,v)=d(u,w'),$ the vertices $v'$ and $f(w')$ coincide or are adjacent by the induction assumption. Again, if $v'=f(w'),$ the assertion is immediate. Now suppose that $v'$ and $f(w')$ are adjacent. Since $w'=f(w)$ was labeled before $v$ (otherwise the father of $w$ would be $v$ and not $w'$), $f(w')$ must be labeled before $v',$ therefore by the induction hypothesis we deduce that $v'=f(v)$ must be adjacent to $w'=f(w).$ This concludes the analysis of the case $d(u,v)<d(u,w).$
From now on, suppose that $d(u,v)=d(u,w)=i+1$ and that $\alpha(w)<\alpha(v).$ If the vertices $v'=f(v)$ and $w'=f(w)$ coincide, then we are done. If the vertices $v'$ and $w'$ are adjacent, then the vertices $v,w,w',v'$ define a 4-cycle, which, by the $SD_n$ property (see Theorem \[weakly-systolic\]), cannot be induced. Since $v$ was labeled before $w,$ the vertex $v'$ must be labeled before $w'.$ Therefore, if $v'$ were adjacent to $w,$ then LexBFS would label $w$ from $v'$ and not from $w',$ a contradiction. Thus $v'$ and $w$ are not adjacent, showing that $w'$ must be adjacent to $v,$ establishing the required assertion. So, assume by way of contradiction that the vertices $v'$ and $w'$ are not adjacent in $G.$ Then $w'$ is not adjacent to $v,$ otherwise $w',v'\in B_i(u)$ and $v\in I(v',w')\cap S_{i+1}(u),$ contrary to the convexity of the ball $B_i(u)$ (similarly, $v'$ is not adjacent to $w$).
Since $G$ is weakly modular by Theorem \[weakly-systolic\], by the triangle condition applied to the vertices $v,w,$ and $u,$ there exists a common neighbor $s$ of $v$ and $w$ located at distance $i$ from $u.$ Denote by $S$ the set of all such vertices $s.$ From the property $SD_n$ we infer that $S$ is a simplex of $\bold X$ (i.e., its vertices are pairwise adjacent in $G$). Since $v'$ and $w'$ do not belong to $S,$ necessarily all vertices of $S$ have been labeled later than $v'$ and $w'$ (but obviously before $v$ and $w$). Pick a vertex $s$ in $S$ with the largest label $\alpha(s)$ and set $z:=f(s).$ By the induction assumption applied to the pairs of adjacent vertices $\{ v',s\}$ and $\{ s,w'\}$, we conclude that the vertices of each of the pairs $\{ f(v'),z\}$ and $\{ z,f(w')\}$ either coincide or are adjacent. Moreover, in all cases, the vertex $z$ must be adjacent to the vertices $v'$ and $w'.$
[**Claim 1:**]{} $C:=L'(v')=L'(s)=L'(w')$ and $z$ is the father of $v'$ and $w'.$
[**Proof of Claim 1:**]{} Since $s$ was labeled later than $v'$ and $w',$ it suffices to show that $L'(v')=L'(s).$ Indeed, if this is the case, then necessarily $z$ is the father of $v'.$ Then, as $z$ is adjacent to $w'$ and $\alpha(w')<\alpha(v'),$ necessarily $z$ is also the father of $w'.$ Now, if $L'(w')$ and $L'(s)=L'(v')$ do not coincide, then, since $L'(v')$ lexicographically precedes $L''(v')$ and $L'(w')$ precedes $L''(w'),$ the fact that LexBFS labeled $v'$ before $w'$ means that $L'(v')$ lexicographically precedes $L'(w').$ Since $L'(s)=L'(v'),$ necessarily LexBFS would have labeled $s$ before $w',$ a contradiction. This shows that $L'(s)=L'(v')$ implies the equality of the three lists $L'(v'),L'(s),$ and $L'(w').$
To show that $L'(v')=L'(s),$ since $\alpha(s)<\alpha(v'),$ it suffices to establish only the inclusion $L'(v')\subseteq L'(s).$ Suppose by way of contradiction that there exists a vertex in $L'(v')\setminus L'(s),$ i.e., a vertex $x\in S_{i-1}(u)$ which is adjacent to $v'$ but is not adjacent to $s.$ Let $x$ be the vertex of $L'(v')\setminus L'(s)$ having the largest label $\alpha(x).$ Since $s$ was labeled by LexBFS later than $v',$ necessarily any vertex of $L'(s)\setminus L'(v')$ must be labeled later than $x.$ Notice that $x$ cannot be adjacent to $w',$ otherwise we obtain an induced 4-cycle formed by the vertices $v',s,w',x.$ Since $x$ is not adjacent to $v,w,$ and $s,$ we conclude that the vertices $v,w,w',z,v',s,x$ induce an extended 5-wheel. By the $\widehat{W}_5$-condition, there exists a vertex $t$ adjacent to all vertices of this $\widehat{W}_5.$ Hence $t\in S.$ Further, $t$ must be adjacent to $z,$ otherwise we obtain a forbidden 4-cycle induced by the vertices $s,z,x,$ and $t.$ For the same reason, $t$ must be adjacent to any other vertex $z'$ belonging to $L'(v')\cap L'(s).$ This means that LexBFS will label $t$ before $s.$ Since $t$ belongs to $S$ and $\alpha(t)>\alpha (s),$ we obtain a contradiction with the choice of the vertex $s.$ This contradiction concludes the proof of Claim 1.
Since $v'$ and $w'$ are not adjacent and $G$ does not contain induced 4-cycles, any vertex $s'\ne s$ adjacent to $v'$ and $w'$ is also adjacent to $s.$ In particular, this shows that $L''(v')\cap L''(w')\subseteq L''(s).$ Therefore, if $L''(w')\subseteq L''(v'),$ then $L''(w')\subseteq L''(s).$ Since $v'\in L''(s)\setminus L''(w')$ and $L'(s)=L'(w')$ by Claim 1, we conclude that the vertex $s$ must be labeled before $w',$ contrary to the assumption that $\alpha(s)<\alpha(w').$ Therefore the set $B:=L''(w')\setminus L''(v')$ is nonempty. Since $v'$ was labeled before $w'$ and $L'(v')=L'(w')$ by Claim 1, we conclude that the set $A:=L''(v')\setminus L''(w')$ is nonempty as well. Let $p$ be the vertex of $A$ with the largest label $\alpha(p)$ and let $q$ be the vertex of $B$ with the largest label $\alpha(q).$ Since LexBFS labeled $v'$ before $w'$ and $L'(v')=L'(w')$ holds, necessarily $\alpha(q)<\alpha(p)$ holds. Since $p\in L''(v'),$ we obtain that $\alpha(w')<\alpha(v')<\alpha(p).$ Since $v'=f(v)$ and $w'=f(w),$ this shows that $p$ cannot be adjacent to the vertices $v$ and $w.$ If $s$ is adjacent to $p,$ then $p\in L''(s).$ But then from Claim 1 and the inclusion $L''(v')\cap L''(w')\subseteq L''(s)$ we would infer that LexBFS must label $s$ before $w',$ contrary to the assumption that $\alpha(s)<\alpha(w').$ Therefore $p$ is not adjacent to $s$ either. On the other hand, since $\alpha(v')<\alpha(p),$ by the induction hypothesis applied to the adjacent vertices $p$ and $v',$ we infer that $z=f(v')$ must be adjacent to $p.$ Hence the vertices $v,w,w',z,v',s,p$ induce an extended 5-wheel. By the $\widehat{W}_5$-condition, there exists a vertex $t$ adjacent to all these vertices. Since $C:=L'(v')=L'(w')$ and $d(v',w')=2,$ to avoid induced 4-cycles, the vertex $t$ must be adjacent to any vertex of $C.$ For the same reason, $t$ must be adjacent to any vertex of $L''(v')\cap L''(w').$ Since additionally $t$ is adjacent to the vertex $p$ of $A$ with the highest label, necessarily $t$ will be labeled by LexBFS before $w'$ and $s.$ Since $t$ is adjacent to $v$ and $w,$ this contradicts the assumption that $w'=f(w).$ This shows that the initial assumption that $v'$ and $w'$ are not adjacent leads to a final contradiction. Hence the order returned by LexBFS is indeed a dismantling order of the weakly bridged graph $G=G({\bold X}).$
To show that any finite weakly systolic complex ${\bold X}$ is LC-contractible it suffices to notice that, since ${\bold X}$ is a flag complex, the LC-contractibility of ${\bold X}$ is equivalent to the dismantlability of its graph $G({\bold X}),$ and hence the result follows from the first part of the theorem. To show that the Rips complex ${\bold X}_k$ is LC-contractible, since ${\bold X}_k$ is a flag complex, it suffices to show that its graph $G({\bold X}_k)$ is dismantlable. From the definition of ${\bold X}_k,$ the graph $G({\bold X}_k)$ coincides with the $k$th power $G^k$ of the underlying graph $G$ of $\bold X$. Now notice that if a vertex $v$ is dominated in $G$ by a vertex $u,$ then $u$ also dominates $v$ in the graph $G^k.$ Indeed, pick any vertex $x$ adjacent to $v$ in $G^k.$ Then $d(v,x)\le k$ in $G.$ Let $y$ be the neighbor of $v$ on some shortest path $P$ connecting the vertices $v$ and $x$ in $G.$ Since $u$ dominates $v,$ necessarily $u$ is adjacent to $y.$ Hence $d(u,x)\le k$ in $G,$ therefore $u$ is adjacent to $x$ in $G^k.$ This shows that $v$ is dominated by $u$ in the graph $G^k$ as well. Therefore the dismantling order of $G$ returned by LexBFS is also a dismantling order of $G^k,$ establishing that the Rips complex ${\bold X}_k$ is LC-contractible. This completes the proof of the theorem.
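The last step of the proof, that domination survives the passage to graph powers, is easy to test computationally on small examples. The following Python sketch (our own illustration, not part of the original argument; all helper names are ours) checks it on a path, where the endpoint $0$ is dominated by its neighbor $1$ in $G$ and remains dominated in $G^2$ and $G^3$.

```python
from collections import deque

def ball(adj, v, r):
    # vertices at distance <= r from v, by breadth-first search
    dist = {v: 0}
    queue = deque([v])
    while queue:
        x = queue.popleft()
        if dist[x] < r:
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
    return set(dist)

def power(adj, k):
    # k-th power G^k: x ~ y iff 0 < d_G(x, y) <= k
    return {v: ball(adj, v, k) - {v} for v in adj}

def dominates(adj, u, v):
    # u dominates v iff the closed neighborhood N[v] lies in N[u]
    return (adj[v] | {v}) <= (adj[u] | {u})

# path 0 - 1 - 2 - 3: vertex 0 is dominated by its neighbor 1
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

As the proof shows, the same implication holds for every graph and every $k$: any vertex within distance $k$ of the dominated vertex is within distance $k$ of the dominating one.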
BFS orderings of weakly bridged graphs do not always satisfy the property that each vertex is dominated by its father. For example, let $G$ be a 5-wheel whose vertices are labeled as in Fig. \[5-wheel\]. If BFS starts from the vertex $x_1$ and orders the remaining vertices as $x_2,x_5,c,x_3,x_4,$ then the father of the last vertex $x_4$ is $x_5$, however $x_5$ does not dominate $x_4$ in the whole graph. On the other hand, LexBFS starting from $x_1$ and continuing with $x_2$ necessarily will label the vertex $c$ next. As a consequence, $c$ will be the father of the last labeled vertex $x_4$ and obviously $c$ dominates $x_4.$ Nevertheless, the order $x_1,x_2,x_5,c,x_3,x_4$ returned by BFS is a domination order of the 5-wheel. Is this true for all weakly bridged graphs?
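The 5-wheel example is small enough to check mechanically. The sketch below (our own verification code, not taken from the paper) builds the 5-wheel of Fig. \[5-wheel\] and confirms that $x_5$ does not dominate $x_4$, that $c$ does, and that the BFS order $x_1,x_2,x_5,c,x_3,x_4$ is nevertheless a domination order.

```python
# 5-wheel: rim x1..x5 (a 5-cycle), hub c adjacent to all rim vertices
rim = ["x1", "x2", "x3", "x4", "x5"]
adj = {v: set() for v in rim + ["c"]}

def add_edge(a, b):
    adj[a].add(b)
    adj[b].add(a)

for i in range(5):
    add_edge(rim[i], rim[(i + 1) % 5])
    add_edge("c", rim[i])

def dominates(adj, u, v):
    # u dominates v iff the closed neighborhood N[v] lies in N[u]
    return (adj[v] | {v}) <= (adj[u] | {u})

def is_domination_order(adj, order):
    # removing vertices from the end of the order, every removed vertex
    # must be dominated by another vertex of the current subgraph
    live = {v: set(adj[v]) for v in adj}
    for v in reversed(order):
        if len(live) == 1:
            break
        if not any(dominates(live, u, v) for u in live if u != v):
            return False
        del live[v]
        for u in live:
            live[u].discard(v)
    return True
```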
\[Rips\] Graphs of Rips complexes ${\bold X}_k$ of locally finite systolic and weakly systolic complexes are dismantlable.
\[cop-win\] Finite weakly bridged graphs are cop-win.
For a locally finite weakly bridged graph $G$ and an integer $k$ denote by $G_k$ the subgraph of $G$ induced by the first $k$ labeled vertices in a LexBFS order, i.e., by the vertices of $G$ with the $k$ lexicographically largest labels.
\[G\_k\] Any $G_k$ is an isometric weakly bridged subgraph of $G.$
By Theorem \[dismantle\], LexBFS returns a dismantling order of $G$, hence any $G_k$ is an isometric subgraph of $G.$ Therefore $G_k$ is a thin graph, because any interval $I(x,y)$ in $G_k$ is contained in the interval of $G$ between $x$ and $y.$ Moreover, $G_k,$ as an isometric subgraph of $G$, does not contain isometric cycles of length $>5.$ Hence, by a result of [@SoCh; @FaJa], $G_k$ is a graph with convex balls. By Theorem \[weakly-systolic\](vi) it remains to show that any induced 5-cycle $C$ of $G_k$ is included in a 5-wheel. Suppose by the induction assumption that this is true for $G_{k-1}$; therefore we may assume that $C$ contains the last labeled vertex of $G_k,$ denote this vertex by $v.$ Let $x$ and $y$ be the neighbors of $v$ in $C.$ Let $v'=f(v)$ be the vertex (of $G_k$) dominating $v$ in $G_k.$ Since $C$ is induced, necessarily $v'$ is adjacent to $x$ and $y$ but different from these vertices. Denote by $C'$ the 5-cycle obtained by replacing in $C$ the vertex $v$ by $v'.$ If $C'$ is not induced, then $v'$ will be adjacent to a third vertex of $C,$ and since $G_k$ does not contain induced 4-cycles, $v'$ will be adjacent to all vertices of $C,$ showing that $C$ extends to a 5-wheel. So, suppose that $C'$ is induced. Applying the induction hypothesis to $G_{k-1},$ we conclude that $C'$ extends to a 5-wheel in $G_{k-1}.$ Let $w$ be the central vertex of this wheel. To avoid a 4-cycle induced by the vertices $x,y,v,$ and $w,$ necessarily $v$ and $w$ must be adjacent. Hence $C$ extends in $G_k$ to a 5-wheel centered at $w.$ This establishes that indeed $G_k$ is weakly bridged.
A [*homomorphism*]{} of a graph $G=(V,E)$ to itself is a mapping $\varphi: V\rightarrow V$ such that for any edge $uv\in E$ we have $\varphi(u)\varphi(v)\in E$ or $\varphi(u)=\varphi(v).$ A set $S\subset V$ is fixed by $\varphi$ if $\varphi(S)=S.$ A [*simplicial map*]{} on a simplicial complex ${\bold X}$ is a map $\varphi: V({\bold X})\rightarrow V({\bold X})$ such that for all $\sigma\in {\bold X}$ we have $\varphi({\sigma})\in {\bold X}.$ A simplicial map fixes a simplex $\sigma\in {\bold X}$ if $\varphi (\sigma)=\sigma$. Every simplicial map on $\bold X$ is a homomorphism of its graph $G({\bold X}),$ and every homomorphism of a graph $G$ is a simplicial map on its clique complex ${\bold X}(G).$ Therefore, if ${\bold X}$ is a flag complex, then the set of simplicial maps of ${\bold X}$ coincides with the set of homomorphisms of its graph $G({\bold X}).$ It is well known (see, for example, [@HeNe Theorem 2.65]) that any homomorphism of a finite dismantlable graph to itself fixes some clique. From Theorem \[dismantle\] we know that the graphs of weakly systolic complexes as well as the graphs of their Rips complexes are dismantlable. Therefore from the preceding discussion we obtain:
\[fixed-clique\] Any homomorphism of a finite weakly bridged graph $G=G({\bold X})$ to itself fixes some clique. Any simplicial map of a weakly systolic complex $\bold X$ to itself or of its Rips complex ${\bold X}_k$ to itself fixes some simplex of the respective complex.
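Corollary \[fixed-clique\] can be checked by brute force on very small examples. The Python sketch below (illustrative only; the choice of graph is ours) enumerates all homomorphisms to itself of the "paw" graph, a triangle with a pendant vertex, which is easily seen to be dismantlable, and verifies that each of them fixes some clique setwise.

```python
from itertools import product, combinations

# "paw" graph: triangle {0, 1, 2} plus a pendant vertex 3 attached to 2
V = [0, 1, 2, 3]
E = {frozenset(e) for e in [(0, 1), (1, 2), (0, 2), (2, 3)]}

def is_homomorphism(f):
    # every edge maps to an edge or collapses to a single vertex
    return all(f[a] == f[b] or frozenset((f[a], f[b])) in E
               for a, b in (tuple(e) for e in E))

def cliques():
    # all nonempty cliques (sets of pairwise adjacent vertices)
    for r in range(1, len(V) + 1):
        for S in combinations(V, r):
            if all(frozenset(p) in E for p in combinations(S, 2)):
                yield set(S)

homs = [f for f in (dict(zip(V, img)) for img in product(V, repeat=len(V)))
        if is_homomorphism(f)]

def fixes_some_clique(f):
    return any({f[v] for v in S} == S for S in cliques())

every_hom_fixes_a_clique = all(fixes_some_clique(f) for f in homs)
```

Enumerating all $4^4$ maps is of course only feasible for toy examples; the point is merely to make the statement of the corollary concrete.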
Let $u$ be a base point of a graph $G.$ A [*(geodesic)*]{} $k$–[*combing*]{} [@ECHLPT] is a choice of a shortest path $P_{(u,x)}$ between $u$ and each vertex $x$ of $G,$ such that $P:=P_{(u,v)}$ and $Q:=P_{(u,w)}$ are $k$–fellow travelers for any adjacent vertices $v$ and $w$ of $G,$ i.e., $d(P(t),Q(t))\leq k$ for all integers $t\ge 0.$ One can imagine the union of combing paths as a spanning tree $T_u$ of $G$ rooted at $u$ and preserving the distances from $u$ to all vertices. A natural way to comb a graph $G$ from $u$ is to run BFS and to take as a shortest path $P_{(u,x)}$ the unique path of the BFS-tree $T_u$. It is shown in [@Ch_CAT] that for bridged graphs this geodesic combing satisfies the 1-fellow traveler property. We will show now that in the case of weakly bridged graphs the same combing property is satisfied by the paths of any LexBFS tree $T_u:$
\[combing\] Locally finite weakly bridged graphs $G$ admit a geodesic 1-combing defined by the paths of any LexBFS tree $T_u$ of $G$.
Pick two adjacent vertices $v,w$ of $G$ and suppose that $w$ was labeled by LexBFS after $v.$ Then $d(u,v)\le d(u,w)=n.$ We proceed by induction on $n.$ Denote by $v'=f(v)$ and $w'=f(w)$ the fathers of $v$ and $w.$ By the definition of the combing, $v'$ and $w'$ are the neighbors of $v$ and $w$ in the combing paths $P_{(u,v)}$ and $P_{(u,w)},$ respectively. If $d(u,v)=d(u,w),$ then the [*fellow traveler property*]{} established in Theorem \[dismantle\] shows that $v'$ and $w'$ either are adjacent or coincide. In the second case, $P_{(u,v)}$ and $P_{(u,w)}$ coincide everywhere except at the last vertices $v$ and $w.$ In the first case, since $d(u,v')=d(u,w')=n-1,$ the paths $P_{(u,v')}$ and $P_{(u,w')}$ are 1-fellow travelers. Since $P_{(u,v')}\subset P_{(u,v)}$ and $P_{(u,w')}\subset P_{(u,w)},$ we conclude that $P_{(u,v)}$ and $P_{(u,w)}$ are 1-fellow travelers as well. Now suppose that $d(u,v)<d(u,w).$ If $w'=v,$ then $P_{(u,v)}\subset P_{(u,w)}$ and we are done. Otherwise, $w'$ is adjacent to $v$ and $v'.$ Applying the induction hypothesis to the combing paths $P_{(u,v')}\subset P_{(u,v)}$ and $P_{(u,w')}\subset P_{(u,w)},$ again we conclude that $P_{(u,v)}$ and $P_{(u,w)}$ are 1-fellow travelers.
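The combing of Theorem \[combing\] can be experimented with directly. The following Python sketch (our own code; as one standard choice of LexBFS tree, we take the father of a vertex to be its earliest-labeled neighbor) runs LexBFS on the 5-wheel and checks the key property behind the 1-fellow traveling of the combing paths: the fathers of any two adjacent vertices coincide or are adjacent.

```python
def lexbfs(adj, start):
    # LexBFS: repeatedly pick an unvisited vertex whose label (list of
    # numbers of its already-labeled neighbors) is lexicographically
    # largest; ties are broken by vertex name for determinism
    label = {v: [] for v in adj}
    order, father = [], {}
    remaining = set(adj)
    n, v = len(adj), start
    while True:
        remaining.discard(v)
        order.append(v)
        num = n - len(order) + 1          # start gets n, next gets n-1, ...
        for w in adj[v]:
            if w in remaining:
                if w not in father:
                    father[w] = v         # earliest-labeled neighbor
                label[w].append(num)
        if not remaining:
            break
        v = max(remaining, key=lambda x: (label[x], x))
    return order, father

def fellow_traveler(adj, father):
    # fathers of adjacent (non-root) vertices coincide or are adjacent
    for v in adj:
        for w in adj[v]:
            if v in father and w in father:
                fv, fw = father[v], father[w]
                if fv != fw and fw not in adj[fv]:
                    return False
    return True

# 5-wheel: rim x1..x5 plus hub c
adj = {v: set() for v in ["x1", "x2", "x3", "x4", "x5", "c"]}
rim = ["x1", "x2", "x3", "x4", "x5"]
for i in range(5):
    for a, b in [(rim[i], rim[(i + 1) % 5]), ("c", rim[i])]:
        adj[a].add(b)
        adj[b].add(a)
```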
Fixed point theorem {#fixedpt}
===================
In this section, we establish the fixed point theorem (Theorem C from Introduction). We start with two auxiliary results. The first is an easy corollary of Theorem \[dismantle\]:
\[str dominat\] Let $\bold X$ be a finite weakly systolic complex. Then either $\bold X$ is a single simplex or it contains two vertices $v,w$ such that $B_1(v)$ is a proper subset of $B_1(w),$ i.e., $B_1(v)\subsetneq B_1(w).$
Let $v$ be the last vertex of $\bold X$ labeled by LexBFS started at a vertex $u$ (see Theorem \[dismantle\]). If $d(u,v)=1,$ then the construction of our ordering implies that $B_1(u)=V({\bold X})$. Hence, either there exists a vertex $w$ such that $B_1(w)$ is a proper subset of $V({\bold X})=B_1(u),$ and we are done, or every two vertices of $\bold X$ are adjacent, i.e., $\bold X$ is a simplex. Now suppose that $d(u,v)\geq 2$. From Theorem \[dismantle\] we know that $B_1(v)\subseteq B_1(w)$, where $w$ is the father of $v$. Since $d(u,v)=d(u,w)+1\geq 2$, we conclude that $u\neq w$ and that $z\in B_1(w)\setminus B_1(v),$ where $z$ is the father of $w.$ Hence $B_1(v)$ is a proper subset of $B_1(w).$
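Lemma \[str dominat\] translates into a direct search: in any finite weakly systolic complex that is not a simplex, scanning ordered pairs of vertices for a strict inclusion of $1$-balls must succeed. A small illustrative check in Python (our own code; the "paw" graph, a triangle with a pendant vertex, is bridged and hence weakly systolic):

```python
def ball1(adj, v):
    # closed ball of radius 1, i.e. v together with its neighbors
    return adj[v] | {v}

def strictly_dominated_pair(adj):
    # search for v, w with B_1(v) a proper subset of B_1(w)
    for v in adj:
        for w in adj:
            if v != w and ball1(adj, v) < ball1(adj, w):
                return v, w
    return None

# "paw": triangle {0,1,2} with pendant vertex 3; B_1(3) = {2,3} < B_1(2)
paw = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
# a single simplex (a triangle): every 1-ball is the whole vertex set
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
```

On the simplex the search fails, and on the paw it succeeds, matching the dichotomy of the lemma.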
\[LC\] Let $\bold X$ be a finite weakly systolic complex. Let $v,w$ be two vertices such that $B_1(v)$ is a proper subset of $B_1(w)$. Then the full subcomplex ${\bold X}_0$ of $\bf X$ spanned by all vertices of $\bold X$ except $v$ is weakly systolic.
It is easy to see that ${\bf X}_0$ is simply connected (see also the discussion in Section \[dislc\]). Thus, by Theorem \[weakly-systolic\], it suffices to show that ${\bf X}_0$ does not contain induced $4$-cycles and satisfies the $\widehat{W}_5$-condition. Since, by Theorem \[weakly-systolic\], $\bold X$ does not contain induced $C_4,$ the same is true for its full subcomplex ${\bf X}_0$. Let $\widehat{W}_5\subseteq {\bf X}_0$ be a given $5$-wheel plus a triangle as defined in Section \[char\]. By Theorem \[weakly-systolic\] there exists a vertex $v'\in {\bold X}$ adjacent in $\bold X$ to all vertices of $\widehat{W}_5$. If $v'\neq v,$ then $v'\in {\bf X}_0$; if $v'=v,$ then, since $B_1(v)\subsetneq B_1(w),$ we have $\widehat{W}_5 \subseteq \mathrm{lk}(w,{\bf X}_0)$. In both cases all vertices of $\widehat{W}_5$ are adjacent to a common vertex of ${\bf X}_0$. Thus $\bold X_0$ also satisfies the $\widehat{W}_5$-condition and hence the lemma follows.
\[fpt\] Let $G$ be a finite group acting by simplicial automorphisms on a locally finite weakly systolic complex ${\bold X}$. Then there exists a simplex $\sigma \in {\bold X}$ which is invariant under the action of $G$.
Let ${\bold X}'$ be the subcomplex of $\bold X$ spanned by the convex hull of the orbit $\{ gv:\; g\in G\}$ of some vertex $v$ of $\bold X$. Then it is clear that ${\bold X}'$ is a bounded and $G$-invariant full subcomplex of $\bold X$. Moreover, as a convex subcomplex of a weakly systolic complex, ${\bold X}'$ is itself weakly systolic. Thus there exists a minimal non-empty $G$-invariant subcomplex ${\bold X}_0$ of $\bold X$ that is itself weakly systolic. Since $\bold X$ is locally finite, ${\bold X}_0$ is finite. We assert that ${\bold X}_0$ must be a single simplex.
Assume by way of contradiction that ${\bold X}_0$ is not a simplex. Then, by Lemma \[str dominat\], ${\bf X}_0$ contains two vertices $v,w$ such that $B_1(v)\subsetneq B_1(w)$ (i.e., $v$ is a strictly dominated vertex). Since the strict inclusion of $1$-balls is a transitive relation and ${\bold X}_0$ is finite, there exists a finite set $S$ of strictly dominated vertices of ${\bold X}_0$ with the following property: for every vertex $x\in S$ there is no vertex $y$ with $B_1(y)\subsetneq B_1(x)$. Let ${\bold X}_0'$ be the full subcomplex of $\bf X$ spanned by $V({\bold X}_0)\setminus S$. It is clear that ${\bold X}_0'$ is a non-empty $G$-invariant proper subcomplex of ${\bold X}_0$. By Lemma \[LC\], applied successively to the vertices of $S,$ the subcomplex ${\bold X}_0'$ is weakly systolic. This contradicts the minimality of ${\bold X}_0$ and thus shows that ${\bold X}_0$ has to be a simplex.
\[conj\] Let $G$ be a group acting geometrically by automorphisms on a weakly systolic complex $\bf X$ (i.e., $G$ is weakly systolic). Then $G$ contains only finitely many conjugacy classes of finite subgroups.
Suppose by way of contradiction that we have infinitely many conjugacy classes of finite subgroups represented by $H_1,H_2,\ldots\subset G$. Since $G$ acts geometrically on ${\bf X},$ there exists a compact subset $K\subset {\bf X}$ with $\bigcup_{g\in G} gK={\bf X}$. For $i=1,2,\ldots,$ let $\sigma_i$ be an $H_i$-invariant simplex of $\bf X$ (whose existence is assured by the fixed point Theorem \[fpt\]) and let $g_i\in G$ be such that $g_i(\sigma_i)\cap K\neq \emptyset$. Then $g_i(\sigma_i)$ is $g_iH_ig_i^{-1}$-invariant and $\bigcup_i g_iH_ig_i^{-1}$ is infinite. But for every element $g\in \bigcup_i g_iH_ig_i^{-1}$ we have $g(B_1(K))\cap B_1(K)\neq \emptyset,$ contradicting the properness of the $G$-action on $\bf X$.
Contractibility of the fixed point set {#contrfps}
======================================
The aim of this section is to prove that the fixed point set of a group acting on a weakly systolic complex is contractible or empty (Proposition \[inv set contr\]). As explained in the Introduction, this result implies Theorem E asserting that weakly systolic complexes are models for $\underline{E}G$ for groups acting on them properly.
Our proof closely follows Przytycki’s proof of an analogous result for the case of systolic complexes [@Pr3]. There are, however, minor technical difficulties. In particular, since balls around simplices in weakly systolic complexes need not be convex, we have to work with other convex objects, defined as follows. For a simplex $\sigma$ of a simplicial complex $\bf X,$ set $K_0(\sigma)=\sigma$ and $K_i(\sigma)=\bigcap _{v\in \sigma} B_i(v)$ for $i=1,2,\ldots$.
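To make this definition concrete, here is a small computational illustration (ours, not part of the argument) of the sets $K_i(\sigma)$ on the underlying graph of a toy complex; the graph, its adjacency-list encoding, and the function names are our own choices. For the path below (a tree, hence systolic), the output also illustrates the inclusion $K_{i+1}(\sigma)\subseteq B_1(K_i(\sigma))$ of Lemma \[bint\].

```python
from collections import deque

def ball(adj, v, r):
    """B_r(v): all vertices within graph distance r of v, via breadth-first search."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return {u for u, d in dist.items() if d <= r}

def K(adj, sigma, i):
    """K_0(sigma) = sigma; K_i(sigma) = intersection of the balls B_i(v), v in sigma."""
    if i == 0:
        return set(sigma)
    return set.intersection(*(ball(adj, v, i) for v in sigma))

# Toy underlying graph: the path 0-1-2-3-4 (a tree, hence (weakly) systolic).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
sigma = [1, 2]  # the edge {1, 2}, a simplex of the path complex
print(K(adj, sigma, 1))  # {1, 2}
print(K(adj, sigma, 2))  # {0, 1, 2, 3}, contained in B_1(K_1(sigma))
```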
\[bint\] Let $\sigma$ be a simplex of a weakly systolic complex $\bf X$. Then, for $i=0,1,2,...$, $K_i(\sigma)$ is convex and $K_{i+1}(\sigma)\subseteq B_1(K_i(\sigma))$.
Trivially, $K_0(\sigma)=\sigma$ is convex. For $i>0,$ $K_i(\sigma)$ is the intersection of the balls $B_i(v), v\in \sigma.$ By Theorem \[weakly-systolic\], balls around vertices are convex, whence $K_i(\sigma)$ is convex as well. To establish the inclusion $K_{i+1}(\sigma)\subseteq B_1(K_i(\sigma)),$ pick any vertex $w\in K_{i+1}(\sigma).$ Let $l+1=d(w,\sigma)$ and denote by $\sigma_0$ the metric projection of $w$ in $\sigma$. By the property $SD_{l}(w),$ there exists a vertex $z\in S_{l}(w)$ adjacent to all vertices of the simplex $\sigma_0.$ Let $w'$ be a neighbor of $w$ in the interval $I(w,z).$ Then obviously $d(w',\sigma)=l$ and therefore $\sigma_0$ is the metric projection of $w'$ in $\sigma.$ Since $d(w',v)=d(w,v)-1$ for any vertex $v\in \sigma$ and $w\in K_{i+1}(\sigma),$ we conclude that $w'\in K_i(\sigma),$ whence $w\in B_1(w')\subset B_1(K_i(\sigma)).$
We recall now two general results that were proved in [@Pr3] and which will be important in the proof of Proposition \[inv set contr\].
\[p4.1p3\] If ${\mathcal}C, {\mathcal}D$ are posets and $F_0,F_1\colon {\mathcal}C \to {\mathcal}D$ are functors such that for each object $c$ of ${\mathcal}C$ we have $F_0(c) \leq F_1(c)$, then the maps induced by $F_0$, $F_1$ on the geometric realizations of ${\mathcal}C,{\mathcal}D$ are homotopic. Moreover this homotopy can be chosen to be constant on the geometric realization of the subposet of ${\mathcal}C$ of objects on which $F_0$ and $F_1$ agree.
\[p4.2p3\] Let $F_0\colon {\mathcal}C' \to {\mathcal}C$ be the functor from the flag poset ${\mathcal}C'$ of a poset ${\mathcal}C$ into the poset ${\mathcal}C$, assigning to each object of ${\mathcal}C'$, which is a chain of objects of ${\mathcal}C$, its minimal element. Then the map induced by $F_0$ on geometric realizations of ${\mathcal}C',{\mathcal}C$ (that are homeomorphic in a canonical way) is homotopic to identity.
The following property of flag complexes will be crucial in the definition of expansion by projection below. It says that in the weakly systolic case we can define projections on convex subcomplexes in the same way as projections on balls.
\[proj\] Let ${\bf X}$ be a simplicial flag complex and let $Y$ be its convex subset. If a simplex $\sigma$ belongs to $S_1(Y),$ i.e. $\sigma \subseteq B_1(Y)$ and $\sigma \cap Y=\emptyset,$ then $\tau:={\mathrm}{lk}(\sigma, {\bf X})\cap Y$ is a single simplex.
By definition of links, $\tau$ consists of all vertices $v$ of $Y$ adjacent in $G({\bf X})$ to all vertices of $\sigma$. Since the set $Y$ is convex and $\sigma$ is disjoint from $Y,$ necessarily the vertices of $\tau$ are pairwise adjacent. As $\bf X$ is a flag complex, $\tau$ is a simplex of $\bf X$.
We will call the simplex $\tau$ as in the lemma above the *projection of $\sigma$ on $Y$*. Now we are in a position to define the following notion introduced (in a more general version) by Przytycki [@Pr3 Definition 3.1] in the systolic case. Let $Y$ be a convex subset of a weakly systolic complex $\bf X$ and let $\sigma$ be a simplex in $B_1(Y)$. The *expansion by projection* $e_Y(\sigma)$ of $\sigma$ is a simplex in $B_1(Y)$ defined in the following way: if $\sigma \subseteq Y,$ then $e_Y(\sigma)=\sigma,$ otherwise $e_Y(\sigma)$ is the join of $\sigma \cap S_1(Y)$ and its projection on $Y$. A version of the following simple lemma was proved in [@Pr3] in the systolic case. Its proof given there is valid also in our case.
\[L3.8p3\] Let $Y$ be a convex subset of a weakly systolic complex $\bf X$ and let $\sigma_1\subseteq \sigma_2\subseteq\ldots \subseteq
\sigma_n\subseteq B_1(Y)$ be an increasing sequence of simplices. Then the intersection $\left( \bigcap_{i=1}^{n}e_Y(\sigma_i)\right) \cap Y$ is nonempty.
Let $\sigma$ be a simplex of a weakly systolic complex $\bf X$. As in [@Pr3], we define an increasing sequence of full subcomplexes ${\bf D}_{2i}(\sigma)$ and ${\bf D}_{2i+1}(\sigma)$ of the barycentric subdivision ${\bf X}'$ of $\bf X$ in the following way. Let ${\bf D}_{2i}(\sigma)$ be the subcomplex spanned by all vertices of ${\bf X}'$ corresponding to simplices of $\bf X$ which have all their vertices in $K_i(\sigma)$. Let ${\bf D}_{2i+1}(\sigma)$ be the subcomplex spanned by all vertices of ${\bf X}'$ which correspond to those simplices of $\bf X$ that have all their vertices in $K_{i+1}(\sigma)$ and at least one vertex in $K_i(\sigma)$. The proof of the main proposition in this section follows closely the proof of [@Pr3 Proposition 1.4].
\[inv set contr\] Let $H$ be a group acting by simplicial automorphisms on a weakly systolic complex $\bf X$. Then the complex ${\mathrm}{Fix}_H {\bf X}'$ is contractible or empty.
Assume that ${\mathrm}{Fix}_H{\bf X}'$ is nonempty and let $\sigma$ be a maximal $H$-invariant simplex. By ${\bf D}_i$ we will denote here ${\bf D}_i(\sigma)$. We will prove the following three assertions.
\(i) ${\bf D}_0\cap {\mathrm}{Fix}_H{\bf X}'$ is contractible;
\(ii) the inclusion ${\bf D}_{2i}\cap {\mathrm}{Fix}_H{\bf X}' \subseteq {\bf D}_{2i+1}\cap
{\mathrm}{Fix}_H{\bf X}'$ is a homotopy equivalence;
\(iii) the identity on ${\bf D}_{2i+2}\cap {\mathrm}{Fix}_H{\bf X}'$ is homotopic to a mapping with image in ${\bf D}_{2i+1}\cap {\mathrm}{Fix}_H{\bf X}'\subseteq
{\bf D}_{2i+2}\cap {\mathrm}{Fix}_H{\bf X}'$.
As in the proof of [@Pr3 Proposition 1.4], the three assertions imply that ${\bf D}_{k}\cap {\mathrm}{Fix}_H{\bf X}'$ is contractible for every $k$, thus the proposition holds. To show (i), note that ${\bf D}_{0}\cap {\mathrm}{Fix}_H{\bf X}'$ is a cone over the barycenter of $\sigma$ and hence it is contractible.
To prove (ii), let ${\mathcal}C$ be the poset of $H$-invariant simplices in $\bf X$ with vertices in $K_{i+1}(\sigma)$ and at least one vertex in $K_i(\sigma)$. Its geometric realization is ${\bf D}_{2i+1}\cap {\mathrm}{Fix}_H {\bf X}'$. Consider a functor $F\colon {\mathcal}C \to {\mathcal}C$ assigning to each object of ${\mathcal}C$ (i.e., each such simplex of $\bf X$) its subsimplex spanned by its vertices in $K_i(\sigma)$. By Proposition \[p4.1p3\], the geometric realization of $F$ is homotopic to the identity (which is the geometric realization of the identity functor). Moreover this homotopy is constant on ${\bf D}_{2i}\cap {\mathrm}{Fix}_H{\bf X}'$. The image of the geometric realization of $F$ is contained in ${\bf D}_{2i}\cap {\mathrm}{Fix}_H{\bf X}'$. Hence ${\bf D}_{2i}\cap {\mathrm}{Fix}_H{\bf X}'$ is a deformation retract of ${\bf D}_{2i+1}\cap {\mathrm}{Fix}_H{\bf X}',$ as desired.
To establish (iii), let ${\mathcal}C$ be the poset of $H$-invariant simplices of ${\bf X}$ with vertices in $K_{i+1}(\sigma)$ and let ${\mathcal}C'$ be its flag poset. Let also $F_0\colon {\mathcal}C'\to {\mathcal}C$ be the functor assigning to each object of ${\mathcal}C'$ its minimal element; cf. Proposition \[p4.2p3\]. Now we define another functor $F_1\colon {\mathcal}C'\to {\mathcal}C$. For any object $c'$ of ${\mathcal}C'$, which is a chain of objects $c_1<c_2<\ldots<c_k$ of ${\mathcal}C$, recall that the $c_j$ are $H$-invariant simplices in $K_{i+1}(\sigma)$. Let $c_j'=e_{K_i(\sigma)}(c_j).$ Then by Lemma \[L3.8p3\] the intersection $\bigcap_{j=1}^{k}c_j'$ contains at least one vertex in $K_i(\sigma)$. Thus $\bigcap_{j=1}^{k}c_j'$ is an $H$-invariant non-empty simplex and hence an object of ${\mathcal}C$. We define $F_1(c')$ to be this object. In the geometric realization of ${\mathcal}C$, which is ${\bf D}_{2i+2}\cap {\mathrm}{Fix}_H{\bf X}'$, the object $F_1(c')$ corresponds to a vertex of ${\bf D}_{2i+1}\cap {\mathrm}{Fix}_H{\bf X}'$. It is obvious that $F_1$ preserves the partial order. Notice that for any object $c'$ of ${\mathcal}C'$ we have $F_0(c')\subseteq F_1(c')$, hence, by Proposition \[p4.1p3\], the geometric realizations of $F_0$ and $F_1$ are homotopic. We have that $F_0$ is homotopic to the identity and that $F_1$ has image in ${\bf D}_{2i+1}\cap {\mathrm}{Fix}_H{\bf X}',$ thus establishing (iii).
Final remarks on the case of systolic complexes {#final}
===============================================
In this final section, we restrict to the case of systolic complexes and we present some further results in that case. First, using Lemma 3.10 and Theorem 3.11 of Polat [@Po] for bridged graphs, we prove a stronger version of the fixed point theorem for systolic complexes. Namely, Polat [@Po] established that for any subset ${\overline}Y$ of vertices of a graph with finite intervals, there exists a minimal isometric subgraph of this graph which contains ${\overline}Y.$ Moreover, if ${\overline}Y$ is finite and the graph is bridged, then [@Po Theorem 3.11(i)] shows that this minimal isometric (and hence bridged) subgraph is also finite. We continue with two lemmata which can be viewed as $G$-invariant versions of these two results of Polat [@Po].
\[minsubcx\] Let a group $G$ act by simplicial automorphisms on a systolic complex $\bf X$. Let ${\overline}Y$ be a $G$-invariant set of vertices of $\bf X$. Then there exists a minimal $G$-invariant subcomplex $\bf Y$ of $\bf X$ containing ${\overline}Y$, which is itself a systolic complex.
Let $\Sigma$ be a chain (with respect to the subcomplex relation) of $G$-invariant subcomplexes of $\bf X$, which contain ${\overline}Y$ and induce isometric subgraphs of the underlying graph of $\bf X$ (and thus are systolic complexes themselves). Then, as in the proof of [@Po Lemma 3.10], we conclude that the subcomplex ${\bf Y}=\bigcap \Sigma$ is a minimal $G$-invariant subcomplex of $\bf X$, containing ${\overline}Y$ and which is itself a systolic complex.
\[minfin\]
Let a group $G$ act by simplicial automorphisms on a systolic complex $\bf X$. Let ${\overline}Y$ be a finite $G$-invariant set of vertices of $\bf X$. Then there exists a minimal (as a simplicial complex) finite $G$-invariant subcomplex $\bf Y$ of $\bf X$, which is itself a systolic complex.
Let $co({\overline}Y)$ be the convex hull of ${\overline}Y$ in $\bf X$. The full subcomplex $ {\bf Z}$ of $\bf X$ spanned by $co({\overline}Y)$ is a bounded systolic complex. By Lemma \[minsubcx\], there exists a minimal $G$-invariant subcomplex $\bf Y$ of $\bf Z$ containing the set ${\overline}Y$ and which itself is a systolic complex. Then, applying the proof of [@Po Theorem 3.11] to the bounded bridged graphs which are the underlying graphs of the systolic complexes $\bf Y$ and $\bf Z$, it follows that $\bf Y$ is finite.
\[fpt\_sc\] Let $G$ be a finite group acting by simplicial automorphisms on a systolic complex $\bf X$. Then there exists a simplex $\sigma \in {\bf X}$ which is invariant under the action of $G$.
Let ${\overline}Y=Gv=\{ gv:\; \; g\in G\}$ for some vertex $v\in {\bf X}$. Then ${\overline}Y$ is a finite $G$-invariant set of vertices of $\bf X$ and thus, by Lemma \[minfin\], there exists a minimal finite $G$-invariant subcomplex $\bf Y$ of $\bf X$, which is itself a systolic complex. Then, in the same way as in the proof of Theorem \[fpt\], we conclude that there exists a simplex in $\bf Y$ that is $G$-invariant.
We believe that, as in the systolic case, the stronger version of Theorem \[fpt\] holds also for weakly systolic complexes, i.e., one can drop the assumption on the local finiteness of $\bf X$ in Theorem \[fpt\]. This needs extensions of some results of Polat (in particular, Theorems 3.8 and 3.11 from [@Po]) to the class of weakly bridged graphs.
Zawiślak [@Z] initiated another approach to the fixed point theorem in the systolic case based on the following notion of round subcomplexes. A systolic complex $\bf X$ of finite diameter $k$ is [*round*]{} (cf. [@Pr2]) if $\cap\{ B_{k-1}(v): v\in V({\bf X})\}=\emptyset.$ Przytycki [@Pr2] established that all round systolic complexes have diameter at most $5$ and used this result to prove that for any finite group $G$ acting by simplicial automorphisms on a systolic complex there exists a subcomplex of diameter at most 5 which is invariant under the action of $G$. Zawiślak [@Z Conjecture 3.3.1] and Przytycki (Remark 8.1 of [@Pr2]) conjectured that in fact the diameter of round systolic complexes must be at most 2. Zawiślak [@Z Theorem 3.3.1] showed that if this is true, then it implies that $G$ has an invariant simplex, thus paving another way to the proof of Theorem \[fpt\_sc\]. We will show now that the positive answer to the question of Zawiślak and Przytycki directly follows from an earlier result of Farber [@Fa] on diameters and radii of finite bridged graphs.
\[round\] Any round systolic complex $\bf X$ has diameter at most 2.
Let diam$({\bf X})$ and rad$({\bf X})$ denote the diameter and the radius of a systolic complex $\bf X$, i.e., the diameter and radius of its underlying bridged graph $G=G({\bf X})$. Recall that rad$({\bf X})$ is the smallest integer $r$ such that there exists a vertex $c$ of $\bf X$ (called a central vertex) so that the ball $B_r(c)$ of radius $r$ and centered at $c$ covers all vertices of $\bf X$, i.e., $B_r(c)=V({\bf X}).$
Farber [@Fa Theorem 4] proved that if $G$ is a finite bridged graph, then $3{\mathrm}{rad}(G)\le 2{\mathrm}{diam}(G)+2.$ We will show first that this inequality holds for infinite bridged graphs $G$ of finite diameter ${\mathrm}{diam}(G)$ and containing no infinite simplices. Set $k:={\mathrm}{rad}(G)\le {\mathrm}{diam}(G).$ By definition of ${\mathrm}{rad}(G)$ the intersection of all balls of radius $k-1$ of $G$ is empty. Then using an argument of Polat (personal communication) presented below, we can find a finite subset of vertices $Y$ of $G$ such that the intersection of the balls $B_{k-1}(v),$ $v$ running over all vertices of $Y,$ is still empty. By [@Po Theorem 3.11], there exists a finite isometric bridged subgraph $H$ of $G$ containing $Y.$ From the choice of $Y$ we conclude that the radius of $H$ is at least $k$, while the diameter of $H$ is at most the diameter of $G.$ As a result, applying Farber’s inequality to $H,$ we obtain $3{\mathrm}{rad}(G)\le 3{\mathrm}{rad}(H)\le 2{\mathrm}{diam}(H)+2\le 2{\mathrm}{diam}(G)+2,$ whence $3{\mathrm}{rad}(G)\le 2{\mathrm}{diam}(G)+2.$
To show the existence of a finite set $Y$ such that $\cap \{ B_{k-1}(v): v\in Y\}=\emptyset,$ we use an argument of Polat (personal communication). According to Theorem 3.9 of [@Po3], any graph without isometric rays (in particular, any bridged graph of finite diameter) can be endowed with a topology, called [*geodesic topology*]{}, so that the resulting topological space is compact. On the other hand, it is shown in [@Po4 Corollary 6.26] that any convex set of a bridged graph containing no infinite simplices is closed in the geodesic topology. As a result, the balls of a bridged graph $G$ of finite diameter containing no infinite simplices are compact convex sets. Hence any family of balls with an empty intersection contains a finite subfamily with an empty intersection, showing that such a finite set $Y$ indeed exists.
Now suppose that $\bf X$ is a round systolic complex and let $k:={\mathrm}{diam}({\bf X}).$ Since $\bf X$ is round, one can easily deduce that ${\mathrm}{rad}({\bf X})=k$: indeed, if ${\mathrm}{rad}({\bf X})\le k-1$ and $c$ is a central vertex, then $c$ will belong to the intersection $\cap\{ B_{k-1}(v): v\in V({\bf X})\},$ which is impossible. Applying Farber’s inequality to the (bridged) underlying graph of $\bf X$, we conclude that $3k\le 2k+2,$ whence $k\le 2.$
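As a purely numerical sanity check (ours, not part of the proof), the following sketch computes the radius and diameter of a small bridged graph—a path, i.e., a tree, which is trivially bridged since it contains no cycles at all—and verifies Farber’s inequality $3{\mathrm}{rad}(G)\le 2{\mathrm}{diam}(G)+2$ on it. The adjacency-list encoding and function names are our own.

```python
from collections import deque

def eccentricity(adj, v):
    """Largest graph distance from v, computed by breadth-first search."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return max(dist.values())

def rad_diam(adj):
    """Radius = min eccentricity, diameter = max eccentricity."""
    eccs = [eccentricity(adj, v) for v in adj]
    return min(eccs), max(eccs)

# Path on 5 vertices: a tree, hence a bridged graph.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 4] for i in range(5)}
r, d = rad_diam(path)
print(r, d)                # 2 4
assert 3 * r <= 2 * d + 2  # Farber's inequality
```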
It would be interesting to extend Proposition \[round\] and the relationship of [@Fa] between radii and diameters to weakly systolic complexes.
Osajda-Przytycki [@OsPr] constructed a $Z$-set compactification ${\overline{\bf X}}={\bf X} \cup \partial {\bf X}$ of a systolic complex $\bf X$. The main result there ([@OsPr Theorem 6.3]), together with Theorem E from the Introduction of our paper, allowed them to claim the following result (without proving it):
\[Z\] Let a group $G$ act geometrically by simplicial automorphisms on a systolic complex $\bf X$. Then the compactification ${\overline{\bf X}}={\bf X}\cup \partial {\bf X}$ of ${\bf X}$ satisfies the following properties:
1\. ${\overline{\bf X}}$ is a Euclidean retract (ER);
2\. $\partial {\bf X}$ is a $Z$–set in ${\overline{\bf X}}$;
3\. for every compact set $K\subset {\bf X}$, $(gK)_{g\in G}$ is a null sequence;
4\. the action of $G$ on $\bf X$ extends to an action, by homeomorphisms, of $G$ on ${\overline{\bf X}}$;
5\. for every finite subgroup $F$ of $G$, the fixed point set ${\mathrm}{Fix}_F {\overline{\bf X}}$ is contractible;
6\. for every finite subgroup $F$ of $G$, the fixed point set ${\mathrm}{Fix}_F {\bf X}$ is dense in ${\mathrm}{Fix}_F {\overline{\bf X}}$.
This result asserts that ${\overline{\bf X}}$ is an *$EZ$-structure*, sensu Rosenthal [@Ro], for a systolic group $G$; for details, see [@OsPr]. The existence of such a structure implies, by [@Ro], the Novikov conjecture for $G$.
Acknowledgements {#acknowledgements .unnumbered}
================
Work of V. Chepoi was supported in part by the ANR grant BLAN06-1-138894 (projet OPTICOMB). Work of D. Osajda was supported in part by MNiSW grant N201 012 32/0718 and by the ANR grants Cannon and Théorie Géométrique des Groupes. We are grateful to Norbert Polat for his help in the proof of Proposition \[round\].
---
abstract: 'Effective identification of asymmetric and local features in images and other data observed on multi-dimensional grids plays a critical role in a wide range of applications including biomedical and natural image processing. Moreover, the ever increasing amount of image data, in terms of both the resolution per image and the number of images processed per application, requires algorithms and methods for such applications to be computationally efficient. We develop a new probabilistic framework for multi-dimensional data to overcome these challenges through incorporating data adaptivity into discrete wavelet transforms, thereby allowing them to adapt to the geometric structure of the data while maintaining the linear computational scalability. By exploiting a connection between the local directionality of wavelet transforms and recursive dyadic partitioning on the grid points of the observation, we obtain the desired adaptivity through adding to the traditional Bayesian wavelet regression framework an additional layer of Bayesian modeling on the space of recursive partitions over the grid points. We derive the corresponding inference recipe in the form of a recursive representation of the exact posterior, and develop a class of efficient recursive message passing algorithms for achieving exact Bayesian inference with a computational complexity linear in the resolution and sample size of the images. While our framework is applicable to a range of problems including multi-dimensional signal processing, compression, and structural learning, we illustrate its work and evaluate its performance in the context of 2D and 3D image reconstruction using real images from the ImageNet database. We also apply the framework to analyze a data set from retinal optical coherence tomography. 
Both numerical experiments and real data analysis show that our method enhances energy concentration and substantially outperforms a number of state-of-the-art approaches in a variety of images.'
author:
- Meng Li
- Li Ma
bibliography:
- 'WARP-abb.bib'
title: 'WARP: Wavelets with adaptive recursive partitioning for multi-dimensional data'
---
Introduction {#sec:intro}
============
Effective identification of asymmetric and local features in images and other data observed on multi-dimensional grids plays a critical role in a wide range of applications. One such application is optical coherence tomography (OCT). OCT is a non-invasive imaging modality widely used in ophthalmology to visualize cross-sections of tissue layers. These tissue layers—such as the inner nuclear layer and outer nuclear layer—are often mostly homogeneous horizontally while involving large vertical contrasts. These contrasts across layers are key for ophthalmologists to make a diagnosis based on the (algorithmically reconstructed) image. Furthermore, local structures in such images can indicate ocular diseases, and their proper quantitative assessment is an important reference for monitoring the progression of the disease in clinical practice [@alasil2010relationship; @bussel2013oct; @huang2014inner; @sun2014disorganization; @kafieh2015thickness; @oishi2018longitudinal]. Many other applications of 2D and 3D image analyses in biomedicine and beyond also involve asymmetric and local features to various extents. The effective analysis of such multi-dimensional observations can be greatly enhanced by incorporating adaptivity into the algorithm or method to take into account such features.
A further challenge in modern applications involving multi-dimensional observations is the ever increasing size of the datasets. For example, both the number of images analyzed as well as the resolution—i.e., the total number of pixels—of each image have been expanding rapidly. Many traditional methods and models become computationally impractical for modern data as they scale polynomially with the resolution. Effective methods for analyzing such data must scale well with both the resolution of each image and the number of images.
The primary aim of this work is to present a probabilistic modeling framework for analyzing observations on multi-dimensional grids that can address these challenges—being able to effectively adapt to the asymmetry on dimensions and the local nature of interesting features, while achieving linear scalability with the resolution and sample size—in a principled probabilistic manner. Our starting point is a well-known strategy for representing functions—a multi-resolution representation through the discrete wavelet transform (DWT). Wavelet analysis is hardly a new topic [@Donoho1994; @donoho1995adapting; @Mallat2008] and it has played a pervasive role in the context of signal processing and image analysis. Its linear computational scalability is well-suited for analyzing massive data. However, the overwhelming emphasis in traditional statistical wavelet analyses has been on effective modeling and inference on the resulting wavelet coefficients starting with a [*given*]{}, [*fixed*]{} wavelet transform of the original data [@Abramovich+:98; @crouse:1998; @Clyde+George:00; @Brown2001; @Wil+Now:04; @Morris2006]. Such a predetermined wavelet transform cannot adapt to the structure of the data and consequently suffers in its ability to effectively maintain the local and asymmetric structures in the original observation. No downstream statistical analyses can hope to recover what has already been lost at the wavelet transform stage.
In this work, we incorporate the desired adaptivity into wavelet analysis by a very simple hierarchical modeling strategy—starting the model “one level up”, that is, by incorporating the choice of the wavelet transform itself into the statistical analysis. Specifically, instead of using traditional fixed multi-dimensional DWT to represent multi-dimensional observations, we utilize a 1D wavelet transform that can “turn and twist” (or “warp”) over the multi-dimensional grid, or the [*index space*]{}. We propose a hierarchical Bayesian model on the local directionality of the 1D transform to allow the “warping” to adapt to the geometric structure of the underlying function, e.g., the true image.
In designing the model, we note that “warping” the 1D wavelet transform is equivalent to fixing the 1D wavelet transform while shuffling points in the multivariate index space of the observation—i.e., through applying a given 1D wavelet transform to a permuted version of the observation. This connection implies that probabilistic models on “warping” can be induced from distributions on the space of permutations of the index points. Finally, we design a Bayesian model, i.e., a prior, on the space of permutations by drawing a further connection between random recursive partitioning and random permutations. The particular class of priors on the permutations induced by random recursive partitioning over the index space allows us to complete exact Bayesian inference that achieves a computational scalability linear in the resolution and sample size.
Due to its connection to recursive partitioning, we shall refer to our approach as WARP, or [*wavelets with adaptive recursive partitioning*]{}. Through extensive numerical studies involving a large number of natural images from the ImageNet database and an OCT data set, we show that WARP outperforms the existing state-of-the-art approaches by a substantial margin while maintaining the computational efficiency of classical wavelet analyses. While we focus on 2D and 3D image analyses in our motivation, our framework is readily applicable to observations of more than three dimensions without modification. Benefiting from its adaptivity to the underlying image structure with linear scalability and the resulting improvement on sparse coding, WARP is suitable for a wide range of applications in scalable multi-dimensional signal processing, compression, and structural learning.
The rest of the paper is organized as follows. \[section:method\] introduces the WARP framework. In Section \[section:RDP\] we review the key components of Bayesian wavelet regression models, introduce permutation of the index space as a way to incorporate adaptivity into wavelet analysis, and construct a class of priors on the permutation induced by recursive dyadic partitioning on the index space. We derive the corresponding posterior distributions and provide computational recipes for exact Bayesian inference for warping Haar wavelets in Section \[sec:posterior.inference\]. In \[sec:experiments\], we carry out an extensive numerical study and compare our method to existing state-of-the-art wavelet and non-wavelet methods using a variety of real 2D and 3D images. In \[sec:application\] we carry out a case study that applies WARP to analyze an OCT data set, and compares its performance to a number of state-of-the-art approaches. \[sec:dicussion\] concludes with some brief remarks. The Supplementary Materials contain all proofs, a sequential Monte Carlo algorithm for applying WARP on wavelet bases other than the Haar basis, a sensitivity analysis on the prior specification, and additional numerical experiments using 3D images. The C++ source code along with a Matlab toolbox and R package to implement the proposed method is available online at <https://github.com/MaStatLab/WARP>.
Method\[section:method\]
========================
Permuted wavelet regression and recursive dyadic partitions\[section:RDP\]
--------------------------------------------------------------------------
We shall use ${\Omega}$ to denote a space of indices or locations (e.g., pixels in images) where we obtain numerical measurements (e.g., intensities of pixels). Throughout this work, we assume ${\Omega}$ to be an $m$-dimensional rectangular tube consisting of $n_i=2^{J_i}$ grid points in the $i$th dimension for $i=1,2,\ldots,m$, that is, the function values are observed on a multi-dimensional equidistant grid. To simplify notation, we shall use $[a,b]$ to represent the set $\{a,a+1,\ldots,b\}$ for two integers $a$ and $b$ with $a\leq b$. Then the index space ${\Omega}$ is of the form $${\Omega}=[0,2^{J_1} - 1]\times [0,2^{J_2} - 1]\times \cdots \times [0,2^{J_m} - 1].$$ The locations in ${\Omega}$ can be placed into a vector of length $n=2^J$, where $J=J_1+J_2+\cdots+J_m$. For example, we can map the location ${\bm{s}}=(s_1,s_2,\ldots,s_m) \in {\Omega}$ to the $t$th element in the vector, where $t = s_1 + \sum_{l = 2}^m (\prod_{i = 1}^{l -1} n_i ) s_{l}$. Correspondingly, any function $f:{\Omega}\rightarrow {\mathbb{R}}$ can be represented as a vector ${\bm f}$ of length $n=2^{J}$ whose $t$th element is $f({\bm{s}})$. It may not seem obvious at first why one would want to treat a multi-dimensional function as a one-dimensional vector. We will show later that adaptive wavelet transform on multi-dimensional functions can be achieved through 1D wavelet transforms applied to adaptively permuted vectors.
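The vectorization $t = s_1 + \sum_{l=2}^m (\prod_{i=1}^{l-1} n_i) s_l$ is the usual mixed-radix (column-major) flattening of the grid and can be computed with a running stride. The sketch below is our illustration; the function and variable names are our own.

```python
def linear_index(s, dims):
    """t = s_1 + sum_{l=2}^m (n_1 * ... * n_{l-1}) * s_l, with 0-based coordinates."""
    t, stride = 0, 1
    for coord, n in zip(s, dims):
        t += stride * coord
        stride *= n
    return t

dims = (4, 2)  # a 4 x 2 grid: n_1 = 2^2, n_2 = 2^1, so n = 2^3 = 8 locations
print(linear_index((3, 1), dims))  # 3 + 4*1 = 7
# The map is a bijection from the grid onto {0, ..., 7}:
assert sorted(linear_index((a, b), dims)
              for b in range(2) for a in range(4)) == list(range(8))
```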
Now, we consider the regression model $$\label{eq:model}
{\bm{y}}= \bm{f} + {\bm{\epsilon}}\quad \text{with } {\bm{\epsilon}}\sim N(\bm{0}, \Sigma_{\epsilon}),$$ where ${\bm{y}}=(y_0,y_1,\ldots,y_{2^{J}-1})'$ are the observation values made on ${\Omega}$, $\bm{f}=(f_0,f_1,\ldots,f_{2^{J}-1})'$ the underlying unknown function mean (or the signal), and ${\bm{\epsilon}}=(\epsilon_0,\epsilon_1,\ldots,\epsilon_{2^{J}-1})'$ the noise. For ease of illustration, we assume homogeneous white noise, i.e., $\Sigma_{\epsilon} = \sigma^2 I_n$, though our method does not rely on this assumption and can readily apply to models with heterogeneous variance; see \[sec:dicussion\] for further discussion.
Wavelet analysis starts by applying a discrete wavelet transform (DWT) to ${\bm{y}}$. This can be done by multiplying a corresponding orthonormal matrix $W$ to both sides of Eq. , obtaining $\bm{w} = {\bm{z}}+ {\bm{u}}$ where $\bm{w} = W {\bm{y}}$ is the vector of empirical wavelet coefficients, ${\bm{z}}=W \bm{f}$ the mean vector for wavelet coefficients and ${\bm{u}}=W{\bm{\epsilon}}$ the noise vector in the wavelet domain. This model can be rewritten in a location-scale form: $w_{j,k} = z_{j,k}+ u_{j,k}$ for $j=0,1,\ldots,J-1$ and $k=0,1,\ldots,2^{j}-1$, where $w_{j,k}$, $z_{j,k}$, $u_{j,k}$ are the $k$th wavelet coefficient, signal, and noise at the $j$th scale in the wavelet (i.e., location-scale) domain, respectively.
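For concreteness, here is a minimal sketch (ours) of the orthonormal Haar version of this transform for $n=2^J$: it computes the empirical coefficients $\bm{w} = W {\bm{y}}$ in $O(n)$ time by repeated pairwise averaging and differencing, without ever forming the matrix $W$.

```python
import math

def haar_step(v):
    """One level: orthonormal pairwise sums (approximations) and differences (details)."""
    s = 1 / math.sqrt(2)
    approx = [s * (v[2 * k] + v[2 * k + 1]) for k in range(len(v) // 2)]
    detail = [s * (v[2 * k] - v[2 * k + 1]) for k in range(len(v) // 2)]
    return approx, detail

def haar_dwt(v):
    """Empirical coefficients w = W y: [scaling coeff, coarse details, ..., finest details]."""
    details = []
    while len(v) > 1:
        v, d = haar_step(v)
        details.append(d)
    return [v[0]] + [c for d in reversed(details) for c in d]

y = [2.0, 2.0, 2.0, 2.0]
print(haar_dwt(y))  # scaling coefficient ~4.0; every detail z_{j,k} is exactly 0.0
```

Since $W$ is orthonormal, the transform preserves the squared norm of the input, which is what makes the white-noise structure of ${\bm{u}}=W{\bm{\epsilon}}$ carry over to the wavelet domain.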
Obviously, it would not be reasonable to simply treat multi-dimensional observations as a vector and apply 1D wavelet regression analysis to an arbitrarily given fixed vectorization; see [@Donoho:99; @Jac+:11; @Ali+:13]. Such a vectorization ignores the structure of the underlying function, and thus will result in less effective “energy concentration”, i.e., producing a wavelet decomposition of ${\bm f}$ that is not very sparse—with many non-zero $z_{j,k}$’s of small to moderate sizes, reducing the signal-to-noise ratio at those $(j,k)$ combinations.
Fortunately, for each specific data set at hand, there typically exist some permutations of the points in ${\Omega}$ that effectively reorganize the data so that the resulting 1D wavelet coefficients provide an efficient representation of the underlying function. See \[fig:RDP\] for an illustration. Adopting a model choice viewpoint, one can think of the wavelet regression model under each index permutation as a competing generative model for the observed data. This perspective inspires us to incorporate a prior on the permutations, thereby allowing us to compute a posterior on the space of competing wavelet regression models, and then to either select some good models [@raftery1995bayesian] or average over the models [@Volinsky1999].
This approach does incur a common challenge in high-dimensional Bayesian model choice—that the space of all permutations is massive, and brute-force enumeration of the space is computationally impractical. In the current context, effective exploration of the model space becomes possible, however, once we realize that the vast majority of the permutations will lead to wavelet regression models that ignore the spatial smoothness of the underlying function—i.e., close locations in ${\Omega}$ often correspond to similar values in ${\bm f}$. As such, we shall focus attention on a subclass of permutations that to various extents preserve smoothness, and design a model space prior supported on this subclass. To this end, we appeal to a relationship between recursive dyadic partitioning (RDP) on ${\Omega}$ and permutations, and shall consider the collection of permutations induced by RDPs. Next we introduce some basic notions regarding RDPs on ${\Omega}$, which are then used to construct a prior on permutations. In reading the next two subsections, the reader may refer to \[fig:RDP\] as an example to help understand the notions and notations.
### Recursive dyadic partitioning on the location space
A [*partition*]{} of ${\Omega}$ is a collection of nonempty sets $\{A_1,A_2,\ldots,A_H\}$ such that ${\Omega}=\cup_{h=1}^{H} A_h$ and $A_{h_1}\cap A_{h_2}=\emptyset$ for any $h_1\neq h_2$. Now let ${\mathcal{T}}^{0},{\mathcal{T}}^1,{\mathcal{T}}^{2},\ldots,{\mathcal{T}}^{j},\ldots$ be a [*sequence of partitions*]{} of ${\Omega}$. We say that this sequence is a [*recursive dyadic partition*]{} (RDP) if it satisfies the following two conditions: (i) ${\mathcal{T}}^{j}$ consists of $2^{j}$ blocks: ${\mathcal{T}}^{j}=\{A_{j,k}:k=0,1,\ldots,2^{j}-1\}$; (ii) ${\mathcal{T}}^{j+1}$ is obtained by dividing each set in ${\mathcal{T}}^{j}$ into two pieces, i.e., $A_{j,k} = A_{j+1,2k}\cup A_{j+1,2k+1}$ for all $j\geq 0$ and $k=0,1,\ldots,2^{j}-1$.
We call an RDP [*canonical*]{} if the sequence of partitions satisfies two additional conditions: (iii) the partition blocks $A_{j,k}$ are rectangles of the form $$A_{j,k} = [a_{j,k}^{(1)},b_{j,k}^{(1)}] \times [a_{j,k}^{(2)},b_{j,k}^{(2)}] \times \cdots \times [a_{j,k}^{(m)},b_{j,k}^{(m)}],$$ and (iv) $A_{j+1,2k}$ and $A_{j+1,2k+1}$ are produced by dividing $A_{j,k}$ into two halves at the middle of one of $A_{j,k}$’s *divisible* dimensions.
A rectangular partition block $A_{j,k}$ is [*divisible*]{} in dimension $d$ if $A_{j,k}$ is supported on at least two values in that dimension, i.e., $a_{j,k}^{(d)} < b_{j,k}^{(d)}$. In this case, if $A_{j,k}$ is divided in dimension $d$, then its children $A_{j+1,2k}$ and $A_{j+1,2k+1}$ are given by $$[a_{j+1,2k}^{(d)},b_{j+1,2k}^{(d)}] = [a_{j,k}^{(d)},(a_{j,k}^{(d)}+b_{j,k}^{(d)})/2]$$ and $$[a_{j+1,2k+1}^{(d)},b_{j+1,2k+1}^{(d)}] = [(a_{j,k}^{(d)}+b_{j,k}^{(d)})/2+1,b_{j,k}^{(d)}],$$ while $$[a_{j+1,2k}^{(d')},b_{j+1,2k}^{(d')}] = [a_{j+1,2k+1}^{(d')},b_{j+1,2k+1}^{(d')}] = [a_{j,k}^{(d')},b_{j,k}^{(d')}]$$ for all $d'\neq d$.
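The splitting rule above is easy to state in code; here is a minimal sketch, where a block is represented as a list of per-dimension $(a, b)$ index ranges and $d$ is a 0-based dimension index (both representational choices of this illustration):

```python
def divide(block, d):
    """Split a rectangular block, given as a list of (a, b) index ranges per
    dimension, at the midpoint of its d-th dimension (0-based).
    Requires the block to be divisible in dimension d, i.e., a < b."""
    a, b = block[d]
    assert a < b, "block must be divisible in dimension d"
    mid = (a + b) // 2                      # integer midpoint; block sizes are powers of 2
    left = list(block); left[d] = (a, mid)
    right = list(block); right[d] = (mid + 1, b)
    return left, right
```

For instance, splitting the $4\times 4$ block $[0,3]\times[0,3]$ in the first dimension yields the halves $[0,1]\times[0,3]$ and $[2,3]\times[0,3]$.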
Any canonical RDP on ${\Omega}$ will have exactly $J+1$ levels, i.e., ${\mathcal{T}}^{0},{\mathcal{T}}^{1},\ldots,{\mathcal{T}}^{J}$. The $j$th level partition ${\mathcal{T}}^{j}$ consists of $2^{j}$ rectangular pieces of equal size, each covering $n/2^{j}$ locations in ${\Omega}$. From now on, we simply use RDP to refer to canonical ones when this causes no confusion.
### RDPs and permutations
Each RDP can be represented by a depth-$J$ bifurcating tree with the partition blocks in ${\mathcal{T}}^{j}$ forming the $2^j$ nodes in the $j$th level of the tree. As such, we can use ${\mathcal{T}}=\cup_{j=0}^{J} {\mathcal{T}}^{j}$ to represent the RDP. Each node in the $J$th level corresponds to a unique location in ${\Omega}$, and is called “atomic” as it contains a single element. We shall interchangeably refer to an RDP as a tree, and to the partition blocks as nodes.
Given the RDP ${\mathcal{T}}$, each location ${\bm{s}}\in {\Omega}$ falls into a unique branch of ${\mathcal{T}}$, that is, ${\Omega}=A_{0,0}\supset A_{1,k_1({\bm{s}})}\supset A_{2,k_2({\bm{s}})}\supset \cdots \supset A_{J,k_J({\bm{s}})}=\{{\bm{s}}\}$, with $A_{j,k_j({\bm{s}})}$ being the node in the $j$th level to which ${\bm{s}}$ belongs. Accordingly, the RDP ${\mathcal{T}}$ induces a unique vectorization of the locations in ${\Omega}$ such that ${\bm{s}}$ corresponds to the $t({\bm{s}})$th element of the vector where $t({\bm{s}})=\sum_{l = 1}^J 2^{J-l}\cdot e_l({\bm{s}})$ with $e_l({\bm{s}}) = k_l({\bm{s}}) \bmod 2$ indicating the branch of the tree that ${\bm{s}}$ falls into at level $l$. As such, ${\mathcal{T}}$ induces a permutation of the $n$ locations, and we let $\pi_{{\mathcal{T}}}$ denote this permutation.
As an illustration, \[fig:RDP\] presents an RDP and the induced permutation using a toy $4 \times 4$ image (so $m = 2$ and $J_1 = J_2 = 2$). We index pixels in the true image from 0 to 15. In addition, we assume that the underlying function takes only two values—1 and 2—on the 16 locations, represented by the white and the red colors respectively. The demonstrated RDP corresponds well to the structure of the underlying signal, which would result in an effective 1D wavelet analysis on the vectorized observation.
We shall now utilize the relationship between RDPs and permutations to construct a prior on the latter. Before that, we shall simplify our notations a little. Note that while what the $(j, k)$th node $A_{j, k}$ is depends on the RDP ${\mathcal{T}}$, different RDP trees can share common nodes—the $(j,k)$th node in one ${\mathcal{T}}$ may be the same as the $(j,k')$th node in another. (Note that the level of the node must be the same in either RDP.) In the following, we will need to specify quantities that depend only on the node itself, regardless of the RDP tree ${\mathcal{T}}$ it arises from. A succinct way for expressing such quantities is to write them as a mapping from ${\mathcal{A}}$ to ${\mathbb{R}}$, where ${\mathcal{A}}$ denotes the finite collection of all sets that [*could*]{} be nodes in [*some*]{} RDP, or equivalently, ${\mathcal{A}}$ is the totality of nodes in all RDPs. (This is to be distinguished from the collection of nodes in any particular RDP, which is denoted by ${\mathcal{T}}$.) For example, we may define $\rho_{j,k}$ in a way that its value only depends on what the set $A_{j,k}$ is, regardless of the RDP ${\mathcal{T}}$ to which it belongs. In this case we can let $\rho_{j,k} = \rho(A_{j, k})$, where $\rho(\cdot)$ is a mapping from ${\mathcal{A}}$ to $[0,1]$.
The mapping-based notation such as $\rho(\cdot)$ allows various parameters to be specified in a node-specific (rather than RDP-specific) manner. This is critical as we show later that the space of nodes ${\mathcal{A}}$ for all canonical RDPs is of a cardinality linear in the size of ${\Omega}$, while that of canonical RDPs is exponential in $n$. (See Proposition \[lem:complexity.tree\] in the Supplementary Materials.) Therefore carrying out computation in a node-specific manner is key to achieving linear complexity. Moreover, this notation will also help elucidate derivations on the posterior.
### Priors on RDPs: random RDP\[section:priors.RDP\]
Our strategy of representing multi-dimensional functions using vectors will only pay off if the vectorization of ${\Omega}$ can result in efficient characterization of the data, thereby leading to stronger energy concentration under wavelet transforms. For example, the RDP illustrated in \[fig:RDP\] will lead to particularly efficient inference of the corresponding function. In general, the true optimal vectorization is unknown, and one shall rely on the data to learn the RDPs that induce “good” vectorizations. Next we shall achieve this through a hierarchical Bayesian approach, by placing a prior on the RDP.
Several priors on recursive partitions have been proposed in the literature. We consider the following generative prior on the RDP [@wongandma:2010; @ma:2013], which leads to very efficient posterior inference algorithms that scale linearly in $n$, the size of ${\Omega}$.
We describe the prior as a simple generative procedure in an inductive manner. First, ${\mathcal{T}}^{0}=\{{\Omega}\}$ by construction. Now suppose we have generated ${\mathcal{T}}^{0},{\mathcal{T}}^{1},\cdots,{\mathcal{T}}^{j}$ for some $0\leq j \leq J-1$, then ${\mathcal{T}}^{j+1}$ is generated as follows. For each $A_{j,k}\in{\mathcal{T}}^{j}$, let ${\mathcal{D}}(A_{j,k})\subset \{1,2,\ldots,m\}$ be the collection of its divisible dimensions. We randomly draw a dimension in ${\mathcal{D}}(A_{j,k})$, and divide $A_{j,k}$ in that dimension to get $A_{j+1,2k}$ and $A_{j+1,2k+1}$. In particular, we let $\lambda_{d}(A_{j,k})$ be the probability for drawing the $d$th dimension, where $\sum_{d=1}^{m} \lambda_{d}(A_{j,k}) = 1$ and $\lambda_{d}(A_{j,k})=0$ for $d\not\in {\mathcal{D}}(A_{j,k})$. In many problems, [*a priori*]{} one has no reason to favor dividing any particular dimension over another, and a default specification is to set $$\lambda_{d}(A_{j,k})=1/|{\mathcal{D}}(A_{j,k})|\cdot \I_{\{d\in{\mathcal{D}}(A_{j,k})\}},$$ where $\I_{E}$ is the indicator function of whether $E$ holds or not. This completes the inductive generation of ${\mathcal{T}}^{j+1}$. The procedure will terminate after ${\mathcal{T}}^{J}$ is generated as all nodes in ${\mathcal{T}}^{J}$ are atomic with no divisible dimensions.
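The generative procedure just described can be sketched as a short recursive sampler; here we use the default uniform selection probabilities over divisible dimensions, and return the vectorization of ${\Omega}$ induced by the sampled RDP (function names and the block representation are conventions of this illustration):

```python
import random

def sample_rrdp_permutation(dims, seed=None):
    """Draw an RDP from the RRDP prior with uniform selection probabilities
    and return the induced vectorization: a list whose t-th entry is the
    location placed at position t by the sampled tree."""
    rng = random.Random(seed)

    def recurse(block):
        # block is a list of per-dimension (a, b) index ranges
        divisible = [d for d, (a, b) in enumerate(block) if a < b]
        if not divisible:                     # atomic node: a single location
            return [tuple(a for a, _ in block)]
        d = rng.choice(divisible)             # uniform lambda over divisible dims
        a, b = block[d]
        mid = (a + b) // 2
        left, right = list(block), list(block)
        left[d], right[d] = (a, mid), (mid + 1, b)
        return recurse(left) + recurse(right) # left branch (e_l = 0) comes first
    return recurse([(0, n - 1) for n in dims])
```

Concatenating the left branch before the right branch at every split reproduces exactly the ordering $t({\bm{s}})=\sum_{l} 2^{J-l} e_l({\bm{s}})$ defined above.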
The above generative mechanism forms a probability distribution on the space of RDPs, which is called the [*random recursive dyadic partition*]{} (RRDP) distribution, and it is specified by the collection of selection probabilities $\lambda_{d}(\cdot)$ defined on all [*potential*]{} nodes. We write $${\mathcal{T}}\sim \text{RRDP}({\bm\lambda}),$$where ${\bm\lambda}=\{ {\bm\lambda}(A): A\in{\mathcal{A}}\}$ and ${\bm\lambda}(A)=(\lambda_1(A),\lambda_2(A),\ldots,\lambda_{m}(A))'$, that is, ${\bm\lambda}$ is a mapping from ${\mathcal{A}}$ to the $(m-1)$-dimensional simplex.
It is worth noting that the RRDP is nothing but a restrictive version of the Bayesian classification and regression tree (CART) prior [@Chipman1998a; @Denison1998]. The main constraint in RRDP compared to the general Bayesian CART is that the former is supported on canonical RDPs only—that is, each dyadic partition must be an even split, occurring at the middle of the range in one of the divisible dimensions. This additional restriction ensures the cardinality of ${\mathcal{A}}$ to be linear in $n$, thereby reducing the computational complexity required for inference to $O(n)$.
Recipes for Bayesian inference\[sec:posterior.inference\]
---------------------------------------------------------
Bayesian inference can proceed if we can derive the marginal posterior of ${\mathcal{T}}$, because given ${\mathcal{T}}$, we have a standard Bayesian wavelet regression model, for which classical inference strategies can apply. In this section, we present recipes for deriving and sampling from the posterior, and for evaluating posterior summaries such as the posterior mean of ${\bm f}$.
It turns out that when a Haar basis is adopted in the wavelet regression model, there exists a closed-form generative expression for the marginal posterior of ${\mathcal{T}}$, with which one can sample from the posterior directly through vanilla Monte Carlo (not Markov chain Monte Carlo). This closed-form posterior can be calculated through a recursive algorithm that is operationally similar to Mallat’s pyramid algorithm, achieving a linear computational complexity $O(n)$.
Before describing the results for the Haar basis, we note that WARP can also be applied to other wavelet bases. In such cases, although the exact inference recipe available for the Haar basis is lost and there is no analytic expression for the posterior, an efficient sequential Monte Carlo (SMC) algorithm can be constructed for inference, using the analytic solution for the Haar basis as proposals. However, our experience in extensive numerical experiments suggests that applying WARP to other bases generally yields no substantive performance gain, at least in image analysis, that would justify the additional complexity and Monte Carlo variation involved in the SMC algorithm. As such we defer details on the SMC strategy for other bases to the Supplementary Materials, mainly for completeness.
### Exact Bayesian inference under Haar basis\[section:estimation\]
The Haar wavelet basis is unique in its very short support, which leads to the desirable property that under the vectorization induced by any RDP ${\mathcal{T}}$, the $(j,k)$th wavelet coefficient is determined only by the observations at locations inside the node $A_{j,k}$. We call this property of the Haar basis [*node-autonomy*]{} and say that inference under the Haar basis is [*node-autonomous*]{}.
The node-autonomy of Haar wavelets has an important computational implication—inference can be carried out in a [*self-similar*]{} fashion on each node (to be explained below), avoiding integration in the much larger space of RDPs. Consequently, exact inference can be completed in a computational complexity of the same scale as the total number of all nodes of all possible RDP trees, which is equal to $\prod_{i = 1}^m (2 n_i - 1) = O(2^m n)$.
Specifically, for all RDPs in which $A$ is a node and is divided in the $d$th direction, the corresponding Haar wavelet coefficient associated with the node $A$ is given by $$w_{d}(A) = 1/\sqrt{|A|} \cdot \left(\sum_{{\bm{x}}\in A^{(d)}_{l}}y({\bm{x}})-\sum_{{\bm{x}}\in A^{(d)}_{r}}y({\bm{x}})\right)$$ where $A^{(d)}_{l}$ and $A^{(d)}_{r}$ represent the two children nodes if $A$ is divided in the $d$th dimension and $|A|=2^{J-j}$ is the total number of locations in $A$. In contrast, wavelet coefficients from wavelet bases with longer support than Haar are not node-autonomous—the coefficient associated with $A$ depends not only on the observations within $A$ but also on those in other (often but not always adjacent) nodes in ${\mathcal{T}}$.
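Node-autonomy is easy to see in code: computing $w_d(A)$ touches only the entries of ${\bm{y}}$ inside the block $A$. A minimal sketch using a numpy array for the observations (0-based axis numbering and the block representation are conventions of this illustration):

```python
import numpy as np

def haar_node_coeff(y, block, d):
    """Node-autonomous Haar coefficient for a rectangular block divided in
    dimension d: (sum over left child - sum over right child) / sqrt(|A|).
    `y` is an m-dimensional numpy array; `block` is a tuple of (a, b) ranges."""
    a, b = block[d]
    mid = (a + b) // 2
    sl = [slice(lo, hi + 1) for lo, hi in block]
    left = list(sl); left[d] = slice(a, mid + 1)
    right = list(sl); right[d] = slice(mid + 1, b + 1)
    size = np.prod([hi - lo + 1 for lo, hi in block])   # |A|
    return (y[tuple(left)].sum() - y[tuple(right)].sum()) / np.sqrt(size)
```

Only the slices of `y` inside `block` are read, which is precisely the node-autonomy property exploited by the recursive algorithms that follow.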
Next we lay out the general strategy for inference. We show through two theorems that generic inference recipes exist for two popular classes of Bayes wavelet regression models—(i) those that model each wavelet coefficient independently (Theorem \[thm:independent\_shrinkage\]); and (ii) those that induce a hidden Markov model (HMM) for incorporating dependency among the wavelet coefficients (Theorem \[thm:latent\_state\]).
\[thm:independent\_shrinkage\] Suppose ${\mathcal{T}}\sim {\rm RRDP}({\bm\lambda})$ and given the Haar DWT under ${\mathcal{T}}$, one models the wavelet coefficients independently, i.e., $(w_{j,k},z_{j,k}) {\stackrel{\mathrm{ind}}{\sim}}p_{j,k}(w,z\,|\,{\bm{\phi}})$ for all $(j,k)$, where ${\bm{\phi}}$ represents the hyperparameters of the Bayesian wavelet regression model. Then the marginal posterior of ${\mathcal{T}}$ is still an RRDP. Specifically, ${\mathcal{T}}\,|\,{\bm{y}}\sim {\rm RRDP}(\tilde{{\bm\lambda}})$ where the posterior selection probability mapping $\tilde{{\bm\lambda}}$ is given as $$\tilde{\lambda}_{d}(A) =\lambda_{d}(A) M_{d}(A) \Phi(A^{(d)}_{l})\Phi(A^{(d)}_{r})/\Phi(A)$$ for any non-atomic $A\in {\mathcal{A}}$ where $M_{d}(A)$ is the marginal likelihood contribution from the wavelet coefficient on node $A$ if it is a node in ${\mathcal{T}}$ and divided in dimension $d$, i.e., $M_{d}(A) = \int p_{j,k}(w_{d}(A),z\,|\,{\bm{\phi}}) \, dz$ and $\Phi:{\mathcal{A}}\rightarrow [0,\infty)$ is a mapping defined recursively (i.e., its value on $A$ depends on its values on $A$’s children) as $$\Phi(A) = \sum_{d\in {\mathcal{D}}(A)} \lambda_{d}(A) M_{d}(A) \Phi(A^{(d)}_{l})\Phi(A^{(d)}_{r})$$ if $A$ is not atomic, and $\Phi(A) = 1$ if $A$ is atomic.
Remark: $\Phi({\Omega})$ is the overall marginal likelihood. It is a function of the hyperparameters ${\bm{\phi}}$, and can be used for specifying the hyperparameters ${\bm{\phi}}$ in an empirical Bayes strategy using maximum marginal likelihood estimation (MMLE).
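The recursion defining $\Phi$ (and with it $\tilde{\lambda}$) admits a direct memoized implementation. The sketch below takes the marginal likelihood terms $M_d(A)$ and prior selection probabilities $\lambda_d(A)$ as user-supplied callables, since their form depends on the chosen wavelet regression model; the callables and block representation are assumptions of this illustration:

```python
from functools import lru_cache

def make_phi(marginal, lam):
    """Memoized evaluation of the recursion in Theorem [thm:independent_shrinkage]:
    Phi(A) = sum_d lambda_d(A) * M_d(A) * Phi(A_l) * Phi(A_r), Phi(atomic) = 1.
    `marginal(block, d)` stands in for M_d(A), `lam(block, d)` for lambda_d(A);
    a block is a tuple of per-dimension (a, b) index ranges."""
    @lru_cache(maxsize=None)
    def phi(block):
        divisible = [d for d, (a, b) in enumerate(block) if a < b]
        if not divisible:
            return 1.0
        total = 0.0
        for d in divisible:
            a, b = block[d]
            mid = (a + b) // 2
            left = block[:d] + ((a, mid),) + block[d + 1:]
            right = block[:d] + ((mid + 1, b),) + block[d + 1:]
            total += lam(block, d) * marginal(block, d) * phi(left) * phi(right)
        return total
    return phi
```

With $M_d(A)\equiv 1$ (a flat likelihood) and uniform $\lambda$, the recursion returns $\Phi(A)=1$ for every block, a convenient sanity check; memoization over blocks is what keeps the computation linear in the number of potential nodes rather than the number of trees.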
\[thm:latent\_state\] Suppose ${\mathcal{T}}\sim {\rm RRDP}({\bm\lambda})$ and given ${\mathcal{T}}$ under a Haar DWT, one models the wavelet coefficients conditionally independently given a set of latent state variables ${\mathcal{S}}=\{S_{j,k}:j=0,1,2,\ldots,J, k=0,1,\ldots,2^j-1\}$ $$(w_{j,k},z_{j,k})\,|\,S_{j,k}=s {\stackrel{\mathrm{ind}}{\sim}}p_{j,k}^{(s)}(w,z\,|\,{\bm{\phi}}) \quad \text{for all $(j,k)$}$$ where $S_{j,k}\in \{1,2,\ldots,K\}$ is a latent state variable associated with $(j,k)$. Also, suppose the collection of all latent variables is modeled as a top-down Markov tree (MT) with transition kernel ${\bm\rho}$, ${\mathcal{S}}\sim {\rm MT}({\bm\rho})$, i.e., $${\rm P}(S_{j,k}=s'\,|\,S_{j-1,\lfloor k/2 \rfloor} = s) =\rho_j(s,s')$$ where $\rho_j(\cdot,\cdot)$ is the transition kernel of the Markov model which is allowed to differ across $j$. Then the joint marginal posterior of $({\mathcal{T}},{\mathcal{S}})$ can be specified fully as the following sequential generative process. Suppose ${\mathcal{T}}^{0},{\mathcal{T}}^{1},\ldots,{\mathcal{T}}^{j}$ and the latent variables up to level $j-1$ have been generated. (To begin, we have $j=0$ and ${\mathcal{T}}^{0}=\{{\Omega}\}$.) Then the state variables in level $j$ are generated from the following posterior transition probabilities $$\begin{aligned}
& {\rm P}(S_{j,k}=s'\,|\,S_{j-1,\lfloor k/2 \rfloor} = s,{\mathcal{T}}^{(j)},{\bm{y}}) \\
= & \; \rho_{j}(s,s') \sum_{d \in {\mathcal{D}}(A)} \lambda_{d}(A) M_{d}^{(s')}(A) \Phi_{s'}(A^{(d)}_{l})\Phi_{s'}(A^{(d)}_{r})/\Phi_{s}(A),
\end{aligned}$$ where $A$ is the node $A_{j,k}$ in ${\mathcal{T}}^{j}$. Given $S_{j,k}=s'$, suppose $j<J$, then ${\mathcal{T}}^{j+1}$ is generated by drawing $D_{j,k}$ from a multinomial with probabilities $\tilde{{\bm\lambda}}(A)$ such that $$\begin{aligned}
{\rm P}(D_{j,k}=d\,|\,S_{j,k}=s',{\mathcal{T}}^{(j)},{\bm{y}})
= \frac{\lambda_{d}(A) M_{d}^{(s')}(A) \Phi_{s'}(A^{(d)}_{l})\Phi_{s'}(A^{(d)}_{r})}{\sum_{d' \in {\mathcal{D}}(A)}\lambda_{d'}(A) M_{d'}^{(s')}(A) \Phi_{s'}(A^{(d')}_{l})\Phi_{s'}(A^{(d')}_{r})},
\end{aligned}$$ where $M_{d}^{(s)}(A)$ is the marginal likelihood contribution from the wavelet coefficient on node $A$ if it is a node in ${\mathcal{T}}$, is divided in dimension $d$ in ${\mathcal{T}}$, and its latent state is $s$. That is, $M^{(s)}_{d}(A) = \int p^{(s)}_{j,k}(w_{d}(A),z\,|\,{\bm{\phi}}) \, dz$ and $\bm{\Phi}=(\Phi_1,\Phi_2,\ldots,\Phi_K):{\mathcal{A}}\rightarrow [0,\infty)^{K}$ is a vector-valued mapping defined recursively as $
\Phi_s(A) = \sum_{s'}\rho_j(s,s') \sum_{d\in {\mathcal{D}}(A)} \lambda_{d}(A) M^{(s')}_{d}(A) \Phi_{s'}(A^{(d)}_{l})\Phi_{s'}(A^{(d)}_{r})
$ if $A$ is not atomic, and $\Phi_s(A) = 1$ if $A$ is atomic, for all $s\in \{1,2,\ldots,K\}$, where $j$ is the level of $A$.
Theorem \[thm:independent\_shrinkage\] and Theorem \[thm:latent\_state\] provide recipes for posterior sampling on $({\mathcal{T}},{\mathcal{S}})$. Given $({\mathcal{T}},{\mathcal{S}})$, one can further sample ${\bm{z}}$ from the conditional posterior corresponding to the chosen wavelet regression model, and Bayesian inference can proceed in the usual manner. For example, function estimation can proceed through drawing $B$ posterior samples $$({\mathcal{T}}^{(1)},{\mathcal{S}}^{(1)},{\bm{z}}^{(1)}),({\mathcal{T}}^{(2)},{\mathcal{S}}^{(2)},{\bm{z}}^{(2)}),\ldots,({\mathcal{T}}^{(B)},{\mathcal{S}}^{(B)},{\bm{z}}^{(B)}).$$ For the $b$th draw, we can compute the corresponding function ${\mbox{\boldmath $ f$}}^{(b)}$ using the inverse DWT $${\mbox{\boldmath $ f$}}^{(b)} = \pi_{{\mathcal{T}}^{(b)}}^{-1} \left(W^{-1} {\bm{z}}^{(b)}\right),$$ where $\pi_{{\mathcal{T}}}^{-1}$ denotes the inverse permutation corresponding to an RDP ${\mathcal{T}}$. Based on the posterior samples of ${\mbox{\boldmath $ f$}}$, we can estimate the posterior mean ${\rm E}({\mbox{\boldmath $ f$}}|{\bm{y}})$ and construct pointwise credible bands. If a point estimate is of ultimate interest such as in image reconstruction, we can estimate the posterior function mean by a Rao-Blackwellized Monte Carlo on the posterior means of ${\mbox{\boldmath $ z$}}$: $${\rm E}(\bm{f}\,|\,{\bm{y}}) \approx \frac{1}{B} \sum_{b=1}^{B} \pi_{{\mathcal{T}}^{(b)}}^{-1} \left(W^{-1} \text{E}({\bm{z}}^{(b)} | {\mathcal{T}}^{(b)}, {\bm{y}}) \right).$$ For some Bayesian wavelet regression models, this posterior mean can actually be computed analytically, eliminating the need for posterior sampling altogether. We present an algorithm to this end for a class of popular wavelet regression models in Section \[sec:RRDP.OP\], after first reviewing these models in Section \[section:background\].
### Examples of compatible Bayesian wavelet regression models\[section:background\]
So far we have kept the description of the Bayesian wavelet regression model general, using generic notations such as $p(w_{j,k},z_{j,k}\,|\,{\bm{\phi}})$ and $p(w_{j,k},z_{j,k}\,|\,S_{j,k},{\bm{\phi}})$ without spelling out the details. Next we describe some of the most popular Bayesian wavelet regression models. They indeed take these general forms and therefore our framework is applicable to them.
A popular class of Bayesian wavelet regression models for achieving adaptive shrinkage of ${\bm{z}}$ utilize the so-called spike-and-slab prior, which introduces a latent binary random variable $S_{j,k}$ for each $(j,k)$ such that $$\begin{aligned}
\label{eq:spike_and_slab}
z_{j,k}\,|\,S_{j,k} {\stackrel{\mathrm{ind}}{\sim}}(1 - S_{j, k}) \delta_0(z_{j,k}) + S_{j, k} \gamma(z_{j, k} | \tau_j, \sigma) \end{aligned}$$ where $\delta_0(\cdot)$ is a point mass at 0, and $\gamma(\cdot | \tau_j, \sigma)$ is a fixed unimodal symmetric density that possibly depends on $\sigma$ and another scale parameter $\tau_j$. A common choice of $\gamma(\cdot | \tau_j, \sigma)$ is the normal distribution with mean 0 and variance $\tau_j \sigma^2$, denoted by $\phi(\cdot | 0, \sqrt{\tau_j} \sigma)$, while heavy-tailed priors including the Laplace and quasi-Cauchy distributions [@Johnstone+Silverman:05] also enjoy desirable theoretical properties. Specifically, the function $\gamma(x \mid \tau_j, \sigma)$ is $$\gamma(x \mid \tau_j, \sigma) = a \exp(-a |x / \sigma|)/(2\sigma)$$ for Laplace priors where $a = \sqrt{2/\tau_j}$, and $$\gamma(x \mid \tau_j, \sigma) = (2 \pi)^{-1/2}\{1 - |x/\sigma| \cdot \tilde{\Phi}(|x/\sigma|)/\phi(x/\sigma)\}/\sigma$$ for quasi-Cauchy priors with $\tilde{\Phi}(x) = \int_x^{\infty} \phi(t \mid 0, 1) dt$.
Many authors [@Chipman1997; @Clyde+George:00; @Brown2001; @Morris2006] adopt independent priors on the latent shrinkage state variable $S_{j,k}$ $$S_{j,k} {\stackrel{\mathrm{ind}}{\sim}}{\rm Bern}(\rho_{j,k}).$$ One way to specify ${\bm\rho}= \{\rho_{j,k}, 0 \leq k < 2^j, 0 \leq j \leq J - 1\}$ that properly controls for multiplicity is $\rho_{j,k} \propto 2^{-j}$. The specification of ${\bm{\tau}}= \{\tau_j, 0 \leq j \leq J - 1\}$ of course depends on the choice of $\gamma(\cdot | \tau_j, \sigma)$. For instance, if one uses $\tau_j=2^{-\alpha j}\tau_0$ for the normal and Laplace priors, this leads to the reduced parameter ${\bm{\tau}}= (\alpha, \tau_0)$. One can use $\tau_j \equiv 1$ for the quasi-Cauchy prior. Other authors show that introducing Markov dependency into the latent shrinkage states can substantially improve inference by allowing effective borrowing of information across locations and scales.
Carrying out inference under WARP requires the conditional posterior of $z_{j,k}$ given $({\mathcal{T}},{\mathcal{S}})$. For the above popular models, this posterior is given by $$z_{j,k}\,|\,S_{j, k}, {\bm{y}}{\stackrel{\mathrm{ind}}{\sim}}(1 - S_{j, k}) \delta_0(z_{j, k}) + S_{j, k} f_1(z_{j, k} | w_{j,k}, \tau_j, \sigma),$$ where $f_1(z_{j, k} \mid w_{j, k}, \tau_j, \sigma) \propto \phi(w_{j, k} \mid z_{j, k}, \sigma) \cdot \gamma(z_{j, k} \mid \tau_j, \sigma)$. The function $f_1(z_{j, k} \mid w_{j, k}, \tau_j, \sigma)$ is analytically available if $\gamma(\cdot \mid \tau_j, \sigma)$ is the density of normal, Laplace, or quasi-Cauchy distributions. For the normal prior where $\gamma(\cdot \mid \tau_j, \sigma) = \phi(\cdot \mid 0, \sqrt{\tau_j} \sigma)$, $f_1(\cdot \mid w_{j, k}, \tau_j, \sigma)$ is the density of ${\rm N}(w_{j, k}/(1 + \tau_j^{-1}), \sigma^2/(1 + \tau_j^{-1}))$. For Laplace and quasi-Cauchy priors, analytical forms of $f_1(\cdot \mid \tau_j, \sigma)$ are available in [@Johnstone+Silverman:05 Sec. 2.3]. As it is often the mean corresponding to $f_1$ that is needed for posterior estimation, we here give the closed forms of the means by integrating out $z_{j, k}$ with respect to its posterior distribution. Let the corresponding mean function be $\mu_1(w_{j, k}, \tau_j, \sigma)$, which is given by $$w_{j, k}/(1 + \tau_j^{-1})$$ for normal priors, $$w_{j, k} - \sigma \frac{a \{ e^{-a w_{j, k}/\sigma} \Phi(w_{j, k}/\sigma - a) - e^{aw_{j, k}/\sigma} \tilde{\Phi}(w_{j, k}/\sigma + a) \}}{e^{-a w_{j, k}/\sigma} \Phi(w_{j, k}/\sigma - a) + e^{aw_{j, k}/\sigma} \tilde{\Phi}(w_{j, k}/\sigma + a)}$$ for Laplace priors, and $${w_{j, k}} \left\{1 - \exp\left(-\frac{w_{j, k}^2}{2\sigma^2}\right)\right\}^{-1} - 2 \left( \frac{w_{j, k}}{\sigma}\right)^{-1}$$ for quasi-Cauchy priors.
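As an illustration of the normal-slab case, the overall posterior mean of $z_{j,k}$ multiplies $\mu_1$ by the posterior slab probability, which is available in closed form from the two marginal densities of $w_{j,k}$ (under the spike, $w\sim N(0,\sigma^2)$; under the slab, $w\sim N(0,(1+\tau_j)\sigma^2)$). A minimal sketch, with the function name and argument order our own:

```python
import math

def shrink_normal(w, tau, sigma, rho):
    """Spike-and-slab posterior mean of z under a N(0, tau*sigma^2) slab:
    E[z | w] = P(S=1 | w) * w / (1 + 1/tau), with prior slab probability rho."""
    def normal_pdf(x, sd):
        return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    m1 = normal_pdf(w, sigma * math.sqrt(1.0 + tau))  # marginal of w under the slab
    m0 = normal_pdf(w, sigma)                          # marginal of w under the spike
    post_rho = rho * m1 / (rho * m1 + (1 - rho) * m0)  # posterior slab probability
    return post_rho * w / (1.0 + 1.0 / tau)            # post_rho * mu_1(w, tau, sigma)
```

The estimate is always shrunk toward zero relative to $w$, with the shrinkage strongest for small $|w|$ where the spike explains the data well.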
### WARP with local block shrinkage\[sec:RRDP.OP\]
Traditional wavelet analysis is done by fixing the maximum depth of the wavelet tree at $J$. That is, one partitions the index space all the way down to the finest level of “atomic” blocks. In most practical problems, once the blocks are small enough, the function value within the block becomes essentially constant with respect to the noise level, and so further division within such homogeneous blocks will be wasteful and will reduce statistical efficiency. For example, in \[fig:RDP\] the partition in the upper left block (Level 3) along with its descendants is not necessary. Thus it is often desirable to incorporate adaptivity in the depth of the wavelet tree and allow it to terminate before reaching level $J$. In practice the optimal maximum depth varies across ${\Omega}$. For example, some parts of an image may contain many interesting details, while the rest does not—e.g., an image of a painting hung on a gray wall. A high resolution will be needed to capture the details in the painting, but would be unnecessary and would introduce additional variability into the estimate for the wall.
This consideration is closely related to the idea of adaptive block shrinkage [@Cai1999] in the frequentist wavelet regression analysis. Once there is little evidence for any interesting structure within a subset of the index space, then the function value within that subset can be shrunk to a constant. That is, the wavelet tree is “pruned” there. Next we show that such pruning can be achieved in a hierarchical modeling manner, and the resulting Bayesian wavelet regression model is again compatible with our WARP framework.
To achieve such pruning, we introduce another set of latent variables ${\mathcal{R}}=\{R_{j,k}: j=0,1,\ldots, J-1, k=0,1,\ldots,2^{j}-1\}$, where $R_{j,k}=1$ indicates that the tree is pruned at node $(j,k)$. Next we describe a generative prior on ${\mathcal{R}}$ that will blend well with the WARP framework. To start, let $
R_{0,0} {\stackrel{\mathrm{ind}}{\sim}}{\rm Bern}(\eta_{0,0})$ and, for all $j\geq 1$, $$R_{j,k}\,|\,R_{j-1,\lfloor k/2 \rfloor} {\stackrel{\mathrm{ind}}{\sim}}\begin{cases}
{\rm Bern}(\eta_{j,k}) & \text{if $R_{j-1,\lfloor k/2 \rfloor}=0$}\\
{\rm Bern}(1) & \text{if $R_{j-1,\lfloor k/2 \rfloor}=1$}.
\end{cases}$$ That is, if a node’s parent has been pruned, then its children are also pruned by construction. From now on, we shall refer to this prior model on ${\mathcal{R}}$ as an [*optional pruning*]{} (OP) model [@ma:2013], which is specified by a set of [*pruning probabilities*]{} $\eta_{j,k}\in [0,1]$. We write ${\mathcal{R}}\sim {\rm OP}(\bm{\eta})$.
Given ${\mathcal{R}}$, we can modify our prior on ${\mathcal{S}}$ to reflect the effect of pruning. For example, instead of an independent prior on $S$, we can now generate them as follows $$S_{j,k}\,|\,{\mathcal{R}}{\stackrel{\mathrm{ind}}{\sim}}\begin{cases}
{\rm Bern}(\rho_{j, k}) & \text{if $R_{j,k}=0$}\\
{\rm Bern}(0) & \text{if $R_{j,k}=1$.}
\end{cases}$$ That is, if the node has not been pruned, then we generate $S_{j,k}$ from the independent Bernoulli as in the standard spike-and-slab setup, but if the node has been pruned, then by construction, we must have $S_{j,k}=0$ due to pruning.
It is often reasonable to specify the prior shrinkage and pruning probabilities as functions of the level in the RDP. That is, $\rho_{j, k} = \rho_j$ and $\eta_{j, k} = \eta_j$ for all $k$. In the node-specific notation, $\rho(A)=\rho_j$ and $\eta(A)=\eta_j$ for every level-$j$ node $A\in {\mathcal{A}}$. In this case, one can show that this joint model on $({\mathcal{S}},{\mathcal{R}})$ is equivalent to a Markov tree model with three states defined in terms of the combinations $(S_{j,k},R_{j,k})=(1,0)$, $(0,0)$, or $(0,1)$, and with the corresponding transition matrix for $S_{j,k}$ given by $${\bm\rho}_j =
\begin{bmatrix}
\rho_j (1-\eta_j) & (1-\rho_j)(1-\eta_j) & \eta_j\\
\rho_j (1-\eta_j) & (1-\rho_j)(1-\eta_j) & \eta_j\\
0 & 0 & 1
\end{bmatrix}.$$ This allows us to derive the posterior from Theorem \[thm:latent\_state\], and carry out inference accordingly. Specifically, for each $A \in {\mathcal{A}}$, let $p_0(A)$ be the marginal likelihood contributed from the wavelet coefficients in $A$ and its descendants if $A$ is pruned, i.e., $$p_0(A)=\frac{1}{(\sqrt{2\pi\sigma^2})^{|A|-1}} \exp\left\{-\frac{\sum_{x\in A}(y(x)-\bar{y}(A))^2}{2\sigma^2}\right\},$$ where $\bar{y}{(A)} = \sum_{x \in A} y(x)/|A|$. If $A \in {\mathcal{T}}$, the following maps are directly available from Theorem \[thm:latent\_state\]:
- The marginal likelihood contribution from the data within node $A$ if $A$ is divided in dimension $d$: $$M_{d}(A)=\rho(A) M_{d}^{(1)}(A) + (1-\rho(A)) M_{d}^{(0)}(A);$$
- The posterior spike probability $\tilde{\rho}_{d}$ of $A$ if $A$ is divided in dimension $d$: $$\tilde{\rho}_{d}(A)=\rho(A)M^{(1)}_{d}(A)/M_{d}(A);$$
- The marginal likelihood from data on $A$ and its descendants: $$\Psi(A)=(1-\eta(A))\sum_{d\in {\mathcal{D}}(A)} \lambda_d(A) M_{d}(A) \Psi(A_{l}^{(d)})\Psi(A_{r}^{(d)}) + \eta(A) p_{0}(A)$$ if $A$ is not atomic, and $\Psi(A)=1$ if $A$ is atomic;
- The posterior probability of pruning $A$: $$\tilde{\eta}(A)=\eta(A)p_{0}(A)/\Psi(A);$$
- The posterior probability for $A$ to be divided in dimension $d$ given $A$ is not pruned: $$\tilde{\lambda}_{d}(A)=(1 - \eta(A))\frac{\lambda_{d}(A) M_{d}(A)\Psi(A^{(d)}_{l})\Psi(A^{(d)}_{r})}{\Psi(A)-\eta(A)p_{0}(A)}.$$
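In practice $p_0(A)$ is best evaluated on the log scale to avoid underflow for large blocks; here is a minimal sketch, taking the observations in the block as a flat list (a representational choice of this illustration):

```python
import math

def log_p0(vals, sigma):
    """Log marginal likelihood of a pruned block, i.e., a Gaussian likelihood
    centered at the block mean:
    log p_0(A) = -(|A| - 1)/2 * log(2*pi*sigma^2) - RSS / (2*sigma^2)."""
    n = len(vals)
    mean = sum(vals) / n
    rss = sum((v - mean) ** 2 for v in vals)  # residual sum of squares around the mean
    return -(n - 1) / 2.0 * math.log(2 * math.pi * sigma * sigma) - rss / (2 * sigma * sigma)
```

A perfectly homogeneous block has zero residual sum of squares, so its log marginal reduces to the normalizing term alone, reflecting that pruning is most favored where the data are locally constant.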
For the Haar basis, the posterior mean ${\rm E}({\mbox{\boldmath $ f$}}|{\bm{y}})$ can be evaluated analytically through recursive message passing, without any Monte Carlo sampling, for Bayesian wavelet regression models that adopt the spike-and-slab setup along with optional pruning of the wavelet tree; this class contains the models without optional pruning as special cases with zero pruning probabilities. We describe the strategy next and will use it to compute ${\rm E}({\mbox{\boldmath $ f$}}|{\bm{y}})$ in our numerical examples.
For each $A\in {\mathcal{A}}$, let $c(A)$ be the scale (father wavelet) coefficient on $A$ if $A\in{\mathcal{T}}$, and let $\varphi(A)= {\rm E}(c(A) \I_{\{A\in{\mathcal{T}}\}} \,|\, {\bm{y}})$. Note that ${\rm E}({\mbox{\boldmath $ f$}}\,|\,{\bm{y}})$ is given by $\varphi(A)$ for all atomic $A$. To compute the mapping $\varphi$, we introduce two auxiliary mappings $\psi_0(A) = {\rm P}(A\in{\mathcal{T}},R(A)=0\,|\,{\bm{y}})$ and $\varphi_0(A) = {\rm E}(c(A) \I_{\{A\in{\mathcal{T}}, R(A)=0\}} \,|\, {\bm{y}}). $ Let $\bar{A}^{(d)}$ denote the parent of $A$ in ${\mathcal{T}}$ if $A$ is a child node after dividing its parent in the $d$th dimension, and let $\mathcal{P}(A)\subset\{1,2,\ldots,m\} $ be the collection of dimensions of $A$ that do not have full support $[0,2^{J_i} - 1]$, i.e., those that have been partitioned at least once in previous levels. Theorem \[thm:recursive.map\] gives a recursive message passing algorithm for computing the tri-variate mapping $(\psi_0, \varphi_0, \varphi):{\mathcal{A}}\rightarrow {\mathbb{R}}^3$.
\[thm:recursive.map\] To initiate the recursion, for $A = \Omega$, we let $\psi_0(A) = 1 - \tilde{\eta}(A), \varphi_0(A) = (1 - \tilde{\eta}(A)) |A|/\sqrt{n}$, and $\varphi(A) = |A|/\sqrt{n}$. Suppose we have evaluated these mappings up to level $j-1$. Then for a node $A$ at level $j = 1, \ldots, J$, we have $$\begin{aligned}
\psi_0(A) & = \sum_{d\in\mathcal{P}(A)} \psi_0(\bar{A}^{(d)}) \tilde{\lambda}_{d}(\bar{A}^{(d)})(1-\tilde{\eta}(A)); \\
\varphi_0(A) & = (1 - \tilde{\eta}(A)) \cdot \sum_{d\in\mathcal{P}(A)} \frac{ \tilde{\lambda}_{d}(\bar{A}^{(d)}) }{\sqrt{2}}\bigg[\varphi_0(\bar{A}^{(d)}) - \\
& \hspace*{1in}\tilde{\rho}_{d}(\bar{A}^{(d)}) \mu_1(w_{d}(\bar{A}^{(d)})) \psi_0(\bar{A}^{(d)})(-1)^{\I(\text{$A$ is the left child of $\bar{A}^{(d)}$})} \bigg]; \\
\varphi(A) & = \frac{\varphi_0(A)}{1 - \tilde{\eta}(A)} \hspace*{-0.03in}+\hspace*{-0.03in} \frac{1}{\sqrt{2}} \hspace*{-0.06in}\sum_{ d\in\mathcal{P}(A)}\hspace*{-0.1in} \{\varphi(\bar{A}^{(d)}) - \varphi_0(\bar{A}^{(d)}) \}\lambda_{d}(\bar{A}^{(d)}).
\end{aligned}$$
Remark: Note that this recursion is top-down (from low to high resolutions), whereas that for computing $\Phi$ is bottom-up (from high to low resolutions). The two-directional recursion shares the spirit of the forward-backward algorithm for HMMs.
Once we have computed the mapping $(\varphi_0,\psi_0,\varphi): {\mathcal{A}}\rightarrow {\mathbb{R}}^3$, the posterior mean ${\rm E}({\mbox{\boldmath $ f$}} \,|\,{\bm{y}})$ is then given by $\varphi$ applied on the atomic nodes. Note that this theorem applies to the special case with no pruning as well.
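To illustrate the flavor of the analytic posterior mean, the sketch below performs coefficient-wise spike-and-slab shrinkage on a fixed one-dimensional Haar tree with no pruning: each detail coefficient $w$ is shrunk to $\tilde{\rho}\, w/(1+\tau^{-1})$ and the signal is rebuilt by the inverse transform. The full algorithm of this section additionally averages over partitions and pruning indicators; the code here is an illustrative simplification with hypothetical names.

```python
import numpy as np

def haar_forward(y):
    """Orthonormal Haar transform of a length-2^J signal.
    Returns the scaling coefficient and detail coefficients, finest scale first."""
    y = np.asarray(y, dtype=float)
    details = []
    while len(y) > 1:
        even, odd = y[0::2], y[1::2]
        details.append((even - odd) / np.sqrt(2.0))
        y = (even + odd) / np.sqrt(2.0)
    return y[0], details

def haar_inverse(scale, details):
    """Invert haar_forward."""
    y = np.array([scale])
    for d in reversed(details):
        out = np.empty(2 * len(y))
        out[0::2] = (y + d) / np.sqrt(2.0)
        out[1::2] = (y - d) / np.sqrt(2.0)
        y = out
    return y

def posterior_mean_haar(y, sigma2, tau, rho):
    """Coefficient-wise spike-and-slab posterior mean on a fixed Haar tree:
    each detail coefficient w is shrunk to rho_tilde * w / (1 + 1/tau)."""
    def norm(w, var):
        return np.exp(-w * w / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    scale, details = haar_forward(y)
    shrunk = []
    for d in details:
        slab, spike = norm(d, sigma2 * (1.0 + tau)), norm(d, sigma2)
        rho_t = rho * slab / (rho * slab + (1.0 - rho) * spike)
        shrunk.append(rho_t * d / (1.0 + 1.0 / tau))
    return haar_inverse(scale, shrunk)
```

As boundary checks, $\rho = 0$ shrinks every coefficient to zero and returns the constant mean, while $\rho = 1$ with a very large $\tau$ applies essentially no shrinkage and returns the data.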
Experiments\[sec:experiments\]
==============================
In this section, we conduct extensive experiments to evaluate the performance of our proposed framework in the context of sparse coding and image reconstruction. In particular, we examine its ability to adaptively concentrate energy, its estimation accuracy, and its computational scalability. We compare WARP to a number of state-of-the-art wavelet and non-wavelet methods available in the literature. For illustration, we apply WARP to the independent spike-and-slab Bayesian wavelet regression model with the Haar basis and optional pruning to denoise both 2D and 3D images. Because the results for 2D and 3D images are similar, we report the results on 2D images in this section and defer most results on 3D images, except those for evaluating computational scalability, to the Supplementary Materials.
Our prior specification is as follows: $\rho(A) = \min(1, 2^{-\beta j} C)$ for $A$ in the $j$th resolution (for $j<J$), $\tau_j = 2^{-\alpha j} \tau_0$, and $\eta(A) = \eta_0$ for all $A$; we set $\sigma^2$ to an estimate based on the finest scale wavelet coefficients [@donoho1995adapting]; all other parameters in ${\bm{\phi}}= (\alpha, \beta, \sigma^2, \tau, C, \eta_0)$ are estimated by maximizing the marginal likelihood (available in a closed form as $\Phi({\Omega})$ from our recursive message passing algorithm) at a set of grid points. Supplementary Materials contain a sensitivity analysis showing that WARP is generally robust to the values of its hyperparameters. Therefore we recommend a grid search on a small set rather than a full optimization as the default method. Gaussian noise with standard deviation $\sigma$ is added to the true images and we apply all methods to the noisy observations for image reconstruction. For WARP, we use the posterior mean as the reconstructed image, which is analytically attainable through Theorem \[thm:recursive.map\].
Enhanced energy concentration\[sec:energy\]
--------------------------------------------
We use 100 test images randomly chosen from the ImageNet dataset [@deng2009imagenet] to evaluate selected methods in sparse coding and reconstructing images of various structures. ImageNet was originally developed for large-scale visual recognition in the computer vision community; here we use its Fall 2011 release (consisting of 14,197,122 URLs).
We use the 100 random ImageNet images to illustrate quantitatively how much improvement in energy concentration WARP achieves through adaptively identifying good permutations. To this end, we define a metric to quantify energy concentration using the number of wavelet coefficients needed to exceed a proportion of the sum of squares of the signal.
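This metric can be computed directly from any coefficient vector: sort the squared coefficients in decreasing order and count how many are needed for their cumulative sum to reach the target proportion of the total. A generic sketch (not our exact code):

```python
import numpy as np

def n_coeffs_for_energy(coeffs, proportion):
    """Smallest number of largest-magnitude coefficients whose squared sum
    reaches `proportion` of the total sum of squares."""
    sq = np.sort(np.asarray(coeffs, dtype=float) ** 2)[::-1]  # largest first
    cum = np.cumsum(sq)
    return int(np.searchsorted(cum, proportion * cum[-1]) + 1)
```

For example, for coefficients $(3, 4, 0, 0)$ a single coefficient already carries $16/25 = 64\%$ of the energy, so one coefficient suffices for a 60% target and two for a 90% target.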
For each ImageNet image, we draw a sample from the posterior distribution of partition trees produced by WARP, and compute the number of coefficients required to attain a proportion of the total sum of squares corresponding to the resulting permutation on a noisy observation at $\sigma = 0.1$. In comparison we compute the numbers of wavelet coefficients required to attain a proportion of the total sum of squares by traditional 1D and 2D Haar DWT without adaptive permutation. \[fig:energy\] presents the numbers of wavelet coefficients required as a function of the proportion of sum of squares for three representative images.
Focusing on the range of proportion of sum of squares from 0.85 to 0.95, we can see that compared to traditional 1D and 2D Haar DWT, the adaptive partition in WARP requires substantially fewer wavelet coefficients to attain the same proportion of total sum of squares (see the red and blue lines in each plot of \[fig:energy\], corresponding to the right $y$ axis). For better visualization, we calculate the percentage of coefficient savings (black line in \[fig:energy\]), defined as 100% minus the ratio of the blue and red curves. We can see that WARP dramatically improves the energy concentration for all three test images and energy levels. In \[fig:energy\], the largest coefficient saving of WARP is (80%, 70%, 70%) compared to 2D DWT, and (97%, 99%, 90%) compared to 1D DWT. In fact, we observe such enhanced energy concentration of WARP in a wide range of test images in the database, and how much WARP improves the energy concentration depends on the image structure. The improved sparse coding of WARP is expected to benefit statistical analysis in various tasks such as compression, reconstruction, and detection, as demonstrated in Section \[sec:2D\] (2D images) and the Supplementary Materials (3D images) in terms of reconstruction.
Image reconstruction using ImageNet data\[sec:2D\]
---------------------------------------------------
Using the same ImageNet data as in Section \[sec:energy\], we compare our method with eight existing wavelet and non-wavelet approaches with available software: 1-dimensional Haar denoising operated on the vectorized observation [@Johnstone+Silverman:05] or 1D-Haar, translation-invariant 2D Haar estimation [@Wil+Now:04] or TI-2D-Haar, shape-adaptive Haar wavelets [@fryzlewicz2016shah] or SHAH, adaptive weights smoothing [@polzehl2000adaptive] or AWS, Bayesian smoothing using the Chinese restaurant process [@Li+Ghosal:14] or CRP, coarse-to-fine wedgelet [@Cas+:04] or Wedgelet, nonparametric Bayesian dictionary learning proposed by [@Zhou+:12] or BPFA, and the conventional running median method or RM. We apply the cycle spinning technique to remove visual artifacts in image reconstruction [@Coi+Don:95; @Li+Ghosal:15] for the methods WARP, 1D-Haar, SHAH, AWS, CRP, Wedgelet, and RM, by averaging 121 local shifts (a step size of up to 5 pixels in each direction). TI-2D-Haar is translation invariant and BPFA includes cycle spinning based on patches, and thus no additional cycle spinning is needed for these two methods. For each method, we calculate the mean squared error (MSE) and mean absolute error (MAE) to measure its accuracy, and time each method based on one replication run on a MacBook Pro with a 2.7 GHz Intel Core i7 CPU and 16 GB RAM. We implement the methods using publicly available code, either in R (1D-Haar, SHAH, and AWS) or Matlab (TI-2D-Haar, CRP, Wedgelet, BPFA, and RM). WARP is available in both R and Matlab, and we use the R version to time it.
\[figure:imagenet\] presents the average MSEs and MAEs of all methods as $\sigma$ varies from 0.1 to 0.7. We can first see that the proposed hierarchical adaptive partition improves the basic wavelet regression significantly (compare 1D-Haar vs. WARP) in all scenarios. In fact, WARP is uniformly the best method under both metrics for all scenarios, with the performance lead over other methods widening as the noise level increases. The sensitivity analysis in the Supplementary Materials indicates that WARP is robust to hyperparameters and choices of $\gamma$.
[Figure: (a) MSE ($\times 10^{-3}$); (b) MAE ($\times 10^{-2}$)]
WARP is computationally efficient, benefiting from the conjugacy of the random recursive partition and the closed-form expression in Theorem \[thm:recursive.map\]. WARP is the fastest adaptive approach among SHAH, AWS, CRP, Wedgelet, and BPFA. (The computing times are given in the caption of \[figure:imagenet\].) Section \[sec:scalability\] further compares the scalability of selected methods using images of various sizes.
[Figure: (a) true; (b) WARP vs 1D-DWT; (c) WARP vs 2D-DWT]
Scalability\[sec:scalability\]
------------------------------
Next we verify the linear complexity of the WARP framework using both 2D and 3D images. Usually there are various ways to tune each method, so to make a fair comparison we focus on the estimation step given tuning parameters for all methods. For WARP, one may actually choose the tuning parameters from a smaller image obtained by downsampling without much loss of accuracy, in view of its insensitivity to hyperparameters (Section D in the Supplementary Materials).
\[fig:scalability\] (a) compares the scalability of selected methods in \[figure:imagenet\]; we exclude 1D-Haar and RM as their reconstructions are highly inaccurate, and BPFA as it scales poorly even for $512 \times 512$ images. We can see that the empirical running time approximately follows a linear function of the number of locations. In fact, WARP takes only about 2 minutes for a large image of $4096 \times 4096$ that contains 17 million pixels, and 5.3 seconds for a $1024 \times 1024$ image. \[fig:scalability\] suggests that Wedgelet and SHAH take quadratic time or more, while TI-2D-Haar, AWS, and CRP take linear time, but their performance is substantially inferior to that of WARP as shown in \[figure:imagenet\]. CRP seems to have a smaller slope than WARP, but it requires considerably longer tuning time than WARP according to the total running time with the tuning step included in the caption of \[figure:imagenet\], at least based on its latest implementation to date.
It is worth noting that while many state-of-the-art methods designed for 2D images such as Wedgelet, TI-2D-Haar, and BPFA require substantial modifications for a new dimensional setting, such as 3D images, the proposed WARP framework is directly applicable to $m$-dimensional data without modification, with the same linear scalability as suggested by \[fig:scalability\] (b).
[Figure: (a) 2D images; (b) 3D images]
Application to retinal optical coherence tomography\[sec:application\]
======================================================================
We apply the proposed method to a dataset of optical coherence tomography (OCT) volumes. OCT provides a non-invasive imaging modality to visualize cross-sections of tissue layers at micrometer resolution, and is thus instrumental in various medical applications, especially the diagnosis and monitoring of patients with ocular diseases [@huang1991optical; @grewal2013diagnosis; @virgili2015optical; @cuenca2018cellular]. The accurate interpretation of OCT images may require the involvement of both retina specialists and comprehensive ophthalmologists, and this task is complicated by heavily noisy observations at a low signal-to-noise ratio due to sample-based speckle and detector noise [@keane2012evaluation; @shi2015automated; @Fang2017]. Therefore, reconstruction of OCT images is necessary to improve both manual and automated OCT image analysis, and is increasingly important when OCT images are used to extract objective and quantitative assessments in ophthalmology, which is touted as one advantage of OCT in clinical practice [@virgili2015optical].
We use the OCT data available at <http://people.duke.edu/~sf59/Fang_TMI_2013.htm>, acquired by a Bioptigen SDOCT system (Durham, NC, USA) at an axial resolution of $\sim$ 4.5 $\mu$m. We apply the methods of TI-2D-Haar, SHAH, AWS, CRP, Wedgelet, BPFA, and WARP to two noisy slices (plotted as “Obs." in \[figure:OCT\]). We also have access to a registered and averaged image obtained by averaging 40 repeatedly sampled scans [@Fang2017], which is referred to as the “noiseless” reference image and is used to compare the quality of the reconstructed images. From the results in \[figure:OCT\], we clearly see that WARP gives the best global quantitative performance in terms of MSE and MAE among all methods in comparison.
Visual comparison provides a detailed assessment of reconstructed images on local features that might be clinically relevant. For the first observation in \[figure:OCT\], we can see that WARP distinguishes all layers well (the boxed region in the noiseless image), especially compared to TI-2D-Haar and AWS, whose reconstructions are blurred across layers. For the second observation, we observe a separation of the posterior cortical vitreous from the internal limiting membrane in the noiseless image, which shows a potential to progress to vitreomacular traction (VMT) [@duker2013international]. This separation becomes less clear under TI-2D-Haar (especially in the left portion), although TI-2D-Haar gives MSE and MAE values closer to WARP's than the other methods. For both observations, there is still substantial noise left in the images denoised by SHAH, and AWS gives a reconstruction exhibiting undesirable patches. This study confirms that WARP is capable of denoising images while keeping important features present in the image, owing to its ability to adapt to the geometry of the underlying structures.
------------------------------------------------------------------------- ---------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
{width="0.31\linewidth"} {width="0.31\linewidth"} {width="0.31\linewidth"}
Obs. (152.4, 9.9) TI-2D-Haar (7.3, 2.0) SHAH (15.7, 2.7)
{width="0.31\linewidth"} {width="0.31\linewidth"} {width="0.31\linewidth"}
AWS (9.4, 2.2) CRP (7.4, 2.1) Wedgelet (7.7, 2.1)
{width="0.31\linewidth"} {width="0.31\linewidth"} {width="0.31\linewidth"}
BPFA (7.8, 2.2) WARP (6.5, 1.9) “noiseless"\
{width="0.31\linewidth"} & {width="0.31\linewidth"} & {width="0.31\linewidth"}\
Obs. (159.4, 10.1) & TI-2D-Haar (10.9, 2.4) & SHAH (18.4, 3.0)\
{width="0.31\linewidth"} & {width="0.31\linewidth"} & {width="0.31\linewidth"}\
AWS (12.9, 2.5) & CRP (11.5, 2.5) & Wedgelet (11.1, 2.4)\
{width="0.31\linewidth"} & {width="0.31\linewidth"} & {width="0.31\linewidth"}\
BPFA (11.7, 2.6) & WARP (10.2, 2.3) & “noiseless"
------------------------------------------------------------------------- ---------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
We further compare WARP with a study conducted in [@Fang2017], which considers another six methods: BRFOE [@weiss2007makes], K-SVD [@elad2006image], PGPD [@xu2015patch], BM3D [@dabov2007image], MSBTD [@fang2012sparsity], and SSR [@Fang2017]. These six methods have been applied to 18 foveal images from 18 subjects, using four slices near the original observation at various stages of their implementation. Although WARP does not require nearby information and can even process a 3D volume if such data exist, we apply WARP to the observation that averages the original observation and the four nearby slices, to make a fair comparison. In \[table:PSNR.18\], we adopt the mean peak signal-to-noise ratio (PSNR) for all methods to align with [@Fang2017], calculated as $-10 \log_{10}(\text{MSE})$ (noting that we rescale all observations and noiseless gray-scale images by 255). We can see that WARP gives the largest mean PSNR, thus achieving excellent performance compared to a wide range of existing methods in this application setting. We choose the two subjects considered in \[figure:OCT\], and plot the images reconstructed by WARP utilizing the four nearby slices in \[figure:OCT.nearby\]. It suggests that WARP produces an even cleaner display than the “noiseless" image, especially in the lower half of the image.
------- ------- ------- ------- ------- ------- -------
BRFOE K-SVD PGPD BM3D MSBTD SSR WARP
25.32 27.03 27.01 27.04 27.08 28.10 28.18
------- ------- ------- ------- ------- ------- -------
: Mean PSNR for 18 foveal images reconstructed by BRFOE, K-SVD, PGPD, BM3D, MSBTD, SSR, and WARP. Results for the methods other than WARP are from [@Fang2017].[]{data-label="table:PSNR.18"}
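The PSNR values above follow the stated convention: with intensities rescaled to $[0,1]$ by dividing by 255, ${\rm PSNR} = -10\log_{10}({\rm MSE})$. A minimal sketch:

```python
import numpy as np

def psnr(reconstruction, reference):
    """PSNR in dB for images rescaled to [0, 1]; higher is better."""
    mse = np.mean((np.asarray(reconstruction, dtype=float)
                   - np.asarray(reference, dtype=float)) ** 2)
    return -10.0 * np.log10(mse)
```

For example, an MSE of $10^{-2.8}$ corresponds to 28 dB, the range reported in \[table:PSNR.18\].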
---------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------
{width="0.45\linewidth"} {width="0.45\linewidth"}
WARP (6.6, 2.0) “noiseless"\
{width="0.45\linewidth"} & {width="0.45\linewidth"}\
WARP (10.0, 2.3) & “noiseless"
---------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------
Discussion\[sec:dicussion\]
===========================
We have introduced the WARP framework that uses random recursive partitioning to induce a prior on the permutations of the index space, thereby achieving efficient inference on multi-dimensional functions by converting it into a Bayesian model choice problem over competing one-dimensional generative models. While our approach is Bayesian, one may consider other methods, such as frequentist adaptive partitioning and shrinkage methods, that incorporate the same idea. We find the fully principled probabilistic inferential recipes that arise under our approach particularly satisfying.
The proposed WARP framework is applicable to a wider range of Bayes wavelet regression models, including those that allow heterogeneous noise variances. If the error ${\bm{\epsilon}}$ in the model has a general covariance matrix $\Sigma_{\epsilon}$, it often still makes sense to assume that the covariance of the error ${\mbox{\boldmath $ u$}}$ in the wavelet domain, i.e., $W\Sigma_{\epsilon} W'$, is diagonal, due to the so-called whitening property of wavelet transforms discussed in [@Johnstone1997]. In this case, let $\sigma_j^2 = \text{Var}(u_{j, k})$ for each $j$. Then one may estimate $\sigma_j^2$ using a robust estimator of scale based on $\{w_{j, k}, 0 \leq k \leq 2^j - 1\}$ given a tree, for example, the median absolute deviation of $\{w_{j, k}, 0 \leq k \leq 2^j - 1\}$ rescaled by 0.6745. Alternatively, one can adopt a hyperprior on the scale-specific unknown variances $\sigma_j^2 \sim \text{IG}(\nu + 1, \nu \sigma_0^2)$, an inverse gamma prior with shape $\nu + 1$ and scale $\nu \sigma_0^2$ (so the prior mean is $\sigma_0^2$). The hyperparameters $(\nu, \sigma_0^2)$ are either specified by users or estimated from data; for instance, we may estimate $\sigma_0^2$ by the median-based estimate from the finest scale wavelet coefficients [@donoho1995adapting].
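The robust scale estimate mentioned above is the usual MAD rule: for Gaussian noise, the median absolute deviation divided by 0.6745 consistently estimates the standard deviation, and it is insensitive to the few large coefficients that carry signal. A sketch at a single scale $j$ (function name illustrative):

```python
import numpy as np

def mad_sigma(w):
    """Robust estimate of the noise standard deviation from wavelet
    coefficients w at one scale, assuming most true coefficients are zero."""
    w = np.asarray(w, dtype=float)
    # MAD of a N(0, sigma^2) sample is about 0.6745 * sigma
    return np.median(np.abs(w - np.median(w))) / 0.6745
```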
While we introduce the WARP framework in the context of image denoising, the adaptive wavelet transform is applicable to other domains involving multi-dimensional function processing. In particular, one area of current investigation is data compression—the posterior distribution on the permutations can be utilized to compress multi-dimensional signals to one or more one-dimensional signals, while preserving most of the information in the data. For example, \[fig:energy\] shows that even just using a random sample from the posterior of the partitions can help reduce the number of wavelet coefficients needed to retain information in the data by over 50% in 2D images in comparison to traditional 2D wavelets. We are currently studying strategies to use a representative permutation from the posterior, such as the posterior mode partition to achieve more efficient data compression.
Acknowledgments {#acknowledgments .unnumbered}
===============
Meng Li’s research is partly supported by ORAU Ralph E. Powe Junior Faculty Enhancement Award. Li Ma’s research is partly supported by NSF grants DMS-1612889 and DMS-1749789, and a Google Faculty Research Award. We thank Daniel Bourgeois for his help in porting our C++ code to R.
Supplementary Materials {#supplementary-materials .unnumbered}
=======================
Supplementary materials contain Proposition \[lem:complexity.tree\] and its proof; proofs of Theorems (Theorem \[thm:independent\_shrinkage\], Theorem \[thm:latent\_state\], and Theorem \[thm:recursive.map\]) and Proposition \[prop:complexity\]; a sequential Monte Carlo algorithm for applying WARP to non-Haar wavelets along with a numerical example; a sensitivity analysis for the proposed framework; and comparison of WARP and selected methods using experiments of 3D image reconstruction.
Supplementary Materials to “[WARP: Wavelets with adaptive recursive partitioning for multi-dimensional data]{}"
Supplementary materials contain (A) Proposition \[lem:complexity.tree\] and its proof, (B) proofs of all theorems, (C) a sequential Monte Carlo algorithm for applying WARP to non-Haar wavelets along with its complexity calculation and a numerical example, (D) a sensitivity analysis for the proposed framework, and (E) comparison of WARP and selected methods using experiments of 3D image reconstruction.
Cardinality of the space of RDPs
================================
\[lem:complexity.tree\] The log cardinality of the tree space induced by RDPs is $O(n)$ when $m = 2$.
Let $c(a, b)$ be the cardinality of the tree space induced by RDPs for a $2^a \times 2^b$ image. We can obtain the following recursive formula $$c(a, b) =
\begin{cases}
c^2(a - 1, b) + c^2(a, b - 1), & \text{if } a, b \geq 1 \\
1 & \text{if } a = 0 \text{ or } b = 0.
\end{cases}$$ We assert that there exist two constants $(k_1, k_2)$ such that $k_2 \geq k_1 > 1$ and $$c(a, b) \in \left[\frac{1}{2} k_1^{2^{a + b}}, \frac{1}{2}k_2^{2^{a + b}}\right],$$ for any $a \geq 1$ and $b \geq 1$.
First consider $a = 1$ and $b \geq 1$. We have $
c(1, b) = c^2(1, b - 1) + 1
$ when $b \geq 1$ and $c(1, 0) = 1$ when $b = 0$. The quantity $c(1, b)$ is actually the number of “strongly binary” trees of height $\leq b$, which possesses an analytical form $$c(1, b) = \lfloor k^{2^b} \rfloor,$$ according to [@Aho+Slo:73], where $$k = \exp\left\{\sum_{j = 0}^\infty 2^{-j - 1} \log(1 + c^{-2}(1, j))\right\} \approx 1.503.$$ Letting $k_1 = \sqrt{k}$ and $k_2 = k$ and noting $k^{2^b} \geq 2$ for all $b \geq 1$, we obtain that $$\frac{1}{2} k_1^{2^{1 + b}} = \frac{1}{2} k^{2^{b}} \leq k^{2^{b}} - 1 \leq \lfloor k^{2^b} \rfloor \leq k^{2^{b}} \leq \frac{1}{2}k^{2^{1 + b}},$$ for all $b \geq 1$. Therefore, the assertion holds for all $a = 1$ and $b \geq 1$. Since $c(a, b) = c(b, a)$, the assertion also holds for all $a \geq 1$ and $b = 1$.
For any $a \geq 1$ and $b \geq 1$, it is easy to verify that if the assertion holds for $(a, b - 1)$ and $(a - 1, b)$, then it holds for $(a, b)$. We then complete the proof by induction.
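The recursion for $c(a, b)$ is straightforward to evaluate with memoization, which gives a quick numerical check of the closed form $c(1, b) = \lfloor k^{2^b} \rfloor$. A sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def c(a, b):
    """Number of RDP trees for a 2^a-by-2^b image (the recursion in the proof)."""
    if a == 0 or b == 0:
        return 1
    return c(a - 1, b) ** 2 + c(a, b - 1) ** 2
```

For instance, $c(1,1)=2$, $c(1,3)=26$, and $c(2,2)=c(1,2)^2+c(2,1)^2=50$, in line with the bounds above.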
Proofs of Theorems {#sec:proofs}
==================
Because Theorem \[thm:independent\_shrinkage\] can be considered a special case of Theorem \[thm:latent\_state\] with a single latent state, its proof follows immediately from the latter theorem, which we prove below.
First we verify that the mapping $\Phi_{s}(A)$ is the marginal likelihood contributed from data with locations in $A$, given that $A\in {\mathcal{T}}$ and that the latent state variable associated with parent of $A$ in ${\mathcal{T}}$ is $s$. We show this by induction. First note that if $A$ is atomic, then $$\Phi_s(A) = {\rm P}({\bm{y}}(A) \,|\,A\in{\mathcal{T}},S(A_p)=s)=1$$ by design as there are no wavelet coefficients associated with atomic nodes. Now, suppose we have shown that $\Phi_s(A)= {\rm P}({\bm{y}}(A) \,|\,A\in{\mathcal{T}},S(A_p)=s)$ for all $A$ with level higher than $j$. Then if $A$ is of level $j$, it follows that $$\begin{aligned}
&{\rm P}({\bm{y}}(A) \,|\,A\in{\mathcal{T}},S(A_p)=s)\\
=& \sum_{s'}\sum_{d}{\rm P}({\bm{y}}(A) \,|\,A\in {\mathcal{T}},S(A)=s',S(A_p)=s,D(A)=d) {\rm P}(S(A)=s'\,|\,A\in{\mathcal{T}},S(A_p)=s)\\
&\hspace{5em} \times {\rm P}(D(A)=d\,|\,A\in{\mathcal{T}},S(A_p)=s)\\
=& \sum_{s'}\rho_j(s,s')\sum_{d \in \mathcal{D}(A)} \lambda_{d}(A) M^{(s')}_d(A) \Phi_{s'}(A_l^{(d)}) \Phi_{s'}(A_{r}^{(d)}),\end{aligned}$$ which leads to the definition of $\Phi_s(A)$ in Theorem \[thm:latent\_state\].
Next let us derive the joint marginal posterior of $({\mathcal{T}},{\mathcal{S}})$. Note that $$\begin{aligned}
{\rm P}(S_{j,k}=s'\,|\,S_{j-1,\lfloor k/2 \rfloor} = s,{\mathcal{T}}^{(j)},{\bm{y}}) &= \frac{{\rm P}(S_{j,k}=s',S_{j-1,\lfloor k/2 \rfloor} = s,{\bm{y}}(A)\,|\,{\mathcal{T}}^{(j)})}{{\rm P}(S_{j-1,\lfloor k/2 \rfloor} = s,{\bm{y}}(A)\,|\,{\mathcal{T}}^{(j)})}.\end{aligned}$$ Now we have $$\begin{aligned}
{\rm P}(S_{j,k}=s',D_{j,k}=d,{\bm{y}}(A)\,|\,{\mathcal{T}}^{(j)},S_{j-1,\lfloor k/2 \rfloor} = s) = \rho_{j}(s,s') \lambda_{d}(A) M_{d}^{(s')}(A) \Phi_{s'}(A^{(d)}_{l})\Phi_{s'}(A^{(d)}_{r}), \end{aligned}$$ which leads to $${\rm P}(S_{j,k}=s',{\bm{y}}(A)\,|\,{\mathcal{T}}^{(j)},S_{j-1,\lfloor k/2 \rfloor} = s) = \rho_{j}(s,s') \sum_{d}\lambda_{d}(A) M_{d}^{(s')}(A) \Phi_{s'}(A^{(d)}_{l})\Phi_{s'}(A^{(d)}_{r})$$ and furthermore, $${\rm P}(S_{j,k}=s'\,|\,S_{j-1,\lfloor k/2 \rfloor} = s,{\mathcal{T}}^{(j)},{\bm{y}}) = \frac{\rho_{j}(s,s') \sum_{d}\lambda_{d}(A) M_{d}^{(s')}(A) \Phi_{s'}(A^{(d)}_{l})\Phi_{s'}(A^{(d)}_{r})}{\sum_{s''}\rho_{j}(s,s'') \sum_{d}\lambda_{d}(A) M_{d}^{(s'')}(A) \Phi_{s''}(A^{(d)}_{l})\Phi_{s''}(A^{(d)}_{r})},$$ where the denominator is just $\Phi_s(A)$.
Finally, $$\begin{aligned}
{\rm P}(D_{j,k}=d\,|\,S_{j,k}=s',{\mathcal{T}}^{(j)},{\bm{y}}) &= \frac{{\rm P}(D_{j,k}=d,{\bm{y}}(A)\,|\,S_{j,k}=s',{\mathcal{T}}^{(j)})}{{\rm P}({\bm{y}}(A)\,|\,S_{j,k}=s',{\mathcal{T}}^{(j)})}\\
&=\frac{\lambda_{d}(A) M_{d}^{(s')}(A) \Phi_{s'}(A^{(d)}_{l})\Phi_{s'}(A^{(d)}_{r})}{\sum_{d'} \lambda_{d'}(A) M_{d'}^{(s')}(A) \Phi_{s'}(A^{(d')}_{l})\Phi_{s'}(A^{(d')}_{r})}.\end{aligned}$$ This completes the proof.
We first obtain the recursive recipe for computing the maps $(\psi_0, \varphi_0)$ following Theorem \[thm:independent\_shrinkage\]: $$\begin{aligned}
\psi_0(A) & =\sum_{d\in \mathcal{P}(A)} {\rm P}(\bar{A}^{(d)}\in{\mathcal{T}},R(\bar{A}^{(d)})=0\,|\,{\bm{y}}) \tilde{\lambda}_{d}(\bar{A}^{(d)})(1-\tilde{\eta}(A)) \\
& = \sum_{d\in\mathcal{P}(A)} \psi_0(\bar{A}^{(d)}) \tilde{\lambda}_{d}(\bar{A}^{(d)})(1-\tilde{\eta}(A)),
\end{aligned}$$ and $$\begin{aligned}
& \varphi_0(A) = {\rm E}\left(c(A)\I_{\{A\in{\mathcal{T}},R(A)=0\}}\,|\,{\bm{y}}\right)
= \sum_{ d\in\mathcal{P}(A)} {\rm E}\left(c(A)\I_{\{\bar{A}^{(d)}\in{\mathcal{T}},D(\bar{A}^{(d)})=d, R(A) = 0\}}\,|\,{\bm{y}}\right) \\
=&\sum_{d\in\mathcal{P}(A)}{\rm E}\left(c(A)\,|\,\bar{A}^{(d)}\in{\mathcal{T}},D(\bar{A}^{(d)})=d,R(A)=0,{\bm{y}}\right)\\
& \cdot {\rm P}\left( \bar{A}^{(d)}\in{\mathcal{T}},D(\bar{A}^{(d)})=d,R(A)=0\,|\,{\bm{y}}\right) \\
=& \sum_{d\in\mathcal{P}(A)} \frac{\tilde{\lambda}_{d}(\bar{A}^{(d)}) }{\sqrt{2}}\left[\varphi_0(\bar{A}^{(d)}) - {\tilde{\rho}_{d}(\bar{A}^{(d)}) \mu_1(w_{d}(\bar{A}^{(d)}))\psi_0(\bar{A}^{(d)})}\cdot (-1)^{\I(\text{$A$ is the left child of $\bar{A}^{(d)}$})}\right] \\
& \cdot (1 - \tilde{\eta}(A)). \label{eq:varphi.0}
\end{aligned}$$
We next derive the recursive formula for $\varphi(A)$. Let $\varphi_1(A) = {\rm E}(c(A) \I_{\{A\in{\mathcal{T}}, R(A)=1\}} \,|\, {\bm{y}})$, then we have $\varphi(A) = {\rm E}(c(A) \I_{\{A\in{\mathcal{T}}\}} \,|\, {\bm{y}}) = \varphi_0(A) + \varphi_1(A) $. Note that $$\label{eq:dumm3}
\varphi(A) = \sum_{ d\in\mathcal{P}(A)} {\rm E}\left(c(A)\I_{\{\bar{A}^{(d)}\in{\mathcal{T}},D(\bar{A}^{(d)})=d\}}\,|\,{\bm{y}}\right),$$ and for each $ d\in\mathcal{P}(A)$, we have $$\begin{aligned}
& {\rm E}\left(c(A)\I_{\{\bar{A}^{(d)}\in{\mathcal{T}},D(\bar{A}^{(d)})=d\}}\,|\,{\bm{y}}\right)
= \sum_{r = 0, 1}{\rm E}\left(c(A)\I_{\{\bar{A}^{(d)}\in{\mathcal{T}},D(\bar{A}^{(d)})=d, R(\bar{A}^{(d)}) = r\}}\,|\,{\bm{y}}\right) \label{eq:dummy2}\\
=& \sum_{r = 0, 1}
{\rm E}\left(c(A) \,|\, \bar{A}^{(d)}\in{\mathcal{T}},D(\bar{A}^{(d)})=d, R(\bar{A}^{(d)}) = r, {\bm{y}}\right) \\
& \qquad \qquad \qquad
\times {\rm P}(\bar{A}^{(d)}\in{\mathcal{T}},D(\bar{A}^{(d)})=d, R(\bar{A}^{(d)}) = r \,|\, {\bm{y}}). \label{eq:dummy1}
\end{aligned}$$ For the second term in , we have $$\begin{aligned}
& {\rm P}(\bar{A}^{(d)}\in{\mathcal{T}},D(\bar{A}^{(d)})=d, R(\bar{A}^{(d)}) = r \,|\, {\bm{y}}) \\
=&{\rm P}(D(\bar{A}^{(d)})=d \,|\, \bar{A}^{(d)}\in{\mathcal{T}}, R(\bar{A}^{(d)}) = r, {\bm{y}}) \cdot {\rm P}(\bar{A}^{(d)}\in{\mathcal{T}}, R(\bar{A}^{(d)}) = r \,|\, {\bm{y}}) \\
=& \tilde{\lambda}_{d}(\bar{A}^{(d)})^{1-r} \lambda_{d}(\bar{A}^{(d)})^{r} \psi_{r}(\bar{A}^{(d)})
\end{aligned}$$ For the first term in , it is easy to check that $$\begin{aligned}
&{\rm E}\left(c(A)\,|\,\bar{A}^{(d)}\in{\mathcal{T}},D(\bar{A}^{(d)})=d,R(\bar{A}^{(d)})=r,{\bm{y}}\right)\\
=&\begin{cases}
\frac{1}{\sqrt{2}}\left[\frac{\varphi_0(\bar{A}^{(d)})}{\psi_0(\bar{A}^{(d)})} - \tilde{\rho}_{d}(\bar{A}^{(d)}) \mu_1(w_{d}(\bar{A}^{(d)}))\cdot (-1)^{\I(\text{$A$ is the left child of $\bar{A}^{(d)}$})}\right] & \text{if $r=0$}\\
\frac{1}{\sqrt{2}} \varphi_1(\bar{A}^{(d)})/\psi_1(\bar{A}^{(d)})& \text{if $r=1$},
\end{cases}\end{aligned}$$ where we use the independence between $c(A)$ and $D(A)$ given $A \in {\mathcal{T}}$. Plugging the two terms into , we obtain that $$\begin{aligned}
& {\rm E}\left(c(A)\I_{\{\bar{A}^{(d)}\in{\mathcal{T}},D(\bar{A}^{(d)})=d\}}\,|\,{\bm{y}}\right)\\
=&\frac{1}{\sqrt{2}}[\varphi_0(\bar{A}^{(d)}) - \tilde{\rho}_{d}(\bar{A}^{(d)}) w_{d}(\bar{A}^{(d)})/(1+\tau_{j-1}^{-1})\cdot (-1)^{\I(\text{$A$ is the left child of $\bar{A}^{(d)}$})}\cdot \psi_{0}(\bar{A}^{(d)})] \tilde{\lambda}_{d}(\bar{A}^{(d)}) \\
& \;\;+ \frac{1}{\sqrt{2}} \varphi_1(\bar{A}^{(d)}) \lambda_{d}(\bar{A}^{(d)}). \label{eq:dummy4}
\end{aligned}$$
Combining the result in and , and comparing it with $\varphi_0(A)$ in , we obtain that $$\varphi(A) = \varphi_0(A)/(1 - \tilde{\eta}(A)) + \frac{1}{\sqrt{2}} \sum_{ d\in\mathcal{P}(A)} \varphi_1(\bar{A}^{(d)}) \lambda_{d}(\bar{A}^{(d)}),$$ which concludes the proof by plugging in $\varphi_1(\cdot) = \varphi(\cdot) - \varphi_0(\cdot)$.
Sequential Monte Carlo for other wavelet bases\[sec:SMC\]
==========================================================
We present a sequential Monte Carlo [@Liu2004] algorithm for estimating the posterior mean when applying WARP to Bayesian wavelet regression with bases other than Haar. In particular, we shall use the posterior under the Haar basis to construct the proposal. In what follows, we refer to the Bayes wavelet regression model with the Haar basis as $\mathbb{B}_0$ and one with a user-specified general wavelet basis as $\mathbb{B}$.
Suppose that we have drawn $I$ samples from the marginal posterior on the RDP ${\mathcal{T}}$ under $\mathbb{B}_0$, denoted as ${\mathcal{T}}_i$ for $i = 1, \ldots, I$. Let ${\mbox{\boldmath $ f$}}^{(i)}_{\mathbb{B}} = \pi_{{\mathcal{T}}_i}^{-1} (W_{\mathbb{B}}^{-1} {\rm E}({\bm{z}}({\mathcal{T}}_i)\,|\,{\mathcal{T}}_i,{\bm{y}}, \mathbb{B}))$ be the estimated mean vector under the Bayesian wavelet regression model given the partition ${\mathcal{T}}_i$. Now let $\Psi({\Omega}| {\mathcal{T}}_i, \mathbb{B}_0)$ and $\Psi({\Omega}| {\mathcal{T}}_i, \mathbb{B})$ be the marginal likelihoods given the partition ${\mathcal{T}}_i$ under the operators $\mathbb{B}_0$ and $\mathbb{B}$, respectively. Then an importance sampling [@BDA:3rd Ch.10] estimate of ${\rm E}({\mbox{\boldmath $ f$}}|{\bm{y}}, \mathbb{B})$ is given by $${\rm E}({\mbox{\boldmath $ f$}}|{\bm{y}}, \mathbb{B}) \approx \frac{\sum_{i = 1}^I w_i {\mbox{\boldmath $ f$}}^{(i)}_{\mathbb{B}} } {\sum_{i = 1}^I w_i},$$ where $w_i = p({\mathcal{T}}_i) \Psi({\Omega}| {\mathcal{T}}_i, \mathbb{B}) / p({\mathcal{T}}_i | {\bm{y}}, \mathbb{B}_0)$ is the unnormalized importance weight; here $p({\mathcal{T}}_i)$ is the prior of ${\mathcal{T}}_i$, and the posterior of ${\mathcal{T}}_i$ under the Haar basis, $p({\mathcal{T}}_i | {\bm{y}}, \mathbb{B}_0)$, is used as the proposal.
However, importance sampling is not useful if the importance weights $w_i$ are degenerate, concentrating nearly all of the probability on one or a small number of draws (also referred to as particles). This occurs very commonly here due to the massive size of the space of RDPs, and is indeed what we observed in our experiments on images of typical size, say $128 \times 128$ or $512 \times 512$. Such degeneracy is expected, as the deviation between the target and the proposal distribution accumulates over the entire RDP, whose number of internal nodes is as large as $n - 1$.
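Weight degeneracy is commonly monitored through the effective sample size, ${\rm ESS} = (\sum_i w_i)^2 / \sum_i w_i^2$, which drops from $I$ (equal weights) toward $1$ (a single dominant particle). A minimal sketch of the effect; the two log-weight distributions below are illustrative assumptions, standing in for a well-matched proposal versus one whose mismatch has accumulated over a deep RDP:

```python
import numpy as np

def effective_sample_size(log_w):
    """ESS = (sum w)^2 / sum w^2, computed stably from log-weights."""
    log_w = np.asarray(log_w, dtype=float)
    w = np.exp(log_w - log_w.max())      # rescale to avoid overflow
    return w.sum() ** 2 / (w * w).sum()

rng = np.random.default_rng(0)
# Well-matched proposal: log-weights tightly clustered, ESS stays near I.
ess_good = effective_sample_size(rng.normal(0.0, 0.1, size=1000))
# Accumulated mismatch: log-weights spread widely, a handful of
# particles dominate, and the ESS collapses.
ess_bad = effective_sample_size(rng.normal(0.0, 20.0, size=1000))
```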
Sequential Monte Carlo (SMC), which updates the weights sequentially with a possible resampling step, is a powerful tool for such complex dynamic systems [@Liu+Chen:98; @Lin+:13]. Next we shall adapt the SMC algorithm to the framework of WARP, show that the implementation of SMC has a nearly linear complexity, and demonstrate its performance using numerical experiments.
SMC algorithm {#smc-algorithm .unnumbered}
-------------
Consider a general wavelet basis with support of length $2l$, whose low-pass filter is $(h_0, \ldots, h_{2l - 1})$ and high-pass filter is $(g_0, \ldots, g_{2l - 1})$. For example, for the Daubechies D4 wavelet transform, the low-pass filter is $$h_0 = \frac{1 + \sqrt{3}}{4 \sqrt{2}},
h_1 = \frac{3 + \sqrt{3}}{4 \sqrt{2}},
h_2 = \frac{3 - \sqrt{3}}{4 \sqrt{2}},
h_3 = \frac{1 - \sqrt{3}}{4 \sqrt{2}},$$ and the high-pass filter is $$g_0 = h_3, g_1 = -h_2, g_2 = h_1, g_3 = -h_0.$$ Suppose that at the beginning of the current stage of particle propagation, the $i$th particle is a partially grown RDP tree ${\mathcal{T}}_{i}^{(j, k)}$ with $j<J-1$, in which all nodes $A_{j,k'}$ for $k' \leq k$ have been expanded, i.e., have children, whereas all $A_{j,k'}$ for $k' > k$ are leaves of the RDP. Thus the set of leaves of the RDP, denoted as $\partial {\mathcal{T}}_i^{(j, k)}$, is $$\partial {\mathcal{T}}_i^{(j, k)} = \{A_{j,k'}: k' > k\} \cup \{A_{j + 1, k'}: k' \leq 2k + 1\}.$$ In the current stage of particle propagation, we expand the node $A_{j,k+1}$ if $k < 2^{j}-1$; otherwise we expand $A_{j+1,0}$ if $j < J-1$. The particle propagation terminates when $j=J-1$ and $k=2^{J-1}-1$, in which case ${\mathcal{T}}_i^{(J - 1, 2^{J - 1} - 1)} = {\mathcal{T}}_i $ denotes a complete particle path, as it reaches the very last scale of the tree.
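The D4 filter pair can be verified numerically: $\sum_t h_t^2 = 1$ and $\sum_t h_t g_t = 0$ (orthonormality), while $\sum_t h_t = \sqrt{2}$ and $\sum_t g_t = 0$, confirming that $h$ is the averaging filter entering the father-coefficient recursion below and $g$ the differencing one. A quick check:

```python
import math

s2, s3 = math.sqrt(2), math.sqrt(3)
h = [(1 + s3) / (4 * s2), (3 + s3) / (4 * s2),
     (3 - s3) / (4 * s2), (1 - s3) / (4 * s2)]
# Quadrature-mirror relation from the text: g_t = (-1)^t * h_{3-t}.
g = [h[3], -h[2], h[1], -h[0]]

unit_energy = sum(ht * ht for ht in h)              # 1: unit norm
orthogonal = sum(ht * gt for ht, gt in zip(h, g))   # 0: h and g orthogonal
dc_gain_h = sum(h)                                  # sqrt(2): h averages
dc_gain_g = sum(g)                                  # 0: g annihilates constants
```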
Let $(w_{j', k'}({\mathcal{T}}_i^{(j, k)}), c_{j', k'}({\mathcal{T}}_i^{(j, k)}))$ be the $(j', k')$th pair of mother and father wavelet coefficients, which are calculated iteratively as follows $$c_{j', k'} = \left\{
\begin{array}{ll}
\sum_{{\bm{x}}\in A_{j', k'}} y({\bm{x}}) / \sqrt{|A_{j', k'}|} & \text{if } A_{j', k'} \in \partial{\mathcal{T}}^{j, k} \\
\sum_{t = 0}^{2l - 1} h_t c_{j' + 1, 2k' + t} & \text{if } A_{j', k'} \notin \partial{\mathcal{T}}^{j, k},
\end{array}
\right.$$ and $$\label{eq:wavelet.truncation}
w_{j', k'} = \left\{
\begin{array}{ll}
0 & \text{if } A_{j', k'} \in \partial{\mathcal{T}}^{j, k} \\
\sum_{t = 0}^{2l - 1} g_t c_{j' + 1, 2k' + t} & \text{if } A_{j', k'} \notin \partial{\mathcal{T}}^{j, k}.
\end{array}
\right.$$ We here use periodic padding for the boundary condition, namely, $$c_{j', k'} = \left\{
\begin{array}{ll}
c_{j', k' - 2^{j'}} & \text{if } j' \leq j \text{ and } k' \geq 2^{j'} \\
c_{j', k' - 2k} & \text{if } j' = j + 1 \text{ and } k' \geq 2k.
\end{array}
\right.$$ Now let $\Psi({\Omega}| {\mathcal{T}}_i^{(j,k)},\mathbb{B})$ denote the overall marginal likelihood contributed from the wavelet coefficients in the partial tree ${\mathcal{T}}_i^{(j,k)}$ under $\mathbb{B}$, and $w_i^{(j, k)} = p({\mathcal{T}}_i^{(j, k)}) \Psi({\Omega}| {\mathcal{T}}_i^{(j, k)}, \mathbb{B}) / p({\mathcal{T}}_i^{(j, k)} | {\bm{y}}, \mathbb{B}_0)$ be the current weight for the $i$th particle. We have $$\begin{aligned}
w_i^{(j, k + 1)} = &w_i^{(j, k)}\cdot \frac{p({\mathcal{T}}_i^{(j, k + 1)}) \Psi({\Omega}| {\mathcal{T}}_i^{(j, k + 1)}, \mathbb{B}) / p({\mathcal{T}}_i^{(j, k + 1)} | {\bm{y}}, \mathbb{B}_0)}{p({\mathcal{T}}_i^{(j, k)}) \Psi({\Omega}| {\mathcal{T}}_i^{(j, k)}, \mathbb{B}) / p({\mathcal{T}}_i^{(j, k)} | {\bm{y}}, \mathbb{B}_0)} \\
= &w_i^{(j, k)}\cdot \frac{p({\mathcal{T}}_i^{(j , k + 1)} | {\mathcal{T}}_i^{(j, k)})}{p({\mathcal{T}}_i^{(j, k + 1)} | {\bm{y}}, {\mathcal{T}}_i^{(j, k)}, \mathbb{B}_0)} \cdot \frac{\Psi({\Omega}\mid {\mathcal{T}}_i^{(j, k + 1)}, \mathbb{B})}{\Psi({\Omega}\mid {\mathcal{T}}_i^{(j, k)}, \mathbb{B})} \\
= &w_i^{(j, k)}\cdot \frac{\lambda_d(A_{j, k})}{\tilde{\lambda}_d(A_{j, k})} \cdot \frac{\Psi({\Omega}\mid {\mathcal{T}}_i^{(j, k + 1)}, \mathbb{B})}{\Psi({\Omega}\mid {\mathcal{T}}_i^{(j, k )}, \mathbb{B})},
\label{eq:update.weight}
\end{aligned}$$ where $d$ is the partition dimension of $A_{j, k}$ at the move of the particle. Algorithm \[alg:SMC\] presents a complete implementation of SMC for a general wavelet basis.
For a level $j$ node $A$, the likelihood corresponding to node $A$ is $M_d(A; {\mathcal{T}}, \mathbb{B}) = \rho(A) M_{d}^{(1)}(A) + (1-\rho(A)) M_{d}^{(0)}(A),$ where $M_d^{(0)}(A) = \phi(w_d(A) | 0, \sigma)$, $M_d^{(1)}(A) = g(w_d(A) | \tau_j, \sigma)$, and $w_d(A)$ is obtained by Eq. under tree ${\mathcal{T}}$. Therefore, the likelihood ratio in Algorithm \[alg:SMC\] is $$\label{eq:likelihood.ratio.SMC}
\frac{\Psi({\Omega}| {\mathcal{T}}_i^{(j, k + 1)}, \mathbb{B})}{\Psi({\Omega}| {\mathcal{T}}_i^{(j, k)}, \mathbb{B})} = \prod_{j' = 0}^ j \prod_{A \in {\mathcal{A}}^{j, k + 1}_{j'}}\frac{M_d(A; {\mathcal{T}}_i^{(j, k + 1)}, \mathbb{B})}{M_d(A; {\mathcal{T}}_i^{(j, k)}, \mathbb{B})},$$ where ${\mathcal{A}}^{j, k}_{j'}$ is the collection of nodes at level $j'$ where the wavelet coefficients require updates at the move $({\mathcal{T}}_i^{(j, k - 1)} \rightarrow {\mathcal{T}}_i^{(j, k)})$.
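The propagate/reweight/resample structure around eq. \[eq:update.weight\] follows the generic SMC template. The sketch below shows that template on a toy state-space model, not the WARP model: the Gaussian random-walk data, bootstrap proposal, and incremental weights are illustrative assumptions, with resampling triggered by the effective-sample-size rule:

```python
import numpy as np

rng = np.random.default_rng(1)

def smc_toy(T=50, I=500, ess_frac=0.1):
    """Generic SMC skeleton: propagate particles under a proposal, update
    log-weights by an incremental ratio, and resample (multinomially)
    whenever the effective sample size falls below ess_frac * I."""
    y = np.cumsum(rng.normal(size=T)) + rng.normal(0.0, 0.5, size=T)
    x = np.zeros(I)                                # particles
    logw = np.zeros(I)                             # log-weights
    for t in range(T):
        x = x + rng.normal(size=I)                 # propagate (proposal)
        logw += -0.5 * ((y[t] - x) / 0.5) ** 2     # incremental log-weight
        w = np.exp(logw - logw.max()); w /= w.sum()
        if 1.0 / np.sum(w ** 2) < ess_frac * I:    # ESS below threshold
            idx = rng.choice(I, size=I, p=w)       # multinomial resampling
            x, logw = x[idx], np.zeros(I)
    w = np.exp(logw - logw.max()); w /= w.sum()
    return float(np.sum(w * x)), float(y[-1])

est, y_last = smc_toy()
```

In the WARP setting, the propagation step instead expands the next node of each partial RDP, and the incremental weight is the product of the prior-to-proposal ratio and the likelihood ratio in eq. \[eq:update.weight\].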
Complexity of SMC {#complexity-of-smc .unnumbered}
-----------------
The complexity of SMC is mostly determined by the step of updating wavelet coefficients at each move, which boils down to the cardinality of $\bigcup_{j'=0}^j {\mathcal{A}}^{j, k}_{j'}$. A wavelet basis other than the Haar basis is not node-autonomous, and thus the calculation of the quantities corresponding to a node often depends on other nodes. Hence each move of a particle affects more than just the new node. Specifically, consider a wavelet basis whose support length is $2l$ (such as the Daubechies wavelet [@daubechies1992ten]) with $l \geq 2$. A particle moving from ${\mathcal{T}}^{(j, k - 1)}$ to ${\mathcal{T}}^{(j, k)}$ brings two new nodes to the particle, i.e., $A_{j + 1, 2k}$ and $A_{j + 1, 2k + 1}$. As a consequence, the wavelet coefficients associated with nodes $(A_{j, k - l + 1}, \ldots, A_{j, k})$ at level $j$ all need to be updated, and that in turn affects wavelet coefficients at all previous levels $0,1,\ldots,j-1$. The following proposition shows that the cardinality of ${\mathcal{A}}^{j, k}_{j'}$ is at most $2l - 1$ for each level $j'$, so that $|\bigcup_{j'=0}^{j} {\mathcal{A}}^{j, k}_{j'}| \leq (2l - 1)(j + 1)$. It follows that the total calculation to update the likelihood before a particle terminates is $O((2l - 1) \sum_{j = 1}^{J - 1} j \cdot 2^{j}) = O((2l - 1) J \cdot 2^J) = O((2l - 1) n \log n)$, that is, of nearly linear complexity.
\[prop:complexity\] If D-$2l$ is used in $\mathbb{B}$, then each move of a particle affects at most $2l - 1$ wavelet coefficients at each parent level.
The conclusion obviously holds for the level at which the move occurs, as only $l$ wavelet coefficients are affected there and $l \leq 2l - 1$ for $l \geq 1$. Suppose the conclusion holds at the $j'$th level; then at the $(j' - 1)$th level, the number of affected size-2 blocks is at most $l$, and thus the number of affected wavelet coefficients is at most $l + l - 1 = 2l - 1$. The proof concludes by induction.
[Remark:]{} This proposition confirms that the induced changes in the earlier-level wavelet coefficients are sparse, and thus an efficient update is possible.
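The proposition can also be checked empirically at a single level: perturbing $2l - 1$ consecutive scaling coefficients and re-applying one level of a periodic $2l$-tap filter changes at most $2l - 1$ outputs. A sketch with a random filter of D4-sized support ($2l = 4$); the filter and signal are arbitrary:

```python
import numpy as np

def analysis_level(c, filt):
    """One analysis level with periodic padding: out[k] = sum_t filt[t] * c[2k+t]."""
    n, L = len(c), len(filt)
    return np.array([sum(filt[t] * c[(2 * k + t) % n] for t in range(L))
                     for k in range(n // 2)])

rng = np.random.default_rng(2)
l = 2                          # D4: support length 2l = 4
filt = rng.normal(size=2 * l)  # any 2l-tap filter; only the support matters
c = rng.normal(size=64)

worst = 0
for p in range(len(c)):
    c2 = c.copy()
    c2[p:p + 2 * l - 1] += 1.0   # perturb 2l-1 consecutive coefficients
    diff = analysis_level(c2, filt) - analysis_level(c, filt)
    worst = max(worst, int(np.sum(np.abs(diff) > 1e-12)))
# worst equals 2l - 1 = 3: no perturbation touches more outputs than the bound
```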
Experiment\[sec:experiements.SMC\] {#experimentsecexperiements.smc .unnumbered}
----------------------------------
We next illustrate the proposed SMC algorithm with the D4 wavelet, and compare its performance to the exact WARP using the Haar basis. We use 121 cycle shifts for both methods. We draw 10 particles per cycle shift for SMC, which leads to $I = 10 \times 121 = 1210$ particles per run in total. We trigger resampling when the effective sample size drops below $0.1I$. Combined with the technique of cycle spinning, our proposed SMC algorithm inherently incorporates the structure of [*islanding*]{} [@Lakshminarayanan2013] by averaging the results of multiple independent particle filters (or islands) rather than drawing a single but larger filter.
\[fig:phantom.SMC\] plots the ratio of the MSEs obtained by WARPed D4 via SMC and WARPed Haar on the phantom test image [@Jain:89]. It shows that the two wavelets lead to comparable performance, with the D4 wavelet tending to outperform the Haar basis as the noise level increases. Even when the noise level is light, the performance of the WARPed D4 wavelet may be viewed as satisfactory, since WARPed Haar has been shown to be consistently among the best approaches compared to a number of state-of-the-art methods on a variety of images.
(Figure \[fig:phantom.SMC\]: the phantom test image (true) and the ratio of MSEs of WARPed D4 via SMC to WARPed Haar.)
Sensitivity analysis\[sec:sensitivity.gamma\]
==============================================
In this section, we conduct a sensitivity analysis for the proposed WARP framework at various choices of hyperparameters.
We first implement the method of “WARP-full”, which chooses ${\bm{\phi}}$ by a full optimization of the marginal likelihood, using the two simulated images $(f_1, f_2)$ explicitly given in \[sec:experiment.3D\]. Recall that the WARP row selects ${\bm{\phi}}$ over a limited set of grid points. \[table:sensitivity.3D\] shows that the MSEs of WARP-full are almost identical to those of the WARP row. This observation is consistent across the many scenarios we have tested. Therefore, the method of WARP seems robust with respect to its hyperparameters, and we recommend a maximization over a small set of grid points as the default. In addition, we investigate the performance of WARP for various choices of $\gamma$ in $\mathbb{B}_0$, including Laplace and quasi-Cauchy priors. We find that these choices of $\mathbb{B}_0$ lead to almost exactly the same MSEs as normal priors (results not shown here).
We further investigate the sensitivity of WARP by considering the following ways to select hyperparameters $\tau$ and $\eta$:
- $\tau$: “function” (we use $\tau_j = 2^{-\alpha j} \tau_0$ as in \[sec:experiments\]); “mix” (we use separate $\tau_j$’s only for the last two levels and a constant for the other levels, so that we have three free parameters for $\tau$); “full” (we use separate $\tau_j$’s for all levels $j$)
- $\eta$: “constant” (we use $\eta(A) = \eta_0$ for all $A$ as in \[sec:experiments\]); “mix” (we use separate $\eta_j$’s for the last two levels and a constant for the other levels, so that we have three free parameters for $\eta$); “full” (we use separate $\eta_j$’s for all levels $j$).
\[table:more.sensitivity\] shows that the MSEs only exhibit minimal differences across various combinations of tuning approaches. This confirms the previous findings that the proposed framework is not sensitive to hyperparameters.
  -----------   ---------------------   --------   ----------------   --------   ---------------------   --------   ----------------   --------
  Method        $f_1$: $\sigma = 0.1$   $0.2$      $\sigma = 0.1$     $0.2$      $f_2$: $\sigma = 0.1$   $0.2$      $\sigma = 0.1$     $0.2$
  [WARP-full]{} [0.02]{}                [0.04]{}   [0.04]{}           0.12       [0.01]{}                [0.02]{}   [0.02]{}           [0.05]{}
  [WARP]{}      [0.02]{}                [0.04]{}   [0.04]{}           [0.11]{}   [0.01]{}                [0.02]{}   [0.02]{}           [0.05]{}
  -----------   ---------------------   --------   ----------------   --------   ---------------------   --------   ----------------   --------

  : \[table:sensitivity.3D\] MSEs of WARP-full and WARP for the two test images $f_1$ and $f_2$.
  ----------   ----------   ------   ------   ------   ------   ------   ------   ------   ------   ------   ------   ------   ------   ------   ------
  $\tau$       $\eta$       0.1      0.2      0.3      0.4      0.5      0.6      0.7      0.1      0.2      0.3      0.4      0.5      0.6      0.7
  function     constant     0.07     0.13     0.18     0.24     0.28     0.34     0.37     0.03     0.13     0.27     0.42     0.57     0.72     0.89
  function     mix          0.07     0.13     0.18     0.24     0.28     0.33     0.37     0.03     0.13     0.27     0.42     0.58     0.73     0.88
  function     full         0.07     0.13     0.19     0.24     0.28     0.33     0.37     0.03     0.13     0.27     0.43     0.57     0.72     0.87
  mix          constant     0.07     0.14     0.20     0.25     0.30     0.35     0.40     0.03     0.13     0.26     0.41     0.57     0.72     0.94
  mix          mix          0.07     0.14     0.20     0.25     0.30     0.35     0.40     0.03     0.13     0.26     0.43     0.57     0.72     0.91
  mix          full         0.07     0.14     0.20     0.25     0.30     0.34     0.40     0.03     0.12     0.27     0.42     0.57     0.75     0.91
  full         constant     0.07     0.13     0.18     0.24     0.28     0.33     0.38     0.03     0.13     0.27     0.42     0.58     0.72     0.86
  full         mix          0.07     0.13     0.18     0.23     0.29     0.34     0.38     0.03     0.12     0.27     0.43     0.56     0.72     0.87
  full         full         0.07     0.13     0.18     0.23     0.28     0.33     0.37     0.03     0.13     0.27     0.42     0.57     0.72     0.88
  ----------   ----------   ------   ------   ------   ------   ------   ------   ------   ------   ------   ------   ------   ------   ------   ------

  : \[table:more.sensitivity\] Sensitivity of WARP to the tuning approaches for $\tau$ and $\eta$: MSEs at noise levels $\sigma = 0.1, \ldots, 0.7$ (the two blocks of seven columns correspond to the two test images).
3D images\[sec:experiment.3D\]
===============================
Unlike WARP, which is directly applicable to $m$-dimensional data for $m > 2$, the other methods compared in \[sec:2D\], such as Wedgelet, TI-2D-Haar, and BPFA, may require substantial modifications for a new dimensional setting. SHAH is conceptually applicable to 3D data, but the existing software takes hours to days in the tuning step for 3D images of intermediate size, while its performance in 2D settings is not among the top two. Therefore, we compare WARP with AWS, CRP, and RM, along with a collection of other approaches, including a 3D image denoising method via local smoothing and nonparametric regression (LSNR) proposed by [@Muk+Qiu:11], the anisotropic diffusion (AD) method [@Per+Mal:90], the total variation minimization (TV) method [@Rud+:92], and the optimized non-local means (ONLM) method [@Cou+:08]. The TV method was modified by [@Muk+Qiu:11] to minimize a 3D version of the TV criterion. We adopt the simulation settings of [@Muk+Qiu:11], which use two artificial 3D images with the following true intensity functions: $$\begin{aligned}
& f_1(x,y,z) = -(x - \frac{1}{2})^2 -(y - \frac{1}{2})^2 -(z - \frac{1}{2})^2 + \I_{\{(x,y,z) \in R_1\cup R_2\}},
\end{aligned}$$ where $R_1 = \{(x,y,z): |x - \frac{1}{2}| \leq \frac{1}{4}, \; |y - \frac{1}{2}| \leq \frac{1}{4}, \; |z - \frac{1}{2}| \leq \frac{1}{4}\}$ and $R_2 = \{(x,y,z): (x - \frac{1}{2})^2 + (y - \frac{1}{2})^2 \leq 0.15^2, \;|z - \frac{1}{2}| \leq 0.35 \}$; $$\begin{aligned}
& f_2(x,y,z) = \frac{1}{4}\sin(2\pi(x+y+z)+1) + \frac{1}{4} + \I_{ \{(x,y,z) \in S_1\cup S_2\}},
\end{aligned}$$ where $S_1 = \{(x,y,z): (x - \frac{1}{2})^2 + (y - \frac{1}{2})^2 \leq \frac{1}{4}(z - \frac{1}{2})^2 ,\;
0.2 \leq z \leq 0.5 \}$ and $S_2 = \{(x,y,z): 0.2^2 \leq (x - \frac{1}{2})^2 + (y - \frac{1}{2})^2 + (z - \frac{1}{2})^2 \leq 0.4^2 ,\; z < 0.45 \}$.
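The two intensity functions translate directly into code; the sketch below evaluates them on a mid-point grid and adds Gaussian noise. The grid size $n = 32$ per axis and the mid-point sampling are illustrative assumptions, not settings taken from [@Muk+Qiu:11]:

```python
import numpy as np

def f1(x, y, z):
    r1 = ((np.abs(x - 0.5) <= 0.25) & (np.abs(y - 0.5) <= 0.25)
          & (np.abs(z - 0.5) <= 0.25))                      # cube R_1
    r2 = (((x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.15 ** 2)
          & (np.abs(z - 0.5) <= 0.35))                      # cylinder R_2
    return -(x - 0.5) ** 2 - (y - 0.5) ** 2 - (z - 0.5) ** 2 + (r1 | r2)

def f2(x, y, z):
    rho2 = (x - 0.5) ** 2 + (y - 0.5) ** 2                  # squared cylinder radius
    rad2 = rho2 + (z - 0.5) ** 2                            # squared sphere radius
    s1 = (rho2 <= 0.25 * (z - 0.5) ** 2) & (0.2 <= z) & (z <= 0.5)   # cone S_1
    s2 = (0.2 ** 2 <= rad2) & (rad2 <= 0.4 ** 2) & (z < 0.45)        # shell S_2
    return 0.25 * np.sin(2 * np.pi * (x + y + z) + 1) + 0.25 + (s1 | s2)

n = 32                                    # grid points per axis (assumption)
grid = (np.arange(n) + 0.5) / n           # mid-point sampling of [0, 1]
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
noisy = f1(X, Y, Z) + np.random.default_rng(3).normal(0.0, 0.1, size=(n,) * 3)
```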
\[table:3D\] shows the comparison of the various methods in terms of MSE. It is worth mentioning that the numerical records of the other five methods for estimating $f_1$ and $f_2$ are taken from [@Muk+Qiu:11], as their code is not immediately available and the running time of some methods, such as LSNR, can be hours to days (including the tuning step). WARP is uniformly the best among all the selected methods, at least under this simulation setting.
  ----------   ---------------------   ----------   ----------------   ----------   ---------------------   ----------   ----------------   ----------
  Method       $f_1$: $\sigma = 0.1$   $0.2$        $\sigma = 0.1$     $0.2$        $f_2$: $\sigma = 0.1$   $0.2$        $\sigma = 0.1$     $0.2$
  [WARP]{}     **0.02**                **0.04**     **0.04**           **0.11**     **0.01**                **0.02**     **0.02**           **0.05**
  LSNR         0.03                    0.08         0.06               0.13         **0.01**                **0.03**     **0.02**           0.06
  TV           0.03                    0.09         0.06               0.15         **0.01**                0.04         0.03               0.06
  AD           0.06                    0.35         0.07               0.38         0.03                    0.20         0.04               0.22
  ONLM         0.03                    0.12         0.06               0.14         **0.01**                0.06         0.03               0.06
  RM           0.22                    0.33         0.11               0.26         0.08                    0.19         0.06               0.14
  ----------   ---------------------   ----------   ----------------   ----------   ---------------------   ----------   ----------------   ----------
  : \[table:3D\]3D denoising for the two images $f_1$, $f_2$ in terms of MSE $(\times10^{-2})$. WARP uses $5\times5\times5$ local shifts, and the results are based on 100 replications. The mean of the 100 MSEs is reported; the maximum standard error is 0.00.
---
abstract: 'In this paper we provide rigorous proof for the convergence of an iterative voting-based image segmentation algorithm called Active Masks. Active Masks (AM) was proposed to solve the challenging task of delineating punctate patterns of cells from fluorescence microscope images. Each iteration of AM consists of a linear convolution composed with a nonlinear thresholding; what makes this process special in our case is the presence of additive terms whose role is to “skew” the voting when prior information is available. In real-world implementation, the AM algorithm always converges to a fixed point. We study the behavior of AM rigorously and present a proof of this convergence. The key idea is to formulate AM as a generalized (parallel) majority cellular automaton, adapting proof techniques from discrete dynamical systems.'
address:
- 'School of Interactive Computing, Georgia Institute of Technology, Atlanta, USA'
- 'Dept. of Information Science and Engineering, and Center for Pattern Recognition, PES School of Engineering, Bangalore, India'
- 'Dept. of Mathematics and Statistics, Air Force Institute of Technology, Wright-Patterson AFB, USA'
- 'Dept. of Biomedical Eng., Electrical and Computer Eng. and Center for Bioimage Informatics, Carnegie Mellon University, Pittsburgh, USA'
author:
- 'Doru C. Balcan'
- Gowri Srinivasa
- Matthew Fickus
- Jelena Kovačević
bibliography:
- 'am.bib'
- 'bibl\_jelena.bib'
title: |
Guaranteeing Convergence of Iterative Skewed Voting Algorithms\
for Image Segmentation
---
active masks, cellular automata, convergence, segmentation.
Introduction {#sec:intro}
============
Recently, a new algorithm called *Active Masks* (AM) was proposed for the segmentation of biological images [@Srinivasa:09]. Let the “image” $f$ be any real-valued function over the domain $\Omega:=\prod_{d=1}^D{\mathbb{Z}}_{N_d}$ and refer to the $N:=N_1N_2{\ldots}N_D$ elements of $\Omega$ as *pixels*; here, ${\mathbb{Z}}_{N_d}$ denotes the finite group of integers modulo $N_d$. A *segmentation* of $f$ assigns one of $M$ possible *labels* to each of the $N$ pixels in $\Omega$. For the fluorescence microscope image depicted in Figure \[fig:AM\](a), one example of a successful segmentation is to label all of the background pixels as “$1$,” assign label “$2$” to every pixel in the largest cell, “$3$” to every pixel in the second largest cell, and so on. Formally, a segmentation is a *label function* $\psi:\Omega\rightarrow\{1,2,\dots,M\}$, or, equivalently, a collection of $M$ binary *masks* $\mu_m:\Omega\rightarrow\{0,1\}$ where, at any given $n\in\Omega$, we have $\mu_{m}(n)=1$ if and only if $\psi(n)=m$. That is, $\mu_m$ at any iteration $i$ can be defined as $$\label{eq:mu}
\mu_m^{(i)}(n):=\left\{\begin{array}{ll}1,&\psi_{i}(n)=m,\\0,&\psi_{i}(n)\neq m.\end{array}\right.$$ In AM, these masks actively evolve according to a given rule. To understand this evolution, it helps to first discuss *iterative voting*: in each iteration, at any given pixel, one counts how often a given label appears in the neighborhood of that pixel—weighting nearby neighbors more than distant ones—and assigns the most frequent label to that pixel in the next iteration. For example, if a pixel labeled “$1$” in the current iteration is completely surrounded by pixels labeled “$2$”, its label will likely change to “$2$” in the next iteration. Formally speaking, iterative voting is the repeated application of the rule: $$\label{eq:ct}
\text{Iterative Voting:}
\qquad\psi_{i}(n)=\operatorname*{argmax}\limits_{1\leq m\leq M}~{\bigl[{(\mu_{m}^{(i-1)}*g)(n)}\bigr]},$$ where $i$ is the index of the iteration, $g:\Omega\rightarrow\mathbb{R}$ is some arbitrarily chosen fixed weighting function and “$*$” denotes circular convolution over $\Omega$. Iterative voting is referred to as a *convolution-threshold* scheme since it simplifies to rounding the filtered version of $\mu_1^{(i)}$ in the special case $M=2$. Experimentation reveals that for typical low-pass filters $g$, repeatedly applying to a given initial $\psi_{0}$ results in a progressive smoothing of the contours between distinctly labeled regions of $\Omega$. Despite this nice property, note that taken by itself, iterative voting is useless as a segmentation scheme, as it evolves masks in a manner that is independent of any image under consideration.
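As a concrete illustration of the convolution-threshold update (this sketch is ours, not code from [@Srinivasa:09]), iterative voting takes only a few lines with FFT-based circular convolution; the $16\times 16$ torus, $3\times 3$ box window $g$, and single-dissenter initialization are illustrative assumptions:

```python
import numpy as np

def iterative_voting(psi, g, M, max_iters=50):
    """Each pixel adopts the label with the largest g-weighted count in its
    neighborhood; circular convolution is computed via the 2-D FFT.  Ties
    break toward the smaller label (argmax returns the first maximizer)."""
    G = np.fft.fft2(g)
    for _ in range(max_iters):
        votes = np.stack([np.real(np.fft.ifft2(
            np.fft.fft2((psi == m).astype(float)) * G)) for m in range(M)])
        new = votes.argmax(axis=0)
        if np.array_equal(new, psi):   # fixed point reached
            break
        psi = new
    return psi

# 16x16 torus with a 3x3 box window: a single dissenting pixel inside a
# uniformly labeled region is voted away, smoothing the segmentation.
N, M = 16, 2
g = np.zeros((N, N))
g[[0, 0, 0, 1, 1, 1, -1, -1, -1], [0, 1, -1, 0, 1, -1, 0, 1, -1]] = 1.0
psi0 = np.zeros((N, N), dtype=int)
psi0[8, 8] = 1
out = iterative_voting(psi0, g, M)
```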
The AM algorithm is a generalization of that contains additional image-based terms whose purpose is to drive the iteration towards a meaningful segmentation. To be precise, the AM iteration is: $$\label{eq:am}
\text{Active Masks:}
\qquad\psi_{i}(n)=\operatorname*{argmax}\limits_{1\leq m\leq M}~{\bigl[{(\mu_{m}^{(i-1)}*g)(n)+R_{m}(n)}\bigr]},$$ where the region-based distributing functions $\{R_m\}_{m=1}^M$ can be any image-dependent real-valued functions over $\Omega$. These will be referred to as [*skew functions*]{} in this paper, due to their role to bias the voting. Essentially, at any given pixel $n$, these additional terms skew the voting towards labels $m$ whose $R_m(n)$ values are large. For good segmentation, one should define the $R_m$’s in terms of features in the image that distinguish regions of interest from each other.
For example, for the fluorescence microscope image given in Figure \[fig:AM\](a), the cells appear noticeably brighter than the background. As such, we choose $R_1$ to be a soft-thresholded version of the image’s local average brightness, and choose the remaining $R_m$’s to be identically zero. When is applied, such a choice of $R_m$’s forces pixels that lie outside the cells towards label “$1$,” while pixels that lie inside a cell can assume any other label. Intuitively, repeated applications of will cause the mask $\mu_1^{(i)}$ to converge to an indicator function of the background, while each of the other masks $\{\mu_m^{(i)}\}_{m=2}^{M}$ converges either to a smooth blob contained within the foreground or to the empty set. Experimentation reveals that the AM algorithm indeed often converges to a $\psi$ which assigns a unique label to each cell provided the scale of the window $g$ is chosen appropriately [@Srinivasa:09]; see Figure \[fig:AM\] for examples.
The purpose of this paper is to provide a rigorous investigation of the convergence behavior of the AM algorithm. To be precise, we note that in a real-world implementation the AM algorithm occasionally fails to converge to a $\psi$ which is biologically meaningful. However, even in these cases, the algorithm always seems to converge to something. Indeed, when $\psi_{0}$ and the $R_m$’s are chosen at random, experimentation reveals that the repeated application of always seems to eventually produce $\psi_{i}$ such that $\psi_{i+1}=\psi_{i}$. At the same time, a simple example tempers one’s expectations: taking $\Omega={\mathbb{Z}}_4$, $M=2$, $g=\delta_{-1}+\delta_0+\delta_1$ and $R_1=R_2=0$, we see that AM will not always converge, as repeatedly applying to $\psi_{0}=\delta_0+\delta_2$ produces the endless $2$-cycle $\delta_0+\delta_2\mapsto\delta_1+\delta_3\mapsto\delta_0+\delta_2$. In summary, even though random experimentation indicates that AM will almost certainly converge, there exist trivial examples which show that it will not always do so. The central question that this paper seeks to address is therefore:
> Under what conditions on $g$ and $\{R_m\}_{m=1}^M$ will the AM algorithm always converge to a fixed point of ?
We show that when $g$ is an even function, AM will either converge to a fixed point or will get stuck in a $2$-cycle; no higher-order cycles are possible. We can further rule out $2$-cycles whenever $g$ is taken so that the convolutional operator $f\mapsto f*g$ is positive semidefinite. The following is a compilation of these results:
\[thm:suff\] Given any $\Omega:=\prod_{d=1}^D{\mathbb{Z}}_{N_d}$, initial segmentation $\psi_{0}:\Omega\rightarrow\{1,\dotsc,M\}$ and any real-valued functions $\{R_m\}_{m=1}^M$ over $\Omega$, the Active Mask algorithm, namely the repeated application of , will always converge to a fixed point of provided the discrete Fourier transform of $g$ is nonnegative and even.
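Both the negative example and the theorem can be probed numerically. On ${\mathbb{Z}}_4$, $g=\delta_{-1}+\delta_0+\delta_1$ has DFT $(3,1,-1,1)$, which is negative at frequency $2$, and the alternating configuration indeed oscillates in a $2$-cycle; conversely, taking $g$ as an autocorrelation (an illustrative way to obtain a nonnegative DFT with $g$ even), random trials with arbitrary skew functions always reach a fixed point, as Theorem \[thm:suff\] guarantees. A sketch (labels run over $0,\dotsc,M-1$ here, with ties broken toward the smaller label as in the text):

```python
import numpy as np

def am_step(psi, g, R):
    """One Active-Masks update on Z_N: argmax over labels m of
    (mu_m * g)(n) + R_m(n), with circular convolution via the FFT."""
    M, N = R.shape
    G = np.fft.fft(g)
    scores = np.stack([np.real(np.fft.ifft(
        np.fft.fft((psi == m).astype(float)) * G)) + R[m] for m in range(M)])
    return scores.argmax(axis=0)   # first maximizer: smaller label wins ties

# (i) The Z_4 counterexample: g = delta_{-1} + delta_0 + delta_1.
g_bad = np.array([1.0, 1.0, 0.0, 1.0])        # indices 0, 1, 3 (= -1 mod 4)
psi = np.array([1, 0, 1, 0])                  # label 1 on pixels 0 and 2
R0 = np.zeros((2, 4))
step1 = am_step(psi, g_bad, R0)
step2 = am_step(step1, g_bad, R0)
two_cycle = (not np.array_equal(step1, psi)) and np.array_equal(step2, psi)

# (ii) Even g with nonnegative DFT: an autocorrelation kernel.
rng = np.random.default_rng(5)
N, M = 12, 3
g_good = np.real(np.fft.ifft(np.abs(np.fft.fft(rng.normal(size=N))) ** 2))
all_fixed = True
for _ in range(20):
    R = rng.normal(size=(M, N))               # arbitrary skew functions
    psi = rng.integers(0, M, size=N)
    for _ in range(500):
        nxt = am_step(psi, g_good, R)
        if np.array_equal(nxt, psi):
            break
        psi = nxt
    all_fixed = all_fixed and np.array_equal(am_step(psi, g_good, R), psi)
```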
A preliminary version of the results in this paper appears in the conference proceeding [@BalcanSFK:10]. Though the specific AM algorithm was introduced in [@Srinivasa:09], iterative lowpass filtering has long been a subject of interest in applied harmonic analysis, having deep connections to continuous-domain ideas such as diffusion and the maximum principle [@Nirenbarg:53]. For instance, [@PeronaM:90] gives an edge detection application of a discretized version of these ideas. Meanwhile, [@Koenderink:84] gives diffusion-inspired conditions under which lowpass filtering is guaranteed to produce a coarse version of a given image. One way to prove the convergence of iterative convolution-thresholding schemes is to show that lowpass filtering decreases the number of zero-crossings in a signal; such a condition is equivalent to a version of the maximum principle [@Hummel:87]. More recently, the continuous-domain version of has been used to model the motion of interfaces between media [@Ruuth:98; @RuuthM:01]; in that setting, is known to converge if $M=2$. Since the AM algorithm is iterative, many of the proof techniques we use here were adapted from *majority cellular automata* (MCA), a well-studied class of discrete dynamical systems. Indeed, theoretical guarantees on the convergence of a symmetric class of MCA have been known for several decades; see [@Goles:90; @PS:83], and references therein. Such results were recently generalized to a quasi-symmetric class via the use of Lyapunov functionals [@YN:09]. Whereas much of traditional MCA theory focuses on the convergence of repeated applications of , our work differs due to the presence of the additive $R_m$ terms in .
The paper is organized as follows. In the next section, we use an MCA formulation of AM to prove our main convergence results. In Section 3, we then briefly discuss the generalization of our main results to a less elegant yet more realistic version of involving noncircular convolution. We conclude in Section 4 with some examples illustrating our main results, as well as some experimental results indicating the AM algorithm’s rate of convergence.
Active Masks as a Majority Cellular Automaton {#sec:cellauto}
=============================================
Cellular automata are self-evolving discrete dynamical systems [@Goles:90]. They have been applied in various fields such as statistical physics, computational biology, and the social sciences. A tremendous amount of work in this area has focused on studying the convergence behavior of various types of automata. In this section, we formulate the AM algorithm as an MCA in order to facilitate our understanding of its convergence behavior. To be precise, we consider a generalization of in which the convolutional operator $f\mapsto f*g$ may more broadly be taken to be any linear operator $A$ from $\ell(\Omega):={\{{f|f:\Omega\rightarrow{\mathbb{R}}}\}}$ into itself: $$\label{eq:am generalized}
\psi_{i}(n)
:=\min{\Bigl({\operatorname*{argmax}\limits_{m}{\bigl[{(A\mu_{m}^{(i-1)})(n)+R_{m}(n)}\bigr]}}\Bigr)},
\qquad \mu_m^{(i-1)}:=\left\{\begin{array}{ll}1,&\psi_{i-1}(n)=m,\\0,&\psi_{i-1}(n)\neq m.\end{array}\right.$$ Here, the contribution of mask $m$ in deciding the outcome at location $n$ at iteration $i$ is $(A\mu_{m}^{(i-1)})(n)$, and any ties are broken by choosing the smallest $m$ corresponding to a maximal element. Note that given any initial segmentation $\psi_0$, applying *ad infinitum* produces a sequence $\{\psi_i\}_{i=0}^{\infty}$. However, as there are only $M^N$ distinct possible configurations for $\psi:\Omega\rightarrow\{1,\dotsc,M\}$, this sequence must eventually repeat itself. Indeed, taking minimal indices $i_0$ and $K>0$ such that $\psi_{i_0+K}=\psi_{i_0}$, the deterministic nature of implies that $\psi_{i+K}=\psi_i$ for all $i\geq i_0$. The finite sequence $\{\psi_i\}_{i=i_0}^{i_0+K-1}$ is called a *cycle* of of *length* $K$. Note that $\{\psi_i\}_{i=0}^{\infty}$ converges if and only if $K=1$, which happens precisely when $\psi_{i_0}$ is a fixed point of .
Thus, from this perspective, proving that always converges is equivalent to proving that $K=1$ regardless of one’s choice of $\psi_0$. The following result goes a long way towards this goal, showing that if $A$ is self-adjoint, then for any $\psi_0$ we have that the resulting $K$ is necessarily $1$ or $2$. That is, if $A$ is self-adjoint, then for any $\psi_0$, the sequence ${\{{\psi_i}\}}_{i=0}^{\infty}$ will either converge in a finite number of iterations, or it will eventually come to a point where it forever oscillates between two distinct configurations $\psi_{i_0}$ and $\psi_{i_0+1}$.
\[thm:sym\] If $A$ is self-adjoint, then for any $\psi_0$, the cycle length $K$ of is either 1 or 2.
As we are not presently concerned with the rate of convergence of , but rather the question of whether it does converge, we may assume without loss of generality that $\{\psi_i\}_{i=0}^{\infty}$ has already entered its cycle. That is, we reindex so that $\psi_0$ is the beginning of the $K$-cycle, and henceforth regard all iteration indices as members of the cyclic group ${\mathbb{Z}}_{K}$. We argue by contrapositive, assuming $K>2$ and concluding that $A$ is not self-adjoint. For any $i=1,\dotsc,K$, is equivalent to the system of inequalities:
$$\begin{aligned}
\label{subeq:1}& (A\mu_{\psi_{i}(n)}^{(i-1)})(n)+R_{\psi_{i}(n)}(n)>(A\mu_{m}^{(i-1)})(n)+R_{m}(n) & & \textrm{if}~~1\leq m<\psi_{i}(n),\\
\label{subeq:2}& (A\mu_{\psi_{i}(n)}^{(i-1)})(n)+R_{\psi_{i}(n)}(n)\geq(A\mu_{m}^{(i-1)})(n)+R_{m}(n) & & \textrm{if}~~\psi_{i}(n)\leq m\leq M.\end{aligned}$$
Here, the second inequality follows from the fact that $\psi_i(n)$ is a value of $m$ that maximizes $(A\mu_{m}^{(i-1)})(n)+R_{m}(n)$. Moreover, in the event of a tie, $\psi_i(n)$ is chosen to be the least of all such maximizing $m$, yielding the strict inequality in the first. For any $i$ and $n$, picking $m=\psi_{i-2}(n)$ in these inequalities leads to the subsystem of inequalities:
\[eq:red-syst\] $$\begin{aligned}
& (A\mu_{\psi_{i}(n)}^{(i-1)})(n)-(A\mu_{\psi_{i-2}(n)}^{(i-1)})(n)+R_{\psi_{i}(n)}(n)-R_{\psi_{i-2}(n)}(n)>0 & & \textrm{if}~\psi_{i-2}(n)<\psi_{i}(n),\\
& (A\mu_{\psi_{i}(n)}^{(i-1)})(n)-(A\mu_{\psi_{i-2}(n)}^{(i-1)})(n)+R_{\psi_{i}(n)}(n)-R_{\psi_{i-2}(n)}(n)\geq0 & & \textrm{if}~~\psi_{i-2}(n)\geq\psi_{i}(n).\end{aligned}$$
Now since $K>2$, there exists a pixel $n$ for which $\{\psi_0(n),\psi_1(n),\dotsc,\psi_{K-1}(n)\}$ is neither of the form $\{a,a,\ldots,a\}$ nor of the form $\{a,b,a,b,\ldots,a,b\}$. At such an $n$, there must exist an $i$ such that $\psi_{i-2}(n)<\psi_{i}(n)$. Consequently, at least one inequality in the subsystem is strict. Thus, summing over all pixels $n$ and all cycle indices $i$ yields: $$0<\sum_{i\in{\mathbb{Z}}_{K}}\sum_{n\in\Omega}(A\mu_{\psi_{i}(n)}^{(i-1)})(n)-\sum_{i\in{\mathbb{Z}}_{K}}\sum_{n\in\Omega}(A\mu_{\psi_{i-2}(n)}^{(i-1)})(n)+\sum_{i\in{\mathbb{Z}}_{K}}\sum_{n\in\Omega}R_{\psi_{i}(n)}(n)-\sum_{i\in{\mathbb{Z}}_{K}}\sum_{n\in\Omega}R_{\psi_{i-2}(n)}(n).$$ Since summation over ${\mathbb{Z}}_K$ is shift-invariant, $\displaystyle\sum_{i\in{\mathbb{Z}}_{K}}R_{\psi_{i}(n)}(n)=\sum_{i\in{\mathbb{Z}}_{K}}R_{\psi_{i-2}(n)}(n)$ for any $n\in\Omega$, reducing the previous inequality to: $$\label{eq:sum}
0
<\sum_{i\in{\mathbb{Z}}_{K}}\sum_{n\in\Omega}(A\mu_{\psi_{i}(n)}^{(i-1)})(n)-\sum_{i\in{\mathbb{Z}}_{K}}\sum_{n\in\Omega}(A\mu_{\psi_{i-2}(n)}^{(i-1)})(n)
=\sum_{i\in{\mathbb{Z}}_{K}}\sum_{n\in\Omega}(A\mu_{\psi_{i}(n)}^{(i-1)})(n)-\sum_{i\in{\mathbb{Z}}_{K}}\sum_{n\in\Omega}(A\mu_{\psi_{i-1}(n)}^{(i)})(n),$$ where the final equality also follows from the shift-invariance of summation over ${\mathbb{Z}}_K$. To continue, note that for any $i,j\in{\mathbb{Z}}_K$ we have $\mu_m^{(j)}(n)=1$ if and only if $\psi_j(n)=m$, and so: $$\label{eq:trace}
\sum_{n\in\Omega}(A\mu_{\psi_j(n)}^{(i)})(n)
=\sum_{n\in\Omega}\sum_{m=1}^M(A\mu_m^{(i)})(n)\mu_m^{(j)}(n)
=\sum_{m=1}^M{\langle{A\mu_m^{(i)}},{\mu_m^{(j)}}\rangle},$$ where ${\langle{\cdot},{\cdot}\rangle}$ is the standard real inner product over $\Omega$. Using this identity in the preceding inequality gives: $$0<\sum_{i\in{\mathbb{Z}}_{K}}\sum_{m=1}^M{\langle{A\mu_m^{(i-1)}},{\mu_m^{(i)}}\rangle}-\sum_{i\in{\mathbb{Z}}_{K}}\sum_{m=1}^M{\langle{A\mu_m^{(i)}},{\mu_m^{(i-1)}}\rangle}
=\sum_{i\in{\mathbb{Z}}_{K}}\sum_{m=1}^M{\langle{(A-A^*)\mu_m^{(i-1)}},{\mu_m^{(i)}}\rangle},$$ implying $A-A^*\neq0$, and so $A$ is not self-adjoint.
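Before moving on, note that the self-adjointness hypothesis is easy to test numerically for convolution operators: the adjoint of $Af=f*g$ on ${\mathbb{Z}}_N$ is convolution with the reversal $\tilde{g}(n)=g(-n)$, so an even $g$ gives $A=A^*$. A small sketch of ours, using NumPy's FFT for circular convolution:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16

def conv(f, g):
    """Circular convolution (f * g)(n) = sum_k f(k) g(n - k) over Z_N."""
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

def reversal(g):
    """The reversal g~(n) = g(-n mod N)."""
    return np.roll(g[::-1], 1)

f, h = rng.standard_normal(N), rng.standard_normal(N)
g = rng.standard_normal(N)

# For generic g, the adjoint of Af = f * g is convolution with the reversal:
assert abs(np.dot(conv(f, g), h) - np.dot(f, conv(h, reversal(g)))) < 1e-9

# When g is even (equal to its reversal), A is self-adjoint: <Af, h> = <f, Ah>
g_even = 0.5 * (g + reversal(g))
assert abs(np.dot(conv(f, g_even), h) - np.dot(f, conv(h, g_even))) < 1e-9
```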
Theorem \[thm:sym\] has strong implications for the AM algorithm. Indeed, it is well-known that if $g$ is real-valued, then the adjoint of the convolutional operator $A f=f*g$ is $A^* f=f*\tilde{g}$, where $\tilde{g}(n)=g(-n)$ is the *reversal* of $g$. As such, if $g$ is an even function, Theorem \[thm:sym\] guarantees that AM will always either converge or enter a $2$-cycle. We now build on the techniques of the previous proof to find additional restrictions on $A$ which suffice to guarantee convergence:
\[thm:fp\] If $A$ is self-adjoint and ${\langle{A f},{f}\rangle}\geq0$ for all $f:\Omega\rightarrow\{0,\pm1\}$, then the iteration always converges.
In light of Theorem \[thm:sym\], our goal is to rule out cycles of length $K=2$. We argue by contrapositive. That is, we assume that there exist two distinct configurations $\psi_{0}$ and $\psi_{1}$ which are successors of each other, and will use this fact to produce $f:\Omega\rightarrow\{0,\pm1\}$ such that ${\langle{A f},{f}\rangle}<0$. Substituting $i=0$ and $m=\psi_{1}(n)$ into the system of inequalities yields:
$$\begin{aligned}
\label{eq:new-red-syst0:a}
& (A\mu_{\psi_{0}(n)}^{(1)})(n)-(A\mu_{\psi_{1}(n)}^{(1)})(n)+R_{\psi_{0}(n)}(n)-R_{\psi_{1}(n)}(n)>0 & & \textrm{if}~~\psi_{1}(n)<\psi_{0}(n),\\
\label{eq:new-red-syst0:b}
& (A\mu_{\psi_{0}(n)}^{(1)})(n)-(A\mu_{\psi_{1}(n)}^{(1)})(n)+R_{\psi_{0}(n)}(n)-R_{\psi_{1}(n)}(n)\geq0 & & \textrm{if}~~\psi_{1}(n)\geq\psi_{0}(n).\end{aligned}$$
Similarly, substituting $i=1$ and $m=\psi_{0}(n)$ yields:
$$\begin{aligned}
\label{eq:new-red-syst1:a}
&(A\mu_{\psi_{1}(n)}^{(0)})(n)-(A\mu_{\psi_{0}(n)}^{(0)})(n)+R_{\psi_{1}(n)}(n)-R_{\psi_{0}(n)}(n)>0 & & \textrm{if}~~\psi_{0}(n)<\psi_{1}(n),\\
\label{eq:new-red-syst1:b}
&(A\mu_{\psi_{1}(n)}^{(0)})(n)-(A\mu_{\psi_{0}(n)}^{(0)})(n)+R_{\psi_{1}(n)}(n)-R_{\psi_{0}(n)}(n)\geq0 & & \textrm{if}~~\psi_{0}(n)\geq\psi_{1}(n).\end{aligned}$$
Since $\psi_0$ and $\psi_1$ are distinct, there exists $n_0\in\Omega$ such that $\psi_0(n_0)\neq\psi_1(n_0)$; depending on the sign of $\psi_1(n_0)-\psi_0(n_0)$, a strict inequality therefore holds at $n_0$ in one of the two systems. Summing both systems over all $n\in\Omega$, we obtain: $$0<\sum_{n=1}^N{\bigl[{(A\mu_{\psi_{0}(n)}^{(1)})(n)-(A\mu_{\psi_{1}(n)}^{(1)})(n)+(A\mu_{\psi_{1}(n)}^{(0)})(n)-(A\mu_{\psi_{0}(n)}^{(0)})(n)}\bigr]}.$$ Applying the trace identity four times then gives: $$0
<\sum_{m=1}^M{\bigl[{{\langle{A\mu_m^{(1)}},{\mu_m^{(0)}}\rangle}-{\langle{A\mu_m^{(1)}},{\mu_m^{(1)}}\rangle}+{\langle{A\mu_m^{(0)}},{\mu_m^{(1)}}\rangle}-{\langle{A\mu_m^{(0)}},{\mu_m^{(0)}}\rangle}}\bigr]}
=-\sum_{m=1}^{M}{\bigl\langle{A(\mu_m^{(1)}-\mu_m^{(0)})},{(\mu_m^{(1)}-\mu_m^{(0)})}\bigr\rangle}.$$ As such, there exists at least one index $m_0$ such that $0>{\bigl\langle{A(\mu_{m_0}^{(1)}-\mu_{m_0}^{(0)})},{(\mu_{m_0}^{(1)}-\mu_{m_0}^{(0)})}\bigr\rangle}$; choose $f$ to be $\mu_{m_0}^{(1)}-\mu_{m_0}^{(0)}$.
The most obvious way to ensure that ${\langle{A f},{f}\rangle}\geq0$ for all $f:\Omega\rightarrow\{0,\pm1\}$ is for $A$ to be positive semidefinite, that is, ${\langle{A f},{f}\rangle}\geq0$ for all $f:\Omega\rightarrow{\mathbb{R}}$. This in turn can be guaranteed by taking $A$ to be diagonally dominant with nonnegative diagonal entries, via the Gershgorin circle theorem [@Golub96book]. Note that in fact strict diagonal dominance guarantees that iterative voting always converges in one iteration. More interesting examples can be found in the special case where $A$ is a convolutional operator $Af=f*g$. Indeed, letting ${\mathrm{F}}$ be the standard non-normalized discrete Fourier transform (DFT) over $\Omega$, we have: $$\label{eq:posdef}
{\langle{Af},{f}\rangle}={\langle{f*g},{f}\rangle}=\frac1N{\bigl\langle{{\mathrm{F}}(f*g)},{{\mathrm{F}}f}\bigr\rangle}=\frac1N{\bigl\langle{({\mathrm{F}}f)({\mathrm{F}}g)},{{\mathrm{F}}f}\bigr\rangle}=\frac1N\sum_{n\in\Omega}({\mathrm{F}}g)(n){\bigl|{({\mathrm{F}}f)(n)}\bigr|}^2.$$ As such, if $g:\Omega\rightarrow{\mathbb{R}}$ is even and $({\mathrm{F}}g)(n)\geq0$ for all $n\in\Omega$, then $A$ is self-adjoint positive semidefinite. Moreover, it is well-known that $g$ is real-valued and even if and only if ${\mathrm{F}}g$ is also real-valued and even. Thus, $A$ is self-adjoint positive semidefinite provided ${\mathrm{F}}g$ is nonnegative and even. For such $g$, Theorem \[thm:fp\] guarantees that the AM algorithm will always converge. These facts are summarized in Theorem \[thm:suff\], which is stated in the introduction. Examples of $g$ that satisfy these hypotheses are given in Section \[sec:experiments\].
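The spectral criterion above lends itself to a direct numerical check: build $g$ by pushing a nonnegative even spectrum through the inverse DFT, then verify both ${\langle{f*g},{f}\rangle}\geq0$ and the Parseval identity for $\{0,\pm1\}$-valued $f$. This is a hypothetical check of our own, with arbitrarily chosen sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32

# any nonnegative even spectrum works; symmetrize a random nonnegative one
spectrum = rng.random(N)
spectrum = 0.5 * (spectrum + np.roll(spectrum[::-1], 1))
g = np.real(np.fft.ifft(spectrum))       # real and even by construction

def quad_form(f, g):
    """<f*g, f> computed in the spatial domain (circular convolution)."""
    Af = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
    return np.dot(Af, f)

for _ in range(200):
    f = rng.integers(-1, 2, size=N).astype(float)   # f : Omega -> {0, +-1}
    q = quad_form(f, g)
    assert q >= -1e-9                               # positive semidefiniteness
    # Parseval: <f*g, f> = (1/N) sum_n (Fg)(n) |Ff(n)|^2
    q_freq = np.sum(spectrum * np.abs(np.fft.fft(f)) ** 2) / N
    assert abs(q - q_freq) < 1e-6
```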
We emphasize that Theorem \[thm:fp\] does not require $A$ to be positive semidefinite, but rather only that ${\langle{A f},{f}\rangle}\geq0$ for all $f:\Omega\rightarrow\{0,\pm1\}$. In the case of convolutional operators, this means we only need the nonnegativity of ${\langle{Af},{f}\rangle}$ to hold for such $f$’s. As such, it may be overly harsh to require that $({\mathrm{F}}g)(n)\geq0$ for all $n\in\Omega$. Unfortunately, the problem of characterizing such $g$’s appears difficult, as we could find no useful frequency-domain characterizations of $\{0,\pm1\}$-valued functions. A spatial domain approach is more encouraging: when $\Omega={\mathbb{Z}}_N$, writing $f:\Omega\rightarrow{\{{0,\pm1}\}}$ as the difference of two characteristic functions $\chi_{I_1},\chi_{I_2}:\Omega\rightarrow{\{{0,1}\}}$, we have: $${\langle{Af},{f}\rangle}
={\bigl\langle{A(\chi_{I_1}-\chi_{I_2})},{(\chi_{I_1}-\chi_{I_2})}\bigr\rangle}
=\operatorname*{sum}(A_{1,1})+\operatorname*{sum}(A_{2,2})-\operatorname*{sum}(A_{1,2})-\operatorname*{sum}(A_{2,1}),$$ where $\operatorname*{sum}(A_{i,j})$ denotes the sum of all entries of the submatrix of $A$ consisting of rows from $I_i$ and columns from $I_j$. As such, the condition of Theorem \[thm:fp\] reduces to showing that $0\leq \operatorname*{sum}(A_{1,1})+\operatorname*{sum}(A_{2,2})-\operatorname*{sum}(A_{1,2})-\operatorname*{sum}(A_{2,1})$ for all choices of disjoint subsets $I_1$ and $I_2$ of ${\mathbb{Z}}_N$.
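The reduction to submatrix sums can be confirmed with a quick experiment: for disjoint $I_1,I_2\subseteq{\mathbb{Z}}_N$ and $f=\chi_{I_1}-\chi_{I_2}$, the quadratic form ${\langle{Af},{f}\rangle}$ equals the signed combination of submatrix sums. A sketch of ours, with $A$ an arbitrary matrix since the identity is purely algebraic:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10
A = rng.standard_normal((N, N))          # any linear operator on R^N

for _ in range(100):
    I1 = rng.random(N) < 0.4             # random subset of Z_N
    I2 = (rng.random(N) < 0.4) & ~I1     # disjoint from I1
    f = I1.astype(float) - I2.astype(float)   # f = chi_I1 - chi_I2, {0,+-1}-valued
    quad = f @ A @ f                     # <Af, f> in the standard inner product
    sums = (A[np.ix_(I1, I1)].sum() + A[np.ix_(I2, I2)].sum()
            - A[np.ix_(I1, I2)].sum() - A[np.ix_(I2, I1)].sum())
    assert abs(quad - sums) < 1e-9
```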
We conclude this section by noting that the iteration is similar to *threshold cellular automata* (TCA) [@Goles:90; @Goles85dam]. In fact, it is equivalent to TCA in the special case of $M=2$; in this case, $\mu_{0}^{(i-1)}(n)=1-\mu_{1}^{(i-1)}(n)$ for all $n\in\Omega$, implying: $$\begin{aligned}
(A\mu_1^{(i-1)})(n)+R_{1}(n)>(A\mu_0^{(i-1)})(n)+R_{0}(n)\quad
&\Longleftrightarrow\quad{\bigl[{A(\mu_{1}^{(i-1)}-\mu_{0}^{(i-1)})}\bigr]}(n)+(R_{1}-R_{0})(n)>0\\
&\Longleftrightarrow\quad{\Bigl\{{A{\bigl[{\mu_{1}^{(i-1)}-(1-\mu_{1}^{(i-1)})}\bigr]}}\Bigr\}}(n)+(R_{1}-R_{0})(n)>0\\
&\Longleftrightarrow\quad(A\mu_{1}^{(i-1)})(n)+\tfrac12(R_{1}-R_{0}-A1)(n)>0\\
&\Longleftrightarrow\quad(A\mu_{1}^{(i-1)})(n)+b(n)>0,\end{aligned}$$ where $b(n):=\tfrac12(R_{1}-R_{0}-A1)(n)$. That is, when $M=2$, the AM algorithm is equivalent to a threshold-like decision. But whereas the traditional method for proving the convergence of TCA involves associated quadratic Lyapunov functionals [@Goles85dam], our method for proving the convergence of AM is more direct, being closer in spirit to that of [@PS:83].
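The chain of equivalences above can be sanity-checked numerically: the two-label voting margin and the threshold margin differ by exactly a factor of two, so the two rules make identical decisions. A sketch of ours, with a random filter and skews on ${\mathbb{Z}}_N$:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 16
g = rng.standard_normal(N)               # arbitrary filter on Z_N
R0, R1 = rng.standard_normal(N), rng.standard_normal(N)

def conv(f):
    """Circular convolution of f with g."""
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

psi = rng.integers(0, 2, size=N)         # a two-label configuration
mu1 = (psi == 1).astype(float)
mu0 = 1.0 - mu1                          # mu0 + mu1 = 1 when M = 2

# voting margin: label 1 wins at n iff this is positive
vote_margin = conv(mu1) + R1 - conv(mu0) - R0

# threshold margin: label 1 wins at n iff (A mu1)(n) + b(n) > 0
b = 0.5 * (R1 - R0 - conv(np.ones(N)))
threshold_margin = conv(mu1) + b

# the margins agree up to the factor of 2 introduced by the halving step
assert np.allclose(vote_margin, 2.0 * threshold_margin)
```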
Beyond symmetry {#sec:BeyondSymmetry}
===============
Up to this point, we have focused on the convergence of the iteration in the special case where $A$ is self-adjoint. In this section, we discuss how Theorems \[thm:sym\] and \[thm:fp\] generalize to the case of *quasi-self-adjoint* operators, which arise in real-world implementations of the AM algorithm. To clarify, up to this point, we have let the image $f$ and weights $g$ be functions over the finite abelian group $\Omega=\prod_{d=1}^D{\mathbb{Z}}_{N_d}$ and have taken the convolutions to be circular. In real-world implementations, the use of such circular convolutions can result in poor segmentation, as values at one edge of the image are used to influence the segmentation at the unrelated opposite edge.
One solution to this problem—implemented in [@Srinivasa:09]—is to redefine the set of pixels as a subset $\Omega:=\prod_{d=1}^D[0,N_d)$ of the $D$-dimensional integer lattice ${\mathbb{Z}}^D$, and regard our image $f$ as a member of $\ell(\Omega):={\{{f:{\mathbb{Z}}^D\rightarrow{\mathbb{R}}\ |\ f(n)=0\ \forall n\notin\Omega}\}}$. Here, the label function $\psi$ and masks $\mu_m$ are regarded as $\{1,\dotsc,M\}$- and $\{0,1\}$-valued members of $\ell(\Omega)$, respectively, and the (noncommutative) convolution of any $f\in\ell(\Omega)$ with any $g\in\ell^2({\mathbb{Z}}^D)$ is defined as $f\star g\in\ell(\Omega)$, $$\label{eq:newconv}
(f\star g)(n):= \frac{(f* g)(n)}{(\chi_{\Omega}* g)(n)},\quad\forall{n}\in\Omega,$$ where $\chi_{\Omega}$ is the characteristic function of $\Omega$, and $*$ denotes standard (noncircular) convolution in $\ell^2({\mathbb{Z}}^D)$. For the theory below, we need to place additional restrictions on $g$, namely that it belongs to the class: $$\mathcal{G}(\Omega):={\{{g\in\ell^2({\mathbb{Z}}^D) : (\chi_{\Omega}* g)(n)>0\ \ \forall n\in\Omega}\}}.$$ In this setting, for a given $g\in\mathcal{G}(\Omega)$, the AM algorithm becomes: $$\label{eq:am,noncircular}
\text{Noncircular Active Masks:}
\qquad\psi_{i}(n)=\operatorname*{argmax}\limits_{1\leq m\leq M}~{\bigl[{(\mu_{m}^{(i-1)}\star g)(n)+R_{m}(n)}\bigr]},
\qquad \mu_m^{(i-1)}:=\left\{\begin{array}{ll}1,&\psi_{i-1}(n)=m,\\0,&\psi_{i-1}(n)\neq m.\end{array}\right.$$ Note that the use of the $\star$-convolution in the update ensures that any “missing votes” are not counted in favor of any label $m$. Moreover, the denominator of the $\star$-convolution ensures that when $n$ is close to an edge of $\Omega$, the weights in the $g$-neighborhood of $n$ are rescaled so as to always sum to one. This rescaling ensures that $\sum_{m=1}^{M}(\mu_{m}^{(i)}\star g)(n)=1$ for all $n\in\Omega$, avoiding any need to modify the skew functions $R_m$ near the boundary.
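A minimal one-dimensional sketch of the $\star$-convolution (our illustration; the truncated Gaussian window and the sizes are assumptions, not the paper's choices) confirms the advertised normalization $\sum_{m=1}^M(\mu_m\star g)(n)=1$:

```python
import numpy as np

def star_conv(f, g, r):
    """Normalized noncircular convolution (f ⋆ g)(n) = (f*g)(n) / (chi_Omega*g)(n)
    on Omega = {0, ..., N-1}, with g supported on offsets -r..r (g[r+k] = g(k))."""
    N = len(f)
    full = np.convolve(f, g)[r : r + N]            # (f * g)(n) restricted to Omega
    norm = np.convolve(np.ones(N), g)[r : r + N]   # (chi_Omega * g)(n)
    assert np.all(norm > 0), "g must lie in the class G(Omega)"
    return full / norm

N, M, r = 20, 3, 2
g = np.exp(-0.5 * (np.arange(-r, r + 1) ** 2))     # truncated Gaussian window
rng = np.random.default_rng(4)
psi = rng.integers(0, M, size=N)

# the rescaling makes the masks' votes sum to one everywhere, even at the boundary
total = sum(star_conv((psi == m).astype(float), g, r) for m in range(M))
assert np.allclose(total, 1.0)
```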
We then ask the question: for what $g$ will the iteration always converge? The key to answering this question is to realize that the $\star$-filtering operation $Af=f\star g$ can be factored as $A=DB$, where $B$ is the standard filtering operator $Bf=f*g$ and $(Df)(n)=\lambda_n f(n)$, where $\lambda_n=[(\chi_{\Omega}* g)(n)]^{-1}$. Here, $A$, $B$ and $D$ are all regarded as linear operators from $\ell(\Omega)$ into itself. More generally, we inquire into the convergence of: $$\label{eq:am generalized,noncircular}
\psi_{i}(n)=\operatorname*{argmax}\limits_{1\leq m\leq M}~{\bigl[{(A\mu_{m}^{(i-1)})(n)+R_{m}(n)}\bigr]},
\qquad \mu_m^{(i-1)}:=\left\{\begin{array}{ll}1,&\psi_{i-1}(n)=m,\\0,&\psi_{i-1}(n)\neq m,\end{array}\right.$$ where $A=DB$ and $D$ is *positive-multiplicative*, that is, $(Df)(n)=\lambda_n f(n)$ where $\lambda_n>0$ for all $n\in\Omega$. In particular, we follow [@YN:09] in saying that $A$ is *quasi-self-adjoint* if there exists a positive-multiplicative operator $D$ and a self-adjoint operator $B$ such that $A=DB$. This definition in hand, we have the following generalization of Theorems \[thm:sym\] and \[thm:fp\]:
\[thm:quasi\] Let $A$ be quasi-self-adjoint: $A=DB$ where $D$ is positive-multiplicative and $B$ is self-adjoint. Then for any $\psi_0$, the cycle length $K$ of the iteration is either 1 or 2. Moreover, if $B$ is positive semidefinite, then the iteration always converges.
We only outline the proof, as it closely follows those of Theorems \[thm:sym\] and \[thm:fp\]. Let $(Df)(n)=\lambda_n f(n)$ with $\lambda_n>0$ for all $n\in\Omega$. We prove the first conclusion by contrapositive, assuming $K>2$. Rather than summing over all $n$ and $i$ directly, we instead first divide each instance of the inequalities by the corresponding $\lambda_n$, and then sum. The resulting quantity is analogous to the earlier one: $$\label{eq:sum, quasi}
0
<\sum_{i\in{\mathbb{Z}}_{K}}\sum_{n\in\Omega}\frac1{\lambda_n}(A\mu_{\psi_{i}(n)}^{(i-1)})(n)-\sum_{i\in{\mathbb{Z}}_{K}}\sum_{n\in\Omega}\frac1{\lambda_n}(A\mu_{\psi_{i-1}(n)}^{(i)})(n)
=\sum_{i\in{\mathbb{Z}}_{K}}\sum_{n\in\Omega}(B\mu_{\psi_{i}(n)}^{(i-1)})(n)-\sum_{i\in{\mathbb{Z}}_{K}}\sum_{n\in\Omega}(B\mu_{\psi_{i-1}(n)}^{(i)})(n).$$ Simplifying the right-hand side with the trace identity quickly reveals that $B$ cannot be self-adjoint, completing this part of the proof. For the second conclusion, we again argue by contrapositive, assuming $K=2$. Dividing the corresponding inequalities by $\lambda_n$ and then summing as before gives: $$0<\sum_{n=1}^N\frac1{\lambda_n}{\bigl[{(A\mu_{\psi_{0}(n)}^{(1)})(n)-(A\mu_{\psi_{1}(n)}^{(1)})(n)+(A\mu_{\psi_{1}(n)}^{(0)})(n)-(A\mu_{\psi_{0}(n)}^{(0)})(n)}\bigr]}
=-\sum_{m=1}^{M}{\bigl\langle{B(\mu_m^{(1)}-\mu_m^{(0)})},{(\mu_m^{(1)}-\mu_m^{(0)})}\bigr\rangle},$$ implying $B$ is not positive semidefinite.
For a result about the convergence of the noncircular iteration, we apply Theorem \[thm:quasi\] to $A=DB$ where $\lambda_n=[(\chi_{\Omega}* g)(n)]^{-1}$ and $Bf=f*g$. Note that we must have $g\in\mathcal{G}(\Omega)$ in order to guarantee that $D$ is positive. Moreover, $B$ is self-adjoint if $g\in\ell^2({\mathbb{Z}}^D)$ is even; since $g$ is real-valued, this is equivalent to having its classical Fourier series $\hat{g}\in L^2(\mathbb{T}^D)$ be real-valued and even. Meanwhile, since: $${\langle{Bf},{f}\rangle}
={\langle{f*g},{f}\rangle}
={\langle{\hat{f}\hat{g}},{\hat{f}}\rangle}
=\int_{\mathbb{T}^D}\hat{g}(x){\bigl|{\hat{f}(x)}\bigr|}^2\,\mathrm{d}x,$$ $B$ is positive semidefinite whenever $\hat{g}(x)\geq0$ for almost every $x\in\mathbb{T}^D$. To summarize, we have:
\[cor:noncirculant\] If the Fourier series of $g\in\mathcal{G}(\Omega)$ is nonnegative and even, then the noncircular iteration will always converge.
In the next section, we discuss how to construct such windows $g$, along with other implementation-related issues.
Examples of Active Masks in practice {#sec:experiments}
====================================
In this section we present a few representative and interesting examples of filter-based cellular automata, and discuss their behavior in relation to the results we proved in the previous sections. We also present some preliminary experimental findings on the rate of convergence of AM. For ease of understanding, let us for the moment restrict ourselves to circulant iterative voting, namely the version of AM in which all the skew functions $R_m$ are identically zero. The simplest nonzero filter is $g=\delta_0$. The DFT of $\delta_0$ has constant value $1$, and is therefore nonnegative and even. As such, Theorem \[thm:suff\] guarantees that the iteration will always converge. Of course, we already knew that: since $f*\delta_0=f$ for all $f\in\ell(\Omega)$, the iteration will always converge in one step; as noted above, the same holds true for any $g$ whose convolutional operator is strictly diagonally dominant with a nonnegative diagonal: $g(0)>\sum_{n\neq 0}{|{g(n)}|}$.
More interesting examples arise from *box filters*: symmetric cubes of Dirac $\delta$’s. For instance, fix $N\geq3$ and consider the iteration over $\Omega={\mathbb{Z}}_N$ with $g=\delta_{-1}+\delta_0+\delta_1$. Since $g$ is symmetric, Theorem \[thm:sym\] guarantees that the iteration will either converge or enter a $2$-cycle. However, if $N$ is even, then the iteration will not always converge, since $\psi_0=\delta_0+\delta_2+\dots+\delta_{N-2}$ generates a $2$-cycle. This phenomenon is depicted in Figure \[fig:Oscillating states\](a). This simple example shows that symmetry alone does not suffice to guarantee convergence; one truly needs additional hypotheses on $g$, such as the requirement in Theorem \[thm:suff\] that its DFT is nonnegative. This hypothesis does not hold for $g=\delta_{-1}+\delta_0+\delta_1$, since $({\mathrm{F}}g)(n)=1+2\cos(\frac{2\pi n}N)$. Similar issues arise in the two-dimensional setting $\Omega={\mathbb{Z}}_{N_1}\times{\mathbb{Z}}_{N_2}$: both the $3\times 3$ box filter (Moore’s automaton, see Figure \[fig:Oscillating states\](b)) and the “plus” filter (von Neumann’s automaton, see Figure \[fig:Oscillating states\](c)) are symmetric, meaning their cycle lengths are either $1$ or $2$, but neither is positive semidefinite, having DFTs of $[1+2\cos(\frac{2\pi n_1}{N_1})][1+2\cos(\frac{2\pi n_2}{N_2})]$ and $1+2\cos(\frac{2\pi n_1}{N_1})+2\cos(\frac{2\pi n_2}{N_2})$, respectively. Indeed, when $N_1$ and $N_2$ are even, alternating stripes generate a $2$-cycle for the box filter, while the checkerboard generates a $2$-cycle for the plus filter.
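The one-dimensional $2$-cycle described above is easy to reproduce: with $g=\delta_{-1}+\delta_0+\delta_1$ and an alternating initial configuration on an even-length ${\mathbb{Z}}_N$, each pixel is outvoted two-to-one by its neighbors, so the entire pattern inverts at every step. A toy reproduction of ours (labels $0$ and $1$, no skews):

```python
import numpy as np

def vote_step(psi, offsets):
    """One iterative-voting step on Z_N with g = sum of deltas at `offsets`
    and all skews zero; ties go to the smaller label."""
    N = len(psi)
    M = int(psi.max()) + 1
    scores = np.zeros((M, N), dtype=int)
    for m in range(M):
        mu = (psi == m).astype(int)
        for k in offsets:                      # (mu * g)(n) = sum_k mu(n - k)
            scores[m] += np.roll(mu, k)
    return np.argmax(scores, axis=0)

N = 8                                          # even N is essential here
psi0 = np.arange(N) % 2                        # alternating configuration
psi1 = vote_step(psi0, offsets=[-1, 0, 1])     # g = delta_{-1} + delta_0 + delta_1
psi2 = vote_step(psi1, offsets=[-1, 0, 1])
assert not np.array_equal(psi1, psi0)          # the pattern flips ...
assert np.array_equal(psi2, psi0)              # ... and flips back: a 2-cycle
```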
Of course, it is not difficult to find filters $g$ which do satisfy the hypotheses of Theorem \[thm:suff\]: one may simply let $g$ be the inverse DFT of any nonnegative even function. More concrete examples, such as a discrete Gaussian over ${\mathbb{Z}}_N$, can be found using the following process. Let $h:{\mathbb{R}}\rightarrow{\mathbb{R}}$ be an even Schwartz function whose Fourier transform is nonnegative; an example of such a function is a continuous Gaussian. Let $g$ be the $N$-periodization of the integer samples of $h$, namely $g(n):=\sum_{n'=-\infty}^{\infty}h(n+Nn')$. Then $g$ is even, and moreover, by the Poisson summation formula: $$({\mathrm{F}}g)(n)
=\sum_{n'=0}^{N-1}g(n'){\mathrm{e}}^{-\frac{2\pi{\mathrm{i}}nn'}{N}}
=\sum_{n'=0}^{N-1}\sum_{n''=-\infty}^{\infty}h(n'+Nn''){\mathrm{e}}^{-\frac{2\pi{\mathrm{i}}nn'}{N}}
=\sum_{k=-\infty}^{\infty}h(k){\mathrm{e}}^{-\frac{2\pi{\mathrm{i}}nk}{N}}
=\sum_{k=-\infty}^{\infty}\hat{h}(k+\tfrac nN)
\geq0.$$ In particular, if $g$ is chosen as a periodized version of the integer samples of any zero-mean Gaussian, then Theorem \[thm:suff\] gives that the AM algorithm necessarily converges. This construction method immediately generalizes to higher-dimensional settings where $D>1$. It also generalizes to the noncircular convolution setting considered in Section \[sec:BeyondSymmetry\]. There, we further restrict $h$ to be strictly positive, and let $g$ be the integer samples of $h$. The positivity of $h$ implies $(\chi_{\Omega}* g)(n)>0$ for all $n\in\Omega$, implying $g\in\mathcal{G}(\Omega)$ as needed. Moreover, $g$ is even and the Poisson summation formula gives that its Fourier series is nonnegative: $\hat{g}(x)=\sum_{k=-\infty}^{\infty}\hat{h}(k+x)\geq0$. Any $g$ constructed in this manner satisfies the hypotheses of Corollary \[cor:noncirculant\], implying the corresponding noncirculant AM necessarily converges.
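The periodization recipe is simple to implement and check: sample a zero-mean Gaussian, periodize the samples, and confirm that the resulting DFT is (numerically) real, even, and nonnegative, exactly as the Poisson summation argument predicts. A sketch with assumed parameters of our own choosing:

```python
import numpy as np

def periodized_gaussian(N, sigma, tails=6):
    """g(n) = sum_{n'} h(n + N n'), with h a zero-mean Gaussian of width sigma;
    `tails` copies on each side suffice once N*tails >> sigma."""
    n = np.arange(N)
    g = np.zeros(N)
    for shift in range(-tails, tails + 1):
        g += np.exp(-0.5 * ((n + N * shift) / sigma) ** 2)
    return g

N = 32
g = periodized_gaussian(N, sigma=2.0)
Fg = np.fft.fft(g)

assert np.allclose(g, np.roll(g[::-1], 1))     # g is even on Z_N
assert np.allclose(Fg.imag, 0.0, atol=1e-9)    # hence Fg is real ...
assert np.all(Fg.real >= -1e-9)                # ... and nonnegative (Poisson summation)
```

By Theorem \[thm:suff\], AM with any such $g$ necessarily converges.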
The rate of convergence of the AM algorithm
-------------------------------------------
Up to this point, we have focused on the question of whether or not the AM algorithm converges. Having settled that question to some degree, our focus now turns to another question of primary importance in real-world implementation: at what rate does AM converge? Experimentation reveals that this rate highly depends on the configuration of the boundary between two distinctly labeled regions of $\Omega$. This led us to postulate that the number of *boundary crossings* (see Figure \[fig:Stitches\]) should monotonically decrease with each iteration. Experimentation reveals that this number indeed often decreases extremely rapidly, regardless of the scale of $g$. Figure \[fig:conv-diff-scales\] depicts such an experiment for the fluorescence microscope image shown in Figure \[fig:AM\](a). Starting from a random initial configuration of 64 masks, we used a Gaussian filter under three different scales, with each plot depicting the evolution of 5 independently-initialized runs of the algorithm. We emphasize the algorithm’s fast rate of convergence: the vertical axis represents a nested four-fold application of the natural logarithm to the number of boundary crossings. We leave a more rigorous investigation of the AM algorithm’s rate of convergence for future work.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Prof. Adam D. Linstedt and Dr. Yusong Guo for providing the biological images which were the original inspiration for the AM algorithm and this work. Fickus and Kovačević were jointly supported by NSF CCF 1017278. Fickus received additional support from NSF DMS 1042701 and AFOSR F1ATA00183G003, F1ATA00083G004 and F1ATA0035J001. Kovačević also received support from NIH R03-EB008870. The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S. Government.
---
abstract: 'We fabricate van der Waals heterostructure devices using few unit cell thick Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ for magnetotransport measurements. The superconducting transition temperature and carrier density in atomically thin samples can be maintained to close to that of the bulk samples. As in the bulk sample, the sign of the Hall conductivity is found to be opposite to the normal state near the transition temperature but with a drastic enlargement of the region of Hall sign reversal in the temperature-magnetic field phase diagram as the thickness of samples decreases. Quantitative analysis of the Hall sign reversal based on the excess charge density in the vortex core and superconducting fluctuations suggests a renormalized superconducting gap in atomically thin samples at the 2-dimensional limit.'
author:
- 'S. Y. Frank Zhao'
- Nicola Poccia
- 'Margaret G. Panetta'
- Cyndia Yu
- 'Jedediah W. Johnson'
- Hyobin Yoo
- Ruidan Zhong
- 'G. D. Gu'
- Kenji Watanabe
- Takashi Taniguchi
- 'Svetlana V. Postolova'
- 'Valerii M. Vinokur'
- Philip Kim
title: Sign reversing Hall effect in atomically thin high temperature superconductors
---
\[Introduction\] Tunable van der Waals (vdW) structures enable the study of unconventional electronic properties of low-dimensional superconductivity (SC) [@MagicAngleSuperconductor]. Measurement of the Hall effect, one of the most informative tools for probing electronic properties of low-dimensional systems, has attracted renewed interest in the context of the particle-hole asymmetry and the Bose metal[@HallTaN; @HallNbN; @Breznay:2017; @Phillips2018; @Shahar2018] with the vanishing of Hall resistance $R_{xy}$ [@Breznay:2017]. One of the striking properties of SC is the sign change of the Hall resistance. As temperature $T$ decreases through the fluctuation region approaching the transition temperature $T_c$, the Hall resistivity decreases and changes its sign relative to the normal state. The Hall sign reversal in SC has been attributed to superconducting fluctuations (SF) for $T>T_c$ [@HallFinkelstein; @HallTaN; @HallNbN; @HallMoN] and vortex contributions for $T<T_c$ [@AoThoulDSC; @KhomskiiDSC; @PhysicaC:1994; @JETPL:1995]. The Hall voltage exhibits a negative minimum and eventually reaches zero at low temperatures where vortices are completely immobilized[@FisherDSC; @FisherDorseyDSC].
The non-vanishing vortex contribution to the Hall signal is of special importance in high temperature superconductors (HTS), and is controlled by the magnitude of the superconducting gap $\Delta(T)$ [@KhomskiiDSC; @PhysicaC:1994; @JETPL:1995]. There have been striking observations that the superconducting gap (not pseudogap) $\Delta(T)$ is renormalized on approach to $T_c$ from below, obtained from angle-resolved photoemission spectroscopy (ARPES) of HTS and from tunneling spectroscopy of conventional low-$T_c$ films [@NatComTiN]. Employing atomically thin vdW HTS with high crystallinity, we now can address the Hall sign reversal in the 2-dimensional (2D) limit, where fluctuation effects become significant.
In this letter, we report fabrication of electronic devices based on atomically thin [$\mathrm{Bi_2Sr_2CaCu_2O_{8+\delta}}$]{} (BSCCO) SC samples and their magnetotransport properties in a wide temperature range. We observe enhanced fluctuation effects in these samples, manifesting as a large region of Hall sign reversal in the temperature-magnetic field phase diagram. We present quantitative description of the observed magnetotransport, considering SF above $T_c$ and vortex core contributions below $T_c$. Our analysis suggests that the renormalized superconducting gap remains finite at $T_c$.
![**Van der Waals BSCCO device.** a. Optical image of Hall bar device, showing BSCCO with contacts and hexagonal boron nitride ([*h*-BN]{}) cover, as drawn in the inset below. b. Bright field scanning transmission electron microscopy image of cross-section of device. Columns of atoms are visible as dark spots. The layered structure of BSCCO and h-BN are visible, as is supermodulation of the BSCCO lattice. Black arrows point to location of bismuth oxide layers (darkest spots), while gray arrows show their expected positions. c. Resistivity as a function of temperature for vdW devices of different thicknesses.[]{data-label="Fig1"}](Fig1_v4_5.pdf){width="1\linewidth"}
We prepare our few-unit-cell (UC) thick BSCCO using mechanical exfoliation in an argon-filled glovebox. The BSCCO system is technically challenging to handle under ambient conditions, since BSCCO chemically interacts with water vapor in air [@Vasquez] and contains interstitial oxygen dopants which become mobile above 200 K [@Poccia]. After conventional nano-fabrication steps, BSCCO typically becomes insulating [@Sandilands]. To address these issues we have developed a high-resolution stencil mask technique (see Supplementary Information), allowing us to fabricate samples in an argon environment without exposure to heat or chemicals; the samples are subsequently sealed with a hexagonal boron nitride ($h$-BN) crystal on top. This technique solves the challenging problem of controlling the desired thickness [@Bozovic2011] of BSCCO crystals, achieving a precision of 0.5 UC. Fig. \[Fig1\]b shows a cross-sectional bright field scanning transmission electron microscope (STEM) image of a typical vdW heterostructure, where individual columns of atoms are clearly visible as dark spots. The darkest of these are bismuth atoms (arrows), which scatter the probing electrons most strongly. A supermodulation in the atoms is clearly visible, with a periodicity that agrees with the bulk value. Extrapolating from the position of the BiO layers, we see the outermost 1 UC on both sides become amorphous in this sample. This degradation of the top and bottom layers is likely to be present in all our samples, although its extent is likely sample-dependent. Fig. \[Fig1\]a shows a typical Hall bar.
Fig. \[Fig1\](c) shows the resistivity $\rho$ as a function of temperature $T$ for BSCCO devices between 2 and 10 UC. $\rho (T)$ exhibits a superconducting transition around 85 K, the measured bulk value prior to exfoliation, with a linear $T$-dependence in the normal region consistent with BSCCO near optimal doping [@HTSbook]. At a given temperature $T$, we find that $\rho$ increases as the thickness of the sample $d$ decreases, suggesting that thinner samples become poorer conductors. The surface degradation observed in the TEM image is presumably responsible for increasing $\rho$.
To quantitatively determine the SC transition temperature $T_c$ from $\rho(T)$, we adopt the SF framework for $T>T_c$ [@LarVar_book; @TiN_3D2D; @Bi-2212MT]. Here we take into account all three SF contributions: the Aslamazov-Larkin, the density-of-states (DOS), and the dominant Maki-Thompson terms [@Bi-2212MT; @TiN_JETP], using both $T_c$ and the pair-breaking parameter $\delta=h/(16k_BT\tau_\phi)$ as two fitting parameters. We assume the phase-breaking time to be $\tau_\phi \sim T^{-1}$ [@AA_review]. For all samples, the extracted $T_c$ is very close to the temperature where resistance falls fastest [@TiN_JETP; @BenfattoTc], and is consistent between samples. The values we obtain from this analysis for the samples with different $d$ are provided in the Supplementary Materials.
Our ability to precisely control device thickness allows us to measure the Hall density $n_H$ [@Note]. Fig. \[Fig2\](a) presents Hall data for a 2 UC device, where we took the odd component of $R_{xy}(B)$ to account for device geometric effects. In the normal state far above $T_c$ ($T \geq 100$ K), the Hall resistance $R_{xy}$ is linear in applied magnetic field $B$, allowing us to extract the Hall density $n_H=d/ecR_{xy}$. Fig. \[Fig2\](b) shows $n_H$ measured at 100 K, well above the transition temperature for samples with different $d$ shown in Fig. \[Fig1\](c). The Hall density $n_H$ scales linearly with $d$, demonstrating excellent oxygen dopant retention in each CuO$_2$ plane, even in the degraded surface layers. The 3 UC sample deviates from this trend with more carriers than is expected, which agrees with the slightly increased $T_c$ compared to the others (see Fig. \[Fig1\](c)). We also estimate the carrier Hall mobility $\mu_H=d/n_He\rho$ as shown in Fig. \[Fig2\](c). Below 5 UC, $\mu_H$ decreases with $d$, indicating increasing disorder in thinner samples. We also notice that all samples empirically exhibit the trend $\mu_H \sim T^{-1}$ for $T \gg T_c$, suggesting that the normal carrier momentum relaxation time is $\tau_p \sim T^{-1}$ in our samples regardless of $d$.
![ **Hall effect measurements** a. Hall resistance for a 2 UC sample. The curves are vertically shifted for clarity, with zeros indicated by dashed lines. Below 60K, the Hall effect has the same sign as in the normal state. Above 60K the sign reversal appears at magnetic fields $B<5$ T. b. Carrier density increases linearly with sample thickness in our devices, demonstrating good oxygen dopant retention down to 2 UCs. Data taken at 100K. c. Device mobility increases as samples become thicker, eventually saturating at 5 UC. []{data-label="Fig2"}](Fig2_v5_0.pdf){width="1.0\linewidth"}
As temperature decreases, the linear $R_{xy}(B)$ observed far above $T_c$ develops a strong nonlinearity around $T_c$ (Fig. \[Fig2\](a)). Just above $T_c$, $R_{xy}(B)$ reverses sign at small $B$, reaching a minimum before increasing again with $B$. As $T$ continues to decrease, the dip in $R_{xy}(B)$ broadens, reaching its maximal extent around 75 K, where the sign reversal persists up to $B_0 = 4.7$ T. As $T$ decreases further, however, the Hall sign reversal weakens, as both its magnitude and $B_0$ decrease. The regime of Hall sign reversal vanishes completely around 60 K, below which $R_{xy}(B)$ remains positive at all magnetic fields, even as $R_{xy}$ decreases in magnitude and vanishes around 40 K.
![[**The double sign change**]{}. a. $R_{xy}(T)$ at fixed magnetic fields for the 2 UC device. Fits above (dash-dot) and below (dashed lines) $T_c$ are superimposed on experimental data (symbols). Inset: Superconducting gap extracted from fits below $T_c$ for all samples using Eq. (\[sigmaXY\]). The lower dashed line is the BCS gap $\Delta(T) = 1.76 k_B T_c \sqrt{1-T/T_c}$ with $T_c$ extracted from $R_{xx}(T, B=0)$. The renormalized gap curve is generated using the BCS equation, but with an elevated $T_{c0}$ to approximate the gap $\Delta(T)$ extracted from the fits. b. The Hall sign reversal phase diagram. Shading shows Hall resistance $R_{xy}(B,T)$ for a 2 UC device. The blue region indicates the area of negative Hall resistance. Symbols show the locus $R_{xy}=0$ for different sample thicknesses, with the dashed (dash-dot) lines generated from fits below (above) $T_c$ (see SI). As device thickness decreases, the Hall-sign-reversed region becomes larger. []{data-label="Fig3"}](Fig3_v4_7.pdf){width="1.0\linewidth"}
Fig. \[Fig3\](a) shows the evolution of $R_{xy}(T)$ at constant $B$, highlighting a double sign reversal in $R_{xy}$. For instance, at $B \approx 4$ T, it is clear that $R_{xy}(T)$ changes sign twice as $T$ increases, once around $T \approx 67$ K and again at 80 K. The complete phase diagram for the Hall sign reversal is shown in Fig. \[Fig3\](b), where we have superimposed the Hall reversal boundaries for samples with different $d$ yet very similar $T_c$. The region where we observe the negative Hall effect is a well-defined domain of the $T-B$ phase diagram. The region of $R_{xy}(T, B)<0$ grows noticeably as $d$ decreases, indicating that fluctuations enhance the Hall sign reversal.
The region of the Hall sign reversal for few-unit-cell BSCCO is distinctly different from that of bulk samples. In bulk HTS samples, Hall sign reversal is observed only in the vortex liquid domain, i.e. in the strip between the vortex lattice melting line $B_m(T)$ and the $H_{c2}(T)$ line. Near $B_m(T)$, the Hall resistance is exponentially suppressed, and the Hall sign reversal region often lies entirely within $T<T_c$ [@DoubleSignFit:1998]. In conventional superconductors, on the other hand, $B_m(T)$ and $H_{c2}(T)$ practically coincide; thus the entire Hall sign change is usually assigned to the fluctuation region. In our atomically thin BSCCO, unlike in the bulk samples, the Hall sign reversal region extends across $T_c$. Moreover, we observe no sudden changes in $R_{xy}(T)$ upon crossing $T_c$, and the region of Hall effect sign reversal falls both above and below $T_c$ in all samples (Fig. \[Fig3\](b)). This calls for a universal approach to the description of 2D superconductivity, which can be formulated in the framework of the Keldysh technique [@AK]. Above $T_c$, in the fluctuation regime, this approach reduces to the quantum kinetic equation [@HallFinkelstein], where the quantum corrections to conductivity are provided by the Gaussian approximation [@HallFinkelstein; @HallNbN]. For $T<T_c$, the Keldysh action can be reduced to a phenomenological form explicitly accounting for the vortex excitations and the normal carriers' contributions [@PhysicaC:1994; @JETPL:1995].
Qualitatively, superconducting fluctuations are Cooper pair fluctuations with a finite lifetime, arising above [$T_c$]{}. Under an applied magnetic field, these pairs rotate around their center of mass [@EPL_rotatingFCP] and can be viewed as elemental current loops [@VoticesAboveTcNature; @VoticesAboveTcPRB]. An applied external current exerts a Magnus force that moves these loops along circular paths. This gives rise to a Hall voltage opposite to that from the normal carriers. More quantitatively, the SF contribution to the Hall conductivity manifests as a negative correction $\delta \sigma_{xy}$ to the positive normal component $\sigma_{xy}^n$ [@LarVar_book; @HallFinkelstein]: $\sigma_{xy} = \sigma_{xy}^n + \delta \sigma_{xy}$. Within the Drude framework, $\sigma_{xy}^n$ can be estimated from experimentally accessible quantities as $\sigma_{xy}^n \approx \frac{en_H\mu_H^2}{c}B$. A quantitative expression for $\delta\sigma_{xy}$ is obtained within the Gaussian approximation [@HallFinkelstein]: $$\delta\sigma_{xy}=\frac{2e^2k_BT}{hd}\zeta f(D,B,T) \label{dSigma}$$ where $D$ is the normal carrier diffusion coefficient, $f$ is a dimensionless function whose explicit form is given in the Supplementary Information, and $\zeta$ is a parameter accounting for particle-hole asymmetry in the time-dependent Ginzburg-Landau equation. $\zeta$ is expressed through the change of $T_c$ with respect to the chemical potential $\mu$: $\zeta = -\frac{1}{2} \partial (\ln T_c)/\partial \mu \approx 1/(\gamma E_F)$ [@Varlamov:1999; @LarVar_book; @HallFinkelstein]. Here $\gamma$ is the dimensionless coupling constant parameterizing the attractive electron-electron interaction that induces superconductivity.
As temperature decreases, the SF contribution $\delta\sigma_{xy}$ increases, leading to a deviation from the linear $R_{xy}$ vs. $B$ behavior, and eventually to a sign change of $\sigma_{xy}$ as soon as $\delta\sigma_{xy}$ starts to dominate [@HallTaN; @HallNbN; @HallMoN]. In a diffusive metal, $D \approx \frac{2}{3}\mu_H E_F$. Thus, Eq. (\[dSigma\]) can be fitted to the experimentally measured $\sigma_{xy}$ over a wide range of $B$ and $T>T_c$, using $E_F$ and $\gamma$ as two fitting parameters. For this analysis, we also employ the previously measured $\mu_H$. As shown in Fig. \[Fig2\](a) (dash-dot lines), this model fits our data very well above $T_c$. The numerical values of our fitting parameters $E_F$ and $\gamma$ are summarized in Table I in the Supplementary Information. We obtain $E_F \approx 0.5$ eV. This is in reasonable agreement with the literature, considering that $E_F$ of the cuprates is often an order of magnitude larger than the superconducting gap $\Delta(0)$ [@HTSC_EF] and that $E_F\sim 0.1$ eV for [$\mathrm{La_{2-x}Sr_xCuO_4}$]{} [@HTSC_EF]. The value $\gamma \approx 0.1$ corresponds to the weak coupling limit.
We now turn our attention to the Hall sign reversal in the temperature range $T<T_c$. The challenge of describing the Hall effect below $T_c$ is in producing a thorough account of all the contributions to vortex dynamics. A comprehensive description of the Hall conductivity $\sigma_{xy}$ explicitly including topological aspects of vortex dynamics (Berry phase), normal carrier scattering, and weak pinning effects was developed in [@PhysicaC:1994; @JETPL:1995], where the Hall conductivity acquires the form: $$\sigma_{xy}= \frac{\Delta^2 n_0 ec}{E_F^2 B} \left[(\tau \Delta /\hbar)^2g-\textrm{sign}(\delta n)\right]+\sigma^n_{xy}(1-g), \label{sigmaXY}$$ where $n_0$ and $n_{\infty}$ are the normal carrier densities inside and outside the vortex core, respectively, $\delta n = n_0-n_{\infty}$ is the excess charge inside the vortex, $\tau$ is the relaxation time of the normal carriers in the vortex core, and the parameter $g$ expresses the superconducting fraction of the carriers. In this work, we consider a two-fluid model of a $d$-wave symmetry superconductor [@Tinkham], so that $g(T) = 1-(T/T_c)^2$.
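As a structural illustration, Eq. (\[sigmaXY\]) with the two-fluid $g(T)$ can be transcribed directly. The sketch below works in reduced units ($\hbar=e=c=1$) with purely illustrative parameter values, and is not the calibrated model of the paper; it only exhibits the competition between the $\sim B^{-1}$ vortex-core term and the $\sim B$ normal-carrier term.

```python
import numpy as np

def g_twofluid(T, Tc):
    """Superconducting fraction in the d-wave two-fluid model."""
    return 1.0 - (T / Tc) ** 2

def sigma_xy(B, T, Tc, Delta, tau, n0, EF, sigma_n_xy_per_B,
             hbar=1.0, e=1.0, c=1.0, sign_dn=1.0):
    """Eq. (sigmaXY): vortex-core term (~1/B) plus the normal-carrier
    term (~B), in reduced units hbar = e = c = 1. Illustrative only."""
    g = g_twofluid(T, Tc)
    vortex = (Delta**2 * n0 * e * c) / (EF**2 * B) * (
        (tau * Delta / hbar) ** 2 * g - sign_dn)
    normal = sigma_n_xy_per_B * B * (1.0 - g)
    return vortex + normal
```

For $\tau\Delta/\hbar < 1$ and $\mathrm{sign}(\delta n)=+1$ the vortex term is negative, so $\sigma_{xy}$ changes sign from negative at low $B$ to positive at high $B$, as discussed in the text.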
The physical origin of the Hall effect sign change in this low-temperature regime is the excess charge $\delta n$ of the vortex core [@KhomskiiDSC; @PhysicaC:1994]. The relative difference in carrier density $\delta n/n_0$ is of the order of $(\Delta/E_F)^2$ [@PhysicaC:1994; @JETPL:1995; @DoubleSignFit:1998]. Here, the sign of the vortex contribution to the Hall effect is determined by the relation between $\textrm{sign}(\delta n)$ and $\tau \Delta$. Since the Hall sign is reversed in the regime $T<T_c$, this observation empirically fixes $\textrm{sign} (\delta n)=1$. Then, the first term in Eq. (\[sigmaXY\]), the vortex core contribution $\sigma^{vc}_{xy}$, can be negative when $\Delta(T) < \hbar/\tau$. From this definition, we also note that $\sigma^{vc}_{xy}\sim B^{-1}$ while $\sigma^n_{xy} \sim B$. Therefore, the total Hall sign reversal is expected at low magnetic fields, where the negative vortex contribution $\sigma^{vc}_{xy}$ dominates the positive normal carrier contribution $\sigma_{xy}^n$.
We can compare Eq. (\[sigmaXY\]) with our experimental data for $T<T_c$ quantitatively. In order to fit the experimental curves with Eq. (\[sigmaXY\]), we estimate the normal contribution $\sigma_{xy}^n$ below $T_c$ using our empirical observation that $\mu_H \sim T^{-1}$ in the normal state above $T_c$. Extrapolating this relation to $T<T_c$ in the two-fluid picture, we assume $\sigma_{xy}^n(T) = \sigma^n_{xy}(T_0)(T_0/T)^2$, with $T_0=100$ K for our analysis. Then, Eq. (\[sigmaXY\]) can be used to fit our data shown in Fig. \[Fig2\](a) and Fig. \[Fig3\](a) (for fixed $T < T_c$ and fixed $B$, respectively), using $\tau$, $n_0$ and $\Delta(T)$ as fitting parameters. The values of $E_F$ and $T_c$ were previously determined from the analysis of $R_{xy}(B,T)$ at $T>T_c$ with SF theory. The parameter $n_0 \approx 10^{21}$ cm$^{-3}$ agrees with the widely accepted value for the cuprates [@Bozovic_n0]. The relaxation time of the normal carriers in the vortex core is estimated to be $\tau \approx 0.1$ ps. This value is in reasonable agreement with the quasiparticle lifetime estimated from scanning tunneling spectroscopy of the vortex cores in BSCCO [@tau_vortex], where normal quasiparticle excitations at $E \approx 7$ meV were reported. A crude estimate of the core state lifetime is therefore $\hbar/E \approx$ 0.1 ps. The numerical values of all our fitting parameters are summarized in Table I in the Supplementary Information.
Dashed lines in Fig. \[Fig2\](a) and Fig. \[Fig3\](a) are fits calculated according to Eq. (\[sigmaXY\]). Here, importantly, we kept the temperature dependence of the superconducting gap $\Delta(T)$ as a free fitting parameter. This was prompted by two reasons. First, setting the classic BCS form of $\Delta(T/T_c)$ in Eq. (\[sigmaXY\]), we would obtain unreasonably small values of the field $B$ at which the sign reversal occurs. Second, a *superconducting* gap (not pseudogap) $\Delta(T)$ that remains nonzero at $T_c$ has been theoretically proposed [@PRLGap; @NatComTiN] and experimentally observed in tunneling [@SciRepGap] and in angle-resolved photoemission spectroscopy (ARPES) [@NatPhysGap]. Our estimated temperature dependences of the superconducting gap $\Delta(T/T_c)/T_c$ are shown in the inset of Fig. \[Fig3\](a) for all samples. We notice that the $\Delta(T/T_c)$ dependence differs from the standard BCS dependence, namely $\Delta(T_c)\neq 0$. The deviation from BCS is more pronounced for thinner samples, suggesting that fluctuation effects may be the major source of such a large deviation. Phenomenologically, it is interesting to note that the estimated $\Delta(T)$ evolves according to the expected BCS equation, but with a temperature $T_{c0}$ about 10 percent larger than the observed $T_c$, suggesting a renormalization of the SC gap.
Finally, using the same set of fitting parameters, we can trace the phase boundary of the Hall-sign-reversed region in Fig. \[Fig3\](b) for a further independent comparison with experiment. According to Eq. (\[sigmaXY\]), the sign-reversal locus $R_{xy}=0$ is defined by the relation: $$B^2 = \left(\frac{\Delta}{E_F}\right)^2\frac{n_0c}{n_H\mu_H^2}\frac{(\Delta\tau/\hbar)^2g-1}{1-g}. \label{Field}$$ The boundary defined by Eq. (\[Field\]) demonstrates excellent agreement with the experimental observations shown in Fig. \[Fig3\](b) for $T<T_c$. Above $T_c$, the phase boundary drops rapidly as $T$ increases, a behavior accurately captured by our SF fits.
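The boundary itself can be traced numerically from Eq. (\[Field\]). The sketch below (reduced units, a constant toy gap, all values illustrative rather than our fitted parameters) returns NaN wherever the right-hand side is negative, i.e. where no sign reversal occurs at that temperature.

```python
import numpy as np

def reversal_boundary(T, Tc, Delta, tau, n0, n_H, mu_H, EF,
                      hbar=1.0, c=1.0):
    """B(T) along which R_xy = 0 below Tc, from Eq. (Field), in reduced
    units. Returns NaN where the right-hand side is negative."""
    T = np.asarray(T, dtype=float)
    g = 1.0 - (T / Tc) ** 2
    rhs = ((Delta / EF) ** 2 * (n0 * c) / (n_H * mu_H ** 2)
           * ((Delta * tau / hbar) ** 2 * g - 1.0) / (1.0 - g))
    # mask negative rhs before sqrt to avoid spurious warnings
    return np.where(rhs > 0, np.sqrt(np.where(rhs > 0, rhs, 0.0)), np.nan)
```

With these toy values the boundary field is real only where $(\Delta\tau/\hbar)^2 g > 1$, mirroring the closing of the sign-reversed region as $T \to T_c$.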
In conclusion, we developed van der Waals heterostructure assembly techniques specialized to the cuprates. We fabricated few-unit-cell [$\mathrm{Bi_2Sr_2CaCu_2O_{8+\delta}}$]{} crystals, where strongly enhanced Hall sign reversal was observed. From quantitative analysis of the double Hall sign reversal, we find that the superconducting gap is nonzero at the critical temperature $T_c$.
The experiments at Harvard were supported by the National Science Foundation (DMR-1809188) and the Gordon and Betty Moore Foundation EPiQS Initiative (GBMF4543). Stencil masks were fabricated at the Harvard Center for Nanoscale Systems (CNS), a part of NNCI, NSF award 1541959. S.Y.F.Z. was partially supported by the NSERC PGS program. N.P. was partially supported by ARO (W911NF-17-1-0574). G.D.G. is supported by the Office of Science, U.S. Department of Energy under Contract No. DE-SC0012704. R.Z. is supported by the Center for Emergent Superconductivity, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science. K.W. and T.T. acknowledge support from the Elemental Strategy Initiative conducted by the MEXT, Japan and the CREST (JPMJCR15F3), JST. The work of V.M.V. was supported by the US Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. The work of S.V.P. on the analysis of the experimental data was supported by the Russian Science Foundation under Grant No. 15-12-10020.
[10]{} Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras and P. Jarillo-Herrero, Nature **556**, 43-50 (2018).
N.P. Breznay and A. Kapitulnik, Sci. Adv. **3**, e1700612 (2017).
J. May-Mann and P.W. Phillips, Phys. Rev. B **97**, 024508 (2018).
Y. Wang, I. Tamir, D. Shahar, and N.P. Armitage, Phys. Rev. Lett. **120**, 167002 (2018)
N.P. Breznay, K. Michaeli, K.S. Tikhonov, A.M. Finkelstein, M. Tendulkar and A. Kapitulnik. Phys. Rev. B **86**, 014514 (2012).
D. Destraz, K. Ilin, M. Siegel, A. Schilling, and J. Chang. Phys. Rev. B **95**, 224501 (2017).
K. Michaeli, K.S. Tikhonov, and A.M. Finkelstein, Phys. Rev. B **86**, 014515 (2012).
K. Makise, F. Ichikawa, T. Asano and B. Shinozaki, J. Phys.: Condens. Matter 30, 065402 (2018).
P. Ao and D.J. Thouless, Phys. Rev. Lett. **70**, 2158 (1993)
D.I. Khomskii and A. Freimuth, Phys. Rev. Lett. **75**, 1384 (1995)
M.V. Feigelman, V.B. Geshkenbein, A.I. Larkin, and V.M. Vinokur, Physica C **235-240**, 3127 (1994).
M.V. Feigelman, V.B. Geshkenbein, A.I. Larkin, and V.M. Vinokur, JETP Lett. **62**, 835 (1995).
M.P.A. Fisher, Physica A **177**, 553 (1991).
A.T. Dorsey and M.P.A. Fisher. Phys. Rev. Lett. **68**, 694 (1992).
B. Sacepe, C. Chapelier, T.I. Baturina, V.M. Vinokur, M.R. Baklanov and M. Sanquer. Nature Commun. **1**, 140 (2010)
R.P. Vasquez. J. Electron. Spectrosc. Relat. Phenom. **66** 3 209 (1994).
N. Poccia *et al.* Nat. Mater **10** 733 (2011).
L.J. Sandilands *et al.* Phys. Rev. B **90** 081402 (2014).
A.T. Bollinger, G. Dubuis, J. Yoon, D. Pavuna, J. Misewich and I. Božović, Nature **472**, 458 (2011).
X. G. Qiu. High-Temperature Superconductors, Elsevier (2011)
A.I. Larkin and A.A. Varlamov. *Theory of Fluctuations in Superconductors*, (Oxford University Press, New York, 2005).
S.V. Postolova, A.Y. Mironov, M.R. Baklanov, V.M. Vinokur and T.I. Baturina, Sci. Rep. **7**, 1718 (2017).
M. Truccato, A. Agostino, G. Rinaudo, S. Cagliero and M. Panetta J. Phys.: Condens. Matter **18**, 8295 (2006).
S.V. Postolova, A.Yu. Mironov and T.I. Baturina, *JETP Lett.* **100**, 635 (2015).
B.L. Altshuler and A.G. Aronov, in *Electron-Electron Interactions in Disordered Systems*, edited by A.L. Efros and M. Pollak (North-Holland, Amsterdam, 1985).
P.G. Baity, X. Shi, Z. Shi, L. Benfatto, and D. Popovic, Phys. Rev. B **93**, 024519 (2016).
One should be aware that the Hall coefficient in HTS is not straightforwardly related to the carrier density, see L.P. Gor'kov and G.B. Teitel'baum, Phys. Rev. Lett. **97**, 247003 (2006). Hence our experimentally determined $n_H$ should be considered an estimate.
K. Nakao, K. Hayashi, T. Utagawa, Y. Enomoto, and N. Koshizuka, Phys. Rev. B **57**, 8662 (1998).
A. Kamenev, *Field theory of nonequilibrium systems.* (Cambridge University Press, 2011).
A.Glatz, A.A. Varlamov and V.M. Vinokur, EPL **94**, 47005 (2011)
Z.A. Xu, N.P. Ong, Y. Wang, T. Kakeshita, S. Uchida Nature **406**, 486 (2000).
Y. Wang, Z.A. Xu, T. Kakeshita, S. Uchida, S. Ono, Y. Ando and N.P. Ong, Phys. Rev. B **64**, 224519 (2001).
A.A. Varlamov, G. Balestrino, E. Milani and D.V. Livanov Adv. Phys. **48** 655 (1999).
A. K. Saxena, High-Temperature Superconductors (Springer, Berlin/Heidelberg, 2012)
M. Tinkham, *Introduction to Superconductivity*, 2nd ed. (McGraw-Hill, New York, 1996).
A.T. Bollinger and I. Božović. Supercond. Sci. Technol. **29**, 103001 (2016)
S.H. Pan, E.W. Hudson, A.K. Gupta, K.W. Ng, H. Eisaki, S. Uchida, and J.C. Davis, Phys. Rev. Lett. **85**, 1536 (2000).
B.V. Fine, Phys. Rev. Lett. **94**, 157005 (2005)
J.K. Ren, X.B. Zhu, H.F. Yu, Y. Tian, H.F. Yang, C.Z. Gu, N.L. Wang, Y.F. Ren and S.P. Zhao, Scientific Reports **2**, 248 (2012).
M. Hashimoto, I.M. Vishik, R.H. He, T.P. Devereaux and Z.X. Shen, Nat. Phys. **10**, 483 (2014)
---
abstract: 'It is known ([@KS], [@N]) that for any prime $p$ and any finite semiabelian $p$-group $G$, there exists a (tame) realization of $G$ as a Galois group over the rationals $\Bbb Q$ with exactly $d=\operatorname{d}(G)$ ramified primes, where $\operatorname{d}(G)$ is the minimal number of generators of $G$, which solves the minimal ramification problem for finite semiabelian $p$-groups. We generalize this result to obtain a theorem on finite semiabelian groups and derive the solution to the minimal ramification problem for a certain family of semiabelian groups that includes all finite nilpotent semiabelian groups $G$. Finally, we give some indication of the depth of the minimal ramification problem for semiabelian groups not covered by our theorem.'
address:
-
-
-
author:
- Hershy Kisilevsky
- Danny Neftin
- Jack Sonn
title: On the minimal ramification problem for semiabelian groups
---
[^1]
Introduction
============
Let $G$ be a finite group. Let $d=\operatorname{d}(G)$ be the smallest number for which there exists a subset $S$ of $G$ with $d$ elements such that the normal subgroup of $G$ generated by $S$ is all of $G$. One observes that if $G$ is realizable as a Galois group $G(K/{\mathbb{Q}})$ with $K/{\mathbb{Q}}$ tamely ramified (e.g. if none of the ramified primes divide the order of $G$), then at least $\operatorname{d}(G)$ rational primes ramify in $K$ (see e.g. [@KS]). The *minimal ramification problem* for $G$ is to realize $G$ as the Galois group of a tamely ramified extension $K/{\mathbb{Q}}$ in which exactly $\operatorname{d}(G)$ rational primes ramify. This variant of the inverse Galois problem is open even for $p$-groups, and no counterexample has been found. It is known that the problem has an affirmative solution for all semiabelian $p$-groups, for all rational primes $p$ ([@KS],[@N]). A finite group $G$ is *semiabelian* if and only if $G \in {\mathcal{S}}{\mathcal{A}}$, where ${\mathcal{S}}{\mathcal{A}}$ is the smallest family of finite groups satisfying: (i) every finite abelian group belongs to ${\mathcal{S}}{\mathcal{A}}$; (ii) if $G \in {\mathcal{S}}{\mathcal{A}}$ and $A$ is finite abelian, then any semidirect product $A \rtimes G$ belongs to ${\mathcal{S}}{\mathcal{A}}$; (iii) if $G \in {\mathcal{S}}{\mathcal{A}}$, then every homomorphic image of $G$ belongs to ${\mathcal{S}}{\mathcal{A}}$. In this paper we generalize this result to arbitrary finite semiabelian groups by means of a “wreath product length” $\operatorname{wl}(G)$ of a finite semiabelian group $G$. When a finite semiabelian group $G$ is nilpotent, $\operatorname{wl}(G)=\operatorname{d}(G)$, which for nilpotent groups equals the (more familiar) minimal number of generators of $G$. Thus the general result does not solve the minimal ramification problem for all finite semiabelian groups, but it does specialize to an affirmative solution for nilpotent semiabelian groups. Note that for a nilpotent group $G$, $\operatorname{d}(G)$ is $\max_{p \mid |G|}\operatorname{d}(G_p)$ and not $\sum_{p \mid |G|}\operatorname{d}(G_p)$, where $G_p$ is the $p$-Sylow subgroup of $G$. Thus, a solution to the minimal ramification problem for nilpotent groups does not follow trivially from the solution for $p$-groups.
Properties of wreath products
=============================
Functoriality
-------------
The family of semiabelian groups can also be defined using wreath products. Let us recall the definition of a wreath product. Here and throughout the text the actions of groups on sets are all right actions.
Let $G$ and $H$ be two groups that act on the sets $X$ and $Y$, respectively. The *(permutational) wreath product* $H\wr_X G$ is the set $H^X\times G=\{(f,g)\,|\,f:X{\rightarrow}H,\,g\in G\}$, which is a group with respect to the multiplication: $$(f_1,g_1)(f_2,g_2)=(f_1f_2^{g_1^{-1}},g_1g_2),$$ where $f_2^{g_1^{-1}}$ is defined by $f_2^{g_1^{-1}}(x)=f_2(xg_1)$ for any $g_1,g_2\in G$, $x\in X$, $f_1,f_2:X{\rightarrow}H$. The group $H\wr_X G$ acts on the set $Y\times X$ by $(y,x)\cdot (f,g)=(yf(x),xg)$, for any $y\in Y$, $x\in X$, $f:X{\rightarrow}H$, $g\in G$.
The *standard (or regular) wreath product* $H\wr G$ is defined as the permutational wreath product with $X=G$,$Y=H$ and the right regular actions.
The functoriality of the arguments of a wreath product will play an important role in the sequel. The following five lemmas are devoted to these functoriality properties.
Let $G$ be a group that acts on $X$ and $Y$. A map $\phi:X{\rightarrow}Y$ is called a $G$-map if $\phi(xg)=\phi(x)g$ for every $g\in G$ and $x\in X$.
Note that for such $\phi$, we also have $\phi^{-1}(y)g=\{xg|\phi(x)=y\}=\{x'|\phi(x'g^{-1})=y\}=\{x'|\phi(x')=yg\}=\phi^{-1}(yg).$
\[G-map induces an epimophism\] Let $G$ be a group that acts on the finite sets $X,Y$ and let $A$ be an abelian group. Then every $G$-map $\phi:X{\rightarrow}Y$ induces a homomorphism ${\widetilde{\phi}}: A\wr_X G{\rightarrow}A\wr_Y G$ defined by ${\widetilde{\phi}}(f,g) = ({\hat{\phi}}(f),g)$ for every $f:X{\rightarrow}A$ and $g\in G$, where $\hat{\phi}(f):Y{\rightarrow}A$ is defined by: $$\hat{\phi}(f)(y)=\prod_{x\in \phi^{-1}(y)}f(x),$$ for every $y\in Y$. Furthermore, if $\phi$ is surjective then ${\widetilde{\phi}}$ is an epimorphism.
Let us show that the above ${\widetilde{\phi}}$ is indeed a homomorphism. For this we claim: ${\widetilde{\phi}}((f_1,g_1)(f_2,g_2))={\widetilde{\phi}}(f_1,g_1){\widetilde{\phi}}(f_2,g_2)$ for every $g_1,g_2\in G$ and $f_1,f_2:X{\rightarrow}A$. By definition: $${\widetilde{\phi}}(f_1,g_1){\widetilde{\phi}}(f_2,g_2)=({\hat{\phi}}(f_1),g_1)({\hat{\phi}}(f_2),g_2)=({\hat{\phi}}(f_1){\hat{\phi}}(f_2)^{g_1^{-1}},g_1g_2),$$ while: ${\widetilde{\phi}}((f_1,g_1)(f_2,g_2))={\widetilde{\phi}}(f_1f_2^{g_1^{-1}},g_1g_2)=({\hat{\phi}}(f_1f_2^{g_1^{-1}}),g_1g_2).$ We shall show that ${\hat{\phi}}(f_1f_2)={\hat{\phi}}(f_1){\hat{\phi}}(f_2)$ and ${\hat{\phi}}(f^g)={\hat{\phi}}(f)^g$ for every $f_1,f_2,f:X {\rightarrow}A$ and $g\in G$. Clearly this will imply the claim. The first assertion follows since: $${\hat{\phi}}(f_1f_2)(y)=\prod_{x\in \phi^{-1}(y)} f_1(x)f_2(x)=\prod_{x\in \phi^{-1}(y)} f_1(x)\prod_{x\in \phi^{-1}(y)} f_2(x)= {\hat{\phi}}(f_1)(y){\hat{\phi}}(f_2)(y).$$ As to the second assertion we have: $${\hat{\phi}}(f^g)(y)= \prod_{x\in \phi^{-1}(y)} f^g(x)=\prod_{x\in \phi^{-1}(y)} f(xg^{-1}) = \prod_{x'g\in \phi^{-1}(y)} f(x') = \prod_{x'\in \phi^{-1}(y)g^{-1}} f(x').$$ Since $\phi$ is a $G$-map we have $\phi^{-1}(y)g^{-1}=\phi^{-1}(yg^{-1})$, and thus $${\hat{\phi}}(f^g)(y)= \prod_{x\in \phi^{-1}(y)g^{-1}} f(x)= \prod_{x\in \phi^{-1}(yg^{-1})} f(x) = {\hat{\phi}}(f)^g(y).$$ This proves the second assertion and hence the claim. It remains to show that if $\phi$ is surjective then ${\widetilde{\phi}}$ is surjective. Let $f':Y{\rightarrow}A$ and $g'\in G$. Let us define an $f:X{\rightarrow}A$ that maps to $f'$. For every $y\in Y$ choose an element $x_y\in X$ for which $\phi(x_y)=y$ and define $f(x_y):=f'(y)$. Define $f(x)=1$ for any $x\not\in\{x_y\,|\,y\in Y\}$. Then clearly $${\hat{\phi}}(f)(y)=\prod_{x\in\phi^{-1}(y)}f(x)=f(x_y)=f'(y).$$ Thus, ${\widetilde{\phi}}(f,g')=({\hat{\phi}}(f),g')=(f',g')$ and ${\widetilde{\phi}}$ is onto.
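The content of the lemma can also be checked computationally on a small example. In the sketch below (our construction, for illustration only), $A=\mathbb{Z}_3$ is written additively, $G=\mathbb{Z}_4$ acts on $X=\mathbb{Z}_4$ by translation, and $\phi(x)=x \bmod 2$ is a surjective $G$-map onto $Y=\mathbb{Z}_2$; the induced map multiplies (here: sums) $f$ over each fiber, exactly as in the lemma.

```python
import random
from itertools import product

M, N, K = 3, 4, 2  # A = Z_3 (additive), X = Z_4, Y = Z_2; K divides N

def wr_mul(a, b, size):
    """(f1,g1)(f2,g2) = (f1 * f2^{g1^{-1}}, g1 g2) in A wr_X G, with f
    stored as a tuple over X = Z_size; here f2^{g1^{-1}}(x) = f2(x + g1)."""
    (f1, g1), (f2, g2) = a, b
    f = tuple((f1[x] + f2[(x + g1) % size]) % M for x in range(size))
    return (f, (g1 + g2) % N)

def phi_tilde(el):
    """Induced map A wr_X G -> A wr_Y G for the G-map phi(x) = x mod K:
    sum f over each fiber of phi."""
    f, g = el
    return (tuple(sum(f[x] for x in range(N) if x % K == y) % M
                  for y in range(K)), g)

elems = [(f, g) for f in product(range(M), repeat=N) for g in range(N)]
random.seed(0)
homomorphic = all(
    phi_tilde(wr_mul(a, b, N)) == wr_mul(phi_tilde(a), phi_tilde(b), K)
    for a, b in [(random.choice(elems), random.choice(elems))
                 for _ in range(300)])
```

As the lemma predicts, `phi_tilde` is a homomorphism, and since $\phi$ is surjective its image is all of $A\wr_Y G$ (of order $|A|^{|Y|}\cdot|G| = 36$ here).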
\[construction of G-maps\] Let $B$ and $C$ be two groups. Then there is a surjective $B\wr C$-map $\phi:B\wr C{\rightarrow}B\times C$ defined by: $\phi(f,c)=(f(1),c)$ for every $f:C{\rightarrow}B, c\in C$.
Let $(f,c),(f',c')$ be two elements of $B\wr C$. We check that $\phi((f,c)(f',c'))=\phi(f,c)\cdot(f',c')$. Indeed, $$\phi((f,c)(f',c'))=\phi(ff'^{c^{-1}},cc')=(f(1)f'^{c^{-1}}(1),cc')=(f(1)f'(c),cc')=(f(1),c)\cdot(f',c')=\phi(f,c)\cdot(f',c').$$ Note that the map $\phi$ is surjective: for every $b\in B$ and $c\in C$, one can choose a function $f_b:C{\rightarrow}B$ for which $f_b(1)=b$. One has: $\phi(f_b,c)=(b,c)$.
The following Lemma appears in [@B Part I, Chapter I, Theorem 4.13] and describes the functoriality of the first argument in the wreath product.
\[functoriality of first argument\] Let $G,A,B$ be groups and $h:A{\rightarrow}B$ a homomorphism (resp. epimorphism). Then there is a naturally induced homomorphism (resp. epimorphism) $h_*:A\wr G{\rightarrow}B\wr
G$ given by $h_*(f,g)=(h\circ f,g)$ for every $g\in G$ and $f:G{\rightarrow}A$.
The functoriality of the second argument is given in [@N Lemma 2.15] whenever the first argument is abelian:
\[functoriality with second argument\] Let $A$ be an abelian group and let $\psi:G{\rightarrow}H$ be a homomorphism (resp. epimorphism) of finite groups. Then there is a homomorphism (resp. epimorphism) ${\widetilde{\psi}}:A\wr G{\rightarrow}A\wr H$ that is defined by: ${\widetilde{\psi}}(f,g)=({\hat{\psi}}(f),\psi(g))$ with ${\hat{\psi}}(f)(h)=\prod_{k\in
\psi^{-1}(h)}f(k)$ for every $h\in H$.
These functoriality properties can now be joined to give a connection between different bracketing of iterated wreath products:
\[induction step\] Let $A,B,C$ be finite groups and $A$ abelian. Then there are epimorphisms: $$A\wr (B \wr C){\rightarrow}(A\wr
B)\wr C{\rightarrow}(A\times B)\wr C.$$
Let us first construct an epimorphism $h_*:(A\wr B)\wr C{\rightarrow}(A\times B)\wr C$. Define $h:A\wr B{\rightarrow}A\times B$ by: $$h(f,b)=(\prod_{x\in B}f(x),b),$$ for any $f:B{\rightarrow}A$, $b\in B$. Since $A$ is abelian, $h$ is a homomorphism. For every $a\in A$, let $f_a:B{\rightarrow}A$ be the map with $f_a(b')=1$ for any $1\not=b'\in B$ and $f_a(1)=a$. Then clearly $h(f_a,b)=(a,b)$ for any $a\in A$, $b\in B$, and hence $h$ is onto. By Lemma \[functoriality of first argument\], $h$ induces an epimorphism $h_*:(A\wr B)\wr C{\rightarrow}(A\times B)\wr C$. To construct the epimorphism $A\wr (B \wr C){\rightarrow}(A\wr B)\wr C$, we shall use the associativity of the permutational wreath product (see [@B Theorem 3.2]). Using this associativity one has: $$(A\wr B)\wr C= (A\wr_B B)\wr_C C\cong A\wr_{B\times C} (B\wr_C C).$$ It now remains to construct an epimorphism: $$A\wr (B\wr C)=A\wr_{B\wr C} (B\wr C){\rightarrow}A\wr_{B\times C} (B\wr C).$$ By Lemma \[construction of G-maps\], there is a $B\wr C$-map $\phi:B\wr C{\rightarrow}B\times C$, and hence by Lemma \[G-map induces an epimophism\] there is an epimorphism $A\wr_{B\wr C} (B\wr C){\rightarrow}A\wr_{B\times C} (B\wr C)$.
Let us iterate Lemma \[induction step\]. Let $G_1,...,G_n$ be groups. The *ascending iterated standard wreath product* of $G_1,...,G_n$ is defined as $$(\cdots ((G_1 \wr G_2) \wr G_3)\wr \cdots) \wr G_n,$$ and the *descending iterated standard wreath product* of $G_1,...,G_n$ is defined as $$G_1 \wr (G_2 \wr (G_3 \wr \cdots (G_{n-1} \wr G_n)\cdots )).$$ These two iterated wreath products are not isomorphic in general, as the standard wreath product is not associative (as opposed to the permutational wreath product). We shall abbreviate and write $G_1\wr (G_2\wr ... \wr G_n)$ to refer to the descending wreath product and $(G_1 \wr ... \wr G_{r-1})\wr G_r$ to refer to the ascending wreath product. By iterating the epimorphism in Lemma \[induction step\] one obtains:
\[decending to ascending\] Let $A_1,..,A_r$ be abelian groups. Then $(A_1 \wr ... \wr A_{r-1})\wr A_r$ is an epimorphic image of $A_1 \wr ( A_2 \wr ... \wr A_r).$
By induction on $r$. The cases $r=1,2$ are trivial; assume $r\geq 3$. By the induction hypothesis there is an epimorphism $$\pi_1':A_1 \wr (A_2\wr .... \wr A_{r-1}){\rightarrow}(A_1\wr ... \wr A_{r-2})\wr A_{r-1}.$$ By Lemma \[functoriality of first argument\], $\pi_1'$ induces an epimorphism $\pi_1:(A_1\wr (A_2\wr ... \wr A_{r-1}))\wr A_r {\rightarrow}(A_1\wr ... \wr A_{r-1})\wr A_r$. Applying Lemma \[induction step\] with $A=A_1,B=A_2\wr (A_3\wr ... \wr A_{r-1}), C=A_r$, one obtains an epimorphism: $$\pi_2: A_1\wr (A_2\wr ... \wr A_r){\rightarrow}(A_1\wr (A_2\wr ... \wr A_{r-1}))\wr A_r.$$ Taking the composition $\pi=\pi_1\pi_2$ one obtains an epimorphism $$\pi:A_1\wr (A_2\wr ... \wr A_r){\rightarrow}(A_1\wr ... \wr A_{r-1})\wr A_r.$$
Dimension under epimorphisms
----------------------------
Let us understand how the “dimension" $\operatorname{d}$ behaves under the homomorphisms in Lemma \[induction step\] and Corollary \[decending to ascending\]. By [@KL], for any finite group $G$ that is not perfect, i.e. $[G,G]\not=G$, where $[G,G]$ denotes the commutator subgroup of $G$, one has $\operatorname{d}(G)=\operatorname{d}(G/[G,G])$. According to our definitions, for a perfect group $G$, $\operatorname{d}(G/[G,G])=\operatorname{d}(\{1\})=0$, but if $G$ is nontrivial, $\operatorname{d}(G)\geq 1$. As nontrivial semiabelian groups are not perfect, this difference will not affect any of the arguments in the sequel.
Let $G$ be a finite group and $p$ a prime. Define $\operatorname{d}_p(G)$ to be the rank of the $p$-Sylow subgroup of $G/[G,G]$, i.e. $\operatorname{d}_p(G):=\operatorname{d}((G/[G,G])(p))$.
Note that if $G$ is not perfect one has $\operatorname{d}(G)=\max_{p}\operatorname{d}_p(G)$. Let $p$ be a prime. An epimorphism $f:G{\rightarrow}H$ is called $\operatorname{d}$-preserving (resp. $\operatorname{d}_p$-preserving) if $\operatorname{d}(G)=\operatorname{d}(H)$ (resp. $\operatorname{d}_p(G)=\operatorname{d}_p(H)$).
Let $G$ and $H$ be two finite groups. Then: $$H\wr G/[H\wr G,H\wr G]\cong H/[H,H]\times G/[G,G].$$
Applying Lemmas \[functoriality of first argument\] and \[functoriality with second argument\] one obtains an epimorphism $$H\wr G{\rightarrow}H/[H,H]\wr G/[G,G].$$ By Lemma \[induction step\] (applied with $C=1$) there is an epimorphism $$H/[H,H]\wr G/[G,G]{\rightarrow}H/[H,H]\times G/[G,G].$$ Composing these epimorphisms one obtains an epimorphism $$\pi:H\wr G{\rightarrow}H/[H,H]\times G/[G,G],$$ that sends an element $(f:G{\rightarrow}H,g)\in H\wr G$ to $$(\prod_{x\in G}f(x)[H,H],g[G,G])\in H/[H,H]\times G/[G,G].$$ The image of $\pi$ is abelian and hence $\operatorname{ker}(\pi)$ contains $K:=[H\wr G,H\wr G]$.
Let us show $K\supseteq \operatorname{ker}(\pi)$. Let $(f,g)\in \operatorname{ker}(\pi)$. Then $g\in [G,G]$ and $\prod_{x\in G}f(x)\in [H,H]$. As $g\in [G,G]$, it suffices to show that the element $f=(f,1)\in H\wr G$ is in $K$. Let $g_1,...,g_n$ be the elements of $G$ and for every $i=1,...,n$, let $f_i$ be the function for which $f_i(g_i)=f(g_i)$ and $f_i(g_j)=1$ for every $j\not= i$. One can write $f$ as $\prod_{i=1}^n f_i$. Now for every $i=1,...,n$, the function $f_{1,i}=f_i^{g_i^{-1}}$ satisfies $f_{1,i}(1)=f(g_i)$ and $f_{1,i}(g_j)=1$ for every $j\not=1$. Thus $f_i$ is a product of an element in $[H^{|G|},G]$ and $f_{1,i}$. So, $f$ is a product of elements in $[H^{|G|},G]$ and $f'=\prod_{i=1}^n f_{1,i}$. But $f'(1)=\prod_{x\in G}f(x)\in [H,H]$ and $f'(g_i)=1$ for every $i\not= 1$, and hence $f'\in [H^{|G|},H^{|G|}]$. Thus, $f\in K$ as required and $K=\operatorname{ker}(\pi)$.
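The proposition can be verified by brute force in the smallest nonabelian case $H=G=\mathbb{Z}_2$, where $H\wr G$ is the dihedral group of order $8$: the commutator subgroup has order $2$ and the quotient is $\mathbb{Z}_2\times\mathbb{Z}_2\cong H/[H,H]\times G/[G,G]$. The sketch below (ours, for illustration) builds the wreath product from the multiplication rule above and closes the set of commutators under multiplication.

```python
from itertools import product

N = 2  # H = G = Z_2; elements of H wr G are (f, g) with f: Z_2 -> Z_2

def mul(a, b):
    """Wreath multiplication (f1,g1)(f2,g2) = (f1 f2^{g1^{-1}}, g1 g2)."""
    (f1, g1), (f2, g2) = a, b
    f = tuple((f1[x] + f2[(x + g1) % N]) % 2 for x in range(N))
    return (f, (g1 + g2) % N)

def inv(a):
    """Inverse of (f, g) in the wreath product."""
    f, g = a
    gi = (-g) % N
    return (tuple((-f[(x + gi) % N]) % 2 for x in range(N)), gi)

elems = [(f, g) for f in product(range(2), repeat=N) for g in range(N)]
# commutator subgroup: all [a,b] = a b a^{-1} b^{-1}, closed under product
derived = {mul(mul(a, b), mul(inv(a), inv(b))) for a in elems for b in elems}
while True:
    new = {mul(x, y) for x in derived for y in derived} - derived
    if not new:
        break
    derived |= new
```

The computation confirms that every square lies in the commutator subgroup, so the quotient of order $4$ has exponent $2$, i.e. it is $\mathbb{Z}_2\times\mathbb{Z}_2$ as the proposition asserts.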
The following is an immediate conclusion:
Let $G$ and $H$ be two finite groups. Then $$\operatorname{d}_p(H\wr G)=\operatorname{d}_p(H)+\operatorname{d}_p(G)$$ for any prime $p$.
So, for groups $A,B,C$ as in Lemma \[induction step\], we have: $$\operatorname{d}_p(A\wr (B \wr C))=\operatorname{d}_p( (A\times B)\wr C) = \operatorname{d}_p(A\times B\times C)= \operatorname{d}_p(A)+\operatorname{d}_p(B)+\operatorname{d}_p(C)$$ for every $p$. In particular, the epimorphisms in Lemma \[induction step\] are $\operatorname{d}$-preserving.
The same observation holds for Corollary \[decending to ascending\], so one has:
Let $A_1,...,A_r$ be finite abelian groups. Then $$\operatorname{d}_p(A_1\wr (A_2\wr ...\wr A_r))=\operatorname{d}_p((A_1\wr ... \wr A_{r-1})\wr A_r)=\operatorname{d}_p(A_1\times... \times A_r)$$ are all equal to $\sum_{i=1}^r\operatorname{d}_p(A_i)$ for any prime $p$.
For cyclic groups $A_1,...,A_r$, $\operatorname{d}_p(A_1 \wr (A_2 \wr ... \wr A_r))$ is simply the number of cyclic groups among $A_1,..,A_r$ whose $p$-part is non-trivial. Thus:
\[d of iterated cyclic is max appearence of p\] Let $C_1,...,C_r$ be finite cyclic groups and $G=C_1\wr (C_2 \wr ...\wr C_r)$. Then $\operatorname{d}(G)=\max_{p\mid |G|}\operatorname{d}(C_1(p)\wr (C_2(p) \wr ... \wr C_r(p)))$.
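For iterated wreath products of cyclic groups this count is easy to automate; the sketch below (the function names and list-of-orders encoding are ours) computes $\operatorname{d}$ directly from the list of cyclic orders, as the maximum over primes $p$ of the number of orders divisible by $p$.

```python
def prime_divisors(n):
    """Distinct prime divisors of n (trial division; fine for small n)."""
    out, p = set(), 2
    while p * p <= n:
        if n % p == 0:
            out.add(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

def d_iterated_wreath(orders):
    """d(C_1 wr (C_2 wr ... wr C_r)) for cyclic groups of the given orders:
    the maximum over primes p of the number of orders divisible by p."""
    primes = set().union(*(prime_divisors(n) for n in orders if n > 1))
    if not primes:
        return 0
    return max(sum(1 for n in orders if n % p == 0) for p in primes)

print(d_iterated_wreath([2, 2, 2]))    # 3: p = 2 divides every factor
print(d_iterated_wreath([2, 3, 5]))    # 1: the orders are pairwise coprime
print(d_iterated_wreath([6, 10, 15]))  # 2: each prime divides exactly two orders
```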
Let us apply Lemma \[induction step\] in order to relate descending iterated wreath products of abelian groups to those of cyclic groups:
\[connection between abelian and cyclic\] Let $A_1,...,A_r$ be finite abelian groups and let $A_i$ have invariant factors $C_{i,j}$ for $j=1,...,l_i$, i.e. $A_i=\prod_{j=1}^{l_i}C_{i,j}$ and $|C_{i,j}|$ divides $|C_{i,j+1}|$ for any $i=1,...,r$ and $j=1,...,l_i-1$. Then there is an epimorphism from the descending iterated wreath product $\widetilde{G}:=\wr_{i=1}^r \wr_{j=1}^{l_i} C_{i,j}$ [(here the groups $C_{i,j}$ are ordered lexicographically: $C_{1,1}, C_{1,2},...,C_{1,l_1},C_{2,1},...,C_{r,l_r}$)]{} to $G:= A_1\wr (A_2 \wr ...\wr A_r)$.
Let us assume $A_1$ is nontrivial (otherwise $A_1$ can simply be omitted). Let us prove the assertion by induction on $\sum_{i=1}^rl_i$. Let $G_2=A_2\wr (A_3 \wr ... \wr A_r)$. Write $A_1=C_{1,1}\times A_1'$. By Lemma \[induction step\], there is an epimorphism $\pi_1:C_{1,1} \wr (A_1'\wr G_2){\rightarrow}(C_{1,1}\times A_1')\wr G_2=A_1\wr G_2=G$. By applying the induction hypothesis to $A_1',A_2,...,A_r$, there is an epimorphism $\pi_2'$ from the descending iterated wreath product $\widetilde{G}_2=\wr_{j=2}^{l_1} C_{1,j} \wr (\wr_{i=2}^r \wr_{j=1}^{l_i} C_{i,j})$ to $A_1'\wr G_2$. By Lemma \[functoriality with second argument\], $\pi_2'$ induces an epimorphism $\pi_2:C_{1,1}\wr \widetilde{G}_2{\rightarrow}C_{1,1} \wr (A_1'\wr G_2)$. Taking the composition $\pi=\pi_1\pi_2$, we obtain the required epimorphism: $ \pi: \widetilde{G}=C_{1,1}\wr \widetilde{G}_2{\rightarrow}G. $
Note that: $$\operatorname{d}_p(\widetilde{G})=\sum_{i=1}^r\sum_{j=1}^{l_i} \operatorname{d}_p(C_{i,j})=\sum_{i=1}^r \operatorname{d}_p(A_i)=\operatorname{d}_p(G)$$ for every $p$ and hence $\pi$ is $\operatorname{d}$-preserving.
Therefore, showing $G$ is a $\operatorname{d}$-preserving epimorphic image of an iterated wreath product of abelian groups is equivalent to showing $G$ is a $\operatorname{d}$-preserving epimorphic image of an iterated wreath product of finite cyclic groups.
Wreath length
=============
The following lemma is essential for the definition of wreath length:
Let $G$ be a finite semiabelian group. Then $G$ is a homomorphic image of a descending iterated wreath product of finite cyclic groups, i.e. there are finite cyclic groups $C_1,...,C_r$ and an epimorphism $C_1\wr (C_2\wr ... \wr C_r){\rightarrow}G.$
By Proposition \[connection between abelian and cyclic\] it suffices to show $G$ is an epimorphic image of a descending iterated wreath product of finite abelian groups. We shall prove this claim by induction on $|G|$. The case $G=\{1\}$ is trivial. By [@Den], $G=A_1H$ with $A_1$ an abelian normal subgroup and $H$ a proper semiabelian subgroup of $G$. First, there is an epimorphism $\pi_1:A_1\wr H {\rightarrow}A_1H=G$. By induction there are abelian groups $A_2,..,A_r$ and an epimorphism $\pi_2':A_2 \wr (A_3 \wr ... \wr A_r){\rightarrow}H$. By Lemma \[functoriality of first argument\], $\pi_2'$ can be extended to an epimorphism $\pi_2:A_1 \wr (A_2\wr ... \wr A_r) {\rightarrow}A_1 \wr H$. So, by taking the composition $\pi=\pi_1\pi_2$ one obtains the required epimorphism $\pi:A_1 \wr (A_2 \wr ... \wr A_r) {\rightarrow}G$.
We can now define:
Let $G$ be a finite semiabelian group. Define the [*wreath length*]{} $\operatorname{wl}(G)$ of $G$ to be the smallest positive integer $r$ such that there are finite cyclic groups $C_1,...,C_r$ and an epimorphism $C_1\wr (C_2 \wr ... \wr C_r) {\rightarrow}G$.
Let $\widetilde{G}=C_1\wr (C_2 \wr ... \wr C_r)$ and $\pi:\widetilde{G}{\rightarrow}G$ an epimorphism. Then by Corollary \[d of iterated cyclic is max appearence of p\]: $$\operatorname{d}(G)\leq \operatorname{d}(\widetilde{G})\leq r.$$ In particular $\operatorname{d}(G)\leq \operatorname{wl}(G)$.
\[wreath len of wreath prd\] Let $C_1,...,C_r$ be nontrivial finite cyclic groups. Then $\operatorname{wl}(C_1\wr (C_2 \wr ...\wr C_r))=r$.
Let $\operatorname{dl}(G)$ denote the derived length of a (finite) solvable group $G$, i.e. the smallest positive integer $n$ such that the $n$th higher commutator subgroup of $G$ ($n$th element in the derived series $G=G^{(0)}\geq G^{(1)}=[G,G]\geq \cdots \geq G^{(i)}=[G^{(i-1)},G^{(i-1)}]\geq \cdots$) is trivial. In order to prove this proposition we will use the following lemma:
\[derived length lemma\] Let $C_1,...,C_r$ be nontrivial finite cyclic groups. Then $\operatorname{dl}(C_1\wr (C_2 \wr ...\wr C_r))=r$.
It is easy to see (by induction) that $\operatorname{dl}(C_1\wr (C_2 \wr ...\wr C_r))\leq r$. We turn to the reverse inequality. By Corollary 2.11, it suffices to prove it for the ascending iterated wreath product $G=(C_1 \wr ... \wr C_{r-1})\wr C_r$. We prove this by induction on $r$. The case $r=1$ is trivial. Assume $r\geq 2$. Write $G_1:=(C_1 \wr ... \wr C_{r-2}) \wr C_{r-1}$ so that $G=G_1\wr C_r$. By the induction hypothesis, $\operatorname{dl}(G_1)=r-1$. View $G$ as the semidirect product $G_1^{|C_r|}\rtimes C_r$. For any $g\in G_1$, the element $t_g:=(g,g^{-1},1,1,...,1)\in G_1^{|C_r|}$ lies in $[G_1^{|C_r|},C_r]\leq G' \leq G_1^{|C_r|}$. Let $H=\{t_g|g\in G_1 \}$. The projection map $G_1^{|C_r|}\rightarrow G_1$ onto the first copy of $G_1$ maps $H$ onto $G_1$. Since $H\leq G'$, the projection map also maps $G'$ onto $G_1$. It follows that $\operatorname{dl}(G')\geq \operatorname{dl}(G_1)=r-1$, whence $\operatorname{dl}(G)\geq r$.
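The lemma can be checked computationally on small cases. The sketch below uses a minimal permutation-group toolkit of our own, together with the standard model of the iterated wreath product of three copies of $C_2$ as automorphisms of the depth-3 binary tree on eight leaves, and confirms that its derived length is indeed $3$.

```python
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def closure(gens):
    """Group generated by the permutations in `gens` (BFS over products)."""
    n = len(gens[0])
    identity = tuple(range(n))
    elems, frontier = {identity}, [identity]
    while frontier:
        nxt = []
        for g in frontier:
            for s in gens:
                h = compose(g, s)
                if h not in elems:
                    elems.add(h)
                    nxt.append(h)
        frontier = nxt
    return elems

def derived_length(elems):
    """Length of the derived series G > G' > G'' > ... > 1."""
    dl = 0
    while len(elems) > 1:
        comms = {compose(compose(g, h), compose(inverse(g), inverse(h)))
                 for g in elems for h in elems}
        elems = closure(sorted(comms))
        dl += 1
    return dl

# (C_2 wr C_2) wr C_2 as automorphisms of the binary tree with leaves 0..7
a1 = (1, 0, 2, 3, 4, 5, 6, 7)   # swap leaves 0 and 1
a2 = (2, 3, 0, 1, 4, 5, 6, 7)   # swap the two depth-2 subtrees on the left
a3 = (4, 5, 6, 7, 0, 1, 2, 3)   # swap the two halves of the tree
G = closure([a1, a2, a3])
print(len(G), derived_length(G))   # 128 3
```

The order $128 = 2^{1+2+4}$ and the derived length $3$ agree with the lemma for $r=3$.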
To prove the proposition, we first observe that $\operatorname{wl}(C_1\wr (C_2 \wr ...\wr C_r))\leq r$ by definition. If $C_1\wr (C_2 \wr ...\wr C_r)$ were a homomorphic image of a shorter descending iterated wreath product $C_1'\wr (C_2' \wr ...\wr C_s')$, then by Lemma \[derived length lemma\], $s=\operatorname{dl}(C_1'\wr (C_2' \wr ...\wr C_s'))\geq \operatorname{dl}(C_1\wr (C_2 \wr ...\wr C_r))=r>s$, a contradiction.
Combining Proposition \[wreath len of wreath prd\] with Corollary \[d of iterated cyclic is max appearence of p\] we have:
\[descriptio of wl=d for iterated\] Let $C_1,...,C_r$ be finite cyclic groups and $G=C_1\wr (C_2 \wr ...\wr C_r)$. Then $\operatorname{wl}(G)=\operatorname{d}(G)$ if and only if there is a prime $p$ for which $p||C_1|,...,|C_r|$.
We shall now see that all examples of groups $G$ with $\operatorname{wl}(G)=\operatorname{d}(G)$ arise from Corollary \[descriptio of wl=d for iterated\]:
\[out char of wl\] Let $G$ be a finite semiabelian group. Then $\operatorname{wl}(G)=\operatorname{d}(G)$ if and only if there is a prime $p$, finite cyclic groups $C_1,...,C_r$ for which $p||C_i|$, $i=1,...,r$, and a $\operatorname{d}$-preserving epimorphism $\pi:C_1 \wr (C_2 \wr ... \wr C_r){\rightarrow}G$.
Let $d=\operatorname{d}(G)$. The equality $d=\operatorname{wl}(G)$ holds if and only if there are finite cyclic groups $C_1, C_2,...,C_d$ and an epimorphism $\pi:\widetilde{G}=C_1 \wr (C_2 \wr ... \wr C_d){\rightarrow}G$. Assume the latter holds. Clearly $d\leq \operatorname{d}(\widetilde{G})$ but by Corollary \[d of iterated cyclic is max appearence of p\] applied to $\widetilde{G}$ we also have $\operatorname{d}(\widetilde{G})\leq d$. It follows that $\pi$ is $\operatorname{d}$-preserving. Since $\operatorname{d}(G)=\max_p(\operatorname{d}_p(G))$, there is a prime $p$ for which $d=\operatorname{d}_p(G)$ and hence $\operatorname{d}_p(\widetilde{G})=d$. Thus, $p||C_i|$ for all $i=1,...,d$.
Let us prove the converse. Assume there is a prime $p$, finite cyclic groups $C_1,...,C_r$ for which $p||C_i|$, $i=1,...,r$, and a $\operatorname{d}$-preserving epimorphism $\pi:\widetilde{G}:=C_1 \wr (C_2 \wr ... \wr C_r){\rightarrow}G$. Since $p||C_i|$, it follows that $\operatorname{d}_p(\widetilde{G})=r$. As $\operatorname{d}_p(\widetilde{G})\leq \operatorname{d}(\widetilde{G})\leq r$, it follows that $\operatorname{d}(G)=\operatorname{d}(\widetilde{G})=r$. In particular $\operatorname{wl}(G)\leq r=\operatorname{d}(G)$ and hence $\operatorname{wl}(G)=\operatorname{d}(G)$.
\[cyclic dec of p-groups\] Let $G$ be a semiabelian $p$-group. By [@N Corollary 2.15], $G$ is a $\operatorname{d}$-preserving image of an iterated wreath product of abelian subgroups of $G$ (following the proof, one observes that the abelian groups are in fact subgroups of $G$). So, by Proposition \[connection between abelian and cyclic\], $G$ is a $\operatorname{d}$-preserving epimorphic image of $\widetilde{G}:=C_1 \wr (C_2 \wr ... \wr C_k)$ for cyclic subgroups $C_1,...,C_k$ of $G$. By applying Proposition \[out char of wl\] one obtains $\operatorname{wl}(G)=\operatorname{d}(G)$.
Throughout the proof of [@N Corollary 2.15] one can use the minimality assumption posed on the decompositions to show directly that the abelian groups $A_1,...,A_r$, for which there is a $\operatorname{d}$-preserving epimorphism $A_1 \wr (A_2 \wr ... \wr A_r){\rightarrow}G$, can be actually chosen to be cyclic.
We shall generalize Remark \[cyclic dec of p-groups\] to nilpotent groups:
\[wl=d for nilp\] Let $G$ be a finite nilpotent semiabelian group. Then $\operatorname{wl}(G)=\operatorname{d}(G)$.
Let $d=\operatorname{d}(G)$. Let $p_1,...,p_k$ be the primes dividing $|G|$ and let $P_i$ be the $p_i$-Sylow subgroup of $G$ for every $i=1,...,k$. So, $G\cong \prod_{i=1}^k P_i$. By Remark \[cyclic dec of p-groups\], there are cyclic $p_i$-groups $C_{i,1},...,C_{i,r_i}$ and a $\operatorname{d}$-preserving epimorphism $\pi_i:C_{i,1} \wr (C_{i,2} \wr ... \wr C_{i,r_i}){\rightarrow}P_i$ for every $i=1,...,k$. In particular, for any $i=1,...,k$, $r_i= \operatorname{d}(P_i)=\operatorname{d}_{p_i}(G)\leq d$. For any $i=1,...,k$ and any $d\geq j>r_i$, set $C_{i,j}=\{1\}$. For any $j=1,...,d$ define $C_j=\prod_{i=1}^kC_{i,j}$.
We claim $G$ is an epimorphic image of $\widetilde{G}=C_1 \wr (C_2 \wr ... \wr C_{d})$. To prove this claim it suffices to show that every $P_i$, $i=1,...,k$, is an epimorphic image of $\widetilde{G}$. As $C_{i,j}$ is an epimorphic image of $C_j$ for every $j=1,...,d$ and every $i=1,...,k$, one can apply Lemmas \[functoriality of first argument\] and \[functoriality with second argument\] iteratively to obtain an epimorphism $\pi_i':\widetilde{G}{\rightarrow}C_{i,1} \wr (C_{i,2} \wr ... \wr C_{i,d})$ for every $i=1,...,k$. Taking the composition $\pi_i\pi_i'$ gives the required epimorphism and proves the claim. As $G$ is an epimorphic image of an iterated wreath product of $\operatorname{d}(G)$ cyclic groups one has $\operatorname{wl}(G)\leq \operatorname{d}(G)$ and hence $\operatorname{wl}(G)=\operatorname{d}(G)$.
Let $G=D_n=\langle \sigma,\tau | \sigma^2=1,\tau^n=1, \sigma\tau\sigma=\tau^{-1} \rangle$ for $n\geq 3$. Since $G$ is an epimorphic image of $\langle\tau\rangle\wr \langle\sigma\rangle$ and $G$ is not abelian we have $\operatorname{wl}(G)=2$. On the other hand $\operatorname{d}(G)=\operatorname{d}(G/[G,G])$ is $1$ if $n$ is odd and $2$ if $n$ is even. So, $G=D_3=S_3$ is the minimal example for which $\operatorname{wl}(G)\not=\operatorname{d}(G)$.
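The two cases for $\operatorname{d}(D_n)$ can be verified by brute force from the presentation. In the sketch below (the encoding and all names are our own), elements of $D_n$ are pairs $(k,e)$ standing for $\tau^k\sigma^e$, and the commutator subgroup is recovered as the cyclic subgroup of $\langle\tau\rangle$ generated by all commutators.

```python
from math import gcd

def dihedral_ab_order(n):
    """Order of D_n/[D_n, D_n], with elements (k, e) representing tau^k sigma^e
    and the relation sigma tau sigma = tau^(-1)."""
    def mul(x, y):
        # tau^k1 sigma^e1 * tau^k2 sigma^e2 = tau^(k1 + (-1)^e1 k2) sigma^(e1 + e2)
        k = (x[0] + y[0]) % n if x[1] == 0 else (x[0] - y[0]) % n
        return (k, (x[1] + y[1]) % 2)
    def inv(x):
        return ((-x[0]) % n, 0) if x[1] == 0 else x  # reflections are involutions
    elems = [(k, e) for k in range(n) for e in (0, 1)]
    comms = {mul(mul(x, y), mul(inv(x), inv(y))) for x in elems for y in elems}
    # every commutator is a rotation tau^m; together they generate <tau^g>
    g = 0
    for m, e in comms:
        g = gcd(g, m)
    commutator_order = 1 if g == 0 else n // gcd(n, g)
    return (2 * n) // commutator_order

for n in (3, 4, 5, 6):
    print(n, dihedral_ab_order(n))   # prints 3 2, 4 4, 5 2, 6 4
```

The abelianization has order $2$ for odd $n$ (so $\operatorname{d}(D_n)=1$) and order $4$ for even $n$ (so $\operatorname{d}(D_n)=2$), matching the discussion above.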
A ramification bound for semiabelian groups
===========================================
In this section we prove:
\[main Theorem\] Let $G$ be a finite semiabelian group. Then there exists a tamely ramified extension $K/{\mathbb{Q}}$ with $G(K/{\mathbb{Q}})\cong G$ in which at most $\operatorname{wl}(G)$ primes ramify.
The proof relies on the splitting Lemma from [@KS]: Let $\ell$ be a rational prime, $K$ a number field and ${\frak{p}}$ a prime of $K$ that is prime to $\ell$. Let $I_{K,{\frak{p}}}$ denote the group of fractional ideals prime to ${\frak{p}}$, $P_{K,{\frak{p}}}$ the subgroup of principal ideals that are prime to ${\frak{p}}$ and let $P_{K,{\frak{p}},1}$ be the subgroup of principal ideals $(\alpha)$ with $\alpha\equiv 1$ (mod ${\frak{p}}$). Let $\overline{P}_{{\frak{p}}}$ denote $P_{K,{\frak{p}}}/P_{K,{\frak{p}},1}$. The ray class group $Cl_{K,{\frak{p}}}$ is defined to be $I_{K,{\frak{p}}}/P_{K,{\frak{p}},1}$. Now, as $I_{K,{\frak{p}}}/P_{K,{\frak{p}}}\cong Cl_{K}$, one has the following short exact sequence: $$\label{ray class field sequence}
1 \longrightarrow \overline{P}_{{\frak{p}}}^{(\ell)} \longrightarrow Cl_{K,{\frak{p}}}^{(\ell)} \longrightarrow Cl_{K}^{(\ell)} \longrightarrow 1,$$ where $A^{(\ell)}$ denotes the $\ell$-primary component of an abelian group $A$. Let us describe a sufficient condition for the splitting of (\[ray class field sequence\]). Let ${\frak{a}}_1,...,{\frak{a}}_r \in I_{K,{\frak{p}}}$, $\tilde{\frak{a}}_1,...,\tilde{\frak{a}}_r$ their classes in $Cl_{K,{\frak{p}}}^{(\ell)}$ with images $\overline{{\frak{a}}}_1,...,\overline{{\frak{a}}}_r$ in $Cl_K^{(\ell)}$, so that $Cl_{K}^{(\ell)}=\langle\overline{{\frak{a}}}_1\rangle\times \langle\overline{{\frak{a}}}_2\rangle\times...\times \langle \overline{{\frak{a}}}_r\rangle$. Let $\ell^{m_i}:=|\langle \overline{{\frak{a}}}_i\rangle|$ and let $a_i\in K$ satisfy ${\frak{a}}_i^{\ell^{m_i}}=(a_i)$, for $i=1,...,r$.
\[useful part of the splitting lemma\]*(Kisilevsky-Sonn [@KS2])* Let ${\frak{p}}$ be a prime of $K$ and let $K'=K(\sqrt[\ell^{m_i}]{a_i}|i=1,...,r)$. If ${\frak{p}}$ splits completely in $K'$ then the sequence (\[ray class field sequence\]) splits.
The splitting of (\[ray class field sequence\]) was used in [@KS] to construct cyclic extensions ramified at one prime only. Let $m=\max\{1,m_1,...,m_r\}$. Let $U_K$ denote the units in ${\mathcal{O}}_K$.
\[cor-existence of totally ramified extension\]*(Kisilevsky-Sonn [@KS])* Let $K''=K(\mu_{\ell^m},\sqrt[\ell^m]{\xi},\sqrt[\ell^{m_i}]{a_i}\mid \xi\in U_K,\ i=1,...,r)$ and ${\frak{p}}$ a prime of $K$ which splits completely in $K''$. Then there is a cyclic $\ell^m$-extension of $K$ that is totally ramified at ${\frak{p}}$ and is not ramified at any other prime of $K$.
\[K”’\] Let $K$ be a number field, $n$ a positive integer. Then there exists a finite extension $K'''$ of $K$ such that if ${\frak{p}}$ is any prime of $K$ that splits completely in $K'''$, then there exists a cyclic extension $L/K$ of degree $n$ in which ${\frak{p}}$ is totally ramified and ${\frak{p}}$ is the only prime of $K$ that ramifies in $L$.
Let $n=\prod_{\ell}\ell^{m(\ell)}$ be the decomposition of $n$ into primes. Let $K'''$ be the composite of the fields $K''=K''(\ell)$ in Lemma \[cor-existence of totally ramified extension\] ($m=m(\ell)$). Let $L(\ell)$ be the cyclic extension of degree $\ell^{m(\ell)}$ yielded by Lemma \[cor-existence of totally ramified extension\]. The composite $L=\prod L(\ell)$ has the desired property.
(Theorem \[main Theorem\]) By definition, $G$ is a homomorphic image of a descending iterated wreath product of cyclic groups $C_1 \wr (C_2 \wr \cdots \wr C_r)$, $r=\operatorname{wl}(G)$. Without loss of generality $G\cong C_1 \wr (C_2 \wr \cdots \wr C_r)$ is itself a descending iterated wreath product of cyclic groups. Proceed by induction on $r$. For $r=1$, $G$ is cyclic of order say $N$. If $p$ is a rational prime $\equiv 1$ (mod $N$), then the field of $p$th roots of unity ${\mathbb{Q}}(\mu_p)$ contains a subfield $L$ cyclic over ${\mathbb{Q}}$ with Galois group $G$ and exactly one ramified prime, namely $p$. Thus the theorem holds for $r=1$.
Assume $r>1$ and the theorem holds for $r-1$. Let $K_1/{\mathbb{Q}}$ be a tamely ramified Galois extension with $G(K_1/{\mathbb{Q}})\cong G_1$, where $G_1$ is the descending iterated wreath product $C_2 \wr (C_3 \wr \cdots \wr C_r)$, such that the ramified primes in $K_1$ are a subset of $\{p_2,...,p_r\}$. By Corollary \[K”’\], there exists a prime $p=p_1$, not dividing the order of $G$, which splits completely in $K_1'''$, the field supplied for $K_1$ by Corollary \[K”’\]. Let ${\frak{p}}={\frak{p}}_1$ be a prime of $K_1$ dividing $p$. By Corollary \[K”’\], there exists a cyclic extension $L/K_1$ with $G(L/K_1)\cong C_1$ in which ${\frak{p}}$ is totally ramified and in which ${\frak{p}}$ is the only prime of $K_1$ which ramifies in $L$.
Now ${\frak{p}}$ has $|G_1|$ distinct conjugates $\{\sigma({\frak{p}})|\sigma \in G(K_1/{\mathbb{Q}})\}$ over $K_1$. For each $\sigma \in G(K_1/{\mathbb{Q}})$, the conjugate extension $\sigma(L)/K_1$ is well-defined, since $K_1/{\mathbb{Q}}$ is Galois. Let $M$ be the composite of the $\sigma(L)$, $\sigma \in G(K_1/{\mathbb{Q}})$. For each $\sigma$, $\sigma(L)/K_1$ is cyclic of degree $|C_1|$, ramified only at $\sigma({\frak{p}})$, and $\sigma({\frak{p}})$ is totally ramified in $\sigma(L)/K_1$. It now follows (see e.g. [@KS Lemma 1]) that the fields $\{\sigma(L)|\sigma \in G(K_1/{\mathbb{Q}})\}$ are linearly disjoint over $K_1$, hence $G(M/{\mathbb{Q}})\cong C_1 \wr G_1 \cong G$. Since the only primes of $K_1$ ramified in $M$ are $\{\sigma({\frak{p}})|\sigma \in G(K_1/{\mathbb{Q}})\}$, the only rational primes ramified in $M$ are $p_1,p_2,...,p_r$.
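The base case of the induction only needs a rational prime $p\equiv 1 \pmod N$, which Dirichlet's theorem supplies in abundance. The sketch below (function names are ours) simply finds the smallest such prime for a few values of $N$; ${\mathbb{Q}}(\mu_p)$ then contains a degree-$N$ cyclic subfield ramified only at $p$.

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def smallest_prime_1_mod(N):
    """Smallest prime p with p = 1 (mod N)."""
    p = N + 1
    while not is_prime(p):
        p += N
    return p

print([(N, smallest_prime_1_mod(N)) for N in (2, 3, 4, 5, 6, 12)])
# prints [(2, 3), (3, 7), (4, 5), (5, 11), (6, 7), (12, 13)]
```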
The minimal ramification problem has a positive solution for all finite semiabelian groups $G$ for which $\operatorname{wl}(G)=\operatorname{d}(G)$. Precisely, any finite semiabelian group $G$ for which $\operatorname{wl}(G)=\operatorname{d}(G)$ can be realized tamely as a Galois group over the rational numbers with exactly $\operatorname{d}(G)$ ramified primes.
By Proposition \[wl=d for nilp\], we have
The minimal ramification problem has a positive solution for all finite nilpotent semiabelian groups.
Arithmetic consequences
=======================
In this section we examine some arithmetic consequences of a positive solution to the minimal ramification problem. Specifically, given a group $G,$ the existence of infinitely many minimally tamely ramified $G$-extensions $K/{\mathbb{Q}}$ is re-interpreted in some cases in terms of some open problems in algebraic number theory. We will be most interested in the case $\operatorname{d}(G)=1.$
\[prop:classnumber divisibility\] Let $q$ and $\ell$ be distinct primes. Let $K/{\mathbb{Q}}$ be a cyclic extension of degree $n:=[K:{\mathbb{Q}}]\geq 2$ with $(n,q\ell)=1$. Suppose that $K/{\mathbb{Q}}$ is totally and tamely ramified at a unique prime $\mathfrak l$ dividing $\ell$. Then $q$ divides the class number $h_K$ of $K$ if and only if there exists an extension $L/K$ satisfying the following:
[i).]{} $L/{\mathbb{Q}}$ is a Galois extension with [**non-abelian**]{} Galois group $G=G(L/{\mathbb{Q}})$.
[ii).]{} The degree $[L:K]=q^s$ is a power of $q.$
[iii).]{} $L/{\mathbb{Q}}$ is (tamely) ramified only at primes over $\ell.$
First suppose that $q$ divides $h_K.$ Let $K_0$ be the $q$-Hilbert class field of $K,$ [*i.e.*]{} $K_0/K$ is the maximal unramified abelian $q$-extension of $K.$ Then $K_0/{\mathbb{Q}}$ is a Galois extension with Galois group $G:=G(K_0/{\mathbb{Q}}),$ and $H:=G(K_0/K)\simeq (C_K)_q \neq 0$, the $q$-part of the ideal class group of $K.$ Then $[G,G]$ is contained in $H.$ If $[G,G] \subsetneq H,$ then the fixed field of $[G,G]$ would be an abelian extension of ${\mathbb{Q}}$ containing an unramified $q$-extension of ${\mathbb{Q}}$, which is impossible. Hence $[G,G]=H\neq 0$ and so $G$ is a non-abelian group, and $L=K_0$ satisfies [*i),ii)*]{}, and [*iii)*]{} of the statement.
Conversely suppose that there is an extension $L/K$ satisfying [*i),ii)*]{}, and [*iii)*]{} of the statement. Since $H=G(L/K)$ is a $q$-group, there is a sequence of normal subgroups $H=H_0\supset H_1\supset H_2 \cdots \supset H_s=0$ with $H_i/H_{i+1}$ a cyclic group of order $q.$ Let $L_i$ denote the fixed field of $H_i$ so that $K=L_0\subset \cdots\subset L_s=L.$ Let $m$ be the largest index such that $L_m/{\mathbb{Q}}$ is totally ramified (necessarily at $\ell$). If $m=s,$ then $L/{\mathbb{Q}}$ is totally and tamely ramified at $\ell$ and so the inertia group $T(\mathfrak L/(\ell))=G,$ where in this case $\mathfrak L$ is the unique prime of $L$ dividing $\ell.$ Since $L/{\mathbb{Q}}$ is tamely ramified it follows that $T(\mathfrak L/(\ell))$ is cyclic, but this contradicts the hypothesis that $G$ is non-abelian. Therefore it follows that $m<s,$ and so $L_{m+1}/L_m$ is unramified and therefore $q$ must divide the class number $h_{L_m}.$ Then a result of Iwasawa [@I] implies that $q$ divides all of the class numbers $h_{L_{m-1}}, \cdots,h_{L_0}=h_K.$
We now apply this to the case that $G\not=\{1\}$ is a quotient of the regular wreath product $C_q \wr C_p$ where $p$ and $q$ are distinct primes. Then $\operatorname{d}(G)=1.$
The existence of infinitely many minimally tamely ramified $G$-extensions $L/{\mathbb{Q}}$ would by Proposition \[prop:classnumber divisibility\] imply the existence of infinitely many cyclic extensions $K/{\mathbb{Q}}$ of degree $[K:{\mathbb{Q}}]=p$ ramified at a unique prime $\ell \neq p,q$ for which $q$ divides the class number $h_K.$ (If there were only finitely many distinct such cyclic extensions $K/{\mathbb{Q}},$ then the number of ramified primes $\ell$ would be bounded, and there would be an absolute upper bound on the possible discriminants of the distinct fields $L/{\mathbb{Q}}.$ By Hermite’s theorem, this would mean that the number of such $G$-extensions $L/{\mathbb{Q}}$ would be bounded).
The question of whether there is an infinite number of cyclic degree $p$ extensions (or even one) of ${\mathbb{Q}}$ whose class number is divisible by $q$ is in general open at this time.
For $p=2,$ it is known that there are infinitely many quadratic fields with class numbers divisible by $q$ (see Ankeny, Chowla [@AC]), but it is not known that this occurs for quadratic fields with prime discriminant.
This latter statement is also a consequence of Schinzel’s hypothesis as is shown by Plans in [@P]. There is also some numerical evidence that the heuristic of Cohen-Lenstra should be statistically independent of the primality of the discriminant (see Jacobson, Lukes, Williams [@JLW] or te Riele, Williams [@RW]). If this were true, then one would expect that there is a positive density of primes $\ell$ for which the cyclic extension of degree $p$ and conductor $\ell$ would have class number divisible by $q.$
For $p=3$ it has been proved by Bhargava [@B] that there are infinitely many cubic fields $K/{\mathbb{Q}}$ for which $2$ divides their class numbers. That there are infinitely many cyclic cubics with prime squared discriminants whose class numbers are even (or more generally divisible by some fixed prime $q$) seems out of reach at this time.
In our view, there is a significant arithmetic interest in solving the minimal ramification problem for other groups (see also [@H], [@JR], [@R]).
[9]{}
N. C. Ankeny, S. Chowla, [*On the divisibility of the class number of quadratic fields*]{}, [*Pacific J. Math.*]{} [**5**]{} (1955), 321–324.
M. Bhargava, [*The density of discriminants of quartic rings and fields*]{}, [*Annals of Math.*]{} [**162**]{} (2005), no. 2, 1031–1062.
R. Dentzer, [*On geometric embedding problems and semiabelian groups*]{}, [*Manuscripta Math.*]{} [**86**]{} (1995), no. 2, 199–216.
D. Harbater, [*Galois groups with prescribed ramification*]{}, [*Proceedings of a Conference on Arithmetic Geometry (Arizona State Univ., 1993), AMS Contemporary Mathematics Series*]{}, vol. 174, (1994), pp. 35–60.
K. Iwasawa, [*A note on class numbers of algebraic number fields*]{}, [*Abh. Math. Sem. Univ. Hamburg*]{} [**20**]{} (1956), 257–258; also in Kenkichi Iwasawa Collected Papers, Volume 1, 32, pp. 372–373, Springer. ISBN 4-431-70314-4.
M. J. Jacobson, R. F. Lukes, H. C. Williams, [*Experimental Mathematics*]{} [**4**]{} (1995), no. 3, 211–225.
J. W. Jones, D. P. Roberts, [*Number fields ramified at one prime*]{}, Lecture Notes in Comput. Sci., 5011, Springer, Berlin, (2008), 226–239.
H. Kisilevsky, J. Sonn, [*Comm. Alg.*]{} [**31**]{} no. 6 (2003), 2707–2717.
H. Kisilevsky, J. Sonn, [*Abelian extensions of global fields with constant local degrees*]{}, [*Math. Research Letters*]{} [**13**]{} no. 4 (2006), 599–605.
[*On the minimal ramification problem for $\ell$-groups*]{}, [*Compositio Math.*]{} (to appear).
J. D. P. Meldrum, [*Wreath products of groups and semigroups*]{}, Monographs and Surveys in Pure and Applied Mathematics.
D. Neftin, [*On semiabelian $p$-groups*]{}, submitted.
B. Plans, [*On the minimal number of ramified primes in some solvable extensions of $\Bbb Q$*]{}, [*Pacific J. Math.*]{} [**215**]{} (2004), no. 2, 381–391.
[*Polynomials with roots mod $n$ for all $n$*]{}, [*Master Thesis*]{}, (2009), Technion.
H. te Riele, H. C. Williams, [*Experimental Mathematics*]{} [**12**]{} (2003), no. 1, 99–113.
[^1]: The research of the first author was supported in part by a grant from NSERC
---
abstract: 'In this paper, we study the effect of control input constraints and of the upper bound on the time of convergence on the domain of attraction for systems that exhibit fixed-time stability (FxTS). We first present a new result on FxTS, where we allow a positive term in the time derivative of the Lyapunov function. We provide analytical expressions for the domain of attraction and the time of convergence in terms of the coefficients of the positive and negative terms that appear in the time derivative of the Lyapunov function. We show that this result serves as a robustness characterization of FxTS systems in the presence of additive, vanishing disturbances. We use the new FxTS result in formulating a quadratic program (QP) that computes control inputs to drive the trajectories of a class of nonlinear, control-affine systems to a goal set in the presence of control input constraints. We show that the new positive term in the FxTS result acts as a slack term in the QP constraint, and helps guarantee feasibility of the QP. We study the effect of the control authority and the convergence time on the magnitude of the coefficient of this positive term through numerical examples.'
author:
- Kunal Garg
- 'Dimitra Panagou [^1] [^2]'
bibliography:
- 'myreferences.bib'
title: '**[Characterization of Domain of Fixed-time Stability under Control Input Constraints]{}**'
---
Introduction
============
In the past two decades, much attention has been paid to the concepts of finite- and fixed-time stability, where the system trajectories reach an equilibrium point or a set in a *finite* or *fixed* time as opposed to asymptotically or exponentially. In [@bhat2000finite], the authors introduced the notion of finite-time stability and presented sufficient conditions in terms of Lyapunov functions for finite-time stability (FTS). Under this notion, the time of convergence depends on the initial conditions and can grow unbounded as the initial conditions grow. Fixed-time stability (FxTS), introduced in [@polyakov2012nonlinear], is a stronger notion than FTS, where the time of convergence is uniformly bounded for all initial conditions. In control problems where the objective is to stabilize closed-loop trajectories to a given desired point or a set, control Lyapunov functions (CLFs) are very commonly used to design the control input [@romdlony2016stabilization; @ames2014rapidly; @ames2014control; @ames2017control]. CLFs have been used to design closed-form expressions for control inputs that resemble Sontag’s formula [@sontag1989universal; @romdlony2016stabilization]. More recently, quadratic programs (QPs) have gained popularity for control synthesis; with this approach, the CLF conditions are formulated as inequalities that are linear in the control input [@li2018formally; @ames2014control; @garg2019prescribedTAC]. These methods are suitable for real-time implementation as QPs can be solved very efficiently. The authors in [@lindemann2019control] use control barrier functions (CBFs) to encode signal-temporal logic (STL) specifications, and formulate a QP to compute the control input. The works in [@kovacs2016novel; @han2019robust; @ames2012control; @rauscher2016constrained] consider the design of control laws so that reachability objectives, such as reaching a desired location or a desired goal set, are achieved as time goes to infinity, i.e., asymptotically.
Furthermore, the feasibility of the resulting QP is either not guaranteed, or requires the existence of a CLF, which can be difficult to find for a general nonlinear system. Recently, the concept of fixed-time CLF (FxT-CLF) was introduced [@garg2019control], which combines the notions of CLF and FxTS. In [@garg2019control], FxT-CLFs are used in a QP, but without any feasibility guarantees. The objective of this paper is to extend and formalize the results in [@garg2019control] in a QP framework, such that feasibility as well as fixed-time convergence can be simultaneously guaranteed.
More specifically, in this paper, we present new Lyapunov results on FxTS by introducing a (possibly positive) slack term as a linear function of the Lyapunov function. We show that FxTS can still be guaranteed from a domain of attraction that depends upon the relative magnitude of the positive and the negative terms in the bound of the time derivative of the Lyapunov function. Then, we compute an upper-bound on the time of convergence to the equilibrium, which is also a function of the relative magnitude of the positive and negative terms. Finally, we present the relation between the proposed results on FxTS and the robustness of FxTS systems under additive vanishing disturbances. In addition, based on the results in [@garg2019prescribedTAC], we use the new FxTS conditions in a QP formulation, where the control objective is to drive closed-loop trajectories to a goal set in a given fixed time in the presence of control input constraints. It is not possible to guarantee FxTS for arbitrary initial conditions in the presence of control input constraints. We perform numerical experiments to relate the domain of attraction with the required time of convergence and with the control input bounds.
Mathematical Preliminaries {#sec: math prelim}
==========================
In the rest of the paper, $\mathbb R$ denotes the set of real numbers, and $\mathbb R_+$ denotes the set of non-negative real numbers. We use $\|\cdot\|$ to denote the Euclidean norm. We use $\partial S$ to denote the boundary of a closed set $S$ and $\textrm{int}(S) = S\setminus \partial S$, to denote its interior. The Lie derivative of a function $V:\mathbb R^n\rightarrow \mathbb R$ along a vector field $f:\mathbb R^n\rightarrow\mathbb R^n$ at a point $x\in \mathbb R^n$ is denoted as $L_fV(x) \triangleq \frac{\partial V}{\partial x} f(x)$.
Next, we review the notion of fixed-time stability. Consider the nonlinear system $$\begin{aligned}
\label{ex sys}
\dot x(t) = f(x(t)), \quad x(0) = x_0,\end{aligned}$$ where $x\in \mathbb R^n$ and $f: \mathbb R^n \rightarrow \mathbb R^n$ is continuous with $f(0)=0$. Assume that the solution of (\[ex sys\]) exists and is unique. The authors in [@polyakov2012nonlinear] presented the following result for FxTS.
\[FxTS TH\] Suppose there exists a positive definite function $V$ for system (\[ex sys\]) such that $$\begin{aligned}
\label{eq: dot V FxTS old }
\dot V(x) \leq -aV(x)^p-bV(x)^q,\end{aligned}$$ holds for all $x\neq 0$, with $a,b>0$, $0<p<1$ and $q>1$. Then, the origin of (\[ex sys\]) is FxTS, where the time of convergence $T$ is uniformly bounded as $$\begin{aligned}
\label{eq: T bound old}
T \leq \frac{1}{a(1-p)} + \frac{1}{b(q-1)}. \end{aligned}$$
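The uniformity of the bound (\[eq: T bound old\]) in the initial condition can be probed numerically by integrating the worst case $\dot V = -aV^p - bV^q$; the explicit Euler scheme, step-size rule, and tolerances below are our own choices, so this is a sanity check rather than a proof.

```python
def settle_time(V0, a, b, p, q, tol=1e-9):
    """Time for V to decay from V0 to below tol under dV/dt = -a V^p - b V^q.
    Explicit Euler with the step capped so V shrinks by at most 1% per step."""
    V, t = float(V0), 0.0
    while V > tol:
        rate = a * V**p + b * V**q   # magnitude of the decay rate
        dt = 0.01 * V / rate
        V -= dt * rate
        t += dt
    return t

a, b, p, q = 1.0, 1.0, 0.5, 2.0
T_bound = 1.0 / (a * (1 - p)) + 1.0 / (b * (q - 1))   # = 3 here
for V0 in (0.1, 1.0, 1e3, 1e6):
    assert settle_time(V0, a, b, p, q) <= T_bound      # uniform in V0
print(T_bound)   # prints 3.0
```

Even for $V_0 = 10^6$ the measured settling time stays below the fixed bound, illustrating that the bound does not grow with the initial condition.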
Main results
============
In this section, we present a new result on FxTS. Particularly, we introduce another term in the upper bound of $\dot V$ in (\[eq: dot V FxTS old \]), and allow this term to take positive values. Consider a positive definite, continuously differentiable function $V:\mathbb R^n\rightarrow\mathbb R$, such that its time derivative along the trajectories of (\[ex sys\]) satisfies $$\begin{aligned}
\label{eq: dot V new ineq}
\dot V(x(t)) \leq -c_1V(x(t))^{a_1}-c_2V(x(t))^{a_2}+c_3V(x(t)),\end{aligned}$$ for all $t\geq 0$, with $c_1, c_2>0$, $c_3\in \mathbb R$, $a_1 = 1+\frac{1}{\mu}$, $a_2 = 1-\frac{1}{\mu}$ for some $\mu>1$. Note that the form of exponents $a_1, a_2$ is not new or restrictive as many authors have used this form to compute tighter bounds on the time of convergence (see Remark \[rem: new T bound\]).
New FxTS result
---------------
Before presenting the first main result, we need the following lemma.
\[lemma:int dot V\] Let $V_0, c_1, c_2>0$, $c_3\in \mathbb R$, $a_1 = 1+\frac{1}{\mu}$ and $a_2 = 1-\frac{1}{\mu}$, where $\mu>1$. Define $$\begin{aligned}
\label{eq:int dot V}
I \triangleq \int_{V_0}^{0}\frac{dV}{-c_1V^{a_1}-c_2V^{a_2}+c_3V}.\end{aligned}$$ Then, the following holds:
- If $c_3\leq 0$, then for all $V_0\geq 0$, $$\begin{aligned}
\label{eq: I bound 0}
I\leq \frac{\mu\pi}{2\sqrt{c_1c_2}};
\end{aligned}$$
- If $0\leq c_3<2\sqrt{c_1c_2}$, then for all $V_0\geq 0$ $$\begin{aligned}
\label{eq: I bound 1}
I\leq \frac{\mu}{c_1k_1}\left(\frac{\pi}{2}-\tan^{-1}k_2\right),
\end{aligned}$$ where $k_1 = \sqrt{\frac{4c_1c_2-c_3^2}{4c_1^2}}$ and $k_2 = -\frac{c_3}{\sqrt{4c_1c_2-c_3^2}}$;
- If $c_3>2\sqrt{c_1c_2}$ and $V_0^\frac{1}{\mu}<\frac{c_3-\sqrt{c_3^2-4c_1c_2}}{2c_1}$, then $$\begin{aligned}
\label{eq: I bound 2}
I\leq \frac{\mu}{c_1(b-a)}\log\left(\frac{|b|}{|a|}\right),
\end{aligned}$$ where $-a,-b$ are the roots of $\gamma(z) \triangleq c_1z^2-c_3z+c_2 = 0$;
- If $c_3 = 2\sqrt{c_1c_2}$, then for all $0<k<1$ and $V_0^\frac{1}{\mu}\leq k\sqrt{\frac{c_2}{c_1}}$, $$\begin{aligned}
\label{eq: I bound 3}
I \leq \frac{\mu}{\sqrt{c_1c_2}}\Big(\frac{k}{1-k}\Big).
\end{aligned}$$
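The case (ii) bound can be sanity-checked numerically. The sketch below uses the assumed values $c_1=c_2=c_3=1$, $\mu=2$, $V_0=100$ (any values in the stated range would do); it evaluates $I$ after the substitution $m=V^{1/\mu}$ used in the appendix proof, under which $I = \mu\int_0^{V_0^{1/\mu}}\frac{dm}{c_1m^2-c_3m+c_2}$:

```python
import math

# Assumed sample values satisfying 0 < c3 < 2*sqrt(c1*c2) (case (ii)).
c1, c2, c3, mu, V0 = 1.0, 1.0, 1.0, 2.0, 100.0

# After m = V**(1/mu), I = mu * int_0^{V0**(1/mu)} dm/(c1*m^2 - c3*m + c2);
# the quadratic has no real roots here, so the integrand is smooth.
X = V0 ** (1.0 / mu)
n = 200_000
h = X / n
I_num = 0.0
for i in range(n):                  # midpoint quadrature
    m = (i + 0.5) * h
    I_num += h / (c1 * m * m - c3 * m + c2)
I_num *= mu

k1 = math.sqrt((4 * c1 * c2 - c3 ** 2) / (4 * c1 ** 2))
k2 = -c3 / math.sqrt(4 * c1 * c2 - c3 ** 2)
I_bound = mu / (c1 * k1) * (math.pi / 2 - math.atan(k2))
```

The bound is not tight here, but the ordering $I\leq$ bound holds, and the bound itself does not depend on $V_0$, which is what makes the convergence time uniform in the initial condition.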
Lemma \[lemma:int dot V\] gives the upper bounds on the integral $I$ needed below, for the various cases of $c_3$. The proof is provided in Appendix \[app Lemma int dot V proof\]. Now we are ready to present our first main result.
\[Th: FxTS new\] Let $V:\mathbb R^n\rightarrow \mathbb R$ be a continuously differentiable, positive definite, radially unbounded function, satisfying the bound above along the system trajectories. Then, there exists a neighborhood $D$ of the origin such that for all $x(0)\in D$, the trajectories of the system reach the origin in a fixed time $T$, where $$\begin{aligned}
T \leq \begin{cases} \frac{\mu}{c_1k_1}\left(\frac{\pi}{2}-\tan^{-1}k_2\right), & c_3<2\sqrt{c_1c_2},\\ \frac{\mu}{c_1(b-a)}\log\left(\frac{|b|}{|a|}\right), & c_3>2\sqrt{c_1c_2},\\ \frac{\mu}{\sqrt{c_1c_2}}\left(\frac{k}{1-k}\right), & c_3=2\sqrt{c_1c_2},\end{cases}\end{aligned}$$ where $0<k<1$, $-a,-b$ are the solutions of $\gamma(s) = c_1s^2-c_3s+c_2 = 0$, $k_1 = \sqrt{\frac{4c_1c_2-c_3^2}{4c_1^2}}$ and $k_2 = -\frac{c_3}{\sqrt{4c_1c_2-c_3^2}}$.
Note that for $c_3\leq 0$, the bound on $\dot V$ reduces to the form in Theorem \[FxTS TH\], so (global) FxTS follows from that theorem, and the bound on the convergence time follows from Lemma \[lemma:int dot V\]. Next, we consider the cases when $c_3>0$. First we show that there exists $D\subseteq \mathbb R^n$ containing the origin, such that for all $x\in D\setminus\{0\}$, $\dot V(x)<0$.
Consider the right-hand side of the bound on $\dot V$. Note that for all $x\in \{z \; |\; c_3V(z)<c_1V(z)^{a_1}+c_2V(z)^{a_2}\} \setminus\{0\}$, one has $\dot V(x)<0$. Thus, for $c_3V(x)<\min_{x\neq 0} c_1V(x)^{a_1}+c_2V(x)^{a_2}$, or equivalently, for $c_3<\min_{x\neq 0} c_1V(x)^\frac{1}{\mu}+c_2V(x)^{-\frac{1}{\mu}}$, one has $\dot V(x)<0$ for all $x\neq 0$. Set $k = V^\frac{1}{\mu}$ to obtain $c_1V^\frac{1}{\mu}+c_2V^{-\frac{1}{\mu}} = c_1k+\frac{c_2}{k}$. The function $p:\mathbb R_+\rightarrow \mathbb R$, defined as $p(k) = c_1k+\frac{c_2}{k}$ is a strictly convex function, since $\frac{d^2}{dk^2}p(k) = \frac{2c_2}{k^3}>0$ for all $k>0$. Hence, the function $p$ has a unique minimizer. The derivative of $p$ reads $\frac{dp}{dk} = c_1-\frac{c_2}{k^2}$, which has a unique root in $\mathbb R_+$ at $k = \sqrt{\frac{c_2}{c_1}}$. Thus the minimum of $p(k)$ is attained at $k=\sqrt{\frac{c_2}{c_1}}$, i.e., at $V = \left(\frac{c_2}{c_1}\right)^{\frac{\mu}{2}}$. Define $V^* = \left(\frac{c_2}{c_1}\right)^{\frac{\mu}{2}}$, and let $\delta = c_1(V^*)^\frac{1}{\mu}+c_2(V^*)^{-\frac{1}{\mu}} = 2\sqrt{c_1c_2}$.
Then, for $c_3<\min_{x\neq 0} c_1V(x)^\frac{1}{\mu}+c_2V(x)^{-\frac{1}{\mu}} = 2\sqrt{c_1c_2}$, we have that $c_3V(x)<c_1V(x)^{a_1}+c_2V(x)^{a_2}$ for all $x\in \mathbb R^n\setminus\{0\}$, and hence, $\dot V(x)<0$ for all $x\neq 0$.
![Qualitative variation of $h(V)= c_1V^\frac{1}{\mu}+c_2V^{-\frac{1}{\mu}}$ with $V$, for $\mu>1$. The function $h(V)$ achieves its minimum at $V = V^*$, marked by orange dashed line.[]{data-label="fig:dot V RHS"}](V3_cond.eps){width="0.9\columnwidth"}
For $c_3> 2\sqrt{c_1c_2}$, we have that there exist $x_1,x_2\in \mathbb R^n$ such that $c_3 = c_1V(x)^{\frac{1}{\mu}}+c_2 V(x)^{-\frac{1}{\mu}}$ for both $V(x) = V_1 \triangleq V(x_1)$ and $V(x) = V_2\triangleq V(x_2)$ (see Figure \[fig:dot V RHS\]). Note that $V_1$ and $V_2$ are also solutions of $-c_1V^{a_1}-c_2V^{a_2}+c_3V = 0$, given as $V_1 = \left(\frac{c_3-\sqrt{c_3^2-4c_1c_2}}{2c_1}\right)^\mu$ and $V_2= \left(\frac{c_3+\sqrt{c_3^2-4c_1c_2}}{2c_1}\right)^\mu$. For all $x$ such that $V_1<V(x)<V_2$, the expression $-c_1V^{a_1}-c_2V^{a_2}+c_3V$ evaluates to a positive value. Also, for all $x \neq 0$ such that $V(x)<V_1$, we have $c_3V(x)<c_1V(x)^{a_1}+c_2V(x)^{a_2}$. Thus, define $D = \left\{x\; |\; V(x)< \left(\frac{c_3-\sqrt{c_3^2-4c_1c_2}}{2c_1}\right)^\mu\right\}$, so that for all $x\in D\setminus\{0\}$, one has $\dot V(x)<0$.
Finally, for $c_3 = 2\sqrt{c_1c_2}$, one has that for any $x\neq 0$ such that $V(x)<V^* = \Big(\frac{c_2}{c_1}\Big)^\frac{\mu}{2}$, it holds that $\dot V(x)<0$. Thus, letting $D = \{x\; |\; V(x)<k\Big(\frac{c_2}{c_1}\Big)^\frac{\mu}{2}\}$ with $0<k<1$, implies that for all $x\in D\setminus\{0\}$, $\dot V(x)<0$.
Next, we show fixed-time convergence of the trajectories to the origin. Let $x(0)\in D$, so that $\dot V\leq 0$ for all $t\geq 0$. Integrating both sides of the bound on $\dot V$, we obtain $$\begin{aligned}
\int_{V_0}^{0}\frac{dV}{-c_1V^{a_1}-c_2V^{a_2}+c_3V}\geq \int_{0}^Tdt = T,\end{aligned}$$ where $V_0 = V(x(0))$ and $T\geq 0$ is such that $V(x(T)) = 0$. Note that the left-hand side is denoted as $I$ in Lemma \[lemma:int dot V\], and so, $T\leq I$. We consider the cases when $c_3<2\sqrt{c_1c_2}$, $c_3>2\sqrt{c_1c_2}$, and $c_3= 2\sqrt{c_1c_2}$ separately.
For $c_3<2\sqrt{c_1c_2}$, using Lemma \[lemma:int dot V\] part (ii), we have $T\leq I \leq \frac{\mu}{c_1k_1}\left(\frac{\pi}{2}-\tan^{-1}k_2\right)$ where $k_1 = \sqrt{\frac{4c_1c_2-c_3^2}{4c_1^2}}$ and $k_2 = -\frac{c_3}{\sqrt{4c_1c_2-c_3^2}}$. Hence, if $c_3<2\sqrt{c_1c_2}$, we have that $\dot V(x(t))<0$ for all $t\geq 0$ and all $x(0)\neq 0$, and $V(x(t))= 0$ for all $t\geq T$, where $T\leq \frac{\mu}{c_1k_1}\left(\frac{\pi}{2}-\tan^{-1}k_2\right)$. Since $V$ is proper, the origin is globally FxTS.
For $c_3> 2\sqrt{c_1c_2}$, with $x(0)\in D = \{x\; |\; V(x)<V_1\}$, we have that $c_3V_0<c_1V_0^{a_1}+c_2V_0^{a_2}$. Thus, using Lemma \[lemma:int dot V\], we have $$\begin{aligned}
\label{eq: T bound case 2}
T\leq I\overset{\eqref{eq: I bound 2}}{\leq} \frac{\mu}{c_1(b-a)}\log\left(\frac{|b|}{|a|}\right),\end{aligned}$$ where $-a,-b$ are the roots of $\gamma(z) \triangleq c_1z^2-c_3z+c_2 = 0$.
Finally, for $c_3 = 2\sqrt{c_1c_2}$, we have from Lemma \[lemma:int dot V\] that $T\leq I\leq \frac{\mu}{\sqrt{c_1c_2}}\Big(\frac{k}{1-k}\Big)$ for all $x\in D = \{x\; |\; V(x)^\frac{1}{\mu} < k\sqrt{\frac{c_2}{c_1}}\}$. The above bound on $T$ for all the cases is independent of the initial condition $x(0)$. Thus, for all $x(0)\in D\setminus\{0\}$, the origin is FxTS.
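The sign pattern of the right-hand side of the bound, which drives the case analysis in the proof above, is easy to check numerically. In this sketch the values $c_1=c_2=1$, $c_3=3>2\sqrt{c_1c_2}$ and $\mu=2$ are arbitrary choices; any parameters in the same regime exhibit the same pattern:

```python
import math

# Assumed values with c3 > 2*sqrt(c1*c2), mu = 2 (so a1 = 1.5, a2 = 0.5).
c1, c2, c3, mu = 1.0, 1.0, 3.0, 2.0
a1, a2 = 1 + 1 / mu, 1 - 1 / mu

def rhs(V):
    # Right-hand side of the bound on dV/dt.
    return -c1 * V**a1 - c2 * V**a2 + c3 * V

disc = math.sqrt(c3 ** 2 - 4 * c1 * c2)
V1 = ((c3 - disc) / (2 * c1)) ** mu     # boundary of the domain D
V2 = ((c3 + disc) / (2 * c1)) ** mu
```

Below $V_1$ the bound on $\dot V$ is negative (trajectories are driven inward), between $V_1$ and $V_2$ it is positive, and beyond $V_2$ it is negative again; this is why only the sublevel set $\{V<V_1\}$ is certified as the domain of attraction.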
Theorem \[Th: FxTS new\] gives an expression for the domain of attraction $D$ and the time of convergence $T$ as a function of $c_1, c_2, c_3$. In this sense, Theorem \[FxTS TH\] and other similar results in the literature (e.g. [@parsegov2012nonlinear]) are special cases of Theorem \[Th: FxTS new\].
\[rem: new T bound\] For $c_3 = 0$, the upper bound on the time of convergence is the same as the one given in [@parsegov2012nonlinear Lemma 2]. Note that for $a = c_1$, $b = c_2$, $p = 1-\frac{1}{\mu}$ and $q = 1+\frac{1}{\mu}$, the bound in Theorem \[FxTS TH\] gives $\frac{\mu}{c_1} + \frac{\mu}{c_2}$ as the upper bound on the time of convergence. It can be readily observed that $\frac{\mu\pi}{2\sqrt{c_1c_2}}<\frac{2\mu}{\sqrt{c_1c_2}}\leq \frac{\mu}{c_1} + \frac{\mu}{c_2}$, where the last inequality follows since $c_1+c_2\geq 2\sqrt{c_1c_2}$ for $c_1,c_2>0$. Hence, the new result gives a tighter upper bound on the time of convergence than Theorem \[FxTS TH\] when $c_3 = 0$.
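The chain of inequalities in the remark can be spot-checked numerically; the sketch below simply samples random positive $c_1, c_2$ and $\mu>1$ (ranges are arbitrary):

```python
import math
import random

def new_bound(c1, c2, mu):
    # mu*pi/(2*sqrt(c1*c2)), the case c3 = 0 of the new result
    return mu * math.pi / (2 * math.sqrt(c1 * c2))

def old_bound(c1, c2, mu):
    # mu/c1 + mu/c2, the earlier bound with p = 1 - 1/mu, q = 1 + 1/mu
    return mu / c1 + mu / c2

random.seed(0)
samples = [(random.uniform(0.01, 10.0), random.uniform(0.01, 10.0),
            random.uniform(1.01, 10.0)) for _ in range(10_000)]
always_tighter = all(new_bound(*s) < old_bound(*s) for s in samples)
```

For $c_1=c_2=1$ and $\mu=2$ the two bounds are $\pi$ versus $4$, matching the comparison in the remark.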
Robustness perspective
----------------------
In comparison to Theorem \[FxTS TH\], Theorem \[Th: FxTS new\] allows a positive term $c_3V$ in the upper bound of the time derivative of the Lyapunov function. This term also captures the robustness against a class of Lipschitz continuous, or vanishing, additive disturbances in the system dynamics, as shown in the following result. Consider the system $$\begin{aligned}
\label{eq:pert sys}
\dot x = f(x) + \psi(x),\end{aligned}$$ where $f, \psi:\mathbb R^n\rightarrow \mathbb R^n$, $f(0) = 0$ and there exists $L>0$ such that for all $x\in \mathbb R^n$, $\|\psi(x)\|\leq L\|x\|$.
\[cor: robust phi FxTS\] Let the origin for the *nominal* system $\dot x = f(x)$ be FxTS and assume that there exists a Lyapunov function $V:\mathbb R^n \rightarrow\mathbb R$ satisfying the conditions of Theorem \[FxTS TH\]. Assume that there exist $k_1, k_2>0$ such that $V(x) \geq k_1\|x\|^2$ and $ \left\|\frac{\partial V}{\partial x}\right\|\leq k_2\|x\|$ for all $x\in \mathbb R^n$. Then, the origin of the perturbed system is FxTS.
The time derivative of $V$ along the trajectories of the perturbed system reads $$\begin{aligned}
\dot V = \frac{\partial V}{\partial x}f(x) + \frac{\partial V}{\partial x}\psi(x)\leq &-aV^p-bV^q + k_2L\|x\|^2\\
\leq & -aV^p-bV^q + \frac{k_2L}{k_1}V.\end{aligned}$$ Hence, using Theorem \[Th: FxTS new\], we conclude that the origin of the perturbed system is FxTS [ for all $x(0)\in D$, where $D$ is a neighborhood of the origin. As per the conditions of Theorem \[Th: FxTS new\], $D\subset \mathbb R^n$ or $D = \mathbb R^n$, depending upon the parameters $a, b, k_1, k_2$ and $L$.]{}
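As a concrete (assumed) instance of this corollary, take $V(x)=\|x\|^2$, so $k_1=1$ and $\|\partial V/\partial x\| = 2\|x\|$ gives $k_2=2$, with nominal constants $a=b=1$ and $\mu=2$. Then $c_3 = k_2L/k_1 = 2L$, and the domain of attraction shrinks from all of $\mathbb R^n$ to a bounded sublevel set once $L$ exceeds $\sqrt{ab}=1$:

```python
import math

# Assumed instance: V(x) = ||x||^2 gives k1 = 1 and k2 = 2; nominal
# constants a = b = 1 (denoted c1, c2 below) and mu = 2.
c1, c2, mu = 1.0, 1.0, 2.0
threshold = 2.0 * math.sqrt(c1 * c2)

def c3_of(L, k1=1.0, k2=2.0):
    # coefficient of the linear term induced by the disturbance bound L
    return k2 * L / k1

def domain_level(L):
    """V-level bounding the domain of attraction D (math.inf when the
    origin stays globally FxTS).  The boundary case c3 = threshold,
    which needs the extra factor k < 1, is folded into the else branch
    for brevity."""
    c3 = c3_of(L)
    if c3 < threshold:
        return math.inf
    disc = math.sqrt(c3 * c3 - 4.0 * c1 * c2)
    return ((c3 - disc) / (2.0 * c1)) ** mu

global_case = domain_level(0.5)   # c3 = 1 < 2: global FxTS is retained
local_case = domain_level(1.5)    # c3 = 3 > 2: bounded domain of attraction
```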
QP formulation for FxTS
=======================
In this section, we use the Lyapunov condition in conjunction with Theorem \[Th: FxTS new\] in a QP formulation to compute a control input so that the closed-loop trajectories reach a desired goal set within a fixed time. Consider the system: $$\begin{aligned}
\label{cont aff sys}
\dot x(t) = f(x(t)) + g(x(t))u(t), \quad x(t_0) = x_0,\end{aligned}$$ where $x\in \mathbb R^n$ is the state vector, $f:\mathbb R^n\rightarrow \mathbb R^n$ and $g:\mathbb R^n\rightarrow \mathbb R^{n\times m}$ are continuous functions, and $u\in \mathbb R^m$ is the control input. In addition, consider a goal set, to be reached in a user-defined fixed time $T$, defined as $S_G \triangleq \{x\; |\; h_G(x)\leq 0\}$, where $h_G:\mathbb R^n\rightarrow\mathbb R$ is a continuously differentiable function. Consider the QP
\[QP gen\] $$\begin{aligned}
\min_{v\in \mathbb R^m, \delta_1\in \mathbb R} \; \frac{1}{2}z^THz & + F^Tz\\
\textrm{s.t.} \quad \quad A_uv \leq & b_u, \label{C1 cont const}\\
L_fh_G(x) + L_gh_G(x)v \leq & \delta_1h_G(x)-\alpha_1\max\{0,h_G(x)\}^{\gamma_1} \nonumber\\
& -\alpha_2\max\{0,h_G(x)\}^{\gamma_2} \label{C2 stab const},\end{aligned}$$
where $z \triangleq \begin{bmatrix}v^T & \delta_1\end{bmatrix}^T\in \mathbb R^{m+1}$, $H \triangleq \textrm{diag}\{p_{u_1},\ldots, p_{u_m}, p_1\}$ is a diagonal matrix consisting of positive weights $p_{u_i}, p_1>0$, $F \triangleq \begin{bmatrix}\mathbf 0_m^T & q_1\end{bmatrix}$ with $q_1>0$ and $\mathbf 0_m\in \mathbb R^m$ a vector consisting of zeros. The parameters $\alpha_1, \alpha_2, \gamma_1, \gamma_2$ are fixed, and are chosen as $\alpha_1 = \alpha_2 = \frac{\mu\pi}{2\bar T}$, $\gamma_1 = 1+\frac{1}{\mu}$ and $\gamma_2 = 1-\frac{1}{\mu}$ with $\mu>1$. The first constraint encodes the control input constraints $u\in \mathcal U = \{v\; |\; A_uv\leq b_u\}$, while the second encodes the FxT-CLF condition.
In [@garg2019prescribedTAC], it is shown that the QP is feasible, and under certain conditions, the control input defined as the solution of the QP leads to FxTS convergence of the closed-loop trajectories. For the sake of completeness, we review these results here. Let the solution of the QP be denoted as $z^*(\cdot) = \begin{bmatrix}v^*(\cdot)^T & \delta_1^*(\cdot)\end{bmatrix}^T$.
If the set $\mathcal U$ is non-empty, then the QP is feasible for all $x\notin S_G$.
Choose any $\bar v\in \mathcal U$; such $\bar v$ exists since $\mathcal U$ is non-empty. For $x\notin S_G$, we have that $h_G(x)>0$ by definition, and thus $h_G(x) \neq 0$. Define $\bar \delta_1= \frac{L_fh_G(x) + L_gh_G(x)\bar v+\alpha_1h_G(x)^{\gamma_1}+\alpha_2h_G(x)^{\gamma_2}}{h_G(x)}$, so that the FxT-CLF constraint is satisfied. Thus, the pair $(\bar v, \bar \delta_1)$ satisfies the constraints of the QP and hence, the QP is feasible, for all $x\notin S_G$.
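The construction in this feasibility argument is easy to check numerically. The numbers below are hypothetical stand-ins for $L_fh_G(x)$, $L_gh_G(x)$ and $h_G(x)$ at some state outside $S_G$ (they do not come from any particular system):

```python
# Hypothetical values of L_f h_G(x), L_g h_G(x) and h_G(x) at some x
# outside S_G; alpha1 = alpha2 = 1, gamma1 = 1.5, gamma2 = 0.5 assumed.
Lf, Lg, hG = 5.0, 2.0, 3.0
alpha1 = alpha2 = 1.0
gamma1, gamma2 = 1.5, 0.5

v_bar = 1.0   # any element of the (assumed non-empty) input set U

# delta_bar chosen exactly as in the proof:
delta_bar = (Lf + Lg * v_bar
             + alpha1 * hG ** gamma1 + alpha2 * hG ** gamma2) / hG

# The FxT-CLF constraint then holds (with equality) for this pair:
lhs = Lf + Lg * v_bar
rhs = (delta_bar * hG
       - alpha1 * max(0.0, hG) ** gamma1
       - alpha2 * max(0.0, hG) ** gamma2)
```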
The feasibility of the QP is guaranteed because of the presence of the slack term $\delta_1h_G(x)$. Note that in the absence of such a term, the FxT-CLF constraint might be infeasible due to the presence of the control input constraints.
\[Th: d1 d2 P1 solve\] The closed-loop trajectories under the effect of the control input defined as $u(\cdot) = v^*(\cdot)$ reach the set $S_G$ within a fixed time $\bar T$ for all $x\in D$, where:
- $D = \mathbb R^n$ and $\bar T = T$ if $\max\limits_{x}\delta_1^*(x) \leq 0$;
- $D = \mathbb R^n$ and $\bar T \leq \sup\limits_{x} \frac{\mu}{\alpha_1k_1(x)}\left(\frac{\pi}{2}-\tan^{-1}k_2(x)\right)$, where $k_1(x) = \sqrt{\frac{4\alpha_1\alpha_2-\delta_1^*(x)^2}{4\alpha_1^2}}$ and $k_2(x) = -\frac{\delta_1^*(x)}{\sqrt{4\alpha_1\alpha_2-\delta_1^*(x)^2}}$ if $\max\limits_{x}\delta_1^*(x) < 2\sqrt{\alpha_1\alpha_2}$;
- $D = \{z\; |\; V(z) \leq \inf\limits_x\left(\frac{\delta_1^*(x)-\sqrt{\delta_1^*(x)^2-4\alpha_1\alpha_2}}{2\alpha_1}\right)^\mu\}$ and $\bar T \leq \sup\limits_x\frac{\mu}{\alpha_1(b(x)-a(x))}\log\left(\frac{|b(x)|}{|a(x)|}\right)$ where $-a(x),-b(x)$ are the solutions of $\gamma(s,x) = \alpha_1s^2-\delta_1^*(x)s+\alpha_2 = 0$ if $\max\limits_{x}\delta_1^*(x)>2\sqrt{\alpha_1\alpha_2}$.
Thus, if $\delta_1$ is *small* relative to $\alpha_1, \alpha_2$, then the domain of attraction for fixed-time convergence is large, i.e., the slack term corresponding to $\delta_1$ in the QP characterizes the trade-off between the domain of attraction and the time of convergence for given control input bounds. Intuitively, for a given control input constraint set, a larger value of $T$ results in smaller values of $\alpha_1, \alpha_2$, which allows the FxT-CLF constraint to be satisfied with a smaller value of $\delta_1$. Conversely, for a given $T$ (and thus, for a given pair $\alpha_1, \alpha_2$), a larger control authority allows the constraint to be satisfied with a smaller $\delta_1$. We verify this in the numerical simulations.
Numerical experiments
---------------------
We consider the following system: $$\begin{aligned}
\dot x_1 &= x_2 + x_1(x_1^2+x_2^2-1)+x_1u, \\
\dot x_2 &= -x_1+\zeta(x_2)(x_1^2+x_2^2-1)+x_2u,\end{aligned}$$ where $x = [x_1, x_2]^T\in \mathbb R^2, u\in \mathbb R$, $\zeta(z) = (0.8+0.2e^{-100|z|})\tanh(z)$ and $S_G = \{x\; |\; \|x\|\leq 1\}$. Note that in the absence of the control input, the trajectories diverge away from $S_G$, i.e., the set $S_G$ is unstable for the open-loop system. We define $h_G(x) = \|x\|^2-1$. We impose control input bounds of the form $\|u\|\leq u_{max}$, where $u_{max}>0$. The initial conditions are chosen as $x(0) = [3.33, 1.33]^T$.
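For this example, the per-state QP can be sketched directly. The solver below is deliberately naive (an exhaustive grid over $v$, with the smallest feasible $\delta_1$ chosen for each $v$) and is intended only to illustrate the trade-off at the initial state; it is not necessarily the solver used to generate the figures:

```python
import math

def zeta(z):
    return (0.8 + 0.2 * math.exp(-100 * abs(z))) * math.tanh(z)

def solve_qp(x1, x2, u_max, T=1.0, mu=2.0, p_u=1.0, p1=100.0, q1=1000.0):
    """Grid-based solve of the 2-variable QP in (v, delta_1) at a state
    with h_G(x) > 0; illustrative only, not a production QP solver."""
    r2 = x1 * x1 + x2 * x2
    hG = r2 - 1.0
    Lf = 2.0 * (r2 - 1.0) * (x1 * x1 + x2 * zeta(x2))   # L_f h_G
    Lg = 2.0 * r2                                        # L_g h_G
    alpha = mu * math.pi / (2.0 * T)                     # alpha1 = alpha2
    w = alpha * hG ** (1 + 1 / mu) + alpha * hG ** (1 - 1 / mu)
    best = None
    n = 20_000
    for i in range(n + 1):
        v = -u_max + 2.0 * u_max * i / n
        # smallest delta_1 making the FxT-CLF constraint hold at this v,
        # floored at the unconstrained minimizer -q1/p1 of the objective
        delta = max(-q1 / p1, (Lf + Lg * v + w) / hG)
        J = 0.5 * p_u * v * v + 0.5 * p1 * delta * delta + q1 * delta
        if best is None or J < best[0]:
            best = (J, v, delta)
    return best[1], best[2]

x0 = (3.33, 1.33)
v16, d16 = solve_qp(*x0, u_max=16.0)
v25, d25 = solve_qp(*x0, u_max=25.0)
```

Consistent with the trend reported below, at $x(0)$ the tighter bound $u_{max}=16$ forces a positive slack $\delta_1^*$ (the input saturates at $-16$), while $u_{max}=25$ admits a clearly negative slack.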
![Variation of $\max\delta_1$ for various control input bounds $u_{max}$.[]{data-label="fig:del1 umax"}](del1_u_max.eps){width="0.9\columnwidth"}
We choose $p_{u_1}, p_{u_2} = 1, \mu = 2$ for the numerical simulations. First, we study the effect of the control input bound on the maximum value of $\delta_1$. We fix $T = 1, p_1 = 100, q_1 = 1000$, and vary $u_{max}$. Figure \[fig:del1 umax\] plots $\max_x\delta_1(x)$ for various values of $u_{max}\in [16\;,\; 25]$.[^3] It can be observed that $\delta_1$ decreases as the control authority of the system increases.
![Control input $u(t)$ for various control input bounds $u_{max}$.[]{data-label="fig:u bound umax"}](u_bound_u_max.eps){width="0.9\columnwidth"}
Figure \[fig:u bound umax\] plots the norm of the control input with time for various values of $u_{max}$. The value of $u_{max}$ increases from $16$ to $25$ from blue to red. It can be observed that in every case, the system trajectories do utilize the maximum available control authority in the beginning of the simulation, while the control input decreases to zero as the system trajectories approach the goal set.
Figure \[fig:energy umax\] plots the energy utilized by the system in terms of the integral $\int_0^T\|u(t)\|^2dt$ for various values of $u_{max}$. The total energy decreases by about 8$\%$ as the maximum control authority increases from 16 to 25. This is also evident from Figure \[fig:x traj umax\], which plots the different paths traced by the system for various values of $u_{max}$. It can be observed that as the control authority increases, the path length decreases, which results in a decrease in the utilized energy.
![Energy $\int_0^T\|u(t)\|^2dt$ for various control input bounds $u_{max}$.[]{data-label="fig:energy umax"}](energy_u_max.eps){width="0.9\columnwidth"}
![Closed-loop trajectories for various control input bounds $u_{max}$.[]{data-label="fig:x traj umax"}](x_traj_u_max.eps){width="0.9\columnwidth"}
Next, we fix $u_{max} = 16, p_1 = 100, q_1 = 1000$ and vary the required time of convergence $T$ between 1 and 10. Figure \[fig:del1 T\] shows the variation of $\max_x\delta_1(x)$ as a function of the convergence time $T$. As $T$ increases (or equivalently, $\alpha_1, \alpha_2$ decrease), the maximum value of $\delta_1(\cdot)$ decreases. This implies that for a larger time of convergence, there is a larger domain of attraction starting from which convergence can be achieved in the given time.
![Variation of $\max\delta_1$ for various user-defined convergence time $T$.[]{data-label="fig:del1 T"}](del1_T.eps){width="0.9\columnwidth"}
These numerical relations indicate how a user can quantify, via offline simulations, the dependence of the maximum value of $\delta_1$ on $u_{max}$ and $T$. These relations can further guide the choice of appropriate sets of parameters in the (provably feasible) QP formulation for actual implementation on a real system, while ensuring that the control input defined as the solution of the QP indeed solves the problem at hand.
Conclusion
==========
We proposed a new result on FxTS by allowing a positive linear term to appear in the upper bound of the time derivative of the Lyapunov function. We characterized the domain of attraction, as well as the upper bound on the time of convergence for fixed-time stability, as a function of the coefficients of the positive and the negative terms in this upper bound. We then used the new FxTS result in a QP formulation, and showed that the feasibility of the QP is guaranteed due to the presence of the slack term that corresponds to the newly added linear term in our FxTS result. For the QP-based control design technique, we numerically established the relation of the maximum value of this slack term, which characterizes the domain of attraction for fixed-time convergence, to the control input bound and to the required time of convergence.
In this work, we only considered the convergence requirement in the presence of control input constraints. In the future, we would like to study multi-objective problems involving both safety and convergence requirements, and find the relations between the largest domain of attraction for fixed-time convergence and the largest subset of the safe set that can be rendered forward invariant, parametrized by the control input bounds and the time of convergence. Future work will also include the study of the effect of non-vanishing disturbances on fixed-time stable systems, in terms of the domain where the system trajectories are guaranteed to converge, and the time of convergence to this neighborhood.
Proof of Lemma \[lemma:int dot V\] {#app Lemma int dot V proof}
==================================
We have $$\begin{aligned}
I = \int_{V_0}^{0}\frac{dV}{V\left(-c_1V^{\frac{1}{\mu}}-c_2V^{-\frac{1}{\mu}}+c_3\right)}.\end{aligned}$$ Set $m = V^{\frac{1}{\mu}}$, so that $dm = \frac{1}{\mu}V^{\frac{1}{\mu}-1}dV$, which implies that $\frac{1}{\mu}\frac{dV}{V} = \frac{dm}{V^\frac{1}{\mu}} = \frac{dm}{m}$. Using this, we obtain that [$$\begin{aligned}
I & = \mu\int_{V_0^{\frac{1}{\mu}}}^{0}\frac{dm}{m(-c_1m-c_2\frac{1}{m}+c_3)} = \mu\int_{V_0^{\frac{1}{\mu}}}^{0}\frac{dm}{(-c_1m^2-c_2+c_3m)}.\end{aligned}$$ ]{} Now, we consider four cases, namely, $c_3\leq 0$, $0< c_3< 2\sqrt{c_1c_2}$, $c_3= 2\sqrt{c_1c_2}$ and $c_3> 2\sqrt{c_1c_2}$ separately.
For $0< c_3<2\sqrt{c_1c_2}$, the roots of $\gamma(m) = (-c_1m^2-c_2+c_3m) = 0$ are complex. Thus, completing the square in the denominator, $I$ is written as $$\begin{aligned}
I = -\frac{\mu}{c_1}\int_{V_0^{\frac{1}{\mu}}}^{0}\frac{dm}{\left(m-\frac{c_3}{2c_1}\right)^2+\frac{4c_1c_2-c_3^2}{4c_1^2}}.\end{aligned}$$ Evaluating the integral, we obtain $$\begin{aligned}
\label{eq: I case 1}
I & = \frac{\mu}{-c_1k_1}(\tan^{-1}k_2-\tan^{-1}k_3),\end{aligned}$$ where $k_1 = \sqrt{\frac{4c_1c_2-c_3^2}{4c_1^2}}$, $k_2 = -\frac{c_3}{\sqrt{4c_1c_2-c_3^2}}$ and $k_3 = \frac{2c_1V_0^{\frac{1}{\mu}}-c_3}{\sqrt{4c_1c_2-c_3^2}}$. Hence, we have that $$\begin{aligned}
I & = \frac{\mu}{c_1k_1}(\tan^{-1}k_3-\tan^{-1}k_2)\leq \frac{\mu}{c_1k_1}(\frac{\pi}{2}-\tan^{-1}k_2),\end{aligned}$$ since $\tan^{-1}(\cdot)\leq \frac{\pi}{2}$.
For $c_3\leq 0$, using the same expression as above, and noting that $k_2\geq 0$, we obtain that $$\begin{aligned}
I &\leq \frac{\mu}{c_1k_1}(\frac{\pi}{2}-\tan^{-1}k_2)\leq \frac{\mu\pi}{2\sqrt{c_1c_2}},\end{aligned}$$ since $\tan^{-1}(k_2)\geq 0$ and $k_1\geq \sqrt{\frac{c_2}{c_1}}$ for $c_3\leq 0$.
For $c_3> 2\sqrt{c_1c_2}$, the roots of $\gamma(m) = 0$ are real. Let $a\leq b$ be such that $c_1m^2-c_3m+c_2 = c_1(m+a)(m+b)$. This substitution allows us to factorize the denominator to evaluate the integral $I$. Note that since $ab = \frac{c_2}{c_1}>0$ and $a+b = -\frac{c_3}{c_1}<0$, we have $a\leq b<0$. Since $V_0^{\frac{1}{\mu}}<\frac{c_3-\sqrt{c_3^2-4c_1c_2}}{2c_1}$, we have that $\frac{1}{-c_1V^{a_1}-c_2V^{a_2}+c_3V}<0$ for all $V\leq V_0$, i.e., $c_3V<c_1V^{a_1}+c_2V^{a_2}$ holds and the denominator does not vanish for $V\in (0, V_0]$. Thus, we obtain that [$$\begin{aligned}
I & = \mu\int_{V_0^{\frac{1}{\mu}}}^{0}\frac{dm}{(-c_1m^2-c_2+c_3m)} = -\frac{\mu}{c_1}\int_{V_0^{\frac{1}{\mu}}}^{0}\frac{dm}{(m+a)(m+b)}\\
& = -\frac{\mu}{c_1(b-a)}\left(\int_{V_0^{\frac{1}{\mu}}}^{0}\frac{dm}{m+a}-\int_{V_0^{\frac{1}{\mu}}}^{0}\frac{dm}{m+b}\right).\end{aligned}$$ ]{} Evaluating the integrals, we obtain $$\begin{aligned}
I & = \frac{\mu}{c_1(b-a)}\left(\log\left(\frac{|b|}{|a|}\right)+\log\left(\frac{|V_0^\frac{1}{\mu}+a|}{|V_0^\frac{1}{\mu}+b|}\right)\right)\leq \frac{\mu}{c_1(b-a)}\log\left(\frac{|b|}{|a|}\right),\end{aligned}$$ where the last inequality is obtained using the fact that $\log\left(\frac{|V_0^\frac{1}{\mu}+a|}{|V_0^\frac{1}{\mu}+b|}\right)\leq 0$ for $a\leq b$. Finally, for $c_3 = 2\sqrt{c_1c_2}$, we have $a = b = \frac{-c_3}{2c_1} = -\sqrt{\frac{c_2}{c_1}}$, and thus, $$\begin{aligned}
I & = -\frac{\mu}{c_1}\int_{V_0^{\frac{1}{\mu}}}^{0}\frac{dm}{(m+a)(m+b)} = -\frac{\mu}{c_1}\int_{V_0^{\frac{1}{\mu}}}^{0}\frac{dm}{(m+a)^2}.
\end{aligned}$$ For $V_0^\frac{1}{\mu}<-a = \frac{c_3}{2c_1} = \sqrt{\frac{c_2}{c_1}}$, the integral $I$ evaluates to a finite value. Thus, for all $V_0^\frac{1}{\mu}\leq k\sqrt{\frac{c_2}{c_1}}$ with $0<k<1$, we have that $$\begin{aligned}
I = \frac{\mu}{c_1}\left(\frac{1}{\sqrt{\frac{c_2}{c_1}}-V_0^\frac{1}{\mu}}-\sqrt{\frac{c_1}{c_2}}\right)\leq \frac{\mu}{\sqrt{c_1c_2}}\left(\frac{k}{1-k}\right).\end{aligned}$$ This completes the proof.
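The repeated-root case can be checked numerically. The sketch below uses the assumed values $c_1=1$, $c_2=4$ (so $c_3=4$), $\mu=2$ and $V_0=1$, for which $V_0^{1/\mu}=k\sqrt{c_2/c_1}$ with $k=1/2$ and the final bound is attained with equality:

```python
import math

# Assumed values for the repeated-root case c3 = 2*sqrt(c1*c2), mu = 2.
c1, c2, mu = 1.0, 4.0, 2.0
c3 = 2.0 * math.sqrt(c1 * c2)               # = 4; double root of gamma at m = 2
V0 = 1.0
k = V0 ** (1.0 / mu) / math.sqrt(c2 / c1)   # = 1/2 here

# Midpoint quadrature of I = mu * int_0^{V0**(1/mu)} dm/(c1*m^2 - c3*m + c2).
X = V0 ** (1.0 / mu)
n = 200_000
h = X / n
I_num = 0.0
for i in range(n):
    m = (i + 0.5) * h
    I_num += h / (c1 * m * m - c3 * m + c2)
I_num *= mu

I_bound = mu / math.sqrt(c1 * c2) * k / (1.0 - k)   # = 1 here
```

Here the quadrature gives $I\approx 1$, matching the bound $\frac{\mu}{\sqrt{c_1c_2}}\frac{k}{1-k}=1$ exactly, as expected when $V_0^{1/\mu}$ sits on the level $k\sqrt{c_2/c_1}$.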
[^1]: The authors would like to acknowledge the support of the Air Force Office of Scientific Research under award number FA9550-17-1-0284.
[^2]: The authors are with the Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI, USA; `{kgarg, dpanagou}@umich.edu`.
[^3]: Since the open-loop system is unstable, for given set of initial conditions, it is observed that the closed-loop trajectories diverge for $u_{max}\leq 16$.
---
abstract: 'The radio lobes of Hydra A lie within cavities surrounded by a rim of enhanced X-ray emission in the intracluster gas. Although the bright rim appears cooler than the surrounding gas, existing Chandra data do not exclude the possibility that the rim is produced by a weak shock. A temperature map shows that cool gas extends out along the radio axis of Hydra A. The age of the radio source and equipartition pressure of the radio lobe argue against a shock, and comparison with similar structure in the Perseus Cluster also suggests that the rim is cool. We show that the cool bright rim cannot be the result of shock induced cooling, or due to the effect of magnetic fields in shocks. The most likely source of low entropy (cool) gas is entrainment by the rising cavity. This requires some means of communicating the bouyant force on the cavity to the surrounding gas. The magnetic field required to produce the Faraday rotation in Hydra A has the appropriate properties for this, if the Faraday screen is mainly in this bright rim. In Hydra A, the mass outflow due to the rising cavities could be sufficient to balance cooling driven inflow, so preventing the build up of low entropy gas in the cluster core.'
author:
- 'P.E.J. Nulsen, L.P. David, B.R. McNamara, C. Jones, W.R. Forman, and M. Wise.'
title: 'Interaction of Radio Lobes with the Hot Intracluster Medium: Driving Convective Outflow in Hydra A'
---
Introduction
============
The high spatial and spectroscopic resolution of the [*Chandra X-ray Observatory*]{} has permitted detailed observations of the interaction between radio sources and hot gas in elliptical galaxies and clusters of galaxies. Cavities containing radio lobes have been found in the X-ray emitting gas in a rapidly growing number of such systems [e.g. @hb93; @cph94; @brm00; @vrt00; @rpk00; @fj01; @blanton01; @brm01; @schindler01]. Many of these are cooling flow clusters, where Chandra and XMM data now show that there is very little gas below temperatures of about 1 keV [e.g. @lpd00; @peterson].
The lack of cool gas in cooling flow clusters, and the strong association of radio sources with these objects [@burns90] suggest that radio sources provide the energy required to stop copious amounts of gas from cooling to low temperatures in cooling flows [@lpd00; @fmnp01; @churazov]. Furthermore, it is argued on other grounds that the total power of radio jets is substantially larger than the radio power of the lobes that they feed [@pedlar; @bb96], as required if they are to heat the intracluster medium enough to quench cooling flows.
The powerful Fanaroff-Riley class 1 radio source Hydra A [3C218; @es83; @gbt90; @gbt96] shows a striking example of cavities caused by radio lobes. @brm00 found that the radio lobes of Hydra A have carved holes in the surrounding intracluster gas similar to those caused by the radio lobes of 3C84 in the Perseus Cluster [@hb93; @acf00]. Here we consider what the X-ray observations tell us about the interaction between the radio lobes of Hydra A and the intracluster gas. Although the discussion is centered on the Chandra observations of Hydra A, we consider similarities between Hydra A and other systems, especially the lobes of Perseus A [@acf00].
Our basic finding is that the SW cavity of Hydra A is surrounded by a region of enhanced X-ray emission which is cooler than ambient gas at the same radius elsewhere in the cluster. In conventional models [e.g. @chc97; @hrb98], an expanding radio source generates a shock. While this phase is transient, what we see now in the Hydra A cluster does not support a jet power that substantially exceeds its radio power. Furthermore, it is surprising that the coolest gas appears to be closest to the radio lobes. We focus here on the origin of the cool gas.
In §\[xran\] we discuss the Chandra data in the region of the SW radio lobe in detail. In §\[proc\] we consider several shock processes that may play some role in producing the bright rim. In §\[radio\] we argue that the radio observations are more consistent with the radio lobes being in local pressure equilibrium than with them being overpressured. In §\[disc\] we argue that the bright rim is most probably low entropy gas lifted by the buoyantly rising cavity from closer to the cluster center. We also discuss the implications for the magnetic field in the cool gas of the rim.
We adopt a flat CDM cosmology ($\Omega_{\rm m} = 0.3$, $\Omega_\Lambda
= 0.7$) with a Hubble constant of $70 \rm\, km\, s^{-1} \, Mpc^{-1}$, which gives a luminosity distance of 240 Mpc and an angular scale of 1.05 kpc per arcsec for the Hydra A Cluster.
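These two derived numbers follow directly from the stated cosmology; the stdlib-only sketch below recomputes the comoving distance by quadrature and then the luminosity distance and angular scale at the cluster redshift $z=0.0538$:

```python
import math

C_KM_S = 299792.458          # speed of light, km/s
H0 = 70.0                    # Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7
z_cluster = 0.0538

def E(z):
    # dimensionless Hubble rate for a flat LCDM cosmology
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def comoving_distance(z, n=10_000):
    # midpoint quadrature of (c/H0) * int_0^z dz'/E(z'), in Mpc
    h = z / n
    s = sum(h / E((i + 0.5) * h) for i in range(n))
    return C_KM_S / H0 * s

D_C = comoving_distance(z_cluster)
D_L = (1.0 + z_cluster) * D_C                 # luminosity distance, Mpc
D_A = D_C / (1.0 + z_cluster)                 # angular-diameter distance, Mpc
ARCSEC = math.pi / (180.0 * 3600.0)           # radians per arcsecond
kpc_per_arcsec = D_A * ARCSEC * 1000.0
```

Both values round to the 240 Mpc luminosity distance and 1.05 kpc per arcsec quoted above.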
X-ray observations of the region around the radio lobes {#xran}
=======================================================
The Chandra X-ray data used here are the same as discussed by @brm00 and @lpd00, consisting of a total exposure of 40 ks taken on 1999 October 30. Of this, 20 ks is with ACIS-I at the aim point and 20 ks with ACIS-S at the aim point. Raw and smoothed X-ray maps of the region around the lobes are shown in @brm00. Details of the data analysis, including screening and background subtraction, are given in @brm00 and @lpd00.
We focus on the SW cavity, since it is better defined in the X-ray image. As well as the count deficit in this cavity, the raw image shows a bright ‘rim’ of excess emission surrounding it. However, because the gas around the cavity is not uniform and the total number of photons in this part of the image is modest, it is difficult to extract a surface brightness profile for the cavity. Instead we have used circles centered on the SW cavity, at $\rm R.A. = 09^h\, 18^m\,
04^s\!\!.9$, $\rm decl. = -12^\circ\, 06'\, 08''\!\!.4$ (J2000), with radii of $11''$, $20''$ and $25''$, and determined the background subtracted surface brightness for the combined ACIS-I and ACIS-S data in the sector between position angles $90^\circ$ and $330^\circ$ in the resulting annuli (omitting the complex region towards the nucleus; see Fig. \[anreg\]). The resulting counts per pixel in the 0.5 – 7 keV band are given in Table 1. The bright rim shows as a 20% (8.6 sigma) excess over the mean of the two adjacent annuli.
We find, for ACIS-S data, that using the 0.5 – 3 keV and 3 – 7 keV bands to define a hardness ratio gives the greatest discrimination for temperature variations around the values of interest. Table 2 gives the ratio of 3 – 7 keV to 0.5 – 3 keV counts for the cleaned and background subtracted ACIS-S data for the 3 regions used in Fig. \[anreg\]: the cavity; the bright rim surrounding the cavity; the annulus outside the bright rim. The hardness ratio is also given for a circular region with a radius of $8''$ at the same distance from the nucleus as the center of the cavity, but in a direction perpendicular to the radio axis. Hardness ratios for 3, 4 and 7 keV gas, obtained from XSPEC simulated ACIS-S spectra of an absorbed MEKAL model with hydrogen column density equal to the galactic foreground value, the abundance of heavy elements set to 0.4 and a redshift of 0.0538 are also given.
The hardness ratio for the gas in the bright rim around the SW lobe is inconsistent with gas hotter than 4 keV at the $3.8\sigma$ level, and inconsistent with gas hotter than 7 keV at the $11\sigma$ level. The bright rim appears cooler than gas at the same distance from the nucleus in the direction perpendicular to the radio axis, but only at the $2.0\sigma$ level. No significant differences in hardness ratio between the cavity, its bright rim and the surrounding annulus are found in these data.
A temperature map of the central $128''\times128''$ of Hydra A together with the 6 cm radio contours is shown in Fig. \[tmap\]. The temperature map was computed following the technique of Houck, Wise, & Davis (2001 in preparation). Using the ACIS-S3 Chandra observation of Hydra A, a grid of adaptively sized extraction cells were selected to contain a minimum of 3000 counts each and then fit with a simple MEKAL thermal plasma model including a foreground Galactic absorption fixed at the nominal value of $4.94 \times
10^{20}\rm\, cm^{-2}$. The abundance was also held fixed at a value of 0.40, consistent with the values determined by @lpd00. Temperature maps computed allowing $N_H$ and $Z$ to vary show similar structure.
The main result here is that the bright rim appears to be at least as soft (cool) as ambient gas at the same radius. This is the most puzzling feature of these observations, and is discussed at length below. The situation is similar for the cavities in the Perseus Cluster [@acf00]. From the temperature map, we also note that the cooler gas extends outward, beyond the cavities, along the direction of the radio source axis.
@brm00 found that compared to the surrounding emission there is a total deficit of about 2000 counts within the SW cavity, in the energy band 0.5 – 7.0 keV. We can use this to constrain the location of the cavity relative to the plane of the sky. For the ambient temperature of 3.4 keV [@lpd00], we can convert the count deficit into an emission measure. Treating the cavity as a sphere of radius 20 kpc, we can then convert this to a gas density. Given the uncertainty in the count deficit, the result, $n_{\rm e} = 0.02 \rm\,
cm^{-3}$, is close to the density of ambient gas at the same radius [$n_{\rm e} \simeq 0.027 \rm\, cm^{-3}$ at $r=30$ kpc; @lpd00]. In order to produce such a large deficit, the cavity must be nearly devoid of X-ray emitting gas, and the projected distance from the center of the cavity to the nucleus is close to the actual distance. Since $n_{\rm e} \simeq 0.02 \rm\, cm^{-3}$ at $r = 40$ kpc, the radio axis cannot be much more than $45^\circ$ from the plane of the sky.
The geometrical uncertainties and the variation in the ambient gas properties from one side to the other of the SW cavity make it difficult to disentangle “background” cluster emission from emission within the cavity, preventing us from placing stringent quantitative limits on the level of X-ray emission within the cavity. However, we can place limits on emission by hotter gas within the cavity. To do this, first we fit a single temperature MEKAL model to the spectrum of the SW cavity, to account for “background” cluster emission, then we fit a two temperature model, with the lower temperature fixed at the value found from the single temperature fit. The single temperature fit gives $kT = 3.5\pm0.5$ (at 90%) keV and an abundance of 0.4, consistent with the ambient gas temperature and abundance at $r = 30$ kpc [@lpd00]. Abundances were fixed at this value in the two temperature model, leaving only the normalization of the two thermal components as free parameters in the fit. 90% upper limits (for one interesting parameter; $\Delta \chi^2 = 2.71$) on the normalization of the hotter component are given in Table 3, as fractions of the total emission measure, and as upper limits on the density of a uniform gas filling the cavity. Although it is not our main focus here, these limits place some constraint on the nature of the “radio plasma” in the cavity. We note that for $kT \ga 15$ keV, the pressure of the hot component could exceed the ambient pressure in the cavity.
Shock processes and the bright rim {#proc}
==================================
As discussed above, there is little evidence for any X-ray emission in the immediate region of the radio lobes, that is, in the radio lobe “cavities.” Our focus is on the nature and origin of the X-ray emission surrounding the cavities. Apart from the cavity, the most significant feature of this region is the rim of bright emission surrounding the SW cavity (we assume that the structure of the NE cavity is similar). Since there is no evidence of non-thermal emission, our discussion is based on the assumption that the X-ray emission is entirely thermal.
The simplest explanation for the presence of the bright rim is that the expanding radio lobe is compressing (shocking) the surrounding gas, and we consider this next, in §\[sshock\]. However, while we cannot rule it out, a shock is not consistent with the soft emission from the bright rim. Even if the radio lobes are not driving shocks now, in the standard model the initial radio outburst drives shocks [e.g. @hrb98], so we consider some other shock processes that may have played a role in the formation of the bright rims. In §\[shind\] we show that shock induced cooling does not help to explain the presence of the cooler gas. In §\[mhd\], on the assumption that the Faraday screen lies close to the SW radio lobe, we show that the magnetic pressure near to the lobe may be significant. We then show that the magnetic field in this region may be enhanced by shocks. However, the presence of a magnetic field in the shock increases the entropy jump in the gas, so it does not help to explain the presence of the cool gas around the radio lobes.
Radio lobe driven shocks {#sshock}
------------------------
In view of the energetic nature of radio sources, and this one in particular, we consider whether expanding radio plasma in the cavities is driving a shock into the surrounding intracluster medium. @brm00 have already argued that there is no evidence for a shock in Hydra A, while @acf00 and @blanton01 find similar results in Perseus and A2052. Here we consider the issues in more detail, showing that strong shocks around the cavities would be easily detected, hence that any shocking of gas around the cavities must be weak. We argue that the enhanced X-ray emission from the rim of the cavities is probably not due to a shock.
The sensitivity of the ACIS detectors on Chandra is a slowly decreasing function of gas temperature. This is quantified in Fig. \[sensitivity\], where we show relative count rate in the ACIS S3 chip in the bands 0.5 – 3 keV, 3 – 7 keV and 0.5 – 7 keV (dash-dot, dashed and solid curves, respectively) as a function of gas temperature, for gas with a fixed emission measure. The curves are normalized to give a count rate of 1 in the 0.5 – 7 keV band at a temperature of 3.5 keV. The ratio of 3 – 7 keV to 0.5 – 3 keV count rate is also shown (dotted). Note the very modest decline ($\simeq
30$ percent) in the 0.5 – 7 keV count rate as $kT$ varies from 3.5 to 80 keV. This makes it clear that hot gas is not easily hidden.
As well as raising the temperature, a shock compresses gas, tending to increase its brightness. This is illustrated by the uppermost curve in Fig. \[sensitivity\], which shows the count rate in the 0.5 – 7 keV band for a fixed mass of gas shocked from 3.5 keV. That is, it shows the relative count rate from a fixed amount of gas that has been compressed by the appropriate factor for a shock that would raise it from 3.5 keV to the given temperature. Even for a postshock temperature of 80 keV, shocked gas is about 2.7 times brighter than the unshocked 3.5 keV gas. Thus, shocked gas will generally be brighter than unshocked gas, at least until it returns to local pressure equilibrium. This can only fail under the most extreme conditions, where the postshock temperature is well in excess of 80 keV.
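As an aside, the compression factor used for the uppermost curve follows from the standard Rankine–Hugoniot jump conditions. A minimal numerical sketch (plain Python; the function names are ours) inverts the temperature jump to find the Mach number of a shock heating 3.5 keV gas to 80 keV, and the corresponding compression:

```python
def shock_jumps(mach, gamma=5.0 / 3.0):
    """Rankine-Hugoniot density and temperature jumps for Mach number `mach`."""
    m2 = mach * mach
    r = (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)           # density jump
    t = ((2.0 * gamma * m2 - (gamma - 1.0)) *
         ((gamma - 1.0) * m2 + 2.0)) / ((gamma + 1.0) ** 2 * m2)  # temperature jump
    return r, t

def mach_for_temperature_jump(t_target, gamma=5.0 / 3.0):
    """Bisect for the Mach number giving a prescribed temperature jump."""
    lo, hi = 1.0, 1.0e3
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if shock_jumps(mid, gamma)[1] < t_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mach = mach_for_temperature_jump(80.0 / 3.5)   # shock taking 3.5 keV gas to 80 keV
r, t = shock_jumps(mach)
```

The compression saturates near the strong-shock limit of 4 even at Mach $\sim8$, which is why even very hot shocked gas remains bright.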
We now consider a simple model of a shock driven by an expanding radio lobe. In this model a jet is assumed to feed energy into the cavity, causing it to expand supersonically and drive a shock into the surrounding gas. Following @hrb98, we assume that energy is fed into the lobe at a constant rate, and that the lobe plasma is relativistic ($\rm energy\ density =3 \times pressure$). To keep the model simple, we also assume that the shock expands into uniform gas and so is spherical. As discussed below, radiative cooling can be ignored during passage of the shock.
The state of this model is completely determined by the ratio of the amount of energy injected into the lobe to the initial quantity of thermal energy in the region swept up by the shock. At first, injected energy dominates and the shock is strong. During this stage the shocked gas forms a thin shell between the expanding radio lobe and the shock. The shocked flow is self-similar, with the shock radius given by $r_{\rm s} \simeq 0.82 (Pt^3/\rho_0)^{1/5}$, where $\rho_0$ is the density of the unshocked gas, $P$ is the rate at which the jet feeds energy to the cavity and $t$ is the time. The width of the shocked gas is $0.14 r_{\rm s}$. As it expands, the shock weakens and the shell of swept up gas thickens. At late times, when the shock is very weak, the pressure is nearly uniform, and the expanding lobe is surrounded by a layer of hot shocked gas that connects smoothly to the surrounding ambient gas.
We obtained surface brightness profiles for this model by embedding the spherically symmetric shocked flow into a cube of uniform (unshocked) gas, and projecting the resulting X-ray emission onto the sky, using the conversion to Chandra count rate given in Fig. \[sensitivity\]. The length of the cube was set to 55 kpc, to give the observed cluster background count rate ($9.2\times10^{-5}
\rm\, ct/s/pixel$ in ACIS-S) for the ambient (unshocked) gas density at 30 kpc from the cluster center [$n_{\rm e} = 0.027\rm\,
cm^{-3}$; @lpd00]. Fig. \[sbprof24\]a shows the resulting 0.5 – 3 keV and 3 – 7 keV surface brightness profiles (in arbitrary units, but with consistent relative normalization) at a time when the pressure jumps by a factor of 1.65 in the shock (shock Mach number of 1.23). At this stage, the ratio of the energy injected to the thermal energy swept up is 1.1. Fig. \[sbprof24\]b shows the corresponding 3 – 7 to 0.5 – 3 keV hardness ratio profile. The preshock temperature was set to 3.67 keV to match the hardness ratio in the region around the SW cavity, outside the bright rim ($\simeq 0.093$; Table 2).
Although this model shows about the right peak contrast in surface brightness between the bright rim and the surrounding region, averaged over the rim region to correspond to Table 1, the contrast is 12% instead of the observed 20%. On the other hand, the average surface brightness of the rim is 52% greater than that of the cavity, considerably larger than the observed brightness ratio (and formally unacceptable). Although it is poorly determined, the model gives about the right relative width for the rim and cavity. It predicts that emission from both the shocked rim and the cavity should be harder than from the surrounding gas, with a hardness ratio of 0.103, marginally inconsistent with what is observed ($2.7\sigma$ too high, allowing for the error in hardness of the rim and of the surrounding region).
The near constant, elevated hardness ratio for the whole of the shocked region is a robust feature of these models. Lines of sight passing through the cavity also pass through shocked gas in front of and behind the cavity, adding a similar hard component across the whole shocked region.
Figs \[sbprof10\]a and b are the same as Figs \[sbprof24\]a and b, but for a shock pressure jump close to 5.0 (Mach number $\simeq2.0$). In this case, the energy injected is about 2.7 times the thermal energy swept up. The surface brightness profile shows a narrower, brighter rim. However, the jump in average surface brightness from the unshocked region to the rim is 22%, close to the observed value. The jump from the cavity to the rim is 61% for this model. The hardness ratio in the shocked region is 0.113, about $3.9\sigma$ too high.
Interpretation of these results is complicated by non-uniformity of the gas surrounding the cavity and the geometric uncertainties. Neither model is a good fit to the data, but, given the uncertainties, it is hard to completely exclude a weak shock with our data. We will adopt the position that the Mach 1.23 shock is about the strongest that is consistent with the data. For this model, the pressure in the lobe is close to 1.3 times the pressure of the unshocked gas. We emphasize that, while we cannot completely rule out models in which the radio lobe is mildly overpressured, such a model is barely consistent with what is observed. The observations certainly do not suggest that the radio lobes are more than mildly overpressured compared to the ambient gas.
If the unshocked gas were multiphase [@acf94], it would not significantly change the appearance of the shock as deduced here. Shock strength depends on the pressure jump, so that a multiphase gas starting in local pressure equilibrium would experience much the same density and temperature jump in every phase. Since apparent brightness is not sensitive to gas temperature (Fig. \[sensitivity\]), the brightness of all phases would be affected in much the same way by the shock. Thus, the phase that dominates the emission would be little altered by a shock, and the surface brightness and hardness profiles would not be much different from those for single phase gas.
Shock induced cooling {#shind}
---------------------
Our purpose here is to show that shock induced cooling is negligible for the gas around the radio lobes. In general, a shock weakens quickly as it expands. For example, in the model used above, while the shock is strong (self-similar), the postshock pressure decreases with shock radius as $r_{\rm s}^{-4/3}$ (more slowly than a point explosion due to the energy injection). As a result, after gas is swept up by the shock, its pressure declines significantly in one shock crossing time, $r_{\rm s} / v_{\rm s}$, where $v_{\rm s}$ is the shock velocity. In most cases, the gas pressure will eventually return close to its value before the shock, in a few sound crossing times for the region significantly affected by the shock.
The cooling time of the gas is $$t_{\rm c} = {3 p \over 2 n_{\rm e} n_{\rm H} \Lambda(T)},$$ where $p$, $T$, $n_{\rm e}$ and $n_{\rm H}$ are the pressure, temperature, electron and proton number density of the gas, respectively, and $\Lambda$ is the cooling function. Under an adiabatic change $T \propto n_{\rm e}^{2/3}$, so that the cooling time scales as $t_{\rm c} \propto 1 /[\Lambda(T) T^{1/2}]$. This is a decreasing function of temperature for the range of temperatures of interest, so that as the gas pressure declines after passage of the shock the cooling time increases (unless cooling is fast enough to make the pressure change significantly non-adiabatic). When the shocked gas eventually returns to near its preshock pressure, it will have greater entropy due to the shock. This almost inevitably means that its cooling time is ultimately increased by the shock.
Thus, if the shock is to enhance cooling significantly, the cooling time of the gas immediately behind the shock needs to be comparable to the shock crossing time. Taking the temperature, electron density and abundance of the gas in the vicinity of the lobes as 3.4 keV, $0.027\rm\, cm^{-3}$ and 0.4 solar, respectively [@lpd00], its cooling time $\simeq 1.3\times10^9$ y. This is about 2 orders of magnitude longer than the sound crossing time of the lobes, which is close to $2\times10^7$ y for a radius of 20 kpc (the sound crossing time to the center of the cluster is about 50% longer). The shock crossing time is shorter than the sound crossing time, so that in order for shock induced cooling to be significant, the postshock cooling time needs to be much shorter than the preshock cooling time.
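The quoted timescales can be checked with standard cgs values. The sketch below (variable names are ours) assumes pure bremsstrahlung cooling, $\Lambda(T) \approx 2.4\times10^{-27}\, T^{1/2}\rm\, erg\, cm^3\, s^{-1}$, so line emission from the 0.4 solar gas, which would shorten the cooling time somewhat, is neglected:

```python
import math

kT_keV = 3.4
T = kT_keV * 1.1605e7                # temperature in K
n_e = 0.027                          # electron density, cm^-3
n_H = n_e / 1.2                      # proton density for ionized H + He
Lam = 2.4e-27 * math.sqrt(T)         # bremsstrahlung-only cooling function
p = 1.92 * n_e * kT_keV * 1.602e-9   # total gas pressure, erg cm^-3

yr = 3.156e7
t_cool = 3.0 * p / (2.0 * n_e * n_H * Lam) / yr    # cooling time, yr

mu, m_H = 0.6, 1.673e-24
c_s = math.sqrt(5.0 / 3.0 * kT_keV * 1.602e-9 / (mu * m_H))  # sound speed
t_sound = 20.0 * 3.086e21 / c_s / yr               # crossing time of 20 kpc, yr
```

This reproduces a cooling time of order $10^9$ y against a sound crossing time of order $2\times10^7$ y, the roughly two orders of magnitude quoted above.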
For gas hotter than $\sim2$ keV, cooling is mainly due to thermal bremsstrahlung, so that $\Lambda(T) \propto T^{1/2}$ and $t_{\rm c}
\propto p_{\phantom{\rm e}}^{1/2} n_{\rm e}^{-3/2}$. For a ratio of specific heats $\gamma = 5/3$, the shock jump conditions may be written as $$\label{spj}
{p_1\over p_0} = 1 + {5\over4} y$$ and $${n_{\rm e, 1} \over n_{\rm e, 0}} = {4 (1 + y) \over 4 + y},$$ where subscripts ‘0’ and ‘1’ refer to preshock and postshock conditions respectively, and $$y = {3 \mu m_{\rm H} v_{\rm s}^2 \over 5 kT_0} - 1$$ is the square of the shock Mach number minus 1 ($y$ measures shock strength). Using these results, it is straightforward to show that the postshock cooling time is minimized for $y = 4.68$ and the minimum postshock cooling time is 0.62 times the preshock cooling time.
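The minimization can be reproduced directly from the jump conditions above by a brute-force scan over the shock strength $y$ (a sketch; the function name is ours):

```python
def cooling_time_ratio(y):
    """Postshock/preshock cooling time, using t_c ~ p^(1/2) n_e^(-3/2)."""
    p_jump = 1.0 + 1.25 * y                  # p_1/p_0 = 1 + (5/4) y
    n_jump = 4.0 * (1.0 + y) / (4.0 + y)     # n_e,1/n_e,0
    return p_jump ** 0.5 / n_jump ** 1.5

# Scan shock strengths y in (0, 20) on a fine grid.
ys = [0.01 * i for i in range(1, 2000)]
ratios = [cooling_time_ratio(y) for y in ys]
i_min = min(range(len(ratios)), key=ratios.__getitem__)
y_min, r_min = ys[i_min], ratios[i_min]
```

The minimum lands at $y \simeq 4.7$ with a ratio of $\simeq 0.62$, as stated.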
Although the cooling function is not exactly proportional to $T^{1/2}$, the essential result, that the decrease in cooling time in a shock is modest at best, is inescapable. In order for the postshock cooling time to be comparable to the shock crossing time, the preshock cooling time would need to be close to the sound crossing time. If that were the case, then the gas could barely be hydrostatic. In any case, from the numbers given above, the cooling time is roughly 2 orders of magnitude greater than the sound crossing time. In the same manner, we can rule out appreciable shock induced cooling in all similar systems.
This argument applies equally well to shock induced cooling associated with a shock enveloping the two radio lobes, as described by @hrb98. The cool gas in the vicinity of the radio lobes is not the result of shock induced cooling.
Magnetohydrodynamic shocks {#mhd}
--------------------------
Like many cluster center radio sources, Hydra A has a large rotation measure [@tp93], up to $10^4 \rm\, rad\, m^{-2}$ or more for the SW radio lobe. The gas in the immediate vicinity of the radio lobes is an excellent candidate for the Faraday screen. Indeed, if the difference in Faraday rotation between approaching and receding jets is due to the extra path to the receding jet [@gl88], then the bulk of the Faraday rotation must arise in the region close to the lobes.
In view of this, we take the depth of the Faraday screen to be comparable to the size of the lobes, that is $\ell \simeq 20$ kpc. The rotation measure map of @tp93 shows coherent structure on a scale of about $5''$, so we take the coherence length of the magnetic field to be $r_{\rm c} \simeq 5$ kpc. The rotation measure is $812 n_{\rm e} B \ell \rm\, rad\, m^{-2}$ if the field is uniform and along the line of sight, but this is reduced by a factor of roughly $\sqrt{r_{\rm c} / \ell}$ due to random variation of the field direction along the line of sight [all quantities in the units used here; e.g. @ktk91]. Taking $n_{\rm e} = 0.027 \rm\,
cm^{-3}$, as above, requires a magnetic field strength in the Faraday screen of up to $B
\simeq 45\, \mu\rm G$ [exceeding the equipartition field strength in the lobes; @gbt90], although a more typical value would be $B\sim 20
\,\mu\rm G$. For a gas temperature of 3.4 keV, the gas pressure is $2.8\times10^{-10} \rm\, erg\, cm^{-3}$, while the magnetic pressure is up to $B^2/(8\pi) \simeq 8\times10^{-11} \rm\,
erg\, cm^{-3}$, approaching 30% of the gas pressure. The magnetic field strength is quite uncertain. If the main part of the Faraday screen is more closely wrapped around the lobes, then the field strength could be large enough to make the magnetic pressure dynamically important. In view of this, it is interesting to consider what happens to the gas and magnetic field in a shock.
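The field and pressure estimates above follow from the stated numbers; a short sketch (the variable names are ours, and the screen depth and coherence length are the assumed values from the text):

```python
import math

# Assumed inputs: RM up to 1e4 rad/m^2, n_e = 0.027 cm^-3,
# screen depth l = 20 kpc, field coherence length r_c = 5 kpc.
RM, n_e, l_kpc, rc_kpc = 1.0e4, 0.027, 20.0, 5.0

# RM = 812 n_e B l (B in microgauss, l in kpc), reduced by sqrt(r_c / l)
# for a field that random-walks along the line of sight.
B_uG = RM / (812.0 * n_e * l_kpc * math.sqrt(rc_kpc / l_kpc))

p_B = (B_uG * 1.0e-6) ** 2 / (8.0 * math.pi)   # magnetic pressure, erg cm^-3
kT_erg = 3.4 * 1.602e-9
p_gas = 1.92 * n_e * kT_erg                    # gas pressure, ~2.8e-10 erg cm^-3
```

This recovers $B \simeq 45\,\mu\rm G$ and a magnetic pressure close to 30% of the gas pressure.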
There are two matters of interest here. First, could shocking of the gas help to account for the strength of the magnetic field in this region, hence the presence of the Faraday screen? Second, if gas in the X-ray bright rim around the cavities is in local pressure equilibrium, then the gas in it must have higher density, hence lower entropy, than the surrounding gas. If the magnetic pressure in this gas is also significant, then its thermal pressure must be lower than that of the ambient gas, requiring even lower entropy to get the same X-ray brightness. We consider how a magnetic field can affect these things in a shock.
A general magnetohydrodynamic (MHD) shock can have one of three forms, Alfvén, slow or fast mode [e.g. @melrose]. For the case of interest, where the magnetic pressure is smaller than the gas pressure and the shock is driven by excess pressure, the mode of interest is always the fast mode. In order to keep the discussion simple, we will consider in detail only the case of a transverse MHD shock, where the shock propagates perpendicular to the magnetic field, but we have done the calculations for shocks at any inclination to the field. For a transverse shock, only the magnitude of the magnetic field changes in the shock, and the component of velocity parallel to the shock front is continuous at the shock, so we can choose a frame in which the flow is perpendicular to the shock front. In that frame, the shock jump conditions may be written $$\begin{aligned}
\rho_0 v_0 & = & \rho_1 v_1, \qquad \rm (mass) \\
v_0 B_0 & = & v_1 B_1, \qquad \rm (magnetic\ flux) \\
\rho_0 v_0^2 + p_0 + {{1\over2}}\rho_0 v_{\rm A,0}^2 & = &
\rho_1 v_1^2 + p_1 + {{1\over2}}\rho_1 v_{\rm A,1}^2 \qquad \rm (momentum)\end{aligned}$$ and $$H_0 + {{1\over2}}v_0^2 + v_{\rm A,0}^2 = H_1 + {{1\over2}}v_1^2 + v_{\rm A,
1}^2 \qquad \rm (energy),$$ where $\rho$, $v$ and $p$ are the gas density, velocity and pressure, respectively, $B$ is the magnetic field, and subscripts ‘0’ and ‘1’ refer to preshock and postshock values, respectively. The specific enthalpy is $H = \gamma p / [(\gamma-1) \rho]$, where $\gamma$ is the ratio of specific heats (we assume $\gamma=5/3$). The Alfvén speed, $v_{\rm A}$, is given by $\rho v_{\rm A}^2 = B^2 / (4 \pi)$.
Defining the shock compression ratio $r = \rho_1 / \rho_0$, we readily deduce from the jump conditions that $v_1 = v_0 / r$ and $B_1 = r
B_0$. Using these in the momentum and energy jump conditions then gives $$v_0^2 = {2r \over \gamma + 1 - (\gamma - 1)r} \left[ s_0^2 + {\gamma +
(2 - \gamma) r \over 2} v_{\rm A,0}^2 \right],$$ where $s_0$ is the speed of sound in the unshocked gas, $s_0^2 =
\gamma p_0 / \rho_0$. This equation determines the shock speed, $v_0$, in terms of the compression ratio and the physical properties of the unshocked gas. Note that, as for hydrodynamic shocks, the maximum compression ratio is $r_{\rm m} = (\gamma + 1) / (\gamma -
1) = 4$ (for $\gamma=5/3$). This applies to MHD shocks at any angle to the field.
We can use these results to determine the gas pressure jump, $${p_1\over p_0} = 1 + {2\gamma (r - 1) \over \gamma + 1 - (\gamma - 1)
r} \left[ 1 + {(\gamma - 1) (r - 1)^2 \over 4 \beta_0} \right],$$ where $\beta_0 = s_0^2 / v_{\rm A,0}^2$ is the standard measure of the ratio of thermal to magnetic pressure in the unshocked plasma [e.g. @melrose], and $\beta_0 \ga 1$ for the case of interest here. Magnetized and unmagnetized gas in local equilibrium need to have the same total pressure, $p + p_{\rm B}$, where $p_{\rm B} = B^2
/ (8\pi)$ is the magnetic pressure. A shock propagating through both will also produce nearly the same jump in total pressure. Thus, to compare the effects of shocks in magnetized and unmagnetized gas, we need to compare shocks that produce the same jump in total pressure, which is $${p_1 + p_{\rm B,1} \over p_0 + p_{\rm B,0}}
= {1\over 2 \beta_0 + \gamma} \left( 2 \beta_0 {p_1 \over p_0} + \gamma
r^2 \right).$$
The effect of the shock on the relative size of magnetic and gas pressure is measured by $$\beta_1 = {s_1^2\over v_{\rm A,1}^2} = {\beta_0 \over r^2} {p_1\over
p_0}.$$ This is plotted as a function of total pressure jump in Fig. \[mhdbeta\], for a few values of $\beta_0$. From the figure we see that moderately strong shocks, with total pressure jumps $\la 7$, can produce a modest decrease in $\beta$. However, the reduction is no more than about 13%. Strong shocks always increase $\beta$, i.e. the gas pressure is larger relative to the magnetic pressure after a strong shock. Although no results are shown here, if the angle between the shock front and the direction of the magnetic field exceeds about $30^\circ$, $\beta$ can only increase in the shock.
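This behaviour can be reproduced from the jump conditions above. The sketch below (function name ours) scans the compression ratio in the weak-field limit, $\beta_0 \gg 1$:

```python
def beta_ratio(r, beta0, gamma=5.0 / 3.0):
    """beta_1 / beta_0 for a transverse MHD shock of compression ratio r."""
    # Gas-pressure jump p_1/p_0 from the text.
    p_jump = 1.0 + (2.0 * gamma * (r - 1.0)
                    / (gamma + 1.0 - (gamma - 1.0) * r)) \
             * (1.0 + (gamma - 1.0) * (r - 1.0) ** 2 / (4.0 * beta0))
    return p_jump / r ** 2                   # beta_1/beta_0 = (p_1/p_0) / r^2

# Scan compression ratios r in (1, 4); beta0 = 100 approximates beta0 >> 1.
rs = [1.0 + 0.001 * i for i in range(1, 2995)]
vals = [beta_ratio(r, 100.0) for r in rs]
best = min(vals)   # largest fractional decrease in beta
```

The largest decrease in $\beta$ is about 13%, reached at a moderate compression, while near the maximum compression $\beta$ increases sharply, as described.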
The tendency of shocks to increase $\beta$ is due to the upper limit on shock compression. Since this cannot exceed a factor of 4, the magnetic field increases by 4 at most, and the magnetic pressure by no more than a factor of 16. On the other hand, there is no limit on the increase in thermal pressure. As a result, thermal pressure is always dominant in a sufficiently strong shock.
As noted above, the (total) pressure will generally return close to its original value after passage of a shock. Under adiabatic expansion, the gas pressure varies as $p \propto \rho^{5/3}$, but the variation of $\beta$ depends on whether the expansion is primarily 1-dimensional, giving $p_{\rm B} \propto \rho^2$, isotropic, giving $p_{\rm B} \propto \rho^{2/3}$, or somewhere in between (we ignore the singular case of 1-d expansion parallel to the magnetic field). Because of this, $\beta$ might change in either direction during re-expansion. However, for the self-similar shock flow of §\[sshock\], the re-expansion is isotropic, so that $\beta \propto
\rho$. As long as magnetic pressure is not dominant and the flow is roughly spherical, we can expect similar behaviour. Since gas pressure dominates after the shock, the re-expansion will decrease the density by about a factor of $ [(p_1 + p_{\rm B,1}) / (p_0 + p_{\rm
B,0}) ]^{-3/5} $. From Fig. \[mhdbeta\], we can see that this would give a net reduction in $\beta$, provided that the shock is not too strong.
Shocks where the magnetic field is not parallel to the shock front produce a greater increase in $\beta$ than the transverse shocks considered here. In particular, if the field is perpendicular to the shock front, the increase in $\beta$ will not be undone by re-expansion. Nevertheless, if the orientation of the field relative to the shock front is random, then the typical angle between shock front and field is $30^\circ$, and it remains true that a shock producing a total pressure jump of $\la 400$, followed by isotropic re-expansion, will cause a net reduction in $\beta$. Thus, as long as the shock is not extremely strong, its net effect is to decrease $\beta$. So repeated shocking may help to account for the moderately strong magnetic field in the vicinity of the extended radio source.
We now consider the effect of a MHD shock on entropy. Using $\Sigma =
p / \rho^\gamma$ as a measure of entropy, the entropy jump in the shock is $${\Sigma_1 \over \Sigma_0} = {p_1\over p_0 r^\gamma}.$$ This is plotted as a function of the jump in total pressure for a few values of $\beta_0$ in Fig. \[mhdent\], where we see that the magnetic field increases the shock entropy jump (also true for any angle between the shock and magnetic field). This result is closely related to the rise in $\beta$ in the shock. Since a strong shock is always dominated by gas pressure, the rise in gas pressure, hence entropy, must be greater in the presence of a magnetic field.
This has the opposite sense to that required to explain the bright rim around the radio lobes. If the bright rims of the radio lobes do have a significant magnetic field, then shocks will increase the entropy of the gas in them more than the entropy of other non-magnetized gas. In local pressure equilibrium after such shocks, the magnetized gas would then be less dense and less X-ray luminous than surrounding non-magnetized gas. Either this gas is not significantly magnetized, or it has not been subjected to significant shocks. Alternatively, the dense gas may be replaced in each radio outburst.
Note that no attempt was made to allow for the effects of particle acceleration on these MHD shocks [e.g. @be99]. If particle acceleration is very efficient, it can produce a substantial cosmic ray pressure in the shock and the results above are modified significantly. Of course, in that case the shocked gas would also be a strong radio source.
Implications of radio observations {#radio}
==================================
Based on the radio properties of Hydra A, the radio lobes are not likely to be currently driving shocks into the intracluster medium. The physical quantity controlling shocks is excess pressure (e.g. equation \[spj\]), so that the pressure in the lobes must exceed the ambient gas pressure if they are to drive shocks into the intracluster gas. However, under the usual assumptions, @gbt90 found that the equipartition pressure in the radio lobes is about an order of magnitude smaller than the pressure of the hot gas. This is unlikely to be the actual pressure in the lobes (it would imply that they are collapsing in about 1 sound crossing time), and so requires that the radio source is a long way from equipartition, has a low filling factor, or most of the pressure in the lobes is due to protons (or electrons with low gamma, etc.). The same applies to the radio lobes in Perseus and A2052 [@acf00; @blanton01]. While this does not prove that the lobes cannot be overpressured, it argues against this, supporting the case that they are not driving shocks now.
Using the spectral properties of the remote lobe $\sim4'$ N of the radio nucleus, @gbt90 estimated the age of the radio source to be $\sim 10^8$ y. Similar reasoning would make the inner lobes about an order of magnitude younger. Also based on synchrotron aging arguments, they found a flow velocity in the SW lobe of $\sim 9000 \rm\, km\, s^{-1}$. While we cannot rule out mildly supersonic expansion, the Chandra data for Hydra A are inconsistent with expansion of the SW lobe at Mach 2, i.e. a shock velocity of $1900\rm\, km\, s^{-1}$ in 3.4 keV gas, at the $3.9\sigma$ level. A shock at $9000 \rm\, km\, s^{-1}$ moving into 3.4 keV gas would produce a postshock temperature close to 97 keV. The shocked gas would be highly visible to the Chandra detectors (extrapolating Fig. \[sensitivity\] slightly) and hard, and we can rule this out. More generally, the lobes cannot expand or move through the 3.4 keV intracluster gas supersonically without creating a shock. Furthermore, at such highly supersonic speed, the shock would remain close to a moving lobe, making it easy to find. While we cannot rule out that plasma circulates within the radio lobe at $9000 \rm\, km\, s^{-1}$, this is implausible, and it seems more likely that one or more of the assumptions used to determine this velocity is invalid. The preponderance of cool gas close to the radio lobes (Fig. \[tmap\]) argues strongly against supersonic motion of the lobe boundaries. In that case, the region around the SW lobe will be close to local pressure equilibrium, and the X-ray luminous gas in the rim surrounding the lobe must be cooler than adjacent, less X-ray luminous gas. This is consistent with a reduced hardness ratio in the bright rim around the lobe (Table 2).
Discussion {#disc}
==========
While we cannot rule out a weak shock producing the bright rim in Hydra A, the evidence does not favour this. Furthermore, in the Perseus cluster where the data are clearer, the bright emission around the cavities is the coolest in the central region of the cluster [@acf00]. In the following we assume this is also the case in Hydra A.
This leaves open the issue of the origin of the cool gas in the bright rim. If it is cooler than the surrounding gas while at the same pressure (or lower, §\[mhd\]), then it has lower entropy. Unless it is produced somehow by the presence of the radio lobes (no mechanism considered above does this), then it must come from where the lowest entropy gas normally resides, at or near to the cluster center [the entropy gradient is weak, but non-zero in the central region of the Hydra A Cluster; @lpd00]. In that case the most obvious way to move the gas is by some form of entrainment, as proposed to account for cool gas associated with the radio structure in M87 [@hb95; @churazov]. However, the large mass of gas involved (even more so in Perseus), and its association with the lobes rather than the jets, suggest that the rising lobes themselves have pushed or dragged the low entropy gas to its current location. A rising “bubble” or cavity moves when denser gas flows down past it. So, while the buoyant force on the cavity is sufficient to move a mass of gas comparable to that displaced by the cavity, some physical mechanism must communicate this force to the surrounding gas to entrain it. Gas and cosmic ray pressure in the cavity or magnetic stresses may do this, but it is unclear whether the resulting stresses are stable enough to lift an appreciable mass of gas with the cavity. For this to work, the radio lobes and cavities must also have risen from a place closer to the active nucleus where they were formed.
Another issue is how the dense gas in the rim remains where it is. If gas in the bright rim is denser than the surrounding gas, then it is negatively buoyant. By Archimedes’ principle, the net force per unit volume on overdense gas is $\delta\rho \, g$, where $\delta\rho$ is the difference between its density and that of the ambient gas, and $g$ is the acceleration due to gravity, so the acceleration of the gas is $a = g \, \delta\rho/\rho$, where $\rho$ is its density. Unless this is counterbalanced, the gas will accelerate inward, falling a distance $r$ in $t_{\rm f} \simeq \sqrt{2r/a}$. Taking the gravitating mass within 30 kpc of the cluster center to be $3\times
10^{12}\rm\, M_\sun$ [@lpd00] and $r=20$ kpc, about the radius of the shell of cool gas, this gives an infall time $t_{\rm f} \simeq 5
\times 10^7 \sqrt{\rho/\delta\rho}$ y. We do not have a good estimate for $\rho/\delta\rho$, but the shock simulations suggest that the density in the shell is about twice the ambient gas density, giving $t_{\rm f} \simeq 7\times 10^7$ y. Thus, if the lobe is older than this, the cool gas should have fallen away from it unless held in place. This issue is closely related to the need for a force to drag the gas along with the rising lobe.
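The quoted infall time can be reproduced directly from the numbers in the text. A minimal sketch in Python; the cgs constants are standard values we supply, not taken from the paper:

```python
from math import sqrt

# cgs constants (assumed standard values)
G = 6.674e-8       # gravitational constant
KPC = 3.086e21     # cm per kpc
MSUN = 1.989e33    # g per solar mass
YR = 3.156e7       # s per year

M = 3e12 * MSUN    # gravitating mass within 30 kpc (quoted value)
R = 30 * KPC       # radius enclosing M
r = 20 * KPC       # fall distance ~ radius of the cool shell

g = G * M / R**2                 # gravitational acceleration at 30 kpc
t_f = sqrt(2 * r / g) / YR       # infall time for delta_rho/rho = 1

print(f"t_f = {t_f:.1e} yr")                           # ~5e7 yr
print(f"shell at 2x ambient: {t_f * sqrt(2):.1e} yr")  # ~7e7 yr
```

Both printed values agree with the $5\times10^7\sqrt{\rho/\delta\rho}$ y and $7\times10^7$ y figures in the text.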
There are two ways that the gas might be supported by magnetic fields. Either magnetic stresses could tie it to the cavity, supporting the excess weight of the gas by the positive buoyant force on the cavity, or the entrained gas might have acquired a strong but inhomogeneous magnetic field. In the former case, the low entropy gas would be reasonably homogeneous and its pressure close to the ambient pressure. In §\[mhd\] we estimated $B \sim 20 \, \mu\rm G$ with a coherence length of $r_{\rm c} = 5\rm\, kpc$ in the Faraday screen. Such a field would produce a force per unit volume of about $B^2/(4\pi r_{\rm
c}) \simeq 2.1\times10^{-33} \rm\, dyne\, cm^{-3}$. On the other hand, if the overdensity, $\delta\rho$, in the cool gas is similar to the ambient density at 30 kpc from the cluster center ($n_{\rm e} =
0.027\rm\, cm^{-3}$), then using the numbers above for $g$ at 30 kpc, the buoyant force per unit volume is $g \, \delta\rho \simeq
2.4\times10^{-33} \rm\, dyne\, cm^{-3}$. The magnetic field is quite uncertain, but these numbers are sufficiently close to make this a serious possibility.
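The near-coincidence of these two force densities is easy to verify. A sketch under our own assumptions: the cgs constants and the mean mass per electron, $\mu_{\rm e}\simeq1.17\,m_{\rm H}$ for an H+He plasma, are ours rather than values stated in the text:

```python
from math import pi

G, KPC, MSUN, M_H = 6.674e-8, 3.086e21, 1.989e33, 1.673e-24  # cgs

# Magnetic force per unit volume, B^2 / (4 pi r_c)
B = 20e-6                       # G, Faraday-screen estimate
r_c = 5 * KPC                   # field coherence length
f_mag = B**2 / (4 * pi * r_c)

# Buoyant force per unit volume, g * delta_rho, for delta_rho ~ ambient density
n_e = 0.027                     # cm^-3 at 30 kpc
mu_e = 1.17                     # assumed mean mass per electron, units of m_H
delta_rho = mu_e * n_e * M_H
g = G * 3e12 * MSUN / (30 * KPC)**2
f_buoy = g * delta_rho

print(f"magnetic: {f_mag:.2e} dyne cm^-3")   # ~2.1e-33
print(f"buoyant:  {f_buoy:.2e} dyne cm^-3")  # ~2.4e-33
```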
Alternatively, if the gas consists of an intimate, but inhomogeneous, mixture of cool gas and strong magnetic field, then the mean density of the mixture can be close to the ambient density, but the X-ray brightness greater [@hb95]. To illustrate this, consider the extreme case of a mixture of regions devoid of gas with regions devoid of magnetic field. Regions devoid of gas would have a magnetic pressure equal to the ambient gas pressure, requiring ($n_{\rm e} =
0.027\rm \, cm^{-3}$, $kT = 3.4$ keV in the ambient gas) $B \simeq
80\, \mu\rm G$, which is large compared to the equipartition field in the lobe [@gbt90]. If the gas in this mixture has density $\rho$ and filling factor $f$, then the mean density of the mixture is $f
\rho$. To be neutrally buoyant, this must equal the ambient density, $\rho_0$, and then the mean emission measure per unit volume of the mixture $\propto \langle \rho^2 \rangle = \rho_0^2/f > \rho_0^2$, so this region is brighter.
The former means of supporting the gas agrees better with the properties of the Faraday screen. Furthermore, the magnetic stresses required to keep the cool gas close to the radio lobe are much the same as those required to explain how this gas was lifted by the rising lobe. The cavity would have formed closer to the AGN and risen to its current location in about its buoyant rise time $\simeq 2R \sqrt{r / [G\, M(R)]}
\simeq 7 \times 10^7\rm\, y$ [cavity radius $r = 20\rm\, kpc$, distance to cluster center $R=30\rm\, kpc$, $M(R) =
3\times10^{12}\rm\, M_\odot$; @lpd00]. Although such a system may not be very stable, this is not much longer than the sound crossing time, and instabilities may have developed slowly enough to allow it to evolve to its current state. The patchy gas distribution around the cavity in M84 [@fj01] may represent a later stage of such a cavity, when the instability is well developed and a large part of the cool gas has fallen back to the center. There are also signs of instability in the Chandra image of A2052 [@blanton01]. In particular, the spur of bright emission in the northern radio cavity of A2052 may be due to part of the rim falling inward. In a cluster, the weight of the cool gas could limit the rise of the cavity until it falls away. If the cavity is not disrupted in this process, it could then rise relatively slowly, with the rate of rise determined by the rate at which the low entropy gas detaches from it. We note that the lifting of cool gas described here differs from that invoked by @churazov, where the gas is pulled along in the wake of the rising cavity. @quilis also model a hot bubble forming near a cluster center. While their model shows a transient density enhancement at its outside edge during bubble formation, it does not show a dense rim like that surrounding the SW lobe of Hydra A.
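The quoted rise time can be reproduced numerically. We read the expression as $t \simeq 2R\sqrt{r/[G\,M(R)]}$ (with $G$ restored; this reading is our assumption, but it recovers the quoted $7\times10^7$ y):

```python
from math import sqrt

G, KPC, MSUN, YR = 6.674e-8, 3.086e21, 1.989e33, 3.156e7  # cgs

r = 20 * KPC        # cavity radius
R = 30 * KPC        # distance to cluster center
M = 3e12 * MSUN     # M(R), quoted enclosed mass

t_rise = 2 * R * sqrt(r / (G * M)) / YR
print(f"t_rise = {t_rise:.1e} yr")   # ~7e7 yr
```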
The north – south extension of the cooler gas, outside the region of the cavities (Fig. \[tmap\]) suggests that the lifting of low entropy gas with rising “bubbles” of radio plasma is an ongoing process. The prevalence of radio sources in cooling flow clusters [@burns90], combined with their relatively short lifetimes, suggests that there are repeated radio outbursts. The extended region of cooler gas may be the trail left by the rise of earlier cavities. This ongoing process is also hinted at by the X-ray feature associated with more remote radio structure $\sim4'$ north of the cluster center [@wrf00]. The maximum mass that could be supported by the SW cavity at its present position is the mass of gas it displaces $\simeq
2.6\times10^{10} \rm\, M_\sun$ ($r=20$ kpc, $n_{\rm e} = 0.027 \rm\,
cm^{-3}$). If such a mass was lifted out of the cluster center in a radio outburst every $\sim10^8$ y, it would amount to outflow of about $250\rm\, M_\sun\, y^{-1}$, largely accounting for the lack of mass deposition by the cooling flow [see @lpd00]. On the other hand, if the radio plasma is relativistic, the total energy in the cavity is $3 pV \simeq 8.3\times10^{59} \rm\, erg$ (as above and $kT =
3.4$ keV), and the mean energy input associated with the cavities would be $\simeq 2.7\times10^{44}\rm\, erg\, s^{-1}$. This is comparable to the mean power needed to stop mass deposition by the cooling flow, $P = 5 \dot M kT_{\rm i} / (2 \mu m_{\rm H}) \simeq
3\times10^{44}\rm\, erg\, s^{-1}$, for $\dot M = 300\rm \, M_\sun\,
y^{-1}$ and an initial temperature of gas in the cooling flow $kT_{\rm
i} = 4$ keV. Thus, if there is an efficient mechanism for lifting the gas with the cavities and for thermalizing some of the energy in the cavity, in the case of Hydra A the radio outbursts could be sufficient to balance the energy loss in the cooling flow [cf. @soker01]. Because the bubbles and associated cool gas rise much faster than the cooling gas flows inward, the bulk of the cooling flow is hardly affected by the outflow, and so would form a steady (homogeneous) cooling flow. This is essentially the situation outlined in @lpd00. Of course, Hydra A is an exceptionally luminous radio source, and it is not yet clear whether this could apply in other cooling flow clusters. The energetics of the simulation by @quilis resemble those of Hydra A. However, their simulation was run for a time only about equal to the initial central cooling time of the gas, making it hard to draw conclusions about the long term effects of the energy injection on a cooling flow.
@lpd00 found that the iron and silicon abundances of the hot intracluster medium increase inward in the central $\sim 100$ kpc of the Hydra A Cluster. As they noted, the large-scale circulation described above would tend to mix heavy elements throughout the region of the circulating flow. The total mass of iron causing the excess central abundance is comparable to the total iron yield from type Ia supernovae in the cD galaxy over its lifetime. Together with the strong central concentration of the iron excess, this points to the cD galaxy as the source of the excess iron. However, half of the excess iron lies beyond $r\simeq 47$ kpc, so that its distribution is almost certainly more extended than the light of the cD galaxy, as we should expect if it is mixed outward. The extent of a steady cooling flow is determined by the time since the last major merger. While this is not known for the Hydra A Cluster, the cooling time at $r=100$ kpc is about $6\times 10^9$ y [@lpd00], so that the region of enhanced iron abundance coincides plausibly with the region of the steady cooling inflow. On the other hand, while some enriched gas can circulate out to $r\simeq 100$ kpc or beyond, if all of the gas did this, the heavy elements would need to be replaced on about the cooling timescale in order to maintain the abundance gradient. Since the cooling time is less than $10^9$ y for $r \la 30$ kpc, this seems implausible. It is more likely that part of the enriched gas circulates over a range of radius well inside $r=100$ kpc. This is consistent with the (unstable) buoyant lifting outlined above, where gas falls away from a cavity as it rises, so that different parts of the gas circulate over different ranges of $r$. Although we do not have a detailed model for this process, it is evident that the abundance gradient will provide a strong constraint on such models if the excess heavy elements do all originate in the cD galaxy.
As shown in §\[mhd\], moderately strong magnetohydrodynamic shocks can increase the ratio of magnetic to gas pressure. The magnetic field required to help carry the cool gas out with the cavity and to make the Faraday screen around the radio lobes may be enhanced by repeated moderate shocks due to outbursts of Hydra A. Alternatively, the magnetic field may be a relic of the radio activity in these outbursts, or due to a combination of these effects.
Conclusions
===========
The cavity in the hot intracluster medium containing the SW radio lobe of Hydra A has a bright rim of X-ray emission. X-ray emission from this rim is marginally softer than that from ambient gas at the same distance from the center of the Hydra A Cluster.
We have considered a simple model in which the bright rim is due to a shock driven by the expanding radio lobe of Hydra A. This model predicts that X-ray emission from the cavity and rim is harder than the surrounding X-ray emission and does not fit the data well, but we cannot rule out models with a weak shock. The most likely interpretation is that gas in the bright rim is cooler than ambient gas, and this is consistent with what is found in the Perseus cluster. A temperature map shows that cooler gas extends along the radio axis of Hydra A, beyond the cavities.
Even though cooling times are relatively short, we have shown that shocks in Hydra A and similar systems are too fast to induce significant cooling of the gas. Furthermore, if the magnetic pressure is significant, then, for a given shock strength (total pressure jump), shocks induce a greater entropy jump in magnetized gas than in non-magnetized gas, so there does not appear to be any way that shocks can account directly for the presence of the cooler gas. On the other hand, repeated shocking may help to produce strong magnetic fields near to the center of the cluster.
The most plausible origin of the cool gas around the cavities is closer to the cluster center. If the cavities were formed deeper within the cluster core than we now find them, they could have lifted lower entropy gas from these regions as they rose. This requires a means of communicating the buoyant force on a cavity to surrounding gas, and the most likely candidate for this is magnetic stresses. In the Hydra A Cluster, the magnitude of the magnetic field required to do this is consistent with that required to account for Faraday rotation in the radio lobes. The amount of gas lifted in this way from the cluster center may be sufficient to balance inflow of low entropy gas due to the cooling flow.
PEJN gratefully acknowledges the Harvard-Smithsonian Center for Astrophysics for their hospitality and support. BRM was supported by LTSA grant NAG5-11025. We thank the referee for constructive comments.
Berezhko, E. G., & Ellison, D. C., 1999, , 526, 385
Bicknell, G. V., & Begelman, M. C., 1996, , 467, 597
Blanton, E. L., Sarazin, C. L., McNamara, B. R., & Wise, M. W., 2001, , 558, L15
Böhringer, H., Voges, W., Fabian, A. C., Edge, A. C., & Neumann, D. M. 1993, , 264, L25
Böhringer, H., Nulsen, P. E. J., Braun, R., & Fabian, A. C., 1995, , 274, L67
Burns, J. O., 1990, , 99, 14
Carilli, C. L., Perley, R. A., & Harris, D. E., 1994, , 270, 173
Churazov, E., Bruggen, M., Kaiser, C., Bohringer, H., & Forman, W., 2001, , 554, 261
Clarke, D. A., Harris, D. E., & Carilli, C. L., 1997, , 284, 981
David, L. P., Nulsen, P. E. J., McNamara, B. R., Forman, W. R., Jones, C., Ponman, T., Robertson, B., & Wise, M., 2001, , 557, 546
Ekers, R. D., & Simkin, S. M. 1983, , 265, 85
Fabian, A. C., 1994, , 32, 277
Fabian, A. C., Sanders, J. S., Ettori, S., Taylor, G. B., Allen, S. W., Crawford, C. S., Iwasawa, K., Johnstone, R. M., & Ogle, P. M., , in press (astro-ph/0007456)
Fabian, A. C., Mushotzky, R. F., Nulsen, P. E. J., & Peterson, J. R., , 321, L20
Finoguenov, A., & Jones, C., 2001, , 547, L107
Forman, W., David, L., Jones, C., Markevitch, M., McNamara, B., & Vikhlinin, A., 2000, in “Constructing the Universe with Clusters of Galaxies,” eds. F. Durret & D. Gerbal, IAP, Paris
Garrington, S. T., Leahy, J. P., Conway, R. G., & Laing, R. A., 1988, Nature, 331, 147
Heinz, S., Reynolds, C. S., & Begelman, M. C. 1998, , 501, 126
Kim, K.-T., Tribble, P. C., & Kronberg, P. P., 1991, , 379, 80
Kraft, R. P., Forman, W., Jones, C., Kenter, A. T., Murray, S. S., Aldcroft, T. L., Elvis, M. S., Evans, I. N., Fabbiano, G., Isobe, T., Gerius, D., Karovska, M., Kim, D.-W., Prestwich, A. H., Primini, F. A., Schwartz, D. A., Schreier, E. J., & Vikhlinin, A. A., 2000, , 531, L9
McNamara, B. R., Wise, M., Nulsen, P. E. J., David, L. P., Sarazin, C. L., Bautz, M., Markevitch, M., Vikhlinin, A., Forman, W. R., Jones, C., & Harris, D. E. 2000, , 530, L135
McNamara, B. R., in “Proceedings of XXI Moriond conference: Galaxy Clusters and the High Redshift Universe Observed in X-rays,” eds D. Neumann, F. Durret, & J. Tran Thanh Van
Melrose, D. B., 1986, “Instabilities in Space and Laboratory Plasmas,” Cambridge University Press, Cambridge
Pedlar, A., Ghataure, H. S., Davies, R. D., Harrison, B. A., Perley, R., Crane, P. C., & Unger, S. W., 1990, , 246, 477
Peterson, J. A., Paerels, F. B. S., Kaastra, J. S., Arnaud, M., Reiprich, T. H., Fabian, A. C., Mushotzky, R. F., Jernigan, J. G., & Sakelliou, I., 2001, , 365, L104
Quilis, V., Bower, R. G., & Balogh, M. L., 2001, , in press (astro-ph/0109022)
Schindler, S., Castillo-Morales, A., De Filippis, E., Schwope, A., & Wambsganss, J., 2001, , 376, L27
Soker, N., White, R. E., III, David, L. P., & McNamara, B. R., 2001, , 549, 832
Taylor, G. B. 1996, , 470, 394
Taylor, G. B., & Perley, R. A., 1993, , 416, 554
Taylor, G. B., Perley, R. A., Inoue, M., Kato, T., Tabara, H., & Aizu, K. 1990, , 360, 41
Vrtilek, J. M., David, L. P., Grego, L., Jerius, D., Jones, C., Forman, W., Donnelly, R. H., & Ponman, T. J., 2000, in “Constructing the Universe with Clusters of Galaxies,” eds. F. Durret & D. Gerbal, IAP, Paris
TABLE 1
Surface brightness in the SW lobe
[c|cc]{} Annulus & Surface Brightness & error\
& (ct/pixel) & (ct/pixel)\
$0''$ – $11''$ & 2.260 & 0.047\
$11''$ – $20''$ & 2.724 & 0.034\
$20''$ – $25''$ & 2.285 & 0.032\
Notes: Counts are from the combined, cleaned ACIS-I and ACIS-S data. Rings were centered on $\rm R.A. = 09^h\, 18^m\, 04^s\!\!.9$, $\rm
decl. = -12^\circ\, 06'\, 08''\!\!.4$ (J2000). Only counts in the range of position angle $90^\circ$ – $330^\circ$ with respect to the center of the rings, and in the 0.5 – 7 keV energy range were included. Background subtraction was carried out using the same procedure as in @lpd00.
TABLE 2
Hardness ratios for regions around the SW cavity
[l|cc]{} Region & $C(3$–$7{\rm\, keV}) / C(0.5$–$3.0 {\rm\,keV})$ & error\
Cavity & 0.0808 & 0.0088\
Bright rim & 0.0799 & 0.0056\
Outside bright rim & 0.0932 & 0.0065\
Perpendicular to the radio axis & 0.0947 & 0.0049\
XSPEC simulated 3 keV gas & 0.0763\
XSPEC simulated 4 keV gas & 0.101\
XSPEC simulated 7 keV gas & 0.144\
Notes: The first 3 regions coincide with the regions used in Table 1.
TABLE 3
Limits on hot gas emission from within the SW cavity
[c|cc]{} $kT$ & Hot fraction & $n_{\rm e}$\
(keV) & (%) & ($\rm cm^{-3}$)\
4 & $<21$ & $<0.010$\
5 & $<7.8$ & $<0.0063$\
6 & $<5.3$ & $<0.0052$\
7 & $<4.4$ & $<0.0048$\
8 & $<3.4$ & $<0.0043$\
10 & $<3.1$ & $<0.0041$\
15 & $<2.7$ & $<0.0036$\
20 & $<2.5$ & $<0.0035$\
30 & $<2.5$ & $<0.0035$\
Notes: These are 90% upper limits for one parameter of interest ($\Delta \chi^2 = 2.71$) on the fraction of the emission measure from the SW cavity in a two temperature model that can come from a component of the given temperature. The 3rd column gives the electron density of a uniform gas filling the cavity that would give the maximum allowed emission measure for the hot component.
---
abstract: 'We describe a scheme for producing an optical nonlinearity using an interaction with one or more ancilla two-level atomic systems. The nonlinearity, which can be implemented using high efficiency fluorescence shelving measurements, together with general linear transformations is sufficient for simulating arbitrary Hamiltonian evolution on a Fock state qudit. We give two examples of the application of this nonlinearity, one for the creation of nonlinear phase shifts on optical fields as required in single photon quantum computation schemes, and the other for the preparation of optical Schrödinger cat states.'
author:
- Alexei Gilchrist
- 'G. J. Milburn'
- 'W. J. Munro'
- Kae Nemoto
title: Generating optical nonlinearity using trapped atoms
---
Our ability to perform quantum transformations on optical fields is hampered by the lack of materials with intrinsic optical non-linearities. While it is possible to circumvent this problem with schemes conditioned on photo-detection (see for instance [@KLM; @99gottesman390; @01pittman062311; @02ralph012314; @02knill052306; @Kok2002]), efficiency problems are encountered, both due to the inherent nature of the schemes and due to the limited efficiency of current photo-detectors. In this paper we propose the use of the interaction with a simple atomic system, conditioned on high efficiency atomic measurements, to generate a near-deterministic (with probability approaching one) nonlinearity on an optical field. This nonlinearity, together with linear transformations, is sufficient for generating arbitrary Hamiltonian evolution on a qudit formed from a truncated sequence of Fock states.
The interaction between the single mode field and a two-level atom is described by the effective Hamiltonian ${\mbox{\boldmath $H$}}=\kappa({\mbox{\boldmath $a$}}^\dagger
{\mbox{\boldmath $\sigma$}}^-+{\mbox{\boldmath $a$}}{\mbox{\boldmath $\sigma$}}^+)$ where ${\mbox{\boldmath $a$}}^\dagger$, ${\mbox{\boldmath $a$}}$ are the boson creation and annihilation operators for the field mode and the operators ${\mbox{\boldmath $\sigma$}}^+={\mbox{$|e\rangle\langle g|$}}$, ${\mbox{\boldmath $\sigma$}}^-={\mbox{$|g\rangle\langle e|$}}$ are the raising and lowering operators for the atomic state. The unitary transformation that acts when this interaction is applied for a time $t$ is ${\mbox{\boldmath $U$}}(\tau)=\exp[-i\tau({\mbox{\boldmath $a$}}^\dagger
{\mbox{\boldmath $\sigma$}}^-+{\mbox{\boldmath $a$}}{\mbox{\boldmath $\sigma$}}^+)]$ where $\tau=\kappa t$.
![Level scheme for an effective two-level transition controlled by a stimulated Raman process.[]{data-label="fig1"}](fig1)
If the atom is prepared in the ground state and found in the ground state after the interaction, the conditional state of the field is given by $$\label{eq:a-dag-a}
{\mbox{\boldmath $\Upsilon$}}_{g}(\tau){\mbox{$|\psi\rangle$}} =
\cos(\tau\sqrt{{\mbox{\boldmath $a$}}^\dagger{\mbox{\boldmath $a$}}}){\mbox{$|\psi\rangle$}}$$ On the other hand if the atom is prepared in the excited state and found in the excited state after the interaction, the conditional state is given by ${\mbox{\boldmath $\Upsilon$}}_{e}(\tau){\mbox{$|\psi\rangle$}} =
\cos(\tau\sqrt{{\mbox{\boldmath $a$}}{\mbox{\boldmath $a$}}^\dagger}){\mbox{$|\psi\rangle$}}$. There is considerable practical advantage to using the excited state preparation rather than the ground state, as there is always a signal for correct operation; however, the analysis is the same, and we will concentrate on the operator (\[eq:a-dag-a\]).
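These conditional amplitudes follow from exponentiating the Jaynes–Cummings Hamiltonian, and can be checked numerically in a truncated Fock basis. A minimal NumPy sketch (ours, not part of the paper; the cutoff and $\tau$ are arbitrary):

```python
import numpy as np

N = 12                                    # Fock-space cutoff (illustrative)
tau = 1.3                                 # dimensionless kappa * t

a = np.diag(np.sqrt(np.arange(1, N)), 1)        # annihilation operator
sp = np.array([[0., 0.], [1., 0.]])             # sigma^+ = |e><g|, basis (|g>,|e>)
H = np.kron(a.conj().T, sp.T) + np.kron(a, sp)  # a^dag sigma^- + a sigma^+

# U = exp(-i tau H) via the eigendecomposition of the Hermitian H
w, v = np.linalg.eigh(H)
U = v @ np.diag(np.exp(-1j * tau * w)) @ v.conj().T

# Atom prepared and detected in |g>: <n,g|U|n,g> = cos(tau sqrt(n))
for n in range(N - 1):
    amp = U[2 * n, 2 * n]
    assert np.isclose(amp, np.cos(tau * np.sqrt(n)))
print("conditioned evolution acts as cos(tau*sqrt(n)) on each |n>")
```

The diagonal structure arises because each two-dimensional manifold $\{|n,g\rangle, |n-1,e\rangle\}$ evolves independently, so the truncation is exact for every retained Fock state.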
We have in mind a quantum computing communication protocol in which the optical field mode is derived from a transform limited pulsed field which is rapidly switched into the cavity mode containing the atomic systems at fixed times determined by the pulse repetition rate. Similar systems have been proposed as a quantum memory for optical information processing [@Pittman2002]. When the atomic measurement yields the required result the field may be switched out again for further analysis or subsequent processing through linear and conditional elements.
Once the cavity field is prepared, we need to switch on the interaction with the atomic system. In order that we can switch this interaction at predetermined times, we propose that an effective two-level transition, connected by a Raman process with one classical field and the quantised signal field, be used. A similar scheme has recently been proposed as the basis of a high efficiency photon counting measurement [@Imamoglu; @James_Kwiat]. The process is also used in the EIT schemes for storing photonic information [@Lukin2000] and for quantum state transfer between distant cavities [@Cirac1997]. The level diagram is shown in figure \[fig1\]. The nearly degenerate levels $|1\rangle$ and $|2\rangle$ are connected by a stimulated Raman transition to level $|3\rangle$. The detuning of the Raman pulse from the excited state $|3\rangle$ is $\Delta$, which is approximately the same as the detuning of the signal mode from the same transition. With these parameters the interaction strength is given by $\kappa=\Omega
g/2\Delta$ where $\Omega$ is the Rabi frequency for the Raman pulse and $g$ is the one photon Rabi frequency for the signal field.
An advantage of using a stimulated Raman process of this kind is that the excited state $|2\rangle$ can be a meta-stable, long lived level. We thus do not need to consider spontaneous emission from this level back to the ground state. The readout of the atomic system may be achieved by using a cycling transition between the excited state $|2\rangle$ and another probe level $|4\rangle$. Such measurements are routinely performed in ion trap studies [@Rowe2001] and can have efficiencies greater than 99%.
In the following, we will prove that the ability to perform the conditional transformation (\[eq:a-dag-a\]) together with linear transformations is universal on qudits. We will follow this with two examples.
It is first instructive to consider a unitary operator of the form $\exp(i\theta\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}})$. A nonlinearity of the form $\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}}$, which we refer to as the square-root-number operator, turns out to be as good as a Kerr nonlinearity for universal quantum computation in the infinite dimensional Hilbert space of a harmonic oscillator [@Braunstein99]. The concept of universal computation is isomorphic to the concept of universal simulation, which is the ability to simulate any arbitrary Hamiltonian evolution, to any degree of accuracy, by combining the evolution due to a fixed class of Hamiltonian generators.
To motivate this idea, consider what kind of nonlinear oscillator would correspond to the Hamiltonian ${\mbox{\boldmath $H$}}_{\mathrm{srn}}=\sqrt{{\mbox{\boldmath $a$}}^\dagger
{\mbox{\boldmath $a$}}}$. The resulting Heisenberg equation of motion for the field amplitude operator is $$\dot{{\mbox{\boldmath $a$}}}=i[{\mbox{\boldmath $H$}}_{\mathrm{srn}},{\mbox{\boldmath $a$}}]=-i(\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}+1}-\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}}){\mbox{\boldmath $a$}}$$ which clearly indicates an oscillator in which the frequency is state dependent. The right hand side may be expanded as a power series in $({\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}})^{-1/2}$ to give $$\dot{{\mbox{\boldmath $a$}}}=-i\left(\frac{1}{2}({\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}})^{-1/2}
-\frac{1}{8}({\mbox{\boldmath $a$}}^\dagger{\mbox{\boldmath $a$}})^{-3/2}
+\ldots\right ){\mbox{\boldmath $a$}}$$ This corresponds to an oscillator for which the frequency of oscillation [*decreases*]{} with increasing energy. For comparison, the Kerr nonlinear oscillator corresponds to an oscillator in which the frequency increases with increasing energy. It is thus clear that the square-root-number Hamiltonian will result in a rotational shearing of states in the phase plane of the oscillator in much the same way as occurs for the Kerr nonlinearity [@Milburn86].
We can go further by using the results of Braunstein and Lloyd [@Braunstein99]. They considered the question of what Hamiltonians are universal for quantum computation in an infinite dimensional Hilbert space, such as that for a single mode of the radiation field. Their results show that Hamiltonians that are at most quadratic in the canonical momentum and position variables are not universal. For instance, in the case of a single mode field with annihilation and creation operators ${\mbox{\boldmath $a$}},{\mbox{\boldmath $a$}}^\dagger$, successive applications of Hamiltonians from the set of displacements, squeezing and rotations, $\mathcal{H}_{\mathrm{lin}}=\{z{\mbox{\boldmath $a$}}+z^*{\mbox{\boldmath $a$}}^\dagger,
z{\mbox{\boldmath $a$}}^2+z^*({\mbox{\boldmath $a$}}^\dagger)^2, z{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}\}$, can generate arbitrary linear canonical transformations in the phase space variables but no other transformations. A universal set is easily obtained by adjoining almost any Hamiltonian that is at least cubic in the canonical variables. A universal set of Hamiltonians could be made up from $\mathcal{H}_{\mathrm{lin}}$ together with the Kerr nonlinearity ${\mbox{\boldmath $H$}}_k=({\mbox{\boldmath $a$}}^\dagger)^2 {\mbox{\boldmath $a$}}^2$. Another choice is the cubic Hamiltonian ${\mbox{\boldmath $H$}}_c={\mbox{\boldmath $a$}}^\dagger
{\mbox{\boldmath $a$}}({\mbox{\boldmath $a$}}+{\mbox{\boldmath $a$}}^\dagger)+hc$. We now show that the square-root-number operator $\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}}$ together with the set $\mathcal{H}_{\mathrm{lin}}$ can be used to simulate a cubic Hamiltonian, and thus is universal for quantum simulations in the Hilbert space of a single mode.
Consider acting on the unitary operator generated by the square-root-number operator with a large displacement $\alpha$: ${\mbox{\boldmath $U$}}(\alpha,\theta)={\mbox{\boldmath $D$}}^\dagger(\alpha)\exp(i\theta\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}}){\mbox{\boldmath $D$}}(\alpha)$ where ${\mbox{\boldmath $D$}}(\alpha)$ is the displacement operator given by ${\mbox{\boldmath $D$}}(\alpha)=\exp(\alpha {\mbox{\boldmath $a$}}^\dagger -\alpha^* {\mbox{\boldmath $a$}})$. The resulting unitary operator may then be written as $$\begin{aligned}
{\mbox{\boldmath $U$}}(\alpha,\theta)&=&e^{i\theta|\alpha|}\exp\left[i\theta\left(
\frac{{\mbox{\boldmath $x$}}(\phi)}{2}+\frac{{\mbox{\boldmath $a$}}^\dagger{\mbox{\boldmath $a$}}}{2|\alpha|}
-\frac{{\mbox{\boldmath $x$}}(\phi)^2}{8|\alpha|} \right.\right.\nonumber\\
&&\left.\left.-\frac{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}} {\mbox{\boldmath $x$}}(\phi)+{\mbox{\boldmath $x$}}(\phi) {\mbox{\boldmath $a$}}^\dagger{\mbox{\boldmath $a$}}}{8|\alpha|^2}+ O(|\alpha|^{-3})\right)\right]\end{aligned}$$ where ${\mbox{\boldmath $x$}}(\phi)=({\mbox{\boldmath $a$}}e^{-i\phi}+hc)$ with $\phi$ the phase of $\alpha$. The first three terms in the argument of the exponential correspond to a second order Hamiltonian and simulate a displacement, a rotation, and a squeezing operation respectively. The fourth term, however, is cubic, which is what we require for a universal set of Hamiltonians for a single mode. By using a linear Hamiltonian to mix several modes (for instance a beam-splitter), arbitrary multi-mode Hamiltonians can be constructed [@Braunstein99]. It is thus clear that we can use the square-root-number operator, together with arbitrary linear transformations, to perform universal computation.
Now that we have shown the universality of the operator $\exp(i \theta
\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}})$ what can we say directly about $\cos
\left[ \theta \sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}}\right]$? The action of this conditional operator on the number state ${\mbox{$|n\rangle$}}$ is to multiply the state by the amplitude $\cos(\theta\sqrt{n})=[\exp(i\theta\sqrt{n})+\exp(-i\theta\sqrt{n})]/2$. Clearly, if we can restrict the interaction so that this amplitude is $\pm 1$, then it is equivalent to the full unitary operator—and we might expect to exploit the nonlinear interaction in a similar way. Now, as it turns out, it *is* possible [^1] to choose $\theta$ such that for a finite size computational space, ${\mbox{$|0\rangle$}}\ldots{\mbox{$|N\rangle$}}$, $$\label{qudit:rel}
\cos \left[ \theta \sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}}\right]{\mbox{$|n\rangle$}} \approx
\begin{cases}
-{\mbox{$|n\rangle$}}& \text{for $n=2 \left(2 m +1\right)^2$}\\
{\mbox{$|n\rangle$}} & \text{for $n\neq2 \left(2 m +1\right)^2$}
\end{cases}$$ Here $m$ is a non-negative integer and we see a sign shift only on the states $|2\rangle, |18\rangle, |50\rangle \ldots | 2 \left(2 m
+1\right)^2\rangle \ldots$. We have chosen the first sign shift to occur on the $|2\rangle$ Fock state. This is an arbitrary choice and other terms can be shifted instead.
For most computation schemes in which we are interested, there is a finite number of basis states (there generally always is a natural cutoff even in most CV schemes). In this case it is always possible to choose $\theta$ to satisfy the above relations. This indicates that our operator $\cos \left[ \theta \sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}}\right]$ is approximating an effective, highly nonlinear unitary operator. For instance, if we consider only the qutrit subspace of ${\mbox{$|0\rangle$}}$, ${\mbox{$|1\rangle$}}$, and ${\mbox{$|2\rangle$}}$ then the operator acts equivalently to the Hamiltonian, $H=\frac{\pi}{2}{\mbox{\boldmath $a$}}^{\dagger 2} {\mbox{\boldmath $a$}}^2$ which contains a Kerr nonlinearity.
With this nonlinear Hamiltonian, in conjunction with linear Hamiltonians, it is possible to generate arbitrary multi-mode Hamiltonians. Hence we can in principle create any of the unitary operators required for universal qutrit computation. This argument extends to higher qudits in a direct manner. The potential problem, however, is that the greater the number of states in the qudit space, the larger the value of $\theta$ required to satisfy (\[qudit:rel\]).
It has recently been shown by Knill, Laflamme and Milburn (KLM) [@KLM], that a conditional nonlinear sign shift (NS) gate on two photon states can be produced with passive linear optical elements and photo-detection. Such conditional nonlinear phase shifts can be used to perform two qubit operations for logical states encoded in photon number states. If such conditional gates are used to prepare entangled states for teleportation, efficient quantum computation can be performed which with suitable error correcting codes can be made fault tolerant [@KLM]. Here we show that if the ancilla modes are replaced with a two level atom similar conditional nonlinear phase shifts can be achieved by near deterministic post-selection on atomic measurements. The atomic measurements can be made with fluorescence shelving techniques which are very much more efficient than single photon counting measurements, thus reducing the need for new photon counting technologies inherent in the KLM scheme. In addition, the near-deterministic nature of our gate would also drastically simplify the implementation of the KLM scheme. The ability to induce conditional nonlinear phase shifts on quantum optical fields also has applications to high precision measurements (see for instance [@Kok2002]).
Our objective for the NS gate is to find a way to produce the nonlinear phase shift defined by $$\label{eq:ns}
c_0{\mbox{$|0\rangle$}}+c_1{\mbox{$|1\rangle$}}+c_2{\mbox{$|2\rangle$}}\rightarrow
c_0{\mbox{$|0\rangle$}}+c_1{\mbox{$|1\rangle$}}-c_2{\mbox{$|2\rangle$}}$$ where the $c_j$ are complex amplitudes satisfying $\sum |c_j|^2=1$. This is in the form of the conditions in equation (\[qudit:rel\]), so consider applying the measurement operator ${\mbox{\boldmath $\Upsilon$}}_{g}$ on the general two-photon state. This will leave us with the state $A_0c_0{\mbox{$|0\rangle$}}+A_1c_1{\mbox{$|1\rangle$}}+A_2c_2{\mbox{$|2\rangle$}}$ where $A_0=1$, $A_1=\cos(\tau)$ and $A_2=\cos(\sqrt{2}\tau)$. Solutions for which $(A_1,A_2)$ approaches arbitrarily close to $(1,-1)$ can easily be found; for examples, see table \[tab:Yg\].
$\tau$ $A_0$ $A_1$ $A_2$
---------- ------- ----------- ------------
6.5064 1 0.97519 -0.97516
37.73742 1 0.9992663 -0.9992665
219.918 1 0.999979 -0.999978
: High-probability results for a single atom initially in a ground state and measured in a ground state after an interaction time $\tau$. []{data-label="tab:Yg"}
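The entries of table \[tab:Yg\] can be reproduced numerically. A brute-force sketch (assuming only the relations stated above, $A_0=1$, $A_1=\cos(\tau)$, $A_2=\cos(\sqrt{2}\tau)$) that checks the first row and scans for the interaction time minimising the distance of $(A_1,A_2)$ from $(1,-1)$:

```python
import numpy as np

def amplitudes(tau):
    """Amplitudes A_n = cos(tau * sqrt(n)) for n = 0, 1, 2 (single-atom case)."""
    return np.cos(tau * np.sqrt(np.arange(3)))

# First row of the table:
A0, A1, A2 = amplitudes(6.5064)   # -> (1, ~0.97519, ~-0.97516)

# Scan for interaction times where (A1, A2) approaches (1, -1):
taus = np.arange(0.0, 50.0, 1e-4)
err = (np.cos(taus) - 1.0) ** 2 + (np.cos(np.sqrt(2.0) * taus) + 1.0) ** 2
best_tau = taus[np.argmin(err)]   # ~37.737, the table's second row
```

Larger and larger $\tau$ give better approximations, mirroring the table's third row.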
It is also possible to employ several atoms, each addressed independently, and find solutions of high probability using the same techniques. For example, with two atoms, one initially prepared in the ground state and the second prepared in the excited state, the conditional state given that both atoms are found in their initial states after the interaction is ${\mbox{$|\phi_{ge}\rangle$}}={\mbox{\boldmath $\Upsilon$}}_{g}(\tau_1)
{\mbox{\boldmath $\Upsilon$}}_{e}(\tau_2){\mbox{$|\psi\rangle$}}$, and values of the interaction times $\tau_1$ and $\tau_2$ can be found which again perform the NS gate with high probability. For instance, with $\tau_1=37.79300921$ and $\tau_2=197.78109842$, $|A_0|=|A_1|=|A_2|=0.990321935$ and the required phase shift is performed on ${\mbox{$|2\rangle$}}$.
In any real experiment, the desired interaction time can be calibrated by placing the phase shift in one arm of a Mach-Zehnder interferometer, with two single photon inputs, and examining the interference fringes as a function of the interaction time.
![A plot of the Q-function versus the two canonical phase space variables, for the conditional state produced from an initial coherent state using (a) $\alpha=10,\;\theta=10\pi$ and (b) $\alpha=10,\;\theta=5\pi$.[]{data-label="fig3"}](fig3){width="1.0\columnwidth"}
Now let us consider the case of the field in an initial coherent state $|\alpha\rangle$. We will show that the conditional transformation, $\cos(\theta\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}})$, generates a coherent superposition of coherent states, a so called [*Schrödinger cat state*]{}. A convenient phase space representation of the conditional state is the Q-function defined by $$\begin{aligned}
Q(\beta)&=&|\langle\beta|\cos(\theta\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}})|\alpha\rangle|^2 \nonumber \\
&=&\frac{1}{4}|\langle\beta|e^{i \theta\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}}}+e^{-i \theta\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}}}|\alpha\rangle|^2 \nonumber
\label{exact}\end{aligned}$$ We thus first consider the amplitude function ${\cal A}(\beta)=\langle \beta|\exp(i\theta\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}})|\alpha\rangle$. Based on the semi-classical expectation that this unitary transformation describes an oscillator with an energy dependent frequency, we anticipate that we need only consider the Q-function on the curve $|\beta|=|\alpha|$. With this in mind we put $\beta=|\alpha|e^{i\phi}$ and the Q-function amplitude is then given by $${\cal A}(\beta)=\sum_{n=0}^\infty p_n(\alpha) e^{i(\theta\sqrt{n}-\phi n)}$$ where $p_n(\alpha)=e^{-|\alpha|^2}\frac{|\alpha|^{2n}}{n!}$. We now assume that $|\alpha|\gg 1$ and approximate the Poisson distribution $p_n(\alpha)$ with a Gaussian $$p_n(\alpha)\approx(2\pi|\alpha|^2)^{-1/2}\exp\left
[-\frac{(n-|\alpha|^2)^2}{2|\alpha|^2}\right ].$$ We can then replace the sum with an integral over the variable $y=n-|\alpha|^2$. Under the assumption $|\alpha|\gg 1$ we find the integrand can be approximated as a general Gaussian and thus the integral is given by $${\cal A}(\beta)=e^{-i\phi|\alpha|^2+i\theta|\alpha|}\exp\left
[-\frac{1}{8}(\theta-2|\alpha|\phi)^2\right ].$$ Clearly this distribution is peaked at $\phi=\theta/(2|\alpha|)$. If we choose the interaction time so that $\theta=|\alpha|\pi$, we expect the state to be localised on the positive imaginary axis in the phase plane of the Q-function. Similarly for the same parameters $\langle
\beta|\exp(-i\theta\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}})|\alpha\rangle$ will be localised on the negative imaginary axis in the phase plane of the Q-function. It then follows that for the full conditional operator, $\cos(\theta\sqrt{{\mbox{\boldmath $a$}}^\dagger {\mbox{\boldmath $a$}}})$, the state has two components localised symmetrically about the origin on the imaginary axis. In figure \[fig3\] we show the Q-function as a function of the two canonical phase space variables $x$ and $p$ for several choices of $\alpha$ and $\theta$.
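The localisation can also be checked directly from the exact Fock-space sum, $Q(\beta)=\bigl|e^{-(|\alpha|^2+|\beta|^2)/2}\sum_n (\bar\beta\alpha)^n\cos(\theta\sqrt{n})/n!\bigr|^2$. The following sketch uses $\alpha=3$ with $\theta=|\alpha|\pi$ (smaller than the $\alpha=10$ of the figure, to keep the sum short) and confirms that the conditional state concentrates on the imaginary axis rather than at the initial coherent amplitude:

```python
import numpy as np

def q_function(beta, alpha, theta, nmax=80):
    """Exact Q(beta) = |<beta| cos(theta sqrt(a†a)) |alpha>|^2 via a Fock sum."""
    n = np.arange(nmax)
    # log_fact[n] = log(n!), built by cumulative summation of log(k)
    log_fact = np.cumsum(np.concatenate(([0.0], np.log(np.arange(1, nmax)))))
    z = np.conjugate(beta) * alpha + 0j
    amp = np.sum(np.exp(-(abs(alpha) ** 2 + abs(beta) ** 2) / 2
                        + n * np.log(z) - log_fact)
                 * np.cos(theta * np.sqrt(n)))
    return float(abs(amp) ** 2)

alpha, theta = 3.0, 3.0 * np.pi              # theta = |alpha| * pi
q_axis = q_function(3.0j, alpha, theta)      # on the positive imaginary axis
q_init = q_function(3.0 + 0j, alpha, theta)  # at the initial coherent amplitude
# q_axis dominates q_init: the two cat lobes sit at beta ≈ ±i|alpha|.
```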
We observe that if we load a coherent state into the cavity and repeat the conditioning measurement procedure outlined previously, the conditional state of the field will be close to a cat state. Note, however, that these cats are not parity eigenstates, as the conditional interaction cannot change the photon number distribution. Similar cat states are produced by a Kerr nonlinearity [@cat1; @cat2].
We now estimate some typical values for the parameters. In a recent experiment a similar stimulated Raman process was observed using single rubidium atoms falling through a high finesse optical cavity [@Henrich2000]. The following parameters are typical of that experiment: $g=2\pi\times 4.5$ MHz, $\Omega=2\pi\times 30$ MHz and $\Delta=2\pi\times 6$ MHz. This gives a coupling constant of the order of $70$ MHz. To achieve effective interaction constants of the order of those in table \[tab:Yg\] requires interaction times of the order of $0.1-5$ $\mu$s. In this paper we have neglected cavity decay, which obviously needs to be kept small over similar time scales; this, while difficult, is not impossible.
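These time-scale estimates can be reproduced under the standard adiabatic-elimination form of the effective Raman coupling, $\chi=g\Omega/(2\Delta)$ (an assumption in this sketch; the paper's own definition of the coupling appears earlier in the text):

```python
from math import pi

# Parameters quoted from the cavity QED experiment (angular frequencies, rad/s)
g = 2 * pi * 4.5e6
Omega = 2 * pi * 30e6
Delta = 2 * pi * 6e6

# Effective Raman coupling; chi = g*Omega/(2*Delta) is the standard
# adiabatic-elimination estimate (assumed form, not given in this excerpt).
chi = g * Omega / (2 * Delta)      # ~7.1e7 rad/s, i.e. of order 70 MHz

# Interaction times for the dimensionless tau values of table [tab:Yg]
t_short = 6.5064 / chi             # ~0.09 microseconds
t_long = 219.918 / chi             # ~3.1 microseconds
```

The resulting range of roughly $0.1$ to $3$ $\mu$s is consistent with the $0.1-5$ $\mu$s quoted above.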
We have shown how the ability to do very efficient measurements on single atoms trapped in an optical cavity can be used to implement nonlinear conditional phase shifts on the intra-cavity field. By carefully choosing the interaction time, a nonlinear interaction can be implemented with near-unit probability that, together with linear transformations, is universal for simulating interactions on qudits. For small qudits the required interaction time can easily be found numerically. If the field state can be carefully switched in and out of the cavity, the method can be used to implement near-deterministic nonlinear gates for quantum optical computing. For instance, the scheme can be used to implement a nonlinear sign shift gate, which thus provides a path to quantum computation with logical qubits encoded in photon number states. It can also be used to conditionally generate coherent superpositions of coherent states and thus can provide the key resource for quantum computing with coherent states [@Ralph02].
We would like to thank the Computer Science department of the University of Waikato for making available their computing resources. AG was supported by the New Zealand Foundation for Research, Science and Technology under grant UQSL0001. GJM was supported by the Cambridge-MIT Institute while a visitor at University of Cambridge. WJM acknowledges support from the EU through the project RAMBOQ. KN acknowledges support from the Japanese Research Foundation for Opto-Science and Technology.
[10]{}
[E. Knill]{}, [R. Laflamme]{}, and [G. Milburn]{}, Nature [**409**]{}, 46 (2001).
D. Gottesman and I. L. Chuang, Nature [**402**]{}, 390 (1999).
T. B. Pittman, B. C. Jacobs, and J. D. Franson, Phys. Rev. A [**64**]{}, 062311 (2001).
T. C. Ralph, A. G. White, W. J. Munro, and G. J. Milburn, Phys. Rev. A [ **65**]{}, 012314 (2002).
E. Knill, Phys. Rev. A [**66**]{}, 052306 (2002).
P. Kok, H. Lee, and J. P. Dowling, Phys. Rev. A [**65**]{}, 052104 (2002).
T. B. Pittman and J. D. Franson, Cyclical quantum memory for photonic qubits, 2002.
A. Imamoglu, Phys. Rev. Lett. [**89**]{}, 163602 (2002).
D. F. V. James and P. G. Kwiat, Phys. Rev. Lett. [**89**]{}, 183601 (2002).
M. Fleischhauer and M. D. Lukin, Phys. Rev. Lett. [**84**]{}, 5094 (2000).
J. I. Cirac, P. Zoller, H. J. Kimble, and M. Mabuchi, Phys. Rev. Lett. [**78**]{}, 3221 (1997).
M. A. Rowe [*et al.*]{}, Nature [**409**]{}, 791 (2001).
S. Lloyd and S. L. Braunstein, Phys. Rev. Lett. [**82**]{}, 1784 (1999).
G. J. Milburn, Phys. Rev. A [**33**]{}, 674 (1986).
V. Buzek and P. L. Knight, in [*Progress in Optics*]{}, edited by E. Wolf (Elsevier, Amsterdam, 1995).
C. C. Gerry and P. L. Knight, Am. J. Phys. [**65**]{}, 973 (1997).
M. Hennrich, T. Legero, A. Kuhn, and G. Rempe, Phys. Rev. Lett. [**85**]{}, 4872 (2000).
T. C. Ralph, W. J. Munro, and G. J. Milburn, Quantum Computation with Coherent States, Linear Interactions and Superposed Resources, 2001, quant-ph/0110115.
[^1]: With $\cos(\theta\sqrt{2})=-1$, the fundamental theorem of arithmetic implies that for $\sqrt{n}$ which are rational multiples of $\sqrt{2}$, $\cos(\theta\sqrt{n})=\pm 1$. Irrational multiples of $\sqrt{2}$ will have incommensurate periods, so it should be possible to find $\theta$ for which (\[qudit:rel\]) is true to within a given error.
---
abstract: 'This paper is a survey on exponential integrators for solving complex Ginzburg-Landau equations and related stiff problems. In particular, we are interested in accurate computation near the pulsating and exploding soliton solutions where different time scales exist. We explore variations of three types of exponential integrators: integrating factor (IF) methods, exponential Runge-Kutta (ERK) methods and split-step (SS) methods, and their embedded versions for computation and comparison. We present the details, derive formulas for completeness, and consider seven different integrating schemes to solve the complex Ginzburg-Landau equation. Moreover, we propose using a comoving frame to resolve fast phase rotation for better performance. We present thorough comparisons and experiments in the one- and two-dimensional complex Ginzburg-Landau equations.'
address:
- 'Center for Nonlinear Science, School of Physics, Georgia Institute of Technology, Atlanta, GA 30332-0430, USA'
- 'School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332-0430, USA'
author:
- 'X. Ding'
- 'S. H. Kang'
bibliography:
- 'DingKang16.bib'
title: Exponential integrators for dissipative solitons in complex Ginzburg-Landau equations
---
complex Ginzburg-Landau equation, dissipative solitons, exponential integrator
Introduction {#sec:intro}
============
sections/intro
Review of exponential integrators {#sect:over}
=================================
sections/overview
Embedded exponential integrators {#sect:seven}
================================
sections/seven
Time-step adaptivity and performance metrics {#sect:tsa}
========================
sections/metric
Numerical experiments and comparisons {#sect:exp1d}
=====================================
sections/exp1d
Comoving-frame improvement for ERK methods {#sect:cm}
==========================================
sections/cm
Two-dimensional numerical experiments {#sect:exp2d}
=========================
sections/exp2d
Conclusions {#sect:concl}
===========
sections/conclusion
Acknowledgements {#sect:ack}
================
We are grateful to P. Cvitanović for providing insightful arguments about the phase rotation phenomenon in one dimension. X. Ding is supported by a grant from G. Robinson, Jr. S. H. Kang is supported by Simons Foundation grant 282311.
Appendix {#sec:append}
========
sections/append
References {#references .unnumbered}
==========
COMMISSION 30: RADIAL VELOCITIES

PRESIDENT: Stéphane Udry
VICE-PRESIDENT: Guillermo Torres
PAST PRESIDENT: Birgitta Nordström
ORGANIZING COMMITTEE: Francis C. Fekel, Kenneth C. Freeman, Elena V. Glushkova, Geoffrey W. Marcy, Birgitta Nordström, Robert D. Mathieu, Dimitri Pourbaix, Catherine Turon, Tomaž Zwitter

COMMISSION 30 WORKING GROUPS

Div. IX / Commission 30 WG: Radial-Velocity Standard Stars
Div. IX / Commission 30 WG: Stellar Radial Velocity Bibliography
Div. IX / Commission 30 WG: Catalogue of Orbital Elements of Spectroscopic Binaries

TRIENNIAL REPORT 2006–2009
Introduction
============
This three-year period has seen considerable activity in the Commission, with a wide range of applications of radial velocities as well as a significant push toward higher precision. The latter has been driven in large part by the exciting research on extrasolar planets. This field is now on the verge of detecting Earth-mass bodies around nearby stars, as demonstrated by recent work summarized below, and radial velocities continue to play a central role.
This is not to say that classical applications of RVs have lagged behind. On the contrary, this triennium has seen the release of several very large data sets of stellar radial velocities (Galactic and extragalactic) that are sure to have a significant impact on a number of fields for years to come. The era of mass-producing radial velocities has arrived. Examples include the Geneva-Copenhagen Survey, the Sloan Digital Sky Survey, and RAVE, and are described below.
Due to circumstances beyond our control, the report of Commission 30 for the previous (2003–2006) triennium did not appear in the printed version of the Transactions of the IAU, although it did appear in the electronic version. For progress during the previous period, the reader is therefore encouraged to consult the latter, which is available from the Commission web site.
Radial velocities and exoplanets
================================
By G. Torres and J. Johnson
Toward Earth-mass planets
-------------------------
Detections of Jupiter-mass exoplanets by the radial-velocity method relying on measurements with precisions of a few m s$^{-1}$ are now quite routine. This technique has provided by far the majority of the more than 300 planet discoveries to date. The persistence of astronomers and the increasing precision of their instruments has led to larger and larger numbers of multi-planet systems being found. One example is the interesting case of $\mu$ Arae ([@Pepe:07b]), with *four* planets, one of which is as small as 10.5 M$_{\rm
Earth}$. The host star also presents the signature of $p$-mode oscillations seen clearly in the radial velocities. The record-holder for the most planets is the star 55 Cnc, which is orbited by no fewer than *five* planets ([@Fischer:08]), of which the smallest has a minimum mass of 10.8 M$_{\rm Earth}$. Exciting discoveries during this period made possible by the high precision and stability of the HARPS instrument on the ESO 3.6-m telescope at La Silla (Chile) include the system Gl 581 (at only 6.3 pc), attended by at least three planets. In addition to the previously known Neptune-mass body orbiting the star with a period of 5.3 days, two other low-mass planets were found by [@Udry:07a] with minimum masses of only 5 M$_{\rm Earth}$ (period 12.9 days) and 7.7 M$_{\rm Earth}$, the latter being near the outer edge of the habitable zone of the M3V parent star (period 83.6 days). The three-planet orbital solution for this case has an rms residual of only 1.2 m s$^{-1}$. Another system with three low-mass planets was announced by [@Mayor:08a] around the nearby (13 pc) metal-poor K2V star HD 40307. The planets weighed in at 4.2, 6.9, and 9.2 M$_{\rm Earth}$, and the three-planet Keplerian orbital fit gave impressive residuals of just 0.85 m s$^{-1}$. HARPS has demonstrated that this sort of velocity precision is achievable for “quiet” stars that present a low level of “jitter” in their radial velocities due to astrophysical phenomena such as $p$-mode oscillations, granulation, or chromospheric activity. Indications are that Neptune-mass or smaller planets are more common around solar-type (F–K) stars than previously thought (see, e.g., [@Mayor:08b]).
Retired A stars and their planets
---------------------------------
Most Doppler searches for planets have concentrated on main sequence stars of spectral types F or later, because the velocity precision for earlier type stars is seriously compromised by line broadening induced by rapid rotation, as well as the overall fewer number of spectral lines available. This difficulty in studying higher-mass stars introduces a bias in our understanding of planets, but it can be overcome by looking at such stars after they have left the main sequence. This is precisely the approach of an ongoing project to investigate the relationship between stellar mass and planet formation by using the HIRES instrument on the Keck 10-m telescope to search for planets in a sample of 240 intermediate-mass subgiants ($1.3 <
M_*/M_\odot < 2.2$). Subgiants have lower surface temperatures and rotational velocities than their main-sequence progenitors, making them ideal proxies for A- and F-type stars in Doppler studies. From a smaller sample of subgiants observed previously by [@Johnson:07a] at Lick Observatory for 4 years with a typical velocity precision of 4 m s$^{-1}$, a strong correlation was detected between stellar mass and planet occurrence, with a detection rate of 9% within 2.5 AU among the high-mass sample, compared to 4.5% for Sun-like stars and less than 2% for M dwarfs. A paucity of planets within 1 AU of stars with masses greater than 1.5 M$_\odot$ was found, indicating that stellar mass also plays a key role in planet migration ([@Johnson:07b; @Johnson:08]). The goal of the expanded Keck survey (with an increased velocity precision of about 2 m s$^{-1}$) is to map out the relationships between stellar mass and exoplanet properties in greater detail by examining the distribution of planetary minimum masses, eccentricities, semimajor axes, and the rate of multiplicity around evolved A stars. If the 9% occurrence rate is confirmed, some 20–30 new planets should be found in the sample orbiting some of the most massive stars so far examined by the Doppler technique.
Current status and prospects
----------------------------
In May of 2008 NASA convened the Exoplanet Forum 2008, a meeting of experts from the US and other countries in eight different observational techniques related to exoplanet research. The purpose was to discuss paths forward for exploring and characterizing planets around other stars, and to provide specific suggestions for space missions, technology development, and observing programmes that could fulfill the recommendations of a previously held meeting of the Exoplanet Task Force (http://www.nsf.gov/mps/ast/exoptf.jsp). The reports resulting from these meetings are intended to provide input for consideration by various advisory committees in the US, and in particular by the Astronomy and Astrophysics Decadal Survey that is currently underway.
Radial velocities was one of the eight techniques considered by the Exoplanet Forum 2008. The corresponding chapter of the report, available at , summarized the progress in the field over the last few years, which is illustrated by a velocity precision of 1 m s$^{-1}$ or slightly better achieved so far, led by the Swiss team using the HARPS instrument on the ESO 3.6-m telescope, and the California-Carnegie team using the HIRES instrument on the Keck 10-m telescope. The factors currently limiting the precision were discussed briefly and have been described in detail by [@Pepe:07a]. They include various sources of astrophysical noise (stellar oscillations, granulation, magnetic cycles, collectively known as “stellar jitter”), guiding, the illumination of the spectrograph, and the wavelength reference. Good progress has been made in each of these areas. For example, it appears that jitter can be substantially reduced through longer exposures or binning, to the level of perhaps 10 cm s$^{-1}$ or less. A new thorium-argon line list was developed by [@Lovis:07] that significantly improves the velocity precision when using this source as the wavelength reference. Further improvements in the velocity precision perhaps reaching a few cm s$^{-1}$ appear possible using a dense spectrum of lines generated by a femtosecond-pulsed laser (“laser comb”), described in more detail below. The next few years will tell whether this promise can be realized in practice.
The report of the Exoplanet Forum also described recent progress in techniques to measure precise velocities in the near infrared (see, e.g., [@Ramsey:08]), which are now approaching the 10 m s$^{-1}$ level in initial tests. Longer wavelengths potentially provide a significant advantage for the Doppler detection of very small (even Earth-mass) planets, since these objects produce a larger signal when orbiting less massive stars, which emit most of their flux in the near infrared.
In addition to velocity precision, the report pointed out what is currently considered by the community to be the greatest challenge for making progress in the detection of exoplanets by the Doppler technique: the limited access to telescope time. This has a direct impact not only on the size of the samples of solar-type stars that can be studied, but also severely restricts the number of late-type (faint) stars that can be targeted to search for Earth-mass planets. The need for exposure times longer than dictated by Poisson statistics to reduce stellar jitter, as mentioned above, is a further strain on the limited resources currently available on telescopes equipped with high-precision spectrographs.
Toward higher radial velocity precision
=======================================
By G. Torres
During this period agreement has been reached for the construction of an improved copy of the very successful HARPS spectrograph, currently in operation at the ESO 3.6-m telescope at La Silla, for the northern hemisphere (HARPS-NEF). This is a high-resolution ($R \approx
120,\!000$) fibre-fed optical spectrograph with broad wavelength coverage (3780–6910 Å) designed for high radial velocity precision. HARPS-NEF is a collaboration between the New Earths Facility (NEF) scientists of the Harvard Origins of Life Initiative and the HARPS team at the Geneva Observatory. It is expected to be the workhorse for follow-up of transiting planet candidates for NASA’s *Kepler* mission, and should be operational perhaps in late 2010. HARPS-NEF is a cross-dispersed echelle spectrograph that will benefit not only from updates and improvements over the original HARPS instrument, but in addition it will be installed on a larger telescope aperture in the northern hemisphere (the 4.2-m William Herschel Telescope on La Palma, Canary Islands). It is designed for ultra-high stability (10–20 cm s$^{-1}$), and like HARPS it will be placed in a vacuum chamber with careful temperature control.
One of the key factors that determine the precision of the RVs is the wavelength reference. Existing technologies in the optical (such as the Th-Ar technique and iodine gas absorption cell) have already reached sub-m s$^{-1}$ precision in some cases, but further improvements are needed if the Doppler method is to reach cm s$^{-1}$, as is needed to detect terrestrial-mass planets. A new technology that has emerged in the last few years and that holds great promise for providing a very stable reference is that of laser “frequency combs”. As the name suggests, a frequency comb generated from mode-locked femtosecond-pulsed lasers provides a spectrum of very narrow emission lines with a constant frequency separation given by the pulse repetition frequency, typically 1 GHz for this application. This frequency can be synchronized with an extremely precise reference such as an atomic clock. For example, using the generally available Global Positioning System (GPS), the frequencies of comb lines have long-term fractional stability and accuracy of better than $10^{-12}$. This is more than enough to measure velocity variations at a photon-limited precision level of 1 cm s$^{-1}$ in astronomical objects (see, e.g., [@Murphy:07]). This direct link with GPS as the reference allows the comparison of measurements not only between different instruments, but potentially also over long periods of time. To provide lines with separations that are well matched to the resolving powers of commonly used echelle spectrographs, a recent improvement incorporates a Fabry-Pérot filtering cavity that increases the comb line spacing to $\sim$40 GHz over a range greater than 1000 Å ([@Li:08]). Prototypes using a titanium-doped sapphire solid-state laser have been built that provide a reference centred around 8500 Å.
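The quoted numbers are mutually consistent: a Doppler shift obeys $\Delta v/c = \Delta f/f$, so a fractional frequency stability of $10^{-12}$ corresponds to a contributed velocity uncertainty of roughly 0.03 cm s$^{-1}$, comfortably below the 1 cm s$^{-1}$ photon-limited target. A back-of-the-envelope check:

```python
# Doppler relation: delta_v / c = delta_f / f, so the comb's fractional
# frequency stability bounds the velocity error it contributes.
c = 299_792_458.0            # speed of light, m/s
frac_stability = 1e-12       # GPS-disciplined comb stability, from the text

v_err = c * frac_stability   # ~3.0e-4 m/s, i.e. ~0.03 cm/s
target = 0.01                # 1 cm/s photon-limited goal, in m/s
```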
In practice, of course, Doppler measurements are also affected by other instrumental problems, so that the value of this new technology for highly precise RV measurements is still to be demonstrated. Tests have been initiated during this triennium. Plans call also for the installation of a laser comb on the HARPS-NEF spectrograph described above. Applications of this technique are not limited to stars. For example, a direct measurement of the expansion of the Universe could be made by observing [*in real time*]{} the evolution of the cosmological redshift of distant objects such as quasars. Such a measurement would require a precision in determining Doppler drifts of $\sim$1 cm s$^{-1}$ per year (see, e.g., [@Steinmetz:08]), which a laser comb can in principle deliver.
Radial velocities and asteroseismology
======================================
By G. Torres
The significant increase in the precision of velocity measurements over the past few years, driven by exoplanet searches, has enabled important studies of the internal constitution of stars through the technique of asteroseismology. A number of spectrographs now reach the precision needed for this type of investigation. During this period [@Bedding:06] observed the metal-poor subgiant star $\nu$ Ind with the UCLES instrument on the 3.9-m Anglo-Australian Telescope, and with the CORALIE spectrograph on the 1.2-m Swiss telescope at ESO. The precision of those measurements ranged from 5.9 to 9.5 m s$^{-1}$, and allowed the authors to place constraints on the stellar parameters confirming that the star has a low mass and an old age. This was the first application of asteroseismology to a metal-poor star. $\alpha$ Cen A was observed by [@Bazot:07] with the HARPS spectrometer on the ESO 3.6-m telescope, and 34 $p$ modes were identified in the acoustic oscillation spectrum of the star. Individual observations had errors well under 1 m s$^{-1}$. A similar study by [@Mosser:08a] was conducted on Procyon ($\alpha$ CMi) using the SOPHIE spectrograph on the 1.9-m telescope at the Haute-Provence Observatory, yielding a precision of about 2 m s$^{-1}$. The HARPS instrument was used again by [@Mosser:08b] to study the old Galactic disk, low-metallicity star HD 203608. A total of 15 oscillation modes were identified, and the age of the star was determined to be $7.25 \pm 0.07$ Gyr.
Radial velocities in Galactic and extragalactic clusters
========================================================
By E. V. Glushkova, H. Levato, and G. Torres
Searches for spectroscopic binaries in southern open clusters have continued during this period (e.g., [@Gonzalez:06]). These authors have reported results for the open cluster Blanco 1. Forty-four stars previously mentioned in the literature as cluster candidates, plus an additional 25 stars in a wider region around the cluster, were observed repeatedly during 5 years. Six new spectroscopic binaries have been detected and their orbits determined. All of them are single-lined spectroscopic systems with periods ranging from 1.9 to 1572 days. When suspected binaries are also considered, the spectroscopic binary frequency in this cluster amounts to 34%. Additional velocities were measured in this cluster by [@Mermilliod:08a], who obtained a rather similar binary frequency.
Results from long term radial velocity studies based on the CORAVEL spectrometers have been presented during this period for the open clusters NGC 6192, 6208, and 6268 ([@Claria:06]), as well as for NGC 2112, 2204, 2243, 2420, 2506, and 2682 ([@Mermilliod:07a]). These studies were complemented with photometric observations in a variety of systems, and included membership determination and binary studies. A number of new spectroscopic binaries were discovered, and their orbital elements were determined.
Other individual cluster studies in the Milky Way, which we merely reference here without giving the details due to space limitations, include: IC 2391 ([@Platais:07]), NGC 2489 ([@Piatti:07]), $\alpha$ Per ([@Mermilliod:08b]), the five distant open clusters Ru 4, Ru 7, Be 25, Be 73 and Be 75 ([@Carraro:07]), Tombaugh 2 ([@Frinchaboy:08a]), the Orion Nebula cluster ([@Furesz:08]), the most massive Milky Way open cluster Westerlund 1 ([@Mengel:08]), the Galactic centre star cluster ([@Trippe:08]), and the globular clusters M4 ([@Sommariva:08]) and $\omega$ Cen ([@DaCosta:08]).
This triennium saw the publication of the final results of the 20-year efforts of J.-C. Mermilliod and colleagues to measure the radial velocities of giant stars in open clusters for a variety of studies related to their kinematics, membership, and photometric and spectroscopic properties. A catalogue of spectroscopic orbits for 156 binaries based on more than 4000 individual velocities was published by [@Mermilliod:07b], based on measurements from CORAVEL and the CfA Digital Speedometers. Orbital periods range from 41 days to more than 40 years, and eccentricities are as high as $e = 0.81$. Another 133 spectroscopic binaries were discovered but do not have sufficient observations and/or time coverage to determine orbital elements. This material provides a dramatic increase in the body of homogeneous orbital data available for red-giant spectroscopic binaries in open clusters, and should form the basis for a comprehensive discussion of membership, kinematics, and stellar and tidal evolution in the parent clusters. A companion catalogue ([@Mermilliod:08c]) reports mean radial velocities for 1309 red giants in clusters based on $10,\!517$ individual measurements, and mean radial velocities for 166 open clusters among which 57 are new. This information, combined with recent absolute proper motions, will permit a number of investigations of the galactic distribution and space motions of a large sample of open clusters.
[@Frinchaboy:08b] reported on a survey of the chemical and dynamical properties of the Milky Way disk as traced by open star clusters. They used medium-resolution spectroscopy ($R \approx
15,\!000$) with the Hydra multi-object spectrographs on the Cerro Tololo Inter-American Observatory 4-m and WIYN 3.5-m telescopes to derive moderately high-precision RVs ($\sigma < 3$ km s$^{-1}$) for 3436 stars in the fields of 71 open clusters within 3 kpc of the Sun. Along with the work described in the preceding paragraph, these represent the largest samples of clusters assembled thus far having uniformly determined, high-precision radial velocities.
A good deal of activity focused on kinematic analyses of globular cluster (GC) systems in other galaxies. [@Lee:08a] measured radial velocities for 748 GC candidates in M31, and [@Lee:08b] obtained radial velocities of 111 objects in the field of M60. [@Konstantopoulos:08] obtained new spectroscopic observations of the stellar cluster population of region B in the prototype starburst galaxy M82. [@Schuberth:06] presented the first dynamical study of the GC system of NGC 4636 based on radial velocities for 174 clusters. [@Bridges:06] measured radial velocities of 38 GCs in the Virgo elliptical galaxy M60, and [@Bridges:07] obtained new velocities for 62 GCs in M104.
An interesting problem was discussed by [@Abt:08], pointing to a possible bias in the RVs of many B-type stars. The author looked at 10 open clusters younger than about 30 million years with sufficient numbers of measured radial velocities, many of them being measured with CORAVEL, and found that in each case, the main-sequence B0–B3 stars have larger velocities than earlier- or later-type stars.
Radial velocities for field giants
==================================
By G. Torres
A programme to measure precise radial velocities for 179 giant stars has been ongoing at the Lick Observatory, with individual errors of 5–8 m s$^{-1}$ per measurement ([@Hekker:06]). This study presented a list of 34 stable K giants (with RV standard deviations under 10 m s$^{-1}$) suitable to serve as reference stars for NASA’s Space Interferometry Mission. A follow-up paper ([@Hekker:08]) reported that 80% of the stars monitored show velocity variations at a level greater than 20 m s$^{-1}$, of which 43 exhibit significant periodicities. One of the goals was to investigate possible mechanisms that cause these variations. A complex relation was found between the amplitude of the variations and the surface gravity of the star: part of the variation is periodic and uncorrelated with $\log g$, while another component is random and does correlate with surface gravity.
[@Massarotti:08] reported radial velocities obtained with the CfA Digital Speedometers for a sample of 761 giant stars, selected from the Hipparcos Catalogue to lie within 100 pc. Rotational velocities and other spectroscopic parameters were determined as well. Orbital elements were presented for 35 single-lined spectroscopic binaries and 12 double-lined binaries. These systems were used to investigate stellar rotation in field giants to look for evidence of excess rotation that could be attributed to planets that were engulfed as the parent stars expanded.
Galactic structure – Large surveys
==================================
By B. Nordström and G. Torres
The Geneva-Copenhagen Survey
----------------------------
During the previous 3-year period one of the major surveys completed and published was the Geneva-Copenhagen Survey of the Solar Neighbourhood ([@Nordstrom:04]). Unfortunately the full description of this project and the important new science results that came out of it did not make it into the printed version of the Transactions of the IAU for 2003–2006, so we summarize and update that information here, given its significant impact on the study of Galactic structure. This survey provided accurate, multi-epoch radial velocities for a magnitude-complete, all-sky sample of $14,\!000$ F and G dwarfs down to a brightness limit of $V = 8.5$, and is volume complete to about 40 pc. The catalogue includes new mean radial velocities for $13,\!464$ stars with typical mean errors of 0.25 [kms$^{-1}$]{}, based on $63,\!000$ individual observations made mostly with the CORAVEL photoelectric cross-correlation spectrometers covering both hemispheres. Studies of this rich data set have found evidence for dynamical substructures that are probably due to dynamical perturbations induced by spiral arms and perhaps the Galactic bar. These “dynamical streams” ([@Famaey:05]) contain stars of different ages and metallicities which do not seem to have a common origin. These features, which dominate the observed $U$,$V$,$W$ diagrams, make the conventional two-Gaussian decomposition of nearby stars into thin and thick disk members a highly dubious procedure. An analysis by [@Helmi:06] suggests that tidal debris from merged satellite galaxies may be found even in the solar neighbourhood.
A new release of this large catalogue with updated calibrations as well as new age and metallicity determinations was published during the present triennium by [@Holmberg:07], and is available from the CDS at . A follow-up paper and catalogue are expected to be available shortly, containing new kinematic data ($UVW$ velocities) resulting from a re-analysis using the revised [*Hipparcos*]{} parallaxes ([@vanLeeuwen:07]), and online updates.
Sloan Digital Sky Survey
------------------------
This period saw the sixth data release of the Sloan Digital Sky Survey ([@Adelman:08]), which now covers an area of 9583 square degrees on the sky. This release includes nearly 1.1 million spectra of galaxies, quasars, and stars with sufficient signal to be usable, along with redshift determinations, as well as effective temperature, surface gravity, and metallicity determinations for many stars. The spectra cover the wavelength region 3800–9200 Å at a resolving power ranging from 1850 to 2200. Velocity precisions range from about 9 km s$^{-1}$ for A and F stars to about 5 km s$^{-1}$ for K stars. The zero point of the velocities is in the process of being calibrated using spectra from the ELODIE spectrograph. These data are a valuable resource for a variety of investigations related to Galactic structure and the evolution and history of the Milky Way.
RAVE
----
The second data release of the Radial Velocity Experiment (RAVE) was published during this triennium ([@Zwitter:08]). This is an ambitious spectroscopic survey to measure radial velocities as well as stellar atmosphere parameters (effective temperature, metallicity, surface gravity, rotational velocity) of up to one million stars using the 6dF multi-object spectrograph on the 1.2-m UK Schmidt telescope of the Anglo-Australian Observatory. The RAVE programme started in 2003, obtaining medium resolution spectra (median $R = 7500$) in the Ca II triplet region (8410–8795 Å) for southern hemisphere stars drawn from the Tycho-2 and SuperCOSMOS catalogues, in the magnitude range $9 <
I < 12$. Following the first data release, the current release doubles the sample of published radial velocities, now reaching $51,\!829$ measurements for $49,\!327$ individual stars observed between 2003 and 2005. Comparison with external data sets indicates that the new data collected since April 2004 show a standard deviation of 1.3 km s$^{-1}$, about twice as good as for the first data release. For the first time, this data release contains values of stellar parameters from $22,\!407$ spectra of $21,\!121$ individual stars. The data release includes proper motions from the STARNET 2.0, Tycho-2, and UCAC2 catalogues, and photometric measurements from Tycho-2, USNO-B, DENIS, and 2MASS. The data can be accessed via the RAVE web site at . Scientific uses of these data include the identification and study of the current structure of the Galaxy and of remnants of its formation, recent accretion events, as well as the discovery of individual peculiar objects and spectroscopic binary stars. For example, kinematic information derived from the RAVE data set has been used by [@Smith:07] to constrain the Galactic escape velocity at the solar radius to $V_{\rm esc} = 536^{+58}_{-44}$ km s$^{-1}$ (90% confidence).
Working groups
==============
Below are the progress reports of the three active working groups of Commission 30. Their efforts are focused on providing a service to the astronomical community at large through the compilation of a variety of information related to radial velocities.
WG on radial velocity standard stars
------------------------------------
By S. Udry
Large radial-velocity surveys are being conducted to search for extrasolar planets around different types of stars, including A to M dwarfs, and G–K giants (e.g., [@Udry:07b]). Although not aiming at establishing a set of radial-velocity standard stars, the non-variable stars in these programmes, followed over a long period of time, provide ideal candidates for our list of standards. They will moreover broaden the domain of stellar properties covered (brightness and spectral type). At this point, the results of most of those programmes are still not publicly available and we must still wait a bit in order to fine-tune and enlarge the list presently available at . In addition to the by-product aspect of planet search programmes, a targeted observational effort, dedicated to the definition of a large sample of RV standards for GAIA, is being pursued with several instruments (CORALIE, SOPHIE, etc). It will provide in a few years a list of several thousand suitable standards spread over the entire sky ([@Crifo:07]).
For all of the efforts above, work remains to be done to combine the data from the different instruments into a common RV system, for example through the observation of minor planets in the solar system ([@Zwitter:07]). This has still to be done for most of the planet search programmes, but is already included in the GAIA effort.
WG on stellar radial velocity bibliography
------------------------------------------
By H. Levato
During the 2006–2009 triennium, the WG searched 33 journals for papers with measurements of radial velocities of stars. As of December 2007, $113,\!658$ entries have been catalogued. We expect to finish 2008 with more than $150,\!000$. It is worth mentioning that at the end of 1996 there were $23,\!358$ entries recorded, so that in 10 years the number of entries in the catalogue has expanded by a factor of five. During the triennium we have improved the search engine to allow searches by different parameters. In the main body of the catalogue we have included information about the technical characteristics of the instrumentation used for radial velocity measurements, and comments about the nature of the objects. The catalogue can be accessed at .
WG on the catalogue of orbital elements of spectroscopic binaries (SB9)
-----------------------------------------------------------------------
By D. Pourbaix
In Manchester, a WG was set up to work on the implementation of the 9th catalogue of orbits of spectroscopic binaries (SB9), superseding the 8th release of [@Batten:89] (SB8). SB9 exists in electronic format only. The web site was officially released during the summer of 2001. This site is directly accessible from the Commission 26 web site, from BDB (in Besançon), and from the CDS, among others.
Since the last report, substantial progress has been accomplished, in particular in the way complex systems can be uploaded together with their radial velocities. That is the case, for instance, for triple stars with the light time effect accounted for and systems with a pulsating primary.
At the time of this writing SB9 contains 2802 systems (SB8 had 1469) and 3340 orbits (1469 in SB8). A total of 563 papers were added since August 2000, although most of them come from [*outside*]{} the WG. Many papers with orbits still await uploading into the catalogue. According to ADS, the release paper ([@Pourbaix:04]) has been cited a total of 58 times since 2005. This is twice as many as the old Batten et al. catalogue over the same period.
Even though this work has been very well received by the community and a number of tools have been designed and implemented to make the job of entering new orbits easier (input file checker, plot generator, etc.), the WG still suffers from a serious lack of manpower. Few colleagues outside the WG spontaneously send their orbits (but they are usually pleased to send their data when we ask for them). Any help (from authors, journal editors, and others) is therefore very welcome. Uploading an orbit into SB9 also involves checking for typos. In this way we have found several mistakes in published solutions, which we have corrected. Sending orbits to SB9 prior to publication (e.g., at the proof stage) would therefore be a way to prevent some mistakes from making it into the literature.
[Guillermo Torres]{}
Abt, H. 2008, *PASP*, 120, 715
Adelman-McCarthy, J. K., Agüeros, M. A., Allam, S. S. et al. 2008, *ApJS*, 175, 297
Batten, A. H., Fletcher, J. M., & MacCarthy, D. G. 1989, Eighth catalogue of the orbital elements of spectroscopic binary systems, *Publ. Dom. Astr. Obs.*, 17, 1
Bazot, M., Bouchy, F., Kjeldsen, J., Charpinet, S., Laymand, M., & Vauclair, S. 2007, *A&A*, 470, 295
Bedding, T. R., Butler, R. P., Carrier, F. et al. 2006, *ApJ*, 647, 558
Bridges, T., Gebhardt, K., Sharples, R. et al. 2006, *MNRAS*, 373, 157
Bridges, T., Rhode, K. L., Zepf, S. E., & Freeman, K. C. 2007, *ApJ*, 658, 980
Carraro, G., Geisler, D., Villanova, S. et al. 2007, *A&A*, 476, 217
Claria, J. J., Mermilliod, J.-C., Piatti, A. E., & Parisi, M. C. 2006, *A&A*, 453, 91
Crifo, F., Jasniewicz, G., Soubiran, C. et al. 2007, in *Towards a new set of radial velocity standards for GAIA*, eds. J. Bouvier, A. Chalabaev, & C. Charbonnel, Proceedings of the Annual meeting of the French Society of Astronomy and Astrophysics, Grenoble (France), p. 459
Da Costa, G. S., & Matthew, C. G. 2008, *AJ*, 136, 506
Famaey, B., Jorissen, A., Luri, X. et al. 2005, *A&A*, 430, 165
Fischer, D. A., Marcy, G. W., Butler, R. P. et al. 2008, *ApJ*, 675, 790
Frinchaboy, P. M., Marino, A. F., Villanova, S. et al. 2008, *MNRAS* (in press), arXiv:0809.2559
Frinchaboy, P. M., & Majewski, S. R. 2008, *AJ*, 136, 118
Fürész, G., Hartmann, L. W., Megeath, S. T. et al. 2008, *ApJ*, 676, 1109
González, J. F., & Levato, H. 2006, *RMxAA*, 26, 171
Hekker, S., Reffert, S., Quirrenbach, A., Mitchell, D. S., Fischer, D. A., Marcy, G. W., & Butler, R. P. 2006, *A&A*, 454, 943
Hekker, S., Snellen, I. A. G., Aerts, C., Quirrenbach, A., Reffert, S., & Mitchell, D. S. 2008, *A&A*, 480, 215
Helmi, A., Navarro, J. F., Nordström, B. et al. 2006, *MNRAS* 365, 1309
Holmberg, J., Nordström, B., & Andersen, J. 2007, *A&A*, 475, 519
Johnson, J. A. et al. 2007a, *ApJ*, 670, 833
Johnson, J. A., Marcy, G. W., Fischer, D. A. et al. 2008, *ApJ*, 675, 784
Johnson, J. A., Butler, R. P., Marcy, G. W., Fischer, D. A., Vogt, S. S., Wright, J. T., & Peek, K. M. G. 2007b, *ApJ*, 665, 785
Karchenko, N. V., Scholz, R.-D., Piskunov, A. E. et al. 2007, *AN*, 328, 889
Konstantopoulos, I. S., Bastian, N., Smith, L. J. et al. 2008, *ApJ*, 674, 846
Lee, M. G., Hwang, Ho S., Kim, S. Ch. et al. 2008a, *ApJ*, 674, 886
Lee, M. G., Hwang, Ho S., Park, H. S. et al. 2008b, *ApJ*, 674, 857
Li, Ch.-H., Benedick, A. J., Fendel, P. et al. 2008, *Nature*, 452, 610
Lovis, C., & Pepe, F. 2007, *A&A*, 468, 1115
Massarotti, A., Latham, D. W., Stefanik, R. P., & Fogel, J. 2008, *AJ*, 135, 209
Mayor, M., & Udry, S. 2008, *Phys. Scr.*, 130 (in press)
Mayor, M., Udry, S., Lovis, C. et al. 2008, *A&A* (in press), arXiv:0806.4587
Mengel, S., & Tacconi-Garman, L. E. 2008, in *Young massive star clusters - Initial conditions and environments*, Granada, Spain (in press), arXiv:0803.4471
Mermilliod, J.-C., Platais, I., James, D. J., Grenon, M., & Cargile, P. A. 2008a, *A&A*, 485, 95
Mermilliod, J.-C., & Mayor, M. 2007, *A&A*, 470, 919
Mermilliod, J.-C., Andersen, J., Latham, D. W., & Mayor, M. 2007, *A&A*, 473, 829
Mermilliod, J.-C., Queloz, D., & Mayor, M. 2008b, *A&A*, 488, 409
Mermilliod, J.-C., Mayor, M., & Udry, S. 2008c, *A&A*, 485, 303
Mosser, B., Bouchy, F., Martić, M. et al. 2008a, *A&A*, 478, 197
Mosser, B., Deheuvels, S., Michel, E. et al. 2008b, *A&A*, 488, 635
Murphy, M. T., Udem, Th., Holzwarth, R. et al. 2007, *MNRAS*, 380, 839
Nordström, B., Mayor, M., Andersen, J. et al. 2004, *A&A*, 418, 989
Pepe, F., Correia, A. C. M., Mayor, M. et al. 2007, *A&A*, 462, 769
Pepe, F. A., & Lovis, C. 2007, in *Physics of Planetary Systems, Nobel Symposium 135*, in press
Piatti, A., Clariá, J. J., Mermilliod, J.-C., Parisi, M. C., & Ahumada, A. V. 2007, *MNRAS*, 377, 1737
Platais, I., Melo, C., Mermilliod, J.-C. et al. 2007, *A&A*, 461, 509
Pourbaix, D., Tokovinin, A. A., Batten, A. H. et al. 2004, *A&A*, 424, 727
Ramsey, L. W., Barnes, J., Redman, S. L., Jones, H. R. A., Wolszczan, A., Bongiorno, S., Engel, L., & Jenkins, J. 2008, *PASP*, 120, 887
Schuberth, Y., Richtler, T., Dirsch, B. et al. 2006, *A&A*, 459, 391
Smith, M. C., Ruchti, G. R., Helmi, A. et al. 2007, *MNRAS*, 379, 755
Sommariva, V., Piotto, G., Rejkuba, M. et al. 2008, *A&A* (in press), arXiv:0810.1897
Steinmetz, T., Wilken, T., Araujo-Hauck, C. et al. 2008, *Science*, 321, 1335
Trippe, S., Gillessen, S., Gerhard, O. E. et al. 2008, *A&A* (in press), arXiv:0810.1040
Udry, S., & Santos, N. C. 2007, *ARA&A*, 45, 397
Udry, S., Bonfils, X., Delfosse, X. et al. 2007, *A&A*, 469, L43
van Leeuwen, F. 2007, *A&A*, 474, 653
Zwitter, T., Mignard, F., & Crifo, F. 2007, *A&A*, 462, 795
Zwitter, T., Siebert, A., Munari, U. et al. 2008, *AJ*, 136, 421
---
abstract: 'We present an analysis of the mass distribution inferred from strong lensing by [SPT-CLJ0356$-$5337]{}, a cluster of galaxies at redshift $z={1.0359}$ revealed in the follow-up of the SPT-SZ clusters. The cluster has an Einstein radius of [$\theta_{E}\simeq 14\arcsec$]{} for a source at $z=3$ and a mass within 500 kpc of [M$_{500~{\rm kpc}}=4.0 \pm 0.8\times10^{14}\, $[M$_{\odot}$]{}]{}. Our spectroscopic identification of three multiply-imaged systems ($z={2.363}$, $z={2.364}$, and $z={3.048}$), combined with [[*HST*]{}]{} F606W-band imaging, allows us to build a strong lensing model for this cluster with an rms of $\leq0\farcs3$ between the predicted and measured positions of the multiple images. Our modeling reveals a two-component mass distribution in the cluster. One mass component is dominated by the brightest cluster galaxy and the other component, separated by $\sim$170 kpc, contains a group of eight red elliptical galaxies confined in a $\sim9\arcsec$ ($\sim$70 kpc) diameter circle. We estimate the mass ratio between the two components to be between 1:1.25 and 1:1.58. In addition, spectroscopic data reveal that these two near-equal mass cores have only a small velocity difference of $\sim300$ [km.s$^{-1}$]{}. This small radial velocity difference suggests that most of the relative motion takes place in the plane of the sky, and implies that [SPT-CLJ0356$-$5337]{} is a major merger with a small impact parameter seen face-on. We also assess the relative contributions of galaxy-scale halos to the overall mass of the core of the cluster and find that within 800 kpc from the brightest cluster galaxy about 27% of the total mass can be attributed to visible and dark matter associated with galaxies, whereas the remaining 73% of the total mass in the core comes from cluster-scale dark matter halos.'
author:
- Guillaume Mahler
- Keren Sharon
- 'Michael D. Gladders'
- Lindsey Bleem
- 'Matthew B. Bayliss'
- 'Michael S. Calzadilla'
- Benjamin Floyd
- Gourav Khullar
- Michael McDonald
- 'Juan D. Remolina González'
- Tim Schrabback
- 'Antony A. Stark'
- Jan Luca van den Busch
bibliography:
- 'biblio\_SPT0356.bib'
title: 'Strong Lensing Model of [SPT-CLJ0356$-$5337]{}, a Major Merger Candidate at Redshift [1.0359]{}'
---
Introduction {#sec:intro}
============
Embedded in the largest gravitationally-bound dark matter halos in the cosmic web, clusters of galaxies are excellent probes of the high-mass end of large scale structure formation. Models of hierarchical growth predict that the majority of the mass of a cluster halo accumulates through multiple minor-merger events, in which small galaxy-scale or group-scale halos fall into the cluster core. Major mergers (with mass ratios of 1:3 or larger) are rarer; statistically, a typical cluster-scale halo of M$_{200}\sim10^{14}$[M$_{\odot}$]{} at $z=0$ will have undergone one major-merger event throughout its evolution [@Fakhouri2008]. Here, M$_{\delta}$ denotes the mass within the radius R$_{\delta}$ inside which the mean density reaches ${\delta}$ times the critical density of the universe at that redshift. Major mergers are also uniquely useful for studying the nature of dark matter (DM). For example, the separation between DM and gas in the Bullet Cluster (1E 0657$-$558) provides empirical evidence that favors cold dark matter over theories of modified gravity [@Clowe2006]. Analyses of mergers can also constrain the DM self-interaction cross-section (e.g. @Markevitch2004 [@Harvey2015]) and the large-scale matter-antimatter ratio [@Steigman2008].
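The overdensity convention above (M$_{\delta}$ as the mass inside R$_{\delta}$ where the mean density is $\delta$ times critical) reduces to simple arithmetic. The following is an illustrative sketch only, not part of the analysis; the function names are ours, and the example radius is invented, while the cosmological parameters match those adopted later in this paper:

```python
import math

# Physical constants (SI)
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22          # one megaparsec in meters
M_SUN = 1.989e30         # solar mass in kg

# Lambda-CDM parameters adopted in this paper
H0 = 70.0 * 1e3 / MPC    # 70 km/s/Mpc expressed in s^-1
OMEGA_M, OMEGA_L = 0.3, 0.7

def rho_crit(z):
    """Critical density of the universe at redshift z, in kg m^-3."""
    hz2 = H0**2 * (OMEGA_M * (1 + z)**3 + OMEGA_L)
    return 3.0 * hz2 / (8.0 * math.pi * G)

def m_delta(r_delta_mpc, delta, z):
    """Mass (in M_sun) inside R_delta, where the mean enclosed density
    equals delta times the critical density at redshift z."""
    r = r_delta_mpc * MPC
    return (4.0 / 3.0) * math.pi * r**3 * delta * rho_crit(z) / M_SUN

# Example: a halo with a hypothetical R_500 = 0.8 Mpc at z = 1.0359
print(f"M_500 = {m_delta(0.8, 500, 1.0359):.2e} M_sun")
```

Inverting the same relation gives R$_{\delta}$ from a measured M$_{\delta}$, which is how quoted masses such as M$_{500c}$ map onto physical apertures.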
Structure growth and mergers are studied in simulations and observed up to $z\sim2$. @McDonald2017 consider the density profiles of clusters out to redshift $\sim1.7$ that have been re-scaled to their R$_{500}$ radius, taking into account the critical density at each epoch, and find that outside the cluster cores the profiles are remarkably similar. Cluster cores deviate from this self-similarity; the complexity of cluster cores can be well probed with a multi-wavelength/multi-scale approach, in particular, by including strong lensing analysis. Indeed, the angular extent of strong lensing features in cluster fields, from a few to a hundred arcseconds, corresponds to the scale of the cluster core – a few to hundreds of kpc in projection.
Since the prototypical Bullet Cluster [@Markevitch2002] was first identified, a small number of other clusters with structure indicative of major mergers have been observed, and showed spatial dissociation of gas, DM, and galaxies. Most of these systems are at low redshifts. Notable higher redshift systems are “El Gordo” (ACT-CL J0102-4915) at $z$=0.870 [@Marriage2011; @Menanteau2012; @Zitrin2015; @Cerny2018], and the structure of CLJ0152-1347 at $z$=0.830 [@Massardi2010], a complex system with two main subclusters separated by 722 kpc, complicated by at least one further merging subgroup.
Only a subset of mergers enable investigation of the full range of phenomena seen in bullet-like mergers, targeting the nature of DM [@Dawson2012]: such mergers are those that (1) occur between two subclusters of comparable mass, (2) have a small impact parameter, (3) are observed during the short period when the cluster gas is significantly offset from the galaxies and DM, and (4) occur mostly transverse to the line of sight such that the apparent angular separation of the cluster gas from the galaxies and DM is maximized.
In this paper we confirm the identification of [SPT-CLJ0356$-$5337]{} (hereafter [SPT$-$0356]{}) as a strong lensing cluster, report on spectroscopic measurements of redshifts of three lensed galaxies behind the cluster, and present the first strong lensing model of the cluster core. We argue that the observed properties of the cluster, combined with the strong lensing mass model, promotes [SPT$-$0356]{} as the highest-redshift major-merger cluster candidate.
At $z={1.0359}$, [SPT$-$0356]{} is one of the most distant clusters known with spectroscopically-confirmed strong lensing evidence at the cluster scale from multiple systems. The lensing geometry offers a unique opportunity to weigh the mass within the core of the cluster. Strong lensing clusters at redshifts $z\geq1$ include SPT-CLJ2011$-$5228 at $z = 1.06$ [@Collett2017], which has only one multiply-imaged system; SPT-CLJ0546$-$5345 at $z=1.066$ [@Brodwin2010], which shows evidence of strong lensing features [@Staniszewski2009]; and SPT-CLJ0205$-$5829 [@Stalder2013; @Bleem2015] at $z = 1.322$, which has one arc with no published redshift. [@Wong2014] also report on a lensed galaxy with spectroscopic redshifts behind a cluster at $z=1.62$. However, the lensing signal comes essentially from the brightest cluster galaxy (BCG) embedded in the cluster, thus this lens offers little or no leverage on the cluster-scale mass distribution. The highest redshift cluster with strong lensing evidence currently published is IDCSJ1426 at $z=1.75$ [@Gonzalez2012], but there is no public spectroscopic redshift measurement for the only giant arc reported.
This paper is organized as follows: In section 2, we report on the identification and previous analyses of [SPT$-$0356]{}. In section 3, we describe the data that are used in this paper. In section 4, we define the cluster-member selection, which is an important input to the strong lens modeling described and analyzed in section 5. In section 6, we discuss our results and we summarize this work in section 7.
Throughout this paper, we adopt a standard $\Lambda$-CDM cosmology with $\Omega_m =0.3$, $\Omega_{\Lambda}=0.7$, and $h=0.7$. All magnitudes are given in the AB system [@Oke1974].
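In this cosmology, angular separations in the lens plane map to projected physical distances through the angular diameter distance. As a minimal flat-$\Lambda$CDM sketch (our own helper functions, simple trapezoidal integration; not the code used in the analysis), the arcsecond-to-kpc scale at the cluster redshift can be computed as:

```python
import math

C_KMS = 299792.458        # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7

def _inv_E(z):
    """1/E(z) for a flat Lambda-CDM cosmology."""
    return 1.0 / math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)

def angular_diameter_distance(z, n=10000):
    """Angular diameter distance in Mpc (trapezoidal rule over 1/E)."""
    h = z / n
    s = 0.5 * (_inv_E(0.0) + _inv_E(z)) + sum(_inv_E(i * h) for i in range(1, n))
    d_c = (C_KMS / H0) * h * s          # comoving distance, Mpc (flat universe)
    return d_c / (1 + z)

def kpc_per_arcsec(z):
    """Projected physical scale, kpc per arcsecond, at redshift z."""
    arcsec_in_rad = math.pi / (180.0 * 3600.0)
    return angular_diameter_distance(z) * 1000.0 * arcsec_in_rad

# At the cluster redshift, ~9 arcsec corresponds to roughly 70 kpc,
# consistent with the scale quoted for the galaxy-group component.
scale = kpc_per_arcsec(1.0359)
print(f"{scale:.2f} kpc/arcsec -> 9 arcsec = {9 * scale:.0f} kpc")
```

In practice a library routine (e.g. an astropy cosmology object) would replace the hand-rolled integral; the sketch only makes the unit conversion explicit.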
[SPT-CLJ0356$-$5337]{} {#sec:cluster}
======================
@Bleem2015 first identified and published [SPT$-$0356]{} as a strong-lensing cluster, as part of a catalog of galaxy clusters selected from South Pole Telescope (SPT) data based on the Sunyaev-Zel’dovich effect (SZ; @Sunyaev1970). @Bocquet2019 published an updated mass for this system of [M$_{500 c}=3.59^{+0.59}_{-0.66} \times10^{14}h^{-1}_{70}$ [M$_{\odot}$]{}]{} assuming the same fixed $\Lambda$-CDM cosmology we adopt in this work. @Bayliss2016 used the Gemini Multi-Object Spectrograph (GMOS) on the Gemini South Observatory in Chile to measure spectroscopic redshifts of 36 galaxies in this field, eight of which were spectroscopically identified as cluster members, including the BCG (see Figure \[fig:cluster\]). From these eight cluster members with GMOS spectra, @Bayliss2016 determine a median cluster redshift of $z=1.0345\pm0.0112$, with a velocity dispersion $\sigma_{v} = 1691\pm588$ [km.s$^{-1}$]{}. In a reanalysis of these data, @Bayliss2017 report a revised median redshift of $z=1.0359\pm0.0042$ and $\sigma_{v} = 1647\pm514$ [km.s$^{-1}$]{}, based on four of the eight galaxies whose spectral features indicate that they are either passive or post-starburst, so that their velocities are likely less sensitive to recent accretion. In this paper, we adopt as the cluster redshift the measurement of @Bayliss2017, $z=$ [1.0359]{}, used hereafter without uncertainties. We note that these measurements are consistent with each other, and the slight difference between these redshifts has no significant effect on our analysis or results.
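The conversion from member redshifts to rest-frame line-of-sight velocities that underlies such dispersion estimates is simple arithmetic, $v = c\,(z - z_{\rm cl})/(1 + z_{\rm cl})$. A sketch (the example member redshifts below are invented for illustration, not the actual measurements):

```python
C_KMS = 299792.458  # speed of light, km/s

def los_velocity(z, z_cluster):
    """Rest-frame line-of-sight velocity (km/s) of a galaxy at redshift z
    relative to a cluster at z_cluster."""
    return C_KMS * (z - z_cluster) / (1.0 + z_cluster)

z_cl = 1.0359
# Hypothetical member redshifts, for illustration only
members = [1.0296, 1.0341, 1.0359, 1.0388, 1.0412]
velocities = [los_velocity(z, z_cl) for z in members]
print([f"{v:+.0f}" for v in velocities])
```

At this redshift a $\sim$300 [km.s$^{-1}$]{} offset, like that quoted between the two mass components, corresponds to a redshift difference of only $\Delta z \approx 0.002$.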
Initial follow-up imaging of the SPT-SZ clusters led to the identification of strong lensing evidence in 23 clusters above $z>0.7$ [@Bleem2015]. Figure \[fig:SZcluster\] plots the mass and redshift of [SPT$-$0356]{} compared to the entire @Bleem2015 sample, with the strong lenses highlighted; [SPT$-$0356]{} is among the highest-redshift strong lenses in this sample. The lensing evidence, shown in Figure \[fig:cluster\], includes three sets of multiple images of background sources. Each system has three multiple images, all appearing West of the BCG. The high resolution of the [[*HST*]{}]{}/ACS single-band imaging (in F606W, shown in Figure \[fig:cluster\]) revealed substructure in each of the images, strongly suggesting that the three images of each system are indeed multiple images of the same source. This identification was also supported by the observed symmetry, which is consistent with expectations from the lensing geometry, and prompted follow-up spectroscopy. Here, we report nine lensed images of three distinct sources in the field of [SPT$-$0356]{}.
![Comparison of [SPT$-$0356]{} to other clusters in the SPT-SZ 2500 deg$^2$ from @Bleem2015. Clusters identified as strong lenses are labeled with red circles, and several well-studied “bullet” clusters (i.e. dissociative merger) are highlighted. [SPT$-$0356]{} is among the highest redshift strong lensing clusters in this sample.[]{data-label="fig:SZcluster"}](SZ-cluster.pdf)
Data {#sec:data}
====
Imaging {#sec:imaging}
-------
Optical imaging follow-up observations of [SPT$-$0356]{} were conducted with several telescopes and instruments:
#### Magellan
The cluster was first imaged with the Inamori-Magellan Areal Camera & Spectrograph (IMACS) on the Magellan Baade 6.5-m telescope as part of the SPT cluster confirmation efforts on 2012 Dec 16. Each IMACS observation covers a field of $13'\times27'$, observed for 400 s with each of the $g$, $r$, and $z$ filters.
The cluster was observed with the Magellan Clay 6.5-m telescope at Las Campanas Observatory, using the Parallel Imager for Southern Cosmology Observations (PISCO) instrument [@Stalder2014] as part of a uniform optical follow-up program on 2016 Dec 31. Each PISCO observation covers a $9.5'\times6'$ area on the sky centered on the cluster, observed in parallel in four different bands ($g$, $r$, $i$, and $z$) for an exposure time of 258 seconds.
#### Hubble Space Telescope
[[*HST*]{}]{} imaging of [SPT$-$0356]{} was obtained with the Advanced Camera for Survey (ACS) camera as part of the SPT-SZ ACS Snapshot Survey (Cycle 21, GO-13412; PI: Schrabback). A single image was obtained in F606W on 2014 June 25, with total exposure time of 2320 s. The ACS field of view covers a $3.3'\times3.3'$ area, centered on the SZ peak.
#### Gemini
Deep $i$-band and $g$-band images were obtained with the Gemini South Observatory 8.1-m telescope as part of the weak lensing follow-up of SPT-SZ clusters (PI: Benson) from the SPT-SZ ACS Snapshot Survey using the GMOS camera. We used a $2\times2$ binned stacked image reaching 5200 s exposure time and $1\farcs17$ seeing on the central chip covering the [[*HST*]{}]{}/ACS field-of-view. The data were obtained on 2014 Dec 30.
Figure \[fig:PISCO\] shows the cluster core field of view, rendered from the GMOS and [[*HST*]{}]{} imaging. These represent the deepest and highest resolution data in hand. We supplement these data with the shallower, lower resolution, $z$-band imaging from IMACS and PISCO, in order to achieve broad wavelength coverage, where color information is needed for assessing candidate strong lensing features.

Spectroscopy of Lensed Sources {#sec:spectroscopy}
------------------------------
#### Gemini/GMOS
The Gemini/GMOS-South spectroscopic survey of SPT-SZ clusters [@Bayliss2016] targeted [SPT$-$0356]{}, resulting in spectroscopic redshifts for the cluster and eight cluster member galaxies (Section \[sec:cluster\]). Slits were placed on at least one image of each of the lensed sources. These spectra resulted in a redshift limit of $1.78<z<3.9$ based on weak continuum and lack of spectroscopic features in the spectra within the wavelength coverage of the data, $\Delta\lambda = 5920-10350$ Å. We note that @Bayliss2016 provide a “best guess” redshift for image 2.2 (A.1 in their notation) $z=2.1955$, based on very weak spectroscopic features in emission (their Figure 9, panel c), but caution that these may be misidentified. The FIRE data, described below, rule out this solution.
#### Magellan/LDSS3
We obtained multislit spectroscopy of the lensed images using the Magellan Clay telescope with the Low Dispersion Survey Spectrograph (LDSS3-C) on 2018 Jan 9 (PI: Sharon). The observations were conducted under good conditions with sub-arcsecond seeing, and slits were placed on eight out of the nine lensed images. However, none of the observations resulted in a redshift measurement for the multiply-imaged systems, due to the wavelength coverage of the instrument and the absence of strong enough [Ly$\alpha$]{} emission.
#### Magellan/FIRE
Near-IR spectroscopy yielded robust spectroscopic redshifts for several objects of interest in the field. We observed multiple sources in the core of [SPT$-$0356]{} with the Folded-port Infrared Echellette (FIRE; @Simcoe2013) spectrograph at the Magellan-I Baade telescope. Observations took place on 2018 Jan 28-29 (PI: Gladders); the median seeing during the time of observation was $1\farcs0$, and the airmass ranged between 1.1 and 2.0.
In total, we observed five different positions in the field using the $1\farcs0 \times 6\farcs0$ slit, with FIRE in high resolution echelette mode. The slit was set at position angles chosen to allow a clean nod of $2\farcs0$ along the slit between neighboring science exposures, and two of the slit positions yielded traces from two sources of interest. With the $1\farcs0$ wide slit FIRE delivers spectra with a resolution of $R = 3600$ ($\sigma_v$ $=83$ [km.s$^{-1}$]{}) and covers a wavelength range of 0.82–2.5 $\mu$m in a single-object cross-dispersed setup [@Simcoe2008]. We reduced the data using the FIRE reduction pipeline (FIREHOSE)[^1]; our observations resulted in clear astrophysical emission lines in sky-subtracted 2D spectra for seven distinct sources observed across the five different slit positions, and no continuum emission detection. For emission line sources FIREHOSE allows manual identification of source traces using individual emission lines. The user-supplied line positions and trace location are combined with a trace model to extract object spectra by jointly fitting the source trace along with the 2-dimensional sky spectrum using the source-free regions along the slit. We also performed observations of A0V telluric standard stars during the night of our science observations and at similar airmass. The A0V spectra were used to calibrate the extracted science spectra [@Vacca2003] using the xtellcor procedure as a part of the spextool pipeline [@Cushing2004], which is called as a part of the FIREHOSE reduction process.
We measured cosmological redshifts for each source with an extracted FIRE spectrum by identifying families of nebular emission lines—H$\alpha$, H$\beta$, H$\gamma$, \[N II\] at $\lambda\lambda$6550,6585, \[O III\] at $\lambda\lambda$4960,5008, and \[O II\] at $\lambda\lambda$3727,3729—and fitting a Gaussian profile to each emission line. We estimated the mean redshift for each spectrum as the average of the individual line redshifts, and the uncertainty as the quadrature sum of the uncertainties in the individual line centroids from each Gaussian profile, the uncertainty in the wavelength solution (always highly sub-dominant), and the scatter in the measured redshifts of the individual emission lines. Individual source redshifts are labeled in Figure \[fig:cluster\] with solid circles, given in Table \[tab:FIREspeclines\], and the extracted emission line spectra of those sources are shown in the appendix Figures \[fig:jellyfish\], \[fig:FIREsys1\], \[fig:FIREsys2\], and \[fig:FIREsys3\].
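The averaging and error-propagation scheme described above can be sketched as follows. This is an illustrative reconstruction, not the pipeline code; the rest-frame wavelengths and the common centroid uncertainty are assumed placeholder values, and in practice each line has its own fitted centroid error.

```python
import numpy as np

# Approximate rest-frame wavelengths (Angstrom) of a few of the
# nebular lines used; placeholder values for illustration only.
REST_WAVELENGTHS = {
    "Hbeta": 4862.7,
    "[OIII]5008": 5008.2,
    "Halpha": 6564.6,
}

def redshift_from_lines(observed, rest=REST_WAVELENGTHS, centroid_err=0.5):
    """Mean redshift and uncertainty from a set of fitted line centroids.

    observed     : dict mapping line name -> observed centroid (Angstrom)
    centroid_err : 1-sigma centroid uncertainty (Angstrom), assumed equal
                   for all lines in this simplified sketch.
    """
    names = list(observed)
    z = np.array([observed[k] / rest[k] - 1.0 for k in names])
    z_mean = z.mean()
    # per-line redshift uncertainty from the centroid uncertainty
    sigma_z = np.array([centroid_err / rest[k] for k in names])
    # quadrature sum of centroid errors plus line-to-line scatter
    z_err = np.sqrt(np.sum(sigma_z**2) + z.std()**2)
    return z_mean, z_err
```

For a spectrum whose lines all sit at a common redshift, the mean recovers that redshift and the uncertainty reduces to the quadrature sum of the centroid terms.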
[c|cc]{}
system 1 & 2.363 & \[O III\][$\lambda\lambda$4960,5008]{}\
system 2 & 2.364 &\[O II\][$\lambda\lambda$3727,3729]{} \
& & \[O III\][$\lambda\lambda$4960,5008]{} \
& & [H$\alpha$]{}6563\
system 3 & 3.048 & \[O III\][$\lambda\lambda$4960,5008]{} \
& & [H$\gamma$]{}4340\
Selection of cluster members {#sec:members}
============================
We identified cluster-member galaxies by color, using the red sequence technique [@Gladders2000] from a GMOS $i$-band and [[*HST*]{}]{}/ACS F606W color-magnitude diagram (Figure \[fig:redsequence\]). To measure galaxy colors, we first aligned and re-sampled the GMOS $i$-band image to match the ACS pixel frame, and used Source Extractor [@SEx] in dual-image mode with the GMOS-$i$-band as the detection image. Magnitudes were measured as MAG\_AUTO[^2] within the $i$-band detection aperture in both images.
Stars and other artifacts were rejected from the catalog based on their location in a MU\_MAX vs MAG\_AUTO diagram in the [[*HST*]{}]{} photometry. The BCG and the other spectroscopically-confirmed galaxies [@Bayliss2016] were used to identify the red sequence locus in color-magnitude space. We include in the cluster-member catalog galaxies brighter than $i=25$ mag that lie within [$75\farcs6$]{} in projection from the BCG, where both the ACS and GMOS images have complete coverage. Galaxies outside this region are far enough from the center that they do not significantly affect the mass at the cluster core or the lensing configuration. Figure \[fig:redsequence\] shows the selection of cluster-member galaxies.
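The red-sequence selection can be sketched as a simple cut in color-magnitude space combined with the magnitude and projected-separation limits above. The linear red-sequence parameters and the catalog columns below are hypothetical placeholders; in practice the locus is fit to the BCG and the spectroscopically-confirmed members.

```python
import numpy as np

def select_members(i_mag, color, sep_arcsec,
                   rs_intercept, rs_slope, width=0.2,
                   mag_lim=25.0, max_sep=75.6):
    """Boolean mask of red-sequence cluster-member candidates.

    i_mag      : i-band MAG_AUTO
    color      : F606W - i color
    sep_arcsec : projected separation from the BCG (arcsec)

    The red sequence is modeled as a line in color-magnitude space,
    color = rs_intercept + rs_slope * i_mag, with half-width `width`.
    All of these parameters are illustrative placeholders.
    """
    on_sequence = np.abs(color - (rs_intercept + rs_slope * i_mag)) < width
    return on_sequence & (i_mag < mag_lim) & (sep_arcsec < max_sep)
```

A galaxy is kept only if it lies on the sequence, is brighter than the magnitude limit, and falls within the projected radius where both images have complete coverage.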
We constructed cluster-member catalogs both with and without convolving the [[*HST*]{}]{} image with the much larger GMOS point spread function (psf). We find that while the red sequence becomes more diffuse due to contamination from bright nearby objects, the overall selection of cluster members is not significantly affected. After examining the discrepancies, we conservatively choose to use the photometry based on the natural resolution of the [[*HST*]{}]{} image to reduce contamination. We note that only two faint galaxies (with $i>24$) near the cluster core are marginally near the color-magnitude cut, and would be selected by the psf-matched procedure. High-resolution near-IR data would be required to unambiguously determine their cluster membership. Our final cluster-member catalog contains 45 galaxies within [$75\farcs6$]{} of the BCG. The selected cluster-member galaxies are marked with yellow ellipses in Figure \[fig:PISCO\].
Lensing Analysis and Mass Models {#sec:lensing}
================================
Lens Modeling Methodology {#Methodology}
-------------------------
We compute a mass model of the core of [SPT$-$0356]{} from the strong lensing evidence, using the publicly available lensing algorithm [Lenstool]{} [@Jullo2007]. We refer the reader to [@Kneib1996], [@Smith2005], [@Verdugo2011] and [@Richard2011] for more details on the strong lens modeling approach used in this work. This section provides a short summary. We model the cluster mass distribution as a series of dual pseudo-isothermal ellipsoid (dPIE, @Eliasdottir2007) parametric mass halos, with seven free parameters: the position $\Delta\alpha$, $\Delta\delta$; ellipticity $\epsilon$; position angle $\theta$; normalization $\sigma_0$; truncation radius $r_{cut}$; and core radius $r_{core}$. We use as constraints the positions of prominent emission clumps in each lensed image, and the spectroscopic redshifts of the lensed sources (see Section \[sec:cstr\]). The algorithm uses a Monte Carlo Markov Chain (MCMC) formalism to explore the parameter space. It identifies the best fit as the set of parameters that minimizes the scatter between the observed and predicted image-plane positions of the lensed features.
The lens plane is modeled as a combination of cluster-scale and galaxy-scale dPIE halos. For the cluster-scale DM halos, we fix the truncation radius ($r_{cut}$) at 1500 kpc, since it is too far beyond the strong lensing regime to be constrained by the strong lensing evidence. The other parameters are generally allowed to be solved for by the lens model, unless otherwise indicated.
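For reference, the projected surface density of a circular dPIE halo can be sketched as below. This follows the standard circular form of the profile (a difference of two softened isothermal terms, truncated at $r_{cut}$); ellipticity is omitted, and the parameter values in the usage example are illustrative only.

```python
import math

G = 4.302e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def dpie_sigma(R, sigma0, r_core, r_cut):
    """Projected surface mass density (Msun/kpc^2) of a circular dPIE
    profile at projected radius R (kpc), for central velocity dispersion
    sigma0 (km/s), core radius r_core (kpc), and truncation radius
    r_cut (kpc).  Simplified circular sketch; the modeling itself uses
    the full elliptical potential."""
    pref = sigma0**2 / (2.0 * G) * r_cut / (r_cut - r_core)
    return pref * (1.0 / math.sqrt(r_core**2 + R**2)
                   - 1.0 / math.sqrt(r_cut**2 + R**2))
```

The profile is flat inside $r_{core}$, falls roughly as $1/R$ between the core and truncation radii, and drops steeply beyond $r_{cut}$, which is why a cluster-scale $r_{cut}$ of 1500 kpc is unconstrained by lensing evidence confined to the core.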
Galaxy-scale halos represent the contribution to the lensing potential from cluster member galaxies (Section \[sec:members\]). Their positional parameters ($\Delta\alpha$, $\Delta\delta$; $\epsilon$; $\theta$) are fixed to their observed values as measured with Source Extractor [@SEx]. To keep the number of model parameters manageable, the slope parameters of the galaxy-scale potentials are scaled to their observed $i$-band luminosity with respect to L$^*$, using a parametrized mass-luminosity scaling relation (see @Limousin2007 and discussion therein on the validity of such parametrization), leaving only $r_{cut}$ and the central velocity dispersion ($\sigma_{0}$) free to vary. The BCG is modeled separately, since we do not expect it to necessarily follow the same scaling relation (@Newman2 [@Newman1]).
Although likely a cluster member, a bright star-forming “jellyfish” galaxy [@Ebeling2014] that appears $7\farcs0$ east of the BCG is not included in the cluster-member catalog, as its brightness significantly deviates from the mass-luminosity relation of the passive cluster member galaxies. This galaxy is far enough from the multiply-imaged systems to not significantly affect the lensing configuration, and for the purpose of the lensing analysis it mainly contributes a small increase in the total mass, which is expected to be degenerate with the cluster scale DM clump. We discuss this galaxy in Appendix \[sec:jellyfish\].
As explained below in Section \[sec:dm\], we consider several lens models, each with a different number of cluster-scale DM halos and modeling assumptions. In all cases, we require that the number of free parameters is smaller than or equal to the number of constraints. We note however that given the small number of lensed sources observed with the existing data, the model may be under-constrained even if this criterion is formally satisfied.
The lensing constraints (Section \[sec:cstr\]) come from the identified image-plane locations of multiple images of the lensed sources, and individual emission knots within each galaxy. The multiple constraints within each galaxy assist in constraining the lensing parity, and provide leverage over the relative magnification between the images, without explicitly using the flux ratios as constraints. The latter can sometimes be affected by variability or microlensing [e.g., @Fohlmeister2008; @Dahle2015].
Lensing constraints {#sec:cstr}
-------------------
@Bayliss2016 identified three arc candidates in the field of [SPT$-$0356]{}, and constrained their redshift to the range $1.78 < z < 3.9$ based on lack of emission lines in their Gemini-GMOS spectroscopy. Here, we refine the identification and report nine lensed images of three distinct sources in the field of [SPT$-$0356]{}. Each source has three lensed images. As described in Section \[sec:spectroscopy\], we obtained spectroscopic redshifts of at least one image in each system. Constraining the model with spectroscopic redshifts is crucial for a precise and accurate lens model [@Johnson2016].
While ground-based data can reveal strong-lensing evidence, accurate modeling of the mass distribution requires [[*HST*]{}]{} resolution to precisely select multiply-imaged features used as constraints. The positions of the images are marked in Figure \[fig:cluster\], color-coded by system; the inset shows a zoomed-in view of the three systems. The [[*HST*]{}]{} data resolve the galaxies and reveal their internal morphology; system 2 and system 3 show clear distinct emission knots that we use as constraints in order to better probe the mass profile of the galaxy cluster. Table \[tab:cstr\] summarizes the positions and the spectroscopic redshifts of these systems. The unique morphology of system 1 and system 2, and the identical morphology observed in their multiple images, result in a robust identification even without spectroscopic confirmation of all three images of each system as was obtained for system 3.
In addition to the secure, spectroscopically confirmed multiply-imaged galaxies, we identify three candidate multiply-imaged systems. Since the strong lensing model is used to help identify these candidates, we do not use those systems as constraints. Table \[tab:candidate\] indicates the position of the identified multiple image candidate systems.
[c|ccccc]{}
system 1 & 1.1 & 3:56:20.458 & -53:37:53.265 & [2.363]{} & 0.11\
& 1.2 & 3:56:20.484 & -53:37:50.799 & & 0.23\
& 1.3 & 3:56:21.317 & -53:37:38.073 & & 0.27\
system 2 & 2.1 & 3:56:20.239 & -53:37:53.951 & [2.364]{} & 0.05\
& 2.2 & 3:56:20.336 & -53:37:47.965 & & 0.04\
& 2.3 & 3:56:20.952 & -53:37:38.157 & [2.364]{} & 0.01\
& 21.1 & 3:56:20.235 & -53:37:53.329 & [2.364]{} & 0.06\
& 21.2 & 3:56:20.302 & -53:37:48.898 & & 0.12\
& 21.3 & 3:56:20.945 & -53:37:38.001 & [2.364]{} & 0.04\
& 22.1 & 3:56:20.230 & -53:37:53.101 & [2.364]{} & 0.12\
& 22.2 & 3:56:20.291 & -53:37:49.157 & & 0.08\
& 22.3 & 3:56:20.937 & -53:37:37.886 & [2.364]{} & 0.04\
system 3 & 3.1 & 3:56:19.895 & -53:37:59.115 & [3.048]{} & 0.05\
& 3.2 & 3:56:20.123 & -53:37:44.328 & [3.048]{} & 0.19\
& 3.3 & 3:56:20.562 & -53:37:37.430 & [3.048]{} & 0.19\
& 31.1 & 3:56:19.871 & -53:37:58.899 & [3.048]{} & 0.05\
& 31.2 & 3:56:20.098 & -53:37:44.510 & [3.048]{} & 0.19\
& 31.3 & 3:56:20.532 & -53:37:37.370 & [3.048]{} & 0.20\
& 32.1 & 3:56:19.868 & -53:37:58.629 & [3.048]{} & 0.00\
& 32.2 & 3:56:20.085 & -53:37:44.730 & [3.048]{} & 0.27\
& 32.3 & 3:56:20.540 & -53:37:37.130 & [3.048]{} & 0.19\
[c|ccc]{}
system 4 & 4.1 & 3:56:19.093 & -53:37:56.703\
& 4.2 & 03:56:18.403 & -53:37:51.059\
& 4.3 & 3:56:20.222 & -53:37:29.576\
system 5 & 5.1 & 3:56:22.947 & -53:37:53.300\
& 5.2 & 3:56:22.864 & -53:37:57.408\
& 5.3 & 03:56:21.842 & -53:38:06.655\
system 6 & 6.1 & 3:56:24.330 & -53:38:10.668\
& 6.2 & 3:56:24.367 & -53:38:10.358\
Dark Matter Halos {#sec:dm}
-----------------
We compute four models with one or two DM halos and varying free parameters, to investigate the spatial distribution of DM in the cluster core with respect to the stellar component. These models are summarized in Table \[tab:fitstat\], and described below.
The spatial distribution of cluster-member galaxies appears to be separated into two components, with a concentration of cluster-member galaxies grouped $\sim$150 kpc west of the BCG. The formation of the arcs between the BCG and this concentration of galaxies indicates that the underlying dark matter mass distribution of the cluster may also show a two-component structure. Similar lensing configurations are seen in several lower-redshift clusters, whose lens models are dominated by two cluster-scale DM halos [e.g., @sharon2019]. To test the hypothesis that this cluster is also dominated by two halos, we compute two sets of lens models: the first set of models, labeled A and B, has one cluster-scale DM halo (DM1 in Table \[tab:fitstat\]); the second set, labeled C and D, has two cluster-scale DM halos (DM1 and DM2 in Table \[tab:fitstat\]). In models B and D, the center of DM1 is not fixed, adding two free parameters to these models. The contribution of cluster-member galaxies is included in all models in the same way, as explained in Section \[Methodology\]. DM1 is assumed to be located at or near the position of the BCG, for two reasons: first, the BCG presents a regular luminosity profile, which suggests that it is not disturbed and is therefore likely to sit at the cluster center; second, we lack the ability to properly constrain the eastern extent of the cluster, since we do not identify secure lensing constraints in this region at the depth of the existing data. The position of DM2 is free, with a loose prior that places it around the group of galaxies on the western part of the cluster. We chose that location as it is more likely that a DM clump is located close to a luminous counterpart [@Broadhurst2000; @Broadhurst2005].
The second hypothesis we investigate is whether the BCG and its associated DM halo sit at the center of the cluster-scale DM halo. To test this scenario, we fix the position of DM1 to the position of the BCG in models A and C.
Lens modeling results
---------------------
Table \[tab:fitstat\] lists the best-fit values of the lens model parameters for each one of the four test models. To evaluate the lens models, we employ two statistical criteria, as described below. The first criterion is the image-plane rms: the root-mean-square scatter, in arcseconds, between the observed positions of the multiple images and the positions predicted by the best-fit model. We seek to reduce the rms as much as possible. The models with one cluster-scale DM clump show an rms of $0\farcs3$ and $0\farcs1$ for the fixed and free dark matter halo, respectively. The models with two cluster-scale DM clumps result in better rms of $0\farcs06$ and $0\farcs07$ for the fixed and free dark matter halo, respectively. The rms criterion suggests that the models with two cluster-scale DM halos are significantly better; however, this criterion does not account for the increased flexibility due to the additional free parameters. To account for that, we further evaluate the models using a second criterion, the Bayesian Information Criterion (BIC), which was presented in previous works [see section 5.1 in @Mahler2018; @Lagattuta2017; @Acebron2017]. The BIC enables a quantitative comparison between similar models; it is a statistical measurement based on the model likelihood $\mathcal{L}$, penalized by the number of free parameters $k$ and the sample size $n$ (i.e., $2 \times$ the number of multiple images): $${\rm BIC} = -2 \times \log(\mathcal{L}) + k \times \log(n).$$ We seek to maximize the likelihood (i.e., to reduce the first term of Equation 1). However, arbitrarily increasing the number of free parameters would overfit the data; the second term penalizes such over-fitting by increasing the BIC with the number of free parameters. We seek the lowest BIC possible.
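The two criteria are simple to compute. The sketch below assumes the natural logarithm in the BIC formula; with that assumption and the rounded $\log(\mathcal{L})$, $k$, and $n$ values quoted in Table \[tab:fitstat\], it reproduces the tabulated BIC values to within rounding.

```python
import math

def image_plane_rms(offsets):
    """rms of the angular offsets (arcsec) between observed and
    model-predicted image positions."""
    return math.sqrt(sum(d * d for d in offsets) / len(offsets))

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: BIC = -2 ln(L) + k ln(n),
    for model likelihood L, k free parameters, and sample size n."""
    return -2.0 * log_likelihood + k * math.log(n)
```

For example, model A ($\log\mathcal{L}=5$, $k=8$, $n=42$) gives a BIC near 20, and model B ($\log\mathcal{L}=27$, $k=10$) near $-$16, matching the table.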
The BIC helps us weigh the improvement in the likelihood against the additional freedom allowed by new parameters, such as a secondary halo or a free position for DM1. The number of constraints is identical among all models. We note that the BIC should only be used to compare similar models; otherwise, the likelihood is not a comparable description of model performance.
We discuss below the performance of the four models, referring to each model by its letter as listed in Table \[tab:fitstat\].
Model A, with one cluster halo at a fixed position, has an rms of $0\farcs3$ and a BIC of 20. Freeing the halo center, as done in model B, reduces the rms to $0\farcs1$ and the BIC to $-$16. This is a considerable improvement, indicating that according to both the rms and the BIC criteria, model B, with a free halo, is a better fit to the lensing constraints than model A.
We compare models A and C — the two models with a fixed “main” DM halo, but without and with a secondary halo around the group, respectively. The two-halo model results in a drastically reduced rms of $0\farcs06$ and a lower BIC of $-$4. However, we caution that such a low rms is unrealistic in comparison to other well-constrained models in the literature, and may suggest overfitting. It is possible that model C is too flexible, and the lack of constraints west of the group allows it to compensate for the fixed position of DM1 with DM2. We note that its mass distribution is drastically different from the three other models, as shown in Figure \[fig:masscontours\], especially in the regions that lack lensing constraints. We conclude that we currently do not have enough information to properly constrain the position of DM2.
We compare models B and D — both with a free “main” DM halo, but without and with a secondary halo around the group, respectively. The mass distributions of the two models are similar, as shown in Figure \[fig:masscontours\]. Model D adds a second DM halo near the group; however, it adds little mass to the model, as can be inferred from its low normalization parameter $\sigma_0$ (Table \[tab:fitstat\]). The rms of model B ($0\farcs1$) is similar to that of model D ($0\farcs07$), and both remain low in comparison with well-constrained clusters in the literature. However, the BIC of model B ($-$16) is significantly better than that of model D (4). This implies that the modeling flexibility offered by the addition of the secondary halo is not required to improve the overall goodness of the model.
Models C and D have similar rms values. Model D has a slightly higher BIC, indicating that there is only little statistical difference between models C and D. Moreover, as can be seen in Table \[tab:fitstat\], a fixed versus free position of DM1 results in significantly different positions for DM2, due to the location of the lensing constraints between these two halos. As noted above, with the current lensing evidence, the position of DM2 is severely under-constrained. Further lensing observables west of the group are needed in order to constrain the position of DM2, which will make models using the same assumptions as C and D more reliable. In conclusion, given the available constraints, the BIC criterion identifies model B as the one that best balances the goodness of the fit against the number of constraints and free parameters. Nevertheless, models with two cluster-scale DM halos are not ruled out.
While our statistical assessment suggests that there could be more than one unique “best” model that satisfies the lensing constraints, our main conclusions are not significantly affected by the choice of model: The image configurations, regardless of the modeling choices, require that there be two main mass clumps. None of the modeling results differ on that point. We discuss this further in section \[sec:discussion\].
------------------------------------------ ---------------- ------------------------- ------------------------- ------------------------ -------------------------- ------------------------ ------------------------ ------------------------- --
Model name Component $\Delta\alpha^{\rm ~a}$ $\Delta\delta^{\rm ~a}$ $\varepsilon^{\rm ~b}$ $\theta$ $\sigma_0^{\rm ~c}$ r$_{\rm cut}^{\rm ~c}$ r$_{\rm core}^{\rm ~c}$
(Fit statistics) – ($''$) ($''$) ($\deg$) ([km.s$^{-1}$]{}) (kpc) (kpc)
A – 1 cluster scale halo, fixed DM1 \[0.0\] \[0.0\] 0.82$^{+0.00}_{-0.02}$ 39.1$^{+2.0}_{-1.3}$ 623$^{+22}_{-28}$ \[1500.0\] 1.1$^{+1.4}_{-0.0}$
rms = $0\farcs3$ BCG \[0.0\] \[0.0\] \[0.27\] \[52.1\] 384$^{+90}_{-140}$ 7$^{+21}_{-2}$ \[0.6\]
BIC = 20, AICc = 10 $L^{*}$ Galaxy – – – – 156$^{+5}_{-5}$ 56$^{+9}_{-8}$ –
$\log$($\mathcal{L}$) = 5, k= 8, n= 42 – – – – – – – –
B – 1 cluster scale halo, free DM1 2.9$^{+1.7}_{-3.3}$ 3.8$^{+0.7}_{-1.4}$ 0.65$^{+0.06}_{-0.11}$ 26.0$^{+2.7}_{-3.5}$ 730$^{+85}_{-47}$ \[1500.0\] 9.8$^{+1.6}_{-1.6}$
rms = $0\farcs1$ BCG \[0.0\] \[0.0\] \[0.27\] \[52.1\] 498.1$^{+1.7}_{-31.0}$ 24$^{+23}_{-9}$ \[0.6\]
BIC = -16, AICc = -27 $L^{*}$ Galaxy – – – – 116$^{+8}_{-15}$ 76$^{+109}_{-22}$ –
$\log$($\mathcal{L}$) = 27, k= 10, n= 42 – – – – – – – –
C – 2 cluster scale halos, fixed DM1 \[0.0\] \[0.0\] 0.69$^{+0.01}_{-0.11}$ 33.3$^{+3.2}_{-1.4}$ 801$^{+53}_{-49}$ \[1500.0\] 3.7$^{+0.9}_{-1.4}$
rms = $0\farcs06$ BCG \[0.0\] \[0.0\] \[0.27\] \[52.1\] 100$^{+80}_{-132}$ 54$^{+31}_{-4}$ \[0.6\]
BIC = -4, AICc = -14 DM2 37.1$^{+1.4}_{-6.6}$ 3.1$^{+1.1}_{-2.0}$ 0.84$^{+0.03}_{-0.12}$ 157.3$^{+6.7}_{-13.6}$ 560$^{+52}_{-58}$ \[1500.0\] 3.8$^{+0.3}_{-3.0}$
$\log$($\mathcal{L}$)= 28, k= 14, n=42 $L^{*}$ Galaxy – – – – 164$^{+77}_{-26}$ 8$^{+48}_{-1}$ –
D – 2 cluster scale halos, free DM1 1.9$^{+2.0}_{-2.9}$ 2.2$^{+1.2}_{-1.3}$ 0.6$^{+0.01}_{-0.21}$ 27.3$^{+5.1}_{-3.1}$ 661$^{+88}_{-89}$ \[1500.0\] 5.6$^{+1.8}_{-3.7}$
rms = $0\farcs07$ BCG \[0.0\] \[0.0\] \[0.27\] \[52.1\] 419$^{+3}_{-54}$ 47$^{+24}_{-6}$ \[0.6\]
BIC = 4, AICc = -3 DM2 21.5$^{+5.4}_{-8.6}$ 8.3$^{+3.7}_{-3.7}$ 0.87$^{+0.22}_{-0.36}$ 165.6$^{+21.4}_{-107.4}$ 232$^{+191}_{-123}$ \[1500.0\] 1.4$^{+1.2}_{-2.0}$
$\log$($\mathcal{L}$) = 28, k= 16, n= 42 $L^{*}$ Galaxy – – – – 104$^{+13}_{-16}$ 106$^{+143}_{-23}$ –
------------------------------------------ ---------------- ------------------------- ------------------------- ------------------------ -------------------------- ------------------------ ------------------------ ------------------------- --
\
Quantities in brackets are fixed parameters \
We report statistical quantities such as the Bayesian Information criterion (BIC), the corrected Akaike information criterion (AICc), the likelihood $\log$($\mathcal{L}$) the number of free parameter k, and the sample size n. \
$^{\rm a}$ $\Delta\alpha$ and $\Delta\delta$ are measured relative to the reference coordinate point: ($\alpha$ = 59.0896383, $\delta$ = -53.6310962) \
$^{\rm b}$ Ellipticity ($\varepsilon$) is defined to be $(a^2-b^2) / (a^2+b^2)$, where $a$ and $b$ are the semi-major and semi-minor axes of the ellipse \
$^{\rm c}$ $\sigma_0$, r$_{\rm cut}$ and r$_{\rm core}$ are respectively the central velocity dispersion, the cut radius, and the core radius as defined for the dPIE potential used in our modelisation. For $L^{*}$ Galaxy this value represent the parameter of the galaxy that we optimised for our mass-to-light ratio. We refer the reader to section \[Methodology\] for a summary, and @Limousin2007 [@Eliasdottir2007] for a more detailed description of the potential. \
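As a sanity check of the ellipticity convention in note $^{\rm b}$, the definition can be written directly; this is a one-line illustration of the table's stated formula, not part of the modeling code.

```python
def ellipticity(a, b):
    """Ellipticity as defined in the table notes: (a^2 - b^2) / (a^2 + b^2),
    where a and b are the semi-major and semi-minor axes."""
    return (a**2 - b**2) / (a**2 + b**2)
```

Under this convention a circular halo has $\varepsilon = 0$, and a 2:1 axis ratio gives $\varepsilon = 0.6$, so the tabulated values of 0.6–0.87 correspond to strongly elongated halos.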
Discussion {#sec:discussion}
==========
Optical imaging and spectroscopy of [SPT$-$0356]{} indicate that it has a two-component distribution of cluster-member galaxies, with the two main stellar components separated by 21 arcsec ($\sim$170 kpc). Our strong lensing analysis finds that the distribution of DM is consistent with that of the galaxies. In this section, we compare the stellar and DM distributions and discuss some of their unusual properties.
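The quoted angular-to-physical conversion can be checked with a short flat-$\Lambda$CDM calculation. The $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.3$ cosmology below is an assumption for illustration, since the adopted parameters are not restated in this section.

```python
import math

C_KM_S = 299792.458
H0 = 70.0        # km/s/Mpc -- assumed value
OMEGA_M = 0.3    # assumed value

def _inv_E(z):
    """1/E(z) for a flat LCDM cosmology."""
    return 1.0 / math.sqrt(OMEGA_M * (1 + z)**3 + (1 - OMEGA_M))

def kpc_per_arcsec(z, steps=1000):
    """Physical scale (kpc per arcsec) at redshift z, using Simpson
    integration for the comoving distance."""
    h = z / steps
    s = _inv_E(0.0) + _inv_E(z)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * _inv_E(i * h)
    d_c = (C_KM_S / H0) * (h / 3.0) * s   # comoving distance, Mpc
    d_a = d_c / (1 + z)                   # angular diameter distance, Mpc
    return d_a * math.pi / (180 * 3600) * 1000.0
```

At $z = 1.0359$ this gives roughly 8 kpc per arcsec, so a 21 arcsec separation indeed corresponds to $\sim$170 kpc.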
The GMOS spectra of the BCG and two cluster-member galaxies from a nearby group are shown in Figure \[fig:gmos\_spectra\] (retrieved from [@Bayliss2016]), in red, blue and green lines, respectively. The velocity offset between the BCG and these other galaxies is $<300$ [km.s$^{-1}$]{}. The spectrum of the BCG shows \[OII\] in emission, which is indicative of star formation; the other two galaxies show little to no \[OII\] emission. Observing star formation in a BCG is not unusual, and overall, the spectroscopic data are consistent with these galaxies arising from a similar population of galaxies at the same redshift. The BCG spectrum also shows a velocity offset between its absorption and emission features ($\sim550$ [km.s$^{-1}$]{}); this offset could trace an in-falling filament of cooling gas in the cluster core. Velocity differences between galaxies are reported here using the measurements of the absorption features.
All four strong lensing models are consistent with a two-component mass distribution, with one component centered near the BCG and one near the group. Figure \[fig:masscontours\] plots the mass contours derived from models A (green contours), B (cyan contours), C (magenta contours), and D (yellow contours), showing that regardless of modeling choices a secondary mass distribution is needed in order to reproduce the lensing constraints.
We measure the mass of each of the two main structures by summing the projected mass density within apertures of 80 kpc radius centered on the BCG and on the group; we choose 80 kpc as it is approximately half the projected separation between the two structures, so that the two apertures do not overlap. We report those values in Table \[tab:masses\], as well as their ratios, for each of the tested models. We find a similar total projected mass ratio between the two structures regardless of the model used.
The small radial velocity offset between the BCG and the two measured galaxies in the group (only 300 [km.s$^{-1}$]{}) strongly suggests that most of the relative motion is transverse; the separation between the two mass clumps is small, 21 arcsec ($\sim$170 kpc); and their mass ratio is near-equal. These aspects fulfill most of the criteria laid out by @Dawson2012 (see Section \[sec:intro\]), suggesting that this cluster could be a dissociative merger candidate like the Bullet Cluster, undergoing a major merger event. The absence of additional information on the dynamical state of the hot gas prevents us from being conclusive: the cluster could either be in a pre-merger state or have undergone a very recent merger. High-resolution X-ray data can distinguish between these scenarios. This arrangement, in which the BCG is spatially separated from the other cluster members, is atypical, and supports the conclusion that we are observing a major merger event.
![Projected mass density contours from models A (green), B (cyan), C (magenta), and D (yellow). The total projected mass density shows two components, regardless of modeling choices, with one clump centered near the BCG and one near the nearby group of cluster-member galaxies. Contours are plotted at 0.5, 1, and 2$\times10^{9}$[M$_{\odot}$]{}.kpc$^{-2}$[]{data-label="fig:masscontours"}](masscontours.pdf){width="45.00000%"}
[l|cccc]{} A - 1 halo, DM1 fixed & $18.9_{-0.5}^{+0.0}$ & $15.1_{-0.0}^{+3.3}$ & 1.25 & 0.46\
B - 1 halo, DM1 free & $17.0_{-0.3}^{+0.0}$ & $11.0_{-0.0}^{+5.7}$ & 1.55& 0.27\
C - 2 halos, DM1 fixed & $15.4^{+1.0}_{-0.0}$ & $10.9^{+5.5}_{-0.0}$ & 1.41& 0.54\
D - 2 halos, DM1 free & $16.2 _{-1.1}^{+0.0}$ & $10.2_{-0.0}^{+4.8}$ & 1.58 & 0.28\
Examination of the two main mass concentrations reveals that their galaxy content is significantly different. One component is dominated by a BCG, with a half-light radius larger than that of any other cluster member. The other component is composed of a group of eight cluster members. While it is unlikely that this apparent clustering of eight galaxies within $\sim 9''$ ($\sim$70 kpc) is due only to a projection effect, larger spectroscopic coverage could address this possibility, as would a more refined red sequence based solely on [[*HST*]{}]{} data.

In Figure \[fig:mass\_prof\] we present the mass and density profiles for the four different models. The statistical uncertainties of the models are very low ($<$3%) and, for clarity, are not shown in this figure. The error budget is dominated by systematic uncertainties due to various assumptions and modeling choices. Our parametric modeling of the cluster mass distribution allows us to isolate the different mass components that contribute to the total mass: cluster-scale dark matter clumps, the BCG, and the total contribution from the cluster members. We expect degeneracies between the BCG and the core mass of the dark matter halo in at least some of the models, because they are both confined to the same location. The result is a large variation in the mass of the BCG between models; however, this component represents only a small fraction of the total mass.
The total contribution from cluster member galaxies appears to be a large fraction of the total mass. As can be seen in Table \[tab:masses\], the different models predict that about 27 to 54% of the total mass is contained in the cluster member galaxies and their associated DM halos. This results from our fitting procedure, which allows the amplitude of a constant mass-to-light ratio to vary. In an analysis of a lower-redshift cluster merger, MACSJ0417.5$-$154, @Mahler2019 found a significantly smaller ratio between the galaxy and cluster contributions, of about 1%. @Wu2013 investigated this ratio using simulated clusters and reported that up to 20% of the total mass is contained in subhalos that survived the merger with the main halo, although with large scatter and strong dependence on formation time. In a future analysis of a large sample of clusters, we will investigate whether this is indicative of an evolutionary trend with cosmic time, or an anecdotal representation of a larger cluster-to-cluster variation.

Several aspects of this analysis would be better constrained with additional data, primarily multi-band high-resolution imaging, from [[*HST*]{}]{}, and with high resolution X-ray observations. Multi-band [[*HST*]{}]{} observations, extending to the near IR, would refine the red-sequence selection of cluster-member galaxies and provide a handle on their stellar mass through spectral energy distribution fitting. These data would also facilitate the detection of new multiple images and the confirmation of image candidates in the east and west parts of the cluster, which are under-constrained with the current data. X-ray observations are necessary for determining the dynamical state of the hot cluster gas. A signature of a shock between the components would indicate a recent major merger [@Poole2007], while X-ray emission from both structures would support a pre-merger scenario.
Summary
=======
We construct a strong lensing mass model of the galaxy cluster [SPT-CLJ0356$-$5337]{} at $z=1.0359$, one of the highest-redshift strong lensing clusters known to date. We present spectroscopic confirmation and redshifts of three multiply-imaged lensed galaxies, whose images appear 9.5 to 15 arcseconds west of the BCG. The lensed galaxies are spatially resolved, allowing us to use different emission knots in the same system as constraints, which adds leverage on the shape of the mass profile. This provides confidence in our ability to accurately probe the mass distribution, which, at the location of the lensing evidence, is constrained to within a few percent. However, the lack of multiply-imaged systems at the outskirts of the cluster core limits our understanding of the cluster halo. Nevertheless, [SPT$-$0356]{} appears to be the best-constrained lensing cluster in this redshift bin to date. Other cluster-scale lenses either have too few lensing constraints, are not spectroscopically confirmed, or their apparent lensing evidence is dominated by single galaxies rather than the cluster potential (see Section \[sec:intro\]).
We employ statistical criteria to evaluate four possible lens models, which are based on different modeling assumptions. The lens model indicates that [SPT$-$0356]{} has an Einstein radius of [$\theta_{E}\simeq 14''$]{}, measured from the tangential critical curve for a source at $z=3$. At a radius of 500 kpc, the enclosed mass is measured to be [M$_{500~{\rm kpc}}=4.0 \pm 0.8\times10^{14}\, $[M$_{\odot}$]{}]{}. Within a radius of 730 kpc ($\simeq R_{500}$) we report a value of M$=4.1\pm 0.8\times10^{14} $[M$_{\odot}$]{}, consistent with the SPT measurement [M$_{500 c}=3.59 ^{+0.59}_{-0.66} \times10^{14}h^{-1}_{70}$ [M$_{\odot}$]{}]{} from @Bocquet2019. All values are derived from the statistically-favored model (B).
Regardless of modeling assumptions, we find that the projected mass density of the cluster is best described by a two-component mass distribution, with one mass substructure centered around the BCG, and a second mass substructure centered on the observed position of a small group of eight cluster-member galaxies, grouped within a circle of $\sim 9''$ ($\sim$70 kpc) diameter, located $\sim$170 kpc west of the BCG. The lensing analysis points to nearly equal masses for the two substructures of [SPT$-$0356]{}. Nevertheless, the galaxy distribution is significantly different between those mass components – one is dominated by a single galaxy, the other hosts a group of eight galaxies.
The similar masses, the low radial velocity offset between the group and the BCG, and the small impact parameter between the two structures suggest that this cluster is undergoing a major-merger event. However, to fully characterize this system as a dissociative merger, we would require deep X-ray imaging to probe its intracluster medium and constrain the dynamical state of the cluster gas. If confirmed, [SPT$-$0356]{} will be an important $z>1$ target for next-generation X-ray telescopes.
We find a high ratio between the mass associated with cluster-member galaxies and that of the cluster-scale DM halos at the core of the cluster, compared to low-mass clusters, perhaps indicating that the subhalos have yet to lose a significant fraction of their DM to the cluster potential. All evidence in hand suggests that [SPT$-$0356]{} provides a unique opportunity to probe the population of high-redshift clusters, and to study the evolution of massive clusters.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work is based on observations made with the NASA/ESA [*Hubble Space Telescope*]{}, using imaging data from GO program 13412. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência, Tecnologia e Inovação (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina). TS and JLvdB acknowledge support from the German Federal Ministry of Economics and Technology (BMWi) provided through DLR under project 50 OR 1407. Work at Argonne National Laboratory is supported by UChicago Argonne LLC, Operator of Argonne National Laboratory. Argonne, a U.S. Department of Energy Office of Science Laboratory, is operated under contract no. DE-AC02-06CH11357. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.
The jellyfish galaxy {#sec:jellyfish}
====================
We identify a galaxy at the cluster redshift ($z=1.017$) that exhibits significant star formation based on its colors, and an asymmetric morphology with trails of star formation knots ($\alpha=$3:56:22.310, $\delta=-$53:37:52.700; marked with an orange circle in Figure \[fig:cluster\]). Galaxies with such properties are often referred to in the literature as “jellyfish” galaxies [e.g., @Suyu2010; @Ebeling2014], and are believed to be undergoing stripping as they fall into the intracluster medium, inducing star formation. The projected distance between this galaxy and the BCG is $\sim$60 kpc. The redshift corresponds to a velocity offset of $2359$ [km.s$^{-1}$]{} between the “jellyfish” galaxy and the BCG, or 2783 [km.s$^{-1}$]{} relative to the median cluster redshift $z=$[1.0359]{}. As the ratio between the velocity and the velocity dispersion is $v/\sigma_v<2$, and given its small projected distance from the cluster core, it is likely that this galaxy is gravitationally bound to the cluster [@Bayliss2017].
The apparent stripped gas of the jellyfish suggests that its trajectory on the plane of the sky may be from a north-west position. The green dashed arrow shown in Figure \[fig:jellyfish\] represents a best guess of the jellyfish trajectory, following [@McPartland2016]. The lower (bluer) redshift of the jellyfish ($z=1.017$) compared to the cluster ($z={1.0359}$) indicates that the jellyfish galaxy is moving toward us relative to the rest of the cluster.
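The quoted velocity offsets follow from the standard line-of-sight convention $v = c\,(z_{\rm gal}-z_{\rm ref})/(1+z_{\rm ref})$; a minimal sketch reproducing the offset relative to the median cluster redshift:

```python
C_KMS = 299792.458  # speed of light, km/s

def velocity_offset(z_gal, z_ref):
    """Line-of-sight velocity of a galaxy relative to a reference redshift,
    using the small-offset convention v = c*(z_gal - z_ref)/(1 + z_ref)."""
    return C_KMS * (z_gal - z_ref) / (1.0 + z_ref)

# Jellyfish galaxy (z = 1.017) relative to the median cluster redshift
v = velocity_offset(1.017, 1.0359)
print(round(abs(v)))  # 2783 km/s, matching the quoted offset; v < 0 (blueshifted)
```

The negative sign of `v` encodes the statement in the text that the galaxy is moving toward us relative to the cluster.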
![[*Left:*]{} [[*HST*]{}]{} image of the observed jellyfish galaxy in the galaxy cluster, the dashed arrow line represents a projection of the estimated trajectory of the jellyfish galaxy. [*Right:*]{} Spectral features identified in the FIRE spectra within the slit targeting the jellyfish galaxy. []{data-label="fig:jellyfish"}](jellyfish.png "fig:") ![[*Left:*]{} [[*HST*]{}]{} image of the observed jellyfish galaxy in the galaxy cluster, the dashed arrow line represents a projection of the estimated trajectory of the jellyfish galaxy. [*Right:*]{} Spectral features identified in the FIRE spectra within the slit targeting the jellyfish galaxy. []{data-label="fig:jellyfish"}](s5_1.pdf "fig:")
FIRE spectra {#sec:FIREspectra}
============
In this Appendix we present sections of the spectra of the lensed galaxies, highlighting the spectral features that were used to measure the spectroscopic redshifts of these galaxies.
{width="35.00000%"}
{width="95.00000%"} {width="95.00000%"}
{width="55.00000%"} {width="55.00000%"} {width="55.00000%"}
[^1]: <http://web.mit.edu/~rsimcoe/www/FIRE/ob_data.htm>
[^2]: <https://www.astromatic.net/pubsvn/software/sextractor/trunk/doc/sextractor.pdf>
---
abstract: 'The phenomenological implications of allowing the Higgs to propagate in both AdS${}_5$ and a class of asymptotically AdS spaces are considered. Without tuning, the vacuum expectation value (VEV) of the Higgs is peaked towards the IR tip of the space and hence such a scenario still offers a potential resolution to the gauge-hierarchy problem. When the exponent of the Higgs VEV is approximately two and one assumes order one Yukawa couplings, the fermion Dirac mass term is found to range from $\sim 10^{-5}$ eV to $\sim 200$ GeV, in approximate agreement with the observed fermion masses. However, this result is sensitive to the exponent of the Higgs VEV, which is a free parameter. This paper offers a number of phenomenological and theoretical motivations for considering an exponent of two to be the optimal value. In particular, the exponent is bounded from below by the Breitenlohner-Freedman bound and the requirement that the dual theory resolves the gauge hierarchy problem, while in the model considered, electroweak symmetry may not be broken if the exponent is too large. In addition, the holographic method is used to demonstrate, in generality, that the flatter the Higgs VEV, the smaller the contribution to the electroweak $T$ parameter. Furthermore, the constraints from a large class of gauge-mediated and scalar-mediated flavour-changing neutral currents are at their minimal values for flatter Higgs VEVs. Some initial steps are taken to investigate the physical scalar degrees of freedom that arise from a mixing between the $W_5/Z_5$ components and the Higgs components.'
bibliography:
- 'bibliography.bib'
---
MZ-TH/12-17
\
Paul R. Archer[^1]\
\
Introduction.
=============
In high energy physics today there exist two striking hierarchies. Firstly, there is a hierarchy between the two known dimensionful parameters, the Planck scale ($\sim10^{18}$ GeV) and the electroweak (EW) scale ($\sim 200$ GeV). Secondly, there is a hierarchy in the observed fermion masses, ranging from the top mass ($\sim170$ GeV) to the lightest neutrino mass ($\sim10^{-4}-10^{-2}$ eV). Although there is considerable uncertainty with regard to the lightest neutrino mass, it is curious that these two apparently unrelated hierarchies should extend over a similar number of orders of magnitude. It has been known for some time that warped extra dimensions, in the context of the Randall-Sundrum (RS) model [@Randall:1999ee], offer a compelling explanation of such hierarchies. In particular, the fundamental scale of the model is taken to be the Planck scale and a small 4D effective EW scale is generated, via gravitational redshifting, by localising the Higgs in the IR tip of the space. Further still, by allowing the standard model (SM) particles to propagate in the bulk, fermions with zero modes peaked towards the UV tip of the space will gain exponentially small masses [@Grossman:1999ra; @Gherghetta:2000qt; @Huber:2000ie]. While this description of flavour naturally gives rise to a large hierarchy in the fermion masses, it offers no indication of the size of that hierarchy.
As first pointed out in [@Agashe:2008fe], this changes when one allows the Higgs to propagate in the bulk. The model still offers a potential resolution to the gauge hierarchy problem since it is found that in AdS${}_5$, without tuning, the Higgs profile is not flat, but peaked towards the IR tip of the space, and would gain a vacuum expectation value (VEV), $h(r)\sim e^{2kr+\alpha kr}$, where $k$ is the curvature scale of the space. Assuming order one Yukawa couplings, this spatial dependence of the Higgs VEV gives rise to a ‘maximum’ and a ‘minimum’ fermion zero mode mass, corresponding to IR and UV localised fermions. So the SM fermion masses would now extend down from the EW scale by a factor of $\Omega^{-1-\alpha}$, where $\Omega \sim 10^{15}$ is the warp factor related to the difference between the EW scale and the Planck scale. For example, if $\alpha\approx 0$, then the fermion Dirac masses would range from $\sim 200$ GeV to $\sim10^{-5}$ eV. However, the exponent of the Higgs VEV, $\alpha $, is essentially a free parameter. Here we argue that if it is possible to find a plausible reason why $\alpha $ should be close to zero, then models with a bulk Higgs would help to offer some insight as to why the observed fermion mass hierarchy extends over the range that it does.
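The $\Omega^{-1-\alpha}$ suppression is simple enough to tabulate; the following sketch (using the illustrative values $\Omega=10^{15}$ and a top-like maximal mass of 200 GeV from the text, with all numbers order-of-magnitude only) shows how quickly the lower end of the fermion mass range moves with $\alpha$:

```python
def min_dirac_mass_eV(alpha, m_max_GeV=200.0, warp=1e15):
    """Lightest zero-mode Dirac mass implied by the warp-suppression factor
    Omega^-(1+alpha), assuming O(1) Yukawas (illustrative, tree-level only)."""
    return m_max_GeV * 1e9 * warp ** (-(1.0 + alpha))

# alpha ~ 0 pushes the lightest mass into the sub-meV range, broadly where
# the lightest neutrino is expected to sit; larger alpha overshoots rapidly.
for a in (0.0, 0.5, 1.0):
    print(a, min_dirac_mass_eV(a))
```

The rapid variation with $\alpha$ illustrates the paper's point that the size of the generated hierarchy is only meaningful once $\alpha$ itself is pinned down.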
From a purely 5D perspective of the RS model, there is no strong argument for why the Higgs should have a special status as the only brane localised particle, since, as far as the author is aware, there are no symmetries forbidding such bulk Higgs terms. On the other hand, in a conjectured dual theory, an IR brane localised Higgs would be dual to a purely composite Higgs, whereas a bulk Higgs would be dual to a partially composite Higgs (i.e. an elementary Higgs mixing with a composite sector). The corresponding Higgs operator would have a scaling dimension of $2+\alpha$ [@Luty:2004ye; @ArkaniHamed:2000ds; @Rattazzi:2000hs; @PerezVictoria:2001pa; @Batell:2007ez; @Gherghetta:2010cj]. Hence models with a bulk Higgs offer a convenient framework that allows for the scaling dimension of the Higgs operator to be easily varied.
So the bulk of this paper will be concerned with investigating the phenomenological implications of changing the exponent of the Higgs VEV, $\alpha$. However, there are a couple of related secondary questions. Firstly, by allowing the Higgs (a complex doublet under SU$(2)$) to propagate in the bulk, one gains additional scalar degrees of freedom that arise as a mixture of the fifth components of the $W$ and $Z$ fields and the additional components of the Higgs. Such scalar fields offer an important prediction that can help with the verification or exclusion of 5D models with a bulk Higgs. Hence in section \[sect:PseudoScal\] we take some initial steps towards investigating the implications of such pseudo-scalars.
A second question is related to the geometry of the space. There has been a considerable amount of work done on the phenomenology of the RS model, however this work has primarily focused on AdS${}_5$. At the very least, the RS model should include a bulk Goldberger-Wise scalar [@Goldberger:1999uk] and typically the model includes many more bulk fields. These bulk fields will lead to a modification in the AdS${}_5$ geometry which is assumed to be small. An important question to ask is, to what extent is the existing phenomenological analysis robust against small modifications to the geometry? With this in mind, in addition to a pure AdS${}_5$ geometry, we shall also consider a class of asymptotically AdS${}_5$ spaces that have arisen from a scalar plus gravity system [@Cabrer:2009we]. These spaces are of additional interest since they can, for certain regions of parameter space, result in a significant reduction in the constraints from EW observables [@Cabrer:2011vu; @Cabrer:2011fb; @Carmona:2011ib] and flavour physics [@Cabrer:2011qb].
Aside from these two additional questions, the central result of this paper is that there are a number of both theoretical and phenomenological motivations for considering $\alpha$ close to zero to be the optimal value. These include:
- The four dimensional effective Higgs potential receives an additional positive $|\Phi|^2$ term proportional to $\alpha$ which can, for large values of $\alpha$, dominate over the negative $|\Phi|^2$ term and result in an EW phase transition no longer occurring.
- If the space is not cut off in the IR, then the Breitenlohner-Freedman bound implies $\alpha\geqslant 0$ [@Breitenlohner:1982jf].
- The holographic method is used to demonstrate, for generic geometries and generic Higgs potentials, that the flatter the Higgs VEV (i.e. the smaller the value of $\alpha $) the smaller the contribution to the Peskin-Takeuchi $T$ parameter [@Peskin:1991sw].
- For the RS model, large values of $\alpha $ can give rise to strongly coupled pseudo-scalars and potentially large constraints from pseudo-scalar mediated flavour changing neutral currents (FCNC’s).
- It is found, for all spaces considered, that the modification to the $W$ and $Z$ profile will be minimal for smaller values of $\alpha$. This will likely result in a reduction in many of the constraints from flavour physics.
- Assuming anarchic Yukawa couplings, there is a general trend that suggests that the smaller the value of $\alpha $ the more the SM fermion profiles will be peaked towards the UV tip of the space. This will again result in the reduction of constraints from FCNC’s [@Cabrer:2011qb; @Agashe:2008uz; @Archer:2011bk].
- In order for a 4D Higgs operator, $\mathcal{O}$, with a scaling dimension $2+\alpha$, to offer a resolution to the gauge hierarchy problem then the corresponding $\mathcal{O}^\dag\mathcal{O}$ operator must not be relevant, which implies $\alpha\geqslant 0$ [@Luty:2004ye].
The outline of this paper is as follows. In section \[sect:Model\] the geometry and the Higgs sector is outlined. In section \[sect:FermMass\] the fermion mass hierarchy is discussed. In section \[sect:EWcons\], the EW constraints are computed. In section \[sect:PseudoScal\] the pseudo-scalars are investigated. In section \[sect:flavour\] the implications for flavour physics are discussed and we conclude in section \[sect:Conc\]. The majority of the core equations have been derived, for a generic space, in the appendix. This paper should be considered as an extension of a number of existing studies on bulk Higgs scenarios, including [@Davoudiasl:2005uu; @Cacciapaglia:2006mz; @Dey:2009gf; @Vecchi:2010em; @Medina:2010mu].
The Model. {#sect:Model}
==========
This paper seeks to investigate the most minimal extension of the SM in which the Higgs propagates in the bulk of a 5D warped extra dimension. Hence we shall consider a bulk $\rm{SU}(2)\times\rm{U}(1)$ gauge symmetry and not the extended custodial symmetry [@Agashe:2003zs]. The fermion content remains the same as the SM and we include a bulk Higgs, $\Phi$, which is a doublet under the $\rm{SU}(2)$ symmetry. As in the RS model, we compactify the space over a $\rm{S}^1/\rm{Z_2}$ orbifold in which the space is cut-off at the two fixed points, $r_{\rm{UV}}=0$ and $r_{\rm{IR}}=R$. The Higgs sector is then described by $$\label{ }
S=\int d^5x \sqrt{G}\left [ |D_M\Phi|^2-V(\Phi)\right ]+\int d^4x\sqrt{g_{\rm{IR}}}\left [-V_{\rm{IR}}(\Phi)\right ]_{r=R}+\int d^4x\sqrt{g_{\rm{UV}}}\left [-V_{\rm{UV}}(\Phi)\right ]_{r=0}$$ where $V_{\rm{IR}/\rm{UV}}$ are the Higgs potentials localised on flat branes located at the orbifold fixed points. While $G$ and $g_{\rm{IR}/\rm{UV}}$ are the determinant of the bulk metric and the induced brane metrics. At this point we must make some simplifying assumptions. Firstly we have neglected brane localised kinetic terms and secondly here we consider potentials of the form $$\label{ AssumPot}
V(\Phi)=M_\Phi^2|\Phi|^2\qquad V_{\rm{IR}}(\Phi)=-M_{\rm{IR}}|\Phi|^2+\lambda_{\rm{IR}}|\Phi|^4\qquad V_{\rm{UV}}(\Phi)=M_{\rm{UV}}|\Phi|^2.$$ We have not included $|\Phi |^4$ terms in the bulk or on the UV brane. In RS type scenarios, the fundamental coefficients of such operators are assumed to be at the Planck scale $\sim\mathcal{O}(k^{-2})$. However, in the 4D effective theory, $\lambda_{\rm{IR}}$ would be warped down to the Kaluza-Klein (KK) scale $\sim\mathcal{O}(M_{\rm{KK}}^{-2})$ while the corresponding UV operator would remain at the Planck scale. A bulk $|\Phi |^4$ term is assumed to be suppressed by an intermediate scale; including such a term would result in the Higgs VEV being the solution of a nonlinear differential equation and hence in a significantly more complicated model.
The Geometry.
-------------
With these assumptions the model is almost completely defined and the only remaining input is the geometry of the space. Here we shall consider spaces of the form $$\label{MetricUsed}
ds^2=e^{-2A(r)}\eta^{\mu\nu}dx_\mu dx_\nu-dr^2$$ where $\eta^{\mu\nu}=\rm{diag}(1,-1,-1,-1)$ and $0\leqslant r \leqslant R$. In the original RS model just gravity propagated in the bulk and the solution considered was a slice of AdS${}_5$, $$\label{AdSMetric}
A(r)=kr.$$ However it was quickly realised that in order to stabilise the space one must also include an additional bulk Goldberger-Wise scalar [@Goldberger:1999uk]. Also, in order to generate the fermion mass hierarchy and suppress EW and flavour constraints, the model was extended to allow the SM particles to propagate in the bulk [@Grossman:1999ra; @Gherghetta:2000qt; @Huber:2000ie]. The back reaction of such bulk fields will typically lead to a deviation, from the AdS geometry, in the IR tip of the space. Although the space should be asymptotically AdS in the UV[^2]. While it is not certain how large this IR deformation should be, it is important to ask how sensitive any result, based on AdS${}_5$, is to such modifications in the geometry. So in this paper, we shall also consider a class of geometries that have arisen from solving a scalar plus gravity model, giving solutions of the form [@Cabrer:2009we; @Cabrer:2011fb], $$\label{CGQMetric}
A(r)=kr+\frac{1}{v^2}\ln\left (1-\frac{r}{R+\Delta}\right ).$$ Firstly, it should be noted that scalar-gravity systems generically give rise to a singularity [@Gubser:2000nd], here located at $r=R+\Delta$. While it is possible to construct models that incorporate this singularity, such as soft-wall models, here we shall impose an IR cut-off on the space before it reaches the singularity (i.e. let $r\in[0,R]$ and require $\Delta>0$). The parameters $v$ and $\Delta$ are determined by the scalar potential and boundary conditions, although here we shall treat them as free parameters. Note that AdS${}_5$ is regained by sending $v$ and $\Delta$ to infinity.
At this point it is useful to define the warp factor and KK scale to be $$\label{ }
\Omega\equiv e^{A(R)}\hspace{0.8cm}\mbox{and}\hspace{0.8cm}M_{\rm{KK}}\equiv \frac{\partial_5A(r)|_{r=R}}{\Omega}.$$ In order for the space to offer the potential for a non-supersymmetric resolution to the gauge hierarchy problem, it is required that the space can be stabilised, such that $\Omega\sim 10^{15}$, and also that $M_{\rm{KK}}\sim\mathcal{O}\,(\rm{TeV})$ is phenomenologically viable.
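For pure AdS${}_5$ ($A=kr$) these definitions reduce to $\Omega=e^{kR}$ and $M_{\rm KK}=k/\Omega$, so the stated requirements pin down the number of warping e-folds. A minimal numeric sketch (the Planck-scale value of $k$ is an assumption for illustration, not fixed by the text):

```python
import math

omega = 1e15            # required warp factor (from the text)
kR = math.log(omega)    # for A(r) = k r, Omega = e^{kR}  ->  ~34.5 e-folds
k = 2.4e18              # GeV; reduced-Planck-scale curvature (assumed)
M_KK = k / omega        # M_KK = (d5 A)|_{r=R} / Omega = k / Omega for AdS5

print(kR)    # ~34.5
print(M_KK)  # O(TeV), as required for a phenomenologically viable KK scale
```

This is the usual statement that some 35 e-folds of warping convert a Planck-scale input into a TeV-scale KK spectrum.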
The Higgs VEV. {#sect:HiggsVEV}
--------------
In the above model, a bulk Higgs would gain a VEV, $\langle \Phi \rangle=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0 \\h(r)\end{array}\right)$, where $h(r)$ satisfies $$\label{HiggsVEVode}
\partial_5^2h(r)-4\,\partial_5A(r)\,\partial_5h(r)-M_\Phi^2h=0,$$ with the consistent boundary conditions being either $h|_{r=0,R}=0$ or $$\label{HiggVEVBCs}
\left [\partial_5h-M_{\rm{UV}}h\right ]_{r=0}=0\hspace{0.8cm}\mbox{and}\hspace{0.8cm}\left [\partial_5h-M_{\rm{IR}}h+\lambda_{\rm{IR}}h^3\right ]_{r=R}=0.$$ Since we are not considering an EW symmetry breaking bulk potential we must choose the latter ‘non-Dirichlet’ boundary conditions. In other words here EW symmetry is broken on the IR brane. In the early days of the RS model, it was argued that models with a bulk Higgs required fine tuning in order to achieve the correct W and Z masses [@Chang:1999nh]. However this work assumed a Higgs VEV that was constant with respect to $r$. From the above relations, it can be seen that a constant Higgs VEV can only be achieved by either breaking EW symmetry on both the branes and the bulk, such that $\partial_\Phi V(\Phi)|_{\Phi=h}=\partial_\Phi V_{\rm{IR}}(\Phi)|_{\Phi=h}=-\partial_\Phi V_{\rm{UV}}(\Phi)|_{\Phi=h}=0$, or alternatively breaking EW symmetry just in the bulk and forbidding the existence of brane potentials. Clearly the first option is finely tuned and it is not clear how the second option could be achieved. Hence here we would suggest that a Higgs VEV peaked towards the IR is a more natural scenario, for which the arguments of [@Chang:1999nh] are not applicable.
For the RS model (\[AdSMetric\]), it is then straightforward to solve for the Higgs VEV (see for example [@Luty:2004ye; @Cacciapaglia:2006mz]) $$\label{RSHiggsVEV}
h(r)=N_he^{2kr}\left (e^{\alpha kr}+Be^{-\alpha kr}\right )$$ where $B$ and $N_h$ are constants of integration fixed by the boundary conditions[^3], $$\label{ }
B=-\frac{2k+\alpha k-M_{\rm{UV}}}{2k-\alpha k-M_{\rm{UV}}} \hspace{0.8cm}\mbox{and}\hspace{0.8cm}N_h^2=-\frac{(2k+\alpha k-M_{\rm{IR}})\Omega^{2+\alpha}+B(2k-\alpha k-M_{\rm{IR}})\Omega^{2-\alpha}}{\lambda_{\rm{IR}}\,(\Omega^{2+\alpha}+B\Omega^{2-\alpha})^3}.$$ We have also introduced the Higgs exponent, which will prove to be an important parameter, $$\label{ }
\alpha\equiv\frac{\sqrt{4k^2+M_\Phi^2}}{k}.$$ Clearly, one can always take the limit $\alpha\rightarrow\infty$ in order to gain an IR brane localised Higgs. As shall be demonstrated in the next section, in order to explain the observed fermion mass hierarchy it is required that $\alpha$ is small, $\alpha\sim 0$, which clearly requires that $M_\Phi^2<0$. In AdS space, without an IR cut-off, a negative mass squared term is permitted provided it satisfies the Breitenlohner-Freedman bound [@Breitenlohner:1982jf]. In particular, in order for the total energy of the scalar to be conserved (as well as to allow for a valid global Cauchy surface) it is necessary for the energy-momentum flux to vanish at the AdS boundary. For AdS${}_5$, this implies $M_\Phi^2\geqslant -4k^2$ and $\alpha\geqslant 0$. Technically this bound does not apply here since we are cutting the space off before we reach the AdS boundary, although the RS model is not a UV complete theory and so it is feasible that such a bound is applicable in a more fundamental realisation of the model. Further still, it was pointed out in [@Davoudiasl:2005uu] that such a tachyonic mass term could naturally come from the Higgs coupling to gravity $\sim\zeta \mathcal{R}|\Phi|^2$, where $\zeta$ is an $\mathcal{O}(1)$ coupling and the Ricci scalar is $\mathcal{R}=-20k^2$.
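As a quick consistency check (ours, not the paper's), one can verify numerically that the two branches $e^{(2\pm\alpha)kr}$ appearing in (\[RSHiggsVEV\]) solve the VEV equation (\[HiggsVEVode\]) for $A=kr$, with $M_\Phi^2=(\alpha^2-4)k^2$ as implied by the definition of $\alpha$:

```python
import math

def vev_residual(alpha, k=1.0, r=0.7, eps=1e-4):
    """Finite-difference residual of h'' - 4k h' - M^2 h for the two RS
    branch solutions h = exp((2 +/- alpha) k r), with M^2 = (alpha^2 - 4) k^2.
    Should vanish up to discretisation error (numerical sanity check only)."""
    m2 = (alpha ** 2 - 4.0) * k ** 2
    residuals = []
    for sign in (+1.0, -1.0):
        h = lambda x: math.exp((2.0 + sign * alpha) * k * x)
        d1 = (h(r + eps) - h(r - eps)) / (2 * eps)                # central h'
        d2 = (h(r + eps) - 2 * h(r) + h(r - eps)) / eps ** 2      # central h''
        residuals.append((d2 - 4 * k * d1 - m2 * h(r)) / h(r))    # normalised
    return residuals

residuals = vev_residual(alpha=0.5)  # both entries vanish to rounding error
```

Algebraically the same check reads $(2\pm\alpha)^2-4(2\pm\alpha)-(\alpha^2-4)=0$, confirming that the exponents $2\pm\alpha$ are exactly the two roots of the indicial equation.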
While this paper will primarily focus on the 5D theory, it is worth making a brief comment about the conjectured dual theory, although we refer readers to [@Luty:2004ye; @ArkaniHamed:2000ds; @Rattazzi:2000hs; @PerezVictoria:2001pa; @Batell:2007ez; @Gherghetta:2010cj] for a more comprehensive discussion. A bulk Higgs in the RS model is conjectured to be dual to an operator, $\mathcal{O}$, of a broken conformal field theory mixing with an elementary source field $\phi_0=\Phi|_{r=0}$. In other words, it is conjectured to be dual to a partially composite Higgs. By computing the bulk-brane propagator and taking the large Euclidean momentum limit, it is found that the scaling dimension of the Higgs operator is given by $2+\alpha$ [@Luty:2004ye; @Batell:2007ez]. Further still, it is found that, below the cut-off $\Lambda\sim k$, the coupling between the source field and operator is approximately $\mathcal{L}_{4D}\sim \frac{\omega}{\Lambda^{\alpha-1}}\phi_0\mathcal{O}$, where $\omega$ is a dimensionless constant. Hence the coupling is relevant for $\alpha<1$ and therefore $\alpha$ determines the level of mixing between the source field and the composite operator. The point we wish to emphasize here is that the parameter $\alpha$ determines both the scaling dimension of the Higgs operator and its level of compositeness, i.e. the larger $\alpha$, the more composite the Higgs. A consequence of this result, discussed in [@Luty:2004ye], is that in order to resolve the hierarchy problem one requires that the corresponding 4D Higgs mass operator, $\mathcal{O}^\dag\mathcal{O}$, is not relevant. This again implies that $\alpha\geqslant 0$.
The statements of the previous paragraph apply to AdS${}_5$, and while one would anticipate that they should approximately hold when one modifies the geometry, this is challenging to verify due to the lack of an analytical expression for the bulk-brane propagator. So for the remainder of this paper we shall focus purely on the 5D theory. Turning now to the modified metric (\[CGQMetric\]), the Higgs VEV can again be found by solving (\[HiggsVEVode\]) with (\[HiggVEVBCs\]) to get $$\begin{aligned}
\label{ }
h(r)=N_he^{(2+\alpha)kr}(R+\Delta-r)^{\frac{v^2-4}{v^2}}\Bigg (U\left (\frac{(v^2-2)\alpha+4}{\alpha v^2},\frac{2v^2-4}{v^2},2\alpha k (R+\Delta-r)\right )\hspace{2cm}\nonumber\\+B\,M\left (\frac{(v^2-2)\alpha+4}{\alpha v^2},\frac{2v^2-4}{v^2},2\alpha k (R+\Delta-r)\right )\Bigg ),\end{aligned}$$ where $U$ and $M$ are confluent hypergeometric functions or Kummer functions and $$\label{ }
\small
B=-\frac{\left ((2k-\alpha k-M_{\rm{UV}})(R+\Delta)+\frac{2\alpha-4}{\alpha v^2}\right )U\left (\frac{(v^2-2)\alpha+4}{\alpha v^2},\frac{2v^2-4}{v^2},2\alpha k(R+\Delta)\right )+U\left (\frac{4-2\alpha}{\alpha v^2},\frac{2v^2-4}{v^2},2\alpha k(R+\Delta)\right )}{\left ((2k-\alpha k-M_{\rm{UV}})(R+\Delta)+\frac{2\alpha-4}{\alpha v^2}\right )M\left (\frac{(v^2-2)\alpha+4}{\alpha v^2},\frac{2v^2-4}{v^2},2\alpha k(R+\Delta)\right )+\frac{4-\alpha v^2+2\alpha}{\alpha v^2}M\left (\frac{4-2\alpha}{\alpha v^2},\frac{2v^2-4}{v^2},2\alpha k(R+\Delta)\right )}.$$ Bearing in mind that for large $z>0$, $M(a,b,z)\sim e^zz^{a-b}(1+\mathcal{O}(z^{-1}))$ and $U(a,b,z)\sim z^{-a}(1+\mathcal{O}(z^{-1}))$, then far from the IR tip of the space $$h(r)\sim N_he^{2kr}\left ((R+\Delta-r)^{-\frac{2(\alpha+2)}{\alpha v^2}}e^{\alpha kr}+B\,(R+\Delta-r)^{-\frac{2(\alpha-2)}{\alpha v^2}}e^{-\alpha kr} \right ).$$ In other words, in the UV tip of the space one just finds a small power law correction to (\[RSHiggsVEV\]). However, this approximation breaks down in the IR, particularly for $v\lesssim 1$ when $\frac{(v^2-2)\alpha+4}{\alpha v^2}$ is large. This is a potentially quite interesting region of parameter space in which the Higgs VEV typically grows more than exponentially towards the IR. Unfortunately this greater than exponential growth makes carrying out the numerical studies, conducted later in this paper, quite challenging. So we leave a thorough study of this region of parameter space to future work. Even when $v\gtrsim 1$, the above approximation is still not very robust in the IR. In practice we find a better approximation is $$\label{ }
h(r)\approx N_he^{(2+\alpha)kr}\left (R+\Delta-r\right )^{\frac{v^2-4}{2v^2}}K_{\frac{v^2-4}{v^2}}\left (2\sqrt{\frac{2k((v^2-2)\alpha+4)(R+\Delta-r)}{v^2}}\right )$$ where $K$ is a modified Bessel function. Note that, in practice, $B$ is found to be negligibly small for much of the parameter space. This raises the important point that, since we are breaking EW symmetry in the IR and the Higgs VEV is peaked towards the IR, the majority of the results of this paper are relatively insensitive to the value of $M_{\rm{UV}}$. This further supports one of our initial assumptions, notably the exclusion of a $|\Phi |^4$ term on the UV brane. For the remainder of this paper we shall fix $M_{\rm{UV}}=k$.
Fixing Some of the Input Parameters. {#sect:fixParam}
------------------------------------
Even with this relative insensitivity to the UV potential, one may be concerned with the number of free parameters introduced. In particular, in addition to the warp factor and KK scale, the modified metrics have the additional geometrical input parameters $v$ (the IR modification to the curvature) and $\Delta$ (the position of the singularity), while a bulk Higgs, with the assumed potentials (\[ AssumPot\]), gives rise to an additional four free parameters. Two of these parameters are determined by fitting to the EW scale and the Higgs mass. This leaves four free parameters in addition to those of the RS model with a brane localised Higgs. Here we would argue that this enlargement of the parameter space has arisen from relaxing assumptions of the RS model that (from a bottom up perspective) are questionable.
We will now move on to fix the two parameters $M_{\rm{IR}}$ and $\lambda_{IR}$. Throughout this paper we shall make the working assumption that the Higgs mass is 125 GeV. In practice, provided that the Higgs is lighter than the KK scale, changing the Higgs mass will not significantly change the results. The SM Higgs is taken to be the lowest mass eigenstate of (\[HiggsPartODE\]) with the boundary conditions $$\label{ HiggsPartBCs}
\left [\partial_5H-M_{\rm{UV}}(h+H)\right ]_{r=0}=0\hspace{0.8cm}\mbox{and}\hspace{0.8cm} \left [\partial_5 H-M_{\rm{IR}}(h+H)+\lambda_{\rm{IR}}\left (\frac{3}{2}h^2H+h^3\right )\right ]_{r=R}=0.$$ where $h$ and $H$ are defined in (\[PHIDEF\]). Of course, one could also impose Dirichlet boundary conditions (DBCs) on the Higgs. This would significantly change the phenomenological implications of this model and would, for example, result in no light Higgs. However, in light of recent LHC results we consider this scenario disfavoured. Also note that if the Higgs were just localised on the IR brane then the first ‘$\partial_5$’ terms in (\[HiggVEVBCs\]) and (\[ HiggsPartBCs\]) would not be present. This would imply the familiar relation $h=\left(\frac{M_{\rm{IR}}}{\lambda_{\rm{IR}}}\right )^{\frac{1}{2}}$ and result in the tree level Higgs mass being given by just $M_{\rm{IR}}$.
For the RS model, (\[HiggsPartODE\]) can be solved analytically to get $$\label{HiggsPartProfile}
f_n^{(H)}=N_He^{2kr}\left (J_\alpha\left (\frac{m^{(H)}_ne^{kr}}{k}\right )+\beta Y_\alpha\left (\frac{m^{(H)}_ne^{kr}}{k}\right )\right )$$ where we have made the usual KK decomposition, $H(x,r)=\sum_n f_n^{(H)}(r)H^{(n)}(x)$, such that $\partial_\mu\partial^\mu H^{(n)}=-m_n^{(H)\,2}H^{(n)}$. While $J$ and $Y$ are Bessel functions and $\beta$ is a constant of integration fixed by the boundary conditions. When considering the modified metrics one must work with a numerical solution for the Higgs profile. In addition to the Higgs mass we also fit to the Fermi constant, $\hat{G}_f=1.166367(5)\times 10^{-5}$, the Z mass, $\hat{M}_Z=91.1876\pm0.0021$ GeV and the fine structure constant, $\hat{\alpha}(M_Z)^{-1}=127.916\pm0.0015$ [@Nakamura:2010zzi]. These are given, at tree level, by $$\begin{aligned}
\hat{M}_Z=m_0^{(Z)} \hspace{0.8cm}
\sqrt{4\pi \hat{\alpha}(M_Z)}=\frac{gg^{\prime}}{\sqrt{g^2+g^{\prime\,2}}}f_0^{(\gamma)} \hspace{0.8cm}\nonumber\\
4\sqrt{2} \hat{G}_f=g^2\sum_n\frac{\left (\int dr\, e^Af_0^{(\mu_L)}f_n^{(W)}f_0^{(\nu_{\mu \,L})}\right )\;\left (\int dr\, e^Af_0^{(e_L)}f_n^{(W)}f_0^{(\nu_{e\,L})}\right )}{m_n^{(W)\,2}}\label{EWobserv}\end{aligned}$$ where $f_n^{(\gamma,W)}$ are the photon and W profiles, defined in the appendix, while $f_0^{(\mu_L,\nu_{\mu\,L},e_L,\nu_{e\,L})}$ are the fermion zero mode profiles for fermions with a bulk mass parameter $c_L$, given in (\[fermProf\]). For the purpose of this fit, we assume that the muon, electron and neutrinos all have a universal bulk position $c_L=-c_R=0.7$. Also note that, with the absence of a right-handed neutrino zero mode, the charged pseudo-scalars will not contribute at tree level to the Fermi constant (see section \[sect:PseudoScal\]).
The results have been plotted in figure \[fig:LambIRMIR\]. This brings us to the first motivation for favouring small values of $\alpha$. In the standard Higgs mechanism one requires a negative $|\Phi|^2$ term, in the Higgs potential, in order to generate a non-zero Higgs VEV and break EW symmetry. When one allows the Higgs to propagate in the bulk of an extra dimension, the 4D effective potential will gain, at tree level, a positive contribution to the $|\Phi|^2$ term loosely related to $\partial_5 \Phi$. The larger the value of $\alpha$, the larger this contribution will be and if $\alpha$ is too large then EW symmetry will no longer be broken at tree level. This can be seen explicitly in figure \[fig:LambIRMIR\]. As $\alpha$ is increased, the required negative $M_{\rm{IR}}$ also increases in magnitude, in order to compensate the positive contribution, and at some point ($\alpha \sim 5$ for the RS model) the positive contribution dominates and EW symmetry is no longer broken.
One should be cautious about taking the values of $\alpha$ in figure \[fig:LambIRMIR\], for which this happens, too literally. Where it is not possible to find values of $M_{\rm{IR}}$ and $\lambda_{\rm{IR}}$ that break EW symmetry, one should of course consider additional operators, in particular a bulk $|\Phi |^4$ term; it is also not clear to what extent this effect continues beyond tree level. Rather, figure \[fig:LambIRMIR\] gives an indication of the values of $\alpha$ for which breaking EW symmetry, in a fashion that is compatible with EW observables, becomes difficult. What is surprising is that for certain geometries, in particular geometries with small $v$ and large $\Delta$, this can happen for relatively small values of $\alpha$.
Fermion Masses. {#sect:FermMass}
===============
Having found the form of the Higgs VEV, we can now move on to look at the fermion masses. The fermions will gain a mass via the Yukawa coupling, $Y$, to the Higgs. That is, in addition to the terms in (\[FermAction\]) the action will also include $$\mathcal{L}_{\rm{Yukawa}}=-Y_D\bar{\Psi}\Phi X-Y_U\epsilon^{ab}\bar{\Psi}_a\Phi_b^\dag X$$ where $\Psi$ is a doublet under SU($2$) and $X$ is a singlet. If we split the spinor into its chiral components $X=\chi_L+\chi_R$ and likewise for $\Psi$ (as in the appendix) then, after compactifying over the $S^1/Z_2$ and choosing appropriate boundary conditions, only $\psi_L$ and $\chi_R$ will gain zero modes. So a low energy chiral theory is achieved with $\Psi\ni\{L,Q\}$ and $X\ni\{e,u,d\}$. It can now be seen that, due to the mixing induced by the bulk Higgs, the profiles given in (\[FermEOM\]) will not be mass eigenstates. To compute the mass eigenstates, if we define $$\label{ }
Y_{LR}^{(n,m)}\equiv \frac{1}{\sqrt{2}}Y\int dr\; hf_n^{(\psi_{L})}f_m^{(\chi_{R})}\qquad\mbox{ and }\qquad Y_{RL}^{(n,m)}\equiv \frac{1}{\sqrt{2}}Y\int dr\; hf_n^{(\psi_{R})}f_m^{(\chi_{L})}$$ then the fermion mass matrix will be given by $$\label{FermionMassMatrix}
\left(\begin{array}{cccc}\bar{\psi}_L^{(0)} & \bar{\psi}_L^{(1)} & \bar{\chi}_L^{(1)} & \dots\end{array}\right)\left(\begin{array}{ccccc}Y_{LR}^{(0,0)} & 0 & Y_{LR}^{(0,1)} & 0 & \dots \\Y_{LR}^{(1,0)} & m_1^{(\psi)} & Y_{LR}^{(1,1)} & 0 & \\0 & Y_{RL}^{(1,1)} & m_1^{(\chi)} & Y_{RL}^{(2,1)} & \\Y_{LR}^{(2,0)} & 0 & Y_{LR}^{(2,1)} & m_2^{(\psi)} & \\\vdots & & & & \ddots\end{array}\right)\left(\begin{array}{c}\chi_R^{(0)} \\\psi_R^{(1)} \\\chi_R^{(1)} \\\psi_R^{(2)} \\\vdots\end{array}\right).$$ Note that we have restricted this discussion to one generation; in reality $Y_{LR}$ and $Y_{RL}$ will be $3\times 3$ blocks. This matrix can then be perturbatively diagonalised (see for example [@Archer:2010hh]) such that the zero mode mass eigenvalue is found to be approximately $$\label{ }
\tilde{m}_0\approx Y_{LR}^{(0,0)}+\sum_{n,m=1}\frac{Y_{LR}^{(0,n)}Y_{RL}^{(n,m)}Y_{LR}^{(m,0)}}{(m_n^{(\chi)}-Y_{LR}^{(0,0)})(m_m^{(\psi)}-Y_{LR}^{(0,0)})}+\mathcal{O}(m_n^{-3}).$$ While the profile of the mass eigenstate of the $\chi_R$ zero mode, for example, will be $$\label{fermCorrections}
\tilde{f}_0^{(\chi_R)}\approx f_0^{(\chi_R)}-\sum_{n=1}\frac{Y_{LR}^{(0,n)}}{(m_n^{(\chi)}-Y_{LR}^{(0,0)})}f_n^{(\chi_R)}+\mathcal{O}(m_n^{-2}).$$ So the SM fermion masses will be given by $Y_{LR}^{(0,0)}$ with $\mathcal{O}(\tilde{m}_0^2/ M_{\rm{KK}}^2)$ corrections and will have the profiles $f_0^{(\psi, \chi)}$ with $\mathcal{O}(\tilde{m}_0/ M_{\rm{KK}})$ corrections. In this paper we will be largely concerned with the lighter fermions and so we will neglect these corrections. However, these corrections should not be completely forgotten since they may lead to phenomenological deviations from the brane Higgs scenario, particularly with regard to top physics. It is also possible that in models with extended Higgs sectors, such as models with a custodial symmetry [@Agashe:2003zs] or models with axial gluons [@Bauer:2011ah], these corrections may be enhanced.
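The reliability of this kind of perturbative diagonalisation is easy to check numerically in a toy truncation. The sketch below is our own illustration, with made-up entries rather than values from this model: it keeps a single heavy mode $M$ in a $2\times 2$ matrix $\left(\begin{array}{cc}y & a \\b & M\end{array}\right)$, whose smallest singular value is $y-ab/M+\mathcal{O}(M^{-2})$, the one-mode analogue of the corrections above.

```python
import math

def smallest_singular_value(y, a, b, M):
    """Exact smallest singular value of the 2x2 mass matrix [[y, a], [b, M]],
    from the eigenvalues of A^T A via the quadratic formula."""
    p = y * y + b * b          # (A^T A)_11
    q = y * a + b * M          # (A^T A)_12 = (A^T A)_21
    s = a * a + M * M          # (A^T A)_22
    tr, det = p + s, p * s - q * q
    lam_min = (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0
    return math.sqrt(lam_min)

# Illustrative (hypothetical) numbers: light Yukawa entry y, off-diagonal
# Higgs mixings a, b, and one heavy KK mass M, in units of the KK scale.
y, a, b, M = 0.1, 0.05, 0.07, 3.0

exact = smallest_singular_value(y, a, b, M)
approx = y - a * b / M   # leading seesaw-like correction, O(M^-2) accurate

print(exact, approx)
```

With $M$ an order of magnitude above $y$, the exact and perturbative values agree to a few parts in $10^4$, in line with the quoted $\mathcal{O}(\tilde{m}_0^2/M_{\rm{KK}}^2)$ accuracy.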
For the RS model it is then straightforward to analytically integrate over (\[RSHiggsVEV\]) and the fermion profiles (\[fermProf\]) giving $$\begin{aligned}
\label{ }
Y_{LR}^{(0,0)}=\frac{\tilde{Y}}{\sqrt{2}}M_{\rm{KK}}\sqrt{\frac{(1-2c_L)(1+2c_R)\left ((\tilde{M}_{\rm{IR}}-2-\alpha)\Omega^\alpha+B(\tilde{M}_{\rm{IR}}-2+\alpha)\Omega^{-\alpha}\right )}{(\Omega^{1-2c_L}-1)(\Omega^{1+2c_R}-1)\;\tilde\lambda_{IR}\;\left (\Omega^\alpha+B\Omega^{-\alpha}\right )^3}}\times\hspace{1cm}\nonumber\\
\left (\frac{\Omega^{1-c_L+c_R+\alpha}-\Omega^{-1}}{(2-c_L+c_R+\alpha)}+B\frac{\Omega^{1-c_L+c_R-\alpha}-\Omega^{-1}}{(2-c_L+c_R-\alpha)}\right ),\label{RSYLR}\end{aligned}$$ where we have introduced the ‘assumed $\mathcal{O}(1)$’ coefficients $\tilde{M}_{\rm{IR},\rm{UV}}=M_{\rm{IR},\rm{UV}}k^{-1}$, $\tilde{\lambda}_{\rm{IR}}=\lambda_{\rm{IR}}k^2$ and $\tilde{Y}=Y\sqrt{k}$. In practice, however, since the EW scale is not the same as the KK scale, these parameters are not $\mathcal{O}(1)$ (see figure \[fig:LambIRMIR\]). This should be contrasted with the analogous expression for the brane localised Higgs (see for example [@Huber:2003tu]), $$\label{ }
Y_{LR}^{(0,0)}=\frac{\tilde{Y}}{\sqrt{2}}v_4\sqrt{\frac{(1-2c_L)(1+2c_R)}{(\Omega^{1-2c_L}-1)(\Omega^{1+2c_R}-1)}}\Omega^{1-c_L+c_R}$$ where the 4D Higgs VEV is $v_4\approx 246$ GeV[^4]. The range of fermion masses has been plotted in figure \[fermmass\]. For the modified metrics, $Y_{LR}^{(0,0)}$ does not have a neat analytical form but the resulting distribution of fermion masses is similar to that plotted in figure \[fig:BulkHiggs\].
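The exponential sensitivity to the bulk mass parameters can be made quantitative by evaluating the brane localised Higgs expression above directly. The following sketch is our own, with the illustrative choices $\tilde{Y}=1$, $\Omega=10^{15}$ and $c_L=-c_R=c$:

```python
import math

OMEGA = 1.0e15   # UV-IR hierarchy, as used in the text
V4 = 246.0       # 4D Higgs VEV in GeV
YT = 1.0         # assumed O(1) dimensionless Yukawa, Y-tilde

def zero_mode_mass(cL, cR, omega=OMEGA):
    """Brane-localised-Higgs estimate of the fermion zero-mode mass (GeV)."""
    pref = YT / math.sqrt(2.0) * V4
    norm = math.sqrt((1 - 2 * cL) * (1 + 2 * cR)
                     / ((omega ** (1 - 2 * cL) - 1) * (omega ** (1 + 2 * cR) - 1)))
    return pref * norm * omega ** (1 - cL + cR)

for c in (0.3, 0.6, 0.7):
    print(c, zero_mode_mass(c, -c))
```

Order one changes in $c$, from $0.3$ to $0.7$, move the zero mode mass from roughly $70$ GeV down to roughly $70$ keV, i.e. across about six orders of magnitude.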
This brings us to the central result of this paper and the initial motivation for this study. It is already widely known that warped extra dimensions offer a potential explanation for the large hierarchies that exist in the observed fermion masses [@Grossman:1999ra; @Gherghetta:2000qt; @Huber:2000ie]. As can be seen in figure \[fig:BraneHiggs\], order one changes in the bulk mass parameter result in exponential changes in the zero mode mass. However the model offers no indication of what the overall size of the mass hierarchy should be. When we consider a bulk Higgs this situation changes. When the fermion profiles are peaked towards the IR brane, the $\Omega^{1-c_L+c_R+\alpha}$ term will dominate and $m_0\sim \tilde{Y}M_{\rm{KK}}\tilde{\lambda}^{-\frac{1}{2}}_{\rm{IR}}\sim 200-300$ GeV. However, when the fermions are peaked towards the UV brane, the second $\Omega^{-1}$ term will dominate and $m_0\sim \tilde{Y}M_{\rm{KK}}\tilde{\lambda}^{-\frac{1}{2}}_{\rm{IR}}\Omega^{-1-\alpha}$. Hence if we assume there are no hierarchies in $\tilde{Y}$ and we do not have a split fermion scenario, then the lightest fermion Dirac mass term can be suppressed relative to the heaviest by at most a factor of $\sim\Omega^{-1-\alpha}$.
These are of course approximate relations, but the calculated values are plotted in figure \[MinMaxMass\]. This figure requires a number of remarks. Firstly, what has been plotted is not the absolute maximum and minimum mass but rather the location of the upper and lower ‘plateaus’ in figure \[fig:BulkHiggs\]. These plateaus are not strictly flat but increase (or decrease) logarithmically, and the bulk mass parameter at which the lower ‘plateau’ begins increases as $\alpha$ is increased (see analogous plots in [@Archer:2011bk]). We have also assumed $\tilde{Y}\approx 1$. By naive dimensional analysis it is suspected that perturbative control of the theory is lost if $\tilde{Y}\gtrsim 3$ [@Csaki:2008zd], but there is nothing, except naturalness arguments, forbidding $\tilde{Y}$ from being small. Likewise we have rejected the possibility of a split fermion scenario in which the left handed fermions are peaked towards the UV and the right handed fermions are peaked towards the IR or vice-versa. Such a scenario would undoubtedly give rise to large constraints from FCNC’s and so is typically disfavoured by fits including relevant flavour observables (see section \[sect:flavour\]).
If we accept these assumptions, then we arrive at a potentially interesting result, first pointed out in [@Agashe:2008fe]. For spaces close to AdS${}_5$, with a small value of $\alpha\lesssim 0.1$, the lower bounds on the fermion masses are remarkably close to the suspected value for the lightest neutrino mass, $10^{-4}-10^{-2}$ eV[^5]. This alone does not explain the size of the neutrino masses, since the above results are for the Dirac mass term. Hence one must also either demonstrate why the Majorana mass term is forbidden, or alternatively explain why it exists at approximately the same scale. There are many possibilities for achieving this, for example, the latter option could be achieved if the origin of the Majorana mass term had a similar profile to the Higgs VEV. However these possibilities would necessarily require an extension of the minimal set up considered in this paper and so here we leave them to future work. Also, a bulk Higgs scenario offers no explanation as to why there is a six orders of magnitude ‘desert’ between the electron mass and the heaviest neutrino mass or why the leptons are lighter than the quarks. In other words, this work is concerned with the range of the Dirac mass term, while statements concerning neutrino masses necessarily require an extended model.
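The arithmetic behind this estimate is simple enough to spell out. Taking the upper plateau at an assumed $250$ GeV (the middle of the $200-300$ GeV range quoted above) and $\Omega=10^{15}$, the lower plateau sits at roughly $250\,\mbox{GeV}\times\Omega^{-1-\alpha}$:

```python
# Rough lower edge of the Dirac-mass range, m_min ~ m_max * Omega^(-1-alpha).
OMEGA = 1.0e15
M_MAX_EV = 250.0e9   # assumed upper plateau ~ 250 GeV, expressed in eV

for alpha in (0.0, 0.05, 0.1):
    m_min_ev = M_MAX_EV * OMEGA ** (-1.0 - alpha)
    print(alpha, m_min_ev)   # alpha = 0 lands at ~2.5e-4 eV
```

For $\alpha=0$ this gives $\sim 2.5\times 10^{-4}$ eV, inside the quoted $10^{-4}-10^{-2}$ eV window, while at $\alpha=0.1$ it sits just below it.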
Having said that, here we would argue that if one could understand why $\alpha$ should be small, then models with a bulk Higgs would offer some insight into why the fermion mass hierarchy is the size that it is. Having already offered two motivations for favouring a small value of $\alpha$, we shall now move on to demonstrate that many of the phenomenological constraints on warped extra dimensions will be at their minimum for smaller values of $\alpha$.
Electroweak Constraints. {#sect:EWcons}
========================
One of the first tests of any model that modifies the Higgs mechanism of the SM is its ability to suppress corrections to the precision EW observables. Typically it is found that such corrections are dominated by modifications to the $W$ and $Z$ propagators and are often parameterised in terms of the ‘oblique’ Peskin-Takeuchi parameters [@Peskin:1991sw]. Such a parameterisation assumes that the corrections to gauge-fermion couplings are negligible and the new physics can be described by the effective Lagrangian $$\label{STeffLang}
\mathcal{L}_{\rm{eff.}}=-\frac{1}{2}Z_\mu\left (p^2-m_Z^2-\Pi_Z(p)\right )Z^\mu-W_\mu^{+}\left (p^2-m_W^2-\Pi_W(p)\right )W^\mu_{-}+\dots$$ where $p$ is the four-momentum. The effective Lagrangian also contains terms for the photon as well as terms mixing the photon and the $Z$, but these corrections are found to be zero, at tree level, in models with extra dimensions. Expanding the $\Pi$’s as $$\Pi(p)=\Pi(0)+p^2\Pi^{\prime}(0)+\frac{p^4}{2}\Pi^{\prime\prime}(0)+\dots$$ the Peskin-Takeuchi $S$ and $T$ parameters are defined to be $$\label{ }
\hat{\alpha}T=\frac{1}{m_Z^2}\left (\Pi_Z(0)-\frac{1}{\sin^2 \theta_w}\Pi_W(0)\right )\hspace{0.8cm}\mbox{and}\hspace{0.8cm}\hat{\alpha}S=4\sin^2 \theta_w\cos^2 \theta_w\left (\Pi_Z^{\prime}(0)-\Pi_\gamma^\prime(0)\right ),$$ where $\theta_w$ is the weak mixing angle. It is possible to introduce additional parameters in order to account for the higher order corrections [@Cacciapaglia:2006pk]. However here it is found that, in the absence of a custodial symmetry [@Agashe:2003zs], the constraint is dominated by contributions to the $T$ parameter.
In order to compute the corrections to the propagator here we shall largely follow [@Cabrer:2011fb] and expand the 5D field in terms of the holographic basis. This method is completely equivalent to a KK expansion and does not rely on the existence of a dual theory. In the interests of generality, we shall also work with the generic metric considered in the appendix (\[genericMetric \]). In particular we factorise the 5D fields $$\label{ }
Z_\mu(p,r)=G^{(Z)}(p,r)\tilde{Z}_\mu(p)\hspace{0.8cm}\mbox{such that}\hspace{0.8cm} Z_\mu(p,r_{\rm{UV}})=\tilde{Z}_\mu(p)$$ and likewise for the $W$. We also impose Neumann boundary conditions (NBC’s) on just the IR brane, $\partial_5Z_\mu |_{r_{\rm{IR}}}=0$. Bearing in mind that $G^{(Z)}$ will satisfy $\partial_5(a^2b^{-1}\partial_5G^{(Z)})-a^2bM_Z^2G^{(Z)}+bp^2G^{(Z)}=0$, the tree level effective Lagrangian will be given purely by the UV boundary term $$\label{effLangObli}
\mathcal{L}_{\rm{eff.}}=\frac{1}{2}\tilde{Z}_\mu \left [G^{(Z)}a^2b^{-1}\partial_5G^{(Z)}\right ]_{r=r_{\rm{UV}}}\tilde{Z}^\mu+\tilde{W}^+_\mu \left [G^{(W)}a^2b^{-1}\partial_5G^{(W)}\right ]_{r=r_{\rm{UV}}}\tilde{W}_-^\mu.$$ We have once again neglected possible brane localised kinetic terms, which are known to reduce the EW constraints [@Carena:2002dz]. To proceed further we define $$\label{ }
P_{W,Z}(p,r)\equiv\frac{a^2b^{-1}\partial_5G^{(W,Z)}(p,r)}{G^{(W,Z)}(p,r)}$$ such that $$\label{Peqn}
\partial_5 P_{W,Z}+a^{-2}bP_{W,Z}^2-a^2bM_{W,Z}^2+bp^2=0.$$ We can now match $P_{W,Z}$ to the oblique parameters by expanding in the four-momentum $$\label{ }
P_{W,Z}(p,r)=P_{W,Z}^{(0)}(r)+p^2P_{W,Z}^{(1)}(r)+\frac{1}{2}p^4P_{W,Z}^{(2)}(r)+\dots$$ and equating (\[STeffLang\]) to (\[effLangObli\]) giving $$\label{ }
\Pi_{W,Z}(0)=P_{W,Z}^{(0)}\big |_{r=r_{\rm{UV}}}-m_{W,Z}^2\hspace{0.8cm}\mbox{and}\hspace{0.8cm}\Pi^{\prime}_{W,Z}(0)=P_{W,Z}^{(1)}\big |_{r=r_{\rm{UV}}}+1.$$ Where $P^{(n)}$ can be found by expanding (\[Peqn\]) into a set of coupled equations $$\begin{aligned}
\partial_5P_{W,Z}^{(0)}+a^{-2}bP_{W,Z}^{(0)\,2}-a^2bM_{W,Z}^2=0\nonumber\\
\partial_5P_{W,Z}^{(1)}+2a^{-2}bP_{W,Z}^{(0)}P_{W,Z}^{(1)}+b=0\nonumber\\
\partial_5P^{(2)}_{W,Z}+2a^{-2}b(P_{W,Z}^{(0)}P_{W,Z}^{(2)}+P_{W,Z}^{(1)\,2})=0\label{Pdecompose}\\
\vdots\nonumber\end{aligned}$$ As already mentioned, the greatest contribution to the EW constraints comes from the $T$ parameter which is given by $P_{W,Z}^{(0)}\big |_{r=r_{\rm{UV}}}$. Without loss of generality, we can set $b=1$, $a(r_{\rm{IR}})=\Omega^{-1}$ and $a(r_{\rm{UV}})=1$. Then using the IR boundary condition, $P_{W,Z}(r_{\rm{IR}})=P^{(0)}_{W,Z}(r_{\rm{IR}})=0$, (\[Pdecompose\]) can be rearranged to give $P_{W,Z}^{(0)}\big |_{r=r_{\rm{UV}}}$ (and hence the $T$ parameter) as $$\label{P0eval}
P_{W,Z}^{(0)\,2}\big |_{r=r_{\rm{UV}}}=\partial_5P_{W,Z}^{(0)}\big |_{r=r_{\rm{IR}}}-\partial_5P_{W,Z}^{(0)}\big |_{r=r_{\rm{UV}}}+M_{W,Z}^2|_{r=r_{\rm{UV}}}-\Omega^{-2}M_{W,Z}^2|_{r=r_{\rm{IR}}}.$$ It should be stressed that this holds for generic Higgs VEVs and generic 5D geometries. This then brings us to the point of this discussion. Bearing in mind the definition of $M_{W,Z}$ (\[ MWZdef\]), it is now straightforward to see that the ‘flatter’ the Higgs VEV (i.e. the smaller the value of $\alpha$), the smaller the last two terms of (\[P0eval\]) will be and the smaller the contribution to the $T$ parameter will be[^6]. This should not be a particularly surprising result. One gets tree level corrections to the EW observables because the Higgs mixes the SM gauge fields (the zero modes) with the KK gauge fields that are peaked in the IR. The less the Higgs is peaked towards the IR, the weaker this mixing will be. Alternatively, in the language of a possible dual theory, the less composite the Higgs, the less it will mix the composite states with the elementary states.
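In practice $P_{W,Z}^{(0)}$ is obtained by integrating the first equation of (\[Pdecompose\]) from the IR brane, where $P_{W,Z}^{(0)}=0$, back to the UV brane. A convenient check on such a solver is the flat toy limit $a=b=1$ with constant $M_{W,Z}=M$, where the Riccati equation $\partial_5P^{(0)}=-P^{(0)\,2}+M^2$ with $P^{(0)}(r_{\rm{IR}})=0$ is solved in closed form by $P^{(0)}(r)=M\tanh\left (M(r-r_{\rm{IR}})\right )$. The sketch below (ours, not the paper's code) compares a Runge-Kutta integration against this closed form:

```python
import math

def p0_uv(M, r_uv, r_ir, steps=2000):
    """Integrate dP/dr = -P^2 + M^2 (flat toy limit a = b = 1, constant M)
    from the IR boundary condition P(r_ir) = 0 back to r_uv, using RK4."""
    f = lambda r, p: -p * p + M * M
    h = (r_uv - r_ir) / steps          # negative step: integrate towards the UV
    r, p = r_ir, 0.0
    for _ in range(steps):
        k1 = f(r, p)
        k2 = f(r + h / 2, p + h * k1 / 2)
        k3 = f(r + h / 2, p + h * k2 / 2)
        k4 = f(r + h, p + h * k3)
        p += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        r += h
    return p

M, R = 0.3, 2.0                        # illustrative values (k = 1 units)
numeric = p0_uv(M, 0.0, R)
analytic = -M * math.tanh(M * R)       # closed-form solution of the toy limit
print(numeric, analytic)
```

A fourth order Runge-Kutta integration with a few thousand steps reproduces the closed form solution to high accuracy; the same integrator can then be applied to the warped case, where no closed form is available.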
![The constraints on $M_{\rm{KK}}$ (not the mass of the first KK gauge boson) from EW precision tests for the RS model (black) as well as the modified metrics with $v=10$ (blue), $v=5$ (green) and $v=3$ (red). We have used $k\Delta=1$ (solid line), $k\Delta=0.5$ (dash dash), $m_H=125$ GeV, $M_{\rm{UV}}=k$ and fixed $\Omega=10^{15}$. At each iteration we have fit to $\hat{\alpha}$, $\hat{M}_Z$ and $\hat{G}_f$ and compared with the latest $S-T$ ellipse (\[STellipse\]) at 95% confidence level. For $k\Delta=0.5$, such an analysis is plagued by small numerical inaccuracies which results in not completely smooth curves.[]{data-label="fig:EWconstraints"}](Figures/EWconstraints.eps){width="80.00000%"}
The contributions of the first two terms of (\[P0eval\]) are quite sensitive to the geometry of the space. It was found in [@Cabrer:2011vu; @Cabrer:2011fb; @Carmona:2011ib] that the contribution to these two terms could be significantly reduced in the modified spaces (\[CGQMetric\]) with $v\lesssim 1$ and $k\Delta\lesssim 1$. However, as discussed in section \[sect:HiggsVEV\], this region of parameter space typically results in the Higgs VEV growing more than exponentially, which would increase the contribution from the last two terms of (\[P0eval\]). So a minimum in the EW constraints can be found by introducing a moderate tuning such that the Higgs VEV still grows exponentially and $v\lesssim 1$.
Before we can compute the size of these constraints, we must first perform a field redefinition in order to absorb the non-oblique corrections to the gauge-fermion couplings, $$\tilde{W}_\mu\rightarrow\frac{g_4}{g\int dr \,ba^{-1}f_0^{(\psi)}G^{(W)}f_0^{(\psi)}}\tilde{W}_\mu,$$ and likewise for the $Z$ field. Here $g_4$ is the 4D effective coupling found by fitting to (\[EWobserv\]). This is only possible if one assumes universal fermion couplings and hence we again assume a universal fermion position with $c_L=-c_R=0.7$. In practice this is not a good approximation for the heavy quarks. However, the modified metrics with more realistic fermion positions have been studied in [@Carmona:2011ib], where it was found that certain regions of the parameter space still have significantly reduced constraints.
The constraints on the KK scale can then be calculated by comparison with a fit to the $S$ and $T$ parameters (assuming a vanishing $U$ parameter) [@Baak:2011ze], $$\label{STellipse}
S=0.07\pm0.09\hspace{1cm} T=0.10\pm0.08\hspace{1cm} \rho_{\rm{correlation}}=+0.88.$$ These constraints have been plotted in figure \[fig:EWconstraints\]. Note that these constraints are for the KK scale and not for the mass of the first KK gauge field. The masses of the first KK photon (or gluon) are given in table \[tab:KK masses\]. Here we find that relatively small shifts in the geometry can result in significant reductions in the mass eigenvalues relative to the defined KK scale. It is believed that this is partly responsible for the reduction in the EW constraints found in [@Cabrer:2011vu; @Cabrer:2011fb; @Carmona:2011ib].
The Pseudo-Scalars. {#sect:PseudoScal}
===================
Having demonstrated that EW constraints will generically be at their minimum for smaller values of $\alpha$, we shall now move on to look at how models with a bulk Higgs can be potentially falsified. Before spontaneous symmetry breaking the model considered here contains four 5D massless gauge fields ($4\times 3$ transverse degrees of freedom) and the Higgs, a complex doublet ($4$ scalar degrees of freedom). After compactification and the breaking of EW symmetry the model contains four 4D massive gauge fields ($4 \times 2$ transverse degrees of freedom, plus $4$ longitudinal degrees of freedom) and a Higgs particle ($1$ scalar degree of freedom). Hence the model must necessarily also include an additional three scalar degrees of freedom. In this section we shall demonstrate that, for $\alpha\sim\mathcal{O}(1)$ or less, such pseudo-scalars will gain masses at the KK scale and hence the observation of a $Z^{\prime}$ or $W^{\prime}$ at the LHC should be associated with the existence of a corresponding charged and neutral scalar.
These pseudo-scalars have been previously investigated for warped spaces in [@Cabrer:2011fb; @Falkowski:2008fz] and are well known in models with universal extra dimensions. As found in the appendix, these pseudo-scalars arise as a mixture of the $A_5$ component of the gauge field and the Higgs components, $\pi_i$, see (\[ phiZDef\]) and (\[PhiWdef\]). Note that the larger the value of $\alpha$, the more these scalars will be dominated by the Higgs components $\pi_i$ and hence the more of the $A_5$ component can be ‘gauged away’.
$k\Delta$ RS $v=10$ $v=5$ $v=3$
----------- ------ -------- ------- -------
0.5 2.45 2.23 1.68 0.87
1 2.45 2.27 1.82 1.07
1.5 2.45 2.30 1.89 1.18
: The mass of the first gauge boson relative to the KK scale, $m_1^{(A_\mu)}/M_{\rm{KK}}$.[]{data-label="tab:KK masses"}
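The RS entry of this table can be reproduced independently. With Neumann conditions on both branes, the RS gauge profiles are combinations of $J_1$ and $Y_1$ Bessel functions, and the KK masses then satisfy $J_0(z)Y_0(z/\Omega)-Y_0(z)J_0(z/\Omega)=0$ with $z=m_n/M_{\rm{KK}}$; for $\Omega=10^{15}$ the first root lies very close to the quoted $2.45$. A self-contained sketch of this check (our own, using the standard power series for $J_0$ and $Y_0$ rather than a library):

```python
import math

def j0(x):
    # Power series for the Bessel function J0, adequate for |x| <~ 10
    term, total, t = 1.0, 1.0, -(x * x) / 4.0
    for m in range(1, 40):
        term *= t / (m * m)
        total += term
    return total

def y0(x):
    # Y0 via its standard series:
    # (2/pi)[(ln(x/2)+gamma) J0(x) + sum_k (-1)^(k+1) H_k (x^2/4)^k / (k!)^2]
    gamma = 0.5772156649015329
    h, term, total, t = 0.0, 1.0, 0.0, (x * x) / 4.0
    for k in range(1, 40):
        h += 1.0 / k
        term *= t / (k * k)
        total += (-1) ** (k + 1) * h * term
    return (2.0 / math.pi) * ((math.log(x / 2.0) + gamma) * j0(x) + total)

def kk_condition(z, omega):
    # Quantisation condition for an RS gauge field with NBC's on both branes
    return j0(z) * y0(z / omega) - y0(z) * j0(z / omega)

def bisect(f, lo, hi, iters=80):
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

OMEGA = 1.0e15
m1 = bisect(lambda z: kk_condition(z, OMEGA), 2.0, 3.0)
print(m1)   # close to the 2.45 quoted in the table
```

The small shift above the first zero of $J_0$ at $2.405$ comes from the $Y_0(z)$ term, which is suppressed by the large logarithm $\ln\Omega$.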
Consistent Boundary Conditions.
-------------------------------
If we begin by considering a gauge field in the 5D space (\[genericMetric \]) then variation of the action yields the boundary term $$\left [\frac{1}{2}\delta A_\mu a^2b^{-1}\eta^{\mu\nu}F_{\nu 5}\right ]_{r=r_{\rm{IR}},r_{\rm{UV}}}=0.$$ In order for such a boundary term to vanish one must impose either NBC’s on the $A_\mu$ components and DBC’s on the $A_5$ components or vice versa. If one is considering just an interval then one can, in theory, impose non-trivial boundary conditions. However if one is compactifying the space over a $S^1/Z_2$ orbifold, such non-trivial boundary conditions are forbidden by gauge invariance across the fixed points[^7]. Hence if one imposes NBC’s on the $A_\mu$ field, in order to gain a zero mode that is associated with the SM field, then one must impose DBC’s on the $A_5$ component.
As for the Higgs particle, the consistent boundary conditions for the Higgs components are either DBC’s, $\pi_i\big|_{r=r_{\rm{IR}},r_{\rm{UV}}}=0$, or non DBC’s $$\label{piBCs}
\left [b^{-1}\partial_5\pi_i-M_{\rm{IR}}\pi_i+\lambda_{\rm{IR}}h^2\pi_i\right ]_{r=r_{\rm{IR}}}=0 \hspace{0.8cm}\mbox{and}\hspace{0.8cm}\left [b^{-1}\partial_5\pi_i-M_{\rm{UV}}\pi_i\right ]_{r=r_{\rm{UV}}}=0.$$ However if we impose DBC’s on the $A_5$ components then, in the unitary gauge, (\[Z5mix\]) and (\[W5mix\]) imply that one must impose DBC’s on the pseudo-scalars, $\phi_{W,Z}\big|_{r=r_{\rm{IR}},r_{\rm{UV}}}=0$. To see that this is consistent we substitute (\[pi3mix\]) and (\[piWMix\]), $$\label{piExpans}
\pi_i=\sum_{n}\left (-\frac{M_{W,Z}a^{2}b^{-2}\partial_5\left (f_n^{(\phi_{W,Z})}\phi_{W,Z}^{(n)}\right )}{m_n^{(\phi_{W,Z})\,2}}\pm\frac{M_{W,Z}^{-1}a^{-2}b^{-1}\partial_5\left (a^{4}b^{-1}M_{W,Z}^2\right )}{m_n^{(\phi_{W,Z})\,2}}f_n^{(\phi_{W,Z})}\phi_{W,Z}^{(n)}\right )$$ where the $\pm$ refers to $\pi_{1,3}$ and $\pi_2$ respectively, into the boundary conditions of $\pi_i$, and note that the second term in (\[piExpans\]) will vanish by the DBC’s. If, on the other hand, we impose DBC’s on the Higgs VEV, $h$, then the first term will always vanish. Alternatively, if we impose non DBC’s (\[HiggVEVBCs\]) then we must impose non DBC’s on the Higgs components (\[piBCs\]).
The upshot of all this is that if NBC’s are imposed on the $A_\mu$ field then the pseudo-scalars must have DBC’s, regardless of the boundary conditions of the Higgs. This should be intuitive if one considers the zero mode of the $A_\mu$ field, which will be a massless 4D gauge field and hence will have 2 transverse degrees of freedom after compactification. By the same counting arguments with which we began this section, the degrees of freedom would clearly not match if the physical pseudo-scalars also gained a zero mode.
The Pseudo-Scalar Masses.
-------------------------
For gauge fields propagating in (\[MetricUsed\]), the pseudo-scalar masses and profiles are found by solving (\[pseudoEOMZ\]) and (\[pseudoEOMW\]), $$\begin{aligned}
\label{phiEOMusedMet}
\partial_5^2f_n^{(\phi_{W,Z})}+\left (-6A^{\prime}+\frac{2M_{W,Z}^{\prime}}{M_{W,Z}}\right )\partial_5f_n^{(\phi_{W,Z})}\hspace{10cm}\nonumber\\
+\left (-4A^{\prime\prime}+8A^{\prime\,2}+2\frac{M_{W,Z}^{\prime\prime}}{M_{W,Z}}-2\frac{M_{W,Z}^{\prime\,2}}{M_{W,Z}^2}-4A^{\prime}\frac{M_{W,Z}^{\prime}}{M_{W,Z}}-M_{W,Z}^2+e^{2A}m^{(\phi_{W,Z})\,2}_n\right )f_n^{(\phi_{W,Z})}=0\end{aligned}$$ where ${}^{\prime}$ denotes $\partial_5$. Typically this equation cannot be solved analytically and hence we must work with numerics. The first pseudo-scalar masses have been plotted in figure \[fig:ScalarMasses\]. For the RS model (\[AdSMetric\]) and (\[RSHiggsVEV\]), this gives $$\frac{M_{W,Z}^{\prime}}{M_{W,Z}}=2k+\alpha k\frac{e^{\alpha kr}-Be^{-\alpha kr}}{e^{\alpha kr}+Be^{-\alpha kr}}\hspace{0.8cm}\mbox{and}\hspace{0.8cm}\frac{M_{W,Z}^{\prime\prime}}{M_{W,Z}}=(4+\alpha^2)k^2+4\alpha k^2\frac{e^{\alpha kr}-Be^{-\alpha kr}}{e^{\alpha kr}+Be^{-\alpha kr}}$$ and so, as $\alpha\rightarrow 0$, (\[phiEOMusedMet\]) will reduce to the same equation of motion as a gauge field of mass $M_{W,Z}$ (\[ZEOM\]), i.e. the mass eigenfunctions will be approximately order one Bessel functions. Also, as $\alpha\rightarrow \infty$ (i.e. a brane localised Higgs), $\frac{M_{W,Z}^{\prime}}{M_{W,Z}}\rightarrow \infty$ and the pseudo-scalars will become infinitely heavy. Likewise if $g\rightarrow 0$ (i.e. the gauge fields become decoupled from the Higgs) then $\phi_{W,Z}\rightarrow 0$ and the field should be considered unphysical. It is also clear from figure \[fig:ScalarMasses\], as well as the above equations, that as $\alpha$ is increased the mass of the pseudo-scalars will always increase. Hence, at tree level, the pseudo-scalars will always be heavier than the first KK gauge field. This feature further adds to the ability of the LHC to rule out this model.
![The masses of the first Z pseudo scalar for the RS model (black), as well as the modified metrics with $v=10$ (blue), $v=5$ (green) and $v=3$ (red) and $k\Delta=0.5$ (dash-dot line), $k\Delta=1$ (solid line) and $k\Delta=1.5$ (dash-dash line). []{data-label="fig:ScalarMasses"}](Figures/MPHIZ.eps){width="80.00000%"}
The Pseudo-Scalar Couplings to Fermions.
----------------------------------------
We can now move on to look at how the pseudo-scalars couple to fermions. Again in the interests of generality we shall work with the generic metric (\[genericMetric \]). The Higgs sector and gauge sector couple to the SM fermions via the Yukawa couplings and the fermion kinetic terms $$\label{ }
S=\int d^5x \bigg ( i\frac{b}{a}\bar{\Psi}\gamma^\mu D_\mu \Psi+i\bar{\Psi}\gamma^5 D_5 \Psi+i\frac{b}{a}\bar{X}\gamma^\mu D_\mu X+i\bar{X}\gamma^5 D_5 X -bY_D\bar{\Psi}\Phi X-bY_U\epsilon_{ab}\bar{\Psi}^a\Phi^{\dag b}X +h.c.\bigg ),$$ where $\Psi\ni\{L,Q\}$ and $X\ni\{e,u,d\}$, as in section \[sect:FermMass\]. The first and third $\gamma^\mu D_\mu$ terms are just the usual gauge fermion interactions. The second and fourth $\gamma^5D_5$ terms will give rise to pseudo-scalar currents interacting with the $A^a_5$ components, for example. With $\gamma^5=\left(\begin{array}{cc}i & 0 \\0 & -i\end{array}\right)$ then $$i\bar{\Psi}\gamma^5 D_5 \Psi\supset i\bar{\Psi}\gamma^5(-ig\tau^aA^a_5)\Psi=-ig\bar{\psi}_L\tau^aA^a_5\psi_R+ig\bar{\psi}_R\tau^aA^a_5\psi_L,$$ where $A_5^a$ are the fifth components of the SU$(2)$ gauge bosons, as in the appendix. If we remember that, after compactification over the orbifold, only the left handed components of $\Psi$ and the right handed components of $X$ will gain a zero mode, then clearly the $A_5^a$ component will only couple to currents containing one SM fermion and one KK fermion or alternatively two KK fermions. Hence, at tree level, the pseudo-scalars will only couple to SM fermion currents via the Yukawa couplings to the Higgs components, $\pi_i$. Bearing in mind that the larger the value of $\alpha$, the more the pseudo-scalars will be dominated by the $\pi_i$ components, we see a first indication that the pseudo-scalars will be more significant for SM processes at larger values of $\alpha$.
At next to leading order, the phenomenology becomes more complex and a proper investigation is beyond the scope of this paper. It is worth commenting, though, that the Higgs particle will couple to the pseudo-scalars via the $|\Phi|^4$ term and so one would anticipate that the Higgs mass will be sensitive to the pseudo-scalar masses. Hence if $\alpha$ is very large and the pseudo-scalars are very heavy, then one risks reintroducing a little hierarchy problem.
Moving onto look at the Yukawa couplings and looking just at the quarks, $$\begin{aligned}
S=\int d^5x\; \frac{b}{\sqrt{2}}\bigg(-Y_D\;\bar{u}_{(\psi)}\pi_{+}d_{(\chi)}-Y_D\;\bar{d}_{(\psi)}(h+H+i\pi_3)d_{(\chi)}\hspace{2cm}\nonumber\\
-Y_U\;\bar{u}_{(\psi)}(h+H-i\pi_3)u_{(\chi)}-Y_U\;\bar{d}_{(\psi)}\pi_{-}u_{(\chi)} +h.c.\bigg )\end{aligned}$$ where $\pi_{\pm}=\frac{1}{\sqrt{2}}(\pi_1\pm i\pi_2)$, we have also split $Q=\left(\begin{array}{c}u_{(\psi)} \\d_{(\psi)}\end{array}\right)$ and the $(\psi,\chi)$ subscripts refer to whether the fermion originated from a SU$(2)$ doublet or a singlet. To find the effective coupling to the SM fermions, we can now plug in (\[pi3mix\]), (\[piWMix\]), carry out a partial integration and use the DBC’s of $\phi_{W,Z}$ to get $$\label{ }
\mathcal{L}_{\rm{eff.}}\subset -\frac{Y}{\sqrt{2}}\left (\int dr\; \frac{a^4b^{-1}M_{W,Z}^2f_n^{(\phi_{W,Z})}}{m_n^{(\phi_{W,Z})\,2}}\partial_5\left (M_{W,Z}^{-1}a^{-2}f_0^{(\psi_L)}f_0^{(\chi_R)}\right )\right )\;\phi_{W,Z}^{(n)}\psi_L^{(0)}\chi_R^{(0)}\;\equiv Y_{\rm{eff.}}^{(\phi_{W,Z}\psi\chi)}\phi_{W,Z}^{(n)}\psi_L^{(0)}\chi_R^{(0)}.$$ For the spaces considered in this paper (\[MetricUsed\]), this gives $$\label{ }
Y_{\rm{eff.}}^{(\phi_W\psi\chi)}=-\frac{gYN_\psi N_\chi}{2\sqrt{2}\,m_n^{(\phi_{W})\,2}}\int dr\; \left (2A^{\prime}h-h^{\prime}-(c_L-c_R)kh\right )e^{-2A-(c_L-c_R)kr}f_n^{(\phi_W)},$$ where $N_{\psi ,\chi}$ are the fermion normalisation constants. At first sight, one may be alarmed to see that this effective Yukawa coupling appears to be dependent on the gauge coupling, $g$. However this is not the case since the normalisation constant of $f_n^{(\phi_W)}$, obtained from (\[PhiOrthogRel\]), will contain a factor of $g^{-1}$. Likewise the effective coupling will be independent of $N_h$ at tree level. This is an interesting coupling for a number of reasons. Firstly it gives a good example of a scenario in which relatively small changes in the geometry can result in significant changes in the phenomenology. In the RS model, the ‘$e^{2kr}$ factor’ in the $h^{\prime}$ term will exactly cancel with the $2A^{\prime}h$ term and the effective coupling will be given by $$\begin{aligned}
Y_{\rm{eff.}}^{(\phi_W\psi\chi)}=-\frac{YgN_hk^2}{2\sqrt{2}}\sqrt{\frac{(1-2c_L)(1+2c_R)}{(\Omega^{1-2c_L}-1)(\Omega^{1+2c_R}-1)}}\frac{1}{m_n^{(\phi_W)\,2}} \hspace{6cm}\nonumber\\
\hspace{1cm}\times\int dr\; f_n^{(\phi_W)}\left (e^{(-c_L+c_R)kr}\left ((-c_L+c_R)(e^{\alpha kr}+Be^{-\alpha kr})-\alpha(e^{\alpha kr}-Be^{-\alpha kr})\right )\right ).\hspace{-0.5cm}\end{aligned}$$ This coupling is very sensitive to the value of $\alpha$, as can be seen in figure \[fig:ScalarCoupl\]. For large values of $\alpha$ the pseudo-scalars would become strongly coupled and one would lose perturbative control of the theory. On the other hand, this cancellation does not occur in spaces with a modified metric. The resulting effective coupling has additional terms which result in the coupling being significantly less $\alpha$-dependent.
The second interesting feature of this coupling is that there exist special fermion positions (when $2A^{\prime}h-h^{\prime}-(c_L-c_R)kh\approx0$) for which the pseudo-scalars’ tree level coupling to the SM fermions is significantly reduced. This is analogous to the case of the KK gauge fields coupling to SM fermions, when $c_L=-c_R=0.5$. These fermion positions typically occur when the fermions are heavily peaked towards the IR and, for many of the modified metrics considered, this occurs for bulk mass parameters not plotted in figure \[fig:ScalarCoupl\].
Clearly there is significant room for further work concerning the phenomenological implications of these pseudo-scalars. However, by going further in this direction we risk digressing too far from the central focus of this paper, that of investigating the fermion mass hierarchy and looking for possible motivations for considering a small value of $\alpha$ to be more plausible than a large value. Nevertheless, since we have suggested that the observation, or lack of observation, of such scalars could play an important role in excluding this model, it is worth making some comments concerning their phenomenology. In addition to the Yukawa couplings, the pseudo-scalars will also couple to the $\gamma$, $W$ and $Z$ gauge fields, the KK gravitons, the radion and the Higgs itself. Also, due to their large masses, one would anticipate that they could decay via a KK particle. Hence any study of the production and decay of such particles would be quite involved and should really be conducted as a separate piece of work.
Returning to the focus of this paper, after one has diagonalised the fermion mass matrix (\[FermionMassMatrix\]), this model would give rise to the possibility of tree level pseudo-scalar mediated FCNC’s. The constraints from these FCNC’s have the potential to force the KK scale to a level at which it becomes questionable whether one has resolved the hierarchy problem. The extent to which such FCNC’s would be suppressed is determined by the size of the effective Yukawa couplings as well as their misalignment with the fermion masses or, equivalently, their non-universality with respect to fermion position. This has been plotted in figure \[fig:ScalarCoupl\]. It is found that, when the fermions are peaked towards the UV ($c>0.5$), although this coupling is significantly misaligned and non-universal, it is also very small and hence such FCNC’s would be heavily suppressed. However, for the RS model, as $\alpha$ is increased the size of this coupling increases and it rapidly becomes potentially problematic. Surprisingly, this effect is not as severe in spaces with a modified metric, for the reasons discussed above.
Implications for Flavour Physics. {#sect:flavour}
=================================
Of course, this model would not just receive constraints from pseudo-scalar mediated FCNC’s. It is well known that models with warped extra dimensions, that seek to explain the fermion mass hierarchy, suffer from severe constraints from flavour physics (see for example [@Huber:2003tu; @Agashe:2004cp; @Csaki:2008zd; @Blanke:2008zb; @Casagrande:2008hr; @Bauer:2009cf]). These constraints fall into three broad categories:
- Those that are dominated by the modification of a SM coupling. These include, for example, corrections to the $Z\bar{b}b$ vertex as well as rare leptonic decays such as $\mu\rightarrow ee\bar{e}$.
- Those that are dominated by the tree level exchange of a KK particle. For example, a particularly stringent constraint arises from $\epsilon_K$ that receives large contributions from the exchange of a KK gluon.
- Those that arise at next-to-leading order via additional penguin diagrams. In particular, constraints from $\mu\rightarrow e\gamma$ [@Agashe:2006iy; @Csaki:2010aj] and $b\rightarrow s\gamma$ [@Blanke:2012tv] transitions.
![Coupling of the Z zero mode to fermions at positions $c=c_L=-c_R$ for the RS model (black) and the IR modified spaces with $k\Delta=1$ and $v=10$ (blue), $v=5$ (green), $v=3$ (red). We use the Higgs exponent of $\alpha =0.01$ (solid lines), $\alpha=1.01$ (dash-dash line) and $\alpha=5$ (dash-dot line). In the interests of making a fair comparison we take a KK scale such that the mass of the first KK gauge field is the same, i.e. for RS $M_{\rm{KK}}=2$ TeV, for $v=10$ $M_{\rm{KK}}=2.15$ TeV, for $v=5$ $M_{\rm{KK}}=2.70$ TeV and for $v=3$ $M_{\rm{KK}}=4.59$ TeV. Also $\Omega=10^{15}$. []{data-label="fig:Zcoupl"}](Figures/Zcoupl.eps){width="85.00000%"}
While it certainly lies beyond the scope of this paper to conduct a full and thorough investigation of flavour in models with a bulk Higgs, we are in a position to see how the first two categories of constraints would be affected. It is more difficult to estimate the effect on the next-to-leading-order processes due to the number of conflicting factors contributing to these constraints. In particular there would be modifications to both the KK masses and the couplings, plus additional diagrams coming from the pseudo-scalars and the KK Higgs bosons.
Modifications to the SM Fermion Couplings.
------------------------------------------
Of particular importance, to many constraints from flavour physics, is the modification to the $Z$ fermion coupling, which has been plotted in figure \[fig:Zcoupl\]. This coupling not only adds a direct constraint on the KK scale, via rare lepton decays and the partial decay width of $Z\rightarrow\bar{b}{b}$, it also constrains the extent to which a split fermion scenario is allowed. Such a modification occurs because, after spontaneous symmetry breaking, the $Z$ zero mode is not flat but modified in the IR. This gives rise to non-universal couplings to fermions in different locations. As one would expect, the flatter the Higgs VEV the smaller this modification will be. Hence the smaller the value of $\alpha$ the smaller these constraints will be. It is this effect which is partly responsible for the reduction in the constraints from rare lepton decays seen in [@Archer:2011bk; @Atkins:2010cc]. However, it is also found that modifications to the geometry typically enhance the deformation of the $Z$ zero mode relative to AdS${}_5$.
The fermion zero mode profiles will also receive corrections arising from the mixings between the SU$(2)$ singlets and doublets, see (\[fermCorrections\]). These corrections can also give a sizeable correction to the $Z\rightarrow\bar{b}{b}$ vertex [@Casagrande:2008hr]. The size of these corrections is determined by the size of $Y_{LR}^{(0,n)}$ for $n\geqslant 1$. Whether this correction increases or decreases, as one increases $\alpha$, is not entirely clear since it is dependent on whether the fermion zero mode profiles are peaked towards the UV brane or towards the IR brane. Hence it is sensitive to the preferred fermion positions (see the following section). It is also sensitive to how quickly $Y_{LR}^{(0,n)}$ decreases as one increases $n$. It is straightforward to check that $Y_{LR}^{(0,n)}$ will drop away more quickly for smaller values of $\alpha$. In other words, as one shifts the Higgs VEV away from the IR brane one reduces the mixing between fermion zero modes and the higher KK fermion modes.
Another source of flavour violation in these models would be the Higgs particle couplings to the SM fermions. Since the Higgs particle profile (\[HiggsPartProfile\]) is not the same as the Higgs VEV (\[RSHiggsVEV\]), after diagonalising (\[FermionMassMatrix\]), the resulting Higgs particle effective Yukawa couplings will not be flavour diagonal [@Azatov:2009na]. The size of this effect will be determined by the difference between the Higgs VEV and the Higgs profile which, after expanding (\[HiggsPartProfile\]), is found to be $\sim\mathcal{O}\left (\left (m_0^{(H)}e^{kr}/k\right )^2 \frac{1}{2\alpha}\right )$. Hence this is the one counterexample considered in which reducing $\alpha$ would increase the size of the correction. However, in practice it is found, for all spaces considered, that when $m_0^{(H)}\ll M_{\rm{KK}}$ this effect is quite small.
FCNC’s from the Exchange of a KK Particle.
------------------------------------------
As already mentioned, models with warped extra dimensions and bulk fermions suffer from stringent constraints from FCNC’s arising at tree level via the exchange of a KK particle, in particular from $K^0-\bar{K}^0$ mixing and the observable $\epsilon_K$. Such FCNC’s occur at tree level because the KK gluon has a non-universal coupling to fermions with different bulk mass parameters. Models with warped extra dimensions and a large warp factor have a natural suppression of such FCNC’s referred to as the RS-GIM mechanism (see for example [@Huber:2003tu]). In particular, fermions peaked towards the UV ($c>0.5$) have approximately universal couplings to the KK gluon, see figure \[gluCoupl\]. The problem really arises because typically it is found that the heavier quarks have to sit quite far towards the IR and hence have non-universal couplings.
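The statement that order-one shifts in the bulk mass parameters translate into exponentially different effective Yukawa couplings (which is what forces the heavy quarks towards the IR) can be illustrated numerically. The sketch below uses the standard RS expression for the IR-brane value of a canonically normalised fermion zero mode, $f(c)=\sqrt{(1-2c)/(1-\Omega^{2c-1})}$; this formula is not derived in this paper and is quoted here purely for orientation, with $\Omega=10^{15}$ as in the text and all numerical assignments illustrative.

```python
import math

OMEGA = 1e15  # warp factor, Omega = 10^15 as used in the paper


def f(c):
    """IR-brane value of a canonically normalised RS fermion zero mode
    for localisation parameter c (standard RS result, quoted not derived)."""
    if abs(c - 0.5) < 1e-12:
        return math.sqrt(1.0 / math.log(OMEGA))  # smooth c -> 1/2 limit
    return math.sqrt((1 - 2 * c) / (1 - OMEGA ** (2 * c - 1)))


def y_eff(c1, c2, y5=1.0):
    """Toy effective 4D Yukawa: an O(1) 5D Yukawa y5 dressed by the
    zero-mode overlap factors of the two chiralities (both parameters
    here are taken in the 'UV-localised for c > 1/2' convention)."""
    return y5 * f(c1) * f(c2)


# O(1) spread in c  ->  orders-of-magnitude spread in effective Yukawas:
heavy = y_eff(0.40, 0.40)   # IR-leaning, 'top-like' assignment
light = y_eff(0.65, 0.65)   # UV-leaning, 'up-like' assignment
# heavy/light is O(10^4) for Omega = 10^15, despite |Delta c| = 0.25
```

The same factors make the UV-localised light fermions nearly decouple from IR-peaked KK modes, which is the overlap picture behind the RS-GIM suppression described above.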
The effect on this constraint of allowing the Higgs to propagate in the bulk has been studied for the RS model [@Agashe:2008uz], spaces with a modified metric [@Cabrer:2011qb] and spaces with a soft wall [@Archer:2011bk]. All three papers found a sizeable reduction in the constraint relative to that of the RS model with a brane localised Higgs. It is easily checked that the couplings of a KK gluon propagating in the modified metrics are not significantly modified relative to those of the RS model, see figure \[gluCoupl\]. This coupling is also found to be independent of $\alpha$. Hence it is suspected that the primary reason for this reduction in the $\epsilon_K$ constraint is a shift in the preferred fermion positions. In particular, the heavy quarks can sit further towards the UV, where the coupling is more universal. When referring to the preferred fermion positions we are referring to the bulk mass parameters, $c_{L,R}$, that give the correct masses and mixing angles assuming anarchic order one 5D Yukawa couplings, $\tilde{Y}$.
However, none of the above papers investigated the $\alpha$ dependence of this shift in the fermion position. A priori it is not obvious how the fermion positions are affected since this is based on two conflicting factors. Notably, the ‘maximum fermion mass’ decreases as $\alpha$ is reduced (see figure \[fig:MaxMass\]) but the gradient of the slope in figure \[fig:BulkHiggs\] also reduces. Hence this must be calculated explicitly. To do so we minimise a function that inputs the nine quark bulk mass parameters ($c_L$, $c^u_R$ and $c_R^d$) and computes the $\chi^2$ value for the mean quark masses, mean CKM mixing angles and mean Jarlskog invariant, taken over 5000 anarchic Yukawa matrices with $\frac{1}{3}\leqslant |\tilde{Y}|\leqslant 3$. The quark masses are run down to the mass of the first KK gluon from the $2$ GeV values [@Nakamura:2010zzi], $$\begin{aligned}
m_u=2.4\pm 0.7\mbox{ MeV}\hspace{1cm}m_c=1.29^{+0.05}_{-0.11}\mbox{ GeV}\hspace{1cm} m_t=173\pm 1.5\mbox{ GeV}\nonumber\\
m_d=4.9\pm 0.8\mbox{ MeV}\hspace{1.0cm}m_s=100^{+30}_{-20}\mbox{ MeV}\hspace{1.3cm} m_b=4.2^{+0.18}_{-0.06}\mbox{ GeV}\hspace{0.2cm}\nonumber\end{aligned}$$ and the mixing angles used are [@Nakamura:2010zzi; @Bona:2007vi] $$V_{us}=0.2254\pm0.00065\hspace{0.5cm}V_{cb}=0.0408\pm0.00045\hspace{0.5cm}V_{ub}=0.00376\pm0.0002\hspace{0.5cm}J=2.91^{+0.19}_{-0.11}\times 10^{-5}.$$ Due to the size of the parameter space, such a minimisation routine will typically find a local minimum and not a global minimum. Hence this is repeated 200 times with random initial guesses in the ranges $c_L=[0.66\pm0.1,\; 0.58\pm 0.1,\; 0.4\pm0.2]$, $c_R^u=[-0.6\pm0.1,\; -0.51\pm 0.1,\; 0.2\pm0.5]$ and $c_R^d=[-0.66\pm0.1,\; -0.64\pm0.1,\; -0.59\pm0.1]$. Of these 200 configurations, the 40 best $\chi^2$ values are taken and the mean bulk mass parameters have been plotted in figure \[fig:FermPso\] for different values of $\alpha$. Although some prior knowledge of warped extra dimensions has been used in choosing the initial guesses, it is still found that all the bulk mass parameters converge to a preferred value.
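The fitting procedure just described — minimise a $\chi^2$ built from observables averaged over anarchic Yukawa samples, restarted from many random initial guesses, keeping the best local minima — can be sketched as follows. This is a deliberately simplified, hypothetical two-generation toy: the zero-mode overlap function, the crude stochastic local search and all numerical settings are illustrative assumptions, not the paper's actual code, and only the multi-start structure is faithful to the text.

```python
import math
import random

OMEGA = 1e15   # warp factor, Omega = 10^15 as in the paper
V4 = 174.0     # illustrative 4D electroweak scale in GeV


def f(c):
    """Toy RS zero-mode overlap factor (standard RS result, illustrative)."""
    if abs(c - 0.5) < 1e-12:
        return math.sqrt(1.0 / math.log(OMEGA))
    return math.sqrt((1 - 2 * c) / (1 - OMEGA ** (2 * c - 1)))


def mean_masses(cs, n_samples=200):
    """Mean toy masses over anarchic |Y| in [1/3, 3], mimicking the paper's
    averaging over 5000 random Yukawa matrices (far fewer samples here)."""
    c_L1, c_L2, c_R1, c_R2 = cs
    m1 = m2 = 0.0
    for _ in range(n_samples):
        m1 += random.uniform(1 / 3, 3) * f(c_L1) * f(-c_R1) * V4
        m2 += random.uniform(1 / 3, 3) * f(c_L2) * f(-c_R2) * V4
    return m1 / n_samples, m2 / n_samples


def chi2(cs, targets=(1.29, 173.0)):  # charm- and top-like targets in GeV
    # log-space chi^2 copes with masses spread over many decades
    return sum(math.log(m / t) ** 2
               for m, t in zip(mean_masses(cs), targets))


def local_search(x0, steps=400, sigma=0.02):
    """Crude stochastic local minimiser (a stand-in for the real routine)."""
    best, f_best = list(x0), chi2(x0)
    for _ in range(steps):
        cand = [c + random.gauss(0.0, sigma) for c in best]
        f_cand = chi2(cand)
        if f_cand < f_best:
            best, f_best = cand, f_cand
    return best, f_best


random.seed(1)
# Multi-start: random initial guesses for (c_L1, c_L2, c_R1, c_R2),
# keep the best local minimum found -- the structure used in the text.
results = []
for _ in range(10):
    x0 = [random.uniform(0.3, 0.7), random.uniform(0.3, 0.7),
          random.uniform(-0.7, 0.3), random.uniform(-0.7, 0.3)]
    results.append(local_search(x0))
best_cs, best_chi2 = min(results, key=lambda r: r[1])
```

In the paper the $\chi^2$ also includes the CKM mixing angles and the Jarlskog invariant and runs over nine bulk mass parameters with 200 restarts; the skeleton above only preserves the multi-start logic and the averaging over anarchic Yukawas.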
\
\
\
Inevitably, due to the size of the parameter space, there are large numerical uncertainties associated with these preferred fermion positions. Also, one could argue that a better analysis would include more observables, such as $\epsilon_K$ and the Z partial decay width, $R_b$, in the fit. These would slightly shift the preferred values but it is suspected that the basic result would still hold[^8]: despite these uncertainties, there is a generic trend for the fermions to sit further towards the UV for small values of $\alpha$. This would lead to their couplings being more universal and hence result in a reduction in the constraints from not just $\epsilon_K$ but a significant number of the $\Delta F=2$ and $\Delta F=1$ processes. One can see from the analysis made in [@Cabrer:2011qb; @Agashe:2008uz; @Archer:2011bk] that relatively small changes in the fermion positions can result in quite significant reductions in these constraints.
Finally, it should be pointed out that another source of flavour violation in these models is Higgs mediated FCNC’s arising from the exchange of a KK fermion. This has not been considered here since it was studied in [@Azatov:2009na] and found to be largely independent of $\alpha$.
Conclusions. {#sect:Conc}
============
The primary focus of this paper has been to investigate the phenomenological changes, in RS type scenarios, as one changes the exponent of a bulk Higgs VEV. The motivation for doing so is that, when the exponent is close to two and one assumes order one Yukawa couplings, the range of fermion zero mode masses is in remarkable agreement with the observed fermion mass hierarchy. After first introducing the model, it was demonstrated that, when brane localised potentials exist, non-tuned Higgs VEVs will not be flat but will be peaked towards the IR. Hence bulk Higgs scenarios still offer a potential resolution to the gauge hierarchy problem. However, models with a bulk Higgs gain an additional positive contribution to the $|\Phi|^2$ term in the effective potential which may result in EW symmetry not being broken for large values of the Higgs exponent, $\alpha$. Next the fermion mass hierarchy was considered and it was found that (assuming order one Yukawa couplings) the Dirac mass terms of the fermion zero modes stretch from the EW scale to a factor of about $\Omega^{-1-\alpha}$ below it, see figure \[MinMaxMass\] for the calculated values. We then proceeded to demonstrate that a considerable number of the phenomenological constraints will be at a minimum for smaller values of $\alpha$. In particular it is shown, for generic Higgs potentials and generic geometries, that the flatter the Higgs VEV the smaller the contribution to the $T$ parameter. It is also shown that a large class of constraints from flavour physics will be reduced for smaller values of $\alpha$. This is related to three effects. Firstly, for the RS model, the size of the pseudo-scalar coupling to the SM fermions is enhanced as one increases $\alpha$. This coupling is not flavour diagonal and would give rise to pseudo-scalar mediated FCNC’s. This effect is significantly reduced in spaces with a modified metric.
Secondly, the non-universality of the $W$ and $Z$ couplings to SM fermions is reduced for smaller values of $\alpha$. Thirdly, it is found that there is a general trend for the fermions to sit further towards the UV for smaller values of $\alpha$. This would result in their couplings to KK gauge fields being more universal with respect to flavour.
All of the above motivations seem to favour a small value of $\alpha$ and are independent of the requirement of generating the correct fermion mass hierarchy. One also has a potential lower bound of $\alpha\geqslant 0$ coming from the Breitenlohner-Freedman bound, although this is not strictly applicable here. However, the RS model is not a UV complete theory and it is plausible that attempts to embed such a model in a more fundamental theory could result in the Breitenlohner-Freedman bound being applicable. Another lower bound arises when one considers the 4D dual theory. The corresponding Higgs operator, $\mathcal{O}$, would have a scaling dimension of $2+\alpha$. In order to offer a potential resolution to the gauge hierarchy problem one requires that the operator $\mathcal{O}^\dag\mathcal{O}$ is not relevant, which in turn implies that $\alpha\geqslant 0$ [@Luty:2004ye]. Hence it is compelling to saturate these two bounds, for the largely phenomenological reasons given in this paper, and claim that the optimal value of $\alpha$ is zero (or, equivalently, that the optimal exponent of the Higgs VEV and scaling dimension of the Higgs operator is two).
Even if one accepts that $\alpha$ should be close to zero, more work is still required before the fermion mass hierarchy can be understood. In particular, models with a bulk Higgs offer no explanation of, for example, why the leptons are lighter than the quarks, or why there is a six orders of magnitude gap between the electron mass and the neutrino masses. We have also focused on the Dirac mass term. It was argued in [@Agashe:2008fe] that, in RS type scenarios, constraints from lepton flavour violation favour Dirac neutrinos over Majorana neutrinos. However, there is no symmetry forbidding a Majorana mass term. The validity of this possible explanation of the fermion mass hierarchy is conditional on such mass terms being either forbidden or existing at the same mass scale as the Dirac term. There are a number of possibilities for achieving this but all would require an extension of this minimal model and so we do not consider them here.
Further work is also required in order to fully understand the phenomenological implications of both modifying the geometry and the pseudo-scalars. With regards to the modification of the geometry, although here we have not fully explored the possible parameter space, we still find significant phenomenological changes relative to the RS model. In particular it is found that, while the couplings do not change significantly, the relationship between the curvature and size of the extra dimension and the KK mass eigenvalues does change. It is believed that this is partly responsible for the reduction in the EW constraints [@Cabrer:2011vu; @Cabrer:2011fb; @Carmona:2011ib]. With regards to the additional scalar degrees of freedom, assuming the LHC is able to reach the KK scale, the prediction of a pseudo-scalar for each $W^{\prime}$ and $Z^{\prime}$ makes models with a bulk Higgs very falsifiable. Although again we must leave a full study of such scalars to future work, we believe there are a number of reasons for considering models with a bulk Higgs to be an interesting extension of the description of flavour that already exists in the RS model.
Acknowledgements. {#acknowledgements. .unnumbered}
=================
I am very grateful to Kristian McDonald for a number of useful discussions and comments. This work has also benefited from a number of useful discussions with Mathias Neubert, Susanne Westhoff and Thomas Flacke. This research was supported by the grant 05H09UME of the German Federal Ministry for Education and Research (BMBF).
Appendix
========
In this appendix we shall derive many of the results used throughout this paper. Here we will work with a generic 5D warped space of the form, $$\label{genericMetric }
ds^2=a(r)^2\eta^{\mu\nu}dx_\mu dx_\nu-b(r)^2dr^2,$$ with $r\in [r_{\rm{UV}}, r_{\rm{IR}}]$. Without loss of generality, $b(r)$ can always be set to one with the coordinate transformation $r\rightarrow\tilde{r}=\int^{r}_{c} b(\hat{r})d\hat{r}$. However, by not doing so, one can analytically express a greater range of spaces and also one can easily modify the following expressions in order to use a conformally flat metric.
Electroweak Symmetry Breaking with a Bulk Higgs.
------------------------------------------------
Let us begin by considering a bulk $\rm{SU}(2)\times\rm{U}(1)$ gauge symmetry in addition to a bulk Higgs, $\Phi$, $$\label{ }
S=\int d^5x\;\sqrt{G}\left (-\frac{1}{4}F_{MN}^a F_a^{MN}-\frac{1}{4}B_{MN}B^{MN}+|D_M\Phi|^2-V(\Phi)\right ),$$ where the covariant derivative is given by $D_M=\partial_M-igA_M^a\tau^a-i\frac{1}{2}g^{\prime}B_M$, with the three $\rm{SU}(2)$ generators $\tau^a=\sigma^a/2$ and $a=\{1,2,3\}$. After spontaneous symmetry breaking the Higgs would acquire a non-zero VEV, $\langle \Phi \rangle=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0 \\h(r)\end{array}\right)$, where $h$ would be the solution of $$\label{hVEVeom}
\partial_5\left (a^4b^{-1}\partial_5 h\right )-a^4b\partial_\Phi V(\Phi)|_{\Phi=h}=0.$$ Clearly in order to actually break EW symmetry one must have a potential that does not admit $h=0$ as a solution. This can be done by either using the bulk potential, $V(\Phi)$, or by imposing non-trivial boundary conditions. We can now expand $\Phi$ around $h$ such that, $$\label{PHIDEF}
\Phi(x,r)=\frac{1}{\sqrt{2}}\left(\begin{array}{c}\pi_1(x,r)+i\pi_2(x,r) \\h(r)+H(x,r)+i\pi_3(x,r)\end{array}\right),$$ and make the usual field redefinitions, $$\begin{aligned}
W_M^{\pm}=\frac{1}{\sqrt{2}}\left (A_M^1\mp iA_M^{2}\right )\hspace{1cm}Z_M=\frac{1}{\sqrt{g^2+g^{\prime\,2}}}\left (gA_M^3-g^{\prime}B_M\right ) \hspace{1cm}
A_M=\frac{1}{\sqrt{g^2+g^{\prime\,2}}}\left (g^{\prime}A_M^3+gB_M\right ).\end{aligned}$$ The Higgs particle itself will satisfy $$\label{HiggsPartODE}
a^2b\partial_\mu\partial^\mu H-\partial_5(a^4b^{-1}\partial_5H)+a^4b\partial_\Phi V(\Phi)|_{\Phi=H}=0.$$ It is also useful to define the quantities $$\label{ MWZdef}
M_Z(r)\equiv\frac{\sqrt{g^2+g^{\prime\,2}}\,h(r)}{2}\hspace{0.5cm}\mbox{and}\hspace{0.5cm} M_W(r)\equiv \frac{g\,h(r)}{2}.$$ These, of course, should not be confused with the 4D W and Z masses. We must now include a gauge fixing term which is chosen in order to cancel the mixing between the 4D gauge fields, $A_\mu^a$ and the 4D scalars, $A_5^a$ and $\pi_a$. In particular, working in the $R_\xi$ gauge, we introduce the gauge fixing term $$\begin{aligned}
\mathcal{L}_{G.F.}=-\frac{b}{2\xi}\left (\partial_\mu Z^\mu-\xi b^{-1}\left (\partial_5(a^2b^{-1}Z_5)+a^2bM_Z\pi_3\right )\right )^2-\frac{b}{2\xi}\left (\partial_\mu A_1^\mu-\xi b^{-1}\left (\partial_5(a^2b^{-1}A^1_5)-a^2bM_W\pi_2\right )\right )^2 \nonumber\\
-\frac{b}{2\xi}\left (\partial_\mu A_2^\mu-\xi b^{-1}\left (\partial_5(a^2b^{-1}A^2_5)+a^2bM_W\pi_1\right )\right )^2.\end{aligned}$$ If, for the moment, we just focus on the $Z$ field then the equations of motion are found to be $$\begin{aligned}
-b\left (\partial^\nu Z_{\nu\mu}+\frac{1}{\xi}\partial_\mu(\partial_\nu Z^\nu ) \right )+\partial_5\left (a^2b^{-1}\partial_5Z_\mu\right )-a^2bM_Z^2Z_\mu=0, \\
\partial_\mu\partial^\mu Z_5+a^2M_Z\partial_5\pi_3-a^2M_Z\left (\frac{\partial_5h}{h}\right )\pi_3+a^2M_Z^2Z_5-\xi\partial_5\left (b^{-1}\left (\partial_5(a^2b^{-1}Z_5)+a^2bM_Z\pi_3\right )\right )=0,\label{Z5EOM}\\
\partial_\mu\partial^\mu \pi_3-a^{-2}b^{-1}\partial_5\left (a^4b^{-1}\partial_5\pi_3\right )-a^{-2}b^{-1}\partial_5\left (a^4b^{-1}M_ZZ_5\right )-a^2b^{-2}M_Z\left (\frac{\partial_5h}{h}\right )Z_5+a^2\partial_{\Phi} V(\Phi)|_{\Phi=\pi_3}\hspace{1cm}\nonumber\\+\xi M_Zb^{-1}\left (\partial_5(a^2b^{-1}Z_5)+a^2bM_Z\pi_3\right )=0.\label{PI3EOM}\end{aligned}$$ To find the masses of the 4D Z bosons, one would expand $Z_\mu$ in terms of orthogonal mass eigenstates, i.e. make a KK decomposition, $Z_\mu(x,r)=\sum_nf_n^{(Z)}(r)Z_\mu^{(n)}(x)$ such that $\int dr\, bf_n^{(Z)}f_m^{(Z)}=\delta_{nm}$. The masses would then be defined by the 4D equations of motion, $\partial^\nu Z^{(n)}_{\nu\mu}+\frac{1}{\xi}\partial_\mu(\partial^\nu Z^{(n)}_\nu )=m_n^{(Z)\,2}Z_\mu^{(n)}$, and found by solving for the profiles, $$\label{ZEOM}
\partial_5\left (a^2b^{-1}\partial_5f_n^{(Z)}\right )-a^2bM_Z^2f_n^{(Z)}+bm_n^{(Z)\,2}f_n^{(Z)}=0.$$ The $W$ and photon profiles, $f_n^{(W)}$ and $f_n^{(\gamma)}$, are found using the same equation but with $M_Z$ replaced by $M_W$ and $0$, respectively.
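For orientation, it may help to recall the standard RS limit of (\[ZEOM\]): with $a=e^{-kr}$, $b=1$ and the symmetry-breaking term switched off ($M_Z\rightarrow 0$), the profile equation is solved by first-order Bessel functions. This limit is not used directly in the text and is quoted here only as a consistency check:

```latex
% RS limit of (ZEOM): a = e^{-kr}, b = 1, M_Z -> 0
\partial_5\left(e^{-2kr}\,\partial_5 f_n\right)+m_n^{2}\,f_n=0
\quad\Longrightarrow\quad
f_n(r)=\frac{e^{kr}}{N_n}\left[J_1\!\left(\frac{m_n}{k}\,e^{kr}\right)
+b_n\,Y_1\!\left(\frac{m_n}{k}\,e^{kr}\right)\right],
```

with the constants $b_n$ and the mass eigenvalues $m_n$ fixed by the boundary conditions at $r_{\rm{UV}}$ and $r_{\rm{IR}}$, and $N_n$ by the normalisation $\int dr\, b f_n^2=1$.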
The Gauge-Goldstone Boson Equivalence Theorem Cross Check.
----------------------------------------------------------
There are now two remaining scalar degrees of freedom. One will be eaten by the longitudinal polarisation states of the 4D massive $Z$ fields and will be the unphysical Goldstone boson. The other will be a physical pseudo-scalar. To find the Goldstone bosons we simply note that such fields must have a completely gauge-dependent mass, such that in the unitary gauge ($\xi\rightarrow \infty$) the Goldstone bosons become infinitely heavy and hence should be considered unphysical. From (\[Z5EOM\]) and (\[PI3EOM\]) it is straightforward to see that the Goldstone boson should be given by $$\label{GoldZDef}
\mathcal{G}_Z=b^{-1}\left (\partial_5\left (a^2b^{-1}Z_5\right )+a^2bM_Z\pi_3\right ).$$ However, in order for the gauge-Goldstone boson equivalence theorem to hold, in the 4D theory, one necessarily requires that the profiles of the Goldstone boson, $f_n^{(\mathcal{G}_Z)}$, should match those of the Z boson, $f_n^{(Z)}$. Otherwise the 4D effective couplings would be different for the two fields and hence the scattering amplitudes would not be the same. This equivalence has been found in other extra dimensional scenarios, such as when a gauge field propagates in more than five dimensions [@McDonald:2009hf]. To check that the profiles do match we must find the equations of motion for the Goldstone bosons. In particular, taking $\partial_5(a^2b^{-1}( \ref{Z5EOM}))$, adding $a^2bM_Z(\ref{PI3EOM})$ and noting that $\partial_5(a^4b^{-1}M_Z^2Z_5)-M_Z\partial_5(a^4b^{-1}M_ZZ_5)-a^4b^{-1}M_Z^2(\frac{\partial_5h}{h})Z_5=0$ gives $$\partial_\mu\partial^\mu\mathcal{G}_Z-\frac{\sqrt{g^2+g^{\prime\;2}}}{2}\partial_5\left (a^4b^{-1}\partial_5h\right )\pi_3+a^4bM_Z\partial_{\Phi} V(\Phi)|_{\Phi=\pi_3}-\xi b^{-1}\partial_5\left (a^2b^{-1}\partial_5 \mathcal{G}_Z\right )-\xi a^2M_Z^2\mathcal{G}_Z=0.$$ For all Higgs potentials considered in this paper, the second and third gauge-independent terms will cancel using (\[hVEVeom\]). One would anticipate that such a cancellation would occur for all Lorentz and gauge invariant potentials, but this is not verified here. With such a cancellation, one can proceed to make a KK decomposition $\mathcal{G}_Z=\sum_n f_n^{(\mathcal{G}_Z)}(r)\mathcal{G}^{(n)}_Z(x)$ such that $\partial_\mu\partial^\mu\mathcal{G}^{(n)}_Z=-\xi m_n^{(\mathcal{G}_Z)\,2}\mathcal{G}_Z^{(n)}$ and the profiles will be given by $$\label{ }
\partial_5\left (a^2b^{-1}\partial_5f_n^{(\mathcal{G}_Z)}\right )-a^2bM_Z^2f_n^{(\mathcal{G}_Z)}+bm_n^{(\mathcal{G}_Z)\,2}f_n^{(\mathcal{G}_Z)}=0,$$ in agreement with (\[ZEOM\]).
The Physical Pseudo-Scalars.
----------------------------
To find the remaining physical degree of freedom we must take a combination of (\[Z5EOM\]) and (\[PI3EOM\]) such that the mass term is gauge independent. This can be done by adding (\[Z5EOM\]) to $\partial_5(M_Z^{-1}(\ref{PI3EOM}))$ which, after a little algebra, gives $$\label{pseudoEOMZ}
\partial_\mu\partial^\mu \phi_Z-\partial_5\left (a^{-2}b^{-1}M_Z^{-2}\partial_5\left (a^4b^{-1}M_Z^2\phi_Z\right )\right )+a^2M_Z^2\phi_Z=0.$$ We have again used the cancellation $h^{-1}\partial_5(a^4b^{-1}\partial_5h)+a^4b\partial_{\Phi}V(\Phi)|_{\Phi=\pi_3}=0$, as well as defining $$\label{ phiZDef}
\phi_Z\equiv \partial_5\left (M_Z^{-1}\pi_3\right )+Z_5.$$ This equation agrees with equivalent expressions derived previously in [@Cabrer:2011fb; @Falkowski:2008fz]. In order to find the effective action for these pseudo-scalars, we again make a KK decomposition, $\phi_Z(x,r)=\sum_n f_n^{(\phi_Z)}(r)\phi_Z^{(n)}(x)$, such that $\partial_\mu\partial^\mu\phi_Z^{(n)}=-m_n^{(\phi_Z)\;2}\phi_Z^{(n)}$. This allows us to invert (\[GoldZDef\]) and (\[ phiZDef\]) to get $$\begin{aligned}
\pi_3=\sum_n \left (-\frac{M_Z^{-1}a^{-2}b^{-1}\partial_5\left (a^4b^{-1}M_Z^2f_n^{(\phi_Z)}\right )}{m_n^{(\phi_Z)\;2}}\phi_Z^{(n)}+\frac{M_Zf_n^{(\mathcal{G}_Z)}}{m_n^{(\mathcal{G}_Z)\,2}}\mathcal{G}^{(n)}_Z\right ) \label{pi3mix}\\
Z_5=\sum_n\left (\frac{a^2M_Z^2f_n^{(\phi_Z)}}{m_n^{(\phi_Z)\;2}}\phi_Z^{(n)}-\frac{\partial_5f_n^{(\mathcal{G}_Z)}}{m_n^{(\mathcal{G}_Z)\,2}}\mathcal{G}^{(n)}_Z\right ).\label{Z5mix}\end{aligned}$$ For the bulk of this paper we shall work in the unitary gauge, in which the Goldstone bosons are infinitely heavy and can be neglected. The final step is to ensure the mass eigenstates are canonically normalised which is achieved with the orthogonality relation $$\label{PhiOrthogRel}
\int dr\; a^4b^{-1}\frac{M_Z^2}{m_n^{(\phi_Z)\;2}} f_n^{(\phi_Z)}f_m^{(\phi_Z)}=\delta_{nm}.$$
The W Boson.
------------
It is straightforward to repeat this analysis for the W field, where it is found that the two Goldstone bosons are given by $$\label{ }
\mathcal{G}_W^{1}=b^{-1}\partial_5\left (a^2b^{-1}A_5^1\right )-a^2bM_W\pi_2\hspace{0.8cm}\mbox{and}\hspace{0.8cm}\mathcal{G}_W^{2}=b^{-1}\partial_5\left (a^2b^{-1}A_5^2\right )+a^2bM_W\pi_1.$$ The two physical pseudo-scalars are found to be $$\label{PhiWdef}
\phi_W^{1}=\partial_5(M_W^{-1}\pi_2)-A_5^1\hspace{0.8cm}\mbox{and}\hspace{0.8cm}\phi_W^{2}=\partial_5(M_W^{-1}\pi_1)+A_5^2.$$ In practice there will be a further mixing such that the charged pseudo-scalars are given by combinations of $\pi^{\pm}=\frac{1}{\sqrt{2}}(\pi_1\pm i\pi_2)$ and $W_5^{\pm}=\frac{1}{\sqrt{2}}(A_5^1\mp iA_5^2)$. After making KK decompositions analogous to those used for the Z field, the equations of motion are $$\begin{aligned}
\partial_\mu\partial^\mu\phi^{1,2}_W-\partial_5(a^{-2}b^{-1}M_W^{-2}\partial_5(a^4b^{-1}M_W^2\phi_W^{1,2}))+a^2M_W^2\phi_W^{1,2}=0\label{pseudoEOMW}\\
\partial_\mu\partial^\mu\mathcal{G}_W^{1,2}-\xi b^{-1}\partial_5\left (a^2b^{-1}\partial_5 \mathcal{G}^{1,2}_W\right )+\xi a^2M_W^2\mathcal{G}_W^{1,2}=0.\end{aligned}$$ The effective action can be found by substituting for $$\begin{aligned}
\pi_{1,2}=\sum_n \left (-\frac{M_W^{-1}a^{-2}b^{-1}\partial_5\left (a^4b^{-1}M_W^2f_n^{(\phi_W)}\right )}{m_n^{(\phi_W)\;2}}\phi_W^{2,1\,(n)}\pm\frac{M_Wf_n^{(\mathcal{G}_W)}}{m_n^{(\mathcal{G}_W)\,2}}\mathcal{G}^{2,1\;(n)}_W\right )\label{piWMix} \\
A^{1,2}_5=\sum_n\left (\mp\frac{a^2M_W^2f_n^{(\phi_W)}}{m_n^{(\phi_W)\;2}}\phi_W^{1,2\,(n)}-\frac{\partial_5f_n^{(\mathcal{G}_W)}}{m_n^{(\mathcal{G}_W)\,2}}\mathcal{G}^{1,2\,(n)}_W\right ).\label{W5mix}\end{aligned}$$
The Fermion Profile.
--------------------
For completeness, we shall briefly derive the widely known expressions for the fermion profiles. We begin with the action for a massive Dirac spinor, $\Psi$, in 5D, $$\label{FermAction}
S=\int d^5x \sqrt{G}\left (i\bar{\Psi}\Gamma^M\nabla_M\Psi-M_\Psi\bar{\Psi}\Psi\right )$$ where the Dirac matrices in curved space, $\Gamma^M=E_A^M\gamma^A$, are related to the Dirac matrices in flat space, $\gamma^A$, by the fünfbein $E_A^M=\mathrm{diag}(a^{-1},a^{-1},a^{-1},a^{-1},b^{-1})$. The covariant derivative, $\nabla_M=D_M+\omega_M$, includes the spin connection $\omega_M=(\frac{1}{2}b^{-1}\partial_5a\,\gamma_5\gamma_\mu,0)$. Splitting $\Psi=\psi_L+\psi_R$, such that $i\gamma_5\psi_{L,R}=\mp \psi_{L,R}$, and making the KK decomposition $\psi_{L,R}=\sum_na^{-2}f_n^{(\psi_{L,R})}(r)\psi_{L,R}^{(n)}(x)$, such that $(i\gamma^\mu\partial_\mu-m_n^{(\psi)})\Psi^{(n)}=0$, then yields the coupled equations of motion $$\label{FermEOM}
\partial_5 f_n^{(\psi_R)}+bM_\Psi f_n^{(\psi_R)}=\frac{b}{a}m_n^{(\psi)} f_n^{(\psi_L)}\hspace{0.8cm}\mbox{and}\hspace{0.8cm}-\partial_5f_n^{(\psi_L)}+bM_\Psi f_n^{(\psi_L)}=\frac{b}{a}m_n^{(\psi)} f_n^{(\psi_R)}.$$ It is then straightforward to solve for the fermion zero mode profile $$\label{fermProf}
f_0^{(\psi_{L,R})}(r)=\frac{\exp\left (\pm\int_c^r d\tilde{r}\,b(\tilde{r})M_\Psi\right )}{\sqrt{\int dr\,\frac{b}{a}\exp\left (\pm2\int_c^r d\tilde{r}\,b(\tilde{r})M_\Psi\right )}}.$$ It is common to parameterise the bulk mass term such that $M_\Psi=-ck$, where $c\sim \mathcal{O}(1)$.
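As a concrete check of (\[fermProf\]), one can specialise to the RS metric, $a=e^{-kr}$, $b=1$, with $M_\Psi=-ck$ and $r\in[0,r_{\rm{IR}}]$, where the integrals are elementary. Writing $\Omega=e^{kr_{\rm{IR}}}$, and choosing the sign assignments per chirality so as to match the normalisation factors appearing in the effective pseudo-scalar coupling quoted earlier, one finds

```latex
f_0^{(\psi_L)}(r)=\sqrt{\frac{(1-2c_L)\,k}{\Omega^{1-2c_L}-1}}\;e^{-c_L kr},
\qquad
f_0^{(\psi_R)}(r)=\sqrt{\frac{(1+2c_R)\,k}{\Omega^{1+2c_R}-1}}\;e^{+c_R kr},
```

reproducing the square-root factors in the effective coupling $Y_{\rm{eff.}}^{(\phi_W\psi\chi)}$ above.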
[^1]: [email protected]
[^2]: The 5D generalisation of Birkhoff’s theorem [@Bowcock:2000cq] ensures that solutions with no (or constant) bulk fields must be AdS. In the RS model the only fields peaked towards the UV are the light fermion modes which are assumed to have a negligible coupling to gravity. The other fields are typically vanishing in the UV. Hence it is anticipated that the space should be asymptotically AdS in the UV.
[^3]: In the special case when $\alpha=0$ then $h(r)=N_h(e^{2kr}+r(M_{\rm{UV}}-2k)e^{2kr})$ and $$N_h^2=\frac{M_{\rm{UV}}(1+2\ln \Omega)+M_{\rm{IR}}(1-2\ln \Omega)-4k\ln\Omega+RM_{\rm{UV}}M_{\rm{IR}}}{\Omega^4\lambda_{\rm{IR}}(1+R(M_{\rm{UV}}-2k))^3}.$$
[^4]: In practice $v_4$, in the RS model with a brane localised Higgs, will still differ from the SM value. See [@Bouchart:2009vq].
[^5]: While there is considerable uncertainty in this mass, assuming a normal hierarchy, the recent measurement of a large value of $\theta_{13}$ would probably favour a lighter minimum neutrino mass. See for example [@Pascoli:2007qh].
[^6]: This should not be mistaken for a proof since we have not demonstrated that the $\alpha$ dependence of the last two terms dominates over the first two terms. It is probably possible to contrive a space for which this is not true. However, one can see that it will hold for all spaces in which $P_{W,Z}^{(0)}$ is growing towards the IR and $\partial_5 P_{W,Z}^{(0)}\propto \alpha^{n}$ with $n\geqslant0$.
[^7]: To be explicit, the gauge field transforms over the orbifold as $A_\mu(r)\rightarrow A_\mu(-r)=PA_\mu(r) P^{\dag}$ and $A_5(r)\rightarrow A_5(-r)=-PA_5(r) P^{\dag}$, where $P$ is a unitary matrix and the relative minus sign, which preserves gauge invariance under $A_M\rightarrow A_M-\frac{1}{g}\partial_M\Theta$, ensures opposite BC’s for $A_\mu$ and $A_5$.
[^8]: The $\alpha$ dependence of the fermion position is dominated by the gradient of the slope and the position of the ‘maximum fermion mass’ in figure \[fig:BulkHiggs\]. Including additional observables would not change this but would rather change the extent to which a split fermion scenario was favoured. For example the inclusion of $\epsilon_K$ would favour a shift in $c_L$ towards the IR and $c_R^{d}$ towards the UV [@Archer:2011bk]. While the inclusion of $R_b$ would disfavour a split fermion scenario.
---
abstract: 'This paper proposes the first multilingual (French, English and Arabic) and multicultural (Indo-European languages vs. less culturally close languages) irony detection system. We employ both feature-based models and neural architectures using monolingual word representation. We compare the performance of these systems with state-of-the-art systems to identify their capabilities. We show that these monolingual models trained separately on different languages using multilingual word representation or text-based features can open the door to irony detection in languages that lack annotated data for irony.'
author:
- Bilal Ghanem
- Jihen Karoui
- Farah Benamara
- Paolo Rosso
- Véronique Moriceau
bibliography:
- 'references.bib'
title: Irony Detection in a Multilingual Context
---
Motivations
===========
Figurative language makes use of figures of speech to convey non-literal meaning [@grice:1975; @attardo:2000]. It encompasses a variety of phenomena, including metaphor, humor, and irony. We focus here on irony and use it as an umbrella term that covers satire, parody and sarcasm.
Irony detection (ID) has gained relevance recently, due to its importance for extracting information from texts. For example, to go beyond the literal matches of user queries, Veale enriched information retrieval with new operators to enable the non-literal retrieval of creative expressions [@Veale:2011]. Also, the performance of sentiment analysis systems drastically decreases when applied to ironic texts [@Deft2017; @farias2016irony]. Most related work concerns English [@SemEval:2018; @huang2017irony], with some efforts in French [@Karoui:2015], Portuguese [@carvalho:2009], Italian [@gianti:2012], Dutch [@liebrecht:2013], Hindi [@Swami:18], Spanish variants [@ortega2019overview] and Arabic [@KarouiACLING:2017; @idat2019]. Bilingual ID with one model per language has also been explored, like English-Czech [@Techc:2014] and English-Chinese [@Tang], but not within a cross-lingual perspective.
In social media, such as Twitter, specific hashtags (\#irony, \#sarcasm) are often used as gold labels to detect irony in a supervised learning setting. Although recent studies have pointed out the issue of false-alarm hashtags in self-labeled data [@Huang:2018], ID via hashtag filtering provides researchers with positive examples with high precision. On the other hand, systems are not able to detect irony in languages where such filtering is not always possible. Multilingual prediction (either relying on machine translation or multilingual embedding methods) is a common solution to tackle under-resourced languages [@Bikel:2012; @Ruder17]. While multilinguality has been widely investigated in information retrieval [@Litschko:2018; @sasaki-etal-2018-cross] and several NLP tasks (e.g., sentiment analysis [@Balahur:2014; @Barnes:2018] and named entity recognition [@Jian:2017]), it has not been explored for irony. We aim here to bridge the gap by tackling ID in tweets from both multilingual (French, English and Arabic) and multicultural perspectives (Indo-European languages whose speakers share quite the same cultural background vs. less culturally close languages). Our approach relies neither on machine translation nor on parallel corpora (which are not always available), but rather builds on previous corpus-based studies that show that irony is a universal phenomenon and many languages share similar irony devices. For example, Karoui et al. [@Karoui:2017] concluded that their multi-layer annotation schema, initially used to annotate French tweets, is portable to English and Italian, observing relatively the same tendencies in terms of irony categories and markers. Similarly, Chakhachiro [@chakhachiro:2007] studies irony in English and Arabic, and shows that both languages share several similarities in the rhetorical (e.g., overstatement), grammatical (e.g., redundancy) and lexical (e.g., synonymy) usage of irony devices.
The next step now is to show to what extent these observations are still valid from a computational point of view. Our contributions are:
1. *A new freely available corpus of Arabic tweets* manually annotated for irony detection[^1].
2. *Monolingual ID*: We propose both feature-based models (relying on language-dependent and language-independent features) and neural models to measure to what extent ID is language dependent.
3. *Cross-lingual ID*: We experiment using cross-lingual word representation by training on one language and testing on another one to measure how the proposed models are culture-dependent. Our results are encouraging and open the door to ID in languages that lack annotated data for irony.
Data
====
**Arabic dataset** (<span style="font-variant:small-caps;">Ar</span>=$11,225$ tweets). Our starting point was the corpus built by [@KarouiACLING:2017], which we extended with different political issues and events related to the Middle East and Maghreb that took place between $2011$ and $2018$. Tweets were collected using a set of predefined keywords (which targeted specific political figures or events), whether or not they contained Arabic ironic hashtags (سخرية>\#, مسخرة>\#, تهكم>\#, استهزاء>\#) [^2]. The collection process resulted in a set of $6,809$ ironic tweets ($I$) vs. $15,509$ non ironic ($NI$) written in standard (formal) Arabic and different Arabic language varieties: Egypt, Gulf, Levantine, and Maghrebi dialects.
To investigate the validity of using the original tweet labels, a sample of $3,000$ $I$ and $3,000$ $NI$ tweets was manually annotated by two Arabic native speakers, which resulted in $2,636$ $I$ vs. $2,876$ $NI$. The inter-annotator agreement using Cohen’s Kappa was $0.76$, while the agreement score between the annotators’ labels and the original labels was $0.6$. As these agreements are relatively good given the difficulty of the task, we added $5,713$ instances sampled from the original unlabeled dataset to our manually labeled part. The added tweets have been manually checked to remove duplicates, very short tweets and tweets that depend on external links, images or videos to understand their meaning.
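For reference, the Cohen’s Kappa measure used above can be sketched as follows (a minimal illustration with hypothetical labels, not our actual annotations):

```python
# Minimal sketch of Cohen's Kappa for two annotators on binary I / NI labels.
def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independent chance labelling
    cats = set(labels_a) | set(labels_b)
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical annotations from two annotators
ann1 = ["I", "I", "NI", "I", "NI", "NI", "I", "NI"]
ann2 = ["I", "NI", "NI", "I", "NI", "NI", "I", "I"]
print(round(cohens_kappa(ann1, ann2), 3))   # → 0.5
```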
**French dataset** (<span style="font-variant:small-caps;">Fr</span>=$7,307$ tweets). We rely on the corpus used for the DEFT 2017 French shared task on irony [@Deft2017], which consists of tweets relating to a set of topics discussed in the media between 2014 and 2016 and contains topic keywords and/or French irony hashtags (\#ironie, \#sarcasme). Tweets have been annotated by three annotators (after removing the original labels) with a reported Cohen’s Kappa of $0.69$.
**English dataset** (<span style="font-variant:small-caps;">En</span>=$11,225$ tweets). We use the corpus built by [@Techc:2014], which consists of $100,000$ tweets collected using the hashtag \#sarcasm. It was used as a benchmark in several works [@ghanem2018ldr; @farias2017sentiment]. We sliced a subset of approximately $11,200$ tweets to match the sizes of the other languages’ datasets.
Table \[data\] shows the tweet distribution in all corpora. Across the three languages, we keep a similar number of instances for train and test sets to have fair cross-lingual experiments as well (see Section \[CL\]). Also, for French, we use the original dataset without any modification, keeping the same number of records for train and test to better compare with state-of-the-art results. For the class distribution (ironic vs. non ironic), we do not choose a specific ratio but use the distribution resulting from the random shuffling process.
**\# Ironic** **\# Not-Ironic** **Train** **Test**
-------------------------------------------------- --------------- ------------------- ----------- ----------
<span style="font-variant:small-caps;">Ar</span> $6,005$ $5,220$ $10,219$ $1,006$
<span style="font-variant:small-caps;">Fr</span> $2,425$ $4,882$ $5,843$ $1,464$
<span style="font-variant:small-caps;">En</span> $5,602$ $5,623$ $10,219$ $1,006$
: Tweet distribution in all corpora.
\[data\]
Monolingual Irony Detection
===========================
It is important to note that our aim is not to outperform state-of-the-art models in monolingual ID but to investigate which of the monolingual architectures (neural or feature-based) can achieve results comparable to existing systems. The results can show which kind of features works better in the monolingual setting and can be employed to detect irony in a multilingual setting. In addition, they can show us to what extent ID is language dependent, by comparing monolingual results to multilingual results. Two models have been built, as explained below. Prior to learning, basic preprocessing steps were performed for each language (e.g., removing foreign characters, ironic hashtags, mentions, and URLs).
**Feature-based models.** We used state-of-the-art features that have been shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet’s length, named entities) while others are language-dependent, relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations; among them, Random Forest (RF) achieved the best result with all features.

**Neural model with monolingual embeddings.** We used a Convolutional Neural Network (CNN) whose structure is similar to the one proposed by [@Kim:2014]. For the embeddings, we relied on $AraVec$ [@Soliman:2017] for Arabic, FastText [@Grave:2018] for French, and Word2vec Google News [@mikolov-etal-2013-linguistic] for English [^3]. For the three languages, the size of the embeddings is $300$ and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$[^4] library.
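Some of the language-independent surface features can be sketched as follows (a hypothetical, minimal extractor for illustration only; the emoticon and pronoun lists are placeholders, and the actual models also use lexicon-based, language-dependent features):

```python
# Hypothetical minimal extractor for a few language-independent surface
# features (punctuation, emoticons, personal pronouns, length).  Lists are
# placeholders; the paper's full feature set is richer.
POS_EMOTICONS = [":)", ":-)", ":D", ";)"]
NEG_EMOTICONS = [":(", ":-(", ":'("]
EN_PRONOUNS = {"i", "you", "he", "she", "we", "they", "me", "us"}

def surface_features(tweet):
    tokens = tweet.lower().split()
    return {
        "n_exclaim": tweet.count("!"),
        "n_question": tweet.count("?"),
        "n_quotes": tweet.count('"') // 2,
        "pos_emoticons": sum(tweet.count(e) for e in POS_EMOTICONS),
        "neg_emoticons": sum(tweet.count(e) for e in NEG_EMOTICONS),
        "pronouns": sum(t in EN_PRONOUNS for t in tokens),
        "length": len(tokens),
    }

feats = surface_features('Oh great, "another" delay... I just love waiting !!! :)')
print(feats["n_exclaim"], feats["pos_emoticons"], feats["pronouns"])   # → 3 1 1
```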
**Results.** Table \[ResMonolingual\] shows the results obtained when using train-test configurations for each language. For English, our results, in terms of macro F-score ($F$), were not comparable to those of [@Techc:2014; @tay-etal-2018-reasoning], as we used only 11% of the original dataset. For French, our scores are in line with those reported in the state of the art (cf. the best system in the irony shared task achieved $F=78.3$ [@Deft2017]). They outperform those obtained for Arabic ($A=71.7$) [@KarouiACLING:2017] and are comparable to those recently reported in the irony detection shared task in Arabic tweets [@idat2019; @ghanem2019idat] ($F=84.4$). Overall, the results show that the semantic information captured by the embedding space is more productive than standard surface and lexicon-based features.
----- ---------- ------ ------ ---------- ---------- ------ ------ ---------- ---------- ------ ------ ----------
A P R F A P R F A P R F
RF 68.0 67.0 82.0 68.0 68.5 71.7 87.3 61.0 61.2 60.0 70.0 61.0
CNN **80.5** 79.1 84.9 **80.4** **77.6** 68.2 59.6 **73.5** **77.9** 74.6 84.7 **77.8**
----- ---------- ------ ------ ---------- ---------- ------ ------ ---------- ---------- ------ ------ ----------
: Results of the monolingual experiments (in percentage) in terms of accuracy (A), precision (P), recall (R), and macro F-score (F).
\[ResMonolingual\]
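The macro F-score ($F$) reported in the tables is the unweighted average of per-class F1 scores; a minimal sketch:

```python
# Minimal sketch of macro F-score: the unweighted mean of per-class F1.
def macro_f(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    f_scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f_scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f_scores) / len(f_scores)

# Hypothetical gold and predicted labels
y_true = ["I", "I", "NI", "NI", "I", "NI"]
y_pred = ["I", "NI", "NI", "NI", "I", "I"]
print(round(macro_f(y_true, y_pred), 3))   # → 0.667
```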
Cross-lingual Irony Detection {#CL}
=============================
We use the previous CNN architecture with bilingual embedding and the RF model with surface features (e.g., use of personal pronoun, presence of interjections, emoticon or specific punctuation)[^5] to verify which pair of the three languages: (a) has similar ironic pragmatic devices, and (b) uses similar text-based patterns in the narrative of the ironic tweets. As continuous word embedding spaces exhibit similar structures across (even distant) languages [@Mikolov:2013], we use a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space. Many methods have been proposed to learn this mapping such as parallel data supervision and bilingual dictionaries [@Mikolov:2013] or unsupervised methods relying on monolingual corpora [@Conneau:2017; @Artetxe:2018; @Wada:2018]. For our experiments, we use Conneau et al.’s approach as it showed superior results with respect to the literature [@Conneau:2017]. We perform several experiments by training on one language ($lang_1$) and testing on another one ($lang_2$) (henceforth $lang_1\rightarrow lang_2$). We get 6 configurations, plus two others to evaluate how irony devices are expressed cross-culturally, i.e. in European vs. non European languages. In each experiment, we held out 20% of the training data to validate the model before the testing process. Table \[ResMultilingual\] presents the results.
------------------------ ------ ------ ------ ---------- ------- ------ ------ ----------
Train$\rightarrow$Test A P R F A P R F
Ar$\rightarrow$Fr 60.1 37.2 26.6 **51.7** 47.03 29.9 43.9 46.0
Fr$\rightarrow$Ar 57.8 62.9 45.7 **57.3** 51.11 61.1 24.0 54.0
Ar$\rightarrow$En 48.5 26.5 17.9 34.1 49.67 49.7 66.2 **50.0**
En$\rightarrow$Ar 56.7 57.7 62.3 **56.4** 52.5 58.6 38.5 53.0
Fr$\rightarrow$En 53.0 67.9 11.0 42.9 52.38 52.0 63.6 **52.0**
En$\rightarrow$Fr 56.7 33.5 29.5 50.0 56.44 74.6 52.7 **58.0**
(En/Fr)$\rightarrow$Ar 62.4 66.1 56.8 **62.4** 55.08 56.7 68.5 62.0
Ar$\rightarrow$(En/Fr) 56.3 33.9 09.5 42.7 59.84 60.0 98.7 **74.6**
------------------------ ------ ------ ------ ---------- ------- ------ ------ ----------
: Results of the cross-lingual experiments.
\[ResMultilingual\]
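The cross-lingual word representation above relies on a linear mapping between embedding spaces. The following toy sketch illustrates only the basic supervised idea, fitting a 2-D mapping by least squares on a small seed dictionary (Conneau et al.’s method additionally constrains the mapping to be orthogonal and can learn it without supervision):

```python
import math

# Toy sketch: fit a linear map W between two 2-D embedding spaces by
# least squares, W = (X^T X)^{-1} X^T Y, solved by hand for the 2-D case.
# All vectors below are hypothetical illustrations.
def fit_linear_map(src, tgt):
    xtx = [[sum(x[i] * x[j] for x in src) for j in range(2)] for i in range(2)]
    xty = [[sum(x[i] * y[j] for x, y in zip(src, tgt)) for j in range(2)]
           for i in range(2)]
    det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
    inv = [[xtx[1][1] / det, -xtx[0][1] / det],
           [-xtx[1][0] / det, xtx[0][0] / det]]
    return [[sum(inv[i][k] * xty[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Pretend the target space is the source space rotated by 30 degrees.
theta = math.pi / 6
R = [[math.cos(theta), -math.sin(theta)], [math.sin(theta), math.cos(theta)]]
src = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, -1.0]]
tgt = [[sum(x[k] * R[k][j] for k in range(2)) for j in range(2)] for x in src]

W = fit_linear_map(src, tgt)   # W should recover the rotation R
print(all(abs(W[i][j] - R[i][j]) < 1e-9 for i in range(2) for j in range(2)))
```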
From a semantic perspective, despite the linguistic and cultural differences between Arabic and French, the CNN results show a high performance compared to the other language pairs when we train on one of these two languages and test on the other. The same holds for the French and English pair, although the scores are somewhat lower when we train on French; we observe a similar case when training on Arabic and testing on English. This may be explained by the fact that Arabic and French tweets are quite informal and contain many dialect words that may not exist in the pretrained embeddings we used, compared to the English ones (a lower embedding coverage ratio), which makes it harder for the CNN to learn a clear semantic pattern. A related point is the presence of Arabic dialects, as some dialect words may not exist in the multilingual pretrained embedding model that we used. On the other hand, from the text-based perspective, the results show that text-based features can help when the semantic aspect shows weak detection; this is the case for the $Ar\longrightarrow En$ configuration. It is worth mentioning that the highest result in this experiment is obtained by the En$\rightarrow$Fr pair, as both languages use Latin characters. Finally, when investigating the relatedness between European vs. non-European languages (cf. (En/Fr)$\rightarrow$Ar), we obtain results similar to those of the monolingual experiment (macro F-score 62.4 vs. 68.0), and the best results are achieved by Ar$\rightarrow$(En/Fr). This shows that the two sides share common pragmatic devices and, in a similar way, similar text-based patterns in the narrative of the ironic tweets.
Discussions and Conclusion
==========================
This paper proposes the first multilingual ID system for tweets. We show that simple monolingual architectures (either neural or feature-based) trained separately on each language can be successfully used in a multilingual setting, provided a cross-lingual word representation or basic surface features. Our monolingual results are comparable to the state of the art for the three languages. The CNN architecture trained on cross-lingual word representation shows that irony has a certain similarity between the languages we targeted despite the cultural differences, which confirms that irony is a universal phenomenon, as already shown in previous linguistic studies [@sigar:2012; @Karoui:2017; @colston2019irony]. The manual analysis of the common misclassified tweets across the languages in the multilingual setup shows that classification errors are due to three main factors. (1) First, the *absence of context*, where writers did not provide sufficient information to capture the ironic sense even in the monolingual setting, as in نبدا تاني يسقط يسقط حسني مبارك !! > (*Let’s start again, get off get off Mubarak!!*) where the writer mocks the Egyptian revolution, as the current president “Sisi” is viewed as one of Mubarak’s fellows. (2) Second, the presence of *out of vocabulary (OOV) terms*, because of the weak coverage of the multilingual embeddings, which makes the system fail to generalize when the OOV set of unseen words is large during the training process. We found tweets in all three languages written in a very informal way, where some characters of the words were deleted, duplicated or written phonetically (e.g. *phat* instead of *fat*). (3) Another important issue is the difficulty of *dealing with the Arabic language*. Arabic tweets are often characterized by non-diacritised texts, a large variation of unstandardized dialectal Arabic (recall that our dataset has 4 main varieties, namely Egypt, Gulf, Levantine, and Maghrebi), the presence of transliterated words (e.g. the word *table* becomes طابلة> (*tabla*)), and finally linguistic code switching between Modern Standard Arabic and several dialects, and between Arabic and other languages like English and French. We found that some tweets contain only words from one of the varieties and most of these words do not exist in the Arabic embeddings model. For example, in مبارك بقاله كام يوم مامتش .. هو عيان ولاه ايه \#مصر > (*Since many days Mubarak didn’t die .. is he sick or what? \#Egypt*), only the words يوم> (day), مبارك> (Mubarak), and هو> (he) exist in the embeddings. Clearly, considering only these three available words, we are not able to understand the context or the ironic meaning of the tweet. To conclude, our multilingual experiments confirmed that the door is open towards multilingual approaches for ID. Furthermore, our results showed that ID can be applied to languages that lack annotated data. Our next step is to experiment with other languages such as Hindi and Italian.
Acknowledgment {#acknowledgment .unnumbered}
==============
The work of Paolo Rosso was partially funded by the Spanish MICINN under the research project MISMIS-FAKEnHATE (PGC2018-096212-B-C31).
[^1]: The corpus is available at <https://github.com/bilalghanem/multilingual_irony>
[^2]: All of these words are synonyms meaning “irony”.
[^3]: Other available pretrained embeddings models have also been tested.
[^4]: <https://github.com/hyperopt/hyperopt>
[^5]: To avoid language dependencies, we rely on surface features only discarding those that require external semantic resources or morpho-syntactic parsing.
---
bibliography:
- 'mitl.bib'
title: Monitoring Temporal Properties using Interval Analysis
---
Verification of temporal logic properties plays a crucial role in proving the desired behaviors of continuous systems. In this paper, we propose an interval method that verifies the properties described by a bounded signal temporal logic. We relax the problem so that if the verification process cannot succeed at the prescribed precision, it outputs an inconclusive result. The problem is solved by an efficient and rigorous monitoring algorithm. This algorithm performs a forward simulation of a continuous-time dynamical system, detects a set of time intervals in which the atomic propositions hold, and validates the property by propagating the time intervals. In each step, the continuous state at a certain time is enclosed by an interval vector that is proven to contain a unique solution. We experimentally demonstrate the utility of the proposed method in formal analysis of nonlinear and complex continuous systems.
continuous-time dynamical systems, interval analysis, linear temporal logic, falsification method
Introduction {#intro}
============
Reasoning about temporal logic properties of continuous systems is a challenging and important task that combines computer science, numerical analysis, and control theory. Various methods for the verification of continuous and hybrid systems with bounded temporal properties have been developed, e.g., [@Plaku2009; @Nghiem2010; @David2012; @Zuliani2013], enabling the falsification of various properties (e.g., safety, stability, and robustness) of large and complex systems. However, the state-of-the-art tools are based on numerical simulations whose numerical errors frequently yield qualitatively wrong results, which become problematic even in statistical evaluations.
Computing rigorously approximated reachable sets is a fundamental process in formal methods for continuous systems. Techniques based on interval analysis (Section \[s:interval\]) have proven practical in the reachability analysis of nonlinear and complex continuous systems [@Eggers2008; @Collins2008; @Ramdani2011; @Ishii2011; @Chen2012; @Gao2013:SMODE]. In these frameworks, the computation is *$\delta$-complete* [@Gao2012]: assuming that function values may be perturbed within a predefined $\delta \in \SPosRatSet$, many generically undecidable problems become decidable. However, $\delta$-complete verification of generic properties other than reachability is a challenging topic.
The contribution of this paper is to propose an interval method that verifies (bounded portions of) the signal temporal logic (STL) properties (Section \[s:stl\]) of a class of continuous-time dynamical systems (Section \[s:cs\]; extension to hybrid systems is straightforward). Our method reliably computes three values: $\Valid$, $\Unsat$, and $\Unknown$. The method outputs $\Valid$ or $\Unsat$ when the soundness is guaranteed by interval analysis; otherwise, when the verification fails after reaching a prescribed precision threshold, it outputs $\Unknown$. Our method is based on validated interval analysis, and therefore it is reliable compared to the existing simulation-based monitoring tools, e.g., [@Maler2003; @Fainekos2006a; @Donze2010; @Donze2010a]. We show that simulation with numerical errors may compute an incorrect signal for a chaotic system. In contrast with the existing tools that monitor a single behavior of a system, our method monitors a set of possible behaviors using an interval-based technique; therefore, the method can check the validity of the system. In this sense, our approach can be viewed as an integration of reachability analysis and simulation-based monitoring methods. The relaxation allowing $\Unknown$ results enables us to generate an efficient monitor for STL properties that can be regarded as a variant of $\delta$-complete procedures. We demonstrate efficient and reliable monitors for several continuous systems including a chaotic system.
In Section \[s:method\], we present an algorithm for monitoring STL properties based on a forward simulation that encloses a signal with a set of *boxes* (i.e., interval vectors). For each atomic proposition involved in a property $\phi$ to be verified, the algorithm obtains an inner and outer approximation of the time intervals in which the proposition holds. The interval Newton operator is used both to accelerate the search for instants where the satisfaction of propositions changes and to certify the uniqueness of these events within their enclosures, eventually certifying the sequence of consistent/inconsistent time intervals over time for each proposition. Next, it modifies the set of time intervals according to the syntax of the property $\phi$; finally, it checks that $\phi$ holds at the initial time. Using our implementation, we show that several benchmarks are verified efficiently, yet non-robust instances with respect to numerical errors are rejected (Section \[s:ex\]). The implementation reliably analyzes a set of signals and provides a foundation for verification and parameter synthesis of complex systems.
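The propagation of time intervals through a formula can be illustrated on a single operator. The following toy sketch (not the paper’s algorithm, and ignoring inner/outer approximations and interval rounding) computes, from the time intervals in which an atomic proposition $p$ holds, the intervals in which the bounded *eventually* $\mathbf{F}_{[0,1]}\,p$ holds:

```python
# Toy sketch of time-interval propagation for "eventually within [lo,hi] p",
# given the time set where p holds as a list of closed intervals.
def eventually(intervals, lo, hi):
    # F_[lo,hi] p holds at t iff p holds at some t' in [t+lo, t+hi]:
    # shift each interval [s, e] to [s-hi, e-lo], clamped at time 0.
    out = [(max(0.0, s - hi), e - lo) for (s, e) in intervals if e - lo >= 0.0]
    return merge(out)

def merge(intervals):
    # Merge overlapping intervals into a sorted, disjoint list.
    intervals = sorted(intervals)
    res = []
    for s, e in intervals:
        if res and s <= res[-1][1]:
            res[-1] = (res[-1][0], max(res[-1][1], e))
        else:
            res.append((s, e))
    return res

def holds_at(t, ivs):
    return any(s <= t <= e for s, e in ivs)

# p holds on [2, 3] and [5, 6]; where does F_[0,1] p hold?
p_times = [(2.0, 3.0), (5.0, 6.0)]
f_times = eventually(p_times, 0.0, 1.0)
print(f_times)   # → [(1.0, 3.0), (4.0, 6.0)]
```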
Related Work {#s:related}
============
Many previous studies have applied interval methods to reachability analyses of continuous and hybrid systems [@Eggers2008; @Collins2008; @Ramdani2011; @Ishii2011; @Chen2012; @Gao2013:SMODE]. These methods output an over-approximation of reachable states as a set of boxes. Interval analysis often proves the unique existence of a solution within a resulting interval, and it is also applicable to interval-based reachability analysis [@Ishii2011; @Goubault2014]. Our method utilizes the proof in the verification of more generic temporal properties.
Reasoning about real-time temporal logic has long been a research topic of interest [@Alur1996; @Shultz1997]. A numerical method for the *falsification* of a temporal property is straightforward [@Maler2003]. The algorithm simulates a signal of a bounded length and checks the satisfiability of the negation of the property described by a bounded temporal logic. This paper presents an interval extension of this falsification method.
To falsify realistic nonlinear models efficiently, researchers have proposed a tree-search method [@Plaku2009], a Monte-Carlo optimization method [@Nghiem2010], and statistical model checking methods [@David2012; @Zuliani2013]. Despite their successes, these methods are compromised by numerical error. To improve the reliability and practicality of the falsification, integration with our interval method will be a promising future direction. An integrated statistical and interval method was also proposed in [@Wang2014] for reachability analysis.
To facilitate simulation-based verification of temporal properties, the *robustness* concept has been proposed [@Fainekos2006a; @Donze2010; @Nghiem2010]. In these works, the degree of robustness defines the distance between a signal and a region over which a proposition holds. If the absolute value of the degree is small, it is likely to be unreliable because of numerical errors. Our method rigorously ensures robustness by verifying that every intersection between a signal and each boundary in the state space is enclosed with an interval.
There exist several methods for model checking of temporal logic properties [@Podelski2006; @Cimatti2014]. [@Podelski2006] proposed a method specialized in stability properties, which is described as a specific form of temporal logic formula. [@Cimatti2014] proposed a method that translates a verification problem into a reachability problem with the $k$-Liveness scheme, which is incomplete in general settings. Our method can be viewed as a bounded model checking method that validates a bounded temporal property when the property is satisfied by all signals emerging from the interval parameter value.
Interval Analysis {#s:interval}
=================
This section introduces selected topics and techniques based on interval analysis [@Moore1966; @Neumaier1990].
A (bounded) *interval* $\a = [\LB{a}, \UB{a}]$ is a connected set of real numbers $\{b \in \RealSet ~|~ \LB{a} \leq b \leq \UB{a}\}$. $\IntSet$ denotes the set of intervals. $\PosIntSet$ denotes the subset $\{[\LB{a},\UB{a}]\in\IntSet ~|~ \LB{a}\geq 0\}$. For an interval $\a$, $\LB{a}$ and $\UB{a}$ denote the lower and upper bounds, respectively; and $\Inter{\a}$ denotes the interior $\{b \in \RealSet ~|~ \LB{a} < b < \UB{a}\}$. $[a]$ denotes a point interval $[a,a]$. The hypermetric between two intervals $\a$ and $\b$, $\Dist{\a}{\b}$ is given by $\max ( |\UB{a}-\UB{b}| , |\LB{a}-\LB{b}| )$. For a set $S \subset \RealSet$, $\Box S$ denotes the interval $[\inf S, \sup S]$. All these definitions are naturally extended to interval vectors; an $n$-dimensional *box* (or interval vector) $\a$ is a tuple of $n$ intervals $(\a_1, \ldots, \a_n)$, and $\IntSet^n$ denotes the set of $n$-dimensional boxes. For $a\in\RealSet^n$ and $\a\in\IntSet^n$, we use the notation $a \in \a$, which is interpreted as $\ForAll{i}{\{1,\ldots,n\}} ~ a_i \in \a_i$.
In actual implementations, the interval bounds should be machine-representable floating-point numbers, and other real values are rounded in the appropriate directions.
Given a function $f : \RealSet^n \to \RealSet$, $\f : \IntSet^n \to \IntSet$ is called an *interval extension* of $f$ if and only if it satisfies the containment condition $\ForAll{\a}{\IntSet^n} ~ \ForAll{a}{\a} ~ ( f(a) \in \f(\a) )$. This definition is generalizable to function vectors $\f : \RealSet^n \to \RealSet^m$. Given two intervals $\a,\b\in\IntSet$, we can compute interval extensions of the four operators $\circ \in \{+,-,\ast,/\}$ as $\Box\{\LB{a}\circ\LB{b}, \LB{a}\circ\UB{b}, \UB{a}\circ\LB{b}, \UB{a}\circ\UB{b}\}$ (assuming $0 \not\in \b$ for division).
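The interval extensions of the four operations can be transcribed directly (a minimal sketch in exact arithmetic; a real implementation must round the bounds outward to floating-point numbers, as noted above):

```python
# Minimal sketch of the four interval operations; outward rounding of the
# bounds, required in a rigorous implementation, is ignored here.
def i_add(a, b): return (a[0] + b[0], a[1] + b[1])
def i_sub(a, b): return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def i_div(a, b):
    assert not (b[0] <= 0.0 <= b[1]), "0 must not be in the divisor"
    qs = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(qs), max(qs))

a, b = (-1.0, 2.0), (3.0, 4.0)
print(i_mul(a, b))   # → (-4.0, 8.0)
```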
For arbitrary intervals $\a,\b,\d\in\IntSet$, the *extended division* $\Box\{d\in\d ~|~ \Exists{a}{\a}~\Exists{b}{\b}~ a = b d\}$ can be implemented as follows (see Section 4.3 of [@Neumaier1990]): $$\ExtDiv(\a,\b,\d) :=
\begin{cases}
(\a/\b) \ \cap\ \d & \text{if}~ 0 \not\in \b\\
\Box(\d \setminus (\LB{a}/\LB{b}, \LB{a}/\UB{b})) & \text{if}~ \LB{a} \!>\! 0 \!\in\! \b\\
\Box(\d \setminus (\UB{a}/\UB{b}, \UB{a}/\LB{b})) & \text{if}~ \UB{a} \!<\! 0 \!\in\! \b\\
\d & \text{if}~ 0 \in \a,\b
\end{cases}$$ In the second and third cases, when $\LB{b}=0$ (resp. $\UB{b}=0$), we set $\LB{a}/\LB{b}$ and $\UB{a}/\LB{b}$ as $-\infty$ and $\infty$ (resp. $\LB{a}/\UB{b}$ and $\UB{a}/\UB{b}$ as $\infty$ and $-\infty$).
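The case analysis of the extended division can be transcribed as follows (a minimal sketch, without outward rounding; `None` encodes an empty result and IEEE infinities encode the unbounded gap endpoints):

```python
import math

# Minimal sketch of the extended division ExtDiv(a, b, d); intervals are
# (lo, hi) pairs and None encodes the empty set.
def i_div(a, b):
    qs = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(qs), max(qs))

def hull_setminus(d, gap):
    # Box(d \ (gap_lo, gap_hi)) for an open gap.
    lo, hi = d
    if gap[0] >= hi or gap[1] <= lo:      # gap misses d
        return d
    if gap[0] > lo and gap[1] < hi:       # gap strictly inside: hull is d
        return d
    if gap[0] <= lo and gap[1] >= hi:     # gap swallows d
        return None
    if gap[0] <= lo:                      # gap cuts d from the left
        return (gap[1], hi)
    return (lo, gap[0])                   # gap cuts d from the right

def ext_div(a, b, d):
    if not (b[0] <= 0.0 <= b[1]):         # first case: 0 not in b
        q = i_div(a, b)
        lo, hi = max(q[0], d[0]), min(q[1], d[1])
        return (lo, hi) if lo <= hi else None
    if a[0] > 0.0:                        # second case
        gap = (-math.inf if b[0] == 0.0 else a[0] / b[0],
               math.inf if b[1] == 0.0 else a[0] / b[1])
        return hull_setminus(d, gap)
    if a[1] < 0.0:                        # third case
        gap = (-math.inf if b[1] == 0.0 else a[1] / b[1],
               math.inf if b[0] == 0.0 else a[1] / b[0])
        return hull_setminus(d, gap)
    return d                              # fourth case: 0 in a and b

# 0 in b and a > 0: the quotient set splits into two half-lines.
print(ext_div((2.0, 4.0), (0.0, 1.0), (0.0, 10.0)))   # → (2.0, 10.0)
```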
Given a differentiable function $f(a) : \RealSet \to \RealSet$ and a domain interval $\a$, a root $\tilde{a} \in \a$ of $f$ such that $f(\tilde{a}) = 0$ is included in the result of the *interval Newton operator* $$\hat{a} + \ExtDiv(-\f(\hat{a}), \f'(\a), \a-\hat{a}) \approx
\left( \hat{a} - \frac{\f(\hat{a})}{\f'(\a)} \right) \cap \a,$$ where $\hat{a} \in \a$, and $\f$ and $\f'$ are interval extensions of $f$ and the derivative of $f$, respectively. The first expression is always valid while the second expression is valid only when $\f'(\a)$ does not contain 0. Iterative applications of the operator will converge. Let $\a'$ be the result of applying the operator to $\a$. If $\a' \subseteq \Inter{\a}$, a unique root exists in $\a'$.
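For the simple case $0 \notin \f'(\a)$, iterating the interval Newton operator can be sketched as follows, here enclosing $\sqrt{2}$ as the root of $f(x)=x^2-2$ on $[1,2]$ (a minimal sketch in plain floating-point arithmetic, without the outward rounding a rigorous implementation requires):

```python
# Minimal sketch of the interval Newton iteration for the simple case where
# 0 is not in f'(a).  If a step maps a into its own interior, a unique root
# is certified; iterating contracts the enclosure.
def i_sub(a, b): return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def i_div(a, b):
    qs = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(qs), max(qs))

def f(x):       # f(x) = x^2 - 2 at a point value
    return x * x - 2.0

def df(a):      # interval extension of f'(x) = 2x
    return i_mul((2.0, 2.0), a)

def newton_step(a):
    m = 0.5 * (a[0] + a[1])
    step = i_div((f(m), f(m)), df(a))
    cand = i_sub((m, m), step)
    return (max(cand[0], a[0]), min(cand[1], a[1]))   # intersect with a

a = (1.0, 2.0)
unique = False
for _ in range(10):
    a2 = newton_step(a)
    if a[0] < a2[0] and a2[1] < a[1]:
        unique = True    # a2 lies in the interior of a: unique root certified
    a = a2
print(unique, a[1] - a[0] < 1e-10)
```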
Continuous-Time Dynamical Systems {#s:cs}
=================================
We consider dynamical systems whose behaviors are described by ordinary differential equations (ODEs).
A *continuous-time dynamical system* is a tuple $\CS :=\bigl( (u,x), U\Times X, X_\Init, \Flow \bigr)$ consisting of the following components:
- A vector of real-valued *parameters* $u = (u_1,\ldots,u_m)$.
- A vector of real-valued *variables* $x = (x_1,\ldots,x_n)$.
- A *domain* $U \Times X \subseteq \RealSet^{m+n}$ for the valuation of the parameters and variables.
- An *initial domain* $X_\Init \subseteq X$.
- A *vector field* $\Flow : U \Times X \to \RealSet^n$ (assuming Lipschitz continuity).
In this work, we specify domains $U$ and $X$ as boxes. The behaviors of a system $\CS$ are formalized as *signals*.
Given a time interval $\t = [0,\UB{t}] \in \IntSet$ and a parameter value $\tilde{u} \in U$, a *signal* of a continuous-time dynamical system $\CS$ is a function $\tilde{x} : \t \to X$ such that $$\begin{gathered}
\tilde{x}(0) \in X_\Init \land
\ForAll{\tilde{t}}{\t} ~ \tfrac{d}{dt}\tilde{x}(\tilde{t}) = F(\tilde{u}, \tilde{x}(\tilde{t})).
\end{gathered}$$
$\Sigs_{\UB{t}}(\CS)$ denotes the set of signals of $\CS$ of length $\UB{t}$.
\[ex:rotation\] An anticlockwise rotation of a 2D particle can be modeled as a continuous-time dynamical system: $$\begin{aligned}
u &:= (u_1), \quad U := ([-0.1,0.1]), \\
x &:= (x_1,x_2), \quad X := [-10,10]^2, \\
X_\Init &:= \{(1, 0)\}, \\
F(u,x) &:= \begin{pmatrix}
u_1 & -1 \\
1 & u_1
\end{pmatrix}
\begin{pmatrix}
x_1 \\ x_2
\end{pmatrix}.
\end{aligned}$$ A signal of this example is illustrated in Figure \[f:rotation\]. The signal moves on the circle of radius 1 when $u_1 = 0$; the system is stable when $u_1 \leq 0$ and is unstable when $u_1 > 0$.
![A signal of the rotation system[]{data-label="f:rotation"}](figures/rotation.pdf){width="\linewidth"}
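For intuition, the rotation dynamics can be simulated with a non-validated numerical method. The sketch below (a plain RK4 integrator in Python; the helper names are ours) exploits that with $u_1 = 0$ the exact solution is $(\cos t, \sin t)$, so one full period returns the state to $(1,0)$:

```python
import math

def f_rot(u1, x):
    # Vector field of the rotation system: dx1 = u1*x1 - x2, dx2 = x1 + u1*x2.
    x1, x2 = x
    return (u1 * x1 - x2, x1 + u1 * x2)

def rk4(u1, x, dt, steps):
    # Classical fourth-order Runge-Kutta; non-validated (no error enclosure).
    for _ in range(steps):
        k1 = f_rot(u1, x)
        k2 = f_rot(u1, (x[0] + dt / 2 * k1[0], x[1] + dt / 2 * k1[1]))
        k3 = f_rot(u1, (x[0] + dt / 2 * k2[0], x[1] + dt / 2 * k2[1]))
        k4 = f_rot(u1, (x[0] + dt * k3[0], x[1] + dt * k3[1]))
        x = (x[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             x[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return x

# One full period with u1 = 0 returns (approximately) to the start (1, 0).
x = rk4(0.0, (1.0, 0.0), 2 * math.pi / 10000, 10000)
```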
\[ex:lorenz\] A well-known chaotic dynamical system is the Lorenz equation: $$\begin{aligned}
u &:= (u_1,u_2,u_3), ~ U := (10,28,2.5)+[-1,1]^3, \\
x &:= (x_1,x_2,x_3), \quad X := [-50,50]^3, \\
X_\Init &:= \{(15, 15, 36)\}, \\
F(u,x) &:=
\begin{pmatrix}
u_1(x_2-x_1) \\ x_1(u_2-x_3) - x_2 \\ x_1 x_2 - u_3 x_3
\end{pmatrix}.
\end{aligned}$$ A signal of this system is illustrated in the upper part of Figure \[f:lorenz\].
![Signals of the Lorenz system simulated by validated (upper) and non-validated (lower) numerical methods[]{data-label="f:lorenz"}](figures/lorenz.pdf "fig:"){width="\linewidth"} ![Signals of the Lorenz system simulated by validated (upper) and non-validated (lower) numerical methods[]{data-label="f:lorenz"}](figures/lorenz-breach.pdf "fig:"){width="\linewidth"}
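The sensitivity that makes non-validated simulation of this system unreliable is easy to reproduce. In the following plain-float RK4 sketch (our own helper names), two initial states differing by $10^{-9}$ in $x_2$ separate by many orders of magnitude within $25$ time units:

```python
def f_lorenz(u, x):
    # Vector field of the Lorenz system from the example above.
    u1, u2, u3 = u
    x1, x2, x3 = x
    return (u1 * (x2 - x1), x1 * (u2 - x3) - x2, x1 * x2 - u3 * x3)

def rk4(u, x, dt, steps):
    def ax(y, k, s):                       # y + s * k, componentwise
        return tuple(yi + s * ki for yi, ki in zip(y, k))
    for _ in range(steps):
        k1 = f_lorenz(u, x)
        k2 = f_lorenz(u, ax(x, k1, dt / 2))
        k3 = f_lorenz(u, ax(x, k2, dt / 2))
        k4 = f_lorenz(u, ax(x, k3, dt))
        x = tuple(x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return x

u = (10.0, 28.0, 2.5)
a = rk4(u, (15.0, 15.0, 36.0), 0.005, 5000)            # t = 25
b = rk4(u, (15.0, 15.0 + 1e-9, 36.0), 0.005, 5000)     # perturbed by 1e-9
sep = max(abs(ai - bi) for ai, bi in zip(a, b))        # amplified separation
```

Since discretization and rounding errors act exactly like such perturbations, a non-validated trajectory of a chaotic system carries no pointwise guarantee; this motivates the validated enclosures discussed next.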
ODE Integration using Interval Analysis {#s:ode}
---------------------------------------
Using tools based on interval Taylor methods, such as CAPD[^1] and VNODE [@Nedialkov2006], we can obtain an interval extension $\XC : \PosIntSet\to\IntSet^n$ of signals in $\Sigs_{\UB{t}}(\CS)$. Given $\t \in {\PosIntSet}$, these tools perform stepwise integration of the flow function $F$ from the initial time $0$ to time $\UB{t}$, and output the value $\XC(\t)$. At each step, interval Taylor methods verify the *unique existence* of a solution in a box enclosure using the Picard-Lindelöf operator and Banach’s fixpoint theorem. Accordingly, when an interval enclosure $\XC(\t)$ is computed by an interval Taylor method, the following property holds: $$\begin{gathered}
\label{e:ode:unique}
\ForAll{u}{U} ~ \ForAll{x_\Init}{X_\Init} ~
\UExists{\tilde{x}}{\Sigs_{\UB{t}}(\CS)} ~ \\ \tilde{x}(0)=x_\Init \LAnd
\ForAll{\tilde{t}}{\t}~
\tfrac{d}{dt} \tilde{x}(\tilde{t}) = F(u,\tilde{x}(\tilde{t})),\end{gathered}$$ where $\exists!$ is interpreted as “uniquely exists.”
In principle, if $F$ is Lipschitz continuous and we can assume arbitrary precision, we obtain an arbitrarily narrow interval enclosure $\XC([t])$ for $t \in \PosRealSet$. However, because interval Taylor methods are implemented using machine-representable real numbers, they may fail to compute an enclosure when verifying the unique existence property, even at the smallest step size.
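The unique-existence check can be sketched in its crudest first-order form (a Picard test on a candidate a-priori box; real tools such as CAPD use higher-order Taylor models and outward rounding, so this Python toy model of ours only illustrates the idea). If $x_0 + [0,h]\cdot\F(B) \subseteq B$, then a unique solution starting in $x_0$ exists on $[0,h]$ and stays in $B$:

```python
def picard_contracts(x0, f_range, box, h):
    # First-order Picard test: does x0 + [0, h] * F(B) fit inside B?
    # x0, box: intervals (lo, hi); f_range(box): interval enclosure of F on box.
    flo, fhi = f_range(box)
    lo = x0[0] + h * min(flo, 0.0)      # endpoint analysis of [0, h] * F(B)
    hi = x0[1] + h * max(fhi, 0.0)
    return box[0] <= lo and hi <= box[1]

# dx/dt = -x from x(0) = 1, candidate a-priori box B = [0.8, 1.1], h = 0.1:
f_range = lambda b: (-b[1], -b[0])      # range of F(x) = -x over an interval
ok = picard_contracts((1.0, 1.0), f_range, (0.8, 1.1), 0.1)
```

Here $1 + [0, 0.1]\cdot[-1.1, 0] = [0.89, 1] \subseteq [0.8, 1.1]$, so the test succeeds; with a too-tight candidate box such as $[0.95, 1.05]$ it fails and the step size must be reduced or the box enlarged.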
\[ex:lorenz:sig\] Signals of the Lorenz system in Example \[ex:lorenz\] (when $u := (10,28,2.5)$), computed with an interval method (CAPD) and a non-validated numerical method, are illustrated in Figure \[f:lorenz\]. Non-validated numerical methods may compute a wrong signal for a chaotic system as shown in this figure. On the other hand, validated simulation of this system over a long period is difficult with double-precision floating-point numbers; the width of the interval enclosure computed by CAPD blows up after $25$ time units and the simulation fails.
{width=".9\linewidth"}
Signal Temporal Logic {#s:stl}
=====================
We consider a fragment [@Maler2003] of the real-time metric temporal logic [@Alur1996] whose temporal modalities are bounded by an interval $\t = [\LB{t},\UB{t}]$, where the bounds $\LB{t},\UB{t}$ are in $\PosRatSet$. Following [@Maler2003], we refer to this logic as the *signal temporal logic* (STL).
We consider constraints in the real domain as atomic propositions. The syntax of the STL formulae is defined by the grammar $$\begin{aligned}
\phi ::=&~ \True ~|~ \Prop ~|~ \phi \lor \phi ~|~ \neg \phi ~|~ \phi\,{\mathsf{U}}_{\ttt}\,\phi \\
p ::=&~ f(x) < 0
\end{aligned}$$ where $\Prop$ belongs to a set of *atomic propositions* $\APSet_\phi$, ${\mathsf{U}}_{\ttt}$ is the “until” operator bounded by a non-empty positive time interval $\t \in {\PosIntSet}$, $x = (x_1,\ldots,x_n)$ is a vector of variables, and $f : \RealSet^{n} \to \RealSet$. We use the standard abbreviations, e.g., $\phi_1\land\phi_2 := \neg(\neg\phi_1\lor\neg\phi_2)$, ${\mathsf{F}}_\ttt \phi := \True\,{\mathsf{U}}_\ttt\,\phi$ (“eventually”), and ${\mathsf{G}}_\ttt\phi := \neg{\mathsf{F}}_\ttt\neg\phi$ (“always”).
Semantics
---------
The necessary length $\Norm{\phi}$ of the signals for checking an STL formula $\phi$ is inductively defined by the structure of the formula: $$\begin{aligned}
\Norm{p} &:= 0, &
\Norm{\phi_1 \lor \phi_2} &:= \Max\,(\Norm{\phi_1},\Norm{\phi_2}), \\
\Norm{\neg \phi} &:= \Norm{\phi}, &
\Norm{\phi_1 \,{\mathsf{U}}_{\ttt}\, \phi_2} &:= \Max\,(\Norm{\phi_1},\Norm{\phi_2}) + \UB{t}.\end{aligned}$$ The map $\Obs : \APSet_\phi \to \PWS{X}$ associates each proposition $\Prop \in \APSet_\phi$ to a set $\Obs(\Prop) = \{x \!\in\! X ~|~ p(x)\}$.
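The recursive definition of $\Norm{\phi}$ is straightforward to implement. The sketch below uses a tuple encoding of formulae of our own devising, and computes the horizon of ${\mathsf{G}}_{[0,10]} {\mathsf{F}}_{[0,6.284]}\, p$ expanded into until/negation form, giving $10 + 6.284 = 16.284$:

```python
# STL formulae as nested tuples: ("true",), ("ap", name), ("not", f),
# ("or", f1, f2), ("until", (lo, hi), f1, f2). Hypothetical encoding.
def horizon(phi):
    tag = phi[0]
    if tag in ("true", "ap"):
        return 0.0
    if tag == "not":
        return horizon(phi[1])
    if tag == "or":
        return max(horizon(phi[1]), horizon(phi[2]))
    if tag == "until":                 # ||f1 U_[lo,hi] f2|| = max(...) + hi
        return max(horizon(phi[2]), horizon(phi[3])) + phi[1][1]
    raise ValueError(tag)

# G_[0,10] F_[0,6.284] p, expanded with the standard abbreviations:
inner = ("until", (0.0, 6.284), ("true",), ("not", ("ap", "p")))
phi = ("not", ("until", (0.0, 10.0), ("true",), ("not", inner)))
```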
When we check the satisfaction of $\phi$ at time $t$, we should have a signal of length $t_\Max := \Norm{\phi}+t$ (this value of $t_\Max$ is used in evaluating all subformulae of $\phi$). Let $\tilde{x} \in \Sigs_{t_\Max}(\CS)$ and $\phi$ be an STL property. Then, we have a satisfaction relation defined as follows: $$\begin{aligned}
\Sig,t &\models \True &&\\
\Sig,t &\models \Prop && \text{iff}~~ \Sig(t) \in \Obs(\Prop)\\
\Sig,t &\models \phi_1 \lor \phi_2 && \text{iff}~~ \Sig,t \models \phi_1 \LOr \Sig,t \models \phi_2\\
\Sig,t &\models \neg \phi && \text{iff}~~ \Sig,t \not\models \phi\\
\Sig,t &\models \phi_1 \,{\mathsf{U}}_\ttt\, \phi_2 \\
\omit\rlap{~\text{iff $\Exists{t'}{(t+\t)} ~ \Sig,t' \models \phi_2 \LAnd
(\ForAll{t''}{[t,t']} ~ \Sig,t'' \models \phi_1)$}}\end{aligned}$$ At a given time $t$, $\phi_1\,{\mathsf{U}}_\ttt\,\phi_2$ intuitively means that $\phi_2$ holds within the time interval $t\!+\!\t$ and that $\phi_1$ always holds until then. We also have the following validation relation: $$\CS \models \phi ~~\text{iff}~~ \ForAll{\Sig}{\Sigs_{\Norm{\phi}}(\CS)} ~ \Sig,0 \models \phi$$
Method for Monitoring STL Formulae {#s:stl:monitoring}
----------------------------------
Our interval method is based on the monitoring method proposed in [@Maler2003], which decides whether a signal satisfies an STL property based on the numerical simulation of signals of bounded lengths. This section explains this basic method. First, we introduce the notion of consistent time intervals in the STL evaluation.
\[th:consistent\] Let $\Sig$ be a signal of length $t_\Max$ and $\phi$ be an STL formula. We say that a left-closed and right-open interval $[\LB{t},\UB{t}) \subseteq \PosRealSet$ is *consistent* with $\phi$ iff $\ForAll{t}{(\LB{t},\UB{t})}~ \Sig,t \models \phi$. [^2]
Next, whether a signal satisfies property $\phi$ is checked as follows:
1. For each atomic proposition $p$ in $\APSet_\phi$, monitor the signal of length $\Norm{\phi}$ and identify a non-overlapping set of consistent time intervals $T_p = \{\t_1,\ldots,\t_{n_p}\}$.
2. Following the parse tree of $\phi$ in a bottom-up fashion, obtain a set of consistent time intervals of $\phi$. For each construct of STL, obtain the set that is consistent with the sub-formula as follows: $$\begin{aligned}
T_{\neg \phi} &:=
\PosRealSet \setminus T_\phi \label{e:op:neg} \\
T_{\phi_1\lor \phi_2} &:= T_{\phi_1} \cup T_{\phi_2} \label{e:op:lor} \\
T_{\phi_1 {\mathsf{U}}_{\ttt} \phi_2} &:= \nonumber \\
\omit\rlap{\quad $\{\AlgName{Shift}_{\ttt}(\t_1 \cap \t_2) \cap \t_1 ~|~
\t_1 \in T_{\phi_1}, \t_2 \in T_{\phi_2} \}$} \label{e:op:until}\end{aligned}$$ where $\AlgName{Shift}_{\ttt}(\s) := [\LB{s}-\UB{t}, \UB{s}-\LB{t}) \cap \PosRealSet$.
3. Check whether $\Min\ T_\phi$ contains time 0. If yes, $\phi$ is satisfied; otherwise, it is not satisfied.
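The propagation rules of Step 2 operate on exact sets of half-open intervals and can be sketched as follows (Python, with `INF` for an unbounded right end; exact reals are assumed, so this is the idealized method rather than the interval-safe version developed later):

```python
import math

INF = float("inf")
# Consistent time intervals as half-open pairs [lo, hi); exact-real sketch.

def invert(T):
    # Complement within the nonnegative reals; T sorted and disjoint.
    out, cur = [], 0.0
    for lo, hi in sorted(T):
        if cur < lo:
            out.append((cur, lo))
        cur = max(cur, hi)
    if cur < INF:
        out.append((cur, INF))
    return out

def shift(s, t):
    # Shift_t([s_lo, s_hi)) = [s_lo - t_hi, s_hi - t_lo) ∩ [0, ∞)
    lo, hi = max(s[0] - t[1], 0.0), s[1] - t[0]
    return (lo, hi) if lo < hi else None

def until(T1, T2, t):
    # T_{phi1 U_t phi2} = { Shift_t(s1 ∩ s2) ∩ s1 | s1 in T1, s2 in T2 }
    out = []
    for s1 in T1:
        for s2 in T2:
            lo, hi = max(s1[0], s2[0]), min(s1[1], s2[1])    # s1 ∩ s2
            if lo >= hi:
                continue
            sh = shift((lo, hi), t)
            if sh is None:
                continue
            lo2, hi2 = max(sh[0], s1[0]), min(sh[1], s1[1])  # ∩ s1
            if lo2 < hi2:
                out.append((lo2, hi2))
    return out

# F_[0,2pi](cos x < 0 and sin x < 0) on the timer signal x(t) = t:
# T1 = [0, inf) for True, T2 = [pi, 1.5*pi) for the conjunction.
T = until([(0.0, INF)], [(math.pi, 1.5 * math.pi)], (0.0, 2 * math.pi))
```

The computation returns the single interval $[0, \frac{3}{2}\pi)$, matching the shift of $[\pi, \frac{3}{2}\pi)$ by $[0, 2\pi]$ clipped to the nonnegative reals.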
\[ex:stl\] We consider the property $$\begin{gathered}
{\mathsf{G}}_{[0,10]} {\mathsf{F}}_{[0,6.284]}\, \neg (x_2\!-\!1 \!<\! 0) ~\equiv~\\
\neg( \True \,{\mathsf{U}}_{[0,10]}\, \neg ( \True \,{\mathsf{U}}_{[0,6.284]}\, \neg(x_2\!-\!1 \!<\! 0) ))
\end{gathered}$$ for the model in Example \[ex:rotation\], which describes that, within the initial $10$ time units, the signal $x_2$ increases beyond $1$ within every $6.284$ time units. Verification with the monitoring method (extended to an interval method) is illustrated in Figure \[f:rotation:proc\], when the parameter is set as $u_1 := 0.1+[-10^{-3},10^{-3}]$.
Interval-Based Monitoring Method {#s:method}
================================
In this section, we propose a reliable method for monitoring STL properties on continuous-time dynamical systems. This method is an interval extension of the monitoring method described in Section \[s:stl:monitoring\].
Given a system $\CS$ and an STL property $\phi$, the proposed $\AlgName{MonitorSTL}$ algorithm (Algorithm \[a:main\]) outputs the following results: $\Valid$ (implying that $\CS \models \phi$), $\Unsat$ (implying that $\CS \models \neg\phi$), or $\Unknown$ (meaning that the computation is inconclusive). The algorithm implements the method described in Section \[s:stl:monitoring\]. The sub-procedures $\AlgName{MonitorAP}$ (Section \[s:method:map\]) for monitoring atomic propositions, and $\Propagate$ and $\AlgName{ConsistentAtInitTime}$ (Section \[s:method:propag\]) for evaluating an STL formula are rendered rigorous and sound by interval analysis; namely, the precision of every numerical computation is guaranteed, and the correctness of the monitoring method is assured by verifying the unique existence of a solution within its resulting interval. Any errors introduced by the sub-procedures are captured by the catch clause at Line 5.
**Input:** $\CS$, $\phi$ **Output:** $\Valid$, $\Unsat$, or $\Unknown$

1: **try**
2: $\quad \mathcal{T} \Asn \AlgName{MonitorAP}(\CS, \phi)$
3: $\quad \T_\phi \Asn \Propagate(\mathcal{T}, \phi)$
4: $\quad$ **return** $\AlgName{ConsistentAtInitTime}(\T_\phi)$
5: **catch error return** $\Unknown$
6: **end try**
Despite its efficient computational cost (Section \[s:complexity\]), the proposed method has some limitations. First, the method is incomplete; it allows inconclusive computations and outputs $\Unknown$ when the interval computation is too imprecise to separate several solutions within an interval. In practice, the $\Unknown$ output is valuable, because a numerically non-robust signal is rejected as an error in the verification process. Second, although the algorithm can validate system properties in principle, it can be expected to succeed only when the domains $U$ and $X_\Init$ are sufficiently small, particularly when evaluating nonlinear systems. Third, the method is a bounded model-checking method, in the sense that the domain $U\Times X$ and the lengths of the signals are both bounded.
Our method is intended as an underlying component of (i) a more generic verification framework, in which the possible initial and parameter values can be exhaustively enumerated, and (ii) statistical methods that treat the parameters as random variables and evaluate probabilistic STL properties.
Approximation of Consistent Time Intervals {#s:method:approx}
------------------------------------------
In this section, we introduce an interval approximation for the consistent time intervals (Definition \[th:consistent\]). The basic idea is to enclose each bound of the consistent time intervals within a closed interval.
\[d:acti\] Given a consistent time interval $\t = [\LB{t},\UB{t}) \subseteq [0,\Norm{\phi})$ that is consistent with an STL property $\phi$, we define an *(interval) approximation* as a pair $(\s,\s')$ such that $\s,\s' \in\IntSet$, $\LB{t} \in \s$, and $\UB{t} \in \s'$.
Given an approximation $(\s, \s')$ and a continuous signal $\Sig$, we have $\ForAll{t}{[\UB{s},\LB{s}')}~\Sig,t\models\phi$.
We now approximate a set of consistent time intervals $\{ \t_1,\ldots, \t_{n_\phi} \}$ as a set (or sequence) of approximations. Instead of the set of pairs $\{ (\s_1, \s'_1), \AB \ldots, (\s_{n_\phi}, \s'_{n_\phi}) \}$, we represent a set of approximations with the set $\{ (\s_1,\True), (\s'_1,\False), \ldots, \AB (\s_{n_\phi},\True), (\s'_{n_\phi},\False) \}$, where the tags $\True$ and $\False$ represent whether an element corresponds to a lower or an upper bound. A set of approximations is interpreted as both *outer* and *inner approximations*; that is, each consistent time interval $\t_i$ is enclosed by the outer approximation $[\LB{s}_i,\UB{s}_i']$, and the inner approximation $(\UB{s}_i,\LB{s}'_i)$ is contained in $\t_i$.
\[d:approxs\] Consider a set $\T = \{\AB (\s_1,b_1), \AB \ldots, \AB (\s_{\#\T}, b_{\#\T})\}$ where $\s_i \in \IntSet$, $b_i \in \{\True,\False\}$, and $\#\T\in\NatSet$. The second element of each pair is a *polarity value* that represents whether the pair is an enclosure of a lower or upper bound of a consistent time interval. We say that $\T$ is *canonical* iff
- the elements can be sorted, i.e.,\
$\ForAll{i}{\{1,\ldots,\#\T\!-\!1\}}~ \UB{s}_i < \LB{s}_{i+1}$,
- $\ForAll{i}{\{1,\ldots,\#\T\}}~ 0 \leq \UB{s}_i$,
- $\ForAll{i}{\{1,\ldots,\#\T\!-\!1\}}~ b_i \neq b_{i+1}$, and
- $b_1 = \True$.
We say that $\T$ is an *(interval) approximation* of $T_\phi = \{\t_1,\ldots, \t_{n_\phi}\}$ iff
- $\T$ is canonical,
- $\ForAll{\t}{T_\phi}~ \Exists{(\s,\True)}{\T}~ \LB{t} \in \s$,
- $\ForAll{\t}{T_\phi}~ \UB{t}<t_\Max \Rightarrow \Exists{(\s,\False)}{\T}~ \UB{t} \in \s$,
- $\ForAll{(\s,\True)} {\T}~ \UExists{\t}{T_\phi}~ \LB{t} \in \s$, and
- $\ForAll{(\s,\False)}{\T}~ \UExists{\t}{T_\phi}~ \UB{t} \in \s$.
Given a set of consistent time intervals, its canonical approximation is a disjoint sequence of lower and upper bound enclosures; the sequence starts with a lower bound enclosure and ends with either a lower or an upper bound enclosure. For $\t\in T_\phi$ such that $\UB{t} > \Norm{\phi}$, $\T_\phi$ may contain only its lower-bound enclosure. $\Universe := \{([0],\True)\}$ and $\T_\False := \emptyset$ are the approximations of $T_\True=\PosRealSet$ and $T_\False=\emptyset$, respectively.
\[ex:approx\] Let $\CS$ be $(x, [0,10], 0, F(x) = 1)$ such that the variable $x$ represents the signal $\tilde{x}(t) = t$. Consider a property $\phi := {\mathsf{F}}_{[0,2\pi]} (\cos x<0 \LAnd \sin x<0)$. $\Norm{\phi}$ is $2\pi$, and the sets of consistent time intervals within $[0,2\pi]$ are $T_{\cos x<0} := \{ [\frac{\pi}{2},\frac{3}{2} \pi) \}$, $T_{\sin x<0} := \{ [\pi,2\pi) \}$, and $T_\phi := \{ [0, \frac{3}{2} \pi) \}$, respectively. Then, their approximations are $$\begin{aligned}
\T_{\cos x<0} & := \{([1.57,1.58],\True), ([4.71,4.72],\False)\}, \\
\T_{\sin x<0} & := \{([3.14,3.15],\True), ([6.28,6.29],\False)\}, \\
\T_\phi &:= \{ ([0],\True), ([4.71,4.72],\False) \}.
\end{aligned}$$
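The canonicity conditions of Definition \[d:approxs\] are directly checkable. A small sketch of ours (enclosures encoded as `((lo, hi), polarity)` pairs, with polarity `True` tagging a lower-bound enclosure):

```python
def is_canonical(T):
    # T: list of ((lo, hi), polarity) pairs in the given order.
    for (s, b), (s2, b2) in zip(T, T[1:]):
        if not s[1] < s2[0]:          # enclosures must be sorted and disjoint
            return False
        if b == b2:                   # polarities must alternate
            return False
    if any(s[1] < 0 for s, _ in T):   # 0 <= hi for every enclosure
        return False
    return not T or T[0][1] is True   # must start with a lower bound

T_cos = [((1.57, 1.58), True), ((4.71, 4.72), False)]
T_phi = [((0.0, 0.0), True), ((4.71, 4.72), False)]
# An overlapping (hence non-canonical) set:
bad = [((0.0, 0.0), True), ((0.95, 1.1), False), ((0.9, 1.05), True)]
```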
Monitoring Atomic Propositions {#s:method:map}
------------------------------
This section describes the $\MonitorAP$ procedure (Algorithm \[a:map\]) that, given a system $\CS$ and an STL property $\phi$, computes a set $\mathcal{T}$ containing an approximated set $\T_p$ of consistent time intervals for each $p \in \APSet_\phi$.
**Input:** $\CS$, $\phi$ **Output:** $\mathcal{T}$

1: $\mathcal{T} \Asn \emptyset$
2: **for each** $p \in \APSet_\phi$ **do**
3: $\quad b \Asn p(\XC(0))$
4: $\quad$ **if** $b$ **then** $\T_p \Asn \{([0],b)\}$ **else** $\T_p \Asn \emptyset$ **end if**
5: $\quad \t \Asn [0,\Norm{\phi}]$; $b \Asn \neg b$
6: $\quad$ **loop**
7: $\qquad \t \Asn \SearchZero(\XC, F, f, \t)$
8: $\qquad$ **if** $\t = \emptyset$ **then break end if**
9: $\qquad \T_p \Asn \T_p \cup \{(\t,b)\}$; $\t \Asn [\UB{t}, \Norm{\phi}]$; $b \Asn \neg b$
10: $\quad$ **end loop**
11: $\quad \mathcal{T} \Asn \mathcal{T} \cup \{\T_p\}$
12: **end for**
13: **return** $\mathcal{T}$
The outer loop enumerates each atomic proposition $p$ of the form $f(x) < 0$. Lines 3–4 compute the initial polarity by evaluating the proposition at time 0; the set $\T_p$ is initialized accordingly. Note that $\XC$ represents a solving process for the signals in $\Sigs_{\bar{t}}(\CS)$ (see Section \[s:ode\]), which can be regarded as a function $\PosIntSet \to \IntSet^n$. The inner loop searches for bounds at which $f$ changes sign. Line 7 invokes the $\AlgName{SearchZero}$ procedure (Algorithm \[a:searchzero\]), which searches for the *earliest* bound at which $p$ switches consistency within the time interval $\t$, and outputs a sharp enclosure of the bound (or $\emptyset$ if there is no solution). This result is stored in the set $\T_p$, and $\T_p$ is stored in $\mathcal{T}$.
For $\phi$ in Example \[ex:approx\], $\MonitorAP$ computes $\mathcal{T}$ as $\{\T_{\cos x_1<0}, \AB \T_{\sin x_1<0}\}$.
The evaluation of atomic propositions $f(x) < 0$ switches between $\True$ and $\False$ at the roots of $f : X\to\RealSet$. The intersection between a signal $\Sig(t)$ and a boundary condition $f(x) = 0$ is searched by Algorithm \[a:searchzero\]. As inputs, this algorithm accepts an interval extension of the signal $\XC : \PosIntSet\to\IntSet^n$, a vector field $F : X \to X$, the function $f$, and a time interval $\t_\mathrm{init} \in\PosIntSet$ to be searched. The algorithm computes the time interval $\t \subseteq \t_\mathrm{init}$ that encloses the earliest root, i.e., $$\begin{gathered}
\label{e:zero:earliest}
\t = \Box \bigl\{~ \Min \{t \!\in\! \t_\mathrm{init} ~|~ f(\Sig(t))=0\} \\
~\big|~ \Sig \in \Sigs_{\UB{t}_\mathrm{init}}(\CS) ~\bigr\}.\end{gathered}$$ $\SearchZero$ verifies that $\t$ encloses a unique bound, i.e., $$\label{e:zero:unique}
\ForAll{\Sig}{\Sigs_{\UB{t}_\mathrm{init}}(\CS)}~ \UExists{t}{\t}~ f(\Sig(t)) = 0.$$ Alternatively, if no bound exists in $\t_\mathrm{init}$, $\SearchZero$ verifies the following: $$\label{e:zero:unsat}
\ForAll{\Sig}{\Sigs_{\UB{t}_\mathrm{init}}(\CS)}~ \ForAll{t}{\t_\mathrm{init}}~ f(\Sig(t)) \neq 0.$$
**Input:** $\XC : \PosIntSet\to\IntSet^n$, $F : X \to X$, $f : X \to \RealSet$, $\t_\mathrm{init} \in \PosIntSet$ **Output:** $\t\in\IntSet$ **Parameters:** $\epsilon \in \SPosRatSet$, $\theta \in (0,1) \subset \RatSet$

1: $\t \Asn \t_\mathrm{init}$
2: **repeat**
3: $\quad \t_\mathrm{bak} \Asn \t$
4: $\quad \d \Asn \Dt(f, \XC, F, \t)$
5: $\quad \t \Asn \LB{t}+\ExtDiv(-\f(\XC(\LB{t})),\ \d,\ \t-\LB{t})$
6: **until** $\t = \emptyset \lor d(\t,\t_\mathrm{bak}) < \epsilon$
7: **if** $\t = \emptyset$ **then return** $\emptyset$ **end if**
8: $\t \Asn \LB{t}$; $\Delta \Asn \infty$
9: **loop**
10: $\quad \d \Asn \Dt(f, \XC, F, \t)$
11: $\quad$ **if** $\d \ni 0$ **then error end if**
12: $\quad \t' \Asn \LB{t}-{\f(\XC(\LB{t}))}/{\d}$
13: $\quad$ **if** $\t' \subseteq \Inter{\t}$ **then** $\t \Asn \t'$; **break end if**
14: $\quad \Delta_\mathrm{bak} \Asn \Delta$; $\Delta \Asn d(\t,\t')$
15: $\quad \t \Asn \t_\mathrm{init} \cap \AlgName{Inflate}(\t',1\!+\!\theta)$
16: $\quad$ **if** $\Delta \geq (1\!-\!\theta)\, \Delta_\mathrm{bak}$ **then error end if**
17: **end loop**
18: **return** $\t$
If $\SearchZero$ returns an interval $\t \neq \emptyset$, properties \[e:zero:earliest\] and \[e:zero:unique\] hold. If it returns $\emptyset$, property \[e:zero:unsat\] holds.
To justify the soundness of $\SearchZero$, we describe some details of Algorithm \[a:searchzero\]. Lines 2–6 repeatedly filter the time interval $\t$ using the interval Newton operator. Line 4 (and Line 10) invokes the $\Dt$ procedure, which is given a function $f$ and computes an interval enclosure of the derivative $\frac{d}{dt}f(\Sig(t))$ over $\t$ using the chain rule $$\tfrac{d}{dt}f(\Sig(\t)) \!=\! \tfrac{d}{dx}f(\Sig(\t)) \cdot \tfrac{d}{dt}\Sig(\t)
\subseteq \f'(\XC(\t)) \cdot \F(\XC(\t)).$$ Next, at Line 5, the interval Newton operator is applied. To handle a numerator interval $\d$ that contains zero, we implement the interval Newton operator with the extended division described in Section \[s:interval\]. Because the operator is expanded at the lower bound $\LB{t}$ and the extended division encloses the values in the domain $\t-\LB{t}$, the inconsistent portion of $\t$ is filtered out without losing any solution or expanding the interval. If the interval Newton operator returns $\emptyset$, $\SearchZero$ also returns $\emptyset$ to signal unsatisfiability (Line 7).
Because $\t$ may contain several solutions, Line 8 of the algorithm resets $\t$ to the lower bound as a starting value for computing the enclosure of the earliest solution. Then, $\SearchZero$ checks that the time interval contains a unique solution. To this end, it applies the interval Newton with the inclusion test to prove the unique existence of a solution within the contracted interval $\t'$ (Lines 9–17). The interval Newton verification is repeated with an inflation process of the time interval (see [@Goldztejn2010:RC] for a detailed implementation). If Line 18 is reached with no error, the time interval $\t$ is a sharp enclosure of the first zero of $f(\Sig(t))=0$. When $\SearchZero$ is implemented with machine-representable real numbers or when there is a tangency between the signal and the boundary condition, an error may result. Line 11 of $\SearchZero$ outputs an error if the derivative on an (inflated) time interval contains zero. At Line 16, we limit the number of iterations by specifying a threshold $1\!-\!\theta$ for the inflation ratio between two consecutive contraction amounts as in [@Goldztejn2010:RC].
Evaluation of STL Properties {#s:method:propag}
----------------------------
We now describe the procedures for evaluating STL formulae at Lines 3 and 4 of Algorithm \[a:main\]. Propagation of a set of monitored time intervals that are consistent with the atomic propositions is implemented as a rigorous and sound but incomplete procedure.
To evaluate the approximated sets, we extend the evaluation procedure on sets of consistent time intervals described in Section \[s:stl:monitoring\]. Algorithm \[a:propagate\] implements Step 2 of the procedure, which propagates the STL formulae over a set of time intervals.
**Input:** $\phi$, $\mathcal{T} = \{\T_p\}_{p \in \APSet_\phi}$ **Output:** $\T_\phi$

**switch** $\phi$
**case** $p$ : **return** $\T_p$
**case** $\neg \phi'$ : **return** $\AlgName{Invert}(\Propagate(\phi', \mathcal{T}))$
**case** $\phi_1 \lor \phi_2$ :
$\quad \T_{1} \Asn \Propagate(\phi_1, \mathcal{T})$; $\T_{2} \Asn \Propagate(\phi_2, \mathcal{T})$
$\quad$ **return** $\AlgName{Join}(\T_{1}, \T_{2})$
**case** $\phi_1 {\mathsf{U}}_\ttt \phi_2$ :
$\quad \T_{1} \Asn \Propagate(\phi_1, \mathcal{T})$; $\T_{2} \Asn \Propagate(\phi_2, \mathcal{T})$
$\quad$ **return** $\AlgName{ShiftAll}_\ttt(\T_{1}, \T_{2})$
**end switch**
We now handle the approximated sets by extending the operations \[e:op:neg\]–\[e:op:until\] on sets of time intervals. The procedures for the operations $\Invert$, $\JoinOp$, $\Intersect$, and $\ShiftAll$ are described in Figure \[f:operations\] in the appendix. Note that some operations cause ambiguities when handling non-canonical approximated time intervals. Such a situation is exemplified below. To avoid these ambiguities, our implementation results in an error once a resulting set becomes non-canonical.[^3]
Consider the same timer system as in Example \[ex:approx\], i.e., $\tilde{x}(t) := t$, and the property $\phi := {\mathsf{F}}_{[0,\UB{t}]} \neg (x-1 < 0 \lor 1-x < 0)$, where $\UB{t} \in \PosRealSetS$. The subformula $x-1 < 0 \lor 1-x < 0$ is consistent at every time except at $t = 1$; therefore, the set of consistent time intervals is $T := \{[0,1),[1,t_\Max)\}$. Assume $T$ is approximated with a non-canonical set $\{([0],\True),([0.95,1.1],\False),([0.9,1.05],\True)\}$.[^4] To verify $\phi$, the procedures $\Invert$ and $\AlgName{ShiftAll}_{[0,\UB{t}]}$ should be applied. However, as illustrated in Figure \[f:overlap\], we cannot decide whether the overlapping boundary intervals should be removed and expanded, or separated, and $\Propagate$ results in an error.[^5] This ambiguous situation is avoided by using only canonical approximations. Note that, in some cases, this local ambiguity does not impact the global consistency. In the case of this example, if $\UB{t} < 0.9$, both scenarios lead to $\False$, and forking the resolution process could resolve the local ambiguity.
![Ambiguity caused by overlapping bounds[]{data-label="f:overlap"}](figures/overlap1.pdf){width=".8\linewidth"}
The following claims state that the procedures are closed in the canonical approximated sets, and the $\Propagate$ procedure is sound.
\[th:canonical\] Let $\T_1$ and $\T_2$ be canonical approximated sets. If $\T$ results from $\Invert(\T_1)$, $\JoinOp(\T_1,\T_2)$, $\Intersect(\T_1,\T_2)$, or $\ShiftAll(\T_1,\T_2)$, and if no $\mathbf{error}$ occurs in these procedures, then $\T$ is canonical.
See Appendix \[s:canonical:proof\].
\[th:soundness\] Consider an STL formula $\phi$ and a set $\mathcal{T} = \{\T_p\}_{p \in \APSet_\phi}$ of approximated sets of time intervals that are consistent with atomic propositions. If $\T_\phi = \Propagate(\phi, \mathcal{T})$, then $\T_\phi$ is an approximation of $T_\phi$.
See Appendix \[s:soundness:proof\].
Finally, we obtain $\T_\phi$ and conclude that $\phi$ is $\Valid$ if $\s_1$ is the smallest interval in $\T_\phi$ and $\UB{s}_1 \leq 0 < \LB{s}_1'$, $\Unsat$ if $\T_\phi = \emptyset$ or $0 < \LB{s}_1$, or $\Unknown$ if $0 \in [\LB{s}_1,\UB{s}_1)$. The computation is performed by $\AlgName{ConsistentAtInitTime}$ (see Algorithm \[a:inittime\] in the appendix).
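This final decision can be sketched directly from the three conditions above (our own encoding of approximation sets as ordered `((lo, hi), polarity)` pairs, polarity `True` for a lower-bound enclosure):

```python
def consistent_at_init_time(T):
    # T: canonical approximation set in time order.
    if not T:
        return "Unsat"                      # no consistent interval at all
    (lo1, hi1), pol = T[0]
    assert pol is True                      # canonical sets start with a lower bound
    if 0.0 < lo1:
        return "Unsat"                      # first interval certainly starts after 0
    if hi1 <= 0.0 and (len(T) == 1 or T[1][0][0] > 0.0):
        return "Valid"                      # 0 certainly inside the first interval
    return "Unknown"                        # 0 falls inside the enclosure [lo1, hi1)
```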
Computational Cost {#s:complexity}
------------------
The time complexity of $\Propagate$ is bounded by the product of the size (i.e., the number of operators) of the considered STL formula $\phi$ and the cost of the procedures $\Invert$, $\JoinOp$, and $\ShiftAll$ in Figure \[f:operations\]. The complexity of the procedures on approximated sets is polynomial in the number of intersections of the signal and the atomic proposition bounds (see Appendix \[s:complexity:op\]). The number of iterations in $\MonitorAP$ is bounded by the product of the size of $\APSet_\phi$ and the maximum number of bounds detected for an atomic proposition, i.e., $\Max_{p\in\APSet_\phi}\ \#\T_p$.
The number of bounds detected depends on the oscillations of the ODE solution and the predicate bound. Although it can be very high in theory, it is usually quite small. In the generic case of a non-tangent intersection between the ODE solution and the predicate bound, the $\SearchZero$ procedure has quadratic convergence and therefore a very low computational cost. The main computational cost of the method is therefore the validated simulation of the ODE. This cost is difficult to foresee: it highly depends on the ODE and the solver. For example, validated solvers are currently quite inefficient in solving stiff ODEs and require many integration steps, leading to a high computational cost.
  $\phi$ \#$\APSet_\phi$ $\GBnd$ $\Wid{\u_1}$ \#$\Valid$ \#$\Unsat$ \#$\Unknown$ time
---------- ----------------- --------- ----------------- ------------ ------------ -------------- -------
0 507 493 0+0 0.51s
$2\cdot10^{-6}$ 483 508 9+0 0.52s
1 $2\cdot10^{-3}$ 0 462 538+0 –
0 490 510 0+0 0.03s
$2\cdot10^{-3}$ 270 470 260+0 0.02s
0 485 515 0+0 1.1s
$2\cdot10^{-6}$ 353 505 26+116 1.03s
2 $2\cdot10^{-3}$ 0 463 537+0 –
0 514 486 0+0 0.05s
$2\cdot10^{-3}$ 86 493 421+0 0.06s
0 482 518 0+0 1.7s
$2\cdot10^{-6}$ 346 498 18+138 1.6s
3 $2\cdot10^{-3}$ 0 0 1000+0 –
0 516 484 0+0 0.08s
$2\cdot10^{-3}$ 84 0 916+0 0.09s
0 490 510 0+0 2.7s
$2\cdot10^{-6}$ 352 477 74+97 2.7s
5 $2\cdot10^{-3}$ 0 0 1000+0 –
0 499 501 0+0 0.14s
$2\cdot10^{-3}$ 0 0 1000+0 –
  : \[t:ex:rotation\] Experimental results (rotation)
Experiments {#s:ex}
===========
We have implemented the proposed method and experimented on two examples to confirm the effectiveness of the method. Experiments were run on a 3.4GHz Intel Xeon processor with 16GB of RAM.
Implementation {#s:impl}
--------------
Algorithms \[a:main\]–\[a:inittime\] were implemented in OCaml and C/C++. ODEs were solved by procedures in the CAPD library. The configurable parameters $t_\Min$, $\epsilon$, and $\theta$ correspond to the smallest integration step size that CAPD can take, the threshold used in Algorithm \[a:searchzero\], and the threshold used in $\AlgName{Inflate}$, respectively. In the experiments, these parameters were set as $t_\Min := 10^{-14}$, $\epsilon := 10^{-14}$, and $\theta := 0.01$.
Verification of the Rotation System
-----------------------------------
We verified the system in Example \[ex:rotation\] on four STL formulae. The specifications and results of this experiment are summarized in Table \[t:ex:rotation\]. The first column lists the STL formulae in which the bound $\GBnd$ of each ${\mathsf{G}}$ operator is parameterized and set to either $\GBnd:=100$ or $\GBnd:=10$. The column “\#$\APSet_\phi$” represents the number of atomic propositions in each $\phi$. In each verification, the parameter value $u_1$ was first randomly selected from $[-0.1,0.1]$ and then modified to $u_1 := u_1 + \u_1$, where $\u_1$ was any of $[0]$, $[-10^{-6},10^{-6}]$, or $[-10^{-3},10^{-3}]$. The column “$\Wid{\u_1}$” indicates the interval used in each verification.
The considered STL properties are assumed to hold if $u_1 > 0$ and not to hold if $u_1 < 0$. Each STL property was verified 1000 times. The columns “\#$\Valid$”, “\#$\Unsat$”, and “\#$\Unknown$” list the numbers of runs resulting in each output; the “\#$\Unknown$” outputs are separated with ‘$+$’ according to whether it was caused by an error in the $\SearchZero$ algorithm or an error in the $\Propagate$ and $\AlgName{ConsistentAtInitTime}$ algorithms. The column “time” lists the average CPU time taken for a $\Valid$ verification.
From the results, we can observe that the rates of inconclusive runs were related to the simulation lengths, the uncertainties in the parameter values, and the size of the formula $\phi$. $\Unknown$ results were generated by the interval Newton process in $\SearchZero$ and the undecidable situations in $\Propagate$ and $\AlgName{ConsistentAtInitTime}$. In this experiment, verification failures increased as the value of $u_1$ approached 0 and the signal and boundary condition became close to tangent. When the parameter values were exact and $\Wid{\u_1} = 0$, all the verifications succeeded even under near-singular conditions because the considered signals were always enclosed with tight intervals. As coarser intervals were appended to the parameter values and the simulation lengths became longer, the number of $\Unknown$ results increased; meanwhile, the number of $\Valid$ results decreased more rapidly than the number of $\Unknown$ results because a $\Valid$ verification required detecting a number of bounds for each atomic proposition. Any detection failure resulted in $\Unknown$.
The bottleneck of the verification process is the $\SearchZero$ algorithm, which integrates the ODEs and searches for boundary intervals. The number of calls to $\SearchZero$ depends on the size of $\APSet_\phi$ and the number of bounds, as described in Section \[s:complexity\]. Therefore, the runtime increased linearly in either the number of atomic propositions or the simulation length, which should be proportional to the number of bounds. The cost of evaluating the STL formulae was relatively small and did not affect the overall timings.
Verification of the Lorenz System
---------------------------------
We verified the system in Example \[ex:lorenz\] on the following STL formula: $$\begin{gathered}
\label{e:lorenz}
{\mathsf{G}}_{[0,15]} (\neg(-x_1-15<0) \Rightarrow \\
{\mathsf{F}}_{[0.5,5]} {\mathsf{G}}_{[0,1]} ((x_1\!-\!10)^2 \!+\! (x_2\!-\!10)^2 \!-\! 150<0))\end{gathered}$$ In each verification, the parameters were set to exact values randomly selected from the domain. The signal $(x_1, x_2)$ oscillates on either the positive or the negative side. According to the formula, when $x_1$ descends below $-15$, $(x_1,x_2)$ moves into the disk $(x_1-10)^2+(x_2-10)^2 <150$ after some duration in the interval $[0.5,5]$ and remains in the disk for at least 1 time unit.
The experimental results are summarized in Table \[t:ex:lorenz\]. As in Table \[t:ex:rotation\], the columns (from left to right) represent the number of atomic propositions, the numbers of $\Valid$, $\Unsat$, and $\Unknown$ verification results in 1000 runs, and the average CPU time for a $\Valid$ verification.
\#$\APSet_\phi$ \#$\Valid$ \#$\Unsat$ \#$\Unknown$ time
----------------- ------------ ------------ -------------- ------
2 566 413 21 9.2s
: \[t:ex:lorenz\] Experimental results (Lorenz)
This experiment demonstrated that the proposed method can handle a chaotic system with a nonlinear atomic proposition. In such systems, non-validated numerical methods frequently output wrong results because of rounding errors, as shown in the next section. As explained in Example \[ex:lorenz\], CAPD integration generated a coarse enclosure of the signal (around $25.8$ time units), which introduced errors in the integration process $\XC$ or the interval Newton process. These errors account for the 21 $\Unknown$ results in Table \[t:ex:lorenz\].
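The sensitivity that makes non-validated integration unreliable here can be illustrated with a plain floating-point simulation. This is a hypothetical sketch with the classical chaotic parameter values; the step size and perturbation are arbitrary choices, not taken from the experiments above:

```python
# Hypothetical sketch of the Lorenz system's sensitivity to tiny perturbations,
# the reason non-validated floating-point integration can report wrong results.
# Classical chaotic parameters; step size and perturbation are arbitrary.

def lorenz_rk4(state, h, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def simulate(x0, t_end=25.0, h=0.001):
    s = x0
    for _ in range(int(t_end / h)):
        s = lorenz_rk4(s, h)
    return s

a = simulate((1.0, 1.0, 1.0))
b = simulate((1.0, 1.0, 1.0 + 1e-8))  # perturb z0 by one part in 10^8
gap = max(abs(p - q) for p, q in zip(a, b))
print(gap)  # the two trajectories have fully separated by t = 25
```

A perturbation far below double-precision rounding noise grows to the size of the attractor within the simulated horizon, so an interval enclosure necessarily becomes coarse, as observed with CAPD above.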
Comparison with Breach Toolbox
------------------------------
For comparative purposes, we ran the above problems on the Breach Toolbox [@Donze2010a] (built from commit ed1178c in the Mercurial repository), a tool for STL verification based on numerical computation with rounding errors. Breach can check the satisfiability and the *robustness* of a property, the latter quantified by a positive or negative real value based on the distance between the considered signal and the bound in the state space at which the satisfaction of the STL property switches.
For the rotation system, when the parameter value $u_1$ approached 0 (specifically, at $u_1 := 0.001$), Breach returned $\Unsat$, whereas our implementation returned $\Valid$. This incorrect verification was reflected in the low robustness value. In this example, the robustness was low for all parameter values because the initial part of the signal was close to the bounds of the atomic propositions.
For the Lorenz system, the numerical integration process of Breach yielded incorrect signals, as explained in Example \[ex:lorenz:sig\]; therefore, the verification results were unreliable. For example, when $u = (10,28,2.5)$, Breach reported an $\Unsat$ verification of property , whereas our method returned certified $\Valid$.
Breach ran more quickly than our implementation: it required less than 0.01s for both problems.
Conclusions
===========
We have presented a sound STL validation method for checking that all initialized signals satisfy the properties of a system. The proposed method detects a witness signal and verifies its unique existence using an interval-based ODE integration and an interval Newton method. The experimental results demonstrate the potential for the method as a practical tool. In future work, we will improve our method and implementation to handle hybrid systems and large and uncertain initial values. Examples in a realistic setting should be demonstrated with the implementation.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was partially funded by JSPS (KAKENHI 25880008 and 15K15968).
Omitted Procedures and Proofs
=============================
Procedures of the operations on approximated sets are specified in Figure \[f:operations\]. $\Invert$, $\JoinOp$, $\Intersect$, and $\ShiftAll$ implement the operations in Step 2 of Section \[s:stl:monitoring\] as procedures that modify the set of boundary intervals. $\ShiftAll$ consists of sub-procedures $\AlgName{ShiftPairs}$ and $\AlgName{ShiftElem}$; $\AlgName{ShiftPairs}$ computes the intersections and back-shifting pairwise ($\AlgName{Pairs}(\T)$ enumerates approximations of time intervals in $\T$); $\AlgName{ShiftElem}$ applies the back-shifting. The results of the procedures may become non-canonical, so $\Normalize$ is applied at last to make them canonical.
$\AlgName{ConsistentAtInitTime}$ is implemented in Algorithm \[a:inittime\]. An input $\T_\phi$ is either $\Universe$, $\emptyset$, or an approximated set; in the last case, the algorithm picks an earliest approximation with $\AlgName{GetFirstElem}$, and checks whether it contains 0 or not.
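A hypothetical re-implementation of this check over a plain list of boundary enclosures, not the paper's actual code, can be sketched as follows. Each enclosure is an `((lo, hi), polarity)` pair, with polarity `True` marking a lower bound of a consistent time interval and `False` an upper bound:

```python
# Hypothetical sketch of the init-time consistency check: T_phi is either the
# sentinel "Universe", an empty list, or a sorted list of boundary enclosures
# ((lo, hi), polarity).

def consistent_at_init_time(T_phi):
    if T_phi == "Universe":
        return "Valid"       # consistent everywhere, in particular at t = 0
    if T_phi == []:
        return "Unsat"
    (lo, hi), polarity = T_phi[0]        # earliest boundary enclosure
    if polarity is True and hi <= 0.0:
        return "Valid"       # the first consistent interval already covers 0
    if polarity is True and lo > 0.0:
        return "Unsat"       # consistency only begins strictly after 0
    return "Unknown"         # the enclosure straddles 0: undecidable here

assert consistent_at_init_time("Universe") == "Valid"
assert consistent_at_init_time([((0.0, 0.0), True), ((1.9, 2.1), False)]) == "Valid"
assert consistent_at_init_time([((0.5, 0.6), True)]) == "Unsat"
assert consistent_at_init_time([((-0.1, 0.1), True)]) == "Unknown"
```

The last case is exactly the $\Unknown$ output discussed in the experiments: the enclosure of the earliest bound contains 0, so neither verdict can be certified.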
$$\begin{aligned}
\Invert(\T) &:=
\begin{cases}
\makebox[20em][l]{$\emptyset$} & \text{if $\T = \Universe$} \\
\Universe & \text{if $\T = \emptyset$ ~~} \\
\Normalize(\ \{(\s,\neg b) ~|~ (\s,b)\in\T \}\ )
& \text{otherwise~}
\end{cases} \\
\JoinOp(\T_1,\T_2) &:=
\begin{cases}
\makebox[20em][l]{$\Universe$} & \text{if $\T_{1} = \Universe \LOr \T_{2} = \Universe$} \\
\Normalize(\ \T_{1} \cup \T_{2}\ ) & \text{otherwise}
\end{cases} \\
\Intersect(\T_1,\T_2) &:=~
\Invert(\ \JoinOp(\Invert(\T_1), \Invert(\T_2))\ ) \\
\ShiftAll(\T_1,\T_2) &:=
\begin{cases}
\makebox[20em][l]{$\emptyset$} & \text{if $\T_1 = \emptyset \LOr \T_2 = \emptyset$} \\
\Normalize(\ \AlgName{ShiftPairs}_{\ttt}(\mathbm{T}_1,\mathbm{T}_2)\ )
& \text{otherwise}
\end{cases} \\
\AlgName{ShiftPairs}_{\ttt}(\mathbm{T}_1,\mathbm{T}_2) &:=
\begin{cases}
\makebox[21em][l]{$
\bigl\{ \AlgName{ShiftElem}_\ttt(\P_2)
$}
~~|~~
\P_2 \in \AlgName{Pairs}(\T_2) \bigr\}
& \text{if $\T_1 = \Universe$} \\
\makebox[21em][l]{$
\bigl\{ \AlgName{Intersect}(\ \AlgName{ShiftElem}_\ttt(\P_1),\ \P_1\ )
$}
~~|~~ \P_1 \in \AlgName{Pairs}(\T_1) \bigr\}
& \text{if $\T_2 = \Universe$} \\
\makebox[21em][l]{$
\bigl\{ \AlgName{Intersect}(\ \AlgName{ShiftElem}_\ttt(\AlgName{Intersect}(\P_1,\P_2)),\ \P_1\ )
$}
~~|~~ \P_1 \in \AlgName{Pairs}(\T_1), \P_2 \in \AlgName{Pairs}(\T_2) \bigr\}
& \text{otherwise}
\end{cases} \\
\AlgName{ShiftElem}_\ttt(\T) &:=~
\Normalize(\ \bigl\{ (\s-\LB{t},\True) ~|~ (\s,\True)\in\T \bigr\} \cup
\bigl\{(\s-\UB{t},\False) ~|~ {(\s,\False)}\in{\T} \bigr\}\ ) \\
\Normalize(\T) &:=
\begin{cases}
\textbf{error} & \text{if $\Exists{(\s,\False)}{\T} ~ \s \neq [0] \LAnd \s \ni 0$} \\
\makebox[20em][l]{$\textbf{error}$} & \text{if $\Exists{(\s,b),(\s',\neg b)}{\T} ~ \s \cap \s' \neq \emptyset$} \\
\AlgName{N}_4(\AlgName{N}_3(\AlgName{N}_2(\AlgName{N}_1(\T)))) & \text{otherwise}
\end{cases} \\
\AlgName{N}_1(\T) &:=~ \bigl\{ (\s,\True)\in\T ~|~
\# \{(\s',\True)\in\T ~|~ \UB{s}' < \LB{s}\} -
\# \{(\s'',\False)\in\T ~|~ \UB{s}'' < \LB{s}\} < 1 \bigr\}\ \cup \\
& \qquad \bigl\{ (\s,\False)\in\T ~|~
\# \{(\s',\True)\in\T ~|~ \UB{s}' < \LB{s}\} -
\# \{(\s'',\False)\in\T ~|~ \UB{s}'' < \LB{s}\} < 2 \bigr\} \\
\AlgName{N}_2(\T) &:=
\begin{cases}
\makebox[20em][l]{$\Universe$} & \text{if $\Max\ \T = (\s,\True)$ such that $\UB{s} \leq 0$}\\
\{ (\s,b)\in\T ~|~ \UB{s} > 0 \} & \text{otherwise}
\end{cases} \\
\AlgName{N}_3(\T) &:=~ \bigl\{ (\s,b)\in\T ~|~ \ForAll{(\s',b)}{\T} ~ \s\cap\s' = \emptyset \bigr\} ~\cup~ \bigl\{ (\s\cup\s',b) ~|~ \Exists{(\s,b),(\s',b)}{\T}~ \s\cap\s' \neq \emptyset \bigr\} \\
\AlgName{N}_4(\T) &:=
\begin{cases}
\makebox[20em][l]{$\T \cup \{([0],\True)\}$} & \text{if $(\t,\False) = \Min\ \T$} \\
\T & \text{otherwise}
\end{cases} \\
$$
Algorithm \[a:inittime\] takes $\T_\phi$ as input and outputs $\Valid$, $\Unsat$, or $\Unknown$; its first step extracts the earliest boundary enclosure via $(\s,\True) := \AlgName{GetFirstElem}(\T_\phi)$.
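The back-shifting step can be sketched numerically as follows, a hypothetical simplification of $\AlgName{ShiftElem}_\ttt$ over `(lo, hi)` interval pairs, with normalization omitted:

```python
# Hypothetical sketch of the back-shifting step: lower-bound enclosures
# (polarity True) are shifted by the lower end of the until window
# t = [t_lo, t_hi]; upper-bound enclosures (polarity False) by its upper end.

def shift_elem(T, t_lo, t_hi):
    shifted = []
    for (lo, hi), polarity in T:
        d = t_lo if polarity else t_hi
        shifted.append(((lo - d, hi - d), polarity))
    return shifted

# A consistent interval enclosed by ([2, 2.1], True) .. ([5, 5.2], False),
# back-shifted through the window [0.5, 1]:
T = [((2.0, 2.1), True), ((5.0, 5.2), False)]
out = shift_elem(T, 0.5, 1.0)
print(out)
# -> [((1.5, 1.6), True), ((4.0, 4.2), False)]
```

Shifting the two polarities by different amounts widens the resulting consistent interval, which keeps the approximation conservative with respect to the until semantics.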
Proof of Lemma \[th:consistent\] {#s:canonical:proof}
--------------------------------
We check that each condition of a canonical approximation (Definition \[d:approxs\]) is assured by $\Normalize$, the last sub-process in each procedure:
- During the propagation process, polarity alternation might be violated by $\JoinOp$, $\Intersect$, or $\ShiftAll$, which can place a boundary interval inside another consistent time interval. These *embedded* bounds are removed by $\AlgName{N}_1$. An embedded bound can be detected by checking the difference between the numbers of lower and upper bounds preceding it, since the smallest elements in $\T_1$ and $\T_2$ are always lower-bound enclosures.
- The upper bound of each time interval $\s$ in $\T$ becomes non-negative because elements with non-positive upper bounds are filtered out by $\AlgName{N}_2$.
- No two elements of $\T$ overlap because an overlapping pair with opposite polarity results in an error (the second branch of $\Normalize$) and an overlap with the same polarity is joined ($\AlgName{N}_3$); thus, the elements in $\T$ can be sorted.
- $\AlgName{N}_4$ assures that the polarity value of the smallest element is $\True$.
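Two of these canonicalization steps can be sketched as follows, a hypothetical re-implementation of $\AlgName{N}_2$ and $\AlgName{N}_4$ over `((lo, hi), polarity)` pairs, not the actual code:

```python
# Hypothetical sketch of two canonicalization steps on a sorted list of
# boundary enclosures: N2 drops bounds lying entirely in the past
# (upper end <= 0, returning "Universe" when the final event is a passed
# lower bound), and N4 re-opens the set with ([0, 0], True) whenever the
# earliest remaining bound is an upper bound.

def n2(T):
    if T and T[-1][1] is True and T[-1][0][1] <= 0.0:
        return "Universe"
    return [((lo, hi), b) for (lo, hi), b in T if hi > 0.0]

def n4(T):
    if T and T[0][1] is False:
        return [((0.0, 0.0), True)] + T
    return T

# A lower bound in the past followed by a future upper bound:
T = [((-2.0, -1.9), True), ((3.0, 3.1), False)]
res = n4(n2(T))
print(res)
# -> [((0.0, 0.0), True), ((3.0, 3.1), False)]
```

After the two steps, the earliest element has polarity `True` and every remaining bound has a non-negative upper end, matching the second and fourth canonicity conditions.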
Proof of Theorem \[th:soundness\] {#s:soundness:proof}
---------------------------------
We perform a structural induction based on the STL formulae.
For the base case $\phi = p \in \APSet_{\phi}$, $\T_p$ exists in $\mathcal{T}$.
For the inductive step, consider STL formulae $\phi_1$ and $\phi_2$, and assume as the inductive hypothesis that we have canonical approximated sets $\T_{\phi_1}$ and $\T_{\phi_2}$ of $T_{\phi_1}$ and $T_{\phi_2}$, respectively. We show that $\Propagate$ computes the approximated set properly for each formula constructed from $\phi_1$ and $\phi_2$.
When $\phi = \neg \phi_1$, the polarity of each bound of $\T_{\phi_1}$ is switched by $\Invert$ to obtain an approximated set for the complementary time intervals, which is sound regarding the operation in Step 2 of Section \[s:stl:monitoring\]. Then, $\Normalize$ is applied to canonicalize the result; it will append or remove the smallest bound. Let $(\s,\False)$ be the smallest element in a result of polarity inversion. We confirm that $\Normalize$ is sound in a case analysis:
- if $\s \neq [0]$ and $\s \ni 0$, the computation results in an error (the first branch of $\Normalize$);
- if $\LB{s} > 0$, the element remains and the element $([0],\True)$ is appended by $\NOp_4$;
- if $\s = [0]$, the element is removed by $\NOp_2$.
When $\phi = \phi_1 \lor \phi_2$, $\T_{\phi_1}$ and $\T_{\phi_2}$ are modified by $\AlgName{Join}$, which joins the elements of both approximated sets; a result might be a non-canonical set when two approximated time intervals from $\T_{\phi_1}$ and $\T_{\phi_2}$ overlap. Then, $\Normalize$ is applied to unify two overlapping approximations so that the result becomes a sound approximated set with respect to the operation . When two approximated time intervals $((\s_1,\True),(\s_1',\False))$ and $((\s_2,\True),(\s_2',\False))$ overlap, the boundary interval (e.g., $\s_1$) either (i) overlaps with another boundary interval, (ii) is included in the inner approximation $(\UB{s}_2,\LB{s}_2')$, or (iii) is excluded from the outer approximation $[\LB{s}_2,\UB{s}_2']$. We confirm the soundness of $\Normalize$ in another case analysis:
- in case (i), the bound is removed by the second branch of $\Normalize$ and by $\NOp_3$;
- in case (ii), the bound is removed by $\NOp_1$;
- in case (iii), the bound remains since it should be the bound of the joined time interval.
When $\phi = \phi_1 {\mathsf{U}}_\ttt \phi_2$, $\T_{\phi_1}$ and $\T_{\phi_2}$ are modified by $\AlgName{ShiftAll}_\ttt$, which applies $\Intersect$, $\AlgName{ShiftPairs}_\ttt$, and $\AlgName{ShiftElem}_\ttt$; these implement the operation . The soundness of $\AlgName{Intersect}$ with respect to the set intersection is evident because this procedure simply implements the set operation $\PosRealSet \setminus ((\PosRealSet \setminus \T_1) \cup (\PosRealSet \setminus \T_2))$. $\AlgName{ShiftPairs}_\ttt$ exhaustively applies $\AlgName{ShiftElem}_\ttt$ to each pair of boundary enclosures in $\T_1$ and $\T_2$. $\AlgName{ShiftElem}_\ttt$ translates the lower and upper bounds, according to the operation ; this procedure is sound because an interval enclosure is assumed for each bound of the consistent time intervals. $\Normalize$, then, resolves the overlaps and closes the lowest bound as in the case of $\phi_1 \lor \phi_2$.
Computational Complexity of the Operations on Approximated Sets {#s:complexity:op}
---------------------------------------------------------------
Let $\#\T$ be the number of elements in $\T$; if the bounds appear uniformly in a simulation, $\#\T$ is proportional to $\Norm{\phi}$; in the worst case, $\#\T$ is bounded by $\Norm{\phi}/\epsilon^*$ where $\epsilon^*$ is the precision of the floating-point numbers. The complexity of $\Normalize$ is bounded by $O(\#\T^2)$ since the complexities of $\NOp_1$, $\NOp_2$, $\NOp_3$, and $\NOp_4$ are $O(\#\T^2)$, $O(\#\T)$, $O(\#\T)$, and $O(1)$, respectively. Without the $\Normalize$ process, the complexities of $\Invert$ and $\JoinOp$ are $O(\#\T)$ and $O(1)$, respectively; together with $\Normalize$, their complexities are $O(\#\T^2)$. The complexity of $\ShiftAll$ is $O(\#\T^4)$ (taking $\#\T$ as the larger of the cardinalities of $\T_1$ and $\T_2$) since the complexities of $\AlgName{ShiftElem}$ and $\AlgName{ShiftPairs}$ are $O(\#\T^2)$ and $O(\#\T^2 \cdot \#\T^2)$, respectively.
[^1]: <http://capd.ii.uj.edu.pl/>
[^2]: The original definition [@Maler2003] involves left-closed right-open time intervals $[\LB{t},\UB{t})$ so that they do not overlap and can cover $[0,t_\Max]$. However, $\tilde{x}(t)>1 \ \equiv\ 1-\tilde{x}(t)<0$, with $\tilde{x}(t) := t$, is not true in the left-closed right-open interval $[1,2)$. In this paper, we only enforce the predicate to be true in the interior of time intervals $(\LB{t},\UB{t})$, so as to regard $[1,2)$ as consistent. This has no impact on either the soundness or the efficiency of the proposed method, since such bounds will be approximated by enclosing intervals in Definition \[d:acti\].
[^3]: To output $\Unknown$ only due to the insufficient precision of numerical computation, the procedure should branch the process and proceed evaluation for both cases; implementation of such a procedure remains as a future work.
[^4]: The bound enclosures are usually very accurate, but kept large on this example to emphasize their impact.
[^5]: The property ${\mathsf{F}}_{[2,3]} \neg ((x-1)^2 < 0)$ is verified in the same way. The set $T$ is consistent with the atomic proposition $(x-1)^2 < 0$. The verification will result in an error when $\SearchZero$ computes an enclosure of $T$ at time 1.
---
author:
- 'Elena Accomando,'
- 'Luigi Delle Rose,'
- 'Stefano Moretti,'
- 'Emmanuel Olaiya,'
- 'Claire H. Shepherd-Themistocleous'
title: ' Novel SM-like Higgs decay into displaced heavy neutrino pairs in $U(1)''$ models '
---
Introduction {#sec:introduction}
============
The evolution of the three Standard Model (SM) gauge couplings through the Renormalisation Group Equations (RGEs) shows a remarkable convergence, although only approximate, around $10^{15}$ GeV (this feature is even more evident in a Supersymmetric context, where a near perfect convergence is achieved in the presence of light sparticle states). This represents one of the strongest hints in favour of a Grand Unification Theory (GUT), which embeds the SM symmetry group into a larger gauge structure. One of the main predictions of a GUT is the appearance of an extra $U(1)'$ gauge symmetry, which can be broken at energies accessible at the CERN Large Hadron Collider (LHC). There are several realisations of GUTs that can predict this, such as $E_6$, String Theory motivated, $SO(10)$ and Left-Right (LR) symmetric models [@Langacker:1980js; @Hewett:1988xc; @Faraggi:1990ita; @Faraggi:2015iaa; @Faraggi:2016xnm; @Randall:1999ee; @Accomando:2010fz], for example.
At the Electro-Weak (EW) scale these Abelian extensions of the SM, in which the gauge group is enlarged by an extra $U(1)'$ symmetry, are also characterised by a new scalar field, heavier than the SM-like Higgs. The Vacuum Expectation Value (VEV) of this new scalar field can lie in the TeV range providing the mass for an additional heavy neutral gauge boson, $Z'$, associated to the spontaneous breaking of $U(1)'$. In this case an enlarged flavour sector is also always present. Indeed, in this class of models, the cancellation of the $U(1)'$ gauge and gravitational anomalies naturally predicts Right-Handed (RH) neutrinos at the TeV scale which realise a low-scale seesaw mechanism. Finally, a suitable $U(1)'$ symmetry and the scale of its breaking can be tightly connected to the baryogenesis mechanism through leptogenesis.
The special case in which the conserved charge of the extra Abelian symmetry is the $B-L$ number, with $B$ and $L$ the baryon and lepton charges, respectively, is particularly attractive from a phenomenological point of view. The minimal $B-L$ low-energy extension of the SM, consisting of a further $U(1)_{B-L}$ gauge group, predicts: three heavy RH neutrinos, one extra heavy neutral gauge boson, $Z'$, and an additional Higgs boson generated through the $U(1)_{B-L}$ symmetry breaking. This model has potentially interesting signatures at hadron colliders, particularly the LHC. Those signatures pertaining to the $Z'$ and the (enlarged) Higgs sector have been extensively studied in the literature [@Khalil:2007dr; @Basso:2008iv; @Basso:2010hk; @Basso:2010pe; @Basso:2010yz; @Basso:2010jm; @Accomando:2010fz; @Basso:2011na; @Basso:2012sz; @Basso:2012ux; @Accomando:2013sfa; @Accomando:2015cfa; @Accomando:2015ava; @Okada:2016gsh]. In this paper, we look at the heavy neutrino sector of a minimal Abelian extension of the SM. In fact, after the diagonalisation of the neutrino mass matrix realising the seesaw mechanism, we obtain three very light Left-Handed (LH) neutrinos ($\nu_l$), which are identified as SM neutrinos, and three heavy RH neutrinos ($\nu_h$). The latter have an extremely small mixing with the light $\nu_l$’s, thereby providing very small but non-vanishing couplings to the gauge bosons. Moreover, owing to the mixing in the scalar sector, the Yukawa interaction of the heavy neutrinos with the heavier Higgs, $H_2$, also provides the coupling of the $\nu_h$’s to the SM-like Higgs boson, $H_1$. These non-zero couplings enable, in particular, the $gg\to H_1\to \nu_h\nu_h$ production mode if $m_{\nu_h} < M_{H_1}/2$ and the subsequent $\nu_h \rightarrow l^\pm W^{\mp *}$, $\nu_h \rightarrow \nu_l Z^{*}$ (off-shell) decays. The novelty of this signature compared to the existing literature is that the SM-like Higgs decays preferably into heavy RH neutrino pairs.
A number of recent studies emphasise decay channels where the SM-like Higgs decays instead into one light and one heavy neutrino (see for example Refs. [@Gago:2015vma; @BhupalDev:2012zg; @Cely:2012bz; @Shoemaker:2010fg; @Antusch:2016vyf]).
The signature discussed in this paper gives rise to a different final state compared to this existing literature, which requires dedicated experimental strategies to be detected. Preliminary studies were performed in Ref. [@Brooijmans:2012yi] at parton level. In this paper, we refine the analysis in a more realistic way, taking into account full detector simulation.
The heavy neutrino couplings to the weak gauge bosons ($W^\pm$ and $Z$) are proportional to the ratio of light and heavy neutrino masses, which is extremely small. Therefore the decay width of the heavy neutrino is small and its lifetime large. The heavy neutrino can therefore be a long-lived particle and, over a large portion of the $U(1)'$ parameter space, its lifetime can be such that it can decay inside the LHC detectors, thereby producing a very distinctive signature with Displaced Vertices (DVs).
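As a rough, order-of-magnitude illustration (not the computation behind the figures below), the proper decay length can be estimated from the muon-type width scaling $\Gamma \sim G_F^2\, |V_{\alpha i}|^2\, m_{\nu_h}^5$, with the light-heavy mixing taken as $|V_{\alpha i}|^2 \simeq m_{\nu_l}/m_{\nu_h}$; the effective channel multiplicity used below is an assumed factor:

```python
# Rough, hypothetical estimate of the heavy-neutrino proper decay length
# c*tau0 from the muon-type width scaling
#   Gamma ~ N_eff * G_F^2 * |V|^2 * m^5 / (192 * pi^3),
# with |V|^2 ~ m_light / m_heavy. N_EFF, an effective channel-multiplicity
# factor, is an assumed O(10) number, so only orders of magnitude matter.
import math

HBARC = 1.973e-16   # GeV * m
G_F = 1.166e-5      # GeV^-2
N_EFF = 10.0        # assumed effective number of open channels

def c_tau0(m_heavy_gev, m_light_gev):
    mixing2 = m_light_gev / m_heavy_gev
    gamma = N_EFF * G_F**2 * mixing2 * m_heavy_gev**5 / (192.0 * math.pi**3)
    return HBARC / gamma  # proper decay length in metres

# e.g. a 50 GeV heavy neutrino with a 0.1 eV light partner: metre-scale c*tau0
print(c_tau0(50.0, 1e-10))
```

The metre-scale result for typical masses is what makes decays inside the LHC detectors, and hence displaced vertices, plausible.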
In the case of a DV associated with a leptonic track (particularly a muon hit in the muon chamber), this represents a signal with a negligibly small background contribution (the background contribution is somewhat greater for an electron track originating in the Electro-Magnetic (EM) calorimeter). In fact, for sufficiently large lifetimes, even jet decays of the heavy neutrino populating the hadronic calorimeter could be distinguished from those induced by a $B$-hadron[^1].
It is the purpose of this paper to examine the aforementioned heavy neutrino production and decay phenomenology at the LHC, by firstly establishing the regions of the $U(1)'$ parameter space which have survived current theoretical and experimental constraints and then assessing the future scope of the LHC in accessing these signatures with DVs.
This paper is organised as follows. Sect. \[sec:model\] reviews the model under study together with an overview of its allowed parameter space. Sect. \[sec:ProdDecay\] illustrates the production and decay phenomenology of heavy neutrinos while Sect. \[sec:eventsimulation\] presents our Monte Carlo (MC) analysis. Finally, after a few remarks on the case of the heavy Higgs mediation (Sect. \[sec:H2\]), Sect. \[sec:summa\] concludes.
The model {#sec:model}
=========
We study a minimal renormalizable Abelian extension of the SM with only the matter content necessary to satisfy the cancellation of the gauge and the gravitational anomalies. In this respect, we augment each of the three lepton families by a RH neutrino which is a singlet under the SM gauge group with $B-L$ = $-1$ charge. In the scalar sector we introduce a complex scalar field $\chi$, besides the SM-like Higgs doublet $H$, to trigger the spontaneous symmetry breaking of the extra Abelian gauge group. The new scalar $\chi$ has $B-L$ = 2 charge and is a SM singlet. Its VEV, $x$, gives mass to the new heavy neutral gauge boson $Z'$ and provides the Majorana mass to the RH neutrinos through a Yukawa coupling. The latter dynamically implements the type-I seesaw mechanism.
The presence of two Abelian gauge groups allows for a gauge invariant kinetic mixing operator of the corresponding Abelian field strengths. For the sake of simplicity, the mixing is removed from the kinetic Lagrangian through a suitable transformation (rotation and rescaling), thus restoring its canonical form. It is, therefore, reintroduced, through the coupling $\tilde g$, in the gauge covariant derivative, which thus acquires a non-diagonal structure $$\label{eq:gaugecovder} D_\mu = \partial_\mu + \ldots + i g_1 Y B_\mu + i \left( \tilde g\, Y + g'_1\, Y_{B-L} \right) B'_\mu,$$ where $Y$ and $Y_{B-L}$ are the hypercharge and the $B-L$ quantum numbers, respectively, while $B_\mu$ and $B'_\mu$ are the corresponding Abelian fields. Other parameterisations, with a non-canonical diagonalised kinetic Lagrangian and a diagonal covariant derivative, are, however, completely equivalent. The details of the kinetic mixing and its relation to the $Z-Z'$ mixing, which we omit in this work, can be found in [@Basso:2008iv; @Coriano:2015sea; @Accomando:2016sge]. Here we comment on some of the features of the model.
The $Z-Z'$ mixing angle in the neutral gauge sector is strongly bounded indirectly by the EW Precision Tests (EWPTs) and directly by the LHC data [@Langacker:2008yv; @Erler:2009jh; @Cacciapaglia:2006pk; @Salvioni:2009mt; @Accomando:2016sge] to small values $|\theta'| \lesssim 10^{-3}$. In the $B-L$ model under study, we find $$\label{eq:thetapexpandend} \theta' \simeq \tilde g\, \frac{v^2}{4 x^2} \sim \tilde g\, \frac{M_Z^2}{M_{Z'}^2},$$ where $v$ is the VEV of the SM-like Higgs doublet, $H$. In this case, the bound on the $Z-Z'$ mixing angle can be satisfied by either $\tilde g \ll 1$ or $M_Z / M_{Z'} \ll 1$, the latter allowing for a generous interval of allowed values for $\tilde g$.
It is also worth mentioning that a continuous variation of $\tilde g$ spans over an entire class of anomaly-free Abelian extensions of the SM with three RH neutrinos. Specific models, often studied in the literature, are identified by a particular choice of the two gauge couplings $g'_1$ and $\tilde g$. For instance, one can recover the pure $B-L$ model by setting $\tilde g = 0$. This choice corresponds to the absence of $Z-Z'$ mixing at the EW scale. Analogously, the Sequential SM is obtained for $g'_1 = 0$, the $U(1)_R$ extension is described by the relation $\tilde g = - 2 g'_1$, while the $U(1)_\chi$, generated at low scale in the $SO(10)$ scenario, is realised by $\tilde g = - 4/5 g'_1$. These models are graphically displayed in the plane $(\tilde g, g'_1$) in Fig. \[Fig.Models\].
![Three typical $U(1)'$ charge assignments identified by the dashed lines in the $\tilde g - g'_1$ plane. The dots represent particular benchmark models described in the literature. \[Fig.Models\]](figures/U1BP.eps)
Moreover, there is no loss of generality in choosing the $U(1)_{B-L}$ as a reference gauge symmetry because any arbitrary $U(1)'$ gauge group can always be recast into a linear combination of $U(1)_Y$ and $U(1)_{B-L}$.
After spontaneous symmetry breaking, two mass eigenstates, $H_{1,2}$, with masses $m_{H_{1,2}}$, are obtained from the orthogonal transformation of the neutral components of $H$ and $\chi$. The mixing angle of the two scalars is denoted by $\alpha$. Moreover, we choose $m_{H_1} \le m_{H_2}$ and we identify $H_1$ with the $125$ GeV SM-like Higgs discovered at the CERN LHC. The couplings between the light (heavy) scalar and the SM particles are equal to the SM rescaled by $\cos \alpha$ ($\sin \alpha$). The interaction of the light (heavy) scalar with the extra states introduced by the Abelian extension, namely the $Z'$ and the heavy neutrinos, is instead controlled by the complementary angle $\sin \alpha$ ($\cos \alpha$).
Finally, the Yukawa Lagrangian is $$\mathcal L_Y = \mathcal L_Y^{SM} - Y_\nu^{ij}\, \bar L^i \tilde H \nu_R^j - Y_N^{ij}\, \chi\, \overline{(\nu_R^i)^c}\, \nu_R^j + \textrm{h.c.},$$ where $\mathcal L_Y^{SM}$ is the SM contribution. The Dirac mass, $m_D = 1/\sqrt{2}\, v Y_\nu$, and the Majorana mass for the RH neutrinos, $M = \sqrt{2}\, x Y_N$, are dynamically generated through the spontaneous symmetry breaking and, therefore, the type-I seesaw mechanism is automatically realised. Notice that $M$ can always be taken real and diagonal without loss of generality. For $M \gg m_D$, the masses of the physical eigenstates, the light and the heavy neutrinos, are, respectively, given by $m_{\nu_l} \simeq - m_D^T M^{-1} m_D$ and $m_{\nu_h} \simeq M$. The light neutrinos are dominated by the LH SM components with a very small contamination of the RH neutrinos, while the heavier ones are mostly RH. The contribution of the RH components to the light states is proportional to the ratio of the Dirac and Majorana masses. After rotation into the mass eigenstates, the charged and neutral current interactions involving one heavy neutrino are given by $$\mathcal L = \frac{g_2}{\sqrt{2}}\, V_{\alpha i}\, \bar l_\alpha \gamma^\mu P_L \nu_{h_i}\, W^-_\mu + \frac{g_2}{2 \cos\theta_W}\, V_{\alpha \beta} V_{\beta i}^*\, \bar \nu_{h_i} \gamma^\mu P_L \nu_{l_\alpha}\, Z_\mu + \textrm{h.c.},$$ where $\alpha, \beta = 1,2,3$ for the light neutrino components and $i = 1,2,3$ for the heavy ones. The sum over repeated indices is understood. In particular, $V_{\alpha \beta}$ corresponds to the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix while $V_{\alpha i}$ describes the suppressed mixing between light and heavy states. Notice also that the $Z \nu_h \nu_h$ vertex is $\sim V_{\alpha i}^2$ and, therefore, highly suppressed. These interactions are typical of a type-I seesaw extension of the SM. The existence of a scalar field generating the Majorana mass for RH neutrinos through a Yukawa coupling, which is a characteristic feature of the Abelian extensions of the SM, allows for a new and interesting possibility of producing a heavy neutrino pair from the SM-like Higgs (besides the obvious heavy Higgs mode).
The corresponding interaction Lagrangian is given by $$\mathcal L = -\frac{\sin\alpha}{\sqrt{2}}\, Y_N^{k}\, H_1\, \bar\nu_{h_k} \nu_{h_k} = -\sin\alpha\, \frac{m_{\nu_{h_k}}}{M_{Z'}}\, g'_1\, H_1\, \bar\nu_{h_k} \nu_{h_k},$$ where, in the last equality, we have used $x \simeq M_{Z'}/(2 g'_1)$. This expression for the VEV of $H_2$, $x$, neglects the sub-leading part that is proportional to $\tilde g$. For our purposes, this approximation can be safely adopted [@Basso:2010jm]. The interaction between the light SM-like Higgs and the heavy neutrinos is not suppressed by the mixing angle $V_{\alpha i}$ but is controlled by the Yukawa coupling $Y_N$ and the scalar mixing angle $\alpha$.
For illustrative purposes we assume that the PMNS matrix is equal to the identity matrix and that both neutrino masses, light and heavy, are degenerate in flavour. In this case the elements of the neutrino mixing matrix $V_{\alpha i}$ are simply given by $m_D/M \simeq \sqrt{m_{\nu_l}/m_{\nu_h}}$.
Production and decay {#sec:ProdDecay}
====================
In this section, we focus on the production of heavy neutrino pairs coming from the decay of the light SM-like Higgs, $H_1$, at the LHC. The corresponding cross section can be written as $$\label{eq:sigmaxBR} \sigma(pp \rightarrow H_1 \rightarrow \nu_h \nu_h) = \cos^2\alpha\; \sigma(pp \rightarrow H_1)_\textrm{SM}\, \frac{\Gamma( H_1 \rightarrow \nu_h \nu_h)}{\cos^2\alpha\, \Gamma^\textrm{tot}_\textrm{SM} + \Gamma( H_1 \rightarrow \nu_h \nu_h)},$$ where $\sigma(pp \rightarrow H_1)_\textrm{SM}$ and $\Gamma^\textrm{tot}_\textrm{SM}$ are the production cross section and total decay width of the SM Higgs state, respectively, while $\Gamma( H_1 \rightarrow \nu_h \nu_h)$ is the partial decay width of the SM-like $H_1$ boson into two heavy neutrinos (summed over the three families). The partial decay width reads as $$\Gamma( H_1 \rightarrow \nu_h \nu_h) = \frac{3 \sin^2\alpha\, m_{\nu_h}^2}{16 \pi\, x^2}\, m_{H_1} \left( 1 - \frac{4 m_{\nu_h}^2}{m_{H_1}^2} \right)^{3/2},$$ where we have safely neglected the contribution proportional to the neutrino Dirac mass. As intimated, it can be seen that the $H_1$ production cross section scales with $\cos^2\alpha$ with respect to the SM, where the SM is recovered for $\alpha = 0$, in which case we have $\sigma(pp \rightarrow H_1)_\textrm{SM} = 43.92$ pb (gluon channel) [@LHCHXSWG]. The total width of the SM Higgs is $\Gamma^\textrm{tot}_\textrm{SM} = 4.20 \times 10^{-3}$ GeV [@Heinemeyer:2013tqa].
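The benchmark rates quoted in this section can be reproduced, to a good approximation, with the following numerical sketch. The partial width coded here is an assumed reconstruction, $\Gamma \propto \sin^2\alpha\, m_{\nu_h}^2\, m_{H_1}/x^2$ with the standard phase-space factor, and the benchmark point ($m_{\nu_h} = 40$ GeV, $x = 4$ TeV, $\alpha = 0.3$) is illustrative:

```python
# Hypothetical numerical sketch of sigma x BR for pp -> H1 -> nu_h nu_h,
# combining the SM inputs quoted in the text with an assumed partial width
# into heavy neutrino pairs (summed over three families).
import math

SIGMA_SM = 43.92e3   # fb, gluon-fusion H1 production at 13 TeV
GAMMA_SM = 4.20e-3   # GeV, SM Higgs total width
M_H1 = 125.0         # GeV

def gamma_nuh(alpha, m_nuh, x):
    """Assumed partial width H1 -> nu_h nu_h, in GeV."""
    return (3.0 * math.sin(alpha)**2 * m_nuh**2 * M_H1
            / (16.0 * math.pi * x**2)
            * (1.0 - 4.0 * m_nuh**2 / M_H1**2) ** 1.5)

def sigma_x_br(alpha, m_nuh, x):
    c2, g = math.cos(alpha)**2, gamma_nuh(alpha, m_nuh, x)
    return c2 * SIGMA_SM * g / (c2 * GAMMA_SM + g)   # fb

print(sigma_x_br(0.3, 40.0, 4000.0))   # ~307 fb, the benchmark rate in the text
```

With these assumptions the sketch lands on the quoted $\mathcal{O}(300)$ fb rate for the $x = 4$ TeV benchmark, and lowering $\alpha$ to 0.1 reduces it by roughly an order of magnitude, in line with the discussion below.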
To a large extent, the cross section in Eq. (\[eq:sigmaxBR\]) depends upon three parameters, namely, the mixing angle in the scalar sector, $\alpha$, the mass of the heavy neutrinos, $m_{\nu_h}$, and the VEV of the extra scalar, $x$ (or, equivalently, the Yukawa coupling $Y_N$). The dependence on $\tilde g$ is in fact negligible. The processes we are considering and the expression of the corresponding $\sigma$ remain unaffected in every extension of the SM in which the Majorana mass of the heavy neutrinos is dynamically generated by a SM-singlet scalar field sharing a non-zero mixing with the SM-like Higgs doublet. This scenario is naturally realised in the $U(1)'$ extension of the SM in which, we recall, the VEV $x$ is related to the mass of the $Z'$ through $x = M_{Z'}/(2 g')$. These parameters are obviously constrained by the $Z'$-boson direct search at the LHC. For this reason, in Fig. \[Fig.DrellYan\], we show the exclusion limits at 95% Confidence Level (CL) that have been extracted from the Drell-Yan (DY) analysis at the LHC with $\sqrt{S} = 13$ TeV and $\mathcal L = 13.3$ fb$^{-1}$. In order to derive these limits, we take into account the pure $Z'$-boson signal along with its interference with the SM background as suggested in Refs. [@Accomando:2011eu; @ACCOMANDO:2013ita; @Accomando:2013dia; @Accomando:2015rsa; @Accomando:2013sfa; @Accomando:2016mvz]. We closely follow the validated procedure given in Refs. [@Accomando:2013sfa; @Accomando:2015ava] where we have included the acceptance times efficiency factors for the electron and muon DY channels quoted by the CMS analysis [@Khachatryan:2016zqb]. We have combined the two channels and used Poisson statistics to extract the 95% CL bounds in the $\tilde g, g'_1$ plane for different $Z'$-boson masses. See also [@Alves:2015mua; @Klasen:2016qux] for related analyses. Taking the allowed values of $g'_1$ and $M_{Z'}$, we can then compute $x$. For three $x$ benchmark values and for $\alpha$ = 0.3, in Fig. 
\[Fig.SigmaBr\](a), we plot the $\sigma \times \textrm{Branching~Ratio~(BR)}$ for the process $pp \rightarrow H_1 \rightarrow \nu_h \nu_h$ at the 13 TeV LHC as a function of the heavy neutrino mass. The chosen value of the scalar mixing angle complies with the exclusion bounds from LEP, Tevatron and LHC searches and is compatible with the measured signals of the discovered SM-like Higgs state (hereafter, taken to have a mass of 125 GeV) in most of the interval $150 \, \textrm{GeV} \leq m_{H_2 }\leq 500\, \textrm{GeV}$ [@Accomando:2016sge]. The previous constraints are enforced using the `HiggsBounds` [@arXiv:0811.4169; @arXiv:1102.1898; @arXiv:1301.2345; @arXiv:1311.0055; @arXiv:1507.06706] and `HiggsSignals` [@Bechtle:2013xfa] tools. A high heavy neutrino production rate can be obtained for low $x$-values. These values correspond to large $g'_1$ couplings, which are more likely to be allowed for higher $Z'$ masses. As an example, for $x$ = 4 (corresponding to an allowed point defined by $M_{Z'}$ = 4 TeV and $g'_1$ = 0.5), we get a cross section of 307 fb. The cross section goes down by decreasing the scalar mixing angle.
The BR of the light SM-like Higgs into heavy neutrinos decreases with decreasing mixing angle $\alpha$, while its gluon-induced production rate increases. The net effect is that the cross section $\sigma (pp\rightarrow H_1\rightarrow \nu_h\nu_h )$ diminishes with $\alpha$. For the same parameter point as above ($M_{Z'}$ = 4 TeV and $g'_1$ = 0.5), lowering the value of $\alpha$ from 0.3 to 0.1 reduces the cross section by roughly a factor of ten: for $\alpha$ = 0.1, we in fact get about 35 fb.
![Significance analysis for the di-lepton ($l = e, \mu$) DY channel at the LHC for different $Z'$-boson masses. Acceptance cuts for the electron and muon channels are applied and the corresponding efficiency factors are included. The two channels are then combined and the limits are extracted. \[Fig.DrellYan\]](figures/Exclusions.eps)
In the mass region $m_{\nu_h} \le m_{H_1}/2$, the heavy neutrinos undergo the following decay processes: $\nu_h \rightarrow l^\pm \, W^{\mp *}$ and $\nu_h \rightarrow \nu_l \, Z^*$ with off-shell gauge bosons. In principle, $\nu_h \rightarrow \nu_l \, H_{1,2}^*$ and $\nu_h \rightarrow \nu_l \, Z^{'*}$ could also be activated but their BRs are extremely small in the kinematic region considered here. In Fig. \[Fig.SigmaBr\](b) we show the BR of the heavy neutrinos for the four available modes $l^+l^- \nu_l$ (black), $qql$ (blue), $qq \nu_l$ (red) and $\nu_l\nu_l\nu_l$ (green), where $l=e,\mu,\tau$ and $q=u,d,s,c,b$. The first channel, which accounts for 23% of the decay modes, is mediated by both the $W^\pm$ and the $Z$ boson; the second reaches 50% and is produced solely by $W^\pm$ exchange, while the last two are determined by the $Z$ alone. As we have already discussed, the couplings of the heavy neutrinos to the gauge bosons are proportional to $V_{\alpha i}$ and, therefore, are extremely small, leading to very long decay lengths $c \tau_0$, where $\tau_0$ is the heavy neutrino lifetime in the rest frame. These long-lived particles may generate DVs inside the detector. We show in Figs. \[Fig.Gamma\] and \[Fig.DecayLength\] the total decay width of the heavy neutrino in the rest frame and its proper decay length as a function of the light (panels (a)) and the heavy (panels (b)) neutrino mass, respectively. As the total decay width is proportional to $V_{\alpha i}^2 m_{\nu_h}^5 = m_{\nu_l} m_{\nu_h}^4$, it decreases by lowering either of the two masses, as illustrated in Fig. \[Fig.Gamma\](c). The proper lifetime is the inverse of the decay width at rest. Thus, light enough RH neutrinos can be quite long-lived. In Fig. \[Fig.DecayLength\](c), we show the proper decay length $c\tau_0$ in the plane ($m_{\nu_h}, m_{\nu_l}$). In the next section, we see how this characteristic of the heavy neutrinos will impact their detection at the LHC.
Relaxing the degenerate mass hypothesis in the neutrino sector and taking into account the complete mixing matrix would make the analysis much more involved but the methodology would remain the same. The width and BRs of the heavy neutrinos have been computed with `CalcHEP` [@Belyaev:2012qa] using the $U(1)'$ model file [@Basso:2010jm; @Basso:2011na] accessible on the High Energy Physics Model Data-Base (HEPMDB) [@hepmdb].
In addition, as $c \tau_0$ scales with $|V_{\alpha i}|^{-2} = m_{\nu_h}/m_{\nu_l}$, a simultaneous measurement of the decay length and of the mass of the heavy neutrinos could, in principle, provide insights on the elements of the mixing matrix and on the scale of the light neutrino masses, as previously mentioned.
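Combining $\Gamma \propto |V_{\alpha i}|^2 m_{\nu_h}^5$ with the seesaw relation $|V_{\alpha i}|^2 = m_{\nu_l}/m_{\nu_h}$ gives $c\tau_0 \propto 1/(m_{\nu_l}\, m_{\nu_h}^4)$. A rough numerical sketch of this scaling, normalised to an illustrative reference point ($m_{\nu_l} = 0.075$ eV, $m_{\nu_h} = 40$ GeV, $c\tau_0 = 1.5$ m); off-shell phase-space effects make the pure power law only approximate:

```python
def ctau0_scaled(m_nul_eV, m_nuh_GeV, ref=(0.075, 40.0, 1.5)):
    """Proper decay length from the scaling ctau0 ~ 1/(m_nul * m_nuh^4),
    normalised to a reference point (m_nul [eV], m_nuh [GeV], ctau0 [m])."""
    m_nul0, m_nuh0, ctau0_ref = ref
    return ctau0_ref * (m_nul0 / m_nul_eV) * (m_nuh0 / m_nuh_GeV) ** 4
```

With the reference above, $m_{\nu_h} = 45$ GeV and $m_{\nu_l} = 0.065$ eV give $c\tau_0 \approx 1.08$ m, in the metre range relevant for displaced vertices at these masses.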
Heavy neutrino decay probability and detector geometry
------------------------------------------------------
The probability of the heavy RH neutrinos decaying in the detector depends on their kinematic configuration and on the region of the detector in which the decay may occur. For definiteness, we focus our analysis on the CMS detector but the same reasoning applies unchanged to the case of the ATLAS experiment.
The experimental efficiency for reconstructing a displaced event depends on the region of the detector where it decays. We take this into account by considering two regions. A region is chosen to select long lived heavy neutrinos that decay in the tracker beyond where tracks can be reconstructed and before the muon chambers to provide the possibility of observing muon hits (Region 1). A second region is chosen to select long lived heavy neutrinos that decay within the inner tracker (Region 2). Both regions can be approximated by a hollow cylinder. Region 1 is characterised by internal and external radii approximately given by $R_\textrm{min} = 0.5$ m and $R_\textrm{max} = 5$ m, plus longitudinal length, along the $z$ axis, given by $|z| < 8$ m. The requirement $r > R_\textrm{min}$ ensures that the muons are generated in a region in which the inner tracker track reconstruction efficiency is zero, while $r < R_\textrm{max}$ also guarantees that the heavy neutrinos decay in a region prior to the muon chambers. Additionally, Region 2 has $R_\textrm{min} = 0.1$ m, $R_\textrm{max} = 0.5$ m and $|z| < 1.4$ m. The inner radius corresponds to a distance from the beam line which we define to be the lower limit, in the transverse plane, of the heavy neutrino decay vertex in order to safely neglect any source of SM background from the proton-proton collisions.
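The two hollow-cylinder decay volumes above can be expressed as a simple membership test on the transverse radius $r$ and longitudinal coordinate $z$ (a sketch; the function name and return convention are ours, lengths in metres):

```python
def decay_region(r, z):
    """Classify a decay position by transverse radius r and coordinate z.
    Returns 1 for the muon-chamber volume (0.5 m < r < 5 m, |z| < 8 m),
    2 for the inner-tracker volume (0.1 m < r < 0.5 m, |z| < 1.4 m),
    and None if the decay falls outside both regions."""
    if 0.5 <= r <= 5.0 and abs(z) <= 8.0:
        return 1
    if 0.1 <= r <= 0.5 and abs(z) <= 1.4:
        return 2
    return None
```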
In Fig. \[Fig.CMSstructure\](a) we depict an approximate description of the CMS detector [@Chatrchyan:2013sba] whose parts are described in terms of the distance covered by a possible heavy RH neutrino before decaying. Such a distance is a function of the neutrino's pseudo-rapidity, $\eta$ (or the associated scattering angle). In particular, we show the tracker (grey region), the EM calorimeter (green region), the hadronic calorimeter (blue region) and the muon chamber (orange region). The tracker has been described as a cylinder whose central axis coincides with the beam line, while the EM and hadronic calorimeters as well as the muon chamber can be approximated as concentric cylinders. The segmented line defining the outer region of the muon chamber reflects the presence of the endcaps. The vertical and horizontal hatched areas correspond to Regions 2 and 1, respectively, namely, the regions in which the heavy neutrino decay is optimised for lepton and jet identification in the inner tracker and for muon detection in the muon chamber.
The probability for the heavy neutrino to decay in the annulus defined by the radial distances $d_1(\eta)$ and $d_2(\eta)$ at the pseudo-rapidity $\eta$ is $$P = \int_{d_1(\eta)}^{d_2(\eta)} \mathrm{d}x \, \frac{1}{c\tau} \exp\left(-\frac{x}{c\tau}\right), \label{probability}$$ where $c \tau = \beta \gamma \, c \tau_0$ is the decay length in the lab frame, with $\beta \gamma$ the corresponding relativistic factor. In order to understand the behaviour of the probability of a heavy neutrino decaying as a function of the proper decay length $c \tau_0$, we consider the average probability $\langle P \rangle_\eta$ over an isotropic angular distribution of the heavy neutrinos. The relativistic factor $\beta \gamma$ can be estimated assuming that the Higgs is produced at rest, in which case we obtain $\beta \gamma \simeq 0.75$ for $m_{\nu_h} = 50$ GeV. The results are shown in Fig. \[Fig.CMSstructure\](b) for the two different decay volumes described above. The solid line with $\beta\gamma = 1$ represents the exact dependence of $P$ on the decay length $c \tau$ in the laboratory frame. Fig. \[Fig.CMSstructure\](c) shows the averaged decay probabilities computed for $\beta \gamma \simeq 0.75$ outside Region 1 (dotted blue), inside Region 1 (black), inside Region 2 (red) and in the inner region of the tracker (dashed blue), where the vertices and tracks are not displaced. Notice that, for $c \tau_0 = 2$ m, the probabilities of decay inside the muon chambers and the inner detector are approximately 50% and 25%, respectively, with only a small fraction escaping the detector. For smaller decay lengths the picture changes. As an example, for $c \tau_0 = 0.5$ m, the heavy neutrinos predominantly decay in the tracker, providing displaced and non-displaced signals with almost equal probabilities. The latter signature could be addressed, in principle, with standard, non-displaced, techniques at the cost of increased SM backgrounds.
However, this is impractical due to the small heavy neutrino masses ($m_{\nu_h} < m_{H_1}/2$) and the poor efficiency of the lepton identification (see the discussion below on the lepton $p_T$), which prevent discrimination of the signal events from the SM background. When the RH neutrino decays inside the inner tracker, we thus rely on the DVs, whose signature has a negligibly small background contribution. For much larger values of the proper lifetime ($c\tau_0 > 10$ m), the RH neutrinos have a sizeable probability of decaying outside the detector. In this case, the heavy neutrino pair production would appear as a missing energy signature and techniques appropriate for the invisible Higgs decay should be applied (see for example Ref. [@Belyaev:2015ldo] for details on such a technique).
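The integral in Eq. (\[probability\]) has the simple closed form $P = e^{-d_1/c\tau} - e^{-d_2/c\tau}$; a short numerical sketch (the values below are for illustration only):

```python
import math

def decay_probability(d1, d2, ctau):
    """Probability that a particle with lab-frame decay length ctau
    decays between the radial distances d1 and d2 (same units as ctau)."""
    return math.exp(-d1 / ctau) - math.exp(-d2 / ctau)
```

For $c\tau_0 = 2$ m and $\beta\gamma \simeq 0.75$ (so $c\tau = 1.5$ m), the probability of decaying between 0.5 m and 5 m along a given direction is about 0.68; averaging over the pseudo-rapidity-dependent boundaries $d_1(\eta)$, $d_2(\eta)$ is what reduces this towards the $\approx 50\%$ quoted above.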
Event simulation {#sec:eventsimulation}
================
In this section we present a search for long-lived heavy neutrinos decaying into final states that include charged leptons (electrons and muons). The corresponding DVs can be reconstructed if the heavy neutrinos decay within the volume of the CMS inner tracker. However, here we also want to exploit the possibility of identifying DVs beyond the region where they can be reconstructed in the tracker and prior to the muon chambers of the CMS detector (in this case, only muons are taken into account).
In order to highlight the sensitivity to DVs from the heavy neutrino decays produced by the 125 GeV Higgs, we present the details of a parton-level Monte Carlo (MC) analysis at the LHC with a Centre-of-Mass (CM) energy of $\sqrt{S} = 13$ TeV and luminosity $\mathcal L = 100$ fb$^{-1}$. For our simulation we select four Benchmark Points (BPs) characterised by different heavy neutrino masses and different proper decay lengths. These two quantities unambiguously fix the light neutrino mass. The other relevant parameters, $M_{Z'}, g'_1, \alpha$, are as follows: $M_{Z'} = 5$ TeV, $g'_1 = 0.65$ and $\alpha = 0.3$. The proper decay length of the heavy neutrinos is chosen to optimise their observability in different regions of the CMS detector: in particular, BP1 and BP2 are characterised by heavy neutrinos mostly decaying in the muon chamber, while BP3 and BP4 provide long-lived particles that are better identified in the inner tracker. The different BPs are defined in Tab. \[Tab:H1BPs\].
$m_{\nu_h} \, \textrm{(GeV)}$ $m_{\nu_l} \, \textrm{(eV)}$ $c \tau_0 \, \textrm{(m)}$ $\sigma_{\nu_h\nu_h} \, \textrm{(fb)}$
----- ------------------------------- ------------------------------ ---------------------------- ----------------------------------------
BP1 40 0.075 1.5 332.3
BP2 50 0.02 2.0 248.3
BP3 45 0.065 1.0 310.2
BP4 50 0.082 0.5 248.3
: Definition of the BPs leading to DVs from the heavy neutrinos produced by the SM-like Higgs decay. From left to right, the columns display the values of heavy RH neutrino mass, light neutrino mass, proper decay length of the heavy neutrinos and production rate of heavy neutrino pairs coming from the decay of the SM-like Higgs produced via gluon fusion. \[Tab:H1BPs\]
As leptons can originate from the three-body decay of soft heavy neutrinos, $m_{\nu_h} < m_{H_1}/2 \simeq 62.5$ GeV, a cut on their transverse momentum is found to have the greatest effect among all the kinematic acceptance requirements. We show in Tab. \[tab.ptleptoncuts\] the efficiencies of the $p_T$ cut on leptons for different thresholds in the BP1 scenario. In particular, we report the combined cuts for different thresholds $p_T^{(1)}$ on the two leading leptons and different thresholds $p_T^{(2)}$ on the third, sub-leading lepton, showing how the efficiency drops to a few percent for $p_T^{(1)} \simeq 26$ GeV, which is a trigger requirement at CMS for both the muons reconstructed in the muon chamber and the leptons identified in the inner tracker. These results suggest that a dedicated trigger with a lower lepton $p_T$ threshold would be particularly useful for the analysis of rather soft and long-lived particles.
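The combined threshold cut of Tab. \[tab.ptleptoncuts\] can be expressed as below (a sketch; the function name and the convention that events with fewer than three leptons fail are our choices):

```python
def passes_pt_thresholds(lepton_pts, pt1, pt2):
    """Combined cut: the two hardest leptons must exceed pt1 (GeV) and the
    third-hardest must exceed pt2 (GeV). Events with fewer than three
    leptons fail by convention here."""
    pts = sorted(lepton_pts, reverse=True)
    return len(pts) >= 3 and pts[0] > pt1 and pts[1] > pt1 and pts[2] > pt2
```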
We consider the process $pp \rightarrow H_1 \rightarrow \nu_h \nu_h$ with the two heavy neutrinos decaying into two to four muons for the analysis of DVs in the muon chamber and into two to four leptons (electrons and muons) for the corresponding analysis in the inner tracker. We do not reconstruct jets in the event. The individual decay chains of the heavy neutrino are summarised as follows:
- $\nu_h\rightarrow l^\mp W^\pm\rightarrow l^\mp {l'}^\pm\nu_{l'}$
- $\nu_h\rightarrow l^\mp W^\pm\rightarrow l^\mp q\bar{q'}$
- $\nu_h\rightarrow \nu_{l'} Z\rightarrow \nu_{l'} l^+ l^-$
- $\nu_h\rightarrow \nu_{l'} Z\rightarrow \nu_{l'} q\bar q$
- $\nu_h\rightarrow \nu_{l'} Z\rightarrow \nu_{l'} \nu_l \nu_l$
where $l = e, \mu, \tau$ and $q$ can be one of five flavours ($q = d, u, c, s, b$). The $\tau$ lepton can decay into $e, \mu$ and hadrons. When considering the decay of the heavy neutrino pair, one can have signatures with two same-sign leptons of same or different flavour, signatures with two opposite-sign leptons of same or different flavour and signatures with more than two leptons.
For each event, we evaluate the length $c\tau$ in the laboratory for the two RH neutrinos. This length depends on the heavy neutrino speed via the relativistic factor $\beta\gamma$. We then randomly sample the distance $L$ travelled by each of the two heavy neutrinos from the exponential distribution $\exp(-x/c\tau)$. Using the simulated momentum of the two heavy neutrinos (or the scattering angles), we can then determine the position of the two DVs. Standard acceptance requirements are imposed on the transverse and longitudinal decay lengths of the heavy neutrinos $L_{xy}$ and $L_z$, respectively. As stated in [@CMS:2015pca], in order to identify muons in the muon chambers, each of the reconstructed muon tracks must satisfy $|L_z| < 8$ m, $L_{xy} < 5$ m and $L_{xy}/\sigma_{L_{xy}} > 12$. The resolution $\sigma_{L_{xy}}$ on the transverse decay length is approximately 3 cm. Thus the geometrical constraints are $0.36 \, \textrm{m} < L_{xy} < 5$ m, globally. The lower and upper bounds ensure that the muons are generated in a region where they are not reconstructed by the tracker and can be identified by the muon chambers. Furthermore, the identification of leptons in the inner tracker demands $0.1 \, \textrm{m} < L_{xy} < 0.5 \, \textrm{m}$ and $|L_z| < 1.4$ m [@CMS:2014hka; @Khachatryan:2014mea]. The upper bound on $L_{xy}$ is determined by the efficiency of the tracker while the lower bound provides DVs in a region where the contamination from the SM background is negligible.
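The per-event sampling described above can be sketched as follows (the function is our illustration; momenta in GeV, lengths in metres):

```python
import math
import random

def sample_displacement(px, py, pz, mass, ctau0, rng):
    """Draw a lab-frame decay length and return (L, Lxy, Lz).
    The boost factor is beta*gamma = |p|/m; the travelled distance is
    exponentially distributed with mean ctau = beta*gamma*ctau0."""
    p = math.sqrt(px**2 + py**2 + pz**2)
    ctau = (p / mass) * ctau0
    L = rng.expovariate(1.0 / ctau)      # exponential with mean ctau
    Lxy = L * math.hypot(px, py) / p     # transverse decay length
    Lz = L * pz / p                      # longitudinal decay length
    return L, Lxy, Lz
```

The resulting $(L_{xy}, L_z)$ can then be tested against the geometrical constraints quoted above, e.g. $0.36 \, \textrm{m} < L_{xy} < 5$ m and $|L_z| < 8$ m for the muon-chamber analysis.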
In addition, two extra cuts on the impact parameter, $d_0$, and the angular separation between muon tracks, $\theta_{\mu\mu}$, are used to completely suppress any source of SM background. To this end, we closely follow Ref. [@CMS:2015pca] and we first implement generic detector acceptance requirements to identify the leptons (electrons and muons). In particular, we impose $|\eta_l| < 2$, $\Delta R_l > 0.2$, $p_T^l > 26$ GeV for the two leading leptons and $p_T^l > 5$ GeV for any sub-leading leptons. Notice that, differently from Ref. [@CMS:2014hka], where a Run I-designed trigger was adopted, we use trigger thresholds potentially available in Run II, where both leading electrons and muons can be required to have $p_T > 26$ GeV. Once the lepton tracks are reconstructed, we then implement the two additional cuts on $\theta_{\mu\mu}$ and $d_0$. A significant source of background may arise from cosmic ray muons, which may be misidentified as a pair of back-to-back muons. Such events are removed by requiring that the opening angle between two reconstructed muon tracks, $\theta_{\mu\mu}$, satisfies the constraint $\cos \theta_{\mu\mu} > - 0.75$. The impact parameter $d_0$ in the transverse direction of each of the reconstructed tracks of electrons and muons (identified by the acceptance requirements described above) is given by the expression $$|d_0| = |x \, p_y - y \, p_x|/p_T,$$ where $p_{x,y}$ are the transverse components of the momentum and $p_T$ is the transverse momentum of the leptonic track. In the above formula, the variables $x$ and $y$ correspond to the position where the heavy RH neutrino decays. They are determined by the projection on the $(x, y)$ transverse plane of the length $L$ covered by the neutrino before decaying.
In Fig. \[Fig:impactparam\], we show the $d_0$ distribution for BP2 and BP4 in which identification cuts on the leptons are applied. As expected, BP4 provides a tighter distribution characterised by a shorter heavy neutrino lifetime. According to [@CMS:2015pca; @CMS:2014hka], the SM background can be significantly reduced by applying cuts on the impact parameter significance $|d_0|/\sigma_{d}$ where the resolution $\sigma_d$ is approximately given by 2 cm and 20 $\mu$m in the muon chambers and the tracker, respectively. We thus impose $|d_0|/\sigma_{d} > 4$ for the muons in the muon chamber and $|d_0|/\sigma_{d} > 12$ for the leptons in the inner tracker.
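The impact-parameter computation and significance cuts above translate directly into code (a sketch; the function names are ours, lengths in metres):

```python
def impact_parameter(x, y, px, py):
    """Transverse impact parameter |d0| = |x*py - y*px| / pT."""
    pt = (px**2 + py**2) ** 0.5
    return abs(x * py - y * px) / pt

def passes_d0_cut(d0, in_muon_chamber):
    """Significance cut: |d0|/sigma_d > 4 with sigma_d = 2 cm in the muon
    chamber, or |d0|/sigma_d > 12 with sigma_d = 20 um in the inner tracker."""
    if in_muon_chamber:
        return d0 / 0.02 > 4.0
    return d0 / 20e-6 > 12.0
```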
Finally, the reconstruction efficiency $\epsilon$ of the lepton momenta is taken into account. A reasonable choice of $\epsilon$ for a single lepton, electron or muon, is $\epsilon = 90 \%$. The effect of these cuts and the results of the analysis are discussed in the following section and summarised in Tabs. \[tab:MC\_H1\] and \[tab:IT\_H1\].
![Impact parameter distribution for (a) BP2 and (b) BP4 of Tab. \[Tab:H1BPs\]. The heavy neutrino proper decay lengths are, respectively, 2 m and 0.5 m.\[Fig:impactparam\]](figures/SigBP2_d0_Sel13.eps "fig:") ![Impact parameter distribution for (a) BP2 and (b) BP4 of Tab. \[Tab:H1BPs\]. The heavy neutrino proper decay lengths are, respectively, 2 m and 0.5 m.\[Fig:impactparam\]](figures/SigBP4_d0_Sel13.eps "fig:")
Discussion of the results
-------------------------
In this section, we present the results of the MC analysis of displaced leptons. We commence with the analysis of displaced muons reconstructed using only the muon chamber.
### Muon chamber
It is instructive to classify the events in three distinct categories defined by the number of identified muons, from 2 to 4. In this analysis we use detector trigger requirements which tag two leptons. The cut flow for the different BPs is depicted in Tab. \[tab:MC\_H1\]. The two muons of the $2\mu$ category appear in the muon detector as distinct tracks (they are generated from two different heavy neutrinos) which do not join in a single DV (the number of events with two muons coming from a single DV is less than one). This is due to the combined impact of the $p_T$ cut on the muons and of the small mass of the heavy neutrinos, which reduces the probability of selecting two high-$p_T$ leptons originating from the same heavy neutrino. The $3\mu$ category, instead, is characterised by two muon tracks forming a single DV (one high-$p_T$ and one low-$p_T$ muon track) and by a third separate high-$p_T$ track, while the $4\mu$ class provides two distinct DVs. Due to the kinematical features discussed above, a DV is always formed by one of the two most energetic muons ($p_T > 26$ GeV) and one with low $p_T$ ($5 \, \textrm{GeV} < p_T < 26 \, \textrm{GeV}$).
The number of expected events for the four selected benchmark points of Tab. \[Tab:H1BPs\], obtained after all cuts and efficiencies have been applied, is listed in Tab. \[tab:MC\_H1\]. As expected, BP4 leads to the smallest number of events among the different BPs. Indeed, even though its cross section and heavy neutrino mass are the same as for BP2, the short heavy neutrino decay length $c\tau_0 = 0.5$ m makes the decay of the heavy neutrinos in the muon chambers less likely and therefore reduces the sensitivity of this analysis to this particular benchmark point. The largest sample of events is collected within the BP1 scenario, where the large cross section and decay length enhance the number of muon tracks identified in the detector. Summing all the events in the three disjoint categories ($2\mu$, $3\mu$ and $4\mu$), the total expected number of events (after reconstruction and cuts) from heavy neutrinos is about $N_{evt} = 33$ at $\mathcal L=100$ fb$^{-1}$ in the BP1 case. This signal has the benefit of having a negligibly small background contribution. The remaining two scenarios, BP2 and BP3, predict roughly 20 events each.
Concerning the possible sources of backgrounds (apart from the cosmic muons) in the study of DVs in the muon chamber, the CMS analysis [@CMS:2015pca] takes into account muon pairs from DY which can be misidentified as displaced events due to detector resolution effects, production and decay of tau pairs giving rise to muons and $t\bar t$, $W^+W^-$, $ZZ$ and QCD multi-jet events. In [@CMS:2015pca], it is explicitly stated that all these backgrounds produce negligible contributions. The signal events above could then potentially be detected at the LHC at the end of Run II data taking, when the collected luminosity is expected to reach $\mathcal L=100$ fb$^{-1}$ at CMS.
### Inner tracker
The same analysis can be performed exploiting the reconstruction of both electrons and muons in the inner tracker. As usual, we classify the events in three categories according to the number of leptons. The results are shown in Tab. \[tab:IT\_H1\].
As explained above, the two leptons in the $2l$ category do not form a single DV but rather appear as separate tracks in the tracker. All the properties concerning the topologies of the different categories and the corresponding kinematical properties of the displaced leptons which have been discussed above for the analysis in the muon chamber apply equally here. The potential sources of SM backgrounds listed in the previous section are also relevant in the analysis of DVs in the tracker. As shown in [@CMS:2014hka] and previously stated, cutting on the impact parameter successfully suppresses the SM background.
After reconstruction and cuts, BP3 and BP4 are the best scenarios owing to the lower proper decay length $c \tau_0$ and, therefore, the larger probability of the heavy neutrinos decaying inside the tracker volume (see Fig. \[Fig.CMSstructure\](b)), compared to the other two benchmark points. Moreover, even though BP4 is characterised by a smaller cross section $\sigma_{\nu_h \nu_h}$ than BP3, the smaller decay length $c\tau_0$ allows for a larger number of heavy neutrinos to decay in the inner tracker, thus compensating and providing more reconstructed leptons than in the BP3 scenario. Tab. \[tab:IT\_H1\] summarises the final number of DV events expected in the inner tracker after reconstruction and cuts. As anticipated, the best scenario is BP4 with about 53 expected events, followed by BP3 with roughly 30 events. For the other two BPs fewer events are expected, each providing a sample of about 10 events.
Since electrons can be distinguished from muons in the inner tracker, it is also possible to identify the flavour composition of the displaced tracks and vertices. We show the composition of the $2l$ and $3l$ events in Tab. \[tab:IT\_H1\_2-3flavour\] and of the $4l$ events in Tab. \[tab:IT\_H1\_4flavour\]. We use the notation $(f_1 f_2)$ to denote that the DV is made of two leptons of flavour $f_1$ and $f_2$. In the $(f_1 f_2)l$ case the lepton $l$ appears as a separate track.
As is clear from Tab. \[tab:IT\_H1\_2-3flavour\], the fraction of events with two electrons is always slightly larger than the corresponding $\mu\mu$ case due to the presence of the cut used for the suppression of cosmic muons. The number of events in the mixed case $e \mu$ of the $2l$ category is suppressed with respect to the same-flavour leptons. This effect is due to our simplified setup with a diagonal Dirac Yukawa matrix $Y_\nu$, in which the two leptons originating from the first step of the two heavy neutrino decay chains, namely $\nu_h \rightarrow l^\pm \, W^{\mp *}$[^2], must share the same flavour, thus reducing the number of possible $e \mu$ pairs with respect to $ee$ or $\mu\mu$. Notice that this result is subject to the assumptions on the Dirac mass matrix. In contrast, the flavour composition of the DV of the $3l$ category is independent of the structure of $Y_\nu$. Interestingly, the majority of the events lie in the $(e \mu)l$ class, which could represent a significant signature of DVs from heavy neutrinos.
### Comments on tri-lepton triggers
We show in Fig. \[Fig:pt\] the transverse momentum distributions of the four leptons where no cuts have been imposed. We take as reference the benchmark point BP1 of Tab. \[Tab:H1BPs\]. The different locations of the peaks suggest that an asymmetric choice for the $p_T$ thresholds of the four leptons could help to increase the efficiency of the selection. We comment on this possibility focusing on the implementation of an event identification for which at least three leptons are required.\
We present in Tabs. \[tab:IT\_H1\_3l1\] and \[tab:IT\_H1\_3l2\] the results of the DV analysis in which we have employed a tri-lepton trigger for long-lived decays. This trigger differs from typical tri-lepton triggers in that it does not require any of the leptons to point back to the beamspot. In this case the thresholds on the lepton $p_T$ can be relaxed, with respect to the identification requirements discussed in the previous section, allowing for a larger number of events to be collected at the end of the selection procedure. Tri-lepton triggers have been extensively used in searches for supersymmetric particles but never employed in the study of displaced vertices. In the absence of existing literature, we examine the impact of two reasonable trigger requirements on the lepton $p_T$. The results in Tab. \[tab:IT\_H1\_3l1\] are derived requiring $p_T > 20$ GeV for the two most energetic leptons and $p_T > 10$ GeV for the third one, while Tab. \[tab:IT\_H1\_3l2\] is obtained considering $p_T > 20$ GeV for the most energetic lepton and $p_T > 15$ GeV for the other two. The fourth lepton, if present, is required to have $p_T > 5$ GeV. All the other cuts remain unchanged. In Tabs. \[tab:IT\_H1\_3l1\] and \[tab:IT\_H1\_3l2\], the second row shows the acceptance after imposing the threshold cuts on the transverse momenta of the leptons in the two different categories, $3l$ and $4l$, for the four BPs of Tab. \[Tab:H1BPs\]. The third row is the final acceptance after implementing all cuts plus efficiency, as in Tabs. \[tab.ptleptoncuts\] and \[tab:MC\_H1\] (for brevity we just report the last row).
The first setup is found to be more efficient. Moreover, from the comparison of the results in Tab. \[tab:IT\_H1\_3l1\] with those of Tab. \[tab:IT\_H1\], it is clear that the number of events collected in the $3l$ category using the $3l$-trigger is compatible with that of the $2l$ category of the previous analysis for BP1 and BP2, and larger for BP3 and BP4. Summing up the two separate categories, $3l$ and $4l$, the number of events more than doubles by employing the tri-lepton trigger of the first type ($p_T^{l_1} > 20$ GeV, $p_T^{l_2} > 20$ GeV and $p_T^{l_3} > 10$ GeV) as compared to the di-lepton trigger. We obtain 46 events in the BP4 scenario and 22 in the BP3 one (with the di-lepton trigger we had 20 and 11 events, respectively).
This simple example highlights the importance of decreasing the thresholds of the $p_T$ cuts on displaced tracks produced by rather soft long-lived particles. Indeed, if implemented this would allow for the exploration of a new decay channel of the SM-like Higgs.
![Distributions of the leptonic transverse momentum. The colours black, blue, red and green correspond respectively to the momentum distributions of the four leptons ordered in decreasing $p_T$. \[Fig:pt\]](figures/pt.eps)
Heavy neutrinos from the heavy Higgs {#sec:H2}
====================================
In this section, we consider the region of the parameter space in which the long-lived heavy neutrinos have a mass greater than the threshold for their on-shell production coming from the 125 GeV Higgs boson decay. In this case, where $m_{\nu_h} > 62.5$ GeV, the RH neutrinos can be produced from the decay of the heavy Higgs with a cross section given by $$\sigma(pp \rightarrow H_2 \rightarrow \nu_h \nu_h) = \sigma(pp \rightarrow H_2)_\textrm{SM} \, \sin^2\alpha \, \frac{\Gamma( H_2 \rightarrow \nu_h \nu_h)}{\sin^2\alpha \, \Gamma^\textrm{tot}_\textrm{SM}(H_2) + \Gamma( H_2 \rightarrow \nu_h \nu_h) + \Gamma( H_2 \rightarrow H_1 H_1)}, \label{eq:sigmaxBR_H2}$$ where $\sigma(pp \rightarrow H_2)_\textrm{SM}$ and $\Gamma^\textrm{tot}_\textrm{SM}(H_2)$ are the production cross section and total width of a SM-like Higgs state with mass $m_{H_2}$, respectively, while $\Gamma( H_2 \rightarrow \nu_h \nu_h)$ and $\Gamma( H_2 \rightarrow H_1 H_1)$ are the partial decay widths of the heavy Higgs into two heavy neutrinos (summed over the three families) and two SM-like Higgs bosons. They are given by $$\begin{aligned}
\Gamma( H_2 \rightarrow \nu_h \nu_h) &=& \frac{3 \, m_{H_2}}{16 \pi} \frac{m_{\nu_h}^2}{x^2} \cos^2\alpha \left( 1 - \frac{4 m_{\nu_h}^2}{m_{H_2}^2} \right)^{3/2} , \\
\Gamma( H_2 \rightarrow H_1 H_1) &=& \frac{1}{32 \pi \, m_{H_2}} \left( \frac{\sin 2\alpha}{2 \, v \, x} \left( x \sin\alpha + v \cos\alpha \right) \left( \frac{m_{H_2}^2}{2} + m_{H_1}^2 \right) \right)^2 \left( 1 - \frac{4 m_{H_1}^2}{m_{H_2}^2} \right)^{1/2} .\end{aligned}$$ Notice that the production cross section mediated by the heavy Higgs in Eq. (\[eq:sigmaxBR\_H2\]) is similar to that in Eq. (\[eq:sigmaxBR\]), where the light Higgs is involved, the only differences being its dependence on the complementary scalar mixing angle and the appearance of the extra decay mode $H_2 \rightarrow H_1 H_1$.
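The combination entering Eq. (\[eq:sigmaxBR\_H2\]) — SM-like production and total width rescaled by the scalar mixing, with the two extra partial widths added to the denominator — can be sketched numerically as follows. This is a minimal sketch assuming the mixing enters as $\sin^2\alpha$; the function name is ours, and all widths are inputs in the same units:

```python
import math

def sigma_h2_to_nuhnuh(sigma_sm, gamma_sm_tot, gamma_nuh, gamma_h1h1, alpha):
    """sigma(pp -> H2 -> nu_h nu_h): SM-like production suppressed by
    sin^2(alpha), times the branching ratio into heavy neutrinos, with the
    SM width rescaled by sin^2(alpha) and the two extra modes added."""
    s2 = math.sin(alpha) ** 2
    total_width = s2 * gamma_sm_tot + gamma_nuh + gamma_h1h1
    return sigma_sm * s2 * gamma_nuh / total_width
```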
We illustrate in Fig. \[Fig.BRH2\] the BRs of the heavy Higgs as a function of its mass $m_{H_2}$ for two values of the heavy neutrino mass, namely, (a) $m_{\nu_h} = 65$ GeV and (b) $m_{\nu_h} = 95$ GeV. In both cases, the decay mode $H_1 \rightarrow \nu_h \nu_h$ is kinematically closed and the heavy neutrino production from $H_2$ becomes the leading production mechanism (the corresponding cross section mediated by the $Z'$ for the same BP is $\sim 0.8$ fb). For $m_{\nu_h} = 65$ GeV, the $\textrm{BR}(H_2\rightarrow \nu_h\nu_h)$ can reach 10% in the low $m_{H_2}$ region in which the on-shell decay modes into $WW$ and $ZZ$ are forbidden. In Fig. \[Fig.BRH2\](c) we plot the heavy neutrino production cross section through $H_2$ for the same heavy neutrino masses discussed above showing that it can reach 456.8 fb for $m_{\nu_h} = 65$ GeV and $m_{H_2} = 150$ GeV.
We repeat the analysis presented in the previous sections for the two BPs given in Tab. \[Tab:H2BPs\]. Due to the heavy neutrino masses being larger than those of the BPs in the $H_1$-mediated case, the corresponding proper decay lengths are found to be smaller, thus pointing to the analysis of displaced leptons in the inner tracker as the most sensitive one. The results are shown in Tabs. \[tab:MC\_H2\] and \[tab:IT\_H2\] for the study of DVs and hits in the muon chambers and inner tracker, respectively. As expected, we count a larger number of events in both BPs if we reconstruct the leptons using the information acquired from the tracker. Moreover, the number of events with displaced objects identified in BP6 is clearly much smaller than in BP5 due to the very different cross sections. Nevertheless, it is worth noticing that the overall efficiency in the reconstruction of the displaced tracks is a factor of $\sim 2$ larger in the BP6 case as far as the tracker analysis is concerned, because its shorter $c \tau_0$ favours heavy neutrinos decaying in the inner part of the detector.
Summing over the three disjoint categories, the number of expected events after reconstruction and cuts is around 223 within the BP5 scenario, characterised by a rather low $H_2$ mass. The BP6 scenario is less sensitive, providing a sample of barely 10 events. Interestingly, in the BP5 case the $4l$ category could offer the possibility of reconstructing a visible mass distribution that could give information about $m_{H_2}$. The signal sample is rich enough, with 17 events and a negligibly small background contribution.
Finally, the flavour compositions of the events is shown in Tabs. \[tab:IT\_H2\_2-3flavour\] and \[tab:IT\_H2\_4flavour\].
$m_{H_2} \, \textrm{(GeV)}$ $m_{\nu_h} \, \textrm{(GeV)}$ $m_{\nu_l} \, \textrm{(eV)}$ $c \tau_0 \, \textrm{(m)}$ $\sigma_{\nu_h\nu_h} \, \textrm{(fb)}$
----- ----------------------------- ------------------------------- ------------------------------ ---------------------------- ----------------------------------------
BP5 150 65 0.023 0.5 456.8
BP6 250 95 0.001 0.15 7.45
: Definition of the BPs leading to DVs from the heavy neutrinos produced from the heavy Higgs. From left to right, the columns display the values of heavy Higgs mass, heavy neutrino mass, light neutrino mass, heavy neutrino proper decay length and heavy neutrino pair production rate.\[Tab:H2BPs\]
0 1 3 5 7 9 11 13 15 17 19 21 23 25 26
---- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------
0 100
1 96.53 94.31
3 87.26 85.41 79.77
5 75.78 74.3 69.75 63.69
7 63.35 62.2 58.66 53.92 48.47
9 51.75 50.87 48.17 44.52 40.32 36.27
11 41.65 40.97 38.9 36.12 32.93 29.83 26.92
13 33.05 32.53 30.95 28.83 26.42 24.11 21.9 19.85
15 25.83 25.43 24.23 22.61 20.8 19.07 17.44 15.89 14.44
17 19.82 19.51 18.6 17.4 16.07 14.8 13.59 12.46 11.39 10.44
19 14.95 14.72 14.04 13.14 12.18 11.25 10.36 9.534 8.754 8.079 7.454
21 11.06 10.89 10.38 9.731 9.028 8.352 7.721 7.126 6.577 6.108 5.672 5.304
23 8.039 7.914 7.538 7.068 6.553 6.066 5.616 5.195 4.807 4.485 4.193 3.956 3.756
25 5.808 5.719 5.442 5.095 4.721 4.37 4.046 3.75 3.483 3.264 3.069 2.922 2.796 2.709
26 4.907 4.83 4.591 4.289 3.973 3.677 3.406 3.161 2.942 2.764 2.605 2.487 2.389 2.327 2.304
: Efficiencies (%) of the combined cuts $p_T^{lead.~l} > p_T^{(1)}$ on the two most energetic leptons and $p_T^{sublead.~l} > p_T^{(2)}$ on the third lepton for BP1 in Tab. \[Tab:H1BPs\]. \[tab.ptleptoncuts\]
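A grid such as Tab. \[tab.ptleptoncuts\] can be produced by scanning the two thresholds over a simulated sample. The sketch below uses a toy exponential $p_T$ spectrum purely as a stand-in for the BP1 Monte Carlo events (the spectrum shape and sample size are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy lepton pT spectra in GeV, sorted so column 0 is the leading lepton.
# An illustrative stand-in for the simulated sample, not real MC events.
n_events = 50_000
pts = np.sort(rng.exponential(12.0, size=(n_events, 3)), axis=1)[:, ::-1]

def combined_eff(p1, p2):
    """Efficiency (%) of requiring pT > p1 for the two most energetic
    leptons and pT > p2 for the third one; since columns are sorted,
    the sub-leading lepton exceeding p1 implies the leading one does."""
    passed = (pts[:, 1] > p1) & (pts[:, 2] > p2)
    return 100.0 * passed.mean()

# One entry of the (p1, p2) grid, analogous to a cell of the table.
print(f"{combined_eff(20, 15):.2f}%")
```

Looping `combined_eff` over all $(p_T^{(1)}, p_T^{(2)})$ pairs with $p_T^{(2)} \le p_T^{(1)}$ reproduces the triangular layout of the table.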
--------------------------------- -------- --------- -------- -------- --------- -------- -------- --------- --------- -------- --------- ---------
$2\mu$ $3 \mu$ $4\mu$ $2\mu$ $3 \mu$ $4\mu$ $2\mu$ $3 \mu$ $4\mu$ $2\mu$ $3 \mu$ $4\mu$
Ev. before cuts 5016 960.2 57.57 3771 756.7 47.65 4699 922.9 55.64 3771 756.7 47.65
$p_T$ cuts 206.7 47.37 3.084 112.6 29.47 1.67 159.9 39.15 2.562 112.6 29.47 1.67
$|\eta| < 2$ 149.4 32.59 1.965 83.7 20.86 1.183 118.8 27.13 1.749 83.7 20.86 1.183
$\Delta R > 0.2$ 147.8 28.42 1.542 82.3 17.94 0.9191 117.5 23.43 1.395 82.3 17.94 0.9191
$\cos \theta_{\mu\mu} > - 0.75$ 114 19.33 0.9453 64.54 12.96 0.5839 92.7 15.86 0.9211 64.54 12.96 0.5839
$L_{xy} < 5$ m 100.7 17.59 0.8279 55.52 11.42 0.5408 89.31 15.47 0.907 62.63 12.77 0.5839
$L_{xy}/\sigma_{L_{xy}} > 12$ 63.19 10.62 0.6247 32.09 6.549 0.292 40.49 7.238 0.4335 10.91 2.068 0.06476
$|L_z| < 8$ m 53.97 8.717 0.5086 26.86 5.717 0.2056 37.1 6.71 0.3924 10.69 2.023 0.06476
$|d_0|/\sigma_d > 4$ 36.46 5.363 0.2764 19.6 3.847 0.1727 24.03 3.923 0.1351 6.201 0.9077 0.02159
rec. eff. 29.53 3.909 0.1813 15.88 2.804 0.1133 19.46 2.86 0.08865 5.023 0.6617 0.01416
--------------------------------- -------- --------- -------- -------- --------- -------- -------- --------- --------- -------- --------- ---------
--------------------------------- ------- ------- -------- ------- ------- -------- ------- ------- ------- ------- ------- -------
$2l$ $3 l$ $4l$ $2l$ $3 l$ $4l$ $2l$ $3 l$ $4l$ $2l$ $3 l$ $4l$
Ev. before cuts 8891 4268 816.5 6645 3285 645.2 8307 4052 780.9 6645 3285 645.2
$p_T$ cuts 374 243.7 43.49 206.7 145.5 26.87 295 208.5 36.63 206.7 145.5 26.87
$|\eta| < 2$ 271 164.4 27.79 153.8 99.68 17.51 215.9 140.9 23.8 153.8 99.68 17.51
$\Delta R > 0.2$ 266.1 145.3 22.66 148.5 86.45 13.98 211.4 124.5 19.31 148.5 86.45 13.98
$\cos \theta_{\mu\mu} > - 0.75$ 239.4 129.3 19.34 135 78.52 12.22 192.1 112.5 16.59 135 78.52 12.22
$10 < |L_{xy}| < 50$ cm 13.24 6.798 1.265 9.259 5.082 0.7555 25.11 15.74 2.232 46.41 27.35 4.474
$|L_{z}| < 1.4$ 12.09 6.391 1.236 8.027 4.358 0.7339 22.38 14.61 2.043 41.51 25.17 4.29
$|d_0|/\sigma_d > 12$ 11.91 6.362 1.207 8.006 4.272 0.7339 22.19 14.39 1.962 40.94 24.96 4.247
rec. eff. 9.65 4.638 0.7916 6.485 3.114 0.4815 17.98 10.49 1.287 33.16 18.2 2.786
--------------------------------- ------- ------- -------- ------- ------- -------- ------- ------- ------- ------- ------- -------
$ee$ $\mu\mu$ $e \mu$ $(ee)l$ $(\mu\mu)l$ $(e \mu)l$
----- ------- ---------- --------- --------- ------------- ------------
BP1 4.943 3.859 0.8485 1.154 0.6992 2.785
BP2 3.361 2.581 0.5429 0.67 0.6062 1.838
BP3 9.557 7.179 1.24 2.9 1.589 6.002
BP4 17.09 13.09 2.978 3.874 3.293 11.03
: Flavour composition of 2- and 3-displaced leptons in the tracker. \[tab:IT\_H1\_2-3flavour\]
$(ee)(ee)$ $(ee)(\mu\mu)$ $(\mu\mu)(\mu\mu)$ $(ee)(e\mu)$ $(\mu\mu)(e\mu)$ $(e \mu)(e\mu)$
----- ------------ ---------------- -------------------- -------------- ------------------ -----------------
BP1 0.03809 0.05714 0 0.2103 0.1905 0.2956
BP2 0.07081 0.04249 0.01416 0.1275 0.1133 0.1133
BP3 0.06243 0.06243 0.05319 0.4432 0.1596 0.5064
BP4 0.2124 0.1774 0.1491 0.7089 0.6023 0.9362
: Flavour composition of 4-displaced leptons in the tracker. \[tab:IT\_H1\_4flavour\]
----------------- ------- ------- ------- ------- ------- ------- ------- -------
$3 l$ $4l$ $3 l$ $4l$ $3 l$ $4l$ $3 l$ $4l$
Ev. before cuts 4268 816.5 3285 645.2 4052 780.9 3285 645.2
$p_T$ cuts 417.1 105 289.5 79.46 369.6 97.07 289.5 79.46
rec. eff. 8.218 1.421 7.122 1.362 19.33 3.471 38.61 8.105
----------------- ------- ------- ------- ------- ------- ------- ------- -------
: Displaced leptons originating from heavy neutrinos produced by the SM-like Higgs. For each BP the initial number of events generated is the cross section $\sigma(pp \rightarrow H_1 \rightarrow \nu_h \nu_h)$ multiplied by a luminosity of 100 fb$^{-1}$. A $3l$-trigger has been employed. The $p_T$ cuts are applied in the following way: $p_T > 20$ GeV for the most energetic lepton and $p_T > 15$ GeV for the other two. A $p_T > 5$ GeV is required on the fourth lepton, when present. Only the events surviving the final selection procedure and the cuts on the $p_T$ are shown. \[tab:IT\_H1\_3l2\]
----------------- ------- ------- ------- ------- ------- ------- ------- -------
$3 l$ $4l$ $3 l$ $4l$ $3 l$ $4l$ $3 l$ $4l$
Ev. before cuts 4268 816.5 3285 645.2 4052 780.9 3285 645.2
$p_T$ cuts 290 86.03 201.5 65.81 260.4 79.96 201.5 65.81
rec. eff. 5.307 1.305 5.608 1.369 13.98 3.425 27.29 6.942
----------------- ------- ------- ------- ------- ------- ------- ------- -------
: Displaced leptons originating from heavy neutrinos produced by the SM-like Higgs. For each BP the initial number of events generated is the cross section $\sigma(pp \rightarrow H_1 \rightarrow \nu_h \nu_h)$ multiplied by a luminosity of 100 fb$^{-1}$. A $3l$-trigger has been employed. The $p_T$ cuts are applied in the following way: $p_T > 20$ GeV for the most energetic lepton and $p_T > 15$ GeV for the other two. A $p_T > 5$ GeV is required on the fourth lepton, when present. Only the events surviving the final selection procedure and the cuts on the $p_T$ are shown. \[tab:IT\_H1\_3l2\]
--------------------------------- -------- --------- -------- -------- --------- -----------
$2\mu$ $3 \mu$ $4\mu$ $2\mu$ $3 \mu$ $4\mu$
Ev. before cuts 7266 1589 108 162.8 63.69 7.519
$p_T$ cuts 323.7 109.8 10.12 19.38 13.22 3.27
$|\eta| < 2$ 242 74.59 6.479 16.18 10.29 2.316
$\Delta R > 0.2$ 240.8 68.04 5.411 15.22 6.217 1.377
$\cos \theta_{\mu\mu} > - 0.75$ 207.9 51.68 3.763 8.913 4.316 0.8433
$L_{xy} < 5$ m 206.3 51.57 3.763 8.54 4.298 0.8433
$L_{xy}/\sigma_{L_{xy}} > 12$ 14.9 3.252 0.2598 1.642 0.238 0.006817
$|L_z| < 8$ m 14.42 3.252 0.2598 1.608 0.233 0.006817
$|d_0|/\sigma_d > 4$ 9.9 2.049 0.2369 0.3646 0.05198 0.000554
rec. eff. 8.019 1.494 0.1554 0.2953 0.03789 0.0003634
--------------------------------- -------- --------- -------- -------- --------- -----------
: Displaced muons in the muon chambers originating from heavy neutrinos produced by the heavy Higgs. For each BP the initial number of events generated is the cross section $\sigma(pp \rightarrow H_2 \rightarrow \nu_h \nu_h)$ multiplied by a luminosity of 100 fb$^{-1}$. The $p_T$ cuts are applied in the following way: $p_T > 26$ GeV for the two most energetic muons and $p_T > 5$ GeV for all the others. The reconstruction efficiency of the muons is taken into account. \[tab:MC\_H2\]
--------------------------------- ------- ------- ------- ------- ------- -------
$2l$ $3 l$ $4l$ $2l$ $3 l$ $4l$
Ev. before cuts 12556 6612 1371 224.9 202.4 64.88
$p_T$ cuts 553.3 531.7 125.3 31.17 44.95 27.99
$|\eta| < 2$ 414.6 361.7 78.13 25.89 34.81 19.57
$\Delta R > 0.2$ 409.9 333.7 67.6 22.92 21.43 11.82
$\cos \theta_{\mu\mu} > - 0.75$ 385.3 306.9 60.81 19.63 18.8 10
$10 < |L_{xy}| < 50$ cm 174.8 135.5 28.49 5.327 6.596 3.647
$|L_{z}| < 1.4$ 149.3 118.9 26.31 5.202 6.428 3.569
$|d_0|/\sigma_d > 12$ 147.9 117.5 26.11 5.088 6.278 3.462
rec. eff. 119.8 85.63 17.13 4.121 4.577 2.272
--------------------------------- ------- ------- ------- ------- ------- -------
: Displaced leptons originating from heavy neutrinos produced by the heavy Higgs. For each BP the initial number of events generated is the cross section $\sigma(pp \rightarrow H_2 \rightarrow \nu_h \nu_h)$ multiplied by a luminosity of 100 fb$^{-1}$. The $p_T$ cuts are applied in the following way: $p_T > 26$ GeV for the two most energetic muons and $p_T > 5$ GeV for all the others. The reconstruction efficiency of the leptons is taken into account. \[tab:IT\_H2\]
$ee$ $\mu\mu$ $e \mu$ $(ee)l$ $(\mu\mu)l$ $(e \mu)l$
----- ------- ---------- --------- --------- ------------- ------------
BP5 55.55 49.69 14.55 19.13 16.29 50.21
BP6 1.457 1.111 1.553 1.31 0.9903 2.277
: Flavour composition of 2- and 3-displaced leptons in the tracker. \[tab:IT\_H2\_2-3flavour\]
$(ee)(ee)$ $(ee)(\mu\mu)$ $(\mu\mu)(\mu\mu)$ $(ee)(e\mu)$ $(\mu\mu)(e\mu)$ $(e \mu)(e\mu)$
----- ------------ ---------------- -------------------- -------------- ------------------ -----------------
BP5 1.243 0.8808 0.9517 3.998 3.639 6.419
BP6 0.3146 0.03244 0.1942 0.5922 0.5579 0.5802
: Flavour composition of 4-displaced leptons in the tracker. \[tab:IT\_H2\_4flavour\]
Conclusions {#sec:summa}
===========
In summary, we have assessed the significant scope that Run 2 of the LHC can have in exploring the possibility of the existence of new neutrinos, heavier than the SM ones, yet with a mass below the EW scale (100 GeV or so). These objects are almost ubiquitous in the class of BSM scenarios aimed at addressing the puzzle that emerged from the discovery of SM neutrino flavour oscillations, hence the need to explain their masses.
Furthermore, such states, owing to their EW interactive nature (i.e., with small coupling strengths), relative lightness (so that the phase space open to their decays is small) and the fact that the decay currents proceed via off-shell weak gauge bosons, are generally long lived. In fact, for a significant portion of the parameter space of the BSM scenarios hosting them, they could often decay inside an LHC detector. Depending on the actual decay length, they can do so and be identified (given the SM lepton flavour they generate while decaying) in the inner tracking system or the muon chambers (emulated here through the CMS parameters, though simple adaptations to the ATLAS environment can equally be pursued). In either case, one or two DVs would be visible against a negligibly small background environment, based on well-established triggers available for the CMS detector. Indeed, we have also highlighted the importance that the exploitation of new triggers, specifically, displaced tri-lepton ones, could have for this DV search.
Amongst the possible production modes of such heavy neutrino states, we have concentrated here on the case of both light (i.e., SM-like) and heavy (i.e., via an additional state) Higgs mediation starting from gluon-gluon fusion. On the one hand, this approach complements earlier analyses based on $Z'$ mediation, which is now greatly suppressed in the light of the latest experimental limits on $Z'$ masses. On the other hand, it also offers sensitivity (indeed at standard energies and luminosities foreseen for Run 2 of the LHC) to the aforementioned class of BSM scenarios.
In this connection, it should finally be noted that, while we have benchmarked our analysis against the specific realisation of the class of anomaly-free, non-exotic, minimal $U(1)'$ extensions of the SM with a specific $\tilde g/g^\prime_1$ ratio (for the purpose of being quantitative), our conclusions can generally be applied to the whole category of such models.
In short, we believe to have opened a rather simple path to follow towards the efficient exploration of new physics scenarios remedying a significant flaw of the SM, i.e., the absence of massive neutrinos.
Acknowledgements {#acknowledgements .unnumbered}
================
LDR thanks L. Basso for useful discussions during the development of this manuscript. We are also grateful to J. Fiaschi for extracting the experimental bounds on mass and couplings of the $Z'$-boson from the data analysis at the LHC Run 2. Moreover, special thanks go to Ian Tomalin for his guidance on the trigger thresholds. EA, SM and CHS-T are supported in part through the NExT Institute. The work of LDR has been supported by the “Angelo Della Riccia” foundation and the STFC/COFUND Rutherford International Fellowship scheme.
[10]{}
P. Langacker, *Grand Unified Theories and Proton Decay*, [*Phys. Rept.* [ **72**]{} (1981) 185](http://dx.doi.org/10.1016/0370-1573(81)90059-4).
J. L. Hewett and T. G. Rizzo, *Low-Energy Phenomenology of Superstring Inspired E(6) Models*, [*Phys. Rept.* [ **183**]{} (1989) 193](http://dx.doi.org/10.1016/0370-1573(89)90071-9).
A. E. Faraggi and D. V. Nanopoulos, *A Superstring $Z^\prime$ at O(1 TeV)?*, [*Mod. Phys. Lett.* [**A6**]{} (1991) 61–68](http://dx.doi.org/10.1142/S0217732391002621).
A. E. Faraggi and M. Guzzi, *Extra $Z^{\prime }$ s and $W^{\prime }$ s in heterotic-string derived models*, [*Eur. Phys. J.* [**C75**]{} (2015) 537](http://dx.doi.org/10.1140/epjc/s10052-015-3763-4), \[[[ 1507.07406]{}](http://arxiv.org/abs/1507.07406)\].
A. E. Faraggi and J. Rizos, *The 750 GeV di-photon LHC excess and extra $Z^\prime $s in heterotic-string derived models*, [*Eur. Phys. J.* [**C76**]{} (2016) 170](http://dx.doi.org/10.1140/epjc/s10052-016-4026-8), \[[[ 1601.03604]{}](http://arxiv.org/abs/1601.03604)\].
L. Randall and R. Sundrum, *A Large mass hierarchy from a small extra dimension*, [*Phys. Rev. Lett.* [**83**]{} (1999) 3370–3373](http://dx.doi.org/10.1103/PhysRevLett.83.3370), \[[[hep-ph/9905221]{}](http://arxiv.org/abs/hep-ph/9905221)\].
E. Accomando, A. Belyaev, L. Fedeli, S. F. King and C. Shepherd-Themistocleous, *Z’ physics with early LHC data*, [*Phys. Rev.* [ **D83**]{} (2011) 075012](http://dx.doi.org/10.1103/PhysRevD.83.075012), \[[[ 1010.6058]{}](http://arxiv.org/abs/1010.6058)\].
S. Khalil and A. Masiero, *Radiative B-L symmetry breaking in supersymmetric models*, [*Phys.Lett.* [**B665**]{} (2008) 374–377](http://dx.doi.org/10.1016/j.physletb.2008.06.063), \[[[ 0710.3525]{}](http://arxiv.org/abs/0710.3525)\].
L. Basso, A. Belyaev, S. Moretti and C. H. Shepherd-Themistocleous, *Phenomenology of the minimal B-L extension of the Standard model: Z’ and neutrinos*, [*Phys. Rev.* [ **D80**]{} (2009) 055030](http://dx.doi.org/10.1103/PhysRevD.80.055030), \[[[ 0812.4313]{}](http://arxiv.org/abs/0812.4313)\].
L. Basso, S. Moretti and G. M. Pruna, *Constraining the $g'_1$ coupling in the minimal $B-L$ Model*, [*J.Phys.* [ **G39**]{} (2012) 025004](http://dx.doi.org/10.1088/0954-3899/39/2/025004), \[[[ 1009.4164]{}](http://arxiv.org/abs/1009.4164)\].
L. Basso, A. Belyaev, S. Moretti, G. M. Pruna and C. H. Shepherd-Themistocleous, *$Z'$ discovery potential at the LHC in the minimal $B-L$ extension of the Standard Model*, [*Eur. Phys. J.* [**C71**]{} (2011) 1613](http://dx.doi.org/10.1140/epjc/s10052-011-1613-6), \[[[ 1002.3586]{}](http://arxiv.org/abs/1002.3586)\].
L. Basso, S. Moretti and G. M. Pruna, *Phenomenology of the minimal $B-L$ extension of the Standard Model: the Higgs sector*, [*Phys.Rev.* [ **D83**]{} (2011) 055014](http://dx.doi.org/10.1103/PhysRevD.83.055014), \[[[ 1011.2612]{}](http://arxiv.org/abs/1011.2612)\].
L. Basso, S. Moretti and G. M. Pruna, *A Renormalisation Group Equation Study of the Scalar Sector of the Minimal B-L Extension of the Standard Model*, [*Phys.Rev.* [**D82**]{} (2010) 055018](http://dx.doi.org/10.1103/PhysRevD.82.055018), \[[[ 1004.3039]{}](http://arxiv.org/abs/1004.3039)\].
L. Basso, S. Moretti and G. M. Pruna, *Theoretical constraints on the couplings of non-exotic minimal $Z'$ bosons*, [*JHEP* [**08**]{} (2011) 122](http://dx.doi.org/10.1007/JHEP08(2011)122), \[[[1106.4762]{}](http://arxiv.org/abs/1106.4762)\].
L. Basso, K. Mimasu and S. Moretti, *Z’ signals in polarised top-antitop final states*, [*JHEP* [**09**]{} (2012) 024](http://dx.doi.org/10.1007/JHEP09(2012)024), \[[[ 1203.2542]{}](http://arxiv.org/abs/1203.2542)\].
L. Basso, K. Mimasu and S. Moretti, *Non-exotic $Z'$ signals in $\ell^+\ell^-$, $b\bar b$ and $t\bar t$ final states at the LHC*, [*JHEP* [**11**]{} (2012) 060](http://dx.doi.org/10.1007/JHEP11(2012)060), \[[[1208.0019]{}](http://arxiv.org/abs/1208.0019)\].
E. Accomando, D. Becciolini, A. Belyaev, S. Moretti and C. Shepherd-Themistocleous, *Z’ at the LHC: Interference and Finite Width Effects in Drell-Yan*, [*JHEP* [**10**]{} (2013) 153](http://dx.doi.org/10.1007/JHEP10(2013)153), \[[[1304.6700]{}](http://arxiv.org/abs/1304.6700)\].
E. Accomando, A. Belyaev, J. Fiaschi, K. Mimasu, S. Moretti and C. Shepherd-Themistocleous, *Forward-backward asymmetry as a discovery tool for Z’ bosons at the LHC*, [*JHEP* [**01**]{} (2016) 127](http://dx.doi.org/10.1007/JHEP01(2016)127), \[[[1503.02672]{}](http://arxiv.org/abs/1503.02672)\].
E. Accomando, A. Belyaev, J. Fiaschi, K. Mimasu, S. Moretti and C. Shepherd-Themistocleous, *$A_{FB}$ as a discovery tool for $Z^\prime$ bosons at the LHC*, [[ 1504.03168]{}](http://arxiv.org/abs/1504.03168).
N. Okada and S. Okada, *$Z^\prime_{BL}$ portal dark matter and LHC Run-2 results*, [*Phys. Rev.* [**D93**]{} (2016) 075003](http://dx.doi.org/10.1103/PhysRevD.93.075003), \[[[ 1601.07526]{}](http://arxiv.org/abs/1601.07526)\].
A. M. Gago, P. Hernandez, J. Jones-Perez, M. Losada and A. Moreno Briceno, *Probing the Type I Seesaw Mechanism with Displaced Vertices at the LHC*, [*Eur. Phys. J.* [**C75**]{} (2015) 470](http://dx.doi.org/10.1140/epjc/s10052-015-3693-1), \[[[ 1505.05880]{}](http://arxiv.org/abs/1505.05880)\].
P. S. Bhupal Dev, R. Franceschini and R. N. Mohapatra, *Bounds on TeV Seesaw Models from LHC Higgs Data*, [*Phys. Rev.* [ **D86**]{} (2012) 093010](http://dx.doi.org/10.1103/PhysRevD.86.093010), \[[[ 1207.2756]{}](http://arxiv.org/abs/1207.2756)\].
C. G. Cely, A. Ibarra, E. Molinaro and S. T. Petcov, *Higgs Decays in the Low Scale Type I See-Saw Model*, [*Phys. Lett.* [**B718**]{} (2013) 957–964](http://dx.doi.org/10.1016/j.physletb.2012.11.026), \[[[ 1208.3654]{}](http://arxiv.org/abs/1208.3654)\].
I. M. Shoemaker, K. Petraki and A. Kusenko, *Collider signatures of sterile neutrinos in models with a gauge-singlet Higgs*, [*JHEP* [**09**]{} (2010) 060](http://dx.doi.org/10.1007/JHEP09(2010)060), \[[[1006.5458]{}](http://arxiv.org/abs/1006.5458)\].
S. Antusch, E. Cazzato and O. Fischer, *Displaced vertex searches for sterile neutrinos at future lepton colliders*, [*JHEP* [**12**]{} (2016) 007](http://dx.doi.org/10.1007/JHEP12(2016)007), \[[[1604.02420]{}](http://arxiv.org/abs/1604.02420)\].
G. Brooijmans et al., *[Les Houches 2011: Physics at TeV Colliders New Physics Working Group Report]{}*, in *Proceedings, 7th Les Houches Workshop on Physics at TeV Colliders: Les Houches, France, May 30-June 17, 2011*, pp. 221–463, 2012. [[1203.1488]{}](http://arxiv.org/abs/1203.1488).
C. Coriano, L. Delle Rose and C. Marzo, *Constraints on abelian extensions of the Standard Model from two-loop vacuum stability and $U(1)_{B-L}$*, [*JHEP* [**02**]{} (2016) 135](http://dx.doi.org/10.1007/JHEP02(2016)135), \[[[ 1510.02379]{}](http://arxiv.org/abs/1510.02379)\].
E. Accomando, C. Coriano, L. Delle Rose, J. Fiaschi, C. Marzo and S. Moretti, *Z$^{'}$, Higgses and heavy neutrinos in U(1)$^{'}$ models: from the LHC to the GUT scale*, [*JHEP* [**07**]{} (2016) 086](http://dx.doi.org/10.1007/JHEP07(2016)086), \[[[1605.02910]{}](http://arxiv.org/abs/1605.02910)\].
P. Langacker, *The Physics of Heavy $Z^\prime$ Gauge Bosons*, [*Rev. Mod. Phys.* [**81**]{} (2009) 1199–1228](http://dx.doi.org/10.1103/RevModPhys.81.1199), \[[[ 0801.1345]{}](http://arxiv.org/abs/0801.1345)\].
J. Erler, P. Langacker, S. Munir and E. Rojas, *Improved Constraints on Z-prime Bosons from Electroweak Precision Data*, [*JHEP* [**08**]{} (2009) 017](http://dx.doi.org/10.1088/1126-6708/2009/08/017), \[[[0906.2435]{}](http://arxiv.org/abs/0906.2435)\].
G. Cacciapaglia, C. Csaki, G. Marandella and A. Strumia, *The Minimal Set of Electroweak Precision Parameters*, [*Phys.Rev.* [ **D74**]{} (2006) 033011](http://dx.doi.org/10.1103/PhysRevD.74.033011), \[[[ hep-ph/0604111]{}](http://arxiv.org/abs/hep-ph/0604111)\].
E. Salvioni, G. Villadoro and F. Zwirner, *Minimal Z-prime models: Present bounds and early LHC reach*, [*JHEP* [**11**]{} (2009) 068](http://dx.doi.org/10.1088/1126-6708/2009/11/068), \[[[0909.1320]{}](http://arxiv.org/abs/0909.1320)\].
LHC Higgs Cross Section Working Group, *[https://twiki.cern.ch/twiki/bin/view/LHCPhysics/CERNYellowReportPageAt13TeV]{}*.
LHC Higgs Cross Section Working Group collaboration, J. R. Andersen et al., *Handbook of LHC Higgs Cross Sections: 3. Higgs Properties*, [[1307.1347]{}](http://arxiv.org/abs/1307.1347).
E. Accomando, D. Becciolini, S. De Curtis, D. Dominici, L. Fedeli and C. Shepherd-Themistocleous, *Interference effects in heavy W’-boson searches at the LHC*, [*Phys. Rev.* [ **D85**]{} (2012) 115017](http://dx.doi.org/10.1103/PhysRevD.85.115017), \[[[ 1110.0713]{}](http://arxiv.org/abs/1110.0713)\].
E. Accomando, D. Becciolini, A. Belyaev, S. De Curtis, D. Dominici, S. F. King et al., *$W'$ and $Z'$ searches at the LHC*, [*PoS* [**DIS2013**]{} (2013) 125]{}.
E. Accomando, K. Mimasu and S. Moretti, *Uncovering quasi-degenerate Kaluza-Klein Electro-Weak gauge bosons with top asymmetries at the LHC*, [*JHEP* [**07**]{} (2013) 154](http://dx.doi.org/10.1007/JHEP07(2013)154), \[[[1304.4494]{}](http://arxiv.org/abs/1304.4494)\].
E. Accomando, *Bounds on Kaluza-Klein states from EWPT and direct searches at the LHC*, [*Mod. Phys. Lett.* [**A30**]{} (2015) 1540010](http://dx.doi.org/10.1142/S0217732315400106).
E. Accomando, D. Barducci, S. De Curtis, J. Fiaschi, S. Moretti and C. H. Shepherd-Themistocleous, *Drell-Yan production of multi Z’-bosons at the LHC within Non-Universal ED and 4D Composite Higgs Models*, [*JHEP* [ **07**]{} (2016) 068](http://dx.doi.org/10.1007/JHEP07(2016)068), \[[[1602.05438]{}](http://arxiv.org/abs/1602.05438)\].
CMS collaboration, V. Khachatryan et al., *Search for narrow resonances in dilepton mass spectra in proton-proton collisions at $\sqrt{s}$ = 13 TeV and combination with 8 TeV data*, [[1609.05391]{}](http://arxiv.org/abs/1609.05391).
A. Alves, A. Berlin, S. Profumo and F. S. Queiroz, *Dirac-fermionic dark matter in U(1)$_{X}$ models*, [*JHEP* [**10**]{} (2015) 076](http://dx.doi.org/10.1007/JHEP10(2015)076), \[[[1506.06767]{}](http://arxiv.org/abs/1506.06767)\].
M. Klasen, F. Lyonnet and F. S. Queiroz, *NLO+NLL Collider Bounds, Dirac Fermion and Scalar Dark Matter in the B-L Model*, [[1607.06468]{}](http://arxiv.org/abs/1607.06468).
P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein and K. E. Williams, *HiggsBounds: Confronting Arbitrary Higgs Sectors with Exclusion Bounds from LEP and the Tevatron*, [*Comput. Phys. Commun.* [**181**]{} (2010) 138–167](http://dx.doi.org/10.1016/j.cpc.2009.09.003), \[[[0811.4169]{}](http://arxiv.org/abs/0811.4169)\].
P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein and K. E. Williams, *HiggsBounds 2.0.0: Confronting Neutral and Charged Higgs Sector Predictions with Exclusion Bounds from LEP and the Tevatron*, [*Comput. Phys. Commun.* [**182**]{} (2011) 2605–2631](http://dx.doi.org/10.1016/j.cpc.2011.07.015), \[[[1102.1898]{}](http://arxiv.org/abs/1102.1898)\].
P. Bechtle et al., *Recent Developments in HiggsBounds and a Preview of HiggsSignals*, [*PoS* [**CHARGED2012**]{} (2012) 024]{}, \[[[1301.2345]{}](http://arxiv.org/abs/1301.2345)\].
P. Bechtle et al., *HiggsBounds-4: Improved Tests of Extended Higgs Sectors against Exclusion Bounds from LEP, the Tevatron and the LHC*, [*Eur. Phys. J.* [**C74**]{} (2014) 2693]{}, \[[[1311.0055]{}](http://arxiv.org/abs/1311.0055)\].
P. Bechtle, S. Heinemeyer, O. Stal, T. Stefaniak and G. Weiglein, *Applying Exclusion Likelihoods from LHC Searches to Extended Higgs Sectors*, [[1507.06706]{}](http://arxiv.org/abs/1507.06706).
P. Bechtle, S. Heinemeyer, O. Stål, T. Stefaniak and G. Weiglein, *$HiggsSignals$: Confronting arbitrary Higgs sectors with measurements at the Tevatron and the LHC*, [*Eur. Phys. J.* [**C74**]{} (2014) 2711](http://dx.doi.org/10.1140/epjc/s10052-013-2711-4), \[[[ 1305.1933]{}](http://arxiv.org/abs/1305.1933)\].
A. Belyaev, N. D. Christensen and A. Pukhov, *CalcHEP 3.4 for collider physics within and beyond the Standard Model*, [*Comput. Phys. Commun.* [**184**]{} (2013) 1729–1769](http://dx.doi.org/10.1016/j.cpc.2013.01.014), \[[[1207.6082]{}](http://arxiv.org/abs/1207.6082)\].
M. Bondarenko, A. Belyaev, L. Basso, E. Boos, V. Bunichev et al., *High Energy Physics Model Database : Towards decoding of the underlying theory (within Les Houches 2011: Physics at TeV Colliders New Physics Working Group Report)*, [[1203.1488]{}](http://arxiv.org/abs/1203.1488).
CMS collaboration, S. Chatrchyan et al., *The performance of the CMS muon detector in proton-proton collisions at sqrt(s) = 7 TeV at the LHC*, [*JINST* [**8**]{} (2013) P11002](http://dx.doi.org/10.1088/1748-0221/8/11/P11002), \[[[ 1306.6905]{}](http://arxiv.org/abs/1306.6905)\].
A. Belyaev, S. Moretti, K. Nickel, M. C. Thomas and I. Tomalin, *Hunting for neutral, long-lived exotica at the LHC using a missing transverse energy signature*, [*JHEP* [**03**]{} (2016) 018](http://dx.doi.org/10.1007/JHEP03(2016)018), \[[[ 1512.02229]{}](http://arxiv.org/abs/1512.02229)\].
CMS collaboration, *Search for long-lived particles that decay into final states containing two muons, reconstructed using only the CMS muon chambers*, [(2015), CMS-PAS-EXO-14-012](http://inspirehep.net/record/1357475/files/EXO-14-012-pas.pdf).
CMS collaboration, V. Khachatryan et al., *Search for long-lived particles that decay into final states containing two electrons or two muons in proton-proton collisions at $\sqrt{s} =$ 8 TeV*, [*Phys. Rev.* [ **D91**]{} (2015) 052012](http://dx.doi.org/10.1103/PhysRevD.91.052012), \[[[ 1411.6977]{}](http://arxiv.org/abs/1411.6977)\].
CMS collaboration, V. Khachatryan et al., *Search for Displaced Supersymmetry in events with an electron and a muon with large impact parameters*, [*Phys. Rev. Lett.* [**114**]{} (2015) 061801](http://dx.doi.org/10.1103/PhysRevLett.114.061801), \[[[ 1409.4789]{}](http://arxiv.org/abs/1409.4789)\].
[^1]: An experimentally resolvable non-zero lifetime along with a mass determination for the heavy neutrino would potentially also enable a determination of the light neutrino mass, as remarked in Ref. [@Basso:2008iv].
[^2]: The decay pattern $\nu_h \rightarrow \nu_l \, Z^*$ undergoes a different counting but is suppressed with respect to the charged-current process.
---
author:
- 'Satoshi Kawamura$^{1}$, Kai Cai$^{1}$, and Masako Kishida$^{2}$ [^1] [^2] [^3]'
bibliography:
- 'mybib.bib'
title: '**Distributed Output Regulation of Heterogeneous Uncertain Linear Agents** '
---
[^1]: $^{1}$Satoshi Kawamura and Kai Cai are with Department of Electrical and Information Engineering, Osaka City University, Japan. [Emails: [email protected], [email protected]]{}
[^2]: $^{2}$Masako Kishida is with Principles of Informatics Research Division, National Institute of Informatics, Japan. [Email: [email protected]]{}
[^3]: This work was supported by the open collaborative research program at National Institute of Informatics (NII) Japan (FY2018) and the Research and Development of Innovative Network Technologies to Create the Future of National Institute of Information and Communications Technology (NICT) of Japan.
---
abstract: |
In this article we propose a compartmental model for the dynamics of Coronavirus Disease 2019 (COVID-19). We take into account the presence of asymptomatic infections and the main policies that have been adopted so far to contain the epidemic: isolation (or social distancing) of a portion of the population, quarantine for confirmed cases and testing. We model isolation by separating the population in two groups: one composed by key-workers that keep working during the pandemic and have a usual contact rate, and a second group consisting of people that are enforced/recommended to stay at home. We refer to quarantine as strict isolation, and it is applied to confirmed infected cases.
In the proposed model, the proportion of people in isolation, the level of contact reduction and the testing rate are control parameters that can vary in time, representing policies that evolve in different stages. We obtain an explicit expression for the basic reproduction number $\mathcal{R}_0$ in terms of the parameters of the disease and of the control policies. In this way we can quantify the effect that isolation and testing have in the evolution of the epidemic. We present a series of simulations to illustrate different realistic scenarios. From the expression of $\mathcal{R}_0$ and the simulations we conclude that isolation (social distancing) and testing among asymptomatic cases are fundamental actions to control the epidemic, and the stricter these measures are and the sooner they are implemented, the more lives can be saved. Additionally, we show that people that remain in isolation significantly reduce their probability of contagion, so risk groups should be recommended to maintain a low contact rate during the course of the epidemic.
author:
- 'M.S. Aronna[^1]'
- 'R. Guglielmi[^2]'
- 'L.M. Moschen[^3]'
bibliography:
- 'covid.bib'
title: 'A model for COVID-19 with isolation, quarantine and testing as control measures'
---
Introduction
============
In late December 2019, several cases of an [*unknown pneumonia*]{} were identified in the city of Wuhan, Hubei province, China [@outbreakncipcdcchina]. Some doctors in Wuhan conjectured that these could be severe acute respiratory syndrome (SARS) cases [@stephaniehegarty2020]. Many of the reported cases had visited or were related to the Huanan Seafood Wholesale Market. On 31 December 2019, the World Health Organization (WHO) China Country Office was informed of these cases of pneumonia detected in Wuhan City and, up to 3 January 2020, a total of 44 patients with this unknown pneumonia were reported to WHO [@whopneumoniachina]. In the beginning of January 2020 Chinese officials ruled out the hypothesis that the cases were of SARS [@aljazeeratimiline2020], and a few days later the cause was identified to be a new coronavirus that was named SARS-CoV-2. The name given to the infectious disease caused by SARS-CoV-2 is COVID-19.
The first death due to COVID-19 was reported on 9 January and it was a 61-year-old man in Wuhan [@newyorkqina2020]. After mid-January infected cases were reported in Thailand, Japan, Republic of Korea, and other provinces in China [@outbreakncipcdcchina]. On 22 January the Chinese authorities announced the quarantine of greater Wuhan. From that time on the virus rapidly spread in many Asiatic countries, reached Europe and the United States. On 28 February, with more than 80,000 confirmed cases and nearly 3,000 deaths globally, WHO increased the assessment of the risk of spread and risk of impact of COVID-19 to very high at the global level [@whoreport392020]. On 9 March 2020, with nearly 400 deaths, the Italian government ordered the total lock-down of the national territory [@whoreport492020]. A few days later, on March 11, WHO declared that COVID-19 was characterized as a pandemic [@whoreport512020].
In March 2020 several nations across the five continents closed their borders, declared forced isolation for the whole population except for essential workers and/or imposed strict measures of social distancing. Detailed information on the actions taken by each country can be found in the report [@OxCGRT]. At that time WHO recommended, apart from social distancing measures, that it was essential to test intensively [@whoremarks16media2020]. The indications were to test every suspected case, to isolate till recovery any positive individual, and to track and test all contacts in the past two days of new confirmed cases.
The most common symptoms of COVID-19 are fever, cough and shortness of breath. Most of the cases result in mild or no symptoms, but some progress to viral pneumonia and multi-organ failure. At this moment, it is estimated that 1 out of 5 cases needs hospitalization [@whocoronavirus2020]. It is still difficult to estimate the mortality of this virus. Mortality depends, on the one hand, on early detection and appropriate treatment, but the rate itself can only be calculated if the real number of infected people is known. There is enough evidence to assure that a significant portion of the infections is asymptomatic [@lavezzo2020suppression; @mizumoto2020estimating; @nishiura2020estimation], which makes it difficult to detect them and thus to calculate the effective mortality of COVID-19. WHO, by March 2020, estimated a death rate of 3.4% worldwide [@whoremarks3media2020]. But in some countries, such as Italy, France, Spain and the UK, the ratio between deaths and confirmed cases up to May 2020 is higher than 10% [@worldometers-countries].
In this article we propose a compartmental model for the dynamics of COVID-19. We take into account the presence of asymptomatic infections, and also the main policies that have been adopted by several countries in the past months to fight this disease, [these being]{}: isolation, quarantine and testing. We model isolation by separating the population in two groups: one composed by [*key-workers*]{} that keep working during the pandemic and having a usual contact rate, and the other group consisting of people that are enforced/recommended to stay at home. Certainly, in the group of people that maintain a high contact rate one can also include people that do not respect social distancing restrictions, which has lately been shown to be significant in some countries. We refer to quarantine as strict isolation, and it is applied to confirmed infected cases. Testing is supposed to be applied to all symptomatic cases, and to a portion of the population selected using some of the criteria adopted by health organizations (see [*e.g.*]{} [@investigationgovuk; @guidanceCDC]). The idea to analyze the quantitative effects of non-pharmaceutical interventions, such as isolation and social distancing, on the evolution of the epidemic was inspired by the work [@ferguson2020impact].
For the proposed model, we obtain an expression for the basic reproduction number $\mathcal{R}_0$ in terms of the parameters of the disease and of the control parameters. In this way we can quantify the effects that isolation and testing have on the epidemic. We exhibit a series of simulations to illustrate different realistic situations. We compare, in particular, different levels of isolation and testing. From the expression of $\mathcal{R}_0$ and the simulations, we conclude that isolation (social distancing) and testing among asymptomatic cases are fundamental actions to control the epidemic, [and the stricter these measures are and the sooner they are implemented,]{} the more lives can be saved. [Additionally, we show that people that remain in isolation significantly reduce their probability of contagion, so risk groups should be recommended to maintain a low contact rate during the course of the epidemic.]{}
Several mathematical models for COVID-19 have appeared recently in the literature. At the time of writing, the flux of publications is very high, so it is difficult to keep track of everything that is being published. We next mention and describe some of the models more closely related to ours. In [@casella2020can] they consider a simple model, with infected and reported infected compartments, and they assume that the transmission rate $\beta$ is a function of a control $u,$ that is, $\beta = \beta(u).$ They analyze feedback control strategies, where the control depends on the number of reported cases. In [@djidjou2020optimal] they consider mild and severe cases, the latter having a reduced transmission rate since they are assumed to be in isolation. They use a time-dependent control $c$ of reduction of contacts for the whole population, and optimize with respect to this control. An SEIR model with quarantine for suspected and infected cases is considered in [@shi2020seir], and in [@liu2020predicting] they take into account unreported cases, asymptomatic individuals and quarantine for identified cases.
The article is organized as follows. In Section \[sec:model\] we introduce the model, and we discuss its structure. In Section \[sec:R0\] we show an expression of the basic reproduction number $\mathcal{R}_0$ in terms of the parameters and we propose an equivalent threshold. Estimation of realistic parameters and numerical simulations are given in Section \[sec:NumSim\], while Section \[sec:conclusions\] is devoted to the conclusions and a description of possible continuations of this research. Finally, in the Appendix we include the analytical computations of the expression of $\mathcal{R}_0$ and a sensitivity analysis with respect to the involved parameters.
The model description {#sec:model}
=====================
We set up a model to describe the spread of the virus SARS-CoV-2 through a susceptible population. Building upon a usual SEIR model, we obtain a more structured [one]{}, which is tailored to the [current]{} experience of the COVID-19 epidemic, and which also allows us to convey the effects of the non-pharmaceutical intervention policies [being]{} adopted by several countries to face its outbreak.
First of all, we normalize [to $1$]{} the total population of $N$ individuals, so that all the compartments (and their sub-compartments) introduced below represent the proportion of individuals of the total population in such compartment. We will assume the population remains constant over time (i.e., we neglect the natural birth and death rates). We start by splitting the population in the compartments listed in Table \[tab:compartment\].
--------------------- -------------------------------------------------
[**Compartment**]{} [**Description**]{}
\[0.5ex\] $S$ susceptible
\[0.3ex\] $E$ exposed
\[0.3ex\] $I$ infectious
$A$ asymptomatic and infectious
$Q$ infected in quarantine (including hospitalized)
$R$ recovered
--------------------- -------------------------------------------------
: \[tab:compartment\]List of aggregated compartments
More specifically, the compartment $S$ collects all the individuals that are susceptible to the virus. Once an individual from $S$ gets exposed to the virus, they move to the compartment $E$. Let us point out that individuals in $E$, though already exposed to the virus, are not contagious yet. After a given *latent time*, an individual in $E$ becomes infectious, and thus is allocated to the compartment $I$. At this stage, after a suitable time, the individual may either remain infectious but asymptomatic (or with mild symptoms), in which case they move to the compartment $A$, or may show a clear onset of symptoms, thus being tested and then quarantined either at home or at the hospital, and being assigned to the compartment $Q$. Finally, individuals in $A$ and $Q$ will eventually be removed from those compartments and will end up [[either]{} in the compartment $R$ after a *recovery time* or [dead]{}]{}.
We will assume that the fraction of asymptomatic individuals among all infected is given by a certain probability $\alpha\in (0,1)$. It is intuitive that the presence of a relevant portion of asymptomatic infectious individuals plays a major role in the spread of the epidemic, as observed in the current outbreak [@mizumoto2020estimating; @nishiura2020estimation]. Indeed, an asymptomatic individual will maintain a high contact rate, and thus might infect more susceptible individuals than an infectious individual with symptoms [that is in]{} quarantine. In our model, we always refer to the effective contact rate $\beta$, which is given by the product between the *transmissibility* $\nu$ (i.e., probability of infection given contact between a susceptible and infected individual), and the *average rate of contact* [$c$]{} between susceptible and infected individuals. In Tables \[Tab:ParPathogen\] and \[Tab:ParPubPolicy\] we list all the parameters of the model and their description.
The model as described so far takes into account several characteristics of the pathogen and its spread in a susceptible population. We now want to add further structural features to the model in order to include the non-pharmaceutical interventions adopted by public policies to contain the epidemic. In particular, we assume the following conditions.
- A part $p$ of the population is in isolation (either voluntarily, or as a result of public safety policies). The remaining $1-p$ of the population instead gathers all those so-called “key workers" (such as physicians and paramedicals, workers in logistics and distribution, food production, security, and others), that must continue with a regular activity, thus maintaining a large contact rate and being exposed to a higher risk of infection. We will generically refer to such $1-p$ part of the population as the *active* population, as opposed to the population in isolation. In this group we can also include people that simply do not respect social distancing, and thus maintain a high contact rate. A situation like this has been observed in countries where monitoring was not strict and a significant percentage of the population did not respect isolation.
- The fraction $1-p$ of active population has an effective contact rate $\beta$, whereas the $p$ part of the population in isolation has a contact rate reduced by a factor $r$, thus its compound contact rate is $r\beta$. We will therefore refer to this portion of the population as being in [*$r$-isolation*]{}.
- A centralized controller (such as the national health system) may intervene on the system by testing a portion of the population to check for the infectious pathogen. We assume the testing kit to be reliable, that is, we neglect the possibility of false positives/negatives. As a rule, then, an individual from the compartment $S$ will always test negative, an individual from $I$ or $A$ always positive, while an individual from $E$ will test positive with a probability $\delta\in [0,1]$. In this way, even though the individuals in $E$ are not contagious, we account for the possibility that they might test positive, depending on the stage of development of the pathogen in that specific individual and on the efficacy of the testing kit.
Let us notice that, in general, the effective contact rate $\beta$ depends on a variety of factors, including the density of population in a given country/region. However, during a pandemic, even the effective contact rate of the individuals not in isolation may be reduced by increased awareness (for example, maintaining the social distancing), or by respecting stricter safety protocols and by availability of proper Personal Protection Equipment (PPE), including face shields, masks, gloves, soap, and so on.
According to the above description, each compartment $S$, $E$, $I$ and $A$ is partitioned as follows: $S = S_f \cup S_r$, where $S_f$ are susceptible and active, while $S_r$ are susceptible and in $r$-isolation; $E = E_f \cup E_r$, where $E_f$ are exposed and active, while $E_r$ are exposed and in $r$-isolation; $I = I_f \cup I_{r}$, where $I_f$ are infectious and active, $I_{r}$ are infectious and in $r$-isolation; $A = A_f \cup A_{r}$, where $A_f$ are asymptomatic infectious and active, $A_{r}$ are asymptomatic infectious and in $r$-isolation. The compartment $Q$ collects all the infected individuals who have been tested positive, either after the onset of severe symptoms, or because of a sample test among the population, according to the procedure described in the third item of the above list. Let us stress that, among these compartments, only the individuals in $Q$ are aware of being infected, and thus contagious, hence they are either hospitalized or at home, but in both cases they follow strict procedures to reduce their contact rate to $0$. Finally, we will use the compartments $R$ for the recovered and immune individuals, and $D$ for the disease-induced deaths. Both these last compartments will be removed from the dynamics and will end up in the counter system . Moreover, we point out that the portion $p$ of the population in $r$-isolation is predetermined at the initial time of the evolution, reflecting the public policy in place in that specific period of time. Of course, such fraction $p$ may be updated at a later time, according to newer (stricter or looser) public policies.
The first set of constants, related to the pathogen itself (assuming no mutation occurs in the time of epidemic, or if so, the mutation does not affect such parameters of the virus) and its induced disease, are collected in Table \[Tab:ParPathogen\]. A graphical representation of the course of the disease for symptomatic carriers can be seen in Figure \[timeline\].
---------------------- ---------------------------------------------------------------------------------
[**Par.**]{} [**Description**]{}
\[0.5ex\] $ \tau$ inverse of the latent time from exposure to infectiousness onset
\[0.3ex\] $\sigma$ inverse of the time from infectiousness onset to possible symptoms onset
\[0.3ex\] $\theta$ inverse of mean incubation time (i.e. $\theta^{-1} = \tau^{-1} + \sigma^{-1} $)
\[0.3ex\] $\alpha$ proportion of asymptomatic infections
\[0.3ex\] $\gamma_1$ recovery rate for asymptomatic or mild symptomatic cases
\[0.3ex\] $\gamma_2$ recovery rate for severe and critical cases
\[0.3ex\] $\mu$ mortality rate among confirmed cases
\[0.3ex\] $\delta$ probability of detection by testing in compartment $E$
---------------------- ---------------------------------------------------------------------------------
: Parameters of [COVID-19]{}[]{data-label="Tab:ParPathogen"}
\[timeline\]
The second set of parameters is related to public policies, and consists of the parameters in Table \[Tab:ParPubPolicy\]. [Let us recall at this point that $\beta$ varies in each territory, depending mainly on population density and behaviour.]{} These constants may be used as control parameters: through the tuned lockdown decided by the public policies (reflected in $p$ and partially in $r$), through the awareness of the population in respecting social distancing and in the widespread use of personal protection equipment (expressed by $\beta$ and partially by $r$), and through the availability of testing kits, which results in a higher or lower value of $\rho$.
---------------------- --------------------------------------------------------------
[**Par.**]{} [**Description**]{}
\[0.5ex\] $\beta(t)$ transmission rate at time $t$ (proportional to contact rate)
\[0.3ex\] $r(t)$ reduction coefficient of transmission rate
for people in isolation at time $t$
\[0.3ex\] $\rho(t)$ testing rate of people with mild or no symptoms at time $t$
\[0.3ex\] $p(t)$ fraction of the population in $r$-isolation at time $t$
---------------------- --------------------------------------------------------------
: Parameters of Public Policies interventions[]{data-label="Tab:ParPubPolicy"}
The extended state variable of the system thus becomes $$\tilde{X} = (E_f,E_r,I_f,I_{r},A_f,A_r,Q,S_f,S_r,R,D)\, ,$$ where the description of the compartments is given in Table \[table:compartments\].
--------------------- ------------------------------------------------------
[**Compartment**]{} [**Description**]{}
\[0.5ex\] $E_f$ exposed, not in isolation, not contagious
\[0.3ex\] $E_r$ exposed, in $r$-isolation, not contagious
\[0.3ex\] $I_f$ infected and contagious, not in isolation
$I_r$ infected and contagious, in $r$-isolation
$A_f$ asymptomatic and contagious, not in isolation
$A_r$ asymptomatic and contagious, in $r$-isolation
$Q$ infected and tested positive, in enforced quarantine
$S_f$ susceptible not in isolation
\[0.3ex\] $S_r$ susceptible in $r$-isolation
\[0.3ex\] $R$ recovered and immune
$D$ dead
--------------------- ------------------------------------------------------
: List of extended compartments[]{data-label="table:compartments"}
We focus in particular on the evolution of the variable $$X = (E_f,E_r,I_f,I_{r},A_f,A_r,Q,S_f,S_r)\, ,$$ that follows the model $$\label{eq:SEIRwQ}
%\left\{
\begin{array}{l}
\Dot{E}_f = \beta(t) S_f \left[I_f + A_f + r(t)(I_{r} + A_r)\right] -\rho(t) \delta E_f- \tau E_f \\[0.5ex]
\Dot{E}_r = r(t) \beta(t) S_r \left[I_f + A_f + r(t)(I_{r} + A_r)\right] -\rho(t)\delta E_r - \tau E_r\\[0.5ex]
\Dot{I}_f = \tau E_f - \sigma I_f - \rho(t) I_f \\[0.5ex]
\Dot{I}_{r} = \tau E_r - \sigma I_{r} - \rho(t) I_{r} \\[0.5ex]
\Dot{A}_f = \sigma\alpha I_f - \rho(t) A_f - \gamma_1 A_f \\[0.5ex]
\Dot{A}_{r} = \sigma\alpha I_{r} - \rho(t) A_{r} - \gamma_1 A_r \\[0.5ex]
\Dot{Q} = \sigma (1-\alpha) (I_f + I_{r}) + \rho(t) \big[\delta(E_f+E_r)+I_f + I_{r} + A_f + A_{r}\big] - \gamma_2 Q - \mu Q \\[0.5ex]
\Dot{S}_f = -\beta(t) S_f [I_f + A_f + r(t) ( I_{r} + A_r)] \\[0.5ex]
\Dot{S}_r = -r(t) \beta(t) S_r [I_f + A_f + r(t) ( I_{r} + A_r)]
\end{array}
%\right.$$ while the evolution of the states $$\label{eq:counters}
\left.%\{
\begin{array}{l}
\Dot{R} = \gamma_1 (A_f + A_{r}) + \gamma_2 Q\\[0.5ex]
\Dot{D} = \mu Q
\end{array}
\right.$$ only provides counters for the proportion (over the total population) of recovered and dead individuals, respectively. See the compartmental diagram associated to this model in Figure \[diagram\].
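The dynamics above can be sketched numerically as follows. This is a minimal, self-contained Python simulation of system together with the counters for $R$ and $D$, using a hand-rolled fourth-order Runge-Kutta scheme; the parameter values and initial conditions are illustrative placeholders (within the ranges of Table \[parameter\_values\]), not the calibrated scenarios discussed later. The final check only verifies that the total population is conserved, which must hold since every outflow of one compartment is an inflow of another.

```python
# Minimal RK4 simulation of the isolation/quarantine/testing model.
# All parameter values below are illustrative placeholders, not estimates.

def deriv(X, beta, r, rho, tau, sigma, alpha, gamma1, gamma2, mu, delta):
    Ef, Er, If, Ir, Af, Ar, Q, Sf, Sr, R, D = X
    force = If + Af + r * (Ir + Ar)       # infectious pressure seen by S
    return [
        beta * Sf * force - rho * delta * Ef - tau * Ef,        # E_f
        r * beta * Sr * force - rho * delta * Er - tau * Er,    # E_r
        tau * Ef - sigma * If - rho * If,                       # I_f
        tau * Er - sigma * Ir - rho * Ir,                       # I_r
        sigma * alpha * If - rho * Af - gamma1 * Af,            # A_f
        sigma * alpha * Ir - rho * Ar - gamma1 * Ar,            # A_r
        sigma * (1 - alpha) * (If + Ir)
        + rho * (delta * (Ef + Er) + If + Ir + Af + Ar)
        - gamma2 * Q - mu * Q,                                  # Q
        -beta * Sf * force,                                     # S_f
        -r * beta * Sr * force,                                 # S_r
        gamma1 * (Af + Ar) + gamma2 * Q,                        # R (counter)
        mu * Q,                                                 # D (counter)
    ]

def rk4_step(X, h, *pars):
    k1 = deriv(X, *pars)
    k2 = deriv([x + 0.5 * h * k for x, k in zip(X, k1)], *pars)
    k3 = deriv([x + 0.5 * h * k for x, k in zip(X, k2)], *pars)
    k4 = deriv([x + h * k for x, k in zip(X, k3)], *pars)
    return [x + h / 6 * (a + 2 * b + 2 * c + d)
            for x, a, b, c, d in zip(X, k1, k2, k3, k4)]

def simulate(X0, days, h, pars):
    X = list(X0)
    for _ in range(int(days / h)):
        X = rk4_step(X, h, *pars)
    return X

# p = 0.7 of the population in r-isolation; one small seed of active exposed.
p, e0 = 0.7, 1e-5
X0 = [e0 * (1 - p), e0 * p, 0, 0, 0, 0, 0,
      (1 - e0) * (1 - p), (1 - e0) * p, 0, 0]
# (beta, r, rho, tau, sigma, alpha, gamma1, gamma2, mu, delta)
pars = (0.7676, 0.1, 0.05, 1 / 3, 1 / 2, 0.4, 1 / 10, 1 / 18, 0.05 / 14, 1.0)
final = simulate(X0, days=180, h=0.1, pars=pars)
print("dead fraction after 180 days:", final[-1])
```

Swapping the hand-rolled integrator for `scipy.integrate.solve_ivp` would be the natural choice in practice; the pure-Python version is used here only to keep the sketch dependency-free.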
\[RemarkTests\] The parameter $\rho$ indicates the proportion of the population presenting either mild or no symptoms that is tested daily. It can also be thought of as the inverse of the mean time that an infected person spends without being tested. For instance, if the system manages to detect, each day, 5% of the asymptomatic infections, then $\rho = 0.05.$ If we are in an ideal “trace and test” situation (see e.g. South Korea [@guardian2020]), in which for each confirmed infection his/her recent contacts are rapidly and efficiently traced and tested, then $\rho$ will be greater and this will have an impact on the [*basic reproduction number*]{} (see Section \[sec:R0\]).
Recalling that testing is supposed to be applied, at least, to all sufficiently symptomatic cases, we add a counter for the positive tests $T(t)$ until time $t,$ which evolves according to the equation $$%\label{testcounter}
\dot T = \sigma(1-\alpha)(I_f+I_{r}) + \rho(t) \big(\delta(E_f+E_r)+I_f+I_{r}+A_f+A_{r}\big)\; .$$ Having this quantity, one can estimate the total number of tests in each territory using the [*testing positive rate*]{} of that location, which is the ratio between reported cases and tests done [@whoreport331509; @worldometers-testing].
In our framework, we assume that all cases with sufficiently severe symptoms are (tested and) quarantined, and we set the parameter $\alpha \in (0,1)$ to be the fraction of asymptomatic cases, including the cases with mild symptoms. But we can adapt our model to a scenario in which even severe symptoms are not tested until critical. In this case, with a very large value for $\alpha$, only a small portion of the symptomatic individuals enters directly into $Q,$ while the others need to be tested (according to the sampling testing rate $\rho$ among the population) to be quarantined.
System is endowed with the set of initial conditions given by the vector $$\label{eq:ICb}
X_0 = (E_{f,0},E_{r,0},I_{f,0},I_{r,0},A_{f,0},A_{r,0},Q_{0},S_{f,0},S_{r,0})$$ [with components in the interval $[0,1]$]{}. Setting $$\mathcal{C}_1
:= \big\{X = (x_i)_{i = 1,\ldots,9} \in \R^9 : x_i\in [0,1],\, \text{ for } i=1,\dots 9\big\}\; ,$$ the cube of states with entries between $0$ and $1$, it is easy to check that $\mathcal{C}_1$ is invariant under the flow of system , that is, given an initial condition $X_0\in \mathcal{C}_1$, the solution $X(t)$ to - remains in $\mathcal{C}_1$ for all $t>0$.
We collect here some variations of model that can be formulated in the same framework considered in this paper.
1. One might consider a small but not negligible contact rate between susceptible individuals and people in the compartment $Q$, accounting for infections (mainly of medical and paramedical personnel) occurring during the hospitalization of an infected individual, or for individuals tested positive in enforced quarantine at home, who do not comply strictly with the isolation procedures and end up infecting relatives or other contacts. In this case, the equations for the evolution of the susceptible compartments shall be completed with additional terms involving $\varepsilon$ in the following way: $$%\left\{
\begin{array}{l}
\Dot{S}_f = -\beta(t) S_f [I_f + A_f + r(t) ( I_{r} + A_r) {\ +\ \varepsilon Q}] \, ,\\[0.7ex]
\Dot{S}_r = -r(t) \beta(t) S_r [I_f + A_f + r(t) ( I_{r} + A_r) {\ +\ \varepsilon Q}]\, ,
\end{array}
%\right.$$ and the same terms with opposite sign shall appear in the equation corresponding to $\Dot{Q}$.
2. At the current stage, it is still not clear how long the immunity of a recovered individual lasts, with a number of findings tending towards a rather long immunization period [@An2020recovered; @time-reinfection; @wajnberg2020humoral]. For this reason, in we assume that a recovered individual will remain immune over the time framework considered in the different scenarios. However, the model can easily describe the case of recovered individuals becoming susceptible again, by adding a transfer term from the compartment $R$ to $S_f$ and $S_r$, with a coefficient depending on the inverse of the average immunization period. Similarly, the model can include the case of reactivation of the virus in an individual previously declared recovered (and not newly exposed to the virus), by inserting a transfer term from the compartment $R$ into $I_f$ and $I_r$, with appropriate coefficients depending on the probability of the reactivation of the virus and on the inverse of the average time of reactivation. However, at the moment there is no strong evidence supporting such a possibility [@time-reinfection].
3. A crucial issue while coping with the outbreak of the epidemic, which leads to the so-called urge of *flattening the curve*, is whether the number of critical cases in need of intensive care (IC) treatment (due to respiratory failure, shock, and multiple organ dysfunction or failure) would saturate the number of available intensive care units (ICUs).\
This parameter can be estimated directly from model , considering for each country the number of available ICUs and the percentage of positive confirmed cases requiring IC treatment. For example, this percentage has been estimated to be about $6\%$ for China [@WHO-Ch], and up to $12\%$ for Italy [@GrassPeseCecco2020; @Remuzzi2020]. As an alternative, it would be possible to insert a further compartment $C$ in model counting the number of the individuals needing ICU treatment, by modifying the equations corresponding to the compartments $Q$ and $D$ as follows: $$\begin{split}
\Dot{Q} &= \sigma (1-\alpha) I + \rho(t) \big[\delta E + I + A\big] - \gamma_2 Q {\ -\ \tau_c Q}\\
{\dot C } &= {\tau_c Q\ -\ \mu_c C\ -\ \gamma_c C}\\
\dot D &= {\mu_c C}
\end{split}$$ with suitable coefficients $\tau_c$, $\mu_c$ and $\gamma_c$ denoting the inverse of the time from symptoms onset to critical symptoms, the mortality of critical cases, and the recovery rate for critical cases, respectively.
4. In this paper we have considered the whole population as a fixed number of individuals during the time period of the evolution. It is of course possible to consider the case of an evolving total population, by including in the model the natural birth and mortality rate. In particular, newborns of susceptible individuals shall enter the corresponding susceptible compartment, whereas it is not clear whether the offspring of an infectious individual would be infectious, in such case the newborn shall move directly to the compartment $I$. On the other hand, the natural mortality rate shall act on each compartment of system , as well as on $R$ in system .
The basic reproduction number for model {#sec:R0}
========================================
We are interested in determining the basic reproduction number $\mathcal{R}_0$ associated with system . To do this, we fix a time interval $[t_0,t_1]$ such that the coefficients $\beta(t)$, $r(t)$ and $\rho(t)$ are constant over $[t_0,t_1]$. This is consistent with the setting of the scenarios simulated in Section \[subsec:Scenarios\], where we assume such coefficients to be piecewise constant functions, sharing the same switching times, [that represent different phases of restrictions and policies]{}. Thus, according [to the calculations given in]{} the Appendix \[App1\] and the parameters in Tables \[Tab:ParPathogen\] and \[Tab:ParPubPolicy\], we obtain that the value of $\mathcal{R}_0$ for each time interval between two consecutive switching times is given by $$\label{R0wheneps=0}
\mathcal{R}_0 = \frac{1}{2}\left(
\varphi + \sqrt{\varphi^2 + \frac{4\sigma\alpha}{\rho + \gamma_1}\varphi}\;
\right)\, ,$$ with $$\label{varphi}
\varphi = \frac{\beta \tau (1 - (1 - r^2) p)}{(\rho\delta + \tau)(\sigma + \rho)}\ .$$ From this explicit formula for the reproduction number $\mathcal{R}_0$, we can highlight the qualitative dependence of $\mathcal{R}_0$ on each parameter of the system, in particular:
- If the effective contact rate $\beta$ increases, then $\mathcal{R}_0$ increases.
- Focusing on the coefficient $1 - (1-r^2)p$, we see that the closer $p$ is to $1$ and the smaller $r$ is, that is, the larger the portion of the population in $r$-isolation and the stricter the reduction factor $r$ on its contact rate, the lower $\mathcal{R}_0$ becomes.
- If $\alpha$ increases, that is, if there is a larger proportion of asymptomatic infectious individuals, then $\mathcal{R}_0$ increases.
- If $\sigma$ increases, corresponding to shorter onset time, then $\mathcal{R}_0$ decreases.
- If either $\rho$ or $\gamma_1$ increase, i.e., either the control action by testing is strengthened, for example through an improved tracing and tracking system, or the recovery rate improves, for example, because of new and more effective treatments, then $\mathcal{R}_0$ decreases.
- If $\delta$ increases, for example, as a result of improved testing kits able to detect the infection at an earlier stage, then $\mathcal{R}_0$ decreases.
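The qualitative dependencies listed above can be checked directly against the explicit formula. The following Python sketch implements $\varphi$ and $\mathcal{R}_0$; the baseline parameter set is an arbitrary illustrative choice within the ranges of Table \[parameter\_values\], not a fitted one.

```python
from math import sqrt

def phi(beta, tau, r, p, rho, delta, sigma):
    """The quantity varphi in the expression of R0."""
    return beta * tau * (1 - (1 - r**2) * p) / ((rho * delta + tau) * (sigma + rho))

def R0(beta, tau, r, p, rho, delta, sigma, alpha, gamma1):
    f = phi(beta, tau, r, p, rho, delta, sigma)
    return 0.5 * (f + sqrt(f**2 + 4 * sigma * alpha / (rho + gamma1) * f))

# Illustrative (uncalibrated) baseline parameter set.
base = dict(beta=0.7676, tau=1/3, r=0.2, p=0.5, rho=0.05,
            delta=1.0, sigma=1/2, alpha=0.4, gamma1=1/10)

# Numerical check of the qualitative dependencies listed above:
assert R0(**{**base, "beta": 0.9}) > R0(**base)    # more contact -> larger R0
assert R0(**{**base, "p": 0.9}) < R0(**base)       # more isolation -> smaller R0
assert R0(**{**base, "rho": 0.2}) < R0(**base)     # more testing -> smaller R0
assert R0(**{**base, "alpha": 0.6}) > R0(**base)   # more asymptomatics -> larger R0
print("baseline R0:", round(R0(**base), 3))
```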
Moreover, we can characterize the crucial condition $\mathcal{R}_0\le 1$ by means of a simpler expression than , as described in the next result.
\[PropR0\] Set $$%\varphi = \frac{\beta \tau [1 - (1 - r^2) p]}{(\rho\delta + \tau)(\sigma + \rho)} \; \qquad\text{ and }\qquad
\mathcal{T}_0 := \frac{\beta \tau [1 - (1 - r^2) p]}{(\rho\delta + \tau)(\sigma + \rho)} \left(
1 + \frac{\sigma\alpha}{\rho + \gamma_1}
\right)\, .$$ [Then]{} $\mathcal{R}_0$ is [smaller]{} than (respectively, equal to or greater than) 1 if and only if the same relation holds for $\mathcal{T}_0$. In particular, if $\varphi > 1$ (see ), then $\mathcal{R}_0 > 1$ and $\mathcal{T}_0 > 1$.
We postpone the proof of Proposition \[PropR0\] [to]{} the Appendix \[App2\]. A more quantitative analysis of the dependence of the threshold $\mathcal{T}_0$ on the parameters of the model is developed in the Appendix \[App3\].
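Besides the analytical proof, the equivalence stated in Proposition \[PropR0\] can be probed numerically. The sketch below (the sampling ranges are arbitrary but plausible) draws random parameter sets and checks that $\mathcal{R}_0 - 1$ and $\mathcal{T}_0 - 1$ always share the same sign:

```python
import random
from math import sqrt

def R0_and_T0(beta, tau, r, p, rho, delta, sigma, alpha, gamma1):
    # varphi as in the formula above, shared by both quantities.
    phi = beta * tau * (1 - (1 - r**2) * p) / ((rho * delta + tau) * (sigma + rho))
    R0 = 0.5 * (phi + sqrt(phi**2 + 4 * sigma * alpha / (rho + gamma1) * phi))
    T0 = phi * (1 + sigma * alpha / (rho + gamma1))
    return R0, T0

random.seed(1)
for _ in range(1000):
    R0, T0 = R0_and_T0(beta=random.uniform(0.1, 1.5), tau=random.uniform(0.2, 0.6),
                       r=random.random(), p=random.random(),
                       rho=random.uniform(0.0, 0.5), delta=random.random(),
                       sigma=random.uniform(0.2, 1.0), alpha=random.uniform(0.1, 0.9),
                       gamma1=random.uniform(0.05, 0.15))
    # R0 and T0 must lie on the same side of 1.
    assert (R0 > 1) == (T0 > 1) and (R0 < 1) == (T0 < 1)
```

Since $\mathcal{T}_0$ is linear in $\varphi$, it is the more convenient quantity for threshold analyses, as exploited in the sensitivity analysis of the Appendix.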
\[rem:R0tau\] If we consider the case of $\rho = 0$, that is, the situation without sample testing among the asymptomatic population, then the basic reproduction number $\mathcal{R}_0$ is independent of the latent time $\tau$. [In particular, $\mathcal{T}_0$ becomes $$\frac{\beta [1 - (1 - r^2) p]}{\sigma} \left(
1 + \frac{\sigma\alpha}{\gamma_1}
\right).$$]{}
\[rem:timeDepR0\] [Relation ]{} gives [an expression of]{} $\mathcal{R}_0$, that is the reproduction number in a totally susceptible population. As the epidemic evolves, a portion of the population becomes immune to the disease, and this makes the reproduction number decrease. More precisely, when $p=0$ and all the population has the same contact rate, the time-dependent reproduction number is given by $\mathcal{R}(t) = S(t) \mathcal{R}_0$, where $S(t)$ is the susceptible portion of the population. In our model, since the groups of active individuals and in $r$-isolation evolve differently (see Scenario A$_4$ and Figure \[A4comparison\] below), the time-dependent reproduction number $\mathcal{R}(t)$ is given by the formula where $\varphi$ in is $$\frac{\beta \tau [{S_f(t)} + r^2 S_r(t)]}{(\rho\delta + \tau)(\sigma + \rho)}\; .$$ We do not take into account this time-variation of the reproduction number in our numerical results, since we are only interested in the value of the reproduction number at the beginning of each phase, where $S(t)$ is close to 1.
\[rem:herd\] Herd immunity is defined as the proportion of the population that needs to be immunized in order to naturally slow down the spread of the disease. It depends on the value of the basic reproduction number in the following way: the herd immunity level equals $1 - \dfrac{1}{\mathcal{R}_0}.$ So the bigger $\mathcal{R}_0$, the higher the herd immunity level. In connection with Remark \[rem:timeDepR0\] above, we highlight that herd immunity is achieved at the time $t$ when $\mathcal{R}(t)$ equals 1.
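As a quick numerical illustration of this threshold (the value $\mathcal{R}_0 = 2.5$ below is purely hypothetical, not an estimate from the model):

```python
def herd_immunity(R0):
    # Fraction that must be immune so that R(t) = S(t) * R0 drops to 1.
    return 1 - 1 / R0

print(herd_immunity(2.5))  # a hypothetical R0 of 2.5 requires 60% immunity
```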
Numerical simulations {#sec:NumSim}
=====================
Retrieving parameters
---------------------
In Table \[parameter\_values\] we collect some parameter values estimated in the literature, in order to do realistic simulations. Recall the description of the parameters given in Tables \[Tab:ParPathogen\]-\[Tab:ParPubPolicy\].
[|c|c|c|c|]{} & [**Value - Range**]{} & [**Reference**]{} & [**Remark**]{}\
$\beta$ & 0.7676 & [@shen2020modelling] & \[transmission\]\
$\tau^{-1}$ & $\tau^{-1} = \theta^{-1} - \sigma^{-1}$ & [@liang2020impacts] & \[latent\]\
$\sigma^{-1}$ & & [@Read_etal; @zhang2020evolving; @Zhou2020] & \[rk:infectiouswindow\]\
$\theta^{-1}$ & 5.1 - 6.4 days & [@backer2020incubation; @lauer2020incubation; @liang2020impacts] & \[incubationWuhan\]\
$\gamma_1$ & 7.5 - 12 days & [@WHO-Ch; @Hu_etal_Asy] & \[recovery1\]\
$\gamma_2$ & 15 - 22 days & [@WHO-Ch; @Zhou2020] & \[recovery2\]\
$\mu$ & \[0.03/14,0.1/14\] & [@whoremarks3media2020; @wang2020updated] & \[mu\]\
$\alpha$ & \[0.265,0.643\] & [@wiki:diamond; @lavezzo2020suppression] & \[asymp\]\
$p$ & $[0 , 1]$ & [@OxCGRT] & \[pr\]\
$r$ & $[0 , 1]$ & [@OxCGRT] & \[pr\]\
$\rho$ & \[0,0.5\] & [@worldometers-testing] & \[rho\]\
$\delta$ & 1 & [@HarvardMedicine] & \[delta\]\

: Realistic range of parameters values[]{data-label="parameter_values"}
Several remarks regarding the parameter values in Table \[parameter\_values\] follow.
1. \[transmission\] The parameter $\beta$ strongly depends on the population behaviour. We take the value of $\beta$ from [@shen2020modelling], where they calibrated an SEIR model with isolation and estimated the transmission rate $\beta$, before lockdown, to be $0.7676$ (with a 95$\%$ confidence interval $(0.7403 , 0.7949)$).
2. \[latent\] The mean duration of the latent period can be computed using the estimates for the incubation period (i.e. from exposure to symptom onset) and the time from infectiousness onset to symptom onset, so it is reasonable to take $\tau^{-1}$ between 2 and 4 days. More precisely, in [@liang2020impacts] they fitted an SEIQR model to the data from Wuhan and estimated a latent period of duration 2.92 days with a 95% CI of $(1.09, 5.28)$.
3. \[asymp\] In [@wiki:diamond] they show the testing results on Diamond Princess passengers, a cruise ship that was quarantined in February-March 2020, at the beginning of the epidemic. Almost all passengers and crew members were tested, resulting in 410 asymptomatic infections among 696 positive-tested persons, which yields an asymptomatic rate of $0.589$. In [@lavezzo2020suppression], they studied the infection in the municipality of Vo’, Italy. They estimated a median of asymptomatic cases of 44.8%, with a 95% CI of (26.5, 64.3). Other estimates were given in [@Daym1375; @mizumoto2020estimating; @nishiura2020estimation].
4. \[rk:infectiouswindow\] In [@Zhou2020], they measured the time from infectiousness onset to appearance of symptoms. It resulted in approximately 1 day for fever and 1-3 days for cough. Furthermore, it has been observed in clinical cases studied in [@Woelfel_etal] that the contagious period may start before the appearance of symptoms and outlast the end of symptoms.
5. \[incubationWuhan\] The reference [@backer2020incubation] estimates the incubation period $\theta^{-1}$ to be 6.4 days, based on travellers returning from Wuhan. In [@lauer2020incubation] it was estimated to be 5.1 days. Other estimates were given in [@liang2020impacts].
6. \[recovery1\] The estimate of $\gamma_1$ is difficult, since for asymptomatic cases it is hard to observe and track the time from exposure to recovery. [@Hu_etal_Asy] estimated 9.5 days for asymptomatic cases, while [@WHO-Ch] estimated 14 days for mild cases. So it is reasonable to assume $\gamma_1^{-1}$ in the range 7.5 - 12 days, considering around 2 days between infectiousness onset and symptoms onset.
7. \[recovery2\] In [@Zhou2020] they measured viral shedding duration and estimated a median of 20 days, with an interquartile range of $(17,24).$ Removing the approximately 2-day period from infectiousness onset to symptoms onset, we get an IQR for $\gamma_2^{-1}$ of $(15,22).$ These values approximate the duration of quarantine recommended to positive-tested cases.
8. \[mu\] The rate $\mu$ depends on the percentage of infections that have been detected, since it is proportional to the ratio between confirmed cases and deaths. WHO Director-General’s opening remarks at the media briefing of 3 March 2020 [@whoremarks3media2020] announced an estimated global death rate of 3.4%. In some countries, like Italy, the ratio between deaths and confirmed cases up to May 2020 is larger than 0.1, while in others, like Israel, it is around 0.01. Regarding the time a person takes to die from COVID-19, in [@Zhou2020] they estimated $18.5$ days from infectiousness onset to death.
9. \[pr\] The values of $p$ and $r$ vary in each country/territory depending on the public policies and the population’s compliance to these measures. A detailed and real-time survey on the percentage of people under lockdown in each country can be found in [@OxCGRT].
10. \[rho\] As already mentioned in Remark \[RemarkTests\], $\rho$ represents the proportion of the infected asymptomatic population that is tested daily. In a realistic scenario, it would not be reasonable to set too high a value of $\rho$, say over 0.5, because it would account for detecting more than $50\%$ of the infected asymptomatic population daily.
11. \[delta\] It is not yet known “at what point during the course of illness a test becomes positive” (see [@HarvardMedicine]). For the simulations we set $\delta$ to 1 and suppose that the tests detect the infection from exposure.
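For the scenario runs below, one concrete way to fix a parameter set is to take midpoints of the ranges in Table \[parameter\_values\]. The Python snippet sketches such a choice; every value marked as assumed is one plausible point within the reported range, not the exact calibrated value behind the published figures:

```python
# Baseline parameter choices drawn from Table [parameter_values].
# Rates are per day; values marked "assumed" are midpoints of the
# reported ranges, not the exact values used for the figures.
params = {
    "beta":   0.7676,     # transmission rate [@shen2020modelling]
    "sigma":  1 / 2.5,    # assumed: ~2.5 days from infectiousness to symptoms
    "theta":  1 / 5.75,   # assumed: midpoint of the 5.1-6.4 day incubation
    "gamma1": 1 / 9.5,    # assumed: ~9.5 day recovery for asymptomatic cases
    "gamma2": 1 / 18.5,   # assumed: midpoint of the 15-22 day IQR
    "mu":     0.05 / 14,  # assumed: ~5% fatality over ~14 days
    "alpha":  0.45,       # assumed: asymptomatic fraction in [0.265, 0.643]
    "rho":    0.05,       # daily testing rate among asymptomatic
    "delta":  1.0,        # tests detect infection from exposure
}
# The latent rate tau follows from tau^{-1} = theta^{-1} - sigma^{-1}.
params["tau"] = 1 / (1 / params["theta"] - 1 / params["sigma"])
print(round(1 / params["tau"], 2))  # latent period in days
```

With these midpoints the implied latent period is 3.25 days, inside the 2-4 day window of Remark \[latent\].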
Simulations for different scenarios {#subsec:Scenarios}
-----------------------------------
In this subsection we consider several scenarios and show their outcomes. Many of the graphics are in logarithmic scale, since the values represent portions of the population and can therefore assume very small values.
### Scenarios A
We consider the four different scenarios with the following characteristics:
[**Scenario A$_1$:**]{} no isolation, no testing among asymptomatic people
[**Scenario A$_2$:**]{} 20%-isolation of 60% of the population from day 31, no testing among asymptomatic people
[**Scenario A$_3$:**]{} 20%-isolation of 90% of the population from day 31, no testing among asymptomatic people
[**Scenario A$_4$:**]{} 20%-isolation of 90% of the population from day 31, intensive testing among asymptomatic people
These Scenarios A can be seen as: no action, mild lockdown, strict lockdown and strict lockdown with testing among asymptomatic suspected cases. As initial condition we take, in all the scenarios, one exposed case per million inhabitants, that is: $$E_f(0)+E_r(0) = 1 \times 10^{-6},\quad
S_f(0)+S_r(0) = 1-1 \times 10^{-6}.$$ The remaining compartments start with value 0. Results and parameters for Scenario A are given in Table \[table:A14\] and graphics in Figure \[FigA14\]. We can observe the effect of the lockdown on the epidemic. The mild lockdown of A$_2$ reduces infections by more than half w.r.t. the no-action situation A$_1$, while the strict lockdowns A$_3$ and A$_4$ induce a reduction of the order of $10^{-2}$ in total recovered, deaths and positive tests. In particular, comparing A$_3$ and A$_4$ we can see that testing and consequent quarantine for positive-tested asymptomatic cases not only reduce the infections and deaths by more than 66%, but also shorten the duration of the epidemic.
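The dynamics behind these runs can be reproduced with a few lines of plain Python. The sketch below integrates the vector field written out in the Appendix with a forward-Euler step, for a Scenario-A$_3$-like setting (isolation level $r = 0.2$ for 90% of the population from day 31, no testing). The parameter values are assumed midpoints of Table \[parameter\_values\]; the actual figures were produced with the code in the repository linked at the end of this section.

```python
# Forward-Euler sketch of the compartmental dynamics (vector field from
# the Appendix), with recovered R and deaths D tracked to close the
# population balance. Assumed parameters; Scenario-A3-like switching.
beta, tau, sigma = 0.7676, 1 / 3.25, 1 / 2.5
gamma1, gamma2 = 1 / 9.5, 1 / 18.5
alpha, mu, rho, delta = 0.45, 0.05 / 14, 0.0, 1.0  # no testing in A3
p = 0.9                                            # fraction in the r-group

def step(state, t, dt):
    Ef, Er, If, Ir, Af, Ar, Q, Sf, Sr, R, D = state
    r = 0.2 if t >= 31 else 1.0          # piecewise-constant isolation level
    force = If + Af + r * (Ir + Ar)      # force of infection
    dEf = beta * Sf * force - (rho * delta + tau) * Ef
    dEr = r * beta * Sr * force - (rho * delta + tau) * Er
    dIf = tau * Ef - (sigma + rho) * If
    dIr = tau * Er - (sigma + rho) * Ir
    dAf = sigma * alpha * If - (rho + gamma1) * Af
    dAr = sigma * alpha * Ir - (rho + gamma1) * Ar
    dQ  = (sigma * (1 - alpha) * (If + Ir)
           + rho * (delta * (Ef + Er) + If + Ir + Af + Ar)
           - (gamma2 + mu) * Q)
    dSf = -beta * Sf * force
    dSr = -r * beta * Sr * force
    dR  = gamma1 * (Af + Ar) + gamma2 * Q  # recovered (closes the balance)
    dD  = mu * Q                           # cumulative deaths
    deriv = (dEf, dEr, dIf, dIr, dAf, dAr, dQ, dSf, dSr, dR, dD)
    return tuple(x + dt * dx for x, dx in zip(state, deriv))

e0 = 1e-6  # one exposed case per million inhabitants
state = (e0 * (1 - p), e0 * p, 0, 0, 0, 0, 0,
         (1 - e0) * (1 - p), (1 - e0) * p, 0, 0)
dt, t = 0.05, 0.0
while t < 300:
    state = step(state, t, dt)
    t += dt
print(f"deaths per million after 300 days: {state[-1] * 1e6:.1f}")
```

Since the derivatives sum to zero by construction, the total population (including $R$ and $D$) stays constant along the integration, which is a useful self-check for any reimplementation.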
[.47]{} ![Scenarios A$_1$, A$_2$, $A_3$ and A$_4$.[]{data-label="FigA14"}](scenario_A1_20200506.eps "fig:"){width="\textwidth"}
[.47]{} ![Scenarios A$_1$, A$_2$, $A_3$ and A$_4$.[]{data-label="FigA14"}](scenario_A2_20200506.eps "fig:"){width="\textwidth"}
[.47]{} ![Scenarios A$_1$, A$_2$, $A_3$ and A$_4$.[]{data-label="FigA14"}](scenario_A3_20200506.eps "fig:"){width="\textwidth"}
[.47]{} ![Scenarios A$_1$, A$_2$, $A_3$ and A$_4$.[]{data-label="FigA14"}](scenario_A4_20200506.eps "fig:"){width="\textwidth"}
The United States had 15 confirmed infections by February 15th 2020 [@worldometers-us]. Many states started their lockdown between March 15th and March 20th, which put around 60% of the population in isolation [@OxCGRT]. If we assume that day 0 is February 15th, then day 31 falls at the middle of the interval in which lockdowns started. By day 80 (May 6th) the US had 72,287 confirmed deaths, while Scenario A$_2$ gives 91,231, which is larger but not far off. Scenario A$_2$ could thus be a good approximation of the situation in the US until the beginning of May 2020, and the excess in the computed deaths in comparison to real data suggests that deaths could be underreported by $26\%$, which is coherent with some recent studies on underreported deaths (see [*e.g.*]{} [@newyork-missingdeaths]).
For Scenario A$_4$ we make a comparison of infections for the two groups: the active one (that continues with the usual contact rate) and the one in $r$-isolation. By comparing the infections’ curves and the cumulative infections, we can give an estimate of the lower chance that people in $r$-isolation have to get exposed. In this particular scenario, people that are not in isolation have nearly 6 times the chance of being infected. See the graphs in Figure \[A4comparison\], where we show the curves of infections and cumulative infections for each group, normalized by the proportions $1-p$ and $p.$
[.65]{} ![Comparison of infections for population in and out isolation[]{data-label="A4comparison"}](scenario_A4_comparison_f_r.eps "fig:"){width="\textwidth"}
### Scenarios B: different restriction level of lockdown
We next consider the following four scenarios in which we vary the values of the portion $p$ of people under lockdown and their level $r$ of restriction of social contacts.
[**Scenario B$_1$:**]{} 50%-isolation of 50% of the population from day 35
[**Scenario B$_2$:**]{} 40%-isolation of 65% of the population from day 35
[**Scenario B$_3$:**]{} 30%-isolation of 80% of the population from day 35
[**Scenario B$_4$:**]{} 20%-isolation of 90% of the population from day 35
We measure the outcomes. The parameters and results are given in Table \[scenariosB13\], and graphics in Figure \[FigB\]. The parameters that are not specified in Table \[scenariosB13\] are repeated from Table \[table:A14\]. For these four scenarios we consider the same testing rate $\rho$ among asymptomatic cases.
[.49]{} ![Scenarios B$_1$, B$_2$, B$_3$ and B$_4$[]{data-label="FigB"}](scenarios_B1_4_R_D.eps "fig:"){width="\textwidth"}
[.47]{} ![Scenarios B$_1$, B$_2$, B$_3$ and B$_4$[]{data-label="FigB"}](scenarios_B1_4_I.eps "fig:"){width="\textwidth"}
Scenarios B$_1$ and B$_2$ show cases in which the restrictions are not strong enough. Indeed, in both cases the basic reproduction number $\mathcal{R}_0$ remains above 1 also after the lockdown intervention (see Table \[scenariosB13\]), and the infection reaches 71.6% and 43.6% of the population, causing the death of 1.89% and 1.15% of the population, respectively, which is a catastrophic outcome. Comparing these four scenarios, we can deduce that, in order to be effective in containing the outbreak, the lockdown should involve at least $80\%$ of the population, reducing their contact rate to about $30\%$ of their usual contacts. Indeed, in Scenario B$_3$, the basic reproduction number becomes 0.93 after day 35, meaning that loosening the restrictions of this scenario (while keeping all other parameters unchanged) might push $\mathcal{R}_0$ above 1.
### Scenarios C: early vs. late lockdown
We now compare two situations, one in which lockdown starts immediately, just 21 days after the first confirmed cases, and the other for which lockdown starts four weeks later. More precisely, we consider the following two scenarios and measure the different outputs:
[**Scenario C$_1$:**]{} 20%-isolation of 90% of the population from day 21
[**Scenario C$_2$:**]{} 20%-isolation of 90% of the population from day 49
The parameters and outputs are given in Table \[scenariosC12\] and Figure \[FigC12\]. The impact of delaying the beginning of the lockdown on the final outcome is evident: the numbers of recovered and deaths in Scenario C$_2$ are of the order of $10^3$ times those of Scenario C$_1$. As an example, from Table \[scenariosC12\] one notices that, at the end of the epidemic, Scenario C$_1$ counts 4.2 deaths per million inhabitants, while Scenario C$_2$ faces 1020 deaths per million. Moreover, the epidemic in Scenario C$_2$ lasts about 110 days more than in Scenario C$_1$, and thus also undergoes worse economic consequences of the lockdown.
[.47]{} ![Scenarios C$_1$ and C$_2$.[]{data-label="FigC12"}](scenario_D1.eps "fig:"){width="\textwidth"}
[.47]{} ![Scenarios C$_1$ and C$_2$.[]{data-label="FigC12"}](scenario_D2.eps "fig:"){width="\textwidth"}
### Scenarios D: early testing vs. late testing
We now want to assess the impact of testing timing. For this, we consider the following two scenarios and measure the different outputs:\
[**Scenario D$_1$:**]{} 20%-isolation of 80% of the population from day 50, efficient testing before day 50, reduced testing after
[**Scenario D$_2$:**]{} 20%-isolation of 80% of the population from day 50, little testing before day 50, massive testing after
The parameter values and outcomes of the epidemic in Scenarios D$_1$ and D$_2$ are given in Table \[scenariosD12\] and Figure \[FigD12\]. They show the cost, in infections and lives, of starting testing late. It is worth noticing that, in spite of a higher total number of tests carried out in Scenario D$_2$, the strategy adopted in Scenario D$_1$ attains a considerably better outcome: indeed, the infections and deaths of Scenario D$_2$ are of the order of 10$^2$ times those of Scenario D$_1,$ and the only difference was doing efficient testing at the beginning of the epidemic.
[.47]{} ![Scenarios D$_1$ and D$_2$.[]{data-label="FigD12"}](scenario_C1.eps "fig:"){width="\textwidth"}
[.47]{} ![Scenarios D$_1$ and D$_2$.[]{data-label="FigD12"}](scenario_C2.eps "fig:"){width="\textwidth"}
### Scenarios E: different testing rates
Now we fix the parameters $\beta, \mu, p, r$ as in Table \[scenariosD12\] and vary only $\rho$, which takes the four different values 0, 0.02, 0.05 and 0.1 over the whole time period. We get the outcome of Figure \[FigE\]. From the comparison among these four scenarios, we realize that a high value of $\rho$, as the result of an efficient tracing and testing strategy, may reduce the number of cumulative infected individuals and deaths by an order of $10^2$.
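The monotone effect of $\rho$ visible in these curves can also be read off the threshold $\mathcal{T}_0$ of Proposition \[PropR0\]. The short check below (with assumed baseline values for the remaining parameters) confirms that $\mathcal{T}_0$ decreases as the testing rate grows:

```python
def T0(rho, beta=0.7676, tau=1 / 3.25, sigma=1 / 2.5,
       gamma1=1 / 9.5, alpha=0.45, delta=1.0, p=0.8, r=0.2):
    # Threshold from Proposition [PropR0]; default values are assumed
    # baselines, not the exact scenario parameters.
    phi = (beta * tau * (1 - (1 - r**2) * p)
           / ((rho * delta + tau) * (sigma + rho)))
    return phi * (1 + sigma * alpha / (rho + gamma1))

for rho in (0.0, 0.02, 0.05, 0.1):
    print(rho, round(T0(rho), 3))
```

This reproduces, at the level of the threshold, the sensitivity result $S_\rho < 0$ derived in the Appendix.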
[.47]{} ![Scenarios E$_1$, E$_2$, E$_3$ and E$_4$[]{data-label="FigE"}](scenarios_E_R_D.eps "fig:"){width="\textwidth"}
[.49]{} ![Scenarios E$_1$, E$_2$, E$_3$ and E$_4$[]{data-label="FigE"}](scenarios_E_I.eps "fig:"){width="\textwidth"}
The simulations were done with Python and all the code is in the GitHub repository\
[github.com/lucasmoschen/covid-19-model](https://github.com/lucasmoschen/covid-19-model).
Conclusions {#sec:conclusions}
===========
In this paper we present an SEIR model with Asymptomatic and Quarantined compartments to describe the recent and ongoing COVID-19 outbreak. Our model is intended to highlight the effect of three different non-pharmaceutical interventions imposed by public policies on the containment of the outbreak and on the total number of disease-induced infections and deaths:
- reduction of contact rate for a given portion of the population;
- enforced quarantine for confirmed infectious individuals;
- testing among the population to detect also asymptomatic infectious individuals.
On the one hand we show that, as expected, each of these interventions has a beneficial impact on flattening the curve of the outbreak. On the other hand, the comparison among different scenarios shows the remarkable efficacy of an early massive testing approach, when the limited number of infected individuals makes the tracing of recent contacts easier and more effective, as in Scenario D$_1$, and of a timely lockdown, even when only few infected cases are confirmed, as in Scenario C$_1$. In both situations, the timing of the intervention plays a crucial role in the incisiveness of the public safety policy.
In addition, we give an explicit representation of the basic reproduction number in terms of the several parameters of the model, which allows us to describe its dependence on the features of the virus and on the implemented non-pharmaceutical interventions.
This description makes available a valuable tool to tune the public policies in order to control the outbreak of the epidemic, forcing $\mathcal{R}_0$ below the threshold $1$. However, considering the major effects of an enduring lockdown on the economy of the country that applies it, it is desirable to loosen the lockdown measures after the containment of the outbreak. Nevertheless, the decision makers and each individual shall be aware that a value of $\mathcal{R}_0$ only barely greater than $1$ would lead to an increase in the number of infected and dead by two orders of magnitude, thus provoking the collapse of the national health system. This is better explained by the following scenarios: consider a situation with constant testing $\rho = 0.05$ and no lockdown where, after the first 35 days of outbreak with a high $\mathcal{R}_0$ ($\approx 2$), the population gains awareness of the risk and manages to reduce its contact rate so as to steer $\mathcal{R}_0$ to either $0.9$, $1$ or $1.1$. Figure \[FigR0var\] illustrates the large deviations among the outcomes of these three different situations.
In order to allow the population to circulate with no restrictions, it is necessary that herd immunity (see Remark \[rem:herd\]) is achieved. The value that matters to compute this threshold of immunization is the basic reproduction number under no social distancing, which has been estimated in this article and in many others as being, in general, greater than 2.5. So achieving herd immunity would imply infecting at least $60\%$ of the population, which would lead, with the current mortality rates, to 1-5% of the population dying, which is obviously a catastrophic outcome. Hence, reinforcing what was said in the above paragraph, until a vaccine or treatment is found, it is necessary to keep the value of $\mathcal{R}_0$ below 1. Otherwise, the curve of infections will always be increasing.
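The herd-immunity threshold invoked here is the classical fraction $1 - 1/\mathcal{R}_0$ of the population that must be immune; a one-line computation recovers the $60\%$ figure for $\mathcal{R}_0 = 2.5$:

```python
def herd_immunity_threshold(R0):
    # Fraction of the population that must be immune so that an average
    # case infects fewer than one susceptible: 1 - 1/R0.
    return 1 - 1 / R0

print(herd_immunity_threshold(2.5))
```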
[.47]{} ![The impact of small variations on $\mathcal{R}_0$[]{data-label="FigR0var"}](scenarios_R0_R_D.eps "fig:"){width="\textwidth"}
[.47]{} ![The impact of small variations on $\mathcal{R}_0$[]{data-label="FigR0var"}](scenarios_R0_I.eps "fig:"){width="\textwidth"}
Appendix
========
Computing $\mathcal{R}_0$ {#App1}
-------------------------
Recalling the model, we are able to give an analytic expression of the basic reproduction number $\mathcal{R}_0$ associated to the system.

In order to do so, we fix a time interval $[t_0,t_1]$ such that the coefficients $\beta(t)$, $r(t)$ and $\rho(t)$ are constant over $[t_0,t_1]$. This is coherent with the setting of Section \[sec:NumSim\], where we assume such coefficients to be piecewise constant functions sharing the same switching times. Thus, the following procedure allows us to evaluate the value of $\mathcal{R}_0$ for each time interval between two consecutive switching times.
It is well known that the reproduction number $\mathcal{R}_0$ is the crucial parameter to establish whether Disease Free Equilibria (DFE) are stable or not [@diekmann1990definition; @vdDW02]. We denote by $\mathcal{X}_s$ the set of DFE, which is given by $$\mathcal{X}_s = \left\{X\in \mathcal{C}_1 : E_f = E_r = I_f = I_{r} = A_f = A_{r} = Q = 0\right\}\, .$$ We can recast the system in the compact form $$\label{eq:abstsys}
\Dot{X}(t) = f(X(t))$$ by introducing $$f(X) =
\left(
\begin{smallmatrix}
\beta S_f \left[I_f + A_f + r(I_{r} + A_r)\right] -\rho \delta E_f- \tau E_f \\[0.5ex]
r \beta S_r \left[I_f + A_f + r(I_{r} + A_r)\right] -\rho\delta E_r - \tau E_r\\[0.5ex]
\tau E_f - \sigma I_f - \rho I_f \\[0.5ex]
\tau E_r - \sigma I_{r} - \rho I_{r} \\[0.5ex]
\sigma\alpha I_f - \rho A_f - \gamma_1 A_f \\[0.5ex]
\sigma\alpha I_{r} - \rho A_{r} - \gamma_1 A_r \\[0.5ex]
\sigma (1-\alpha) [I_f + I_{r}] + \rho \big(\delta(E_f+E_r)+I_f + I_{r} + A_f + A_{r}\big) - \gamma_2 Q - \mu Q \\[0.5ex]
-\beta S_f [I_f + A_f + r( I_{r} + A_r)] \\[0.5ex]
-r \beta S_r [I_f + A_f + r( I_{r} + A_r)]
\end{smallmatrix}
\right)\, .$$ The stability of the system around a DFE $X^*$ is related to the spectral properties of the linearized system around $X^*$, whose dynamics is ruled by the Jacobian $Df = (\partial f_i/\partial x_j)_{i,j = 1,\dots,9}$ of $f$. However, the high dimensionality of $Df(X)$ makes it difficult to develop an analytical analysis of its spectrum and its stability properties. We will therefore follow a different approach, deducing the value of $\mathcal{R}_0$ from the result in [@vdDW02], which ensures that $\mathcal{R}_0$ is given by the formula $\mathcal{R}_0 = \rho(FV^{-1})$, where $\rho(A)$ denotes the spectral radius of the matrix $A$. A comment on the applicability of the results in [@vdDW02] is given in Remark \[RemarkvdD\] below.
Since $X^*$ is a DFE, we may assume that $X^* = (0,0,0,0,0,0,0,1-p,p)$, for some $p\in [0,1]$ representing the portion of population that is initially in the compartment $S_r$, while the remaining $1-p$ fraction of the population is in $S_f$. Thus, in our setting, the matrices $F$ and $V$ related to the dynamics are given by $$F =
\left(
\begin{matrix}
0 & 0 & \beta (1-p) & r \beta (1-p) & \beta (1-p) & r \beta (1-p) & 0 \\[0.3ex]
0 & 0 & r \beta p & r ^2\beta p & r \beta p & r ^2\beta p & 0 \\[0.3ex]
0 & 0 & 0 & 0 & 0 & 0 & 0\\[0.3ex]
0 & 0 & 0 & 0 & 0 & 0 & 0\\[0.3ex]
0 & 0 & \sigma \alpha & 0 & 0 & 0 & 0\\[0.3ex]
0 & 0 & 0 & \sigma \alpha & 0 & 0 & 0\\[0.3ex]
0 & 0 & \sigma (1- \alpha) + \rho & \sigma (1- \alpha) + \rho & \rho & \rho & 0
\end{matrix}
\right)\, ,$$ $$V =
\left(
\begin{matrix}
\rho\delta + \tau & 0 & 0 & 0 & 0 & 0 & 0\\[0.3ex]
0 & \rho\delta + \tau & 0 & 0 & 0 & 0 & 0\\[0.3ex]
-\tau & 0 & \sigma + \rho & 0 & 0 & 0 & 0 \\[0.3ex]
0 & -\tau & 0 & \sigma + \rho & 0 & 0 & 0 \\[0.3ex]
0 & 0 & 0 & 0 & \rho + \gamma_1 & 0 & 0 \\[0.3ex]
0 & 0 & 0 & 0 & 0 & \rho + \gamma_1 & 0 \\[0.3ex]
-\rho\delta & -\rho\delta & 0 & 0 & 0 & 0 & \gamma_2 + \mu
\end{matrix}
\right)\, .$$ Since $V$ is non-singular, we compute $$V^{-1} =
\left(
\begin{smallmatrix}
(\rho\delta + \tau)^{-1} & 0 & 0 & 0 & 0 & 0 & 0\\[0.3ex]
0 & (\rho\delta + \tau)^{-1} & 0 & 0 & 0 & 0 & 0\\[0.3ex]
\frac{\tau}{(\sigma + \rho)(\rho\delta + \tau)} & 0 & (\sigma + \rho)^{-1} & 0 & 0 & 0 & 0 \\[0.3ex]
0 & \frac{\tau}{(\sigma + \rho)(\rho\delta + \tau)} & 0 & (\sigma + \rho)^{-1} & 0 & 0 & 0 \\[0.3ex]
0 & 0 & 0 & 0 & (\rho + \gamma_1)^{-1} & 0 & 0 \\[0.3ex]
0 & 0 & 0 & 0 & 0 & (\rho + \gamma_1)^{-1} & 0 \\[0.3ex]
\frac{\rho\delta}{(\gamma_2 + \mu)(\rho\delta + \tau)} & \frac{\rho\delta}{(\gamma_2 + \mu)(\rho\delta + \tau)} & 0 & 0 & 0 & 0 & (\gamma_2 + \mu)^{-1}
\end{smallmatrix}
\right)\, .$$ Thus, one can easily compute the matrix $FV^{-1}$ and check that its characteristic polynomial is given by $$p(\lambda) = - \lambda^5 P_2(\lambda)\, ,$$ where $P_2(\lambda)$ is a second order polynomial of the form $${P}_2(\lambda) = \lambda^2 - \frac{\beta \tau (1 - p + r^2 p)}{(\rho\delta + \tau)(\sigma + \rho)}\lambda - \frac{\sigma \alpha \tau \beta (1 - p + r^2 p)}{(\rho + \sigma)(\rho\delta + \tau)(\rho + \gamma_1)}\, .$$ ${P}_2(\lambda)$ has one positive and one negative root, given by $$\lambda_{1/2} = \frac{1}{2}\left(
\frac{\beta \tau (1 - p + r^2 p)}{(\rho\delta + \tau)(\sigma + \rho)} \pm \sqrt{\Delta}
\right)\, ,$$ with $$\Delta = \left(\frac{\beta \tau (1 - p + r^2 p)}{(\rho\delta + \tau)(\sigma + \rho)}\right)^2 + \frac{4\sigma \alpha \tau \beta (1 - p + r^2 p)}{(\rho + \sigma)(\rho\delta + \tau)(\rho + \gamma_1)} > 0\, .$$ Since the term $\frac{\beta \tau (1 - p + r^2 p)}{(\rho\delta + \tau)(\sigma + \rho)}$ is positive, the value of $\mathcal{R}_0$ coincides with $\lambda_1$, i.e., $$\label{eq:R0App}
\mathcal{R}_0 = \lambda_1 = \frac{1}{2}\left(
\frac{\beta \tau (1 - p + r^2 p)}{(\rho\delta + \tau)(\sigma + \rho)} + \sqrt{\Delta}
\right)\, .$$ This is an analytic expression of $\mathcal{R}_0$, which shows its explicit dependence on the different parameters of model . Proposition \[PropR0\] gives a convenient equivalent condition to ensure the stability of DFE.
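A quick numerical sanity check of the closed form: with any (assumed) positive parameter values, the value $\lambda_1$ computed from the formula above must be a root of $P_2$ and must be its unique positive root.

```python
import math

# Assumed parameter values; any positive choice works for this check.
beta, tau, sigma, gamma1 = 0.7676, 1 / 3.25, 1 / 2.5, 1 / 9.5
alpha, rho, delta, p, r = 0.45, 0.05, 1.0, 0.6, 0.3

# Coefficients of P_2(lambda) = lambda^2 - a*lambda - b.
a = beta * tau * (1 - p + r**2 * p) / ((rho * delta + tau) * (sigma + rho))
b = (sigma * alpha * tau * beta * (1 - p + r**2 * p)
     / ((rho + sigma) * (rho * delta + tau) * (rho + gamma1)))

def P2(lam):
    return lam**2 - a * lam - b

disc = a**2 + 4 * b                 # the discriminant Delta
R0 = 0.5 * (a + math.sqrt(disc))    # the positive root lambda_1
print(round(R0, 4))
```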
\[RemarkvdD\] In order to directly apply the results in [@vdDW02], it is required that the eigenvalues of $Df(X^*)$ have negative real parts and, under this assumption, the asymptotic stability of the DFE is established. In our case, the matrix $Df(X^*)$ has zero as an eigenvalue of multiplicity two, with associated eigenvectors in the directions of the last two variables, these being $S_f$ and $S_r.$ It is not hard to see that the results in [@vdDW02] hold for our system by simply weakening asymptotic stability to stability in the directions of the susceptible compartments, which has no consequence for the meaning of the threshold $\mathcal{R}_0$. Alternatively, a way to force the system to comply with all the technical assumptions from [@vdDW02] is adding birth and natural mortality to our model, which has no relevant impact on the results we showed (since the natural daily birth/death rates are of the order of $10^{-5}$, hence negligible w.r.t. the other parameters).
Proof of Proposition \[PropR0\] {#App2}
-------------------------------
We recall that Proposition \[PropR0\] claims the following: for $$\varphi = \frac{\beta \tau [1 - (1 - r^2) p]}{(\rho\delta + \tau)(\sigma + \rho)} \; \qquad\text{ and }\qquad
\mathcal{T}_0 = \varphi\left(
1 + \frac{\sigma\alpha}{\rho + \gamma_1}
\right)\, ,$$ $\mathcal{R}_0$ is smaller than (respectively, equal to or greater than) 1 if and only if the same relation holds for $\mathcal{T}_0$. Indeed, by a straightforward computation we realize that $$\begin{gathered}
\mathcal{R}_0\le 1 \quad \Longleftrightarrow\quad
\varphi + \sqrt{\varphi^2 + \frac{4\sigma\alpha}{\rho + \gamma_1}\varphi}\le 2\\[0.3ex]
\Longleftrightarrow\quad \left(0 < \right) \sqrt{\varphi^2 + \frac{4\sigma\alpha}{\rho + \gamma_1}\varphi}\le 2 -\varphi
\quad \overset{\ast}{\Longleftrightarrow}\quad \varphi^2 + \frac{4\sigma\alpha}{\rho + \gamma_1}\varphi\le \left(2 -\varphi\right)^2\\[0.3ex]
\Longleftrightarrow\quad \frac{4\sigma\alpha}{\rho + \gamma_1}\varphi\le 4 - 4\varphi
\quad \Longleftrightarrow\quad
\varphi\left(1 + \frac{\sigma\alpha}{\rho + \gamma_1}\right)\le 1
\quad \Longleftrightarrow\quad
\mathcal{T}_0 \le 1\; .\end{gathered}$$ Observe that the implication $\ \Longleftarrow\ $ in the equivalence $\overset{\ast}{\ \Longleftrightarrow\ }$ holds true because $$\mathcal{T}_0\le 1\quad \Longrightarrow\quad \varphi \le \frac{\rho + \gamma_1}{\rho + \gamma_1 + \sigma\alpha} \le 1\, ,$$ thus $|2 - \varphi| = 2 - \varphi$. In particular, the same chain of relations holds with the equal sign. Finally, since $\mathcal{R}_0\le 1$ is equivalent to $\mathcal{T}_0\le 1$, then also $\mathcal{R}_0 > 1$ is equivalent to $\mathcal{T}_0 > 1$.
In addition, let us notice that, if $\varphi > 1$, then both $\mathcal{R}_0 > 1$ and $\mathcal{T}_0 > 1$. Indeed, from the definition of $\mathcal{T}_0$, since $\frac{\sigma\alpha}{\rho + \gamma_1} \geq 0$, we have that $\mathcal{T}_0 \geq \varphi >1$, and thus also $\mathcal{R}_0 > 1$.
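The equivalence established above can also be stress-tested numerically: the sketch below draws random parameter combinations and checks that $\mathcal{R}_0 \le 1$ and $\mathcal{T}_0 \le 1$ always agree (the sampling ranges are arbitrary choices, not calibrated values).

```python
import math
import random

random.seed(0)

def R0_and_T0(beta, tau, sigma, gamma1, alpha, rho, delta, p, r):
    # R0 from the explicit formula of the previous subsection,
    # T0 from Proposition [PropR0].
    phi = (beta * tau * (1 - (1 - r**2) * p)
           / ((rho * delta + tau) * (sigma + rho)))
    disc = phi**2 + 4 * sigma * alpha * phi / (rho + gamma1)
    R0 = 0.5 * (phi + math.sqrt(disc))
    T0 = phi * (1 + sigma * alpha / (rho + gamma1))
    return R0, T0

ok = True
for _ in range(10_000):
    R0, T0 = R0_and_T0(
        beta=random.uniform(0.05, 2.0), tau=random.uniform(0.1, 1.0),
        sigma=random.uniform(0.1, 1.0), gamma1=random.uniform(0.05, 0.5),
        alpha=random.uniform(0.0, 1.0), rho=random.uniform(0.0, 0.5),
        delta=random.uniform(0.0, 1.0), p=random.uniform(0.0, 1.0),
        r=random.uniform(0.0, 1.0))
    # Allow a tolerance when either quantity is numerically at 1.
    agree = (R0 <= 1) == (T0 <= 1) or min(abs(R0 - 1), abs(T0 - 1)) < 1e-9
    ok = ok and agree
print(ok)
```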
Sensitivity analysis of the threshold $\mathcal{T}_0$ {#App3}
-----------------------------------------------------
The explicit representation of the basic reproduction number $\mathcal{R}_0$ allows us to study the sensitivity of $\mathcal{R}_0$ with respect to the several parameters of the model. Moreover, thanks to Proposition \[PropR0\], we know that the threshold $$\mathcal{T}_0 = \frac{\beta \tau [1 - (1 - r^2) p]}{(\rho\delta + \tau)(\sigma + \rho)} \left(
1 + \frac{\sigma\alpha}{\rho + \gamma_1}
\right)$$ can be used for an equivalent characterization of the condition $\mathcal{R}_0 < 1$. For this reason, it is handier to develop the sensitivity of $\mathcal{T}_0$ with respect to the parameters of the model, and deduce its dependence on perturbations of the parameters. We thus compute the normalized sensitivity index $S_x$ corresponding to the $x$ parameter, given by $$S_x := \frac{x}{\mathcal{T}_0}\frac{\partial\mathcal{T}_0}{\partial x}\, ,$$ and we get that $$\begin{aligned}
S_\beta & = 1 > 0\ ,\\[0.3ex]
S_\tau & = \frac{\rho\delta}{\rho\delta + \tau} > 0\ ,\\[0.3ex]
S_p & = - \frac{(1-r^2)p}{1 - (1-r^2)p} < 0\ ,\\[0.3ex]
S_r & = \frac{2r^2p}{1-(1-r^2)p} > 0\ ,\\[0.3ex]
S_\delta & = - \frac{\rho\delta}{\rho\delta + \tau} < 0\ ,\\[0.3ex]
S_\alpha & = \frac{\sigma\alpha}{\rho + \gamma_1 + \sigma\alpha} > 0\ ,\\[0.3ex]
S_{\gamma_1} & = - \frac{\sigma\alpha\gamma_1}{(\rho + \gamma_1)(\rho + \gamma_1 + \sigma \alpha)} < 0\ ,\\[0.3ex]
S_{\sigma} & = - \frac{\sigma[\gamma_1 + (1-\alpha)\rho]}{(\sigma + \rho)(\rho + \gamma_1 + \sigma \alpha)} < 0\ ,\\[0.3ex]
S_{\rho} & = - \frac{\rho}{\rho + \gamma_1 + \sigma \alpha}\left[
\frac{[\delta(\sigma + 2\rho)+\tau](\rho + \gamma_1 + \sigma\alpha)}{(\rho\delta + \tau)(\sigma + \rho)} + \frac{\sigma\alpha}{\rho + \gamma_1}
\right] < 0\ .\end{aligned}$$ We thus notice the same qualitative dependence on the parameters already observed in Section \[sec:R0\]. In particular, if we increase the parameter $\beta$ by a factor of $k$, then $\mathcal{T}_0$ increases by the same factor. Similar deductions can be made on the other parameters, with the corresponding coefficients obtained by inserting the values of the parameters from Table \[parameter\_values\]. Moreover, from the expression of $S_\tau$ we realize that, if either $\rho$ or $\delta$ equals zero, then $\mathcal{T}_0$ does not depend on $\tau$ (as it happens for $\mathcal{R}_0$ as well, as noticed in Remark \[rem:R0tau\]). Similarly, if $\rho = 0$, then $S_\delta=0$, thus $\mathcal{T}_0$ does not depend on $\delta$. Regarding the parameters $p$ and $r$, their dependence is mutually related as follows: if $p=0$, then $\mathcal{T}_0$ does not depend on $r$ (since $S_r = 0$), whereas if $r=1$ then $\mathcal{T}_0$ does not depend on $p$.
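The analytic indices can be cross-checked against central finite differences of $\mathcal{T}_0$. The sketch below (with assumed baseline parameter values) verifies $S_\beta = 1$ and the expression for $S_p$:

```python
def T0(beta, tau, sigma, gamma1, alpha, rho, delta, p, r):
    # Threshold from Proposition [PropR0].
    phi = (beta * tau * (1 - (1 - r**2) * p)
           / ((rho * delta + tau) * (sigma + rho)))
    return phi * (1 + sigma * alpha / (rho + gamma1))

# Assumed baseline parameter values for the check.
base = dict(beta=0.7676, tau=1 / 3.25, sigma=1 / 2.5, gamma1=1 / 9.5,
            alpha=0.45, rho=0.05, delta=1.0, p=0.8, r=0.3)

def sensitivity(name, h=1e-7):
    # Normalized index S_x = (x / T0) * dT0/dx via central differences.
    up, down = dict(base), dict(base)
    up[name] += h
    down[name] -= h
    dT = (T0(**up) - T0(**down)) / (2 * h)
    return base[name] * dT / T0(**base)

# Analytic S_p from the list above, at the same baseline.
S_p_analytic = (-(1 - base["r"]**2) * base["p"]
                / (1 - (1 - base["r"]**2) * base["p"]))
print(round(sensitivity("beta"), 6), round(sensitivity("p"), 6))
```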
[^1]: [email protected]
[^2]: [email protected]
[^3]: [email protected]
---
abstract: 'As language data and associated technologies proliferate and as the language resources community expands, it is becoming increasingly difficult to locate and reuse existing resources. Are there any lexical resources for such-and-such a language? What tool works with transcripts in this particular format? What is a good format to use for linguistic data of this type? Questions like these dominate many mailing lists, since web search engines are an unreliable way to find language resources. This paper reports on a new digital infrastructure for discovering language resources being developed by the Open Language Archives Community (OLAC). At the core of OLAC is its metadata format, which is designed to facilitate description and discovery of all kinds of language resources, including data, tools, or advice. The paper describes OLAC metadata, its relationship to Dublin Core metadata, and its dissemination using the metadata harvesting protocol of the Open Archives Initiative.'
author:
- |
Steven Bird$^{\ast}$ and Gary Simons$^{\dagger}$\
$^{\ast}$University of Melbourne and University of Pennsylvania\
$^{\dagger}$SIL International\
[Email: `[email protected]`, `[email protected]`]{}
date: 2003
title: |
Extending Dublin Core Metadata to\
Support the Description and Discovery\
of Language Resources
---
Introduction
============
Language technology and the linguistic sciences are confronted with a vast array of *language resources*, richly structured, large and diverse. Multiple *communities* depend on language resources, including linguists, engineers, teachers and actual speakers. Many individuals and institutions provide key pieces of the infrastructure, including archivists, software developers, and publishers. Today we have unprecedented opportunities to *connect* these communities to the language resources they need. First, inexpensive mass storage technology permits large resources to be stored in digital form, while the Extensible Markup Language (XML) and Unicode provide flexible ways to represent structured data and ensure its long-term survival. Second, digital publication – both on and off the world wide web – is the most practical and efficient means of sharing language resources. Finally, a standard resource description model, the Dublin Core Metadata Set, together with an interchange method provided by the Open Archives Initiative (OAI), make it possible to construct a union catalog over multiple repositories and archives.
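As an illustration of the kind of record such a union catalog aggregates, the following sketch assembles a minimal Dublin Core description using Python's standard library. The title, creator, and other values are invented for illustration; they are not an actual OLAC record.

```python
import xml.etree.ElementTree as ET

# Namespace of the Dublin Core Metadata Element Set.
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for element, value in [
    ("title", "Fadicca field recordings, tape 3"),   # invented example values
    ("creator", "A. Linguist"),
    ("subject", "Fadicca"),
    ("type", "Sound"),
]:
    child = ET.SubElement(record, f"{{{DC}}}{element}")
    child.text = value

xml = ET.tostring(record, encoding="unicode")
print(xml)
```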
In December 2000, a new initiative which applied the OAI to language archives was founded, with the following statement of purpose:
> OLAC, the Open Language Archives Community, is an international partnership of institutions and individuals who are creating a worldwide virtual library of language resources by: (i) developing consensus on best current practice for the digital archiving of language resources, and (ii) developing a network of interoperating repositories and services for housing and accessing such resources.
This paper presents the motivation and governing ideas of OLAC, Dublin Core metadata and the Open Archives Initiative Protocol for Metadata Harvesting (§\[sec:background\]), followed by the OLAC Metadata Set (§\[sec:metadata\]). It concludes with an overview of ongoing developments and a call for participation by the wider community. Updated information on OLAC is available from the OLAC Gateway \[[www.language-archives.org](www.language-archives.org)\].
Locating Data, Tools and Advice {#sec:background}
===============================
We can observe that the individuals who use and create language resources are looking for three things: data, tools, and advice. By DATA we mean any information that documents or describes a language, such as a published monograph, a computer data file, or even a shoebox full of hand-written index cards. The information could range in content from unanalyzed sound recordings to fully transcribed and annotated texts to a complete descriptive grammar. By TOOLS we mean computational resources that facilitate creating, viewing, querying, or otherwise using language data. Tools include not just software programs, but also the digital resources that the programs depend on, such as fonts, stylesheets, and document type definitions. By ADVICE we mean any information about what data sources are reliable, what tools are appropriate in a given situation, what practices to follow when creating new data, and so forth [@BirdSimons03language]. In the context of OLAC, the term *language resource* is broadly construed to include all three of these: data, tools and advice.
![In reality the user can’t always get there from here[]{data-label="fig:vision2"}](vision2.ps){width="0.9\linewidth"}
Unfortunately, today’s user does not have ready access to the resources that are needed. Figure \[fig:vision2\] offers a diagrammatic view of the reality. Some archives (e.g. Archive 1) do have a site on the internet which the user is able to find, so the resources of that archive are accessible. Other archives (e.g. Archive 2) are on the internet, so the user could access them in theory, but the user has no idea they exist so they are not accessible in practice. Still other archives (e.g. Archive 3) are not even on the internet. And there are potentially hundreds of archives (e.g. Archive $n$) that the user needs to know about. Tools and advice are out there as well, but are at many different sites.
There are many other problems inherent in the current situation. For instance, the user may not be able to find all the existing data about a language of interest because different sites have called it by different names (low recall). The user may be swamped with irrelevant resources because search terms have important meanings in other domains (low precision). (For a detailed discussion of precision and recall in the context of metadata, see [@Svenonius00].) The user may not be able to use an accessible data file for lack of being able to match it with the right tools. The user may locate advice that seems relevant but have no basis for judging its merits.
Bridging the gap
----------------
### Why improved web-indexing is not enough
As the internet grows and web-indexing technologies improve one might hope that a general-purpose search engine should be sufficient to bridge the gap between people and the resources they need. However, this is a vain hope. The first reason is that many language resources, such as audio files and software, are not text-based. The second reason concerns language identification, the single most important property for describing language resources. If a language has a canonical name which is distinctive as a character string, then the user has a chance of finding any online resources with a search engine. However, the language may have multiple names, possibly due to the vagaries of romanization, such as a language known variously as Fadicca, Fadicha, Fedija, Fadija, Fiadidja, Fiyadikkya, and Fedicca (giving low recall). The language name may collide with a word which has other interpretations that are vastly more frequent, e.g. the language names Mango and Santa Cruz (giving low precision).
The third reason why general-purpose search engines are inadequate is the simple fact that much of the material is not, and will not be, documented in free prose on the web. Either people will build systematic catalogues of their resources, or they won’t do it at all. Of course, one can always export a back-end database as HTML and let the search engines index the materials. Indeed, encouraging people to document resources and make them accessible to search engines is part of our vision. However, despite the power of web search engines, there remain many instances where people still prefer to use more formal databases to house their data.
This last point bears further consideration. The challenge is to build a system for “bringing like things together and differentiating among them” [@Svenonius00]. There are two dominant storage and indexing paradigms, one exemplified by traditional databases and one exemplified by the web. In the case of language resources, the metadata is coherent enough to be stored in a formal database, but sufficiently distributed and dynamic that it is impractical to maintain it centrally. Language resources occupy the middle ground between the two paradigms, neither of which will serve adequately. A new framework is required that permits the best of both worlds, namely bottom-up, distributed initiatives, along with consistent, centralized finding aids. The Dublin Core Metadata Initiative and the Open Archives Initiative provide the framework we need to “bridge the gap.”
### The Dublin Core Metadata Initiative
The Dublin Core Metadata Initiative began in 1995 to develop conventions for resource discovery on the web \[[dublincore.org](dublincore.org)\]. The Dublin Core (DC) metadata elements represent a broad, interdisciplinary consensus about the core set of elements that are likely to be widely useful to support resource discovery. The Dublin Core consists of 15 metadata elements, where each element is optional and repeatable: Title, Creator, Subject, Description, Publisher, Contributor, Date, Type, Format, Identifier, Source, Language, Relation, Coverage, and Rights. This set can be used to describe resources that exist in both digital and traditional formats.
To support more precise description and more focussed searching, the DC metadata set has been extended with encoding schemes and refinements [@DCQ00; @DCER02]. An encoding scheme specifies a particular controlled vocabulary or notation for expressing the value of an element. An encoding scheme serves to aid a client system in interpreting the exact meaning of the element content. A refinement makes the meaning of the element more specific. For example, the Language element can be [*encoded*]{} using the conventions of RFC 3066 to unambiguously identify the language in which the resource is written (or spoken). A Subject element can be given a language [*refinement*]{} to restrict its interpretation to concern the language the resource is about.
### The Open Archives Initiative
The Open Archives Initiative (OAI) was launched in October 1999 to provide a common framework across electronic preprint archives, and it has since been broadened to include digital repositories of scholarly materials regardless of their type \[[www.openarchives.org](www.openarchives.org)\] [@LagozeVandeSompel01; @VandeSompelLagoze02]. Each participating archive, or “data provider,” has a network accessible server offering public access to metadata records describing archive holdings. The holdings themselves may be documents, raw data, software, recordings, physical artifacts, digital surrogates, and so forth. Each metadata record describes an archive holding, and includes a reference to an entry point for the holding such as a URL or a physical location.
Participating archives must comply with two standards: the [*OAI Shared Metadata Set*]{} (Dublin Core) which facilitates interoperability across all repositories participating in the OAI, and the [*OAI Protocol for Metadata Harvesting*]{} which allows “service providers” to combine metadata from multiple archives into a single catalogue. End-users interact directly with a service provider to quickly locate distributed resources.
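Concretely, a service provider's harvest is just a sequence of HTTP GET requests carrying OAI protocol arguments. The following sketch (in Python; the base URL is hypothetical) builds a ListRecords request, the protocol verb used to pull metadata records from a data provider.

```python
from urllib.parse import urlencode

def listrecords_url(base_url, metadata_prefix, resumption_token=None):
    """Build an OAI-PMH ListRecords request URL for a data provider."""
    if resumption_token is not None:
        # Continue a partial harvest: per the protocol, the token then
        # replaces all other arguments besides the verb.
        params = {"verb": "ListRecords", "resumptionToken": resumption_token}
    else:
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    return base_url + "?" + urlencode(params)

# A harvester fetches this URL and parses the XML response into records.
url = listrecords_url("http://example.org/oai", "oai_dc")
```

Repeating the request with each returned resumption token walks the harvester through the archive's full record set.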
Applying the OAI to language resources using specialized metadata
-----------------------------------------------------------------
The OAI infrastructure is a new invention: it has the bottom-up, distributed character of the web, while simultaneously having the efficient, structured nature of a centralized database. This combination is well-suited to the language resource community, where the available data is growing rapidly and where a large user-base is fairly consistent in how it describes its resource needs.
Recall that the OAI community is defined by the archives which comply with the OAI metadata harvesting protocol and that register with the OAI. Any compliant repository can register as an OAI archive, and the metadata provided by the archive is open to the public. OAI data providers may support metadata formats in addition to DC. A specialist community can define a metadata format specific to its domain and expose it via the OAI protocol. Service providers, data providers and users that employ this specialized metadata format constitute an OAI *subcommunity*.
Consequently, applying the OAI to language resources is chiefly a matter of having a common metadata format tailored for language resource description and discovery. Section \[sec:metadata\] reports on such a format, which is already in use by over twenty archives having a combined total of 30,000 metadata records. These OLAC metadata records can be harvested from multiple archives using the OAI protocol and stored in a single location, where end-users can query all participating archives simultaneously. The LINGUIST List now offers an OLAC cross-archive search service at \[<http://www.linguistlist.org/olac>\].
A Core Metadata Set for Language Resources {#sec:metadata}
==========================================
The OLAC Metadata Set extends the Dublin Core set only to the minimum degree required to express basic properties of language resources which are useful as finding aids. All Dublin Core elements and refinements are used in the OLAC Metadata Set. In order to meet the specific needs of the language resources community, certain elements have been extended following DCMI guidelines [@DCQ00; @DCXML03]. This section describes some of the attributes, elements and controlled vocabularies of the OLAC Metadata Set, then shows how they are represented in XML and how they are mapped to other formats for wider dissemination.
Attributes used in implementing the OLAC Metadata Set
-----------------------------------------------------
Three attributes – xsi:type, olac:code, and xml:lang – are used throughout the XML implementation of the metadata elements. The xsi:type attribute is used to qualify the Dublin Core element, by refining its meaning (to make it narrower or more specific), or by identifying an encoding scheme, or both. If the xsi:type specifies one of the OLAC vocabularies, then the olac:code attribute is used to hold the selected value. For example, with the Subject element, we may specify the OLAC language vocabulary as the type to indicate that we are describing the subject language of the resource. We may also provide a code to uniquely identify the language. We may further supply element content, as a freeform elaboration of the coded value. This design permits service providers to uniformly interpret the meaning of any code value, thereby providing good precision and recall. At the same time, data providers may use the element content when there is not an appropriate code or when they want to add qualifications to the coded value.
As with Dublin Core, every element in the OLAC metadata set may use the xml:lang attribute. It specifies the language in which the text in the content of the element is written. By using multiple instances of the metadata elements tagged for different languages, data providers may offer their metadata records in multiple languages.
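To make these attributes concrete, the following Python sketch parses an illustrative subject element and pulls each attribute out. The record itself is invented for illustration; the attribute and namespace names are those of the OLAC 1.0 schema as used in the XML examples later in this paper.

```python
import xml.etree.ElementTree as ET

# Namespace URIs; the OLAC URI is the one given in the XML representation section.
XSI = "http://www.w3.org/2001/XMLSchema-instance"
OLAC = "http://www.language-archives.org/OLAC/1.0/"
XMLNS = "http://www.w3.org/XML/1998/namespace"

record = (
    '<subject xmlns="http://purl.org/dc/elements/1.1/"'
    ' xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"'
    ' xmlns:olac="http://www.language-archives.org/OLAC/1.0/"'
    ' xsi:type="olac:language" olac:code="x-sil-LLU">Lau</subject>'
)

elem = ET.fromstring(record)
refinement = elem.get(f"{{{XSI}}}type")      # which vocabulary/refinement is in force
code = elem.get(f"{{{OLAC}}}code")           # the coded value from that vocabulary
content_lang = elem.get(f"{{{XMLNS}}}lang")  # language of the free-text content, if any
```

Here the type attribute tells a service provider which vocabulary governs the code, while the element content ("Lau") remains available as a human-readable elaboration.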
The elements of the OLAC Metadata Set
-------------------------------------
In this section we present a synopsis of the elements of the OLAC metadata set. For each element, we provide a one sentence definition followed by a brief discussion, systematically borrowing and adapting the definitions provided by the Dublin Core Metadata Initiative [@DCER02]. Each element is optional and repeatable.
Contributor:
: [**An entity responsible for making contributions to the content of the resource.**]{} Examples of a Contributor include a person, an organization, or a service. Recommended best practice is to identify the role played by the named entity in the creation of the resource using the OLAC Role Vocabulary [@OLAC-Role].
Coverage:
: [**The extent or scope of the content of the resource.**]{} Coverage will typically include spatial location or temporal period. Where the geographical information is predictable from the language identification, it is not necessary to specify geographic coverage.
Creator:
: [**An entity primarily responsible for making the content of the resource.**]{} As with the Contributor element, recommended best practice is to identify the role played by the named entity in the creation of the resource using the OLAC Role Vocabulary [@OLAC-Role].
Date:
: [**A date associated with an event in the life cycle of the resource.**]{} Best practice is to use the W3C Date and Time Format [@W3CDTF]. Dublin Core qualifiers may be used to refine the meaning of the date (for instance, date of creation versus date of issue versus date of modification, and so on). The refinements to Date are defined in [@DCER02].
Description:
: [**An account of the content of the resource.**]{} Description may include but is not limited to: an abstract, table of contents, reference to a graphical representation of content, or a free-text account of the content.
Format:
: [**The physical or digital manifestation of the resource.**]{} Typically, Format will specify the media-type or dimensions of a physical resource, or the character encoding or markup of a digital resource. It may be used to determine the software, hardware or other equipment needed to use the resource. Since this element applies both to software and data, service providers can use it to match data with appropriate software tools and vice versa.
Identifier:
: [**An unambiguous reference to the resource within a given context.**]{} Recommended best practice is to identify the resource by means of a string or number conforming to a globally-known formal identification system (e.g. by URI or ISBN). For non-digital archives, the Identifier may use the existing scheme for locating a resource within the collection.
Language:
: [**A language of the intellectual content of the resource.**]{} The Language element is used for a language the resource is in, as opposed to a language it describes (i.e. a “subject language”). It identifies a language that the creator of the resource assumes that its eventual user will understand. Recommended best practice is to identify the language precisely using a coded value from the OLAC Language Vocabulary.
Publisher:
: [**An entity responsible for making the resource available.**]{} Examples of a publisher include a person, an organization, or a service.
Relation:
: [**A reference to a related resource.**]{} This element is used to document relationships between resources. Dublin Core qualifiers may be used to refine the nature of the relationship (for instance, is replaced by, requires, is part of, and so on). The refinements to Relation are defined in [@DCER02].
Rights:
: [**Information about rights held in and over the resource.**]{} Typically, a Rights element will contain a rights management statement for the resource, or reference a service providing such information. Rights information often encompasses intellectual property rights, copyright, and various property rights.
Source:
: [**A reference to a resource from which the present resource is derived.**]{} For instance, it may be the bibliographic information about a printed book of which this is the electronic encoding or from which the information was extracted.
Subject:
: [**The topic of the content of the resource.**]{} Typically, a Subject will be expressed as keywords, key phrases or classification codes that describe a topic of the resource. Recommended best practice is to select a value from a controlled vocabulary or formal classification scheme. Where the subject of the resource is a language, recommended best practice is to use the OLAC Language Vocabulary (cf. the Language element above).
Title:
: [**A name given to the resource.**]{} Typically, a title will be a name by which the resource is formally known.
Type:
: [**The nature or genre of the content of the resource.**]{} Recommended best practice is to use the Dublin Core controlled vocabulary DC-Type for broad classification of type. OLAC provides additional vocabularies that are relevant for language resources: the OLAC Linguistic Data Type Vocabulary [@OLAC-Type], and the OLAC Discourse Type Vocabulary [@OLAC-Discourse].
The controlled vocabularies {#sec:cv}
---------------------------
Controlled vocabularies are enumerations of legal values, or specifications of legal formats, for the code attribute. In some cases, more than one value applies, in which case the corresponding element must be repeated, once for each applicable value. In other cases, no value is applicable and the corresponding element is simply omitted. In yet other cases, the controlled vocabulary may fail to provide a suitable item, in which case a similar item can be optionally specified and a prose comment included in the element content.
### The OLAC Language Vocabulary
Language identification is an important dimension of language resource classification. However, the character-string representation of language names is problematic for several reasons: different languages (in different parts of the world) may have the same name; the same language may have a different name in each country where it is spoken; within the same country, the preferred name for a language may change over time; in the early history of discovering new languages (before names were standardized), different people referred to the same language by different names; and for languages having non-Roman orthographies, the language name may have several possible romanizations. Together, these facts suggest that a standard based on names will not work. Instead, we need a standard based on unique identifiers that do not change, combined with accessible documentation that clarifies the particular speech variety denoted by each identifier.
The information technology community has a standard for language identification, namely, ISO 639 [@ISO639]. Part 1 of this standard lists two-letter codes for identifying 160 of the world’s major languages; part 2 of the standard lists three-letter codes for identifying about 400 languages. ISO 639 in turn forms the core of another standard, RFC 3066 (formerly RFC 1766), which is the standard used for language identification in the xml:lang attribute of XML and in the language element of the Dublin Core metadata set. RFC 3066 provides a mechanism for users to register new language identification codes for languages not covered by ISO 639, but very few additional languages have ever been registered.
Unfortunately, the existing standard falls far short of meeting the needs of the language resources community since it fails to account for more than 90% of the world’s languages, and it fails to adequately document what languages the codes refer to [@Simons00]. However, SIL’s Ethnologue [@Grimes00] provides a complete system of language identifiers which is openly available on the Web. OLAC will employ the RFC 3066 extension mechanism to build additional language identifiers based on the Ethnologue codes. For the 130-plus ISO-639-1 codes having a one-to-one mapping onto Ethnologue codes, OLAC will support both. Where an ISO code is ambiguous, OLAC requires the Ethnologue code. New identifiers for ancient languages, currently being developed by LINGUIST List, will be incorporated. These language identifiers are expressed using the code attribute of the Language and Subject elements (using the special prefix of RFC 3066 for user-defined extensions). The free-text content of these elements may be used to specify an alternative human-readable name for the language (where the name specified by the standard is unacceptable for some reason) or to specify a dialect (where the resource is dialect-specific).
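A minimal sketch of how such an identifier can be formed under the RFC 3066 user-extension mechanism; the x-sil- prefix and uppercase three-letter code follow the usage visible in the example record later in this paper.

```python
def olac_language_id(ethnologue_code: str) -> str:
    """Form an RFC 3066 user-defined language tag from an Ethnologue code."""
    code = ethnologue_code.strip().upper()
    if len(code) != 3 or not code.isalpha():
        raise ValueError("Ethnologue codes are three alphabetic letters")
    # "x-" marks a user-defined extension in RFC 3066; "sil-" names the
    # code space maintained by SIL's Ethnologue.
    return "x-sil-" + code

tag = olac_language_id("llu")  # "x-sil-LLU", the Lau language
```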
### The OLAC Linguistic Data Type Vocabulary
After language identification, another dimension of central importance for language resources is the linguistic type of a resource. Notions such as “lexicon” and “primary text” are fundamental, and the discourse of the language resources community depends on shared assumptions about what these types mean.
At present, the OLAC Linguistic Data Type Vocabulary [@OLAC-Type] distinguishes just three types: lexicon, primary text, and language description. A lexicon is defined as a “systematic listing of lexical entries... Each lexical item may, but need not, be accompanied by a definition, a description of the referent (in the case of proper names), or an indication of the item’s semantic relationship to other lexical items.” A primary text is defined as “linguistic material which is itself the object of study, typically material in the subject language which is a performance of a speech event, or the written analog of such an event.” Finally, language description is a resource which “describes a language or some aspect(s) of a language via a systematic documentation of linguistic structures.”
### Other controlled vocabularies
Here we list three other OLAC vocabularies. For full definitions, examples and notes, the reader is referred to the cited vocabulary document.
Discourse Type:
: The OLAC Discourse Type Vocabulary describes “the content of a resource as representing discourse of a particular structural type” [@OLAC-Discourse]. The vocabulary terms are as follows: drama, formulaic discourse, interactive discourse, language play, oratory, narrative, procedural discourse, report, singing, and unintelligible speech.
Role:
: The OLAC Role Vocabulary [@OLAC-Role] serves to identify the role of an individual or institution in creating or contributing to a language resource. The vocabulary terms are as follows: annotator, artist, author, compiler, consultant, depositor, developer, editor, illustrator, interviewer, participant, performer, photographer, recorder, researcher, respondent, signer, speaker, sponsor, transcriber, and translator.
Linguistic Subject:
: The OLAC Linguistic Subject Vocabulary [@OLAC-Subject] describes the content of a resource as being about a particular subfield of linguistic science. The list has been developed in the course of classifying resources on the LINGUIST List website. The vocabulary terms are as follows: anthropological linguistics, applied linguistics, cognitive science, computational linguistics, discourse analysis, forensic linguistics, general linguistics, historical linguistics, history of linguistics, language acquisition, language documentation, lexicography, linguistics and literature, linguistic theories, mathematical linguistics, morphology, neurolinguistics, philosophy of language, phonetics, phonology, pragmatics, psycholinguistics, semantics, sociolinguistics, syntax, text and corpus linguistics, translating and interpreting, typology, and writing systems.
In addition to the five vocabularies discussed here, other vocabularies have been proposed and are being considered by the community.
Once a vocabulary is reviewed and accepted by the community as OLAC best practice in language resource description, the corresponding XML schema is hosted on the OLAC website. Archives which use this vocabulary can then be automatically tested for conformance. Prior to acceptance, any new vocabulary can be set up as a “third-party extension” and adopted by archives without any centralized review process. This bottom-up approach encourages experimentation and innovation, yet only leads to community-wide adoption once the benefit of the new vocabulary for resource discovery has been demonstrated.
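As a simplified illustration of the conformance testing a hosted vocabulary enables (the real check is XML schema validation against the schema on the OLAC website; the token spellings below are assumed renderings of the linguistic data types discussed above):

```python
# Accepted vocabulary terms for one element, as a Python set. An actual
# conformance test validates whole records against the hosted XML schema;
# this sketch only checks individual coded values.
LINGUISTIC_TYPES = {"lexicon", "primary_text", "language_description"}

def conforms(code: str) -> bool:
    """Is this coded value drawn from the accepted vocabulary?"""
    return code in LINGUISTIC_TYPES

ok = conforms("lexicon")     # True
bad = conforms("wordlist")   # False
```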
XML representation {#sec:xml}
------------------
The XML implementation of OLAC metadata follows the “Guidelines for implementing Dublin Core in XML” [@DCXML03]. The OLAC metadata schema is an application profile [@HeeryPatel00] that incorporates the elements from two metadata schemas developed by the DC Architecture Working Group for implementing qualified DC. The most recent version of the OLAC metadata schema is posted on the OLAC website[^1] and an example record is available[^2].
The container for an OLAC metadata record is the element olac:olac, which is defined in a namespace called <http://www.language-archives.org/OLAC/1.0/>. By convention the namespace prefix olac is used, and the DC namespace is declared to be the default so that the metadata element tags need not be prefixed. For instance, the following is a valid OLAC metadata record:
    <olac:olac
      xmlns:olac="http://www.language-archives.org/OLAC/1.0/"
      xmlns="http://purl.org/dc/elements/1.1/"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation=
        "http://www.language-archives.org/OLAC/1.0/
         http://www.language-archives.org/OLAC/1.0/olac.xsd">
      <creator>Bloomfield, Leonard</creator>
      <date>1933</date>
      <title>Language</title>
      <publisher>New York: Holt</publisher>
    </olac:olac>
In addition to this DC metadata, an element may use a DC qualifier, following the guidelines given in [@DCXML03]. The element may specify a refinement (using an element defined in the dcterms namespace) or an encoding scheme (using the name of the scheme as the value of the xsi:type attribute), or both. Note that the metadata record must declare the dcterms namespace as follows: xmlns:dcterms="http://purl.org/dc/terms/". For instance, the following element represents a creation date encoded in the W3C date and time format:
    <dcterms:created xsi:type="dcterms:W3C-DTF">2002-11-28
    </dcterms:created>
The xsi:type attribute is a directive that is built into the XML Schema standard \[<http://www.w3.org/XML/Schema>\]. It functions to override the type definition of the current element by the type definition named in its value. In this example, the value of xsi:type resolves to a complex type definition in the XML schema for the dcterms namespace.
Any element may also use the xml:lang attribute to indicate the language of the element content. For instance, the following represents a title in the Lau language of Solomon Islands and its translation into English:
    <title xml:lang="x-sil-LLU">Na tala ’uria na idulaa
      diana</title>
    <dcterms:alternative xml:lang="en">The road to good
      reading</dcterms:alternative>
For further detailed discussion of the XML format, the reader is referred to [@SimonsBird03lht; @SimonsBird03metadata].
Mapping OLAC metadata to other formats
--------------------------------------
As we have seen, OLAC metadata uses attributes to support resource description using controlled vocabularies, and service providers may use these attributes to perform precise searches. However, service providers also need to be able to display metadata records to users in an easy-to-read format. This involves translating coded attribute values into human-readable form, and combining this information with the element content to produce a display of all information pertaining to a metadata element [@Simons03display].
Transforming OLAC metadata records into such a display format is a non-trivial task. Instead of having each service provider perform this task independently, OLACA, the OLAC Aggregator [@SimonsBird03lht] offers a human-readable version of all OLAC metadata. Service providers can harvest this metadata, and expose the content of the metadata elements to end-users without any further processing.
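The kind of translation involved can be sketched as follows (Python; the label table is an abbreviated, illustrative excerpt of the role vocabulary, not the aggregator's actual crosswalk):

```python
# Map coded attribute values to display labels and combine them with
# element content. An actual crosswalk would cover every OLAC vocabulary
# and the DC refinements as well.
ROLE_LABELS = {"speaker": "Speaker", "transcriber": "Transcriber",
               "editor": "Editor"}

def display_contributor(code: str, content: str = "") -> str:
    """Render a coded contributor element as human-readable text."""
    label = ROLE_LABELS.get(code, code)  # fall back to the raw code
    return f"Contributor ({label}): {content}" if content else f"Contributor ({label})"

line = display_contributor("speaker", "Bloomfield, Leonard")
# "Contributor (Speaker): Bloomfield, Leonard"
```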
Beyond this, the OLAC website exposes human-readable versions of OLAC metadata to wider communities. First, a simple DC version of the human-readable metadata is exposed to OAI service providers, so that all OLAC archives show up in digital library catalogs of the wider OAI community (e.g. in the ARC service <http://arc.cs.odu.edu/>). Second, an HTML version of the human-readable metadata is exposed to web crawlers, permitting all OLAC metadata records to be indexed by web search engines and to be stored in internet archives.
Conclusions
===========
As language resources proliferate, and as the associated community grows, the need for a consistent and comprehensive framework for resource description and discovery is becoming critical. OLAC has addressed this need by providing metadata tailored to the needs of language resource description, minimally extending the DC standard. At the same time, the OAI Protocol for Metadata Harvesting on which the OLAC infrastructure is built permits end-users to search the contents of multiple archives from a single location.
OLAC provides a ready *template* for resource description, with two clear benefits over traditional full-text description and retrieval. First, the template guides the resource creator in giving a *complete description* of the resource, in contrast to prose descriptions which may omit important details. And second, the template associates the elements of a description with *standard labels*, such as those defined in §\[sec:metadata\], permitting users to do focussed searching. Resources and repositories can proliferate, yet a common metadata format will support centralized services, giving users easy access to language resources.
Despite its many benefits, simply making resources findable is insufficient on its own. There must also be a framework in which the community can identify and promote best practices for digital representation of linguistic information to ensure re-usability and long-term preservation. To support this need, OLAC has developed a process which specifies how the community can identify best practices [@OLAC-Process].
We conclude by calling for wider participation in OLAC. First, the controlled vocabularies used by the OLAC Metadata Set and described in this article are works in progress, and are continuing to be revised with input from participating archives and members of the community. We hope to have provided sufficient motivation and exemplification for readers to be able to contribute to ongoing developments. Second, the OLAC process can be used by community members to develop new vocabularies and other best practice recommendations. Finally, the core infrastructure of data providers and service providers is operational, and individuals and institutions are encouraged to use it for the widespread dissemination of their language resources.
Acknowledgements {#acknowledgements .unnumbered}
================
This material is based upon work supported by the National Science Foundation under grants: 9910603 *International Standards in Language Engineering*, and 9978056 *TalkBank*. Earlier versions of this material were presented at the Workshop on Web-Based Language Documentation and Description in Philadelphia, December 2000 [@BirdSimons00], the ACL/EACL Workshop on Sharing Tools and Resources for Research and Education [@BirdSimons01], and the IRCS Workshop on Open Language Archives [@BirdSimons02workshop]. We are indebted to members of the OLAC community for their active participation in the creation and development of the OLAC metadata format.
Helen [Aristar Dry]{} and Michael Appleby. OLAC linguistic subject vocabulary, 2003. <http://www.language-archives.org/REC/field.html>.
Helen [Aristar Dry]{} and Heidi Johnson. OLAC linguistic data type vocabulary, 2002. <http://www.language-archives.org/REC/type.html>.
Steven Bird and Gary Simons, editors. [*Proceedings of the Workshop on Web-Based Language Documentation and Description*]{}, 2000. <http://www.ldc.upenn.edu/exploration/expl2000/>.
Steven Bird and Gary Simons. The [OLAC]{} metadata set and controlled vocabularies. In [*Proceedings of ACL/EACL Workshop on Sharing Tools and Resources for Research and Education*]{}, 2001. <http://arXiv.org/abs/cs/0105030>.
Steven Bird and Gary Simons, editors. [*Proceedings of the IRCS Workshop on Open Language Archives*]{}, 2002. <http://www.language-archives.org/events/olac02/>.
Steven Bird and Gary Simons. Seven dimensions of portability for language documentation and description. [*Language*]{}, 79:557–582, 2003.
[Dublin Core Metadata Initiative]{}. Dublin Core qualifiers, 2000. <http://dublincore.org/documents/2000/07/11/dcmes-qualifiers/>.
[Dublin Core Metadata Initiative]{}. Dublin Core elements and element refinements – a current list, 2002. <http://dublincore.org/usage/terms/dc/current-elements/>.
Barbara F. Grimes, editor. [*Ethnologue: Languages of the World*]{}. Dallas: Summer Institute of Linguistics, 14th edition, 2000. <http://www.ethnologue.com/>.
Rachel Heery and Manjula Patel. Application profiles: mixing and matching metadata schemas. In [*Ariadne*]{}, volume 25. UK Office for Library and Information networking (UKOLN), University of Bath, 2000. <http://www.ariadne.ac.uk/issue25/app-profiles/>.
[ISO]{}. ISO 639: Codes for the representation of names of languages – part 2: alpha-3 code, 1998. <http://lcweb.loc.gov/standards/iso639-2/langhome.html>.
Heidi Johnson. OLAC role vocabulary, 2002. <http://www.language-archives.org/REC/role.html>.
Heidi Johnson and Helen [Aristar Dry]{}. OLAC discourse type vocabulary, 2002. <http://www.language-archives.org/REC/discourse.html>.
Carl Lagoze and Herbert [Van de Sompel]{}. The [Open Archives Initiative]{}: Building a low-barrier interoperability framework. In [*Proceedings of the First ACM/IEEE-CS Joint Conference on Digital Libraries*]{}, pages 54–62, 2001. <http://www.cs.cornell.edu/lagoze/papers/oai-jcdl.pdf>.
Andy Powell and Pete Johnston. Guidelines for implementing [Dublin Core]{} in [XML]{}, 2003. <http://dublincore.org/documents/dc-xml-guidelines/>.
Gary Simons. Language identification in metadata descriptions of language archive holdings. In Steven Bird and Gary Simons, editors, [*Proceedings of the Workshop on Web-Based Language Documentation and Description*]{}, 2000. <http://www.ldc.upenn.edu/exploration/expl2000/papers/simons/>.
Gary Simons. Specifications for an olac metadata display format and an olac-to-oai\_dc crosswalk, 2003. <http://www.language-archives.org/NOTE/olac_display.html>.
Gary Simons and Steven Bird. process, 2002. <http://www.language-archives.org/OLAC/process.html>.
Gary Simons and Steven Bird. Building an [Open Language Archives Community]{} on the [OAI]{} foundation. , 21:0 210–218, 2003. <http://www.arxiv.org/abs/cs.CL/0302021>.
Gary Simons and Steven Bird. metadata, 2003. <http://www.language-archives.org/OLAC/metadata.html>.
Elaine Svenonius. . The MIT Press, 2000.
Herbert [Van de Sompel]{} and Carl Lagoze. Notes from the interoperability front: A progress report on the [Open Archives Initiative]{}. In [*Proceedings of the European Conference on Digital Libraries*]{}, pages 144–157, 2002. <http://www.openarchives.org/documents/ecdl-oai.pdf>.
Misha Wolf and Charles Wicksteed. Date and time formats, 1997. <http://www.w3.org/TR/NOTE-datetime>.
[^1]: <http://www.language-archives.org/OLAC/1.0/olac.xsd>
[^2]: <http://www.language-archives.org/OLAC/1.0/olac.xml>
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: |
This paper is devoted to the classification of flag-transitive $2$-$(v,k,2)$ designs. We show that apart from two known symmetric $2$-$(16,6,2)$ designs, every flag-transitive subgroup $G$ of the automorphism group of a nontrivial $2$-$(v,k,2)$ design is primitive of affine or almost simple type. Moreover, we classify the $2$-$(v,k,2)$ designs admitting a flag-transitive almost simple group $G$ with socle ${{\rm PSL}}(n,q)$ for some $n\geq 3$. Alongside this analysis we give a construction for a flag-transitive $2$-$(v, k-1, k-2)$ design from a given flag-transitive $2$-$(v,k, 1)$ design whose flag-transitive group induces a 2-transitive action on a line. Taking the design of points and lines of the projective space ${{\rm PG}}(n-1,3)$ as input to this construction yields a $G$-flag-transitive $2$-$(v,3,2)$ design where $G$ has socle ${{\rm PSL}}(n,3)$ and $v=(3^n-1)/2$. Apart from these designs, our ${{\rm PSL}}$-classification yields exactly one other example, namely the complement of the Fano plane.
[**Keywords:**]{} flag-transitive design; projective linear group
[**MSC2020:**]{} 05B05, 05B25, 20B25
author:
- 'Alice Devillers[^1]'
- 'Hongxue Liang[^2]'
- 'Cheryl E. Praeger'
- Binzhou Xia
title: 'On flag-transitive $2$-$(v,k,2)$ designs'
---
Introduction
============
A *$2$-$(v,k,\lambda)$ design* ${\mathcal{D}}$ is a pair $(\P,{\mathcal{B}})$ with a set $\P$ of $v$ *points* and a set ${\mathcal{B}}$ of $b$ *blocks* such that each block is a $k$-subset of $\P$ and any two distinct points are contained in exactly $\lambda$ blocks. We say ${\mathcal{D}}$ is *nontrivial* if $2<k<v$, and *symmetric* if $v=b$. All $2$-$(v,k,\lambda)$ designs in this paper are assumed to be nontrivial. An automorphism of ${\mathcal{D}}$ is a permutation of the point set which preserves the block set. The set of all automorphisms of ${\mathcal{D}}$ with the composition of permutations forms a group, denoted by ${{\rm Aut}}({\mathcal{D}})$. A subgroup $G$ of ${{\rm Aut}}({\mathcal{D}})$ is said to be *point-primitive* if $G$ acts primitively on $\P$, and *point-imprimitive* otherwise. A *flag* of ${\mathcal{D}}$ is a point-block pair $(\alpha,B)$ where $\alpha$ is a point and $B$ is a block incident with $\alpha$. A subgroup $G$ of ${{\rm Aut}}({\mathcal{D}})$ is said to be *flag-transitive* if $G$ acts transitively on the set of flags of ${\mathcal{D}}$.
A $2$-$(v,k,\lambda)$ design with $\lambda=1$ is also called a finite *linear space*. In 1990, Buekenhout, Delandtsheer, Doyen, Kleidman, Liebeck and Saxl [@BDDKLS] classified all flag-transitive linear spaces apart from those with a one-dimensional affine automorphism group. Since then, there have been efforts to classify $2$-$(v,k,2)$ designs $\mathcal{D}$ admitting a flag-transitive group $G$ of automorphisms. Through a series of papers [@Re2005; @Biplane1; @Biplane2; @Biplane3], Regueiro proved that, if $\mathcal{D}$ is symmetric, then either $(v,k)\in\{(7,4),(11,5),(16,6)\}$, or $G\leq {{\rm A\Gamma L}}(1,q)$ for some odd prime power $q$. Recently, Zhou and the second author [@Liang1] proved that, if $\mathcal{D}$ is not symmetric and $G$ is point-primitive, then $G$ is affine or almost simple. In each of these cases $G$ has a unique minimal normal subgroup, its *socle* ${{\rm Soc}}(G)$, which is elementary abelian or a nonabelian simple group, respectively.
Our first objective in this paper is to fill in a missing piece in this story, namely to treat the case where $G$ is flag-transitive and point-imprimitive and $\mathcal{D}$ is a not-necessarily-symmetric $2$-$(v,k,2)$ design. Such flag-transitive, point-imprimitive designs exist: it was shown in 1945 by Hussain [@Huss], and independently in 1946 by Nandi [@Nandi], that there are exactly three $2$-$(16,6,2)$-designs. O’Reilly Regueiro [@Re2005 Examples 1.2] showed that exactly two of these designs are flag-transitive, and each admits a point-imprimitive, flag-transitive subgroup of automorphisms (one with automorphism group $2^4\mathrm{S}_6$ and point stabiliser $(\mathbb{Z}_2\times\mathbb{Z}_8)(\mathrm{S}_4.2)$ and the other with automorphism group $\mathrm{S}_6$ and point stabiliser $\mathrm{S}_4.2$, see also [@Praeger Remark 1.4(1)]). We prove that these are the only point-imprimitive examples, and thus, together with [@Liang1 Theorem 1.1] and [@Re2005 Theorem 2], we obtain the following result.
\[Th1\] Let $\mathcal{D}$ be a $2$-$(v,k,2)$ design with a flag-transitive group $G$ of automorphisms. Then either
1. $\mathcal{D}$ is one of two known symmetric $2$-$(16,6,2)$ designs with $G$ point-imprimitive; or
2. $G$ is point-primitive of affine or almost simple type.
Theorem \[Th1\] reduces the study of flag-transitive $2$-$(v,k,2)$ designs to those whose automorphism group $G$ is point-primitive of affine or almost simple type. Regueiro [@Re2005; @Biplane1; @Biplane2; @Biplane3] has classified all such examples where the design is symmetric (up to those admitting a one-dimensional affine group). In the non-symmetric case, the second author and Zhou have dealt with the cases where the socle ${{\rm Soc}}(G)$ is a sporadic simple group or an alternating group, identifying three possibilities: namely $(v,k)=(176,8)$ with $G=\mathrm{HS}$, the Higman-Sims group in [@Liang1], and $(v,k)=(6,3)$ or $(10,4)$ with ${{\rm Soc}}(G)=A_v$ in [@Liang2]. Our contribution is the case where ${{\rm Soc}}(G)={{\rm PSL}}(n,q)$ for some $n\geq 3$ and $q$ a prime power. In contrast to the cases considered previously, an infinite family of examples occurs, which may be obtained from the following general construction method for flag-transitive designs from linear spaces.
\[cons\] For a $2$-$(v,k,1)$ design $\mathcal{S=(P,L)}$ with $k\geq 3$, let $$\mathcal{B}=\{\ell\setminus\{\alpha\}\,\mid\,\ell\in \mathcal{L},\,\alpha\in\ell\}$$ and $\mathcal{D(S)}=(\P,{\mathcal{B}})$.
We show in Proposition \[exist 1\] that $\mathcal{D(S)}$ is a $2$-$(v,k-1,k-2)$ design, and moreover, that $\mathcal{D(S)}$ is $G$-flag-transitive whenever $G\leq{{\rm Aut}}(\mathcal{S})$ is flag-transitive on $\mathcal{S}$ and induces a 2-transitive action on each line of $\mathcal{S}$. In particular, these conditions hold if $\cal S$ is the design of points and lines of ${{\rm PG}}(n-1,3)$, for some $n\geq 3$, and ${{\rm Soc}}(G)={{\rm PSL}}(n,3)$ (Proposition \[exist 1\]). Apart from these designs, our analysis shows that there is only one other $G$-flag-transitive $2$-$(v,k,2)$ design with ${{\rm Soc}}(G)={{\rm PSL}}(n,q)$, $n\geq3$.
\[Th2\] Let ${\mathcal{D}}$ be a $2$-$(v,k,2)$ design admitting a flag-transitive group $G$ of automorphisms, such that ${{\rm Soc}}(G)={{\rm PSL}}(n,q)$ for some $n\geq3$ and prime power $q$. Then either
(a) ${\mathcal{D}}={\mathcal{D}}(\cal S)$ is as in Construction $\ref{cons}$, where $\cal S$ is the design of points and lines of ${{\rm PG}}(n-1,3)$; or
(b) ${\mathcal{D}}$ is the complement of the Fano plane (that is, blocks are the complements of the lines of ${{\rm PG}}(2,2)$).
The designs in part (a) are non-symmetric (Proposition \[exist 1\]), while the complement of the Fano plane is symmetric, and arises also in Regueiro’s classification [@Biplane2 Theorem 1] (noting that the group ${{\rm PSL}}(3,2)$ is isomorphic to the group ${{\rm PSL}}(2,7)$ in her result).
The proofs of Theorems \[Th1\] and \[Th2\] will be given in Sections \[sec3\] and \[sec4\], respectively.
Preliminaries {#sec2}
=============
We first collect some useful results on flag-transitive designs and groups of Lie type.
\[condition 1\] Let ${\mathcal{D}}$ be a $2$-$(v,k,\lambda)$ design and let $b$ be the number of blocks of ${\mathcal{D}}$. Then the number of blocks containing each point of ${\mathcal{D}}$ is a constant $r$ satisfying the following:
1. $r(k-1)=\lambda(v-1)$;
2. $bk=vr$;
3. $b\geq v$ and $r\geq k$;
4. $r^2>\lambda v$.
In particular, if ${\mathcal{D}}$ is non-symmetric then $b>v$ and $r>k$.
Parts (i) and (ii) follow immediately by simple counting. Part (iii) is Fisher’s Inequality [@Ryser p.99]. By (i) and (iii) we have $$r(r-1)\geq r(k-1)=\lambda(v-1)$$ and so $r^2\geq\lambda v+r-\lambda$. Since ${\mathcal{D}}$ is nontrivial, we deduce from (i) that $r>\lambda$. Hence $r^2>\lambda v$, as stated in part (iv).
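These identities are easy to check mechanically for the concrete parameter sets occurring in this paper. The following Python sketch is our addition, not part of the original argument; the helper name `design_params` is hypothetical. It recomputes $b$ and $r$ from $(v,k,\lambda)$ and asserts conditions (i)–(iv).

```python
# Hypothetical helper (our addition): recompute b and r from (v, k, lambda)
# and assert conditions (i)-(iv) of the lemma.
def design_params(v, k, lam):
    r, rem = divmod(lam * (v - 1), k - 1)    # (i)  r(k-1) = lambda(v-1)
    assert rem == 0
    b, rem = divmod(v * r, k)                # (ii) bk = vr
    assert rem == 0
    assert b >= v and r >= k                 # (iii) Fisher's inequality
    assert r * r > lam * v                   # (iv)
    return b, r

assert design_params(16, 6, 2) == (16, 6)      # the symmetric 2-(16,6,2) designs
assert design_params(7, 4, 2) == (7, 4)        # complement of the Fano plane
assert design_params(176, 8, 2) == (1100, 50)  # the Higman-Sims example
assert design_params(13, 3, 2) == (52, 12)     # construction applied to PG(2,3)
```

Note that the first two parameter sets satisfy $b=v$, confirming that those designs are symmetric, while the last two are non-symmetric.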
For a permutation group $G$ on a set $\P$ and an element $\alpha$ of $\P$, denote by $G_\alpha$ the stabiliser of $\alpha$ in $G$, that is, the subgroup of $G$ fixing $\alpha$. A *subdegree* $s$ of a transitive permutation group $G$ is the length of some orbit of $G_\alpha$. We say that $s$ is *non-trivial* if the orbit is not $\{\alpha\}$, and $s$ is *unique* if $G_\alpha$ has only one orbit of size $s$.
\[condition 2\] Let ${\mathcal{D}}$ be a $2$-$(v,k,\lambda)$ design, let $G$ be a flag-transitive subgroup of ${{\rm Aut}}({\mathcal{D}})$, and let $\alpha$ be a point of ${\mathcal{D}}$. Then the following statements hold:
1. $|G_\alpha|^3>\lambda |G|$;
2. $r$ divides $\gcd(\lambda(v-1),|G_{\alpha}|)$;
3. $r$ divides $\lambda\gcd(v-1,|G_{\alpha}|)$;
4. $r$ divides $s\gcd(r,\lambda)$ for every nontrivial subdegree $s$ of $G$.
By Lemma \[condition 1\] we have $r^2>\lambda v$. Moreover, the flag-transitivity of $G$ implies that $v=|G|/|G_\alpha|$ and $r$ divides $|G_\alpha|$, and in particular, $|G_\alpha|\geq r$. It follows that $$|G_\alpha|^2\geq r^2>\lambda v=\frac{\lambda|G|}{|G_\alpha|}$$ and so $|G_\alpha|^3>\lambda|G|$. This proves statement (i).
Since $r$ divides $r(k-1)=\lambda(v-1)$ and $r$ divides $|G_\alpha|$, we conclude that $r$ divides $$\label{1}
\gcd(\lambda(v-1),|G_{\alpha}|),$$ as statement (ii) asserts. Note that the quantity in \[1\] divides $$\gcd(\lambda(v-1),\lambda|G_{\alpha}|)=\lambda\gcd(v-1,|G_{\alpha}|).$$ We then conclude that $r$ divides $\lambda\gcd(v-1,|G_\alpha|)$, proving statement (iii).
Finally, statement (iv) is proved in [@Dav1 p.91] and [@Dav2].
For a positive integer $n$ and prime number $p$, let $n_p$ denote the *$p$-part of $n$* and let $n_{p'}$ denote the *$p'$-part of $n$*, that is, $n_p=p^t$ such that $p^t\mid n$ but $p^{t+1}\nmid n$ and $n_{p'}=n/n_p$. We will denote by $d$ the greatest common divisor of $n$ and $q-1$.
\[bound\] Suppose that ${\mathcal{D}}$ is a $2$-$(v,k,2)$ design admitting a flag-transitive point-primitive group $G$ of automorphisms with socle $X={{\rm PSL}}(n,q)$, where $n\geq 3$ and $q=p^f$ for some prime $p$ and positive integer $f$, and $d=\gcd(n,q-1)$. Then for any point $\alpha$ of ${\mathcal{D}}$ the following statements hold:
1. $|X|<2(df)^2|X_{\alpha}|^3$;
2. $r$ divides $2df|X_{\alpha}|$;
3. if $p\mid v$, then $r_p$ divides $2$, $r$ divides $2df|X_{\alpha}|_{p'}$, and $|X|<2(df)^2|X_{\alpha}|^2_{p'}|X_{\alpha}|$.
Since $G$ is point-primitive and $X$ is normal in $G$, the group $X$ is transitive on the point set. Hence $G=XG_\alpha$ and so $$\frac{|G_\alpha|}{|X_\alpha|}=\frac{|G_\alpha|}{|X\cap G_\alpha|}=\frac{|XG_\alpha|}{|X|}=\frac{|G|}{|X|}.$$ Moreover, as ${{\rm Soc}}(G)=X={{\rm PSL}}(n,q)$, we have $G\leq{{\rm Aut}}(X)$. Hence $|G_\alpha|/|X_\alpha|=|G|/|X|$ divides $|\operatorname{Out}(X)|=2df$. Consequently, $|G_\alpha|/|X_\alpha|\leq2df$. Since Lemma \[condition 2\](i) yields $$|G_\alpha|^3>2|G|=\frac{2|X||G_\alpha|}{|X_\alpha|},$$ it follows that $$2|X|<|X_\alpha||G_\alpha|^2=\left(\frac{|G_\alpha|}{|X_\alpha|}\right)^2|X_\alpha|^3\leq (2df)^2|X_\alpha|^3.$$ This leads to statement (i). Since $|G_\alpha|/|X_\alpha|$ divides $|\operatorname{Out}(X)|=2df$ and the flag-transitivity of $G$ implies that $r$ divides $|G_\alpha|$, we derive that $r$ divides $2df|X_{\alpha}|$, as in statement (ii).
Now suppose that $p$ divides $v$. Then the equality $2(v-1)=r(k-1)$ implies that $r_p$ divides $2$. As a consequence of this and part (ii) we see that $r$ divides $2df|X_{\alpha}|_{p'}$. Since $r^2>2v$ by Lemma \[condition 1\](iv), and $v=|X|/|X_{\alpha}|$ by the point-transitivity of $X$, it then follows that $$(2df|X_{\alpha}|_{p'})^2>2v=\frac{2|X|}{|X_\alpha|}.$$ This implies that $2(df)^2|X_{\alpha}|^2_{p'}|X_{\alpha}|>|X|$, completing the proof of part (iii).
\[L:subgroupdiv\] Suppose that ${\mathcal{D}}$ is a $2$-$(v,k,2)$ design admitting a flag-transitive point-primitive group $G$ of automorphisms with socle $X={{\rm PSL}}(n,q)$, where $n\geq 3$ and $q=p^f$ for some prime $p$ and positive integer $f$, and $d=\gcd(n,q-1)$. Let $\alpha$ and $\beta$ be distinct points of ${\mathcal{D}}$, and suppose $H\leq G_{\alpha,\beta}$. Then $r$ divides $4df|X_\alpha|/|H|$.
By Lemma \[condition 2\](iv), $r$ divides $2|\beta^{G_\alpha}|=2|G_\alpha|/|G_{\alpha,\beta}|$. Since $|{G}_\alpha|$ divides $2df|{X}_\alpha|$ (see the proof of Lemma \[bound\]) and $|H|$ divides $|G_{\alpha,\beta}|$, it follows that $r$ divides $4df|X_\alpha|/|H|$.

We will need the following results on finite groups of Lie type.
\[parabolic\] Suppose that ${\mathcal{D}}$ is a $2$-$(v,k,2)$ design admitting a flag-transitive point-primitive group $G$ of automorphisms with socle $X={{\rm PSL}}(n,q)$, where $n\geq 3$ and $q=p^f$ for some prime $p$ and positive integer $f$, and $r$ is the number of blocks incident with a given point. Let $\alpha$ be a point of ${\mathcal{D}}$. Suppose that $X_\alpha$ has a normal subgroup $Y$, which is a finite simple group of Lie type in characteristic $p$, and $Y$ is not isomorphic to $\mathrm{A}_{5}$ or $\mathrm{A}_{6}$ if $p=2$. If $r_p\mid 2_p$, then $r$ is divisible by the index of a proper parabolic subgroup of $Y$.
Since $G$ is flag-transitive, we have $r=|G_\alpha|/|G_{\alpha,B}|$, where $B$ is a block through $\alpha$. Since $X_\alpha\unlhd G_\alpha$, $|X_\alpha|/|X_{\alpha,B}|$ divides $r$. Now since $Y\unlhd X_\alpha$, we also have that $|Y|/|Y_{B}|$ divides $r$. Let $H:=Y_{B}$. Since $r_{p}\mid 2_{p}$, we have that $|Y{:}H|_{p}\leq 2_{p}$. We claim that $H$ is contained in a proper parabolic subgroup of $Y$. First assume $|Y{:}H|_{p}=1$. Then by [@Saxl Lemma 2.3], $H$ is contained in a proper parabolic subgroup of $Y$. Now suppose $|Y{:}H|_{p}=2$. Then $p=2$ and $4\nmid |Y{:}H|$, and so by [@Biplane2 Lemma 7], $H$ is contained in a proper parabolic subgroup of $Y$. So the claim is proved in both cases. It follows that $r$ is divisible by the index of a parabolic subgroup of $Y$.
[([@Alavi Lemma 4.2, Corollary 4.3])]{}\[eq2\] Table $\ref{tab1}$ gives upper bounds and lower bounds for the orders of certain $n$-dimensional classical groups defined over a field of order $q$, where $n$ satisfies the conditions in the last column.
Group $G$ Lower bound on $|G|$ Upper bound on $|G|$ Conditions on $n$
-------------------- ------------------------------------------- ------------------------------------------------- -------------------
${{\rm GL}}(n,q)$ $>(1-q^{-1}-q^{-2})q^{n^2}$ $\leq(1-q^{-1})(1-q^{-2})q^{n^2}$ $n\geq 2$
${{\rm PSL}}(n,q)$ $>q^{n^2-2}$ $\leq(1-q^{-2})q^{n^2-1}$ $n\geq 2$
${\rm GU}(n,q)$ $\geq(1+q^{-1})(1-q^{-2})q^{n^2}$ $\leq(1+q^{-1})(1-q^{-2})(1+q^{-3})q^{n^2}$ $n\geq 2$
${\rm PSU}(n,q)$ $>(1-q^{-1})q^{n^2-2}$ $\leq(1-q^{-2})(1+q^{-3})q^{n^2-1}$ $n\geq 3$
${\rm Sp}(n,q)$ $>(1-q^{-2}-q^{-4})q^{\frac{1}{2}n(n+1)}$ $\leq(1-q^{-2})(1-q^{-4})q^{\frac{1}{2}n(n+1)}$ $n\geq 4$
: Bounds for the order of some classical groups[]{data-label="tab1"}
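As a sanity check (our addition, not part of the cited lemma), the ${{\rm PSL}}$ row of Table \[tab1\] can be compared with exact orders computed from the standard formula $|{{\rm PSL}}(n,q)|=q^{n(n-1)/2}\prod_{i=2}^{n}(q^i-1)/\gcd(n,q-1)$:

```python
from math import gcd, prod

def psl_order(n, q):
    """|PSL(n,q)| = q^{n(n-1)/2} * prod_{i=2..n} (q^i - 1) / gcd(n, q-1)."""
    return q**(n*(n-1)//2) * prod(q**i - 1 for i in range(2, n + 1)) // gcd(n, q - 1)

# Spot-check the PSL row of Table 1:
#   q^{n^2-2} < |PSL(n,q)| <= (1 - q^{-2}) q^{n^2-1} = (q^2 - 1) q^{n^2-3}.
for n, q in [(3, 2), (3, 3), (3, 4), (4, 2), (4, 3)]:
    order = psl_order(n, q)
    assert q**(n*n - 2) < order <= (q*q - 1) * q**(n*n - 3)

assert psl_order(3, 2) == 168      # PSL(3,2), automorphisms of the Fano plane
assert psl_order(4, 2) == 20160    # PSL(4,2), isomorphic to A_8
```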
We finish this section with an arithmetic result.
[([@Low Lemma 1.13.5])]{}\[gcd\] Let $p$ be a prime, let $n, e$ and $f$ be positive integers such that $n>1$ and $e\mid f$, and let $q_{0}=p^e$ and $q=p^f$. Then
1. $\displaystyle{
\frac{q-1}{{{\rm lcm}}(q_0-1,(q-1)/\gcd(n,q-1))}=\gcd\left(n,\frac{q-1}{q_0-1}\right);
}$
2. $\displaystyle{
\frac{q+1}{{{\rm lcm}}(q_0+1,(q+1)/\gcd(n,q+1))}=\gcd\left(n,\frac{q+1}{q_0+1}\right);
}$
3. If $f$ is even, then $q^{1/2}=p^{f/2}$ and $\displaystyle{
\frac{q-1}{{{\rm lcm}}(q^{1/2}+1,(q-1)/\gcd(n,q-1))}=\gcd\left(n,q^{1/2}-1\right).
}$
Proof of Theorem \[Th1\] {#sec3}
========================
Let $\cal D =(\P, \cal B)$ be a $2$-$(v,k,2)$ design admitting a flag-transitive group $G$ of automorphisms. If $G$ is point-primitive, then by [@Liang1] and [@Re2005], $G$ is of affine or almost simple type. Thus we may assume that $G$ leaves invariant a non-trivial partition $\mathcal{C}=\{\Delta_1,\Delta_2,\dots,\Delta_y\}$ of $\P$, where $$\label{Eq5}
v=xy$$ with $1<y<v$ and $|\Delta_i|=x$ for each $i$. If $(v,k)=(16,6)$, then Lemma \[condition 1\] implies that $\cal D$ is symmetric and hence, in the light of the discussion before the statement of Theorem \[Th1\], Theorem \[Th1\](i) holds in this case. Hence we may assume further that $(v,k)\ne (16,6)$. Our objective now is to derive a contradiction to these assumptions. Our proof uses the facts, which can easily be verified by <span style="font-variant:small-caps;">Magma</span> [@magma], that for each 2-transitive permutation group of degree $2p=10$ or $22$ there is a unique class of subgroups of index $2p$, and each such group is almost simple with a 2-transitive unique minimal normal subgroup (its socle). In fact the socle is one of ${{\rm PSL}}(2,9)$ or $\mathrm{A}_{10}$ (for degree 10), or $M_{22}$ or $\mathrm{A}_{22}$ (for degree 22).
First we introduce a new parameter $\ell$: let $\alpha\in\P$ and $\Delta\in \mathcal{C}$ such that $\alpha\in\Delta$; choose $B\in \mathcal{B}$ containing $\alpha$, and let $\ell=|B\cap\Delta|$. It follows from [@Praeger Lemma 2.1] that, for each $B'\in\mathcal{B}$ and $\Delta'\in\mathcal{C}$ such that $B'\cap\Delta'\neq\emptyset$, the intersection size $|B'\cap\Delta'|=\ell$, so that $B'$ meets each of exactly $k/\ell$ parts of $\cal C$ in $\ell$ points and is disjoint from the other parts. Moreover, $$\label{Eq6}
\ell\mid k\quad \mbox{and}\ 1<\ell < k.$$ (Note that the proof of [@Praeger Lemma 2.1] uses flag-transitivity of $\cal D$, but is valid for all $2$-designs, not only symmetric ones.)
*Claim 1:* $(v,b,r,k,\ell)=(x^2, \frac{2x^2(x-1)}{x+2}, 2x-2, x+2, 2)$, and $x=2p$ with $p\in\{5, 11\}$.
*Proof of Claim:* Counting the point-block pairs $(\alpha',B')$ with $\alpha'\in\Delta\setminus\{\alpha\}$ and $B'$ containing $\alpha$ and $\alpha'$, we obtain $$\label{Eq7}
2(x-1)=r(\ell-1).$$ It follows from \[Eq5\] and Lemma \[condition 1\](i) that $$r(k-1)=2(xy-1)=2y(x-1)+2(y-1),$$ which together with \[Eq7\] yields $$\label{Eq8}
r(k-1)=yr(\ell-1)+2(y-1).$$ Let $z=k-1-y(\ell-1)$. Then $z$ is an integer and, by \[Eq8\], $rz=2(y-1)>0$, so $z$ is a positive integer and $$\label{Eq9}
y=\frac{rz+2}{2}.$$ This in conjunction with \[Eq8\] leads to $$r(k-1)+2=y(r(\ell-1)+2)=\frac{(rz+2)(r(\ell-1)+2)}{2}.$$ Hence $$\label{Eq10}
2(k-\ell-z)=rz(\ell-1).$$ Since $k\leq r$ (Lemma \[condition 1\](iii)), we have $$kz(\ell-1)\leq rz(\ell-1)=2(k-\ell-z)<2k,$$ and hence $z=1$ and $\ell=2$. Then \[Eq7\] becomes $r=2x-2$, and so \[Eq9\] gives $y=x$ (and hence $v=x^2$), while the definition of $z$ gives $k=x+2$. It then follows from $r\geq k$ that $x\geq 4$, and from \[Eq6\] that $k$, and hence also $x$, is even. Finally, by Lemma \[condition 1\](ii), $$b=\frac{vr}{k}=\frac{x^2(2x-2)}{x+2} = 2x^2-6x+12 -\frac{24}{x+2},$$ and hence $(x+2)\mid24$. Therefore $x=4$, $6$, $10$ or $22$, but since we are assuming that $(v,k)\ne (16,6)$, the parameter $x\ne4$. If $x=6$, then $(v,b,r,k)=(36,45,10,8)$, but one can see from [@Handbook II.1.35] that there is no $2$-$(36,8,2)$ design. Thus $x=10$ or $22$, and Claim 1 is proved.
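The final divisibility step of Claim 1 can be verified mechanically. The sketch below is our addition, not part of the proof: $b=2x^2(x-1)/(x+2)$ is an integer precisely when $(x+2)\mid 24$, which for even $x\geq 4$ leaves $x\in\{4,6,10,22\}$.

```python
# b = x^2 (2x-2)/(x+2) is an integer iff (x+2) | 24; enumerate even x >= 4.
candidates = [x for x in range(4, 30, 2) if 24 % (x + 2) == 0]
assert candidates == [4, 6, 10, 22]

# (v, b, r, k) = (x^2, 2x^2(x-1)/(x+2), 2x-2, x+2) for each admissible x.
params = {x: (x * x, 2 * x * x * (x - 1) // (x + 2), 2 * x - 2, x + 2)
          for x in candidates}
assert params[4] == (16, 16, 6, 6)       # the excluded symmetric case
assert params[6] == (36, 45, 10, 8)      # ruled out: no 2-(36,8,2) design
assert params[10] == (100, 150, 18, 12)
assert params[22] == (484, 847, 42, 24)
```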
*Claim 2:* For $\Delta\in\cal C$, the induced group $G_\Delta^\Delta$ is 2-transitive. Moreover the kernel $K:=G_{(\cal C)}\ne 1$, $\cal C$ is the set of $K$-orbits in $\P$, and $K^\Delta$ and its socle ${{\rm Soc}}(K)^\Delta$ are $2$-transitive with 2-transitive socle ${{\rm PSL}}(2,9)$ or $\mathrm{A}_{10}$ for degree 10, and $M_{22}$ or $\mathrm{A}_{22}$ for degree 22.

*Proof of Claim:* Since each element of $G$ fixing $\alpha$ stabilises $\Delta$, we have the inclusion ${G_\alpha}\leq G_{\Delta}$. Let $\beta, \gamma$ be arbitrary points in $\Delta\setminus\{\alpha\}$, and consider $B_1\in\mathcal{B}$ containing $\alpha$ and $\beta$, and $B_2\in \mathcal{B}$ containing $\alpha$ and $\gamma$. Since $G$ is flag-transitive, there exists $h\in G_\alpha$ such that $B_1^h=B_2$, and in particular, $\beta^h\in B_2$. As $\ell=2$ (by Claim 1), each block of $\mathcal{D}$ through $\alpha$ contains exactly one point in $\Delta\setminus\{\alpha\}$. Since $\beta^h\in(\Delta\setminus\{\alpha\})^h=\Delta\setminus\{\alpha\}$, it then follows that $\beta^h=\gamma$. This shows that $G_\alpha$ is transitive on $\Delta\setminus\{\alpha\}$, so that $G^{\Delta}_{\Delta}$ is 2-transitive, and in particular primitive.
By Claim 1, each non-trivial block of imprimitivity for $G$ in $\P$ has size $x=\sqrt{v}=2p$ (with $p=5$ or $11$), and hence the induced permutation group $G^{\mathcal{C}}$ on $\mathcal{C}$ is primitive. Suppose that $K=1$, so $G^{\mathcal{C}}\cong G$. Since $G$ is point-transitive and $v=4p^2$, it follows that $|G|=|G^{\mathcal{C}}|$ is divisible by $p^2$, and hence $G^{\mathcal{C}}_{\Delta}\cong G_\Delta$ has order divisible by $p$ (since $|G:G_\Delta|=2p$). Thus $G^{\mathcal{C}}_{\Delta}$ contains an element of order $p$ which acts on $\cal C$ as a $p$-cycle fixing $p$ of the parts. Then by a result of Jordan [@Wie1964 Theorem 13.9] we have $G^{\mathcal{C}}=\mathrm{A}_{2p}$ or $\mathrm{S}_{2p}$ and thus $G_\Delta\cong G^{\mathcal{C}}_{\Delta}=\mathrm{A}_{2p-1}$ or $\mathrm{S}_{2p-1}$. The kernel of the action of $G_\Delta$ on $\Delta$ is normal in $G_\Delta$ and so can only be $1$, $\mathrm{A}_{2p-1}$ or $\mathrm{S}_{2p-1}$. Since $G_\Delta^\Delta$ is transitive of degree $2p>2$, this kernel must be trivial. Hence $G_\Delta\cong G_\Delta^\Delta$ is primitive of degree $2p$ and neither $\mathrm{A}_{2p-1}$ nor $\mathrm{S}_{2p-1}$ has such an action, for $p\in\{5,11\}$. This contradiction implies that $K\ne 1$.
Since $K\ne 1$ and $K$ is normal in $G$, its orbits are nontrivial blocks of imprimitivity for $G$ in $\P$, and by Claim 1, they must have size $x=2p$. Hence the set of $K$-orbits in $\P$ is the partition $\cal C$. Since $1\ne {{\rm Soc}}(K)\unlhd G$ it follows that ${{\rm Soc}}(K)^\Delta\ne1$ and hence ${{\rm Soc}}(K)^\Delta$ contains the socle of $G^{\Delta}_{\Delta}$, which is $2$-transitive on $\Delta$ (see above). Therefore ${{\rm Soc}}(K)^\Delta$ is $2$-transitive, and so also $K^\Delta$ is $2$-transitive. By Burnside’s Theorem (see [@PS Theorem 3.21]), since $|\Delta|=2p$ is not a prime power, $G^{\Delta}_{\Delta}$, $K^\Delta$ and ${{\rm Soc}}(K)^\Delta$ are almost simple with 2-transitive nonabelian simple socle. As mentioned above, these 2-transitive groups must have socle ${{\rm PSL}}(2,9)$ or $\mathrm{A}_{10}$ for degree 10, and $M_{22}$ or $\mathrm{A}_{22}$ for degree 22, and that socle is also 2-transitive on $\Delta$.
*Claim 3:* The group $K$ is faithful on $\Delta$, so $K$ is almost simple with nonabelian simple socle.
*Proof of Claim:* Let $\Delta\in\cal C$ and suppose that $A=K_{(\Delta)}\ne1$. Let $F$ denote the set of fixed points of $A$, so $\Delta\subseteq F$. If $\beta\in F$ and $\beta\in\Delta'\in\cal C$, then since $K$ is transitive on $\Delta'$ (Claim 2) and $A\unlhd K$, it follows that $A$ fixes $\Delta'$ pointwise. Thus $A\leq K_{(\Delta')}$, and since $K_{(\Delta)}, K_{(\Delta')}$ are conjugate in $G$ we have $A= K_{(\Delta')}$. Therefore $F$ is a union of parts of $\cal C$.
If $g\in G$, then $A^g$ has fixed point set $F^g$ and $F^g$ is a union of some parts of $\cal C$. Thus if $F\cap F^g$ contains a point $\beta$ and $\beta\in\Delta'\in\cal C$, then by the previous paragraph $A= K_{(\Delta')} = A^g$ and so $F=F^g$. It follows that $F$ is a block of imprimitivity for $G$ in $\P$, and $F$ is non-trivial since $A\ne 1$. Thus ${\cal C'}:=\{\ F^g \mid g\in G \}$ is a non-trivial $G$-invariant partition of $\P$. By Claim 1, $|F|=x$, and since $F$ contains $\Delta$ we conclude that $F=\Delta$. This means that $A^{\Delta'}\ne 1$ for each $\Delta'\in{\cal C}\setminus\{\Delta\}$, and since $K^{\Delta'}$ is $2$-transitive (Claim 2), it follows that $A^{\Delta'}$ is transitive. Now choose $\alpha, \beta\in F=\Delta$ and let $B_1, B_2\in\cal B$ be the two blocks containing $\{\alpha,\beta\}$. Then $A\leq G_{\alpha\beta}$, and $G_{\alpha\beta}$ fixes $B_1\cup B_2$ setwise. By Claim 1, there exists $\Delta'\in{\cal C}\setminus\{\Delta\}$ such that $|B_1\cap \Delta'|=\ell=2$, and $|B_2\cap \Delta'|=0$ or $2$. Thus $(B_1\cup B_2)\cap \Delta'$ has size between 2 and 4 and is fixed setwise by $A$. This is a contradiction since $A$ is transitive on $\Delta'$ and $|\Delta'|=2p\geq 10$. Therefore $A=1$ so $K$ is faithful on $\Delta$. By Claim 2, $K\cong K^\Delta$ is almost simple with nonabelian simple socle.
Since $K$ is 2-transitive of degree $c=2p$, as mentioned above, $K$ has only one conjugacy class of subgroups of index $2p$, and so $K$ has a unique $2$-transitive representation of degree $c$, up to permutational equivalence. It follows that, for $\alpha\in\Delta$, the stabiliser $K_\alpha$ fixes exactly one point in each part of $\cal C$. Let $\beta$ be another point fixed by $K_\alpha$. Let $B_1, B_2\in\cal B$ be the two blocks containing $\{\alpha,\beta\}$. By Claim 1, $|B_i\cap \Delta|=2$ for each $i$, and hence $K_{\alpha\beta}$ fixes setwise $(B_1\cup B_2)\cap\Delta$, a set of size 2 or 3. On the other hand $K_{\alpha\beta}=K_\alpha$ since $\beta$ is a fixed point of $K_\alpha$, and by Claim 2, $K$ is 2-transitive on $\Delta$, so the $K_\alpha$-orbits in $\Delta$ have sizes $1$ and $c-1$. This final contradiction completes the proof of Theorem \[Th1\].
Proof of Theorem \[Th2\] {#sec4}
========================
Our first result in this section proves that the designs arising from Construction \[cons\] are all $2$-designs, and inherit certain symmetry properties from those of the input design. In particular we show that the designs coming from projective geometries over a field of three elements give examples for Theorem \[Th2\].
\[exist 1\] Let $\mathcal{S=(P,L)}$ be a $2$-$(v,k,1)$ design, $\ell\in\mathcal{L}$ and $G\leq {\rm Aut}(\mathcal{S})$.
1. Then the design ${\mathcal{D}}(\mathcal{S})$ given in Construction \[cons\] is a non-symmetric $2$-$(v,k-1,k-2)$ design and $G$ is a subgroup of ${{\rm Aut}}({\mathcal{D}}(\mathcal{S}))$;
2. Moreover, if $G$ is flag-transitive on $\mathcal{S}$ and $G_{\ell}$ is $2$-transitive on $\ell$, then $G$ is flag-transitive and point-primitive on ${\mathcal{D}}(\mathcal{S})$;
3. In particular, if $\cal S$ is the design of points and lines of the projective space ${{\rm PG}}(n-1,3)$ ($n\geq3$), and $G\geq{{\rm PSL}}(n,3)$, then $\mathcal{D}(\mathcal{S})$ is a non-symmetric $G$-flag-transitive, $G$-point-primitive $2$-$(v,3,2)$ design.
Let $\mathcal{D}=\mathcal{D}(\mathcal{S})$ with block set $\mathcal{B}=\{\ell\setminus\{\alpha\}\,\mid\,\ell\in \mathcal{L},\,\alpha\in\ell\}$, so $\mathcal{D=(P,B)}$. Let $\alpha,\beta$ be distinct points of $\mathcal{P}$. Then there exists a unique line $\ell\in \mathcal{L}$, such that $\alpha,\beta\in \ell$. As $|\ell|=k$, exactly $k-2$ blocks of $\mathcal{B}$ contain $\alpha$ and $\beta$. Thus, $\mathcal{D}$ is a 2-$(v,k-1,k-2)$ design, which is nontrivial provided that $3<k$. By Lemma \[condition 1\] applied to $\mathcal{S}$, $|\mathcal{L}|\geq v$, and since $|\mathcal{B}|=k|\mathcal{L}|>|\mathcal{L}|$ it follows that $\mathcal{D}$ is not symmetric. Moreover, for all $B=\ell\setminus\{\alpha\}\in \mathcal{B}$ and for all $g\in G\leq {\rm Aut}(\mathcal{S})$, we have $\ell^{g}\in \mathcal{L}$ and $\alpha^{g}\in \ell^{g}$, and so $B^{g}=(\ell \backslash \{\alpha\})^{g}=\ell^{g}\backslash \{\alpha^{g}\}\in \mathcal{B}$. Thus, $G\leq {\rm Aut}({\mathcal{D}})$ and part (i) is proved.
Now assume that $G$ is flag-transitive on $\mathcal{S}$ and $G_{\ell}$ is $2$-transitive on $\ell$. Let $\alpha\in \ell$ and $B=\ell\setminus\{\alpha\}$. From the flag-transitivity of $G$, we know that $G$ acts primitively on the point set $\mathcal{P}$ by [@HM Propositions 1–3], and $G$ acts transitively on the block set $\mathcal{B}$ of ${\mathcal{D}}$. Furthermore, $G_{\ell,\alpha}\leq G_{B}$. Since $G_{\ell}$ is 2-transitive on $\ell$, $G_{\ell,\alpha}$ is transitive on $B$. Hence $G_{B}$ is transitive on $B$, and so $G$ is flag-transitive on ${\mathcal{D}}$ and part (ii) is proved.
In the special case where $\cal S$ is the design of points and lines of the projective space ${{\rm PG}}(n-1,3)$ ($n\geq3$), and $H={{\rm PSL}}(n,3)$, the group $H$ is flag-transitive on $\cal S$ and $H_\ell$ induces the $2$-transitive group ${{\rm PGL}}(2,3)\cong \mathrm{S}_4$ on $\ell$. Thus part (iii) follows from parts (i) and (ii) for any group $G$ such that $H\leq G\leq {\rm Aut}(H)$.
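The case $n=3$ of part (iii) can also be checked by direct computation. The following Python sketch is our addition, not part of the proof: it builds the 13 points and 13 lines of ${{\rm PG}}(2,3)$ over $\mathbb{F}_3$, applies Construction \[cons\], and verifies that the result is a $2$-$(13,3,2)$ design.

```python
from itertools import product, combinations

q = 3
# Points of PG(2,3): nonzero vectors over F_3, normalised so that the
# first nonzero coordinate is 1 (one representative per projective point).
points = [v for v in product(range(q), repeat=3)
          if any(v) and v[next(i for i, x in enumerate(v) if x)] == 1]
assert len(points) == 13          # (3^3 - 1)/2

# Lines: for each coefficient vector c, the points p with c . p = 0 (mod 3).
lines = [frozenset(p for p in points
                   if sum(a * b for a, b in zip(c, p)) % q == 0)
         for c in points]
assert all(len(l) == q + 1 for l in lines)

# Construction: the blocks are the lines with one point removed.
blocks = [l - {a} for l in lines for a in l]
assert len(blocks) == 52 and all(len(B) == 3 for B in blocks)

# 2-design check: every pair of points lies in exactly lambda = k - 2 = 2 blocks.
for pair in combinations(points, 2):
    assert sum(set(pair) <= B for B in blocks) == 2
```

Here $b=52$ and $r=12$ agree with Lemma \[condition 1\]: $r=2(v-1)/(k-1)=12$ and $b=vr/k=52$, so the design is non-symmetric.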
Broad proof strategy and the natural projective action
------------------------------------------------------
In the remainder of the paper we assume the following hypothesis:
\[H\] Let ${\mathcal{D}}=(\P,\cal B)$ be a $2$-$(v,k,2)$ design admitting a flag-transitive point-primitive group $G$ of automorphisms with socle $X={{\rm PSL}}(n,q)$ for some $n\geq 3$, where $q=p^f$ with prime $p$ and positive integer $f$.
Observe that $G\cap {\rm P\Gamma L}(n,q)$ has a natural projective action on a vector space $V$ of dimension $n$ over the field $\mathbb{F}_q$. Consider a point $\alpha$ of ${\mathcal{D}}$ and a basis $v_{1},v_{2},\ldots,v_{n}$ of the vector space $V$. Since $G$ is primitive on $\P$, the stabiliser $G_\alpha$ is maximal in $G$, and so by Aschbacher’s Theorem [@Asch] (see also [@PB]), $G_\alpha$ lies in one of the geometric subgroup families $\mathcal{C}_i$ ($1\leq i \leq 8$), or in the family $\mathcal{C}_9$ of almost simple subgroups not contained in any of these families. When investigating the subgroups in the Aschbacher families, we make frequent use of the information on their structures in [@PB Chap. 4]. We will sometimes use the symbol $\tilde{H}$ to indicate that we are giving the structure of the pre-image of $H$ in the corresponding (semi)linear group.
In the next proposition we treat the case where $\P$ is the point set of the projective space ${{\rm PG}}(n-1,q)$ associated with $V$.
\[4\] Assume Hypothesis $\ref{H}$, and that $\mathcal{P}$ is the point set of the projective space ${{\rm PG}}(n-1,q)$, with $G$ acting naturally on $\mathcal{P}$. Then either
(a) $q=3$, $k=3$, $v=(3^{n}-1)/2$ and ${\mathcal{D}}={\mathcal{D}}(\cal S)$ from Construction $\ref{cons}$, where $\cal S$ is the design of points and lines of ${{\rm PG}}(n-1,3)$; or
(b) $q=2$, $k=4$, $v=7$ and ${\mathcal{D}}$ is the complement of the Fano plane (that is, blocks are the complements of the lines in ${{\rm PG}}(2,2)$).
Let $\alpha,\beta$ be distinct points. Since $\lambda=2$, there are exactly two blocks $B_{1}$ and $B_{2}$ containing $\alpha$ and $\beta$. Moreover, $G_{\alpha,\beta}$ fixes $B_{1}\cup B_{2}$ setwise, so $B_{1}\cup B_{2}$ is a union of $G_{\alpha,\beta}$-orbits. Let $\ell$ be the unique projective line containing $\alpha$ and $\beta$. Then $G_{\alpha,\beta}$ is transitive on the $v-(q+1)$ points $\mathcal{P}\backslash\ell$ and on $\ell\backslash\{\alpha,\beta\}$. Hence, either
1. $(B_{1}\cup B_{2})\backslash\{\alpha,\beta\}\supseteq \mathcal{P}\backslash\ell$, or
2. $B_{1}\cup B_{2}=\ell$.
Suppose first that $(B_{1}\cup B_{2})\backslash\{\alpha,\beta\}\supseteq \mathcal{P}\backslash\ell$. Then $2k-2\geq |B_{1}\cup B_{2}|\geq 2+v-(q+1)$, that is $k-1\geq (v-q+1)/2$. Now $r(k-1)=2(v-1)$ (Lemma \[condition 1\]) and $v=(q^{n}-1)/(q-1)$, so that $$\label{Eq11}
r=\frac{2(v-1)}{k-1}\leq\frac{4(v-1)}{v-q+1}=4\cdot \left(1+\frac{q-2}{q^{n-1}+\cdots+q^{2}+2}\right)<8.$$ Since $r\geq k$, we have that $k\leq 7$. Now combining this with $k-1\geq (v-q+1)/2$, we have that $12\geq 2(k-1)\geq q^{n-1}+\cdots+q^{2}+2$. If $n\geq 4$, then $12\geq q^{n-1}+\cdots+q^{2}+2\geq q^3+q^{2}+2\geq 2^3+2^{2}+2=14$, a contradiction. So $n=3$ and $12\geq q^{2}+2$, which implies that $q\leq 3$. If $q=3$, then $v=13$, and $6\geq k-1\geq (v-q+1)/2$ implies that $k=7$. Now $r(k-1)=2(v-1)$ implies that $r=4$, contradicting $r\geq k$. Hence $(n,q)=(3,2)$. Then $v=7$, $k-1\geq 3$, and $r=2(v-1)/(k-1)\leq4$, and so $r\leq 4\leq k$. Since $r\geq k$, we get that $r=k=4$, and thus $b=(vr)/k=7$. Thus, $\mathcal{D}$ is a symmetric $2$-$(7,4,2)$ design with $X={{\rm PSL}}(3,2)$. Since $k=4$, and $G_B$ is transitive on the block $B$, it follows that $B$ does not contain a line of ${{\rm PG}}(2,2)$. The only possibility is that $B=\P\backslash \ell'$, where $\ell'$ is a line of ${{\rm PG}}(2,2)$, that is, the blocks are complements of the lines of ${{\rm PG}}(2,2)$. Hence ${\mathcal{D}}$ is the complement of the Fano projective plane and (b) holds.
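The final identification can be confirmed by direct enumeration. The following Python sketch (a mechanical verification aid only, not part of the proof) builds the complements of the seven lines of ${{\rm PG}}(2,2)$ and checks that they form a $2$-$(7,4,2)$ design.

```python
from itertools import combinations

# Points of PG(2,2): the nonzero vectors of F_2^3, encoded as 1..7.
points = list(range(1, 8))

# Lines of PG(2,2): triples {u, v, u XOR v}, since u XOR v is the third
# point on the projective line through u and v.
lines = {frozenset({u, v, u ^ v}) for u, v in combinations(points, 2)}
assert len(lines) == 7

# Blocks: complements of the lines, each of size k = 4.
blocks = [set(points) - line for line in lines]
assert all(len(B) == 4 for B in blocks)

# Every pair of points lies in exactly lambda = 2 blocks.
pair_counts = [sum(x in B and y in B for B in blocks)
               for x, y in combinations(points, 2)]
assert pair_counts == [2] * 21
```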
Now assume that $B_{1}\cup B_{2}=\ell$. Since $G$ is $2$-transitive on points, the same then holds for every pair of distinct points, and since each block lies in the union of the two blocks through any two of its points, every block is contained in a line of the projective space. We get $2k-2\geq|B_{1}\cup B_{2}|=q+1$, while $q+1=|\ell|>|B_{i}|=k$. Hence $q>k-1\geq(q+1)/2>q/2$.
Assume that there are $s$ blocks of ${\mathcal{D}}$ through $\alpha$ contained in the projective line $\ell$. Since $G$ acts flag-transitively on the projective space ${{\rm PG}}(n-1,q)$, for any projective line $\ell'$ and any point $\alpha'\in\ell'$, there are $s$ blocks containing $\alpha'$ that are contained in $\ell'$. Since for any two distinct points, there is a unique projective line containing them, the sets of blocks on $\alpha$ that are contained in distinct lines $\ell,~\ell'$ through $\alpha$ are disjoint. Note that there are $(q^{n-1}-1)/(q-1)$ projective lines through $\alpha$, so the number of blocks through $\alpha$ is $r=s(q^{n-1}-1)/(q-1)$.
As $r(k-1)=2(v-1)$, it follows that $s(k-1)(q^{n-1}-1)/(q-1)=2((q^{n}-1)/(q-1)-1)$, so $s(k-1)=2q$. Then it follows from $q>k-1> q/2$ that $1>2/s>1/2$, and so $s=3$. Thus there are 3 blocks through $\alpha$ contained in $\ell$, and $k-1=2q/3$, so $q=3^{f}$ for some $f$, and $k=2\cdot3^{f-1}+1$.
Assume that there are $c$ blocks of ${\mathcal{D}}$ contained in the projective line $\ell$. Since $G$ acts transitively on the projective lines, for any projective line $\ell'$, there are $c$ blocks contained in $\ell'$. Now, counting the number of flags $(\gamma,B)$ in two ways, where $\gamma\in \ell$ and $B\subseteq \ell$ for a fixed line $\ell$, we have that $3(q+1)=ck$, so $3(3^{f}+1)=c(2\cdot 3^{f-1}+1)$, which can be rewritten as $3^{f-1}(9-2c)=c-3$. Suppose $f\geq2$. Then $3$ divides $c$: when $c=3$, the equation cannot hold, and when $c\geq 6$ the left hand side is negative while the right hand side is positive. Hence $f=1$, $q=3$, $k=3$, and $c=4$. Therefore, the blocks contained in $\ell$ are all the sets $\ell\backslash\{\gamma\}$, for $\gamma\in \ell$, and this implies that $\mathcal{B}=\{\ell\backslash\{\gamma\}\,|\,\ell\in \mathcal{L}, \gamma\in \ell\}$. Therefore, ${\mathcal{D}}={\mathcal{D}}(\cal S)$ is the design in Construction \[cons\], where $\cal S$ is the design of points and lines of ${{\rm PG}}(n-1,3)$.
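For the smallest case $n=3$ the conclusion can be checked directly. The following Python sketch (a mechanical check, not part of the proof) builds ${\mathcal{D}}(\cal S)$ for $\cal S$ the points and lines of ${{\rm PG}}(2,3)$ and verifies that it is a $2$-$(13,3,2)$ design.

```python
from itertools import combinations, product

# Points of PG(2,3): nonzero vectors of F_3^3, normalised so that the
# first nonzero coordinate equals 1.
points = [p for p in product(range(3), repeat=3)
          if any(p) and p[min(i for i, x in enumerate(p) if x)] == 1]
assert len(points) == 13

# Lines of PG(2,3): for each (dual) point a, the 4 points p with a.p = 0.
lines = [[p for p in points if sum(x * y for x, y in zip(a, p)) % 3 == 0]
         for a in points]
assert all(len(l) == 4 for l in lines)

# Blocks of D(S): each line with one of its points removed (k = 3).
blocks = [set(l) - {g} for l in lines for g in l]
assert len(blocks) == 13 * 4

# Every pair of points lies in exactly lambda = 2 blocks.
assert all(sum(x in B and y in B for B in blocks) == 2
           for x, y in combinations(points, 2))
```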
In what follows, we analyse each of the families $\mathcal{C}_{1}$–$\mathcal{C}_{9}$ for $G_\alpha$.
$\mathcal{C}_{1}$-subgroups {#sec3.1.1}
---------------------------
In this analysis we repeatedly use the Gaussian binomial coefficient $\gbc{m}{i}_q$ for the number of $i$-spaces in an $m$-dimensional space $\mathbb{F}_q^m$, where $0\leq i\leq m$. A straightforward argument counting bases of $\mathbb{F}_q^m$ and its subspaces shows that, for $i\geq1$, $$\gbc{m}{i}_q= \frac{(q^m-1)(q^m-q)\cdots(q^{m}-q^{i-1})}{(q^i-1)(q^i-q)\cdots(q^i-q^{i-1})}
= \frac{\prod_{j=1}^i(q^{m-i+j}-1)}{\prod_{j=1}^i(q^j-1)} = \prod_{j=1}^i \frac{q^{m-i+j}-1}{q^{j}-1}.$$ We use this equality without further comment. We also use the facts that $\gbc{m}{i}_q=\gbc{m}{m-i}_q$, that the number of complements in $\mathbb{F}_q^m$ of a given $i$-space is $q^{i(m-i)}$, and hence that the number of decompositions $U\oplus W$ of $\mathbb{F}_q^m$ with $\dim(U)=i$ is $\gbc{m}{i}_q\cdot q^{i(m-i)}$.
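The facts just quoted are easy to verify mechanically. The following Python sketch (a verification aid only) computes $\gbc{m}{i}_q$ from the product formula and checks the symmetry and the decomposition count in small cases.

```python
from math import prod

def gbc(m, i, q):
    # Gaussian binomial [m choose i]_q: the number of i-spaces of F_q^m.
    num = prod(q**(m - i + j) - 1 for j in range(1, i + 1))
    den = prod(q**j - 1 for j in range(1, i + 1))
    return num // den  # exact: the Gaussian binomial is an integer

# Symmetry [m choose i]_q = [m choose m-i]_q.
assert all(gbc(7, i, 3) == gbc(7, 7 - i, 3) for i in range(1, 7))

# The number of 2-spaces of F_2^4 is (2^4-1)(2^3-1)/((2^2-1)(2-1)) = 35.
assert gbc(4, 2, 2) == 35

# Number of decompositions U + W (direct sum) of F_q^m with dim U = i:
# [m choose i]_q * q^{i(m-i)}; for m = 4, i = 2, q = 2 this is 35 * 16.
assert gbc(4, 2, 2) * 2**(2 * (4 - 2)) == 560
```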
\[c1’\] Assume Hypothesis $\ref{H}$. If the point-stabiliser $G_\alpha\in\mathcal{C}_{1}$, then $G_\alpha$ is the stabiliser in $G$ of an $i$-space and $G\leq{{\rm P\Gamma L}}(n,q)$.
If $G\leq {{\rm P\Gamma L}}(n,q)$ then $G_\alpha$ is the stabiliser in $G$ of an $i$-space, for some $i$, so assume that $G\nleq{{\rm P\Gamma L}}(n,q)$. Then $G$ contains a graph automorphism of ${{\rm PSL}}(n,q)$, so in particular $n\geq3$, and $G_{\alpha}$ stabilises a pair $\{U,W\}$ of subspaces $U$ and $W$, where $U$ has dimension $i$ and $W$ has dimension $n-i$ with $1\leq i< n/2$. It follows that $G^*:=G\cap{{\rm P\Gamma L}}(n,q)$ has index $2$ in $G$. Moreover, either $U\subseteq W$ or $U\cap W=0$.
**Case 1:** $U \subset W$.
In this case, $v$ is the number $\gbc{n}{n-i}_q$ of $(n-i)$-spaces $W$ in $V$, times the number $\gbc{n-i}{i}_q$ of $i$-spaces $U$ in $W$, so $$\begin{aligned}
v
&=\gbc{n}{n-i}_q\cdot\gbc{n-i}{i}_q = \prod_{j=1}^{n-i} \frac{q^{n-(n-i)+j}-1}{q^{j}-1}\cdot
\prod_{j=1}^i \frac{q^{(n-i)-i+j}-1}{q^{j}-1}\\
&=\left. \prod_{j=1}^{n-i} (q^{i+j}-1) \right/ {\left(\prod_{j=1}^{n-2i} (q^{j}-1)\cdot \prod_{j=1}^i (q^{j}-1)\right)}\\
&= \prod_{j=1}^{i} \frac{q^{i+j}-1}{q^{j}-1} \cdot \prod_{j=1}^{n-2i} \frac{q^{2i+j}-1}{q^{j}-1}.\end{aligned}$$ Then, using the fact that $q^m-1>q^{m-j}(q^j-1)$, for integers $1\leq j<m$, $$v > \prod_{j=1}^i q^{i} \cdot \prod_{j=1}^{n-2i} q^{2i} = q^{i^2 + 2i(n-2i)} = q^{i(2n-3i)}.$$
Consider the following points of ${\mathcal{D}}$: $\alpha = \{U,W\}$, where $W=\langle v_{1},v_{2},\ldots, v_{n-i}\rangle$ and $U= \langle v_{1},v_{2},\ldots, v_{i}\rangle$, and $\beta = \{U',W\}$, where $U'= \langle v_{1},v_{2},\ldots, v_{i-1}, v_{i+1}\rangle$. Then the $G^*_\alpha$-orbit $\Delta$ containing $\beta$ consists of all the points $\{U'',W\}$ such that the $i$-space $U''\subset W$ and $\dim(U\cap U'')=i-1$. Thus the cardinality $|\Delta|$ is the number $\gbc{i}{i-1}_q$ of $(i-1)$-spaces $U\cap U''$ in $U$, times the number $\gbc{n-2i+1}{1}_q-1$ of $1$-spaces in $W/(U\cap U'')$ distinct from $U/(U\cap U'')$. Therefore, since $\gbc{i}{i-1}_q=\gbc{i}{1}_q$, $$|G^*_\alpha:G^*_{\alpha\beta}| = |\Delta| = \gbc{i}{1}_q\cdot \left(\gbc{n-2i+1}{1}_q-1\right) =
\frac{q^i-1}{q-1}\cdot \frac{q(q^{n-2i}-1)}{q-1}.$$
Note that $G_\alpha$ contains a graph automorphism, and each such graph automorphism interchanges $U$ and $W$, and hence does not leave $\Delta$ invariant. Thus the $G_\alpha$-orbit containing $\beta$ has cardinality $2|\Delta|$ (a subdegree of $G$), so by Lemma \[condition 2\](iv), $r$ divides $$4|\Delta|=\frac{4q(q^{n-2i}-1)(q^i-1)}{(q-1)^2}.$$ Note that $(q^j-1)/(q-1)<2q^{j-1}$ for each integer $j>0$. It follows that $$r\leq \frac{4q(q^{n-2i}-1)(q^i-1)}{(q-1)^2}
<4q\cdot2q^{n-2i-1}\cdot2q^{i-1}=16q^{n-i-1}.$$ Combining this with $r^2>2v$ and $v>q^{i(2n-3i)}$, we see that $16^2q^{2(n-i-1)}>2q^{i(2n-3i)}$, that is, $$\label{Eq12}
2^7>q^{2(i-1)n-3i^2+2i+2}\geq 2^{2(i-1)n-3i^2+2i+2}.$$ Since $n>2i$, it follows that $2(i-1)n-3i^2+2i+2 \geq 4i(i-1)-3i^2+2i+2 = i^2-2i+2$, and so $i^2-2i-5<0$, which implies $i\leq 3$.
**Subcase 1.1:** $i=3$.
Then $n>2i=6$. From \eqref{Eq12} we have $2^7>q^{4n-19}\geq 2^{4n-19}$, which implies $n\leq 6$, a contradiction.
**Subcase 1.2:** $i=2$.
Then $n>4$. From \eqref{Eq12} we have $2^7>q^{2n-6}\geq 2^{2n-6}$, which implies $n=5$ or $6$. Then $r\mid 4q(q+1)^{n-4}$ (for $n=5$ or $6$) and $v>q^{4n-12}$. Combining this with $r^2>2v$, we deduce $16q^2(q+1)^{2n-8}>2q^{4n-12}$, that is, $8(q+1)^{2n-8}>q^{4n-14}$. For $n=6,$ this gives $8(q+1)^4>q^{10}$, which is impossible. Thus $n=5$ and $8(q+1)^2>q^6$, so $q=2$ and $v=5\cdot7\cdot31$. On the one hand $r\mid 24$ and on the other hand the condition $r^2>2v$ implies $r\geq 47$, a contradiction.
**Subcase 1.3:** $i=1$. Then $n>2$, $r$ divides $4q(q^{n-2}-1)/(q-1)$, and $$v=\frac{(q^{n}-1)(q^{n-1}-1)}{(q-1)^2}.$$ Combining this with the condition $r\mid 2(v-1)$, we see that $r$ divides $$\begin{aligned}
R:&=\gcd\left(2(v-1),\frac{4q(q^{n-2}-1)}{q-1}\right)\\
&=2\gcd\left(\frac{(q^{n}-1)(q^{n-1}-1)}{(q-1)^2}-1,\frac{2q(q^{n-2}-1)}{q-1}\right),\\
&=\frac{2q}{(q-1)^2}\cdot
\gcd\left(q^{2n-2}-q^{n-1}-q^{n-2}-q+2,2(q-1)(q^{n-2}-1)\right).\end{aligned}$$ Since $$(q^{2n-2}-q^{n-1}-q^{n-2}-q+2)-(q-1)^2=(q^n+q^2-q-1)(q^{n-2}-1)$$ is divisible by $(q-1)(q^{n-2}-1)$, we see that $$\gcd\left(q^{2n-2}-q^{n-1}-q^{n-2}-q+2,(q-1)(q^{n-2}-1)\right)
\text{ divides }
(q-1)^2,$$ and so $$\gcd\left(q^{2n-2}-q^{n-1}-q^{n-2}-q+2,2(q-1)(q^{n-2}-1)\right)
\text{ divides }
2(q-1)^2.$$ Therefore, $R$ divides $$\frac{2q}{(q-1)^2}\cdot2(q-1)^2=4q.$$ Combining this with $r\mid R$, $r^2>2v$ and $v>q^{2n-3}$, we deduce $16q^2>2q^{2n-3}$. Therefore, $8>q^{2n-5}\geq 2^{2n-5}$, which leads to $n=3$, and $q<8$. Note that $v=(q^2+q+1)(q+1)$, so $R=\gcd\left(2(v-1),4q\right)=2\gcd\left(q(q^2+2q+2),2q\right)=2q\gcd\left(q^2+2q+2,2\right)$. When $q$ is odd we see that $R=2q$. Then $r^2>2v$ leads to $ 2(q^2+q+1)(q+1)<r^2\leq R^2= 4q^2, $ which is not possible. Hence $q\in\{2,4\}$ and $R=4q$.
First assume that $q=4$. Then $v=105$ and $R=16$. Combining this with $r\mid R$ and $r^2>2v$, we conclude that $r=16$. Then it follows from $r(k-1)=2(v-1)$ and $bk=vr$ that $k=14$ and $b=120$. Since $G$ is block-transitive, it follows that $X:={{\rm Soc}}(G)={{\rm PSL}}(3,4)$ has equal length orbits on blocks, of length dividing $b=120$. This implies that $X$ has a maximal subgroup of index dividing $120$, and hence by [@Atlas page 23], we conclude that $X$ is primitive on blocks, that the stabiliser $X_B$ of a block $B$ is a maximal ${\mathcal{C}}_5$-subgroup stabilising an $\mathbb{F}_2$-structure $V_0=\mathbb{F}_2^3 <V$, and $X_B$ has two orbits on 1-spaces, and on $2$-spaces in $V$. An easy computation shows that $X_B$ has precisely four orbits on the point set $\P$, of lengths 14, 14, 21, 56: these are subsets of flags $\{U,W\}$ determined by whether $U\cap V_0$ contains a non-zero vector or not, and whether $W\cap V_0$ is a 2-space of $V_0$ or not. Since $X_B$ preserves the $k=14$ points of $B$, it follows that $B$ is equal to one of the $X_B$-orbits of length 14, so that $X$ acts flag-transitively and point-imprimitively on ${\mathcal{D}}$, contradicting Theorem \[Th1\]. (In fact $G_B$ interchanges the two $X_B$-orbits of length $14$ and so $G_B$ does not leave invariant a point-subset of size 14.)
Thus $q=2$. Then $v=21$ and $R=8$ and $G={{\rm PSL}}(3,2).2\cong {{\rm PGL}}(2,7)$. This together with $r\mid R$ and $r^2>2v$ implies $r=8$. Then we derive from $r(k-1)=2(v-1)$ and $bk=vr$ that $k=6$ and $b=28$. However, one can see from [@Handbook II.1.35] that there is no $2$-$(21,6,2)$ design, a contradiction. We also checked with <span style="font-variant:small-caps;">Magma</span> that considering every subgroup of index 28 as a block stabiliser, and each of its orbits of size 6 as a possible block, the orbit of that block under $G$ does not yield a $2$-design.
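The parameter arithmetic for $q=2$ is easily replicated; a short Python check (a verification aid only — the nonexistence of a $2$-$(21,6,2)$ design itself is quoted from [@Handbook]):

```python
v = 21
# r divides R = 8 and 2(v-1) = 40, and r^2 > 2v forces r = 8.
candidates = [r for r in (1, 2, 4, 8)
              if 40 % r == 0 and r * r > 2 * v]
assert candidates == [8]

r = 8
k = 2 * (v - 1) // r + 1      # from r(k-1) = 2(v-1)
b = v * r // k                # from bk = vr
assert (k, b) == (6, 28)
```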
**Case 2:** $V=U\oplus W$.
In this case the number $v$ of points is the number $\gbc{n}{i}_q$ of $i$-spaces $U$ of $V$, times the number $q^{i(n-i)}$ of complements $W$ to $U$ in $V$, so $$\begin{aligned}
v&=q^{i(n-i)} \prod_{j=1}^i \frac{q^{n-i+j}-1}{q^j-1},\end{aligned}$$ so in particular $p\mid v$, and by Lemma \[bound\](iii), $r_p$ divides 2.
Note that $q^i-1>q^{i-j}(q^j-1)$, for integers $i>j$. Thus $$\begin{aligned}
v&> q^{i(n-i)} \prod_{j=1}^i q^{n-i} =q^{i(n-i)}(q^{n-i})^i=q^{2i(n-i)}.\end{aligned}$$ We consider the point $\alpha=\{U,W\}$ with $U=\langle v_{1},\dots,v_i\rangle, W=\langle v_{i+1},\ldots, v_{n}\rangle$ and the $G^*_\alpha$-orbit $\Delta$ containing $\beta=\{U', W'\}$ with $U'=\langle v_{1},\ldots, v_{i-1},v_{i+1}\rangle,
W'=\langle v_i,v_{i+2},\ldots, v_{n}\rangle$. Then $\Delta$ consists of all $\{U'',W''\}$ with $\dim(U''\cap U)=i-1, \dim(W''\cap W)=n-i-1, \dim(U''\cap W)= \dim(W''\cap U)=1$, so $|\Delta|$ is the number $\gbc{i}{1}_q\cdot q^{i-1}$ of decompositions $U=(U''\cap U)\oplus (W''\cap U)$, times the number $\gbc{n-i}{1}_q\cdot q^{n-i-1}$ of decompositions $W=(U''\cap W)\oplus (W''\cap W)$. Thus $$|G_{\alpha}^*:G_{\alpha\beta}^*|=|\Delta|= q^{i-1}\frac{q^i-1}{q-1}\cdot q^{n-i-1}\frac{q^{n-i}-1}{q-1} = q^{n-2}\frac{(q^i-1)(q^{n-i}-1)}{(q-1)^2},$$ and $G$ has a subdegree $|\Delta|$ or $2|\Delta|$. By Lemma \[condition 2\](iv), $r$ divides $4|\Delta|$. Since $r_p\mid 2$, we deduce that $r$ divides $4(q^i-1)(q^{n-i}-1)/(q-1)^2$ (and even $2(q^i-1)(q^{n-i}-1)/(q-1)^2$ if $q$ is even). Let $a=1$ if $q$ is even and $2$ otherwise. Then $r$ divides $2^a(q^i-1)(q^{n-i}-1)/(q-1)^2$
Considering the inequality $r^2>2v>2q^{2i(n-i)}$ and the fact that $(q^j-1)/(q-1)<2q^{j-1}$ for each integer $j>0$, it follows that $$\label{Alice}
q^{2i(n-i)}<2^{2a-1}\frac{(q^i-1)^2(q^{n-i}-1)^2}{(q-1)^4}<2^{2a-1}(2q^{i-1})^2(2q^{n-i-1})^2=2^{2a+3}q^{2n-4}.$$ Thus $2^{2a+3}>q^{2n(i-1)-2i^2+4}\geq 2^{2n(i-1)-2i^2+4},$ so $$2a-2\geq 2n(i-1)-2i^2\geq 4i(i-1)-2i^2=2i(i-2),$$ using $n\geq 2i$. Hence $i=1$ or $2$; moreover, if $i=2$ then $n\geq 2i+1=5$, so the middle term gives $2a-2\geq 2n-8\geq 2$, whence $a=2$, that is, $q$ is odd.
Assume $i=2$, so $q$ is odd. Then $2\geq 2n(i-1)-2i^2=2n-8$, so $n\leq 5$. On the other hand $n>2i$, so $n=5$. By \eqref{Alice}, $q^{12}<2^7q^{6}$, so $q^{6}<2^7$, a contradiction since $q\geq 3$. Therefore $i=1$. In this case $v=q^{n-1}\frac{q^{n}-1}{q-1}$ and we compute that $v-1=\frac{q^{n-1}-1}{q-1}\cdot (q^n+q-1)$. Since $r\mid 2(v-1)$, $r$ divides $\gcd(2\frac{q^{n-1}-1}{q-1}\cdot (q^n+q-1),4\frac{q^{n-1}-1}{q-1})=2\frac{q^{n-1}-1}{q-1}\gcd(q^n+q-1,2)=2\frac{q^{n-1}-1}{q-1}$. In other words $a=1$ in the computation above whether $q$ is odd or even. Then by \eqref{Alice}, $q^{2(n-1)}<2(q^{n-1}-1)^2/(q-1)^2<2(2q^{n-2})^2=2^3q^{2n-4}$, which can be rewritten as $q^2<2^{3}$, so $q=2$. Thus $v= 2^{n-1}(2^{n}-1)$ and $r$ divides $2(2^{n-1}-1)$, so $r^2>2v$ implies that $ 2^{2n-1}-2^{n-1}<2(2^{n-1}-1)^2=2^{2n-1}-2^{n+1}+2$, which is impossible.
\[c1\] Assume Hypothesis \[H\], and that the point-stabiliser $G_\alpha\in\mathcal{C}_{1}$. Then either
(a) ${\mathcal{D}}={\mathcal{D}}(\cal S)$ is as in Construction \[cons\], where $\cal S$ is the design of points and lines of ${{\rm PG}}(n-1,3)$; or
(b) ${\mathcal{D}}$ is the complement of the Fano plane.
By Lemma \[c1’\], $G\leq{{\rm P\Gamma L}}(n,q)$, and $G_{\alpha}\cong {\rm P_{i}}$ is the stabiliser of a subspace $W$ of $V$ of dimension $i$, for some $i$. As we will work with the action on the underlying space $V$ we will usually consider a linear group $\tilde{G}$ satisfying $\tilde{X}={{\rm SL}}(n,q)\leq \tilde{G}\leq{{\rm \Gamma L}}(n,q)$, acting unfaithfully on $\P$ with kernel a subgroup of scalars. By Proposition \[4\] we may assume that $i\geq2$. Also, on applying a graph automorphism that interchanges $i$-spaces and $(n-i)$-spaces (and replacing $\cal D$ by an isomorphic design) we may assume further that $i\leq n/2$. Then $v$ is the number of $i$-spaces: $$v=\gbc{n}{i}_q= \prod_{j=1}^i \frac{q^{n-i+j}-1}{q^{j}-1}.$$ Using the fact that $q^i-1>q^{i-j}(q^j-1)$, for integers $i>j$, it follows that $v>q^{i(n-i)}$.
Consider the following points of ${\mathcal{D}}$: $\alpha = W$, where $W= \langle v_{1},v_{2},\ldots, v_{i}\rangle$, and $\beta = W'$, where $W'= \langle v_{1},v_{2},\ldots, v_{i-1}, v_{i+1}\rangle$. Then the $\tilde{G}_\alpha$-orbit $\Delta$ containing $\beta$ consists of all the points $W''$ such that $\dim(W\cap W'')=i-1$. Thus the cardinality $|\Delta|$ is the number $\gbc{i}{i-1}_q$ of $(i-1)$-spaces $W\cap W''$ in $W$, times the number $\gbc{n-i+1}{1}_q-1$ of $1$-spaces in $V/(W\cap W'')$ distinct from $W/(W\cap W'')$. Therefore, since $\gbc{i}{i-1}_q=\gbc{i}{1}_q$, $$|\Delta|=\gbc{i}{1}_q\cdot \left(\gbc{n-i+1}{1}_q-1\right)=\frac{q(q^{i}-1)(q^{n-i}-1)}{(q-1)^2}.$$ Since $\tilde{G}$ is flag-transitive, $r$ divides $2|\Delta|$ (by Lemma \[condition 2\](iv)). Combining this with $r^{2}>2v$ (Lemma \[condition 1\](iv)) we have that $$\frac{2q^2(q^{i}-1)^2(q^{n-i}-1)^2}{(q-1)^4}>\frac{(q^{n}-1)\cdots(q^{n-i+1}-1)}{(q^{i}-1)\cdots(q-1)}>q^{i(n-i)}.$$ Since $2q^{j-1}>(q^j-1)/(q-1)$ for all $j\in\mathbb{N}$, it follows that $$\label{Alice2}
q^{i(n-i)}<\frac{2q^2(q^{i}-1)^2(q^{n-i}-1)^2}{(q-1)^4}<2q^2(2q^{i-1})^2(2q^{n-i-1})^2=32q^{2n-2}\leq q^{2n+3}$$ Hence $$\label{eq3}
2n+3>i(n-i)$$ and so $i^2+3>n(i-2)\geq2i(i-2)$, which implies that $i\leq4$. Note from Lemma \[condition 1\] that $r\mid 2(v-1)$. Let $R=2\gcd(|\Delta|,v-1)$. As $r$ divides $2|\Delta|$, it follows that $r$ divides $R$ and hence $r\leq R$.
**Case 1:** $i=4$.
In this case, we derive from \eqref{eq3} that $n\leq9$. This together with the restriction $n\geq2i=8$ leads to $n=8$ or $9$. We also deduce from \eqref{Alice2} that $32q^{2n-2}>q^{4(n-4)}$, that is $32>q^{2n-14}$. First assume that $n=8$. Then $32>q^{2}$, so $q\leq 5$. We get $$|\Delta|=\frac{q(q^4-1)^2}{(q-1)^2}$$ and $$v=\frac{(q^8-1)(q^7-1)(q^6-1)(q^5-1)}{(q^4-1)(q^3-1)(q^2-1)(q-1)}.$$
We easily compute that $R=4,6,40,10$ when $q=2,3,4,5$ respectively, in each case contradicting $r^2>2v$, since $r\leq R$.
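These values of $R$ are straightforward to reproduce. The following Python sketch (a verification aid only) recomputes $R=2\gcd(|\Delta|,v-1)$ for $n=8$ and $q\leq5$ and checks that $R^2<2v$ in each case.

```python
from math import gcd, prod

def gbc(m, i, q):
    # Gaussian binomial [m choose i]_q (number of i-spaces of F_q^m).
    return (prod(q**(m - i + j) - 1 for j in range(1, i + 1))
            // prod(q**j - 1 for j in range(1, i + 1)))

expected = {2: 4, 3: 6, 4: 40, 5: 10}
for q in (2, 3, 4, 5):
    v = gbc(8, 4, q)                          # number of 4-spaces
    delta = q * (q**4 - 1)**2 // (q - 1)**2   # |Delta| for n = 8, i = 4
    R = 2 * gcd(delta, v - 1)
    assert R == expected[q]
    assert R * R < 2 * v       # contradicts r^2 > 2v, since r <= R
```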
Next assume $n=9$. Then $32>q^{4}$, so $q=2$. We get $$|\Delta|=\frac{q(q^4-1)(q^5-1)}{(q-1)^2}=930$$ and $$v=\frac{(q^9-1)(q^8-1)(q^7-1)(q^6-1)}{(q^4-1)(q^3-1)(q^2-1)(q-1)}=3309747.$$ Therefore $R=124$, again contradicting $r^2>2v$.

**Case 2:** $i=3$.
In this case, we derive from \eqref{eq3} that $n\leq11$. This together with the restriction $n\geq2i=6$ leads to $n\in\{6,7,8,9,10,11\}$.
For $n=6$, $|\Delta|=\frac{q(q^3-1)^2}{(q-1)^2}=q(q^2+q+1)^2$, while $$v-1=\frac{(q^6-1)(q^5-1)(q^4-1)}{(q^3-1)(q^2-1)(q-1)}-1=q(q^8+q^7+2q^6+3q^5+3q^4+3q^3+3q^2+2q+1).$$ Thus $R= 2\gcd(|\Delta|,v-1)=2q\gcd((q^2+q+1)^2,q^8+q^7+2q^6+3q^5+3q^4+3q^3+3q^2+2q+1)$. Using the Euclidean algorithm, we easily see that $$\gcd(q^2+q+1,q^8+q^7+2q^6+3q^5+3q^4+3q^3+3q^2+2q+1)=1,$$ so $R=2q$, contradicting $r^2>2v$.
For $n=7$, $|\Delta|=\frac{q(q^3-1)(q^4-1)}{(q-1)^2}=q(q^2+q+1)(q+1)(q^2+1)$, while $$v-1=\frac{(q^7-1)(q^6-1)(q^5-1)}{(q^3-1)(q^2-1)(q-1)}-1=q(q^2+1)(q^9+q^8+q^7+2q^6+3q^5+2q^4+2q^3+2q^2+2q+1).$$ Thus $$R= 2\gcd(|\Delta|,v-1)=2q(q^2+1)\gcd((q^2+q+1)(q+1),q^9+q^8+q^7+2q^6+3q^5+2q^4+2q^3+2q^2+2q+1).$$ Using the Euclidean algorithm, we easily see that $$\gcd(q^2+q+1,q^9+q^8+q^7+2q^6+3q^5+2q^4+2q^3+2q^2+2q+1)=1$$ and $$\gcd(q+1,q^9+q^8+q^7+2q^6+3q^5+2q^4+2q^3+2q^2+2q+1)=1,$$ so $R=2q(q^2+1)$, contradicting $r^2>2v$.
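For small prime powers these gcd computations can be replicated numerically. The following Python sketch (a sanity check only; the argument above covers all $q$) confirms $R=2q$ for $n=6$ and $R=2q(q^2+1)$ for $n=7$, together with $R^2<2v$, for $q\leq 9$.

```python
from math import gcd, prod

def gbc(m, i, q):
    # Gaussian binomial [m choose i]_q (number of i-spaces of F_q^m).
    return (prod(q**(m - i + j) - 1 for j in range(1, i + 1))
            // prod(q**j - 1 for j in range(1, i + 1)))

for q in (2, 3, 4, 5, 7, 8, 9):          # small prime powers
    # n = 6, i = 3:
    v = gbc(6, 3, q)
    delta = q * (q**2 + q + 1)**2
    R = 2 * gcd(delta, v - 1)
    assert R == 2 * q and R * R < 2 * v
    # n = 7, i = 3:
    v = gbc(7, 3, q)
    delta = q * (q**2 + q + 1) * (q + 1) * (q**2 + 1)
    R = 2 * gcd(delta, v - 1)
    assert R == 2 * q * (q**2 + 1) and R * R < 2 * v
```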
Assume now that $8\leq n\leq 11$. We deduce from \eqref{Alice2} that $32q^{2n-2}>q^{3(n-3)}$, that is $32>q^{n-7}$. So there are only finitely many cases to consider, and we easily check that for all of them $R^2<2v$, a contradiction.
**Case 3:** $i=2$.
In this case, the point set is the set of $2$-spaces and $n\geq 4$, but the above restrictions on $r$ do not lead easily to contradictions as they do for larger values of $i$, so we take a different approach. Recall that $\tilde{X}={{\rm SL}}(n,q)\leq \tilde{G}\leq{{\rm \Gamma L}}(n,q)$, acting unfaithfully on $\P$ (with kernel a scalar subgroup of $\tilde{G}$). First we deal with $n=4$. In this case $$v=\frac{(q^4-1)(q^{3}-1)}{(q^2-1)(q-1)} = (q^2+1)(q^2+q+1), \quad |\Delta|=\frac{q(q^{2}-1)^2}{(q-1)^2} = q(q+1)^2$$ and by Lemmas \[condition 1\] and \[condition 2\], $r^2>2v$ and $r$ divides $$\begin{aligned}
2\gcd(v-1, |\Delta|) &= 2\gcd(q^4+q^3+2q^2+q, q(q+1)^2)\\
& = 2q\gcd(q^3+q^2+2q+1,(q+1)^2)\\
&= 2q\gcd((q+1)^2(q-1)+3q+2,(q+1)^2)\\
& = 2q\gcd(3q+2,(q+1)^2) = 2q\end{aligned}$$ which implies $4q^2\geq r^2 > 2v > q^4$, a contradiction. Thus $n\geq5$.
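The computation for $n=4$ is easily replicated; a short Python check (a verification aid only):

```python
from math import gcd

for q in (2, 3, 4, 5, 7, 8, 9, 11, 13):
    v = (q**2 + 1) * (q**2 + q + 1)   # number of 2-spaces of F_q^4
    delta = q * (q + 1)**2
    R = 2 * gcd(v - 1, delta)
    assert R == 2 * q                 # as found via the Euclidean algorithm
    assert R * R < 2 * v              # so r <= R contradicts r^2 > 2v
```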
Let $H:= \tilde{G}\cap{{\rm GL}}(n,q)$. Then the setwise stabiliser $H_{\{\alpha,\beta\}}$ of the points $\alpha = W=\langle v_1, v_2\rangle$ and $\beta=W'=\langle v_1,v_3\rangle$ fixes setwise the two blocks $B_1, B_2$ of ${\mathcal{D}}$ containing $\{\alpha,\beta\}$. Also $H_{\{\alpha,\beta\}}$ leaves invariant the spaces $Y=W+W'=\langle v_1,v_2,v_3\rangle$ and $Y'=W\cap W'=\langle v_1\rangle$, and induces ${{\rm GL}}(n-3,q)$ on $V/Y$ (since even ${{\rm SL}}(V)\cap ({{\rm GL}}(\la v_1\ra)\times {{\rm GL}}(\la v_4,\dots,v_n\ra))$ induces ${{\rm GL}}(n-3,q)$ on $V/Y$). Moreover $H_{\{\alpha,\beta\}}$ is transitive on $V\setminus Y$, and has orbits of lengths $1, 2q, q^2-q$ on the 1-spaces in $Y$. Since $H_{\{\alpha,\beta\}}\cap \tilde{G}_{B_1}=H_{\{\alpha,\beta\}}\cap H_{B_1}$ is normal of index 1 or 2 in $H_{\{\alpha,\beta\}}$, it follows that $H_{\{\alpha,\beta\}}\cap H_{B_1}$ also induces at least ${{\rm SL}}(n-3,q)$ on $V/Y$ and is transitive on $V\setminus Y$. Hence the only non-zero proper subspaces of $V$ left invariant by $H_{\{\alpha,\beta\}}\cap H_{B_1}$ are $Y, W, W', Y'$, and if $q=2, 3$ then possibly also the $q-1$ other 2-spaces of $Y$ containing $Y'$.
We claim that $H_{B_1}$ is irreducible on $V$. Suppose to the contrary that $H_{B_1}$ leaves invariant a nonzero proper subspace $U$. Then also $H_{\{\alpha,\beta\}}\cap H_{B_1}$ leaves $U$ invariant. We see from the previous paragraph that $U$ must be contained in $Y$. If $U=Y$, then as $\tilde{G}_{B_1}$ is transitive on the set $[B_1]$ of points of ${\mathcal{D}}$ incident with $B_1$, it follows that all such points must be 2-spaces contained in $Y$. This is impossible since $\dim(Y)=3$, while some block, and hence all blocks, must be incident with a pair of 2-spaces which intersect trivially. Thus $U$ is a proper subspace of $Y$. The only 1-space invariant under $H_{\{\alpha,\beta\}}\cap H_{B_1}$ is $Y'$, and if $U=Y'$ then the same argument would yield that all 2-spaces incident with $B_1$ would contain $Y'$, which is not true since some block, and hence all blocks, must be incident with a pair of 2-spaces which intersect trivially. Thus $\dim(U)=2$, and $U$ is a 2-space of $Y$ containing $Y'$. Since $H_{B_1}$ does not fix $\alpha$ or $\beta$, it follows that $U\ne W$ or $W'$, and hence $q=2$ or $3$, and $U$ is one of the $q-1$ other 2-spaces containing $Y'$. Again, since $\tilde{G}_{B_1}$ is transitive on $[B_1]$, each 2-space $\alpha'\in [B_1]$ intersects $U$ in a 1-space. Let $\gamma=W''$ be a 2-space which intersects $\alpha=W$ trivially, and let $B$ be a block of ${\mathcal{D}}$ containing $\{\alpha,\gamma\}$. Then $H_B$ leaves invariant a 2-space, say $U'$, and we have shown that both $W\cap U'$ and $W''\cap U'$ have dimension 1, so $U'$ is contained in the 4-space $W\oplus W''$. Now the subgroup induced by $H_{\{\alpha,\gamma\}}$ on $W\oplus W''$ contains ${{\rm GL}}(W)\times{{\rm GL}}(W'')$. The orbit of $U'$ under this group has size $(q+1)^2$. However the group $H_{\{\alpha,\gamma\}}\cap H_{B}$ has index at most $2$ in $H_{\{\alpha,\gamma\}}$ and fixes $U'$, so we have a contradiction. Thus we conclude that $H_{B_1}$ is irreducible.
The irreducible group $H_{B_1}$ has a subgroup $H_{\{\alpha,\beta\}}\cap H_{B_1}$ inducing at least ${{\rm SL}}(n-3,q)$ on $V/Y$. We will apply a deep theorem from [@NieP] which relies on the presence of various prime divisors of the subgroup order $|H_{B_1}|$. For $b, e\geq 2$, a primitive prime divisor (ppd) of $b^e-1$ is a prime $r$ which divides $b^e-1$ but which does not divide $b^i-1$ for any $i<e$. Such ppd’s are known to exist unless either $(b,e)=(2,6)$, or $e=2$ and $b=2^s-1$ for some $s$, (a theorem of Zsigmondy, see [@NieP Theorem 2.1]). Each ppd $r$ of $b^e-1$ satisfies $r\equiv 1\pmod{e}$, and if $r>e+1$ then $r$ is said to be large; usually $b^e-1$ has a large ppd and the rare exceptions are known explicitly, see [@NieP Theorem 2.2]. Also, if $b=p^f$ for a prime $p$ then each ppd of $p^{fe}-1$ is a ppd of $b^e-1$ (but not conversely) and this type of ppd of $b^e-1$ is called basic. We will apply [@NieP Theorem 4.8] which, in particular, classifies all subgroups $H_{B_1}$ with the following properties:
1. for some integer $e$ such that $n/2 < e\leq n-4$, $|H_{B_1}|$ is divisible by a ppd of $q^e-1$ and also by a ppd of $q^{e+1}-1$;
2. for some (not necessarily different) integers $e', e''$ such that $n/2 < e'\leq n-3$ and $n/2 < e''\leq n-3$, $|H_{B_1}|$ is divisible by a large ppd of $q^{e'}-1$ and a basic ppd of $q^{e''}-1$.
Since $|H_{B_1}|$ is divisible by $|{{\rm SL}}(n-3,q)|$, it is straightforward to check, using [@NieP Theorems 2.1 and 2.2], that $H_{B_1}$ has these properties whenever either $n\geq 11$ with arbitrary $q$, or $n\in\{9,10\}$ with $q>2$. In these cases we can apply [@NieP Theorem 4.8] to the irreducible subgroup $H_{B_1}$ of ${{\rm GL}}(n,q)$. Note that $H$ does not contain ${{\rm SL}}(n,q)$ since it fixes $[B_1]$ setwise; also, since $e, e+1$ differ by 1 and $e+1\leq n-3$, $H$ is not one of the ‘Extension field examples’ from [@NieP Theorem 4.8 (b), see Lemma 4.2], and finally since $n\geq9$ and $e+1\leq n-3$, $H$ is not one of the ‘Nearly simple examples’ from [@NieP Theorem 4.8 (c)]. Thus we conclude that either $n\in\{9,10\}$ with $q=2$, or $n\in\{5, 6, 7, 8\}$.
Finally we deal with the remaining values of $n$. Since $H_{\{\alpha,\beta\}}\cap H_{B_1}$ has index at most 2 in $H_{\{\alpha,\beta\}}$ it follows that $H_{B_1}$ has a subgroup of the form $[q^{3\times (n-3)}].{{\rm SL}}(n-3,q)$ which is transitive on $V\setminus Y$, and hence $H_{B_1}$ has order divisible by $q^{x}$ with $x=x(n)=3(n-3)+\binom{n-3}{2} = (n-3)(n+2)/2$; also $H_{B_1}$ does not contain ${{\rm SL}}(n,q)$ since it fixes $[B_1]$ setwise. It follows that $H_{B_1}\cap{{\rm SL}}(n,q)$ is contained in a maximal subgroup of ${{\rm SL}}(n,q)$ which is irreducible (that is, not in class $\mathcal{C}_1$ in [@Low]) and has order divisible by $q^{x(n)}$. A careful check of the possible maximal subgroups in the relevant tables in [@Low], as listed in Table \[tabc1\], shows that no such subgroup exists. This completes the proof.
$n$ $x(n)$ Tables from [@Low] for $n$
------ -------- ----------------------------
$5$ $7$ Tables 8.18 and 8.19
$6$ $12$ Tables 8.24 and 8.25
$7$ $18$ Tables 8.35 and 8.36
$8$ $25$ Tables 8.44 and 8.45
$9$ $33$ Tables 8.54 and 8.55
$10$ $42$ Tables 8.60 and 8.61
: Tables from [@Low] to check for the proof of Lemma \[c1\], Case $i=2$[]{data-label="tabc1"}
$\mathcal{C}_{2}$-subgroups {#sec3.2}
---------------------------
Here $G_{\alpha}$ is a subgroup of type ${{\rm GL}}(m,q)\wr {\rm \mathrm{S}_{t}}$, preserving a decomposition $V=V_{1}\oplus\cdots\oplus V_{t}$ with each $V_{i}$ of the same dimension $m$, where $n=mt$, $t\geq2$. We can think of the point set of ${\mathcal{D}}$ as the set of these decompositions (for a fixed $m$ and $t$). Note that graph automorphisms swap $i$-spaces with $(n-i)$-spaces, so $G\leq {{\rm P\Gamma L}}(n,q)$ unless $t=2$. When $t=2$ we have to consider that $G$ could contain graph automorphisms, and so could $G_\alpha$.
\[c2\] Assume Hypothesis \[H\]. Then the point-stabiliser $G_\alpha\notin\mathcal{C}_{2}$.
Recall that we denote $\gcd(n,q-1)$ by $d$. By Lemma \[eq2\], $|X|=|{{\rm PSL}}(n,q)|>q^{n^2-2}$, and by [@PB Proposition 4.2.9], $$v=\frac{|{{\rm GL}}(mt,q)|}{|{{\rm GL}}(m,q)|^tt!}\quad\mbox{so}\quad |X_{\alpha}|=\frac{|X|}{v}=\frac{t!|{{\rm GL}}(m,q)|^t}{d(q-1)}.$$ **Case 1:** $m=1$.
Then $n=t\geq 3$, so $\tilde{G}\leq{{\rm \Gamma L}}(n,q)$. Take $\alpha$ as the decomposition $\oplus_{i=1}^n\la e_i\ra$ and $\beta$ as the decomposition $\la e_1+e_2\ra\oplus (\oplus_{i=2}^n\la e_i\ra)$. The orbit of $\beta$ under $G_{\alpha}$ consists of the decompositions $\la e_i+\lambda e_j\ra\oplus (\oplus_{\ell\neq i}\la e_\ell\ra)$, for $i\neq j$ and $\lambda\in\mathbb{F}_{q}^{*}$, and has size $s:=n(n-1)(q-1)$. Thus by Lemmas \[condition 1\](iv) and \[condition 2\](iv), and Table \[tab1\], $$4n^2(n-1)^2(q-1)^2\geq (2s)^2\geq r^2 > 2v =2\frac{|{{\rm GL}}(n,q)|}{(q-1)^n n!} > 2\frac{q^{n^2}}{4(q-1)^nn!}$$ so $8n^2(n-1)^2 n! > q^{n^2}/(q-1)^{n+2}>q^{n^2-n-2}$. This implies that either $(n,q)=(5,2)$ or $(4,2)$, or $n=3$ and $q\leq 5$.
Suppose first that $n=3$. Then $v=q^3(q^2+q+1)(q+1)/6$ and $r\leq 2s=12(q-1)$. Since $r^2>2v$ we conclude that $q=2$ or $3$. In either case $v$ is divisible by $q$, and since $r$ divides $2(v-1)$ (Lemma \[condition 1\]), $r$ is not divisible by $4$ if $q=2$, and not divisible by $3$ if $q=3$. Hence $r$ divides $6(q-1)$ if $q=2$, or $4(q-1)$ if $q=3$ (Lemma \[condition 2\]), and then $r^2>2v$ leads to a contradiction. Thus $q=2$ and $n$ is 4 or 5. In either case, $v$ is divisible by $4$, so $4$ does not divide $r$ (Lemma \[bound\]). Then, since $r$ divides $2s=2n(n-1)$, we see that $r$ divides $6$ or $10$ for $n=4, 5$ respectively, giving a contradiction to $r^2> 2v$. Thus we may assume that $m\geq2$.
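The small-case eliminations in this proof are quick to replicate. The following Python sketch (a verification aid only) computes $v$ in each residual case and checks that the stated bound on $r$ contradicts $r^2>2v$.

```python
from math import prod, factorial

def gl_order(n, q):
    # |GL(n, q)| = prod_{i=0}^{n-1} (q^n - q^i)
    return prod(q**n - q**i for i in range(n))

# (n, q, bound): bound is the divisor of r obtained in the text after
# removing the forbidden prime power from 2s = 2n(n-1)(q-1).
cases = [(3, 2, 6), (3, 3, 8), (4, 2, 6), (5, 2, 10)]
for n, q, bound in cases:
    # v = |GL(n,q)| / ((q-1)^n n!), the number of decompositions
    v = gl_order(n, q) // ((q - 1)**n * factorial(n))
    assert bound**2 < 2 * v      # so r <= bound contradicts r^2 > 2v
```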
**Case 2:** $t=2$.
Next we deal with the case where $G$ may contain a graph automorphism, namely the case $t=2$, so $n=mt\geq 4$, and $G$ acts on the set of decompositions into two subspaces of dimension $m=n/2$. Let ${\alpha}$ be the decomposition $V_{1}\oplus V_{2}$ where $$V_1=\langle v_{1},\ldots, v_{m}\rangle, \quad V_2=\langle v_{m+1},\ldots, v_{2m}\rangle.$$ Let $\beta$ be the decomposition $V_1'\oplus V_2'$, where $V_1'= \langle v_{1},\ldots, v_{m-1},v_{m+1}\rangle$ and $V_2'=
\langle v_{m},v_{m+2},\ldots, v_{2m}\rangle$. Let $G^*:=G\cap{{\rm P\Gamma L}}(n,q)$, so $|G:G^*|\leq 2$. Since $G$ is point-primitive, $G$ is point-transitive, and so $|G_\alpha:G^*_\alpha|=|G:G^*|\leq 2$.
Moreover, let $G^*_{V_1,V_2}$ be the subgroup of $G^*_\alpha$ fixing $V_1$ and $V_2$, so $G^*_{V_1,V_2}$ has index at most $2$ in $G^*_\alpha$. If $m>2$, then we are in the same situation as in Lemma \[c1’\] (Case 2) with $i=m=n/2$ and $$|\beta^{G^*_{V_1,V_2}}|= q^{n-2}\frac{(q^m-1)^2}{(q-1)^2} =q^{2(m-1)}\frac{(q^m-1)^2}{(q-1)^2}.$$ If $m=2$, then we have double counted (as $G^*_{V_1,V_2}$ does not fix each of the spaces $V_i\cap V_j'$; in fact it contains an element $x:v_1\leftrightarrow v_2, v_3\leftrightarrow v_4$), and $|\beta^{G^*_{V_1,V_2}}|=q^{2(m-1)}\frac{(q^m-1)^2}{2(q-1)^2}$. In both cases, $|\beta^{G_{\alpha}}|\mid 4q^{2(m-1)}\frac{(q^m-1)^2}{(q-1)^2}.$ By Lemma \[condition 2\](iv), $r$ divides $2|\beta^{G_{\alpha}}|$, and hence $$r\mid 8q^{2(m-1)}\frac{(q^m-1)^2}{(q-1)^2}.$$ Note that $$v=\frac{|X|}{|X_{\alpha}|}=\frac{q^{m^2}(q^{2m}-1)\cdots(q^{m+1}-1)}{2(q^m-1)\cdots(q-1)}>\frac{q^{2m^2}}{2}$$ and in particular $p\mid v$. By Lemma \[bound\](iii), $r_p$ divides 2, and hence $r$ divides $
\frac{8(q^m-1)^2}{(q-1)^2}.
$ This together with $r^2>2v$ leads to $$\label{Eq20}
\frac{64(q^m-1)^4}{(q-1)^4}>q^{2m^2}.$$ It follows that $64\cdot(2q^{m-1})^4>q^{2m^2}$ and so $$2^{10}>q^{2(m^2-2m+2)}\geq 2^{2(m^2-2m+2)}.$$ Hence $10>2(m^2-2m+2)$ and so $m=2$ and $r\mid 8(q+1)^2$. Then we deduce from \eqref{Eq20} that $64(q+1)^4>q^{8}$, which implies that $q=2$ or $3$. Assume $q=2$. Then $r_2\mid 2$, so $r$ divides $2(q+1)^2=18$, contradicting the condition $r^2>2v=560$. Hence $q=3$, $r\mid 2^7$ and $v=5265$. Combining this with $r\mid2(v-1)$ we conclude that $r$ divides $2^5$, again contradicting the condition $r^2>2v$. Thus $t\geq3$ and in particular $n=mt\geq6$ and $G\leq {{\rm \Gamma L}}(n,q)$.
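The eliminations for $m=t=2$ can be checked directly; a short Python verification (not part of the proof):

```python
from math import gcd

def v_m2t2(q):
    # v = q^4 (q^4 - 1)(q^3 - 1) / (2 (q^2 - 1)(q - 1))  for m = t = 2
    return q**4 * (q**4 - 1) * (q**3 - 1) // (2 * (q**2 - 1) * (q - 1))

# q = 2: r divides 2(q+1)^2 = 18, but 18^2 < 2v = 560, so r^2 > 2v fails.
assert v_m2t2(2) == 280 and 18**2 < 2 * 280

# q = 3: r divides gcd(2^7, 2(v-1)) = 2^5, and again r^2 > 2v fails.
v = v_m2t2(3)
assert v == 5265
r_bound = gcd(2**7, 2 * (v - 1))
assert r_bound == 2**5 and r_bound**2 < 2 * v
```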
**Case 3:** $t\geq3$.
Since $|{{\rm GL}}(m,q)|<q^{m^2}$, we have $$|X_{\alpha}|=\frac{t!|{{\rm GL}}(m,q)|^t}{d(q-1)}<\frac{t!q^{n^2/t}}{d(q-1)}.$$ Combining this with the assertion $|X|<2(df)^2|X_{\alpha}|^3$ from Lemma \[bound\](i), we obtain $$|X|<\frac{2f^2(t!)^3q^{3n^2/t}}{d(q-1)^3}<2(t!)^3q^{3n^2/t}.$$ It then follows from $|X|>q^{n^2-2}$ that $q^{n^2-2}<2(t!)^3q^{3n^2/t}$, that is, $$\label{Eq18}
q^{n^2(1-\frac{3}{t})-2}<2(t!)^3.$$ Since $n\geq 2t$, we derive from \eqref{Eq18} that $$\label{Eq19}
2^{4t(t-3)-2}\leq q^{4t(t-3)-2}\leq q^{n^2(1-\frac{3}{t})-2}<2(t!)^3.$$ Hence either $t=3$ or $(t,q)=(4,2)$. Consider the latter case. Here \eqref{Eq18} becomes $2^{n^2/4-2}<2\cdot(4!)^3$ and hence $n\leq8$. As $n\geq2t=8$, we conclude that $n=8$ and $m=2$. However, then $|X|=|{{\rm PSL}}(8,2)|$ and $|X_\alpha|=24|{{\rm GL}}(2,2)|^4$, contradicting the condition $|X|<2(df)^2|X_{\alpha}|^3=2|X_{\alpha}|^3$ from Lemma \[bound\](i).
Thus $t=3$, and ${\alpha}$ is a decomposition $V_{1}\oplus V_{2}\oplus V_3$ with $\dim(V_{1})=\dim(V_{2})=\dim(V_{3})=m=n/3$. Say $$V_1=\langle v_{1},\ldots, v_{m}\rangle, \quad V_2=\langle v_{m+1},\ldots, v_{2m}\rangle, \quad V_3=\langle v_{2m+1},\ldots, v_{3m}\rangle.$$ Let $\beta$ be the decomposition $\langle v_{1},\ldots, v_{m-1},v_{m+1}\rangle\oplus\langle v_{m},v_{m+2},\ldots, v_{2m}\rangle\oplus V_3$. Arguing as in Case 2 we find that $|\beta^{G_{V_1,V_2,V_3}}| =q^{2(m-1)}\frac{(q^m-1)^2}{(q-1)^2}$ if $m\geq3$, or $|\beta^{G_{V_1,V_2,V_3}}| =q^{2(m-1)}\frac{(q^m-1)^2}{2(q-1)^2}$ if $m=2$. Now $G_{V_1,V_2,V_3}$ has index dividing $6$ in $G_\alpha$, so $|\beta^{G_{\alpha}}|$ divides $6q^{2(m-1)}\frac{(q^m-1)^2}{(q-1)^2}$. By Lemma \[condition 2\](iv), $r$ divides $2|\beta^{G_{\alpha}}|$. Since $v=\frac{|{{\rm GL}}(3m,q)|}{|{{\rm GL}}(m,q)|^3 3!}$, it follows that $p$ divides $v$ and so by Lemma \[bound\], $r_p$ divides $2$, and hence $$r\mid 12\frac{(q^m-1)^2}{(q-1)^2},\quad \mbox{so}\quad r^2 < 144 (2q^{m-1})^4 = 2304 q^{4m-4}.$$ Note that $$v=\frac{|{{\rm GL}}(3m,q)|}{|{{\rm GL}}(m,q)|^3 3!}=\frac{q^{3m^2}}{6} \prod_{i=1}^m\frac{q^{2m+i}-1}{q^i-1}\cdot \prod_{i=1}^m \frac{q^{m+i}-1}{q^i-1} >\frac{1}{6}q^{3m^2+2m\cdot m+m\cdot m}=\frac{q^{6m^2}}{6},$$ and since $r^2>2v$, we get $$\frac{q^{6m^2}}{3}<2v<r^2< 2304 q^{4m-4},$$ and so $6912>q^{6m^2-4m+4}\geq 2^{6m^2-4m+4}\geq 2^{20}$, a contradiction.
$\mathcal{C}_{3}$-subgroups {#sec3.3}
---------------------------
Here $G_{\alpha}$ is an extension field subgroup.
\[c3\] Assume Hypothesis \[H\]. Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{3}$.
By Lemma \[eq2\] we have $|X|>q^{n^2-2}$, and by [@PB Proposition 4.3.6], $$X_{\alpha}\cong\mathbb{Z}_a.{{\rm PSL}}(n/s,q^s).\mathbb{Z}_b.\mathbb{Z}_s,$$ where $s$ is a prime divisor of $n$, $d=\gcd(n,q-1)$, $a=\gcd(n/s,q-1)(q^s-1)/(d(q-1))$, and $b=\gcd(n/s,q^s-1)/\gcd(n/s,q-1)$. Thus, $$|X_{\alpha}|=\frac{s|{{\rm GL}}(n/s,q^s)|}{d(q-1)}.$$
**Case 1:** $n=s$.
Here $n$ is a prime, $|X_{\alpha}|=n(q^n-1)/(d(q-1))$, and by Lemma \[bound\](i), $$|X|<2(df)^2|X_{\alpha}|^3
=\frac{2f^2n^3}{d}\left(\frac{q^n-1}{q-1}\right)^3
<2q^2n^3\cdot(2q^{n-1})^3
=16n^3q^{3n-1}.$$ Combining this with $|X|>q^{n^2-2}$ we obtain $$\label{Eq20b}
q^{n^2-3n-1}<16n^3,$$ and so $2^{n^2-3n-1}<16n^3$, which implies $n\leq 5$.
**Subcase 1.1:** $n=5$.
In this case \eqref{Eq20b} implies that $q^{9}<16\cdot5^3$, which leads to $q=2$. However, this means that $|X|=|{{\rm PSL}}(5,2)|$ and $|X_\alpha|=5\cdot31$, contradicting the condition $|X|<2(df)^2|X_{\alpha}|^3=2|X_{\alpha}|^3$ from Lemma \[bound\](i).
**Subcase 1.2:** $n=3$.
Then $X={{\rm PSL}}(3,q)$, $|X_\alpha|=3(q^2+q+1)/d$ and so $v=q^3(q^2-1)(q-1)/3$. It follows from Lemma \[bound\](ii) that $r$ divides $2df|X_{\alpha}|=6f(q^2+q+1)$. Combining this with $r^2>2v$, we obtain that $54f^2(q^2+q+1)^2>q^3(q^2-1)(q-1)$, that is, $$54f^2>\frac{q^6-q^5-q^4+q^3}{(q^2+q+1)^2}.$$ This inequality holds only when $$q\in\{2,3,4,5,7,8,9,16,32\}.$$ Let $R=\gcd(6f(q^2+q+1), 2(v-1))$. Then $r$ is a divisor of $R$. For each $q$ and $f$ as above, the possible values of $v$ and $R$ are listed in Table \[tab3\].
  $q$   $v$       $R$     $q$    $v$           $R$
  ----- --------- ------- ------ ------------- --------
  $2$   $8$       $14$    $8$    $75264$       $146$
  $3$   $144$     $26$    $9$    $155520$      $182$
  $4$   $960$     $14$    $16$   $5222400$     $182$
  $5$   $4000$    $186$   $32$   $346390528$   $6342$
  $7$   $32928$   $38$

  : Possible values of $q$, $v$ and $R$[]{data-label="tab3"}
Hence the condition $r^2>2v$ implies that $q\in \{2,3,5\}$.
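The entries of Table \[tab3\] can be reproduced mechanically. The sketch below is a verification aid, not part of the proof; the prime-power data $(q,f)$ is hard-coded via a small helper, and $v$ and $R$ are recomputed from the formulas above.

```python
from math import gcd, log

# Verification aid (not part of the proof): recompute Table tab3.
table = {2: (8, 14), 3: (144, 26), 4: (960, 14), 5: (4000, 186),
         7: (32928, 38), 8: (75264, 146), 9: (155520, 182),
         16: (5222400, 182), 32: (346390528, 6342)}

def f_of(q):
    # exponent f with q = p^f, p the prime dividing q
    p = next(t for t in (2, 3, 5, 7) if q % t == 0)
    return round(log(q, p))

for q, (v_tab, R_tab) in table.items():
    v = q**3 * (q**2 - 1) * (q - 1) // 3            # v = q^3(q^2-1)(q-1)/3
    R = gcd(6 * f_of(q) * (q**2 + q + 1), 2 * (v - 1))
    assert (v, R) == (v_tab, R_tab)
    assert (R**2 > 2 * v) == (q in {2, 3, 5})       # r^2 > 2v survives only there
```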
Assume $q=2$. Then $v=8$ and $r$ divides $14$. From $r(k-1)=2(v-1)$ and $r\geq k\geq 3$ we deduce that $r=7$ and $k=3$, which contradicts the condition that $bk=vr$. Similarly, we have $q\neq 5$ (two cases to check: $(r,k)\in\{(186,44),(93,87)\}$). Hence $q=3$. By Table \[tab3\], $v=144$ and $r$ divides $26$. Then from $r(k-1)=2(v-1)$, $bk=vr$ and $r\geq k\geq 3$, we deduce that $r=26$, $k=12$ and $b=312$. Since $|X_\alpha|=39$, Lemma \[condition 2\](ii) implies that $G>X$. Since ${{\rm Out}}(X)$ has size $2$, we must have $G=X.2$ (with graph automorphism). By flag-transitivity, a block stabiliser must have index $312$ and have an orbit of size $12$. We checked with <span style="font-variant:small-caps;">Magma</span>, considering every subgroup of index $312$, and only one has an orbit of size 12 (which is unique), and the orbit of that block under $G$ does not yield a $2$-design.

**Case 2:** $n\geq2s$.
By Lemma \[eq2\] we have $$|X_\alpha|=\frac{s|{{\rm GL}}(n/s,q^s)|}{d(q-1)}\leq \frac{s(1-q^{-s})(1-q^{-2s})q^{n^2/s}}{d(q-1)}
<\frac{sq^{n^2/s}}{d(q-1)}.$$ Moreover $|X_\alpha|_p=s_p\cdot q^{n(n-s)/2s}$ and $|X|_p=q^{n(n-1)/2}$. We deduce that $p$ divides $v=|X:X_\alpha|$, so by Lemma \[bound\](iii), $r_p$ divides $2$, and $$|X|<2(df)^2|X_{\alpha}|_{p'}^2|X_{\alpha}|=2(df)^2|X_{\alpha}|^3/|X_{\alpha}|_p^2
<\frac{2f^2s^3q^{(3n^2/s)-n(n-s)/s}}{(s_p)^2d(q-1)^3}
\leq \frac{n^3}4 q^{(2n^2/s) +n}.$$ For the last inequality, we used that $s\leq n/2$ and $f^2\leq (q-1)^3$. Combining this with $|X|>q^{n^2-2}$ we obtain $$\label{Eq21}
4q^{(1-2/s)n^2-n-2}\leq n^3.$$
**Subcase 2.1:** $s\geq 3$.
Then $n\geq 2s\geq 6$ and \eqref{Eq21} implies that $$n^3\geq 4q^{(1-2/s)n^2-n-2}\geq 4q^{(n^2/3)-n-2}\geq 2^{(n^2/3)-n}.$$ We easily see that this inequality only holds for $n\leq 6$. Therefore $n=2s=6$, and so \eqref{Eq21} implies that $q=2$. It follows that $X={{\rm PSL}}(6,2)$ and $|X_\alpha|=3|{{\rm GL}}(2,8)|=2^3\cdot3^3\cdot7^2$, so we can compute $v=|X|/|X_\alpha|=2^{12}\cdot3\cdot5\cdot31$ and $v-1=11\cdot 173149$. We know that $r\mid 2(v-1)$. By Lemma \[bound\](iii), we also know that $r\mid 2df|X_\alpha|_{p'}=2\cdot3^3\cdot7^2$, thus $r\mid 2$, contradicting $r^2>2v$.
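The numerical data in this subcase can be checked directly. The following sketch is a verification aid only, not part of the proof; the helper `gl_order` is introduced here for the check.

```python
from math import gcd

# Verification aid (not part of the proof): X = PSL(6,2), |X_alpha| = 3|GL(2,8)|.
def gl_order(n, q):
    """Order of GL(n, q)."""
    o = 1
    for i in range(n):
        o *= q**n - q**i
    return o

x = gl_order(6, 2)            # |PSL(6,2)| = |GL(6,2)| here (d = 1, q - 1 = 1)
xa = 3 * gl_order(2, 8)
assert xa == 2**3 * 3**3 * 7**2
v = x // xa
assert v == 2**12 * 3 * 5 * 31
assert v - 1 == 11 * 173149
assert gcd(2 * 3**3 * 7**2, 2 * (v - 1)) == 2    # so r | 2
```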
**Subcase 2.2:** $s=2$.
Then $n=2m\geq4$ and $n$ is even, $$|X_\alpha|=\frac{2|{{\rm GL}}(n/2,q^2)|}{d(q-1)}= \frac{2q^{n(n-2)/4}(q^n-1)(q^{n-2}-1)\cdots (q^{2}-1)}{d(q-1)},$$ and $$v=\frac{q^{n^2/4}(q^{n-1}-1)(q^{n-3}-1)\cdots(q^{3}-1)(q-1)}{2}.$$ As we observed, $r_p\mid 2$. Also $v$ is even, and so, from $r(k-1)=2(v-1)$ we deduce that $4\nmid r$.
First assume that $n=4$. Then $$|X_\alpha|=\frac{2q^{2}(q^4-1)(q+1)}{d}
\quad\text{and}\quad
v=\frac{q^4(q^3-1)(q-1)}{2}.$$ By Lemma \[bound\](iii), $r$ divides $2df|X_\alpha|_{p'}$ and hence $r\mid 2f(q^4-1)(q+1)$, which can be rewritten as $r\mid 2f(q^2+1)(q-1)(q+1)^2$. Note that $$v-1=\frac{(q+1)(q^7-2q^6+2q^5-3q^4+4q^3-4q^2+4q-4)}{2}+1,$$ so that $\gcd(v-1,q+1)=1$. Hence, since $r\mid 2(v-1)$, it follows that $\gcd(r,q+1)\mid 2$. Moreover, it follows from $(q-1)\mid v$ that $\gcd(r,q-1)\mid 2$. Combining this with $4\nmid r$ and $r\mid 2f(q^4-1)(q+1)$, we obtain $r\mid 2f(q^2+1)$. Therefore, using Lemma \[condition 1\](iv), $$4f^2(q^2+1)^2\geq r^2>2v=q^4(q^3-1)(q-1).$$ However, there is no $q=p^f$ satisfying $4f^2(q^2+1)^2>q^4(q^3-1)(q-1)$, a contradiction.
Thus $n\geq 6$. Recall that $\tilde{X}={{\rm SL}}(n,q)\leq \tilde{G}\leq{{\rm \Gamma L}}(n,q)$, acting unfaithfully on $\P$ (with kernel a scalar subgroup of $\tilde{G}$). We regard $V$ as an $m$-dimensional vector space over $\mathbb{F}_{q^2}$ with basis $\{e_1,e_2,\ldots,e_m\}$ and $\tilde{G}_\alpha$ the subgroup of $\tilde{G}$ preserving this vector space structure. Take $w\in\mathbb{F}_{q^2}\backslash\mathbb{F}_q$. Then $$V=\langle e_1,e_2,\ldots,e_m\rangle_{\mathbb{F}_{q^2}}
=\langle e_1,we_1,e_2,we_2,\ldots,e_m,we_m\rangle_{\mathbb{F}_{q}}.$$ Let $$W=\langle e_1,e_2\rangle_{\mathbb{F}_{q^2}}
=\langle e_1,we_1,e_2,we_2\rangle_{\mathbb{F}_{q}}.$$ Consider $g\in {{\rm SL}}(n,q)$ defined by $$\begin{cases}
e^g_1=e_1,~e^g_2=-e_2,~(we_1)^g=we_2,~(we_2)^g=we_1; \\
(e_i)^g=e_i,~(we_i)^g=we_i &\text{for }3\leq i\leq m.
\end{cases}$$ Then $g$ does not fix $\alpha$. Let $\beta=\alpha^g$ and let $\tilde{G}_{\alpha,(W)}$ be the subgroup of $\tilde{G}_\alpha$ fixing every vector of $W$. Note that $W^g=\langle e_1,we_1,-e_2,we_2\rangle_{\mathbb{F}_{q}}=W$ and so $\tilde{G}_{\alpha,(W)}\leq \tilde{G}_{\alpha,\beta}$. Now ${{\rm SL}}(n,q)_{\alpha,(W)}$ contains $I_4\times{{\rm SL}}(n/2-2,q^2)$, and since this subgroup intersects the scalar subgroup trivially it follows that ${X}_{\alpha,(W)}$ contains a subgroup isomorphic to ${{\rm SL}}(n/2-2,q^2)$ (and so do ${G}_{\alpha,(W)}$, ${G}_{\alpha,\beta}$, and ${X}_{\alpha,\beta}$). By Lemma \[L:subgroupdiv\], $r$ divides $4df|X_\alpha|/|{{\rm SL}}(\frac{n}{2}-2,q^2)|=8fq^{2n-6}(q^n-1)(q^{n-2}-1)(q+1)$. Combining this with $r_p\mid 2$ and $4\nmid r$, we obtain $$\label{Eq22b}
r\mid 2f(q^n-1)(q^{n-2}-1)(q+1).$$ Then from $r^2>2v$ and $$2v=q^{n^2/4}(q^{n-1}-1)(q^{n-3}-1)\cdots(q^{3}-1)(q-1)$$ we deduce that $$\label{Eq22}
4f^2(q^n-1)^2(q^{n-2}-1)^2(q+1)^2
>q^{n^2/4}(q^{n-1}-1)(q^{n-3}-1)\cdots(q^3-1)(q-1),$$ and so $$4q^2(q^n)^2(q^{n-2})^2(2q)^2
>q^{n^2/4}q^{n-2}q^{n-4}\cdots q^4q^2=q^{(n^2-n)/2}.$$ Therefore, $$2^4q^{4n}>q^{(n^2-n)/2}.$$ This implies that $$2^4>q^{n(n-9)/2}\ge 2^{n(n-9)/2},$$ and hence $n\le 8$ (since $n$ is even).
Assume that $n=8$. By \eqref{Eq22} we have that $$4f^2(q^{8}-1)^2(q^{6}-1)^2(q+1)^2
>q^{16}(q^{7}-1)(q^{5}-1)(q^3-1)(q-1),$$ and this implies that $q\in\{2,3,4\}$. By \eqref{Eq22b}, $r$ divides $
u:=2f(q^{8}-1)(q^{6}-1)(q+1),
$ and hence $r$ divides $R:=\gcd(2(v-1),u)$. However, for each $q\in\{2,3,4\}$, we find $R^2<2v$, contradicting the fact that $r^2>2v$.
Hence $n=6$, and here $r\mid 2f(q^6-1)(q^4-1)(q+1)$ by \eqref{Eq22b}, which can be rewritten as $r\mid 2f(q^2-q+1)(q^2+1)(q^3-1)(q-1)(q+1)^3$. Recall that $r\mid 2(v-1)$, and in this case $2(v-1)=q^9(q^5-1)(q^3-1)(q-1)-2$, which is congruent to $6$ modulo $q+1$. Thus $\gcd(2(v-1),q+1)$ divides $6$, and so $\gcd(r,q+1)$ divides 6. On the other hand, $(q^3-1)(q-1)$ divides $v$, so $\gcd(r,(q^3-1)(q-1))$ divides 2. Recall that $4\nmid r$. We conclude that $r\mid 54f(q^2-q+1)(q^2+1)$. Thus $$2916f^2(q^2-q+1)^2(q^2+1)^2\geq r^2>2v=q^9(q^5-1)(q^3-1)(q-1),$$ which implies that $q=2$. It then follows that $v=55,552$ and $r\mid 810$. However, as $r\mid 2(v-1)$, we conclude that $r\mid 6$, contradicting $r^2>2v$.
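The closing arithmetic for $n=6$, $s=2$, $q=2$ is easily confirmed by machine. This sketch is a verification aid only, not part of the proof, and simply recomputes $v$ and the divisor bound on $r$ derived above.

```python
from math import gcd

# Verification aid (not part of the proof): the n = 6, s = 2, q = 2 endgame.
q, f = 2, 1
v = q**9 * (q**5 - 1) * (q**3 - 1) * (q - 1) // 2
assert v == 55552
u = 54 * f * (q**2 - q + 1) * (q**2 + 1)         # r | 54 f (q^2-q+1)(q^2+1)
assert u == 810
assert gcd(u, 2 * (v - 1)) == 6                  # so r | 6
assert 6**2 < 2 * v                              # contradicting r^2 > 2v
```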
$\mathcal{C}_{4}$-subgroups {#sec3.4}
---------------------------
Here $G_{\alpha}$ stabilises a tensor product $V_1 \otimes V_2$, where $V_1$ has dimension $a$, for some divisor $a$ of $n$, and $V_2$ has dimension $n/a$, with $2\leq a<n/a$, that is $2\leq a<\sqrt{n}$. In particular $n\geq 6$. Recall that $d=\gcd(n,q-1)$.
\[c4\] Assume Hypothesis \[H\]. Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{4}$.
According to [@PB Proposition 4.4.10], we have $$|X_{\alpha}|=\frac{\gcd(a,n/a,q-1)}{d}
\cdot|{{\rm PGL}}(a,q)|\cdot|{{\rm PGL}}(n/a,q)|.$$ By Lemma \[eq2\], $$|X_{\alpha}|\leq|{{\rm PGL}}(a,q)|\cdot|{{\rm PGL}}(n/a,q)|
<\frac{(1-q^{-1})q^{a^2}}{q-1}\cdot
\frac{(1-q^{-1})q^{n^2/a^2}}{q-1}
=q^{a^2+(n^2/a^2)-2}.$$ Let $f(a)=a^2+{\frac{n^2}{a^2}}-2=(a+\frac{n}{a})^2-2-2n$. This is a decreasing function of $a$ on the interval $(2,\sqrt{n})$, and hence $f(a)\leq f(2)= (n^2/4)+2$. Hence $|X_{\alpha}|<q^{a^2+(n^2/a^2)-2}\leq q^{(n^2/4)+2}$. By Lemma \[bound\](i), $$|X|<2(df)^2|X_{\alpha}|^3
<2d^2f^2q^{(3n^2/4)+6}<2q^{(3n^2/4)+10}.$$ Combining this with the fact that $|X|>q^{n^2-2}$ (from Lemma \[eq2\]), we obtain $$q^{(n^2/4)-12}<2.$$ Therefore, $n^2/4\leq 12$, which implies that $n=6$, and hence that $a=2$. Thus $$|X_{\alpha}|=\frac{q^4(q^3-1)(q^2-1)^2}{d}
\quad\text{and}\quad
v=q^{11}(q^6-1)(q^5-1)(q^2+1).$$ Consequently, $p\mid v$ and $v$ is even. By Lemma \[bound\](iii), $r_p$ divides 2, $4\nmid r$, and $r$ divides $2df|X_\alpha|_{p'}$ and hence $r$ divides $2f(q^3-1)(q^2-1)^2$. Note that $(q^3-1)(q+1)\mid q^6-1$ and $q-1\mid q^5-1$, so $(q^3-1)(q^2-1)$ divides $v$. We conclude that $\gcd(r,(q^3-1)(q^2-1))$ divides $2$. Hence, $r\mid 2f(q^2-1)$, contradicting the condition $r^2>2v$.
$\mathcal{C}_{5}$-subgroups {#sec3.5}
---------------------------
Here $G_{\alpha}$ is a subfield subgroup of $G$ of type ${{\rm GL}}(n,q_{0})$, where $q=p^f=q_{0}^s$ for some prime divisor $s$ of $f$.
\[c5\] Assume Hypothesis \[H\]. Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{5}$.
According to [@PB Proposition 4.5.3], $$X_{\alpha}\cong\frac{q-1}{d\cdot
{{\rm lcm}}(q_0-1,(q-1)/\gcd(n,q-1))}{{\rm PGL}}(n,q_{0})$$ and, setting $d_0= \gcd(n, (q-1)/(q_0-1))$ (a divisor of $d$), by Lemma \[gcd\](i) we have $$\begin{aligned}
\label{Eq23a}
|X_{\alpha}|&=\frac{d_0}{d}\cdot|{{\rm PGL}}(n,q_{0})|
=\frac{d_0}{d}\cdot q_{0}^{n(n-1)/2}(q_{0}^n-1)(q_{0}^{n-1}-1)\cdots(q_{0}^2-1).\end{aligned}$$ In particular, the $p$-part $|X_\alpha|_p=q_0^{n(n-1)/2}$ is strictly less than $|X|_p=q^{n(n-1)/2}$, so $v=|X|/|X_\alpha|$ is divisible by $p$, and hence, by Lemma \[bound\](iii), $r_p$ divides 2, and $2(df)^2|X_\alpha|^2_{p'}|X_\alpha|>|X|$. Hence $$q^{n^2-2} < |X| < 2d^2f^2 q_{0}^{n(n-1)/2}\cdot \frac{d_0^3}{d^3}\cdot((q_{0}^n-1)
(q_{0}^{n-1}-1)\cdots(q_{0}^2-1))^3.$$ Since $d_0\leq d<q$, $f<q$ and $2\leq q_0$, this implies that $$\label{Eq23}
q^{n^2-2} < 2d^2f^2\cdot q_{0}^{n(n-1)/2}\cdot q_{0}^{3(n+2)(n-1)/2} < q_0\cdot q^4\cdot q_{0}^{2n^2+n-3}.$$ As $q=q_{0}^s$, we have $s(n^2-2) < 4s+2n^2+n-2$, so $$2n^2+n-3\geq s(n^2-6).$$
**Case 1:** $s\geq 5$.
Then $2n^2+n-3\geq 5(n^2-6)$, and so $n=3$. However, the first inequality in \eqref{Eq23} then implies $$q^{7}<2\cdot 3^2\cdot q^2\cdot q_{0}^{18},$$ that is, $q_{0}^{5s-18}<18$. This is not possible as $q_{0}^{5s-18}\geq q_0^7\geq2^7$.
**Case 2:** $s=3$, that is $q=q_0^3$.
Then $2n^2+n-3\geq 3(n^2-6)$, and so $n=3$ or $4$. Suppose $n=4$. Then the first inequality in \eqref{Eq23} implies $$q^{14}<2\cdot 4^2\cdot q^2\cdot q_{0}^{33},$$ that is, $32>q_{0}^3$. This leads to $q_{0}=2$ or $3$, and so $q=q_{0}^3=8$ or $27$, which does not satisfy the first inequality in \eqref{Eq23}, a contradiction. Therefore, $n=3=s$, and examining $d=\gcd(3,q-1)$ and $d_0=\gcd(3,q_0^2+q_0+1)$, we see that $d_0=d\in\{1,3\}$. The inequality $|X|<2(df)^2|X_\alpha|^2_{p'}|X_\alpha|$ from Lemma \[bound\](iii) becomes (using \eqref{Eq23a}) $$q^3(q^3-1)(q^2-1)/d=|X|<2d^2f^2q_{0}^3
\cdot(q_{0}^3-1)^3(q_{0}^2-1)^3,$$ or equivalently, since $q=q_0^3$, $$q_{0}^6(q_{0}^9-1)(q_{0}^6-1)<2d^3f^2\cdot(q_{0}^3-1)^3(q_{0}^2-1)^3.$$ Since $(q_{0}^3-1)^3(q_{0}^2-1)^3<(q_{0}^9-1)(q_{0}^6-1)$ and $d\leq n=3$, it follows that $$\label{Eq24}
q_{0}^6< 2d^3f^2\leq 54f^2.$$ As $3\mid f$ and $q_0=p^{f/3}$, we then conclude that $f=3$ and $q_{0}=2$, but this means that $d=1$, contradicting the first inequality of \eqref{Eq24}.
**Case 3:** $s=2$, that is $q=q_0^2$.
In this case, $d_0=\gcd(n,q_0+1)$ in the expression for $|X_\alpha|$ in \eqref{Eq23a}. Let $a\in \mathbb{F}_q\backslash \mathbb{F}_{q_{0}}$ and consider $$g=\begin{pmatrix} a & &\\ & a^{-1}&\\&&I_{n-2} \end{pmatrix}\in \tilde{X}={{\rm SL}}(n,q).$$ Now $g$ does not preserve $\alpha$. Let $\beta=\alpha^g\ne \alpha$. Then $$\left\{
\begin{pmatrix}
1 & & \\
&1 & \\
& & B
\end{pmatrix}
\,\middle|\,
B\in {{\rm SL}}(n-2,q_{0})
\right\}
\leq \tilde{X}_{\alpha}\cap (\tilde{X}_{\alpha})^g=\tilde{X}_{\alpha\beta}.$$ Since this subgroup intersects the scalar subgroup trivially, $X_{\alpha\beta}$ contains a subgroup isomorphic to ${{\rm SL}}(n-2,q_{0})$, and hence so does $G_{\alpha\beta}$. By Lemma \[L:subgroupdiv\], $r$ divides $4df|X_{\alpha}|/|{{\rm SL}}(n-2,q_{0})|$. Thus, using \eqref{Eq23a}, $$r\mid 4fd_0q_{0}^{2n-3}(q^n_{0}-1)(q^{n-1}_{0}-1).$$ Recall that $r_{p}\mid 2$. Moreover, $$v=\frac{|X|}{|X_\alpha|}=\frac{q_{0}^{n(n-1)/2}(q_{0}^n+1)(q_{0}^{n-1}+1)\cdots(q_{0}^2+1)}
{d_0}$$ is even, and so $4\nmid r$. Therefore, $$\label{Eq25}
r\mid 2fd_0(q^n_{0}-1)(q^{n-1}_{0}-1).$$ From $r^2>2v$, that is to say, $r^2/2 > v$, we see that $$\label{Eq26}
2f^2d_0^2(q^n_{0}-1)^2(q^{n-1}_{0}-1)^2
>\frac{q_{0}^{n(n-1)/2}(q_{0}^n+1)(q_{0}^{n-1}+1)\cdots(q_{0}^2+1)}{d_0},$$ and so, using $f<q=q_0^2$, $$2d_0^3q_{0}^{4n+2}>q_{0}^{n^2-1},$$ that is, $2\gcd(n,q_{0}+1)^3=2d_0^3>q_{0}^{n^2-4n-3}$. If $n\geq6$, then it follows that $2(q_{0}+1)^3>q_{0}^{9}$, a contradiction. Thus $3\leq n\leq 5$.
Assume that $n=5$, so $2d_0^3> q_0^2$. It follows that $d_0\neq 1$, and so $d_0=\gcd(5,q_{0}+1)=5$. This together with $250>q_{0}^2$ implies that $q_{0}\in\{4,9\}$. In either case $f=4$, and the inequality \eqref{Eq26} does not hold, a contradiction. Hence $n\leq 4$.
Since ${{\rm PSL}}(n,q_{0})\lhd X_\alpha$ and $r_{p}\mid 2_{p}$, by Lemma \[parabolic\], $r$ is divisible by the index of a parabolic subgroup of ${{\rm PSL}}(n,q_{0})$, that is, the number of $i$-spaces for some $i\leq n/2$.
**Subcase 3.1:** $n=4$. There are $(q_0+1)(q_0^2+1)$ 1-spaces and $(q_0^2+1)(q_0^2+q_0+1)$ 2-spaces, so $q^2_{0}+1$ divides $r$. Moreover, it follows from $v=q^6_{0}(q^4_{0}+1)(q^3_{0}+1)(q^2_{0}+1)/\gcd(4,q_{0}+1)$ that $q^2_{0}+1$ divides $v$, since $\gcd(4,q_{0}+1)$ is a divisor of $q_0^3+1$. Therefore, $q^2_{0}+1$ divides $\gcd(r,v)$. However $r\mid 2(v-1)$ and hence $\gcd(r,v)\mid 2$, and this implies that $q^2_{0}+1$ divides $2$, a contradiction.
**Subcase 3.2:** $n=3$. Here the number $q_0^2+q_0+1$ of 1-spaces must divide $r$. Since $r\mid 2(v-1)$ and $q^2_{0}+q_{0}+1$ is odd, it follows that $q^2_{0}+q_{0}+1$ divides $v-1$. On the other hand $v=q^3_{0}(q^3_{0}+1)(q^2_{0}+1)/d_0$, and it follows that $\gcd(v-1,q^2_{0}+q_{0}+1)=q^2_{0}+q_{0}+1$ must divide $2q_{0}+d_0$. This implies that $q_{0}=2$ and $d_0=\gcd(3,q_0+1)=3$. Therefore, $7\mid r$, $f=2$ and $v=120$. However, from and $r\mid 2(v-1)$ we obtain $r=7$ or $14$, contradicting $r^2>2v$.
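The final step of Subcase 3.2 is explicit integer arithmetic. The sketch below is a verification aid only, not part of the proof; the divisor bound comes from \eqref{Eq25} with $n=3$.

```python
from math import gcd

# Verification aid (not part of the proof): Subcase 3.2 with q0 = 2, d0 = 3.
q0, f, d0 = 2, 2, 3
v = q0**3 * (q0**3 + 1) * (q0**2 + 1) // d0
assert v == 120
bound = 2 * f * d0 * (q0**3 - 1) * (q0**2 - 1)   # divisor bound from (Eq25)
r_max = gcd(bound, 2 * (v - 1))
assert r_max == 14                               # so r = 7 or 14
assert r_max**2 < 2 * v                          # contradicting r^2 > 2v
```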
$\mathcal{C}_{6}$-subgroups {#sec3.6}
---------------------------
Here $G_{\alpha}$ is of type $t^{2m}\cdot{{\rm Sp}}_{2m}(t)$, where $n=t^m$ for some prime $t\ne p$ and positive integer $m$, and moreover $f$ is odd and is minimal such that $t\gcd(2,t)$ divides $q-1=p^f-1$ (see [@PB Table 3.5.A]).
\[c6\] Assume Hypothesis \[H\]. Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{6}$.
From [@PB Propositions 4.6.5 and 4.6.6] we have $|X_\alpha|\leq t^{2m}|{{\rm Sp}}_{2m}(t)|$, and from Lemma \[eq2\] we have $|{\rm Sp}_{2m}(t)|<t^{m(2m+1)}$. Moreover $t<q$, since $t\gcd(2,t)$ divides $q-1$. Hence $|X_\alpha|<t^{2m+m(2m+1)}<q^{2m^2+3m}$. By Lemma \[bound\](i), recalling that $d=\gcd(n,q-1)$, $$|X|<2(df)^2|X_{\alpha}|^3
<2d^2f^2q^{6m^2+9m}<2q^{6m^2+9m+4}.$$ Combining this with the fact that $|X|>q^{n^2-2}=q^{t^{2m}-2}$ (by Lemma \[eq2\]), we obtain $$q^{t^{2m}-(6m^2+9m+6)}<2.$$ Therefore, $$\label{Eq27}
t^{2m}\leq 6m^2+9m+6.$$ As $t\geq 2$, we deduce that $2^{2m}\leq 6m^2+9m+6$, and hence $m\leq 3$.
**Case 1:** $m=1$.

Here $t=n\geq3$, so $t$ is an odd prime, and from \eqref{Eq27} we have $t^2\leq 21$. Hence $t=n=3$, so that $t\gcd(2,t)=3$ divides $q-1$, and $d=\gcd(n,q-1)=3$. Also $|X_\alpha|\leq t^{2m}|{{\rm Sp}}_{2m}(t)|=3^2|{\rm Sp}_2(3)|=2^3\cdot 3^3$, and then it follows from $q^{n^2-2}<|X|<2(df)^2|X_\alpha|^3$ that $$q^7<2(3f)^2|X_\alpha|^3\leq2\cdot(3f)^2\cdot(2^3\cdot 3^3)^3=f^2\cdot2^{10}\cdot 3^{11}.$$ This inequality, together with the fact that $f$ is odd and is minimal such that $t\gcd(2,t)=3$ divides $p^f-1$, implies that $q\in\{7,13\}$, and hence also that $f=1$. In particular, $q\equiv4$ or $7\pmod{9}$, so that, by [@PB Proposition 4.6.5], we have $X_\alpha\cong3^2.Q_8$. According to Lemma \[bound\](ii), $r$ divides $2df|X_{\alpha}|=432$. Thus $r$ divides $R:=\gcd(432, 2(v-1))$. If $q=7$ then $v=2^2\cdot7^3\cdot19$, and so $R=6$; and if $q=13$, then $v=2^2\cdot7\cdot13^3\cdot 61$, and again $R=6$. Then $R^2<2v$, contradicting $r^2>2v$.

**Case 2:** $m=2$.

In this case \eqref{Eq27} shows that $t^4\leq 48$ and so $t=2$ and $n=4$. Thus $|X_\alpha|\leq t^{2m}|{{\rm Sp}}_{2m}(t)|=2^{4}|{\rm Sp}_{4}(2)|<2^{14}$. From [@PB Proposition 4.6.6] we see that $q=p\equiv1\pmod 4$. In particular, $f=1$ and $d=4$. Then the condition $q^{n^2-2}<|X|<2(df)^2|X_\alpha|^3$ implies that $$q^{14}<2\cdot4^2\cdot(2^{14})^3=2^{47},$$ which yields $q=5$. Then by [@PB Proposition 4.6.6] we have $X_\alpha\cong2^4.\mathrm{A}_6$. Therefore, $v=|X|/|X_\alpha|=5^5\cdot13\cdot31$. By Lemma \[bound\](ii), $r$ divides $2df|X_{\alpha}|=2^{10}\cdot 3^2\cdot5$. This together with $r\mid 2(v-1)$ implies that $r\mid 4$, contradicting the condition $r^2>2v$.
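The order computations in Case 2 can be confirmed mechanically. The sketch below is a verification aid only, not part of the proof; the helper `gl_order` is introduced here for the check.

```python
from math import gcd

# Verification aid (not part of the proof): Case 2 with X = PSL(4,5).
def gl_order(n, q):
    """Order of GL(n, q)."""
    o = 1
    for i in range(n):
        o *= q**n - q**i
    return o

x = gl_order(4, 5) // (5 - 1) // 4    # |PSL(4,5)|, with d = gcd(4, 5-1) = 4
xa = 2**4 * 360                       # |2^4.A_6|
v = x // xa
assert v == 5**5 * 13 * 31
assert 2 * 4 * 1 * xa == 2**10 * 3**2 * 5       # 2 d f |X_alpha|
assert gcd(2 * 4 * 1 * xa, 2 * (v - 1)) == 4    # so r | 4
```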
**Case 3:** $m=3$.

We conclude similarly (using [@PB Proposition 4.6.6]) that $t=2$, $n=8$, $q=p\equiv1\pmod 4$ (so $f=1$) and $|X_\alpha|<2^{27}$. However, this together with $q^{n^2-2}<|X|<2(df)^2|X_\alpha|^3$ implies that $q^{62}<2^{82}\gcd(8,q-1)^2<2^{88}$. Thus $q=2$, contradicting $q\equiv1\pmod 4$.
$\mathcal{C}_{7}$-subgroups {#sec3.7}
---------------------------
Here $G_\alpha$ is a tensor product subgroup of type ${{\rm GL}}(m,q)\wr {\rm S}_t$, where $t\geq 2$, $m\geq 3$ and $n=m^t$ (see [@PB Table 3.5.A]).
\[c7\] Assume Hypothesis \[H\]. Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{7}$.
From [@PB Proposition 4.7.3] we deduce that $|X_{\alpha}|\leq |{{\rm PGL}}(m,q)|^t\cdot t!$. This together with Lemma \[eq2\] implies that $|X_{\alpha}|<q^{t(m^2-1)}\cdot t!$. Then by Lemma \[bound\](i), $$|X|<2(df)^2|X_{\alpha}|^3
<2d^2f^2q^{3t(m^2-1)}\cdot (t!)^3<q^{3t(m^2-1)+5}\cdot (t!)^3.$$ Combining this with the fact that $|X|>q^{n^2-2}=q^{m^{2t}-2}$ (by Lemma \[eq2\]), we obtain $$\label{Eq28}
(t!)^3 > q^{m^{2t}-3t(m^2-1)-7} \geq 2^{m^{2t}-3t(m^2-1)-7}.$$ Let $f(m)=m^{2t}-3t(m^2-1)-7$. It is straightforward to check that $f(m)$ is an increasing function of $m$, for $m\geq 3$, and hence $f(m)\geq f(3)= 3^{2t}-24t-7$. Thus implies that $$2^{3^{2t}-24t-7}< (t!)^3 \leq t^{3t}.$$ Taking logarithms to base 2 we have $3^{2t}-24t-7 < 3t\log_2(t)$, which has no solutions for $t\geq2$.
$\mathcal{C}_{8}$-subgroups {#sec3.8}
---------------------------
Here $G_\alpha$ is a classical group in its natural representation.
\[c8.1\] Assume Hypothesis \[H\]. If the point-stabilizer $G_\alpha\in\mathcal{C}_{8}$, then $G_\alpha$ cannot be symplectic.
Suppose for a contradiction that $G_\alpha$ is a symplectic group in $\mathcal{C}_{8}$. Then by [@PB Proposition 4.8.3], $n$ is even, $n\geq 4$, and $$X_{\alpha}\cong {\rm PSp}(n,q)\cdot\left[\frac{\gcd(2,q-1)\gcd(n/2,q-1)}{d}\right],$$ where $d=\gcd(n,q-1)$. For convenience we will also use the notation $d'= \gcd(n/2,q-1)$ in this proof. Therefore, $$|X_\alpha|=q^{n^2/4}(q^n-1)(q^{n-2}-1)\cdots(q^2-1)d'/d,$$ and so $$v=\frac{|X|}{|X_\alpha|}=\frac{q^{(n^2-2n)/4}(q^{n-1}-1)(q^{n-3}-1)\cdots(q^3-1)}{d'},$$ so in particular $p\mid v$. By Lemma \[bound\](iii), $r_p$ divides 2. Since ${\rm PSp}(n,q){\trianglelefteq}X_{\alpha}$, except for $(n,q)=(4,2)$, we can apply Lemma \[parabolic\], and so in these cases $r$ is divisible by the index of a parabolic subgroup of ${\rm PSp}(n,q)$. We first treat the case $n=4$.
**Case 1:** $n=4$.
In this case, $$X_{\alpha}\cong {\rm PSp}(4,q)\cdot\left[\frac{\gcd(2,q-1)^2}{\gcd(4,q-1)}\right]
\quad\text{and}\quad
v=\frac{q^2(q^3-1)}{\gcd(2,q-1)}.$$ If $(n,q)=(4,2)$, then a <span style="font-variant:small-caps;">Magma</span> computation shows that the subdegrees of $G$ are $12$ and $15$, so by Lemma \[condition 2\](iv), $r\mid\gcd(24,30)=6$, contradicting $r^2>2v$. Since $X\cong {\rm A}_8$, using [@Biplane1 Theorem 1] for symmetric designs and [@Liang2 Theorem 1.1] for non-symmetric designs also rules out this case. Hence $(n,q)\ne (4,2)$. Then, since the indices of the parabolic subgroups ${\rm P}_1$ and ${\rm P}_2$ in ${\rm PSp}(4,q)$ are both equal to $(q+1)(q^2+1)$, it follows that $(q+1)(q^2+1)\mid r$ and, since $r\mid 2(v-1)$, that $(q+1)(q^2+1)$ divides $2(v-1)$. Suppose first that $q$ is even. Then $$2(v-1) = 2q^2(q^3-1)-2 = 2(q^2+1)(q^3-q-1) +2q,$$ which is not divisible by $q^2+1$. Thus $q$ is odd, and we have $$2(v-1) = q^2(q^3-1)-2 = (q^2+1)(q^3-q-1) +q-1,$$ and again this is not divisible by $q^2+1$. Thus $n\ne 4$.
**Case 2:** $n\geq 6$.
Let $\tilde{X}={{\rm SL}}(n,q)$, the preimage of $X$ in ${{\rm GL}}(n,q)$, and let $\{e_1,\dots, e_{n/2}, f_1,\dots, f_{n/2}\}$ be a basis for $V$ such that the nondegenerate alternating form preserved by $\tilde{X}_\alpha$ satisfies $$(e_i,e_j)=(f_i,f_j)=0\quad\mbox{and}\quad (e_i,f_j)=\delta_{ij}\quad\mbox{for all $i, j$}.$$ Let ${{\rm SL}}(4,q)$ denote the subgroup of $\tilde{X}$ acting naturally on $U:=\la e_1,e_2, f_1, f_2\ra$ and fixing $W:=\la e_3,\dots, e_{n/2}, f_3,\dots, f_{n/2}\ra$ pointwise, and let ${{\rm Sp}}(4,q)={{\rm SL}}(4,q)\cap \tilde{X}_\alpha$, namely the pointwise stabiliser of $W$ in $\tilde{X}_\alpha$. Let $g\in {{\rm SL}}(4,q)\setminus\mathbf{N}_{{{\rm SL}}(4,q)}({{\rm Sp}}(4,q))$ so $g\not\in\tilde{X}_\alpha$, and let $\beta=\alpha^g\neq\alpha$. Since $g$ fixes $W$ pointwise, it follows that the alternating forms preserved by $\alpha$ and $\beta$ agree on $W$ and hence that $\tilde{X}_{\alpha\beta}=\tilde{X}_{\alpha}\cap (\tilde{X}_{\alpha})^g$ contains the pointwise stabiliser ${{\rm Sp}}(n-4,q)$ of $U$ in $\tilde{X}_\alpha$.
Since this subgroup ${{\rm Sp}}(n-4,q)$ intersects the scalar subgroup trivially, $X_{\alpha\beta}$ contains a subgroup isomorphic to ${{\rm Sp}}(n-4,q)$, and hence so does $G_{\alpha\beta}$. By Lemma \[L:subgroupdiv\], $r$ divides $4df|X_{\alpha}|/|{{\rm Sp}}(n-4,q)|$, that is, $$r\mid 4d'fq^{2n-4}(q^n-1)(q^{n-2}-1).$$ Recall that $r_p\mid 2$. Also, since $n\geq6$, $v$ is even, and hence $4 \nmid r$. Similarly, it follows from $(q-1)\mid v$ that $r_t\mid2$ for each prime divisor $t$ of $q-1$. Therefore, $$r\mid 2f\cdot\frac{q^n-1}{q-1}\cdot\frac{q^{n-2}-1}{q-1}.$$ As $f<q$ and $(q^j-1)/(q-1)<2q^{j-1}$ for all $j$, it follows that $r<8q^{2n-3}$. From $r^2>2v$ we derive that $$\begin{aligned}
64q^{4n-6}&> 2q^{(n^2-2n)/4}(q^{n-1}-1)(q^{n-3}-1)\cdots(q^3-1)/d'\\
&>2q^{(n^2-2n)/4}(q^{n-2}q^{n-4}\cdots q^2)/q =2q^{(n^2/2)-n-1},\end{aligned}$$ and so $32>q^{(n^2/2)-5n+5}\geq 2^{(n^2/2)-5n+5}$, that is, $n^2-10n<0$. This implies that $n\leq 8$.
Suppose that $n=8$. Here $d'=\gcd(4,q-1)$. In this case the index of each of the parabolic subgroups $P_i$, for $1\leq i\leq 4$, is divisible by $q^4+1$, and hence $q^4+1$ divides $r$, which in turn divides $2(v-1)$ by Lemma \[condition 2\]. Then $$q^4+1\mid 2d'(v-1) = 2 q^{12}(q^{7}-1)(q^{5}-1)(q^3-1) - 2d'.$$ Since the remainders on dividing $q^{12}, q^{7}-1, q^{5}-1$ by $q^4+1$ are $-1, -q^3-1$ and $-q-1$, respectively, it follows that $$q^4+1\mid -2 (q^3+1)(q+1)(q^3-1)-2d' = -2 (q^6-1)(q+1)-2d'.$$ The remainder on dividing $q^6-1$ by $q^4+1$ is $-q^2-1$, and hence $$q^4+1 \mid 2 (q^2+1)(q+1)-2d'=2(\frac{q^4-1}{q-1}-d').$$ This implies that $$q^4+1 \mid 2(q^4-1)-2d'(q-1) = 2(q^4+1) -4-2d'(q-1)$$ and hence $q^4+1\leq 2d'(q-1)+4\leq 8q-4$ (since $d'\leq 4$), a contradiction.
Thus $n=6$. Here $d'=\gcd(3,q-1)$. The indices of the parabolic subgroups ${\rm P}_1$, ${\rm P}_2$ and ${\rm P}_3$ in ${\rm PSp}(6,q)$ are $(q^3+1)(q^2+q+1)$, $(q^3+1)(q^2+q+1)(q^2+1)$ and $(q^3+1)(q^2+1)(q+1)$, and since one of these numbers divides $r$, we deduce that $(q^3+1)\mid r$, and so $(q^3+1)$ divides $2d'(v-1)= 2\left(q^{6}(q^{5}-1)(q^3-1) - d'\right)$. Since the remainders on dividing $q^{6}, q^{5}-1, q^{3}-1$ by $q^3+1$ are $1, -q^2-1$ and $-2$, respectively, it follows that $q^3+1$ divides $2 \left( 2(q^2+1)-d'\right)$. Hence $q^3+1\leq 2(2q^2+2-d')\leq 2(2q^2+1)$, which implies that $q\leq 4$. If $q=3$, then $d'=1$ and $q^3+1=28$ does not divide $2(2(q^2+1)-d')=38$. Thus $q$ is even and the divisibility condition implies that $q^3+1$ divides $2(q^2+1)-d'\leq 2q^2+1$, which forces $q=2$ and $d'=1$. Hence $v=2^6\cdot7\cdot31$, and therefore $v-1$ is coprime to $5$ and $7$. However $r$, and hence also $2(v-1)$ is divisible by the index of one of the parabolic subgroups ${\rm P}_1$, ${\rm P}_2$ or ${\rm P}_3$ of ${\rm PSp}(6,2)$, and these are $3^2\cdot7$, $3^2\cdot5\cdot7$, $3^3\cdot5$. This is a contradiction.
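The final divisibility checks for $n=6$, $q=2$ are easily verified by machine. The sketch below is a verification aid only, not part of the proof, with $d'=\gcd(3,q-1)$ and the three parabolic indices of ${\rm PSp}(6,2)$ hard-coded from the text.

```python
# Verification aid (not part of the proof): v = 2^6*7*31, and none of the
# parabolic indices 63, 315, 135 of PSp(6,2) divides 2(v-1).
q = 2
d1 = 1                                    # d' = gcd(3, q-1)
v = q**6 * (q**5 - 1) * (q**3 - 1) // d1
assert v == 2**6 * 7 * 31
for index in (3**2 * 7, 3**2 * 5 * 7, 3**3 * 5):
    assert 2 * (v - 1) % index != 0       # the required divisibility fails
```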
\[c8.2\] Assume Hypothesis \[H\]. If the point-stabilizer $G_\alpha\in\mathcal{C}_{8}$, then $G_\alpha$ cannot be orthogonal.
Suppose for a contradiction that $G_\alpha$ is an orthogonal group in $\mathcal{C}_{8}$. Then by [@PB Proposition 4.8.4], $q$ is odd, $n\geq3$, and $$X_{\alpha}\cong {\rm PSO}^\epsilon(n,q).\gcd(n,2),$$ where $\epsilon\in\{\circ,+,-\}$. Let $\tilde{X}={{\rm SL}}(n,q)$ and let $\tilde{X}_\alpha$ denote the full preimage of $X_\alpha$ in $\tilde{X}$.
Let $\varphi$ be the non-degenerate symmetric bilinear form on $V$ preserved by $\tilde{X}_\alpha$, and let $e_1, f_1\in V$ be a hyperbolic pair, that is $e_1, f_1$ are isotropic vectors and $\varphi(e_1, f_1)=1$. Let $U=\langle e_1,f_1\rangle$, and consider the decomposition $V=U\oplus U^\perp$. Let $g\in \tilde{X}$ fixing $U^\perp$ pointwise and mapping $e_1$ onto itself and $f_1$ onto $e_1+f_1$. Then $g$ maps the isotropic vector $f_1$ onto the non-isotropic vector $e_1+f_1$, and so $g\notin \tilde{X}_{\alpha}$. Let $\beta=\alpha^g$, so that $\tilde{X}_\beta$ leaves invariant the form $\varphi^g$. Then, since $\varphi$ and $\varphi^g$ restrict to the same form on $U^\perp$, we have that $$\left\{
\begin{pmatrix}
I_2 & \\
& B
\end{pmatrix}
\,\middle|\,
B\in {\rm SO}^\epsilon(n-2,q)
\right\}
\leq \tilde{X}_{\alpha}\cap \tilde{X}^g_{\alpha}=\tilde{X}_{\alpha\beta}.$$ Since this group intersects the scalar subgroup trivially, ${X}_{\alpha\beta}$ contains a subgroup isomorphic to ${\rm SO}^\epsilon(n-2,q)$, and hence so does $G_{\alpha\beta}$. By Lemma \[L:subgroupdiv\], $$\begin{aligned}
\label{EqOrth}
r\mid 4df|X_{\alpha}|/| {\rm SO}^\epsilon(n-2,q)|.\end{aligned}$$
We now split into cases where $n$ is odd or even.
**Case 1:** $n=2m+1$ is odd, so $\epsilon=\circ$, which is usually omitted.
In this case, $X_{\alpha}\cong {\rm PSO}(2m+1,q)$. Thus $$|X_\alpha|=q^{m^2}(q^{2m}-1)(q^{2m-2}-1)\cdots(q^2-1),$$ and so $$v=|X|/|X_\alpha| = q^{m^2+m}(q^{2m+1}-1)(q^{2m-1}-1)\cdots(q^{3}-1)/d,$$ where $d=\gcd(2m+1,q-1)$, and this implies that $v$ is even and $p\mid v$. By Lemma \[bound\](iii), $r_p$ divides 2, so $r_p=1$ since $q$ is odd. Moreover, since $r\mid 2(v-1)$, it follows that $4\nmid r$.
**Subcase 1.1:** $m=1$.
Then $$|X_{\alpha}|=q(q^2-1)
\quad\text{and}\quad
v=q^{2}(q^3-1)/d.$$ As $p\mid v$, it follows from Lemma \[bound\](iii) that $r$ divides $2df|X_{\alpha}|_{p'}$ and hence $r$ divides $2df(q^2-1)$. Combining this with $r\mid 2(v-1)$, we deduce that $r$ divides $$\begin{aligned}
2\gcd\left(d(v-1),df(q^2-1)\right)
=& 2\gcd\left(q^{2}(q^3-1)-d,df(q^2-1)\right).\end{aligned}$$ Noting that $\gcd\left(q^{2}(q^3-1)-d,q^2-1\right)$ divides $$q^{2}(q^3-1)-d-(q^2-1)(q^3+q-1)=q-1-d,$$ we conclude that $r$ divides $2df(q-1-d)$. If $d=\gcd(3,q-1)=3$, then $q\geq7$ (since $q$ is odd) and $r\mid 6f(q-4)$. From $r^2>2v=2q^{2}(q^3-1)/3$ we derive that $54f^2(q-4)^2>q^{2}(q^3-1)$, which yields a contradiction. Consequently, $d=1$. Then $r\mid 2f(q-2)$, and from $r^2>2v=2q^{2}(q^3-1)$ we derive that $2f^2(q-2)^2>q^{2}(q^3-1)$, which is not possible.
**Subcase 1.2:** $m\geq 2$. By \eqref{EqOrth}, $r\mid 4df|X_{\alpha}|/|{\rm SO}^\epsilon(n-2,q)|$, that is, $$r\mid 4dfq^{2m-1}(q^{2m}-1).$$ Recall that $r_p=1$ and $4\nmid r$. We conclude that $$r\mid 2df(q^{2m}-1).$$ Therefore, as $r^2>2v$, we have $$4d^2f^2(q^{2m}-1)^2>\frac{2q^{m^2+m}(q^{2m+1}-1)(q^{2m-1}-1)\cdots(q^{3}-1)}{d},$$ and hence $$\begin{aligned}
2q^3\cdot q^2\cdot q^{4m}&>2d^3f^2(q^{2m}-1)^2\\
&>q^{m^2+m}(q^{2m+1}-1)(q^{2m-1}-1)\cdots(q^{3}-1)\\
&>q^{m^2+m}(q^{2m}q^{2m-2}\cdots q^{2})\\
&=q^{2m^2+2m}.\end{aligned}$$ This implies that $q^{2m^2-2m-5}<2$ and so $2m^2-2m-5\leq 0$. Thus $m=2$ and $d\leq 5$. Therefore $q^{6}(q^5-1)(q^{3}-1)<2d^3f^2(q^{4}-1)^2<250f^2(q^{4}-1)^2$, which implies $q=2$, a contradiction.
**Case 2:** $n=2m$ is even, where $m\geq 2$ since $2m=n\geq 3$.
In this case, $X_{\alpha}\cong {\rm PSO}^\epsilon(2m,q)\cdot 2$ with $\epsilon=\pm$ (we identify $\pm$ with $\pm1$ for superscripts). Hence $$|X_\alpha|=q^{m(m-1)}(q^{m}-\epsilon)(q^{2m-2}-1)(q^{2m-4}-1)\cdots(q^2-1),$$ and so $$v=\frac{|X|}{|X_\alpha|}
=\frac{q^{m^2}(q^{m}+\epsilon)(q^{2m-1}-1)(q^{2m-3}-1)\cdots(q^{3}-1)}
{d},$$ where $d=\gcd(2m,q-1)$, and this implies that $v$ is even and $p\mid v$. By Lemma \[bound\](iii), $r_p$ divides 2, so $r_p=1$ since $q$ is odd. Moreover, since $r\mid 2(v-1)$, it follows that $4\nmid r$.
By \eqref{EqOrth}, $r\mid 4df|X_{\alpha}|/|{\rm SO}^\epsilon(n-2,q)|$, that is, $r$ divides $$4dfq^{2m-2}(q^{m}-\epsilon)\frac{q^{2m-2}-1}{q^{m-1}-\epsilon}=
4dfq^{2m-2}(q^{m}-\epsilon)(q^{m-1}+\epsilon).$$ As $r_p=1$ and $4\nmid r$, it follows that $$\begin{aligned}
\label{Eq30a}
r\mid 2df(q^{m}-\epsilon)(q^{m-1}+\epsilon).\end{aligned}$$ Then we deduce from $r^2>2v$ that $$\begin{aligned}
\label{Eq30}
&2d^3f^2(q^{m}-\epsilon)^2(q^{m-1}+\epsilon)^2\nonumber\\
>&q^{m^2}(q^{m}+\epsilon)(q^{2m-1}-1)(q^{2m-3}-1)\cdots(q^{3}-1),\end{aligned}$$ and so $$\begin{aligned}
2q^3\cdot q^2(2q^{2m-1})^2&>2d^3f^2(q^{m}-\epsilon)^2(q^{m-1}+\epsilon)^2\\
&>q^{m^2}(q^{m}+\epsilon)(q^{2m-1}-1)(q^{2m-3}-1)\cdots(q^{3}-1)\\
&>q^{m^2}(2q^{m-1})(q^{2m-2}\cdots q^{2})\\
&=2q^{2m^2-1}.\end{aligned}$$ Hence $q^{2m^2-4m-4}<4$ and so $2m^2-4m-4<2$, which implies $m=2$ and $d\leq 4$. Thus $X_\alpha\cong {\rm PSO}^\epsilon(4,q)\cdot 2$.
Suppose $\epsilon=-$, so that $X_\alpha\cong {\rm PSO}^{-}(4,q)\cdot 2$. Then gives $$2d^3f^2(q^{2}+1)^2(q-1)^2>q^{4}(q^{2}-1)(q^{3}-1),$$ which can be simplified to $$\label{Eq31bis}
2d^3f^2(q^{2}+1)^2>q^{4}(q+1)(q^{2}+q+1).$$ Thus $128f^2(q^{2}+1)^2>q^{4}(q+1)(q^{2}+q+1)$. Since $q$ is odd, this implies that $q=3$ so that $d=2$, but then is not satisfied.
Therefore $\epsilon=+$, so that $X_\alpha\cong {\rm PSO}^{+}(4,q)\cdot 2$. Then gives $$\label{Eq31}
2d^3f^2(q^{2}-1)^2(q+1)^2
>q^4(q^2+1)(q^3-1)$$ and thus $$128f^2(q+1)^2>(q^2+1)(q^3-1).$$ Since $q$ is odd, we conclude that $q=3$ or $5$. However, $q=3$ does not satisfy , thus $q=5$, $f=1$ and $d=4$. Then $v=|X|/|X_\alpha|=503750$. By , $r\mid 2df(q^2-1)(q+1)=2^7\cdot3^2$. This together with $r\mid 2(v-1)$ (Lemma \[condition 1\](i)) leads to $r\mid 2$, contradicting $r^2>2v$.
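The arithmetic in this final step is easy to verify mechanically. The following sanity check (our own illustration, not part of the proof) confirms the claim for $q=5$, $d=4$, $f=1$:

```python
from math import gcd

q, d, f = 5, 4, 1
# v = |X|/|X_alpha| = q^4 (q^2+1)(q^3-1)/d for X = PSL(4,5), X_alpha = PSO+(4,5).2
v = q**4 * (q**2 + 1) * (q**3 - 1) // d
assert v == 503750
# r divides both 2df(q^2-1)(q+1) and 2(v-1), hence also their gcd
divisor = 2 * d * f * (q**2 - 1) * (q + 1)   # = 1152 = 2^7 * 3^2
assert gcd(divisor, 2 * (v - 1)) == 2        # so r | 2, contradicting r^2 > 2v
```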
\[c8.3\] Assume Hypothesis \[H\]. If the point-stabilizer $G_\alpha\in\mathcal{C}_{8}$, then $G_\alpha$ cannot be unitary.
Suppose that $G_\alpha$ is a unitary group in $\mathcal{C}_{8}$. Then by [@PB Proposition 4.8.5], $n\geq 3$, $q=q^2_{0}$, and $$X_{\alpha}\cong {\rm PSU}(n,q_0)\cdot\left[\frac{\gcd(n,q_0+1)c}{d}\right],$$ where $d=\gcd(n,q-1)$ and $c=(q-1)/{{\rm lcm}}(q_0+1,(q-1)/d)$. By Lemma \[gcd\](iii), $c=\gcd(n,q_0-1)$. Hence $$\begin{aligned}
|X_{\alpha}|
&=|{\rm PSU}(n,q_0)|\cdot\frac{\gcd(n,q_0+1)\gcd(n,q_0-1)}{\gcd(n,q^2_0-1)}\\
&=\frac{c}{d}\cdot q^{n(n-1)/2}_0\prod_{i=2}^n(q^i_0-(-1)^i)\end{aligned}$$ and $$v=\frac{|X|}{|X_\alpha|}
=\frac{1}{c}\cdot q^{n(n-1)/2}_0\prod_{i=2}^n(q^i_0+(-1)^i),$$ which implies that $p\mid v$ and $v$ is even. Since $r\mid 2(v-1)$, it follows that $r_p\mid 2$ and $4\nmid r$.
**Case 1:** $n=3$.
In this case, $$|X_{\alpha}|=\frac{cq^3_0(q^3_0+1)(q^2_0-1)}{d}
\quad\text{and}\quad
v=\frac{q^3_0(q^3_0-1)(q^2_0+1)}{c},$$ where $c=\gcd(3, q_0-1)$ and $d=\gcd(3,q^2_0-1)$. Since ${\rm PSU}(n,q_0){\trianglelefteq}X_\alpha$, by Lemma \[parabolic\], $r$ is divisible by the index of a parabolic subgroup of ${\rm PSU}(3,q_0)$, that is, $q^3_0+1$. Hence $(q^3_0+1)\mid r$, which implies that $(q^3_0+1)$ divides $2(v-1)$ and hence also $2c(v-1)=2 q^3_0(q^3_0-1)(q^2_0+1) -2c$. Since the remainders on dividing $q^3_0, q^3_0-1$ by $q^3_0+1$ are $-1, -2$, respectively, it follows that $$q_0^3+1\mid 4 (q^2_0+1) - 2c,$$ which implies that $q_0=2$, $d=3$, $c=1$, and $f=2$. Thus $v=q^3_0(q^3_0-1)(q^2_0+1)=280$ and $|X_\alpha|=q^3_0(q^3_0+1)(q^2_0-1)/3=72$. Since $r\mid 2(v-1)$ and $r\mid 2df|X_\alpha|_{p'}$ by Lemma \[bound\](iii), we conclude that $r$ divides $18$, contradicting the condition $r^2>2v$.
**Case 2:** $n\geq 4$.
Let $\tilde{X}={{\rm SL}}(n,q)$ and let $\tilde{X}_\alpha$ denote the full preimage of $X_\alpha$ in $\tilde{X}$. Let $U=\langle e_1, f_1\rangle$ be a nondegenerate $2$-subspace of $V$ relative to the unitary form $\varphi$ preserved by $\tilde{X}_\alpha$. Let $A\in {{\rm SL}}(U)$ such that $A$ does not preserve modulo scalars the restriction of $\varphi$ to $U$. Then the element $g=\begin{pmatrix}
A & \\
& I
\end{pmatrix}\in \tilde{X}$ but $g$ does not lie in $\tilde{X}_\alpha$. Hence $\beta :=\alpha^g\neq\alpha$. On the other hand $$\left\{
\begin{pmatrix}
I & \\
& B
\end{pmatrix}
\,\middle|\,
B\in {\rm SU}(n-2,q_0)
\right\}\leq \tilde{X}_{\alpha}\cap \tilde{X}^g_{\alpha}=\tilde{X}_{\alpha\beta}.$$ Since this group intersects the scalar subgroup trivially, ${X}_{\alpha\beta}$ contains a subgroup isomorphic to ${\rm SU}(n-2,q_0)$, and hence so does $G_{\alpha\beta}$. By Lemma \[L:subgroupdiv\], $r$ divides $4df|X_{\alpha}|/|{\rm SU}(n-2,q_0)|$, that is, $$r\mid 4cfq^{2n-3}_0(q^{n}_0-(-1)^n)(q^{n-1}_0-(-1)^{n-1}).$$ Since $r_p\mid 2$ and $4\nmid r$, we derive that $$r\mid 2cf(q^{n}_0-(-1)^n)(q^{n-1}_0-(-1)^{n-1}).$$ This together with $r^2>2v$ and $v=|X|/|X_\alpha|$ leads to $r^2|X_\alpha|>2|X|$. By Lemma \[eq2\] we have $$|X|>q^{2n^2-4}_0
\quad\text{and}\quad
|X_\alpha|<\frac{q^{n^2-1}_0c\gcd(n,q_0+1)}{d}.$$ Consequently, noting that $\gcd(n,q_0+1)\leq d=\gcd(n,q^2_0-1)$, $c=\gcd(n,q_0-1)<q_0$, and $f<q=q_0^2$, we get $$\begin{aligned}
2q^{2n^2-4}_0
&<4c^3f^2(q^{n}_0-(-1)^n)^2(q^{n-1}_0-(-1)^{n-1})^2\cdot \frac{q^{n^2-1}_0\gcd(n,q_0+1)}{d}\\
&<4q^7_0(q^{n}_0-(-1)^n)^2(q^{n-1}_0-(-1)^{n-1})^2\cdot q^{n^2-1}_0\\
&<4q^{n^2+6}_0(2q^{n+n-1}_0)^2=16q^{n^2+4n+4}_0\end{aligned}$$ and hence $$q^{n^2-4n-8}_0<8.$$ It follows that $n^2-4n-8<3$, which implies $n=4$ or $5$.
**Subcase 2.1:** $n=4$.
Then $$v=q^6_0(q^4_0+1)(q^3_0-1)(q^2_0+1)/c,$$ where $c=\gcd(4,q_0-1)$. Since $r$ is divisible by the index of a parabolic subgroup of ${\rm PSU}(4,q_0)$, which is either $(q^2_0+1)(q^3_0+1)$ or $(q_0+1)(q^3_0+1)$, we derive that $(q^3_0+1)\mid r$. Hence $(q^3_0+1)$ divides $2(v-1)$, and hence also $2c(v-1)= 2 q^6_0(q^4_0+1)(q^3_0-1)(q^2_0+1) -2c$. Since the remainders on dividing $q^{6}_0, q^{4}_0+1, q^{3}_0-1$ by $q^3_0+1$ are $1, -q_0+1$ and $-2$, respectively, it follows that $q^3_0+1$ divides $2 (-q_0+1)(-2)(q_0^2+1) -2c = 4(q_0-1)(q_0^2+1)-2c$, which equals $4(q_0^3+1) - 4(q_0^2-q_0+2) -2c$. It follows that $q^3_0+1$ divides $4(q^2_0-q_0+2)+2c$, which implies $q_0=2$. Thus $v=2^6\cdot5\cdot7\cdot17$, and the index of a parabolic subgroup of ${\rm PSU}(4,q_0)$ is either $45$ or $27$. However, neither $45$ nor $27$ divides $2(v-1)$, a contradiction.
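The numerical claims for $q_0=2$ can again be checked mechanically (a verification of ours, not part of the proof):

```python
from math import gcd

q0 = 2
c = gcd(4, q0 - 1)  # = 1
v = q0**6 * (q0**4 + 1) * (q0**3 - 1) * (q0**2 + 1) // c
assert v == 2**6 * 5 * 7 * 17            # = 38080
# the two parabolic subgroup indices of PSU(4,2)
assert (q0**2 + 1) * (q0**3 + 1) == 45
assert (q0 + 1) * (q0**3 + 1) == 27
# neither index divides 2(v-1), giving the contradiction
assert 2 * (v - 1) % 45 != 0 and 2 * (v - 1) % 27 != 0
```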
**Subcase 2.2:** $n=5$.
Then $$v=q^{10}_0(q^5_0-1)(q^4_0+1)(q^3_0-1)(q^2_0+1)/c,$$ where $c=\gcd(5,q_0-1)$. Since $r$ is divisible by the index of a parabolic subgroup of ${\rm PSU}(5,q_0)$, which is either $(q^2_0+1)(q^5_0+1)$ or $(q^3_0+1)(q^5_0+1)$, we derive that $(q^5_0+1)\mid r$. Hence $(q^5_0+1)$ divides $2(v-1)$, and hence also $2c(v-1)= 2q^{10}_0(q^5_0-1)(q^4_0+1)(q^3_0-1)(q^2_0+1) -2c$. Since the remainders on dividing $q^{10}_0, q^{5}_0-1, (q^{3}_0-1)(q_0^2+1)$ by $q^5_0+1$ are $1, -2$ and $q_0^3-q_0^2-2$, respectively, it follows that $q^5_0+1$ divides $-4(q_0^4+1)(q_0^3-q_0^2-2)-2c$, which equals $$-4(q_0^5+1)(q_0^2-q_0) -4(-2q_0^4+q_0^3-2q_0^2+q_0-2) -2c.$$ Thus $q^5_0+1$ divides $8q_0^4 -4q_0^3 +8q^2_0 -4q_0+8-2c$. However, there is no prime power $q_0$ satisfying this condition, a contradiction.
$\mathcal{C}_{9}$-subgroups {#sec3.9}
---------------------------
Here $G_\alpha$ is an almost simple group not contained in any of the subgroups in $\mathcal{C}_1$–$\mathcal{C}_8$.
\[c9\] Assume Hypothesis \[H\]. Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{9}$.
By Lemma \[condition 2\](i) and Lemma \[eq2\], we have $|G_\alpha|^3>|G|\geq|X|=|{{\rm PSL}}(n,q)|>q^{n^2-2}$. Moreover, by [@Liemaximal Theorem 4.1], we have that $|G_\alpha|<q^{3n}$. Hence $q^{n^2-2}<|G_\alpha|^3<q^{9n}$, which yields $n^2-2<9n$ and so $3\leq n\leq9$. Further, it follows from [@Liemaximal Corollary 4.3] that either $n=y(y-1)/2$ for some integer $y$ or $|G_\alpha|<q^{2n+4}$. If $n=y(y-1)/2$, then as $3\leq n\leq9$ we have $n=3$ or $6$. If $|G_\alpha|<q^{2n+4}$, then we deduce from $|G_\alpha|^3>q^{n^2-2}$ that $q^{6n+12}>q^{n^2-2}$, which implies $6n+12>n^2-2$ and so $3\leq n\leq7$. Therefore, we always have $3\leq n\leq7$. The possibilities for $X_\alpha$ can be read off from [@Low Tables 8.4, 8.9, 8.19, 8.25, 8.36]. In Table \[tab5\] we list all possibilities, sometimes fusing some cases together. Not all conditions from [@Low] are listed, but we list what is necessary for our proof. Note that in some listed cases $X_\alpha$ is not maximal in $X$ but there is a group $G$ with $X<G\leq {{\rm Aut}}(X)$ such that $G_\alpha$ is maximal in $G$ and $G_\alpha\cap X$ is equal to this non-maximal subgroup $X_\alpha$.
| Case | $X$ | $X_\alpha$ | Conditions on $q$ from [@Low] | Bound |
|:----:|:----|:-----------|:------------------------------|:------|
| 1 | ${{\rm PSL}}(3,q)$ | ${{\rm PSL}}(2,7)$ | $q=p\equiv1,2,4\pmod7$, $q\neq2$ | $q<14$ |
| 2 | | ${\rm A}_6$ | $q=p\equiv1,4\pmod{15}$ | $q<19$ |
| 3 | | ${\rm A}_6$ | $q=p^2$, $p\equiv2,3\pmod5$, $p\neq3$ | $q<23$ |
| 4 | ${{\rm PSL}}(4,q)$ | ${{\rm PSL}}(2,7)$ | $q=p\equiv1,2,4\pmod7$, $q\neq 2$ | $q<4$ |
| 5 | | ${\rm A}_7$ | $q=p\equiv1,2,4\pmod 7$ | $q<7$ |
| 6 | | ${\rm PSU}(4,2)$ | $q=p\equiv1\pmod6$ | $q<12$ |
| 7 | ${{\rm PSL}}(5,q)$ | ${{\rm PSL}}(2,11)$ | $q=p$ odd | $q<3$ |
| 8 | | ${\rm M}_{11}$ | $q=3$ | $q<4$ |
| 9 | | ${\rm PSU}(4,2)$ | $q=p\equiv1\pmod6$ | $q<5$ |
| 10 | ${{\rm PSL}}(6,q)$ | ${\rm A}_6.2_3$ | $q=p$ odd | $q<3$ |
| 11 | | ${\rm A}_6$ | $q=p$ or $p^2$ odd | $q<2$ |
| 12 | | ${{\rm PSL}}(2,11)$ | $q=p$ odd | $q<3$ |
| 13 | | ${\rm A}_7$ | $q=p$ or $p^2$ odd | $q<3$ |
| 14 | | ${{\rm PSL}}(3,4)^{.}2^-_1$ | $q=p$ odd | $q<3$ |
| 15 | | ${{\rm PSL}}(3,4)$ | $q=p$ odd | $q<3$ |
| 16 | | ${\rm M}_{12}$ | $q=3$ | $q<4$ |
| 17 | | ${\rm PSU}(4,3)^{.}2^-_2$ | $q=p\equiv1\pmod{12}$ | $q<5$ |
| 18 | | ${\rm PSU}(4,3)$ | $q=p\equiv7\pmod{12}$ | $q<5$ |
| 19 | | ${{\rm PSL}}(3,q)$ | $q$ odd | |
| 20 | ${{\rm PSL}}(7,q)$ | ${\rm PSU}(3,3)$ | $q=p$ odd | $q<2$ |
By Lemma \[bound\](i) and Lemma \[eq2\], we have $2d^2f^2|X_\alpha|^3>|X|>q^{n^2-2}$. Using the fact that $d=\gcd(n,q-1)\leq n$, it follows that $$\label{lastineq}
q<\left(2n^2f^2|X_\alpha|^3\right)^{1/(n^2-2)}.$$ Note that, except for case (19), we know that $f=1$ or $2$. This inequality gives us, in each case except (19), an upper bound for $q$, which is listed in the last column in Table \[tab5\]. Comparing the last two columns of the table we see the condition and bound are satisfied only in the following cases: (1) for $q=11$, (3) for $q=4$, (5) for $q=2$, (6) for $q=7$, (8) and (16). For case (19), we know that $f<q$ and $|{{\rm PSL}}(3,q)|<q^8$ by Lemma \[eq2\], so $72q^2q^{24}>q^{34}$, that is $q^8<72$, which is not satisfied for any $q$. In case (1) for $q=11$, $d=1$ and the inequality $2d^2f^2|X_\alpha|^3>q^{n^2-2}$ is not satisfied.
For each of the remaining cases, we compute $v$ and $2df|X_\alpha|$. By Lemma \[bound\](ii), $r\mid 2df|X_\alpha|$. On the other hand $r\mid 2(v-1)$, so $r$ divides $R:=\gcd(2(v-1),2df|X_\alpha|)$. Now using $R^2\geq r^2>2v$, this argument rules out cases (3) for $q=4$, (6) for $q=7$, (8) and (16). This leaves the single remaining case (5) with $q=2$. Then this argument yields $r\mid 14$, $v=8$. As $r^2>2v$, $r=7$ or $14$. By Lemma \[condition 1\](i), $r(k-1)=14$, so the condition $k\geq3$ implies that $r=7$ and $k=3$. Now Lemma \[condition 1\](ii) yields a contradiction since $k\nmid vr$. Hence, we rule out case (5) for $q=2$, completing the proof.
[1]{}
S. H. Alavi and T. C. Burness, Large subgroups of simple groups, J. Algebra, 421 (2015), 187–233.
M. Aschbacher, On the maximal subgroups of the finite classical groups, Invent. Math. 76 (1984), 469–514.
W. Bosma, J. Cannon and C. Playoust, The magma algebra system I: The user language, J. Symbolic Comput., 24 (1997), no. 3-4, 235–265.
J. N. Bray, D. F. Holt and C. M. Roney-Dougal, The Maximal Subgroups of the Low-Dimensional Finite Classical Groups, London Math. Soc. Lecture Note Ser., vol. 407, Cambridge University Press, 2013.
F. Buekenhout, A. Delandtsheer, J. Doyen, P. B. Kleidman, M. Liebeck and J. Saxl, Linear spaces with flag-transitive automorphism groups, Geom. Dedicata 36 (1990), 89–94.
C. J. Colbourn and J. H. Dinitz, The CRC Handbook of Combinatorial Designs, CRC Press, Boca Raton, FL (2007).
J. H. Conway, R. T. Curtis, S. P. Norton, R. A. Parker, and R. A. Wilson, [*Atlas of Finite Groups*]{}, Clarendon Press, Oxford, $1985$.
H. Davies, Flag-transitivity and primitivity, Discrete Math. 63 (1987), 91–93.
H. Davies, Automorphisms of Designs, Ph.D. Thesis, University of East Anglia, 1987.
A. Delandtsheer and J. Doyen, Most block-transitive $t$-designs are primitive, Geom. Dedicata 29 (1989), 307–310.
D. G. Higman and J. E. McLaughlin, Geometric ABA-groups, Illinois J. Math. 5 (1961), 382–397.
Q. M. Hussain, On the totality of the solutions for the symmetrical incomplete block designs $\lambda= 2$, $k= 5$ or $6$, Sankhya 7 (1945), 204–208.
W. M. Kantor, Homogeneous designs and geometric lattices, J. Combin. Theory Ser. A 38 (1985), 66–74.
P. B. Kleidman and M. W. Liebeck, The Subgroup Structure of the Finite Classical Groups, London Math. Soc. Lecture Note Ser., vol. 129, Cambridge University Press, 1990.
H. Liang and S. Zhou, Flag-transitive point-primitive automorphism groups of non-symmetric $2$-$(v,k,2)$ designs, J. Combin. Designs 24 (2016), 421–435.
H. Liang and S. Zhou, Flag-transitive point-primitive non-symmetric $2$-$(v,k,2)$ designs with alternating socle, Bull. Belg. Math. Soc. Simon Stevin 23 (2016), 559–571.
M. W. Liebeck, On the orders of maximal subgroups of the finite classical groups, Proc. London Math. Soc. 50 (1985), 426–446.
M. W. Liebeck, C. E. Praeger and J. Saxl, On the O’Nan-Scott theorem for finite primitive permutation groups, J. Aust. Math. Soc. Ser. A 44 (1988), 389–396.
H. K. Nandi, Enumeration of nonisomorphic solutions of balanced incomplete block designs, Sankhya 7 (1946), 305–312.
P. M. Neumann and C. E. Praeger, Cyclic matrices over finite fields, J. Lond. Math. Soc. 52 (1995), 263–284.
A. C. Niemeyer and C. E. Praeger, A recognition algorithm for classical groups over finite fields, Proc. London Math. Soc. (3) 77 (1998), 117–169.
C. E. Praeger and C. Schneider, *Permutation Groups and Cartesian Decompositions*, London Math. Soc. Lecture Note Ser., vol. 449, Cambridge University Press, 2018.
C. E. Praeger and Shenglin Zhou, Imprimitive flag-transitive symmetric designs, J. Combin. Theory Ser. A 113 (2006), 1381–1395.
E. O’Reilly Regueiro, On primitivity and reduction for flag-transitive symmetric designs, J. Combin. Theory Ser. A 109 (2005), 135–148.
E. O’Reilly Regueiro, Biplanes with flag-transitive automorphism groups of almost simple type, with alternating or sporadic socle, European J. Combin. 26 (2005), 577–584.
E. O’Reilly Regueiro, Biplanes with flag-transitive automorphism groups of almost simple type, with classical socle, J. Algebr. Comb. 26 (2007), 529–552.
E. O’Reilly Regueiro, Biplanes with flag-transitive automorphism groups of almost simple type, with exceptional socle, J. Algebr. Comb. 27 (2008), 479–491.
H. J. Ryser, Combinatorial mathematics, Wiley, New York (1964).
J. Saxl, On finite linear spaces with almost simple flag-transitive automorphism groups, J. Combin. Theory Ser. A 100 (2002), 322–348.
H. Wielandt, Finite Permutation Groups, Academic Press, New York (1964).
Devillers, Praeger: <span style="font-variant:small-caps;">Centre for Mathematics of Symmetry and Computation, University of Western Australia, 35 Stirling Highway, Perth 6009, Australia.</span>
Email: [{Alice.Devillers,Cheryl.Praeger}@uwa.edu.au]{}
Liang (Corresponding author): <span style="font-variant:small-caps;">School of Mathematics and Big Data, Foshan University, Foshan 528000, P. R. China</span>
Email: [[email protected]]{}
Xia: <span style="font-variant:small-caps;">School of Mathematics and Statistics, The University of Melbourne, Parkville, VIC 3010, Australia</span>
Email: [[email protected]]{}
[^1]: The first and third author were supported by an ARC Discovery Grant Project DP200100080.
[^2]: The second author was supported by the NSFC (Grant No.11871150).
---
abstract: 'The distributed representation of correlated multi-view images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem where the distributively compressed correlated images are jointly decoded in order to improve the reconstruction quality of all the compressed images. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG, H.264 intra) with a balanced rate distribution among different cameras. A central decoder first estimates the underlying correlation model from the independently compressed images, which will be used for the joint signal recovery. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images that comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be consistent with their compressed versions. We show by experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our proposed algorithm compares advantageously to state-of-the-art distributed coding schemes based on disparity learning and on the DISCOVER codec.'
author:
- 'Vijayaraghavan Thirumalai, and Pascal Frossard, [^1]'
bibliography:
- 'thesis\_bibN.bib'
title: 'Joint Reconstruction of Multi-view Compressed Images [^2]'
---
Distributed compression, Joint reconstruction, Optimization, Multi-view images, Depth estimation.
Introduction
============
In recent years, vision sensor networks have gained ever-increasing popularity, driven by the availability of cheap semiconductor components. These systems usually acquire multiple correlated images of the same 3D scene from different viewpoints. Compression techniques should exploit this correlation in order to efficiently represent the 3D scene information. The distributed coding paradigm becomes particularly attractive in such settings; it permits to efficiently exploit the correlation between images with low encoding complexity and minimal inter-sensor communication, which directly translates into power savings in sensor networks. In the distributed compression framework, a central decoder jointly reconstructs the visual information from the compressed images by exploiting the correlation between the samples. This permits to achieve a good rate-distortion tradeoff in the representation of correlated multi-view images, even if the encoding is performed independently.
The first information-theoretic results on distributed source coding appeared in the late seventies for the noiseless [@Slepian] and noisy cases [@WynerZiv]. However, most results in distributed coding have remained non-constructive for about three decades. Practical DSC schemes have then been designed by establishing a relation between the Slepian-Wolf theorem and channel coding [@Pradhan]. Based on the results in [@Pradhan], several distributed coding schemes for video and multi-view images have been proposed in the literature [@DVC_overview; @DVC_overview_sp]. In such schemes, a feedback channel is generally used for accurately controlling the Slepian-Wolf coding rate. Unfortunately, this results in increased latency and bandwidth usage due to the multiple requests from the decoder. These schemes can thus hardly be used in real-time applications. One solution to avoid the feedback channel is to use a separate encoding rate control module to precisely control the Slepian-Wolf coding rate [@prism]. The overall computational complexity at the encoder becomes non-negligible due to this rate control module. In this paper, we build a distributed coding scheme, where the correlated compressed images are directly transmitted to the joint decoder without implementing any Slepian-Wolf coding; this avoids the necessity for complex estimation of the statistical correlation and of the coding rate at the encoder.
We consider a scenario where a set of cameras are distributed in a 3D scene. In most practical deployments of such systems, the images captured by the different cameras are likely to be correlated. The captured images are encoded independently using standard encoding solutions and are transmitted to the central decoder. Here, we assume that the images are compressed using balanced rate allocation, which permits to share the transmission and computational costs equally among the sensors. It thus avoids the need for a hierarchical relationship among the sensors. The central decoder builds a correlation model from the compressed images, which is used to jointly decode the multi-view images. The joint reconstruction is formulated as a convex optimization problem. It reconstructs the multi-view images that are consistent with the underlying correlation information and with the compressed image information. While reconstructing the images, we also effectively handle the occlusions that commonly arise in multi-view imaging. We solve the joint reconstruction problem using effective parallel proximal algorithms [@Combettes_prox].
We evaluate the performance of our novel joint decoding scheme in several multi-view datasets. Experimental results demonstrate that the proposed distributed coding solution improves the rate-distortion performance of the separate coding results by taking advantage of the inter-view correlation. We show that the quality of the decoded images is quite balanced for a given bit rate, as expected from a symmetric coding solution. We observe that our scheme, at low bit rate, performs close to the joint encoding solutions based on H.264, when the block size used for motion compensation is set to $4\times4$. Finally, we show that our framework outperforms state-of-the-art distributed coding solutions based on disparity learning [@David] and on the DISCOVER codec [@discover], in terms of rate-distortion performance. It certainly provides an interesting alternative to most classical DSC solutions [@DVC_overview; @DVC_overview_sp; @prism], since it does not require any statistical correlation information at the encoder.
Only a few works in the literature address the distributed compression problem without using a channel encoder or a feedback channel. In [@Wag], a distributed coding technique for compressing the multi-view images has been proposed, where a joint decoder reconstructs the views from low resolution images using super-resolution techniques. In more detail, each sensor transmits a low resolution compressed version of the original image to the decoder. At the decoder, these low resolution images are registered with respect to a reference image, where the image registration is performed by shape analysis and image warping. The registered low resolution images are then jointly processed to decode a high resolution image using image super-resolution techniques. However, this framework requires communication between the encoders in order to facilitate the registration, e.g., the transmission of feature points. Other works in super-resolution use multiple compressed images that are fused for improved resolution [@sr_overview]. Such techniques usually target reconstruction of a *single* high resolution image from multiple compressed images. Alternatively, techniques have been developed in [@mdc; @mdc1] to decode a *single* high quality image from several encoded versions of the same source image or videos. This is achieved by solving an optimization problem that enforces the final reconstructed image to be consistent with all the compressed copies. Our main target in this paper is to jointly improve the quality of *multiple* compressed correlated (multi-view) images and not to increase the spatial resolution of the compressed images or to extract a single high quality image. More recently, Schenkel [*et al.*]{} [@Schenkel] have considered a distributed representation of image pairs. In particular, they have proposed an optimization framework to enhance the quality of the JPEG-compressed images.
This work, however, considered an asymmetric scenario that requires a reference image for joint decoding.
The rest of the paper is organized as follows. The joint decoding algorithm along with the optimization framework for joint reconstruction is described in Section \[sec:joint\_decoder\]. In Section \[Sec:optmeth\], we present the optimization algorithm based on proximal splitting methods. In Section \[sec:results\], we present the experimental results for the joint reconstruction of pairs of images. Section \[sec:multiview\] describes the extension of our proposed framework to decode multiple images along with the simulation results. Finally, in Section \[sec:conc\] we draw some concluding remarks.
Joint Decoding of Image Pairs {#sec:joint_decoder}
=============================
We consider the scenario illustrated in Fig. \[Fig:system\], where a pair of cameras $C_1$ and $C_2$ project the 3D visual information on the 2D planes $\mathcal{I}_1$ and $\mathcal{I}_2$ (with resolution $N = N_1\times N_2$), respectively. The images $\mathcal{I}_1$ and $\mathcal{I}_2$ are compressed independently using standard encoding solutions (e.g., JPEG, H.264 intra) and are transmitted to a central decoder. The joint decoder has access to the compressed versions of the correlated images, and its main objective is to improve the quality of all the compressed views by exploiting the underlying inter-view correlation. We first propose to estimate the correlation between images from the decoded images $\tilde{I}_1$ and $\tilde{I}_2$; this correlation is effectively modeled by a dense depth image $D$. The joint reconstruction stage then uses the depth information $D$ and enhances the quality of the decoded images $\tilde{I}_1$ and $\tilde{I}_2$. Note that one could solve a joint problem to estimate simultaneously the correlation information $D$ and the improved images. However, such a joint optimization problem would have a complex objective function and be hard to solve. Therefore, we propose to split the problem in two steps: (i) we estimate the correlation information from the decoded images; and (ii) we carry out joint reconstruction using the estimated correlation information. These two steps are detailed in the rest of this section.
Depth Estimation {#sec:depth_est}
----------------
The first task is to estimate the correlation between images, which typically consists in a depth image. In general, the dense depth information is estimated by matching the corresponding pixels between images. Several algorithms have been proposed in the literature to compute dense depth images. For more details, we refer the reader to [@Scharstein]. In this work, we estimate a dense depth image from the compressed images in a regularized energy minimization framework, where the energy $E$ is composed of a data term $E_d$ and a smoothness term $E_s$. A dense depth image $D$ is obtained by minimizing the energy function $E$ as $$\label{eqn:energy_chap4}
D= \underset{D_c}{\operatorname{argmin}} \; E(D_c) = \underset{D_c}{\operatorname{argmin}} \; \{E_d(D_c) + \lambda \; E_s(D_c)\},$$ where $\lambda$ balances the importance of the data and smoothness terms, and $D_c$ represents the candidate depth images. The candidate depth values $D_c(m,n)$ for every pixel position $(m,n)$ are discrete; this is constructed by uniformly sampling the inverse depth in the range $[1/D_{max}, 1/D_{min}]$, where $D_{min}$ and $D_{max}$ are the minimal and maximal depth values in the scene, respectively [@multiview_gc].
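As an illustration, the candidate depth levels can be generated by uniformly sampling the inverse depth range; the sketch below assumes the scene depth range and the number of levels are given (function and parameter names are ours, not from the paper):

```python
import numpy as np

def candidate_depths(d_min, d_max, n_levels):
    """Uniformly sample the inverse depth in [1/d_max, 1/d_min] and
    return the corresponding candidate depth values, from far to near."""
    inv_depth = np.linspace(1.0 / d_max, 1.0 / d_min, n_levels)
    return 1.0 / inv_depth
```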
We now discuss in more details the components of the energy function of Eq. (\[eqn:energy\_chap4\]). The data term, $E_d$ is used to match the pixels across views by assuming that the 3D scene surfaces are Lambertian, i.e., the intensity is consistent irrespective of the viewpoints. It is computed as $$\label{eqn:datacost}
E_d(D_c) = \sum_{m=1}^{N_1} \sum_{n=1}^{N_2} \mathcal{C}((m,n),D_c(m,n)),$$ where $N_1$ and $N_2$ represent the image dimensions and $(m,n)$ represent a pixel position. The most commonly used pixel-based cost function $\mathcal{C}$ includes squared intensity differences and absolute intensity differences. In this work, we use square intensity difference to measure the disagreement of assigning a depth value $D_c(m,n)$ to the pixel location $(m,n)$. Mathematically, it is computed as $$\label{eqn:pixelcost}
\small
\mathcal{C}((m,n),D_c(m,n)) = {{\lVert\tilde{I}_2(m,n)- \mathcal{W}(\tilde{I}_1(m,n),D_c(m,n))\rVert^{2}_{2}}},$$ where $\mathcal{W}$ is a warping function that warps the image $\tilde{I}_1$ using the depth value $D_c(m,n)$. This warping, in general, is a two step process [@mv_geo]. First the pixel position $(m,n)$ in the image $\tilde{I}_1$ is projected to the world coordinate system. This projection step is represented as $$\label{eqn:proj_step1}
[u, v, w]^T = R_1 P_1^{-1} [m,n,1]^T D_c(m,n) + T_1,$$ where $P_1$ is the intrinsic camera matrix of the camera $C_1$ and $(R_1, T_1)$ represent the extrinsic camera parameters with respect to the global coordinate system. Then, the 3D point $[u, v, w]^T$ is projected on the coordinates of the camera $C_2$ with the internal and external camera parameters, respectively as $P_2$ and $(R_2,T_2)$. This projection step can be described as $$\label{eqn:proj_step2}
[x^{\prime}, y^{\prime}, z^{\prime}]^T = P_2 R_2^{-1} \{[u,v,w]^T - T_2\}.$$ Finally, the pixel location of the warped image is taken as $(m^\prime, n^\prime)=(round(x^{\prime}/z^{\prime}), round(y^{\prime}/z^{\prime}))$, where $round(x)$ rounds $x$ to the nearest integer.
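The two projection steps of Eqs. (\[eqn:proj\_step1\]) and (\[eqn:proj\_step2\]) can be sketched as follows for a single pixel; the camera matrices and translation vectors are assumed given as NumPy arrays (an illustration of ours, not the authors' implementation):

```python
import numpy as np

def warp_pixel(m, n, depth, P1, R1, T1, P2, R2, T2):
    """Forward-warp pixel (m, n) from camera C1 to camera C2 for a given
    depth value: back-project to world coordinates, then re-project."""
    # Step 1: back-project (m, n) to the world coordinate system.
    world = R1 @ np.linalg.inv(P1) @ np.array([m, n, 1.0]) * depth + T1
    # Step 2: project the 3D point onto camera C2.
    x, y, z = P2 @ np.linalg.inv(R2) @ (world - T2)
    # Round to the nearest integer pixel position.
    return int(round(x / z)), int(round(y / z))
```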
The smoothness term, $E_s$ is used to enforce consistent depth values at neighboring pixel locations $(m,n)$ and $(\tilde{m}, \tilde{n})$. It is measured as $$\label{eqn:chap4_energy}
E_s(D_c) = \sum_{(m,n),(\tilde{m},\tilde{n})\in \mathcal{N}} min( |D_c(m,n)-D_c(\tilde{m}, \tilde{n})|, \tau),$$ where $\mathcal{N}$ represents the usual four-pixel neighborhood and $\tau$ sets an upper level on the smoothness penalty such that discontinuities can be preserved [@Veksler].
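For illustration, the truncated-linear smoothness term of Eq. (\[eqn:chap4\_energy\]) over the four-pixel neighborhood can be computed with two difference operations (a sketch of ours):

```python
import numpy as np

def smoothness_energy(D, tau):
    """Truncated smoothness term: sum of min(|D(p) - D(q)|, tau)
    over all horizontal and vertical neighbour pairs (p, q)."""
    dv = np.abs(np.diff(D, axis=0))  # vertical neighbour differences
    dh = np.abs(np.diff(D, axis=1))  # horizontal neighbour differences
    return np.minimum(dv, tau).sum() + np.minimum(dh, tau).sum()
```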
We can finally rewrite the regularized energy objective function for the depth estimation problem as $$\label{eqn:depth_est}
\begin{aligned}
E(D_c) = & \sum_{m=1}^{N_1} \sum_{n=1}^{N_2} \mathcal{C}((m,n),D_c(m,n)) + \\
& \lambda \sum_{(m,n),(\tilde{m}, \tilde{n}) \in \mathcal{N}} min( |D_c(m,n)-D_c(\tilde{m}, \tilde{n})|, \tau).
\end{aligned}$$ This cost function is used in the optimization problem of Eq. (\[eqn:energy\_chap4\]), which is usually a non-convex problem. Several minimization algorithms exist in the literature to solve Eq. (\[eqn:energy\_chap4\]), e.g., Simulated annealing [@Barnard], Belief Propagation [@Belief_prop], Graph Cuts [@Graph_cuts; @Kolomogorov]. Among these solutions, the optimization techniques based on Graph Cuts compute the minimum energy in polynomial time and they generally give better results than the other techniques [@Scharstein]. Motivated by this, in our work, we solve the minimization problem of Eq. (\[eqn:energy\_chap4\]) using Graph Cut techniques.
Image Warping as Linear Transformation {#sec:warp_as_linear}
---------------------------------------
Before describing our joint reconstruction problem, we show how the image warping operation ${\mathcal{W}(\tilde{I}_1,D)}$ in Eq. (\[eqn:pixelcost\]) can be written as matrix multiplication of the form $A\cdot \mathcal{R}(\tilde{I}_1)$[^3]; this linear representation offers a more flexible formulation of our joint reconstruction problem. The reshaping operator ${\mathcal{R}}: I_{N_1\times N_2}\rightarrow X_{N_1N_2 \times 1}$ produces a vector $X = \mathcal{R}(I) = [ I_{.,1} ^T \; I_{.,2} ^T \ldots I_{.,{N_1}}^T] ^T$ from the matrix $I$, where $I_{.,m}$ represents the $m^{th}$ row of the matrix $I$ and $(.)^T$ denotes the usual transpose operator. For our convenience, we also define another operator $\mathcal{R}^{-1}_{N_1\times N_2}: X_{N_1N_2 \times 1} \rightarrow I_{N_1\times N_2}$ that takes the vector $X = [\mathcal{R}(I)]_{N_1N_2 \times 1}$ and gives back the matrix $I_{N_1\times N_2}$, i.e., this operator $\mathcal{R}^{-1}$ performs the inverse operations corresponding to $\mathcal{R}$. The matrix $A$ describes the warping by re-arranging the elements of $\mathcal{R}(\tilde{I}_1)$. Its construction is described in this section.
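In NumPy, with images stored in row-major order, the operators $\mathcal{R}$ and $\mathcal{R}^{-1}$ reduce to reshape operations (function names are ours):

```python
import numpy as np

def R(I):
    """Row-stacking operator: reshape an N1 x N2 image into an
    N1*N2 x 1 column vector X = [I_{1,.}  I_{2,.}  ...]^T."""
    return I.reshape(-1, 1)

def R_inv(X, n1, n2):
    """Inverse operator: fold the column vector back into an N1 x N2 image."""
    return X.reshape(n1, n2)
```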
We have shown earlier that the warping function $\mathcal{W}$ shifts the pixel position $(m,n)$ in the reference image to the position $(m^\prime, n^\prime)$ in the target image. Alternatively, this pixel shift between images can be represented using a horizontal component $\bold{m}^h$ and a vertical component $\bold{m}^v$ of the motion field as $(m^\prime, n^\prime) = (m+ \bold{m}^h(m,n), n+\bold{m}^v(m,n))$. Note that this motion field $(\bold{m}^h,\bold{m}^v)$ can be easily computed from Eqs. (\[eqn:proj\_step1\]) and (\[eqn:proj\_step2\]), once the depth information $D$ and the camera parameters are known. Now, our goal is to represent the motion compensation operation ${\tilde{I}_1(m+\bold{m}^h(m,n),n+\bold{m}^v(m,n))}$ as a linear transformation $A\cdot \mathcal{R}(\tilde{I}_1)$ given as $$\begin{aligned}
\label{eqn:I2AI1}
\underbrace{\left[ \begin{array}{c}
\bar{I}_{2,1}^T \\
\bar{I}_{2,2}^T \\
\vdots \\
\bar{I}_{2,{N_1}}^T \end{array} \right] }_{\mathcal{R}(\bar{I}_2)}
= \underbrace{\left[ \begin{array}{c}
A^1 \\
A^2 \\
\vdots \\
A^{N_1} \end{array} \right] }_{A}
\underbrace{ \left[ \begin{array}{c}
\tilde{I}_{1,1} ^T \\
\tilde{I}_{1,2} ^T \\
\vdots \\
\tilde{I}_{1,{N_1}}^T \end{array} \right]}_{\mathcal{R}(\tilde{I}_1)}.
\end{aligned}$$ Here, ${\bar{I}_2 = \mathcal{W}(\tilde{I}_1(m,n),D)}$ represents the warped image and $A^m$ is a matrix of dimensions $N_2 \times N_1N_2$ whose entries are determined by the horizontal and vertical components of the motion field in the $m^{th}$ row, i.e., $\bold{m}^h(m,.)$ and $\bold{m}^v(m,.)$.
In general, the elements of the matrix $A^m$ can be found in two ways: (i) forward warping; and (ii) inverse warping. In this work, we propose to construct the matrix $A^m$ based on forward warping; this permits easier handling of the occluded pixels as shown later. Given a motion vector, the elements of the matrix $A^m$ are given by $$\label{eqn:matA}
A^m(n-\beta_1-\beta_2N_2, n) = \left\{
\begin{array}{ll}
1 & \mbox{if \ } \bold{m}^h(m,n) = \beta_1,\\
& \mbox{and } \bold{m}^v(m,n) = \beta_2, \\
0 & \mbox{otherwise.} \end{array} \right.$$ If $n-\beta_1-\beta_2 N_2 < 0$ (e.g., at image boundaries), we set ${n-\beta_1-\beta_2 N_2 = 1}$ so that the dimensions of the matrix $A^m$ stay $N_2 \times N_1N_2$. It should be noted that the matrix $A^m$ formed using Eq. (\[eqn:matA\]) may contain multiple entries with value ‘$1$’ in a row, because several pixels in the source image can be mapped to the same location in the destination image during forward warping. In such cases, we keep only the last ‘$1$’ entry in each row of the matrix $A^m$ and set the remaining ones to zero. This is motivated by the fact that, during forward warping, when multiple source pixels are mapped to the same destination point $(m^\prime, n^\prime)$, the intensity value of the last source pixel is assigned to the destination pixel $(m^\prime, n^\prime)$ [^4]. Furthermore, it is interesting to note that some of the rows in the matrix $A^m$ do not contain any entry with value ‘$1$’, i.e., all entries in these rows are zero. Denoting by $\mathcal{J}^m$ the set of such row indexes, the pixel locations $\{j: j \in \mathcal{J}^m \}$ in the warped image $\bar{I}_{2,m}(j)$ have zero value. These pixel positions represent holes in the warped image, which define the occluded regions. Finally, the $m^{th}$ row in the warped image is represented as $$\label{}
\bar{I}_{2,m}(j) = \left\{
\begin{array}{ll}
0 & \mbox{if \ } j \in \mathcal{J}^m \\
\tilde{I}_{1}(k, n) & \mbox{if \ } A^m(j,(k-m)N_2+n) =1. \end{array} \right.$$ Thus, it is clear that the matrix $A^m$ shifts the pixels in $\tilde{I}_{1}$ by the corresponding motion vector $(\bold{m}^h(m,.), \bold{m}^v(m,.))$ in order to form $\bar{I}_{2,m}$. In a similar way, we can construct the matrix $A^m, \forall m \in \{1,2,\ldots,N_1 \}$, and thus we can represent the image warping ${\mathcal{W}(\tilde{I}_1(m,n),D)}$ as $A\cdot \mathcal{R}(\tilde{I}_1)$. Finally, note that similar operations can also be performed with an inverse mapping. For details related to the construction of the matrix $A^m$ based on inverse warping, we refer the reader to [@Vijay_thesis Ch. 6, p. 95].
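The forward-warping construction above can be sketched as follows. This is an illustrative dense implementation of our own: for clarity it uses the straightforward linear destination index $m^\prime N_2 + n^\prime$ and generic row/column shift fields rather than the exact index convention of Eq. (\[eqn:matA\]), and it implements the "keep only the last ‘$1$’" rule and the detection of hole rows:

```python
import numpy as np

def build_A(dr, dc, N1, N2):
    """Illustrative dense forward-warping matrix: pixel (m, n) of the source
    moves to (m + dr[m, n], n + dc[m, n]) in the destination. Later source
    pixels overwrite earlier ones landing on the same destination (the
    'keep only the last 1' rule); all-zero rows are holes (occlusions)."""
    A = np.zeros((N1 * N2, N1 * N2))
    for m in range(N1):
        for n in range(N2):
            mp, npr = m + dr[m, n], n + dc[m, n]   # destination pixel
            if 0 <= mp < N1 and 0 <= npr < N2:
                A[mp * N2 + npr, :] = 0            # drop any earlier '1' in this row
                A[mp * N2 + npr, m * N2 + n] = 1
    return A

# toy example: shift a 3x4 image by one pixel to the right
N1, N2 = 3, 4
dc = np.ones((N1, N2), dtype=int)        # column (horizontal) shift
dr = np.zeros((N1, N2), dtype=int)       # row (vertical) shift
A = build_A(dr, dc, N1, N2)
I1 = np.arange(N1 * N2, dtype=float).reshape(N1, N2)
warped = (A @ I1.reshape(-1)).reshape(N1, N2)
holes = ~(A != 0).any(axis=1)            # all-zero rows = occluded pixels
```

In this toy case the first column of the warped image is never written, so its pixels are holes, as discussed above.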
Joint reconstruction {#sec:jr}
--------------------
We now discuss our novel joint reconstruction algorithm that takes advantage of the estimated correlation information given by the matrix $A$ (or $D$) in order to reconstruct the images. We propose to reconstruct the image pair $(\hat{I}_1, \hat{I}_2)$ as the solution to the following optimization problem: $$\begin{aligned}
\label{eqn:jr}
(\hat{I}_1, \hat{I}_2) = \; \underset{I_1, I_2 \in \mathbb{R}^{N_1\times N_2} }
{\operatorname{argmin}} \;
& ({{\lVert I_1\rVert}}_{TV} + {{\lVert I_2\rVert}}_{TV}) \\ \nonumber
\mbox{s.t.} \;
& {{\lVert\mathcal{R}(I_1) - \mathcal{R}(\tilde{I}_1)\rVert}}_2 \leq \epsilon_1,\\ \nonumber
& {{\lVert\mathcal{R}(I_2) - \mathcal{R}(\tilde{I}_2)\rVert}}_2 \leq \epsilon_1, \\ \nonumber
& {{\lVert\mathcal{R}(I_2) - A\cdot \mathcal{R}(I_1)\rVert}}_2^2 \leq \epsilon_2.\end{aligned}$$ Here, $\tilde{I}_1$ and $\tilde{I}_2$ represent the decoded views (see Fig. \[Fig:system\]) and ${{\lVert.\rVert}}_{TV}$ represents the total-variation (TV) norm. The first two constraints of Eq. (\[eqn:jr\]) force the reconstructed images $\hat{I}_1$ and $\hat{I}_2$ to be close to the respective decoded images $\tilde{I}_1$ and $\tilde{I}_2$. The last constraint encourages the reconstructed images to be consistent with the correlation information represented by $A$, i.e., the warped image $A\cdot \mathcal{R}(I_1)$ should be consistent with the image $\mathcal{R}(I_2)$. Finally, the TV prior term ensures that the reconstructed images $\hat{I}_1$ and $\hat{I}_2$ are smooth. In general, including such prior knowledge effectively reduces the search space, which leads to efficient optimization. The optimization problem of Eq. (\[eqn:jr\]) therefore reconstructs a pair of TV-smooth images that is consistent with both the compressed images and the correlation information. In our framework, we use the TV prior on the reconstructed images; however, one could also use a sparsity prior that minimizes the $l_1$ norm of the coefficients in a sparse representation of the images [@Donoho; @Candes].
In the above formulation, the correlation consistency is measured over all the pixels of the image $\mathcal{R}(I_2)$ and the warped image $A\cdot\mathcal{R}(I_1)$. However, this is not appropriate in multi-view imaging scenarios, where occlusions often occur. We should therefore consider only the pixels that appear in both views and ignore the holes in the warped image $A \cdot \mathcal{R}(I_1)$ while enforcing consistency between $\mathcal{R}(I_2)$ and $A\cdot \mathcal{R}(I_1)$. The positions of the holes in the warped image $A\cdot\mathcal{R}(I_1)$ correspond to the row indexes in the matrix $A$ that do not contain any entry with value ‘$1$’, i.e., rows whose entries are all zero. Once these rows are identified, we simply ignore their contribution when measuring the correlation consistency between the images $\mathcal{R}(I_2)$ and $A\cdot \mathcal{R}(I_1)$. More formally, let $\mathcal{J} = \bigcup_{m=1}^{N_1} \mathcal{J}^m$ be the set of indexes of these rows, and let $M$ be the diagonal matrix formed as $$\label{eqn:matrix_M}
M(j,j) = \left\{
\begin{array}{ll}
0 & \mbox{if \ } j \in \mathcal{J} \\
1 & \mbox{otherwise,} \end{array} \right.$$ where $j = \{1,2,\ldots,N_1N_2\}$. For effective occlusion handling, the joint reconstruction problem of Eq. (\[eqn:jr\]) can be modified as $$\begin{aligned}
\tag{OPT-1}\label{eqn:jr_f}
(\hat{I}_1, \hat{I}_2) = \;
\underset{I_1, I_2}{\operatorname{argmin}} \; &
({{\lVert I_1\rVert}}_{TV} + {{\lVert I_2\rVert}}_{TV}) \\ \nonumber
\mbox{s.t.} \;
& {{\lVert\mathcal{R}(I_1) - \mathcal{R}(\tilde{I}_1)\rVert}}_2 \leq \epsilon_1, \\ \nonumber
& {{\lVert\mathcal{R}(I_2) - \mathcal{R}(\tilde{I}_2)\rVert}}_2 \leq \epsilon_1, \\ \nonumber
& {{\lVert M(\mathcal{R}(I_2) - A\cdot\mathcal{R}(I_1))\rVert}}_2^2 \leq \epsilon_2.\end{aligned}$$ Note that, by setting $M = \mathbbm{1}$, we recover the optimization problem of Eq. (\[eqn:jr\]), which considers the consistency of all the pixels in $\mathcal{R}(I_2)$ and $A\cdot \mathcal{R}(I_1)$. We show later that the quality of the reconstructed images is improved when our joint decoding problem OPT-1 is solved with the matrix $M$ constructed using Eq. (\[eqn:matrix\_M\]). Finally, the depth estimation and the joint reconstruction steps could be iterated several times. In our experiments, however, we have not observed any significant improvement in the quality of the reconstructed images from repeating these two steps.
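For illustration, the mask $M$ of Eq. (\[eqn:matrix\_M\]) can be obtained directly from the all-zero rows of $A$; a small numpy sketch of our own with a toy $3\times 3$ warping matrix:

```python
import numpy as np

def build_M(A):
    """Diagonal occlusion mask: 0 on the all-zero (hole) rows of A, 1 elsewhere."""
    has_one = (A != 0).any(axis=1)
    return np.diag(has_one.astype(float))

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],   # an all-zero row: a hole in the warped image
              [1.0, 0.0, 0.0]])
M = build_M(A)
x1 = np.array([1.0, 2.0, 3.0])
x2_obs = A @ x1 + np.array([0.0, 5.0, 0.0])  # garbage lands in the hole
residual = M @ (x2_obs - A @ x1)             # masked consistency residual
assert np.allclose(residual, 0.0)            # the hole does not contribute
```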
Optimization methodology {#Sec:optmeth}
========================
We now propose a solution to the joint reconstruction problem \[eqn:jr\_f\]. We first show that the optimization problem is convex and then describe an effective solution based on proximal methods.
\[prop:jr\_cvx\] The OPT-1 optimization problem is convex.
Our objective is to show that all the functions in problem \[eqn:jr\_f\] are convex. It is easy to check that the functions ${{\lVert I_j\rVert}}_{TV}$ and ${{\lVert\mathcal{R}(I_j) - \mathcal{R}(\tilde{I}_j)\rVert}}_2$, $\forall j \in \{1,2\}$, are convex [@Boyd_CVX]. It thus remains to show that the function ${{\lVert M(\mathcal{R}(I_2) - A\cdot\mathcal{R}(I_1))\rVert}}_2^2$ in the last constraint is convex.
Let $g(\grave{I}_1,\grave{I}_2) = {{\lVert\grave{I}_2 - \grave{A} \grave{I}_1\rVert^{2}_{2}}}$, where $\grave{I}_2 = M\cdot \mathcal{R}(I_2)$, $\grave{A} = MA$ and $\grave{I}_1 = \mathcal{R}(I_1)$. The function $g$ can be represented as $$\begin{aligned}
\nonumber
g(\grave{I}_1,\grave{I}_2) & = &(\grave{I}_2-\grave{A}\grave{I}_1)^T (\grave{I}_2-\grave{A}\grave{I}_1) \\ \nonumber & = & {\grave{I}_2^T\grave{I}_2- \grave{I}_2^T\grave{A}\grave{I}_1 - \grave{I}_1^T\grave{A}^T\grave{I}_2 + \grave{I}_1^T\grave{A}^T\grave{A}\grave{I}_1}. \end{aligned}$$ The Hessian $\nabla^2 g$ of the function $g$ with respect to $(\grave{I}_1, \grave{I}_2)$ is given as $$\begin{aligned}
\nonumber
\nabla^2 g = \left[ \begin{array}{cc}
2\grave{A}^T\grave{A} & -2\grave{A}^T \\
-2\grave{A} & 2\mathbbm{1} \\
\end{array}
\right]
= 2C^TC
\succeq 0.\end{aligned}$$ Here, $C = [-\grave{A} \;\; \mathbbm{1}]$, where $\mathbbm{1}$ represents the identity matrix, and $2C^{T}C \succeq 0$ follows from $2x^{T}C^{T}Cx = 2 {{\lVert Cx\rVert}}_2^2 \geq 0$ for any $x$. This means that the Hessian $\nabla^2 g$ is positive semi-definite and thus $g(\grave{I}_1,\grave{I}_2)$ is convex.
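The positive semi-definiteness of a matrix of the form $2C^TC$ can also be sanity-checked numerically; a small sketch where $C$ stacks a random matrix (a stand-in for the warping term) next to the identity, with size and seed chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
A_grave = rng.standard_normal((N, N))   # arbitrary stand-in for the warping term
C = np.hstack([-A_grave, np.eye(N)])    # block matrix [-A', 1]
H = 2 * C.T @ C                         # candidate Hessian
eigvals = np.linalg.eigvalsh(H)
assert eigvals.min() >= -1e-9           # numerically positive semi-definite
```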
We now propose an optimization methodology to solve the convex problem \[eqn:jr\_f\] with proximal splitting methods [@Combettes_prox]. For mathematical convenience, we rewrite \[eqn:jr\_f\] as $$\label{eqn:jr_mod_chap4}
\begin{aligned}
& \underset{X \in \mathbb{R}^{2N} }{\operatorname{argmin}}
& & \{ {{\lVert\mathcal{R}^{-1}(S_1X)\rVert}}_{TV} + {{\lVert\mathcal{R}^{-1}(S_2X)\rVert}}_{TV}\} \\
& \mbox{s.t.}
& & {{{\lVert S_1(Y - X)\rVert}}_2 \leq \epsilon_1}, {{{\lVert S_2(Y - X)\rVert}}_2 \leq \epsilon_1}, \\
&&&{{{\lVert BX\rVert}}_2^2 \leq \epsilon_2},
\end{aligned}$$ where $X = [ \mathcal{R}(I_1) \; ; \mathcal{R}(I_2) ] $, $Y= [ \mathcal{R}(\tilde{I}_1) \; ; \mathcal{R}(\tilde{I}_2) ] $, $S_1=[ \mathbbm{1} \; 0]$, $S_2=[ 0 \; \mathbbm{1}]$, $B=[ -MA \; M ] $ and $\mathbbm{1}$ represents the identity matrix. Recall that $\mathcal{R}^{-1}_{N_1\times N_2}$ (for simplicity we omit the subscript in Eq. (\[eqn:jr\_mod\_chap4\])) is the operator that outputs a matrix of dimensions $N_1\times N_2$ from a column vector of dimension $N = N_1N_2$. The optimization problem of Eq. (\[eqn:jr\_mod\_chap4\]) can be viewed as a special case of the general convex problem $$\label{eqn:convex_opt_chap4}
\underset{X \in \mathbb{R}^{2N}}{\operatorname{argmin}} \{ f_1(X) + f_2(X) + f_3(X) + f_4(X) + f_5(X)\},$$ where the functions $f_1, f_2, f_3, f_4, f_5 \in \Gamma_0(\mathbb{R}^{2N})$ [@Combettes_prox]. $\Gamma_0(\mathbb{R}^{2N})$ is the class of lower semicontinuous convex functions from $\mathbb{R}^{2N}$ to $(-\infty, +\infty]$ that are not identically $+\infty$. For the optimization problem given in Eq. (\[eqn:jr\_mod\_chap4\]), the functions in the representation of Eq. (\[eqn:convex\_opt\_chap4\]) are
1. [$f_1(X) ={{\lVert\mathcal{R}^{-1}(S_1X)\rVert}}_{TV}$]{},
2. [ $f_2(X) ={{\lVert\mathcal{R}^{-1}(S_2 X)\rVert}}_{TV}$]{},
3. [$ f_3(X) = i_{c_1}(X) = \left\{ \begin{array}{ll} 0 & X \in {c_1} \\ \infty & \mbox{otherwise,} \end{array} \right.
$\
i.e., $f_3(X)$ is the indicator function of the closed convex set ${c_1} = \{ X : {{\lVert S_1(Y-X)\rVert}}_2 \leq \epsilon_1 \},$ ]{}
4. [$ f_4(X) = i_{c_2}(X) = \left\{ \begin{array}{ll} 0 & X \in {c_2} \\ \infty & \mbox{otherwise,} \end{array} \right.
$\
where ${c_2} = \{ X : {{\lVert S_2(Y-X)\rVert}}_2 \leq \epsilon_1 \},$ ]{}
5. [ $f_5(X) = i_{c_3}(X) = \left\{ \begin{array}{ll} 0 & X \in {c_3} \\ \infty & \mbox{otherwise,} \end{array} \right. $\
where $ {c_3} = \{ X : {{\lVert BX\rVert^{2}_{2}}} \leq \epsilon_2\}.$]{}
The solution to the problem of Eq. (\[eqn:convex\_opt\_chap4\]) can be estimated by generating the recursive sequence $X^{(t+1)} = prox_{f}(X^{(t)})$, where $f = \sum_{i=1}^5 f_i$. The proximity operator of $f$ is defined as $prox_f(Z) = \underset{X}{\operatorname{argmin}}~ \{ f (X) + \frac{1}{2}{{\lVert X-Z\rVert^{2}_{2}}} \}$. The main difficulty with these iterations is the computation of the operator $prox_{f}(X)$. There is no closed form expression for $prox_{f}(X)$, especially when the function $f$ is a sum of two or more functions. In such cases, instead of computing $prox_{f}(X)$ directly for the combined function $f$, one can perform a sequence of calculations involving separately the individual operators $prox_{f_i}(X), \forall i \in \{1,\ldots,5 \}$. The algorithms in this class are known as *splitting methods* [@Combettes_prox] and lead to easily implementable algorithms.
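As a concrete instance of a proximity operator with a closed form, consider $f(x) = \lambda |x|$ (the scalar version of the sparsity prior mentioned earlier), whose prox is the well-known soft-thresholding operator. The following sketch is our own illustration of the prox definition, not part of the algorithm used in this work:

```python
import numpy as np

def prox_l1(z, lam):
    """prox of f(x) = lam*|x| at z, i.e. argmin_x lam*|x| + 0.5*(x - z)^2
    (elementwise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# brute-force check of the argmin definition on a fine grid
z, lam = 1.7, 1.0
grid = np.linspace(-5.0, 5.0, 100001)
obj = lam * np.abs(grid) + 0.5 * (grid - z) ** 2
assert abs(grid[obj.argmin()] - prox_l1(z, lam)) < 1e-3
```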
We now describe in more detail how to compute the *prox* of the functions $f_i, \forall i \in \{1,\ldots,5 \}$. For the function $f_1(X) = {{\lVert\mathcal{R}^{-1}(S_1X)\rVert}}_{TV}$, the $prox_{f_1}(X)$ can be computed using Chambolle’s algorithm [@Chambolle]. A similar approach can be used to compute $prox_{f_2}(X)$. The function $f_3$ can be represented as $f_3 = F\circ G$, where $F = i_{d(\epsilon_1)}$ and $G = S_1X - S_1Y$. The set $d(\epsilon_1)$ represents the $l_2$-ball defined as $d(\epsilon_1) = \{ y \in \mathbb{R}^{N} : {{\lVert y\rVert}}_2 \leq \epsilon_1 \} $. Then, $prox_{f_3}$ can be computed using the following closed form expression: $$\label{eqn:chap4_proxf3}
prox_{f_3}(X) = prox_{F\circ G}(X) = X + (S_1)^*(prox_{F}-\mathbbm{1})(G(X))$$ [@Fadili_ICIP], where $(S_1)^*$ represents the conjugate transpose of $S_1$. The $prox_{F}(y)$ with $F = i_{d(\epsilon_1)}$ can be computed using radial projection [@Combettes_prox] as $$\label{eqn:chap4_l2ball}
prox_F(y) = \left\{ \begin{array}{ll} y & {{\lVert y\rVert}}_2 \leq \epsilon_1 \\ \epsilon_1 \frac{y}{{{\lVert y\rVert}}_2} & \mbox{otherwise.} \end{array} \right.$$ The $prox$ for the function $f_4$ can also be computed using Eq. (\[eqn:chap4\_proxf3\]) by setting $F = i_{d(\epsilon_1)}$ and $G = S_2X- S_2Y$. Finally, the function $f_5$ can be represented with $F = i_{d(\sqrt{\epsilon_2})}$ and an affine operator $G_1 = BX$, i.e., $f_5 = F\circ G_1$. As the operator $B$ is not a tight frame, the $prox_{f_5}$ has to be computed with an iterative scheme [@Fadili_ICIP]. Let $\mu_t \in ( 0, 2/\gamma_2 )$, and let $\gamma_1$ and $\gamma_2$ be the frame constants with $\gamma_1 \mathbbm{1} \leq B B^* \leq \gamma_2 \mathbbm{1}$. The $prox_{f_5}$ can be calculated iteratively [@Fadili_ICIP] as $$\begin{aligned}
\label{eqn:chap4_proxf4_step1}
u^{(t+1)} &=& \mu_t ( \mathbbm{1} - prox_{\mu_t^{-1}F})(\mu_t^{-1}u^{(t)} + G_1p^{(t)} ) \\ \label{eqn:chap4_proxf4_step2}
p^{(t+1)} &=& X -B^* u ^{(t+1)},\end{aligned}$$ where $u^{(t)} \rightarrow u$ and $p^{(t)} \rightarrow prox_{F\circ G_1}(X) = prox_{f_5}(X) = X-B^*u$. It has been shown that both $u^{(t)}$ and $p^{(t)}$ converge linearly and that the best convergence rate is attained when ${\mu_t = 2/(\gamma_1+\gamma_2)}$.
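The radial projection onto the $l_2$-ball used above amounts to rescaling points outside the ball onto its boundary; a minimal numpy sketch of our own:

```python
import numpy as np

def prox_ball(y, eps):
    """Projection onto the l2-ball of radius eps (prox of its indicator)."""
    n = np.linalg.norm(y)
    return y if n <= eps else eps * y / n

y_in  = np.array([0.3, 0.4])   # norm 0.5 <= 1: left unchanged
y_out = np.array([3.0, 4.0])   # norm 5 > 1: radially projected
assert np.array_equal(prox_ball(y_in, 1.0), y_in)
assert np.allclose(prox_ball(y_out, 1.0), [0.6, 0.8])
```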
In our work, we use the parallel proximal algorithm (PPXA) proposed by Combettes *et al.* [@Combettes_prox] to solve Eq. (\[eqn:convex\_opt\_chap4\]), as this algorithm is easily implementable on multicore architectures due to its parallel structure. The PPXA algorithm starts with an initial solution $X^{(0)}$, computes $prox_{f_i}, \forall i \in \{1,\ldots, 5\}$, in each iteration, and uses the results to update the current solution $X^{(t)}$. The computation of the *prox* of the functions $f_i, \forall i \in \{1,\ldots, 5\}$, and the updating steps are repeated until convergence is reached. The authors have shown that the sequence $(X^{(t)})_{t\geq 1}$ generated by the PPXA algorithm is guaranteed to converge to the solution of problems such as the one given in Eq. (\[eqn:convex\_opt\_chap4\]).\
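The structure of the PPXA iteration can be illustrated on a toy problem; the following is our own sketch in the spirit of [@Combettes_prox] (the quadratic objectives, weights and step sizes are arbitrary choices), not the implementation used in our experiments:

```python
import numpy as np

# minimize f1 + f2 with f_i(x) = 0.5 * (x - a_i)^2; the minimizer is (a1 + a2) / 2
a = np.array([0.0, 4.0])

def prox_quad(y, gamma, ai):
    """prox of gamma * 0.5 * (x - ai)^2 at y (closed form for a quadratic)."""
    return (y + gamma * ai) / (1.0 + gamma)

gamma, lam, w = 1.0, 1.0, 0.5            # step size, relaxation, equal weights
y = np.zeros(2)                          # one auxiliary variable per function
x = w * y.sum()
for _ in range(200):
    p = np.array([prox_quad(y[i], gamma / w, a[i]) for i in range(2)])
    pbar = w * p.sum()                   # weighted average of the parallel proxes
    y = y + lam * (2 * pbar - x - p)
    x = x + lam * (pbar - x)
assert abs(x - 2.0) < 1e-6               # the iterate converges to the minimizer
```

The per-function prox computations in the first line of the loop are independent, which is what makes the algorithm parallelizable.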
Experimental Results {#sec:results}
====================
Setup
-----
We study now the performance of our distributed representation scheme for the joint reconstruction of pairs of compressed images. The experiments are carried out on six natural datasets, namely, *Tsukuba* (views center and right), *Venus* (views 2 and 6) [@Scharstein], *Plastic* (views 1 and 2) [@scharstein_plastic], *Flowergarden* (frames 5 and 6), *Breakdancers* (views 0 and 2) and *Ballet* (views 3 and 4) [@MSR_sequence]. The first four datasets have been captured by a camera array where the different viewpoints correspond to translating the camera along one of the image coordinate axes. In such a scenario, the motion of objects due to the viewpoint change is restricted to the horizontal direction, with no motion along the vertical direction. The depth estimation is thus a one-dimensional search problem, and the data cost function given in Eq. (\[eqn:datacost\]) is modified accordingly.
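For purely horizontal motion, the one-dimensional search can be sketched as a simple per-pixel winner-take-all disparity scan; this is a heavily simplified toy stand-in for the regularized estimation actually used (no smoothness term, absolute-difference data cost, and synthetic data of our own):

```python
import numpy as np

def disparity_1d(I_left, I_right, d_max):
    """Per-pixel winner-take-all 1-D disparity search: for each pixel (m, n)
    of the left image, pick the shift d minimizing |I_left(m,n) - I_right(m,n-d)|."""
    N1, N2 = I_left.shape
    costs = np.full((d_max + 1, N1, N2), np.inf)
    costs[0] = np.abs(I_left - I_right)
    for d in range(1, d_max + 1):
        costs[d, :, d:] = np.abs(I_left[:, d:] - I_right[:, :-d])
    return costs.argmin(axis=0)

# synthetic rectified pair: the right view is the left view shifted by 2 pixels
rng = np.random.default_rng(1)
I_left = rng.random((8, 16))
d_true = 2
I_right = np.zeros_like(I_left)
I_right[:, :-d_true] = I_left[:, d_true:]
est = disparity_1d(I_left, I_right, 4)   # recovers d_true away from the border
```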
We compress the images independently using an H.264 intra coding scheme, implemented with the JM reference software version 18 [@JM_software]. The bit rate at the encoder is varied by changing the quantization parameter (QP) in the H.264 coding scheme. In our experiments, we use six different QP values, namely $51, 48, 45, 42, 39\; \mbox{and}\; 35$, in order to generate the rate-distortion (RD) plots. Also, we use the same QP value while encoding the images $\mathcal{I}_1$ and $\mathcal{I}_2$, in order to ensure balanced rate allocation among the different cameras. We estimate a depth image from the decoded images $\tilde{I}_1$ and $\tilde{I}_2$ by solving the regularized energy minimization problem of Eq. (\[eqn:energy\_chap4\]) using the $\alpha$-expansion algorithm in Graph Cuts [@Graph_cuts]. Unless stated explicitly, we solve the OPT-1 optimization problem with the matrix $M$ constructed using Eq. (\[eqn:matrix\_M\]). The smoothness parameters $(\lambda, \tau)$ of the depth estimation problem of Eq. (\[eqn:depth\_est\]) and the $(\epsilon_1, \epsilon_2)$ parameters of the OPT-1 joint reconstruction problem are given in Table \[table:parameters\] for all six datasets; these parameters are selected by trial and error. The solution to the OPT-1 problem is computed by running the PPXA algorithm for 100 iterations.
We report in this section the performance of the proposed joint reconstruction scheme and highlight the benefit of exploiting the inter-view correlation while decoding the images. We then study the effect of compression on the quality of the estimated depth images. Next, we analyze how the matrix $M$, which enforces correlation consistency only on the corresponding (i.e., non-occluded) pixels, affects the quality of the reconstructed images. Finally, we compare the rate-distortion performance of our scheme with state-of-the-art distributed coding solutions and joint encoding algorithms.
[ Dataset]{} [$\lambda$]{} [$\tau$]{} [$\epsilon_1$]{} [$\epsilon_2$]{}
-------------- --------------- ------------ ------------------ ------------------
Tsukuba 190 4 3 2
Venus 220 4 1 2
Plastic 120 4 1 2
Flowergarden 170 3 1 1.25
Breakdancer 300 160 2 1
Ballet 290 160 1 2.2
: The parameters $(\lambda, \tau)$ of the depth estimation problem and $(\epsilon_1, \epsilon_2)$ of the OPT-1 problem used in our experiments.[]{data-label="table:parameters"}
$\begin{array}{c@{\hspace{0.1 in}}c} \multicolumn{1}{l}{\mbox{}} & \multicolumn{1}{l}{\mbox{}} \\
\epsfxsize=3in \epsffile{figures/rdplot_venus_stereo.eps} & \epsfxsize=3in \epsffile{figures/rdplot_flowergarden_stereo.eps} \\
\mbox{(a)} & \mbox{(b)} \\
\end{array}$
Performance Analysis
--------------------
We first compare our joint reconstruction results with respect to a scheme where the images are reconstructed independently. Fig. \[Fig:jr\_ir\_rectified\](a), Fig. \[Fig:jr\_ir\_rectified\](b) and Fig. \[Fig:jr\_ir\_breakdancers\] compare the overall quality of the decoded images between the independent (denoted as *H.264 Intra*) and the joint decoding solutions (denoted as *Proposed*), respectively for the Venus, Flowergarden and Breakdancers datasets. The x-axis represents the total number of bits spent on encoding the images and the y-axis represents the mean PSNR value of the reconstructed images $\hat{I}_1$ and $\hat{I}_2$. From the plots, we see that the proposed joint reconstruction scheme outperforms the independent reconstruction scheme by margins of about $0.7$ dB, $0.95$ dB and $0.3$ dB for the respective datasets. This confirms that the proposed joint decoding framework is effective in exploiting the inter-view correlation while reconstructing the images. Similar experimental results have been observed on the other datasets. Compared to the first two datasets, the gain due to joint reconstruction for the Breakdancers dataset is smaller, as confirmed in Fig. \[Fig:jr\_ir\_breakdancers\]. It is well known that this dataset is weakly correlated due to the large camera spacing [@MSR_sequence]; hence, the gain provided by the joint decoding is small.
We then quantitatively compare the RD performances of the joint and the independent coding schemes using the Bjontegaard metric [@Bjontegaard]. In our experiments, we use the first four points of the RD plot for the computation in order to highlight the benefit in the low bit rate region; this corresponds to the QP values $51, 48, 45 \; \mbox{and} \; 42$. The relative rate savings due to joint reconstruction for all six datasets are given in the second column of Table \[table:ratesavings\]. From the values in Table \[table:ratesavings\], we see that the benefit of joint reconstruction depends on the correlation among the images; in general, the higher the correlation, the better the performance. For example, the Flowergarden dataset gives $22.8\%$ rate savings on average compared to H.264 intra due to its very high correlation. On the other hand, the Breakdancers and Ballet datasets only provide about $5\%$ rate savings due to the weak correlation caused mainly by the large distance between the cameras. Though the gain is small for these datasets, we show later that the performance of our scheme competes with that of the joint encoding solutions based on H.264 at low bit rates.
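The Bjontegaard rate savings used here can be computed with the usual numerical recipe: fit $\log_{10}$(rate) as a cubic in PSNR for each curve and integrate the gap over the common PSNR interval. The following is our own sketch of that recipe (with made-up RD points), not the reference implementation of [@Bjontegaard]:

```python
import numpy as np

def bd_rate(rate_ref, psnr_ref, rate_test, psnr_test):
    """Bjontegaard average rate difference (%): negative values mean rate
    savings of the test curve over the reference. Fits log10(rate) as a
    cubic in PSNR and integrates the gap over the common PSNR interval."""
    p_ref = np.polyfit(psnr_ref, np.log10(rate_ref), 3)
    p_test = np.polyfit(psnr_test, np.log10(rate_test), 3)
    lo = max(np.min(psnr_ref), np.min(psnr_test))
    hi = min(np.max(psnr_ref), np.max(psnr_test))
    P_ref, P_test = np.polyint(p_ref), np.polyint(p_test)
    avg_diff = ((np.polyval(P_test, hi) - np.polyval(P_test, lo)) -
                (np.polyval(P_ref, hi) - np.polyval(P_ref, lo))) / (hi - lo)
    return (10.0 ** avg_diff - 1.0) * 100.0

rates = np.array([100.0, 200.0, 400.0, 800.0])   # made-up RD points
psnrs = np.array([30.0, 33.0, 36.0, 39.0])
assert abs(bd_rate(rates, psnrs, rates, psnrs)) < 1e-9   # identical curves: 0 %
```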
-------------- --------------- ---------------- ---------------- -----------
[ Dataset]{} [Proposed ]{} [Proposed:]{} [H.264: 4x4]{} [H.264]{}
[True depth]{}
Tsukuba 14.9 20.5 15.8 42.4
Venus 15.9 21.3 12.2 44.9
Plastic 10.3 11 8.4 28.7
Flowergarden 22.8 29.3 29.2 46.5
Breakdancer 5.8 6.6 -6.5 8.9
Ballet 4.4 6.1 -8.9 2.7
-------------- --------------- ---------------- ---------------- -----------
: Rate savings with respect to the independent coding schemes based on H.264 intra for the stereo images. The average rate savings ($\%$) are computed using the Bjontegaard metric for the QP values $51, 48, 45 \; \mbox{and} \; 42$. []{data-label="table:ratesavings"}
$\begin{array}{@{\hspace{-0.25 in}} c@{\hspace{-0.25 in}}c @{\hspace{-0.25 in}} c@{\hspace{-0.25 in}} c@{\hspace{-0.25 in}} c} \multicolumn{1}{l}{\mbox{}} & \multicolumn{1}{l}{\mbox{}} &\multicolumn{1}{l}{\mbox{}} & \multicolumn{1}{l}{\mbox{}} &\multicolumn{1}{l}{\mbox{}} \\
\epsfxsize=1.6in \epsffile{figures/venus-groundtruth.eps} & \epsfxsize=1.6in \epsffile{figures/venus_disp_QP51.eps} & \epsfxsize=1.6in \epsffile{figures/venus_disperror_QP51.eps} & \epsfxsize=1.6in \epsffile{figures/venus_disp_QP35.eps} & \epsfxsize=1.6in \epsffile{figures/venus_disperror_QP35.eps} \vspace{-0.1in} \\
\mbox{(a) $s/D_g$ } & \mbox{(b) $s/D $} & \mbox{(c) $|s/D_g-s/D| >1$} & \mbox{(d) $s/D$} & \mbox{(e) $|s/D_g-s/D| >1$}
\end{array}$
$\begin{array}{@{\hspace{-0.25 in}} c@{\hspace{-0.25 in}}c @{\hspace{-0.25 in}} c@{\hspace{-0.25 in}} c@{\hspace{-0.25 in}} c} \multicolumn{1}{l}{\mbox{}} & \multicolumn{1}{l}{\mbox{}} &\multicolumn{1}{l}{\mbox{}} & \multicolumn{1}{l}{\mbox{}} &\multicolumn{1}{l}{\mbox{}} \\
\epsfxsize=1.6in \epsffile{figures/flowergarden_disp_origimages.eps} & \epsfxsize=1.6in \epsffile{figures/flowergarden_disp_QP51.eps} & \epsfxsize=1.6in \epsffile{figures/flowergarden_disperror_QP51.eps} & \epsfxsize=1.6in \epsffile{figures/flowergarden_disp_QP35.eps} & \epsfxsize=1.6in \epsffile{figures/flowergarden_disperror_QP35.eps} \vspace{-0.1in} \\
\mbox{(a) $s/D_g$ } & \mbox{(b) $s/D $} & \mbox{(c) $|s/D_g-s/D| >1$} & \mbox{(d) $s/D$} & \mbox{(e) $|s/D_g-s/D| >1$}
\end{array}$
We then carry out the same experiments in a scenario where the images are jointly reconstructed using a correlation model that is estimated from the original images. This scheme serves as a benchmark for the joint reconstruction, since the correlation is accurately known at the decoder. The corresponding results are denoted as *proposed: True depth* in Fig. \[Fig:jr\_ir\_rectified\]. The corresponding rate savings compared to the independent compression based on H.264 intra are given in the third column of Table \[table:ratesavings\]. At low bit rates, in general, our scheme falls short of this upper bound due to the poor quality of the depth estimation from compressed images. For example, in Fig. \[Fig:jr\_ir\_rectified\](b) (for the Flowergarden dataset) we see that at a bit rate of 0.2 bpp (i.e., QP = $51$), the proposed scheme falls short of the upper bound performance by a margin of around $0.5$ dB. As a result, we see in Table \[table:ratesavings\] that the rate savings are larger when the actual depth information is used for the joint reconstruction than when the depth information is estimated from compressed images. We show in Fig. \[Fig:venu\_deptherror\](b) and Fig. \[Fig:venu\_deptherror\](d) the inverse depth images (i.e., disparity images) estimated from the decoded images $\tilde{I}_1, \tilde{I}_2$ that are encoded with QP = $51$ (resp. total bit rate = 0.08 bpp) and QP = $35$ (resp. total bit rate = 0.98 bpp), respectively, for the Venus dataset. Comparing these disparity images with the actual disparity information in Fig. \[Fig:venu\_deptherror\](a), we observe poor-quality disparity results for QP = $51$. Quantitatively, the errors in the disparity images are found to be $43\%$ and $12 \%$ for QP = $51$ and QP = $35$, respectively, when measured as the percentage of pixels with an absolute error greater than one.
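The disparity-error figure used above (percentage of pixels with an absolute error greater than one) is straightforward to compute; a short sketch with synthetic values of our own, not the actual Venus disparities:

```python
import numpy as np

def bad_pixel_rate(d_est, d_gt, thresh=1.0):
    """Percentage of pixels whose absolute disparity error exceeds thresh."""
    return 100.0 * np.mean(np.abs(d_est - d_gt) > thresh)

d_gt = np.zeros((4, 4))
d_est = d_gt.copy()
d_est[0, :2] = 3.0                         # two wrong pixels out of sixteen
assert bad_pixel_rate(d_est, d_gt) == 12.5 # 2/16 = 12.5 %
```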
This confirms that the quantization noise in the compressed images is not properly handled while estimating the correlation information. Similar conclusions can be drawn for the Flowergarden dataset from Fig. \[Fig:flowergarden\_deptherror\], where, in general, the depth information estimated from highly compressed images is not accurate. Developing robust correlation estimation techniques to alleviate this problem is left for future work. We finally see in Fig. \[Fig:jr\_ir\_rectified\] that the reconstruction quality achieved with the correlation estimated from compressed images converges to the upper-bound performance when the rate increases or, equivalently, when the quality of the decoded images $\tilde{I}_1$ and $\tilde{I}_2$ improves.
$\begin{array}{@{\hspace{-0 in}}c @{\hspace{0.1 in}}c @{\hspace{0.1 in}}c @{\hspace{0.1 in}}c} \multicolumn{1}{l}{\mbox{}} & \multicolumn{1}{l}{\mbox{}} & \multicolumn{1}{l}{\mbox{}}& \multicolumn{1}{l}{\mbox{}} \\
\epsfxsize=1.7in \epsffile{figures/Tsukuba_orig_rightn.eps} & \epsfxsize=1.7in \epsffile{figures/Tsukuba_jr_right_QP42_MIn.eps} & \epsfxsize=1.7in \epsffile{figures/Tsukuba_jr_right_QP42n.eps} & \epsfxsize=1.7in \epsffile{figures/Tsukuba_jr_left_QP42n.eps} \\
\mbox{(a) $\mathcal{I}_2$ }& \mbox{(b) $\hat{I}_2$ } &\mbox{(c) $\hat{I}_2$} & \mbox{(d) $\hat{I}_1$ }
\end{array}$
$\begin{array}{c@{\hspace{0.1 in}}c} \multicolumn{1}{l}{\mbox{}} & \multicolumn{1}{l}{\mbox{}} \\
\epsfxsize=3in \epsffile{figures/rdplot_venus_stereo_jp.eps} & \epsfxsize=3in \epsffile{figures/rdplot_flowergarden_stereo_jp.eps} \\
\mbox{(a)} & \mbox{(b)} \\
\end{array}$
We now analyze the importance of the matrix $M$ in the optimization problem \[eqn:jr\_f\], which restricts the correlation consistency measure to the non-occluded pixels, i.e., the holes in the warped image $A\cdot \mathcal{R}(I_1)$ are ignored while measuring the correlation consistency between the images $A\cdot\mathcal{R}(I_1)$ and $\mathcal{R}(I_2)$. In order to highlight the benefit, we first solve the OPT-1 joint reconstruction problem by setting $M = \mathbbm{1}$. The corresponding reconstructed right image $\hat{I}_2$ is shown in Fig. \[Fig:import\_M\](b). Comparing it with the original right view $\mathcal{I}_2$ in Fig. \[Fig:import\_M\](a), we see noticeable visual artifacts in the reconstructed right image $\hat{I}_2$. In particular, we notice strong artifacts along the edges of the *lamp holder* and in the *face* regions; this is mainly due to the improper handling of the occluded pixels. Quantitatively, the PSNR of the reconstructed image $\hat{I}_2$ is $26.84$ dB (the quality of the reconstructed left view $\hat{I}_1$ is $29.95$ dB). We then solve the OPT-1 optimization problem with the matrix $M$ constructed using Eq. (\[eqn:matrix\_M\]). The corresponding reconstructed right image $\hat{I}_2$ and left image $\hat{I}_1$ are shown in Fig. \[Fig:import\_M\](c) and Fig. \[Fig:import\_M\](d), respectively. We no longer see any annoying artifacts in the reconstructed image $\hat{I}_2$, thanks to the effective handling of the occlusions via the matrix $M$. Also, the quality of the reconstructed images becomes quite similar: $30.01$ dB and $29.97$ dB for the right and left views, respectively.
We then compare the RD performance of our scheme to a distributed coding solution (DSC) based on the LDPC encoding of DCT coefficients, where the disparity field is estimated at the decoder using Expectation Maximization (EM) algorithms [@David]. The resulting RD performance is given in Fig. \[Fig:jr\_jpeg\](a) and Fig. \[Fig:jr\_jpeg\](b) (denoted as *Disparity learning*) for the Venus and Flowergarden datasets, respectively. In the DSC scheme, the Wyner-Ziv image $\mathcal{I}_2$ is decoded with the JPEG-coded reference image $\mathcal{I}_1$ as the side information. In order to have a fair comparison between the proposed scheme and this DSC scheme [@David], we carry out our joint reconstruction experiments with JPEG compressed images. That is, instead of H.264 intra we now use JPEG to compress the images $\mathcal{I}_1$ and $\mathcal{I}_2$ independently. Then, from the JPEG coded images $\tilde{I}_1$ and $\tilde{I}_2$, we jointly reconstruct a pair of images $\hat{I}_1$ and $\hat{I}_2$ using the methodology described in Section \[sec:joint\_decoder\]. The resulting RD performance of the proposed scheme is shown in Fig. \[Fig:jr\_jpeg\](a) and Fig. \[Fig:jr\_jpeg\](b), respectively, for both datasets. We first notice that the proposed joint reconstruction scheme improves the quality of the compressed images; this is consistent with our earlier observations. We further observe that the disparity learning scheme marginally improves the quality of the compressed images only at low bit rates; it fails to outperform the JPEG coding scheme at high bit rates. Also, we note that the DSC scheme in [@David] requires a feedback channel in order to accurately control the LDPC encoding rate, while our proposed solution requires neither statistical correlation modeling at the encoder nor a feedback channel; this clearly highlights the benefits of the proposed solution.
For the sake of completeness, we finally compare the performance of our scheme to joint encoding solutions based on H.264. In particular, the joint compression of the views is carried out by setting the profile ID = 128; this corresponds to the stereo profile [@JM_software]. In this profile, one of the images (say $\mathcal{I}_1$) is encoded as an I-frame while the remaining view (say $\mathcal{I}_2$) is encoded as a P-frame. We consider two different settings for the H.264 motion estimation, which is performed with variable block sizes and with a fixed macroblock size of $4\times4$. The RD performance corresponding to both cases (resp. denoted as *H.264* and *H.264: 4$\times$4 blocks*) is shown in Fig. \[Fig:jr\_ir\_rectified\](a), Fig. \[Fig:jr\_ir\_rectified\](b) and Fig. \[Fig:jr\_ir\_breakdancers\] for the Venus, Flowergarden and Breakdancers datasets, respectively. Also, we report in columns 4 and 5 of Table \[table:ratesavings\] the rate savings of the joint encoding scheme compared to the H.264 intra scheme. First, it is interesting to note that for rectified images (or when the camera motion is horizontal), our scheme competes with the H.264 joint encoding performance when the block size is set to $4\times4$. However, our scheme could not perform as well at high bit rates due to the lack of texture encoding. In other words, our scheme decodes the images by exploiting the geometrical correlation information, while the visual information along the textures and edges is not perfectly captured. However, for non-rectified images like the Breakdancers dataset (see Fig. \[Fig:jr\_ir\_breakdancers\]), we see that our scheme competes with the joint encoding solutions based on H.264. Similar conclusions can be drawn for the Ballet dataset in Table \[table:ratesavings\], where the proposed scheme provides rate savings of 4.4$\%$, while H.264 saves only 2.7$\%$.
This is because block-based motion compensation is not an ideal model for capturing the inter-view correlation when the images are not rectified, which is the case for the Breakdancers and Ballet datasets. For the same reason, we see in Fig. \[Fig:jr\_ir\_breakdancers\] that the H.264 joint encoding with $4\times4$ blocks performs even worse than the H.264 intra coding scheme; this is indicated with a negative sign in Table \[table:ratesavings\].
Joint Reconstruction of multiple images {#sec:multiview}
=======================================
Optimization Problem
--------------------
So far, we have focused on the distributed representation of pairs of images. Now, we describe the extension of our framework to datasets with $J$ correlated images $\mathcal{I}_1,\mathcal{I}_2, \ldots, \mathcal{I}_J$ that are captured by the cameras $C_1, C_2, \ldots, C_J$ from different viewpoints. We further assume that the $J$ cameras are calibrated, and we denote the intrinsic camera matrices of the $J$ cameras by $P_1, P_2, \ldots, P_J$. Also, let $R_1, R_2, \ldots, R_J$ and $T_1, T_2, \ldots, T_J$ represent, respectively, the rotations and translations of the $J$ cameras with respect to the global coordinate system. Similarly to the stereo setup, the $J$ correlated images $\mathcal{I}_1,\mathcal{I}_2, \ldots, \mathcal{I}_J$ are compressed independently (e.g., with H.264 intra or JPEG) with a balanced rate allocation. The compressed visual information is transmitted to the central decoder, where we jointly process all the $J$ compressed views in order to benefit from the inter-view correlation for improved reconstruction quality. In particular, as in the stereo decoding framework, we first estimate a depth image from the $J$ decoded images $\tilde{I}_1,\tilde{I}_2, \ldots, \tilde{I}_J$ and we use it for joint signal recovery. The $J$ reconstructed images are denoted by $\hat{I}_1,\hat{I}_2, \ldots, \hat{I}_J$.
We propose to estimate the depth image from the $J$ decoded images in a regularized energy minimization framework as a tradeoff between a data term $\mathcal{E}_{d}$ and a smoothness term $\mathcal{E}_{s}$. The depth image $D$ is estimated by minimizing the energy $\mathcal{E}$ that is represented as $$\label{eqn:energy_mv}
D = \underset{D_c}{\operatorname{argmin}} \; \mathcal{E}(D_c) = \underset{D_c}{\operatorname{argmin}} \; \{ \mathcal{E}_{d}(D_c) + \lambda \; \mathcal{E}_{s}(D_c)\}.$$ where $D_c$ represents the candidate depth images. Note that this formulation is similar to Eq. (\[eqn:energy\_chap4\]) in the stereo case.
The data term $\mathcal{E}_{d}(D_c)$ in the multi-view setup should measure the cost of assigning a depth image $D_c$ that is globally consistent with all the compressed images. In the literature, there are plenty of works that address the problem of finding a good multi-view data cost function with global consistency, e.g., [@mvstereo_overview; @multiview_gc; @Stretcha]. In this work, for the sake of simplicity, we propose to compute the global photo consistency as the cumulative sum of the data term $E_d(D_c)$ given in Eq. (\[eqn:datacost\]). That is, the global photo consistency term is given as $$\label{eqn:datacost_mv}
\mathcal{E}_d(D_c) = \sum_{j=2}^{J} \sum_{m,n}^{N_1,N_2} {{\lVert\tilde{I}_j(m,n)- \mathcal{W}_j(\tilde{I}_1(m,n), D_c(m,n))\rVert^{2}_{2}}},$$ where $\mathcal{W}_j$ is the warping function that projects the intensity values in view $1$ to view $j$ using the depth information $D_c$. As described previously in Section \[sec:depth\_est\], this warping is a two-step process: we first project the pixels from view $1$ to the global coordinate system using Eq. (\[eqn:proj\_step1\]) and then project them to view $j$ using the camera parameters $P_j, R_j$ and $T_j$ (see Eq. (\[eqn:proj\_step2\])). The objective of the smoothness cost $\mathcal{E}_s$ is to enforce consistency in the depth solution. For a candidate depth image $D_c$, the smoothness energy is computed using Eq. (\[eqn:chap4\_energy\]). Finally, the minimization problem of Eq. (\[eqn:energy\_mv\]) can be solved using standard optimization techniques (e.g., Graph Cuts) in order to estimate a depth image $D$ from the decoded images. Lastly, we note that one could estimate more accurate depth information by considering additional energy terms in the energy model of Eq. (\[eqn:energy\_mv\]) in order to properly account for occlusions, global scene visibility, etc. More details are available in the overview paper [@mvstereo_overview].
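The two-step warping $\mathcal{W}_j$ can be sketched as follows; this is an illustrative geometric computation under assumed pinhole conventions ($x_{\mathrm{cam}} = R\,X_{\mathrm{world}} + T$, followed by projection with $P$ and perspective division), not the paper's code:

```python
import numpy as np

def warp_pixel(u, v, depth, P1, R1, T1, Pj, Rj, Tj):
    """Warp pixel (u, v) of view 1, with depth `depth`, into view j.

    Step 1: back-project the pixel to a 3-D point in the global
    coordinate system (cf. Eq. (proj_step1)).
    Step 2: project that point into view j using the camera
    parameters P_j, R_j and T_j (cf. Eq. (proj_step2))."""
    ray = np.linalg.inv(P1) @ np.array([u, v, 1.0])
    X_cam1 = depth * ray                  # 3-D point in camera-1 frame
    X_world = R1.T @ (X_cam1 - T1)        # step 1: to the global frame
    X_camj = Rj @ X_world + Tj            # step 2: into camera-j frame
    p = Pj @ X_camj
    return p[0] / p[2], p[1] / p[2]       # perspective division
```

For coincident cameras the map reduces to the identity, which gives a quick sanity check of the conventions.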
Now, we focus on the joint decoding problem, where we are interested in the reconstruction of $J$ correlated images from the compressed information $\tilde{I}_1,\tilde{I}_2, \ldots, \tilde{I}_J$; this is carried out by exploiting the correlation that is given in terms of the depth information $D$, or of the operator $A$ derived from the depth $D$ as described in Section \[sec:warp\_as\_linear\]. In particular, we can represent the warping operation $\mathcal{W}_j(\tilde{I}_1,D)$ as a matrix multiplication of the form $\bar{I}_j = A_j\cdot\mathcal{R}(\tilde{I}_1)$, where $\bar{I}_j$ represents an approximation of the image at viewpoint $j$. We propose to jointly reconstruct the $J$ multi-view images as a solution to the following optimization problem: $$\begin{aligned}
\tag{OPT-2} \label{eqn:jr_mv_n}
(\hat{I}_1, \hat{I}_2, \ldots, \hat{I}_J) = \underset{I_1, I_2, \ldots, I_J}{\operatorname{argmin}} \;
& \sum_{j =1}^J {{\lVert I_j\rVert}}_{TV} \\ \nonumber
\mbox{s.t.} \;
& {{\lVert\mathcal{R}(I_1) - \mathcal{R}(\tilde{I}_1)\rVert}}_2 \leq \delta_1, \\ \nonumber
& {{\lVert\mathcal{R}(I_2) - \mathcal{R}(\tilde{I}_2)\rVert}}_2 \leq \delta_1, \ldots, \\ \nonumber
& {{\lVert\mathcal{R}(I_J) - \mathcal{R}(\tilde{I}_J)\rVert}}_2 \leq \delta_1, \\ \nonumber
& \sum_{j=2}^J {{\lVert M_j(\mathcal{R}(I_j) - A_j\cdot \mathcal{R}(I_1))\rVert^{2}_{2}}} \leq \delta_2,\end{aligned}$$ where $M_j$ (see Eq. (\[eqn:matrix\_M\])) is a diagonal matrix that is constructed using a procedure similar to the one described in Section \[sec:jr\]; this allows us to measure the correlation consistency only on those pixels that are available in all the views. From the above equation, we see that the proposed reconstruction algorithm estimates $J$ TV-smooth images that are consistent with both the compressed and the correlation (depth) information. It is interesting to note that by setting $J=2$ in OPT-2, we recover the stereo joint reconstruction problem OPT-1.
Finally, using the results derived in Prop. \[prop:jr\_cvx\] it is easy to check that the optimization problem OPT-2 is convex. Therefore, our multi-view joint reconstruction problem OPT-2 can also be solved using proximal splitting methods. We can rewrite the OPT-2 problem as $$\begin{aligned}
\label{eqn:jr_mv_mod}
\underset{X \in \mathbb{R}^{JN}}{\operatorname{argmin}} \; & \sum_{j =1}^J {{\lVert\mathcal{R}^{-1}(S_jX)\rVert}}_{TV} \\ \nonumber
\mbox{s.t.} \;
& {{{\lVert S_1(Y - X)\rVert}}_2 \leq \delta_1}, {{{\lVert S_2(Y - X)\rVert}}_2 \leq \delta_1}, \ldots, \\ \nonumber
& {{{\lVert S_J(Y - X)\rVert}}_2 \leq \delta_1}, {{{\lVert HX\rVert}}_2^2 \leq \delta_2}. \end{aligned}$$ Here, $X = [ \mathcal{R}({I}_1); \; \mathcal{R}(I_2); \; \cdots \; ; \mathcal{R}(I_J) ]$, $Y = [ \mathcal{R}(\tilde{I}_1) ;\; \mathcal{R}(\tilde{I}_2) ; \; \cdots \; ;\mathcal{R}(\tilde{I}_J) ]$, $S_1=[ \mathbbm{1} \; 0 \; \cdots \; 0 ]$, $S_J=[ 0 \; 0 \; \cdots \;\mathbbm{1}]$, and the matrix $H$ is given as $$H = {\left[ \begin{array}{ccccc}
-M_2A_2 & M_2 &0 & \ldots &0 \\
-M_3A_3 & 0 & M_3 & \ldots &0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
-M_JA_J & 0 & 0 & \ldots & M_J \end{array} \right]} .$$ It can be noted that the above optimization problem is an extension of the one described in Eq. (\[eqn:jr\_mod\_chap4\]), where the TV prior, measurement and correlation consistency objectives are now applied to all the $J$ images. Therefore, the *prox* operators for the objective function and the constraints of Eq. (\[eqn:jr\_mv\_mod\]) can be computed as described in Section \[Sec:optmeth\].
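As an illustration of the block structure of $H$, the matrix can be assembled as follows (a NumPy sketch with dense blocks for readability; in practice the $A_j$ and $M_j$ would be sparse):

```python
import numpy as np

def build_H(M_list, A_list):
    """Assemble the block matrix H for J views from the occlusion
    masks M_2, ..., M_J and warping operators A_2, ..., A_J (each
    N x N).  Row block j-1 reads [-M_j A_j, 0, ..., M_j, ..., 0],
    so that H @ X stacks the masked residuals
    M_j (R(I_j) - A_j R(I_1))."""
    J = len(M_list) + 1           # number of views
    N = M_list[0].shape[0]
    rows = []
    for j, (M, A) in enumerate(zip(M_list, A_list), start=2):
        blocks = [-M @ A] + [np.zeros((N, N))] * (J - 1)
        blocks[j - 1] = M         # block in the column of view j
        rows.append(blocks)
    return np.block(rows)
```

Each row block evaluates one masked residual, so the constraint $\lVert HX\rVert_2^2 \leq \delta_2$ of Eq. (\[eqn:jr\_mv\_mod\]) aggregates the pairwise consistency terms of OPT-2.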
Performance Evaluation
----------------------
We now evaluate the performance of the multi-view joint reconstruction algorithm using five images (center, left, right, bottom and top views) of *Tsukuba* [@multiview_gc], three views (views 0, 1 and 2) of *Plastic* [@scharstein_plastic], three views (views 0, 2 and 4) of *Breakdancers* and three views (views 3, 4 and 5) of *Ballet* [@MSR_sequence]. Similarly to the stereo setup, we independently encode the multi-view images using H.264 intra by varying the QP values. At the joint decoder, we estimate a depth image $D$ from the compressed images by solving Eq. (\[eqn:energy\_mv\]) with parameters $(\lambda, \tau)$ = $(390,4),(180,4), (330, 180)$ and $(300, 180)$, respectively for the different datasets. Then, using the estimated depth image $D$, we jointly decode the multiple views as a solution to the problem OPT-2 with the matrix $M_j$ constructed using Eq. (\[eqn:matrix\_M\]). This problem is solved with the parameters $(\delta_1, \delta_2) = (2.5, 7), (1,3), (2.3, 2)$ and $(1.1,4.3)$, respectively for the datasets. Finally, we run the PPXA algorithm for 100 iterations in order to reconstruct the $J$ correlated images.
$\begin{array}{c@{\hspace{0.1 in}}c} \multicolumn{1}{l}{\mbox{}} & \multicolumn{1}{l}{\mbox{}} \\
\epsfxsize=3in \epsffile{figures/rdplot_tsukuba_multiview.eps} & \epsfxsize=3in \epsffile{figures/rdplot_plastic_multiview.eps} \\
\mbox{(a)} & \mbox{(b)}
\end{array}$
$\begin{array}{c@{\hspace{0.1 in}}c} \multicolumn{1}{l}{\mbox{}} & \multicolumn{1}{l}{\mbox{}} \\
\epsfxsize=3in \epsffile{figures/rdplot_breakdancers_multiview.eps} & \epsfxsize=3in \epsffile{figures/rdplot_ballet_multiview.eps} \\
\mbox{(a)} & \mbox{(b)}
\end{array}$
We first compare our results with a stereo setup, where the depth estimation and the joint reconstruction steps are carried out with pairs of images. In more detail, we take $\mathcal{I}_1$ to be the center image in Tsukuba, view 1 in Plastic, view 2 in Breakdancers and view 4 in Ballet, respectively, and we perform joint decoding between the image $\mathcal{I}_1$ and the remaining images by selecting different pairs of images independently (all pairs include $\mathcal{I}_1$). For example, for the Tsukuba dataset, we perform the depth estimation and the joint reconstruction steps in the following order: (i) center and right views; (ii) center and left views; (iii) center and top views; and (iv) center and bottom views. After decoding all the images, we take the mean PSNR of all the reconstructed images. Note that, in this setup, the center image is reconstructed four times. For a fair comparison, we keep the reconstructed image $\hat{I}_1$ that gives the highest PSNR with respect to $\mathcal{I}_1$. In a similar way, the experiments are carried out for the other datasets, where we perform the joint reconstruction of pairs of images and then compute the average PSNR of the reconstructed images. The resulting RD performance is denoted as *Proposed: Stereo* in Fig. \[Fig:rd\_mv\](a), Fig. \[Fig:rd\_mv\](b), Fig. \[Fig:rd\_mv2\](a) and Fig. \[Fig:rd\_mv2\](b) for the different datasets. From Fig. \[Fig:rd\_mv\] and Fig. \[Fig:rd\_mv2\], it is clear that the proposed joint multi-view reconstruction scheme (denoted as *Proposed: Multiview*) performs better than the algorithm where the images are handled in pairs. This clearly highlights the benefits of our proposed solution. We also calculate the rate savings compared to an H.264 intra encoding and the results are tabulated in the second and third columns of Table \[table:ratesavings\_mv\]. It is clear that the rate savings are higher in the multi-view setup than in the stereo setup.
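The quality figures above use the standard PSNR definition; for completeness, a minimal sketch for 8-bit images (illustrative, not the evaluation code used for the tables):

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB (standard definition),
    used here to average the quality of the reconstructed views."""
    diff = reference.astype(float) - reconstruction.astype(float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```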
Finally, we note that the proposed multi-view joint decoding framework is a simple extension of the stereo image reconstruction algorithm. Still, it allows us to show experimentally that it is beneficial to handle all the multi-view images simultaneously at the decoder rather than decoding them in pairs. We strongly believe that the rate-distortion performance in the multi-view problem can be further improved when the depth information is estimated more accurately. For instance, this can be achieved by explicitly considering the visibility and occlusion constraints in the depth estimation framework, e.g., [@multiview_gc; @Stretcha]. We leave this topic as part of our future work.
--------------- ---------------- ---------------- ---------------- -----------
[ Data set]{} [Proposed: ]{} [Proposed: ]{} [H.264: 4x4]{} [H.264]{}
[ ]{} [Stereo ]{} [Multiview]{}
Tsukuba 14.7 19.2 20.3 77.8
Plastic 10.2 13.2 11.5 45.5
Breakdancers 5.7 7.8 -1.5 14.7
Ballet 4.2 6.6 -2.3 9.2
--------------- ---------------- ---------------- ---------------- -----------
: Rate savings with respect to the independent coding scheme based on H.264 intra for the multi-view problem. The rate savings (in $\%$) are computed using the Bjontegaard metric for the QP values $52, 48, 45 \; \mbox{and} \; 42$. []{data-label="table:ratesavings_mv"}
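The rate savings in the table are computed with the Bjontegaard metric; a common formulation of the delta-rate computation can be sketched as follows (an illustrative re-implementation, using the usual cubic fit of log-rate as a function of PSNR; a negative BD-rate corresponds to rate savings):

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard delta-rate in percent: average rate difference
    between two RD curves over their common PSNR range.  Cubic
    polynomials are fitted to log-rate as a function of PSNR and
    integrated over the overlapping quality interval."""
    p_ref = np.polyfit(psnr_ref, np.log(rates_ref), 3)
    p_test = np.polyfit(psnr_test, np.log(rates_test), 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    P_ref, P_test = np.polyint(p_ref), np.polyint(p_test)
    avg_diff = (np.polyval(P_test, hi) - np.polyval(P_test, lo)
                - np.polyval(P_ref, hi) + np.polyval(P_ref, lo)) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0
```

For instance, a test curve using half the rate of the reference at every quality yields roughly $-50\%$, i.e., $50\%$ rate savings.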
We then compare the RD performance of our multi-view joint decoding algorithm to a state-of-the-art distributed coding scheme based on DISCOVER [@discover]. The DSC experiments are carried out in the following settings. In the Tsukuba dataset, we consider four views, namely the left, right, top and bottom images, as the key frames, and the center view is considered as the Wyner-Ziv frame. At the decoder, we generate the side information by fusing two side information images that are generated based on motion compensated interpolation: (i) from the left and right decoded views; and (ii) from the top and bottom decoded views. This fusion step is implemented using the algorithm proposed in [@thomas_fusion]. For the other datasets, we consider the two extreme views as the key frames and the center view is considered as the Wyner-Ziv frame. In this scenario, a side information image is generated based on motion compensated interpolation from the decoded key frames. The resulting rate-distortion performance is shown in Fig. \[Fig:rd\_mv\] and Fig. \[Fig:rd\_mv2\] (denoted as *DISCOVER*). Comparing the performance of the proposed scheme (denoted as *Proposed: Multiview*) and the DISCOVER scheme, we see that our scheme outperforms the distributed coding solution. Note that this is the case even for the Tsukuba dataset, where four images are fused together to estimate the best possible side information. Furthermore, we can see that the DSC scheme based on DISCOVER actually performs worse (except for the Tsukuba dataset) than the H.264 intra scheme where all the images are decoded independently. This is mainly due to the poor quality of the side information image generated based on motion compensated interpolation. In other words, the linear motion assumption is not an ideal model for capturing the correlation between images captured in multi-view camera networks.
Finally, it is interesting to note that our joint decoding framework requires neither a Slepian-Wolf encoder nor a feedback channel, while the DISCOVER coding scheme requires a feedback channel to ensure successful decoding; this comes at the price of high latency due to multiple requests from the decoder [@DVC_overview].
For the sake of completeness, we finally compare the performance of our scheme with respect to the joint encoding framework based on H.264 with an $\mbox{IPP}$ coding structure. More precisely, we consider one of the views as the I-frame (the center view, view 0, view 0 and view 3 for the different datasets, respectively), and the remaining views are encoded as P-frames. We perform the joint encoding experiments where the motion compensation is carried out with both variable block sizes and a fixed block size of $4\times 4$. The resulting rate-distortion performance is shown in Fig. \[Fig:rd\_mv\] and Fig. \[Fig:rd\_mv2\]. The corresponding rate savings with respect to H.264 intra are given in columns 4 and 5 of Table \[table:ratesavings\_mv\]. From the plots (see Figs. \[Fig:rd\_mv\] and \[Fig:rd\_mv2\]) and from Table \[table:ratesavings\_mv\], it is clear that our proposed multi-view reconstruction scheme competes with and sometimes beats the performance of the H.264 4$\times$4 scheme at low bit rates; this is consistent with the tendencies we have observed in the stereo experiments. However, at high bit rates our scheme performs worse than the H.264 joint coding scheme due to the suboptimal representation of high frequency components such as edges and textures. Contrarily to H.264, our scheme is however distributed, which reduces the complexity at the encoders; this is attractive for distributed processing applications.
Conclusions {#sec:conc}
===========
In this paper, we have proposed a novel rate-balanced distributed representation scheme for compressing the correlated multi-view images captured in camera networks. In contrast to classical DSC schemes, our scheme compresses the images independently, without knowing the inter-view statistical relationship between the images at the encoder. We have proposed a novel joint decoding algorithm based on a constrained optimization problem that improves the reconstruction quality by exploiting the correlation between images. We have shown that our joint reconstruction problem is convex, so that it can be efficiently solved using proximal methods. Simulation results confirm that the proposed joint representation algorithm is successful in improving the reconstruction quality of the compressed images, with a balanced quality between the images. Furthermore, we have shown by experiments that the proposed coding scheme outperforms state-of-the-art distributed coding solutions based on disparity learning and on DISCOVER. Therefore, our scheme provides an effective solution for distributed image processing with low encoding complexity, since it requires neither a Slepian-Wolf encoder nor a feedback channel. Our future work focuses on developing robust techniques to estimate more accurate correlation information from highly compressed images.
Acknowledgments
===============
The authors would like to thank Dr. Thomas Maugey for many insightful discussions and for his help in the experimental comparisons with the DISCOVER distributed coding scheme.
[^1]: The authors are with Signal Processing Laboratory - LTS4, Institute of Electrical Engineering, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne 1015, Switzerland. e-mail: ([email protected]; [email protected]).
[^2]: Part of this work has been accepted to the European Signal Processing Conference (EUSIPCO), Bucharest, Romania, Aug. 2012 [@Vijay_EUSIPCO2012].
[^3]: For consistency, we use the compressed image $\tilde{I}_1$; however, this matrix multiplication holds even if one uses the original image $\mathcal{I}_1$ for warping.
[^4]: We assume that the pixels are scanned from left to right and then top to bottom.
---
abstract: 'In this paper, we prove the existence of solutions to the Fu-Yau equation on compact Kähler manifolds. As an application, we give a class of non-trivial solutions of the modified Strominger system.'
address:
- 'Institute of Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, P. R. China'
- 'School of Mathematical Sciences, University of Science and Technology of China, Hefei 230026, P. R. China'
- 'School of Mathematical Sciences, Peking University, Yiheyuan Road 5, Beijing 100871, P. R. China '
author:
- Jianchun Chu
- Liding Huang
- Xiaohua Zhu
title: 'The Fu-Yau equation in higher dimensions'
---
\[section\] \[theorem\][Lemma]{} \[theorem\][Corollary]{} \[theorem\][Proposition]{} \[section\] \[theorem\][Remark]{} \[theorem\][Conjecture]{}
Introduction
============
In 1985, Strominger proposed a new system of equations, now referred to as the Strominger system, on $3$-dimensional complex manifolds [@Str86]. This system arises from the study of supergravity in theoretical physics. Mathematically, the Strominger system can be regarded as a generalization of the Calabi equation for Ricci-flat Kähler metrics to non-Kähler spaces [@YS05]. It is also related to Reid's fantasy on the moduli space of Calabi-Yau threefolds [@RM87].
Let us first recall this system. Assume that $X$ is a $3$-dimensional Hermitian manifold which admits a nowhere vanishing holomorphic $(3,0)$-form $\Omega$. Let $E\rightarrow X$ be a holomorphic vector bundle with Hermitian metric $H$. The Strominger system is given by $$\label{Hermitian-Einstein equation}
F_{H}\wedge\omega_{X}^{2} = 0, F_{H}^{2,0} = F_{H}^{0,2} = 0;$$ $$\label{Dilation equation}
d^{*}\omega_{X} = \sqrt{-1}({\overline{\partial}}- \partial)\log\|\Omega\|_{\omega_{X}};$$ $$\label{Bianchi identity}
{\sqrt{-1} \partial \overline{\partial}}\omega_{X}-\frac{\alpha}{4}(\textrm{tr}R\wedge R -\textrm{tr}F_{H}\wedge F_{H})=0,$$ where $\omega_X$ is a Hermitian metric on $X$ with Chern curvature $R$, and $F_{H}$ is the curvature of the Hermitian metric $H$ on $E$. By $\textrm{tr}$, we denote the trace on the endomorphism bundle of either $E$ or $TX$.
To achieve a supersymmetric theory, both $H$ and $\omega_{X}$ have to satisfy (\[Dilation equation\]) and (\[Bianchi identity\]). Equation (\[Dilation equation\]) is also called the dilation equation. Li and Yau observed that it is equivalent to a conformally balanced condition [@LY05], $$d(\|\Omega\|_{\omega_{X}}\omega_{X}^{2}) \,=\, 0.$$ The least understood equation of the system is known as the Bianchi identity, which is also related to index theory for Dirac operators ([@FD86], [@WE85]), the topological theory of string structures ([@BU11], [@RC11], [@SSS12]) and generalized geometry ([@BH15], [@FM14], [@FRT]). It is an equation on $4$-forms which intertwines $\omega_{X}$ with the curvatures $R$ and $F_{H}$, and is very difficult to understand from the analytic point of view.
We know little about the Strominger system in general, except on a few special spaces where one can make use of particular structures. In [@LY05], Li and Yau found the first irreducible smooth solution. They considered a stable holomorphic bundle $E$ of rank $r=4,5$ on a Calabi-Yau $3$-fold $X$ and constructed a solution of the Strominger system as a perturbation of a Calabi-Yau metric on $X$ and a Hermitian-Einstein metric $H$ on $E$.
In [@FuY08], Fu and Yau constructed non-perturbative, non-Kähler solutions of the Strominger system on a toric fibration over a $K3$ surface constructed by Goldstein-Prokushkin. Let us recall this construction. Let $\omega_{1}$ and $\omega_{2}$ be two anti-self-dual $(1,1)$-forms on a $K3$ surface $(S,\omega_{S})$ (where $\omega_{S}$ is a Kähler Ricci-flat metric on $S$) with a nowhere vanishing holomorphic $(2,0)$-form $\Omega_{S}$, satisfying $[\frac{\omega_{1}}{2\pi}],[\frac{\omega_{2}}{2\pi}]\in H^{1,1}(S,\mathbb{Z})$. In [@GP04], Goldstein-Prokushkin constructed a toric fibration $\pi: X\rightarrow S$, determined by $\omega_{1}$ and $\omega_{2}$, and a $(1,0)$-form $\theta$ on $X$ such that $$\Omega = \pi^{*}(\Omega_{S})\wedge\theta$$ defines a nowhere vanishing holomorphic $(3,0)$-form on $X$. Then, for any ${\varphi}\in C^{\infty}(S)$, $(X,\omega_{{\varphi}})$ always satisfies (\[Dilation equation\]), where $$\omega_{{\varphi}} = \pi^{*}(e^{{\varphi}}\omega_{S})+\sqrt{-1}\theta\wedge{\overline{\theta}}.$$ Thus if $E\rightarrow X$ is a degree zero stable holomorphic vector bundle with a Hermitian-Einstein metric $H$ on $E$, then $(\pi^{*}E,\pi^{*}H,X,\omega_{{\varphi}})$ satisfies both (\[Hermitian-Einstein equation\]) and (\[Dilation equation\]). In [@FuY07], Fu and Yau showed that (\[Bianchi identity\]) for $(\pi^{*}E,\pi^{*}H,X,\omega_{{\varphi}})$ is equivalent to the following equation for $\varphi$, also called the Fu-Yau equation, $$\label{Fu-Yau equation-1}
{\sqrt{-1} \partial \overline{\partial}}(e^{{\varphi}}\omega_{S}-\alpha e^{-{\varphi}}\rho)+2\alpha{\sqrt{-1} \partial \overline{\partial}}{\varphi}\wedge{\sqrt{-1} \partial \overline{\partial}}{\varphi}+\mu\frac{\omega_{S}^{2}}{2!}=0,$$ where $\rho$ is a real-valued smooth $(1,1)$-form, $\mu$ is a smooth function and $\alpha\neq0$ is a constant called the slope parameter. They further proved the existence of solutions of (\[Fu-Yau equation-1\]) on Kähler surfaces in the cases $\alpha<0$ [@FuY07] and $\alpha>0$ [@FuY08], respectively.
In higher dimensions, Fu and Yau proposed a modified Strominger system for $(E, H, X, \omega_X )$, $$F_{H}\wedge\omega_{X}^{n} = 0,\quad F_{H}^{2,0} = F_{H}^{0,2} = 0;$$ $$\label{Modified dilatino equation}
d(\|\Omega\|_{\omega_{X}}^{\frac{2(n-1)}{n}}\omega_{X}^{n}) = 0;$$ $$\label{Modified Bianchi identity}
\left({\sqrt{-1} \partial \overline{\partial}}\omega_{X}-\frac{\alpha}{4}(\textrm{tr}R\wedge R-\textrm{tr}F_{H}\wedge F_{H})\right)\wedge\omega_{X}^{n-2} = 0.$$ Here $X$ is an $(n+1)$-dimensional Hermitian manifold, equipped with a nowhere vanishing holomorphic $(n+1,0)$-form $\Omega$. Clearly, the modified Strominger system is the same as the original Strominger system when $n=2$. Given any Calabi-Yau manifold $M$ with a nowhere vanishing holomorphic $(n,0)$-form $\Omega_{M}$, Goldstein-Prokushkin's construction gives rise to a toric fibration $\pi:X\mapsto M$ as in the case of $K3$ surfaces. Fu and Yau showed that the modified Strominger system for $(\pi^{*}E,\pi^{*}H,X,\omega_{{\varphi}})$ can be reduced to the Fu-Yau equation on $M$, $$\label{Fu-Yau equation}
\begin{split}
{\sqrt{-1} \partial \overline{\partial}}(e^{{\varphi}}\omega & -\alpha e^{-{\varphi}}\rho) \wedge\omega^{n-2} \\
& +n\alpha{\sqrt{-1} \partial \overline{\partial}}{\varphi}\wedge{\sqrt{-1} \partial \overline{\partial}}{\varphi}\wedge\omega^{n-2}+\mu\frac{\omega^{n}}{n!}=0.
\end{split}$$
More recently, Phong, Picard and Zhang proved the existence of solutions of (\[Fu-Yau equation\]) in higher dimensions when $\alpha<0$ [@PPZ16b]. However, the solvability of (\[Fu-Yau equation\]) in higher dimensions is still open when $\alpha>0$. The purpose of the present paper is to give a complete solution in this case. Actually, we will give a unified treatment of (\[Fu-Yau equation\]) in higher dimensions in both cases $\alpha>0$ and $\alpha<0$; more precisely, we prove
\[Existence Theorem\] Let $(M,\omega)$ be an $n$-dimensional compact Kähler manifold. There exists a small constant $A_0>0$ depending only on $\alpha$, $\rho$, $\mu$ and $(M,\omega)$ such that for any positive $A{\leqslant}A_0$, there exists a smooth solution ${\varphi}$ of (\[Fu-Yau equation\]) satisfying the elliptic condition $$\label{Elliptic condition}
\tilde{\omega} = e^{{\varphi}}\omega+\alpha e^{-{\varphi}}\rho+2n\alpha{\sqrt{-1} \partial \overline{\partial}}{\varphi}\in \Gamma_{2}(M),$$ and the normalization condition $$\label{Normalization condition}
\|e^{-{\varphi}}\|_{L^{1}} = A,$$ where $\Gamma_{2}(M)$ is the space of $2$-convex $(1,1)$-forms (cf. Section 3).
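For orientation, we recall the standard convention for the cones $\Gamma_k$ used for Hessian equations (the definition adopted in this paper is the one given in Section 3):

```latex
\Gamma_{k}(M) \;=\; \left\{\, \chi \in \Lambda^{1,1}(M,\mathbb{R}) \;:\;
  \sigma_{j}(\chi) > 0 \ \text{for all } 1 \leqslant j \leqslant k \,\right\},
```

where $\sigma_{j}(\chi)$ denotes the $j$-th elementary symmetric polynomial of the eigenvalues of $\chi$ with respect to $\omega$.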
We point out that if $\alpha<0$ and $n=2$, our normalization condition (\[Normalization condition\]) is the same as that in [@FuY07]. However, in the case that $\alpha>0$ and $n=2$, Fu and Yau [@FuY08] solved (\[Fu-Yau equation\]) under the normalization condition $\|e^{-{\varphi}}\|_{L^{4}}=A$, which is stronger than (\[Normalization condition\]). When $\alpha<0$ and $n>2$, Phong, Picard and Zhang used a different normalization condition $\|e^{{\varphi}}\|_{L^{1}}=\frac{1}{A}$. Hence, our result is also new compared to the results cited above.
As a geometric application of Theorem \[Existence Theorem\], we prove
For any $n{\geqslant}2$, there exists a function ${\varphi}\in C^{\infty}(M)$ such that the Fu-Yau’s reduction $(\pi^{*}E,\pi^{*}H,X,\omega_{{\varphi}})$ yields a smooth solution of the modified Strominger system.
From the viewpoint of PDEs, (\[Fu-Yau equation\]) can be written as a $2$-nd Hessian equation of the form $$\begin{aligned}
\label{fu-2}
\sigma_{2}({\tilde{\omega}})=F(z,\varphi, {\partial}{\varphi}),\end{aligned}$$ where $$F(z,\varphi, {\partial}{\varphi})= \frac{n(n-1)}{2}\left(e^{2{\varphi}}-4\alpha e^\varphi|{\partial}{\varphi}|_{g}^{2}\right)+\frac{n(n-1)}{2}f(z,{\varphi},{\partial}{\varphi})$$ and $f(z,{\varphi},{\partial}{\varphi})$ satisfies (cf. (\[Definition of f\])), $$|f(z,{\varphi},{\partial}{\varphi})|{\leqslant}C(e^{-2\varphi}+ e^{-\varphi}|\nabla\varphi|^2+1).$$
There are many interesting works for the $k$-th complex Hessian equation of the form: $$\begin{aligned}
\label{fu-2-2}
\sigma_{k}(\omega+{\sqrt{-1} \partial \overline{\partial}}{\varphi})\,=\,F(z).\end{aligned}$$ For example, Hou, Ma and Wu proved the second order estimate for (\[fu-2-2\]) [@HMW10]; combining Hou-Ma-Wu's estimate with a blow-up argument, Dinew and Kołodziej solved (\[fu-2-2\]) [@DK12]; Székelyhidi, and also Zhang, obtained analogous results in the Hermitian case [@Sze15], [@Zha15].
However, for the Fu-Yau equation (\[Fu-Yau equation\]), new difficulties arise because the right hand side $F$ of (\[fu-2\]) depends on ${\partial}{\varphi}$. Moreover, (\[Fu-Yau equation\]) may become degenerate when $\alpha>0$. This makes a big difference between the case $\alpha>0$ and the case $\alpha <0$. When $\alpha<0$, there is no issue on non-degeneracy. However, when $\alpha >0$, one needs to establish a non-degeneracy estimate. In dimension $2$, Fu and Yau obtained such an estimate [@FuY08]. Unfortunately, their arguments do not work in higher dimensions. It has been a main obstacle to solving (\[Fu-Yau equation\]) in higher dimensions when $\alpha >0$.
In this paper, we find a new method for establishing the non-degeneracy estimate. This estimate is different from either Fu-Yau's one in [@FuY07; @FuY08] or Phong-Picard-Zhang's one in [@PPZ15; @PPZ16b]. We regard the first and second order estimates as a whole and derive the required non-degeneracy estimate. To be more specific, assuming that $$|{\partial}{\overline{\partial}}{\varphi}|_{g} {\leqslant}D_{0},$$ where $D_{0}$ is a constant (depending only on $n$, $\alpha$, $\rho$, $\mu$ and $(M,\omega)$) to be determined later, we derive a stronger gradient estimate (independent of $D_0$) by choosing a small number $A$ in (\[Normalization condition\]) (cf. Proposition \[Gradient estimate\]). Then using this stronger gradient estimate, we obtain an improved estimate (cf. Proposition \[Second order estimate\]), $$|{\partial}{\overline{\partial}}{\varphi}|_{g} {\leqslant}\frac{D_{0}}{2}.$$ This can be used to obtain an a priori $C^2$-estimate and consequently the non-degeneracy estimate via the continuity method (cf. (\[non-degenrate\]) in Section 5).
From the proof of Theorem \[Existence Theorem\], we also obtain the following uniqueness result for (\[Fu-Yau equation\]).
\[Uniqueness Theorem\]The solution ${\varphi}$ of (\[Fu-Yau equation\]) is unique if it satisfies (\[Elliptic condition\]), (\[Normalization condition\]) and $$\label{condition-uniqueness}
e^{-{\varphi}} {\leqslant}\delta_{0},~~|{\partial}{\overline{\partial}}{\varphi}|_{g} {\leqslant}D,~~D_{0}{\leqslant}D \text{~and~} A {\leqslant}\frac{1}{C_{0}M_{0}D},$$ where $C_0$ is a uniform constant, and $\delta_{0}, M_0$ and $D_{0}$ are the constants determined in Proposition \[Zero order estimate\] and Proposition \[Second order estimate\], respectively.
Since the normalization condition (\[Normalization condition\]) is different from the ones used in previous works such as [@FuY07; @FuY08; @PPZ15; @PPZ16b], we shall also derive the $C^0, C^1, C^2$-estimates for solutions of (\[fu-2\]) step by step.
The paper is organized so that each estimate occupies one section. Theorem \[Existence Theorem\] and Theorem \[Uniqueness Theorem\] are both proved in the last section, Section 5.
[**Acknowledgements.**]{} On the occasion of Professor Gang Tian's 60th birthday, the authors would like to thank him for his guidance and encouragement in mathematics. His insight and teaching have benefited us greatly over the past years. It is our pleasure to dedicate this paper to him.
Zero order estimate
===================
In this section, we use the iteration method to derive the following zero order estimate for solutions ${\varphi}$ of (\[Fu-Yau equation\]).
\[Zero order estimate\] Let ${\varphi}$ be a smooth solution of (\[Fu-Yau equation\]). There exist constants $A_{0}$ and $M_{0}$ depending only on $\alpha$, $\rho$, $\mu$ and $(M,\omega)$ such that if $$e^{-{\varphi}} {\leqslant}\delta_{0} := \sqrt{\frac{1}{2|\alpha|\|\rho\|_{C^{0}}+1}} \text{~and~} \|e^{-{\varphi}}\|_{L^{1}} = A {\leqslant}A_{0},$$ then $$\label{c0-estimate}
\frac{1}{M_{0}A}{\leqslant}e^{\inf_{M}{\varphi}} \text{~and~} e^{\sup_{M}{\varphi}}{\leqslant}\frac{M_{0}}{A}.$$
We first establish the infimum estimate; the supremum estimate will then rely on it. By the choice of $\delta_{0}$ and the condition $e^{-{\varphi}}{\leqslant}\delta_{0}$, it is clear that $$\label{Infimum estimate equation 4}
\omega+\alpha e^{-2{\varphi}}\rho{\geqslant}\frac{1}{2}\omega.$$ By the elliptic condition (\[Elliptic condition\]), we have for $k{\geqslant}2$, $$k\int_{M}e^{-k{\varphi}}\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}{\varphi}\wedge{\tilde{\omega}}\wedge\omega^{n-2} {\geqslant}0.$$ By the Stokes’ formula, it follows that $$\label{Infimum estimate equation 2}
\begin{split}
& -k\int_{M}e^{-k{\varphi}}(e^{{\varphi}}\omega+\alpha e^{-{\varphi}}\rho)\wedge\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}{\varphi}\wedge\omega^{n-2} \\
{\leqslant}{} & 2n\alpha k\int_{M}e^{-k{\varphi}}\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}{\varphi}\wedge{\sqrt{-1} \partial \overline{\partial}}{\varphi}\wedge\omega^{n-2} \\
= {} & -2n\alpha \int_{M}\sqrt{-1}{\partial}e^{-k{\varphi}}\wedge{\overline{\partial}}{\varphi}\wedge{\sqrt{-1} \partial \overline{\partial}}{\varphi}\wedge\omega^{n-2} \\
= {} & 2n\alpha \int_{M}e^{-k{\varphi}}{\sqrt{-1} \partial \overline{\partial}}{\varphi}\wedge{\sqrt{-1} \partial \overline{\partial}}{\varphi}\wedge\omega^{n-2} \\
= {} & -2\int_{M}e^{-k{\varphi}}{\sqrt{-1} \partial \overline{\partial}}(e^{{\varphi}}\omega-\alpha e^{-{\varphi}}\rho)\wedge\omega^{n-2}-2\int_{M}e^{-k{\varphi}}\mu\frac{\omega^{n}}{n!}.
\end{split}$$ In the last equality, we used the equation (\[Fu-Yau equation\]).
For the first term on the right hand side of (\[Infimum estimate equation 2\]), we compute $$\label{Infimum estimate equation 5}
\begin{split}
& -2\int_{M}e^{-k{\varphi}}{\sqrt{-1} \partial \overline{\partial}}(e^{{\varphi}}\omega-\alpha e^{-{\varphi}}\rho)\wedge\omega^{n-2} \\
= {} & -2k\int_{M}e^{-k{\varphi}}\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}(e^{{\varphi}}\omega-\alpha e^{-{\varphi}}\rho)\wedge\omega^{n-2} \\
= {} & -2k\int_{M}e^{-k{\varphi}}(e^{{\varphi}}\omega+\alpha e^{-{\varphi}}\rho)\wedge\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}{\varphi}\wedge\omega^{n-2} \\
& +2\alpha k\int_{M}e^{-(k+1){\varphi}}\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}\rho\wedge\omega^{n-2}.
\end{split}$$ Substituting (\[Infimum estimate equation 5\]) into (\[Infimum estimate equation 2\]), we see that $$\label{Infimum estimate equation 3}
\begin{split}
& k\int_{M}e^{-k{\varphi}}(e^{{\varphi}}\omega+\alpha e^{-{\varphi}}\rho)\wedge\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}{\varphi}\wedge\omega^{n-2} \\
{\leqslant}{} & 2\alpha k\int_{M}e^{-(k+1){\varphi}}\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}\rho\wedge\omega^{n-2}-2\int_{M}e^{-k{\varphi}}\mu\frac{\omega^{n}}{n!}.
\end{split}$$ Combining (\[Infimum estimate equation 3\]) with (\[Infimum estimate equation 4\]) and the Cauchy-Schwarz inequality, it follows that $$\begin{split}
k\int_{M}e^{-(k-1){\varphi}}|{\partial}{\varphi}|_{g}^{2}\omega^{n}
{\leqslant}{} & Ck\int_{M}\left(e^{-(k+1){\varphi}}|{\partial}{\varphi}|_{g}+e^{-k{\varphi}}\right)\omega^{n} \\
{\leqslant}{} & \frac{k}{2}\int_{M}e^{-(k-1){\varphi}}|{\partial}{\varphi}|_{g}^{2}\omega^{n}+Ck\int_{M}\left(e^{-(k+3){\varphi}}+e^{-k{\varphi}}\right)\omega^{n}.
\end{split}$$ Recalling $e^{-{\varphi}}{\leqslant}\delta_{0}$, we get $$\frac{k}{2}\int_{M}e^{-(k-1){\varphi}}|{\partial}{\varphi}|_{g}^{2}\omega^{n}
{\leqslant}Ck(\delta_{0}^{4}+\delta_{0})\int_{M}e^{-(k-1){\varphi}}\omega^{n},$$ which implies $$\int_{M}|{\partial}e^{-\frac{(k-1){\varphi}}{2}}|_{g}^{2}\omega^{n} {\leqslant}Ck^{2}\int_{M}e^{-(k-1){\varphi}}\omega^{n}.$$ Replacing $k-1$ by $k$, for $k{\geqslant}1$, we deduce $$\int_{M}|{\partial}e^{-\frac{k{\varphi}}{2}}|_{g}^{2}\omega^{n} {\leqslant}Ck^{2}\int_{M}e^{-k{\varphi}}\omega^{n}.$$ Hence, by the Moser iteration together with (\[Normalization condition\]), we obtain $$\|e^{-{\varphi}}\|_{L^{\infty}}{\leqslant}C\|e^{-{\varphi}}\|_{L^{1}} = CA.$$ As a consequence, we prove $$\label{infimum estimate}\frac{1}{M_{0}A}{\leqslant}e^{\inf_{M}{\varphi}}.$$
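For the reader's convenience, here is a generic sketch of the Moser iteration step invoked above; the Sobolev exponent and the constants are illustrative and not the authors' exact argument. Setting $v=e^{-{\varphi}}$ and applying the Sobolev inequality on $(M,\omega)$ (of real dimension $2n$, so with exponent $\beta=\frac{n}{n-1}>1$) to $v^{\frac{k}{2}}$, the inequality above yields $$\|v\|_{L^{k\beta}}^{k}=\|v^{\frac{k}{2}}\|_{L^{2\beta}}^{2}
{\leqslant}C\left(\int_{M}|{\partial}v^{\frac{k}{2}}|_{g}^{2}\omega^{n}+\int_{M}v^{k}\omega^{n}\right)
{\leqslant}Ck^{2}\|v\|_{L^{k}}^{k}.$$ Taking $k=k_{m}=\beta^{m}$ and iterating from $k_{0}=1$, $$\|v\|_{L^{\infty}}=\lim_{m\to\infty}\|v\|_{L^{k_{m}}}
{\leqslant}\prod_{m=0}^{\infty}\left(Ck_{m}^{2}\right)^{\frac{1}{k_{m}}}\|v\|_{L^{1}}
{\leqslant}C\|v\|_{L^{1}},$$ since $\sum_{m{\geqslant}0}\beta^{-m}\left(\log C+2m\log\beta\right)<\infty$.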
Next we establish the supremum estimate. By a calculation similar to (\[Infimum estimate equation 2\])-(\[Infimum estimate equation 3\]), for $k{\geqslant}1$, we have $$\begin{split}
& k\int_{M}e^{k{\varphi}}(e^{{\varphi}}\omega+\alpha e^{-{\varphi}}\rho)\wedge\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}{\varphi}\wedge\omega^{n-2} \\
{\leqslant}{} & 2\alpha k\int_{M}e^{(k-1){\varphi}}\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}\rho\wedge\omega^{n-2}+2\int_{M}e^{k{\varphi}}\mu\frac{\omega^{n}}{n!}.
\end{split}$$ Combining this with (\[Infimum estimate equation 4\]), we have $$\int_{M}e^{(k+1){\varphi}}|{\partial}{\varphi}|_{g}^{2}\omega^{n} {\leqslant}C\int_{M}\left(e^{(k-1){\varphi}}|{\partial}{\varphi}|_{g}+e^{k{\varphi}}\right)\omega^{n}.$$ Using $e^{-{\varphi}}{\leqslant}\delta_{0}$ and the Cauchy-Schwarz inequality, it then follows that $$\label{Supremum estimate equation 2}
\int_{M}e^{(k+1){\varphi}}|{\partial}{\varphi}|_{g}^{2}\omega^{n} {\leqslant}C\int_{M}e^{k{\varphi}}\omega^{n}.$$ Moreover, replacing $k$ by $k-1$ in (\[Supremum estimate equation 2\]) and using $e^{-{\varphi}}{\leqslant}\delta_{0}$, we get $$\label{Supremum estimate equation 1}
\int_{M}e^{k{\varphi}}|{\partial}{\varphi}|_{g}^{2}\omega^{n} {\leqslant}C\int_{M}e^{k{\varphi}}\omega^{n}.$$ We will use (\[Supremum estimate equation 1\]) for the iteration. We need the following claim.
\[l1-estimate\] $$\begin{aligned}
\label{claim} \|e^{{\varphi}}\|_{L^{1}}{\leqslant}\frac{C}{A}.
\end{aligned}$$
Without loss of generality, we assume that ${\mathrm{Vol}}(M,\omega)=1$. We define a set by $$U = \{x\in M~|~e^{-{\varphi}(x)}{\geqslant}\frac{A}{2}\}.$$ Then by (\[infimum estimate\]) and (\[Normalization condition\]), we have $$\begin{split}
A = {} & \int_{M}e^{-{\varphi}}\omega^{n} \\
= {} & \int_{U}e^{-{\varphi}}\omega^{n}+\int_{M\setminus U}e^{-{\varphi}}\omega^{n} \\
{\leqslant}{} & e^{-\inf_{M}{\varphi}}{\mathrm{Vol}}(U)+\frac{A}{2}(1-{\mathrm{Vol}}(U)) \\
{\leqslant}{} & \left(M_{0}-\frac{1}{2}\right)A{\mathrm{Vol}}(U)+\frac{A}{2}.
\end{split}$$ This implies $$\label{Supremum estimate equation 3}
{\mathrm{Vol}}(U) {\geqslant}\frac{1}{C_{0}}.$$
On the other hand, by the Poincaré inequality and (\[Supremum estimate equation 2\]) (taking $k=1$), we have $$\int_{M}e^{2{\varphi}}\omega^{n}-\left(\int_{M}e^{{\varphi}}\omega^{n}\right)^{2}
{\leqslant}C\int_{M}|{\partial}e^{{\varphi}}|_{g}^{2}\omega^{n}
{\leqslant}C\int_{M}e^{{\varphi}}\omega^{n}.\notag$$ By (\[Supremum estimate equation 3\]) and the Cauchy-Schwarz inequality, we obtain $$\begin{split}
\left(\int_{M}e^{{\varphi}}\omega^{n}\right)^{2}
& {\leqslant}(1+C_{0})\left(\int_{U}e^{{\varphi}}\omega^{n}\right)^{2}
+\left(1+\frac{1}{C_{0}}\right)\left(\int_{M\setminus U}e^{{\varphi}}\omega^{n}\right)^{2}\\
& {\leqslant}\frac{4(1+C_{0})}{A^{2}}({\mathrm{Vol}}(U))^{2}+\left(1+\frac{1}{C_{0}}\right)(1-{\mathrm{Vol}}(U))^{2}\int_{M}e^{2{\varphi}}\omega^{n}\\
& {\leqslant}\frac{4(1+C_{0})}{A^{2}}+\left(1-\frac{1}{C_{0}^{2}}\right)\left(\left(\int_{M}e^{{\varphi}}\omega^{n}\right)^{2}
+C\int_{M}e^{{\varphi}}\omega^{n}\right).
\end{split}$$ Clearly, the above implies (\[claim\]).
By Claim \[l1-estimate\], (\[Supremum estimate equation 1\]) and the Moser iteration, we see that $$\|e^{{\varphi}}\|_{L^{\infty}}{\leqslant}C\|e^{{\varphi}}\|_{L^{1}}{\leqslant}\frac{C}{A}.$$ This completes the proof of Proposition \[Zero order estimate\].
First order estimate
====================
In this section, we give the first order estimate of $\varphi$. For convenience, in this and the next section, we say a constant is uniform if it depends only on $\alpha$, $\rho$, $\mu$ and $(M,\omega)$.
\[Gradient estimate\] Let ${\varphi}$ be a solution of (\[Fu-Yau equation\]) satisfying (\[Elliptic condition\]). Assume that $$\frac{A}{M_0}{\leqslant}e^{-{\varphi}}{\leqslant}M_0A \text{~and~} |{\partial}{\overline{\partial}}{\varphi}|_{g}{\leqslant}D,$$ where $M_0$ is a uniform constant. Then there exists a uniform constant $C_{0}$ such that if $$A {\leqslant}A_{D} := \frac{1}{C_{0}M_{0}D},$$ then $$|{\partial}{\varphi}|_{g}^{2} {\leqslant}M_{1},$$ where $M_{1}$ is a uniform constant.
The key point in Proposition \[Gradient estimate\] is that $M_{1}$ is independent of $D$. The constant $D$ can be chosen arbitrarily large, and the constant $A_{D}$ depends on $D$. This will play an important role in the second order estimate in the next section. In fact, we will determine $D$ there so that $A$ can then be determined (cf. Proposition \[Second order estimate\]).
As usual, for any $\eta=(\eta_{1},\eta_{2},\cdots,\eta_{n})\in\mathbb{R}^{n}$, we define $$\begin{split}
& \sigma_{k}(\eta) =\sum_{1{\leqslant}i_{1}<\cdots<i_{k}{\leqslant}n}\eta_{i_{1}}\eta_{i_{2}}\cdots\eta_{i_k},\\
\Gamma_{2} = & \{ \eta\in\mathbb{R}^{n}~|~ \text{$\sigma_{j}(\eta)>0$ for $j=1,2$} \}.
\end{split}$$ Clearly $\sigma_{k}$ is a homogeneous polynomial of degree $k$. One can extend it to $A^{1,1}(M)$ by $$\sigma_{k}(\alpha)=\left(
\begin{matrix}
n\\
k
\end{matrix}
\right)
\frac{\alpha^{k}\wedge\omega^{n-k}}{\omega^{n}}, ~\forall ~\alpha\in A^{1,1}(M),$$ where $A^{1,1}(M)$ is the space of smooth real (1,1) forms on $(M,\omega)$. Define a cone $\Gamma_{2}(M)$ on $A^{1,1}(M)$ by $$\Gamma_{2}(M)=\{ \alpha\in A^{1,1}(M)~|~ \text{$\sigma_{j}(\alpha)>0$ for $j=1,2$} \}.$$ Then (\[Fu-Yau equation\]) is equivalent to (\[fu-2\]), where the function $f(z,{\varphi},{\partial}{\varphi})$ satisfies $$\begin{aligned}
\label{Definition of f}
f\omega^{n} &= 2\alpha\rho\wedge\omega^{n-1}+\alpha^{2}e^{-2{\varphi}}\rho^{2}\wedge\omega^{n-2}-4n\alpha\mu\frac{\omega^{n}}{n!}\notag \\
& +4n\alpha^{2}e^{-{\varphi}}\sqrt{-1}\left({\partial}{\varphi}\wedge{\overline{\partial}}{\varphi}\wedge\rho-{\partial}{\varphi}\wedge{\overline{\partial}}\rho
-{\partial}\rho\wedge{\overline{\partial}}{\varphi}+{\partial}{\overline{\partial}}\rho\right)\wedge\omega^{n-2}.\end{aligned}$$ We will use (\[fu-2\]) to apply the maximum principle to the quantity $$Q = \log|{\partial}{\varphi}|_{g}^{2}+\frac{{\varphi}}{B},$$ where $B>1$ is a large uniform constant to be determined later.
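To illustrate the extension of $\sigma_{k}$ concretely, the following self-contained Python script (ours, purely illustrative and not part of the paper) checks numerically that, for a diagonal $(1,1)$-form $\alpha$ with integer eigenvalues $\lambda$ relative to $\omega$, the wedge formula $\binom{n}{k}\,\alpha^{k}\wedge\omega^{n-k}/\omega^{n}$ reproduces the elementary symmetric polynomial $\sigma_{k}(\lambda)$:

```python
from functools import reduce
from itertools import combinations, permutations
from math import comb

def top_coeff(factors):
    # Coefficient of θ_1∧…∧θ_n in the wedge product of n diagonal
    # (1,1)-forms, where θ_i = √-1 dz_i∧dz̄_i.  These basis forms
    # commute and satisfy θ_i² = 0, so the product expands as a
    # permanent: sum over all ways to assign a distinct index to
    # each factor.
    n = len(factors)
    return sum(
        reduce(lambda a, b: a * b, (factors[m][s[m]] for m in range(n)), 1)
        for s in permutations(range(n))
    )

def sigma_k_wedge(lam, k):
    # σ_k(α) computed from the wedge formula C(n,k)·α^k∧ω^{n-k}/ω^n,
    # for the diagonal form α with eigenvalues `lam` relative to ω.
    n = len(lam)
    alpha, omega = list(lam), [1] * n
    num = top_coeff([alpha] * k + [omega] * (n - k))
    den = top_coeff([omega] * n)  # equals n!
    return comb(n, k) * num // den  # exact for integer eigenvalues

def elem_sym(lam, k):
    # Elementary symmetric polynomial σ_k(η) = Σ η_{i_1}⋯η_{i_k}.
    return sum(reduce(lambda a, b: a * b, c, 1) for c in combinations(lam, k))

lam = [3, 1, 4, 1, 5]  # illustrative integer eigenvalues, n = 5
for k in range(1, len(lam) + 1):
    assert sigma_k_wedge(lam, k) == elem_sym(lam, k)
```

The check exploits that the basis forms $\theta_{i}$ commute and square to zero, so the top coefficient of a wedge product of diagonal forms is a matrix permanent.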
Assume that $Q$ achieves a maximum at $x_{0}$. Let $\{e_{i}\}_{i=1}^{n}$ be a local unitary frame in a neighbourhood of $x_{0}$ such that, at $x_{0}$, $$\label{tilde gij}
\tilde{g}_{i{\overline{j}}}
= \delta_{i{\overline{j}}}\tilde{g}_{i{\overline{i}}}
= \delta_{i{\overline{j}}}(e^{{\varphi}}+\alpha e^{-{\varphi}}\rho_{i{\overline{i}}}+2n\alpha {\varphi}_{i{\overline{i}}}).
\hat{g}_{i{\overline{j}}} = e^{-{\varphi}}{\tilde{g}}_{i{\overline{j}}} \text{~and~}
F^{i{\overline{j}}} = \frac{\partial\sigma_{2}(\hat{\omega})}{{\partial}\hat{g}_{i{\overline{j}}}}.$$ Since ${\tilde{g}}_{i{\overline{j}}}$ is diagonal at $x_{0}$, it is easy to see that $$\label{Expression of Fij}
F^{i{\overline{j}}} = \delta_{ij}F^{i{\overline{i}}} = \delta_{ij}e^{-{\varphi}}\sum_{k\neq i}{\tilde{g}}_{k{\overline{k}}}.$$ By the assumption of Proposition \[Gradient estimate\], at the expense of increasing $C_{0}$, we have $$\label{Small condition}
e^{-{\varphi}}|{\partial}{\overline{\partial}}{\varphi}|_{g}
{\leqslant}M_{0}DA_{D}
{\leqslant}\frac{1}{1000B n^{3}|\alpha|}.$$ Combining this with (\[tilde gij\]) and (\[Expression of Fij\]), we get $$\label{Bound of Fij}
\left|F^{i{\overline{i}}}-(n-1)\right| {\leqslant}\frac{1}{100}.$$
We need to estimate the lower bound of $F^{i{\overline{j}}}e_{i}e_{{\overline{j}}}(|{\partial}{\varphi}|_{g}^{2})$, where we are summing over repeated indices. Note $$|{\partial}{\varphi}|_{g}^{2} = \sum_{k}{\varphi}_{k}{\varphi}_{{\overline{k}}},$$ where ${\varphi}_{k}=e_{k}({\varphi})$ and ${\varphi}_{{\overline{k}}}={\overline{e}}_{k}({\varphi})$, in the local frame $\{e_{i}\}_{i=1}^{n}$. Then, at $x_{0}$, $$\begin{split}
F^{i{\overline{j}}}e_{i}{\overline{e}}_{j}(|{\partial}{\varphi}|_{g}^{2})
= {} & \sum_{j}F^{i{\overline{i}}}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2}) \\
& +\sum_{k}F^{i{\overline{i}}}\left(e_{i}{\overline{e}}_{i}e_{k}({\varphi}){\varphi}_{{\overline{k}}}+e_{i}{\overline{e}}_{i}{\overline{e}}_{k}({\varphi}){\varphi}_{k}\right).
\end{split}$$ On the other hand, by the relation (see e.g. [@HL15]) $$\label{ddbar formula}
{\varphi}_{i{\overline{j}}} = {\partial}{\overline{\partial}}{\varphi}(e_{i},{\overline{e}}_{j}) = e_{i}{\overline{e}}_{j}({\varphi})-[e_{i},{\overline{e}}_{j}]^{(0,1)}({\varphi}),$$ we have $$\begin{split}
e_{k}({\varphi}_{i{\overline{i}}}) = {} & e_{k}e_{i}{\overline{e}}_{i}({\varphi})-e_{k}[e_{i},{\overline{e}}_{i}]^{(0,1)}({\varphi}) \\
= {} & e_{i}{\overline{e}}_{i}e_{k}({\varphi})+[e_{k},e_{i}]{\overline{e}}_{i}({\varphi})+e_{i}[e_{k},{\overline{e}}_{i}]({\varphi})-e_{k}[e_{i},{\overline{e}}_{i}]^{(0,1)}({\varphi}).
\end{split}$$ Thus combining this with (\[Bound of Fij\]), we get $$\begin{split}
& \sum_{k}F^{i{\overline{i}}}\left(e_{i}{\overline{e}}_{i}e_{k}({\varphi}){\varphi}_{{\overline{k}}}+e_{i}{\overline{e}}_{i}{\overline{e}}_{k}({\varphi}){\varphi}_{k}\right) \\[3mm]
{\geqslant}{} & \sum_{k}F^{i{\overline{i}}}\left(e_{k}({\varphi}_{i{\overline{i}}}){\varphi}_{{\overline{k}}}+{\overline{e}}_{k}({\varphi}_{i{\overline{i}}}){\varphi}_{k}\right)
-C|{\partial}{\varphi}|_{g}\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|+|e_{i}e_{j}({\varphi})|)-C|{\partial}{\varphi}|_{g}^{2} \\
{\geqslant}{} & \sum_{k}F^{i{\overline{i}}}\left(e_{k}({\varphi}_{i{\overline{i}}}){\varphi}_{{\overline{k}}}+{\overline{e}}_{k}({\varphi}_{i{\overline{i}}}){\varphi}_{k}\right)
-\frac{1}{10}\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})-C|{\partial}{\varphi}|_{g}^{2}.
\end{split}$$ Hence, we obtain $$\label{Maximum principle gradient 1}
\begin{split}
F^{i{\overline{j}}}e_{i}{\overline{e}}_{j}(|{\partial}{\varphi}|_{g}^{2})
{\geqslant}{} & \frac{4}{5}\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})-C|{\partial}{\varphi}|_{g}^{2} \\
& +\sum_{k}F^{i{\overline{i}}}\left(e_{k}({\varphi}_{i{\overline{i}}}){\varphi}_{{\overline{k}}}+{\overline{e}}_{k}({\varphi}_{i{\overline{i}}}){\varphi}_{k}\right).
\end{split}$$
Next, we use equation (\[fu-2\]) to deal with the third order terms in (\[Maximum principle gradient 1\]).
\[Gradient estimate lemma\] At $x_{0}$, we have $$\label{gradient-lapalce}
\begin{split}
& \sum_{k}F^{i{\overline{i}}}\left(e_{k}({\varphi}_{i{\overline{i}}}){\varphi}_{{\overline{k}}}+{\overline{e}}_{k}({\varphi}_{i{\overline{i}}}){\varphi}_{k}\right) \\[2mm]
{\geqslant}{} & -\frac{1}{5}\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})
-2(n-1){\rm Re}\left(\sum_{k}(|{\partial}{\varphi}|_{g}^{2})_{k}{\varphi}_{{\overline{k}}}\right) \notag\\[2mm]
& -\left(Ce^{-{\varphi}}+\frac{1}{B}\right)|{\partial}{\varphi}|_{g}^{4}-C|{\partial}{\varphi}|_{g}^{2}-C.
\end{split}$$
By (\[fu-2\]), we have $$\label{fu-3}
\sigma_{2}(e^{-{\varphi}} \tilde\omega) = e^{-2{\varphi}}F(z,{\varphi},{\partial}{\varphi}).$$ Differentiating (\[fu-3\]) along $e_{k}$ at $x_{0}$, we get $$\begin{split}
& F^{i{\overline{j}}}e_{k}(g_{i{\overline{j}}}+\alpha e^{-2{\varphi}}\rho_{i{\overline{j}}}+2n\alpha e^{-{\varphi}}{\varphi}_{i{\overline{j}}})\\
= {} & -2\alpha n(n-1)\left(-e^{-{\varphi}}|{\partial}{\varphi}|^{2}_{g}{\varphi}_{k}+e^{-{\varphi}}(|{\partial}{\varphi}|^{2}_{g})_{k}\right)+\frac{n(n-1)}{2}(e^{-2{\varphi}}f)_{k}.
\end{split}$$ Then $$\begin{split}
2n\alpha F^{i{\overline{i}}}e_{k}({\varphi}_{i{\overline{i}}})
= {} & 2\alpha e^{-{\varphi}}{\varphi}_{k}F^{i{\overline{i}}}\rho_{i{\overline{i}}}-\alpha e^{-{\varphi}}F^{i{\overline{i}}}e_{k}(\rho_{i{\overline{i}}})+2n\alpha{\varphi}_{k}F^{i{\overline{i}}}{\varphi}_{i{\overline{i}}} \\[1mm]
& -2\alpha n(n-1)(|{\partial}{\varphi}|_{g}^{2})_{k}+2\alpha n(n-1)|{\partial}{\varphi}|_{g}^{2}{\varphi}_{k} \\
& -n(n-1)e^{-{\varphi}}f{\varphi}_{k}+\frac{n(n-1)}{2}e^{-{\varphi}}f_{k}.
\end{split}$$ It follows that $$\label{Gradient estimate equation 7}
\begin{split}
& \sum_{k}F^{i{\overline{i}}}\left(e_{k}({\varphi}_{i{\overline{i}}}){\varphi}_{{\overline{k}}}+{\overline{e}}_{k}({\varphi}_{i{\overline{i}}}){\varphi}_{k}\right) \\[1mm]
= {} & \frac{2}{n}e^{-{\varphi}}|{\partial}{\varphi}|_{g}^{2}F^{i{\overline{i}}}\rho_{i{\overline{i}}}
-\frac{1}{n}e^{-{\varphi}}\textrm{Re}\left(\sum_{k}F^{i{\overline{i}}}e_{k}(\rho_{i{\overline{i}}}){\varphi}_{{\overline{k}}}\right)
+2|{\partial}{\varphi}|_{g}^{2}F^{i{\overline{i}}}{\varphi}_{i{\overline{i}}} \\[1mm]
& -2(n-1)\textrm{Re}\left(\sum_{k}(|{\partial}{\varphi}|_{g}^{2})_{k}{\varphi}_{{\overline{k}}}\right)+2(n-1)|{\partial}{\varphi}|_{g}^{4}
-\frac{n-1}{\alpha}e^{-{\varphi}}|{\partial}{\varphi}|_{g}^{2}f \\
& +\frac{n-1}{2\alpha}e^{-{\varphi}}\textrm{Re}\left(\sum_{k}f_{k}{\varphi}_{{\overline{k}}}\right) \\[1mm]
{\geqslant}{} & -Ce^{-{\varphi}}|{\partial}{\varphi}|_{g}^{2}-Ce^{-{\varphi}}|{\partial}{\varphi}|_{g}+2|{\partial}{\varphi}|_{g}^{2}F^{i{\overline{i}}}{\varphi}_{i{\overline{i}}}
-2(n-1)\textrm{Re}\left(\sum_{k}(|{\partial}{\varphi}|_{g}^{2})_{k}{\varphi}_{{\overline{k}}}\right) \\[1mm]
& +2(n-1)|{\partial}{\varphi}|_{g}^{4}-\frac{n-1}{\alpha}e^{-{\varphi}}|{\partial}{\varphi}|_{g}^{2}f
+\frac{n-1}{2\alpha}e^{-{\varphi}}\textrm{Re}\left(\sum_{k}f_{k}{\varphi}_{{\overline{k}}}\right),
\end{split}$$ where we used (\[Bound of Fij\]) in the last inequality. On the other hand, by (\[Definition of f\]), a direct calculation shows that $$\label{Gradient estimate equation 8}
\begin{split}
& -\frac{n-1}{\alpha}e^{-{\varphi}}|{\partial}{\varphi}|_{g}^{2}f+\frac{n-1}{2\alpha}e^{-{\varphi}}\textrm{Re}\left(\sum_{k}f_{k}{\varphi}_{{\overline{k}}}\right) \\[1mm]
{\geqslant}{} & -C\left(e^{-2{\varphi}}|{\partial}{\varphi}|^{2}_{g}+e^{-2{\varphi}}|{\partial}{\varphi}|_{g}\right)\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|+|e_{i}e_{j}({\varphi})|)-Ce^{-{\varphi}}|{\partial}{\varphi}|_{g}^{4} \\
& -Ce^{-{\varphi}}|{\partial}{\varphi}|_{g}^{3}-Ce^{-{\varphi}}|{\partial}{\varphi}|_{g}^{2}-Ce^{-{\varphi}}|{\partial}{\varphi}|_{g} \\[1mm]
{\geqslant}{} & -\frac{1}{10}\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})-Ce^{-{\varphi}}|{\partial}{\varphi}|_{g}^{4}-Ce^{-{\varphi}},
\end{split}$$ where we used the Cauchy-Schwarz inequality in the last inequality. Thus substituting (\[Gradient estimate equation 8\]) into (\[Gradient estimate equation 7\]), we derive $$\label{Gradient estimate equation 9}
\begin{split}
& \sum_{k}F^{i{\overline{i}}}\left(e_{k}({\varphi}_{i{\overline{i}}}){\varphi}_{{\overline{k}}}+{\overline{e}}_{k}({\varphi}_{i{\overline{i}}}){\varphi}_{k}\right) \\[2mm]
{\geqslant}{} & 2|{\partial}{\varphi}|_{g}^{2}F^{i{\overline{i}}}{\varphi}_{i{\overline{i}}}+2(n-1)|{\partial}{\varphi}|_{g}^{4}
-2(n-1)\textrm{Re}\left(\sum_{k}(|{\partial}{\varphi}|_{g}^{2})_{k}{\varphi}_{{\overline{k}}}\right) \\[2mm]
& -\frac{1}{10}\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})-Ce^{-{\varphi}}|{\partial}{\varphi}|_{g}^{4}-Ce^{-{\varphi}}.
\end{split}$$
By (\[Expression of Fij\]), we have $$\label{Gradient estimate equation 1}
\begin{split}
& 2|{\partial}{\varphi}|_{g}^{2}F^{i{\overline{i}}}{\varphi}_{i{\overline{i}}} \\[2mm]
&= 2|{\partial}{\varphi}|^{2}_{g}\sum_{i}\sum_{k\neq i}(g_{k{\overline{k}}}+\alpha e^{-2{\varphi}}\rho_{k{\overline{k}}}+2n\alpha e^{-{\varphi}}{\varphi}_{k{\overline{k}}}){\varphi}_{i{\overline{i}}} \\
& = 2(n-1)|{\partial}{\varphi}|_{g}^{2}\Delta{\varphi}+4n\alpha|{\partial}{\varphi}|_{g}^{2}e^{-{\varphi}}\sum_{i\neq k}{\varphi}_{i{\overline{i}}}{\varphi}_{k{\overline{k}}}
+2\alpha e^{-2{\varphi}}|{\partial}{\varphi}|_{g}^{2}\sum_{i\neq k}{\varphi}_{i{\overline{i}}}\rho_{k{\overline{k}}} \\
&{\geqslant}2(n-1)|{\partial}{\varphi}|_{g}^{2}\Delta{\varphi}-4n^{2}(n-1)|\alpha|e^{-{\varphi}}|{\partial}{\varphi}|_{g}^{2}|{\partial}{\overline{\partial}}{\varphi}|_{g}^{2}
-Ce^{-2{\varphi}}|{\partial}{\varphi}|^{2}_{g}|{\partial}{\overline{\partial}}{\varphi}|_{g}.
\end{split}$$ Note that by (\[Fu-Yau equation\]) it holds $$\frac{{\sqrt{-1} \partial \overline{\partial}}(e^{{\varphi}}\omega-\alpha e^{-{\varphi}}\rho)\wedge\omega^{n-2}}{\omega^{n}}
{\geqslant}-n|\alpha||{\partial}{\overline{\partial}}{\varphi}|_{g}^{2}-C,$$ which implies $$\label{Gradient estimate equation 2}
\Delta{\varphi}+|{\partial}{\varphi}|_{g}^{2}
{\geqslant}-n|\alpha|e^{-{\varphi}}|{\partial}{\overline{\partial}}{\varphi}|_{g}^{2}-Ce^{-2{\varphi}}|{\partial}{\overline{\partial}}{\varphi}|_{g}-Ce^{-2{\varphi}}|{\partial}{\varphi}|_{g}^{2}-C.$$ Then substituting (\[Gradient estimate equation 2\]) into (\[Gradient estimate equation 1\]), we get $$\begin{split}
2|{\partial}{\varphi}|_{g}^{2}F^{i{\overline{i}}}{\varphi}_{i{\overline{i}}}
{\geqslant}{} & -2(n-1)|{\partial}{\varphi}|_{g}^{4}-5n^{3}|\alpha|e^{-{\varphi}}|{\partial}{\varphi}|_{g}^{2}|{\partial}{\overline{\partial}}{\varphi}|_{g}^{2} \\
{} & -Ce^{-2{\varphi}}|{\partial}{\varphi}|_{g}^{2}|{\partial}{\overline{\partial}}{\varphi}|_{g}-Ce^{-2{\varphi}}|{\partial}{\varphi}|_{g}^{4}-C|{\partial}{\varphi}|_{g}^{2}.
\end{split}$$ Thus by (\[Small condition\]) and the Cauchy-Schwarz inequality, we derive $$\label{Gradient estimate equation 10}
\begin{split}
& 2|{\partial}{\varphi}|_{g}^{2}F^{i{\overline{i}}}{\varphi}_{i{\overline{i}}}+2(n-1)|{\partial}{\varphi}|_{g}^{4} \\[2mm]
{\geqslant}{} & -5n^{3}|\alpha|(e^{-{\varphi}}|{\partial}{\overline{\partial}}{\varphi}|_{g})(|{\partial}{\varphi}|_{g}^{4}+|{\partial}{\overline{\partial}}{\varphi}|_{g}^{2}) \\[2mm]
& -C(e^{-{\varphi}}|{\partial}{\overline{\partial}}{\varphi}|_{g})(e^{-{\varphi}}|{\partial}{\varphi}|_{g}^{2})-Ce^{-2{\varphi}}|{\partial}{\varphi}|_{g}^{4}-C|{\partial}{\varphi}|_{g}^{2} \\[2mm]
{\geqslant}{} & -\frac{1}{B}|{\partial}{\overline{\partial}}{\varphi}|_{g}^{2}-\left(Ce^{-2{\varphi}}+\frac{1}{B}\right)|{\partial}{\varphi}|_{g}^{4}-C|{\partial}{\varphi}|_{g}^{2} \\
{\geqslant}{} & -\frac{1}{10}\sum_{i,j}|e_{i}{\overline{e}}_{j}({\varphi})|^{2}-\left(Ce^{-2{\varphi}}+\frac{1}{B}\right)|{\partial}{\varphi}|_{g}^{4}-C|{\partial}{\varphi}|_{g}^{2}.
\end{split}$$ Combining (\[Gradient estimate equation 9\]) and (\[Gradient estimate equation 10\]), we prove Lemma \[Gradient estimate lemma\] immediately.
By (\[Maximum principle gradient 1\]) and Lemma \[Gradient estimate lemma\], we get a lower bound for $F^{i{\overline{j}}}e_{i}{\overline{e}}_{j}(|{\partial}{\varphi}|_{g}^{2})$ at $x_{0}$ as follows, $$\label{Maximum principle gradient 2}
\begin{split}
&F^{i{\overline{j}}}e_{i}{\overline{e}}_{j}(|{\partial}{\varphi}|_{g}^{2})\\
&{\geqslant}{} \frac{3}{5}\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})
-2(n-1)\textrm{Re}\left(\sum_{k}(|{\partial}{\varphi}|_{g}^{2})_{k}{\varphi}_{{\overline{k}}}\right) \\[2mm]
& -\left(Ce^{-{\varphi}}+\frac{1}{B}\right)|{\partial}{\varphi}|_{g}^{4}-C|{\partial}{\varphi}|_{g}^{2}-C.
\end{split}$$ Now we are in a position to prove Proposition \[Gradient estimate\].
Without loss of generality, we assume that $|{\partial}{\varphi}|_{g}^{2}{\geqslant}1$. By (\[Maximum principle gradient 2\]) and the maximum principle, at $x_{0}$, we see that $$\label{Gradient estimate equation 3}
\begin{split}
0 {\geqslant}{} & F^{i{\overline{j}}}e_{i}{\overline{e}}_{j}(Q) \\
= {} & \frac{F^{i{\overline{j}}}e_{i}{\overline{e}}_{j}(|{\partial}{\varphi}|_{g}^{2})}{|{\partial}{\varphi}|_{g}^{2}}
-\frac{F^{i{\overline{j}}}e_{i}(|{\partial}{\varphi}|_{g}^{2}){\overline{e}}_{j}(|{\partial}{\varphi}|_{g}^{2})}{|{\partial}{\varphi}|_{g}^{4}}
+\frac{1}{B}F^{i{\overline{j}}}e_{i}{\overline{e}}_{j}({\varphi}) \\
{\geqslant}{} & \frac{1}{2|{\partial}{\varphi}|_{g}^{2}}\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})
-\frac{2(n-1)\textrm{Re}\left(\sum_{k}(|{\partial}{\varphi}|_{g}^{2})_{k}{\varphi}_{{\overline{k}}}\right)}{|{\partial}{\varphi}|_{g}^{2}} \\
& -\frac{F^{i{\overline{i}}}|e_{i}(|{\partial}{\varphi}|_{g}^{2})|^{2}}{|{\partial}{\varphi}|_{g}^{4}}
-\left(Ce^{-{\varphi}}+\frac{1}{B}\right)|{\partial}{\varphi}|_{g}^{2}-C+\frac{1}{B}F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}({\varphi}).
\end{split}$$ The second and third terms in (\[Gradient estimate equation 3\]) can be controlled by the relation $dQ(x_{0})=0$. Namely, we have $$\label{Gradient estimate equation 4}
-\frac{2(n-1)\textrm{Re}\left(\sum_{k}(|{\partial}{\varphi}|_{g}^{2})_{k}{\varphi}_{{\overline{k}}}\right)}{|{\partial}{\varphi}|_{g}^{2}}
=\frac{2(n-1)}{B}|{\partial}{\varphi}|_{g}^{2}$$ and $$\label{Gradient estimate equation 5}
-\frac{F^{i{\overline{i}}}|e_{i}(|{\partial}{\varphi}|_{g}^{2})|^{2}}{|{\partial}{\varphi}|_{g}^{4}}
= -\frac{1}{B^{2}}F^{i{\overline{i}}}{\varphi}_{i}{\varphi}_{{\overline{i}}}
{\geqslant}-\frac{C}{B^{2}}|{\partial}{\varphi}|_{g}^{2},$$ where we used (\[Bound of Fij\]) in the last inequality. On the other hand, by (\[Bound of Fij\]) and the Cauchy-Schwarz inequality, we have $$\label{Gradient estimate equation 6}
\frac{1}{B}F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}({\varphi})
{\geqslant}-\frac{1}{4|{\partial}{\varphi}|_{g}^{2}}\sum_{i,j}|e_{i}{\overline{e}}_{j}({\varphi})|^{2}-\frac{C}{B^{2}}|{\partial}{\varphi}|_{g}^{2}.$$ Thus substituting (\[Gradient estimate equation 4\]), (\[Gradient estimate equation 5\]), (\[Gradient estimate equation 6\]) into (\[Gradient estimate equation 3\]), we get $$\label{dradient-lalpace-2}
\begin{split}
0 {\geqslant}{} & \frac{1}{4|{\partial}{\varphi}|_{g}^{2}}\sum_{i,j}\left(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2}\right)-C_{0} \\
& +\left(\frac{2n-3}{B}-\frac{C_{0}}{B^{2}}-C_{0}e^{-{\varphi}}\right)|{\partial}{\varphi}|_{g}^{2},
\end{split}$$ where $C_{0}$ is a uniform constant.
We choose $B = 2C_{0}$ in (\[dradient-lalpace-2\]). Moreover, by the assumption in the proposition we may also assume $$C_{0}e^{-{\varphi}} {\leqslant}\frac{1}{8C_{0}}.$$ Then, since $n{\geqslant}2$, the coefficient of $|{\partial}{\varphi}|_{g}^{2}$ in (\[dradient-lalpace-2\]) satisfies $$\frac{2n-3}{B}-\frac{C_{0}}{B^{2}}-C_{0}e^{-{\varphi}}{\geqslant}\frac{1}{2C_{0}}-\frac{1}{4C_{0}}-\frac{1}{8C_{0}}=\frac{1}{8C_{0}},$$ and hence $$|{\partial}{\varphi}|_{g}^{2}(x_{0}) {\leqslant}8C_{0}^{2}.$$ Hence, by Proposition \[Zero order estimate\], we obtain $$\max_{M}|{\partial}{\varphi}|_{g}^{2} {\leqslant}e^{\frac{1}{B}(\sup_{M}{\varphi}-\inf_{M}{\varphi})}|{\partial}{\varphi}|_{g}^{2}(x_{0}) {\leqslant}C,$$ as desired.
The following lemma will be used in the next section.
\[Maximum principle gradient lemma\] For a uniform constant $C_{1}$, we have $$F^{i{\overline{j}}}e_{i}{\overline{e}}_{j}(|{\partial}{\varphi}|_{g}^{2})
{\geqslant}\frac{1}{2}\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})-C_{1}.$$
This lemma is an immediate consequence of (\[Maximum principle gradient 2\]), Proposition \[Gradient estimate\] and the Cauchy-Schwarz inequality.
Second order estimate
=====================
This section is devoted to the $C^2$-estimate. We prove
\[Second order estimate\] Let ${\varphi}$ be a solution of (\[Fu-Yau equation\]) satisfying (\[Elliptic condition\]) and $\frac{A}{M_0}{\leqslant}e^{-{\varphi}}{\leqslant}M_0A$ for some uniform constant $M_0$. There exist uniform constants $D_{0}$ and $C_{0}$ such that if $$|{\partial}{\overline{\partial}}{\varphi}|_{g} {\leqslant}D, ~~D_{0}{\leqslant}D \text{~and~} A{\leqslant}A_{D}:=\frac{1}{C_{0}M_{0}D},$$ then $$|{\partial}{\overline{\partial}}{\varphi}|_{g} {\leqslant}\frac{D}{2}.$$
We consider the following quantity $$Q = |{\partial}{\overline{\partial}}{\varphi}|_{g}^{2} + B|{\partial}{\varphi}|_{g}^{2},$$ where $B>1$ is a uniform constant to be determined later. As in Section 3, we assume that $Q(x_{0})=\max_{M}Q$ and choose a local $g$-unitary frame $\{e_{i}\}_{i=1}^{n}$ for $T_{\mathbb{C}}^{(1,0)}M$ around $x_{0}$ such that ${\tilde{g}}_{i{\overline{j}}}(x_{0})$ is diagonal. With the notation $$\hat{\omega} = e^{-{\varphi}}{\tilde{\omega}},
\hat{g}_{i{\overline{j}}} = e^{-{\varphi}}{\tilde{g}}_{i{\overline{j}}},
F^{i{\overline{j}}} = \frac{{\partial}{\sigma_{2}(\hat{\omega})}}{{\partial}\hat{g}_{i{\overline{j}}}}
\text{~and~}
F^{i{\overline{j}},k{\overline{l}}} = \frac{{\partial}^{2}{\sigma_{2}(\hat{\omega})}}{{\partial}\hat{g}_{i{\overline{j}}}{\partial}\hat{g}_{k{\overline{l}}}},$$ we have $$F^{i{\overline{j}}} = \delta_{ij}F^{i{\overline{i}}} = \delta_{ij}e^{-{\varphi}}\sum_{k\neq i}{\tilde{g}}_{k{\overline{k}}}$$ and $$F^{i{\overline{j}},k{\overline{l}}}=
\left\{\begin{array}{ll}
1, & \text{if $i=j$, $k=l$, $i\neq k$;}\\[1mm]
-1, & \text{if $i=l$, $k=j$, $i\neq k$;}\\[1mm]
0, & \text{\quad\quad~otherwise.}
\end{array}\right.$$ By the assumption of Proposition \[Second order estimate\], at the expense of increasing $C_{0}$, we may also assume that $$\label{Second order estimate equation 6}
e^{-{\varphi}}|{\partial}{\overline{\partial}}{\varphi}|_{g}{\leqslant}\frac{1}{1000n^{3}|\alpha|B}.$$ Hence, we get $$\label{Second order estimate equation 1}
|F^{i{\overline{i}}}-(n-1)| {\leqslant}\frac{1}{100}
\text{~and~} |F^{i{\overline{j}},k{\overline{l}}}|{\leqslant}1.$$
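The displayed values of $F^{i{\overline{j}},k{\overline{l}}}$ follow from the formula $\sigma_{2}(A)=\frac{1}{2}\left((\operatorname{tr}A)^{2}-\operatorname{tr}(A^{2})\right)$, whose second partials are $\delta_{ij}\delta_{kl}-\delta_{jk}\delta_{il}$. As a sanity check, the following short Python script (ours, purely illustrative and not part of the paper) recovers the three cases by polarizing this quadratic polynomial:

```python
from fractions import Fraction

def sigma2(A):
    # σ₂ of a matrix: ½((tr A)² − tr(A²)), the second elementary
    # symmetric polynomial of its eigenvalues.
    n = len(A)
    tr = sum(A[i][i] for i in range(n))
    tr_sq = sum(A[i][j] * A[j][i] for i in range(n) for j in range(n))
    return Fraction(tr * tr - tr_sq, 2)

def E(n, i, j):
    # Elementary matrix with a 1 in position (i, j).
    return [[1 if (p, q) == (i, j) else 0 for q in range(n)] for p in range(n)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def hessian_entry(n, i, j, k, l):
    # ∂²σ₂/∂a_{ij}∂a_{kl}: since σ₂ is a quadratic polynomial with no
    # linear part, this equals σ₂(E_ij + E_kl) − σ₂(E_ij) − σ₂(E_kl).
    return (sigma2(add(E(n, i, j), E(n, k, l)))
            - sigma2(E(n, i, j)) - sigma2(E(n, k, l)))

n = 3
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                if i == j and k == l and i != k:
                    expected = 1        # first case in the text
                elif i == l and k == j and i != k:
                    expected = -1       # second case
                else:
                    expected = 0        # all remaining cases
                assert hessian_entry(n, i, j, k, l) == expected
```

The polarization identity used here is exact because $\sigma_{2}$ is homogeneous of degree $2$, so its Hessian is constant and the formal derivatives are point-independent.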
We need the following lemma.
\[Second order estimate lemma\] At $x_{0}$, we have $$\begin{split}
|F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}({\varphi}_{k{\overline{l}}})|
{\leqslant}{} & 8n|\alpha| e^{-{\varphi}}\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})|^{2}+C\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})| \\
& +C\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})+C.
\end{split}$$
Differentiating (\[fu-3\]) twice along $e_{k}$ and ${\overline{e}}_{l}$ at $x_{0}$, we have $$\label{second-derivative-sigma2}
\begin{split}
&F^{i{\overline{j}},p{\overline{q}}}e_{k}(e^{-{\varphi}}{\tilde{g}}_{i{\overline{j}}}){\overline{e}}_{l} (e^{-{\varphi}}{\tilde{g}}_{p{\overline{q}}})
+F^{i{\overline{j}}}e_{k}{\overline{e}}_{l}(e^{-{\varphi}}{\tilde{g}}_{i{\overline{j}}}) \\
&= -2n(n-1)\alpha e_{k}{\overline{e}}_{l}(e^{-{\varphi}}|{\partial}{\varphi}|_{g}^{2})+\frac{n(n-1)}{2}e_{k}{\overline{e}}_{l}(e^{-2{\varphi}}f).
\end{split}$$ Let $$\begin{split}
I_{1} = {} & -F^{i{\overline{j}},p{\overline{q}}}e_{k}(e^{-{\varphi}}{\tilde{g}}_{i{\overline{j}}}){\overline{e}}_{l}(e^{-{\varphi}}{\tilde{g}}_{p{\overline{q}}}), \\[1.5mm]
I_{2} = {} & -2n(n-1)\alpha e_{k}{\overline{e}}_{l}(e^{-{\varphi}}|{\partial}{\varphi}|_{g}^{2}), \\
I_{3} = {} & \frac{n(n-1)}{2}e_{k}{\overline{e}}_{l}(e^{-2{\varphi}}f).
\end{split}$$ Then (\[second-derivative-sigma2\]) becomes $$\label{Second order estimate equation 2}
F^{i{\overline{j}}}e_{k}{\overline{e}}_{l}(e^{-{\varphi}}{\tilde{g}}_{i{\overline{j}}})
= I_{1}+I_{2}+I_{3}.$$ We estimate each term in (\[Second order estimate equation 2\]) below. For $I_{1}$, by (\[Second order estimate equation 1\]), Proposition \[Gradient estimate\] and the Cauchy-Schwarz inequality, we have $$\begin{split}
|I_{1}| {\leqslant}{} & \sum_{i,j,k}\left|e_{k}(\alpha e^{-2{\varphi}}\rho_{i{\overline{j}}}+2n\alpha e^{-{\varphi}}{\varphi}_{i{\overline{j}}})\right|^{2} \\
{\leqslant}{} & 2\sum_{i,j,k}\left|e_{k}(2n\alpha e^{-{\varphi}}{\varphi}_{i{\overline{j}}})\right|^{2}
+2\sum_{i,j,k}\left|e_{k}(\alpha e^{-2{\varphi}}\rho_{i{\overline{j}}})\right|^{2} \\
{\leqslant}{} & 8n^{2}\alpha^{2}e^{-2{\varphi}}\sum_{i,j,k}\left|e_{k}e_{i}{\overline{e}}_{j}({\varphi})-e_{k}[e_{i},{\overline{e}}_{j}]^{(0,1)}({\varphi})
-{\varphi}_{k}{\varphi}_{i{\overline{j}}}\right|^{2}+Ce^{-4{\varphi}} \\
{\leqslant}{} & 16n^{2}\alpha^{2}e^{-2{\varphi}}\sum_{i,j,k}|e_{k}e_{i}{\overline{e}}_{j}({\varphi})|^{2}
+Ce^{-2{\varphi}}\sum_{i,j}\left(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}{e}_{j}({\varphi})|^{2}\right)+Ce^{-2{\varphi}},
\end{split}$$ where we used (\[ddbar formula\]) in the last inequality. Similarly, for $I_{2}$ and $I_{3}$, we get $$|I_{2}| {\leqslant}Ce^{-{\varphi}}\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})|
+Ce^{-{\varphi}}\sum_{i,j}\left(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}{e}_{j}({\varphi})|^{2}\right)+Ce^{-{\varphi}}$$ and $$\begin{split}
|I_{3}| = {} & \frac{n(n-1)}{2}e^{-2{\varphi}}\left|4{\varphi}_{k}{\varphi}_{{\overline{l}}}f-2e_{k}{\overline{e}}_{l}({\varphi})f
-2{\varphi}_{{\overline{l}}}f_{k}-2{\varphi}_{k}f_{{\overline{l}}}+e_{k}{\overline{e}}_{l}f\right| \\
{\leqslant}{} & Ce^{-2{\varphi}}\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})|+Ce^{-2{\varphi}}\sum_{i,j}\left(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}
+|e_{i}{e}_{j}({\varphi})|^{2}\right)+Ce^{-2{\varphi}},
\end{split}$$ where we used Proposition \[Gradient estimate\] and (\[Definition of f\]). Thus substituting these estimates into (\[Second order estimate equation 2\]), we obtain $$\label{Second order estimate equation 7}
\begin{split}
&|F^{i{\overline{i}}}e_{k}{\overline{e}}_{l}(e^{-{\varphi}}{\tilde{g}}_{i{\overline{i}}})|\\
&{\leqslant}16n^{2}\alpha^{2}e^{-2{\varphi}}\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})|^{2}+Ce^{-{\varphi}}\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})| \\
& +Ce^{-{\varphi}}\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})+Ce^{-{\varphi}}.
\end{split}$$
On the other hand, by the definition of ${\tilde{g}}_{i{\overline{i}}}$ and (\[ddbar formula\]), we have $$\begin{split}
F^{i{\overline{i}}}e_{k}{\overline{e}}_{l}(e^{-{\varphi}}{\tilde{g}}_{i{\overline{i}}})
= {} & \alpha F^{i{\overline{i}}}e_{k}{\overline{e}}_{l}(e^{-2{\varphi}}\rho_{i{\overline{i}}})+2n\alpha F^{i{\overline{i}}}e_{k}{\overline{e}}_{l}(e^{-{\varphi}}{\varphi}_{i{\overline{i}}}) \\
= {} & \alpha F^{i{\overline{i}}}e_{k}{\overline{e}}_{l}(e^{-2{\varphi}}\rho_{i{\overline{i}}})+2n\alpha F^{i{\overline{i}}}e_{k}{\overline{e}}_{l}(e^{-{\varphi}}e_{i}{\overline{e}}_{i}({\varphi})) \\
& -2n\alpha F^{i{\overline{i}}}e_{k}{\overline{e}}_{l}(e^{-{\varphi}}[e_{i},{\overline{e}}_{i}]^{(0,1)}({\varphi})).
\end{split}$$ Then by (\[Second order estimate equation 1\]) and Proposition \[Gradient estimate\], it follows that $$\begin{split}
|2n\alpha e^{-{\varphi}}F^{i{\overline{i}}}e_{k}{\overline{e}}_{l}e_{i}{\overline{e}}_{i}({\varphi})|
{\leqslant}{} & |F^{i{\overline{i}}}e_{k}{\overline{e}}_{l}(e^{-{\varphi}}{\tilde{g}}_{i{\overline{i}}})|+Ce^{-{\varphi}}\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})| \notag\\
& +Ce^{-{\varphi}}\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})+Ce^{-{\varphi}}.
\end{split}$$ Thus substituting (\[Second order estimate equation 7\]) into the above inequality, we derive $$\label{Second order estimate equation 5}
\begin{split}
|F^{i{\overline{i}}}e_{k}{\overline{e}}_{l}e_{i}{\overline{e}}_{i}({\varphi})|
{\leqslant}{} & 8n|\alpha|e^{-{\varphi}}\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})|^{2}+C\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})| \\
& +C\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})+C.
\end{split}$$
Note that $$\label{Second order estimate equation 4}
\begin{split}
e_{i}{\overline{e}}_{i}e_{k}{\overline{e}}_{l}({\varphi})
= {} & e_{k}{\overline{e}}_{l}e_{i}{\overline{e}}_{i}({\varphi})+e_{k}[e_{i},{\overline{e}}_{l}]{\overline{e}}_{i}({\varphi})+[e_{i},e_{k}]{\overline{e}}_{l}{\overline{e}}_{i}({\varphi}) \\
& +e_{i}e_{k}[{\overline{e}}_{i},{\overline{e}}_{l}]({\varphi})+e_{i}[{\overline{e}}_{i},e_{k}]{\overline{e}}_{l}({\varphi}).
\end{split}$$ Since $(M,\omega)$ is Hermitian, near $x_{0}$, $[e_{i},e_{k}]$ is a $(1,0)$ vector field and $[{\overline{e}}_{i},{\overline{e}}_{l}]$ is a $(0,1)$ vector field. By (\[Second order estimate equation 1\]) and (\[Second order estimate equation 5\]), we see that $$\begin{split}
|F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}e_{k}{\overline{e}}_{l}({\varphi})|
{\leqslant}{} & |F^{i{\overline{i}}}e_{k}{\overline{e}}_{l}e_{i}{\overline{e}}_{i}({\varphi})|+C\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})| \\
& +C\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|+|e_{i}e_{j}({\varphi})|) \\
{\leqslant}{} & 8n|\alpha|e^{-{\varphi}}\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})|^{2}+C\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})| \\
& +C\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})+C.
\end{split}$$ As a consequence, we obtain $$\begin{split}
|F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}({\varphi}_{k{\overline{l}}})|
{\leqslant}{} & |F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}e_{k}{\overline{e}}_{l}({\varphi})|+|F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}[e_{k},{\overline{e}}_{l}]^{(0,1)}({\varphi})| \\[2.5mm]
{\leqslant}{} & 8n|\alpha|e^{-{\varphi}}\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})|^{2}+C\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})| \\
& +C\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})+C.
\end{split}$$ The lemma is proved.
By Lemma \[Second order estimate lemma\] and the Cauchy-Schwarz inequality, at $x_{0}$, we have $$\begin{split}
F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}(|{\partial}{\overline{\partial}}{\varphi}|_{g}^{2})
= {} & 2\sum_{k,l}F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}({\varphi}_{k{\overline{l}}}){\varphi}_{l{\overline{k}}}+2\sum_{k,l}F^{i{\overline{i}}}e_{i}({\varphi}_{k{\overline{l}}}){\overline{e}}_{i}({\varphi}_{l{\overline{k}}}) \\
{\geqslant}{} & -2|{\partial}{\overline{\partial}}{\varphi}|_{g}\sum_{k,l}|F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}({\varphi}_{k{\overline{l}}})|+\frac{1}{2}\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})|^{2} \\
& -C\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}e_{j}({\varphi})|^{2})-C \\
{\geqslant}{} & \left(\frac{1}{4}-8n^{3}|\alpha|e^{-{\varphi}}|{\partial}{\overline{\partial}}{\varphi}|_{g}\right)\sum_{i,j,p}|e_{p}e_{i}{\overline{e}}_{j}({\varphi})|^{2}-C \\
& -C(|{\partial}{\overline{\partial}}{\varphi}|_{g}+1)\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}{e}_{j}({\varphi})|^{2}).
\end{split}$$ Recalling (\[Second order estimate equation 6\]) and that $|{\partial}{\overline{\partial}}{\varphi}|_{g}{\leqslant}D$, we thus have $$\begin{split}
F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}(|{\partial}{\overline{\partial}}{\varphi}|_{g}^{2})
{\geqslant}-C_{0}(D+1)\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}{e}_{j}({\varphi})|^{2})-C_{0},
\end{split}$$ where $C_{0}$ is a uniform constant. On the other hand, by Lemma \[Maximum principle gradient lemma\], we have $$F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}(|{\partial}{\varphi}|_{g}^{2})
{\geqslant}\frac{1}{2}\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}{e}_{j}({\varphi})|^{2})-C_{1}.$$ Hence, by the maximum principle, at $x_{0}$, we get $$\begin{split}
0 {\geqslant}{} & F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}(Q) \\[1mm]
= {} & F^{i{\overline{i}}}e_{i}{\overline{e}}_{i}(|{\partial}{\overline{\partial}}{\varphi}|_{g}^{2})+BF^{i{\overline{i}}}e_{i}{\overline{e}}_{i}(|{\partial}{\varphi}|_{g}^{2}) \\
{\geqslant}{} & \left(\frac{B}{2}-C_{0}D-C_{0}\right)\sum_{i,j}(|e_{i}{\overline{e}}_{j}({\varphi})|^{2}+|e_{i}{e}_{j}({\varphi})|^{2})-C_{0}-C_{1}B.
\end{split}$$ Choose $B=8C_{0}D+8C_{0}$. It follows that $$|{\partial}{\overline{\partial}}{\varphi}|_{g}^{2}(x_{0}) {\leqslant}C.$$ Therefore, by Proposition \[Gradient estimate\], at the expense of increasing $D_{0}$, we obtain $$\max_{M}|{\partial}{\overline{\partial}}{\varphi}|_{g}^{2} {\leqslant}|{\partial}{\overline{\partial}}{\varphi}|_{g}^{2}(x_{0})+BC {\leqslant}CD {\leqslant}\frac{D^{2}}{4}.$$
Proofs of Theorem \[Existence Theorem\] and Theorem \[Uniqueness Theorem\]
==========================================================================
In this section, we prove Theorem \[Existence Theorem\] and Theorem \[Uniqueness Theorem\]. We use the continuity method and consider the family of equations ($t\in[0,1]$), $$\label{Fu-Yau equation t}
\begin{split}
{\sqrt{-1} \partial \overline{\partial}}(e^{{\varphi}}\omega & -t\alpha e^{-{\varphi}}\rho)\wedge\omega^{n-2} \\
& +n\alpha{\sqrt{-1} \partial \overline{\partial}}{\varphi}\wedge{\sqrt{-1} \partial \overline{\partial}}{\varphi}\wedge\omega^{n-2}+t\mu\frac{\omega^{n}}{n!}=0,
\end{split}$$ where ${\varphi}$ satisfies the elliptic condition, $$\label{Elliptic condition t}
e^{{\varphi}}\omega+t\alpha e^{-{\varphi}}\rho+2n\alpha{\sqrt{-1} \partial \overline{\partial}}{\varphi}\in \Gamma_{2}(M)$$ and the normalization condition $$\label{Normalization condition t}
\|e^{-{\varphi}}\|_{L^{1}} = A.$$ We shall prove that (\[Fu-Yau equation t\]) is solvable for any $t\in[0,1]$. Like the Fu-Yau equation (\[Fu-Yau equation\]), equation (\[Fu-Yau equation t\]) is equivalent to a $2$-nd Hessian type equation of the form (\[fu-2\]).
For a fixed $\beta\in (0,1)$, we define the following sets of functions on $M$, $$\begin{split}
B & = \{ {\varphi}\in C^{2,\beta}(M) ~|~ \|e^{-{\varphi}}\|_{L^{1}}=A \},\\[2mm]
B_{1} & = \{ ({\varphi},t)\in B\times[0,1] ~|~ \text{${\varphi}$ satisfies (\ref{Elliptic condition t})} \},\\
B_{2} & = \{ u\in C^{\beta}(M) ~|~ \int_{M}u\omega^{n}=0 \}.
\end{split}$$ Then $B_{1}$ is an open subset of $B\times[0,1]$. Since $\int_{M}\mu\omega^{n}=0$, we introduce a map $\Phi:B_{1}\rightarrow B_{2}$, $$\begin{split}
\Phi({\varphi},t)\omega^{n} = {} & {\sqrt{-1} \partial \overline{\partial}}(e^{{\varphi}}\omega-t\alpha e^{-{\varphi}}\rho)\wedge\omega^{n-2} \\
& +n\alpha{\sqrt{-1} \partial \overline{\partial}}{\varphi}\wedge{\sqrt{-1} \partial \overline{\partial}}{\varphi}\wedge\omega^{n-2}+t\mu\frac{\omega^{n}}{n!}.
\end{split}$$ Let $I$ be the set $$\{ t\in [0,1] ~|~ \text{there exists $({\varphi},t)\in B_{1}$ such that $\Phi({\varphi},t)=0$} \}.$$ Thus, to prove Theorem \[Existence Theorem\], it suffices to prove that $I=[0,1]$. Note that ${\varphi}_{0}=-\ln A$ is a solution of (\[Fu-Yau equation t\]) at $t=0$. Hence, we have $0\in I$. In the following, we prove that the set $I$ is both open and closed.
Openness
--------
Suppose that $\hat{t}\in I$. By the definition of the set $I$, there exists $(\hat{{\varphi}},\hat{t})\in B_{1}$ such that $\Phi(\hat{{\varphi}},\hat{t})=0$. Let $(D_{{\varphi}}\Phi)_{(\hat{{\varphi}},\hat{t})}$ be the linearized operator of $\Phi$ at $\hat{{\varphi}}$. Then we have $$(D_{{\varphi}}\Phi)_{(\hat{{\varphi}},\hat{t})}: \{ u\in C^{2,\beta}(M) ~|~ \int_{M}ue^{-\hat{{\varphi}}}\omega^{n}=0 \}
\rightarrow \{ v\in C^{\beta}(M) ~|~ \int_{M}v\omega^{n}=0 \}$$ and $$\begin{split}
(D_{{\varphi}}\Phi)_{(\hat{{\varphi}},\hat{t})}(u)\omega^{n} = {\sqrt{-1} \partial \overline{\partial}}(ue^{\hat{{\varphi}}}\omega & +\hat{t}\alpha ue^{-\hat{{\varphi}}}\rho)\wedge\omega^{n-2} \\
& +2n\alpha{\sqrt{-1} \partial \overline{\partial}}\hat{{\varphi}}\wedge{\sqrt{-1} \partial \overline{\partial}}u\wedge\omega^{n-2}.
\end{split}$$ We use the implicit function theorem to prove the openness of $I$. It suffices to prove that $(D_{{\varphi}}\Phi)_{(\hat{{\varphi}},\hat{t})}$ is injective and surjective. For convenience, we let $L:C^{2,\beta}(M)\rightarrow C^{\beta}(M)$ be an extension operator of $(D_{{\varphi}}\Phi)_{(\hat{{\varphi}},\hat{t})}$. First we compute the formal $L^{2}$-adjoint of $L$ in the following.
For any $u,v\in C^{\infty}(M)$, we have $$\begin{split}
& \int_{M}vL(u)\omega^{n} \\
= {} & \int_{M}v\left({\sqrt{-1} \partial \overline{\partial}}(ue^{\hat{{\varphi}}}\omega+\hat{t}\alpha ue^{-\hat{{\varphi}}}\rho)
+2n\alpha{\sqrt{-1} \partial \overline{\partial}}\hat{{\varphi}}\wedge{\sqrt{-1} \partial \overline{\partial}}u\right)\wedge\omega^{n-2} \\
= {} & \int_{M}u\left((e^{\hat{{\varphi}}}\omega+\hat{t}\alpha e^{-\hat{{\varphi}}}\rho)\wedge{\sqrt{-1} \partial \overline{\partial}}v
+2n\alpha{\sqrt{-1} \partial \overline{\partial}}\hat{{\varphi}}\wedge{\sqrt{-1} \partial \overline{\partial}}v\right)\wedge\omega^{n-2}.
\end{split}$$ This implies that $$L^{*}(v)\omega^{n} = {\sqrt{-1} \partial \overline{\partial}}v\wedge\left((e^{\hat{{\varphi}}}\omega+\hat{t}\alpha e^{-\hat{{\varphi}}}\rho)+2n\alpha{\sqrt{-1} \partial \overline{\partial}}\hat{{\varphi}}\right)\wedge\omega^{n-2}.$$ By the strong maximum principle, it follows that $$\textrm{Ker}L^{*} = \{ \text{Constant functions on $M$} \}.$$ Since the index of $L$ is zero, we see that $\dim\textrm{Ker}L=1$. Combining this with the theory of linear elliptic equations, we see that there exists a positive function $u_{0}\in C^{2,\beta}(M)$ such that $$\textrm{Ker}L = \{ cu_{0} ~|~ c\in\mathbf{R} \}.$$ Hence, $$\int_{M}u_{0}e^{-\hat{{\varphi}}}\omega^{n}>0 \text{~and~}
u_{0} \notin \{ u\in C^{2,\beta}(M) ~|~ \int_{M}ue^{-\hat{{\varphi}}}\omega^{n}=0 \},$$ which implies $(D_{{\varphi}}\Phi)_{(\hat{{\varphi}},\hat{t})}$ is injective.
Next, for any $v\in C^{\beta}(M)$ such that $\int_{M}v\omega^{n}=0$, by the Fredholm alternative, there exists a weak solution $u$ of the equation $Lu=v$. Moreover, by the theory of linear elliptic equations, we see that $u\in C^{2,\beta}(M)$. Take $$c_{0} = -\frac{\int_{M}ue^{-\hat{{\varphi}}}\omega^{n}}{\int_{M}u_{0}e^{-\hat{{\varphi}}}\omega^{n}}.$$ Then $$(D_{{\varphi}}\Phi)_{(\hat{{\varphi}},\hat{t})}(u+c_{0}u_{0})=L(u+c_{0}u_{0})=v \text{~and~} \int_{M}(u+c_{0}u_{0})e^{-\hat{{\varphi}}}\omega^{n}=0,$$ which implies $(D_{{\varphi}}\Phi)_{(\hat{{\varphi}},\hat{t})}$ is surjective.
Closedness
----------
Since $0\in I$ and $I$ is open, there exists $t_{0}\in (0,1]$ such that $[0,t_{0})\subset I$. We need to prove $t_{0}\in I$. It suffices to prove the following proposition.
\[A priori estimate\] Let ${\varphi}_{t}$ be the solution of (\[Fu-Yau equation t\]). If ${\varphi}_{t}$ satisfies (\[Elliptic condition t\]) and (\[Normalization condition t\]), there exists a constant $C_{A}$ depending only on $A$, $t_{0}$, $\rho$, $\mu$, $\alpha$, $\beta$ and $(M,\omega)$ such that $$\|{\varphi}_{t}\|_{C^{2,\beta}}{\leqslant}C_{A}.$$
First, we prove the zero order estimate. In fact, we have
\[claim-2\] $$\label{Closeness equation 1}
\sup_{M}e^{-{\varphi}_{t}} {\leqslant}2M_{0}A, ~t\in[0,t_{0}),$$ where $M_{0}$ is the constant in Proposition \[Zero order estimate\].
Note that ${\varphi}_{0}=-\ln A$. Then $\sup_{M}e^{-{\varphi}_{0}} {\leqslant}M_{0}A$, which satisfies (\[Closeness equation 1\]). Thus, if (\[Closeness equation 1\]) is false, there would exist $\tilde{t}\in (0,t_{0})$ such that $$\label{Closeness equation 2}
\sup_{M}e^{-{\varphi}_{\tilde{t}}} = 2M_{0}A.$$ We may assume that $2M_{0}A{\leqslant}\delta_{0}$, where $\delta_{0}=\sqrt{\frac{1}{2|\alpha|\|\rho\|_{C^{0}}+1}}$ is chosen as in Proposition \[Zero order estimate\]. Namely, $e^{-{\varphi}_{{\tilde{t}}}}{\leqslant}\delta_{0}$. Hence, we can apply Proposition \[Zero order estimate\] to ${\varphi}_{{\tilde{t}}}$ with $\rho$ and $\mu$ replaced by $t\rho$ and $t\mu$, respectively, and we obtain $$e^{-{\varphi}_{\tilde{t}}} {\leqslant}M_{0}A,$$ which contradicts (\[Closeness equation 2\]). This proves (\[Closeness equation 1\]). Combining (\[Closeness equation 1\]) and Proposition \[Zero order estimate\], we obtain the zero order estimate.
Next, we use a similar argument to prove the second order estimate $$\label{Closeness equation 4}
\sup_{M}|{\partial}{\overline{\partial}}{\varphi}_{t}|_{g} {\leqslant}D_{0},$$ for any $t\in(0,t_{0})$, where $D_{0}$ is the constant as in Proposition \[Second order estimate\]. If (\[Closeness equation 4\]) is false, there exists $\tilde{t}\in (0,t_{0})$ such that $$\sup_{M}|{\partial}{\overline{\partial}}{\varphi}_{{\tilde{t}}}|_{g} = D_{0}.$$ Recalling Proposition \[Second order estimate\], we get $$\sup_{M}|{\partial}{\overline{\partial}}{\varphi}_{{\tilde{t}}}|_{g} {\leqslant}\frac{D_{0}}{2},$$ which is a contradiction. Thus (\[Closeness equation 4\]) is true.
By (\[Closeness equation 4\]) and Proposition \[Gradient estimate\], we have the first order estimate $$\label{Closeness equation 3}
\sup_{M}|{\partial}{\varphi}_{t}|_{g}^{2} {\leqslant}C.$$ Combining (\[Closeness equation 4\]) and (\[Closeness equation 3\]) with equation (\[Fu-Yau equation t\]) (note that (\[Fu-Yau equation t\]) is equivalent to a $2$-nd Hessian type equation of the form (\[fu-2\])), we get $$\label{non-degenrate}
\left|\sigma_{2}({\tilde{\omega}})-\frac{n(n-1)}{2}e^{2{\varphi}}\right| {\leqslant}Ce^{{\varphi}}.$$ Then, by the zero order estimate, we deduce $$\frac{1}{CA^{2}} {\leqslant}\sigma_{2}({\tilde{\omega}}) {\leqslant}\frac{C}{A^{2}}.$$ Hence, (\[Fu-Yau equation t\]) is uniformly elliptic and non-degenerate. By the $C^{2,\alpha}$-estimate (cf. [@TWWY15 Theorem 1.1]), we obtain $$\label{c2-alpha}
\|{\varphi}_{t}\|_{C^{2,\beta}}{\leqslant}C_{A}.$$
Uniqueness
----------
In this subsection, we give the proof of Theorem \[Uniqueness Theorem\]. First, we show the uniqueness of solutions to (\[Fu-Yau equation t\]) when $t=0$.
\[Uniqueness lemma 1\] When $t=0$, (\[Fu-Yau equation t\]) has a unique solution $${\varphi}_{0} = -\ln A.$$
By a calculation similar to that of (\[Infimum estimate equation 3\]) (taking $k=1$), we obtain $$\begin{split}
& \int_{M}e^{-{\varphi}}(e^{{\varphi}}\omega+\alpha e^{-{\varphi}}t\rho)\wedge\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}{\varphi}\wedge\omega^{n-2} \\
{\leqslant}{} & 2\alpha \int_{M}e^{-2{\varphi}}\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}(t\rho)\wedge\omega^{n-2}-2\int_{M}e^{-{\varphi}}t\mu\frac{\omega^{n}}{n!}.\notag
\end{split}$$ When $t=0$, it is clear that $$\int_{M}\sqrt{-1}{\partial}{\varphi}\wedge{\overline{\partial}}{\varphi}\wedge\omega^{n-1} = 0,$$ so ${\varphi}$ must be constant. Combining this with the normalization condition $\|e^{-{\varphi}}\|_{L^{1}}=A$, we obtain $${\varphi}_{0} = -\ln A.$$
Assume that we have two solutions ${\varphi}$ and ${\varphi}'$ of (\[Fu-Yau equation\]). We use the continuity method to solve (\[Fu-Yau equation t\]) from $t=1$ to $0$. Note that ${\varphi}$ and ${\varphi}'$ are both solutions when $t=1$. Then by the implicit function theorem as in Subsection 5.1, there is a smooth solution $\varphi_t^1$ (or $\varphi_t^2$) of (\[Fu-Yau equation t\]) for any $t\in (t_0, 1]$ ($t_0<1$) with the property $\varphi_1^1= {\varphi}$ (or $\varphi_1^2={\varphi}'$). Set $$\begin{aligned}
J_{{\varphi}}= \{ t\in [0,1] ~|~&\text{there exists a family of smooth solutions}~ \varphi_{t'}^1~{\rm of}~ (\ref{Fu-Yau equation t} )\notag\\
& {\rm for ~any}~t'\in [t,1]~ \text{such that} ~ {\varphi}_{1}^{1} = {\varphi}\}.\notag\end{aligned}$$ From the arguments in Sections 2–4, we see that Proposition \[Zero order estimate\], Proposition \[Gradient estimate\] and Proposition \[Second order estimate\] are still true for $\varphi_t^1$. As a consequence, Proposition \[A priori estimate\] holds for $\varphi_t^1$. Thus $J_{{\varphi}}= [0,1]$. Similarly, $J_{{\varphi}' }= [0,1]$. On the other hand, thanks to Lemma \[Uniqueness lemma 1\], we have $${\varphi}_{0}^1 = {\varphi}_{0}^2 = -\ln A.\notag$$ Hence $\varphi_t^1=\varphi_t^2$ for any $t\in [0,1]$. Theorem \[Uniqueness Theorem\] is proved.
It seems that the condition (\[condition-uniqueness\]) in Theorem \[Uniqueness Theorem\] can be removed. More precisely, we have the following conjecture.
\[uniqueness-general\] The solution ${\varphi}$ of (\[Fu-Yau equation\]) in Theorem \[Existence Theorem\] is unique.
We remark that Conjecture \[uniqueness-general\] is true if $\alpha<0$ and $\rho{\geqslant}0$ in equation (\[Fu-Yau equation\]). In fact, by modifying the argument in the proof of Proposition \[Zero order estimate\], we can get the $C^0$-estimate for the solution ${\varphi}_t$ of equation (\[Fu-Yau equation t\]) under the assumptions (\[Elliptic condition\]) and (\[Normalization condition\]) in this case. Then by the $C^2$-estimate in [@PPZ16b Proposition 5, Proposition 6], we can also obtain (\[c2-alpha\]). We will discuss this in detail elsewhere.
[99]{} *D. Baraglia* and *P. Hekmati,* [Transitive Courant Algebroids, String Structures and T-duality]{}, Adv. Theor. Math. Phys. **19** (2015) 613–672. *U. Bunke,* [String structures and trivialisations of a Pfaffian line bundle,]{} Comm. Math. Phys. **307** (2011) 675–712.
*S. Dinew* and *S. Ko[ł]{}odziej,* [Liouville and Calabi-Yau type theorems for complex Hessian equations,]{} Amer. J. Math. **139** (2017), no. 2, 403–415.
*D. Freed,* [Determinants, torsion and strings,]{} Commun. Math. Phys. **107** (1986), 483–513.
*J.-X. Fu* and *S.-T. Yau,* [A Monge-Ampère-type equation motivated by string theory,]{} Comm. Anal. Geom. **15** (2007), no. 1, 29–75.
*J.-X. Fu* and *S.-T. Yau,* [The theory of superstring with flux on non-Kähler manifolds and the complex Monge-Ampère equation,]{} J. Differential Geom. **78** (2008), no. 3, 369–428.
*E. Goldstein* and *S. Prokushkin*, [Geometric model for complex non-Kähler manifolds with $SU(3)$ structure,]{} Comm. Math. Phys. **251** (2004), no. 1, 65–78. *M. Garcia-Fernandez*, [Torsion-free Generalized connections and heterotic supergravity]{}, Comm. Math. Phys. **332** (2014) 89–115.
*M. Garcia-Fernandez, R. Rubio* and *C. Tipler*, [Infinitesimal moduli for the Strominger system and killing spinors in generalized geometry]{}, Math. Ann. **369** (2017), no. 1-2, 539–595.
*F. R. Harvey* and *H. B. Lawson,* [Potential theory on almost complex manifolds,]{} Ann. Inst. Fourier (Grenoble) **65** (2015), no. 1, 171–210.
*Z. Hou, X.-N. Ma* and *D. Wu,* [A second order estimate for complex Hessian equations on a compact Kähler manifold,]{} Math. Res. Lett. **17** (2010), no. 3, 547–561.
*J. Li* and *S.-T. Yau,* [ The existence of supersymmetric string theory with torsion,]{} J. Differential Geom. **70** (2005), no. 1, 143–181.
*D. H. Phong, S. Picard* and *X. Zhang,* [On estimates for the Fu-Yau generalization of a Strominger system,]{} to appear in J. Reine Angew. Math.
*D. H. Phong, S. Picard* and *X. Zhang,* [The Fu-Yau equation with negative slope parameter,]{} Invent. Math. **209** (2017), no. 2, 541–576.
*M. Reid,* [The moduli space of 3-folds with $K = 0$ may nevertheless be irreducible,]{} Math. Ann. **278** (1987), 329–334.
*C. Redden,* [String structures and canonical 3-forms]{}, Pacific J. Math. **249** (2011) 447–484.
*H. Sati, U. Schreiber* and *J. Stasheff,* [Twisted differential string and fivebrane structures,]{} Comm. Math. Phys. **315** (2012) 169–213.
*A. Strominger,* [Superstrings with torsion,]{} Nuclear Phys. B **274** (1986), no. 2, 253–284.
*G. Székelyhidi,* [Fully non-linear elliptic equations on compact Hermitian manifolds,]{} to appear in J. Differential Geom.
*V. Tosatti, Y. Wang, B. Weinkove* and *X. Yang,* [$C^{2,\alpha}$ estimate for nonlinear elliptic equations in complex and almost complex geometry,]{} Calc. Var. Partial Differential Equations **54** (2015), no. 1, 431–453.
*E. Witten,* [Global Gravitational Anomalies]{}, Comm. Math. Phys. **100** (1985), 197–229.
Science in China Series A Mathematics **48** (2005), 47–60.
*D. Zhang,* [Hessian equations on closed Hermitian manifolds,]{} to appear in Pacific J. Math.
---
abstract: 'The availability of big data in materials science offers new routes for analyzing materials properties and functions and achieving scientific understanding. Finding structure in these data that is not directly visible with standard tools, and exploiting the scientific information, requires new and dedicated methodology based on approaches from statistical learning, compressed sensing, and other recent methods from applied mathematics, computer science, statistics, signal processing, and information science. In this paper, we explain and demonstrate a compressed-sensing based methodology for feature selection, specifically for discovering physical descriptors, i.e., physical parameters that describe the material and its properties of interest, and associated equations that explicitly and quantitatively describe those relevant properties. As a showcase application and proof of concept, we describe how to build a physical model for the quantitative prediction of the crystal structure of binary compound semiconductors.'
author:
- 'Luca M. Ghiringhelli$^1$, Jan Vybiral$^2$, Emre Ahmetcik$^1$, Runhai Ouyang$^1$, Sergey V. Levchenko$^1$, Claudia Draxl$^{3,1}$, and Matthias Scheffler$^{1,4}$'
title: |
Learning physical descriptors for materials science\
by compressed sensing
---
Introduction
============
Big-data-driven research offers new routes towards scientific insight. This big-data challenge is not only about storing and processing huge amounts of data, but is also, and in particular, a chance for new methodology for modeling, describing, and understanding. Let us explain this with a simple, but most influential, historic example. About 150 years ago, D. Mendeleev and others organized the 56 elements that were known at the time in terms of a table, according to their weight and chemical properties. Many elements were not yet identified, but from the table, it was clear that they should exist, and even their chemical properties were anticipated. The scientific reason behind the table, i.e., the shell structure of the electrons, was unclear and was understood only 50 years later when quantum mechanics had been developed. Finding structure in the huge space of materials, e.g., one table (or map) for each property or function of interest, is one of the great dreams of (big-)data-driven materials science.
Obviously, the challenge of sorting all materials is much bigger than that of the mentioned example of the Periodic Table of Elements (PTE). To date, the PTE contains 118 elements that have been observed. About $200\, 000$ materials are “known” to exist [@LB], but only for very few of these “known” materials have the basic properties (elastic constants, plasticity, piezoelectric tensors, thermal and electrical conductivity, etc.) been determined. When considering surfaces, interfaces, nanostructures, organic materials, and hybrids of these mentioned systems, the number of possible materials is practically infinite. It is, therefore, highly likely that new materials with superior and up to now simply unknown characteristics exist that could help solve fundamental issues in the fields of energy, mobility, safety, information, and health.
For materials science it is already clear that big data are structured, and in terms of materials properties and functions, the space of all possible materials is sparsely populated. Finding this structure in the big data, e.g., asking for efficient catalysts for CO$_2$ activation or oxidative coupling of methane, good thermal barrier coatings, shape memory alloys as artery stents, or thermoelectric materials for energy harvesting from temperature gradients, just to name a few examples, may be possible, even if the actuating physical mechanisms of these properties and functions are not yet understood in detail. Novel big-data analytics tools, e.g., based on machine learning or compressed sensing, promise to do so.
Machine learning of big data has been used extensively in a variety of fields ranging from biophysics and drug design to social media and text mining. It is typically considered a universal approach for ‘learning’ (fitting) a complex relationship $y=f(x)$. Some, though few, machine-learning based works have been done in materials science, e.g., Refs. . Most of them use kernel ridge regression or Gaussian processes, where in both cases the fitted/learned property is expressed as a weighted sum over all or selected data points. A clear breakthrough demonstrating a ‘game change’ is still missing, and a key problem, emphasized in Ref. , is that good feature selection for descriptor identification is central to achieving a good-quality, predictive statistically learned equation.
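To make the notion of a “weighted sum over all or selected data points” concrete, the following minimal sketch fits a one-dimensional kernel ridge regression with a Gaussian kernel. It is only an illustration of the model class mentioned above; the toy descriptor values, the kernel width, and the regularization strength are invented for the example and are not taken from any of the cited studies.

```python
import math

def gauss_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two scalar inputs."""
    return math.exp(-(x - y) ** 2 / (2.0 * sigma ** 2))

def solve(A, b):
    """Solve A x = b for a small dense system by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=1e-3):
    """Kernel ridge regression: weights alpha solve (K + lam*I) alpha = y."""
    n = len(xs)
    K = [[gauss_kernel(xs[i], xs[j]) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)

def krr_predict(xs, alpha, x):
    """Prediction = weighted sum of kernel evaluations at the data points."""
    return sum(a * gauss_kernel(x, xi) for a, xi in zip(alpha, xs))

# Hypothetical one-dimensional descriptor values and property values.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.8, 0.9, 0.1]
alpha = krr_fit(xs, ys)
```

The weights $\alpha$ solve $(K+\lambda I)\alpha = y$, and the prediction at a new point $x$ is $\sum_i \alpha_i k(x, x_i)$, i.e., precisely the weighted sum over data points described above; Gaussian-process regression yields the same mean predictor.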
One of the purposes of discovering structure in materials-science data is to predict a property of interest for a given complete class of materials. In order to do this, a model needs to be built that maps some accessible, descriptive input quantities, identifying a material, into the property of interest. In statistical learning, this set of input parameters is called the descriptor [@Ghiringhelli2015]. For our materials science study, this descriptor (elsewhere also termed “fingerprint” [@Ramprasad13; @Curtarolo15]) is a set of physical parameters that uniquely describe the material and its function of interest. It is important that materials that are very different (similar) with respect to the property or properties of interest should be characterized by very different (similar) values of the descriptors. Moreover, the determination of the descriptor must not involve calculations as intensive as those needed for the evaluation of the property to be predicted.
In most papers published so far, the descriptor has been introduced *ad hoc*, i.e., without performing an extensive, systematic analysis and without demonstrating that it is the best (in some sense) within a certain broad class. The risk of such an *ad hoc* approach is that the learning step is inefficient (i.e., very many data points are needed), the governing physical mechanisms remain hidden, and the reliability of predictions is unclear. Indeed, machine learning is understood as an interpolation, not an extrapolation, approach, and it will do well if enough data are available. However, quantum mechanics and materials science are so multifaceted and intricate that we are convinced that we will hardly ever have enough data. Thus, it is crucial to develop these tools as domain-specific methods and to introduce some scientific understanding, keeping the bias as low as possible.
In this paper, we present a methodology for discovering physically meaningful descriptors and for predicting physically (materials-science) relevant properties. The paper is organized as follows: In section II, we introduce compressed-sensing[@Donoho06; @Candes08; @Kutyniok12; @Vybiral15] based methods for feature selection; in section III, we introduce efficient algorithms for the construction and screening of feature spaces in order to single out “the best” descriptor. “Feature selection” is a widespread set of techniques that are used in statistical analysis in different fields [@FitSel]. In the following sections, we analyze the significance of the descriptors found by the statistical-learning methods, by discussing the interpolation ability of the model based on the found descriptors (section IV), the stability of the models in terms of sensitivity analysis (section V), and the extrapolation capability, i.e., the possibility of predicting new materials (section VI).
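As a minimal, generic illustration of how an $\ell_1$ (compressed-sensing) penalty singles out relevant features, the sketch below fits a sparse linear model by LASSO, solved with plain iterative soft-thresholding. This is not the specific algorithm developed in the following sections; the feature matrix, the sparse ground truth, and the regularization strength are all hypothetical.

```python
import random

def soft_threshold(v, t):
    """Componentwise soft-thresholding, the proximal operator of t*||.||_1."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def lasso_ista(D, y, lam, eta=0.01, iters=3000):
    """Minimize ||D c - y||^2 / 2 + lam * ||c||_1 by iterative soft-thresholding."""
    n, p = len(D), len(D[0])
    c = [0.0] * p
    for _ in range(iters):
        # residual r = D c - y, gradient of the smooth part g = D^T r
        r = [sum(D[i][j] * c[j] for j in range(p)) - y[i] for i in range(n)]
        grad = [sum(D[i][j] * r[i] for i in range(n)) for j in range(p)]
        c = soft_threshold([c[j] - eta * grad[j] for j in range(p)], eta * lam)
    return c

# Hypothetical feature matrix: 20 "materials", 5 candidate features; the
# "property" depends on the first feature only, so the l1 penalty should
# recover exactly that sparse support.
random.seed(0)
n, p = 20, 5
D = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
y = [2.0 * row[0] for row in D]
c = lasso_ista(D, y, lam=0.5)
selected = [j for j in range(p) if abs(c[j]) > 1e-2]
```

With a suitable regularization strength, only the truly relevant columns survive the shrinkage, which is the sense in which an $\ell_1$ penalty performs feature selection and in which the descriptor candidates of the following sections are screened.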
As a showcase application and proof of concept, we use the quantitative prediction of the crystal structure of binary compound semiconductors, which are known to crystallize in zincblende (ZB), wurtzite (WZ), or rocksalt (RS) structures. The structures and energies of ZB and WZ are very close, and for the sake of clarity we do not distinguish them here. For many materials, the energy difference between ZB and RS is larger, though still very small, namely just 0.001% or less of the total energy of a single atom. Thus, high accuracy is required to predict this difference. The property $P$ that we aim to predict is the difference in the energies between RS and ZB for a given AB-compound material, $\Delta E_\textrm{AB}$. The energies are calculated within the Kohn-Sham formulation [@KohnSham65] of density-functional theory [@DFT64], with the local-density approximation (LDA) exchange-correlation functional [@CA80; @PW92]. Since in this paper we are concerned with the development of a data-analytics approach, it is not relevant that approximations better than LDA exist; only the internal consistency and the realistic complexity of the data are important. The methodology described below applies to these better approximations as well.
Clearly, the sign of $\Delta E_\textrm{AB}$ gives the classification (negative for RS and positive for ZB), but the quantitative prediction of the $\Delta E_\textrm{AB}$ values gives a better understanding. The separation of RS and ZB structures into distinct classes, on the basis of their chemical formula alone, is difficult and has been a key problem in materials science for over 50 years [@vanVechten69; @Phillips70; @Bloch72; @Bloch74; @Zunger80; @Chelikowsky82; @Pettifor84; @Villars85; @Tosi85; @Galli87; @Chelikowski12]. In the rest of the paper, we will always write the symbol $\Delta E$, without the subscript AB, to signify the energy of the RS compound minus the energy of the ZB compound.
Compressed-sensing based methods for feature selection
======================================================
Let us assume that we are given data points $\{\bm{d}_1, P_1\},\dots,\{\bm{d}_N,
P_N\}$, with $\bm{d}_1,\dots,\bm{d}_N\in{\mathbb{R}}^M$ and $P_1,\dots,P_N\in{\mathbb{R}}.$ We would like to analyze and understand the dependence of the output $P$ (for “property”) on the input $\bm{d}$ (for “descriptor”) that reflects the specific measurement or calculation. Specifically, we seek an [*interpretable*]{} equation $P=f(\bm{d})$, e.g., an explicit analytical formula, where $P$ depends on a small number of input parameters, i.e., the dimension of the input vector $\bm{d}$ is small. Below, we impose constraints in order to arrive at a method for finding a low-dimensional linear solution for $P=f(\bm{d})$. Later, we introduce nonlinear transformations of the vector $\bm{d}$; dealing with such nonlinearities is the main focus of this paper.
When looking for a linear dependence $P=f(\bm{d})=\bm{d}\cdot\bm{c}$, the simplest approach to the problem is the method of *least squares*. This is the solution of $$\label{eq:ls}
\mathop{\rm arg min}\limits_{\bm{c} \in {\mathbb{R}}^M} \sum_{j=1}^N\Bigl(P_j-\sum_{k=1}^M
d_{j,k} c_k\Bigr)^2= \mathop{\rm arg min}\limits_{\bm{c} \in {\mathbb{R}}^M}
\|\bm{P}-\bm{D}\bm{c}\|_2^2,$$ where $\bm{P}=(P_1,\dots,P_N)^T$ is the column vector of the outputs and $\bm{D}=(d_{j,k})$ is the $(N\times M)$-dimensional matrix of the inputs, and the column vector $\bm{c}$ is to be determined. The equality defines the $\ell_2$ or Euclidean norm ($\|\cdot\|_2$) of the vector $\bm{P}-\bm{D}\bm{c}$. The resulting function would then be just the linear relationship $f(\bm{d})=\langle \bm{d}, \bm{c} \rangle = \sum_{k=1}^M d_k c_k$. It is the function that minimizes the error $
\|\bm{P}-\bm{D}\bm{c}\|_2^2$ among all linear functions of $\bm{d}$. Let us point out a few properties of Eq. . First, the solution of Eq. is given by the explicit formula $\bm{c}=(\bm{D}^T\bm{D})^{-1}\bm{D}^T\bm{P}$ if $\bm{D}^T\bm{D}$ is a non-singular matrix. Second, it is a convex problem; therefore the solution can be found efficiently even in large dimensions (i.e., if $M$ and $N$ are large) and there are many solvers available [@nonlinpro]. Finally, it does not put any constraints on $\bm{c}$ (or $f$) other than that $f$ is linear and minimizes Eq. .
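As a concrete illustration, the closed-form least-squares solution above can be evaluated directly with NumPy; the data below are random stand-ins, not the materials data of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 82, 6                      # N data points, M input features
D = rng.normal(size=(N, M))       # matrix of inputs (one row per data point)
P = rng.normal(size=N)            # vector of outputs

# Normal-equations solution c = (D^T D)^{-1} D^T P
c_normal = np.linalg.solve(D.T @ D, D.T @ P)

# Numerically preferable: a dedicated least-squares solver gives the same minimizer
c_lstsq, *_ = np.linalg.lstsq(D, P, rcond=None)

assert np.allclose(c_normal, c_lstsq)
```

In practice a least-squares solver (or scikit-learn's `LinearRegression`) is preferred over forming $\bm{D}^T\bm{D}$ explicitly, since the latter squares the condition number of the problem.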
A general linear function $f$ of $\bm{d}$ usually also involves an absolute term, i.e., it has the form $f(\bm{d})=c_0+\sum_{k=1}^M d_k c_k$. It is easy to add the absolute term (also called *bias*) to Eq. in the form $$\label{eq:ls_bias}
\mathop{\rm arg min}\limits_{c_0\in{\mathbb{R}},\bm{c}\in {\mathbb{R}}^{M}} \sum_{j=1}^N\Bigl(P_j-
c_0-\sum_{k=1}^M d_{j,k} c_k\Bigr)^2.$$ This coincides with Eq. if we first enlarge the matrix $\bm{D}$ by adding a column full of ones. In the following, we tacitly assume that this modification is included in the implementation and shall not repeat it.
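The ones-column trick amounts to a single line of NumPy (illustrative random data; the first fitted coefficient then plays the role of the bias $c_0$):

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.normal(size=(82, 6))      # stand-in input matrix
P = rng.normal(size=82)           # stand-in outputs

# Prepend a column of ones: its coefficient becomes the bias c0
D1 = np.column_stack([np.ones(len(D)), D])
coef, *_ = np.linalg.lstsq(D1, P, rcond=None)
c0, c = coef[0], coef[1:]
```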
If pre-knowledge is available, it can be (under some conditions) incorporated into Eq. . For example, we might want to prefer those vectors $\bm{c}$, which are small (in some sense). This gives rise to *regularized* problems. In particular, the *ridge regression* $$\label{eq:rr}
\mathop{\rm arg min}\limits_{\bm{c}\in {\mathbb{R}}^M} \left(
\|\bm{P}-\bm{D}\bm{c}\|_2^2+\lambda\|\bm{c}\|^2_2 \right)$$ with a penalty parameter $\lambda>0$ weighs the magnitude of the vector $\bm{c}$ against the error of the fit. The larger $\lambda$, the smaller the minimizing vector $\bm{c}$; the smaller $\lambda$, the better the least-squares fit (the first addend in Eq. ). More specifically, if $\lambda\to 0$, the solutions of Eq. converge to the solution of Eq. ; if $\lambda\to \infty$, the solutions of Eq. tend to zero.
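A minimal sketch of ridge regression and of its two limits, using the closed-form minimizer $\bm{c}=(\bm{D}^T\bm{D}+\lambda\bm{I})^{-1}\bm{D}^T\bm{P}$ (random stand-in data):

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.normal(size=(82, 6))
P = rng.normal(size=82)

def ridge(D, P, lam):
    """Minimizer of ||P - D c||_2^2 + lam * ||c||_2^2."""
    M = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(M), D.T @ P)

c_ls, *_ = np.linalg.lstsq(D, P, rcond=None)
assert np.allclose(ridge(D, P, 1e-10), c_ls, atol=1e-6)   # lambda -> 0: least squares
assert np.linalg.norm(ridge(D, P, 1e10)) < 1e-6           # lambda -> infinity: c -> 0
```

scikit-learn's `Ridge(alpha=lam)` minimizes the same objective.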
Sparsity, NP-hard problems, and convexity of the minimization problem
---------------------------------------------------------------------
The kind of pre-knowledge we discuss next is that we would like $f(\bm{d})$ to depend only on a small number of components of $\bm{d}$. In the notation of Eq. , we would like most of the components of the solution $\bm{c}$ to be zero. Therefore, we denote the number of non-zero coordinates of a vector $\bm{c}$ by $$\label{eq:l0}
\|\bm{c}\|_0=\#\{j:c_j\not=0\}.$$ Here, $\{j:c_j\not=0\}$ stands for the set of all coordinates $j$ such that $c_j$ is non-zero, and $\#\{\ldots\}$ denotes the number of elements of the set $\{\ldots\}$. Thus, $\|\bm{c}\|_0$ is the number of non-zero elements of the vector $\bm{c}$, and it is often called the $\ell_0$ norm of $\bm{c}$. We say that $\bm{c}\in{\mathbb{R}}^M$ is $k$-sparse if $\|\bm{c}\|_0\le k$, i.e., if at most $k$ of its coordinates are non-zero.
When trying to regularize Eq. in such a way that it prefers sparse solutions $\bm{c}$, one has to minimize: $$\label{eq:rr0}
\mathop{\rm arg min}\limits_{\bm{c}\in {\mathbb{R}}^{M}} \left(
\|\bm{P}-\bm{D}\bm{c}\|_2^2+\lambda\|\bm{c}\|_0 \right).$$ This is a rigorous formulation of the problem that we want to solve, but it has a significant drawback: it is computationally infeasible when $M$ is large. This can be easily understood as follows. The naive way of solving Eq. is to look first at all index sets $T\subset\{1,\dots,M\}$ with one element and minimize over vectors supported on such sets, then at all subsets with two, three, and more elements. Unfortunately, their number grows quickly with $M$ and the level of sparsity $k$. It turns out that this naive approach cannot be improved much: the problem is “NP-hard” [@Arora09]. For a “Non-deterministic Polynomial-time hard” problem, a good candidate for the solution can be checked in polynomial time, but the solution itself cannot be found in polynomial time. The basic reason is that Eq. is not convex.
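The naive enumeration just described can be sketched as follows for a fixed sparsity level $k$ (synthetic data; for large $M$ the binomial number of subsets makes exactly this loop infeasible):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
N, M, k = 50, 10, 2
D = rng.normal(size=(N, M))
c_true = np.zeros(M)
c_true[[1, 4]] = [1.5, -2.0]      # a 2-sparse ground truth
P = D @ c_true

def best_k_subset(D, P, k):
    """Exhaustive l0 search: least-squares fit on every k-subset of columns."""
    best = (np.inf, None)
    for T in combinations(range(D.shape[1]), k):
        c, *_ = np.linalg.lstsq(D[:, T], P, rcond=None)
        err = np.sum((P - D[:, T] @ c) ** 2)
        best = min(best, (err, T))
    return best

err, support = best_k_subset(D, P, k)
# recovers the true support {1, 4} with (numerically) zero residual
```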
Methods based on the $\ell_1$ norm
----------------------------------
Reaching a compromise between convexity and promoting sparsity is made possible by the LASSO (Least Absolute Shrinkage and Selection Operator)[@LASSO] approach, in which the $\ell_0$ regularization of Eq. is replaced by the $\ell_1$ norm: $$\label{eq:rr1}
\mathop{\rm arg min}\limits_{\bm{c}\in {\mathbb{R}}^{M}}
\|\bm{P}-\bm{D}\bm{c}\|_2^2+\lambda\|\bm{c}\|_1.$$ The use of the $\ell_1$ norm ($\|\bm{c}\|_1=\sum_k|c_k|$), also known as the “Manhattan” or “taxicab” norm, is crucial here. On the one hand, the optimization problem is convex [@Tibshirani09]. On the other hand, the geometry of the $\ell_1$ unit ball $\{\bm{x}\in{\mathbb{R}}^M:\|\bm{x}\|_1\le 1\}$ shows that it promotes sparsity [@Kutyniok12; @Vybiral15].
Similarly to Eq. , the larger $\lambda>0$, the smaller the $\ell_1$ norm of the solution of Eq. , and vice versa. Actually, there exists a smallest $\lambda_0 > 0$ such that the solution of Eq. is zero. As $\lambda$ falls below this threshold, one or more components of $\bm{c}$ become non-zero.
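This shrinkage behavior is easy to reproduce with scikit-learn's `Lasso` on synthetic stand-in data; note that scikit-learn minimizes $\|\bm{P}-\bm{D}\bm{c}\|_2^2/(2N) + \alpha\|\bm{c}\|_1$, so its `alpha` corresponds to the $\lambda$ of Eq. only up to the $1/(2N)$ convention.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
N, M = 82, 6
D = rng.normal(size=(N, M))       # synthetic stand-in for the feature matrix
c_true = np.array([2.0, 0.0, 0.0, -1.0, 0.0, 0.0])
P = D @ c_true + 0.01 * rng.normal(size=N)

# Small penalty: dense solution.  Large penalty: sparse solution.
weak = Lasso(alpha=0.001).fit(D, P).coef_
strong = Lasso(alpha=0.5).fit(D, P).coef_
assert np.count_nonzero(strong) <= np.count_nonzero(weak)
```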
Compressed sensing
------------------
The minimization problem in Eq. is an approximation of the problem in Eq. , and their respective results do not necessarily coincide. The relation between these two minimization problems has been studied intensively [@Greenshtein06; @deGeer08; @deGeer11], and we summarize the results developed in the context of the recent theory of compressed sensing (CS) [@Donoho06; @Kutyniok12; @Vybiral15]. These will be useful for justifying the implementation of our method (see below). The literature on CS is extensive, and we believe that the following notes are a useful compendium.
We are especially interested in conditions on $\bm{P}$ and $\bm{D}$ which guarantee that the solutions of Eqs. and coincide, or at least do not differ much. We concentrate on a simplified setting, namely attempting to find the $\bm{c}\in{\mathbb{R}}^M$ fulfilling $\bm{Dc}=\bm{P}$ with the fewest non-zero entries. This is written as $$\label{eq:CS1}
\mathop{\rm arg min}\limits_{\bm{c}:\bm{D}\bm{c}=\bm{P}}\|\bm{c}\|_0 .$$ As mentioned above, this minimization is computationally infeasible, and we are interested in finding the same solution by its convex reformulation: $$\label{eq:CS2}
\mathop{\rm arg min}\limits_{\bm{c}:\bm{D}\bm{c}=\bm{P}}\|\bm{c}\|_1.$$ The answer can be characterized by the notion of the Null Space Property (NSP) [@NSP09]. Let $\bm{D}\in {\mathbb{R}}^{M\times N}$ and let $\Omega\in\{1,\dots,N\}$. Then $\bm{D}$ is said to have the NSP of order $\Omega$ if $$\label{eq:def:NSP}
\sum_{j\in T}|v_j|<\sum_{j\not\in T}|v_{j}|\quad \forall \bm{v}\not=0 \
\text{with} \ \bm{D}\bm{v}=0\ \forall T\subset\{1,\dots,N\}\
\text{with}\ \#T\le \Omega.$$ It is shown [@NSP09] that every $\Omega$-sparse vector $\bm{x}$ is the unique solution of Eq. with $\bm{P}=\bm{D}\bm{x}$ if, and only if, $\bm{D}$ has the NSP of order $\Omega$. As a consequence, the minimization of the $\ell_1$ norm as given by Eq. recovers the unknown $\Omega$-sparse vectors ${\bm c}\in{\mathbb{R}}^N$ from $\bm{D}$ and $\bm{P}=\bm{D}\bm{c}$ only if $\bm{D}$ satisfies the NSP of order $\Omega$. Unfortunately, given a specific matrix $\bm{D}$, it is not easy to check whether it indeed has the NSP.
However, the CS analysis gives us some guidance for the admissible dimension $M$ of $\bm{D}.$ It is known [@NSP09; @Vybiral15] that there exists a constant $C>0$ such that whenever $$\label{eq:CS3}
M\ge C \ \! \Omega \ \! \ln(N)$$ then there exist matrices $\bm{D}\in {\mathbb{R}}^{M\times N}$ with NSP of order $\Omega$. Actually, a random matrix with these dimensions satisfies the NSP of order $\Omega$ with high probability. On the other hand, this bound is tight, i.e., if $M$ falls below this bound no stable and robust recovery of $\bm{c}$ from $\bm{D}$ and $\bm{P}$ is possible, and this is true for every matrix $\bm{D}\in{\mathbb{R}}^{M\times N}.$ Later on, $M$ will correspond to the number of AB compounds considered and will be fixed to $M=82$. Furthermore, $N$ will be the number of input features, and Eq. tells us that we can expect reasonable performance of $\ell_1$-minimization only if their number is at most of the order $e^{\frac{M}{C\Omega}}.$
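The noiseless recovery problem of Eq. ($\min\|\bm{c}\|_1$ subject to $\bm{Dc}=\bm{P}$, known as basis pursuit) can be recast as a linear program by splitting $\bm{c}=\bm{u}-\bm{v}$ with $\bm{u},\bm{v}\ge 0$. The sketch below uses SciPy's `linprog` and a random Gaussian matrix, which at these dimensions satisfies the NSP with high probability, so the 3-sparse vector is recovered exactly from far fewer measurements than unknowns (all numbers here are our illustrative choices).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
M, N, k = 30, 60, 3               # M measurements, N unknowns, sparsity k
D = rng.normal(size=(M, N))
c_true = np.zeros(N)
c_true[[7, 21, 40]] = [1.0, -2.0, 0.5]
P = D @ c_true

# min ||c||_1  s.t.  D c = P,  via  c = u - v,  u, v >= 0:
# minimize sum(u) + sum(v) subject to [D, -D] [u; v] = P
res = linprog(c=np.ones(2 * N),
              A_eq=np.hstack([D, -D]), b_eq=P,
              bounds=(0, None), method="highs")
c_rec = res.x[:N] - res.x[N:]
assert np.allclose(c_rec, c_true, atol=1e-6)   # exact recovery of the sparse vector
```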
Furthermore, the CS analysis sets forth that the NSP is surely violated already for $\Omega=2$ if any two columns of $\bm{D}$ are multiples of each other. In general, one can expect the solution of Eq. to differ significantly from the solution of Eq. if any two columns of $\bm{D}$ are highly correlated or, more generally, if any column of $\bm{D}$ lies close to the span of a small number of its other columns.
Historically, LASSO appeared first, in 1996, as a machine-learning (ML) technique able to provide stable solutions to underdetermined linear systems thanks to the $\ell_1$ regularization. Ten years later, the CS theory appeared, sharing with LASSO the concept of $\ell_1$ regularization, but with the emphasis on the reconstruction of a possibly noisy signal $\bm{c}$ from a minimal number of observations. CS can be considered the theory of LASSO, in the sense that it gives conditions (on the matrix $\bm{D}$) under which it is reasonable to expect that the $\ell_1$ and $\ell_0$ regularizations coincide. If the data and a, typically overcomplete, basis set are known, then LASSO is applied as an ML technique, where the task is to identify the smallest number of basis vectors that yield a model of a given accuracy. If the setting is such that a minimal set of measurements should be performed to reconstruct a signal, as in quantum-state tomography [@Eisert10], then CS tells how the data should be acquired. As will become clear in the following, our situation is somewhat intermediate, in the sense that we have the data (here, the energies of the 82 materials), but we have a certain freedom to select the basis set for the model. Therefore, we use CS to justify the construction of the matrix $\bm{D}$, and we adopt LASSO as a means to find the low-dimensional basis set.
A simple LASSO example: the energy differences of crystal structures
--------------------------------------------------------------------
In this and the following sections, we walk through the application of LASSO to a specific materials-science problem, in order to introduce step by step a systematic approach for finding simple analytical formulas, built from simple input parameters (the [*primary features*]{}), that approximate physical properties of interest. We aim at predicting $\Delta E_\textrm{AB}$, the difference in DFT-LDA energy between ZB and RS structures in a set of 82 octet binary semiconductors. Calculations were performed using the all-electron full-potential code FHI-aims [@Blum09] with highly accurate basis sets, $\bf k$-meshes, integration grids, and scaled ZORA [@scaledZORA] for the relativistic treatment.
The order of the two atoms in the chemical formula AB is chosen such that element A is the cation, i.e., it has the smallest Mulliken electronegativity, $\textrm{EN}\! =\! -(\textrm{IP}+\textrm{EA})/2$. IP and EA are the ionization potential and electron affinity of the free, isolated, spinless, and spherically symmetric atom. As noted, the calculation of the descriptor must involve less intensive calculations than those needed for the evaluation of the property to be predicted. Therefore, we consider only properties of the free atoms A and B that build the binary material, and properties of the gas-phase dimers. In practice, we identified the following [*primary features*]{}, exemplified for atom A: the ionization potential IP(A), the electron affinity EA(A) [@IPEA], H(A) and L(A), i.e., the energies of the highest-occupied and lowest-unoccupied Kohn-Sham (KS) levels, and $r_s$(A), $r_p$(A), and $r_d$(A), i.e., the radii where the radial probability densities of the valence $s$, $p$, and $d$ orbitals are maximal. The same features were used for atom B. These primary features were chosen because it has long been known that the relative ionization potentials, the atomic radii, and $sp$-hybridization govern the bonding in these materials. Consequently, some authors [@Bloch73; @Phillips78; @Phillips79] already recognized that certain combinations of $r_s$ and $r_p$ – called $r_\sigma$ and $r_\pi$ (see below) – may be crucial for constructing a descriptor that predicts the RS/ZB classification. Note that in those works just the sign of $\Delta E$ was predicted, while the model we present here targets a quantitative prediction of $\Delta E$. In contrast to previous work, we analyze how LASSO performs in this task and how much better the corresponding description is. We should also note that the selected set of atomic primary features contains redundant physical information, namely H(A) and L(A) contain similar information as IP(A) and EA(A). In particular, H(A) $-$ L(A) is correlated, on physical grounds, with IP(A) $-$ EA(A). However, since the values of these two differences are not the same [@IPEA], all four features were included. As will be shown below, and in particular in Appendix \[A:A\], the two pairs of features are not interchangeable.
We start with a simplified example, in order to show in an easily reproducible way the performance of LASSO in solving our problem. To this end, we describe each compound AB by the vector of the following six quantities: $$\label{eq:featspace6}
\bm{d}_{\textrm{AB}}=(r_s(\textrm{A}), r_p(\textrm{A}), r_d(\textrm{A}),
r_s(\textrm{B}), r_p(\textrm{B}), r_d(\textrm{B})).$$ Collecting these data gives an $82\times 6$ matrix $\bm{D}$ – one row for each compound. We standardize $\bm{D}$ to have zero mean and variance 1, i.e., we subtract from each column its mean and divide by its standard deviation. Standardization of the data is common practice [@Tibshirani09], aimed at controlling the numerical stability of the solution. However, when non-linear combinations of the features and cross-validation are involved, the standardization has to be performed carefully (see below). The energy differences are stored in a vector $\bm{P}\in{\mathbb{R}}^{82}.$
We are aiming at two goals:
- On the one hand, we would like to minimize the *mean-squared error* (MSE), a measure of the quality of the approximation, given by $$\frac{1}{N}\sum_{j=1}^{N}\Bigl(P_j-\sum_{k=1}^M d_{j,k} c_k\Bigr)^2.$$ In this tutorial example, the size $N$ of the dataset is 82 and, with the above choice of Eq. , $M=6$.
- On the other hand, we prefer sparse solutions $\bm{c}^\dag$ with small $\Omega$, as we like to explain the dependence of the energy difference $\bm{P}$ based on a low-dimensional (small $\Omega$) [*descriptor*]{} $\bm{d}^\dag$.
These two goals are obviously closely connected, yet go against each other. The larger the coefficient $\lambda$ in Eq. , the sparser the solution $\bm{c}$; the smaller $\lambda$, the smaller the mean-squared error. The choice of $\lambda$ is at our disposal and lets us weigh the sparsity of $\bm{c}$ against the size of the MSE.
In the following, we will rather report the [*root mean square error*]{} (RMSE) as a quality measure of a model. The reason is that the RMSE has the same units as the predicted quantities and is therefore easier to understand in absolute terms. Specifically, when a sparse solution $\bm{c}^\dag$ of Eq. is mentioned, with $\Omega$ non-zero components, we report the RMSE: $$\sqrt{ \frac{1}{82} \sum_{j=1}^{82} \Bigl( P_j - \sum_{k=1}^\Omega
d_{j,k}^\dag c_k^* \Bigr)^2},$$ where $\bm{D}^\dag = (d_{j,k}^\dag)$ contains the columns corresponding to the non-zero components of $\bm{c}^\dag$ and $\bm{c}^* = (c_k^*)$ is the solution of the least square fit: $$\mathop{\rm arg min}\limits_{\bm{c}^*\in {\mathbb{R}}^{\Omega}}
\|\bm{P}-\bm{D}^\dag\bm{c}^*\|_2^2.$$
Let us first state that the MSE obtained when setting $\bm{c}=0$ is 0.1979 eV$^2$, which corresponds to a RMSE of 0.4449 eV. This means that if one “predicts” $\Delta E=0$ for all the materials, the RMSE of such “model” is 0.4449 eV. This number acts as a baseline for judging the quality of new models.
In order to run this and all the numerical tests in this paper, we have written [Python]{} scripts and used the linear-algebra solvers, including the LASSO solver, from the scikit-learn ([sklearn]{}) library (see Appendix \[A:D\] for the actual script used for this section). We calculate the coefficients for a decreasing sequence of one hundred $\lambda$ values, starting from the smallest $\lambda = \lambda_1$ such that the solution is $\bm{c}=0$. $\lambda_1$ is determined by $N \lambda_1 =
\mathop{\rm max}\limits_{i} |\langle \bm{d}_i \,,
\bm{P}\rangle|$ [@Tibshirani09], where $\bm{d}_i$ is the $i$th column of $\bm{D}$. The sequence of $\lambda$ values is constructed on a log scale, and the smallest $\lambda$ value is set to $\lambda_{100} = 0.001 \lambda_{1}$. In our case, $\lambda_{1}=0.3169$ and the first non-zero component of $\bm{c}$ is the second one, corresponding to $r_p(\textrm{A})$. At $\lambda_{19} = 0.0903$, the second non-zero component of $\bm{c}$ appears, corresponding to $r_d(\textrm{B})$ (see Eq. ). For decreasing $\lambda$, more and more components of $\bm{c}$ become non-zero until, from the column corresponding to $\lambda_{67}=0.0032$ down to $\lambda_{100}=0.0003$, all entries are occupied, i.e., no sparsity is left. The method clearly suggests $r_p(\textrm{A})$ to be the most useful feature for predicting the energy difference. Indeed, by enumerating all six linear (least-squares) models constructed with only one of the components of the vector $\bm{d}$ at a time, the smallest RMSE is given by the model based on $r_p(\textrm{A})$. We conclude that LASSO really finds the coordinate best describing (in a linear way) the dependence of $P$ on $\bm{d}$.
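A minimal sketch of this $\lambda$-grid construction with scikit-learn's `lasso_path` (random stand-in data in place of the $82\times 6$ materials matrix; the actual script is in Appendix \[A:D\]):

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(6)
N, M = 82, 6
D = rng.normal(size=(N, M))
D = (D - D.mean(axis=0)) / D.std(axis=0)      # standardize the columns
P = rng.normal(size=N)

# Smallest lambda with all-zero solution: N * lambda_1 = max_i |<d_i, P>|
lam1 = np.max(np.abs(D.T @ P)) / N
lambdas = np.logspace(np.log10(lam1), np.log10(1e-3 * lam1), 100)

# Coefficients along the whole path: one column of `coefs` per lambda
_, coefs, _ = lasso_path(D, P, alphas=lambdas)
assert np.allclose(coefs[:, 0], 0)            # at lambda_1 the solution is zero
```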
Let us now proceed to the best pair. The second component to become non-zero corresponds to $r_d(\textrm{B})$. The RMSE for the pair $(r_p(\textrm{A}), r_d(\textrm{B}))$ is 0.2927 eV. An exhaustive search over all fifteen pairs of elements of $\bm{d}_{\textrm{AB}}$ reveals that the pair $(r_p(\textrm{A}), r_d(\textrm{B}))$ indeed yields the lowest (least-squares) RMSE among all possible pairs in $\bm{d}_{\textrm{AB}}$.
The outcome for the best triplet reveals the weak point of the method. The third component of $\bm{c}$ to become non-zero corresponds to $r_p(\textrm{B})$, and the RMSE of the triplet $(r_p(\textrm{A}),r_p(\textrm{B}),r_d(\textrm{B}))$ is 0.2897 eV, while an exhaustive search over all triplets suggests $(r_s(\textrm{B}), r_p(\textrm{A}), r_p(\textrm{B}))$ to be optimal, with an RMSE of 0.2834 eV, i.e., some 2% better. The reason is that Eq. is only a convex proxy of the actual problem in Eq. . Their solutions have similar performance in terms of RMSE, but they do not have to coincide. Now let us compare the norm $\|\bm{c}\|_1$ of both triplets, for $\bm{c}^*$ obtained by least-squares regression with standardized columns. For the optimal triplet, $(r_s(\textrm{B}), r_p(\textrm{A}), r_p(\textrm{B}))$, $\|\bm{c}\|_1 = 3.006$, and for $(r_p(\textrm{A}),r_p(\textrm{B}),r_d(\textrm{B}))$, $\|\bm{c}\|_1 = 0.5843$. Here we observe that the first triplet needs large coefficients for a small $\|\bm{P}-\bm{D}\bm{c}\|_2^2$, while the second one provides a better compromise between a small $\|\bm{P}-\bm{D}\bm{c}\|_2^2$ and a small $\lambda\|\bm{c}\|_1$ in Eq. . The reason for the large coefficients of the former is that the 82-dimensional vectors whose components are the values of $r_s(\textrm{B})$ and $r_p(\textrm{B})$ for all the materials (in the same order) are almost parallel (their Pearson correlation coefficient is 0.996) [@CorrBerkson]. In order to understand this, let us have a look at both least-squares models: $$\begin{aligned}
\Delta E_{\ell_0} = + 1.272 r_s(\textrm{B}) - 0.296 r_p(\textrm{A}) - 1.333
r_p(\textrm{B}) + 0.106\\
\Delta E_{\ell_1} = - 0.337 r_p(\textrm{A}) - 0.044 r_p(\textrm{B}) - 0.097
r_d(\textrm{B}) + 0.106. \end{aligned}$$ In the optimal $\ell_0$ model, the highly correlated vectors $r_s(\textrm{B})$ and $r_p(\textrm{B})$ appear approximately as the difference $r_s(\textrm{B}) - r_p(\textrm{B})$ (their coefficients in the linear expansion have almost the same magnitude, $\sim 1.3$, and opposite sign). The difference of these two almost collinear vectors of the same magnitude (due to the applied standardization) is a vector much shorter than both $r_s(\textrm{B})$ and $r_p(\textrm{A})$, but also much shorter than $r_p(\textrm{B})$ and $P$; therefore $r_s(\textrm{B}) - r_p(\textrm{B})$ needs to be multiplied by a relatively large coefficient in order to be comparable with the other vectors. Indeed, while the coefficient of $r_s(\textrm{B}) - r_p(\textrm{B})$ is about 1.3, all the other coefficients, in particular in the $\ell_1$ solution, are much smaller. Such a large coefficient is penalized by the $\lambda\|\bm{c}\|_1$ term, and therefore a triplet that is sub-optimal (according to the $\ell_0$-regularized minimization) is selected. This example shows how, with highly correlated features, $\lambda\|\bm{c}\|_1$ may not be a good approximation of $\lambda\|\bm{c}\|_0$.
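The geometric effect described above can be reproduced in a few lines: after standardization, two nearly collinear columns can only represent a target of ordinary size lying along their (short) difference direction by means of large, opposite coefficients (illustrative synthetic data, not the materials columns).

```python
import numpy as np

n = 82
t = np.linspace(0.0, 1.0, n)
x1 = t
x2 = t + 0.01 * np.sin(7 * np.pi * t)          # nearly collinear with x1

def standardize(v):
    return (v - v.mean()) / v.std()

X = np.column_stack([standardize(x1), standardize(x2)])
corr = np.corrcoef(X[:, 0], X[:, 1])[0, 1]     # very close to 1

# Target with the norm of a standardized column, along the short difference
d = X[:, 0] - X[:, 1]
y = d / np.linalg.norm(d) * np.sqrt(n)

c, *_ = np.linalg.lstsq(X, y, rcond=None)
# The fit is exact, but the l1 norm of c is huge, which LASSO penalizes
```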
Repeating the LASSO procedure for the matrix $\bm{D}$ consisting of only the elements of the first triplet and then of the second triplet shows that only for $\lambda \leq 0.00079$ does the first triplet have a smaller LASSO error, since only then is the contribution of the large coefficients to the LASSO error sufficiently small. When $\bm{D}$ contains all features, at this $\lambda$ all coefficients are already non-zero.
For the sake of completeness, let us just mention that the RMSE for the pair of descriptors defined by John and Bloch [@Bloch74] as $$\begin{aligned}
r_\sigma &=& \big|\big(r_p(\textrm{A})+r_s(\textrm{A})\big)-
\big(r_p(\textrm{B})+r_s(\textrm{B})\big) \big| \\
r_\pi &=&
\big|r_p(\textrm{A})-r_s(\textrm{A})\big|+\big|r_p(\textrm{B})-r_s(\textrm{B})\big|\end{aligned}$$ is 0.3055 eV. For predicting the energy difference in a linear way, the pair of descriptors $(r_p(\textrm{A}),r_d(\textrm{B}))$ is already slightly better (by 4%) than this.
A more complex LASSO example: non-linear mapping
------------------------------------------------
Until this point, we have treated only linear models, i.e., where the function $P=f(\bm{d})$ is a linear combination of the components of the input vector $\bm{d}$. This is a clear limitation. In this section, we describe how one can easily introduce non-linearities without losing the simplicity of the linear-algebra solution. To this purpose, the vector $\bm{d}$ is mapped by a (in general non-linear) function $\Phi:{\mathbb{R}}^{M_1}\to {\mathbb{R}}^{M_2}$ into a higher-dimensional space, and only then are the linear methods applied in ${\mathbb{R}}^{M_2}$. This idea is well known from $\ell_2$-regularized [*kernel*]{} methods [@Hofmann08], where the so-called kernel trick is exploited. In our case, we aim at explicitly defining and evaluating a higher-dimensional mapping of the initial features, where each new dimension is a non-linear function of one or more initial features.
We stay with the case of 82 compounds. To keep the presentation simple, we leave out the inputs $r_d(\textrm{A})$ and $r_d(\textrm{B})$. Hence, every compound is first described by the following four [*primary features*]{}, $$\bm{d}_{\textrm{AB}}=(r_s(\textrm{A}),r_p(\textrm{A}),r_s(\textrm{B}),r_p(\textrm{B})).$$ We will construct a simple but non-trivial non-linear mapping $\Phi:{\mathbb{R}}^4\to
{\mathbb{R}}^{M^*}$ (with $M^*$ to be defined) and apply LASSO afterwards. The construction of $\Phi$ involves some of our pre-knowledge; for example, on dimensional grounds, we expect that expressions like $r_s(\textrm{A})^2-r_p(\textrm{A})$ have no physical meaning and should be avoided. Therefore, in practice, we allow for sums and differences only of quantities with the same units. We hence consider the 46 features listed in Table \[T:featspace\_small\]: the 4 primary plus 42 [*derived*]{} features, building $\Phi(\bm{d}_{AB})$, represented by the matrix $\bm{D}$.
Columns of $\bm{D}$ Description Typical formula
--------------------- ------------------------------------------------ -----------------------------------------------------
1-4 Primary features $r_s(\textrm{A}),r_p(\textrm{A}),r_s(B),r_p(B)$
5-16 All ratios of all pairs of $r$’s $r_s(\textrm{A})/r_p(\textrm{A})$
17-22 Differences of pairs $r_s(\textrm{A})-r_p(\textrm{A})$
23-34 All differences divided by the remaining $r$’s $(r_s(\textrm{A})-r_p(\textrm{A}))/r_s(B)$
35-40 Absolute values of differences $|r_s(\textrm{A})-r_p(\textrm{A})|$
41-43 Sums of absolute values of differences
with no $r$ appearing twice $|r_s(\textrm{A})-r_p(\textrm{A})|+|r_s(B)-r_p(B)|$
44-46 Absolute values of sums of differences
with no $r$ appearing twice $|r_s(\textrm{A})-r_p(\textrm{A})+r_s(B)-r_p(B)|$
: Definition of the feature space for the tutorial example described in section II.E. \[T:featspace\_small\]
The descriptors of John and Bloch [@Bloch74], $r_\pi$ and $r_\sigma$, are both included in this set, with indexes 41 and 46, respectively. The data are standardized, so that each of the 46 columns of $\bm{D}$ has mean zero and variance 1. Note that for columns 5-46, the standardization is applied only after the analytical function of the primary features is evaluated, i.e., for physical consistency, the primary features enter the formula unshifted and unscaled.
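The 46-column feature space of Table \[T:featspace\_small\] can be generated programmatically. The sketch below builds the feature formulas with `itertools` from the four primary radii (the array names and the dictionary layout are our illustrative choices) and checks that the count comes out to 46.

```python
import numpy as np
from itertools import combinations, permutations

def build_features(r):
    """r: dict with keys 'rsA', 'rpA', 'rsB', 'rpB', each an array over
    all compounds.  Returns a dict mapping feature formula -> column."""
    keys = list(r)
    f = {k: r[k] for k in keys}                          # 4 primary features
    for a, b in permutations(keys, 2):                   # 12 ratios
        f[f"{a}/{b}"] = r[a] / r[b]
    pairs = list(combinations(keys, 2))
    for a, b in pairs:                                   # 6 differences
        f[f"{a}-{b}"] = r[a] - r[b]
    for a, b in pairs:                                   # 12 differences / remaining r
        for k in keys:
            if k not in (a, b):
                f[f"({a}-{b})/{k}"] = (r[a] - r[b]) / r[k]
    for a, b in pairs:                                   # 6 absolute differences
        f[f"|{a}-{b}|"] = np.abs(r[a] - r[b])
    # 3 pairings of the four radii into two disjoint pairs, used twice (3 + 3)
    matchings = [(("rsA", "rpA"), ("rsB", "rpB")),
                 (("rsA", "rsB"), ("rpA", "rpB")),
                 (("rsA", "rpB"), ("rpA", "rsB"))]
    for (a, b), (c, d) in matchings:
        f[f"|{a}-{b}|+|{c}-{d}|"] = np.abs(r[a] - r[b]) + np.abs(r[c] - r[d])
        f[f"|{a}-{b}+{c}-{d}|"] = np.abs(r[a] - r[b] + r[c] - r[d])
    return f

rng = np.random.default_rng(7)
radii = {k: rng.uniform(0.5, 2.0, size=82) for k in ("rsA", "rpA", "rsB", "rpB")}
features = build_features(radii)
assert len(features) == 46        # 4 + 12 + 6 + 12 + 6 + 3 + 3
```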
  $\lambda$   $\ell_0$   Feature   Action
----------- --- ----------------------------------------------------------------------- --------
1 $r_p(\textrm{A})$ +
0.257 2 $|r_p(\textrm{A})-r_s(\textrm{A})|+|r_p(\textrm{B})-r_s(\textrm{B})|$ +
0.240 3 $r_s(\textrm{B})/r_p(\textrm{A})$ +
0.147 4 $r_s(\textrm{A})/r_p(\textrm{A})$ +
0.119 3 $|r_p(\textrm{A})-r_s(\textrm{A})|+|r_p(\textrm{B})-r_s(\textrm{B})|$ $-$
0.111 4 $r_s(\textrm{A})$ +
0.104 3 $r_p(\textrm{A})$ $-$
0.084 4 $|r_s(\textrm{B})-r_p(\textrm{B})|$ +
0.045 5 $(r_s(\textrm{B})-r_p(\textrm{B}))/r_p(\textrm{A})$ +
0.034 6 $|r_s(\textrm{A})-r_p(\textrm{B})|$ +
0.028 5 $r_s(\textrm{A})$ $-$
: Bookkeeping of (decreasing) $\lambda$ values at which either a new feature gets non-zero coefficients (marked by a ‘+’ in column Action) or a feature passes from non-zero to zero coefficient (marked by a ‘$-$’). The value of $\ell_0$ counts the non-zero coefficients at each reported $\lambda$ value. In a rather non-intuitive fashion, the number of non-zero coefficients fluctuates with decreasing $\lambda$, rather than monotonically increasing; this is an effect of linear correlations in the feature space. \[T:Bookkeeping\]
Applying LASSO gives the result shown in Table \[T:Bookkeeping\], where we list the features as they appear when $\lambda$ decreases. At the point where we truncated the list, $\lambda = 0.028$, the features with non-zero coefficients are: $$1: \frac{r_s(\textrm{A})}{r_p(\textrm{A})}, 2:
\frac{r_s(\textrm{B})}{r_p(\textrm{A})}, 3:
\frac{r_s(\textrm{B})-r_p(\textrm{B})}{r_p(\textrm{A})}, 4:
|r_s(\textrm{A})-r_p(\textrm{B})|\ \text{and}\ 5:
|r_s(\textrm{B})-r_p(\textrm{B})|.$$ Let us remark that features 2 and 3, as well as features 3 and 5, are strongly correlated, with covariance greater than 0.8 for both pairs [@transcorr]. This makes it difficult for LASSO to identify the right features.
With these five descriptor candidates, we ran an exhaustive $\ell_0$ test over all the $5 \cdot 4/2=10$ pairs. We discovered that the pair of the second and third selected features, i.e., $$\label{eq:pair}
\frac{r_s(\textrm{B})}{r_p(\textrm{A})},\quad
\frac{r_s(\textrm{B})-r_p(\textrm{B})}{r_p(\textrm{A})}$$ achieves the smallest RMSE, $0.1520$ eV, improving on John and Bloch’s descriptors [@Bloch74] by a factor of 2. We also ran an exhaustive $\ell_0$ search for the optimal pair over all 8 features that were singled out in the LASSO sweep, i.e., including also $r_p(\textrm{A})$, $r_\pi$, and $r_s(\textrm{A})$. The best pair was still the one in Eq. . We then also performed a full search over the 46 $\cdot$ 45/2 = 1035 pairs, where the pair in Eq. again turned out to be the one yielding the lowest RMSE. We conclude that even though LASSO is not able to directly find the optimal $\Omega$-dimensional descriptor, it can efficiently be used for filtering the feature space and singling out the “most important” features, after which the optimal descriptor is identified by enumeration over the subset selected by LASSO.
The numerical test discussed in this section shows the following:
- A promising strategy is to build an even larger feature space, by combining the [*primary features*]{} via analytic formulas. In the next section, we walk through this strategy for systematically constructing a large feature space.
- LASSO cannot always find the best $\Omega$-dimensional descriptor as the first $\Omega$ columns of $\bm{D}$ that acquire non-zero coefficients as $\lambda$ decreases. This is understood to be caused by features that are (nearly) linearly correlated. However, an efficient strategy emerges: first, use LASSO to extract a number $\Theta > \Omega$ of “relevant” components, namely the first $\Theta$ columns of $\bm{D}$ with non-zero coefficients found when decreasing $\lambda$; then, perform an exhaustive search over all the $\Omega$-tuples that are subsets of the $\Theta$ extracted columns. The latter is in practice the problem formulated in Eqs. and . In the next section, we formalize this strategy, which we henceforth call LASSO+$\ell_0$, because it combines LASSO ($\ell_1$) and $\ell_0$ optimization.
The feature-space construction and LASSO+$\ell_0$ strategies presented in the next section are essentially those employed in our previous paper [@Ghiringhelli2015]. The purpose of this extended presentation is to describe a general approach for the solution of diverse problems, where the only requisite is that the set of basic ingredients (the [*primary features*]{}) is known. As a concluding remark to this section, we note that compressed sensing and LASSO were successfully demonstrated to help solve quantum-mechanics and materials-science problems in Refs. . In all those papers, an $\ell_1$-based optimization was adopted to select, from a well-defined set of functions (in some sense the “natural basis set” for the specific problem), a minimal subset of “modes” that maximally contribute to an accurate approximation of the property under consideration. In our case, the application of the $\ell_1$ (and subsequent $\ell_0$) optimization must be preceded by the construction of a basis set, or feature space, for which a construction strategy is not at all [*a priori*]{} evident.\
In the numerical tests discussed up to this point, we have always looked for the low-dimensional model that minimizes the square error (the square of the $\ell_2$ norm of the fitting residual). Another quantity of physical relevance that one may want to minimize is the maximum absolute error (MaxAE) of the fit. The corresponding norm is called the infinity norm and, for a vector $\bm{x}$, it is written as $\|\bm{x}\|_\infty= \max_k |x_k|$. The minimization problem of Eq. then becomes $$\label{eq:rrinf}
\mathop{\rm arg min}\limits_{\bm{c}\in {\mathbb{R}}^{M}}
\|\bm{P}-\bm{D}\bm{c}\|_\infty+\lambda\|\bm{c}\|_1 \ .$$ This is still a convex problem, analogous to that of Eq. . We have looked for the model that gives the lowest MaxAE, starting from the feature space of size 46 defined in Table \[T:featspace\_small\].
  -------------------------------------------------------------------------------------------------------------
  $\Omega$   Norm            RMSE (eV)   MaxAE (eV)   Descriptor
  ---------- --------------- ----------- ------------ ----------------------------------------------------------
  1          $\ell_2$        0.32        1.93         $r_p(\textrm{A})$

  1          $\ell_\infty$   0.36        1.70         $r_s(\textrm{A})/r_s(\textrm{B})$

  2          $\ell_2$        0.15        0.42         $r_s(\textrm{B})/r_p(\textrm{A}),\
                                                      (r_s(\textrm{B})-r_p(\textrm{B}))/r_p(\textrm{A})$

  2          $\ell_\infty$   0.15        0.42         $r_s(\textrm{B})/r_p(\textrm{A}),\
                                                      (r_s(\textrm{B})-r_p(\textrm{B}))/r_p(\textrm{A})$
  -------------------------------------------------------------------------------------------------------------
: Comparison of descriptors selected by minimizing the $\ell_2$-norm of the fitting function (the usual LASSO problem) or the $\ell_\infty$ (maximum norm) over the feature space described in Table \[T:featspace\_small\]. Reported is also the performance of the models in terms of RMSE and MaxAE. \[T:l2lmax\]
For the specific example presented here, we find (see Table \[T:l2lmax\]) that the 2D model is the same for both settings, i.e., the model that minimizes the RMSE also minimizes the MaxAE. This is, of course, not necessarily true in general. In fact, the 1D model that minimizes the RMSE differs from the 1D model that minimizes the MaxAE.
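The $\ell_\infty$ problem of Eq. can be cast as a linear program by introducing a scalar bound $t \ge |(\bm{P}-\bm{D}\bm{c})_i|$ and slack variables $u_j \ge |c_j|$. Below is a sketch using scipy's `linprog` (assuming scipy is available; the data are synthetic, and this is not the solver used in this work):

```python
import numpy as np
from scipy.optimize import linprog

def linf_lasso(D, P, lam):
    """Solve  min_c ||P - D c||_inf + lam * ||c||_1  as a linear program.
    Variables: c (m), slack u >= |c| (m), and t >= max |residual| (1)."""
    n, m = D.shape
    obj = np.concatenate([np.zeros(m), lam * np.ones(m), [1.0]])
    A = np.block([
        [ D,         np.zeros((n, m)), -np.ones((n, 1))],   #  Dc - t <= P
        [-D,         np.zeros((n, m)), -np.ones((n, 1))],   # -Dc - t <= -P
        [ np.eye(m), -np.eye(m),        np.zeros((m, 1))],  #  c - u <= 0
        [-np.eye(m), -np.eye(m),        np.zeros((m, 1))],  # -c - u <= 0
    ])
    b = np.concatenate([P, -P, np.zeros(2 * m)])
    bounds = [(None, None)] * m + [(0, None)] * (m + 1)
    res = linprog(obj, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x[:m], res.x[-1]   # coefficients, achieved max |residual|

rng = np.random.default_rng(2)
D = rng.normal(size=(40, 6))
P = 1.5 * D[:, 2] + 0.01 * rng.normal(size=40)
c, maxae = linf_lasso(D, P, lam=0.01)
print(np.round(c, 2), maxae)
```

At the optimum, $t$ is pressed down onto the largest absolute residual, so the returned scalar is exactly the MaxAE of the fitted model.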
Generation of a feature space
=============================
For the systematic construction of the feature space, we first divide the [*primary features*]{} into groups according to their physical meaning. In particular, a necessary condition is that the elements of each group are expressed in the same units. We start from atomic features, see Table \[T:featspace0\].
  -----------------------------------------------------------------------------------------------------------------------------
  ID     Description                                             Symbols                                                    \#
  ------ ------------------------------------------------------- ---------------------------------------------------------- ---
  $A1$   Ionization potential (IP) and electron affinity (EA)    IP(A), EA(A), IP(B), EA(B)                                 4

  $A2$   Highest occupied (H) and lowest unoccupied (L)          H(A), L(A), H(B), L(B)                                     4
         Kohn-Sham levels

  $A3$   Radius at the max. value of the $s$, $p$, and $d$       $r_s(\textrm{A})$, $r_p(\textrm{A})$, $r_d(\textrm{A})$,   6
         valence radial probability density                      $r_s(\textrm{B})$, $r_p(\textrm{B})$, $r_d(\textrm{B})$
  -----------------------------------------------------------------------------------------------------------------------------

  : Primary ([*atomic*]{}) features, divided into groups of quantities expressed in the same units. \[T:featspace0\]
Next, we define the combination rules. We note here that building algebraic functions over a set of input variables (in our case, the [*primary features*]{}) by using a defined dictionary of algebraic (unary and binary) operators, and finding the optimal function with respect to a given cost functional, is the strategy of [*symbolic regression*]{} [@Koza98]. In this field of statistical learning, the optimal algebraic function is searched via an evolutionary algorithm, where the analytic expression is evolved by replacing parts of the test functions with more complex functions. In other words, in symbolic regression, the evolutionary algorithm guides the construction of the algebraic functions of the primary features. In our case, we borrow from symbolic regression the idea of constructing functions by combining ‘building blocks’ in more and more complex ways, but the selection of the optimal function (in our language, the descriptor) is determined by the LASSO+$\ell_0$ algorithm over the whole set of generated functions, [*after*]{} the set of candidate functions has been generated.
Our goal is to create “grammatically correct” combinations of the [*primary features*]{}. This means that, besides applying the usual syntactic rules of algebra, we add a physically motivated constraint, i.e., we exclude linear combinations of inhomogeneous quantities, such as “IP + $r_s$” or “$r_s$ + $r_p^2$”. In practice, quantities are inhomogeneous when they are expressed in different units. Apart from these exclusions of physically unreasonable combinations, we produce as many combinations as possible. However, compressed-sensing theory poses a limit on the maximum size $M$ of the feature space from which the best (low-)$\Omega$-dimensional descriptor can be extracted by sampling the feature space with the knowledge of $N$ data points: $N = C \ \! \Omega \ \! \ln (M)$ [@Donoho06; @Candes06; @Foucart06], when the $M$ candidate features are uncorrelated. $C$ is not a universal constant, but it is typically estimated to be in the range between 4 and 8 (see Ref. ). For $\Omega = 2$ and $N = 82$, this implies a range of $M$ between $\sim 200$ and $\sim 30000$. Therefore, we regarded a few thousand as an upper limit for $M$. Since the number of thinkable features is certainly larger than a few thousand, we proceeded iteratively in several steps, learning from each step what to include in and what to exclude from the candidate-feature list of the next step. In the following, we describe how a set of $\sim 4500$ features was created. In Appendixes \[A:A\] and \[A:B\], we summarize how different feature spaces can be constructed, starting from different assumptions.
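Rearranging $N = C\,\Omega\,\ln(M)$ for $M$ reproduces the quoted range directly:

```python
import math

# Rearranging N = C * Omega * ln(M) gives M = exp(N / (C * Omega)).
N, Omega = 82, 2
for C in (4, 8):
    print(f"C = {C}: M = exp({N}/({C}*{Omega})) = {math.exp(N / (C * Omega)):.0f}")
```

With $C=8$ one obtains $M \approx 170$ and with $C=4$ one obtains $M \approx 28000$, of the same order as the $\sim 200$ and $\sim 30000$ quoted above.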
First of all, we form sums and absolute differences of homogeneous quantities and apply some unary operators (powers, exponential), see Table \[T:featspace1\].
  ----------------------------------------------------------------------------------------------------------------------------
  ID     Description                                         Prototype formula                                             \#
  ------ --------------------------------------------------- ------------------------------------------------------------- ----
  $B1$   Absolute differences of $A1$                        $|\textrm{IP}(\textrm{A})-\textrm{EA}(\textrm{A})|$           6

  $B2$   Absolute differences of $A2$                        $|\textrm{L}(\textrm{A}) - \textrm{H}(\textrm{A})|$           6

  $B3$   Absolute differences and sums of $A3$               $|r_p(\textrm{A}) \pm r_s(\textrm{A})|$                       30

  $C3$   Squares of $A3$ and $B3$ (only sums)                $r_s(\textrm{A})^2, (r_p(\textrm{A}) + r_s(\textrm{A}))^2$    21

  $D3$   Exponentials of $A3$ and $B3$ (only sums)           $\exp(r_s(\textrm{A})), \exp(r_p(\textrm{A}) \pm              21
                                                             r_s(\textrm{A}))$

  $E3$   Exponentials of squared $A3$ and $B3$ (only sums)   $\exp[r_s(\textrm{A})^2], \exp[(r_p(\textrm{A}) \pm           21
                                                             r_s(\textrm{A}))^2]$
  ----------------------------------------------------------------------------------------------------------------------------
  : First set of operators applied to the primary features (Table \[T:featspace0\]). Each group, labeled by a different ID, is formed by starting from a different group of primary features and/or by applying a different operator. The label A stands for both A and B of the binary material. \[T:featspace1\]
Next, the above combinations are further combined, see Table \[T:featspace2\].
  ------------------------------------------------------------------------------------------------------------------------------------
  ID               Description                                 Prototype formula                                               \#
  ---------------- ------------------------------------------- --------------------------------------------------------------- ------------
  $\{F1,F2,F3\}$   Abs. differences and sums of                $\left| |r_p(\textrm{A}) \pm r_s(\textrm{A})| \mp
                   $\{B1,B2,B3\}$, without repetitions         |r_p(\textrm{B}) \pm r_s(\textrm{B})| \right|$

  $G$              Ratios of any of $\{Ai,Bi\}, i=1,2,3$       $\left| r_p(\textrm{B})-r_s(\textrm{B}) \right| /              $\sim 4300$
                   with any of $\{A3,C3,D3,E3\}$               (r_d\textrm{(\textrm{A})}+r_s(\textrm{B}))^2$
  ------------------------------------------------------------------------------------------------------------------------------------
  : Second set of operators applied to the groups defined in Tables \[T:featspace0\] and \[T:featspace1\]. \[T:featspace2\]
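The unit-homogeneity rule can be enforced mechanically by tagging each primary feature with its unit group, forming sums and absolute differences only within a group, while ratios are free to mix groups. A toy Python sketch of this construction (the numerical values are invented placeholders, not the actual primary features):

```python
from itertools import combinations

# Hypothetical primary features, grouped by physical dimension (unit).
energies = {"IP(A)": 5.1, "EA(A)": 0.4, "IP(B)": 10.5, "EA(B)": 1.2}   # eV
radii = {"rs(A)": 1.1, "rp(A)": 1.4, "rs(B)": 0.6, "rp(B)": 0.7}       # Angstrom

def tier1(group):
    """Sums and absolute differences, allowed only within one unit group."""
    out = {}
    for (na, va), (nb, vb) in combinations(group.items(), 2):
        out[f"|{na}-{nb}|"] = abs(va - vb)
        out[f"({na}+{nb})"] = va + vb
    return out

# Only *linear combinations* must be unit-homogeneous; ratios may mix
# groups (e.g. an energy difference over a squared radius).
def ratios(num, den):
    return {f"{n}/{d}^2": v / w ** 2 for n, v in num.items() for d, w in den.items()}

feats = {}
feats.update(tier1(energies))
feats.update(tier1(radii))
feats.update(ratios(tier1(energies), radii))
print(len(feats), "candidate features")
```

Iterating this recipe over further operator tiers (squares, exponentials, nested ratios) is what inflates the feature space from 14 primary features to the thousands of candidates used below.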
LASSO was then applied to this set of $\sim 4500$ candidate features. If the features had low linear correlation, the first two features appearing upon decreasing $\lambda$ would form the best 2D descriptor, i.e., the one that minimizes the RMSE [@CorrCov]. Unfortunately, checking all pairs of features for linear correlation would scale with the size $M$ as unfavorably as performing the brute-force search for the best 2D descriptor by trying all pairs. Furthermore, such a screening would require defining a threshold on the absolute value of the covariance to decide whether or not any two features are correlated, and then possibly discarding one of the two. A similar problem would appear if more refined techniques, such as singular-value decomposition, were used to discard eigenvectors with low eigenvalues: a threshold would still have to be defined and tuned.
We adopted instead a simple yet effective solution: the best $\Theta=30$ features with non-zero coefficients that emerge from the application of LASSO at decreasing $\lambda$ are grouped, and among them an exhaustive $\ell_0$ minimization is performed (Eqs. and for $\|\bm{c}\|_0 = 1, 2, 3,$ …). The single features, pairs, triplets, etc. that score the lowest RMSE are the outcome of the procedure as 1D, 2D, 3D, etc. descriptors. The validity of this approach was tested by checking that running it on smaller feature spaces ($M \sim$ a few hundred), where the direct search among all pairs and all triplets could be carried out, gave the same result.
Our procedure, applied to the above-defined set of features (Tables \[T:featspace0\]-\[T:featspace2\]), found the best 1D, 2D, and 3D descriptors, as well as the coefficients of the equations for predicting the property $\Delta E$, as shown below (energies are in eV and radii are in Å): $$\begin{aligned}
\label{eq:desc1D} \Delta E &=& 0.117
\frac{\textrm{EA(B)}-\textrm{IP(B)}}{r_p(\textrm{A})^2} - 0.342,\\
\label{eq:desc2D} \Delta E &=& 0.113
\frac{\textrm{EA(B)}-\textrm{IP(B)}}{r_p(\textrm{A})^2} + 1.542
\frac{|r_s(\textrm{A})-r_p(\textrm{B})|}{\exp(r_s(\textrm{A}))} - 0.137,\\
\nonumber \Delta E &=& 0.108
\frac{\textrm{EA(B)}-\textrm{IP(B)}}{r_p(\textrm{A})^2} + 1.737
\frac{|r_s(\textrm{A})-r_p(\textrm{B})|}{\exp(r_s(\textrm{A}))} + \\
\label{eq:descr3D} & + & 9.025 \frac{\left| r_p(\textrm{B})-r_s(\textrm{B})
\right|}{\exp(r_d\textrm{(\textrm{A})}+r_s(\textrm{B}))} - 0.030.\end{aligned}$$ We removed the absolute value from “$\textrm{IP(B)}-\textrm{EA(B)}$” as this difference is always negative. In Fig. \[Fig:2D\], we show a [*structure map*]{} obtained by plotting the 82 materials, where we used the two components of the 2D descriptor (Eq. ) as coordinates. We note that the descriptor we found contains physically meaningful quantities, like the band gap of B in the numerator of the first component and the size mismatch between valence $s$- and $p$-orbitals (numerators of the second and third component).
![Predicted energy differences between RS and ZB structures of the 82 octet binary AB materials, arranged according to our optimal two-dimensional descriptor. The parallel straight lines are isolevels of the predicted model, from left to right, at -0.2, 0, 0.2, 0.5, 1.0 eV. The distance from the 0 line is proportional to the difference in energy between RS and ZB. The color/symbol code is for the reference (DFT-LDA) energies.[]{data-label="Fig:2D"}](2016-08-02_ZB_RS4.png "fig:"){width="85.00000%"}
In closing this section, we note that the algorithm described above can be dynamically run in a web-based graphical application at:\
`https://analytics-toolkit.nomad-coe.eu/tutorial-LASSO_L0`.
Cross Validation, sensitivity analysis, and extrapolation
=========================================================
In this section, we discuss in detail a series of analyses performed on our algorithm. We start with the adopted cross-validation (CV) scheme. Then, we investigate how the cross-validation error varies with the size of the feature space (actually, its “complexity”, as will be defined below). We proceed by discussing the stability of the choice of the descriptor with respect to sensitivity analysis. Finally, we test the [*extrapolation*]{} capabilities of the model.
In the numerical tests described above, we have always used all the data for training the models; the RMSEs given as figures of merit were therefore [*fitting errors*]{}. However, in order to assess the predictive ability of a model, it is necessary to test it on data that have not been used for the training; otherwise one can run into so-called [*overfitting*]{} [@Tibshirani09]. Overfitting is in general signaled by a noticeable discrepancy between the fitting error (the RMSE over the training data) and the test error (the RMSE over control data, i.e., data that were not used during the fitting procedure). If a large amount of data is available, one can simply partition the data into a training and a test set, fully isolated from one another. In our case, as is not unusual, the amount of data is too small for such a partitioning; therefore, we adopted a cross-validation strategy.
In general, the data set is still partitioned into training and test data, but this procedure is repeated several times, choosing different test data, in order to achieve good statistics. In our case, we adopted a leave-10%-out cross-validation scheme, where the data set is divided randomly into $\sim 90$% training data (75 data points, in our case) and $\sim 10$% test data. The model is trained on the training data and the RMSE is evaluated on the test set. By “training”, we mean the whole LASSO+$\ell_0$ procedure that selects the descriptor and determines the model (the coefficients of the linear equation) as in Eqs. –. Another figure of merit that was monitored is the maximum absolute error over the test set. The random selection, training, and error-evaluation procedure was repeated until the average RMSE and maximum absolute error did not change significantly. In practice, we typically performed 150 iterations, but the quantities were actually converged well before. We note that, at each iteration, the standardization is applied by calculating the average and standard deviation only over the data points in the training set. In this way, no information from the test set is used in the training; if the standardization were instead computed once and for all over all the available data, some information on the test set would leak into the training. In fact, it can be shown [@Tibshirani09] that such an approach can lead to spurious performance.
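The essential point, recomputing the standardization on each training fold only, can be sketched as follows (synthetic data, and a plain least-squares fit standing in for the full LASSO+$\ell_0$ training):

```python
import numpy as np

def leave_10pct_out(D, P, fit, n_iter=150, rng=None):
    """Leave-10%-out CV; mean and standard deviation are recomputed on each
    training fold, so no information from the test set leaks into training."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(P)
    n_test = max(1, n // 10)
    rmses = []
    for _ in range(n_iter):
        idx = rng.permutation(n)
        test, train = idx[:n_test], idx[n_test:]
        mu, sd = D[train].mean(axis=0), D[train].std(axis=0)   # training fold only
        c = fit((D[train] - mu) / sd, P[train])
        pred = ((D[test] - mu) / sd) @ c[:-1] + c[-1]
        rmses.append(np.sqrt(np.mean((pred - P[test]) ** 2)))
    return float(np.mean(rmses))

# stand-in "training": ordinary least squares with an intercept
def ols(X, y):
    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

rng = np.random.default_rng(3)
D = rng.normal(loc=2.0, scale=0.5, size=(82, 4))
P = 0.7 * D[:, 0] - 0.3 * D[:, 2] + 0.05 * rng.normal(size=82)
print(round(leave_10pct_out(D, P, ols), 3))
```

In the actual procedure, `fit` would be the whole descriptor selection plus linear fit, rerun from scratch on every fold.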
The cross-validation test can serve different purposes, depending on the adopted framework. For instance, in kernel ridge regression, for a Gaussian kernel, the fitted property is expressed as a weighted sum of Gaussians (see also section V): $P({\bm d}) = \sum_{i=1}^{N} c_{i} \exp{ \left( - \| {\bm d}_{i} - {\bm
d}\|^2_2 / 2\sigma^2 \right) } $, where $N$ is the number of training data points, i.e., there are as many coefficients as (training) data points. The coefficients $c_{i}$ are determined by minimizing $ \sum_{i=1}^{N} (P({\bm
d}_{i}) - P_{i})^2 + \lambda \sum_{i,j=1}^{N,N} c_{i}{c}_{j} \exp{ \left( - \|
{\bm d}_{i} - {\bm d}_{j}\|^2_2 / 2\sigma^2 \right) } $, where $\| {\bm d}_i -
{\bm d}_j\|^2_2 = \sum_{\alpha=1}^\Omega ( d_{i,\alpha} - d_{j,\alpha})^2$ is the squared $\ell_2$ norm of the difference of the descriptors of two materials. A recommended strategy [@Tibshirani09] is to use cross validation to determine the optimal values of the so-called [*hyper-parameters*]{} $\lambda$ and $\sigma$, in the sense that the pair $(\lambda,\sigma)$ that minimizes the average RMSE upon cross validation is selected. In our scheme, we can regard the dimensionality $\Omega$ of the descriptor and the size $M$ of the feature space as [*hyper-parameters*]{}. By increasing both parameters, we do not observe a minimum; rather, we reach a plateau where no significant improvement of the cross-validation average RMSE is achieved (see below).
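For reference, the minimizer of this kernel-ridge-regression objective has the closed form $\bm{c} = (\bm{K} + \lambda \bm{I})^{-1}\bm{P}$, with $K_{ij} = \exp(-\|\bm{d}_i-\bm{d}_j\|_2^2/2\sigma^2)$. A minimal numpy sketch on synthetic descriptor values (not data from this work):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    """Gaussian kernel matrix K[i, j] = exp(-||X_i - Y_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_fit(X, y, sigma, lam):
    """Closed-form KRR coefficients: c = (K + lam * I)^{-1} y."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def krr_predict(X_train, c, X_new, sigma):
    return gaussian_kernel(X_new, X_train, sigma) @ c

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(60, 2))            # synthetic 2D "descriptor" values
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]
c = krr_fit(X, y, sigma=0.5, lam=1e-3)
err = float(np.max(np.abs(krr_predict(X, c, X, sigma=0.5) - y)))
print(round(err, 4))
```

Note that there is one coefficient per training point, so, unlike the linear models above, the fitted function carries no explicit, interpretable dependence on the individual features.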
Since in our procedure the descriptor is found by the algorithm itself, a fundamental aspect of the cross-validation scheme is that the whole procedure, including the selection of the descriptor, is repeated from scratch for each training set. This means that, potentially, the descriptor changes with each training-set selection. We found a remarkable stability of the 1D and 2D descriptors. The 1D descriptor was the same as the all-data descriptor (Eq. ) for 90% of the training sets, while the 2D descriptor was the same as in Eq. in all cases. For 3D and higher dimensionality, as expected from the $N = C \Omega \ln (M)$ relationship, the selection of the descriptor becomes more unstable, i.e., for different training sets, the selected descriptor often differs in at least one of the three components. The RMSE, however, does not change much from one training set to the other, i.e., the instability of the descriptor selection just reflects the presence of many competing models. We show in Table \[T:RMSE\] the cross-validation figures of merit, average RMSE and maximum absolute error, as a function of increasing dimensionality. For comparison, we also show the fitting error.
  Descriptor   1D     2D     3D     4D
  ------------ ------ ------ ------ ------
  RMSE         0.14   0.10   0.08   0.06
  MaxAE        0.32   0.32   0.24   0.20
  RMSE, CV     0.14   0.11   0.08   0.07
  MaxAE, CV    0.27   0.18   0.16   0.12
: Root mean square error (RMSE) and maximum absolute error (MaxAE) in eV for the fit of all data (first two lines) and of the test set in a leave-10%-out cross validation (L-10%-OCV), averaged over 150 random selections of the training set (last two lines), according to our LASSO+$\ell_0$ algorithm. \[T:RMSE\]
Complexity of the feature space
-------------------------------
Our feature space is subdivided into six tiers, from tier zero to tier five.
- In tier zero, we have the 14 primary features as in Table \[T:featspace0\].
- In tier one, we group features obtained by applying only one unary (e.g., $\mathcal{A}^2, \exp(\mathcal{A})$) or binary (e.g., $|\mathcal{A} -
\mathcal{B}|$) operation on primary features, where $\mathcal{A}$ and $\mathcal{B}$ stand for any primary feature. Note that in this scheme the absolute value, applied to differences, is not counted as an extra operation, i.e., we consider the operator $|\mathcal{A}-\mathcal{B}|$ as a single operator.
- In tier two, two operations are applied, e.g., $\mathcal{A}/(\mathcal{B}+\mathcal{C}), (\mathcal{A}-\mathcal{B})^2,
\mathcal{A}/\mathcal{B}^2, \mathcal{A}\exp(-\mathcal{B})$.
- In tier three, we apply three operations: $|\mathcal{A} \pm \mathcal{B}|/(\mathcal{C}+\mathcal{D}), |\mathcal{A} \pm \mathcal{B}|/\mathcal{C}^2, |\mathcal{A} \pm \mathcal{B}|\exp(-\mathcal{C}), \mathcal{A}/(\mathcal{B}+\mathcal{C})^2, \ldots $.\
- Tier four: $|\mathcal{A} \pm \mathcal{B} |/(\mathcal{C}+\mathcal{D})^2,
|\mathcal{A} \pm \mathcal{B} |\exp(-(\mathcal{C}+\mathcal{D})), \ldots$.
- Tier five: $|\mathcal{A} \pm \mathcal{B} |\exp(-(\mathcal{C}+\mathcal{D})^2),
\ldots$.
Our procedure was executed with tier 0 only, then with tiers 0 and 1, then with tiers 0 to 2, and so on. The results are shown in Table \[T:tier\]. A clear result of this test is that little is gained, in terms of RMSE, when going beyond tier 3. The reason why the MaxAE may increase at larger tiers is that the choice of the descriptor becomes more unstable (i.e., different descriptors may be selected) the larger the feature space is. This leads to less controlled maximal errors and is a reflection of overfitting.
Tier 0 Tier 1 Tier 2 Tier 3 Tier 4 Tier 5
-------------- -------- -------- -------- -------- -------- --------
$\Omega = 1$
RMSE, CV 0.31 0.19 0.14 0.14 0.14 0.14
MaxAE, CV 0.67 0.37 0.32 0.28 0.29 0.30
$\Omega = 2$
RMSE, CV 0.27 0.16 0.12 0.10 0.10 0.10
MaxAE, CV 0.60 0.39 0.27 0.18 0.19 0.22
$\Omega = 3$
RMSE, CV 0.27 0.12 0.10 0.08 0.08 0.08
MaxAE, CV 0.52 0.39 0.27 0.16 0.18 0.20
: Errors after L-10%-OCV. “Tier $x$” means that [*all tiers up to*]{} tier $x$ are included in the feature space. \[T:tier\]
Incidentally, while for the 1D and 2D descriptors the results presented in Eqs. and contain only features up to tier 3, the third component of the 3D descriptor shown in Eq. belongs to tier 4. The 3D descriptor and model limited to tier 3 is: $$\begin{aligned}
\Delta E &=& 0.108 \frac{\textrm{EA(B)}-\textrm{IP(B)}}{r_p(\textrm{A})^2} +
1.790 \frac{|r_s(\textrm{A})-r_p(\textrm{B})|}{\exp(r_s(\textrm{A}))} + \\
\nonumber & + & 3.766 \frac{\left| r_p(\textrm{B})-r_s(\textrm{B})
\right|}{\exp(r_d\textrm{(\textrm{A})})} - 0.0267\end{aligned}$$ This is the same as presented in Ref. . The CV RMSE of this 3D model is worse than that of the model in Eq. by less than 0.01 eV.
Sensitivity Analysis
--------------------
Cross validation tests whether the found model is good only for the specific set of data used for the training or whether it is stable enough to predict the value of the target property for unseen data. Sensitivity analysis is a complementary test of the stability of the model, where the data are perturbed, typically by random noise. The purpose of sensitivity analysis can be finding out which of the input parameters maximally affect the output of the model, but also how much the model depends on the specific values of the training data. In practice, the training data can be affected by [*measurement errors*]{} even if they are calculated with an [*ab initio*]{} model, because numerical approximations are used to calculate the actual values of both the primary features and the property. Since, through our LASSO+$\ell_0$ methodology, we determine functional relationships between the primary features and the property, applying noise to the primary features and the property is a way of finding out how much the found functional relationship is affected by numerical inaccuracies; in other words, whether it is an artifact of the level of accuracy or a deeper, physically meaningful, relationship.
### Noise applied to the primary features
In this numerical test, each of the 14 primary features of Table \[T:featspace0\] was independently multiplied by Gaussian noise with mean 1 and standard deviation $\sigma=0.001,0.01,0.03,0.05,0.1,0.13, 0.3$, respectively. The derived features are then constructed from these noisy primary features. We also considered multiplying all features at once by Gaussian noise (at the same levels as for the independent features). The test with independent features reflects the traditional sensitivity-analysis test (see, e.g., Ref. ), where the goal is to single out which input parameters maximally affect the results yielded by a model. The test with all features perturbed takes into account that all primary features (as well as the fitted property) are evaluated with the same physical model and computational parameters; inaccuracies related to not fully converged computational settings are thus modeled as noise.
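The perturbation itself is straightforward to implement; a sketch with numpy (array shapes chosen to match the 82 materials and 14 primary features, but with invented values):

```python
import numpy as np

def perturb(features, sigma, which=None, rng=None):
    """Multiply selected primary-feature columns by Gaussian noise with
    mean 1 and standard deviation sigma (all columns if which is None)."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = features.copy()
    cols = range(features.shape[1]) if which is None else which
    for j in cols:
        out[:, j] *= rng.normal(loc=1.0, scale=sigma, size=features.shape[0])
    return out

rng = np.random.default_rng(5)
X = rng.uniform(0.5, 2.0, size=(82, 14))     # 82 materials x 14 primary features
X_one = perturb(X, sigma=0.05, which=[3])    # perturb a single primary feature
X_all = perturb(X, sigma=0.05)               # perturb all features at once
print(np.abs(X_one / X - 1.0).max(axis=0).round(3)[:5])
```

The derived features, and then the whole descriptor selection, are recomputed from the perturbed array at every noise realization.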
  Feature             CV scheme    $\sigma$: 0.001   0.01   0.03   0.05   0.1   0.13   0.3
  ------------------- ------------ ----------------- ------ ------ ------ ----- ------ -----
IP(B) LOOCV 99 99 98 70 4 0 0
IP(B) L-10%-OCV 84 84 71 51 10 1 0
EA(B) LOOCV 99 99 99 98 91 86 30
EA(B) L-10%-OCV 86 84 84 84 80 72 28
$r_s(\textrm{A})$ LOOCV 99 99 99 99 96 61 0
$r_s(\textrm{A})$ L-10%-OCV 83 87 84 86 72 38 0
$r_p(\textrm{A})$ LOOCV 99 98 86 64 2 0 0
$r_p(\textrm{A})$ L-10%-OCV 85 85 67 42 0 0 0
$r_p(\textrm{B})$ LOOCV 99 99 99 99 81 50 1
$r_p(\textrm{B})$ L-10%-OCV 86 85 86 83 72 53 2
All 14 LOOCV 99 98 70 11 0 0 0
All 14 L-10%-OCV 85 82 52 15 0 0 0
  : Number of times the 2D descriptor of the noiseless model (see Eq. \[eq:desc2D\]) is found when noise is applied to the primary features. The noise is measured by the standard deviation of the Gaussian-distributed set of random numbers that multiply the feature. Only for the 5 features contained in the 2D noiseless model does the noise affect the selection of the descriptor; therefore, only 5 of the 14 primary features are listed. The last two lines show the effect of noise applied to all primary features simultaneously. The results are displayed for both leave-one-out (LOOCV) and leave-10%-out CV schemes, as indicated by the “CV scheme” column. \[T:featnoise\]
Table \[T:featnoise\] summarizes the results. It reports the fraction of times, in %, in which the 2D descriptor of Eq. is found by LASSO+$\ell_0$ as a function of the noise level. For each noise level, 50 random extractions of the Gaussian-distributed random numbers were performed. For the leave-one-out CV (LOOCV) scheme, 82 iterations were performed for each random extraction, i.e., each material was once the test material. For the leave-10%-out scheme (L-10%-OCV), 50 iterations were performed, with 50 random selections of 74 materials as training and 8 materials as test set. As expected, for the 9 primary features that do not appear in the 2D descriptor of Eq. , the noise does not affect the final result at any level, i.e., the 2D descriptor of the noiseless model is always found, together with the fitting coefficients.
When one of the 5 features appearing in the 2D descriptor is perturbed, the result is affected by the noise level. For noise applied to some features, the percentage of selections of the noiseless 2D descriptor drops faster with the noise level than for others. Of course, even when the 2D descriptor of the noiseless model is found, the fitting coefficients differ from iteration to iteration (each iteration is characterized by a different realization of the random noise). For the LOOCV, the RMSE goes from 0.09 eV ($\sigma=0.001$) to 0.12 eV ($\sigma=0.3$), while for the L-10%-OCV the RMSE goes from 0.11 to 0.15 eV. So, even when, at the largest noise level, the selected descriptor may vary each time, the RMSE is only mildly affected. This reflects the fact that many competitive models are present in the feature space, and therefore a model yielding a similar RMSE can always be found. It is interesting to note that, upon applying noise to IP(B), in [*all cases*]{} the new 2D descriptor is $$\left( \frac{\textrm{EA(B)}}{r_p(\textrm{A})^2},
\frac{|r_s(\textrm{A})-r_p(\textrm{B})|}{\exp(r_s(\textrm{A}))}
\right),$$ i.e., very similar to the descriptor in Eq. , but with IP(B) simply missing. It is also surprising that, for quite large levels of noise (10-13%) applied to EA(B), the descriptor containing this feature is still selected. In general, up to noise levels of 5%, the descriptor of the noiseless model is recovered the majority of times or more. Therefore, we can conclude that the model selection is not very sensitive to noise applied to isolated features. When the noise is applied to all features, however, the frequency of recovery of the 2D descriptor of Eq. drops quickly. Still, for noise levels up to 1%, which could be related, e.g., to computational inaccuracies (not fully converged basis sets or other numerical settings), the model is recovered almost always.
### Adding noise to $P = \Delta E$
We have added [*uniformly distributed*]{} noise of size $\delta = \pm 0.01, \pm 0.03, \ldots$ eV to the DFT data of $\Delta E$. Here, we have selected two feature spaces, of size 2924 and 1568, constructed from two different sets of functions, but always including the descriptors of Eqs. -. The results are shown in Table \[T:best\]. For $\Omega = 2$, we report the fraction of trials for which the 2D descriptor of the unperturbed data was found in an L-10%-OCV test. (10 selections of random noise were performed, and for each selection L-10%-OCV was run for 50 random selections of the training set, so the statistics are over 500 independent selections.) The selection of the descriptor is remarkably stable up to uniform noise of $\pm 0.1$ eV (incidentally, around the value of the RMSE); beyond that it drops rapidly. We note that the errors stay constant as long as the noise is in the “physically meaningful” regime, i.e., as long as the relative ordering of the materials along the $\Delta E$ scale is not much perturbed. Only when the noise starts mixing the relative order of the materials does the prediction also become less and less accurate in terms of RMSE.
  Number of features   Quantity               $\pm 0.0$   $\pm 0.01$   $\pm 0.03$   $\pm 0.10$   $\pm 0.20$
  -------------------- ---------------------- ----------- ------------ ------------ ------------ ------------
1536 % best 2D descriptor 100 99 99 93 63
RMSE \[eV\] 0.10 0.11 0.11 0.12 0.17
AveMaxAE \[eV\] 0.18 0.18 0.18 0.23 0.32
2924 % best 2D descriptor 96 94 93 86 68
RMSE \[eV\] 0.11 0.11 0.12 0.13 0.18
AveMaxAE \[eV\] 0.19 0.20 0.22 0.24 0.34
  : Performance of the model for increasing uniform noise added to the calculated $\Delta E$. Besides the RMSE and the AveMaxAE, we report the percentage of times the 2D descriptor of the unperturbed data is recovered. \[T:best\]
Extrapolation: (re)discovering diamond
--------------------------------------
Most machine-learning models, in particular kernel-based models, are known to yield unreliable performance in extrapolation, i.e., when predictions are made in a region of the input space where there are no training data. We note that, in condensed-matter physics, the distinction between which systems are similar (and thus suitable for interpolation) and which are not is difficult if not impossible. We test the extrapolation capabilities of our LASSO+$\ell_0$ methodology by setting up two exemplary numerical tests. In the first test, we remove from the training set the two most stable ZB materials, namely C-diamond and BN (the two rightmost points in Fig. \[Fig:2D\]), and then calculate for both of them $\Delta E$, as predicted by the trained model. Although the prediction errors of 1.2 and 0.34 eV for C and BN, respectively, are very large, as can be seen in Table \[T:noCBN\] for the 2D descriptor, the model still predicts C and BN as the most stable ZB structures. Thus, in a setup where C and BN were unknown, the model would have predicted them as good candidates for the most stable ZB materials.
  Material   $\Delta E$ LDA \[eV\]   $\Delta E$ predicted \[eV\]   $E_\textrm{coh}$ \[eV\]
  ---------- ----------------------- ----------------------------- -------------------------
  C          -2.64                   -1.44                         -10.14
  BN         -1.71                   -1.37                         -9.72
: Performance of the model found by LASSO+$\ell_0$ when diamond and BN are excluded from the training. The rightmost column reports the LDA cohesive energy [*per atom*]{} of the ZB structure, referred to spinless atoms as used for determining the primary features in this work. \[T:noCBN\]
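The leave-out test can be sketched on synthetic data as follows (a plain least-squares fit of a 2D descriptor stands in for the trained model; with a noiseless linear target the held-out extremes are recovered exactly):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: a 2D descriptor and a noiseless linear property,
# mimicking the setup where the most stable materials are held out.
d = rng.normal(size=(82, 2))
y = 1.5 * d[:, 0] - 0.7 * d[:, 1]

# Hold out the two most "stable" materials (lowest y); train on the rest.
held_out = np.argsort(y)[:2]
train = np.setdiff1d(np.arange(82), held_out)

A = np.column_stack([d[train], np.ones(len(train))])
coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)

# Predict for all materials and check the held-out ones still rank lowest.
y_pred = np.column_stack([d, np.ones(82)]) @ coef
print("held out:", sorted(held_out), "lowest predicted:", sorted(np.argsort(y_pred)[:2]))
```

The qualitative criterion used in the text is exactly this ranking check: even with sizable quantitative errors, the excluded materials should still come out as the most stable ones.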
In the second test, we remove from the training set all four carbon-containing materials, namely C-diamond, SiC, GeC, and SnC, and then calculate $\Delta E$ for all of them, as predicted by the trained model. The results are reported in Table \[T:noCall\] for the model based on the 2D descriptor. The prediction error for C-diamond is comparable to that in the first test, and the other errors are also relatively large. Remarkably, however, the trained model knows nothing about carbon as a chemical element; nevertheless, it is able to predict that carbon will form ZB materials, and the relative magnitude of $\Delta E$ is respected.
  Material   $\Delta E$ LDA \[eV\]   $\Delta E$ predicted \[eV\]   $E_\textrm{coh}$ \[eV\]
  ---------- ----------------------- ----------------------------- -------------------------
  C          -2.64                   -1.37                         -10.14
  SiC        -0.67                   -0.48                         -8.32
  GeC        -0.81                   -0.46                         -7.28
  SnC        -0.45                   -0.23                         -6.52
: Performance of the model found by LASSO+$\ell_0$ when the carbon atom is excluded from the training, i.e., C-diamond, SiC, GeC, and SnC are excluded from the training set. The rightmost column reports the LDA cohesive energy [*per atom*]{} of the ZB structure, referred to spinless atoms as used for determining the primary features in this work. \[T:noCall\]
We conclude that a LASSO+$\ell_0$ based model is likely to have good, at least qualitative, extrapolation capabilities. This is owing to the stability of the linear model and the physical meaningfulness of the descriptor, which contains elements of the chemistry of the chemical species building the material.
In closing this section, we note that we cannot draw general conclusions from the particularly robust performance of the descriptors found by our LASSO+$\ell_0$ algorithm when it is applied to the feature space constructed as explained above. The tests described in this section, however, form a useful basis for assessing the robustness of a found model, and we regard such or a similar strategy as good practice. Two criteria give us confidence that the found models may have a physical meaning: the particular nature of the models found by our methodology, i.e., that they are expressed as explicit analytic functions of the primary features, and the evidence of robustness with respect to perturbations of the training set and the primary features. We also note that the functional relationships between a subset of the primary features and the property of interest found by our methodology cannot automatically be regarded as physical laws. In fact, both the primary features and $\Delta E$ are determined by the Kohn-Sham equations, where the physically relevant input consists only of the atomic charges.
Comparison to Gaussian-kernel ridge regression with various descriptors
=======================================================================
In this section, we use Gaussian-kernel ridge regression to predict the DFT-LDA $\Delta E$ for the 82 octet binaries, with various descriptors built from our primary features (see Table \[T:featspace0\]) or functions of them. The purpose of this analysis is to point out pros and cons of using kernel ridge regression (KRR), when compared to an approach such as our LASSO+$\ell_0$. In the growing field of data analytics applied to materials-science problems, KRR is perhaps the machine-learning method that is most widely used to predict properties of a given set of molecules or materials [@Rupp12; @Ramprasad13; @Gross2014; @Lilienfeld16].
KRR solves the nonlinear regression problem: $$\label{Eq:KRR}
\mathop{\rm argmin}\limits_{\bm c} \sum_{j=1}^{N} \left( P_j - \sum_{i=1}^N c_i
k({\bm d}_i,{\bm d}_j) \right)^2 + \lambda \sum_{i,j=1}^{N} c_{i}k({\bm
d}_i,{\bm d}_j) {c}_{j}$$ where $P_j$ are the data points, $k({\bm d}_i,{\bm d}_j)$ is the kernel matrix built with the descriptor ${\bm d}$, and $\lambda$ is the regularization parameter, with a similar role as $\lambda$ in Eqs. , , and . In KRR, $\lambda$ is determined by minimizing the cross-validation error. The fitting function determined by KRR is therefore $P({\bm d})= \sum_{i=1}^N c_i k({\bm d}_i,{\bm d})$, i.e. a weighted sum over all the data points. The crucial steps for applying this method are the selection of the descriptor and of the kernel. The most commonly used kernel is the Gaussian kernel: $k({\bm d}_i,{\bm d}_j) = \exp{ \left( - \| {\bm d}_{i} -
{\bm d}_{j}\|^2_2 / 2\sigma^2 \right) } $. The parameter determining the width of the Gaussian, $\sigma$, is recommended[@Tibshirani09; @Hansen13] to be determined together with $\lambda$, by minimizing the cross-validation error, and this is the strategy used here. The results are summarized in Table \[T:KRR\]. In each case, the optimal $(\lambda,\sigma)$ was determined by running LOOCV.
  No.   Dim.   Descriptor                                                             $(\lambda,\sigma)$         RMSE \[eV\]
  ----- ------ ---------------------------------------------------------------------- -------------------------- -------------
  1     2D     $Z_\textrm{A}, Z_\textrm{B}$                                           $(1\cdot10^{-6},0.008)$    0.13
  2     2D     John and Bloch’s $r_\sigma$ and $r_\pi$                                $(7\cdot10^{-6},0.008)$    0.09
  3     2D     our 2D                                                                 $(7\cdot10^{-6},0.73)$     0.10
  4     4D     G(A), G(B), R(A), R(B)                                                 $(1\cdot10^{-6},0.14)$     0.09
  5     4D     $r_s(\textrm{A}), r_p(\textrm{A}), r_s(\textrm{B}), r_p(\textrm{B})$   $(2\cdot10^{-6},0.14)$     0.08
  6     5D     IP(B), EA(B), $r_s(\textrm{A}), r_p(\textrm{A}), r_p(\textrm{B})$      $(7\cdot10^{-6},0.14)$     0.07
  7     14D    All atomic features                                                    $(1\cdot10^{-6},0.42)$     0.09
  8     23D    All atomic and dimer features                                          $(1\cdot10^{-6},6.72)$     0.24

  : Performance of Gaussian-kernel ridge regression with various descriptors; in each case, the hyperparameters $(\lambda,\sigma)$ were determined by minimizing the LOOCV error. \[T:KRR\]
We make the following observations:
- With several atomic-based descriptors, KRR fits reach levels of RMSE comparable to or slightly better than our fit with the LASSO+$\ell_0$.
- However, the performance of KRR does not improve with the dimensionality of the descriptor: descriptor 7 contains the same features as descriptors 5 and 6, plus other, possibly relevant, features. One would thus expect a better performance, which is not the case. The same happens when going to all 23 atomic and dimer features (descriptor 8).
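A KRR fit of this kind can be sketched with scikit-learn (an assumed, generic implementation on synthetic data; note that scikit-learn's `gamma` corresponds to $1/(2\sigma^2)$ and its `alpha` plays the role of $\lambda$):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV, LeaveOneOut

rng = np.random.default_rng(0)

# Synthetic stand-in for 82 materials with a 2D descriptor d and property P.
X = rng.normal(size=(82, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# Gaussian kernel k(d_i, d_j) = exp(-||d_i - d_j||^2 / (2 sigma^2));
# sklearn parametrizes it as exp(-gamma ||.||^2), so gamma = 1/(2 sigma^2).
sigmas = np.array([0.1, 0.5, 1.0, 5.0])
grid = {"alpha": [1e-6, 1e-4, 1e-2],      # plays the role of lambda
        "gamma": 1.0 / (2.0 * sigmas**2)}

# Choose (lambda, sigma) by minimizing the leave-one-out CV error.
search = GridSearchCV(KernelRidge(kernel="rbf"), grid,
                      cv=LeaveOneOut(),
                      scoring="neg_mean_absolute_error")
search.fit(X, y)
print("best (alpha, gamma):", search.best_params_)
print("LOOCV error [a.u.]:", -search.best_score_)
```

With leave-one-out folds each test set has one point, so the averaged fold score is simply the mean absolute LOOCV error; the grid values here are illustrative, not those of Table \[T:KRR\].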
Prediction test with KRR
------------------------
We have repeated the tests of section IV.C, i.e., we have trained a KRR model for all materials except C and BN, and for all materials except the four carbon compounds. Then we have evaluated the predicted $\Delta E$ for the excluded materials. This test was done using descriptors 1, 2, 4, 5, and 7 from Table \[T:KRR\]. Furthermore, the 2D LASSO+$\ell_0$ descriptors were evaluated for the two datasets described above (details on these descriptors are given in Appendix \[A:C\]). These two 2D descriptors are analytic functions of five primary features each, and those five features were also used as a 5D descriptor. In all cases, we have determined the hyperparameters $\lambda$ and $\sigma$ by minimizing the RMSE over a LOOCV run, and the descriptors are normalized component by component: each component of the descriptor is [*normalized*]{} by the $\ell_2$ norm of the vector of values of that component over the whole [*training*]{} set. The results are shown in Tables \[T:noCBN\_KRR\] and \[T:noCx\_KRR\].
-----------------------------------------------------------------------------------------------------------------------------------------------------------
  Method           Descriptor                                                             Dim.   ($\lambda,\sigma$)          $\Delta E$(BN)   $\Delta E$(C)
  ---------------- ---------------------------------------------------------------------- ------ --------------------------- ---------------- ---------------
LDA [**-1.71**]{} [**-2.64**]{}
LASSO+$\ell_0$ CBN 2 -1.37 -1.44
KRR CBN 2 $(1\cdot 10^{-6},0.14)$ -2.85 -4.30
KRR CBN$^*$ 5 $(7\cdot 10^{-6},0.24)$ -0.96 -1.35
KRR $Z_\textrm{A}, Z_\textrm{B}$ 2 $(1\cdot 10^{-6},0.0085)$ -0.68 -0.56
KRR G(A), G(B), R(A), R(B) 4 $(1\cdot 10^{-6},0.079)$ -0.50 -1.29
KRR $r_\sigma$ and $r_\pi$ 2 $(0.013,0.045)$ -1.00 -1.17
KRR $r_s(\textrm{A}), r_p(\textrm{A}), r_s(\textrm{B}), r_p(\textrm{B})$ 4 $(1\cdot 10^{-6},0.14)$ -1.27 -2.31
KRR All atomic features 14 $(1\cdot 10^{-6},0.42)$ -1.01 -1.91
-----------------------------------------------------------------------------------------------------------------------------------------------------------
: KRR prediction of $\Delta E$ (all in eV) for C and BN, when these two materials are not included in the training set, for various descriptors, compared to the LASSO$+\ell_0$ result and the LDA reference. Descriptor CBN is the 2D descriptor found by LASSO$+\ell_0$ for this dataset (see Table \[T:LMO\], row b), and descriptor CBN$^*$ is built with the 5 primary features found in descriptor CBN. \[T:noCBN\_KRR\]
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Method           Descriptor                                                             Dim.   ($\lambda,\sigma$)           $\Delta E$(C)   $\Delta E$(SiC)   $\Delta E$(GeC)   $\Delta E$(SnC)
  ---------------- ---------------------------------------------------------------------- ------ ---------------------------- --------------- ----------------- ----------------- -----------------
LDA [**-2.64**]{} [**-0.67**]{} [**-0.81**]{} [**-0.45**]{}
LASSO+$\ell_0$ $x$C 2 -1.37 -0.48 -0.46 -0.23
KRR $x$C 2 $(1.3\cdot 10^{-5},0.079)$ -3.05 -0.66 -0.71 -0.24
KRR $x$C$^*$ 5 $(5.7\cdot 10^{-4},0.42)$ -2.28 -0.48 -0.44 -0.17
KRR $Z_\textrm{A}, Z_\textrm{B}$ 2 $(7.3\cdot 10^{-3},0.015)$ -2.38 -0.22 -0.59 -0.29
KRR G(A),G(B),R(A),R(B) 4 $(1\cdot 10^{-6},0.13)$ -2.28 -0.48 -0.47 -0.28
KRR $r_\sigma$ and $r_\pi$ 2 $(1.6\cdot 10^{-4},0.079)$ -1.96 -0.67 -0.50 -0.31
KRR $r_s(\textrm{A}), r_p(\textrm{A}), r_s(\textrm{B}), r_p(\textrm{B})$ 4 $(1.3\cdot 10^{-5},0.079)$ -3.06 -0.66 -0.70 -0.24
KRR All atomic features 14 $(1\cdot 10^{-6},0.42)$ -1.55 -1.24 -0.31 -0.04
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
: KRR prediction of $\Delta E$ (all in eV) for all four carbon compounds, when these materials are not included in the training set, for various descriptors, compared to the LASSO$+\ell_0$ result and the LDA reference. Descriptor $x$C is the 2D descriptor found by LASSO$+\ell_0$ for this dataset (see Table \[T:LMO\], row d), and descriptor $x$C$^*$ is built with the 5 primary features found in descriptor $x$C. \[T:noCx\_KRR\]
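The component-wise normalization described above (each descriptor component divided by the $\ell_2$ norm of that component over the training set, with the same scaling applied to the held-out materials) can be sketched as follows; the array shapes are illustrative:

```python
import numpy as np

def normalize_columns(d_train, d_test):
    """Scale each descriptor component by the l2 norm of that component
    over the *training* set, applying the same scaling to the test set."""
    norms = np.linalg.norm(d_train, axis=0)
    return d_train / norms, d_test / norms

rng = np.random.default_rng(2)
d_train = rng.normal(size=(78, 5))   # e.g. 78 training materials, 5D descriptor
d_test = rng.normal(size=(4, 5))     # e.g. 4 held-out materials
t, s = normalize_columns(d_train, d_test)
print(np.linalg.norm(t, axis=0))     # every training column now has unit norm
```

Using only training-set norms is important in a prediction test: the scaling must not leak information from the excluded materials.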
The performance of KRR in predicting the $\Delta E$ of selected subsets of materials depends strongly on the descriptor. In particular, when KRR is used for extrapolation (descriptor CBN, where the C and BN data points are distant from the other data points in the metric defined by this 2D descriptor), the performance is rather poor in terms of quantitative error, even though C and BN are still correctly predicted to be very stable ZB materials. Some descriptors expected to carry relevant physical information, such as the set ($r_s(\textrm{A}), r_p(\textrm{A}), r_s(\textrm{B}), r_p(\textrm{B})$), also show good predictive ability in these examples.
In summary, KRR is a viable alternative to the analytical models found by LASSO$+\ell_0$ (as also noted in Ref. ), but only when a good descriptor is identified. The strength of our LASSO$+\ell_0$ approach is that the descriptor is determined by the method itself. Most importantly, LASSO$+\ell_0$ is not fooled by features that are redundant or useless (i.e., carrying little or no information on the property). These features are simply discarded. In contrast, KRR cannot discard components of a descriptor, resulting in decreasing predictive quality when a mixture of relevant and non-relevant features is naively used as descriptor.
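The two-stage selection itself can be sketched roughly as follows, with scikit-learn's `Lasso` for the prescreening step and an exhaustive search for the $\ell_0$ step (a simplified stand-in for the actual implementation; the feature matrix, the regularization strength, and the number of surviving candidates are illustrative choices):

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def lasso_l0(X, y, n_screen=20, dim=2):
    """Prescreen features with LASSO, then exhaustively search all
    `dim`-tuples of the survivors for the best least-squares fit."""
    Xs = StandardScaler().fit_transform(X)
    coef = Lasso(alpha=0.05, max_iter=10000).fit(Xs, y).coef_
    survivors = np.argsort(np.abs(coef))[::-1][:n_screen]
    best = (np.inf, None)
    for idx in combinations(survivors, dim):
        A = np.column_stack([X[:, list(idx)], np.ones(len(y))])
        c, *_ = np.linalg.lstsq(A, y, rcond=None)
        rmse = np.sqrt(np.mean((A @ c - y) ** 2))
        if rmse < best[0]:
            best = (rmse, idx)
    return best

rng = np.random.default_rng(3)
X = rng.normal(size=(82, 500))           # synthetic 500-feature space
y = 0.9 * X[:, 10] - 0.4 * X[:, 200]     # "true" 2D descriptor
rmse, idx = lasso_l0(X, y)
print("selected features:", sorted(int(i) for i in idx), "RMSE:", rmse)
```

The redundant 498 columns are simply discarded; a kernel method fed the same 500-dimensional input has no comparable mechanism for ignoring them.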
Conclusions
===========
We have presented a compressed-sensing methodology for identifying physically meaningful descriptors, i.e., physical parameters that describe a material and its properties of interest, and for quantitatively predicting properties relevant to materials science. The methodology starts by introducing possibly relevant [*primary features*]{} that are suggested by physical intuition and pre-knowledge of the specific physical problem. Then, a large feature space is generated by listing nonlinear functions of the primary features. Finally, few features are selected with a compressed-sensing-based method, which we call LASSO+$\ell_0$ because it uses the Least Absolute Shrinkage and Selection Operator for a prescreening of the features and an $\ell_0$-norm minimization for the identification of the few most relevant features. This approach can deal well with linear correlations among different features. We analyzed the significance of the descriptors found by the LASSO+$\ell_0$ methodology by discussing the interpolation ability of the models based on the found descriptors, their robustness in terms of a stability analysis, and their extrapolation capability, i.e., the possibility of predicting new materials.
Acknowledgment
==============
This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 676580, The NOMAD Laboratory, a European Center of Excellence, the BBDC (contract 01IS14013E), and the Einstein Foundation Berlin (project ETERNAL). J.V. was supported by the ERC CZ grant LL1203 of the Czech Ministry of Education and by the Neuron Fund for Support of Science. This research was initiated while CD, LMG, MS, and JV were visiting the Institute for Pure and Applied Mathematics (IPAM), which is supported by the National Science Foundation (NSF).
More details on the construction of the feature space {#A:A}
=====================================================
In order to determine the final feature space as described in section III, we proceeded in this way:
- As scalar features describing the valence orbitals, we use the radii at which the radial probability densities of the valence $s$, $p$, and $d$ orbitals have their maxima. This type of radii was, in fact, selected by our procedure, as opposed to the average radii (i.e., the quantum-mechanical expectation value of the radius). To this purpose, two feature spaces starting from the two sets of radii as primary features were first constructed. In practice, in one case we started from the same primary features as in Table \[T:featspace0\], but without group A2 (in order to reduce the dimensionality of the final space); in the other case we substituted the average radii in group $A3$, again without group $A2$. We then constructed both spaces following the rules of Tables \[T:featspace1\] and \[T:featspace2\]. Finally, we joined the spaces and applied LASSO+$\ell_0$ to the joint space. Only features containing the radii at maximum were selected among the best.
- Similarly, we also defined three other radii-derived features for the atoms: the radius of the highest occupied orbital of the neutral atom, $r_0$, and analogously defined radii for the anions, $r_-$, and the cations, $r_+$. $r_0$ is either $r_s$ or $r_p$ as defined above, depending on which valence orbital is the HOMO. As in the previous point, we constructed a feature space containing both $\{r_0, r_-, r_+\}$ and $\{r_s, r_p, r_d\}$ and their combinations, and found that only the latter radii were selected among the best.
- We have considered in addition features related to the AA, BB, and AB dimers (see Table \[T:dimers\]). These new features were combined in the same way as the groups $A1$, $A2$, and $A3$, respectively (see Tables \[T:featspace0\]–\[T:featspace2\]). After running our procedure, we found that features containing dimer-related quantities are never selected among the most prominent ones.
  Group    Descriptor             Symbols                                                      \#
  -------- ---------------------- ------------------------------------------------------------ ----
  $A1'$    Binding energy         $E_b(\textrm{AA})$, $E_b(\textrm{BB})$, $E_b(\textrm{AB})$   3
  $A2'$    HOMO-LUMO KS gap       HL(AA), HL(BB), HL(AB)                                       3
  $A3'$    Equilibrium distance   $d$(AA), $d$(BB), $d$(AB)                                    3

  : Primary features related to the AA, BB, and AB dimers. \[T:dimers\]
- We constructed in sequence several alternative sets of features, in particular varying systematically the elements of group $G$ (see Table \[T:featspace2\]). Multiplication of the $\{Ai,Bi,Ei\}$, $(i=1,2,3)$, by the $\{A3,B3\}$ was included, as well as division of the $\{Ai,Bi,Ei\}$, $(i=1,2,3)$, by the $\{A3,B3\}$ cubed (instead of squared as in Table \[T:featspace2\]). Only division by $C3$ was selected by LASSO. At this stage, a descriptor in the form $$\left(
\frac{|\textrm{IP(B)}-\textrm{EA(B)}|}{r_p(\textrm{A})^2},\,
\frac{|r_s(\textrm{A})-r_p(\textrm{B})|}{r_s(\textrm{A})^2},\, \frac{\left|
r_p(\textrm{B})-r_s(\textrm{B}) \right|}{r_d\textrm{(\textrm{A})}^2}
\right)$$ was found.
The persistence of the $C3$ group in the denominator suggested trying other decaying functions of $r$ and $r+r'$, for instance, the exponentials defined in $D3$ and $E3$. Interestingly, when the set of features containing $C3$, $D3$, and $E3$ was searched, the second and third components of the above descriptor were replaced by corresponding forms where the squared denominator is substituted by exponentials of the same atomic features (see below). This descriptor was therefore found by LASSO, in the sense that the substitutions $1/r_s(\textrm{A})^2 \rightarrow \exp(-r_s(\textrm{A}))$ and $1/r_d(\textrm{A})^2 \rightarrow \exp(-r_d(\textrm{A}))$ are an outcome of the LASSO procedure, not of a directly attempted substitution.
- The fact that all selected features belong to group $G$ (see Table \[T:featspace2\]), which is the most populated, is [*not*]{} due to the fact that members of the other groups are “submerged” by the large population of group $G$ and “not seen” by LASSO. We have run extensive tests on the groups excluding $G$ and, indeed, the best-performing descriptors yield RMSE larger than those we have found.
- By noticing that the features in group $A2$ (see Table \[T:featspace0\]) never appear in the selected descriptors, and that the information carried by the features in $A1$ is similar to that in $A2$, we investigated what happens if $A1$ is removed from the primary features (and therefore all derived features containing primary features from $A1$ are removed from the feature space). We find the following models $$\begin{aligned}
\label{eq:desc1Db} \Delta E &=& 0.518 \frac{| \textrm{H(A)}-
\textrm{L(B)}|}{\exp[r_p(\textrm{A})^2]} - 0.174, \\
\label{eq:desc2Db} \Delta E &=& 4.335
\frac{r_d(\textrm{A})-r_s(\textrm{B})}{(r_p(\textrm{A})+r_d(\textrm{A}))^2} +
16.321 \frac{r_s(\textrm{B})}{\exp[(r_p(\textrm{A}) + r_p(\textrm{B}))^2]} -
0.406.\end{aligned}$$
In practice, $| \textrm{H(A)}- \textrm{L(B)}|$ replaces $|\textrm{IP(B)}-\textrm{EA(B)}|$ in the 1D model, with a similar denominator as in Eq. . However, the 2D model does not contain any feature from $A2$ and it is all built with features from $A3$. The RMSE of the 1D model of Eq. is 0.145 eV, slightly worse than 0.142 eV for the model of Eq. . The RMSE of the 2D model in Eq. is 0.109 eV, compared to 0.099 eV for Eq. . Equation is also the best LASSO+$\ell_0$ model if the features space is constructed by using only $A3$ as primary features. In the latter case, the 1D model would be (RMSE=0.160 eV): $$\begin{aligned}
\label{eq:desc1Dc} \Delta E &=& 19.384
\frac{r_p(\textrm{B})}{\exp[(r_s(\textrm{A})+r_s(\textrm{B}))^2]} - 0.257\end{aligned}$$
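The generation of derived features from primary features by fixed combination rules can be sketched generically; the feature names, their values, and the particular pair of rules below are illustrative, not the full rule set of Tables \[T:featspace1\] and \[T:featspace2\]:

```python
import itertools
import numpy as np

# Illustrative primary features for one material (hypothetical values;
# the names mimic the radii used in the text).
primary = {"rs_A": 1.2, "rp_A": 1.5, "rd_A": 2.1, "rs_B": 0.9, "rp_B": 1.1}

def derived_features(p):
    """Combine primary features in the spirit of the combination rules:
    absolute differences, divided by a squared feature or damped by an
    exponential of a feature."""
    feats = {}
    for (nx, x), (ny, y) in itertools.combinations(p.items(), 2):
        diff = abs(x - y)
        for nz, z in p.items():
            feats[f"|{nx}-{ny}|/{nz}^2"] = diff / z**2
            feats[f"|{nx}-{ny}|*exp(-{nz})"] = diff * np.exp(-z)
    return feats

feats = derived_features(primary)
print(len(feats), "derived features from", len(primary), "primary features")
```

Once rules like these are tabulated, swapping in a different set of primary features regenerates the whole feature space automatically, which is why the tests in this appendix are cheap to repeat.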
Other feature spaces {#A:B}
====================
In this Appendix, we show the performance of our algorithm with feature spaces constructed from completely different [*primary features*]{} than in the main text. The purpose is to underline the importance of the choice of the initial set of features. We note that when combination rules are established, performing new numerical tests takes just the (human) time to tabulate the new set of primary features for the data set.
Primary features including valence and row of the PTE {#A:B1}
-----------------------------------------------------
We included the “periodic-table coordinates”, period (labeled R, for Row) and group (labeled G, from 1 to 18 according to the IUPAC numbering), as features. The reason for this test is to see whether, by introducing more information than just the atomic numbers Z$_\textrm{A}$ and Z$_\textrm{B}$ (see ), a predictive model for $\Delta E$ is obtained. The new information is the explicit similarity between elements of the same group (they share the coordinate G), which is not contained in the atomic number Z. First, we started with only R(A), R(B), G(A), and G(B) as primary features and constructed a feature space using the usual combination rules. We note that G(B) is redundant for many but not all cases, given G(A). In fact, for $sp$ octet binaries the sum of the groups is 18, but for binaries containing Cu, Ag, Cd, and Zn (16 out of 82) the sum is different, and therefore the coordinates are effectively 4. Next, we constructed a feature space starting from the set of 14 primary features described in Table \[T:featspace0\] – but with group $A2$ (the HOMO and LUMO Kohn-Sham levels) substituted by R(A), R(B), G(A), and G(B) – and then following the rules summarized in Tables \[T:featspace1\] and \[T:featspace2\].
For the case where the primary features are only R(A), R(B), G(A), and G(B), we generated a feature space of size 1143 and then ran LASSO+$\ell_0$. The outcome is shown in Table \[T:PeriodicTableFeat\]. We conclude that R(A), R(B), G(A), and G(B) alone do not contain enough information for building a predictive model, following our algorithm.
$\Omega=1$ $\Omega=2$ $\Omega=3$
-------------- ------------ ------------ ------------
RMSE \[eV\] 0.20 0.19 0.17
MaxAE \[eV\] 0.69 0.71 0.62
: Feature space with primary features R (Row or Period) and G (Group) of the PTE. Performance over the whole set of 82 materials. \[T:PeriodicTableFeat\]
For the case where the primary features are 14, including R(A), R(B), G(A), and G(B), we constructed a feature space of size 4605, and LASSO+$\ell_0$ found the following 1D optimal model: $$\Delta E = -0.376 + 0.0944
\frac{|\textrm{R(B)}-\textrm{G(B)}|}{r_p(\textrm{A})^2}.$$ In essence, $|\textrm{R(B)}-\textrm{G(B)}|$ replaced $|\textrm{IP(B)}-\textrm{EA(B)}|$ in Eq. , while the denominator remained unchanged. The RMSE of this new 1D model is only slightly better than that of the model in Eq. , namely 0.13 eV (vs. 0.14 eV), but the MaxAE is much worse, i.e., 0.43 vs. 0.32 eV. However, this finding is remarkable, as it implies some correlation between $|\textrm{R(B)}-\textrm{G(B)}|$ and $|\textrm{IP(B)}-\textrm{EA(B)}|$, where the latter is a DFT-LDA property while the former comes simply from the number of electrons and the Aufbau principle. Indeed, there is a Pearson correlation of 0.87 between the two quantities, and, when both quantities are divided by $r_p(\textrm{A})^2$, the Pearson correlation becomes as high as 0.98. The optimal 2D and 3D descriptors, though, are the same as in Eqs. and .
Even though it is expected that the difference IP$-$EA grows when moving to the right in the PTE, a linear correlation between the difference $|\textrm{R(B)}-\textrm{G(B)}|$ and IP$-$EA is unexpected. Figure \[Fig:EAIP\] shows in the bottom panel this correlation for the $p$ elements (the anions B in the AB materials) of the PTE, while the top panel shows an even stronger linear correlation between IP$-$EA and the group G in the PTE, for each period.
![Plot of the difference IP$-$EA for the anions (B atoms in the AB materials) vs. their group (top panel) or the difference Group$-$Row (bottom panel) in the PTE. The straight lines are linear least-squares fits of the data points. \[Fig:EAIP\]](anions2.png "fig:"){width="80.00000%"}
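A correlation check of this kind can be sketched with `scipy.stats.pearsonr`; the arrays below are synthetic stand-ins, not the actual atomic data:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)

# Synthetic stand-ins: `a` plays the role of |R(B) - G(B)|, `b` that of
# IP(B) - EA(B), and rp2 that of r_p(A)^2 (all values are hypothetical).
a = rng.uniform(1.0, 15.0, size=82)
b = 0.9 * a + rng.normal(scale=1.0, size=82)
rp2 = rng.uniform(1.0, 4.0, size=82)

r_raw, _ = pearsonr(a, b)
r_scaled, _ = pearsonr(a / rp2, b / rp2)
print(f"Pearson r: raw {r_raw:.2f}, both divided by r_p(A)^2: {r_scaled:.2f}")
```

As in the text, dividing both quantities by a common, material-dependent factor can change (here, potentially raise) the linear correlation between them.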
Adding John and Bloch’s $r_\sigma$ and $r_\pi$ as primary features {#A:B2}
------------------------------------------------------------------
Here, we added $r_\sigma$ and $r_\pi$ [@Bloch74] to the primary features, in order to see whether, combined with other features according to our rules, they yielded a more accurate and/or more robust model. The feature space was reduced, since plainly adding all the combinations with these two extra features made the whole procedure unstable (remember that we have only 82 data points). By optimizing over a feature space of size 2924 and using all 82 materials for learning and testing, LASSO$+\ell_0$ identified the same 2D descriptor as in Eq. . In other words, no function of $r_\sigma$ or $r_\pi$ won over the known descriptor. For the L-10%-OCV, in about 10% of the cases, a descriptor containing a function of $r_\sigma$ or $r_\pi$ was selected.
Use of force constants derived from the tetrahedral pentamers AB$_4$ and BA$_4$ as primary features {#A:B3}
---------------------------------------------------------------------------------------------------
Here, we build a feature space exploring the idea that the difference in energy between RS and ZB may depend on the mechanical stability of the basic constituent of either crystal structure. For instance, we choose ZB and we look at the mechanical stability of the tetrahedral pentamers AB$_4$ and BA$_4$. In practice, we look at the elastic constants for the symmetric and antisymmetric stretching deformations.
We write the elastic energy of deformation of a tetrahedral pentamer XY$_4$, for a symmetric stretch: $$\label{Eq:DES}
\Delta E_\textrm{harm}^\textrm{SYMM} = 4 \alpha_\textrm{XY} (\Delta r)^2$$ where $\alpha_\textrm{XY}$ is the bond-stretching constant (of one XY bond in the tetrahedral arrangement, which is not necessarily the same value as the one of the XY dimer). The factor 4 comes from the fact that there are 4 bonds equally stretched. The second derivative of $\Delta E_\textrm{harm}^\textrm{SYMM}$ with respect to $\Delta r$ is $(\Delta E_\textrm{harm}^\textrm{SYMM})'' = 8 \alpha_\textrm{XY}$. The left-hand side is evaluated from the (LDA) calculated $\Delta
E_\textrm{harm}^\textrm{SYMM}(\Delta r)$ curve, fitted to a second order polynomial.
Considering an asymmetric stretch, which means that we move the central atom X along one of the four XY bonds, we write: $$\begin{aligned}
\Delta E_\textrm{harm}^\textrm{ASYMM} &=& 2 \alpha_\textrm{XY} (\Delta r)^2 + 3
\beta_\textrm{YXY} (\theta_1)^2 + 3 \beta_\textrm{YXY} (\theta_2)^2, \end{aligned}$$ where $\theta_1$ and $\theta_2$ are the two different distortion angles that the 6 Y-X-Y angles, formed at the central atom X, undergo upon the asymmetric stretch. After working out the geometrical relationship $\theta_{1,2}^2= \theta_{1,2}^2((\Delta r)^2)$, we find: $$\begin{aligned}
\Delta E_\textrm{harm}^\textrm{ASYMM} &=& 2 ( \alpha_\textrm{XY} + 4
\beta_\textrm{YXY} ) (\Delta r)^2.\end{aligned}$$ Since the asymmetric deformation is defined through $\Delta r$, for convenience, $\beta_\textrm{YXY}$ is referred to the linear displacement, rather than the angular one. In practice, the relationship $\theta^2((\Delta r)^2)$ was derived. The second derivative of the above expression is equated to the second derivative of the calculated $\Delta E (\Delta r)$ curve. Since $\alpha_\textrm{XY}$ is known from Eq. , $\beta_\textrm{YXY}$ is then inferred.
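The extraction of $\alpha$ and $\beta$ from the two fitted curves can be sketched as follows (synthetic, exactly quadratic curves stand in for the calculated LDA energies; the force-constant values are invented):

```python
import numpy as np

# Synthetic "calculated" deformation-energy curves standing in for the
# LDA data; the true force constants (arbitrary units) are chosen by hand.
alpha_true, beta_true = 2.0, 0.3
dr = np.linspace(-0.05, 0.05, 11)
E_symm = 4.0 * alpha_true * dr**2                       # symmetric stretch
E_asym = 2.0 * (alpha_true + 4.0 * beta_true) * dr**2   # asymmetric stretch

# Fit each curve to a second-order polynomial; the quadratic coefficient
# c2 gives the second derivative as E'' = 2*c2.
c2_symm = np.polyfit(dr, E_symm, 2)[0]
c2_asym = np.polyfit(dr, E_asym, 2)[0]

alpha = c2_symm / 4.0                  # from E''_symm = 8*alpha
beta = (c2_asym / 2.0 - alpha) / 4.0   # from E''_asym = 4*(alpha + 4*beta)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```

With real, anharmonic curves the fit range of $\Delta r$ matters; here the curves are exactly quadratic, so the true constants are recovered.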
We now have 4 primary features for each material. The first two, $\alpha_\textrm{AB}$ and $\beta_\textrm{ABA}$, come from the AB$_4$ tetrahedral molecule, while the third and fourth, $\alpha_\textrm{BA}$ and $\beta_\textrm{BAB}$, come from the BA$_4$ molecule. In addition, by noting that $\alpha_\textrm{XY}$ and $\beta_\textrm{YXY}$ enter the expression for $\Delta E_\textrm{harm}^\textrm{ASYMM}$ with a ratio of 1/4, two further primary features were added, i.e., $\alpha_\textrm{AB}+4\beta_\textrm{ABA}$ and $\alpha_\textrm{BA}+4\beta_\textrm{BAB}$. These 6 primary features were combined in (some of) the usual ways. Note that the above combinations of $\alpha$ and $\beta$ open a new level of complexity: at present, when constructing the feature space, we apply operations like $\textrm{A}+\textrm{B}$ and $|\textrm{A}-\textrm{B}|$, but we do not allow for the freedom of arbitrary linear combinations. Note also that here the information about the two atoms is intermingled in all six primary features.
To give an idea, the so obtained 2D descriptor is: $$\left( \left[ \beta_\textrm{ABA}-(\alpha_\textrm{BA}+4\beta_\textrm{BAB}) \right]
\alpha_\textrm{AB},
\frac{\alpha_\textrm{AB}+4\beta_\textrm{ABA}+\alpha_\textrm{BA}+4\beta_\textrm{
BAB}}{\alpha_\textrm{AB}+\alpha_\textrm{BA}} \right)$$ The performance in terms of RMSE and CV is summarized in Table \[T:elastic\].
  $\Omega$   RMSE all   RMSE CV   MAE CV   MaxAE CV
  ---------- ---------- --------- -------- ----------
  $1$        0.1910     0.1809    0.1473   0.3583
  $2$        0.1532     0.1797    0.1303   0.3836
  $3$        0.1346     0.2195    0.1553   0.4810
: Performance of the model built from primary features based on force-constants. \[T:elastic\]
We conclude that the features based on the elastic energy do not yield a model with good predictive ability.
Numerical tests excluding selected elements {#A:C}
===========================================
In this numerical test, we have removed several sets of materials from the data set. In practice, we have removed:
0\) Nothing\
a) C-diamond (1 material)\
b) C-diamond and BN (2 materials)\
c) BN (1 material)\
d) All carbon compounds (4 materials)\
e) All boron compounds (4 materials)\
f) CdO, an example of a material with $\Delta E \sim 0$ (1 material)\
g) All oxygen-containing compounds (7 materials)\
h) All cadmium-containing compounds (4 materials)\
After the removal, the usual L-10%-OCV was performed on the remaining materials. The purpose was to analyze the stability of the model when some crucial (or less crucial) elements/materials are removed completely from the data set. Also, we aimed at identifying outliers, i.e., data points whose removal from the set improves the accuracy of the fit. The results for a tier-3 feature space of size 2420 are summarized in Table \[T:LMO\].
--------------- ----------- ------ ------ ------ ------ --------------------
  Set             Dimension   RMSE   RMSE   MaxAE   Ratio   Descriptor
                              all    CV     CV
0\) all 1 0.14 0.15 0.28 0.85 $\mathfrak{A}$
2 0.10 0.11 0.18 0.99 $\mathfrak{A,B}$
3 0.08 0.09 0.17 0.95 $\mathfrak{A,B,C}$
a\) no CC 1 0.12 0.14 0.29 0.91 $\mathfrak{D}$
2 0.10 0.13 0.26 0.39 $\mathfrak{B,E}$
3 0.08 0.08 0.17 0.91 $\mathfrak{D,B,C}$
b\) no CC, BN 1 0.12 0.14 0.27 0.94 $\mathfrak{D}$
2 0.10 0.12 0.26 0.36 $\mathfrak{D,C}$
3 0.08 0.08 0.16 0.91 $\mathfrak{D,C,B}$
c\) no BN 1 0.14 0.19 0.39 0.75 $\mathfrak{A}$
2 0.10 0.13 0.26 0.83 $\mathfrak{A,B}$
3 0.08 0.11 0.24 0.87 $\mathfrak{A,B,C}$
d\) no C 1 0.12 0.13 0.27 0.76 $\mathfrak{F}$
2 0.09 0.12 0.23 0.74 $\mathfrak{B,E}$
3 0.07 0.10 0.20 0.47 $\mathfrak{D,B,C}$
e\) no B 1 0.13 0.18 0.37 0.37 $\mathfrak{G}$
2 0.10 0.14 0.30 0.42 $\mathfrak{A,B}$
3 0.08 0.12 0.24 0.38 $\mathfrak{A,B,C}$
f\) no CdO 1 0.14 0.18 0.38 0.77 $\mathfrak{A}$
2 0.10 0.12 0.23 0.86 $\mathfrak{A,B}$
3 0.08 0.10 0.22 0.90 $\mathfrak{A,B,C}$
g\) no O 1 0.13 0.16 0.33 0.79 $\mathfrak{A}$
2 0.10 0.13 0.27 0.74 $\mathfrak{A,B}$
2 0.08 0.12 0.25 0.33 $\mathfrak{A,B,H}$
h\) no Cd 1 0.14 0.18 0.38 0.80 $\mathfrak{A}$
2 0.10 0.13 0.26 0.84 $\mathfrak{A,B}$
3 0.08 0.11 0.24 0.89 $\mathfrak{A,B,C}$
--------------- ----------- ------ ------ ------ ------ --------------------
: Summary of the training and CV errors for 1-, 2-, 3-dimensional descriptors for different datasets. The last column reports the descriptor found using all data in the dataset for the training and the column “Ratio” reports the fraction of times the same descriptor was found over the L-10%-OCV iterations. CC means C-diamond.\[T:LMO\]
The code for the descriptors is: $$\begin{aligned}
\nonumber \mathfrak{A} &:
\frac{\textrm{IP(B)}-\textrm{EA(B)}}{r_p(\textrm{A})^2} &\, \mathfrak{B} &:
\frac{|r_s(\textrm{A})-r_p(\textrm{B})|}{\exp(r_s(\textrm{A}))} &\, \mathfrak{C}
&: \frac{|r_s(\textrm{B})-r_p(\textrm{B})|}{\exp(r_d(\textrm{A}))} \\
\nonumber \mathfrak{D} &: \frac{\textrm{IP(B)}}{\exp[(r_p(\textrm{A}))^2]} &\,
\mathfrak{E} &: \frac{|\textrm{H(B)}-\textrm{L(B)}|}{r_p(\textrm{A})^2} &\,
\mathfrak{F} &: \frac{\textrm{H(B)}}{r_p(\textrm{A})^2}\\
\mathfrak{G} &: \frac{\textrm{IP(A)}}{[r_s(\textrm{A})+r_p(\textrm{A})]^2} &\,
\mathfrak{H} &: \frac{\textrm{EA(A)}}{\exp[(r_d(\textrm{A}))^2]}\end{aligned}$$ We make the following observations:
- When C-diamond (or C-diamond together with BN) is excluded from the set, the errors are marginally smaller. However, the descriptor changes in both cases. In particular, the 3D descriptor is remarkably stable for both a) and b).
- Removal of all 4 carbon compounds leads to a similar behavior as in a) and b). However, removal of BN or of all boron compounds leads to a fit similar to 0).
- Carbon can thus be considered an anomaly, but it also carries important information for the stability of the overall fit.
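The descriptor formulas above translate directly into code. A sketch for $\mathfrak{A}$ and $\mathfrak{D}$, where the function names and any numeric inputs are purely illustrative and not values from the data set:

```python
import math

# Descriptor A: (IP(B) - EA(B)) / r_p(A)^2
def descriptor_A(IP_B, EA_B, rp_A):
    return (IP_B - EA_B) / rp_A**2

# Descriptor D: IP(B) / exp(r_p(A)^2)
def descriptor_D(IP_B, rp_A):
    return IP_B / math.exp(rp_A**2)
```

The remaining descriptors $\mathfrak{B}$–$\mathfrak{H}$ follow the same pattern: simple ratios of atomic energies (IP, EA, HOMO/LUMO) to exponentials or powers of the orbital radii.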
`PYTHON` script for running the LASSO algorithm with the scikit-learn library {#A:D}
==============================================================================
``` {.python language="Python"}
from sklearn import linear_model

# D is the feature matrix, P the vector of property values, and lam the
# regularization strength (named 'lam' since 'lambda' is a reserved word
# in Python).
lasso = linear_model.Lasso(alpha=lam)
lasso.fit(D, P)
coefficients = lasso.coef_
```
Here [sklearn]{} solves the problem in Eq. , with the squared $\ell_2$-norm of the residual scaled by a factor $\frac{1}{2N}$. The *bias* $c_0$ is included by default.
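As an illustrative sanity check of that $\frac{1}{2N}$ scaling (not taken from this work): for a single feature and no intercept, the minimizer of $\frac{1}{2N}\lVert P - Dw\rVert_2^2 + \lambda|w|$ has the closed form $w = \operatorname{sign}(\rho)\,\max(|\rho|-\lambda,0)/s$ with $\rho = \frac{1}{N}D^{\mathsf{T}}P$ and $s = \frac{1}{N}D^{\mathsf{T}}D$, which should match what `Lasso` returns.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic one-feature problem (placeholder data).
rng = np.random.default_rng(0)
N = 200
D = rng.normal(size=(N, 1))
P = 0.7 * D[:, 0] + 0.1 * rng.normal(size=N)
lam = 0.05

lasso = Lasso(alpha=lam, fit_intercept=False, tol=1e-10)
lasso.fit(D, P)

# Closed-form soft-thresholding solution under the 1/(2N) convention.
rho = (D[:, 0] @ P) / N
s = (D[:, 0] @ D[:, 0]) / N
w_closed = np.sign(rho) * max(abs(rho) - lam, 0.0) / s
```

If sklearn used a different scaling of the data-fit term, `w_closed` and `lasso.coef_[0]` would disagree.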
---
abstract: 'We define a faithful linear monoidal functor from the partition category, and hence from Deligne’s category ${{\mathrm{Rep}}}(S_t)$, to the Heisenberg category. We show that the induced map on Grothendieck rings is injective and corresponds to the Kronecker coproduct on symmetric functions.'
address:
- |
Department of Mathematics and Statistics\
University of Ottawa\
Ottawa, ON K1N 6N5, Canada
- |
Department of Mathematics and Statistics\
University of Ottawa\
Ottawa, ON K1N 6N5, Canada
- |
Department of Mathematics\
Massachusetts Institute of Technology\
Cambridge, MA 02139-4307, USA
author:
- |
Samuel Nyobe Likeng and Alistair Savage\
(appendix with Christopher Ryba)
bibliography:
- 'DeligneHeisenberg.bib'
title: 'Embedding Deligne’s category ${{\mathrm{Rep}}}(S_t)$ in the Heisenberg category'
---
Introduction\[intro\]
=====================
In [@Del07], Deligne introduced a linear monoidal category ${{\mathrm{Rep}}}(S_t)$ that interpolates between the categories of representations of the symmetric groups. In particular, when $t$ is a nonnegative integer $n$, the category of representations of $S_n$ is equivalent to the quotient of Deligne’s ${{\mathrm{Rep}}}(S_n)$ by the tensor ideal of negligible morphisms. One particularly efficient construction of ${{\mathrm{Rep}}}(S_t)$ is as the additive Karoubi envelope of the *partition category* ${\mathcal{P}\mathit{ar}}(t)$. The endomorphism algebras of the partition category are the *partition algebras* first introduced by Martin ([@Mar94]) and later, independently, by Jones ([@Jon94]) as a generalization of the Temperley–Lieb algebra and the Potts model in statistical mechanics. The partition algebras have a Schur–Weyl duality property with respect to the action of the symmetric groups on tensor powers of the permutation representation.
In [@Kho14], Khovanov defined another linear monoidal category, the *Heisenberg category* ${{\mathcal{H}\mathit{eis}}}$, which is also motivated by the representation theory of the symmetric groups. In particular, ${{\mathcal{H}\mathit{eis}}}$ acts on $\bigoplus_{n \ge 0} S_n{\textup{-mod}}$, where its two generating objects act by induction $S_n{\textup{-mod}}\to S_{n+1}{\textup{-mod}}$ and restriction $S_{n+1}{\textup{-mod}}\to S_n{\textup{-mod}}$. Morphisms in ${{\mathcal{H}\mathit{eis}}}$ act by natural transformations between compositions of induction and restriction functors.
Deligne’s category ${{\mathrm{Rep}}}(S_t)$ can be thought of as describing the representation theory of $S_n$ for arbitrary $n$ in a uniform way, but with $n$ fixed (and not necessarily a nonnegative integer). On the other hand, the Heisenberg category goes further, allowing $n$ to vary and describing the representation theory of all the symmetric groups at once. Thus, it is natural to expect a precise relationship between the two categories, with the Heisenberg category being larger. The goal of the current paper is to describe such a relationship.
For the purposes of this introduction, we describe our results in the case where $t$ is generic. Our first main result (\[functordef,faithful,bike\]) is the construction of a faithful strict linear monoidal functor $$\Psi_t \colon {\mathcal{P}\mathit{ar}}(t) \to {{\mathcal{H}\mathit{eis}}}.$$ This functor sends $t$ to the clockwise bubble in ${{\mathcal{H}\mathit{eis}}}$ and is compatible with the actions of ${\mathcal{P}\mathit{ar}}(t)$ and ${{\mathcal{H}\mathit{eis}}}$ on categories of modules for symmetric groups (\[actcom\]). Since Deligne’s category ${{\mathrm{Rep}}}(S_t)$ is the additive Karoubi envelope of the partition category, we have an induced faithful linear monoidal functor $$\Psi_t \colon {{\mathrm{Rep}}}(S_t) \to \operatorname{Kar}({{\mathcal{H}\mathit{eis}}}),$$ where $\operatorname{Kar}({{\mathcal{H}\mathit{eis}}})$ denotes the additive Karoubi envelope of the Heisenberg category ${{\mathcal{H}\mathit{eis}}}$.
The Grothendieck ring of ${{\mathrm{Rep}}}(S_t)$ is isomorphic to the ring $\operatorname{Sym}$ of symmetric functions. On the other hand, the Grothendieck ring of ${{\mathcal{H}\mathit{eis}}}$ is isomorphic to a central reduction ${\mathrm{Heis}}$ of the universal enveloping algebra of the Heisenberg Lie algebra. This was conjectured by Khovanov in [@Kho14 Conj. 1] and recently proved in [@BSW18 Th. 1.1]. We thus have an induced map $$[\Psi_t] \colon \operatorname{Sym}\cong K_0({{\mathrm{Rep}}}(S_t)) \to K_0({{\mathcal{H}\mathit{eis}}}) \cong {\mathrm{Heis}}.$$ Our second main result (\[finally\]) is that this map is injective and is given by the Kronecker coproduct on $\operatorname{Sym}$. We also describe the map induced by $\Psi_t$ on the traces (or zeroth Hochschild homologies) of ${{\mathrm{Rep}}}(S_t)$ and ${{\mathcal{H}\mathit{eis}}}$.
The partition algebras contain many so-called *diagram algebras* that have been well-studied in the literature. These include the Brauer algebras, Temperley–Lieb algebras, rook algebras, planar partition algebras, planar rook algebras, rook-Brauer algebras, and Motzkin algebras. As a result, the functor $\Psi_t$ also yields explicit embeddings of these algebras (and their associated categories) into the Heisenberg category.
We expect that the results of this paper are the starting point of a large number of precise connections between various algebras and categories that are well-studied in the literature. We list here some such possible extensions of the current work:
1. Replacing the role of the symmetric group with wreath product algebras, one should be able to define an embedding, analogous to $\Psi_t$, relating the $G$-colored partition algebras of [@Blo03], the wreath Deligne categories of [@Mor12; @Kno07], and the Frobenius Heisenberg categories of [@RS17; @Sav18Frob].
2. Quantum versions of $\Psi_t$ should exist relating the $q$-partition algebras of [@HT10], a quantum analogue of Deligne’s category, and the quantum Heisenberg category of [@LS13; @BSW18quant].
3. Replacing the role of the symmetric group by more general degenerate cyclotomic Hecke algebras should relate the categories of [@Eti14 §5.1] to the higher central charge Heisenberg categories of [@MS18; @Bru18].
The embedding of Deligne’s category ${{\mathrm{Rep}}}(S_t)$ in the Heisenberg category also suggests that there should exist “Heisenberg categories” corresponding to the other Deligne categories, including ${{\mathrm{Rep}}}(O_t)$ and ${{\mathrm{Rep}}}(GL_t)$.
The organization of this paper is as follows. In \[sec:partition\] we recall the definition of the partition category and Deligne’s category ${{\mathrm{Rep}}}(S_t)$. We then recall the Heisenberg category in \[sec:Heis\]. We define the functor $\Psi_t$ in \[sec:functor\]. In \[sec:faithful\] we show that $\Psi_t$ intertwines the natural categorical actions on categories of modules of symmetric groups and that it is faithful when ${\Bbbk}$ is an integral domain of characteristic zero. Finally, in \[sec:Groth\] we discuss the induced map on Grothendieck rings and traces. In \[appendix\], we show that $\Psi_t$ is faithful when ${\Bbbk}$ is any commutative ring.
Notation {#notation .unnumbered}
--------
Throughout, ${\Bbbk}$ denotes an arbitrary commutative ring unless otherwise specified. We let ${\mathbb{N}}$ denote the additive monoid of nonnegative integers.
Acknowledgements {#acknowledgements .unnumbered}
----------------
This research of A. Savage was supported by Discovery Grant RGPIN-2017-03854 from the Natural Sciences and Engineering Research Council of Canada. S. Nyobe Likeng was also supported by this Discovery Grant. The authors would like to thank Georgia Benkart, Victor Ostrik, Michael Reeks, and Ben Webster for useful conversations, Jon Brundan for helpful comments on an earlier draft of the paper, and Christopher Ryba for suggesting the proof given in \[appendix\].
The partition category and Deligne’s category ${{\mathrm{Rep}}}(S_t)$\[sec:partition\]
======================================================================================
In this section we recall the definition and some important facts about one of our main objects of study. We refer the reader to [@Sav18] for a brief treatment of the language of string diagrams and strict linear monoidal categories suited to the current work. For a morphism $X$ in a category, we will denote the identity morphism on $X$ by $1_X$.
For $m,\ell \in {\mathbb{N}}$, a *partition* of type $\binom{\ell}{m}$ is a partition of the set $\{1,\dotsc,m,1',\dotsc,\ell'\}$. The elements of the partition will be called *blocks*. We will depict such a partition as a simple graph with $\ell$ vertices in the top row, labelled $1',\dotsc,\ell'$ from *right to left*, and $m$ vertices in the bottom row, labelled $1,\dotsc,m$ from *right to left*. (We choose the right-to-left numbering convention to better match with the Heisenberg category later.) We draw edges joining elements of each block of the partition. Thus, the blocks are the connected components of the graph. For example, the partition $\big\{ \{1,5\}, \{2\}, \{3,1'\}, \{4,4',7'\}, \{2', 3'\}, \{5'\}, \{6'\} \big\}$ of type $\binom{7}{5}$ is depicted as follows: $$\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0.5,0) circle (1.5pt)} node[anchor=north] {$5$};
{\filldraw[black] (1,0) circle (1.5pt)} node[anchor=north] {$4$};
{\filldraw[black] (1.5,0) circle (1.5pt)} node[anchor=north] {$3$};
{\filldraw[black] (2,0) circle (1.5pt)} node[anchor=north] {$2$};
{\filldraw[black] (2.5,0) circle (1.5pt)} node[anchor=north] {$1$};
{\filldraw[black] (0,1) circle (1.5pt)} node[anchor=south] {$7'$};
{\filldraw[black] (0.5,1) circle (1.5pt)} node[anchor=south] {$6'$};
{\filldraw[black] (1,1) circle (1.5pt)} node[anchor=south] {$5'$};
{\filldraw[black] (1.5,1) circle (1.5pt)} node[anchor=south] {$4'$};
{\filldraw[black] (2,1) circle (1.5pt)} node[anchor=south] {$3'$};
{\filldraw[black] (2.5,1) circle (1.5pt)} node[anchor=south] {$2'$};
{\filldraw[black] (3,1) circle (1.5pt)} node[anchor=south] {$1'$};
\draw (1,0) {to[out=up,in=down]}(0,1);
\draw (1,0) {to[out=up,in=down]}(1.5,1);
\draw (0.5,0) to[out=up,in=up] (2.5,0);
\draw (1.5,0) {to[out=up,in=down]}(3,1);
\draw (2,1) to[out=down,in=down,looseness=1.5] (2.5,1);
\end{tikzpicture}$$ From now on, we will omit the labels of the vertices when drawing partition diagrams. We write $D \colon m \to \ell$ to indicate that $D$ is a partition of type $\binom{\ell}{m}$. We denote the unique partition diagrams of types $\binom{1}{0}$ and $\binom{0}{1}$ by $$\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0.5) circle (1.5pt)};
\draw (0,0.25) to (0,0.5);
\end{tikzpicture}
\ \colon 0 \to 1
\qquad \text{and} \qquad
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
\draw (0,0) to (0,0.25);
\end{tikzpicture}
\ \colon 1 \to 0.$$
Given two partitions $D' \colon m \to \ell$, $D \colon \ell \to k$, one can stack $D$ on top of $D'$ to obtain a simple diagram $\begin{matrix} D \\ D' \end{matrix}$ with three rows of vertices. The number of connected components of $\begin{matrix} D \\ D' \end{matrix}$ contained entirely in the middle row is denoted $\alpha(D, D')$. Let $D \star D'$ be the partition of type $\binom{k}{m}$ with the following property: vertices are in the same block of $D \star D'$ if and only if the corresponding vertices in the top and bottom rows of $\begin{matrix} D \\ D' \end{matrix}$ are in the same connected component.
Recall that ${\Bbbk}$ is a commutative ring and fix $t \in {\Bbbk}$. The *partition category* ${\mathcal{P}\mathit{ar}}(t)$ is the strict ${\Bbbk}$-linear monoidal category whose objects are nonnegative integers and, given two objects $m,\ell$ in ${\mathcal{P}\mathit{ar}}(t)$, the arrows from $m$ to $\ell$ are ${\Bbbk}$-linear combinations of partitions of type $\binom{\ell}{m}$. The vertical composition is given by $$D \circ D'
=t^{\alpha(D,D')} D \star D'$$ for composable partition diagrams $D,D'$, and extended by linearity. The bifunctor $\otimes$ is given on objects by $$\otimes \colon {\mathcal{P}\mathit{ar}}(t) \times {\mathcal{P}\mathit{ar}}(t) \to {\mathcal{P}\mathit{ar}}(t),\quad (m,n)\mapsto m+n.$$ The tensor product on morphisms is given by horizontal juxtaposition of diagrams, extended by linearity.
For example, if $$D' =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
{\filldraw[black] (1,1) circle (1.5pt)};
{\filldraw[black] (1.5,1) circle (1.5pt)};
{\filldraw[black] (2,1) circle (1.5pt)};
{\filldraw[black] (2.5,1) circle (1.5pt)};
{\filldraw[black] (3,1) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (1.5,0) circle (1.5pt)};
{\filldraw[black] (2,0) circle (1.5pt)};
{\filldraw[black] (2.5,0) circle (1.5pt)};
\draw (1,0) {to[out=up,in=down]}(0,1);
\draw (1,0) {to[out=up,in=down]}(1.5,1);
\draw (0.5,0) to[out=up,in=up] (2.5,0);
\draw (1.5,0) {to[out=up,in=down]}(3,1);
\draw (2,1) to[out=down,in=down,looseness=1.5] (2.5,1);
\end{tikzpicture}
\qquad \text{and} \qquad
D =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
{\filldraw[black] (1,1) circle (1.5pt)};
{\filldraw[black] (1.5,1) circle (1.5pt)};
{\filldraw[black] (2,1) circle (1.5pt)};
{\filldraw[black] (-0.5,0) circle (1.5pt)};
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (1.5,0) circle (1.5pt)};
{\filldraw[black] (2,0) circle (1.5pt)};
{\filldraw[black] (2.5,0) circle (1.5pt)};
\draw (-0.5,0) {to[out=up,in=down]}(0,1);
\draw (0.5,1) to[out=down,in=down] (1.5,1);
\draw (0,0) to[out=up,in=up] (1,0);
\draw (2.5,0) {to[out=up,in=down]}(1,1);
\draw (2.5,0) {to[out=up,in=down]}(2,1);
\end{tikzpicture}$$ then $$\begin{matrix}
D \\ D'
\end{matrix}
\ = \
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
{\filldraw[black] (1,1) circle (1.5pt)};
{\filldraw[black] (1.5,1) circle (1.5pt)};
{\filldraw[black] (2,1) circle (1.5pt)};
{\filldraw[black] (2.5,1) circle (1.5pt)};
{\filldraw[black] (3,1) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (1.5,0) circle (1.5pt)};
{\filldraw[black] (2,0) circle (1.5pt)};
{\filldraw[black] (2.5,0) circle (1.5pt)};
\draw (1,0) {to[out=up,in=down]}(0,1);
\draw (1,0) {to[out=up,in=down]}(1.5,1);
\draw (0.5,0) to[out=up,in=up] (2.5,0);
\draw (1.5,0) {to[out=up,in=down]}(3,1);
\draw (2,1) to[out=down,in=down,looseness=1.5] (2.5,1);
{\filldraw[black] (0.5,2) circle (1.5pt)};
{\filldraw[black] (1,2) circle (1.5pt)};
{\filldraw[black] (1.5,2) circle (1.5pt)};
{\filldraw[black] (2,2) circle (1.5pt)};
{\filldraw[black] (2.5,2) circle (1.5pt)};
\draw (0,1) {to[out=up,in=down]}(0.5,2);
\draw (1,2) to[out=down,in=down] (2,2);
\draw (0.5,1) to[out=up,in=up] (1.5,1);
\draw (3,1) {to[out=up,in=down]}(1.5,2);
\draw (3,1) {to[out=up,in=down]}(2.5,2);
\end{tikzpicture}
\ ,\quad
D \star D' =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (1.5,0) circle (1.5pt)};
{\filldraw[black] (2,0) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
{\filldraw[black] (1,1) circle (1.5pt)};
{\filldraw[black] (1.5,1) circle (1.5pt)};
{\filldraw[black] (2,1) circle (1.5pt)};
\draw (0,0) to[out=up,in=up] (2,0);
\draw (0.5,0) {to[out=up,in=down]}(0,1);
\draw (0.5,1) to[out=down,in=down] (1.5,1);
\draw (1,0) to (1,1);
\draw (1,0) {to[out=up,in=down]}(2,1);
\end{tikzpicture}
\ ,\quad \text{and} \quad
D \circ D' = t^2\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (1.5,0) circle (1.5pt)};
{\filldraw[black] (2,0) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
{\filldraw[black] (1,1) circle (1.5pt)};
{\filldraw[black] (1.5,1) circle (1.5pt)};
{\filldraw[black] (2,1) circle (1.5pt)};
\draw (0,0) to[out=up,in=up] (2,0);
\draw (0.5,0) {to[out=up,in=down]}(0,1);
\draw (0.5,1) to[out=down,in=down] (1.5,1);
\draw (1,0) to (1,1);
\draw (1,0) {to[out=up,in=down]}(2,1);
\end{tikzpicture}
\ .$$ The partition category is denoted ${{\mathrm{Rep}}}_0(S_t)$ in [@Del07] and $\underline{{{\mathrm{Rep}}}}_0(S_t; {\Bbbk})$ in [@CO11].
For a linear monoidal category ${\mathcal{C}}$, we let $\operatorname{Kar}({\mathcal{C}})$ denote its additive Karoubi envelope, that is, the idempotent completion of its additive envelope $\operatorname{Add}({\mathcal{C}})$. Then $\operatorname{Kar}({\mathcal{C}})$ is again naturally a linear monoidal category. *Deligne’s category* ${{\mathrm{Rep}}}(S_t)$ is the additive Karoubi envelope of ${\mathcal{P}\mathit{ar}}(t)$. (See [@Del07 §8] and [@CO11 §2.2].)
The following proposition gives a presentation of the partition category.
\[Ppresent\] As a ${\Bbbk}$-linear monoidal category, the *partition category* ${\mathcal{P}\mathit{ar}}(t)$ is generated by the object $1$ and the morphisms $$\mu =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0.25,0.5) circle (1.5pt)};
\draw (0,0) {to[out=up,in=down]}(0.25,0.5);
\draw (0.5,0) {to[out=up,in=down]}(0.25,0.5);
\end{tikzpicture}
\colon 2 \to 1,\quad
{\delta}=
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (0.25,0) circle (1.5pt)};
\draw (0.25,0) {to[out=up,in=down]}(0,0.5);
\draw (0.25,0) {to[out=up,in=down]}(0.5,0.5);
\end{tikzpicture}
\colon 1 \to 2,\quad
s =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
\draw (0,0) {to[out=up,in=down]}(0.5,0.5);
\draw (0.5,0) {to[out=up,in=down]}(0,0.5);
\end{tikzpicture}
\colon 2 \to 2,\quad
\eta =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0.5) circle (1.5pt)};
\draw(0,0.25) to (0,0.5);
\end{tikzpicture}
\ \colon 0 \to 1,\quad
\varepsilon =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
\draw (0,0.25) to (0,0);
\end{tikzpicture}
\ \colon 1 \to 0,$$ subject to the following relations: $$\begin{gathered}
\label{P1}
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.25,1) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
\draw (0,0) to (0,0.5) {to[out=up,in=down]}(0.25,1);
\draw (0.5,0.25) to (0.5,0.5) {to[out=up,in=down]}(0.25,1);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
\draw (0,0) to (0,0.5);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (0.25,1) circle (1.5pt)};
\draw (0.5,0) to (0.5,0.5) {to[out=up,in=down]}(0.25,1);
\draw (0,0.25) to (0,0.5) {to[out=up,in=down]}(0.25,1);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0.25,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
\draw (0.25,0) {to[out=up,in=down]}(0,0.5) to (0,1);
\draw (0.25,0) {to[out=up,in=down]}(0.5,0.5) to (0.5,0.75);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
\draw (0,0) to (0,0.5);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0.25,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
\draw (0.25,0) {to[out=up,in=down]}(0.5,0.5) to (0.5,1);
\draw (0.25,0) {to[out=up,in=down]}(0,0.5) to (0,0.75);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
{\filldraw[black] (1,1) circle (1.5pt)};
\draw (0,0) to (0,0.5) {to[out=up,in=down]}(0.5,1);
\draw (0.5,0) to (0.5,1);
\draw (0.5,0) {to[out=up,in=down]}(1,0.5) to (1,1);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0.25,0.5) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
\draw (0,0) {to[out=up,in=down]}(0.25,0.5) {to[out=up,in=down]}(0,1);
\draw (0.5,0) {to[out=up,in=down]}(0.25,0.5) {to[out=up,in=down]}(0.5,1);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
\draw (0.5,0) {to[out=up,in=down]}(0,0.5) to (0,1);
\draw (0.5,0) to (0.5,1);
\draw (1,0) to (1,0.5) {to[out=up,in=down]}(0.5,1);
\end{tikzpicture}
\ ,
\\ \label{P2}
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
\draw (0,0) {to[out=up,in=down]}(0.5,0.5) {to[out=up,in=down]}(0,1);
\draw (0.5,0) {to[out=up,in=down]}(0,0.5) {to[out=up,in=down]}(0.5,1);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
\draw (0,0) to (0,0.5);
\draw (0.5,0) to (0.5,0.5);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
{\filldraw[black] (1,1) circle (1.5pt)};
{\filldraw[black] (0,1.5) circle (1.5pt)};
{\filldraw[black] (0.5,1.5) circle (1.5pt)};
{\filldraw[black] (1,1.5) circle (1.5pt)};
\draw (0,0) to (0,0.5) {to[out=up,in=down]}(0.5,1) {to[out=up,in=down]}(1,1.5);
\draw (0.5,0) {to[out=up,in=down]}(1,0.5) to (1,1) {to[out=up,in=down]}(0.5,1.5);
\draw (1,0) {to[out=up,in=down]}(0.5,0.5) {to[out=up,in=down]}(0,1) to (0,1.5);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
{\filldraw[black] (1,1) circle (1.5pt)};
{\filldraw[black] (0,1.5) circle (1.5pt)};
{\filldraw[black] (0.5,1.5) circle (1.5pt)};
{\filldraw[black] (1,1.5) circle (1.5pt)};
\draw (0,0) {to[out=up,in=down]}(0.5,0.5) {to[out=up,in=down]}(1,1) to (1,1.5);
\draw (0.5,0) {to[out=up,in=down]}(0,0.5) to (0,1) {to[out=up,in=down]}(0.5,1.5);
\draw (1,0) to (1,0.5) {to[out=up,in=down]}(0.5,1) {to[out=up,in=down]}(0,1.5);
\end{tikzpicture}
\ ,
\\ \label{P3}
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
\draw (0,0) to (0,0.5) {to[out=up,in=down]}(0.5,1);
\draw (0.5,0.25) to (0.5,0.5) {to[out=up,in=down]}(0,1);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
\draw (0,0.25) to (0,0.5);
\draw (0.5,0) to (0.5,0.5);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
{\filldraw[black] (1,1) circle (1.5pt)};
{\filldraw[black] (0,1.5) circle (1.5pt)};
{\filldraw[black] (0.5,1.5) circle (1.5pt)};
\draw (0,0) to (0,0.5) {to[out=up,in=down]}(0.5,1) to (0.5,1.5);
\draw (0.5,0) {to[out=up,in=down]}(1,0.5) to (1,1) {to[out=up,in=down]}(0.5,1.5);
\draw (1,0) {to[out=up,in=down]}(0.5,0.5) {to[out=up,in=down]}(0,1) to (0,1.5);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
{\filldraw[black] (1,1) circle (1.5pt)};
\draw (0,0) {to[out=up,in=down]}(0.5,0.5) {to[out=up,in=down]}(1,1);
\draw (0.5,0) to (0.5,0.5);
\draw (1,0) to (1,0.5) {to[out=up,in=down]}(0.5,1);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
\draw (0,0) {to[out=up,in=down]}(0.5,0.5) to (0.5,0.75);
\draw (0.5,0) {to[out=up,in=down]}(0,0.5) to (0,1);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
\draw (0,0) to (0,0.25);
\draw (0.5,0) to (0.5,0.5);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
{\filldraw[black] (1,1) circle (1.5pt)};
{\filldraw[black] (0,1.5) circle (1.5pt)};
{\filldraw[black] (0.5,1.5) circle (1.5pt)};
{\filldraw[black] (1,1.5) circle (1.5pt)};
\draw (0,0) to (0,0.5) {to[out=up,in=down]}(0.5,1) {to[out=up,in=down]}(1,1.5);
\draw (0.5,0) to (0.5,0.5) {to[out=up,in=down]}(0,1) to (0,1.5);
\draw (0.5,0) {to[out=up,in=down]}(1,0.5) to (1,1) {to[out=up,in=down]}(0.5,1.5);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
{\filldraw[black] (0,1) circle (1.5pt)};
{\filldraw[black] (0.5,1) circle (1.5pt)};
{\filldraw[black] (1,1) circle (1.5pt)};
\draw (0.5,0) {to[out=up,in=down]}(1,0.5) to (1,1);
\draw (1,0) {to[out=up,in=down]}(0.5,0.5) {to[out=up,in=down]}(0,1);
\draw (0.5,0.5) to (0.5,1);
\end{tikzpicture}
\ ,
\\ \label{P4}
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (0.25,1) circle (1.5pt)};
\draw (0,0) {to[out=up,in=down]}(0.5,0.5) {to[out=up,in=down]}(0.25,1);
\draw (0.5,0) {to[out=up,in=down]}(0,0.5) {to[out=up,in=down]}(0.25,1);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0.25,0.5) circle (1.5pt)};
\draw (0,0) {to[out=up,in=down]}(0.25,0.5);
\draw (0.5,0) {to[out=up,in=down]}(0.25,0.5);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0.25,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (0.25,1) circle (1.5pt)};
\draw (0.25,0) {to[out=up,in=down]}(0,0.5) {to[out=up,in=down]}(0.25,1);
\draw (0.25,0) {to[out=up,in=down]}(0.5,0.5) {to[out=up,in=down]}(0.25,1);
\end{tikzpicture}
\ = \
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
\draw (0,0) to (0,0.5);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
\draw (0,-0.25) to (0,0.25);
\end{tikzpicture}
= t 1_0.
\end{gathered}$$
This result is proved in [@Com16 Th. 2.1]. While it is assumed throughout [@Com16] that ${\Bbbk}$ is a field of characteristic not equal to $2$, these restrictions are not needed in the proof of [@Com16 Th. 2.1]. The essence of the proof is noting that ${\mathcal{P}\mathit{ar}}(t)$ is isomorphic to the category obtained from the ${\Bbbk}$-linearization of a skeleton of the category $2\mathcal{C}\mathit{ob}$ of 2-dimensional cobordisms by factoring out by the second and third relations in \[P4\]. Then the result is deduced from the presentation of $2\mathcal{C}\mathit{ob}$ described in [@Koc04 §1.4].
The relations \[P1\] are equivalent to the statement that $(1,\mu,\eta,{\delta},\varepsilon)$ is a Frobenius object (see, for example, [@Koc04 Prop. 2.3.24]). Relations \[P2,P3\] are precisely the statement that $s$ equips ${\mathcal{P}\mathit{ar}}(t)$ with the structure of a symmetric monoidal category (see, for example, [@Koc04 §1.3.27, §1.4.35]). Then the relations \[P4\] are precisely the statements that the Frobenius object $1$ is commutative, special, and of dimension $t$, respectively. Thus, \[Ppresent\] states that ${\mathcal{P}\mathit{ar}}(t)$ is the free ${\Bbbk}$-linear symmetric monoidal category generated by a $t$-dimensional special commutative Frobenius object.
The endomorphism algebra $P_k(t) := \operatorname{End}_{{\mathcal{P}\mathit{ar}}(t)}(k)$ is called the *partition algebra*. We have a natural algebra homomorphism $$\label{garage}
{\Bbbk}S_k \to P_k(t),$$ mapping $\tau \in S_k$ to the partition with blocks $\{i,\tau(i)'\}$, $1 \le i \le k$.
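For example, when $k = 2$, the map \[garage\] sends $$1 \mapsto \{\{1,1'\},\{2,2'\}\}, \qquad (1\,2) \mapsto \{\{1,2'\},\{2,1'\}\},$$ the latter being the crossing partition $s$.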
For the remainder of this section, suppose ${\Bbbk}$ is a field. Let $V = {\Bbbk}^n$ be the permutation representation of $S_n$ and let $\mathbf{1}_n$ denote the one-dimensional trivial $S_n$-module. As explained in [@Com16 §2.4], there is a strong monoidal functor $\Phi_n \colon {\mathcal{P}\mathit{ar}}(n) \to S_n{\textup{-mod}}$ defined on generators by setting $\Phi_n(1) = V$ and $$\begin{aligned}
\Phi_n(\mu) &\colon V \otimes V \to V,& v_i\otimes v_j &\mapsto \delta_{i,j}v_i, \\
\Phi_n(\eta) &\colon \mathbf{1}_n \to V,& 1 &\mapsto \textstyle \sum_{i=1}^{n} v_i, \\
\Phi_n({\delta}) &\colon V \to V \otimes V,& v_i &\mapsto v_i \otimes v_i, \\
\Phi_n(\varepsilon) &\colon V \to \mathbf{1}_n,& v_i &\mapsto 1, \\
\Phi_n(s) &\colon V \otimes V \to V \otimes V,& v_i\otimes v_j &\mapsto v_j\otimes v_i.\end{aligned}$$ The proposition below is a generalization of the Schur–Weyl duality property of the partition algebra mentioned in the introduction (see [@HR05 Th. 3.6] and [@CST10 Th. 8.3.13]).
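As a quick check of consistency with \[Ppresent\], the images of the generators satisfy the last two relations in \[P4\] with $t = n$: $$\Phi_n(\mu) \circ \Phi_n({\delta})(v_i) = \Phi_n(\mu)(v_i \otimes v_i) = v_i,
\qquad
\Phi_n(\varepsilon) \circ \Phi_n(\eta)(1) = \Phi_n(\varepsilon)\Big(\textstyle\sum_{i=1}^{n} v_i\Big) = n,$$ so that $V = \Phi_n(1)$ is a special Frobenius object of dimension $n$ in $S_n{\textup{-mod}}$.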
\[bee\]
1. The functor $\Phi_n$ is full.
2. The induced map $$\operatorname{Hom}_{{\mathcal{P}\mathit{ar}}(n)}(k,\ell)\to\operatorname{Hom}_{S_n}(V^{\otimes k}, V^{\otimes \ell})$$ is an isomorphism if and only if $k + \ell \leq n$.
This is proved in [@Com16 Th. 2.3]. While it is assumed throughout [@Com16] that ${\Bbbk}$ is a field of characteristic not equal to $2$, that assumption is not needed in the proof of [@Com16 Th. 2.3]. When $k=\ell$, the current proposition reduces to a statement about the partition algebra; see [@HR05 Th. 3.6].
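For example, the induced map fails to be injective already for $n = 1$ and $k = \ell = 1$: the space $\operatorname{Hom}_{{\mathcal{P}\mathit{ar}}(1)}(1,1) = P_1(1)$ has basis given by the two partitions $\{\{1,1'\}\}$ and $\{\{1\},\{1'\}\}$ of $\{1,1'\}$, while $\operatorname{Hom}_{S_1}(V,V)$ is one-dimensional since $V = {\Bbbk}$.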
The Heisenberg category\[sec:Heis\]
===================================
In this section we define the Heisenberg category originally introduced by Khovanov in [@Kho14]. This is the central charge $-1$ case of a more general Heisenberg category described in [@MS18; @Bru18]. We give here the efficient presentation of this category described in [@Bru18 Rem. 1.5(2)].
The *Heisenberg category* ${{\mathcal{H}\mathit{eis}}}$ is the strict ${\Bbbk}$-linear monoidal category generated by two objects $\uparrow$ and $\downarrow$ (we use horizontal juxtaposition to denote the tensor product) and morphisms $$\begin{tikzpicture}[anchorbase]
\draw[->] (0.6,0) -- (0,0.6);
\draw[->] (0,0) -- (0.6,0.6);
\end{tikzpicture}
\colon \uparrow \uparrow\ \to\ \uparrow \uparrow
, \quad
\begin{tikzpicture}[anchorbase]
\draw[->] (0,.2) -- (0,0) arc (180:360:.3) -- (.6,.2);
\end{tikzpicture}
\ \colon {\mathbbm{1}}\to\ \downarrow \uparrow
, \quad
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-.2) -- (0,0) arc (180:0:.3) -- (.6,-.2);
\end{tikzpicture}
\ \colon \uparrow \downarrow\ \to {\mathbbm{1}}, \quad
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,.2) -- (0,0) arc (180:360:.3) -- (.6,.2);
\end{tikzpicture}
\ \colon {\mathbbm{1}}\to\ \uparrow \downarrow
, \quad
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,0) -- (0,.2) arc (180:0:.3) -- (.6,0);
\end{tikzpicture}
\ \colon \downarrow \uparrow\ \to {\mathbbm{1}},$$ where ${\mathbbm{1}}$ denotes the unit object, subject to the relations $$\begin{gathered}
\label{H1}
\begin{tikzpicture}[anchorbase]
\draw[->] (0.3,0) {to[out=up,in=down]}(-0.3,0.6) {to[out=up,in=down]}(0.3,1.2);
\draw[->] (-0.3,0) to[out=up,in=down] (0.3,0.6) {to[out=up,in=down]}(-0.3,1.2);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
\draw[->] (-0.2,0) -- (-0.2,1.2);
\draw[->] (0.2,0) -- (0.2,1.2);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
\draw[->] (0.4,0) -- (-0.4,1.2);
\draw[->] (0,0) {to[out=up,in=down]}(-0.4,0.6) {to[out=up,in=down]}(0,1.2);
\draw[->] (-0.4,0) -- (0.4,1.2);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
\draw[->] (0.4,0) -- (-0.4,1.2);
\draw[->] (0,0) {to[out=up,in=down]}(0.4,0.6) {to[out=up,in=down]}(0,1.2);
\draw[->] (-0.4,0) -- (0.4,1.2);
\end{tikzpicture}
\ ,
\\ \label{H2}
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) -- (0,0.6) arc(180:0:0.2) -- (0.4,0.4) arc(180:360:0.2) -- (0.8,1);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) -- (0,1);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
\draw[->] (0,1) -- (0,0.4) arc(180:360:0.2) -- (0.4,0.6) arc(180:0:0.2) -- (0.8,0);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,0) -- (0,1);
\end{tikzpicture}
\ ,
\\ \label{H3}
\begin{tikzpicture}[anchorbase]
\draw[->] (-0.3,-0.6) {to[out=up,in=down]}(0.3,0) {to[out=up,in=down]}(-0.3,0.6);
\draw[<-] (0.3,-0.6) {to[out=up,in=down]}(-0.3,0) {to[out=up,in=down]}(0.3,0.6);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
\draw[->] (-0.2,-0.6) to (-0.2,0.6);
\draw[<-] (0.2,-0.6) to (0.2,0.6);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
\draw[<-] (-0.3,-0.6) {to[out=up,in=down]}(0.3,0) {to[out=up,in=down]}(-0.3,0.6);
\draw[->] (0.3,-0.6) {to[out=up,in=down]}(-0.3,0) {to[out=up,in=down]}(0.3,0.6);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
\draw[<-] (-0.2,-0.6) to (-0.2,0.6);
\draw[->] (0.2,-0.6) to (0.2,0.6);
\end{tikzpicture}
-
\begin{tikzpicture}[anchorbase]
\draw[<-] (-0.3,0) to (-0.3,0.2) arc(180:0:0.3) to (0.3,0);
\draw[->] (-0.3,1.2) to (-0.3,1) arc(-180:0:0.3) to (0.3,1.2);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,0.6) to (0,0.3);
\draw (-0.3,-0.2) to [out=180,in=-90](-.5,0);
\draw (-0.5,0) to [out=90,in=180](-.3,0.2);
\draw (-0.3,.2) to [out=0,in=90](0,-0.3);
\draw (0,-0.3) to (0,-0.6);
\draw (0,0.3) to [out=-90,in=0] (-.3,-0.2);
\end{tikzpicture}
\ = 0,
\qquad
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0.3) arc(90:450:0.3);
\end{tikzpicture}
= 1_{\mathbbm{1}}.\end{gathered}$$ Here the left and right crossings are defined by $$\begin{tikzpicture}[anchorbase]
\draw[<-] (0,0) -- (0.6,0.6);
\draw[->] (0.6,0) -- (0,0.6);
\end{tikzpicture}
\ :=\
\begin{tikzpicture}[anchorbase]
\draw[->] (-0.2,-0.3) to (0.2,0.3);
\draw[<-] (-0.6,-0.3) to[out=up,in=135,looseness=2] (0,0) to[out=-45,in=down,looseness=2] (0.6,0.3);
\end{tikzpicture}
\ ,\qquad
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) -- (0.6,0.6);
\draw[<-] (0.6,0) -- (0,0.6);
\end{tikzpicture}
\ :=\
\begin{tikzpicture}[anchorbase]
\draw[<-] (-0.2,-0.3) to (0.2,0.3);
\draw[->] (-0.6,-0.3) to[out=up,in=135,looseness=2] (0,0) to[out=-45,in=down,looseness=2] (0.6,0.3);
\end{tikzpicture}
\ .$$
The category ${{\mathcal{H}\mathit{eis}}}$ is strictly pivotal, meaning that morphisms are invariant under isotopy (see [@Bru18 Th. 1.3(ii),(iii)]). The relations \[H3\] imply that $$\label{key}
\downarrow \uparrow\ \cong\ \uparrow \downarrow \oplus {\mathbbm{1}}.$$ In addition, we have the following *bubble slide* relations (see [@Kho14 p. 175], [@Bru18 (13), (19)]): $$\label{bubslide}
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0.3) arc(450:90:0.3);
\draw[->] (0.6,-0.5) to (0.6,0.5);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0.3) arc(450:90:0.3);
\draw[->] (-0.6,-0.5) to (-0.6,0.5);
\end{tikzpicture}
\ +\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.5) to (0,0.5);
\end{tikzpicture}
\qquad \text{and} \qquad
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0.3) arc(450:90:0.3);
\draw[<-] (0.6,-0.5) to (0.6,0.5);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0.3) arc(450:90:0.3);
\draw[<-] (-0.6,-0.5) to (-0.6,0.5);
\end{tikzpicture}
\ -\
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,-0.5) to (0,0.5);
\end{tikzpicture}.$$ We can define downward crossings $$\begin{tikzpicture}[anchorbase]
\draw[<-] (0,0) -- (0.6,0.6);
\draw[<-] (0.6,0) -- (0,0.6);
\end{tikzpicture}
\ :=\
\begin{tikzpicture}[anchorbase]
\draw[<-] (-0.2,-0.3) to (0.2,0.3);
\draw[<-] (-0.6,-0.3) to[out=up,in=135,looseness=2] (0,0) to[out=-45,in=down,looseness=2] (0.6,0.3);
\end{tikzpicture}$$ and then we have $$\label{genbraid}
\begin{tikzpicture}[anchorbase]
\draw (0.4,0) -- (-0.4,1.2);
\draw (0,0) {to[out=up,in=down]}(-0.4,0.6) {to[out=up,in=down]}(0,1.2);
\draw (-0.4,0) -- (0.4,1.2);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
\draw (0.4,0) -- (-0.4,1.2);
\draw (0,0) {to[out=up,in=down]}(0.4,0.6) {to[out=up,in=down]}(0,1.2);
\draw (-0.4,0) -- (0.4,1.2);
\end{tikzpicture}
\quad \text{for all possible orientations of the strands}$$ (see [@Kho14 p. 175], [@Bru18 (20)]).
For $1 \le i \le k-1$, let $s_i \in S_k$ denote the simple transposition of $i$ and $i+1$. We have natural algebra homomorphisms $$\label{SunnyD}
{\Bbbk}S_k \to \operatorname{End}_{{\mathcal{H}\mathit{eis}}}(\uparrow^k)
\quad \text{and} \quad
{\Bbbk}S_k \to \operatorname{End}_{{\mathcal{H}\mathit{eis}}}(\downarrow^k),$$ where $s_i$ is mapped to the crossing of strands $i$ and $i+1$, numbering strands *from right to left*.
Let ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}$ denote the full ${\Bbbk}$-linear monoidal subcategory of ${{\mathcal{H}\mathit{eis}}}$ generated by $\uparrow \downarrow$. It follows immediately from \[bubslide\] that $$\begin{tikzpicture}[anchorbase]
\draw[->] (0,0.3) arc(450:90:0.3);
\draw[->] (0.6,-0.5) to (0.6,0.5);
\draw[<-] (0.9,-0.5) to (0.9,0.5);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0.3) arc(450:90:0.3);
\draw[<-] (-0.6,-0.5) to (-0.6,0.5);
\draw[->] (-0.9,-0.5) to (-0.9,0.5);
\end{tikzpicture}
\ .$$ In other words, the clockwise bubble is strictly central in ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}$. Thus, fixing $t \in {\Bbbk}$, we can define ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}(t)$ to be the quotient of ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}$ by the additional relation $$\label{H4}
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0.3) arc(450:90:0.3);
\end{tikzpicture}
\ = t 1_{\mathbbm{1}}.$$
We adopt the convention that $s_i s_{i+1} \dotsm s_j = 1$ when $i > j$. Then the elements $$\label{gi-def}
g_i = s_i s_{i+1} \dotsm s_{n-1},\quad i=1,\dotsc,n,$$ form a complete set of right coset representatives of $S_{n-1}$ in $S_n$.
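For instance, when $n = 3$, the formula \[gi-def\] gives $$g_1 = s_1 s_2, \qquad g_2 = s_2, \qquad g_3 = 1,$$ where $g_3 = 1$ by the convention above, since the product is empty. Note that there are $n = [S_n : S_{n-1}]$ such elements, one for each coset.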
We now recall the action of ${{\mathcal{H}\mathit{eis}}}$ on the category of $S_n$-modules first defined by Khovanov [@Kho14 §3.3]. We begin by defining a strong ${\Bbbk}$-linear monoidal functor $$\Theta \colon {{\mathcal{H}\mathit{eis}}}\to \prod_{m \in {\mathbb{N}}} \left( \bigoplus_{n \in {\mathbb{N}}} (S_n,S_m){\textup{-bimod}}\right).$$ The tensor product structure on the codomain is given by the usual tensor product of bimodules, where we define the tensor product $M \otimes N$ of $M \in (S_n, S_m){\textup{-bimod}}$ and $N \in (S_k,S_\ell){\textup{-bimod}}$ to be zero when $m \ne k$. We adopt the convention that $S_0$ is the trivial group, so that $S_0{\textup{-mod}}$ is the category of ${\Bbbk}$-vector spaces. For $0 \le m,k \le n$, let ${}_k(n)_m$ denote ${\Bbbk}S_n$, considered as an $(S_k,S_m)$-bimodule. We will omit the subscript $k$ or $m$ when $k=n$ or $m=n$, respectively. We denote the tensor product of such bimodules by juxtaposition. For instance, $(n)_{n-1}(n)$ denotes ${\Bbbk}S_n \otimes_{n-1} {\Bbbk}S_n$, considered as an $(S_n,S_n)$-bimodule. Then, on objects, we define $$\Theta(\uparrow) = \bigoplus_{n \ge 1} (n)_{n-1},\qquad
\Theta(\downarrow) = \bigoplus_{n \ge 1} {}_{n-1}(n).$$ On the generating morphisms, we define $$\begin{aligned}
\Theta
\left(
\begin{tikzpicture}[anchorbase]
\draw[->] (0.6,0) -- (0,0.6);
\draw[->] (0,0) -- (0.6,0.6);
\end{tikzpicture}
\right)
&=
\Big(
(n)_{n-2} \to (n)_{n-2},\ g \mapsto g s_{n-1}
\Big)_{n \ge 2},
\\
\Theta
\left(
\begin{tikzpicture}[anchorbase]
\draw[->] (0,.2) -- (0,0) arc (180:360:.3) -- (.6,.2);
\end{tikzpicture}
\right)
&=
\Big(
(n-1) \to {}_{n-1}(n)_{n-1},\ g \mapsto g
\Big)_{n \ge 1},
\\
\Theta
\left(
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-.2) -- (0,0) arc (180:0:.3) -- (.6,-.2);
\end{tikzpicture}
\right)
&=
\Big(
(n)_{n-1}(n) \to (n),\ g \otimes h \mapsto gh
\Big)_{n \ge 1},
\\
\Theta
\left(
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,.2) -- (0,0) arc (180:360:.3) -- (.6,.2);
\end{tikzpicture}
\right)
&=
\Big( \textstyle
(n) \to (n)_{n-1}(n),\ g \mapsto \sum_{i=1}^n g_i \otimes g_i^{-1}g = \sum_{i=1}^n g g_i \otimes g_i^{-1}
\Big)_{n \ge 1},
\\
\Theta
\left(
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,0) -- (0,.2) arc (180:0:.3) -- (.6,0);
\end{tikzpicture}
\right)
&=
\left(
{}_{n-1}(n)_{n-1} \to (n-1),\ g \mapsto
\begin{cases}
g & \text{if } g \in S_{n-1}, \\
0 & \text{if } g \in S_n \setminus S_{n-1}
\end{cases}
\right)_{n \ge 1}.\end{aligned}$$ One can then compute that $$\begin{aligned}
\Theta
\left(
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,0) -- (0.6,0.6);
\draw[->] (0.6,0) -- (0,0.6);
\end{tikzpicture}
\right)
&=
\left(
{}_{n-1}(n)_{n-1} \to (n-1)_{n-2}(n-1),\
\begin{cases}
g s_{n-1} h \mapsto g \otimes h, & g,h \in S_{n-1}, \\
g \mapsto 0, & g \in S_{n-1}
\end{cases}
\right)_{n \ge 2},
\\
\Theta
\left(
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) -- (0.6,0.6);
\draw[<-] (0.6,0) -- (0,0.6);
\end{tikzpicture}
\right)
&=
\Big(
(n-1)_{n-2}(n-1) \to {}_{n-1}(n)_{n-1},\ g \otimes h \mapsto g s_{n-1} h
\Big)_{n \ge 2},\\
\Theta
\left(
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,0) -- (0.6,0.6);
\draw[<-] (0.6,0) -- (0,0.6);
\end{tikzpicture}
\right)
&=
\Big(
{}_{n-2}(n) \to {}_{n-2}(n),\ g \mapsto s_{n-1} g
\Big)_{n \ge 2}.\end{aligned}$$
Restricting to ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}$ yields a functor, which we denote by the same symbol, $$\Theta \colon {{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}\to \bigoplus_{m \in {\mathbb{N}}} (S_m,S_m){\textup{-bimod}}.$$ Recall that $\mathbf{1}_n$ denotes the one-dimensional trivial $S_n$-module. Then the functor $- \otimes_{S_n} \mathbf{1}_n$ of tensoring on the right with $\mathbf{1}_n$ gives a functor $$\bigoplus_{m \in {\mathbb{N}}} (S_m,S_m){\textup{-bimod}}\xrightarrow{- \otimes_{S_n} \mathbf{1}_n} S_n{\textup{-mod}}.$$ Here we define $M \otimes_{S_n} \mathbf{1}_n = 0$ for $M \in (S_m,S_m){\textup{-bimod}}$, $m \ne n$. Consider the composition $${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}\xrightarrow{\Theta} \bigoplus_{m \in {\mathbb{N}}} (S_m,S_m){\textup{-bimod}}\xrightarrow{- \otimes_{S_n} \mathbf{1}_n} S_n{\textup{-mod}}.$$ It is straightforward to verify that the image of the relation \[H4\] under this composition holds in $S_n{\textup{-mod}}$ with $t=n$. Therefore, the composition factors through ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}(n)$ to give us our action functor: $$\Omega_n \colon {{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}(n) \to S_n{\textup{-mod}}.$$ Note that the functor $\Omega_n$ is not monoidal, since the functor $- \otimes_{S_n} \mathbf{1}_n$ is not.
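Concretely, the verification of \[H4\] reduces to a computation with the clockwise bubble, which is the composite of the leftward cup followed by the rightward cap. By the formulas above, its image under $\Theta$ acts on the summand $(n)$ by $$g \mapsto \sum_{i=1}^n g_i \otimes g_i^{-1} g \mapsto \sum_{i=1}^n g_i g_i^{-1} g = n g,$$ that is, as $n$ times the identity, and hence the image of \[H4\] holds with $t = n$ after applying $- \otimes_{S_n} \mathbf{1}_n$.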
Existence of the embedding functor\[sec:functor\]
=================================================
In this section we define a functor from the partition category to the Heisenberg category. We will later show, in \[faithful,appendix\], that this functor is faithful.
\[functordef\] There is a strict ${\Bbbk}$-linear monoidal functor $\Psi_t \colon {\mathcal{P}\mathit{ar}}(t) \to {{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}(t)$ defined on objects by $k \mapsto (\uparrow \downarrow)^k$ and on generating morphisms by $$\begin{gathered}
\mu =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0.25,0.5) circle (1.5pt)};
\draw (0,0) {to[out=up,in=down]}(0.25,0.5);
\draw (0.5,0) {to[out=up,in=down]}(0.25,0.5);
\end{tikzpicture}
\mapsto
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) {to[out=up,in=down]}(0.33,1);
\draw[<-] (1,0) {to[out=up,in=down]}(0.67,1);
\draw[<-] (0.33,0) to (0.33,0.1) to[out=up,in=up,looseness=2] (0.67,0.1) to (0.67,0);
\end{tikzpicture}
\ ,\qquad
{\delta}=
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (0.25,0) circle (1.5pt)};
\draw (0.25,0) {to[out=up,in=down]}(0,0.5);
\draw (0.25,0) {to[out=up,in=down]}(0.5,0.5);
\end{tikzpicture}
\mapsto
\begin{tikzpicture}[anchorbase]
\draw[->] (0.33,0) {to[out=up,in=down]}(0,1);
\draw[<-] (0.67,0) {to[out=up,in=down]}(1,1);
\draw[->] (0.33,1) to (0.33,0.9) to[out=down,in=down,looseness=2] (0.67,0.9) to (0.67,1);
\end{tikzpicture}
\ ,\qquad
s =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
\draw (0,0) {to[out=up,in=down]}(0.5,0.5);
\draw (0.5,0) {to[out=up,in=down]}(0,0.5);
\end{tikzpicture}
\mapsto
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) {to[out=up,in=down]}(1,1);
\draw[<-] (0.5,0) {to[out=up,in=down]}(1.5,1);
\draw[->] (1,0) {to[out=up,in=down]}(0,1);
\draw[<-] (1.5,0) {to[out=up,in=down]}(0.5,1);
\end{tikzpicture}
\ +\
\begin{tikzpicture}[anchorbase]
\draw[->] (1.2,0) -- (1.2,1);
\draw[->] (1.5,1) -- (1.5,0.9) arc (180:360:.25) -- (2,1);
\draw[<-] (1.5,0) -- (1.5,0.1) arc (180:0:.25) -- (2,0);
\draw[<-] (2.3,0) -- (2.3,1);
\end{tikzpicture}
\ ,
\\
\eta =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0.5) circle (1.5pt)};
\draw (0,0.25) to (0,0.5);
\end{tikzpicture}
\mapsto
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,1) -- (0,0.9) arc (180:360:.25) -- (0.5,1);
\end{tikzpicture}
\ ,\qquad
\varepsilon =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
\draw (0,0) to (0,0.25);
\end{tikzpicture}
\mapsto
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) -- (0,0.1) arc (180:0:.25) -- (0.5,0);
\end{tikzpicture}
\ .
\end{gathered}$$
It suffices to prove that the functor $\Psi_t$ preserves the relations \[P1,P2,P3,P4\]. Since the objects $\uparrow$ and $\downarrow$ are both left and right dual to each other, the fact that $\Psi_t$ preserves the relations \[P1\] corresponds to the well-known fact that if $X$ and $Y$ are objects in a monoidal category that are both left and right dual to each other, then $XY$ is a Frobenius object. Alternatively, one can easily verify directly that $\Psi_t$ preserves the relations \[P1\]. This uses only the isotopy invariance in ${{\mathcal{H}\mathit{eis}}}$ (i.e. the fact that ${{\mathcal{H}\mathit{eis}}}$ is strictly pivotal).
To verify the first relation in \[P2\], we compute the image of the left-hand side. Since left curls in ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}(t)$ are zero by \[H3\], this image is $$\Psi_t(s) \circ \Psi_t(s)
=
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) {to[out=up,in=down]}(1,1) {to[out=up,in=down]}(0,2);
\draw[<-] (0.5,0) {to[out=up,in=down]}(1.5,1) {to[out=up,in=down]}(0.5,2);
\draw[->] (1,0) {to[out=up,in=down]}(0,1) {to[out=up,in=down]}(1,2);
\draw[<-] (1.5,0) {to[out=up,in=down]}(0.5,1) {to[out=up,in=down]}(1.5,2);
\end{tikzpicture}
\ +\
\begin{tikzpicture}[anchorbase]
\draw[->] (-0.8,-1) to (-0.8,1);
\draw[<-] (0.8,-1) to (0.8,1);
\draw[<-] (-0.3,-1) to (-0.3,-0.9) arc(180:0:0.3) to (0.3,-1);
\draw[->] (-0.3,1) to (-0.3,0.9) arc(180:360:0.3) to (0.3,1);
\draw[->] (0.3,0) arc(0:360:0.3);
\end{tikzpicture}
\ \underset{\cref{H3}}{\overset{\cref{H1}}{=}}\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) to (0,2);
\draw[<-] (0.5,0) {to[out=up,in=down]}(1,1) {to[out=up,in=down]}(0.5,2);
\draw[->] (1,0) {to[out=up,in=down]}(0.5,1) {to[out=up,in=down]}(1,2);
\draw[<-] (1.5,0) to (1.5,2);
\end{tikzpicture}
\ +\
\begin{tikzpicture}[anchorbase]
\draw[->] (-0.8,-1) to (-0.8,1);
\draw[<-] (0.8,-1) to (0.8,1);
\draw[<-] (-0.3,-1) to (-0.3,-0.9) arc(180:0:0.3) to (0.3,-1);
\draw[->] (-0.3,1) to (-0.3,0.9) arc(180:360:0.3) to (0.3,1);
\draw[->] (0.3,0) arc(0:360:0.3);
\end{tikzpicture}
\ \overset{\cref{H3}}{=}
1_{(\uparrow \downarrow)^2}.$$
Next we verify the second relation in \[P2\]. First we compute $$\begin{gathered}
\Psi_t(1_1 \otimes s) \circ \Psi_t(s \otimes 1_1)
\\
=
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) to[out=up,in=240] (2,1);
\draw[<-] (0.5,0) to[out=60,in=down] (2.5,1);
\draw[->] (1,0) {to[out=up,in=down]}(0,1);
\draw[<-] (1.5,0) {to[out=up,in=down]}(0.5,1);
\draw[->] (2,0) {to[out=up,in=down]}(1,1);
\draw[<-] (2.5,0) {to[out=up,in=down]}(1.5,1);
\end{tikzpicture}
\ +\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) to (0,1);
\draw[<-] (0.5,0) to (0.5,0.1) to[out=up,in=up,looseness=1.5] (1,0.1) to (1,0);
\draw[<-] (1.5,0) {to[out=up,in=down]}(2.5,1);
\draw[->] (2,0) {to[out=up,in=down]}(1,1);
\draw[<-] (2.5,0) {to[out=up,in=down]}(1.5,1);
\draw[->] (0.5,1) to[out=down,in=down] (2,1);
\end{tikzpicture}
\ +\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) {to[out=up,in=down]}(1,1);
\draw[<-] (0.5,0) to[out=up,in=up] (2,0);
\draw[->] (1,0) {to[out=up,in=down]}(0,1);
\draw[<-] (1.5,0) {to[out=up,in=down]}(0.5,1);
\draw[<-] (2.5,0) to (2.5,1);
\draw[->] (1.5,1) to (1.5,0.9) to[out=down,in=down,looseness=1.5] (2,0.9) to (2,1);
\end{tikzpicture}
\ +\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) to (0,1);
\draw[<-] (0.5,0) to (0.5,0.1) to[out=up,in=up,looseness=1.5] (1,0.1) to (1,0);
\draw[<-] (1.5,0) to (1.5,0.1) to[out=up,in=up,looseness=1.5] (2,0.1) to (2,0);
\draw[<-] (2.5,0) to (2.5,1);
\draw[->] (0.5,1) to (0.5,0.9) to[out=down,in=down,looseness=1.5] (1,0.9) to (1,1);
\draw[->] (1.5,1) to (1.5,0.9) to[out=down,in=down,looseness=1.5] (2,0.9) to (2,1);
\end{tikzpicture}
\ .
\end{gathered}$$ Thus, using the fact that left curls are zero and counterclockwise bubbles are $1_{\mathbbm{1}}$ by \[H3\], we have $$\begin{aligned}
&\Psi_t(s \otimes 1_1)\circ \Psi_t(1_1 \otimes s) \circ \Psi_t(s \otimes 1_1) \\
&=
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) to[out=up,in=240] (2,0.75);
\draw[<-] (0.5,-0.75) to[out=60,in=down] (2.5,0.75);
\draw[->] (1,-0.75) {to[out=up,in=down]}(2.1,0) {to[out=up,in=down]}(1,0.75);
\draw[<-] (1.5,-0.75) to[out=60,in=down] (2.5,0) to[out=up,in=-60] (1.5,0.75);
\draw[->] (2,-0.75) to[out=120,in=down] (0,0.75);
\draw[<-] (2.5,-0.75) to[out=up,in=-60] (0.5,0.75);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) to (0,0.75);
\draw[<-] (0.5,-0.75) to[out=up,in=up,looseness=0.8] (2,-0.75);
\draw[->] (1,-0.75) {to[out=up,in=down]}(1.75,0) {to[out=up,in=down]}(1,0.75);
\draw[<-] (1.5,-0.75) {to[out=up,in=down]}(2.25,0) {to[out=up,in=down]}(1.5,0.75);
\draw[<-] (2.5,-0.75) to[out=up,in=down,looseness=0.8] (1.25,0) to[out=up,in=down,looseness=0.8] (2.5,0.75);
\draw[->] (0.5,0.75) to[out=down,in=down,looseness=0.8] (2,0.75);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) {to[out=up,in=down]}(1,0.75);
\draw[<-] (1.5,-0.75) {to[out=up,in=down]}(2.5,0.75);
\draw[->] (2,-0.75) {to[out=up,in=down]}(0,0.75);
\draw[<-] (2.5,-0.75) {to[out=up,in=down]}(0.5,0.75);
\draw[->] (1.5,0.75) to (1.5,0.65) to[out=down,in=down,looseness=1.5] (2,0.65) to (2,0.75);
\draw[->] (1,-0.75) {to[out=up,in=down]}(1.7,0.1) to[out=up,in=up,looseness=1.5] (1.2,0.1) to[out=down,in=up] (0.5,-0.75);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) to[out=up,in=240] (2,0.75);
\draw[<-] (0.5,-0.75) to[out=60,in=down] (2.5,0.75);
\draw[->] (1,-0.75) {to[out=up,in=down]}(0,0.75);
\draw[<-] (1.5,-0.75) to (1.5,-0.65) to[out=up,in=up,looseness=1.5] (2,-0.65) to (2,-0.75);
\draw[<-] (2.5,-0.75) {to[out=up,in=down]}(1.5,0.75);
\draw[->] (0.5,0.75) to (0.5,0.65) to[out=down,in=down,looseness=1.5] (1,0.65) to (1,0.75);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) to (0,0.75);
\draw[<-] (0.5,-0.75) to (0.5,-0.65) to[out=up,in=up,looseness=1.5] (1,-0.65) to (1,-0.75);
\draw[<-] (1.5,-0.75) to (1.5,-0.65) to[out=up,in=up,looseness=1.5] (2,-0.65) to (2,-0.75);
\draw[->] (0.5,0.75) to (0.5,0.65) to[out=down,in=down,looseness=1.5] (1,0.65) to (1,0.75);
\draw[->] (1.5,0.75) to (1.5,0.65) to[out=down,in=down,looseness=1.5] (2,0.65) to (2,0.75);
\draw[<-] (2.5,-0.75) to (2.5,0.75);
\end{tikzpicture}
\\
&\underset{\cref{H3}}{\overset{\cref{H1}}{=}}
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) to[out=up,in=240] (2,0.75);
\draw[<-] (0.5,-0.75) to[out=60,in=down] (2.5,0.75);
\draw[->] (1,-0.75) {to[out=up,in=down]}(2.1,0) {to[out=up,in=down]}(1,0.75);
\draw[<-] (1.5,-0.75) to[out=60,in=down] (2.5,0) to[out=up,in=-60] (1.5,0.75);
\draw[->] (2,-0.75) to[out=120,in=down] (0,0.75);
\draw[<-] (2.5,-0.75) to[out=up,in=-60] (0.5,0.75);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) to (0,0.75);
\draw[->] (1,-0.75) to (1,0.75);
\draw[<-] (1.5,-0.75) to (1.5,0.75);
\draw[<-] (2.5,-0.75) to (2.5,0.75);
\draw[<-] (0.5,-0.75) to[out=up,in=up] (2,-0.75);
\draw[->] (0.5,0.75) to[out=down,in=down] (2,0.75);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) {to[out=up,in=down]}(1,0.75);
\draw[<-] (1.5,-0.75) {to[out=up,in=down]}(2.5,0.75);
\draw[->] (2,-0.75) to[out=120,in=down] (0,0.75);
\draw[<-] (2.5,-0.75) to[out=up,in=-60] (0.5,0.75);
\draw[->] (1.5,0.75) to (1.5,0.65) to[out=down,in=down,looseness=1.5] (2,0.65) to (2,0.75);
\draw[<-] (0.5,-0.75) to (0.5,-0.65) to[out=up,in=up,looseness=1.5] (1,-0.65) to (1,-0.75);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) to[out=up,in=240] (2,0.75);
\draw[<-] (0.5,-0.75) to[out=60,in=down] (2.5,0.75);
\draw[->] (1,-0.75) {to[out=up,in=down]}(0,0.75);
\draw[<-] (1.5,-0.75) to (1.5,-0.65) to[out=up,in=up,looseness=1.5] (2,-0.65) to (2,-0.75);
\draw[<-] (2.5,-0.75) {to[out=up,in=down]}(1.5,0.75);
\draw[->] (0.5,0.75) to (0.5,0.65) to[out=down,in=down,looseness=1.5] (1,0.65) to (1,0.75);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) to (0,0.75);
\draw[<-] (0.5,-0.75) to (0.5,-0.65) to[out=up,in=up,looseness=1.5] (1,-0.65) to (1,-0.75);
\draw[<-] (1.5,-0.75) to (1.5,-0.65) to[out=up,in=up,looseness=1.5] (2,-0.65) to (2,-0.75);
\draw[->] (0.5,0.75) to (0.5,0.65) to[out=down,in=down,looseness=1.5] (1,0.65) to (1,0.75);
\draw[->] (1.5,0.75) to (1.5,0.65) to[out=down,in=down,looseness=1.5] (2,0.65) to (2,0.75);
\draw[<-] (2.5,-0.75) to (2.5,0.75);
\end{tikzpicture}
\ .
\end{aligned}$$ Similarly, $$\begin{gathered}
\Psi_t(1_1 \otimes s) \circ \Psi_t(s \otimes 1_1) \circ \Psi_t(1_1 \otimes s) \\
=
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) to[out=up,in=240] (2,0.75);
\draw[<-] (0.5,-0.75) to[out=60,in=down] (2.5,0.75);
\draw[->] (1,-0.75) to[out=120,in=down] (0,0) to[out=up,in=240] (1,0.75);
\draw[<-] (1.5,-0.75) {to[out=up,in=down]}(0.4,0) {to[out=up,in=down]}(1.5,0.75);
\draw[->] (2,-0.75) to[out=120,in=down] (0,0.75);
\draw[<-] (2.5,-0.75) to[out=up,in=-60] (0.5,0.75);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) to (0,0.75);
\draw[->] (1,-0.75) to (1,0.75);
\draw[<-] (1.5,-0.75) to (1.5,0.75);
\draw[<-] (2.5,-0.75) to (2.5,0.75);
\draw[<-] (0.5,-0.75) to[out=up,in=up] (2,-0.75);
\draw[->] (0.5,0.75) to[out=down,in=down] (2,0.75);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) {to[out=up,in=down]}(1,0.75);
\draw[<-] (1.5,-0.75) {to[out=up,in=down]}(2.5,0.75);
\draw[->] (2,-0.75) to[out=120,in=down] (0,0.75);
\draw[<-] (2.5,-0.75) to[out=up,in=-60] (0.5,0.75);
\draw[->] (1.5,0.75) to (1.5,0.65) to[out=down,in=down,looseness=1.5] (2,0.65) to (2,0.75);
\draw[<-] (0.5,-0.75) to (0.5,-0.65) to[out=up,in=up,looseness=1.5] (1,-0.65) to (1,-0.75);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) to[out=up,in=240] (2,0.75);
\draw[<-] (0.5,-0.75) to[out=60,in=down] (2.5,0.75);
\draw[->] (1,-0.75) {to[out=up,in=down]}(0,0.75);
\draw[<-] (1.5,-0.75) to (1.5,-0.65) to[out=up,in=up,looseness=1.5] (2,-0.65) to (2,-0.75);
\draw[<-] (2.5,-0.75) {to[out=up,in=down]}(1.5,0.75);
\draw[->] (0.5,0.75) to (0.5,0.65) to[out=down,in=down,looseness=1.5] (1,0.65) to (1,0.75);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.75) to (0,0.75);
\draw[<-] (0.5,-0.75) to (0.5,-0.65) to[out=up,in=up,looseness=1.5] (1,-0.65) to (1,-0.75);
\draw[<-] (1.5,-0.75) to (1.5,-0.65) to[out=up,in=up,looseness=1.5] (2,-0.65) to (2,-0.75);
\draw[->] (0.5,0.75) to (0.5,0.65) to[out=down,in=down,looseness=1.5] (1,0.65) to (1,0.75);
\draw[->] (1.5,0.75) to (1.5,0.65) to[out=down,in=down,looseness=1.5] (2,0.65) to (2,0.75);
\draw[<-] (2.5,-0.75) to (2.5,0.75);
\end{tikzpicture}
\ .
\end{gathered}$$ Hence it follows from \[genbraid\] that $\Psi_t$ preserves the second relation in \[P2\].
To verify the first relation in \[P3\], we compute $$\Psi_t(s) \circ (1_1 \otimes \eta)
=\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.5) {to[out=up,in=down]}(1,0.5);
\draw[<-] (0.5,-0.5) {to[out=up,in=down]}(1.5,0.5);
\draw[<-] (0,0.5) to[out=down,in=up] (1,-0.2) to[out=down,in=down,looseness=1.5] (1.4,-0.2) to[out=up,in=-60] (0.5,0.5);
\end{tikzpicture}
\ +\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.5) to (0,0.5);
\draw[<-] (0.5,-0.5) to (0.5,-0.2) to[out=up,in=up,looseness=1.5] (1,-0.2) to[out=down,in=down,looseness=1.5] (1.5,-0.2) to (1.5,0.5);
\draw[->] (0.5,0.5) to (0.5,0.4) to[out=down,in=down,looseness=1.5] (1,0.4) to (1,0.5);
\end{tikzpicture}
\ \underset{\cref{H3}}{\overset{\cref{H1}}{=}}\
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,0.5) to (0,0.4) to[out=down,in=down,looseness=1.5] (0.5,0.4) to (0.5,0.5);
\draw[->] (1,-0.5) to (1,0.5);
\draw[<-] (1.5,-0.5) to (1.5,0.5);
\end{tikzpicture}
= \Psi_t(\eta \otimes 1_1).$$
To verify the second relation in \[P3\], we first compute $$\begin{gathered}
\Psi_t(s \otimes 1_1) \circ \Psi_t(1_1 \otimes s)
\\
=
\begin{tikzpicture}[anchorbase]
\draw[<-] (2.5,0) to[out=up,in=-60] (0.5,1);
\draw[->] (2,0) to[out=120,in=down] (0,1);
\draw[<-] (1.5,0) {to[out=up,in=down]}(2.5,1);
\draw[->] (1,0) {to[out=up,in=down]}(2,1);
\draw[<-] (0.5,0) {to[out=up,in=down]}(1.5,1);
\draw[->] (0,0) {to[out=up,in=down]}(1,1);
\end{tikzpicture}
\ +\
\begin{tikzpicture}[anchorbase]
\draw[<-] (2.5,0) to (2.5,1);
\draw[->] (2,0) to (2,0.1) to[out=up,in=up,looseness=1.5] (1.5,0.1) to (1.5,0);
\draw[->] (1,0) {to[out=up,in=down]}(0,1);
\draw[<-] (0.5,0) {to[out=up,in=down]}(1.5,1);
\draw[->] (0,0) {to[out=up,in=down]}(1,1);
\draw[<-] (2,1) to[out=down,in=down] (0.5,1);
\end{tikzpicture}
\ +\
\begin{tikzpicture}[anchorbase]
\draw[<-] (2.5,0) {to[out=up,in=down]}(1.5,1);
\draw[->] (2,0) to[out=up,in=up] (0.5,0);
\draw[<-] (1.5,0) {to[out=up,in=down]}(2.5,1);
\draw[->] (1,0) {to[out=up,in=down]}(2,1);
\draw[->] (0,0) to (0,1);
\draw[<-] (1,1) to (1,0.9) to[out=down,in=down,looseness=1.5] (0.5,0.9) to (0.5,1);
\end{tikzpicture}
\ +\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) to (0,1);
\draw[<-] (0.5,0) to (0.5,0.1) to[out=up,in=up,looseness=1.5] (1,0.1) to (1,0);
\draw[<-] (1.5,0) to (1.5,0.1) to[out=up,in=up,looseness=1.5] (2,0.1) to (2,0);
\draw[<-] (2.5,0) to (2.5,1);
\draw[->] (0.5,1) to (0.5,0.9) to[out=down,in=down,looseness=1.5] (1,0.9) to (1,1);
\draw[->] (1.5,1) to (1.5,0.9) to[out=down,in=down,looseness=1.5] (2,0.9) to (2,1);
\end{tikzpicture}
\ .
\end{gathered}$$ Then, using \[H3\], we have $$\Psi_t(1_1 \otimes \mu) \circ \Psi_t(s \otimes 1_1) \circ \Psi_t(1_1 \otimes s)
=
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.5) to[out=up,in=down] (1.5,0.5);
\draw[<-] (1.5,-0.5) to[out=up,in=down] (2,0.5);
\draw[->] (2,-0.5) to[out=120,in=down] (0.5,0.5);
\draw[<-] (2.5,-0.5) to[out=up,in=-60] (1,0.5);
\draw[<-] (0.5,-0.5) to (0.5,-0.4) to[out=up,in=up,looseness=1.5] (1,-0.4) to (1,-0.5);
\end{tikzpicture}
+
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.5) to[out=up,in=down] (0.5,0.5);
\draw[<-] (0.5,-0.5) to (0.5,-0.4) to[out=up,in=up,looseness=1.5] (1,-0.4) to (1,-0.5);
\draw[<-] (1.5,-0.5) to (1.5,-0.4) to[out=up,in=up,looseness=1.5] (2,-0.4) to (2,-0.5);
\draw[->] (1,0.5) to (1,0.4) to[out=down,in=down,looseness=1.5] (1.5,0.4) to (1.5,0.5);
\draw[<-] (2.5,-0.5) to[out=up,in=down] (2,0.5);
\end{tikzpicture}
= \Psi_t(s) \circ \Psi_t(\mu \otimes 1_1).$$ The proofs of the remaining relations in \[P3\] are analogous.
Finally, to verify the relations \[P4\], we compute $$\begin{gathered}
\Psi_t(\mu) \circ \Psi_t(s)
=
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.7) to[out=up,in=down] (1,0.2) to[out=up,in=up,looseness=1.5] (0.5,0.2) to[out=down,in=up] (1.5,-0.7);
\draw[<-] (0.5,-0.7) to[out=up,in=down] (1.3,0) to[out=up,in=down] (1,0.7);
\draw[->] (1,-0.7) to[out=up,in=down] (0.2,0) to[out=up,in=down] (0.5,0.7);
\end{tikzpicture}
\ +\
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.7) to[out=up,in=down] (0.5,0.7);
\draw[<-] (0.5,-0.7) to (0.5,-0.6) to[out=up,in=up,looseness=1.5] (1,-0.6) to (1,-0.7);
\draw[<-] (1.5,-0.6) to[out=up,in=down] (1,0.7);
\draw[->] (1,0) arc(0:360:0.2);
\end{tikzpicture}
\ \overset{\cref{H3}}{=} \
\begin{tikzpicture}[anchorbase]
\draw[->] (0,-0.7) to[out=up,in=down] (0.5,0.7);
\draw[<-] (0.5,-0.7) to (0.5,-0.6) to[out=up,in=up,looseness=1.5] (1,-0.6) to (1,-0.7);
\draw[<-] (1.5,-0.6) to[out=up,in=down] (1,0.7);
\end{tikzpicture}
\ = \Psi_t(\mu),
\\
\Psi_t(\mu) \circ \Psi_t({\delta})
=
\begin{tikzpicture}[anchorbase]
\draw[->] (-0.25,-0.7) to[out=up,in=down] (-0.4,0) to[out=up,in=down] (-0.25,0.7);
\draw[<-] (0.25,-0.7) to[out=up,in=down] (0.4,0) to[out=up,in=down] (0.25,0.7);
\draw[->] (0.2,0) arc(0:360:0.2);
\end{tikzpicture}
\ \overset{\cref{H3}}{=}\
\begin{tikzpicture}[anchorbase]
\draw[->] (-0.25,-0.7) to (-0.25,0.7);
\draw[<-] (0.25,-0.7) to (0.25,0.7);
\end{tikzpicture}
\ = \Psi_t(1_1),
\\
\Psi_t(\varepsilon) \circ \Psi_t(\eta)
=
\begin{tikzpicture}[anchorbase]
\draw[->] (0.3,0) arc(360:0:0.3);
\end{tikzpicture}
\overset{\cref{H4}}{=} t 1_{\mathbbm{1}}= \Psi_t (t 1_0). \qedhere
\end{gathered}$$
\[bike\] There are two natural ways to enlarge the codomain of the functor $\Psi_t$ to the entire Heisenberg category ${{\mathcal{H}\mathit{eis}}}$ (or a suitable quotient), rather than the category ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}(t)$. The obstacle to this is that the clockwise bubble is not central in ${{\mathcal{H}\mathit{eis}}}$ and so the relation \[H4\] is not well behaved there. We continue to suppose that ${\Bbbk}$ is a commutative ring.
1. \[wheel\] We can define ${\mathcal{P}\mathit{ar}}$ to be the ${\Bbbk}$-linear *partition category with bubbles*, which has the same presentation as in \[Ppresent\], but without the last relation in \[P4\]. Free floating blocks (i.e. blocks not containing any vertices at the top or bottom of a diagram) are strictly central “bubbles”. The category ${\mathcal{P}\mathit{ar}}(t)$ is obtained from ${\mathcal{P}\mathit{ar}}$ by specializing the bubble at $t$. Then we have a ${\Bbbk}$-linear monoidal functor ${\mathcal{P}\mathit{ar}}\to {{\mathcal{H}\mathit{eis}}}$ (factoring through ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}$) mapping the bubble of ${\mathcal{P}\mathit{ar}}$ to the clockwise bubble $$\begin{tikzpicture}[anchorbase]
\draw[->] (0,0.3) arc(450:90:0.3);
\end{tikzpicture}
\ .$$ This is equivalent to considering ${\mathcal{P}\mathit{ar}}(t)$ over the ring ${\Bbbk}[t]$ and ${{\mathcal{H}\mathit{eis}}}$ over ${\Bbbk}$ and viewing $\Psi_t$ as a ${\Bbbk}$-linear monoidal functor ${\mathcal{P}\mathit{ar}}(t) \to {{\mathcal{H}\mathit{eis}}}$ with $$t \mapsto
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0.3) arc(450:90:0.3);
\end{tikzpicture}
\ .$$ (Then $t$ is the “bubble” in the partition category.) We refer to this setting by saying that $t$ is *generic*.
2. If $t \in {\Bbbk}$, let ${\mathcal{I}}$ denote the left tensor ideal of ${{\mathcal{H}\mathit{eis}}}$ generated by $$\begin{tikzpicture}[anchorbase]
\draw[->] (0,0.3) arc(450:90:0.3);
\end{tikzpicture}
- t 1_{\mathbbm{1}}.$$ Then $\Psi_t$ induces a ${\Bbbk}$-linear functor $${\mathcal{P}\mathit{ar}}(t) \to {{\mathcal{H}\mathit{eis}}}/{\mathcal{I}}.$$ Note, however, that this induced functor is no longer monoidal.
As noted in \[sec:partition\], the partition category is the free ${\Bbbk}$-linear symmetric monoidal category generated by a $t$-dimensional special commutative Frobenius object. Thus, \[functordef\] implies that $\uparrow \downarrow$, together with certain morphisms, is a special commutative Frobenius object in the Heisenberg category. Note, however, that neither the Heisenberg category nor ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}(t)$ is symmetric monoidal.
Actions and faithfulness\[sec:faithful\]
=========================================
Consider the standard embedding of $S_{n-1}$ in $S_n$, and hence of ${\Bbbk}S_{n-1}$ in ${\Bbbk}S_n$. We adopt the convention that ${\Bbbk}S_n = {\Bbbk}$ when $n=0$. We have the natural induction and restriction functors $$\operatorname{Ind}_{n-1}^n \colon S_{n-1}{\textup{-mod}}\to S_n{\textup{-mod}},\qquad
\operatorname{Res}_{n-1}^n \colon S_n{\textup{-mod}}\to S_{n-1}{\textup{-mod}}.$$
If we let $B$ denote ${\Bbbk}S_n$, considered as a $({\Bbbk}S_n, {\Bbbk}S_{n-1})$-bimodule, then we have $$\operatorname{Ind}_{n-1}^n \operatorname{Res}_{n-1}^n (M) = B \otimes_{n-1} M,\quad M \in S_n{\textup{-mod}},$$ where $\otimes_{n-1} := \otimes_{{\Bbbk}S_{n-1}}$ denotes the tensor product over ${\Bbbk}S_{n-1}$. We will use the unadorned symbol $\otimes$ to denote tensor product over ${\Bbbk}$. As before, we denote the trivial one-dimensional $S_n$-module by $\mathbf{1}_n$.
Recall the coset representatives $g_i \in S_n$ defined in \[gi-def\]. In particular, we have $$\label{hungry}
g_i^{-1} g_j \in S_{n-1} \iff i=j.$$
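Since the definition of the $g_i$ from \[gi-def\] is not reproduced here, the following Python sketch adopts the hypothetical (but standard) choice $g_i = s_i s_{i+1} \dotsm s_{n-1}$, i.e. the cycle sending $n \mapsto i$, and under that assumption checks \[hungry\] numerically, using the fact that a permutation lies in $S_{n-1}$ exactly when it fixes $n$:

```python
from itertools import product

def compose(p, q):
    """(p o q)(x) = p(q(x)); permutations stored as tuples of 1-based images."""
    return tuple(p[q[x] - 1] for x in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for x, px in enumerate(p, start=1):
        inv[px - 1] = x
    return tuple(inv)

def g(i, n):
    """Assumed coset representative: the cycle sending n -> i and k -> k+1 for i <= k < n."""
    sigma = list(range(1, n + 1))
    for k in range(i, n):
        sigma[k - 1] = k + 1
    sigma[n - 1] = i
    return tuple(sigma)

n = 6
for i, j in product(range(1, n + 1), repeat=2):
    w = compose(inverse(g(i, n)), g(j, n))
    # w lies in S_{n-1} exactly when it fixes n
    assert (w[n - 1] == n) == (i == j)
print("ok")
```

The check only uses the defining property $g_i v_n = v_i$, so it is insensitive to which set of coset representatives with that property is actually chosen.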
Let $V = {\Bbbk}^n$ be the standard $S_n$-module with basis $v_1,\dotsc,v_n$. Then we have $$\label{yeehah}
B \otimes_{n-1} \mathbf{1}_{n-1} = \operatorname{Ind}_{n-1}^n(\mathbf{1}_{n-1}) \cong V
\quad \text{as $S_n$-modules}.$$ Furthermore, the elements $g_i \otimes_{n-1} 1$, $1 \le i \le n$, form a basis of $B \otimes_{n-1} \mathbf{1}_{n-1}$ and the isomorphism \[yeehah\] is given explicitly by $$B \otimes_{n-1} \mathbf{1}_{n-1} \xrightarrow{\cong} V,\quad
g_i \otimes_{n-1} 1 \mapsto v_i = g_i v_n.$$
More generally, define $$B^k := \underbrace{B \otimes_{n-1} B \otimes_{n-1} \dotsb \otimes_{n-1} B}_{k \text{ factors}}.$$ Then we have an isomorphism of $S_n$-modules $$\begin{aligned}
\beta_k \colon V^{\otimes k} &\xrightarrow{\cong} B^k \otimes_{n-1} \mathbf{1}_{n-1}, \\
v_{i_k} \otimes \dotsb \otimes v_{i_1} &\mapsto g_{i_k} \otimes g_{i_k}^{-1} g_{i_{k-1}} \otimes \dotsb \otimes g_{i_2}^{-1} g_{i_1} \otimes 1,\quad
1 \le i_1,\dotsc,i_k \le n,\end{aligned}$$ with inverse map $$\begin{aligned}
\beta_k^{-1} \colon
B^k \otimes_{n-1} \mathbf{1}_{n-1} &\xrightarrow{\cong} V^{\otimes k}, \\
a_k \otimes \dotsb \otimes a_1 \otimes 1
&\mapsto (a_k v_n) \otimes (a_k a_{k-1} v_n) \otimes \dotsm \otimes (a_k \dotsm a_1 v_n), \quad
a_1,\dotsc,a_k \in {\Bbbk}S_n.\end{aligned}$$
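The formulas for $\beta_k$ and $\beta_k^{-1}$ can be sanity-checked on basis vectors: the products $a_k a_{k-1} \dotsm a_m = g_{i_m}$ telescope, so the round trip must recover the original indices. The Python sketch below tracks only the basis vectors $v_i$ (identified with their indices $i$) and assumes the cycle choice $g_i(n) = i$ for the coset representatives; the computation depends only on the property $g_i v_n = v_i$.

```python
from itertools import product

def compose(p, q):
    """(p o q)(x) = p(q(x)); permutations stored as tuples of 1-based images."""
    return tuple(p[q[x] - 1] for x in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for x, px in enumerate(p, start=1):
        inv[px - 1] = x
    return tuple(inv)

def g(i, n):
    """Assumed coset representative: the cycle with g(n) = i."""
    sigma = list(range(1, n + 1))
    for k in range(i, n):
        sigma[k - 1] = k + 1
    sigma[n - 1] = i
    return tuple(sigma)

def beta_inverse_of_beta(indices, n):
    """Apply beta_k, then beta_k^{-1}, to v_{i_k} (x) ... (x) v_{i_1};
    indices = [i_k, ..., i_1], tracking basis vectors only."""
    # beta_k: a_k = g_{i_k}, and a_m = g_{i_{m+1}}^{-1} g_{i_m} for m < k
    a = [g(indices[0], n)]
    for m in range(1, len(indices)):
        a.append(compose(inverse(g(indices[m - 1], n)), g(indices[m], n)))
    recovered, prefix = [], tuple(range(1, n + 1))
    for perm in a:
        prefix = compose(prefix, perm)   # partial products a_k, a_k a_{k-1}, ...
        recovered.append(prefix[n - 1])  # ... applied to v_n
    return recovered

n, k = 5, 3
for idx in product(range(1, n + 1), repeat=k):
    assert beta_inverse_of_beta(list(idx), n) == list(idx)
print("ok")
```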
\[actcom\] Fix $n \in {\mathbb{N}}$, and consider the following functors: $$\begin{tikzcd}[column sep=2cm]
{\mathcal{P}\mathit{ar}}(n) \arrow[r,"\Psi_n"] \arrow[rd, swap,"\Phi_n"] &
{{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}(n) \arrow[d,"\Omega_n"] \\
& S_n{\textup{-mod}}\end{tikzcd}$$ The morphisms $\beta_k$, $k \in {\mathbb{N}}$, give a natural isomorphism of functors $\Omega_n \circ \Psi_n \cong \Phi_n$.
Since the $\beta_k$ are isomorphisms, it suffices to verify that they define a natural transformation. For this, we check the images of a set of generators of ${\mathcal{P}\mathit{ar}}(n)$. Since the functor $\Omega_n$ is not monoidal, we need to consider generators of ${\mathcal{P}\mathit{ar}}(n)$ as a ${\Bbbk}$-linear category. Such a set of generators is given by $$1_k \otimes x \otimes 1_j,\quad
k,j \in {\mathbb{N}},\ x \in \{\mu,{\delta},s,\eta,\varepsilon\}.$$ See, for example, [@Liu18 Th. 5.2].
Let $j \in \{1,2,\dotsc,k-1\}$. We compute that $$\beta_{k-1}^{-1} \circ \left( \Omega_n \circ \Psi_n \left( 1_{k-j-1} \otimes \mu \otimes 1_{j-1} \right) \right) \circ \beta_k
\colon V^{\otimes k} \to V^{\otimes {k-1}}$$ is the $S_n$-module map given by $$\begin{aligned}
v_{i_k} \otimes \dotsb \otimes v_{i_1}
&\mapsto g_{i_k} \otimes g_{i_k}^{-1} g_{i_{k-1}} \otimes \dotsb \otimes g_{i_2}^{-1} g_{i_1} \otimes 1
\\
&\overset{\mathclap{\cref{hungry}}}{\mapsto}\ \delta_{i_j,i_{j+1}} g_{i_k} \otimes g_{i_k}^{-1} g_{i_{k-1}} \otimes \dotsb \otimes g_{i_{j+3}}^{-1} g_{i_{j+2}} \otimes g_{i_{j+2}}^{-1} g_{i_j} \otimes g_{i_j}^{-1} g_{i_{j-1}} \otimes \dotsb \otimes g_{i_2}^{-1} g_{i_1} \otimes 1
\\
&\mapsto \delta_{i_j,i_{j+1}} v_{i_k} \otimes \dotsb \otimes v_{i_{j+2}} \otimes v_{i_j} \otimes \dotsb \otimes v_{i_1}.
\end{aligned}$$ This is precisely the map $\Phi_n (1_{k-j-1} \otimes \mu \otimes 1_{j-1})$.
Similarly, we compute that $$\beta_{k+1}^{-1} \circ \left( \Omega_n \circ \Psi_n \left( 1_{k-j} \otimes {\delta}\otimes 1_{j-1} \right) \right) \circ \beta_k
\colon V^{\otimes k} \to V^{\otimes {k+1}}$$ is the $S_n$-module map given by $$\begin{aligned}
v_{i_k} \otimes \dotsb \otimes v_{i_1}
&\mapsto g_{i_k} \otimes g_{i_k}^{-1} g_{i_{k-1}} \otimes \dotsb \otimes g_{i_2}^{-1} g_{i_1} \otimes 1
\\
&\mapsto g_{i_k} \otimes g_{i_k}^{-1} g_{i_{k-1}} \otimes \dotsb \otimes g_{i_{j+1}}^{-1} g_{i_j} \otimes 1 \otimes g_{i_j}^{-1} g_{i_{j-1}} \otimes \dotsb \otimes g_{i_2}^{-1} g_{i_1} \otimes 1 \\
&= g_{i_k} \otimes g_{i_k}^{-1} g_{i_{k-1}} \otimes \dotsb \otimes g_{i_{j+1}}^{-1} g_{i_j} \otimes g_{i_j}^{-1} g_{i_j} \otimes g_{i_j}^{-1} g_{i_{j-1}} \otimes \dotsb \otimes g_{i_2}^{-1} g_{i_1} \otimes 1 \\
&\mapsto v_{i_k} \otimes \dotsb \otimes v_{i_{j+1}} \otimes v_{i_j} \otimes v_{i_j} \otimes v_{i_{j-1}} \otimes \dotsb \otimes v_{i_1}.
\end{aligned}$$ This is precisely the map $\Phi_n (1_{k-j} \otimes {\delta}\otimes 1_{j-1})$.
Now let $j \in \{1,\dotsc,k\}$. We compute that $$\beta_{k+1}^{-1} \circ \left( \Omega_n \circ \Psi_n \left( 1_{k-j} \otimes \eta \otimes 1_j \right) \right) \circ \beta_k
\colon V^{\otimes k} \to V^{\otimes {k+1}}$$ is the map $$\begin{aligned}
v_{i_k} &\otimes \dotsb \otimes v_{i_1}
\mapsto g_{i_k} \otimes g_{i_k}^{-1} g_{i_{k-1}} \otimes \dotsb \otimes g_{i_2}^{-1} g_{i_1} \otimes 1
\\
&\mapsto \sum_{m=1}^n g_{i_k} \otimes g_{i_k}^{-1} g_{i_{k-1}} \otimes \dotsb \otimes g_{i_{j+2}}^{-1} g_{i_{j+1}} \otimes g_{i_{j+1}}^{-1} g_m \otimes g_m^{-1} g_{i_j} \otimes g_{i_j}^{-1} g_{i_{j-1}} \otimes \dotsb \otimes g_{i_2}^{-1} g_{i_1} \otimes 1 \\
&\mapsto \sum_{m=1}^n v_{i_k} \otimes \dotsb \otimes v_{i_{j+1}} \otimes v_m \otimes v_{i_j} \otimes \dotsb \otimes v_{i_1}.
\end{aligned}$$ This is precisely the map $\Phi_n (1_{k-j} \otimes \eta \otimes 1_j)$.
We also compute that $$\beta_{k-1}^{-1} \circ \left( \Omega_n \circ \Psi_n \left( 1_{k-j} \otimes \varepsilon \otimes 1_{j-1} \right) \right) \circ \beta_k
\colon V^{\otimes k} \to V^{\otimes {k-1}}$$ is the map $$\begin{aligned}
v_{i_k} \otimes \dotsb \otimes v_{i_1}
&\mapsto g_{i_k} \otimes g_{i_k}^{-1} g_{i_{k-1}} \otimes \dotsb \otimes g_{i_2}^{-1} g_{i_1} \otimes 1
\\
&\mapsto g_{i_k} \otimes g_{i_k}^{-1} g_{i_{k-1}} \otimes \dotsb \otimes g_{i_{j+2}}^{-1} g_{i_{j+1}} \otimes g_{i_{j+1}}^{-1} g_{i_{j-1}} \otimes g_{i_{j-1}}^{-1} g_{i_{j-2}} \otimes \dotsb \otimes g_{i_2}^{-1} g_{i_1} \otimes 1
\\
&\mapsto v_{i_k} \otimes \dotsb \otimes v_{i_{j+1}} \otimes v_{i_{j-1}} \otimes \dotsb \otimes v_{i_1}.
\end{aligned}$$ This is precisely the map $\Phi_n (1_{k-j} \otimes \varepsilon \otimes 1_{j-1})$.
It remains to consider the generator $s$. Let $j \in \{1,2,\dotsc,k-1\}$. Define the elements $x,y \in \operatorname{End}_{{{\mathcal{H}\mathit{eis}}}}(\uparrow \downarrow \uparrow \downarrow)$ by $$\label{breakdown}
x =
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) to[out=up,in=down] (1,1);
\draw[<-] (0.5,0) to[out=up,in=down] (1.5,1);
\draw[->] (1,0) to[out=up,in=down] (0,1);
\draw[<-] (1.5,0) to[out=up,in=down] (0.5,1);
\end{tikzpicture}
\ ,\qquad
y =
\begin{tikzpicture}[anchorbase]
\draw[->] (1.2,0) -- (1.2,1);
\draw[->] (1.5,1) -- (1.5,0.9) arc (180:360:.25) -- (2,1);
\draw[<-] (1.5,0) -- (1.5,0.1) arc (180:0:.25) -- (2,0);
\draw[<-] (2.3,0) -- (2.3,1);
\end{tikzpicture}
\ .$$ Note that $$x = x_3 \circ x_2 \circ x_1,$$ where $$x_1 =
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) to (0,0.6);
\draw[<-] (0.5,0) to[out=up,in=down] (1,0.6);
\draw[->] (1,0) to[out=up,in=down] (0.5,0.6);
\draw[<-] (1.5,0) to (1.5,0.6);
\end{tikzpicture}
\ ,\quad
x_2 =
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) to[out=up,in=down] (0.5,0.6);
\draw[->] (0.5,0) to[out=up,in=down] (0,0.6);
\draw[<-] (1,0) to[out=up,in=down] (1.5,0.6);
\draw[<-] (1.5,0) to[out=up,in=down] (1,0.6);
\end{tikzpicture}
\ ,\quad
x_3 =
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) to (0,0.6);
\draw[->] (0.5,0) to[out=up,in=down] (1,0.6);
\draw[<-] (1,0) to[out=up,in=down] (0.5,0.6);
\draw[<-] (1.5,0) to (1.5,0.6);
\end{tikzpicture}
\ .$$ Suppose $i,j \in \{1,\dotsc,n\}$ and $h,h' \in {\Bbbk}S_n$. We first compute the action of $\Theta(x)$ and $\Theta(y)$ on the element $$\alpha = h g_i \otimes g_i^{-1} g_j \otimes g_j^{-1} h'
\in (n)_{n-1}(n)_{n-1}(n).$$ If $i = j$, then $g_i^{-1} g_j = 1$, and so $\Theta(x_1)(\alpha) = 0$. Now suppose $i < j$. Then we have $$g_i^{-1} g_j
= s_{n-1} \dotsm s_i s_j \dotsm s_{n-1}
= s_{j-1} \dotsm s_{n-2} s_{n-1} s_{n-2} \dotsm s_i.$$ Hence $$\Theta(x_1)(\alpha)
= h g_i s_{j-1} \dotsm s_{n-2} \otimes s_{n-2} \dotsm s_i g_j^{-1} h'
\in (n)_{n-2}(n).$$ Thus $$\Theta(x_2 \circ x_1)(\alpha)
= h g_i s_{j-1} \dotsm s_{n-1} \otimes g_i^{-1} g_j^{-1} h'
\in (n)_{n-2}(n),$$ and so $$\begin{aligned}
\Theta(x)(\alpha)
&= h g_i s_{j-1} \dotsm s_{n-1} \otimes s_{n-1} \otimes g_i^{-1} g_j^{-1} h' \\
&= h g_j g_i s_{n-1} \otimes s_{n-1} \otimes s_{n-2} \dotsb s_{j-1} g_i^{-1} h' \\
&= h g_j \otimes g_i s_{n-2} \dotsb s_{j-1} \otimes g_i^{-1} h' \\
&= h g_j \otimes g_j^{-1} g_i \otimes g_i^{-1} h'.
\end{aligned}$$ The case $i > j$ is similar, giving $$\Theta(x)(h g_i \otimes g_i^{-1} g_j \otimes g_j^{-1} h')
=
\begin{cases}
0 & \text{if } i=j, \\
h g_j \otimes g_j^{-1} g_i \otimes g_i^{-1} h' & \text{if } i \ne j.
\end{cases}$$ We also easily compute that $$\Theta(y)(h g_i \otimes g_i^{-1} g_j \otimes g_j^{-1} h')
=
\begin{cases}
h g_i \otimes 1 \otimes g_i^{-1} h' & \text{if } i=j, \\
0 & \text{if } i \ne j.
\end{cases}$$ Thus, for all $i,j \in \{1,\dotsc,n\}$, we have $$\label{turkey}
\Theta(x+y)(h g_i \otimes g_i^{-1} g_j \otimes g_j^{-1} h')
= h g_j \otimes g_j^{-1} g_i \otimes g_i^{-1} h'.$$ It now follows easily that $$\beta_k^{-1} \circ \left( \Omega_n \circ \Psi_n \left( 1_{k-j-1} \otimes s \otimes 1_{j-1} \right) \right) \circ \beta_k
= \beta_k^{-1} \circ \Omega_n \left(1_{\uparrow \downarrow}^{\otimes (k-j-1)} \otimes (x+y) \otimes 1_{\uparrow \downarrow}^{\otimes (j-1)} \right) \circ \beta_k$$ is the map given by $$v_{i_k} \otimes \dotsb \otimes v_{i_1}
\mapsto v_{i_k} \otimes \dotsb \otimes v_{i_{j+2}} \otimes v_{i_j} \otimes v_{i_{j+1}} \otimes v_{i_{j-1}} \otimes \dotsb \otimes v_{i_1},$$ which is precisely the map $\Phi_n \left( 1_{k-j-1} \otimes s \otimes 1_{j-1} \right)$.
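The reduced-word identity $g_i^{-1} g_j = s_{n-1} \dotsm s_i s_j \dotsm s_{n-1} = s_{j-1} \dotsm s_{n-2} s_{n-1} s_{n-2} \dotsm s_i$ used in the computation above is easy to confirm numerically. The Python sketch below assumes $g_i = s_i s_{i+1} \dotsm s_{n-1}$ (a hypothetical choice of the representatives from \[gi-def\], consistent with $g_i v_n = v_i$) and checks both expressions against $g_i^{-1} g_j$ for all $1 \le i < j \le n$ in $S_6$:

```python
from itertools import combinations

def compose(p, q):
    """(p o q)(x) = p(q(x)); permutations stored as tuples of 1-based images."""
    return tuple(p[q[x] - 1] for x in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for x, px in enumerate(p, start=1):
        inv[px - 1] = x
    return tuple(inv)

def s(i, n):
    """Simple transposition s_i = (i, i+1) in S_n."""
    sigma = list(range(1, n + 1))
    sigma[i - 1], sigma[i] = sigma[i], sigma[i - 1]
    return tuple(sigma)

def word(indices, n):
    """Left-to-right product s_{a_1} s_{a_2} ... (rightmost factor acts first)."""
    p = tuple(range(1, n + 1))
    for i in indices:
        p = compose(p, s(i, n))
    return p

n = 6
for i, j in combinations(range(1, n + 1), 2):    # all pairs with i < j
    g_i = word(range(i, n), n)                   # assumed g_i = s_i s_{i+1} ... s_{n-1}
    g_j = word(range(j, n), n)
    lhs = compose(inverse(g_i), g_j)
    w1 = word(list(range(n - 1, i - 1, -1)) + list(range(j, n)), n)       # s_{n-1}...s_i s_j...s_{n-1}
    w2 = word(list(range(j - 1, n)) + list(range(n - 2, i - 1, -1)), n)   # s_{j-1}...s_{n-1}...s_i
    assert lhs == w1 == w2
print("ok")
```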
\[faithful\] The functor $\Psi_t$ is faithful.
We give here a proof under the assumption that ${\Bbbk}$ is of characteristic zero. The general case will be treated in \[appendix\].
It suffices to show that given $k,\ell \ge 0$ the linear map $$\Psi_t(k,\ell) \colon \operatorname{Hom}_{{\mathcal{P}\mathit{ar}}(t)}(k,\ell)
\to\operatorname{Hom}_{{{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}}\big( (\uparrow\downarrow)^k, (\uparrow\downarrow)^\ell \big)$$ is injective. Consider ${\mathcal{P}\mathit{ar}}(t)$ over ${\Bbbk}[t]$ and suppose $$f = \sum_{i=1}^m a_i(t) f_i \in \ker \left( \Psi_t(k,\ell) \right)$$ for some $a_i \in {\Bbbk}[t]$ and partition diagrams $f_i$. Choose $n \ge k + \ell$ and evaluate at $t=n$ to get $$f_n = \sum_{i=1}^m a_i(n) f_i \in \operatorname{Hom}_{{\mathcal{P}\mathit{ar}}(n)}(k,\ell).$$ (Here ${\mathcal{P}\mathit{ar}}(n)$ is a ${\Bbbk}$-linear category.) implies that $\Phi_n(f_n)=0$. Then \[bee\] implies $f_n=0$. Since the partition diagrams form a basis for the morphisms spaces in ${\mathcal{P}\mathit{ar}}(n)$, we have $a_i(n) = 0$ for all $i$. Since this holds for all $n \ge k + \ell$, we have $a_i = 0$ for all $i$. (Here we use that the characteristic of ${\Bbbk}$ is zero.) Hence $f=0$ and so $\Psi_t$ is faithful.
Since any faithful linear monoidal functor induces a faithful linear monoidal functor on additive Karoubi envelopes, we obtain the following corollary.
The functor $\Psi_t$ induces a faithful linear monoidal functor from Deligne’s category ${{\mathrm{Rep}}}(S_t)$ to the additive Karoubi envelope $\operatorname{Kar}({{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}(t))$ of ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}(t)$.
Note that $\Psi_t$ is *not* full. This follows immediately from the fact that the morphism spaces in ${\mathcal{P}\mathit{ar}}(t)$ are finite-dimensional, while those in ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}$ are infinite-dimensional, as follows from the explicit basis described in [@Kho14 Prop. 5] (see also [@BSW18 Th. 6.4]).
Grothendieck rings\[sec:Groth\]
===============================
In this section, we assume that ${\Bbbk}$ is a field of characteristic zero. We consider ${\mathcal{P}\mathit{ar}}(t)$ over the ground ring ${\Bbbk}[t]$ and ${{\mathcal{H}\mathit{eis}}}$ over ${\Bbbk}$. We can then view $\Psi_t$ as a ${\Bbbk}$-linear functor ${\mathcal{P}\mathit{ar}}(t) \to {{\mathcal{H}\mathit{eis}}}$ as noted in \[bike\]\[wheel\].
For an additive linear monoidal category ${\mathcal{C}}$, we let $K_0({\mathcal{C}})$ denote its split Grothendieck ring. The multiplication in $K_0({\mathcal{C}})$ is given by $[X] [Y] = [X \otimes Y]$, where $[X]$ denotes the class in $K_0({\mathcal{C}})$ of an object $X$ in ${\mathcal{C}}$.
Recall that Deligne’s category ${{\mathrm{Rep}}}(S_t)$ is the additive Karoubi envelope $\operatorname{Kar}({\mathcal{P}\mathit{ar}}(t))$ of the partition category. The additive monoidal functor $\Psi_t$ of \[functordef\] induces a ring homomorphism $$\label{juice}
[\Psi_t] \colon K_0({{\mathrm{Rep}}}(S_t)) \to K_0(\operatorname{Kar}({{\mathcal{H}\mathit{eis}}})),\quad
[\Psi_t]([X]) = [\Psi_t(X)].$$ The main result of this section (\[finally\]) is a precise description of this homomorphism.
Let ${\mathcal{Y}}$ denote the set of Young diagrams $\lambda = (\lambda_1,\lambda_2,\dotsc,\lambda_\ell)$, $\lambda_1 \ge \lambda_2 \ge \dotsb \ge \lambda_\ell > 0$. (We avoid the terminology *partition* here to avoid confusion with the partition category.) For a Young diagram $\lambda \in {\mathcal{Y}}$, we let $|\lambda|$ denote its size (i.e. the sum of its parts). Let $\operatorname{Sym}$ denote the ring of symmetric functions with integer coefficients. Then $\operatorname{Sym}$ has a ${\mathbb{Z}}$-basis given by the Schur functions $s_\lambda$, $\lambda \in {\mathcal{Y}}$. We have $$\label{Symp}
\operatorname{Sym}_{\mathbb{Q}}:= {\mathbb{Q}}\otimes_{\mathbb{Z}}\operatorname{Sym}\cong {\mathbb{Q}}[p_1,p_2,\dotsc]
= \bigoplus_{\lambda \in {\mathcal{Y}}} {\mathbb{Q}}p_\lambda,$$ where $p_n$ denotes the $n$-th power sum and $p_\lambda = p_{\lambda_1} \dotsm p_{\lambda_k}$ for a Young diagram $\lambda = (\lambda_1,\dotsc,\lambda_k)$.
The *infinite-dimensional Heisenberg Lie algebra* ${\mathfrak{h}}$ is the Lie algebra over ${\mathbb{Q}}$ generated by $\{p_n^\pm, c : n \ge 1\}$ subject to the relations $$[p_m^-,p_n^-] = [p_m^+, p_n^+] = [c, p_n^\pm] = 0,\quad
[p_m^+, p_n^-] = \delta_{m,n} n c.$$ The central reduction $U({\mathfrak{h}})/(c+1)$ of its universal enveloping algebra can also be realized as the *Heisenberg double* $\operatorname{Sym}_{\mathbb{Q}}\#_{\mathbb{Q}}\operatorname{Sym}_{\mathbb{Q}}$ with respect to the bilinear Hopf pairing $$\langle -, - \rangle \colon \operatorname{Sym}_{\mathbb{Q}}\times \operatorname{Sym}_{\mathbb{Q}}\to {\mathbb{Q}},\quad
\langle p_m, p_n \rangle = \delta_{m,n} n.$$ By definition, $\operatorname{Sym}_{\mathbb{Q}}\#_{\mathbb{Q}}\operatorname{Sym}_{\mathbb{Q}}$ is the vector space $\operatorname{Sym}_{\mathbb{Q}}\otimes_{\mathbb{Q}}\operatorname{Sym}_{\mathbb{Q}}$ with associative multiplication given by $$(e \otimes f)(g \otimes h)
= \sum_{(f),(g)} \langle f_{(1)}, g_{(2)} \rangle e g_{(1)} \otimes f_{(2)} h,$$ where we use Sweedler notation for the usual coproduct on $\operatorname{Sym}_{\mathbb{Q}}$ determined by $$\label{burrito}
p_n \mapsto p_n \otimes 1 + 1 \otimes p_n.$$ Comparing the coefficients appearing in [@Ber17 Th. 5.3] to [@Ber17 (2.2)], we see that the pairing of two complete symmetric functions is an integer. (Note that our $p_n^\pm$ are denoted $p_n^\mp$ in [@Ber17].) We can therefore restrict $\langle -, - \rangle$ to obtain a biadditive form $\langle -, - \rangle \colon \operatorname{Sym}\otimes_{\mathbb{Z}}\operatorname{Sym}\to {\mathbb{Z}}$. The corresponding Heisenberg double $${\mathrm{Heis}}:= \operatorname{Sym}\#_{\mathbb{Z}}\operatorname{Sym}$$ is a natural ${\mathbb{Z}}$-form for $U({\mathfrak{h}})/(c+1) \cong \operatorname{Sym}_{\mathbb{Q}}\#_{\mathbb{Q}}\operatorname{Sym}_{\mathbb{Q}}$. For $f \in \operatorname{Sym}$ we let $f^-$ and $f^+$ denote the elements $f \otimes 1$ and $1 \otimes f$ of ${\mathrm{Heis}}$, respectively.
Recall the algebra homomorphisms \[garage,SunnyD\], which we use to view elements of ${\Bbbk}S_k$ as endomorphisms in the partition and Heisenberg categories. In particular, the homomorphisms \[SunnyD\] induce a natural algebra homomorphism $$\label{twostep}
{\Bbbk}S_k \otimes_{\Bbbk}{\Bbbk}S_k \to \operatorname{End}_{{\mathcal{H}\mathit{eis}}}(\uparrow^k \downarrow^k).$$ We will use this homomorphism to view elements of ${\Bbbk}S_k \otimes {\Bbbk}S_k$ as elements of $\operatorname{End}_{{\mathcal{H}\mathit{eis}}}(\uparrow^k \downarrow^k)$.
One can deduce explicit presentations of ${\mathrm{Heis}}$ (see [@Ber17 §5] and [@LRS18 Appendix A]), but we will not need such presentations here. Important for our purposes is that $$s_\lambda^+ s_\mu^-,\quad \lambda,\mu \in {\mathcal{Y}},$$ is a ${\mathbb{Z}}$-basis for ${\mathrm{Heis}}$, and that there is an isomorphism of rings $$\label{K0Heis}
{\mathrm{Heis}}\xrightarrow{\cong} K_0(\operatorname{Kar}({{\mathcal{H}\mathit{eis}}})), \quad
s_\lambda^+ s_\mu^- \mapsto [(\uparrow^{|\lambda|} \downarrow^{|\mu|}, e_\lambda \otimes e_\mu)],\quad
\lambda,\mu \in {\mathcal{Y}},$$ where $e_\lambda$ is the Young symmetrizer corresponding to the Young diagram $\lambda$. We adopt the convention that $e_\varnothing = 1$ and $s_\varnothing = 1$, where $\varnothing$ denotes the empty Young diagram of size $0$. The isomorphism \[K0Heis\] was conjectured in [@Kho14 Conj. 1] and proved in [@BSW18 Th. 1.1]. Via the isomorphism \[K0Heis\], we will identify $K_0(\operatorname{Kar}({{\mathcal{H}\mathit{eis}}}))$ and ${\mathrm{Heis}}$ in what follows.
Recall that there is an isomorphism of Hopf algebras $$\label{pink}
\bigoplus_{n=0}^\infty K_0(S_n{\textup{-mod}}) \cong \operatorname{Sym},\quad [{\Bbbk}S_n e_\lambda] \mapsto s_\lambda.$$ The product on $\bigoplus_{n=0}^\infty K_0(S_n{\textup{-mod}})$ is given by $$[M] \cdot [N] = \left[ \operatorname{Ind}_{S_m \times S_n}^{S_{m+n}} (M \boxtimes N) \right],\quad
M \in S_m{\textup{-mod}},\ N \in S_n{\textup{-mod}},$$ while the coproduct \[burrito\] is given by $$[K] \mapsto \sum_{n+m=k} \left[ \operatorname{Res}^{S_k}_{S_m \times S_n} K \right],\quad K \in S_k{\textup{-mod}}.$$
In addition to the coproduct \[burrito\], there is another well-studied coproduct on $\operatorname{Sym}_{\mathbb{Q}}$, the *Kronecker coproduct*, which is given by $${\Delta_{\mathrm{Kr}}}\colon \operatorname{Sym}_{\mathbb{Q}}\to \operatorname{Sym}_{\mathbb{Q}}\otimes_{\mathbb{Q}}\operatorname{Sym}_{\mathbb{Q}},\quad
{\Delta_{\mathrm{Kr}}}(p_\lambda) = p_\lambda \otimes p_\lambda.$$ It is dual to the Kronecker (or internal) product on $\operatorname{Sym}_{\mathbb{Q}}$. Restriction to $\operatorname{Sym}$ gives a coproduct $$\label{milk}
{\Delta_{\mathrm{Kr}}}\colon \operatorname{Sym}\to \operatorname{Sym}\otimes_{\mathbb{Z}}\operatorname{Sym}.$$ The fact that the restriction of ${\Delta_{\mathrm{Kr}}}$ to $\operatorname{Sym}$ lands in $\operatorname{Sym}\otimes_{\mathbb{Z}}\operatorname{Sym}$ is implied by the following categorical interpretation of the Kronecker coproduct. The diagonal embedding $S_n \to S_n \times S_n$ extends by linearity to an injective algebra homomorphism $$\label{diagonal}
d \colon {\Bbbk}S_n \to {\Bbbk}S_n \otimes_{\Bbbk}{\Bbbk}S_n.$$ Under the isomorphism \[pink\], the functor $$S_n{\textup{-mod}}\to (S_n \times S_n){\textup{-mod}},\quad M \mapsto \operatorname{Ind}_{S_n}^{S_n \times S_n}(M)$$ corresponds precisely to ${\Delta_{\mathrm{Kr}}}$ after passing to Grothendieck groups. (See [@Lit56].)
Now view the Kronecker coproduct as a linear map $$\label{chariot}
{\Delta_{\mathrm{Kr}}}\colon \operatorname{Sym}\to \operatorname{Sym}\#_{\mathbb{Z}}\operatorname{Sym}= {\mathrm{Heis}}.$$ It is clear that the map \[milk\] is a ring homomorphism with the product ring structure on $\operatorname{Sym}\otimes_{\mathbb{Z}}\operatorname{Sym}$. In fact, it turns out that we also have the following.
The map \[chariot\] is an injective ring homomorphism.
We prove the result over ${\mathbb{Q}}$; then the statement follows by restriction to $\operatorname{Sym}$. By \[Symp\], it suffices to prove that ${\Delta_{\mathrm{Kr}}}(p_n)$ and ${\Delta_{\mathrm{Kr}}}(p_m)$ commute for $n,m \in {\mathbb{N}}$. Since, for $n \ne m$, $${\Delta_{\mathrm{Kr}}}(p_n) {\Delta_{\mathrm{Kr}}}(p_m)
= p_n^+ p_n^- p_m^+ p_m^-
= p_m^+ p_m^- p_n^+ p_n^-
= {\Delta_{\mathrm{Kr}}}(p_m) {\Delta_{\mathrm{Kr}}}(p_n),$$ we see that ${\Delta_{\mathrm{Kr}}}$ is a ring homomorphism. It is clear that it is injective.
Our first step in describing the map \[juice\] is to decompose the objects $(\uparrow \downarrow)^k$ appearing in the image of $\Psi_t$. Recall that the Stirling number of the second kind $\stirling{k}{\ell}$, $k, \ell \in {\mathbb{N}}$, counts the number of ways to partition a set of $k$ labelled objects into $\ell$ nonempty unlabelled subsets. These numbers are given by $$\stirling{k}{\ell} = \frac{1}{\ell!} \sum_{i=0}^\ell (-1)^i \binom{\ell}{i} (\ell-i)^k$$ and are determined by the recursion relation $$\stirling{k+1}{\ell} = \ell \stirling{k}{\ell} + \stirling{k}{\ell-1}
\quad \text{with} \quad
\stirling{0}{0}=1
\quad \text{and} \quad
\stirling{k}{0} = \stirling{0}{k} = 0,\ k > 0.$$
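Both the closed formula and the recursion for $\stirling{k}{\ell}$ are easy to cross-check numerically; a minimal Python sketch (the helper name `stirling2` is ours):

```python
from math import comb, factorial

def stirling2(k, l):
    """Stirling number of the second kind via the inclusion-exclusion formula."""
    if l == 0:
        return 1 if k == 0 else 0
    return sum((-1) ** i * comb(l, i) * (l - i) ** k for i in range(l + 1)) // factorial(l)

# Recursion S(k+1, l) = l*S(k, l) + S(k, l-1)
for k in range(8):
    for l in range(1, 9):
        assert stirling2(k + 1, l) == l * stirling2(k, l) + stirling2(k, l - 1)

# Known values, e.g. row k = 4: S(4, 0..4) = 0, 1, 7, 6, 1
assert [stirling2(4, l) for l in range(5)] == [0, 1, 7, 6, 1]
print("ok")
```

The same recursion is exactly what drives the induction step in the proof of the decomposition below.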
In ${{\mathcal{H}\mathit{eis}}}$, we have $$\label{jungle}
(\uparrow \downarrow)^k
\cong\ \bigoplus_{\ell=1}^k (\uparrow^\ell \downarrow^\ell)^{\oplus \stirling{k}{\ell}}.$$ In particular, since $\stirling{k}{k}=1$, the summand $\uparrow^k \downarrow^k$ appears with multiplicity one.
First note that repeated use of the isomorphism \[key\] gives $$\label{mail}
\uparrow \downarrow \uparrow^k \downarrow^k
\ \cong\ \uparrow^{k+1} \downarrow^{k+1} \oplus (\uparrow^k \downarrow^k)^{\oplus k}.$$ We now prove the lemma by induction on $k$. The case $k=1$ is immediate. Suppose the result holds for some $k \ge 1$. Then we have $$\begin{gathered}
(\uparrow \downarrow)^{k+1}
\cong (\uparrow \downarrow) \left( \bigoplus_{\ell=1}^k (\uparrow^\ell \downarrow^\ell)^{\oplus \stirling{k}{\ell}} \right)
\overset{\cref{mail}}{\cong} \bigoplus_{\ell=1}^k \left( (\uparrow^{\ell+1} \downarrow^{\ell+1})^{\oplus \stirling{k}{\ell}} \oplus (\uparrow^\ell \downarrow^\ell)^{\oplus \ell \stirling{k}{\ell}} \right)
\\
\cong\ \bigoplus_{\ell=1}^{k+1} (\uparrow^\ell \downarrow^{\ell})^{\oplus \left( \ell \stirling{k}{\ell} + \stirling{k}{\ell-1} \right)}
\cong\ \bigoplus_{\ell=1}^{k+1} (\uparrow^\ell \downarrow^{\ell})^{\oplus \stirling{k+1}{\ell}}. \qedhere
\end{gathered}$$
Recall that, under \[twostep\], for each Young diagram $\lambda$ of size $k$, we have the idempotent $$d(e_\lambda) \in \operatorname{End}_{{\mathcal{H}\mathit{eis}}}(\uparrow^k \downarrow^k),$$ where $d$ is the map \[diagonal\]. Recall also the definition $P_k(t) = \operatorname{End}_{{\mathcal{P}\mathit{ar}}(t)}(k)$ of the partition algebra. Let $$\xi =
\begin{tikzpicture}[anchorbase]
\filldraw[black] (0,0) circle (1.5pt);
\filldraw[black] (0.5,0) circle (1.5pt);
\filldraw[black] (0,0.5) circle (1.5pt);
\filldraw[black] (0.5,0.5) circle (1.5pt);
\draw (0,0) to (0.5,0) to (0.5,0.5) to (0,0.5) to (0,0);
\end{tikzpicture}
\in P_2(t)
\quad \text{and} \quad
\xi_i = 1_{k-i-1} \otimes \xi \otimes 1_{i-1} \in P_k(t)
\quad \text{for } 1 \le i \le k-1.$$ It is straightforward to verify that the intersection of $P_k(t)$ with the tensor ideal of ${\mathcal{P}\mathit{ar}}(t)$ generated by $\xi$ is equal to the ideal $(\xi_1)$ of $P_k(t)$ generated by $\xi_1$. Denote this ideal by $P_k^\xi(t)$.
As noted in [@CO11 Lem. 3.1(2)], we have an isomorphism $$\label{dinobot}
P_k(t)/ P_k^\xi(t) \cong {\Bbbk}S_k,\quad a + P_k^\xi(t) \mapsto a,\quad a \in {\Bbbk}S_k,$$ where we view elements of ${\Bbbk}S_k$ as elements of $P_k(t)$ via the homomorphism \[garage\]. This observation allows one to classify the primitive idempotents in $P_k(t)$ by induction on $k$. This classification was first given by Martin in [@Mar96].
\[prims\] For $k > 0$, the primitive idempotents in $P_k(t)$, up to conjugation, are in bijection with the set of Young diagrams $\lambda \in {\mathcal{Y}}$ with $0 < |\lambda| \le k$. Furthermore:
1. Under this bijection, idempotents lying in $P_k^\xi(t)$ correspond to Young diagrams $\lambda$ with $0 < |\lambda| < k$.
2. For each Young diagram $\lambda$ of size $k$, we can choose a primitive idempotent $f_\lambda \in P_k(t)$ corresponding to $\lambda$ so that $f_\lambda + P_k^\xi(t)$ maps to the Young symmetrizer $e_\lambda$ under the isomorphism \[dinobot\].
This follows as in the proof of [@CO11 Th. 3.4]. Note that since $t$ is generic, $$\eta \circ \varepsilon
=\
\begin{tikzpicture}[anchorbase]
\filldraw[black] (0,0) circle (1.5pt);
\filldraw[black] (0,0.5) circle (1.5pt);
\end{tikzpicture}$$ is not an idempotent in $P_1(t)$. Thus, the argument proceeds as in the $t=0$ case in [@CO11 Th. 3.4].
For $\lambda \in {\mathcal{Y}}$, define the indecomposable object of ${{\mathrm{Rep}}}(S_t)$ $$L(\lambda) := (|\lambda|,f_\lambda).$$
\[indecs\] Fix an integer $k \ge 0$. The map $$\lambda \mapsto L(\lambda),\quad \lambda \in {\mathcal{Y}},$$ gives a bijection from the set of $\lambda \in {\mathcal{Y}}$ with $0 \le |\lambda| \le k$ to the set of nonzero indecomposable objects in ${{\mathrm{Rep}}}(S_t)$ of the form $(m,e)$ with $m \le k$, up to isomorphism. Furthermore:
1. If $\lambda \in {\mathcal{Y}}$ with $0 < |\lambda| \le k$, then there exists an idempotent $e \in P_k(t)$ with $(k,e) \cong L(\lambda)$.
2. We have that $(0,1_0)$ is the unique object of the form $(m,e)$ that is isomorphic to $L(\varnothing)$.
This follows as in the proof of [@CO11 Lem. 3.6]. Again, our assumption that $t$ is generic implies that we proceed as in the $t=0$ case of [@CO11 Lem. 3.6].
We are now ready to prove the main result of this section.
\[finally\] The homomorphism $[\Psi_t]$ of \[juice\] is injective and its image is $$[\Psi_t] \big( K_0({{\mathrm{Rep}}}(S_t)) \big) = {\Delta_{\mathrm{Kr}}}(\operatorname{Sym}) \subseteq {\mathrm{Heis}},$$ where ${\mathrm{Heis}}$ is identified with $K_0({{\mathcal{H}\mathit{eis}}})$ as in \[K0Heis\].
For $k \in {\mathbb{N}}$, let ${{\mathrm{Rep}}}_k(S_t)$ denote the full subcategory of ${{\mathrm{Rep}}}(S_t)$ containing the objects of the form $(k,e)$. By \[indecs\], ${{\mathrm{Rep}}}_k(S_t)$ is also the full subcategory of ${{\mathrm{Rep}}}(S_t)$ containing the objects of the form $(m,e)$, $m \le k$.
We prove by induction on $k$ that the restriction of $[\Psi_t]$ to $K_0({{\mathrm{Rep}}}_k(S_t))$ is injective, and that $$[\Psi_t](K_0({{\mathrm{Rep}}}_k(S_t))) = {\Delta_{\mathrm{Kr}}}(\operatorname{Sym}_{\le k}),$$ where $\operatorname{Sym}_{\le k}$ denotes the subspace of $\operatorname{Sym}$ spanned by symmetric functions of degree $\le k$. The case $k=0$ is immediate.
Suppose $k \ge 1$. The components $\uparrow^k \downarrow^k \to (\uparrow \downarrow)^k$ and $(\uparrow \downarrow)^k \to\ \uparrow^k \downarrow^k$ of the isomorphism \[jungle\] are $$\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) to (0,2);
\draw[->] (0.5,0) to[out=up,in=down] (1,2);
\node at (1.1,0.3) {$\cdots$};
\draw[->] (1.5,0) to[out=up,in=down] (3,2);
\draw[<-] (2,0) to[out=up,in=down] (0.5,2);
\draw[<-] (2.5,0) to[out=up,in=down] (1.5,2);
\node at (3,0.3) {$\cdots$};
\draw[<-] (3.5,0) to[out=up,in=down] (3.5,2);
\node at (2.3,1.7) {$\cdots$};
\end{tikzpicture}
\qquad \text{and} \qquad
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,0) to (0,-2);
\draw[<-] (0.5,0) to[out=down,in=up] (1,-2);
\node at (1.1,-0.3) {$\cdots$};
\draw[<-] (1.5,0) to[out=down,in=up] (3,-2);
\draw[->] (2,0) to[out=down,in=up] (0.5,-2);
\draw[->] (2.5,0) to[out=down,in=up] (1.5,-2);
\node at (3,-0.3) {$\cdots$};
\draw[->] (3.5,0) to[out=down,in=up] (3.5,-2);
\node at (2.3,-1.7) {$\cdots$};
\end{tikzpicture}$$ respectively. Consider the morphism $$\label{cadet}
\Psi_t(s_i - \xi_i)
=
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) to (0,1);
\draw[<-] (0.5,0) to (0.5,1);
\node at (1.25,0.5) {$\cdots$};
\draw[->] (2,0) to (2,1);
\draw[<-] (2.5,0) to (2.5,1);
\draw[->] (3,0) to[out=up,in=down] (4,1);
\draw[<-] (3.5,0) to[out=up,in=down] (4.5,1);
\draw[->] (4,0) to[out=up,in=down] (3,1);
\draw[<-] (4.5,0) to[out=up,in=down] (3.5,1);
\draw[->] (5,0) to (5,1);
\draw[<-] (5.5,0) to (5.5,1);
\node at (6.25,0.5) {$\cdots$};
\draw[->] (7,0) to (7,1);
\draw[<-] (7.5,0) to (7.5,1);
\end{tikzpicture}
\ \in \operatorname{End}_{{\mathcal{H}\mathit{eis}}}\big( (\uparrow \downarrow)^k \big).$$ Under the isomorphism \[jungle\], this corresponds to $$\begin{tikzpicture}[anchorbase]
\draw[->] (0,-2) to (0,2);
\draw[->] (1,-2) to[out=up,in=down] (2,-0.5) to (2,0.5) to[out=up,in=down] (1,2);
\draw[->] (1.5,-2) to[out=up,in=down] (3,-0.5) to[out=up,in=down] (4,0.5) to[out=up,in=down] (2,2);
\draw[->] (2,-2) to[out=up,in=down] (4,-0.5) to[out=up,in=down] (3,0.5) to[out=up,in=down] (1.5,2);
\draw[->] (2.5,-2) to[out=up,in=down] (5,-0.5) to (5,0.5) to[out=up,in=down] (2.5,2);
\draw[->] (3.5,-2) to[out=up,in=down] (7,-0.5) to (7,0.5) to[out=up,in=down] (3.5,2);
\draw[<-] (4,-2) to[out=up,in=down] (0.5,-0.5) to (0.5,0.5) to[out=up,in=down] (4,2);
\draw[<-] (5,-2) to[out=up,in=down] (2.5,-0.5) to (2.5,0.5) to[out=up,in=down] (5,2);
\draw[<-] (5.5,-2) to[out=up,in=down] (3.5,-0.5) to[out=up,in=down] (4.5,0.5) to[out=up,in=down] (5.5,2);
\draw[<-] (6,-2) to[out=up,in=down] (4.5,-0.5) to[out=up,in=down] (3.5,0.5) to[out=up,in=down] (6,2);
\draw[<-] (6.5,-2) to[out=up,in=down] (5.5,-0.5) to (5.5,0.5) to[out=up,in=down] (6.5,2);
\draw[<-] (7.5,-2) to (7.5,2);
\node at (0.5,-1.8) {$\cdots$};
\node at (3,-1.8) {$\cdots$};
\node at (4.5,-1.8) {$\cdots$};
\node at (7,-1.8) {$\cdots$};
\node at (1.25,0) {$\cdots$};
\node at (6.25,0) {$\cdots$};
\node at (0.5,1.8) {$\cdots$};
\node at (3,1.8) {$\cdots$};
\node at (4.5,1.8) {$\cdots$};
\node at (7,1.8) {$\cdots$};
\end{tikzpicture}
\overset{\cref{H3}}{=} s_i \otimes s_i = d(s_i) \in \operatorname{End}_{{\mathcal{H}\mathit{eis}}}(\uparrow^k \downarrow^k).$$ It follows that, for any Young diagram $\lambda$ of size $k$, we have $\Psi_t(e_\lambda) - d(e_\lambda) \in \Psi_t(P_k^\xi(t))$. Since $f_\lambda - e_\lambda \in P_k^\xi(t)$ by \[prims\], this implies that $$\Psi_t(f_\lambda) - d(e_\lambda) \in \Psi_t(P_k^\xi(t)).$$ Thus, by \[prims\] and the induction hypothesis, we have $$[\Psi_t(L(\lambda))] - {\Delta_{\mathrm{Kr}}}(s_\lambda)
= [\Psi_t(L(\lambda))] - [\uparrow^k \downarrow^k, d(e_\lambda)]
\in {\Delta_{\mathrm{Kr}}}(\operatorname{Sym}_{\le (k-1)}).$$ Since the $s_\lambda$ with $|\lambda| = k$ span the space of degree $k$ symmetric functions, we are done.
As an immediate corollary of \[finally\], we recover the following result of [@Del07 Cor. 5.12]. (The $T_n$ of [@Del07] correspond to the complete symmetric functions.)
\[K0P\] We have an isomorphism of rings $K_0({{\mathrm{Rep}}}(S_t)) \cong \operatorname{Sym}$.
The Grothendieck group/ring is one method of decategorification. Another is the trace, or zeroth Hochschild homology. We refer the reader to [@BGHL14] for details. The functor $\Psi_t$ induces a ring homomorphism on traces. We conclude with a brief discussion of this induced map. First, note that the trace of a category is isomorphic to the trace of its additive Karoubi envelope. (See [@BGHL14 Prop. 3.2].) Thus, $\operatorname{Tr}({\mathcal{P}\mathit{ar}}(t)) \cong \operatorname{Tr}({{\mathrm{Rep}}}(S_t))$. In addition, our assumption that $t$ is generic (in particular, $t \notin {\mathbb{N}}$) implies that ${{\mathrm{Rep}}}(S_t)$ is semisimple. (See [@Del07 Th. 2.18].) It follows that the Chern character map $$h \colon K_0({{\mathrm{Rep}}}(S_t)) \to \operatorname{Tr}({{\mathrm{Rep}}}(S_t))$$ is an isomorphism. (See [@Sav18 Prop. 5.4].) Hence $\operatorname{Tr}({\mathcal{P}\mathit{ar}}(t)) \cong \operatorname{Tr}({{\mathrm{Rep}}}(S_t)) \cong \operatorname{Sym}$ by \[K0P\]. On the other hand, the trace of the Heisenberg category was computed in [@CLLS18 Th. 1] and shown to be equal to a quotient of the W-algebra $W_{1+\infty}$ by a certain ideal $I$. This quotient contains the Heisenberg algebra ${\mathrm{Heis}}$ and the Chern character map induces an injective ring homomorphism $${\mathrm{Heis}}\cong K_0({{\mathcal{H}\mathit{eis}}}) \to \operatorname{Tr}({{\mathcal{H}\mathit{eis}}}) \cong W_{1+\infty}/I.$$ It follows that the functor $\Psi_t$ induces an injective ring homomorphism $$\operatorname{Sym}\cong \operatorname{Tr}({{\mathrm{Rep}}}(S_t)) \to \operatorname{Tr}({{\mathcal{H}\mathit{eis}}}) \cong W_{1+\infty}/I,$$ and the image of this map is ${\Delta_{\mathrm{Kr}}}(\operatorname{Sym}) \subseteq {\mathrm{Heis}}\subseteq W_{1+\infty}/I$.
Faithfulness over any commutative ring\
(with Christopher Ryba)\[appendix\]
=======================================
In this appendix we prove \[faithful\] over an arbitrary commutative ring ${\Bbbk}$.
We say a partition diagram is a *permutation* if it is the image of an element of $S_k$, $k \in {\mathbb{N}}$, under the map \[garage\]. A partition diagram is *planar* if it can be represented as a graph without edge crossings inside of the rectangle formed by its vertices. Note that a partition diagram is planar if and only if it is the tensor product (horizontal juxtaposition) of partition diagrams with a single block.
Every partition diagram $D$ can be factored as a product $D = D_1 \circ D_2 \circ D_3$, where $D_1$ and $D_3$ are permutations and $D_2$ is planar. See, for example, [@HR05 p. 874]. The number of blocks in $D$ is clearly equal to the number of blocks in $D_2$. For example, the partition diagram $$D =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (1.5,0) circle (1.5pt)};
{\filldraw[black] (2,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
{\filldraw[black] (1.5,0.5) circle (1.5pt)};
{\filldraw[black] (2,0.5) circle (1.5pt)};
\draw (0,0.5) to[out=down,in=down] (1,0.5);
\draw (0.5,0) to[out=up,in=up,looseness=0.5] (1,0);
\draw (0.5,0.5) to[out=down,in=down,looseness=0.5] (1.5,0.5);
\draw (1,0) to[out=up,in=down] (2,0.5);
\draw (2,0) to[out=up,in=down] (1.5,0.5);
\end{tikzpicture}$$ has four blocks and decomposition $D = D_1 \circ D_2 \circ D_3$, where $$D_1 =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (1.5,0) circle (1.5pt)};
{\filldraw[black] (2,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
{\filldraw[black] (1.5,0.5) circle (1.5pt)};
{\filldraw[black] (2,0.5) circle (1.5pt)};
\draw (0,0) to[out=up,in=down] (0,0.5);
\draw (1.5,0) to[out=up,in=down] (1.5,0.5);
\draw (2,0) to[out=up,in=down] (2,0.5);
\draw (0.5,0) to[out=up,in=down] (1,0.5);
\draw (1,0) to[out=up,in=down] (0.5,0.5);
\end{tikzpicture},
\quad
D_2 =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (1.5,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
{\filldraw[black] (1.5,0.5) circle (1.5pt)};
{\filldraw[black] (2,0.5) circle (1.5pt)};
\draw (0,0.5) to[out=down,in=down] (0.5,0.5);
\draw (0.5,0) to[out=up,in=down] (1,0.5) to[out=down,in=down] (1.5,0.5);
\draw (1,0) to[out=up,in=up] (1.5,0) to[out=up,in=down] (2,0.5);
\end{tikzpicture}
=
\begin{tikzpicture}[>=To,baseline={(0,0.15)}]
{\filldraw[black] (0,0) circle (1.5pt)};
\draw (0,0) to (0,0.25);
\node at (0,.5) {};
\end{tikzpicture}
\otimes
\begin{tikzpicture}[>=To,baseline={(0,0.15)}]
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
\draw (0,0.5) to[out=down,in=down,looseness=1.5] (0.5,0.5);
\node at (0,0) {};
\end{tikzpicture}
\otimes
\begin{tikzpicture}[>=To,baseline={(0,0.15)}]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
\draw (0,0) to (0,0.5) to[out=down,in=down,looseness=1.5] (0.5,0.5);
\end{tikzpicture}
\otimes
\begin{tikzpicture}[>=To,baseline={(0,0.15)}]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
\draw (0.5,0) to[out=up,in=up,looseness=1.5] (0,0) to (0,0.5);
\end{tikzpicture}
,
\quad
D_3 =
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (1.5,0) circle (1.5pt)};
{\filldraw[black] (2,0) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
{\filldraw[black] (1.5,0.5) circle (1.5pt)};
{\filldraw[black] (2,0.5) circle (1.5pt)};
\draw (0.5,0) to[out=up,in=down,looseness=0.5] (1.5,0.5);
\draw (1,0) to[out=up,in=down,looseness=0.5] (2,0.5);
\draw (1.5,0) to[out=up,in=down,looseness=0.5] (0.5,0.5);
\draw (2,0) to[out=up,in=down,looseness=0.5] (1,0.5);
\end{tikzpicture}.$$
For $n,k,\ell \in {\mathbb{N}}$, let $\operatorname{Hom}_{{\mathcal{P}\mathit{ar}}}^{\le n}(k,\ell)$ denote the subspace of $\operatorname{Hom}_{{\mathcal{P}\mathit{ar}}}(k,\ell)$ spanned by partition diagrams with at most $n$ blocks. Composition respects the corresponding filtration on morphism spaces.
Recall the bases of the morphism spaces of ${{\mathcal{H}\mathit{eis}}}$ given in [@Kho14 Prop. 5]. For any such basis element $X$ in $\operatorname{Hom}_{{{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}}\big( (\uparrow \downarrow)^k, (\uparrow \downarrow)^\ell \big)$, define the *block number* of $X$ to be the number of distinct closed (possibly intersecting) loops in the diagram $$\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) -- (0,0.1) arc (180:0:.25) -- (0.5,0);
\end{tikzpicture}^{\otimes \ell}
\circ
X
\circ
\begin{tikzpicture}[anchorbase]
\draw[<-] (0,1) -- (0,0.9) arc (180:360:.25) -- (0.5,1);
\end{tikzpicture}^{\otimes k}.$$ For $n \in {\mathbb{N}}$, let $\operatorname{Hom}_{{{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}}^{\le n} \big( (\uparrow \downarrow)^k, (\uparrow \downarrow)^\ell \big)$ denote the subspace of $\operatorname{Hom}_{{{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}} \big( (\uparrow \downarrow)^k, (\uparrow \downarrow)^\ell \big)$ spanned by basis elements with block number at most $n$. Composition respects the resulting filtration on morphism spaces.
The image under $\Psi_t$ of planar partition diagrams (writing the image in terms of the aforementioned bases of the morphism spaces of ${{\mathcal{H}\mathit{eis}}}$) is particularly simple to describe. Since each planar partition diagram is a tensor product of single blocks, consider the case of a single block. Then, for example, we have $$\Psi_t
\left(
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
{\filldraw[black] (1,0.5) circle (1.5pt)};
\draw (0,0.5) to[out=up,in=up,looseness=1.5] (0.5,0.5);
\draw (0.5,0.5) to[out=up,in=up,looseness=1.5] (1,0.5);
\end{tikzpicture}
\right)
=
\begin{tikzpicture}[>=To,baseline={(0,0.2)}]
\draw[<-] (1.5,0) to[out=up,in=up,looseness=2] (2,0);
\draw[<-] (0.5,0) to[out=up,in=up,looseness=2] (1,0);
\draw[->] (0,0) to[out=up,in=up] (2.5,0);
\end{tikzpicture}
\quad \text{and} \quad
\Psi_t
\left(
\begin{tikzpicture}[anchorbase]
{\filldraw[black] (0,0) circle (1.5pt)};
{\filldraw[black] (0.5,0) circle (1.5pt)};
{\filldraw[black] (1,0) circle (1.5pt)};
{\filldraw[black] (0,0.5) circle (1.5pt)};
{\filldraw[black] (0.5,0.5) circle (1.5pt)};
\draw (1,0) to (0,0) to (0,0.5) to (0.5,0.5);
\end{tikzpicture}
\right)
=
\begin{tikzpicture}[anchorbase]
\draw[->] (0,0) to[out=up,in=down] (0.5,1);
\draw[<-] (0.5,0) to[out=up,in=up,looseness=2] (1,0);
\draw[<-] (1.5,0) to[out=up,in=up,looseness=2] (2,0);
\draw[<-] (2.5,0) to[out=up,in=down] (2,1);
\draw[->] (1,1) to[out=down,in=down,looseness=2] (1.5,1);
\end{tikzpicture}
\ .$$ The general case is analogous. In particular, if $D$ is a planar partition diagram with $n$ blocks, then $\Psi_t(D)$ is a planar diagram with block number $n$.
For a permutation partition diagram $D \colon k \to k$, let $T(D)$ be the planar diagram (a morphism in ${{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}$) defined as follows: Write $D = s_{i_1} \circ s_{i_2} \circ \dotsb \circ s_{i_r}$ as a reduced word in simple transpositions and let $$T(D) = \Psi_t(s_{i_1} - \xi_{i_1}) \circ \Psi_t(s_{i_2} - \xi_{i_2}) \circ \dotsb \circ \Psi_t(s_{i_r} - \xi_{i_r}).$$ (See \[cadet\].) It follows from the braid relations \[genbraid\] that $T(D)$ is independent of the choice of reduced word for $D$.
\[locker\] Suppose $D \colon k \to \ell$ is a partition diagram with $n$ blocks. Write $D = D_1 \circ D_2 \circ D_3$, where $D_1$ and $D_3$ are permutations and $D_2$ is a planar partition diagram. Then $$\Psi_t(D) - T(D_1) \circ \Psi_t(D_2) \circ T(D_3) \in \operatorname{Hom}_{{{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}}^{\le n-1} \big( (\uparrow \downarrow)^k, (\uparrow \downarrow)^\ell \big).$$
We have $\Psi_t(D) = \Psi_t(D_1) \circ \Psi_t(D_2) \circ \Psi_t(D_3)$. As noted above, $D_2$ has $n$ blocks and $\Psi_t(D_2)$ has block number $n$. Suppose $1 \le j < \ell$. Then, for any partition diagram $D' \colon k \to \ell$ with $n$ blocks, we have $s_j \circ D' = D' = \xi_j \circ D'$ if $j'$ and $(j+1)'$ are in the same block of $D'$. On the other hand, if $j'$ and $(j+1)'$ lie in different blocks, then $\xi_j \circ D'$ has $n-1$ blocks, while $s_j \circ D'$ has $n$ blocks. It follows that $$\Psi_t(s_j) \circ \Psi_t(D') - \Psi_t(s_j - \xi_j) \circ \Psi_t(D')
\in \operatorname{Hom}_{{{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}}^{\le n-1} \big( (\uparrow \downarrow)^k, (\uparrow \downarrow)^\ell \big).$$ Similarly, $$\Psi_t(D') \circ \Psi_t(s_j) - \Psi_t(D') \circ \Psi_t(s_j - \xi_j)
\in \operatorname{Hom}_{{{{{\mathcal{H}\mathit{eis}}}_{\uparrow \downarrow}^{}}}}^{\le n-1} \big( (\uparrow \downarrow)^k, (\uparrow \downarrow)^\ell \big)$$ for $1 \le j < k$. The result then follows by writing $D_1$ and $D_3$ as reduced words in simple transpositions.
The functor $\Psi_t$ is faithful over an arbitrary commutative ring ${\Bbbk}$.
It is clear that, in the setting of \[locker\], $T(D_1) \circ \Psi_t(D_2) \circ T(D_3)$ is uniquely determined by $D$. Indeed, $D$ is the partition diagram obtained from $T(D_1) \circ \Psi_t(D_2) \circ T(D_3)$ by replacing each pair $\uparrow \downarrow$ by a vertex and each strand by an edge. Furthermore, the diagrams of the form $T(D_1) \circ \Psi_t(D_2) \circ T(D_3)$ are linearly independent by [@Kho14 Prop. 5]. The result then follows by a standard triangularity argument.
---
author:
- François Lafond
- Daniel Kim
bibliography:
- 'bib.bib'
title: 'Long-run dynamics of the U.S. patent classification system [^1] '
---
Introduction
============
The U.S. patent system contains around 10 million patents classified in about 500 main classes. However, some classes are much larger than others, some classes are much older than others, and more importantly none of these classes can be thought of as a once-and-for-all well defined entity. Due to its important legal role, the U.S. Patent and Trademark Office (USPTO) has constantly devoted resources to improve the classification of inventions, so that the classification system has greatly evolved over time, reflecting contemporaneous technological evolution. Classifications evolve because new classes are created but also because existing classes are abolished, merged and split. In fact, all current classes in 2015 have been established in the U.S. Patent Classification System (USPCS) after 1899, even though the first patent was granted in 1790 and the first classification system was created in 1829-1830. To give just another example, out of all patents granted in 1976, 40% are in a different main class now than they were in 1976.
To maintain the best possible level of searchability, the USPTO reclassifies patents so that at every single moment in time the patents are classified according to a coherent, up-to-date taxonomy. The downside of this is that the current classification is not meant to reflect the historical description of technological evolution as it unfolded. In other words, while the classification system provides a consistent classification of all the patents, this consistency is not time invariant. Observers at different points in time have a different idea of what is a consistent classification of the past, even when classifying the same set of past patents. In this paper, we focus on the historical evolution of the U.S. patent classification. We present three sets of findings.
First we study the evolution of the number of distinct classes, contrasting current and historical classification systems. Recent studies [@strumsky2012using; @strumsky2015identifying; @youn2015invention] have shown that it is possible to reconstruct the long-run evolution of the number of subclasses using the current classification system. This allowed them to obtain interesting results on the types of recombinations and on the relative rates of introduction of new subclasses and new combinations. An alternative way to count the number of distinct categories is to go back to the archives and check how many classes did actually exist at different points in the past. We found important differences between the historical and reconstructed evolution of the classification system. In particular, we find that historically the growth of the number of distinct classes has been more or less linear, with about two and a half classes added per year. By contrast, the reconstructed evolution – which considers how many current classes are needed to classify all patents granted before a given date – suggests a different pattern with most classes created in the 19$^{th}$ century and a slowdown in the rate of introduction of novel classes afterwards. Similarly, using the historical classes we find that the relationship between the number of classes and the number of patents is compatible with Heaps’ law, a power law scaling of the number of categories with the number of items, originally observed between the number of different words and the total number of words in a text [@heaps1978information]. Using the reconstructed evolution Heaps’ law does not hold over the long run.
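Heaps' law asserts a power-law relation $C(n) \propto n^{\beta}$ between the number of distinct categories $C$ and the number of items $n$. As an illustration of how such an exponent can be estimated, here is a minimal ordinary least-squares fit on log-transformed counts. The data and the true exponent $0.5$ are synthetic, invented for the example; they are not estimates from the patent record.

```python
# Fit the Heaps' law exponent beta in C(n) ~ a * n**beta by least squares
# on log C versus log n. Synthetic counts, for illustration only.
import math

def fit_heaps_exponent(n_items, n_classes):
    xs = [math.log(n) for n in n_items]
    ys = [math.log(c) for c in n_classes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var  # slope of the log-log regression = beta

# Hypothetical counts generated with a true exponent of 0.5.
n_items = [10 ** k for k in range(2, 8)]
n_classes = [round(3 * n ** 0.5) for n in n_items]
beta = fit_heaps_exponent(n_items, n_classes)
print(round(beta, 2))  # -> 0.5
```

On real data one would of course inspect the fit over the whole range; the point of the paper's contrast is precisely that the reconstructed counts deviate from such a power law over the long run while the historical counts do not.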
Knowing the number of distinct classes, the next question is about their growth and relative size (in terms of the number of patents). Thus our second set of findings concerns the size distribution of classes. We find that it is exponential, confirming a result of @carnabuci2013distribution on a much more restricted sub-sample. We also find that there is no clear relationship between the size and the age of classes, which rules out an explanation of the exponential distribution in terms of simple stochastic growth models in which classes are created once and for all.
Third, we hypothesize that new technology fields and radical innovations tend to be associated with a higher reclassification activity. This suggests that the history of reclassification contains interesting information on the most transformative innovations. Our work here is related to @wang2016technological who study how a range of metrics (claims, references, extensions, etc.) correlate with reclassification for 3 million utility patents since 1994. We used the data since 1976, for which we observe the class of origin and the citations statistics. It appears that reclassified patents are more cited than non-reclassified patents. We also construct a reclassification flow diagram, with aggregation at the level of NBER patent categories [@hall2001nber]. This reveals that a non-negligible share of patents are reclassified across NBER categories. We find that patents in “Computers” and in “Electronics” are often reclassified in other NBER categories, which is not the case of other categories such as “Drugs”. We then discuss three examples of new classes (Fabric, Combinatorial Chemistry and Artificial Intelligence).
Finally, we argue that it is not possible to explain the observed patterns without accounting for reclassification. We develop a simple model in which classes grow according to preferential attachment but have a probability of being split. The model’s only inputs are the number of patents and classes in 2015 and the Heaps’ law exponent. Despite this extreme parsimony, the model is able to reproduce i) the historical and reconstructed patterns of growth of the number of classes, ii) the size distribution and (partially) the lack of age-size relationship, and iii) the time evolution of the reclassification rates.
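A loose variant of the model just described can be sketched in a few lines: patents arrive one at a time and join an existing class with probability proportional to its size (preferential attachment), and occasionally a class is split in two. All parameter values below are placeholders, not the paper's calibration; the choices of also allowing brand-new classes and of splitting the largest class are our own simplifications.

```python
# Toy simulation: classes grow by preferential attachment; with a small
# probability a class is split into two halves. Parameters are illustrative
# placeholders, not the paper's calibrated inputs.
import random

def simulate(n_patents, p_new=0.001, p_split=0.0005, seed=0):
    rng = random.Random(seed)
    sizes = [1]  # start with one class holding one patent
    for _ in range(n_patents - 1):
        if rng.random() < p_new:
            sizes.append(1)  # a brand-new class
        else:
            # preferential attachment: pick a class with prob. proportional to size
            i = rng.choices(range(len(sizes)), weights=sizes)[0]
            sizes[i] += 1
        if rng.random() < p_split:
            j = max(range(len(sizes)), key=lambda k: sizes[k])
            if sizes[j] > 1:  # split the largest class in two
                half = sizes[j] // 2
                sizes[j] -= half
                sizes.append(half)
    return sizes

sizes = simulate(50_000)
print(len(sizes), sum(sizes))  # number of classes, total number of patents
```

Splitting leaves the total patent count unchanged while increasing the class count, which is the mechanism that lets such a model reproduce both the historical growth of the number of classes and a nontrivial reclassification rate.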
The empirical evidence that we present and the assumptions we need to make for the model make it clear that the USPCS has evolved considerably and it is hardly possible to think of patent classes as technological domains with a stable definition. The classification system cannot be well understood as a system in which categories are created once-and-for-all and accumulate patents over time. Instead, it is better understood as a system that is constantly re-organized. Because of this, using the current classification system to study a set of older patents is akin to looking at the past with today’s glasses. In this paper, we not only show the differences between the historical and reconstructed reality, but we also explain how these differences emerged.
The paper is organized as follows. Section \[section:motiv\] details our motivation, gives some background on categorization and reviews the literature on technological categories. Section \[section:data\] describes the USPCS and our data sources. Section \[section:Nclass\] presents our results on the evolution of the number of classes. Section \[section:sizedist\] discusses the size distribution of classes. Section \[section:reclass\] presents our results on reclassification since 1976. Section \[section:model\] presents a model that reproduces the main empirical patterns discovered in the previous sections. The last section discusses the results, motivates further research and concludes.
Why is studying classification systems important? {#section:motiv}
=================================================
Classification systems are pervasive because they are extremely useful. At a fundamental level, categorization is at the basis of pattern recognition, learning, and sense-making. Producing a discourse regarding technologies and their evolution is no exception. As a matter of fact, theoretical and *a fortiori* empirical studies almost always rely on some sort of grouping – or aim at defining one.
Historically, the interest in technology classifications has been mostly driven by the need to match technological and industrial activities [@schmookler1966invention; @scherer1984using; @verspagen1997measuring]. Since patented technologies are classified according to their function, not their industry of use or origin, this problem is particularly difficult. Clearly, a good understanding of both industry and patent classification systems is crucial to build a good crosswalk. Here we highlight the need to acknowledge that both classification systems *change*. For this reason our results give a strong justification for automated, probabilistic, data-driven approaches to the construction of concordance tables such as the recent proposal by @lybbert2014getting which essentially works by looking for keywords of industry definitions in patents to construct technology-industry tables.
With the rise of interest in innovation itself many studies have used existing patent classifications to study spillovers across technology domains, generally considering classification as static. For instance @kutz2004examining studied the growth and distribution of patent classes since 1976; @leydesdorff2008patent, @antonelli2010recombinant, @strumsky2012using and @youn2015invention studied co-classification patterns; and @caminati2010pattern and @acemoglu2016innovation studied the patterns of citations across USPCS or NBER technology classes. Similarly, technological classification systems are used to estimate technological distance, typically between firms or inventors in the “technology space” based on the classification of their patent portfolio [@breschi2003knowledge; @nooteboom2007optimal; @aharonson2016mapping; @alstott2016mapping]. Additional methodological contributions include @benner2008close, who have pointed out that using all the codes listed on patents increases the sample size and thus reduces bias in measuring proximity, and @mcnamee2013can who argues for using the hierarchical structure of the classification system[^2].
In spite of this wide use of the current patent classification system, there have been no quantitative studies of the historical evolution of the system apart from the counts of the number of distinct classes by @bailey1946history and @stafford1952rate, which we update here. Recently though, @strumsky2012using originated a renewed interest in patent classification by arguing that the classification of patents in multiple fields is indicative of knowledge recombination. Using the complete record of US patents classified according to the current classification system, @youn2015invention studied the subclasses (“technology codes”). They found that the number of subclasses used up to a given year is proportional to the cumulative number of patents until about 1870, but grew less and less fast afterwards. Remarkably, however, this slowdown in the “introduction” of new subclasses does not apply to new *combinations* of subclasses. @youn2015invention found that the number of combinations has been consistently equal to 60% of the number of patents. This finding confirms the argument of @strumsky2012using that patent classifications contain useful information to understand technological change over the long run. Furthermore, the detailed study of combinations can reveal the degree of novelty of specific patents [@strumsky2015identifying; @kim2016technological].
Besides their use for simplifying the analysis and creating crosswalks, technology taxonomies are also interesting *per se*. A particularly interesting endeavour would be to construct systematic technology phylogenies showing how a technology descends from others [@basalla1988evolution; @sole2013evolutionary] (for specific examples, see @temkin2007phylogenetics for cornets and @valverde2015punctuated for programming languages).
But categories are not simply useful to describe reality, they are often used to *construct* it [@foucault1966mots]. When categories are created as nouns, they can have a predicate and become a subject. As a result, classification systems are institutions that allow agents to coordinate and agree on how things should be called and on where boundaries should be drawn. Furthermore, classification systems may create a feedback on the system it describes, for instance by legitimizing the items that it classifies or more simply by biasing which items are found through search and reused in recombination to create other items. Categorization thus affects the future evolution of the items and their relation (boundaries) with other items. Along this line of argument, the process of categorization is performative. In summary, data on the evolution of technological classification systems provides a window on how society understands its technological artefacts and legitimizes them through the process of categorization. According to @latour2005reassembling, social scientists should not over impose their own categories over the actors that they analyze. Instead a researcher should follow the actors and see how they create categories themselves.
@nelson2006perspectives described technological evolution as the co-evolution of a body of practice and a body of understanding. The role of the body of understanding is to “rationalize” the practice. According to him this distinction has important implications for understanding evolutionary dynamics, since each body has its own selection criteria. Our argument here is that the evolution of the USPCS reflects how the beliefs of the community of technologists about the mesoscale structure of technological systems coevolves with technological advancements. We consider patent categorization as a process of codification of an understanding concerning the technological system. To see why studying patent categories goes beyond studying patents, it is useful to remember that examiners and applicants do not need to prove that a technology improves our *understanding* of a natural phenomenon; they simply need to show that a device or process is novel and effective at solving a problem. However, to establish a new class, it is necessary to agree that bringing together inventions under this new header actually improves understanding, and thus searchability of the patent system. In that sense we believe that the dynamics of patent classes constitute a window on the “community of technologists”.[^3] Since classification systems are designed to optimize search, they reflect how search takes place which in turn is indicative of what thought processes are in place. These routines are an integral part of normal problem-solving within a paradigm. As a result, classification systems must be affected by paradigm-switching radical innovations. As noted by e.g. @pavitt1985patent and @hicks2011structural, a new technology which fits perfectly in the existing classification scheme may be considered an incremental innovation, as opposed to a radical innovation which challenges existing schemes. 
A direct consequence is that the historical evolution of the classification system contains a great deal of information on technological change beyond the information contained in the patents[^4]. We now describe our attempt at reconstructing the dynamics of the U.S. patent classification system.
The data: the USPCS {#section:data}
===================
We chose the USPCS for several reasons. First of all, we chose a patent system because of our interest in technological evolution, but also because, due to their important legal role, patent systems benefit from the resources necessary to be kept up to date. Among patent classification systems, the USPCS is the oldest that was still in use until a couple of years ago [@wolter2012takes]. It is also fairly well documented, and in English. Moreover, additional files are available: citation files, digitized text files of the original patents from which to get the classification at birth, files on current classification, etc. Finally, it is one of the most used, if not the most used, patent classification systems in studies of innovation and technological change. The major drawback of this choice is that the USPCS is now discontinued. This means that the latest years may include a classificatory dynamics that anticipates the transition to the Cooperative Patent Classification[^5], and also implies that our research will not be updated and cannot make predictions specific to this system that can be tested in the future. More generally, we do recognize that nothing guarantees external validity; one could even argue that if the USPCS is discontinued and other classification systems are not, it shows that the USPCS has specificities and therefore it is not representative of other classification systems. Nevertheless, we think that the USPCS had a major influence on technology classifications and is the best case study to start with.
The early history of the USPCS
------------------------------
The U.S. patent system was established on 31st July 1790, but the examination requirement was abolished 3 years later and reestablished only in 1836. As a result, there was no need to search for prior art, and therefore the need for a classification was weak.
The earliest known official subject matter classification appeared in 1823 as an appendix to the Secretary of State’s report to the Congress for that year [@rotkin1999history]. It classified 635 patent models in 29 categories such as “Bridges and Locks”, 1184 in a category named “For various purposes”, and omitted those which were not “deemed of sufficient importance to merit preservations”.
In 1829, a report from the Superintendent proposed that with the prospect of the new, larger apartments for the Patent office, there would be enough room for a systematic arrangement and classification of models. He appended a list of 14 categories to the report.[^6]
In 1830 the House of Representatives ordered the publication of a list of all patents, which appeared in December 1830/January 1831 with a table of contents organizing patents in 16 categories, which were almost identical to the 14 categories of 1829 plus “Surgical instruments” and “Horology”.[^7]
In July 1836, the requirement of novelty examination came into effect, making the search for prior art more pressing. Coincidentally, in December the Patent office was completely destroyed by a fire. In 1837, a new classification system of 21 classes was published, including a Miscellaneous class and a few instances of cross-noting[^8]. The following year another schedule was published, with some significant reorganization and a total number of classes of 22.
A new official classification appeared in 1868 and contained 36 main classes. Commenting on this increase in the number of classes, the Commissioner of patents wrote that [@rotkin1999history]
> “The number of classes has risen from 22 to 36, a number of subjects being now recognized individually which were formerly merged with others under a more generic title. Among these are builder’s hardware, felting, illumination, paper, and sewing machines, to each of which subject so much attention has been directed by inventors that a division became a necessity to secure a proper apportionment of work among the corps of examiners.”
Clearly, one of the rationales behind the creation and division of classes is to balance the class sizes, but this was not only to facilitate search. This class schedule was designed with administrative problems in mind, including the assignment of patent applications to the right examiners and the “equitable apportionment of work among examiners” [@rotkin1999history].
Shortly after 1868 a parallel classification appeared, containing 176 classes used in the newly set up patent subscription service. This led to a new official classification containing 145 classes and published as a book in 1872. The number of classes grew to 158 in 1878 and 164 in 1880. @rotkin1999history note that the 1880 classification did not contain any form of cross-noting and cross references, by contrast to the 1872 classification. In 1882 classification reached 167 classes and introduced indentation of subclasses at more than one level. The classification of 1882 also introduced a class called “Electricity”, long before this general purpose technology fully reached its potential.
In 1893 it was made clear in the annual report that a Classification division was required “so that \[the history of invention\] would be readily accessible to searchers upon the novelty of any alleged invention”. After that, the need for a classification division (and the associated claim for extra budget) was consistently legitimated by this need to “oppose the whole of prior art” to every new application. In 1898 the “Classification division” was created with a head, two assistants and two clerks, with the purpose of establishing clearer classification principles and reclassifying all existing patents. This marked the beginning of professional classification at the USPTO.
Since then the classification division has been very active and the patent classification system has evolved considerably, as we document extensively in this paper. But first, we need to explain the basic organizing principles of the classification system.
Rationale and organization of the modern USPCS {#section:USPCSrationale}
----------------------------------------------
The USPCS attributes to each patent at least one subject matter. A subject matter includes a main class, delineating the main technology, and a subclass, delineating processes, structural features and functional features. All classes and most subclasses have a definition. Importantly, it is the patent claims that are classified, not the whole patent itself. The patent inherits the classification of its claims; its main classification is the classification of its main (“most comprehensive”) claim.
There are different types of patents, and they are translated into different types of classes. According to the USPTO[^9], “in general terms, a utility patent protects the way an article is used and works, while a design patent protects the way an article looks.” The “classification of design patents is based on the concept of function or intended use of the industrial design disclosed and claimed in the Design patent.”[^10].
During the 19$^{th}$ century classification was based on which industry or profession was using the invention, for instance “Bee culture” (449) or “Butchering” (452). The example of choice [@falasco2002bases; @uspto2005handbook; @strumsky2012using] is that of cooling devices which were classified separately if they were used to cool different things, such as beer or milk. Today’s system would classify both as cooling devices in the class “Heat exchange” (165), which is the utility or function of the invention. Another revealing example [@schmookler1966invention; @griliches1990patent] is that a subclass dealing with the dispensing of liquids contains both a patent for a water pistol and one for a holy water dispenser. This change in the fundamental principles of classification took place at the turn of the century with the establishment of the Classification division [@falasco2002bases; @rotkin1999history]. Progressively, the division undertook to redesign the classification system so that inventions would be classified according to their utility. The fundamental principle which emerged is that of “utility classification by *proximate* function” [@falasco2002bases], where the emphasis on “proximate” means that it is the fundamental function of the invention, not some example application in a particular device or industry. For instance “Agitating” (366) is the relevant class for inventions which perform agitation, whether this is to wash clothes, churn butter, or mix paint [@simmons2014categorizing]. Another classification by utility is the classification by effect or product, where the result may be tangible (e.g. Semiconductor device manufacturing, 438) or intangible (e.g. Audio signal system, 381). Finally, the classification by structure (“arrangement of components”) is sometimes used for simple subject matter having general function. This rationale is most often used for chemical compounds and stock material. It is rarely used for classes and more often at the subclass level [@uspto2005handbook].
Even though the classification by utility is the dominant principle, the three classification rationales (by industry, utility and structure) coexist. Each class “reflects the theories of classification that existed at the time it was reclassified” [@uspto2005handbook]. In addition, the system keeps evolving as classes (and even more so subclasses) are created, merged and split. New categories emerge when the need is felt by an examiner and approved by the appropriate Technology Center; in this case the USPCS is revised through a “Classification order” and all patents that need to are reclassified [@strumsky2012using]. An example of how subclasses are created is through alpha subclasses. Alpha subclasses were originally informal collections created by patent examiners themselves to help their work, but were later incorporated into the USPC. They are now created and used as temporary subclasses until they become formalized [@falasco2002united; @uspto2005handbook]. When a classification project is completed, a classification order is issued, summarising the changes officially, and all patents that need to are, in principle, reclassified.
One of the latest classes to have been created is “Nanotechnology” (977), in October 2004. As noted by @strumsky2012using, using the current classification system one finds that after reclassification the first nanotechnology patent was granted much earlier[^11]. According to @paradise2012claiming, large federal research funding led to the emergence of “nanotechnology” as a unifying term, which became reflected in scientific publications and patents. Because nanotechnologies were new, received lots of applications and required interdisciplinary knowledge, it was difficult to ensure that prior art was reviewed properly. The USPTO engaged in a classification project in 2001, which started by defining nanotechnologies and establishing their scope, through an internal process as well as by engaging with other stakeholders such as users or other patent offices. In 2004 the Nanotechnology cross-reference digest was established; cross-reference means that this class cannot be used as a primary class. @paradise2012claiming argues that class 977 has been defined with too low a threshold of 1 to 100 nanometers. Also, reclassification has been encouraged but is not systematic, so that many important nanopatents granted before 2004 may not be classified as such.
Another example of class creation worth mentioning is given by @erdi2013prediction, who argue that the creation in 1997 of “Fabric (woven, knitted, or nonwoven textile or cloth, etc.)” (442) could have been predicted based on clustering analysis of citations. @kyebambe2017forecasting recently generalized this approach, by formulating it as a classical machine learning classification problem: patent clusters are characterized by sets of features (citations, claims, etc.), and only some patent clusters are later on recognized as “emerging technology” by being reclassified into a new USPCS main class. In this sense, USPCS experts are labelling data, and @kyebambe2017forecasting developed a method to create clusters and train machine learning algorithms on the data labelled by USPCS experts.
Finally, a last example is that of organic chemistry[^12]. Class 260 used to contain the largest array of patent documents, but it was decided that this class needed to be reclassified “because its concepts did not necessarily address new technology and several of its subclasses were too difficult to search because of their size”. To make smaller reclassification projects immediately available it was decided to split the large class into many individual classes in the range of Classes 518-585. Each of these classes is “considered an independent class under the Class 260 umbrella”; many of these classes have the same general name, such as “Organic compounds – part of the class 532-570 series”[^13].
As argued by @strumsky2012using, this procedure of introducing new codes and modifying existing ones ensures that the current classification of patents is consistent and makes it possible to study the development of technologies over a long period of time. However, while looking at the past with today’s glasses ensures that we look at different periods of the past in a consistent way, it is not the same as reporting what the past was in the eyes of those who lived it. In this sense, we believe that it is also interesting to try and reconstruct the classification systems that were in place in the past. We now describe our preliminary attempt to do so, by listing available sources and constructing a simple count of the number of classes used in the past.
Dataset construction {#section:data-construction}
--------------------
Before describing the data construction in detail, let us state clearly three important caveats.
First, we focus on main classes, due to the difficulty of collecting historical data at the subclass level. This is an important omission and avenue for further research. Investigating the complete hierarchy could add significant insight, for instance by contrasting “vertical” and “horizontal” growth of the classification tree, or by exploiting the fact that different layers of system play a different role for search [@uspto2005handbook].
Second, we limit our investigations to Primary (“OR”) classes, essentially for simplicity. Multiple classifications are indeed very interesting and would therefore warrant a complete independent study. Clearly, the fact that multiple classifications can be used is a fundamental feature of the current USPCS. In fact it is a key feature of its evolution: as noted above, “cross-noting” was common in some periods and absent in others, and a recent example of a novel class – Nanotechnology – happens to be an XR-only class (i.e., used only as secondary classification). Here we have chosen to use only OR classes because it allows us to show the main patterns in a relatively simple way. Of course some of our results, in particular those of Section \[section:reclass\], are affected by this choice, and further research will be necessary to evaluate the robustness of our results. That said, OR classifications, which are used on patent applications to find the most appropriate examining division [@falasco2002united], are arguably the most important.
Third, we limit our investigation to the USPCS, as justified in the beginning of Section \[section:data\]. We have good reasons for choosing the USPCS in this study, which aims at giving a long-run picture. However, for studying the details of reclassification patterns and firmly establishing reclassification and classification system changes as novel and useful indicators of technological change, future research will need to establish similar patterns in the IPC or CPC.
As a result of these choices, our aim is to build a database[^14] of 1) the evolution of the USPCS primary classes, and 2) the reclassification of patents from one class to the other. To do this we relied on several sources.
First, our most original data collection effort concerns the historical number of classes. For the early years our main sources are @bailey1946history and @rotkin1999history, complemented by @reingold1960us and the “Manual of Classification” for the 5 years within the period 1908–1923. For the 1950–60’s, we used mostly a year-specific source named “General information concerning Patents”, which contained a sentence like “Patents are classified into $x$ classes”. Unfortunately, starting in 1969 the sentence becomes “Patents are classified into more than 310 classes”. We therefore switched to another source named “Index of patents issued from the United States Patent Office”, which contains the list of classes. Starting 1963, it contains the list of classes with their name and number on a separate page[^15]. For 1985, we used a report of the Office of Technology Assessment and Forecast (OTAF) of the Patent and Trademark Office [@otaf1985]. For the years 2001 to 2013, we collected data from the Internet Archive.[^16] As of February 2016 there are 440 utility classes (including the Miscellaneous 001 and the “Information storage” G9B (established in 2008)), 33 design classes, and the class PLT “Plant”, giving a total of 474 classes[^17].
Second, to obtain reclassification data we matched several files. We obtained “current” classifications from the Master Classification File (version mcfpat1506) for patents granted up to the end of June 2015. We matched this with the Patent Grant Authority File (version 20160130) to obtain grant years[^18]. To obtain the classification at birth, we used the file “Patent Grant Bibliographic (Front Page) Text Data (January 1976 – December 2015)”, provided by the USPTO[^19], from which we also gathered citation data.
Dynamics of the number of classes and Heaps’ law {#section:Nclass}
================================================
Our first result concerns the growth of the number of classes (Fig. \[fig:Nclasses\]), which we have computed using three different methods.


First, we used the raw data collected from the historical sources mentioned in Section \[section:data-construction\]. Quite unexpectedly, the data suggests a linear growth, with appreciable fluctuations mainly due to the introduction of an entirely new system in 1872 and to design classes in 1977 (see footnote ). The grey line shows the linear fit with an estimated slope of 2.41 (s.e. 0.06) and $R^2$ of 0.96 (we treat years with no data as NA, but filling them with the figure from the last observed year does not dramatically affect the results).
Second, we have computed, using the Master Classification File for June 2015, the number of distinct classes in which the patents granted up to year $t$ are classified (black line). To do so, we have used all classes in which patents are classified (i.e. including cross-reference classes).[^20] The pattern of growth is quite different from the historical data. If we consider only the post-1836 data, the growth of the number of classes is sublinear – fewer and fewer classes are introduced every year. Before 1836, the trend was linear or perhaps exponential, giving a somewhat asymmetric S-shape to the overall picture.
Third, we computed the growth of the number of classes based on the dates at which all current classes were established (blue line)[^21]. According to this measure, the first class was created in 1899, when the reorganization of classification started with the creation of the classification division[^22].
Fig. \[fig:Heaps\] displays the number of classes against the number of patents in a log-log scale. In many systems, it has been found that the number of categories grows as a power law of the number of items that they classify, a result known as Heaps’ law (for an example based on a classification system –the medical subject headings– instead of a language, see @petersen2016triple). Here we find that using the 2015 classification, Heaps’ law is clearly violated[^23]. Using the historical data, Heaps’ law appears as a reasonable approximation. We estimate the Heaps’ exponent to be $0.378$ with standard error of 0.010 and $R^2=0.95$. The inset on the bottom right of Fig. \[fig:Heaps\] shows that for the latest years, Heaps’ law fails: for the latest 2 million patents (about 20% of the total), almost no classes were created. We do not know whether this slowdown in the introduction of classes is due to a slowdown of radical innovation, or to a more institutionally-driven reason such as a lack of investment in the USPCS due to the expected switch to the Cooperative Patent Classification. Since the joint classification system was first announced on 25 October 2010 [@blackman2011classification], we show this date (more precisely, patent number 7818817 issued on the $26^{th}$) as a suggestive indicator (dashed line on the inset). Another consideration is that the system may be growing more “vertically”, in terms of the number of layers of subclasses – unfortunately here we have to focus on classes, so we are not able to test for this.
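To make the fitting procedure concrete, the Heaps exponent can be estimated by an ordinary least-squares regression of log class counts on log patent counts. The following Python sketch is our own illustration (function names and the synthetic numbers are ours, not part of the original analysis):

```python
import math

def fit_heaps(n_patents, n_classes):
    """OLS fit of log(classes) = log(C) + beta * log(patents),
    i.e. Heaps' law: classes ~ C * patents^beta."""
    xs = [math.log(p) for p in n_patents]
    ys = [math.log(c) for c in n_classes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return beta, math.exp(my - beta * mx)

# Synthetic check: points generated exactly on a Heaps curve with
# beta = 0.378 (the exponent estimated in the text) are recovered.
patents = [10 ** k for k in range(2, 8)]
classes = [3.0 * p ** 0.378 for p in patents]
beta, prefactor = fit_heaps(patents, classes)
print(round(beta, 3))  # -> 0.378
```

On the real data the fit is of course only approximate, which is exactly what the deviation in the inset of Fig. \[fig:Heaps\] reveals.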
The size distribution and the age-size relationship {#section:sizedist}
===================================================
Besides the creation and reorganization of technological categories, we are interested in their growth and relative sizes. More generally, our work is motivated by the Schumpeterian idea that the economy is constantly reshaping itself by introducing novelty [@dopfer2004micro; @saviotti2004economic]. The growth of technological domains has been deeply scrutinized in the economics of technical change and development [@schumpeter1934theory; @dosi1982technological; @pasinetti1983structural; @pavitt1984sectoral; @freeman1997economics; @saviotti1996technological; @malerba2002sectoral]. A recurring theme in this literature is the high heterogeneity among sectors. When sectors or technological domains grow at different rates, structural change occurs: the relative sizes of different domains are modified. To study this question in a parsimonious way, one may opt for a mesoscale approach, that is, study the size distribution of categories.
Our work here is most directly related to @carnabuci2013distribution, who first showed on data for 1963–1999 that the size distribution of classes is close to exponential. This is an interesting and at first surprising finding because, under the assumption that all domains grow at the same average rate, stochastic growth models such as @gibrat1931inegalites or @yule1925mathematical predict a log-normal or a Pareto distribution, which are much more fat-tailed. Instead, we do not see the emergence of relatively very large domains, and this may at first suggest that older sectors do not keep growing as fast as younger ones, perhaps due to technology life-cycles [@vernon1966international; @klepper1997industry; @andersen1999hunt]. However, as we will discuss, we are able to explain the exponential size distribution by keeping Gibrat’s law, but assuming that categories are split randomly.
The size distribution of categories
-----------------------------------
In this section we study the size distribution of classes, where size is the number of patents in 2015 and classes are defined using the current classification system. We use only the primary classification, so we have only 464 classes. Fig. \[fig:ranksize\] suggests a linear relationship between the size of a class and the log of its rank, that is, class sizes are exponentially distributed[^24]. To see this, let $p(k)$ be the probability density of the sizes $k$. If it is exponential, it is $p(k)=\lambda e^{-\lambda k}$. By definition, the rank $r(k)$ of a class of size $k$ is the number of classes that have a larger size, which is $r(k)=N \int_{k}^{\infty} \lambda e^{-\lambda x} dx = N e^{-\lambda k}$, where $N$ is the number of classes. This is equivalent to size being linear in the logarithm of the rank. We estimate the parameter $\lambda$ by maximum likelihood and obtained $\hat{\lambda}=4.71 \times 10^{-5}$ with standard error $0.22 \times 10^{-5}$. Note that $\hat{\lambda}$ is one over the mean size, 21223. We use this estimate to plot the resulting fit in Fig. \[fig:ranksize\].
![Rank-size relationship.[]{data-label="fig:ranksize"}](sizerank.pdf)
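The maximum-likelihood estimate used above is simply one over the mean class size, and the rank-size relation $r(k) = N e^{-\lambda k}$ follows directly from it. A minimal Python sketch, using synthetic data drawn with the parameters reported in the text (464 classes, mean size 21223); the code is ours, not the authors':

```python
import math
import random

def exponential_mle(sizes):
    """MLE of the exponential rate: lambda_hat = n / sum(sizes) = 1 / mean."""
    return len(sizes) / sum(sizes)

def predicted_rank(size, n_classes, lam):
    """Rank-size relation derived in the text: r(k) = N * exp(-lambda * k)."""
    return n_classes * math.exp(-lam * size)

# Synthetic check: with 464 exponentially distributed class sizes of
# mean 21223, the MLE recovers 1/mean up to sampling noise.
random.seed(42)
sizes = [random.expovariate(1 / 21223) for _ in range(464)]
lam = exponential_mle(sizes)
```

Plotting `predicted_rank` against the observed ranks reproduces the straight line of Fig. \[fig:ranksize\] when sizes are indeed exponential.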
It is interesting to find an exponential distribution, since one may have expected a power law, which is quite common as a size distribution, and appears often with Heaps’ law [@lu2010zipf; @petersen2016triple]. Since the exponential distribution is a good representation of the data, it is worth looking for a simple mechanism that generates this distribution, which we will do in Section \[section:model\]. But since many models can generate an exponential distribution we first need to present additional empirical evidence that will allow us to discriminate between different candidate models.
The age-size relationship
-------------------------
To determine whether older classes contain more patents than younger ones, we first need to note that there are two ways of measuring age: the official date at which the class was established, and the year in which its first patent was granted. As expected, it appears that the year in which a class is established is always later than the date of its first patent[^25].
![Age-size relationship.[]{data-label="fig:agesize"}](agesize.pdf)
Since these two ways of measuring age can be quite different, we show the age-size (or rather size-birth date) relationship for both in Fig. \[fig:agesize\]. If stochastic growth models without reclassification were valid, we would observe a negative slope, that is, newer classes should have fewer patents because they have had less time for accumulation from random growth. Instead, we find no clear relationship. In the case of the year established, linear regressions indicated a positive relationship significant at the 10% but not at the 5% level, whether or not the two “outliers” were removed. Using a log-linear model, we found a significant coefficient of 0.004 after removing the two outliers. In the case of the year of the first patent, the linear model indicated no significant relationship, but the log-linear model delivered a highly significant negative coefficient of -0.005 (which halves and becomes significant at the 10% level only once the two outliers are removed). In all 8 cases (two different age variables and two different models, removing outliers or not) the $R^2$ was between 0.001 and 0.029.
We conclude that these relationships are at best very weak, and in one case of the “wrong” sign (with classes established in recent years being on average larger). Whether they are significant or not, our point here is that their magnitude and the goodness of fits are much lower than what one would expect from growth-only models such as @simon1955class, or its modification with uniform attachment (to match the exponential size distribution). We will come back to the discussion of models later, but first we want to show another empirical pattern and explain why we think reclassification and classification system changes are interesting indicators of technological change.
Reclassification activity as an indicator of technological change {#section:reclass}
=================================================================
It seems almost tautological to say that a radical innovation is hard to categorize when it appears. If an innovation is truly “radical”, it should profoundly change how we think about a technology, a technological domain, or a set of functions performed by technologies. If this is the case, a patent related to a radical innovation is originally hard to classify. It is likely that it will have to be reclassified in the future, when a more appropriate set of concepts has been developed and institutionalized (that is, when the community of technologists has codified a novel understanding of the radical innovation). It is also well accepted that radical innovations may create a new wave of additional innovations, which may or may not cluster in time [@silverberg2003breaking], but when they are general purpose we do expect a rise in innovative activity [@bresnahan1995general]. A less often noted consequence of the emergence and diffusion of General Purpose Technologies (GPTs) is that, both due to the sheer increase in the number of patents in this technology and due to the impact of this technology on others, we should expect higher classification volatility. Classification volatility is to be expected particularly in relation to GPTs because, by definition, GPTs interact with existing technologies and create or reorganize interactions among existing technologies. From the point of view of the classification, the very definition of the objects and their boundaries are transformed. In short, some categories become too large and need to be split; some definitions become obsolete and need to be changed; and the “best” grouping of technologies is affected by the birth and death of conceptual relationships between the function, industry of origin or application, and structural features of technologies.
In this section we provide a preliminary study. First we establish that this indicator does exist (reclassification rates can be quite high, reaching 100% if we look far enough in the past). Second, we show that reclassified patents are more cited. Third, we show that reclassification can take place across fairly distant technological domains, as measured by 1-digit NBER categories. Fourth, we discuss three examples of novel classes.
Reclassification rates
----------------------
How many patents have been reclassified? To start with, since no classification existed prior to 1829, all patents published before that date have been “(re)classified” in the sense that their category has been determined several and potentially many years after being granted. The same applies to all patents granted at times when completely different classification systems prevailed, which is the case before 1899. In modern times, classification has evolved, but as discussed in Section \[section:data\], the overall classification framework put in place at the turn of the century stayed more or less the same. For the period after 1976, we know the original classification of each patent because we can read it on the digitized version of the original paper (see Section \[section:data-construction\]). After extensive efforts in parsing the data and a few manual corrections, we found an original class for 99.45% of the post-1976 patents in the Master Classification File mcfpat1506. Out of these 5,615,525 patents, 412,724 (7.35%) have been reclassified. There are 789 distinct original classes, including 109 with only 1 patent (apart from data errors, this can come from original classes that had no post-1976 patents classified in them). All current classes have been used as original classes except “001”, which is only used as a miscellaneous class into which patents are reclassified[^26].
![Share of patents granted in a given year that are in a different class in 2015, as compared to when they were granted.[]{data-label="fig:sharereclass"}](sharereclass.pdf)
Figure \[fig:sharereclass\] shows the evolution of the reclassification rate, defined as the share of patents granted in year $t$ which have a different classification in 2015 than in $t$. It appears that as much as 40% of the 1976 patents belong to a different class now than when they first appeared. This reclassification rate declines sharply after that, reaching about 10% in the 1990’s and almost zero thereafter. This is an expected result, since the longer the time since a patent was granted, the higher the chances that the classification system has changed.
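Computing a reclassification rate of this kind requires only, for each patent, the grant year, the class at birth, and the current class. A minimal Python sketch (the records below are made up for illustration; only the class codes are real USPCS codes):

```python
from collections import defaultdict

def reclassification_rate_by_year(patents):
    """patents: iterable of (grant_year, class_at_birth, class_in_2015)
    tuples. Returns, per grant year, the share of patents whose current
    class differs from their class at birth."""
    total = defaultdict(int)
    moved = defaultdict(int)
    for year, born, now in patents:
        total[year] += 1
        moved[year] += int(born != now)
    return {year: moved[year] / total[year] for year in total}

# Toy example: one of two 1976 patents was moved (260 -> 532), the
# 1990 patent was not.
sample = [(1976, "260", "532"), (1976, "165", "165"), (1990, "438", "438")]
rates = reclassification_rate_by_year(sample)
print(rates)  # -> {1976: 0.5, 1990: 0.0}
```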
Are reclassified patents more cited?
------------------------------------
Since there is an established relationship between patent value and the number of citations received [@hall2005market], it is interesting to check if reclassified patents are more cited. Of course, we are only observing correlations, and the relationship between citations and reclassification can work in multiple ways. A plausible hypothesis is that the more active a technological domain is (in terms of new patents and thus new citations being made), the more likely it is that there will be a need for reclassification, if only to keep the classes at a manageable size[^27]. Another hypothesis is that highly innovative patents are intrinsically ambiguously defined in terms of the classification system existing when they first appear. In any case, since we only have the class number at birth and the class number in 2015, we cannot make subtle distinctions between different mechanisms. However, we can check whether reclassified patents are on average more cited, and we can do so after controlling for the grant year and class at birth.
Table \[table:cit\] shows basic statistics[^28]. Reclassified patents constitute 7.35% of the sample, and have received on average more than 24 citations, more than twice as many as the non-reclassified patents.
share mean median s.d.
------------------ -------- ------- -------- -------
All 100.00 11.30 4.00 26.64
Non reclassified 92.65 10.27 4.00 23.94
Reclassified 7.35 24.29 11.00 47.40
: Patent citations summary statistics.[]{data-label="table:cit"}
![Coefficient of the year-specific regressions of the log of citations received on the reclassification dummy (including dummies for the class of origin or not).[]{data-label="fig:reg-coeff-time-evol"}](reg-coeff-time-evol.pdf)
We expect this result to be largely driven by the fact that older patents have both a higher chance to have been reclassified and a higher chance to have accumulated many citations. To investigate the relationship between reclassification and citations in more detail, we regressed the log of total citations received in 2015 on the reclassification dummy and on dummies for the class at birth, for each year separately (and keeping only the patents with at least one citation received, 76.6%): $$\log(c_{i})=\alpha_t + \beta_t R_i + \sum_{j=1}^{J_t-1} \gamma_{j,t} D_{i,j}$$ where $c_{i}$ is the number of citations received by patent $i$ between its birth (time $t$) and (June) 2015, $R_i$ is a dummy that takes the value of 1 if patent $i$ has a main class code in 2015 different from the one it had when it appeared (i.e. in year $t$), $J_t$ is the number of distinct classes in which the patents born in year $t$ were classified at birth, and $D_{i,j}$ is a dummy that takes the value of 1 if patent $i$ was classified in class $j$ at birth.
Note that we estimate this equation separately for every grant year. We include the class at birth dummies because this allows us to consider patents that are “identical twins” in the sense of being born in the same class in the same year. The coefficient $\beta$ then shows if reclassified patents have on average received more citations. The results are reported in Fig. \[fig:reg-coeff-time-evol\], showing good evidence that reclassification is associated with more citations received. As expected, recent years[^29] are not significant since there has not been enough time for reclassification to take place and citations to accumulate (the bands represent standard approximate 95% confidence intervals). We also note that controlling for the class at birth generally weakens the effect (red dashed line compared to black solid line).
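The year-specific regression above can be estimated with plain least squares. A sketch using NumPy on toy data (the function name and inputs are our own; within each birth class, the reclassified “twin” has +0.5 log citations, so the estimated $\beta$ should be 0.5):

```python
import numpy as np

def yearly_reclass_coefficient(log_citations, reclassified, birth_class):
    """OLS estimate of beta_t for one grant year: regress log citations on
    the reclassification dummy plus J_t - 1 class-at-birth dummies.
    Inputs are equal-length sequences."""
    y = np.asarray(log_citations, dtype=float)
    R = np.asarray(reclassified, dtype=float)
    classes = sorted(set(birth_class))
    if len(classes) > 1:
        # Drop one class to avoid the dummy-variable trap (J_t - 1 dummies).
        dummies = np.column_stack(
            [(np.asarray(birth_class) == cls).astype(float)
             for cls in classes[:-1]])
    else:
        dummies = np.empty((len(y), 0))
    X = np.column_stack([np.ones_like(y), R, dummies])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]  # the coefficient on the reclassification dummy

# Toy data: "identical twins" born in the same class, the reclassified one
# having +0.5 log citations.
beta = yearly_reclass_coefficient(
    log_citations=[1.5, 1.0, 2.5, 2.0],
    reclassified=[1, 0, 1, 0],
    birth_class=["A", "A", "B", "B"],
)
```

In the paper's setting this would be run once per grant year on the patents with at least one citation received.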
Reclassification flows
----------------------
{width="\textwidth"}
To visualize the reclassification flows, we consider only the patents that have been reclassified. As in @wang2016technological we want to construct a bipartite graph showing the original class on one side and the current class on the other side. Since we identify classes by their code number, a potentially serious problem may arise if classes are renumbered, although we believe this tends to be rare given the limited time span 1976–2015. An example of this is “Bee culture”, which was class number 6, but since 1988 is class number 449; class number 6 no longer exists. However, even in this case, although the two classes have the same name, we do not know if they are meant to encompass the same technological domain and have just been “renumbered”, or if other considerations prevailed and renumbering coincides with a more substantive reorganisation. An interesting extension of our work would be to use natural language processing techniques on class definitions to measure reclassification distance more precisely and exclude mere renumbering.
To make the flow diagram readable and easier to interpret, we aggregate by using the NBER categories[^30]. To assign each class to a NBER category, we used the 2006 version of the NBER classification, which we modified slightly by classifying the Design classes separately, and classifying USPCS 850 (Scanning probe techniques and apparatus) in NBER 4 (Electrical) and USPCS PLT (Plant) in NBER 6 (Others).
Fig. \[fig:reclassification\_flow\] shows the results[^31]. The share of a category means the fraction of reclassified patents whose primary class is in a particular NBER category. The width of the lines between an original category $i$ and a current category $j$ is proportional to the number of reclassified patents whose original class is in category $i$ and current class is in category $j$. Line colors indicate the original category.
We can see that patents originally classified in the category Chemical tend to be reclassified into another class of the same category. The same pattern is observed for the category Drugs. By contrast, the categories Computers & Communications and Electrical & Electronics display more cross-reclassifications, in line with [@wang2016technological’s [-@wang2016technological]]{} findings on a restricted dataset. This may indicate that the NBER categories related to computers and electronics are not as crisply defined as those related to Chemical and Drugs, and may be suggestive of the general purpose nature of computers. It could also suggest that these domains were going through a lot of upheaval during this time period. While there is some ambiguity in interpreting these patterns, they are not *a priori* obvious and point to the same phenomenon as the correlation between citations and reclassifications: dynamic, impactful, really novel, general purpose fields are associated with more taxonomic volatility.
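The counting underlying the flow diagram can be sketched in a few lines. The class-to-category mapping and the patent pairs below are toy stand-ins for the 2006 NBER concordance and the real data:

```python
from collections import Counter

def reclassification_flows(patents, to_nber):
    """Flow counts between NBER categories: for each reclassified patent,
    add one to the (original category, current category) cell.
    `patents` holds (original_class, current_class) pairs; `to_nber` maps a
    USPCS class to its NBER category."""
    flows = Counter()
    for orig, curr in patents:
        if orig != curr:  # only reclassified patents enter the diagram
            flows[(to_nber[orig], to_nber[curr])] += 1
    return flows

# Hypothetical mapping and patents, for illustration only.
toy_map = {"428": "Others", "442": "Others", "364": "Computers",
           "706": "Computers", "307": "Electrical"}
flows = reclassification_flows(
    [("428", "442"), ("364", "706"), ("307", "706"), ("428", "428")],
    toy_map,
)
# ("428", "428") is not reclassified, so it contributes no flow.
```

The resulting counter gives the line widths of the bipartite diagram directly.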
Three examples of novel classes
-------------------------------
We now complement the study by providing three examples of novel classes, chosen among recently created classes (and excluding cross-reference only classes). We proceed by looking at the origin of patents reclassified in the new class when it is created. We approximate this by looking at the patents that were granted in a year preceding the birth year of a class, and now appear as reclassified into it. Note that we can determine the class of origin only for patents granted after 1976. We also give as example the oldest reclassified (utility) patent we can find. We discuss each class separately (see Table \[table:reclass\_main\] for basic statistics on each of the three example classes, and Table \[table:reclass\_origin\] for the source classes in each case; “Date” is the date at which an “origin” class was established).
  Class Number   Date established   Size   Size post 1976
  -------------- ------------------ ------ ----------------
  442            1997               6240   2654
  506            2007               1090   1089
  706            1998               1270   1217

  : Basic statistics on the three example classes.[]{data-label="table:reclass_main"}
  New class   Size   Title                                                                                    Num.   Date
  ----------- ------ ---------------------------------------------------------------------------------------- ------ ------
  442         2615   Stock material or miscellaneous articles                                                 428    1975
  442         16     Compositions                                                                             252    1940
  442         5      Chemical apparatus and process disinfecting, deodorizing, preserving, or sterilizing     422    1978
  506         579    Chemistry: molecular biology and microbiology                                            435    1979
  506         127    Chemistry: analytical and immunological testing                                          436    1982
  506         69     Chemical apparatus and process disinfecting, deodorizing, preserving, or sterilizing     422    1978
  706         966    \[NA\] Information Processing System Organization                                        395    1991
  706         195    \[NA\] Electrical Computers and Data Processing Systems                                  364    1977
  706         41     Electrical transmission or interconnection systems                                       307    1952

  : Source classes of the three example classes: number of reclassified patents (Size), origin class title and number (Num.), and date at which the origin class was established.[]{data-label="table:reclass_origin"}
Motivated by the study of @erdi2013prediction showing that the emergence of a new class (442) could have been predicted by citation clustering, we study class 442, “Fabric (woven, knitted, or nonwoven textile or cloth, etc.)”. The class definition indicates that it is “for woven, knitted, nonwoven, or felt article claimed as a fabric, having structural integrity resulting from forced interassociation of fibers, filaments, or strands, the forced interassociation resulting from processes such as weaving, knitting, needling hydroentangling, chemical coating or impregnation, autogenous bonding (…) or felting, but not articles such as paper, fiber-reinforced plastic matrix materials (PPR), or other fiber-reinforced materials (…)”. This class is “an integral part of Class 428 \[and as such it\] incorporates all the definitions and rules as to subject matter of Class 428.” The oldest patent reclassified in it was a patent by Charles Goodyear describing how applying caoutchouc to a woven cloth led to a material with “peculiar elasticity” (US4099, 1845, no classification on the paper file). A first remark is that this class was relatively large at birth. Second, an overwhelming majority of patents came from the “parent” class 428. Our interpretation is that this is an example of an old branch of knowledge, textile, that due to continued development needs to be more finely defined to allow better classification and retrieval - note that the definition of 442 is not only about what the technologies are, but what they are not (paper and PPR).
Our second example is motivated by [@kang2012science’s [-@kang2012science]]{} qualitative study of the process of creation of an IPC class, in which the USPTO participated. @kang2012science describes that the process of class creation was initiated because of a high number of incoming patents on the subject matter. Her main conclusion is that disputes regarding class delineation were resolved by evaluating the size of the newly created category under certain definitions. Class 506, “Combinatorial chemistry technology: method, library, apparatus” includes in particular “Methods specially adapted for identifying the exact nature (e.g., chemical structure, etc.) of a particular library member” and “Methods of screening libraries or subsets thereof for a desired activity or property (e.g., binding ability, etc.)”. The oldest reclassified patent is US3814732 (1974), “modified solid supports for solid phase synthesis”. It claims polymeric hydrocarbon resins that are modified by the introduction of other compounds. It was reclassified from class 260, “Chemistry of carbon compounds”. In contrast to 442 or 706 reviewed below, the reclassified patents are drawn relatively uniformly from several categories. Our interpretation is that this is an example of a mid-age technology (chemistry), which due to its interactions with other technologies (computers) develops a novel branch that is largely cross-cutting, but specific enough to warrant the creation of a new class.
Our last example is 706, “Data processing - Artificial Intelligence”, which is a “generic class for artificial intelligence type computers and digital data processing systems and corresponding data processing methods and products for emulation of intelligence (…); and including systems for reasoning with uncertainty (…), adaptive systems, machine learning systems, and artificial neural networks.”. We chose it because we possess at least some domain knowledge. The oldest reclassified AI patent is US3103648 (1963), which is an “adaptive neuron having improved output”, nicely echoing the recent surge of interest in neural networks for machine learning (deep learning). It was originally classified in class 340, “Communications: electrical”. In contrast to the other two examples, we find that the two largest sources were classes that have since been abolished (we recovered the names of 395 and 364 from the “1996 Index to the US patent classification”; their date established was available from the “Date Established” file documented in Section \[section:data-construction\]). Other classes with the “Data processing” header were created during the period, showing that the USPTO had to completely re-organize its computer-related classes around the turn of the millennium. Our interpretation is that this is an example of a highly novel technology, emerging within the broader context of the third and perhaps fourth industrial revolution. Because computers are relatively recent and general purpose, it is very difficult to create taxonomies with stable boundaries.
These three examples show strikingly different patterns of technological development and its associated classification volatility. An old branch of knowledge which is deepening (textile), a mid-age branch of knowledge that develops novel interactions with others (chemistry), and a new branch of knowledge (computers) for which classification officers strive to find useful organizational schemes. We acknowledge that these are only examples - presumably, some other examples of new classes would follow similar patterns, but other patterns may exist. We have found that about two thirds of post-1976 new classes have more than 90% of their pre-birth (and post-1976) reclassified patents coming from a single origin (pre-existing class), suggesting that a form of “branching” or “class splitting” is fairly common, at least when looking at OR classes only. We do not want to put too much weight on these early results, which will have to be systematised, developed further using subclasses and multiple classifications, and, crucially, compared against results obtained using the IPC/CPC. We do think that such a systematic study of classification re-organizations would tell a fairly detailed story of the evolution of technology, but rather than embarking on such a detailed study here we propose to summarize most of what we have learned so far into a simple theoretical model.
A simple model {#section:model}
==============
{width="\textwidth"}
In this section, we propose a very simple model that reproduces several facts described above. As compared to other recent models for size distributions and Heaps’ law in innovation systems [@tria2014dynamics; @marengo2016arrival; @lafondSDPC], the key assumption that we will introduce is that classes are sometimes split and their items reclassified. We provide basic intuition instead of a rigorous discussion[^32].
Let us start with the well-known model of @simon1955class. A new patent arrives every period. The patent creates a new category with probability $\alpha$, otherwise it goes to an existing category which is chosen with probability proportional to its size. The first assumption is meaningful, because in reality the number of categories grows over time. The second assumption is meaningful too, because this “preferential attachment”/“cumulative advantage” is related to Gibrat’s law: categories grow at a rate independent of their size, so that their probability of getting the next patent is proportional to their size.
There are three major problems with this model. First, it gives the Yule-Simon distribution for the size distribution of classes. This is basically a power law, so it has much fatter tails than the exponential law that we observe. In other words, it overpredicts the number of very large categories by a large margin. Second, since older categories have more time to accumulate patents, it predicts a strong correlation between age and size. Third, since at each time step categories are created with probability $\alpha$ and patents are created with probability $1$, the relationship between the number of categories $\alpha t$ and the number of patents $t$ is linear instead of Heaps’ constant elasticity relation.
A solution to make the size distribution exponential instead of power law is to replace preferential attachment with uniform random attachment, that is, to choose each category with equal probability. Besides the fact that this new assumption may seem less intuitive than Gibrat’s law, this would not solve the second problem because it would still be the case that older categories accumulate more patents. The solution is to acknowledge that categories are not entities that are defined once and for all; instead, they are frequently split and their patents are reclassified.
We therefore turn to the model proposed by @ijiri1975some. It assumes that new categories are introduced over time by splitting existing ones. In its original form the model postulates a linear arrangement of stars and bars. Each star represents a patent, and bars materialize the classes. For instance, if there are 3 patents in class 1 and 1 patent in class 2, we have |\*\*\*|\*|. Now imagine that between any two symbols there is a space. At each period, we choose a space uniformly at random and fill it with either a bar (with probability $\alpha$) or a star (with complementary probability). When a star is added, it means that an existing category acquires a new patent. When a bar is added, it means that an existing category is split into two categories. It turns out that the resulting size-distribution is exponential, as desired. But before we can evaluate the age-size relationship, we need to decide how to measure the age of a category. To do this we propose to reformulate the model as follows.
We start with one patent in one category. At each period, we first select an existing category $j$ with probability proportional to its size $k_j$ and add one patent in it. Next, with probability $\alpha$ we create two novel categories by splitting the selected category uniformly at random; that is, we draw a number $s$ from a uniform distribution ranging from 1 to $k_j$. Next, each patent in $j$ is assigned to the new category 1 with probability $s/k_j$, or to the new category 2 otherwise. This procedure leads to a straightforward interpretation: the patents are *reclassified* from $j$ to the first or the second new category. These two categories are *established* at this step of the process, and since patents are created sequentially one by one, we also know the *date of the first patent* of each new category. To give a date in calendar years to patents and categories, we can simply use the dates of the real patents.
Since $\alpha$ is constant, as in Simon’s original model, we are left with the third problem (Heaps’ power law is violated). We propose to make $\alpha$ time dependent to solve this issue[^33]. Denoting the number of categories by $C_t$ and the number of patents by $t$ (since there is exactly one new patent per period), we want to have $C_t = C_0 t^b$ (Heaps’ law). This means that $C_t$ should grow at a per period rate of $dC_t/dt=C_0 b t^{b-1}$. Since we have measured $b \approx 0.378$ and we want the number of categories to be 474 when the number of patents is 9,847,315, we can calculate $C_0=C_t/t^b=1.07$. This gives $\alpha_t = 1.07 \times 0.378 \hspace{1mm} t^{0.378-1}$, which we take to be 1 when $t=1$.[^34]
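The complete process (preferential attachment, a time-dependent splitting probability $\alpha_t$, uniform splits with reclassification) fits in a short simulation. The following is a minimal sketch using the parameter values above, not the code used to produce the figures:

```python
import random

def simulate(T, C0=1.07, b=0.378, seed=42):
    """One run of the model: each step adds one patent by preferential
    attachment, then with probability alpha_t splits the selected category
    uniformly at random. Returns category sizes and establishment dates."""
    rng = random.Random(seed)
    sizes = [1]   # one patent in one category at t = 1
    births = [1]  # step at which each category was established
    for t in range(2, T + 1):
        # Choose category j with probability proportional to its size k_j.
        j = rng.choices(range(len(sizes)), weights=sizes)[0]
        sizes[j] += 1
        # alpha_t = C0 * b * t**(b-1), capped at 1, enforces Heaps' law.
        alpha_t = min(1.0, C0 * b * t ** (b - 1))
        if rng.random() < alpha_t:
            k = sizes[j]
            s = rng.randint(1, k)  # uniform cut point in 1..k_j
            # Each patent is reclassified to the first new category with
            # probability s / k_j, to the second otherwise.
            left = sum(1 for _ in range(k) if rng.random() < s / k)
            sizes[j:j + 1] = [left, k - left]
            births[j:j + 1] = [t, t]  # both new categories established at t
    return sizes, births

sizes, births = simulate(1000)
# Each step adds exactly one patent, so the sizes always sum to T.
```

Note how the splitting step preserves the total patent count while resetting the establishment dates of the two pieces, which is what decouples category age from category size.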
Note how parsimonious the model is: its only inputs are the current number of patents and categories, and the Heaps’ exponent. Here we do not attempt to study it rigorously. We provide simulation results under specific parameter values. Fig. \[fig:simulationresults\] shows the outcome of a single simulation run (black dots and lines), compared to empirical data (red crosses).
The first pair of panels (a and b) shows the same (empirical) data as Fig. \[fig:Nclasses\] and \[fig:Heaps\] using red crosses. The results from the simulations are the curves. The simulation reproduces Heaps’ law well, by direct construction (the grey middle curve on panel b). But it also reproduces fairly well the evolution of the reconstructed number of classes, both the one based on the “date of first patent” and the one based on the “dates established”, and both against calendar time (years) and against the cumulative number of patents.
The second pair of panels (c and d) shows the age-size relationships, with the same empirical data as in Fig. \[fig:agesize\]. Panel c shows that the model seems to produce categories whose sizes are *not* strongly correlated with the year in which they were established, as in the empirical data. However, panel d shows that in our model there is a fairly strong negative correlation between size and the year of the first patent, while this correlation is absent (or much weaker) in the empirical data. These results for one single run are confirmed by Monte Carlo simulations. We ran the model 500 times and recorded the estimated coefficient of a simple linear regression between the log of size and each measure of age. The insets show the distribution of the estimated coefficients, with a vertical line showing the coefficient estimated on the empirical data.
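The statistic recorded in each Monte Carlo run is a simple OLS slope of log category size on an establishment date; a stdlib-only sketch (function name and interface are our own):

```python
import math

def age_size_slope(sizes, dates):
    """OLS slope of log(category size) on establishment date -- the
    coefficient recorded for the insets of panels c and d. Empty
    categories are skipped before taking logs."""
    pts = [(d, math.log(s)) for s, d in zip(sizes, dates) if s > 0]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    cov = sum((x - mx) * (y - my) for x, y in pts)
    var = sum((x - mx) ** 2 for x, _ in pts)
    return cov / var

# Sanity check: on data lying exactly on log(size) = 2 * date, the slope
# estimate recovers 2.
slope = age_size_slope([math.e ** 2, math.e ** 4, math.e ** 6], [1, 2, 3])
```

Running this on both the “date established” and “date of first patent” of each simulated category gives the two distributions shown in the insets.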
The next panel (e) shows the size distribution in a rank-size form, as in Fig. \[fig:ranksize\]. As expected, the model reproduces this feature of the empirical data fairly well. However, the empirical data is not exactly exponential and may be slightly better fitted by a negative binomial model (which has one more parameter and recovers the exponential when its shape parameter equals one). The top right histogram shows the distribution of the estimated negative binomial shape parameter. The empirical value departs only slightly from the Monte Carlo distribution.
Finally, the last panel (f) shows the evolution of the share of reclassified patents, with the empirical data from Fig. \[fig:sharereclass\] augmented by values of 1 between 1790 and 1899 (since no current categories existed prior to 1899, all patents have been reclassified). Here again, the model reproduces fairly well the empirical pattern. All or almost all patents from early years have been reclassified, and the share is falling over time. That said, for recent years (post 1976), the specific shape of the curve is different.
Overall, we think that given its simplicity the model reproduces a surprisingly high number of empirical facts. It allows us to understand the differences between the different patterns of growth of the reconstructed and historical number of classes. Without a built-in reclassification process it would not have been possible to match all these empirical facts – if only because without reclassification historical and reconstructed evolution coincide. This shows how important it is to consider reclassification when we look at the mesoscale evolution of the patent system. On the other hand, much more could be done to make the model more interesting and realistic, for instance by also modelling subclasses and requiring that reclassification takes place within a certain distance.
Conclusion
==========
In this paper, we have presented a quantitative history of the evolution of the main patent classes within the U.S. Patent Classification System. Our main finding is that the USPCS underwent regular and important changes. For academic researchers, these changes may be perceived as a source of problems, because this suggests that it may not always be legitimate to think that a given patent belongs to one and the same category forever. This means that results obtained using the current classification system may change in the future, when using a different classification system, and even if the very same set of patents is considered.
That said, we do not think the effect would be strong. Besides, using the current classification system is still often the best thing to do because of its consistency. Our point here is not to critique the use of the current classification, but to argue that historical changes to the classification system itself contain interesting information that has not been exploited.
Our first result is that different methods to compute the growth of the number of classes give widely different results, establishing that the changes to the classification system are very important. Our second result suggests that we do not see very large categories in empirical data because categories are regularly split, leading to an exponential size distribution with no relationship between the age and size of a category. Our third result is that reclassification data contains useful information to understand technological evolution. Our fourth result is that a very simple model that can explain many of the observed patterns needs to include the splitting of classes and the reclassification of patents. Taken together, these results show that it is both necessary and interesting to understand the evolution of classification systems.
An important limitation of our study is that it is highly limited in scope: we study the US, at the class level, using main classifications only. A contrasting example we have found is the French patent classification of 1853, which contained 20 groups and was revised multiple times in the $19^{th}$ century, but, while subclasses were added, it kept a total of 20 classes even in the “modern” classification of 1904. Similarly, while direct comparison is difficult, our preliminary exploration of other classification systems, such as the IPC and CPC, suggests that they do not feature the same size distribution, perhaps pointing to a different mode of evolution than the one proposed in our model.
We believe that our findings are interesting for all researchers working with economic and technological classifications, because we characterized quantitatively the volatility of the patent classification system. We do not know whether they are unstable because collective representations of technological artefacts are context-dependent, or because as more items are introduced and resources invested in classifying them appropriately, collective discovery of the “true” mesoscale partition takes place. But clearly, when interpreting the results which rely upon a static snapshot of a classification system, one should bear in mind that classification systems are dynamic.
A case in point is the use of technological classes to produce forecasts: how can we predict the evolution of a given class or set of classes several decades ahead, when we know these classes might not even exist in the future? In this paper, we are not proposing a solution to this forecasting issue - only raising conceptual problems that classification system changes pose. Further, even if we consider that today’s categorization will not change, a subtle issue arises in the production of correct forecasting models. To see this consider developing a time series model describing the growth of some particular classes. To test the forecasting ability of the model, one should perform out-of-sample tests, as e.g. @farmer2016predictable did for technology performance time series. Part of the past data is used to predict more recent data, and the data which is not used for estimation is compared to the forecasts. Now, note that when we use the current classification, we effectively use data from the present; that is, the delineation of categories for past patents uses knowledge from the present, and it is therefore not entirely valid to evaluate forecasts (there is “data snooping” in the sense that one uses knowledge of the future to predict the future).
Classification system changes pose serious problems for forecasting but may also bring opportunities: if classification changes reflect technological change then one can in principle construct quantitative theories of that change. Since the patterns described here could be roughly understood using an extremely simple model, it may be possible to make useful forecasts with more detailed models and data, for instance predicting new classes [@erdi2013prediction; @kyebambe2017forecasting]. This could be useful because patent classification changes are more frequent than changes to other classification systems such as industries, products and occupations. An interesting avenue for future research would be to use the changes of the patent classification system to predict the changes of industry and occupation classification systems, thus predicting the types of jobs of the future.
Beyond innovation studies, with the rising availability of very large datasets, digitized and carefully recorded classifications and classification changes will become available. It will be possible to explore classifications as an evolving network and track the splitting, merging, birth and death of categories. This is an exciting new area of research, but the big data that we will accumulate will only (or mostly) cover recent years. This makes historical studies such as the present one all the more important.
[^1]: This paper reuses material from an unpublished chapter of the first author’s PhD thesis at UNU-MERIT, Maastricht University. This work was supported by the National Research Foundation of Korea (NRF) funded by the Korean Government \[Grants No. NRF-2017R1A2B3006930 (D.K.)\], and the European Commission project FP7-ICT-2013-611272 (GROWTHCOM) (F.L.). We also acknowledge support from Partners for a New Economy, the London Institute for Mathematical Sciences, the Institute for New Economic Thinking at the Oxford Martin School, and the Oxford Martin School Programme on Technological and Economic Change. We have benefited from excellent comments from two anonymous referees and many colleagues, including Jeff Alstott, Yuki Asano, Mariano Beguerisse Díaz, J. Doyne Farmer, Marco Pangallo, Emanuele Pugliese, Giorgio Triulzi and Vilhelm Verendel. We are also grateful to Diana Greenwald and the USPTO for helping us locating data sources. All remaining errors are ours. Contacts: [email protected], [email protected]
[^2]: In a related context (how professional diversity scales with city size), @bettencourt2014professional and @youn2016scaling exploited the different layers of industry and occupation classifications systems to identify resolution-independent quantities. Measuring diversity depends on which layer of the classification system one uses, but in such a way that the infinite resolution limit (deepest classification layer) exists and can be used to characterise universal quantities.
[^3]: Patent officers are generally highly skilled workers. Besides anecdotal evidence on particularly smart patent examiners (Albert Einstein), patent officers are generally highly qualified (often PhDs). That said, @rotkin1999history mention that classification work was not particularly attractive and that the Classification division had difficulties attracting volunteers. More recently @paradise2012claiming alludes to “high turnover, less than ideal wages and heavy workloads”. There is an emerging literature on patent officers’ biases and incentives [@cockburn2003all; @schuett2013patent] but it is focused on the decision to grant the patent. Little is known about biases in classification.
[^4]: In labor economics, some studies have exploited classification system changes. @xiang2005new finds that new goods, as measured by changes to the SIC system, have a higher skill intensity than existing goods. @lin2011technological and @berger2015industrial used changes in the index of industries and the dictionary of occupational titles to evaluate new work at the city level.
[^5]: <http://www.cooperativepatentclassification.org/index.html>
[^6]: The main titles were Agriculture, Factory machine, Navigation, Land works, Common trades, Wheel carriages, Hydraulicks (the spelling of which was changed in 1830), Calorific and steam apparatus, Mills, Lever and screw power, Arms, Mathematical instruments, Chemical compositions and Fine arts.
[^7]: An interesting remark on this classification [@rotkin1999history] is that it already contained classes based on industry categories (agriculture, navigation, …) and classes based on a “specific mechanical force system” (such as Lever and screw power).
[^8]: The first example given by @rotkin1999history is a patent for a pump classified in both “Navigation” and in “Hydraulics and Hydrostatics”.
[^9]: <http://www.uspto.gov/web/offices/pac/mpep/s1502.html>
[^10]: <http://www.uspto.gov/page/seven-classification-design-patents>
[^11]: 1986 for @strumsky2012using, 1978 for @paradise2012claiming and 1975 according to @strumsky2015identifying and to the data that we use here (US3896814). Again, these differences reflect the importance of reclassification.
[^12]: see <http://www.uspto.gov/page/addendum-reclassification-classes-518-585>
[^13]: These classes also have a hierarchy indicated by their number, as subclasses within a class schedule usually do.
[^14]: Our data is available at <https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/ZJCDCE>
[^15]: We had to make some assumptions. In the 1960s, Designs appeared subdivided into “Industrial arts” and “Household, personal and fine arts”, so we assumed that the number of design classes is 2 up to 1977, when Design classes appear with their name and number. We implicitly assume that prior to 1977 the design classes were actually subclasses, since in 1977 there were 39 Design classes, whereas the number of (sub)classes used for design patents in 1976 was more than 60. It should be noted, though, that according to the dates established, some of the current design classes were created in the late 1960s. Another issue was that for 1976 the number of Organic compound classes was not clear; we assumed it was 6, as listed in 1977. Finally, we sometimes had two slightly different values for the same year due to contradictory sources or because the sources refer to a different month.\[footnoteDesign\]
[^16]: <https://archive.org/index.php> where we can find the evolution of the url <http://www.uspto.gov/web/patents/classification/selectnumwithtitle.htm>. We added the class “001” to the count.
[^17]: The list of classes available with their dates established contains 476 classes, but it does not contain 001, and it contains 364, 389, and 395 which have been abolished. We removed the abolished classes, and for Figs \[fig:Nclasses\] and \[fig:Heaps\] we assumed 001 was established in 1899.
[^18]: We first removed 303 patents with no main (OR) classification, and then 92 patents dated January 1st 1800. We kept all patent kinds.
[^19]: at <https://bulkdata.uspto.gov/> (Access date: January 7, 2018)
[^20]: The (reconstructed) number of classes is slightly lower if we consider only Primary classes, because some classes are used only as a cross-reference, never as a primary class. These classes are 902: Electronic funds transfer, 903: Hybrid electric vehicles, 901: Robots, 930: Peptide or protein sequence, 977: Nanotechnology, 976: Nuclear technology, 968: Horology, 987: Organic compounds containing a Bi, Sb, As, or P atom or containing a metal atom of the 6th to 8th group of the periodic system, 984: Musical instruments, G9B: Information storage based on relative movement between record carrier and transducer.
[^21]: Collected from <https://www.uspto.gov>, page USPCS dates-established
[^22]: “Buckles, Buttons, clasps, etc.” is an example of a class that was created early under a slightly different name (1872 according to @simmons2014categorizing, see @bailey1946history for details) but has a posterior “date established” (1904 according to the USPTO). Another example is “Butchering”.
[^23]: It is possible to obtain a good fit by limiting the fit to the latest periods; however, this is arbitrary and gives a very low Heaps’ exponent, leaving unexplained the creation of the vast majority of classes.
[^24]: For simplicity we used the (continuous) exponential distribution instead of the more appropriate (discrete) geometric distribution, but this makes no difference to our point. We have not rigorously tested whether or not the exponential hypothesis can be rejected, because the proper hypothesis is geometric and classical test statistics such as Kolmogorov-Smirnov do not easily apply to discrete distributions. Likelihood ratio tests interpreted at the 5% level showed that it is possible to obtain better fits using two-parameter distributions that extend the exponential/geometric, namely the Weibull and the Negative binomial, especially after removing the two smallest categories, which are outliers (containing 4 and 6 patents) and are part of larger series (532 and 520).
[^25]: Apart from class 532. We confirmed this by manually searching the USPTO website. 532 is part of the Organic compound classes, which have been reorganized heavily, as discussed in Section \[section:USPCSrationale\]
[^26]: We removed US6481014.
[^27]: Relatedly, as noted by a referee, if patent examiners are also responsible for reclassification, then their prior art search might be oriented towards patents that they have re-classified, for which their memory is more vivid.
[^28]: We count citations made to patents for which we have reclassification data, from patents granted until June 2015. We removed duplicated citations.
[^29]: 2015 is excluded because no patents had been reclassified.
[^30]: For more details on the NBER categories, see the historical reference [@hall2001nber] and the recent effort by @marco2015uspto to attribute NBER (sub) categories to patent applications.
[^31]: See the online version at <http://danielykim.me/visualizations/PatentReclassificationHJTcategory/>
[^32]: For instance, we do not claim that the model *in general* produces a certain type of pattern such as a lack of age-size relationship. We simply show that under a specific parametrisation taken from the empirical data (say $\sim$10 million patents, 500 classes, and a Heaps exponent of $0.38$), it produces patterns similar to the empirical data.
[^33]: An interesting alternative (instead of using the parameter $\alpha$) would be to model separately the process by which the number of patents grow and patent classification officers split categories.
[^34]: There is a small inconsistency arising because the model is about primary classification only, but the historical number of classes and Heaps’ law are measured using all classes, because we could not differentiate cross-reference classes in historical data. Another point of detail is that we could have used the estimated $C_0=0.17$ instead of the calculated one. These details do not fundamentally change our point.
---
abstract: 'Massive black hole binaries are naturally predicted in the context of the hierarchical model of structure formation. The binaries that manage to lose most of their angular momentum can coalesce to form a single remnant. In the last stages of this process, the holes undergo an extremely loud phase of gravitational wave emission, possibly detectable by current and future probes. The theoretical effort towards obtaining a coherent physical picture of the binary path down to coalescence is still underway. In this paper, for the first time, we take advantage of observational studies of active galactic nuclei evolution to constrain the efficiency of gas-driven binary decay. Under conservative assumptions we find that gas accretion toward the nuclear black holes can efficiently lead binaries of any mass forming at high redshift ($\gsim 2$) to coalescence by the present time. The observed “downsizing” trend of the accreting black hole luminosity function further implies that the gas inflow is sufficient to drive light black holes down to coalescence, even if they bind in binaries at lower redshifts, down to $z \approx 0.5$ for binaries of $\sim 10^7 \msun$, and $z \approx 0.2$ for binaries of $\sim 10^6 \msun$. This has strong implications for the detection rates of coalescing black hole binaries of future space-based gravitational wave experiments.'
author:
- |
M. Dotti$^{1,2}$[^1], A. Merloni$^{3}$, C. Montuori$^{4}$\
$^{1}$ Università degli Studi di Milano-Bicocca, Piazza della Scienza 3, 20126 Milano, Italy\
$^{2}$ INFN, Sezione di Milano-Bicocca, Piazza della Scienza 3, 20126 Milano, Italy\
$^{3}$ Max Planck Institut für Extraterrestrische Physik, Giessenbachstrasse 1, D-85748 Garching bei München, Germany\
$^{4}$ Università degli Studi dell’Insubria, Via Valleggio 11, 22100 Como, Italy\
title: Linking the fate of massive black hole binaries to the active galactic nuclei luminosity function
---
\[firstpage\]
quasars: supermassive black holes - galaxies: interactions - galaxies: nuclei - galaxies: active - black hole physics - gravitational waves
Introduction
============
Massive black hole (MBH) pairs are expected to form during galaxy mergers [@begelman80 BBR, hereafter]. If the nuclei of the two merging galaxies manage to survive the tidal forces at play for long enough [e.g. @callegari09; @vw14], dynamical friction can efficiently bring the two MBHs to the centre of the galactic remnant, forcing them to bind in a MBH binary (BHB).
From the formation of the binary onward, dynamical friction becomes less and less efficient (BBR), and other dynamical processes are needed to further evolve the binary. In particular, the interactions with single stars and with nuclear gas have been thoroughly investigated [see @dotti12 for a recent review]. The sole effect of gravitational wave emission forces the two MBHs to coalesce within the Hubble time if any physical process manages to shrink the semi-major axis of the BHB down to: $$\label{eq:agw} a_{\rm GW} \approx 2\times 10^{-3}
f(e)^{1/4}{q^{1/4}\over(1+q)^{1/2}}\left ({M \over 10^6\,\msun}\right )^{3/4}\,{\rm pc},$$ where $M=M_1+M_2$ is the total mass of the binary, $q=M_2/M_1$ is its mass ratio, and $f(e)=[1+(73/24)e^2+(37/96)e^4](1-e^2)^{-7/2}$ is a function of the binary eccentricity $e$ [@peters64]. The assessment of how effective the various processes are to evolve the binary down to $a_{\rm GW}$ is usually referred to as the ’last parsec’ problem.
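As a quick numerical illustration of eq. \[eq:agw\], the sketch below (plain Python; the function name and sample binaries are ours, not from the paper) evaluates $a_{\rm GW}$ for a few fiducial cases:

```python
import math

def a_gw_pc(M_total, q, e=0.0):
    """Separation (in pc) below which gravitational-wave emission alone
    drives coalescence within a Hubble time (eq. agw), with the Peters
    (1964) eccentricity enhancement factor f(e)."""
    f_e = (1 + (73/24)*e**2 + (37/96)*e**4) * (1 - e**2)**(-7/2)
    return 2e-3 * f_e**0.25 * q**0.25 / math.sqrt(1 + q) * (M_total / 1e6)**0.75

print(a_gw_pc(1e6, 1.0))   # equal-mass, circular 10^6 Msun binary: ~1.4e-3 pc
print(a_gw_pc(1e8, 0.1))   # heavier, unequal-mass binary: ~0.034 pc
```

The sub-milliparsec scale of these separations is what makes reaching $a_{\rm GW}$ ('the last parsec problem') the crux of the binary evolution.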
Attempts to determine the fate of BHBs (whether they manage to reach $a_{\rm
GW}$ and to coalesce or they remain bound in double systems forever) have been made first considering gas-poor environments, where BHB dynamics is assumed to be driven by three-body interactions with single stars. In principle, only stars whose orbits intersect the BHB can efficiently interact with it. In an extended stellar system, however, only a small fraction of the phase space (the so-called binary “loss cone”) is populated by such orbits. Stars interacting with the BHB remove energy and angular momentum from the binary, getting ejected from the loss cone. The binary evolution timescale is hence related to the rate at which new stars are fed into the loss cone [e.g. @mf04]. Physical mechanisms able to efficiently refuel the loss cone are required in order for the binary to coalesce in less than a Hubble time. Possible mechanisms that have been proposed so far are: the presence of massive perturbers [such as giant molecular clouds @mp08], deviations from central symmetry (e.g. Khan, Just & Merritt 2011; Preto et al. 2011; Gualandris & Merritt 2011, but see also Vasiliev, Antonini & Merritt 2014) and a gravitational potential evolving in time [e.g. @vam14].
Similarly, the effect of the interaction between BHBs and nuclear gas has been explored analytically as well as numerically [see @dotti12 for an up-to-date review]. While the full details of the gas/binary interaction are still under debate, mainly due to the complexity of the system, a clear issue remains to be addressed. As in the stellar-driven case, the migration timescale of the BHB primarily depends on how much gas is able to spiral toward the MBHs and interact with them, instead of, e.g., turning into stars [e.g. @lodato09]. This problem is remarkably similar to the fueling problem of Active Galactic Nuclei (AGN), i.e. how gas manages to lose most of its angular momentum in order to sustain the observed nuclear activity.
Differently from the stellar-driven case, however, observational studies of the AGN population make it possible to constrain the properties of the gas flowing onto a MBH (in particular its mass accretion rate). Decades of multi-wavelength surveys of accreting MBHs have provided a relatively robust picture of the AGN luminosity function evolution [see e.g. @Hasinger2005; @Hopkins2007; @Buchner2014]. Coupling such evolution with the observationally determined MBH mass function via a continuity equation [@Cavaliere71; @Small92] makes it possible to further infer the evolution of the nuclear inflow rates as a function of MBH mass and redshift (e.g. Merloni & Heinz 2008, Shankar et al. 2013).
In this work we will assume that the fueling of BHBs is consistent with that of single MBHs in the same mass and redshift interval. In this way we can estimate the incremental reservoir of angular momentum that a BHB can interact with during its cosmic evolution, constraining the binary fate, at least in a statistical fashion, directly from observations.
Model
=====
Gas driven BHB dynamics
-----------------------
To get an order of magnitude estimate of the binary coalescence timescale we propose a very simple zeroth-order model for the interaction between a BHB and a circum-binary accretion disc. We assume that the BHB is surrounded by an axisymmetric, geometrically thin accretion disc co-rotating with the BHB. Under this assumption, the gas inflow is expected to be halted by the binary, whose gravitational torque acts as a dam, at a separation $r_{\rm gap}
\approx 2 a$ [e.g. @al94], where $a$ is the binary semi-major axis. At this radius the gravitational torque between the binary and the disc is perfectly balanced by the torques that determine the large scale ($r \gg a$) radial gas inflow. Then, we can write the variation of the binary orbital angular momentum magnitude ($\rm{L_{BHB}}$) as: $$\label{eqn:delta_mom1}
{\rm dL_{\rm BHB} = -dL_{\rm gas}} = -\dot m \, {\rm d}t \, \sqrt{G \, M \,
r_{\rm gap}}$$ where $M$ is the total binary mass, and $\dot m$ is the accretion rate within the disc. Considering the definition of $\rm{L_{BHB}}=\mu \,
\sqrt{G\,M\,a}$, where $\mu$ is the binary reduced mass, from eq.\[eqn:delta\_mom1\] we derive: $$\label{eqn:delta_a1}
\frac{\mu}{2\sqrt{2}} \, \frac{{\rm d}a}{a} \approx - \dot{m} \, {\rm d}t,$$ where $\mu$ does not evolve in time, consistent with the assumption that the binary interacts with the disc only through the gravitational torques, which stop the gas inflow and prevent the binary components from accreting.
As a note of caution, we stress that the circum-binary discs could be, in principle, counter-rotating with respect to the BHBs they orbit [@nixon11b]. In this configuration the gas interacts with the BHB at $\approx a$, instead of $\sim 2a$ [@nixon11a], and the specific angular momentum transfer per unit time is uncertain, depending on how strongly the secondary MBH is able to perturb the gas inflow. As an example, if all the gas passing through $a$ bounds to the secondary MBH, the binary angular momentum diminishes by two times the angular momentum carried by the gas. In this scenario, the BHB evolves on the same timescale regardless of the BHB-disc relative orientation. Moreover, [@Constanze2014] demonstrated that BHBs embedded in self-gravitating retrograde discs may secularly tilt their orbital plane toward a coplanar prograde equilibrium configuration. For these reasons we will focus our investigation on the prograde case in the following.
As a first order of magnitude estimate of the coalescence timescale we can make the simplifying (although quite common) assumption of an Eddington limited accretion event and integrate eq.\[eqn:delta\_a1\]. Further assuming a fixed radiative efficiency ($\epsilon =0.075$, see section 2.2) we obtain $$\label{eqn:delta_t1}
\Delta t_{\rm BHB} \sim \ln{\left(\frac{a_i}{a_c} \right ) \frac{\mu \,
\epsilon \,c^{2}}{2\sqrt{2} \,L_{\rm Edd}}} \sim 10^{7} \,
\frac{q}{(1+q)^2}\ln{\left(\frac{a_i}{a_c} \right ) \, {\rm yr} }$$ where $a_i$ and $a_c$ define the binary separation range where the MBH-gas interaction drives the binary orbital decay.
Eq. \[eqn:delta\_t1\] shows that, in order to coalesce, the binary has to interact with an amount of matter of the order of its reduced mass, with only a weak dependence on the exact ratio between the initial and final separation. A conservative estimate for this ratio can be obtained setting $a_i$ equal to the radius at which the two MBHs bind in a binary $$\label{eqn:binary}
a_i \sim G
M/2\sigma_{\star}^2 \sim 0.5 ~\left(\frac{M}{10^6 \msun}\right)^{1/2} \, {\rm pc},$$ where we have assumed the $M -\sigma_{\star}$ relation [@gultekin]. The final separation can be conservatively estimated as $a_c \sim 6\times 10^{-5} (M/10^6 \msun)^{3/4}$ pc, in order for the BHB to coalesce due to the emission of gravitational waves in $\sim 10^4 \,$ yr. Under these assumptions $a_i/a_c \propto M^{-1/4}$ and $\ln(a_i/a_c) < 9$ for a binary mass $> 10^6 M_{\odot}$.
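Combining eq. \[eqn:delta\_t1\] with these estimates of $a_i$ and $a_c$ makes the order-of-magnitude arithmetic explicit. The snippet below (illustrative Python, under the same Eddington-limited assumptions as the text; the function name is ours) evaluates the migration time:

```python
import math

def migration_time_yr(M_total, q):
    """Order-of-magnitude gas-driven shrinking time of eq. delta_t1,
    with a_i from the M-sigma relation (eq. binary) and a_c the
    separation where GW emission takes over (both in pc, masses in Msun)."""
    M6 = M_total / 1e6
    a_i = 0.5 * math.sqrt(M6)      # binding separation, pc
    a_c = 6e-5 * M6**0.75          # handover to GW-driven decay, pc
    return 1e7 * q / (1 + q)**2 * math.log(a_i / a_c)

print(migration_time_yr(1e6, 0.1))   # ~7.5 Myr
print(migration_time_yr(1e9, 0.3))   # of order 10^7 yr
```

The weak logarithmic dependence on $a_i/a_c$ is evident: varying the binary mass by three orders of magnitude changes the timescale by less than a factor of two at fixed $q$.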
Gas inflow onto BHBs: observational constraints
-----------------------------------------------
We can now, for the first time, try to relax any a priori assumption on the accretion rate in the circum-binary disc (such as the Eddington limit used in eq. \[eqn:delta\_t1\]), adopting an observationally driven prescription for the evolution of $\dot m$ as a function of MBH mass and redshift. In particular, we adopt the average accretion rates obtained assuming that the MBH evolution is governed by a continuity equation, where the MBH mass function at any given time can be used to predict that at any other time, provided the distribution of accretion rates as a function of black hole mass is known. The continuity equation can be written as: $$\label{eq:continuity}
\frac{\partial \psi(m,t)}{\partial t} +
\frac{\partial}{\partial m}\left( \psi(m,t) \langle \dot M
(m,t)\rangle \right)=0$$ where $m=Log\, M$ ($M$ is the black hole mass in solar units), $\psi(m,t)$ is the MBH mass function at time $t$, and $\langle \dot
M (m,t) \rangle$ is the average accretion rate of a MBH of mass $M$ at time $t$. The average accretion rate can be defined through a “fueling” function, $F(\dot m,m,t)$, describing the distribution of accretion rates for objects of mass $M$ at time $t$: $\langle \dot
M(M,z)\rangle = \int \dot M F(\dot m,m,z)\, \mathrm{d}\dot m$. Such a fueling function is not known a priori, and observational determinations thereof have been able so far to probe robustly only the extremes of the overall population. However, the fueling function can be derived by inverting the integral equation that relates the luminosity function ($\phi$) of the AGN population with its mass function. Indeed we can write: $$\label{eq:filter}
\phi(\ell,t)=\int F(\ell-\zeta,m,t) \psi(m,t)\; \mathrm{d}m$$ where we have called $\ell=Log\, L_{\rm bol}$ and $\zeta=Log\,
(\epsilon c^2)$, with $\epsilon$ the radiative efficiency. This is assumed to be constant and its average value can be estimated by means of the Soltan argument [@Soltan1982], which relates the mass density of remnant MBHs in the local Universe with the integrated amount of gas accreted during the AGN phases, as identified by the luminosity function.
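As an illustration only, a toy explicit discretisation of the continuity equation (eq. \[eq:continuity\]) could look as follows; this is a sketch on an invented grid, not the minimisation scheme actually used here to invert eq. \[eq:filter\]:

```python
import numpy as np

def continuity_step(psi, m_grid, mdot_avg, dt):
    """One explicit time step of d(psi)/dt = -d/dm [ psi * <Mdot> ],
    with m = log M on a grid (toy forward-Euler discretisation)."""
    flux = psi * mdot_avg                 # mass-function flux psi <Mdot>
    dflux_dm = np.gradient(flux, m_grid)  # centred finite differences in m
    return psi - dt * dflux_dm

# Sanity check: a flat mass function with a mass-independent <Mdot>
# has zero flux divergence and is therefore stationary.
m = np.linspace(6.0, 10.0, 9)
print(continuity_step(np.ones(9), m, np.full(9, 2.0), 0.1))
```

In practice the inversion proceeds the other way round: the luminosity function constrains the fueling function, which in turn fixes $\langle \dot M(m,t) \rangle$.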
Gilfanov & Merloni (2014) reviewed the most recent assessments of the Soltan argument. Adopting as a starting point the bolometric AGN luminosity function of Hopkins et al. (2007), one can express the estimate of the (mass-weighted) average radiative efficiency, $\langle \epsilon \rangle$, as:
$$\label{eq:soltan}
\frac{\langle \epsilon \rangle}{1-\langle \epsilon \rangle} \approx 0.075 \left[\xi_0(1-\xi_i-\xi_{\rm CT}+\xi_{\rm lost})
\right]^{-1}$$
where $\xi_0=\rho_{\rm BH,z=0}/ 4.2\times 10^5 M_{\odot} {\rm Mpc}^{-3}$ is the local ($z=0$) MBH mass density in units of 4.2$\times 10^5 M_{\odot} {\rm
Mpc}^{-3}$ [@marconi2004]; $\xi_i$ is the mass density of black holes at the highest redshift probed by the bolometric luminosity function, $z
\approx 6$, in units of the local one, and encapsulates our uncertainty on the process of BH formation and seeding in proto-galactic nuclei; $\xi_{\rm CT}$ is the fraction of the MBH mass density (relative to the local one) grown in heavily obscured, Compton Thick AGN; finally, $\xi_{\rm lost}$ is the fraction of BH mass contained in “wandering” objects, that have been ejected from a galaxy nucleus, for example, in the aftermath of a merging event because of the anisotropy in the emission of gravitational waves [e.g. @Lousto13 and references therein]. More recent estimates of the fraction of MBH mass density accumulated in heavily obscured, Compton-Thick AGN [@Buchner2014] suggest that $\xi_{\rm CT} \approx 0.35$. Neglecting $\xi_i$ in eq. \[eq:soltan\] (i.e. assuming a negligibly small seed BH mass density), the average radiative efficiency will vary approximately between 0.075 and 0.1 for $0< \xi_{\rm lost} < 0.3$. Therefore in the following we will use the results obtained by performing a numerical inversion of eq. \[eq:filter\], based on a minimization scheme that used the Hopkins et al. (2007) AGN bolometric luminosity function as a constraint, and assuming a fixed radiative efficiency in the range $0.075<\epsilon<0.1$. The average Eddington ratios $f_{\rm Edd}$ (bolometric luminosity normalized to the Eddington limit) and accretion rates obtained in this way are shown as a function of redshift in the left and right panels of figure \[fig:mdot\], respectively. We note that increasing the radiative efficiency value implies a decrease in the average accretion rates, especially for higher MBH masses at higher redshifts where the MBH evolution is relatively more important. This is consistent with the adopted calculation scheme where the AGN luminosity function is assumed as a constraint to derive the accretion rates estimates.
*Figure \[fig:mdot\] (image not recovered here): average Eddington ratios $f_{\rm Edd}$ (left panel) and average accretion rates (right panel) as a function of redshift, for fixed radiative efficiencies in the range $0.075<\epsilon<0.1$.*
Results
=======
The observational constraints on the average value of $\dot{m}$ (function of M and z) discussed in section 2.2 allow us to numerically integrate eq. \[eqn:delta\_a1\] to determine the migration timescale $\Delta t_{\rm
BHB}$ for any BHB. We can further translate $\Delta t_{\rm BHB}$ into an estimate of the minimum redshift $z_{\rm BHB}$ at which a BHB of mass $M$ must form in order to coalesce within a given redshift $z_{\rm coal}$. The results of the numerical integration are shown in figure \[fig:zq\] for a binary mass ratio $q=0.1$ (lower panel) and $q=0.3$ (upper panel).
![Minimum redshift $z_{\rm BHB}$ at which two MBHs must bind in a binary of total mass $M_{\rm BHB}$ in order to coalesce within three different redshifts $z_{\rm coal}=0$ (black lines), 1 (blue lines) and 2 (red lines). The upper and lower panels correspond to a mass ratio $q=0.3$ and $q=0.1$, respectively. As in figure \[fig:mdot\], the shaded areas correspond to the range of values obtained considering a fixed radiative efficiency comprised between $\epsilon=0.075$ (left curve) and $\epsilon=0.1$ (right curve). Thin dotted lines mark the three values assumed for $z_{\rm coal}$, and are shown to facilitate the reader. The case represented in the figure is when all the gas inflowing toward the galaxy nucleus interacts with the secondary MBH.[]{data-label="fig:zq"}](fig1_new.ps)
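The numerical integration of eq. \[eqn:delta\_a1\] underlying these curves amounts to accumulating the accreted mass along a tabulated accretion history; a minimal sketch (illustrative names and trapezoidal quadrature, not the actual code used for the figures) is:

```python
import numpy as np

def log_shrink_factor(t_grid_yr, mdot_msun_yr, mu_msun):
    """Integrate eq. delta_a1: ln(a_i/a_f) = (2*sqrt(2)/mu) * Int mdot dt,
    for a tabulated average accretion history mdot(t).  The binary reaches
    coalescence if the result exceeds ln(a_i/a_c) (< 9 in the text)."""
    dm = 0.5 * (mdot_msun_yr[1:] + mdot_msun_yr[:-1]) * np.diff(t_grid_yr)
    return 2 * np.sqrt(2) * dm.sum() / mu_msun

# Example: a constant 1 Msun/yr inflow over 10 Myr acting on mu = 10^7 Msun
t = np.linspace(0.0, 1e7, 101)
print(log_shrink_factor(t, np.ones_like(t), 1e7))  # 2*sqrt(2) ~ 2.83
```

Inverting this condition for the formation time, given the $\dot m(M,z)$ histories of figure \[fig:mdot\], yields the minimum formation redshift $z_{\rm BHB}$ shown above.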
No ’last parsec’ problem seems to exist for binaries of total mass $M<10^7$ and $q \lsim 0.3$ formed at $z_{\rm BHB} > 0.5$. More massive binaries ($M>10^8$) do not coalesce within the present time if formed at $z_{\rm BHB} < 1.2$, and the extreme cases of $M>10^9$ coalesce within $z=0$ only if formed at $z_{\rm BHB} \gsim 2$. Binaries forming at higher redshift coalesce in shorter times, since the average accretion rates increase with $z$ within the redshift interval considered in this analysis. For example, all the $q<0.3$ binaries forming at $z_{\rm BHB} \sim 2.5$ coalesce within $z_{\rm coal}=2$.
As a note of caution, we stress that the assumption that all the gas inflow is stopped by the binary is an oversimplification. As a matter of fact, numerical 2-D and 3-D simulations (independently of the exact treatment of gravity or hydrodynamics) demonstrated that the deviations from axisymmetry close to the binary, driven by its gravitational potential, allow for periodic inflows of gas within $r_{\rm gap}$ [@Hayasaki08; @Cuadra09; @Roedig11; @Sesana12; @Shi12; @Noble12; @Dorazio13; @Farris14]. To put firm upper limits on the MBH migration timescale, we assume that only a fraction $f=0.4$ of the gas inflow interacts dynamically with the binary, while the remaining $60\%$ of the gas fails to strongly interact gravitationally with the binary, and falls onto one of the MBHs unimpeded. A simple timescale estimate for the binary shrinkage, under the assumption of MBHs accreting at a fixed fraction of the Eddington limit, can be obtained replacing $L_{\rm Edd}$ with $f\,L_{\rm Edd}$ in eq. \[eqn:delta\_t1\]. Reducing the fraction of interacting matter increases the migration timescale, hence increasing the minimum $z_{\rm BHB}$ required for the BHB to coalesce within a given $z_{\rm
coal}$ as shown in figure \[fig:z0.1\].
![Same as figure \[fig:zq\] either assuming that all the gas interacts with the secondary MBH (shading with inclined lines) or that only a fraction of 0.4 of the inflowing gas mass interacts with $M_2$ (shading with horizontal lines).[]{data-label="fig:z0.1"}](fig2_new.ps)
As expected, decreasing $f$ slows down the evolution of every binary, but the general trends discussed while commenting on the $f=1$ case remain valid. BHBs with total mass $M\sim 10^7$ at $z > 2$ are particularly affected, because of the redshift at which their typical Eddington ratio peaks (see figure \[fig:mdot\]). Still, these BHBs manage to coalesce between $z_{\rm coal}\approx 1 - 2$, as do their more massive counterparts.
Conclusions
===========
We estimated the gas-driven orbital decay of BHBs from the instant at which they bind in a binary down to their final coalescence. For the first time we propose an observationally driven approach, which has the advantage of being affected neither by assumptions on the (largely unknown) feeding process driving the accretion, nor by the fraction of the gas inflow that turns into stars at large scales before interacting with the BHB.
Our investigation shows that 1) high-redshift BHBs of any mass coalesce on a very short timescale; 2) low-mass BHBs ($M\lsim 10^7 \msun$) formed at low redshift manage to merge by $z=0$ anyway, since their accretion history peaks at lower redshifts. These findings are particularly relevant since the coalescence of low-mass BHBs is one of the sources of gravitational waves detectable by future space-based gravitational wave interferometers, such as the mission concept eLISA [@lisa].
We have worked under very conservative assumptions:\
- We assumed that binaries in the late stages of galaxy mergers are fueled as much as MBHs in comparable isolated galaxies, without assuming any merger-driven boost in the accretion. The merger process itself is considered, however, an efficient reshuffler of the gas angular momentum at galactic scales, efficiently driving gas inflows all the way down to the two MBHs, as confirmed by observations [e.g. @kk84; @k85; @a07; @k11; @e11; @s11; @s14] as well as by a wealth of numerical works performed on different kinds of mergers [e.g. @dsh05; @jbn09; @hopkins10; @callegari11; @vw12; @capelo14].\
- We have assumed that all the gas accretion onto MBHs is radiatively efficient, and that the mass accretion rate at a few gravitational radii ($R_g$, where basically all the luminosity is emitted) is equal to that at thousands of $R_g$, where the gas interacts with the secondary MBH. We stress that a significant fraction of the accretion flow, however, could be ejected in the form of fast outflows, as often found in numerical simulations [e.g. @proga03; @narayan12].
Under our conservative assumptions, gas driven migration of high mass ($M
\gsim 10^8 \msun$) BHBs formed at low redshift could be inefficient. Such binaries are of particular interest, being the only ones observable through pulsar timing [@Hobbs10 and references therein]. The morphological and dynamical characteristics of their hosts suggest, however, that interactions with stars could play a significant role in shrinking these binaries. The hosts of very massive MBHs often show triaxial profiles [e.g. @faber97; @kormendy09 and references therein]. The lack of spherical and axial symmetry in the potential of the hosts allows single stars to substantially modify their angular momentum components. Stars can hence re-fill the loss cone of the binaries at rates significantly higher than those expected in spherical systems[^2], possibly leading BHBs to a fast coalescence [see @vasiliev14 for a recent discussion].
Acknowledgments {#acknowledgments .unnumbered}
===============
We acknowledge the anonymous Referee, Alberto Sesana and Eugene Vasiliev for useful comments and fruitful discussions.
[99]{}
Alonso M.S., Lambas D.G., Tissera P. & Coldwell G., 2007, MNRAS, 375, 1017
Amaro Seoane P., et al., 2013, ArXiv:1305.5720
Artymowicz P. & Lubow S.H., 1994, ApJ, 421, 651
Begelman M.C., Blandford R.D., & Rees M.J., 1980, Nature, 287, 307
Buchner J., et al., A&A, in press.
Callegari S., Mayer L., Kazantzidis S., Colpi M., Governato F., Quinn T., & Wadsley J. 2009, ApJ Letters, 696, 89
Callegari S., Kazantzidis S., Mayer L., Colpi M., Bellovary J. M.., Quinn T., & Wadsley J. 2011, ApJ, 729, 85
Capelo P.R., Volonteri M., Dotti M., Bellovary J.M., Mayer L., & Governato F., 2014, submitted to MNRAS (arXiv:1409.0004)
Cavaliere A., Morrison P., & Wood K., 1971, ApJ, 170, 233
Cuadra J., Armitage P.J., Alexander R.D. & Begelman, M. C., 2009, MNRAS, 393, 1423
Di Matteo T., Springel V., & Hernquist L., 2005, Nature, 433, 604
D’Orazio D.J., Haiman Z. & MacFadyen A.I., 2013, MNRAS, 436, 2997
Dotti M., Sesana A., Decarli R., 2012, Advances in Astronomy, 2012
Ellison S.L., Patton D.R., Mendel J.T. & Scudder J.M., 2011, MNRAS, 418, 2043
Faber S.M. et al., 1997, AJ 114, 1771
Farris B. D., Duffell P., MacFadyen A.I. & Haiman Z., 2014, ApJ, 783, 134
Gilfanov M., & Merloni A., 2014, Space Science Reviews, 183, 121
Gold R., Paschalidis V., Etienne Z.B., Shapiro S.L. & Pfeiffer H.P., 2014, PhRvD, 89, 4060
Gültekin K. et al., 2009, ApJ, 698, 198
Gualandris A., & Merritt D., 2011, (arXiv:1107.4095)
Hasinger G., Miyaji T., & Schmidt M., 2005, A&A, 441, 417
Hayasaki K., Mineshige S. & Ho L.C., 2008, ApJ, 682, 1134
Hobbs G., et al., 2010, Class. Quant. Grav., 27, 084013
Hopkins P. F., Richards G. T., & Hernquist L. 2007, ApJ, 654, 731
Hopkins P. F., & Quataert E., 2010, MNRAS, 407, 1529
Johansson P.H., Burkert A., & Naab T., 2009, ApJ, 707, L184
Keel W.C, Kennicutt R.C., Himmel E. & van der Hulst J.M., 1985, AJ, 90, 708
Kennicutt R.C. & Keel W.C, 1984, ApJ, 279, L5
Khan F.M., Just A. & Merritt D., 2011, ApJ, 732, 89
Kormendy J., Fisher D.B., Cornell M.E & Bender R., 2009, ApJS, 182, 216
Koss M., Mushotzky R., Veilleux S., Winter L.M., Baumgartner W., Tueller J., Gehrels N. & Valencic L., 2011, ApJ, 739, 57
Lodato, G., Nayakshin, S., King, A.R., & Pringle, J.E., 2009, MNRAS, 398, 1392
Lousto C. O., Zlochower Y., 2013, Phys. Rev. D, 87, 084027
Makino, J., & Funato, Y., 2004, ApJ, 602, 93
Marconi A., Risaliti G., Gilli R., Hunt L. K., Maiolino R., Salvati M., 2004, MNRAS, 351, 169
Merloni A. & Heinz S., 2008, MNRAS, 388, 1011
Merritt D., 2013, Dynamics and Evolution of Galactic Nuclei (Princeton University Press)
Narayan R., Sädowski A., Penna R.F. & Kulkarni A.K., 2012, MNRAS, 426, 3241
Nixon C.J., Cossins, P. J., King A.R. & Pringle J.E., 2011, MNRAS, 412, 1591
Nixon C.J., King A.R. & Pringle J.E., 2011, MNRAS, 417, L66
Noble S.C., Mundim B.C., Nakano H., Krolik J.H., Campanelli M., Zlochower Y. & Yunes N., 2012, ApJ, 755, 51
Preto M., Berentzen I., Berczik P. & Spurzem R., 2011, ApJ, 732, 26
Perets H.B. & Alexander T., 2008, ApJ, 677, 146
Peters P.C., 1964, Phys. Rev. B, 136, 1224
Proga D., 2003, ApJ, 585, 406
Rödig C., Dotti M., Sesana A., Cuadra J. & Colpi M., 2011, MNRAS, 415, 3033
Rödig C.& Sesana A., 2014, MNRAS, 439, 3476
Satyapal S., Ellison S.L., McAlpine W., Hickox R.C., Patton D.R., & Mendel J.T., 2014, MNRAS, 441, 1297
Sesana A., Rödig C., Reynolds M.T. & Dotti M., 2012, MNRAS, 420, 860
Shi, J., Krolik J.H., Lubow S.H. & Hawley J.F., 2012, ApJ, 749, 118
Silverman et al., 2011, ApJ, 743, 2
Small T.A., & Blandford R.D., 1992, MNRAS, 259, 725
Soltan A., 1982, MNRAS, 200, 115
Van Wassenhove S., Volonteri M., Mayer L., Dotti M., Bellovary J. M., & Callegari S., 2012, ApJ, 748, L7
Van Wassenhove S., Capelo P. R., Volonteri M., Dotti M., Bellovary J. M., Mayer L. & Governato F., 2014, MNRAS, 439, 474
Vasiliev E., Antonini F., & Merritt D., 2014, ApJ, 785, 163
Vasiliev E., 2014, (arXiv:1411.1762)
[^1]: [email protected]
[^2]: In spherical potentials the collisional refilling of the binary loss cone can lead binaries to coalescence within $10^{10}$ yr only if the total mass of the binary is $M \lsim 10^6 \msun$, see e.g. Section 8.3 in [@merritt13].
---
abstract: 'It was shown by Kuperberg that the partition function of the square-ice model related to the quarter-turn symmetric alternating-sign matrices of even order is the product of two similar factors. We propose a square-ice model whose states are in bijection with the quarter-turn symmetric alternating-sign matrices of odd order, and show that the partition function of this model can also be written in a similar way. This allows us to prove, in particular, the conjectures by Robbins related to the enumeration of the quarter-turn symmetric alternating-sign matrices.'
author:
- |
A. V. Razumov, Yu. G. Stroganov\
    *Institute for High Energy Physics\
    142281 Protvino, Moscow region, Russia*
title: 'Enumeration of quarter-turn symmetric alternating-sign matrices of odd order'
---
Introduction
============
An alternating-sign matrix is a matrix with entries $1$, $0$, and $-1$ such that the $1$ and $-1$ entries alternate in each column and each row, and such that the first and last nonzero entries in each row and column are $1$. Starting from the famous conjectures by Mills, Robbins and Rumsey [@MilRobRum82; @MilRobRum83], a lot of enumeration and equinumeration results on alternating-sign matrices and their various subclasses were obtained. Most of the results were proved using bijections between alternating-sign matrices and states of different variants of the statistical square-ice model. Such a method of solving enumeration problems was first used by Kuperberg [@Kup96]; see also the paper [@Kup02], which is rich in results.
Our previous paper [@RazStr05] is devoted to enumerations of the half-turn symmetric alternating-sign matrices of odd order on the basis of the corresponding square-ice model. In the present paper we again treat matrices of odd order, but this time we consider the quarter-turn symmetric alternating-sign matrices.
In Section 2 we first discuss the square-ice model related to the quarter-turn symmetric alternating-sign matrices of even order proposed by Kuperberg [@Kup02]. Then a square-ice model whose states are in bijection with the quarter-turn symmetric alternating-sign matrices of odd order is introduced. In contrast to the case of even order, the usual recursive relations are not enough to determine the partition function of the model recursively by Lagrange interpolation.
In Section 3 we obtain some important additional recursive relations involving the special spectral parameter that is attached to the middle line of the graph describing the states of the model.
In Section 4 we show that the partition function of the model is the product of two factors closely related to the Pfaffians used by Kuperberg to write an expression for the partition function of the square-ice model corresponding to quarter-turn symmetric alternating-sign matrices of even order [@Kup02].
In Section 5 we consider an important special value of the overall parameter of the model that allows us to prove, in particular, the enumeration conjectures by Robbins [@Rob00] on the quarter-turn symmetric alternating-sign matrices of odd order.
We denote $\bar x = x^{-1}$ and use the following convenient abbreviations $$\begin{gathered}
\sigma(x) = x - \bar x, \\
\alpha(x) = \sigma(a x)\sigma(a \bar x),\end{gathered}$$ proposed by Kuperberg [@Kup02]. Here $a$ is some parameter, which will be introduced below.
Square-ice models related to quarter-turn symmetric alternating-sign matrices
=============================================================================
An alternating-sign $n \times n$ matrix $A$ is said to be quarter-turn symmetric if $$(A)_{j,n+1-i} = (A)_{ij}, \qquad i,j = 1, \ldots, n.$$ It can be shown that quarter-turn symmetric alternating-sign matrices of an even order $n$ exist only when $n$ is a multiple of $4$. A quarter-turn symmetric alternating-sign matrix of order $n = 2m+1$ has $-1$ in the center if $m$ is odd, and it has $1$ in the center if $m$ is even.
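For small orders these claims are easy to confirm by direct enumeration. The following sketch (ours, not part of the original argument; the function names are invented for illustration) builds the $n \times n$ alternating-sign matrices row by row, pruning on the column partial sums, and filters out the quarter-turn symmetric ones:

```python
from itertools import product

def valid_lines(n):
    """Rows with entries in {-1, 0, 1} whose partial sums stay in {0, 1} and total 1."""
    out = []
    for r in product((-1, 0, 1), repeat=n):
        s, ok = 0, True
        for e in r:
            s += e
            if s not in (0, 1):
                ok = False
                break
        if ok and s == 1:
            out.append(r)
    return out

def asms(n):
    """Yield all n x n alternating-sign matrices, pruning on column partial sums."""
    rows = valid_lines(n)

    def dfs(chosen, colsums):
        if len(chosen) == n:
            if all(c == 1 for c in colsums):
                yield [list(r) for r in chosen]
            return
        for r in rows:
            new = tuple(c + e for c, e in zip(colsums, r))
            if all(v in (0, 1) for v in new):
                yield from dfs(chosen + [r], new)

    yield from dfs([], (0,) * n)

def quarter_turn_symmetric(A):
    """(A)_{j, n+1-i} = (A)_{ij}, written with 0-based indices."""
    n = len(A)
    return all(A[j][n - 1 - i] == A[i][j] for i in range(n) for j in range(n))

counts = {n: sum(1 for A in asms(n) if quarter_turn_symmetric(A)) for n in range(1, 6)}
# counts[2] == 0 (no quarter-turn symmetric matrix of order 2), counts[3] == 1
```

For order $3$ the unique quarter-turn symmetric matrix is the one with $-1$ in the center, in agreement with the statement above.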
To enumerate a symmetry class of the alternating-sign matrices Kuperberg proposed to start with a square-ice model whose states are in bijection with the elements of the symmetry class under consideration [@Kup02]. The next step is to find the partition function of the model, defined as the sum of the weights of all possible states. It appears that for many symmetry classes of alternating-sign matrices a determinant or Pfaffian representation of the partition function of the corresponding square-ice model can be found. Using such a representation and specifying in an appropriate way the parameters of the model one finds desired enumerations [@Kup02; @Zei96b; @Kup96; @RazStr04; @RazStr05].
To describe the states of a square-ice model it is convenient to use a graphical pattern. For example, the states of the square-ice model corresponding to the quarter-turn symmetric alternating-sign matrices of an even order are described by the graph given in Figure \[f:1\].
(-1,-1)(16,16) (2,2)(2,0) (4,2)(4,0) (6,2)(6,0) (8,2)(8,0) (0,2)(2,2) (0,4)(2,4) (0,6)(2,6) (0,8)(2,8) (2,2)(9,2) (2,4)(9,4) (2,6)(9,6) (2,8)(9,8) (2,2)(2,9) (4,2)(4,9) (6,2)(6,9) (8,2)(8,9) (9,9)[1]{}[270]{}[180]{} (9,9)[3]{}[270]{}[180]{} (9,9)[5]{}[270]{}[180]{} (9,9)[7]{}[270]{}[180]{} (2,2)(2,4)(2,6)(2,8)(4,2)(4,4)(4,6)(4,8) (6,2)(6,4)(6,6)(6,8)(8,2)(8,4)(8,6)(8,8) (-0.5,8)[$x_4$]{} (-0.5,6)[$x_3$]{} (-0.5,4)[$x_2$]{} (-0.5,2)[$x_1$]{} (2,-0.5)[$x_1$]{} (4,-0.5)[$x_2$]{} (6,-0.5)[$x_3$]{} (8,-0.5)[$x_4$]{} (1;45)(3;45)(5;45)(7;45)
The labels $x_i$ are the spectral parameters which are used to define the partition function of the model. To get a concrete state of the model one chooses an orientation for each of the unoriented edges in such a way that two edges enter and leave every tetravalent vertex, and either two edges enter or two edges leave every bivalent vertex.[^1] Of course, we draw the pattern for a fixed order of matrices, but the generalisation to an arbitrary admissible order is always evident.
The weight of a state is the product of the weights of the vertices. The choice for the weights of tetravalent vertices used in the present paper is as given in Figure \[f:wghts\].
$$\psset{unit=2em}
\begin{array}{cccccc}
\vertexA{\small $x$} & \vertexB{\small $x$} & \vertexC{\small $x$} &
\vertexD{\small $x$} & \vertexE{\small $x$} & \vertexF{\small $x$} \\[.5em]
\mathsmall{\sigma(a^2)} & \mathsmall{\sigma(a^2)} & \mathsmall{\sigma(a \,
x)} & \mathsmall{\sigma(a \, x)} & \mathsmall{\sigma(a \,\bar x)} &
\mathsmall{\sigma(a \, \bar x)}
\end{array}$$
The parameter $a$ is common to all tetravalent vertices. All bivalent vertices have weight $1$. If a vertex is unlabelled and formed by the intersection of two labelled lines, then its label is set to $x \bar y$ if it lies in the quadrant swept by the line with spectral parameter $x$ when that line is rotated anticlockwise towards the line with spectral parameter $y$. One may move a vertex label $x$ one quadrant to an adjacent one, changing it to $\bar x$.
A graph similar to the one given in Figure \[f:1\] also denotes the corresponding function. Here the summation over all possible orientations of the internal edges is implied. It can be easily understood that our conventions make the formalism invariant under rotations and orientation-preserving smooth deformations of graphs. If we reflect a graph over a line and overline the line labels, we obtain a graph which describes the same function as the initial graph. Finally, reversing the orientation of all oriented edges, we again obtain a graph which gives the same function as the initial graph.
If we have unoriented boundary edges, then the graph represents the set of the quantities corresponding to their possible orientations. Usually, graphs with such edges arise when we give a graphical representation of equality of functions. In such a case, if it is needed, we should rotate both sides of an equality simultaneously.
As a useful example one can take the graph corresponding to the well-known Yang–Baxter equation $$\psset{unit=.25in}
\begin{pspicture}[.45](-.5,-.5)(4.5,4.5)
\pscurve(0,1)(1.3,2.0)(2.6,2.6)(4,3)
\pscurve(0,3)(1.3,2.0)(2.6,1.4)(4,1)
\pscurve(2.5,4)(2.8,2.7)(2.8,1.3)(2.5,0)
\rput(.5,2){\small $z$}
\rput(3.3,3.3){\small $y$}
\rput(3.3,.7){\small $x$}
\psdots(1.3,2)(2.8,1.34)(2.8,2.66)
\end{pspicture}
\quad = \quad
\begin{pspicture}[.45](-.5,-.5)(4.5,4.5)
\pscurve(4,1)(2.7,2.0)(1.4,2.6)(0,3)
\pscurve(4,3)(2.7,2.0)(1.4,1.4)(0,1)
\pscurve(1.5,4)(1.2,2.7)(1.2,1.3)(1.5,0)
\rput(3.5,2){\small $z$}
\rput(.7,3.3){\small $x$}
\rput(.7,.7){\small $y$}
\psdots(2.7,2)(1.2,1.34)(1.2,2.66)
\end{pspicture} \label{e:yb}$$ This equation is satisfied if $xyz = a$.
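As a quick numerical sanity check of the integrability behind equation (\[e:yb\]): the weights $\sigma(a x)$, $\sigma(a \bar x)$ and $\sigma(a^2)$ are those of a symmetric six-vertex model, and for such a model the Yang–Baxter structure forces the anisotropy $\Delta = (\alpha^2 + \beta^2 - \gamma^2)/(2 \alpha \beta)$ to be independent of the spectral parameter; a short computation gives $\Delta = -(a^2 + \bar a^2)/2$ here. The check below is our own sketch, with an arbitrarily chosen value of $a$:

```python
import cmath

def sigma(x):
    return x - 1 / x

def delta(a, x):
    """Anisotropy of the six-vertex model with weights sigma(a*x), sigma(a/x), sigma(a*a)."""
    alpha, beta, gamma = sigma(a * x), sigma(a / x), sigma(a * a)
    return (alpha ** 2 + beta ** 2 - gamma ** 2) / (2 * alpha * beta)

a = cmath.exp(0.37j)                 # an arbitrary point on the unit circle
expected = -(a ** 2 + a ** -2) / 2   # -cos(2*eta) for a = e^{i*eta}
samples = [delta(a, x) for x in (0.5, 1.7, 0.3 + 0.4j, cmath.exp(0.9j))]
# all samples agree with `expected` up to rounding
```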
The procedure described above to find enumerations of quarter-turn symmetric alternating-sign matrices of even order was realised by Kuperberg [@Kup02]. In the present paper we treat the case of quarter-turn symmetric alternating-sign matrices of odd order. The graphical pattern for the state space of the corresponding square-ice model depends on the order of the matrices. For the order $2m+1$ with $m$ even we have the pattern given in Figure \[f:3\], and for the order $2m+1$ with $m$ odd we have the pattern given in Figure \[f:4\].
(-1,-1)(10,10) (2,2)(2,0) (4,2)(4,0) (6,6)(6,4) (0,2)(2,2) (0,4)(2,4) (6,2)(6,0) (2,2)(6,2) (2,4)(6,4) (2,2)(2,6) (4,2)(4,6) (6,4)(6,2) (2,2)(2,4)(4,2)(4,4)(6,4)(6,2) (-0.5,4)[$x_2$]{} (-0.5,2)[$x_1$]{} (2,-0.5)[$x_1$]{} (4,-0.5)[$x_2$]{} (6,-0.5)[$x_3$]{} (6,6)[2]{}[270]{}[180]{} (6,6)[4]{}[270]{}[180]{} (2;45)(4;45)
(-1,-1)(14,14) (2,2)(2,0) (4,2)(4,0) (6,2)(6,0) (8,2)(8,0) (8,6)(8,8) (0,2)(2,2) (0,4)(2,4) (0,6)(2,6) (2,2)(8,2) (2,4)(8,4) (2,6)(8,6) (2,2)(2,8) (4,2)(4,8) (6,2)(6,8) (8,6)(8,2) (2,2)(2,4)(2,6)(4,2)(4,4)(4,6) (6,2)(6,4)(6,6)(8,2)(8,4)(8,6) (-0.5,6)[$x_3$]{} (-0.5,4)[$x_2$]{} (-0.5,2)[$x_1$]{} (2,-0.5)[$x_1$]{} (4,-0.5)[$x_2$]{} (6,-0.5)[$x_3$]{} (8,-0.5)[$x_4$]{} (8,8)[2]{}[270]{}[180]{} (8,8)[4]{}[270]{}[180]{} (8,8)[6]{}[270]{}[180]{} (2;45)(4;45)(6;45)
The difference is actually in the orientation of the boundary edges belonging to the ‘middle’ line. It is not difficult to convince oneself that there is a bijection between the states of the square-ice models described by Figures \[f:3\] and \[f:4\] and the corresponding subsets of the alternating-sign matrices.
The partition function of the model depends on $m+1$ spectral parameters $x_1, x_2, \ldots, x_{m+1}$. We denote it $Z_{\mathrm{QT}}(2m + 1; \bm x)$, where $\bm x = (x_1, \ldots, x_{m+1})$ is the $(m+1)$-dimensional vector formed by the spectral parameters. Using Yang–Baxter equation (\[e:yb\]) and an evident equality $$\begin{pspicture}[.4](-1,-1)(4,3)
\psbezier(0,2)(1,2)(2,0)(3,0)
\psbezier(0,0)(1,0)(2,2)(3,2)
\psline(3,0)(4,0)
\psline(3,2)(4,2)
\psdots(1.5,1)(3,0)(3,2)
\rput(-.5,0){\small $x$}
\rput(-.5,2){\small $y$}
\end{pspicture}
\quad = \quad
\begin{pspicture}[.4](-1,-1)(4,3)
\psbezier(1,2)(2,2)(3,0)(4,0)
\psbezier(1,0)(2,0)(3,2)(4,2)
\psline(0,0)(1,0)
\psline(0,2)(1,2)
\psdots(2.5,1)(1,0)(1,2)
\rput(-.5,0){\small $x$}
\rput(-.5,2){\small $y$}
\end{pspicture} \quad$$ one can show that the function $Z_{\mathrm{QT}}(2m + 1; \bm x)$ is symmetric in the variables $x_1, x_2, \ldots, x_m$.
It is not difficult to convince oneself that the following equality $$\begin{pspicture}[.5](-1,-2)(6,7)
\psline(0,2)(6,2)
\psline(0,4)(6,4)
\psline(2,2)(2,4)
\arrowLine(2,2)(2,0)
\arrowLine(2,4)(2,6)
\psdots(2,2)(2,4)(4,2)(4,4)
\rput(-.5,2){\small $x$}
\rput(-.5,4){\small $y$}
\rput(2,-.5){\small $z$}
\end{pspicture}
\quad = \quad
\begin{pspicture}[.5](-1,-2)(6,7)
\psline(0,2)(6,2)
\psline(0,4)(6,4)
\psline(4,2)(4,4)
\arrowLine(4,0)(4,2)
\arrowLine(4,6)(4,4)
\psdots(2,2)(2,4)(4,2)(4,4)
\rput(-.5,2){\small $x$}
\rput(-.5,4){\small $y$}
\rput(4,-.5){\small $z$}
\end{pspicture}
\label{e:2}$$ is valid. Actually there are similar equalities with different orientations of the oriented edges in the left hand side and reversed orientations of the corresponding edges in the right hand side. Reflect now the graph in Figure \[f:4\] over the line which is drawn as a dotted line in Figure \[f:5\],
(-1,-1)(14,14) (1,1)(13,13) (2,2)(2,0) (4,2)(4,0) (6,2)(6,0) (0,2)(2,2) (0,4)(2,4) (0,6)(2,6) (0,8)(2,8) (8,8)(6,8) (2,2)(8,2) (2,4)(8,4) (2,6)(8,6) (2,8)(6,8) (2,2)(2,8) (4,2)(4,8) (6,2)(6,8) (2,2)(2,4)(2,6)(4,2)(4,4)(4,6) (6,2)(6,4)(6,6)(2,8)(4,8)(6,8) (-0.5,6)[$\bar x_3$]{} (-0.5,4)[$\bar x_2$]{} (-0.5,2)[$\bar x_1$]{} (2,-0.5)[$\bar x_1$]{} (4,-0.5)[$\bar x_2$]{} (6,-0.5)[$\bar x_3$]{} (-.5,8)[$\bar x_4$]{} (8,8)[2]{}[270]{}[180]{} (8,8)[4]{}[270]{}[180]{} (8,8)[6]{}[270]{}[180]{} (2;45)(4;45)(6;45)
(-1,-1)(14,14) (2,2)(2,0) (4,2)(4,0) (6,2)(6,0) (8,2)(8,0) (8,6)(8,8) (0,2)(2,2) (0,4)(2,4) (0,6)(2,6) (2,2)(8,2) (2,4)(8,4) (2,6)(8,6) (2,2)(2,8) (4,2)(4,8) (6,2)(6,8) (8,6)(8,2) (2,2)(2,4)(2,6)(4,2)(4,4)(4,6) (6,2)(6,4)(6,6)(8,2)(8,4)(8,6) (-0.5,6)[$\bar x_3$]{} (-0.5,4)[$\bar x_2$]{} (-0.5,2)[$\bar x_1$]{} (2,-0.5)[$\bar x_1$]{} (4,-0.5)[$\bar x_2$]{} (6,-0.5)[$\bar x_3$]{} (8,-0.5)[$\bar x_4$]{} (8,8)[2]{}[270]{}[180]{} (8,8)[4]{}[270]{}[180]{} (8,8)[6]{}[270]{}[180]{} (2;45)(4;45)(6;45)
overline all the labels, and reverse the orientations of the oriented edges. As follows from the remarks made above, the resulting graph which is given in Figure \[f:5\] corresponds to the same function as the graph given in Figure \[f:4\]. Using equality (\[e:2\]), we transform Figure \[f:5\] to Figure \[f:6\] which again corresponds to the same function as the graph given in Figure \[f:4\]. Thus, we proved the equality $$Z_{\mathrm{QT}}(2m + 1; \bm x) = Z_{\mathrm{QT}}(2m + 1; \bar{\bm x}),
\label{e:3}$$ where $\bar{\bm x} = (\bar x_1, \ldots, \bar x_{m+1})$.
Following the usual procedure (see, for example, the proof of Lemma 13 in paper [@Kup02]) we obtain $2m - 2$ recursive relations $$\begin{aligned}
Z_{\mathrm{QT}}(2m + 1; \bm x)|_{x_1 = a x_j} = \sigma^{2}(a) &
\sigma^{2}(a^2) \\
& \times \prod_{\substack{k = 2 \\ k \ne j}}^{m+1} \sigma^2(a^2 \bar
x_k x_j) \sigma^2(a \bar x_j x_k) Z_{\mathrm{QT}}(2m - 3; \bm x
\smallsetminus
x_1 \smallsetminus x_j), \\
Z_{\mathrm{QT}}(2m + 1; \bm x)|_{x_1 = \bar a x_j} = \sigma^{2}(a) &
\sigma^{2}(a^2) \\
& \times \prod_{\substack{k = 2 \\ k \ne j}}^{m+1} \sigma^2(a \bar x_k x_j)
\sigma^2(a^2 \bar x_j x_k) Z_{\mathrm{QT}}(2m - 3; \bm x \smallsetminus
x_1 \smallsetminus x_j),\end{aligned}$$ where $j = 2, \ldots, m$. Since the partition function $Z_{\mathrm{QT}}(2m
+ 1; \bm x)$ is symmetric in the variables $x_1, \ldots, x_m$, we actually have $m^2 - m$ recursive relations $$\begin{gathered}
Z_{\mathrm{QT}}(2m + 1; \bm x)|_{x_i = a x_j} =
\sigma^{2}(a) \sigma^{2}(a^2) \\
\times \prod_{\substack{k=1 \\ k \ne i,j}}^{m+1} \sigma^2(a^2 \bar x_k x_j)
\sigma^2(a \bar x_j x_k) Z_{\mathrm{QT}}(2m - 3; \bm x \smallsetminus
x_i \smallsetminus x_j),
\label{1st}\end{gathered}$$ where $i, j=1, \ldots, m$, and $i \ne j$.
Consider some fixed value of the index $i$ such that $1 \le i \le m$. The partition function $Z_{\mathrm{QT}}(2m + 1; \bm x)$ is a centered Laurent polynomial of width $2m - 2$ in the square of the variable $x_i$. Therefore, if we know $2m-1$ values of $Z_{\mathrm{QT}}(2m + 1; \bm x)$ for $2m-1$ values of $x_i^2$ we know $Z_{\mathrm{QT}}(2m + 1; \bm x)$ completely. Recursive relations (\[1st\]) supply us with the expressions for $Z_{\mathrm{QT}}(2m + 1; \bm x)$ via $Z_{\mathrm{QT}}(2m - 3; \bm x
\smallsetminus x_i \smallsetminus x_j)$ for $2m - 2$ values of $x_i$. This is not enough to determine $Z_{\mathrm{QT}}(2m + 1; \bm x)$ recursively by Lagrange interpolation.[^2] On the other hand, the partition function $Z_{\mathrm{QT}}(2m + 1; \bm x)$ is a centered Laurent polynomial in $x_{m+1}^2$ of width $m-1$ if $m$ is odd, and of width $m$ if $m$ is even. It appears that there are enough recursive relations involving the variable $x_{m+1}$ to determine $Z_{\mathrm{QT}}(2m + 1; \bm x)$ via Lagrange interpolation.
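The interpolation step can be made concrete: a centered Laurent polynomial of width $w$ in $t = x_i^2$ has $w + 1$ unknown coefficients, and after multiplication by $t^{w/2}$ its recovery from $w + 1$ sample values is ordinary Lagrange interpolation. The following self-contained sketch (ours; the function names are invented) recovers such a polynomial exactly over the rationals:

```python
from fractions import Fraction

def lagrange_coeffs(points):
    """Coefficients (low to high degree) of the unique polynomial through the points."""
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (ti, vi) in enumerate(points):
        # expand the Lagrange basis polynomial prod_{j != i} (t - tj) / (ti - tj)
        basis, denom = [Fraction(1)], Fraction(1)
        for j, (tj, _) in enumerate(points):
            if j == i:
                continue
            basis = [Fraction(0)] + basis          # multiply by t ...
            for k in range(len(basis) - 1):
                basis[k] -= tj * basis[k + 1]      # ... then subtract tj times the rest
            denom *= ti - tj
        for k, b in enumerate(basis):
            coeffs[k] += vi * b / denom
    return coeffs

def centered_laurent_from_samples(width, samples):
    """Coefficients of t^{-width/2}, ..., t^{width/2} from width + 1 samples (t, p(t))."""
    half = width // 2
    return lagrange_coeffs([(t, v * t ** half) for t, v in samples])

# example: p(t) = 2/t - 5 + 3 t has width 2, so three sample points suffice
p = lambda t: 2 / t - 5 + 3 * t
coeffs = centered_laurent_from_samples(2, [(Fraction(k), p(Fraction(k))) for k in (1, 2, 3)])
# coeffs == [2, -5, 3], i.e. the coefficients of t^{-1}, t^0, t^1
```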
Recursive relations involving variable $x_{m+1}$
================================================
Multiply the partition function $Z_{\mathrm{QT}}(2m + 1; \bm x)$ by $\sigma(a z_m)$, where $z_m$ is a parameter which is not specified yet. One can easily see that the resulting function can be represented by Figure \[f:7\].
(-3,-3)(14,14) (2,2)(2,0) (4,2)(4,0) (8,6)(8,8) (0,2)(2,2) (0,4)(2,4) (0,6)(2,6) (2,2)(8,2) (2,4)(8,4) (2,6)(8,6) (2,2)(2,8) (4,2)(4,8) (6,2)(6,8) (8,6)(8,2) (2,2)(2,4)(2,6)(4,2)(4,4)(4,6) (6,2)(6,4)(6,6)(8,2)(8,4)(8,6) (-0.5,6)[$x_3$]{} (-0.5,4)[$x_2$]{} (-0.5,2)[$x_1$]{} (2,-0.5)[$x_1$]{} (4,-0.5)[$x_2$]{} (8,-2.5)[$x_3$]{} (6,-2.5)[$x_4$]{} (7,-.5)[$z_3$]{} (8,8)[2]{}[270]{}[180]{} (8,8)[4]{}[270]{}[180]{} (8,8)[6]{}[270]{}[180]{} (8,2)(8,1)(6,0)(6,-1) (6,2)(6,1)(8,0)(8,-1) (8,-1)(8,-2) (8,-1)(8,-1.3) (6,-1)(6,-2) (6,-1)(6,-1.3) (7,.5) (2;45)(4;45)(6;45)
(-1,-1)(16,16) (2,2)(2,0) (4,2)(4,0) (6,2)(6,0) (8,2)(8,0) (8,8)(10,8) (0,2)(2,2) (0,4)(2,4) (0,6)(2,6) (2,2)(9,2) (2,4)(9,4) (2,6)(10,6) (2,2)(2,9) (4,2)(4,9) (6,2)(6,7) (8,8)(8,2) (7,8)(8,8) (2,2)(2,4)(2,6)(4,2)(4,4)(4,6) (6,2)(6,4)(6,6)(8,2)(8,4)(8,6)(8,8) (-0.5,6)[$x_3$]{} (-0.5,4)[$x_2$]{} (-0.5,2)[$x_1$]{} (2,-0.5)[$x_1$]{} (4,-0.5)[$x_2$]{} (6,-0.5)[$x_4$]{} (8,-0.5)[$x_3$]{} (7.4,7.4)[$z_3$]{} (7,7)[1]{}[90]{}[180]{} (10,8)[2]{}[270]{}[180]{} (9,9)[5]{}[270]{}[180]{} (9,9)[7]{}[270]{}[180]{} (5;45)(7;45) (2;45)
Put $z_m = a x_{m+1} \bar x_m$, and transform the graph in Figure \[f:7\] to the graph in Figure \[f:8\] using the Yang–Baxter equation (\[e:yb\]). Repeating this procedure, we see that Figure \[f:9\], where $z_i = a x_{m+1} \bar x_i$, corresponds to the partition function $Z_{\mathrm{QT}}(2m + 1; \bm x)$ multiplied by the product $\prod_{i=1}^m \sigma(a^2 x_{m+1} \bar x_i)$.
(-1,-1)(16,14) (2,2)(2,0) (4,2)(4,0) (6,2)(6,0) (8,2)(8,0) (8,8)(10,8) (0,2)(2,2) (0,4)(2,4) (0,6)(2,6) (2,2)(10,2) (2,4)(10,4) (2,6)(10,6) (2,2)(2,7) (4,2)(4,8) (6,2)(6,8) (8,2)(8,8) (3,8)(8,8) (2,2)(2,4)(2,6)(4,2)(4,4)(4,6) (6,2)(6,4)(6,6)(8,2)(8,4)(8,6) (8,8)(6,8)(4,8) (-0.5,6)[$x_3$]{} (-0.5,4)[$x_2$]{} (-0.5,2)[$x_1$]{} (2,-0.5)[$x_4$]{} (4,-0.5)[$x_1$]{} (6,-0.5)[$x_2$]{} (8,-0.5)[$x_3$]{} (7.4,7.4)[$z_3$]{} (5.4,7.4)[$z_2$]{} (3.4,7.4)[$z_1$]{} (3,7)[1]{}[90]{}[180]{} (10,8)[2]{}[270]{}[180]{} (10,8)[4]{}[270]{}[180]{} (10,8)[6]{}[270]{}[180]{} (2;45)(4;45)(6;45)
(-1,-1)(16,14) (2,2)(2,0) (4,2)(4,0) (6,2)(6,0) (8,2)(8,0) (8,8)(10,8) (0,2)(2,2) (0,4)(2,4) (0,6)(2,6) (10,2)(8,2) (8,2)(6,2) (6,2)(4,2) (4,2)(2,2) (2,4)(4,4) (4,4)(6,4) (2,6)(4,6) (4,6)(6,6) (4,8)(6,8) (2,2)(2,4) (2,4)(2,6) (4,8)(4,6) (4,6)(4,4) (4,4)(4,2) (6,4)(6,2) (8,4)(8,2) (6,4)(10,4) (6,6)(10,6) (2,6)(2,7) (6,4)(6,8) (8,4)(8,8) (3,8)(4,8) (6,8)(8,8) (2,2)(2,4)(2,6)(4,2)(4,4)(4,6) (6,2)(6,4)(6,6)(8,2)(8,4)(8,6) (8,8)(6,8)(4,8) (-0.5,6)[$x_3$]{} (-0.5,4)[$x_2$]{} (-0.5,2)[$x_1$]{} (2,-0.5)[$x_4$]{} (4,-0.5)[$x_1$]{} (6,-0.5)[$x_2$]{} (8,-0.5)[$x_3$]{} (7.4,7.4)[$z_3$]{} (5.4,7.4)[$z_2$]{} (3.4,7.4)[$z_1$]{} (3,7)[1]{}[90]{}[180]{} (3,7)[1]{}[135]{}[180]{} (10,8)[2]{}[270]{}[180]{} (10,8)[4]{}[270]{}[180]{} (10,8)[6]{}[270]{}[175]{} (10,8)[6]{}[170]{}[180]{} (2;45)(4;45)(6;45)
Now put $x_{m+1} = \bar a x_1$. One can easily see that some vertices then become fixed (see Figure \[f:10\]). If we remove these vertices, we come to the function described by Figure \[f:11\].
(1,1)(12,12) (6,8)(8,8) (4,4)(4,2) (6,4)(6,2) (2,6)(4,6) (2,8)(4,8) (2,4)(4,4) (4,4)(8,4) (2,6)(8,6) (4,2)(4,8) (6,2)(6,8) (4,8)(6,8) (4,8)(6,8)(4,4)(4,6)(6,4)(6,6) (1.5,6)[$x_3$]{} (1.5,4)[$x_2$]{} (1.5,8)[$x_1$]{} (4,1.5)[$x_2$]{} (6,1.5)[$x_3$]{} (8,8)[2]{}[270]{}[180]{} (8,8)[4]{}[270]{}[180]{} (2;45)(4;45)
(-1,-1)(10,10) (2,2)(2,0) (4,2)(4,0) (6,6)(6,4) (0,2)(2,2) (0,4)(2,4) (2,2)(6,2) (2,4)(6,4) (2,2)(2,6) (4,2)(4,6) (2,2)(2,4)(4,2)(4,4)(6,4)(6,2) (6,4)(6,2) (6,2)(6,0) (-0.5,4)[$x_3$]{} (-0.5,2)[$x_2$]{} (2,-0.5)[$x_2$]{} (4,-0.5)[$x_3$]{} (6,-0.5)[$x_1$]{} (6,6)[2]{}[270]{}[180]{} (6,6)[4]{}[270]{}[180]{} (2;45)(4;45)
Removal of a fixed vertex from a graph describing a function is equivalent to dividing the function by the weight of the vertex. Taking into account all the multiplications and divisions made, we obtain an important recursive relation $$\begin{gathered}
Z_{\mathrm{QT}}(2m + 1; \bm x)|_{x_{m+1} = \bar a x_1} = \sigma(a)
\sigma(a^2) \\
\times \prod_{k=2}^m \sigma(a^2 \bar x_1 x_k) \sigma(a \bar x_k x_1)
Z_{\mathrm{QT}}(2m - 1; (x_2, \ldots, x_m, x_1)).\end{gathered}$$ Using the symmetry of the partition function $Z_{\mathrm{QT}}(2m + 1;
\bm x)$ in the variables $x_1, \ldots, x_m$ we obtain $m$ recursive relations $$\begin{gathered}
Z_{\mathrm{QT}}(2m + 1; \bm x)|_{x_{m+1} = \bar a x_j} = \sigma(a)
\sigma(a^2) \\
\times \prod_{\substack{k = 1 \\ k \ne j}}^m \sigma(a^2 \bar x_j x_k)
\sigma(a \bar x_k x_j) Z_{\mathrm{QT}}(2m - 1; (x_1, \ldots, \hat x_j,
\ldots, x_m, x_j)), \label{2nd}\end{gathered}$$ where the hat means omission of the corresponding argument. Taking into account the inversion symmetry (\[e:3\]), we obtain $m$ additional recursive relations $$\begin{gathered}
Z_{\mathrm{QT}}(2m + 1; \bm x)|_{x_{m+1} = a x_j} = \sigma(a)
\sigma(a^2) \\
\times \prod_{\substack{k = 1 \\ k \ne j}}^m \sigma(a^2 \bar x_k x_j)
\sigma(a \bar x_j x_k) Z_{\mathrm{QT}}(2m - 1; (x_1, \ldots, \hat x_j,
\ldots, x_m, x_j)). \label{3rd}\end{gathered}$$ Hence we have $2m$ specializations in the square of the variable $x_{m+1}$. This is more than enough to reconstruct the partition function by recursion. Of course, we also have to use the initial value $$Z_{\mathrm{QT}}(3; \bm x) = \sigma(a) \sigma(a^2). \label{ini}$$
Kuperberg’s Pfaffians and the partition function
============================================
Following Kuperberg, for any positive integer $r$ we introduce an antisymmetric $2l \times 2l$ matrix $M^{(r)}(l; \bm x)$ with the matrix elements $$M_{ij}^{(r)}(l; \bm x) = \frac{\sigma(\bar x_i^r x_j^r)}{\alpha(\bar x_i
x_j)}, \qquad i, j = 1, \ldots, 2l.$$ Recall that the Pfaffian of an antisymmetric $2l \times 2l$ matrix $A$ can be defined as $$\Pf A = \frac{1}{2^l l!} \sum_{s \in S_{2l}} \mathrm{sgn}(s) \,
A_{s(1)s(2)} A_{s(3)s(4)} \ldots A_{s(2l-1)s(2l)},$$ where $S_{2l}$ is the symmetric group of degree $2l$.
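The definition is easy to test directly for small $l$: for $l = 1$ one gets $\Pf A = A_{12}$, for $l = 2$ it reduces to $\Pf A = A_{12} A_{34} - A_{13} A_{24} + A_{14} A_{23}$, and in general $(\Pf A)^2 = \det A$. A brute-force sketch of the defining sum (ours, for illustration only):

```python
from itertools import permutations
from math import factorial, prod

def perm_sign(s):
    """Sign of a permutation, via its number of inversions."""
    inv = sum(1 for i in range(len(s)) for j in range(i + 1, len(s)) if s[i] > s[j])
    return -1 if inv % 2 else 1

def pfaffian(A):
    """Pf A = (1 / (2^l l!)) sum over S_{2l} of sgn(s) A[s1][s2] ... A[s_{2l-1}][s_{2l}]."""
    n = len(A)
    l = n // 2
    total = sum(perm_sign(s) * prod(A[s[2 * k]][s[2 * k + 1]] for k in range(l))
                for s in permutations(range(n)))
    return total // (2 ** l * factorial(l))   # the sum is always divisible here

def det(A):
    """Determinant via the permutation sum (fine for small matrices)."""
    n = len(A)
    return sum(perm_sign(s) * prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

# an antisymmetric 4 x 4 matrix with upper entries 1, ..., 6
M = [[ 0,  1,  2,  3],
     [-1,  0,  4,  5],
     [-2, -4,  0,  6],
     [-3, -5, -6,  0]]
# pfaffian(M) == 1*6 - 2*5 + 3*4 == 8 and det(M) == pfaffian(M) ** 2
```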
Again following Kuperberg, we define the following functions of the $2l$ variables $x_1, x_2, \ldots, x_{2l}$: $$Z^{(r)}_{\mathrm{QT}}(l; \bm x) = \prod_{1 \leq i < j \leq 2l}
\frac{\alpha (\bar x_i x_j)} {\sigma (\bar x_i x_j)} \, \Pf M^{(r)}(l;
\bm x).$$ The functions $Z^{(r)}_{\mathrm{QT}}(l; \bm x)$ are symmetric in the variables $x_1, \ldots, x_{2l}$, and one can verify the validity of the following recursive relations $$\left. Z^{(r)}_{\mathrm{QT}}(l; \bm x) \right|_{x_i = a x_j} =
\frac{\sigma(a^r)}{\sigma(a)} \prod_{\substack{k = 1 \\ k \ne i, j}}^{2l}
\left[ \sigma(a^2 \bar x_k x_j) \sigma(a \bar x_j x_k) \right]
Z^{(r)}_{\mathrm{QT}}(l - 1; \bm x \smallsetminus x_i \smallsetminus x_j),
\label{e:7}$$ where $i,j = 1, \ldots, 2l$ and $i \ne j$. On the basis of these recursive relations Kuperberg proved [@Kup02] that the partition function of the square-ice model corresponding to the quarter-turn symmetric alternating-sign matrices of even order can be represented as $$Z_{\mathrm{QT}}(4l; \bm x) = [\sigma^{3l}(a) \sigma^l(a^2)]
Z^{(1)}_{\mathrm{QT}}(l; \bm x) Z^{(2)}_{\mathrm{QT}}(l; \bm x).$$ It appears that the partition function for the case of the quarter-turn symmetric alternating-sign matrices of odd order can also be written in a similar way.
The function $Z^{(1)}_{\mathrm{QT}}(l; \bm x)$ is a centered Laurent polynomial of width $2l-2$ in the square of each of the variables $x_1, \ldots, x_{2l}$, and for the function $Z^{(2)}_{\mathrm{QT}}(l; \bm
x)$ we have $$Z^{(2)}_{\mathrm{QT}}(l, \bm x) = \sum_{i=1}^{2l} c_i(l; \bm x
\smallsetminus x_{2l}) \, x_{2l}^{2i - 2l -1}.$$ Introduce the function $$\widetilde Z^{(2)}_{\mathrm{QT}}(l, \bm x) = \left[ \prod_{k=1}^{2l-1} x_k
\right] c_{2l}(l; \bm x),$$ which is a centered Laurent polynomial of width $2l-2$ in the square of each of the variables $x_1, \ldots, x_{2l-1}$. It follows from (\[e:7\]) that $$\left. \widetilde Z^{(2)}_{\mathrm{QT}}(l; \bm x) \right|_{x_i = a x_j} =
- \frac{\sigma(a^2)}{\sigma(a)} \prod_{\substack{k = 1 \\ k \ne i,
j}}^{2l-1} \left[ \sigma(a^2 \bar x_k x_j) \sigma(a \bar x_j x_k) \right]
\widetilde Z^{(2)}_{\mathrm{QT}}(l - 1; \bm x \smallsetminus x_i
\smallsetminus x_j),
\label{2ndz}$$ where $i, j = 1, \ldots, 2l-1$ and $i \ne j$. Now we can write the following expressions for the partition function of the square-ice model corresponding to the quarter-turn symmetric alternating-sign matrices of odd order $$\begin{aligned}
& Z_{\mathrm{QT}}(4l + 1; \bm x) = [(-1)^l \sigma^{3l}(a) \sigma^l(a^2)]
Z^{(1)}_{\mathrm{QT}}(l, \bm x \smallsetminus x_{2l+1}) \widetilde
Z^{(2)}_{\mathrm{QT}}(l + 1; \bm x), \label{res1} \\[.5em]
& Z_{\mathrm{QT}}(4l - 1; \bm x) = [(-1)^{l+1} \sigma^{3l-2}(a)
\sigma^l(a^2)] Z^{(1)}_{\mathrm{QT}}(l, \bm x ) \widetilde
Z^{(2)}_{\mathrm{QT}}(l; \bm x \smallsetminus x_{2l}). \label{res2}\end{aligned}$$ Using recursive relations (\[e:7\]) and (\[2ndz\]) and the initial values $$Z^{(1)}_{\mathrm{QT}}(1; \bm x) = 1, \qquad \widetilde
Z^{(2)}_{\mathrm{QT}}(1; \bm x) = 1,$$ it is not difficult to check that the right-hand sides of (\[res1\]) and (\[res2\]) satisfy the initial condition (\[ini\]) and recursive relations (\[2nd\]) and (\[3rd\]).
Special value of the parameter $a$ and enumerations
===================================================
It turns out that in the special case $a = \exp (\rmi \pi/3)$ one can relate the functions $Z^{(1)}_{\mathrm{QT}}(l; \bm x)$ and $\widetilde
Z^{(2)}_{\mathrm{QT}}(l; \bm x)$ to the partition functions of the square-ice models corresponding to all alternating-sign matrices and to the half-turn symmetric alternating-sign matrices of odd order.
If $a = \exp (\rmi\pi/3)$, then $$\sigma(a^2 x) = -\sigma(\bar a x) = \sigma(a \bar x),
\label{iden}$$ and recursive relations (\[e:7\]) for $r=1$ become $$\left. Z^{(1)}_{\mathrm{QT}}(l; \bm x) \right|_{x_i = a x_j} =
\prod_{\substack{k = 1 \\ k \ne i, j}}^{2l}
\sigma^2(a \bar x_j x_k) Z^{(1)}_{\mathrm{QT}}(l - 1; \bm x \smallsetminus
x_i \smallsetminus x_j).
\label{1stzspec}$$
Recall that the partition function $Z(l; \bm x, \bm y)$ of the square-ice model corresponding to all alternating-sign matrices depends on $2l$ parameters $x_1, \ldots, x_l$ and $y_1, \ldots, y_l$ (see, for example, [@Kup02]). It is a centered Laurent polynomial of width $l-1$ in the square of each of the variables $x_1, \ldots, x_l$ and $y_1, \ldots, y_l$, satisfying the recursive relations $$\label{4th}
\left. Z(l; \bm x, \bm y) \right|_{y_i = a x_j} = \sigma(a^2)
\prod_{\substack{k = 1 \\ k \ne j}}^l \sigma (a \bar x_j y_k)
\prod_{\substack{k = 1 \\ k \ne i}}^l \sigma (a^2 \bar x_k x_j)
Z(l - 1; \bm x \smallsetminus x_j, \bm y \smallsetminus y_i),$$ where $i, j = 1, \ldots, l$ and $i \ne j$, with the initial value $Z(1; \bm
x, \bm y) = \sigma (a^2)$. It was shown in paper [@my1] that in the case $a = \exp (\rmi\pi/3)$ this function is symmetric in the union of the variables $x_1, \ldots, x_l$ and $y_1, \ldots, y_l$ (see also [@Ok3] and references therein). Introducing the symmetric notations $$\label{note}
x_i = x_i, \quad x_{i+l} = y_i, \quad i = 1, 2, \ldots, l,$$ and taking into account identity (\[iden\]) we write the recursive relations for $Z(l; \bm x)$ as $$\label{4thspec}
\left. Z(l; \bm x) \right|_{x_i = a x_j} = \sigma(a^2)
\prod_{\substack{k = 1 \\ k \ne i,j}}^{2l} \sigma (a \bar x_j x_k)
Z(l - 1; \bm x \smallsetminus x_i \smallsetminus x_j).$$ Comparing relations (\[1stzspec\]) with relations (\[4thspec\]) and taking into account the initial values for $Z^{(1)}_{\mathrm{QT}}(l; \bm
x)$ and $Z^2(l; \bm x)$, we find that $$\label{result1}
Z^{(1)}_{\mathrm{QT}}(l; \bm x) = \sigma ^{-2l}(a^2) Z^2(l; \bm x).$$ This equality was also obtained by Okada [@Ok3].
The function $\widetilde Z^{(2)}_{\mathrm{QT}}(l; \bm x)$ in the case $a=\exp
(\rmi\pi/3)$ satisfies the recursive relations $$\left. \widetilde Z^{(2)}_{\mathrm{QT}}(l; \bm x) \right|_{x_i = a x_j} =
- \prod_{\substack{k = 1 \\ k \ne i, j}}^{2l-1} \sigma^2(a \bar x_j x_k)
\widetilde Z^{(2)}_{\mathrm{QT}}(l - 1; \bm x \smallsetminus x_i
\smallsetminus x_j),
\label{2ndzspec}$$ which are rather different from (\[1stzspec\]).
In our recent paper [@RazStr05] we considered the partition function for the square-ice model corresponding to the half-turn symmetric alternating-sign matrices of odd order (see Figure \[f:13\]).
(-1,-1)(10,12) (2,2)(2,0) (4,2)(4,0) (2,10)(2,12) (4,10)(4,12) (0,2)(2,2) (0,4)(2,4) (0,8)(2,8) (0,10)(2,10) (2,2)(6,2) (2,4)(6,4) (2,8)(6,8) (2,10)(6,10) (2,2)(2,10) (4,2)(4,10) (6,6)[2]{}[270]{}[90]{} (6,6)[4]{}[270]{}[90]{} (2,2)(2,4)(2,8)(2,10)(4,2)(4,4) (4,8)(4,10)(2,6)(4,6)(6,4)(6,2) (0,6)(2,6) (2,6)(4,6) (4,4)[2]{}[0]{}[90]{} (6,4)(6,2) (6,2)(6,0) (-0.5,10)[$x_1$]{} (-0.5,8)[$x_2$]{} (-0.5,6)[$x_3$]{} (-0.5,4)[$x_2$]{} (-0.5,2)[$x_1$]{} (2,-0.5)[$y_1$]{} (4,-0.5)[$y_2$]{} (6,-0.5)[$y_3$]{}
The corresponding partition function $Z_{\mathrm{HT}}(2l-1; \bm x, \bm y)$ depends here on $2l$ spectral parameters $x_1, \ldots, x_l$ and $y_1,
\ldots, y_l$. We proved that in the case $a = \exp(\rmi\pi/3)$ and $y_l
= x_l$ the partition function is a symmetric function in all $2l-1$ variables. Again introducing the symmetric notations $$x_i = x_i, \quad i = 1, \ldots, l, \qquad x_{i+l} = y_i, \quad i = 1,
\ldots, l-1,$$ and using for the function under consideration the same notation $Z_{\mathrm{HT}}(2l - 1; \bm x)$, one sees that this function satisfies the recursive relations $$\label{5thspec}
\left. Z_{\mathrm{HT}}(2l - 1; \bm x) \right|_{x_i = a x_j}= \sigma^2(a^2)
\prod_{\substack{k=1 \\ k \ne i, j}}^{2l-1} \sigma^2(a \bar x_j x_k)
Z_{\mathrm{HT}}(2l - 3; \bm x \smallsetminus x_i \smallsetminus x_j),$$ where $i, j = 1, \ldots, 2l-1$ and $i \ne j$. Comparing relations (\[2ndzspec\]) with relations (\[5thspec\]) and taking into account the initial values for $\widetilde Z^{(2)}_{\mathrm{QT}}(l; \bm
x)$ and $Z_{\mathrm{HT}}(2l - 1; \bm x)$, we see that $$\widetilde Z^{(2)}_{\mathrm{QT}}(l; \bm x) = (-1)^{l+1} \sigma^{2-2l}(a^2)
Z_{\mathrm{HT}}(2l - 1; \bm x).
\label{result2}$$ Relations (\[result1\]) and (\[result2\]) allow us to write equalities (\[res1\]) and (\[res2\]) as $$\begin{aligned}
\label{Zoddresult}
&Z_{\mathrm{QT}}(4l + 1; \bm x)= Z^2(l; \bm x \smallsetminus
x_{2l+1}) Z_{\mathrm{HT}}(2l + 1; \bm x), \\
&Z_{\mathrm{QT}}(4l - 1; \bm x)= Z^2(l; \bm x) Z_{\mathrm{HT}}(2l - 1; \bm
x \smallsetminus x_{2l}).\end{aligned}$$ It is natural to recall here the similar equality $$Z_{\mathrm{QT}}(4l; \bm x) = Z^2(l; \bm x) Z_{\mathrm{HT}}(2l; \bm x),$$ obtained by Okada [@Ok3].
Considering the last equalities at $\bm x = (1, \ldots, 1)$, one comes to the relations $$A_{\mathrm{QT}}(4l + 1) = A^2(l) A_{\mathrm{HT}}(2l + 1), \qquad
A_{\mathrm{QT}}(4l - 1) = A^2(l) A_{\mathrm{HT}}(2l - 1),$$ where $A$, written instead of $Z$, denotes the number of alternating-sign matrices of the corresponding kind. Combining these relations with the result obtained by Kuperberg for the matrices of even order, we have $$A_{\mathrm{QT}}(4l + \epsilon)= A^2(l) A_{\mathrm{HT}}(2l + \epsilon),
\qquad \epsilon = -1, 0, 1.$$ Thus, the Robbins conjecture [@Rob00] on the enumeration of the quarter-turn symmetric alternating-sign matrices is proved.
[*Acknowledgments*]{} The work was supported in part by the Russian Foundation for Basic Research under grant \# 04–01–00352.
W. H. Mills, D. P. Robbins, and H. Rumsey, [*Proof of the Macdonald conjecture*]{}, Invent. Math. [**66**]{} (1982) 73–87.
W. H. Mills, D. P. Robbins, and H. Rumsey, [*Alternating-sign matrices and descending plane partitions*]{}, J. Combin. Theory Ser. A, [**34**]{} (1983) 340–359.
G. Kuperberg, [*Another proof of the alternating-sign matrix conjecture*]{}, Int. Math. Res. Notes [**3**]{} (1996) 139–150;\
[arXiv:math.CO/9712207]{}
G. Kuperberg, [*Symmetry classes of alternating-sign matrices under one roof*]{}, Ann. Math. [**156**]{} (2002) 835–866;\
[arXiv:math.CO/0008184]{}.
A. V. Razumov, Yu. G. Stroganov, [*Enumerations of half-turn symmetric alternating-sign matrices of odd order*]{},\
[arXiv:math-ph/0504022]{}.
D. P. Robbins, [*Symmetry Classes of Alternating Sign Matrices*]{},\
[arXiv:math.CO/0008045]{}.
D. Zeilberger, [*Proof of the refined alternating sign matrix conjecture*]{}, New York J. Math., [**2**]{} (1996) 59–68;\
[arXiv:math.CO/9606224]{}.
A. V. Razumov, Yu. G. Stroganov, [*Refined enumerations of some symmetry classes of alternating-sign matrices*]{}, Theor. Math. Phys. [**141**]{} (2004) 1609–1630;\
[math-ph/0312071]{}.
Yu. G. Stroganov, [*Izergin–Korepin determinant at a third root of unity*]{}, Theor. Math. Phys. [**146**]{} (2006) 53–62;\
[arXiv:math-ph/0204042]{}.
S. Okada, [*Enumeration of symmetry classes of alternating sign matrices and characters of classical groups*]{}, J. Algebr. Comb. [**23**]{} (2006) 43–69;\
[arXiv:math.CO/0408234]{}.
[^1]: Kuperberg uses a dashed line crossing an edge to indicate that its orientation reverses as it crosses the line [@Kup02]. It is convenient for our purposes to treat reversal of the orientation as a special type of a vertex.
[^2]: Note that for $Z_{\mathrm{QT}}(4k, \bm x)$ the recursive relations similar to (\[1st\]) give enough data for Lagrange interpolation [@Kup02].
---
abstract: |
The paper presents a systematic theory for asymptotic inference of autocovariances of stationary processes. We consider nonparametric tests for serial correlations based on the maximum (or ${\cal
L}^\infty$) and the quadratic (or ${\cal L}^2$) deviations. For these two cases, with proper centering and rescaling, the asymptotic distributions of the deviations are Gumbel and Gaussian, respectively. To establish such an asymptotic theory, as byproducts, we develop a normal comparison principle and propose a sufficient condition for summability of joint cumulants of stationary processes. We adopt a simulation-based block of blocks bootstrapping procedure that improves the finite-sample performance.
address: |
Department of Statistics\
5734 S. University Ave.\
Chicago, IL 60637\
\
author:
-
-
bibliography:
- 'max\_auto\_corr.bib'
date:
-
-
title: Asymptotic Inference of Autocovariances of Stationary Processes
---
Introduction {#sec:intro}
============
If $(X_i)_{i \in {{\mathbb{Z}}}}$ is a real-valued stationary process, then from a second-order inference point of view it is characterized by its mean $\mu = {{\mathbb{E}}}X_i$ and the autocovariance function $\gamma_k
= {{\mathbb{E}}}[(X_0-\mu) (X_k-\mu)]$, $k \in {{\mathbb{Z}}}$. Assume $\mu = 0$. Given observations $X_1, \ldots, X_n$, the natural estimates of $\gamma_k$ and the autocorrelation $r_k = \gamma_k / \gamma_0$ are $$\label{eq:autocov}
\hat{\gamma}_k=(1/n)\sum_{i=|k|+1}^nX_{i-|k|}X_{i}
\quad \hbox{and} \quad \hat r_k=\hat \gamma_k / \hat \gamma_0,
\,\, 1-n \le k \le n-1,$$ respectively. The estimator $\hat\gamma_k$ plays a crucial role in almost every aspect of time series analysis. It is well-known that for linear processes with [*independent and identically distributed*]{} (iid) innovations, under suitable conditions, $\sqrt{n}(\hat{\gamma}_k
- \gamma_k) \Rightarrow \mathcal{N}(0,\tau_k^2)$, where $\Rightarrow$ stands for convergence in distribution and $\mathcal{N}(0,\tau_k^2)$ denotes the normal distribution with mean zero and variance $\tau_k^2$. Here $\tau_k^2$ can be calculated by Bartlett’s formula (see Section 7.2 of [@brockwell:1991]). Other contributions on linear processes include [@hannan:1972], [@hosoya:1982], [@anderson:1991] and [@phillips:1992], among others. [@romano:1996] and [@wu:2009] considered the asymptotic normality of $\hat
\gamma_k$ for nonlinear processes. As a primary goal of the paper, we shall study asymptotic properties of the quadratic (or ${\cal L}^2$) and the maximum (or ${\cal L}^\infty$) deviations of $\hat
{\gamma}_k$.
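For concreteness, the estimates in (\[eq:autocov\]) can be computed directly; the following is a minimal NumPy sketch (the function names are ours, and the mean is assumed to be zero, as in the text):

```python
import numpy as np

def sample_autocov(x, k):
    # hat gamma_k = (1/n) * sum_{i=|k|+1}^{n} X_{i-|k|} X_i, mean assumed zero
    n = len(x)
    k = abs(k)
    return np.dot(x[:n - k], x[k:]) / n

def sample_autocorr(x, k):
    # hat r_k = hat gamma_k / hat gamma_0
    return sample_autocov(x, k) / sample_autocov(x, 0)
```

Note that the divisor is $n$ rather than $n - |k|$, matching (\[eq:autocov\]); this makes $\hat\gamma_k$ biased, with ${{\mathbb{E}}}\hat\gamma_k = (1-|k|/n)\gamma_k$.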
The ${\cal L}^2$ Theory
-----------------------
Testing for serial correlation has been extensively studied in both statistics and econometrics, and it is a standard diagnostic procedure after a model is fitted to a time series. Classical procedures include [@durbin:1950; @durbin:1951], [@box:1970], [@robinson:1991] and their variants. The Box-Pierce portmanteau test uses $Q_K = n \sum_{k=1}^K \hat r_k^2$ as the test statistic, and rejects if it lies in the upper tail of $\chi^2_K$ distribution. An arguable deficiency of this test and many of its modified versions (for a review see for example [@escanciano:2009]) is that the number of lags $K$ included in the test is held as a constant in the asymptotic theory. As commented by [@robinson:1991]:
> “[*...unless the statistics take account of sample autocorrelations at long lags there is always the possibility that relevant information is being neglected...*]{}”
The problem is particularly relevant if practitioners have no prior information about the alternatives. The attempt of incorporating more lags emerged naturally in the spectral domain analysis; see among others [@durlauf:1991], [@hong:1996] and [@deo:2000]. The normalized spectral density $f(\omega) =
(2\pi)^{-1} \sum_{k \in {{\mathbb{Z}}}} r_k \cos(k\omega)$ should equal $(2\pi)^{-1}$ when the serial correlation is not present. Let $\hat f(\omega) = \sum_{k=1-n}^{n-1}h(k/s_n)\hat r_k\cos(k\omega)$ be the lag-window estimate of the normalized spectral density, where $h(\cdot)$ is a kernel function and $s_n$ is the bandwidth satisfying the natural condition $s_n \to \infty$ and $s_n /n \to
0$. The former aims to include correlations at large lags. A test for the serial correlation can be obtained by comparing $\hat f$ and the constant function $f(\omega) \equiv (2\pi)^{-1}$ using a suitable metric. In particular, using the quadratic metric and rectangle kernel, the resulting test statistic is the Box-Pierce statistic with unbounded lags. [@hong:1996] established the following result: $$\label{eq:hong}
\frac{1}{\sqrt{2s_n}}\left(n\sum_{k=1}^{s_n}
(\hat r_k-r_k)^2 -s_n\right) \Rightarrow \mathcal{N}(0,1),$$ under the condition that $X_i$ are iid, which implies that all $r_k$ in the preceding equation are zero. [@lee:2001] and [@duchesne:2010] studied similar tests in the spectral domain, but using a wavelet basis instead of trigonometric polynomials in estimating the spectral density and hence working on wavelet coefficients. [@fan:1996] considered a similar problem in a different context and proposed the [*adaptive Neyman*]{} test and thresholding tests, using $\max_{1\leq k \leq s_n} (Q_k-k) /
\sqrt{2k}$ and $n \sum_{k=1}^{s_n}\hat r_k^2I(|\hat r_k|>\delta)$ as test statistics respectively, where $\delta$ is a threshold value. [@escanciano:2009] proposed to use $Q_{s_n}$ with $s_n$ being selected by AIC or BIC.
It has been an important and difficult question whether the iid assumption in [@hong:1996] can be relaxed. Similar problems have been studied by [@durlauf:1991], [@deo:2000] and [@hong:2003] for the case that $X_i$ are martingale differences. Recently [@shao:2011] showed that (\[eq:hong\]) is true when $(X_i)$ is a general white noise sequence, under the geometric moment contraction (GMC) condition. Since the GMC condition, which implies that the autocovariances decay geometrically, is quite strong, the question arises as to whether it can be replaced by a weaker one. Furthermore, one may naturally ask: what if the serial correlation is present in (\[eq:hong\])? To the best of our knowledge, there have been no results in the literature for this problem. This paper shall address these questions and substantially generalize earlier results. We shall prove that (\[eq:hong\]) remains true even if all or some of $r_k$ are not zero, but the variance of the limiting distribution will be different, depending on the values of $r_k$. Furthermore, we derive the limiting distribution of $\sum_{k=1}^{s_n} \hat r_k^2$ when the serial correlation is present. The latter result enables us to calculate the asymptotic power of the Box-Pierce test with unbounded lags.
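The statistic in (\[eq:hong\]) is straightforward to compute under the null $r_k \equiv 0$; the sketch below (our own NumPy code, with the mean taken as zero) makes the centering and scaling explicit:

```python
import numpy as np

def hong_statistic(x, s_n):
    # (2 s_n)^{-1/2} ( n * sum_{k=1}^{s_n} hat r_k^2 - s_n ):
    # the Box-Pierce statistic with unbounded lags, centered and scaled
    # as in (eq:hong); approximately N(0, 1) under the null r_k = 0.
    n = len(x)
    g0 = np.dot(x, x) / n                      # hat gamma_0
    q = sum((np.dot(x[:n - k], x[k:]) / n / g0) ** 2
            for k in range(1, s_n + 1))        # sum of hat r_k^2
    return (n * q - s_n) / np.sqrt(2 * s_n)
```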
The ${\cal L}^\infty$ Theory
----------------------------
Another natural omnibus choice is to use the maximum autocorrelation as the test statistic. [@wu:2009] obtained a stochastic upper bound for $$\label{eq:wu}
\sqrt{n} \max_{1\leq k \leq s_n} |\hat\gamma_k-\gamma_k|,$$ and argued that in certain situations the test based on (\[eq:wu\]) has a higher power over the Box-Pierce tests with unbounded lags in detecting weak serial correlation. It turns out that the uniform convergence of autocovariances is also closely related to the estimation of orders of ARMA processes or linear systems in general. The pioneer works in this direction were given by E. J. Hannan and his collaborators, see for example [@hannan:1974] and [@an:1982]. For a summary of these works we recommend [@hannan:1988 Section §5.3] and references therein. In particular, [@an:1982] showed that if $s_n = O[(\log n)^\alpha]$ for some $\alpha<\infty$, then with probability one $$\begin{aligned}
\label{eq:hannan}
\sqrt{n} \max_{1\leq k \leq s_n} |\hat\gamma_k-\gamma_k|
= O\left(\log\log n\right).\end{aligned}$$
The question of deriving the asymptotic distribution of (\[eq:wu\]) is more challenging. Although [@wu:2009] was not able to obtain the limiting distribution of (\[eq:wu\]), his work provided important insights into this problem. Assuming $k_n
\to \infty$, $k_n/n\to 0$ and $h\geq 0$, he showed that, for $T_k
= \sqrt n (\hat\gamma_{k}- {{\mathbb{E}}}\hat \gamma_{k})$, $$\label{eq:cltunbdd}
\left(T_{k_n}\,,\, T_{k_n+h}\right)^\top
\Rightarrow \mathcal{N}\left[0,
\begin{pmatrix}
\sigma_0 & \sigma_h \\
\sigma_h & \sigma_0
\end{pmatrix}\right], \quad
\hbox{where } \sigma_h = \sum_{k\in{{\mathbb{Z}}}}
\gamma_k \gamma_{k+h},$$ and we use the superscript $\top$ to denote the transpose of a vector or a matrix. The asymptotic distribution in (\[eq:cltunbdd\]) does not depend on the speed of $k_n \to \infty$. It suggests that, at large lags, the covariance structure of $(T_k)$ is asymptotically equivalent to that of the Gaussian sequence $$\label{eq:gp}
(G_k) := \left(\sum_{i\in{{\mathbb{Z}}}}\gamma_i\eta_{i-k}\right)$$ where $\eta_i$’s are iid standard normal random variables. Define the sequences $(a_n)$ and $(b_n)$ as $$\begin{aligned}
\label{eq:gumbel_constants}
a_n=(2\log n)^{-1/2} \,\, \hbox{ and } \,\,
b_n=(2\log n)^{1/2} - (8\log n)^{-1/2}(\log\log n + \log 4\pi).\end{aligned}$$ According to [@berman:1964] (also see Remarks \[rk:berman\] and \[rk:absolute\]), under the condition $\lim_{n \to \infty}
{{\mathbb{E}}}(G_0 G_n) \log n = 0$, $$\begin{aligned}
\lim_{s\to\infty}P\left(\max_{1\leq i\leq s}|G_i| \leq
\sqrt{\sigma_0}(a_{2s}\,x+b_{2s})\right) = \exp\{-\exp(-x)\}.\end{aligned}$$ Therefore, [@wu:2009] conjectured that under suitable conditions, one has the Gumbel convergence $$\begin{aligned}
\label{eq:main1}
\lim_{n\to\infty}
P\left(\max_{1\leq k\leq s_n} |T_k|
\leq \sqrt{\sigma_0}(a_{2s_n}\,x+b_{2s_n})\right)
= \exp\{-\exp(-x)\}.\end{aligned}$$ In a recent work, [@jirak:2011] proved this conjecture for linear processes and for $s_n$ growing with at most logarithmic speed. We shall prove (\[eq:main1\]) in Section \[sec:proof\] for general stationary processes; and our result allows $s_n$ to grow as $s_n =
O(n^\eta)$ for some $0 < \eta < 1$, and $\eta$ can be arbitrarily close to $1$ under appropriate moment and dependence conditions. The latter result substantially relaxes the severe restriction on the growth speed in (\[eq:hannan\]) and [@jirak:2011] and, moreover, the obtained distributional convergence are more useful for statistical inference. For example, other than testing for serial correlation and estimating the order of a linear system, (\[eq:main1\]) can also be used to construct simultaneous confidence intervals of autocovariances.
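To illustrate the normalization (\[eq:gumbel\_constants\]), consider the simplest instance of (\[eq:gp\]) with $\gamma_0 = 1$ and $\gamma_k = 0$ for $k \ne 0$, so that the $G_k$ are iid standard normal and $\sigma_0 = 1$. The following Monte Carlo sketch is our own, with sample sizes chosen only for illustration:

```python
import numpy as np

def gumbel_constants(n):
    # a_n and b_n from (eq:gumbel_constants)
    ln = np.log(n)
    a = (2 * ln) ** (-0.5)
    b = (2 * ln) ** 0.5 - (8 * ln) ** (-0.5) * (np.log(ln) + np.log(4 * np.pi))
    return a, b

rng = np.random.default_rng(0)
s = 2000
a, b = gumbel_constants(2 * s)
# Normalized maxima of |G_1|, ..., |G_s| for iid standard normal G_k;
# these should be approximately Gumbel, whose median is -log(log 2).
norm_max = (np.abs(rng.standard_normal((500, s))).max(axis=1) - b) / a
emp_median = np.median(norm_max)
```

The empirical median is close to the Gumbel median $-\log\log 2 \approx 0.3665$, although, as noted in Section \[sec:simulation\], convergence of this type is slow.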
Relations with the Random Matrix Theory
---------------------------------------
In a companion paper, using the asymptotic theory of sample autocovariances developed in this paper, [@wu:2010] studied convergence properties of estimated covariance matrices which are obtained by banding or thresholding. Their bounds are analogs under the time series context to those of [@bickel:2008a; @bickel:2008b]. There is an important difference between these two settings: we assume that only one realization is available, while [@bickel:2008a; @bickel:2008b] require multiple iid copies of the underlying random vector.
There have been some related works in the random matrix theory literature that are similar to (\[eq:main1\]). Suppose one has $n$ iid copies of a $p$-dimensional random vector, forming a $p \times n$ data matrix ${\boldsymbol{X}}$. Let $\hat r_{ij}$, $1\le i,j \le p$, be the sample correlations. [@jiang:2004] showed that the limiting distribution of $\max_{1\leq i<j\leq p}|\hat r_{ij}|$, after suitable normalization, is Gumbel provided that each column of ${\boldsymbol{X}}$ consists of $p$ iid entries and each entry has finite moment of some order higher than 30, and $p/n$ converges to some constant. His work was followed and improved by [@zhou:2007] and [@liu:2008]. In a recent article, [@cai:2010] extended those results in two ways: (i) the dimension $p$ could grow exponentially with the sample size $n$ under exponential moment conditions; and (ii) they showed that the test statistic $\max_{|i-j|>s_n} |\hat r_{ij}|$ also converges to the Gumbel distribution if each column of ${\boldsymbol{X}}$ is Gaussian and is $s_n$-dependent. The latter generalization is important since it is one of the very few results that allow dependent entries. Their method is Poisson approximation [see for example @arratia:1989], which heavily depends on the fact that for each sample correlation to be considered, the corresponding entries are independent. [@schott:2005] proved that $\sum_{1\leq i<j\leq p}
\hat r_{ij}^2$ converges to normal distribution after suitable normalization, under the conditions that each column of ${\boldsymbol{X}}$ contains iid Gaussian entries and $p/n$ converges to some positive constant. His proof heavily depends on the normality assumption. Techniques developed in those papers are not applicable here since we have [*only one realization*]{} and the dependence structure among the entries can be quite complicated.
A Summary of Results of the Paper
---------------------------------
We present the main results in Section \[sec:main\], which include a central limit theory for (\[eq:hong\]) and the Gumbel convergence (\[eq:main1\]). The proofs are given in Section \[sec:proof\]. In Section \[sec:normalcomparison\] we prove a normal comparison principle, which is of independent interest. Since summability conditions of joint cumulants are commonly used in time series analysis (see for example [@brillinger:2001] and [@rosenblatt:1985]) and are needed in the proof of Theorem \[thm:ljung\], we present a sufficient condition in Section \[sec:cum\]. Some auxiliary lemmas are collected in Section \[sec:aux\]. We also conduct a simulation study in Section \[sec:simulation\], where we design a simulation-based block of blocks bootstrapping procedure that improves the finite-sample performance.
Main Results {#sec:main}
============
To develop an asymptotic theory for time series, it is necessary to impose suitable measures of dependence and structural assumptions for the underlying process $(X_i)$. Here we shall adopt the framework of [@wu:2005]. Assume that $(X_i)$ is a stationary causal process of the form $$\begin{aligned}
\label{eq:wold}
X_i = g(\cdots,\epsilon_{i-1},\epsilon_{i}),\end{aligned}$$ where $\epsilon_i, i \in {{\mathbb{Z}}}$, are iid random variables, and $g$ is a measurable function for which $X_i$ is a properly defined random variable. For notational simplicity we define the operator $\Omega_k$: suppose $X = h(\epsilon_j, \epsilon_{i-1}, \ldots)$ is a random variable which is a function of the innovations $\epsilon_l, l\le j$, then $\Omega_k(X) := h(\epsilon_j, \ldots,
\epsilon_{k+1}, \epsilon_k', \epsilon_{k-1}, \ldots)$, where $(\epsilon_k')_{k \in {{\mathbb{Z}}}}$ is an iid copy of $(\epsilon_k)_{k \in
{{\mathbb{Z}}}}$. Namely $\epsilon_k$ in $X$ is replaced by $\epsilon_k'$.
For a random variable $X$ and $p > 0$, we write $X \in
\mathcal{L}^p$ if $\|X\|_p:=({{\mathbb{E}}}|X|^p )^{1/p}<\infty$, and in particular, use $\|X\|$ for the $\mathcal{L}^2$-norm $\|X\|_2$. Assume $X_i \in \mathcal{L}^p$, $p > 1$. Define the [*physical dependence measure of order $p$*]{} as $$\label{eq:physical}
\delta_p(i)=\|X_i-\Omega_0(X_i)\|_p,$$ which quantifies the dependence of $X_i$ on the innovation $\epsilon_0$. Our main results depend on the decay rate of $\delta_p(i)$ as $i\to\infty$. Let $p'=\min(2,p)$ and define $$\begin{aligned}
\label{eq:stable}
\Theta_{p}(n)&=&\sum_{i=n}^{\infty}\delta_p(i),
\quad
\Psi_{p}(n)=\left(\sum_{i=n}^{\infty}\delta_{p}(i)^{p'}\right)^{1/p'},
\quad\hbox{and}\quad \cr
\Delta_p(n)&=&\sum_{i=0}^{\infty}\min\{{\mathcal{C}}_p\Psi_{p}(n),\delta_p(i)\},\end{aligned}$$ where ${\mathcal{C}}_p$ is defined in (\[eq:burkholder\]). It is easily seen that $\Psi_p(\cdot) \le \Theta_p(\cdot) \le \Delta_p(\cdot)$. We use $\Theta_p$, $\Psi_p$ and $\Delta_p$ as shorthands for $\Theta_p(0)$, $\Psi_p(0)$ and $\Delta_p(0)$ respectively. We make the convention that $\delta_p(k)=0$ for $k<0$.
There are several reasons that we use the framework (\[eq:wold\]) and the dependence measure (\[eq:physical\]). First, the class of processes that (\[eq:wold\]) represents is huge and it includes linear processes, bilinear processes, Volterra processes, and many other time series models. See, for instance, [@tong:1990] and [@wiener:1958]. Second, the physical dependence measure is easy to work with and it is directly related to the underlying data-generating mechanism. Third, it enables us to develop an asymptotic theory for complicated statistics of time series.
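As an example, for the AR(1) model $X_i = b X_{i-1} + \epsilon_i$ with iid standard normal innovations, $X_i - \Omega_0(X_i) = b^i(\epsilon_0 - \epsilon_0')$, so that $\delta_2(i) = |b|^i \sqrt{2}$ and $\Theta_2(n) = \sqrt{2}\,|b|^n/(1-|b|)$. The closed form can be checked by a coupling simulation (our own sketch; innovations before time $0$ are shared by the two coupled chains and cancel in the difference, so both chains may be started from zero):

```python
import numpy as np

rng = np.random.default_rng(1)

def delta2_ar1_mc(b, i, n_mc=50_000):
    # Monte Carlo estimate of delta_2(i) = || X_i - Omega_0(X_i) ||_2 for
    # AR(1): run two coupled chains that share every innovation except
    # epsilon_0, which is swapped for an independent copy epsilon_0'.
    eps = rng.standard_normal((n_mc, i + 1))
    eps_swapped = eps.copy()
    eps_swapped[:, 0] = rng.standard_normal(n_mc)   # epsilon_0 -> epsilon_0'
    x, x_swapped = np.zeros(n_mc), np.zeros(n_mc)
    for t in range(i + 1):
        x = b * x + eps[:, t]
        x_swapped = b * x_swapped + eps_swapped[:, t]
    return np.sqrt(np.mean((x - x_swapped) ** 2))
```

The estimate agrees with the closed form $|b|^i\sqrt{2}$ up to Monte Carlo error.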
Maximum deviations of sample autocovariances
--------------------------------------------
Note that $\hat\gamma_k$ is a biased estimate of $\gamma_k$ with ${{\mathbb{E}}}\hat\gamma_k = (1-|k|/n) \gamma_k$. It is then more convenient to consider the centered version $\max_{1\le k\le s_n} \sqrt{n}
|\hat\gamma_k - {{\mathbb{E}}}\hat\gamma_k|$ instead of $\max_{1\le k\le s_n}
\sqrt{n} |\hat\gamma_k - \gamma_k|$. Recall (\[eq:gumbel\_constants\]) for $a_n$ and $b_n$.
\[thm:single\] Assume ${{\mathbb{E}}}X_i=0$, $X_i \in \mathcal{L}^p$ for some $p>4$, and $\Theta_p(m)=O(m^{-\alpha})$, $\Delta_p(m) =
O(m^{-\alpha'})$ for some $\alpha\geq\alpha'>0$. If $s_n$ satisfies $s_n \to \infty$ and $s_n = O(n^\eta)$ with $$\begin{aligned}
\label{eq:decay_rate}
0<\eta<1, \quad \eta<\alpha p/2, \,\,\, \hbox{and} \,\,\,
\eta \min\{2(p-2-\alpha p),\, (1-2\alpha')p\}<p-4,
\end{aligned}$$ then for all $x \in {{\mathbb{R}}}$, $$\begin{aligned}
\label{eq:main}
\lim_{n\to\infty}P\left(\max_{1\leq k\leq
s_n}|\sqrt{n}[\hat{\gamma}_k-(1-k/n)\gamma_k]|
\leq \sqrt{\sigma_0}(a_{2s_n}\,x+b_{2s_n})\right)
= \exp\{-\exp(-x)\}.
\end{aligned}$$
In (\[eq:decay\_rate\]), if $p \le 2 + \alpha p$ or $1 \le 2
\alpha'$, then the second and third conditions are automatically satisfied, and hence Theorem \[thm:single\] allows a very wide range of lags $s_n = O(n^\eta)$ with $0 < \eta < 1$. In this sense Theorem \[thm:single\] is nearly optimal.
For the maximum deviation $\max_{1\leq k<n} | \hat\gamma_k - {{\mathbb{E}}}\hat\gamma_k|$ over the whole range $1\leq k < n$, it does not seem possible to derive a limiting distribution by using our method. However, we can obtain a sharp bound $(n^{-1} \log n)^{1/2}$. The upper bound is given in (\[eq:covorder\]), while the lower bound can be obtained by applying Theorem \[thm:single\] and choosing a sufficiently small $\eta$ such that (\[eq:decay\_rate\]) holds. Using Theorem \[thm:covorder\], [@wu:2010] derived convergence rates for the thresholded autocovariance matrix estimates.
\[thm:covorder\] Assume ${{\mathbb{E}}}X_i=0$, $X_i \in \mathcal{L}^p$ for some $p>4$, and $\Theta_p(m)=O(m^{-\alpha})$, $\Delta_p(m)=O(m^{-\alpha'})$ for some $\alpha\geq\alpha'>0$. If $$\begin{aligned}
\label{eq:decay_rate2}
\alpha>1/2 \quad\hbox{or}\quad \alpha'p>2
\end{aligned}$$ then for $c_p=6(p+4)\,e^{p/4}\,\kappa_4\,\Theta_4$, $$\begin{aligned}
\label{eq:covorder}
\lim_{n\to\infty}
P\left(\max_{1\leq k<n}|\hat\gamma_k-{{\mathbb{E}}}\hat\gamma_k|
\leq c_p\sqrt{\frac{\log n}{n}}\right) =1.
\end{aligned}$$
Since $\Theta_p(m) \ge \Psi_p(m)$, we can assume $\alpha \ge
\alpha'$. For a detailed discussion on their relationship, see Remark 6 of [@wu:2010]. It turns out that for the special case of linear processes the condition (\[eq:decay\_rate\]) can be weakened to the following one: $$\begin{aligned}
\label{eq:decay_rate1}
0 < \eta <1, \quad \eta<\alpha p/2,
\quad \hbox{and} \quad (1-2\alpha)\eta<(p-4)/p.\end{aligned}$$ See Remark \[rk:decay\_rate\]. Furthermore, for linear processes the condition (\[eq:decay\_rate2\]) can be relaxed to $\alpha
p>2$ as well.
In practice, the mean $\mu = {{\mathbb{E}}}X_0$ is often unknown and we can estimate it by the sample mean $\bar X_n = (1/n)\sum_{i=1}^n X_i$. The usual estimates of autocovariances and autocorrelations are $$\begin{aligned}
\label{eq:D26175}
\breve\gamma_k=\frac{1}{n}
\sum_{i=k+1}^n(X_{i-k} - \bar X_n)(X_i - \bar X_n)
\quad \hbox{and} \quad
\breve r_k = \breve \gamma_k / \breve \gamma_0.\end{aligned}$$
\[thm:single\_corr\] Theorem \[thm:single\] and Theorem \[thm:covorder\] still hold if we replace $\hat\gamma_k$ therein by $\breve\gamma_k$. Furthermore, $$\begin{aligned}
\lim_{n\to\infty}
P\left(\max_{1\leq k\leq s_n}
\left|\sqrt{n}[\breve r_k-(1-k/n)r_k]\right|
\leq (\sqrt{\sigma_0}/\gamma_0)(a_{2s_n}\,x+b_{2s_n})\right)
= \exp\{-\exp(-x)\}.
\end{aligned}$$
For the $\breve\gamma_k$ version of Theorem \[thm:single\], it suffices to show that $$\begin{aligned}
\label{eq:38}
\max_{1\leq k\leq s_n}
\left|\sqrt{n}(\breve\gamma_k-\hat\gamma_k)\right|
= o_P\left(\frac{1}{\sqrt{\log s_n}}\right).
\end{aligned}$$ Let $S_k=\sum_{i=1}^k X_i$. By Theorem 1 (iii) of [@wu:2007], we have $\left\|\max_{1\leq k \leq n}\left|S_k\right|\right\| \leq
2\sqrt{n}\Theta_2$. Since $$\begin{aligned}
\sum_{i=k+1}^n (X_{i-k}-\bar X_n)(X_i-\bar X_n) - \sum_{i=k+1}^nX_{i-k}X_i
& = -\bar X_n\sum_{i=1}^{n-k}X_i + \bar X_n\sum_{i=1}^k X_i - k\bar
X_n^2,
\end{aligned}$$ we have (\[eq:38\]). The proof of the $\breve\gamma_k$ version of Theorem \[thm:covorder\] is similar. The assertion on sample autocorrelations can be proved easily, and details are omitted.
Box-Pierce tests
----------------
Box-Pierce tests [@box:1970; @ljung:1978] are commonly used in detecting lack of fit of a particular time series model. After a correct model has been fitted to a set of observations, one would expect the residuals to be close to a sequence of iid random variables, and therefore one should perform some tests for serial correlations as model diagnostics. Suppose $(X_i)_{1\leq i\leq n}$ is an iid sequence, and let $\hat r_k$ be its sample autocorrelations. Then the distribution of $Q_n(K) := n \sum_{k=1}^K\hat r_k^2$ is approximately $\chi^2_K$. Logically, it is not sufficient to consider a fixed number of correlations as the number of observations increases, because there may be some dependencies at large lags. We present a normal theory for the Box-Pierce test statistic, which allows the number of correlations included in $Q_n$ to go to infinity.
\[thm:ljung\] Assume $X_i \in {\mathcal{L}}^8$, ${{\mathbb{E}}}X_i=0$ and $\sum_{k=0}^\infty
k^6\delta_8(k)<\infty$. If $s_n \to \infty$ and $s_n
=O(n^\beta)$ for some $\beta<1$, then $$\begin{aligned}
\frac{1}{\sqrt{s_n}}\sum_{k=1}^{s_n}
\left[n(\hat\gamma_k - (1-k/n)\gamma_k)^2 - (1-k/n)\sigma_0\right]
\Rightarrow \mathcal{N}\left(0,2\sum_{k\in{{\mathbb{Z}}}}\sigma_k^2\right).
\end{aligned}$$
To see the connection to the Box-Pierce test, we have the following corollary on autocorrelations. Using the same argument, we can show that the same asymptotic law holds for the similar Ljung-Box test statistic $Q_{L B} = n(n+2) \sum_{k=1}^K \hat r_k^2
/ (n-k)$.
\[thm:ljung\_corr\] Under the conditions of Theorem \[thm:ljung\], the same result holds if $\hat \gamma_k$ is replaced by $\breve \gamma_k$. Furthermore, $$\begin{aligned}
\label{eq:ljung_corr}
\frac{1}{\sqrt{s_n}}\sum_{k=1}^{s_n}
\left[n(\hat r_k - (1-k/n)r_k)^2
- (1-k/n)\sigma_0/\gamma_0^2\right]
\Rightarrow \mathcal{N}\left(0,
\frac{2}{\gamma_0^4}\sum_{k\in{{\mathbb{Z}}}}\sigma_k^2\right).
\end{aligned}$$
\[rk:chisq\] Theorem \[thm:ljung\] clarifies an important historical issue in testing of correlations. If $\gamma_k = 0$ for all $k\geq 1$, which means $X_i$ are uncorrelated; then $\sigma_0 = \gamma_0^2$ and $\sigma_k = 0$ for all $|k| \ge 1$, and (\[eq:ljung\_corr\]) becomes $$\begin{aligned}
\label{eq:ljung_uncorr}
\frac{1}{\sqrt{s_n}}\sum_{k=1}^{s_n}
\left[n\hat r_k^2 - (1-k/n)\right]
\Rightarrow \mathcal{N}\left(0,2\right).
\end{aligned}$$ In an influential paper, [@romano:1996] argued that, for fixed $K$, the chi-squared approximation for $Q_n(K)$ does not hold if $X_i$ are only uncorrelated but not independent. One of the main reasons is that for fixed $K$, $\hat r_1, \ldots, \hat r_K$ are not asymptotically independent if $X_i$ are not independent. However, interestingly, the situation is different if the number of correlations included in $Q_n$ can increase to infinity. According to (\[eq:cltunbdd\]), $\sqrt n \hat \gamma_{k_n}$ and $\sqrt n
\hat \gamma_{k_n+h}$ are asymptotically independent if $h > 0$ and $k_n \to \infty$, because the asymptotic covariance is $\sigma_h =
0$. Therefore, the original Box-Pierce approximation of $Q_n(s_n)$ by $\chi^2_{s_n}$, with unbounded $s_n$, is still asymptotically valid in the sense of (\[eq:ljung\_uncorr\]) since $(\chi^2_{s_n} -
s_n) / \sqrt{s_n} \Rightarrow \mathcal{N}\left(0,2\right)$ as $s_n
\to \infty$. This observation again suggests that the asymptotic behaviors for bounded and unbounded lags are different. A similar observation has been made in [@shao:2011], whose result also suggests that (\[eq:ljung\_uncorr\]) is true under the assumption that $\delta_8(k) = O(\rho^k)$ for some $0<\rho<1$. Our condition $\sum_{k=1}^\infty k^6\delta_8(k)<\infty$ is much weaker.
The next theorem consists of two separate but closely related parts, one is on the estimation of $\sigma_0 = \sum_{k \in {{\mathbb{Z}}}}
\gamma_k^2$, and the other is related to the power of the Box-Pierce test. Define the projection operator $$\begin{aligned}
{\mathcal{P}}^j \cdot = {{\mathbb{E}}}(\cdot|{\mathcal{F}}_{-\infty}^j) - {{\mathbb{E}}}(\cdot |
{\mathcal{F}}_{-\infty}^{j-1}), \mbox{ where } {\mathcal{F}}_i^j = \langle \epsilon_i,
\epsilon_{i+1}, \ldots, \epsilon_j \rangle, \, i, j \in {{\mathbb{Z}}}.\end{aligned}$$
\[thm:ljung\_power\] Assume $X_i\in{\mathcal{L}}^4$, ${{\mathbb{E}}}X_i=0$ and $\Theta_4<\infty$. If $s_n\to\infty$ and $s_n=o(\sqrt{n})$, then $$\begin{aligned}
\label{eq:var_estimation}
\sqrt{n}\left(\sum_{k=-s_n}^{s_n} \hat\gamma_k^2
- \sum_{k=-s_n}^{s_n} \gamma_k^2\right)
\Rightarrow \mathcal{N}(0,4\|D_0'\|^2),
\end{aligned}$$ where $D'_0=\sum_{i=0}^\infty {\mathcal{P}}^0 (X_iY_i)$ with $Y_i=\gamma_0 X_i
+ 2\sum_{k=1}^\infty\gamma_k X_{i-k}$. Furthermore, if $\sum_{k=1}^\infty \gamma_k^2>0$, then $$\begin{aligned}
\label{eq:ljung_power}
\sqrt{n}\left(\sum_{k=1}^{s_n} \hat\gamma_k^2
- \sum_{k=1}^{s_n} \gamma_k^2\right)
\Rightarrow \mathcal{N}(0,4\|D_0\|^2),
\end{aligned}$$ where $D_0=\sum_{i=0}^\infty {\mathcal{P}}^0 (X_iY_i)$ with $Y_i=\sum_{k=1}^\infty\gamma_kX_{i-k}$.
\[thm:ljung\_corr\_power\] Under conditions of Theorem \[thm:ljung\_power\], the same results hold if $\hat \gamma_k$ is replaced by $\breve \gamma_k$. Furthermore, there exist positive numbers $\tau_1^2$ and $\tau_2^2$ such that $$\begin{aligned}
\sqrt{n}\left(\sum_{k=1}^{s_n} \hat r_k^2
- \sum_{k=1}^{s_n} r_k^2\right)
\Rightarrow \mathcal{N}(0,\tau_1^2)
\quad \hbox{and} \quad
\sqrt{n}\left(\sum_{k=-s_n}^{s_n} \hat r_k^2
- \sum_{k=-s_n}^{s_n} r_k^2\right)
\Rightarrow \mathcal{N}(0,\tau_2^2).
\end{aligned}$$
As an immediate application, we consider testing whether $(X_i)$ is an uncorrelated sequence. According to (\[eq:ljung\_uncorr\]), we can use the test statistic $$\begin{aligned}
T_n:=\frac{1}{\sqrt{s_n}}
\left[Q_n(s_n)-\frac{s_n(2n-s_n-1)}{2n}\right],\end{aligned}$$ whose asymptotic distribution under the null hypothesis is $\mathcal{N}(0,2)$. The null is rejected when $T_n > \sqrt{2}
z_{1-\alpha}$, where $z_{1-\alpha}$ is the $(1-\alpha)$-th quantile of a standard normal random variable $Z$. However, under the alternative hypothesis $\sum_{k=1}^\infty r_k^2>0$, the distribution of $T_n$ should be approximated according to Corollary \[thm:ljung\_corr\_power\], and the asymptotic power is $$\begin{aligned}
P\left(T_n>\sqrt{2}z_{1-\alpha}\right)
\approx P\left(\tau_1Z >
\frac{\sqrt{2s_n}\cdot z_{1-\alpha}}{\sqrt{n}}
+ \frac{s_n(2n-s_n-1)}{2n^{3/2}}
- \sqrt{n}\sum_{k=1}^{s_n}r_k^2\right),\end{aligned}$$ which increases to 1 as $n$ goes to infinity.
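In code, the test reads as follows (a minimal NumPy sketch of our own; $z_{0.95} \approx 1.6449$ is hard-coded for $\alpha = 0.05$):

```python
import numpy as np

def white_noise_test(x, s_n, z=1.6449):
    # T_n = s_n^{-1/2} [ Q_n(s_n) - s_n (2n - s_n - 1) / (2n) ]; under the
    # null of no serial correlation, T_n is approximately N(0, 2), and the
    # null is rejected when T_n > sqrt(2) * z, where z is the standard
    # normal (1 - alpha)-quantile.
    n = len(x)
    xc = x - x.mean()
    g0 = np.dot(xc, xc) / n
    q = n * sum((np.dot(xc[:n - k], xc[k:]) / n / g0) ** 2
                for k in range(1, s_n + 1))
    t = (q - s_n * (2 * n - s_n - 1) / (2 * n)) / np.sqrt(s_n)
    return t, t > np.sqrt(2) * z
```

For a strongly correlated AR(1) series the statistic lies far in the right tail, while for iid data it stays moderate, in line with the asymptotic power computation above.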
A Simulation Study {#sec:simulation}
==================
Suppose $\left(r^{(0)}_k\right)$ is a sequence of autocorrelations, one might be interested in the hypothesis test that $r_k=r^{(0)}_k$ for all $k \geq 1$. This hypothesis is, however, impossible to test in practice, except in some special parametric cases. A more tractable hypothesis is $$\label{eq:hypothesis}
\boldsymbol{\mathrm{H}}_0:\quad r_k=r^{(0)}_k
\quad\hbox{for } 1 \leq k \leq s_n.$$ In traditional asymptotic theory, one often assumes that $s_n$ is a fixed constant, for example, the popular Box-Pierce test for serial correlation. Our results in the previous section provide both ${\cal
L}^{\infty}$ and ${\cal L}^2$ based tests, which allow $s_n$ to grow as $n$ increases. Nonetheless, the asymptotic tests can perform poorly when the sample size $n$ is not large enough, namely, there may exist noticeable differences between the true and nominal probabilities of rejecting $\boldsymbol{\mathrm{H}}_0$ (hereafter referred to as error in rejection probability or ERP). In a recent paper, [@horowitz:2006] showed that the Box-Pierce test with bootstrap-based $p$-values can significantly reduce the ERP. They used the blocks of blocks bootstrapping with overlapping blocks (hereafter referred to as BOB) invented by [@kunsch:1989]. For finite samples, our ${\cal L}^2$ based test is similar to the traditional Box-Pierce test considered in their paper, so in this section our focus will be on the ${\cal
L}^{\infty}$ based tests. We shall provide simulation evidence showing that the BOB works reasonably well.
Throughout this section, we let the innovations $\epsilon_i$ be iid standard normal random variables, and consider the following four models. $$\begin{aligned}
& \hbox{I.I.D.: } & & X_i=\epsilon_i && \label{eq:iid}\\
& \hbox{AR(1): } & & X_i=bX_{i-1}+\epsilon_i && \label{eq:ar}\\
& \hbox{Bilinear: } & &
X_i=(a+b\epsilon_i)X_{i-1}+\epsilon_i && \label{eq:bilin}\\
& \hbox{ARCH: } & &
X_i=\sqrt{a+bX_{i-1}^2} \cdot \epsilon_i. && \label{eq:arch}\end{aligned}$$ We generate each process with length $n=2\times 10^7$, and compute $$\label{eq:quantity_simulated}
a_{2s_n}^{-1}\left(\max_{1\leq k\leq s_n}\sqrt{n}
\left|\hat r_k-(1-k/n)r_k\right|/\sqrt{\hat\sigma_0}-b_{2s_n}\right)$$ with $s_n= 5\times 10^5$ and $\hat\sigma_0=\sum_{k=-t_n}^{t_n}\hat
r_k^2$, where $t_n$ is chosen as $t_n={\lfloor n^{1/3} \rfloor} = 271$. Based on 1000 repetitions, we plot the empirical distribution functions in Figure \[fig:all\_in\_one\]. We see that these four empirical curves are close to the one for the Gumbel distribution, which confirms our theoretical results.
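A scaled-down version of this simulation can be sketched in a few lines of Python (with a much smaller $n$ than the $2\times 10^7$ used above; the norming constants $a_m$ and $b_m$ are taken here to be the standard Gumbel constants for maxima of $m$ standard normals, which we assume match (\[eq:gumbel\_constants\])):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(model, n, a=0.4, b=0.4, burn=100):
    """Generate one path of length n for the four models in the text."""
    eps = rng.standard_normal(n + burn)
    x = np.zeros(n + burn)
    for i in range(1, n + burn):
        if model == "iid":
            x[i] = eps[i]
        elif model == "ar":
            x[i] = b * x[i - 1] + eps[i]
        elif model == "bilinear":
            x[i] = (a + b * eps[i]) * x[i - 1] + eps[i]
        elif model == "arch":
            x[i] = np.sqrt(a + b * x[i - 1] ** 2) * eps[i]
    return x[burn:]

def sample_acf(x, max_lag):
    """Sample autocorrelations hat r_1, ..., hat r_max_lag (denominator n)."""
    n, xc = len(x), x - x.mean()
    acov = np.correlate(xc, xc, mode="full")[n - 1:] / n
    return acov[1:max_lag + 1] / acov[0]

def gumbel_statistic(x, s_n, true_r):
    """The normalized maximum in (eq:quantity_simulated)."""
    n = len(x)
    m = 2 * s_n                     # norming uses a_{2 s_n} and b_{2 s_n}
    a_m = (2 * np.log(m)) ** -0.5   # standard Gumbel constants (an assumption)
    b_m = np.sqrt(2 * np.log(m)) - (np.log(np.log(m)) + np.log(4 * np.pi)) / (
        2 * np.sqrt(2 * np.log(m)))
    t_n = int(n ** (1 / 3))
    r_hat = sample_acf(x, max(s_n, t_n))
    sigma0 = 1 + 2 * np.sum(r_hat[:t_n] ** 2)  # sum_{k=-t_n}^{t_n} hat r_k^2
    k = np.arange(1, s_n + 1)
    m_stat = np.max(np.sqrt(n) * np.abs(r_hat[:s_n] - (1 - k / n) * true_r)
                    / np.sqrt(sigma0))
    return (m_stat - b_m) / a_m

x = simulate("ar", n=5000, b=0.5)
stat = gumbel_statistic(x, s_n=50, true_r=0.5 ** np.arange(1, 51))
```

Repeating the last two lines many times and plotting the empirical distribution of `stat` reproduces a small-scale analogue of Figure \[fig:all\_in\_one\].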
![\[fig:all\_in\_one\] Empirical distribution functions for quantities in (\[eq:quantity\_simulated\]). [We choose $b = 0.5$ for model (\[eq:ar\]), $a = b = 0.4$ for model (\[eq:bilin\]), and $a = b = 0.25$ for model (\[eq:arch\]). The black line gives the true distribution function of the Gumbel distribution. ]{}](asymp_all_four_s9.pdf){width="10cm"}
On the other hand, these empirical distributions are not very close to the limiting one if the sample size is not large, because the Gumbel-type convergence in (\[eq:main\]) is slow. This is a well-known phenomenon; see for example [@hall:1979]. It is therefore not reasonable to use the limiting distribution to approximate the finite-sample distributions. To perform the test (\[eq:hypothesis\]), we follow the BOB procedure described in [@horowitz:2006] (called SBOB in their paper). Since in the bootstrapped tests the test statistics are not compared with the limiting distribution, we can ignore the norming constants in (\[eq:quantity\_simulated\]) and simply use the following test statistics $$\begin{aligned}
M_n = \max_{1\leq k\leq s_n} \left|\hat r_k-(1-k/n)r_k^{(0)} \right|
\mbox{ and }
\mathcal M_n = {{M_n} \over { \sqrt{\hat\sigma_0}}},\end{aligned}$$ where $\hat r_k$ is the sample autocorrelation and $\mathcal M_n$ is the self-normalized version with $\sigma_0$ estimated as $\hat \sigma_0 = \sum_{k=-t_n}^{t_n} \hat
r_k^2$, with $t_n = \min\{{\lfloor n^{1/3} \rfloor}, s_n\}$. For simplicity, we refer to these two tests as the $M$-test and the $\mathcal M$-test, respectively. From the series $X_1,\ldots,X_n$, for a specified number of lags $s_n$ to be included in the test and a block size $\mathfrak{b}_n$, form $Y_i=(X_i,X_{i+1},\ldots,X_{i+s_n})^\top$, $1\le i \le n-s_n$, and blocks $\mathcal{B}_j = (Y_j, Y_{j+1},
\ldots, Y_{j+\mathfrak{b}_n-1})$, $1 \le j \le n - s_n -
\mathfrak{b}_n+1$. For simplicity, assume $h_n = n/\mathfrak{b}_n$ is an integer. Suppose $Y_\sharp$ is obtained by sampling a block $\mathcal{B}_\sharp$ from the set of blocks $\{\mathcal{B}_1,
\mathcal{B}_2, \ldots, \mathcal{B}_{n-s_n-\mathfrak{b}_n+1}\}$ and then sampling a column from $\mathcal{B}_\sharp$; let $\operatorname{Cov}_\sharp$ represent the covariance of the bootstrap distribution of $Y_\sharp$, conditional on $(X_1,X_2,\ldots,X_n)$. Denote by $Y_\sharp^j$ the $j$-th entry of $Y_\sharp$, and set $$r_k^{(e)}= {{\operatorname{Cov}_\sharp(Y_\sharp^1, Y_\sharp^{k+1})}
\over
\sqrt{\operatorname{Cov}_\sharp(Y_\sharp^1,Y_\sharp^1) \cdot
\operatorname{Cov}_\sharp(Y_\sharp^{k+1},Y_\sharp^{k+1})}}.$$ The explicit formula of $r_k^{(e)}$ was also given in [@horowitz:2006]. The BOB algorithm is as follows.
1. Sample $h_n$ times with replacement from $\{\mathcal{B}_1,
\mathcal{B}_2, \ldots, \mathcal{B}_{n-s_n-\mathfrak{b}_n+1}\}$ to obtain blocks $\{\mathcal{B}^\ast_1, \mathcal{B}^\ast_2, \,
\ldots, \, \mathcal{B}^\ast_{h_n}\}$, which are laid end-to-end to form a series of vectors $(Y^\ast_1, Y^\ast_2, \ldots, Y^\ast_n)$.
2. Pretend that $(Y^\ast_1,Y^\ast_2,\ldots,Y^\ast_n)$ is a random sample of size $n$ from some $(s_n+1)$-dimensional population distribution, and let $r^\ast_k$ be the sample correlation of the first entry and the $(k+1)$-th entry. Then calculate the test statistics $M_n^\ast=\max_{1\leq k\leq s_n} \left|
r^\ast_k-r^{(e)}_k \right|$ and $\mathcal M_n^\ast =
M_n^\ast/\sqrt{\sigma_0^\ast}$, where $\sigma_0^\ast =
\sum_{k=-t_n}^{t_n} \left(r_k^\ast\right)^2$.
3. Repeat steps 1 and 2 $N$ times. The bootstrap $p$-value of the $M$-test is given by $\#(M_n^\ast>M_n)/N$. For a nominal level $\alpha$, we reject $\boldsymbol{\mathrm{H}}_0$ if $\#(M_n^\ast>M_n)/N<\alpha$. The $\mathcal M$-test is performed in the same manner.
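Steps 1–3 can be sketched as follows (the helper `bob_pvalue` is hypothetical; the pooled-column correlations below stand in for the explicit $r_k^{(e)}$ formula of [@horowitz:2006], and the observed statistic uses the sample autocorrelations of the series):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_acf(x, s):
    n, xc = len(x), x - x.mean()
    acov = np.correlate(xc, xc, mode="full")[n - 1:] / n
    return acov[1:s + 1] / acov[0]

def bob_pvalue(x, s_n, block, N=199, r0=None):
    """Bootstrap p-value for the M-test via steps 1-3 in the text."""
    n = len(x)
    r0 = np.zeros(s_n) if r0 is None else r0
    k = np.arange(1, s_n + 1)
    # Y_i = (X_i, ..., X_{i+s_n}); a block B_j is `block` consecutive columns
    Y = np.lib.stride_tricks.sliding_window_view(x, s_n + 1)
    n_blocks, h_n = Y.shape[0] - block + 1, n // block

    def corr_first_vs_rest(sample):
        c = sample - sample.mean(axis=0)
        cov = c.T @ c[:, 0] / len(sample)
        var = (c ** 2).mean(axis=0)
        return cov[1:] / np.sqrt(var[0] * var[1:])

    # r_k^{(e)}: approximated here by pooling all columns of all blocks
    r_e = corr_first_vs_rest(Y)
    m_obs = np.max(np.abs(sample_acf(x, s_n) - (1 - k / n) * r0))
    exceed = 0
    for _ in range(N):
        starts = rng.integers(0, n_blocks, h_n)   # step 1: resample blocks
        idx = (starts[:, None] + np.arange(block)).ravel()[:n]
        m_star = np.max(np.abs(corr_first_vs_rest(Y[idx]) - r_e))  # step 2
        exceed += m_star > m_obs
    return exceed / N                             # step 3: p-value

p = bob_pvalue(rng.standard_normal(600), s_n=7, block=20)
```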
We compare the BOB tests and the asymptotic tests for the four models listed at the beginning of this section, with $b=.4$ for (\[eq:ar\]), $a=b=.4$ for (\[eq:bilin\]) and $a=b=.25$ for (\[eq:arch\]). We set the series length as $n=1800$, and consider four choices of $s_n$: ${\lfloor \log(n) \rfloor}=7$, ${\lfloor n^{1/3} \rfloor}=12$, 25 and ${\lfloor \sqrt{n} \rfloor}=42$. The BOB tests are performed with $N=999$, and the asymptotic tests are carried out by comparing $a_{2s_n}^{-1}(\sqrt{n}\mathcal M_n-b_{2s_n})$ with the corresponding quantiles of the Gumbel distribution. The empirical rejection probabilities based on 10,000 repetitions are reported in Table \[tab:sbob\]. All probabilities are given in percentages. For all cases, we see that the asymptotic tests are too conservative, and the ERP are quite large. At the nominal level $1\%$, the rejection probabilities are often less than or around $0.1\%$, and at most $0.51\%$; at nominal level $10\%$, they are often less than $3\%$ and at most $6.4\%$. Except for the bilinear models with $s_n=7$ and $s_n=12$, the bootstrapped tests significantly reduce the ERP, which are often less than $0.2\%$ at nominal level $1\%$, less than $.5\%$ at level $5\%$, and less than $1\%$ at level $10\%$. The performances of the $M$-test and the $\mathcal M$-test are similar, with the former being slightly more conservative. The BOB tests are roughly insensitive to the block size, which provides additional evidence for the findings on BOB tests in [@davison:1997].
The bootstrapped tests still perform relatively poorly for the bilinear models when $s_n$ is small (7 and 12). This is possibly due to the heavy-tailedness of the bilinear process; [@tong:1981] gave necessary conditions for the existence of even-order moments. On the other hand, [@horowitz:2006] showed that iterated bootstrapping further reduces the ERP. It is of interest to see whether the iterated procedure has the same effect for the ${\cal L}^{\infty}$ based tests, in particular, whether it makes the ERP reasonably small for the bilinear models when $s_n$ is small. The simulation for the iterated bootstrapping would be computationally expensive, and we do not pursue it here.
Proofs {#sec:proof}
======
This section provides proofs for the results in Section \[sec:main\]. For readability we list the notation here. For a random variable $X$, write that $X \in \mathcal{L}^p$, $p>0$, if $\|X\|_p := ({{\mathbb{E}}}|X|^p )^{1/p} < \infty$. Write $\|X\| =
\|X\|_2$ if $p = 2$. To express centering of random variables concisely, we define the operator ${{\mathbb{E}}}_0$ as ${{\mathbb{E}}}_0 X := X - {{\mathbb{E}}}X$. For a vector ${\boldsymbol{x}} = (x_1,\ldots,x_d)^\top \in {{\mathbb{R}}}^d$, let $|{\boldsymbol{x}}|$ be the usual Euclidean norm, $|{\boldsymbol{x}}|_{\infty} :=
\max_{1 \le i \le d}|x_i|$, and $|{\boldsymbol{x}}|_{\bullet} := \min_{1\leq
i \leq d}|x_i|$. For a square matrix $A$, $\rho(A)$ denotes the operator norm defined by $\rho(A) := \max_{|{\boldsymbol{x}}|=1} |A {\boldsymbol{x}}|$. Let us fix some conventions for the constants. We use $C$, $c$ and $\mathcal{C}$ for constants. The notation ${\mathcal{C}}_p$ is reserved for the constant appearing in Burkholder’s inequality; see (\[eq:burkholder\]). The values of $C$ may vary from place to place, while the value of $c$ is fixed within the statement and the proof of a theorem (or lemma). A constant with a symbolic subscript is used to emphasize the dependence of its value on the subscript.
The framework (\[eq:wold\]) is particularly suited for two classical tools for dealing with dependent sequences: martingale approximation and $m$-dependence approximation. For $i\leq j$, let ${\mathcal{F}}_i^j = \langle \epsilon_i, \epsilon_{i+1}, \ldots,
\epsilon_j \rangle$ be the $\sigma$-field generated by the innovations $\epsilon_i, \epsilon_{i+1}, \ldots, \epsilon_j$, and define the projection operator ${\mathcal{H}}_{i}^j (\cdot) = {{\mathbb{E}}}(\cdot|{\mathcal{F}}_{i}^j)$. Set ${\mathcal{F}}_i := {\mathcal{F}}_{i}^\infty$, ${\mathcal{F}}^j := {\mathcal{F}}_{-\infty}^j$, and define ${\mathcal{H}}_i$ and ${\mathcal{H}}^j$ similarly. Define the projection operators ${\mathcal{P}}^j(\cdot) = {\mathcal{H}}^j(\cdot) - {\mathcal{H}}^{j-1}(\cdot)$ and ${\mathcal{P}}_i(\cdot) = {\mathcal{H}}_i(\cdot) - {\mathcal{H}}_{i+1}(\cdot)$; then $({\mathcal{P}}^j(\cdot))_{j \in {{\mathbb{Z}}}}$ and $({\mathcal{P}}_{-i}(\cdot))_{i \in {{\mathbb{Z}}}}$ are martingale difference sequences with respect to the filtrations $({\mathcal{F}}^j)$ and $({\mathcal{F}}_{-i})$, respectively. For $m\geq
0$, define $\tilde X_i={\mathcal{H}}_{i-m} X_i$; then $(\tilde X_i)_{i \in
{{\mathbb{Z}}}}$ is an $(m+1)$-dependent sequence.
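For a causal linear process (a special case of (\[eq:wold\]), assumed here only for illustration), ${\mathcal{H}}_{i-m}X_i$ simply truncates the moving-average representation, and the ${\mathcal{L}}^2$ error of the $(m+1)$-dependent approximation is $\sum_{j>m} a_j^2$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Causal linear process X_i = sum_{j>=0} a_j eps_{i-j}, with a_j = 0.7^j
# (truncated at 200 terms for the simulation).
a = 0.7 ** np.arange(200)
n, m = 20000, 5
eps = rng.standard_normal(n + len(a) - 1)

def linear_process(coeffs):
    # valid-mode convolution gives X_i = sum_j coeffs[j] * eps_{i-j}
    return np.convolve(eps, coeffs, mode="valid")

X = linear_process(a)
# tilde X_i = H_{i-m} X_i keeps only eps_{i-m}, ..., eps_i:
# equivalently, zero out the coefficients a_j with j > m.
X_tilde = linear_process(np.where(np.arange(len(a)) <= m, a, 0.0))

err = np.mean((X - X_tilde) ** 2)   # estimates E(X_0 - tilde X_0)^2
theo = np.sum(a[m + 1:] ** 2)       # = sum_{j>m} a_j^2
```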
Some Useful Inequalities
------------------------
We collect in Proposition \[thm:facts\] some useful facts about physical dependence measures and martingale and $m$-dependence approximations. We expect that they will be useful in other asymptotic problems that involve sample covariances. Hence, for the convenience of other researchers, we provide explicit upper bounds.
We now introduce a moment inequality (\[eq:mar\_zyg\]), which follows from the Burkholder inequality [see @burkholder:1988]. Let $(D_i)$ be a martingale difference sequence with $D_i
\in {\mathcal{L}}^p$ for every $i$, where $p > 1$; then $$\begin{aligned}
\label{eq:mar_zyg}
\left\|D_1+D_2+\cdots+D_n\right\|_p^{p'} \leq {\mathcal{C}}_p^{p'}
\left(\|D_1\|_p^{p'} + \|D_2\|_p^{p'} + \cdots
+ \|D_n\|_p^{p'}\right),\end{aligned}$$ where $p'=\min\{p,2\},$ and the constant $$\begin{aligned}
\label{eq:burkholder}
\mathcal{C}_p = (p-1)^{-1} \hbox{ if } 1 < p < 2
\quad \mbox{and} \quad
\mathcal{C}_p = \sqrt{p-1} \hbox{ if } p \ge 2.\end{aligned}$$ We note that when $p > 2$, the constant ${\mathcal{C}}_p$ in (\[eq:mar\_zyg\]) was $p-1$ in [@burkholder:1988]; it was improved to $\sqrt{p-1}$ by [@rio:2009].
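As a quick numerical sanity check of (\[eq:mar\_zyg\]) (a Monte-Carlo sketch, using iid standard normals as a trivial martingale difference sequence, with $p=4$, $p'=2$ and ${\mathcal{C}}_4=\sqrt{3}$):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 4, 50, 20000
D = rng.standard_normal((reps, n))   # D_i: iid N(0,1) martingale differences
S = D.sum(axis=1)

# ||S_n||_p^{p'}  vs  C_p^{p'} * sum_i ||D_i||_p^{p'}, p' = 2, C_p = sqrt(p-1)
lhs = np.mean(np.abs(S) ** p) ** (2 / p)
rhs = (p - 1) * n * np.mean(np.abs(D) ** p) ** (2 / p)
```

Here `lhs` is roughly $\sqrt{3}\,n$ while `rhs` is $3\sqrt{3}\,n$, so the inequality holds with room to spare.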
\[thm:facts\]
1. Assume ${{\mathbb{E}}}X_i=0$ and $p>1$. Recall that $p'=\min(p, 2)$. $$\begin{aligned}
\label{eq:fact1}
& \|{\mathcal{P}}^0X_i\|_p \leq \delta_p(i) \quad\hbox{and}\quad
\|{\mathcal{P}}_0X_i\|_p \leq \delta_p(i)
\\\label{eq:fact2}
& \kappa_p:=\|X_0\|_p \leq \mathcal{C}_p\Psi_p\\\label{eq:fact3}
& \left\|\sum_{i=1}^n c_iX_i\right\|_p
\leq \mathcal{C}_p A_n\Theta_p, \mbox{ where }
A_n = \left(\sum_{i=1}^n |c_i|^{p'}\right)^{1/p'} \\\label{eq:fact4}
& |\gamma_k| \leq \zeta_2(k), \quad \hbox{where }
\zeta_p(k):=\sum_{j=0}^\infty \delta_p(j)\delta_p(j+k) \\\label{eq:fact5}
& \left\|\sum_{i=1}^n (X_{i-k}X_i -\gamma_k)\right\|_{p/2}
\leq 2{\mathcal{C}}_{p/2}\kappa_p\Theta_p\sqrt{n},
\quad \hbox{when } p \geq 4 \\ \label{eq:fact5.5}
& \left\|\sum_{i,j=1}^n c_{i,j} (X_iX_j-\gamma_{i-j})\right\|_{p/2}
\leq 4{\mathcal{C}}_{p/2}{\mathcal{C}}_p \Theta_p^2 B_n \sqrt{n}, \quad \hbox{when } p \geq 4
\end{aligned}$$ where $B_n^2 =\max \{\max_{1\leq i \leq n} \sum_{j=1}^n c_{i,j}^2,
\, \max_{1\leq j \leq n} \sum_{i=1}^n c_{i,j}^2 \}.$
2. For $m \geq 0$, define $\tilde X_i = {\mathcal{H}}_{i-m} X_i$. For $p>1$, let $\tilde \delta_p(\cdot)$ be the physical dependence measures for the sequence $(\tilde X_i)$. Then $$\begin{aligned}
\label{eq:fact6}
& \tilde \delta_{p}(i) \leq \delta_p(i) \\ \label{eq:fact7}
& \|X_0-\tilde X_0\|_p \leq {\mathcal{C}}_p \Psi_p(m+1)\\ \label{eq:fact8}
& \left\|\sum_{i=1}^n c_i(X_i-\tilde X_i)\right\|_p
\leq \mathcal{C}_p A_n \Theta_{p}(m+1)\\
\label{eq:fact9}
& \left\|\sum_{i=k+1}^n \left(X_{i-k}X_i-\gamma_k
-\tilde X_{i-k}\tilde X_i+\tilde\gamma_k\right)\right\|_p
\leq 4{\mathcal{C}}_p(n-k)^{1/p'} \kappa_{2p} \Delta_{2p}(m+1).
\end{aligned}$$
The inequalities (\[eq:fact1\]) and (\[eq:fact6\]) follow directly from the definitions. Since $X_{i-k} = \sum_{j\in{{\mathbb{Z}}}}
{\mathcal{P}}^{j} X_{i-k}$ and $X_i = \sum_{j\in{{\mathbb{Z}}}}{\mathcal{P}}^{j}X_i$, we have $$\begin{aligned}
|\gamma_k|=\left|\sum_{j=-k}^\infty {{\mathbb{E}}}\left[({\mathcal{P}}^{-j}
X_0) ({\mathcal{P}}^{-j} X_{k})\right]\right| \leq
\sum_{j=0}^\infty \delta_2(j)\delta_2(j+k) = \zeta_{2}(k),\end{aligned}$$ which proves (\[eq:fact4\]). Inequality (\[eq:fact5.5\]) can be proved similarly to Proposition 1 of [@liu:2010], and (\[eq:fact8\]) was given by Lemma 1 of the same paper. Inequality (\[eq:fact3\]) is a special case of (\[eq:fact8\]). Define $Y_i = X_{i-k} X_i$; then $(Y_i)$ is also a stationary process of the form (\[eq:wold\]). By Hölder’s inequality, $\|Y_i -
\Omega_0(Y_i) \|_{p/2} \le 2 \kappa_p[\delta_p(i) +
\delta_p(i-k)]$. Applying (\[eq:fact3\]) to $(Y_i)$, we obtain (\[eq:fact5\]). To see (\[eq:fact7\]), we first write $X_m -
\tilde X_m = \sum_{j=1}^\infty {\mathcal{P}}_{-j}X_m$. Since $\left\|{\mathcal{P}}_{-j}X_m\right\|_p \leq \delta_p(m+j)$, and $({\mathcal{P}}_{-j}X_m)_{j\geq 1}$ is a martingale difference sequence, by (\[eq:mar\_zyg\]), we have $$\|X_0-\tilde X_0\|^{p'}_p \leq \mathcal{C}_p^{p'}
\sum_{j=1}^{\infty}\left\|{\mathcal{P}}_{-j}X_m\right\|_p^{p'}
\leq \mathcal{C}_p^{p'} \sum_{j=1}^{\infty} [\delta_p(m+j)]^{p'}
= \mathcal{C}_p^{p'} [\Psi_p(m+1)]^{p'}.$$ The above argument also leads to (\[eq:fact2\]). Using an argument similar to that in the proof of Theorem 2 of [@wu:2009], we can show (\[eq:fact9\]). Details are omitted.
Proof of Theorem \[thm:single\]
-------------------------------
The proof is quite complicated and will be divided into several steps. We first give the outline.
### Outline {#outline .unnumbered}
####
Define $R_{n,k}=\sum_{i=k+1}^n (X_{i-k}X_i-\gamma_k)$. Set $m_n={\lfloor n^\beta \rfloor}$, $0<\beta<1$. Define $\tilde X_i = {\mathcal{H}}_{i-m_n}
X_i$, $\tilde \gamma_k = {{\mathbb{E}}}(\tilde X_0 \tilde X_k)$, and $\tilde
R_{n,k}=\sum_{i=k+1}^n (\tilde X_{i-k} \tilde X_i - \tilde \gamma_k
)$. We next show that it suffices to consider $\tilde R_{n,k}$.
\[thm:mdep\_app\] Assume ${{\mathbb{E}}}X_i=0$, $X_i\in{\mathcal{L}}^{p}$, and $\Theta_p(m) =
O(m^{-\alpha})$ for some $p>4$ and $\alpha>0$. If $s_n = O(n^\eta)$ with $0<\eta<\alpha p/2$, then there exists a $\beta$ such that $\eta<\beta<1$ and $$\begin{aligned}
\max_{1\leq k\leq s_n} \left| R_{n,k}-\tilde R_{n,k} \right|
= o_P\left(\sqrt{n/\log s_n}\right).
\end{aligned}$$
####
Let $l_n = {\lfloor n^\gamma \rfloor}$, where $\gamma \in (\beta, 1)$. For each $t_n < k \leq s_n$, we apply the blocking technique and split the integer interval $[k+1,n]$ into alternating large and small blocks $$\label{eq:blk}
\begin{aligned}
& K_1=[k+1,s_n] \cr
& H_j = [s_n+(j-1)(2m_n+l_n)+1,s_n+(j-1)(2m_n+l_n)+l_n];
\quad 1 \leq j \leq w_n-1,\cr
& K_{j+1}=[s_n+(j-1)(2m_n+l_n)+l_n+1,s_n+j(2m_n+l_n)];
\quad 1\leq j \leq w_n-1; \quad \hbox{and} \cr
& H_{w_n}=[s_n+(w_n-1)(2m_n+l_n)+1,n],
\end{aligned}$$ where $w_n$ is the largest integer such that $s_n + (w_n-1)
(2m_n+l_n) + l_n \leq n$. Denote by $|H|$ the size of a block $H$. By definition we know $l_n\leq |H_{w_n}| \leq 3l_n$ when $n$ is large enough. For $1\leq j \leq w_n$ define $$\begin{aligned}
V_{k,j}=\sum_{i\in K_j,\,i>k} \left(\tilde X_{i-k} \tilde X_i -
\tilde \gamma_k\right)
\mbox{ and }
U_{k,j}=\sum_{i\in H_j}
\left(\tilde X_{i-k} \tilde X_i - \tilde
\gamma_k\right).\end{aligned}$$ Note that $w_n \sim n/(2 m_n+l_n) \sim n^{1-\gamma}$. We show that the sums over small blocks are negligible.
\[thm:small\_blk\] Assume the conditions of Theorem \[thm:single\]. Then $$\begin{aligned}
\max_{1 \leq k\leq s_n} \left|\sum_{j=1}^{w_n} V_{k,j} \right|
= o_P\left(\sqrt{\frac{n}{\log s_n}}\right).
\end{aligned}$$
####
We show that it suffices to consider $$\begin{aligned}
{\mathcal{R}}_{n,k} = \sum_{j=1}^{w_n} \bar U_{k,j}, \mbox{ where }
\bar U_{k,j} = {{\mathbb{E}}}_0 \left(U_{k,j}I\{|U_{k,j}| \le \sqrt{n} /
(\log s_n)^3\}\right).\end{aligned}$$
\[thm:large\_blk\] Assume the conditions of Theorem \[thm:single\]. Then $$\begin{aligned}
\max_{1\leq k\leq s_n} \left|\sum_{j=1}^{w_n}(U_{k,j}-\bar U_{k,j})\right|
= o_P\left(\sqrt{\frac{n}{\log s_n}}\right).
\end{aligned}$$
####
In order to prove Lemma \[thm:vec\_mod\_dev\], we need the autocovariance structure of $\left({\mathcal{R}}_{n,k}/\sqrt{n}\right)$ to be close to that of $(G_k)$. However, this happens only when $k$ is large. We show that there exists a $0 < \iota < 1$ such that for $t_n = 3{\lfloor s_n^\iota \rfloor}$, (i) $\max_{1\leq k\leq t_n}
|{\mathcal{R}}_{n,k}/\sqrt{n}|$ does not contribute to the asymptotic distribution; and (ii) the autocovariance structure of $\left({\mathcal{R}}_{n,k}/\sqrt{n}\right)$ converges to that of $(G_k)$ uniformly on $t_n < k \leq s_n$.
\[thm:small\_lag\] Under conditions of Theorem \[thm:single\], there exists a constant $0<\iota<1$ such that for $t_n=3{\lfloor s_n^\iota \rfloor}$, $$\begin{aligned}
\label{eq:small_lag}
\lim_{n\to\infty}P \left(\max_{1 \leq k \leq t_n}
|{\mathcal{R}}_{n,k}| > \sqrt{\sigma_0n\log s_n}\right)=0.
\end{aligned}$$
\[thm:cov\_struc\] Let conditions of Theorem \[thm:single\] be satisfied. Recall that $t_n=3{\lfloor s_n^\iota \rfloor}$ from Lemma \[thm:small\_lag\]. There exist constants $C_p>0$ and $0<\ell<1$ such that for any $t_n<k\leq
k+h\leq s_n$, $$\begin{aligned}
& |\operatorname{Cov}({\mathcal{R}}_{n,k},{\mathcal{R}}_{n,k+h})/n-\sigma_h| \leq C_p\,s_n^{-\ell}.
\end{aligned}$$
####
Let $t_n=3{\lfloor s_n^\iota \rfloor}$ be as in Lemma \[thm:small\_lag\]. For $t_n<k_1<k_2<\ldots<k_d\leq s_n$, define ${\boldsymbol{{\mathcal{R}}}}_n =
({\mathcal{R}}_{n,k_1},{\mathcal{R}}_{n,k_2},\ldots,{\mathcal{R}}_{n,k_d})^\top$ and ${\boldsymbol{V}}=(G_{k_1},G_{k_2},\ldots,G_{k_d})^\top$, where $(G_k)$ is defined in (\[eq:gp\]). Let $\Sigma_n=\operatorname{Cov}({\boldsymbol{{\mathcal{R}}}}_n)$ and $\Sigma=\operatorname{Cov}({\boldsymbol{V}})$. For fixed $x \in {{\mathbb{R}}}$, set $z_n = a_{2s_n} x
+ b_{2s_n}$, where the constants $a_n$ and $b_n$ are defined in (\[eq:gumbel\_constants\]). In the following lemma we provide a moderate deviation result for ${\boldsymbol{{\mathcal{R}}}}_n$.
\[thm:vec\_mod\_dev\] Assume conditions of Theorem \[thm:single\]. Then there exists a constant $C_{p,d}>1$ such that for all $t_n<k_1<k_2<\ldots<k_d\leq
s_n$, $$\begin{aligned}
\left|P\left(\left|{\boldsymbol{{\mathcal{R}}}}_n/\sqrt{n}\right|_\bullet \geq z_n \right)
- P\left(\left|{\boldsymbol{V}}\right|_\bullet \geq z_n \right)\right|
\le C_{p,d} {{ P\left(\left|{\boldsymbol{V}}\right|_\bullet \geq z_n \right)}
\over{(\log s_n)^{1/2}}}
+ C_{p,d} \,\exp\left\{-{{(\log s_n)^2} \over{C_{p,d}}}\right\}.
\end{aligned}$$
### Step 1: $m$-dependence approximation
Recall that $m_n={\lfloor n^\beta \rfloor}$ with $\eta<\beta<1$. We claim $$\begin{aligned}
\label{eq:mdep_app}
\left\| R_{n,k} - \tilde R_{n,k}\right\|_{p/2}
\leq 6\,\mathcal{C}_{p/2}\Theta_p\Theta_p(m_n-k+1)\cdot \sqrt{n}.
\end{aligned}$$ It follows that for any $\lambda>0$ $$\begin{aligned}
P & \left( \max_{1\leq k\leq s_n} \left| R_{n,k}-\tilde R_{n,k} \right|
> \lambda \sqrt{n/\log s_n}\right)
\leq {{ (\log s_n)^{p/4}}\over{n^{p/4} \lambda^{p/2}}}
\sum_{k=1}^{s_n} \| R_{n,k}-\tilde R_{n,k}\|^{p/2}_{p/2} \\
& \qquad \qquad \qquad \leq C_p \lambda^{-p/2}
s_n (\log s_n)^{p/4} n^{-\alpha\beta p/2}
\leq C_p \lambda^{-p/2} n^{\eta - \alpha\beta p/2} (\log n)^{p/4}.
\end{aligned}$$ Therefore, if $\alpha p/2>\eta$, then there exists a $\beta$ such that $\eta<\beta<1$ and $\eta - \alpha\beta p/2 <0$, and hence the preceding probability goes to zero as $n\to\infty$. The proof of Lemma \[thm:mdep\_app\] is complete.
We now prove claim (\[eq:mdep\_app\]). For each $1 \le k \le
s_n$, we have $$\begin{aligned}
\| R_{n,k} - \tilde R_{n,k} \|_{p/2}
\leq & \left\|\sum_{i=k+1}^n ( X_{i-k} - \tilde X_{i-k})
\tilde X_i\right\|_{p/2}
+ \left\|\sum_{i=k+1}^n ({\mathcal{H}}_{i-m_n} X_{i-k})
( X_i-\tilde X_i)\right\|_{p/2} \\
& + \left\|\sum_{i=k+1}^n
{{\mathbb{E}}}_0\left[( X_{i-k} - {\mathcal{H}}_{i-m_n} X_{i-k})( X_i-\tilde X_i)
\right]\right\|_{p/2}
\end{aligned}$$ Observe that $ (\tilde X_i{\mathcal{P}}_{i-k-j} X_{i-k} )_{1\leq i \leq n}$ is a backward martingale difference sequence with respect to ${\mathcal{F}}_{i-k-j}$ if $j>m_n$, so by the inequality (\[eq:mar\_zyg\]), $$\begin{aligned}
\left\|\sum_{i=k+1}^n ( X_{i-k}
- \tilde X_{i-k})\tilde X_i\right\|_{p/2}
& \leq \sum_{j=m_n+1}^\infty
\left\|\sum_{i=k+1}^n \tilde X_i{\mathcal{P}}_{i-k-j} X_{i-k}
\right\|_{p/2} \cr
& \leq \sum_{j=m_n+1}^\infty \sqrt{n}\mathcal{C}_{p/2}
\|\tilde X_{j+k}{\mathcal{P}}_0 X_j\|_{p/2} \cr
& \leq \mathcal{C}_{p/2} \Theta_p\Theta_p(m_n+1)\cdot \sqrt{n}.
\end{aligned}$$ Similarly we have $\|\sum_{i=k+1}^n ({\mathcal{H}}_{i-m_n} X_{i-k})( X_i -
\tilde X_i) \|_{p/2} \leq \sqrt{n} \mathcal{C}_{p/2} \Theta_p
\Theta_p(m_n+1)$. As in (\[eq:fact8\]), we get $\|\tilde
X_{i-k} - {\mathcal{H}}_{i-m_n} X_{i-k}\|_p \leq \Theta_p(m_n-k+1)$. Let $Y_{n,i}:=( X_{i-k} - {\mathcal{H}}_{i-m_n} X_{i-k})( X_i-\tilde X_i)$. Then $$\begin{aligned}
\left\|Y_{n,i}-\Omega_0(Y_{n,i})\right\|_{p/2} \leq
2\left[\delta_p(i)\Theta_p(m_n-k+1) + \delta_p(i-k)\Theta_p(m_n+1)\right].
\end{aligned}$$ Therefore, by (\[eq:fact3\]), it follows that $$\begin{aligned}
\left\|\sum_{i=k+1}^n {{\mathbb{E}}}_0\left[( X_{i-k} - {\mathcal{H}}_{i-m_n} X_{i-k})
( X_i-\tilde X_i)\right]\right\|_{p/2}
\leq 4\,\mathcal{C}_{p/2}\Theta_p\Theta_p(m_n-k+1)\cdot\sqrt{n},
\end{aligned}$$ and the proof of (\[eq:mdep\_app\]) is complete.
### Step 2: Throw out small blocks
In this section, as well as in many other places in this article, we often need to split an integer interval $[s,t] = \{s, s+1, \ldots,
t\} \subset {{\mathbb{N}}}$ into consecutive blocks $\mathcal{B}_1, \ldots,
\mathcal{B}_w$ of size $m$. Since $t-s+1$ may not be a multiple of $m$, we make the convention that, unless the size of the last block is specified explicitly, it has size $m \leq
|\mathcal{B}_w| <2m$, and all the other blocks have size exactly $m$.
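This convention can be sketched as a small helper (a hypothetical `split_blocks`, not part of the paper):

```python
def split_blocks(s, t, m):
    """Split [s, t] into consecutive blocks of size m; the last block
    absorbs the remainder, so its size lies in [m, 2m)."""
    w = max(1, (t - s + 1) // m)   # number of blocks
    blocks = [list(range(s + i * m, s + (i + 1) * m)) for i in range(w - 1)]
    blocks.append(list(range(s + (w - 1) * m, t + 1)))
    return blocks

blocks = split_blocks(1, 10, 3)   # → [[1, 2, 3], [4, 5, 6], [7, 8, 9, 10]]
```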
It suffices to show that for any $\lambda>0$, $$\begin{aligned}
\lim_{n\to\infty}\sum_{k=1}^{s_n}
P\left(\left|\sum_{j=1}^{w_n} V_{k,j}\right|
\geq \lambda \sqrt{\frac{n}{\log s_n}}\right) =0.
\end{aligned}$$ Observe that $V_{k,j}, 1\leq j\leq w_n$, are independent. By (\[eq:fact5\]), $\|V_{k,j}\| \leq
2|K_j|^{1/2} \kappa_4 \Theta_4$. By Corollary 1.6 of [@nagaev:1979], for any $M>1$, there exists a constant $C_{M}>1$ such that $$\label{eq:48}
\begin{aligned}
P\left(\left|\sum_{j=1}^{w_n} V_{k,j}\right|
\geq \lambda\sqrt{\frac{n}{\log s_n}}\right)
& \leq \sum_{j=1}^{w_n} P\left(|V_{k,j}|
\geq C_{M}^{-1}\lambda\sqrt{{n}/{\log s_n}}\right)
+\left(\frac{4e^2\kappa_4^2\Theta_4^2\sum_{j=1}^{w_n}|K_j|}
{C_{M}^{-1}\lambda^2 n/\log s_n}\right)^{C_{M}/2} \\
& \leq \sum_{j=1}^{w_n} P\left(|V_{k,j}|
\geq C_{M}^{-1}\lambda\sqrt{{n}/{\log n}}\right)
+ C_{M} \left(n^{\beta-\gamma}\log n\right)^{C_{M}/2} \\
& \leq \sum_{j=1}^{w_n} P\left(|V_{k,j}|
\geq C_{M}^{-1} \sqrt{{n}/{\log n}}\right) +
n^{-M}.
\end{aligned}$$ where we absorb the constant $\lambda$ into the constant $C_M$ in the last inequality. It remains to show that $$\begin{aligned}
\label{eq:44}
\lim_{n\to\infty}\sum_{k=1}^{s_n} \sum_{j=1}^{w_n}
P\left(|V_{k,j}| \geq q_1\delta \phi_n\right)=0,
\mbox{ where } \phi_n = \sqrt{\frac{n}{\log n}},
\end{aligned}$$ holds for any $\delta>0$, where $q_1$ is the smallest integer such that $\beta^{q_1}<\min\{(p-4)/p, \, (p-2-2\eta)/(p-2)\}$. This choice of $q_1$ will be explained later. We adopt the technique of successive $m$-dependence approximations from [@liu:2010] to prove (\[eq:44\]).
For $q\geq 1$, set $m_{n,q}={\lfloor n^{\beta^q} \rfloor}$. Define $X_{i,q} =
{\mathcal{H}}_{i-m_{n,q}} X_i$, $\gamma_{k,q} = {{\mathbb{E}}}(X_{0,q} X_{k,q})$, and $$\begin{aligned}
V_{k,j,q} = \sum_{i \in K_j, i>k} (X_{i-k,q}X_{i,q}-\gamma_{k,q}).\end{aligned}$$ In particular, $m_{n,1}$ is the same as the $m_n$ defined earlier, and $V_{k,j,1}=V_{k,j}$. Without loss of generality, assume $s_n \leq
{\lfloor n^\eta \rfloor}$. Let $q_0$ be such that $\beta^{q_0+1} \leq \eta <
\beta^{q_0}$. We first consider the difference between $V_{k,j,q}$ and $V_{k,j,q+1}$ for $1\leq q <q_0$. Split the block $K_j$ into consecutive small blocks $\mathcal{B}_1, \ldots,
\mathcal{B}_{w_{n,q}}$ with size $2m_{n,q}$. Define $$\label{eq:49}
\begin{aligned}
V_{k,j,q,t}^{(0)} = \sum_{i \in \mathcal{B}_t}
(X_{i-k,q}X_{i,q}-\gamma_{k,q}) \quad \hbox{and} \quad
V_{k,j,q,t}^{(1)} = \sum_{i \in \mathcal{B}_t}
(X_{i-k,q+1}X_{i,q+1}-\gamma_{k,q+1}).
\end{aligned}$$ Observe that $V_{k,j,q,t_1}^{(0)}$ and $V_{k,j,q,t_2}^{(0)}$ are independent if $|t_1-t_2|>1$. Similarly to (\[eq:48\]), for any $M>1$, there exists a constant $C_M>1$ such that, for sufficiently large $n$, $$\label{eq:53}
\begin{aligned}
P\left(\left|V_{k,j,q}-V_{k,j,q+1}\right| \geq \delta\phi_n\right)
& =P\left[\left|\sum_{t=1}^{w_{n,q}} \left(V^{(0)}_{k,j,q,t}-V^{(1)}_{k,j,q,t}\right)\right|
\geq \delta \phi_n\right] \\
& \leq \sum_{t=1}^{w_{n,q}} P\left(\left|V^{(0)}_{k,j,q,t}-V^{(1)}_{k,j,q,t}\right|
\geq C_M^{-1} \phi_n \right) + n^{-M}.
\end{aligned}$$ As in the proof of (\[eq:mdep\_app\]), we have $\left\|V^{(0)}_{k,j,q,t}-V^{(1)}_{k,j,q,t}\right\|_{p/2} \leq C_p
|\mathcal{B}_t|^{1/2} m_{n,q+1}^{-\alpha}$. It follows that $$\begin{aligned}
\sum_{k=1}^{s_n} \sum_{j=1}^{w_n}
P\left(\left|V_{k,j,q}-V_{k,j,q+1}\right|
\geq \delta \phi_n\right)
& \leq C_{p,M} n^{\eta} n^{1-\gamma} \left(n^{-M} +
\frac{n^\gamma m_{n,q}^{p/4}
m_{n,q+1}^{-\alpha p/2}}{m_{n,q}
(n/\log n)^{p/4}} \right) \\
& \leq C_{p,M} \left(n^{\eta+1-\gamma-M} + n^{\eta} n^{1-p/4}
m_{n,q}^{p/4-1-\alpha\beta p/2}\right).
\end{aligned}$$ Under the condition (\[eq:decay\_rate1\]), there exists a $0<\beta<1$, such that $$\begin{aligned}
\sum_{k=1}^{s_n} \sum_{j=1}^{w_n}
P\left(\left|V_{k,j,q}-V_{k,j,q+1}\right|
\geq \delta \phi_n \right)
\leq C_{p,M} \left(n^{\eta+1-\gamma-M} + n^{\eta+ 1-p/4
+\beta^q (p/4-1-\alpha\beta p/2)} \right) \to 0.
\end{aligned}$$
Recall that $q_1$ is the smallest integer such that $\beta^{q_1}<\min\{(p-4)/p, (p-2-2\eta)/(p-2)\}$. We now consider the difference between $V_{k,j,q}$ and $V_{k,j,q+1}$ for $q_0 \leq q
<q_1$. The problem is more complicated than the preceding case $1\leq q<q_0$, since now it is possible that $m_{n,q}<k$ for some $1\leq k \leq s_n$. We consider three cases.
[*Case 1: $k \geq 2m_{n,q}$.*]{} Partition the block $K_j$ into consecutive smaller blocks $\mathcal{B}_1, \ldots, \mathcal{B}_{w_{n,q}}$ with the same size $m_{n,q}$. Define $V_{k,j,q,t}^{(0)}$ and $V_{k,j,q,t}^{(1)}$ as in (\[eq:49\]). Observe that $\left(V_{k,j,q,t}^{(0)}-V_{k,j,q,t}^{(1)}\right)_{t \hbox{
\scriptsize{is odd}}}$ is a martingale difference sequence with respect to the filtration $\left(\xi_t:=\langle
\epsilon_{l}:\,l\leq \max\left\{\mathcal{B}_t\right\}
\rangle\right)_{t \hbox{ \scriptsize{is odd}}}$, and similarly for the sequence and filtration indexed by even $t$. Set $\xi_0=\langle
\epsilon_l:\,l<\min\{\mathcal{B}_1\} \rangle$ and $\xi_{-1} =
\langle \epsilon_l:\,l<\min\{\mathcal{B}_1\}-m_{n,q} \rangle$. For each $1 \leq t \leq w_{n,q}$, define $$\begin{aligned}
\mathcal{V}^{(l)}_{t}
={{\mathbb{E}}}\left[\left(V^{(l)}_{k,j,q,t}\right)^2 | \xi_{t-2}\right]
= \sum_{i_1,i_2 \in \mathcal{B}_t} X_{i_1-k,q+l}X_{i_2-k,q+l}
\gamma_{i_1-i_2,q+l}
\end{aligned}$$ for $l=0,1$. By Lemma 1 of [@haeusler:1984], for any $M>1$, there exists a constant $C_M>1$ such that $$\label{eq:51}
\begin{aligned}
P & \left(\left|V_{k,j,q}-V_{k,j,q+1}\right|
\geq \delta \phi_n\right)
\leq \sum_{t=1}^{w_{n,q}}
P\left(\left|V^{(0)}_{k,j,q,t}-V^{(1)}_{k,j,q,t}\right|
\geq \sqrt{\frac{n}{(\log n)^3}}\right) + n^{-M} \\
& \quad + \sum_{l=0,1} 2
\left\{P\left[\sum_{t \hbox{ {\scriptsize is odd}}} \mathcal{V}^{(l)}_t
\geq \frac{C_M^{-1}n}{(\log n)^2}\right]
+ P\left[\sum_{t \hbox{ {\scriptsize is even}}} \mathcal{V}^{(l)}_t
\geq \frac{C_M^{-1}n}{(\log n)^2}\right] \right\}.
\end{aligned}$$ By (\[eq:fact4\]), $\sum_{k \in {{\mathbb{Z}}}} |\gamma_{k,q+l}|^2 \leq
\Theta_2^2$, and hence by (\[eq:fact5.5\]), $\|\mathcal{V}^{(l)}_{t}\|_{p/2} \leq C_p m_{n,q}^{1/2}$. Observe that $\mathcal{V}^{(0)}_{t_1}$ and $\mathcal{V}^{(0)}_{t_2}$ are independent if $|t_1-t_2|>1$, so similarly to (\[eq:48\]), we have $$\begin{aligned}
P\left[\sum_{t \hbox{ {\scriptsize is odd}}} \mathcal{V}^{(l)}_t
\geq \frac{C_M^{-1}n}{(\log n)^2}\right]
& \leq n^{-M} + \sum_{t\hbox{ {\scriptsize is odd}}} P \left[\mathcal{V}^{(l)}_t
\geq \frac{C_M^{-2}n}{(\log n)^2}\right] \\
& \leq n^{-M} + C_{p,M} \cdot w_{n,q} \cdot n^{-p/2} (\log n)^p \cdot m_{n,q}^{p/4}.
\end{aligned}$$ The same inequality holds for the sum over even $t$. For the first term in (\[eq:51\]), we claim that $$\begin{aligned}
\label{eq:50}
\left\|V^{(0)}_{k,j,q,t}-V^{(1)}_{k,j,q,t}\right\|_{p}
\leq C_p \cdot m_{n,q}^{1/2} \cdot m_{n,q+1}^{-\alpha},
\end{aligned}$$ which together with the preceding two inequalities implies that $$\begin{aligned}
P & \left(\left|V_{k,j,q}-V_{k,j,q+1}\right|
\geq \delta \phi_n\right)
\leq C_{p,M} \,w_{n,q} \cdot n^{-p/2} (\log n)^{3p/2}
\left( m_{n,q}^{p/2} \cdot m_{n,q+1}^{-\alpha p}
+ m_{n,q}^{p/4}\right) + n^{-M}.
\end{aligned}$$ It follows that under condition (\[eq:decay\_rate1\]), there exists a $0<\beta<1$ such that $$\label{eq:52}
\begin{aligned}
& \sum_{k=2m_{n,q}}^{s_n} \sum_{j=1}^{w_n}
P \left(\left|V_{k,j,q}-V_{k,j,q+1}\right|
\geq \delta \phi_n\right) \\
& \quad \leq n^{1+\eta-M}
+ C_{p,M}\cdot n^{1+\eta-p/2} (\log n)^{3p/2}
\left[ n^{\beta^q (p/2-1-\alpha\beta p)}
+ n^{\beta^q (p/4-1)}\right]
=o(1).
\end{aligned}$$
[*Case 2: $k \leq m_{n,q+1}/2$.*]{} Partition the block $K_j$ into consecutive smaller blocks $\mathcal{B}_1, \ldots, \mathcal{B}_{w_{n,q}}$ with size $3m_{n,q}$. Define $V_{k,j,q,t}^{(0)}$ and $V_{k,j,q,t}^{(1)}$ as in (\[eq:49\]). As in the proof of (\[eq:mdep\_app\]), we have $$\begin{aligned}
\left\|V^{(0)}_{k,j,q,t}-V^{(1)}_{k,j,q,t}\right\|_{p/2}
\leq C_p \cdot m_{n,q}^{1/2} \cdot m_{n,q+1}^{-\alpha}.
\end{aligned}$$ Similarly to (\[eq:53\]), for any $M > 1$, there exists a constant $C_M > 1$ such that $$\begin{aligned}
P\left(\left|V_{k,j,q}-V_{k,j,q+1}\right| \geq \delta \phi_n\right)
& \leq \sum_{t=1}^{w_{n,q}}
P\left(\left|V^{(0)}_{k,j,q,t}-V^{(1)}_{k,j,q,t}\right|
\geq C_M^{-1} \phi_n \right) + n^{-M} \\
& \leq n^{-M} + C_{p,M} \cdot w_{n,q} \cdot n^{-p/4} (\log n)^{p/4}
\cdot m_{n,q}^{p/4}\cdot m_{n,q+1}^{-\alpha\beta p/2}.
\end{aligned}$$ It follows that under condition (\[eq:decay\_rate1\]), there exists a $0<\beta<1$ such that $$\label{eq:56}
\begin{aligned}
& \sum_{k=1}^{m_{n,q+1}/2} \sum_{j=1}^{w_n}
P \left(\left|V_{k,j,q}-V_{k,j,q+1}\right|
\geq \delta \phi_n\right) \\
& \quad \leq n^{1+\eta-M} + C_{p,M}\cdot n^{1-p/4} (\log n)^{p/4}
\cdot \left(n^{\beta^q}\right)^{p/4-\alpha\beta p/2}
=o(1).
\end{aligned}$$
[*Case 3: $m_{n,q+1}/2 < k < 2m_{n,q}$.*]{} We use the same argument as in Case 2. But this time we claim that $$\begin{aligned}
\label{eq:57}
\left\|V^{(0)}_{k,j,q,t}-V^{(1)}_{k,j,q,t}\right\|_{p/2}
\leq C_p \left[ m_{n,q}^{1/2} \cdot m_{n,q+1}^{-\alpha}
+ m_{n,q}\zeta_p(k)\right],
\end{aligned}$$ where $\zeta_p(k)$ is defined in (\[eq:fact4\]). Since $\sum_{k=m}^\infty [\zeta_p(k)]^{p/2} \leq
\left[\sum_{k=m}^\infty\zeta_p(k)\right]^{p/2} = O(m^{-\alpha
p/2})$, under condition (\[eq:decay\_rate\]), there exist constants $C_{p,M}>1$ and $0<\beta<1$ such that for $M$ large enough $$\label{eq:58}
\begin{aligned}
& \sum_{k>m_{n,q+1}/2}^{2m_{n,q}-1} \sum_{j=1}^{w_n}
P \left(\left|V_{k,j,q}-V_{k,j,q+1}\right| \geq \delta \phi_n\right)
\leq C_{p,M}\cdot n^{1-p/4} (\log n)^{p/4} m_{n,q}^{p/4-\alpha\beta p/2} \\
& \qquad + n^{1+\eta-M}
+ C_{p,M}\cdot n^{1-p/4} (\log n)^{p/4}
\cdot m_{n,q}^{p/2-1} \sum_{k>m_{n,q+1}/2}^{2m_{n,q}-1}
[\zeta_p(k)]^{p/2} \\
& \quad \leq n^{1+\eta-M} + C_{p,M}\cdot n^{1-p/4} (\log n)^{p/4}
\cdot m_{n,q}^{p/2-1-\alpha\beta p/2}
=o(1).
\end{aligned}$$ Alternatively, if we use the bound from (\[eq:fact9\]), $\left\|V^{(0)}_{k,j,q,t}-V^{(1)}_{k,j,q,t}\right\|_{p/2} \leq C_p
m_{n,q}^{1/2} \cdot m_{n,q+1}^{-\alpha'}$, it is still true that under condition (\[eq:decay\_rate\]), there exist constants $C_{p,M}>1$ and $0<\beta<1$ such that for $M$ large enough $$\label{eq:10}
\begin{aligned}
\sum_{k>m_{n,q+1}/2}^{2m_{n,q}-1} & \sum_{j=1}^{w_n}
P \left(\left|V_{k,j,q}-V_{k,j,q+1}\right|
\geq \delta \phi_n\right) \\
& \leq n^{1+\eta-M} + C_{p,M}\cdot n^{1-p/4} (\log n)^{p/4}
\cdot m_{n,q}^{p/2-1-\alpha'\beta p/2}
=o(1).
\end{aligned}$$ Combining (\[eq:52\]), (\[eq:56\]), (\[eq:58\]) and (\[eq:10\]), we have shown that $$\begin{aligned}
\label{eq:61}
\lim_{n\to\infty}\sum_{k=1}^{s_n} \sum_{j=1}^{w_n}
P\left(\left|V_{k,j,q}-V_{k,j,q+1}\right| \geq \delta \phi_n\right) = 0
\end{aligned}$$ for $1\leq q <q_1$. Therefore, to prove (\[eq:44\]), it suffices to show $$\begin{aligned}
\label{eq:60}
\lim_{n\to\infty}\sum_{k=1}^{s_n} \sum_{j=1}^{w_n}
P\left(|V_{k,j,q_1}| \geq \delta \phi_n \right)=0.
\end{aligned}$$ By considering the two cases (i) $2m_{n,q_1} \leq k \leq s_n$ and (ii) $1\leq k <2m_{n,q_1}$ under the condition $\beta^{q_1}<\min\{(p-4)/p, (p-2-2\eta)/(p-2)\}$, and using arguments similar to those proving (\[eq:61\]), we can obtain (\[eq:60\]). The proof of Lemma \[thm:small\_blk\] is complete.
We now turn to the proof of the two claims (\[eq:50\]) and (\[eq:57\]). For (\[eq:57\]), we have $$\begin{aligned}
\left\|V^{(0)}_{k,j,q,t}-V^{(1)}_{k,j,q,t}\right\|_{p/2}
\leq & \left\|\sum_{i\in\mathcal{B}_t}
( X_{i-k,q} - X_{i-k,q+1})X_{i,q+1}\right\|_{p/2}
+ \left\|\sum_{i\in\mathcal{B}_t}
{{\mathbb{E}}}_0\left[X_{i-k,q+1}( X_{i,q}-X_{i,q+1})\right]\right\|_{p/2} \\
& + \left\|\sum_{i\in\mathcal{B}_t}
{{\mathbb{E}}}_0\left[( X_{i-k,q}-X_{i-k,q+1})(X_{i,q}-X_{i,q+1})\right]\right\|_{p/2}
=: I + \II + \III.
\end{aligned}$$ As in the proof of (\[eq:mdep\_app\]), we have $$\begin{aligned}
I \leq {\mathcal{C}}_{p/2}\Theta_p\Theta_p(m_{n,q+1}+1)\cdot\sqrt{3m_{n,q}}
\quad\hbox{and}\quad
\III \leq 4\,{\mathcal{C}}_{p/2}\Theta_p\Theta_p(m_{n,q+1}+1)\cdot\sqrt{3m_{n,q}}.
\end{aligned}$$ For the second term $\II$, write $$\begin{aligned}
{{\mathbb{E}}}_0\left[X_{i-k,q+1}( X_{i,q}-X_{i,q+1})\right]
= \sum_{l_1=0}^{m_{n,q+1}} \sum_{l_2=m_{n,q+1}+1}^{m_{n,q}}
{{\mathbb{E}}}_0\left[({\mathcal{P}}_{i-k-l_1}X_{i-k})({\mathcal{P}}_{i-l_2}X_{i})\right].
\end{aligned}$$ For a pair $(l_1,l_2)$ such that $i-k-l_1 \neq i-l_2$, by the inequality (\[eq:mar\_zyg\]), we have $$\begin{aligned}
\left\|\sum_{i \in \mathcal{B}_t}
({\mathcal{P}}_{i-k-l_1}X_{i-k})({\mathcal{P}}_{i-l_2}X_{i})\right\|_{p/2}
\leq {\mathcal{C}}_{p/2} \delta_p(l_1)\delta_p(l_2)\cdot \sqrt{3m_{n,q}}.
\end{aligned}$$ For the pairs $(l_1,l_2)$ such that $i-k-l_1 = i-l_2$, by the triangle inequality $$\begin{aligned}
\left\|\sum_{i\in\mathcal{B}_t}\sum_{l=0}^{m_{n,q+1}}
{{\mathbb{E}}}_0\left[({\mathcal{P}}_{i-k-l}X_{i-k})({\mathcal{P}}_{i-k-l}X_{i})\right]\right\|_{p/2}
\leq 3m_{n,q}\cdot2\sum_{l=0}^{m_{n,q+1}} \delta_p(l)
\delta_p(k+l) \leq 6m_{n,q} \zeta_p(k).
\end{aligned}$$ Putting these pieces together, the proof of (\[eq:57\]) is complete. The key observation in proving (\[eq:50\]) is that since $k \geq 2m_{n,q}$, $X_{i-k,q}$ and $X_{i,q}$ are independent, hence the product $X_{i-k,q}X_{i,q}$ has finite $p$-th moment. The rest of the proof is similar to that of (\[eq:57\]). Details are omitted.
\[rk:decay\_rate\] Condition (\[eq:decay\_rate\]) is only used to deal with Case 3, while (\[eq:decay\_rate1\]) suffices for the rest of the proof. In fact, for linear processes, one can show that the term $m_{n,q}\zeta_p(k)$ in (\[eq:57\]) can be removed, so we have (\[eq:58\]) under condition (\[eq:decay\_rate1\]) and do not need (\[eq:10\]). So (\[eq:decay\_rate1\]) suffices for Theorem \[thm:single\]. Furthermore, for nonlinear processes with $\delta_p(k)=O\left[k^{-(1/2+\alpha)}\right]$, the term $m_{n,q}\zeta_p(k)$ can also be removed from (\[eq:57\]). Details are omitted.
### Step 3: Truncate sums over large blocks
We need to show for any $\lambda>0$ $$\begin{aligned}
\lim_{n\to\infty}\sum_{k=1}^{s_n}
P\left(\left|\sum_{j=1}^{w_n} (U_{k,j}-\bar U_{k,j})\right|
\geq \lambda \sqrt{\frac{n}{\log s_n}}\right) =0.
\end{aligned}$$ Using (\[eq:fact5\]), elementary calculation gives $$\begin{aligned}
\label{eq:31}
\left\|{\tilde U}_{k,j}-\bar{U}_{k,j}\right\|^2
\leq \frac{{{\mathbb{E}}}|{\tilde U}_{k,j}|^{p/2}}{(\sqrt{n}/\log s_n)^{p/2-2}}
\leq \frac{(2{\mathcal{C}}_{p/2}\kappa_p\Theta_p)^{p/2}|H_j|^{p/4}
(\log s_n)^{3(p-4)/2}}{n^{(p-4)/4}}.
\end{aligned}$$ Similarly to (\[eq:48\]), for any $M>1$, there exists a constant $C_{M}>1$ such that $$ \begin{aligned}
P \left(\left|\sum_{j=1}^{w_n} (U_{k,j}-\bar U_{k,j})\right|
\geq \lambda \sqrt{\frac{n}{\log s_n}}\right)
\leq & \sum_{j=1}^{w_n} P\left(|U_{k,j}-\bar U_{k,j}|
\geq C_{M}^{-1} \lambda \sqrt{\frac{n}{\log s_n}}\right) \\
& +\left(\frac{C_p\sum_{j=1}^{w_n}|H_j|^{p/4}(\log n)^{3p/2}}
{C_{M}^{-1} \lambda^2 n^{p/4}}\right)^{C_{M}/2} \\
\leq & \sum_{j=1}^{w_n} P\left(|U_{k,j}-\bar U_{k,j}|
\geq C_{M}^{-1}\sqrt{\frac{n}{\log s_n}}\right) + n^{-M}.
\end{aligned}$$ Therefore, it suffices to show that for any $\delta>0$, $$\begin{aligned}
\lim_{n\to\infty}\sum_{k=1}^{s_n} \sum_{j=1}^{w_n}
P\left(|U_{k,j}-\bar U_{k,j}| \geq \delta\sqrt{\frac{n}{\log n}}\right)=0.
\end{aligned}$$ Since we can use the same arguments as those for (\[eq:44\]), Lemma \[thm:large\_blk\] follows.
### Step 4: Compare covariance structures
Since $|\bar U_{k,j}| \leq 2\sqrt{n}/(\log s_n)$ and ${{\mathbb{E}}}\bar
U_{k,j}^2 \leq {{\mathbb{E}}}U_{k,j}^2 \leq 4(\kappa_4\Theta_4)^2|H_j|$, by Bernstein’s inequality [cf. Fact 2.3, @einmahl:1997], we have $$\begin{aligned}
P \left(|{\mathcal{R}}_{n,k}| > \sqrt{\sigma_0 n\log s_n}\right)
\leq \exp\left\{-\frac{(\sigma_0 n\log s_n)/2}
{4(\kappa_4\Theta_4)^2 n
+ n \sqrt{\sigma_0/(\log s_n)}}\right\}.
\end{aligned}$$ Therefore, for any $0< \iota < \sigma_0/[8(\kappa_4 \Theta_4)^2]$, (\[eq:small\_lag\]) holds.
For $1\leq j\leq w_n$, by (\[eq:31\]), we have $$\begin{aligned}
\left|{{\mathbb{E}}}(\bar U_{k,j}\bar U_{k+h,j})
- {{\mathbb{E}}}({\tilde U}_{k,j}{\tilde U}_{k+h,j})\right|
& \leq \|\bar U_{k,j}-{\tilde U}_{k,j}\|\|\bar U_{k+h,j}\|
+ \|{\tilde U}_{k,j}\|\|\bar U_{k+h,j} - {\tilde U}_{k+h,j}\| \cr
& \leq 4\kappa_4\Theta_4|H_j|^{1/2}
\frac{(2{\mathcal{C}}_{p/2}\kappa_p\Theta_p)^{p/4}|H_j|^{p/8}(\log s_n)^{3(p-4)/4}}{n^{(p-4)/8}}\cr
& \leq C_{p} |H_j| n^{-(1-\gamma)(p-4)/8}(\log n)^{3(p-4)/4}.
\end{aligned}$$ Let $S_{k,j}=\sum_{i\in H_j}(X_{i-k}X_i- \gamma_k)$, by (\[eq:fact5\]) and (\[eq:mdep\_app\]), we have $$\begin{aligned}
\left|{{\mathbb{E}}}(S_{k,j}S_{k+h,j})
- {{\mathbb{E}}}(\tilde U_{k,j}\tilde U_{k+h,j})\right|
& \leq \|S_{k,j}-\tilde U_{k,j}\|\|S_{k+h,j}\|
+ \|\tilde U_{k,j}\|\|S_{k+h,j} - \tilde U_{k+h,j}\| \cr
& \leq 4\kappa_4\Theta_4|H_j|^{1/2}\cdot
6\Theta_4\Theta_4(m_n-k+1) |H_j|^{1/2}
\leq C|H_j|n^{-\alpha\beta}.
\end{aligned}$$ Since $\Theta_4(m) = O(m^{-\alpha})$, elementary calculations show that $\Delta_4(m) = O(m^{-\alpha^2/(1+\alpha)})$, which together with Lemma \[thm:covconvergence\] implies that if $k>t_n$, $$\begin{aligned}
\left|{{\mathbb{E}}}(\tilde U_{k,j}\tilde U_{k+h,j})/|H_j| - \sigma_h\right|
& \leq \Theta_{4}^3\left(16\Delta_4(t_n/3+1) +
6\Theta_4\sqrt{t_n/l_n} + 4\Psi_4(t_n/3+1)\right)\cr
& \leq C \left(s_n^{-\alpha^2\iota/(1+\alpha)}+n^{-(1-\iota)\gamma/2}\right).
\end{aligned}$$ Choose $\ell$ such that $0<\ell<\min\{(1-\eta)(p-4)/8,\,
\alpha\beta, \, \alpha^2\iota/(1+\alpha),\, (1-\iota)\gamma/2,\,
\gamma-\beta\}$. Then $$\begin{aligned}
|\operatorname{Cov}({\mathcal{R}}_{n,k}, {\mathcal{R}}_{n,k+h})/n-\sigma_h|
& \leq C_p \Big(n^{-(1-\eta)(p-4)/8}(\log n)^{(p-4)/4}
+ n^{-\alpha\beta} \cr
& \qquad + s_n^{-\alpha^2\iota/(1+\alpha)}
+n^{-(1-\iota)\gamma/2}\Big)
+ {{2w_n m_n\sigma_0} \over n}
\leq C_p\,s_n^{-\ell}
\end{aligned}$$ and the lemma follows.
### Step 5: Moderate deviations.
Note that for ${\boldsymbol{x}}, {\boldsymbol{y}} \in {{\mathbb{R}}}^{d}$, $|{\boldsymbol{x}}+{\boldsymbol{y}}|_\bullet\leq
|{\boldsymbol{x}}|_\bullet+|{\boldsymbol{y}}|$. Let ${\boldsymbol{Z}}\sim\mathcal{N}(0,I_d)$ and $\theta_n=(\log s_n)^{-1}$. Since $|\bar U_{k,j}| \leq
2\sqrt{n}/(\log s_n)^3$, by Fact 2.2 of [@einmahl:1997], $$\begin{aligned}
P(|{\boldsymbol{{\mathcal{R}}}}_n/\sqrt{n}|_\bullet\geq z_n)
& \leq P(|\Sigma_n^{1/2}{\boldsymbol{Z}}|_{\bullet} \geq z_n - \theta_n)
+ P (|{\boldsymbol{{\mathcal{R}}}}_n/\sqrt{n} - \Sigma_n^{1/2}{\boldsymbol{Z}}| \geq \theta_n) \\
& \leq P(|\Sigma_n^{1/2}{\boldsymbol{Z}}|_{\bullet} \geq z_n - \theta_n)
+ C_{p,d}\, \exp\left\{-C_{p,d}^{-1}(\log s_n)^2\right\}.
\end{aligned}$$ By Lemma \[thm:eigenbdd\], the smallest eigenvalue of $\Sigma$ is bounded from below by some $c_d>0$ uniformly on $1 \leq
k_1<k_2<\cdots<k_d$. By Lemma \[thm:cov\_struc\] we have $\rho(\Sigma^{1/2}_n-\Sigma^{1/2}) \leq
c_d^{-1/2}\cdot\rho(\Sigma_n-\Sigma)\leq C_{p,d} \,s_n^{-\ell}$, where the first inequality is taken from Problem 7.2.17 of [@horn:1990]. It follows by (\[eq:normbdd\]) and elementary calculations that $$\begin{aligned}
P(|\Sigma_n^{1/2}{\boldsymbol{Z}}|_{\bullet} \geq z_n - \theta_n)
& \leq P(|\Sigma^{1/2}{\boldsymbol{Z}}|_{\bullet} \geq z_n - 2\theta_n)
+ P\left[\left|\left(\Sigma^{1/2}_n-\Sigma^{1/2}\right){\boldsymbol{Z}}\right|
\geq \theta_n \right] \\
& \leq P(|\Sigma^{1/2}{\boldsymbol{Z}}|_{\bullet} \geq z_n - 2\theta_n)
+ C_{p,d}\, \exp\left\{-C_{p,d}^{-1}(\log s_n)^2\right\}.
\end{aligned}$$ By Lemma \[thm:lemma1\], we have $$\begin{aligned}
P(|\Sigma^{1/2}{\boldsymbol{Z}}|_{\bullet} \geq z_n - 2\theta_n)
\leq \left[1+C_{p,d}(\log s_n)^{-1/2}\right]
P(|\Sigma^{1/2}{\boldsymbol{Z}}|_{\bullet} \geq z_n).
\end{aligned}$$ Putting these pieces together and observing that ${\boldsymbol{V}}$ and $\Sigma^{1/2}{\boldsymbol{Z}}$ have the same distribution, we have $$\begin{aligned}
P(|{\boldsymbol{{\mathcal{R}}}}_n/\sqrt{n}|_\bullet\geq z_n)
\leq \left[1+C_{p,d}(\log s_n)^{-1/2}\right] P(|{\boldsymbol{V}}|_\bullet\geq z_n)
+ C_{p,d}\, \exp\left\{-C_{p,d}^{-1}(\log s_n)^2\right\},
\end{aligned}$$ which together with a similar lower bound completes the proof of Lemma \[thm:vec\_mod\_dev\].
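The covariance comparison used in this step can be illustrated numerically: a small perturbation of $\Sigma^{1/2}$ moves the tail probability of $|\Sigma^{1/2}{\boldsymbol{Z}}|_\bullet$ only slightly. The following Python sketch is an illustration only, for $d=2$, taking $|\cdot|_\bullet$ to be the maximum norm and using arbitrary toy matrices; both choices are assumptions made here for the sketch, since the actual norm and covariances are fixed earlier in the paper.

```python
import math, random

def max_abs_gauss(cov_chol, rng):
    # one draw of |A Z|_infinity for a 2x2 lower-triangular A and Z ~ N(0, I_2)
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x1 = cov_chol[0][0] * z1
    x2 = cov_chol[1][0] * z1 + cov_chol[1][1] * z2
    return max(abs(x1), abs(x2))

rng = random.Random(2)
A = [[1.0, 0.0], [0.3, 0.95]]   # toy square root, playing the role of Sigma^{1/2}
B = [[1.01, 0.0], [0.3, 0.96]]  # small perturbation, playing the role of Sigma_n^{1/2}
z, reps = 2.0, 20000
pa = sum(max_abs_gauss(A, rng) >= z for _ in range(reps)) / reps
pb = sum(max_abs_gauss(B, rng) >= z for _ in range(reps)) / reps
print(abs(pa - pb))  # small: the two tail probabilities are close
```

The Monte Carlo gap is of the same order as the perturbation, matching the role of the $\rho(\Sigma_n^{1/2}-\Sigma^{1/2})$ bound above.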
### Proof of Theorem \[thm:single\]
After these preparatory steps, we are now ready to prove Theorem \[thm:single\].
Set $z_n = a_{2s_n}\,x+b_{2s_n}$. It suffices to show $$\begin{aligned}
\label{eq:gumbel}
\lim_{n\to\infty}P\left(\max_{t_n< k\leq s_n}|{\mathcal{R}}_k/\sqrt{n}|
\leq \sqrt{\sigma_0} z_n\right) = \exp\{-\exp(-x)\}.
\end{aligned}$$ Without loss of generality assume $\sigma_0=1$. Define the events $A_k=\{G_k\geq z_n\}$ and $B_k=\{{\mathcal{R}}_k/\sqrt{n}\geq z_n\}$. Let $$\begin{aligned}
Q_{n,d} = \sum_{t_n< k_1<\ldots<k_d\leq s_n}
P(A_{k_1}\cap \cdots \cap A_{k_d})
\quad\hbox{and}\quad
\tilde Q_{n,d} = \sum_{t_n< k_1<\ldots<k_d\leq s_n}
P(B_{k_1}\cap \cdots \cap B_{k_d}).
\end{aligned}$$ By the inclusion-exclusion formula, we know for any $q\geq 1$ $$\begin{aligned}
\label{eq:inex}
\sum_{d=1}^{2q} (-1)^{d-1}\tilde Q_{n,d}
\leq P\left(\max_{t_n< k\leq s_n}|{\mathcal{R}}_k/\sqrt{n}|
\geq a_{2s_n}\,x+b_{2s_n}\right)
\leq \sum_{d=1}^{2q-1} (-1)^{d-1}\tilde Q_{n,d}.
\end{aligned}$$ By Lemma \[thm:vec\_mod\_dev\], $|\tilde Q_{n,d}-Q_{n,d}| \leq
C_{p,d}(\log s_n)^{-1/2}Q_{n,d} + s_n^{-1}. $ By Lemma \[thm:normalcomparison\] with elementary calculations, we know $\lim_{n\to \infty}Q_{n,d} = e^{-dx}/d!$, and hence $\lim_{n
\to \infty}\tilde Q_{n,d} = e^{-dx}/d!$. By letting $n$ go to infinity first and then $q$ go to infinity in (\[eq:inex\]), we obtain (\[eq:gumbel\]), and the proof is complete.
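The Gumbel limit in (\[eq:gumbel\]) can be illustrated by simulation. The following Python sketch is an illustration only: it uses independent standard normals in place of the normalized autocovariances, and the normalizing constants $a_n$ and $b_n$ below are the classical sequences for the Gaussian maximum, an assumption here since their exact form is fixed earlier in the paper.

```python
import math, random

def gumbel_cdf(x):
    return math.exp(-math.exp(-x))

def normalized_max(n, rng):
    # max of n iid N(0,1), centered and scaled by the classical constants
    a = (2 * math.log(n)) ** -0.5
    b = math.sqrt(2 * math.log(n)) \
        - (math.log(math.log(n)) + math.log(4 * math.pi)) / (2 * math.sqrt(2 * math.log(n)))
    m = max(rng.gauss(0, 1) for _ in range(n))
    return (m - b) / a

rng = random.Random(0)
n, reps, x = 2000, 500, 1.0
emp = sum(normalized_max(n, rng) <= x for _ in range(reps)) / reps
print(emp, gumbel_cdf(x))  # the two values should be close
```

The agreement is only rough at moderate $n$, reflecting the well-known slow (logarithmic) rate of convergence to the Gumbel law.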
Proof of Theorem \[thm:covorder\]
---------------------------------
We start with an $m$-dependence approximation similar to the one in the proof of Theorem \[thm:single\]. Set $m_n={\lfloor n^\beta \rfloor}$ for some $0<\beta<1$. Define $\tilde X_i = {\mathcal{H}}_{i-m_n} X_i$, $\tilde \gamma_k = {{\mathbb{E}}}(\tilde X_0 \tilde X_k)$, and $\tilde
R_{n,k}=\sum_{i=k+1}^n (\tilde X_{i-k} \tilde X_i - \tilde
\gamma_k )$. Similarly to the proof of Lemma \[thm:small\_blk\], we have under condition (\[eq:decay\_rate2\]) $$\begin{aligned}
\max_{1 \leq k <n} |R_{n,k}-\tilde R_{n,k}|
= o_P\left(\sqrt{{n}/{\log n}}\right).
\end{aligned}$$ For $\tilde R_{n,k}$, we consider two cases according to whether $k
\geq 3m_n$ or not.
[*Case 1: $k \geq 3m_n$.*]{} We first split the interval $[k+1,n]$ into the following big blocks of size $(k-m_n)$ $$\begin{aligned}
& H_j=[k+(j-1)(k-m_n)+1,k+j(k-m_n)]
\quad \hbox{for } 1 \leq j \leq w_n-1 \\
& H_{w_n}=[k+(w_n-1)(k-m_n)+1,n],
\end{aligned}$$ where $w_n$ is the smallest integer such that $k+w_n(k-m_n) \geq
n$. For each block $H_j$, we further split it into small blocks of size $2m_n$ $$\begin{aligned}
& K_{j,l}=[k+(j-1)(k-m_n)+(l-1)2m_n+1,k+(j-1)(k-m_n)+2lm_n] \quad \hbox{for } 1 \leq l < v_j \\
& K_{j,v_j}=[k+(j-1)(k-m_n)+(v_j-1)2m_n+1,k+(j-1)(k-m_n)+|H_j|]
\end{aligned}$$ where $v_j$ is the smallest integer such that $2m_nv_j\geq|H_j|$. Now define $U_{k,j,l}=\sum_{i \in K_{j,l}}
\tilde X_{i-k}\tilde X_i$ and $$\begin{aligned}
\label{eq:11}
\tilde R_{n,k}^{u,1}=\sum_{j \equiv u \!\!\!\!\pmod 3}\sum_{l \hbox{ \scriptsize{odd}}} U_{k,j,l} \quad \hbox{and} \quad
\tilde R_{n,k}^{u,2}=\sum_{j \equiv u \!\!\!\!\pmod 3}\sum_{l \hbox{ \scriptsize{even}}} U_{k,j,l}
\end{aligned}$$ for $u=0,1,2$. Observe that each $\tilde R_{n,k}^{u,o}$ ($u=0,1,2; \;o=1,2$) is a sum of independent random variables. By (\[eq:fact5\]), $\|U_{k,j,l}\| \leq
2\kappa_4\Theta_4|U_{k,j,l}|^{1/2}$. By Corollary 1.7 of [@nagaev:1979] where we take $y_i=\sqrt{n}$ in their result, we have for any $\lambda>0$ $$\label{eq:40}
\begin{aligned}
P & \left(|\tilde R_{n,k}| \geq 6 \lambda \sqrt{n\log n}\right)
\leq \sum_{u=0}^2 \sum_{o=1,2}
P\left(\left|\tilde R_{n,k}^{u,o}\right| \geq \lambda \sqrt{n\log n}\right) \\
& \leq \sum_{u=0}^2 \sum_{o=1,2} \sum_{j,l}^{\ast} P \left(|U_{k,j,l}|
\geq \lambda \sqrt{n\log n}\right)
+ 12 \left(\frac{C_p \,n^{1-\beta} \cdot
n^{\beta p/4}}{n^{p/4}}\right)^{p\sqrt{\log n}/(p+4)} \\
& \quad + 12\exp\left\{-\frac{2 \lambda^2}
{(p+4)^2\cdot e^{p/2}\cdot\kappa_4^2\cdot\Theta_4^2}
\cdot\log n\right\} =: I_{n,k} + \II_{n,k} + \III_{n,k},
\end{aligned}$$ where the range of $j,l$ in the sum $\sum_{j,l}^{\ast}$ is as in (\[eq:11\]). Clearly, $\sum_{k=3m_n}^{n-1}
\II_{n,k}=o(1)$. Similarly to the proof of Lemma \[thm:small\_lag\], we can show that $\sum_{k=3m_n}^{n-1}
I_{n,k}=o(1)$. Therefore, if $\lambda = c_p/6$, then $\sum_{k=3m_n}^{n-1} \III_{n,k}=O(n^{-1})$.
[*Case 2: $1\leq k < 3m_n$.*]{} This case is easier. By splitting the interval $[k+1,n]$ into blocks of size $4m_n$ and using a similar argument as in (\[eq:40\]), we have $$\begin{aligned}
\lim_{n\to\infty}\sum_{k=1}^{3m_n-1} P \left(|\tilde R_{n,k}| \geq c_p \sqrt{n\log n}\right) =0.
\end{aligned}$$ The proof is complete.
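The big-block construction of Case 1 can be checked mechanically. A minimal Python sketch (illustrative only; the values of $n$, $k$ and $m_n$ are arbitrary choices) builds the blocks $H_j$ of size $k-m_n$ and verifies that they partition $[k+1,n]$:

```python
def big_blocks(n, k, m):
    # w is the smallest integer with k + w*(k - m) >= n
    size = k - m
    w = -(-(n - k) // size)  # ceiling division
    blocks = [list(range(k + (j - 1) * size + 1, k + j * size + 1))
              for j in range(1, w)]
    blocks.append(list(range(k + (w - 1) * size + 1, n + 1)))  # last block, possibly shorter
    return blocks

n, k, m = 100, 20, 5
H = big_blocks(n, k, m)
flat = [i for b in H for i in b]
print(flat == list(range(k + 1, n + 1)))  # True: the blocks partition [k+1, n]
```

The same check works for any admissible $k \geq 3m_n$, confirming that no index of $[k+1,n]$ is lost or duplicated by the blocking.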
Box-Pierce tests
----------------
Similarly to the proof of Theorem \[thm:single\], we use $m$-dependence approximations and blocking arguments to prove Theorem \[thm:ljung\]. We first outline the intermediate steps and give the main proof in Section \[sec:proof\_ljung\], and then provide proofs of the intermediate lemmas in Section \[sec:proof\_blk\] and Section \[sec:proof\_cleared\]. We prove Theorem \[thm:ljung\_power\] in Section \[sec:proof\_power\], and prove Corollaries \[thm:ljung\_corr\] and \[thm:ljung\_corr\_power\] in Section \[sec:proof\_corollary\].
### Proof of Theorem \[thm:ljung\] {#sec:proof_ljung}
####
Recall that $R_{n,k}=\sum_{i=k+1}^n (X_{i-k}X_i-\gamma_k)$. Without loss of generality, assume $s_n \leq {\lfloor n^\beta \rfloor}$. Set $m_n=2{\lfloor n^\beta \rfloor}$. Let $\tilde X_i={\mathcal{H}}_{i-m_n}^i X_i$ and $\tilde
R_{n,k}=\sum_{i=k+1}^n (\tilde X_{i-k}\tilde X_i - \tilde\gamma_k
)$. By (\[eq:fact5\]) and (\[eq:mdep\_app\]), we know if $\Theta_4(m)=o(m^{-\alpha})$ for some $\alpha > 0$, then for all $1\leq k\leq s_n$ $$\begin{aligned}
{{\mathbb{E}}}|R_{n,k}^2 - \tilde R_{n,k}^2 | \leq \|R_{n,k}+\tilde R_{n,k}\| \cdot \|R_{n,k}-\tilde R_{n,k}\|
\leq C\, \Theta_4^3 \cdot n \cdot \Theta_4\left(m_n/2\right) = o\left(n^{1-\alpha\beta}\right).\end{aligned}$$ The condition $\sum_{k=0}^\infty k^6\delta_8(k)<\infty$ implies that $\Theta_4(m)=O(m^{-6})$. Therefore, under the conditions of Theorem \[thm:ljung\], we have $$\begin{aligned}
\frac{1}{n\sqrt{s_n}} \sum_{k=1}^{s_n}
{{\mathbb{E}}}_0\left(R_{n,k}^2 - \tilde R_{n,k}^2\right)
=o_P(1).\end{aligned}$$
####
Let $l_n={\lfloor n^\eta \rfloor}$, where $\eta \in (\beta,1)$. Split the interval $[1,n]$ into alternating small and large blocks similarly to (\[eq:blk\]): $$\begin{aligned}
& K_0=[1,s_n] \cr
& H_j = [s_n+(j-1)(2m_n+l_n)+1,s_n+(j-1)(2m_n+l_n)+l_n]
\quad 1 \leq j \leq w_n\cr
& K_{j}=[s_n+(j-1)(2m_n+l_n)+l_n+1,s_n+j(2m_n+l_n)]; \quad 1\leq j \leq w_n-1;
\quad \hbox{and} \cr
& K_{w_n}=[s_n+(w_n-1)(2m_n+l_n)+l_n+1,n],
\end{aligned}$$ where $w_n$ is the largest integer such that $s_n+(w_n-1)(2m_n+l_n)+l_n\leq n$. Define $U_{k,0}=0$, $V_{k,0}=\sum_{i\in K_0,i>k}(\tilde X_{i-k}\tilde
X_{i}-\tilde\gamma_k)$, and $U_{k,j}=\sum_{i\in H_j}(\tilde
X_{i-k}\tilde X_{i}-\tilde\gamma_k)$, $V_{k,j}=\sum_{i\in K_j}(\tilde
X_{i-k}\tilde X_{i}-\tilde\gamma_k)$ for $1\leq j \leq w_n$. Set ${\mathcal{R}}_{n,k}=\sum_{j=1}^{w_n}U_{k,j}$. Observe that by construction, $U_{k,j},1\leq j \leq w_n$ are iid random variables. In the following lemma we show that it suffices to consider ${\mathcal{R}}_{n,k}$.
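The alternating block layout can likewise be checked mechanically. A Python sketch (illustrative only; the concrete sizes are arbitrary choices) constructs $K_0$, the large blocks $H_j$ and the small blocks $K_j$ as above and verifies that together they partition $[1,n]$:

```python
def alternating_blocks(n, s, m, l):
    # K_0, then alternating large blocks H_j (length l) and small blocks K_j
    # (length 2m); w is the largest integer with s + (w-1)*(2m+l) + l <= n
    w = (n - s - l) // (2 * m + l) + 1
    K = [list(range(1, s + 1))]
    H = []
    for j in range(1, w + 1):
        start = s + (j - 1) * (2 * m + l)
        H.append(list(range(start + 1, start + l + 1)))
        if j < w:
            K.append(list(range(start + l + 1, start + 2 * m + l + 1)))
    K.append(list(range(s + (w - 1) * (2 * m + l) + l + 1, n + 1)))  # K_{w_n}, the remainder
    return K, H

K, H = alternating_blocks(200, 10, 4, 20)
merged = sorted(i for b in K + H for i in b)
print(merged == list(range(1, 201)))  # True: the blocks partition [1, n]
```

Since the large blocks $H_j$ are separated by gaps of length $2m_n$ and the $\tilde X_i$ are $m_n$-dependent, the iid claim for $U_{k,j}$ is immediate from this layout.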
\[thm:blk\] Assume $X_i \in {\mathcal{L}}^8$, ${{\mathbb{E}}}X_i=0$, and $\sum_{k=0}^\infty
k^6\delta_8(k) < \infty$. Then $$\begin{aligned}
\frac{1}{n\sqrt{s_n}}\sum_{k=1}^{s_n}
{{\mathbb{E}}}_0\left(\tilde R_{n,k}^2 - {\mathcal{R}}_{n,k}^2\right) = o_P(1).
\end{aligned}$$
####
\[thm:ljung\_cleared\] Assume $X_i \in {\mathcal{L}}^8$, ${{\mathbb{E}}}X_i=0$, and $\sum_{k=0}^\infty k^6\delta_8(k) < \infty$. Then $$\begin{aligned}
\frac{1}{n\sqrt{s_n}}\sum_{k=1}^{s_n} \left({\mathcal{R}}_{n,k}^2 - {{\mathbb{E}}}{\mathcal{R}}_{n,k}^2\right)
\Rightarrow \mathcal{N}\left(0,2\sum_{k\in{{\mathbb{Z}}}}\sigma_k^2\right).
\end{aligned}$$
We are now ready to prove Theorem \[thm:ljung\].
By Lemma \[thm:blk\] and Lemma \[thm:ljung\_cleared\], we know $$\begin{aligned}
\frac{1}{n\sqrt{s_n}}\sum_{k=1}^{s_n}
\left(R_{n,k}^2-{{\mathbb{E}}}R_{n,k}^2\right)
\Rightarrow
\mathcal{N}\left(0, 2\sum_{k\in{{\mathbb{Z}}}}\sigma_k^2\right).
\end{aligned}$$ It remains to show that $$\begin{aligned}
\label{eq:ljung_bias}
\lim_{n\to\infty} \frac{1}{n\sqrt{s_n}}\sum_{k=1}^{s_n}
\left[{{\mathbb{E}}}R_{n,k}^2-(n-k)\sigma_0\right] = 0.
\end{aligned}$$ We need Lemma \[thm:covconvergence\] with a slight modification. Observe that in equation (\[eq:21\]), we now have $\sum_{j=1}^{m_n} \Theta_2(j)^2<\infty$, and hence $$\begin{aligned}
\left|{{\mathbb{E}}}R_{n,k}^2 - (n-k)\sigma_0\right|
\leq C \left[(n-k)\Delta_4({\lfloor k/3 \rfloor}+1) + \sqrt{n-k}\right].
\end{aligned}$$ With the condition $\Theta_8(m)=o(m^{-6})$, elementary calculations show that $\Delta_4(m)=o(m^{-5})$. Then (\[eq:ljung\_bias\]) follows, and the proof is complete.
### Step 2: Throw out small blocks. {#sec:proof_blk}
Let $\mathcal{A}_2$ be the collection of all double arrays $A=(a_{ij})_{i,j\geq 1}$ such that $$\|A\|_{\infty}:=\max\left\{\sup_{i\geq
1}\sum_{j=1}^{\infty}|a_{ij}|,\, \sup_{j\geq
1}\sum_{i=1}^{\infty}|a_{ij}|\right\}<\infty.$$ For $A,B\in\mathcal{A}_2$, define $AB=(\sum_{k=1}^{\infty} a_{ik}
b_{kj})$. It is easily seen that $AB \in \mathcal{A}_2$ and $\|AB\|_{\infty} \le \|A\|_{\infty} \|B\|_{\infty}$. Furthermore, this fact implies the following proposition, which will be useful in computing sums of products of cumulants. For $d \geq 0$, let $\mathcal{A}_d$ be the collection of all $d$-dimensional arrays $A=A(i_1,i_2,\ldots,i_d)$ such that $$\|A\|_\infty:=\max_{1\leq j\leq d}
\left\{\sup_{i_j\geq 1}\sum_{\{i_k:\,k\neq j\}}
|A(i_1,i_2,\ldots,i_d)|\right\} < \infty.$$ Note that $\mathcal{A}_0={{\mathbb{R}}}$, and $\|A\|_\infty=|A|$ if $A \in
\mathcal{A}_0$.
\[thm:array\] For $k\ge 0$, $l\ge 0$ and $d \ge 1$, if $A \in
\mathcal{A}_{k+d}$ and $B \in \mathcal{A}_{l+d}$, define an array $C$ by $$\begin{aligned}
C(i_1,\ldots,i_k,i_{k+1},\ldots,i_{k+l})
=\sum_{j_1,\ldots,j_d\geq 1}
A(i_1,\ldots,i_k,j_1,\ldots,j_d)
B(j_1,\ldots,j_d,i_{k+1},\ldots,i_{k+l})
\end{aligned}$$ then $C \in \mathcal{A}_{k+l}$, and $\|C\|_{\infty}\leq
\|A\|_{\infty} \|B\|_{\infty}$.
In Lemma \[thm:mag\] we present an upper bound for $\operatorname{Cov}(R_{n,k},R_{n,h})$. We formulate the lemma in a more general way for later use in the proofs of Lemma \[thm:blk\] and Lemma \[thm:ljung\_cleared\]. For a $k$-dimensional random vector $(Y_1,\ldots,Y_k)$ such that $\|Y_i\|_k<\infty$ for $1\leq i \leq k$, denote by $\operatorname{Cum}(Y_1,\ldots,Y_k)$ its $k$-th order joint cumulant. For the stationary process $(X_i)_{i\in{{\mathbb{Z}}}}$, we write $$\gamma(k_1,k_2,\ldots,k_{d})
:= \operatorname{Cum}(X_0,X_{k_1},X_{k_2},\ldots,X_{k_{d}}).$$ We need the assumption of summability of joint cumulants in Lemma \[thm:mag\], Lemma \[thm:blk\] and Lemma \[thm:ljung\_cleared\]. For this reason, we provide a sufficient condition in Section \[sec:cum\].
\[thm:mag\] Assume $X_i \in {\mathcal{L}}^4$, ${{\mathbb{E}}}X_i=0$, $\Theta_2<\infty$ and $\sum_{k_1,k_2,k_3\in{{\mathbb{Z}}}} |\gamma(k_1,k_2,k_3)|<\infty$. For $k,h \geq 1$, $l_n\geq t_n > 0$ and $s_n \in {{\mathbb{Z}}}$, set $U_k=\sum_{i=1}^{l_n}(X_{i-k}X_i-\gamma_k)$ and $V_h=\sum_{j=s_n+1}^{s_n+t_n}(X_{j-h}X_{j}-\gamma_h)$. Then we have $$\begin{aligned}
|{{\mathbb{E}}}(U_k V_h)| \leq t_n \Xi(k,h)
\end{aligned}$$ where $\left[\Xi(k,h)\right]_{k,h\geq 1}$ is a symmetric double array of non-negative numbers such that $\Xi\in\mathcal{A}_2$, and $$\begin{aligned}
\|\Xi\|_\infty \leq 2\Theta_2^4
+ \sum_{k_1,k_2,k_3\in{{\mathbb{Z}}}} |\gamma(k_1,k_2,k_3)|.
\end{aligned}$$
Write $$\begin{aligned}
{{\mathbb{E}}}(U_k V_h)
= & \sum_{i=1}^{l_n}\sum_{j=1}^{t_n}
{{\mathbb{E}}}[(X_{i-k}X_i-\gamma_k)(X_{s_n+j-h}X_{s_n+j}-\gamma_h)]\cr
= & \sum_{i=1}^{l_n}\sum_{j=1}^{t_n}
[\gamma(-k,j+s_n-i-h,j+s_n-i) \cr
&\quad\quad + \gamma_{j+s_n-i+k-h}\gamma_{j+s_n-i}
+ \gamma_{j+s_n-i+k}\gamma_{j+s_n-i-h} ].
\end{aligned}$$ For the sum of the second term, we have $$\begin{aligned}
\left|\sum_{i=1}^{l_n}\sum_{j=1}^{t_n}
\gamma_{j+s_n-i+k-h}\gamma_{j+s_n-i}\right|
= & \bigg| \sum_{d=1}^{t_n-1}
(\gamma_{s_n+d+k-h}\gamma_{s_n+d})(t_n-d) \cr
&\quad + t_n\sum_{d=t_n-l_n}^0
\gamma_{s_n+d+k-h}\gamma_{s_n+d} \cr
& \quad + \sum_{d=1-l_n}^{t_n-l_n-1}
(\gamma_{s_n+d+k-h}\gamma_{s_n+d})(l_n+d)\bigg|\cr
\leq & t_n\sum_{d \in {{\mathbb{Z}}}} |\gamma_{s_n+d+k-h}\gamma_{s_n+d}|
\cr
\leq & t_n\sum_{d \in {{\mathbb{Z}}}}\zeta_{d+k-h}\zeta_{d}.
\end{aligned}$$ Similarly, for the sum of the last term $$\begin{aligned}
\left|\sum_{i=1}^{l_n}\sum_{j=1}^{t_n}
\gamma_{j+s_n-i+k}\gamma_{j+s_n-i-h}\right|
\leq & t_n\sum_{d\in{{\mathbb{Z}}}} \zeta_{d+k+h}\zeta_{d}.
\end{aligned}$$ Observe that $\sum_{h=1}^{\infty}\sum_{d \in {{\mathbb{Z}}}} \zeta_{d+k-h}
\zeta_{d} \leq \left(\sum_{d\in{{\mathbb{Z}}}}\zeta_d\right)^2 \leq \Theta_2^4$ and similarly $\sum_{h=1}^{\infty}\sum_{d \in
{{\mathbb{Z}}}}\zeta_{d+k+h}\zeta_{d} \leq \Theta_2^4$. For the sum of the first term, it holds that $$\begin{aligned}
\left|\sum_{i=1}^{l_n}\sum_{j=1}^{t_n}
\gamma(-k,j+s_n-i-h,j+s_n-i) \right|
\leq t_n\sum_{d\in{{\mathbb{Z}}}}|\gamma(-k,d-h,d)|.
\end{aligned}$$ Utilizing the summability of cumulants, the proof is complete.
In the proof of Lemma \[thm:blk\], we need the concept of [*indecomposable partitions*]{}. Consider the table
$$\begin{array}{ccc}
(1,1) & \cdots & (1,J_1) \\
\vdots & & \vdots \\
(I,1) & \cdots & (I,J_I)
\end{array}$$
Denote the $j$-th row of the table by $\vartheta_j$. A partition ${\boldsymbol{\nu}} = \{\nu_1,\ldots,\nu_q\}$ of the table is said to be [*indecomposable*]{} if there are no sets $\nu_{i_1},\ldots,\nu_{i_k}$ ($k<q$) and rows $\vartheta_{j_1},\ldots,\vartheta_{j_l}$ ($l<I$) such that $\nu_{i_1}
\cup \cdots \cup \nu_{i_k} = \vartheta_{j_1} \cup \cdots \cup \vartheta_{j_l}$.
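For small tables the definition can be verified by brute force. A Python sketch (illustration only; such enumeration is feasible just for tiny tables) lists all set partitions of the $2\times 2$ table and counts those that are indecomposable; of the $15$ partitions, the $4$ whose blocks never cross the two rows are decomposable, leaving $11$:

```python
from itertools import combinations

def set_partitions(items):
    # recursively generate all set partitions of a list
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]

def indecomposable(partition, rows):
    # decomposable: a proper subfamily of blocks unions to a proper union of rows
    blocks = [frozenset(b) for b in partition]
    row_unions = set()
    for r in range(1, len(rows)):
        for rows_subset in combinations(rows, r):
            row_unions.add(frozenset().union(*rows_subset))
    for k in range(1, len(blocks)):
        for blocks_subset in combinations(blocks, k):
            if frozenset().union(*blocks_subset) in row_unions:
                return False
    return True

rows = [frozenset({(1, 1), (1, 2)}), frozenset({(2, 1), (2, 2)})]
table = [e for row in rows for e in row]
count = sum(indecomposable(p, rows) for p in set_partitions(table))
print(count)  # 11 of the 15 partitions of the 2x2 table are indecomposable
```

In the cumulant expansion (\[eq:cum\_decompose\]) it is exactly these crossing partitions that contribute, which is why decomposable ones may be discarded.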
Write $$\begin{aligned}
\sum_{k=1}^{s_n} {{\mathbb{E}}}_0(\tilde R_{n,k}^2 - {\mathcal{R}}_{n,k}^2)
& = 2\sum_{k=1}^{s_n} {{\mathbb{E}}}_0\left[{\mathcal{R}}_{n,k}(\tilde
R_{n,k}-{\mathcal{R}}_{n,k})\right]
+ \sum_{k=1}^{s_n} {{\mathbb{E}}}_0(\tilde R_{n,k}-{\mathcal{R}}_{n,k})^2 \cr
& =: 2 I_n + \II_n.
\end{aligned}$$ Using Lemma \[thm:ljung\_cleared\], we know $\II_n / (n\sqrt{s_n})
= o_P(1)$. We can express $I_n$ as $$\begin{aligned}
I_n = \sum_{a=0}^1\sum_{b=0}^1 I_{n, a b}
= I_{n, 00} + I_{n, 01} + I_{n, 10} + I_{n, 11},
\label{eq:35}
\end{aligned}$$ where for $a,b=0,1$ (assume without loss of generality that $w_n$ is even), $$\begin{aligned}
I_{n, a b} = \sum_{k=1}^{s_n} {{\mathbb{E}}}_0\left(\sum_{j=0}^{w_n/2}U_{k,2j-a} \sum_{j=0}^{w_n/2}V_{k,2j-b}\right).
\end{aligned}$$ Consider the first term in (\[eq:35\]), write $$\begin{aligned}
{{\mathbb{E}}}(I_{n, 00}^2) &
= \sum_{k,h=1}^{s_n} {{\mathbb{E}}}\left[ \sum_{j=1}^{w_n/2}
{{\mathbb{E}}}_0(U_{k,2j}V_{k,2j})\cdot{{\mathbb{E}}}_0(U_{h,2j}V_{h,2j})\right]\cr
&\quad + \sum_{k,h=1}^{s_n} \sum_{j_1 \neq j_2}
{{\mathbb{E}}}(U_{k,2j_1}U_{h,2j_1}){{\mathbb{E}}}(V_{k,2j_2}V_{h,2j_2}) \cr
&\quad + \sum_{k,h=1}^{s_n} \sum_{j_1 \neq j_2}
{{\mathbb{E}}}(U_{k,2j_1}V_{h,2j_1}){{\mathbb{E}}}(V_{k,2j_2}U_{h,2j_2}) \cr
& =: A_n + B_n + C_n.
\end{aligned}$$ By Lemma \[thm:mag\], it holds that $$\begin{aligned}
|B_n|
&\leq \sum_{k,h=1}^{s_n} \sum_{j_1,j_2=0}^{w_n/2}
l_n|K_{2j_2}| \cdot \left[\tilde\Xi_n(k,h)\right]^2 \cr
&\leq w_nl_n \cdot (w_nm_n+2l_n) \sum_{k,h=1}^{s_n}
\left[\tilde\Xi_n(k,h)\right]^2 = o(n^2s_n),
\end{aligned}$$ where $\tilde\Xi_n(k,h)$ is the $\Xi(k,h)$ (defined in Lemma \[thm:mag\]) for the sequence $(\tilde X_i)$. Similarly, $$\begin{aligned}
|C_n| &\leq \sum_{k,h=1}^{s_n} \sum_{j_1,j_2=1}^{w_n/2}
|K_{2j_1}|\cdot |K_{2j_2}|
\cdot\left[\tilde\Xi_n(k,h)\right]^2\cr
&\leq (w_nm_n+l_n)^2 \sum_{k,h=1}^{s_n}
\left[\tilde\Xi_n(k,h)\right]^2 = o(n^2s_n).
\end{aligned}$$ To deal with $A_n$, we express it in terms of cumulants $$\begin{aligned}
A_n &= \sum_{k,h=1}^{s_n} \sum_{j=1}^{w_n/2}
\big[ \operatorname{Cum}(U_{k,2j},V_{k,2j},U_{h,2j},V_{h,2j}) \cr
&\qquad\quad
+ {{\mathbb{E}}}(U_{k,2j}U_{h,2j}){{\mathbb{E}}}(V_{k,2j}V_{h,2j}) \cr
&\qquad\quad
+ {{\mathbb{E}}}(U_{k,2j}V_{h,2j}){{\mathbb{E}}}(V_{k,2j}U_{h,2j})\big]\cr
&=: D_n + E_n + F_n.
\end{aligned}$$ Clearly, $|E_n| = o(n^2 s_n)$ and $|F_n| = o(n^2 s_n)$. Using the multilinearity of cumulants, we have $$\begin{aligned}
\operatorname{Cum}(U_{k,2j},V_{k,2j},U_{h,2j},V_{h,2j})
= \sum_{i_1,i_2\in H_{2j}} \sum_{j_1,j_2\in K_{2j}}
\operatorname{Cum}(\tilde X_{i_1-k} \tilde X_{i_1},
\tilde X_{j_1-k} \tilde X_{j_1},
\tilde X_{i_2-h} \tilde X_{i_2},
\tilde X_{j_2-h} \tilde X_{j_2})\end{aligned}$$ for $1 \le k, h \le s_n$. By Theorem II.2 of [@rosenblatt:1985], we know $$\begin{aligned}
\label{eq:cum_decompose}
\operatorname{Cum}\left(\tilde X_{i_1-k}\tilde X_{i_1},\tilde X_{j_1-k}\tilde X_{j_1},
\tilde X_{i_2-h}\tilde X_{i_2},\tilde X_{j_2-h}\tilde X_{j_2}\right)
= \sum_{{\boldsymbol{\nu}}}\prod_{q=1}^b\operatorname{Cum}(\tilde X_i,\;i\in\nu_q)
\end{aligned}$$ where the sum is over all indecomposable partitions ${\boldsymbol{\nu}}=\{\nu_1,\ldots,\nu_b\}$ of the table
$$\begin{array}{cc}
i_1-k & i_1 \\
j_1-k & j_1 \\
i_2-h & i_2 \\
j_2-h & j_2
\end{array}.$$
By Theorem \[thm:cum\], the condition $\sum_{k=0}^\infty
k^6\delta_8(k)<\infty$ implies that all the joint cumulants up to order eight are absolutely summable. Therefore, using Proposition \[thm:array\], we know $$\begin{aligned}
\sum_{k,h=1}^{s_n} \left|\operatorname{Cum}(U_{k,2j},V_{k,2j},U_{h,2j},V_{h,2j})\right| = O(|K_{2j}|s_n^2),
\end{aligned}$$ and it follows that $|D_n| = O\left((w_nm_n+l_n)s_n^2\right) =
o(n^2s_n). $ We have shown that ${{\mathbb{E}}}(I_{n, 00}^2) = o(n^2s_n), $ which, in conjunction with similar results for the other three terms in (\[eq:35\]), implies that ${{\mathbb{E}}}(I_n^2) = o(n^2 s_n)$ and hence $I_n/(n\sqrt{s_n}) = o_P(1)$. The proof is now complete.
### Step 3: Central limit theorem concerning ${\mathcal{R}}_{n,k}$’s. {#sec:proof_cleared}
Let $\Upsilon_n(k,h) := {{\mathbb{E}}}(U_{k,1} U_{h,1})$ and $\upsilon_n(k,h)
:= \Upsilon_n(k,h)/l_n$. By Lemma \[thm:mag\] we know $|\upsilon_n(k,h)|\leq \tilde \Xi_n(k,h)$. Write $$\begin{aligned}
\sum_{k=1}^{s_n} {{\mathbb{E}}}_0 {\mathcal{R}}_{n,k}^2
= & \sum_{k=1}^{s_n} \left[ \sum_{j=1}^{w_n}\left(U_{k,j}^2-\Upsilon_n(k,k)\right)
+ 2\sum_{j=1}^{w_n}\left(U_{k,j}\sum_{l=1}^{j-1}U_{k,l}\right) \right] \cr
= & \sum_{j=1}^{w_n} \left[\sum_{k=1}^{s_n} \left(U_{k,j}^2 - \Upsilon_n(k,k)\right)\right]
+ 2
\sum_{j=1}^{w_n}\left(\sum_{k=1}^{s_n}U_{k,j}\sum_{l=1}^{j-1}U_{k,l}\right).
\end{aligned}$$ Using a similar argument to the one used for the term $A_n$ in Lemma \[thm:blk\], we know $$\begin{aligned}
\sum_{j=1}^{w_n}\left\|\sum_{k=1}^{s_n}\left(U_{k,j}^2
-\Upsilon_n(k,k)\right)\right\|^2 = o(n^2s_n),
\end{aligned}$$ and it follows that $$\begin{aligned}
\frac{1}{n\sqrt{s_n}}\sum_{j=1}^{w_n}
\left[\sum_{k=1}^{s_n} \left(U_{k,j}^2 - \Upsilon_n(k,k)\right)\right]
= o_P(1).
\end{aligned}$$ Therefore, it suffices to consider $$\begin{aligned}
\sum_{j=1}^{w_n}\left(\sum_{k=1}^{s_n}U_{k,j}
\sum_{l=1}^{j-1}U_{k,l}\right)
=:\sum_{j=1}^{w_n} D_{n,j}.
\end{aligned}$$ Let $\mathcal{G}_{n,j}=\langle D_{n,1},\ldots,D_{n,j}\rangle$. Observe that $(D_{n,j})$ is a martingale difference sequence with respect to $(\mathcal{G}_{n,j})$. We shall apply the martingale central limit theorem. Write $$\begin{aligned}
{{\mathbb{E}}}\left(D_{n,j}^2|\mathcal{G}_{n,j-1}\right) - {{\mathbb{E}}}D_{n,j}^2
&= \sum_{k,h=1}^{s_n} \Upsilon_n(k,h)
\left(\sum_{l=1}^{j-1}U_{k,l}\sum_{l=1}^{j-1}U_{h,l}
- (j-1)\Upsilon_n(k,h)\right)\cr
& = \sum_{k,h=1}^{s_n} \Upsilon_n(k,h)
\left(\sum_{l=1}^{j-1}U_{k,l}U_{h,l}
-(j-1)\Upsilon_n(k,h)\right) \cr
& + \sum_{k,h=1}^{s_n} \Upsilon_n(k,h)
\left(\sum_{l=1}^{j-1}U_{k,l}\sum_{q=1}^{l-1}U_{h,q}
+ \sum_{l=1}^{j-1}U_{h,l}\sum_{q=1}^{l-1}U_{k,q}\right)\cr
& =: I_{n,j} + \II_{n,j}.
\end{aligned}$$ For the first term, by Lemma \[thm:mag\], we have $$\begin{aligned}
\left\|\sum_{j=1}^{w_n} I_{n,j}\right\|^2 = & \left\|\sum_{j=1}^{w_n-1} (w_n-j) \sum_{k,h=1}^{s_n} \Upsilon_n(k,h)
\left[U_{k,j}U_{h,j}-\Upsilon_n(k,h)\right]\right\|^2\cr
\leq & \sum_{j=1}^{w_n-1}(w_n-j)^2\left[\sum_{k,h} |\Upsilon_n(k,h)|\left\|(U_{k,j}U_{h,j}-\Upsilon_n(k,h))\right\|\right]^2\cr
\leq & w_n^3l_n^4 \left[\sum_{k,h} |\upsilon_n(k,h)| \cdot 4\Theta_8^2 \right]^2 = o(n^4s_n^2).
\end{aligned}$$ Using Lemma \[thm:mag\] and Proposition \[thm:array\], we obtain $$\begin{aligned}
& \left\|\sum_{j=1}^{w_n} \II_{n,j}\right\|^2 = \left\|\sum_{j=1}^{w_n-1} (w_n-j)
\sum_{k,h}\Upsilon_n(k,h)\left(U_{k,j}\sum_{l=1}^{j-1}U_{h,l} + U_{h,j}\sum_{l=1}^{j-1}U_{k,l}\right)\right\|^2 \cr
= & 2\sum_{j=1}^{w_n-1}(w_n-j)^2(j-1)\sum_{1\leq k_1,h_1,k_2,h_2\leq s_n}\Upsilon_n(k_1,h_1)\Upsilon_n(k_2,h_2)
\left[\Upsilon_n(k_1,k_2)\Upsilon_n(h_1,h_2) + \Upsilon_n(k_1,h_2)\Upsilon_n(h_1,k_2)\right]\cr
\leq & 4 n^4 \sum_{1\leq k_1,h_1,k_2,h_2\leq s_n}
\left|\upsilon_n(k_1,h_1)\upsilon_n(h_1,h_2)\upsilon_n(h_2,k_2)\upsilon_n(k_2,k_1)\right| = O(n^4s_n) = o(n^4s_n^2).
\end{aligned}$$ Therefore, we have $$\begin{aligned}
\frac{1}{n^2s_n}\left[\sum_{j=1}^{w_n}{{\mathbb{E}}}\left(D_{n,j}^2|\mathcal{G}_{n,j-1}\right)
-\sum_{j=1}^{w_n}{{\mathbb{E}}}D_{n,j}^2 \right] \stackrel{p}{\to} 0.
\end{aligned}$$ Using Lemma \[thm:mag\] and Lemma \[thm:covconvergence\], we know $$\begin{aligned}
\frac{1}{n^2s_n}\sum_{j=1}^{w_n}{{\mathbb{E}}}D_{n,j}^2 = \frac{1}{2n^2s_n} w_n(w_n-1)l_n^2\sum_{k,h=1}^{s_n}[\upsilon_n(k,h)]^2
\to \frac{1}{2}\sum_{k\in{{\mathbb{Z}}}}\sigma_k^2,
\end{aligned}$$ and it follows that $$\begin{aligned}
\label{eq:varcon}
\frac{1}{n^2s_n}\sum_{j=1}^{w_n}{{\mathbb{E}}}\left(D_{n,j}^2|\mathcal{G}_{n,j-1}\right)
\stackrel{p}{\to} \frac{1}{2}\sum_{k\in{{\mathbb{Z}}}}\sigma_k^2.
\end{aligned}$$ To verify the Lindeberg condition, we compute $$\begin{aligned}
{{\mathbb{E}}}D_{n,j}^4 = & \sum_{k_1,k_2,k_3,k_4=1}^{s_n}
{{\mathbb{E}}}\left(U_{k_1,j}U_{k_2,j}U_{k_3,j}U_{k_4,j}\right)\cr
& \times {{\mathbb{E}}}\left[\left(\sum_{l=1}^{j-1}U_{k_1,l}\right) \left(\sum_{l=1}^{j-1}U_{k_2,l}\right)
\left(\sum_{l=1}^{j-1}U_{k_3,l}\right)\left(\sum_{l=1}^{j-1}U_{k_4,l}\right)\right]\cr
\leq & \sum_{k_1,k_2,k_3,k_4=1}^{s_n} \left|{{\mathbb{E}}}(U_{k_1,j}U_{k_2,j}U_{k_3,j}U_{k_4,j})\right| \cdot
2\mathcal{C}_4^4(j-1)^2l_n^2\Theta_8^8.\end{aligned}$$ We express ${{\mathbb{E}}}(U_{k_1,1}U_{k_2,1}U_{k_3,1}U_{k_4,1})$ in terms of cumulants: $$\begin{aligned}
{{\mathbb{E}}}(U_{k_1,1}U_{k_2,1}U_{k_3,1}U_{k_4,1}) & = \operatorname{Cum}(U_{k_1,1},U_{k_2,1},U_{k_3,1},U_{k_4,1})
+ {{\mathbb{E}}}(U_{k_1,1}U_{k_2,1}){{\mathbb{E}}}(U_{k_3,1}U_{k_4,1}) \cr
& \quad + {{\mathbb{E}}}(U_{k_1,1}U_{k_3,1}){{\mathbb{E}}}(U_{k_2,1}U_{k_4,1}) + {{\mathbb{E}}}(U_{k_1,1}U_{k_4,1}){{\mathbb{E}}}(U_{k_2,1}U_{k_3,1})\cr
& =:A_n+B_n+E_n+F_n.
\end{aligned}$$ From Lemma \[thm:mag\], it is easily seen that $$\begin{aligned}
\sum_{k_1,k_2,k_3,k_4=1}^{s_n} |B_n| \leq l_n^2 \sum_{k_1,k_2,k_3,k_4=1}^{s_n} \tilde\Xi_n(k_1,k_2)\cdot\tilde\Xi_n(k_3,k_4)
= O(l_n^2s_n^2),
\end{aligned}$$ and similarly $\sum_{k_1,k_2,k_3,k_4=1}^{s_n} |E_n| = O(l_n^2s_n^2)$ and $\sum_{k_1,k_2,k_3,k_4=1}^{s_n} |F_n| = O(l_n^2s_n^2)$. By multilinearity of cumulants, $$\begin{aligned}
A_n = \sum_{i_1,i_2,i_3,i_4=1}^{l_n}
\operatorname{Cum}(\tilde X_{i_1-k_1}\tilde X_{i_1},\tilde X_{i_2-k_2}\tilde X_{i_2},
\tilde X_{i_3-k_3}\tilde X_{i_3},\tilde X_{i_4-k_4}\tilde
X_{i_4}).
\end{aligned}$$ Each cumulant in the preceding equation can be further decomposed in the same way as (\[eq:cum\_decompose\]). Using summability of joint cumulants up to order eight and Proposition \[thm:array\], we have $$\begin{aligned}
\sum_{k_1,k_2,k_3,k_4=1}^{s_n} |A_n|
= O(l_ns_n^3) =o(l_n^2s_n^2).
\end{aligned}$$ Using the orders obtained for $|A_n|$, $|B_n|$, $|E_n|$ and $|F_n|$, we obtain $\sum_{j=1}^{w_n} {{\mathbb{E}}}D_{n,j}^4 = o(n^4s_n^2)$. Then, by (\[eq:varcon\]), we can apply Corollary 3.1 of [@hall:1980] to obtain $$\begin{aligned}
\frac{1}{n\sqrt{s_n}} \sum_{j=1}^{w_n} D_{n,j}
\Rightarrow \mathcal{N}\left(0,
\frac{1}{2}\sum_{k\in{{\mathbb{Z}}}}\sigma_k^2\right),
\end{aligned}$$ and the lemma follows.
### Proof of Theorem \[thm:ljung\_power\] {#sec:proof_power}
We shall only prove (\[eq:ljung\_power\]), since (\[eq:var\_estimation\]) can be obtained by very similar arguments. Write $\hat\gamma_k = {{\mathbb{E}}}_0 \hat\gamma_k + \gamma_k -
(\gamma_k-{{\mathbb{E}}}\hat\gamma_k)$, and hence $$\begin{aligned}
\sum_{k=1}^{s_n} (\hat\gamma_k^2 - \gamma_k^2)
& = 2\sum_{k=1}^{s_n} \gamma_k {{\mathbb{E}}}_0 \hat\gamma_k
+ \sum_{k=1}^{s_n} ( {{\mathbb{E}}}_0\hat\gamma_k)^2
- 2\sum_{k=1}^{s_n} \frac{k}{n}\gamma_k {{\mathbb{E}}}_0\hat\gamma_k
- 2\sum_{k=1}^{s_n} \frac{k}{n}\gamma_k^2
+ \sum_{k=1}^{s_n}\frac{k^2}{n^2}\gamma_k^2 \cr
& =: 2I_n + \II_n + \III_n + \IV_n + V_n.
\end{aligned}$$ Using the conditions $\Theta_4<\infty$ and $s_n=o(\sqrt{n})$, it is easily seen that $\sqrt{n}\IV_n\to 0$ and $\sqrt{n}V_n\to 0$. Furthermore $$\begin{aligned}
\sqrt{n}\|\III_n\| \leq 2\sqrt{n}\sum_{k=1}^{s_n} \frac{k}{n}|\gamma_k|\cdot \frac{2\Theta_4^2}{\sqrt{n}} \to 0
\quad \hbox{and} \quad \sqrt{n}{{\mathbb{E}}}\II_n\leq \sqrt{n}\sum_{k=1}^{s_n}\frac{4\Theta_4^4}{n} \to 0.
\end{aligned}$$ Define $Y_i=\sum_{k=1}^\infty \gamma_kX_{i-k}$. For the term $I_n$, write $$\begin{aligned}
nI_n & = \sum_{i=1}^n {{\mathbb{E}}}_0(X_iY_i) - \sum_{i=1}^n
{{\mathbb{E}}}_0\left(X_i\sum_{k=s_n+1}^\infty\gamma_kX_{i-k}\right)
+ \sum_{k=1}^{s_n}
\gamma_k\left(\sum_{i=1}^{k}(X_{i-k}X_i-\gamma_k)\right) \cr
& =: A_n + B_n + E_n.
\end{aligned}$$ Clearly $\|E_n\|/\sqrt{n} \leq \sum_{k=1}^{s_n} |\gamma_k|
2\Theta_4^2\sqrt{k}/\sqrt{n} \to 0$. Define $W_{n,i}=X_i\sum_{k=s_n+1}^\infty\gamma_kX_{i-k}$, then $$\begin{aligned}
\|{\mathcal{P}}^0 W_{n,i}\| \leq \left\{
\begin{array}{ll}
\delta_{4}(i)\cdot\Theta_4\sum_{k=s_n+1}^\infty |\gamma_k|
& \hbox{if}\quad 0 \leq i \leq s_n \\
\Theta_4\delta_{4}(i)\sum_{k=s_n+1}^\infty |\gamma_k|
+ \Theta_4\sum_{k=s_n+1}^i |\gamma_k|\delta_4(i-k)
& \hbox{if}\quad i>s_n.
\end{array}\right.
\end{aligned}$$ It follows that $$\begin{aligned}
\|B_n/\sqrt{n}\| \leq 2\Theta_4^2\sum_{k=s_n+1}^\infty |\gamma_k| \to 0.
\end{aligned}$$ Set $Z_i=X_iY_i$, then $(Z_i)$ is a stationary process of the form (\[eq:wold\]). Furthermore $$\begin{aligned}
\|{\mathcal{P}}^0 Z_i\| \leq \delta_{4}(i)\cdot\Theta_4\sum_{k=1}^\infty |\gamma_k|
+ \Theta_4\sum_{k=1}^i |\gamma_k|\delta_4(i-k).
\end{aligned}$$ Since $\sum_{i=0}^\infty \|{\mathcal{P}}^0 Z_i\|<\infty$, utilizing Theorem 1 in [@hannan:1973] we have $A_n/\sqrt{n} \Rightarrow
\mathcal{N}(0,\|D_0\|^2)$, and then (\[eq:ljung\_power\]) follows.
### Proof of Corollaries \[thm:ljung\_corr\] and \[thm:ljung\_corr\_power\] {#sec:proof_corollary}
By (\[eq:fact3\]), we know $\|n\bar X_n\|_4 \le \sqrt{3n} \Theta_4$, and it follows that $$\begin{aligned}
\left\|\sum_{i=k+1}^n(X_{i-k}-\bar X_n)(X_i - \bar X_n)
- \sum_{i=k+1}^n X_{i-k}X_i\right\|
\leq 9 \Theta_4^2.
\end{aligned}$$ Theorem \[thm:ljung\] holds for $\breve\gamma_k$ because $$\begin{aligned}
\frac{n}{\sqrt{s_n}} \sum_{k=1}^{s_n}
{{\mathbb{E}}}\left|(\hat\gamma_k-{{\mathbb{E}}}\hat\gamma_k)^2
- (\breve\gamma_k-{{\mathbb{E}}}\hat\gamma_k)^2\right|
& \leq \frac{n}{\sqrt{s_n}} \sum_{k=1}^{s_n}
\left\|\hat\gamma_k+\breve\gamma_k-2{{\mathbb{E}}}\hat\gamma_k\right\|
\cdot \left\|\hat\gamma_k-\breve\gamma_k\right\| \cr
& \leq \frac{n}{\sqrt{s_n}} \sum_{k=1}^{s_n}
\left(\frac{4\Theta_4^2}{\sqrt{n}} + \frac{9\Theta_4^2}{n}\right)
\cdot \frac{9\Theta_4^2}{n} \to 0.
\end{aligned}$$ In Theorem \[thm:ljung\_power\], (\[eq:ljung\_power\]) holds with $\hat \gamma_k$ replaced by $\breve \gamma_k$ because $$\begin{aligned}
\sqrt{n}\sum_{k=1}^{s_n}
{{\mathbb{E}}}\left|\hat\gamma_k^2-\breve\gamma_k^2\right|
\leq \sqrt{n}\sum_{k=1}^{s_n}
\|\hat\gamma_k+\breve\gamma_k\|\cdot\|\hat\gamma_k-\breve\gamma_k\|
\leq \sqrt{n}\sum_{k=1}^{s_n} \left(2|\gamma_k|
+ \frac{4\Theta_4^2}{\sqrt{n}} + \frac{9\Theta_4^2}{n}\right)
\frac{9\Theta_4^2}{n} \to 0,
\end{aligned}$$ and (\[eq:var\_estimation\]) can be proved similarly. Now we turn to the sample autocorrelations. Write $$\begin{aligned}
\sum_{k=1}^{s_n} \left\{[\hat r_k - (1-k/n)r_k]^2
- [\hat\gamma_k/\gamma_0 - (1-k/n)r_k]^2\right\}
& = \sum_{k=1}^{s_n}
{{2 ({{\mathbb{E}}}_0\hat\gamma_k)[\hat\gamma_k(\gamma_0-\hat\gamma_0)]}
\over{\gamma_0^2\hat\gamma_0}}
+ {{\hat\gamma_k^2(\gamma_0-\hat\gamma_0)^2}
\over {\gamma_0^2\hat\gamma_0^2}}.
\end{aligned}$$ Since $$\begin{aligned}
\sum_{k=1}^{s_n} {{\mathbb{E}}}\left| ({{\mathbb{E}}}_0\hat\gamma_k)
\hat\gamma_k(\gamma_0-\hat\gamma_0)\right|
\leq \sum_{k=1}^{s_n} 2{\mathcal{C}}_3\Theta_6^2\frac{1}{\sqrt{n}} \cdot
\left(|\gamma_k|+2{\mathcal{C}}_3\Theta_6^2\frac{1}{\sqrt{n}}\right)
\cdot 2{\mathcal{C}}_3\Theta_6^2\frac{1}{\sqrt{n}}
= o\left(\frac{\sqrt{s_n}}{n}\right)
\end{aligned}$$ and similarly $\sum_{k=1}^{s_n}
{{\mathbb{E}}}\left|\hat\gamma_k^2(\gamma_0-\hat\gamma_0)^2\right| =
o(\sqrt{s_n}/n)$, (\[eq:ljung\_corr\]) follows by applying Slutsky's theorem. To show the limit theorems in Corollary \[thm:ljung\_corr\_power\], note that, by the Cram\'er--Wold device, we have $$\begin{aligned}
\left[\sqrt{n}(\hat\gamma_0^2-\gamma_0^2),
\sqrt{n}\left(\sum_{k=1}^{s_n}\hat\gamma_k^2
- \sum_{k=1}^{s_n}\gamma_k^2\right)\right]
\end{aligned}$$ converges to a bivariate normal distribution. Then Corollary \[thm:ljung\_corr\_power\] follows by applying the delta method.
A Normal Comparison Principle {#sec:normalcomparison}
=============================
In this section we shall control tail probabilities of Gaussian vectors by using their covariance matrices. Denote by $\varphi_d ((r_{ij}); x_1,\ldots, x_d)$ the density of a $d$-dimensional multivariate normal random vector ${\boldsymbol{X}}=(X_1,\ldots,X_d)^\top$ with mean zero and covariance matrix $(r_{ij})$, where we always assume $r_{ii}=1$ for $1 \leq i \leq
d$ and $(r_{ij})$ is nonsingular. For $1\leq h < l \leq d$, we use $\varphi_{2}((r_{ij});X_h=x_h,X_l=x_l)$ to denote the marginal density of the sub-vector $(X_h,X_l)^\top$. Let $$Q_d\left((r_{ij});z_1,\ldots,z_d\right) = \int_{z_1}^\infty
\cdots \int_{z_d}^{\infty}
\varphi_d\left((r_{ij});x_1,\ldots,x_d\right) {{\,\mathrm{d}}}x_{d} \cdots {{\,\mathrm{d}}}x_{1}.$$ The partial derivative with respect to $r_{hl}$ is obtained similarly to equation (3.6) of [@berman:1964], using equation (3) of [@plackett:1954]: $$\begin{aligned}
\label{eq:deriv}
& \frac{\partial Q_d\left((r_{ij});z_1,\ldots,z_d\right)}{\partial
r_{hl}} \cr
& = \left(\prod_{k \neq h,l}\int_{z_k}^\infty\right)
\varphi_d\left((r_{ij});x_1,\ldots,x_{h-1},
z_h,x_{h+1},\ldots,x_{l-1},z_l,x_{l+1},\ldots,x_d\right)
\prod_{k \neq h,l} {{\,\mathrm{d}}}x_k,\end{aligned}$$ where $\left(\prod_{k \neq h,l}\int_{z_k}^\infty\right)$ stands for $\int_{z_1}^\infty \cdots \int_{z_{h-1}}^\infty
\int_{z_{h+1}}^\infty \cdots \int_{z_{l-1}}^\infty
\int_{z_{l+1}}^\infty \cdots \int_{z_d}^\infty$. If all the $z_k$ have the same value $z$, we use the simplified notation $Q_d\left((r_{ij});z\right)$ and $\partial Q_d
((r_{ij});z)/\partial r_{hl}$. The following simple facts about conditional distributions will be useful. For four distinct indices $1 \leq h,l,k,m \leq d$, we have $$\begin{aligned}
\label{eq:condmean}
{{\mathbb{E}}}(X_k|X_h=X_l=z) & = \frac{r_{kh}+r_{kl}}{1+r_{hl}}z,
\\ \label{eq:condvar}
\operatorname{Var}(X_k|X_h=X_l=z) & =
\frac{1-r_{hl}^2-r_{kh}^2-r_{kl}^2+2r_{hl}r_{kh}r_{kl}}
{1-r_{hl}^2},
\\ \label{eq:condcov}
\operatorname{Cov}(X_k,X_m|X_h=X_l=z) & = r_{km} -
\frac{r_{hk}r_{hm}+r_{lk}r_{lm}-r_{hl}r_{hk}r_{lm}-r_{hl}r_{hm}r_{lk}}
{1-r_{hl}^2}.\end{aligned}$$
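The closed forms (\[eq:condmean\])–(\[eq:condcov\]) are instances of the standard conditioning formula for multivariate normals and can be checked numerically. The following sketch (with an arbitrarily chosen correlation matrix, purely for illustration) verifies all three identities:

```python
import numpy as np

# An arbitrary nonsingular correlation matrix of (X_1, X_2, X_3, X_4).
R = np.array([[1.0, 0.3, 0.2, 0.1],
              [0.3, 1.0, 0.4, 0.2],
              [0.2, 0.4, 1.0, 0.3],
              [0.1, 0.2, 0.3, 1.0]])
h, l, k, m = 0, 1, 2, 3   # condition on X_h = X_l = z
z = 1.5

# Standard Gaussian conditioning of (X_k, X_m) on (X_h, X_l) = (z, z).
S_aa = R[np.ix_([k, m], [k, m])]
S_ab = R[np.ix_([k, m], [h, l])]
S_bb = R[np.ix_([h, l], [h, l])]
cond_mean = S_ab @ np.linalg.solve(S_bb, np.array([z, z]))
cond_cov = S_aa - S_ab @ np.linalg.solve(S_bb, S_ab.T)

r_hl, r_kh, r_kl = R[h, l], R[k, h], R[k, l]
r_mh, r_ml, r_km = R[m, h], R[m, l], R[k, m]
# (eq:condmean)
assert np.isclose(cond_mean[0], (r_kh + r_kl) / (1 + r_hl) * z)
# (eq:condvar)
var_formula = (1 - r_hl**2 - r_kh**2 - r_kl**2
               + 2 * r_hl * r_kh * r_kl) / (1 - r_hl**2)
assert np.isclose(cond_cov[0, 0], var_formula)
# (eq:condcov)
cov_formula = r_km - (r_kh * r_mh + r_kl * r_ml
                      - r_hl * r_kh * r_ml - r_hl * r_mh * r_kl) / (1 - r_hl**2)
assert np.isclose(cond_cov[0, 1], cov_formula)
```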
\[thm:normbdd\] For every $z>0$, $0<s<1$ and $d \geq 1$, there exist positive constants $C_d$ and $\epsilon_d$ such that for all $0 < \epsilon < \epsilon_d$:
1. if $|r_{ij}| < \epsilon$ for all $1 \leq i < j \leq d$, then $$\begin{aligned}
\label{eq:normbdd1}
Q_d\left((r_{ij});z\right) & \leq C_d
\exp\left\{-\left(\frac{d}{2} - C_d \epsilon \right)z^2\right\} \\ \label{eq:normbdd3}
Q_{d}\left((r_{ij});z,\ldots,z\right) & \leq C_d \,f_d(\epsilon,1/z)\,
\exp\left\{-\left(\frac{d}{2} - C_d \epsilon
\right)z^2\right\} \\ \label{eq:normbdd2}
Q_d\left((r_{ij});sz,z,\ldots,z\right) & \leq C_d
\exp\left\{-\left(\frac{s^2+d-1}{2} - C_d \epsilon
\right)z^2\right\}
\end{aligned}$$ where $f_{2k}(x,y)=\sum_{l=0}^k x^ly^{2(k-l)}$ and $f_{2k-1}(x,y)=\sum_{l=0}^{k-1}x^ly^{2(k-l)-1}$ for $k \geq 1$;
2. if $|r_{ij}| \leq \epsilon$ for all $1 \leq i < j \leq d+1$ with $(i,j)\neq(1,2)$, then $$\label{eq:normbdd4}
Q_{d+1}\left((r_{ij});z\right) \leq C_d
\exp\left\{-\left(\frac{(1-|r_{12}|)^2+d}{2}- C_d\epsilon\right)z^2\right\}.$$
The following facts about normal tail probabilities are well-known: $$\label{eq:normbdd}
P(X_1\geq x) \leq \frac{1}{\sqrt{2\pi}x} e^{-x^2/2} \hbox{ for } x>0
\quad \hbox{and} \quad
\lim_{x \to \infty}\frac{P(X_1 \geq x)}{(1/x)(2\pi)^{-1/2}
\exp\left\{-x^2/2\right\}}=1.$$ By (\[eq:normbdd\]), the inequalities (\[eq:normbdd1\]) – (\[eq:normbdd2\]) with $\epsilon=0$ are true for the random vector with iid standard normal entries. The idea is to compare the desired probability with the corresponding one for such a vector. We first prove (\[eq:normbdd1\]) by induction. When $d=1$, the inequality is trivially true. When $d=2$, by (\[eq:deriv\]), there exists a number $r_{12}'$ between $0$ and $r_{12}$ such that $$\begin{aligned}
|Q_2((r_{ij});z) - Q_2(I_2;z)| &\leq& \varphi_2((r_{ij}');z,z)
|r_{12}| \cr
&\leq& C \exp\left\{-\frac{z^2}{1+|r_{12}'|}\right\} \leq C
\exp\left\{-(1-\epsilon)z^2\right\},\end{aligned}$$ which, together with $Q_2(I_2;z) \leq C \exp\{-z^2\}$, implies (\[eq:normbdd1\]) for $d=2$ with $\epsilon_2=1/2$ and some $C_2
> 1$. Now for $d \geq 3$, assume (\[eq:normbdd1\]) holds for all dimensions less than $d$. There exists a matrix $(r_{ij}') =
\theta(r_{ij}) + (1-\theta) I_d$ for some $0<\theta<1$ such that $$\label{eq:2}
Q_d\left((r_{ij});z\right) -
Q_d\left(I_d;z\right) = \sum_{1 \leq h,l \leq d}
\frac{\partial Q_d}{\partial r_{hl}}((r_{ij}');z,\ldots,z) r_{hl}.$$ By (\[eq:condmean\]), ${{\mathbb{E}}}(X_k|X_h=X_l=z) \leq
2\epsilon z/(1-\epsilon)$ for $k \neq h,l$. Therefore, by writing the density in (\[eq:deriv\]) as the product of the density of $(X_h,X_l)$ and the conditional density of ${\boldsymbol{X}}_{-\{h,l\}}$ given $X_h=X_l=z$, where ${\boldsymbol{X}}_{-\{h,l\}}$ denotes the sub-vector $(X_1,\ldots,X_{h-1},X_{h+1},\ldots,X_{l-1},X_{l+1},\ldots,X_d)^\top$, we have $$\label{eq:3}
\left|\frac{\partial Q_d}{\partial
r_{hl}}((r_{ij}');z,\ldots,z)\right| \leq
\varphi_2((r_{ij}');X_h=X_l=z)
Q_{d-2}((r_{ij|hl}');(1-3\epsilon)z),$$ where $(r_{ij|hl}')$ is the correlation matrix of the conditional distribution of ${\boldsymbol{X}}_{-\{h,l\}}$ given $X_h$ and $X_l$. By (\[eq:condvar\]) and (\[eq:condcov\]), we know for $k, m \in [d]\setminus\{h,l\}$ and $k \neq m$, $$\operatorname{Var}(X_k|X_h=X_l=z) \geq 1-3\epsilon^2-2\epsilon^3 \quad
\hbox{and} \quad
\operatorname{Cov}(X_k,X_m|X_h=X_l=z) \leq \frac{\epsilon(1+\epsilon)}{1-\epsilon}.$$ Therefore, all the off-diagonal entries of $(r_{ij|hl}')$ are less than $2\epsilon$ if we let $\epsilon<1/5$. Applying the induction hypothesis, if $2\epsilon<\epsilon_{d-2}$, then $$Q_{d-2}((r_{ij|hl}');(1-3\epsilon)z) \leq C_{d-2}
\exp\left\{-\left(\frac{d-2}{2}-2C_{d-2}\epsilon\right)(1-3\epsilon)^2z^2\right\},$$ and equation (\[eq:3\]) becomes $$\left|\frac{\partial Q_d}{\partial
r_{hl}}((r_{ij}');z,\ldots,z)\right| \leq CC_{d-2}
\exp\left\{-(1-\epsilon)z^2\right\} \cdot
\exp\left\{-\left(\frac{d-2}{2}-\left(2C_{d-2}+3(d-2)\right)\epsilon\right)z^2\right\}.$$ Therefore, (\[eq:normbdd1\]) holds for $\epsilon_d<\min\{1/5,\epsilon_{d-2}/2\}$ and some $C_d > 2C_{d-2}+3(d-2)+1$.
Using very similar arguments, inequality (\[eq:normbdd2\]) can be proved by applying (\[eq:normbdd1\]); and inequality (\[eq:normbdd4\]) can be obtained by employing both (\[eq:normbdd1\]) and (\[eq:normbdd2\]). To prove inequality (\[eq:normbdd3\]), which is a refinement of (\[eq:normbdd1\]), it suffices to observe that, by (\[eq:normbdd\]), (\[eq:2\]) and (\[eq:3\]) $$\begin{aligned}
Q_d\left((r_{ij});z\right) & \leq Q_d(I_d;z) + \sum_{1\leq h,l\leq
d} C\,\epsilon\, \exp\{-(1-\epsilon)z^2\}
Q_{d-2}((r_{ij|hl}');(1-3\epsilon)z) \\
& \leq C_d \frac{1}{z^d} \exp\left\{-\frac{dz^2}{2}\right\} + C_d \,\epsilon\,
\exp\{-(1-\epsilon)z^2\} \sum_{1\leq h,l\leq
d}
Q_{d-2}((r_{ij|hl}');(1-3\epsilon)z);\end{aligned}$$ and apply the induction argument.
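As a side note, both facts in (\[eq:normbdd\]), which drive all the bounds in this section, are easy to confirm numerically. The following standalone check (standard library only; the test points are arbitrary) computes the exact tail via the complementary error function:

```python
import math

def norm_sf(x: float) -> float:
    """Exact standard normal tail P(X >= x), via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mills_bound(x: float) -> float:
    """Upper bound (1/(sqrt(2*pi)*x)) * exp(-x**2/2), valid for x > 0."""
    return math.exp(-0.5 * x * x) / (math.sqrt(2.0 * math.pi) * x)

# The inequality P(X >= x) <= mills_bound(x) holds for every x > 0 ...
for x in [0.5, 1.0, 2.0, 5.0, 10.0]:
    assert norm_sf(x) <= mills_bound(x)

# ... and the ratio tends to 1 as x -> infinity (it is 1 - 1/x^2 + O(x^-4)).
assert abs(norm_sf(8.0) / mills_bound(8.0) - 1.0) < 0.02
```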
\[thm:normalcomparison\] Let $(X_n)$ be a stationary, mean-zero Gaussian process. Let $r_k=\operatorname{Cov}(X_0,X_k)$. Assume $r_0=1$, and $\lim_{n\to\infty} r_n(\log
n) =0$. Let $a_n=(2\log n)^{-1/2}$, $b_n=(2\log n)^{1/2} - (8\log
n)^{-1/2}(\log\log n + \log 4\pi)$, and $z_n=a_nz+b_n$ for $z \in
{{\mathbb{R}}}$. Define the event $A_i=\{X_i \geq z_n\}$, and $$Q_{n,d} = \sum_{1\leq i_1<\ldots<i_d\leq n}P(A_{i_1}\cap \cdots
\cap A_{i_d}).$$ Then $\lim_{n \to \infty} Q_{n,d} = e^{-dz}/d\,!$ for all $d \geq
1$.
Note that $z_n^2 = 2\log n - \log\log n -\log (4\pi) + 2z +
o(1)$. If $(X_n)$ consists of iid random variables, by the equality in (\[eq:normbdd\]), $$\begin{aligned}
\lim_{n \to \infty} Q_{n,d}
&=& \lim_{n\to\infty} {n
\choose d} Q_d(I_d,z_n) \cr
&=& \lim_{n \to \infty} {n \choose d}
\frac{1}{(2\pi)^{d/2}z_n^d}
\exp\left\{-\frac{dz_n^2}{2}\right\}
= \frac{e^{-dz}}{d!}.
\end{aligned}$$ When the $X_n$’s are dependent, the result is still trivially true when $d=1$. Now we deal with the $d\geq 2$ case. Let $\gamma_k =
\sup_{j \geq k}|r_j|$, then $\gamma_1<1$ by stationarity, and $\lim_{n\to\infty}\gamma_n\log n=0$. Consider an ordered subset $J=\left\{t,{t+l_1},{t+l_1+l_2},\ldots,{t+l_1+\cdots+l_{d-1}}\right\}
\subset [n]$, where $l_1,\ldots,l_{d-1} \geq 1$. We define an equivalence relation $\sim$ on $J$ by saying $k \sim j$ if there exists $k_1,\ldots,k_p \in J$ such that $k=k_1<k_2<\cdots<k_p=j$, and $k_h-k_{h-1} \leq L$ for $2 \leq h
\leq p$. For any $L\geq 2$, denote by $s(J,L)$ the number of $l_j$ which are less than or equal to $L$. To simplify the notation, we sometimes use $s$ instead of $s(J,L)$. $J$ is divided into $d-s$ equivalence classes $\mathcal{B}_1,\ldots,\mathcal{B}_{d-s}$. Suppose $s \geq 1$, and assume w.l.o.g. that $|\mathcal{B}_1|\geq 2$. Pick $k_0,k_1 \in \mathcal{B}_1$, and $k_p \in \mathcal{B}_p$ for $2\leq p \leq d-s$, and set $K=\{k_0,k_1,k_2,\ldots,k_{d-s}\}$. Define $Q_J=P(\cap_{k \in
J}A_k)$ and $Q_K$ similarly, then $Q_J \leq Q_K$. By (\[eq:normbdd4\]) of Lemma \[thm:normbdd\], there exists a number $M>1$ depending on $d$ and the sequence $(\gamma_k)$, such that when $L>M$, $$\begin{aligned}
Q_K &\leq& C_{d-s}
\exp\left\{-\left(\frac{(1-\gamma_1)^2+d-s}{2}-
C_{d-s}\gamma_L\right)z_n^2\right\} \cr
&\le &
C_{d-s}\exp\left\{-\left(\frac{d-s}{2}
+\frac{(1-\gamma_1)^2}{3} \right) z_n^2\right\}.\end{aligned}$$ Note that $z_n^2 = 2\log n - \log\log n + O(1)$. Pick $L_n=\max\{{\lfloor n^{\alpha} \rfloor},M\}$ for some $\alpha
<2(1-\gamma_1)^2/(3d)$. For any $1 \leq a \leq d-1$, since there are at most $L_n^an^{d-a}$ ordered subsets $J \subset [n]$ such that $s(J,L_n)=a$, the sum of $Q_J$ over these $J$ is dominated by $$C_{d-a}\exp\left\{\log n \left( (d-a) + \frac{2(d-1)(1-\gamma_1)^2}{3d}
-(d-a) - \frac{2(1-\gamma_1)^2}{3} \right)\right\}$$ when $n$ is large enough, which converges to zero. Therefore, it suffices to consider all the ordered subsets $J$ such that $l_j>L_n$ for all $1\leq j \leq d-1$.
Let $J=\{t_1,\ldots,t_d\}\subset [n]$ be an ordered subset such that $t_i-t_{i-1} > L_n$ for $2 \leq i \leq d$, and $\mathcal{J}(d,L_n)$ be the collection of all such subsets. Let $(r_{ij})$ be the $d$-dimensional covariance matrix of ${\boldsymbol{X}}_J$. There exists a matrix $R_J=\theta(r_{ij})_{i,j\in J}+(1-\theta)I_d$ for some $0 <
\theta < 1$ such that $$ Q_J-Q_d(I_d,z_n) = \sum_{h,l \in J, h<l} \frac{\partial
Q_d}{\partial r_{hl}}[R_J;z_n]\,r_{hl}.$$ Let $R_H$, $H = J \setminus \{h,l\}$, be the correlation matrix of the conditional distribution of ${\boldsymbol{X}}_H$ given $X_h$ and $X_l$. By (\[eq:normbdd3\]) of Lemma \[thm:normbdd\], for $n$ large enough $$\begin{aligned}
\frac{\partial Q_{d}}{\partial r_{hl}}[R_J;z_n] & \leq C
\exp\left\{-\frac{z_n^2}{1+\gamma_{l-h}}\right\} \cdot
Q_{d-2}\left(R_H;(1-3\gamma_{L_n})z_n\right) \\
& \leq C C_{d-2}f_{d-2}(\gamma_{L_n},1/z_n)
\exp\left\{-\frac{z_n^2}{1+\gamma_{l-h}}\right\} \cr
& \quad \times
\exp\left\{-\left(\frac{d-2}{2} - 2C_{d-2} \gamma_{L_n}
\right)(1-3\gamma_{L_n})^2z_n^2\right\} \\
& \leq C_df_{d-2}(\gamma_{L_n},1/z_n)
\exp\left\{-\left(\frac{d}{2} - (2C_{d-2}+3(d-2))
\gamma_{L_n} -\gamma_{l-h} \right)z_n^2\right\}\\
& \leq C_df_{d-2}(\gamma_{L_n},1/z_n)
\exp\left\{-\left(\frac{d}{2} - C_d
\gamma_{L_n} -\gamma_{l-h} \right)z_n^2\right\}.
\end{aligned}$$ It follows that $$\begin{aligned}
\label{eq:5}
&& \sum_{J \in \mathcal{J}(d,L_n)} |Q_J-Q_d(I_d;z_n)| \cr
&& \leq C_d f_{d-2}(\gamma_{L_n},1/z_n)\sum_{J \in
\mathcal{J}(d,L_n)} \sum_{1\leq i<j\leq d}
\exp\left\{-\left(\frac{d}{2} - C_d \gamma_{L_n} -\gamma_{t_j-t_i}
\right)z_n^2\right\} \gamma_{t_j-t_i} \cr
&& = C_d f_{d-2}(\gamma_{L_n},1/z_n)
\sum_{1\leq i<j\leq d} \sum_{J \in \mathcal{J}(d,L_n)}
\exp\left\{-\left(\frac{d}{2} - C_d
\gamma_{L_n} -\gamma_{t_j-t_i} \right)z_n^2\right\}
\gamma_{t_j-t_i}.
\end{aligned}$$ For each fixed pair $1\leq i<j\leq d$, the inner sum in (\[eq:5\]) is bounded by $$\begin{aligned}
\nonumber
& C_d f_{d-2}(\gamma_{L_n},1/z_n)\sum_{l=L_n+1}^{n-1} (n-l)^{d-1} \exp\left\{-\left(\frac{d}{2} - C_d \gamma_{L_n} -\gamma_{l}
\right)z_n^2\right\} \gamma_l \\\label{eq:7}
\leq & C_d f_{d-2}(\gamma_{L_n},1/z_n)(\log n)^{d/2} n^{-d}
\sum_{l=L_n+1}^{n-1}(n-l)^{d-1} \exp\left\{\left( C_d \gamma_{L_n}
+ \gamma_{l} \right)2\log n\right\}\gamma_l \\ \label{eq:6}
\leq & C_df_{d-2}(\gamma_{{\lfloor n^\alpha \rfloor}},1/z_n)\,
\gamma_{{\lfloor n^\alpha \rfloor}}(\log n)^{d/2}
\exp\left\{2\left( C_d +1 \right)
\gamma_{{\lfloor n^\alpha \rfloor}}\log n \right\}.
\end{aligned}$$ Since $\lim_{n \to \infty} \gamma_n \log n=0$, it also holds that $\lim_{n \to \infty}
\gamma_{{\lfloor n^\alpha \rfloor}} \log n
=0$. Since $\lim_{n\to\infty} (\log n)^{1/2}/z_n = 2^{-1/2}$, it follows that $\lim_{n\to\infty}
f_{d-2}(\gamma_{{\lfloor n^\alpha \rfloor}},1/z_n) (\log n)^{d/2-1} =
2^{-d/2+1}$. Therefore, the term in (\[eq:6\]) converges to zero, and the proof is complete.
\[rk:berman\] This lemma provides another proof of Theorem 3.1 in [@berman:1964], which gives the asymptotic distribution of the maximum term of a stationary Gaussian process. It was also shown there that the theorem remains true if the condition $\lim_{n\to\infty}r_n
\log n = 0$ is replaced by $\sum_{n=1}^\infty r_n^2<\infty$. Under the latter condition, if we replace $\gamma_{t_j-t_i}$ by $|r_{t_j-t_i}|$ in (\[eq:5\]) and $\gamma_l$ by $|r_l|$ in (\[eq:7\]), then the term in (\[eq:7\]) converges to zero, and hence our result remains true.
\[rk:absolute\] In the proof, the upper bounds on $Q_J$ and $|Q_J-Q_d(I_d;z_n)|$ are expressed through the absolute values of the correlations, so we can obtain the same bounds for probabilities of the form $P(\cap_{1\leq
i \leq d} \{(-1)^{a_i}X_{t_i} \geq z_n\})$ for any $(a_1,\ldots,a_d) \in \{0,1\}^d$. Therefore, our result can be used to show the asymptotic distribution of the maximum absolute term of a stationary Gaussian process. Specifically, we have $$\begin{aligned}
\lim_{n\to\infty}P\left(\max_{1\leq i\leq n}|X_i| \leq a_{2n}\,x+b_{2n}\right) = \exp\{-\exp(-x)\}.
\end{aligned}$$ [@deo:1972] obtained this result under the condition $\lim_{n\to\infty} r_n (\log n)^{2+\alpha}=0$ for some $\alpha>0$, whereas we only need $\lim_{n\to\infty} r_n \log
n =0$.
Summability of Cumulants {#sec:cum}
========================
For a $k$-dimensional random vector $(Y_1,\ldots,Y_k)$ such that $\|Y_i\|_k<\infty$ for $1\leq i \leq k$, the $k$-th order joint cumulant is defined as $$\label{eq:cumulant}
\operatorname{Cum}(Y_1,\ldots,Y_k) = \sum (-1)^{p-1}(p-1)!\prod_{j=1}^p\left({{\mathbb{E}}}\prod_{i\in\nu_j}Y_i\right),$$ where the summation extends over all partitions $\{\nu_1,\ldots,\nu_p\}$ of the set $\{1,2,\ldots,k\}$ into $p$ non-empty blocks. For a stationary process $(X_i)_{i\in{{\mathbb{Z}}}}$, we abbreviate $$\gamma(k_1,k_2,\ldots,k_{d}) := \operatorname{Cum}(X_0,X_{k_1},X_{k_2},\ldots,X_{k_{d}}).$$ Summability conditions on cumulants are often assumed in the spectral analysis of time series; see, for example, [@brillinger:2001] and [@rosenblatt:1985]. Recently, such conditions were used by [@anderson:2008] in studying the spectral properties of banded sample covariance matrices. While such conditions hold for some Gaussian processes, functions of Gaussian processes [@rosenblatt:1985], and linear processes with iid innovations [@anderson:1971], they are not easy to verify in general. [@wu:2004] showed that the summability of joint cumulants of order $d$ holds under the condition that $\delta_d(k)=O(\rho^k)$ for some $0<\rho<1$. We present in Theorem \[thm:cum\] a generalization of their result. To simplify the proof, we introduce the notion of a [*composition*]{} of an integer. A composition of a positive integer $n$ is an ordered sequence of strictly positive integers $\{\upsilon_1,\upsilon_2,\ldots,\upsilon_q\}$ such that $\upsilon_1+\cdots+\upsilon_q=n$. Two sequences that differ in the order of their terms define different compositions. There are in total $2^{n-1}$ different compositions of the integer $n$. For example, the following are all eight compositions of the integer 4: $$\begin{aligned}
\{1,1,1,1\} \quad \{1,1,2\} \quad \{1,2,1\}
\quad \{1,3\} \quad \{2,1,1\} \quad \{2,2\}
\quad \{3,1\} \quad \{4\}.\end{aligned}$$
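The enumeration of compositions can be made concrete. The following sketch generates all compositions of $n$ by choosing cut points between $n$ unit cells, and confirms both the count $2^{n-1}$ and the eight compositions of 4 displayed above:

```python
from itertools import combinations

def compositions(n: int):
    """Enumerate all compositions of n: each subset of the n-1 possible
    'cut points' between n unit cells yields the parts between cuts."""
    for r in range(n):
        for cuts in combinations(range(1, n), r):
            bounds = (0,) + cuts + (n,)
            yield tuple(bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1))

# There are 2^(n-1) compositions of n.
for n in range(1, 8):
    assert len(list(compositions(n))) == 2 ** (n - 1)

# The eight compositions of 4, matching the list displayed above.
assert sorted(compositions(4)) == [(1, 1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 3),
                                   (2, 1, 1), (2, 2), (3, 1), (4,)]
```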
\[thm:cum\] Assume $d\geq 2$, $X_i \in {\mathcal{L}}^{d+1}$ and ${{\mathbb{E}}}X_i=0$. If $$\begin{aligned}
\label{eq:kddcm}
\sum_{k=0}^{\infty}k^{d-1}\delta_{d+1}(k)<\infty,\end{aligned}$$ then $$\label{eq:smcmD25}
\sum_{k_1,\ldots,k_{d}\in {{\mathbb{Z}}}} |\gamma(k_1,k_2,\ldots,k_{d})| < \infty.$$
By symmetry of the cumulant in its arguments and stationarity of the process, it suffices to show $$\sum_{0 \leq k_1\leq k_2\leq\cdots\leq k_{d}} |\gamma({k_1}, {k_2},\ldots,{k_{d}})| < \infty.$$ Set $X(k,j):={\mathcal{H}}_jX_k$; we claim $$\begin{aligned}
\label{eq:tele}
\gamma({k_1}, {k_2},\ldots,{k_{d}}) = \sum
\operatorname{Cum}& \left[X_0, X(k_1,1),\ldots,X({k_{\upsilon_1-1}},1), X_{k_{\upsilon_1}}-X({k_{\upsilon_1}},1),\right.\cr
& \left.\phantom{[}X({k_{\upsilon_1+1}},{k_{\upsilon_1}+1}),\ldots,X({k_{\upsilon_2-1}},{k_{\upsilon_1}+1}),
X_{k_{\upsilon_2}}-X({k_{\upsilon_2}},{k_{\upsilon_1}+1}),\right.\cr
& \left.\phantom{[} \cdots, \right. \cr
& \left.\phantom{[} X({k_{\upsilon_{q}+1}},{k_{\upsilon_{q}}+1}),\ldots,X({k_{d-1}},{k_{\upsilon_{q}}+1}),
X_{k_{d}}-X({k_{d}},{k_{\upsilon_{q}}+1})\right];
\end{aligned}$$ where the sum is taken over all the $2^{d-1}$ increasing sequences $\{\upsilon_0,\upsilon_1,\ldots,\upsilon_q,\upsilon_{q+1}\}$ such that $\upsilon_0=0$, $\upsilon_{q+1}=d$ and $\{\upsilon_1,\upsilon_2-\upsilon_1,\ldots,\upsilon_q-\upsilon_{q-1},d-\upsilon_q\}$ is a composition of the integer $d$. We first consider the last summand, which corresponds to the sequence $\{\upsilon_0=0,\upsilon_1=d\}$: $$\begin{aligned}
\operatorname{Cum}\left[X_0,X(k_1,1),\ldots,X(k_{d-1},1),X_{k_d}-X(k_d,1)\right].
\end{aligned}$$ Observe that $X_0$ and $(X(k_1,1),\ldots,X(k_{d-1},1))$ are independent. By definition, only partitions for which $X_0$ and $X_{k_d}-X(k_d,1)$ are in the same block contribute to the sum in (\[eq:cumulant\]). Suppose $\{\nu_1,\ldots,\nu_p\}$ is a partition of the set $\{k_1,k_2,\ldots,k_{d-1}\}$. Since $$\begin{aligned}
\left|{{\mathbb{E}}}\left[X_0(X_{k_d}-X(k_d,1))\prod_{k\in\nu_1}X(k,1)\right]\right|
& = \left|\sum_{j=-\infty}^0 {{\mathbb{E}}}\left[{\mathcal{P}}_j X_0 {\mathcal{P}}_jX_{k_d}\prod_{k\in\nu_1}X(k,1)\right]\right|\cr
& \leq \sum_{j=-\infty}^0 \delta_{d+1}(-j)\delta_{d+1}(k_d-j)\kappa_{d+1}^{|\nu_1|},
\end{aligned}$$ it follows that $$\begin{aligned}
\left|{{\mathbb{E}}}\left[X_0(X_{k_d}-X(k_d,1))\prod_{k\in\nu_1}X(k,1)\right]
\cdot\prod_{j=2}^p\left({{\mathbb{E}}}\prod_{k\in\nu_j}X(k,1)\right)\right|
\leq \sum_{j=0}^\infty \delta_{d+1}(j)\delta_{d+1}(k_d+j)\kappa_{d+1}^{d-1}
\end{aligned}$$ and therefore $$\begin{aligned}
& \sum_{0 \leq k_1\leq k_2\leq\cdots\leq k_{d}}
\left|\operatorname{Cum}\left[X_0,X(k_1,1),\ldots,X(k_{d-1},1),X_{k_d}-X(k_d,1)\right]\right|\cr
& \leq C_d \sum_{0 \leq k_1\leq k_2\leq\cdots\leq k_{d}}\sum_{j=0}^\infty \delta_{d+1}(j)\delta_{d+1}(k_d+j)
\leq C_d \sum_{j=0}^\infty\sum_{k=0}^{\infty}{k+d-1\choose d-1}\delta_{d+1}(j)\delta_{d+1}(k+j) < \infty,
\end{aligned}$$ provided that $\sum_{k=0}^\infty k^{d-1}\delta_{d+1}(k)<\infty$.
The other terms in (\[eq:tele\]) are easier to deal with. For example, for the term corresponding to the sequence $\{\upsilon_0=0,\upsilon_1=1,\upsilon_2=d\}$, we have $$\begin{aligned}
& \left|\operatorname{Cum}\left[X_0,X_{k_1}-X(k_1,1),X(k_2,k_1+1),\ldots,X(k_{d-1},k_1+1),X_{k_d}-X(k_d,k_1+1)\right]\right| \cr
& \leq C_d \kappa_{d+1}^{d-1}\Psi_{d+1}(k_1)\Psi_{d+1}(k_d-k_1).
\end{aligned}$$ Since $\sum_{k=0}^\infty k^{d-1}\delta_{d+1}(k)<\infty$ implies $\sum_{k=0}^\infty k^{d-2}\Psi_{d+1}(k) < \infty$, it follows that $$\begin{aligned}
\sum_{0 \leq k_1\leq k_2\leq\cdots\leq k_{d}}
& \left|\operatorname{Cum}\left[X_0,X_{k_1}-X(k_1,1),X(k_2,k_1+1),\ldots,X(k_{d-1},k_1+1),X_{k_d}-X(k_d,k_1+1)\right]\right| \cr
& \leq C_d\kappa_{d+1}^{d-1} \sum_{k=0}^\infty \Psi_{d+1}(k) \sum_{k=0}^\infty {k+d-2 \choose d-2}\Psi_{d+1}(k) < \infty.
\end{aligned}$$ We have shown that every cumulant in (\[eq:tele\]) is absolutely summable over $0\leq k_1\leq \cdots\leq k_d$, and it remains to show the claim (\[eq:tele\]). We shall derive the case $d=3$; (\[eq:tele\]) for other values of $d$ is obtained using the same idea. By multilinearity of cumulants, we have $$\begin{aligned}
\gamma(k_1,k_2,k_3) = & \operatorname{Cum}(X_0,X_{k_1},X_{k_2},X_{k_3})\cr
= &\operatorname{Cum}\left[X_0,X_{k_1}-X(k_1,1),X_{k_2},X_{k_3}\right] \cr
& + \operatorname{Cum}\left[X_0,X(k_1,1),X_{k_2}-X(k_2,1),X_{k_3}\right] \cr
& +
\operatorname{Cum}\left[X_0,X(k_1,1),X({k_2},1),X_{k_3}-X(k_3,1)\right]\cr
& + \operatorname{Cum}\left[X_0,X(k_1,1),X({k_2},1),X(k_3,1)\right].
\end{aligned}$$ Since $X_0$ and $(X(k_1,1),X({k_2},1),X(k_3,1))$ are independent, the last cumulant is 0. Applying the same trick to the first two cumulants, we have $$\begin{aligned}
&& \operatorname{Cum}\left[X_0,X_{k_1}-X(k_1,1),X_{k_2},X_{k_3}\right] \cr
&& = \operatorname{Cum}\left[X_0,X_{k_1}-X(k_1,1),
X_{k_2}-X(k_2,k_1+1),X_{k_3}\right]\cr
&& + \operatorname{Cum}\left[X_0,X_{k_1}-X(k_1,1),
X(k_2,k_1+1),X_{k_3}-X(k_3,k_1+1)\right]\cr
&& + \operatorname{Cum}\left[X_0,X_{k_1}-X(k_1,1),
X(k_2,k_1+1),X(k_3,k_1+1)\right]\cr
&& = \operatorname{Cum}\left[X_0,X_{k_1}-X(k_1,1),X_{k_2}-X(k_2,k_1+1),
X_{k_3}-X(k_3,k_2+1)\right]\cr
&& + \operatorname{Cum}\left[X_0,X_{k_1}-X(k_1,1),X(k_2,k_1+1),
X_{k_3}-X(k_3,k_1+1)\right]
\end{aligned}$$ and $$\begin{aligned}
\operatorname{Cum}\left[X_0,X(k_1,1),X_{k_2}-X(k_2,1),X_{k_3}\right] =
\operatorname{Cum}\left[X_0,X(k_1,1),X_{k_2}-X(k_2,1),X_{k_3}-X(k_3,k_2+1)\right].
\end{aligned}$$ Then the proof is complete.
When $d = 1$, (\[eq:kddcm\]) reduces to the [*short-range dependence*]{} or [*short-memory*]{} condition $\Theta_2 =
\sum_{k=0}^\infty \delta_2(k) < \infty$. If $\Theta_2 = \infty$, then the process $(X_i)$ may be long-memory in that the covariances are not summable. When $d \geq 2$, we conjecture that (\[eq:kddcm\]) can be weakened to $\Theta_{d+1} < \infty$. The conjecture holds for linear processes. Let $X_k = \sum_{i=0}^\infty a_{i}
\epsilon_{k-i}$. Assume $\epsilon_k \in {\mathcal{L}}^{d+1}$ and $\sum_{k=0}^\infty|a_k| < \infty$, then $\delta_{d+1}(k) = |a_k|
\|\epsilon_0\|_{d+1}$. Let $\operatorname{Cum}_{d+1}(\epsilon_0)$ be the $(d+1)$-th cumulant of $\epsilon_0$. Set $k_0=0$, by multilinearity of cumulants, we have $$\begin{aligned}
\gamma(k_1,\ldots,k_d)
&=&\sum_{t_0,t_1,\ldots,t_d\geq 0}
\left(\prod_{j=0}^{d}a_{t_j}\right)
\operatorname{Cum}(\epsilon_{-t_0},\epsilon_{k_1-t_1},
\ldots,\epsilon_{k_d-t_d})\cr
&=& \sum_{t=0}^\infty \prod_{j=0}^{d}a_{k_j+t}
\operatorname{Cum}_{d+1}(\epsilon_0).\end{aligned}$$ Therefore, the condition $\Theta_{d+1} < \infty$ suffices for (\[eq:smcmD25\]). For a class of functionals of Gaussian processes, [@rosenblatt:1985] showed that (\[eq:smcmD25\]) holds if $\sum_{k=0}^\infty|\gamma_k|<\infty$, which in turn is implied by $\Theta_{d+1}<\infty$ under our setting. It is unclear whether in general the weaker condition $\Theta_{d+1} < \infty$ implies (\[eq:smcmD25\]).
Some Auxiliary Lemmas {#sec:aux}
=====================
Suppose that ${\boldsymbol{X}}$ is a $d$-dimensional random vector, and ${\boldsymbol{X}} \sim \mathcal{N}(0,\Sigma)$. If $\Sigma=I_d$, then by (\[eq:normbdd\]), it is easily seen that the ratio of $P\left(z_n-c_n\leq\|{\boldsymbol{X}}\|_\bullet\leq z_n\right)$ over $P\left(\|{\boldsymbol{X}}\|_\bullet\geq z_n\right)$ tends to zero provided that $c_n\to 0$, $z_n\to \infty$ and $c_nz_n\to 0$. The situation is similar when $\Sigma$ is not the identity matrix, as shown in the following lemma, which will be used in the proof of Lemma \[thm:vec\_mod\_dev\].
\[thm:lemma1\] Let ${\boldsymbol{X}} \sim \mathcal{N}(0,\Sigma)$ be a $d$-dimensional normal random vector. Assume $\Sigma$ is nonsingular. Let $\lambda_0^2$ and $\lambda_1^2$ be the smallest and largest eigenvalues of $\Sigma$, respectively. Then for $0<c<\delta<1/2$ such that $A :=
(2\pi\lambda_1^2)^{(d-1)/2}\lambda_0^2c^2\delta^{-2} + d \delta
\exp\{(\sqrt{6}d\lambda_1+\lambda_0)/\lambda_0^3\} < 1$, and for any $z \in [1, \delta/c]$, $$\label{eq:9}
P\left(z-c \leq \|{\boldsymbol{X}}\|_{\bullet} \leq z\right)
\leq (1-A)^{-1} A \,P\left(\|{\boldsymbol{X}}\|_{\bullet} \geq z\right).$$
Let $C_d = (6d)^{1/2}\lambda_1/\lambda_0$. Since $\lambda_0^2$ is the smallest eigenvalue of $\Sigma$, $$\begin{aligned}
P(\|{\boldsymbol{X}}\|_{\bullet}\geq z-c)
&\geq& (2\pi)^{-d/2}\det(\Sigma)^{-1/2}
\exp\left\{-\frac{d(z+1)^2}{2\lambda_0^2}\right\} \cr
&\geq& (2\pi\lambda_1^2)^{-d/2}
\exp\left\{-\frac{4d\delta^2}{2\lambda_0^2c^2}\right\}.\end{aligned}$$ Since $P(\|{\boldsymbol{X}}\|_\infty \geq C_d \delta/c) \leq d
(2\pi\lambda_1^2)^{-1/2} \exp\{-6d\delta^2/(2\lambda_0^2c^2)\}$, we have $$\label{eq:1}
P(\|{\boldsymbol{X}}\|_\infty \geq C_d\delta/c) \leq
(2\pi\lambda_1^2)^{(d-1)/2}\lambda_0^2c^2\delta^{-2} \,
P(\|{\boldsymbol{X}}\|_{\bullet}\geq z-c).$$ For $0\leq k \leq
{\lfloor 1/\delta \rfloor}$, define the orthotopes $R_k=[z+(k-1)c,z+kc]\times[z-c,C_d\delta/c]^{d-1}$. For two points ${\boldsymbol{x}}=(x_1,\ldots,x_d) \in R_0$, ${\boldsymbol{x}}_k=(x_1+kc,x_2,\ldots,x_d)
\in R_k$, we have ${\boldsymbol{x}}_k^\top\Sigma^{-1}{\boldsymbol{x}}_k
-{\boldsymbol{x}}^\top\Sigma^{-1}{\boldsymbol{x}} \leq
(2\sqrt{d}C_d+1)/\lambda_0^2$, and hence $P({\boldsymbol{X}} \in R_k) \geq
\exp\{-(\sqrt{d}C_d+1)/\lambda_0^2\}P({\boldsymbol{X}}\in
R_0)$ for any $1\leq k \leq {\lfloor 1/\delta \rfloor}$. Since the same inequality holds for every coordinate, we have $$\label{eq:8}
P\left(z-c \leq \|{\boldsymbol{X}}\|_{\bullet} \leq
z,\,\|{\boldsymbol{X}}\|_\infty \leq C_d\delta/c\right) \leq
d\delta\exp\{(\sqrt{d}C_d+1)/\lambda_0^2\}\,
P\left(\|{\boldsymbol{X}}\|_{\bullet} \geq z-c\right).$$ Combining (\[eq:1\]) and (\[eq:8\]), we obtain $P\left(z-c \le
\|{\boldsymbol{X}}\|_{\bullet} \leq z\right) \le A \cdot
P\left(\|{\boldsymbol{X}}\|_{\bullet} \geq z-c\right)$. Since $P\left(\|{\boldsymbol{X}}\|_{\bullet} \geq z-c\right) \leq P\left(z-c \le \|{\boldsymbol{X}}\|_{\bullet} \leq z\right) + P\left(\|{\boldsymbol{X}}\|_{\bullet} \geq z\right)$, rearranging yields (\[eq:9\]).
The preceding lemma requires the eigenvalues of $\Sigma$ to be bounded both from above and away from zero. In our application, $\Sigma$ is taken as the covariance matrix of $(G_{k_1}, G_{k_2},
\ldots, G_{k_d})^\top$, where $(G_k)$ is defined in (\[eq:gp\]). Furthermore, we need such bounds to be uniform over all choices of $k_1<k_2<\cdots<k_d$. Let $f(\omega) = (2\pi)^{-1} \sum_{h \in {{\mathbb{Z}}}}
\sigma_h \cos(h\omega)$ be the spectral density of $(G_k)$. A sufficient condition would be that there exists $0<m<M$ such that $$\label{eq:32}
m \leq f(\omega) \leq M, \quad\hbox{for } \omega \in [0,2\pi],$$ because the eigenvalues of the autocovariance matrix are bounded from above and below by the maximum and minimum values that $f$ takes respectively. For the proof see Section 5.2 of [@grenander:1958]. Clearly the upper bound in (\[eq:32\]) is satisfied in our situation, because $\sum_{h\in{{\mathbb{Z}}}} |\sigma_h| <
\infty$. However, the existence of the lower bound in (\[eq:32\]) rules out some classical time series models. For example, if $(G_k)$ is the moving average of the form $G_k = (\eta_k + \eta_{k-1}) / \sqrt{2}$, then $f(\omega) = (1+\cos(\omega)) / 2\pi$, and $f(\pi)=0$. Nevertheless, although the minimum eigenvalue of the autocovariance matrix converges to $\inf_{\omega\in[0,2\pi]}
f(\omega)$ as the dimension of the matrix goes to infinity, there does exist a positive lower bound for the smallest eigenvalues of all the principal sub-matrices with a fixed dimension.
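This can be checked numerically for the $\mathrm{MA}(1)$ example above. The sketch below (ours, assuming unit-variance innovations $\eta_k$) builds the covariance matrix of $(G_1,\ldots,G_d)$, which is tridiagonal Toeplitz with eigenvalues $1+\cos(j\pi/(d+1))$; its smallest eigenvalue $1-\cos(\pi/(d+1))$ is positive for every fixed $d$ but decays to $\inf_\omega f(\omega)=0$. Positive definiteness is confirmed via the Cholesky pivots:

```python
import math

def ma1_cov(d):
    # covariance matrix of (G_1, ..., G_d) for G_k = (eta_k + eta_{k-1}) / sqrt(2)
    return [[1.0 if i == j else (0.5 if abs(i - j) == 1 else 0.0)
             for j in range(d)] for i in range(d)]

def cholesky_pivots(a):
    # diagonal entries of the Cholesky factor; all positive
    # if and only if the matrix is positive definite
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(a[i][i] - s)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return [l[i][i] for i in range(n)]

for d in (2, 5, 20):
    lam_min = 1.0 - math.cos(math.pi / (d + 1))  # smallest Toeplitz eigenvalue
    print(d, min(cholesky_pivots(ma1_cov(d))), lam_min)
```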
\[thm:eigenbdd\] If $\sum_{h \in {{\mathbb{Z}}}}\sigma_h^2<\infty$, then for each $d\geq 1$, there exists a constant $C_d>0$ such that $$\begin{aligned}
\inf_{k_1<k_2<\cdots<k_d} \lambda_{\min}
\left\{\operatorname{Cov}\left[(G_{k_1},G_{k_2},
\ldots,G_{k_d})^\top\right]\right\}
\geq C_d.
\end{aligned}$$
We use induction. It is clear that we can choose $(C_d)$ to be a non-increasing sequence. Without loss of generality, let us assume $k_1=1$. The statement is trivially true when $d=1$. Suppose it is true for all dimensions up to $d$; we now consider dimension $d+1$. There exists an integer $N_{d}$ such that $\sum_{h=N_d}^\infty\sigma_h^2<2C_d^2/(d+1)$. If all the differences satisfy $k_{i+1}-k_i\leq N_{d}$ for $1\leq i\leq d$, there are at most $N_d^{d}$ possible choices of $k_1=1<k_2<\cdots<k_{d+1}$. Since the process $(G_k)$ is non-deterministic, for all these choices, the corresponding covariance matrices are non-singular. Pick $C_d'>0$ to be the smallest eigenvalue of all these matrices. If there is a difference $k_{l+1}-k_l> N_{d}$, set $\Sigma_1=\operatorname{Cov}[(G_{k_i})_{1\leq
i\leq l}]$ and $\Sigma_2=\operatorname{Cov}[(G_{k_i})_{l< i\leq d+1}]$, then $\lambda_{\min}(\Sigma_1)\geq C_d$ and $\lambda_{\min}(\Sigma_2)
\geq C_d$. It follows that for any real numbers $c_1,c_2,\ldots,c_{d+1}$ such that $\sum_{i=1}^{d+1}c_i^2=1$, $$\begin{aligned}
\sum_{1\leq i,j\leq d+1}c_ic_j\operatorname{Cov}(G_{k_i},G_{k_j})
& = & (c_1,\ldots,c_l)\,\Sigma_1\,(c_1,\ldots,c_l)^\top \cr
& & \quad +
(c_{l+1},\ldots,c_{d+1})\,\Sigma_2\,(c_{l+1},\ldots,c_{d+1})^\top\cr
& & \quad + 2\sum_{i\leq l,j>l}c_ic_j\sigma_{k_j-k_i} \cr
& \geq & C_d - 2\left(\sum_{i\leq l,j>l}\sigma_{k_j-k_i}^2\right)^{1/2}
\left(\sum_{i\leq l,j>l}c_i^2c_j^2\right)^{1/2}\cr
& \geq & C_d - \frac{1}{2}\left( {{d+1}\over 2}\cdot
\sum_{h=N_d}^\infty\sigma_h^2\right)^{1/2} \geq {{C_d} \over 2}.
\end{aligned}$$ Setting $C_{d+1} = \min\{C_d/2,C_d'\}$, the proof is complete.
The following lemma is used in the proof of Lemma \[thm:cov\_struc\].
\[thm:covconvergence\] Assume $X_i \in \mathcal{L}^4$, ${{\mathbb{E}}}X_0=0$, and $\Theta_{4}<\infty$. Assume $l_n\to\infty$, $k_n \to
\infty$, ${\check m_n} < {\lfloor k_n/3 \rfloor}$ and $h\geq 0$. Define ${S}_{n,k}=\sum_{i=1}^{l_n} (X_{i-k}X_i-\gamma_k)$. Then $$\begin{aligned}
\label{eq:19}
\left|{{\mathbb{E}}}\left({S}_{n,k_n}{S}_{n,k_n+h}\right)/l_n - \sigma_h\right|
& \leq \Theta_{4}^3\left(16\Delta_4({\check m_n}+1) + 6\Theta_4\sqrt{{\check m_n}/l_n} + 4\Psi_4({\check m_n}+1)\right).
\end{aligned}$$
Let $\check X_{i} = {\mathcal{H}}_{i-{\check m_n}}^i X_i$; then $\check X_i$ and $\check X_{i-k_n}$ are independent, because ${\check m_n} <
{\lfloor k_n/3 \rfloor}$. Define $\check {S}_{n,k}=\sum_{i=1}^{l_n} \check
X_{i-k}\check X_i$. By (\[eq:fact9\]), we have for any $k \geq 0$, $$\label{eq:27}
\left\|({S}_{n,k}-\check {S}_{n,k})/\sqrt{l_n}\right\|
\leq 4\kappa_4\Delta_{4}({\check m_n}+1).$$ By (\[eq:fact5\]), $\left\|{S}_{n,k}/\sqrt{l_n}\right\| \leq
2\kappa_4\Theta_4$ for any $k\geq 0$, and it follows that $$\label{eq:24}
\begin{aligned}
& \left|{{\mathbb{E}}}({S}_{n,k_n}{S}_{n,k_n+h})
- {{\mathbb{E}}}(\check {S}_{n,k_n}\check {S}_{n,k_n+h})\right| \cr
& \quad \leq \left\| {S}_{n,k_n} - \check {S}_{n,k_n} \right\|
\cdot \left\|{S}_{n,k_n+h}\right\|
+ \left\|\check {S}_{n,k_n}\right\|
\cdot \left\|{S}_{n,k_n+h} - \check {S}_{n,k_n+h}
\right\|\cr
&\quad\le 16 l_n\kappa_4^2\Theta_4\Delta_4({\check m_n}+1).
\end{aligned}$$ For any $k > 3{\check m_n}$, define $M_{n,k}=\sum_{j=1}^{l_n}D_j$, where $D_j=\sum_{i=j}^{j+{\check m_n}}\check X_{i-k}{\mathcal{P}}^j\check X_i =
\sum_{q=0}^{{\check m_n}} \check X_{j+q-k}{\mathcal{P}}^j\check X_{j+q}$. Since ${\mathcal{P}}^j\check
X_{j+q}$ and $\check X_{j+q-k}$ are independent, we have $$\begin{aligned}
\left\|\check {S}_{n,k} - M_{n,k}\right\|
& = \left\|\sum_{i=1}^{l_n}\sum_{j=i-{\check m_n}}^i \check X_{i-k}{\mathcal{P}}^j\check X_i
- \sum_{j=1}^{l_n}\sum_{i=j}^{j+{\check m_n}}\check X_{i-k}{\mathcal{P}}^j\check X_i\right\| \cr
& \leq \left\|\sum_{j=1-{\check m_n}}^{0}\sum_{i=1}^{j+{\check m_n}} \check X_{i-k}{\mathcal{P}}^j\check X_i\right\|
+ \left\|\sum_{j=l_n-{\check m_n}+1}^{l_n} \sum_{i=l_n+1}^{j+{\check m_n}} \check X_{i-k}{\mathcal{P}}^j\check X_i\right\| \cr \label{eq:21}
& \leq 2\left(\sum_{j=1}^{{\check m_n}} \kappa_2^2\Theta_2(j)^2\right)^{1/2} \leq 2\kappa_2\Theta_2\sqrt{{\check m_n}}.
\end{aligned}$$ According to the proof of Theorem 2 of [@wu:2009], when $k >
3{\check m_n}$, $\|M_{n,k}/\sqrt{l_n}\|^2 = \sum_{k \in {{\mathbb{Z}}}} \check
\gamma_k^2$, where $\check\gamma_k={{\mathbb{E}}}\check X_{0}\check X_k$. By (\[eq:fact4\]) and (\[eq:fact6\]), $|\check\gamma_k| \leq
\zeta_{k}$; and hence $$\begin{aligned}
\label{eq:22}
\left\|M_{n,k}/\sqrt{l_n}\right\|^2
&\leq \sum_{k \in {{\mathbb{Z}}}} \zeta_k^2
=\sum_{j,j'=0}^\infty \left(\delta_2(j)\delta_2(j')
\sum_{k \in {{\mathbb{Z}}}} \delta_2(j+k)\delta_2(j'+k)\right)\cr
& \leq \sum_{j,j'=0}^\infty \delta_2(j)\delta_2(j')
\Psi_2^2 \leq \Theta_2^2\Psi_2^2.\end{aligned}$$ By (\[eq:fact5\]) and (\[eq:fact6\]), $\left\|\check
{S}_{n,k}/\sqrt{l_n}\right\| \leq 2\kappa_4\Theta_4$ for any $k\geq 0$. Combining (\[eq:21\]) and (\[eq:22\]), we have $$\label{eq:25}
\left|{{\mathbb{E}}}(\check {S}_{n,k_n}\check {S}_{n,k_n+h}) - {{\mathbb{E}}}(M_{n,k_n}M_{n,k_n+h})\right|
\leq (2\kappa_4\Theta_4+\Theta_2\Psi_2)\sqrt{l_n} \cdot 2\kappa_2\Theta_2\sqrt{{\check m_n}}.$$ Observe that when $k_n > 3{\check m_n}$, $\check X_{q-k_n} \check X_{q'-k_n-h}$ and ${\mathcal{P}}^0 \check X_q {\mathcal{P}}^0 \check X_{q'}$ are independent for $0 \leq q,q' \leq
{\check m_n}$. Therefore, $$\begin{aligned}
\label{eq:23}
{{\mathbb{E}}}(M_{n,k_n}M_{n,k_n+h})
& = l_n {{\mathbb{E}}}\left(\sum_{q,q'=0}^{{\check m_n}}
\check X_{q-k_n}\check X_{q'-k_n-h}
{\mathcal{P}}^0\check X_q{\mathcal{P}}^0\check X_{q'}\right) \cr
& = l_n \sum_{q,q'=0}^{{\check m_n}}
\check\gamma_{q-q'+h}
{{\mathbb{E}}}\left[({\mathcal{P}}^0\check X_q)({\mathcal{P}}^0\check X_{q'})\right] \cr
& = l_n \sum_{k \in {{\mathbb{Z}}}} \check\gamma_{k+h}
\sum_{q'\in {{\mathbb{Z}}}}
{{\mathbb{E}}}\left[({\mathcal{P}}^0\check X_{q'+k})({\mathcal{P}}^0\check X_{q'})\right]\cr
&= l_n \sum_{k \in {{\mathbb{Z}}}} \check\gamma_{k+h}
\sum_{q'\in {{\mathbb{Z}}}}
{{\mathbb{E}}}\left[({\mathcal{P}}^{q'}\check X_{k})({\mathcal{P}}^{q'}\check
X_{0})\right]\cr
& = l_n \sum_{k \in {{\mathbb{Z}}}} \check\gamma_{k+h}\check\gamma_{k}.
\end{aligned}$$ By (\[eq:fact7\]), $|\gamma_k-\check \gamma_k| \leq 2\kappa_2
\Psi_2({\check m_n}+1)$. Since $|\gamma_k|\leq\zeta_k$ and $|\check \gamma_k|
\leq \zeta_k$, we have $$\begin{aligned}
\label{eq:26}
\left|\sigma_h
-\sum_{k \in {{\mathbb{Z}}}} \check\gamma_{k+h}\check\gamma_{k}\right|
&=& \left|\sum_{k \in {{\mathbb{Z}}}}
(\gamma_k\gamma_{k+h}
- \check\gamma_k\check\gamma_{k+h})\right| \cr
&\leq& 4\kappa_2\Psi_2({\check m_n}+1) \sum_{k \in {{\mathbb{Z}}}} \zeta_k
\leq 4\kappa_2\Psi_2(m+1)\Theta_2^2.\end{aligned}$$ Combining (\[eq:24\]), (\[eq:25\]) and (\[eq:26\]), the lemma follows by noting that $\kappa_2$, $\kappa_4$ are dominated by $\Theta_4$; and $\Theta_2(\cdot)$, $\Psi_2(\cdot)$ and $\Psi_4(\cdot)$ are all dominated by $\Theta_4(\cdot)$.
---
abstract: 'In a recent paper [@Kwon2013], Kwon and Oum claim that every graph of bounded rank-width is a pivot-minor of a graph of bounded tree-width (while the converse has been known true already before). We study the analogous questions for “depth” parameters of graphs, namely for the tree-depth and related new shrub-depth. We show that shrub-depth is monotone under taking vertex-minors, and that every graph class of bounded shrub-depth can be obtained via vertex-minors of graphs of bounded tree-depth. We also consider the same questions for bipartite graphs and pivot-minors.'
address:
- 'Faculty of Informatics, Masaryk University, Botanická 68a, Brno, Czech Republic'
- 'Department of Mathematical Sciences, KAIST, 291 Daehak-ro Yuseong-gu Daejeon, 305-701 South Korea'
author:
- Petr Hliněný
- 'O-joung Kwon'
- Jan Obdržálek
- Sebastian Ordyniak
bibliography:
- 'gtbib.bib'
title: 'Tree-depth and Vertex-minors'
---
[^1] [^2]
Introduction {#sec:intro}
============
Various notions of graph containment relations (e.g. graph minors) play an important part in structural graph theory. Recall that a graph $H$ is a minor of a graph $G$ if $H$ can be obtained from $G$ by a sequence of edge contractions, edge deletions and vertex deletions. In their seminal series of papers, Robertson and Seymour introduced the notion of tree-width and showed the following: The tree-width of a minor of $G$ is at most the tree-width of $G$ and, moreover, for each $k$ there is a finite list of graphs such that a graph $G$ has tree-width at most $k$ if, and only if, no graph in the list is isomorphic to a minor of $G$. This, among other things, implies the existence of a polynomial-time algorithm to check that the tree-width of a graph is at most $k$.
There have been numerous attempts to extend this result to (or find a similar result for) “width” measures other than tree-width. The most natural candidate is clique-width, a measure generalising tree-width defined by Courcelle and Olariu [@CO00]. However, the quest to prove a similar result for this measure has so far been unsuccessful. For one, the graph minor relation is clearly not sufficient, as every graph on $n>1$ vertices is a minor of the complete graph $K_n$, whose clique-width is 2.
However Oum [@Oum05] succeeded in finding the appropriate containment relation – called *vertex-minor* – for the notion of rank-width, which is closely related to clique-width. (More precisely, if the clique-width of a graph is $k$, then its rank-width is between $\log_2(k+1)-1$ and $k$.) Vertex-minors are based on the operation of local complementation: taking a vertex $v$ of a graph $G$ we replace the subgraph induced on the neighbours of $v$ by its edge-complement, and denote the resulting graph by $G*v$. We then say that a graph $H$ is a vertex-minor of $G$ if $H$ can be obtained from $G$ by a sequence of local complementations and vertex deletions. In [@Oum05] it was shown that if $H$ is a vertex-minor of $G$, then its rank-width is at most the rank-width of $G$.
Another graph containment relation, the *pivot-minor*, also defined in [@Oum05], is closely related to vertex-minor. Pivot-minors are based on the operation of edge-pivoting: for an edge $e=\{u,v\}$ of a graph $G$ we perform the operation $G*u*v*u$. Then a graph $H$ is a pivot-minor of $G$ if it can be obtained from $G$ by a sequence of edge-pivotings and vertex deletions. It follows from the definition that every pivot-minor is also a vertex-minor.
This brings an interesting question: What is the exact relationship between various width measures with respect to these new graph containment relations? Recently, it was shown that every graph of rank-width $k$ is a pivot-minor of a graph of tree-width at most $2k$ [@Kwon2013]. In this paper we investigate the existence of similar relationships for two “shallow” graph width measures: tree-depth and shrub-depth.
*Tree-depth* [@no06] is a graph invariant which intuitively measures how far is a graph from being a star. Graphs of bounded tree-depth are sparse and play a central role in the theory of graph classes of bounded expansion. *Shrub-depth* [@GHNOOR12] is a very recent graph invariant, which was designed to fit into the gap between tree-depth and clique-width. (If we consider tree-depth to be the “shallow” counterpart of tree-width, then shrub-depth can be thought of as a “shallow” counterpart of clique-width.)
Our results can be summarised as follows. We start by showing that shrub-depth is monotone under taking vertex-minors (Corollary \[cor:shrubd-monotone\]). Next we prove that every graph class of bounded shrub-depth can be obtained via vertex-minors of graphs of bounded tree-depth (Theorem \[thm:tdtoshrubd\]). Note that, unlike for rank-width and tree-width, restricting ourselves to pivot-minors is not sufficient. Indeed, this is because, as we prove in Proposition \[prop:noclique\], graphs of bounded tree-depth cannot contain arbitrarily large cliques as pivot-minors. Interestingly, we are however able to show the same result for pivot-minors if we restrict ourselves to bipartite graphs, which were, in a similar connection, investigated already in [@Kwon2013]. In particular, our main result of the last section is that for any class of bounded shrub-depth there exists an integer $d$ such that any bipartite graph in the class is a pivot-minor of a graph of tree-depth $d$.
Preliminaries {#sec:prelim}
=============
In this paper, all graphs are finite, undirected and simple. A [*tree*]{} is a connected graph with no cycles, and it is [*rooted*]{} if some vertex is designated as the root. A leaf of a rooted tree is a vertex other than the root having just one neighbour. The height of a rooted tree is the maximum length of a path starting in the root (and hence ending in a leaf). Let $G$ be a graph. We denote by $V(G)$ the vertex set of $G$ and by $E(G)$ its edge set. For $v\in V(G)$, let $N_G(v)$ be the set of the neighbours of $v$ in $G$.
We sometimes deal with [*labelled graphs*]{} $G$, which means that every vertex of $G$ is assigned a subset (possibly empty) of a given finite label set. A graph is [*$m$-coloured*]{} if every vertex is assigned exactly one of given $m$ labels (this notion has no relation to ordinary graph colouring).
We now briefly introduce *monadic second-order logic* ([$\mathrm{MSO}$]{}) over graphs and the concept of [$\mathrm{FO}$]{}([$\mathrm{MSO}$]{}) graph interpretation. [$\mathrm{MSO}$]{} is the extension of first-order logic ([$\mathrm{FO}$]{}) by quantification over sets, and comes in two flavours, [$\mathrm{MSO}_1$]{} and [$\mathrm{MSO}_2$]{}, differing by the objects we are allowed to quantify over:
The language of [$\mathrm{MSO}_1$]{} consists of expressions built from the following elements:
- variables $x,y,\ldots$ for vertices, and $X,Y$ for sets of vertices,
- the predicates $x\in X$ and $\prebox{edge}(x,y)$ with the standard meaning,
- equality for variables, quantifiers $\forall$ and $\exists$ ranging over vertices and vertex sets, and the standard Boolean connectives.
[$\mathrm{MSO}_1$]{} logic can be used to express many interesting graph properties, such as 3-colourability. We also mention [$\mathrm{MSO}_2$]{} logic, which additionally includes quantification over edge sets and can express properties which are not [$\mathrm{MSO}_1$]{} definable (e.g. Hamiltonicity). The large expressive power of both [$\mathrm{MSO}_1$]{} and [$\mathrm{MSO}_2$]{} makes them a very popular choice when formulating algorithmic metatheorems (e.g., for graphs of bounded clique-width or tree-width, respectively).
The logic we will be mostly concerned with is an extension of [$\mathrm{MSO}_1$]{} called *Counting monadic second-order logic* ([$\mathrm{CMSO}_1$]{}). In addition to the [$\mathrm{MSO}_1$]{} syntax, [$\mathrm{CMSO}_1$]{} allows the use of predicates ${\operatorname{mod}}_{a,b}(X)$, where $X$ is a set variable. The semantics of the predicate ${\operatorname{mod}}_{a,b}(X)$ is that the set $X$ has $a$ modulo $b$ elements. We use [$\mathrm{C_2MSO}_1$]{} to denote the parity counting fragment of [$\mathrm{CMSO}_1$]{}, i.e. the fragment where the predicates ${\operatorname{mod}}_{a,b}(X)$ are restricted to $b=2$.
A useful tool when solving the model checking problem on a class of structures is the ability to “efficiently translate” an instance of the problem to a different class of structures, for which we already have an efficient model checking algorithm. To this end we introduce simple FO/[$\mathrm{MSO}_1$]{} graph interpretation, which is an instance of the general concept of interpretability of logic theories [@Rab64] restricted to simple graphs with vertices represented by singletons.
\[def:interpretation\] A [*FO ([$\mathrm{MSO}_1$]{}) graph interpretation*]{} is a pair $I=(\nu,\mu)$ of [*FO*]{} ([$\mathrm{MSO}_1$]{}) formulae (with $1$ and $2$ free variables respectively) in the language of graphs, where $\mu$ is symmetric (i.e. $G\models\mu(x,y)\leftrightarrow\mu(y,x)$ in every graph $G$). To each graph $G$ it associates a graph $G^I$, which is defined as follows:
- The vertex set of $G^I$ is the set of all vertices $v$ of $G$ such that $G\models \nu(v)$;
- The edge set of $G^I$ is the set of all the pairs $\{u,v\}$ of vertices of $G$ such that $G\models \nu(u)\wedge\nu(v)\wedge\mu(u,v)$.
This definition naturally extends to the case of vertex-labelled graphs (using a finite set of labels, sometimes called colours) by introducing finitely many unary relations in the language to encode the labelling.
For example, a complete graph can be interpreted in any graph (with the same number of vertices) by letting $\nu\equiv\mu\equiv true$, and the complement of a graph has an interpretation using $\mu(x,y)\equiv\neg\prebox{edge}(x,y)$.
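As a further illustration (ours, not from the paper), a simple graph interpretation can be executed mechanically; below, the complement interpretation $\mu(x,y)\equiv\neg\prebox{edge}(x,y)$ is applied to the path $P_3$:

```python
import itertools

def interpret(vertices, edges, nu, mu):
    # apply a simple graph interpretation I = (nu, mu):
    # keep the vertices satisfying nu, connect the pairs satisfying mu
    vs = {v for v in vertices if nu(v)}
    es = {frozenset((u, v)) for u, v in itertools.combinations(sorted(vs), 2)
          if mu(u, v)}
    return vs, es

# P_3 on vertices 0 - 1 - 2
p3 = {frozenset((0, 1)), frozenset((1, 2))}
# complement interpretation: nu = true, mu(x, y) = "no edge between x and y"
vs, es = interpret(range(3), p3, lambda v: True,
                   lambda x, y: frozenset((x, y)) not in p3)
assert es == {frozenset((0, 2))}  # the complement of P_3 is the single edge 0 - 2
```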
Vertex-minors and Pivot-minors {#vertex-minors-and-pivot-minors .unnumbered}
------------------------------
For $v\in V(G)$, the *local complementation* at a vertex $v$ of $G$ is the operation which complements the adjacency between every pair of two vertices in $N_G(v)$. The resulting graph is denoted by $G*v$. We say that two graphs are *locally equivalent* if one can be obtained from the other by a sequence of local complementations. For an edge $uv\in E(G)$, *pivoting* an edge $uv$ of $G$ is defined as $G\wedge
uv=G*u*v*u=G*v*u*v$. A graph $H$ is a *vertex-minor* of $G$ if $H$ is obtained from $G$ by applying a sequence of local complementations and deletions of vertices. A graph $H$ is a *pivot-minor* of $G$ if $H$ is obtained from $G$ by applying a sequence of edge pivotings and deletions of vertices. From the definition of pivoting, every pivot-minor of a graph is also its vertex-minor.
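These operations are easy to experiment with. The stdlib-Python sketch below (our illustration) implements local complementation on an edge-set representation and checks two standard identities on a small random graph: $G*u*v*u=G*v*u*v$ for an edge $uv$, and that pivoting the same edge twice returns $G$:

```python
import itertools, random

def local_complement(edges, v, vertices):
    # complement the adjacency between every pair of neighbours of v
    nbrs = {u for u in vertices if frozenset((u, v)) in edges}
    out = set(edges)
    for pair in itertools.combinations(sorted(nbrs), 2):
        out ^= {frozenset(pair)}   # toggle the edge
    return out

def pivot(edges, u, v, vertices):
    # G ^ uv = G * u * v * u; requires uv to be an edge of G
    g = local_complement(edges, u, vertices)
    g = local_complement(g, v, vertices)
    return local_complement(g, u, vertices)

random.seed(1)
vertices = range(6)
edges = {frozenset(p) for p in itertools.combinations(vertices, 2)
         if random.random() < 0.5}
edges.add(frozenset((0, 1)))  # make sure 01 is an edge

assert pivot(edges, 0, 1, vertices) == pivot(edges, 1, 0, vertices)
assert pivot(pivot(edges, 0, 1, vertices), 0, 1, vertices) == edges
```

The second assertion reflects that pivoting is an involution, since local complementation at a fixed vertex is one.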
Pivot-minors of graphs are closely related to a matrix operation called pivoting. To give the exact relationship (Proposition \[prop:pivots-minors-matrices\]) we will need to introduce some matrix concepts.
Pivoting on a Matrix {#pivoting-on-a-matrix .unnumbered}
--------------------
For two sets $A$ and $B$, we denote by $A\Delta B=(A\setminus B)\cup
(B\setminus A)$ its symmetric difference. Let $\mx M$ be an $S\times T$ matrix. For $A\subseteq S$ and $B\subseteq T$, we denote the $A\times B$ submatrix of $\mx M$ by $\mx M[A,B]=(m_{i,j})_{i\in A, j\in B}$. If $A=B$, then $\mx M[A]=\mx M[A,A]$ and we call it a *principal submatrix* of $\mx M$. If $a\in S$ and $b\in T$, then we denote $\mx M_{a,b}=\mx M[\{a\}, \{b\}]$. The *adjacency matrix* $\mx A(G)$ of $G$ is the $V(G)\times V(G)$ matrix such that for $v,w\in V(G)$, $\mx A(G)_{v,w}=1$ if $v$ is adjacent to $w$ in $G$, and $\mx A(G)_{v,w}=0$ otherwise.
Let $$\mx M=\bordermatrix{
& S & X\setminus S\cr
S & A & B\cr
X\setminus S & C & D
}$$ be an $X\times X$ matrix over a field $F$.
If $\mx A=\mx M[S]$ is non-singular, then we define *pivoting* $S$ on the matrix $\mx M$ as $$\mx M\ast S=
\bordermatrix{
& S & X\setminus S\cr
\hfill S & A^{-1} & A^{-1}B \cr
X\setminus S & -CA^{-1} & D-CA^{-1}B
}.$$ It is sometimes called a *principal pivot transformation* [@Tsatsomeros2000]. The following theorem is useful when dealing with matrix pivoting.
\[thm:21\] Let $\mx M$ be an $X\times X$ matrix over a field. If $\mx M[S]$ is a non-singular principal submatrix of $\mx M$, then for every $T\subseteq X$, $(\mx M\ast S)[T]$ is non-singular if and only if $\mx M[S\Delta T]$ is non-singular.
See Bouchet’s proof in Geelen [@Geelen1995 Theorem 2.7].
\[thm:22\] Let $\mx M$ be an $X\times X$ matrix over a field. If $\mx M[S]$ and $(\mx M*S)[T]$ are non-singular, then $(\mx M*S)*T=\mx M*(S\Delta T)$.
See Geelen [@Geelen1995 Theorem 2.8].
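As a quick sanity check of Theorem \[thm:22\] with $T=S$ (so that $(\mx M\ast S)\ast S=\mx M\ast(S\Delta S)=\mx M$), here is an exact-arithmetic sketch of the principal pivot transformation in stdlib Python; the example matrix is ours, not from the paper:

```python
from fractions import Fraction

def invert(a):
    # Gauss-Jordan inverse of a square matrix over the rationals
    n = len(a)
    aug = [[Fraction(a[i][j]) for j in range(n)] +
           [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def pivot_transform(m, s):
    # principal pivot transform M * S; assumes M[S] is non-singular
    n = len(m)
    t = [i for i in range(n) if i not in s]
    Ai = invert([[m[i][j] for j in s] for i in s])           # A^{-1}
    B = [[m[i][j] for j in t] for i in s]
    C = [[m[i][j] for j in s] for i in t]
    AiB, CAi = matmul(Ai, B), matmul(C, Ai)
    CAiB = matmul(CAi, B)
    out = [[Fraction(0)] * n for _ in range(n)]
    for bi, i in enumerate(s):
        for bj, j in enumerate(s):
            out[i][j] = Ai[bi][bj]                           # A^{-1}
        for bj, j in enumerate(t):
            out[i][j] = AiB[bi][bj]                          # A^{-1} B
    for bi, i in enumerate(t):
        for bj, j in enumerate(s):
            out[i][j] = -CAi[bi][bj]                         # -C A^{-1}
        for bj, j in enumerate(t):
            out[i][j] = Fraction(m[i][j]) - CAiB[bi][bj]     # D - C A^{-1} B
    return out

M = [[2, 1, 0], [1, 3, 1], [0, 1, 1]]
S = [0, 1]
assert pivot_transform(pivot_transform(M, S), S) == M  # pivoting twice on S undoes it
```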
We are now ready to state the relationship between pivot-minors and matrix pivots. The proof of the following proposition uses Theorem \[thm:21\] and Theorem \[thm:22\], and we refer the reader to [@Kwon2013] for a detailed explanation.
\[prop:pivots-minors-matrices\] Graph $H$ is a pivot-minor of $G$ if and only if $H$ is the graph whose adjacency matrix is $(\mx A(G)*X)[Y]$ where $X, Y\subseteq V(G)$ and $\mx
A(G)[X]$ is non-singular.
Tree-depth {#tree-depth .unnumbered}
----------
For a forest $T$, the closure ${\operatorname{Clos}}(T)$ of $T$ is the graph obtained from $T$ by making every vertex adjacent to all of its ancestors. The *tree-depth* of a graph $G$, denoted by ${\operatorname{td}}(G)$, is one more than the minimum height of a rooted forest $T$ such that $G\subseteq {\operatorname{Clos}}(T)$.
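This definition yields a well-known recursion that is handy for tiny examples: ${\operatorname{td}}(K_1)=1$; if $G$ is disconnected, ${\operatorname{td}}(G)$ is the maximum over its components; and if $G$ is connected, ${\operatorname{td}}(G)=1+\min_{v}{\operatorname{td}}(G-v)$, the minimising $v$ playing the role of the root. A brute-force stdlib-Python sketch (ours, exponential time, for illustration only):

```python
def components(vertices, edges):
    # connected components via depth-first search
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(u for u in vertices if frozenset((u, v)) in edges)
        seen |= comp
        comps.append(comp)
    return comps

def treedepth(vertices, edges):
    # brute-force recursion; only usable on very small graphs
    vertices = set(vertices)
    if len(vertices) == 1:
        return 1
    comps = components(vertices, edges)
    if len(comps) > 1:
        return max(treedepth(c, edges) for c in comps)
    return 1 + min(treedepth(vertices - {v}, edges) for v in vertices)

path = lambda n: {frozenset((i, i + 1)) for i in range(n - 1)}
assert treedepth(range(4), path(4)) == 3  # deleting a middle vertex is optimal
```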
Shrub-depth and Vertex-minors
=============================
In this section we show the first of our results – that shrub-depth is monotone under taking vertex-minors. The shrub-depth of a graph class is defined by the following very special kind of a simple [$\mathrm{FO}$]{} interpretation:
\[def:tree-model\] We say that a graph $G$ has a [*tree-model of $m$ colours and depth $d$*]{} if there exists a rooted tree $T$ (of height $d$) such that:
i. the set of leaves of $T$ is exactly $V(G)$,
ii. the length of each root-to-leaf path in $T$ is exactly $d$,
iii. each leaf of $T$ is assigned one of $m$ colours (i.e. $T$ is [*$m$-coloured*]{}),
iv. \[it:tree-model-edge\] and the existence of an edge between $u,v\in V(G)$ depends solely on the colours of $u,v$ and the distance between $u,v$ in $T$.
The class of all graphs having such a tree-model is denoted by $\TM dm$.
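For depth $d=1$ all leaves hang off the root, so any two vertices are at distance $2$ in $T$ and condition (iv.) degenerates to a rule on colour pairs. The sketch below (ours) realises $K_{n,n}\in\TM12$ this way:

```python
import itertools

def depth1_tree_model(colours, edge_rule):
    # depth-1 tree-model: every leaf is a child of the root, so all leaf
    # pairs are at distance 2 and adjacency depends on the colours alone
    vs = range(len(colours))
    return {frozenset((u, v)) for u, v in itertools.combinations(vs, 2)
            if edge_rule(colours[u], colours[v])}

n = 3
colours = [1] * n + [2] * n                       # two colour classes of size n
g = depth1_tree_model(colours, lambda a, b: a != b)
assert len(g) == n * n                            # exactly the edges of K_{n,n}
assert all(frozenset((u, v)) in g for u in range(n) for v in range(n, 2 * n))
```

With a single colour and the rule that equal colours are adjacent, the same construction yields $K_n\in\TM11$.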
For example, $K_n\in\TM11$ or $K_{n,n}\in\TM12$. We thus consider:
\[def:shrub-depth\] A class of graphs $\cf S$ has [*shrub-depth*]{} $d$ if there exists $m$ such that $\cf S\subseteq\TM dm$, while $\cf S\not\subseteq\TM{d-1}m$ for every natural number $m$.
It is easy to see that each class $\TM dm$ is closed under complements and induced subgraphs, but neither under disjoint unions, nor under subgraphs. Moreover, the class $\TM dm$ is not closed under local complementations. On the other hand, to prove that shrub-depth is closed under vertex-minors it is sufficient to show that for each $m$ there exists $m'$ such that all graphs locally equivalent to those in $\TM dm$ belong to $\TM d{m'}$. As shrub-depth does not depend on $m$, this will be our proof strategy.
Note that Definition \[def:shrub-depth\] is asymptotic as it makes sense only for infinite graph classes; the shrub-depth of a single finite graph is always at most one. For instance, the class of all cliques has shrub-depth $1$. More interestingly, graph classes of certain shrub-depth are characterisable exactly as those having simple [$\mathrm{CMSO}_1$]{} interpretations in the classes of rooted labelled trees of fixed height:
\[thm:shrub-depth-interpretability\] A class $\cf S$ of graphs has a simple [$\mathrm{CMSO}_1$]{} interpretation in the class of all finite rooted labelled trees of height $\leq d$ if, and only if, $\cf S$ has shrub-depth at most $d$.
In [@GHNOOR12] this statement occurs with a little shift—involving [$\mathrm{MSO}_1$]{} logic instead of [$\mathrm{CMSO}_1$]{}. However, since the proof in [@GHNOOR12] builds everything on one technical claim (kernelization of [$\mathrm{MSO}$]{} on trees of bounded height) which has been subsequently extended to [$\mathrm{CMSO}$]{} in [@GH14 Section 3.2], the full statement follows as well.
Note that the above theorem implies that any class of graphs of bounded shrub-depth is closed under simple [$\mathrm{CMSO}_1$]{} interpretations, i.e., the class of graphs obtained via a simple [$\mathrm{CMSO}_1$]{} interpretation on a class of graphs of bounded shrub-depth has itself bounded shrub-depth. This is one of the two essential ingredients we need to prove that shrub-depth is closed under vertex-minors. The other ingredient is the following technical claim:
\[lem:interpret-localeq\] For a graph $G$, let $\cf L(G)$ denote the set of graphs which are locally equivalent to $G$. Then there exists a simple [$\mathrm{C_2MSO}_1$]{} interpretation such that each such $\cf L(G)$ is interpreted in vertex-labellings of $G$.
Again, [@CO07 Corollary 6.4] states nearly the same as what we claim here. The only trouble is that [@CO07] speaks about more general so-called transductions. Here we briefly survey that the transduction constructed in [@CO07 Corollary 6.4] is really a simple [$\mathrm{C_2MSO}_1$]{} interpretation (we have to stay on an informal level since a formal introduction to all necessary concepts would take up several pages):
i. In [@CO07] local complementations of a graph $G$ are treated via a so-called isotropic system $S=S(G)$. It is, briefly, a set of $V(G)$-indexed three-valued vectors, and so $S$ can be described on the ground set $V(G)$ by a collection of triples of disjoint sets. This representation is definable in [$\mathrm{C_2MSO}_1$]{} [@CO07 Proposition 6.2].
ii. The set of graphs locally equivalent to $G$ then corresponds to the set of isotropic systems strongly isomorphic to $S$. A strong isomorphism of isotropic systems on the ground set $V(G)$ is expressed in [$\mathrm{MSO}_1$]{} with respect to a suitable $6$-partition of $V(G)$ by [@CO07 Proposition 6.1].
iii. Finally, a graph $H$ is locally equivalent to $G$ if and only if $H$ is the fundamental graph of some (not unique) $S'\simeq S$ with respect to a special vector of $S'$, which again has a [$\mathrm{C_2MSO}_1$]{} expression with respect to a triple of subsets of $V(G)$ describing the vector (as in point i.) by [@CO07 Proposition 6.3].
Note that all the aforementioned [$\mathrm{C_2MSO}_1$]{} expressions are on the same ground set $V(G)$. In the desired interpretation $I$ we treat the nine parameter sets of (ii.) and (iii.) as a vertex-labelling of $G$, which consequently can interpret any $H$ locally equivalent to $G$ using [$\mathrm{C_2MSO}_1$]{}.
\[thm:shrub-local-equal\] For a graph class $\cf C$, let $\cf L(\cf C)$ denote the class of graphs which are locally equivalent to a member of $\cf C$. Then the shrub-depth of $\cf L(\cf C)$ is equal to the shrub-depth of $\cf C$.
Let $d$ be the least integer such that, for some $m$ as in Definition \[def:shrub-depth\], we have $\cf C\subseteq\TM dm$. Let $I$ denote an [$\mathrm{FO}$]{} interpretation of $\cf C$ in the class $\cf T_d$ of rooted labelled trees of height $d$ which naturally follows from Definition \[def:tree-model\], and let $J$ be the simple [$\mathrm{C_2MSO}_1$]{} interpretation from Lemma \[lem:interpret-localeq\].
For every $H\in\cf L(\cf C)$ there is a suitably labelled graph $G\in\cf C$ such that $H\simeq G^J$, and a tree $T\in\cf T_d$ such that $G\simeq T^I$. As this $T$ can additionally inherit any suitable labelling of $G$, we can claim $H\simeq (T^I)^J$. Therefore, the composition $J\circ I$ is a [$\mathrm{C_2MSO}_1$]{} interpretation of $\cf L(\cf C)$ in $\cf T_d$. By Theorem \[thm:shrub-depth-interpretability\], $\cf L(\cf C)$ is of shrub-depth at most $d$ and, at the same time, $\cf C\subseteq\cf L(\cf C)$.
\[cor:shrubd-monotone\] The shrub-depth parameter is monotone under taking vertex-minors over graph classes.
By the definition, a vertex-minor is obtained as an induced subgraph of a locally equivalent graph. Since taking induced subgraphs does not change a tree-model, the claim follows from Theorem \[thm:shrub-local-equal\].
From small Tree-depth to small SC-depth
=======================================
We have just seen that taking vertex-minors does not increase the shrub-depth of a graph class. It is thus interesting to ask whether, perhaps, every class of bounded shrub-depth could be constructed by taking vertex-minors of some special graph class. This indeed turns out to be true in a very natural way—the special classes in consideration are the graphs of bounded tree-depth.
Before proceeding we need to introduce another “depth” parameter asymptotically related to shrub-depth which, unlike the former, is defined for any single graph. Let $G$ be a graph and let $X\subseteq V(G)$. We denote by $\overline{G}^X$ the graph $G'$ with vertex set $V(G)$ where $x\neq y$ are adjacent in $G'$ if either
(i) $\{x,y\}\in E(G)$ and $\{x,y\}\not\subseteq X$, or
(ii) $\{x,y\}\not\in E(G)$ and $\{x,y\}\subseteq X$.
In other words, $\overline{G}^X$ is the graph obtained from $G$ by complementing the edges on $X$.
\[def:SC-depth\] We define inductively the class $\SC k$ as follows:
i. let $\SC0=\{K_1\}$;
ii. if $G_1,\dots,G_p\in\SC k$ and $H= G_1\dot\cup\dots\dot\cup G_p$ denotes the disjoint union of the $G_i$, then for every subset $X$ of vertices of $H$ we have $\overline{H\,}^X\in\SC{k+1}$.
The [*SC-depth*]{} of $G$ is the minimum integer $k$ such that $G\in\SC k$.
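A small sketch (ours) of the building step: complementing the edgeless disjoint union of $n$ copies of $K_1$ on $X=V$ yields $K_n$, witnessing that the class of all cliques has SC-depth $1$:

```python
import itertools

def complement_on(edges, x):
    # the graph obtained from the given edge set by complementing
    # the edges inside the vertex subset x
    out = set(edges)
    for pair in itertools.combinations(sorted(x), 2):
        out ^= {frozenset(pair)}   # toggle the edge
    return out

n = 5
k_n = complement_on(set(), range(n))          # n copies of K_1, with X = V
assert len(k_n) == n * (n - 1) // 2           # this is K_n
assert complement_on(k_n, range(n)) == set()  # complementing again undoes it
```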
\[thm:shrubsc\] The following are equivalent for any class of graphs $\cf G$:
- there exist integers $d$, $m$ such that $\cf G\subseteq \TM dm$;
- there exists an integer $k$ such that $\cf G\subseteq \SC k$.
From Definition \[def:SC-depth\], one can obtain the following claim:
\[lem:vertex-minor-tree-depth\] Let $k$ be a positive integer. If a graph $G$ has SC-depth at most $k$, then $G$ is a vertex-minor of a graph of tree-depth at most $k+1$.
For a graph $G$ of SC-depth $k$, we recursively construct a graph $U$ and a rooted forest $T$ such that
i. $G$ can be obtained from $U$ as a vertex-minor via applying local complementations only at the vertices in $V(U)\setminus V(G)$, and
ii. $U\subseteq {\operatorname{Clos}}(T)$ and $T$ has depth $k$.
If $k=0$, then it is clear by setting $G=U=T=K_1$. We assume that $k\ge 1$.
Since $G$ has SC-depth $k$, there exist a graph $H$ and $X\subseteq V(H)$ such that $G=\overline{H}^X$ and $H$ is the disjoint union of $H_1, H_2, \ldots, H_m$ such that each $H_i$ has SC-depth $k-1$. By the induction hypothesis, for each $1\le i\le m$, $H_i$ is a vertex-minor of a graph $U_i$ and $U_i\in {\operatorname{Clos}}(T_i)$ where the depth of $T_i$ is at most $k-1$. For each $1\le i\le m$, let $r_i$ be the root of $T_i$, and let $T$ be the rooted forest obtained from the disjoint union of all $T_i$ by adding a root $r$ which is adjacent to all $r_i$. Let $U$ be the graph obtained from the disjoint union of all $U_i$ and $\{r\}$ by adding all edges from $r$ to $X$. Validity of (ii.) is clear from the construction.
Now we check the statement (i.). By our construction of $U$, any local complementation in $U_i$ has no effect on $U_j$ for $j\not=i$, and local complementations at vertices in $V(U_i)\setminus V(H_i)$ do not change edges incident with $r$. Hence, by induction, we can obtain $H$ as a vertex-minor of $U$ and still have $r$ adjacent precisely to $X\subseteq V(H)$. We then apply the local complementation at $r\in V(U)\setminus
V(H)$, and delete $V(U)\setminus V(G)$ to obtain $G$.
[ This, with Proposition \[thm:shrubsc\], now immediately gives the main conclusion: ]{}
\[thm:tdtoshrubd\] For any class $\cf S$ of bounded shrub-depth, there exists an integer $d$ such that every graph in $\cf S$ is a vertex-minor of a graph of tree-depth $d$.
Comparing Theorem \[thm:tdtoshrubd\] with [@Kwon2013] one may naturally ask whether, perhaps, the weaker pivot-minors could be sufficient in Theorem \[thm:tdtoshrubd\]. Unfortunately, this fails right from the start: all complete graphs have SC-depth $1$, while we will prove (Proposition \[prop:noclique\]) that graphs of bounded tree-depth cannot contain arbitrarily large cliques as pivot-minors.
We need the following technical lemmas.
\[lem:avoidingedge\] Let $G$ be a graph and $X\subseteq V(G)$ such that $\mx A(G)[X]$ is non-singular and ${\lvert X\rvert}\ge 3$. If $u\in X$, then there exist $v, w\in X\setminus \{u\}$ such that $vw\in E(G)$.
Let $u\in X$. Suppose that for every pair of distinct vertices $v, w\in X\setminus \{u\}$ we have $vw\notin E(G)$. Then $G[X]$ is a subgraph of a star with centre $u$, so for ${\lvert X\rvert}\ge 3$ the matrix $\mx A(G)[X]$ has two equal rows or a zero row and is therefore singular, contradicting the assumption.
\[lem:reorder\] Let $G$ be a graph and let $X\subseteq V(G)$ such that $X\neq \emptyset$ and $\mx A(G)[X]$ is non-singular. Let $s\in X$. Then $G$ has a sequence of pairs of vertices $\{x_1,y_1\},
\{x_2,y_2\},$ $\ldots, \{x_m, y_m\}$ such that
1. $\mx A(G)*X=\mx A(G\wedge x_1y_1\wedge x_2y_2 \cdots \wedge x_my_m)$,
2. $(\{x_i, y_i\}: {1\le i\le m})$ is a partition of $X$ (in particular, ${\lvert X\rvert}$ is even), and
3. $s\in \{x_m,y_m\}$.
We prove the lemma by induction on ${\lvert X\rvert}\geq1$. If ${\lvert X\rvert}=1$, then $\mx A(G)[X]$ cannot be non-singular, as we have no loops in $G$. If $X=\{x_1, x_2\}$, then $x_1, x_2$ must form an edge of $G$ since, again, $\mx A(G)[X]$ is non-singular. Since $\mx A(G)*\{x_1, x_2\}=\mx A(G\wedge x_1x_2)$, and either $s=x_1$ or $s=x_2$, we conclude the claim.
For an inductive step, we assume that ${\lvert X\rvert}\ge3$. Since $\mx A(G)[X]$ is non-singular, by Lemma \[lem:avoidingedge\], there exist two vertices $x_1, y_1\in X\setminus \{s\}$ such that $x_1y_1\in E(G)$. Also, by Theorem \[thm:21\], $\mx A(G\wedge x_1y_1)[X\setminus \{x_1, y_1\}]$ is non-singular. By Theorem \[thm:22\], we have $$\begin{aligned}
\mx A(G)*X
&=\mx A(G)*\bigl(\{x_1,y_1\}\Delta (X\setminus \{x_1, y_1\})\bigr) \\
&=(\mx A(G)*\{x_1,y_1\})*(X\setminus \{x_1, y_1\}) \\
&=\mx A(G\wedge x_1y_1)*(X\setminus \{x_1, y_1\}).
\end{aligned}$$
Since $s\in X\setminus \{x_1, y_1\}\not=\emptyset$, by the induction hypothesis, $G\wedge x_1y_1$ has a sequence of pairs of vertices $\{x_2,y_2\}, \ldots, \{x_m,y_m\}$ such that
a) $\mx A(G\wedge x_1y_1)*(X\setminus \{x_1, y_1\})=\mx A((G\wedge x_1y_1)\wedge x_2y_2 \cdots \wedge x_my_m)$,
b) $(\{x_i, y_i\}: {2\le i\le m})$ is a partition of $X\setminus\{x_1, y_1\}$, and
c) $s\in \{x_m,y_m\}$.
Thus, $\mx A(G)*X=\mx A(G\wedge x_1y_1\wedge x_2y_2 \cdots \wedge x_my_m)$ and we can easily verify that $\{x_1, y_1\}, \{x_2, y_2\}, \ldots, \{x_m, y_m\}$ is the desired sequence.
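The pivot $G\wedge uv$ used throughout this section can be computed as the triple local complementation $G*u*v*u$; this graph and its variant with the roles of $u$ and $v$ exchanged differ exactly by the transposition of $u$ and $v$. The sketch below (our own; we include the final exchange of labels so as to match the matrix convention $\mx A(G\wedge xy)=\mx A(G)*\{x,y\}$, an assumption about conventions on our part) makes both operations explicit:

```python
from itertools import combinations

def local_complement(edges, v):
    """G*v: toggle all pairs among the neighbours of v."""
    nbrs = {u for e in edges if v in e for u in e} - {v}
    among = {frozenset(p) for p in combinations(sorted(nbrs), 2)}
    return edges ^ among

def pivot(edges, u, v):
    """G ^ uv as G*u*v*u, followed by exchanging the labels u and v."""
    assert frozenset({u, v}) in edges, "pivoting requires an existing edge"
    g = local_complement(local_complement(local_complement(edges, u), v), u)
    swap = {u: v, v: u}
    return {frozenset(swap.get(x, x) for x in e) for e in g}
```

For instance, pivoting the middle edge of a path on four vertices yields a $4$-cycle, and pivoting the same edge again recovers the path, reflecting that the pivot is an involution.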
Now we are ready to prove the promised negative proposition.
[Figure \[fig:noclique\] (two panels, kept here as a placeholder for the original drawing): each panel shows the pivoted edge $a_mb_m$ on top and three neighbourhood parts below — the private neighbours of $a_m$, the private neighbours of $b_m$, and their common neighbours — illustrating the two ways a new clique can arise after the pivot $a_mb_m$.]
\[prop:noclique\] Let $d,t$ be positive integers such that $t> 3^{d-1}$. Then a graph of tree-depth at most $d$ cannot contain a pivot-minor isomorphic to the clique $K_t$.
Let $K(d)=\max\,\{q:{\operatorname{td}}(G)\le d \text{ and $G$ has a pivot-minor isomorphic to $K_q$}\}$. The statement is equivalent to $K(d)\le 3^{d-1}$. If $d=1$, then each component of a graph of tree-depth $1$ has one vertex and we have $K(1)=1$. We assume $d\ge 2$.
Suppose the statement fails, and choose the minimal $d$ such that some graph $G$ of tree-depth at most $d$ has a pivot-minor isomorphic to $K_t$ where $t> 3^{d-1}$. Let $T$ be a tree-depth decomposition for $G$ of height at most $d$. We may assume without loss of generality that $G$ is connected, so $T$ has a unique root $r$, which is a vertex of $G$, too. Since $G$ has a pivot-minor isomorphic to $K_t$, there exist $X\subseteq V(G)$ and $S\subseteq V(G)$ such that
a) $\mx A(G)[X]$ is non-singular, and
b) the graph whose adjacency matrix is $(\mx A(G)*X)[S]$ is isomorphic to $K_t$.
By Lemma \[lem:reorder\], applied with $s=r$ if $r\in X$ and with $s\in X$ chosen arbitrarily otherwise, there exists a sequence of pairs of vertices $\{a_1, b_1\}, \{a_2, b_2\}, \ldots, \{a_m, b_m\}$ in $G$ such that $\mx A(G)*X=\mx A(G\wedge a_1b_1\wedge a_2b_2 \cdots \wedge a_mb_m)$ and $r\notin \{a_i,b_i\}$ for $1\le i\le m-1$.
Let $G'=G\wedge a_1b_1\wedge a_2b_2 \cdots \wedge a_{m-1}b_{m-1}$. Then $(G'\wedge a_mb_m)[S]$ is isomorphic to $K_t$, and there are two cases:
i. $r\not\in\{a_m,b_m\}$, which means that $G\setminus r$ has the pivot-minor $(G'\wedge a_mb_m)\setminus r$ containing a $K_{t-1}$-subgraph. Since the tree-depth of $G\setminus r$ is at most $d-1$, as witnessed by the decomposition $T\setminus r$, and $t-1\geq3^{d-1}>3^{d-2}$, this contradicts our minimal choice of $d$.
ii. $r=a_m$, up to symmetry. After the pivot $a_mb_m$, a new clique $K$ in $G$ (which is not present in $G'$) is created in two possible ways: $K$ belongs to the closed neighbourhood of one of $a_m,b_m$, or $K$ is formed in the union of the neighbourhoods of $a_m,b_m$ (excluding $a_m,b_m$). See Figure \[fig:noclique\]. In either case, $K$ is formed on two or three, respectively, cliques of $G'\setminus\{a_m,b_m\}$. Again, by the minimality of $d$, the largest clique contained in $G'\setminus r$ has size at most $3^{d-2}$, since it is a pivot-minor of $G\setminus r$, a graph of tree-depth at most $d-1$. Therefore, $t\leq\max\big(1+2\cdot3^{d-2},
3\cdot3^{d-2}\big)=3^{d-1}$, a contradiction.
In both cases we have reached a contradiction, and hence $K(d)\le 3^{d-1}$ as desired.
Bipartite Graphs of small BSC-depth
===================================
In the previous section we have seen that every graph class of bounded shrub-depth can be obtained via vertex-minors of graphs of tree-depth $d$ for some $d$. Moreover, we have also proved that this statement does not hold if we replace vertex-minors with pivot-minors. However, this raises the question of whether there is some simple condition on the graph class in question which would guarantee the theorem to hold for pivot-minors. It turns out that one such simple restriction is to consider just bipartite graphs of bounded shrub-depth, as stated by Theorem \[thm:tdtoshrubd-bip\].
To get our result, we introduce the following “depth” definition better suited to the pivot-minor operation, which builds upon the idea of SC-depth. Let $G$ be a graph and let $X,Y\subseteq V(G)$, $X\cap Y=\emptyset$. We denote by $\overline{G}^{(X,Y)}$ the graph $G'$ with vertex set $V(G)$ and edge set $E(G')=E(G)\Delta \{xy:x\in X, y\in Y\}$. In other words, $\overline{G}^{(X,Y)}$ is the graph obtained from $G$ by complementing the edges between $X$ and $Y$.
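A sketch of the operation $\overline{G}^{(X,Y)}$ in the same edge-set representation as before (our own illustration; `complement_between` is a hypothetical name). Note that a single application already turns an edgeless graph into a complete bipartite graph, which foreshadows the observation below that $K_{m,n}$ has BSC-depth $1$:

```python
def complement_between(edges, X, Y):
    """G-bar^(X,Y): toggle exactly the pairs with one end in X and the
    other in Y; the two sets must be disjoint."""
    X, Y = set(X), set(Y)
    assert not (X & Y), "X and Y must be disjoint"
    cross = {frozenset({x, y}) for x in X for y in Y}
    return edges ^ cross
```

As with $\overline{G}^X$, the operation is an involution for fixed $(X,Y)$.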
\[def:BSC-depth\] We define inductively the class $\BSC k$ as follows:
i. let $\BSC0=\{K_1\}$;
ii. if $G_1,\dots,G_p\in\BSC k$ and $H= G_1\dot\cup\dots\dot\cup G_p$, then for every pair of disjoint subsets $X,Y\subseteq V(H)$ we have $\overline{H\,}^{(X,Y)}\in\BSC{k+1}$.
The [*BSC-depth*]{} of $G$ is the minimum integer $k$ such that $G\in\BSC k$.
In general, graphs of bounded SC-depth may have arbitrarily large BSC-depth, but the two notions are nevertheless closely related, as Lemma \[lem:BSC-SC-relation\] shows. Here $\chi(G)$ denotes the chromatic number of a graph.
\[lem:BSC-SC-relation\]
1. The BSC-depth of any graph $G$ is at least $\lceil\log_2\chi(G)\rceil$. \[it:logchi\]
2. The SC-depth of $G$ is not larger than three times its BSC-depth.
3. If $G$ is bipartite, then the BSC-depth of $G$ is not larger than its SC-depth. \[it:bipSC\]
a\) If $H'=\overline{H\,}^{(X,Y)}$, then $\chi(H')\leq2\chi(H)$ since one may use a fresh set of colours for the vertices in $Y$. Then the claim follows by induction from Definition \[def:BSC-depth\].
b\) We have $$\overline{H\,}^{(X,Y)} = \overline{\left(
\overline{\left(\overline{H\,}^X\right)}^Y
\right)}^{X\cup Y}$$ and so the claim directly follows by comparing Definitions \[def:BSC-depth\] and \[def:SC-depth\].
c\) Let $G\in \SC k$. Let $V(G)=A\cup B$ be a bipartition of $G$, i.e., that $A$ and $B$ are disjoint independent sets. We use here for $G$ the same “decomposition” as in Definition \[def:SC-depth\]; just replacing at every step a single set $X$ with the pair $(X\cap A,X\cap B)$ (point ii. of the definitions). The resulting graph $G'\in\BSC k$ then fulfils the following: both $A,B$ are independent sets in $G'$, and every $uv\in A\times B$ is an edge in $G'$ if and only if $uv$ is an edge of $G$. Therefore, $G=G'\in\BSC k$.
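The identity in part b) can also be confirmed mechanically. The sketch below (our own; the helper names are ad hoc) compares the single bipartite complementation with the three-step composition on a seeded random graph:

```python
import random
from itertools import combinations

def comp_on(edges, S):
    # complement the edges on the subset S (the G-bar^S operation)
    inside = {frozenset(p) for p in combinations(sorted(set(S)), 2)}
    return edges ^ inside

def comp_between(edges, X, Y):
    # complement the edges between the disjoint sets X and Y
    return edges ^ {frozenset({x, y}) for x in X for y in Y}

def identity_holds(edges, X, Y):
    """Check G-bar^(X,Y) == bar(bar(bar(G)^X)^Y)^(X u Y) on one input."""
    lhs = comp_between(edges, X, Y)
    rhs = comp_on(comp_on(comp_on(edges, X), Y), set(X) | set(Y))
    return lhs == rhs

# a pseudo-random graph on 8 vertices (seeded, hence deterministic)
random.seed(0)
H = {frozenset(p) for p in combinations(range(8), 2) if random.random() < 0.5}
```

Pairs inside $X$ are toggled by the first and third complementation, pairs inside $Y$ by the second and third, so only the pairs between $X$ and $Y$ are toggled a net odd number of times.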
In particular, following Lemma \[lem:BSC-SC-relation\]\[it:logchi\]), the BSC-depth of the clique $K_n$ equals $\lceil\log_2n\rceil$, while every complete bipartite graph $K_{m,n}$ has BSC-depth $1$.
\[lem:BSC-from-pivot\] Let $k$ be a positive integer. If a graph $G$ is of BSC-depth at most $k$, then $G$ is a pivot-minor of a graph of tree-depth at most $2k+1$.
The proof follows along the same lines as the proof of Lemma \[lem:vertex-minor-tree-depth\]. For a graph $G$ of BSC-depth $k$, we recursively construct a graph $U$ and a rooted forest $T$ such that
i. $G$ can be obtained from $U$ as a pivot-minor via pivoting edges only between vertices in $V(U)\setminus V(G)$, and
ii. $U\subseteq {\operatorname{Clos}}(T)$ and $T$ has depth at most $2k+1$.
If $k=0$, then it is clear by setting $G=U=T=K_1$. We assume that $k\ge 1$.
Since $G$ has BSC-depth $k$, there exist a graph $H$ and disjoint subsets $X,Y\subseteq V(H)$ such that $G=\overline{H}^{(X,Y)}$ and $H$ is the disjoint union of $H_1, H_2, \ldots, H_m$ such that each $H_i$ has BSC-depth $k-1$. By the induction hypothesis, for each $1\le i\le m$, $H_i$ is a pivot-minor of a graph $U_i$ and $U_i\in {\operatorname{Clos}}(T_i)$ where the height of $T_i$ is at most $2(k-1)+1$. For each $1\le i\le m$, let $r_i$ be the root of $T_i$, and let $T$ be the rooted forest obtained from the disjoint union of all $T_i$ by adding an edge between two new vertices $r_x$ and $r_y$ and by connecting $r_y$ to all $r_i$. Let $U$ be the graph obtained from the disjoint union of all $U_i$ and the vertices $\{r_x,r_y\}$ by adding an edge between $r_x$ and $r_y$ and all edges from $r_x$ to $X$ as well as all edges from $r_y$ to $Y$. Validity of (ii.) is clear from the construction.
Now we check the statement (i.). By our construction of $U$, any pivoting on edges in $U_i$ has no effect on $U_j$ for $j\not=i$, and pivoting on edges in $V(U_i)\setminus V(H_i)$ does not change edges incident with $r_x$ or $r_y$. Hence, by induction, we can obtain $H$ as a pivot-minor of $U$ and still have $r_x$ adjacent precisely to $r_y$ and $X\subseteq V(H)$, and $r_y$ adjacent precisely to $r_x$ and $Y \subseteq V(H)$. We then pivot the edge $r_xr_y$, both of whose ends lie in $V(U)\setminus V(H)$, and delete $V(U)\setminus V(G)$ to obtain $G$.
[The main result of this section now immediately follows from Lemmas \[lem:BSC-from-pivot\], \[lem:BSC-SC-relation\]\[it:bipSC\]) and Proposition \[thm:shrubsc\].]{}
\[thm:tdtoshrubd-bip\] For any class $\cf S$ of bounded shrub-depth, there exists an integer $d$ such that every [*bipartite*]{} graph in $\cf S$ is a pivot-minor of a graph of tree-depth $d$.
Conclusions
===========
We finish the paper with two questions that naturally arise from our investigations. While the first question has a short negative answer, the second one is left as an open problem.
A [*cograph*]{} is a graph obtained from singleton vertices by repeated operations of disjoint union and (full) complementation. This well-studied concept has been extended to the so-called “$m$-partite cographs” in [@GHNOOR12] (we skip the technical definition here for simplicity), where cographs are obtained for $m=1$. It has been shown in [@GHNOOR12] that $m$-partite cographs present an intermediate step between classes of bounded shrub-depth and those of bounded clique-width.
The first question is whether some of our results can be extended from classes of bounded shrub-depth to those of $m$-partite cographs. We know that shrub-depth is monotone under taking vertex-minors (Corollary \[cor:shrubd-monotone\]) and an analogous claim is asymptotically true also for clique-width [@os06]. However, the main obstacle to such an extension is the fact that $m$-partite cographs do not behave well with respect to local and pivot equivalence of graphs. To show this we will employ the following proposition:
A path of length $n$ is an $m$-partite cograph if, and only if, $n<3(2^m-1)$.
By the proposition, to negatively answer our question it is enough to find a class of $m$-partite cographs containing long paths as pivot-minors:
\[prop:Hn-path\] Let $H_n$ denote the graph on $2n$ vertices from Figure \[fig:cograph2n\]. Then $H_n$ is a cograph for each $n\geq1$, and $H_n$ contains a path of length $n$ as a pivot-minor.
![A graph on $2n$ vertices [@GHNOOR12] which is a cograph and pivoting on $a_2b_2,a_3b_3,\dots,a_{n-1}b_{n-1}$ results in an induced path on $a_1,b_1,b_2,\dots,b_n$.[]{data-label="fig:cograph2n"}](half){width=".55\textwidth"}
Here $V(H_n)=\{a_i,b_i: i=1,2,\dots,n\}$ and $E(H_n)=\{b_ib_j:$ $1\leq i<j\leq n\}\cup
\{b_ia_j:$ $1\leq i\leq j\leq n\}$. The graph $H_n$ can be constructed iteratively as follows, for $j=n,n-1,\dots,1$: add a new vertex $a_j$, complement all the edges of the graph, add a new vertex $b_j$, complement again. Consequently, $H_n$ is a cograph (and, in fact, a so called threshold graph).
For the second part, we let inductively $G_1:=H_n$ and $G_{j}:=G_{j-1}\wedge a_jb_j$ for $j=2,\dots,n-1$. Then, by the definition, $G_{2}$ is obtained from $H_n$ by removing all the edges joining $b_1$ to the vertices $a_k,b_k$ with $k>2$; in particular, the edges $b_1a_1,b_1a_2,b_1b_2$ remain. Hence $G_{2}\setminus\{a_1,b_1\}$ is isomorphic to $H_{n-1}$, and $a_3,b_3$ are adjacent in $G_{2}$ only to vertices other than $a_1,b_1$. Consequently, by induction, $G_j$ is obtained from $G_{j-1}$ by removing all the edges joining $b_{j-1}$ to the vertices $a_k,b_k$ with $k>j$, and $G_{n-1}$ has the edge set $\{b_1b_2,b_2b_3,\dots,b_{n-1}b_n\}\cup
\{a_1b_1,a_2b_1,a_2b_2,a_3b_2,\dots,a_nb_{n-1},a_nb_n\}$. Then $G_{n-1}[a_1,b_1,b_2,\dots,b_n]$ is a path.
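The pivot computation in this proof is delicate, so it may be reassuring to replay it mechanically. The sketch below (our own; it uses the label-exchanging pivot convention matching $\mx A(G\wedge xy)=\mx A(G)*\{x,y\}$, an assumption about conventions on our part) builds $H_n$, performs the pivots $a_2b_2,\dots,a_{n-1}b_{n-1}$, and checks that the vertices $a_1,b_1,b_2,\dots,b_n$ induce a path:

```python
from itertools import combinations

def local_complement(edges, v):
    # G*v: toggle all pairs among the neighbours of v
    nbrs = {u for e in edges if v in e for u in e} - {v}
    return edges ^ {frozenset(p) for p in combinations(sorted(nbrs), 2)}

def pivot(edges, u, v):
    # G ^ uv as G*u*v*u, followed by exchanging the labels u and v
    assert frozenset({u, v}) in edges
    g = local_complement(local_complement(local_complement(edges, u), v), u)
    swap = {u: v, v: u}
    return {frozenset(swap.get(x, x) for x in e) for e in g}

def H(n):
    """E(H_n) = {b_i b_j : i < j} united with {b_i a_j : i <= j}."""
    bb = {frozenset({('b', i), ('b', j)})
          for i in range(1, n + 1) for j in range(i + 1, n + 1)}
    ba = {frozenset({('b', i), ('a', j)})
          for i in range(1, n + 1) for j in range(i, n + 1)}
    return bb | ba

def pivots_give_path(n):
    g = H(n)
    for j in range(2, n):                 # pivot a_j b_j for j = 2..n-1
        g = pivot(g, ('a', j), ('b', j))
    keep = [('a', 1)] + [('b', i) for i in range(1, n + 1)]
    induced = {e for e in g if e <= set(keep)}
    path = {frozenset({keep[i], keep[i + 1]}) for i in range(len(keep) - 1)}
    return induced == path
```

Deleting the vertices $a_2,\dots,a_n$ from the pivoted graph then exhibits the path as a pivot-minor.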
Building on this negative result, it is only natural to ask whether not containing a long path as a vertex-minor is exactly the property characterising classes of bounded shrub-depth.
A class $\cf C$ of graphs is of bounded shrub-depth if, and only if, there exists an integer $t$ such that no graph $G\in\cf C$ contains a path of length $t$ as a vertex-minor.
[^1]: P. Hliněný and J. Obdržálek have been supported by the Czech Science Foundation, project no. 14-03501S. S. Ordyniak has been supported by the European Social Fund and the state budget of the Czech Republic under project CZ.1.07/2.3.00/30.0009 (POSTDOC I).
[^2]: O. Kwon has been supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0011653).
---
abstract: 'For the module category of a hereditary ring, the Ext-orthogonal pairs of subcategories are studied. For each Ext-orthogonal pair that is generated by a single module, a 5-term exact sequence is constructed. The pairs of finite type are characterized and two consequences for the class of hereditary rings are established: homological epimorphisms and universal localizations coincide, and the telescope conjecture for the derived category holds true. However, we present examples showing that neither of these two statements is true in general for rings of global dimension 2.'
address:
- |
Henning Krause\
Fakultät für Mathematik\
Universität Bielefeld\
33501 Bielefeld\
Germany.
- |
Jan Šťovíček\
Charles University in Prague\
Faculty of Mathematics and Physics\
Department of Algebra\
Sokolovska 83\
186 75 Praha 8\
Czech Republic.
author:
- Henning Krause
- Jan Šťovíček
title: 'The telescope conjecture for hereditary rings via Ext-orthogonal pairs'
---
Introduction
============
In this paper, we prove the telescope conjecture for the derived category of any hereditary ring. To achieve this, we study Ext-orthogonal pairs of subcategories for hereditary module categories.
The telescope conjecture for the derived category of a module category is also called smashing conjecture. It is the analogue of the telescope conjecture from stable homotopy theory which is due to Bousfield and Ravenel [@B; @Ra]. In each case one deals with a compactly generated triangulated category. The conjecture then claims that a localizing subcategory is generated by compact objects provided it is smashing, that is, the localizing subcategory arises as the kernel of a localization functor that preserves arbitrary coproducts [@Ne1992]. In this general form, the telescope conjecture seems to be wide open. For the stable homotopy category, we refer to the work of Mahowald, Ravenel, and Shick [@MRS] for more details. In our case, the conjecture takes the following form and is proved in §\[se:tel\]:
Let $A$ be a hereditary ring. For a localizing subcategory $\C$ of $\bfD({\operatorname{Mod}\nolimits}A)$ the following conditions are equivalent:
1. There exists a localization functor $L\colon\bfD({\operatorname{Mod}\nolimits}A)\to\bfD({\operatorname{Mod}\nolimits}A)$ that preserves coproducts and such that $\C={\operatorname{Ker}\nolimits}L$.
2. The localizing subcategory $\C$ is generated by perfect complexes.
For the derived category of a module category, only two results seem to be known so far. Neeman proved the conjecture for the derived category of a commutative noetherian ring [@N], essentially by classifying all localizing subcategories; see [@HPS] for a treatment of this approach in the context of axiomatic stable homotopy theory. On the other hand, Keller gave an explicit example of a commutative ring where the conjecture does not hold [@Ke]. In fact, an analysis of Keller’s argument [@Kr2005] shows that there are such examples having global dimension $2$; see Example \[ex:tc\_fail\].
The approach for hereditary rings presented here is completely different from Neeman’s. In particular, we are working in a non-commutative setting and without using any noetherianess assumption. The main idea here is to exploit the very close connection between the module category and the derived category in the hereditary case. Unfortunately, this approach cannot be extended directly even to global dimension $2$, as mentioned above.
At a first glance, the telescope conjecture seems to be a rather abstract statement about unbounded derived categories. However in the context of a fixed hereditary ring, it turns out that smashing localizing subcategories are in bijective correspondence to various natural structures; see §\[se:bij\]:
\[th:ThB\] For a hereditary ring $A$ there are bijections between the following sets:
1. Extension closed abelian subcategories of ${\operatorname{Mod}\nolimits}A$ that are closed under products and coproducts.
2. Extension closed abelian subcategories of ${\operatorname{mod}\nolimits}A$.
3. Homological epimorphisms $A\to B$ (up to isomorphism).
4. Universal localizations $A\to B$ (up to isomorphism).
5. Localizing subcategories of $\bfD({\operatorname{Mod}\nolimits}A)$ that are closed under products.
6. Localization functors $\bfD({\operatorname{Mod}\nolimits}A)\to\bfD({\operatorname{Mod}\nolimits}A)$ preserving coproducts (up to natural isomorphism).
7. Thick subcategories of $\bfD^b({\operatorname{mod}\nolimits}A)$.
This reveals that the telescope conjecture and its proof are related to interesting recent work by some other authors. In [@Sch2007], Schofield describes for any hereditary ring its universal localizations in terms of appropriate subcategories of finitely presented modules. This is a consequence of the present work since we show that homological epimorphisms and universal localizations coincide for any hereditary ring; see §\[se:uni-loc\]. However, as we mention at the end of §\[se:uni-loc\], the identification between homological epimorphisms and universal localizations also fails already for rings of global dimension $2$.
In [@NS], Nicolás and Saorín establish for a differential graded algebra a correspondence between recollements for its derived category and differential graded homological epimorphisms. This correspondence specializes for a hereditary ring to the above mentioned bijection between smashing localizing subcategories and homological epimorphisms.
The link between the structures mentioned in Theorem B is provided by so-called Ext-orthogonal pairs. This concept seems to be new, but it is based on the notion of a perpendicular category which is one of the fundamental tools for studying hereditary categories arising in representation theory [@Sch1991; @GL].
Given any abelian category $\A$, we call a pair $(\X,\Y)$ of full subcategories *Ext-orthogonal* if $\X$ and $\Y$ are orthogonal to each other with respect to the bifunctor $\coprod_{n\geq
0}{\operatorname{Ext}\nolimits}^n_\A(-,-)$. This concept is the analogue of a *torsion pair* and a *cotorsion pair* where one considers instead the bifunctors ${\operatorname{Hom}\nolimits}_\A(-,-)$ and $\coprod_{n> 0}{\operatorname{Ext}\nolimits}^n_\A(-,-)$, respectively [@D; @Sa].
Torsion and cotorsion pairs are most interesting when they are *complete*. For a torsion pair this means that each object $M$ in $\A$ admits a short exact sequence $0\to X_M\to M\to Y^M\to 0$ with $X_M\in\X$ and $Y^M\in\Y$. In the second case this means that each object $M$ admits short exact sequences $0\to Y_M\to X_M\to M\to 0$ and $0\to M\to Y^M\to X^M\to 0$ with $X_M,X^M\in\X$ and $Y_M,Y^M\in\Y$.
It turns out that there is also a reasonable notion of completeness for Ext-orthogonal pairs. In that case each object $M$ in $\A$ admits a 5-term exact sequence $$0\to Y_M\to X_M\to M\to Y^M \to X^M\to 0$$ with $X_M,X^M\in\X$ and $Y_M,Y^M\in\Y$. This notion of a complete Ext-orthogonal pair is meaningful also for non-hereditary module categories, see Example \[ex:domains\].
In this work, however, we study Ext-orthogonal pairs mainly for the module category of a hereditary ring. As already mentioned, this assumption implies a close connection between the module category and its derived category, which we exploit in both directions. We use Bousfield localization functors which exist for the derived category to establish the completeness of certain Ext-orthogonal pairs for the module category; see §\[se:ext\]. On the other hand, we are able to prove the telescope conjecture for the derived category by showing first a similar result for Ext-orthogonal pairs; see §\[se:fin\] and §\[se:tel\].
Specific examples of Ext-orthogonal pairs arise in the representation theory of finite dimensional algebras via perpendicular categories; see §\[se:exm\]. Note that a perpendicular category is always a part of an Ext-orthogonal pair. Schofield introduced perpendicular categories for representations of quivers [@Sch1991] and this fits into our set-up because the path algebra of any quiver is hereditary. In fact, the concept of a perpendicular category is fundamental for studying hereditary categories arising in representation theory [@GL]. It is therefore somewhat surprising that the 5-term exact sequence for a complete Ext-orthogonal pair seems to appear for the first time in this work.
Acknowledgements {#acknowledgements .unnumbered}
----------------
The authors would like to thank Lidia Angeleri Hügel and Manolo Saorín for helpful discussions concerning this work.
Ext-orthogonal pairs {#se:ext}
====================
Let $\A$ be an abelian category. Given a pair of objects $X,Y\in\A$, set $${\operatorname{Ext}\nolimits}_\A^*(X,Y)=\coprod_{n\in\bbZ}{\operatorname{Ext}\nolimits}_\A^n(X,Y).$$ For a subcategory $\C$ of $\A$ we consider its full Ext-orthogonal subcategories $$\begin{aligned}
{^\perp}\C&=\{X\in\A\mid {\operatorname{Ext}\nolimits}^*_\A(X,C)=0\text{ for all }C\in\C\},\\
\C^\perp&=\{Y\in\A\mid {\operatorname{Ext}\nolimits}^*_\A(C,Y)=0\text{ for all }C\in\C\}.\end{aligned}$$ If $\C = \{X\}$ is a singleton, we write ${^\perp}X$ instead of ${^\perp}\{X\}$, and similarly with $X^\perp$.
An *Ext-orthogonal pair* for $\A$ is a pair $(\X,\Y)$ of full subcategories such that $\X^\perp=\Y$ and $\X={^\perp}\Y$. An Ext-orthogonal pair $(\X,\Y)$ is called *complete* if there exists for each object $M\in\A$ an exact sequence $$\e_M\colon\:\:0\to Y_M\to X_M\to M\to Y^M\to X^M\to 0$$ with $X_M,X^M\in\X$ and $Y_M,Y^M\in\Y$. The pair $(\X,\Y)$ is *generated* by a subcategory $\C$ of $\A$ if $\Y=\C^\perp$.
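For orientation, here is a concrete instance over $\A={\operatorname{Mod}\nolimits}\bbZ$ (an illustrative sketch of our own, relying on standard facts about abelian groups): for the Ext-orthogonal pair generated by $\bbZ/p$, the class $\Y$ consists of the abelian groups on which $p$ acts invertibly, and the 5-term exact sequence for $M=\bbZ$ takes the form

```latex
0 \;\to\; 0 \;\to\; 0 \;\to\; \bbZ \;\to\; \bbZ[1/p]
  \;\to\; \bbZ(p^\infty) \;\to\; 0 .
```

Here $Y^M=\bbZ[1/p]$ lies in $\Y$, while $X^M=\bbZ(p^\infty)\cong\bbZ[1/p]/\bbZ$ lies in $\X$ as a quotient of the coproduct $\coprod_{n\geq1}\bbZ/p^n$; the first two terms vanish because $\bbZ$ admits no nonzero homomorphisms from the $p$-torsion objects of $\X$.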
The definition can be extended to the derived category $\bfD(\A)$ of $\A$ if we put for each pair of complexes $X,Y\in\bfD(\A)$ and $n\in\bbZ$ $${\operatorname{Ext}\nolimits}^n_\A(X,Y)={\operatorname{Hom}\nolimits}_{\bfD(\A)}(X,Y[n]).$$ Thus an *Ext-orthogonal pair* for $\bfD(\A)$ is a pair $(\X,\Y)$ of full subcategories of $\bfD(\A)$ such that $\X^\perp=\Y$ and $\X={^\perp}\Y$.
Recall that an *abelian subcategory* of $\A$ is a full subcategory $\C$ such that the category $\C$ is abelian and the inclusion functor $\C\to\A$ is exact. Moreover, we will always assume that an abelian subcategory $\C$ is closed under taking isomorphic objects in the original category $\A$. Suppose $\A$ is *hereditary*, that is, ${\operatorname{Ext}\nolimits}_\A^n(-,-)$ vanishes for all $n>1$. Then a simple calculation shows that for any subcategory $\C$ of $\A$, the subcategories $\C^\perp$ and $^{\perp}\C$ are extension closed abelian subcategories; see [@GL Proposition 1.1].
The following result establishes the completeness for certain Ext-orthogonal pairs. Recall that an abelian category is a *Grothendieck category* if it has a set of generators and admits colimits that are exact when taken over filtered categories.
\[th:perpX\] Let $\A$ be a hereditary Grothendieck category and $X$ an object in $\A$. Set $\Y=X^\perp$ and let $\X$ denote the smallest extension closed abelian subcategory of $\A$ that is closed under taking coproducts and contains $X$. Then $(\X,\Y)$ is a complete Ext-orthogonal pair for $\A$. Thus there exists for each object $M\in\A$ an exact sequence $$0\to Y_M\to X_M\to M\to Y^M\to X^M\to 0$$ with $X_M,X^M\in\X$ and $Y_M,Y^M\in\Y$. This sequence is natural and induces bijections ${\operatorname{Hom}\nolimits}_\A(X,X_M)\to{\operatorname{Hom}\nolimits}_\A(X,M)$ and ${\operatorname{Hom}\nolimits}_\A(Y^M,Y)\to{\operatorname{Hom}\nolimits}_\A(M,Y)$ for all $X\in\X$ and $Y\in\Y$.
The proof uses derived categories and Bousfield localization functors. Thus we need to collect some basic facts about hereditary abelian categories and their derived categories.
The derived category of a hereditary abelian category {#the-derived-category-of-a-hereditary-abelian-category .unnumbered}
-----------------------------------------------------
Let $\A$ be a hereditary abelian category and let $\bfD(\A)$ denote its derived category. We assume that $\A$ admits coproducts and that the coproduct of any set of exact sequences is again exact. Thus the category $\bfD(\A)$ admits coproducts, and for each integer $n$ these coproducts are preserved by the functor $H^n\colon\bfD(\A)\to\A$ which takes a complex to its cohomology in degree $n$.
It is well-known that each complex is quasi-isomorphic to its cohomology. That is:
\[le:formal\] Given a complex $X$ in $\bfD(\A)$, there are (non-canonical) isomorphisms $$\coprod_{n\in\bbZ}(H^nX)[-n]\cong X\cong
\prod_{n\in\bbZ}(H^nX)[-n].$$
See for instance [@Kr §1.6].
A full subcategory $\C$ of $\bfD(\A)$ is called *thick* if it is a triangulated subcategory which is, in addition, closed under taking direct summands. A thick subcategory is *localizing* if it is closed under taking coproducts. Note that for each full subcategory $\C$ the subcategories $\C^\perp$ and $^\perp\C$ are thick.
To a full subcategory $\C$ of $\bfD(\A)$ we assign the full subcategory $$H^0\C=\{M\in\A\mid M= H^0X\text{ for some }X\in\C\},$$ and given a full subcategory $\X$ of $\A$, we define the full subcategory $$\bfD_\X(\A)=\{X\in\bfD(\A)\mid H^nX\in\X\text{ for all
}n\in\bbZ\}.$$ Both assignments induce mutually inverse bijections between appropriate subcategories. This is a useful fact which we recall from [@Br Theorem 6.1].
\[pr:thick-corr\] The functor $H^0\colon\bfD(\A)\to\A$ induces a bijection between the localizing subcategories of $\bfD(\A)$ and the extension closed abelian subcategories of $\A$ that are closed under coproducts. The inverse map sends a full subcategory $\X$ of $\A$ to $\bfD_\X(\A)$.
\[re:thick\] The bijection in Proposition \[pr:thick-corr\] has an analogue for thick subcategories. Given any hereditary abelian category $\B$, the functor $H^0\colon\bfD^b(\B)\to\B$ induces a bijection between the thick subcategories of $\bfD^b(\B)$ and the extension closed abelian subcategories of $\B$; see [@Br Theorem 5.1].
Next we extend these maps to bijections between Ext-orthogonal pairs.
\[pr:thick\] The functor $H^0\colon\bfD(\A)\to\A$ induces a bijection between the Ext-orthogonal pairs for $\bfD(\A)$ and the Ext-orthogonal pairs for $\A$. The inverse map sends a pair $(\X,\Y)$ for $\A$ to $(\bfD_\X(\A),\bfD_\Y(\A))$.
First observe that for each pair of complexes $X,Y\in\bfD(\A)$, we have ${\operatorname{Ext}\nolimits}_\A^*(X,Y)=0$ if and only if ${\operatorname{Ext}\nolimits}_\A^*(H^pX,H^qY)=0$ for all $p,q\in\bbZ$. This is a consequence of Lemma \[le:formal\]. It follows that $H^0$ and its inverse send Ext-orthogonal pairs to Ext-orthogonal pairs. Each Ext-orthogonal pair is determined by its first half, and therefore an application of Proposition \[pr:thick-corr\] shows that both maps are mutually inverse.
Localization functors {#localization-functors .unnumbered}
---------------------
Let $\T$ be a triangulated category. A *localization functor* $L\colon\T\to\T$ is an exact functor that admits a natural transformation $\eta\colon{\operatorname{Id}\nolimits}_\T\to L$ such that $L\eta_X$ is an isomorphism and $L\eta_X=\eta_{LX}$ for all objects $X\in\T$. Basic facts about localization functors can be found, for example, in [@BIK §3].
\[pr:local\] Let $\A$ be a hereditary abelian category. For a full subcategory $\X$ of $\A$ the following are equivalent.
1. There exists a localization functor $L\colon\bfD(\A)\to\bfD(\A)$ such that ${\operatorname{Ker}\nolimits}L=\bfD_\X(\A)$.
2. There exists a complete Ext-orthogonal pair $(\X,\Y)$ for $\A$.
\(1) $\Rightarrow$ (2): The kernel ${\operatorname{Ker}\nolimits}L$ and the essential image ${\operatorname{Im}\nolimits}L$ of a localization functor $L$ form an Ext-orthogonal pair for $\bfD(\A)$; see for instance [@BIK Lemma 3.3]. Then it follows from Proposition \[pr:thick\] that the pair $(\X,\Y)=(H^0{\operatorname{Ker}\nolimits}L,H^0{\operatorname{Im}\nolimits}L)$ is Ext-orthogonal for $\A$.
The localization functor $L$ comes equipped with a natural transformation $\eta\colon{\operatorname{Id}\nolimits}_{\bfD(\A)}\to L$, and for each complex $M$ we complete the morphism $\eta_M\colon M\to LM$ to an exact triangle $$\Ga M\to M\to LM\to \Ga M[1].$$ Note that $\Ga M\in{\operatorname{Ker}\nolimits}L$ and $LM\in{\operatorname{Im}\nolimits}L$ since $L\eta_M$ is an isomorphism and $L$ is exact. Now suppose that $M$ is concentrated in degree zero. Applying $H^0$ to this triangle yields an exact sequence $$0\to Y_M\to X_M\to M\to Y^M\to X^M\to 0$$ with $X_M,X^M\in\X$ and $Y_M,Y^M\in\Y$.
\(2) $\Rightarrow$ (1): Let $(\X,\Y)$ be an Ext-orthogonal pair for $\A$. This pair induces an Ext-orthogonal pair $(\bfD_\X(\A),\bfD_\Y(\A))$ for $\bfD(\A)$ by Proposition \[pr:thick\]. In order to construct a localization functor $L\colon\bfD(\A)\to\bfD(\A)$ such that ${\operatorname{Ker}\nolimits}L=\bfD_\X(\A)$, it is sufficient to construct for each object $M$ in $\bfD(\A)$ an exact triangle $X\to M\to Y\to X[1]$ with $X\in\bfD_\X(\A)$ and $Y\in\bfD_\Y(\A)$. Then one defines $LM=Y$ and the morphism $M\to Y$ induces a natural transformation $\eta\colon{\operatorname{Id}\nolimits}_{\bfD(\A)}\to L$ having the required properties. In view of Lemma \[le:formal\] it is sufficient to assume that $M$ is a complex concentrated in degree zero.
Suppose that $M$ admits an approximation sequence $$\e_M\colon\:\:0\to Y_M\to X_M\to M\to Y^M\to X^M\to 0$$ with $X_M,X^M\in\X$ and $Y_M,Y^M\in\Y$. Let $M'$ denote the image of $X_M\to M$ and $M''$ the image of $M\to Y^M$. Then $\e_M$ induces the following three exact sequences $$\begin{aligned}
\a_M\colon\:\:&0\to M'\to M\to M''\to 0,\\
\b_M\colon\:\:&0\to Y_M\to X_M\to M'\to 0,\\
\g_M\colon\:\:&0\to M''\to Y^M\to X^M\to 0.\end{aligned}$$ In $\bfD(\A)$ these three exact sequences give rise to the following commuting square $$\xymatrix{ X^M[-2]\ar[r]^{\g_M}\ar[d]^0&M''[-1]\ar[d]^{\a_M}\\
X_M\ar[r]^{\bar\b_M}&M' }$$ where $\bar\b_M$ is the second morphism in $\b_M$. Commutativity of the diagram is clear since ${\operatorname{Hom}\nolimits}_{\bfD(\A)}(U[-2],V)=0$ for any $U,V\in\A$. An application of the octahedral axiom shows that this square can be extended as follows to a diagram where each row and each column is an exact triangle. $$\xymatrix{
X^M[-2]\ar[r]\ar[d]^0&M''[-1]\ar[r]\ar[d]&Y^M[-1]\ar[r]\ar[d]^0&X^M[-1]\ar[d]^0\\
X_M\ar[r]\ar[d]&M'\ar[r]\ar[d]&Y_M[1]\ar[r]\ar[d]&X_M[1]\ar[d]\\
X_M\oplus X^M[-1]\ar[r]\ar[d]&M\ar[r]\ar[d]&Y_M[1]\oplus
Y^M\ar[r]\ar[d]&X_M[1]\oplus X^M\ar[d]\\
X^M[-1]\ar[r]&M''\ar[r]&Y^M\ar[r]&X^M }$$ The first and third columns are split exact triangles, and this explains the objects appearing in the third row. In particular, this yields the desired exact triangle $X\to M\to Y\to X[1]$ with $X\in\bfD_\X(\A)$ and $Y\in\bfD_\Y(\A)$.
The proof of the implication (2) $\Rightarrow$ (1) comes as a special case of a more general result on the existence of exact triangles with a specified long exact sequence of cohomology objects. We refer to work of Neeman [@Ne2007] for more details.
Next we formulate the functorial properties of the 5-term exact sequence constructed in Proposition \[pr:local\].
\[le:exact\] Let $\A$ be an abelian category and $(\X,\Y)$ an Ext-orthogonal pair for $\A$. Suppose there is an exact sequence $$\e_M\colon\:\:0\to Y_M\to X_M\to M\to Y^M\to X^M\to 0$$ in $\A$ with $X_M,X^M\in\X$ and $Y_M,Y^M\in\Y$.
1. The sequence $\e_M$ induces for all $X\in\X$ and $Y\in\Y$ bijections ${\operatorname{Hom}\nolimits}_\A(X,X_M)\to{\operatorname{Hom}\nolimits}_\A(X,M)$ and ${\operatorname{Hom}\nolimits}_\A(Y^M,Y)\to{\operatorname{Hom}\nolimits}_\A(M,Y)$.
2. Let $\e_N\colon\:\:0\to Y_N\to X_N\to N\to Y^N\to X^N\to 0$ be an exact sequence in $\A$ with $X_N,X^N\in\X$ and $Y_N,Y^N\in\Y$. Then each morphism $M\to N$ extends uniquely to a morphism $\e_M\to\e_N$ of exact sequences.
3. Any exact sequence $0\to Y'\to X'\to M\to Y''\to X''\to 0$ in $\A$ with $X',X''\in\X$ and $Y',Y''\in\Y$ is uniquely isomorphic to $\e_M$.
We prove part (1). Then parts (2) and (3) are immediate consequences.
Fix an object $X\in\X$. The map $\mu\colon{\operatorname{Hom}\nolimits}_\A(X,X_M)\to{\operatorname{Hom}\nolimits}_\A(X,M)$ is injective because ${\operatorname{Hom}\nolimits}_\A(X,Y_M)=0$. Any morphism $X\to M$ factors through the kernel $M'$ of $M\to Y^M$ since ${\operatorname{Hom}\nolimits}_\A(X,Y^M)=0$. The induced morphism $X\to M'$ factors through $X_M\to M'$ since ${\operatorname{Ext}\nolimits}_\A^1(X,Y_M)=0$. Thus $\mu$ is surjective. The argument for the other map ${\operatorname{Hom}\nolimits}_\A(Y^M,Y)\to{\operatorname{Hom}\nolimits}_\A(M,Y)$ is dual.
Ext-orthogonal pairs for Grothendieck categories {#se:groth .unnumbered}
------------------------------------------------
Now we give the proof of Theorem \[th:perpX\]. The basic idea is to establish a localization functor for $\bfD(\A)$ and to derive the exact approximation sequence in $\A$ by taking the cohomology of some appropriate exact triangle as in Proposition \[pr:local\].
Let $\X$ denote the smallest extension closed abelian subcategory of $\A$ that contains $X$ and is closed under coproducts. Then Proposition \[pr:thick-corr\] implies that $\bfD_\X(\A)$ is the smallest localizing subcategory of $\bfD(\A)$ containing $X$. Thus there exists a localization functor $L\colon\bfD(\A)\to\bfD(\A)$ with ${\operatorname{Ker}\nolimits}L=\bfD_\X(\A)$. This is a result that goes back to Bousfield’s work in algebraic topology [@B]. In the context of derived categories, we refer to [@AJS Theorem 5.7]. Now apply Proposition \[pr:local\] to get the 5-term exact sequence for each object $M$ in $\A$. The properties of this sequence follow from Lemma \[le:exact\].
We do not know an example of an Ext-orthogonal pair $(\X,\Y)$ for a hereditary Grothendieck category such that the pair $(\X,\Y)$ is not complete.
Ext-orthogonal pairs also arise naturally for non-hereditary abelian categories. Here we mention one such class of examples, but we do not know whether, or under which conditions, they are complete:
\[ex:groth\_loc\] Let $\A$ be any Grothendieck category and $\X$ a *localizing subcategory*. That is, $\X$ is a full subcategory closed under taking coproducts and such that for any exact sequence $0\to M'\to M\to M''\to 0$ in $\A$ we have $M\in\X$ if and only if $M',M''\in\X$. Set $\Y=\X^\perp$ and let $\Y_{\mathrm{inj}}$ denote the full subcategory of injective objects of $\A$ contained in $\Y$. Then $\X={^\perp\Y_{\mathrm{inj}}}$ and therefore $(\X,\Y)$ is an Ext-orthogonal pair for $\A$; see [@Ga III.4] for details.
Torsion and cotorsion pairs {#se:tor-cotor .unnumbered}
---------------------------
We also sketch an interpretation of an Ext-orthogonal pair in terms of torsion and cotorsion pairs. Here, a pair $(\U,\V)$ of full subcategories of $\A$ is called a *torsion pair* if $\U$ and $\V$ are maximal orthogonal to each other with respect to ${\operatorname{Hom}\nolimits}_\A(-,-)$, that is, each subcategory consists precisely of the objects orthogonal to the other. Analogously, a pair of full subcategories is a *cotorsion pair* if both categories are maximal orthogonal to each other with respect to $\coprod_{n>0}{\operatorname{Ext}\nolimits}^n_\A(-,-)$.
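As an illustration (a standard example, not needed in the sequel), consider $\A={\operatorname{Mod}\nolimits}\bbZ$. The torsion groups $\U$ and the torsion-free groups $\V$ form a torsion pair, since $${\operatorname{Hom}\nolimits}_\bbZ(U,V)=0\quad\text{for all }U\in\U,\ V\in\V,$$ and each class consists exactly of the groups orthogonal to the other. Similarly, all abelian groups together with the divisible groups form a cotorsion pair, because ${\operatorname{Ext}\nolimits}^1_\bbZ(X,Y)=0$ for all abelian groups $X$ if and only if $Y$ is divisible.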
Let $\A$ be an abelian category and $(\X,\Y)$ an Ext-orthogonal pair. The subcategory $\X$ generates a torsion pair $(\X_0,\Y_0)$ and a cotorsion pair $(\X_1,\Y_1)$ for $\A$, if one defines the corresponding full subcategories of $\A$ as follows: $$\begin{aligned}
\Y_0&=\{Y\in\A\mid{\operatorname{Hom}\nolimits}_\A(X,Y)=0\text{ for all }X\in\X\},\\
\X_0&=\{X\in\A\mid{\operatorname{Hom}\nolimits}_\A(X,Y)=0\text{ for all }Y\in\Y_0\},\\
\Y_1&=\{Y\in\A\mid{\operatorname{Ext}\nolimits}^n_\A(X,Y)=0\text{ for all }X\in\X,\,n>0\},\\
\X_1&=\{X\in\A\mid{\operatorname{Ext}\nolimits}^n_\A(X,Y)=0\text{ for all }Y\in\Y_1,\,n>0\}.\end{aligned}$$ Note that $\X=\X_0\cap\X_1$ and $\Y=\Y_0\cap\Y_1$. In particular, one recovers the pair $(\X,\Y)$ from $(\X_0,\Y_0)$ and $(\X_1,\Y_1)$.
Suppose an object $M\in\A$ admits an approximation sequence $$\e_M\colon\:\:0\to Y_M\to X_M\to M\to Y^M\to X^M\to 0$$ with $X_M,X^M\in\X$ and $Y_M,Y^M\in\Y$. We give the following interpretation of this sequence. Let $M'$ denote the image of $X_M\to
M$ and $M''$ the image of $M\to Y^M$. Then there are three short exact sequences: $$\begin{aligned}
\a_M\colon\:\:&0\to M'\to M\to M''\to 0,\\
\b_M\colon\:\:&0\to Y_M\to X_M\to M'\to 0,\\
\g_M\colon\:\:&0\to M''\to Y^M\to X^M\to 0.\end{aligned}$$ The sequence $\a_M$ is the approximation sequence of $M$ with respect to the torsion pair $(\X_0,\Y_0)$, that is, $M'\in\X_0$ and $M''\in\Y_0$. On the other hand, $\b_M$ and $\g_M$ are approximation sequences of $M'$ and $M''$ respectively, with respect to the cotorsion pair $(\X_1,\Y_1)$, that is, $X_M,X^M\in\X_1$ and $Y_M,Y^M\in\Y_1$. Thus the 5-term exact sequence $\e_M$ is obtained by splicing together three short exact approximation sequences.
Suppose finally that the Ext-orthogonal pair $(\X,\Y)$ is complete. It is not hard to see that then the associated torsion pair $(\X_0,\Y_0)$ has an explicit description: we have $\X_0={\operatorname{Fac}\nolimits}\X$ and $\Y_0={\operatorname{Sub}\nolimits}\Y$, where $${\operatorname{Fac}\nolimits}\X=\{X/U\mid U\subseteq X,\,X\in\X\}\quad\text{and}\quad
{\operatorname{Sub}\nolimits}\Y=\{U\mid U\subseteq Y,\,Y\in\Y\}.$$
Homological epimorphisms
========================
From now on we will study Ext-orthogonal pairs only for module categories. Thus we fix a ring $A$ and denote by ${\operatorname{Mod}\nolimits}A$ the category of (right) $A$-modules. The full subcategory formed by all finitely presented $A$-modules is denoted by ${\operatorname{mod}\nolimits}A$.
Most of our results require the ring $A$ to be (right) *hereditary*. This means the category of $A$-modules is hereditary, that is, ${\operatorname{Ext}\nolimits}_A^n(-,-)$ vanishes for all $n>1$.
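For example, $A=\bbZ$ (or any principal ideal domain) is hereditary: every submodule of a free module is again free, so every module $M$ admits a projective resolution $$0\to P_1\to P_0\to M\to 0$$ of length at most one, and hence ${\operatorname{Ext}\nolimits}_A^n(-,-)$ vanishes for $n>1$. Path algebras of finite quivers without oriented cycles over a field provide another standard class of hereditary rings.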
We are going to show that Ext-orthogonal pairs for module categories over hereditary rings are closely related to homological epimorphisms. Recall that a ring homomorphism $A\to B$ is a *homological epimorphism* if $$B\otimes_AB\cong B\quad\text{and}\quad{\operatorname{Tor}\nolimits}_n^A(B,B)=0\quad\text{for
all}\quad n>0,$$ or equivalently, if restriction induces isomorphisms $${\operatorname{Ext}\nolimits}_B^*(X,Y){\xrightarrow}{\sim}{\operatorname{Ext}\nolimits}_A^*(X,Y)$$ for all $B$-modules $X,Y$; see [@GL] for details. The first observation is that every homological epimorphism naturally induces two complete Ext-orthogonal pairs:
\[pr:hom-epi1\] Let $A$ be a hereditary ring and $f\colon A\to B$ a homological epimorphism. Denote by $\Y$ the category of $A$-modules which are restrictions of modules over $B$. Set $\X={^\perp\Y}$ and $\Z=\Y^\perp$. Then $(\X,\Y)$ and $(\Y,\Z)$ are complete Ext-orthogonal pairs for ${\operatorname{Mod}\nolimits}A$ with $\Y=({\operatorname{Ker}\nolimits}f\oplus{\operatorname{Coker}\nolimits}f)^\perp$ and $\Z=B^\perp$.
We wish to apply Theorem \[th:perpX\] which provides a construction for complete Ext-orthogonal pairs.
First observe that $\Y$ is the smallest extension closed abelian subcategory of ${\operatorname{Mod}\nolimits}A$ closed under coproducts and containing $B$. This yields $\Z=B^\perp$.
Next we show that $\Y=({\operatorname{Ker}\nolimits}f\oplus{\operatorname{Coker}\nolimits}f)^\perp$. In fact, an $A$-module $Y$ is the restriction of a $B$-module if and only if $f$ induces an isomorphism ${\operatorname{Hom}\nolimits}_A(B,Y)\to{\operatorname{Hom}\nolimits}_A(A,Y)$. Using the assumptions on $A$ and $f$, a simple calculation shows that this implies $\Y=({\operatorname{Ker}\nolimits}f\oplus{\operatorname{Coker}\nolimits}f)^\perp$.
It remains to apply Theorem \[th:perpX\]. Thus $(\X,\Y)$ and $(\Y,\Z)$ are complete Ext-orthogonal pairs.
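For example, for $A=\bbZ$ the inclusion $f\colon\bbZ\to\bbQ$ is a homological epimorphism, since multiplication yields $\bbQ\otimes_\bbZ\bbQ\cong\bbQ$ and $${\operatorname{Tor}\nolimits}_n^\bbZ(\bbQ,\bbQ)=0\quad\text{for all}\quad n>0$$ because $\bbQ$ is flat over $\bbZ$; the same argument applies to any flat ring epimorphism. In this case $\Y$ is the category of torsion-free divisible groups, $\X={^\perp\Y}$ is the category of torsion groups, and $\Y=(\bbQ/\bbZ)^\perp$ in accordance with ${\operatorname{Ker}\nolimits}f=0$ and ${\operatorname{Coker}\nolimits}f=\bbQ/\bbZ$.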
Now we use a crucial theorem of Gabriel and de la Peña. It characterizes, purely in terms of their closure properties, the full subcategories of a module category ${\operatorname{Mod}\nolimits}A$ that arise as the essential images of the restriction functors ${\operatorname{Mod}\nolimits}B\to{\operatorname{Mod}\nolimits}A$ for ring epimorphisms $A\to B$. In our version, we characterize in a similar way the essential images of the restriction functors of homological epimorphisms, provided $A$ is hereditary.
\[pr:hom-epi2\] Let $A$ be a hereditary ring and $\Y$ an extension closed abelian subcategory of ${\operatorname{Mod}\nolimits}A$ that is closed under taking products and coproducts. Then there exists a homological epimorphism $f\colon A\to
B$ such that the restriction functor ${\operatorname{Mod}\nolimits}B\to{\operatorname{Mod}\nolimits}A$ induces an equivalence ${\operatorname{Mod}\nolimits}B{\xrightarrow}{\sim}\Y$.
It follows from [@GP Theorem 1.2] that there exists a ring epimorphism $f\colon A\to B$ such that the restriction functor ${\operatorname{Mod}\nolimits}B\to{\operatorname{Mod}\nolimits}A$ induces an equivalence ${\operatorname{Mod}\nolimits}B{\xrightarrow}{\sim}\Y$. To be more specific, one constructs a left adjoint $F\colon {\operatorname{Mod}\nolimits}A\to \Y$ for the inclusion $\Y\to {\operatorname{Mod}\nolimits}A$. Then $FA$ is a small projective generator for $\Y$, because $A$ has this property for ${\operatorname{Mod}\nolimits}A$ and the inclusion of $\Y$ is an exact functor that preserves coproducts. Thus one takes for $f$ the induced map $A\cong{\operatorname{End}\nolimits}_A(A)\to{\operatorname{End}\nolimits}_A(FA)$.
We claim that restriction via $f$ induces an isomorphism $${\operatorname{Ext}\nolimits}_B^n(X,Y){\xrightarrow}{\sim}{\operatorname{Ext}\nolimits}_A^n(X,Y)$$ for all $B$-modules $X,Y$ and all $n\geq 0$. This is clear for $n=0,1$ since $\Y$ is extension closed. On the other hand, the isomorphism for $n=1$ implies that ${\operatorname{Ext}\nolimits}_B^1(X,-)$ is right exact since $A$ is hereditary. It follows that $B$ is hereditary and ${\operatorname{Ext}\nolimits}_B^n(-,-)$ vanishes for all $n>1$.
We get as an immediate consequence that any class $\mathcal{Y}$ satisfying the assumptions of Proposition \[pr:hom-epi2\] belongs to two complete Ext-orthogonal pairs. In order to obtain more information about the corresponding 5-term approximation sequences, we prefer, however, to postpone this corollary until after the following lemma:
\[le:hom-epi\] Let $A\to B$ be a homological epimorphism and denote by $\Y$ the category of $A$-modules which are restrictions of modules over $B$.
1. The functor $\bfD({\operatorname{Mod}\nolimits}A)\to\bfD({\operatorname{Mod}\nolimits}A)$ sending a complex $X$ to $X\otimes_A^\bfL B$ is a localization functor with essential image equal to $\bfD_\Y({\operatorname{Mod}\nolimits}A)$.
2. The functor $\bfD({\operatorname{Mod}\nolimits}A)\to\bfD({\operatorname{Mod}\nolimits}A)$ sending a complex $X$ to the cone (which is in this case functorial) of the natural morphism ${\operatorname{\mathbf{R}Hom}\nolimits}_A(B,X)\to X$ is a localization functor with kernel equal to $\bfD_\Y({\operatorname{Mod}\nolimits}A)$.
Restriction along $f\colon A\to B$ identifies ${\operatorname{Mod}\nolimits}B$ with $\Y$. The functor induces an isomorphism $${\operatorname{Ext}\nolimits}_B^n(X,Y){\xrightarrow}{\sim}{\operatorname{Ext}\nolimits}_A^n(X,Y)$$ for all $B$-modules $X,Y$ and all $n\geq 0$, because $f$ is a homological epimorphism. This isomorphism implies that the induced functor $f_*\colon\bfD({\operatorname{Mod}\nolimits}B)\to\bfD({\operatorname{Mod}\nolimits}A)$ is fully faithful with essential image $\bfD_\Y({\operatorname{Mod}\nolimits}A)$. Moreover, $f_*$ is naturally isomorphic to both ${\operatorname{\mathbf{R}Hom}\nolimits}_B({_A}B,-)$ and $-\otimes^\bfL_BB_A$. It follows that:
\(1) The functor $f_*$ admits a left adjoint $f^*=-\otimes^\bfL_AB$ and we therefore have a localization functor $L\colon \bfD({\operatorname{Mod}\nolimits}A)\to\bfD({\operatorname{Mod}\nolimits}A)$ sending a complex $X$ to $f_*f^*(X)$; see [@BIK Lemma 3.1]. It remains to note that the essential images of $L$ and $f_*$ coincide.
\(2) The functor $f_*$ admits a right adjoint $f^!={\operatorname{\mathbf{R}Hom}\nolimits}_A(B,-)$ and we therefore have a colocalization functor $\Ga\colon \bfD({\operatorname{Mod}\nolimits}A)\to\bfD({\operatorname{Mod}\nolimits}A)$ sending a complex $X$ to $f_*f^!(X)$. Note that the adjunction morphism $\Ga X\to X$ is an isomorphism if and only if $X$ belongs to $\bfD_\Y({\operatorname{Mod}\nolimits}A)$. Completing $\Ga X\to X$ to a triangle yields a well-defined localization functor $\bfD({\operatorname{Mod}\nolimits}A)\to\bfD({\operatorname{Mod}\nolimits}A)$ with kernel $\bfD_\Y({\operatorname{Mod}\nolimits}A)$; see [@BIK Lemma 3.3].
Now we state the above mentioned immediate consequence of Propositions \[pr:hom-epi1\] and \[pr:hom-epi2\], but with an alternative and more explicit proof.
\[co:perp\] Let $A$ be a hereditary ring and $\Y$ an extension closed abelian subcategory of ${\operatorname{Mod}\nolimits}A$ that is closed under taking products and coproducts. Set $\X={^\perp} \Y$ and $\Z=\Y^\perp$. Then $(\X,\Y)$ and $(\Y,\Z)$ are both complete Ext-orthogonal pairs.
There exists a homological epimorphism $f\colon A\to B$ such that restriction identifies ${\operatorname{Mod}\nolimits}B$ with $\Y$; see Proposition \[pr:hom-epi2\]. Then Lemma \[le:hom-epi\] produces two localization functors $L_1,L_2\colon \bfD({\operatorname{Mod}\nolimits}A)\to\bfD({\operatorname{Mod}\nolimits}A)$ with ${\operatorname{Im}\nolimits}L_1=\bfD_\Y({\operatorname{Mod}\nolimits}A)={\operatorname{Ker}\nolimits}L_2$. Thus $${\operatorname{Ker}\nolimits}L_1={^\perp}({\operatorname{Im}\nolimits}L_1)=\bfD_\X({\operatorname{Mod}\nolimits}A)\quad \text{and}\quad {\operatorname{Im}\nolimits}L_2=({\operatorname{Ker}\nolimits}L_2)^\perp=\bfD_\Z({\operatorname{Mod}\nolimits}A),$$ where in both cases the first equality follows from [@BIK Lemma 3.3] and the second from Proposition \[pr:thick\]. It remains to apply Proposition \[pr:local\] which yields in both cases for each $A$-module the desired 5-term exact sequence.
\[re:5seq\] The proof of Lemma \[le:hom-epi\] and Corollary \[co:perp\] yields for any $A$-module $M$ an explicit description of some terms of the 5-term exact sequence $\e_M$, using the homological epimorphism $A\to B$. In the first case, we have $$\e_M\colon\:\: 0\to {\operatorname{Tor}\nolimits}_1^A(M,B)\to X_M\to M\to M\otimes_AB\to X^M\to 0,$$ and in the second case, we have $$\e_M\colon\:\: 0\to Z_M\to {\operatorname{Hom}\nolimits}_A(B,M)\to M\to Z^M\to{\operatorname{Ext}\nolimits}_A^1(B,M)\to 0.$$
We also mention another consequence of the above discussion, which is immediately implied by Corollary \[co:perp\]. It reflects the fact that given a homological epimorphism $A\to B$ and the fully faithful functor $f_*\colon \bfD({\operatorname{Mod}\nolimits}B) \to \bfD({\operatorname{Mod}\nolimits}A)$ having both a left and a right adjoint, there exists a corresponding recollement of the derived category $\bfD({\operatorname{Mod}\nolimits}A)$; see [@Kr2008 §4.13].
Let $A$ be a hereditary ring and $(\X,\Y)$ an Ext-orthogonal pair for the category of $A$-modules.
1. There is an Ext-orthogonal pair $(\W,\X)$ if and only if $\X$ is closed under products.
2. There is an Ext-orthogonal pair $(\Y,\Z)$ if and only if $\Y$ is closed under coproducts.
Examples {#se:exm}
========
We present a number of examples of Ext-orthogonal pairs which illustrate the results of this work. The first example is classical and provides one of the motivations for studying perpendicular categories in representation theory of finite dimensional algebras. We refer to Schofield’s work [@Sch1986; @Sch1991] which contains some explicit calculations; see also [@GL; @H].
Let $A$ be a finite dimensional hereditary algebra over a field $k$ and $X$ a finite dimensional $A$-module. Then $\Y=X^\perp$ identifies, via a homological epimorphism $A\to B$, with the category of modules over a $k$-algebra $B$, and this yields a complete Ext-orthogonal pair $(\X,\Y)$. If $X$ is *exceptional*, that is, ${\operatorname{Ext}\nolimits}^1_A(X,X)=0$, then $B$ is finite dimensional (see the proposition below) and can often be constructed explicitly. We refer to [@Sch1986] for particular examples. Note that in this case, for each finite dimensional $A$-module $M$, the corresponding 5-term exact sequence $\e_M$ consists of finite dimensional modules. Moreover, the category $\X$ is equivalent to the module category of another finite dimensional algebra. We do not know of a criterion on $X$ that characterizes when $B$ is finite dimensional; see, however, the following proposition.
\[pr:except\] Let $A$ be a finite dimensional hereditary algebra over a field $k$ and $(\X,\Y)$ a complete Ext-orthogonal pair such that $\Y$ is closed under coproducts. Fix a homological epimorphism $A\to B$ inducing an equivalence ${\operatorname{Mod}\nolimits}B{\xrightarrow}{\sim}\Y$. Then the following are equivalent.
1. There exists an exceptional module $X\in{\operatorname{mod}\nolimits}A$ such that $\Y=X^\perp$.
2. The algebra $B$ is finite dimensional over $k$.
3. For each $M\in{\operatorname{mod}\nolimits}A$, the 5-term exact sequence $\e_M$ belongs to ${\operatorname{mod}\nolimits}A$.
\(1) $\Rightarrow$ (2): This follows, for example, from [@GL Proposition 3.2].
\(2) $\Rightarrow$ (3): This follows from Remark \[re:5seq\].
\(3) $\Rightarrow$ (1): Let $\X_{\mathrm{fp}}=\X\cap{\operatorname{mod}\nolimits}A$ and $\Y_{\mathrm{fp}}=\Y\cap{\operatorname{mod}\nolimits}A$. The assumption on $(\X,\Y)$ implies that $(\X_{\mathrm{fp}},\Y_{\mathrm{fp}})$ is a complete Ext-orthogonal pair for ${\operatorname{mod}\nolimits}A$. Moreover, every object in $\X$ is a filtered colimit of objects in $\X_{\mathrm{fp}}$. To see this, we first express $X$ as a filtered colimit $\varinjlim M_i$ of finitely presented modules. Then, using the forthcoming Lemma \[le:seq\](2), we see that $\varepsilon_X = \varinjlim \varepsilon_{M_i}$, from which it easily follows that $X \cong \varinjlim X_{M_i}$. Now choose an injective cogenerator $Q$ in ${\operatorname{mod}\nolimits}A$ and let $X=X_Q$ be the module from the 5-term exact sequence $\e_Q$. This module is the image of $Q$ under a right adjoint of the inclusion $\X_{\mathrm{fp}}\to{\operatorname{mod}\nolimits}A$. Note that a right adjoint of an exact functor preserves injectivity. It follows that $X$ is an exceptional object and that $\X_{\mathrm{fp}}$ is the smallest extension closed abelian subcategory of ${\operatorname{mod}\nolimits}A$ containing $X$. Thus $X^\perp=\X_{\mathrm{fp}}^\perp=\X^\perp=\Y$, using the fact that $\X = \varinjlim \X_{\mathrm{fp}}$.
As a special case, any finitely generated projective module generates an Ext-orthogonal pair that can be described explicitly; see [@GL §5]. For cyclic projective modules, this is discussed in more generality in the following example.
Let $A$ be a hereditary ring and $e^2=e\in A$ an idempotent. Let $\X$ denote the category of $A$-modules $M$ such that the natural map $Me\otimes_{eAe}eA\to M$ is an isomorphism, and let $\Y=eA^\perp=\{M\in{\operatorname{Mod}\nolimits}A\mid Me=0\}$. Thus $-\otimes_{eAe} eA$ identifies ${\operatorname{Mod}\nolimits}eAe$ with $\X$ and restriction via $A\to A/AeA$ identifies ${\operatorname{Mod}\nolimits}A/AeA$ with $\Y$. Then $(\X,\Y)$ is a complete Ext-orthogonal pair for ${\operatorname{Mod}\nolimits}A$, and for each $A$-module $M$ the 5-term exact sequence $\e_M$ is of the form $$0\to {\operatorname{Tor}\nolimits}_1^A(M,A/AeA)\to Me\otimes_{eAe}eA\to M\to
M\otimes_A A/AeA\to 0 \to 0.$$
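As a concrete instance, take the hereditary algebra $A={\left[\begin{smallmatrix}k&k\\ 0&k\end{smallmatrix}\right]}$ of upper triangular $2\times 2$ matrices over a field $k$ and the idempotent $e={\left[\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\right]}$. A direct computation gives $$eAe\cong k\quad\text{and}\quad A/AeA\cong k,$$ so in this case both $\X$ and $\Y$ are equivalent to the category of $k$-vector spaces: $\X$ consists of the modules of the form $V\otimes_k eA$ for a $k$-vector space $V$, while $\Y$ consists of the modules annihilated by $e$.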
The next example[^1] arises from the work of Reiten and Ringel on infinite dimensional representations of canonical algebras; see [@RR] which is our reference for all concepts and results in the following discussion. Note that these algebras are not necessarily hereditary. The example shows the interplay between Ext-orthogonal pairs and (co)torsion pairs.
Let $A$ be a finite dimensional canonical algebra over a field $k$. Take for example a tame hereditary algebra, or, more specifically, the Kronecker algebra ${\left[\begin{smallmatrix}k&k^2\\ 0&k\end{smallmatrix}\right]}$. For such algebras, there is the concept of a *separating tubular family*. We fix such a family and denote by $\T$ the category of finite dimensional modules belonging to this family. There is also a particular *generic module* over $A$ which depends in some cases on the choice of the tubular family; it is denoted by $G$. Then the full subcategory $\X=\li\T$ consisting of all filtered colimits of modules in $\T$ and the full subcategory $\Y={\operatorname{Add}\nolimits}G$ consisting of all coproducts of copies of $G$ form an Ext-orthogonal pair $(\X,\Y)$ for ${\operatorname{Mod}\nolimits}A$. Note that the endomorphism ring $D={\operatorname{End}\nolimits}_A(G)$ of $G$ is a division ring and that the canonical map $A\to B$ with $B={\operatorname{End}\nolimits}_D(G)$ is a homological epimorphism which induces an equivalence ${\operatorname{Mod}\nolimits}B{\xrightarrow}{\sim}\Y$. In the particular case of the Kronecker algebra $A = {\left[\begin{smallmatrix}k&k^2\\ 0&k\end{smallmatrix}\right]}$, a direct computation shows that $B = M_2\big(k(x)\big)$.
The category of $A$-modules which are generated by $\T$ and the category of $A$-modules which are cogenerated by $G$ form a torsion pair $({\operatorname{Fac}\nolimits}\X,{\operatorname{Sub}\nolimits}\Y)$ for ${\operatorname{Mod}\nolimits}A$ which equals the torsion pair $(\X_0,\Y_0)$ generated by $\X$. On the other hand, let $\C$ denote the category of $A$-modules which are cogenerated by $\X$, and let $\D$ denote the category of $A$-modules $M$ satisfying ${\operatorname{Hom}\nolimits}_A(M,\T)=0$. Then the pair $(\C,\D)$ forms a cotorsion pair for ${\operatorname{Mod}\nolimits}A$ which identifies with the cotorsion pair $(\X_1,\Y_1)$ generated by $\X$.
If $A$ is hereditary, then the Ext-orthogonal pair $(\X,\Y)$ is complete by Corollary \[co:perp\]; see also Remark \[re:5seq\] for an explicit description of the 5-term approximation sequence $\e_M$ for each $A$-module $M$. Alternatively, one obtains the sequence $\e_M$ by splicing together appropriate approximation sequences which arise from $(\X_0,\Y_0)$ and $(\X_1,\Y_1)$.
The following example of an Ext-orthogonal pair arises from a localizing subcategory; it is a specialization of Example \[ex:groth\_loc\] and provides a simple (and not necessarily hereditary) model for the previous example.
\[ex:domains\] Let $A$ be an integral domain with quotient field $Q$. Let $\X$ denote the category of torsion modules and $\Y$ the category of torsion free divisible modules. Note that the modules in $\Y$ are precisely the coproducts of copies of $Q$. Then $(\X,\Y)$ is a complete Ext-orthogonal pair for ${\operatorname{Mod}\nolimits}A$, and for each $A$-module $M$ the 5-term exact sequence $\e_M$ is of the form $$0 \to 0\to tM \to M\to
M\otimes_A Q\to \bar M\to 0.$$
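For instance, for $A=\bbZ$ and $M=\bbZ$ this sequence specializes to $$0\to 0\to 0\to\bbZ\to\bbQ\to\bbQ/\bbZ\to 0,$$ so $\bar M=\bbQ/\bbZ$, while for a torsion group such as $M=\bbZ/n\bbZ$ it degenerates to the identity map $\bbZ/n\bbZ\to\bbZ/n\bbZ$ in the middle with all remaining terms zero.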
We conclude the section by showing that there are examples of abelian categories that admit only trivial Ext-orthogonal pairs.
Let $A$ be a local artinian ring and set $\A={\operatorname{Mod}\nolimits}A$. Then ${\operatorname{Hom}\nolimits}_A(X,Y)\neq 0$ for any pair $X,Y$ of non-zero $A$-modules. This is because the unique (up to isomorphism) simple module $S$ is a submodule of $Y$ and a factor of $X$. Thus if $(\X,\Y)$ is an Ext-orthogonal pair for $\A$, then $\X=\A$ or $\Y=\A$.
Ext-orthogonal pairs of finite type {#se:fin}
===================================
At this point, we use the results from §3 to characterize for hereditary rings the Ext-orthogonal pairs of *finite type*. Those are, by definition, the Ext-orthogonal pairs generated by a set of finitely presented modules.
\[th:ext\] Let $A$ be a hereditary ring and $(\X,\Y)$ an Ext-orthogonal pair for the module category of $A$. Then the following are equivalent.
1. The subcategory $\Y$ is closed under taking coproducts.
2. Every module in $\X$ is a filtered colimit of finitely presented modules from $\X$.
3. There exists a category $\C$ of finitely presented modules such that $\C^\perp=\Y$.
We need some preparations for the proof of this result. The first lemma is a slight modification of [@BET Proposition 2.1].
\[le:ext1perp\] Let $A$ be a ring and $\Y$ a full subcategory of its module category. Denote by $\X$ the category of $A$-modules $X$ of projective dimension at most $1$ satisfying ${\operatorname{Ext}\nolimits}^1_A(X,Y)=0$ for all $Y\in\Y$. Then any module in $\X$ is a filtered colimit of finitely presented modules from $\X$.
Let $X\in\X$. Choose an exact sequence $0\to P{\xrightarrow}{\p} Q\to X\to 0$ such that $P$ is free and $Q$ is projective. Note that ${\operatorname{Ext}\nolimits}^1_A(X,Y)=0$ implies that every morphism $P\to
Y$ factors through $\p$. The commuting diagrams of $A$-module morphisms $$\xymatrix{0\ar[r]&P_i\ar[r]^{\p_i}\ar[d]&Q_i\ar[d]\ar[r]&X_i\ar[d]\ar[r]&0\\
0\ar[r]&P\ar[r]^\p&Q\ar[r]&X\ar[r]&0 }$$ with $P_i$ and $Q_i$ finitely generated projective form a filtered system of exact sequences such that $\li\p_i=\p$. Note that $P$ is a filtered colimit of its finitely generated direct summands since $P$ is free. Thus there is a cofinal subsystem such that each morphism $P_i\to P$ is a split monomorphism. Therefore we may without loss of generality assume that each morphism $P_i\to P$ is a split monomorphism.
Clearly $\li X_i=X$, and it remains to prove that ${\operatorname{Ext}\nolimits}_A^1(X_i,\Y)=0$ for all $i$. This is equivalent to showing that each morphism $\mu\colon P_i\to Y$ with $Y\in\Y$ factors through $\p_i$. For this, we first factor each such $\mu$ through the split monomorphism $P_i
\to P$, then through $\p$, and finally compose the morphism $Q \to Y$ which we have obtained with the morphism $Q_i \to Q$. The result is a morphism $\nu\colon Q_i \to Y$ such that $\nu \p_i = \mu$, as desired.
The second lemma establishes some necessary properties of the 5-term sequences.
\[le:seq\] Let $A$ be a hereditary ring and $(\X,\Y)$ a complete Ext-orthogonal pair for ${\operatorname{Mod}\nolimits}A$. Let $M$ be an $A$-module and $\e_M$ the corresponding 5-term exact sequence.
1. If ${\operatorname{Ext}\nolimits}_A^1(M,\Y)=0$, then $Y_M=0$.
2. Suppose that $\Y$ is closed under coproducts and let $M=\li M_i$ be a filtered colimit of $A$-modules $M_i$. Then $\e_M=\li\e_{M_i}$.
We use the uniqueness of the 5-term exact sequences guaranteed by Lemma \[le:exact\]. If ${\operatorname{Ext}\nolimits}_A^1(M,\Y)=0$, then the image of the morphism $X_M\to M$ belongs to $\X$. Thus $X_M\to M$ is a monomorphism since $\e_M$ is unique, and this yields (1).
To prove (2), one uses that $\X$ and $\Y$ are closed under taking colimits and that taking filtered colimits is exact. Thus $\li\e_{M_i}$ is an exact sequence with middle term $M$ and all other terms in $\X$ or $\Y$. Now the uniqueness of $\e_M$ implies that $\e_M=\li\e_{M_i}$.
Finally, the following lemma is needed for hereditary rings which are not noetherian.
\[le:subfp\] Let $M$ be a finitely presented module over a hereditary ring and $N\subseteq M$ any submodule. Then $N$ is a direct sum of finitely presented modules.
We combine two results. Over a hereditary ring, any submodule of a finitely presented module is a direct sum of a finitely presented module and a projective module; see [@C Theorem 5.1.6]. In addition, one uses that any projective module is a direct sum of finitely generated projective modules; see [@Al].
\(1) $\Rightarrow$ (2): Suppose that $\Y$ is closed under taking coproducts. We apply Corollary \[co:perp\] to obtain for each module $M$ the natural exact sequence $\e_M$; note that completeness of $(\X,\Y)$ has not been assumed a priori. Now suppose that $M$ belongs to $\X$. Then one can write $M=\li M_i$ as a filtered colimit of finitely presented modules with ${\operatorname{Ext}\nolimits}_A^1(M_i,\Y)=0$ for all $i$; see Lemma \[le:ext1perp\]. Next we apply Lemma \[le:seq\]. Thus $$\li X_{M_i}{\xrightarrow}{\sim}X_M{\xrightarrow}{\sim}M,$$ and each $X_{M_i}$ is a submodule of the finitely presented module $M_i$. Finally, each $X_{M_i}$ is a filtered colimit of finitely presented direct summands by Lemma \[le:subfp\]. Thus $M$ is a filtered colimit of finitely presented modules from $\X$.
\(2) $\Rightarrow$ (3): Let $\X_{\mathrm{fp}}$ denote the full subcategory that is formed by all finitely presented modules in $\X$. Observe that $^\perp Y$ is closed under taking colimits for each module $Y$, because $^\perp Y$ is closed under taking coproducts and cokernels. Thus $\X_{\mathrm{fp}}^\perp=\X^\perp=\Y$ provided that $\X=\li\X_{\mathrm{fp}}$.
\(3) $\Rightarrow$ (1): Use that for each finitely presented $A$-module $X$, the functor ${\operatorname{Ext}\nolimits}_A^*(X,-)$ preserves all coproducts.
Note that Theorem \[th:ext\] gives rise to a bijection between extension closed abelian subcategories of finitely presented modules and Ext-orthogonal pairs of finite type. We state this explicitly in §8, but in fact it is proved here via the following proposition.
\[pr:ext\] Let $A$ be a hereditary ring and $\C$ a category of finitely presented $A$-modules. Then $^\perp(\C^\perp)\cap{\operatorname{mod}\nolimits}A$ equals the smallest extension closed abelian subcategory of ${\operatorname{mod}\nolimits}A$ containing $\C$.
Let $\D$ denote the smallest extension closed abelian subcategory of ${\operatorname{mod}\nolimits}A$ containing $\C$. We claim that the category $\li\D$ which is formed by all filtered colimits of modules in $\D$ is an extension closed abelian subcategory of ${\operatorname{Mod}\nolimits}A$.
Assume for the moment that the claim holds. Then Theorem \[th:perpX\] implies that $\X={^\perp(\C^\perp)}$ equals the smallest extension closed abelian subcategory of ${\operatorname{Mod}\nolimits}A$ closed under coproducts and containing $\C$. Our claim then implies $\X=\li\D$, so $\X\cap{\operatorname{mod}\nolimits}A=\D$ and we are finished.
Therefore, it only remains to prove the claim. First observe that every morphism in $\li\D$ can be written as a filtered colimit of morphisms in $\D$. Using that taking filtered colimits is exact, it follows immediately that $\li\D$ is closed under kernels and cokernels in ${\operatorname{Mod}\nolimits}A$.
It remains to show that $\li\D$ is closed under extensions. To this end let $\eta\colon 0\to L\to M\to N\to 0$ be an exact sequence with $L$ and $N$ in $\li\D$. We can without loss of generality assume that $N$ belongs to $\D$, because otherwise the sequence $\eta$ is a filtered colimit of the pull-back exact sequences with the last terms in $\D$. Next we choose a morphism $\p\colon M'\to M$ with $M'$ finitely presented. All we need to do now is to show that $\p$ factors through an object in $\D$; see [@L]. We may, moreover, assume that the composite of $\p$ with $M\to N$ is an epimorphism. This is because otherwise we can take an epimorphism $P \to N$ with $P$ finitely generated projective, factor it through $M \to N$, and replace $\p$ by $\p'\colon M' \oplus P \to M$. Finally, denote by $L'$ the kernel of $\p$, which is necessarily a finitely presented module. The induced map $L'\to L$ then factors through an object $L''$ in $\D$ since $L$ belongs to $\li\D$. Forming the push-out exact sequence of $0\to L'\to M'\to N\to 0$ along the morphism $L'\to L''$ gives an exact sequence $0\to L''\to M''\to
N\to 0$. Now $\p$ factors through $M''$ which belongs to $\D$.
Universal localizations {#se:uni-loc}
=======================
A ring homomorphism $A\to B$ is called a *universal localization* if there exists a set $\Si$ of morphisms between finitely generated projective $A$-modules such that
1. $\s\otimes_AB$ is an isomorphism of $B$-modules for all $\s\in\Si$, and
2. every ring homomorphism $A\to B'$ such that $\s\otimes_AB'$ is an isomorphism of $B'$-modules for all $\s\in\Si$ factors uniquely through $A\to B$.
Let $A$ be a ring and $\Si$ a set of morphisms between finitely generated projective $A$-modules. Then there exists a universal localization inverting $\Si$ and this is unique up to a unique isomorphism; see [@S] for details. The universal localization is denoted by $A\to A_\Si$ and restriction identifies ${\operatorname{Mod}\nolimits}A_\Si$ with the full subcategory consisting of all $A$-modules $M$ such that ${\operatorname{Hom}\nolimits}_A(\s,M)$ is an isomorphism for all $\s\in\Si$. Note that ${\operatorname{Hom}\nolimits}_A(\s,M)$ is an isomorphism if and only if $M$ belongs to $\{{\operatorname{Ker}\nolimits}\s,{\operatorname{Coker}\nolimits}\s\}^\perp$, provided that $A$ is hereditary. The main result of this section is then the following theorem.
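As a standard illustration (our addition, not from the original text): take $A=\mathbb{Z}$ and let $\Sigma$ consist of a single morphism, multiplication by a prime $p$ on the free module of rank one. Then the universal localization is the familiar Ore localization:

```latex
% Our illustration: the universal localization of Z inverting one map.
\sigma \colon \mathbb{Z} \xrightarrow{\;\cdot p\;} \mathbb{Z},
\qquad
\mathbb{Z}_{\Sigma} \;\cong\; \mathbb{Z}\bigl[1/p\bigr].
% A Z-module M satisfies "Hom(sigma, M) is an isomorphism" exactly when
% p acts invertibly on M; since Ker(sigma) = 0 and Coker(sigma) = Z/p,
% this identifies Mod Z[1/p] with {Z/p}^\perp inside Mod Z,
% as the hereditary criterion above predicts.
```

Indeed, $\operatorname{Hom}(\mathbb{Z}/p,M)=0$ rules out $p$-torsion, and $\operatorname{Ext}^1(\mathbb{Z}/p,M)\cong M/pM=0$ forces $p$-divisibility, so together they say that $p$ acts invertibly on $M$.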
\[th:epi\] Let $A$ be a hereditary ring. A ring homomorphism $f\colon
A\to B$ is a homological epimorphism if and only if $f$ is a universal localization.
Suppose first that $f\colon A\to B$ is a homological epimorphism. This gives rise to an Ext-orthogonal pair $(\X,\Y)$ for ${\operatorname{Mod}\nolimits}A$, if we identify ${\operatorname{Mod}\nolimits}B$ with a full subcategory $\Y$ of ${\operatorname{Mod}\nolimits}A$; see Proposition \[pr:hom-epi1\]. Let $\X_{\mathrm{fp}}$ denote the full subcategory that is formed by all finitely presented modules in $\X$. It follows from Theorem \[th:ext\] that $\X_{\mathrm{fp}}^\perp=\Y$. Now fix for each $X\in\X_{\mathrm{fp}}$ an exact sequence $$0\to P_X{\xrightarrow}{\s_X}Q_X\to X\to 0$$ such that $P_X$ and $Q_X$ are finitely generated projective, and let $\Si=\{\s_X\mid X\in\X_{\mathrm{fp}}\}$. Then $${\operatorname{Mod}\nolimits}B=\X_{\mathrm{fp}}^\perp={\operatorname{Mod}\nolimits}A_\Si.$$ Therefore, $f\colon A\to B$ is a universal localization, since $\X_{\mathrm{fp}}^\perp$ determines the corresponding ring epimorphism uniquely up to isomorphism, see the proof of Proposition \[pr:hom-epi2\].
Now suppose $f\colon A\to B$ is a universal localization. Then restriction identifies the category of $B$-modules with a full extension closed subcategory of ${\operatorname{Mod}\nolimits}A$. Thus we have induced isomorphisms $${\operatorname{Ext}\nolimits}_B^*(X,Y){\xrightarrow}{\sim}{\operatorname{Ext}\nolimits}_A^*(X,Y)$$ for all $B$-modules $X,Y$, since $A$ is hereditary. It follows that $f$ is a homological epimorphism.
Neither implication in Theorem \[th:epi\] holds if one drops the assumption that the ring $A$ is hereditary, not even if the global dimension is $2$. In [@Ke], Keller gives an example of a Bézout domain $A$ and a non-zero ideal $I$ such that the canonical map $A\to A/I$ is a homological epimorphism, but any map $\s$ between finitely generated projective $A$-modules for which $\s\otimes_AA/I$ is invertible is already invertible. We refine the construction so that ${\operatorname{gldim}\nolimits}A = 2$; see Example \[ex:tc\_fail\]. On the other hand, Neeman, Ranicki, and Schofield use finite dimensional algebras to construct in [@NRS] examples of universal localizations that are not homological epimorphisms. They are also able to construct such examples of global dimension $2$; see [@NRS Remark 2.13].
The telescope conjecture {#se:tel}
========================
Now we are ready to state and prove an extended version of Theorem A after recalling the necessary notions.
Let $A$ be a ring. A complex of $A$-modules is called *perfect* if it is isomorphic to a bounded complex of finitely generated projective modules. Note that a complex $X$ is perfect if and only if the functor ${\operatorname{Hom}\nolimits}_{\bfD({\operatorname{Mod}\nolimits}A)}(X,-)$ preserves coproducts. One direction of this statement is easy to prove since ${\operatorname{Hom}\nolimits}_{\bfD({\operatorname{Mod}\nolimits}A)}(A,-)$ preserves coproducts and every perfect complex is finitely built from $A$. The converse follows from [@Ne1992 Lemma 2.2] and [@BN Proposition 3.4]. Recall also that a localizing subcategory $\C$ of $\bfD({\operatorname{Mod}\nolimits}A)$ is *generated by perfect complexes* if $\C$ admits no proper localizing subcategory containing all perfect complexes from $\C$.
\[th:TC\] Let $A$ be a hereditary ring. For a localizing subcategory $\C$ of $\bfD({\operatorname{Mod}\nolimits}A)$ the following conditions are equivalent:
1. There exists a localization functor $L\colon\bfD({\operatorname{Mod}\nolimits}A)\to\bfD({\operatorname{Mod}\nolimits}A)$ that preserves coproducts and such that $\C={\operatorname{Ker}\nolimits}L$.
2. The localizing subcategory $\C$ is generated by perfect complexes.
3. There exists a localizing subcategory $\D$ of $\bfD({\operatorname{Mod}\nolimits}A)$ that is closed under products such that $\C={^\perp}\D$.
\(1) $\Rightarrow$ (2): The kernel ${\operatorname{Ker}\nolimits}L$ and the essential image ${\operatorname{Im}\nolimits}L$ of a localization functor $L$ form an Ext-orthogonal pair for $\bfD({\operatorname{Mod}\nolimits}A)$; see [@BIK Lemma 3.3]. We obtain an Ext-orthogonal pair $(\X,\Y)$ for ${\operatorname{Mod}\nolimits}A$ by taking $\X=H^0{\operatorname{Ker}\nolimits}L$ and $\Y=H^0{\operatorname{Im}\nolimits}L$; see Proposition \[pr:thick\]. The fact that $L$ preserves coproducts implies that $\Y$ is closed under taking coproducts. It follows from Theorem \[th:ext\] that $\X$ is generated by finitely presented modules. Each finitely presented module is isomorphic in $\bfD({\operatorname{Mod}\nolimits}A)$ to a perfect complex, and therefore ${\operatorname{Ker}\nolimits}L$ is generated by perfect complexes.
\(2) $\Rightarrow$ (3): Suppose that $\C$ is generated by perfect complexes. Then there exists a localization functor $L\colon\bfD({\operatorname{Mod}\nolimits}A)\to\bfD({\operatorname{Mod}\nolimits}A)$ such that ${\operatorname{Ker}\nolimits}L=\C$. Thus we have an Ext-orthogonal pair $(\C,\D)$ for $\bfD({\operatorname{Mod}\nolimits}A)$ with $\D={\operatorname{Im}\nolimits}L$; see [@BIK Lemma 3.3]. Now observe that $\D=\C^\perp$ is closed under coproducts, since for any perfect complex $X$ the functor ${\operatorname{Hom}\nolimits}_{\bfD({\operatorname{Mod}\nolimits}A)}(X,-)$ preserves coproducts. It follows that $\D$ is a localizing subcategory.
\(3) $\Rightarrow$ (1): Let $\D$ be a localizing subcategory that is closed under products such that $\C={^\perp}\D$. Then $\Y=H^0\D$ is an extension closed abelian subcategory of ${\operatorname{Mod}\nolimits}A$ that is closed under products and coproducts; see Proposition \[pr:thick-corr\]. In the proof of Corollary \[co:perp\] we have constructed a localization functor $L\colon\bfD({\operatorname{Mod}\nolimits}A)\to\bfD({\operatorname{Mod}\nolimits}A)$ such that $\C={\operatorname{Ker}\nolimits}L$. More precisely, there exists a homological epimorphism $A\to B$ such that $L=-\otimes_A^\bfL B$. It remains to notice that this functor preserves coproducts.
The implication (1) $\Rightarrow$ (2) is known as the *telescope conjecture*. Let us sketch the essential ingredients of the proof of this implication. In fact, the proof is not as involved as one might expect from the references to preceding results of this work.
We need the 5-term exact sequence $\e_M$ for each module $M$, which one gets immediately from the localization functor $L$; see Proposition \[pr:local\]. The perfect complexes generating $\C$ are constructed in the proof of Theorem \[th:ext\], where the relevant implication is (1) $\Rightarrow$ (2). This proof uses Lemmas \[le:ext1perp\]–\[le:subfp\], but nothing more.
Let $A$ be a hereditary ring and $B$ a ring that is derived equivalent to $A$, that is, there is an equivalence of triangulated categories $\bfD({\operatorname{Mod}\nolimits}A){\xrightarrow}{\sim}\bfD({\operatorname{Mod}\nolimits}B)$. Then the statement of Theorem \[th:TC\] carries over from $A$ to $B$. In particular, the statement of Theorem \[th:TC\] holds for every tilted algebra in the sense of Happel and Ringel [@HR].
Given the proof of the telescope conjecture for the derived categories of hereditary rings, one may be tempted to think that perhaps it is possible to get a similar result for rings of higher global dimension. Here we show that this is not the case. Namely, we construct a class of rings for which the conjecture fails for the derived category, and we will see that some of them have global dimension $2$. To achieve this, we use the following result due to Keller [@Ke].
\[le:tc\_fail\] Let $A$ be a ring and $I$ a non-zero two-sided ideal of $A$ such that
1. ${\operatorname{Tor}\nolimits}_i^A(A/I,A/I) = 0$ for all $i \ge 1$ (that is, the surjection $A \to A/I$ is a homological epimorphism), and
2. $I$ is contained in the Jacobson radical of $A$.
Then $L = - {\otimes^{\mathbf L}}_A A/I\colon \bfD({\operatorname{Mod}\nolimits}A) \to \bfD({\operatorname{Mod}\nolimits}A)$ is a coproduct preserving localization functor but ${\operatorname{Ker}\nolimits}L$, which is the smallest localizing subcategory containing $I$, contains no non-zero perfect complexes. In particular, the telescope conjecture fails for $\bfD({\operatorname{Mod}\nolimits}A)$.
In order to find such $A$ and $I$ with (right) global dimension of $A$ equal to $2$, we restrict ourselves to the case when $A$ is a valuation domain. That is, $A$ is a commutative domain with the property that for each pair $a, b \in A$, either $a$ divides $b$ or $b$ divides $a$. We refer to [@FS Chapter II] for a discussion of such domains. Here, we mention only the properties which we need for our example:
\[le:vd\_basic\] The following holds for a valuation domain $A$ which is not a field.
1. The ring $A$ is local and its weak global dimension equals $1$.
2. The maximal ideal $P$ of $A$ is either principal or idempotent.
3. For any ideal $I$ of $A$ we have the isomorphism ${\operatorname{Tor}\nolimits}_1^A(A/I,A/I) \cong I/I^2$.
\(1) The ring $A$ is local since the ideals of $A$ are totally ordered by inclusion. The second part of (1) follows from [@FS VI.10.4].
\(2) This is a direct consequence of results in [@FS Section II.4]. For an ideal $I$, one defines $$I' = \{a \in A \mid aI \subsetneq I\}.$$ It turns out that $I'$ is always a prime ideal and that $I$ is naturally an $A_{I'}$-module. Moreover, $I = I'$ if $I$ itself is a prime ideal, [@FS II.4.3 (iv)]. In particular we have $P' = P$. On the other hand, [@FS p. 69, item (d)] says that $I' \cdot I \subsetneq I$ if and only if $I$ is a principal ideal of $A_{I'}$. Specialized to $P$, this says precisely that $P^2 = P' \cdot P \subsetneq P$ if and only if $P$ is a principal ideal of $A$.
\(3) Tensoring the exact sequence $0 \to I \to A \to A/I \to 0$ with $A/I$ gives the exact sequence $$A/I \otimes_A I \overset{0}\longrightarrow A/I
\overset{\sim}\longrightarrow A/I \otimes_A A/I \to 0.$$ It follows that ${\operatorname{Tor}\nolimits}_1^A(A/I,A/I) \cong A/I \otimes_A I$, and the right exactness of the tensor product yields $A/I \otimes_A I \cong
I/I^2$.
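As an elementary sanity check of part (3) — our addition, not in the original — consider the discrete valuation ring $A=\mathbb{Z}_{(p)}$ with its principal maximal ideal $I=pA$:

```latex
% Sanity check (our addition): A = Z_(p), I = pA principal.
\operatorname{Tor}_1^{A}(A/I,\,A/I)
  \;\cong\; A/I \otimes_A I
  \;\cong\; I/I^2
  \;\cong\; pA/p^2A
  \;\cong\; \mathbb{Z}/p \;\neq\; 0.
% So for a principal maximal ideal, condition (1) of Lemma [le:tc_fail]
% fails, consistent with the requirement below that P be non-principal
% (equivalently idempotent, so that I/I^2 = 0).
```

This is why the counterexample construction below must use a valuation domain whose maximal ideal is idempotent rather than principal.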
The following result is a straightforward consequence.
\[pr:cntex\] Let $A$ be a valuation domain whose maximal ideal $P$ is non-principal. Then the telescope conjecture fails for $\bfD({\operatorname{Mod}\nolimits}A)$. More precisely, $L = - {\otimes^{\mathbf L}}_A A/P$ is a coproduct preserving localization functor on $\bfD({\operatorname{Mod}\nolimits}A)$ whose kernel is non-trivial (it contains $P$) but not generated by perfect complexes.
It is enough to prove that the maximal ideal $P$ meets the conditions of Lemma \[le:tc\_fail\]. As $P$ is the Jacobson radical of $A$, condition (2) is fulfilled. Condition (1) follows easily from Lemma \[le:vd\_basic\].
It remains to construct a valuation domain whose maximal ideal is non-principal and whose global dimension is $2$. To this end, we recall the basic tool for constructing valuation domains with given properties: the value group. If $A$ is a valuation domain, denote by $Q$ its quotient field and by $U$ the group of units of $A$. Then $U$ is a subgroup of the multiplicative group $Q^* =
Q \setminus \{0\}$ and $$G = Q^* / U$$ is a totally ordered abelian group. More precisely, $G$ is an abelian group, the relation $\leq$ on $G$ defined by $aU \leq bU$ if $ba^{-1}
\in A$ gives a total order on $G$, and we have the compatibility condition $$\alpha \leq \beta \quad
\textrm{implies} \quad
\alpha \cdot \gamma \leq \beta \cdot \gamma \quad
\textrm{ for all } \alpha,\beta,\gamma \in G.$$ The pair $(G,\leq)$ is called the *value group* of $A$. We will use the following fundamental result [@FS Theorem 3.8].
\[pr:vd\_constr\] Let $k$ be a field and $(G, \leq)$ a totally ordered abelian group. Then there is a valuation domain $A$ whose residue field $A/P$ is isomorphic to $k$, and whose value group is isomorphic to $G$ as an ordered group.
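For orientation (our example, not from the text), the simplest choice of value group yields a discrete valuation ring:

```latex
% Our illustration: G = Z with its usual order, k any field.
A = k[[t]], \qquad Q = k((t)), \qquad G = Q^*/U \;\cong\; \mathbb{Z},
% via the t-adic valuation (order of vanishing at t = 0). Here the
% maximal ideal P = (t) is principal and gldim A = 1, so producing a
% non-principal maximal ideal requires a larger value group, as in the
% example that follows.
```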
Now, we can give the promised example.
\[ex:tc\_fail\] Let $G$ be a free abelian group of countable rank. If we view $G$ as the group $\bbZ^{(\bbN)}$ (with additive notation), then $G$ is naturally equipped with the lexicographic ordering, which makes it into a totally ordered group. Let $A$ be a valuation domain whose value group is isomorphic to $G$. In fact, looking more closely at the particular construction in [@FS Section II.3], we can arrange for $A$ to be countable.
We claim that the maximal ideal $P$ of $A$ is non-principal and that ${\operatorname{gldim}\nolimits}A = 2$. Indeed, each ideal of $A$ is flat and countably generated since the value group is countable. Thus, each ideal is of projective dimension at most $1$ and ${\operatorname{gldim}\nolimits}A \le 2$. On the other hand, it is easy to see that $A$ has non-principal, hence non-projective, ideals and so is not hereditary. One of them is $P$, which is generated by elements of $A$ whose cosets in the value group $Q^*/U$ correspond, under the isomorphism $Q^*/U \cong \bbZ^{(\bbN)}$, to the canonical basis elements $e_1, e_2, e_3, \ldots \in
\bbZ^{(\bbN)}$.
This way, we obtain a countable valuation domain $A$ of global dimension $2$ such that the telescope conjecture fails for $\bfD({\operatorname{Mod}\nolimits}A)$ by Proposition \[pr:cntex\].
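The order-theoretic fact underlying this example — that the values $e_1 > e_2 > e_3 > \cdots$ form a strictly decreasing chain in $\bbZ^{(\bbN)}$ with the lexicographic order, so that no single value can divide all generators of $P$ — can be checked mechanically. The sketch below is our illustration, not part of the paper; it represents finitely supported integer sequences as tuples.

```python
from itertools import zip_longest

def lex_positive(v):
    """True if the finitely supported sequence v (a tuple) is > 0 in the
    lexicographic order on Z^(N): its first nonzero entry is positive."""
    for x in v:
        if x != 0:
            return x > 0
    return False

def lex_gt(v, w):
    """v > w in the lexicographic order, i.e. v - w > 0."""
    diff = tuple(a - b for a, b in zip_longest(v, w, fillvalue=0))
    return lex_positive(diff)

def e(k, n=10):
    """k-th canonical basis vector of Z^(N), truncated to length n."""
    return tuple(1 if i == k else 0 for i in range(n))

# The values of the generators of P form a strictly decreasing chain:
chain = [e(k) for k in range(5)]
assert all(lex_gt(chain[k], chain[k + 1]) for k in range(4))

# Hence no positive value v divides all of them: every e(k) with k beyond
# the support of v satisfies e(k) < v, so P cannot be principal.
v = (0, 3, -1) + (0,) * 7  # an arbitrary positive value
assert lex_positive(v) and lex_gt(v, e(5))
```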
A bijective correspondence {#se:bij}
==========================
In this final section we summarize our findings by stating explicitly the correspondence between various structures arising from Ext-orthogonal pairs for hereditary rings. In particular, this completes the proof of an extended version of Theorem B:
\[th:bijection\] For a hereditary ring $A$ there are bijections between the following sets:
1. Ext-orthogonal pairs $(\X,\Y)$ for ${\operatorname{Mod}\nolimits}A$ such that $\Y$ is closed under coproducts.
2. Ext-orthogonal pairs $(\Y,\Z)$ for ${\operatorname{Mod}\nolimits}A$ such that $\Y$ is closed under products.
3. Extension closed abelian subcategories of ${\operatorname{Mod}\nolimits}A$ that are closed under products and coproducts.
4. Extension closed abelian subcategories of ${\operatorname{mod}\nolimits}A$.
5. Homological epimorphisms $A\to B$ (up to isomorphism).
6. Universal localizations $A\to B$ (up to isomorphism).
7. Localizing subcategories of $\bfD({\operatorname{Mod}\nolimits}A)$ that are closed under products.
8. Localization functors $\bfD({\operatorname{Mod}\nolimits}A)\to\bfD({\operatorname{Mod}\nolimits}A)$ preserving coproducts (up to natural isomorphism).
9. Thick subcategories of $\bfD^b({\operatorname{mod}\nolimits}A)$.
We state the bijections explicitly in the following table and give the references to the places where these bijections are established.
[ccl]{} Direction&Map&Reference\
\
(1) $\leftrightarrow$ (3)& $(\X,\Y)\mapsto \Y$& Corollary \[co:perp\]\
(2) $\leftrightarrow$ (3)& $(\Y,\Z)\mapsto \Y$& Corollary \[co:perp\]\
(3) $\to$ (4)& $\Y\mapsto ({^\perp}\Y)\cap{\operatorname{mod}\nolimits}A$& Thm. \[th:ext\] and Prop. \[pr:ext\]\
(4) $\to$ (3)& $\C\mapsto \C^\perp$& Thm. \[th:ext\] and Prop. \[pr:ext\]\
(3) $\to$ (5)& $\Y\mapsto (A\to {\operatorname{End}\nolimits}_A(FA))$& Proposition \[pr:hom-epi2\]\
(5) $\to$ (3)& $f\mapsto ({\operatorname{Ker}\nolimits}f\oplus{\operatorname{Coker}\nolimits}f)^\perp$& Proposition \[pr:hom-epi1\]\
(5) $\leftrightarrow$ (6)& $f\mapsto f$& Theorem \[th:epi\]\
(3) $\to$ (7)& $\Y\mapsto \bfD_\Y({\operatorname{Mod}\nolimits}A)$& Proposition \[pr:thick-corr\]\
(7) $\to$ (3)& $\C\mapsto H^0\C$& Proposition \[pr:thick-corr\]\
(7) $\to$ (8)& $\C\mapsto (X\mapsto GX)$& Theorem \[th:TC\]\
(8) $\to$ (7)& $L\mapsto {\operatorname{Im}\nolimits}L$& Theorem \[th:TC\]\
(4) $\to$ (9)& $\X\mapsto \bfD^b_\X({\operatorname{mod}\nolimits}A)$& Remark \[re:thick\]\
(9) $\to$ (4)& $\C\mapsto H^0\C$& Remark \[re:thick\]\
For (3) $\to$ (5), the functor $F$ denotes a left adjoint of the inclusion $\Y\to{\operatorname{Mod}\nolimits}A$. For (7) $\to$ (8), the functor $G$ denotes a left adjoint of the inclusion $\C\to\bfD({\operatorname{Mod}\nolimits}A)$.
Let us mention that this correspondence is related to recent work of some other authors. In [@Sch2007], Schofield establishes for any hereditary ring the bijection (4) $\leftrightarrow$ (6). In [@NS], Nicolás and Saorín establish for a differential graded algebra $A$ a correspondence between recollements for the derived category $\bfD(A)$ and differential graded homological epimorphisms $A\to B$. This correspondence specializes for a hereditary ring to the bijection (5) $\leftrightarrow$ (8).[^2]
A finiteness condition {#a-finiteness-condition .unnumbered}
----------------------
Given an Ext-orthogonal pair for the category of $A$-modules as in Theorem \[th:bijection\], it is natural to ask when its restriction to the category of finitely presented modules yields a complete Ext-orthogonal pair for ${\operatorname{mod}\nolimits}A$. This is particularly important when relating the results of this paper to the representation theory of finite dimensional algebras. For that setting, we characterize this finiteness condition in terms of finitely presented modules; see also Proposition \[pr:except\].
Let $A$ be a finite dimensional hereditary algebra over a field and $\C$ an extension closed abelian subcategory of ${\operatorname{mod}\nolimits}A$. Then the following are equivalent.
1. There exists a complete Ext-orthogonal pair $(\C,\D)$ for ${\operatorname{mod}\nolimits}A$.
2. The inclusion $\C\to{\operatorname{mod}\nolimits}A$ admits a right adjoint.
3. There exists an exceptional object $X\in\C$ such that $\C$ is the smallest extension closed abelian subcategory of ${\operatorname{mod}\nolimits}A$ containing $X$.
4. Let $(\X,\Y)$ be the Ext-orthogonal pair for ${\operatorname{Mod}\nolimits}A$ generated by $\C$. Then for each $M\in{\operatorname{mod}\nolimits}A$ the 5-term exact sequence $\e_M$ belongs to ${\operatorname{mod}\nolimits}A$.
\(1) $\Rightarrow$ (2): For $M\in{\operatorname{mod}\nolimits}A$ let $0\to D_M\to C_M\to M\to
D^M\to C^M\to 0$ be its 5-term exact sequence. Sending a module $M$ to $C_M$ induces a right adjoint for the inclusion $\C\to{\operatorname{mod}\nolimits}A$; see Lemma \[le:exact\].
\(2) $\Rightarrow$ (3): Choose an injective cogenerator $Q$ in ${\operatorname{mod}\nolimits}A$ and let $X$ denote its image under the right adjoint of the inclusion of $\C$. A right adjoint of an exact functor preserves injectivity. It follows that $X$ is an exceptional object and that $\C$ is the smallest extension closed abelian subcategory of ${\operatorname{mod}\nolimits}A$ containing $X$.
\(3) $\Rightarrow$ (4): See Proposition \[pr:except\].
\(4) $\Rightarrow$ (1): The property of the pair $(\X,\Y)$ implies that $(\X\cap{\operatorname{mod}\nolimits}A,\Y\cap{\operatorname{mod}\nolimits}A)$ is a complete Ext-orthogonal pair for ${\operatorname{mod}\nolimits}A$. An application of Proposition \[pr:ext\] yields the equality $\X\cap{\operatorname{mod}\nolimits}A=\C$. Thus there exists a complete Ext-orthogonal pair $(\C,\D)$ for ${\operatorname{mod}\nolimits}A$.
There is a dual result which is obtained by applying the duality between modules over the algebra $A$ and its opposite $A^{\mathrm{op}}$. Note that condition (3) is self-dual.
[99]{}
F. Albrecht, On projective modules over semi-hereditary rings, Proc. Amer. Math. Soc. [**12**]{} (1961), 638–639.
L. Alonso Tarrío, A. Jeremías López and M. J. Souto Salorio, Localization in categories of complexes and unbounded resolutions, Canad. J. Math. [**52**]{} (2000), no. 2, 225–247.
S. Bazzoni, P. C. Eklof and J. Trlifaj, Tilting cotorsion pairs, Bull. London Math. Soc. [**37**]{} (2005), no. 5, 683–696.
D. Benson, S. Iyengar and H. Krause, Local cohomology and support for triangulated categories, to appear in Ann. Sci. École Norm. Sup., arXiv:math/0702610v2.
M. Bökstedt and A. Neeman, Homotopy limits in triangulated categories, Compositio Math. [**86**]{} (1993), no. 2, 209–234.
A. K. Bousfield, The localization of spectra with respect to homology, Topology [**18**]{} (1979), no. 4, 257–281.
K. Brüning, Thick subcategories of the derived category of a hereditary algebra, Homology, Homotopy Appl. [**9**]{} (2007), no. 2, 165–176.
P. M. Cohn, [*Free ideal rings and localization in general rings*]{}, Cambridge Univ. Press, Cambridge, 2006.
S. E. Dickson, A torsion theory for Abelian categories, Trans. Amer. Math. Soc. [**121**]{} (1966), 223–235.
L. Fuchs and L. Salce, [*Modules over non-Noetherian domains*]{}, Mathematical Surveys and Monographs, 84, American Mathematical Society, Providence, RI, 2001.
P. Gabriel, Des catégories abéliennes, Bull. Soc. Math. France [**90**]{} (1962), 323–448.
P. Gabriel and J. A. de la Peña, Quotients of representation-finite algebras, Comm. Algebra [**15**]{} (1987), no. 1-2, 279–307.
W. Geigle and H. Lenzing, Perpendicular categories with applications to representations and sheaves, J. Algebra [**144**]{} (1991), no. 2, 273–343.
D. Happel, Partial tilting modules and recollement, in [*Proceedings of the International Conference on Algebra, Part 2 (Novosibirsk, 1989)*]{}, 345–361, Contemp. Math., Part 2, Amer. Math. Soc., Providence, RI, 1992.
D. Happel and C. M. Ringel, Tilted algebras, Trans. Amer. Math. Soc. [**274**]{} (1982), no. 2, 399–443.
M. Hovey, J. H. Palmieri and N. P. Strickland, Axiomatic stable homotopy theory, Mem. Amer. Math. Soc. [**128**]{} (1997), no. 610, x+114 pp.
B. Keller, A remark on the generalized smashing conjecture, Manuscripta Math. [**84**]{} (1994), no. 2, 193–198.
H. Krause, Cohomological quotients and smashing localizations, Amer. J. Math. [**127**]{} (2005), no. 6, 1191–1246.
H. Krause, Derived categories, resolutions, and Brown representability, in [*Interactions between homotopy theory and algebra*]{}, 101–139, Contemp. Math., 436, Amer. Math. Soc., Providence, RI, 2007.
H. Krause, Localization theory for triangulated categories, arXiv:math/0806.1324v2, to appear in the proceedings of the “Workshop on Triangulated Categories” in Leeds 2006.
H. Lenzing, Homological transfer from finitely presented to infinite modules, in [*Abelian group theory (Honolulu, Hawaii, 1983)*]{}, 734–761, Lecture Notes in Math., 1006, Springer, Berlin, 1983.
M. Mahowald, D. Ravenel and P. Shick, The triple loop space approach to the telescope conjecture, in [*Homotopy methods in algebraic topology (Boulder, CO, 1999)*]{}, 217–284, Contemp. Math., 271, Amer. Math. Soc., Providence, RI, 2001.
A. Neeman, Long exact sequences coming from triangles, in [*Proceedings of the 39th Symposium on Ring Theory and Representation Theory*]{}, 23–29, Symp. Ring Theory Represent. Theory Organ. Comm., Yamaguchi, 2007.
A. Neeman, The connection between the $K$-theory localization theorem of Thomason, Trobaugh and Yao and the smashing subcategories of Bousfield and Ravenel, Ann. Sci. École Norm. Sup. (4) [**25**]{} (1992), no. 5, 547–566.
A. Neeman, The chromatic tower for $D(R)$, Topology [**31**]{} (1992), no. 3, 519–532.
A. Neeman, A. Ranicki and A. Schofield, Representations of algebras as universal localizations, Math. Proc. Cambridge Philos. Soc. [**136**]{} (2004), no. 1, 105–117.
P. Nicolás and M. Saorín, Parametrizing recollement data, arXiv:math/0801.0500v1.
D. C. Ravenel, Localization with respect to certain periodic homology theories, Amer. J. Math. [**106**]{} (1984), no. 2, 351–414.
I. Reiten and C. M. Ringel, Infinite dimensional representations of canonical algebras, Canad. J. Math. [**58**]{} (2006), no. 1, 180–224.
L. Salce, Cotorsion theories for abelian groups, in [*Symposia Mathematica, Vol. XXIII (Conf. Abelian Groups and their Relationship to the Theory of Modules, INDAM, Rome, 1977)*]{}, 11–32, Academic Press, London, 1979.
A. H. Schofield, [*Representation of rings over skew fields*]{}, Cambridge Univ. Press, Cambridge, 1985.
A. H. Schofield, Semi-invariants of quivers, J. London Math. Soc. (2) [**43**]{} (1991), no. 3, 385–395.
A. H. Schofield, Universal localisation for hereditary rings and quivers, in [*Ring theory (Antwerp, 1985)*]{}, 149–164, Lecture Notes in Math., 1197, Springer, Berlin, 1986.
A. H. Schofield, Universal localisations of hereditary rings, arXiv:math/0708.0257v1.
[^1]: The first author is grateful to Lidia Angeleri Hügel for suggesting this example.
[^2]: The first author is grateful to Manolo Saorín for pointing out this bijection.
---
abstract: 'The $1+1$ dimensional directed polymers in a Poissonean random environment is studied. For two polymers of maximal length with the same origin and distinct end points we establish that the point of last branching is governed by the exponent for the transversal fluctuations of a single polymer. We also investigate the density of branches.'
author:
- |
Patrik L. Ferrari and Herbert Spohn\
[Zentrum Mathematik, TU München]{}\
[D-85747 Garching, Germany]{}\
[emails: [email protected], [email protected]]{}
title: Last Branching in Directed Last Passage Percolation
---
[Keywords: ]{} First passage percolation\
[AMS Subject Classification: ]{}Primary $60K35$, Secondary $82B44$\
[Running Title: ]{}Last Branching
Introduction and main result
============================
First passage percolation was invented as a simple model for the spreading of a fluid in a porous medium. One imagines that the fluid is injected at the origin. Upon spreading the time it takes to wet across a given bond is postulated to be random. In the directed version the wetting is allowed along a preferred direction only. The task is then to study the random shape of the wetted region at some large time $t$. The existence of a deterministic shape as $t\to\infty$ follows from the subadditive ergodic theorem [@Kesten]. The shape fluctuations are more difficult to analyse and only some bounds are available [@Piza].
A spectacular progress has been achieved recently by Baik, Deift, and Johansson [@BDJ], who prove that for directed first passage percolation in two dimensions the wetting time measured along a fixed ray from the origin has fluctuations of order $t^{1/3}$. The amplitude has a non-Gaussian distribution. In fact it is Tracy-Widom distributed [@TW], a distribution known previously from the theory of Gaussian random matrices. Of course, such a detailed result is available only for a very specific model. In this model the wetting time is negative, which can be converted into a positive one at the expense of studying *last* rather than first passage percolation, hence our title. One thereby loses the physical interpretation of the spreading of a fluid. But directed first and last passage percolation models are expected to be in the same universality class under the condition that along the ray under consideration the macroscopic shape has a non-zero curvature [@KMH; @PS].
Such detailed results are available only for a few last passage percolation models, among them the Poissonean model studied in [@BDJ]. It was first introduced by Hammersley [@Ha], cf. also the survey by Aldous and Diaconis [@AD]. We start from a Poisson point process on ${\mathbb{R}}_+^2$ with intensity one. Let $(x,y)\prec (x',y')$ if $x<x'$ and $y<y'$. For a given configuration $\omega$ of the Poisson process and two points $S\prec E \in {\mathbb{R}}_+^2$ a *directed polymer* starting at $S$ and ending at $E$ is a piecewise linear path $\pi$ obtained by connecting $S$ and $E$ through a subset $\{q_1,\ldots,q_N\}$ of points in $\omega$ such that $S\prec q_1 \prec \cdots
\prec q_N \prec E$. The length, $l(\pi)$, of the directed polymer $\pi$ is the number of Poisson points visited by $\pi$. We denote by $\Pi(S,E,\omega)$ the set of all directed polymers from $S$ to $E$ for given $\omega$ and we are interested in directed polymers which have maximal length. In general there will be several of these and we denote by $\Pi_{\max}(S,E,\omega)$ the set of maximizers, i.e. of directed polymers in $\Pi(S,E,\omega)$ with *maximal length* $$L(S,E)(\omega)= \max_{\pi \in \Pi(S,E,\omega)} l(\pi).$$
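For intuition, $L(S,E)$ can be simulated directly: sorting the Poisson points by their first coordinate reduces the maximal length to a longest strictly increasing subsequence of the second coordinates, computable in $O(n\log n)$ by patience sorting. The sketch below is our own illustration, not part of the original text; all function names are ours.

```python
import bisect
import math
import random

def poisson_count(mean, rng):
    # number of arrivals of a unit-rate Poisson process before time `mean`
    s, n = 0.0, 0
    while True:
        s += -math.log(1.0 - rng.random())
        if s > mean:
            return n
        n += 1

def sample_configuration(t, rng):
    # Poisson point process of intensity one on [0, t]^2
    n = poisson_count(t * t, rng)
    return [(rng.uniform(0.0, t), rng.uniform(0.0, t)) for _ in range(n)]

def max_length(points):
    # L(S, E) for S = (0,0), E = (t,t): sort by x, then take the longest
    # strictly increasing subsequence of the y-coordinates (patience sorting)
    ys = [y for _, y in sorted(points)]
    tails = []
    for y in ys:
        i = bisect.bisect_left(tails, y)
        if i == len(tails):
            tails.append(y)
        else:
            tails[i] = y
    return len(tails)
```

Already for moderate $t$ the sample mean of $L(t)$ lies visibly below $2t$, consistent with the negative mean of the limiting fluctuation variable.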
\[simulation\]
{width="16cm"}
For the specific choice $S=(0,0)$, $E=(t,t)$ let us set $L(S,E)=L(t)$. The distribution function for $L(t)$ can be written in determinantal form as $$\label{eq1v2}
{\mathbb{P}}(L(t)<a)=\operatorname*{Det}(\mathbbm{1}- P_a B_t)$$ Here $P_a$ and $B_t$ are projection operators in $\ell_2({\mathbb{Z}})$. $P_a$ projects onto $[a,\infty)$ and $B_t$ is the spectral projection corresponding to the interval $(-\infty,0]$ of the operator $H_t$ defined through $$H_t\, \psi(n)=-\psi(n+1)-\psi(n-1)+\frac{n}{t}\psi(n),$$ i.e. $B_t$ is the discrete Bessel kernel. (\[eq1v2\]) should be compared with the determinantal formula for the largest eigenvalue, $E_{\max}$, of a $N\times
N$ Gaussian, $\beta=2$ random matrix, which has the distribution function $${\mathbb{P}}(E_{\max}\leq a)=\operatorname*{Det}(\mathbbm{1}-P_a K_N).$$ Here $P_a$ and $K_N$ are projections in $L^2({\mathbb{R}})$. $P_a$ projects onto the semiinfinite interval $[a,\infty)$ and $K_N$ is the spectral projection onto the interval $[0,N]$ of the operator $-\frac{1}{2}{\frac{\operatorname*{d}^2 }{\operatorname*{d}\! x^2}}+\frac{1}{2}x^2$. In the limit of large $t$, under suitable rescaling [@PS; @TW], both determinantal formulae converge to $$\label{eq4v2}
\operatorname*{Det}(\mathbbm{1}-P_a K)$$ where $K$ is the Airy kernel, i.e. the spectral projection corresponding to $(-\infty,0]$ of the Airy operator $-{\frac{\operatorname*{d}^2 }{\operatorname*{d}\! x^2}}+x$. (\[eq4v2\]) is the distribution function for a standard Tracy-Widom random variable $\zeta_2$ [@TW]. The famous result in [@BDJ] states that $$\label{eq5v2}
L(t)\cong 2t+t^{1/3}\zeta_2$$ in the limit of large $t$. Parenthetically, we remark that the proof in [@BDJ] proceeds via Toeplitz and not, as indicated here, via Fredholm determinants.
While (\[eq5v2\]) gives very precise information about the typical length of a directed polymer, it leaves untouched the issue of typical spatial excursions of a maximizing directed polymer. As shown in [@Jo], they are in fact of size $t^{2/3}$ away from the diagonal. No information on the distribution is available. The transverse exponent $2/3$ appears also in a somewhat different quantity [@PS]. Set $S=(0,0)$, $E=(t-yt^\nu,t+yt^\nu)$ and consider the joint distribution of $t^{-1/3}(L(t)-2t)$ and $t^{-1/3}(L(S,E)-2t)$. If $\nu>2/3$, the two random variables become independent as $t\to\infty$ and if $\nu<2/3$ the joint distribution is concentrated on the diagonal. Only for $\nu=2/3$ is there a non-degenerate joint distribution, which can be written in terms of suitable determinants involving the Airy operator $-\frac{d^2}{dx^2}+x$ on $L^2({\mathbb{R}})$.
In our present work we plan to study a related, but more geometrical quantity, see Figure \[simulation\] which displays the directed polymers rotated by $\pi/4$ for better visibility. The root point is always $S=(0,0)$ and the end points lie on the line $U_t=\{(t-x,t+x), \vert x \vert \leq t\}$. For fixed realization $\omega$ and for each end point $E$ we draw the set of all maximizers. Note that, e.g. for $E=(t,t)$, the directed polymer splits and merges again, which reflects that $\Pi_{\max}(S,E,\omega)$ contains many paths, their number presumably growing exponentially in $t$. The resulting network of lines has some resemblance to a river network with $(0,0)$ as the mouth or to a system of blood vessels, see [@Me] for related models. To characterize the network a natural geometrical object is the *last branching* for a pair of directed polymers with distinct end points [@FH]. As in Figure \[simulation\] the starting point is always $S=(0,0)$ and the end point $E$ must lie on the line $U_t$. If $\pi_i$ is a maximizer with start point $S=(0,0)$ and end point $E_i \in U_t$, $i=1,2$, then the last point in which $\pi_1$ and $\pi_2$ intersect is denoted by $I(\pi_1,\pi_2)$. We define the *last intersection point* for two sets of maximizers by $$J(E_1,E_2)= I(\pi_1,\pi_2)\textrm{ which minimizes }d(I(\pi_1,\pi_2),U_t)$$ where $d(X,U_t)$ is the Euclidean distance between $X$ and $U_t$. $J(E_1,E_2)$ depends on the configuration $\omega$ of the Poisson points but is independent from the choice of the maximizers. $J(E_1,E_2)$ is unique since the existence of two distinct last intersection points is in contradiction with the condition of being the last intersection. In particular if $(E_1)_1< (E_2)_1$, then $J(E_1,E_2)$ can be obtained by taking the highest maximizer from $0$ to $E_2$ and the lowest maximizer from $0$ to $E_1$.
Instead of the geometrical intersection, one could require the last intersection point to be a Poisson point. The two maximizers then necessarily have a common root. For the coarse quantities studied here there is no distinction and our results are identical in both cases.
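In this Poisson-point variant the last common point can be computed exactly for small configurations: a point $p$ lies on some maximizer to $E$ precisely when the longest chain through $p$ attains $L(0,E)$, and the last common point is then the shared such point with the largest coordinate sum (i.e. closest to $U_t$); for points in general position this agrees with $J(E_1,E_2)$. The following $O(n^2)$ sketch is our own illustration (the function names and the toy data are ours, not from the paper):

```python
def points_on_maximizers(points, E):
    """Points lying on at least one maximal chain from (0,0) to E
    (strict coordinate ordering), via an O(n^2) dynamic program."""
    pts = sorted(p for p in points if 0.0 < p[0] < E[0] and 0.0 < p[1] < E[1])
    n = len(pts)
    up = [1] * n    # longest chain ending at pts[i]
    down = [1] * n  # longest chain starting at pts[i]
    for i in range(n):
        for j in range(i):
            if pts[j][0] < pts[i][0] and pts[j][1] < pts[i][1]:
                up[i] = max(up[i], up[j] + 1)
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            if pts[i][0] < pts[j][0] and pts[i][1] < pts[j][1]:
                down[i] = max(down[i], down[j] + 1)
    L = max(up, default=0)
    return {pts[i] for i in range(n) if up[i] + down[i] - 1 == L}

def last_common_point(points, E1, E2):
    """Last (largest x+y, i.e. closest to U_t) Poisson point shared by a
    maximizer to E1 and a maximizer to E2; falls back to the root (0,0)."""
    common = (points_on_maximizers(points, E1)
              & points_on_maximizers(points, E2)) | {(0.0, 0.0)}
    return max(common, key=lambda p: p[0] + p[1])
```

For instance, with points $(1,1)$, $(2,3)$, $(3,2)$ and end points $E_1=(2.5,6)$, $E_2=(6,2.5)$ the two (unique) maximizers branch at $(1,1)$.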
One would expect that the branching is governed again by the transverse exponent $2/3$. More precisely let us assume that $$d(E_1,E_2)={\mathcal{O}}(t^\nu), \,0\leq \nu <1
\textrm{ and }E_1,E_2 \in U_t.$$ If $\nu=2/3$, the last branching point should have a distance of order $t$ from $U_t$ with a distribution that is non-degenerate on that scale. On the other hand if $\nu>2/3$ the branching will be close to the root and if $\nu<2/3$ the branching will be close to $U_t$. Our main result indeed singles out $\nu=2/3$ and provides some estimates on the tails.
\[thm1\] Let $E_1=(t,t)$ and $E_2=E_1+y t^\nu (-1,1)$ with $y \sim {\mathcal{O}}(1)$.
- For $\nu>2/3$, there exists a $C(y)<\infty$ such that for all $\sigma>5/3-\nu$, $$\lim_{t\to\infty}{\mathbb{P}}(\{d(0,J(E_1,E_2))\leq C(y) t^\sigma\})=1.$$
- For $\nu\leq 2/3$ and for all $\mu<2\nu-1/3$ one has $$\lim_{t \to \infty}{\mathbb{P}}(\{d(J(E_1,E_2),U_t)\leq t^\mu\})=0.$$
In particular for $\nu=2/3$, one can choose $\mu<1$.
Our result does not rule out the possibility that for $\nu=2/3$ the distribution of the last intersection point is degenerate near the origin. In fact the proof exploits geometric aspects for branching points close to $U_t$, which cannot be used to obtain sharp results close to the origin.
Another way to characterize the network of Figure \[simulation\] is to consider the line density at the cross-section $U_s$, equivalently the typical distance between maximizers when crossing $U_s$. To have a definition, for given $\omega$ let $\overline{M}_t$ be all the maximizers with end points in $U_t$ considered as a subset of $\{(x_1,x_2), 0\leq x_1+x_2\leq 2t\}$. $\overline{M}_t$ consists of straight segments connecting two points of $\omega$ and straight segments connecting $0$ with a point in $\omega$. In addition there is a union of triangles with base contained in $U_t$ and the apex a point of $\omega$. We define $M_t$ to be $\overline{M}_t$ such that in every triangle only the two sides emerging from the apex are retained. Let $$\label{Nt}N_t(s)=\#\textrm{ of points of }M_t\cap U_s,\, 0 \leq s \leq t.$$ If $s=c\, t$, $0<c<1$, then the typical distance between lines is of order $t^{2/3}$ and thus one expects $$N_t(ct)\simeq t^{1/3}.$$ On the other hand for a cross-section closer to $U_t$ the number of points should increase faster. In particular $N_t(t)\simeq t$. This suggests that $$N_t(t-t^\mu)\simeq t^{g(\mu)}$$ with $g(0)=1$ and $g(1)=1/3$. In the last section we prove the lower bound $$g(\mu)\geq \frac{5}{6}-\frac{\mu}{2}.$$
Last branching
==============
In this section we prove Theorem \[thm1\]. Before doing so, we introduce some notation and state some results of [@BDJ] concerning large deviations for the length of maximizers.
For any $w\prec w' \in {\mathbb{R}}_+^2$, we denote by $[w,w']$ the rectangle with corners at $w$ and $w'$ and by $a(w,w')$ its area. The maximal length $L(w,w')$ is a random variable whose distribution function depends only on $a(w,w')$ with $L(w,w')\sim 2\sqrt{a(w,w')}$. Large deviation estimates for ${\mathbb{P}}(L(w,w')\leq 2\sqrt{a(w,w')} + n)$ are proved in [@BDJ], Lemma 7.1. We consider the case of $a(w,w')\gg 1$ and $\vert n \vert \gg 1$. Let $$\tau=n (\sqrt{a(w,w')}+n/2)^{-1/3}.$$ Then there are some positive constants $\theta, T_0, c_1, c_2$ so that\
1. [*Upper tail*]{}: if $T_0\leq \tau$ and $n\leq 2\sqrt{a(w,w')}$, then $${\mathbb{P}}(L(w,w')\geq 2\sqrt{a(w,w')}+n)\leq c_1 \exp(-c_2 \tau^{3/2}),
\label{upLD}$$\
2. [*Lower tail*]{}: if $\tau\leq -T_0$ and $\vert n \vert \leq 2\sqrt{a(w,w')} \,\theta$, then $${\mathbb{P}}(L(w,w')\leq 2\sqrt{a(w,w')}-\vert n \vert) \leq c_1 \exp(-c_2\vert{\tau}\vert^3).
\label{lowLD}$$
Our first step is to prove a lemma on the length (as in [@Jo], Lemma 3.1) and a geometric lemma, both of which will be used in the proofs. Let $\mathcal{Z}$ be a set of points in ${\mathbb{R}}_+^2$ such that $\vert\mathcal{Z}\vert \leq t^m$ for a finite $m$ and, for each $z\in\mathcal{Z}$, let $z'=z+\hat{x}$, where $\hat{x}$ is a unit vector of ${\mathbb{R}}_+^2$.
\[lemmalength\] Let $\delta\in(1/3,1)$ and $E$ a fixed end point on $U_t$. For each $z\in \mathcal{Z}$, $$E_z=\{\omega \in \Omega {\textrm{ s.t. }}L(0,z')\leq 2 \sqrt{a(0,z')}+t^\delta \textrm{ and }L(z,E)\leq 2
\sqrt{a(z,E)}+t^\delta\}.$$ Then for all ${\varepsilon}>0$ and $t$ large enough we have $${\mathbb{P}}\bigg(\bigcup_{z\in\mathcal{Z}}\Omega{\setminus}E_z\bigg)\leq {\varepsilon}.$$
Let $\omega([z,E])$ be the number of Poisson points in $[z,E]$. If $a=a(z,E)\leq t^{\delta/2}$, then $${\mathbb{P}}(\omega([z,E])\geq t^{\delta})
=\sum_{j\geq t^{\delta}}{{\textrm{e}}^{-a}\frac{a^j}{j!}}\leq C \sum_{j\geq
t^\delta}{{\textrm{e}}^{-a f(j/a)}},$$ where $f(x)=1-x+x\ln{x}$ and $C>0$ a constant (using Stirling’s formula). But for $x>7$, $f(x)>x$ and here $x=j/a\geq t^{\delta/2}\gg 1$, therefore $$\label{lem2eq1}{\mathbb{P}}(L(z,E)\geq
2\sqrt{a(z,E)}+t^\delta)\leq {\mathbb{P}}(\omega([z,E])\geq t^{\delta})\leq C \sum_{j\geq
t^\delta}{{\textrm{e}}^{-j}} \leq 2 C {\textrm{e}}^{-t^{\delta}}.$$ The same bound holds for ${\mathbb{P}}(L(0,z')\geq 2 \sqrt{a(0,z')}+t^\delta)$.
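The elementary inequality used above, $f(x)>x$ for $x>7$ with $f(x)=1-x+x\ln x$, can be confirmed numerically: $g(x)=f(x)-x$ has $g'(x)=\ln x-1>0$ for $x>e$, and $g$ changes sign between $6$ and $7$. A quick check (our own, not from the text):

```python
import math

def f(x):
    # rate function in the Poisson large-deviation bound
    return 1.0 - x + x * math.log(x)

def check_threshold():
    # g(x) = f(x) - x is increasing for x > e, so one sign change below 7
    # plus positivity on a grid beyond 7 confirms f(x) > x for x > 7
    g = lambda x: f(x) - x
    assert g(6.0) < 0.0 < g(7.0)
    assert all(g(7.0 + 0.1 * i) > 0.0 for i in range(10000))
    return True
```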
If $a=a(z,E)\geq t^{\delta/2}$ then $${\mathbb{P}}(L(z,E)\geq 2\sqrt{a}+t^\delta) \leq
{\mathbb{P}}(L(z,E)\geq 2\sqrt{a}+a^{\delta/2}).$$ Consequently taking $n=a^{\delta/2}$, we have $\tau=a^{\delta/2-1/6}(1+o(1))$. Moreover $\tau\geq t^{(\delta-1/3)\delta/4}/2$ for $t$ large enough, because $a\geq t^{\delta/2}$ and consequently by (\[upLD\]) $$\label{lem2eq2}{\mathbb{P}}(L(z,E)\geq 2\sqrt{a(z,E)}+t^\delta) \leq
c_1 \exp(-c_2 t^{(3\delta-1)\delta/8}/3).$$ The same estimate holds for ${\mathbb{P}}(L(0,z')\geq 2\sqrt{a(0,z')}+t^\delta)$. Since $-t^\delta\ll -t^{(3\delta-1)\delta/8}$ for $t$ large, combining (\[lem2eq1\]) and (\[lem2eq2\]) we have $$\label{EndLemma2}
{\mathbb{P}}\bigg(\bigcup_{z\in\mathcal{Z}}\Omega{\setminus}E_z\bigg)
\leq t^m \max_{z\in\mathcal{Z}}{\mathbb{P}}(\Omega{\setminus}E_z)\leq t^m c_1 \exp(-c_2
t^{(3\delta-1)\delta/8}/3)\leq {\varepsilon}$$ for $t$ large enough.
![*Geometrical construction used in Lemma \[lemmageom\] and Theorem \[thm1\] $i)$.*[]{data-label="fig1"}](Figure2.eps "fig:"){width="8cm"}
Let us consider an end point $E$ on $U_t$ given by $E=(t(1-k),t(1+k))$ with $k\in
(-1,1)$ and let $\hat{x}$ be the unit vector with direction $\overrightarrow{0E}$. The cylinder $C(w,l)$ has axis $\overline{0E}$, width $w$ and length $l$ (see Figure \[fig1\]). $\partial C(w,l)$ is the boundary of the cylinder without lids. Then the following geometric lemma holds.
\[lemmageom\] Let $z\in\partial C(w,l)$ with $w=t^\nu$, $l=t^\mu$, $\nu<\mu$ and $z'=z+\hat{x}$. Then there exists a $C(k)>0$ such that $$\label{eqlemgeom}
\sqrt{a(0,z')}+\sqrt{a(z,E)}-\sqrt{a(0,E)}\leq -C(k) \frac{w^2}{l}\textrm{ as
}t\to\infty.$$
First let us consider $\mu<1$. Let $e_1= \frac{1}{\sqrt{2(1+k^2)}}\binom{1-k}{1+k}$ and $e_2=\frac{1}{\sqrt{2(1+k^2)}}\binom{-(1+k)}{1-k}$. Then $z=E\pm w e_2 -
\lambda l e_1$ with $\lambda\leq 1$ such that $z\in [0,E]$. For the computations we consider the “$+$” case, the “$-$” case is obtained replacing $w$ with $-w$ at the end. Let $Q= \sqrt{2(1+k^2)}$ and $l'= \lambda l
-1$. Then $$z=\binom{t(1-k)-w (1+k)/Q-\lambda l (1-k)/Q}{t(1+k)+w(1-k)/Q-\lambda l (1+k)/Q}$$ and $$z'=\binom{t(1-k)-w (1+k)/Q-l' (1-k)/Q}{t(1+k)+w(1-k)/Q-l'(1+k)/Q}.$$ Expansion leads to the following results, $$\begin{aligned}
\sqrt{a(0,z')}&=&t \sqrt{1-k^2}-\frac{\lambda
l}{\sqrt{2}}\sqrt{\frac{1-k^2}{1+k^2}}-\frac{\sqrt{2}k
w}{\sqrt{1-k^4}}+{\mathcal{O}}(w^2/t),\\
\sqrt{a(0,E)}&=&t\sqrt{1-k^2},\\
\sqrt{a(z,E)}&=&\frac{\lambda l}{\sqrt{2}}\sqrt{\frac{1-k^2}{1+k^2}}
f(k,w/\lambda l),\end{aligned}$$ where $f(k,\zeta)=\sqrt{1+4 k \zeta (1-k^2)^{-1}-\zeta^2}$. It follows that $$\sqrt{a(0,z')}+\sqrt{a(z,E)}-\sqrt{a(0,E)}=-\sqrt{\frac{1-k^2}{2(1+k^2)}}
h(k,w,\lambda l)+{\mathcal{O}}(w^2/t),$$ where $$h(k,w,\lambda l)=\lambda l \bigg(1-\sqrt{1+\frac{4 k w}{\lambda l
(1-k^2)}-\frac{w^2}{\lambda^2 l^2}}+\frac{2 k w}{\lambda l (1-k^2)}\bigg).$$ It is easy to see that $h(k,w,\lambda l)\geq 0$ (in fact, $h(k,w,\lambda
l)=0$ only if $\frac{1}{\lambda l}=0$). Moreover $$h(k,w,\lambda l)\sim
\frac{(k^2+1)^2 w^2}{2(k^2-1)^2 \lambda l} + {\mathcal{O}}(w^3/(\lambda l)^2),$$ and the minimal value is obtained for $\lambda=1$. Consequently, for $l=t^\mu$ and $t$ large enough, $$\sqrt{a(0,z')}+\sqrt{a(z,E)}-\sqrt{a(0,E)}\leq - C(k)\frac{w^2}{l}\textrm{
with }C(k)=\frac{(k^2+1)^2}{4(k^2-1)^2}.$$
Secondly let us consider the case $\mu=1$. In this case a $z\in\partial C(w,l)$ can be written as $$z=\binom{\alpha t(1-k)-w (1+k)/Q}{\alpha t(1+k)+w(1-k)/Q}\textrm{ and }
z'=\binom{\alpha' t(1-k)-w (1+k)/Q}{\alpha' t(1+k)+w(1-k)/Q}$$ where $\alpha \in (0,1)$ such that $z\in [0,E]$ and $\alpha'=\alpha+1/tQ$. The expansion yields to $$\begin{aligned}
\sqrt{a(0,z')}+\sqrt{a(z,E)}-\sqrt{a(0,E)}&=&
-\left(\frac{1}{\alpha}+\frac{1}{1-\alpha}\right)
\frac{1+k^2}{4(1-k^2)^{3/2}}\frac{w^2}{t}+{\mathcal{O}}(w^3/t^2)\\
\leq -\frac{1+k^2}{(1-k^2)^{3/2}}\frac{w^2}{t}+{\mathcal{O}}(w^3/t^2) &\leq& -C(k)
\frac{w^2}{t}\textrm{ with }C(k)=\frac{1+k^2}{2(1-k^2)^{3/2}}\end{aligned}$$ for $t$ large enough.
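Lemma \[lemmageom\] can be sanity-checked numerically by evaluating the rectangle areas exactly for concrete parameters. The sketch below is our own illustration; it uses the $\mu<1$ parametrization with $\lambda=1$ and only verifies that the area deficit is negative and of the predicted order $w^2/l$:

```python
import math

def area_deficit(t, k, w, l, lam=1.0, sign=+1):
    # sqrt(a(0,z')) + sqrt(a(z,E)) - sqrt(a(0,E)) for z on the cylinder side,
    # with z = E + sign*w*e2 - lam*l*e1 and z' = z + e1 (unit step along 0E)
    Q = math.sqrt(2.0 * (1.0 + k * k))
    E = (t * (1.0 - k), t * (1.0 + k))
    e1 = ((1.0 - k) / Q, (1.0 + k) / Q)    # unit vector along 0E
    e2 = (-(1.0 + k) / Q, (1.0 - k) / Q)   # unit normal to 0E
    z = (E[0] + sign * w * e2[0] - lam * l * e1[0],
         E[1] + sign * w * e2[1] - lam * l * e1[1])
    zp = (z[0] + e1[0], z[1] + e1[1])
    a0zp = zp[0] * zp[1]                   # a(0, z'): rectangle area
    azE = (E[0] - z[0]) * (E[1] - z[1])    # a(z, E)
    a0E = E[0] * E[1]                      # a(0, E)
    return math.sqrt(a0zp) + math.sqrt(azE) - math.sqrt(a0E)
```

With, e.g., $t=10^8$, $k=0.3$, $w=t^{1/2}$, $l=t^{4/5}$, the deficit comes out negative and within a small constant factor of $C(k)\,w^2/l$, for both signs of the transverse displacement.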
[**Proof of Theorem \[thm1\]:**]{}
Let $E_1=(t,t)$ and $E_2=E_1+yt^\nu(-1,1)$ with $\nu>2/3$. First we prove that for an $E=(t(1-k),t(1+k))$ with $k\in (-1,1)$, all maximizers from $0$ to $E$ are contained in a cylinder $C(w)$ of axis $\overline{0E}$, width $w=t^{\kappa}$, $\kappa>2/3$, with probability tending to one as $t\to\infty$ (as in Section 3 of [@Jo]). Then we compute the intersection of such cylinders starting at $0$ and ending at $E_1$ and $E_2$ respectively.
Let us consider the following event: $$D\equiv D(w)= \{\omega \in \Omega {\textrm{ s.t. }}\forall\, \pi \in
\Pi_{\max}(0,E,\omega) \textrm{ we have }\pi\cap\partial C(w) = \emptyset\}.$$ We prove that $$\label{eqA1}
\forall \, {\varepsilon}>0, {\mathbb{P}}(D)\geq 1-{\varepsilon}\textrm{ for }t\textrm{ large enough.}$$ If $\omega \in \Omega {\setminus}D$, then there exists a maximizer $\pi$ such that $\pi\cap\partial C(w)\neq \emptyset$. We divide the two sides of $\partial C(w)$ into $K=2t$ equidistant points (see Figure \[fig1\]) with $A=z_0$, $B=z_K$ and $z_j=A+j\vert AB \vert/K \hat{x}$ where $\hat{x}$ is the unit vector with direction $\overrightarrow{0E}$. Likewise for the second side of the cylinder. Let $\mathcal{A}$ be the set of all these points. We define $z(\omega)$ as follows: if the last intersection of $\pi$ with $\partial C(w)$ is in $\overline{z_{j-1} z_j}$, then $z(\omega)=z_j$ (with $z(\omega)=z_j$ if the intersection is exactly at $z_j$), and $z'(\omega)=z(\omega)+\hat{x}$. Then we have $$\label{eqA2}
L(0,E)\leq L(0,z'(\omega))+L(z(\omega),E).$$ Defining for all $z\in\mathcal{A}$ $$\label{eqA3}
E_z=\{\omega \in \Omega {\textrm{ s.t. }}L(0,z')\leq 2 \sqrt{a(0,z')}+t^\delta
\textrm{ and }L(z,E)\leq 2 \sqrt{a(z,E)}+t^\delta\}$$ we obtain, by Lemma \[lemmalength\], that for $\delta>1/3$ $${\mathbb{P}}\bigg(\bigcup_{z\in\mathcal{A}}\Omega{\setminus}E_z\bigg)\leq
{\varepsilon}\textrm{ for all }{\varepsilon}>0\textrm{ and }t\textrm{ large enough.}$$ We consider now the set of events $F= (\Omega{\setminus}D)\bigcap_{z\in\mathcal{A}}E_z$. Then ${\mathbb{P}}(F)=1-{\mathbb{P}}(\Omega{\setminus}F)\geq 1-{\mathbb{P}}(D)-
{\mathbb{P}}(\bigcup_{z\in\mathcal{A}}\Omega{\setminus}E_z) \geq {\mathbb{P}}(\Omega{\setminus}D) - {\varepsilon}$ for $t$ large enough, which means $${\mathbb{P}}(\Omega{\setminus}D)\leq {\varepsilon}+ {\mathbb{P}}(F).$$ We need to prove that ${\mathbb{P}}(F)\leq {\varepsilon}$ for $t$ large enough. For all $\omega \in
F$, from (\[eqA2\]) and (\[eqA3\]) it follows that $$\label{eqA4}
L(0,E)\leq 2 t^\delta + 2 (\sqrt{a(0,z'(\omega))}+\sqrt{a(z(\omega),E)}).$$ Applying Lemma \[lemmageom\] with $\mu=1$ we obtain $$\label{eqA5}
\sqrt{a(0,z'(\omega))}+\sqrt{a(z(\omega),E)}\leq \sqrt{a(0,E)}-C(k) t^{2\kappa-1}.$$ From (\[eqA4\]) and (\[eqA5\]) we have, for all $\omega \in F$, $L(0,E)-2\sqrt{a(0,E)}\leq 2 t^\delta - 2C(k) t^{2\kappa-1}$ for $t$ large enough. This implies, taking $\delta<2\kappa-1$ (always possible since $2\kappa-1>1/3$), for $t$ large enough, $$\begin{aligned}
{\mathbb{P}}(F)&\leq&{\mathbb{P}}(L(0,E)-2\sqrt{a(0,E)}\leq 2 t^\delta - 2C(k) t^{2\kappa-1})\\
&\leq & {\mathbb{P}}(L(0,E)-2\sqrt{a(0,E)}\leq - C(k) t^{2\kappa-1}) \leq {\varepsilon}\end{aligned}$$ because $- t^{2\kappa-1}/t^{1/3}\to -\infty$ as $t\to\infty$. This proves (\[eqA1\]).
Therefore with probability approaching one as $t$ goes to infinity, the maximizers from $0$ to $E$ are in a cylinder of width $w=t^\kappa$ with $\kappa>2/3$. We use the result for $E=E_1$ and for $E=E_2$. Let us take $\kappa \in (2/3,\nu)$ and let $C_1, C_2$ be the cylinders that include the maximizers from $0$ to $E_1, E_2$ respectively. Let $G$ be the farthest point from the origin in $C_1\cap C_2$. Then for $t$ large enough and for all ${\varepsilon}>0$, $$\label{eqA6}
{\mathbb{P}}(d(0,J(\omega))\leq d(0,G))\geq 1-2{\varepsilon}.$$ We need only to compute $d(0,G)$. By some algebraic computations we obtain $$d(0,G)=\frac{t^{\kappa+1-\nu}}{\vert y\vert}+{\mathcal{O}}(t^{\nu-1}) \leq
\frac{2t^{\kappa+1-\nu}}{\vert y\vert}\textrm{ for }t\textrm{ large enough}$$ and $\kappa \in (2/3,\nu)$ implies $\kappa+1-\nu>5/3-\nu$.
We consider the case $y>0$; the case $y<0$ is obtained by symmetry. Let us consider the cylinder $C(w,l)$ with axis $\overline{0 E_1}$ of length $l=t^\mu$ and width $w=yt^\nu$, $\nu<\mu$. We denote by $\partial C(w,l)_{+}$ the upper side of $C(w,l)$ (see Figure \[fig2\]).
![*Geometrical construction used in Theorem \[thm1\] $ii)$ and in Theorem \[thm2\].*[]{data-label="fig2"}](Figure3.eps "fig:"){width="8cm"}
Let $$D\equiv D(w,l)= \{\omega \in \Omega {\textrm{ s.t. }}\forall\, \pi \in
\Pi_{\max}(0,E_1,\omega) \textrm{ we have }\pi\cap\partial C(w,l)_{+} = \emptyset\}.$$ If $\omega \in \Omega {\setminus}D$ then the highest maximizer, $\pi_0$, from $0$ to $E_1$ intersects $\partial C(w,l)_{+}$ in $\overline{AB}$. We divide $\overline{AB}$ into $K=[\sqrt{2}(l-w)]+1$ equidistant points with $A=z_0$, $B=z_K$ and $z_j=A+j(l-w)(1,1)/K$. Let $\mathcal{A}$ be the set of all these points. We define $z(\omega)$ as follows: if the last intersection of $\pi_0$ with $\partial C(w,l)_{+}$ is in $\overline{z_{j-1} z_j}$, then $z(\omega)=z_j$ (with $z(\omega)=z_j$ if the intersection is exactly at $z_j$) and $z'(\omega)=z(\omega)+(1,1)/\sqrt{2}$. We have $$\label{eqB1}
L(0,E_1)\leq L(0,z'(\omega))+L(z(\omega),E_1).$$ We define for all $z\in\mathcal{A}$ $$\label{eqB2}
E_z=\{\omega \in \Omega {\textrm{ s.t. }}L(0,z')\leq 2 \sqrt{a(0,z')}+t^\delta
\textrm{ and }L(z,E_1)\leq 2 \sqrt{a(z,E_1)}+t^\delta\}$$ and the set of events $F= (\Omega {\setminus}D)\bigcap_{z\in\mathcal{A}}E_z$. In what follows we consider $\delta>1/3$. Then using Lemma \[lemmalength\] we conclude that for all ${\varepsilon}>0$ and $t$ large enough $${\mathbb{P}}(\Omega{\setminus}D)\leq {\varepsilon}+ {\mathbb{P}}(F).$$ For all $\omega \in F$, from (\[eqB1\]) and (\[eqB2\]) follows: $$\label{eqB3}
L(0,E_1)\leq 2 t^\delta + 2(\sqrt{a(0,z'(\omega))}+\sqrt{a(z(\omega),E_1)}).$$ From the geometric Lemma \[lemmageom\] we deduce $$\label{eqB4}
\sqrt{a(0,z'(\omega))}+\sqrt{a(z(\omega),E_1)}\leq\sqrt{a(0,E_1)}-C y^2 t^{2\nu-\mu}.$$ Therefore for all $\omega \in F$, by (\[eqB3\]) and (\[eqB4\]), $L(0,E_1)-2t\leq 2 t^\delta - 2C y^2 t^{2\nu-\mu}\leq -C y^2 t^{2\nu-\mu}$ for $t$ large enough if $2\nu-\mu>\delta$. This implies that for all ${\varepsilon}>0$ and $t$ large enough $${\mathbb{P}}(\Omega {\setminus}D)\leq {\varepsilon}+{\mathbb{P}}(F)\leq {\varepsilon}+ {\mathbb{P}}(L(0,E_1)-2t\leq -C y^2
t^{2\nu-\mu})\leq 2{\varepsilon}\textrm{, if }\mu<2\nu-\delta.$$ Let us now define the event $$Q= \{\omega \in \Omega {\textrm{ s.t. }}d(J(E_1,E_2)(\omega),U_t)\leq l\textrm{ with }l=t^{\mu}\}.$$ We need to prove that $$\label{eqB5}
\lim_{t\to\infty}{\mathbb{P}}(Q)=0\textrm{ for all }\mu<2\nu-1/3.$$ We consider the event $T= Q \cap D$ with $\mu<2\nu-1/3$. For any choice of $\mu<2\nu-1/3$, there exists a $\delta>1/3$ such that $\mu<2\nu-\delta$ is verified. Then for $t$ large enough we have ${\mathbb{P}}(T)\geq {\mathbb{P}}(D)+{\mathbb{P}}(Q)-1\geq {\mathbb{P}}(Q)-{\varepsilon}$, i.e. ${\mathbb{P}}(Q)\leq {\mathbb{P}}(T)+{\varepsilon}$.\
If $\omega \in T$ then the lowest maximizer from $0$ to $E_2$ intersects $\partial C(w,l)_{+}$ at some point $H$, $$H=\binom{t}{t}-\frac{\lambda}{\sqrt{2}}\binom{l}{l}+\frac{1}{\sqrt{2}}\binom{-w}{w}$$ with $\lambda \in (0,1]$ such that $(H)_1\leq t-yt^{\nu}$. We define $h(\omega)=z_j\in\mathcal{A}$ if $H(\omega)\in \overline{z_{j-1}z_j}$ (again with $h(\omega)=z_j$ if $H(\omega)=z_j$) and $h'(\omega)=h(\omega)+(1,1)/\sqrt{2}$. As before, for $\omega \in T$ we have $$L(0,E_2)\leq 2 t^\delta+2(\sqrt{a(0,h'(\omega))}+\sqrt{a(h(\omega),E_2)}).$$ In order to apply the geometric lemma we need to know the minimal distance $d_m$ between $\partial C(w,l)_{+}$ and the segment $\overline{0 E_2}$. We find $d_m=(\sqrt{2}-1)yt^\nu+{\mathcal{O}}(t^{\nu+\mu-1})$.\
Applying Lemma \[lemmageom\] we obtain $$\sqrt{a(0,h'(\omega))}+\sqrt{a(h(\omega),E_2)}\leq\sqrt{a(0,E_2)}-C'
t^{2\nu-\mu}$$ provided that $\mu<2\nu-1/3$. Therefore for all ${\varepsilon}>0$ and $\mu<2\nu-1/3$, $${\mathbb{P}}(Q)\leq{\mathbb{P}}(T)+{\mathbb{P}}(\Omega{\setminus}D)\leq {\mathbb{P}}(L(0,E_2)-2\sqrt{a(0,E_2)}\leq -C' t^{2\nu-\mu})+
2{\varepsilon}\leq 3{\varepsilon}$$ for $t$ large enough.
Density of branches
===================
We recall the definition (\[Nt\]) of the number of branches $N_t(s)$ at cross-section $U_s$.
\[thm2\] For $0\leq\mu<1$ the following lower bound holds, $$\lim_{t\to\infty}{\mathbb{P}}\left(N_t(t-t^\mu)\geq t^\sigma\right)=1$$ for all $\sigma<\frac{5}{6}-\frac{\mu}{2}.$
The first part of the proof closely follows that of Theorem \[thm1\] $ii)$.
As in Figure \[fig2\] let us consider two fixed points $$E_1=(t(1-k),t(1+k))\textrm{ and }E_2=E_1+t^\nu (-1,1)$$ with $k\in(-1,1)$. We look at the region closer than $l=t^\mu$ to the line $U_t$. We take $w=t^\nu/2$ and define $C(w,l)$, $\partial C(w,l)_+$ and $D=D(w,l)$ as in the previous proof. We divide $\overline{AB}$ into $K=[\vert AB\vert]+1$ equidistant points $z_j$, and define $z(\omega)$, $z'(\omega)$ and $E_z$ (see (\[eqB2\])) as in the previous proof. Equation (\[eqB1\]) also holds unchanged. Let $F= (\Omega {\setminus}D)\bigcap_{z\in\mathcal{A}}E_z$. Then for $\delta>1/3$ the proof of Lemma \[lemmalength\] also gives (see (\[EndLemma2\])) $${\mathbb{P}}\bigg(\bigcup_{z\in\mathcal{A}}\Omega{\setminus}E_z\bigg)\leq
1/t^2\textrm{ for }t\textrm{ large enough.}$$ Then $${\mathbb{P}}(\Omega{\setminus}D)\leq
t^{-2} + {\mathbb{P}}(F)$$ for $t$ large enough.
We still have (\[eqB3\]) for all $\omega\in F$ and (\[eqB4\]) becomes $$\sqrt{a(0,z'(\omega))}+\sqrt{a(z(\omega),E_1)}\leq \sqrt{a(0,E_1)}-C(k)t^{2\nu-\mu}/4.$$ Therefore, taking $2\nu-\mu>\delta$, for $t$ large enough $${\mathbb{P}}(F)\leq {\mathbb{P}}(L(0,E_1)\leq 2\sqrt{a(0,E_1)}-C(k) t^{2\nu-\mu}/4).$$ Let $\psi= \min\{2\nu-\mu,(1+\delta)/2\}$, then $${\mathbb{P}}(F)\leq {\mathbb{P}}(L(0,E_1)\leq 2\sqrt{a(0,E_1)}-C(k)t^\psi/4).$$ The $\psi$ is introduced in order to remain in the domain in which (\[lowLD\]) can be applied. The large deviation estimate leads to $${\mathbb{P}}(F)\leq c_1 \exp(-c_2 P(k) t^{3\psi-1})\textrm{ with }
P(k)=\frac{C(k)^3}{4^3 2\sqrt{1-k^2}}>0.$$ Therefore ${\mathbb{P}}(F)\leq 1/t^2$ for $t$ large enough. Consequently $${\mathbb{P}}(\Omega{\setminus}D)\leq t^{-2}+{\mathbb{P}}(F)\leq 2t^{-2}.$$
Define the set of events $$Q= \{\omega \in \Omega {\textrm{ s.t. }}d(J(E_1,E_2)(\omega),U_t)\leq l\textrm{ with }l=t^{\mu}\}.$$ We prove that for $t$ large enough $${\mathbb{P}}(Q)\leq t^{-2}\textrm{ for all }\mu<2\nu-1/3.$$ We consider the event $T= Q \cap D$ with $\mu<2\nu-1/3$. As in the previous proof, for $t$ large enough we have ${\mathbb{P}}(Q)\leq {\mathbb{P}}(T)+{\mathbb{P}}(\Omega{\setminus}D)$. If $\omega \in T$ then the lowest maximizer from $0$ to $E_2$ intersects $\partial C(w,l)_{+}$ at some point $H$. We define $h(\omega)$ and $h'(\omega)$ as in the previous proof and for $\omega \in T$ we have $$L(0,E_2)\leq 2 t^\delta+2(\sqrt{a(0,h'(\omega))}+\sqrt{a(h(\omega),E_2)}).$$ We compute the minimal distance $d_m$ between $\partial C(w,l)_{+}$ and the segment $\overline{0 E_2}$ finding $d_m=\left(\frac{\sqrt{2}}{\sqrt{1+k^2}}-\frac{1}{2}\right)t^\nu+{\mathcal{O}}(t^{\mu-1})$. Applying Lemma \[lemmageom\] we obtain $$\sqrt{a(0,h'(\omega))}+\sqrt{a(h(\omega),E_2)}\leq\sqrt{a(0,E_2)}-C'(k)
t^{2\nu-\mu}/2$$ provided that $\mu<2\nu-1/3$ and with $C'(k)=C(k)\left(\frac{\sqrt{2}}{\sqrt{1+k^2}}-\frac{1}{2}\right)^2$. Therefore, for $t$ large enough, $${\mathbb{P}}(T)\leq {\mathbb{P}}(L(0,E_2)-2\sqrt{a(0,E_2)}\leq -C'(k) t^{2\nu-\mu})
\leq {\mathbb{P}}(L(0,E_2)-2\sqrt{a(0,E_2)}\leq -C'(k) t^{\psi}).$$ Applying (\[lowLD\]) with $n=-C'(k) t^{\psi}$ we obtain $${\mathbb{P}}(T)\leq c_1 \exp(-c_2 C''(k) t^{3\psi-1})$$ with $C''(k)=C'(k)^3/2\sqrt{1-k^2}$. Therefore ${\mathbb{P}}(T)\leq 1/t^2$ for $t$ large enough. Finally, for $t$ large enough, $${\mathbb{P}}(Q)\leq {\mathbb{P}}(\Omega{\setminus}D)+{\mathbb{P}}(T)\leq 3/t^2$$ provided that $\mu<2\nu-1/3$.
Now we can prove the theorem. Let us fix $0<k_0\ll 1$ and $M=[(1-k_0)t^{1-\nu}]$. We choose $2M+1$ points on $U_t$ as follows: $T_0=(t,t)$ and $T_j=T_0+j t^\nu (-1,1)$ for $j=-M,\ldots,M$. Let $W(j)$ be the set of all intersections between the maximizers with end point at $T_j$ and the ones with end point at $T_{j+1}$. We define $m(j)$ to be the set of points of $W(j)$ whose distance to $U_t$ is at most $l=t^\mu$. Then $${\mathbb{P}}(\exists \, j {\textrm{ s.t. }}m(j)\neq \emptyset)={\mathbb{P}}\bigg(\bigcup_{j=-M}^{M-1}m(j)\neq
\emptyset\bigg)\leq 2 M \max_{j=-M,\ldots,M-1}{\mathbb{P}}(m(j)\neq \emptyset) \leq 6
t^{-1-\nu}$$ for $t$ large enough. Hence, as $t$ goes to infinity, at distance $t^\mu$ from $U_t$ with $\mu<2\nu-1/3$, there are at least $2M+1\sim t^{1-\nu}$ branches that have not yet merged, with probability tending to one. Since for $k_0\ll 1$, $2M+1 =2[(1-k_0)t^{1-\nu}]+1\geq t^{1-\nu}$, and since $\mu<2\nu-1/3$ is equivalent to $1-\nu<5/6-\mu/2$, we obtain for all $\sigma= 1-\nu<5/6-\mu/2$ $$\lim_{t\to\infty}{\mathbb{P}}\left(N_t(t-t^\mu)\geq t^\sigma\right)=1.$$
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Michael Prähofer for helping us to generate Figure \[simulation\].
[00000]{}
D. Aldous, P. Diaconis, *Longest increasing subsequences: from patience sorting to the Baik-Deift-Johansson Theorem*, Bull. Amer. Math. Soc. [**36**]{}, 413–432 (1999).

J. Baik, P.A. Deift, K. Johansson, *On the distribution of the length of the longest increasing subsequence of random permutations*, J. Amer. Math. Soc. [**12**]{}, 1119–1178 (1999).

D.S. Fisher, T. Hwa, *Anomalous fluctuations of directed polymers in random media*, Phys. Rev. B [**49**]{}, 3136–3154 (1994).

J.M. Hammersley, *A few seedlings of research*, Proc. Sixth Berkeley Symp. Math. Statist. and Probability [**1**]{}, 345–394, University of California Press (1972).

H. Kesten, *Aspects of first-passage percolation*, Lecture Notes in Math. [**1180**]{}, 125–264 (1986).

J. Krug, P. Meakin, T. Halpin-Healy, *Amplitude universality for driven interfaces and directed polymers in random media*, Phys. Rev. A [**45**]{}, 638–653 (1992).

K. Johansson, *Transversal fluctuations for increasing subsequences on the plane*, Probab. Theory Relat. Fields [**116**]{}, 445–456 (2000).

P. Meakin, *Fractals, Scaling and Growth Far From Equilibrium*, Cambridge University Press, Cambridge, 1998.

M.S.T. Piza, *Directed polymers in a random environment: some results on fluctuations*, J. Stat. Phys. [**89**]{}, 581–603 (1997).

M. Prähofer, H. Spohn, *Scale invariance of the PNG droplet and the Airy process*, J. Stat. Phys. [**108**]{}, 1071–1106 (2002).

C.A. Tracy, H. Widom, *Level-spacing distributions and the Airy kernel*, Comm. Math. Phys. [**159**]{}, 151–174 (1994).
---
address: 'Dipartimento di Matematica “F. Enriques" — Università degli Studi di Milano — via Saldini, 50 — I-20133 Milano — Italy'
author:
- 'Alberto S. Cattaneo'
title: 'Configuration Space Integrals and Invariants for 3-Manifolds and Knots'
---
[^1]
Introduction
============
In this paper we give a brief description of the way proposed in [@BC] of associating invariants of both 3-dimensional rational homology spheres (r.h.s.) and knots in r.h.s.’s to certain combinations of trivalent diagrams. In addition, we discuss the relation between this construction and Kontsevich’s proposal [@K].
The same diagrams appear in the LMO invariant [@LMO] for 3-manifolds, and it would be very interesting to know if there exists any relationship between the two approaches in the case of r.h.s.’s.
The reason for restricting here to r.h.s.’s is quite technical, as will be clear in Sect. \[sec-irhs\]. Until that point, without any loss of generality we can assume $M$ to be any connected, compact, closed, oriented 3-manifold.
Our construction yields the invariants in terms of integrals over a suitable compactification of the configuration space of points on $M$. More precisely, the number of points corresponds to the number of vertices in the trivalent diagram, and the integrand is obtained by associating to each edge in the diagram a certain 2-form that represents the integral kernel of an “inverse of the exterior derivative $d$."
One reason for constructing invariants in terms of “$d^{-1}$" comes from perturbative Witten–Chern–Simons theory [@W]. (More precisely, one should invert the covariant derivative with respect to a flat connection; so the present construction is related to the trivial-connection contribution.)
Another reason, which is perhaps more transparent to topologists, relies on the definition of the linking number of two curves in ${{\mathbb{R}}}^3$ as the intersection number of one curve with any surface cobounding the other. Thus, linking number may be defined in terms of an inverse of the boundary operator ${\partial}$. If one wants to represent the linking number with an integral formula (viz., Gauss’s formula), then one must consider the Poincaré duals of curves and apply “$d^{-1}$."
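For concreteness, we recall Gauss's formula (up to sign conventions): for two disjoint closed curves $K_1,K_2\subset{{\mathbb{R}}}^3$, $$\operatorname{lk}(K_1,K_2) = \frac1{4\pi}\oint_{K_1}\oint_{K_2}
\frac{(x-y)\cdot(dx\times dy)}{|x-y|^3},$$ i.e., the integral of the pullback of the unit volume form on $S^2$ by the map $(x,y)\mapsto(x-y)/|x-y|$.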
The exterior derivative $d$ is neither injective nor surjective. Thus, to invert it, one must restrict it to the complement of its kernel and invert it on its image. Notice that one needs an explicit choice of the complement of the kernel, and this introduces an element of arbitrariness in the construction. Actually, our main task will be to prove that the invariants we define are really independent of this arbitrary choice.
A general way of defining the inverse of $d$ is by introducing a [*parametrix*]{}, i.e., a linear operator on $\Omega^*(M)$ that decreases by one the form degree and satisfies the following equation: $$d\circ P + P\circ d = I - S,
{\label{dP}}$$ where $I$ is the identity operator and $S$ is a suitable smoothing operator such that eq. \[dP\] has a solution.
The definition of the parametrix is far from unique. For the choice of $S$ is to a large extent arbitrary and, moreover, given a solution $P$, we get another solution of the form $P+d\circ Q - Q\circ d$ for any linear operator $Q$ that decreases by two the form degree. These ambiguities reflect the ambiguities in defining an inverse of $d$.
One possible choice for the parametrix is the Riemannian parametrix $P_g$, which is based on the choice of a Riemannian metric $g$: $$P_g = d^*\circ (\Delta+{\operatorname{\pi_{\rm Harm}}})^{-1},
{\label{Pg}}$$ where $*$ is the Hodge-$*$ operator, $\Delta = d^*d+dd^*$ is the Laplace operator, and ${\operatorname{\pi_{\rm Harm}}}$ is the projection to harmonic forms. $P_g$ satisfies eq. \[dP\] with $S={\operatorname{\pi_{\rm Harm}}}$.
The main point is now to define an integral kernel (of course not unique) for $P$: i.e., we want to represent $P$ as a convolution. In the language of differential topology, convolution can be written as $$P\alpha = -\pi_{2*}({\hat\eta}\ \pi_1^*\alpha),
\qquad\alpha\in\Omega^*(M),$$ where $\pi_1$ and $\pi_2$ are the projections from $M\times M$ to either copy of $M$, and ${\hat\eta}$ is the integral kernel for $P$ (wedges will be understood throughout). For dimensional reasons it is clear that ${\hat\eta}$ must be a 2-form on $M\times M$. Because of the identity operator in eq. \[dP\], it is also clear that ${\hat\eta}$ cannot be a smooth form. This is a problem since we want to use ${\hat\eta}$ to define smooth invariants for the manifold $M$. The solution consists in replacing $M\times M$ with a suitable compactification $C_2(M)$ of the configuration space.
The ambiguities in the definition of the parametrix imply that the form ${\hat\eta}$ is not unique. Thus, the main point will be to prove that the invariants are independent of the choices involved in the construction of ${\hat\eta}$. The plan of the construction is then the following:
1. One constructs a form ${\hat\eta}$ to represent the integral kernel of a parametrix with a prescription on its behavior on the boundary of $C_2(M)$.
2. One introduces the unit interval $I$ as a space of parameters to take care of the arbitrary choices involved in this construction.
3. One associates to each trivalent diagram with $n$ vertices a function on $I$ by integrating pullbacks of the form ${\hat\eta}$ over a suitable compactification $C_n(M)$ (to be defined in Sect. \[sec-cs\]) of the configuration space of $n$ points on $M$. An invariant will then be a constant function on $I$.
4. If the integrand form is closed, then the differential of the corresponding function on $I$ is determined only by contributions on the boundary of $C_n(M)$. One shows then that if $M$ is a r.h.s., it is possible to associate a closed non-trivial integrand to each trivalent diagram.
5. Because of the prescribed behavior of ${\hat\eta}$ on the boundary, one can cancel the boundary contributions by summing up appropriate combinations of diagrams.
We discuss points 1. and 2. in Sect. \[sec-constr\] and points 3., 4. and 5. in Sect. \[sec-irhs\].
More concisely, the final statement is that to certain combinations (cocycles) of trivalent diagrams it is possible to associate a well-defined element of $H^{3n}(C_n(M),{\partial}C_n(M))$, where $M$ is a r.h.s. The invariant for $M$ is then obtained by comparing this element with the unit generator.
As we will see, the form ${\hat\eta}$ is not closed, and this accounts for the complications in point 4. This is due to the fact that the cohomology of $M$ is not trivial. There are however cases when one can obtain a closed form; viz.:
1. One can introduce a flat bundle $E$ over $M$ and consider the relevant covariant derivative instead of the exterior derivative. If the bundle is non trivial, it may happen that the complex $H^*(M;E)$ is acyclic. In this case, one can construct a covariantly closed form to represent the inverse of the covariant derivative.
2. If $M$ is a r.h.s., one can remove one point, thus obtaining a rational homology disc and, consequently, a closed form ${\hat\eta}$.
Case 1. was studied by Axelrod and Singer [@AS] in the Riemannian framework of eq. \[Pg\] (with $\Delta$ the covariant Laplace operator and, by hypothesis, ${\operatorname{\pi_{\rm Harm}}}=0$). For a more general treatment of this case, s. [@BC2]. Notice, however, that this approach does not apply in general: e.g., it does not even work for $S^3$, whose only flat connection is the trivial one.
Case 2. was proposed by Kontsevich [@K]. A realization of this proposal for the simplest invariant—known as the $\Theta$-invariant—was then studied by Taubes [@T]. He also proved his version of the $\Theta$-invariant to be trivial on integral homology spheres. This rules out any relationship with the LMO invariants which predict the $\Theta$-invariant on integral homology spheres to be the Casson invariant.
In Sect. \[sec-K\] we will apply Kontsevich’s proposal to the invariants constructed in [@BC] (and Sect. \[sec-irhs\]), and will compare our result with Taubes’s.
The above construction can be used to define invariants for knots in a r.h.s. as well, s. [@BC] (and Sect. \[sec-irhs\]).
Also in this case the construction is simplified if one gets a closed form ${\hat\eta}$. This happens, e.g., when $M={{\mathbb{R}}}^3$ [@BT; @AF], or when $M=\Sigma\times [0,1]$ with $\Sigma$ a connected, compact, closed, oriented 2-manifold [@AM].
Another possibility is an approach à la Kontsevich when $M$ is a r.h.s., s. Sect. \[sec-K\].
A compactification of configuration spaces
==========================================
[\[sec-cs\]]{}
In this section we assume $M$ to be a $d$-dimensional connected, compact, closed, oriented manifold.
The (open) configuration space $C_n^0(M)$ of $n$ points in $M$ is obtained by removing all diagonals from the Cartesian product $M^n$.
A $C^\infty$-compactification for these spaces was proposed in [@AS] (generalizing the algebraic compactification of [@FM]). This compactification is obtained by taking the closure $$C_n(M) \doteq \overline{C_n^0(M)} \subset
M^n\times\prod_{\substack{S\subset\{1,2,\dots,n\}\\
|S|\ge2}} Bl(M^S,\Delta_S),$$ where $Bl(M^S,\Delta_S)$ denotes the differential-geometric blowup obtained by replacing the principal diagonal $\Delta_S$ in $M^S$ with its unit normal bundle $N(\Delta_S)/{{\mathbb{R}}}^+$. Notice that $\Delta_S$ is diffeomorphic to $M$ and that $$N(\Delta_S)
\simeq TM^{\oplus S}/\text{global translations}.$$ Thus, the boundary of $Bl(M^S,\Delta_S)$ is a bundle over $M$ associated to the tangent bundle. Because of all the blowups, the spaces $C_n(M)$ turn out to be manifolds with corners ($C_2(M)$ is simply a manifold with boundary). The codimension-one components of the boundary of $C_n(M)$ are labeled by subsets of $\{1,\dots,n\}$. By permuting the factors, we can always put ourselves in the case when this subset is $\{1,\dots,k\}$, $2\le k\le n$. We will denote by ${\mathcal{S}}_{n,k}\subset {\partial}C_n(M)$ a face of this kind. Then we have the following functorial description of ${\mathcal{S}}_{n,k}$: $$\begin{CD}
{\mathcal{S}}_{n,k} \simeq (\pi_1)^{-1}\widehat{C}_k(TM) @>>> \widehat{C}_k(TM)\\
@VVV @VVV\\
C_{n-k+1}(M) @>>{\pi_1}> M
\end{CD}{\label{calS}}$$ Here $\pi_1$ is the projection onto the first copy of $M$ (i.e., where the first $k$ points have collapsed) and $\widehat{C}_k(TM)$ is a bundle associated to the tangent bundle of $M$ whose fiber $\widehat{C}_k({{\mathbb{R}}}^d)$, $d=\dim M$, is obtained from $({{\mathbb{R}}}^d)^k/G$—$G$ being the group of global translations and scalings—by blowing up all diagonals. More precisely, $$\widehat{C}_k({{\mathbb{R}}}^d) = \overline{{C_k^0({{\mathbb{R}}}^d)}
/G }
\subset\left(
({{\mathbb{R}}}^d)^k\times\prod_{\substack{S\subset\{1,2,\dots,k\}\\
|S|\ge2}} Bl(({{\mathbb{R}}}^d)^S,\Delta_S)\right)/G.$$ Notice that each $\widehat{C}_k({{\mathbb{R}}}^d)$ is a compact manifold with corners. In the simplest case we have $\widehat{C}_2({{\mathbb{R}}}^d)
=S^{d-1}$.
The diagonal action of $SO(d)$ on $({{\mathbb{R}}}^d)^k$ descends to $\widehat{C}_k({{\mathbb{R}}}^d)$. If we choose a Riemannian metric on $M$, we can write $$\widehat{C}_k(TM) =
OM \times_{SO(d)}\widehat{C}_k({{\mathbb{R}}}^d),$$ where $OM$ is the orthonormal frame bundle of $TM$. In particular, when $n=2$, we have $$C_2(M) = Bl(M\times M,\Delta),$$ and ${\partial}C_2(M) \simeq S(TM) = OM\times_{SO(d)}S^{d-1}$.
The construction of a parametrix
================================
[\[sec-constr\]]{}
In this section we assume that $M$ is any connected, compact, closed, oriented 3-manifold.
We start considering the following commutative diagram: $$\begin{CD}
{\partial}C_2(M) @>{\iota^{\partial}}>> C_2(M) \\
@V{\pi^{\partial}}VV @VV{\pi}V \\
\Delta @>>{\iota^\Delta}> M\times M
\end{CD}$$ Then we define the involution $T$ that exchanges the factors in $M\times M$. By abuse of notation, we will denote by $T$ also the corresponding involution on $C_2(M)$ and on its boundary ${\partial}C_2(M)$. On the latter $T$ acts as the antipodal map on the fiber crossed with the identity on the base. We will denote by $H^*_\pm$ the $+$ and $-$ eigenspaces of $T$ in the cohomology of any of the above spaces.
We will denote by $\chi_\Delta\in\Omega^3(M\times M)$ a representative of the Poincaré dual of the diagonal $\Delta$. Since $[\chi_\Delta]\in H^3_-(M\times M)$, there is really no loss of generality in choosing an odd representative.
On the sphere bundle ${\partial}C_2(M)\to\Delta$, one can introduce a global angular form $\eta$; i.e., a form $\eta\in
\Omega^2({\partial}C_2(M))$ with the following properties:
1. the restriction of $\eta$ to each fiber is a generator of the cohomology of the fiber;
2. $d\eta=-\pi^{{\partial}*}e$, where $e$ is a representative of the Euler class.
Since $M$ is 3-dimensional, the Euler class is trivial. Moreover, since $H^2_+(S^2)=0$, we may choose the global angular form to be odd. Since $T$ acts as the identity on the base, property 2. is then replaced by $d\eta=0$. We have the following
Given an odd global angular form $\eta$ and an odd representative $\chi_\Delta$ of the Poincaré dual of the diagonal $\Delta$ in $M\times M$, there exists a form ${\hat\eta}\in\Omega^2(C_2(M))$ with the following properties:
[\[propheta\]]{} $$\begin{aligned}
d{\hat\eta}&= \pi^*\chi_\Delta,\\
\iota_{\partial}^*{\hat\eta}&= -\eta,\\
T^*{\hat\eta}&= -{\hat\eta}.\end{aligned}$$
[\[prop-heta\]]{}
This is a simple generalization of the analogous proposition in [@BC] for the case when $M$ is a r.h.s.
Let $U$ be a tubular neighborhood of $\Delta$ in $M\times M$, and $\tilde U=\pi^{-1}U$ its preimage in $C_2(M)$. Then $\tilde U$ has the structure of ${\partial}C_2(M)\times [0,1]$. Let us denote by ${\partial}_0\tilde U = {\partial}C_2(M)$ and by ${\partial}_1\tilde U = \pi^{-1}{\partial}U$ the two boundary components of $\tilde U$.
Let $\rho$ be a function on $\tilde U$ which is constant and equal to $-1$ in a neighborhood of ${\partial}_0\tilde U$ and is constant and equal to $0$ in a neighborhood of ${\partial}_1\tilde U$. Moreover, assume that $\rho$ is even under the action of $T$.
Let $p:\tilde U\to {\partial}C_2(M)$ be the natural projection. Then consider the form $\tilde\eta = \rho\,p^*\eta$. Since $\eta$ is a global angular form, $d\tilde\eta = d\rho\,p^*\eta$ is a representative of the Thom class of the normal bundle of $\Delta$. Therefore, if we extend $\tilde\eta$ by zero on the whole of $C_2(M)$, we have that $d\tilde\eta$ is the pullback of a representative of the Poincaré dual of the diagonal.
This might not be our choice $\chi_\Delta$. In any case, $d\tilde\eta=\pi^*(\chi_\Delta + d\alpha)$, and it is not difficult to check that one can choose $\alpha\in\Omega^2_-(M\times M)$. So we set ${\hat\eta}= \tilde\eta - \pi^*\alpha$, and it is an immediate check that properties (\[propheta\]) hold.
Notice that, as is clear from the proof, the definition of ${\hat\eta}$ is not unique, even for fixed $\eta$ and $\chi_\Delta$.
With such a form ${\hat\eta}$ we can finally define a parametrix. In fact, let us denote by $\rho_1$ and $\rho_2$ the projections from $M\times M$ to each factor, and by $\pi_1$ and $\pi_2$ the corresponding projections from $C_2(M)$. By defining the push-forward as acting from the left, we have the following
If ${\hat\eta}\in\Omega^2(C_2(M))$ satisfies (\[propheta\]a) and (\[propheta\]b), then $$P\alpha = -\pi_{2*}({\hat\eta}\,\pi_1^*\alpha),
\qquad\alpha\in\Omega^*(M),$$ is a parametrix with $S\alpha = -\rho_{2*}(\chi_\Delta\,\rho_1^*\alpha)$.
The proof is a simple exercise of fiber integration in the case when the fiber has a boundary (s. [@BC]).
Property (\[propheta\]c) is not needed in the definition of the parametrix, but is natural and simplifies the writing of the invariants. Notice incidentally that the [*propagator*]{} in perturbative Witten–Chern–Simons theory is odd under the action of $T$, being related to the [*expectation value*]{} of a connection 1-form placed at two different points.
The global angular form
-----------------------
[\[ssec-gaf\]]{} To end with the construction, we have to specify a choice for the global angular form.
Following [@AS] and [@BC], we pick a Riemannian metric $g$. Then ${\partial}C_2(M) \simeq OM\times_{SO(3)} S^2$. Let $$p:OM\times S^2 \to OM\times_{SO(3)} S^2$$ be the natural projection, and let $\theta$ be a connection form on $OM$ (i.e., a metric connection). By abuse of notation, we will write $\theta$ also for its pullback to $OM\times S^2$.
We call a global angular form $\bar\eta\in\Omega^2(OM\times S^2)$ [*equivariant*]{} if it is a polynomial in $\theta$ and $d\theta$—with coefficients in $\Omega^*(S^2)$—and it is basic (i.e., $\bar\eta=p^*\eta$).
For a given $\theta$, the equivariant global angular form is unique and its $\theta$-independent part is the $SO(3)$-invariant unit volume form on $S^2$, which we will denote by $\omega$ throughout. See [@BC] for an explicit construction.
We assume that the restriction of ${\hat\eta}$ to the boundary is such that its pullback to $OM\times S^2$ is the equivariant global angular form.[\[cond-heta\]]{}
We denote by $\omega_{ij}$ the pullbacks of $\omega$ by the projections $\widehat{C}_n({{\mathbb{R}}}^3)\to\widehat{C}_2({{\mathbb{R}}}^3)=S^2$. Then a useful property proved in [@K] (s. also [@BC]) is expressed by the following
If $x_i$, $1\le i\le n$, is any coordinate in $\widehat{C}_n({{\mathbb{R}}}^3)$, then, for any indices $j$ and $k$ ($j\not=i$, $k\not=i$) $$\int_{x_i}\omega_{ij}\,\omega_{ik}=0.$$ [\[lem-K\]]{}
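One way to see this (a sketch; s. [@K; @BC] for details) is via the involution $x_i\mapsto x_j+x_k-x_i$, all other points being fixed. It reverses the orientation of the $x_i$-fiber (its linear part is $-I$ on ${{\mathbb{R}}}^3$), and, since $\omega$ is odd under the antipodal map, it satisfies $\omega_{ij}\mapsto-\omega_{ik}$ and $\omega_{ik}\mapsto-\omega_{ij}$; hence $$\int_{x_i}\omega_{ij}\,\omega_{ik} = -\int_{x_i}(-\omega_{ik})\,(-\omega_{ij})
= -\int_{x_i}\omega_{ij}\,\omega_{ik},$$ which forces the integral to vanish.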
Parametrizing the choices
-------------------------
We have constructed an integral kernel ${\hat\eta}$ for a parametrix depending on the following choices: a metric $g$, a compatible connection form $\theta$, a representative $\chi_\Delta$ of the Poincaré dual of $\Delta$ (and a function $\rho$ and a 2-form $\alpha$ as in the proof of Prop. \[prop-heta\]).
To take care of these choices, we introduce the parameter space $I=[0,1]$. Then, denoting by $\sigma$ the inclusion $M\times M\hookrightarrow
M\times M\times I$, we take $\chi_\Delta\in\Omega^3(M\times M\times I)$ such that $d\chi_\Delta=0$ and $\sigma^*\chi_\Delta$ is a representative of the Poincaré dual of $\Delta$. (We treat similarly $\rho$ and $\alpha$.)
As for $g$ and $\theta$, we operate as follows. We take a block-diagonal metric $g$ on $M\times I$ (i.e., a metric such that $g_{(m,t)}(v,w)=0$ for all $v\in T_mM$ and $w\in T_tI$), and consider the orthonormal frame bundle $\widetilde{OM}$ of $TM\to M\times I$. Then we choose a connection form $\theta$ on $\widetilde{OM}$ and define the equivariant global angular form on $\widetilde{OM}\times S^2$.
Using the projections $C_{n}(M)\times I\to C_2(M)\times I$, we can pull back the form ${\hat\eta}$ in $n(n-1)/2$ different ways which we denote by ${\hat\eta}_{ij}$.
We call a form in $\Omega^*(C_n(M))$ [*special*]{} if it is a product of pullbacks of ${\hat\eta}$.
Each special form can be graphically associated to a diagram, each edge representing a pullback of ${\hat\eta}$.
Let ${\mathcal{S}}_{n,k}$ denote, as in the previous section, the face in ${\partial}C_n(M)$ corresponding to the collapse of the first $k$ points (we consider only this case since all other codimension-one faces can be reduced to this one by simply applying a permutation of the factors). Let $\pi^{\mathcal{S}}$ denote the induced projection ${\mathcal{S}}_{n,k}\times I\to C_{n-k+1}(M)\times I$, and let $\pi_1:
C_{n-k+1}(M)\times I\to M\times I$ denote the projection on the first point (i.e., where the first $k$ points have collapsed). Then we have the following
If $\alpha\in\Omega^*({\mathcal{S}}_{n,k})$ is the restriction of a special form, then $$\pi^{{\mathcal{S}}}_*\alpha = \beta\,\pi_1^*\gamma,$$ where $\beta$ is special and $\gamma$ is either a constant or a multiple of the first Pontrjagin form $p_1$ associated to $\theta$. [\[lem-AS\]]{}
For the proof, s. [@AS] or [@BC]. Using the same notations, we also have the following
$\gamma$ (and hence $\pi_1^*\gamma$) is a constant in the case when no parameter space is introduced. [\[cor-AS\]]{}
An invariant for rational homology spheres
==========================================
[\[sec-irhs\]]{} In this section we assume $M$ to be a 3-dimensional r.h.s. We then choose a representative $v$ of the unit generator of $H^3(M)$, so we can take, as the Poincaré dual of the diagonal in $M\times M$, $$\chi_\Delta = v_2 - v_1,
{\label{v21}}$$ where $v_i=\rho_i^*v$, and $\rho_i$, $i=1,2$, is the projection to the $i$-th factor.
We now define our form ${\hat\eta}$ as in the preceding section—i.e., satisfying (\[propheta\]) and Condition \[cond-heta\]—with $\chi_\Delta$ as in eq. \[v21\].
Next we consider the three projections $\pi_{ij}:C_3(M)\to C_2(M)$, and write ${\hat\eta}_{ij}=\pi_{ij}^*{\hat\eta}$. Then eq. (\[propheta\]a) implies $d{\hat\eta}_{ij} = v_j-v_i$. Thus, we can define the following non-trivial closed form in $\Omega^2(C_3(M))$: $${\hat\eta}_{123} \doteq {\hat\eta}_{12}+{\hat\eta}_{23}+{\hat\eta}_{31}.$$ Notice that on any configuration space $C_n(M)$, $n>2$, we can analogously define closed forms ${\hat\eta}_{ijk}$ for any triple of distinct indices $i,j,k$.
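Indeed, the differential telescopes: $$d{\hat\eta}_{123} = (v_2-v_1)+(v_3-v_2)+(v_1-v_3) = 0.$$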
Now consider graphs with numbered vertices, and set equivalent to zero all graphs with an edge connecting a vertex to itself. We have then an induced orientation of the edges (viz., each edge is oriented from the lower to the higher end-point).
To each trivalent graph $\Gamma$ of the above type we can associate the following number: $$A_\Gamma(M) \doteq \int_{C_{n+1}(M)} v_0\,
\prod_{(ij)\in E(\Gamma)}{\hat\eta}_{ij0},
{\label{defAGamma}}$$ where $n$ is the number of vertices and $E(\Gamma)$ is the set of ordered edges. The point labeled by $0$ is an extra point and not a vertex of $\Gamma$. We extend $A_\Gamma$ to combinations of graphs by linearity.
We are interested in the dependence of $A_\Gamma$ on the choices in the construction of ${\hat\eta}$. So we introduce a parameter space $I$ as in the preceding section and consider $A_\Gamma$ as a function on $I$. (As for $v$, we take it in $\Omega^3(M\times I)$ and such that it is closed and its restriction to $M$ is a representative of the unit generator of $H^3(M)$.) Then we consider the differential of this function. Since the integrand form is closed, this differential is given only by boundary terms. These are dealt with by using Lemmata \[lem-K\] and \[lem-AS\].
$A_\Gamma$ can be defined also if $M$ is not a r.h.s. However, in this case the integrand form is not closed. So in differentiating $A_\Gamma$ we also have a bulk contribution which we do not know how to deal with.
We now define an operator $\delta$ that acts on graphs by contracting each edge one at a time, with a sign given by the parity of the higher end-point. In [@BC], it is shown that $\delta$ is a coboundary operator (i.e., $\delta^2=0$).
We call a [*cocycle*]{} a $\delta$-closed combination of graphs. We say that it is connected (trivalent) if all its terms are connected (trivalent) graphs.
Finally, we consider the Chern–Simons integral, $${{\rm CS}}(M,f)=-\frac1{8\pi^2}\int_Mf^*{\operatorname{Tr}}\left({
\theta\,d\theta +\frac23\,\theta^3}\right),$$ where $f$ is a section of $OM$ and $\theta$ is the same connection form as in the construction of ${\hat\eta}$. In [@BC], the following was proved:
If $\Gamma$ is a connected, trivalent cocycle, then there exists a constant $\phi(\Gamma)$ such that $$I_\Gamma(M,f) = A_\Gamma(M) +\phi(\Gamma)\,{{\rm CS}}(M,f)$$ is an invariant for the framed rational homology 3-sphere $M$. [\[thm-Gamma\]]{}
Instead of defining the equivariant global angular form, one could repeat the previous construction by choosing a trivialization of $S(TM)$ and by defining the global angular form to be the (pullback of the) $SO(3)$-invariant unit volume form $\omega$ on $S^2$ (as suggested in [@K]).
The equivariant treatment shows, cf. Thm. \[thm-Gamma\], that under a change of framing the invariants $A_\Gamma$ behave as multiples of the Chern–Simons integral. In particular, they are invariant under a homotopic change of framing. [\[rem-triv\]]{}
All graphs in a cocycle have the same number $n$ of vertices. If the cocycle is trivalent, this number is even. In this case, one can define $${\operatorname{ord}}\Gamma = \frac n2.$$ The constant $\phi(\Gamma)$ depends only on $\Gamma$ and not on $M$. One can show [@AS; @BC] that $\phi(\Gamma)=0$ if ${\operatorname{ord}}\Gamma$ is even. Moreover, in [@BC] it is shown that $\phi(\Theta)=1/4$, with $$A_\Theta = \int_{C_3(M)} v_0\,{\hat\eta}_{012}^3.
{\label{defATheta}}$$ This allows for the definition of the following unframed invariants (for $\Gamma\not=\Theta$): $$J_\Gamma(M) = A_\Gamma(M) - 4\,\phi(\Gamma)\, A_\Theta(M).$$ A computation in [@BC] shows that $J_\Gamma(S^3)=J_\Gamma(SO(3))=0$ if ${\operatorname{ord}}\Gamma$ is odd.
Knot invariants
---------------
In [@BC], invariants for knots in a r.h.s. are studied.
If $K$ is an imbedding $S^1\hookrightarrow M$, one has induced imbeddings $\widetilde C_n(S^1)\hookrightarrow C_n(M)$, where $\widetilde C_n(S^1)$ is the connected component of $C_n(S^1)$ defined by an ordering of the points on $S^1$. The configuration space $\widetilde{\mathcal{C}}_{n,t}^K(M)$ of $n$ points on the knot and $t$ points in $M$ is then defined by pulling back the bundle $C_{n+t}(M)\to
C_n(M)$.
All the forms introduced before can be pulled back to $\widetilde{\mathcal{C}}_{n,t}^K$, and by abuse of notation we will keep calling them with the same names. One should keep in mind, however, that the pulled-back forms depend on the imbedding $K$. This understood, one defines the self-linking number $${\operatorname{sln}}(K,M) \doteq \int_{{\widetilde{\mathcal{C}}}_{2,0}^K(M)} {\hat\eta}_{12}.$$
Now consider graphs with a distinguished loop, which represents the knot. We call [*external*]{} the vertices and the edges on this loop, and [*internal*]{} all the others. To a trivalent graph we can then associate the number $$A_\Gamma(K,M) \doteq \int_{{\widetilde{\mathcal{C}}}_{n,t+1}^K(M)} v_0\,
\prod_{(ij)\in I(\Gamma)} {\hat\eta}_{ij0},
{\label{AKM}}$$ where $n$ and $t$ are the numbers of external and internal vertices in $\Gamma$, and $I(\Gamma)$ is the set of internal edges. Again we extend $A_\Gamma$ to combinations of graphs by linearity.
Next we define a coboundary operator $\delta$ as before with the only difference that now $\delta$ does not contract internal edges connecting two external vertices. Graph combinations killed by $\delta$ are called cocycles. (An explicit computation of these cocycles is presented in [@AF].)
We name [*prime*]{} a graph which is connected after removing any pair of external edges (in [@BC] a graph of this kind was called connected). A cocycle will be called prime (trivalent) if all its terms are prime (trivalent). In [@BC], the following was proved:
If $K$ is a knot in the rational homology 3-sphere $M$, and $\Gamma$ is a prime, trivalent cocycle, then there exists a constant $\mu(\Gamma)$ such that $$I_\Gamma(K,M) = A_\Gamma(K,M) + \mu(\Gamma)\, {\operatorname{sln}}(K,M)$$ is a knot invariant. Moreover, $\mu(\Gamma)=0$ if ${\operatorname{ord}}\Gamma$ is even. [\[thm-GammaK\]]{}
Relationship with Kontsevich’s proposal
=======================================
[\[sec-K\]]{} As we recalled in the Introduction, in [@K] Kontsevich proposed a different way of constructing invariants for r.h.s.’s. His proposal differs from our construction since $i)$ one point is removed from $M$ in order to make its rational homology trivial, and $ii)$ the global angular form on the boundary of the configuration space is defined via a trivialization of $TM$. Let us consider part $i)$ of the proposal first.
We introduce the compactified configuration space $C_n(M,{{x_\infty}})$ of $n$ points on $M\backslash{{x_\infty}}$ (where ${{x_\infty}}$ is an arbitrary base point) as the fiber of $C_{n+1}(M)\to M$ over the point ${{x_\infty}}$ in the last copy of $M$; viz.: $$\begin{CD}
C_n(M,{{x_\infty}}) @>{p}>> C_{n+1}(M) \\
@VVV @VV{\pi_{n+1}}V \\
{{x_\infty}}@>>> M
\end{CD}$$ Consider now the projections $\pi_{ij}:C_{n+1}(M)\to C_2(M)$, $i<j\le n+1$, and set $p_{ij}=\pi_{ij}\circ p$ for $i<j\le n$ and $p_{i\infty}=\pi_{i,n+1}\circ p$ for $i\le n$. Then we will denote by ${\hat\eta}_{ij}$ and ${\hat\eta}_{i\infty}$ the pullbacks of ${\hat\eta}$ to $C_n(M,{{x_\infty}})$. Accordingly, we will define ${\hat\eta}_{ijk}$ and ${\hat\eta}_{ij\infty}$. On the boundary faces we will have the pullbacks of global angular forms $\eta_{ij}$ and $\eta_{i\infty}$. Observe that $\eta_{i\infty}=
\omega_{i\infty}$, with the notations of Lemma \[lem-K\].
We also have projections $C_n(M,{{x_\infty}})\to C_n(M)$. The forms ${\hat\eta}_{ij}$ we have written above can also be seen as the pullbacks of the forms with the same name on $C_n(M)$.
In particular, $C_1(M,{{x_\infty}})$ is just $M$ blown up at ${{x_\infty}}$, and ${\partial}C_1(M,{{x_\infty}})=S^2$. If we pull $v$ back by the projection $\tau:C_1(M,{{x_\infty}})\to M$, we get an exact form $\tau^*v=d w$, where the two-form $w$ restricted to the boundary must be a representative of the unit generator of $H^2(S^2)$. We may choose this representative to be $\omega$ since, by Thm. \[thm-Gamma\], the explicit choice of $v$ does not affect the invariant. We have then the following
If $\Gamma$ is a connected, trivalent cocycle and $M$ is a rational homology 3-sphere, then $$A_\Gamma(M) = A_\Gamma'(M) + B_\Gamma,$$ with $$\begin{aligned}
A_\Gamma'(M) &= \int_{C_{n}(M,{{x_\infty}})}
\prod_{(ij)\in E(\Gamma)}{\hat\eta}_{ij\infty},\\
B_\Gamma &= \int_{\widehat C_{n+2}({{\mathbb{R}}}^3)}
\omega_{0\infty}\, \prod_{(ij)\in E(\Gamma)}\omega_{ij0}.\end{aligned}$$ Moreover, if ${\operatorname{ord}}\Gamma$ is odd, $B_\Gamma$ vanishes. [\[thm-AGamma’\]]{}
Notice that $B_\Gamma$ does not depend on $M$ or on any arbitrary choice. Thus, even if it should not vanish, it would just represent a constant shift in the invariant. As a consequence, Thm. \[thm-Gamma\] holds with $A_\Gamma(M)$ replaced by $A_\Gamma'(M)$.
We can pull back the integrand form in eq. \[defAGamma\] to $C_{n+1}(M,{{x_\infty}})$ and integrate it there. Using the fact that the pullback of $v_0$ is exact, by Stokes’s theorem we can rewrite eq. \[defAGamma\] as $$A_\Gamma(M)=\int_{{\partial}C_{n+1}(M,{{x_\infty}})} w_0\,
\prod_{(ij)\in E(\Gamma)}{\hat\eta}_{ij0}.$$ The codimension-one faces in ${\partial}C_{n+1}(M,{{x_\infty}})$ are labeled by subsets of $\{0,1,\dots,n,\infty\}$. Denote by ${\mathcal{S}}$ any of these subsets.
Assume now that the cardinality of ${\mathcal{S}}'={\mathcal{S}}\cap\{1,\dots,n\}$ is $k$. Since points in ${\mathcal{S}}'$ label vertices in the graph and the graph is trivalent, we have the relation $$3k=2e+e_0,$$ where $e$ denotes the number of edges connecting points in ${\mathcal{S}}'$, and $e_0$ denotes the number of edges with exactly one end-point in ${\mathcal{S}}'$. Now we have four cases, according to $${\mathcal{S}}\backslash{\mathcal{S}}'=
\begin{cases}
\{0\} &\text{(a)}\\
\emptyset &\text{(b)}\\
\{0,\infty\} &\text{(c)}\\
\{\infty\} &\text{(d)}\\
\end{cases}$$ The cardinality $r$ of ${\mathcal{S}}$ is then $k+1$, $k$, $k+2$ and $k+1$ respectively. The boundary face labeled by ${\mathcal{S}}$ is a bundle over $C_{n+2-r}(M,{{x_\infty}})$ with projection $\pi^{{\mathcal{S}}}$ and fiber $\widehat{C}_r({{\mathbb{R}}}^3)$. So the fiber dimension is $3k-1$, $3k-4$, $3k+2$ and $3k-1$ respectively.
We now write the integrand form $\alpha$ restricted to this boundary as $(\pi^{{\mathcal{S}}*}\beta)\,\alpha'$, where $\beta$ is the product of the pullbacks of ${\hat\eta}$ corresponding to edges with at least one end-point not in ${\mathcal{S}}$, times $w_0$ in cases (a), (b) and (d).
In cases (b) and (d), the term ${\hat\eta}_{ij0}$ contributes to $\alpha'$ only if both $i$ and $j$ are in ${\mathcal{S}}'$. In cases (a) and (c), terms with either $i$ or $j$ in ${\mathcal{S}}'$ also contribute. Moreover, $w_0$ contributes to $\alpha'$ only in case (c). As a consequence, the degree of $\alpha'$ will be: (a) $2e+2e_0$, (b) $2e$, (c) $2e+2e_0+2$, (d) $2e$.
By using all the above results, we see that the degree of $\gamma$ in Corollary \[cor-AS\] is $e_0+1$, $4-e_0$, $e_0$ and $1-e_0$ respectively. Since $\gamma$ must be a constant zero-form, we see that the contribution of the face ${\mathcal{S}}$ vanishes unless we are in case (b) with $e_0=4$, in case (c) with $e_0=0$, or in case (d) with $e_0=1$. Notice, moreover, that we can replace ${\hat\eta}$ by $\omega$ in $\alpha'$. Thus, in the last case above we conclude that the contribution vanishes by Lemma \[lem-K\]. The first case is taken care of by the fact that $\Gamma$ is a cocycle.
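For the reader's convenience, here is the degree count behind these values: the fiber dimensions follow from $\dim\widehat C_r({{\mathbb{R}}}^3)=3r-4$ with $r=k+1,k,k+2,k+1$ in the four cases, and, using $3k=2e+e_0$, $$\deg\gamma = \deg\alpha' - \dim(\text{fiber}) =
\begin{cases}
(2e+2e_0)-(3k-1) = e_0+1 &\text{(a)}\\
2e-(3k-4) = 4-e_0 &\text{(b)}\\
(2e+2e_0+2)-(3k+2) = e_0 &\text{(c)}\\
2e-(3k-1) = 1-e_0 &\text{(d)}
\end{cases}$$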
We are then left with case (c) and $e_0=0$. Since $\Gamma$ is connected, there are only two possibilities: 1) only point $0$ has collapsed at ${{x_\infty}}$, 2) all points have collapsed at ${{x_\infty}}$. In case 1), $\alpha'=\omega_{0\infty}$ and the fiber is $S^2$. After this trivial integration we get $A_\Gamma'(M)$. Case 2) yields $B_\Gamma$.
To prove that $B_\Gamma$ vanishes if ${\operatorname{ord}}\Gamma = n/2$ is odd, consider the involution $x_i\to-x_i$, $i=0,1,\dots,n,\infty$. All the pullbacks of $\omega$ change signs. Since the number of edges is $3n/2$, the integrand form gets the sign $(-1)^{3n/2+1}$. On the other hand, since $\widehat C_{n+2}({{\mathbb{R}}}^3)$ is $S^{3n+2}$ with some submanifolds blown up, under the involution the orientation gets the sign $(-1)^{n+1}$.
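Explicitly, writing $n=2\,{\operatorname{ord}}\Gamma$, the two signs combine to give $$B_\Gamma = (-1)^{3n/2+1}\,(-1)^{n+1}\,B_\Gamma = (-1)^{5n/2+2}\,B_\Gamma
= (-1)^{{\operatorname{ord}}\Gamma}\,B_\Gamma,$$ since $5n/2+2\equiv n/2 \pmod 2$; hence $B_\Gamma=0$ when ${\operatorname{ord}}\Gamma$ is odd.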
In the particular case of the $\Theta$-invariant, eq. \[defATheta\], we have $$A_\Theta'(M) = \int_{C_2(M,{{x_\infty}})} {\hat\eta}_{12\infty}^3.$$ By our construction, ${\hat\eta}_{12\infty}$ is a closed form on $C_2(M,{{x_\infty}})$ which reduces to the global angular form when restricted to the faces $(1\infty)$, $(2\infty)$ and $(12)$.
As observed in remark \[rem-triv\], one can also modify the construction by choosing a trivialization of $S(TM)$ at the very beginning, and this corresponds to part $ii)$ of Kontsevich’s proposal. Invariance under homotopic changes of framing is then guaranteed (while under non-homotopic changes, the invariant behaves as $-1/4\,{{\rm CS}}$). In this case, we have the additional property that ${\hat\eta}_{12\infty}^2$ vanishes close to the faces $(1\infty)$, $(2\infty)$ and $(12)$. However, close to $(12\infty)$ neither ${\hat\eta}_{12\infty}^2$ nor ${\hat\eta}_{12\infty}^3$ vanish.
This is to be compared with Taubes’s invariant $$\widetilde A_\Theta(M) \doteq \int_{C_2(M,{{x_\infty}})} \omega^3,$$ where $\omega$ is a 2-form on $C_2(M,{{x_\infty}})$ with the following properties:
1. $\omega$ restricted to the faces $(1\infty)$, $(2\infty)$ and $(12)$ is a global angular form;
2. $\omega^2$ vanishes not only close to $(1\infty)$, $(2\infty)$ and $(12)$ but also close to $(12\infty)$.
The latter property is achieved only by choosing what Taubes names a [*singular framing*]{} for $T(M\backslash{{x_\infty}})$. As a consequence, $\omega^2$ (and hence $\omega^3$) is a form with compact support, and Taubes’s $\Theta$-invariant can actually be defined as an integral over the uncompactified configuration space $C_2^0(M,{{x_\infty}})$. Moreover, property 2. is crucial in Taubes’s proof that his invariant is trivial on integral homology spheres.
Now the main question is if there is any relationship between the two different ways, $A'_\Theta$ and $\widetilde A_\Theta$, of realizing Kontsevich’s proposal for the $\Theta$-invariant.
The case of knots
-----------------
Let us consider an imbedding $K$ of $S^1$ in the interior of $M\backslash{{x_\infty}}$. This induces imbeddings $\widetilde C_n(S^1)\hookrightarrow C_n(M,{{x_\infty}})$. By pulling back the bundles $C_{n+t}(M,{{x_\infty}})\to C_n(M,{{x_\infty}})$, we then obtain the configuration spaces ${\mathcal{C}}_{n,t}^{K}(M,{{x_\infty}})$. We have the following
If $\Gamma$ is a prime, trivalent cocycle and $K$ is a knot in the rational homology 3-sphere $M$, then $$\begin{aligned}
A_\Gamma(K,M) &= \int_{{\widetilde{\mathcal{C}}}_{n,t}^{K}(M,{{x_\infty}})}
\prod_{(ij)\in I(\Gamma)} {\hat\eta}_{ij\infty},\\
{\operatorname{sln}}(K,M) &= \int_{{\widetilde{\mathcal{C}}}_{2,0}^{K}(M,{{x_\infty}})} {\hat\eta}_{12}.\end{aligned}$$
In particular, if $M=S^3$ and we choose the Euclidean metric on ${{\mathbb{R}}}^3=S^3\backslash x_\infty$, we recover Bott and Taubes’s result [@BT]. As a consequence, the anomaly coefficients $\mu(\Gamma)$ are the same in the two cases.
We work as in the proof of Thm. \[thm-AGamma’\]. The only difference is that we must distinguish between the cases when the collapse is at a point on $K$ or otherwise.
Notice that, since ${{x_\infty}}$ does not belong to the image of $K$, there is no such term as $B_\Gamma$. For the same reason, when we consider a collapse at a point on $K$, we only have points in $\{0,1,\dots,n+t\}$. If $0$ is involved, the term vanishes since $w_0$ is basic and is a 2-form. If $0$ is not involved, reasoning as in the proof of Thm. \[thm-AGamma'\] and applying Corollary \[cor-AS\] shows that the term vanishes unless $e_0=2$. But this is taken care of by the fact that $\Gamma$ is a cocycle.
I thank J. E. Andersen, R. Bott, P. Cotta-Ramusino, N. Habegger, R. Longoni, G. Masbaum and P. Vogel for very useful discussions.
For partial support at Madeira conference on “Low Dimensional Topology," January 1998, I acknowledge the Center of Mathematical Sciences (CCM) PRAXIS XXI project. I especially thank H. Nencka for her efforts in organizing the conference.
I am finally thankful to the Institut de Mathematiques de Jussieu and to the University of Nantes for warm hospitality and financial support.
D. Altschuler and L. Freidel, “Vassiliev Knot Invariants and Chern–Simons Perturbation Theory to All Orders,” Commun. Math. Phys. **187** (1997), 261–287.

J. E. Andersen and J. Mattes, “Configuration Space Integrals and Universal Vassiliev Invariants over Closed Surfaces,” q-alg/9704019.

S. Axelrod and I. M. Singer, “Chern–Simons Perturbation Theory,” in *Proceedings of the XXth DGM Conference*, edited by S. Catto and A. Rocha (World Scientific, Singapore, 1992), pp. 3–45; “Chern–Simons Perturbation Theory. II,” J. Diff. Geom. **39** (1994), 173–213.

R. Bott and A. S. Cattaneo, “Integral Invariants of 3-Manifolds,” J. Diff. Geom. **48** (1998), 91–133.

R. Bott and A. S. Cattaneo, “Integral Invariants of 3-Manifolds. II,” math/9802062, to appear in J. Diff. Geom.

R. Bott and C. Taubes, “On the Self-Linking of Knots,” J. Math. Phys. **35** (1994), 5247–5287.

W. Fulton and R. MacPherson, “A Compactification of Configuration Spaces,” Ann. Math. **139** (1994), 183–225.

M. Kontsevich, “Feynman Diagrams and Low-Dimensional Topology,” First European Congress of Mathematics, Paris 1992, Volume II, *Progress in Mathematics* **120** (Birkhäuser, 1994), 120.

T. Q. T. Le, J. Murakami and T. Ohtsuki, “On a Universal Perturbative Invariant of 3-Manifolds,” Topology **37** (1998), 539–574.

C. Taubes, “Homology Cobordism and the Simplest Perturbative Chern–Simons 3-Manifold Invariant,” in *Geometry, Topology, and Physics for Raoul Bott*, edited by S.-T. Yau (International Press, Cambridge, 1994), pp. 429–538.

E. Witten, “Quantum Field Theory and the Jones Polynomial,” Commun. Math. Phys. **121** (1989), 351–399.
[^1]: Supported by MURST and partially by INFN
---
abstract: 'We propose a physical implementation of time and spatial parity transformations, as well as Galilean boosts, in a trapped-ion quantum simulator. By embedding the simulated model into an enlarged simulating Hilbert space, these fundamental symmetry operations can be fully realized and measured with ion traps. We illustrate our proposal with analytical and numerical techniques of prototypical examples with state-of-the-art trapped-ion platforms. These results pave the way for the realization of time and spatial parity transformations in other models and quantum platforms.'
author:
- 'Xiao-Hang Cheng'
- 'Unai Alvarez-Rodriguez'
- Lucas Lamata
- Xi Chen
- Enrique Solano
title: Time and spatial parity operations with trapped ions
---
Introduction
============
In the last decade, observing quantum phenomena that are difficult or even impossible to detect in the laboratory has been possible through the concept of quantum simulation [@qs]. Originally an idea of Richard Feynman [@Feynman], it is based on implementing a complex quantum dynamics on a controllable quantum system. Many proposals and experiments on quantum simulations in different controllable platforms such as trapped ions [@trapped; @ion; @trapped; @ion2; @trapped; @ion3], superconducting circuits [@superconducting1; @superconducting2; @superconducting3], ultracold gases [@ultracold; @gas1; @ultracold; @gas2], quantum photonics systems [@photonics1; @photonics2], and optical lattices [@optical; @lattice1; @optical; @lattices2], have been performed and have led to a deeper understanding of a wide variety of phenomena.
Up to now, quantum simulations with trapped ions have been proposed and realized for spin models [@spin; @models1; @spin; @models2; @spin; @models3], quantum field theories [@QFT], quantum phase transitions [@quantum; @phase; @transition], many-body systems [@fermion; @lattice; @Ising; @model; @Holstein; @model], fermionic and bosonic interactions [@fermionic; @and; @bosonic; @models], and relativistic quantum physics, including the Dirac equation and *Zitterbewegung* [@qs; @dirac] together with its realization in the laboratory [@nature], the Klein paradox [@Klein; @tunneling; @and; @Dirac; @potentials; @Klein; @Paradox], and interacting Dirac particles [@dirac; @particles], among others. Recently, an implementation of the Majorana equation and unphysical quantum operations was proposed [@majorana1] and experimentally realized [@SzameitMajo; @KihwanMajo]. In addition, U. Alvarez-Rodriguez *et al.* [@noncausal] have developed a mathematical formulation of an *enlarged space*, or *embedding space*, to perform linear transformations between space-time coordinates in a general quantum simulator. However, the implementation of these concepts in a trapped-ion simulator, including parity $\mathcal{P}$ operations, has not yet been analyzed.
In this Letter, we propose the realization of time and spatial parity operations, as well as Galilean boosts, in a trapped-ion quantum simulator. We perform analytical and numerical calculations in paradigmatic examples to illustrate our protocol, which is based on state-of-the-art trapped-ion technologies. We show that this proposal, including state initialization, dynamics, and measurement, can be efficiently implemented in current experiments. While Ref. [@noncausal] focuses on the underlying mathematical properties behind the theoretical protocol for Galilean transformations, here we propose a realistic implementation in trapped-ion systems. Moreover, we develop a toolbox for trapped ions that can be used to implement reference frame transformations for any given dynamical equations. New interesting types of simulations can emerge by adding the reference frame transformation to the toolbox of possible operations. This work significantly advances the field of quantum simulations of unphysical operations and establishes a path for implementing time and spatial parities in quantum optics systems. As a further scope, these results may allow us to enhance our capabilities when studying many-body interacting systems and their symmetries.
The formalism introduced in Ref. [@noncausal] allows us to implement the quantum simulation of reference frame transformations in the lab. This symmetry transformation is described by a linear relation between the initial $(t,x)$ and the final $(t',x')$ coordinates, $x'_i=\sum_{j} \alpha_{ij} x_j$, $i,j=0,1$. The spinor in the enlarged space is defined as $\Psi(x, t)=(\psi^e, \psi^o)^T$, where the even and odd parts of any wave function can be expressed as $\psi^{e,o}=\frac{1}{2}[\psi(x, t)\pm\psi(x', t')]$. Therefore, the dynamical information of $\psi(x,t)$ and $\psi(x',t')$ is encoded in the evolution of $\Psi$. Moreover, through a judicious choice of measurement observables, one can perform a reference frame transformation via a local $\sigma^z$ operator, or even observe spacetime correlation functions between different reference frames. Throughout this paper we consider the evolution in the simulated Hilbert space as given by the Schrödinger equation $i\partial_t \psi=-ic\partial_x\psi$, where $c$ is a simulated speed of light and we fix $\hbar=1$. The corresponding equation for $\Psi$ in the embedding space, for arbitrary Galilean transformations, may be written as $$i \partial_{t} \Psi = -i (\tilde{\alpha}_1 \mathbb{1} +\tilde{\alpha}_2 \sigma^x ) \partial_x \Psi,$$ where $\tilde{\alpha}_{1,2}=[c(\alpha_{11}\pm \alpha_{00}) \mp \alpha_{10}]/(2\alpha_{11})$. We now explain how to use this representation for the implementation in a trapped-ion system.
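As a sanity check of this encoding, the following minimal Python sketch (added for illustration; the wave functions and parameters are arbitrary choices, not from the text) builds the even/odd spinor from a wave function and its transformed partner, and recovers both via the $(1,1)$ readout and the local $\sigma^z$ operator:

```python
import numpy as np

# Encode a wave function and its reference-frame-transformed partner into the
# spinor components psi^{e,o} = [psi(x,t) +/- psi(x',t')]/2, then recover both.
x = np.linspace(-10, 10, 401)
psi = np.exp(-x**2 / 4) * np.exp(1j * 2.0 * x)      # psi(x, t), arbitrary example
psi_tr = np.exp(-x**2 / 4) * np.exp(-1j * 2.0 * x)  # psi(x', t'), e.g. x' = -x

Psi = np.stack([(psi + psi_tr) / 2,   # even component psi^e
                (psi - psi_tr) / 2])  # odd component psi^o

sz = np.diag([1.0, -1.0])
row = np.array([1.0, 1.0])            # the (1, 1) readout covector

recovered = row @ Psi                 # gives back psi(x, t)
transformed = row @ (sz @ Psi)        # gives back psi(x', t')

assert np.allclose(recovered, psi)
assert np.allclose(transformed, psi_tr)
```

The local $\sigma^z$ thus toggles between the two reference frames, exactly as used in the measurement scheme below.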
In the Lamb-Dicke regime, $\eta\sqrt{\langle(a+a^{\dag})^2\rangle}\ll1$, the Hamiltonian describing the interaction between an ion and a laser driving is [@trapped; @ion] $${\cal H}(t)=\Omega_0\sigma^+[1+i\eta(ae^{-i\nu t}+a^\dag e^{i\nu t})]e^{i(\phi-\delta t)}+{\rm H.c.},\nonumber$$ where $\delta=\omega-\omega_0$ is the laser detuning, $\eta=k\sqrt{1/2m\nu}$ is the Lamb-Dicke parameter [@trapped; @ion], $k$ is the wave number of the external field, $m$ is the mass of ion, $\nu$ is the frequency of a static potential harmonic oscillator, $\Omega_0$ is the coupling strength, $a$ and $a^{\dag}$ are the annihilation and creation operators of any suitable vibrational mode of the ion string, that we choose to be the center of mass motional mode, and $\phi$ is the field phase. In spin-1/2 language, $\sigma^+=|e\rangle\langle g|=(\sigma^x+i\sigma^y) / 2$, $\sigma^-=|g\rangle\langle e|=(\sigma^x-i\sigma^y) / 2$.
When $\delta=0$, a carrier resonance is obtained, with Hamiltonian $H_c=\Omega(\sigma^+e^{i\phi}+\sigma^-e^{-i\phi})$. A red sideband, also known as a Jaynes-Cummings (JC) interaction, is realized when $\delta=-\nu$; its Hamiltonian is $H_r=\tilde{\Omega}\eta(a\sigma^+e^{i\phi_r}+a^\dag\sigma^-e^{-i\phi_r})$. Conversely, when $\delta=\nu$, a blue sideband, or anti-Jaynes-Cummings (AJC) interaction, is achieved, with Hamiltonian $H_b=\tilde{\Omega}\eta(a^\dag\sigma^+e^{i\phi_b}+a\sigma^-e^{-i\phi_b})$. We now illustrate how to apply these techniques to generate the time and spatial parity transformations in trapped ions.
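The three resonances can be written explicitly as matrices on a truncated Fock space. The sketch below (illustrative only; the cutoff `N`, couplings and the single phase `phi` are arbitrary stand-ins for $\phi_r$, $\phi_b$) checks two expected properties: the sideband Hamiltonians are Hermitian, and the red-sideband (JC) term conserves the total excitation number $\sigma^+\sigma^-+a^\dag a$:

```python
import numpy as np

N = 10                                    # phonon cutoff (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # truncated annihilation operator
sp = np.array([[0, 1], [0, 0]])           # sigma^+ = |e><g|, with |e> first
sm = sp.T                                 # sigma^-

Omega, eta, phi = 1.0, 0.1, np.pi / 2     # placeholder parameters
kron = np.kron

H_c = Omega * (kron(sp, np.eye(N)) * np.exp(1j*phi)
               + kron(sm, np.eye(N)) * np.exp(-1j*phi))           # carrier
H_r = Omega * eta * (kron(sp, a) * np.exp(1j*phi)                 # red (JC)
                     + kron(sm, a.conj().T) * np.exp(-1j*phi))
H_b = Omega * eta * (kron(sp, a.conj().T) * np.exp(1j*phi)        # blue (AJC)
                     + kron(sm, a) * np.exp(-1j*phi))

# The JC term conserves the total excitation number sigma^+ sigma^- + a^dag a.
Nexc = kron(sp @ sm, np.eye(N)) + kron(np.eye(2), a.conj().T @ a)
assert np.allclose(H_r @ Nexc, Nexc @ H_r)
assert np.allclose(H_r, H_r.conj().T) and np.allclose(H_b, H_b.conj().T)
```

The blue sideband instead creates or destroys a spin excitation together with a phonon, which is what allows the $\sigma^x \hat{p}$ couplings used below.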
Time parity transformation
==========================
As a first example, we show how to use two trapped ions to simulate a time parity transformation, $(t, x)\rightarrow(-t, x)$, $(\alpha_{00}, \alpha_{01}, \alpha_{10}, \alpha_{11})=(-1, 0, 0, 1)$. Here, we choose a time-independent Hamiltonian in the simulated space, $H=H^e=c p$ with momentum $p>0$, which describes a massless Dirac Hamiltonian without the internal degree of freedom. The corresponding one-dimensional Schrödinger equation in the enlarged space can be expressed as $$i \partial_t \Psi = \sigma_1^x c \hat{p} \Psi.$$ The Hamiltonian in the enlarged space is $\mathcal{H}=\sigma_1^x\otimes H=\sigma_1^x c \hat{p}$, where $\sigma_1^x$ is the Pauli operator acting on ion 1; it can be realized by implementing a blue and a red sideband simultaneously [@trapped; @ion; @qs; @dirac], $$\mathcal{H}=\eta\tilde{\Omega}(\sigma_1^+ a^\dag e^{i\phi_b}+\sigma_1^- a e^{-i\phi_b})+\eta\tilde{\Omega}(\sigma_1^+ a e^{i\phi_r}+\sigma_1^- a^\dag e^{-i\phi_r}),$$ with proper phases for the blue and red sidebands, $\phi_b=\pi/2$, $\phi_r=-\pi/2$. Here, $\eta\tilde{\Omega}=\frac{c}{2\Delta}$ and $i(a^{\dag}-a)/2=\hat{p}\Delta$ with $\Delta=\sqrt{1/2m\nu}$. We depict in Fig. \[setup\] a scheme of the experimental setup with two ions interacting with lasers.
The initial state in the enlarged space is given as, $$\Psi(x,t=0)=\left(\begin{array}{cc} 1 \\ 0 \end{array}\right)\otimes \psi(x,t=0),$$ where $\psi(x,t=0)$ can be described as a Gaussian wave packet, $\psi(x,t=0)=\psi_0(x,t=0)e^{ip_0x}=(\sqrt{\sqrt{2\pi}\Delta})^{-1}e^{-x^2/4\Delta^2}e^{ip_0x}$. In a trapped-ion setup, this can be achieved by cooling the motional mode to the ground state, which is a Gaussian, and displacing it by simultaneous red and blue sidebands with the Hamiltonian $p_0 \hat{x} \sigma^x_2$, where $\hat{x}=\Delta (a+ a^\dag)$, with different phases in the $a$ and $a^\dag$ operators as compared to the simulating Hamiltonian $\mathcal{H}$. This will allow one to achieve an average $p_0$ momentum, using the auxiliary second ion initialized in an eigenstate of the $\sigma^x_2$ operator. After applying the evolution propagator $\exp(-i\mathcal{H}t)$, we can evolve the state for any time. The solution reads $$\begin{aligned}
&&\Psi(x,t)= \frac{1}{2\sqrt{\sqrt{2\pi}\Delta}} \\ &&\times\left(\begin{array}{cc} e^{-\frac{(ct+x)^2}{4\Delta^2}}e^{ip_0(ct+x)}+e^{-\frac{(ct-x)^2}{4\Delta^2}}e^{-ip_0(ct-x)} \\ -e^{-\frac{(ct+x)^2}{4\Delta^2}}e^{ip_0(ct+x)}+e^{-\frac{(ct-x)^2}{4\Delta^2}}e^{-ip_0(ct-x)} \end{array}\right).\nonumber\end{aligned}$$ Furthermore, the quantum states in the simulated spaces are obtained reversing the initial mapping, $$\begin{aligned}
\psi(x,t)&=&(1,1)\Psi(x,t)=\frac{1}{\sqrt{\sqrt{2\pi}\Delta}}e^{-\frac{(ct-x)^2}{4\Delta^2}}e^{-ip_0(ct-x)},\nonumber\\
\psi(x,-t)&=&(1,1)\sigma^z\Psi(x,t)=\frac{1}{\sqrt{\sqrt{2\pi}\Delta}}e^{-\frac{(ct+x)^2}{4\Delta^2}}e^{ip_0(ct+x)}.\vspace{-0.2cm}\nonumber\\\label{wavepacketstime}\end{aligned}$$
We plot in Fig. \[WavepacketsFig\](a) the initial wavepacket in the simulated space, and in Fig. \[WavepacketsFig\](b) the evolved and time-parity-transformed wavepackets in Eq. (\[wavepacketstime\]). We now calculate the position average values in the simulated space for the different reference frames and their correlation, $$\langle \hat{x} \rangle_{\psi(x,t)}=ct, \langle \hat{x} \rangle_{\psi(x,-t)}=-ct, \langle \hat{x} \rangle_{\psi(x,t),\psi(x,-t)}=0.$$
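These averages can be verified numerically by sampling the closed-form wavepackets of Eq. (\[wavepacketstime\]) on a grid; the sketch below (illustrative parameters of our choosing) confirms $\langle \hat{x} \rangle_{\psi(x,t)}=ct$ and $\langle \hat{x} \rangle_{\psi(x,-t)}=-ct$:

```python
import numpy as np

Delta, c, p0, t = 1.0, 1.0, 2.0, 3.0       # illustrative parameters
x = np.linspace(-40, 40, 8001)
dx = x[1] - x[0]
norm = 1 / np.sqrt(np.sqrt(2*np.pi) * Delta)

# Forward-evolved and time-parity-transformed Gaussian wavepackets.
psi_fwd = norm * np.exp(-(c*t - x)**2 / (4*Delta**2)) * np.exp(-1j*p0*(c*t - x))
psi_bwd = norm * np.exp(-(c*t + x)**2 / (4*Delta**2)) * np.exp(1j*p0*(c*t + x))

x_fwd = np.real(np.sum(np.conj(psi_fwd) * x * psi_fwd) * dx)
x_bwd = np.real(np.sum(np.conj(psi_bwd) * x * psi_bwd) * dx)

assert abs(x_fwd - c*t) < 1e-6   # <x>_{psi(x,t)}  = ct
assert abs(x_bwd + c*t) < 1e-6   # <x>_{psi(x,-t)} = -ct
```

The vanishing cross correlation follows from the same integrals: the cross Gaussian is even in $x$ while the integrand carries an extra factor of $x$.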
The spacetime correlations may be measured in different ways with current ion technology. The one we introduce here extends the physical principle employed in [@nature]. The measurement is performed upon the $\sigma_1^z$ observable of the first ion, associated with the enlarged space degree of freedom, via fluorescence detection. To achieve this, a state-dependent displacement operator $U=\exp(-ik\hat{x}\sigma_1^x/2)$ is applied to the internal state of this ion and the joint mode, with $A=U^\dag \sigma_1^z U=\cos(k\hat{x})\sigma_1^z+\sin(k\hat{x})\sigma_1^y$ [@nature]. In order to detect, e.g., the spacetime correlation $\langle \hat{x} \rangle_{\psi(x,t),\psi(x,-t)}$ in the simulated space, one should measure $\langle \hat{x} (\sigma_1^x +\mathbb{1}) \sigma_1^z \rangle_{\Psi(x,t)}$ following Eq. (\[wavepacketstime\]). Here, the operator $(\sigma_1^x+\mathbb{1}) \sigma_1^z$ acts on the qubit degree of freedom of the enlarged space, and the measurement can be decomposed into two parts, one for each summand, $\langle \hat{x} \sigma_1^z \rangle_{\Psi(x,t)}$, and $-i\langle \hat{x}\sigma_1^y \rangle_{\Psi(x,t)}$. These two measurements can be obtained from the derivative of the $A$ observable with respect to $k$ in the limit $k\langle \hat{x}\rangle\ll 1$, in which $\partial_k\langle A\rangle\approx\langle \hat{x}\sigma_1^y \rangle$, with a local rotation in the first case to change $\sigma_1^y$ into $\sigma_1^z$. Moreover, computing $\langle A\rangle$ for sizable $k$, for the cases of initial $\sigma_1^z$ and $\sigma_1^y$ eigenstates in the internal state, allows one to obtain $\cos(k\hat{x})$ and $\sin(k\hat{x})$. Via Fourier transform, we can obtain the position wavepacket probability distribution. The previous procedure enables us, among other things, to compute spacetime correlation functions without full tomography, which can reduce significantly the required resources.
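The operator identity behind this measurement scheme, $U^\dag \sigma_1^z U = \cos(k\hat{x})\sigma_1^z + \sin(k\hat{x})\sigma_1^y$, can be confirmed on a truncated Fock space with a short numerical sketch (cutoff and parameters are arbitrary choices, not experimental values):

```python
import numpy as np
from scipy.linalg import expm, cosm, sinm

N, Delta, k = 12, 1.0, 0.3                          # illustrative values
a = np.diag(np.sqrt(np.arange(1, N)), 1)
xop = Delta * (a + a.conj().T)                      # position operator x
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

kx = k * np.kron(np.eye(2), xop)                    # k x acts on the mode only
U = expm(-0.5j * np.kron(sx, np.eye(N)) @ kx)       # state-dependent displacement
A = U.conj().T @ np.kron(sz, np.eye(N)) @ U

A_expected = cosm(kx) @ np.kron(sz, np.eye(N)) + sinm(kx) @ np.kron(sy, np.eye(N))
assert np.allclose(A, A_expected)
```

The identity holds exactly because $k\hat{x}$ acts on the motional factor and therefore commutes with the spin rotation generated by $\sigma_1^x$.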
Spatial parity transformation
=============================
The second case we consider is the simulation of a spatial parity transformation, $(t, x)\rightarrow(t, -x)$, $(\alpha_{00}, \alpha_{01}, \alpha_{10}, \alpha_{11})=(1, 0, 0, -1)$. The initial state in the simulated space coincides with that of the time parity case, $\psi(x, t=0)=(\sqrt{\sqrt{2\pi}\Delta})^{-1}e^{-x^2/4\Delta^2}e^{i p_0 x}$, and so does the Hamiltonian in the simulated space, $H=H^e=c \hat{p}$. Clearly, $\psi(-x, t=0)=(\sqrt{\sqrt{2\pi}\Delta})^{-1}e^{-x^2/4\Delta^2}e^{-i p_0 x}$. The Hamiltonian in the enlarged space $\mathcal{H}$ goes to $\sigma_1^x \otimes H^e$, and the initial spinor in the enlarged space can be expressed as $$\Psi(x,0)=\frac{1}{2\sqrt{\sqrt{2\pi}\Delta}}e^{-\frac{x^2}{4\Delta^2}}\left(\begin{array}{cc}e^{i p_0 x}+e^{-i p_0 x} \\ e^{i p_0 x}-e^{-i p_0 x}\end{array}\right).$$
This can be achieved by initializing the internal state associated with the enlarged space in the $(1, 0)^T$ state, cooling the motional mode to the ground state, which is a Gaussian, and performing a conditional displacement of the motional state with the Hamiltonian $p_0 \hat{x}\sigma_1^x$. We point out that for spatial parity one ion suffices for both state initialization and simulation.
Accordingly, the time-evolved state in the enlarged space reads $$\begin{aligned}
&&\Psi(x,t)= \frac{1}{2\sqrt{\sqrt{2\pi}\Delta}}\\ &&\times\left(\begin{array}{cc} e^{-\frac{(ct-x)^2}{4\Delta^2}}e^{-ip_0(ct-x)}+e^{-\frac{(ct+x)^2}{4\Delta^2}}e^{-ip_0(ct+x)} \\ e^{-\frac{(ct-x)^2}{4\Delta^2}}e^{-ip_0(ct-x)}-e^{-\frac{(ct+x)^2}{4\Delta^2}}e^{-ip_0(ct+x)} \end{array}\right). \nonumber\end{aligned}$$ As in the previous case, in order to recover the wavefunction in each of the simulated spaces we add or subtract the even and odd components of the spinor $\Psi$. $$\begin{aligned}
\psi(x,t)&=&\frac{1}{\sqrt{\sqrt{2\pi}\Delta}} e^{\frac{-(ct-x)^2}{4\Delta^2}}e^{-ip_0(ct-x)},\label{SPp} \\
\psi(-x,t)&=&\frac{1}{\sqrt{\sqrt{2\pi}\Delta}} e^{\frac{-(ct+x)^2}{4\Delta^2}}e^{-ip_0(ct+x)}.\label{SPm}\end{aligned}$$ We plot in Fig. \[WavepacketsFig\](c) the evolved and spatial-parity-transformed wavepackets in Eqs. (\[SPp\]) and (\[SPm\]). We also obtain the expectation values in each frame, together with their correlation. $$\begin{aligned}
\langle \hat{x}\rangle_{\psi(x,t)}=ct, \qquad \langle \hat{x}\rangle_{\psi(-x,t)}=-ct, \nonumber\end{aligned}$$ $$\begin{aligned}
&&\langle \hat{x}\rangle_{\psi(x,t),\psi(-x,t)}=\langle\Psi(x,t)|\left(\begin{array}{cc}1 & -1 \\ 1 & -1\end{array}\right)\hat{x}|\Psi(x,t)\rangle\nonumber\\&&=\langle\Psi(x,t)|\sigma_1^z\hat{x}|\Psi(x,t)\rangle-i\langle\Psi(x,t)|\sigma_1^y\hat{x}|\Psi(x,t)\rangle\nonumber\\&&=\langle\Psi(x,t)|\sigma_1^z\hat{x}|\Psi(x,t)\rangle-i\langle\Phi(x,t)|\sigma_1^z\hat{x}|\Phi(x,t)\rangle\nonumber\\&&=-2ip_0\Delta^2e^{-\frac{(ct)^2}{2\Delta^2}}e^{-2p_0^2\Delta^2} ,\end{aligned}$$ where $\Phi(x,t)=e^{-i\pi \sigma_1^x/4}\Psi(x,t)$. These can be measured as in the previous example. We point out that the spatial parity case can be distinguished from the time parity case through the spacetime correlation, which is different in both cases. The computation of these correlations with our method makes full state tomography unnecessary.
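The closed-form correlator above can be cross-checked by direct numerical integration of the wavepackets in Eqs. (\[SPp\]) and (\[SPm\]); the sketch below (parameters are illustrative choices of ours) reproduces $-2ip_0\Delta^2 e^{-(ct)^2/2\Delta^2}e^{-2p_0^2\Delta^2}$:

```python
import numpy as np

Delta, c, p0, t = 1.0, 1.0, 0.5, 0.8       # illustrative parameters
x = np.linspace(-40, 40, 16001)
dx = x[1] - x[0]
norm = 1 / np.sqrt(np.sqrt(2*np.pi) * Delta)

psi = norm * np.exp(-(c*t - x)**2 / (4*Delta**2)) * np.exp(-1j*p0*(c*t - x))
psi_par = norm * np.exp(-(c*t + x)**2 / (4*Delta**2)) * np.exp(-1j*p0*(c*t + x))

# Cross-frame correlator  <psi(x,t)| x |psi(-x,t)>.
corr = np.sum(np.conj(psi) * x * psi_par) * dx
expected = -2j * p0 * Delta**2 * np.exp(-(c*t)**2 / (2*Delta**2)) \
           * np.exp(-2 * p0**2 * Delta**2)
assert abs(corr - expected) < 1e-6
```

Being purely imaginary and nonzero, this correlator indeed distinguishes spatial parity from time parity, whose correlator vanishes.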
Galilean Boost
==============
Here, we propose the simulation of a reference frame change associated with a Galilean boost, $(t, x)\rightarrow(t, x-vt)$, $(\alpha_{00}, \alpha_{01}, \alpha_{10}, \alpha_{11})=(1, 0, -v, 1)$, where $v$ is the relative velocity [@footnote]. Here, we consider the previous Hamiltonian and initial state in the simulated space. The corresponding Hamiltonian in the simulating enlarged space reads $$\mathcal{H}(t)=\underbrace{\left(c+\frac{v}{2}\right)\mathbb{1}\hat{p}}_{\mathcal{H}_1(t)}\underbrace{-\frac{v}{2}\sigma^x{\hat p}}_{\mathcal{H}_2(t)}. \label{grill}$$ Moreover, we can calculate the initial spinor state and compute the time evolution. The expression for the quantum states in the simulated spaces reads, $$\begin{aligned}
\psi(x,t)&=&\frac{1}{\sqrt{\sqrt{2\pi}\Delta}} e^{-\frac{(ct-x)^2}{4\Delta^2}}e^{-ip_0(ct-x)},\label{wavepacketsGalilean1}\\
\hspace{-0.3cm}\psi(x-vt,t)&=&\frac{1}{\sqrt{\sqrt{2\pi}\Delta}} e^{-\frac{(ct-x+vt)^2}{4\Delta^2}}e^{-ip_0(ct-x+vt)},\label{wavepacketsGalilean2}\end{aligned}$$
We plot in Fig. \[WavepacketsFig\](d) the evolution of the wavepackets with and without Galilean boost in Eqs. (\[wavepacketsGalilean1\]) and (\[wavepacketsGalilean2\]). Moreover, the expectation values for position $\hat{x}$ are $$\begin{aligned}
&&\langle \hat{x}\rangle_{\psi(x,t)}=ct, \hspace{0.25cm} \langle \hat{x}\rangle_{\psi(x-vt,t)}=(c+v)t,\\
&&\langle \hat{x}\rangle_{\psi(x,t),\psi(x-vt,t)}=\frac{1}{2}t(2c+v)e^{-\frac{t^2v^2}{8\Delta^2}}e^{-ip_0tv}.\end{aligned}$$
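These boosted-frame averages and the correlator can likewise be verified numerically from Eqs. (\[wavepacketsGalilean1\]) and (\[wavepacketsGalilean2\]) (parameters below are illustrative):

```python
import numpy as np

Delta, c, v, p0, t = 1.0, 1.0, 0.4, 0.5, 2.0   # illustrative parameters
x = np.linspace(-50, 50, 20001)
dx = x[1] - x[0]
norm = 1 / np.sqrt(np.sqrt(2*np.pi) * Delta)

psi = norm * np.exp(-(c*t - x)**2 / (4*Delta**2)) * np.exp(-1j*p0*(c*t - x))
psi_boost = norm * np.exp(-(c*t - x + v*t)**2 / (4*Delta**2)) \
            * np.exp(-1j*p0*(c*t - x + v*t))

mean = lambda f: np.sum(np.conj(f) * x * f).real * dx
assert abs(mean(psi) - c*t) < 1e-6             # <x> = ct in the lab frame
assert abs(mean(psi_boost) - (c + v)*t) < 1e-6 # <x> = (c+v)t in the boosted frame

corr = np.sum(np.conj(psi) * x * psi_boost) * dx
expected = 0.5*t*(2*c + v) * np.exp(-t**2 * v**2 / (8*Delta**2)) \
           * np.exp(-1j*p0*t*v)
assert abs(corr - expected) < 1e-6
```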
For the trapped-ion simulation, the initialization of the spinor can be done similarly to the time parity case. For the subsequent dynamics, we divide the Hamiltonian in Eq. (\[grill\]) into two parts to implement its evolution in the trapped-ion system. To realize $\mathcal{H}_1$ in the laboratory, we propose to use a second auxiliary ion initialized in an eigenstate of $\sigma_2^x$, $$\mathcal{H}_1|\Psi\rangle|+\rangle=\left(c+\frac{v}{2}\right)\mathbb{1}\hat{p}|\Psi\rangle|+\rangle\equiv\left(c+\frac{v}{2}\right)\sigma_2^x\hat{p}|\Psi\rangle|+\rangle.$$ Then, the Hamiltonian can be implemented as $$\label{H1Galilean}
\mathcal{H}_1'=\sigma_2^x\hat{p}\left(c+\frac{v}{2}\right)=i\eta\tilde{\Omega}_1\sigma_2^x(a^\dag-a),$$ with $\eta\tilde{\Omega}_1=(c+\frac{v}{2})/2\Delta$. Moreover, the second term in Eq. (\[grill\]) can be realized as $$\mathcal{H}_2=-\frac{v}{2}\sigma_1^x\hat{p}=i\eta\tilde{\Omega}_2\sigma_1^x(a^\dag-a),$$ with $\eta\tilde{\Omega}_2=-\frac{v}{4\Delta}$, through simultaneous red and blue sideband excitations upon the first ion.
Discussion
==========
To analyze the robustness of the simulating system, we computed the dynamics with a master equation including different decoherence sources. We considered unintended carrier transitions due to off-resonant coupling, heating $\Gamma_{h}$, phonon loss $\Gamma_{c}$, dephasing $\Gamma_\phi$, and spontaneous emission $\Gamma_{-}$, $$\begin{aligned}
\nonumber
\dot{\rho}=&&-i[{\cal H}_T,\rho]+\Gamma_{h} L(a^\dag)\rho+\Gamma_{c} L(a)\rho\\&&+\Gamma_\phi L(\sigma^z)\rho+\Gamma_{-}L(\sigma^{-})\rho,\label{LabMasterEq}\end{aligned}$$ where the Lindblad superoperators are $L(\hat{X})\rho=(2\hat{X}\rho \hat{X}^{\dag}-\hat{X}^{\dag} \hat{X}\rho-\rho \hat{X}^{\dag} \hat{X})/2$. Here, ${\cal H}_T$ is the trapped-ion Hamiltonian corresponding to each of the three cases analyzed, namely, time parity, spatial parity, and Galilean boost. We include carrier and counterrotating sideband terms in the dynamics, i.e., without performing the vibrational rotating-wave approximation. Therefore, this master equation accounts for all the significant decoherence and error sources present in current trapped-ion experiments. We plot in Figs. \[Cafidelities\](a)-(c) and Figs. \[Befidelities\](a)-(c) the fidelities of trapped calcium and beryllium ions after state initialization and evolution with the dynamics in Eq. (\[LabMasterEq\]) for the cases of time parity, spatial parity, and Galilean boost. For the state initialization part, we compute the dynamics with an equivalent master equation for the corresponding initialization Hamiltonian, as described for each case (a)-(c) in the text. We considered for the initialization time $p_0\Delta=\eta \tilde{\Omega} t=1$ in all cases. We point out that the relatively small atomic mass of beryllium implies a large trap frequency and a sizable Lamb-Dicke factor, which introduce correspondingly large decoherence rates in the realistic NIST experiments [@trapped; @ion2]. Our work thus shows the feasibility of the proposal in different trapped-ion setups.
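A minimal integrator for a master equation of this Lindblad form can be sketched as follows; for brevity we keep only the spontaneous-emission channel $\Gamma_- L(\sigma^-)$ on a single qubit (parameters are illustrative, not the experimental values used for the figures), and check trace preservation and the expected exponential decay:

```python
import numpy as np

# Single qubit, basis ordered (|e>, |g>); only spontaneous emission retained.
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma^-
H = np.zeros((2, 2), dtype=complex)              # no coherent drive in this sketch
G = 0.5                                          # decay rate Gamma_- (illustrative)

def lindblad_rhs(rho):
    # L(X)rho = (2 X rho X^dag - X^dag X rho - rho X^dag X)/2, as in the text.
    L = sm
    diss = 2*L @ rho @ L.conj().T - L.conj().T @ L @ rho - rho @ L.conj().T @ L
    return -1j*(H @ rho - rho @ H) + 0.5*G*diss

rho = np.diag([1.0 + 0j, 0.0])                   # start in the excited state
dt, T = 1e-3, 2.0
for _ in range(int(T/dt)):                       # fourth-order Runge-Kutta steps
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5*dt*k1)
    k3 = lindblad_rhs(rho + 0.5*dt*k2)
    k4 = lindblad_rhs(rho + dt*k3)
    rho = rho + dt/6*(k1 + 2*k2 + 2*k3 + k4)

assert abs(np.trace(rho) - 1) < 1e-8             # trace is preserved
assert abs(rho[0, 0].real - np.exp(-G*T)) < 1e-6 # excited population ~ e^{-Gamma t}
```

The full simulations add the remaining channels and the trapped-ion Hamiltonian ${\cal H}_T$ on the qubit-phonon space in exactly the same way.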
Conclusions
===========
To summarize, we have proposed the physical implementation of fundamental symmetry transformations, including time and spatial parity, with trapped ions. The formalism also permits performing Galilean boosts with the same technology. By embedding the simulated physical system into an enlarged Hilbert space encoded in the trapped-ion system, the proposed formalism can be carried out with current ion-trap setups. Furthermore, our work establishes a path for the realization of parity transformations in other quantum platforms and many-body interacting models.
Acknowledgements
================
We acknowledge funding from National Natural Science Foundation of China (61176118, 11474193), Shanghai Pujiang Program (13PJ1403000), Shuguang Program (14SG35), Program for Eastern Scholar, Specialized Research Fund for the Doctoral Program of Higher Education (2013310811003), Basque Government IT472-10 and BFI-2012-322, Spanish MINECO FIS2012-36673-C03-02, Ramón y Cajal RYC-2012-11391, UPV/EHU EHUA14/04, UPV/EHU Grant No. UFI 272 11/55, PROMISCE, and SCALEQIT EU projects.
[99]{}

I. M. Georgescu, S. Ashhab, and F. Nori, “Quantum Simulation”, Rev. Mod. Phys. **86**, 153 (2014).

R. Feynman, “Simulating Physics with Computers”, Int. J. Theor. Phys. **21**, 467 (1982).

D. Leibfried, R. Blatt, C. Monroe, and D. Wineland, “Quantum Dynamics of Single Trapped Ions”, Rev. Mod. Phys. **75**, 281 (2003).

H. Häffner, C. F. Roos, and R. Blatt, “Quantum Computing with Trapped Ions”, Phys. Rep. **469**, 155 (2008).

R. Blatt and C. F. Roos, “Quantum Simulations with Trapped Ions”, Nat. Phys. **8**, 277 (2012).

A. A. Houck, H. E. Türeci, and J. Koch, “On-chip Quantum Simulation with Superconducting Circuits”, Nat. Phys. **8**, 292 (2012).

D. Marcos, P. Rabl, E. Rico, and P. Zoller, “Superconducting Circuits for Quantum Simulation of Dynamical Gauge Fields”, Phys. Rev. Lett. **111**, 110504 (2013).

G. S. Paraoanu, “Recent Progress in Quantum Simulation Using Superconducting Circuits”, J. Low Temp. Phys. **175**, 633 (2014).

I. Bloch, “Ultracold Quantum Gases in Optical Lattices”, Nat. Phys. **1**, 23 (2005).

I. Bloch, J. Dalibard, and S. Nascimbène, “Quantum Simulations with Ultracold Quantum Gases”, Nat. Phys. **8**, 267 (2012).

B. P. Lanyon, J. D. Whitfield, G. G. Gillet, M. E. Goggin, M. P. Almeida, I. Kassal, J. D. Biamonte, M. Mohseni, B. J. Powell, M. Barbieri, A. Aspuru-Guzik, and A. G. White, “Towards Quantum Chemistry on a Quantum Computer”, Nat. Chem. **2**, 106 (2010).

A. Aspuru-Guzik and P. Walther, “Photonic Quantum Simulators”, Nat. Phys. **8**, 285 (2012).

L. Mazza, A. Bermudez, N. Goldman, M. Rizzi, M. A. Martin-Delgado, and M. Lewenstein, “An Optical-lattice-based Quantum Simulator for Relativistic Field Theories and Topological Insulators”, New J. Phys. **14**, 015007 (2012).

N. Szpak and R. Schützhold, “Optical Lattice Quantum Simulator for Quantum Electrodynamics in Strong External Fields: Spontaneous Pair Creation and the Sauter-Schwinger Effect”, New J. Phys. **14**, 035001 (2012).

D. Porras and J. I. Cirac, “Effective Quantum Spin Systems with Trapped Ions”, Phys. Rev. Lett. **92**, 207901 (2004).

H. Friedenauer, H. Schmitz, J. Glueckert, D. Porras, and T. Schätz, “Simulating a Quantum Magnet with Trapped Ions”, Nat. Phys. **4**, 757 (2008).

K. Kim, M.-S. Chang, S. Korenblit, R. Islam, E. E. Edwards, J. K. Freericks, G.-D. Lin, L.-M. Duan, and C. Monroe, “Quantum Simulation of Frustrated Ising Spins with Trapped Ions”, Nature **465**, 590 (2010).

J. Casanova, L. Lamata, I. L. Egusquiza, R. Gerritsma, C. F. Roos, J. J. García-Ripoll, and E. Solano, “Quantum Simulation of Quantum Field Theories in Trapped Ions”, Phys. Rev. Lett. **107**, 260501 (2011).

R. Islam, E. E. Edwards, K. Kim, S. Korenblit, C. Noh, H. Carmichael, G.-D. Lin, L.-M. Duan, C.-C. Joseph Wang, J. K. Freericks, and C. Monroe, “Onset of a Quantum Phase Transition with a Trapped Ion Quantum Simulator”, Nat. Comm. **2**, 377 (2011).

J. Casanova, A. Mezzacapo, L. Lamata, and E. Solano, “Quantum Simulation of Interacting Fermion Lattice Models in Trapped Ions”, Phys. Rev. Lett. **108**, 190502 (2012).

K. Kim, S. Korenblit, R. Islam, E. E. Edwards, M.-S. Chang, C. Noh, H. Carmichael, G.-D. Lin, L.-M. Duan, C. C. Joseph Wang, J. K. Freericks, and C. Monroe, “Quantum Simulation of the Transverse Ising Model with Trapped Ions”, New J. Phys. **13**, 105003 (2011).

A. Mezzacapo, J. Casanova, L. Lamata, and E. Solano, “Digital Quantum Simulation of the Holstein Model in Trapped Ions”, Phys. Rev. Lett. **109**, 200501 (2012).

L. Lamata, A. Mezzacapo, J. Casanova, and E. Solano, “Efficient Quantum Simulation of Fermionic and Bosonic Models in Trapped Ions”, EPJ Quantum Technology **1**, 9 (2014).

L. Lamata, J. León, T. Schätz, and E. Solano, “Dirac Equation and Quantum Relativistic Effects in a Single Trapped Ion”, Phys. Rev. Lett. **98**, 253005 (2007).

R. Gerritsma, G. Kirchmair, F. Zähringer, E. Solano, R. Blatt, and C. F. Roos, “Quantum Simulation of the Dirac Equation”, Nature **463**, 68 (2010).

J. Casanova, J. J. García-Ripoll, R. Gerritsma, C. F. Roos, and E. Solano, “Klein Tunneling and Dirac Potentials in Trapped Ions”, Phys. Rev. A **82**, 020101(R) (2010).

R. Gerritsma, B. P. Lanyon, G. Kirchmair, F. Zähringer, C. Hempel, J. Casanova, J. J. García-Ripoll, E. Solano, R. Blatt, and C. F. Roos, “Quantum Simulation of the Klein Paradox with Trapped Ions”, Phys. Rev. Lett. **106**, 060503 (2011).

L. Lamata, J. Casanova, R. Gerritsma, C. F. Roos, J. J. García-Ripoll, and E. Solano, “Relativistic Quantum Mechanics with Trapped Ions”, New J. Phys. **13**, 095003 (2011).

J. Casanova, C. Sabín, J. León, I. L. Egusquiza, R. Gerritsma, C. F. Roos, J. J. García-Ripoll, and E. Solano, “Quantum Simulation of the Majorana Equation and Unphysical Operations”, Phys. Rev. X **1**, 021018 (2011).

R. Keil, C. Noh, A. Rai, S. Stützer, S. Nolte, D. G. Angelakis, and A. Szameit, “Optical Simulation of Charge Conservation Violation and Majorana Dynamics”, Optica **2**, 454 (2015).

X. Zhang, Y. Shen, J. Zhang, J. Casanova, L. Lamata, E. Solano, M.-H. Yung, J.-N. Zhang, and K. Kim, “Time Reversal and Charge Conjugation in an Embedding Quantum Simulator”, Nat. Commun. **6**, 7917 (2015).

U. Alvarez-Rodriguez, J. Casanova, L. Lamata, and E. Solano, “Quantum Simulation of Noncausal Kinematic Transformations”, Phys. Rev. Lett. **111**, 090503 (2013).

We point out that our protocol allows us to perform Galilean group transformations but not purely Lorentz boosts. The reason is the need, in the latter case, to know in advance the wavefunction in the whole spacetime to generate the initial state in the transformed frame [@noncausal]. For the sake of clarity and ease of experimental implementation with ion traps, we consider throughout the paper a linear dispersion relationship for the simulated Schrödinger equation, and when applying the Galilean boost we regard this state as nonrelativistic, i.e., respecting Galilean invariance. A particular example where this takes place is the burgeoning field of graphene \[A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, “The Electronic Properties of Graphene”, Rev. Mod. Phys. **81**, 109 (2009)\]. Our techniques may be straightforwardly applied as well to perform Galilean group transformations on nonrelativistic dynamics of massive particles.
|
{
"pile_set_name": "ArXiv"
}
|
An individual particle driven through an overdamped medium exhibits a linear velocity vs applied force relation. When quenched disorder is added to the system, a critical threshold driving force must be applied before the particle can move, and once motion begins, the velocity-force curves can be nonlinear. Examples of overdamped systems with quenched disorder that exhibit threshold forces and nonlinear velocity force curves include driven vortices in type-II superconductors or Josephson junctions [@Dominguez], sliding charge density waves [@Fisher; @Higgins], driven magnetic bubble arrays [@Westervelt], and charge transport in assemblies of metallic dots [@Middleton]. These systems have been studied extensively and exhibit a variety of dynamic phases as well as power law velocity force curves.
Another system that has been far less studied is an overdamped particle moving in the [*absence*]{} of quenched disorder but in the presence of a disordered two-component background of non-driven particles. Since there is no quenched disorder, simple overdamped motion with an increased damping constant might be expected. Instead, a critical threshold force $F_{th}$ for motion exists and the velocity force curves are nonlinear with the power law form $V = (F -F_{th})^{\beta}$, as shown in recent simulations [@Hastings] and experiments [@Weeks] for individual colloids driven through a background of non-driven colloids. Unlike systems with quenched disorder, for $F<F_{th}$ the entire system must move along with the driven particle. For $F>F_{th}$ the driven particle is able to shear past its neighbors. The simulations give $\beta=1.5$ for colloids interacting with a screened Coulomb potential. The experiments were performed with lightly charged colloids where steric interactions are important, and give $\beta\approx 1.5$ in the lower density limit [@Weeks]. A particle moving through a viscous fluid can be regarded as interacting with many much smaller particles. In this limit, the surrounding medium can be replaced with a continuum and the velocity force curves are linear. In the case of a single driven colloid, it is not known when and how the dynamics change from nonlinear to linear, since it is expected that the system passes to the continuum limit when the colloids in the surrounding medium are small. This change could be a sharp transition, or it could occur as a cutoff of the scaling, or it could occur through a continuously changing exponent.
In this work we consider a single colloid of charge $q_d$ driven through a $T=0$ disordered two-component assembly of other colloids with average charge $q$. We find that when $q \approx q_d$, the velocity force curves have a power law form with $\beta > 1$ that is robust over two decades and for different system sizes. In this regime the velocity fluctuations of the driven colloid are highly intermittent and the colloid velocity $V$ frequently drops nearly to zero when background colloids trap the driven colloid until rearrangements release it and $V$ jumps back to a higher value. The fluctuations in $V$ have a highly skewed distribution and $1/f$ noise fluctuation properties. As $q_d$ increases for fixed drive $F_d$, the average velocity $V$ drops, the velocity fluctuations become Gaussian, and $\beta$ is reduced. When $q_d\gg q$, the driven colloid interacts with a large number of surrounding colloids and forms a circular depletion region, while for $q_d\ll q$, the background colloids act as stationary disorder and the velocity force curves are linear. Our system can be experimentally realized for dielectric colloids driven with optical traps or magnetic colloids driven with external magnetic fields. In systems where it is difficult to vary the charge on individual particles, $q_d$ could be increased by capturing a large number of particles in a single optical trap and dragging the assembly through the background. Other related systems include dragging different sized particles through granular media [@Albert]. There have been several proposals to use individual particle manipulation as a new microrheology method for examining frequency responses in soft matter systems [@Levine]. It would be valuable to understand under what conditions the individual particle is in the continuum overdamped regime or in the nonlinear regime in the dc limit.
We consider a substrate-free, zero temperature, two-dimensional system with periodic boundary conditions in the $x$ and $y$ directions. A binary mixture of $N=N_c-1$ background colloids, charged with a ratio $q_1/q_2=1/2$, interacts via a repulsive screened Yukawa potential, $V(r_{ij}) = (q_{i}q_{j}/|{\bf r}_{i} - {\bf r}_{j}|)
\exp(-\kappa|{\bf r}_{i} - {\bf r}_{j}|)$, where $q_{i(j)}$ is the charge and ${\bf r}_{i(j)}$ is the position of colloid $i$($j$) and $1/\kappa$
is the screening length, which is set to $2$ in all our simulations. Throughout the paper we refer to the average background charge $q=(q_1+q_2)/2$. The initial disordered configuration for the two-component background of colloids is obtained by annealing from a high temperature. An additional driven colloid with charge $q_d$ is placed in the system and a constant driving force ${\bf F}_{d}=F_d{\hat{\bf x}}$ is applied only to that colloid. The overdamped equation of motion for colloid $i$ is $$\eta\frac{d {\bf r}_{i}}{dt} = {\bf F}_{i}^{cc} + {\bf F}_{d} + {\bf F}_{T}$$ where ${\bf F}_{i}^{cc} = -\sum_{j \neq i}^{N_{c}}\nabla_i V(r_{ij})$, $\eta=1$, and the thermal force ${\bf F}_{T}$ comes from random Langevin kicks. We have considered various temperatures and discuss the $T=0$ case here. We have previously used similar Langevin dynamics for colloids under nonequilibrium and equilibrium conditions [@ourstuff]. The interaction range is assumed much larger than the physical particle size, and in this low volume fraction limit, hydrodynamic interactions can be neglected and may be strongly screened [@Riese]. To generate velocity-force curves, we set $F_{d}$ to a fixed value and measure the average velocity of the driven colloid $\langle V\rangle$ in the direction of the drive for several million time steps to ensure that a steady state is reached. The drive is then increased and the procedure repeated. Near the depinning threshold $F_d \gtrsim F_{th}$, the relative velocity fluctuations are strongly enhanced. In the absence of any other particles, the driven colloid moves at the velocity of the applied drive, giving a linear velocity force curve. In this work we consider system sizes of $L = 24$, 36, and 48 with a fixed colloid density of $1.1$. This is four times denser than the system considered in Ref. [@Hastings].
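The dynamics just described can be sketched in a few lines of code. The following is a minimal illustration (not the authors' code); the system size, time step, charges, and drive are illustrative values chosen so that the driven charge satisfies $q_d/q \approx 1.33$, and only the $T=0$ deterministic part of the Langevin equation is integrated with a forward-Euler step.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L_box, kappa, eta = 36, 12.0, 0.5, 1.0    # small illustrative system
# binary background charges with ratio q1/q2 = 1/2; mean q = 1.5
q = np.where(rng.random(N) < 0.5, 1.0, 2.0)
q[0] = 2.0                                   # particle 0 is the driven colloid, q_d/q ~ 1.33
g = int(np.ceil(np.sqrt(N)))
pos = (np.indices((g, g)).reshape(2, -1).T[:N] + 0.5) * (L_box / g)
pos = pos + 0.05 * rng.standard_normal((N, 2))   # slight initial disorder
F_d, dt = 2.0, 0.01

def forces(pos):
    d = pos[:, None, :] - pos[None, :, :]
    d -= L_box * np.round(d / L_box)         # periodic minimum-image convention
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)
    # -dV/dr for the Yukawa pair potential V = q_i q_j exp(-kappa r) / r
    mag = q[:, None] * q[None, :] * np.exp(-kappa * r) * (1 / r**2 + kappa / r)
    return (mag[..., None] * d / r[..., None]).sum(axis=1)

disp_x = 0.0                                 # unwrapped drift of the driven colloid
for _ in range(2000):                        # overdamped Euler updates at T = 0
    F = forces(pos)
    F[0, 0] += F_d                           # drive acts on colloid 0 only
    step = dt * F / eta
    disp_x += step[0, 0]
    pos = (pos + step) % L_box
```

In the actual study one would run millions of such steps per value of $F_d$ and average the driven particle's velocity; the sketch only shows the force evaluation and update structure.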
In Fig. 1 we show images from the two limits of our system. In Fig. 1(a) the driven colloid (large black dot) has $q_{d}/q = 1.33$ and is similar in charge to the colloids forming the surrounding disordered medium (smaller black dots). Fig. 1(b) illustrates the case $q_{d}/q = 67$, where a large depletion zone forms around the driven colloid.
To illustrate the scaling in the velocity force curves, we plot representative $V$ vs $F_{d} - F_{th}$ curves in Fig. 2 for
varied $q_{d}/q$ and different system sizes. Here $F_{th}$ is the threshold force and the charge of the driven colloid increases from the top curve to the bottom. All of the curves have a power law velocity force scaling of the form $$V \propto (F_{d} - F_{th})^{\beta}.$$ This scaling is robust over two decades in driving force. To test for finite size effects, we conducted simulations with systems of size $L = 24$, $L = 36$, and $L = 48$, indicated by different symbols in Fig. 2, and we find that the same scaling holds for all the system sizes. The scaling of the velocity force curves for the small charge $q_{d}/q = 0.25$ is linear, as seen in previous simulations [@Hastings]. In this regime the driven colloid does not cause any distortions in the surrounding medium as it moves. This situation is very similar to a single particle moving in a quenched background, where it is known that the velocity force curves scale linearly or sublinearly [@Fisher]. As the charge of the driven colloid increases, the scaling exponent initially rises, as shown for the case of $q_{d}/q = 1.33$ with $\beta = 1.54$, but the exponent decreases again for the more highly charged driven colloids, since $q_{d}/q = 13$ gives $\beta = 1.28$ and $q_{d}/q = 67$ gives $\beta = 1.13$. As $q_{d}/q$ increases, the average velocity at fixed $F_d-F_{th}$ decreases as more background colloids become involved in the motion.
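The exponent $\beta$ and the threshold $F_{th}$ in this scaling form are typically extracted by scanning candidate thresholds and keeping the one that best linearizes the data on a log-log plot. A sketch on synthetic data (the parameter values are ours, not the paper's fitted output):

```python
import numpy as np

# Synthetic velocity-force data obeying V = (F_d - F_th)^beta; in practice
# F_th is not known a priori, so it is scanned for the best log-log line.
beta_true, F_th_true = 1.5, 0.37
F_d = np.linspace(0.5, 2.5, 40)
V = (F_d - F_th_true) ** beta_true

best = None
for F_th in np.linspace(0.0, 0.45, 451):
    lx, ly = np.log(F_d - F_th), np.log(V)
    slope, icpt = np.polyfit(lx, ly, 1)
    res = np.sum((ly - (slope * lx + icpt)) ** 2)   # misfit of the log-log line
    if best is None or res < best[0]:
        best = (res, F_th, slope)
_, F_th_fit, beta_fit = best
print(F_th_fit, beta_fit)   # recovers ~0.37 and ~1.5
```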
We plot the changes in the scaling exponent $\beta$ with varying $q_{d}/q$ from a series of simulations in Fig. 3, which shows three regions. For low $q_{d}/q$, the motion is mainly elastic with $\beta$ near 1. As $q_d/q$ approaches 1, the driven colloid charge becomes of the same order as that of the surrounding medium and the motion becomes plastic with $\beta \approx 1.5$. As $q_d/q$ increases further, $\beta$ decreases approximately logarithmically toward 1, indicating that the motion of the surrounding medium is becoming more continuum-like. The maximum value of
$\beta$ falls at higher $q_d/q$ for less dense systems, such as that in Ref. [@Hastings], and at lower $q_d/q$ for denser systems.
For driven colloids with $q_d/q\ll 1$, the background colloids act as a stationary disorder potential. The driven colloid passes through this potential, deviating around background colloids as necessary, but the background colloids do not respond to the presence of the driven particle and remain essentially fixed in their locations. In contrast, when $q_d/q\gg 1$, the driven colloid strongly distorts the background and forms a depletion zone which moves with the driven colloid. Thus a large number of background colloids must rearrange in order to allow the driven colloid to pass, producing a continuum-like behavior. Between these two limits, when $q_d/q \approx 1$, no depletion zone forms, but when the driven colloid moves, it distorts the background, which deforms plastically in order to allow the driven colloid to pass. Here we find intermittent motion in which the driven colloid sometimes slips past a background colloid similar to the $q_d/q\ll 1$ case, but at other times becomes trapped behind a background colloid and pushes it over some distance, similar to the $q_d/q\gg 1$ case. It is in the regime of this complex motion, when all of the charges are similar in magnitude, that the strongest deviation from linear response, $\beta \approx 1.5$, occurs.
We next consider the velocity fluctuations of the driven colloid in the regime where $\beta \approx 1.5$ as well as in the high $q_d/q$ regime where $\beta$ starts to approach 1. In Fig. 4(a) we plot a segment of the time series of the instantaneous driven colloid velocity for the case of $q_{d}/q = 1.33$ at a drive producing an average velocity of $V = 0.0425$. The motion is highly intermittent and at times the colloid temporarily stops moving in the direction of the drive. When the driven colloid is trapped, strain accumulates in the surrounding media until one or more of the surrounding colloids suddenly shifts by a distance larger than the average interparticle spacing and the driven particle begins to move again. As the drive is further increased, the length of the time intervals during which the driven
colloid is stopped decreases.
In Fig. 4(b) we show $V(t)$ for a system with $q_{d}/q = 67$ where the applied force gives the same average velocity as in Fig. 4(a). Here the amplitude of the velocity fluctuations is much smaller than the $q_d/q=1.33$ case and there are no intermittent stall periods. This strongly charged driven colloid is interacting with a much larger number of surrounding colloids than the weakly charged driven colloid would, and as a result, it cannot be trapped behind a single background colloid, giving much smoother motion. We note that we find no intermittent behavior for the strongly charged driven colloid even at the lowest applied forces. We also measured the variance of the transverse velocity fluctuations $V_y$, and find that it decreases with increasing $q_d/q$ roughly as a power law with an exponent of -1.7. This decrease occurs since a larger number of background colloids are contributing to the fluctuations experienced by the driven particle, leading to a smoother signal.
In Fig. 4(c) we plot the histogram of the velocity fluctuations $P(V)$ for the time series shown in Fig. 4(a). The fluctuations are non-Gaussian and are heavily skewed toward the positive velocities with a spike at $V = 0$ due to the intermittency. We note that in simulations of vortex systems where nonlinear velocity force scaling occurs, bimodal velocity distributions are also observed when individual vortices are intermittently pinned for a period of time before moving again [@Marchetti]. Non-Gaussian velocity fluctuations are also found in sheared dusty plasmas [@Woon].
For higher drives, we find that the average velocity increases; however, the histograms remain highly skewed for all charge and drive regimes where the scaling in the velocity force curves give a large $\beta \sim 1.5$. For comparison, in Fig. 4(d) we plot $P(V)$ for the large $q_d/q$ system shown in Fig. 4(b). Here the histogram has very little skewness and fits well to a Gaussian distribution. For other drives at this charge ratio we observe similar Gaussian distributions of the velocity. In general, we find decreasing skewness in the velocity distributions as $q_{d}/q$ increases. The Gaussian nature of the fluctuations is also consistent with the interpretation that, as $q_d/q$ becomes large, the system enters the continuum limit.
In Fig. 4(e), we show that the power spectrum $S(\nu )$ for the time series in Fig. 4(a) has a $1/f^{\alpha}$ form with $\alpha = 0.8$. Throughout the $\beta\approx 1.5$ regime we find similar spectra with $\alpha = 0.5$ to $1.1$, indicative of intermittent dynamics. For comparison, in Fig. 4(f) we show that $S(\nu )$ for the high $q_d/q$ case has a white velocity spectrum characteristic, indicative of the absence of long time correlations in the velocity. In general, $\alpha$ decreases with increasing $q_{d}/q$. For the small charge regime of $q_{d}/q\ll 1$, where the driven colloid moves without distorting the background, we also observe a white noise spectrum. Here the velocity fluctuations are determined by the static configuration of the background particles, and $P(V)$ does not show any intermittent periods of zero velocity because if the colloid becomes trapped in this regime, there can be no rearrangements of the surrounding medium to untrap it.
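The $1/f^{\alpha}$ characterization used here amounts to a log-log fit of the periodogram of the velocity time series. As a self-contained illustration (with synthetic noise, not the simulation signal), one can shape white noise in Fourier space to have a $1/f^{\alpha}$ spectrum and then recover $\alpha$ from the fitted slope:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 2**14, 0.8
f = np.fft.rfftfreq(n, d=1.0)

# Synthesize a 1/f^alpha signal: shape complex white noise by f^(-alpha/2).
shape = np.zeros_like(f)
shape[1:] = f[1:] ** (-alpha / 2)
spec = shape * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))
v = np.fft.irfft(spec, n)

# Periodogram S(nu) and a log-log fit of its slope; slope ~ -alpha.
S = np.abs(np.fft.rfft(v)) ** 2 / n
slope, _ = np.polyfit(np.log(f[1:]), np.log(S[1:]), 1)
print(slope)   # close to -alpha
```

The fitted slope fluctuates around $-\alpha$ because each periodogram bin is itself a random variable; in practice one would average over windows or bins before fitting.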
To explore the transient behavior of the system, we consider the effect of a suddenly applied subthreshold drive of $F_{d}/F_{th} = 0.8$. In Fig. 5(a) we show the transient velocity response for a system with $q_{d}/q = 1.33$. Here the velocity relaxation is consistent with a power law decay, $V(t) \propto t^{-1.1}$. The driven colloid translates by several lattice constants before coming to rest with respect to the surrounding medium. The large velocity oscillations that appear at longer times are due to the local plastic rearrangements of the surrounding colloids as the driven colloid passes. We find a power law decay in the transient response for values of $q_{d}/q$ that give $\beta > 1.28$. If the suddenly applied subthreshold drive is small enough that no local rearrangements of the surrounding medium are possible, then an exponential decay of the velocity occurs instead. In Fig. 5(b) we show the transient response for the case of $q_{d}/q = 67$. Here the relaxation is fit to $V(t) \propto \exp(-t)$. The large velocity fluctuations that appeared for the smaller $q_{d}$ are absent. For all the large values of $q_{d}/q$ we observe an exponential velocity relaxation, and we also find exponential relaxation for very small charges $q_{d}/q \ll 1.0$.
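The two transient forms reported here can be told apart by which transformation linearizes the decay: $\log V$ versus $\log t$ for the power-law (plastic) regime, and $\log V$ versus $t$ for the exponential (continuum) regime. A small illustration on synthetic decays:

```python
import numpy as np

t = np.linspace(1.0, 50.0, 500)

# Power-law transient (plastic regime): log V is linear in log t.
V_pl = t ** -1.1
p_slope, _ = np.polyfit(np.log(t), np.log(V_pl), 1)   # recovers -1.1

# Exponential transient (continuum regime): log V is linear in t.
V_ex = np.exp(-t)
e_rate, _ = np.polyfit(t, np.log(V_ex), 1)            # recovers -1.0
```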
In summary, we have studied a single colloid with varying charge driven through a disordered background of other colloids in the absence of quenched disorder. When the charge of the driven colloid is close to the same as that of the surrounding colloids, a nonlinear power law velocity force curve appears with an exponent near $\beta=1.5$. In this regime, the time dependent velocity is intermittent with a highly skewed velocity distribution and $1/f^{\alpha}$ noise fluctuations. As the charge of the driven colloid is increased, the velocity force characteristic becomes more linear while the effective damping from the background colloids increases. The number of colloids that interact with the driven colloid increases and the velocity fluctuations become Gaussian with a white noise spectrum. We predict that in the nonlinear regime, the transient velocity responses are of a power law form, while in the linear regime the transient velocity is exponentially damped. We interpret our results as a crossover in the response of the background colloids from intermittent dynamics when the driven and background colloids are similarly charged to continuum dynamics for a highly charged driven colloid.
We thank E. Weeks and M.B. Hastings for useful discussions. This work was supported by the U.S. Department of Energy under Contract No. W-7405-ENG-36.
D. Domínguez, Phys. Rev. Lett. [**72**]{}, 3096 (1994); S. Bhattacharya and M.J. Higgins, [*ibid.*]{} [**70**]{}, 2617 (1993).
D.S. Fisher, Phys. Rev. B [**31**]{}, 1396 (1985).
S. Bhattacharya, M.J. Higgins, and J.P. Stokes, Phys. Rev. Lett. [**63**]{}, 1503 (1989).
J. Hu and R.M. Westervelt, Phys. Rev. B [**51**]{}, R17279 (1995).
A.A. Middleton and N.S. Wingreen, Phys. Rev. Lett. [**71**]{}, 3198 (1993); R. Parthasarathy, X.-M. Lin, and H.M. Jaeger, [*ibid.*]{} [**87**]{}, 186807 (2001); C. Reichhardt and C.J. Olson Reichhardt, [*ibid.*]{} [**90**]{}, 046802 (2003).
M.B. Hastings, C.J. Olson Reichhardt, and C. Reichhardt, Phys. Rev. Lett. [**90**]{}, 098302 (2003).
P. Habdas, D. Schaar, A.C. Levitt, and E.R. Weeks, Europhys. Lett. [**67**]{}, 477 (2004).
I. Albert [*et al.*]{}, Phys. Rev. E [**64**]{}, 061303 (2001).
A.J. Levine and T.C. Lubensky, Phys. Rev. Lett. [**85**]{}, 1774 (2000).
C. Reichhardt and C.J. Olson, Phys. Rev. Lett. [**89**]{}, 078301 (2002); [**90**]{}, 095504 (2003).
D.O. Riese [*et al.*]{}, Phys. Rev. Lett. [**85**]{}, 5460 (2000).
M.C. Faleski, M.C. Marchetti, and A.A. Middleton, Phys. Rev. B [**54**]{}, 12427 (1996).
W.Y. Woon and L. I, Phys. Rev. Lett. [**92**]{}, 065003 (2004).
---
abstract: 'For each $n \geq 2$ we construct a measurable subset of the unit ball in ${\mathbb{R}}^n$ that does not contain pairs of points at distance 1 and whose volume is greater than $(1/2)^n$ times the volume of the unit ball. This disproves a conjecture of Larman and Rogers from 1972.'
address:
- 'F.M. de Oliveira Filho, Delft Institute of Applied Mathematics, Delft University of Technology, Van Mourik Broekmanweg 6, 2628 XE Delft, The Netherlands.'
- 'F. Vallentin, Mathematisches Institut, Universität zu Köln, Weyertal 86–90, 50931 Köln, Germany.'
author:
- Fernando Mário de Oliveira Filho
- Frank Vallentin
date: 11 March 2019
title: A counterexample to a conjecture of Larman and Rogers on sets avoiding distance 1
---
Larman and Rogers [@LarmanR1972 Conjecture 1] conjectured: “Suppose that the distance $1$ is not realized in a closed subset $S$ of a spherical ball $B$ of radius $1$. Then the Lebesgue measure of $S$ is less than $(1/2)^n$ times the Lebesgue measure of $B$.” Since an open ball of radius $1/2$ does not have pairs of points at distance 1, this bound would be tight. Croft, Falconer, and Guy [@CroftFG1991 page 178] comment on the planar case of this conjecture, saying about the optimal set $S$ (they disregard the requirement that it has to be closed): “Surely it must be a disk of radius $1/2$, but this seems hard to prove.”
The following simple construction shows that the conjecture is wrong in all dimensions $n \geq 2$. Let $e_1 = (1, 0, \ldots, 0) \in {\mathbb{R}}^n$ and write $a = (1/6) (1 + \sqrt{10})$. Let $$T_n = \{\, x \in {\mathbb{R}}^n : x_1 > 1/2,\ \|x-a e_1\| < 1/2,\ \|x\| < 1\,\}$$ and set $S_n = T_n \cup -T_n$. The set $T_n$ is the intersection of the open ball of radius $1/2$ centered at $a e_1$ with the open ball of radius 1 centered at the origin and the open halfspace $x_1 >
1/2$. It is easy to see that $T_n$ does not contain pairs of points at distance 1, and hence neither does $S_n$. This is the counterexample to the conjecture[^1], as is shown below; see also Figure \[fig:mosquito\].
![The set $S_2$. The parameter $a$ is chosen so that $a e_1$ is equidistant to the hyperplane $x_1 = 1/2$ and the hyperplane that contains the intersection between the spheres $\|x\| = 1$ and $\|x - a e_1\| = 1/2$; this choice for $a$ maximizes the volume of $T_n$.[]{data-label="fig:mosquito"}](mosquito-1.pdf)
The volume of the unit ball $B_n$ is $v_n = \pi^{n/2} / \Gamma(1 +
n/2)$. So $$\operatorname{vol}T_n = \int_{1/2 - a}^{a - 1/2} v_{n-1} (1/4 - x^2)^{(n-1)/2}\, dx
+ \int_{2a - 1/2}^1 v_{n-1}(1 - x^2)^{(n-1)/2}\, dx.$$ This gives $\operatorname{vol}S_2 / \operatorname{vol}B_2 = 0.2848\ldots$ and $\operatorname{vol}S_3 / \operatorname{vol}B_3 = 0.1563\ldots$.
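The two slice integrals above are elementary to evaluate numerically. The following sketch (our code, using a simple trapezoidal rule) reproduces the stated ratios and checks that they indeed exceed $(1/2)^n$:

```python
import numpy as np
from math import gamma, pi, sqrt

def ratio(n, m=200001):
    # vol(S_n)/vol(B_n) by quadrature of the slice formula above
    a = (1 + sqrt(10)) / 6
    v = lambda k: pi ** (k / 2) / gamma(1 + k / 2)   # volume of the unit k-ball
    trap = lambda y, t: float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2)
    t1 = np.linspace(0.5 - a, a - 0.5, m)
    t2 = np.linspace(2 * a - 0.5, 1.0, m)
    vol_T = (trap(v(n - 1) * (0.25 - t1 ** 2) ** ((n - 1) / 2), t1)
             + trap(v(n - 1) * (1 - t2 ** 2) ** ((n - 1) / 2), t2))
    return 2 * vol_T / v(n)

# rounded values 0.2849 and 0.1563, i.e. 0.2848... and 0.1563... as in the text
print(round(ratio(2), 4), round(ratio(3), 4))
```

Both values are comfortably above $(1/2)^2 = 0.25$ and $(1/2)^3 = 0.125$, respectively.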
Actually, for $n \geq 3$ it suffices to use the lower bound $$\operatorname{vol}T_n \geq \int_{1/2 - a}^{a - 1/2} v_{n-1} (1/4 -
x^2)^{(n-1)/2}\, dx$$ to disprove the conjecture. It is known that the volume of the unit ball is concentrated around the equator[^2]: if $n \geq 3$ and $c \geq 1$, then $$\frac{\operatorname{vol}\{\, x \in B_n : |x_1| \leq c / \sqrt{n-1}\,\}}{\operatorname{vol}B_n}
\geq 1 - (2 / c) e^{-c^2 / 2}.$$ From this inequality one gets immediately the asymptotic relation $$\label{eq:asymp}
\frac{\operatorname{vol}S_n}{\operatorname{vol}B_n} = (2 - o(1)) (1/2)^n > (1/2)^n.$$ In other words, in high dimension it is almost possible to fit two balls of radius $1/2$ inside the unit ball instead of only one, as conjectured by Larman and Rogers. Choosing an appropriate constant $c$, one shows that $\operatorname{vol}S_n / \operatorname{vol}B_n > (1/2)^n$ for all $n \geq 15$; the remaining cases can be checked directly.
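The concentration effect behind the asymptotic relation is easy to see numerically: keeping only the equatorial slab of $T_n$ (the first integral above) already gives a lower bound whose ratio to $(1/2)^n$ increases toward $2$. A sketch (our code, illustrative values of $n$):

```python
import numpy as np
from math import gamma, pi, sqrt

a = (1 + sqrt(10)) / 6
v = lambda k: pi ** (k / 2) / gamma(1 + k / 2)   # volume of the unit k-ball

def ratio_lb(n, m=100001):
    # lower bound for vol(S_n)/vol(B_n): keep only the equatorial slab of T_n
    t = np.linspace(0.5 - a, a - 0.5, m)
    y = v(n - 1) * (0.25 - t ** 2) ** ((n - 1) / 2)
    return 2 * float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2) / v(n)

scaled = [ratio_lb(n) * 2 ** n for n in (5, 10, 20, 40)]
print(scaled)   # increasing toward 2; already above 1 for these n
```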
One of the original motivations of Larman and Rogers in proposing their conjecture is that it is related to another, still open conjecture of L. Moser, also popularized by Erdős, on the global behavior of sets avoiding distance 1. This conjecture states that the upper density[^3] of any measurable subset of ${\mathbb{R}}^2$ containing no pair of points at distance 1 is less than $1/4$; Larman and Rogers’ conjecture would imply that any such subset of ${\mathbb{R}}^2$ has upper density at most $1/4$. Note that Larman and Rogers’ conjecture, if it were true, still would not imply Moser’s conjecture. Indeed, Larman and Rogers’ conjecture says that a *closed* subset of the unit disk that avoids distance 1 has area *less than* $1/4$ times the area of the unit disk; this in turn implies that the area of a measurable subset of the unit disk that avoids distance 1 is *at most* $1/4$ times the area of the unit disk.
The construction of $S_n$ shows that the local behavior resembles the double cap conjecture [@Kalai2015 Conjecture 2.8], which states that the union of two antipodal spherical caps of radius $\pi/4$ each is a maximum-area subset of the unit sphere having no pairs of orthogonal vectors; see DeCorte, Oliveira, and Vallentin [@DeCorteOV2018] for more information on these conjectures.
[7]{} K. Ball, An elementary introduction to modern convex geometry, in: [*Flavors of Geometry*]{}, Mathematical Sciences Research Institute Publications 31, Cambridge University Press, Cambridge, 1997, pp. 1–58.
A. Blum, J. Hopcroft, and R. Kannan, [*Foundations of Data Science*]{}, 2018, <http://www.cs.cornell.edu/jeh>.
H.T. Croft, K.J. Falconer, and R. Guy, [*Unsolved problems in geometry*]{}, Springer-Verlag, New York, 1991.
E. DeCorte, F.M. de Oliveira Filho, and F. Vallentin, Complete positivity and distance-avoiding sets, arXiv:1804.09099, 2018, 58pp.
G. Kalai, Some old and new problems in combinatorial geometry I: around Borsuk’s problem, in: [*Surveys in combinatorics 2015*]{}, London Mathematical Society Lecture Note Series 424, Cambridge University Press, Cambridge, 2015, pp. 147–174.
D.G. Larman and C.A. Rogers, The realization of distances within sets in Euclidean space, [*Mathematika*]{} 19 (1972) 1–24.
J. Matoušek, [*Lectures on discrete geometry*]{}, Graduate Texts in Mathematics 212, Springer-Verlag, New York, 2002.
[^1]: Since Larman and Rogers ask for a closed set, take closed inner approximations of $S_n$.
[^2]: See e.g. Theorem 2.7 in Blum, Hopcroft, and Kannan [@BlumHK2018]; Ball [@Ball1997] and Matoušek [@Matousek2002] present analogous results for the sphere instead of the ball.
[^3]: The *upper density* of a Lebesgue-measurable set $X \subseteq {\mathbb{R}}^n$ is $$\sup_{p\in{\mathbb{R}}^n}\limsup_{T\to\infty} \frac{\operatorname{vol}(X \cap (p +
[-T, T]^n))}{\operatorname{vol}[-T,T]^n}.$$ Intuitively, it is the fraction of space covered by $X$.
---
abstract: |
In this expository paper we survey some recent results on Dirichlet problems of the form $Lu=f(x,u)$ in $\Omega$, $u\equiv0$ in $\R^n\backslash\Omega$. We first discuss in detail the boundary regularity of solutions, stating the main known results of Grubb and of the author and Serra. We also give a simplified proof of one of such results, focusing on the main ideas and on the blow-up techniques that we developed in [@RS-K; @RS-stable]. After this, we present the Pohozaev identities established in [@RS-Poh; @RSV; @Grubb-Poh] and give a sketch of their proofs, which use strongly the fine boundary regularity results discussed previously. Finally, we show how these Pohozaev identities can be used to deduce nonexistence of solutions or unique continuation properties.
The operators $L$ under consideration are integro-differential operators of order $2s$, $s\in(0,1)$, the model case being the fractional Laplacian $L=(-\Delta)^s$.
address: 'The University of Texas at Austin, Department of Mathematics, 2515 Speedway, Austin, TX 78751, USA'
author:
- 'Xavier Ros-Oton'
title: |
Boundary regularity, Pohozaev identities\
and nonexistence results
---
[^1]
Introduction
============
This expository paper is concerned with the study of solutions to $$\label{pb}
\left\{ \begin{array}{rcll}
L u &=&f(u)&\textrm{in }\Omega \\
u&=&0&\textrm{in }\R^n\backslash\Omega,
\end{array}\right.$$ where $\Omega\subset\R^n$ is a bounded domain, and $L$ is an elliptic integro-differential operator of the form $$\label{L}\begin{split}
Lu(x)=\textrm{P.V.}&\int_{\R^n}\bigl(u(x)-u(x+y)\bigr)K(y)dy,\\
K\geq0,\qquad K(y)=K(-y),&\qquad\textrm{and}\qquad \int_{\R^n}\min\bigl\{|y|^2,1\bigr\}K(y)dy<\infty.
\end{split}$$ Such operators appear in the study of stochastic processes with jumps: Lévy processes. In the context of integro-differential equations, Lévy processes play the same role that Brownian motion plays in the theory of second order PDEs. In particular, the study of such processes leads naturally to problems posed in bounded domains like .
Solutions to are critical points of the nonlocal energy functional $$\mathcal E(u)=\frac12\int_{\R^n}\int_{\R^n}\bigl(u(x)-u(x+y)\bigr)^2K(y)dy\,dx-\int_\Omega F(u)dx$$ among functions $u\equiv0$ in $\R^n\setminus\Omega$. Here, $F'=f$.
Here, we will work with operators $L$ of order $2s$, with $s\in(0,1)$. In the simplest case we will have $K(y)=c_{n,s}|y|^{-n-2s}$, which corresponds to $L=(-\Delta)^s$, the fractional Laplacian. More generally, a typical assumption would be $$0<\frac{\lambda}{|y|^{n+2s}}\leq K(y)\leq \frac{\Lambda}{|y|^{n+2s}}.$$ Under such an assumption, these operators can be seen as uniformly elliptic operators of order $2s$, for which the Harnack inequality and other regularity properties are well understood; see for example [@R-Survey].
For the Laplace operator, becomes $$\label{Laplacian}
\left\{ \begin{array}{rcll}
-\Delta u &=&f(u)&\textrm{in }\Omega \\
u&=&0&\textrm{on }\partial\Omega.
\end{array}\right.$$ A model case for is the power-type nonlinearity $f(u)=|u|^{p-1}u$, with $p>1$. In this case, it is well known that the mountain pass theorem yields the existence of (nonzero) solutions for $p<\frac{n+2}{n-2}$, while for powers $p\geq \frac{n+2}{n-2}$ the only bounded solution in star-shaped domains is $u\equiv0$. In other words, one has existence of solutions in the subcritical regime, and non-existence of solutions in star-shaped domains in the critical or supercritical regimes.
An important tool in the study of solutions to is the *Pohozaev identity* [@P]. This celebrated result states that any bounded solution to this problem satisfies the identity $$\label{Poh-classic-1}
\int_\Omega\bigl\{2n\,F(u)-(n-2)u\,f(u)\bigr\}dx=\int_{\partial\Omega}\left(\frac{\partial u}{\partial \nu}\right)^2(x\cdot\nu)d\sigma(x),$$ where $$F(u)=\int_0^uf(t)dt.$$ When $f(u)=|u|^{p-1}u$ then the identity becomes $$\left(\frac{2n}{p+1}-(n-2)\right)\int_\Omega|u|^{p+1}dx=\int_{\partial\Omega}\left(\frac{\partial u}{\partial \nu}\right)^2(x\cdot\nu)d\sigma(x).$$ When $p\geq\frac{n+2}{n-2}$, the left hand side of this identity is negative or zero, while the right hand side is strictly positive for nonzero solutions in star-shaped domains. Thus, the nonexistence of solutions follows.
The proof of the identity is based on the following integration-by-parts type formula $$\label{Poh-classic-2}
2\int_\Omega (x\cdot\nabla u)\Delta u\,dx=(2-n)\int_\Omega u\,\Delta u\,dx+\int_{\partial\Omega}\left(\frac{\partial u}{\partial \nu}\right)^2(x\cdot\nu)d\sigma(x),$$ which holds for any $C^2$ function with $u=0$ on $\partial\Omega$. This identity is an easy consequence of the *divergence theorem*. Indeed, using that $$\Delta (x\cdot\nabla u)=x\cdot\nabla \Delta u+2\Delta u$$ and that $$x\cdot \nabla u=(x\cdot\nu)\frac{\partial u}{\partial \nu}\qquad \textrm{on} \quad\partial\Omega,$$ then integrating by parts (three times) we find $$\begin{split}\int_\Omega (x\cdot\nabla u)\Delta u\,dx&=-\int_\Omega \nabla (x\cdot\nabla u)\cdot\nabla u\,dx+\int_{\partial\Omega}(x\cdot\nabla u)\frac{\partial u}{\partial \nu}\,d\sigma\\
& = \int_\Omega \Delta(x\cdot\nabla u)\,u\,dx+\int_{\partial\Omega}(x\cdot\nabla u)\frac{\partial u}{\partial \nu}\,d\sigma \\
& = \int_\Omega \{x\cdot\nabla \Delta u+2\Delta u\}u\,dx+\int_{\partial\Omega}(x\cdot\nu)\left(\frac{\partial u}{\partial \nu}\right)^2d\sigma\\
&= \int_\Omega \{-\textrm{div}(xu)\Delta u+2u\Delta u\}dx+\int_{\partial\Omega}(x\cdot\nu)\left(\frac{\partial u}{\partial \nu}\right)^2d\sigma \\
& = \int_\Omega \{-(x\cdot\nabla u)\Delta u-(n-2)u\Delta u\}dx+\int_{\partial\Omega}(x\cdot\nu)\left(\frac{\partial u}{\partial \nu}\right)^2d\sigma,
\end{split}$$ and hence follows.
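The identity can be verified symbolically in the simplest setting. The following sketch (our code, using SymPy) checks it in dimension $n=1$ on $\Omega=(-1,1)$ for the explicit eigenfunction $u=\cos(\pi x/2)$, which solves $-u''=f(u)$ with $f(u)=(\pi^2/4)u$ and vanishes at the boundary; note that $x\cdot\nu = 1$ at both endpoints:

```python
import sympy as sp

x = sp.symbols('x')
n = 1                                  # dimension; Omega = (-1, 1)
u = sp.cos(sp.pi * x / 2)              # solves -u'' = f(u) with u(+-1) = 0
f_u = sp.pi**2 / 4 * u                 # f(u) = (pi^2/4) u
F_u = sp.pi**2 / 8 * u**2              # F(u) = \int_0^u f(t) dt

# interior side of the Pohozaev identity
lhs = sp.integrate(2 * n * F_u - (n - 2) * u * f_u, (x, -1, 1))

# boundary side: (u'(x))^2 (x . nu), with x . nu = 1 at both endpoints
du = sp.diff(u, x)
rhs = du.subs(x, 1)**2 + du.subs(x, -1)**2

assert sp.simplify(lhs - rhs) == 0     # both sides equal pi^2/2
```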
Identities of Pohozaev-type like and have been used widely in the analysis of elliptic PDEs: they yield monotonicity formulas, unique continuation properties, radial symmetry of solutions, and uniqueness results. Moreover, they are also used in other contexts such as hyperbolic equations, harmonic maps, control theory, and geometry.
The aim of this paper is to show what are the *nonlocal* analogues of these identities, explain the main ideas appearing in their proofs, and give some immediate consequences concerning the nonexistence of solutions. Furthermore, we will also discuss a very related issue: the *boundary regularity* of solutions.
$\bullet$ [**A simple case.**]{} In order to have a first hint on what should be the analogue of for integro-differential operators , let us look at the simplest case $L=(-\Delta)^s$, and let us assume that $u\in C^\infty_c(\Omega)$. In this case, a standard computation shows that $$(-\Delta)^s(x\cdot\nabla u)=x\cdot\nabla(-\Delta)^su+2s\,(-\Delta)^su.$$ This is a pointwise equality that holds at every point $x\in\R^n$. This, combined with the global integration by parts identity in all of $\R^n$ $$\label{global}
\int_{\R^n}u\,(-\Delta)^sv\,dx=\int_{\R^n}(-\Delta)^su\,v\,dx,$$ leads to $$\label{toy}
\qquad\qquad 2\int_\Omega(x\cdot \nabla u)(-\Delta)^s u\,dx=(2s-n)\int_\Omega u(-\Delta)^s u\,dx,\qquad \textrm{for}\quad u\in C^\infty_c(\Omega).$$ Indeed, taking $v=x\cdot \nabla u$ one finds $$\begin{split}
\int_{\R^n}(x\cdot\nabla u)(-\Delta)^su\,dx&=\int_{\R^n}u\,(-\Delta)^s(x\cdot \nabla u)\,dx\\
&=\int_{\R^n} u\left\{x\cdot\nabla(-\Delta)^su+2s\,(-\Delta)^su\right\}dx\\
&=\int_{\R^n} \left\{-\textrm{div}(xu)(-\Delta)^su+2s\,u(-\Delta)^su\right\}dx\\
&=\int_{\R^n} \left\{-(x\cdot\nabla u)(-\Delta)^su+(2s-n)\,u(-\Delta)^su\right\}dx,
\end{split}$$ and thus the claimed identity follows.
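As a sanity check, the pointwise scaling identity used above can be verified on the Fourier side. With the convention $\hat u(\xi)=\int_{\R^n}u(x)e^{-ix\cdot\xi}dx$, one has $\widehat{x\cdot\nabla u}=-n\,\hat u-\xi\cdot\nabla_\xi\hat u$, and therefore $$\widehat{(-\Delta)^s(x\cdot\nabla u)}=|\xi|^{2s}\bigl(-n\,\hat u-\xi\cdot\nabla_\xi\hat u\bigr)=-n|\xi|^{2s}\hat u-\xi\cdot\nabla_\xi\bigl(|\xi|^{2s}\hat u\bigr)+2s\,|\xi|^{2s}\hat u=\widehat{x\cdot\nabla(-\Delta)^su}+2s\,\widehat{(-\Delta)^su},$$ where we used that $\xi\cdot\nabla_\xi|\xi|^{2s}=2s\,|\xi|^{2s}$.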
This identity has no boundary term (recall that we assumed that $u$ and all its derivatives are zero on $\partial\Omega$), but it is a first approximation towards a nonlocal version of the Pohozaev identity: the only missing piece is the boundary term.
As we showed above, when $s=1$ and $u\in C^2_0(\overline\Omega)$, the use of the divergence theorem in $\Omega$ (instead of the global identity in $\R^n$) leads to the Pohozaev-type identity with the boundary term. However, in the case of nonlocal equations there is no divergence theorem in bounded domains, and this is why at first glance there is no clear candidate for a nonlocal analogue of that boundary term.
In order to get such a Pohozaev-type identity for solutions to , with the right boundary term, we first need to answer the following: $$\textrm{What is the \emph{boundary regularity} of solutions to \eqref{pb}?}$$ Once this is well understood, we will come back to the study of Pohozaev identities and we will present the nonlocal analogues of - established in [@RS-Poh; @RSV].
The paper is organized as follows:
In Section \[sec2\] we discuss the boundary regularity of solutions to . We will state the main known results, and give a sketch of the proofs and their main ingredients. Then, in Section \[sec3\] we present the Pohozaev identities of [@RS-Poh; @RSV] and give some ideas of their proofs. Finally, in Section \[sec4\] we give some consequences of such Pohozaev identities.
Boundary regularity {#sec2}
===================
The study of integro-differential equations started already in the fifties with the works of Getoor, Blumenthal, and Kac, among others [@BGR; @G]. Due to the relation with Lévy processes, they studied Dirichlet problems $$\label{linear}
\left\{ \begin{array}{rcll}
Lu &=&g(x)&\textrm{in }\Omega \\
u&=&0&\textrm{in }\R^n\backslash\Omega,
\end{array}\right.$$ and proved some basic properties of solutions, estimates for the Green function, and the asymptotic distribution of eigenvalues. Moreover, in the simplest case of the fractional Laplacian $(-\Delta)^s$, the following explicit solutions were found: $$\quad u_0(x)=(x_+)^s\qquad\quad\textrm{solves}\qquad
\left\{\begin{array}{rcl}
(-\Delta)^su_0\hspace{-2.5mm}&=0 & \quad\textrm{in}\quad (0,\infty) \\
u_0\hspace{-2.5mm}&=0 & \quad \textrm{in}\quad (-\infty,0)
\end{array}\right.\quad$$ and $$\label{explicit}
u_1(x)=\kappa_{n,s}\,\bigl(1-|x|^2\bigr)^s_+\qquad\textrm{solves}\qquad
\left\{\begin{array}{rcl}
(-\Delta)^su_1\hspace{-2.5mm}&=1 & \quad\textrm{in}\quad B_1 \\
u_1\hspace{-2.5mm}&=0 & \quad \textrm{in}\quad \R^n\setminus B_1,
\end{array}\right.$$ for a certain constant $\kappa_{n,s}$; see [@BV-book].
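For $n=1$ and $s=\frac12$ the constant above equals $1$, so that $u_1(x)=\sqrt{1-x^2}$ in $(-1,1)$. As a quick numerical sanity check of this explicit solution, the following sketch (with ad hoc quadrature parameters) evaluates $(-\Delta)^{1/2}u_1$ at interior points through the absolutely convergent symmetrized form $$(-\Delta)^{1/2}u(x)=\frac{1}{\pi}\int_0^\infty\frac{2u(x)-u(x+h)-u(x-h)}{h^2}\,dh.$$

```python
import math

def u(y):
    # explicit solution u_1 for n = 1, s = 1/2: sqrt(1 - y^2) in (-1,1), zero outside
    return math.sqrt(max(1.0 - y * y, 0.0))

def frac_lap_half(x, H=10.0, N=200_000):
    # (-Delta)^{1/2} u(x) = (1/pi) int_0^inf (2u(x) - u(x+h) - u(x-h)) / h^2 dh,
    # computed by a midpoint rule on (0, H], plus the exact tail beyond H
    # (where u(x + h) = u(x - h) = 0, so the integrand is just 2u(x)/h^2)
    dh = H / N
    total = 0.0
    for k in range(N):
        h = (k + 0.5) * dh
        total += (2.0 * u(x) - u(x + h) - u(x - h)) / (h * h)
    total *= dh
    total += 2.0 * u(x) / H  # tail: int_H^inf 2 u(x) / h^2 dh = 2 u(x) / H
    return total / math.pi

for x in (0.0, 0.5):
    print(x, frac_lap_half(x))
```

Both values come out equal to $1$ up to quadrature error, as predicted by the equation $(-\Delta)^{1/2}u_1=1$ in $B_1$.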
The interior regularity of solutions for $L=(-\Delta)^s$ is by now well understood. Indeed, potential theory for this operator enjoys an explicit formulation in terms of the Riesz potential, and thus it is similar to that of the Laplacian; see the classical book of Landkov [@L].
For more general linear operators, the interior regularity theory has been developed in recent years, and it is now quite well understood for operators satisfying $$\label{L0}
0<\frac{\lambda}{|y|^{n+2s}}\leq K(y)\leq \frac{\Lambda}{|y|^{n+2s}};$$ see for example the results of Bass [@Bass], Serra [@Se], and also the survey [@R-Survey] for regularity results in Hölder spaces.
Concerning the boundary regularity theory for the fractional Laplacian, fine estimates for the Green’s function near $\partial\Omega$ were established by Kulczycki [@Potential1] and Chen-Song [@Potential2]; see also [@BGR]. These results imply that, in $C^{1,1}$ domains, all solutions $u$ to are comparable to $d^s$, where $d(x)=\textrm{dist}(x,\R^n\setminus\Omega)$. More precisely, $$\label{Cd^s}
|u|\leq Cd^s$$ for some constant $C$, and from this bound one can deduce an estimate of the form $$\|u\|_{C^s(\overline\Omega)}\leq C\|g\|_{L^\infty(\Omega)}$$ for solutions to the Dirichlet problem. Moreover, when $g>0$ one has $u\geq c\,d^s$ for some $c>0$ —recall the explicit example above. In particular, solutions $u$ are $C^s$ up to the boundary, and this is the optimal Hölder exponent for the regularity of $u$, in the sense that in general $u\notin C^{s+\epsilon}(\overline\Omega)$ for any $\epsilon>0$.
More generally, when the equation and the boundary data are given only in a subregion of $\R^n$, one has the following estimate, whose proof we sketch below. Notice that this estimate holds for a general class of nonlocal operators $L$, which includes the fractional Laplacian.
\[Cs-estimate\] Let $\Omega\subset\R^n$ be any bounded $C^{1,1}$ domain with $0\in\partial\Omega$, and $L$ be any operator -, with $K(y)$ homogeneous. Let $u$ be any bounded solution to $$\left\{ \begin{array}{rcll}
Lu &=&g&\textrm{in }\Omega\cap B_1 \\
u&=&0&\textrm{in }B_1\backslash\Omega,
\end{array}\right.$$ with $g\in L^\infty(\Omega\cap B_1)$. Then,[^2] $$\|u\|_{C^s(B_{1/2})}\leq C\bigl(\|g\|_{L^\infty(\Omega\cap B_1)}+\|u\|_{L^\infty(B_1)}+\|u\|_{L^1_s(\R^n)}\bigr),$$ with $C$ depending only on $n$, $s$, $\Omega$, and ellipticity constants.

We give a short sketch of this proof. For more details, see [@RS-Dir] or [@RS-stable].
First of all, truncating $u$ and dividing it by a constant if necessary, we may assume that $\|g\|_{L^\infty(\Omega\cap B_1)}+\|u\|_{L^\infty(\R^n)}\leq 1$. Second, by constructing a supersolution (using for example Lemma \[lem-ds\] below) one can show that $$\label{bound-ds}
|u|\leq Cd^s\qquad \textrm{in}\quad\Omega.$$ Once we have this, it remains to show that $$\label{Cs}
|u(x)-u(y)|\leq C|x-y|^s\qquad \forall x,y\in \overline\Omega.$$ We separate two cases, depending on whether $r=|x-y|$ is bigger or smaller than $\rho=\min\{d(x),d(y)\}$.
More precisely, if $2r\leq \rho$, then $y\in B_{\rho/2}(x)\subset B_\rho(x)\subset \Omega$ (we assume without loss of generality that $\rho=d(x)\leq d(y)$ here). Therefore, one can use known interior estimates (rescaled), together with the bound $|u|\leq Cd^s$, to get $$\label{int-est}
[u]_{C^s(B_{\rho/2}(x))}\leq C.$$ Indeed, by we have that the rescaled function $u_\rho(z):=u(x+\rho z)$ satisfies $$\|u_\rho\|_{L^\infty(B_1)}\leq C\rho^s,\qquad \|u_\rho\|_{L^1_s(\R^n)}\leq C\rho^s,\qquad \textrm{and}\qquad \|Lu_\rho\|_{L^\infty(B_1)}\leq C\rho^{2s},$$ and therefore by interior estimates $$\begin{split}
\rho^s[u]_{C^s(B_{\rho/2}(x))}& =[u_\rho]_{C^s(B_{1/2})}\leq C\bigl(\|u_\rho\|_{L^\infty(B_1)}+\|u_\rho\|_{L^1_s(\R^n)}+\|Lu_\rho\|_{L^\infty(B_1)}\bigr)\\
&\leq C(\rho^s+\rho^s+\rho^{2s})\leq C\rho^s.
\end{split}$$ In particular, it follows that $|u(x)-u(y)|\leq C|x-y|^s$.
On the other hand, in case $2r>\rho$ we just use the bound $|u|\leq Cd^s$ to get $$\begin{split}
|u(x)-u(y)|&\leq |u(x)|+|u(y)|\leq Cd^s(x)+Cd^s(y)\\
&\leq Cd^s(x)+C\bigl(d^s(x)+|x-y|^s\bigr)\leq C\rho^s+C(r+\rho)^s\\
&\leq Cr^s=C|x-y|^s.\end{split}$$ In either case, we get the desired estimate.
Higher order boundary regularity estimates
------------------------------------------
Unfortunately, in the study of Pohozaev identities the bound $|u|\leq Cd^s$ is not enough: finer regularity results, giving a more precise description of solutions near $\partial\Omega$, are needed.
For second order (local) equations, solutions to the Dirichlet problem are known to be $C^\infty(\overline\Omega)$ whenever $\Omega$ and the right hand side $g$ are $C^\infty$. In case $g\in L^\infty(\Omega)$, then $u\in C^{2-\epsilon}(\overline\Omega)$ for all $\epsilon>0$. This, in particular, yields a fine description of solutions $u$ near $\partial\Omega$: for any $z\in \partial\Omega$ one has $$\bigl|u(x)-c_zd(x)\bigr|\leq C|x-z|^{2-\epsilon},$$ where $d(x)=\textrm{dist}(x,\Omega^c)$ and $c_z\in \R$. This is an expansion of order $2-\epsilon$, which holds whenever $g\in L^\infty$ and $\Omega$ is $C^{1,1}$. When $g$ and $\Omega$ are $C^\infty$, then one has analogous higher order expansions that essentially say that $u/d\in C^\infty(\overline\Omega)$.
The question for nonlocal operators was: are there any nonlocal analogues of such higher order boundary regularity estimates?
The first result in this direction was obtained by the author and Serra in [@RS-Dir] for the fractional Laplacian $L=(-\Delta)^s$; we showed that $u/d^s\in C^\alpha(\overline\Omega)$ for some small $\alpha>0$. This result was later improved and extended to more general operators by Grubb [@Grubb; @Grubb2] and by the author and Serra [@RS-K; @RS-stable]. These results may be summarized as follows.
\[bdry\] Let $\Omega\subset\R^n$ be any bounded domain, and $L$ be any operator -. Assume in addition that $K(y)$ is homogeneous, that is, $$\label{stable}
K(y)=\frac{a\left(y/|y|\right)}{|y|^{n+2s}}.$$ Let $u$ be any bounded solution to , and[^3] $d(x)={\rm dist}(x,\R^n\setminus\Omega)$. Then,
- If $\Omega$ is $C^{1,1}$, then $$g\in L^\infty(\Omega)\quad \Longrightarrow\quad u/d^s\in C^{s-\epsilon}(\overline\Omega)\qquad\quad \textrm{for all}\ \epsilon>0,$$
- If $\Omega$ is $C^{2,\alpha}$ and $a\in C^{1,\alpha}(S^{n-1})$, then $$\qquad g\in C^\alpha(\overline\Omega)\quad \Longrightarrow\quad u/d^s\in C^{\alpha+s}(\overline\Omega)\qquad \textrm{for small}\ \alpha>0,$$ whenever $\alpha+s$ is not an integer.
- If $\Omega$ is $C^\infty$ and $a\in C^\infty(S^{n-1})$, then $$\quad g\in C^\alpha(\overline\Omega)\quad \Longrightarrow\quad u/d^s\in C^{\alpha+s}(\overline\Omega)\qquad \textrm{for all}\ \alpha>0,$$ whenever $\alpha+s\notin\mathbb Z$. In particular, $u/d^s\in C^\infty(\overline\Omega)$ whenever $g\in C^\infty(\overline\Omega)$.
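The explicit solution in the unit ball illustrates these exponents concretely: near $\partial B_1$ one has $d(x)=1-|x|$, and hence $$\frac{u_1}{d^s}(x)=\kappa_{n,s}\,\frac{(1-|x|^2)^s}{(1-|x|)^s}=\kappa_{n,s}\,(1+|x|)^s,$$ which is $C^\infty$ in a neighborhood of $\partial B_1$, in agreement with part (c) (here $g\equiv1$), while $u_1$ itself is no better than $C^s(\overline{B_1})$.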
It is important to remark that the above theorem is just a particular case of the results of [@Grubb; @Grubb2] and [@RS-K; @RS-stable]. Indeed, part (a) was proved in [@RS-stable] for any $a\in L^1(S^{n-1})$ (without the assumption ); (b) was established in [@RS-K] in the more general context of fully nonlinear equations; and (c) was established in [@Grubb; @Grubb2] for all pseudodifferential operators satisfying the $s$-transmission property. Furthermore, when $s+\alpha$ is an integer in (c), more information is given in [@Grubb2] in terms of Hölder-Zygmund spaces $C^k_*$.
When $g\in L^\infty(\Omega)$ and $\Omega$ is $C^{1,1}$, the above result yields a fine description of solutions $u$ near $\partial\Omega$: for any $z\in \partial\Omega$ one has $$\bigl|u(x)-c_zd^s(x)\bigr|\leq C|x-z|^{2s-\epsilon},$$ where $d(x)=\textrm{dist}(x,\Omega^c)$. This is an expansion of order $2s-\epsilon$, which is analogous to the one described above for the Laplacian.
In the case of second order (local) equations, only the regularity of $g$ and $\partial\Omega$ play a role in the result. In the nonlocal setting of operators of the form -, a third ingredient comes into play: the regularity of the kernels $K(y)$ in the $y$-variable. This is why in parts (b) and (c) of Theorem \[bdry\] one has to assume some regularity of $K$. This is a purely nonlocal feature, and cannot be avoided. In fact, when the kernels are not regular then counterexamples to higher order regularity can be constructed, both to interior and boundary regularity; see [@Se; @RS-stable]. Essentially, when the kernels are not regular, one can expect to get regularity results up to order $2s$, but not higher. We refer the reader to [@R-Survey], where this is discussed in more detail.
Let us now sketch some ideas of the proof of Theorem \[bdry\]. We will focus on the simplest case and try to show the main ideas appearing in its proof.
Sketch of the proof of Theorem \[bdry\](a)
------------------------------------------
A first important ingredient in the proof of Theorem \[bdry\](a) is the following computation.
\[lem-ds\] Let $\Omega$ be any $C^{1,1}$ domain, $s\in (0,1)$, and $L$ be any operator of the form - with $K(y)$ homogeneous, i.e., of the form . Let $d(x)$ be any positive function that coincides with $\textrm{dist}(x,\R^n\setminus\Omega)$ in a neighborhood of $\partial\Omega$ and is $C^{1,1}$ inside $\Omega$. Then, $$|L(d^s)|\leq C_\Omega\qquad \textrm{in}\quad \Omega,$$ where $C_\Omega$ depends only on $n$, $s$, $\Omega$, and ellipticity constants.
Let $x_0\in \Omega$ and $\rho=d(x_0)$. Notice that when $\rho\geq\rho_0>0$ then $d^s$ is $C^{1,1}$ in a neighborhood of $x_0$, and thus $L(d^s)(x_0)$ is bounded by a constant depending only on $\rho_0$. Thus, we may assume that $\rho\in(0,\rho_0)$, for some small $\rho_0>0$.
Let us denote $$\ell(x):=\bigl(d(x_0)+\nabla d(x_0)\cdot (x-x_0)\bigr)_+,$$ and notice that $\ell^s$ is a translated and rescaled version of the 1-D solution $(x_n)_+^s$. Thus, we have $$L(\ell^s)=0\qquad \textrm{in}\quad \{\ell>0\};$$ see [@RS-K Section 2]. Moreover, notice that by construction of $\ell$ we have $$d(x_0)=\ell(x_0)\qquad \textrm{and}\qquad \nabla d(x_0)=\nabla \ell(x_0).$$ Using this, it is not difficult to see that $$\bigl|d(x_0+y)-\ell(x_0+y)\bigr|\leq C|y|^2,$$ and this yields $$\bigl|d^s(x_0+y)-\ell^s(x_0+y)\bigr|\leq C|y|^2\bigl(d^{s-1}(x_0+y)+\ell^{s-1}(x_0+y)\bigr).$$
On the other hand, for $|y|>1$ we clearly have $$\bigl|d^s(x_0+y)-\ell^s(x_0+y)\bigr|\leq C|y|^s\qquad \textrm{in}\quad \R^n\setminus B_1.$$
Using the last two inequalities, and recalling that $L(\ell^s)(x_0)=0$ and that $d(x_0)=\ell(x_0)$, we find $$\begin{split}
\bigl|L(d^s)(x_0)\bigr|&=\bigl|L(d^s-\ell^s)(x_0)\bigr|=\left|\int_{\R^n}(d^s-\ell^s)(x_0+y)K(y)dy\right|\\
&\leq C\int_{B_1}|y|^2\bigl(d^{s-1}(x_0+y)+\ell^{s-1}(x_0+y)\bigr)\frac{dy}{|y|^{n+2s}} +C\int_{B_1^c}|y|^s\frac{dy}{|y|^{n+2s}}\\
&\leq C\int_{B_1}\bigl(d^{s-1}(x_0+y)+\ell^{s-1}(x_0+y)\bigr)\frac{dy}{|y|^{n+2s-2}}+C.
\end{split}$$ Such last integral can be bounded by a constant $C$ depending only on $s$ and $\Omega$, exactly as in [@RS-C1 Lemma 2.5], and thus it follows that $$\bigl|L(d^s)(x_0)\bigr|\leq C,$$ as desired.
Another important ingredient in the proof of Theorem \[bdry\](a) is the following classification result for solutions in a half-space.
\[prop-Liouville\] Let $s\in (0,1)$, and $L$ be any operator of the form - with $K(y)$ homogeneous. Let $v$ be any solution of $$\label{eq-I-flat}
\left\{ \begin{array}{rcll}
L v &=&0&\textrm{in }\{x\cdot e>0\} \\
v&=&0&\textrm{in }\{x\cdot e\leq0\}.
\end{array}\right.$$ Assume that, for some $\beta<2s$, $v$ satisfies the growth control $$\label{gr-v}
|v(x)|\leq C\bigl(1+|x|^\beta\bigr)\qquad \textrm{in}\quad \R^n.$$ Then, $$v(x)=K(x\cdot e)_+^s$$ for some constant $K\in \R$.
The idea is to differentiate $v$ in the directions that are orthogonal to $e$, to find that $v$ is a 1D function $v(x)=\bar v(x\cdot e)$. Then, for 1D functions any operator $L$ with homogeneous kernel is just a multiple of the 1D fractional Laplacian, and thus one only has to show the result in dimension 1. Let us next explain the whole argument in more detail.
Given $R\geq1$ we define $$v_R(x):=R^{-\beta}v(Rx).$$ It follows from the growth condition on $v$ that $$\bigl| v_R(x)\bigr|\leq C\bigl(1+|x|^\beta\bigr)\qquad \textrm{in}\quad \R^n,$$ and moreover $v_R$ satisfies , too.
Since $\beta<2s$, then $\|v_R\|_{L^1_s(\R^n)}\leq C$, and thus by Proposition \[Cs-estimate\] we get $$\|v_R\|_{C^s(B_{1/2})}\leq C,$$ with $C$ independent of $R$. Therefore, using $[v]_{C^s(B_{R/2})}=R^{\beta-s}[v_R]_{C^s(B_{1/2})}$, we find $$\label{v-R}
\bigl[v\bigr]_{C^s(B_{R/2})}\leq CR^{\beta-s}\quad \textrm{for all}\quad R\geq1.$$
Now, given $\tau\in S^{n-1}$ such that $\tau\cdot e=0$, and given $h>0$, consider $$w(x):=\frac{v(x+h\tau)-v(x)}{h^s}.$$ By we have that $w$ satisfies the growth condition $$\|w\|_{L^\infty(B_R)}\leq CR^{\beta-s}\quad \textrm{for all}\quad R\geq1.$$ Moreover, since $\tau$ is a direction which is parallel to $\{x\cdot e=0\}$, then $w$ satisfies the same equation as $v$, namely $L w=0$ in $\{x\cdot e>0\}$, and $w=0$ in $\{x\cdot e\leq0\}$. Thus, we can repeat the same argument above with $v$ replaced by $w$ (and $\beta$ replaced by $\beta-s$), to find $$\bigl[w\bigr]_{C^s(B_{R/2})}\leq CR^{\beta-2s}\quad \textrm{for all}\quad R\geq1.$$
Since $\beta<2s$, letting $R\to\infty$ we find that $$w\equiv0\quad \textrm{in}\quad \R^n.$$ This means that $v$ is a 1D function, $v(x)=\bar v(x\cdot e)$. But then yields that such function $\bar v:\R\to\R$ satisfies $$\left\{ \begin{array}{rcll}
(-\Delta)^s\bar v &=&0&\textrm{in }(0,\infty) \\
\bar v&=&0&\textrm{in }(-\infty,0],
\end{array}\right.$$ with the same growth condition . Using [@RS-K Lemma 5.2], we find that $\bar v(x)=K(x_+)^s$, and thus $$v(x)=K(x\cdot e)_+^s,$$ as desired.
Using the previous results, let us now give the:
$\bullet$ [**Sketch of the proof of Theorem \[bdry\](a).**]{} In the proof of Proposition \[Cs-estimate\] we saw how combining the bound $|u|\leq Cd^s$ with interior estimates (rescaled) one can show that $u\in C^s(\overline\Omega)$. In other words, in order to prove the $C^s$ regularity up to the boundary, one only needs the bound $|u|\leq Cd^s$ and interior estimates.
Similarly, it turns out that in order to show that $u/d^s\in C^\gamma(\overline\Omega)$, $\gamma=s-\epsilon$, we just need an expansion of the form $$\label{expansion}
\left|u(x)-Q_z d^s(x)\right|\leq C|x-z|^{s+\gamma},\qquad z\in \partial\Omega,\qquad Q_z\in\R.$$ Once this is done, one can combine with interior estimates and get $u/d^s\in C^\gamma(\overline\Omega)$; see [@RS-stable Proof of Theorem 1.2] for more details.
Thus, we need to show .
The proof of is by contradiction, using a blow-up argument. Indeed, assume that for some $z\in \partial\Omega$ the expansion does not hold for any $Q\in\R$. Then, we clearly have $$\sup_{r>0}r^{-s-\gamma}\|u-Qd^s\|_{L^\infty(B_r(z))}=\infty\qquad \textrm{for all}\quad Q\in \R.$$ Then, one can show (see [@RS-stable Lemma 5.3]) that this yields $$\sup_{r>0}r^{-s-\gamma}\|u-Q(r)d^s\|_{L^\infty(B_r(z))}=\infty,\qquad \textrm{with}\qquad Q(r)=\frac{\int_{B_r(z)}u\,d^s}{\int_{B_r(z)}d^{2s}}.$$ Notice that this choice of $Q(r)$ is the one which minimizes the $L^2$ distance between $u$ and $Qd^s$ in $B_r(z)$.
We define the monotone quantity $$\theta(r):=\max_{r'\geq r}(r')^{-s-\gamma}\|u-Q(r')d^s\|_{L^\infty(B_{r'}(z))}.$$ Since $\theta(r)\rightarrow\infty$ as $r\rightarrow0$, there exists a sequence $r_m\to0$ such that $$(r_m)^{-s-\gamma}\|u-Q(r_m)d^s\|_{L^\infty(B_{r_m})}=\theta(r_m).$$
We now define the blow-up sequence $$v_m(x):=\frac{u(z+r_mx)-Q(r_m)d^s(z+r_m x)}{(r_m)^{s+\gamma}\theta(r_m)}.$$ By definition of $Q(r_m)$ we have $$\label{contrad1}
\int_{B_1}v_m(x)\,d^s(z+r_m x)dx=0,$$ and by definition of $r_m$ we have $$\label{contrad2}
\|v_m\|_{L^\infty(B_1)}=1.$$ Moreover, it can be shown that we have the growth control $$\|v_m\|_{L^\infty(B_R)}\leq CR^{s+\gamma}\qquad\textrm{for all}\ R\geq1.$$ To prove this, one first shows that $$|Q(Rr)-Q(r)|\leq C(rR)^{\gamma}\theta(r),$$ and then uses the definitions of $v_m$ and $\theta$ to get $$\begin{split}
\|v_m\|_{L^\infty(B_R)} &=\frac{1}{(r_m)^{s+\gamma}\theta(r_m)}\bigl\|u-Q(r_m)d^s\bigr\|_{L^\infty(B_{r_m R})}\\
&\leq \frac{1}{(r_m)^{s+\gamma}\theta(r_m)}\left\{\bigl\|u-Q(r_mR)d^s\bigr\|_{L^\infty(B_{r_m R})} +\bigl|Q(r_mR)-Q(r_m)\bigr|(r_mR)^s\right\}\\
&\leq \frac{1}{(r_m)^{s+\gamma}\theta(r_m)}\theta(r_mR)(r_mR)^{s+\gamma} +\frac{C}{(r_m)^{s+\gamma}\theta(r_m)}(r_mR)^\gamma\theta(r_m)(r_mR)^s\\
&\leq R^{s+\gamma}+CR^{s+\gamma}.
\end{split}$$ In the last inequality we used $\theta(r_mR)\leq \theta(r_m)$, which follows from the monotonicity of $\theta$ and the fact that $R\geq1$.
On the other hand, the functions $v_m$ satisfy $$|Lv_m(x)|=\frac{(r_m)^{2s}}{(r_m)^{s+\gamma}\theta(r_m)}\bigl|Lu(z+r_m x)-L(d^s)(z+r_m x)\bigr|\qquad \textrm{in}\quad \Omega_m,$$ where the domain $\Omega_m=(r_m)^{-1}(\Omega-z)$ converges to a half-space $\{x\cdot e>0\}$ as $m\to\infty$. Here $e\in S^{n-1}$ is the inward normal vector to $\partial\Omega$ at $z$.
Since $Lu$ and $L(d^s)$ are bounded, and $\gamma<s$, then it follows that $$Lv_m\rightarrow 0\quad\textrm{uniformly in compact sets in}\ \{x\cdot e>0\}.$$ Moreover, $v_m\rightarrow0$ uniformly in compact sets in $\{x\cdot e<0\}$, since $u=0$ in $\Omega^c$.
Now, by $C^s$ regularity estimates up to the boundary and the Arzelà-Ascoli theorem the functions $v_m$ converge (up to a subsequence) to a function $v\in C(\R^n)$. The convergence is uniform in compact sets of $\R^n$. Therefore, passing to the limit the properties of $v_m$, we find $$\label{global1}
\|v\|_{L^\infty(B_R)}\leq CR^{s+\gamma}\qquad\textrm{for all}\ R\geq1,$$ and $$\label{global2}
\left\{ \begin{array}{rcll}
Lv &=&0&\textrm{in }\{x\cdot e>0\} \\
v&=&0&\textrm{in }\{x\cdot e<0\}.
\end{array}\right.$$ Now, thanks to Proposition \[prop-Liouville\], we find $$\label{classif}
v(x)=K(x\cdot e)_+^s\qquad \textrm{for some}\quad K\in\R.$$
Finally, passing to the limit —using that $d^s(z+r_m x)/(r_m)^s\rightarrow (x\cdot e)_+ ^s$— we find $$\label{contrad1bis}
\int_{B_1}v(x)\,(x\cdot e)_+^sdx=0,$$ so that $K=0$ and $v\equiv0$. But then, passing to the limit in $\|v_m\|_{L^\infty(B_1)}=1$, we get $\|v\|_{L^\infty(B_1)}=1$, a contradiction, and hence the expansion is proved.
It is important to remark that in [@RS-stable] we show the expansion with a constant $C$ depending only on $n$, $s$, $\|g\|_{L^\infty}$, the $C^{1,1}$ norm of $\Omega$, and ellipticity constants. To do that, the idea of the proof is exactly the same, but one needs to consider sequences of functions $u_m$, domains $\Omega_m$, points $z_m\in \partial\Omega_m$, and operators $L_m$.
Comments, remarks, and open problems
------------------------------------
Let us next give some final comments and remarks about Theorem \[bdry\], as well as some related open problems.
$\bullet$ [**Singular kernels.**]{} Theorem \[bdry\](a) was proved in [@RS-stable] for operators $L$ with general homogeneous kernels of the form above with $a\in L^1(S^{n-1})$, not necessarily satisfying the pointwise ellipticity bounds. In fact, $a$ could even be a singular measure. In such setting, it turns out that Lemma \[lem-ds\] is in general false, even in $C^\infty$ domains. Because of this difficulty, the proof of Theorem \[bdry\](a) given in [@RS-stable] is in fact somewhat more involved than the one we sketched above.
$\bullet$ [**Counterexamples for non-homogeneous kernels.**]{} All the results above are for kernels $K$ satisfying the ellipticity bounds and such that $K(y)$ is *homogeneous*. As said above, for the interior regularity theory one does not need the homogeneity assumption: the interior regularity estimates are the same for homogeneous or non-homogeneous kernels. However, it turns out that something different happens in the boundary regularity theory. Indeed, for operators with $x$-dependence $$Lu(x)=\textrm{P.V.}\int_{\R^n}\bigl(u(x)-u(x+y)\bigr)K(x,y)dy,$$ $$0<\frac{\lambda}{|y|^{n+2s}}\leq K(x,y)\leq \frac{\Lambda}{|y|^{n+2s}}, \qquad K(x,y)=K(x,-y),$$ we constructed in [@RS-K] solutions to $Lu=0$ in $\Omega$, $u=0$ in $\R^n\setminus\Omega$, that are *not* comparable to $d^s$. More precisely, we showed that in dimension $n=1$ there are $\beta_1<s<\beta_2$ for which the functions $(x_+)^{\beta_i}$ solve an equation of the form $Lu=0$ in $(0,\infty)$, $u=0$ in $(-\infty,0]$. Thus, no fine boundary regularity like Theorem \[bdry\] can be expected for non-homogeneous kernels; see [@RS-K Section 2] for more details.
$\bullet$ [**On the proof of Theorem \[bdry\] (b).**]{} The proof of Theorem \[bdry\](b) in [@RS-K] has a similar structure to the one sketched above, in the sense that we show first $L(d^s)\in C^\alpha(\overline\Omega)$ and then prove an expansion of order $2s+\alpha$ similar to the one above. However, there are extra difficulties coming from the fact that the expansion now has exponent $2s+\alpha$, and thus the operator $L$ is not defined on functions that grow that much. Thus, the blow-up procedure needs to be done with incremental quotients, and the global equation is replaced by [@RS-K Theorem 1.4].
$\bullet$ [**On the proof of Theorem \[bdry\] (c).**]{} Theorem \[bdry\](c) was proved in [@Grubb; @Grubb2] by Fourier transform methods, completely different from the techniques presented above. Namely, the results in [@Grubb; @Grubb2] are for general pseudodifferential operators satisfying the so-called $s$-transmission property. A key ingredient in those proofs is the existence of a factorization of the principal symbol, which leads to the boundary regularity properties for such operators.
$\bullet$ [**Open problem: Regularity in $C^{k,\alpha}$ domains.**]{} After the results of [@Grubb; @Grubb2; @RS-K; @RS-stable], a natural question remains open: what happens in $C^{k,\alpha}$ domains?
Our results in [@RS-K; @RS-stable] give sharp regularity estimates in $C^{1,1}$ and $C^{2,\alpha}$ domains —Theorem \[bdry\] (a) and (b)—, while the results of Grubb [@Grubb; @Grubb2] give higher order estimates in $C^\infty$ domains —Theorem \[bdry\] (c). It is an open problem to establish sharp boundary regularity results in $C^{k,\alpha}$ domains, with $k\geq3$, for operators - with homogeneous kernels.
For the fractional Laplacian, sharp estimates in $C^{k,\alpha}$ domains have been recently established in [@JN], by using the extension problem for the fractional Laplacian. For more general operators, this is only known for $k=1$ [@RS-C1] and $k=2$ [@RS-K].
The development of sharp boundary regularity results in $C^{k,\alpha}$ domains for integro-differential operators $L$ would lead to the higher regularity of the free boundary in obstacle problems for such operators; see [@DS], [@JN], [@CRS-obst].
$\bullet$ [**Open problem: Parabolic equations.**]{} Part (a) of Theorem \[bdry\] was recently extended to parabolic equations in [@FR]. A natural open question is to understand the higher order boundary regularity of solutions for parabolic equations of the form $$\partial_t u+Lu=f(t,x).$$ Are there analogous estimates to those in Theorem \[bdry\] (b) and (c) in the parabolic setting?
This could lead to the higher regularity of the free boundary in parabolic obstacle problems for integro-differential operators; see [@CF; @BFR2].
$\bullet$ [**Open problem: Operators with different scaling properties.**]{} An interesting open problem concerning the boundary regularity of solutions is the following: What happens with operators with kernels having a different type of singularity near $y=0$? For example, what happens with operators with kernels $K(y)\approx |y|^{-n}$ for $y\approx 0$? This type of kernels appear when considering geometric stable processes; see [@SSV]. The interior regularity theory has been developed by Kassmann-Mimica in [@KM] for very general classes of kernels, but much less is known about the boundary regularity; see [@BGR2] for some results in that direction.
Pohozaev identities {#sec3}
===================
Once the boundary regularity is known, we can now come back to the Pohozaev identities. We saw in the previous section that solutions $u$ to $$\label{pb2}
\left\{ \begin{array}{rcll}
L u &=&f(x,u)&\textrm{in }\Omega \\
u&=&0&\textrm{in }\R^n\backslash\Omega.
\end{array}\right.$$ are not $C^1$ up to the boundary, but the quotient $u/d^s$ is Hölder continuous up to the boundary. In particular, for any $z\in \partial\Omega$ there exists the limit $$\frac{u}{d^s}(z):=\lim_{\Omega \ni x\rightarrow z}\frac{u(x)}{d^s(x)}.$$ As we will see next, this function $u/d^s|_{\partial\Omega}$ plays the role of the normal derivative $\partial u/\partial\nu$ in the nonlocal analogues of -.
\[thpoh\] Let $\Omega$ be any bounded $C^{1,1}$ domain, and $L$ be any operator of the form , with $$K(y)=\frac{a\left(y/|y|\right)}{|y|^{n+2s}}$$ and $a\in L^\infty(S^{n-1})$. Let $f$ be any locally Lipschitz function, and let $u$ be any bounded solution to the problem above. Then, the following identity holds $$\label{Poh1}
\hspace{-3mm}-2\int_\Omega(x\cdot\nabla u)Lu\ dx=(n-2s)\int_{\Omega}u\,Lu\, dx+\Gamma(1+s)^2\int_{\partial\Omega}\mathcal A(\nu)\left(\frac{u}{d^s}\right)^2(x\cdot\nu)d\sigma.$$ Moreover, for all $e\in \R^n$, we have[^4] $$\label{Poh2}
-\int_\Omega \partial_eu\,Lu\,dx=\frac{\Gamma(1+s)^2}{2}\int_{\partial\Omega}\mathcal A(\nu)\left(\frac{u}{d^{s}}\right)^2(\nu\cdot e)\,d\sigma.$$ Here $$\label{A}
\mathcal A(\nu)=c_s\int_{S^{n-1}}|\nu\cdot\theta|^{2s}a(\theta)d\theta,$$ $a(\theta)$ is the function in , and $c_s$ is a constant that depends only on $s$. For $L=(-\Delta)^s$, we have $\mathcal A(\nu)\equiv 1$.
When the nonlinearity $f(x,u)$ does not depend on $x$, the previous theorem yields the following nonlocal analogue of the classical Pohozaev identity $$\int_\Omega\bigl\{2n\,F(u)-(n-2s)u\,f(u)\bigr\}dx=\Gamma(1+s)^2\int_{\partial\Omega}\mathcal A(\nu)\left(\frac{u}{d^s}\right)^2(x\cdot\nu)d\sigma(x).$$ Before our work [@RS-Poh], no Pohozaev identity for the fractional Laplacian was known (not even in dimension $n=1$). Theorem \[thpoh\] was first found and established for $L=(-\Delta)^s$ in [@RS-Poh], and later the result was extended to more general operators in [@RSV]. A surprising feature of this result is that, even though the operators are nonlocal, the identities - have completely local boundary terms.
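This identity can be tested numerically in the simplest explicit case: $n=1$, $s=\frac12$, $\Omega=(-1,1)$, $f\equiv1$ (so $F(u)=u$), $L=(-\Delta)^{1/2}$, and $\mathcal A\equiv1$, where $u(x)=\sqrt{1-x^2}$ is the explicit solution recalled in Section \[sec2\]. In dimension one the boundary integral is a sum over the two endpoints, where $x\cdot\nu=1$ and $u/d^{1/2}\to\sqrt2$. The following sketch (quadrature parameters are ad hoc) checks that both sides equal $\pi$:

```python
import math

n, s = 1, 0.5
u = lambda x: math.sqrt(max(1.0 - x * x, 0.0))  # solves (-Delta)^{1/2} u = 1 in (-1,1)

# Left-hand side: int_{-1}^{1} { 2n F(u) - (n - 2s) u f(u) } dx, with f = 1, F(u) = u;
# the second term vanishes here since n = 2s
N = 400_000
dx = 2.0 / N
lhs = sum(2 * n * u(-1.0 + (k + 0.5) * dx) * dx for k in range(N))

# Right-hand side: Gamma(1+s)^2 * sum over the two endpoints of (u/d^s)^2 (x . nu)
q = u(1.0 - 1e-9) / math.sqrt(1e-9)        # numerical limit of u/d^{1/2} at x = 1
rhs = math.gamma(1 + s) ** 2 * 2 * q ** 2  # x . nu = 1 at both endpoints

print(lhs, rhs)  # both approximately pi
```

Here $\Gamma(3/2)^2\cdot 2\cdot 2=\pi$ exactly, matching $2\int_{-1}^1\sqrt{1-x^2}\,dx=\pi$.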
Let us give now a sketch of the proof of the Pohozaev identity . In order to focus on the main ideas, no technical details will be discussed.
Sketch of the proof
-------------------
For simplicity, let us assume that $\Omega$ is $C^\infty$ and that $u/d^s\in C^\infty(\overline\Omega)$.
We first assume that $\Omega$ is strictly star-shaped; later we will deduce the general case from this one. Translating $\Omega$ if necessary, we may assume it is strictly star-shaped with respect to the origin.
Let us define $$u_\lambda(x) = u(\lambda x),\qquad\lambda>1,$$ and let us write $$2\int_\Omega(x\cdot \nabla u) Lu = 2\left.\frac{d}{d\lambda}\right|_{\lambda=1^+} \int_\Omega u_\lambda Lu.$$ This follows from the fact that $\left.\frac{d}{d\lambda}\right|_{\lambda=1^+} u_\lambda(x)=x\cdot\nabla u(x)$ and the dominated convergence theorem. Then, since $u_\lambda$ vanishes outside $\Omega$, we will have $$\int_\Omega u_\lambda Lu=\int_{\R^n} u_\lambda Lu=\int_{\R^n}L^{\frac 12}u_\lambda L^{\frac 12} u,$$ and therefore $$\begin{aligned}
\int_\Omega u_\lambda Lu=\int_{\R^n}L^{\frac 12}u_\lambda L^{\frac 12} u
&=& \lambda^s\int_{\R^n}\left(L^{\frac 12}u\right)(\lambda x)L^{\frac 12}u(x)\,dx \\
&=& \lambda^s\int_{\R^n}w(\lambda x)w(x)\,dx\\
&=& \lambda^{\frac{2s-n}{2}}\int_{\R^n}w(\lambda^{\frac12}y)w(\lambda^{-\frac12}y)\,dy\end{aligned}$$ where $w(x)= L^{\frac 12} u(x)$.
Now, since $2\left.\frac{d}{d\lambda}\right|_{\lambda=1^+}\lambda^{\frac{2s-n}{2}}=2s-n$, the previous identities (and the change $\sqrt{\lambda}\mapsto\lambda$) yield $$2\int_\Omega(x\cdot \nabla u) Lu=(2s-n)\int_{\R^n} w^2+\left.\frac{d}{d\lambda}\right|_{\lambda=1^+}\int_{\R^n}w_{{\lambda}}w_{1/{\lambda}}.$$ Moreover, since $$\int_{\R^n} w^2=\int_{\R^n} L^{1/2}u\,L^{1/2}u=\int_{\R^n} u\,Lu=\int_{\Omega} u\,Lu,$$ then we have $$\label{step1}
-2\int_\Omega(x\cdot \nabla u) Lu=(n-2s)\int_{\Omega} u\,Lu+\mathcal I(w),$$ where $$\label{operator}
\mathcal I(w)=-\left.\frac{d}{d\lambda}\right|_{\lambda=1^+}\int_{\R^n}w_{{\lambda}}w_{1/{\lambda}},$$ $w_\lambda(x)=w(\lambda x)$, and $w(x)= L^{\frac 12} u(x)$.
At this point one should compare the identity just obtained with the Pohozaev identity we are after: in order to establish the latter, we “just” need to show that $\mathcal I(w)$ is exactly the boundary term we want.
Let us take a closer look at the operator defined by . The first thing one may observe by differentiating under the integral sign is that $$\varphi\ \textrm{is ``nice enough''}\qquad \Longrightarrow\qquad \mathcal I(\varphi)=0.$$ In particular, one can also show that $\mathcal I(\varphi+h)=\mathcal I(\varphi)$ whenever $h$ is “nice enough”.
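Indeed, for $\varphi$ smooth and sufficiently decaying one can differentiate under the integral sign: $$-\mathcal I(\varphi)=\left.\frac{d}{d\lambda}\right|_{\lambda=1^+}\int_{\R^n}\varphi(\lambda x)\,\varphi(x/\lambda)\,dx=\int_{\R^n}\bigl\{(x\cdot\nabla \varphi)\,\varphi-\varphi\,(x\cdot\nabla \varphi)\bigr\}dx=0,$$ so $\mathcal I$ only sees the singular part of its argument.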
The function $w=L^{1/2}u$ is smooth inside $\Omega$ and also in $\R^n\setminus\overline\Omega$, but it has a singularity along $\partial\Omega$. In order to compute $\mathcal I(w)$, we have to study carefully the behavior of $w=L^{1/2}u$ near $\partial\Omega$, and exploit the fact that $\mathcal I$ vanishes on nice functions. The idea is that, since $u/d^s$ is smooth, we will have $$\label{cosa1}
w=L^{1/2}u=L^{1/2}\left(d^s\frac{u}{d^s}\right)=L^{1/2}\bigl(d^s\bigr)\frac{u}{d^s}+\textrm{``nice terms''},$$ and thus the behavior of $w$ near $\partial\Omega$ will be that of $L^{1/2}\bigl(d^s\bigr)\frac{u}{d^s}$.
Using the previous observation, and writing the integral in in the “star-shaped coordinates” $x=tz$, $z\in \partial\Omega$, $t\in(0,\infty)$, we find $$\begin{split}
-\mathcal I(w)=\left.\frac{d}{d\lambda}\right|_{\lambda=1^+}\int_{\mathbb R^n}w_{{\lambda}}w_{1/{\lambda}}&=
\left.\frac{d}{d\lambda}\right|_{\lambda=1^+}\int_{\partial\Omega}(z\cdot \nu)d\sigma(z)\int_0^\infty t^{n-1}w(\lambda tz)w\left(\frac{tz}{\lambda}\right)dt\\
&=\int_{\partial\Omega}(z\cdot \nu)d\sigma(z)\left.\frac{d}{d\lambda}\right|_{\lambda=1^+}\int_0^\infty t^{n-1}w(\lambda tz)w\left(\frac{tz}{\lambda}\right)dt.
\end{split}$$
![\[figura2\]Star-shaped coordinates $x=tz$, with $z\in \partial\Omega$. ](dibuix2.pdf)
Now, a careful analysis of $L^{1/2}(d^s)$ leads to the formula $$\label{delicate0}
L^{1/2}\bigl(d^s\bigr)(tz)=\phi_s(t)\sqrt{\mathcal A(\nu(z))}+\textrm{``nice terms''},$$ where $\phi_s(t)=c_1\{\log^-|t-1|+c_2\,\chi_{(0,1)}(t)\}$, and $c_1,c_2$ are explicit constants that depend only on $s$. Here, $\chi_A$ denotes the characteristic function of the set $A$.
This, combined with , gives $$\label{delicate}
w(tz)=\phi_s(t)\sqrt{\mathcal A(\nu(z))}\,\frac{u}{d^s}(z)+\textrm{``nice terms''}.$$
Using the previous two identities we find $$\label{last-step}\begin{split}
\mathcal I(w)&=
-\int_{\partial\Omega}(z\cdot \nu)d\sigma(z)\left.\frac{d}{d\lambda}\right|_{\lambda=1^+}\int_0^\infty t^{n-1}w(\lambda tz)w\left(\frac{tz}{\lambda}\right)dt\\
&=-\int_{\partial\Omega}(z\cdot \nu) d\sigma(z)\left.\frac{d}{d\lambda}\right|_{\lambda=1^+}\int_0^\infty t^{n-1}\phi_s(\lambda t)\phi_s\left(\frac{t}{\lambda}\right)\mathcal A(\nu(z))\left(\frac{u}{d^s}(z)\right)^2dt\\
&=\int_{\partial\Omega}\mathcal A(\nu)\left(\frac{u}{d^s}\right)^2 (z\cdot \nu)d\sigma(z)\,C(s),
\end{split}$$ where $$C(s)=-\left.\frac{d}{d\lambda}\right|_{\lambda=1^+}\int_0^\infty t^{n-1}\phi_s(\lambda t)\phi_s\left(\frac{t}{\lambda}\right)dt$$ is a (positive) constant that can be computed explicitly. Thus, follows from and .

[**Step 2**]{}. Let now $\Omega$ be any $C^{1,1}$ domain. In that case, the above proof does not work, since the assumption that $\Omega$ is star-shaped was essential there. Still, as shown next, once the identity is established for star-shaped domains, the identity for general $C^{1,1}$ domains follows from an argument involving a partition of unity and the fact that every $C^{1,1}$ domain is locally star-shaped.
Let $B_i$ be a finite collection of small balls covering $\Omega$. Then, we consider a family of functions $\psi_i\in C^\infty_c(B_i)$ such that $\sum_i \psi_i=1$, and we let $u_i=u\psi_i$.
We claim that for every $i,j$ we have the following bilinear identity $$\label{bilinear}\begin{split}
-\int_\Omega(x\cdot\nabla &u_i)Lu_j\, dx-\int_\Omega(x\cdot\nabla u_j)Lu_i\,dx=\frac{n-2s}{2}\int_\Omega u_iLu_j\, dx\,+\\
&+\frac{n-2s}{2}\int_{\Omega}u_jLu_i\, dx+\Gamma(1+s)^2\int_{\partial\Omega}\mathcal A(\nu)\frac{u_i}{d^{s}}\frac{u_j}{d^{s}}(x\cdot\nu)\, d\sigma.
\end{split}$$ To prove this, we distinguish two cases. If $\overline B_i\cap \overline B_j\neq\emptyset$, then it turns out that $u_i$ and $u_j$ satisfy the hypotheses of Step 1, and thus they satisfy the identity —here we use that the intersection of the $C^{1,1}$ domain $\Omega$ with a small ball is always star-shaped. Then, applying to the functions $(u_i+u_j)$ and $(u_i-u_j)$ and subtracting the two resulting identities, one gets . If, on the other hand, $\overline B_i\cap \overline B_j=\emptyset$, then the identity is a simple computation similar to , since in this case we have $u_iu_j=0$ and thus there is no boundary term in . Hence, we obtain for all $i,j$. Therefore, summing over all $i$ and all $j$ and using that $\sum_i u_i=u$, follows.
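The polarization step used above can be made explicit. Writing the star-shaped identity schematically as $Q(v)=0$, where every term of $Q$ is quadratic in $v$, one has:

```latex
% Q(v) := -2\int_\Omega (x\cdot\nabla v)\,Lv\,dx - (n-2s)\int_\Omega v\,Lv\,dx
%         - \Gamma(1+s)^2\int_{\partial\Omega}\mathcal A(\nu)
%           \Bigl(\tfrac{v}{d^s}\Bigr)^{2}(x\cdot\nu)\,d\sigma.
% Since Q(u_i+u_j)=0 and Q(u_i-u_j)=0, subtracting and dividing by 4 gives
\tfrac14\bigl[Q(u_i+u_j)-Q(u_i-u_j)\bigr]=0,
% and expanding each quadratic term yields the bilinear identity; for instance,
% the boundary terms combine as
% \tfrac14\bigl[(u_i+u_j)^2-(u_i-u_j)^2\bigr]\,d^{-2s} = \tfrac{u_i}{d^s}\,\tfrac{u_j}{d^s}.
```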
[**Step 3**]{}. Let us finally show the second identity . For this, we just need to apply the identity that we already proved, , with a different origin $e\in\R^n$. We get $$\label{Poh1-origin}
\begin{split}
-2\int_\Omega\bigl((x-e)\cdot\nabla u\bigr)Lu\ dx=&\,(n-2s)\int_{\Omega}u\,Lu\, dx\\
&+\Gamma(1+s)^2\int_{\partial\Omega}\mathcal A(\nu)\left(\frac{u}{d^s}\right)^2\bigl((x-e)\cdot\nu\bigr)d\sigma.
\end{split}$$ Subtracting and we get , as desired.
Comments and further results
----------------------------
Let us next give some final remarks about Theorem \[thpoh\].
$\bullet$ [**On the proof of Theorem \[thpoh\].**]{} First, notice that the smoothness of $u/d^s$ and $\partial\Omega$ is hidden in . In fact, the proof of - requires a very fine analysis, even if one assumes that both $u/d^s$ and $\partial\Omega$ are $C^\infty$. Furthermore, even in this smooth case, the “nice terms” in are not even $C^1$ near $\partial\Omega$, and a delicate result for $\mathcal I$ is needed in order to ensure that $\mathcal I(\textrm{``nice terms''})=0$; see Proposition 1.11 in [@RS-Poh].
Second, note that the kernel of the operator $L^{1/2}$ has an explicit expression in case $L=(-\Delta)^s$, but not for general operators with kernels . Because of this, the proofs of and are simpler for $L=(-\Delta)^s$, and some new ideas are required to treat the general case, in which we obtain the extra factor $\sqrt{\mathcal A(\nu(z))}$.
$\bullet$ [**Extension to more general operators.**]{} After the results of [@RS-Poh; @RSV], a last question remained to be answered: what happens with more general operators ? For example, is there any Pohozaev identity for the class of operators $(-\Delta+m^2)^s$, with $m>0$? And for operators with $x$-dependence?
In a recent work [@Grubb-Poh], G. Grubb obtained integration-by-parts formulas as in Theorem \[thpoh\] for pseudodifferential operators $P$ of the form $$\label{psido}
Pu=\operatorname{Op}(p(x,\xi))u=\mathcal F^{-1}_{\xi \to x}(p(x,\xi )(\mathcal Fu)(\xi )),$$ where $\mathcal F$ is the Fourier transform $(\mathcal F u)(\xi )=\int_{\R^n}e^{-ix\cdot \xi }u(x)\, dx$. The symbol $p(x,\xi)$ has an asymptotic expansion $p(x,\xi )\sim\sum_{j\in{\Bbb N}_0}p_j(x,\xi )$ in homogeneous terms: $p_j(x,t\xi)=t^{2s-j}p_j(x,\xi)$, and $p$ is [*even*]{} in the sense that $p_j(x,-\xi)=(-1)^j p_j(x,\xi)$ for all $j$.
When $a$ in is $C^\infty (S^{n-1})$, then the operators - are pseudodifferential operators of the form . For these operators -, the lower-order terms $p_j$ ($j\ge 1$) vanish and $p_0$ is real and $x$-independent. Here $p_0(\xi)=\mathcal F_{y\to \xi }K(y)$, and $\mathcal A(\nu)=p_0(\nu)$. The fractional Laplacian $(-\Delta)^s$ corresponds to $a\equiv1$ in -, and to $p(x,\xi)=|\xi|^{2s}$ in .
In case of operators with no $x$-dependence and with real symbols $p(\xi)$, the analogue of proved in [@Grubb-Poh] is the following identity $$\begin{split}
-2\int_\Omega(x\cdot\nabla u)Pu\ dx=&\,\Gamma(1+s)^2\int_{\partial\Omega}p_0(\nu)\left(\frac{u}{d^s}\right)^2(x\cdot\nu)d\sigma\,+\\
&\qquad\qquad\qquad\qquad+n\int_{\Omega}u\,Pu\, dx-\int_{\Omega}u\,\operatorname{Op}(\xi\cdot\nabla p(\xi))u\,dx,
\end{split}$$ where $p_0(\nu)$ is the principal symbol of $P$ at $\nu$. Note that when the symbol $p(\xi)$ is homogeneous of degree $2s$ (hence equals $p_0(\xi)$), then $\xi\cdot\nabla p(\xi)=2s\,p(\xi)$, and thus we recover the identity .
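The last remark is just Euler's relation for homogeneous functions; a one-line check:

```latex
% If p(t\xi) = t^{2s}p(\xi) for all t>0, differentiating in t at t=1 gives
\xi\cdot\nabla p(\xi) \;=\; 2s\,p(\xi),
% so \operatorname{Op}(\xi\cdot\nabla p(\xi))u = 2s\,Pu, and the two bulk terms
% combine into (n-2s)\int_\Omega u\,Pu\,dx, matching the identity for L.
```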
The previous identity can be applied to operators $(-\Delta+m^2)^s$. Furthermore, the results in [@Grubb-Poh] allow $x$-dependent operators $P$, which result in extra integrals over $\Omega$. The methods in [@Grubb-Poh] are complex and quite different from the ones we use in [@RS-Poh; @RSV]. The domain $\Omega$ is assumed $C^\infty$ in [@Grubb-Poh].
Nonexistence results and other consequences {#sec4}
===========================================
As in the case of the Laplacian $\Delta$, the Pohozaev identity gives as an immediate consequence the following nonexistence result for operators -: If $f(u)=|u|^{p-1}u$ in , then
- If $\Omega$ is star-shaped and $p=\frac{n+2s}{n-2s}$, the only nonnegative weak solution is $u\equiv0$.
- If $\Omega$ is star-shaped and $p>\frac{n+2s}{n-2s}$, the only bounded weak solution is $u\equiv0$.
This nonexistence result was first established by Fall and Weth in [@FW] for $L=(-\Delta)^s$. They used the extension property of the fractional Laplacian, combined with the method of moving spheres.
On the other hand, the existence of solutions for subcritical powers $1<p<\frac{n+2s}{n-2s}$ was proved by Servadei-Valdinoci [@SV] for the class of operators -. Moreover, for the critical power $p=\frac{n+2s}{n-2s}$, the existence of solutions in annular-type domains was obtained in [@SSS].
The methods introduced in [@RS-Poh] to prove the Pohozaev identity were used in [@RS-nonex] to show nonexistence results for much more general operators $L$, including for example the following.
\[corlevy\] Let $L$ be any operator of the form $$\label{levy}
Lu(x)=-\sum_{i,j}a_{ij}\partial_{ij}u+{\rm PV}\int_{\R^n}\bigl(u(x)-u(x+y)\bigr)K(y)dy,$$ where $(a_{ij})$ is a positive definite symmetric matrix and $K$ satisfies the conditions in . Assume in addition that $$\label{condicio2levy}
K(y)|y|^{n+2}\ \textrm{is nondecreasing along rays from the origin}$$ and that $$|\nabla K(y)|\leq C\,\frac{K(y)}{|y|}\quad \textrm{for all}\ y\neq0.$$ Let $\Omega$ be any bounded star-shaped domain, and let $u$ be any bounded solution of with $f(u)=|u|^{p-1}u$. If $p\geq\frac{n+2}{n-2}$, then $u\equiv0$.
Similar nonexistence results were obtained in [@RS-nonex] for other types of nonlocal equations, including: kernels without homogeneity (such as sums of fractional Laplacians of different orders), nonlinear operators (such as fractional $p$-Laplacians), and operators of higher order ($s>1$).
Finally, let us give another immediate consequence of the Pohozaev identity .
\[cor-uniquecont\] Let $L$ be any operator of the form --, $\Omega$ be any bounded $C^{1,1}$ domain, and $\phi$ be any bounded solution to $$\left\{ \begin{array}{rcll}
L \phi &=&\lambda\phi&\textrm{in }\Omega \\
\phi&=&0&\textrm{in }\R^n\backslash\Omega,
\end{array}\right.$$ for some real $\lambda$. Then, $\phi/d^s$ is Hölder continuous up to the boundary, and the following unique continuation principle holds: $$\left.\frac{\phi}{d^s}\right|_{\partial\Omega}\equiv0\quad \textrm{on}\quad \partial\Omega\qquad \Longrightarrow\qquad \phi\equiv0\quad \textrm{in}\quad \Omega.$$
The same unique continuation property holds for any *subcritical* nonlinearity $f(x,u)$; see Corollary 1.4 in [@RSV].
[00]{}
R. Bass, *Regularity results for stable-like operators*, J. Funct. Anal. 257 (2009), 2693–2722.
B. Barrios, A. Figalli, X. Ros-Oton, *Free boundary regularity in the parabolic obstacle problem for the fractional Laplacian*, preprint arXiv (May 2016).
R. M. Blumenthal, R. K. Getoor, D. B. Ray, *On the distribution of first hits for the symmetric stable processes*, Trans. Amer. Math. Soc. 99 (1961), 540-554.
K. Bogdan, T. Grzywny, M. Ryznar, *Heat kernel estimates for the fractional Laplacian with Dirichlet conditions*, Ann. of Prob. 38 (2010), 1901-1923.
K. Bogdan, T. Grzywny, M. Ryznar, *Barriers, exit time and survival probability for unimodal Lévy processes*, Probab. Theory Relat. Fields 162 (2015), 155-198.
C. Bucur, E. Valdinoci, *Nonlocal Diffusion and Applications*, Lecture Notes of the Unione Matematica Italiana, Vol. 20, 2016.
L. Caffarelli, A. Figalli, *Regularity of solutions to the parabolic fractional obstacle problem*, J. Reine Angew. Math., 680 (2013), 191-233.
L. Caffarelli, X. Ros-Oton, J. Serra, *Obstacle problems for integro-differential operators: regularity of solutions and free boundaries*, Invent. Math., to appear.
Z.-Q. Chen, R. Song, *Estimates on Green functions and Poisson kernels for symmetric stable processes*, Math. Ann. 312 (1998), 465-501.
D. De Silva, O. Savin, *A note on higher regularity boundary Harnack inequality*, Disc. Cont. Dyn. Syst. 35 (2015), 6155-6163.
M. M. Fall, T. Weth, *Nonexistence results for a class of fractional elliptic boundary value problems*, J. Funct. Anal. 263 (2012), 2205-2227.
X. Fernandez-Real, X. Ros-Oton. *Regularity theory for general stable operators: parabolic equations*, preprint arXiv (Nov. 2015).
R. K. Getoor, *First passage times for symmetric stable processes in space*, Trans. Amer. Math. Soc. 101 (1961), 75–90.
G. Grubb, *Fractional Laplacians on domains, a development of Hörmander’s theory of $\mu$-transmission pseudodifferential operators*, Adv. Math. 268 (2015), 478-528.
G. Grubb, *Local and nonlocal boundary conditions for $\mu$-transmission and fractional elliptic pseudodifferential operators*, Anal. PDE 7 (2014), 1649-1682.
G. Grubb, *Integration by parts and Pohozaev identities for space-dependent fractional-order operators*, J. Differential Equations 261 (2016), 1835-1879.
Y. Jhaveri, R. Neumayer, *Higher regularity of the free boundary in the obstacle problem for the fractional Laplacian*, preprint arXiv (Jun. 2016).
M. Kassmann, A. Mimica, *Intrinsic scaling properties for nonlocal operators*, J. Eur. Math. Soc. (JEMS), to appear.
T. Kulczycki, *Properties of Green function of symmetric stable processes*, Probab. Math. Statist. 17 (1997), 339–364.
N. S. Landkof, *Foundations of Modern Potential Theory*, Springer, New York, 1972.
S. I. Pohozaev, *On the eigenfunctions of the equation $\Delta u + \lambda f(u) = 0$*, Dokl. Akad. Nauk SSSR 165 (1965), 1408-1411.
X. Ros-Oton, *Nonlocal elliptic equations in bounded domains: a survey*, Publ. Mat. 60 (2016), 3-26.
X. Ros-Oton, J. Serra, *The Dirichlet problem for the fractional Laplacian: regularity up to the boundary*, J. Math. Pures Appl. 101 (2014), 275-302.
X. Ros-Oton, J. Serra, *The Pohozaev identity for the fractional Laplacian*, Arch. Rat. Mech. Anal 213 (2014), 587-628.
X. Ros-Oton, J. Serra. *Nonexistence results for nonlocal equations with critical and supercritical nonlinearities*, Comm. Partial Differential Equations 40 (2015), 115-133.
X. Ros-Oton, J. Serra, *Boundary regularity for fully nonlinear integro-differential equations*, Duke Math. J. 165 (2016), 2079-2154.
X. Ros-Oton, J. Serra, *Regularity theory for general stable operators*, J. Differential Equations 260 (2016), 8675-8715.
X. Ros-Oton, J. Serra, *Boundary regularity estimates for nonlocal elliptic equations in $C^1$ and $C^{1,\alpha}$ domains*, preprint arXiv (Dec. 2015).
X. Ros-Oton, J. Serra, E. Valdinoci, *Pohozaev identities for anisotropic integro-differential operators*, preprint arXiv (Feb. 2015).
S. Secchi, N. Shioji, M. Squassina, *Coron problem for fractional equations*, Differential Integral Equations 28 (2015), 103-118.
J. Serra, *$C^{\sigma+\alpha}$ regularity for concave nonlocal fully nonlinear elliptic equations with rough kernels*, Calc. Var. Partial Differential Equations 54 (2015), 3571-3601.
R. Servadei, E. Valdinoci, *Mountain pass solutions for non-local elliptic operators*, J. Math. Anal. Appl. 389 (2012), 887-898.
H. Sikic, R. Song, Z. Vondracek, *Potential theory of geometric stable processes*, Probab. Theory Relat. Fields 135 (2006), 547-575.
[^1]: The author was supported by NSF grant DMS-1565186 (US) and by MINECO grant MTM2014-52402-C3-1-P (Spain)
[^2]: Here, we denote $\|w\|_{L^1_s(\R^n)}:=\int_{\R^n}\frac{w(x)}{1+|x|^{n+2s}}\,dx.$
[^3]: In fact, to avoid singularities inside $\Omega$, we define $d(x)$ as a positive function that coincides with $\textrm{dist}(x,\R^n\setminus\Omega)$ in a neighborhood of $\partial\Omega$ and is as regular as $\partial\Omega$ inside $\Omega$.
[^4]: In , we have corrected the sign on the boundary contribution, which was incorrectly stated in [@RS-Poh Theorem 1.9].
---
abstract: 'The category $\fin$ of [*symmetric-simplicial operators*]{} is obtained by enlarging the category $\ord$ of monotonic functions between the sets $\{0,1,\dots n\}$ to include all functions between the same sets. Marco Grandis [@Grandis] has given a presentation of $\fin$ using the standard generators $d_i$ and $s_i$ of $\ord$ as well as the adjacent transpositions $t_i$ which generate the permutations in $\fin$. The purpose of this note is to establish an alternative presentation of $\fin$ in which the codegeneracies $s_i$ are replaced by [*quasi-codegeneracies*]{} $u_i$. We also prove a unique factorization theorem for products of $d_i$ and $u_j$ analogous to the standard unique factorizations in $\ord$. This presentation has been used by the author to construct [*symmetric hypercrossed complexes*]{} (to be published elsewhere) which are algebraic models for homotopy types of spaces based on the hypercrossed complexes of [@CarrascoCegarra].'
author:
- Eric Ramón Antokoletz
bibliography:
- 'Preprint\_4\_QuasiMonoticPresFin.bib'
title: 'An Alternative Presentation of the Symmetric-Simplicial Category'
---
Introduction
============
Grandis’s Presentation\
of the Symmetric-Simplicial Category
====================================
Proof of the Alternative Presentation\
of the Symmetric-Simplicial Category
======================================
The Algebra of the Symmetric-Simplicial Category
================================================
---
abstract: |
A novel method to experimentally study the dynamics of long-living excitons in coupled quantum well semiconductor heterostructures is presented. Lithographically defined top gate electrodes imprint in-plane artificial potential landscapes for excitons via the quantum confined Stark effect. Excitons are shuttled laterally in a time-dependent potential landscape defined by an interdigitated gate structure. Long-range drift exceeding a distance of $150$$\mu$m at an exciton drift velocity ${v_{\text{d}}}\gtrsim 10^3$$\mathrm{m} /
\mathrm{s}$ is observed in a gradient potential formed by a resistive gate stripe.
author:
- Andreas Gärtner
- Dieter Schuh
- 'Jörg P. Kotthaus'
title: 'Dynamics of Long-Living Excitons in Tunable Potential Landscapes'
---
[^1]
Introduction {#introduction .unnumbered}
============
In the past few years, new material systems such as coupled quantum well (QW) heterostructures have emerged. They make it possible to host long-living excitons with lifetimes of up to $\approx$30$\mu$s [@SnoPRL05]. Being composite bosonic particles made of an electron and a hole, excitons are expected to undergo Bose-Einstein condensation ([[BEC]{}]{}) at low temperatures and sufficiently high densities [@KelJETP68]. To date, there is no unambiguous experimental evidence for excitonic [[BEC]{}]{} [@ButNat02b; @SnoNat02; @RapPRL04; @SnoPss03]. One reason is the lack of spatial control over the exciton gas, which leads to rapid expansion and dilution of an initially dense exciton cloud. In [coupled QW]{} samples, exciton confinement has been observed only in intrinsic [“natural traps”]{} [@ButNat02a] and in mechanically stressed configurations [@SnoAPL99]. However, neither allows in-situ control of the trapping potential. A profound understanding of how to control exciton dynamics is essential for defining confinement potentials for [[BEC]{}]{} experiments. In this contribution, voltage-tunable potential landscapes for excitons are experimentally demonstrated, enabling spatial and temporal control of exciton dynamics. Using the quantum confined Stark effect ([[QCSE]{}]{}), laterally modulated excitonic potential landscapes are induced in [[coupled QW]{}]{}s. Exciton shuttling between two electrodes in a time-varying lateral potential landscape is demonstrated. In the last section, long-range exciton drift exceeding 150$\mu$m is observed in a gradient potential defined by a resistive gate, and an estimate for the exciton drift velocity is given.
Sample and Experimental Details {#sample-and-experimental-details .unnumbered}
===============================
![ (a) Layout of the heterostructure containing [[coupled QW]{}]{}s. (b) Formation of a spatially indirect exciton (dashed ellipse) in [[coupled QW]{}]{}s. The exciton’s life-time and energy are both controllable via an external electric field applied along the z-axis. []{data-label="fig:indexc"}](fig1){width="0.9\linewidth"}
The starting point is an epitaxially grown [coupled QW]{} heterostructure depicted in Fig. \[fig:indexc\](a). Two GaAs layers with a thickness of 8nm each form the [[QW]{}]{}s, while the coupling strength is set by a 4nm tunnel barrier made of Al$_{0.3}$Ga$_{0.7}$As. The center of the [coupled QW]{} structure is located 60nm below the surface to ensure excellent optical access. At a depth of 370nm, an n-doped GaAs layer serves as back gate. In conjunction with lithographically defined metallic gate structures deposited on the sample, an electric field parallel to the crystal growth direction can be applied and spatially varied. As sketched in Fig. \[fig:indexc\](b), the resulting voltage-tunable tilt of the band structure allows the formation of spatially indirect excitons (dashed ellipse). Initially photo-generated electrons (open circle) and holes (black circle) are spatially separated by the tunnel barrier. The excitonic lifetime of typically $\approx$1ns in bulk material is thereby extended into the $\mu$s regime, far exceeding the excitonic cooling time of typically $\approx 400$ps [@DamPRB90]. The energetic red-shift of the indirect excitons, mediated by the [QCSE]{}, can be tuned via the strength of the external electric field. Lithographically structured top gate designs allow the energy of the excitons within the [coupled QW]{} plane to be tuned as a function of position, so that potential landscapes for excitons of controlled geometries are formed. This approach also allows temporal in-situ control of the potential landscape via the applied gate voltages.
Time-resolved photoluminescence ([PL]{}) is used to follow the spatial and temporal decay of excitons. The experiments are carried out in a continuous flow cryostat at a temperature of $3.8$K. The [[coupled QW]{}]{}s are selectively populated with indirect excitons by a pulsed laser with a wavelength of $\lambda=680$nm. The diameter of the laser spot on the sample is $< 20$$\mu$m. The delayed [PL]{} emission occurring at wavelengths of $\gtrsim 800$nm is detected normal to the surface via a gated intensified CCD camera. The spatial resolution of $\approx 1$$\mu$m makes it possible to directly reveal the lateral distribution of excitons. A long-pass filter blocks non-excitonic [PL]{} at wavelengths below $\approx 780$nm. The camera’s shutter is set to an exposure time of 50ns. Each experiment is performed at a repetition rate of 100kHz and is integrated for 40s in order to obtain a comfortable signal-to-noise ratio.
Shuttling Excitons {#shuttling-excitons .unnumbered}
==================
![ Shuttling of indirect excitons in a time-dependent lateral potential modulation. (a) An interdigitated gate design defines a laterally undulated potential landscape for excitons. The strength of the electric field is indicated by the density of vertical arrows. Inset: top view cutout of the gate design. (b) Chronology of laser excitation ($t=0$ns) and the voltage configurations at times of $t=200$ns (I), $t=350$ns (II), and $t=440$ns (III). (c) Images of the lateral [PL]{} distribution taken at I, II, and III. Long-living excitons (bright [PL]{} lines) are shuttling between the gate fingers along the y-direction. []{data-label="fig:idgate"}](fig2){width="0.9\linewidth"}
A semi-transparent interdigitated gate structure with a periodicity of 4$\mu$m is deposited on top of the sample, similarly to ref. [@ZimAPL98]. Fig. \[fig:idgate\](a) sketches two adjacent gate fingers labeled [“gate A”]{} and [“gate B”]{}. They are made of NiCr (10nm thickness) and measure a length of 500$\mu$m. Fig. \[fig:idgate\](b) outlines the course of the experiment. A bias voltage of ${U_{\text{B}}}= -450$mV and a differential voltage of ${U_{\Delta}}=50$mV are applied to the gates and define an undulated lateral potential landscape for long-living indirect excitons within the plane of the [[coupled QW]{}]{}s. The lateral potential modulation is chosen to be sufficiently small to avoid exciton ionisation [@KraPRL02]. The [[coupled QW]{}]{}s are then populated with excitons via laser illumination for $50$ns, with the time $t=0$ns marking the end of the excitation pulse. At a time of $t=200$ns after laser illumination (indicated by [“I”]{}), the lateral distribution of the emitted [PL]{} is imaged by the [intensified CCD]{} camera. The voltages of gate A and gate B are exchanged at a time of $t=300$ns, and a second image (indicated by [“II”]{}) is taken at a time of $t=350$ns. A second gate voltage reversal follows at $t=400$ns, and a third image (indicated by [“III”]{}) is taken at $t=440$ns. Cutouts of the image data obtained in this experiment are shown in Fig. \[fig:idgate\](c). With the positions of gates A and B indicated, the [PL]{} is seen to be aligned with respect to the gate fingers. Being [“high-field-seekers”]{}, excitons accumulate underneath the gate with the stronger electric field, minimizing their potential energy.
![ Line-by-line integrated [PL]{} yielded from the data shown in Fig. \[fig:idgate\](c). A constant offset was added to each curve for clarity. Excitons are collected underneath the gate finger of larger electric field (maximum in [[PL]{}]{}-intensity). Excitonic motion is initiated by swapping the gate polarities (I $\to$ II and II $\to$ III). []{data-label="fig:id-osc"}](fig3){width="0.9\linewidth"}
A line-by-line integrated analysis of the data is depicted in Fig. \[fig:id-osc\]. The data were corrected for unwanted background light. Sinusoidal curves (I-III) were fitted to the [PL]{} intensity data corresponding to the respective images in Fig. \[fig:idgate\](c). In all curves the 4-$\mu$m periodicity of the interdigitated gate structure is nicely reproduced. By swapping the gate voltages, the repulsive and attractive actions of the gate fingers are interchanged. As can be seen in curve II, the [PL]{} is shifted by 2$\mu$m compared to curve I, indicating that the mobile excitons follow the moving potential. The second gate voltage reversal (II $\to$ III) completes the excitonic shuttling process. From curve I through curve III, the [PL]{} amplitude diminishes, in agreement with the fact that the number of excitons decays in time due to recombination.
Long-range drift {#long-range-drift .unnumbered}
================
![ (a) A resistive gate stripe on top of the sample (grey) is used to define a linear gradient potential for excitons. The strength of the electric field is indicated by the density of vertical arrows. (b) Excitons drift along the gradient. The slope is tunable via the voltage difference ${U_{\Delta}}$. (c) Greyscale image of the [PL]{} distribution taken with the gradient potential switched off (${U_{\Delta}}=0$V). Excitons are created underneath the black disk located at the rim of the resistive gate (dashed region). (d) Excitonic drift over more than $150$$\mu$m is observed at a voltage difference of ${U_{\Delta}}=+1$V. []{data-label="fig:gradpot"}](fig4){width="0.9\linewidth"}
In order to study long-range excitonic drift, a resistive gate stripe was defined on top of the heterostructure, represented by the grey area in Fig. \[fig:gradpot\](a). The length of the semitransparent titanium gate is 500$\mu$m, its width is 50$\mu$m, and its thickness is 10nm. A bias voltage of ${U_{\text{B}}}=-600$mV is applied, corresponding to a maximal vertical electric field of $3.5 \times 10^6$V/m at the left side of the gate. This estimate accounts for an intrinsic bias voltage of $\approx -700$mV provided by the metal/semiconductor interface. An optional voltage difference ${U_{\Delta}}$ of $\pm 1$V over the gate stripe can be applied. The resulting lateral electric field of $\approx 3 \times 10^3$V/m is small compared to the vertical electric field. Both are set to temporally constant values during the experiment. Long-living indirect excitons are then created by illuminating the sample with a laser pulse of 50ns duration, assisted by the bias voltage ${U_{\text{B}}}$. Via the voltage drop ${U_{\Delta}}$ over the resistive gate stripe, a gradient potential for excitons can be induced in the [[coupled QW]{}]{}-layer as sketched in Fig. \[fig:gradpot\](b). The slope of the [[QCSE]{}]{}-mediated gradient is tunable via the voltage difference ${U_{\Delta}}$. The excitation laser beam was focused onto the rim of the gate stripe, located underneath the black disk shown in Fig. \[fig:gradpot\](c). This configuration makes it possible to spatially separate mobile excitons in the [[coupled QW]{}]{}s from slowly decaying stationary [PL]{} originating from bulk GaAs defects. At a time of 50ns after the illumination, a spatially resolved top-view image of the delayed PL is taken by the [intensified CCD]{} camera. Fig. \[fig:gradpot\](c) shows the experimental result without an applied voltage difference (${U_{\Delta}}=0$V). 
Since the gradient potential is switched off, no directed drift is observed; instead, a uniform diffusive excitonic cloud spreads in the vicinity of the excitation spot. In Fig. \[fig:gradpot\](d), ${U_{\Delta}}$ is set to $+1$V, exposing the excitons to a gradient potential as shown in Fig. \[fig:gradpot\](b). Under its influence, the excitons below the gate stripe start to travel along the y-axis towards the region of stronger vertical electric field. Setting the voltage difference ${U_{\Delta}}$ to $-1$V reverses the drift direction (not shown). Drift of individual electrons and holes can be excluded, as they would be forced to travel in opposite directions by the voltage difference ${U_{\Delta}}$. Due to the spatial separation, no recombination PL would then occur [@KraPRL02]. It is worth noting that, in contrast to ref. [@HagAPL95], in this experiment the drift covers a macroscopic distance exceeding 150$\mu$m and is limited only by the length of the gate stripe. A first estimate of the lower limit of the drift velocity ${v_{\text{d}}}$ of indirect excitons can be given. The excitons drift during a period of $\le 150$ns from the beginning of the laser illumination until the end of the camera exposure. Together with the drift length measured to be $\ge
150$$\mu$m, a minimum drift velocity ${v_{\text{d}}}= 10^3$$ \mathrm{m} / \mathrm{s}$ is deduced for this configuration, being comparable to the speed of sound in GaAs.
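The velocity estimate is a simple ratio of the quoted drift length and drift time; the following snippet (a sanity check using only the numbers stated above) reproduces it:

```python
# Lower bound on the exciton drift velocity: v_d = drift length / drift time.
drift_length = 150e-6  # m; minimum observed drift distance (>= 150 um)
drift_time = 150e-9    # s; maximum available drift time, from the start of
                       # illumination to the end of the camera exposure (<= 150 ns)

v_d = drift_length / drift_time  # m/s
print(v_d)  # 1000.0, i.e. v_d >= 10^3 m/s
```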
Summary {#summary .unnumbered}
=======
Our experiments demonstrate that voltage-tunable artificial potentials can be employed to induce excitonic drift over macroscopic distances. This enables us to design and to test artificial excitonic traps needed to accumulate large exciton densities, a prerequisite for the observation of [BEC]{}. We thank J. Krauß and A. W. Holleitner for valuable discussions as well as the Deutsche Forschungsgemeinschaft for financial support.
[^1]: Corresponding author.\
E-mail: [email protected]
---
abstract: 'In this paper we construct the jet geometrical extensions of the KCC-invariants, which characterize a given second-order system of differential equations on the 1-jet space $J^{1}(\mathbb{R} ,M)$. A generalized theorem of characterization of our jet geometrical KCC-invariants is also presented.'
author:
- Vladimir Balan and Mircea Neagu
- '[June 2009; Last revised December 2009 (an added bibliographical item)]{}'
title: 'Jet geometrical extension of the KCC-invariants'
---
**Mathematics Subject Classification (2000):** 58B20, 37C10, 53C43.
**Key words and phrases:** 1-jet spaces, temporal and spatial semisprays, nonlinear connections, SODEs, jet $h-$KCC-invariants.
Geometrical objects on 1-jet spaces
===================================
We first recall several differential-geometric properties of 1-jet spaces. The 1-jet bundle$$\xi =(J^{1}(\mathbb{R},M),\pi _{1},\mathbb{R}\times M)$$is a vector bundle over the product manifold $\mathbb{R}\times M$, with fibre of type $\mathbb{R}^{n}$, where $n$ is the dimension of the *spatial* manifold $M$. If the spatial manifold $M$ has the local coordinates $(x^{i})_{i=\overline{1,n}}$, then we shall denote the local coordinates of the 1-jet total space $J^{1}(\mathbb{R},M)$ by $(t,x^{i},x_{1}^{i})$; these transform by the rules [@8]$$\left\{
\begin{array}{l}
\widetilde{t}=\widetilde{t}(t)\medskip \\
\widetilde{x}^{i}=\widetilde{x}^{i}(x^{j})\medskip \\
\widetilde{x}_{1}^{i}=\dfrac{\partial \widetilde{x}^{i}}{\partial x^{j}}\dfrac{dt}{d\widetilde{t}}\cdot x_{1}^{j}.\end{array}\right. \label{rgg}$$In the geometrical study of the 1-jet bundle, a central role is played by the *distinguished tensors* ($d-$tensors).
A geometrical object $D=\left( D_{1k(1)(l)...}^{1i(j)(1)...}\right) $ on the 1-jet vector bundle, whose local components transform by the rules$$D_{1k(1)(u)...}^{1i(j)(1)...}=\widetilde{D}_{1r(1)(s)...}^{1p(m)(1)...}\frac{dt}{d\widetilde{t}}\frac{\partial x^{i}}{\partial \widetilde{x}^{p}}\left(
\frac{\partial x^{j}}{\partial \widetilde{x}^{m}}\frac{d\widetilde{t}}{dt}\right) \frac{d\widetilde{t}}{dt}\frac{\partial \widetilde{x}^{r}}{\partial
x^{k}}\left( \frac{\partial \widetilde{x}^{s}}{\partial x^{u}}\frac{dt}{d\widetilde{t}}\right) ..., \label{tr-rules-$d-$tensors}$$is called a *$d-$tensor field*.
The use of parentheses for certain indices of the local components$$D_{1k(1)(l)...}^{1i(j)(1)...}$$of the distinguished tensor field $D$ on the 1-jet space is motivated by the fact that the pair of indices “${}_{(1)}^{(j)}$” or “${}_{(l)}^{(1)}$” behaves like a single index.
\[Liouville\] The geometrical object$$\mathbf{C}=\mathbf{C}_{(1)}^{(i)}\dfrac{\partial }{\partial x_{1}^{i}},$$where $\mathbf{C}_{(1)}^{(i)}=x_{1}^{i}$, represents a $d-$tensor field on the 1-jet space; this is called the *canonical Liouville $d-$tensor field* of the 1-jet bundle and is a global geometrical object.
\[normal\]Let $h=(h_{11}(t))$ be a Riemannian metric on the relativistic time axis $\mathbb{R}$. The geometrical object $$\mathbf{J}_{h}=J_{(1)1j}^{(i)}\dfrac{\partial }{\partial x_{1}^{i}}\otimes
dt\otimes dx^{j},$$where $J_{(1)1j}^{(i)}=h_{11}\delta _{j}^{i}$ is a $d-$tensor field on $J^{1}(\mathbb{R},M)$, which is called the $h$*-normalization $d-$tensor field* of the 1-jet space and is a global geometrical object.
In the Riemann-Lagrange differential geometry of the 1-jet spaces developed in [@7], [@8], important rôles are also played by geometrical objects such as the *temporal* and *spatial semisprays*, together with the *jet nonlinear connections*.
A set of local functions $H=\left( H_{(1)1}^{(j)}\right) $ on $J^{1}(\mathbb{R},M),$ which transform by the rules$$2\widetilde{H}_{(1)1}^{(k)}=2H_{(1)1}^{(j)}\left( \frac{dt}{d\widetilde{t}}\right) ^{2}\frac{\partial \widetilde{x}^{k}}{\partial x^{j}}-\frac{dt}{d\widetilde{t}}\frac{\partial \widetilde{x}_{1}^{k}}{\partial t},
\label{tr-rules-t-s}$$is called a *temporal semispray* on $J^{1}(\mathbb{R} ,M)$.
\[H0\] Let us consider a Riemannian metric $h=(h_{11}(t))$ on the temporal manifold $\mathbb{R} $ and let$$H_{11}^{1}=\frac{h^{11}}{2}\frac{dh_{11}}{dt},$$where $h^{11}=1/h_{11}$, be its Christoffel symbol. Taking into account that we have the transformation rule$$\widetilde{H}_{11}^{1}=H_{11}^{1}\frac{dt}{d\widetilde{t}}+\frac{d\widetilde{t}}{dt}\frac{d^{2}t}{d\widetilde{t}^{2}}, \label{t-Cris-symb}$$we deduce that the local components$$\mathring{H}_{(1)1}^{(j)}=-\frac{1}{2}H_{11}^{1}x_{1}^{j}$$define a temporal semispray $\mathring{H}=\left( \mathring{H}_{(1)1}^{(j)}\right) $ on $J^{1}(\mathbb{R} ,M)$. This is called the *canonical temporal semispray associated to the temporal metric* $h(t)$.
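The deduction in Example \[H0\] can be unpacked as follows (our sketch of the routine verification, not displayed in the original): using (\[t-Cris-symb\]) and the fact that, by (\[rgg\]), $\partial \widetilde{x}_{1}^{k}/\partial t=\dfrac{\partial \widetilde{x}^{k}}{\partial x^{j}}\,x_{1}^{j}\,\dfrac{d\widetilde{t}}{dt}\dfrac{d^{2}t}{d\widetilde{t}^{2}}$, one computes

```latex
2\widetilde{\mathring{H}}{}_{(1)1}^{(k)}
  =-\widetilde{H}_{11}^{1}\,\widetilde{x}_{1}^{k}
  =-\left( H_{11}^{1}\frac{dt}{d\widetilde{t}}
     +\frac{d\widetilde{t}}{dt}\frac{d^{2}t}{d\widetilde{t}^{2}}\right)
    \frac{\partial \widetilde{x}^{k}}{\partial x^{j}}
    \frac{dt}{d\widetilde{t}}\,x_{1}^{j}
  =2\mathring{H}_{(1)1}^{(j)}\left( \frac{dt}{d\widetilde{t}}\right)^{2}
    \frac{\partial \widetilde{x}^{k}}{\partial x^{j}}
  -\frac{dt}{d\widetilde{t}}\frac{\partial \widetilde{x}_{1}^{k}}{\partial t},
```

which is exactly the transformation rule (\[tr-rules-t-s\]).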
A set of local functions $G=\left( G_{(1)1}^{(j)}\right) ,$ which transform by the rules$$2\widetilde{G}_{(1)1}^{(k)}=2G_{(1)1}^{(j)}\left( \frac{dt}{d\widetilde{t}}\right) ^{2}\frac{\partial \widetilde{x}^{k}}{\partial x^{j}}-\frac{\partial
x^{m}}{\partial \widetilde{x}^{j}}\frac{\partial \widetilde{x}_{1}^{k}}{\partial x^{m}}\widetilde{x}_{1}^{j}, \label{tr-rules-s-s}$$is called a *spatial semispray* on $J^{1}(\mathbb{R} ,M)$.
\[G0\] Let $\varphi =(\varphi _{ij}(x))$ be a Riemannian metric on the spatial manifold $M$ and let us consider$$\gamma _{jk}^{i}=\frac{\varphi ^{im}}{2}\left( \frac{\partial \varphi _{jm}}{\partial x^{k}}+\frac{\partial \varphi _{km}}{\partial x^{j}}-\frac{\partial
\varphi _{jk}}{\partial x^{m}}\right)$$its Christoffel symbols. Taking into account that we have the transformation rules$$\widetilde{\gamma }_{qr}^{p}=\gamma _{jk}^{i}\frac{\partial \widetilde{x}^{p}}{\partial x^{i}}\frac{\partial x^{j}}{\partial \widetilde{x}^{q}}\frac{\partial x^{k}}{\partial \widetilde{x}^{r}}+\frac{\partial \widetilde{x}^{p}}{\partial x^{l}}\frac{\partial ^{2}x^{l}}{\partial \widetilde{x}^{q}\partial \widetilde{x}^{r}}, \label{s-Cris-symb}$$we deduce that the local components$$\mathring{G}_{(1)1}^{(j)}=\frac{1}{2}\gamma _{kl}^{j}x_{1}^{k}x_{1}^{l}$$define a spatial semispray $\mathring{G}=\left( \mathring{G}_{(1)1}^{(j)}\right) $ on $J^{1}(\mathbb{R} ,M)$. This is called the *canonical spatial semispray associated to the spatial metric* $\varphi (x)$.
A set of local functions $\Gamma =\left(
M_{(1)1}^{(j)},N_{(1)i}^{(j)}\right) $ on $J^{1}(\mathbb{R} ,M),$ which transform by the rules$$\widetilde{M}_{(1)1}^{(k)}=M_{(1)1}^{(j)}\left( \frac{dt}{d\widetilde{t}}\right) ^{2}\frac{\partial \widetilde{x}^{k}}{\partial x^{j}}-\frac{dt}{d\widetilde{t}}\frac{\partial \widetilde{x}_{1}^{k}}{\partial t}
\label{tr-rules-t-nlc}$$and$$\widetilde{N}_{(1)l}^{(k)}=N_{(1)i}^{(j)}\frac{dt}{d\widetilde{t}}\frac{\partial x^{i}}{\partial \widetilde{x}^{l}}\frac{\partial \widetilde{x}^{k}}{\partial x^{j}}-\frac{\partial x^{m}}{\partial \widetilde{x}^{l}}\frac{\partial \widetilde{x}_{1}^{k}}{\partial x^{m}}, \label{tr-rules-s-nlc}$$is called a *nonlinear connection* on the 1-jet space $J^{1}(\mathbb{R},M)$.
Let us consider that $(\mathbb{R} ,h_{11}(t))$ and $(M,\varphi _{ij}(x))$ are Riemannian manifolds having the Christoffel symbols $H_{11}^{1}(t)$ and $\gamma _{jk}^{i}(x)$. Then, using the transformation rules (\[rgg\]), (\[t-Cris-symb\]) and (\[s-Cris-symb\]), we deduce that the set of local functions$$\mathring{\Gamma}=\left( \mathring{M}_{(1)1}^{(j)},\mathring{N}_{(1)i}^{(j)}\right) ,$$where$$\mathring{M}_{(1)1}^{(j)}=-H_{11}^{1}x_{1}^{j}\text{ \ \ and \ \ }\mathring{N}_{(1)i}^{(j)}=\gamma _{im}^{j}x_{1}^{m},$$represents a nonlinear connection on the 1-jet space $J^{1}(\mathbb{R} ,M)$. This jet nonlinear connection is called the *canonical nonlinear connection attached to the pair of Riemannian metrics* $(h(t),\varphi (x))$.
In the sequel, let us study the geometrical relations between *temporal* or *spatial semisprays* and *nonlinear connections* on the 1-jet space $J^{1}(\mathbb{R} ,M)$. In this direction, using the local transformation laws (\[tr-rules-t-s\]), (\[tr-rules-t-nlc\]) and (\[rgg\]), respectively the transformation laws (\[tr-rules-s-s\]), (\[tr-rules-s-nlc\]) and (\[rgg\]), by direct local computation, we find the following geometrical results:
a\) The *temporal semisprays* $H=(H_{(1)1}^{(j)})$ and the sets of *temporal components of nonlinear connections* $\Gamma _{\text{temporal}}=(M_{(1)1}^{(j)})$ are in one-to-one correspondence on the 1-jet space $J^{1}(\mathbb{R},M)$, via: $$M_{(1)1}^{(j)}=2H_{(1)1}^{(j)},\qquad H_{(1)1}^{(j)}=\frac{1}{2}M_{(1)1}^{(j)}.$$
b\) The *spatial semisprays* $G=(G_{(1)1}^{(j)})$ and the sets of *spatial components of nonlinear connections* $\Gamma _{\text{spatial}}=(N_{(1)k}^{(j)})$ are connected on the 1-jet space $J^{1}(\mathbb{R},M)$, via the relations: $$N_{(1)k}^{(j)}=\frac{\partial G_{(1)1}^{(j)}}{\partial x_{1}^{k}},\qquad
G_{(1)1}^{(j)}=\frac{1}{2}N_{(1)m}^{(j)}x_{1}^{m}.$$
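As an illustrative aside (our addition, not part of the original argument; it assumes the Python library `sympy`), the relations in b) can be replicated symbolically for the canonical spatial semispray of Example \[G0\] in dimension $n=2$, writing `y` for the fibre coordinates $x_{1}^{i}$:

```python
import sympy as sp

n = 2
y = sp.symbols('y1 y2')   # fibre coordinates x_1^i
x = sp.symbols('x1 x2')   # spatial coordinates

# Symmetric "Christoffel-like" coefficients gamma^j_{kl}(x): the symmetric
# naming makes g[j][k][l] and g[j][l][k] the very same sympy Function.
g = [[[sp.Function(f'g{j}{min(k, l)}{max(k, l)}')(*x)
       for l in range(n)] for k in range(n)] for j in range(n)]

# Canonical spatial semispray G^j = (1/2) gamma^j_{kl} y^k y^l
G = [sp.Rational(1, 2)*sum(g[j][k][l]*y[k]*y[l]
                           for k in range(n) for l in range(n))
     for j in range(n)]

# Spatial part of the nonlinear connection: N^j_k = dG^j / dy^k
N = [[sp.diff(G[j], y[k]) for k in range(n)] for j in range(n)]

for j in range(n):
    for k in range(n):
        # N^j_k = gamma^j_{km} y^m  (Example on the canonical connection)
        assert sp.simplify(N[j][k] - sum(g[j][k][m]*y[m] for m in range(n))) == 0
    # G^j = (1/2) N^j_m y^m  (relation b))
    assert sp.simplify(G[j] - sp.Rational(1, 2)*sum(N[j][m]*y[m] for m in range(n))) == 0
```

The first assertion recovers $\mathring{N}_{(1)k}^{(j)}=\gamma _{km}^{j}x_{1}^{m}$, the second the Euler-type identity $\mathring{G}_{(1)1}^{(j)}=\frac{1}{2}\mathring{N}_{(1)m}^{(j)}x_{1}^{m}$.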
Jet geometrical KCC-theory
==========================
In this Section we generalize on the 1-jet space $J^{1}(\mathbb{R},M)$ the basics of the KCC-theory ([@1], [@2], [@3], [@9]). In this respect, let us consider on $J^{1}(\mathbb{R} ,M)$ a second-order system of differential equations of local form$$\frac{d^{2}x^{i}}{dt^{2}}+F_{(1)1}^{(i)}(t,x^{k},x_{1}^{k})=0,\text{ \ \ }i=\overline{1,n}, \label{SODE}$$where $x_{1}^{k}=dx^{k}/dt$ and the local components $F_{(1)1}^{(i)}(t,x^{k},x_{1}^{k})$ transform under a change of coordinates (\[rgg\]) by the rules$$\widetilde{F}_{(1)1}^{(r)}=F_{(1)1}^{(j)}\left( \frac{dt}{d\widetilde{t}}\right) ^{2}\frac{\partial \widetilde{x}^{r}}{\partial x^{j}}-\frac{dt}{d\widetilde{t}}\frac{\partial \widetilde{x}_{1}^{r}}{\partial t}-\frac{\partial x^{m}}{\partial \widetilde{x}^{j}}\frac{\partial \widetilde{x}_{1}^{r}}{\partial x^{m}}\widetilde{x}_{1}^{j}. \label{transformations-F}$$
The second-order system of differential equations (\[SODE\]) is invariant under a change of coordinates (\[rgg\]).
Using a temporal Riemannian metric $h_{11}(t)$ on $\mathbb{R} $ and taking into account the transformation rules (\[tr-rules-t-s\]) and ([tr-rules-s-s]{}), we can rewrite the SODEs (\[SODE\]) in the following form:$$\frac{d^{2}x^{i}}{dt^{2}}-H_{11}^{1}x_{1}^{i}+2G_{(1)1}^{(i)}(t,x^{k},x_{1}^{k})=0,\text{ \ \ }i=\overline{1,n},$$where$$G_{(1)1}^{(i)}=\frac{1}{2}F_{(1)1}^{(i)}+\frac{1}{2}H_{11}^{1}x_{1}^{i}$$are the components of a spatial semispray on $J^{1}(\mathbb{R} ,M)$. Moreover, the coefficients of the spatial semispray $G_{(1)1}^{(i)}$ produce the spatial components $N_{(1)j}^{(i)}$ of a nonlinear connection $\Gamma $ on the 1-jet space $J^{1}(\mathbb{R} ,M)$, by putting$$N_{(1)j}^{(i)}=\frac{\partial G_{(1)1}^{(i)}}{\partial x_{1}^{j}}=\frac{1}{2}\frac{\partial F_{(1)1}^{(i)}}{\partial x_{1}^{j}}+\frac{1}{2}H_{11}^{1}\delta _{j}^{i}.$$
In order to find the basic jet differential geometrical invariants of the system (\[SODE\]) (see Kosambi [@6], Cartan [@4] and Chern [@5]) under the jet coordinate transformations (\[rgg\]), we define the $h-$*KCC-covariant derivative of a $d-$tensor of type* $T_{(1)}^{(i)}(t,x^{k},x_{1}^{k})$ on the 1-jet space $J^{1}(\mathbb{R} ,M)$ via$$\begin{aligned}
\frac{\overset{h}{D}T_{(1)}^{(i)}}{dt} &=&\frac{dT_{(1)}^{(i)}}{dt}+N_{(1)r}^{(i)}T_{(1)}^{(r)}-H_{11}^{1}T_{(1)}^{(i)}= \\
&=&\frac{dT_{(1)}^{(i)}}{dt}+\frac{1}{2}\frac{\partial F_{(1)1}^{(i)}}{\partial x_{1}^{r}}T_{(1)}^{(r)}-\frac{1}{2}H_{11}^{1}T_{(1)}^{(i)},\end{aligned}$$where the Einstein summation convention is used throughout.
The $h-$*KCC-covariant derivative* components $\dfrac{\overset{h}{D}T_{(1)}^{(i)}}{dt}$ transform under a change of coordinates (\[rgg\]) as a $d-$tensor of type $\mathcal{T}_{(1)1}^{(i)}.$
In such a geometrical context, if we use the notation $x_{1}^{i}=dx^{i}/dt$, then the system (\[SODE\]) can be rewritten in the following distinguished tensorial form:$$\begin{aligned}
\frac{\overset{h}{D}x_{1}^{i}}{dt}
&=&-F_{(1)1}^{(i)}(t,x^{k},x_{1}^{k})+N_{(1)r}^{(i)}x_{1}^{r}-H_{11}^{1}x_{1}^{i}=
\\
&=&-F_{(1)1}^{(i)}+\frac{1}{2}\frac{\partial F_{(1)1}^{(i)}}{\partial
x_{1}^{r}}x_{1}^{r}-\frac{1}{2}H_{11}^{1}x_{1}^{i},\end{aligned}$$
The distinguished tensor$$\overset{h}{\varepsilon }\text{ }\!\!_{(1)1}^{(i)}=-F_{(1)1}^{(i)}+\frac{1}{2}\frac{\partial F_{(1)1}^{(i)}}{\partial x_{1}^{r}}x_{1}^{r}-\frac{1}{2}H_{11}^{1}x_{1}^{i}$$is called the *first* $h-$*KCC-invariant* on the 1-jet space $J^{1}(\mathbb{R} ,M)$ of the SODEs (\[SODE\]), which is interpreted as an *external force* [@1], [@3].
It can easily be seen that for the particular first order jet rheonomic dynamical system$$\frac{dx^{i}}{dt}=X_{(1)}^{(i)}(t,x^{k})\Rightarrow \frac{d^{2}x^{i}}{dt^{2}}=\frac{\partial X_{(1)}^{(i)}}{\partial t}+\frac{\partial X_{(1)}^{(i)}}{\partial x^{m}}x_{1}^{m}, \label{Jet_DS}$$where $X_{(1)}^{(i)}(t,x)$ is a given $d-$tensor on $J^{1}(\mathbb{R} ,M)$, the first $h-$KCC-invariant has the form$$\overset{h}{\varepsilon }\text{ }\!\!_{(1)1}^{(i)}=\frac{\partial
X_{(1)}^{(i)}}{\partial t}+\frac{1}{2}\frac{\partial X_{(1)}^{(i)}}{\partial
x^{r}}x_{1}^{r}-\frac{1}{2}H_{11}^{1}x_{1}^{i}.$$
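This closed form can be double-checked symbolically in the one-dimensional case (our addition; the `sympy` library is assumed, with `y` standing for $x_{1}=dx/dt$):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')   # n = 1; y plays the role of x_1 = dx/dt
X = sp.Function('X')(t, x)      # the given d-tensor X^{(i)}_{(1)}(t, x)
H = sp.Function('H')(t)         # the Christoffel symbol H^1_11 of h_11(t)

# (Jet_DS) puts (SODE) in the form d^2x/dt^2 + F = 0 with:
F = -sp.diff(X, t) - sp.diff(X, x)*y

# First h-KCC-invariant (external force), from its general definition:
eps = -F + sp.Rational(1, 2)*sp.diff(F, y)*y - sp.Rational(1, 2)*H*y

# Closed form stated in the text:
eps_claimed = (sp.diff(X, t)
               + sp.Rational(1, 2)*sp.diff(X, x)*y
               - sp.Rational(1, 2)*H*y)

assert sp.simplify(eps - eps_claimed) == 0
```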
In the sequel, let us vary the trajectories $x^{i}(t)$ of the system (\[SODE\]) by the nearby trajectories $(\overline{x}^{i}(t,s))_{s\in
(-\varepsilon ,\varepsilon )},$ where $\overline{x}^{i}(t,0)=x^{i}(t).$ Then, considering the *variation $d-$tensor field*$$\mathit{\ }\xi ^{i}(t)=\left. \dfrac{\partial \overline{x}^{i}}{\partial s}\right\vert _{s=0},$$we get the *variational equations*$$\frac{d^{2}\xi ^{i}}{dt^{2}}+\frac{\partial F_{(1)1}^{(i)}}{\partial x^{j}}\xi ^{j}+\frac{\partial F_{(1)1}^{(i)}}{\partial x_{1}^{r}}\frac{d\xi ^{r}}{dt}=0. \label{Var-Equations}$$
In order to find other jet geometrical invariants for the system (\[SODE\]), we also introduce the $h-$*KCC-covariant derivative of a $d-$tensor of type* $\xi ^{i}(t)$ on the 1-jet space $J^{1}(\mathbb{R} ,M)$ via $$\frac{\overset{h}{D}\xi ^{i}}{dt}=\frac{d\xi ^{i}}{dt}+N_{(1)m}^{(i)}\xi^{m}= \frac{d\xi ^{i}}{dt}+\frac{1}{2}\frac{\partial F_{(1)1}^{(i)}}{\partial x_{1}^{m}} \xi ^{m}+\frac{1}{2}H_{11}^{1}\xi ^{i}.$$
The $h-$*KCC-covariant derivative* components $\dfrac{\overset{h}{D}\xi ^{i}}{dt}$ transform under a change of coordinates (\[rgg\]) as a $d-$tensor $T_{(1)}^{(i)}.$
In this geometrical context, the variational equations (\[Var-Equations\]) can be rewritten in the following distinguished tensorial form:$$\frac{\overset{h}{D}}{dt}\left[ \frac{\overset{h}{D}\xi ^{i}}{dt}\right] =\overset{h}{P}\text{ \negthinspace \negthinspace }_{m11}^{i}\xi ^{m},$$where $$\begin{aligned}
\overset{h}{P}\text{ \negthinspace \negthinspace }_{j11}^{i} &=&-\frac{\partial F_{(1)1}^{(i)}}{\partial x^{j}}+\frac{1}{2}\frac{\partial
^{2}F_{(1)1}^{(i)}}{\partial t\partial x_{1}^{j}}+\frac{1}{2}\frac{\partial
^{2}F_{(1)1}^{(i)}}{\partial x^{r}\partial x_{1}^{j}}x_{1}^{r}-\frac{1}{2}\frac{\partial ^{2}F_{(1)1}^{(i)}}{\partial x_{1}^{r}\partial x_{1}^{j}}F_{(1)1}^{(r)}+ \\
&&+\frac{1}{4}\frac{\partial F_{(1)1}^{(i)}}{\partial x_{1}^{r}}\frac{\partial F_{(1)1}^{(r)}}{\partial x_{1}^{j}}+\frac{1}{2}\frac{dH_{11}^{1}}{dt}\delta _{j}^{i}-\frac{1}{4}H_{11}^{1}H_{11}^{1}\delta _{j}^{i}.\end{aligned}$$
The $d-$tensor $\overset{h}{P}$ $_{j11}^{i}$ is called the *second* $h-$*KCC-invariant* on the 1-jet space $J^{1}(\mathbb{R},M)$ of the system (\[SODE\]), or the *jet* $h-$*deviation curvature $d-$tensor*.
If we consider the second-order system of differential equations of the *harmonic curves associated to the pair of Riemannian metrics* $(h_{11}(t),\varphi _{ij}(x)),$ which is given by (see Examples \[H0\] and \[G0\])$$\frac{d^{2}x^{i}}{dt^{2}}-H_{11}^{1}(t)\frac{dx^{i}}{dt}+\gamma _{jk}^{i}(x)\frac{dx^{j}}{dt}\frac{dx^{k}}{dt}=0,$$where $H_{11}^{1}(t)$ and $\gamma _{jk}^{i}(x)$ are the Christoffel symbols of the Riemannian metrics $h_{11}(t)$ and $\varphi _{ij}(x),$ then the second $h-$KCC-invariant has the form$$\overset{h}{P}\text{ \negthinspace \negthinspace }_{j11}^{i}=-R_{pqj}^{i}x_{1}^{p}x_{1}^{q},$$where$$R_{pqj}^{i}=\frac{\partial \gamma _{pq}^{i}}{\partial x^{j}}-\frac{\partial
\gamma _{pj}^{i}}{\partial x^{q}}+\gamma _{pq}^{r}\gamma _{rj}^{i}-\gamma
_{pj}^{r}\gamma _{rq}^{i}$$are the components of the curvature of the spatial Riemannian metric $\varphi _{ij}(x).$ Consequently, the variational equations (\[Var-Equations\]) become the following *jet Jacobi field equations*:$$\frac{\overset{h}{D}}{dt}\left[ \frac{\overset{h}{D}\xi ^{i}}{dt}\right]
+R_{pqm}^{i}x_{1}^{p}x_{1}^{q}\xi ^{m}=0,$$where$$\frac{\overset{h}{D}\xi ^{i}}{dt}=\frac{d\xi ^{i}}{dt}+\gamma
_{jm}^{i}x_{1}^{j}\xi ^{m}.$$
For the particular first order jet rheonomic dynamical system (\[Jet\_DS\]) the jet $h-$deviation curvature $d-$tensor is given by$$\overset{h}{P}\text{ \negthinspace \negthinspace }_{j11}^{i}=\frac{1}{2}\frac{\partial ^{2}X_{(1)}^{(i)}}{\partial t\partial x^{j}}+\frac{1}{2}\frac{\partial ^{2}X_{(1)}^{(i)}}{\partial x^{j}\partial x^{r}}x_{1}^{r}+\frac{1}{4}\frac{\partial X_{(1)}^{(i)}}{\partial x^{r}}\frac{\partial X_{(1)}^{(r)}}{\partial x^{j}}+\frac{1}{2}\frac{dH_{11}^{1}}{dt}\delta _{j}^{i}-\frac{1}{4}H_{11}^{1}H_{11}^{1}\delta _{j}^{i}.$$
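This formula, too, can be checked symbolically against the general definition of $\overset{h}{P}$ in the one-dimensional case (our addition; `sympy` assumed, with `y` standing for $x_{1}$):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')   # n = 1; y plays the role of x_1 = dx/dt
X = sp.Function('X')(t, x)      # the d-tensor X^{(i)}_{(1)}(t, x)
H = sp.Function('H')(t)         # the Christoffel symbol H^1_11 of h_11(t)

# The system dx/dt = X puts (SODE) in the form d^2x/dt^2 + F = 0 with:
F = -sp.diff(X, t) - sp.diff(X, x)*y

# Second h-KCC-invariant (jet h-deviation curvature), general definition, n = 1:
P = (-sp.diff(F, x)
     + sp.Rational(1, 2)*sp.diff(F, t, y)
     + sp.Rational(1, 2)*sp.diff(F, x, y)*y
     - sp.Rational(1, 2)*sp.diff(F, y, y)*F
     + sp.Rational(1, 4)*sp.diff(F, y)**2
     + sp.Rational(1, 2)*sp.diff(H, t)
     - sp.Rational(1, 4)*H**2)

# Closed form stated for the first order jet rheonomic dynamical system:
P_claimed = (sp.Rational(1, 2)*sp.diff(X, t, x)
             + sp.Rational(1, 2)*sp.diff(X, x, x)*y
             + sp.Rational(1, 4)*sp.diff(X, x)**2
             + sp.Rational(1, 2)*sp.diff(H, t)
             - sp.Rational(1, 4)*H**2)

assert sp.simplify(P - P_claimed) == 0
```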
The distinguished tensors$$\overset{h}{R}\text{ \negthinspace \negthinspace }_{jk1}^{i}=\frac{1}{3}\left[ \frac{\partial \overset{h}{P}\text{ \negthinspace \negthinspace }_{j11}^{i}}{\partial x_{1}^{k}}-\frac{\partial \overset{h}{P}\text{
\negthinspace \negthinspace }_{k11}^{i}}{\partial x_{1}^{j}}\right] ,\qquad\overset{h}{B}\text{ \negthinspace \negthinspace }_{jkm}^{i}=\frac{\partial
\overset{h}{R}\text{ \negthinspace \negthinspace }_{jk1}^{i}}{\partial
x_{1}^{m}}$$and$$D_{jkm}^{i1}=\frac{\partial ^{3}F_{(1)1}^{(i)}}{\partial x_{1}^{j}\partial
x_{1}^{k}\partial x_{1}^{m}}$$are called the *third*, *fourth* and *fifth* $h-$*KCC-invariants* on the 1-jet vector bundle $J^{1}(\mathbb{R},M)$ of the system (\[SODE\]).
Taking into account the transformation rules (\[transformations-F\]) of the components $F_{(1)1}^{(i)}$, we immediately deduce that the components $D_{jkm}^{i1}$ behave like a $d-$tensor.
For the first order jet rheonomic dynamical system (\[Jet\_DS\]) the third, fourth and fifth $h-$KCC-invariants are zero.
All five $h-$KCC-invariants of the system (\[SODE\]) vanish on $J^{1}(\mathbb{R},M)$ if and only if there exists a flat symmetric linear connection $\Gamma _{jk}^{i}(x)$ on $M$ such that$$F_{(1)1}^{(i)}=\Gamma _{pq}^{i}(x)x_{1}^{p}x_{1}^{q}-H_{11}^{1}(t)x_{1}^{i}.
\label{h-KCC=0}$$
“$\Leftarrow $” By a direct calculation, we obtain$$\overset{h}{\varepsilon }\text{ }\!\!_{(1)1}^{(i)}=0,\text{ \ \ }\overset{h}{P}\text{ \negthinspace \negthinspace }_{j11}^{i}=-\mathfrak{R}_{pqj}^{i}x_{1}^{p}x_{1}^{q}=0\text{ and }D_{jkl}^{i1}=0,$$where $\mathfrak{R}_{pqj}^{i}=0$ are the components of the curvature of the flat symmetric linear connection $\Gamma _{jk}^{i}(x)$ on $M.$
“$\Rightarrow $” By integration, the relation$$D_{jkl}^{i1}=\frac{\partial ^{3}F_{(1)1}^{(i)}}{\partial x_{1}^{j}\partial
x_{1}^{k}\partial x_{1}^{l}}=0$$subsequently leads to $$\begin{aligned}
\frac{\partial ^{2}F_{(1)1}^{(i)}}{\partial x_{1}^{j}\partial x_{1}^{k}}
&=&2\Gamma _{jk}^{i}(t,x)\Rightarrow \frac{\partial F_{(1)1}^{(i)}}{\partial
x_{1}^{j}}=2\Gamma _{jp}^{i}x_{1}^{p}+\mathcal{U}_{(1)j}^{(i)}(t,x)\Rightarrow \\
&\Rightarrow &F_{(1)1}^{(i)}=\Gamma _{pq}^{i}x_{1}^{p}x_{1}^{q}+\mathcal{U}_{(1)p}^{(i)}x_{1}^{p}+\mathcal{V}_{(1)1}^{(i)}(t,x),\end{aligned}$$where the local functions $\Gamma _{jk}^{i}(t,x)$ are symmetric in the indices $j$ and $k$.
The equality $\overset{h}{\varepsilon }$ $\!\!_{(1)1}^{(i)}=0$ on $J^{1}(\mathbb{R},M)$ leads us to$$\mathcal{V}_{(1)1}^{(i)}=0$$and to$$\mathcal{U}_{(1)j}^{(i)}=-H_{11}^{1}\delta _{j}^{i}.$$Consequently, we have$$\frac{\partial F_{(1)1}^{(i)}}{\partial x_{1}^{j}}=2\Gamma
_{jp}^{i}x_{1}^{p}-H_{11}^{1}\delta _{j}^{i}$$and$$F_{(1)1}^{(i)}=\Gamma _{pq}^{i}x_{1}^{p}x_{1}^{q}-H_{11}^{1}x_{1}^{i}.$$
The condition $\overset{h}{P}$ $_{j11}^{i}=0$ on $J^{1}(\mathbb{R},M)$ implies the equalities $\Gamma _{jk}^{i}=\Gamma
_{jk}^{i}(x)$ and$$\mathfrak{R}_{pqj}^{i}+\mathfrak{R}_{qpj}^{i}=0,$$where$$\mathfrak{R}_{pqj}^{i}=\frac{\partial \Gamma _{pq}^{i}}{\partial x^{j}}-\frac{\partial \Gamma _{pj}^{i}}{\partial x^{q}}+\Gamma _{pq}^{r}\Gamma
_{rj}^{i}-\Gamma _{pj}^{r}\Gamma _{rq}^{i}.$$It is important to note that, taking into account the transformation laws (\[transformations-F\]), (\[tr-rules-t-s\]) and (\[rgg\]), we deduce that the local coefficients $\Gamma _{jk}^{i}(x)$ behave like a symmetric linear connection on $M.$ Consequently, $\mathfrak{R}_{pqj}^{i}$ represent the curvature of this symmetric linear connection.
On the other hand, the equality $\overset{h}{R}$ $_{jk1}^{i}=0$ leads us to $\mathfrak{R}_{qjk}^{i}=0,$ which infers that the symmetric linear connection $\Gamma _{jk}^{i}(x)$ on $M$ is flat.
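As a sanity check on the “$\Leftarrow$” direction (our addition; `sympy` assumed): in dimension $n=1$ every symmetric linear connection $\Gamma(x)$ on $M$ is automatically flat, so the theorem predicts that the invariants of $F=\Gamma(x)x_{1}^{2}-H_{11}^{1}(t)x_{1}$ vanish identically. The third and fourth invariants are trivially zero when $n=1$ (they are antisymmetrized in $j,k$), and the remaining ones can be computed directly:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')     # n = 1; y plays the role of x_1
G = sp.Function('Gamma')(x)       # a (necessarily flat) symmetric connection on M
H = sp.Function('H')(t)           # the Christoffel symbol H^1_11 of h_11(t)

# F of the form (h-KCC=0):
F = G*y**2 - H*y

# First h-KCC-invariant (external force):
eps = -F + sp.Rational(1, 2)*sp.diff(F, y)*y - sp.Rational(1, 2)*H*y

# Second h-KCC-invariant (deviation curvature), n = 1:
P = (-sp.diff(F, x)
     + sp.Rational(1, 2)*sp.diff(F, t, y)
     + sp.Rational(1, 2)*sp.diff(F, x, y)*y
     - sp.Rational(1, 2)*sp.diff(F, y, y)*F
     + sp.Rational(1, 4)*sp.diff(F, y)**2
     + sp.Rational(1, 2)*sp.diff(H, t)
     - sp.Rational(1, 4)*H**2)

# Fifth h-KCC-invariant:
D5 = sp.diff(F, y, y, y)

assert sp.simplify(eps) == 0
assert sp.simplify(P) == 0
assert D5 == 0
```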
**Acknowledgements.** The present research was supported by the Romanian Academy Grant 4/2009.
[Authors’ addresses: ]{}
[Vladimir Balan, University Politehnica of Bucharest, Faculty of Applied Sciences, Department of Mathematics-Informatics I, Splaiul Independenţei 313, RO-060042 Bucharest, Romania. E-mail: [email protected]; Website: http://www.mathem.pub.ro/dept/vbalan.htm ]{}
[Mircea Neagu, University Transilvania of Braşov, Faculty of Mathematics and Informatics, Department of Algebra, Geometry and Differential Equations, B-dul Iuliu Maniu, Nr. 50, BV 500091, Braşov, Romania. E-mail: [email protected]; Website: http://www.2collab.com/user:mirceaneagu ]{}
---
abstract: 'Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$ and $X$ a homologically left-bounded complex with finitely generated homologies. We establish Faltings’ Local-global Principle and Annihilator Theorems for the local cohomology modules ${\mbox{H}}_{\mathcal{Z}}^i(X)$. Our versions contain variations of results already known on these theorems.'
address:
- 'K. Divaani-Aazar, Department of Mathematics, Alzahra University, Vanak, Post Code 19834, Tehran, Iran-and-School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran.'
- 'Majid Rahro Zargar, Department of Engineering Sciences, Faculty of Advanced Technologies, University of Mohaghegh Ardabili, Namin, Iran-and-School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box: 19395-5746, Tehran, Iran.'
author:
- 'Kamran Divaani-Aazar'
- Majid Rahro Zargar
title: 'The derived category analogues of Faltings’ Local-global Principle and Annihilator Theorems'
---
Introduction
============
Throughout, $R$ is a commutative Noetherian ring with identity. Let ${\mathfrak{a}}$ be an ideal of $R$ and $M$ a finitely generated $R$-module. The finiteness dimension of $M$ relative to ${\mathfrak{a}}$, $f_{{\mathfrak{a}}}(M)$, is defined as the infimum of the integers $i$ such that ${\mbox{H}}^i_{{\mathfrak{a}}}(M)$ is not finitely generated. Let $r$ be a positive integer. It is known that ${\mbox{H}}^i_{{\mathfrak{a}}}(M)$ is finitely generated for all $i<r$ if and only if ${\mathfrak{a}}^n{\mbox{H}}^i_{{\mathfrak{a}}}(M)=0$ for some positive integer $n$ and all $i<r$. Faltings’ Local-global Principle Theorem [@Fa1 Satz 1] asserts that the $R$-module ${\mbox{H}}^i_{{\mathfrak{a}}}(M)$ is finitely generated for all $i<r$ if and only if the $R_{{\mathfrak{p}}}$-module ${\mbox{H}}^i_{{\mathfrak{a}}R_{{\mathfrak{p}}}}(M_{{\mathfrak{p}}})$ is finitely generated for all $i<r$ and for all ${\mathfrak{p}}\in \operatorname{Spec}R$. Thus, $$f_{{\mathfrak{a}}}(M)=\inf \left\{i\in \mathbb{N}_0|\ {\mathfrak{a}}\nsubseteq \sqrt{(0:_R{\mbox{H}}^i_{{\mathfrak{a}}}(M))} \right\}=\inf
\left\{f_{{\mathfrak{a}}R_{{\mathfrak{p}}}}(M_{{\mathfrak{p}}})|\ {\mathfrak{p}}\in \operatorname{Spec}R \right\}.$$
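For orientation, a standard example (our addition, not taken from the paper): for $R=k[x,y]$, $M=R$ and ${\mathfrak{a}}=(x,y)$, the Čech complex on $x,y$ gives

```latex
% x, y is a regular sequence, so grade(a, R) = 2 and
{\mbox{H}}^{0}_{\mathfrak{a}}(R) = {\mbox{H}}^{1}_{\mathfrak{a}}(R) = 0,
\qquad
{\mbox{H}}^{2}_{\mathfrak{a}}(R) \;\cong\; R_{xy}\big/ \big( R_{x}+R_{y} \big) \neq 0,
% and the top module H^2_a(R) is not finitely generated; hence f_a(R) = 2.
```

so the finiteness dimension here is $f_{{\mathfrak{a}}}(R)=2$.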
Now, let ${\mathfrak{b}}$ be a second ideal of $R$ such that ${\mathfrak{b}}\subseteq {\mathfrak{a}}$. The ${\mathfrak{b}}$-finiteness dimension of $M$ relative to ${\mathfrak{a}}$ is defined by $$f^{{\mathfrak{b}}}_{{\mathfrak{a}}}(M):=\inf \left\{i\in \mathbb{N}_0|\ {\mathfrak{b}}\nsubseteqq \sqrt{(0:_R{\mbox{H}}^i_{{\mathfrak{a}}}(M))}\right\}.$$ It is natural to ask whether Faltings’ Local-global Principle generalizes for the pair ${\mathfrak{b}}\subseteq {\mathfrak{a}}$. In other words, does $$f_{{\mathfrak{a}}}^{{\mathfrak{b}}}(M)=\inf\left\{f_{{\mathfrak{a}}R_{{\mathfrak{p}}}}^{{\mathfrak{b}}R_{{\mathfrak{p}}}}(M_{{\mathfrak{p}}})|\ {\mathfrak{p}}\in \operatorname{Spec}R \right\}?$$
In [@Ra Corollary], Raghavan deduced from Faltings’ Annihilator Theorem that if $R$ is a homomorphic image of a regular ring, then the Local-global Principle holds for the pair ${\mathfrak{b}}\subseteq {\mathfrak{a}}$. The ${\mathfrak{b}}$-minimum ${\mathfrak{a}}$-adjusted depth of $M$ is defined by $$\lambda_{{\mathfrak{a}}}^{{\mathfrak{b}}}(M):=\inf \left\{\depth M_{{\mathfrak{p}}}+{\mbox{ht}\,}\left(\frac{{\mathfrak{a}}+{\mathfrak{p}}}{{\mathfrak{p}}}\right)|\ {\mathfrak{p}}\in \operatorname{Spec}R-{\mbox{V}}({\mathfrak{b}})
\right\}.$$ It is always the case that $f^{{\mathfrak{b}}}_{{\mathfrak{a}}}(M)\leq \lambda_{{\mathfrak{a}}}^{{\mathfrak{b}}}(M)$. Faltings’ Annihilator Theorem [@Fa2 Satz 1] states that if $R$ is a homomorphic image of a regular ring, then $f^{{\mathfrak{b}}}_{{\mathfrak{a}}}(M)=\lambda_{{\mathfrak{a}}}^{{\mathfrak{b}}}(M)$.
In the literature, there are many generalizations of Faltings’ Local-global Principle and Annihilator Theorems for ordinary local cohomology and also for some of its generalizations; see e.g. [@AKS], [@BRS], [@Ka], [@KS], [@KYA] and [@Ra].
It is known that all the generalizations $H_{\mathcal{Z}}^{i}(M)$, $H_{\mathcal{\mathfrak{a},\mathfrak{b}}}^{i}(M)$, and $H_{\mathcal{\mathfrak{a}}}^{i}(M,N)$ of the local cohomology module $H_{\mathcal{\mathfrak{a}}}^{i}(M)$ of an $R$-module $M$, are special cases of the local cohomology module $H_{\mathcal{Z}}^{i}(X)$ of a complex $X$ with support in a specialization closed subset $\mathcal{Z}$ of $\operatorname{Spec}R$. For the definitions of $H_{\mathcal{\mathfrak{a},\mathfrak{b}}}^{i}(M)$ and $H_{\mathcal{\mathfrak{a}}}^{i}(M,N)$, we refer the reader to [@TYY] and [@He]. Also, Yoshino and Yoshizawa [@YY Theorem 2.10] have shown that for every abstract local cohomology functor $\delta$ from the category of homologically left bounded complexes of $R$-modules to itself, there is a specialization closed subset $\mathcal{Z}$ of $\operatorname{Spec}R$ such that $\delta
\cong {{\mathbf R}\Gamma}_{\mathcal{Z}} $. Therefore, any established result on $H_{\mathcal{Z}}^{i}(X)$ encompasses all the previously known results on each of these local cohomology modules.
Our aim in this paper is to establish Faltings’ Local-global Principle and Annihilator Theorems for the local cohomology modules ${\mbox{H}}_{\mathcal{Z}}^i(X)$. More precisely, we prove the following theorem; see Theorems \[G\] and \[M\], and Corollaries \[H\] and \[P\]. To state it, we first need to fix some notation.
Let $\mathcal{Z}\subseteq \mathcal{Y}$ be two specialization closed subsets of $\operatorname{Spec}R$ and $X$ a homologically left bounded complex with finitely generated homologies. Set $$f_{\tiny{\mathcal{Z}}}^{\tiny{\mathcal{Y}}}(X):=\inf \left\{i\in
\mathbb{Z}|\ {\mathfrak{c}}{\mbox{H}}_{\mathcal{Z}}^i(X)\neq 0\ \text{for all ideals}\ {\mathfrak{c}}\ \text{of} \ R \ \text{with} \ {\mbox{V}}({\mathfrak{c}})\subseteq
\mathcal{Y}\right\}$$ and $$\lambda_{\mathcal{Z}}^{\mathcal{Y}}(X):=\inf\left\{\depth_{R_{{\mathfrak{q}}}}X_{{\mathfrak{q}}}+{\mbox{ht}\,}\frac{{\mathfrak{p}}}{{\mathfrak{q}}}|\ {\mathfrak{q}}\notin
\mathcal{Y} \ \ \text{and} \ \ {\mathfrak{p}}\in\mathcal{Z}\cap {\mbox{V}}({\mathfrak{q}})\right\}.$$ Abbreviate $f_{\mathcal{Z}}^{\mathcal{Z}}(X)$ and $\lambda_{\mathcal{Z}}^{\mathcal{Z}}(X),$ by $f_{\mathcal{Z}}(X)$ and $\lambda_{\mathcal{Z}}(X)$, respectively. Note that *$f_{{\mathfrak{a}}}^{{\mathfrak{b}}}(M)=f_{\tiny{{{\mbox{V}}({\mathfrak{a}})}}}^{\tiny{{\mbox{V}}({\mathfrak{b}})}}(M)$* and *$\lambda_{{\mathfrak{a}}}^{{\mathfrak{b}}}(M)=\lambda_{\tiny{{{\mbox{V}}({\mathfrak{a}})}}}^{
\tiny{{\mbox{V}}({\mathfrak{b}})}}(M).$*
\[1\] Let $\mathcal{Z}\subseteq \mathcal{Y}$ be two specialization closed subsets of $\operatorname{Spec}R$ and $X$ a homologically left-bounded complex with finitely generated homologies. Then the following statements hold.
- *[$f_{\mathcal{Z}}(X)=\inf \left\{f_{\mathcal{Z_{{\mathfrak{p}}}}}(X_{{\mathfrak{p}}})|\ {\mathfrak{p}}\in \operatorname{Spec}R\right\}=\inf \left\{i
\in \mathbb{Z}|\ {\mbox{H}}_{\mathcal{Z}}^i(X) \ \text{is not finitely generated}\right\}.$]{}*
- Assume that $X$ is homologically bounded. Then $f_{\mathcal{Z}}^{\mathcal{Y}}(X)
\leq \lambda_{\mathcal{Z}}^{\mathcal{Y}}(X)$.
- Assume that $R$ is a homomorphic image of a finite-dimensional Gorenstein ring and $X$ is homologically bounded. Then $f_{\mathcal{Z}}^{\mathcal{Y}}(X)=\lambda_{\mathcal{Z}}^{\mathcal{Y}}(X)$ and $$f_{\mathcal{Z}}^{\mathcal{Y}}(X)
=\inf \left\{f_{\mathcal{Z_{{\mathfrak{p}}}}}^{\mathcal{Y_{{\mathfrak{p}}}}}(X_{{\mathfrak{p}}})|\ {\mathfrak{p}}\in \operatorname{Spec}R\right\}.$$
Prerequisites
=============
The derived category of $R$-modules is denoted by $\mathrm{D}(R)$. Simply put, an object in $\mathrm{D}(R)$ is an $R$-complex $X$ displayed in the standard homological style $$X= \cdots \rightarrow X_{i+1} \xrightarrow {\partial^{X}_{i+1}} X_{i} \xrightarrow {\partial^{X}_{i}} X_{i-1}
\rightarrow \cdots.$$ We use the symbol $\simeq$ for denoting isomorphisms in $\mathrm{D}(R)$. We denote the full subcategory of homologically left-bounded complexes by $\mathrm{D}_{\sqsubset}(R)$. Also, we denote the full subcategory of complexes with finitely generated homology modules that are homologically bounded (resp. homologically left-bounded) by $\mathrm{D}_{\Box}^f(R)$ (resp. $\mathrm{D}_{\sqsubset}^f(R)$). Given an $R$-complex $X$, the standard notation $$\sup X=\sup \left\{i \in \mathbb{Z}| \ {\mbox{H}}_{i}(X) \neq 0 \right\}$$ is frequently used, with the convention that $\sup \emptyset=-\infty$.
Let ${\mathfrak{a}}$ be an ideal of $R$ and $X\in \mathrm{D}_{\sqsubset}(R)$. A subset $\mathcal{Z}$ of $\operatorname{Spec}R$ is said to be [*specialization closed*]{} if ${\mbox{V}}({\mathfrak{p}})\subseteq \mathcal{Z}$ for all ${\mathfrak{p}}\in \mathcal{Z}$. For every $R$-module $M$, set $\Gamma_{\mathcal{Z}}(M):=\left\{x\in M|~\operatorname{Supp}_RRx\subseteq \mathcal{Z}\right\}.$ The right derived functor of the functor $\Gamma_{\mathcal{Z}}(-)$ in $\mathrm{D}(R)$, ${{\mathbf R}\Gamma}_{
\mathcal{Z}}(X)$, exists and is defined by ${\bf R} \Gamma_{\mathcal{Z}}(X):=\Gamma_{\mathcal{Z}}(I)$, where $I$ is any injective resolution of $X$. Also, for every integer $i$, the $i$-th local cohomology module of $X$ with respect to $\mathcal{Z}$ is defined by ${\mbox{H}}_{\mathcal{Z}}^i(X):={\mbox{H}}_{-i}({\bf R}\Gamma_{\mathcal{Z}}(X))$. To comply with the usual notation, for $\mathcal{Z}:={\mbox{V}}({\mathfrak{a}})$, we denote ${{\mathbf R}\Gamma}_{\mathcal{Z}} (-)$ and ${\mbox{H}}_{
\mathcal{Z}}^i(-)$ by ${\bf R}\Gamma_{{{\mathfrak{a}}}}(-)$ and ${\mbox{H}}_{{\mathfrak{a}}}^i(-)$, respectively. Denote the set of all ideals ${\mathfrak{b}}$ of $R$ such that ${\mbox{V}}({\mathfrak{b}})\subseteq \mathcal{Z}$ by $F(\mathcal{Z})$. Since for every $R$-module $M,$ $\Gamma_{
\mathcal{Z}}(M)=\bigcup_{{\mathfrak{b}}\in F(\mathcal{Z})}\Gamma_{{\mathfrak{b}}}(M)$, for every integer $i$, one can easily check that $${\mbox{H}}_{\mathcal{Z}}^i(X)\cong \underset{{\mathfrak{b}}\in F(\mathcal{Z})}\varinjlim{\mbox{H}}_{{\mathfrak{b}}}^i(X).$$
Recall that $\operatorname{Supp}_RX:=\underset{l\in \mathbb{Z}}\bigcup\operatorname{Supp}_R{\mbox{H}}_l(X)$ and $$\depth({\mathfrak{a}},X):=-\sup {\bf R}\Hom_R(R/{\mathfrak{a}},X).$$ By [@Iy Theorem 6.2], it is known that $$\depth({\mathfrak{a}},X)=\inf\left\{i\in \mathbb{Z}|\ {\mbox{H}}_{{\mathfrak{a}}}^i(X)\neq 0\right\}.$$ When $R$ is local with maximal ideal ${\mathfrak{m}}$, $\depth({\mathfrak{m}},X)$ is simply denoted by $\depth_RX$. For every prime ideal ${\mathfrak{p}}$ of $R$ and every integer $i$, the i-th Bass number $\mu^i({\mathfrak{p}},X)$ is defined to be the dimension of the $R_{{\mathfrak{p}}}/{\mathfrak{p}}R_{{\mathfrak{p}}}$-vector space ${\mbox{H}}_{-i}({{\mathbf R}\Hom}_{R_{{\mathfrak{p}}}}(R_{{\mathfrak{p}}}/{\mathfrak{p}}R_{{\mathfrak{p}}},X_{{\mathfrak{p}}}))$.
Local-global Principle Theorem
==============================
The following easy observation will be very useful in the rest of the paper.
\[A\] Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$ and $X\in\mathrm{D}_{
\sqsubset}(R).$ Then the following statements hold.
- *${\mbox{H}}_{\mathcal{Z}}^i(X)=0$* for all $i<-\sup X$ and *${\mbox{H}}_{\mathcal{Z}}^{-\sup X}(X)$* is a finitely generated $R$-module whenever $X\in\mathrm{D}_{\sqsubset}^f(R)$.
- *$\operatorname{Supp}_R{\mbox{H}}_{\mathcal{Z}}^i(X)\subseteq \mathcal{Z}$* for every integer $i$.
- If *$\operatorname{Supp}_RX\subseteq \mathcal{Z}$,* then *${{\mathbf R}\Gamma}_{\mathcal{Z}} (X)\simeq X$*.
\(i) Let $s:=\sup X$. By [@Ch Theorem A.3.2 (I)], $X$ possesses an injective resolution $I$ such that $I_i=0$ for all $i>s$. As for every integer $i$, one has ${\mbox{H}}_{\mathcal{Z}}^i(X)={\mbox{H}}_{-i}(\Gamma_{\mathcal{Z}} (I))$, it follows that ${\mbox{H}}_{\mathcal{Z}}^i(X)=0$ for all $i<-s$ and that ${\mbox{H}}_{\mathcal{Z}}^{-s}(X)$ is a submodule of ${\mbox{H}}_{s}(I)$. If $X\in\mathrm{D}_{\sqsubset}^f(R)$, then ${\mbox{H}}_{s}(X)$ is finitely generated, and so ${\mbox{H}}_{\mathcal{Z}}^{-s}(X)$ is finitely generated too.
\(ii) The proof is straightforward, and we leave it to the reader.
\(iii) Assume that *$\operatorname{Supp}_RX\subseteq \mathcal{Z}$.* For every prime ideal ${\mathfrak{p}}$ of $R$, one can check that $$\Gamma_{\mathcal{Z}}({\mbox{E}}(R/{\mathfrak{p}}))=\begin{cases} {\mbox{E}}(R/{\mathfrak{p}}) & \text{if}\ {\mathfrak{p}}\in \mathcal{Z}\\ 0& \text{if}\ {\mathfrak{p}}\notin \mathcal{Z}.
\end{cases}$$ By [@Fo Lemma 2.3 (a) and Proposition 3.18], $X$ possesses an injective resolution $I$ such that $$I_i \cong \underset{{\mathfrak{p}}\in
\operatorname{Supp}_RX}\bigoplus{\mbox{E}}(R/{\mathfrak{p}})^{(\mu^i({\mathfrak{p}},X))}$$ for all integers $i$. Thus, $\Gamma_{\mathcal{Z}}(I_i)\cong I_i$ for all integers $i$, and so $${{\mathbf R}\Gamma}_{\mathcal{Z}} (X)\simeq \Gamma_{\mathcal{Z}}(I)\simeq I\simeq X.$$
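As a special case worth keeping in mind, if $\mathcal{Z}={\mbox{V}}({\mathfrak{a}})$ for an ideal ${\mathfrak{a}}$ of $R$, then $$\Gamma_{\mathcal{Z}}(M)=\left\{m\in M|\ {\mathfrak{a}}^nm=0 \ \text{for some}\ n\in \mathbb{N}\right\}=\Gamma_{{\mathfrak{a}}}(M)$$ for every $R$-module $M$, so that ${\mbox{H}}_{\mathcal{Z}}^i(X)={\mbox{H}}_{{\mathfrak{a}}}^i(X)$ for all integers $i$; in this case, Lemma \[A\] (i) recovers the familiar vanishing ${\mbox{H}}_{{\mathfrak{a}}}^i(X)=0$ for all $i<-\sup X$.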
The following result plays an essential role in the proof of the derived category analogue of Faltings’ Local-global Principle Theorem.
\[C\] Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$ and $X\in\mathrm{D}_{
\sqsubset}^f(R).$ Let $t$ be an integer such that *${\mbox{H}}_{\mathcal{Z}}^i(X)$* is a finitely generated $R$-module for all $i<t$. Then for every ${\mathfrak{a}}\in F(\mathcal{Z})$, the $R$-module *$\Hom_R(R/{\mathfrak{a}},{\mbox{H}}_{\mathcal{Z}}^t(X))$* is finitely generated.
By Lemma \[A\] (i), ${\mbox{H}}_{\mathcal{Z}}^i(X)=0$ for all $i<-\sup X$ and ${\mbox{H}}_{\mathcal{Z}}^{-\sup X}(X)$ is finitely generated. So, we may assume that $-\sup X<t$. Set $T:=\Sigma^{-\sup X}X$ and note that ${\mbox{H}}_{\mathcal{Z}}^i(X)
\cong {\mbox{H}}_{\mathcal{Z}}^{i+\sup X}(T)$ for all integers $i$. So, by replacing $X$ with $T$, we may and do assume that $\sup X=0$ and $0<t$. Then, there exists an injective resolution $I$ of $X$ such that $I_{l}=0$ for all $l>0$.
Now, let ${\mathfrak{a}}\in F(\mathcal{Z})$ and $P$ be a projective resolution of $R/{\mathfrak{a}}$. Set $M_{p,q}:=\Hom_{R}(P_{-p}, \Gamma_{
\mathcal{Z}}(I_{q}))$. Hence $\mathcal{M}:=\left\{M_{p,q}\right\}$ is a third quadrant bicomplex, and so the complex $\Hom_{R}(P,\Gamma_{\mathcal{Z}}(I))$ is the total complex of $\mathcal{M}$.
For every $R$-module $M$, one has $\Gamma_{{\mathfrak{a}}}(\Gamma_{\mathcal{Z}}(M))=\Gamma_{{\mathfrak{a}}}(M)$. So, the two complexes $\Gamma_{{\mathfrak{a}}}(\Gamma_{\mathcal{Z}}(I))$ and $\Gamma_{{\mathfrak{a}}}(I)$ are the same. By [@Li Proposition 3.2.2], for any two complexes $X_1, X_2\in \mathrm{D}_{\sqsubset}(R)$ with $\operatorname{Supp}_RX_1\subseteq {\mbox{V}}({\mathfrak{a}}),$ one has an isomorphism $${{\mathbf R}\Hom}_{R}(X_1,X_2)
\simeq {{\mathbf R}\Hom}_{R}(X_1,{{\mathbf R}\Gamma}_{{\mathfrak{a}}}(X_2))$$ in $\mathrm{D}(R)$. This yields $\dag$ and $\ddag$ in the following display of isomorphisms in $\mathrm{D}(R)$:
$$\begin{array}{rl}
\Hom_{R}(P,\Gamma_{\mathcal{Z}}(I))&\simeq {{\mathbf R}\Hom}_{R}(R/{\mathfrak{a}},\Gamma_{\mathcal{Z}}(I))\\
&\overset{\dag}\simeq {{\mathbf R}\Hom}_{R}(R/{\mathfrak{a}},{{\mathbf R}\Gamma}_{{\mathfrak{a}}}(\Gamma_{\mathcal{Z}}(I)))\\
&\simeq {{\mathbf R}\Hom}_{R}(R/{\mathfrak{a}},\Gamma_{{\mathfrak{a}}}(\Gamma_{\mathcal{Z}}(I)))\\
&\simeq {{\mathbf R}\Hom}_{R}(R/{\mathfrak{a}},\Gamma_{{\mathfrak{a}}}(I))\\
&\simeq {{\mathbf R}\Hom}_{R}(R/{\mathfrak{a}},{{\mathbf R}\Gamma}_{{\mathfrak{a}}}(I))\\
&\overset{\ddag}\simeq {{\mathbf R}\Hom}_{R}(R/{\mathfrak{a}},I).
\end{array}$$ Thus, there is a first quadrant spectral sequence $${\mbox{E}}_{2}^{p,q}:=\Ext_{R}^p(R/{\mathfrak{a}},{\mbox{H}}_{\mathcal{Z}}^q(X))\underset{p}
\Longrightarrow\Ext_{R}^{p+q}(R/{\mathfrak{a}},X).$$ Note that $\Ext_{R}^{i}(R/{\mathfrak{a}},X)$ is finitely generated for all integers $i$. For each $r\geq2$, let $$Z_{r}^{0,t}:={\mbox{Ker}\,}({\mbox{E}}_{r}^{0,t}\longrightarrow{\mbox{E}}_{r}^{r,t+1-r})$$ and $$B_{r}^{0,t}:=
{\mbox{Im}\,}({\mbox{E}}_{r}^{-r,t+r-1}\longrightarrow{\mbox{E}}_{r}^{0,t}).$$ As ${\mbox{E}}_{r}^{p,q}$ is a subquotient of ${\mbox{E}}_{2}^{p,q}$, it follows that ${\mbox{E}}_{r}^{-r,t+r-1}=0$ and ${\mbox{E}}_{r}^{r,t+1-r}$ is finitely generated. Hence $${\mbox{E}}_{r+1}^{0,t}=\frac{Z_{r}^{0,t}}{B_{r}^{0,t}}
\cong Z_{r}^{0,t}$$ and ${\mbox{E}}_{r}^{0,t}/Z_{r}^{0,t}$ is finitely generated. Thus ${\mbox{E}}_{r}^{0,t}$ is a finitely generated $R$-module if and only if ${\mbox{E}}_{r+1}^{0,t}$ is a finitely generated $R$-module. Now, we claim that ${\mbox{E}}_{r}^{0,t}$ is a finitely generated $R$-module for all $r\geq 2$. To this end, we use descending induction on $r$. Let $r\geq t+2$. Then one can use the fact that $\sup X=0$ to deduce that ${\mbox{E}}_{r}^{-r,t+r-1}={\mbox{E}}_{r}^{r,t+1-r}=0$, and so ${\mbox{E}}_{t+2}^{0,t}
\cong\ldots\cong{\mbox{E}}_{\infty}^{0,t}$. Now, consider the following filtration $$\left\{0\right\}=\Psi_{t+1}{\mbox{H}}^{t}\subseteq \Psi_{t}
{\mbox{H}}^{t}\subseteq \cdots\subseteq\Psi_{1}{\mbox{H}}^t\subseteq \Psi_{0}{\mbox{H}}^t={\mbox{H}}^t,$$ where ${\mbox{H}}^t:=\Ext_{R}^t(R/{\mathfrak{a}},X)$ and ${\mbox{E}}^{p,t-p}_{\infty}=\frac{\Psi_{p}{\mbox{H}}^t}{\Psi_{p+1}{\mbox{H}}^t}$, to see that ${\mbox{E}}_{t+2}^{0,t}$ is finitely generated. Next, suppose that $2<r\leq t+2$ and that ${\mbox{E}}_{r}^{0,t}$ is finitely generated; we show that ${\mbox{E}}_{r-1}^{0,t}$ is finitely generated. Since $2\leq r-1$, the above argument shows that ${\mbox{E}}_{r-1}^{0,t}$ is finitely generated if and only if ${\mbox{E}}_{r}^{0,t}$ is, and the latter holds by the inductive hypothesis. This proves the claim; in particular, ${\mbox{E}}_{2}^{0,t}=\Hom_{R}(R/{\mathfrak{a}},{\mbox{H}}_{\mathcal{Z}}^t(X))$ is a finitely generated $R$-module.
Next, we record the following immediate consequence.
\[D\] Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$ and $X\in\mathrm{D}_{\sqsubset}^f(R)$. Then for every integer $t$, the following statements are equivalent:
- *[${\mbox{H}}_{\mathcal{Z}}^i(X)$ is a finitely generated $R$-module for all $i<t$.]{}*
- *[There exists an ideal ${\mathfrak{a}}\in F(\mathcal{Z})$ such that ${\mathfrak{a}}{\mbox{H}}_{\mathcal{Z}}^i(X)=0$ for all $i<t$.]{}*
(i)$\Rightarrow$(ii) For each $i<t$, set ${\mathfrak{a}}_{i}:=(0:_R {\mbox{H}}_{\mathcal{Z}}^i(X))$ and note that Lemma \[A\] (ii) implies that ${\mathfrak{a}}_{i}\in F(\mathcal{Z})$. Now, the ideal ${\mathfrak{a}}:=\prod_{i=-\sup X}^{t-1}{\mathfrak{a}}_{i}$ belongs to $F(\mathcal{Z})$ and ${\mathfrak{a}}{\mbox{H}}_{\mathcal{Z}}^i(X)=0$ for all $i<t$.
(ii)$\Rightarrow$(i) We may and do assume that $t\geq 1-\sup X$ and proceed by induction on $t$. If $t=1-\sup X$, then by Lemma \[A\] (i) we see that ${\mbox{H}}_{\mathcal{Z}}^{i}(X)$ is a finitely generated $R$-module for all $i<t$. Let $t>1-\sup X$ and suppose that the result has been proved for $t-1$. Now by the induction hypothesis, ${\mbox{H}}_{\mathcal{Z}}^i(X)$ is a finitely generated $R$-module for all $i<t-1$, and so by Lemma \[C\] the $R$-module $\Hom_R(R/{\mathfrak{a}},{\mbox{H}}_{\mathcal{Z}}^{t-1}(X))$ is finitely generated. But, by our assumption ${\mathfrak{a}}{\mbox{H}}_{\mathcal{Z}}^{t-1}(X)=0$, and so $\Hom_R(R/{\mathfrak{a}},{\mbox{H}}_{\mathcal{Z}}^{t-1}(X))\cong {\mbox{H}}_{\mathcal{Z}}^{t-1}(X)$. Hence ${\mbox{H}}_{\mathcal{Z}}^{t-1}(X)$ is finitely generated, and the induction is complete.
Let $T$ be a second commutative Noetherian ring with identity and $f:R{\longrightarrow}T$ be a ring homomorphism. Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$. Then it is easy to check that $\mathcal{Z}^f:=\left\{{\mathfrak{q}}\in \operatorname{Spec}T|
\ f^{-1}({\mathfrak{q}})\in \mathcal{Z}\right\}$ is a specialization closed subset of $\operatorname{Spec}T$.
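For example, if $\mathcal{Z}={\mbox{V}}({\mathfrak{a}})$ for an ideal ${\mathfrak{a}}$ of $R$, then $$\mathcal{Z}^f=\left\{{\mathfrak{q}}\in \operatorname{Spec}T|\ {\mathfrak{a}}\subseteq f^{-1}({\mathfrak{q}})\right\}={\mbox{V}}({\mathfrak{a}}T),$$ because ${\mathfrak{a}}\subseteq f^{-1}({\mathfrak{q}})$ if and only if ${\mathfrak{a}}T\subseteq {\mathfrak{q}}$.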
\[B\] Let $f:R{\longrightarrow}T$ be a ring homomorphism and $\mathcal{Z}$ a specialization closed subset of $\operatorname{Spec}R$. Let $X\in \mathrm{D}_{\sqsubset}(R)$ and $Y\in \mathrm{D}_{\sqsubset}(T)$. Then the following statements hold.
- There is a natural $R$-isomorphism *${\mbox{H}}_{\mathcal{Z}}^i(Y)\cong {\mbox{H}}_{\mathcal{Z}^f}^i(Y)$* for all integers $i$.
- Suppose that $T$ is flat as an $R$-module. There is a natural $T$-isomorphism *${\mbox{H}}_{\mathcal{Z}}^i(X)\otimes_RT
\cong {\mbox{H}}_{\mathcal{Z}^f}^i(X\otimes_RT)$* for all integers $i$.
Set $\Omega:=\left\{{\mathfrak{a}}T|\ {\mathfrak{a}}\in F(\mathcal{Z})\right\}$. Then, it is easy to see that $\Omega\subseteq F(\mathcal{Z}^f)$. Let $\widetilde{{\mathfrak{b}}}\in F(\mathcal{Z}^f)$ and set ${\mathfrak{b}}:=f^{-1}(\widetilde{{\mathfrak{b}}})$. We claim that ${\mathfrak{b}}\in F(\mathcal{Z})$. To this end, it is enough to show that every minimal element ${\mathfrak{p}}$ in ${\mbox{V}}({\mathfrak{b}})$ belongs to $\mathcal{Z}$.
Let ${\mathfrak{p}}$ be a minimal element in ${\mbox{V}}({\mathfrak{b}})$ and $\widetilde{{\mathfrak{b}}}=\bigcap_{i=1}^nQ_i$ be a minimal primary decomposition of $\widetilde{{\mathfrak{b}}}$ in $T$. Then $\sqrt{Q_i}\in \mathcal{Z}^f$ for all $i=1,\dots, n$. As ${\mathfrak{b}}=\bigcap_{i=1}^nf^{-1}(Q_i)$ is a primary decomposition of ${\mathfrak{b}}$, it turns out that ${\mathfrak{p}}=f^{-1}(\sqrt{Q_j})$ for some $1\leq j\leq n$. So, ${\mathfrak{p}}\in \mathcal{Z}$. Hence ${\mathfrak{b}}\in F(\mathcal{Z})$, and so ${\mathfrak{b}}T\in \Omega$. Clearly, ${\mathfrak{b}}T\subseteq \widetilde{{\mathfrak{b}}}$. Thus the two families of ideals $\Omega$ and $F(\mathcal{Z}^f)$ are cofinal.
\(i) Let ${\mathfrak{a}}$ be an ideal of $R$. Then by [@Li Corollary 3.4.3], there is an $R$-isomorphism ${\mbox{H}}_{{\mathfrak{a}}}^i(Y)\cong {\mbox{H}}_{{\mathfrak{a}}T}^i(Y)$ for all integers $i$. Hence, (i) follows by the following display of $R$-isomorphisms
$$\begin{array}{rl}
{\mbox{H}}_{\mathcal{Z}}^i(Y)&\cong \underset{{{\mathfrak{a}}\in F(\mathcal{Z})}}\varinjlim {\mbox{H}}_{{\mathfrak{a}}}^i(Y)\\
&\cong \underset{{{\mathfrak{a}}\in F(\mathcal{Z})}}\varinjlim {\mbox{H}}_{{\mathfrak{a}}T}^i(Y)\\
&\cong \ \ \underset{{{\widetilde{{\mathfrak{b}}}\in \Omega}}}\varinjlim \ \ {\mbox{H}}_{\widetilde{{\mathfrak{b}}}}^i(Y)\\
&\cong \underset{{{\widetilde{{\mathfrak{b}}}\in F(\mathcal{Z}^f)}}}\varinjlim {\mbox{H}}_{\widetilde{{\mathfrak{b}}}}^i(Y)\\
&\cong \ \ {\mbox{H}}_{\mathcal{Z}^f}^i(Y).
\end{array}$$
\(ii) In view of [@Li Corollary 3.4.4], one has the third isomorphism in the following display of $T$-isomorphisms
$$\begin{array}{rl}
{\mbox{H}}_{\mathcal{Z}}^i(X)\otimes_RT&\cong (\underset{{{\mathfrak{a}}\in F(\mathcal{Z})}}\varinjlim {\mbox{H}}_{{\mathfrak{a}}}^i(X))\otimes_RT\\
&\cong \underset{{{\mathfrak{a}}\in F(\mathcal{Z})}}\varinjlim ({\mbox{H}}_{{\mathfrak{a}}}^i(X)\otimes_RT)\\
&\cong \underset{{{\mathfrak{a}}\in F(\mathcal{Z})}}\varinjlim {\mbox{H}}_{{\mathfrak{a}}T}^i(X\otimes_RT)\\
&\cong \ \ \underset{{{\widetilde{{\mathfrak{b}}}\in \Omega}}}\varinjlim \ \ {\mbox{H}}_{\widetilde{{\mathfrak{b}}}}^i(X\otimes_RT)\\
&\cong \underset{{{\widetilde{{\mathfrak{b}}}\in F(\mathcal{Z}^f)}}}\varinjlim {\mbox{H}}_{\widetilde{{\mathfrak{b}}}}^i(X\otimes_RT)\\
&\cong \ \ {\mbox{H}}_{\mathcal{Z}^f}^i(X\otimes_RT),
\end{array}$$ which completes the proof of (ii).
Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$. Let $S$ be a multiplicatively closed subset of $R$ and $f:R{\longrightarrow}S^{-1}R$ be the natural ring homomorphism. In this case, we denote $\mathcal{Z}^f$ by $S^{-1}\mathcal{Z}$. Clearly, $$S^{-1}\mathcal{Z}=\left\{S^{-1}{\mathfrak{p}}|\ {\mathfrak{p}}\cap S=\emptyset ~\text{and}~ {\mathfrak{p}}\in\mathcal{Z}\right\}.$$ In particular, for a prime ideal ${\mathfrak{p}}$ of $R$, we denote $(R-{\mathfrak{p}})^{-1}\mathcal{Z}$ by $\mathcal{Z}_{{\mathfrak{p}}}$. Assume that $R$ is local with the unique maximal ideal ${\mathfrak{m}}$ and $\hat{R}$ is the completion of $R$ with respect to the ${\mathfrak{m}}$-adic topology. Let $f:R{\longrightarrow}\hat{R}$ be the natural ring homomorphism. In this case, we denote $\mathcal{Z}^f$ by $\widehat{\mathcal{Z}}$. Restating Lemma \[B\] (ii) for the flat $R$-algebras $S^{-1}R$ and $\hat{R}$ yields the following result.
\[E\] Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$ and *$X\in \mathrm{D}_{
\sqsubset}(R)$*. Then the following statements hold.
- Assume that $S$ is a multiplicatively closed subset of $R$. There is a natural $S^{-1}R$-isomorphism *$S^{-1}
({\mbox{H}}_{\mathcal{Z}}^i(X)) \cong {\mbox{H}}_{S^{-1}\mathcal{Z}}^i(S^{-1}X)$* for all integers $i$.
- Assume that $(R,{\mathfrak{m}})$ is a local ring. There is a natural $\hat{R}$-isomorphism *${\mbox{H}}_{\mathcal{Z}}^i(X)\otimes_R
\hat{R}\cong {\mbox{H}}_{\widehat{\mathcal{Z}}}^i(X\otimes_R\hat{R})$* for all integers $i$.
The next result provides a comparison between the annihilation of local cohomology modules with respect to a specialization closed subset of $\operatorname{Spec}R$ and the annihilation of their localizations.
\[F\] Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$ and $X\in \mathrm{D}^f_{\sqsubset}(R)$. Then for every ${\mathfrak{a}}\in F(\mathcal{Z})$ and every integer $t$, the following statements are equivalent:
- *[There exists a positive integer $l$ such that ${\mathfrak{a}}^{l}{\mbox{H}}_{\mathcal{Z}}^i(X)=0$ for all $i<t$.]{}*
- *[For every ${\mathfrak{p}}\in\operatorname{Spec}R$, there exists a positive integer $l_{{\mathfrak{p}}}$ such that ${\mathfrak{a}}^{l_{{\mathfrak{p}}}}
{\mbox{H}}_{{\mathcal{Z}}_{{\mathfrak{p}}}}^i(X_{{\mathfrak{p}}})
=0$ for all $i<t.$ ]{}*
(i)$\Rightarrow$(ii) immediately follows by Corollary \[E\] (i).
(ii)$\Rightarrow$(i) Clearly, we may assume that $t\geq 1-\sup X$. We proceed by induction on $t$. Let $t=1-\sup X$. Then by Lemma \[A\] (i), ${\mbox{H}}_{\mathcal{Z}}^{-\sup X}(X)$ is finitely generated, and so we may assume that $$\operatorname{Ass}_{R}({\mbox{H}}_{\mathcal{Z}}^{-\sup
{X}}(X))=\left\{{\mathfrak{p}}_{1},\ldots, {\mathfrak{p}}_{r}\right\}.$$ Now, by our assumption, there exist positive integers $l_{{\mathfrak{p}}_{1}},
\ldots, l_{{\mathfrak{p}}_{r}}$ such that $${\mathfrak{a}}^{l_{{\mathfrak{p}}_i}}{\mbox{H}}_{{\mathcal{Z}}_{{\mathfrak{p}}_{i}}}^{-\sup {X}}(X_{{\mathfrak{p}}_i})=0$$ for all $i=1,\ldots,r$. Let $l:=\max\left\{l_{{\mathfrak{p}}_{1}}, \ldots,l_{{\mathfrak{p}}_{r}}\right\}$. Then, in view of Corollary \[E\] (i), $({\mathfrak{a}}^{l}
{\mbox{H}}_{\mathcal{Z}}^{-\sup {X}}(X))_{{\mathfrak{p}}_{i}}=0$ for all $i=1, \ldots,r$. Thus ${\mathfrak{a}}^{l}{\mbox{H}}_{\mathcal{Z}}^{-\sup {X}}(X)=0$, because $$\operatorname{Ass}_{R}({\mathfrak{a}}^l{\mbox{H}}_{\mathcal{Z}}^{-\sup {X}}(X))\subseteq \operatorname{Ass}_{R}({\mbox{H}}_{\mathcal{Z}}^{-\sup {X}}(X)).$$ Hence, ${\mathfrak{a}}^{l}{\mbox{H}}_{\mathcal{Z}}^i(X)=0$ for all $i<1-\sup X$.
Next, suppose that $t>1-\sup X$ and the result has been proved for $t-1$. From the induction hypothesis, we deduce that there exists a positive integer $l_1$ such that ${\mathfrak{a}}^{l_1}{\mbox{H}}_{\mathcal{Z}}^i(X)=0$ for all $i<t-1$. Then, Corollary \[D\] yields that ${\mbox{H}}_{\mathcal{Z}}^i(X)$ is finitely generated for all $i<t-1$. Now, Lemma \[C\] implies that $\Hom_R(R/{\mathfrak{a}},{\mbox{H}}_{\mathcal{Z}}^{t-1}(X))$ is finitely generated. By the assumption, for every prime ideal ${\mathfrak{p}}$ of $R$, there exists a positive integer $l_{{\mathfrak{p}}}$ such that ${\mathfrak{a}}^{l_{{\mathfrak{p}}}}{\mbox{H}}_{{\mathcal{Z}}_{{\mathfrak{p}}}}^{t-1}(X_{{\mathfrak{p}}})=0$, and so $\operatorname{Supp}_R({\mbox{H}}_{\mathcal{Z}}^{t-1}(X)) \subseteq{\mbox{V}}({\mathfrak{a}})$. Therefore, $$\operatorname{Ass}_{R}({\mbox{H}}_{\mathcal{Z}}^{t-1}(X)) =\operatorname{Ass}_R(\Hom_R(R/{\mathfrak{a}},
{\mbox{H}}_{\mathcal{Z}}^{t-1}(X)))$$ is finite. Hence, by a similar argument as in the case $t=1-\sup X$, we may find a positive integer $l_2$ such that ${\mathfrak{a}}^{l_2}{\mbox{H}}_{\mathcal{Z}}^{t-1}(X)=0$. Finally, set $l:=\max\left\{l_1,l_2\right\}$.
Let us come to the last preparation for proving the main result of this section.
\[F1\] Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$ and ${\mathfrak{p}}\in \mathcal{Z}$. Let $\widetilde{{\mathfrak{b}}}$ be an ideal of the ring $R_{{\mathfrak{p}}}$. If $\widetilde{{\mathfrak{b}}}\in F(\mathcal{Z}_{{\mathfrak{p}}})$, then $\widetilde{{\mathfrak{b}}}\cap R\in F(\mathcal{Z})$.
Assume that $\widetilde{{\mathfrak{b}}}\in F(\mathcal{Z}_{{\mathfrak{p}}})$. Let $\widetilde{{\mathfrak{b}}}=\bigcap_{i=1}^nQ_i$ be a minimal primary decomposition of $\widetilde{{\mathfrak{b}}}$ in $R_{{\mathfrak{p}}}$. Let $1\leq i\leq n$. As ${\mbox{V}}(\widetilde{{\mathfrak{b}}})\subseteq \mathcal{Z}_{{\mathfrak{p}}}$, one has $\sqrt{Q_i}\in \mathcal{Z}_{{\mathfrak{p}}}$, and so $$\sqrt{Q_i\cap R}=\sqrt{Q_i}\cap R\in \mathcal{Z}.$$ This completes the argument, because $\widetilde{{\mathfrak{b}}}\cap R=\bigcap_{i=1}^n(Q_i\cap R)$.
The following result is the derived category analogue of Faltings’ Local-global Principle Theorem for a single specialization closed subset $\mathcal{Z}$ of $\operatorname{Spec}R$.
\[G\] Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$ and $X\in \mathrm{D}^f_{\sqsubset}(R)$. Then for every integer $t$, the following statements are equivalent:
- *[${\mbox{H}}_{\mathcal{Z}}^i(X)$ is a finitely generated $R$-module for all $i<t$.]{}*
- *[${\mbox{H}}_{\mathcal{Z}_{{\mathfrak{p}}}}^i(X_{{\mathfrak{p}}})$ is a finitely generated $R_{{\mathfrak{p}}}$-module for all $i<t$ and all ${\mathfrak{p}}\in\operatorname{Spec}R$.]{}*
(i)$\Rightarrow$(ii) is clear by Corollary \[E\] (i).
(ii)$\Rightarrow$(i) We may and do assume that $t\geq 1-\sup X$ and proceed by induction on $t$. If $t=1-\sup X$, then by Lemma \[A\] (i) we see that ${\mbox{H}}_{\mathcal{Z}}^{i}(X)$ is a finitely generated $R$-module for all $i<t$. Let $t>1-\sup X$ and suppose that the result has been proved for $t-1$. The induction hypothesis implies that ${\mbox{H}}_{\mathcal{Z}}^i(X)$ is a finitely generated $R$-module for all $i<t-1$, and so by Lemma \[C\] the $R$-module $\L_{{\mathfrak{b}}}:=\Hom_R(R/{\mathfrak{b}},{\mbox{H}}_{\mathcal{Z}}^{t-1}(X))$ is finitely generated for all ${\mathfrak{b}}\in F(\mathcal{Z})$.
Fix ${\mathfrak{b}}\in F(\mathcal{Z})$. For every prime ideal ${\mathfrak{p}}$ of $R$, we set $${\mathfrak{a}}_{{\mathfrak{p}}}:={\mbox{Ann}\,}_{R_{{\mathfrak{p}}}}({\mbox{H}}_{\mathcal{Z}_{{\mathfrak{p}}}}^{t-1}(X_{{\mathfrak{p}}}))
\cap R.$$ Lemma \[F1\] yields that ${\mathfrak{a}}_{{\mathfrak{p}}}\in F(\mathcal{Z})$. As ${\mathfrak{a}}_{{\mathfrak{p}}}\L_{{\mathfrak{b}}}$ is a finitely generated $R$-module and $({\mathfrak{a}}_{{\mathfrak{p}}}\L_{{\mathfrak{b}}})_{{\mathfrak{p}}}=0$, there exists an element $x_{{\mathfrak{p}}}$ in $R-{\mathfrak{p}}$ such that $({\mathfrak{a}}_{{\mathfrak{p}}}\L_{{\mathfrak{b}}})_{x_{{\mathfrak{p}}}} =0$. Now, for every prime ideal ${\mathfrak{p}}$ of $R$, set $U_{x_{{\mathfrak{p}}}}:=\operatorname{Spec}R-{\mbox{V}}(Rx_{{\mathfrak{p}}})$ and notice that for every ${\mathfrak{q}}\in U_{x_{{\mathfrak{p}}}}$, one has $({\mathfrak{a}}_{{\mathfrak{p}}}\L_{{\mathfrak{b}}})_{{\mathfrak{q}}}=0$. Since any increasing chain of open subsets of $\operatorname{Spec}R$ is stationary, there exists a finite subset $\left\{{\mathfrak{p}}_1,...,{\mathfrak{p}}_{\ell}\right\}$ of $\operatorname{Spec}R$ such that $$\operatorname{Spec}R=\bigcup_{i=1}^{\ell}
U_{x_{{\mathfrak{p}}_i}}.$$ Hence, by setting ${{\mathfrak{a}}}:=\bigcap_{i=1}^{\ell}{\mathfrak{a}}_{{\mathfrak{p}}_{i}}$, one can see that ${\mathfrak{a}}\in F(\mathcal{Z})$ and $({\mathfrak{a}}\L_{{\mathfrak{b}}})_{{\mathfrak{p}}}=0$ for all ${\mathfrak{p}}\in \operatorname{Spec}R$. So, ${\mathfrak{a}}\L_{{\mathfrak{b}}}=0$. This implies that ${\mathfrak{a}}(0:_{{\mbox{H}}_{\mathcal{Z}}^{t-1}(X)}{\mathfrak{b}})=0$, because $\L_{{\mathfrak{b}}}\cong 0:_{{\mbox{H}}_{\mathcal{Z}}^{t-1}(X)}{\mathfrak{b}}$. By Lemma \[A\] (ii), one has $\operatorname{Supp}_R {\mbox{H}}_{\mathcal{Z}}^{t-1}(X)
\subseteq \mathcal{Z}$. Hence ${\mbox{H}}_{\mathcal{Z}}^{t-1}(X)=\Gamma_{\mathcal{Z}}({\mbox{H}}_{\mathcal{Z}}^{t-1}(X))$, and so $${\mbox{H}}_{\mathcal{Z}}^{t-1}(X)
=\underset{{\mathfrak{b}}\in F(\mathcal{Z})}\bigcup (0:_{{\mbox{H}}_{\mathcal{Z}}^{t-1}(X)}{\mathfrak{b}}).$$ Thus ${\mathfrak{a}}{\mbox{H}}_{\mathcal{Z}}^{t-1}(X)=0$, and so ${\mbox{H}}_{\mathcal{Z}}^{t-1}(X)\cong \Hom_R(R/{\mathfrak{a}},{\mbox{H}}_{\mathcal{Z}}^{t-1}(X))$. Now, Lemma \[C\] completes the proof.
By Corollary \[D\] and Theorem \[G\], one can immediately deduce the following result.
\[H\] Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$ and $X\in\mathrm{D}^f_{\sqsubset}(R)$. Then *[$$f_{\mathcal{Z}}(X)=\inf \left\{i\in \mathbb{Z}|\ {\mbox{H}}_{\mathcal{Z}}^i(X) \ \text{is not
finitely generated}\right\}=\inf \left\{f_{\mathcal{Z}_{{\mathfrak{p}}}}(X_{{\mathfrak{p}}})|\ {\mathfrak{p}}\in \operatorname{Spec}R\right\}.$$]{}*
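In particular, taking $\mathcal{Z}={\mbox{V}}({\mathfrak{a}})$ and writing $f_{{\mathfrak{a}}}(X)$ for $f_{{\mbox{V}}({\mathfrak{a}})}(X)$, one has $\mathcal{Z}_{{\mathfrak{p}}}={\mbox{V}}({\mathfrak{a}}R_{{\mathfrak{p}}})$, and Corollary \[H\] reads $$f_{{\mathfrak{a}}}(X)=\inf \left\{f_{{\mathfrak{a}}R_{{\mathfrak{p}}}}(X_{{\mathfrak{p}}})|\ {\mathfrak{p}}\in \operatorname{Spec}R\right\},$$ the derived category counterpart of the classical local-global principle for the finiteness dimension.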
Faltings’ Annihilator Theorem
=============================
We start this section with the following technical, but useful, result.
\[I\] Let $\mathcal{Z}\subseteq \mathcal{Y}$ be two specialization closed subsets of $\operatorname{Spec}R$ such that *${\mbox{V}}({\mathfrak{p}})\cap (\mathcal{Y}-\mathcal{Z})=\left\{{\mathfrak{p}}\right\}$* for all ${\mathfrak{p}}\in \mathcal{Y}-\mathcal{Z}$. Then for every injective $R$-module $E$, there exists a natural $R$-isomorphism $$\Theta_{E}:\frac{\Gamma_{\mathcal{Y}}(E)}{\Gamma_{\mathcal{Z}}(E)}
{\longrightarrow}\underset{\tiny {{\mathfrak{p}}\in\mathcal{Y}-\mathcal{Z}}}\bigoplus\Gamma_{{\mathfrak{p}}R_{{\mathfrak{p}}}}(E_{{\mathfrak{p}}}).$$
Let $M$ be an $R$-module. There exists a natural $R$-homomorphism $$\theta_{M}:\Gamma_{\mathcal{Y}}(M)\longrightarrow
\underset{\tiny{{\mathfrak{p}}\in\mathcal{Y}-\mathcal{Z}}}\bigoplus \Gamma_{{\mathfrak{p}}R_{{\mathfrak{p}}}}(M_{{\mathfrak{p}}}),$$ with $\theta_{M}(m):=(\frac{m}{1})_{{\mathfrak{p}}}$ for all $m\in \Gamma_{\mathcal{Y}}(M)$. Let $m\in\Gamma_{\mathcal{Y}}(M)$. Then $\operatorname{Supp}_RRm\subseteq \mathcal{Y}$, and hence our assumption on $\mathcal{Y}-\mathcal{Z}$ implies that each element of $(\operatorname{Supp}_RRm)\cap (\mathcal{Y}-\mathcal{Z})$ is minimal in $\operatorname{Supp}_RRm$. Thus $\frac{m}{1}\in \Gamma_{{\mathfrak{p}}R_{{\mathfrak{p}}}}(M_{{\mathfrak{p}}})$ for all ${\mathfrak{p}}\in \mathcal{Y}-\mathcal{Z}$ and $(\operatorname{Supp}_RRm)
\cap (\mathcal{Y}-\mathcal{Z})$ is a finite set. So, $\theta_{M}$ is well-defined. Also, one can easily check that ${\mbox{Ker}\,}\theta_{M}=\Gamma_{\mathcal{Z}}(M)$.
Next, we prove that for any injective $R$-module $E$, the $R$-homomorphism $\theta_{E}$ is surjective. Let $$E=\underset{\tiny{{\mathfrak{q}}\in
\operatorname{Spec}R}}\bigoplus {\mbox{E}}(R/{\mathfrak{q}})^{(\mu^{0}({\mathfrak{q}},E))}$$ be an injective $R$-module. Let ${\mathfrak{p}}_{\circ}\in \mathcal{Y}-\mathcal{Z}$ and $\frac{x}{s}
\in \Gamma_{{\mathfrak{p}}_{\circ} R_{{\mathfrak{p}}_{\circ}}}(E_{{\mathfrak{p}}_{\circ}})$. Since $x\in E$, $x=(x_{{\mathfrak{q}}})_{{\mathfrak{q}}},$ where $x_{{\mathfrak{q}}}\in {\mbox{E}}(R/{\mathfrak{q}})^{(\mu^{0}({\mathfrak{q}},
E))}$. As $\frac{x}{s}\in \Gamma_{{\mathfrak{p}}_{\circ} R_{{\mathfrak{p}}_{\circ}}}(E_{{\mathfrak{p}}_{\circ}})$, there is a positive integer $n$ and $\widetilde{s}
\in R-{\mathfrak{p}}_{\circ}$ such that $\widetilde{s}{\mathfrak{p}}_{\circ}^nx=0$. Let ${\mathfrak{q}}$ be a prime ideal of $R$ with ${\mathfrak{p}}_{\circ}\nsubseteq {\mathfrak{q}}$ and let $t_{{\mathfrak{q}}}\in {\mathfrak{p}}_{\circ}-{\mathfrak{q}}$. Since $${\mbox{E}}(R/{\mathfrak{q}})^{(\mu^{0}({\mathfrak{q}},E))}\overset{t_{{\mathfrak{q}}}^n}\longrightarrow {\mbox{E}}(R/{\mathfrak{q}})^{(\mu^{0}({\mathfrak{q}},E))}$$ is an isomorphism, we get that $\widetilde{s}x_{{\mathfrak{q}}}=0$. Next, let ${\mathfrak{q}}$ be a prime ideal of $R$ with ${\mathfrak{q}}\nsubseteq {\mathfrak{p}}_{\circ}$. There is a positive integer $n_{{\mathfrak{q}}}$ such that ${\mathfrak{q}}^{n_{{\mathfrak{q}}}}x_{{\mathfrak{q}}}=0$. Let $s_{{\mathfrak{q}}}\in {\mathfrak{q}}-{\mathfrak{p}}_{\circ}$. Then $s_{{\mathfrak{q}}}^{n_{{\mathfrak{q}}}}x_{{\mathfrak{q}}}=0$. So, we may take $\check{s}\in R-{\mathfrak{p}}_{\circ}$ such that $\check{s}x_{{\mathfrak{q}}}=0$ for all ${\mathfrak{q}}\neq {\mathfrak{p}}_{\circ}$. Note that only finitely many of $x_{{\mathfrak{q}}}\ $’s are nonzero. Thus, without loss of generality, we may assume that $x_{{\mathfrak{q}}}=0$ for all ${\mathfrak{q}}\neq {\mathfrak{p}}_{\circ}$. In particular, $(0:_Rx)=(0:_Rx_{{\mathfrak{p}}_{\circ}}).$ Hence $x$ is annihilated by some power of ${\mathfrak{p}}_{\circ}$, and so $x\in \Gamma_{\mathcal{Y}}(E)$. On the other hand, using the fact that the map $${\mbox{E}}(R/{\mathfrak{p}}_{\circ})^{(\mu^{0}({\mathfrak{p}}_{\circ},E))}\overset{s}\longrightarrow {\mbox{E}}(R/{\mathfrak{p}}_{\circ})^{(\mu^{0}({\mathfrak{p}}_{\circ},E))}$$ is an isomorphism, one deduces that $sy_{{\mathfrak{p}}_{\circ}}=x_{{\mathfrak{p}}_{\circ}}$ for some $y_{{\mathfrak{p}}_{\circ}}\in {\mbox{E}}(R/{\mathfrak{p}}_{\circ})^{(\mu^{0}({\mathfrak{p}}_{\circ},E))}$. Let $\delta$ denote the Kronecker delta. 
Then for $y:=(\delta_{{\mathfrak{p}}_{\circ},{\mathfrak{q}}}y_{{\mathfrak{p}}_{\circ}})_{{\mathfrak{q}}}$, we have $\frac{y}{1}=\frac{x}{s}$ in $E_{{\mathfrak{p}}_{\circ}}$. Note that there exists a positive integer $t$ such that ${\mathfrak{p}}_{\circ}^t y=0$, and so $y\in \Gamma_{\mathcal{Y}}(E)$. Since ${\mbox{V}}({\mathfrak{p}}_{\circ})\cap (\mathcal{Y}-\mathcal{Z})=\left\{{\mathfrak{p}}_{\circ}\right\}$, we deduce that $\frac{y}{1}=0$ in $E_{{\mathfrak{p}}}$ for all ${\mathfrak{p}}\in
(\mathcal{Y}-\mathcal{Z})-\left\{{\mathfrak{p}}_{\circ}\right\}$. So, $$\theta_{E}(y)=(\frac{y}{1})_{{\mathfrak{p}}}=(\delta_{{\mathfrak{p}}_{\circ},{\mathfrak{p}}}\frac{x}{s})_{{\mathfrak{p}}}.$$ Therefore $\theta_{E}$ is surjective, and so it induces a natural $R$-isomorphism $$\Theta_{E}:\frac{\Gamma_{\mathcal{Y}}(E)}{\Gamma_{
\mathcal{Z}}(E)}{\longrightarrow}\underset{\tiny{{\mathfrak{p}}\in \mathcal{Y}-\mathcal{Z}}}\bigoplus\Gamma_{{\mathfrak{p}}R_{{\mathfrak{p}}}}(E_{{\mathfrak{p}}}).$$
Next, we establish a useful long exact sequence of local cohomology modules.
\[J\] Let $\mathcal{Z}\subseteq \mathcal{Y}$ be two specialization closed subsets of $\operatorname{Spec}R$ such that *${\mbox{V}}({\mathfrak{p}})\cap (\mathcal{Y}-\mathcal{Z})=\left\{{\mathfrak{p}}\right\}$* for all ${\mathfrak{p}}\in \mathcal{Y}-\mathcal{Z}$. Then for any $X\in \mathrm{D}_{\sqsubset}(R),$ there is a long exact sequence *$$\cdots \rightarrow \underset{\tiny{{\mathfrak{p}}\in \mathcal{Y}-\mathcal{Z}}}\bigoplus {\mbox{H}}_{{\mathfrak{p}}R_{{\mathfrak{p}}}}^{i-1}(X_{{\mathfrak{p}}})\rightarrow {\mbox{H}}_{\mathcal{Z}}^i(X)\rightarrow {\mbox{H}}_{\mathcal{Y}}^i(X)\rightarrow \underset{\tiny{{\mathfrak{p}}\in \mathcal{Y}-\mathcal{Z}}}\bigoplus {\mbox{H}}_{{\mathfrak{p}}R_{{\mathfrak{p}}}}^i(X_{{\mathfrak{p}}})\rightarrow {\mbox{H}}_{\mathcal{Z}}^{i+1}(X)\rightarrow \cdots.$$*
First, let $M$ be an $R$-module, and set $\Gamma_{\mathcal{Y}/\mathcal{Z}}(M):=\Gamma_{\mathcal{Y}}(M)/\Gamma_{\mathcal{Z}}(M)$. Note that $\Gamma_{\mathcal{Y}/\mathcal{Z}}(-)$ is a functor from the category of $R$-modules to itself, but not necessarily left exact. We consider the right derived functor of this functor in $\mathrm{D}(R)$. Let $X\in \mathrm{D}_{\sqsubset}(R)$ and $I$ be an injective resolution of $X$. Then for every prime ideal ${\mathfrak{p}}$ of $R$, we may check that $I_{{\mathfrak{p}}}$ is an injective resolution of the $R_{{\mathfrak{p}}}$-complex $X_{{\mathfrak{p}}}$.
We set ${\mbox{H}}_{\mathcal{Y}/ \mathcal{Z}}^i(X):={\mbox{H}}_{-i}(\Gamma_{\mathcal{Y}/\mathcal{Z}}(I))$ for all integers $i$. One can use the following exact sequence of $R$-complexes $$0\longrightarrow\Gamma_{\mathcal{Z}}(I)\longrightarrow\Gamma_{\mathcal{Y}}(I)\longrightarrow \Gamma_{\mathcal{Y}/
\mathcal{Z}}(I)\longrightarrow 0,$$ to obtain the long exact sequence $$\cdots \longrightarrow{\mbox{H}}_{\mathcal{Y}/\mathcal{Z}}^{i-1}(X)\longrightarrow
{\mbox{H}}_{\mathcal{Z}}^i(X)\longrightarrow {\mbox{H}}_{\mathcal{Y}}^i(X)\longrightarrow{\mbox{H}}_{\mathcal{Y}/\mathcal{Z}}^i(X)\longrightarrow {\mbox{H}}_{\mathcal{Z}}^{i+1}(X)
\longrightarrow \cdots.$$ Lemma \[I\] yields that the two complexes $\Gamma_{\mathcal{Y}/\mathcal{Z}}(I)$ and $\underset{\tiny{{\mathfrak{p}}\in \mathcal{Y}-
\mathcal{Z}}}\bigoplus\Gamma_{{\mathfrak{p}}R_{{\mathfrak{p}}}}(I_{{\mathfrak{p}}})$ are isomorphic, and so $${\mbox{H}}_{\mathcal{Y}/\mathcal{Z}}^i(X)\cong \underset{\tiny{{\mathfrak{p}}\in
\mathcal{Y}-\mathcal{Z}}}\bigoplus {\mbox{H}}_{{\mathfrak{p}}R_{{\mathfrak{p}}}}^i(X_{{\mathfrak{p}}})$$ for all integers $i$. This completes the proof.
Next, we include the following immediate consequence.
\[K\] Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$ and $X\in \mathrm{D}_{\sqsubset}(R)$. Then the following statements hold.
- For every integer $n$, *$\mathcal{Z}_n:=\left\{~{\mathfrak{p}}\in\mathcal{Z}|~{\mbox{ht}\,}{\mathfrak{p}}\geq n\right\}$* is a specialization closed subset of $\operatorname{Spec}R$ and *$\bigcap_{n\in\mathbb{Z}}\mathcal{Z}_{n}=\emptyset$.*
- If $\dim R$ is finite, then *${\mbox{H}}_{\mathcal{Z}_{n}}^i(X)=0$* for all $i$ and all $n> \dim R$.
- For any two integers $i$ and $n$, there exists an exact sequence *$${\mbox{H}}_{\mathcal{Z}_{n+1}}^i(X)
\longrightarrow {\mbox{H}}_{\mathcal{Z}_{n}}^i(X)\longrightarrow\underset{\tiny{{\mathfrak{p}}\in \mathcal{Z}_{n}-\mathcal{Z}_{n+1}}}\bigoplus
{\mbox{H}}_{{\mathfrak{p}}R_{{\mathfrak{p}}}}^i(X_{{\mathfrak{p}}}).$$*
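To illustrate Corollary \[K\], let $R$ be a one-dimensional domain and $\mathcal{Z}=\operatorname{Spec}R$. Then $\mathcal{Z}_{n}=\operatorname{Spec}R$ for all $n\leq 0$, $\mathcal{Z}_{1}$ is the set of maximal ideals of $R$, and $\mathcal{Z}_{n}=\emptyset$ for all $n\geq 2$. Since ${\mbox{H}}_{\mathcal{Z}_{2}}^i(X)=0$, the exact sequence in (iii) with $n=1$ embeds ${\mbox{H}}_{\mathcal{Z}_{1}}^i(X)$ into $\underset{\tiny{{\mathfrak{m}}\in \mathcal{Z}_{1}}}\bigoplus{\mbox{H}}_{{\mathfrak{m}}R_{{\mathfrak{m}}}}^i(X_{{\mathfrak{m}}})$.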
We need to apply the following first quadrant spectral sequence in the proof of the main result of this section.
\[L\] Let $\mathcal{Z}$ be a specialization closed subset of $\operatorname{Spec}R$. Then for any $X\in \mathrm{D}_{\sqsubset}(R)$ with $\sup X=0$ and any ${\mathfrak{a}}\in F(\mathcal{Z})$, there is a first quadrant spectral sequence *$${\mbox{E}}_2^{p,q}:={\mbox{H}}_{{\mathfrak{a}}}^{p}
({\mbox{H}}_{\mathcal{Z}}^{q}(X))\underset{p}\Longrightarrow {\mbox{H}}_{{\mathfrak{a}}}^{p+q}(X).$$*
By [@FI 1.6], we have the following spectral sequence $${\mbox{E}}^2_{p,q}:={\mbox{H}}_{{\mathfrak{a}}}^{-p}({\mbox{H}}_{q}({{\mathbf R}\Gamma}_{\mathcal{Z}}(X)))
\underset{p}\Longrightarrow {\mbox{H}}_{{\mathfrak{a}}}^{-p-q}({{\mathbf R}\Gamma}_{\mathcal{Z}}(X)).$$ Let $I$ be an injective resolution of $X$. Then, one has the following natural $R$-isomorphisms $$\begin{array}{rl}
{\mbox{H}}_{{\mathfrak{a}}}^{-p-q}({{\mathbf R}\Gamma}_{\mathcal{Z}}(X))&\cong {\mbox{H}}_{{\mathfrak{a}}}^{-p-q}(\Gamma_{\mathcal{Z}}(I))\\
&\cong {\mbox{H}}_{p+q}({{\mathbf R}\Gamma}_{{\mathfrak{a}}}(\Gamma_{\mathcal{Z}}(I)))\\
&\cong {\mbox{H}}_{p+q}(\Gamma_{{\mathfrak{a}}}(\Gamma_{\mathcal{Z}}(I)))\\
&\cong {\mbox{H}}_{p+q}(\Gamma_{{\mathfrak{a}}}(I))\\
&\cong {\mbox{H}}_{p+q}({{\mathbf R}\Gamma}_{{\mathfrak{a}}}(X))\\
&\cong {\mbox{H}}_{{\mathfrak{a}}}^{-p-q}(X),
\end{array}$$ Since ${\mbox{H}}_{q}({{\mathbf R}\Gamma}_{\mathcal{Z}}(X))={\mbox{H}}_{\mathcal{Z}}^{-q}(X)$ for all integers $q$, reindexing via ${\mbox{E}}_{2}^{p,q}:={\mbox{E}}^2_{-p,-q}$ turns the above spectral sequence into the asserted one, which completes the argument.
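In the special case $\mathcal{Z}={\mbox{V}}({\mathfrak{b}})$ for an ideal ${\mathfrak{b}}$ of $R$ (so that ${\mathfrak{a}}\in F(\mathcal{Z})$ means ${\mathfrak{b}}\subseteq \sqrt{{\mathfrak{a}}}$), Proposition \[L\] specializes to the familiar composite functor spectral sequence $${\mbox{E}}_2^{p,q}={\mbox{H}}_{{\mathfrak{a}}}^{p}({\mbox{H}}_{{\mathfrak{b}}}^{q}(X))\underset{p}\Longrightarrow {\mbox{H}}_{{\mathfrak{a}}}^{p+q}(X),$$ reflecting the equality of functors $\Gamma_{{\mathfrak{a}}}\circ \Gamma_{{\mathfrak{b}}}=\Gamma_{{\mathfrak{a}}}$ noted in the proof of Lemma \[C\].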
Now, we are ready to prove the Annihilator Theorem for local cohomology modules of complexes.
\[M\] Let $\mathcal{Z}\subseteq\mathcal{Y}$ be two specialization closed subsets of $\operatorname{Spec}R$, $X\in\mathrm{D}^f_{\Box}(R)$ and $n$ an integer. Consider the following statements:
- *[There exists an ideal ${\mathfrak{a}}\in F(\mathcal{Y})$ such that ${\mathfrak{a}}{\mbox{H}}_{\mathcal{Z}}^i(X)=0$ for all $i\leq n.$ ]{}*
- *[For every ${\mathfrak{q}}\notin \mathcal{Y}$ and every ${\mathfrak{p}}\in\mathcal{Z}\cap {\mbox{V}}({\mathfrak{q}})$, one has $\depth_{R_{{\mathfrak{q}}}}X_{{\mathfrak{q}}}+
{\mbox{ht}\,}{\mathfrak{p}}/{\mathfrak{q}}> n.$]{}*
Then, *(i)* always implies *(ii)* and *(ii)* implies *(i)*, provided $R$ is a homomorphic image of a finite-dimensional Gorenstein ring.
Note that by Lemma \[A\] (i), ${\mbox{H}}_{\mathcal{Z}}^i(X)=0$ for all $i<-\sup X$ and ${\mbox{H}}_{\mathcal{Z}}^{-\sup X}(X)$ is finitely generated. Set $T:=\Sigma^{-\sup X}X$ and note that ${\mathfrak{a}}{\mbox{H}}_{\mathcal{Z}}^i(X)=0$ for all $i\leq n$ if and only if ${\mathfrak{a}}{\mbox{H}}_{\mathcal{Z}}^i(T)=0$ for all $i\leq n+\sup X$. Also, one can see that $$\depth_{R_{{\mathfrak{q}}}}T_{{\mathfrak{q}}}=\depth_{R_{{\mathfrak{q}}}} X_{{\mathfrak{q}}}+
\sup X$$for all ${\mathfrak{q}}\in\operatorname{Spec}R$. Therefore, by replacing $X$ with $T$, we may and do assume that $\sup X=0$, and so ${\mbox{H}}_{\mathcal{Z}}^i(X)=0$ for all $i<0$ and ${\mbox{H}}_{\mathcal{Z}}^{0}(X)$ is a finitely generated $R$-module.
(i)$\Rightarrow$(ii) First, we reduce the situation to the case that $R$ is a Gorenstein local ring. To this end, let ${\mathfrak{q}}\notin \mathcal{Y}$ and ${\mathfrak{p}}\in\mathcal{Z}\cap {\mbox{V}}({\mathfrak{q}})$. We should show that $$\depth_{R_{{\mathfrak{q}}}}X_{{\mathfrak{q}}}+{\mbox{ht}\,}{\mathfrak{p}}/{\mathfrak{q}}> n,$$ which is equivalent to showing that $$\depth_{({R_{{\mathfrak{p}}})}_{{\mathfrak{q}}R_{{\mathfrak{p}}}}}(({X_{{\mathfrak{p}}}})_{{\mathfrak{q}}R_{{\mathfrak{p}}}})+\dim R_{{\mathfrak{p}}}/{\mathfrak{q}}R_{{\mathfrak{p}}}> n.$$ Hence, in view of Corollary \[E\] (i), by replacing $R$, $\mathcal{Y}$, $\mathcal{Z}$ and $X$ with $R_{{\mathfrak{p}}}$, $\mathcal{Y}_{{\mathfrak{p}}}$, $\mathcal{Z}_{{\mathfrak{p}}}$ and $X_{{\mathfrak{p}}}$, respectively, we may and do assume that $(R,{\mathfrak{m}})$ is a local ring and we must show that for every ${\mathfrak{q}}\notin \mathcal{Y}$, $$\depth_{R_{{\mathfrak{q}}}}X_{{\mathfrak{q}}}+\dim R/{\mathfrak{q}}>n.$$ Let $\hat{R}$ be the completion of $R$ with respect to the ${\mathfrak{m}}$-adic topology. We have $$\dim R/{\mathfrak{q}}=\dim \hat{R}/{\mathfrak{q}}\hat{R}=\dim\hat{R} /\frak Q,$$ for some $\frak Q\in{\mbox{Min}\,}{{\mathfrak{q}}\hat{R}}$, and one can see that $\frak Q\notin \widehat{\mathcal{Y}}.$ On the other hand, in view of [@Iy Corollary 2.6] we have $$\depth_{R_{{\mathfrak{q}}}} X_{{\mathfrak{q}}}=\depth_{\hat{R}_{\frak{Q}}}(X_{{\mathfrak{q}}}\otimes_{R_{{\mathfrak{q}}}}\hat{R}_{\frak{Q}})=\depth_{\hat{R}_{\frak{Q}}}(
X\otimes_{R}{\hat{R}})_{\frak{Q}}.$$ Corollary \[E\] (ii) yields that there is an $\hat{R}$-isomorphism ${\mbox{H}}_{\mathcal{Z}}^i(X)
\otimes_{R}\hat{R}\cong {\mbox{H}}_{\widehat{\mathcal{Z}}}^i(X\otimes_{R}\hat{R})$ for all integers $i$. Thus we may assume that $R$ is a complete local ring, and so it is a homomorphic image of a Gorenstein local ring. Next, in view of Lemma \[B\] (i), we can assume that $R$ is a Gorenstein local ring.
Now for a given ${\mathfrak{q}}\notin\mathcal{Y},$ we should show that ${\mbox{H}}_{{\mathfrak{q}}R_{{\mathfrak{q}}}}^j(X_{{\mathfrak{q}}})=0$ for all $j\leq n-\dim R/{\mathfrak{q}}.$ By Lemma \[L\], we have a first quadrant spectral sequence $${\mbox{E}}_2^{p,q}:={\mbox{H}}_{{\mathfrak{m}}}^{p}({\mbox{H}}_{\mathcal{Z}}^{q}(X))\underset{p}\Longrightarrow
{\mbox{H}}_{{\mathfrak{m}}}^{p+q}(X).$$ So, we can use our assumption to deduce that ${\mathfrak{a}}^t{\mbox{H}}_{{\mathfrak{m}}}^{j+\dim R/{\mathfrak{q}}}(X)=0$ for some positive integer $t$. Hence, by using the Local Duality Theorem [@Ha Chapter V, Theorem 6.2], ${{\mathfrak{a}}^t}\Ext_{R}^{\dim R-j-\dim R/{\mathfrak{q}}}(X,R)=0$, and so $$\operatorname{Supp}_R(\Ext_R^{\dim R-j-\dim R/{\mathfrak{q}}}(X,R))\subseteq {\mbox{V}}({\mathfrak{a}})\subseteq \mathcal{Y}.$$ As ${\mathfrak{q}}\notin \mathcal{Y}$ and $\dim R_{{\mathfrak{q}}}
=\dim R-\dim R/{\mathfrak{q}},$ we deduce that $\Ext_{R_{{\mathfrak{q}}}}^{\dim R_{{\mathfrak{q}}}-j}(X_{{\mathfrak{q}}}, R_{{\mathfrak{q}}})=0$. Therefore, the Local Duality Theorem implies that ${\mbox{H}}_{{\mathfrak{q}}R_{{\mathfrak{q}}}}^{j}(X_{{\mathfrak{q}}})=0$.
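For the reader's convenience, the form of the Local Duality Theorem invoked above and below can be sketched as follows; this is only a restatement, in the notation used here, of [@Ha Chapter V, Theorem 6.2] for a Gorenstein local ring $(R,{\mathfrak{m}})$ of dimension $d$, where $E_{R}(R/{\mathfrak{m}})$ denotes the injective hull of the residue field:

```latex
$${\mbox{H}}_{{\mathfrak{m}}}^{i}(X)\cong
\operatorname{Hom}_{R}\left(\Ext_{R}^{d-i}(X,R),\,E_{R}(R/{\mathfrak{m}})\right).$$
```

In particular, since $E_{R}(R/{\mathfrak{m}})$ is an injective cogenerator, ${\mbox{H}}_{{\mathfrak{m}}}^{i}(X)$ and $\Ext_{R}^{d-i}(X,R)$ have the same annihilator, which is the translation between the annihilation of local cohomology modules and of Ext modules used in both directions of the proof.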
(ii)$\Rightarrow$(i) First, note that by Lemma \[B\] (i) we may assume that $R$ is Gorenstein and $\dim R<\infty$. Fix a non-negative integer $i\leq n$. Let ${\mathfrak{q}}\notin\mathcal{Y}$ and ${\mathfrak{p}}\in\mathcal{Z}\cap {\mbox{V}}({\mathfrak{q}})$. By the assumption, we have $$i-{\mbox{ht}\,}{\mathfrak{p}}/{\mathfrak{q}}\leq n-{\mbox{ht}\,}{\mathfrak{p}}/{\mathfrak{q}}<
\depth_{R_{{\mathfrak{q}}}}X_{{\mathfrak{q}}},$$ and so ${\mbox{H}}_{{\mathfrak{q}}R_{{\mathfrak{q}}}}^{i-{\mbox{ht}\,}{\mathfrak{p}}/{\mathfrak{q}}}(X_{{\mathfrak{q}}})=0$. Then the Local Duality Theorem yields that $\Ext_{R}^{{\mbox{ht}\,}{\mathfrak{p}}-i}(X,R)_{{\mathfrak{q}}}=0$. Thus, one deduces that $\operatorname{Supp}_{R_{{\mathfrak{p}}}}(\Ext_{R_{{\mathfrak{p}}}}^{{\mbox{ht}\,}{\mathfrak{p}}-i}(X_{{\mathfrak{p}}},R_{{\mathfrak{p}}}))
\subseteq \mathcal{Y}_{{\mathfrak{p}}}$.
Next, for every ${\mathfrak{p}}\in \mathcal{Z}$, set $${{\mathfrak{a}}}_{{\mathfrak{p}},i}:={\mbox{Ann}\,}_{R_{{\mathfrak{p}}}}(\Ext_{R_{{\mathfrak{p}}}}^{{\mbox{ht}\,}{\mathfrak{p}}-i}(X_{{\mathfrak{p}}},
R_{{\mathfrak{p}}}))\cap R.$$ Applying Lemma \[F1\] implies that ${\mbox{V}}({{\mathfrak{a}}}_{{\mathfrak{p}},i})\subseteq\mathcal{Y}$. Let $t$ be a non-negative integer and ${\mathfrak{p}}\in \mathcal{Z}_{t}-\mathcal{Z}_{t+1}$. As ${{\mathfrak{a}}}_{{\mathfrak{p}},i}\Ext_{R_{{\mathfrak{p}}}}^{t-i}(X_{{\mathfrak{p}}},R_{{\mathfrak{p}}})=0,$ there exists $x_{{\mathfrak{p}}}\in R-{\mathfrak{p}}$ such that $({{\mathfrak{a}}}_{{\mathfrak{p}},i}\Ext_{R}^{t-i}(X,R))_{x_{{\mathfrak{p}}}}=0$. Now, set $U_{x_{{\mathfrak{p}}}}:=\operatorname{Spec}R-{\mbox{V}}(Rx_{{\mathfrak{p}}})$ and note that for every ${\mathfrak{q}}\in U_{x_{{\mathfrak{p}}}}$, one has $${{\mathfrak{a}}}_{{\mathfrak{p}},i}\Ext_{R_{{\mathfrak{q}}}}^{t-i}(X_{{\mathfrak{q}}},R_{{\mathfrak{q}}})=0.$$ By using the fact that every increasing chain of open subsets of $\operatorname{Spec}R$ is stationary, one can deduce that there exists a finite subset $\mathcal{W}_t$ of $\mathcal{Z}_{t}-\mathcal{Z}_{t+1}$ such that $$\underset{{\mathfrak{p}}\in \mathcal{Z}_{t}-\mathcal{Z}_{t+1}}\bigcup U_{x_{{\mathfrak{p}}}}=\underset{{\mathfrak{p}}\in \mathcal{W}_t}\bigcup U_{x_{{\mathfrak{p}}}}.$$ Hence, by setting ${{\mathfrak{a}}}_{t,i}:=\underset{{\mathfrak{p}}\in \mathcal{W}_t}\bigcap{\mathfrak{a}}_{{\mathfrak{p}},i}$, one can see that ${\mbox{V}}({{\mathfrak{a}}}_{t,i})\subseteq\mathcal{Y}$ and ${\mathfrak{a}}_{t,i}\Ext_{R_{{\mathfrak{p}}}}^{t-i}(X_{{\mathfrak{p}}},R_{{\mathfrak{p}}})=0$ for all ${\mathfrak{p}}\in\mathcal{Z}_{t}-\mathcal{Z}_{t+1}$. Applying the Local Duality Theorem again implies that ${\mathfrak{a}}_{t,i}{\mbox{H}}_{{\mathfrak{p}}R_{{\mathfrak{p}}}}^{i}(X_{{\mathfrak{p}}})=0$ for all ${\mathfrak{p}}\in\mathcal{Z}_{t}-\mathcal{Z}_{t+1}$. 
Now, using the exact sequence given in Corollary \[K\] (iii) implies that for the ideal ${\mathfrak{a}}_i:=\prod_{t=0}^{\dim R}{\mathfrak{a}}_{t,i}$, we have ${\mathfrak{a}}_i{\mbox{H}}_{\mathcal{Z}}^{i}(X)=0$, and so by setting ${\mathfrak{a}}:=\bigcap_{i=0}^n{\mathfrak{a}}_{i}$ the assertion follows.
We close the paper with the following result.
\[P\] Let $\mathcal{Z}\subseteq\mathcal{Y}$ be two specialization closed subsets of $\operatorname{Spec}R$ and $X\in\mathrm{D}^f_{\Box}(R)$. Then the following statements hold.
- $f_{\mathcal{Z}}^{\mathcal{Y}}(X)
\leq \lambda_{\mathcal{Z}}^{\mathcal{Y}}(X)$.
- Assume that $R$ is a homomorphic image of a finite-dimensional Gorenstein ring. Then $f_{\mathcal{Z}}^{\mathcal{Y}}(X)=
\lambda_{\mathcal{Z}}^{\mathcal{Y}}(X)$ and $f_{\mathcal{Z}}^{\mathcal{Y}}(X)
=\inf \left\{f_{\mathcal{Z_{{\mathfrak{p}}}}}^{\mathcal{Y_{{\mathfrak{p}}}}}(X_{{\mathfrak{p}}})|\ {\mathfrak{p}}\in \operatorname{Spec}R\right\}$.
\(i) follows from the implication (i)$\Longrightarrow$(ii) in Theorem \[M\].

\(ii) The first assertion of (ii) follows from Theorem \[M\].
Denote $\inf \left\{f_{\mathcal{Z_{{\mathfrak{p}}}}}^{\mathcal{Y_{{\mathfrak{p}}}}}(X_{{\mathfrak{p}}})|\ {\mathfrak{p}}\in \operatorname{Spec}R\right\}$ by $t$. For every prime ideal ${\mathfrak{p}}$ of $R$, Corollary \[E\] (i) easily yields that $f_{\mathcal{Z}}^{\mathcal{Y}}(X)\leq f_{\mathcal{Z_{{\mathfrak{p}}}}}^{\mathcal{Y_{{\mathfrak{p}}}}}
(X_{{\mathfrak{p}}})$, and so $f_{\mathcal{Z}}^{\mathcal{Y}}(X)\leq t$. Let $n$ be any integer with $n<t$ and ${\mathfrak{p}}$ be a prime ideal of $R$. As $n<f_{\mathcal{Z_{{\mathfrak{p}}}}}^{\mathcal{Y_{{\mathfrak{p}}}}}(X_{{\mathfrak{p}}})$, it turns out that there exists ${\mathfrak{c}}\in F(\mathcal{Y}_{{\mathfrak{p}}})$ such that ${\mathfrak{c}}{\mbox{H}}_{{\mathcal{Z}}_{{\mathfrak{p}}}}^i(X_{{\mathfrak{p}}})=0$ for all $i\leq n$. Hence, by Theorem \[M\], for every ${\mathfrak{q}}\notin\mathcal{Y}$ and every ${\mathfrak{p}}\in \mathcal{Z}\cap {\mbox{V}}({\mathfrak{q}})$, one can deduce that$$\depth_{R_{{\mathfrak{q}}}} X_{{\mathfrak{q}}}+{\mbox{ht}\,}{{\mathfrak{p}}}/{{\mathfrak{q}}}> n.$$ Thus, we can apply Theorem \[M\] again to deduce that there exists ${\mathfrak{a}}\in F(\mathcal{Y})$ such that ${\mathfrak{a}}{\mbox{H}}_{{\mathcal{Z}}}^i(X)=0$ for all $i\leq n$. Therefore $f_{\mathcal{Z}}^{\mathcal{Y}}(X)>n$, and so $f_{\mathcal{Z}}^{\mathcal{Y}}(X)\geq t$.
[99]{}
, *Local-global principle for annihilation of general local cohomology*, Colloq. Math., $\textbf{87}$(1), (2001), 129-136.
, *On annihilators and associated primes of local cohomology modules*, J. Pure Appl. Algebra, $\textbf{153}$(3), (2000), 197-227.
, [*Gorenstein dimensions*]{}, Lecture Notes in Mathematics, $\textbf{1747}$, Springer-Verlag, Berlin, 2000.
, *Der Endlichkeitssatz in der lokalen Kohomologie*, Math. Ann., $\textbf{255}$(1), (1981), 45-56.
, *Über die Annulatoren lokaler Kohomologiegruppen*, Arch. Math. (Basel), $\textbf{30}$(5), (1978), 473-476.
, *A homological theory of complexes of modules*, Preprint Series no. 19 a & 19 b, Department of Mathematics, University of Copenhagen, 1981.
, *Depth and amplitude for unbounded complexes*, Commutative algebra (Grenoble/Lyon, 2001), 119-137, Contemp. Math., $\textbf{331}$, Amer. Math. Soc., Providence, RI, (2003).
, [*Residues and duality*]{}, Lecture Notes in Mathematics, [**20**]{}, Springer-Verlag, Berlin-New York, 1966.
, [*Komplex Auflösungen und Dualität in der lokalen Algebra*]{}, Habilitationsschrift, Universität Regensburg, 1974.
, *Depth for complexes, and intersection theorems*, Math. Z., $\textbf{230}$(3), (1999), 545-567.
, *On Faltings’ annihilator theorem*, Proc. Amer. Math. Soc., $\textbf{136}$(4), (2008), 1205-1211.
, *Faltings’ theorem for the annihilation of local cohomology modules over a Gorenstein ring*, Proc. Amer. Math. Soc., $\textbf{132}$(8), (2004), 2215-2220.
, *A new version of local-global principle for annihilations of local cohomology modules*, Colloq. Math., $\textbf{100}$(2), (2004), 213-219.
, *Lectures on local cohomology and duality,* Local cohomology and its applications (Guanajuato, 1999), Lecture Notes in Pure and Appl. Math., $\textbf{226}$, Dekker, New York, (2002), 39-89.
, *Local-global principle for annihilation of local cohomology*, Contemp. Math., $\textbf{159}$, (1994), 329-331.
, *Local cohomology based on a nonclosed support defined by a pair of ideals*, J. Pure Appl. Algebra, $\textbf{213}$(4), (2009), 582-600.
, *Abstract local cohomology functors*, Math. J. Okayama Univ., $\textbf{53}$, (2011), 129-154.
---
author:
- 'D. Bothner'
- 'D. Wiedmaier'
- 'B. Ferdinand'
- 'R. Kleiner'
- 'D. Koelle'
title: 'Supplementary Material for: Improving superconducting resonators in magnetic fields by reduced field-focussing and engineered flux screening'
---
Spectra of resonators without normal-conducting ground-plane extensions {#spectra-of-resonators-without-normal-conducting-ground-plane-extensions .unnumbered}
=======================================================================
In the main text, we describe measurements on coplanar microwave resonators which have been fabricated by a combination of superconducting and normal-conducting films. These resonators consist of a superconducting center conductor with width $S$ and superconducting ground conductors on both sides of the center conductor, but the superconducting ground conductors have only a width of $G=50\,\mu$m. The rest of the ground-planes is made of a normal-conducting film, cf. Fig. 1 (c), (d) of the main text. The width of the gap between center conductor and ground planes is $W$. Here, we show and discuss the transmission spectra of resonators with $G=50\,\mu$m but without the normal-conducting parts. As a consequence of not having the normal-conducting parts, the ground conductors are only connected to the sample box at the transition from the microwave connectors to the chip. Figure \[fig:FigureSupp1\] shows in direct comparison the transmitted microwave power for three different resonator configurations. The black line with a constant background transmission of $\sim -88\,$dBm shows the spectrum of an inductively coupled resonator with $S=50\,\mu$m and superconducting ground-planes to the edge of the chip (IC-**N**-50, cf. main text). At the chip edge, the ground-planes are everywhere around the chip connected to the sample box walls by means of silver paste. The spectrum shows three very pronounced transmission peaks around 3.3, 6.6 and 9.9 GHz, which correspond to the first three modes of the $\lambda/2$ resonator. Except for some small spurious resonances far away from the resonator peaks, the transmission is completely suppressed.
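The harmonic spacing of these peaks can be checked in a few lines (an illustrative sketch only; the fundamental frequency is simply read off the measured spectrum, not derived from the resonator geometry):

```python
# Sketch: verify that the quoted transmission peaks follow the mode series
# f_n = n * f_1 of a lambda/2 resonator (f_1 read off the spectrum).
f1 = 3.3  # GHz, fundamental mode of the full-ground resonator (measured)

def mode_frequency(n, f_fundamental=f1):
    """Frequency of the n-th mode of an ideal lambda/2 resonator."""
    return n * f_fundamental

observed_peaks = [3.3, 6.6, 9.9]  # GHz, peaks quoted in the text
for n, f_obs in zip((1, 2, 3), observed_peaks):
    assert abs(mode_frequency(n) - f_obs) < 0.05
```

The check passing for all three peaks is what identifies them as the first three modes of the same $\lambda/2$ structure.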
A very different transmission is observed for the two resonators with narrow ground-planes. There is a large transmission over almost the whole frequency range; for certain frequency ranges the transmission is even orders of magnitude higher than the on-resonance transmission of the full-ground sample. This observation reveals that for small ground-planes the microwave finds a way around the waveguide resonator through the box. Most probably the microwave fields occupy the whole box volume, which is very unfavorable for any kind of experiment where mode selectivity or spatial field confinement is desired or necessary. From the large and broad transmission peaks between $5$ and $6\,$GHz and another increase around 10 GHz, we come to the conclusion that low-quality-factor box resonances are probably responsible for this high broadband transmission. These box resonances, however, are not just a background but are coupled to the coplanar microwave structure. They depend on its geometry and coupling type, which is revealed by the differences in frequency and width of the peak for the two different resonators. It is important to note that the geometrical difference between the two coupling types is very small compared to the chip or the box dimensions. What is important is the change of boundary conditions due to the different coupling types, and this change of boundary conditions is significant and has an influence on the transmission over the complete frequency range. The difference in transmission between the inductively coupled and the capacitively coupled resonator is systematic and qualitatively reproducible for identical boxes, mountings and resonator layouts, so it is not related to different qualities of the transition from the SMA connectors to the chip or other non-systematic variations between different chips.
For other resonator layouts, other cross-sectional parameters, and other box geometries, however, the background transmission also changes significantly (data not shown here). The presence of the box modes and the fact that they are coupled to the coplanar microwave structures has another consequence. Around the resonance frequencies of the full-ground coplanar microwave resonator (indicated with $n=1, 2, 3$ in the figure), resonance signatures also appear in the transmission spectra of the small-ground samples. But they are significantly distorted in magnitude and frequency. This distortion reveals that we do not see bare coplanar waveguide resonances anymore; the coplanar resonators are reactively and dissipatively loaded by the box walls. Thus, a fair comparison between these loaded modes and the bare coplanar waveguide modes with respect to the impact of magnetic fields is not possible. How the coplanar waveguide resonances are loaded and how the resonance frequencies and quality factors are shifted, however, depends on mode shape, box shape, coupling type and probably more, i.e., on many parameters of the complete system. In conclusion, we state that without a proper grounding of the waveguide ground conductors, the transmission becomes sensitive to many parameters of the system and the resonances cannot be attributed to the coplanar waveguide structure alone anymore. When using normal-conducting ground-plane extensions, however, the background transmission is suppressed again to the level of the resonator with full ground and the resonance frequencies are back at their designed positions, independent of coupling type, mode number etc. The implementation of our approaches without normal-conducting elements thus also requires careful consideration of sample holder design, chip mounting and resonator layout.
---
abstract: 'We study the physical mechanism of Maxwell’s Demon (MD) helping to do extra work in thermodynamic cycles, by describing the measurement of position, the insertion of a wall and the information erasing of the MD in a quantum mechanical fashion. The heat engine is exemplified with one molecule confined in an infinitely deep square potential inserted with a movable solid wall, while the MD is modeled as a two-level system (TLS) for measuring and controlling the motion of the molecule. It is discovered that an MD with quantum coherence, or at a lower temperature than that of the heat bath of the particle, would enhance the ability of the whole work substance formed by the system plus the MD to do work outside. This observation reveals that the role of the MD essentially is to drive the whole work substance off equilibrium, or equivalently to make it work with an effective temperature difference. The elaborate studies with this model explicitly reveal the finite-size effect away from the classical limit or thermodynamic limit, which contradicts the common sense about the Szilard heat engine (SHE). The quantum SHE’s efficiency is evaluated in detail to prove the validity of the second law of thermodynamics.'
author:
- 'H. Dong, D.Z. Xu and C.P. Sun'
title: 'Quantum Maxwell’s Demon in Thermodynamic Cycles'
---
Introduction
============
Maxwell’s demon (MD) has been a notorious being since its existence could violate the second law of thermodynamics (SLoT) [@MD_book; @Nori2009]: the MD distinguishes the velocities of the gas molecules, and then controls the motions of the molecules to create a difference of temperatures between the two domains. In 1929, Leo Szilard proposed the “one molecular heat engine” (which we call the Szilard heat engine (SHE)) [@Szilard1929] as an alternative version of a heat engine assisted by an MD. The MD first measures which domain the single molecule stays in and then gives a command to the system for extracting work according to the measurement results. In a thermodynamic cycle, the molecule seems to extract heat from a single heat bath at temperature $T$, and thus do work $k_{\mathrm{B}}T\ln2$ without causing any other changes. This consequence obviously violates the SLoT.
The first revival of the studies of MD is due to the recognition of the trade-off between information and entropy in the MD-controlled thermodynamic cycles. The milestone discovery is the “Landauer principle” [@Landauer1961], which reveals that erasing one bit of information from the memory in a computing process is inevitably accompanied by an entropy increase of the environment. In the SHE, the erasing requires work $k_{\mathrm{B}}T\ln2$ done by the external agent. This gives a conceptual solution of the MD paradox [@Bennett1982] by considering the MD as a part of the whole work substance; thus erasing the information stored in the demon’s memory is necessary to restart a new thermodynamic cycle. This observation about erasing the information of the MD finally saves the SLoT.
The recent revival of the studies of MD is due to the development of quantum information science. The corresponding quantum thermodynamics concerns the quantum heat engines (QHEs) [@Kieu2004; @htquan] utilizing a quantum coherent system as the work substance. A quantum work substance is a quantum system off the thermodynamic limit, which preserves its quantum coherence [@Scully2003; @htquan2006] to some extent, and obviously has tremendous influence on the properties of QHEs. Especially, when a quantum MD is included in the thermodynamic cycle [@Zurek1984; @lloyd1997; @Quan2006], the whole work substance formed by the system plus the MD would be off the thermodynamic limit and possess some quantum coherence. There are many attempts to generalize the SHE by quantum mechanically treating the measurement process [@Zurek1984], the motion control [@Quan2006], and the insertion and expansion processes [@Ueda2010]. However, to the best of our knowledge, a fully quantum approach for all actions in the SHE with a quantum MD intrinsically integrated is still lacking. The quantum-classical hybrid description of the SHE may result in some notorious observations about the MD-assisted thermodynamic process, which seem to challenge common sense in physics. Therefore, we need a fully quantum theory for the MD-assisted thermodynamics.
In this paper, we propose a quantum SHE assisted by an MD with a finite temperature different from that of the system. In this model, we give a consistent quantum approach to the measurement process without using the von Neumann projection [@Pzhang]. Then we calculate the work done by the insertion of the movable wall in the framework of quantum mechanics. The controlled gas expansion is treated with quantum conditional dynamics. Furthermore, we also consider the process of removing the wall to complete a thermodynamic cycle. With these necessary subtle considerations, the quantum approach for the MD-assisted thermodynamic cycle will go much beyond the conventional theories about the SHE. We show that the system off the thermodynamic limit exhibits uncommon observable quantum effects due to the finite size of the system, which results in the discrete energy levels that could be washed out by the heat fluctuation. Quantum coherence can assist the MD to extract more work by reducing its effective temperature, while thermal excitation of the MD at a finite temperature would reduce its abilities for quantum measurement and conditional control of the expansion. This means that only when cooled to absolute zero temperature could the MD help the molecule do the maximum work outside.
Our paper is organized as follows: In Sec. II we first give a brief review of the classical version of the SHE, and then present our quantum version with the MD included intrinsically. The role of the quantum coherence of the MD is emphasized with the definition of the effective temperature for an arbitrary two-level system. In Sec. III, the main part, we consider the quantum SHE with a quantum MD at finite temperature, performing a measurement of the position of the particle confined in a one-dimensional infinitely deep well. The whole cycle consists of four steps: insertion, measurement, expansion and removing. Detailed descriptions are given subsequently for the whole cycle of the SHE. We calculate the work done and the heat exchange in every sub-step. In Sec. IV, we discuss the quantum SHE’s operation efficiency in comparison with the Carnot heat engine. We recover the well-known results in the classical case by tuning the parameters in the quantum version, such as the width of the potential well. Conclusions and remarks are given in Sec. VI.
Quantum Maxwell’s Demon in Szilard Heat Engine\[sec:II\]
========================================================
In this section, we first revisit Szilard’s single molecular heat engine (SHE) in brief. As illustrated in Fig. \[fig:cycle\](a), the whole thermodynamic cycle consists of three steps: insertion (i-ii), measurement (ii-iii) and controlled expansion (iii-iv) by the MD. The demon inserts a piston isothermally in the center of the chamber. Then, it finds which domain the single molecule stays in and changes its own state to register the information of the system. Without losing generality, we assume the demon is initially in the state $0$. Finding the molecule on the right, namely $L/2<x<L$, the demon changes its memory to state $1$, while it does not change it if the molecule is on the left $\left(0<x<L/2\right)$. According to the information acquired in the measurement process, the demon controls the expansion of the domain with the single molecule: allowing the isothermal expansion with the piston moving from $L/2$ to $L$ if its memory registers $0$, and moving from $L/2$ to $0$ if the register is in the state $1$. In each thermodynamic cycle, the system does work $W=k_{\mathrm{B}}T\ln2$ on the outside agent in the isothermal expansion. Viewed overall, the system extracts heat from a single heat bath to do work, and thus it would violate the SLoT if the MD were not treated as a part of the work substance in the SHE. However, after the cycle, the MD stores one bit of information as its final state and is in a mixture of the $0$ and $1$ states with equal probability. Thus it does not return to its initial state. Landauer’s principle states that erasing such a bit of information at temperature $T$ requires the dissipation of at least $k_{\mathrm{B}}T\ln2$ of energy. The work extracted by the system just compensates the energy for erasing the information. Therefore, the SLoT is saved. In this sense, the classical version of the MD paradox is only a misunderstanding, due to ignoring some roles of the MD [@Bennett1982].
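The energy bookkeeping of this resolution can be summarized in a few lines (a minimal sketch in natural units, assuming the demon's memory is erased at the same temperature $T$ as the system, i.e., the classical setting):

```python
import math

k_B, T = 1.0, 1.0  # Boltzmann constant and common temperature (natural units)

# Isothermal expansion of the single molecule from volume L/2 to L
# extracts W = k_B * T * ln( L / (L/2) ) = k_B * T * ln 2.
work_extracted = k_B * T * math.log(2)

# Landauer's principle: erasing the demon's one-bit memory at the same
# temperature T dissipates at least k_B * T * ln 2.
erasure_cost = k_B * T * math.log(2)

# Net work per cycle vanishes, so the SLoT is not violated.
net_work = work_extracted - erasure_cost
assert abs(net_work) < 1e-15
```

The balance is only exact because both steps occur at the same temperature; a colder demon, as discussed next, breaks it.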
![Classical and quantum Szilard’s single molecular heat engine. **(a)** Classical version: (i-ii) A piston is inserted in the center of a chamber. (ii-iii) The demon finds which domain the single molecule stays in. (iii-iv) The demon controls the system to do work according to its memory; **(b)** Quantum version: The demon is modeled as a two-level system with two energy levels $\left|g\right\rangle $ and $\left|e\right\rangle $ and energy spacing $\Delta$. The chamber is quantum mechanically described as an infinite potential with width $L$. (I-II) An impenetrable wall is inserted at an arbitrary position in the potential. (II-III) The demon measures the state of the system and then records the result in its memory by flipping its own state or taking no action. The measurement may yield a wrong result, as illustrated in the green dashed rectangle. (III-IV) The demon controls the expansion for the single molecule according to the measurement. (IV-V) The wall is removed from the potential. []{data-label="fig:cycle"}](MDq1){width="7cm"}
In most of the previous investigations of the MD paradox, it is usually assumed that the system and the MD share the same heat bath. Thus the whole work substance formed by the system plus the MD is in equilibrium, and no quantum coherence exists. If the demon is in contact with a lower-temperature heat bath while the system’s environment possesses a higher temperature $T$, the work needed in the erasing process is smaller than $k_{B}T\ln2$ [@Quan2006]. Under this circumstance, we actually construct a quantum heat engine with a non-equilibrium working substance, or with an equilibrium working substance operating between two different heat baths. Furthermore, when the MD is initially prepared with quantum coherence, the quantum nature of the whole work substance results in many exotic functions for the QHE.
To tackle this problem, we study here a quantum version of Szilard’s model with an MD accompanying it. In this model, the chamber is modeled as an infinite square potential well with width $L$, as illustrated in Fig. \[fig:cycle\](b). The demon is realized by a single two-level atom with energy levels $\left\vert g\right\rangle $, $\left\vert e\right\rangle $ and level spacing $\Delta$. Initially, the system is in a thermal state with inverse temperature $\beta$, and the demon has been in contact with the low-temperature bath at the inverse temperature $\beta_{D}$. Namely, the demon is initially prepared in the equilibrium state
$$\rho_{D}=p_{g}\left\vert g\right\rangle \left\langle g\right\vert +p_{e}\left\vert e\right\rangle \left\langle e\right\vert ,$$
with probability $p_{e}=1-p_{g}$ of being in the excited state, where the ground-state probability is $$p_{g}=1/\left[1+\exp\left(-\beta_{D}\Delta\right)\right].$$
Actually, the inverse temperature $\beta_{D}$ could represent an effective inverse temperature of the MD with quantum coherence. For an environment being a mesoscopic system, the number of its degrees of freedom is not so large. Under this circumstance, the strong coupling to the MD leaves finite off-diagonal elements in the reduced density matrix[@Dong2007]. This remnant of coherence can be utilized to improve the apparent efficiency of the heat engine [@Scully2003; @htquan2006]. For the demon with coherence, the density matrix usually reads as $$\rho_{D}=\left[\begin{array}{cc}
p_{g} & F\\
F^{\ast} & p_{e}\end{array}\right],\label{eq:cohe}$$ where the off-diagonal element $F$ measures the quantum coherence. The eigen-values of the above reduced density matrix represent two effective population probabilities as$$\begin{aligned}
p_{+}\left(F\right) & \simeq & p_{e}-\coth\left(\frac{\Delta}{2}\beta_{D}\right)\left\vert F\right\vert ^{2},\notag\\
p_{-}\left(F\right) & \simeq & p_{g}+\coth\left(\frac{\Delta}{2}\beta_{D}\right)\left\vert F\right\vert ^{2}.\end{aligned}$$ We can define an effective inverse temperature $\beta_{\mathrm{eff}}=\Delta^{-1}\ln\left[p_{-}\left(F\right)/p_{+}\left(F\right)\right]$ for the two-level MD, namely, $$\beta_{\mathrm{eff}}=\beta_{D}+\frac{4\left\vert F\right\vert ^{2}}{\Delta}\cosh^{2}\left(\frac{\Delta}{2}\beta_{D}\right)\coth\left(\frac{\Delta}{2}\beta_{D}\right).$$ The effective temperature $T_{\mathrm{eff}}=1/\beta_{\mathrm{eff}}$ here is lower than the bath temperature $T_{D}$. As shown in the following, it is the lowering of the effective temperature of the MD that results in an increase of the heat engine efficiency.
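The exact effective inverse temperature can be obtained by diagonalizing $\rho_{D}$ directly. The sketch below (with illustrative parameters $\Delta=1$, $\beta_{D}=1$, $F=0.1$) defines it through the eigen-population ratio $p_{-}/p_{+}=e^{\beta_{\mathrm{eff}}\Delta}$ and checks that the coherence indeed raises $\beta_{\mathrm{eff}}$ above $\beta_{D}$, and that the second-order expansion in $|F|$ quoted above is accurate for small coherence:

```python
import math
import numpy as np

Delta, beta_D, F = 1.0, 1.0, 0.1  # illustrative parameters

p_g = 1.0 / (1.0 + math.exp(-beta_D * Delta))
p_e = 1.0 - p_g

# Demon density matrix with coherence F and its eigen-populations
# (eigvalsh returns eigenvalues in ascending order, so p_plus < p_minus).
rho_D = np.array([[p_g, F], [F, p_e]])
p_plus, p_minus = np.linalg.eigvalsh(rho_D)

# Exact effective inverse temperature from the eigen-population ratio.
beta_eff = math.log(p_minus / p_plus) / Delta

# Second-order expansion in |F| quoted in the text.
coth = 1.0 / math.tanh(beta_D * Delta / 2)
beta_approx = beta_D + (4 * F**2 / Delta) \
    * math.cosh(beta_D * Delta / 2) ** 2 * coth

assert beta_eff > beta_D                   # coherence lowers T_eff
assert abs(beta_eff - beta_approx) < 0.01  # expansion accurate for small F
```

For $F=0$ the code reproduces $\beta_{\mathrm{eff}}=\beta_{D}$, as expected.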
As for the modeling of the chamber as an infinite square potential well, the eigenfunctions of the confined single molecule are $$\left\langle x\right.\left\vert \psi_{n}\left(L\right)\right\rangle =\sqrt{\frac{2}{L}}\sin\left[n\pi x/L\right],$$ with the corresponding eigen-energies $E_{n}\left(L\right)=\left(\hbar n\pi\right)^{2}/\left(2mL^{2}\right)$, where the quantum number $n$ ranges from $1$ to $\infty$.
On this basis, the initial state of the total system is expressed as a product state $$\rho_{0}=\frac{1}{Z\left(L\right)}\sum_{n}e^{-\beta E_{n}\left(L\right)}\left\vert \psi_{n}\left(L\right)\right\rangle \left\langle \psi_{n}\left(L\right)\right\vert \otimes\rho_{0}^{D},$$ where $$Z\left(L\right)=\sum_{n}\exp\left[-\beta E_{n}\left(L\right)\right]$$ is the partition function of the system.
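These definitions are easy to check numerically. The toy calculation below is only an illustrative sketch: the parameter choice $\hbar=1$, $m=\pi^{2}/2$, $L=1$ (for which $E_{n}(L)=n^{2}$), the example value of $\beta$, and the truncation of the infinite sum are all assumptions made for the demonstration:

```python
import math

hbar, m, L = 1.0, math.pi**2 / 2, 1.0  # units in which E_n(L=1) = n^2

def E(n, width):
    """Eigen-energy E_n(width) of the infinite square well."""
    return (hbar * n * math.pi) ** 2 / (2 * m * width ** 2)

def Z(width, beta, n_max=2000):
    """Truncated partition function Z(width) = sum_n exp(-beta * E_n)."""
    return sum(math.exp(-beta * E(n, width)) for n in range(1, n_max + 1))

assert abs(E(1, L) - 1.0) < 1e-12 and abs(E(2, L) - 4.0) < 1e-12

# At low temperature only the ground state contributes appreciably.
beta = 5.0
assert abs(Z(L, beta) - math.exp(-beta)) < 1e-8
```

The rapid convergence of the truncated sum reflects the discreteness of the spectrum at finite $L$.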
Here, we remark that the discrete spectrum of the system results from the finite size of the width $L.$ As $L\rightarrow\infty$, the spectrum becomes continuous, as the energy level spacing is proportional to $1/L^{2}$. Then the thermal excitation characterized by $k_{B}T$ can wash out the quantum effects so that the system approaches the classical limit. Some quantum phenomena based on the finite-size effect could also disappear as $T\rightarrow\infty.$
With the above modeling, the MD-assisted thermodynamic cycle for the quantum SHE is divided into four steps illustrated in Fig. \[fig:cycle\](b): (I-II) the insertion of a mobile solid wall into the potential well at a position $x=l$ (the origin is $x=0$); (II-III) the measurement done by the MD to create quantum entanglement of its two internal states with the spatial wave functions of the confined molecule; (III-IV) quantum control of the mobile wall, moving it according to the record in the demon’s memory; (IV-V) removing the wall so that the next thermodynamic cycle can be restarted. These steps will be described subsequently in the next section, and detailed calculations can be found in the Appendix.
Quantum Thermodynamic Cycles with Measurement
=============================================
In this section we analyze in detail the thermodynamic cycle of the molecule confined in an infinite square potential well. The molecule’s position is monitored and then controlled by the MD. The MD may have quantum coherence as in Eq. \[eq:cohe\] or, equivalently, possess a lower temperature $T_{\mathrm{D}}=1/\beta_{\mathrm{D}}$ than the temperature $T=1/\beta$ of the confined molecule’s heat bath. In each step, we will evaluate the work done by the outside agent and the heat exchange in detail. In order to concentrate on the physical properties, we put the calculations in the Appendix.
![(Color Online) Probability $P_{L}$ and the corresponding classical one $P_{L}^{C}$ vs temperature $1/\beta$ for the piston positions $l=1/3$ and $l=1/4$. Without loss of generality, we set the parameters as $L=1$, $m=\pi^{2}/2$ and $\hbar=1$.[]{data-label="fig:pl"}](Pl){width="5cm"}
Step 1: Quantum Insertion (I-II) {#step1-quantum-insertion-i-ii .unnumbered}
-------------------------------
In the first process, the system is in contact with the heat bath at inverse temperature $\beta$, and a piston is inserted isothermally into the potential well at position $l$. The potential is thereby divided into two domains, denoted simply as $L$ and $R$, with lengths $l$ and $L-l$ respectively. The eigenstates change into the following two sets:
$$\begin{aligned}
\left\langle x\right.\left|\psi_{n}^{R}(L-l)\right\rangle & = & \begin{cases}
\sqrt{\frac{2}{L-l}}\sin\left[\frac{n\pi\left(x-l\right)}{L-l}\right] & l\leq x\leq L\\
0 & 0\leq x\leq l\end{cases},\end{aligned}$$
and
$$\left\langle x\right.\left\vert \psi_{n}^{L}(l)\right\rangle =\begin{cases}
0 & l\leq x\leq L\\
\sqrt{\frac{2}{l}}\sin(n\pi x/l) & 0\leq x\leq l\end{cases},$$ with the corresponding eigenvalues $E_{n}\left(L-l\right)$ and $E_{n}\left(l\right)$. In the following discussions we use the free Hamiltonian $H_{T}=H+H_{D}$, where $$\begin{aligned}
H & = & \sum_{n}[E_{n}\left(l\right)\left\vert \psi_{n}\left(l\right)\right\rangle \left\langle \psi_{n}\left(l\right)\right\vert \\
& & +E_{n}(L-l)\left\vert \psi_{n}\left(L-l\right)\right\rangle \left\langle \psi_{n}\left(L-l\right)\right\vert ]\end{aligned}$$ for $0\leq l\leq L$, and $H_{D}=\Delta\left\vert e\right\rangle \left\langle e\right\vert .$ Here, we take the ground state energy of the atom as the zero point of its energy.
At the end of the insertion process, the system is still in the thermal state at inverse temperature $\beta$, and the MD remains in its own state without any change. With respect to the above split bases, the state of the whole system is rewritten as
$$\rho_{\mathrm{ins}}=[P_{L}\left(l\right)\rho^{L}\left(l\right)+P_{R}\left(l\right)\rho^{R}\left(L-l\right)]\otimes\rho_{0}^{D},$$
where $$\rho^{L}\left(l\right)=\sum_{n}\frac{e^{-\beta E_{n}\left(l\right)}}{Z\left(l\right)}\left\vert \psi_{n}^{L}(l)\right\rangle \left\langle \psi_{n}^{L}(l)\right\vert ,$$ and $$\rho^{R}\left(L-l\right)=\sum_{n}\frac{e^{-\beta E_{n}\left(L-l\right)}}{Z\left(L-l\right)}\left\vert \psi_{n}^{R}(L-l)\right\rangle \left\langle \psi_{n}^{R}(L-l)\right\vert ,$$ refer to the system localized in the left and right domain respectively. With respect to their sum $\mathcal{Z}\left(l\right)=Z\left(l\right)+Z\left(L-l\right),$ the temperature-dependent ratios $$P_{L}\left(l\right)=Z\left(l\right)/\mathcal{Z}\left(l\right)$$ and $$P_{R}\left(l\right)=Z\left(L-l\right)/\mathcal{Z}\left(l\right)$$ are the probabilities of finding the single molecule on the left and the right side respectively. For simplicity, we denote $P_{L}\left(l\right)$ and $P_{R}\left(l\right)$ by $P_{L}$ and $P_{R}$ in the following discussions. We emphasize that these probabilities differ from the classical probabilities $P_{L}^{c}=l/L$ and $P_{R}^{c}=\left(L-l\right)/L$ of finding the single molecule on the left and right side, which are proportional to the volume. We numerically illustrate the discrepancy between the classical result and ours in Fig. \[fig:pl\] for the insertion positions $l=1/3$ and $l=1/4$. It is clearly seen in Fig. \[fig:pl\] that the probability $P_{L}$ approaches the corresponding classical one $P_{L}^{c}$ as the temperature increases towards the high temperature limit. However, a large discrepancy is observed at low temperature. This deviation from the classical result is due to the discreteness of the energy levels of the potential well with finite width, which disappears as the level spacing becomes small with $L\rightarrow\infty$. In that case, the thermal excitation erases all the quantum features of the system and the classical limit is approached.
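The gap between $P_{L}$ and the classical ratio $l/L$ can be reproduced with a few lines of numerics; the sketch below (an illustration, using the same parameters as Fig. \[fig:pl\]) evaluates both regimes:

```python
import math

def Z(L, beta, nmax=2000):
    # Partition sum for a box of width L with E_n = n^2 / L^2 (m = pi^2/2, hbar = 1).
    return sum(math.exp(-beta * (n / L)**2) for n in range(1, nmax + 1))

def P_left(l, L, beta):
    # Quantum probability of finding the molecule left of an insertion at x = l.
    return Z(l, beta) / (Z(l, beta) + Z(L - l, beta))

L, l = 1.0, 1.0 / 3.0
low_T = P_left(l, L, beta=1.0)      # deep quantum regime
high_T = P_left(l, L, beta=1e-4)    # approaches the classical ratio l / L
print(low_T, high_T)
```

At $\beta=1$ the molecule is almost never found in the narrower left domain, while at high temperature the classical volume ratio $l/L$ is recovered.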
In this step, work must be done on the system. For the isothermal process, the work done by the outside agent can be expressed as $W_{\mathrm{ins}}=\Delta U_{\mathrm{ins}}-T\Delta S_{\mathrm{ins}}$, with the internal energy change $$\Delta U_{\mathrm{ins}}=\mathrm{Tr}\left[\left(\rho_{\mathrm{ins}}-\rho_{0}\right)H_{T}\right]$$ and the total entropy change $$\Delta S_{\mathrm{ins}}=\mathrm{Tr}\left(-\rho_{\mathrm{ins}}\ln\rho_{\mathrm{ins}}+\rho_{0}\ln\rho_{0}\right).$$ During this isothermal process, the work done from outside just compensates the change of the free energy, $$W_{\mathrm{ins}}=T\left[\ln Z\left(L\right)-\ln\mathcal{Z}\left(l\right)\right].$$ The same result has been obtained in Ref. [@Ueda2010]. Taking the inverse temperature $\beta=1$ and $L=1$, we illustrate in Fig. \[fig:wins\] the work needed to insert the piston into the potential well. Inserting the piston at the center of the potential requires the maximum work. Another reasonable observation is that no work is needed to insert the piston at the positions $l=0$ and $l=L$. Classically, it is well known that no work needs to be paid for inserting the piston at any position, while here, for a fixed $L$, $W_{\mathrm{ins}}$ does not vanish even as $T\rightarrow\infty$. The discreteness of the spectrum due to the finite width of the potential well thus leads to a typical quantum effect even at high temperature, namely, $\lim_{T\rightarrow\infty}W_{\mathrm{ins}}\neq0$ and $\lim_{T\rightarrow\infty}Q_{\mathrm{ins}}\neq0$. This finite-size-induced quantum effect is typical for mesoscopic systems. To restore the classical results, we simply take the limit $L\rightarrow\infty$ to make the spectrum continuous, rather than $T\rightarrow\infty$. In the limit $L\rightarrow\infty$ we have $\mathcal{Z}\left(l\right)/Z\left(L\right)\rightarrow1$, which recovers the classical result that $$\lim_{L\rightarrow\infty}W_{\mathrm{ins}}=0,\label{eq:asyw}$$ as illustrated in Fig. \[fig:wins\](b) for the insertion positions $l=0.1L$, $0.3L$ and $0.5L$.
![(Color Online) Work done by the outside agent. (a) $W_{\mathrm{ins}}$ vs $l$ for different system inverse temperatures $\beta=1$, $0.5$ and $0.1$. Here, we choose the same parameters as in Fig. \[fig:pl\]. (b) $W_{\mathrm{ins}}$ vs $L$ for different insertion positions $l=0.1L$, $0.3L$ and $0.5L$.[]{data-label="fig:wins"}](Wins){width="8cm"}
After the insertion of the piston, the entropy of the system changes. The system exchanges heat with the heat bath during this isothermal reversible process. The heat is obtained from $Q_{\mathrm{ins}}=-T\Delta S_{\mathrm{ins}}$ as $$Q_{\mathrm{ins}}=\left(T-\frac{\partial}{\partial\beta}\right)\left[\ln Z\left(L\right)-\ln\mathcal{Z}\left(l\right)\right].$$ Similar to the asymptotic behavior of the work in Eq. (\[eq:asyw\]), $Q_{\mathrm{ins}}$ approaches zero as $L\rightarrow\infty$.
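The closed forms for $W_{\mathrm{ins}}$ and $Q_{\mathrm{ins}}$ can be checked numerically; in the sketch below (an illustration under the parametrization $E_{n}(L)=n^{2}/L^{2}$ of the figures), the $\beta$-derivative in $Q_{\mathrm{ins}}$ is approximated by a central finite difference:

```python
import math

def Z(L, beta, nmax=2000):
    return sum(math.exp(-beta * (n / L)**2) for n in range(1, nmax + 1))

def lnZ_ratio(l, L, beta):
    # ln Z(L) - ln[Z(l) + Z(L - l)]
    return math.log(Z(L, beta)) - math.log(Z(l, beta) + Z(L - l, beta))

def W_ins(l, L, beta):
    return (1.0 / beta) * lnZ_ratio(l, L, beta)

def Q_ins(l, L, beta, h=1e-5):
    # Q_ins = (T - d/dbeta)[ln Z(L) - ln Zcal(l)]; derivative by central difference.
    d = (lnZ_ratio(l, L, beta + h) - lnZ_ratio(l, L, beta - h)) / (2.0 * h)
    return (1.0 / beta) * lnZ_ratio(l, L, beta) - d

L, beta = 1.0, 1.0
print(W_ins(0.5, L, beta), W_ins(0.3, L, beta), Q_ins(0.5, L, beta))
```

With $\beta=1$ and $L=1$ the insertion work is maximal at the center ($W_{\mathrm{ins}}\approx2.356$ at $l=0.5$), consistent with Fig. \[fig:wins\](a).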
Step 2: Quantum Measurement (II-III) {#step2-quantum-measurement-ii-iii .unnumbered}
-----------------------------------
In the second step, the system is isolated from the heat bath. The MD finds out which domain the single molecule stays in and registers the result in its own memory. In the classical picture, the memory can also be imagined as a chamber with a single molecule, whose states on the right and left side are denoted as the states $0$ and $1$. The memory is then always architected as two bistable states with no energy difference, $\Delta=0$, and no energy is needed in the measurement process. This “chamber”-based setup seems to exclude the possibility of quantum coherence in a straightforward way. Therefore, we adopt the TLS as the memory to allow quantum coherence to play a role, as discussed in Sec. \[sec:II\]. In the present scheme, the demon performs a controlled-NOT operation [@Quan2006]: if the molecule is on the left side, no operation is performed, and the demon flips its state when it finds the molecule on the right. This operation is realized by the following unitary operator,
$$\begin{aligned}
U & = & \sum_{n}\left\vert \psi_{n}^{L}\left(l\right)\right\rangle \left\langle \psi_{n}^{L}\left(l\right)\right\vert \otimes\left(\left\vert g\right\rangle \left\langle g\right\vert +\left\vert e\right\rangle \left\langle e\right\vert \right)\notag\\
& & +\left\vert \psi_{n}^{R}\left(L-l\right)\right\rangle \left\langle \psi_{n}^{R}\left(L-l\right)\right\vert \otimes\left(\left\vert e\right\rangle \left\langle g\right\vert +\mathrm{h.c}\right).\end{aligned}$$
After the measurement, the MD and the system are correlated. This correlation enables the MD to control the system so as to perform work on the outside agent. The density matrix of the whole system after measurement is $$\begin{aligned}
\rho_{\mathrm{mea}} & = & \left[P_{L}p_{g}\rho^{L}\left(l\right)+P_{R}p_{e}\rho^{R}\left(L-l\right)\right]\otimes\left\vert g\right\rangle \left\langle g\right\vert \notag\\
& & +\left[P_{L}p_{e}\rho^{L}\left(l\right)+P_{R}p_{g}\rho^{R}\left(L-l\right)\right]\otimes\left\vert e\right\rangle \left\langle e\right\vert .\end{aligned}$$ If the temperature of the demon is zero, namely $T_{D}=0$, the measurement results in a perfect correlation between the system and the MD, $$\rho_{\mathrm{mea}}=P_{L}\rho^{L}\left(l\right)\otimes\left\vert g\right\rangle \left\langle g\right\vert +P_{R}\rho^{R}\left(L-l\right)\otimes\left\vert e\right\rangle \left\langle e\right\vert .$$ Then the demon can distinguish exactly the domain where the single molecule stays, e.g., the state $\left\vert g\right\rangle $ representing the molecule on the left side and vice versa. At a finite temperature, this correlation becomes ambiguous. As illustrated in the dashed green box in Fig. \[fig:cycle\](b), the demon may actually register wrong information about the domain where the single molecule stays. For example, the demon believes the molecule is on the left, with the memory registering $\left\vert g\right\rangle $, while the molecule is actually on the right. The MD thus loses a certain amount of information about the system, which lowers its ability to extract work. For the case $\Delta\neq0$ at finite temperature, this imperfect correlation leads to a condition on the MD’s temperature under which the total system can extract positive work.
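Since the controlled-NOT maps (left, $g$) to $g$ and (right, $g$) to $e$, the record is correct exactly when the demon starts in $\left\vert g\right\rangle$, i.e., with probability $p_{g}$. A small sketch (assuming $H_{D}=\Delta\left\vert e\right\rangle\left\langle e\right\vert$ as above) makes the two limiting cases explicit:

```python
import math

def demon_populations(delta, beta_D):
    # Thermal populations of the TLS demon (ground energy 0, excited energy delta).
    ze = math.exp(-beta_D * delta)
    return 1.0 / (1.0 + ze), ze / (1.0 + ze)   # (p_g, p_e)

# After the controlled-NOT, the record matches the molecule's side exactly
# when the demon started in |g>, so P(correct) = p_g.
delta = 0.5
p_cold, _ = demon_populations(delta, beta_D=50.0)  # near-zero demon temperature
p_deg, _ = demon_populations(0.0, beta_D=50.0)     # degenerate demon, Delta = 0
print(p_cold, p_deg)
```

A cold, non-degenerate demon records almost perfectly, while a degenerate demon's record is no better than a coin flip.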
The worst case occurs when we first let the MD become degenerate, i.e., $\Delta=0,$ and then let the temperature approach zero. In this case the demon is prepared in the mixed state $$\rho_{0}^{D}\left(\Delta=0\right)=\frac{1}{2}\left(\left\vert g\right\rangle \left\langle g\right\vert +\left\vert e\right\rangle \left\langle e\right\vert \right)$$ and the state of the whole system after the measurement reads $$\rho_{\mathrm{mea}}=\left[P_{L}\rho^{L}\left(l\right)+P_{R}\rho^{R}\left(L-l\right)\right]\otimes\rho_{0}^{D}\left(\Delta=0\right).$$ Thus, no information is obtained by the MD. There exists another limiting process in which the non-degenerate MD is first prepared in the zero-temperature environment and then $\Delta$ is let approach zero. In this case, the state of the MD breaks into the component $\left\vert g\right\rangle \left\langle g\right\vert $ of $\rho_{0}^{D}\left(\Delta=0\right)$, and we obtain a cleverer MD, as mentioned above. The physical essence of the difference between the two limiting processes lies in symmetry breaking [@jqliao2009] (we will discuss this again later). With such symmetry breaking, even the degenerate MD can make an ideal measurement. An intuitive understanding of why the zero-temperature MD helps to do work is that a calmer MD can see the states of the molecule more clearly and thus control it more effectively.
Next we calculate the work done in the measurement process, assuming the total system to be isolated from the heat bath of the molecule. The heat exchange here is exactly zero, namely $Q_{\mathrm{mea}}=0$, since the operation is unitary and the total entropy is not changed during this process. However, the total internal energy changes, which results solely from the work $$W_{\mathrm{mea}}=P_{R}\left(p_{g}-p_{e}\right)\Delta$$ done by the outside agent to register the MD’s memory. The work needed is actually a monotonic function of the demon’s bath temperature $T_{D}$. If the temperature of the demon is zero (the MD is prepared in a pure state), namely $T_{D}=0$, the work reaches its maximum $W_{\mathrm{mea}}^{\mathrm{max}}=P_{R}\Delta$. The demon can then distinguish exactly the domain where the single molecule stays, with the state $\left\vert g\right\rangle $ representing the molecule on the left, and vice versa. As discussed below, the work done by the outside agent here is the main factor lowering the efficiency of the heat engine. However, the low temperature results in a more perfect quantum correlation between the MD and the system and thus enables the MD to extract more work. The work required in the measurement and the ability to control the free expansion are two competing factors of the QHE. Finally, we prove in the following section that a low temperature of the demon results in a high efficiency of the quantum heat engine. It is clear that less work is needed if the insertion position is closer to the right boundary of the potential, and the work needed in the measurement process approaches zero, namely $W_{\mathrm{mea}}\rightarrow0$, when $l\rightarrow L$. Thus, the efficiency is promoted to reach the corresponding Carnot efficiency when $l=L$ for this measurement.
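Using $p_{g}-p_{e}=\tanh(\beta_{D}\Delta/2)$ for the thermal TLS, the measurement cost $W_{\mathrm{mea}}=P_{R}(p_{g}-p_{e})\Delta$ can be evaluated directly; the sketch below (illustrative parameters) confirms the maximum $P_{R}\Delta$ near $T_{D}=0$ and the vanishing cost as $l\rightarrow L$:

```python
import math

def Z(L, beta, nmax=2000):
    return sum(math.exp(-beta * (n / L)**2) for n in range(1, nmax + 1))

def W_mea(l, L, beta, delta, beta_D):
    # W_mea = P_R (p_g - p_e) Delta, with p_g - p_e = tanh(beta_D delta / 2).
    Zl, Zr = Z(l, beta), Z(L - l, beta)
    PR = Zr / (Zl + Zr)
    return PR * math.tanh(beta_D * delta / 2.0) * delta

L, beta, delta = 1.0, 1.0, 0.5
w_max = W_mea(0.5, L, beta, delta, beta_D=50.0)     # near T_D = 0: P_R * Delta
w_edge = W_mea(0.999, L, beta, delta, beta_D=50.0)  # l -> L: the cost vanishes
print(w_max, w_edge)
```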
Step 3: Controlled Expansion (III-IV) {#step3-controlled-expansion-iii-iv .unnumbered}
------------------------------------
In the third step, the system is brought into contact with the heat bath at inverse temperature $\beta$. The expansion is then performed slowly enough for the process to be reversible and isothermal. The expansion is controlled by the demon according to its memory. If the demon finds itself in state $\left\vert g\right\rangle $, the outside agent allows the piston to move right, so that the single molecule performs work on the outside. However, the agent must pay some work to move the piston to the right if the MD’s memory is inaccurate, e.g., in the situation in the green dashed box in Fig. \[fig:cycle\](b). If the demon is in state $\left\vert e\right\rangle $, the piston is allowed to move to the left side. With this description, we avoid the conventional heuristic discussion of adding an object, as in the classical version of the SHE. Here, we choose two arbitrary final positions of the controlled expansion, $l_{g}$ and $l_{e}$, for the corresponding MD states $\left\vert g\right\rangle $ and $\left\vert e\right\rangle $. We will prove later that the total work extracted is independent of the expansion positions chosen here. After the expansion process, the density matrix of the whole system is expressed as $$\begin{aligned}
\rho_{\mathrm{exp}} & = & \left[P_{L}p_{g}\rho^{L}\left(l_{g}\right)+P_{R}p_{e}\rho^{R}\left(L-l_{g}\right)\right]\otimes\left\vert g\right\rangle \left\langle g\right\vert \notag\\
& & +\left[P_{L}p_{e}\rho^{L}\left(l_{e}\right)+P_{R}p_{g}\rho^{R}\left(L-l_{e}\right)\right]\otimes\left\vert e\right\rangle \left\langle e\right\vert .\end{aligned}$$ During the expansion, the system performs work $-W_{\mathrm{exp}}\geq0$ on the outside agent, $$\begin{aligned}
W_{\mathrm{exp}} & = & T\left[\ln\mathcal{Z}\left(l\right)+P_{L}\ln P_{L}+P_{R}\ln P_{R}\right.\notag\\
& & -P_{L}p_{g}\ln Z\left(l_{g}\right)-P_{R}p_{e}\ln Z\left(L-l_{g}\right)\notag\\
& & \left.-P_{L}p_{e}\ln Z\left(l_{e}\right)-P_{R}p_{g}\ln Z\left(L-l_{e}\right)\right].\end{aligned}$$ For a perfect correlation ($p_{g}=1$), the piston is moved to the end of the potential, namely $l_{g}=L$ and $l_{e}=0$, and the work is simply $$W_{\mathrm{exp}}=T\left(P_{L}\ln P_{L}+P_{R}\ln P_{R}\right)-W_{\mathrm{ins}},$$ which is the maximum work that can be extracted in this process. In the classical limit $L\rightarrow\infty$, the work is $$W_{\mathrm{exp}}=T\left(P_{L}\ln P_{L}+P_{R}\ln P_{R}\right).$$ We recover the well-known result $W_{\mathrm{exp}}=-k_{\mathrm{B}}T\ln2$ when the piston is inserted at the center of the potential. If the demon is not perfectly correlated with the position of the single molecule ($p_{g}<1$), the extracted work $-W_{\mathrm{exp}}$ is smaller. Therefore, it is clear that the MD’s ability to extract work depends closely on the accuracy of the measurement.
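The classical limit of the expansion work can be checked numerically: widening the box at fixed $\beta$ with the wall kept at the center makes $W_{\mathrm{ins}}$ negligible, and the extracted work $-W_{\mathrm{exp}}$ approaches $k_{B}T\ln2$ (a sketch assuming perfect correlation, $p_{g}=1$, $l_{g}=L$, $l_{e}=0$):

```python
import math

def Z(L, beta, nmax=2000):
    return sum(math.exp(-beta * (n / L)**2) for n in range(1, nmax + 1))

def W_exp_perfect(l, L, beta):
    # Perfect correlation (p_g = 1, l_g = L, l_e = 0):
    # W_exp = T (P_L ln P_L + P_R ln P_R) - W_ins
    T = 1.0 / beta
    Zl, Zr, ZL = Z(l, beta), Z(L - l, beta), Z(L, beta)
    PL, PR = Zl / (Zl + Zr), Zr / (Zl + Zr)
    W_ins = T * (math.log(ZL) - math.log(Zl + Zr))
    return T * (PL * math.log(PL) + PR * math.log(PR)) - W_ins

# Classical limit: widen the box with the wall kept at the center, so that
# W_ins becomes negligible and the extracted work tends to k_B T ln 2.
beta = 1.0
extracted = -W_exp_perfect(50.0, 100.0, beta)
print(extracted, math.log(2))
```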
In this step, the heat exchange is related to the change of entropy as $$\begin{aligned}
\!\!\!\! Q_{\mathrm{exp}}= & P_{L}\left(T-\frac{\partial}{\partial\beta}\right)\left[\ln Z\left(l\right)-p_{g}\ln Z\left(l_{g}\right)-p_{e}\ln Z\left(L-l_{e}\right)\right]\notag\\
& \!\!\!\!+P_{R}\left(T-\frac{\partial}{\partial\beta}\right)\left[\ln Z\left(L-l\right)-p_{e}\ln Z\left(L-l_{g}\right)-p_{g}\ln Z\left(l_{e}\right)\right].\end{aligned}$$
Step 4: Removing (IV-V) {#step4-removingiv-v .unnumbered}
---------------------
To complete the thermodynamic cycle, the system and the MD should be reset to their respective initial states. As for the system, the piston inserted in the first step has to be removed. In previous studies, this process was neglected, since the measurements were always ideal and the piston was moved to an end boundary of the chamber, so that no work was required to remove the piston. For an arbitrary process, however, we can show the importance of removing the piston in the whole cycle. During this process, the system keeps contact with the heat bath at inverse temperature $\beta$ and the removal is performed isothermally. The density matrix of the total system after removing the piston reads $$\begin{aligned}
\!\!\!\!\!\!\!\!\rho_{\mathrm{rev}} & = & \sum_{n}\frac{e^{-\beta E_{n}\left(L\right)}}{Z(L)}\left\vert \psi_{n}\left(L\right)\right\rangle \left\langle \psi_{n}\left(L\right)\right\vert \otimes\notag\\
& & \!\!\!\!\!\left[\left(P_{L}p_{g}\!+\! P_{R}p_{e}\right)\left\vert g\right\rangle \!\!\left\langle g\right\vert \!+\!\left(P_{L}p_{e}\!+\! P_{R}p_{g}\right)\left\vert e\right\rangle \!\!\left\langle e\right\vert \right].\end{aligned}$$ In this process, the work done and the heat absorbed by the outside are $$\begin{aligned}
\!\!\!\!\!\!\! W_{\mathrm{rev}} & = & \mathrm{Tr}\left[\left(\rho_{\mathrm{rev}}-\rho_{\mathrm{exp}}\right)\left(H+H_{D}\right)\right]\notag\\
& & -T\mathrm{Tr}\left[-\rho_{\mathrm{rev}}\ln\rho_{\mathrm{rev}}\right]+T\mathrm{Tr}\left[-\rho_{\mathrm{exp}}\ln\rho_{\mathrm{exp}}\right],\end{aligned}$$ and $$Q_{\mathrm{rev}}=-T\mathrm{Tr}\left[-\rho_{\mathrm{rev}}\ln\rho_{\mathrm{rev}}\right]+T\mathrm{Tr}\left[-\rho_{\mathrm{exp}}\ln\rho_{\mathrm{exp}}\right],$$ respectively. We refer to the Appendix for the exact expressions of these two formulas. The MD is now no longer entangled with the system, and its density matrix factorizes out as $$\rho_{\mathrm{rev}}^{D}=\left(P_{L}p_{g}\!+\! P_{R}p_{e}\right)\left\vert g\right\rangle \!\!\left\langle g\right\vert \!+\!\left(P_{L}p_{e}\!+\! P_{R}p_{g}\right)\left\vert e\right\rangle \!\!\left\langle e\right\vert .$$ In the ideal case $T_{D}=0$, the demon is in the state $$\rho_{\mathrm{rev}}^{D}=P_{L}\left\vert g\right\rangle \left\langle g\right\vert +P_{R}\left\vert e\right\rangle \left\langle e\right\vert$$ with entropy $$S_{\mathrm{rev}}^{D}=-P_{L}\ln P_{L}-P_{R}\ln P_{R}.$$ According to Landauer’s principle, erasing the memory of the MD dissipates at least $T_{D}S_{\mathrm{rev}}^{D}=0$ work into the environment. In this sense, we can extract $k_{B}T\ln2$ work with the MD’s help. However, we do not violate the SLoT, since the whole system functions as a heat engine working between a high temperature bath and a zero temperature bath. Actually, the increase of entropy in the zero temperature bath is exactly $S_{\mathrm{rev}}^{D}$. Therefore, the energy dissipated actually depends on the temperature of the environment where the information is erased. In previous studies, the same temperature was always set for the system and the MD; thus the exact mechanism of the MD was, to a certain extent, not clear, especially for the SHE. Let us consider another special case, $\Delta=0$, which directly results in $p_{e}=p_{g}=1/2$. The MD is prepared in its maximum entropy state $\rho_{0}^{D}\left(\Delta=0\right)$. At the end of the cycle, the MD is actually in the same state, namely $\rho_{\mathrm{rev}}^{D}=\rho_{0}^{D}\left(\Delta=0\right)$. Thus, no work has to be paid to erase the memory.
After this procedure, the MD is decoupled from the system and brought into contact with its own thermal bath at inverse temperature $\beta_{D}$. Since $$P_{L}p_{e}+P_{R}p_{g}\geq p_{e},$$ the MD releases energy into its heat bath. We will not discuss this thermalization process here in detail. The MD and the system are thus reset to their initial states in $\rho_{0}$, which allows a new cycle to start.
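The demon's marginal state after the cycle, and hence the Landauer cost of erasure, can be sketched as follows (illustrative only; for the ideal $T_{D}=0$ demon, $S_{\mathrm{rev}}^{D}$ is the binary entropy of $P_{L}$):

```python
import math

def Z(L, beta, nmax=2000):
    return sum(math.exp(-beta * (n / L)**2) for n in range(1, nmax + 1))

def demon_entropy_after_cycle(l, L, beta, p_g):
    # Entropy of the demon marginal
    # (P_L p_g + P_R p_e)|g><g| + (P_L p_e + P_R p_g)|e><e|.
    Zl, Zr = Z(l, beta), Z(L - l, beta)
    PL = Zl / (Zl + Zr)
    q = PL * p_g + (1.0 - PL) * (1.0 - p_g)
    def xlnx(x):
        return x * math.log(x) if x > 0 else 0.0
    return -xlnx(q) - xlnx(1.0 - q)

# Ideal demon (T_D = 0, p_g = 1): S_rev^D is the binary entropy of P_L,
# equal to ln 2 for insertion at the center; erasure at T_D = 0 costs no work.
S = demon_entropy_after_cycle(0.5, 1.0, 1.0, p_g=1.0)
print(S, math.log(2))
```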
![(Color Online) Work vs insertion position $l$ and MD’s inverse temperature $\beta_{D}$. (a) Total work as a function of $\beta_{D}$ for different $l=0.2$, $0.5$ and $0.8$. (b) Total work as a function of insertion position $l$ for different $\beta_{D}=2.0$, $3.0$ and $4.0$. (c) Contour plot of the total work as a function of $l$ and $\beta_{D}$. The position of maximum extracted work is denoted by the white dashed line.[]{data-label="fig:wtot_gen_imp"}](Wtot_gen_Imp){width="8cm"}
![(Color Online) Efficiency vs insertion position $l$ and MD’s inverse temperature $\beta_{D}$. (a) Efficiency as a function of $\beta_{D}$ for different $l=0.4$, $0.5$ and $0.6$. (b) Efficiency as a function of $l$ for different $\beta_{D}=2$, $3$ and $4$. (c) Contour plot of efficiency vs $l$ and $\beta_{D}$.[]{data-label="fig:eff_gen_imp"}](Eff_gen_Imp){width="8cm"}
Efficiency of Szilard Heat Engine
=================================
For the quantum version of the SHE, the quantum coherence based on the finite size of the chamber results in various properties different from the classical one. Work is required during the insertion and removal processes, while the same processes can be carried out freely in the classical version. The microscopic model here relates the efficiency of the measurement by the MD to the temperature of its heat bath. In the whole thermodynamic cycle, the work done by the system on the outside is the sum of the work done in each process, $$\begin{aligned}
W_{\mathrm{tot}} & = & -\left(W_{\mathrm{ins}}+W_{\mathrm{mea}}+W_{\mathrm{exp}}+W_{\mathrm{rev}}\right)\notag\\
& = & T\left[\left(p_{e}\ln p_{e}+p_{g}\ln p_{g}\right)\right.\notag\\
& & -\left(P_{L}p_{g}+P_{R}p_{e}\right)\ln\left(P_{L}p_{g}+P_{R}p_{e}\right)\notag\\
& & \left.-\left(P_{L}p_{e}+P_{R}p_{g}\right)\ln\left(P_{L}p_{e}+P_{R}p_{g}\right)\right]\notag\\
& & -P_{R}\left(p_{g}-p_{e}\right)\Delta.\end{aligned}$$
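The expression for $W_{\mathrm{tot}}$ can be evaluated directly; the sketch below (using the parametrization $E_{n}=n^{2}/L^{2}$ of the figures) reproduces the two degenerate-demon cases discussed above, $W_{\mathrm{tot}}=k_{B}T\ln2$ for the symmetry-broken pure demon and $W_{\mathrm{tot}}=0$ for the maximally mixed one:

```python
import math

def Z(L, beta, nmax=2000):
    # Partition sum with E_n = n^2 / L^2 (the parametrization of the figures).
    return sum(math.exp(-beta * (n / L)**2) for n in range(1, nmax + 1))

def xlnx(x):
    return x * math.log(x) if x > 0 else 0.0

def W_tot(l, L, beta, delta, p_g):
    # Total extracted work for demon populations (p_g, p_e = 1 - p_g).
    T = 1.0 / beta
    Zl, Zr = Z(l, beta), Z(L - l, beta)
    PL, PR = Zl / (Zl + Zr), Zr / (Zl + Zr)
    p_e = 1.0 - p_g
    mix1 = PL * p_g + PR * p_e
    mix2 = PL * p_e + PR * p_g
    return (T * (xlnx(p_e) + xlnx(p_g) - xlnx(mix1) - xlnx(mix2))
            - PR * (p_g - p_e) * delta)

L, beta = 1.0, 1.0
perfect = W_tot(0.5, L, beta, delta=0.0, p_g=1.0)  # symmetry-broken pure demon
blind = W_tot(0.5, L, beta, delta=0.0, p_g=0.5)    # maximally mixed demon
print(perfect, blind)
```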
To enable the system to do work on the outside, the temperature of the MD should be low enough to ensure $W_{\mathrm{tot}}\geq0$, which is known as the positive-work condition (PWC) [@htquan]. To evaluate the efficiency of the QHE, we need to obtain the heat absorbed from the high temperature heat bath. Different from the classical case, the exchange of heat with the high temperature source persists in each step. The total heat absorbed from the high temperature source is the sum over all four steps, $$\begin{aligned}
Q_{\mathrm{tot}} & = & -\left(Q_{\mathrm{ins}}+Q_{\mathrm{mea}}+Q_{\mathrm{exp}}+Q_{\mathrm{rev}}\right)\notag\\
& = & T\left[\left(p_{e}\ln p_{e}+p_{g}\ln p_{g}\right)\right.\notag\\
& & -\left(P_{L}p_{g}+P_{R}p_{e}\right)\ln\left(P_{L}p_{g}+P_{R}p_{e}\right)\notag\\
& & \left.-\left(P_{L}p_{e}+P_{R}p_{g}\right)\ln\left(P_{L}p_{e}+P_{R}p_{g}\right)\right].\end{aligned}$$ Here, the absorbed energy is used to perform work on the outside, and only the measurement process wastes the amount $W_{\mathrm{mea}}$, which is released into the low temperature heat bath. It is very interesting to notice that $W_{\mathrm{mea}}\rightarrow0$ as $\Delta\rightarrow0$, while at the same time $Q_{\mathrm{tot}}\rightarrow0$ and $W_{\mathrm{tot}}\rightarrow0$. To check the validity of the SLoT, one should consider the efficiency of this heat engine in a cycle,
$$\begin{aligned}
\eta & = & 1-\frac{P_{R}\left(p_{g}-p_{e}\right)\Delta}{Q_{\mathrm{tot}}}.\end{aligned}$$
As an example, we consider the special case $l=L/2$, which is similar to the ordinary SHE with the piston inserted at the center of the chamber. In this special case, the probabilities for the single molecule to stay on the two sides are the same as the classical ones, namely $P_{L}=P_{R}=1/2$. The total work extracted can then be written in the simple form $$W_{\mathrm{tot}}=T\left(\ln2+p_{e}\ln p_{e}+p_{g}\ln p_{g}\right)-\left(p_{g}-p_{e}\right)\Delta/2.$$ In this special case, to make the system capable of doing work on the outside, there is a requirement on the temperature of the demon (the low temperature bath). For example, when we choose $\beta=1$ and $\Delta=0.5$, the PWC is $\beta_{D}\geq2.09$. This requirement is stricter than that of the Carnot heat engine, $\beta_{D}>1$. The efficiency of this heat engine reads $$\eta=1-\frac{\left(p_{g}-p_{e}\right)\Delta}{2T\left(\ln2+p_{e}\ln p_{e}+p_{g}\ln p_{g}\right)},$$ which is lower than the corresponding Carnot efficiency $$\eta_{\mathrm{Carnot}}=1-\frac{T_{D}}{T}.$$ Here, the efficiency is a monotonic function of the energy spacing $\Delta$ and reaches its maximum $$\eta_{\mathrm{max}}=1-\frac{2T_{D}}{T}\leq\eta_{\mathrm{Carnot}}$$ in the limit $\Delta\rightarrow0$.
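The quoted threshold $\beta_{D}\geq2.09$ can be recovered by solving $W_{\mathrm{tot}}=0$ numerically in the special case $l=L/2$ (a sketch with $\beta=1$ and $\Delta=0.5$ as in the text):

```python
import math

def W_tot_center(beta_D, beta=1.0, delta=0.5):
    # l = L/2, so P_L = P_R = 1/2:
    # W_tot = T (ln 2 + p_e ln p_e + p_g ln p_g) - (p_g - p_e) delta / 2
    T = 1.0 / beta
    p_g = 1.0 / (1.0 + math.exp(-beta_D * delta))
    p_e = 1.0 - p_g
    return (T * (math.log(2) + p_e * math.log(p_e) + p_g * math.log(p_g))
            - (p_g - p_e) * delta / 2.0)

# Bisection for the positive-work threshold in beta_D.
lo, hi = 1.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if W_tot_center(mid) < 0.0:
        lo = mid
    else:
        hi = mid
root = 0.5 * (lo + hi)
print(root)  # close to the quoted threshold beta_D = 2.09
```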
In the general case, we show the work done by the system and the efficiency of the heat engine versus the position of the wall $l$ and the inverse temperature of the demon $\beta_{D}$ in Fig. \[fig:wtot\_gen\_imp\] and Fig. \[fig:eff\_gen\_imp\]. As illustrated in Fig. \[fig:wtot\_gen\_imp\](a), for small insertion positions, e.g. $l=0.16$ and $0.36$, the system cannot extract positive work. There exists a critical insertion position $l_{\mathrm{cri}}$ for extracting positive work, determined by $$T\left(P_{L}^{\mathrm{cri}}\ln P_{L}^{\mathrm{cri}}+P_{R}^{\mathrm{cri}}\ln P_{R}^{\mathrm{cri}}\right)+P_{R}^{\mathrm{cri}}\Delta=0,$$ where $P_{R}^{\mathrm{cri}}=P_{R}\left(l_{\mathrm{cri}}\right)$ and $P_{L}^{\mathrm{cri}}=P_{L}\left(l_{\mathrm{cri}}\right)$. The critical value of the insertion position is $l_{\mathrm{cri}}=0.447$ for the typical parameters chosen here. Due to the work required in the measurement process, the work extracted is not a symmetric function of the insertion position $l$, namely $W_{\mathrm{tot}}\left(0.5-l\right)\neq W_{\mathrm{tot}}\left(0.5+l\right)$, as illustrated in Fig. \[fig:wtot\_gen\_imp\](b,c). Since the high energy state $\left|e\right\rangle $ of the demon is used to register the right side for the single molecule, more work is needed when $l<L/2$. Due to the work that has to be done by the outside agent in the measurement process, the optimal position for extracting maximum work is not at the center of the potential. The maximum work that can be extracted for a given inverse temperature of the MD is reached when $$\frac{P_{L}^{\mathrm{wmax}}p_{e}+P_{R}^{\mathrm{wmax}}p_{g}}{P_{L}^{\mathrm{wmax}}p_{g}+P_{R}^{\mathrm{wmax}}p_{e}}=e^{-\beta\Delta},$$ where $P_{L}^{\mathrm{wmax}}=P_{L}\left(l_{\mathrm{wmax}}\right)$ and $P_{R}^{\mathrm{wmax}}=P_{R}\left(l_{\mathrm{wmax}}\right)$. It is clear that the position for maximum work depends on the inverse temperature of the demon $\beta_{D}$.
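The critical insertion position can likewise be found by bisection on the PWC; the sketch below assumes $\beta=1$, $L=1$, $\Delta=0.5$ and the ideal demon ($T_{D}=0$), a parameter choice not fully spelled out in the text, and lands close to the quoted $l_{\mathrm{cri}}=0.447$:

```python
import math

def Z(L, beta, nmax=2000):
    return sum(math.exp(-beta * (n / L)**2) for n in range(1, nmax + 1))

def pwc_lhs(l, L=1.0, beta=1.0, delta=0.5):
    # T (P_L ln P_L + P_R ln P_R) + P_R delta; zero at the critical position l_cri.
    T = 1.0 / beta
    Zl, Zr = Z(l, beta), Z(L - l, beta)
    PL, PR = Zl / (Zl + Zr), Zr / (Zl + Zr)
    return T * (PL * math.log(PL) + PR * math.log(PR)) + PR * delta

# Positive work (lhs < 0) only to the right of l_cri; bisection in [0.35, 0.5].
lo, hi = 0.35, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if pwc_lhs(mid) > 0.0:
        lo = mid
    else:
        hi = mid
l_cri = 0.5 * (lo + hi)
print(l_cri)
```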
In Fig. \[fig:eff\_gen\_imp\], we show the efficiency of this single-molecule heat engine. We consider only the positive-work regime and set the efficiency to $0$ in the negative-work region. Fig. \[fig:eff\_gen\_imp\](a) shows the monotonic behavior of the efficiency as a function of the MD’s inverse temperature. The efficiency is also a monotonic function of the insertion position $l$, as illustrated in Fig. \[fig:eff\_gen\_imp\](b,c), unlike the total work extracted. It is worth noticing that the efficiency reaches its maximum at $l=1$, where no work can be extracted. Since the measurement is the only channel of wasting energy, the only way to improve the efficiency is to reduce $W_{\mathrm{mea}}$ by decreasing $P_{R}$. The efficiency of the QHE reaches the well-known Carnot efficiency $\eta_{\mathrm{Carnot}}$ when $P_{R}=0$; at the same time, the total work extracted approaches zero, namely $W_{\mathrm{tot}}=0$. We meet this dilemma because the measurement results in an imperfect correlation between the MD and the system.
Before concluding this paper, we draw attention again to the two limiting processes [@jqliao2009] $$\begin{aligned}
\lim_{\beta_{D}\rightarrow+\infty}\lim_{\Delta\rightarrow0}\rho_{D} & =\left(\left\vert g\right\rangle \left\langle g\right\vert +\left\vert e\right\rangle \left\langle e\right\vert \right)/2,\\
\lim_{\Delta\rightarrow0}\lim_{\beta_{D}\rightarrow+\infty}\rho_{D} & =\left\vert g\right\rangle \left\langle g\right\vert .\end{aligned}$$ Note that taking the two limits in different orders leads to completely different results, the latter being a reflection of the spontaneous symmetry breaking phenomenon. This difference for the MD’s initial state results in the different work extracted, namely, $$\begin{aligned}
\lim_{\beta_{D}\rightarrow+\infty}\lim_{\Delta\rightarrow0}W_{\mathrm{tot}} & =0,\\
\lim_{\Delta\rightarrow0}\lim_{\beta_{D}\rightarrow+\infty}W_{\mathrm{tot}} & =k_{B}T\ln2.\end{aligned}$$ The former means that the MD actually obtains no information about the position of the molecule and extracts no work, while the latter shows that the MD obtains exact information about the position of the molecule and enables the system to perform maximum work on the outside agent. The same phenomenon has also been revealed in the process of dynamic thermalization [@jqliao2009].
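The non-commuting limits can be demonstrated numerically through the product $\beta_{D}\Delta$: taking $\Delta\rightarrow0$ first keeps $\beta_{D}\Delta$ small (maximally mixed demon), while taking $\beta_{D}\rightarrow\infty$ first sends $\beta_{D}\Delta\rightarrow\infty$ (demon frozen in $\left\vert g\right\rangle$). A sketch at $l=L/2$ and $T=1$:

```python
import math

def xlnx(x):
    return x * math.log(x) if x > 0 else 0.0

def W_tot_center(delta, beta_D, T=1.0):
    # l = L/2 (P_L = P_R = 1/2); W_tot for a thermal demon with gap delta.
    p_g = 1.0 / (1.0 + math.exp(-beta_D * delta))
    p_e = 1.0 - p_g
    return T * (math.log(2) + xlnx(p_e) + xlnx(p_g)) - (p_g - p_e) * delta / 2.0

# Delta -> 0 first: beta_D * delta stays small, the demon is maximally mixed.
first_delta = W_tot_center(delta=1e-9, beta_D=1e3)
# beta_D -> infinity first: the demon freezes into |g>, then the gap closes.
first_betaD = W_tot_center(delta=1e-9, beta_D=1e18)
print(first_delta, first_betaD)
```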
Conclusions
===========
In summary, we have studied a quantum version of the SHE with a quantum MD at a finite temperature lower than that of the system. We modeled the MD as a two-level system, which carries out the measurement in a quantum fashion and controls the system to do work on the outside agent. In this sense, the MD-assisted thermodynamic cycle is divided into four steps, insertion, measurement, expansion and removal, all of which are described in the framework of quantum mechanics. In each step, we also considered special cases to recover the well-known results of the classical version of the SHE. We explicitly analyzed the total work extracted and the corresponding efficiency. To resolve the MD paradox, we compared the obtained efficiency of the heat engine with that of the Carnot heat engine. It is found that the efficiency is always below the Carnot efficiency, since the quantum MD is included as a part of the whole working substance and its functions are also correctly “quantized”. Thus nothing violates the SLoT.
In comparison with the classical version of the SHE, the following quantum features were discovered in the quantum thermodynamic cycle: (1) The finite size effect of the potential well was found to be the reason for the non-vanishing work required for the insertion and removal of the middle wall, while the corresponding manipulations can be achieved freely in the classical case; (2) Quantum coherence is allowed to exist in the MD’s density matrix, and it is the decrease of effective temperature caused by this coherence that actually improves the efficiency of the SHE; (3) In the measurement process, the finite temperature of the MD results in incorrect decisions in controlling the single molecule’s motion, and this incorrectness decreases the MD’s ability to extract work. To the best of our knowledge, even in the classical case, a similar investigation has never been carried out; (4) In the whole thermodynamic cycle, the removal process is necessary for returning the whole working substance to its initial state. This fact was neglected in previous studies, even for the classical SHE.
Finally, we stress that the model studied here could help to resolve many paradoxical observations caused by heuristic arguments that hybridize classical and quantum points of view on thermodynamics. For instance, it can be recognized that the conventional argument about the MD paradox only concerns a classical version of the MD at the same temperature as that of the system. Our results can deepen the comprehensive understanding of some fundamental problems in thermodynamics, such as the relationship between quantum unitarity and the SLoT [@Dong2010].
HD would like to thank J. N. Zhang for helpful discussions. This work was supported by the NSFC through Grants No. 10974209 and No. 10935010 and by the National 973 Program (Grant No. 2006CB921205).
Appendix {#appendix .unnumbered}
========
In this appendix, we present detailed calculations of the work done and the efficiency of the SHE. By following the calculations for the four steps listed in the main text, the reader can gain a deeper understanding of the physical role of the MD.
**Step 1: Insertion.** In this process, the changes of internal energy $\Delta U_{\mathrm{int}}=\mathrm{Tr}\left[\left(H+H_{D}\right)\left(\rho_{\mathrm{ins}}-\rho_{0}\right)\right]$ and total entropy $\Delta S_{\mathrm{ins}}=\mathrm{Tr}\left[-\rho_{\mathrm{ins}}\ln\rho_{\mathrm{ins}}\right]-\mathrm{Tr}\left[-\rho_{0}\ln\rho_{0}\right]$ are explicitly given by $$\begin{aligned}
\Delta U_{\mathrm{int}} & =\sum_{n}p_{n}\left(l\right)E_{n}\left(l\right)+\notag\\
& \sum_{n}p_{n}\left(L^{\prime}\right)E_{n}\left(L^{\prime}\right)-\sum_{n}p_{n}\left(L\right)E_{n}\left(L\right)\\
& =\frac{\partial}{\partial\beta}\left[\ln Z\left(L\right)-\ln\mathcal{Z}\left(l\right)\right],\notag\end{aligned}$$ where $L^{\prime}=L-l$ and $$\begin{aligned}
\Delta S_{\mathrm{ins}} & =\left(\ln\mathcal{Z}\left(l\right)-\ln Z\left(L\right)\right)+\notag\\
& \beta\sum_{n}\left[\begin{array}{c}
p_{n}\left(l\right)E_{n}\left(l\right)\\
+p_{n}\left(L^{\prime}\right)E_{n}\left(L^{\prime}\right)-p_{n}\left(L\right)E_{n}\left(L\right)\end{array}\right]\notag\\
& =\left(1-\beta\frac{\partial}{\partial\beta}\right)\left(\ln\mathcal{Z}\left(l\right)-\ln Z\left(L\right)\right),\end{aligned}$$ where $$p_{n}\left(y\right)=\frac{\exp\left(-\beta E_{n}\left(y\right)\right)}{Z\left(y\right)}.$$ For the isothermal process, the work done by the outside agent and the heat exchange are simply $W_{\mathrm{ins}}=\Delta U_{\mathrm{int}}-T\Delta S_{\mathrm{ins}}$ and $Q_{\mathrm{ins}}=-T\Delta S_{\mathrm{ins}}$, namely, $$\begin{aligned}
W_{\mathrm{ins}} & =T\left[\ln Z\left(L\right)-\ln\mathcal{Z}\left(l\right)\right],\\
Q_{\mathrm{ins}} & =\left(T-\frac{\partial}{\partial\beta}\right)\left[\ln Z\left(L\right)-\ln\mathcal{Z}\left(l\right)\right].\end{aligned}$$
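Throughout this appendix, sums of the form $\sum_{n}p_{n}\left(y\right)E_{n}\left(y\right)$ are converted into derivatives of $\ln Z$ via the identity $\left\langle E\right\rangle _{y}=-\partial\ln Z\left(y\right)/\partial\beta$. This identity is easy to check numerically; the sketch below is illustrative only, assuming particle-in-a-box levels $E_{n}(y)=n^{2}\pi^{2}/(2y^{2})$ (units with $\hbar=m=1$) and arbitrary values of $\beta$ and $y$.

```python
import numpy as np

def levels(y, nmax=400):
    # Particle-in-a-box energies E_n(y) = n^2 pi^2 / (2 y^2), with hbar = m = 1.
    n = np.arange(1, nmax + 1)
    return n**2 * np.pi**2 / (2.0 * y**2)

def lnZ(y, beta):
    # ln Z(y) = ln sum_n exp(-beta E_n(y))
    return np.log(np.sum(np.exp(-beta * levels(y))))

def mean_energy(y, beta):
    # <E>_y = sum_n p_n(y) E_n(y), with p_n(y) = exp(-beta E_n(y)) / Z(y)
    E = levels(y)
    w = np.exp(-beta * E)
    return np.sum(w * E) / np.sum(w)

beta, y = 1.0, 1.2
h = 1e-5
lhs = mean_energy(y, beta)
rhs = -(lnZ(y, beta + h) - lnZ(y, beta - h)) / (2.0 * h)
print(lhs, rhs)  # the Boltzmann average equals -d(ln Z)/d(beta)
```

The two numbers agree to the accuracy of the finite difference, which is all that is used when passing between the summed and the $\partial/\partial\beta$ forms of $\Delta U_{\mathrm{int}}$ and $\Delta S_{\mathrm{ins}}$ above.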
**Step 2: Measurement.** The measurement is realized by a controlled-NOT unitary operation, which has been illustrated in Sec. II. After the measurement process, the density matrix of the total system is
$$\begin{aligned}
\rho_{\mathrm{mea}} & = & \left[P_{L}p_{g}\rho^{L}\left(l\right)+P_{R}p_{e}\rho^{R}\left(L^{\prime}\right)\right]\otimes\left\vert g\right\rangle \left\langle g\right\vert \\
& & +\left[P_{L}p_{e}\rho^{L}\left(l\right)+P_{R}p_{g}\rho^{R}\left(L^{\prime}\right)\right]\otimes\left\vert e\right\rangle \left\langle e\right\vert .\end{aligned}$$
The entropy is not changed in this step, and the work done by the outside agent is $$W_{\mathrm{mea}}=\Delta U_{\mathrm{mea}}=P_{R}\left(p_{g}-p_{e}\right)\Delta.$$
**Step 3: Controlled expansion.** At the end of the expansion, the state of the total system reads $$\begin{aligned}
\rho_{\mathrm{exp}} & = & \left[P_{L}p_{g}\rho^{L}\left(l_{g}\right)+P_{R}p_{e}\rho^{R}\left(L_{g}\right)\right]\otimes\left\vert g\right\rangle \left\langle g\right\vert \\
& & +\left[P_{L}p_{e}\rho^{L}\left(L_{e}\right)+P_{R}p_{g}\rho^{R}\left(l_{e}\right)\right]\otimes\left\vert e\right\rangle \left\langle e\right\vert .\end{aligned}$$ where $L_{g}=L-l_{g}$ and $L_{e}=L-l_{e}$.
We move the wall isothermally, and the work done by the outside agent can be obtained by the same method used in the insertion process as
$$\begin{aligned}
W_{\mathrm{exp}} & = & \mathrm{Tr}\left[\rho_{\mathrm{exp}}\left(H+H_{D}\right)\right]-\mathrm{Tr}\left[\rho_{\mathrm{mea}}\left(H+H_{D}\right)\right]\notag\\
& & -T\mathrm{Tr}\left[-\rho_{\mathrm{exp}}\ln\rho_{\mathrm{exp}}\right]+T\mathrm{Tr}\left[-\rho_{\mathrm{mea}}\ln\rho_{\mathrm{mea}}\right]\notag\\
& = & \sum_{n}[P_{L}p_{g}p_{n}\left(l_{g}\right)E_{n}\left(l_{g}\right)+P_{R}p_{e}p_{n}\left(L_{g}\right)E_{n}\left(L_{g}\right)+P_{L}p_{e}p_{n}\left(L_{e}\right)E_{n}\left(L_{e}\right)\notag\\
& & +P_{R}p_{g}p_{n}\left(l_{e}\right)E_{n}\left(l_{e}\right)]+\left(P_{L}p_{e}+P_{R}p_{g}\right)\Delta\notag\\
& & -\sum_{n}\left(P_{L}p_{n}\left(l\right)E_{n}\left(l\right)+P_{R}p_{n}\left(L^{\prime}\right)E_{n}\left(L^{\prime}\right)\right)-\left(P_{L}p_{e}+P_{R}p_{g}\right)\Delta\notag\\
& & +T\sum_{n}\left[P_{L}p_{g}p_{n}\left(l_{g}\right)\ln P_{L}p_{g}p_{n}\left(l_{g}\right)+P_{R}p_{e}p_{n}\left(L_{g}\right)\ln P_{R}p_{e}p_{n}\left(L_{g}\right)\right.\notag\\
& & \left.\qquad\qquad+P_{L}p_{e}p_{n}\left(L_{e}\right)\ln P_{L}p_{e}p_{n}\left(L_{e}\right)+P_{R}p_{g}p_{n}\left(l_{e}\right)\ln P_{R}p_{g}p_{n}\left(l_{e}\right)\right]\notag\\
& & -T\sum_{n}[P_{L}p_{g}p_{n}\left(l\right)\ln P_{L}p_{g}p_{n}\left(l\right)+P_{R}p_{e}p_{n}\left(L^{\prime}\right)\ln P_{R}p_{e}p_{n}\left(L^{\prime}\right)\notag\\
& & \qquad\qquad+P_{L}p_{e}p_{n}\left(l\right)\ln P_{L}p_{e}p_{n}\left(l\right)+P_{R}p_{g}p_{n}\left(L^{\prime}\right)\ln P_{R}p_{g}p_{n}\left(L^{\prime}\right)]\\
& = & P_{L}T\left[\ln Z\left(l\right)-p_{g}\ln Z\left(l_{g}\right)-p_{e}\ln Z\left(L_{e}\right)\right]+P_{R}T\left[\ln Z\left(L^{\prime}\right)-p_{e}\ln Z\left(L_{g}\right)-p_{g}\ln Z\left(l_{e}\right)\right].\end{aligned}$$
The internal energy change can also be evaluated as
$$\begin{aligned}
\Delta U_{\mathrm{exp}} & = & \sum_{n}\left[P_{L}p_{g}p_{n}\left(l_{g}\right)E_{n}\left(l_{g}\right)+P_{R}p_{e}p_{n}\left(L_{g}\right)E_{n}\left(L_{g}\right)+P_{L}p_{e}p_{n}\left(L_{e}\right)E_{n}\left(L_{e}\right)+P_{R}p_{g}p_{n}\left(l_{e}\right)E_{n}\left(l_{e}\right)\right]\notag\\
& & +\left(P_{L}p_{e}+P_{R}p_{g}\right)\Delta\notag\\
& & -\sum_{n}\left(P_{L}p_{n}\left(l\right)E_{n}\left(l\right)+P_{R}p_{n}\left(L'\right)E_{n}\left(L'\right)\right)-\left(P_{L}p_{e}+P_{R}p_{g}\right)\Delta\notag\\
& = & \sum_{n}\left[P_{L}p_{g}p_{n}\left(l_{g}\right)E_{n}\left(l_{g}\right)+P_{R}p_{e}p_{n}\left(L_{g}\right)E_{n}\left(L_{g}\right)+P_{L}p_{e}p_{n}\left(L_{e}\right)E_{n}\left(L_{e}\right)+P_{R}p_{g}p_{n}\left(l_{e}\right)E_{n}\left(l_{e}\right)\right]\notag\\
& & -\sum_{n}\left[P_{L}p_{n}\left(l\right)E_{n}\left(l\right)+P_{R}p_{n}\left(L^{\prime}\right)E_{n}\left(L^{\prime}\right)\right]\notag\\
& = & P_{L}\frac{\partial}{\partial\beta}\left[\ln Z\left(l\right)-p_{g}\ln Z\left(l_{g}\right)-p_{e}\ln Z\left(L_{e}\right)\right]+P_{R}\frac{\partial}{\partial\beta}\left[\ln Z\left(L^{\prime}\right)-p_{e}\ln Z\left(L_{g}\right)-p_{g}\ln Z\left(l_{e}\right)\right].\end{aligned}$$
Then, we obtain the heat exchange in this process as $Q_{\mathrm{exp}}=-T\Delta S_{\mathrm{exp}}=W_{\mathrm{exp}}-\Delta U_{\mathrm{exp}}$ or
$$\begin{aligned}
Q_{\mathrm{exp}} & = & P_{L}\left(T-\frac{\partial}{\partial\beta}\right)\left[\ln Z\left(l\right)-p_{g}\ln Z\left(l_{g}\right)-p_{e}\ln Z\left(L_{e}\right)\right]\\
& & +P_{R}\left(T-\frac{\partial}{\partial\beta}\right)\left[\ln Z\left(L^{\prime}\right)-p_{e}\ln Z\left(L_{g}\right)-p_{g}\ln Z\left(l_{e}\right)\right].\end{aligned}$$
**Step 4: Removing.** The wall is removed in this process. After that, the system returns to its initial state and is no longer entangled with the MD. The total system is then in the state $$\begin{aligned}
\rho_{\mathrm{rev}} & = & \sum_{n}\frac{\exp\left[-\beta E_{n}\left(L\right)\right]}{Z(L)}\left\vert \psi_{n}\left(L\right)\right\rangle \left\langle \psi_{n}\left(L\right)\right\vert \otimes\\
& & \left[\left(P_{L}p_{g}+P_{R}p_{e}\right)\left\vert g\right\rangle \left\langle g\right\vert +\left(P_{L}p_{e}+P_{R}p_{g}\right)\left\vert e\right\rangle \left\langle e\right\vert \right].\notag\end{aligned}$$ Then, the work done and the heat absorbed are, respectively, $$\begin{aligned}
W_{\mathrm{rev}} & = & \mathrm{Tr}\left[\rho_{\mathrm{rev}}\left(H+H_{D}\right)\right]-\mathrm{Tr}\left[\rho_{\mathrm{exp}}\left(H+H_{D}\right)\right]\\
& & -T\mathrm{Tr}\left[-\rho_{\mathrm{rev}}\ln\rho_{\mathrm{rev}}\right]+T\mathrm{Tr}\left[-\rho_{\mathrm{exp}}\ln\rho_{\mathrm{exp}}\right]\notag\end{aligned}$$
or $$\begin{aligned}
W_{\mathrm{rev}} & = & \sum_{n}p_{n}\left(L\right)E_{n}\left(L\right)+\left(P_{L}p_{e}+P_{R}p_{g}\right)\Delta\notag\\
& & -\sum_{n}\left[P_{L}p_{g}p_{n}\left(l_{g}\right)E_{n}\left(l_{g}\right)+P_{R}p_{e}p_{n}\left(L_{g}\right)E_{n}\left(L_{g}\right)+P_{L}p_{e}p_{n}\left(L_{e}\right)E_{n}\left(L_{e}\right)+P_{R}p_{g}p_{n}\left(l_{e}\right)E_{n}\left(l_{e}\right)\right]\notag\\
& & -\left(P_{L}p_{e}+P_{R}p_{g}\right)\Delta\notag\\
& & +T[\sum_{n}p_{n}\left(L\right)\ln p_{n}\left(L\right)+\left(P_{L}p_{g}+P_{R}p_{e}\right)\ln\left(P_{L}p_{g}+P_{R}p_{e}\right)+\left(P_{L}p_{e}+P_{R}p_{g}\right)\ln\left(P_{L}p_{e}+P_{R}p_{g}\right)]\notag\\
& & -T\sum_{n}\left\{ P_{L}p_{g}p_{n}\left(l_{g}\right)\ln\left[P_{L}p_{g}p_{n}\left(l_{g}\right)\right]+P_{R}p_{e}p_{n}\left(L_{g}\right)\ln\left[P_{R}p_{e}p_{n}\left(L_{g}\right)\right]\right.\notag\\
& & \left.\qquad\qquad+P_{L}p_{e}p_{n}\left(L_{e}\right)\ln\left[P_{L}p_{e}p_{n}\left(L_{e}\right)\right]+P_{R}p_{g}p_{n}\left(l_{e}\right)\ln\left[P_{R}p_{g}p_{n}\left(l_{e}\right)\right]\right\} \notag\\
& = & T\left[-\ln Z\left(L\right)+\left(P_{L}p_{g}+P_{R}p_{e}\right)\ln\left(P_{L}p_{g}+P_{R}p_{e}\right)+\left(P_{L}p_{e}+P_{R}p_{g}\right)\ln\left(P_{L}p_{e}+P_{R}p_{g}\right)\right.\notag\\
& & \qquad-P_{L}\ln P_{L}-P_{R}\ln P_{R}-p_{e}\ln p_{e}-p_{g}\ln p_{g}\notag\\
& & \left.\qquad+P_{L}p_{g}\ln Z\left(l_{g}\right)+P_{R}p_{e}\ln Z\left(L_{g}\right)+P_{L}p_{e}\ln Z\left(L_{e}\right)+P_{R}p_{g}\ln Z\left(l_{e}\right)\right].\end{aligned}$$ and $$\begin{aligned}
Q_{\mathrm{rev}} & = & -T\mathrm{Tr}\left[-\rho_{\mathrm{rev}}\ln\rho_{\mathrm{rev}}\right]+T\mathrm{Tr}\left[-\rho_{\mathrm{exp}}\ln\rho_{\mathrm{exp}}\right]\notag\\
& = & T\left[\sum_{n}p_{n}\left(L\right)\ln p_{n}\left(L\right)+\left(P_{L}p_{g}+P_{R}p_{e}\right)\ln\left(P_{L}p_{g}+P_{R}p_{e}\right)+\left(P_{L}p_{e}+P_{R}p_{g}\right)\ln\left(P_{L}p_{e}+P_{R}p_{g}\right)\right]\notag\\
& & -T\sum_{n}\left\{ P_{L}p_{g}p_{n}\left(l_{g}\right)\ln\left[P_{L}p_{g}p_{n}\left(l_{g}\right)\right]+P_{R}p_{e}p_{n}\left(L_{g}\right)\ln\left[P_{R}p_{e}p_{n}\left(L_{g}\right)\right]\right.\notag\\
& & \left.\qquad\qquad+P_{L}p_{e}p_{n}\left(L_{e}\right)\ln\left[P_{L}p_{e}p_{n}\left(L_{e}\right)\right]+P_{R}p_{g}p_{n}\left(l_{e}\right)\ln\left[P_{R}p_{g}p_{n}\left(l_{e}\right)\right]\right\} \notag\\
& = & T\left\{ -\ln Z\left(L\right)+\left(P_{L}p_{g}+P_{R}p_{e}\right)\ln\left(P_{L}p_{g}+P_{R}p_{e}\right)+\left(P_{L}p_{e}+P_{R}p_{g}\right)\ln\left(P_{L}p_{e}+P_{R}p_{g}\right)\right.\notag\\
& & \left.\qquad-P_{L}\ln P_{L}-P_{R}\ln P_{R}-p_{e}\ln p_{e}-p_{g}\ln p_{g}+P_{L}p_{g}\ln Z\left(l_{g}\right)+P_{R}p_{e}\ln Z\left(L_{g}\right)+P_{L}p_{e}\ln Z\left(L_{e}\right)+P_{R}p_{g}\ln Z\left(l_{e}\right)\right\} \notag\\
& & -\sum_{n}\left[p_{n}\left(L\right)E_{n}\left(L\right)-P_{L}p_{g}p_{n}\left(l_{g}\right)E_{n}\left(l_{g}\right)-P_{R}p_{e}p_{n}\left(L_{g}\right)E_{n}\left(L_{g}\right)\right.\notag\\
& & \left.\qquad\qquad-P_{L}p_{e}p_{n}\left(L_{e}\right)E_{n}\left(L_{e}\right)-P_{R}p_{g}p_{n}\left(l_{e}\right)E_{n}\left(l_{e}\right)\right]\notag\\
& = & T\left[\left(P_{L}p_{g}+P_{R}p_{e}\right)\ln\left(P_{L}p_{g}+P_{R}p_{e}\right)+\left(P_{L}p_{e}+P_{R}p_{g}\right)\ln\left(P_{L}p_{e}+P_{R}p_{g}\right)-P_{L}\ln P_{L}-P_{R}\ln P_{R}-p_{e}\ln p_{e}-p_{g}\ln p_{g}\right]\notag\\
& & -\left(T-\frac{\partial}{\partial\beta}\right)\ln Z\left(L\right)+P_{L}p_{g}\left(T-\frac{\partial}{\partial\beta}\right)\ln Z\left(l_{g}\right)+P_{R}p_{e}\left(T-\frac{\partial}{\partial\beta}\right)\ln Z\left(L_{g}\right)\notag\\
& & +P_{L}p_{e}\left(T-\frac{\partial}{\partial\beta}\right)\ln Z\left(L_{e}\right)+P_{R}p_{g}\left(T-\frac{\partial}{\partial\beta}\right)\ln Z\left(l_{e}\right).\end{aligned}$$
The total work extracted by the outside agent is the sum of the work extracted in each step as
$$\begin{aligned}
W_{\mathrm{tot}} & =-\left(W_{\mathrm{ins}}+W_{\mathrm{mea}}+W_{\mathrm{exp}}+W_{\mathrm{rev}}\right)\notag\\
& =T[\left(p_{e}\ln p_{e}+p_{g}\ln p_{g}\right)-\left(P_{L}p_{g}+P_{R}p_{e}\right)\ln\left(P_{L}p_{g}+P_{R}p_{e}\right)\notag\\
& \qquad\qquad-\left(P_{L}p_{e}+P_{R}p_{g}\right)\ln\left(P_{L}p_{e}+P_{R}p_{g}\right)]-P_{R}\left(p_{g}-p_{e}\right)\Delta.\end{aligned}$$
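As a consistency check, in the limit of a perfectly cold demon ($p_{g}\rightarrow1$, $p_{e}\rightarrow0$) with vanishing level spacing $\Delta\rightarrow0$ and $P_{L}=P_{R}=1/2$, the expression for $W_{\mathrm{tot}}$ above reduces to the classical Szilard result $T\ln2$. A minimal numerical sketch (units with $k_{B}=1$; the sample values in the last line are arbitrary illustrations):

```python
import numpy as np

def W_tot(T, P_L, P_R, p_g, p_e, Delta):
    # Total extracted work from the formula above (units with k_B = 1).
    xlnx = lambda x: x * np.log(x) if x > 0 else 0.0
    m1 = P_L * p_g + P_R * p_e   # weight of the demon's ground-state record
    m2 = P_L * p_e + P_R * p_g   # weight of the excited-state record
    return (T * (xlnx(p_e) + xlnx(p_g) - xlnx(m1) - xlnx(m2))
            - P_R * (p_g - p_e) * Delta)

T = 1.0
W_classical = W_tot(T, 0.5, 0.5, 1.0, 0.0, 0.0)
print(W_classical, T * np.log(2.0))  # recovers Szilard's T ln 2

# A finite demon temperature (p_e > 0) or a finite gap Delta reduces the work.
print(W_tot(T, 0.5, 0.5, 0.9, 0.1, 0.2))
```

The second line illustrates the main point of the paper: thermal excitation of the MD and the energy cost of resetting its record both eat into the extractable work.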
The total heat absorbed can also be obtained as $$\begin{aligned}
Q_{\mathrm{tot}} & =-\left(Q_{\mathrm{ins}}+Q_{\mathrm{exp}}+Q_{\mathrm{rev}}\right)\notag\\
& =T\left[\begin{array}{c}
\left(p_{e}\ln p_{e}+p_{g}\ln p_{g}\right)\\
-\left(P_{L}p_{g}+P_{R}p_{e}\right)\ln\left(P_{L}p_{g}+P_{R}p_{e}\right)\\
-\left(P_{L}p_{e}+P_{R}p_{g}\right)\ln\left(P_{L}p_{e}+P_{R}p_{g}\right)\end{array}\right].\end{aligned}$$
[17]{} *Maxwell’s Demon 2: Entropy, Classical and Quantum Information, Computing*, edited by H. S. Leff and A. F. Rex (Institute of Physics, Bristol, 2003).
K. Maruyama, F. Nori, and V. Vedral, Rev. Mod. Phys. 81, 1 (2009).
L. Szilard, Z. Phys. **53**, 840 (1929).
R. Landauer, IBM J. Res. Dev. **5**, 183 (1961).
C. H. Bennett, Int. J. Theor. Phys. **21**, 905 (1982); Sci. Am. **257**, 108 (1987).
T. D. Kieu, Phys. Rev. Lett. **93**, 140403 (2004); Eur. Phys. J. D **39**, 115 (2006).
H. T. Quan, Yu-xi Liu, C. P. Sun, and F. Nori, Phys. Rev. E **76**, 031105 (2007); H. T. Quan, Phys. Rev. E **79**, 041129 (2009).
M. O. Scully, M. S. Zubairy, G. S. Agarwal, and H. Walther, Science **299**, 862 (2003).
H. T. Quan, P. Zhang, and C. P. Sun, Phys. Rev. E **73**, 036122 (2006).
H. T. Quan, Y. D. Wang, Yu-xi Liu, C. P. Sun, and F. Nori, Phys. Rev. Lett. **97**, 180402 (2006).
W. H. Zurek, Szilard’s engine, Maxwell’s demon and quantum measurements, lecture notes from NATO ASI in Santa Fe, New Mexico, June 3-17, 1984, pp. 145-150 in the Proceedings Frontiers of Nonequilibrium Quantum Statistical Mechanics, edited by G. T. Moore and M. O. Scully (Plenum, 1986).
S. Lloyd, Phys. Rev. A **56**, 3374 (1997).
S. W. Kim, T. Sagawa, S. D. Liberato, M. Ueda, arXiv: 1006.1471 (2010).
P. Zhang, X. F. Liu, and C. P. Sun, Phys. Rev. A 66, 042104 (2002).
H. Dong, S. Yang, X.F. Liu and C.P. Sun, Phys. Rev. A **76**, 044104 (2007).
Jie-Qiao Liao, H. Dong, X. G. Wang, X. F. Liu, C. P. Sun, arXiv:0909.1230.
H. Dong, D.Z. Xu and C. P. Sun, in preparation.
---
author:
- Friederike Schmid
title: 'Reply to Comment on: ”Are stress-free membranes really ’tensionless’?”'
---
Fournier and Barbetta state the central message of their paper as follows (see conclusion of Ref. ): ”[*We showed that $\tau$ differs from the tensionlike coefficient $r$ of the fluctuation spectrum and we unveiled the correct way to derive $\tau$ from the free energy.*]{}” Here $\tau$ is the frame tension, called ${\mbox{$\sigma_{\mbox{\tiny f}}$}}$ in my work, and $r$ is the fluctuation tension, called ${\mbox{$\sigma_{\mbox{\tiny fluc}}$}}$ in my work. In Ref. , I argue that the arguments of Fournier and Barbetta – as well as those of other previous authors who came to the same conclusion[@imparato; @stecki] – are inherently inconsistent, due to the fact that they are based on a theory which has been linearized with respect to a small parameter $(A-A_p)/A_p$, yet predict an effect which is nonlinear in this parameter. (Here $A$ is the membrane area and $A_p$ the projected membrane area.) Fournier now claims that the arguments in Ref. are really based on an expansion in ${\mbox{$k_{_B}T$}}$, which would be the usual expansion in a diagrammatic field-theoretic treatment of the problem.
Let us therefore consider the expansion in ${\mbox{$k_{_B}T$}}$. The starting point is the Helfrich Hamiltonian ${\cal H}_{\mbox{\tiny Helfrich}}$ in Monge gauge, [*i.e.*]{}, a gauge where the coordinates $(x,y)$ are defined by projection onto the plane of the frame and the membrane is parametrized by the normal distance $h(x,y)$ onto this plane. In the field-theoretic treatment, ${\cal H}_{\mbox{\tiny Helfrich}}$ is expanded about a planar reference state, yielding [@kleinert] $${\cal H}= {\cal H}_0 + {\cal H}'$$ with $$\begin{aligned}
{\cal H}_0 &=& {\mbox{$\sigma_{\mbox{\tiny 0}}$}}A_p + \frac{1}{2} \int_{A_p}
{\mbox{d}}A_p \{ {\mbox{$\sigma_{\mbox{\tiny 0}}$}}(\nabla h)^2 + \kappa (\Delta h)^2 \}, \\
{\cal H}' &=&
- \int_{A_p} {\mbox{d}}A_p \Big[
\frac{{\mbox{$\sigma_{\mbox{\tiny 0}}$}}}{8} (\nabla h)^4
+ \frac{\kappa}{2} \big(\frac{1}{2} (\nabla h)^2 (\Delta h)^2
\nonumber \\ &&
\quad + \: 2 (\Delta h) (\partial_\alpha h)(\partial_\beta h)
(\partial_\alpha \partial_\beta h) + \cdots
\Big].\end{aligned}$$ Here ${\mbox{$\sigma_{\mbox{\tiny 0}}$}}$ is the bare tension, which coincides with the internal tension ${\mbox{$\sigma_{\mbox{\tiny int}}$}}$ in a system with fixed number of lipids, and $\kappa$ is the bending rigidity. The Hamiltonian ${\cal H}_0$ is Gaussian and ${\cal H}'$ subsumes the nonlinear terms. The ${\mbox{$k_{_B}T$}}$ expansion is based on the full nonlinear Hamiltonian ${\cal H}$, where ${\cal H}'$ is treated as a perturbation about ${\cal H}_0$ in a diagrammatic scheme. Further corrections come in through the nonlinearity of the measure ${\cal D}[h]$ [@cai]. Within this approach, some quantities can be evaluated up to first order in ${\mbox{$k_{_B}T$}}$ without having to actually consider nonlinear corrections to ${\cal H}_0$. The frame tension ${\mbox{$\sigma_{\mbox{\tiny f}}$}}$ is probably such a quantity, and therefore, Eq. (4) in Ref. indeed gives the correct leading correction for ${\mbox{$\sigma_{\mbox{\tiny f}}$}}/{\mbox{$\sigma_{\mbox{\tiny int}}$}}$ in a ${\mbox{$k_{_B}T$}}$ expansion. The fluctuation tension ${\mbox{$\sigma_{\mbox{\tiny fluc}}$}}$, however, is [*not*]{} such a quantity [@cai], and the calculation of the ${\mbox{$k_{_B}T$}}$ order involves the calculation of first-order loop diagrams. Such calculations can be very tricky [@cai] and have not been attempted in Refs. . If Fournier and Barbetta meant to show ${\mbox{$\sigma_{\mbox{\tiny f}}$}}\ne {\mbox{$\sigma_{\mbox{\tiny fluc}}$}}$ by an expansion in powers of ${\mbox{$k_{_B}T$}}$, then their calculation is not just inconsistent, it is incomplete. In that case, they should have finished the one-loop calculation for ${\mbox{$\sigma_{\mbox{\tiny fluc}}$}}$ before making any claims.
We conclude that the arguments of Refs. clearly cannot be justified by an expansion in ${\mbox{$k_{_B}T$}}$ or the corresponding dimensionless quantity[@note1] $\epsilon={\mbox{$k_{_B}T$}}/\kappa$. Instead, the authors of Refs. have simply replaced ${\cal H} =
{\cal H}_0$, which implies (among other things) omitting higher-order terms in $(\nabla h)^2 \ll 1$ and [*setting*]{} ${\mbox{$\sigma_{\mbox{\tiny fluc}}$}}={\mbox{$\sigma_{\mbox{\tiny int}}$}}$. This is the approximation examined in Ref. . From the relation ${\mbox{d}}A =
\sqrt{1+(\nabla h)^2} \: {\mbox{d}}A_p$, one gets locally $$(\nabla h)^2 = \frac{({\mbox{d}}A)^2 - ({\mbox{d}}A_p)^2}{({\mbox{d}}A_p)^2} = \eta (2 + \eta)$$ with $\eta(x,y) = {\mbox{d}}(A-A_p)/{\mbox{d}}A_p$, hence $(\nabla h)^2 \ll 1$ implies $\eta
\ll 1$. The global average of $\eta$ is $\bar{\eta} = (A-A_p)/A_p$, which is thus a small parameter in [*this*]{} approximation: It neglects terms that are not linear in $\bar{\eta}$. The ”expansion” is not systematic, because other terms (higher orders of higher derivatives of $h$) are neglected as well, but this is not important for our argument.
It is important to note that the parameters $\bar{\eta}= (A-A_p)/A_p$ and $\epsilon = {\mbox{$k_{_B}T$}}/\kappa$ can be varied independently. This is physically feasible, since $A_p$ can be controlled either directly or by tuning the frame tension, independent of the temperature ${\mbox{$k_{_B}T$}}$. The approximations $\bar{\eta}
\ll 1$ and $\epsilon \ll 1$ are thus not equivalent. On the one hand, capillary wave Hamiltonians [@cw] that ignore bending terms – which corresponds to setting $\kappa \to 0$ or $\epsilon \to \infty$ – have been extremely successful in describing the properties of liquid/liquid interfaces at large wavelengths. On the other hand, membranes with fixed number of lipids and approximately fixed area per lipid can be studied at fixed projected area. This is actually a common setting in simulations.
By appropriate Legendre transforms, one can calculate the free energy of the Gaussian model ${\cal H}_0$ in such a $(N,A,A_p)$ ensemble: $$\begin{aligned}
\frac{F(N,A,A_p)}{{\mbox{$k_{_B}T$}}(N-1)} &=& - \frac{1}{2} \Big[
\ln\big(1-\exp(-\frac{8 \pi \kappa}{{\mbox{$k_{_B}T$}}} \: \frac{(A-A_p)}{A_p})\big)
\nonumber \\ &&
+ \ln\big(\frac{A_p {\mbox{$k_{_B}T$}}}{8 \pi \kappa \lambda^2 (N-1) }\big) + 2 \Big].\end{aligned}$$ The frame tension and the fluctuation tension can be calculated [*via*]{} ${\mbox{$\sigma_{\mbox{\tiny f}}$}}= \partial F/\partial A_p$ and ${\mbox{$\sigma_{\mbox{\tiny fluc}}$}}={\mbox{$\sigma_{\mbox{\tiny int}}$}}= -\partial
F/\partial A$, and the results are of course the same as those presented in Refs. for other ensembles. Nevertheless, the results from the Gaussian model clearly cannot be trusted at order $((A-A_p)/A_p)^2$. Nonlinear effects will become important even at small temperatures if $(A-A_p)/A_p$ is large.
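The Legendre-transform prescription can be illustrated by differentiating the Gaussian free energy $F(N,A,A_p)$ above numerically. The parameter values in the sketch below ($k_{B}T=1$, $\kappa=20\,k_{B}T$, $\lambda=1$, $N=1000$, arbitrary units) are purely illustrative; the point is only that the two derivatives of the same function $F$, the frame tension and the internal tension, come out different at finite $(A-A_p)/A_p$.

```python
import numpy as np

kT, kappa, lam, N = 1.0, 20.0, 1.0, 1000  # illustrative units, not fitted values

def F(A, Ap):
    # Gaussian-model free energy F(N, A, A_p) quoted in the text.
    x = 8.0 * np.pi * kappa / kT * (A - Ap) / Ap
    return -0.5 * kT * (N - 1) * (
        np.log(1.0 - np.exp(-x))
        + np.log(Ap * kT / (8.0 * np.pi * kappa * lam**2 * (N - 1)))
        + 2.0)

A, Ap = 1050.0, 1000.0
h = 1e-4
sigma_f = (F(A, Ap + h) - F(A, Ap - h)) / (2.0 * h)      # sigma_f   =  dF/dA_p
sigma_int = -(F(A + h, Ap) - F(A - h, Ap)) / (2.0 * h)   # sigma_int = -dF/dA

print(sigma_f, sigma_int)  # the two tensions are not equal
```

Of course, as stressed in the text, numbers obtained this way inherit all the limitations of the Gaussian model and cannot be trusted beyond linear order in $(A-A_p)/A_p$.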
I wish to stress once more that this whole controversy is not about the relation between the frame tension and the [*internal*]{} tension, but about the [*fluctuation*]{} tension. One should give Farago and Pincus credit for having been the first to derive the relation (4) in Ref. between ${\mbox{$\sigma_{\mbox{\tiny f}}$}}$ and ${\mbox{$\sigma_{\mbox{\tiny int}}$}}$ for compressible membranes with fixed number of lipids [@farago1]. As Fournier correctly points out, this result gives most likely the correct leading order in a diagrammatic expansion in powers of ${\mbox{$k_{_B}T$}}/\kappa$. However, ${\mbox{$\sigma_{\mbox{\tiny int}}$}}$ is a rather uninteresting quantity, since it can neither be controlled nor measured. Farago and Pincus recognized that their result does not carry over to the fluctuation tension, and gave a very general argument why the fluctuation tension should equal the frame tension [@farago2; @farago3], which solely relies on the requirement of ”rotational invariance”, [*i.e.*]{}, gauge invariance. Their reasoning is similar to a classic argument by Cai [[*et al.*]{}]{}[@cai], who showed ${\mbox{$\sigma_{\mbox{\tiny fluc}}$}}={\mbox{$\sigma_{\mbox{\tiny f}}$}}$ for incompressible membranes with variable number of lipids. The conclusion that gauge invariance leads to ${\mbox{$\sigma_{\mbox{\tiny fluc}}$}}={\mbox{$\sigma_{\mbox{\tiny f}}$}}$ has very recently been corroborated by numerical simulations [@farago3].
Notwithstanding, the highly accurate simulations of Ref. suggest that ${\mbox{$\sigma_{\mbox{\tiny fluc}}$}}$ should be slightly renormalized, ${\mbox{$\sigma_{\mbox{\tiny fluc}}$}}={\mbox{$\sigma_{\mbox{\tiny f}}$}}(A_p/A)$. This result is in line with model-free thermodynamic considerations on the relation between different tension parameters in vesicles [@diamant]. Whether and how it can be reconciled with the general arguments for ${\mbox{$\sigma_{\mbox{\tiny fluc}}$}}={\mbox{$\sigma_{\mbox{\tiny f}}$}}$ quoted above still remains to be elucidated.
[66]{} J.-B. Fournier, C. Barbetta, Phys. Rev. Lett. [**100**]{}, 078103 (2008). F. Schmid, EPL [**95**]{}, 28008 (2011). A. Imparato, J. Chem. Phys. [**124**]{}, 154714 (2006). J. Stecki, J. Phys. Chem. B [**112**]{}, 4246 (2008). H. Kleinert, Phys. Lett. [**174B**]{}, 335 (1986). W. Cai, T.C. Lubensky, P. Nelson, T. Powers, J. Phys. II (France) [**4**]{}, 931 (1994). The parameter $\epsilon={\mbox{$k_{_B}T$}}/\kappa$ is the only dimensionless quantity which can be constructed from ${\mbox{$k_{_B}T$}}, \sigma_0$, and $\kappa$ and which is linear in ${\mbox{$k_{_B}T$}}$. J. S. Rowlinson, B. Widom, [*Molecular Theory of Capillarity*]{} (Clarendon, Oxford, 1982). O. Farago, P. Pincus, Eur. Phys. J. E [**11**]{}, 399 (2003). O. Farago, P. Pincus, J. Chem. Phys. [**120**]{}, 2934 (2004). O. Farago, to appear in Phys. Rev. E, extended version at arxiv:1111.0175 (2011). H. Diamant, preprint arxiv:1109.2021.
---
abstract: 'A low maintenance long-term operational cryogenic sapphire oscillator has been implemented at 11.2 GHz using an ultra-low-vibration cryostat and pulse-tube cryocooler. It is currently the world’s most stable microwave oscillator employing a cryocooler. Its performance is explained in terms of temperature and frequency stability. The phase noise and the Allan deviation of frequency fluctuations have been evaluated by comparing it to an ultra-stable liquid-helium cooled cryogenic sapphire oscillator in the same laboratory. Assuming both contribute equally, the Allan deviation evaluated for the cryocooled oscillator is $\sigma_y \approx 1 \times 10^{-15}\tau^{-1/2}$ for integration times $1 < \tau < 10$ s with a minimum $\sigma_y = 3.9 \times 10^{-16}$ at $\tau = 20$ s. The long term frequency drift is less than $5 \times 10^{-14}$/day. From the measured power spectral density of phase fluctuations the single side band phase noise can be represented by $\cal{L}$$_{\phi}(f) = 10^{-14.0}/f^4+10^{-11.6}/f^3+10^{-10.0}/f^2+10^{-10.2}/f+ 10^{-11.0} \, rad^2/Hz$ for Fourier frequencies $10^{-3}<f<10^3$ Hz in the single oscillator. As a result $\cal{L}$$_{\phi} \approx -97.5 \; dBc/Hz$ at 1 Hz offset from the carrier.'
author:
- 'John G. Hartnett and Nitin R. Nand,[^1][^2]'
title: 'Ultra-low vibration pulse-tube cryocooler stabilized cryogenic sapphire oscillator with $10^{-16}$ fractional frequency stability'
---
cryogenic sapphire oscillator, cryocooler, phase noise, frequency stability.
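The single-sideband phase-noise figure quoted in the abstract follows from summing the five power-law terms at a 1 Hz offset. The short sketch below simply evaluates that sum, following the abstract's convention of taking $10\log_{10}$ of $\cal{L}$$_{\phi}(f)$ directly:

```python
import math

# Power-law model of the phase noise from the abstract (rad^2/Hz):
# L(f) = 10^-14.0/f^4 + 10^-11.6/f^3 + 10^-10.0/f^2 + 10^-10.2/f + 10^-11.0
terms = {4: -14.0, 3: -11.6, 2: -10.0, 1: -10.2, 0: -11.0}

def L_phi(f):
    return sum(10.0**c / f**n for n, c in terms.items())

L_dB = 10.0 * math.log10(L_phi(1.0))
print(L_dB)  # close to the -97.5 dBc/Hz quoted at 1 Hz offset
```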
Introduction
============
High-purity single crystal sapphire has extremely low loss at microwave frequencies and at cryogenic temperatures [@Braginsky], and as a result it has been used to develop the most highly stable microwave oscillators on the planet [@Hartnett2006; @Hartnett2010]. Loop oscillators are operated with a cryogenic sapphire resonator tuned to a high-order whispering-gallery mode. This acts both as a very narrow loop filter and as an ultra-high-Q-factor frequency-determining element in the Pound servo used to lock the oscillator frequency to the very narrow natural resonance line in the sapphire [@Locke2008].
Cryogenic sapphire oscillators have been deployed to a number of frequency standards labs [@Watabe2006; @Watabe2007] and have facilitated atomic fountain clocks reaching a performance limited only by quantum projection noise [@Santarelli1999]. Since most atomic fountain clocks [@Wynands2005] have significant dead time in their interrogation cycle, simply due to the fact that they are pulsed devices where one has to wait until the detection process is finished before the next cloud of cold atoms is launched, it is impossible to avoid influences of the Dick effect [@Dick1987; @Dick1990; @Audion1998; @Santarelli1998]. The Dick effect is the introduction of the phase noise of the local reference oscillator, from which the microwave cavity frequency is derived, to the short term stability of the fountain clock. One way to reduce this is to use a lower noise highly stable reference, and this is where the cryogenic sapphire oscillator has been used.
Recently a new project was started at the University of Western Australia to replicate the state-of-the-art cryogenic sapphire oscillator but using an ultra-low vibration designed cryostat [@Chao] and a pulse-tube cryocooler [@Hartnett2010]. The latter paper reported only on the frequency stability when using the low vibration cryocooler and cryostat under different operating conditions. Improvements have since been made and the design optimized. This paper details the cryogenic sapphire oscillator performance after those changes. For the first time we report on the resonator temperature stability, on the oscillator phase noise over 6 decades of offset Fourier frequencies from the carrier, on the oscillator stability evolution for integration times $0.1 \, s < \tau < 10,000 \, s$, and on the limiting noise processes, with a view to future improvements.
Previously cryocoolers have been used to cool the sapphire element in cryogenic sapphire oscillators. One was used in oscillators developed for NASA’s deep space tracking network [@Dick2; @Wang], which were built using a Gifford-McMahon cycle cryocooler, but still had limited performance at about $10^{-14}$ over 1 s of averaging and a fractional frequency drift of about $10^{-13}$/day [@Wang2]. Later, Watabe et al. [@Watabe2003] used a two-stage pulse-tube cryocooler with a lower level of vibration than the Gifford-McMahon type, but the stability was still above $10^{-14}$. The pulse-tube design described in this paper is much lower in vibration at the cryocooler head and incorporates more vibration reduction features [@Chao] to effectively reduce vibrations to a point where they do not impact the operation of the oscillator [@Hartnett2010]. Similar efforts have been made at the FEMTO-ST Institute, Besançon, France. There, a cryogenic oscillator was built for ESA’s deep space tracking network and it has a fractional frequency stability of about $2 \times 10^{-15}$ from 1 to 1000 s of averaging [@Grop1; @Grop2].
This current project has been to develop a local oscillator for VLBI radio astronomy, where an improved short-term stability of about two orders of magnitude over the hydrogen maser offers significant gains. Due to the high altitude of very high frequency VLBI sites it may be that the quality of the receiver signal is limited by the local oscillator. If that is the case, averaging over shorter time intervals using a cryogenic sapphire oscillator as the local oscillator will result in clearer images.
Ultra-low vibration cryostat
============================
In order to reach the stability levels achieved in liquid helium cooled sapphire oscillators the vibrations produced by any cryocooler need to be reduced or compensated for in some way, so that they have minimal effect on the frequency of the oscillator determined by the cryogenic resonator. Wang et al. [@Wang2] used a jacket of helium gas to disconnect the motion of the condenser from the cryogenic resonator. The same approach has been adopted here [@Chao]. See Fig. \[fig\_1\].
In this case a helium gas space was designed into the first version of the cryostat primarily to allow for operation with positive pressure in that space. Hence a bellows was introduced to free the cold head, and with it the “condenser”, from any rigid link to the resonator. At the condenser there is approximately a 12 $\mu m$ vertical motion at about 1.4 Hz. The resonator inside its own vacuum can is attached to the base of the space where the condenser maintains a few liters of liquid helium [@Chao]. This is labeled “coldfinger” in Fig. \[fig\_1\].
It was found however that there was a strong effect on the frequency stability of the oscillator at about 1000 s of averaging when run with a pressure near one atmosphere [@Hartnett2010]. The best condition was found where the lowest pressure (about 50 kPa in practice) was allowed to develop in the helium gas space. Still there is sufficient gas there to provide the convective link to recool the helium as it evaporates. The temperature was maintained at about 3.35 K in the liquid under these conditions. The short term frequency stability (measured at an averaging time of $\tau$ = 1 s) was found to be slightly worse but the long term stability was greatly improved. Therefore the bellows was unnecessary and it was removed in a redesign of the cryostat.
![Schematic of cryocooler cryostat showing vacuum enclosure and the location of sapphire loaded cavity on the “coldfinger” that is cooled by a few liters of liquid helium maintained by the condenser inside the helium gas space. The sapphire temperature is actively controlled using a carbon glass CGR-1-2000 sensor, a power resistor as a heater and a Lakeshore 340 temperature controller. The sensor and the heater are located on the “copper post” supporting the sapphire resonator. []{data-label="fig_1"}](Fig1.eps){width="3.5in"}
![Schematic of microwave loop oscillator. The symbols are as follows: MP is the mechanical phase-shifter to achieve the correct loop phase for sustained oscillation, VCPs are voltage controller phase-shifters used for modulation and error correction of the loop phase lengths, and VCA is the voltage controlled attenuator used to servo control the circulating power in the loop by comparing the output of AD2 (an amplitude detector) to a voltage reference (circuit not shown). AD1 is the amplitude detector connected to the resonator reflection port, the signal from which is used as the input to the Lock-in amplifier to generate the frequency control error signal to servo control the microwave oscillator frequency to that of the resonance.[]{data-label="fig_2"}](Fig2.eps){width="3.5in"}
![(color online) Temperatures at the three points monitored: the condenser, the coldfinger and on the copper post supporting the cavity. The curves indicate the cool-down time from initial start of cryocooler compressor. The sensors on the coldfinger and the copper post are uncalibrated above 100 K.[]{data-label="fig_3"}](Fig3.eps){width="3.5in"}
![(color online) $\left\langle \Delta T \right\rangle$ the *rms* temperature fluctuations expressed in units of \[mK\], calculated from the Allan deviation algorithm applied to time sampled (with a gate time of 1 s) temperatures at both the coldfinger and the copper post. []{data-label="fig_4"}](Fig4.eps){width="3.5in"}
Cryogenic Sapphire Oscillator
=============================
A highly stable oscillator employing a cryogenic temperature stabilized Crystal Systems HEMEX grade [@CS] single-crystal sapphire resonator, cooled with a 2-stage CryoMech PT407-RM ultra-low-vibration pulse-tube cryocooler, has been constructed. The new cryocooled sapphire oscillator contains a nominally identical resonator to those in the liquid helium cooled cryogenic sapphire oscillators used in the lab. The design of sapphire resonators used in these cryogenic oscillators has been previously discussed [@Tobar2006] as has the design of the loop oscillator and its servo control systems [@Hartnett2006; @Locke2008]. See Fig. \[fig\_2\]. However in this case we derive 17 dBm of output power from the microwave amplifier in the loop.
The chosen operational mode is the whispering-gallery $WGH_{16,0,0}$ mode with a frequency of 11.202 GHz. The mode nomenclature means it has 16 azimuthal variations in the electromagnetic field standing wave around half its circumference, and one variation each along the radial and axial axes. The new resonator exhibited a turning point in its frequency-temperature dependence at about 5.9385 K and has a loaded Q-factor of $1.05 \times 10^9$ at the turnover temperature. The turnover temperature was previously reported [@Hartnett2010] as 5.984 K but this difference was found to be due to thermal gradients, which have since been reduced by the introduction of a 4 K radiation shield, made from thin (0.5 mm thick) aluminum (Fig. \[fig\_1\]).
The resonator primary and secondary port coupling coefficients at the turnover temperature were set to 0.80 and about 0.01, respectively. The microwave energy is coupled into the resonator with loop antenna probes, through the lateral cavity wall, made from the same coaxial cables that connect it to the loop oscillator in the room temperature environment. Using an Endwave JCA812-5001 amplifier with sufficient microwave gain (nearly 50 dB), a microwave filter set at the resonance frequency and the correct loop phase, set via a mechanical phase shifter, we get sustained oscillation. The details of the loop oscillator, the temperature control of the sapphire loaded cavity, the power control of the loop oscillator and the Pound frequency servo locking the oscillator frequency to the resonance in the sapphire crystal are the same as used in the liquid helium cooled oscillators [@Locke2008].
In the measurements reported here a liquid helium cooled sapphire oscillator [@Hartnett2006], operating on the same $WGH_{16,0,0}$ mode with a frequency of 11.200 GHz, was used to make frequency and phase comparisons with the new cryocooled sapphire oscillator.
Temperature Stability
=====================
Carbon glass CGR-1-2000 sensors were thermally anchored to the lower vacuum can: one at the base of the coldfinger to monitor the temperature there and another on the copper post supporting the sapphire loaded cavity resonator. The latter is used to actively control the temperature of the sapphire crystal using a Lakeshore 340 temperature controller. As part of the thermal design we introduced 3 stainless steel washers to thermally isolate the copper post from the coldfinger. By using stainless washers of different thickness one is able to increase the thermal time constant and hence also increase the minimum temperature at which the resonator stabilizes.
Figure \[fig\_3\] shows the cool down temperatures as a function of time from initially switching on the compressor of the cryocooler. During the cool down slightly more than 1 atmosphere of pressure is maintained in the helium gas space. After 4 K is reached at the coldfinger the valve to the helium gas supply is closed off and a partial vacuum develops in the helium gas space (actually about 50 kPa). The temperature of the liquid helium falls but once the temperature control is activated (to bring the copper post, hence the sapphire, to the control point of 5.9385 K) the coldfinger temperature is raised to and maintained at about 3.35 K.
![(color online) Power spectral density of temperature fluctuations expressed in units of \[\] calculated from time sampled (with a gate time of 0.08 s) temperatures at both the coldfinger and the copper post. The top (blue) spectrum represents the temperature fluctuations at the coldfinger and the bottom (red) spectrum represents the temperature fluctuations at the copper post supporting the sapphire resonator. This is the temperature control point and therefore represents an upper limit to the temperature fluctuations there.[]{data-label="fig_5"}](Fig5.eps){width="3.5in"}
![(color online) The (red) solid circles represent the Allan deviation $\sigma_y$ calculated from time domain beat data using a $\Lambda$-counter, with gate times of 0.25, 0.5, 1, and 10 s. The (black) dashed line represents the modeled $\sigma_y$ determined from Eq. (\[Sy\]) and the single side band phase noise best fit of Eq. (\[L\]). CSO = cryogenic sapphire oscillator.[]{data-label="fig_6"}](Fig6.eps){width="3.5in"}
The *rms* temperature fluctuations $\left\langle \Delta T \right\rangle$ at the coldfinger and the copper post supporting the sapphire resonator were monitored by taking 150,000 samples, with a 1 s gate time, of the resistance of the two carbon glass sensors at these points. The data were converted to temperature and the Allan deviation of fractional temperature fluctuations $\sigma_T(\tau)$ was calculated from the time series using the *Stable32* software. The latter is related to $\left\langle \Delta T \right\rangle$ by $$\label{sigmaT}
\left\langle \Delta T \right\rangle = \sigma_T(\tau) \; T_0,$$ where $T_0$ is the mean temperature being measured. In these measurements there is no mean drift.
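As an illustrative sketch (not the authors' analysis code), the computation behind Eq. (\[sigmaT\]) can be reproduced in a few lines: take the Allan deviation of the fractional temperature series and rescale by the mean temperature. The synthetic data in the usage note merely stand in for the logged sensor readings.

```python
import numpy as np

def allan_deviation(y, m):
    """Overlapping Allan deviation of an evenly sampled series y at an
    averaging window of m samples (tau = m * gate time)."""
    c = np.cumsum(np.insert(np.asarray(y, float), 0, 0.0))
    avg = (c[m:] - c[:-m]) / m          # means over windows of length m
    d = avg[m:] - avg[:-m]              # differences of adjacent windows
    return np.sqrt(0.5 * np.mean(d ** 2))

def rms_temperature_fluctuation(T, m):
    """<Delta T> = sigma_T(tau) * T0, Eq. (sigmaT), with T0 the mean
    temperature of the sampled series T."""
    T = np.asarray(T, float)
    T0 = T.mean()
    return allan_deviation(T / T0, m) * T0
```

For samples taken with a 1 s gate time, `m = 10` gives $\left\langle \Delta T \right\rangle$ at $\tau = 10$ s; white sensor noise averages down as $1/\sqrt{m}$.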
The resulting *rms* temperature fluctuations $\left\langle \Delta T \right\rangle$ at the coldfinger and the copper post supporting the sapphire resonator are shown in Fig. \[fig\_4\], in units of \[mK\]. From this it is clear that over the region of most interest, $1<\tau<100$ s, the temperature fluctuations at the copper post supporting the sapphire (hence at the sapphire) are $\left\langle \Delta T \right\rangle \leq 10 \; \mu$K. In fact, since we sampled the control thermometer, in this case at the ‘copper post’, which was under active servo control, those temperature fluctuations can only be considered an upper limit. In the case of the ‘coldfinger’ data the sensor was only used to monitor the temperature.
At the frequency temperature turnover point ($T_0=5.9385$ K) we calculated the second derivative $$\frac{1}{f_0} \frac{\partial^2 f}{\partial T^2} = 1.98 \times 10^{-9} \; [K^{-2}]$$ by measuring the oscillator beat as we changed the temperature in the cryocooled sapphire oscillator. The latter can be related to the oscillator frequency stability by [@Hartnett2002] $$\label{sT}
\sigma_y = \left\langle \frac{\Delta f}{f_0}\right\rangle=\frac{1}{f_0} \frac{\partial^2 f}{\partial T^2} \; \delta T \; \left\langle \Delta T \right\rangle,$$ where $f_0$ = 11.202 GHz is the microwave oscillator frequency and $\delta T$ is the error in setting the temperature control set point for the sapphire crystal exactly on the turnover point.
If we solve Eq. (\[sT\]) for $\delta T$ assuming a value $\left\langle \Delta T \right\rangle$ = 10 $\mu$K, which is the value at $\tau = 10$ s, we get $\delta T$ = 25 mK. Certainly we can find the turnover point much better than that, at least by a factor of 10, so temperature fluctuations do not add any significant noise to the oscillator.
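The arithmetic can be checked directly. In this sketch the stability value $\sigma_y \approx 5\times10^{-16}$ at $\tau = 10$ s is an assumed input (it is not quoted explicitly in the text); with it, rearranging Eq. (\[sT\]) reproduces the quoted $\delta T \approx 25$ mK:

```python
coeff = 1.98e-9        # (1/f0) d^2f/dT^2 at the turnover point [K^-2]
dT_rms = 10e-6         # <Delta T> = 10 micro-K at tau = 10 s [K]
sigma_y = 5e-16        # assumed measured stability at tau = 10 s

# Eq. (sT) rearranged: delta_T = sigma_y / (coeff * <Delta T>)
delta_T = sigma_y / (coeff * dT_rms)
print(f"delta_T = {delta_T * 1e3:.0f} mK")   # prints "delta_T = 25 mK"
```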
It may be noted also from Fig. \[fig\_4\] that the *rms* temperature fluctuations at the coldfinger are at least an order of magnitude higher than at the copper post. This is due to the thermal filtering of the stainless steel spacers. Note too that at higher frequencies (i.e. at shorter times $\tau$) there is a stronger suppression effect. This is particularly evident in Fig. \[fig\_5\], where we have converted the time series temperature data sampled with 0.08 s gate time to power spectral densities of temperature fluctuations. In the upper spectrum for the coldfinger temperature fluctuations one can clearly see a peak due to the cryocooler compressor cycle frequency of 1.43 Hz. Higher harmonics are apparent too. However in the lower spectrum for the copper post temperature fluctuations one cannot see this; the signal has been significantly attenuated.
Frequency Stability
===================
The cryocooled oscillator was operated and time domain data of the 1.58 MHz beat between it and the liquid helium cooled oscillator were measured with an Agilent 53132A $\Lambda$-counter [@Dawkins] with gate times of 0.25, 0.5, 1 and 10 s. From this the Allan deviation ($\sigma_y$) of frequency fluctuations was calculated at integration times $\tau$ equal to the gate times and at even multiples of the gate time with the 10 s gate time data using the *Stable32* software. See Fig. \[fig\_6\] for the results. (When using this type of counter the calculated Allan deviation $\sigma_y$ differs slightly from that calculated from a standard counter. The reader is advised to refer to Ref. [@Dawkins] where a full analysis is given.)
In the first cryostat design we observed a significant (of order $10^{-12}$/day) linear frequency drift in the oscillator [@Hartnett2010]. This has now been significantly reduced to a level of slightly below $5 \times 10^{-14}$/day (see Fig. \[fig\_7\]). It was accomplished largely by the introduction of a 4 K radiation shield around the vacuum can containing the sapphire resonator, and better thermal grounding of the coaxial cables at the 4 K stage. Also, after the linear drift was removed, the same time domain data were converted to the power spectral density of fractional frequency fluctuations, $S_y$ \[dB/Hz\]. This is discussed below.
![(color online) The measured fractional frequency offset of the beat between the cryocooled sapphire oscillator and an 11.2 GHz signal produced by picking off a high order harmonic of a step recovery diode driven by the doubled 100 MHz signal from our Kvarz hydrogen maser. Each datum is a 100 s gate time sample. The initial large deviation is due to the poor functioning of the air conditioner in the lab, which was later rectified.[]{data-label="fig_7"}](Fig7.eps){width="3.5in"}
![Phase noise measurement setup used to obtain zero beat between the two cryogenic oscillators (see text for details).[]{data-label="fig_8"}](Fig8.eps){width="2.5in"}
Phase noise
===========
To date there have been very few phase noise measurements of cryogenic sapphire oscillators. Tobar et al. [@Tobar1994] measured the phase noise above 10 Hz Fourier frequencies in two nominally identical superconducting cavity oscillators, Dick et al. [@Dick1991] measured the phase noise of a superconducting sapphire cavity stabilized maser, Watabe et al. [@Watabe2007] measured the phase noise of two 100 MHz signals synthesized from two independent cryogenic sapphire oscillators, Marra et al. [@Marra] measured the phase noise of two nominally identical liquid-helium cooled sapphire oscillators and Grop et al. [@Grop2] measured the phase noise of a cryocooled sapphire oscillator using a liquid-helium cooled oscillator.
Recently we also estimated the low Fourier frequency phase noise from time domain measurements of the beat (sampled at 1 s) between our cryocooled sapphire oscillator and a liquid helium cooled sapphire oscillator in the same lab [@Hartnett2010]. In the following we show measurements of the phase noise from those same two cryogenic oscillators, but using the zero beat derived with the setup shown in Fig. \[fig\_8\].
Since the beat note of the two oscillators is about 1.58 MHz, a few mW of signal power at this frequency was derived from the IF port of a Watkins Johnson M14A mixer. This frequency was also synthesized from a low noise synthesizer that is phase locked to the cryocooled sapphire oscillator. The details of this synthesizer can be found in Hartnett et al. [@Hartnett2009] (see Fig. 5 of that paper). The single side band residual phase noise of the down-converter in the synthesizer was measured to be -124 dBc/Hz at 1 Hz offset on a 100 MHz signal and hence does not contribute to the noise measurements here. See Fig. 3 of Hartnett et al. [@Hartnett2009]. Using 1.58 MHz from the synthesizer as the LO drive on a Minicircuits ZX05-1L-S mixer we produced a zero beat. Final adjustment to get zero voltage on the oscilloscope was achieved with a mechanical microwave phase shifter in the RF input arm to the M14A microwave mixer. Both cryogenic oscillators are sufficiently stable to maintain zero voltage, monitored on an oscilloscope, at the output IF port of the mixer, for at least a few minutes, sufficient to make the measurement on an Agilent 89410A FFT spectrum analyzer. The resulting single sided phase noise for a single oscillator, assuming they both contribute equally, is shown in Fig. \[fig\_9\].
![(color online) The single side band phase noise $S_{\phi}$ \[dBc/Hz\] for a single oscillator assuming both contribute equally. []{data-label="fig_9"}](Fig9.eps){width="3.5in"}
The arrows indicate the bright lines derived from the cryocooler compressor cycle at 1.43 Hz. There is no clear bright line at the fundamental frequency of the cryocooler cycle, but it is clear at the higher harmonics. The hump at a few kHz is due to the bandwidth of the Pound servo used to lock the loop oscillator to the cryogenic resonator. Other lines are mains power harmonics.
In order to fully understand and characterize the different phase noise mechanisms observed in the oscillators we converted the power spectral density of time domain beat data to phase noise and show the result in Fig. \[fig\_10\] together with phase noise data from the zero beat measurement (Fig. \[fig\_9\]). Using a $\Lambda$-counter [@Dawkins], with gate times of 1 s and 10 s, the beat frequency was sampled and converted to $S_y$ using the software *Stable32*, after any linear drift was removed.
Then the single side band phase noise was calculated using, $$\label{Sy}
{\mathcal L}_{\phi}(f)=\frac{1}{2}\frac{\nu_0^2}{f^2} S_y(f),$$ where $\nu_0$ is the microwave oscillator frequency.
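Eq. (\[Sy\]) is a one-line conversion; a minimal helper (our sketch, not code from the paper) expressing it in dBc/Hz:

```python
import math

def phase_noise_dbc(S_y, f, nu0=11.202e9):
    """SSB phase noise L_phi(f) = (1/2) * (nu0/f)^2 * S_y(f), Eq. (Sy),
    returned in dBc/Hz. S_y is the one-sided power spectral density of
    fractional frequency fluctuations at Fourier frequency f [Hz]."""
    return 10.0 * math.log10(0.5 * (nu0 / f) ** 2 * S_y)
```

For a flat $S_y$ the result falls by 20 dB per decade of Fourier frequency, as the $1/f^2$ factor dictates.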
We then found the best fit power series that fits the measured power spectral density of phase fluctuations over each decade of offset Fourier frequencies in Fig. \[fig\_10\]. Hence the single oscillator single side band phase noise can be represented by $$\begin{aligned}
\label{L}
& {\mathcal L}_{\phi}(f) = \frac{10^{-14.0}}{f^4} + \frac{10^{-11.6}}{f^3} + \frac{10^{-10.0}}{f^2} + ... \nonumber \\
& + \frac{10^{-10.2}}{f} + 10^{-11.0}\,\, [rad^2/Hz],\end{aligned}$$ for Fourier frequencies $10^{-3}<f<10^3$ Hz. To represent Eq. (\[L\]) on the plot in Fig. \[fig\_10\] take $10 \log_{10} ({\mathcal L}_{\phi}(f))$ with units \[dBc/Hz\]. From Eq. (\[L\]) we get ${\mathcal L}_{\phi}(1 \; \mathrm{Hz}) \approx -97.5$ dBc/Hz, i.e. at 1 Hz offset from the 11.2 GHz carrier.
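As a consistency check (a sketch using only the fitted coefficients of Eq. (\[L\])), evaluating the power series at $f = 1$ Hz recovers the quoted level:

```python
import math

def L_phi(f):
    """Best-fit single-oscillator SSB phase noise, Eq. (L) [rad^2/Hz]."""
    return (10 ** -14.0 / f ** 4 + 10 ** -11.6 / f ** 3
            + 10 ** -10.0 / f ** 2 + 10 ** -10.2 / f + 10 ** -11.0)

dbc_1Hz = 10 * math.log10(L_phi(1.0))   # about -97.5 dBc/Hz
```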
Then in turn if we calculate the Allan deviation from Eq. (\[L\]) we get the result shown as the dashed curve in Fig. \[fig\_6\]. Here we evaluate the coefficients $h_{-2}, h_{-1}, h_0, h_1, h_2$ of $S_y$, determined using Eqs. (\[Sy\]) and (\[L\]), for a $\Lambda$-counter (see the last column of Table I of Ref. [@Dawkins]). From there we calculated the total contribution to the Allan deviation and have compared that to the measured $\sigma_y$ in Fig. \[fig\_6\].
![(color online) The single side band phase noise $S_{\phi}$ \[dBc/Hz\] from $10^{-3}$ to $10^3$ Hz. The data labeled as counter 1 s and counter 10 s is the phase noise calculated from power spectral density of time domain beat data using a $\Lambda$-counter, with gate times of 1 s and 10 s, respectively. The solid line is the best fit over the whole frequency range. []{data-label="fig_10"}](Fig10.eps){width="3.5in"}
As would be expected the fit is very good (since we have mostly used time domain data) except at integration times $\tau < 1$ s. This may be due to two causes: 1) it is difficult to account for all the bright lines at Fourier frequencies $f > 1$ Hz and 2) there is significant dead time ($\tau_d$) (as a percentage of the sample duration) at both $\tau = 0.25$ and $0.5$ s. Flicker PM introduces an error of the order of 0.43 $\tau_d/\tau$ for a high precision $\Lambda$-counter [@Dawkins].
Discussion
==========
The modeled $\sigma_y = \sigma_0 \times 10^{-15}$ for a single cryogenic oscillator can be characterized by $$\label{modelsy}
\sigma_0^2 = 0.072 + \frac{0.204}{\tau^3} + \frac{0.051}{\tau^2} + \frac{1.063}{\tau} +
0.00121 \; \tau.$$ Therefore it follows that, at integration times $\tau \ll 1$ s the dominant noise process is white PM characterized by $$\sigma_y \approx \frac{4.52 \times 10^{-16}}{\tau^{3/2}}.$$ This may be reduced by the introduction of a lower phase noise amplifier. Over the range $1<\tau<10$ s white FM noise is dominant and $$\sigma_y \approx \frac{1.03 \times 10^{-15}}{\tau^{1/2}}.$$ Then $\sigma_y$ reaches a minimum value ($3.9 \times 10^{-16}$) where there is a limiting noise floor due to a flicker FM process. This can be characterized by $$\sigma_y \approx 2.69 \times 10^{-16}.$$ Assuming one could reduce the white FM noise one could conceivably reach this flicker floor. A flicker floor as high as $2 \times 10^{-15}$ has been previously observed resulting from a noisy ferrite circulator used on the reflection port of the cryogenic resonator [@Chang2000]. The cause of the limiting flicker floor deserves further investigation in these oscillators.
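The decomposition of Eq. (\[modelsy\]) into noise regimes can be verified numerically; the sketch below (assuming nothing beyond the fitted coefficients) reproduces the quoted asymptotes and the limited floor near $4\times 10^{-16}$:

```python
import math

def sigma_y_model(tau):
    """Modeled Allan deviation of a single oscillator, Eq. (modelsy):
    sigma_y = sigma_0 * 1e-15."""
    s0_sq = (0.072 + 0.204 / tau ** 3 + 0.051 / tau ** 2
             + 1.063 / tau + 0.00121 * tau)
    return math.sqrt(s0_sq) * 1e-15

# minimum of the model: the flicker-FM limited floor (tau of a few tens of s)
floor = min(sigma_y_model(t) for t in range(1, 1000))

# short-tau limit dominated by white PM:  sigma_y ~ 4.52e-16 / tau^(3/2)
# long-tau limit, random walk FM:         sigma_y(1e4 s) ~ 3.5e-15
```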
At longer integration times there is random walk FM noise described by $$\sigma_y \approx 3.47 \times 10^{-17} \tau^{1/2}.$$ At $\tau = $1000 s this means $\sigma_y \approx 1.1 \times 10^{-15}$ and at $\tau = $10,000 s $\sigma_y \approx 3.5 \times 10^{-15}$ but in practice temperature and pressure changes in the ambient environment have an effect. See Fig. \[fig\_7\].
Concluding Remarks
==================
In the first design of the cryostat [@Hartnett2010] we reported a significant linear frequency drift but this has now been reduced to less than $5 \times 10^{-14}$/day. We believe this largely resulted from thermal gradients, which we reduced by the installation of a 4 K radiation shield. But still there is room for improvement.
The major advantage of the cryocooled oscillator is that it does not suffer from the need for regular and reliable liquid helium supplies in remote sites. The cryocooler head only requires maintenance after about 3 to 5 years of continuous operation. Because this design incorporates a small volume of liquid helium (that is continually reliquified), if the power fails for short periods the resonator will remain cold. It takes 20 minutes after power fails for the pressure to build sufficiently to vent helium gas through the relief valve. However the resonator will remain cold for several hours, which allows one to easily restart the unit after power is restored. If left too long, more pure helium gas will need to be added. Nevertheless some hours of power failure can be accommodated by the design.
The frequency stability of the cryocooled oscillator is sufficient to meet the requirements of a flywheel oscillator for atomic fountain clocks to overcome the Dick effect [@Wynands2005] and is more practical for standards labs than the version that required continual refilling with liquid helium. Its frequency stability matches that of the best liquid helium cooled cryogenic sapphire oscillators. It can be used as a local oscillator that is a few orders of magnitude more stable than the best hydrogen maser at integration times between 1 and 10 s, whereas at about 1000 s its stability becomes comparable to that of a hydrogen maser.
The hydrogen maser is currently the standard in VLBI radio-astronomy, but at millimeter wave frequencies this can limit the coherence of the receiver signal. A more stable local reference can overcome this limitation. Therefore the cryocooled sapphire oscillator has the potential to provide a much improved local oscillator for remote very high frequency VLBI radio-astronomy sites.
Acknowledgments
===============
The authors would like to thank the Australian Research Council, Poseidon Scientific Instruments, the University of Western Australia, Curtin University of Technology and the CSIRO ATNF (Australian Telescope National Facility); the latter will provide a telescope site where the cryogenic sapphire oscillator will be tested as a local oscillator for VLBI radio astronomy. We also wish to thank E.N. Ivanov and M.E. Tobar for their useful advice, and J-M. Le Floch for assistance with data acquisition software.
[99]{}
V.B. Braginsky, V.P. Mitrofanov, V.I. Panov, *Systems with small dissipation* (Univ. of Chicago Press, Chicago, 1985)
J.G. Hartnett, C.R. Locke, E.N. Ivanov, M.E. Tobar, P.L. Stanwix, *Appl. Phys. Lett.*, **89**(20): 203513, 2006
J.G. Hartnett, N.R. Nand, C. Wang, J-M. Le Floch, *IEEE Trans. Ultrason. Ferroelectrics Freq. Control*, **57**(5): 1034–1038, 2010
C.R. Locke, E.N. Ivanov, J.G. Hartnett, P.L. Stanwix, M.E. Tobar, *Rev. Sci. Instru.*, **79**(5): 051301, 2008
K. Watabe, J.G. Hartnett, C.R. Locke, G. Santarelli, S. Yanagimachi, T. Shimazaki, T. Ikegami, S. Ohshima, *Jap. J. Appl. Phys.* **45**(12): 9234–9237, 2006
K. Watabe, H. Inaba, K. Okumura, F. Hong, J.G. Hartnett, C.R. Locke, G. Santarelli, S. Yanagimachi, K. Minoshima, T. Ikegami, A. Onae, S. Ohshima, H. Matsumoto, *CPEM Special Issue IEEE Trans. on Instr. Meas.* **56**(2): 632–636, 2007
G. Santarelli, P.L., P. Lemonde and A. Clairon, *Phys. Rev. Lett.*, **82**(23): 4619–4622, 1999
R. Wynands and S. Weyers, *Metrologia* **42**, S64–S79, 2005
G.J. Dick, *Proc. 19th Ann. PTTI Systems and Applications Meeting* (Redondo Beach, CA), 133–147, 1987
G.J. Dick, J.D. Prestage, C.A. Greenhall and L. Maleki, *Proc. 22nd Ann. PTTI Systems and Applications Meeting* (Vienna), 497–508, 1990
C. Audoin, G. Santarelli, A. Makdissi and A. Clairon, *IEEE Trans. Ultrason. Ferroelectrics Freq. Control* **45**, 877, 1998
G. Santarelli, C. Audoin, A. Makdissi, P. Laurent, G.J. Dick and A. Clairon, *IEEE Trans. Ultrason. Ferroelectrics Freq. Control* **45**, 887, 1998
C. Wang, J.G. Hartnett, *Cryogenics*, **50**: 336–341, 2010
G.J. Dick and R.T. Wang, *in Proc. 2000 Int. IEEE Freq. Contr. Symp.*: 480–484, 2000
R.T. Wang, G.J. Dick, W.A. Diener, *in Proc. 2004 IEEE Int. Freq. Contr. Symp.*: 752–756, 2004
R.T. Wang and G.J. Dick, *IEEE Trans. on Instr. and Meas.* **48**: 528–531, 1999
K. Watabe, Y. Koga, S. Ohshima, T. Ikegami, and J.G. Hartnett, *in Proc. 2003 IEEE Int. Freq. Contr. Symp.*, 388–390, 2003; *IEICE Trans. Electron.* **E87-C**(9): 1640–1642, 2004
S. Grop, P.Y. Bourgeois, N. Bazin, Y. Kersalé, E. Rubiola, C. Langham, M. Oxborrow, D. Clapton, S. Walker, J. De Vicente, V. Giordano, *Rev. Sci. Instru.*, **81**: 025102, 2010
S. Grop, P-Y. Bourgeois, R. Boudot, Y. Kersalé, E. Rubiola and V. Giordano, *Elect. Lett.*, **46**(6): 420–422, 2010
J.G. Hartnett, M.E. Tobar, E.N. Ivanov, P. Bilski, *Cryogenics* **42**: 45–48, 2002
See www.crystalsystems.com/hemex\_sapph.html
M.E. Tobar, E.N. Ivanov, *IEEE Trans. on MTT* **42**(2): 344–347, 1994
G.J. Dick and R.T. Wang, *IEEE Trans. on Instr. and Meas.* **40**(2): 174–177, 1991
M.E. Tobar, E.N. Ivanov, C.R. Locke, P.L. Stanwix, J.G. Hartnett, A.N. Luiten, R.B. Warrington, P.T.H. Fisk, *IEEE Trans. Ultrason. Ferroelectrics Freq. Control* **53**(12): 2386–2393, 2006
J.G. Hartnett, D.L. Creedon, D. Chambon, G. Santarelli, *in Proc. Frequency Control Symposium, 2009 Joint with the 22nd European Frequency and Time Forum*, 372–375, 2009
S.T. Dawkins, J.J. McFerran, A.N. Luiten, *IEEE Trans. Ultrason. Ferroelectrics Freq. Control*, **54**: 918–925, 2007
G. Marra, D. Henderson and M. Oxborrow, *Meas. Sci. Technol.* **18**: 1224–1228, 2007
S. Chang, A.G. Mann, A.N. Luiten, *Electron. Lett.* **36**(5): 480–481, 2000
[^1]: John G. Hartnett and Nitin R. Nand are with the School of Physics, the University of Western Australia, Crawley, 6009, W.A., Australia.
[^2]: Manuscript received March 20, 2010; ....
---
abstract: 'A model is proposed which generates all oriented $3d$ simplicial complexes weighted with an invariant associated with a topological lattice gauge theory. When the gauge group is $SU_q(2)$, $q^n=1,$ it is the Turaev-Viro invariant and the model may be regarded as a non-perturbative definition of $3d$ simplicial quantum gravity. If one takes a finite abelian group $G$, the corresponding invariant gives the rank of the first cohomology group of a complex $C$: $I_G(C) = rank(H^1(C,G))$, which means a topological expansion in the Betti number $b^1$. In general, it is a theory of the Dijkgraaf-Witten type, $i.e.$ determined completely by the fundamental group of a manifold.'
author:
- |
D.V. BOULATOV[^1]\
Service de Physique Théorique de Saclay[^2],
- 'F-91191 Gif-sur-Yvette Cedex, France'
date: February 1992
title: ' A MODEL OF THREE-DIMENSIONAL LATTICE GRAVITY'
---
SPhT/92-017
Introduction
============
Lattice models have always been a useful tool in field theory. They often helped to look at a theory from another point of view, which led to better understanding and computational progress. The most recent example of such a kind was the matrix models of 2d gravity [@MatMod]. In that case lattice and continuum approaches have been developed in close connection, stimulating each other. The success of the matrix models made it desirable to extend this approach to higher dimensional euclidean gravity. The general idea is rather natural: the integral over all d-dimensional manifolds should be substituted by a sum over all d-dimensional simplicial complexes. If a topology is fixed, a lattice action may be chosen linear in the number of simplexes of every dimension. The partition function in the $3d$ case is of the form
$$Z_{top}=\sum_{C_{top}}e^{\alpha N_1 - \beta N_3} =\sum_{N_1,N_3}Z_{N_1N_3}e^{\alpha N_1 - \beta N_3} \label{Ztop}$$
where $top$ means a fixed topology, $\sum_{C_{top}}$ is the sum over all $3d$ simplicial manifolds of the chosen topology. Let us recall that in odd dimensions manifolds have the zero Euler character, hence
$$\chi = N_0-N_1+N_2-N_3 = 0 \label{chi}$$
$N_k$ is the number of simplexes of the $k$-th dimension in a complex $C$, $i.e.$ points, links, triangles and tetrahedra, respectively. The other constraint is
$$N_2=2N_3$$
which means simply that every triangle is shared by exactly two tetrahedra. Owing to the constraints, if the volume is fixed, only one parameter remains in the 3d and 4d cases and one may hope that it should be related to a bare Newton coupling. Indeed, keeping all tetrahedra equilateral, one gets, from counting deficit angles associated with links, the lattice analog of the mean curvature [@Regge]
$$\int d^3x\, \sqrt{g}\, R \sim a\Big(2\pi N_1-6N_3\cos^{-1}\Big(\frac{1}{3}\Big)\Big) \label{curvature}$$
where $a$ is a lattice spacing.
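Both constraints, and the sign of the lattice mean curvature, can be checked on the smallest closed example: the boundary of a 4-simplex, which triangulates $S^3$. The sketch below is our illustration, not part of the model.

```python
import math
from itertools import combinations

# Boundary of the 4-simplex on vertices {0,...,4}: every (k+1)-subset
# of the 5 vertices is a k-simplex.
N0, N1, N2, N3 = (len(list(combinations(range(5), k + 1))) for k in range(4))

assert N0 - N1 + N2 - N3 == 0     # Euler character vanishes, Eq. (chi)
assert N2 == 2 * N3               # every triangle shared by 2 tetrahedra

def mean_curvature(n1, n3, a=1.0):
    """Lattice mean curvature for equilateral tetrahedra, Eq. (curvature)."""
    return a * (2 * math.pi * n1 - 6 * n3 * math.acos(1.0 / 3.0))

# positive for this sphere-like complex, since N1/N3 = 2 > 3*acos(1/3)/pi
```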
For the fixed spherical topology, a 3-dimensional model of such a type was investigated numerically in refs. [@AgMig; @BK] and a 4-dimensional one, in refs. [@4dsim]. It appears that the micro-canonical partition function $Z_{N_1N_3}$ is exponentially bounded at large $N_3$
$$Z_{N_1N_3} \sim e^{\beta^* N_3}$$
while with respect to $N_1$ at $N_3$ fixed its shape is, roughly speaking, gaussian [@BK]. Varying the lattice analog of the inverse Newton constant, $\alpha$, one can only shift the position of the maximum changing continuously the mean curvature (\[curvature\]). It appeared that the vacuum is not unique. In refs. [@BK] the first order phase transition was found at some $\alpha_c>0$ which separates phases of positive ($\alpha>\alpha_c$) and negative ($\alpha<\alpha_c$) mean curvatures (\[curvature\]). The remarkable feature of the first phase is that the mean curvature per unit volume, $2\pi N_1/N_3-6\cos^{-1}\big(\frac{1}{3}\big)$ , does not depend on $N_3$ at all [@BK]. It means the existence of the continuum thermodynamical limit for the model (\[Ztop\]). A similar transition also exists in the 4-d model and there is some hope that here it is of the second order [@4dsim]. If it is confirmed, one can find a non-trivial continuum limit in its vicinity. Anyhow, the lattice models of gravity are interesting in their own rights.
The aim of this paper is to construct a model which generates all 3-dimensional simplicial complexes within a perturbation expansion so that it might be regarded as a 3-d analog of the matrix models. The naive generalization, the so-called tensor models [@TensorMod], suffers from serious diseases. The main one is that they do not contain the sufficient number of parameters: it is impossible to perform any topological expansion within them. This makes these models uninteresting because of their non-universality. As we learnt from the matrix models, only a perturbative topological expansion might be universal [@Matmod2]. So, one has somehow to control the topology.
It is well-known that, in the 3d space-time, integration over diffeomorphisms and local Lorentz rotations is equivalent to the $ISO(2,1)$ Chern-Simons field theory [@Witten]. Although that connection holds only on-shell, it is clear that, in general, every topology should be somehow weighted. So far the Turaev-Viro $SU(2)$ invariant has been considered as the lattice counterpart of the $ISO(2,1)$ Chern-Simons partition function [@TV; @Ooguri]. Strictly speaking, the corresponding argumentation is heuristic[^3] and it may be better to consider as general a class of models as possible. As will be shown in this paper, the underlying structure of the Turaev-Viro [@TV] (or Ponzano-Regge in its original form [@PR]) partition function is 3-dimensional topological lattice gauge theory, and, simply by taking different gauge groups, one is able to construct different invariants. This “degree of freedom” appears to be rather useful and, as we hope, will lead to better understanding of the problem of $3d$ gravity.
The paper is organized as follows. In Section 2, a model is formulated which generates all $3d$ simplicial complexes weighted with the Ponzano-Regge ([*i.e.*]{} non-regularized) partition function. From the point of view of the Regge calculus [@Regge], this partition function corresponds to a discretization of $3d$ euclidean gravity [@PR]. A natural generalization leads to a whole class of models of such a type. In Section 3, the $Z_n$ gauge group is considered. It is shown that, in some scaling limit, an expansion in the Betti number $b_2$ can be performed. In Section 4, the case of $q$-deformed $SU(2)$ gauge group is considered, when the Ponzano-Regge construction leads to the Turaev-Viro invariant. Section 5 is devoted to a discussion.
General construction
====================
The basic object is a set of real functions of 3 variables $\phi(x,y,z)$ (where $x,y,z \in G$ for some compact group $G$) invariant under simultaneous right shifts of all variables by $u \in G$.
$$\phi(x,y,z) = \phi(xu,yu,zu);\qquad \overline{\phi}(x,y,z) = \phi(x,y,z)$$ \[defofphi\] We also demand the cyclic symmetry
$$\phi(x,y,z) = \phi(z,x,y)= \phi(y,z,x)$$ \[cycsim\] The general Fourier decomposition of such a function is of the form
$$\phi(x,y,z) = \sum_{j_1,j_2,j_3} \sum_{\{m,n,k\}}
\Phi^{m_1m_2m_3;k_1k_2k_3}_{\;j_1,\;j_2,\;j_3}
D^{j_1}_{m_1,n_1}(x) D^{j_2}_{m_2,n_2}(y) D^{j_3}_{m_3,n_3}(z)
\int du\, D^{j_1}_{n_1k_1}(u)\, D^{j_2}_{n_2k_2}(u)\, D^{j_3}_{n_3k_3}(u)$$ \[phiexp\] where $n_i,m_i,k_i=1,\ldots,d_{j_{i}}$ ($d_j$ is the dimension of an irrep $j$) and $D^j_{nm}(x)$ are matrix elements obeying the orthogonality condition
$$\int dx\, D^j_{nm}(x)\, \overline{D}^{j'}_{n'm'}(x) = {1\over d_j}\,\delta^{j,j'}\delta_{n,n'}\delta_{m,m'}$$ \[orthcon\]
Throughout the paper, all measures are assumed to be normalized to unity:
$$\int_G dx = {1\over \#G}\sum_{g\in G}1 = 1$$ \[normaliz\]
The integral of three matrix elements is proportional to a product of two Clebsch-Gordan coefficients $\langle j_1j_2n_1n_2 \mid j_1j_2j_3n_3 \rangle$. We shall use the following notation:
$$\int dx\, D^{j_1}_{m_1n_1}(x)D^{j_2}_{m_2n_2}(x)D^{j_3}_{m_3n_3}(x) =
{\left( \begin{array}{ccc}
j_1&j_2&j_3 \\ m_1&m_2&m_3
\end{array} \right)}
{\left( \begin{array}{ccc}
j_1&j_2&j_3 \\ n_1&n_2&n_3
\end{array} \right)}$$ \[int3me\]
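For an abelian group this relation becomes elementary: the matrix elements of $Z_p$ are the one-dimensional characters, and the integral of three of them enforces the one-dimensional “triangle equality” mentioned in the Discussion. A minimal numerical sketch (the function names and the value of $p$ are our illustrative choices):

```python
import cmath

p = 7  # an illustrative choice of cyclic group Z_p

def chi(k, g):
    # one-dimensional matrix elements (characters) of Z_p
    return cmath.exp(2j * cmath.pi * k * g / p)

def int_three(j1, j2, j3):
    # normalized group integral of three matrix elements, cf. (int3me)
    return sum(chi(j1, g) * chi(j2, g) * chi(j3, g) for g in range(p)) / p

# the abelian analog of the two 3j-symbols is just delta_{j1+j2+j3, 0 mod p}
for j1 in range(p):
    for j2 in range(p):
        for j3 in range(p):
            expected = 1.0 if (j1 + j2 + j3) % p == 0 else 0.0
            assert abs(int_three(j1, j2, j3) - expected) < 1e-9
```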
In the $SU(2)$ case, ${\left( \begin{array}{ccc}
j_1&j_2&j_3 \\ n_1&n_2&n_3
\end{array} \right)}$ is called the Wigner $3j$-symbol:
$${\left( \begin{array}{ccc}
j_1&j_2&j_3 \\ n_1&n_2&n_3
\end{array} \right)}
={(-1)^{j_1-j_2+n_3}\over \sqrt{2j_3+1}}\,\langle j_1j_2n_1n_2 \mid j_1j_2j_3\,{-n_3} \rangle$$ \[def3j\] The Fourier coefficients,
$$A^{m_1,m_2,m_3}_{\;j_1,\;j_2,\;j_3}=
\frac{1}{\sqrt{(2j_1+1)(2j_2+1)(2j_3+1)}}
\sum_{k_1k_2k_3} \Phi^{m_1m_2m_3;k_1k_2k_3}_{\;j_1,\;j_2,\;j_3}
{\left( \begin{array}{ccc}
j_1&j_2&j_3 \\ k_1&k_2&k_3
\end{array} \right)}$$
are complex numbers having the symmetries of $3j$-symbols, except for the condition
$$m_1+m_2+m_3=0$$
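As a concrete check of the symbols used here, the $3j$-symbol can be computed from the standard Racah sum formula (a self-contained sketch; the function name and the restriction to integer spins are ours), confirming both the selection rule $m_1+m_2+m_3=0$ and the orthogonality that follows from (\[orthcon\]):

```python
import math

def three_j(j1, j2, j3, m1, m2, m3):
    # Wigner 3j-symbol for integer spins via the standard Racah sum formula
    if m1 + m2 + m3 != 0 or not (abs(j1 - j2) <= j3 <= j1 + j2):
        return 0.0
    if abs(m1) > j1 or abs(m2) > j2 or abs(m3) > j3:
        return 0.0
    f = math.factorial
    delta = f(j1+j2-j3) * f(j1-j2+j3) * f(-j1+j2+j3) / f(j1+j2+j3+1)
    pre = math.sqrt(delta * f(j1+m1)*f(j1-m1)*f(j2+m2)*f(j2-m2)
                    * f(j3+m3)*f(j3-m3))
    s = 0.0
    for k in range(max(0, j2-j3-m1, j1-j3+m2),
                   min(j1+j2-j3, j1-m1, j2+m2) + 1):
        s += (-1)**k / (f(k)*f(j1+j2-j3-k)*f(j1-m1-k)*f(j2+m2-k)
                        * f(j3-j2+m1+k)*f(j3-j1-m2+k))
    return (-1)**(j1-j2-m3) * pre * s

# selection rule: the symbol vanishes unless m1 + m2 + m3 = 0
assert three_j(1, 1, 1, 1, 1, 0) == 0.0

# orthogonality: sum_{m1,m2} (3j)^2 = 1/(2 j3 + 1)
total = sum(three_j(1, 1, 2, m1, m2, 0)**2
            for m1 in (-1, 0, 1) for m2 in (-1, 0, 1))
assert abs(total - 1.0 / 5.0) < 1e-12
```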
An action of interest can be constructed from these functions as follows:
$$S =\frac{1}{2}\int dx\,dy\,dz\ \phi^2 (x,y,z)
- \frac{\lambda}{4!}\int dx\,dy\,dz\,du\,dv\,dw\ \phi(x,y,z)\phi(x,u,v)\phi(y,v,w)\phi(z,w,u)$$ \[action1\]
If the variables are attached to edges, the first term can be regarded as two glued triangles and the second, as four triangles forming a tetrahedron. Integrating out all group variables, one gets in the $SU(2)$ case
$$S = \frac{1}{2}\sum_{\{j_1,j_2,j_3\}}\sum_{\{-j_k \le m_k \le j_k\}}
\mid A_{\;j_1,\;j_2,\;j_3}^{m_1,m_2,m_3} \mid^2 -$$$$-\frac{\lambda}{4!}\sum_{\{j_1,...,j_6\}}\sum_{\{-j_k \le m_k \le j_k\}}
(-1)^{\sum_k^6 (m_k+j_k)}\,
A_{\ j_1,\ j_2,\ j_3}^{-m_1,-m_2,-m_3}
A_{\;j_3,\ j_4,\;j_5}^{m_3,-m_4,m_5}
A_{\;j_1,\ j_5,\;j_6}^{m_1,-m_5,m_6} A_{\;j_2,\ j_6,\;j_4}^{m_2,-m_6,m_4}
{\left\{ \begin{array}{ccc}
j_1&j_2&j_3 \\ j_4&j_5&j_6
\end{array} \right\}}$$ \[Sintout\]
If the coefficients $A_{\;j_1,\;j_2,\;j_3}^{m_1,m_2,m_3}$ obey the condition following from the reality of $\phi(x,y,z)$ (eq. (\[defofphi\])),
$$\overline{A}_{\;j_1,\;j_2,\;j_3}^{m_1,m_2,m_3} = (-1)^{\sum_i^3(m_i+j_i)}A_{\ j_1,\ j_2,\ j_3}^{-m_1,-m_2,-m_3}$$ \[conjA\]
and the measure of integration is taken to be
$${\cal D}A = \prod_{\{j,j',j''\}}\prod_{\{-j \le m \le j\}} dA_{\;j,\;j',\;j''}^{m,m',m''}$$
where $\prod_{\{j,j',j''\}}$ means the product over all triplets $(j,j',j'')$ obeying the triangle inequality: $\mid j' - j''\mid \le j \le j' + j''$, then the partition function will generate all possible $3d$ simplicial complexes weighted with corresponding (non-regularized) Ponzano-Regge partition functions, [*i.e.*]{}
$$Z = \int {\cal D}A\; e^{-S} = \sum_{\{C\}} \lambda^{N_3(C)} \sum_{\{j\}} \prod_{L \in C} (2j_{\hbox{}_L}+1) \prod_{T \in C}
{\left\{ \begin{array}{ccc}
j_{\hbox{}_{T_1}}&j_{\hbox{}_{T_2}}&j_{\hbox{}_{T_3}} \\ j_{\hbox{}_{T_4}}&j_{\hbox{}_{T_5}}&j_{\hbox{}_{T_6}}
\end{array} \right\}}$$ \[Z\]
where $\sum_{\{C\}}$ is the sum over all oriented $3d$ simplicial complexes; $N_3(C)$ is the number of tetrahedra in a complex $C$, $\sum_{\{j\}}$ is the sum over all possible configurations of $j$’s (colorings of links); $\prod_{\{L \in
C\}}$ is the product over all links $L$ in $C$; $\prod_{\{T \in C\}}$ is the product over all tetrahedra $T$ ($j_{\hbox{}_{T_i}}, i=1,\ldots,6$; are six momenta attached to edges of a tetrahedron $T$). ${\left\{ \begin{array}{ccc}
j_{\hbox{}_{T_1}}&j_{\hbox{}_{T_2}}&j_{\hbox{}_{T_3}} \\ j_{\hbox{}_{T_4}}&j_{\hbox{}_{T_5}}&j_{\hbox{}_{T_6}}
\end{array} \right\}}$ is the Racah-Wigner $6j$-symbol attached to a tetrahedron $T$. We use the normalization for which the $6j$-symbol is symmetric with respect to permutations of columns:
$${\left\{ \begin{array}{ccc}
j_1&j_2&j_3 \\ j_4&j_5&j_6
\end{array} \right\}}=
\sum_{\{-j_i\leq m_i \leq j_i\}} (-1)^{j_4+j_5+j_6+m_4+m_5+m_6}
{\left( \begin{array}{ccc}
j_1&j_2&j_3 \\ m_1&m_2&m_3
\end{array} \right)}
{\left( \begin{array}{ccc}
j_5&j_6&j_1 \\ m_5&-m_6&m_1
\end{array} \right)}
{\left( \begin{array}{ccc}
j_6&j_4&j_2 \\ m_6&-m_4&m_2
\end{array} \right)}
{\left( \begin{array}{ccc}
j_4&j_5&j_3 \\ m_4&-m_5&m_3
\end{array} \right)}$$
Eq. (\[Z\]) is formal and has to be regularized somehow. Let us postpone a discussion of that and first make several remarks. Eq. (\[defofphi\]) means that we are considering functions of two independent variables. If we drop the cyclic symmetry condition (\[cycsim\]), then we have the representation
$$\phi(x,y,z) = f(xz^+,yz^+) = \sum_{j_1;m_1,n_1}\sum_{j_2;m_2,n_2}
F^{m_1,n_1;}_{\ j_1} \hbox{}^{m_2,n_2}_{\ j_2}
D^{j_1}_{m_1n_1}(xz^+) D^{j_2}_{m_2n_2}(yz^+)$$$$=\sum_{j_1;m_1,n_1}\sum_{j_2;m_2,n_2}\sum_{j_3;m_3,n_3}
F^{m_1,n_1;}_{\ j_1} \hbox{}^{m_2,n_2}_{\ j_2}
\langle j_1 j_2 m'_1 m'_2 \mid j_1j_2j_3\,{-n_3} \rangle
\langle j_1 j_2 n_1 n_2 \mid j_1j_2j_3\,{-m_3} \rangle\,(-1)^{m_3+n_3}\,
D^{j_1}_{m_1{m'}_1}(x) D^{j_2}_{m_2{m'}_2}(y) D^{j_3}_{m_3n_3}(z)$$
Hence,
$$\widetilde{A}_{\;j_1,\;j_2,\;j_3}^{m_1,m_2,m_3} = \sum_{n_1,n_2}
{\left( \begin{array}{ccc}
j_1&j_2&j_3 \\ n_1&n_2&m_3
\end{array} \right)}
F^{m_1,n_1}_{\ j_1} \hbox{}^{m_2,n_2}_{\ j_2}$$
From eq.(\[conjA\]) it follows that
$$\overline{F}^{m_1,n_1;}_{\ j_1} \hbox{}^{m_2,n_2}_{\ j_2}= (-1)^{m_1+m_2+n_1+n_2}\, F^{-m_1,-n_1;}_{\ j_1} \hbox{}^{-m_2,-n_2}_{\ j_2}$$ \[conjF\] and, if the correlator of the Fourier coefficients is of the form
$$\langle F^{m_1,n_1;}_{\ j_1} \hbox{}^{m_2,n_2}_{\ j_2}\;
\overline{F}^{m'_1,n'_1;}_{\ j'_1} \hbox{}^{m'_2,n'_2}_{\ j'_2}\rangle = (2j_1+1)(2j_2+1)\, \delta_{j_1,j'_1}\delta_{j_2,j'_2}\,
\delta^{m_1+m'_1,0}\delta^{m_2+m'_2,0}\, \delta^{n_1+n'_1,0}\delta^{n_2+n'_2,0}$$ \[corrF\] then
$$\langle
\widetilde{A}_{\;j_1,\;j_2,\;j_3}^{m_1,m_2,m_3}
\overline{\widetilde{A}}_{\;j'_1,\;j'_2,\;j'_3}^{m'_1,m'_2,m'_3}\rangle =
\sum_{n_1,n_2}
{\left( \begin{array}{ccc}
j_1&j_2&j_3 \\ n_1&n_2&m_3
\end{array} \right)}
{\left( \begin{array}{ccc}
j_1&j_2&j'_3 \\ n_1&n_2&-m'_3
\end{array} \right)}$$ $$\times\,\delta_{j_1,j'_1}\delta_{j_2,j'_2}\, \delta^{m_1+m'_1,0}\delta^{m_2+m'_2,0} = \delta_{j_1,j'_1}\delta_{j_2,j'_2}\delta_{j_3,j'_3}\, \delta^{m_1+m'_1,0}\delta^{m_2+m'_2,0}\delta^{m_3+m'_3,0}$$
In terms of the function $f(x,y)$ the action (\[action1\]) takes the form
$$S =\frac{1}{2}\int dx\,dy\ f^2 (x,y) - \frac{\lambda}{4!}\int dx\,dy\,du\,dv\,dw\ h(x,y)h(xw,uw)h(v,u)h(vw,yw)$$ \[action2\] where $h(x,y)=\frac{1}{3}(f(x,y)+f(yx^+,x^+)+f(y^+,xy^+))$.
For a general compact group, the action (\[action2\]), as well as (\[action1\]), together with the reality condition
$$\overline{f}(x,y) = f(x,y)$$
may be regarded as a definition of the model. The underlying mathematical structure here is topological lattice gauge theory. This can be seen as follows. The action (\[action1\]) generates $3d$ complexes so that two $3j$-symbols are attached to every triangle. Such a combination can be obtained by integrating three matrix elements as in eq. (\[int3me\]). All lower indices of the matrix elements are summed up inside tetrahedra, forming $6j$-symbols. It is easy to see that the partition function (\[Z\]) can then be written in the form
$$Z=\sum_{\{C\}} \lambda^{N_3}\!\!\!\!\sum_{\{j;n,m\}}\prod_{\{L\in C\}}
d_{j_{\hbox{}_L}}\prod_{t\in C}
\int dx_t\, D^{j_{t_1}}_{m_{t_1}n_{t_1}}(x_t)D^{j_{t_2}}_{m_{t_2}n_{t_2}}(x_t)
D^{j_{t_3}}_{m_{t_3}n_{t_3}}(x_t)$$$$=\sum_{\{C\}} \lambda^{N_3}\int\prod_{t\in C}dx_t
\prod_{\{L\in C\}}\delta(\prod_{around\,L}x_{t_{\hbox{}_L}},1)$$ $$=\sum_{\{C\}} \lambda^{N_3}\,Z_{gauge}(C)$$ \[Z2\]
where $\prod_{\{L\in C\}}$ and $\prod_{\{t\in C\}}$ are products over all links and triangles, respectively. The matrix elements, multiplied around links, produce characters and then, summing over representations, one gets a $\delta$-function for every link. Its argument, $\prod_{around\,L}x_{t_{\hbox{}_L}}$, is the product of group elements $x_{t_{\hbox{}_L}}$ around a link $L$. Triangles are oriented, a change of orientation leading to conjugation: $x\to x^+=x^{-1}$. All products have to be performed taking the orientation into account. Although this is not the case for a general compact group, in the $SU(2)$ case our model generates only oriented complexes. This follows immediately from (\[conjA\]) (or (\[conjF\])) and the form of the action (\[Sintout\]).
If a complex is fixed, the model is equivalent to $3d$ gauge theory with fields defined on links of the dual $\phi^4$ graphs and the pure gauge condition on dual faces. If the $\delta$-functions were substituted by, for example, the heat-kernel weights
$$\delta(x)=\sum_j d_j\chi_j(x) \quad\longrightarrow\quad W(x)=\sum_j d_j\chi_j(x)\, e^{-\beta C_j}$$ \[heatkernel\]
where $C_j$ is the quadratic Casimir, one would have just ordinary lattice gauge theory. The former is the weak coupling limit of the latter.
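For $G=Z_p$ this substitution is easy to visualize: all $d_j=1$, and with an explicit coupling $\beta$ in the weight (our notation) and the illustrative choice $C_j=j^2$, the heat-kernel weight collapses back to the group $\delta$-function in the weak coupling limit. A minimal sketch:

```python
import cmath

def heat_kernel(g, p, beta):
    # W(g) = sum_j d_j chi_j(g) e^{-beta C_j} for G = Z_p, with d_j = 1,
    # chi_j(g) = exp(2 pi i j g / p) and the illustrative choice C_j = j^2
    return sum(cmath.exp(2j * cmath.pi * k * g / p - beta * k * k)
               for k in range(p))

p = 5
# beta -> 0 recovers the group delta-function: delta(g) = p at g = 0, else 0
for g in range(p):
    target = p if g % p == 0 else 0
    assert abs(heat_kernel(g, p, 1e-9) - target) < 1e-6
```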
Now, to prove the topological invariance, we need only formal properties of the group measure and the $\delta$-functions. We have to investigate the transformations of $Z_{gauge}(C)$ under topology-preserving deformations of a complex $C$. Two complexes are of the same topological type (homeomorphic) if they can be connected by a sequence of elementary “continuous” deformations (moves). These moves can be defined as follows [@AgMig; @BK]: if some subcomplex of a $d$-dimensional complex can be identified with a part of the boundary of the $(d+1)$-dimensional simplex, it is substituted by the rest of the boundary. In the 3-dimensional case there are two pairs of mutually inverse moves, shown in Fig. 1 (a) and (b). The first pair is called the triangle-link exchange (the dual diagrams are shown in Fig. 2(a)). A pair of tetrahedra glued together along faces is substituted by three tetrahedra sharing the new link. For the first configuration we have an integral of the type
$$\int dx_1dx_2dx_3\,dy_1dy_2dy_3\,dw\ D^{j_1}_{m_1n_1}(x_1x_2^+) D^{j_2}_{m_2n_2}(x_2x_3^+)D^{j_3}_{m_3n_3}(x_3x_1^+)\,
D^{j'_1}_{m'_1n'_1}(y_1y_2^+)D^{j'_2}_{m'_2n'_2}(y_2y_3^+) D^{j'_3}_{m'_3n'_3}(y_3y_1^+)\,
D^{l_1}_{a_1b_1}(x_1wy_1^+)D^{l_2}_{a_2b_2}(x_2wy_2^+) D^{l_3}_{a_3b_3}(x_3wy_3^+)$$ \[2tetr\]
where the $x$’s and $y$’s stand for faces of the upper and lower tetrahedra, respectively, and $w$ for the common face. It is clear that the dependence on $w$ can be removed by the shift $x_1\to x_1w^+$; $x_2\to
x_2w^+$; $x_3\to x_3w^+$. In the second case the situation is quite analogous: there are three triangles ($w$’s) and one link ($\delta$-function) inside the subcomplex. The counterpart of eq. (\[2tetr\]) is
$$\int dx_1dx_2dx_3\,dy_1dy_2dy_3\,dw_1dw_2dw_3\ D^{j_1}_{m_1n_1}(x_1w_1x_2^+) D^{j_2}_{m_2n_2}(x_2w_2x_3^+)D^{j_3}_{m_3n_3}(x_3w_3x_1^+)\,
D^{j'_1}_{m'_1n'_1}(y_1w_1y_2^+)D^{j'_2}_{m'_2n'_2}(y_2w_2y_3^+) D^{j'_3}_{m'_3n'_3}(y_3w_3y_1^+)\,
D^{l_1}_{a_1b_1}(x_1y_1^+)D^{l_2}_{a_2b_2}(x_2y_2^+) D^{l_3}_{a_3b_3}(x_3y_3^+)\,\delta(w_1w_2w_3,1)$$ \[3tetr\]
where all $w$-integrations are trivial due to the $\delta$-function.
Instead of proving the invariance under the moves in Fig. 1(b), we can consider the case of two tetrahedra glued along three faces (Fig. 2(b)). This configuration can be obtained by removing one of the links in Fig. 1(b) via the triangle-link exchange. The resulting integral is of the form
$$\int dw_1dw_2dw_3\ \delta(w_1w_2^+,1)\delta(w_2w_3^+,1)\delta(w_3w_1^+,1)\, D^{j_1}_{m_1n_1}(w_1)D^{j_2}_{m_2n_2}(w_2) D^{j_3}_{m_3n_3}(w_3)
=\delta(1,1)\int dw\,D^{j_1}_{m_1n_1}(w)D^{j_2}_{m_2n_2}(w) D^{j_3}_{m_3n_3}(w)$$ \[pointout\]
which means that, up to $\delta(1,1)=rank(G)$, these two glued tetrahedra are equivalent to a single triangle. We see that the partition function (\[Z\]) can finally be written as
$$Z=\sum_{\{C\}} \lambda^{N_3}\,(rank(G))^{N_0-1}\,I_{\hbox{}_G}(C)$$ \[Z3\]
where $I_{\hbox{}_G}(C)$ is a topological invariant associated with a group $G$.
For finite groups, our model is well defined, as in this case the rank is equal to the number of group elements. For continuous compact groups, the $q$-deformation provides us with a regularization of the model (notice that the substitution (\[heatkernel\]) destroys the topological properties of the gauge theory). For example, in the $SU_q(2)$ case, $q^n=1$,
$$rank(SU_q(2))=\sum_{j}\,[2j+1]^2$$ \[rankSU(2)\]
Indeed, the representations of the $q$-analogs of compact groups resemble the classical representations. And, as long as one works with $3j$- and $6j$-symbols without permuting momenta, as we did above, the $R$-matrix does not appear and all formal manipulations coincide in both cases[^4]. We see that quantum groups are here on an equal footing with finite groups. That is why, in the next section, we shall concentrate on the simpler latter case.
Topological gauge theory for finite groups and the $Z_p$ model.
===============================================================
The topological lattice model that appeared in the previous section is a particular example of the Dijkgraaf-Witten theory [@DijkW]. Actually, it is the simplest model of such a type. Dijkgraaf and Witten introduced a topological action, which exists, however, not for all groups. In our case there is no action and, therefore, there are no corresponding restrictions. Two other peculiarities are that (i) the gauge fields are defined on dual edges rather than on links of a triangulation; (ii) since $\sum_{\{C\}}$ runs over all possible complexes, we should take non-manifolds into consideration as well. Nevertheless, the model bears the general properties of the Dijkgraaf-Witten one, the main one being that its partition function is determined completely by the fundamental group.
Among the lattices generated perturbatively there are some that, strictly speaking, do not obey the definition of a simplicial complex (for example, the one shown in Fig. 2(b)). This forces us to work with more general cell complexes.
From now on, we shall consider simultaneously triangulations and dual $\phi^4$ lattices, denoting quantities defined for the latter by the tilde ${\widetilde{\ }}$. So, at the beginning we have a cell complex dual to a triangulation: 0-cells are counterparts of tetrahedra, 1-cells of triangles, 2-cells of links and 3-cells of vertices of the triangulation. Since analogs of eqs. (\[2tetr\],\[3tetr\],\[pointout\]) are valid for general polyhedra as well, we can shrink a 1-cell, identifying the two 0-cells forming its boundary (Fig. 3(a)); delete a 2-cell, joining the two 3-cells whose common boundary it was (Fig. 3(b)); and drop a subcomplex homotopic to a $3d$ spherical ball (Fig. 3(c)). Of course, all these manipulations are possible only when the final complex is homotopic to the initial one. So, we have at hand the powerful apparatus of cell homology theory.
Given a cell complex ${\widetilde{C}}$, one can easily calculate the corresponding invariant as follows:\
1) all complexes under consideration should be put in the form where there are only one 3-cell $\sigma^3$ and only one 0-cell $\sigma^0$. It is always possible for oriented connected manifolds.\
2) a gauge variable $g_i\in G$ is put into correspondence to every 1-cell $\sigma^1_i;\ i=1,\ldots,n_1$.\
3) each 2-cell $\sigma^2_j;\ j=1,\ldots,n_2$; gives a $\delta$-function with the argument equal to the ordered product of the gauge variables along its boundary, $\partial
\sigma^2_j$, taking the orientation into account (the inversion $g\to g^{-1}$ corresponds to moving in the opposite direction). If the boundary is empty, one has to substitute $\delta(1,1)=rank(G)$.\
Finally, one gets
$${\widetilde{I}}_G({\widetilde{C}})=\int_{G}\prod_{i=1}^{n_1}dg_i\ \prod_{j=1}^{n_2}\delta\Big(\prod_{\partial \sigma^2_j} g,\,1\Big)$$ \[invariant\] where $n_1$ and $n_2$ are the numbers of 1-cells and 2-cells, respectively.
Let us point out the simple fact that the $n_2$ conditions
$$\prod_{\partial \sigma^2_j} g=1$$ can be regarded as the defining relations of the fundamental group $\pi_1({\widetilde{C}})$, if one considers $G$ as the free group on $n_1$ generators.
From eq. (\[invariant\]) it follows that
$${\widetilde{I}}_G({\widetilde{C}})=rank\big(\pi_1({\widetilde{C}})\stackrel{h}{\mapsto}G\big)$$ \[invariant2\] which is reminiscent of theories of the Dijkgraaf-Witten type. $\pi_1({\widetilde{C}})\stackrel{h}{\mapsto}G$ means the homomorphism of $\pi_1({\widetilde{C}})$ into a finite group $G$ defined by the above construction.
From eq. (\[invariant\]) it follows that ${\widetilde{I}}_G({\widetilde{C}})$ is multiplicative with respect to the connected sum of two $3d$ complexes, ${\widetilde{C}}={\widetilde{C}}_1\#{\widetilde{C}}_2$,
$${\widetilde{I}}_G({\widetilde{C}})={\widetilde{I}}_G({\widetilde{C}}_1)\,{\widetilde{I}}_G({\widetilde{C}}_2)$$ The operation $\#$ is commutative; hence, eq. (\[invariant\]) can be regarded as a representation of this semi-group.
An interesting case is that of abelian groups. Since
$$H_1({\widetilde{C}},G)=\pi_1({\widetilde{C}})/[\pi_1({\widetilde{C}}),\pi_1({\widetilde{C}})]$$ ([*i.e.*]{} the first homology group is the abelianized fundamental group), we have in this case
$${\widetilde{I}}_G({\widetilde{C}})=rank(H_2({\widetilde{C}},G))$$ \[Iabel\] where $H_2({\widetilde{C}},G)$ is the second homology group of the complex ${\widetilde{C}}$ with coefficients in $G$.
To prove eq. (\[Iabel\]), let us note that there are only one 0-cell $\sigma^0$ and only one 3-cell $\sigma^3$, and for all 1-cells $\sigma^1_i$; $i=1,\ldots,n_1$;
$$\partial\sigma^1_i=0$$ where $\partial$ is the standard homological boundary operator ($\partial:
\sigma^k_* \to \sigma^{k-1}_*$). Because of the orientability,
$$\partial\sigma^3 =0$$ as well ([*i.e.*]{} there are no exact 2-cells) and, hence, every 2-cell having zero boundary gives a generator of $H_2({\widetilde{C}},G)$. But this is exactly the condition coded in the arguments of the $\delta$-functions in eq. (\[invariant\]): $rank(H_2({\widetilde{C}},G))$ is equal to the number of times the $\delta$-functions “have worked”.
The group $H_2({\widetilde{C}},G)$ is isomorphic to $H^1(C,G)$ by the Poincaré duality generated by the transformation from $\phi^4$ graphs to triangulations and $vice$ $versa$.
Eq. (\[Iabel\]) allows us to determine the Betti number [**mod**]{} $G$:
$$b_1=\left[\log_p {\widetilde{I}}_{Z_p}({\widetilde{C}})\right]$$ where $[x]$ means the integer part of $x$.
Now, let us give several simple examples for the cyclic group $Z_p$.\
1) $Sphere\ S^3$. There are no 1- and 2-cells at all.
$${\widetilde{I}}_G(S^3)=1$$ 2) $Lenses\ L^q=S^3/Z_q$. There is one 1-cell and one 2-cell: $\partial
\sigma^2 = q\sigma^1$.
$${\widetilde{I}}_G(L^q)=\int_G dg\ \delta(g^q,1)$$
$${\widetilde{I}}_{Z_p}(L^q)=\left\{\begin{array}{ll}
p& ,\ p=q\\
1& ,\ p\perp q
\end{array}\right.$$ 3) $S^1\times S^2$. There is one 1-cell and one 2-cell: $\partial
\sigma^2 = 0$.
$${\widetilde{I}}_G(S^1\times S^2)=\int_G dg\ \delta(1,1) = rank(G)$$
$${\widetilde{I}}_{Z_p}(S^1\times S^2)=p$$ 4) $S^1\times M^2_r$, where $M^2_r$ is a $2d$ oriented surface with $r$ handles, $r\ge 1$:
$${\widetilde{I}}_G(S^1\times M^2_r)=\int_G dg\prod_{i=1}^r df_idh_i \;
\delta\big(\prod_{j=1}^rh_jf_jh_j^{-1}f_j^{-1},1\big)
\prod_{j=1}^r\delta\big(gh_jg^{-1}h_{j}^{-1},1\big)\,\delta\big(gf_jg^{-1}f_{j}^{-1},1\big)$$
$${\widetilde{I}}_{Z_p}(S^1\times M^2_r)=p^{2r+1}$$
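These examples are small enough to be checked by brute force directly from eq. (\[invariant\]). A sketch with our own function names; note that for general $p$ and $q$ the lens integral evaluates to $\gcd(p,q)$, one of the subtleties alluded to below:

```python
from math import gcd

def delta(g, p):
    # group delta-function on Z_p, normalized so that int dg delta(g, 1) = 1
    # with the normalized measure (1/p) sum_g
    return p if g % p == 0 else 0

def lens_invariant(p, q):
    # one 1-cell, one 2-cell with boundary q*sigma^1: int dg delta(g^q, 1)
    return sum(delta(q * g, p) for g in range(p)) // p

def s1_x_s2_invariant(p):
    # one 2-cell with empty boundary: int dg delta(1, 1) = rank(Z_p)
    return sum(delta(0, p) for _ in range(p)) // p

assert lens_invariant(5, 5) == 5            # p = q
assert lens_invariant(5, 3) == 1            # p and q coprime
assert lens_invariant(6, 4) == gcd(6, 4)    # the general answer: gcd(p, q)
assert s1_x_s2_invariant(7) == 7            # rank(Z_p) = p
```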
The considerations so far have involved more or less standard material; now let us discuss peculiarities. First, we should extend our construction to non-manifolds. In three dimensions there is no general restriction on the Euler character, but in our case $\chi$, defined by eq. (\[chi\]), appears to be non-negative. This can be seen as follows. For each vertex in a complex, the tetrahedra touching it form a 3-ball with an, in general, non-trivial $2d$ boundary. Let us denote by $\chi^{(2)}_i=2(1-p_i)$ the $2d$ Euler character of the boundary of the ball for the $i$-th vertex. Summing over vertices one gets
$$\sum_{i=1}^{N_0}\chi^{(2)}_i=2N_0-2\sum_{i=1}^{N_0}p_i$$ On the other hand, this quantity can be obtained by counting the numbers of simplexes of different dimensions. Simple algebra gives
$$\chi=\sum_{i=1}^{N_0}p_i\ \geq 0$$ \[chi\] By definition, a complex is a manifold iff $\forall \ i:\ p_i=0$.
The Euler character can be expressed through the Betti numbers as well:
$$\chi=b_0-b_1+b_2-b_3$$ and, since for oriented connected complexes always $b_0=b_3$, we have the inequality
$$b_2 \geq b_1$$ The dual quantities are ${\widetilde{b}}_i=b_{3-i}$ and ${\widetilde{\chi}}=-\chi$, by the Poincaré duality, which reads $H^k(C,Z_p) = H_{3-k}({\widetilde{C}},Z_p)$. Hence, our invariant is sensitive to $b_1$.
For manifolds, the number of integrations in eq. (\[invariant\]) is always equal to the number of $\delta$-functions ($n_1=n_2$). In general, there can be an excess of variables. This means that, at least for abelian groups, the invariant does not distinguish between manifolds and non-manifolds. For every manifold there are infinitely many non-manifolds (having different $\chi$’s) giving the same answer. Therefore, the choice $G=Z_p$ looks rather reasonable. The invariant gives essentially $p^{b_1}$ (up to subtleties clearly seen in the case of lenses). And, if we weigh links with $\mu/p$, triangles with $p$ and tetrahedra with $\lambda/p$, the partition function takes the form
$$Z=\sum_{\{C_c\}} Q(C)\,\lambda^{N_3}\mu^{N_1}p^{b_2-1}$$ \[Zfin\] where the factor $Q(C)=I_{Z_p}(C)/p^{b_1} < p$ and $\sum_{\{C_c\}}$ is the sum over connected oriented complexes.
So, we arrive at the following generalization of the $2d$ matrix models
$$Z=\int \prod_{a,b,c=1}^{\mu/p}\prod_{(i,j,k)}d\phi_{i,j,k}^{abc} \exp \bigg\{
-\frac{1}{2}\sum_{a,b,c=1}^{\mu/p}\sum_{(i,j,k)}\mid\phi_{i,j,k}^{abc}\mid^2
+ \frac{\lambda}{4!}\sum_{a,\ldots,g=1}^{\mu/p}\sum_{(i,\ldots,n)}
\phi_{i,j,k}^{abc}\phi_{-i,l,-m}^{ade}\phi_{-j,m,-n}^{beg}
\phi_{-k,n,-l}^{cgd} \bigg\}$$ \[Zpmodel\] Lower indices, $i,j,k,l,m,n$, are taken ${\rm\bf mod}\ p$; $\phi^{abc}_{i,j,k}=0$, unless $i+j+k=0\ ({\rm\bf mod}\ p)$, and all sums and products run over this set of indices. The field $\phi_{i,j,k}^{abc}$ has to obey the following additional conditions
$$\phi_{i,j,k}^{abc}=\phi_{j,k,i}^{bca}=\phi_{k,i,j}^{cab}$$ and
$$\overline{\phi}_{i,j,k}^{abc}=\phi_{-i,-j,-k}^{abc}$$
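The consistency of the tetrahedral (quartic) term can be checked directly: the four lower-index triangle constraints close around the tetrahedron, so any three of them imply the fourth. A small sketch (the function name is ours):

```python
from itertools import product

def tetra_closes(p):
    # lower-index constraints of the four fields in the quartic term:
    #   i+j+k = 0,  -i+l-m = 0,  -j+m-n = 0,  -k+n-l = 0   (all mod p)
    # solve the first three and check that the fourth follows
    for i, j, l in product(range(p), repeat=3):
        k = (-i - j) % p
        m = (l - i) % p
        n = (m - j) % p
        if (-k + n - l) % p != 0:
            return False
    return True

assert tetra_closes(5)   # the tetrahedron always closes
```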
If $p$ is odd, all complexes generated by the model are oriented and the above analysis is valid. In the formal limit $p\to 0$ only homology spheres survive. But one should be very careful here. There are infinitely many topologies at $b_2=0$ (all lenses among them). In our model their number is cut off by the volume, $N_3$. Hence, one should keep $p$ sufficiently large (at least larger than the biggest $q$ among the appearing lenses $L^q$). This means that, for a given $\lambda$ away from a critical point $\lambda_c$ ($N_3$ finite), one should take the limit $p\to \infty$ first. After that one may let $\lambda \to\lambda_c$, performing simultaneously an analytical continuation to $p=0$. This is a non-trivial scaling. In any case, one has somehow to remove the singularity at $\lambda=0$, as was done for the matrix models in refs. [@Matmod2]. The problem, however, is whether the number of complexes with fixed $b_2$ is exponentially bounded. If it is, the critical value $\lambda_c$ exists and the above program is self-consistent. If not, a further topological classification is needed. For a fixed topology, the answer to that question is “yes”. At least, numerical experiments have clearly shown that the number of spheres homeomorphic to $S^3$ grows exponentially with the volume. This growth should be determined locally, as in the $2d$ case, [*i.e.*]{} independently of the topology. So the question is “How many topologies can one fill a given volume with?”. But, even if the above program does make sense, hard technical problems still remain to be overcome.
The $SU_q(2),\ q^n=1$, model.
=============================
In the case of the $q$-deformed $SU(2)$ group, some conceptual problems still remain. The main tool of our analysis in Section 2 was the Peter-Weyl theorem, stating that the algebra of regular functions on a compact group is isomorphic to the algebra of matrix elements of finite-dimensional representations. The $q$-analog of this theorem was proved in refs. [@Woron] for $\vert q \vert < 1$. In this case there is a one-to-one correspondence between representations of $SU_q(2)$ and $SU(2)$, and the notion of matrix elements is naturally generalized. The main difference in the quantum case is that the tensor product is not commutative (for example, $\delta(x,y)\neq\delta(y,x)$). Although in this case there exists a definition of a rank, which appears to be finite [@Majid], the lattice topological gauge theory built with this group does not exist because of divergences. $q^n=1$ changes the situation drastically. The analysis of refs. [@Woron] is not valid in this case and the whole subject has to be revised. On the other hand, the theory of representations of the quantized universal enveloping algebra ${\cal U}_q(SL(2))$, when $q^n=1$, was given in refs. [@repsu2] and in the most complete form in ref. [@Keller].
As was established in [@Keller], all highest weight irreps $\rho_j$ of $\ {\cal U}_q(SL(2))$, when $q^n=1$, fall into two classes:\
a) dimension of $\rho_j,\ dim(\rho_j)\ <M$, where $M=\left\{\begin{array}{ll}
n/2&\mbox{,$n$ even}\\
n&\mbox{,$n$ odd}
\end{array}\right.$\
These irreps are numbered by two integers $d$ and $z$: $\langle
d,z\rangle$, where $d=dim(\rho_j)$, and the highest weights are
$$j=(d-1)+z$$ \[highw\] b) $dim(\rho_j)=M$. In this case irreps $I^1_z$ are labeled by a complex number $z\in {\rm C}\backslash\big\{{\rm Z}+\frac{2}{n}r\mid 1\leq r\leq
M-1\big\}$ and have the highest weights
$$j=(M-1)+z$$
There are also indecomposable representations, which are not irreducible but nevertheless cannot be decomposed into a direct sum of invariant subspaces. They are labeled by an integer $2\leq p\leq M$ and a complex number $z$: $I^p_z$. Their dimension is $dim(I^p_z)=2M$.
In ref. [@Keller] the following facts important for us were established:\
1) If $n\geq 4$, irreps $\langle d,0\rangle$ are unitary only for even $n$.\
2) Representations of the type $I^p_z,\ 1\leq p\leq M$, form a two-sided ideal in the ring of representations ([*i.e.*]{}, if at least one of them appears in a tensor product, then all representations in the decomposition will be of this type). Their quantum dimension vanishes: $dim_q(I^p_z)=\left\{\begin{array}{ll}
[M]&,p=1\\
[2M]&,p\geq 2
\end{array}\right\}=0$, where $[x]=\frac{q^x-q^{-x}}{q-q^{-1}}$.\
3) For the tensor product of two irreps the following formula takes place:
$$\langle i,z\rangle\otimes\langle j,w\rangle=
\bigg(\bigoplus_{k=\vert i-j\mid+1;+3;+5,\ldots}
^{\min(i+j-1,2M -i-j-1)}\langle k,z+w\rangle\bigg)\oplus
\bigg(\bigoplus_{p=r,r+2,r+4,\ldots}^{i+j-M} I^{p}_{z+w}\bigg)$$ \[tenprod\] where $r=\left\{\begin{array}{ll}
1&\mbox{,$i+j-M$ odd}\\
2&\mbox{,otherwise}
\end{array}\right.$\
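Both the vanishing quantum dimensions of point 2) and the dimension bookkeeping of eq. (\[tenprod\]) are easy to check numerically. A sketch (the function names and the small values of $n$ are our illustrative choices), using $dim\langle k,z\rangle = k$, $dim(I^1_z)=M$ and $dim(I^p_z)=2M$ for $p\geq 2$:

```python
import cmath

def qint(x, q):
    # quantum integer [x] = (q^x - q^{-x}) / (q - q^{-1})
    return (q ** x - q ** (-x)) / (q - q ** (-1))

def rhs_dim(i, j, M):
    # ordinary dimension of <i,z> (x) <j,w> read off from (tenprod)
    total = sum(range(abs(i - j) + 1, min(i + j - 1, 2 * M - i - j - 1) + 1, 2))
    if i + j - M >= 1:
        r = 1 if (i + j - M) % 2 else 2
        total += sum(M if p == 1 else 2 * M for p in range(r, i + j - M + 1, 2))
    return total

for n in (6, 8, 10):               # q^n = 1, n even, M = n/2
    q, M = cmath.exp(2j * cmath.pi / n), n // 2
    assert abs(qint(M, q)) < 1e-12        # [M] = 0
    assert abs(qint(2 * M, q)) < 1e-12    # [2M] = 0
    for i in range(1, M):
        for j in range(1, M):
            # both sides of (tenprod) have the same ordinary dimension i*j
            assert rhs_dim(i, j, M) == i * j
```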
Eq. (\[tenprod\]) means that the class of representations $\langle d,z\rangle$ and $I^p_z$ with $z=0$ forms a ring with respect to the tensor product. The highest weights (\[highw\]) are in one-to-one correspondence with the ones at $\vert q\vert<1$. Let us [*suppose*]{} that, for even $n\geq 4$, the matrix elements of the first $n/2-1$ irreps of $SU_q(2)$, $\vert q\vert<1$, allow a limit $q\to e^{\frac{2\pi i}{n}}$ and form (together with their descendants) the above-mentioned ring. On the other hand, we can ignore the $I^p_z$ representations while calculating integrals of products of matrix elements. Hence, we have to truncate the space of functions to be integrated over to the subspace spanned by the matrix elements of irreps of the type $\langle d,0\rangle,\ 1\leq d\leq n/2-1$, for even $n\geq 4$. Then we have a guarantee that the appearing invariants coincide with the Turaev-Viro ones. This construction is very reminiscent of the finite-group one considered in the previous section.
In the quantum case we have to correct a number of formulas of Section 2. For a unitary representation we still have
$$D^j_{nm}(x^{-1})=\overline{D}^j_{mn}(x)$$ but the orthogonality condition (\[orthcon\]) needs to be modified as follows
$$\int dx D^j_{nm}(x) \overline{D}^{j'}_{n'm'}(x) = {q^{2m}\over [2j+1]}
\delta^{j,j'}\delta_{n,n'}\delta_{m,m'}$$ $$\int dx\, \overline{D}^{j'}_{n'm'}(x)\, D^j_{nm}(x) = {q^{-2n}\over [2j+1]}\, \delta^{j,j'}\delta_{n',n}\delta_{m',m}$$ \[orthconq\]
To integrate over $SU_q(2)$ variables in eq. (\[action1\]), we can use the following useful formula
$$\overline{D}^j_{nm}(x)=(-q)^{m-n}D^j_{-n,-m}(x)$$ which gives
$$\int dx\, D^j_{n_1m_1}(x)\, D^{j'}_{n_2m_2}(x) = {(-q)^{m_1+n_1}\over [2j+1]}\, \delta^{j,j'}\delta_{n_1,-n_2}\delta_{m_1,-m_2}$$ Hence, we get, instead of eq. (\[conjA\]), the following “hermiticity” condition
$$\overline{A}_{\;j_1,\;j_2,\;j_3}^{m_1,m_2,m_3} = (-1)^{j_1+j_2+j_3} (-q)^{m_1+m_2+m_3}A_{\ j_1,\ j_2,\ j_3}^{-m_1,-m_2,-m_3}$$
Quantum $3j$- and $6j$-symbols were investigated in ref. [@Kiresh], which contains many useful formulas. The $3j$-symbol is connected to the Clebsch-Gordan coefficient as follows
$${\left( \begin{array}{ccc}
j_1&j_2&j_3 \\ n_1&n_2&n_3
\end{array} \right)}_q= {(-1)^{j_1-j_2}\over\sqrt{[2j_3+1]}}\,\langle j_1 j_2 n_1 n_2\mid j_1 j_2 j_3\,{-n_3}\rangle_q$$ and eq. (\[int3me\]) is still valid.
It is easy to see that
$${\left( \begin{array}{ccc}
j_1&j_2&j_3 \\ n_1&n_2&n_3
\end{array} \right)}_q=q^{-2n_3}
{\left( \begin{array}{ccc}
j_3&j_1&j_2 \\ n_3&n_1&n_2
\end{array} \right)}_q$$ \[permsym\] Hence, the cyclic symmetry condition in the form (\[cycsim\]) cannot be imposed in the quantum case.
The Racah-Wigner $6j$-symbol can be defined for example as follows
$${\left\{ \begin{array}{ccc}
j_1&j_2&j_3 \\ j_4&j_5&j_6
\end{array} \right\}}_q=
\sum_{\{-j_i\leq m_i \leq j_i\}} (-1)^{j_4+j_5+j_6}(-q)^{m_4+m_5+m_6}$$$${\left( \begin{array}{ccc}
j_5&j_6&j_1 \\ m_5&-m_6&m_1
\end{array} \right)}_q
{\left( \begin{array}{ccc}
j_6&j_4&j_2 \\ m_6&-m_4&m_2
\end{array} \right)}_q$$
$${\left( \begin{array}{ccc}
j_4&j_5&j_3 \\ m_4&-m_5&m_3
\end{array} \right)}_q
{\left( \begin{array}{ccc}
j_1&j_2&j_3 \\ m_1&m_2&m_3
\end{array} \right)}_q$$ From this, the analog of eq. (\[Sintout\]) immediately follows
$$S = \frac{1}{2}\sum_{\{j_1,j_2,j_3\}}\sum_{\{-j_k \le m_k \le j_k\}}
\mid A_{\;j_1,\;j_2,\;j_3}^{m_1,m_2,m_3} \mid^2 -$$$$-\frac{\lambda}{4!}\sum_{\{j_1,...,j_6\}}\sum_{\{-j_k \le m_k \le j_k\}}
(-q)^{\sum_k^6 m_k}
A_{\ j_1,\ j_2,\ j_3}^{-m_1,-m_2,-m_3}
A_{\;j_3,\ j_4,\;j_5}^{m_3,-m_4,m_5}$$A\_[j\_1, j\_5,j\_6]{}\^[m\_1,-m\_5,m\_6]{} A\_[j\_2, j\_6,j\_4]{}\^[m\_2,-m\_6,m\_4]{} (-1)\^[\_k\^6 j\_k]{}
{
[ccc]{} j\_1&j\_2&j\_3\
j\_4&j\_5&j\_6
}
\_q
Now, it is straightforward to generalize eq. (\[Z2\]) to the quantum case. One should take care of the order of matrix elements and use the quantum $\delta$-functions:
$$\delta_q(x,y)=\sum_{j=0}^{M-1}[2j+1]\sum_{a,b=-j}^{j} q^{-2b}\,\overline{D}^j_{ab}(x)\,D^j_{ab}(y)$$ With this definition,
$$\int dx\, f(x)\,\delta_q(x,y)=\int dx\, \delta_q(y,x)\,f(x) =f(y)$$
One can imagine every $\delta$-function in eq. (\[Z2\]) as an index loop going around a link of a triangulation. The matrices forming the argument of the $\delta$-function can be identified with the intersections between the loop and the triangles sharing the link. In the $SU_q(2)$ case, such loops can form non-trivial knots and links [@Kiresh]. If the corresponding links[^5] are trivial, equations (\[2tetr\]), (\[3tetr\]) and (\[pointout\]) are valid in the quantum case as well, and we have the same proof of topological invariance as for classical groups.
A thorough investigation of the model formulated in this section is beyond the scope of the present paper and will be given elsewhere. A discussion of calculations of the Turaev-Viro invariant for the lenses can be found in ref. [@Tata]. To conclude, let us notice that this invariant is more sensitive than the one considered in Section 3. In principle, it can distinguish between manifolds having the same fundamental group, which makes it, potentially, a powerful tool in the theory of $3d$ manifolds.
Discussion
==========
The models considered in this paper may be regarded as generalizations of the well-known $2d$ matrix models to the $3d$ case. They are well suited to the problem of $3d$ euclidean quantum gravity, since they contain a sufficient number of parameters and allow a topological expansion to be performed.
In $d$-dimensional space a metric has $d(d-1)/2$ angular degrees of freedom, which can be simulated by summing over equilateral simplicial complexes. The other $d$ degrees of freedom are gauge ones, and one can simply ignore them while working with a fixed topology (as in the numerical simulations in refs. [@AgMig; @BK; @4dsim]). However, a complete theory has to take into account both types of degrees of freedom. The aim of this paper was to formulate such a model. Different choices of the gauge group may be interpreted as different space structures. It would be interesting to solve the “inverse” problem, [*i.e.*]{} to recover the geometry of a “space” (if any) corresponding to a particular gauge group. The cyclic group $Z_n$, from this point of view, corresponds to a space in which all lengths are quantized to be integers [**mod**]{} $n$ but, instead of the triangle inequality, one has the one-dimensional “triangle equality”. It is, in a sense, actually a model of lattice quantum gravity, but with a one-dimensional “target space”.
The lattice gauge theory with a quantum gauge group may also be of interest. It is easy to introduce an action in it ([*e.g.*]{}, by eq. (\[heatkernel\])). In this case, the theory exists for general $q$ as well and can be generalized to an arbitrary quantum group [*à la*]{} Woronowicz. It is a theory with dynamical degrees of freedom and might be useful in a search for new physics.
The author would like to thank D.Bernard, P.Ginsparg, V.A.Kazakov, A.Krzywicki, A.A.Migdal and M.A.Semenov-Tian-Shansky for helpful discussions and, especially, C.Itzykson for his encouraging interest and for reading the paper, and also the Service de Physique Théorique de Saclay for hospitality.
[**Figures**]{}
Fig. 1. — (a) The triangle-link exchange: the common triangle of two tetrahedra on the left is removed and three new triangles sharing the new link appear on the right. (b) The subdivision: 4 new tetrahedra fill an old one.
Fig. 2. — Dual graphs: (a) the triangle-link exchange; (b) two tetrahedra glued along three common faces (a self-energy insertion) are equivalent to a triangle.
Fig. 3. — A subcomplex can be substituted by a homotopic one: (a) $\sigma^0_1=\sigma^0_2=\sigma^0$; (b) $\sigma^2_1\cup\sigma^2_2=\sigma^2$; (c) a $3d$ ball is homotopic to a point.
[99]{}
V.A.Kazakov, Phys. Lett. [**B150**]{} (1985) 282; F. David, Nucl. Phys. [**B257**]{} (1985) 45, 543; J. Ambjørn, B. Durhuus and J. Fröhlich, Nucl. Phys. [**B257**]{} (1985) 433; D.V. Boulatov, V.A. Kazakov, I.K. Kostov and A.A. Migdal, Nucl. Phys. [**B275**]{} (1986) 641.
T.Regge, Nuovo Cimento [**19**]{} (1961) 558.
M.E. Agishtein and A.A. Migdal, Mod. Phys. Lett. [**A6**]{} (1991) 1863; J. Ambjørn and S. Varsted, Phys. Lett. [**B226**]{} (1991) 258.
D.V. Boulatov and A. Krzywicki, Mod. Phys. Lett. [**A6**]{} (1991) 3005; J. Ambjørn, D. Boulatov, A. Krzywicki and S. Varsted, [*The vacuum in three-dimensional simplicial quantum gravity*]{}, preprint NBI-HE-91-46, LPTHE Orsay 91/57 (October 1991).
M.E. Agishtein and A.A. Migdal, [*Simulations of four-dimensional simplicial quantum gravity*]{}, preprint PUPT-1287 (October 1991); J. Ambjørn and J. Jurkiewicz, preprint NBI-HE-91-47 (November 1991).
D.V.Boulatov and V.A.Kazakov, unpublished; J. Ambjørn, B. Durhuus and T. Jonsson, Mod. Phys. Lett. [**A6**]{} (1991) 1133; N.Sasakura, Mod. Phys. Lett. [**A6**]{} (1991) 2613; N.Godfrey and M.Gross, Phys. Rev. [**D43**]{} (1991) R1749.
E.Brezin and V.A.Kazakov, Phys.Lett. [**B236**]{} (1990) 144; M.Douglas and S.Shenker, Nucl. Phys. [**B335**]{} (1990) 635; D.Gross and A.A.Migdal, Phys. Rev. Lett. [**64**]{} (1990) 127.
E.Witten, Commun.Math.Phys. [**121**]{} (1989) 351; Nucl. Phys. [**B311**]{} (1988/89) 46.
V.G.Turaev and O.Y.Viro, [*State sum invariants of 3-manifolds and quantum $6j$-symbols*]{}, LOMI preprint (1990).
H.Ooguri and N.Sasakura, Mod. Phys. Lett. [**A6**]{} (1991) 3591; F.Archer and R.M.Williams, Phys.Lett. [**B273**]{} (1991) 438.
G.Ponzano and T. Regge, [*in Spectroscopic and group theoretical methods in physics*]{}, ed. F.Bloch (North-Holland, Amsterdam, 1968).
R.Dijkgraaf and E.Witten, Commun. Math. Phys. [**129**]{} (1990) 393.
S.L.Woronowicz, Commun. Math. Phys. [**111**]{} (1987) 613; Ya.Soibelman, Algebra i analiz [**2**]{} (1990) 190.
S.Majid, Int. J. Mod. Phys. [**A 5**]{} (1990) 1.
E.K.Sklyanin, Funct. Anal. Appl. [**17**]{} (1983) 273; P.Roche and D.Arnaudon, Lett. Math. Phys. [**17**]{} (1989) 295.
G.Keller, Lett. Math. Phys. [**21**]{} (1991) 273.
A.N.Kirillov and N.Yu.Reshetikhin, [*Representations of the algebra ${\cal U}_q(SL_2)$, q-orthogonal polynomials and invariants of links*]{}, LOMI preprint E-9-88, Leningrad 1988.
S.Rama and S.Sen, [*Three manifolds and graph invariants*]{}, Tata Institute preprint TIFR/TH/91-59 (December 1991).
[^1]: Address after 1 October 1992: The Niels Bohr Institute, Blegdamsvej 17, DK-2100, Copenhagen Ø, Denmark.
[^2]: Laboratoire de la Direction des Sciences de la Matière du Commissariat à l’Energie Atomique
[^3]: For example, in $ISO(2,1)$ Chern-Simons theory there is no reason to quantize the coupling constant $k$ in contrast with the $SU(2)$ case.
[^4]: A more complete discussion will be given in Section 4.
[^5]: In the knot theory sense.
---
abstract: 'The perturbative solutions to the semiclassical Einstein field equations describing a spherically symmetric and static lukewarm black hole are constructed. The source term is composed of the (classical) stress-energy tensor of the electromagnetic field and the renormalized stress-energy tensor of the quantized massive scalar field in the large mass limit. We use two different parametrizations. In the first parametrization we calculate the zeroth-order solution. Subsequently, making use of the quantum part of the total stress-energy tensor constructed in the classical background, we calculate the corrections to the metric potentials and to the horizons. This procedure can be thought of as switching the quantized field on and analyzing its influence on the classical background via the back-reaction. In the second parametrization we look for a self-consistent lukewarm solution from the very beginning. This requires knowledge of a generic tensor which depends functionally on the metric tensor. The transformation formulas relating the line elements in both parametrizations are given.'
author:
- Jerzy Matyjasek and Katarzyna Zwierzchowska
title: Semiclassical lukewarm black holes
---
Introduction
============
One of the most characteristic features of the Reissner-Nordström-de Sitter black holes is the simultaneous occurrence of three horizon-like surfaces: the inner horizon, the event horizon and the cosmological horizon. This reflects the fact that the equation $g_{tt}(r) =0$ has four real roots (not necessarily distinct), three of which are positive and represent horizons, whereas the negative root has no physical interpretation. This leads to a number of special cases with the near-horizon geometries described by the Bertotti-Robinson [@bertotti; @robinson], Nariai [@Nariai1; @Nariai2] or Plebanski-Hacyan [@Hacyan] line elements. Among the various allowable black hole configurations, the class of solutions in which the temperature of the event horizon equals the temperature of the cosmological horizon is special [@Romans; @Mellor1; @Mellor2]. This is because the mean values of the characteristics of the quantized fields, such as the field fluctuation and the renormalized stress-energy tensor, are expected to be regular in the thermal state at the natural temperature. These expectations have been confirmed by a direct calculation of the stress-energy tensor in the two-dimensional setting [@Lizzie1] and of the field fluctuation of the massive quantized scalar field in the full four-dimensional geometry [@Lizzie2]. The black holes for which both temperatures are equal are usually referred to as lukewarm black holes. Recent analyses include [@Breen; @Kaska3; @Dunajski]. It should be noted that there is no lukewarm configuration for the Schwarzschild-de Sitter black hole. Since the stress-energy tensor of the quantized fields contributes to the source term of the semiclassical Einstein field equations, the resulting geometry changes due to the back-reaction process. Consequently, the natural question arises whether it is possible to construct a lukewarm black hole in semiclassical gravity.
In Ref. [@Kaska3] a related problem has been considered in the context of (classical) quadratic gravity. Specifically, it has been shown that, although the full, detailed answer is beyond our capabilities, it is possible to provide an affirmative answer to the restricted problem in which the differential equations describing the model are solved perturbatively. These results can also be viewed as a first step towards incorporating the quantum effects into the picture. This is because the renormalized stress-energy tensor of a quantized massive field in the large mass limit may be approximated by an object constructed from the curvature tensor, its covariant derivatives and contractions. In this approach one ignores particle creation, which is a nonlocal phenomenon. In spite of this limitation, this framework is still the most general one, not restricted to any particular type of symmetry.
Here we generalize the results of Ref. [@Kaska3] in a twofold way: First, we employ the renormalized stress-energy tensor of the massive scalar field with the arbitrary curvature coupling constructed within the framework of the Schwinger-DeWitt approximation to construct the semiclassical lukewarm black hole. Secondly, we study the relation between the results expressed in terms of the radii of the cosmological and event horizons of the classical lukewarm solution and the analogous results constructed in $({r_{+}},{\tilde{r}_{c}})$ parametrization, where ${r_{+}}$ is the exact location of the event horizon of the semiclassical black hole and ${\tilde{r}_{c}}$ describes the cosmological horizon in a zeroth-order approximation.
A few words on the method are in order. First observe that the first three terms of the Schwinger-DeWitt expansion are divergent and can be absorbed into the (classical) gravitational action with the cosmological constant and the quadratic terms [@Birrell]. This renormalization leaves us with two additional parameters, say, $\alpha$ and $\beta,$ which should be determined observationally. Their exact values are presently unknown; it is expected, however, that they are small, since otherwise they would lead to various observational effects. To simplify calculations, in what follows, we shall set them to zero. The second observation is related to the perturbative approach in effective theories. In fact, it may be the only method to deal with them. Indeed, since the semiclassical equations involve sixth-order derivatives of the metric, their nonperturbative solutions may appear to be spurious and one has to invent a method for systematically selecting the physical ones. It seems that the acceptable solutions, when expanded in powers of the small parameter, should reduce to those obtained within the framework of the perturbative approach. Finally, observe that there are good reasons to believe that (in most cases) the black hole exists as a perturbative solution of the higher-order equations provided it exists classically [@RCM1].
Classical lukewarm black holes
==============================
A very convenient representation of the line element describing the lukewarm black holes in the Einstein-Maxwell theory with the (positive) cosmological constant is that in terms of the horizons. Denoting the location of the event horizon by $a$ and the location of the cosmological horizon by $b$ the line element may be written in the form $$ds^{2} = -f_{0} (r) dt^{2} + \frac{1}{f_{0}(r)} dr^{2} + r^{2}
\left(d\theta^{2} + \sin^{2}\theta d\phi^{2} \right),
\label{spec1}$$ where $$f_{0}(r) = \left(1 - \frac{ab}{(a+b)r} \right)^{2} -\frac{r^{2}}{(a+b)^{2}}.$$ Such a configuration is allowed provided $$Q^{2} = \left( \frac{ab}{a+b} \right)^{2} \hspace{0.5cm}{\rm and} \hspace{0.5cm}
\Lambda = \frac{3}{(a+b)^{2}}.
\label{luke}$$ It means that the electric charge $Q$ (if there is a magnetic charge, $P,$ $Q^{2}$ must be substituted by $Z^{2} = Q^{2}+P^{2}$) and $\Lambda$ completely determine the lukewarm solution.
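The defining properties of the lukewarm configuration are easy to verify symbolically. The following sketch (using sympy; for this metric $-g_{tt}g_{rr}=1,$ so the surface gravity reduces to $\kappa = f_{0}'/2$) confirms that $f_{0}$ vanishes at both $r=a$ and $r=b$ and that the two surface gravities cancel:

```python
import sympy as sp

r, a, b = sp.symbols('r a b', positive=True)

# Lukewarm metric function of Eq. (spec1)
f0 = (1 - a*b/((a + b)*r))**2 - r**2/(a + b)**2

# Surface gravity: kappa = f0'/2, since -g_tt*g_rr = 1 here
kappa = sp.diff(f0, r)/2

print(sp.simplify(f0.subs(r, a)))                        # 0: event horizon
print(sp.simplify(f0.subs(r, b)))                        # 0: cosmological horizon
print(sp.simplify(kappa.subs(r, a) + kappa.subs(r, b)))  # 0: equal temperatures
```

One finds $\kappa(a) = -\kappa(b) = (b-a)/(a+b)^{2},$ so the lukewarm condition holds identically in $a$ and $b.$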
The simplest way to demonstrate that the line element of the lukewarm Reissner-Nordström-de Sitter black hole can be expressed solely in terms of $a$ and $b$ is to solve the Einstein-Maxwell equations with the cosmological term for a general static and spherically-symmetric metric $$ds^2 = - A (r) dt^2 + B (r) dr^2 +
r^2 \left(d\theta^{2} + \sin^{2}\theta d\phi^{2} \right)
\label{familiar}$$ with the boundary conditions $B^{- 1} (a) = 0$ and $A(b) B (b) = 1.$ Solving the field equations and making use of the boundary conditions yields $$A(r) = \frac{1}{B (r)} = 1 - \frac{a}{r} - \frac{Q^2}{r a} + \frac{\Lambda
a^3}{3 r} + \frac{Q^2}{r^2} - \frac{\Lambda r^2}{3},
\label{aabb}$$ which, with the substitution $$M_H = \frac{a}{2} - \frac{Q^2}{2 a} + \frac{\Lambda a^2}{6}$$ leads to the line element (\[familiar\]) written in a more familiar form $$A (r) = B^{- 1} (r) = 1 - \frac{2 M_H}{r} + \frac{Q^2}{r^2} - \frac{\Lambda
r^2}{3}.$$ We shall refer to $M_H$ as the horizon-defined mass. Simple analysis indicates that the equation $A (r) = 0$ can have, depending on the values of the parameters, three, two or one distinct positive solutions. The above configurations can, therefore, have three distinct horizons located at zeros of $A (r),$ a degenerate and a nondegenerate horizon, and, finally, one triply degenerate horizon. Let us denote the remaining solutions of the equation $A (r) = 0$ by $r_{--}$ and $r_-$ $(r_{--} < r_{-} \leq a \leq b).$ Solving the system of equations $$A (b) = 0 \hspace{0.5cm}{\rm and} \hspace{0.5cm}
\kappa (a) + \kappa (b) = 0,$$ where $$\kappa (r_i) = \frac{1}{2} \left(- g_{tt} g_{rr} \right)^{-1 / 2} \frac{d}{dr}
g_{tt}$$ with respect to $\Lambda$ and $Q^{2}$ one obtains (\[luke\]). Finally, substituting the thus obtained $\Lambda$ and $Q^{2}$ into Eq. (\[aabb\]) gives (\[spec1\]).
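This last substitution can be cross-checked symbolically: inserting $Q^{2}$ and $\Lambda$ from (\[luke\]) into the potential (\[aabb\]) should reproduce the lukewarm form (\[spec1\]) identically. A short sympy sketch:

```python
import sympy as sp

r, a, b = sp.symbols('r a b', positive=True)

Q2  = (a*b/(a + b))**2        # Eq. (luke)
Lam = 3/(a + b)**2

# Metric potential of Eq. (aabb)
A = 1 - a/r - Q2/(r*a) + Lam*a**3/(3*r) + Q2/r**2 - Lam*r**2/3

# Lukewarm form of Eq. (spec1)
f0 = (1 - a*b/((a + b)*r))**2 - r**2/(a + b)**2

print(sp.simplify(A - f0))    # 0: the two expressions coincide
```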
Semiclassical lukewarm black holes
==================================
General equations in $(a,b)$ parametrization
---------------------------------------------
Now, let us consider the semiclassical Einstein field equations $$R_{i}^{j} - \frac{1}{2} R \delta_{i}^{j} + \Lambda \delta_{i}^{j} =
8\pi \left( T_{i}^{(em)j} + T_{i}^{(q)j} \right),
\label{semiEinstein}$$ where $T_{i}^{(em)j}$ is the stress-energy tensor of the electromagnetic field and $T_{i}^{(q)j}$ is the renormalized stress-energy tensor of some quantized test field or fields, evaluated in a suitable state. Optimally, the renormalized stress-energy tensor of the quantized fields should functionally depend on the metric tensor, or, at least, on a wide class of geometries. Unfortunately, because of the mathematical complexity of the problem, one is forced to resort to analytic approximations, numerics, or both. Here we shall use the renormalized stress-energy tensor of the massive scalar field with an arbitrary curvature coupling in the large mass limit. Such a tensor can be calculated from the effective action, $W_{R},$ constructed within the framework of the Schwinger-DeWitt method [@Bryce1]. Although the results appear to be state-independent, the formulae used in this paper have been constructed under the assumption that the Green functions are defined in spaces with a positive-definite metric. Of course, in this approach one ignores the creation of real particles, but the analyses that have been carried out so far suggest that for sufficiently massive fields it provides a reasonably accurate approximation [@PaulA].
The approximate renormalized one-loop effective action of the quantized massive fields in the large mass limit is given by a well-known formula $$W_{R}\,=\,\frac{1}{32\pi^{2}}\sum_{n=3}^{\infty}
\frac{(n-3)!}{(m^{2})^{n-2}}\int d^{4}x \sqrt{g} [a_{n}],
\label{Weff}$$ where each $[a_{n}]$ has dimensionality of $[length]^{-2 n}$ and is constructed from the Riemann tensor, its covariant derivatives up to $2 n-2$ order and appropriate contractions. For the technical details of this approach the reader is referred, for example, to Refs. [@Barvinsky:1985an; @FZ3] and the references cited therein. Inspection of Eq. (\[Weff\]) shows that the lowest term of the approximate $W_{R}$ is to be constructed from the (integrated) coincidence limit of the fourth Hadamard-Minakshisundaram-DeWitt-Seeley coefficient, $[a_{3}],$ whereas the next-to-leading term should be constructed from $[a_{4}].$ Here we will confine ourselves to the first term of the expansion (\[Weff\]) and briefly discuss some general features of the second-order term at the end of the paper. The approximate renormalized stress-energy tensor can be constructed from $W_{R}$ using the standard relation $$T^{ab} = \frac{2}{g^{1/2}}\frac{\delta}{\delta g_{ab}} W_{R}.
\label{def_set}$$
It should be noted that even if the renormalized stress-energy tensor is known, the resulting semiclassical field equations are far too complicated to be solved exactly. Since the quantum effects are expected to be small, it is reasonable to assume that the quantum-corrected lukewarm black hole is described by a set of parameters that are close to their classical counterparts. Introducing slightly distorted metric potentials $$A(r) = f_{0}(r) + \alpha(r)
\label{alfa}$$ and $$1/B(r) = f_{0}(r) + \beta(r),
\label{bett}$$ where $\alpha(r)$ and $\beta(r)$ are small corrections to the main approximation, expanding and retaining only the linear terms in the resulting differential equations one obtains $$\frac{1}{r} \frac{d\beta(r)}{dr} + \frac{\beta(r)}{r^{2}} -\frac{1}{12\pi m^{2}}t_{t}^{t} = 0$$ and $$\frac{1}{r} \frac{d \alpha(r)}{dr} -2 p(r) \alpha(r) +q(r) \beta(r) -\frac{1}{12\pi m^{2}}t_{r}^{r} =0,$$ where $$p(r) ={\frac {{a}^{2}br-{a}^{2}{b}^{2}+a{b}^{2}r-{r}^{4}}{{r}^{2}
\left( b-r \right) \left( a-r \right) \left( ab-ar-br-{r}^{2}
\right) }}$$ and $$q(r) = {\frac {{a}^{2}{b}^{2}-{r}^{2}{a}^{2}-2\,{r}^{2}ab+3\,{r}^{4}-{r}^{2}
{b}^{2}}{{r}^{2} \left( b-r \right) \left( r-a \right) \left( ab-ar-
br-{r}^{2} \right) }}.$$ The general solution to the system can be written $$\beta(r) = \frac{1}{12\pi m^{2} r}\int t_{t}^{t}(r) r^{2} dr + C_{1}
\label{bbb}$$ $$\alpha(r) =-\frac{P(r)}{6\pi m^{2}} \int \frac{1}{P(r)}\left( 12 q(r) \beta(r) \pi m^{2}
- t_{r}^{r}(r) \right)dr +P(r) C_{2},
\label{aaa}$$ where $$P(r) = \exp\left(\int p(r) dr \right).$$ and, consequently, the construction of the general solution reduces to two quadratures. Here $t_{a}^{b} = 96 \pi^{2} m^{2} T_{a}^{(q)b}.$ In deriving the above (formal) solution we have ignored subtleties associated with the stress-energy tensor itself, i.e., it is tacitly assumed that it is regular on the horizons. Fortunately, it turns out that the stress-energy tensor considered in this paper is free of such complications. The integration constants $C_{1}$ and $C_{2}$ should be determined from physically motivated boundary conditions.
The approximate stress-energy tensor {#sec:sss}
------------------------------------
To solve the semiclassical Einstein field equations in a self-consistent way, the renormalized stress-energy tensor describing the quantized source term is required. Such a tensor for a given field should functionally depend on the generic metric tensor. Unfortunately, because of the unavoidable complexities of the problem, its exact form is unknown. As long as we are interested in analytical calculations, all we can do is look for some reasonable approximation [@FZ3; @AHS95; @Matyjasek:1999an; @kocio1; @lemosT; @kocio:2009; @MatryZw].
The first-order approximation to the renormalized effective action of the quantized massive scalar field with arbitrary coupling to the curvature $\xi$ satisfying the covariant Klein-Gordon equation $$\left( \Box \,-\,\xi R\,-\,m^{2}\right) \phi \,=\,0,
\label{wave}$$ can be constructed from the coincidence limit of the coefficient $$[a_{3}] = a_{3}^{(0)} + \xi a_{3}^{(1)} + \xi^{2} a_{3}^{(2)} + \xi^3 a_{3}^{(3)},
\label{a3a}$$ where $$\begin{aligned}
a_{3}^{(0)} &=& \frac{11}{1680} R^3+
\frac{17}{5040} R_{;a}^{{\phantom}{;{\phantom}{a}}} R_{{\phantom}{;{\phantom}{a}}}^{;a}-
\frac{1}{2520} R_{ab;c}^{{\phantom}{a}{\phantom}{b}{\phantom}{;{\phantom}{c}}} R_{{\phantom}{a}{\phantom}{b}{\phantom}{;{\phantom}{c}}}^{ab;c}-
\frac{1}{1260} R_{ab;c}^{{\phantom}{a}{\phantom}{b}{\phantom}{;{\phantom}{c}}} R_{{\phantom}{a}{\phantom}{c}{\phantom}{;{\phantom}{b}}}^{ac;b}\nonumber \\
&+
&\frac{1}{560} R_{abcd;e}^{{\phantom}{a}{\phantom}{b}{\phantom}{c}{\phantom}{d}{\phantom}{;{\phantom}{e}}}
R_{{\phantom}{a}{\phantom}{b}{\phantom}{c}{\phantom}{d}{\phantom}{;{\phantom}{e}}}^{abcd;e}+
\frac{1}{180} R R_{;a{\phantom}{a}}^{{\phantom}{;{\phantom}{a}}a}+
\frac{1}{280} R_{;a{\phantom}{a}b{\phantom}{b}}^{{\phantom}{;{\phantom}{a}}a{\phantom}{b}b}+
\frac{1}{420} R_{;ab}^{{\phantom}{;{\phantom}{a}}{\phantom}{b}} R_{{\phantom}{a}{\phantom}{b}}^{ab}\nonumber \\
&-
&\frac{1}{630} R_{ab;c{\phantom}{c}}^{{\phantom}{a}{\phantom}{b}{\phantom}{;{\phantom}{c}}c} R_{{\phantom}{a}{\phantom}{b}}^{ab}-
\frac{109}{2520} R R_{ab}^{{\phantom}{a}{\phantom}{b}} R_{{\phantom}{a}{\phantom}{b}}^{ab}+
\frac{73}{1890} R_{ab}^{{\phantom}{a}{\phantom}{b}} R_{c{\phantom}{a}}^{{\phantom}{c}a} R_{{\phantom}{b}{\phantom}{c}}^{bc}+
\frac{1}{210} R R_{abcd}^{{\phantom}{a}{\phantom}{b}{\phantom}{c}{\phantom}{d}} R_{{\phantom}{a}{\phantom}{b}{\phantom}{c}{\phantom}{d}}^{abcd}\nonumber \\
&+
&\frac{1}{105} R_{ab;cd}^{{\phantom}{a}{\phantom}{b}{\phantom}{;{\phantom}{c}}{\phantom}{d}} R_{{\phantom}{a}{\phantom}{c}{\phantom}{b}{\phantom}{d}}^{acbd}+
\frac{19}{630} R_{ab}^{{\phantom}{a}{\phantom}{b}} R_{cd}^{{\phantom}{c}{\phantom}{d}} R_{{\phantom}{a}{\phantom}{c}{\phantom}{b}{\phantom}{d}}^{acbd}-
\frac{1}{189} R_{abcd}^{{\phantom}{a}{\phantom}{b}{\phantom}{c}{\phantom}{d}} R_{ef{\phantom}{a}{\phantom}{b}}^{{\phantom}{e}{\phantom}{f}ab}
R_{{\phantom}{c}{\phantom}{d}{\phantom}{e}{\phantom}{f}}^{cdef},
\label{a3b}\end{aligned}$$
$$\begin{aligned}
a_{3}^{(1)} &=& -
\frac{1}{72} R^3-
\frac{1}{30} R_{;a}^{{\phantom}{;{\phantom}{a}}} R_{{\phantom}{;{\phantom}{a}}}^{;a}-
\frac{11}{180} R R_{;a{\phantom}{a}}^{{\phantom}{;{\phantom}{a}}a} -
\frac{1}{180} R R_{abcd}^{{\phantom}{a}{\phantom}{b}{\phantom}{c}{\phantom}{d}} R_{{\phantom}{a}{\phantom}{b}{\phantom}{c}{\phantom}{d}}^{abcd}
\nonumber \\
&-
&\frac{1}{60} R_{;a{\phantom}{a}b{\phantom}{b}}^{{\phantom}{;{\phantom}{a}}a{\phantom}{b}b}-
\frac{1}{90} R_{;ab}^{{\phantom}{;{\phantom}{a}}{\phantom}{b}} R_{{\phantom}{a}{\phantom}{b}}^{ab}+
\frac{1}{180} R R_{ab}^{{\phantom}{a}{\phantom}{b}} R_{{\phantom}{a}{\phantom}{b}}^{ab},
\label{a3c}\end{aligned}$$
$$a_{3}^{(2)} = \frac{1}{12}R^3 +
\frac{1}{12} R_{;a}^{{\phantom}{;{\phantom}{a}}} R_{{\phantom}{;{\phantom}{a}}}^{;a}+
\frac{1}{6} R R_{;a{\phantom}{a}}^{{\phantom}{;{\phantom}{a}}a},
\label{a3d}$$
and $$a_{3}^{(3)} = -
\frac{1}{6} R^3
\label{a3e}$$ It is evident that the approximate stress-energy tensor of the quantized massive field constructed from the effective action $W_{R}$ depends functionally on the metric, as it is constructed solely from the Riemann tensor, its contractions and covariant derivatives to some required order. The type of the field enters the general formulas through spin-dependent numerical coefficients. There is no need, when computing the stress-energy tensor, to retain all terms in $[a_{3}].$ Indeed, the total divergences can be discarded and further simplifications of the effective action are possible. Here we display the coefficient $[a_{3}(x,x')]$ in its full form simply because it will be of use in the calculations of the field fluctuation.
Now, repeating the steps of Refs. [@Matyjasek:1999an; @kocio1] (to which the reader is referred for additional information), after some algebra, one obtains the covariantly conserved tensor (\[def\_set\]), with the tensor $t_{a}^{b}$ given by $$\begin{aligned}
t_{t}^{t}& =& -\frac{2327 a^6 b^6}{105 r^{12} (a+b)^6}+\frac{2452 a^5 b^5}{35
r^{11} (a+b)^5}-\frac{8611 a^4 b^4}{105 r^{10}
(a+b)^4}+\frac{643 a^4 b^4}{35 r^8 (a+b)^6}+\frac{4442 a^3
b^3}{105 r^9 (a+b)^3}\nonumber \\
&&-\frac{2468 a^3 b^3}{105 r^7
(a+b)^5}-\frac{57 a^2 b^2}{7 r^8 (a+b)^2}+\frac{289 a^2
b^2}{35 r^6 (a+b)^4}
+\frac{a^2 b^2}{15 r^4 (a+b)^6}
+\xi^3 \left(\frac{432}{(a+b)^6}-\frac{432
a^2 b^2}{r^4 (a+b)^6}\right) \nonumber \\
&&-\frac{37}{21 (a+b)^6} +\xi^2 \left(\frac{216 a^2 b^2}{r^4
(a+b)^6}-\frac{216}{(a+b)^6}\right)
+\xi \left(\frac{546 a^6 b^6}{5 r^{12}
(a+b)^6}-\frac{1808 a^5 b^5}{5 r^{11} (a+b)^5}+\frac{6664 a^4
b^4}{15 r^{10} (a+b)^4}\right.\nonumber \\
&&\left.
-\frac{462 a^4 b^4}{5 r^8
(a+b)^6}-\frac{240 a^3 b^3}{r^9 (a+b)^3}+\frac{656 a^3 b^3}{5
r^7 (a+b)^5}+\frac{48 a^2 b^2}{r^8 (a+b)^2}-\frac{48 a^2
b^2}{r^6 (a+b)^4}-\frac{26 a^2 b^2}{r^4 (a+b)^6}+\frac{174}{5
(a+b)^6}\right),
\label{czasowa}\end{aligned}$$ $$\begin{aligned}
t_{r}^{r} &=&\frac{421 a^6 b^6}{105 r^{12} (a+b)^6}-\frac{52 a^5 b^5}{3 r^{11} (a+b)^5}
+\frac{949 a^4 b^4}{35 r^{10} (a+b)^4}-\frac{29 a^4 b^4}{3 r^8
(a+b)^6}-\frac{646 a^3 b^3}{35 r^9 (a+b)^3}
-\frac{37}{21 (a+b)^6}
\nonumber \\
&&+\frac{1604 a^3 b^3}{105 r^7 (a+b)^5}
+\frac{33 a^2 b^2}{7 r^8 (a+b)^2}-\frac{97 a^2 b^2}{15 r^6
(a+b)^4}+\xi^3 \left(\frac{432}{(a+b)^6}
-\frac{432 a^2 b^2}{r^4 (a+b)^6}\right)
+\frac{29 a^2 b^2}{15 r^4 (a+b)^6}
\nonumber \\
&&+\xi^2 \left(\frac{216 a^2 b^2}{r^4
(a+b)^6}-\frac{216}{(a+b)^6}\right)
+\xi \left(-\frac{78 a^6 b^6}{5 r^{12} (a+b)^6}+\frac{336 a^5 b^5}{5
r^{11} (a+b)^5}
-\frac{1592 a^4 b^4}{15 r^{10} (a+b)^4}
+\frac{42 a^4 b^4}{r^8 (a+b)^6}
\right. \nonumber \\
&&\left.
+\frac{368 a^3 b^3}{5 r^9 (a+b)^3}
-\frac{336 a^3 b^3}{5
r^7 (a+b)^5}-\frac{96 a^2 b^2}{5 r^8 (a+b)^2}
+\frac{144 a^2 b^2}{5 r^6 (a+b)^4}-\frac{178 a^2 b^2}{5 r^4 (a+b)^6}
+\frac{174}{5
(a+b)^6}\right)
\label{radialna}\end{aligned}$$ and $$\begin{aligned}
t_{\theta}^{\theta} &=& -\frac{497 a^6 b^6}{15 r^{12} (a+b)^6}
+\frac{11404 a^5 b^5}{105 r^{11} (a+b)^5}
-\frac{13903 a^4 b^4}{105 r^{10} (a+b)^4}+\frac{1769 a^4 b^4}{105
r^8 (a+b)^6}+\frac{2486 a^3 b^3}{35 r^9 (a+b)^3}
-\frac{a^2 b^2}{r^4 (a+b)^6}
\nonumber \\
&&
-\frac{108 a^3 b^3}{5 r^7 (a+b)^5}-\frac{99 a^2 b^2}{7 r^8 (a+b)^2}+\frac{683 a^2 b^2}{105
r^6 (a+b)^4} -\frac{37}{21 (a+b)^6} + \xi^3 \left(\frac{432 a^2 b^2}{r^4 (a+b)^6}
+\frac{432}{(a+b)^6}\right)
\nonumber \\
&&
+\xi^2 \left(-\frac{216 a^2 b^2}{r^4
(a+b)^6}-\frac{216}{(a+b)^6}\right)
+\xi \left(\frac{702 a^6 b^6}{5 r^{12} (a+b)^6}-\frac{2272 a^5 b^5}{5 r^{11}
(a+b)^5}+\frac{8216 a^4 b^4}{15 r^{10} (a+b)^4}
\right. \nonumber \\
&& \left.
-\frac{342 a^4 b^4}{5 r^8 (a+b)^6}
-\frac{1456 a^3 b^3}{5 r^9 (a+b)^3}+\frac{416 a^3 b^3}{5 r^7
(a+b)^5}+\frac{288 a^2 b^2}{5 r^8 (a+b)^2}-\frac{24 a^2 b^2}{r^6 (a+b)^4}
+\frac{154 a^2 b^2}{5 r^4 (a+b)^6}+\frac{174}{5
(a+b)^6}\right).
\label{katowa}\end{aligned}$$ These components can also be constructed from the Euler-Lagrange equations [@MatryZw] with the Lagrangian depending on the time and radial components of the metric tensor, their derivatives and coordinate $r.$ Because of the complexity of the calculations of the stress-energy tensor this alternative approach may serve as a useful check.
In order to establish the regularity of the stress-energy tensor at the horizon it is necessary to transform it into a coordinate system that is regular there. Alternatively, one can use in this regard a freely falling frame. For radial motion, the orthogonal vectors of the frame are the unit tangent to the geodesic, $e_{(0)}^i,$ and the three spacelike mutually perpendicular unit vectors $e_{(j)}^i.$ Integrating the geodesic equation one obtains $$u^a = \left[ \frac{C}{A}, - \sqrt{\frac{1}{B} \left( \frac{C^2}{A} - 1
\right)}, 0, 0 \right]$$ and $$n^a = \left[ \pm \frac{\sqrt{C^2 - A}}{A}, \mp \frac{C}{\sqrt{AB}}, 0, 0
right],$$ where $C$ is the energy per unit mass along the geodesic. Elementary manipulations show that the components ${\tilde{T}}_{(0) (0)},$ ${\tilde{T}}_{(1) (1)}$ and ${\tilde{T}}_{(0) (1)}$ of the stress-energy tensor in a freely falling frame are given by $${\tilde{T}}_{(0) (0)} = \frac{C^2}{A} \left( T_r^r - T_t^t \right) - T_r^r,$$ $${\tilde{T}}_{(1) (1)} = \frac{C^2}{A} \left( T_r^r - T_t^t \right) + T_r^r$$ and $${\tilde{T}}_{(0) (1)} = {\tilde{T}}_{(1) (0)} = \frac{C \sqrt{C^2 - A}}{A} \left( T_r^r -
T_t^t \right).$$ Since the difference between the radial and time components of the stress-energy tensor (\[czasowa\]-\[katowa\]) factors as $$T_r^{(q)r} - T_t^{(q)t} = A (r) F (r),$$ where $F (r)$ is a regular function, one concludes that the stress-energy tensor in a frame freely falling from the cosmological horizon or falling on the event horizon is regular in a physical sense.
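The orthonormality of the frame vectors $u^{a}$ and $n^{a}$ used above is straightforward to verify symbolically. In the sketch below (sympy, with the substitution $C^{2} = A + D^{2}$ introduced merely to keep the square roots manifestly real outside the horizon) only the $(t,r)$ block of the metric (\[familiar\]) is needed:

```python
import sympy as sp

A, B, D = sp.symbols('A B D', positive=True)
C = sp.sqrt(A + D**2)           # energy per unit mass; C^2 > A

g = sp.diag(-A, B)              # (t, r) block of the metric (familiar)
u = sp.Matrix([C/A, -sp.sqrt((C**2/A - 1)/B)])         # unit tangent u^a
n = sp.Matrix([sp.sqrt(C**2 - A)/A, -C/sp.sqrt(A*B)])  # radial unit vector n^a

dot = lambda x, y: sp.simplify((x.T*g*y)[0])
print(dot(u, u), dot(n, n), dot(u, n))   # -1 1 0
```

so that $u^{a}u_{a}=-1,$ $n^{a}n_{a}=1$ and $u^{a}n_{a}=0,$ as required of an orthonormal frame.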
Semiclassical lukewarm black holes in $(a,b)$ parametrization
-------------------------------------------------------------
As is seen from Eqs. (\[bbb\]) and (\[aaa\]) the general solution of the linearized system (\[semiEinstein\]) requires two simple quadratures, which, after some calculations, yield rather long expressions. The corrections to the “metric potentials” $1/B(r)$ and $A(r),$ i.e., the functions $\beta(r)$ and $\alpha(r)$ have the structure $$\beta(r) =\frac{1}{\pi m^{2} (a+b)^{6}} \sum_{i=0}^{3} \beta_{i}(r) \xi^{i}
+ \frac{C_{1}}{r}
\label{bet}$$ and $$\alpha(r) = \beta(r) + \frac{f_{0}(r)}{\pi m^{2}(a+b)^{4}} \left[ \sum_{i=0}^{1} \alpha_{i}(r) \xi^{i}
-6 \xi^{2} + 12 \xi^{3} \right] - (a+b)^{2} C_{2},
\label{al}$$ where $f_{0}(r)$ is given by (\[spec1\]) and the exact form of the functions $\alpha_{i}(r)$ and $\beta_{i}(r)$ are relegated to the appendix. Of the two integration constants, only $C_{1}$ affects location of the horizons; the second integration constant is left free throughout the calculation. If it exists, the quantum-corrected lukewarm black hole must satisfy the same requirements as its classical counterpart, and, consequently, in order to determine the line element describing such a configuration, one has to solve the system of algebraic equations: $$A(r_{+}) = A(r_{c}) =0$$ and $$\kappa(r_{+}) + \kappa(r_{c}) =0,$$ where $r_{+} = a + r^{(1)}_{+} $ and $r_{c} = b + r^{(1)}_{c},$ with respect to $C_{1},$ $r^{(1)}_{+} $ and $r^{(1)}_{c}.$ Here ${r_{+}}$ and $r_{c}$ correspond respectively to the event horizon and the cosmological horizon, whereas $r^{(1)}_{+}$ and $ r^{(1)}_{c}$ are small corrections. Now, simple manipulations give $$r_{+}^{(1)} = \frac{1}{\pi m^{2} (a-b)(a+b)^{5}}\left(W_{+}^{(0)} + W_{+}^{(1)} \xi\right)
+ \frac{1}{\pi m^{2} (a-b)(a+b)^{4}}\left(W_{+}^{(2)} \xi^{2} +W_{+}^{(3)}\xi^{3}\right)
\label{event}$$ and $$C_{1} = \frac{1}{\pi m^{2} (a+b)^{7}}\left(V^{(0)} + V^{(1)} \xi\right)
+ \frac{1}{\pi m^{2} (a+b)^{5}}\left(V^{(2)} \xi^{2} +V^{(3)}\xi^{3}\right)
\label{ce1}$$ where the functions $W_{+}^{(k)}$ and $V^{(k)}$ are listed in the Appendix. It should be noted that $$r_{c}^{(1)}(a,b) = -r_{+}^{(1)}(a\to -b, b\to -a),
\label{cosmoH}$$ i.e., the correction to the cosmological horizon equals minus the correction to the event horizon under the simultaneous substitution $a \to - b$ and $b \to -a.$ The integration constant $C_{2}$ can be determined using, for example, the condition $A(r_{c}) B(r_{c}) = 1.$
The equations (\[bet\]) with (\[ce1\]) and (\[al\]) solve the problem completely: the semiclassical lukewarm configuration is characterized by the cosmological constant and (total) charge as given by (\[luke\]), whereas the quantum corrections to the event and cosmological horizon are given by (\[event\]) and (\[cosmoH\]), respectively.
Semiclassical black holes in $({r_{+}},{\tilde{r}_{c}})$ parametrization
------------------------------------------------------------------------
Now we shall show that the above procedure is equivalent to a more familiar approach, in which one looks for a lukewarm solution parametrized by the exact location of the event horizon and the zeroth-order approximation to the cosmological horizon. First, let us assume that the cosmological constant is a parameter in a space of theories rather than the space of solutions. Now, one can solve the semiclassical Einstein field equations for a line element of the form (\[familiar\]) with $$A(r) = \left(1-\frac{2 M(r)}{r} \right) e^{2\psi(r)}$$ and $$B(r) = \left(1-\frac{2 M(r)}{r} \right)^{-1},$$ where $M(r) = M_{0}(r) + M_{1}(r)$ and $\psi(r) = \psi_{0}(r) + \psi_{1}(r)$ and the functions $M_{1}(r)$ and $\psi_{1}(r)$ are small corrections to the main approximation. In constructing the linearized solution we adopt the natural conditions $M_{0}(r_{+}) = r_{+}/2,$ $M_{1}(r_{+}) =0$ and $\psi_{0}(r) =0,$ leaving unspecified the integration constant, say $\tilde{C}_{2},$ which appears as a result of integration of the differential equation for $\psi_{1}(r).$ The zeroth-order solution is therefore parametrized by the exact location of the event horizon, $r_{+},$ and the charge $Q^{2}.$ The lukewarm configuration can be expressed in terms of ${r_{+}}$ and the zeroth-order approximation to the cosmological horizon ${\tilde{r}_{c}}$ $$A (r) = B^{- 1} (r) = \left( 1 - \frac{{r_{+}}{\tilde{r}_{c}}}{\left( {r_{+}}+ {\tilde{r}_{c}}\right) r} \right)^2 - \frac{r^2}{\left( {r_{+}}+ {\tilde{r}_{c}}\right)^2}.$$ In this approach we do not attribute any physical significance to the zeroth-order solution. 
Once again, the first-order corrections can easily be constructed by two simple quadratures, and the quantum-corrected lukewarm black hole is characterized by the exact location of the event horizon, ${r_{+}},$ the location of the cosmological horizon $$r_{c} = {\tilde{r}_{c}}+ \delta({r_{+}},{\tilde{r}_{c}})$$ and the relation between the (total) charge and ${r_{+}}$ and ${\tilde{r}_{c}}:$ $$Q^{2} = \left(\frac{{r_{+}}{\tilde{r}_{c}}}{{r_{+}}+{\tilde{r}_{c}}} \right)^{2} + \Delta({r_{+}},{\tilde{r}_{c}}).$$ By the assumption we made about the cosmological constant, it is still given by $$\Lambda = \frac{3}{({r_{+}}+{\tilde{r}_{c}})^2}.$$ Although we have calculated both $\delta({r_{+}},{\tilde{r}_{c}})$ and $\Delta({r_{+}},{\tilde{r}_{c}})$ we shall not display them here, simply because they are rather lengthy. Moreover, they can easily be constructed from the formulas listed in the appendix by switching from the $(a,b)$ representation to $({r_{+}},{\tilde{r}_{c}})$ (i.e., by the reverse procedure to that discussed below).
For the same black hole configuration one can switch from $({r_{+}},{\tilde{r}_{c}})$ representation to $(a,b)$ making use of the equations $$\frac{3}{({r_{+}}+{\tilde{r}_{c}})^2} = \frac{3}{(a+b)^{2}}
\label{repr1}$$ and $$\left(\frac{{r_{+}}{\tilde{r}_{c}}}{{r_{+}}+{\tilde{r}_{c}}} \right)^{2} + \Delta({r_{+}},{\tilde{r}_{c}})
= \left( \frac{ab}{a+b} \right)^{2}.
\label{repr2}$$ Indeed, substituting $${r_{+}}= a + a_{1}
\hspace{0.5cm}{\rm and} \hspace{0.5cm}
{\tilde{r}_{c}}= b+ b_{1},$$ where $a_{1}$ and $b_{1}$ are small corrections, into the system (\[repr1\]) and (\[repr2\]), one obtains $$a_{1} = - b_{1} = \frac{(a+b)^{2} }{2 a b (a-b)} \Delta(a,b).$$ Since we are interested in the first-order calculations, we can safely change the arguments of the function $ \Delta({r_{+}},{\tilde{r}_{c}}).$ It can be demonstrated that $r_{+}$ and $r_{c} =b + b_{1} + \delta(a,b)$ are given respectively by (\[event\]) and (\[cosmoH\]), as expected. Moreover, both approaches yield identical results for the metric tensor, provided the integration constant $\tilde{C}_{2}$ is related to the constant $C_{2}$ by $$\tilde{C}_{2} = - (a+b)^2 C_{2} -\frac{1}{\pi m^{2} (a+b)^{4}} \left(
\frac{37}{756} -\frac{29}{30} \xi +6 \xi^{2} -12 \xi^{3}
\right).$$ As before, to determine $C_{2}$ (and hence $\tilde{C}_{2}$) additional information is required.
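As a quick sanity check (this verification script is ours, not part of the paper), one can linearize the system (\[repr1\])-(\[repr2\]) symbolically and recover the stated expression for the shift $a_{1}$:

```python
import sympy as sp

a, b, a1, Delta = sp.symbols('a b a_1 Delta')

# Eq. (repr1) forces r_+ + r~_c = a + b exactly, hence b_1 = -a_1.
r_plus = a + a1
r_c_tilde = b - a1

# Eq. (repr2), expanded to first order in the small shift a_1:
lhs = (r_plus * r_c_tilde / (r_plus + r_c_tilde))**2 + Delta
linearized = sp.series(lhs, a1, 0, 2).removeO()

sol = sp.solve(sp.Eq(linearized, (a*b/(a + b))**2), a1)[0]

# the paper's result: a_1 = (a+b)^2 Delta / (2 a b (a-b))
expected = (a + b)**2 * Delta / (2*a*b*(a - b))
assert sp.simplify(sol - expected) == 0
```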
Final remarks
=============
In this paper we have constructed perturbative solutions to the semiclassical Einstein field equations describing a spherically symmetric and static lukewarm black hole. The total source term is composed of two parts: the (classical) stress-energy tensor of the electromagnetic field and the renormalized stress-energy tensor of the quantized massive scalar field in a large mass limit. In the course of our calculations we used two different parametrizations, and, assuming that the first-order results describe the same black hole configuration, we constructed the transformation rules from one parametrization to the other. In the parametrization $(a,b)$ we first calculated the zeroth-order solution. Subsequently, making use of the quantum part of the total stress-energy tensor constructed in the classical background, we calculated the corrections to the metric potentials and the corrections to the horizons. This procedure can be thought of as switching the quantized field on and analyzing its influence on the classical background via the back-reaction. On the other hand, in the parametrization $({r_{+}},{\tilde{r}_{c}}),$ we are looking for a self-consistent solution from the very beginning. This requires a generic stress-energy tensor which depends functionally on the metric tensor. Since the calculations in that case are rather involved and produce complex results, we discussed them only briefly.
We conclude this paper with a number of comments:
1. Here we have considered only the quantized massive scalar fields with $\xi R \phi^{2}$ coupling in a large mass limit. Since the approximate stress-energy tensors of the massive spinor and the massive vector fields for a generic metric are known, the results presented here can easily be extended to these cases.
2. Once the renormalized stress-energy tensor is known, a similar analysis can be carried out for the quantized massless fields. Unfortunately, although the renormalized stress-energy tensors of various fields are well documented in the Schwarzschild spacetime (see for example Refs. [@Page:1982fm; @Brown:1985ri; @Brown:1986jy; @Frolov:1987gw; @Jirinek96prd; @Jirinek97cqg; @Jirinek97prd; @Jirinek98prd; @Groves:2002mh; @Carlson:2003ub] and the references cited therein) less is known about $T_{a}^{b}$ in more complicated geometries. The remarks made in subsection \[sec:sss\] remain intact and it is crucial to check the regularity of the stress-energy tensor on the horizons.
3. Since the stress-energy tensor is constructed solely from the Riemann tensor, its derivatives and the metric, one expects that similar calculations can be performed in other theories in which the higher curvature terms appear. For example, although we have considered only the main approximation to the stress-energy tensor constructed from the integrated coincidence limit of the Hadamard-DeWitt coefficient $a_{3}(x,x'),$ the calculations can be extended, at the price of technical complications, to the next-to-leading term involving functional derivatives of the coincidence limit of the coefficient $a_{4}(x,x').$ Preliminary calculations indicate that it is possible to construct the lukewarm black hole in such a case.
4. It can be shown that the approximation to the mean value of the field fluctuation in a large mass limit is given by [@Frolov_hab] $$\langle \phi^{2}\rangle = \frac{1}{16\pi^{2}}\sum_{n=2}^{N}\frac{(n-2)!}{m^{2(n-1)}} [a_{n}],$$ where $N-1$ is the number of terms retained in the expansion. Taking $$[a_{2}] = \frac{1}{6}\left(\frac{1}{5} -\xi \right) \Box R +
\frac{1}{2} \left(\frac{1}{6} -\xi \right)^{2}R^{2}
-\frac{1}{180} R_{ab}R^{ab}+
\frac{1}{180} R_{abcd} R^{abcd}$$ and $[a_{3}]$ as given by (\[a3a\]-\[a3e\]) one can calculate the first two terms of the approximation. For example, routine calculations carried out in the spacetime of the Einstein-Maxwell lukewarm black hole (\[spec1\]) give for the main approximation $$16 \pi^{2} m^{2} \langle \phi^{2}\rangle = \frac{4}{15}\frac{a^{2} b^{2}}{r^{6} w^{2}}
-\frac{8}{15} \frac{a^{3}b^{3}}{r^{7} w^{3}}
+ \frac{13}{45} \frac{a^{4} b^{4}}{r^{8} w^{4}}
+\frac{29}{15 w^{4}} -\frac{24}{w^{4}}\xi + \frac{72}{w^{4}}\xi^{2},$$ where $w = a + b.$
5. Since the cosmological “constant” is expected to vary in time (see for example [@Padmana; @Nov; @Mat] and the references cited therein) it would be interesting to analyze the charged black holes in such a dynamic environment. This, however, would require a deeper understanding of quantum phenomena taking place in the spacetime of nonstatic black holes.
6. Extension of the results presented in this paper to black hole spacetimes in $d$ dimensions requires detailed knowledge of the higher HMDS coefficients.
Some of the listed problems are under active investigation and the results will be published elsewhere.
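The main-approximation expression for $16\pi^{2} m^{2}\langle \phi^{2}\rangle$ displayed in item 4 above is manifestly regular on both horizons; a quick symbolic substitution confirms this (the sample values of $a$, $b$ and $\xi$ below are our own illustrative choice, not from the paper):

```python
import sympy as sp

r, a, b, xi = sp.symbols('r a b xi', positive=True)
w = a + b

# main approximation to 16 pi^2 m^2 <phi^2>, copied from the text
phi2 = (sp.Rational(4, 15)*a**2*b**2/(r**6*w**2)
        - sp.Rational(8, 15)*a**3*b**3/(r**7*w**3)
        + sp.Rational(13, 45)*a**4*b**4/(r**8*w**4)
        + sp.Rational(29, 15)/w**4
        - 24*xi/w**4 + 72*xi**2/w**4)

# evaluate on the event (r = a) and cosmological (r = b) horizons
vals = {a: 1, b: 3, xi: sp.Rational(1, 6)}
for horizon in (a, b):
    assert phi2.subs(r, horizon).subs(vals).is_finite
```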
Appendix
========

The geometry of the semiclassical lukewarm black hole in the $(a,b)$ parametrization is described by the line element (\[familiar\]) with (\[alfa\]) and (\[bett\]). The function $\beta(r)$ is given by $$\beta(r) =\frac{1}{\pi m^{2} (a+b)^{6}} \sum_{i=0}^{3} \beta_{i}(r) \xi^{i} + \frac{C_{1}}{r}$$ where $$\begin{aligned}
\beta_{0}(r) &=&\frac{2327 a^6 b^6}{11340 r^{10}}-\frac{613 a^6 b^5}{840 r^9}
+\frac{8611 a^6 b^4}{8820 r^8}-\frac{2221 a^6 b^3}{3780 r^7}
+\frac{19 a^6 b^2}{140 r^6}-\frac{613 a^5 b^6}{840 r^9}+\frac{8611 a^5 b^5}{4410 r^8}
-\frac{2221 a^5 b^4}{1260 r^7}+\frac{19 a^5 b^3}{35 r^6}
\nonumber \\
&&+\frac{8611 a^4 b^6}{8820 r^8}
-\frac{2221 a^4 b^5}{1260 r^7}+\frac{1067 a^4 b^4}{2100 r^6}
+\frac{617 a^4 b^3}{1260 r^5}-\frac{289 a^4 b^2}{1260 r^4}
-\frac{2221 a^3 b^6}{3780 r^7}+\frac{19 a^3 b^5}{35 r^6}+\frac{617 a^3 b^4}{1260 r^5}
-\frac{289 a^3 b^3}{630 r^4}
\nonumber \\
&&+\frac{19 a^2 b^6}{140 r^6}-\frac{289 a^2 b^4}{1260 r^4}
-\frac{a^2 b^2}{180 r^2}-\frac{37 r^2}{756},\end{aligned}$$
$$\begin{aligned}
\beta_{1}(r) &=& -\frac{91 a^6 b^6}{90 r^{10}}+\frac{113 a^6 b^5}{30 r^9}-
\frac{238 a^6 b^4}{45 r^8}+\frac{10 a^6 b^3}{3 r^7}-\frac{4 a^6 b^2}{5 r^6}
+\frac{113 a^5 b^6}{30 r^9}-\frac{476 a^5 b^5}{45 r^8}+\frac{10 a^5 b^4}{r^7}
-\frac{16 a^5 b^3}{5 r^6}
\nonumber \\
&&-\frac{238 a^4 b^6}{45 r^8}+\frac{10 a^4 b^5}{r^7}
-\frac{163 a^4 b^4}{50 r^6}-\frac{41 a^4 b^3}{15 r^5}+\frac{4 a^4 b^2}{3 r^4}
+\frac{10 a^3 b^6}{3 r^7}-\frac{16 a^3 b^5}{5 r^6}-\frac{41 a^3 b^4}{15 r^5}
+\frac{8 a^3 b^3}{3 r^4}
\nonumber \\
&&-\frac{4 a^2 b^6}{5 r^6}+\frac{4 a^2 b^4}{3 r^4}
+\frac{13 a^2 b^2}{6 r^2}+\frac{29 r^2}{30}\end{aligned}$$
and $$\beta_{2}(r) = -\frac{\beta_{3}(r)}{2} = -\frac{18 a^2 b^2}{r^2}-6 r^2.$$
The function $\alpha(r)$ is given by $$\alpha(r) = \beta(r) + \frac{f_{0}(r)}{\pi m^{2}(a+b)^{4}} \left[ \sum_{i=0}^{1} \alpha_{i}(r) \xi^{i}
-6 \xi^{2} + 12 \xi^{3} \right] - (a+b)^{2} C_{2},$$ where $$\alpha_{0}(r) =-\frac{229 a^4 b^4}{840 r^8}+\frac{184 a^4 b^3}{441 r^7}
-\frac{5 a^4 b^2}{28 r^6}+\frac{184 a^3 b^4}{441 r^7}-\frac{5 a^3 b^3}{14
r^6}-\frac{5 a^2 b^4}{28 r^6}+\frac{7 a^2 b^2}{180 r^4}-\frac{37}{756}$$ and $$\alpha_{1}(r) =\frac{13 a^4 b^4}{10 r^8}-\frac{32 a^4 b^3}{15 r^7}
+\frac{14 a^4 b^2}{15 r^6}-\frac{32 a^3 b^4}{15 r^7}+\frac{28 a^3 b^3}{15 r^6}
+\frac{14 a^2 b^4}{15 r^6}-\frac{a^2 b^2}{5 r^4}+\frac{29}{30}.$$
The location of the event horizon of the quantum-corrected lukewarm black hole is given by (\[event\]) with $$W^{(0)}_{+} = \frac{17 a^6}{317520 b^3}+\frac{17 a^5}{31752 b^2}+\frac{41 a^4}{4900 b}
+\frac{b^6}{504 a^3}+\frac{2641 a^3}{396900}+\frac{31 b^5}{35280 a^2}
+\frac{719 a^2 b}{11340}+\frac{173 b^4}{11340 a}+\frac{1208 a b^2}{11025}
+\frac{89 b^3}{2835},$$ $$W^{(1)}_{+} = -\frac{a^4}{50 b}-\frac{b^6}{180 a^3}+\frac{109 a^3}{300}-a^2 b
-\frac{11 b^4}{180 a}-\frac{473 a b^2}{300}-\frac{b^3}{10}$$ and $$W_{+}^{(2)} = - \frac{W_{+}^{(3)}}{2} = -3 a^{2} + 9 a b.$$ The integration constant $C_{1}$ is given by (\[ce1\]) with $$\begin{aligned}
V^{(0)} &=& \frac{17 a^7}{158760 b^3}+\frac{17 a^6}{15876 b^2}
+\frac{41 a^5}{2450 b}+\frac{24707 a^4}{396900}+\frac{17 b^7}{158760 a^3}
+\frac{1993 a^3 b}{11340}
\nonumber \\
&&+\frac{17 b^6}{15876 a^2}
+\frac{14039 a^2 b^2}{44100}+\frac{41 b^5}{2450 a}
+\frac{1993 a b^3}{11340}+\frac{24707 b^4}{396900}, \end{aligned}$$ $$V^{(1)} = -\frac{a^5}{25 b}-\frac{6 a^4}{25}-\frac{89 a^3 b}{30}
-\frac{439 a^2 b^2}{75}-\frac{b^5}{25 a}-\frac{89 a b^3}{30}
-\frac{6 b^4}{25}$$ and $$V^{(2)} = -\frac{V^{(3)}}{2} = 18 a b.$$
---
author:
- |
Nicholas Johnson, Vidyashankar Sivakumar, Arindam Banerjee\
{njohnson,sivakuma,[email protected]}\
Department of Computer Science and Engineering\
University of Minnesota, Twin Cities
bibliography:
- 'library.bib'
- 'structured\_bandits.bib'
title: Structured Stochastic Linear Bandits
---
Introduction {#sec:intro}
============
Background: High-Dimensional Structured Estimation {#sec:est}
==================================================
Structured Bandits: Problem and Algorithm {#sec:setting}
=========================================
Algorithm {#sec:alg}
---------
Regret Bound for Structured Bandits {#sec:results}
===================================
Overview of the Analysis {#sec:analysis}
========================
Conclusions {#sec:conc}
===========
---
abstract: |
We interpret a valuation $v$ on a ring $R$ as a map $v: R \to M$ into a so-called bipotent semiring $M$ (the usual max-plus setting), and then define a **supervaluation** $\vrp$ as a suitable map into a supertropical semiring $U$ with ghost ideal $M$ (cf. [@IzhakianRowen2007SuperTropical], [@IzhakianRowen2008Matrices]) covering $v$ via the ghost map $U \to M$. The set ${\operatorname{Cov}}(v)$ of all supervaluations covering $v$ has a natural ordering which makes it a complete lattice. In the case that $R$ is a field, hence for $v$ a Krull valuation, we give a completely explicit description of ${\operatorname{Cov}}(v)$.
The theory of supertropical semirings and supervaluations aims for an algebra fitting the needs of tropical geometry better than the usual max-plus setting. We illustrate this by giving a supertropical version of Kapranov’s Lemma.
address:
- 'Department of Mathematics, Bar-Ilan University, 52900 Ramat-Gan, Israel'
- 'Department of Mathematics, NWF-I Mathematik, Universität Regensburg, 93040 Regensburg, Germany'
- 'Department of Mathematics, Bar-Ilan University, 52900 Ramat-Gan, Israel'
author:
- Zur Izhakian
- Manfred Knebusch
- Louis Rowen
title: Supertropical semirings and supervaluations
---
[^1] [^2] [^3]
Introduction {#introduction .unnumbered}
============
As explained in [@IMS] and [@Gathmann:0601322], tropical geometry grew out of a logarithmic correspondence taking a polynomial $f({\lambda}_1, \dots, {\lambda}_n)$ over the ring of Puiseux series to a corresponding polynomial $\bar f({\lambda}_1, \dots, {\lambda}_n)$ over the max-plus algebra $T$. A key observation is Kapranov’s Lemma, that this correspondence sends the algebraic variety defined by $f$ into the so-called [*corner locus*]{} defined by $\bar f$. More precisely, this correspondence involves the negative of a valuation (where the target ($T$) is an ordered monoid rather than an ordered group), which has led researchers in tropical mathematics to utilize valuation theory. In order to avoid the introduction of the negative, some researchers, such as [@SS], have used the min-plus algebra instead of the max-plus algebra. There is a deeper result which describes the image of this correspondence; several versions appear in the literature, one of which is in [@Payne08].
Note that whereas a valuation $v$ satisfies $v(ab) = v(a) + v(b),$ one only has $$v(a+b) = \min \{ v(a), v(b)\}$$ when $v(a) \ne
v(b);$ for the case that $v(a) = v(b)$, $v(a+b)$ could be any element $\ge v(a).$ From this point of view, the max-plus (or, dually, min-plus) algebra does not precisely reflect the tropical mathematics. In order to deal with this issue, as well as to enhance the algebraic structure of the max-plus algebra $T$, the first author introduced a cover of $T$, graded by the multiplicative monoid $(\mathbb Z_2,\cdot),$ which was dubbed the [*extended tropical arithmetic*]{}. Then, in [@IzhakianRowen2007SuperTropical] and [@IzhakianRowen2008Matrices], this structure has been amplified to the notion of **supertropical semiring**. A supertropical semiring $U$ is equipped with a “**ghost map**" $\nu := \nu_U : U \to U$, which respects addition and multiplication and is idempotent, i.e., $\nu \circ \nu = \nu$. Moreover $a + a = \nu(a)$ for every $a \in U$ (cf. §3). This rule replaces the rule $a+ a= a$ in the usual max-plus (or min-plus) arithmetic. We call $\nu(a)$ the “**ghost**" of $a$ (often writing $a^\nu$ instead of $\nu(a)$), and we call the elements of $U$ which are not ghost “**tangible**"[^4].
The image of the ghost map is a so-called **bipotent semiring**, i.e., a semiring $M$ such that $a + b \in \{ a, b\}$ for every $a, b \in M$. So $M$ is a semiring typically occurring in tropical algebra. In this paper supertropical and bipotent semirings are nearly always tacitly assumed to be commutative.
It turns out that supertropical semirings allow a refinement of valuation theory to a theory of “supervaluations". Supervaluations seem to be able to give an enriched version of tropical geometry. In the present paper we illustrate this by giving a refined and generalized version of Kapranov’s Lemma (§9, §11). Very roughly, one may say that the usual tropical algebra is present in the ghost level of our supertropical setting.
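To make the supertropical arithmetic concrete, here is a minimal sketch (our illustration, with the zero element omitted and real values playing the role of the max-plus entries) of the extended tropical semiring, with a ghost map $\nu$ obeying $a + a = \nu(a)$:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ST:
    """Element of the extended tropical semiring: a real value, tangible or ghost."""
    val: float
    ghost: bool = False

def nu(a: ST) -> ST:          # the ghost map
    return ST(a.val, True)

def add(a: ST, b: ST) -> ST:
    if a.val > b.val:
        return a
    if b.val > a.val:
        return b
    return nu(a)              # equal values produce a ghost: a + a = nu(a)

def mul(a: ST, b: ST) -> ST:  # values add (max-plus), ghosts absorb
    return ST(a.val + b.val, a.ghost or b.ghost)

x, y = ST(2.0), ST(5.0)
assert add(x, x) == nu(x)                    # a + a = a^nu
assert nu(nu(x)) == nu(x)                    # nu is idempotent
assert nu(add(x, y)) == add(nu(x), nu(y))    # nu respects addition
assert nu(mul(x, y)) == mul(nu(x), nu(y))    # nu respects multiplication
```

The image of `nu` is exactly the ghost copy of the max-plus semiring, matching the description of the ghost ideal above.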
We consider valuations on rings (as defined by Bourbaki [@B]) instead of just fields. We mention that these are essential for understanding families of valuations on fields, cf. e.g. [@HK] and [@KZ]. We use multiplicative notation, writing a valuation $v$ on a ring $R$ as a map into $\Gm \cup \{ 0 \}$ with $\Gm$ a multiplicative ordered abelian group and $0 < \Gm$, obeying the rules $$\renewcommand{\theequation}{$*$}\addtocounter{equation}{-1}\label{eq:str.1}
\begin{array}{c}
v(0) = 0, \quad v(1) = 1, \quad v(ab) = v(a) v(b), \\[2mm]
v(a+b)
\leq \max(v(a),v(b)). \\
\end{array}$$ We view the ordered monoid $\Gm \cup \{ 0\}$ as a bipotent semiring by introducing the addition $x + y := \max(x,y)$, cf. §1 and §2. It is then very natural to replace $\Gm \cup \{ 0
\}$ by any bipotent semiring $M$, and to define an **m-valuation** (= monoid valuation) $v: R \to M$ in the same way $(*)$ as before.
Given an m-valuation $v : R \to M$ there exist multiplicative mappings $\vrp: R \to M$ into various supertropical semirings $U$, with $\vrp(0) = 0$, $\vrp(1) =1$, such that $M$ is the ghost ideal of $U$ and $\nu_U \circ \vrp = v$. These are the **supervaluations** covering $v$, cf. §4.
In §5 we define maps $\al:U \to V$ between supertropical semirings, called **transmissions**, which have the property that for a supervaluation $\vrp:R \to U$ the composite $\al \circ
\vrp : R \to V$ is again a supervaluation. Given two supervaluations $\vrp: R \to U$ and $\psi: R \to V$ (not necessarily covering the same -valuation $v$), we say that $\vrp$ **dominates** $\psi$, and write $\vrp \geq \psi$, if there exists a transmission $\al: U \to V$, such that $\psi = \al
\circ \vrp$. The transmission $\al$ is then essentially unique.
Restricting the dominance relation to the set of supervaluations[^5] covering a fixed m-valuation $v: R \to M$, we obtain a partially ordered set ${\operatorname{Cov}}(v)$, which turns out to be a complete lattice, as proved in §7. The bottom element of this lattice is the m-valuation $v$, viewed as a supervaluation. The top element, denoted $\vrp_v: R \to U(v)$, can be described explicitly in good cases. This description is already given in §4, cf. Example \[examp4.5\]. The other elements of ${\operatorname{Cov}}(v)$ are obtained from $\vrp_v$ by dividing out suitable equivalence relations on the semiring $U(v)$, called **MFCE-relations** (= multiplicative fiber conserving equivalence relations). They are defined in §6. Finally in §8, we obtain an explicit description of all elements of ${\operatorname{Cov}}(v)$ in the case that $R$ is a field, in which case $v$ is a Krull valuation.
If $R$ is only a ring, our results are far less complete. Nevertheless it seems to be absolutely necessary to work at least in this generality for many reasons, in particular functorial ones, cf. e.g. [@HK], [@KZ].
In §9 we delve deeper into the supertropical theory to pinpoint a relation, which we call the **ghost surpassing relation**, which seems to be a key for working in supertropical semirings. On the one hand, the ghost surpassing relation restricts to equality on tangible elements, thereby enabling us to specialize to the max-plus theory. On the other hand, it appears in virtually every supertropical theorem proved so far, especially in supertropical matrix theory in [@IzhakianRowen2008Matrices] and [@IzhakianRowen2009Equations].
In the present paper the ghost surpassing relation is the essential gadget for understanding and proving a general version of Kapranov’s Lemma in §11 (Theorem 11.15, preceded by Theorem 9.11), valid for any valuation $v: R \to M$ which is “**strong**". This means that $v(a+b) = \max(v(a), v(b))$ whenever $v(a) \neq v(b)$. If $R$ is a ring, every valuation on $R$ is strong, as is very well known, but if $R$ is only a semiring, this is a restrictive condition. On our way to Kapranov’s Lemma we employ supervaluations $\vrp \in {\operatorname{Cov}}(v)$ which are **tangible**, i.e., have only tangible values, and are **tangibly additive**, which means that $\vrp(a+b) =
\vrp(a) + \vrp(b)$ whenever $\vrp(a) + \vrp(b)$ is tangible. We apostrophize tangibly additive supervaluations which cover strong m-valuations as **strong supervaluations**.
The strong tangible supervaluations in ${\operatorname{Cov}}(v)$ seem to be the most suitable ones for applications in tropical geometry also beyond Kapranov’s Lemma, as to be explained at the end of this introduction. They form a sublattice $\tsCov(v)$ of ${\operatorname{Cov}}(v)$. In particular there exists an “initial" tangible strong valuation in ${\operatorname{Cov}}(v)$, denoted by $\overline \vrp _v$, which dominates all others. It gives the “best" supertropical version of Kapranov’s Lemma, cf. §9. At the end of §10 we make $\overline \vrp _v$ explicit in the case that $v$ is the natural valuation of the field of formal Puiseux series in a variable $t$ (with real or with rational exponents). We can interpret the value of $\overline \vrp _v (a(t))$ of a Puiseux series $a(t)$ as the leading term of $a(t)$, while $v(a(t))$ can be seen as the $t$-power contained in the leading term.
Strictly speaking, Kapranov’s Lemma extends the valuation $v$ to the polynomial ring $R[{\lambda}_1, \dots, {\lambda}_n]$ over $R$, with target in the polynomial ring $M[{\lambda}_1, \dots, {\lambda}_n]$, which no longer is bipotent. Thus, the theory in this paper needs to be generalized if we are to deal formally with such notions. This is set forth in the last Section \[sec:13\], in which the target of a valuation is replaced by a monoid with a binary sup operation.
Since the theory of tropicalization has developed recently in terms of the valuations on the field of Puiseux series, let us indicate briefly how this theory can be extended to the supertropical environment.
We recall the algebraic theory of analytification, as presented by Payne [@Payne09]. A **multiplicative seminorm** $|\phantom{e}|: R\to \Real_{\geq 0}$ on a ring $R$ is a multiplicative map satisfying the triangle inequality $$|a+b| \le
|a| + |b|$$ for all $a,b\in R.$ In particular, any m-valuation $v: R \to \Real_{\geq 0}$ is a seminorm. Recall that we use multiplicative notation. If $X$ is an affine variety over a field $K$ (e.g. $K$ is a field of generalized Puiseux series over an algebraically closed field) and $v: K \to \Real_{\geq 0}$ is a valuation, then Payne’s space $K[X]^\an$ is the set of all multiplicative seminorms on $K[X]$ that extend $v$. More generally, if $v:K \to M$ is a valuation with $M= \tG \cup \{0
\}$ any bipotent semifield (cf. §1), then we may define a space $K[X]^\an$ associated to $(K,v)$ and $X$ in exactly the same way.
But in the supertropical context we can do more. Let $$U:= D(\tG) = \STR(\tG, \tG, {\operatorname{id}}_\tG),$$ as defined in Example \[examps3.16\]. This is a supertropical semifield with ghost part $\tG \cup \{ 0 \} = M$. We define a space $K[X]^\san$ as the set of all strong supervaluations $\vrp: K[X] \to
U$ such that the valuation $w: K[X] \to M$ covered by $\vrp$ (cf. Definition \[defn4.1\]) is an element of $K[X]^\an$, i.e., $w$ extends $v$.
We have the natural map $K[X]^\san \to K[X]^\an$, given by $\vrp
\mapsto w$, which exhibits $K[X]^\san$ as a “covering” of Payne’s space $K[X]^\an$. But there is still another relation between these two spaces, which seems to be more intriguing. The set $U$ contains a second copy of $M$ as a multiplicative submonoid, namely the tangible part $\tT(U) \cup \{ 0\} \cong \tG
\cup \{ 0 \}$. Interpreting the elements of $K[X]^\an$ as maps from $K[X]$ to $\tT(U) \cup \{ 0\}$ we may view $K[X]^\an$ as the set of all tangible supervaluations $\vrp: K[X] \to
U$ (automatically strong) with $\vrp|K$ covering $v$. Thus $K[X]^\an$ can be seen as the subspace of $K[X]^\san$ consisting of the $\vrp \in K[X]^\san$ which do not have ghost values.
Of course, nothing can prevent us from replacing $K$ by any ring, or even semiring $R$, and $v$ by any strong -valuation on $R$, and defining $K[X]^\an$ and $K[X]^\san$ in this generality.
The reader may ask whether valuations and supervaluations on semirings instead of just rings deserve interest apart from formal issues. They do. It is only for not making a long paper even longer that we do not give applications to semirings here.
The semiring $R = \sum A^2$ of sums of squares of a commutative ring (or even a field) $A$ with $-1 \notin R$ is a case in point. Real algebra seems to be a fertile ground for studying valuations and supervaluations on semirings. The paper contains only one very small hint pointing in this direction, Example \[example2.2\].
Bipotent semirings {#bilpotentsemirings}
==================
Let $R$ be a semiring (always with unit element $1=1_R).$ Later we will assume that $R$ is commutative, but presently this is not necessary.
\[defn1.1\] We call a pair $(a,b)\in
R^2$ [**bipotent**]{} if $a+b\in\{a,b\}.$ We call the semiring $R$ [**bipotent**]{} if every pair $(a,b)\in R^2$ is bipotent.
\[prop1.2\] Assume that $R$ is a bipotent semiring. Then the binary relation $(a,b\in R)$ $$\label{1.1}
a\le b\quad\text{iff}\quad a+b=b$$ on $R$ is a total ordering on the set $R,$ compatible with addition and multiplication, i.e., for all $a,b,c\in R$ $$\begin{aligned}
&a\le b\quad\Rightarrow\quad ac\le bc,\ ca\le cb,
\\ & a\le b\quad\Rightarrow \quad a+c\le b+c.\end{aligned}$$
A straightforward check.
\[rem1.3\] We can define such a binary relation $\le$ by (\[1.1\]) in any semiring, and then obtain a partial ordering compatible with addition and multiplication. The ordering is total iff $R$ is bipotent. Clearly, $0_R\le x$ for every $ x\in R.$
\[defn1.4\] We call a semiring $R$ a [**semidomain**]{}, if $R$ has no zero divisors, i.e., the set $R\setminus \{0\}$ is closed under multiplication. We call $R$ a [**semifield**]{}, if $R$ is commutative and every element $x\ne0$ of $R$ is invertible; hence $R\setminus\{0\}$ is a group under multiplication.
Given a bipotent semidomain $R$, the set $G:=R\setminus\{0\}$ is a totally ordered monoid under the multiplication of $R.$
In this way we obtain all (totally) ordered monoids. Indeed, if $G=(G,\cdot)$ is a given ordered monoid, we gain a bipotent semiring $R$ as follows: Adjoin a new element $0$ to $G$ and form the set $R:=G\cup\{0\}.$ Extend the multiplication on $G$ to a multiplication on $R$ by the rules $0\cdot g=g\cdot0=0$ for any $g\in G$ and $0\cdot0=0.$ Extend the ordering of $G$ to a total ordering on $R$ by the rule $0<g$ for $g\in G.$ Then define an addition on $R$ by the rule $$x+y:=\max(x,y)$$ for any $x,y\in R.$ It is easily checked that $R$ is a bipotent semiring, and that the ordering defined on $R$ by the rule (\[1.1\]) is the given one. We denote this semiring $R$ by $T(G).$
These considerations can be easily amplified to the following theorem.
\[thm1.4\] The category of (totally) ordered monoids $G$ is isomorphic[^6] to the category of bipotent semidomains $R$ by the assignments $$G\mapsto T(G),\quad R\mapsto R\setminus\{0\}.$$
Here the morphisms in the first category by definition are the order preserving monoid homomorphisms $\gamma: G\to G'$ in the weak sense; i.e., $\gamma$ is multiplicative, $\gamma(1)=1,$ and $x\le y\Rightarrow \gamma(x)\le \gamma(y),$ while the morphisms in the second category are the semiring homomorphisms (with $1\mapsto1).$
In the following we regard an ordered monoid and the associated bipotent semiring as the same entity in a different disguise. Usually we prefer the semiring viewpoint.
\[examp1.5\] Starting with the monoid $G=(\mathbb
R,+)$, i.e., the field of real numbers with the usual addition, we obtain a bipotent semifield $$T(\mathbb R):=\mathbb R\cup\{-\infty\},$$ where addition $\oplus$ and multiplication $\odot$ of $T(\mathbb
R)$ are defined as follows, and the neutral element of addition is denoted by $-\infty$ instead of $0,$ since our monoid is now given in additive notation. For $x,y\in\mathbb R$ $$\begin{aligned}
x\oplus y&=\max(x,y),\\
x\odot y&=x+y,\\
(-\infty)\oplus x&=x\oplus(-\infty)=x,\\
(-\infty)\odot x&=x \odot (-\infty)=-\infty,\\
(-\infty)\oplus(-\infty)&=-\infty,\\
(-\infty)\odot(-\infty)&=-\infty.\end{aligned}$$
$T(\mathbb R)$ is the “real tropical semifield" of common tropical algebra, often called the “max-plus" algebra $\mathbb
R\cup\{-\infty\}:$ cf. [@IMS], or [@SS] (there a “min-plus" algebra is used).
m-valuations {#sec:mval}
========================
In this section we assume that all occurring semirings and monoids are commutative.
Let $R$ be a semiring.
\[defn2.1\] An [**m-valuation**]{} (= monoid valuation) on $R$ is a map $v: R\to M$ into a (commutative) bipotent semiring $M\ne\{0\}$ with the following properties: $$\begin{aligned}
&V1: v(0)=0,\\
&V2: v(1)=1,\\
&V3:v(xy)=v(x)v(y)\quad\forall x,y\in R,\\
&V4: v(x+y)\le v(x)+v(y)\quad [=\max(v(x),v(y))]\quad \forall
x,y\in R.\end{aligned}$$ We call the -valuation $v$ [**strict**]{}, if instead of V4 the following stronger axiom holds: $$\begin{aligned}
V5: v(x+y)=v(x)+v(y)\quad \forall x,y\in R. & \qquad \qquad
\qquad \qquad\end{aligned}$$
Note that a strict $m$-valuation $v: R\to M$ is just a semiring homomorphism from $R$ to $M.$
In the special case that $M=\Gamma\cup\{0\}$ with $\Gamma$ an ordered abelian group, we call the $m$-valuation $v:
R\to M$ a [**valuation**]{}. Notice that in the case that $R$ is a ring (instead of a semiring), this is exactly the notion of a valuation as defined by Bourbaki [@B] (Alg. Comm. VI, §3, No.1) and studied, e.g., in [@HK] and [@KZ Chap. I], except that for $\Gamma$ we have chosen the multiplicative notation instead of the additive notation.
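A classical instance of these axioms, added here as our own illustration (not from the text): the $3$-adic absolute value on $\mathbb Z$, written multiplicatively as $v(a)=3^{-k}$ with $3^k$ the largest power of $3$ dividing $a$, is a valuation. A quick numerical check of V2–V4 in Python:

```python
from fractions import Fraction

def v(x, p=3):
    """3-adic absolute value on Z: v(0)=0, v(x)=p^(-k) where p^k || x."""
    if x == 0:
        return Fraction(0)
    k = 0
    while x % p == 0:
        x //= p
        k += 1
    return Fraction(1, p**k)

# V2: v(1) = 1
assert v(1) == 1
for a in range(-20, 21):
    for b in range(-20, 21):
        # V3: multiplicativity
        assert v(a * b) == v(a) * v(b)
        # V4: v(a+b) <= max(v(a), v(b))  (the ultrametric inequality)
        assert v(a + b) <= max(v(a), v(b))

# v is not strict: v(1+2) = v(3) = 1/3 < max(v(1), v(2)) = 1.
assert v(1 + 2) < max(v(1), v(2))
```

The last line also shows the difference between V4 and V5 on a concrete pair.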
If $v: R\to M$ is an $m$-valuation, we may replace $M$ by the submonoid $v(R).$ We then speak of $v$ as a [**surjective**]{} [**$m$-valuation**]{}.
\[defn2.2\] A (commutative) monoid $G$ is called [**cancellative**]{}, if, for any $a,b,c\in G$, the equation $ac=bc$ implies $a=b.$
Notice that an ordered monoid $G$ is cancellative iff $a<b$ implies $ac<bc$ for any $a,b,c\in G.$ An ordered cancellative monoid can be embedded into an ordered abelian group $\Gamma$ in the well-known way by introducing formal fractions $\frac{a}{b}$ for $a,b\in G.$ Then an $m$-valuation $v$ from $R$ to $T(G)=G\cup\{0\}$ is essentially the same thing as an $m$-valuation from $R$ to $\Gamma\cup\{0\}.$ For this reason, we extend the notion of “valuation" from above as follows.
\[defn2.3\] A [**valuation**]{} on a semiring $R$ is an $m$-valuation $v: R\to
G\cup\{0\}$ with $G$ a cancellative monoid.
$m$-valuations on rings have been studied in [@HV], and then by D. Zhang [@Z].
If $R$ is a [**ring**]{}, an $m$-valuation $v: R\to M $ can [**never**]{} be strict: we have an element $-1\in R$ with $1+(-1)=0,$ and if $v$ were strict it would follow that $0_M=v(0)=\max(v(1),v(-1));$ hence $v(1)=0_M,$ contradicting axiom V2. But for $R$ a semiring there may exist interesting strict $m$-valuations, even with values in a group.
\[example2.2\] Let $T$ be a [**preprime**]{} in a ring $R,$ by which we simply mean that $T$ is a subsemiring of $R$ $(T+T\subset T, \ T\cdot
T\subset T,\ 0\in T,\ 1\in T).$ {We do not exclude the case $-1\in
T$ (“improper preprime"), but such preprimes will not matter here.}
We say that a valuation $v: R\to M$ is $T$-[**convex**]{} if the restriction $v | T: T\to M$ is strict. As is well-known, if $T=\sum R^2$ (and $M\setminus\{0\}$ is a group) the $T$-convex valuations are just the real valuations on $R.$ (A valuation $v:
R\to \Gamma\cup\{0\}$ is called [**real**]{} if the residue class field $k(v)$ is formally real.) See [@KZ1], §5 for $T$ a preordering, and §2 for $T=\sum R^2.$
The entire paper [@KZ1] witnesses the importance of $T$-convex valuations for $T$ a preordering.
If $R$ is a ring, every $m$-valuation on $R$ is strong. This can be seen by the same argument as is well-known for valuations on fields.
Semirings, even semifields, may admit valuations which are not strong.
\[examp2.7\] Let $F$ be a totally ordered field, and $R:=\{x\in F|x \geq 0\}$ the subsemifield of nonnegative elements. Further let $\Gamma:=\{x\in F|x > 0\},$ viewed as a totally ordered group, and $M:=\{0\}\cup\Gamma$ the associated bipotent semifield. The map $v: R\to M$ with $v(0)=0,$ $v(a)=\frac{1}{a}$ for $a\ne0$, is a valuation on $R,$ which is not strong.
\[prop2.4\]
1. If $v: R\to M$ is an $m$-valuation and $M$ is a bipotent semidomain, then $v^{-1}(0)$ is a prime ideal of $R$ (i.e., an ideal of $R,$ whose complement in $R$ is closed under multiplication).
2. If $v$ is strong, then, for any $x\in R$ and $z\in
v^{-1}(0),$ $$\label{2.1}
v(x+z)=v(x).$$
a): If $v(x)=0,$ $v(y)=0,$ then $$v(x+y)\le\max(v(x),v(y))=0;$$ hence $v(x+y)=0.$ Thus $v^{-1}(0)$ is closed under addition. If $x\in R,$ $z\in v^{-1}(0),$ then $v(xz)=v(x)v(z)=0.$ Thus $v^{-1}(0)$ is closed under multiplication by elements in $R.$ If $v(x)>0,$ $v(z)>0$, then $v(xz)=v(x)v(z)>0.$ Thus $R\setminus
v^{-1}(0)$ is closed under multiplication.
b): We have $v(x+z)\le \max(v(x),v(z))=v(x).$ Assume that $v$ is strong. If $v(x)\ne0$ we have $$v(x+z)=\max(v(x),v(z))=v(x).$$
If $v: R\to M$ is an arbitrary $m$-valuation, then it is still obvious that $v^{-1}(0)$ is an ideal of $R.$
\[defn2.5\] We call the ideal $v^{-1}(0)$ the [**support**]{} of the $m$-valuation $v,$ and write $v^{-1}(0)={\operatorname{supp}}(v).$ We call the support of $v$ [**insensitive**]{}, if the equality $v(x+z)=v(x)$ holds for any $x\in R$ and $z\in{\operatorname{supp}}(v),$ and [**sensitive**]{} otherwise.
[Proposition \[prop2.4\]]{}.b tells us that ${\operatorname{supp}}(v)$ is insensitive if $v$ is strong. In particular, this holds if $R$ is a ring.
\[example2.7\] Let $\Gamma$ be an ordered abelian group and $H$ a convex proper subgroup. Let $\mathfrak a :=\{g\in \Gamma\mid g>
H\}\cup\{0\}.$ We regard $\Gamma\cup\{0\}$ as a bipotent semifield (cf. §1), and define a subsemiring $M$ of $\Gamma\cup\{0\}$ by $$M:=H\cup\mathfrak a .$$ Notice that we have $H\cdot\mathfrak a \subset\mathfrak a ,$ $\mathfrak a \cdot \mathfrak a \subset\mathfrak a ,$ and $\mathfrak a +\mathfrak a \subset \mathfrak a .$ Thus $M$ is indeed a subsemiring of $\Gamma\cup\{0\},$ and $\mathfrak a $ is an ideal of $M$. We define a map $v: M\to H\cup\{0\}$ by setting $v(x)=x$ if $x\in H,$ and $v(x)=0$ if $x\in\mathfrak a .$ It is easily checked that $v$ fulfills the axioms V1–V3 and moreover has the following “bipotency": $$\text{If } \ a,b \in M \ \text{ and } \ v(a) \neq v(b), \ \text{ then } \ v(a+b) \in \{ v(a), v(b)\}.$$ But the support $\mathfrak a$ of $v$ is sensitive: For $x\in H,$ $z\in \mathfrak a $ and $z\ne0,$ we have $v(x)>0,$ $v(z)=0,$ $x+z=z;$ hence $v(x+z)=0\ne v(x).$
We now turn to the problem of “comparing" different $m$-valuations on the same semiring $R.$
\[defn2.11\] Let $v: R\to M$ and $w: R\to N$ be $m$-valuations.
1. We say that $v$ [**dominates**]{} $w,$ if for any $a,b\in R$ $$v(a)\le v(b)\Rightarrow w(a)\le w(b).$$
2. If $v$ dominates $w$ and $v$ is surjective, there clearly exists a unique map $\gamma: M\to N$ with $w=\gamma\circ v.$ We denote this map $\gamma$ by $\gamma_{w,v}.$
Clearly, $\gamma_{w,v}$ is multiplicative and sends $0$ to $0$ and $1$ to $1$. It is also order preserving, and hence is a homomorphism from the bipotent semiring $M$ to $N.$
\[prop2.12\] Assume that $M,N$ are bipotent semirings and $v:R\to M$ is a surjective -valuation.
1. The $m$-valuations $w: R\to N$ dominated by $v$ correspond uniquely with the homomorphisms $\gamma: M\to N$ via $w=\gamma\circ v,$ $\gamma=\gamma_{w,v}.$
2. If $v$ has one of the properties “strict" or “strong" and dominates $w$, then $w$ has the same property.
If $w$ is an $m$-valuation dominated by $v$, then we know already that $\gamma := \gamma_{w,v}$ is a homomorphism and $w = \gamma \circ v$. On the other hand, given a homomorphism $\gamma: M \to N$, clearly $\gamma \circ v$ is an $m$-valuation, and $\gamma \circ v$ inherits from $v$ each of the properties “strict" and “strong".
We mention that for strong $m$-valuations the dominance condition in Definition \[defn2.11\] can be weakened.
\[prop2.122\] Assume that $v:R \to M$ and $w:R \to N$ are strong $m$-valuations and that $$\forall a,b \in R: \qquad v(a) = v(b) \ \Rightarrow \ w(a) = w(b).$$
Then $v$ dominates $w$.
Let $a,b \in R$ and assume that $v(a) \leq v(b)$. If $v(a) <
v(b)$ then $v(a+ b) = v(b)$, hence $w(a+b) = w(b)$. It follows that $w(a) \leq w(b)$ since $w(a) > w(b)$ would imply $w(a+b) =
w(a).$ Thus $w(a) \leq w(b)$ in both cases.
Supertropical semirings {#sec:3}
=======================
\[defn3.1\] A [**semiring with idempotent**]{} is a pair $(R,e)$ consisting of a semiring $R$ and a [**central**]{} idempotent $e.$ {For the moment $R$ is allowed to be noncommutative.}
We then have an endomorphism $\nu: R\to R$ (which usually does not map 1 to 1) defined by $\nu(a)=ea.$ It obeys the rules
$$\label{3.1} \nu\circ \nu=\nu,$$
$$\label{3.2} a\nu(b)=\nu(a)b=\nu(ab).$$
Conversely, if a pair $(R,\nu)$ is given consisting of a semiring $R$ and an endomorphism $\nu$ (not necessarily with $\nu(1)=1),$ such that (3.1) and (3.2) hold, then $e:=\nu(1)$ is a central idempotent of $R$ and $\nu(a)=ea$ for every $a\in R.$
Thus such pairs $(R,\nu)$ are the same objects as semirings with idempotents.
\[defn3.2\] A [**semiring with ghosts**]{} is a semiring with idempotent $ (R,e)$ such that the following axiom holds $(\nu(a):=ea)$ $$\label{3.3}
\nu(a)=\nu(b)\quad\Rightarrow \quad a+b=\nu(a).$$
\[rem3.3\] This axiom implies that $ea=e(a+b)=ea+eb$ if $\nu(a)=\nu(b).$ We [**do not**]{} want to demand that then $eb=0.$ Usually, $(R,+)$ will be a highly non-cancellative abelian semigroup.
\[term3.4\] If $(R,e)$ is a semiring with ghosts, then $\nu : x\mapsto ex,$$R\to R$ is called the [**ghost map**]{} of $(R,e).$ The idea is that every $x\in R$ has an associated “ghost" $\nu(x),$ which is thought of to be somehow “near" to the zero element $0$ of $R,$ without necessarily being 0 itself. {That will happen for all $x\in R$ only if $e=0.$} We call $eR$ the [**ghost ideal**]{} of $(R,e).$
Now observe that, if $(R,e)$ is a semiring with ghosts, the idempotent $e$ is determined by the underlying semiring $R$ alone, namely $$e=1+1.$$ Indeed, taking $a=b=1$ in (3.3) gives $1+1=\nu(1)=e.$ Thus we may suppress the idempotent $e$ in the notation of a semiring with ghosts and redefine these objects as follows.
\[defn3.5\] A semiring $R$ is called a [**semiring with ghosts**]{} if $$1+1=1+1+1+1\tag{3.3$'$}$$ and for all $a,b\in R$ $$a+a=b+b\quad\Rightarrow \quad a+b=a+a. \tag{3.3$''$}$$
If (3.3$'$) holds, then $e:=1+1$ is a central idempotent of $R.$ Passing from $R$ to $(R,e)=(R,1+1),$ we see that (3.3$''$) is the previous axiom (3.3). Notice also that (3.3$''$) implies that $1+1+1=1+1.$ (Take $a=1,$ $b=e.$) Thus $m\cdot 1=1+1$ for all natural numbers $m\ge 2.$
\[term3.6\] If $R$ is a semiring with ghosts, we write $e=e_R $ and $\nu=\nu_R$ if necessary. We also introduce the notation $$\begin{aligned}
&\mathcal T:=\mathcal T(R):=R\setminus Re,\\
&\mathcal G:=\mathcal G(R):=Re\setminus\{0\},\\
&\mathcal G_0:= \mathcal G\cup\{0\}=Re.\end{aligned}$$ We call the elements of $\mathcal T$ the [**tangible elements**]{} of $R$ and the elements of $\mathcal G$ the [**ghost elements**]{} of $R.$ We do not exclude the case that $\mathcal T$ is empty, i.e., $e=1.$ In this case $R$ is called a [**ghost semiring**]{}.
The ghost ideal $\mathcal G_0=eR$ of $R$ is itself a semiring with ghosts, in fact, a ghost semiring. It has the property $a+a=a$ for every $a\in Re,$ as follows from (3.3$'$). {Some people call a semiring $T$ with $a+a=a$ for every $a\in T$ an “idempotent semiring".}
We mention a consequence of axiom (3.3$''$) for the ghost map $\nu: R\to Re,$ $\nu(x):=ex.$
\[remark3.7\] If $R$ is a semiring with ghosts, then, for any $x\in R,$ $$\nu(x)=0\quad\Leftrightarrow\quad x=0.$$
$(\Leftarrow)$: evident.
$(\Rightarrow)$: We have $\nu(x)=0=\nu(0);$ hence, by (3.3$''$), $x=x+0=\nu(x)=0.$
We are ready for the central definition of the section.
\[defn3.8\] A semiring $R$ is called [**supertropical**]{} if $R$ is a semiring with ghosts and $$\label{3.4}
\forall a,b\in R:\ a+a\ne b+b\quad\Rightarrow \quad
a+b\in\{a,b\}.$$ In other terms, every pair $(a,b)$ in $R$ with $ea\ne eb$ is bipotent.
\[remark3.9\] $ $
1. It follows that then $\mathcal G(R)_0=Re$ is a bipotent semidomain. Indeed, if $a,b$ are different elements of $\mathcal
G(R)$, then $a=ea\ne b=eb;$ hence $a+b\in\{a,b\}$ by axiom (3.4). If $a=0$ or $b=0$, this trivially is also true. If $a=b$ then $a+b=ea=a.$ Thus $a+b\in\{a,b\}$ for any $a,b\in
\mathcal G(R)_0.$ The set $\mathcal G(R)$ is either empty (the case $1+1=0)$ or $\mathcal G(R)$ is an ordered monoid under the multiplication of $R,$ as explained in §$1$.
2. The supertropical semirings without tangible elements are just the bipotent semirings.
3. Every subsemiring of a supertropical semiring is again supertropical.
\[thm3.10\] Let $R$ be a supertropical semiring, $e:=e_R,$ $\mathcal G:=\mathcal G(R).$ Then the addition on $R$ is determined by the multiplication on $R$ and the ordering on the multiplicative submonoid $\mathcal G$ of $R,$ in case $\mathcal G\ne\emptyset,$ as follows. For any $a,b\in R$ $$a+b=\begin{cases} a\quad & \text{if}\quad ea>eb,\\
b\quad & \text{if}\quad ea<eb,\\
ea\quad & \text{if}\quad ea=eb,\end{cases}$$ If $\mathcal G=\emptyset$ then $a+b=0$ for any $a,b\in R.$
We may assume that $ea\ge eb.$ If $ea=eb,$ axiom (3.3$''$) tells us that $a+b=ea.$ Assume now that $ea>eb.$ By definition of the ordering on $eR$ (cf. §1), we have $$e(a+b)=ea+eb=ea.$$ By axiom (3.4), $a+b=a$ or $a+b=b.$
Suppose that $a+b=b.$ Then $e(a+b)=eb.$ Since $ea\ne eb,$ this is a contradiction. We conclude that $a+b=a.$
From now on, [**we always assume that our semirings are commutative**]{}.
\[rem3.12\] If $R$ is a supertropical semiring, the ghost map $\nu_R: R\to eR,$ $x\mapsto ex$, is a strict $m$-valuation. Indeed, the axioms V1–V3 and V5 from §2 are clearly valid for $\nu_R.$
Thus, every supertropical semiring has a natural built-in strict $m$-valuation.
There are important cases where $\nu_R$ is even a valuation (cf. Definition \[defn2.3\]), as we explicate now.
\[prop3.12\] Assume that $R$ is a supertropical semiring and $\mathcal T(R)$ is closed under multiplication. Then the submonoid $G:=e\mathcal T(R)$ of $\mathcal G(R)$ is cancellative. (N.B. We have $e\mathcal T(R)\subset\mathcal G(R)$ by Remark \[remark3.7\].)
Let $a,b,c\in\mathcal T(R)$ be given with $(ea)(ec)=(eb)(ec),$ i.e., $eac=ebc.$ Suppose that $ea\ne eb,$ say $ea<eb.$ Then [Theorem \[thm3.10\]]{} tells us that $a+b=b $ and $ac+bc=ebc.$ By assumption, $bc\in\mathcal T(R)$; hence $bc\ne
ebc.$ But the first equation gives $ac+bc=bc,$ a contradiction. Thus $ea=eb.$
In the situation of this proposition we may omit the part $\mathcal G(R)\setminus G$, consisting of “useless" ghosts, in the semiring $R$, and then obtain a “supertropical domain" $U:=
\mathcal T(R)\cup G\cup\{0\},$ as defined below, whose ghost map $\nu_U: U\to G\cup\{0\}$ is a *surjective strict valuation*.
\[defn3.13\] Let $M$ be a bipotent semiring and $R$ a supertropical semiring.
1. We say that the semiring $M$ is [**cancellative**]{} if for any $x,y,z \in M$ $$xz = yz, \ z \neq 0 \ \Rightarrow \ x = y.$$ This means that $M$ is a bipotent semidomain (cf. Definition \[defn1.4\]) and the multiplicative monoid $M \setminus \{ 0 \}$ is cancellative.
2. We call $R$ a [**supertropical predomain**]{}, if $\mathcal T(R)=R\setminus eR$ is not empty (i.e., $e\ne1)$ and is closed under multiplication, and moreover $eR$ is a cancellative bipotent semidomain.
3. We call $R$ a [**supertropical domain**]{}, if $ \mathcal T(R)$ is not empty and is closed under multiplication, and $\nu_R$ maps $\mathcal T(R)$ onto $\mathcal
G(R).$
Notice that the last condition in Definition \[defn3.13\].c implies that $\mathcal G(R)$ is a cancellative monoid ([Proposition \[prop3.12\]]{}). Thus a supertropical domain is a supertropical predomain.
Looking again at [Theorem \[thm3.10\]]{}, we see that this opens up a way to construct supertropical predomains and domains. First notice that the theorem implies the following
\[rem3.13\] If $R$ is a supertropical predomain, we have for every $a\in
\mathcal T(R)$ and $x\in \mathcal G(R)$ the multiplication rule $$ax=v(a)x$$ with $v:=\nu_R\mid \mathcal T(R).$ Thus the multiplication on $$R=\mathcal T(R)\ \dot\cup\ \mathcal G(R)\ \dot\cup\ \{0\}$$ is completely determined by the triple $(\mathcal T(R),\mathcal
G(R),v).$ We write $v=v_R.$
\[constr3.14\] Conversely, let a triple $(\mathcal T,\mathcal G,v)$ be given with $\mathcal T$ a monoid, $\mathcal G$ an ordered cancellative monoid and $v:\mathcal T\to \mathcal G$ a monoid homomorphism. We define a semiring $R$ as follows. As a set $$R=\mathcal T\ \dot\cup\ \mathcal G\ \dot\cup\ \{0\}.$$ The multiplication on $R$ will extend the given multiplications on $\mathcal T$ and $\mathcal G.$ If $a\in \mathcal T,$ $x\in \mathcal G,$ we decree that $$a\cdot x=x\cdot a:=v(a)x.$$ Finally, $0\cdot z=z\cdot 0:=0\quad\text{for all}\quad z\in R.$
The addition on $R$ extends the addition on $\mathcal G\cup\{0\}$ as the bipotent semiring corresponding to the ordered monoid $\mathcal G,$ as explained in §$1$. For $x,y\in \mathcal T$ we decree $$x+y:=\begin{cases} x&\ \text{if}\quad v(x)>v(y),\\
y&\ \text{if}\quad v(x)<v(y),\\
v(x)&\ \text{if}\quad v(x)=v(y).\end{cases}$$ Finally, for $x\in \mathcal T$ and $y\in \mathcal G\cup\{0\}$ $$x+y=y+x:=\begin{cases} x&\ \text{if}\quad v(x)> y, \\
y&\ \text{if}\quad v(x)\le y.\end{cases}$$
It now can be checked in a straightforward way[^7] that $R$ is a supertropical predomain with $\mathcal T(R)=\mathcal T,$ $\mathcal G(R)= \mathcal G,$ $v_R=v.$ Thus we have gained a description of all supertropical predomains $R$ by triples $(\mathcal T,\mathcal G,v)$ as above. We write $$R=\STR(\mathcal T,\mathcal G,v)$$ {$\STR$ = “supertropical"}. Notice that in this semiring $R$ every pair $(x,y)\in R^2$ is bipotent except the pairs $(a,b)$ with $a\in \mathcal T,$ $b\in \mathcal T$ and $v(a)=v(b).$ If $v$ is onto, then $R$ is a supertropical domain.
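Construction 3.14 is concrete enough to execute. The following Python sketch is our own illustration (the tagged-pair representation, with `'t'` for tangible and `'g'` for ghost elements, is an assumption of ours, not the text's notation): we take $\mathcal T$ = the nonzero integers under multiplication, $\mathcal G$ = the positive rationals under multiplication with the usual order, and $v(a)=|a|$.

```python
from fractions import Fraction

# Supertropical predomain STR(T, G, v) with T = (Z \ {0}, ·),
# G = (Q_{>0}, ·) ordered as usual, and v(a) = |a|.
# Elements: 0 (the zero), ('t', a) tangible, ('g', x) ghost.
def v(a):
    return Fraction(abs(a))

def mul(p, q):
    if p == 0 or q == 0:
        return 0
    (sp, ap), (sq, aq) = p, q
    if sp == 't' and sq == 't':
        return ('t', ap * aq)            # product of tangibles stays in T
    xp = v(ap) if sp == 't' else ap      # push tangibles to their ghost values
    xq = v(aq) if sq == 't' else aq
    return ('g', xp * xq)                # a·x = v(a)x lands in G

def add(p, q):
    if p == 0: return q
    if q == 0: return p
    (sp, ap), (sq, aq) = p, q
    xp = v(ap) if sp == 't' else ap
    xq = v(aq) if sq == 't' else aq
    if xp > xq: return p
    if xp < xq: return q
    return ('g', xp)                     # equal ghost values: the sum is a ghost

# 2 + 3 = 3 since v(2) < v(3); 3 + (-3) is the ghost |3|:
assert add(('t', 2), ('t', 3)) == ('t', 3)
assert add(('t', 3), ('t', -3)) == ('g', Fraction(3))
```

The last assertion shows how the construction remembers that two tangibles of equal value may “cancel" only down to a ghost, never to $0$.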
\[defn3.15\] A semiring $R$ is called a [**supertropical semifield**]{}, if $R$ is a supertropical domain, and every $x\in \mathcal T(R)$ is invertible; hence both $\mathcal T(R)$ and $\mathcal G(R)$ are groups under multiplication.
We write down primordial examples of supertropical domains and semifields (cf. [@I], [@IzhakianRowen2007SuperTropical]). Other examples will come up in §$4$.
\[examps3.16\] Let $\mathcal G$ be an ordered cancellative monoid. This gives us the supertropical domain (cf. Construction \[constr3.14\]) $$D(\mathcal G):=\STR(\mathcal G, \mathcal G,{\operatorname{id}}_{\mathcal G}).$$ $D(\mathcal G)$ is a supertropical semifield iff $\mathcal G$ is an ordered abelian group.
We come closer to the objects and notations of usual tropical algebra if we take here for $\mathcal G$ ordered monoids in [**additive**]{} notation, $\mathcal G=(\mathcal G,+),$ e.g., $\mathcal G=\mathbb R,$ $\mathbb R_{>0},$ $\mathbb N,$ $\mathbb
Z,$ $\mathbb Q$ with the usual addition. $D(\mathcal G)$ contains the set $\mathcal G.$ For every $a\in \mathcal G$ there is an element $a^\nu$ in $D(\mathcal G)$ (read “$a$-ghost"), and $$\mathcal G^\nu:=\{a^\nu \ | \ a\in \mathcal G\}$$ is a copy of the additive monoid $\mathcal G$ disjoint from $\mathcal G.$ The zero element of the semiring $D(\mathcal G)$ is now written $-\infty.$ Thus $$D(\mathcal G)=\mathcal G\ \dot \cup\ \mathcal G^\nu\ \dot\cup\ \{-\infty\}.$$ Denoting addition and multiplication of the semiring $D(\mathcal G)$ by $\oplus$ and $\odot,$ we have the following rules. For any $x\in D(\mathcal G),$ $a\in \mathcal G,$ $b\in \mathcal G,$ $$\begin{aligned}
-{\infty}\oplus x&=x\oplus-\infty=x,\\
a\oplus b& =\max(a,b),\quad\text{if}\ a\ne b,\\
a\oplus a&=a^\nu,\\
a^\nu\oplus b^\nu&=\max(a,b)^\nu,\\
a\oplus b^\nu&=a,\quad \text{if}\ a >b,\\
a\oplus b^\nu&=b^\nu,\quad\text{if}\ a \le b,\\
-\infty\odot x&=x\odot-\infty=-\infty,\\
a\odot b&=a+b,\\
a^\nu\odot b&=a\odot b^\nu=a^\nu\odot b^\nu=(a+b)^\nu.\end{aligned}$$
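These rules translate directly into code. The following Python sketch is our own encoding (a pair `(a, ghost_flag)` for $a$ or $a^\nu$, and `float('-inf')` for the zero element) mirroring the table above for $D(\mathbb R)$:

```python
NEG_INF = float('-inf')  # zero element of D(R)

def t(a): return (a, False)   # tangible a
def g(a): return (a, True)    # ghost a^nu

def oplus(x, y):
    if x == NEG_INF: return y
    if y == NEG_INF: return x
    (a, ga), (b, gb) = x, y
    if a > b: return x
    if b > a: return y
    return g(a)                # equal values: the sum is the ghost a^nu

def odot(x, y):
    if x == NEG_INF or y == NEG_INF: return NEG_INF
    (a, ga), (b, gb) = x, y
    return (a + b, ga or gb)   # a ghost factor makes the product a ghost

# The rules of Examples 3.16:
assert oplus(t(2), t(5)) == t(5)     # a ⊕ b = max(a,b) if a ≠ b
assert oplus(t(4), t(4)) == g(4)     # a ⊕ a = a^nu
assert oplus(t(3), g(1)) == t(3)     # a ⊕ b^nu = a      if a > b
assert oplus(t(1), g(3)) == g(3)     # a ⊕ b^nu = b^nu   if a <= b
assert odot(t(2), g(3)) == g(5)      # a ⊙ b^nu = (a+b)^nu
```

Each assertion corresponds line by line to one of the displayed rules.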
In the case $\mathcal G=(\mathbb R,+)$ these rules can already be found in [@I], where motivation is also given for their use in tropical algebra and tropical geometry.
For now we only remark that the semiring $D(\mathcal G)$ associated to an additive ordered cancellative monoid $\mathcal G$ should be compared with the max-plus algebra $T(\mathcal G)=\mathcal G\cup\{-\infty\}$ introduced in §1. The ghost ideal $ \mathcal G^\nu\cup\{-\infty\}$ of $D( \mathcal G)$ is a copy of $T(\mathcal G).$
Supervaluations {#sec:4}
===============
In this section $R$ is always a (commutative) semiring. Usually the letters $U,V$ denote supertropical (commutative) semirings. If $U$ is any such semiring, the idempotent $e_U=1_U+1_U$ will often be denoted simply by the letter “$e$", regardless of which supertropical semiring is under consideration (just as we write $0_U=0,$ $1_U=1).$
\[defn4.1\] $ $
1. A [**supervaluation**]{} on $R$ is a map $\varphi: R\to U$ from $R$ to a supertropical semiring $U$ with the following properties. $$\begin{aligned}
{2}
&SV1:\ &&\varphi(0)=0,\\
&SV2:\ &&\varphi(1)=1,\\
&SV3:\ &&\forall a,b\in R: \varphi(ab)=\varphi(a)\varphi(b),\\
&SV4:\ &&\forall a,b\in R: e\varphi(a+b)\le
e(\varphi(a)+\varphi(b))\quad
[=\max(e\varphi(a),e\varphi(b))].\end{aligned}$$
2. If $\varphi: R\to U$ is a supervaluation, then the map $$v: R\to eU,\qquad v(a):=e\varphi(a)$$ is clearly an $m$-valuation. We denote this $m$-valuation $v$ by $e_U\varphi$ (or simply by $e\varphi$), and we say that $\varphi$ [**covers**]{} the $m$-valuation $e_U\varphi=v.$
3. We say that a supervaluation $\varphi: R\to U$ is [**tangible**]{}, if $\varphi(R)\subset \mathcal T(U)\cup\{0\}$, and we say that $\varphi $ is [**ghost**]{} if $\varphi(R)\subset eU.$
[N.B.]{} A ghost supervaluation $\varphi: R\to U$ is nothing other than an $m$-valuation, after replacing the target $U$ by $eU.$
\[prop4.2\] Assume that $\varphi:R\to U$ is a supervaluation and $v: R\to
e_UU=: M$ is the $m$-valuation $e_U\varphi$ covered by $\varphi.$ Then $$U':=\varphi(R)\cup e\varphi(R)$$ is a subsemiring of $U.$ The semiring $U'$ is again supertropical and $e_{U'}=e_U (=e).$
The set $v(R)$ is a multiplicative submonoid of the bipotent semiring $M$; hence is itself a bipotent semiring. In particular, $v(R)$ is closed under addition. If $a,b\in R$ are given with $v(a)\le
v(b),$ then either $v(a)< v(b),$ in which case $$\varphi(a)+\varphi(b)=\varphi(b),\quad v(a)+\varphi(b)=\varphi(b),\quad \varphi(a)+v(b)=v(b),$$ or $v(a)=v(b), $ in which case $$\varphi(a)+\varphi(b)=v(a)+\varphi(b)=\varphi(a)+v(b)=v(a).$$ This proves that $U'+U'\subset U'.$ Clearly $0\in U',$ $1\in U'$ and $U'\cdot U'\subset U'.$ Thus $U'$ is a subsemiring of $U.$ As stated above (Remark \[remark3.9\].iii), every subsemiring of a supertropical semiring is again supertropical. Thus $U'$ is supertropical.
\[defn4.3\] We say that the supervaluation $\varphi: R\to U$ is [**surjective**]{} if $U'=U.$ We say that $\varphi$ is [**tangibly surjective**]{} if $\varphi(R)\supset \mathcal
T(U).$
\[rem4.4\] If $\varphi: R\to U$ is any supervaluation, then, replacing $U$ by $U'=\varphi(R)\cup
e\varphi(R),$ we obtain a surjective supervaluation. If we only replace $U$ by $\varphi(R)\cup (eU)$, which is again a subsemiring of $U,$ we obtain a tangibly surjective supervaluation.
Thus, whenever necessary we may retreat to tangibly surjective or even surjective supervaluations without loss of generality.
Recall that an $m$-valuation $v: R\to M$ is called a valuation if the bipotent semiring $M$ is cancellative (cf. Definition \[defn2.3\], Definition \[defn3.13\].a). Every valuation can be covered by a tangible supervaluation, as the following easy but important construction shows.
\[examp4.5\] Let $v: R\to M$ be a valuation, and let $\mathfrak q :=v^{-1}(0)$ denote the support of $v.$ We then have a monoid homomorphism $$R\setminus \mathfrak q \to M\setminus\{0\},\quad a\mapsto v(a),$$ which we denote again by $v.$ Let $$U:=\STR(R\setminus \mathfrak q ,M\setminus\{0\},v),$$ the supertropical predomain given by the triple $(R\setminus\mathfrak q ,M\setminus \{0\},v),$ as explained in Construction \[constr3.14\]. Thus, as a set, $$U=(R\setminus\mathfrak q )\ \dot\cup\ M.$$ We have $e=1_M,$ $e\cdot a=v(a)$ for $a\in R\setminus \mathfrak q
.$ The multiplication on $U$ restricts to the given multiplications on $ R\setminus \mathfrak q$ and on $M$, and $a\cdot x=x\cdot a=v(a)x$ for $ a\in R\setminus \mathfrak q,$ $x\in M.$ The addition on $U$ is determined by $e$ and the multiplication in the usual way (cf. [Theorem \[thm3.10\]]{}). In particular, for $a,b\in R\setminus\mathfrak q ,$ we have $$a+b=\begin{cases} a &\quad\text{if}\ v(a)>v(b),\\
b &\quad\text{if}\ v(a)<v(b),\\
v(a) &\quad\text{if}\ v(a)=v(b).\end{cases}$$
Now define a map $\varphi: R\to U$ by $$\varphi(a):=\begin{cases} a &\quad\text{if}\ a\in R\setminus\mathfrak q, \\
0&\quad\text{if}\ a\in \mathfrak q.\end{cases}$$ One checks immediately that $\varphi$ obeys the rules SV1–SV3. If $a\in R\setminus\mathfrak q ,$ then $$e_U\varphi(a)=1_M\cdot v(a)=v(a),$$ and for $a\in\mathfrak q ,$ we have $$e_U\varphi(a)=e_U\cdot 0=0=v(a)$$ as well. Thus $e_U\varphi=v;$ hence SV4 holds (since $v$ obeys V4), and $\varphi$ is a supervaluation covering $v.$
By construction $\varphi$ is tangible and tangibly surjective. If $v$ is surjective then $\varphi$ is surjective.
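To make Example 4.5 concrete, here is a small instance of our own (not from the text): take $R=\mathbb Z$ and $v$ the $3$-adic valuation $v(a)=3^{-k}$, with $3^k$ the largest power of $3$ dividing $a$, so that $\mathfrak q=\{0\}.$ Then $\varphi_v(a)=a$ (tangible) for $a\ne0$, and addition of tangibles in $U(v)$ follows the case distinction displayed above:

```python
from fractions import Fraction

def v(a):
    """3-adic valuation on Z, written multiplicatively: v(0)=0, v(a)=3^(-k)."""
    if a == 0:
        return Fraction(0)
    k = 0
    while a % 3 == 0:
        a //= 3
        k += 1
    return Fraction(1, 3**k)

def phi(a):
    """phi_v: Z -> U(v); 0 lies in the support, all other integers stay tangible."""
    return 0 if a == 0 else ('t', a)

def add_U(p, q):
    """Addition of tangible elements (and 0) in U(v), per Example 4.5."""
    if p == 0: return q
    if q == 0: return p
    (_, a), (_, b) = p, q
    if v(a) > v(b): return p
    if v(a) < v(b): return q
    return ('g', v(a))         # equal values: the sum is the ghost v(a)

# v(1)=1 > v(3)=1/3, so phi(1) + phi(3) = phi(1) in U(v):
assert add_U(phi(1), phi(3)) == phi(1)
# v(1)=v(2)=1: the sum of phi(1) and phi(2) is a ghost:
assert add_U(phi(1), phi(2)) == ('g', Fraction(1))
```

Note how $\varphi_v(1)+\varphi_v(2)$ is a ghost even though $1+2=3$ is a perfectly good integer: $U(v)$ only retains valuation-theoretic information about sums.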
\[defn4.6\] We denote the supertropical semiring just constructed by $U(v)$ and the supervaluation $\varphi$ just constructed by $\varphi_v.$ Later we will call $\varphi_v: R\to
U(v)$ [**the initial cover of**]{} $v,$ cf. Definition \[defn5.15\].
Notice that $U(v)$ is a supertropical domain iff $v$ is surjective, and that in this case the supervaluation $\varphi_v$ is surjective.
\[rem4.7\] The supertropical predomain $U(v)$ just constructed deviates strongly in its nature from the supertropical domain $D(\mathcal
G)$ for $\mathcal G$ an ordered monoid studied in Examples \[examps3.16\]. While for $U=D(\mathcal G)$ the restriction $$\nu_U\mid \mathcal T(U): \mathcal T(U)\to \mathcal G(U)$$ of the ghost map $\nu_U$ is bijective, for $U=U(v)$ this map usually has big fibers.
Dominance and transmissions {#sec:5}
===========================
As before, all semirings are now assumed to be commutative. $R$ is any semiring, and $U,V$ are supertropical semirings.
\[defn5.1\] If $\varphi: R\to U$ and $\psi: R\to
V$ are supervaluations, we say that $\varphi$ [**dominates**]{} $\psi$, and write $\varphi\ge \psi,$ if for any $a,b\in R$ the following holds. $$\begin{aligned}
{3}
&D1.\quad && \varphi(a)=\varphi(b)&&\Rightarrow \psi(a)=\psi(b),\\
&D2.\quad && e\varphi(a)\le e\varphi(b)&&\Rightarrow e\psi(a)\le e\psi(b),\\
&D3.\quad && \quad\varphi(a)\in eU &&\Rightarrow \psi(a)\in eV.\end{aligned}$$ Notice that D3 can also be phrased as follows: $$\varphi(a)=e\varphi(a)\quad\Rightarrow \quad \psi(a)=e\psi(a).$$
\[lem5.2\] Let $\varphi: R\to U$ and $\psi: R\to V$ be supervaluations. Assume that $\varphi$ dominates $\psi$, and also (without essential loss of generality) that $\varphi$ is surjective. Then there exists a unique map $\alpha: U\to V$ with $\psi=\alpha\circ \varphi$ and $$\forall x\in U: \alpha(e_Ux)=e_V\alpha(x)$$ (i.e., $\alpha\circ\nu _U=\nu_V\circ\alpha).$
By D1 and D2 we have a unique well-defined map $\beta: \varphi(R)\to\psi(R)$ with $\beta(\varphi(a))=\psi(a)$ for all $a\in R$ and a unique well-defined map $\gamma: e\varphi(R)\to
e\psi(R)$ with $\gamma(e\varphi(a))=e\psi(a)$ for all $a\in R. $ Now $U=\varphi(R)\cup e\varphi(R)$, since $\varphi$ is assumed to be surjective. Suppose that $x\in\varphi(R)\cap e\varphi(R).$ Then $x=\varphi(a)$ for some $a\in R,$ and $x=ex=e\varphi(a).$ By axiom D3 we conclude that $\psi(a)=e\psi(a).$ Thus $\beta(x)=\gamma(x).$ This proves that we have a unique well-defined map $\alpha: U\to
V$ with $\alpha(x)=\beta(x)$ for $x\in\varphi(R)$ and $\alpha(y)=\gamma(y)$ for $y\in e\varphi (R).$ We have $\alpha(\varphi(a))=\psi(a),$ i.e., $\psi=\alpha\circ\varphi.$ Moreover, for any $a\in R,$ $\alpha(e_U\varphi(a))=\gamma(e_U\varphi(a))=e_V\psi(a).$
We record that in this proof we did not use the full strength of D2 but only the weaker rule that $e\varphi(a)=e\varphi(b)$ implies $e\psi(a)=e\psi(b).$
\[defn5.3\] Assume that $U$ and $V$ are supertropical semirings.
1. If $\alpha$ is a map from $U$ to $V$ with $\alpha(eU)\subset eV,$ we say that $\alpha$ [**covers**]{} the map $\gamma: eU\to eV$ obtained from $\alpha$ by restriction, and we write $\gamma=\alpha^\nu.$ We also say that $\gamma$ is the [**ghost part**]{} of $\alpha.$
2. Assume that $\varphi: R\to U$ is a surjective supervaluation and $\psi: R\to V$ is a supervaluation dominated by $\varphi.$ Then we call the map $\alpha$ occurring in [Lemma \[lem5.2\]]{}, which is clearly unique, the [**transmission from**]{} $\varphi$ [**to**]{} $\psi,$ and we denote this map by $\alpha_{\psi,\varphi}.$ Clearly, $\alpha_{\psi,\varphi}$ covers the map $\gamma_{w,v}$ connecting the surjective $m$-valuation $v:=
e\varphi: R\to eU$ to the $m$-valuation $w:=e\psi: R\to eV$ introduced in Definition \[defn2.11\].
\[thm5.4\] Let $\varphi: R\to U$ be a surjective supervaluation and $\psi:
R\to V$ a supervaluation dominated by $\varphi.$ The transmission $\alpha:=\alpha_{\psi,\varphi}$ obeys the following rules: $$\begin{aligned}
{2}
&TM1: \quad&& \alpha(0)=0,\\
&TM2: \quad &&\alpha(1)=1,\\
&TM3: \quad &&\forall x,y\in U:\quad
\alpha(xy)=\alpha(x)\alpha(y),\\
&TM4:\quad&&\alpha(e_U)=e_V,\\
&TM5:\quad &&\forall x,y\in eU: \quad
\alpha(x+y)=\alpha(x)+\alpha(y).
\end{aligned}$$
TM1, TM2, and TM4 are obtained from the construction of $\alpha$ in the proof of [Lemma \[lem5.2\]]{}. This construction tells us also that $\alpha$ sends $eU$ to $eV$. Using (again) that $U =
\varphi (R) \cup e\varphi (R)$, we check easily that TM3 holds. The rule D2 (in its full strength) tells us that the map $\gamma:
eU \to eV$, obtained from $\alpha$ by restriction, is order preserving. Since addition on $eU$ and $eV$ is given by the maximum, this is TM5.
\[defn5.5a\] If $U$ and $V$ are supertropical semirings, we call any map $\alpha: U\to V$ which obeys the rules TM1–TM5 a [**transmissive map**]{} from $U$ to $V.$
The axioms TM1–TM5 tell us that a transmissive map $\alpha: U \to
V$ is the same thing as a homomorphism from the monoid $(U, \cdot
\ )$ to $(V, \cdot \ )$ which restricts to a semiring homomorphism from $eU$ to $eV$. It is evident that every homomorphism from the semiring $U$ to $V$ is a transmissive map, but there exist quite a few transmissive maps which are not homomorphisms; cf. §9 below and [@IKR].
As a converse to Lemma \[lem5.2\] we have the following fact.
\[prop5.6\] Assume that $\varphi: R\to U$ is a supervaluation and $\alpha:
U\to V$ is a transmissive map from $U$ to a supertropical semiring $V.$ Then $\alpha\circ \varphi: R\to V$ is again a supervaluation. If $e \varphi$ is either “strong" or “strict", then $e(\alpha\circ\varphi)$ has the same property.
Let $\psi :=\alpha\circ\varphi,$ $v := e\varphi$, $w:= e\psi$. Clearly $\psi$ inherits the properties SV1–SV3 from $\varphi,$ since $\alpha$ obeys TM1–TM3. If $a\in R,$ then, by TM4, $$w(a) = e\psi(a)=e(\alpha(\varphi(a)))=\alpha(e\varphi(a)) = \alpha(v(a));$$ hence $w = \alpha^\nu \circ v.$ Now $\alpha^\nu : eU \to eV$ is a semiring homomorphism, hence order preserving. Thus it is immediate that $w$ is an $m$-valuation, and $w$ is strict or strong if $v$ is strict or strong, respectively.
\[rem5.7\] If $\varphi: R\to U$ is a surjective supervaluation (cf. Definition \[defn4.3\]) and $\alpha: U\to V$ is a surjective transmissive map, then the supervaluation $\alpha\circ\varphi$ is again surjective. Conversely, if $\varphi:
R\to U$ and $\psi: R\to V$ are surjective supervaluations, and $\varphi$ dominates $\psi$, then the transmission $\alpha_{\psi,\varphi}:U\to V$ is a surjective map.
Combining [Theorem \[thm5.4\]]{}, [Proposition \[prop5.6\]]{} and this remark, we read off the following facts.
\[schol5.8\] Let $U,V$ be supertropical semirings and $\varphi: R\to U$ a surjective supervaluation.
1. The supervaluations $\psi: R\to V$ dominated by $\varphi$ correspond uniquely with the transmissive maps $\alpha:
U\to V$ via $\psi=\alpha\circ\varphi,$ $\alpha=\alpha_{\psi,\varphi}.$
2. If $P$ is one of the properties “strict" or “strong" and $e \varphi$ has property $P,$ then $e \psi$ has property $P.$
3. The supervaluation $\psi$ is surjective iff the map $\alpha$ is surjective.
4. Given a semiring homomorphism $\gamma: eU\to eV$, the supervaluation $\psi$ covers the $m$-valuation $\gamma\circ (e\varphi)$ iff $\alpha^\nu=\gamma.$
$$\xymatrix{
& R \ar@{-->}[dr]^\psi \ar[dl]_\varphi\\
U \ar[d]_{\nu_U} \ar@{-->}[rr]_\alpha && V
\ar[d]^{\nu_V}
\\
eU \ar[rr]_\gamma && eV
}$$
\[examp5.9\] Let $U$ be a supertropical semiring with ghost ideal $M:=eU.$ Then, as we know, the ghost map $\nu_U:U\to M,$ $x\mapsto
ex,$ is a strict -valuation on the semiring $U$ (Remark \[rem3.12\]). Clearly, the identity map ${\operatorname{id}}_U:U\to U$ is a supervaluation covering $\nu_U.$ Assume now that $\alpha: U\to
V$ is a transmissive map. Let $\gamma: =\alpha^\nu$ denote the homomorphism from $M$ to $N:=eV$ covered by $\alpha.$ Then $v:=\gamma\circ\nu_U=\nu_V\circ\alpha$ is a strict m-valuation on $U$ with values in $N,$ and $\alpha=\alpha\circ{\operatorname{id}}_U$ is a supervaluation on $U$ covering $v.$ Thus $\alpha $ is the transmission from the supervaluation ${\operatorname{id}}_U: U\to U$ to the supervaluation $\alpha:U\to V$ covering $v.$
The example tells us in particular that every transmissive map is the transmission between some supervaluations. *Therefore we may and will also use the shorter term “**transmission**" for “transmissive map".*
In general, a transmission does not behave additively; hence it is not a semiring homomorphism. We now record cases where nevertheless some additivity holds.
\[prop5.10\] Let $\alpha:U\to V$ be a transmission and $\gamma:eU\to eV$ denote the ghost part of $\alpha,$ $\gamma=\alpha^\nu$ (which is a semiring homomorphism).
1. If $x,y\in U$ and $ex=ey,$ then $\alpha(x)+\alpha(y)=\alpha(x+y).$
2. If $x,y\in U$ and $\alpha(x)+\alpha(y)$ is tangible, then again $\alpha(x)+\alpha(y)=\alpha(x+y).$
3. If $\gamma$ is injective, then $\alpha$ is a semiring homomorphism.
Let $x,y\in U$ be given, and assume without loss of generality that $ex\le ey.$ Notice that this implies $$e\alpha(x)=\alpha(ex)\le\alpha(ey)=e\alpha(y).$$
1. If $ex=ey,$ then $e\alpha(x)=e\alpha(y),$ and we have $x+y=ex,$ $\alpha(x)+\alpha(y)=e\alpha(x)=\alpha(ex)$; hence $\alpha(x)+\alpha(y)=\alpha(x+y).$
2. If $\alpha(x)+\alpha(y)$ is tangible, then certainly $e\alpha(x)\ne e\alpha(y);$ hence $e\alpha(x)<e\alpha(y).$ This implies $ex<ey.$ Thus $x+y=y,$ $\alpha(x)+\alpha(y)=\alpha(y);$ hence $\alpha(x)+\alpha(y)=\alpha(x+y).$
3. From i) we know that $\alpha(x+y)=\alpha(x)+\alpha(y) $ holds if $ex=ey.$ Assume now that $ex<ey.$ Since $\gamma$ is injective this implies $e\alpha(x)<e\alpha(y).$ Thus $x+y=y,$ $\alpha(x)+\alpha(y)=\alpha(y);$ hence again $\alpha(x+y)=\alpha(x)+\alpha(y).$
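These additivity statements are easy to experiment with in a toy model. The following sketch is our own illustration, not taken from the text: we model the supertropical semiring $U=D(M)$ (in the notation of Examples \[examps3.16\]) with $M$ the powers of $2$ under $(\max,\cdot)$, one tangible element over each nonzero ghost, and check Proposition \[prop5.10\].iii for the ghost map $\nu_U$, whose ghost part $\gamma={\operatorname{id}}_M$ is injective.

```python
from itertools import product

# Toy supertropical semiring U = D(M): ghost ideal M = {2**k} ∪ {0}
# under (max, ·), with exactly one tangible element over each nonzero
# ghost.  An element is a pair (sort, value), sort 't' or 'g'.
ZERO = ('g', 0.0)

def e(x):                        # ghost map  x ↦ ex
    return ('g', x[1])

def add(x, y):                   # supertropical addition
    if x[1] != y[1]:             # different ghost values: the bigger wins
        return x if x[1] > y[1] else y
    return ('g', x[1])           # equal ghost values: the sum is ghost

def mul(x, y):
    v = x[1] * y[1]
    if v == 0.0:
        return ZERO
    return ('t' if x[0] == y[0] == 't' else 'g', v)

elems = [ZERO] + [(s, 2.0**k) for s in 'tg' for k in range(-2, 3)]

# nu_U is transmissive with gamma = id_M injective, so by
# Proposition 5.10(iii) it must be a semiring homomorphism:
for x, y in product(elems, repeat=2):
    assert e(add(x, y)) == add(e(x), e(y))
    assert e(mul(x, y)) == mul(e(x), e(y))
```

The sample set is finite, so this is only a spot check of the proposition, not a proof.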
Given an m-valuation $v: R\to M$, we now focus on the supervaluations $\varphi: R\to U$ which cover $v,$ i.e., with $eU=M$ and $e\varphi=\nu_U\circ\varphi=v.$ We single out a class of supervaluations which will play a special role.
\[defn5.11\] A supervaluation $\varphi: R\to U$ is called [**tangibly injective**]{} if the map $\varphi$ is injective on the set $\varphi^{-1}(\mathcal T(U)),$ i.e., $$\forall a,b\in R: \ \varphi(a)=\varphi(b)\in \mathcal T(U)\quad\Rightarrow \quad
a=b.$$
\[examp5.12\] The supervaluation $\varphi_v:
R\to U(v)$ constructed in §$4$ (cf. Example \[examp4.5\] and Definition \[defn4.6\]) is injective on the set $R\setminus
v^{-1}(0),$ hence certainly tangibly injective. Notice that $\varphi_v^{-1}(\mathcal T(U(v)))=R\setminus v^{-1}(0),$ i.e., $\varphi_v$ is tangible. $\varphi_v$ is also surjective.
\[thm5.13\] Assume that $\varphi: R\to U$ is a tangibly injective supervaluation covering $v: R\to M.$ Let $\psi: R\to V$ be another supervaluation covering $v,$ in particular, $eU=eV=M.$
1. $\varphi$ dominates $\psi$ iff the following holds: $$\label{5.5}
\forall a\in R: \ \varphi(a)=v(a)\quad\Rightarrow \quad
\psi(a)=v(a),$$ in other terms, $\varphi(a)\in eU\quad\Rightarrow \quad \psi(a)\in
eV.$
2. If, in addition, $\varphi$ is tangibly surjective (cf. Definition \[defn4.1\].c), then $\varphi$ dominates $\psi$ iff there exists a semiring homomorphism $\alpha: U\to V$ covering the identity of $M$ such that $\alpha\circ \varphi=\psi.$ The supervaluation $\psi$ is tangibly surjective iff $\alpha$ is surjective.
a): In the definition of dominance in Definition \[defn5.1\], the axiom D2 holds trivially since $e\varphi(a)=e\psi(a)=v(a).$ Axiom D3 is our present condition (\[5.5\]). Axiom D1 needs only to be checked in the case $\varphi(a)=\varphi(b)\in \mathcal T(U),$ and then holds trivially since this implies $a=b$ by the tangible injectivity of $\varphi.$
b): Replacing $U$ by the subsemiring $\mathcal T(U)\cup v(R)$ we assume without loss of generality that the supervaluation $\varphi$ is surjective. A transmission $\alpha$ from $\varphi$ to $\psi$ is forced to cover the identity of $M; $ hence is a semiring homomorphism, cf. Proposition \[prop5.10\].iii. We have $\alpha(U)\supset eV.$ Thus $\alpha$ is surjective iff $\alpha(\mathcal T(U))=\mathcal T (V).$ This gives us the last claim.
\[cor5.14\] Assume that $v:R\to M$ is a valuation. The supervaluation $\varphi_v: R~\to~U(v)$ dominates every supervaluation $\psi: R\to
U$ covering $v.$ Thus these supervaluations $\psi$ correspond uniquely with the transmissive maps $\alpha: U(v)\to U$ covering ${\operatorname{id}}_M.$ They are semiring homomorphisms.
$\varphi_v$ is tangibly injective, and (\[5.5\]) holds trivially, since $\varphi_v(a)\in eU$ only if $v(a)=0.$ Theorem \[thm5.13\] and [Proposition \[prop5.10\]]{}.iii apply.
\[defn5.15\] Due to this property of $\varphi_v$ we call $\varphi_v$ the [**initial supervaluation**]{} covering $v$ (or [**initial cover**]{} of $v$ for short).
\[rem5.16\] We may also regard $v: R\to M$ as a cover of $v,$ viewing $M$ as a ghost supertropical semiring. Clearly every supervaluation $\psi:
R\to U$ covering $v$ dominates $v$ with transmission $\nu_U.$ Thus we may view $v: R\to M$ as the [**terminal supervaluation**]{} covering $v$ (or [**terminal cover**]{} of $v$ for short).
The following proposition gives examples of dominance $\varphi\ge\psi$ where $\varphi$ is not assumed to be tangibly injective.
\[prop5.17\] Let $U$ be a supertropical semiring with ghost ideal $M:=eU.$ Assume that $L$ is a submonoid of $(M,\cdot)$ with $M\cdot(M\setminus L)\subset M\setminus L.$
1. The map $\alpha: U\to U,$ defined by $$\alpha(x)=\begin{cases} x &\ \text{if}\quad ex\in L,\\
ex&\ \text{if}\quad ex\in M\setminus L,\end{cases}$$ is an endomorphism of the semiring $U.$
2. If $\varphi: R\to U$ is any supervaluation, then the map $\varphi_L:=\alpha\circ
\varphi$ from $R$ to $U$ is a supervaluation dominated by $\varphi$ and covering the same -valuation as $\varphi,$ i.e. $e\varphi_L=e\varphi.$
a): We have $e\alpha(x)=ex$ for every $x\in U,$ and $\alpha(x)=x$ for every $x\in M.$ One checks in a straightforward way that $\alpha$ is multiplicative, $\alpha(0)=0,$ $\alpha(1)=1.$
We verify additivity. Let $x,y\in U$ be given, and assume without loss of generality that $ex\le ey.$ We have $e\alpha(x)=\alpha(e)\alpha(x)=\alpha(ex)=ex$ and $e\alpha(y)=ey.$ If $ex=ey$ then $x+y=ex,$ and $\alpha(x)+\alpha(y)=e\alpha(x)=ex=\alpha(x+y).$ If $ex<ey$ then $x+y=y$ and $\alpha(x)+\alpha(y)=\alpha(y);$ hence again $\alpha(x)+\alpha(y)=\alpha(x+y).$
b): Now obvious.
Notice that $\varphi_L=\alpha\circ\varphi$ with a map $\alpha:
U\to U$ given by $\alpha(x)=x$ if $ex\in L,$ and $\alpha(x)=ex$ if $ex\in M\setminus L.$ Thus if $\varphi$ is surjective, $\alpha$ is the transmission from $\varphi$ to $\varphi_L$.
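Proposition \[prop5.17\] can also be checked mechanically in a small model. The sketch below is our own illustration, not from the text: we take the toy semiring $U=D(M)$ with $M=\{2^{-n}\mid n\ge 0\}\cup\{0\}$ under $(\max,\cdot)$ and the admissible submonoid $L=\{1\}$, and verify on all sampled pairs that the map $\alpha$ of the proposition is an endomorphism.

```python
from itertools import product

# Toy model for Proposition 5.17 (data chosen by us): ghost ideal
# M = {2**-n : n >= 0} ∪ {0} under (max, ·), one tangible element over
# each nonzero ghost, and L = {1}.  Then M·(M\L) ⊂ M\L holds, since
# all values are <= 1.
ZERO = ('g', 0.0)

def e(x):
    return ('g', x[1])

def add(x, y):
    if x[1] != y[1]:
        return x if x[1] > y[1] else y
    return ('g', x[1])

def mul(x, y):
    v = x[1] * y[1]
    if v == 0.0:
        return ZERO
    return ('t' if x[0] == y[0] == 't' else 'g', v)

def alpha(x):
    # keep x if ex ∈ L, push x to its ghost if ex ∈ M\L
    return x if x[1] == 1.0 else e(x)

elems = [ZERO] + [(s, 2.0**-n) for s in 'tg' for n in range(4)]

# alpha is an endomorphism of the semiring U (Proposition 5.17(i)):
for x, y in product(elems, repeat=2):
    assert alpha(add(x, y)) == add(alpha(x), alpha(y))
    assert alpha(mul(x, y)) == mul(alpha(x), alpha(y))
```

Note that $\alpha$ is not injective on the tangible part, illustrating a fiber contraction in the sense of §6.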
It is not difficult to find instances where [Proposition \[prop5.17\]]{} applies.
\[examp5.18\] Assume that $M$ is a submonoid of $\Gamma\cup\{0\}$ for $\Gamma$ an ordered abelian group. Let $H$ be a subgroup of $\Gamma$ containing the set $\{x\in M\bigm | x>1\}.$ Then $$L=\{x\in M\mid \exists h\in H\quad\text{with}\quad x\ge h\}$$ is a submonoid of $M\setminus\{0\}.$ We claim that $M \cdot
(M\setminus L)\subset M\setminus L.$
Let $x\in M,$ $y\in M\setminus L$ be given. If $x\le 1,$ then $xy\le y;$ hence, clearly, $xy\in M\setminus L.$ Assume now that $x>1.$ Then $x\in H.$ Suppose that $xy\in L;$ hence $h\le xy$ for some $h\in H.$ Then $x^{-1}h\le y$ and $x^{-1}h\in H;$ hence $y\in
L,$ a contradiction. Thus $xy\in M\setminus L$ again.
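A minimal concrete instance, with data chosen by us for illustration: take for $M$ the value monoid of a discrete rank-one valuation written multiplicatively, so that all nonzero values are at most $1$.

```latex
% An instance of Example 5.18 (data chosen for illustration):
\Gamma=(\mathbb R_{>0},\,\cdot\,),\qquad
M=\{\vartheta^{n}\mid n\in\mathbb N_{0}\}\cup\{0\},\qquad 0<\vartheta<1.
% Here \{x\in M\mid x>1\}=\emptyset, so the trivial subgroup H=\{1\}
% of \Gamma is admissible, and it yields
L=\{x\in M\mid \exists h\in H:\ x\ge h\}=\{x\in M\mid x\ge 1\}=\{1\}.
% Indeed M\cdot(M\setminus L)\subset M\setminus L: for x\le 1 and y<1
% one has xy\le y<1, and 0\notin L.
```

Proposition \[prop5.17\] then yields the endomorphism of $U$ which fixes the fiber over $1$ and sends every other element to its ghost.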
In [@IKR] we will meet many transmissions which are not semiring homomorphisms.
Fiber contractions {#sec:6}
==================
Before we come to the main theme of this section, we write down functorial properties of the class of transmissive maps.
\[prop6.1\] Let $\alpha: U\to V$ and $\beta:V\to W$ be maps between supertropical semirings.
1. If $\alpha$ and $\beta$ are transmissive, then $\beta\alpha$ is transmissive.
2. If $\alpha$ and $\beta\alpha$ are transmissive and $\alpha$ is surjective, then $\beta$ is transmissive.
a\) It is evident that analogous statements hold for the class of maps between supertropical semirings obeying the axioms TM1–TM4 in §5. Thus we may assume from the beginning that $\alpha,\beta$ and (hence) $\beta\alpha$ obey TM1–TM4, and have only to deal with the axiom TM5 (cf. [Theorem \[thm5.4\]]{}, Definition \[defn5.5a\]).
b\) We conclude from TM3 and TM4 that $\alpha$ maps $eU$ to $eV$ and $\beta$ maps $eV$ to $eW.$ TM5 demands that these restricted maps are semiring homomorphisms. Thus it is evident that $\beta\alpha$ obeys TM5 if $\alpha$ and $\beta$ do. If $\alpha$ is surjective, then also the restriction $\alpha | eU: eU\to eV$ is surjective, since for $x\in U,$ $y\in eV$ with $\alpha(x)=y$ we also have $\alpha(ex)=y.$ Clearly, TM5 for $\alpha$ and $\beta\alpha$ implies TM5 for $\beta$ in this case.
Often we will only need the following special case of [Proposition \[prop6.1\]]{}.
\[cor6.2\] Let $U,V,W$ be supertropical semirings. Assume that $\alpha: U\to
V$ is a surjective semiring homomorphism. Then a map $\beta: V\to
W$ is transmissive iff $\beta\alpha$ has this property.
In the entire section $U$ is a *supertropical semiring*. We look for equivalence relations on the set $U$ that respect the multiplication on $U$ and the fibers of the ghost map $\nu_U:
U\to eU.$
\[defn6.3\] Let $E$ be an equivalence relation on the set $U$. We say that $E$ is [**multiplicative**]{} if for any $x_1,x_2,y\in U,$ $$\label{6.3}
x_1\sim_E x_2\quad\Rightarrow\quad x_1y\sim_E x_2 y.$$ We say that $E$ is [**fiber conserving**]{} if for any $x_1,x_2\in
U,$ $$\label{6.4}
x_1\sim_E x_2\quad\Rightarrow\quad ex_1=ex_2.$$ If $E$ is both multiplicative and fiber conserving, we call $E$ an [**MFCE-relation**]{} (multiplicative fiber conserving equivalence relation) for short.
\[examps6.4\] $ $
1. Assume that $\alpha: U\to V$ is a multiplicative map from $U$ to a supertropical semiring $V.$ Then the equivalence $E(\alpha)$, given by $$x_1\sim x_2\quad\text{iff}\quad \alpha(x_1)=\alpha(x_2),$$ is clearly multiplicative. If in addition $\alpha(e_U)=e_V,$ and if the induced map $\gamma: eU\to eV,$ $\gamma(ex)=e\alpha(x),$ is injective, then $E(\alpha)$ is also fiber conserving; hence an MFCE-relation. We usually denote this equivalence $\sim$ by $\sim_\alpha.$
In particular, we have an MFCE-relation $E(\alpha)$ on $U$ for any semiring homomorphism $\alpha: U\to V$ which is injective on $eU.$
2. The ghost map $\nu=\nu_U: U\to U$ gives us an MFCE-relation $E(\nu)$ on $U.$ Clearly $$x_1\sim_\nu x_2\quad\text{iff}\quad ex_1=ex_2.$$ $E(\nu)$ is the coarsest MFCE-relation on $U.$
3. If $E_1$ and $E_2$ are equivalence relations on the set $U$, then $E_1\cap E_2$ is again an equivalence relation on $U.$ $\{$As usual, we regard an equivalence relation on $U$ as a subset of $U\times U \}.$ We have $$x_1\sim_{E_1\cap E_2} x_2\quad\text{iff}\quad
x_1\sim_{E_1}x_2\quad \text{and}\quad x_1\sim_{E_2}x_2.$$ If $E_1$ is multiplicative and $E_2$ is an MFCE, then $E_1\cap E_2$ is an MFCE.
4. In particular, every multiplicative equivalence relation $E$ on $U$ gives us an MFCE-relation $E\cap E(\nu)$ on $U.$ This is the coarsest MFCE-relation on $U$ which is finer than $E.$ We have $$x_1\sim_{E\cap E(\nu)}x_2\quad\text{iff}\quad x_1\sim_E
x_2\quad\text{and}\quad ex_1=ex_2.$$
5. We define an equivalence relation $\tE$ (the “$t$” alludes to “tangible”) on $U$ as follows, writing $\sim_t$ for $\sim_{\tE}:$ $$\begin{aligned}
x_1\sim_t x_2\quad \text{iff either}&
\quad
x_1=x_2\quad\\
\text{or} & \quad x_1,x_2\in \mathcal T(U)\quad \text{and}\quad
ex_1=ex_2.\end{aligned}$$ Clearly, this is an MFCE-relation iff for any tangible $x_1,x_2,y\in U$ with $ex_1=ex_2$ both $x_1y$ and $x_2y$ are tangible or equal. In particular, $\tE$ is an MFCE if $\mathcal T(U)$ is closed under multiplication.
Let $F$ denote the equivalence relation on $U$ which has the equivalence classes $\mathcal T(U)$ and $eU$. It is readily checked that $\tE=F\cap E(\nu).$
The equivalence classes of $\tE$ contained in $\mathcal T(U)$ are the sets $\mathcal T(U)\cap\nu_U^{-1}(z)$ with $z\in M,$ which are not empty. We call them the [**tangible fibers**]{} of $\nu_U.$
Our next goal is to prove that, given an MFCE-relation $E$ on $U,$ the set $U/E$ of all $E$-equivalence classes inherits from $U$ the structure of a supertropical semiring.
\[lem6.5\] If $E$ is a fiber conserving equivalence relation on $U$, then for any $x_1,x_2,y\in U$ $$x_1\sim_E
x_2\quad\Rightarrow\quad x_1+y\sim_E x_2+y.$$
From $x_1\sim_E x_2$ we obtain $ex_1=ex_2,$ since $E$ is fiber conserving. If $ey<e x_1,$ we have $x_1+y=x_1,$ $x_2+y=x_2.$ If $ey=ex_1,$ we have $x_1+y=ey=x_2+y .$ If $ey>ex_1,$ we have $x_1+y=y=x_2+y.$ Thus, in all three cases, $x_1+y\sim_E x_2+y.$
Notice that, as a formal consequence of the lemma, more generally $$x_1 \sim_E x_2, \ y _1 \sim_E y_2 \quad \Rightarrow \quad x_1 + y_1 \sim_E x_2 + y_2.$$
\[thm6.6\] Let $E$ be an MFCE-relation on a supertropical semiring $U.$ On the set $\overline U: = U/E$ of equivalence classes $[x]_E,$ $x\in
U,$ we have a unique semiring structure such that the projection map $\pi_E: U\to \overline U,$ $x\mapsto [x]_E$ is a semiring homomorphism. This semiring $\overline U$ is supertropical, and $\pi_E$ covers a semiring isomorphism $eU{\overset\sim \rightarrow}\bar e\overline U.$ (Here $\bar e:=e_{\overline U}=\pi_E(e).$)
We write $\bar x: =[x]_E$ for $x\in U$ and $\pi:=\pi_E.$ Thus $\pi(x)=\bar x.$ Due to [Lemma \[lem6.5\]]{} and condition (\[6.3\]), we have a well-defined addition and multiplication on $\overline
U,$ given by the rules $(x,y\in U)$ $$\bar x+\bar y:=\overline{x+y},\qquad \bar x\cdot \bar
y:=\overline{xy}.$$
The axioms of a commutative semiring are valid for these operations, since they hold in $U,$ and the map $\pi$ is a homomorphism from $U$ onto the semiring $\overline U$.
We have $\bar 1+ \bar 1 = \bar e$ and $\bar e \overline U = \pi
(eU)$. If $x,y \in eU$ and $x \sim_E y$ then $x =ex = ey =y$, since $E$ is fiber conserving. Thus the restriction $\pi | eU$ is an isomorphism from the bipotent semiring $eU$ onto the semiring $\bar e \overline U$ (which thus is again bipotent).
We are ready to prove that $\bU$ is supertropical, i.e. that axioms $(3.3')$, $(3.3'')$, $(3.4)$ from §\[sec:3\] are valid. It is obvious that $\bU$ inherits properties $(3.3')$ and $(3.4)$ from $U$. Let $x,y \in U$ be given with $\be \bx = \be \by$, i.e. $\overline{ex} = \overline{ey}$. Then $ex = ey$; hence $x + y =
ex$ by axiom $(3.3'')$ for $U$. Applying the homomorphism $\pi$ we obtain $\bx + \by = \be \bx$. Thus $\bU$ also obeys $(3.3'')$.
\[rem6.7\] [Theorem \[thm6.6\]]{} tells us, in particular, that every MFCE-relation $E$ on $U$ is of the form $E(\alpha)$ for some semiring homomorphism $\alpha: U\to V$ with $\alpha| eU$ bijective, namely, $E=E(\pi_E).$
\[thm6.8\] Assume that $\alpha: U\to V$ is a multiplicative map. Let $E$ be an MFCE-relation on $U,$ which is respected by $\alpha,$ i.e., $x\sim_E y$ implies $\alpha(x)=\alpha(y).$ Clearly, we have a unique multiplicative map $\bar\alpha: U/E\to V$ with $\bar\alpha\circ\pi_E=\alpha.$
Then, if $\alpha$ is a transmission (a semiring homomorphism), the map $\bar\alpha$ is of the same kind.
Corollary \[cor6.2\] gives us all the claims, since $\pi_E$ is a surjective homomorphism.
\[defn6.10\] We call a map $\alpha:U\to V$ between supertropical semirings a [**fiber contraction**]{}, if $\alpha$ is transmissive and surjective, and the map $\gamma: eU\to eV$ covered by $\alpha$ is strictly order preserving.
Notice that then $\alpha$ is a semiring homomorphism (cf. [Proposition \[prop5.10\]]{}.iii) (hence $\alpha$ is a transmission), and $\gamma$ is an isomorphism from $eU$ to $eV.$
\[schol6.11\]$ $
1. If $E$ is an MFCE-relation on $U$, by [Theorem \[thm6.6\]]{}, the map $\pi_E: U\to
U/E$ is a fiber contraction. On the other hand, if a surjective fiber contraction $\alpha: U\twoheadrightarrow V$ is given, then clearly $E(\alpha)$ is an MFCE-relation, and, as [Theorem \[thm6.8\]]{} tells us, $\alpha$ induces a semiring isomorphism $\bar{\alpha}:
U/E(\alpha){\overset\sim \rightarrow}V$ with $\alpha=\bar\alpha\circ\pi_{E(\alpha)}.$ In short, every fiber contraction $\alpha$ on $U$ is a map $\pi_E$ with $E$ an MFCE-relation on $U$ uniquely determined by $\alpha,$ followed by a semiring isomorphism.
2. If the semiring isomorphism $\bar\alpha$ is the identity ${\operatorname{id}}_M$ of $M:=eU$ (in particular $eU=eV$), we say $\alpha$ is a [**fiber contraction over**]{} $M$.
If $E$ is an equivalence relation on a set $X$, and $Y$ is a subset of $X$, we denote the set of all equivalence classes $[x]_E$ with $x\in Y$ by $Y/E.$
\[examp6.12\] Assume that $U$ is a supertropical domain (cf. Definition \[defn3.13\]). Then the equivalence relation $\tE$ introduced in Example \[examps6.4\].v is MFCE, and $\mathcal
T(U)$ is a union of $\tE$-equivalence classes. The semiring $\overline U=U/\tE$ is a supertropical domain with $\mathcal
T(\overline U)=\mathcal T(U)/\tE$ and $\mathcal G(\overline
U)=\mathcal G(U).$ The ghost map of $\overline U$ maps $\mathcal
T(\overline U)$ bijectively to $\mathcal G(U);$ hence gives us a monoid isomorphism $v: \mathcal T(\overline U){\overset\sim \rightarrow}\mathcal G(U).$ Thus (in notation of Examples \[examps3.16\]) $$U/\tE=D(\mathcal G(U)).$$ The map $\pi_{\tE}$ is a fiber contraction over $eU=eU/\tE.$
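Example \[examp6.12\] can be checked mechanically in a concrete case. The following sketch is our own illustration, not from the text, and the operations on $U(v)$ are stated here as we read the construction of §4: we take $R=\mathbb Q$ with the $2$-adic valuation $v$ and verify on sampled elements that the projection $\pi_{\tE}$, which collapses each tangible fiber of $U(v)$ to a single point of $D(\mathcal G(U))$, is a semiring homomorphism.

```python
from fractions import Fraction
from itertools import product

def ord2(n):
    # exponent of 2 in a nonzero integer
    n = abs(n); k = 0
    while n % 2 == 0:
        n //= 2; k += 1
    return k

def v(q):
    # 2-adic valuation  v(q) = 2**(-ord_2(q)),  v(0) = 0
    if q == 0:
        return Fraction(0)
    return Fraction(2) ** (ord2(q.denominator) - ord2(q.numerator))

# U = U(v): tangibles ('t', q), q in Q\{0}; ghosts ('g', m), m in v(Q)
def ev(x):
    return v(x[1]) if x[0] == 't' else x[1]

def add(x, y):
    if ev(x) != ev(y):
        return x if ev(x) > ev(y) else y
    return ('g', ev(x))           # equal values: the sum is ghost

def mul(x, y):
    if x[0] == y[0] == 't':
        return ('t', x[1] * y[1])
    return ('g', ev(x) * ev(y))

# quotient U/tE = D(G(U)): collapse each tangible fiber to one point
def proj(x):
    return ('T', ev(x)) if x[0] == 't' else ('G', x[1])

def qadd(x, y):
    if x[1] != y[1]:
        return x if x[1] > y[1] else y
    return ('G', x[1])

def qmul(x, y):
    m = x[1] * y[1]
    if m == 0:
        return ('G', Fraction(0))
    return ('T', m) if x[0] == y[0] == 'T' else ('G', m)

elems = [('t', Fraction(a, b)) for a in (1, 3, -2, 12) for b in (1, 4)] \
      + [('g', Fraction(2) ** k) for k in (-1, 0, 2)] + [('g', Fraction(0))]

# pi_tE is a semiring homomorphism (Theorem 6.6 / Example 6.12):
for x, y in product(elems, repeat=2):
    assert proj(add(x, y)) == qadd(proj(x), proj(y))
    assert proj(mul(x, y)) == qmul(proj(x), proj(y))
```

Note that $1/2$ and $-1/2$ lie in the same tangible fiber; their sum in $U(v)$ is already the ghost $v(1/2)$, so the collapse loses nothing on the additive side.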
\[examp6.13\] (cf. [Proposition \[prop5.17\]]{}) Let $U$ be a supertropical semiring, $M:=eU,$ and let $L$ be a submonoid of $(M,\cdot)$ with $M\cdot (M\setminus L)\subset M\setminus L.$ Then the map $\alpha: U\to U$ with $\alpha(x)=x$ if $ex\in L$, $\alpha(x)=ex$ if $ex\in M\setminus L$, is a fiber contraction over $M.$ The image of $\alpha$ is the subsemiring $\nu_U^{-1}(L)\cup(M\setminus L)$ of $U.$
\[examp6.14\] Let again $U$ be a supertropical semiring and $M:=eU.$ But now assume only that $L$ is a [**subset**]{} of $M$ with $M\cdot(M\setminus L)\subset M\setminus L.$ We define an equivalence relation $E(L)$ on $U$ as follows: $$x\sim_{E(L)}y\quad \Leftrightarrow \quad\text{either}\quad x=y
\quad\text{or}\quad ex=ey\in M\setminus L.$$ One checks easily that $E(L)$ is MFCE. But if $L$ is not a submonoid of $(M,\cdot)$, then in the supertropical semiring $\overline U:=U/E(L)$ the set $\mathcal T(\overline U)$ of tangible elements is not closed under multiplication. In particular, $\overline U$ is not isomorphic to a subsemiring of $U.$
For later use we introduce one more notation.
\[not6.14\] If $\vrp:R \to U$ is a supervaluation and $E$ is an MFCE-relation on $U$, let $\vrp / E$ denote the supervaluation $\pi_E \circ \vrp : R \to U/ E$. Thus, for any $a \in R$, $$(\vrp/E)(a) \ := \ [\vrp(a)]_E \ .$$
The lattices $C(\varphi)$ and ${\operatorname{Cov}}(v)$ {#sec:7}
=======================================================
Given an m-valuation $v: R\to M$ on a semiring $R,$ we can now say more about the class of all supervaluations $\varphi$ covering $v.$ Recall that these are the supervaluations $\varphi: R\to U$ with $eU=M$ and $\nu_U\circ\varphi=v,$ in other words, $e\varphi=v.$ For short, we call these supervaluations $\varphi$ the [**covers**]{} of the m-valuation $v.$ It suffices to focus on covers of $v$ which are tangibly surjective, cf. Remark \[rem4.4\]. (N.B. Without loss of generality, we could even assume that $v$ is surjective. Then a cover $\varphi$ of $v$ is tangibly surjective iff $\varphi$ is surjective.)
\[defn7.1\]
1. We call two covers $\varphi_1:R\to U_1,$ $\varphi_2:
R\to U_2$ of $ v$ [**equivalent**]{}, if $\varphi_1\ge \varphi_2$ and $\varphi_2\ge~\varphi_1,$ i.e., $\varphi_1$ dominates $\varphi_2$, and $\varphi_2$ dominates $\varphi_1.$ If $\varphi_1$ and $\varphi_2$ are tangibly surjective (without essential loss of generality, cf. Remark \[rem4.4\]), this means that $\varphi_2=\alpha\circ\varphi_1$ with $\alpha: U_1\to U_2$ a semiring isomorphism over $M$ (i.e., $e\alpha(x)=ex$ for all $x\in
U_1). $
2. We denote the equivalence class of a cover $\varphi: R\to U$ of $v$ by $[\varphi]$, and we denote the set of all these equivalence classes by ${\operatorname{Cov}}(v)$. $\{$Notice that ${\operatorname{Cov}}(v)$ is really a set, not just a class, since for any tangibly surjective cover $\varphi: R\to U$, we have $U=\varphi(R)\cup M$; hence the cardinality of $U$ is bounded by ${\operatorname{Card}}R+{\operatorname{Card}}M.\}$ On ${\operatorname{Cov}}(v)$ we have a partial ordering: $[\varphi_1]\ge[\varphi_2]$ iff $\varphi_1$ dominates $\varphi_2.$ We always regard ${\operatorname{Cov}}(v)$ as a poset[^8] in this way.
3. If a covering $\varphi: R\to U$ of $v$ is given, we denote the subposet of ${\operatorname{Cov}}(v)$ consisting of all $[\psi]\in{\operatorname{Cov}}(v)$ with $[\varphi]\ge[\psi]$ by $C(\varphi).$ $\{$Notice that this poset is determined by $\varphi$ alone, since $v=e\varphi.\}$
In §5 we have seen that, given a tangibly surjective cover $\varphi: R\to U$ of $v,$ the tangibly surjective covers $\psi:
R\to V$ dominated by $\varphi$ correspond uniquely to the transmissive surjective maps $\alpha: U\to V$ which restrict to the identity on $M=eU=eV.$ Scholium \[schol6.11\] from the preceding section tells us, in particular, the following.
\[thm7.2\] Assume that $\varphi: R\to U$ is a tangibly surjective covering of the m-valuation $v: R\to M.$
1. The elements $[\psi]$ of $C(\varphi)$ correspond uniquely to the MFCE-relations $E$ on $U$ via $[\psi]=[\varphi / E]$.
2. Let $\MFC(U)$ denote the set of all MFCE-relations on $U,$ ordered by the coarsening relation: $E_1\le E_2$ iff $E_2$ is coarser than $E_1,$ i.e., $E_1\subset
E_2$, if the $E_i$ are viewed – as customary – as subsets of $U\times U.$ The map $E\mapsto [\varphi / E]$ is an anti-isomorphism (i.e., an order reversing bijection) from the poset $\MFC(U)$ to the poset $C(\varphi).$
If $(E_i\bigm| i\in I)$ is a family in $\MFC(U)$ then the intersection $E:=\bigcap_{i\in I}E_i$ is again an MFCE-relation on $U,$ and is the infimum of the family $(E_i\bigm|i\in I)$ in $\MFC(U)$. Since $\MFC(U)$ has a biggest and smallest element, namely $E(\nu_U)$ and the diagonal of $U$ in $U\times U,$ it is now clear that the poset $\MFC(U)$ is a complete lattice. Thus, for any cover $\varphi: R\to U$ of the m-valuation $v: R\to M$, also the poset $C(\varphi)$ is a complete lattice. $\{$We easily reduce to the case that $\varphi$ is tangibly surjective.$\}$
The supremum of a family $(E_i\bigm|i\in I)$ in $\MFC(U)$ is the following equivalence relation $F$ on $U.$ Two elements $x,y$ of $U$ are $F$-equivalent iff there exists a finite sequence $x_0=x,x_1,\dots,x_m=y$ in $U$ such that for each $j\in\{1,\dots,m\}$ the element $x_{j-1}$ is $E_k$-equivalent to $x_j$ for some $k\in I.$
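This description of the supremum is a transitive-closure computation, which on a finite toy universe can be carried out with a disjoint-set (union–find) structure. The sketch below is a generic illustration with invented data, not from the text; for MFCE-relations the resulting join is again multiplicative and fiber conserving, since each generating relation is.

```python
def join_partitions(universe, partitions):
    """Smallest common coarsening (supremum) of equivalence relations,
    each given as a list of blocks, computed via union-find."""
    parent = {x: x for x in universe}

    def find(x):                      # root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[rx] = ry

    # merge along every block of every generating relation
    for part in partitions:
        for block in part:
            b = list(block)
            for other in b[1:]:
                union(b[0], other)

    classes = {}
    for x in universe:
        classes.setdefault(find(x), set()).add(x)
    return sorted(map(frozenset, classes.values()), key=min)

# chains x = x_0 ~ x_1 ~ ... ~ x_m = y arise automatically:
E1 = [{1, 2}, {3}, {4}]
E2 = [{2, 3}, {1}, {4}]
print(join_partitions([1, 2, 3, 4], [E1, E2]))
```

Here $1$ and $3$ become equivalent in the join only through the chain $1\sim_{E_1}2\sim_{E_2}3$, exactly as in the description above.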
\[constr7.3\] Assume again that $\varphi$ is tangibly surjective. The supremum $\bigvee_{i\in I}\xi_i$ of a family $(\xi_i\bigm| i\in I)$ in $C(\varphi)$ can be described as follows. Choose for each $i\in I$ a tangibly surjective representative $\psi_i: R\to V_i$ of $\xi_i.$ Thus $eV_i=M,$ and $\psi_i$ is a cover of $v$ dominated by $\varphi.$ Let $e_i:=e_{V_i}$ $(=1_M)$, and let $V$ denote the set of all elements $x=(x_i\bigm| i\in I)$ in the semiring $\prod_{i\in I}V_i$ with $e_ix_i=e_jx_j$ for $i\ne j.$ This is a subsemiring of $\prod_{i\in I} V_i$ containing the image $M'$ of $M$ in $\prod V_i$ under the diagonal embedding of $M$ into $\prod
V_i.$ We identify $M'=M,$ and then have $$e_V=1_M=(e_i\bigm| i\in I)=1_V+1_V.$$ It is now a trivial matter to verify that $V$ is a supertropical semiring by checking the axioms in §3. We have $e_VV=eV=M'=M.$ The supervaluations $\psi_i:R\to V_i$ combine to a map $\psi: R\to V,$ given by $$\psi(a):=(\psi_i(a)\bigm|i\in I)\in V$$ for $a\in R.$ It is a supervaluation covering $v,$ and $\varphi: R\to U$ dominates $\psi$ (e.g., check the axioms D1–D3 in §$5$). The class $[\psi]$ is the supremum of the family $(\xi_i\bigm|i\in I)$ in $C(\varphi).$
Given again a family $(\xi_i\bigm|i\in I)$ in $C(\varphi)$ with representatives $\psi_i: R\to V_i$ of the $\xi_i,$ we indicate how the infimum $ \bigwedge \xi_i$ in $C(\varphi)$ can be built, without being as detailed as above for the supremum.
We assume that each supervaluation $\psi_i$ is surjective. The transmission $\delta_i: U\to V_i$ from $\varphi$ to $\psi_i$ is a surjective semiring homomorphism. We form the categorical direct limit (= colimit) of the family $(\delta_i\bigm|i\in I)$ in the category of semirings (cf. [@Mit Chap. II], [@ML III, §3]). Thus we have a semiring $V$ together with a family of semiring homomorphisms $(\alpha_i: V_i\to V\bigm|i\in I)$ such that $\alpha_i\circ\delta_i=\alpha_j\circ\delta_j$ for $i\ne j$, which is universal. This means that, given a family $(\beta_i:
V_i\to W\bigm|i\in I)$ of homomorphisms with $\beta_i\circ\delta_i=\beta_j\circ\delta_j$ for $i\ne j, $ there exists a unique homomorphism $\beta: V\to W$ with $\beta\circ
\alpha_i=\beta_i$ for every $i\in I.$ Choosing some $i\in I$ let $$\varepsilon:=\alpha_i\circ\delta_i:U\to V.$$ This homomorphism, which is independent of the choice of $i,$ is surjective, due to universality, since all maps $\delta_j: U\to
V_j$ are surjective. It turns out that the restriction $\varepsilon|eU$ maps $eU=M$ isomorphically onto $eV.$ We identify $M$ with $eV$ by this isomorphism and then have $\varepsilon
|eU={\operatorname{id}}_M.$
This can be seen as follows. Let $\nu:=\nu_U$ and $\nu_i:=\nu_{V_i}$ denote the ghost maps of $U$ and $V_i.$ For every $i\in I$ we have $\nu_i\circ\delta_i=\nu.$ By universality we obtain a homomorphism $\mu: V\to M$ with $\mu\circ\alpha_i=\nu_i$ for every $i.$ Let $j_i$ denote the inclusion map from $M$ to $V_i.$ We have $\nu_i\circ j_i={\operatorname{id}}_M;$ hence $$\mu\circ\alpha_i\circ j_i=\nu_i\circ j_i={\operatorname{id}}_M.$$ The surjective homomorphism $\alpha_i$ maps $M=eV_i$ onto $eV.$ We conclude that the restriction $\alpha_i | M$ gives an isomorphism from $M$ onto $eV,$ the inverse map being given by $\mu.$
We identify $M$ with $eV$ via $\alpha_i | M.$ Now $\alpha_i:
V_i\to V$ has become a surjective semiring homomorphism over $M$ (for every $i).$ Thus also $\varepsilon: U\to V$ is a surjective homomorphism over $M.$ We conclude that $\ep$ gives an MFCE-relation $E(\ep)$ and the semiring $V$ is supertropical. The supervaluation $$\psi:=\varepsilon\circ \varphi=\alpha_i\circ
\psi_i \quad \text{is dominated by every}\quad
\psi_i\quad\text{and}\quad [\psi]=\bigwedge_{i}\xi_i.$$ Since $V_i=\psi_i(R)\cup M$ for every $i,$ the semiring $V$ and the $\alpha_i$ can be described completely in terms of the $\psi_i$ without mentioning $U$ and the $\delta_i.$ We leave this to the interested reader.
\[defn7.4\] We call a supervaluation $\varphi$ [**initial**]{} if $\varphi$ dominates every other supervaluation $\psi$ with $e\varphi=e\psi.$ We then also say that $\varphi$ is an [**initial cover**]{} of $v:=e\varphi.$
If an m-valuation $v: R\to M$ is given, a supervaluation $\varphi: R\to U$ is an initial cover of $v$ iff $e\varphi=v$ and $[\varphi]$ is the biggest element of the poset ${\operatorname{Cov}}(v).$
Such an initial cover had been constructed explicitly in §$4$ in the case that $v$ is a valuation, namely, the supervaluation $\varphi_v: R\to U(v),$ cf. Definition \[defn4.6\] and Corollary \[cor5.14\]. We now prove that an initial cover always exists, although in general we do not have an explicit description.
\[prop7.5\] Every m-valuation $v: R\to M$ has an initial cover. The poset ${\operatorname{Cov}}(v)$ is a complete lattice.
Let $(\psi_i\bigm| i\in I)$ be a family of coverings of $v$ which represent every element of the set ${\operatorname{Cov}}(v).$ Now repeat Construction \[constr7.3\] with this family. It gives us a covering $\psi: R\to V$ of $v$ which dominates all $\psi_i;$ hence is an initial covering of $v.$ Of course, $C(\psi)={\operatorname{Cov}}(v),$ and thus ${\operatorname{Cov}}(v)$ is a complete lattice.
\[not7.6\] If $v: R\to M$ is any m-valuation, let $\varphi_v: R\to U(v)$ denote a fixed tangibly surjective initial supervaluation covering $v.$ If $v$ is a valuation, we choose for $\varphi_v$ the supervaluation constructed in Example \[examp4.5\].
Notice that $\varphi_v$ is unique up to unique isomorphism over $M,$ i.e., if $\psi: R\to V$ is another surjective initial cover of $v$, there exists a unique semiring isomorphism $\alpha:
U(v){\overset\sim \rightarrow}V$ which restricts to the identity on $M.$ We call $\varphi_v$ “[**the**]{}” [**initial cover**]{} of $v.$ The lattice ${\operatorname{Cov}}(v)$ coincides with $C(\varphi_v).$
Given a supervaluation $\varphi: R\to U$ or an m-valuation $v:
R\to M$, we view the lattices $C(\varphi)$ and ${\operatorname{Cov}}(v)$ as measures of complexity of $\varphi$ and $v$, respectively, and thus make the following formal definition.
\[defn7.7\] We call the isomorphism class of the lattice $C(\varphi)$ the [**lattice complexity**]{} of the supervaluation $\varphi$ and denote it by $\lc(\varphi).$ In the same vein we call the isomorphism class of the lattice ${\operatorname{Cov}}(v)$ the [**tropical complexity**]{} of the m-valuation $v$ and denote it by $\trc(v).$ We have $\trc(v)=\lc(\varphi_v).$
The word “complexity" in Definition \[defn7.7\] should not be taken too seriously. Usually a “measure of complexity" has values in natural numbers or, more generally, in some well understood fixed ordered set. The isomorphism classes of lattices are not values of this kind. Our idea behind the definition is that, if you are given a function $m$ on the class of lattices which measures (part of) their complexity in some way, then $m\circ
\lc,$ resp. $m\circ \trc,$ is such a function on the class of supervaluations, resp. m-valuations.
\[thm7.8\] If $\varphi: R\to U$ and $\varphi':
R'\to U$ are tangibly surjective supervaluations with values in the same supertropical semiring $U$, then $\lc(\varphi)=\lc(\varphi').$
Both lattices $C(\varphi)$ and $C(\varphi')$ are anti-isomorphic to $\MFC(U);$ hence are isomorphic.
This result is quite remarkable, since it says that the lattice complexity of a surjective supervaluation $\vrp: R \to U$ depends only on the isomorphism class of the target semiring $U$.
\[examp7.9\] Let $\varphi: R\to U$ be a tangibly surjective supervaluation. The identity\
${\operatorname{id}}_U: U\to U$ is also a supervaluation. It is the initial cover of the ghost map $\nu_U:
U\to eU.$ We have $\lc(\varphi)=\trc(\nu_U).$
Orbital equivalence relations {#sec:8}
=============================
Our main goal in this section is to introduce and study a special kind of MFCE-relations on supertropical semirings, which seems to be more accessible than MFCE-relations in general. But for use in later sections, we will define more generally “orbital" equivalence relations on supertropical semirings. They are multiplicative but not necessarily fiber conserving. The relations we are looking for here then will be the orbital MFCE-relations.
In the following $U$ is a supertropical semiring, and $M:=eU$ denotes its ghost ideal. We always assume that $\mathcal T(U)$ is not empty, i.e., $e\ne1.$ We introduce the set $$S(U):=\{x\in U\bigm| x\mathcal T(U)\subset \mathcal T(U)\}.$$ This is a subset of $\mathcal T(U)$ closed under multiplication and containing the unit element $1_U;$ hence is a monoid.
The monoid $S(U)$ operates on the sets $U$ and $\mathcal T(U)$ by multiplication. If $\mathcal T(U)$ itself is closed under multiplication then $S(U)=\mathcal T(U).$
Let $G$ be a submonoid of $S(U).$ Then also $G$ operates on $U$ and on $\mathcal T(U).$ For any $x\in U$ we call the set $Gx$ the [**orbit**]{} of $x$ under $G$ (as is common at least for $G$ a group). We define a binary relation $\sim_G$ on $U$ as follows: $$x\sim_Gy\quad\Leftrightarrow\quad \exists g,h\in G: gx=hy.$$ Thus $x\sim_G y$ iff the orbits $Gx$ and $Gy$ intersect. Clearly this is an equivalence relation on $U,$ which is multiplicative, i.e., obeys the rule (\[6.3\]) from §$6$. We denote this equivalence relation by $E(G).$
The relation $E(G)$ on $U$ is MFCE, i.e., obeys also the rule from §6, iff $G$ is contained in the “unit-fiber” $$\mathcal T_e(U):=\{x\in \mathcal T(U)|ex=e\}$$ of $\mathcal T(U).$ The biggest such monoid is the unit fiber $$S_e(U):=\{g\in S(U)\bigm| eg=e\}=\mathcal T_e(U)\cap S(U)$$ of $S(U)$.
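The combinatorics of the relation $E(G)$ can be probed computationally. The following Python sketch uses a toy commutative monoid of our own choosing (multiplication in $\mathbb Z/12$, which is of course not a supertropical semiring) merely to illustrate how the orbits $Gx$ determine the equivalence classes of $E(G)$.

```python
# The relation x ~_G y  iff  Gx and Gy intersect, in a toy example of
# our own choosing: the commutative multiplicative monoid U = Z/12
# (not a supertropical semiring) with submonoid G = {1, 5}, where
# 5*5 = 25 = 1 (mod 12).
U = range(12)
G = (1, 5)

def orbit(x):
    return {(g * x) % 12 for g in G}

def related(x, y):               # x ~_G y
    return bool(orbit(x) & orbit(y))

# Collect the equivalence classes of E(G).
classes = []
for x in U:
    for cls in classes:
        if related(x, next(iter(cls))):
            cls.add(x)
            break
    else:
        classes.append({x})
```

Since $G$ is here even a group, the orbits partition $\mathbb Z/12$ and the classes of $E(G)$ are exactly the orbits.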
\[examp8.1\] Assume that $R$ is a field and $v:
R\to \Gamma\cup\{0\}$ is a surjective valuation on $R.$ $\{$In classical terms, $v$ is a Krull valuation on $R$ with value group $\Gamma.\}$ Let $$U:=U(v)=(R\setminus\{0\})\ \dot\cup\ \Gamma\ \dot\cup\ \{0\},$$ cf. Definition \[defn4.6\]. Then $S(U)$ is the multiplicative group $R^*=R\setminus\{0\}$ of the field $R,$ and $S_e(U)$ is the group $\mathfrak o_v^*$ of units of the valuation domain $$\mathfrak o_v:=\{x\in R\bigm| v(x)\le 1\}.$$
\[defn8.2\] We call an equivalence relation $E$ on the supertropical semiring $U$ [**orbital**]{} if $E=E(G)$ for some submonoid $G$ of $S(U).$ We denote the set of all orbital equivalence relations on $U$ by $\Orb(U)$ and the subset $\Orb(U)\cap \MFC(U),$ consisting of the orbital MFCE-relations on $U,$ by $\OFC(U).$ $\{$“OFC" alludes to “orbital fiber conserving".$\}$ Consequently, we call the elements of $\OFC(U)$ the [**orbital fiber conserving equivalence relations**]{} on $U$, or [**OFCE-relations**]{} for short.
\[examp8.3\] It is evident that $E(S(U))$ is the coarsest orbital equivalence relation and $F:=E(S_e(U))$ is the coarsest OFCE-relation on $U.$ Assume now that $U$ is a supertropical domain. Then $S(U)=\mathcal
T(U),$ $S_e(U)=\mathcal T_e(U),$ and $\mathcal G(U)=e\mathcal
T(U).$ $E(S(U))$ has just $3$ equivalence classes, namely, $\mathcal T(U),$ $\mathcal G(U)$ and $\{0\}.$ On the other hand, $F$ is finer than the MFCE-relation $\tE$ introduced in Example \[examps6.4\].v, whose equivalence classes in $\mathcal T(U)$ are the tangible fibers of the ghost map $\nu_U.$ Very often $\tE$ is not orbital; hence $F\subsetneqq \tE.$
\[subexamp8.4\] Let $R=k[x]$ be the polynomial ring in one variable $x$ over a field $k.$ Choose a real number $\vartheta$ with $0<\vartheta<1,$ and let $v$ be the surjective valuation on $R$ defined by $$v(f)=\vartheta^{\deg f}.$$ Thus, $v: R\twoheadrightarrow G\cup\{0\}$ with $G$ the monoid $\{\vartheta^n\bigm| n\in\mathbb N_0\}\subset\mathbb R.$ Finally, take $$U:=U(v)=(R\setminus\{0\})\cup G\cup\{0\},$$ cf. Definition \[defn4.6\]. We have $S(U)=R\setminus\{0\}$ and $$S_e(U)=\{f\in R\bigm| \deg f=0\}=k\setminus\{0\},$$ the set of nonzero constant polynomials. If $f,g\in \mathcal T(U)$ are given with $ef=eg,$ i.e., $\deg f=\deg g,$ then $f\sim_F g$ iff $g=cf$ with $c$ a constant $\ne0.$ Thus, the set of $F$-equivalence classes in $\mathcal T(U)$ can be identified with the set of monic polynomials in $k[x],$ while the $\tE$-equivalence classes are the sets $\{f\in k[x]\bigm|\deg
f=n\}$ with $n$ running through $\mathbb N_0.$ For $n=0$ this $\tE$-equivalence class is also an $F$-equivalence class, while for $n>0$ it decomposes into infinitely many $F$-equivalence classes if the field $k$ is infinite, and into $|k|^n$ $F$-equivalence classes if $k$ is finite.
The semiring $U/F$ (cf. §6) can be identified with the subsemiring $V$ of $U$, which has as tangible elements the monic polynomials in $k[x]$ and has the same ghost ideal $eV=eU$ as $U.$
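The class count $|k|^n$ in the example above can be confirmed by brute force for a small finite field. The sketch below, with parameters $k=\mathbb F_3$ and $n=2$ chosen by us, enumerates the tangible fiber of degree-$n$ polynomials and its $F$-classes.

```python
from itertools import product

# Brute-force check of the class count |k|^n from the example, for the
# finite field k = F_3 and degree n = 2 (parameters chosen by us).
p, n = 3, 2
units = (1, 2)                     # k \ {0}

# The tangible fiber over theta^n: polynomials of exact degree n,
# encoded as coefficient tuples (c_0, ..., c_n) with c_n != 0.
polys = [lower + (lead,)
         for lower in product(range(p), repeat=n) for lead in units]

def scale(f, c):                   # f  |->  c*f
    return tuple((c * a) % p for a in f)

# f ~_F g iff g = c*f for a nonzero constant c; pick a canonical
# representative of each orbit under scaling.
classes = {min(scale(f, c) for c in units) for f in polys}
```

Each class contains exactly one monic polynomial, in accordance with the identification of $U/F$ given above.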
Different submonoids $G,H$ of $S(U)$ may yield the same orbital equivalence relation $E(G)=E(H).$ But this ambiguity can be tamed.
\[prop8.5\] If $G$ is a submonoid of $S(U)$, then $$G':=\{x\in S(U)\bigm| \exists g\in G: gx\in G\}$$ is a submonoid of $S(U)$ containing $G,$ and $E(G)=E(G').$ If $G\subset S_e(U)$ then $G'\subset S_e(U).$
a\) It is immediate that $G'$ is a submonoid of $S(U)$ and that $G\subset G'.$ Given $x\in G'$ we have elements $g,h\in G$ with $gx=h.$ If in addition $G\subset S_e(U)$, then $e=eh=(eg)(ex)=ex;$ hence $x\in S_e(U).$ Thus $G'\subset S_e(U).$ It follows from $G\subset G'$ that $E(G)\subset E(G').$
b\) Let $x,y\in U$ be given with $x\sim_{G'}y.$ We have elements $g_1',g_2'$ in $G'$ with $g_1'x=g_2'y.$ We furthermore have elements $h_1,h_2$ in $G$ with $h_1g_1'=g_1\in G$ and $h_2g_2'=g_2\in G.$ Now $$g_1h_2x=h_1h_2g_1'x=h_1h_2g_2'y=h_1g_2y.$$ Thus $x\sim_G y.$ This proves $E(G')\subset E(G);$ hence $E(G)=E(G').$
\[defn8.6\] We call $G'$ the [**saturation**]{} of the monoid $G$ (in $U),$ and we say that $G$ is saturated if $G=G'.$
It is immediate that $(G')'=G'.$ Thus $G'$ is always saturated.
\[examp8.7\] If $S(U)$ happens to be a group, then the saturation of a submonoid $G$ of $S(U)$ is just the subgroup of $S(U)$ generated by $G.$ Indeed, the elements of $G'$ are the $x\in S(U)$ with $g_1x=g_2$ for some $g_1,g_2\in G,$ i.e., the elements $g_1^{-1}g_2$ with $g_1,g_2\in G.$
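Proposition \[prop8.5\] and the saturation operation can likewise be tested by exhaustive computation. In the sketch below we again take, as our own toy example, the commutative monoid $\mathbb Z/12$ under multiplication in place of $S(U)$, with $G=\{1,4\}$; the proof of Proposition \[prop8.5\] only uses commutativity, so the identity $E(G)=E(G')$ should hold here as well.

```python
# Saturation G' = {x in S | exists g in G with gx in G}, tested in a
# toy ambient monoid of our own choosing: S = (Z/12, *) in place of
# S(U), with the submonoid G = {1, 4} (note 4*4 = 16 = 4 mod 12).
S = range(12)
G = {1, 4}

def saturation(H):
    return {x for x in S if any((g * x) % 12 in H for g in H)}

Gp = saturation(G)

def related(H, x, y):   # x ~_H y  iff  the orbits Hx and Hy intersect
    return bool({(h * x) % 12 for h in H} & {(h * y) % 12 for h in H})
```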
\[prop8.8\] Let $E$ be a multiplicative equivalence relation on $U.$
1. The set $$G_E:=\{x\in S(U)\bigm|x\sim_E1\}$$ is a saturated submonoid of $S(U).$
2. If $E=E(H)$ for some submonoid $H$ of $S(U)$, then $G_E$ is the saturation $H'$ of $H.$
3. In general, $E(G_E)$ is the coarsest orbital equivalence relation on $U$ which is finer than $E.$
4. If $E$ is MFCE then $G_E\subset
S_e(U),$ and $E(G_E)$ is the coarsest OFCE-relation on $U$ which is finer than $E.$
a): If $x,y\in G_E$ then $x\sim_E1,$ $y\sim_E1;$ hence $xy\sim_Ey\sim_E1,$ thus $xy\in G_E.$ This proves that $G_E$ is a submonoid of $S(U).$ Let $x\in G_E'$ be given. We have elements $g,h\in G_E$ with $hx=g.$ It follows from $g\sim_E1,$ $h\sim_E1$ that $$x\sim_Ehx=g\sim_E1.$$ Thus $x\in G_E.$ This proves that $G_E'=G_E.$
b): Assume that $E=E(H)$ with $H$ a submonoid of $S(U).$ For $x\in
S(U)$ we have $$x\sim_E1\quad\Leftrightarrow\quad \exists h_1,h_2\in H:
h_1x=h_2\quad\Leftrightarrow\quad x\in H'.$$ Thus $G_E=H'.$
c): Let $G:=G_E.$ If $x\sim_Gy$ then $g_1x=g_2y$ with some $g_1,g_2\in G.$ From $g_1\sim_E1,$ $g_2\sim_E1,$ we conclude that $$x\sim_E g_1x=g_2y\sim_Ey.$$ Thus $E(G)\subset E.$ If $H$ is any submonoid of $S(U)$ with $E(H)\subset E,$ then $$H\subset G_{E(H)}\subset G_E=G.$$ Thus $E(H)\subset E(G).$
d): Assume that $E$ is MFCE. If $x\in G_E$ then we conclude from $x\sim_E1$ that $ex=e.$ Thus $G_E\subset S_e(U).$ Every multiplicative equivalence relation on $U$ which is finer than $E$ is MFCE. In particular, this holds for orbital relations. We learn from c) that $E(G_E)$ is the coarsest OFCE-relation on $U$ finer than $E.$
We denote the set of saturated submonoids of $S(U)$ by Sat$(S(U))$ and the set of saturated submonoids of $S_e(U)$ by Sat$(S_e(U)).$
\[schol8.9\] Propositions \[prop8.5\] and \[prop8.8\] imply that we have an isomorphism of posets $H~\mapsto~E(H)$ from ${\rm Sat}(S(U))$ to $\Orb(U),$ mapping ${\rm Sat}(S_e(U))$ onto $\OFC(U),$ with inverse map $E\mapsto G_E.$ $\{$Here, of course, both sets ${\rm Sat}(S(U))$ and $\Orb(U)$ are ordered by inclusion.$\}$
It is fairly obvious that Sat$(S(U))$ is a complete lattice. Indeed, the supremum of a family $(H_i\bigm|i\in I)$ of saturated submonoids of $S(U)$ is the saturation $H'$ of the submonoid of $S(U)$ generated by the $H_i,$ while the infimum of this family is the saturation $(\bigcap_i H_i)'$ of the intersection of the family. Thus also $\Orb(U)$ is a complete lattice. It follows that Sat$(S_e(U))$ and $\OFC(U)$ are complete sublattices of Sat$(S(U))$ and $\Orb(U)$, respectively.
Let Mult$(U)$ denote the set of all multiplicative equivalence relations on $U,$ partially ordered by inclusion. In §7 we have seen that the subposet $\MFC(U)$ of Mult$(U),$ consisting of the MFCE-relations on $U,$ is a complete lattice. In the same way one proves that Mult$(U)$ itself is a complete lattice, the supremum and infimum of a family in Mult$(U)$ being given in exactly the same way as in §7 for MFCE-relations. This makes it also evident that $\MFC(U)$ is a complete sublattice of Mult$(U).$
We doubt whether $\Orb(U)$ and $\OFC(U)$ are always sublattices of Mult$(U)$ and $\MFC(U)$, respectively. But we have the following partial result.
\[prop8.10\] Let $(G_i\bigm|i\in I)$ be a family of submonoids of $S(U)$, and let $G$ denote the monoid generated by this family in $S(U).$ Then, in the lattice ${\rm Mult}(U),$ $$E(G)=\bigvee_{i\in I} E(G_i).$$ $\{$N.B. Thus the same holds in $\MFC(U)$, if every $G_i\subset
S_e(U).\}$
Let $F:=\bigvee_i E(G_i)$ in Mult$(U).$ Of course, $F\subset E(G)$ since each $E(G_i)\subset E(G).$ Let $x,y\in U$ be given with $x\sim_Gy.$ We want to conclude that $x\sim_Fy,$ and then we will be done.
We have $gx=hy$ with elements $g,h$ of $G.$ Now $g$ and $h$ are products of elements in $\bigcup_i G_i,$ and for any $g'\in
\bigcup_i G_i$ and $z\in U$, we have $z\sim_Fg'z.$ It follows that $x\sim_Fgx$ and $y\sim_Fhy;$ hence $x\sim_Fy.$
We present an important case where $\OFC(U)$ and $\MFC(U)$ nearly coincide.
\[thm8.11\] Assume that every $x\in \mathcal T(U)$ is invertible; hence $\mathcal T(U)$ is a group under multiplication. $\{$The main case is that $U$ is a supertropical semifield.$\}$ Let $E$ be an MFCE-relation on $U.$ Then either $E=E(\nu),$ i.e., $E$ is the top element of $\MFC(U)$ (cf. Example \[examps6.4\].ii), or $E$ is orbital.
a\) Assume that there exists some $x_0\in \mathcal T(U)$ with $x_0\sim_Eex_0.$ Multiplying by $x_0^{-1}$ we obtain $1\sim_Ee,$ and then obtain $x\sim_Eex$ for every $x\in U.$ Thus $E=E(\nu).$
b\) Assume now that $x\not\sim_E ex$ for every $x\in \mathcal T(U)$ (i.e., $E\subset \tE). $ Clearly $S_e(U)=\mathcal T_e(U)$. Let $$H:=G(E)=\{x\in \mathcal T(U)\bigm| x\sim_E1\}.$$ Then $E(H)\subset E.$ Given $x,y\in U$ with $x\sim_Ey,$ we want to prove that $x\sim_Hy.$ We have $ex=ey.$ If $x\in eU$ or $y\in eU$, we conclude that $x=y,$ due to our assumption on $E.$ There remains the case that both $x$ and $y$ are tangible. Then we infer from $x\sim_Ey$ that $$1=x^{-1}x\sim_Ex^{-1}y.$$ Thus $x^{-1}y\in H,$ which implies $x\sim_Hy.$ This completes the proof that $E=E(H).$
\[cor8.12\] If every element of $\mathcal T(U)$ is invertible, then the poset $\MFC(U)\setminus\{E(\nu)\}$ is isomorphic to the lattice of subgroups of $\mathcal T_e(U).$
\[prop8.13\] If $R$ is a semifield, then every supervaluation $\vrp: R \to
U$ with $U \neq e U$ is tangible.
This follows from Theorem \[thm8.11\] applied to the target $U(v)$ of the initial supervaluation $\vrp_v$ of $v := e \vrp $, since for any orbital equivalence relation $E$ on $U(v)$ the transmission $\pi_E$ sends tangibles to tangibles. A more direct proof runs as follows.
Let $a \in R $, $a \neq 0$. Then $$\vrp(a) \vrp(a^{-1}) = \vrp (1) = 1.$$ Since $1_U \neq e_U$ this forces $\vrp(a)$ to be tangible.
N.B. The argument shows more generally that any supervaluation on a semiring sends units to tangible elements, provided that the target is not entirely ghost.
In the case that $R$ is a field the following is now amply clear.
\[schol8.13\] If $v$ is a Krull valuation on a field $R$ with value group $\Gamma$, then the lattice ${\operatorname{Cov}}(v)$ of equivalence classes of supervaluations covering $v$ is anti-isomorphic to the lattice of subgroups of the unit group $\mathfrak o_v^*$ of the valuation domain $\mathfrak o_v:=\{x\in R\bigm| v(x)\le 1\},$ augmented by one element at the top.
The ghost surpassing relation; strong supervaluations {#sec:10}
============================
Let $U$ be any supertropical semiring. If $x,y \in U$, it has become customary to write $$x = y + \text{ghost}$$ if $x$ equals $y$ plus an unspecified ghost element (including zero). In more formal terms we have a binary relation $\lmodg$ on $U$ defined as follows:
\[defn10.1\] $$x \lmodg y \ \Leftrightarrow \ \exists z \in e U \ \text{with} \ x = y
+z.$$ We call $\lmodg$ the **ghost surpassing relation** on $U$, or **GS-relation** for short.
The GS-relation seems to be at the heart of many supertropical arguments. Intuitively $x \lmodg y$ means that $x$ coincides with $y$ up to some “negligible” or “near-zero” element, namely a ghost element. But we have to handle the GS-relation with care, since it is not symmetric. In fact it is antisymmetric, see below.
The GS-relation is clearly transitive: $$x \lmodg y, \ y \lmodg z \ \ \Rightarrow \ x \lmodg z.$$ It is also compatible with addition and multiplication: For any $z
\in U$, $x \lmodg y$ implies $x + z \lmodg y + z$, and $x z
\lmodg y z$.
We observe the following further properties of this subtle binary relation.
\[rem10.2\] Let $x,y \in U$.
1. $x = y \ \Rightarrow \ x \lmodg y \ \Rightarrow \ \nu(x ) \geq
\nu(y)$.
2. If $x \in \tT(U) \cup \{ 0 \}$, then $x \lmodg y \ \Leftrightarrow \ x = y.
$
3. If $x \in \tG(U) \cup \{ 0 \}$, then $ x \lmodg y \ \Leftrightarrow \ \nu(x) \geq \nu(y).
$
4. $x \lmodg 0 \ \text{iff } \ x \in eU$.
\[lem10.3\] The GS-relation is antisymmetric, i.e., $$x \lmodg y, \ y \lmodg x \ \Rightarrow \ x = y.$$
If $x \in \tT(U)$ or $y \in \tT(U)$ this is clear by Remark \[rem10.2\].ii. Assume now that both $x,y \in eU$. Then $\nu(x)
\geq \nu(y)$ and $\nu(y) \geq \nu(x)$ by Remark \[rem10.2\].iii; hence $\nu(x) = \nu (y)$, i.e., $x = y$.
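Transitivity and antisymmetry of $\lmodg$ can also be checked exhaustively on a miniature model. The Python sketch below encodes a small supertropical semiring of our own construction (tangible and ghost copies of the values $1,2,3$ together with zero) and tests the ghost surpassing relation against Remark \[rem10.2\] and Lemma \[lem10.3\].

```python
# A miniature supertropical semiring, built by us purely for testing:
# the values 1, 2, 3 ordered by max, each present in a tangible ('t')
# and a ghost ('g') copy, together with the zero element (0, 'g').
ZERO = (0, 'g')
U = [ZERO] + [(v, t) for v in (1, 2, 3) for t in 'tg']

def nu(x):                  # ghost map: forget the tag
    return x[0]

def add(x, y):              # supertropical addition
    if nu(x) > nu(y):
        return x
    if nu(y) > nu(x):
        return y
    return (nu(x), 'g')     # equal nu-values collapse to a ghost

GHOSTS = [x for x in U if x[1] == 'g']        # the ghost ideal eU

def gs(x, y):               # x >=_gs y  iff  x = y + z with z ghost
    return any(add(y, z) == x for z in GHOSTS)
```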
\[prop10.4\] $ $
1. Assume that $\al:U \to V$ is a transmission. Then, for any $x,y \in U$, $$x \lmodg y \ \Rightarrow \ \al(x) \lmodg \al(y).$$
2. Assume that $\vrp :R \to U$ and $\psi: R \to V$ are supervaluations with $\vrp \geq \psi$. Then for any $a,b \in R$ $$\vrp(a) \lmodg \vrp(b) \ \Rightarrow \ \psi(a) \lmodg \psi(b).$$
i): Let $x \lmodg y $. If $x$ is tangible or zero, then $x= y$; hence $\al(x) = \al(y)$. If $x$ is ghost, then $\nu(x) \geq \nu(y)$; hence $$\nu(\al(x)) = \al(\nu(x)) \geq
\al(\nu(y)) = \nu(\al(y))$$ by rule TM5 in §5. Since $\al(x)$ is ghost, this means $\al(x) \lmodg \al(y)$, cf. Remark \[rem10.2\].iii above.
ii): We may assume that the supervaluation $\vrp$ is surjective. By §5 we have a (unique) transmission $\al: U \to V$ with $\al
\circ \vrp = \psi$. Thus the claim follows from part i).
We cannot resist giving a second proof of part ii) of the proposition relying only on Definition \[defn5.1\] of dominance (conditions D1-D3).
Assume that $\vrp(a) \lmodg \vrp(b)$. If $\vrp(a)$ is tangible or zero, then $\vrp(a) = \vrp(b)$; hence $\psi(a) = \psi(b)$ by D1; hence $\psi(a) \lmodg \psi(b)$. If $\vrp(a)$ is ghost then $e\vrp(a) \geq e\vrp(b)$; hence $e\psi(a)
\geq e\psi(b)$ by D2. By D3 the element $\psi(a)$ is ghost. Thus $\psi(a) \lmodg \psi(b)$ again.
The GS-relation seems to be helpful for analyzing additivity properties of supervaluations.
\[lem10.5\] If $\vrp:R \to U$ is a supervaluation on a semiring $R$ and $a,b \in R$ with $\vrp(a) + \vrp(b) \in eU$, then $$\renewcommand{\theequation}{$*$}\addtocounter{equation}{-1}\label{eq:str.1}
\vrp(a) + \vrp(b) \ \lmodg \ \vrp(a+b).$$
Let $v: R \to eU$ denote the m-valuation covered by $\vrp$, $v =
e\vrp$. We have $v(a+b) \leq v(a) + v(b)$; hence $e\vrp(a+b)
\leq e(\vrp(a) + \vrp(b))$. If $\vrp(a) + \vrp(b) \in eU$, this shows that $\vrp(a) + \vrp(b) \lmodg \vrp(a+b)$.
It will turn out to be desirable to have supervaluations on $R$ at hand, where the property $(*)$ holds for **all** elements $a,b$ of $R$.
\[defn10.6\] We call a supervaluation $\vrp:R \to U$ **tangibly additive**, if in addition to the rules SV1-SV4 from §4 the following axiom holds:
$$\begin{aligned}
{2}
& SV5: \ && \text{If } a,b \in R \ \text{and } \vrp(a) + \vrp(b)
\in \tT(U), \ \text{then } \vrp(a) + \vrp(b) = \vrp(a+b).\end{aligned}$$
\[prop10.7\] A supervaluation $\vrp: R \to U$ is tangibly additive iff for any $a,b \in R$ $$\vrp(a) + \vrp(b) \ \lmodg \ \vrp(a+b).$$
This is clear by Lemma \[lem10.5\] and Remark \[rem10.2\].ii above.
\[cor10.8\] If $\vrp:R \to U$ is tangibly additive, then for every finite sequence $a_1, \dots, a_m$ of elements of $R$ $$\sum_{i=1}^m \vrp(a_i) \lmodg \vrp \bigg(\sum_{i=1}^m a_i\bigg).$$
This holds for $m=2$ by Proposition \[prop10.7\]. The general case follows by an easy induction using the transitivity of the GS-relation.
*Comment:* We elaborate what it means that a given supervaluation $\vrp: R \to U$ is tangibly additive in the case that the underlying m-valuation $v = e\vrp: R \to eU$ is [strong]{}.
Let $a,b \in R $ be given with $\vrp(a)+ \vrp(b) \in \tT(U)$, i.e., $v(a) \neq v(b)$, and assume without loss of generality that $v(a) < v(b)$. Then $v(a+b) = v(b)$. Hence, $\vrp(a+b)$ is some element of the fiber $\nu^{-1}_U(v(b))$; but the axioms SV1-SV4 say little about the position of $\vrp(a+b)$ in this fiber. SV5 demands that $\vrp(a+b)$ has the “correct” value $\vrp(a) +
\vrp(b) = \vrp(b)$.
Concerning applications, the strong m-valuations seem to be more important than the others. (Recall that any m-valuation on a ring is strong.) Thus the tangibly additive supervaluations covering strong m-valuations deserve a name of their own.
\[defn10.9\] We call a **supervaluation $\vrp: R \to
U$ strong** if $\vrp$ is tangibly additive and the covered m-valuation $e \vrp: R \to eU$ is strong.
We exhibit an important case where a tangibly additive supervaluation is automatically strong.
\[prop10.10\] Assume that $\vrp:R \to U$ is a tangible (cf. Definition \[defn4.1\]) and tangibly additive supervaluation. Then $\vrp$ is strong.
We have to verify that $v:= e \vrp$ is strong. Let $a,b \in R$ be given with $v(a) \neq v(b)$. Suppose without loss of generality that $v(a) < v(b)$. Then $\vrp(a),\vrp(b) \in U$ and $\vrp(b) \neq
0$. Since $\vrp$ is tangible, $\vrp(b) \in \tT(U)$. It follows that $\vrp(a) + \vrp(b) \in \tT(U)$; hence $$\vrp(a) + \vrp(b) = \vrp (a + b),$$ because $\vrp$ is tangibly additive. Multiplying by $e$ we obtain $$v(a) + v(b) = v (a + b).$$
We now are ready to aim at an application of the supervaluation theory developed so far. We start with the polynomial semiring $R[{\lambda}] = R [{\lambda}_1, \dots, {\lambda}_n]$ in a sequence ${\lambda}= ({\lambda}_1,
\dots, {\lambda}_n)$ of $n$ variables over a semiring $R$. Let $\vrp: R
\to U$ be a tangibly additive supervaluation with underlying m-valuation $v: R \to M$, $M:= eU$.
Given a polynomial $$\label{poly1} f = \sum_i c_i
{\lambda}^i \in R[{\lambda}]$$ in the usual multimonomial notation ($i$ runs through the multi-indices $i = (i_1, \dots, i_n) \in
\N_0^{n}$, ${\lambda}^i = {\lambda}_1^{i_1} \cdots {\lambda}_n^{i_n}$, only finitely many $c_i \neq 0$), we obtain from $f$ polynomials $$\tvrp(f) \ : = \ \sum _i \vrp(c_i) {\lambda}^i \in U[{\lambda}],$$ $$\tv(f) \ : = \ \sum _i v(c_i) {\lambda}^i \in M[{\lambda}],$$ by applying $\vrp$ and $v$ to the coefficients of $f$. This gives us maps $$\tvrp : \ R[{\lambda}] \ \to \ U[{\lambda}], \quad \tv : \ R[{\lambda}] \ \to \ M[{\lambda}].$$
Let $a = (a_1,\dots, a_n) \in R^n$ be an $n$-tuple of elements of $R$. It gives us $n$-tuples $$\vrp(a) = (\vrp(a_1), \dots, \vrp(a_n) ), \quad v(a) = (v(a_1), \dots, v(a_n) )$$ in $U^n$ and $M^n$, respectively. We have an evaluation map $\ep_a: R[{\lambda}] \to R,$ which sends the polynomial $f$ (notation as above) to $$\label{poly2} \ep_a(f) \ = f(a) \ = \ \sum _i c_i a ^i$$ and analogous evaluation maps $$\ep_{\vrp(a)} : \ U[{\lambda}] \to U, \quad \ep_{v(a)} : \ M[{\lambda}] \to M.$$ These evaluation maps are semiring homomorphisms. We have a diagram $$\xymatrix{
R[{\lambda}] \ar[d]_{\tvrp} \ar[rr]^{\ep_a} && R
\ar[d]^{\vrp}
\\
U[{\lambda}] \ar[rr]^{\ep_{\vrp(a)}} && U
}$$ (and an analogous diagram with $v$ instead of $\vrp$) which usually does not commute. But it commutes “nearly”.
\[thm10.11\] For $f \in R[{\lambda}]$ $$\ep_{\vrp(a)}(\tvrp(f)) \ \lmodg \ \vrp(\ep_a(f)).$$
Let again $f = \sum_i c_i {\lambda}^i$. Now $\vrp(\ep_a(f)) =
\vrp(\sum_i c_i a^i)$, while $$\ep_{\vrp(a)}(\tvrp(f)) = \sum_i
\vrp(c_i) \vrp(a)^i = \sum_i \vrp(c_ia^i).$$ Thus the claim is that $$\renewcommand{\theequation}{$*$}\addtocounter{equation}{-1}\label{eq:str.1}
\sum_i \vrp(c_i a^i) \ \lmodg \ \vrp(\sum_i c_i a^i).$$ This follows from Corollary \[cor10.8\] above.
We draw a consequence of this theorem. Let $$Z(f) \ := \ \{ a \in
R^n \ | \ f(a) = 0\},$$ the zero set of $f$. Let further $$Z_0(\tvrp(f)) \ := \ \{ b \in U^n \ | \ \tvrp(f)(b) \in eU \},$$ which we call the **root set** of $\tvrp(f)$. For $a \in
Z(f)$ we have $\vrp\left( \sum_i c_i a^i \right) = 0$. It follows by Theorem \[thm10.11\] that $\tvrp(f)(\vrp(a)) \lmodg 0$, i.e., $\tvrp(f)(\vrp(a))$ is ghost.
We have proved
\[cor10.12\] If $\vrp:R \to U$ is tangibly additive, then, for any $f \in R[{\lambda}]$, $$\vrp(Z(f)) \subset Z_0(\tvrp(f)).$$
Assume now that $\vrp$ is *tangible* and tangibly additive; hence strong (cf. Proposition \[prop10.10\]). Then, of course, $\vrp(Z(f)) \subset \tT(U)_0^n$ with $\tT(U)_0 := \tT(U) \cup \{
0 \}$. Thus we have $$\renewcommand{\theequation}{$**$}\addtocounter{equation}{-1}\label{eq:str.1}
\vrp(Z(f)) \ \subset \ Z_0(\tvrp(f))_{\tan} $$ with $$Z_0(\tvrp(f))_{\tan} \ : = \ Z_0(\tvrp(f)) \cap \tT(U)^n_0,$$ which we call **tangible root set** of $\tvrp(f)$. We want to translate $(**)$ into a statement about the relation between $Z(f)$ and the so called “corner locus”, of the polynomial $\tv(f) \in M[{\lambda}]$, to be defined.
We call a polynomial $g = \sum_i d_i {\lambda}^i \in M[{\lambda}]$ a **tropical polynomial**, and define the **corner locus** $\corn(g)$ of $g$ as the set of all $b \in M^n$ such that there exist two different multi-indices $j,k \in \N_0^n$ with $$d_j b^j = d_k b^k \geq d_i b^i$$ for all $i \neq j,k$. We also say that $\corn(g)$ is the **tropical hypersurface** defined by the tropical polynomial $g$.
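For a univariate tropical polynomial the corner-locus condition is a finite check on monomial values. A minimal sketch, over the max-times semiring and with coefficients chosen by us:

```python
# Corner-locus membership for a univariate tropical polynomial over the
# max-times semiring (nonnegative numbers, "addition" = max); the
# coefficients below are our own illustrative choice.
def in_corner_locus(coeffs, b):
    """coeffs = [d_0, ..., d_m]; True iff the maximum of the monomial
    values d_i * b**i is attained for at least two exponents i."""
    terms = [d * b**i for i, d in enumerate(coeffs)]
    top = max(terms)
    return sum(t == top for t in terms) >= 2

# g(lam) = 4 + 2*lam + 1*lam^2, i.e. max(4, 2b, b^2): at b = 2 all
# three monomials agree, so b = 2 lies in corn(g).
```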
This is well-established terminology, at least in the “classical case” that $M$ is the bipotent semiring $T(\R)$ given by the ordered monoid $(\R,+),$ the so-called max-plus algebra of $\R$ (cf. §1, [@IMS §1.5]). A small point here is that we admit coordinates with value $0_M := -\infty$, which usually is not done in tropical geometry. On the other hand, we could work just as well with Laurent polynomials. Then, of course, we would have to discard the zero element.
Returning to our tangible strong supervaluation $\vrp: R \to U$ and the m-valuation $$v = e \vrp : R \to M,$$ we look at the tropical polynomial $$\tv(f) = \sum_i v(c_i){\lambda}^i$$ from above. Let $a \in R^n$. Then $$\tvrp(f)(\vrp(a)) \ = \ \sum_i \vrp(c_i a^i),$$ and all summands on the right side are in $\tT(U)_0$. Thus the sum is ghost iff the maximum of the $\nu$-values $$\nu(\vrp(c_i a^i)) = v(c_i) v(a^i) \qquad (i \in \N^n_0)$$ is attained for at least two multi-indices. This means that $v(a)
\in \corn(\tv(f))$.
Thus $(**)$ has the following consequence:
\[cor10.13\] Let $v:R \to M$ be a strong m-valuation on a semiring $R$. Assume that there exists a tangible supervaluation $\vrp:R \to U$ covering $v$. Then for any polynomial $f \in R[{\lambda}]$, $$v(Z(f)) \ \subset \ \corn( \tv(f)).$$
We have arrived at a very general version of the Lemma of Kapranov ([@EKL Lemma 2.1.4]), as soon as we find a tangible cover $\vrp: R \to U$ of the given m-valuation $v: R \to M$. This turns out to be easy in the case that $M$ is cancellative (i.e., $v$ is a strong *valuation*).
\[lem10.14\] Suppose there is given a **tangible multiplicative section** of the ghost map $\nu : U \to M$, i.e., a map $s: M \to \tT(U)_0$ with $s(0) = 0$, $s(1) = 1$, $s(xy) =
s(x)s(y)$, and $\nu(s(x))=x$ for any $x,y \in M$. Let $v:R \to M$ be a strong m-valuation. Then $s \circ v : R \to U$ is a tangible strong supervaluation covering $v$.
Clearly $\vrp = sv$ obeys SV1-SV4. Let $a,b \in R$ be given with $v(a) <
v(b)$. Then $v(a+b) = v(b)$; hence $s v (a+b) = sv(b) $. Thus SV5 holds true. We have $e \vrp = \nu \circ \vrp = v$.
If $U$ is a supertropical semifield, it is known that such a section $s$ always exists ([@IzhakianRowen2009Equations Proposition 1.6]).
\[examp10.5\] Assume that $M$ is a cancellative bipotent semiring, and $v: R \to
M$ is a strong valuation. We take $U:= D(M \setminus \{ 0\})$ (Example \[examps3.16\]), for which we write more briefly $D(M)$. For every $z\in M$ there exists a unique $x \in \tT(U)_0$ with $\nu(x) = z$. We write $x = \hat z$. Clearly $z \mapsto \hat
z $ is a tangible multiplicative section of the ghost map, in fact the only one. By the lemma we obtain a tangible supervaluation $$\hat v : R \to U, \quad \hat v (a) := \widehat{v(a)},$$ which covers $v$, in fact the only such supervaluation.
Looking again at Corollary \[cor10.13\] we now know that $$v(Z(f)) \subset \corn(\tv(f)),$$ whenever $v: R \to M$ is a strong valuation and $f\in R[{\lambda}]$.
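The inclusion $v(Z(f)) \subset \corn(\tv(f))$ can be observed in a concrete instance of our own choosing: the strong $2$-adic valuation $v(a) = (1/2)^{\operatorname{ord}_2(a)}$ on $R=\mathbb Z$, with values in the bipotent semiring $(\mathbb Q_{\geq 0}, \max, \,\cdot\,)$, and the polynomial $f = {\lambda}^2+3{\lambda}+2$ with integer roots $-1$ and $-2$.

```python
from fractions import Fraction

# A concrete instance, chosen by us, of v(Z(f)) inside the corner locus
# of tilde-v(f): the strong 2-adic valuation v(a) = (1/2)**ord_2(a) on
# R = Z (with v(0) = 0), values in the bipotent semiring (Q>=0, max, *).
def v(a):
    if a == 0:
        return Fraction(0)
    a, e = abs(a), 0
    while a % 2 == 0:
        a, e = a // 2, e + 1
    return Fraction(1, 2 ** e)

f = [2, 3, 1]                   # f = 2 + 3*lam + lam^2 = (lam+1)(lam+2)

def in_corner_locus(coeffs, b):  # corner-locus test for n = 1
    terms = [v(c) * b ** i for i, c in enumerate(coeffs)]
    top = max(terms)
    return sum(t == top for t in terms) >= 2
```

For the root $-2$, for instance, the monomial values are $\tfrac12, \tfrac12, \tfrac14$, so the maximum is attained twice, as the corollary predicts.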
The tangible strong supervaluations in ${\operatorname{Cov}}(v)$ {#sec:11}
================================================================
Given an m-valuation $v: R \to M$, recall from §7 that the equivalence classes $[\vrp]$ of supervaluations $\vrp$ covering $v$ form a complete lattice ${\operatorname{Cov}}(v)$. Abusing notation, we usually will not distinguish between a supervaluation $\vrp$ and its class $[\vrp]$, thus writing $\vrp \in {\operatorname{Cov}}(v)$ if $\vrp$ covers $v$. This will cause no harm in the present section. $\{$N.B. If you are sceptical about this, you may always assume that $\vrp$ is surjective, more specifically, that $\vrp = \vrp_v / E$ with $\vrp_v$ the initial covering of $v$ and $E$ an MFCE-relation on $U(v)$ (cf. Notation \[not6.14\]). These supervaluations $\vrp$ are canonical representatives of their classes $[\vrp]$.$\}$
\[lem11.1\] Assume that $\vrp:R \to U$ and $\psi:R \to V $ are supervaluations with $\vrp \geq \psi.$
1. If $\psi$ is tangible, then $\vrp$ is tangible.
2. If $\vrp$ is tangibly additive, then $\psi$ is tangibly additive.
i): is clear from the axiom D3 in the definition of dominance (cf. Definition \[defn5.1\]).
ii): follows from Propositions \[prop10.7\] and \[prop10.4\].ii.
*Starting from now we assume that $v$ is a strong valuation* (which means in particular that $M$ is cancellative). Let $\gq$ denote the support of $v$, i.e., $\gq = v^{-1}(0)$.
\[not11.2\] ${\operatorname{Cov}}_{\tng}(v)$ denotes the set of tangible supervaluations in ${\operatorname{Cov}}(v)$, and ${\operatorname{Cov}}_{\stg}(v)$ denotes the set of strong ($=$ tangibly additive) supervaluations in ${\operatorname{Cov}}(v)$. Finally, let $$\stCov(v) \ := \ \tCov(v ) \cap
\sCov(v)$$ be the set of tangible strong supervaluations covering $v$.
We already know by Example \[examp10.5\] that the set $\stCov(v)$ is not empty. Lemma \[lem11.1\] tells us in particular that $\tCov(v)$ is an upper set and $\sCov(v)$ is a lower set in the poset ${\operatorname{Cov}}(v)$.
Let us study these sets more closely. We start with $\tCov(v)$. The initial supervaluation $\vrp_v: R \to U(v)$ (cf. Definition \[defn5.15\]) is the top ($=$ biggest) element of ${\operatorname{Cov}}(v)$, and thus is also the top element of $\tCov(v)$. This can also be read off from the explicit description of $\vrp_v$ in Example \[examp4.5\]. The other elements of ${\operatorname{Cov}}(v)$ are the supervaluations $\vrp_v / E : R \to U(v) / E$, with $E$ running through the MFCE-relations on $U(v)$. We have to find out which MFCE-relations $E$ on $U(v)$ give tangible supervaluations $\vrp_v / E$.
Here is a definition which, for later use, is slightly more general than what we need now:
\[defn11.3\] We call an equivalence relation $E$ on a supertropical semiring $U$ **ghost separating** if for all $x \in \tT (U)$, $y \in U$, $$x \sim_E y \quad \Rightarrow \quad y \in \tT(U) \ \text{or} \ x \sim_E 0.$$
If $E$ is an MFCE-relation on $U$, then $x \sim_E 0$ only if $x=0$. Thus, $E$ is ghost separating iff $\tT(U)$ is a union of $E$-equivalence classes. This means that $E$ is finer than the MFCE-relation $\tE$ introduced in Examples \[examps6.4\].v, whose equivalence classes are the tangible fibers of $\nu_U$ and the one-point sets in $eU$.
If $\vrp:R \to U$ is a surjective tangible supervaluation and $E$ is an MFCE-relation on $U$, then it is obvious that $\vrp / E : R
\to U / E$ is again tangible iff $E$ is ghost separating. Thus we see that $\vrp_v / \tE$ is the bottom ($=$ smallest) element of $\tCov(v)$.
Now recall from Example \[examp6.12\] that, in the notation at the end of §9 (Example \[examp10.5\]), $$U(v) / \tE \ = \ D(M) ;$$ hence $\vrp_v / \tE$ coincides with the only tangible cover $\hat
v$ of $v$ with values in $D(M)$, cf. Example \[examp10.5\]. We conclude that $$\tCov (v) = \{ \psi \in {\operatorname{Cov}}(v) \ | \ \psi \geq \hat v\}.$$ Again by Example \[examp10.5\] we know that $\hat v$ is strong. This $\hat v$ is also the bottom element of the poset $\stCov(v)$.
We turn to $\sCov(v)$. We will construct a new element of this poset in a direct way. For that reason we introduce an equivalence relation on $R$.
\[defn11.4\] Let $S(v)$ denote the equivalence relation on the set $R$ defined as follows. $\{$We write $\sim_v$ for $\sim_{S(v)}$.$\}$
If $a_1,a_2 \in R$ then $$\begin{array}{lll}
a_1 \sim_v a_2 & \Longleftrightarrow & \text{either } v(a_1) = v(a_2) = 0 \\
& & \text{or } \exists c_1, c_2 \in R, \ \text{with } v(c_1) < v(a_1), \ v(c_2) < v(a_2),\\
& & a_1 + c_1 = a_2 + c_2.
\end{array}$$
It is easily checked that $S(v)$ is indeed an equivalence relation on the set $R$, making strong use of the assumption that the valuation $v$ is strong. This is the finest equivalence relation $E$ on $R$ such that $a \sim_E a +c $ whenever $v(c) < v(a)$. Observe also that $$a_1 \vsim a_2 \ \Longrightarrow \ v(a_1) = v(a_2).$$
We claim that $S(v)$ is compatible with multiplication, i.e., $$a_1 \vsim a_2 \ \Longrightarrow \ a_1b \vsim a_2b$$ for every $b \in R$. This is obvious if $a_1 \in \gq$ or $a_2 \in
\gq$, or $b \in \gq$. Otherwise $v(b)> 0$, and we have elements $c_1,c_2 \in R$ with $ v(c_1) < v(a_1)$, $v(c_2) < v(a_2)$, $a_1 +
c_1 = a_2 + c_2$. Then $a_1b + c_1b = a_2b + c_2b$ and $$v(c_i b ) \ = \ v(c_i)v(b) \ < \ v(a_i)v(b) \ = \ v(a_ib)$$ for $i = 1,2$, since by assumption $M$ is cancellative. Thus indeed $a_1 b \vsim a_2b$.
We denote the $S(v)$-equivalence class of an element $a$ of $R$ by $[a]_v$. The set $\bR := R / S(v)$ is a monoid under the well defined multiplication $$[a]_v \cdot [b]_v \ = \ [ab]_v$$ for $a,b \in R$. The subset $R \setminus \gq$ of $R$ is a union of $S(v)$-equivalence classes and the subset $\overline{R \setminus
\gq} : = (\Rmq) / S(v)$ of $\bR$ is a submonoid of $\bR$. We have $$\bR = \overline{R \setminus \gq} \ \cup \ \{ \bar 0 \}$$ with $\bar 0 = [0]_v = \gq.$
Since $a_1 \vsim a_2$ implies $v(a_1) = v(a_2)$, we have a well defined monoid homomorphism $\bR \to M$, $[a]_v \mapsto v(a)$, which restricts to a monoid homomorphism $$\bv: \overline{R
\setminus \gq} \ \to \ M \setminus \{ 0\}.$$ This map $\bv$ gives us a supertropical semiring $$U := \STR(\overline{R
\setminus \gq}, M \setminus \{ 0\}, \bv),$$ cf. Construction \[constr3.14\]. Notice that $\tT(U) = \overline{R \setminus
\gq}$ and $eU = M$. We identify $\tT(U)_0 = \bR$.
\[prop11.5\] The map $\chi: R \to U$ given by $$\chi(a) :=
0 \quad \text{if}\quad a \in \gq, \qquad \chi(a) := [a]_v \in
\tT(U) = \bRmq \quad\text{if} \quad a \notin \gq,$$ is a tangible strong supervaluation covering $v$.
It is obvious that $\chi$ obeys the rules SV1-SV3 in the definition of supervaluations (Definition \[defn4.1\]). Due to our construction of $U$ we have $\nu_U \circ \chi = v$. Thus $\chi$ also obeys SV4, and hence is a supervaluation covering the strong valuation $v$. It is clearly tangible.
It remains to verify that $\chi$ is tangibly additive. Let $a,b
\in R$ be given with $\chi(a) + \chi(b) \in \tT(U)$, i.e., $v(a)
\neq v(b)$. Assume without loss of generality that $v(a) < v(b)$. Then $a + b \vsim b$. This means that $\chi(a+b) = \chi(b)$, as desired.
We strive for an understanding of the set of all $\psi \in {\operatorname{Cov}}(v)$ which are dominated by this supervaluation $\chi$. We need a new definition.
\[defn11.6\] We call a supervaluation $\vrp: R \to V$ **very strong**, if $$\begin{aligned}
{2}
&SV5^*:\ && \forall a,b \in R : e\vrp(a)< e \vrp(b) \
\Longrightarrow \ \vrp(a+b) = \vrp(b).\end{aligned}$$
Clearly SV$5^*$ implies that the covered m-valuation $e\vrp$ is strong. If we require this property only for $a,b \in R$ with $e\vrp(a) < e \vrp(b)$ and $\vrp(b)$ tangible, we are back to condition SV5 given above (Definition \[defn10.6\]). Thus, a very strong supervaluation is certainly strong. On the other hand, every **tangible** strong supervaluation is very strong.
\[lem11.7\] If $\vrp: R \to V$ is very strong, then any supervaluation $\psi:
R \to W$ dominated by $\vrp$ is again very strong.
Let $a,b \in R $ be given with $ e\psi(a) < e \psi
(b)$. It follows from axiom D2 that $ e\vrp(a) < e \vrp (b)$, since $ e\vrp(a) \geq e \vrp (b)$ would imply $ e\psi(a) \geq e
\psi (b)$. Thus $\vrp(a+b) = \vrp(b)$, and we obtain by D1 that $\psi(a+b) = \psi(b)$.
Returning to our given strong valuation $v: R \to M$, let $\sCov^*(v)$ denote the subset of all $\vrp \in {\operatorname{Cov}}(v)$ which are very strong. Lemma \[lem11.7\] tells us in particular that $\sCov^*(v)$ is a lower set in the poset ${\operatorname{Cov}}(v)$, and hence in $\sCov(v)$. We have $$\tCov(v) \cap \sCov^*(v)\ = \ \tCov(v) \cap \sCov(v) \ = \ \tsCov(v).$$
\[thm11.8\] The tangible strong supervaluation $\chi : R \to U$ from above (Proposition \[prop11.5\]) dominates every very strong supervaluation covering $v$, and hence is the top element of both $\sCov^*(v)$ and $\tsCov(v)$.
Let $\psi: R \to V$ be a very strong supervaluation covering $v$ (in particular $eV = M$). We verify axioms D1-D3 for the pair $\chi$, $\psi$, and then we will be done. D2 is obvious, and D3 holds trivially since $\chi$ is tangible. Concerning D1, assume that $\chi(a_1) = \chi(a_2)$. By definition of $\chi$ this means that $a_1 \vsim a_2$.
We have to prove that $\psi(a_1) = \psi(a_2)$. Either $a_1, a_2
\in \gq$, or there exist $c_1, c_2 \in R$ with $v(c_1) < v(a_1)$, $v(c_2) < v(a_2)$, $c_1 + a_1 = c_2 + a_2$. In the first case $e
\psi (a_1) = e \psi (a_2) = 0 $ hence $\psi(a_1) = \psi(a_2) =
0$. In the second case we have $$\psi(a_1) = \psi (a_1 + c_1 ) = \psi(a_2 + c_2) = \psi (a_2)$$ since $\psi$ is very strong. Thus $\psi(a_1) = \psi(a_2)$ in both cases.
\[notation11.9\] We denote the semiring $U$ given above by $\bUv$ and the supervaluation $\chi$ given above by $\bvrp_v$. We call $$\bvrp_v : R \ \to \ \bUv = \STR(\bRmq, M
\setminus \{ 0 \}, \bv)$$ the **initial very strong supervaluation** covering $v$.
In this notation $$\begin{array}{lll}
\sCov^*(v) & = & \{ \psi \in {\operatorname{Cov}}(v) \ | \ \bvrp_v \geq \psi \},
\\[2mm]
\tsCov(v) & = & \{ \psi \in {\operatorname{Cov}}(v) \ | \ \bvrp_v \geq \psi \geq \hat v \}. \\
\end{array}$$
Let $E(v)$ denote the equivalence relation on $U(v)$ whose equivalence classes are the sets $[a]_v$ with $a \in R
\setminus \gq = \tT(U(v))$ and the one point sets $\{ x \}$ with $x\in M$. In other terms, the restriction $E(v) | \tT(U)$ coincides with $S(v) | \Rmq$, while $E(v) | M$ is the diagonal ${\operatorname{diag}}(M)$ of $M$. We identify $$U(v) / E(v) = \bUv$$ in the obvious way.
\[prop11.10\] $E(v) $ is a ghost separating MFC-relation and $$\bvrp_v = \vrp_v / E(v).$$
It is immediate that $E(v)$ belongs to $\MFC(U(v))$ and is ghost separating. For $a$ in $\Rmq$ we have $$\pi_{E(v)} (\vrp_v(a)) = \pi_{S(v)}(a) = [a]_v = \bvrp_v(a)$$ and for $a \in \gq$ $$\pi_{E(v)} (\vrp_v(a)) = \pi_{E(v)}(a) = 0 = \bvrp_v(a)$$ again. Thus $\pi_{E(v)}$ is the transmission from $\vrp_v$ to $\bvrp_v$.
\[cor11.11\] The MFC-relations $E$ on $U(v)$ such that $\vrp_v / E$ is very strong are precisely all $E \in \MFC(U(v))$ with $E \supset
E(v)$.
This is a consequence of our observations above (Lemma \[lem11.7\], Theorem \[thm11.8\], Proposition \[prop11.10\]) and the theory in §7, cf. Theorem \[thm7.2\].
We now focus on the special case that $R$ is a semifield. Slightly more generally we assume that every element of $\Rmq$ is invertible, while $\gq$ may be different from $\{ 0\}$.
$\tT(U(v)) = \Rmq$ is a group under multiplication. Thus the results from the end of §8 apply. We have $$\tT_e(U(v)) = \{ a\in R \ | \ v(a) = 1_M\} = \go_v^*,$$ with $\go_v^*$ the unit group of the subsemiring $$\go_v := \{ a \in R \ | \ v(a) \leq 1_M \}$$ of $R$. Notice that the set $$\mm_v := \go_v \setminus \go_v^* = \{a \in R \ | \ v(a) < 1_M\}$$ is an ideal of $\go_v$, just as in the classical (and perhaps most important) case, where $R$ is a field and $v$ is a Krull valuation on $R$.
By Theorem \[thm8.11\] and Corollary \[cor8.12\] we know that every MFC-relation on $U(v)$ except $E(\nu)$ is orbital, hence ghost separating. We have $$\vrp_v / E(\nu) = v,$$ viewed as a supervaluation. The other supervaluations $\vrp$ covering $v$ correspond uniquely with the subgroups $H$ of $\go_v^*$ via $\vrp = \vrp_v / E(H)$; cf. Corollary \[cor8.12\].
Instead of $U(v) / E(H)$ and $\vrp_v / E(H)$ we now write $U(v) /
H$ and $\vrp_v / H$ respectively. In this notation $$\tT(U(v) / H) = (\Rmq) / H,$$ and $ \vrp_v / H: R \to \Uv / H$ is given by $$(\vrp_v / H)(a) = \left\{
\begin{array}{lll}
aH & \text{if} & a \in \Rmq, \\
0 & \text{if} & a \in \gq.
\end{array}
\right.$$
\[thm11.12\] Assume that every element of $\Rmq$ is invertible (e.g. $R$ is a semifield).
1. Every strong supervaluation covering $v$ is very strong. Except $v$ itself, viewed as a supervaluation, all these supervaluations are tangible. In other terms, $$\sCov(v) = \sCov^*(v) = \tsCov(v) \cup \{ v \} .$$
2. $\bvrp_v = \vrp_v /\langle 1 + \mm_v\rangle$, with $\langle 1 + \mm_v\rangle$ the group generated by $ 1 + \mm_v$ in $\go_v^*$ [^9].
3. The tangible strong supervaluations $\vrp$ covering $v$ correspond uniquely with the subgroups $H$ of $\go_v^* $ containing the semigroup $1 + \mm_v$ via $\vrp = \vrp_v /
H$. Thus we have an anti-isomorphism $H \mapsto \vrp_v / H$ from the lattice of all subgroups $H$ of $\go_v^*$ containing $1 + \mm_v$ to the lattice $\tsCov(v)$.
i): Every supervaluation $\vrp$ covering $v$ is either tangible or $\vrp = v$. Thus, if $\vrp$ is strong, then $\vrp$ is very strong in both cases.
ii): We know that $\bvrp_v = \vrp_v / E(v)$ (Proposition \[prop11.10\]). $E(v)$ is ghost separating, hence orbital. The subgroup $H$ of $\go_v^*$ with $E(H) = E(v)$ has the following description (cf. Proposition \[prop8.8\]): If $a \in \Rmq =
\tT(U(v))$, then $ a\in H$ iff $a \vsim 1$. This means that there exist elements $c_1, c_2 \in \mm_v$ with $a + c_1 = 1 + c_2$. Now $a + c_1 = a(1 + d_1)$ with $d_1 = \frac {c_1}{ a} \in \mm_v$. Thus $a \vsim 1$ iff $a$ is in the group $\langle 1 + \mm_v
\rangle$.
iii): Now obvious, since $\bvrp_v$ is the top element of $\tsCov(v)$.
We look again at the sentence $$\renewcommand{\theequation}{$*$}\addtocounter{equation}{-1}\label{eq:str.1}
\varepsilon_{\vrp(a)} (\tvrp(f)) \ \lmodg \ \vrp(\varepsilon_a(f))$$ from §9, valid for any $\vrp \in \sCov(v)$, $f \in R[{\lambda}]$, $a
\in R^n$, cf. Theorem \[thm10.11\]. Choosing here any $\vrp \in
\tsCov(v)$, we learned that $(*)$ implies Kapranov’s Lemma (Corollary \[cor10.13\]). But the statement $(*)$ itself has a different content for different $\vrp \in \tsCov(v)$. If also $\psi \in \tsCov(v)$ and $\vrp \geq \psi$, then we obtain statement $(*)$ for $\psi$ from the statement $(*)$ for $\vrp$, leaving $f$ and the tuple $a$ fixed, by applying the transmission $\al_{\psi, \vrp}$. Thus it seems that $(*)$ has the most content if we choose for $\vrp$ the initial very strong supervaluation $\bvrp_v: R \to \bUv$.
We close this section with an explicit description of $\bUv$ and $\bvrp_v$ in a situation typically met in tropical geometry. Let $R : = F\{ t \}$ be the field of formal **Puiseux series with real powers** over any field[^10] $F$, cf. [@IMS p.6]. The elements of $R$ are the formal series $$a(t) = \sum_{j \in I } c_j t^j$$ with $c_j \in F^*$ and $I \subset \R$ a well-ordered set (in the set-theoretic sense), including $I = \emptyset$. Let further $M$ be the bipotent semifield $T(\R_{>0})$ (cf. Theorem \[thm1.4\]), i.e., $$M = \R_{>0} \cup \{ 0 \} = \R_{\geq 0},$$ with the max-plus structure.
We define a (automatically strong) valuation $$v : F \{ t \} \to
M$$ by putting $$v(a(t)) :=
\vth^{\min(I)}$$ if $a(t) \neq 0$, written as above, and $v(0) := 0$. Here $\vth$ is a fixed real number with $0 < \vth < 1$ (cf. [@IMS], loc. cit., but we use a multiplicative notation). Now $\go_v^*$ is the group consisting of all series $$a(t) = c_0 + \sum_{j > 0}c_jt^j, \qquad c_0 \neq 0,$$ in $F\{ t \} $, and $1 + \mm_v$ is the subgroup consisting of these series with $c_0 = 1$.
The equivalence relation $S(v)$ on $R^* = \tT(U(v))$ is given by $$a(t) \vsim b(t) \ \Longleftrightarrow \ \frac{ a(t)} { b(t)} \in 1 + \mm_v .$$ This means that the series $a(t)$ and $b(t)$ have the same leading term $\ell(a(t)) = \ell (b(t))$. Thus the group of monomials $$G := \{ ct^j \ | \ c \in F^*, \ j \in \R \}$$ is a system of representatives of the equivalence classes of $S(v)$. We identify $$G = R^* / S(v) = \tT(U(v)) / E(v) .$$ Then $\bUv = \STR(G, \R_{>0 }, v|G) = G \dot\cup M$ in the notation of Construction \[constr3.14\], and our supervaluation $\bvrp_v : R \to \bUv$ is the map $a(t) \mapsto \ell(a(t))$, which sends each formal series $a(t)$ to its leading term. {We read $\ell(0) = 0$, of course.}
In short, applying $v$ to a series $a(t)$ means taking its leading $t$-power and replacing $t$ by $\vth$, while applying $\bvrp_v$ means taking its leading term.
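This description of $v$ and $\bvrp_v$ can be sketched in a few lines of Python. A (truncated) Puiseux series is modeled as a dictionary mapping real exponents to nonzero coefficients; the names `THETA`, `v`, and `leading_term` are ours, chosen for illustration, and only finitely many terms are kept.

```python
# Sketch of the valuation v and the supervaluation \bar{phi}_v on (truncated)
# Puiseux series, following the description above.  A series is modeled as a
# dict {exponent: coefficient}; THETA is the fixed real number 0 < theta < 1.

THETA = 0.5

def v(series):
    """v sends a nonzero series to THETA**(minimal exponent), and 0 to 0."""
    if not series:
        return 0.0
    return THETA ** min(series)

def leading_term(series):
    """bar{phi}_v sends a series to its leading term: (min exponent, coefficient)."""
    if not series:
        return 0
    j = min(series)
    return (j, series[j])

# a(t) = 3*t**(-1) + 5*t**2
a = {-1.0: 3, 2.0: 5}
print(v(a))             # THETA**(-1) = 2.0
print(leading_term(a))  # (-1.0, 3)
```

In particular `v` forgets the coefficient of the leading term while `leading_term` retains it, matching the statement that $\bvrp_v$ refines $v$.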
Similarly we can interpret the bottom supervaluation $\hat v \in
\tsCov(v)$. The $t$-powers $t^j$, $j \in \R$, are a multiplicative set of representatives of the $\tE$-equivalence classes. Identifying $$\Uv / \tE = \{ t^j \ | \ j \in \R \},$$ we can say that $\hat v(a(t))$ is the leading $t$-power of the series $a(t)$. The ghost map from $\Uv/ \tE = D(M)$ to $M$ sends $t$ to $\vth$.
Iq-valuations on polynomial semirings and related supervaluations {#sec:13}
==================================================================
Since the semiring of polynomials over a supertropical domain is no longer supertropical (or analogously, the semiring of polynomials over a bipotent semiring is no longer bipotent), we would like a theory generalizing valuations to maps with values in these polynomial semirings. Unfortunately, the target is no longer an ordered group (and is not even an ordered monoid). In this section, we formulate some concepts of this paper in the more general context of monoids with a supremum, instead of ordered monoids, and show how this encompasses Kapranov’s Lemma.
Recall that an operation $a\vee b$ on a set $S$ is called a **sup** if it has a distinguished element $0$ and satisfies the following properties for all $a,b,c\in S$:
1. $0 \vee a = a;$
2. $a\vee b = b \vee a;$
3. $a \vee a = a;$
4. $a \vee
(b \vee c) = (a\vee b) \vee c.$
In this case, we can define a partial order on $S$ by defining $a
\le b$ when $a \vee b = b.$ Then the following properties are immediate for all $a,b,c \in S$:
1. $0 \le a$;
2. $a \vee b \ge a$ and $a \vee b \ge b;$
3. if $a \le c$ and $b \le c$, then $a \vee b \le c.$ (Indeed, if $a\vee c = c$ and $b\vee c = c,$ then $(a\vee b) \vee c =
(a\vee c)\vee (b\vee c) = c\vee c = c.$)
We also say that a given sup $x\vee y$ on a monoid $M$ is **compatible** with $M$ if $a(x\vee y) = ax \vee ay$ for all $a,x,y \in M$.
In order to axiomatize this in the language of semirings, we recall that an **idempotent semiring** $R$ satisfies the property that $x+x = x$ for all $x\in R$.
\[sup1\] $ $
1. Every idempotent semiring $R$ can be viewed as a multiplicative monoid with a compatible sup $\vee$ defined by $$x \vee y : = x+y.$$
2. Conversely, given a monoid $M$ with a compatible sup, we can define an idempotent semiring structure on $M$, with the same multiplication, and with addition given by $x+y
:= x \vee y$.
All of the other verifications are immediate.
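As a concrete instance of the proposition, consider the max-plus semiring on $\R_{\geq 0}$ (semiring addition is $\max$, multiplication is the usual product), which is idempotent. The following minimal numerical check, with helper names of our choosing, verifies that the induced sup $x \vee y = x + y = \max(x,y)$ is compatible.

```python
# A small check of the idempotent-semiring / compatible-sup correspondence
# in the max-plus semiring on the nonnegative reals: semiring addition is
# max (hence idempotent) and multiplication is the ordinary product.

def sup(x, y):
    """The sup induced by semiring addition: x v y := x + y = max(x, y)."""
    return max(x, y)

def check_compatible(a, x, y):
    """Compatibility: a(x v y) = ax v ay."""
    return a * sup(x, y) == sup(a * x, a * y)

samples = [(2.0, 3.0, 5.0), (0.5, 4.0, 1.0), (0.0, 7.0, 2.0)]
print(all(check_compatible(a, x, y) for a, x, y in samples))  # True

# Idempotency of semiring addition: x v x = x.
print(all(sup(x, x) == x for x in (0.0, 1.5, 9.0)))  # True
```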
If $R$ is an idempotent semiring, then so is the polynomial semiring $R[{\lambda}]$ as well as the matrix semiring $M_n(R)$.
Both of these assertions fail when we substitute “bipotent” for “idempotent.” Thus, it makes sense to pass to idempotent semirings when studying polynomials and matrices. In the case of semifields, we actually have a lattice structure.
If $R$ is a semifield, where $\vee$ is given by addition (as in Proposition \[sup1\]), then there is a compatible inf relation $\wedge$ given by $x \wedge y := \frac {xy}{x+y}$ (taking $0 \wedge 0 = 0$), thereby making $(R,\vee,\wedge)$ a distributive lattice satisfying $$\label{mult} (x\vee y)(x\wedge
y) = xy, \quad \forall x,y\in R.$$
Property (\[mult\]) follows at once from the definitions, and implies that $a(x\wedge y) = ax \wedge ay$, as well as associativity of $\wedge.$ To check distributivity, we need to check $$(x\wedge y )\vee z = (x \vee z) \wedge ( y \vee z).$$ Since $\le$ is clear, we only check $\ge,$ and also may assume $x,y,z \ne 0.$ Now $$\begin{aligned}(x\wedge y )\vee
z & =
\frac{xy}{x+y} + z \\ & \ge \frac{xy}{x+y+z} + z\frac {x+y+z}{x+y+z} \\
& = \frac{(x+z)(y+z)}{x+y+z} = \frac{(x+z)(y+z)}{(x+z)+(y+z)}
\\ & =(x \vee z) \wedge ( y \vee z).\end{aligned}$$
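In the max-plus semifield on $\R_{>0}$ the inf becomes $x \wedge y = xy/(x+y) = xy/\max(x,y) = \min(x,y)$, so the identities above reduce to familiar lattice facts. The following sketch (with our own helper names `vee`, `wedge`) tests them numerically.

```python
# Numerical illustration of the lattice structure in the max-plus semifield
# on the positive reals: semiring addition is max, multiplication is the
# ordinary product, so x v y = max(x,y) and x ^ y = xy/max(x,y) = min(x,y).

def vee(x, y):
    return max(x, y)

def wedge(x, y):
    return (x * y) / vee(x, y)   # equals min(x, y) here

triples = [(1.0, 2.0, 3.0), (5.0, 0.5, 2.0), (4.0, 4.0, 1.0)]

# (x v y)(x ^ y) = xy
print(all(abs(vee(x, y) * wedge(x, y) - x * y) < 1e-12 for x, y, _ in triples))  # True

# Distributivity: (x ^ y) v z = (x v z) ^ (y v z)
print(all(abs(vee(wedge(x, y), z) - wedge(vee(x, z), vee(y, z))) < 1e-12
          for x, y, z in triples))  # True
```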
Having the translation of the sup relation to semirings at hand, we are ready to reformulate some of the results of this paper. But first it is instructive to introduce a parallel of the ghost surpassing relation $\lmodg$.
\[defnep.11\] $y \lmod x \ \Leftrightarrow \ \exists a \in R \ \text{with} \ y =
x+a.$
Clearly, $\lmod$ is a transitive binary relation on $R$.
\[def:13.5\] $R$ is an **upper-bound** semiring, written ub-semiring, if the relation $\lmod$ is anti-symmetric; i.e., $$x \lmod y \ \text{and} \ y \lmod x \quad \Leftrightarrow \quad x=y.$$
The reason for this terminology is that now the relation $\lmod$ gives a partial ordering on the set $R$ $$a \leq b \ \ \text{iff} \
\ b \lmod a
\ \ \text{iff} \ \ \exists c \in R: \ a+c = b,$$ and $x + y$ is an upper bound of $x,y$ in this ordering[^11].
\[rmk:13.6\] $ $
1. The condition that a semiring $R$ is ub can be rephrased as follows:
For any $a,b,x \in R,$ if $x+a+b =x,$ then $x+a = x.$
2. Any ub-semiring $R$ has the property that $a+b= 0$ implies $a=b=0$, by (i). (Take $x=0$.)
Any idempotent semiring is an ub-semiring.
If $x+a+b =x,$ then $$x+a = (x+ a +b )+a = x+ a +b = x.$$
If $R$ is any semiring, let $R[{\lambda}] = R[{\lambda}_1, \dots, {\lambda}_n]$ denote the polynomial semiring over $R$ in a set of variables ${\lambda}= ({\lambda}_1, \dots, {\lambda}_n)$.
\[prop:13.11\] Every supertropical semiring $U$ is upper bound, and $U[{\lambda}_1,
\dots, {\lambda}_n]$ is upper bound for every $n$.
We have to check the condition in Remark \[rmk:13.6\].i. Let $x,a,b \in U$ be given with $x + a+ b = x$. We have to verify that $x + a = x$. Multiplying by $e$ we obtain $ex + ea+ eb = ex$, hence $ea \leq ex$ and $eb \leq ex$. If $ea < ex$, then $x+a = x$ right away. If $eb < ex$, then $x + b = x$, hence $x =x + a + b =
x +a$ again. There remains the case that $ea=eb=ex$. Now $x +a +b
= ex$, hence $x$ is ghost, and $x+a = ex =x$ again. This proves that $U$ is ub.
Let now $f,g,h \in U[{\lambda}_1, \dots, {\lambda}_n]$ be given with $f+g+h =
f$. We write $f = \sum \al_i {\lambda}^i$, $g = \sum \bt_i {\lambda}^i$, $h =
\sum \gm_i {\lambda}^i$. Then $\al_i + \bt_i + \gm_i =\al_i$ for every $i$, and we conclude that $\al_i + \bt_i = \al_i$ for every $i$, hence $f+g =f$, as desired.
The reason we want to consider the idempotent semiring $M[{\lambda}]$ is that we want to extend any m-valuation $v: R \to M$ to the map $\tv : R[{\lambda}] \to M[{\lambda}]$, where we define $$\label{tild} \tv \bigg (\sum_i {\alpha}_i {\lambda}_1 ^{i_1 }\dots
{\lambda}_n ^{i_n } \bigg) = \sum_i v({\alpha}_ i) {\lambda}_1 ^{i_1 }\dots {\lambda}_n
^{i_n } .$$ Since $M[{\lambda}]$ is no longer bipotent in the natural way, we would like to generalize Definition \[defn2.1\] to permit valuations to idempotent semirings.
Unfortunately, $\tv $ as defined in (\[tild\]) need not satisfy property V3 of Definition \[defn2.1\], since $\tv (fg)$ could differ from $\tv (f )\tv (g).$ Indeed, if $f = \sum_i {\alpha}_i {\lambda}^{i}$ and $g = \sum_j \bt_j {\lambda}^{j
},$ with $i = (i_1, \dots,i_n)$ and $j = (j_1, \dots, j_n)$, then writing $fg = \sum_k \left(\sum _{i+j =k} {\alpha}_i\bt_{j}\right) {\lambda}^{k },$ we have $$\begin{aligned} \tv (fg) & =
\sum_k v\!\bigg(\sum _{i+j = k } {\alpha}_i\bt _{j}\bigg) {\lambda}^{k }
\\ & \le \sum _k \sum _{i+j =k} v({\alpha}_i)v(\bt
_{j}) {\lambda}^{k } \\ & = \bigg(\sum v({\alpha}_ i){\lambda}^{i}
\bigg)\bigg(\sum v(\bt_ j){\lambda}^{j}
\bigg),\end{aligned}$$ where there could be strict inequality. (Notice that our partial ordering on $M[{\lambda}]$ extends the total ordering of $M$.) Accordingly, we need a weaker notion:
\[def:13.10\] An [**iq-valuation**]{} (= idempotent monoid quasi-valuation) on a semiring $ R$ is a map $v: R\to M$ into a (commutative) idempotent semiring $M\ne\{0\}$ with the following properties: $$\begin{aligned}
&IQV1: v(0)=0,\\
&IQV2: v(1)=1,\\
&IQV3:v(xy)\le v(x)v(y)\quad\forall x,y\in R,\\
&IQV4: v(x+y)\le v(x)+v(y)\quad \forall x,y\in R.\end{aligned}$$
{NB: Here as elsewhere we use the partial order introduced above following Definition \[def:13.5\].}
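The strict inequality in IQV3 noted before Definition \[def:13.10\] can be seen already over $\mathbb{Q}$ with the trivial valuation $v(x)=1$ for $x \neq 0$ into the bipotent (Boolean) semiring $\{0,1\}$: cancellation of coefficients in $fg$ makes $\tv(fg)$ strictly smaller. A sketch, with illustrative names only:

```python
# Strict inequality tv(fg) < tv(f)*tv(g): take f = 1 + lambda, g = -1 + lambda
# over Q, with the trivial valuation into the Boolean semiring {0,1}, where
# semiring addition is max and multiplication is min.  Polynomials are
# represented as coefficient lists [c_0, c_1, ...].

def poly_mul(f, g, add, mul):
    out = [0] * (len(f) + len(g) - 1)  # zeros of the target semiring
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = add(out[i + j], mul(a, b))
    return out

v = lambda x: 1 if x != 0 else 0          # trivial valuation on Q
tv = lambda f: [v(c) for c in f]           # coefficientwise extension

f = [1, 1]    # 1 + lambda
g = [-1, 1]   # -1 + lambda

fg = poly_mul(f, g, lambda x, y: x + y, lambda x, y: x * y)  # ordinary product
print(fg)                                    # [-1, 0, 1]: the middle term cancels
print(tv(fg))                                # [1, 0, 1]
print(poly_mul(tv(f), tv(g), max, min))      # [1, 1, 1] -- strictly bigger
```

The coefficient of $\lambda$ in $fg$ vanishes, so $\tv(fg)$ has a zero entry where $\tv(f)\tv(g)$ does not.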
The following is now obvious.
\[iqval\] Suppose $M$ is a bipotent semiring and $v: R \to M$ is an m-valuation.
1. Then the map $\tv :
R[{\lambda}] \to M[{\lambda}]$ given above is an iq-valuation.
2. For any given $a \in M^{n}$, the map $\ep_a \circ \tv : R[{\lambda}] \to M$ is again an iq-valuation. {Here $\ep_a$ denotes the evaluation map $f({\lambda}) \mapsto f(a)$, as in the previous sections.}
If $v$ is strong we can do better.
\[thm13.12\] Assume that $v: R \to M$ is a surjective strong m-valuation. Then, for any $a \in M^n$, $\ep_a \circ \tv : R[{\lambda}] \to M$ is again a strong m-valuation.
By an easy induction we restrict to the case of $n=1$. Given $f =
\sum _i \al_i {\lambda}^i $, $g = \sum _j \bt_j {\lambda}^j $ in $R[{\lambda}]$ we have to verify the following:
1. $\ep_a\tv (fg ) = \ep_a \tv (f)
\cdot \ep_a \tv (g)$;
2. If $\ep_a \tv (f) < \ep_a \tv (g)$, then $\ep_a \tv (f+g) = \ep_a \tv (g)$.
(1): We know already by Proposition \[prop11.10\] that $$\ep_a\tv (fg ) \leq \ep_a \tv (f)
\cdot \ep_a \tv (g).$$ Due to the bipotence of $M$ we have smallest indices $k$ and $\ell$ such that $$\begin{aligned} \ep_a\tv (f ) = \sum_i v(\al_i)a^i =
v(\al_k)a^k, \\
\ep_a\tv (g ) = \sum_j v(\bt_j)a^j = v(\bt_\ell)a^\ell.
\end{aligned}$$ We choose some $c \in R $ with $v(c) = a$. Since $v$ is strong and $k,\ell$ have been chosen minimally we have
$$v \bigg( \sum_{i+j = k + \ell} \al_i c^i \bt_j c^j\bigg) = v(\al_k c^k \bt_\ell c^\ell) = \ep_a \tv (f)
\cdot \ep_a \tv (g).$$ Thus $$\begin{aligned}
\ep_a \tv (fg) & = \sum_r v \bigg(
\sum_{i + j =r} \al_i \bt_j
\bigg) v(c)^r \\
& = \sum_r v \bigg(
\sum_{i + j =r} \al_i c^i \bt_j c^j
\bigg) \\ & \geq v \bigg(
\sum_{i + j = k+\ell} \al_i c^i \bt_j c^j
\bigg) \\[3 mm]
& = \ep_a \tv (f)
\cdot \ep_a \tv (g).
\end{aligned}$$ We conclude that $$\ep_a \tv (fg)
= \ep_a \tv (f)
\cdot \ep_a \tv (g).$$
$(2)$: Assume that $ \ep_a \tv (f) < \ep_a \tv (g)
$. Using the same $k,\ell$, and $c$ as before we have for all $i$ $$\begin{aligned} v(\al_ic^i) < v(\bt_\ell c^\ell), \\
v(\bt_i c^i) \leq v(\bt_\ell c^\ell). \end{aligned}$$ Now $$\ep_a \tv (f+g) = \sum_i v\((\al_i + \bt_i)c^i\),$$ and $v\((\al_i + \bt_i)c^i\) \leq v(\bt_\ell c^\ell)$ for all $i$, but $$v\((\al_\ell + \bt_\ell)c^\ell\) = v(\bt_\ell c^\ell).$$ Thus, $$\ep_a \tv (f+g) = v(\bt_\ell c^\ell )= \ep_a \tilde
v(g).$$
In particular, we could take $v$ to be the natural valuation on the field of Puiseux series with rational exponents, as used in [@Gathmann:0601322], or with real exponents as introduced above in §11.
Let us formulate the analogue of Definition \[defn4.1\] in the realm of semirings with ghosts.
\[edefn4.1\] An iq-[**supervaluation**]{} on a semiring $R$ is a map $\varphi: R\to U$ from $R$ to a ub-semiring $U$ with ghosts, satisfying the following properties. $$\begin{aligned}
{2}
&IQSV1:\ &&\varphi(0)=0,\\
&IQSV2:\ &&\varphi(1)=1,\\
&IQSV3:\ &&\forall a,b\in R: \varphi(ab)\le \varphi(a)\varphi(b),\\
&IQSV4:\ &&\forall a,b\in R: e\varphi(a+b)\le
e(\varphi(a)+\varphi(b)).\end{aligned}$$
Here again we use the ordering given by the relation $ \lmodg $. The definition works in particular for $U$ a supertropical semiring, due to Proposition \[prop:13.11\].
We are ready for the main purpose of this section.
Assume that $\varphi: R \to U$ is a surjective strong supervaluation, and $$v: R \to eU =M$$ is the strong m-valuation covered by $\vrp$. Let $a =
(a_1,\dots,a_n) \in U^n$ be given, and let $b:= (ea_1,\dots,ea_n)
\in M^n$.
1. $\vrp$ can be extended to an iq-supervaluation $\tvrp : R[{\lambda}] \to
U[{\lambda}]$ by the formula $$\tvrp \bigg (\sum_i {\alpha}_i {\lambda}^{i} \bigg) = \sum_i
\varphi({\alpha}_i){\lambda}^{i} .$$
2. $\ep_a \circ \tvrp : R[{\lambda}] \to U$ is a strong supervaluation. It covers the (strong) valuation $\ep_b \circ
\tv : R[{\lambda}] \to M$.
(i): If $a,b \in R $ then we know from §9 that $\vrp(a) + \vrp(b) \lmodg \vrp(a+b)$. This implies $\vrp(a) + \vrp(b) \lmod \vrp(a+b)$, i.e. $$\renewcommand{\theequation}{$*$}\addtocounter{equation}{-1}\label{eq:str.1}
\vrp(a+b) \leq \vrp(a) + \vrp(b).$$
An argument parallel to the one before Definition \[def:13.10\] now tells us that for $f,g \in R[{\lambda}]$ we have $$\tvrp(fg) \leq \tvrp(f) \cdot \tvrp(g).$$ Clearly $\tvrp$ extends $\vrp$, in particular $\tvrp(0) = 0$, $\tvrp(1) =1 $. From $(*)$ it is also obvious that $\tvrp(f+g)
\leq \tvrp(f) + \tvrp(g)$, hence $$e\tvrp(f+g) \leq e\tvrp(f) +
e\tvrp(g).$$ Thus, $\tvrp$ is an iq-supervaluation. Clearly $e\tvrp(f) = \tv(f)$ for all $f \in R[{\lambda}]$. {By the way, this gives us again that $e\tvrp(f + g) \leq e\tvrp(f) + e\tvrp(g)$.}
(ii): Again we restrict to the case of $n=1$ by an easy induction. It is pretty obvious that $\ep_a \tvrp: R[{\lambda}] \to U$ obeys the rules SV1, SV2, SV4 from §4 (Definition \[defn4.1\]), and $e \cdot \ep_a \tvrp(f) = \ep_b \tv(f)$ for every $f \in
R[{\lambda}]$. Given $f = \sum_i \al_i {\lambda}^i $, $g = \sum_i \bt_i {\lambda}^i $ in $R[{\lambda}]$ it remains to prove the following:
1. $\ep_a \tvrp(fg) = \ep_a \tvrp(f) \cdot \ep_a
\tvrp(g)$,
2. If $\ep_a \tvrp(f) \leq \ep_a \tvrp(g)$ then $\ep_a \tvrp(f+g) = \ep_a
\tvrp(g)$.
(1): Let $k,\ell$ be the minimal indices such that $$\renewcommand{\theequation}{$**$}\addtocounter{equation}{-1}\label{eq:str.1}
e \sum_i \vrp(\al_i) a^i = e \vrp(\al_k)a^k = e \ep_a \tvrp(f),$$ $$\renewcommand{\theequation}{$***$}\addtocounter{equation}{-1}\label{eq:str.1}
e \sum_i \vrp(\bt_i) a^i = e \vrp(\bt_\ell)a^\ell = e \ep_a
\tvrp(g),$$ (as in the proof of Theorem \[thm13.12\]). We know by Theorem \[thm13.12\] that $$e ( \ep_a \circ \tvrp)(fg) = e \vrp(\al_k) a^k \cdot e \vrp(\bt_\ell) a^\ell
= e ( \ep_a \circ \tvrp)(f) \cdot e ( \ep_a \circ \tvrp)(g).$$
We choose some $c \in R$ with $\vrp(c) = a$. Using $(*)$ we obtain $$\begin{aligned} ( \ep_a \circ \tvrp)(fg) & = \sum_r \vrp\bigg( \sum_{i+j =r} \al_i
\bt_j\bigg) a^r \\
& = \sum_r \vrp\bigg( \sum_{i+j =r} \al_i c^i \cdot
\bt_j c^j \bigg) \\
& \leq \sum_r \sum_{i+j =r} \vrp(\al_i c^i) \cdot
\vrp(\bt_j c^j) \\
& = \sum_{i,j} \vrp(\al_i) a^i \cdot
\vrp(\bt_j) a^j.
\end{aligned}$$ There is a single $\nu$-dominating term in this sum iff there is a single $ \nu$-dominating term on the left of $(**)$ and of $(***)$, so we conclude that $$\ep_a \tvrp(fg) = \ep_a \tvrp(f)
\cdot \ep_a
\tvrp(g)$$ in all cases, using the fact that tangible elements $x,y$ of $U$ with $x \leq y$, $ex= ey$ are equal.
(2): This can be proved in the way analogous to claim (2) in the proof of Theorem \[thm13.12\].
Thus, for $U$ a supertropical semiring, the evaluation map returns us from iq-supervaluations with values in $U[{\lambda}]$ to the firmer ground of supervaluations.
Looking again at Theorem \[thm10.11\] we realize now that the theorem gives pleasant examples of pairs of supervaluations which obey a “GS-relation" in the following sense.
\[def:13.4\] If $\rho:A \to V$ and $\sig:A \to V$ are supervaluations on a semiring $A$ with values in the same supertropical semiring $V$, then we say that **$\rho$ surpasses $\sig$ by ghost**, and write $\rho \lmodg \sig$, if $\rho(a) \lmodg \sig(a)$ for every $a
\in A$.
In this terminology Theorem \[thm10.11\] reads as follows:
\[thm:13.5\] Let $\vrp: R \to U$ be a strong supervaluation. Then for any $a \in R^n$ the supervaluation $
\varepsilon_{\vrp(a)} \circ \tilde \vrp: R[{\lambda}_1,\dots,{\lambda}_n] \to
U$ surpasses the supervaluation $ \vrp \circ \varepsilon_a :
R[{\lambda}_1,\dots,{\lambda}_n] \to U$ by ghost.
Of course, we should look for other examples of pairs of supervaluations $\rho:A \to V$ and $\sig:A \to V$ with $\rho
\lmodg \sig$. Here the “classical" case that $A$ is a semifield, or even a field, and $eV$ is cancellative, is perhaps not the most interesting one. Indeed, for such pairs $\rho$, $\sig$ we have $e
\rho(a) \geq e \sig(a)$ for every $a \in A$, and this forces $e
\rho(a) = e \sig(a)$ since for $a \neq 0$ also $e \rho(a^{-1})
\geq e \sig(a^{-1})$. Thus $\rho$ and $\sig$ cover the same valuation $e \rho = e \sig: A \to eV$. But for the pairs occurring in Theorem \[thm:13.5\], where $A$ is a polynomial semiring, the valuations $e\rho$ and $e \sig$ will usually even have different supports, and $\rho$ can be a very interesting “perturbation” of $\sig$ by ghosts.
The phenomenon of “surpassing by ghost” for supervaluations shows clearly the importance of studying valuations and supervaluations on semirings instead of just semifields.
[WMS]{}
N. Bourbaki, *Alg. Comm. VI*, §3, No.1.
M. Einsiedler, M. Kapranov, and D. Lind, *Non-Archimedean amoebas and tropical varieties*, J. Reine Angew. Math., 2006, 601, 139-157.
A. Gathmann, *Tropical algebraic geometry*, Jahresbericht der DMV **108** (2006), 3–32. (Preprint at arXiv:math.AG/0601322.)
R. Huber and M. Knebusch, *On valuation spectra*, Contemp. Math. **155** (1994), 167-206.
D.K. Harrison and M.A. Vitulli, *$V$-valuations of a commutative ring I*, J. Algebra **126** (1989), 264-292.
I. Itenberg, G. Mikhalkin and E. Shustin, *Tropical Algebraic Geometry*, Oberwolfach Seminars, 35, Birkhäuser Verlag, Basel, 2007.
Z. Izhakian, *Tropical arithmetic and tropical matrix algebra*, Commun. in Algebra **37**:4 (2009), 1445–1468. (Preprint at arXiv:math/0505458v2.)
Z. Izhakian and L. Rowen, *Supertropical algebra*, Advances in Math. **225** (2010), 2222–2286. (Preprint at arXiv:0806.1175.)
Z. Izhakian and L. Rowen. *Supertropical matrix algebra*. Israel J. Math., to appear. (Preprint at arXiv:0806.1178, 2008.)
Z. Izhakian and L. Rowen. *Supertropical matrix algebra II: Solving tropical equations*, Israel J. Math., to appear. (Preprint at arXiv:0902.2159, 2009.)
Z. Izhakian, M. Knebusch and L. Rowen, *Supertropical semirings and supervaluations II: Dominance and transmissions* , in preparation.
M. Knebusch and D. Zhang, *Manis Valuations and Prüfer Extensions. I. A New Chapter in Commutative Algebra*, Lecture Notes in Mathematics, 1791, Springer-Verlag, Berlin, 2002.
M. Knebusch and D. Zhang, *Convexity, valuations, and Prüfer extensions in real algebra*, Doc. Math. **10** (2005), 1-109.
S. MacLane, *Categories for the Working Mathematician*, 4th ed., Springer Verlag, 1988.
G. Mikhalkin, *Introduction to tropical geometry* (notes from the IMPA lectures in summer 2007). (Preprint at arXiv:0709.1049, 2007.)
B. Mitchell, *Theory of Categories*, Academic Press, 1965.
S. Payne, *Fibers of tropicalizations*, Preprint at arXiv:0705.1732v2 \[math.AG\], 2008.

S. Payne, *Analytification is the limit of all tropicalizations*, Preprint at arXiv:0806.1916v3 \[math.AG\], 2009.
D. Speyer and B. Sturmfels, *Tropical mathematics*, Math. Mag. **82** (2009), 163–173. (Preprint at arXiv:math.CO/0408099.)
D. Zhang, *The $M$-valuation spectrum of a commutative ring*, Commun. Algebra **30** (2002), 2883–2896.
[^1]: This research of the first and third authors is supported by the Israel Science Foundation (grant No. 448/09).
[^2]: This research of the second author was supported in part by the Gelbart Institute at Bar-Ilan University, the Minerva Foundation at Tel-Aviv University, the Department of Mathematics of Bar-Ilan University, and the Emmy Noether Institute at Bar-Ilan University.
[^3]: This paper was completed under the auspices of the Research in Pairs program of the Mathematisches Forschungsinstitut Oberwolfach, Germany.
[^4]: The element $0$ may be regarded both as tangible and ghost.
[^5]: More precisely we should consider equivalence classes of supervaluations. We suppress this point here.
[^6]: This is more than equivalent!
[^7]: Alternatively consult [@IKR §3] (as soon as available), where a detailed proof of a more general statement is given.
[^8]: = partially ordered set
[^9]: If $R$ is a field, then $\langle 1 + \mm_v\rangle = 1 + \mm_v$.
[^10]: For the matter of geometric applications, one usually needs $F$ to be algebraically closed, but here we can omit this restriction.
[^11]: All inequalities in the following will refer to this ordering.
---
title: |
Deformation of singular connections I:\
$G_{2}-$instantons with point singularities
---
Introduction
============
The celebrated work of Donaldson-Thomas [@DonaldsonThomas] has inspired extensive studies on special holonomy. On a seven-dimensional manifold $M$ with a $G_{2}-$structure $\phi$, for a vector bundle $E\rightarrow M$, it is suggested in [@DonaldsonThomas] to consider the $G_{2}-$instanton equation for connections on $E$:
$$\label{equ instanton equation for A}
F_{A}\wedge \psi=0,$$
where $\psi$ is the coassociative form uniquely determined by $\phi$, and $F_{A}$ is the curvature form of the connection $A$. As pointed out in [@DonaldsonThomas], the $G_{2}-$instantons should form a basis for a Casson-Floer-type theory for $7-$manifolds. By adding the torsion-free $G_{2}-$structures $\phi\ (\psi)$ as a “parameter” in (\[equ instanton equation for A\]), a very natural moduli space is the set of solutions $(\phi,A)$ to (\[equ instanton equation for A\]) (modulo gauge and some other natural equivalence). According to the seminal paper of Tian [@Tian], it is expected that instantons with point singularities should appear in the natural compactification. Therefore, a fundamental step in understanding this moduli space (and any related one) is to study the following question.
*Given a $G_{2}-$instanton with point singularities, can we still see the instanton and the singularity for nearby $G_{2}-$structures?*
The best story we can expect is that the singularity disappears by perturbing the $G_{2}-$structure. For mean curvature flows, by the work of Colding-Minicozzi [@Colding], this indeed happens: any flow which develops a singularity as a “shrinking donut” (Angenent [@Angenent]) can be perturbed away. However, our following main result shows that this does not happen very often for $G_{2}-$instantons.
\[Thm Deforming instanton simple version\] Let $E$ be an admissible bundle defined away from finitely many points ($O_{j}$) on a $7-$manifold with a $G_{2}-$structure. Suppose $E$ admits an admissible $G_{2}-$instanton with trivial co-kernel. Then for any small enough admissible deformation of the $G_{2}-$structure, there exists a $G_{2}-$monopole with the same tangent connection at each $O_{j}$.
The above statement is not the most precise, but we hope it is easy to understand. **The most precise version of Theorem \[Thm Deforming instanton simple version\] is Theorem \[Thm Deforming instanton\], to which we strongly recommend the readers to pay attention**. We will only prove Theorem \[Thm Deforming instanton\].
In the rest of the article we call the points where $E$ is undefined **singular points**. Please notice that **our definition also includes the case when the bundle is smooth across some of (or all) the singularities, and in this case we allow both singular and smooth connections**. Nevertheless, we are more interested in the case when the bundle (connection) is truly singular. When the bundle and connection are both smooth across a singular point, our **local right inverse** of the linearised operator is still different from the standard one (see [@Gilkey], [@Lawson]).
When the $G_{2}-$form is co-closed, any $G_{2}-$monopole (\[equ instantonequation without cokernel\]) on a closed manifold is an instanton. However, a locally defined one might not be.
\[rmk on Tian work\] Theorem 1 of Yang [@Yang] and Lemma \[lem homotopy\] suggest that, via a bundle isomorphism and a smooth gauge away from the singularities, any $G_{2}-$ instanton (on a singular bundle) with quadratic curvature blowing-up at each singular point can be reduced to the case in Theorem \[Thm Deforming instanton\]. The work of Tian [@Tian] indicates that the tangent connections at the singularities are the cone connections (bundles) on $\R^{7}\setminus O$, pulled back from a smooth Hermitian-Yang-Mills connection over $S^{6}$ (with respect to the standard nearly-Kähler structure) via the spherical projection (Remark \[rmk homotopy property and def of r\]). The work of Charbonneau-Harland [@Harland] indicates that the deformation of these Hermitian connections can be identified with a subspace of the kernel of a Dirac operator.
We expect the co-kernel to be trivial for most singular instantons i.e. we have transversality in most cases. This is reasonable at least when the inner product is unweighted: the instanton constructed by Walpuski [@Walpuski] is rigid, and the self-adjointness implies the co-kernel is trivial.
The key to the deformation problem is a **Fredholm-theory** for the linearised operator (\[equ introduction formula for deformation operator\]). It is a Dirac operator, i.e. its square is a Laplacian. In the model case, **though we cannot do separation of variables for the deformation operator itself, we can do it for the Laplacian**. As for the standard Laplacian in polar coordinates, we have a **polar coordinate formula for the Laplacian of any cone connection** (Lemma \[lem cone formula for laplacian\]). Using the Galerkin method (see 7.1.2 in [@Evans]), we can construct a **local inverse for the Laplacian**, and this gives a **local inverse for the deformation operator** between the desired weighted-Sobolev spaces.
To handle the non-linearity of the instanton equation (or to preserve the tangent cone), a theory of Sobolev-spaces is not sufficient. The spaces should satisfy some multiplicative properties. Therefore, it should be helpful to turn to the a priori Schauder-estimates of Douglis-Nirenberg (Theorem 1 in [@Nirenberg]). Nevertheless, the essential difficulty is the $C^{0}-$estimate.
Our crucial observation is that the **$W^{1,2}_{p,b}-$estimate for sufficiently negative $p$ yields the $C^{0}-$estimate**. Moreover, to handle the non-linearity of (\[equ instanton equation for A\]), it suffices to consider a hybrid space consisting of a weighted-$C^{2,\alpha}$ space and the weighted Sobolev-space (with norm the sum of the two). The global version of our main analytic theorem is:
\[Thm Fredholm\] Let $E\rightarrow M$ be the same as in Theorem \[Thm Deforming instanton\]. Suppose $A$ is an admissible connection of order $4$. Then for any $A-$generic negative $p$ (Definition \[Def A generic\]) and $b\geq 0$, $L_{A}$ (see (\[equ introduction formula for deformation operator\])) is $(p,b)$-Fredholm (Definition \[Def Fredholm operators and isomorphisms\]) from $W^{1,2}_{p,b}$ to $L^{2}_{p,b}$ (weighted Sobolev-spaces in Definition \[Def global weight and Sobolev spaces\]). $L_{A}$ is also $(p,b)$-Fredholm from $H_{p,b}$ to $N_{p,b}$ (Hybrid spaces in Definition \[Def Hybrid spaces\]).
**Our Fredholm-theory works for a much larger class of operators, as long as the model operator is of cone-type and admits separation of variables with respect to some operator on the link**. In particular, it works for the Laplace-type operators (Theorem \[thm W22 estimate on 1-forms\]). **We only assume discreteness and some natural asymptotics of the spectrum of the operator on the link**. When the eigenfunctions are explicitly known, one might have a summation formula for the heat kernel and Green’s function, thus more information can be extracted (Theorem 4.3 and 1.13 in [@Myself2013]).
It does not follow directly from the definition that the kernel of the formal adjoint is finite-dimensional, nor that it equals the co-kernel. Nevertheless, a trick (in Lemma \[lem boostrap Kstar\]) ensures that **we can decrease the blowing-up rate of the co-kernel a little bit with respect to the spectrum gaps**. This not only gives us an interesting PDE-result, but also implies **the co-kernel is precisely the kernel of the formal adjoint (Theorem \[thm characterizing cokernel\])**.
We can’t have optimal Sobolev-estimates unless the weight is properly chosen. Roughly speaking, $A-$generic means the weight $p$ of our Fredholm-theory avoids some discrete values determined by the spectrum of the tangential operators. This phenomenon, in other settings, is well understood (see [@Donaldson], [@Degeratu], [@Lockhart]).
Usually the weight in the Schauder-estimate is required to have non-negative power (Lemma 3 in [@Nirenberg]). Thus, to obtain Fredholmness for every negative $p$, we should use a different norm (see (\[equ Def local Schauder norm 1\])) when the power is negative, and adopt a trick in [@GilbargTrudinger] to avoid global interpolations (the use of (\[equ C20 estimate in the apriori Schauder\])).
**For Theorem \[Thm Deforming instanton\] (Theorem \[Thm Deforming instanton simple version\]), we only need the hybrid theory when $p\in (-\frac{5}{2},-\frac{3}{2})$ and $b=0$**. The most important usage is to handle the first iteration (\[equ first iteration\]). Nevertheless, a theory for all $p$ and $b$ is useful for other applications.
The local version of our deformation theory is stated as follows.
\[Thm Deforming local instanton\] In the setting of Remark \[rmk on Tian work\], for any smooth $SO(m)-$bundle $E\rightarrow S^{6}$ equipped with a smooth Hermitian Yang-Mills connection $A_{O}$, there is a $\delta_{0}>0$, such that for any admissible $\delta_{0}$-deformation $(\underline{\phi}, \underline{\psi})$ over $B_{O}(\frac{1}{2})\subset \R^{7}$ of the Euclidean $G_{2}-$forms (\[eqnarray Euc G2 forms\]), there exists a $G_{2}-$monopole of $\underline{\psi}$ (\[equ instantonequation without cokernel\]) over $B_{O}(\frac{1}{4})$ tangent to $A_{O}$ at the origin $O$.
Currently there is only one Hermitian Yang-Mills connection known over $S^{6}$: the canonical connection (Example 2.2 in [@Xu]). **Theorem \[Thm Deforming local instanton\] produces concrete local examples of singular $G_{2}-$monopoles tangent to the canonical connection for almost Euclidean $G_{2}-$structures.**
Historically, the Fredholm-problem of elliptic operators has been extensively studied. The work most closely related to the present article is that of Lockhart-McOwen [@Lockhart]. They proved that, over non-compact manifolds, a large class of operators are Fredholm between proper weighted Sobolev-spaces. Melrose-Mendoza [@Melrose] also obtained similar results in the $W^{k,2}-$setting, generalized to pseudo-differential operators. Our hybrid-spaces, though not the most general, are sufficient for this study and are **specially designed for singular connections**.
Very recently the author learned from Thomas Walpuski that, using a cylindrical method for the deformation operator, and the theory in [@Lockhart], he could also obtain a local inverse between weighted Schauder-spaces for cones. This local Schauder-estimate is well illustrated in Section 2.1 of [@HaskinsHein]. The author also learned from Goncalo Oliveira that in [@Goncalo], he obtained $G_{2}-$monopoles with a different kind of singularities.
Regarding $G_{2}-$instantons and monopoles, related work has been conducted by Walpuski [@Walpuski], Sa Earp-Walpuski [@SaEarp], and Oliveira ([@Goncalo1], [@Goncalo]). On monopoles in other settings, see the work of Foscolo [@LorenzoFoscolo] and Oliveira ([@Goncalo2],[@Goncalo3]). In the metric setting, the most closely related research is done by Joyce [@Joyce] (ALE spaces), Degeratu-Mazzeo [@Degeratu] (Quasi-ALE spaces), Mazzeo [@Mazzeo] (edge-operators), Donaldson [@Donaldson11] (conic Kähler), the author-Chen [@Myself2013] (parabolic conic Kähler), and Akutagawa-Carron-Mazzeo [@ACM] (the Yamabe problem on singular spaces). The author believes the above list is not complete, and refers the readers to the references therein.
Omitting a number of necessary intermediate results, the following diagram shows the important steps to prove the main theorems.
The diagram (Figure 1) reduces to the following implications, where “T” stands for Theorem, “P” for Proposition, “L” for Lemma, “C” for Corollary, and an arrow means “implies”:

- L\[lem formula for LA squared\], P\[prop seperation of variable for general cone\], and T\[thm existence of good solutions to the uniform ODE with estimates\] imply T\[thm W22 estimate on 1-forms\] and C\[Cor solving model laplacian equation over the ball without the compact support RHS condition\];
- T\[thm W22 estimate on 1-forms\], together with L\[lem bound on C3 norm of solution to laplace equation when f is smooth and vainishes near O\] and Claim \[clm local regularity L12 for global apriori estimate\], implies T\[thm global apriori L22 estimate\];
- T\[thm W22 estimate on 1-forms\] and T\[thm existence of good solutions to the uniform ODE with estimates\] imply T\[thm characterizing cokernel\];
- T\[thm global apriori L22 estimate\], T\[thm C0 est\], L\[lem compact imbedding\], and T\[thm W22 estimate on 1-forms\] imply T\[Thm Fredholm\];
- T\[thm C0 est\] and T\[thm W22 estimate on 1-forms\] imply T\[Thm Deforming local instanton\];
- T\[Thm Fredholm\] implies T\[Thm Deforming instanton\].
This article is organized as follows: **most of the notions and symbols are defined in Section \[section Setting up the analysis and notations\]**. In Section \[section Seperation of variable for the system in the model case\], we do separation of variables, and reduce the “squared” model linearized equation to ODEs. In Section \[section Solutions to the ODEs on the Fourier-coefficients\], we solve these ODEs. In Section \[section Local solutions for the model cone connection\], we establish the optimal local Sobolev-theory. In Sections \[section Global apriori estimate\], \[section Hybrid space and C0-estimat\], and \[section Global Fredholm and Schauder Theory\], we establish the global Fredholm theory of Sobolev and Hybrid spaces. In Section \[section Perturbation\], we prove the main geometric theorems. In Section \[section Characterizing the cokernel\], we prove the PDE result and characterize the co-kernel.
**Acknowledgements**: The author would like to thank Professor Simon Donaldson for suggesting this problem to work on, and for numerous inspiring conversations. The author is grateful to Song Sun and Thomas Walpuski for many valuable discussions, and for careful reading of the previous versions of this article. The author is grateful to Alex Waldron, Lorenzo Foscolo, Gao Chen, and Professor Xianzhe Dai for valuable discussions.
Definitions and Setting \[section Setting up the analysis and notations\]
=========================================================================
We work under the setting of Theorem \[Thm Deforming instanton\]. By a bundle, we mean an open cover and associated overlap functions. Two bundles with different overlap functions are considered to be different, even when they are isomorphic. **The definitions in this section are all routine and natural; a reader familiar with related material such as [@GilbargTrudinger], [@Nirenberg], [@Donaldson] can skip this section and come back if necessary**.
\[Def the bundle xi\] A smooth $SO(m)-$bundle $E\rightarrow M\setminus (\cup_{j}O_{j})$ is said to be an **admissible bundle** if
- $E$ is defined by an admissible cover $U_{\rho_{0}}$ (Definition \[Def admissable open cover\]) for some $\rho_{0}>0$,
- for each singular point $O_{j}$, the overlap function between $V_{+,O_{j}}$ and $V_{-,O_{j}}$ does not depend on $r$ (see Remark \[rmk homotopy property and def of r\]) i.e. the overlap function is pulled back from the sphere.
Let $\Xi$ denote $\Omega^{0}(ad E)\oplus \Omega^{1}(ad E)$ ($adE$-valued $0-$forms and $1-$forms), and the corresponding bundle over $S^{n-1}$ as in Section \[section Seperation of variable for the system in the model case\]. **All the analysis in this article is on sections of** $\Xi$, over $M$ or various domains. **We omit $\Xi$ in the notations of the section spaces in Definition \[Def abbreviation of notations for spaces\]**. All the definitions and discussions below apply to $\Xi$ as well. When $E$ is a complex bundle, we require it to be a $U(\frac{m}{2})$-bundle, and we still view it as a real bundle.
\[Def admissable open cover\] (Admissible open cover). Given a (reference) open cover of $M$ and a (reference) coordinate system, a refinement (with the same coordinate maps) denoted as $\mathbb{U}_{\tau_{0}}=\{B_{l},B_{O_{j}} (V_{+,O_{j}},V_{-,O_{j}}),\ l,j \ \textrm{are integers with finite range}\}$ is called a $\tau_{0}-$admissible cover if the following conditions are satisfied.
1. Each $B_{l}$ is in the smooth part of $E$, the ball $100B_{l}$ (concentric and of radius 100 times larger) is still away from the singularities and is contained in a coordinate chart. This is different from saying that $B_{l}$ is a metric ball in the manifold. In this article, by abuse of notation, $B_{l}$ means both the ball in the chart and the open set in the manifold (it should be clear from the specific context which notion we mean).
2. Each $B_{O_{j}}$ is centred at a singular point of $E$ with radius $\tau_{0}$, and contains no other singular point. $O_{j}$ corresponds to the origin in the chart. Each $100B_{O_{j}}$ is still a ball in a coordinate chart, and these balls are disjoint from each other. Moreover, in this coordinate $\phi(O_{j})$ is the standard $G_{2}-$form.
3. $\frac{B_{l}}{100}$ and $\frac{B_{O_{j}}}{100}$ still form a cover of $M$.
When $\tau_{0}$ is small enough with respect to $M$ and $E$, this cover always exists if one adds enough balls of small radius.
The letter $O$ always means a singular point among the $O_{j}$’s, and also the origin in the coordinate (by abuse of notation). We denote it as “$B_{O}(\rho)$” when we want a ball with radius $\rho$. The symbols “$B_{O}$” (“$B_{O_{j}}$”) without radius usually mean one of the balls in $\mathbb{U}_{\tau_{0}}$ defined above.
Let $M_{\tau}$ denote $M\setminus \cup_{j}B_{O_{j}}(\tau)$ (the part far away from the singularities).
\[rmk in practice, we usually consider normal coordinate\] In practice, we usually choose the coordinates as the normal coordinates of the underlying Riemannian metric, though our definition allows any smooth coordinate. In [@Tian], the existence of the tangent cone connection (near the singularities) is proved in normal coordinates.
Let $B_{O}(1)$ denote the unit ball in $\R^{n}$ centred at the origin. Since $B_{O}(1)$ admits a natural smooth deformation retraction onto $S^{n-1}\times \{\frac{1}{2}\}$, the well known homotopy property (Theorem 6.8 and the last paragraph in page 58 of [@BottTu]) of bundles gives the following lemma.
\[lem homotopy\] Any smooth $SO(m)-$bundle $\widehat{E}\rightarrow M\setminus (\cup_{j}O_{j})$ defined by a locally finite cover is isomorphic to an admissible bundle $E$ in Definition \[Def the bundle xi\]. The isomorphism covers the identity map from $M\setminus (\cup_{j}O_{j})$ to itself.
\[rmk homotopy property and def of r\] Near each $O_{j}$, for some $\tau_{0}>0$, the smooth isomorphism (away from $O_{j}$) is the one in Theorem 6.8 of [@BottTu], with respect to the natural homotopy deforming the identity map $id$ of $B_{O_{j}}(\tau_{0})$ to the map $g\circ f$: $$B_{O_{j}}(\tau_{0})\simeq S^{n-1}\times (0,\tau_{0})\ \overset{g}{\underset{f}{\rightleftarrows}}\ S^{n-1}\times \{\frac{\tau_{0}}{2}\},$$ where $g$ is the **spherical projection** $(x,t)\rightarrow (x,\frac{\tau_{0}}{2})$, and $f$ is the identity inclusion. Let $r$ (sometimes $r_{x}$) denote the Euclidean distance to the singular set $\{O_{1},...,O_{m_{0}}\}$ in the reference coordinate chart.
\[rmk relation between Euc and Spherical norm\] Any bundle-valued $k-$form $\xi$ without $dr-$component (defined over $\R^{n}\setminus O$) can be viewed as an $r-$dependent bundle-valued $k-$form over $S^{n-1}(1)$. Let $|\xi|$ denote the usual Euclidean norm of $\xi$ (as a form over $\R^{n}\setminus O$), and $|\xi|_{S}$ denote the norm on the unit sphere with respect to the standard round metric (as a spherical form). The relation is $$|\xi|^{2}=\frac{1}{r^{2k}}|\xi|_{S}^{2},\ \textrm{for any}\ \xi.$$
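For the reader’s convenience, here is the elementary verification of this scaling relation. The cone metric is $g_{Euc}=dr^{2}+r^{2}g_{S}$, so if $\{e^{1},...,e^{n-1}\}$ is a $g_{S}-$orthonormal coframe on $S^{n-1}$, then $\{dr, re^{1},...,re^{n-1}\}$ is $g_{Euc}-$orthonormal. Expanding a $k-$form without $dr-$component in both coframes, $$\xi=\Sigma_{|I|=k}\xi_{I}e^{I}=\Sigma_{|I|=k}\frac{\xi_{I}}{r^{k}}(re^{i_{1}})\wedge ...\wedge (re^{i_{k}}),\ \textrm{hence}\ |\xi|^{2}=\frac{1}{r^{2k}}\Sigma_{|I|=k}|\xi_{I}|^{2}=\frac{1}{r^{2k}}|\xi|_{S}^{2}.$$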
\[Def Admissable connection with polynomial or exponential convergence\](Admissible connections) Given a smooth bundle $E\rightarrow M$ with finitely many singular points, and a smooth $G_{2}-$structure $(\phi,\psi)$ over $M$, a connection $A$ of $E$ is called an admissible connection of order $k_{0}$ if it satisfies the following conditions.
- $A$ is smooth away from the $O_{j}$’s.
- There exists a $\mu_{1}>0$, such that for any $O$ among the $O_{j}$’s, there is a smooth connection $A_{O}$ on $E\rightarrow S^{n-1}$ such that the following holds in the reference coordinate chart. $$\label{equ condition on rate of convergence for an admissable instanton}
\Sigma_{j=0}^{k_{0}}r^{j+1}|\nabla^{j}_{A_{O}}(A-A_{O})|\leq C(-\log r)^{-\mu_{1}},$$ where we view $A_{O}$ as the pulled-back connection over $\R^{7}\setminus O$.
To quantify the rate of convergence, $A$ is also said to be **of polynomial rate $\mu_{1}$ at $O_{j}$** (we omit the $O_{j}$ if the rate holds at every singular point).
Suppose for some constant $C$, $A$ satisfies (\[equ condition on rate of convergence for an admissable instanton\]) with the right hand side replaced by $Cr^{\mu_{0}}$ (at $O_{j}$), $\mu_{0}>0$; then $A$ is said to be **of exponential rate $\mu_{0}$ at $O_{j}$**.
When $A$ is admissible and satisfies the instanton equation away from the singularities, we call it an **admissible instanton**. In practice, the coordinates near the singularities are normal coordinates of $g_{\phi}$ (see Remark \[rmk in practice, we usually consider normal coordinate\]).
\[Def condition SAp\] A connection $\underline{A}$ is said to satisfy **Condition ${\circledS_{A,p}}$** if the following holds with respect to the reference instanton $A$.
- $\underline{A} $ is an admissible connection of order $3$.
- $\underline{A} $ is close to $A$ in $H_{p}$ (Definition \[Def Hybrid spaces\] and \[Def abbreviation of notations for spaces\]). Consequently,
- the tangent connection of $\underline{A}$ at each $O_{j}$ is the same as that of $A$;
- $\underline{A}$ has the same polynomial rate as $A$ at each $O_{j}$. Moreover, if $A$ has exponential rate $\mu_{0}>0$ at $O_{j}$, then $\underline{A}$ has exponential rate $\min\{\mu_{0}, -\frac{3}{2}-p\}$ at the same point.
Near any singular point $O$ (among the $O_{j}$’s), the bundle $E$ is trivialized by 2 coordinate patches $U_{+},U_{-}$ of $S^{n-1}$, and we choose the cover of $B_{O}$ as $V_{+,O} (V_{-,O})=U_{+}(U_{-})\times [0,\tau_{0}]$. In these coordinates, we can easily define the weighted Schauder norms for sections of $\Xi$ without involving any connection.
\[Def local Schauder norms\] As in Definition 2.4 of [@Myself2013], we don’t even need a connection to define the Schauder norms. Let $r_{x,y}=\min\{r_{x},r_{y}\}$, $\underline{r_{x,y}}=\max\{r_{x},r_{y}\}$. Near a singular point $O$, let $\Gamma$ be a locally defined matrix-valued tensor in a coordinate chart of $\Xi$ (Definition \[Def admissable open cover\]), we define the following. $$\label{equ Def local Schauder norm 1}
[\Gamma]^{(\mu,b)}_{\alpha, \mathfrak{U}}= \left\{ \begin{array}{cc}\sup_{x,y\in \mathfrak{U}}(-\log \underline{r_{x,y}})^{b}r^{\mu+\alpha}_{x,y}\frac{|\Gamma(x)-\Gamma(y)|}{|x-y|^{\alpha}},& \textrm{when}\ \mu+\alpha\geq 0 \\
\sup_{x,y\in \mathfrak{U}}(-\log \underline{r_{x,y}})^{b}(\underline{r_{x,y}})^{\mu+\alpha}\frac{|\Gamma(x)-\Gamma(y)|}{|x-y|^{\alpha}},& \textrm{when}\ \mu+\alpha< 0.
\end{array}\right.$$ $$\label{equ Def local Schauder norm 2}
[\Gamma]^{(\mu,b)}_{0, \mathfrak{U}}= \sup_{x\in \mathfrak{U}}(-\log r_{x})^{b}r^{\mu}_{x}|\Gamma(x)|.$$ The idea of (\[equ Def local Schauder norm 1\]) is to choose the weight function “as small as possible”. Note that we allow $\mu$ to be any real number, while in Lemma 3 in [@Nirenberg], the power is required to be non-negative. We usually let $\mathfrak{U}$ be $V_{+,O}$ ($V_{-,O}$) or a ball contained therein. We then define $$\label{equ def top order seminorm in the sector}
|\xi|^{(\gamma,b)}_{2,\alpha, V_{+,O}}\triangleq [\nabla^{2}\xi]^{(2+\gamma,b)}_{\alpha,V_{+,O}}+|\nabla^{2}\xi|^{(2+\gamma,b)}_{0,V_{+,O}}+|\nabla\xi|^{(1+\gamma,b)}_{0,V_{+,O}}+|\xi|^{(\gamma,b)}_{0,V_{+,O}}.$$ where the $\nabla$ is just the usual gradient in Euclidean coordinates. Moreover, by abuse of notation (which we adopt through out this article in this case), the “$\xi$” in (\[equ def top order seminorm in the sector\]) means the multi-matrix-valued function in $V_{+,O}$ representing $\xi$. $|\xi|^{(\gamma,b)}_{2,\alpha, V_{-,O}}$ is defined in the same way throughout this article, so does $|\xi|^{(\gamma,b)}_{2,\alpha, B}$ for any ball $B\subset V_{+,O}$ or $V_{-,O}$.
\[Def Global Schauder norms\](Global Schauder norms) In the same context as Definition \[Def admissable open cover\], let $\rho_{0}>0$ be independent of $A$ such that there exists a $\rho_{0}-$admissible cover $\mathbb{U}_{\rho_{0}}$. We define $$\label{equ norm I}|\xi|^{(\gamma,b)}_{2,\alpha,M,I}\triangleq \sup_{B_{l}\in \mathbb{U}_{\rho_{0}}}|\xi|_{2,\alpha,B_{l}}+ \sup_{B_{O_{j}}\in \mathbb{U}_{\rho_{0}}}|\xi|^{(\gamma,b)}_{2,\alpha,V_{+,O_{j}}}+\sup_{B_{O_{j}}\in \mathbb{U}_{\rho_{0}}}|\xi|^{(\gamma,b)}_{2,\alpha,V_{-,O_{j}}}.$$
The $|\xi|_{2,\alpha,B_{l}}$ are the unweighted Schauder norms defined in (4.5),(4.6) in [@GilbargTrudinger]. Actually we have 2 other ways to define the Schauder norms. One is by using the smaller cover: $$\label{equ norm II}
|\xi|^{(\gamma,b)}_{2,\alpha,M,II}\triangleq \sup_{B_{l}\in \mathbb{U}_{\rho_{0}}}|\xi|_{2,\alpha,\frac{B_{l}}{100}}+ \sup_{B_{O_{j}}\in \mathbb{U}_{\rho_{0}}}|\xi|^{(\gamma,b)}_{2,\alpha,\frac{V_{+,O_{j}}}{100}}+\sup_{B_{O_{j}}\in \mathbb{U}_{\rho_{0}}}|\xi|^{(\gamma,b)}_{2,\alpha,\frac{V_{-,O_{j}}}{100}}.$$ The third definition is by using the naturally weighted Schauder norms in (4.17) of [@GilbargTrudinger] (away from the singularity): $$\label{equ norm III}
|\xi|^{(\gamma,b)}_{2,\alpha,M,III}\triangleq \sup_{B_{l}\in \mathbb{U}_{\rho_{0}}}|\xi|^{\star}_{2,\alpha,\frac{B_{l}}{50}}+ \sup_{B_{O_{j}}\in \mathbb{U}_{\rho_{0}}}|\xi|^{(\gamma,b)}_{2,\alpha,\frac{V_{+,O_{j}}}{50}}+\sup_{B_{O_{j}}\in \mathbb{U}_{\rho_{0}}}|\xi|^{(\gamma,b)}_{2,\alpha,\frac{V_{-,O_{j}}}{50}}.$$
An easy but important lemma is
The 3 norms in (\[equ norm I\]), (\[equ norm II\]), (\[equ norm III\]) are equivalent.
This is an easy exercise from the definitions. For the reader’s convenience, we still point out the crucial detail. Obviously norm I is stronger than norm III, and norm III is stronger than norm II. We only need to show norm II is stronger than norm I. This is because of the last item in Definition \[Def admissable open cover\]: $V_{+,O_{j}}\setminus \frac{V_{+,O_{j}}}{100}$ is covered by the $\frac{B_{l}}{100}'s$. Since the transition functions are smooth, the Schauder norm of $\xi$ over $V_{+,O_{j}}\setminus \frac{V_{+,O_{j}}}{100}$ is controlled by the supremum of the Schauder norms on the $\frac{B_{l}}{100}'s$.
The same holds for $V_{-,O_{j}}\setminus \frac{V_{-,O_{j}}}{100}$ and $B_{l}\setminus \frac{B_{l}}{100}$ away from the singularities.
\[Def Schauder spaces\] The **weighted Schauder-space** $C^{k,\alpha}_{(\gamma,b)}(M)$ consists of sections with the norm (\[equ norm I\]) being finite. This notation also applies to any domain.
\[Def local Schauder adapted to local perturbation\] For the local perturbation in Theorem \[Thm Deforming local instanton\], on $B_{O}(R)$, we need a Schauder space whose weights near $O$ and $\partial B_{O}(R)$ are different. To be precise, we define the space $C^{k,\alpha}_{\{\gamma\},t}[B_{O}(R)]$ by the norm $$\begin{aligned}
|\xi|^{\{\gamma\},t}_{k,\alpha,B_{O}(R)}
&\triangleq & \Sigma_{j=0}^{k}\sup_{x\in V_{+,O}(R)}\min\{r_{x}^{\gamma+j},(R-r_{x})^{t+j}\}|\nabla^{j}\xi|(x)
\\& & + \sup_{x,y\in V_{+,O}(R)}\min\{r_{x}^{\gamma+k+\alpha},(R-r_{x})^{t+k+\alpha}\}\frac{|\nabla^{k} \xi(x)-\nabla^{k} \xi(y)|}{|x-y|^{\alpha}}\\& &+\ \textrm{the same in}\ V_{-,O}(R).\end{aligned}$$
\[Def deformation of the G2 structure\](Admissible $\delta_{0}-$deformations of $G_{2}-$structures) A $G_{2}-$structure $(\underline{\phi},\underline{\psi})$ is called an admissible $\delta_{0}-$deformation of $\phi$ if
- $\underline{\phi}$ is smooth and $$\label{equ def of admissable deformations of the G2 structure}
|\underline{\phi}-\phi|_{C^{5}(M)}\leq \delta_{0}.$$ where the $C^{5}(M)-$norm is defined by the base $G_{2}-$structure $\phi$;
- $\underline{\phi}=\phi$ at each $O_{j}$.
Then we automatically have $$|\underline{\phi}-\phi|(x)\leq Cr_{x}$$ when $x$ is close to the singularities.
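The above linear bound is the usual mean-value estimate: since $(\underline{\phi}-\phi)(O_{j})=0$ and $|\nabla (\underline{\phi}-\phi)|\leq \delta_{0}$ by (\ref{equ def of admissable deformations of the G2 structure}), integrating along the radial segment from $O_{j}$ to $x$ gives $$|\underline{\phi}-\phi|(x)\leq |\underline{\phi}-\phi|(O_{j})+r_{x}\sup_{B_{O_{j}}}|\nabla (\underline{\phi}-\phi)|\leq Cr_{x}.$$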
Since $\underline{\phi}$ determines a smooth metric $g_{\underline{\phi}}$ and a smooth co-associative form $\underline{\psi}$ (see [@SalamonWalpuski] and [@Bryant]), we also obtain a small deformation of the base form $\psi$ such that $$|\underline{\psi}-\psi|_{C^{5}(M)}\leq C\delta_{0}.$$
Note that we don’t require $\underline{\phi}$ to be closed, but when we want an instanton, we have to assume it’s co-closed i.e. $\underline{\psi}$ is closed.
\[Def Dependence of the constants\] (General constants) *The background data in this article is the dimension $n$ (in most cases it’s 7), the manifold $M$ and bundle $E$ ($\Xi$) with a fixed coordinate system, the $p,b$ in the weights, the reference $G_{2}-$structures $\phi$ and $\psi$, the tangent cone connections $A_{O_{j}}$ (and the bundle $E$ ($\Xi$) on the sphere), the Hölder-exponent $\alpha$, and the base connection $A$. Without further specification, the constants “$C$”, $\delta_{0}$, $\epsilon_{0}$, $\mu_{1}$, $\vartheta_{1}$... in each estimate mean constants depending (at most) on the above data. We add sub-letters to the “$C$” when it depends on more data than the above, or when we want to emphasize the dependence on some specific factor. The “$C$’s” in different places might be different. The $\delta_{0}$, $\epsilon_{0}$, $\mu_{1}$, etc. are usually small enough with respect to the above data. There are some auxiliary small numbers like $\epsilon$, $\delta$, which we usually let tend to $0$.*
When a bound depends only on the above data, we say it’s uniform.
\[Def special constants\] (Special constants) *For any $O$ among the singular points, we let $\bar{C}$ denote any constant depending **only** on the weights $p,b$, and the underlying cone connection $A_{O}$ ( and the sub-symbol if there is any).*
In particular, these $\bar{C}$’s do not depend on the radius of the underlying balls, so our requirements are fulfilled. They mainly appear in Section \[section Solutions to the ODEs on the Fourier-coefficients\] and \[section Local solutions for the model cone connection\].
\[Def Fredholm operators and isomorphisms\]( $(p,b)-$Fredholm operators and isomorphisms) In the space of $L_{loc}^{2}$-sections of $\Xi\rightarrow M\setminus \cup_{j}O_{j}$, consider the inner product given by the weighted space $L^{2}_{p,b}$. As in page 49 of [@Donaldson], let $H$ and $N$ be Banach spaces of sections of the bundle $\Xi$, and let $L$ be a bounded linear operator $H\rightarrow N$. $L$ is called a $(p,b)-$Fredholm operator if the following conditions are satisfied.
- Both $H$ and $N$ are subspaces of $L^{2}_{p,b}$. Let $\perp$ be the orthogonal complement with respect to the $L^{2}_{p,b}$-inner product.
- $Image L$ is closed in $N$. Both $KerL$ and $coker L=N/ Image L$ are finite dimensional.
- $Coker L$ is isomorphic to $Image^{\perp} L\cap N$, and under this isomorphism, $N$ admits a direct-sum decomposition $$N=Image L\oplus_{p,b} coker L,$$ where $\oplus_{p,b}$ is orthogonal with respect to $L^{2}_{p,b}$.
- $L:\ Ker^{\perp}L\cap H \rightarrow Image L$ is an isomorphism (under the norms of $H$ and $N$). The “isomorphism” means $L$ is bijective (restricted to the 2 closed subspaces), and both $L$ and $L^{-1}$ are bounded.
\[Def abbreviation of notations for spaces\] (Abbreviation of notations for the spaces of sections). When the log-power $b$ is equal to $0$, we abbreviate all the notations $$W^{1,2}_{p,b},L^{2}_{p,b}, H_{p,b},N_{p,b}, J_{p,b}, Q_{p,b,A_{O}},Q_{A,p,b},etc$$ as $$W^{1,2}_{p},L^{2}_{p}, H_{p},N_{p}, J_{p} , Q_{p,A_{O}},Q_{A,p},etc.$$
\[Def tensor product\] (Tensor products) The sign “$\otimes$” means a tensor product depending on (some of and at most) the reference $G_{2}-$structure $\phi,\psi$, the metric $g_{\psi}$, the Euclidean metric in the coordinates, or some other $G_{2}-$structure, manifold, or bundle. Thus the norms of these $\otimes$’s are bounded with respect to the above data. The $\otimes$’s in different places might be different. When we are considering some specific tensor product, we add sub-letter or symbol to the $\otimes$ (like in Lemma \[lem formula for LA squared\] and proof of Proposition \[prop seperation of variable for general cone\]).
\[Def A generic\] Given an admissible connection $A$, let $p$ be a real number. $p$ is called **$A-$generic** if $1-p$ and $-p$ do not belong to the $v-$spectrum of any $\Upsilon_{A_{O_{j}}}$ (see (\[equ relation between v and beta\]) and Definition \[Def v spectrum\]). This means neither $7.25-(1-p)^{2}$ nor $7.25-p^{2}$ is an eigenvalue of any $\Upsilon_{A_{O_{j}}}$.
Local theory
============
Separation of variables for the system in the model case \[section Seperation of variable for the system in the model case\]
----------------------------------------------------------------------------------------------------------------------------
By abuse of notation, we still let $\Xi$ denote the space of sections to the bundle $\Xi$ etc. By the monopole equation (\[equ instantonequation without cokernel\]), the linearised operator with respect to $\sigma\in \Omega^{0}_{adE}$ and $a\in \Omega^{1}_{adE}$ (at $(0,0)\in \Omega^{0}_{adE}\oplus \Omega^{1}_{adE}=\Xi$ when $\underline{\psi}=\psi$) is $$\label{equ introduction formula for deformation operator}L_{A}[\begin{array}{c}
\sigma \\
a
\end{array}]=[\begin{array}{c}
d_{A}^{\star}a \\
d_{A}\sigma+\star(d_{A}a\wedge \psi)
\end{array}]$$ where $\psi$ is the base co-associative form in Theorem \[Thm Deforming instanton\]. Let $L_{A_{O}}$ denote the deformation operator of $A_{O}$ and Euclidean $G_{2}-$structure. If the operator depends on any different $G_{2}-$structure than the Euclidean one and $\phi,\psi$, we add sub-symbol.
Thus $L_{A_{O}}^{2}$ is still an operator from $\Xi$ to itself. To achieve separation of variables for this operator, we should understand the bundle in another way. Working in general dimension $n\geq 4$, given any $\xi=\left|\begin{array}{c}
\sigma \\
a\\
\end{array}\right |\in \Xi$, we write $$\label{equ decompose of 1-form to radial and spherical part}
\sigma=\frac{\zeta}{r},\ a=a_{r}\frac{dr}{r}+a_{s},$$ where $a_{s}$ does not have radial component, and $a_{r}$ is a $r-$dependent section of $\Omega^{0}_{E}(S^{n-1})$. In another word, we want to view sections of $\Xi$ as $r-$dependent sections of the bundle (over $S^{n-1}$) $$\label{equ splitting of xi over the sphere}
\Xi=\Omega^{0}_{adE}(S^{n-1})\oplus \Omega^{0}_{adE}(S^{n-1}) \oplus \Omega^{1}_{adE}(S^{n-1})\ \textrm{under the basis in}\ (\ref{equ decompose of 1-form to radial and spherical part}).$$
Let $\nabla_{S}$ denote the covariant derivative with respect to the connection $A_{O}$, viewed as a connection over $S^{n-1}$.
For the $0-$form $\sigma$, the well known cone formula for the rough Laplacian reads as $$-\nabla^{\star}\nabla \sigma= \frac{\partial^{2}\sigma}{\partial r^{2}} +\frac{n-1}{r}\frac{\partial \sigma}{\partial r}+ \frac{\Delta_{s}\sigma}{r^{2}}.$$
Let $\zeta=r\sigma$; by Claim \[clm weight change on the ODE\], we have $$\label{equ cone formula for the 0form with proper basis}
-r\nabla^{\star}\nabla (\frac{\zeta}{r})= \frac{\partial^{2}\zeta}{\partial r^{2}} +\frac{n-3}{r}\frac{\partial \zeta}{\partial r}+\frac{(3-n)\zeta}{r^{2}}+ \frac{\Delta_{s}\zeta}{r^{2}},$$ where $\Delta_{s}$ is the negative of the rough Laplacian of $A_{O}$ over $S^{n-1}$.
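For completeness, the elementary computation behind (\ref{equ cone formula for the 0form with proper basis}) is the following: with $\sigma=\frac{\zeta}{r}$ we have $\frac{\partial \sigma}{\partial r}=\frac{1}{r}\frac{\partial \zeta}{\partial r}-\frac{\zeta}{r^{2}}$ and $\frac{\partial^{2} \sigma}{\partial r^{2}}=\frac{1}{r}\frac{\partial^{2} \zeta}{\partial r^{2}}-\frac{2}{r^{2}}\frac{\partial \zeta}{\partial r}+\frac{2\zeta}{r^{3}}$, hence $$-r\nabla^{\star}\nabla (\frac{\zeta}{r})=\frac{\partial^{2}\zeta}{\partial r^{2}}+(\frac{n-1}{r}-\frac{2}{r})\frac{\partial \zeta}{\partial r}+\frac{(2-(n-1))\zeta}{r^{2}}+\frac{\Delta_{s}\zeta}{r^{2}},$$ and $2-(n-1)=3-n$ recovers the zeroth order coefficient.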
On $1-$forms, we have the following polar coordinate formula.
\[lem cone formula for laplacian\] Suppose $A_{O}$ is a cone connection over $ \R^{n}\setminus O$. Then $$\begin{aligned}
& &-\nabla^{\star}\nabla a\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ a\ \textrm{is written as in}\ (\ref{equ decompose of 1-form to radial and spherical part}),
\\&=& (\frac{\partial^{2} a_{r}}{\partial r^{2}}+\frac{n-3}{r}\frac{\partial a_{r}}{\partial r}-\frac{2(n-2)a_{r}}{r^{2}}+\frac{\Delta_{s}a_{r}+2d_{s}^{\star}a_{s}}{r^{2}})\frac{dr}{r}
\\& & +\nabla_{r}(\nabla_{r} a_{s})+\frac{n-1}{r}\nabla_{r} a_{s}-\frac{a_{s}}{r^{2}}+\frac{\Delta_{s}a_{s}+2d_{s}a_{r}}{r^{2}}\end{aligned}$$ where $\Delta_{s}$ is the negative of the rough Laplacian of $A_{O}$ on $S^{n-1}$.
**The proof of Lemma \[lem cone formula for laplacian\] will be deferred to Section \[Appendix B: proof of cone formula for laplacian\].**
Next we return to dimension 7. By the formula in Lemma \[lem formula for LA squared\], $-L_{A_{O}}^{2}$ is the rough Laplacian of $A_{O}$ plus some algebraic operators, thus the polar coordinate formula naturally involves the $SU(3)$-structure of $S^{6}$ (see [@FoscoloHaskins], [@Xu] for the formulas we need). Let $(\omega,\Omega)$ (as in [@FoscoloHaskins]) be the standard $SU(3)-$structure over $S^{6}$, where $\omega$ is the standard Hermitian form with respect to the almost complex structure, and $\Omega$ is the $(3,0)-$form. They satisfy $$\label{equ SU3 structure of S6}
d\omega=3Re\Omega,\ dIm\Omega=-2\omega^{2}.$$ Moreover, the standard $G_{2}-$forms can be written as $$\label{equ G2 forms in terms of nearly Kahler forms on the link}
\phi_{0}=r^{2}dr\wedge \omega+r^{3}Re\Omega,\ \psi_{0}=-r^3 dr \wedge Im
\Omega+ \frac{r^{4}}{2}\omega^{2}.$$
A necessary algebraic definition is the following.
\[Def Two tensor products\](Some specific tensor products) Let $\theta$ be a $p-$form and $\Theta$ be a $q-$form, both possibly $adE-$valued.
Suppose $q>p$, then we define $\theta \lrcorner \Theta$ as the $q-p$ form $$\theta \lrcorner \Theta(Y_{1}...Y_{q-p})=\Sigma_{i_{1},...,i_{p}}\theta(v_{i_{1}},...,v_{i_{p}})\Theta(v_{i_{1}},...,v_{i_{p}},Y_{1}...Y_{q-p}),$$ where the $v_{j}$’s form an orthonormal basis with respect to the underlying metric.
Suppose $q<p$, similar to the previous paragraph, we define $\theta \llcorner \Theta$ as the $p-q$ form $\theta \llcorner \Theta(Y_{1}...Y_{p-q})=\Sigma_{i_{1},...,i_{q}}\theta(v_{i_{1}},...,v_{i_{q}},Y_{1}...Y_{p-q})\Theta(v_{i_{1}},...,v_{i_{q}})$.
The order of multiplication is important, since the forms are matrix-valued.
The symbol “$\underline{\otimes}$” means the tensor product $$F\underline{\otimes} a=[F(e_{i},e_{j}),a(e_{i})]e^{j},$$ where $F$ is an $adE-$valued $2-$form, $a$ is an $adE-$valued $1-$form, and the $e_{i}$’s form an orthonormal frame with respect to the underlying metric (which is the Euclidean metric in the model case).
The symbol “$\underline{\otimes}_{S}$” means the $\underline{\otimes}$ over $S^{6}$ with respect to the standard round metric, as do the symbols $\lrcorner_{S}$ and $\llcorner_{S}$.
A routine computation gives the main result of this section.
\[prop seperation of variable for general cone\]Under the basis in (\[equ decompose of 1-form to radial and spherical part\]), the equation $$-L_{A_{O}}^{2}(\begin{array}{ccc}
\frac{1}{r}& 0& 0 \\
0& \frac{dr}{r} & Id
\end{array})\left |\begin{array}{c}
\zeta \\
a_{r}\\
a_{s} \end{array}\right |=
(\begin{array}{ccc}
\frac{1}{r}& 0& 0 \\
0& \frac{dr}{r} & Id
\end{array})\left |\begin{array}{c}
\underline{f}_{0} \\
\underline{f}_{1} \\
\underline{f}_{2} \end{array}\right |$$ is equivalent to $$\label{equ cone formula for square of dirac}\left \{\begin{array}{c}
\frac{\partial^{2} \zeta}{\partial r^{2}}+\frac{4}{r}\frac{\partial \zeta}{\partial r}-\frac{5\zeta}{r^{2}}+\frac{\Upsilon_{A_{O},0}}{r^{2}}=\underline{f}_{0} \\
\frac{\partial^{2} a_{r}}{\partial r^{2}}+\frac{4}{r}\frac{\partial a_{r}}{\partial r}-\frac{5a_{r}}{r^{2}}+\frac{\Upsilon_{A_{O},1}}{r^{2}}=\underline{f}_{1} \\
\nabla_{r}(\nabla_{r} a_{s})+\frac{6}{r}\nabla_{r} a_{s}-\frac{a_{s}}{r^{2}}+\frac{\Upsilon_{A_{O},2}}{r^{2}}=\underline{f}_{2} \end{array} \right.,$$ where $$\begin{aligned}
\label{equ prop cone formula general one}& &
\Upsilon_{A_{O}}\left |\begin{array}{c}
\zeta\\
a_{r} \\
a_{s}
\end{array}\right|=\left |\begin{array}{c}
\Upsilon_{A_{O},0}\\
\Upsilon_{A_{O},1} \\
\Upsilon_{A_{O},2}
\end{array}\right|
\\&=&\left |
\begin{array}{c} \Delta_{s}\zeta+\zeta+[F_{A_{O}},a_{s}]\lrcorner_{S}Re\Omega+[F_{A_{O}},a_{r}]\lrcorner_{S}\omega \\
\Delta_{s}a_{r}-5a_{r}+2d_{s}^{\star}a_{s}+[F_{A_{O}},a_{s}]\lrcorner_{S}Im\Omega-[F_{A_{O}},\zeta]\lrcorner_{S}\omega \\
\{\Delta_{s}a_{s}+2d_{s}a_{r}-F_{A_{O}}\underline{\otimes}_{S}a_{s}-[F_{A_{O}},a_{r}]\lrcorner_{S}Im\Omega
\\+[F_{A_{O}},a_{s}]\lrcorner_{S}\frac{\omega^{2}}{2}-[F_{A_{O}},\zeta]\lrcorner_{S}Re\Omega\}
\end{array}\right| \nonumber\end{aligned}$$ The operator $\Upsilon_{A_{O}}$ is a smooth self-adjoint elliptic operator over $\Xi\rightarrow S^{6}$.
When $A_{O}$ is a $\psi_{0}-$instanton, i.e. $F_{A_{O}}\wedge \psi_{0}=0$ as a connection pulled back to $\R^{7}\setminus O$ (equivalently, $A_{O}$ is Hermitian Yang-Mills on $S^{6}$ [@Xu]), we have $$\label{equ prop cone formula special one}
\Upsilon_{A_{O}}\left |\begin{array}{c}
\zeta\\
a_{r} \\
a_{s}
\end{array}\right|=\left |
\begin{array}{c} \Delta_{s}\zeta+\zeta \\
\Delta_{s}a_{r}-5a_{r}+2d_{s}^{\star}a_{s} \\
\Delta_{s}a_{s}+2d_{s}a_{r}-2F_{A_{O}}\underline{\otimes}_{S}a_{s}
\end{array}\right|$$
We only prove (\[equ prop cone formula general one\]); equation (\[equ prop cone formula special one\]) is a special case and is implied by Lemma \[lem formula for LA squared\], or by Lemma 2.4 in [@Xu] (using (\[equ prop cone formula general one\])). It suffices to combine Lemma \[lem cone formula for laplacian\], Lemma \[lem formula for LA squared\] (and the proof of it), and formulas (\[equ cone formula for the 0form with proper basis\]), (\[equ SU3 structure of S6\]), (\[equ G2 forms in terms of nearly Kahler forms on the link\]). We record a few more details for the reader’s convenience.
First, since $A_{O}$ is a cone connection, we have $$\label{equ formula for F tensor a}F_{A_{O}}\underline{\otimes} a=F_{A_{O}}\underline{\otimes} a_{s}=\frac{1}{r^{2}}F_{A_{O}}\underline{\otimes}_{S} a_{s}.$$
Second, formula (\[equ G2 forms in terms of nearly Kahler forms on the link\]) directly implies $$[F_{A_{O}},a]\lrcorner \psi=-\frac{1}{r^{2}} [F_{A_{O}},a_{r}]\lrcorner_{S} Im\Omega+\frac{dr}{r^{3}} [F_{A_{O}},a_{s}]\lrcorner_{S} Im\Omega+\frac{1}{2r^{2}} [F_{A_{O}},a_{s}]\lrcorner_{S} \omega^{2},$$ $$\star([F_{A_{O}},a]\wedge\psi)= [F_{A_{O}},a]\lrcorner\phi_{0}=\frac{1}{r^{3}} [F_{A_{O}},a_{s}]\lrcorner_{S}Re\Omega+\frac{1}{r^{3}} [F_{A_{O}},a_{r}]\lrcorner_{S} \omega,\ \textrm{and}$$ $$\star([F_{A_{O}},\sigma]\wedge \psi)=[F_{A_{O}},\sigma]\lrcorner \phi_{0}=\frac{dr}{r^{2}} [F_{A_{O}},\sigma]\lrcorner_{S} \omega+\frac{1}{r} [F_{A_{O}},\sigma]\lrcorner_{S} Re\Omega.$$
The proof of (\[equ prop cone formula general one\]) is complete.
To show the self-adjointness of $\Upsilon_{A_{O}}$ as an operator over $S^{6}$, it suffices to note that $[F_{A_{O}},a_{s}]\lrcorner_{S}Re\Omega$ is adjoint to $-[F_{A_{O}},\zeta]\lrcorner_{S}Re\Omega$, $[F_{A_{O}},a_{r}]\lrcorner_{S}\omega$ is adjoint to $-[F_{A_{O}},\zeta]\lrcorner_{S}\omega$, $2d_{s}a_{r}$ is adjoint to $2d_{s}^{\star}a_{s}$, and $[F_{A_{O}},a_{s}]\lrcorner_{S}Im\Omega$ is adjoint to $-[F_{A_{O}},a_{r}]\lrcorner_{S}Im\Omega$. Moreover, both $F_{A_{O}}\underline{\otimes}_{S}a_{s}$ and $[F_{A_{O}},a_{s}]\lrcorner_{S}\frac{\omega^{2}}{2}$ are self-adjoint. A very important formula for verifying these relations is the following.
For any $adE-$valued $p-$form $a_{1}$, $adE-$valued $q-$form $a_{2}$, and ordinary $(p+q)-$form $B$, we have $$<a_{1}\lrcorner B,a_{2}>= (-1)^{pq}<a_{1},a_{2}\lrcorner B>.$$
The assumption that $\Xi\rightarrow S^{6}$ is an $SO(m)-$bundle implies that $[F_{A},\cdot]$ is anti-symmetric with respect to the inner product of the Lie algebra $so(m)$.
We denote the eigenvalues of $\Upsilon_{A_{O}}$ by $\beta$, and the corresponding eigensections by $\Psi_{\beta}$ (there might be multiplicities), i.e. $$\Upsilon_{A_{O}} \Psi_{\beta}=\beta \Psi_{\beta},\ \Psi_{\beta}=(\begin{array}{c}\phi_{0,\beta} \\
\phi_{1,\beta} \\
\phi_{2,\beta}
\end{array}),\ \phi_{0,\beta},\ \phi_{1,\beta}\in \Omega_{ad E}^{0}(S^{6}),\ \phi_{2,\beta}\in \Omega_{ad E}^{1}(S^{6}).$$ We require the $\Psi_{\beta}$’s to form an orthonormal basis of $L_{\Xi}^{2}(S^{6})$, the space of $L^{2}-$sections of $\Xi\rightarrow S^{6}$, with respect to the natural inner product of the direct sums in (\[equ splitting of xi over the sphere\]). By (\[equ cone formula for square of dirac\]) and (\[equ radial derivative of a spherical form\]), the equation $$\label{equ Def of equation for square of model LAO}-L_{A_{O}}^2 \xi= \underline{f}$$ is then equivalent to $$\label{equ raw ODE for 1 forms}
\frac{d^{2} \underline{\xi}_{\beta}}{d r^{2}}+\frac{4}{r}\frac{d \underline{\xi}_{\beta}}{d r}+\frac{(\beta-5)\underline{\xi}_{\beta}}{r^{2}}=\underline{f}_{\beta},\ \xi=\Sigma_{\beta}\underline{\xi}_{\beta}\Psi_{\beta},\ \textrm{where}\ \underline{f}=\Sigma_{\beta}\underline{f}_{\beta}\Psi_{\beta}.$$ Let $$\label{equ relation between v and beta}
-v^{2}=\beta-\frac{29}{4}.$$ $v$ is either a non-negative real number or a purely imaginary number. To reduce the above equation to a standard form, we consider $\xi_{v}=r^{\frac{3}{2}}\underline{\xi}_{\beta}$ and $f_{v}=r^{\frac{3}{2}}\underline{f}_{\beta}$; then (\[equ raw ODE for 1 forms\]) becomes $$\label{equ the ODE of v for 1 forms}
\frac{d^{2} \xi_{v}}{d r^{2}}+\frac{1}{r}\frac{d \xi_{v}}{d r}-v^{2}\frac{\xi_{v}}{r^{2}}=f_{v},\ \textrm{where}\ \xi_{v}\ \textrm{and}\ f_{v}\ \textrm{only depend on}\ r.$$
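As a sanity check (ours, not part of the argument; it assumes Python with the sympy library), one can confirm symbolically that the substitution $\xi_{v}=r^{\frac{3}{2}}\underline{\xi}_{\beta}$ with $-v^{2}=\beta-\frac{29}{4}$ turns (\[equ raw ODE for 1 forms\]) into (\[equ the ODE of v for 1 forms\]).

```python
import sympy as sp

r = sp.symbols('r', positive=True)
beta = sp.symbols('beta', real=True)
xi = sp.Function('xi')  # a Fourier coefficient of the raw equation

# RHS produced by the raw ODE (equ raw ODE for 1 forms)
f = sp.diff(xi(r), r, 2) + 4*sp.diff(xi(r), r)/r + (beta - 5)*xi(r)/r**2

# substitute xi_v = r^(3/2)*xi and form the standard ODE with v^2 = 29/4 - beta
xi_v = r**sp.Rational(3, 2)*xi(r)
v2 = sp.Rational(29, 4) - beta
residual = (sp.diff(xi_v, r, 2) + sp.diff(xi_v, r)/r - v2*xi_v/r**2
            - r**sp.Rational(3, 2)*f)
assert sp.simplify(residual) == 0
print("substitution to the standard ODE verified")
```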
By abuse of notation, we shall study the ordinary differential equation $$\label{equ the standard ODE}
\frac{d^{2} u}{d r^{2}}+\frac{1}{r}\frac{d u}{d r}-v^{2}\frac{u}{r^{2}}=f.$$
\[Def v spectrum\]($v-$spectrum) Since the $v$’s are determined by $\beta$ via (\[equ relation between v and beta\]), we call them the $v-$spectrum of the tangential operators. By abuse of notation, we write $\Psi_{\beta}$ as $\Psi_{v}$, $\underline{f}_{\beta}$ as $\underline{f}_{v}$, $\underline{\xi}_{\beta}$ as $\underline{\xi}_{v}$ etc.
Solutions to the ODEs on the Fourier-coefficients \[section Solutions to the ODEs on the Fourier-coefficients\]
---------------------------------------------------------------------------------------------------------------
In this section, for any singular point $O$ of the connection and bundle, we shall solve (\[equ Def of equation for square of model LAO\]) locally for the cone connection $A_{O}$. This is equivalent to solving the ODEs (\[equ the standard ODE\]) of the Fourier-coefficients. By proving Theorem \[thm existence of good solutions to the uniform ODE with estimates\], we show the existence of good solutions to (\[equ the standard ODE\]) with the correct and optimal $L^{2}-$estimates. **Choosing different formulas for different spectral values, the solutions for each $v$ are given by (\[equ solution when v is real and p>1-v is integral from 1 to r\]), (\[equ solution formula when v positive and p less than 1-v\]), (\[equ solution when v is purely imaginary\]), (\[equ solution when v=0\]).** These solutions possess the properties needed for building up a deformation theory for singular connections.
We cannot prove Theorem \[thm existence of good solutions to the uniform ODE with estimates\] by “potential” estimates alone. To be precise, when $v=0$, Proposition \[prop existence of solution with lowest order estimate\], a result obtained by potential estimates, does not give the optimal estimate we want in Theorem \[thm existence of good solutions to the uniform ODE with estimates\]. Nevertheless, the interesting point is that the two terms in (\[equ solution when v=0\]) enjoy a remarkable cancellation. This allows us to use an integral identity to improve the non-optimal estimate in Proposition \[prop existence of solution with lowest order estimate\] to the optimal estimate in Proposition \[prop final ODE W22 estimate when v=0\]. Since this technique involves integration by parts, we need to choose the solution properly with respect to the weight, so that the boundary terms in Lemma \[lem rough L2 identity integration by parts \] vanish and we obtain the identity (\[equ strong L2 identity integration by parts\]).
This is similar to Theorem 9.9 of [@GilbargTrudinger]: though the weak $L^{2,1}-$estimate can be done by Calderon-Zygmund potential estimate, the $W^{2,2}-$estimate still requires integration by parts.
\[thm existence of good solutions to the uniform ODE with estimates\]Suppose $f$ is supported in $(0,\frac{1}{100}]$ and vanishes near $r=0$. Suppose $p<0$ is $A_{O}-$generic and $b\geq 0$. Then for any $v$ among the $v-$spectrum of $\Upsilon_{A_{O}}$ (see Definition \[Def v spectrum\]), there exists a solution $u$ to (\[equ the standard ODE\]) with the following uniform estimate. $$\begin{aligned}
\label{equ in thm of ODE optimal estimate v neq 1-p} \int^{\frac{1}{4}}_{0} u^{2} (-\log r)^{2b}r^{2p-3} dr
\leq \bar{C} \int^{\frac{1}{2}}_{0}f^{2}r^{2p+1}(-\log r)^{2b} dr. \end{aligned}$$ $\bar{C}$ is as in Definition \[Def special constants\].
Suppose $p$ is not $A_{O}-$generic, i.e. there is some $v$ such that $v=1-p$. Then there is an $f$ which violates the conclusion of Theorem \[thm existence of good solutions to the uniform ODE with estimates\]. Namely, let $v=\frac{5}{2}$, $p=-\frac{3}{2}$, and $f=r^{\frac{1}{2}}(-\log r)^{-b-1-\epsilon},\ b>0, \frac{1}{100}>\epsilon>0$, then $\int^{\frac{1}{2}}_{0}f^{2}r^{-2}(-\log r)^{2b}dr<\infty$. However, there is no solution $u$ to (\[equ the standard ODE\]) such that $\int^{\frac{1}{2}}_{0}u^{2}r^{-6}(-\log r)^{2b}dr<\infty$. By a limiting argument, this means we cannot find a solution which satisfies the optimal bound in (\[equ in thm of ODE optimal estimate v neq 1-p\]).
This is a combination of Propositions \[prop good solution with lowest order estimate when v is nonzero\], \[prop good solution with lowest order estimate when v is imaginary\], and \[prop final ODE W22 estimate when v=0\].
\[prop good solution with lowest order estimate when v is nonzero\] Under the same conditions in Theorem \[thm existence of good solutions to the uniform ODE with estimates\], suppose $v>0$, then there exists a solution $u$ to (\[equ the standard ODE\]) such that $$\int^{\frac{1}{2}}_{0}u^{2}r^{2p-3}(-\log r)^{2b}dr \leq \frac{\bar{C}}{1+|v|^{3}}\int^{\frac{1}{2}}_{0}f^{2}(x)x^{2p+1}(-\log x)^{2b} dx.$$
\[prop good solution with lowest order estimate when v is imaginary\] Under the same conditions in Theorem \[thm existence of good solutions to the uniform ODE with estimates\], suppose $v$ is purely imaginary, then there exists a solution $u$ to (\[equ the standard ODE\]) such that $$\int^{\frac{1}{2}}_{0}u^{2}r^{2p-3}(-\log r)^{2b}dr \leq \bar{C}\int^{\frac{1}{2}}_{0}f^{2}(x)x^{2p+1}(-\log x)^{2b} dx.$$
\[prop existence of solution with lowest order estimate\] Under the same conditions in Theorem \[thm existence of good solutions to the uniform ODE with estimates\], suppose $v=0$, then there exists a solution $u$ to (\[equ the standard ODE\]) such that $$\int^{\frac{1}{2}}_{0}u^{2}r^{2p-3}(-\log r)^{2b-2}dr\leq \bar{C}\int^{\frac{1}{2}}_{0}f^{2}x^{2p+1}(-\log x)^{2b} dx.$$
Case 1: $v>1-p$. We choose a solution to (\[equ the standard ODE\]) as $$\label{equ solution when v is real and p>1-v is integral from 1 to r}
u=\frac{1}{-2v}\{r^{v}\int_{r}^{\frac{1}{2}}x^{-v+1}f(x)dx+r^{-v}\int_{0}^{r}x^{v+1}f(x)dx\}.$$ We denote $u_{I}=\frac{r^{v}}{-2v}\int_{r}^{\frac{1}{2}}x^{-v+1}f(x)dx$, $u_{II}=\frac{r^{-v}}{-2v}\int_{0}^{r}x^{v+1}f(x)dx$.
There exists a $q$ such that $p>q>1-v$. By the Hölder inequality, $$\begin{aligned}
& & u^{2}_{I}
\\&\leq &\frac{\bar{C}r^{2v}}{1+|v|^{2}}[\int^{\frac{1}{2}}_{r}x^{-2v+2}x^{-2q-1}(-\log r)^{-2b}dx][\int^{\frac{1}{2}}_{r}f^{2}(x)x^{2q+1}(-\log r)^{2b}dx]
\\&\leq & \frac{\bar{C}}{(1+|v|^{2})|v|}r^{2v}r^{-2v-2q+2}(-\log r)^{-2b}[\int^{\frac{1}{2}}_{r}f^{2}(x)x^{2q+1}(-\log r)^{2b}dx],\ q>1-v.\end{aligned}$$ $$\begin{aligned}
\textrm{Thus}& & \int^{\frac{1}{2}}_{0}u^{2}_{I}r^{2p-3}(-\log r)^{2b}dr
\\&\leq & \frac{\bar{C}}{(1+|v|^{2})^{\frac{3}{2}}}\int^{\frac{1}{2}}_{0} r^{2p-2q-1}dr[\int^{\frac{1}{2}}_{r}f^{2}(x)x^{2q+1}(-\log x)^{2b}dx]
\\&= & \frac{\bar{C}}{(1+|v|^{2})^{\frac{3}{2}}}\int^{\frac{1}{2}}_{0}f^{2}(x)x^{2q+1}(-\log x)^{2b}dx\int^{x}_{0}r^{2p-2q-1}dr,\ p>q
\\&= & \frac{\bar{C}}{(1+|v|^{2})^{\frac{3}{2}}}\int^{\frac{1}{2}}_{0}f^{2}(x)x^{2q+1}(-\log x)^{2b} x^{2p-2q} dx,\ p>q.
\\&= & \frac{\bar{C}}{(1+|v|^{2})^{\frac{3}{2}}}\int^{\frac{1}{2}}_{0}f^{2}(x)x^{2p+1}(-\log x)^{2b} dx.\end{aligned}$$ Since $p<0$, we directly use (\[eqnarray Prop ODE solution v>0 1\]) and (\[eqnarray Prop ODE solution v>0 2\]) replacing the “$q$” there by $0$, “$v$” by $-v$. Hence $$\int^{\frac{1}{2}}_{0}u^{2}_{II}r^{2p-3}(-\log r)^{2b}dr
\leq \frac{\bar{C}}{(1+|v|^{2})^{\frac{3}{2}}}\int^{\frac{1}{2}}_{0}f^{2}(x)x^{2p+1}(-\log x)^{2b}dx.$$ The proof is complete when $v>1-p$.
Case 2: $0<v<1-p$. In this case we take the solution as $$\label{equ solution formula when v positive and p less than 1-v}
u=\frac{1}{-2v}\{-r^{v}\int_{0}^{r}x^{-v+1}f(x)dx+r^{-v}\int_{0}^{r}x^{v+1}f(x)dx\}.$$ In this case let $u_{I}=\frac{r^{v}}{2v}\int_{0}^{r}x^{-v+1}f(x)dx$. Choose $q$ such that $p<q<1-v$; by the Hölder inequality and Lemma \[lem log estimate\], we calculate $$\begin{aligned}
\label{eqnarray Prop ODE solution v>0 1}
& & u^{2}_{I}
\\&\leq &r^{2v}[\int^{r}_{0}x^{-2v+2}x^{-2q-1}(-\log x)^{2b}dx][\int^{r}_{0}f^{2}(x)x^{2q+1}(-\log x)^{2b}dx]\nonumber
\\&\leq & \bar{C}r^{-2q+2}(-\log r)^{-2b}[\int^{r}_{0}f^{2}(x)x^{2q+1}(-\log x)^{2b}dx],\ q<1-v.\nonumber\end{aligned}$$ $$\begin{aligned}
\label{eqnarray Prop ODE solution v>0 2}
\textrm{Thus}& & \int^{\frac{1}{2}}_{0}u^{2}_{I}r^{2p-3}(-\log r)^{2b}dr
\\&\leq & \bar{C}\int^{\frac{1}{2}}_{0} r^{2p-2q-1}dr[\int^{r}_{0}f^{2}(x)x^{2q+1}(-\log x)^{2b}dx]\nonumber
\\&= & \bar{C}\int^{\frac{1}{2}}_{0}f^{2}(x)x^{2q+1}(-\log x)^{2b}dx\int^{\frac{1}{2}}_{x}r^{2p-2q-1}dr,\ p<q\nonumber
\\&= & \bar{C}\int^{\frac{1}{2}}_{0}f^{2}(x)x^{2q+1}(-\log x)^{2b} x^{2p-2q} dx,\ p<q.\nonumber
\\&= & \bar{C}\int^{\frac{1}{2}}_{0}f^{2}(x)x^{2p+1}(-\log x)^{2b} dx.\nonumber\end{aligned}$$ The estimate of the other term is similar. The proof of Proposition \[prop good solution with lowest order estimate when v is nonzero\] is complete.
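Both particular solutions (\[equ solution when v is real and p>1-v is integral from 1 to r\]) and (\[equ solution formula when v positive and p less than 1-v\]) can be checked against (\[equ the standard ODE\]) symbolically. The following sympy sketch (ours, not part of the proof) verifies the formulas only, not the weighted estimates.

```python
import sympy as sp

r, x, v = sp.symbols('r x v', positive=True)
f = sp.Function('f')
half = sp.Rational(1, 2)

def is_solution(u):
    """Residual of the standard ODE u'' + u'/r - v^2 u/r^2 = f."""
    res = sp.diff(u, r, 2) + sp.diff(u, r)/r - v**2*u/r**2 - f(r)
    return sp.simplify(sp.expand(res)) == 0

# case v > 1-p
u1 = (r**v*sp.Integral(x**(1 - v)*f(x), (x, r, half))
      + r**(-v)*sp.Integral(x**(1 + v)*f(x), (x, 0, r)))/(-2*v)

# case 0 < v < 1-p
u2 = (-r**v*sp.Integral(x**(1 - v)*f(x), (x, 0, r))
      + r**(-v)*sp.Integral(x**(1 + v)*f(x), (x, 0, r)))/(-2*v)

assert is_solution(u1) and is_solution(u2)
print("both particular solutions satisfy the standard ODE")
```

Note that solving the ODE does not depend on $p$; the weight $p$ only enters through the choice of formula that makes the estimates hold.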
In this case we write $-v^{2}=\mu^{2}>0$, where $\mu$ is real and positive. Hence equation (\[equ the standard ODE\]) becomes $$\label{equ ODE when v is pure imaginary}
\frac{d^{2} u}{d r^{2}}+\frac{1}{r}\frac{d u}{d r}+\frac{\mu^{2}u}{r^{2}}=f.$$ Since the solutions to the homogeneous equation are $\sin (\mu\log r)$ , $\cos (\mu\log r)$, then we choose a particular solution to (\[equ ODE when v is pure imaginary\]) as $$\begin{aligned}
\label{equ solution when v is purely imaginary}
& &u
\\&=&\frac{\sin (\mu\log r)}{\mu}\int^{r}_{0}x\cos (\mu\log x)f(x)dx-\frac{\cos (\mu\log r)}{\mu} \int^{r}_{0}x\sin (\mu\log x)f(x)dx.\nonumber\end{aligned}$$ Since the solutions to the homogeneous equation are bounded, the estimate is easier than that of Proposition \[prop good solution with lowest order estimate when v is nonzero\]. Choosing $q$ such that $p<q<0$, for the estimate of $u_{I}=\frac{\sin (\mu\log r)}{\mu}\int^{r}_{0}x\cos (\mu\log x)f(x)dx$, we only need to use (\[eqnarray Prop ODE solution v>0 1\]) and (\[eqnarray Prop ODE solution v>0 2\]) by replacing the “$r^{v}$” there by $\sin (\mu\log r)$ and the “$x^{-v}$” by $\cos (\mu\log x)$. The estimate of the other term is similar.
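The variation-of-parameters formula (\[equ solution when v is purely imaginary\]) can likewise be verified symbolically; this sympy sketch is only a sanity check of the formula, not part of the proof.

```python
import sympy as sp

r, x, mu = sp.symbols('r x mu', positive=True)
f = sp.Function('f')

# particular solution (equ solution when v is purely imaginary)
u = (sp.sin(mu*sp.log(r))/mu*sp.Integral(x*sp.cos(mu*sp.log(x))*f(x), (x, 0, r))
     - sp.cos(mu*sp.log(r))/mu*sp.Integral(x*sp.sin(mu*sp.log(x))*f(x), (x, 0, r)))

# residual of the equation u'' + u'/r + mu^2 u / r^2 = f
residual = sp.diff(u, r, 2) + sp.diff(u, r)/r + mu**2*u/r**2 - f(r)
assert sp.simplify(residual) == 0
print("purely imaginary case verified")
```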
When $v=0$, the solutions to the homogeneous equation $$\label{equ the standard homo ODE v=0}
\frac{d^{2} u}{d r^{2}}+\frac{1}{r}\frac{d u}{d r}=0$$ are $1$ and $\log r$, then we choose a particular solution to $$\label{equ solution when v=0}
\frac{d^{2} u}{d r^{2}}+\frac{1}{r}\frac{d u}{d r}=f\ \textrm{as}\
u=-\int_{0}^{r}x\log x f(x)dx+\log r\int_{0}^{r}xf(x)dx.$$ Let $u_{II}=\log r\int_{0}^{r}xf(x)dx$, we compute $$u_{II}^{2}\leq (\log r)^{2}(\int^{r}_{0}f^{2}xdx)(\int^{r}_{0}xdx)=\frac{r^{2}(\log r)^{2}}{2}\int^{r}_{0}f^{2}xdx.$$ $$\begin{aligned}
\textrm{Then}& &\int^{\frac{1}{2}}_{0}u_{II}^{2}r^{2p-3}(-\log r)^{2b-2}dr
\\& \leq & \frac{1}{2}\int_{0}^{\frac{1}{2}}f^{2}(x)x[\int_{x}^{\frac{1}{2}}
r^{2p-1}(-\log r)^{2b}dr]dx \leq \bar{C}\int_{0}^{\frac{1}{2}}f^{2}(x)x [x^{2p}(-\log x)^{2b}]dx
\\&= &\bar{C}\int_{0}^{\frac{1}{2}}f^{2}x^{2p+1}(-\log x)^{2b}dx.\end{aligned}$$ The estimate for the other term in (\[equ solution when v=0\]) is similar.
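The particular solution (\[equ solution when v=0\]) also passes a direct symbolic check (ours, assuming Python with sympy); the two terms combine so that $u'=\frac{1}{r}\int_{0}^{r}xf(x)dx$.

```python
import sympy as sp

r, x = sp.symbols('r x', positive=True)
f = sp.Function('f')

# particular solution (equ solution when v=0)
u = (-sp.Integral(x*sp.log(x)*f(x), (x, 0, r))
     + sp.log(r)*sp.Integral(x*f(x), (x, 0, r)))

# residual of the equation u'' + u'/r = f
residual = sp.diff(u, r, 2) + sp.diff(u, r)/r - f(r)
assert sp.simplify(residual) == 0
print("v = 0 particular solution verified")
```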
Next we use integration by parts to improve Proposition \[prop existence of solution with lowest order estimate\] to Proposition \[prop final ODE W22 estimate when v=0\]. We verify the following identity; it does no harm to have identities for every $v$ instead of only for $v=0$.
\[clm weight change on the ODE\] Suppose $B$, $V^{2}$, $p$ are real numbers. Suppose $u$ is a solution to the following differential equation $$\frac{d^{2} u}{d r^{2}}+\frac{B}{r}\frac{d u}{d r}-\frac{V^{2}u}{r^{2}}=f,$$ then the function $\bar{u}=r^{p}u$ satisfies $$\frac{d^{2} \bar{u}}{d r^{2}}+\frac{(B-2p)}{r}\frac{d \bar{u}}{d r}-[V^{2}+p(p-1)+p(B-2p)]\frac{\bar{u}}{r^{2}}=fr^{p}.$$ In particular, when $B=1$, we have $$\frac{d^{2} \bar{u}}{d r^{2}}+\frac{(1-2p)}{r}\frac{d \bar{u}}{d r}-[V^{2}-p^{2}]\frac{\bar{u}}{r^{2}}=fr^{p}.$$
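Both the claim and the identities for $k=1-2p$, $a^{2}=v^{2}-p^{2}$ stated after (\[weighted ODE\]) can be checked symbolically; the following sympy sketch (ours, not part of the proof) does exactly that.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
B, V2, p, v2 = sp.symbols('B V2 p v2', real=True)  # V2, v2 stand for V^2, v^2
u = sp.Function('u')

# f is whatever the original equation produces from u
f = sp.diff(u(r), r, 2) + B*sp.diff(u(r), r)/r - V2*u(r)/r**2

# the claim: ubar = r^p * u solves the transformed equation with RHS r^p * f
ubar = r**p*u(r)
coeff = V2 + p*(p - 1) + p*(B - 2*p)
residual = (sp.diff(ubar, r, 2) + (B - 2*p)*sp.diff(ubar, r)/r
            - coeff*ubar/r**2 - r**p*f)
assert sp.simplify(residual) == 0

# the identities for k = 1-2p, a^2 = v^2 - p^2 stated after (weighted ODE)
k, a2 = 1 - 2*p, v2 - p**2
assert sp.expand(k**2 + 2*a2 - (2*v2 + 1 - 4*p + 2*p**2)) == 0
assert sp.expand(a2**2 - 2*k*a2 - 2*a2 - (v2 - p**2)*(v2 - (p - 2)**2)) == 0
print("weight-change claim and identities verified")
```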
Suppose $u$ solves (\[equ the standard ODE\]), and consider $\bar{u}$ as in Claim \[clm weight change on the ODE\]. Then $\bar{u}$ satisfies $$\label{weighted ODE}
\frac{d^{2} \bar{u}}{d r^{2}}+\frac{k}{r}\frac{d \bar{u}}{d r}-\frac{a^{2}\bar{u}}{r^{2}}=\bar{f},$$ where $\bar{f}=r^{p}f$, $k=1-2p$, $a^{2}=v^{2}-p^{2}$. Then $$k^{2}+2a^{2}=2v^{2}+1-4p+2p^{2},\ a^{4}-2ka^{2}-2a^{2}=(v^{2}-p^{2})(v^{2}-(p-2)^{2}).$$ Moreover, we consider the weight $w_{0}$ as $$\label{equ w0 is decreasing}
w_{0}=(\frac{1}{2}-r)^{10}(-\log r)^{2b}.\ \textrm{Notice that}\ \frac{d w_{0}}{d r}\leq 0.$$ Using $\bar{u}=r^{p}u$ and integrating the square of (\[weighted ODE\]), we directly verify
Under the same condition on $f$ as in Theorem \[thm existence of good solutions to the uniform ODE with estimates\], suppose in each case we choose the solutions to (\[weighted ODE\]) ((\[equ the standard ODE\])) as in (\[equ solution when v is real and p>1-v is integral from 1 to r\]), (\[equ solution formula when v positive and p less than 1-v\]), (\[equ solution when v is purely imaginary\]), (\[equ solution when v=0\]). Then all the boundary terms in (\[equ rough L2 identity integration by parts\]) are 0. Consequently, $$\begin{aligned}
\label{equ strong L2 identity integration by parts}
& &\int^{\frac{1}{2}}_{0}\bar{f}^{2}rw_{0} dr
\\&=& \int^{\frac{1}{2}}_{0}|\frac{d^{2} \bar{u}}{d r^{2}}|^{2}rw_{0} dr+(2v^{2}+1-4p+2p^{2})\int^{\frac{1}{2}}_{0}|\frac{d \bar{u}}{d r}|^{2}\frac{w_{0}}{r} dr\nonumber
\\& &+ (v^{2}-p^{2})(v^{2}-(p-2)^{2})\int^{\frac{1}{2}}_{0}\frac{ \bar{u}^{2} w_{0}}{ r^{3}} dr-(1-2p)\int^{\frac{1}{2}}_{0}|\frac{d \bar{u}}{d r}|^{2}\frac{d w_{0}}{d r}dr\nonumber
\\& &+(3-2p)(v^{2}-p^{2})\int^{\frac{1}{2}}_{0}\frac{ \bar{u}^{2} }{ r^{2}} \frac{d w_{0}}{d r} dr-(v^{2}-p^{2})\int^{\frac{1}{2}}_{0}\frac{ \bar{u}^{2} }{ r}\frac{d^{2} w_{0}}{d r^{2}} dr.\nonumber\end{aligned}$$
\[prop final ODE W22 estimate when v=0\] Under the same conditions in Theorem \[thm existence of good solutions to the uniform ODE with estimates\], suppose $v=0$. Let $\bar{u}=r^{p}u$, $u$ is the solution to (\[equ the standard ODE\]) in (\[equ solution when v=0\]), then $$\begin{aligned}
\label{eqnarray equivalent statement for the L2 estimate v=0}\int^{\frac{1}{2}}_{0}\bar{u}^{2} w_{0}r^{-3} dr
\leq \bar{C}\int^{\frac{1}{2}}_{0}\bar{f}^{2}rw_{0} dr . \end{aligned}$$ Consequently, $u$ satisfies (\[equ in thm of ODE optimal estimate v neq 1-p\]).
By (\[equ w0 is decreasing\]) and $p<0$, the terms on the right hand side of (\[equ strong L2 identity integration by parts\]) are all non-negative except the term $-(0-p^{2})\int^{\frac{1}{2}}_{0}\frac{ \bar{u}^{2} }{ r}\frac{d^{2} w_{0}}{d r^{2}} dr$ (even this term is non-negative when $b\geq 1$, but we do not assume this in general). We compute $$\begin{aligned}
\label{eqnarray second derivative of w0}
\frac{d^{2} w_{0}}{d r^{2}}=\textrm{non-negative terms}+\frac{2b(2b-1)}{r^{2}}(\frac{1}{2}-r)^{10}(-\log r)^{2b-2}.\nonumber\end{aligned}$$ Hence (\[equ strong L2 identity integration by parts\]) implies $$\begin{aligned}
\label{eqnarray L2 estimate v=0 with the junk term}& &\int^{\frac{1}{2}}_{0}\bar{u}^{2}r^{-3}w_{0} dr
\leq \bar{C}\{\int^{\frac{1}{2}}_{0}\bar{f}^{2}rw_{0} dr+\int^{\frac{1}{2}}_{0}\frac{\bar{u}^{2}}{r^{3}} (\frac{1}{2}-r)^{10}(-\log r)^{2b-2} dr\}\nonumber
\\& \leq & \bar{C}\{\int^{\frac{1}{2}}_{0}\bar{f}^{2}r(-\log r)^{2b} dr+\int^{\frac{1}{2}}_{0}\frac{\bar{u}^{2}}{r^{3}}(-\log r)^{2b-2} dr\}. \end{aligned}$$ $$\label{equ integral of u with log weight -2 is bounded}\textrm{Proposition \ref{prop existence of solution with lowest order estimate} implies}\
\int^{\frac{1}{2}}_{0} \bar{u}^{2} (-\log r)^{2b-2}r^{-3} dr\leq \bar{C}\int^{\frac{1}{2}}_{0}\bar{f}^{2}r(-\log r)^{2b} dr.\nonumber$$ The proof of (\[eqnarray equivalent statement for the L2 estimate v=0\]) is complete by combining the above two inequalities.
Local solutions for the model cone connection \[section Local solutions for the model cone connection\]
-------------------------------------------------------------------------------------------------------
Our goal in this section is to solve the following deformation equation in model case, and obtain optimal Sobolev-estimates. $$\label{equ model equation of laplacian for cone}
L_{A_{O}}\xi=\underline{f}.$$
Let $w=r^{2p}(-\log r)^{2b}$, $p<0$. For any open set $\mathfrak{B}$ in $\R^{n}\setminus O$ and any smooth section $\xi$ of $\Xi$ over $\mathfrak{B}$, we define $$\begin{aligned}
& &|\xi|^{2}_{W^{2,2}_{p,b,A_{O}}(\mathfrak{B})}
\triangleq \int_{\mathfrak{B}}|\nabla_{A_{O}}\nabla_{A_{O}} \xi|^{2}wdV+\int_{\mathfrak{B}}\frac{1}{r^{2}}|\nabla_{A_{O}}\xi|^{2}wdV+\int_{\mathfrak{B}}\frac{|\xi|^{2}}{r^{4}}wdV\end{aligned}$$ $$|\xi|^{2}_{W^{1,2}_{p,b,A_{O}}(\mathfrak{B})}
\triangleq \int_{\mathfrak{B}}|\nabla_{A_{O}} \xi|^{2}wdV+\int_{\mathfrak{B}}\frac{|\xi|^{2}}{r^{2}}wdV;\
|\xi|^{2}_{L^{2}_{p,b}(\mathfrak{B})}
\triangleq \int_{\mathfrak{B}}|\xi|^{2}wdV.$$
Since $A$ is admissible, the norm $W^{2,2}_{p,b,A}(\mathfrak{B})$ (with connection $A$ instead of $A_{O}$) is equivalent to $W^{2,2}_{p,b,A_{O}}(\mathfrak{B})$, etc. Thus, omitting the symbol of the connection, we denote the spaces by $W^{2,2}_{p,b}(\mathfrak{B})$, $W^{1,2}_{p,b}(\mathfrak{B})$.
When the bundle is trivial over $B_{O}$ and the connection is smooth across $O$, our $W^{2,2}_{p,b}-$norm is stronger than the usual $W^{2,2}-$norm weighted by $w$. It fits into our setting better, in the sense that the estimates in (\[equ bounding L2 norm of gradient for the model cone laplace equation\]), Proposition \[prop bounding L2 norm of Hessian for the model cone laplace equation\], and Corollary \[Cor solving model laplacian equation over the ball without the compact support RHS condition\] do not depend on the radius of the balls.
\[Def of L22 norm model case\]
The space $W^{2,2}_{p,b}(\mathfrak{B})$ is the completion of the section space $$\{\xi|\xi \in C^{2}(\mathfrak{B}\setminus O),\ |\xi|_{W^{2,2}_{p,b} (\mathfrak{B})}<\infty\}\ \textrm{under the}\ W^{2,2}_{p,b} (\mathfrak{B})- \textrm{norm}. \ \ \ \ \ \ \ $$ The space $W^{1,2}_{p,b}(\mathfrak{B})$ is the completion of the section space $$\{\underline{f}|\underline{f} \in C^{1}(\mathfrak{B}\setminus O),\ |\underline{f}|_{W^{1,2}_{p,b} (\mathfrak{B})}<\infty\}\ \textrm{under the}\ W^{1,2}_{p,b} (\mathfrak{B})- \textrm{norm}. \ \ \ \ \ \ \ $$ The space $L^{2}_{p,b}(\mathfrak{B})$ is the completion of the following under the $L^{2}_{p,b} (\mathfrak{B})-$norm. $$\{ \underline{f} | \underline{f} \in C^{\infty}_{c}(\mathfrak{B}\setminus O), \textrm{only finitely many terms in the series}\ (\ref{equ raw ODE for 1 forms})\ \textrm{are non-zero}\}.$$
We define the space $\mathfrak{W}^{1,2}_{p,b}$ (and similarly $\mathfrak{W}^{2,2}_{p,b}$, $\mathfrak{W}^{0,2}_{p,b}$) as the following $$\mathfrak{W}^{1,2}_{p,b}(B_{O})=\{\xi\in W^{1,2}_{loc} \ \textrm{in the coordinates}\ V_{+,O},\ V_{-,O}| |\xi|_{W^{1,2}_{p,b}(B_{O})}<\infty\}.$$
\[lem density of smooth functions in weighted L2 space\] For any $\rho>0$, $L^{2}_{p,b} [B_{O}(\rho)]$ is the space of measurable functions on $B_{O}(\rho)$ which are square integrable with respect to the weight $w$.
\[lem H=W\]$W^{k,2}_{p,b}[B_{O}(\rho)]=\mathfrak{W}^{k,2}_{p,b}[B_{O}(\rho)],\ k=0,1,2.$
**The proofs of the above two lemmas are deferred to Section \[section Appendix D: Density and smooth convergence of Fourier Series\]**. Lemma \[lem density of smooth functions in weighted L2 space\] reduces Theorem \[thm W22 estimate on 1-forms\] to a finite-dimensional problem. Concerning Lemma \[lem H=W\], we expect that when the bundle $\Xi$ is trivial, $ \mathfrak{W}^{1,2}_{p,b}[B_{O}(\rho)]$ can even be approximated by sections that are smooth across the singularity. This is because $w$ and $r^{-2}w$ are $A_{p}-$weights (see Theorem 1 in [@Goldstein]).
Our main result in this section is the following. **The crucial observation is that the $L^{2}-$estimate (given by Theorem \[thm existence of good solutions to the uniform ODE with estimates\]) and simple integration by parts yield the $W^{2,2}-$estimate**.
\[thm W22 estimate on 1-forms\]Suppose $p<0$ is $A_{O}-$generic and $b\geq 0$. Then there is a bounded linear operator $\mathfrak{Q}_{p,b,A_{O}}$ from $L^{2}_{p,b}[B_{O}(\frac{1}{4})]$ to $W^{2,2}_{p,b}[B_{O}(\frac{1}{4})]$ such that
- $-L_{A_{O}}^{2}\mathfrak{Q}_{p,b,A_{O}}=Id$ from $L^{2}_{p,b}[B_{O}(\frac{1}{4})]$ to itself,
- the bound on $\mathfrak{Q}_{p,b,A_{O}}$ is less than a $\bar{C}$ as in Definition \[Def special constants\].
This is a direct application of Theorem \[thm existence of good solutions to the uniform ODE with estimates\]. We first prove it assuming that $\underline{f}$ satisfies the conditions in Definition \[Def of L22 norm model case\] for $L^{2}_{p,b}[B_{O}(\frac{1}{4})]$. We write $\underline{f}=\Sigma_{v<v_{0}}r^{-\frac{3}{2}}f_{v}\Psi_{v}$ for some $0<v_{0}<\infty$ (see Definition \[Def v spectrum\]). For each $f_{v}$, we define $\xi_{v}$ as the solution to (\[equ the ODE of v for 1 forms\]) in (\[equ solution when v is real and p>1-v is integral from 1 to r\]), (\[equ solution formula when v positive and p less than 1-v\]), (\[equ solution when v is purely imaginary\]), (\[equ solution when v=0\]), and let $$\label{equ Def of mathfrak Q right inverse}
\mathfrak{Q}_{p,b,A_{O}}\underline{f}\triangleq \Sigma_{v}r^{-\frac{3}{2}}\xi_{v}\Psi_{v}\triangleq \xi.$$ It suffices to bound the $L^{2}_{p-2,b}[B_{O}(\frac{1}{4})]-$norm of $\xi$. By Theorem \[thm existence of good solutions to the uniform ODE with estimates\], we find $$\begin{aligned}
\label{eqnarray L2 estimate in model case for laplacian}
& &\int_{B_{O}(\frac{1}{4})}\frac{|\xi|^{2}}{r^{4}}wdV
= \Sigma_{v}\int^{\frac{1}{4}}_{0}\int_{S^{6}(1)}\xi_{v}^{2}r^{-9}|\Psi_{v}|^{2}_{S}r^{6}wd\theta dr\nonumber
= \Sigma_{v}\int^{\frac{1}{4}}_{0}|\xi_{v}|^{2} r^{-3}wdr
\\&\leq & \bar{C} \Sigma_{v}\int^{\frac{1}{4}}_{0}f_{v}^{2} rwdr=\bar{C}\int_{B_{O}(\frac{1}{4})}|\underline{f}|^{2}wdV.
\end{aligned}$$
By Definition \[Def of L22 norm model case\], (\[eqnarray L2 estimate in model case for laplacian\]), (\[equ bounding L2 norm of gradient for the model cone laplace equation\]), and Proposition \[prop bounding L2 norm of Hessian for the model cone laplace equation\], the proof is complete when $\underline{f}$ satisfies the a priori conditions in the first paragraph of this proof.
In the general case, for any $\underline{f}\in L^{2}_{p,b}$, by Lemma \[lem density of smooth functions in weighted L2 space\], there exists a sequence $\underline{f}_{j}\rightarrow \underline{f}$ in the $L^{2}_{p,b}-$topology such that each $\underline{f}_{j}$ satisfies the a priori conditions. We denote by $\xi_{j}$ the section $\mathfrak{Q}_{p,b,A_{O}}\underline{f}_{j}$. By the a priori estimate proved in the first step and the linearity of $\mathfrak{Q}_{p,b,A_{O}}$, $\xi_{j}$ is a Cauchy sequence in $W^{2,2}_{p,b}[B_{O}(\frac{1}{4})]$. By completeness, $\xi_{j}$ converges to some $\xi_{\infty}$ in $W^{2,2}_{p,b}[B_{O}(\frac{1}{4})]$. Moreover, $\xi_{\infty}$ satisfies the estimate in Theorem \[thm W22 estimate on 1-forms\]. We thus define $\mathfrak{Q}_{p,b,A_{O}}\underline{f}$ as $\xi_{\infty}$; the bounds in Theorem \[thm W22 estimate on 1-forms\] imply that this definition does not depend on the approximation. The proof is complete.
\[Cor solving model laplacian equation over the ball without the compact support RHS condition\] Let $p,b$ be as in Theorem \[thm W22 estimate on 1-forms\]. There exists a local right inverse $Q_{p,b,A_{O}}$ of $L_{A_{O}}$ with the following properties. For any $\tau\leq \frac{1}{10}$,
- $Q_{p,b,A_{O}}$ is bounded from $L^{2}_{p,b}[B_{O}(\tau)]$ to $W^{1,2}_{p,b}[B_{O}(\tau)]$. The bound is less than a $\bar{C}$ as in Definition \[Def special constants\]. In particular, it does not depend on $\tau$.
- $L_{A_{O}}Q_{p,b,A_{O}}=Id$ from $L^{2}_{p,b}[B_{O}(\tau)]$ to itself.
By extending $\underline{f}$ by zero outside $B_{O}(\tau)$, it can be viewed as a section in $L^{2}_{p,b}[B_{O}(\frac{1}{4})]$. It suffices to take $Q_{p,b,A_{O}}=-L_{A_{O}}\mathfrak{Q}_{p,b,A_{O}}$ and restrict it to $B_{O}(\tau)$. Under this extension, $Q_{p,b,A_{O}}$ does not depend on $\tau$.
Next, we establish two crucial building-blocks of Theorem \[thm W22 estimate on 1-forms\].
\[lem bounding L2 norm of gradient for the model cone laplace equation\]Under the same conditions as in Theorem \[thm W22 estimate on 1-forms\], suppose $\underline{f}$ satisfies the a priori conditions in the first paragraph of the proof of Theorem \[thm W22 estimate on 1-forms\]. Let $\xi=\mathfrak{Q}_{p,b,A_{O}}\underline{f}$. Then for any $ \varrho \in (0,1]$, the following bound holds. $$\label{equ bounding L2 norm of gradient for the model cone laplace equation}\int_{B_{O}(\frac{\varrho}{4.5})}\frac{|\nabla_{A_{O}}\xi|^{2}}{r^{2}}wdV\leq \bar{C} (\int_{B_{O}(\frac{\varrho}{4})}\frac{|\xi|^{2}}{r^{4}}wdV+\int_{B_{O}(\frac{\varrho}{4})}|\underline{f}|^{2} wdV).$$ $\bar{C}$ is as in Definition \[Def special constants\], and is thus **independent of $\varrho$**.
The estimate is independent of $\varrho$ because (\[equ bounding L2 norm of gradient for the model cone laplace equation\]) is scaling-correct. This is important for (\[equ Thm global apriori est 1\]). When $\underline{f}$ satisfies the a priori conditions, $\mathfrak{Q}_{p,b,A_{O}}\underline{f}$ is smooth away from $O$; thus every term in the proofs of Theorem \[thm W22 estimate on 1-forms\], Lemma \[lem bounding L2 norm of gradient for the model cone laplace equation\], and Proposition \[prop bounding L2 norm of Hessian for the model cone laplace equation\] makes sense.
Let $\eta_{\epsilon}$ be the standard cut-off function (of the singular point $O$) which vanishes in $B_{O}(\epsilon)$, and is identically $1$ when $r\geq 2\epsilon$. We have $$\label{equ cut-off function bound near the singular point}
\epsilon|\nabla \eta_{\epsilon}|+\epsilon^{2}|\nabla^{2} \eta_{\epsilon}|< \bar{C},\ \textrm{when}\ \epsilon\ll\varrho.
Let $\chi$ be the standard cut-off function supported in $B_{O}(\frac{\varrho}{4})$ and identically $1$ over $B_{O}(\frac{\varrho}{4.5})$, $|\nabla \chi|\leq \frac{\bar{C}}{\varrho}$. Lemma \[lem formula for LA squared\] implies $$\label{equ in L2 bound on the gradient consequence of Bochner formula}
\nabla^{\star}_{A_{O}}\nabla_{A_{O}}\xi=-\underline{f}+F_{A_{O}}\otimes \xi.$$ We compute $$\begin{aligned}
\label{eqnarray in lemma bounding L2 norm of gradient for the model cone laplace equation}& &
\int_{B_{O}(\frac{\varrho}{4})}\frac{|\nabla_{A_{O}}\xi|^{2}}{r^{2}}\eta_{\epsilon}\chi^{2} wdV
\\&=& \int_{B_{O}(\frac{\varrho}{4})}\frac{<\xi, \nabla^{\star}_{A_{O}}\nabla_{A_{O}}\xi>}{r^{2}}\eta_{\epsilon}w\chi^{2} dV- \int_{B_{O}(\frac{\varrho}{4})}<\xi, [\nabla(\frac{\eta_{\epsilon}\chi^{2} w}{r^{2}})]\lrcorner\nabla_{A_{O}}\xi> dV\nonumber
\\&=& -\int_{B_{O}(\frac{\varrho}{4})}\frac{<\xi,\underline{f}>}{r^{2}}\eta_{\epsilon}w\chi^{2} dV+\int_{B_{O}(\frac{\varrho}{4})}\frac{<\xi,F_{A_{O}}\otimes \xi>}{r^{2}}\eta_{\epsilon}w\chi^{2} dV\nonumber
\\& & -\int_{B_{O}(\frac{\varrho}{4})}<\xi, (\nabla \eta_{\epsilon})\lrcorner\nabla_{A_{O}}\xi> \frac{\chi^{2} w}{r^{2}}dV-\int_{B_{O}(\frac{\varrho}{4})}<\xi, (\nabla \frac{\chi^{2} w}{r^{2}})\lrcorner\nabla_{A_{O}}\xi> \eta_{\epsilon}dV
\nonumber\end{aligned}$$
By Definition \[Def of L22 norm model case\], the cheap estimate $|F_{A_{O}}|\leq \frac{\bar{C}}{r^{2}}$, and the Cauchy-Schwarz inequality, the first two terms on the right-most side of (\[eqnarray in lemma bounding L2 norm of gradient for the model cone laplace equation\]) are bounded by the right hand side of (\[equ bounding L2 norm of gradient for the model cone laplace equation\]), uniformly in $\epsilon$. Note $$\label{equ gradient of the cutoff function and weight}|\nabla w|\leq \frac{Cw}{r}\Longrightarrow|\nabla (\frac{\chi^{2} w}{r^{2}})|\leq \frac{\bar{C}\chi^{2} w}{r^{3}}+2\chi|\nabla\chi|\frac{ w}{r^{2}},$$ hence we obtain the following bound on the last term. $$\begin{aligned}
\label{eqnarray in model L12 bound 1}& &\int_{B_{O}(\frac{\varrho}{4})}<\xi, (\nabla \frac{\chi^{2} w}{r^{2}})\lrcorner\nabla_{A_{O}}\xi> \eta_{\epsilon}dV
\\&\leq & \bar{C}\int_{B_{O}(\frac{\varrho}{4})} |\xi||\nabla_{A_{O}}\xi|\frac{\chi^{2}w\eta_{\epsilon}}{r^{3}} dV+ \bar{C}\int_{B_{O}(\frac{\varrho}{4})} |\xi||\nabla_{A_{O}}\xi|\frac{\chi|\nabla \chi|w\eta_{\epsilon}}{r^{2}} dV \nonumber
\\&\leq & \vartheta \int_{B_{O}(\frac{\varrho}{4})}\frac{|\nabla_{A_{O}}\xi|^{2}}{r^{2}}\eta_{\epsilon}\chi^{2} wdV+\bar{C}_{\vartheta}\int_{B_{O}(\frac{\varrho}{4})}\frac{|\xi|^{2}}{r^{4}}\chi^{2}w\eta_{\epsilon}dV\nonumber
\\& &+\bar{C}_{\vartheta}\int_{B_{O}(\frac{\varrho}{4})}|\nabla\chi|^{2}\frac{|\xi|^{2}}{r^{2}}w\eta_{\epsilon}dV.\nonumber\end{aligned}$$ On the last term above, we notice that in $B_{O}(\frac{\varrho}{4})$, $\frac{1}{\varrho}\leq \frac{1}{r}$. By $|\nabla \chi|\leq \frac{\bar{C}}{\varrho}$, we obtain the following bound $$\label{equ in model L12 bound 1}
\bar{C}_{\vartheta}\int_{B_{O}(\frac{\varrho}{4})}|\nabla\chi|^{2}\frac{|\xi|^{2}}{r^{2}}w\eta_{\epsilon}dV\leq \bar{C}_{\vartheta}\int_{B_{O}(\frac{\varrho}{4})}\frac{|\xi|^{2}}{r^{4}}w\eta_{\epsilon}dV.$$
Therefore (\[eqnarray in model L12 bound 1\]) and (\[equ in model L12 bound 1\]) imply $$\begin{aligned}
\label{eqnarray the L12 estimate model case with a small error term on the right to be absorbed}
& &\int_{B_{O}(\frac{\varrho}{4})}<\xi, (\nabla \frac{\chi^{2} w}{r^{2}})\lrcorner\nabla_{A_{O}}\xi> \eta_{\epsilon}dV
\\&\leq &\vartheta \int_{B_{O}(\frac{\varrho}{4})}\frac{|\nabla_{A_{O}}\xi|^{2}}{r^{2}}\eta_{\epsilon}\chi^{2} wdV+\bar{C}_{\vartheta}\int_{B_{O}(\frac{\varrho}{4})}\frac{|\xi|^{2}}{r^{4}}wdV\nonumber.\end{aligned}$$
Let $\vartheta=\frac{1}{10}$; plugging (\[eqnarray the L12 estimate model case with a small error term on the right to be absorbed\]) into (\[eqnarray in lemma bounding L2 norm of gradient for the model cone laplace equation\]) and using the remark under (\[eqnarray in lemma bounding L2 norm of gradient for the model cone laplace equation\]), we find $$\begin{aligned}
\label{eqnarray L2 bound on the gradient model case crucial decomposition}
& &\int_{B_{O}(\frac{\varrho}{4})}\frac{|\nabla_{A_{O}}\xi|^{2}}{r^{2}}\eta_{\epsilon}w\chi^{2} dV
\\&\leq & \bar{C}\{\int_{B_{O}(\frac{\varrho}{4})}\frac{|\xi|^{2}}{r^{4}}wdV+\int_{B_{O}(\frac{\varrho}{4})}|\underline{f}|^{2} wdV+|\int_{B_{O}(\frac{\varrho}{4})}<\xi,\nabla_{A_{O},\nabla \eta_{\epsilon}}\xi> \frac{\chi^{2} w}{r^{2}}dV|\}. \nonumber\end{aligned}$$ Denote the last term above (the one involving $\nabla\eta_{\epsilon}$) by $\Pi_{1}$. It suffices to show $\Pi_{1}$ approaches $0$ as $\epsilon\rightarrow 0$. The condition on $\underline{f}$ implies $L^{2}_{A_{O}}\xi=0$ in $B_{O}(r_{\underline{f}})$, for some $r_{\underline{f}}>0$. Since $\xi=\mathfrak{Q}_{p,b,A_{O}}\underline{f}\in L^{2}_{p-2,b}[B_{O}(\frac{\varrho}{4})]$, Lemma \[lem bound on C3 norm of solution to laplace equation when f is smooth and vainishes near O\] (with “$p$” replaced by $p-1$) gives $$|\xi|\leq \frac{C_{\underline{f}}}{r^{\frac{3}{2}+p-\lambda}},\ |\nabla_{A_{O}}\xi|\leq \frac{C_{\underline{f}}}{r^{\frac{5}{2}+p-\lambda}},\ \lambda>0.\ \textrm{Hence, when}\ \epsilon\ \textrm{goes to}\ 0,$$ $$\label{equ integration by parts holds true in the case of L12 model estimate wrt to cone}
\Pi_{1}\leq C_{\underline{f}}\int_{B_{O}(2\epsilon)-B_{O}(\epsilon)}|\nabla_{A_{O}}\xi||\xi|r^{2p-3}(-\log r)^{2b}dV\leq C_{\underline{f}}\epsilon^{2\lambda}(-\log \epsilon)^{2b}\rightarrow 0\nonumber$$ The proof of (\[equ bounding L2 norm of gradient for the model cone laplace equation\]) is complete.
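The decay used in the last step, namely that $\epsilon^{2\lambda}(-\log \epsilon)^{2b}\rightarrow 0$ whenever $\lambda>0$ and $b\geq 0$, can be illustrated numerically. The following sketch (an illustration only, not part of the proof; the sample values $\lambda=0.3$, $b=1$ are arbitrary) evaluates the bound on $\Pi_{1}$ along $\epsilon=10^{-k}$ and confirms that any positive power of $\epsilon$ eventually dominates the logarithmic factor:

```python
import math

def decay(eps, lam=0.3, b=1.0):
    """Evaluate eps^(2*lam) * (-log eps)^(2*b), the bound on Pi_1."""
    return eps ** (2 * lam) * (-math.log(eps)) ** (2 * b)

# Sample eps -> 0 along a geometric sequence; past the critical radius
# exp(-b/lam), the bound decreases monotonically to 0.
values = [decay(10.0 ** (-k)) for k in range(2, 16)]
assert all(u > v for u, v in zip(values, values[1:]))  # monotone decrease
assert values[-1] < 1e-5                               # already tiny at eps = 1e-15
```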
Using almost the same technique, we obtain
\[prop bounding L2 norm of Hessian for the model cone laplace equation\]Let $p,b$,$\underline{f}$,$\xi,\varrho$ be as in Lemma \[lem bounding L2 norm of gradient for the model cone laplace equation\]. Then $$\begin{aligned}
& & \int_{B_{O}(\frac{\varrho}{5})}|\nabla^{2}_{A_{O}}\xi|^{2}wdV
\\& \leq & \bar{C}\int_{B_{O}(\frac{\varrho}{4.5})}\frac{|\nabla_{A_{O}}\xi|^{2}}{r^{2}}wdV+\bar{C} \int_{B_{O}(\frac{\varrho}{4.5})}\frac{|\xi|^{2}}{r^{4}}wdV+\bar{C}\int_{B_{O}(\frac{\varrho}{4.5})}|\underline{f}|^{2} wdV.
\end{aligned}$$
**For the reader’s convenience, we give the full proof of Proposition \[prop bounding L2 norm of Hessian for the model cone laplace equation\] in Section \[section Appendix E: Various integral identities and proof\]**.
\[Def global weight and Sobolev spaces\] Given an admissible connection $A$, let $\tau_{0}$ be small enough. Let $p<0$ be $A-$generic and $b\geq 0$. Let $w_{p,b}$ be the smooth function such that for any singular point $O_{j}$, $$w_{p,b}=\left \{\begin{array}{cc}
1& \textrm{when}\ r\geq 2\tau_{0}, \\
r^{2p}(-\log r)^{2b}& \textrm{when}\ r\leq \tau_{0},\end{array} \right.$$ where $r$ is the distance to $O_{j}$ in local coordinates (by abuse of notation). Away from the coordinate neighbourhoods of the singular points, $w_{p,b}\equiv 1$.
Then we define the global $L^{2}_{p,b}-$space as the completion of smooth functions (away from the singular points) under the norm $|\cdot|_{L^{2}_{p,b}}$: $$\label{equ Def of global weighted L2 space for tame connections}
|\xi|^{2}_{L^{2}_{p,b}}=\int_{M}|\xi|^{2}w_{p,b}dV.$$
We define the space $W^{1,2}_{p,b}$ as the completion of smooth functions (away from the singular points) under the norm $$|\xi|_{W^{1,2}_{p,b}}=|\nabla_{A}\xi|_{L^{2}_{p,b}}+|\xi|_{L^{2}_{p-1,b}}.$$
**Convention for section-spaces**: the global norms over $M$ are denoted simply by $W^{1,2}_{p,b}$ or $L^{2}_{p,b}$, without any symbol for the domain; the local norms usually carry a symbol indicating the domain (c.f. Definition \[Def of L22 norm model case\]).
\[Def volume forms\] By abuse of notation, let $dV$ denote all the volume forms in our integrations. The convention is: locally, it usually means the Euclidean volume form; globally, it usually means the volume form determined by $\phi$.
In any case, the various $dV$'s are equivalent up to a constant depending on the (reference) $G_{2}-$structure.
Global Theory
=============
By abuse of notation, from now on let $f$ denote the image.
Global apriori estimate. \[section Global apriori estimate\]
------------------------------------------------------------
\[Def spectrum gap\] For any real number $\tau$, let $\vartheta_{\tau}$ denote the $v-$spectrum gap of the $\Upsilon_{A_{O_{j}}}$'s at $\tau$, i.e.\ the distance from $\tau$ to the closest $v-$eigenvalue (of any $\Upsilon_{A_{O_{j}}}$) other than $\tau$ itself. When this gap is $>1$, we set $\vartheta_{\tau}=1$.
The following bootstrapping lemma for the model operator is important especially for Theorem \[thm global apriori L22 estimate\].
\[lem bound on C3 norm of solution to laplace equation when f is smooth and vainishes near O\] Let $ \tau_{0} \in (0,\frac{1}{10})$, $p< 0$, $b\geq 0$. Suppose $\xi\in L^{2}_{p-1,b}[B_{O}(2\tau_{0})]$, and $L^{2}_{A_{O}}\xi=0$ in $B_{O}(2\tau_{0})$. Then $\xi$ is actually in $ L^{2}_{p-1-\lambda,b}[B_{O}(2\tau_{0})]$ for all $0\leq \lambda <\vartheta_{-p}$, and we have the following estimates. $$\label{equ local kernel regularity 0}
|\xi|_{L^{2}_{p-1-\lambda,b}[B_{O}(2\tau_{0})]}\leq \frac{\bar{C}}{\tau_{0}^{1+\lambda}}|\xi|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]}.$$ $$\label{equ local kernel regularity 0.5}
r|\xi|+r^{2}|\nabla_{A_{O}}\xi|+r^{3}|\nabla^{2}_{A_{O}}\xi|+r^{4}|\nabla^{3}_{A_{O}}\xi|\leq C_{\xi,\tau_{0}}r^{-\frac{3}{2}-p+\lambda}\ \textrm{when}\ r<\tau_{0}.$$ $\bar{C}$ is **independent of $\tau_{0}$**.
We do not need $p$ to be $A_{O}-$generic: by condition (\[equ local regularity L2p-1 bound\]), any “harmonic” section in $W^{1,2}_{p,b}$ has no non-trivial component with $v=-p$.
**The idea of the proof is to use a Fourier expansion to rule out the bad eigenvalues**. We write $$\label{equ local kernel regularity 1}
\xi=\Sigma_{v}r^{-\frac{3}{2}}U_{v}\Psi_{v}\ (\textrm{see Definition}\ \ref{Def v spectrum}).$$ Since $\xi$ is harmonic and $U_{v}$ is the spherical inner product, by (\[equ cone formula for square of dirac\]) (with right hand side equal to $0$), we directly verify $$\frac{\partial ^{2} U_{v} }{\partial r^{2}}+\frac{1}{r}\frac{\partial U_{v} }{\partial r}-\frac{v^{2}U_{v}}{r^{2}}=0.$$ This means $$\label{equ local kernel regularity 2} U_{v}=\left \{
\begin{array}{ccc} c_{1,v}r^{-v}+c_{2,v}r^{v};&\ v>0.\\
c_{1,v}+c_{2,v}\log r;&\ v=0.\\
c_{1,v}\sin(v\log r)+c_{2,v}\cos(v\log r),&\ v<0.
\end{array}\right.$$ $c_{1,v}$, $c_{2,v}$ are constants. The condition $\xi\in L^{2}_{p-1,b}[B_{O}(2\tau_{0})]$ implies $$\label{equ local regularity L2p-1 bound}
\int^{2\tau_{0}}_{0} U_{v}^{2}r^{2p-1}(-\log r)^{2b}dr<\infty.$$ Since $p< 0$, the terms $1,\log r,\sin (v\log r),\cos (v\log r),r^{-v}$ cannot appear. Furthermore, $r^{v}$ cannot appear if $2v+2p\leq 0$. Thus, only those $r^{v}$ with $v>-p$ will appear. Moreover, by the discreteness of the spectrum, only those $r^{v}$ with $v\geq -p+\vartheta_{-p}$ can appear. In this case, we have a $v-$independent $L^{2}_{p-1-\lambda,b}-$estimate in terms of the $L^{2}_{p,b}-$norm. $$\begin{aligned}
\int^{2\tau_{0}}_{0}| r^{v}|^{2}r^{2p-2\lambda-1}(-\log r)^{2b}dr\leq \frac{(2\tau_{0})^{2v+2p-2\lambda}(-\log 2\tau_{0})^{b}}{2v+2p-2\lambda}.
\end{aligned}$$
On the other hand, $$\int^{2\tau_{0}}_{0}|r^{v}|^{2}r^{2p+1}(-\log r)^{2b}dr
\geq (-\log 2\tau_{0})^{b}\frac{(2\tau_{0})^{2v+2p+2}}{2v+2p+2}.$$ Then $$\label{equ local kernel regularity 3}
\int^{2\tau_{0}}_{0}|r^{v}|^{2}r^{2p-2\lambda-1}(-\log r)^{2b}dr\leq \bar{C}_{\lambda}\tau_{0}^{-2-2\lambda}\int^{2\tau_{0}}_{0}|r^{v}|^{2}r^{2p+1}(-\log r)^{2b}dr.$$ Using (\[equ local kernel regularity 1\]) and (\[equ local kernel regularity 2\]), (\[equ local kernel regularity 3\]) is equivalent to (\[equ local kernel regularity 0\]). The estimate (\[equ local kernel regularity 0.5\]) is a direct consequence of (\[equ 0 estimate local version\]) and Lemma \[lem Schauder estimate in local coordinates in small balls\] (with $k=3$, $A=A_{O}$).
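The solutions in (\[equ local kernel regularity 2\]) can be double-checked by a small numerical experiment (illustrative only; the sample values $v=2.5$ and base point $r=0.5$ are arbitrary): central differences confirm that $r^{v}$, $r^{-v}$ and, for $v=0$, the functions $1$ and $\log r$ all satisfy $\frac{\partial^{2}U}{\partial r^{2}}+\frac{1}{r}\frac{\partial U}{\partial r}-\frac{v^{2}U}{r^{2}}=0$.

```python
import math

def euler_residual(U, r, v, h=1e-4):
    """Central-difference residual of U'' + U'/r - v^2 U / r^2 at r."""
    d1 = (U(r + h) - U(r - h)) / (2 * h)
    d2 = (U(r + h) - 2 * U(r) + U(r - h)) / (h * h)
    return d2 + d1 / r - v * v * U(r) / (r * r)

v, r = 2.5, 0.5
assert abs(euler_residual(lambda s: s ** v, r, v)) < 1e-3     # U = r^v
assert abs(euler_residual(lambda s: s ** (-v), r, v)) < 1e-3  # U = r^(-v)
assert abs(euler_residual(math.log, r, 0.0)) < 1e-3           # v = 0: U = log r
assert abs(euler_residual(lambda s: 1.0, r, 0.0)) < 1e-12     # v = 0: U = 1
```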
\[clm local regularity L12 for global apriori estimate\] Under the same conditions in Lemma \[lem bound on C3 norm of solution to laplace equation when f is smooth and vainishes near O\], we have $$\label{equ Thm global apriori estimate}|\xi|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}\leq \frac{\bar{C}}{\tau_{0}}|\xi|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]},\ \bar{C}\ \textrm{is independent of}\
\tau_{0}.$$
It suffices to show $\int_{B_{O}(\tau_{0})}|\nabla_{A_{O}}\xi|^{2}wdV\leq \bar{C} \int_{B_{O}(2\tau_{0})}\frac{|\xi|^{2}}{r^{2}}wdV.$ This is a much easier version of Lemma \[lem bounding L2 norm of gradient for the model cone laplace equation\]; we only need to run the same argument with the measure $\eta_{\epsilon}\chi^{2}wdV$ instead of $\frac{\eta_{\epsilon}\chi^{2}w}{r^{2}}dV$.
**Our main theorem in this section implies the image is closed.**
(Global a priori estimate)\[thm global apriori L22 estimate\] Suppose $\xi\in W^{1,2}_{p,b}$; then $$|\xi|_{W^{1,2}_{p,b}}\leq C(|L_{A}\xi|_{L^{2}_{p,b}}+|\xi|_{L^{2}_{p,b}}).$$
**The observation is that we can reduce the estimate for $L_{A}$ to an estimate for the model operator**. We only need to derive this estimate near the singularities; away from them it follows from the standard estimates. We then patch up the estimates in each piece.
For any $\epsilon_{0}$, when $\tau_{0}$ is small enough, given $\xi \in W^{1,2}_{p,b}[B_{O}(2\tau_{0})]$, by Corollary \[Cor solving model laplacian equation over the ball without the compact support RHS condition\], there exists an $\eta$ such that $$\begin{aligned}
\label{eqnarray Thm global apriori estimate}
& &L_{A_{O}}\eta=L_{A_{O}}\xi,\ \textrm{and}
\\& & |\eta|_{W^{1,2}_{p,b}[B_{O}(2\tau_{0})]}\leq \bar{C}|L_{A_{O}}\xi|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]}\nonumber
\\& \leq & \bar{C}|(L_{A}-L_{A_{O}})\xi|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]}+ \bar{C}|L_{A}\xi|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]}\nonumber
\\&\leq & \bar{C}|L_{A}\xi|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]}+ \epsilon_{0}|\xi|_{W^{1,2}_{p,b}[B_{O}(2\tau_{0})]};\ \bar{C}\ \textrm{is independent of}\ \epsilon_{0}\ \textrm{and}\ \tau_{0}.\nonumber
\end{aligned}$$ Then we estimate $$\begin{aligned}
\label{equ priliminary L22 apriori estimate }
& & |\xi|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}\leq |\xi-\eta|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}+|\eta|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}\nonumber
\\&\leq & \bar{C} |L_{A}\xi|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]}+ \epsilon_{0}|\xi|_{W^{1,2}_{p,b}[B_{O}(2\tau_{0})]}+ |\xi-\eta|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}.
\end{aligned}$$ $\xi-\eta$ satisfies $L_{A_{O}}(\xi-\eta)=0$ in $B_{O}(2\tau_{0})$. It is then smooth away from the singularity, and we have $$-L^{2}_{A_{O}}(\xi-\eta)=0.$$
Thus (\[equ priliminary L22 apriori estimate \]) and Claim \[clm local regularity L12 for global apriori estimate\] (for $\xi-\eta$) yield $$\label{equ Thm Global apriori estimate -1}
|\xi|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}\leq \bar{C}|L_{A}\xi|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]}+ \epsilon_{0}|\xi|_{W^{1,2}_{p,b}[B_{O}(2\tau_{0})]}+ \frac{\bar{C}}{\tau_{0}}|\xi-\eta|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]}.$$
Within $B_{O}(2\tau_{0})$, we have $\frac{1}{\tau_{0}}\leq \frac{2}{r}$, then by definition and (\[eqnarray Thm global apriori estimate\]), we obtain $$\label{equ Thm global apriori est 0}
\frac{|\eta|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]}}{\tau_{0}}
\leq
4|\eta|_{L^{2}_{p-1,b}[B_{O}(2\tau_{0})]}\leq \bar{C}|L_{A}\xi|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]}+ 4\epsilon_{0}|\xi|_{W^{1,2}_{p,b}[B_{O}(2\tau_{0})]},$$ where $\bar{C}$ **does not depend on $\epsilon_{0}$ or $\tau_{0}$**. Then (\[equ Thm global apriori estimate\]), (\[equ Thm Global apriori estimate -1\]), and (\[equ Thm global apriori est 0\]) imply $$\label{equ Thm global apriori est 1}
|\xi|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}\leq \bar{C}|L_{A}\xi|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]}+ \bar{C}\epsilon_{0}|\xi|_{W^{1,2}_{p,b}[B_{O}(2\tau_{0})]}+ \frac{\bar{C}}{\tau_{0}}|\xi|_{L^{2}_{p,b}[B_{O}(2\tau_{0})]}.$$
Away from the singularities we have $$\label{equ Thm global apriori est away from singularities}
|\xi|_{W^{1,2}_{p,b}(M_{\frac{\tau_{0}}{2}})}\leq C_{\epsilon_{0},\tau_{0}}|L_{A}\xi|_{L^{2}_{p,b}(M_{\frac{\tau_{0}}{4}})}+ C_{b,p,\tau_{0}}|\xi|_{L^{2}_{p,b}(M_{\frac{\tau_{0}}{4}})}.$$ Adding (\[equ Thm global apriori est 1\]) and (\[equ Thm global apriori est away from singularities\]), we find $$|\xi|_{W^{1,2}_{p,b}}\leq C|L_{A}\xi|_{L^{2}_{p,b}}+ \bar{C}\epsilon_{0}|\xi|_{W^{1,2}_{p,b}}+ C|\xi|_{L^{2}_{p,b}}.$$ Choosing the $\epsilon_{0}$ (initially in (\[eqnarray Thm global apriori estimate\])) to be less than $\frac{1}{20\bar{C}}$ (we let $\tau_{0}$ be small enough with respect to $\epsilon_{0}$), the proof of Theorem \[thm global apriori L22 estimate\] is complete.
\[thm existence of a good solution when f is in the image\] Let $A,p,b$ be as in Theorem \[Thm Fredholm\]. Then $Ker L_{A}$ in $W^{1,2}_{p,b}$ is finite-dimensional. Moreover, for any $f\in Image L_{A}$, there exists a solution $\xi$ to $L_{A}\xi=f$ such that $\xi\in Ker^{\perp}L_{A}\subset W^{1,2}_{p,b}$ and $|\xi|_{W^{1,2}_{p,b}}\leq C|f|_{L^{2}_{p,b}}$.
We first show that $Ker L_{A}$ is finite-dimensional. Were this not true, there would exist countably many $\xi_{k}$'s in $Ker L_{A}$ such that for any $k$, $\xi_{k}$ is not in the span of the preceding vectors. Then, using the $L^{2}_{p,b}-$inner product, the Gram-Schmidt process produces an orthonormal sequence $\widehat{\xi}_{k}$ of sections in $Ker L_{A}$. On the other hand, by Theorem \[thm global apriori L22 estimate\], we have $|\widehat{\xi}_{k}|_{W^{1,2}_{p,b}}\leq C$; then Lemma \[lem compact imbedding\] implies that a subsequence of $\widehat{\xi}_{k}$ converges in $L^{2}_{p,b}$, which contradicts orthonormality.
Since $Ker L_{A}\subset\ W^{1,2}_{p,b}$ is now shown to be finite dimensional, we consider the projection of $\underline{\xi}$ onto $Ker L_{A}$ (with respect to the $L^{2}_{p,b}-$inner product) as $\underline{\xi}\parallel_{Ker L_{A}}$. Given any $\underline{\xi}$ such that $L_{A}\underline{\xi}=f$, we consider $$\xi=\underline{\xi}-[\underline{\xi}\parallel_{Ker L_{A}}],\ \textrm{then}\ \xi\in Ker^{\perp}L_{A}.$$
To prove the estimate in Theorem \[thm existence of a good solution when f is in the image\], by Theorem \[thm global apriori L22 estimate\], it suffices to show $$\label{equ apriori L2 estimate when f is in the image and xi perdicular to kernel}
|\xi|_{L^{2}_{p,b}}\leq C|f|_{L^{2}_{p,b}}.$$ Were (\[equ apriori L2 estimate when f is in the image and xi perdicular to kernel\]) not true, there would exist a sequence $\xi_{i}\in W^{1,2}_{p,b}$, $f_{i}=L_{A}\xi_{i}\in Image L_{A}\subset L^{2}_{p,b}$, such that $$|\xi_{i}|_{L^{2}_{p,b}}=1,\ \xi_{i}\in ker^{\perp} L_{A},\ \textrm{but}\
|f_{i}|_{L^{2}_{p,b}}\rightarrow 0.$$
By Theorem \[thm global apriori L22 estimate\] and Lemma \[lem compact imbedding\], a subsequence of $\xi_{i}$ (still denoted $\xi_{i}$) converges in $L^{2}_{p,b}$ to some $\xi_{\infty}$. By the linearity of $L_{A}$, these in turn imply that $\xi_{i}$ is a Cauchy sequence in $W^{1,2}_{p,b}$: $$|\xi_{i}-\xi_{j}|_{W^{1,2}_{p,b}}\leq C|\xi_{i}-\xi_{j}|_{L^{2}_{p,b}}+C|f_{i}-f_{j}|_{L^{2}_{p,b}}\rightarrow 0.$$ Then $\xi_{i}$ converges to $\xi_{\infty}$ in $W^{1,2}_{p,b}$, so $\xi_{\infty}\in W^{1,2}_{p,b}$, $|\xi_{\infty}|_{L^{2}_{p,b}}=1$, and $L_{A}\xi_{\infty}=0$. But $\xi_{i}\in ker^{\perp} L_{A}$ implies $\xi_{\infty}\perp ker L_{A}$. This is a contradiction.
Hybrid space and $C^{0}-$estimate.\[section Hybrid space and C0-estimat\]
-------------------------------------------------------------------------
Using an interpolation trick, the $C^{0}-$estimate is a direct corollary of the $W^{1,2}_{p,b}$-estimate. To see this, for any point $q$ close enough to a singular point $O$, **$\xi\in W^{1,2}_{p,b}\subset L^{2}_{p-1,b}$ implies that the average of $|\xi|$ over $B_{q}(\frac{r_{q}}{2})$ is bounded correctly, i.e.\ by $r_{q}^{-\frac{5}{2}-p}(-\log r_{q})^{-b}$. Since $\xi$ satisfies an elliptic equation, the interpolation trick gives the $C^{0}-$bound.** For second order uniformly elliptic equations of divergence form, this can be done by the well-known Nash-Moser iteration.
\[Def Hybrid spaces\](Hybrid spaces) The hybrid spaces are defined as follows (their norms are the quantities on the left hand side of “$<\infty$”).
- $H_{p,b}=\{\xi\ \textrm{is}\ C^{2,\alpha}\ \textrm{away from}\ O|\ |\xi|_{W^{1,2}_{p,b}}+|\xi|^{(\frac{5}{2}+p,b)}_{2,\alpha,M}<\infty\}$ and
- $N_{p,b}=\{\xi\ \textrm{is}\ C^{1,\alpha}\ \textrm{away from}\ O|\ |\xi|_{L^{2}_{p,b}}+|\xi|^{(\frac{7}{2}+p,b)}_{1,\alpha,M}<\infty\}$.
\[lem multiplicative property of hybrid spaces\] Suppose $b\geq 0$, $p\leq -\frac{3}{2}$, and $\xi_{1},\ \xi_{2}\in H_{p,b}$. Then for any smooth tensor product $\otimes$, $\xi_{1}\otimes \xi_{2}\in N_{p,b}$ and $$|\xi_{1}\otimes \xi_{2}|_{N_{p,b}}\leq C|\xi_{1}|_{H_{p,b}}|\xi_{2}|_{H_{p,b}},\ C\ \textrm{depends on the }\ C^{2}-\textrm{norm of}\ \otimes.$$
This multiplicative property works for the quadratic non-linearity of (\[equ instantonequation without cokernel\]).
By Definition \[Def Global Schauder norms\], \[Def local Schauder norms\], and the conditions on $p,b$, we have $\xi_{1}\otimes \xi_{2}\in C^{1,\alpha}_{(\frac{7}{2}+p,b)}$ and $$|\xi_{1}\otimes \xi_{2}|_{C^{1,\alpha}_{(\frac{7}{2}+p,b)}}\leq C|\xi_{1}|_{C^{1,\alpha}_{(\frac{5}{2}+p,b)}}|\xi_{2}|_{C^{1,\alpha}_{(\frac{5}{2}+p,b)}}\leq C|\xi_{1}|_{H_{p,b}}|\xi_{2}|_{H_{p,b}}.$$
**The $L^{2}_{p,b}-$bound is estimated by making use of the $C^{0}-$norms:** $$\begin{aligned}
& &
\int_{M}|\xi_{1}\otimes \xi_{2}|^{2}w_{p,b}dV
\\&\leq & C|\xi_{1}|^{2}_{C^{1,\alpha}_{(\frac{5}{2}+p,b)}}\int_{M}\frac{|\xi_{2}|^{2}}{r^{5+2p}(-\log r)^{2b}}w_{p,b}dV\leq C|\xi_{1}|^{2}_{H_{p,b}}\int_{M}\frac{|\xi_{2}|^{2}}{r^{2}}w_{p,b}dV
\\&\leq & C|\xi_{1}|^{2}_{H_{p,b}}|\xi_{2}|^{2}_{H_{p,b}}, \end{aligned}$$ where the second inequality uses $p\leq -\frac{3}{2}$ and $b\geq 0$.
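The second inequality in this chain rests on the pointwise bound $\frac{1}{r^{5+2p}(-\log r)^{2b}}\leq \frac{1}{r^{2}}$, i.e.\ $r^{3+2p}(-\log r)^{2b}\geq 1$ for $r\leq e^{-1}$, which holds precisely because $p\leq -\frac{3}{2}$ and $b\geq 0$. A minimal numerical sketch (the sampled exponents and radii are arbitrary; this is an illustration, not part of the proof):

```python
import math

def weight_ratio(r, p, b):
    """r^(3+2p) * (-log r)^(2b); >= 1 certifies 1/(r^(5+2p)(-log r)^(2b)) <= 1/r^2."""
    return r ** (3 + 2 * p) * (-math.log(r)) ** (2 * b)

radii = [math.exp(-t) for t in (1.5, 2.0, 5.0, 10.0, 20.0)]  # all <= 1/e
for p in (-1.5, -2.0):          # p <= -3/2, including the borderline case
    for b in (0.0, 1.0):        # b >= 0
        assert all(weight_ratio(r, p, b) >= 1.0 for r in radii)
```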
\[thm C0 est\] Let $b\geq 0$, $0<\alpha<1$, and $\gamma$ be any real number. Suppose $f\in C^{\alpha}_{(\gamma,b)}(M)$, and $\xi$ is $C^{2,\alpha}$ away from the singularities. Suppose $L_{A}\xi=f$ or $L_{A}^{\star}\xi=f$, and $\xi\in L^{2}_{-\frac{9}{2}+\gamma,b}$. Then $\xi$ satisfies $$\label{equ Thm C0 estimate 1}
|\xi|^{(\gamma-1,b)}_{0,M}\leq C\{|f|^{(\gamma,b)}_{\alpha,M}+|\xi|_{L^{2}_{-\frac{9}{2}+\gamma,b}}\}.$$ Consequently, $$\label{equ Thm C0 estimate 2}
|\xi|^{(\gamma-1,b)}_{2,\alpha,M}\leq C\{|f|^{(\gamma,b)}_{1,\alpha,M}+|\xi|_{L^{2}_{-\frac{9}{2}+\gamma,b}}\}.$$
We only prove it for $L_{A}$, the proof for $L_{A}^{\star}$ is the same.
Let $|\cdot|^{[y]}_{k,\alpha,B}$ denote the weighted norm in (6.10) of [@GilbargTrudinger] with respect to $B$. Notice that this is **different** from $|\cdot|^{(y)}_{k,\alpha,B}$ (Definition \[Def Schauder spaces\]), for which the weight depends only on the distance to the singular point.
By Lemma \[lem Schauder estimate in local coordinates in small balls\] and multiplication of weight, in $B\triangleq B_{q}(\frac{r_{q}}{100})$, we have $$\label{equ interior estimate in balls away from singularity with weight half of n}
|\xi|^{[\frac{7}{2}]}_{1,\alpha,B}\leq C|L_{A}\xi|^{[\frac{9}{2}]}_{\alpha,B}+C|\xi|^{[\frac{7}{2}]}_{0,B}.$$ It suffices to prove that (\[equ intermediate C0 estimate\]) holds for any $\mu<\frac{1}{10}$.
$$\label{equ intermediate C0 estimate}
|\xi|^{[\frac{7}{2}]}_{0,B}\leq \mu|\nabla \xi|^{[\frac{9}{2}]}_{0,B}+C_{\mu}\frac{r_{q}^{\frac{9}{2}-\gamma}}{(-\log r_{q})^{b}}|\xi|_{L^{2}_{-\frac{9}{2}+\gamma,b}(B)}.$$
Assuming (\[equ intermediate C0 estimate\]), we go on to prove Theorem \[thm C0 est\]. By (\[equ interior estimate in balls away from singularity with weight half of n\]), we obtain $$|\xi|^{[\frac{7}{2}]}_{1,\alpha,B}\leq C|L_{A}\xi|^{[\frac{9}{2}]}_{\alpha,B}+\mu C|\nabla \xi|^{[\frac{9}{2}]}_{0,B}+C_{\mu}\frac{r_{q}^{\frac{9}{2}-\gamma}}{(-\log r_{q})^{b}}|\xi|_{L^{2}_{-\frac{9}{2}+\gamma,b}(B)}.$$ $$\label{equ 1 alpha estimate local version}
\textrm{Let}\ \mu C<\frac{1}{10};\ \textrm{then}\ |\xi|^{[\frac{7}{2}]}_{1,\alpha,B}\leq C|f|^{[\frac{9}{2}]}_{\alpha,B}+C_{\mu}\frac{r_{q}^{\frac{9}{2}-\gamma}}{(-\log r_{q})^{b}}|\xi|_{L^{2}_{-\frac{9}{2}+\gamma,b}(B)}.$$ In particular, on the $C^{0}-$norm, we have $$r^{\frac{7}{2}}_{q}|\xi|(q)\leq C|f|^{[\frac{9}{2}]}_{\alpha,B}+C_{\mu}\frac{r_{q}^{\frac{9}{2}-\gamma}}{(-\log r_{q})^{b}}|\xi|_{L^{2}_{-\frac{9}{2}+\gamma,b}(B)}.$$ $$\textrm{By definition we have}\ \
r_{q}^{-\frac{7}{2}}|f|^{[\frac{9}{2}]}_{\alpha,B}\leq \frac{C_{\mu}}{r^{\gamma-1}_{q}(-\log r_{q})^{b}}|f|^{(\gamma,b)}_{\alpha,B}.$$ $$\label{equ 0 estimate local version}\textrm{Then}\ \
|\xi|(q)\leq \frac{C}{r^{\gamma-1}_{q}(-\log r_{q})^{b}}\{|f|^{(\gamma,b)}_{\alpha,B}+|\xi|_{L^{2}_{-\frac{9}{2}+\gamma,b}(B)}\}.$$
Since $q$ is an arbitrary point near the singularity, and $|f|^{(\gamma,b)}_{\alpha,B}\leq |f|^{(\gamma,b)}_{\alpha,M}$, the proof of Theorem \[thm C0 est\] is complete (assuming (\[equ intermediate C0 estimate\])).
Now we prove (\[equ intermediate C0 estimate\]) using a simple interpolation. For any $x\in B$, consider $B_{x}(\mu d_{x})\ (\mu<\frac{1}{10})$; then there is a point $x_{0}$ in $B_{x}(\mu d_{x})$ such that $$\begin{aligned}
& &|\xi|(x_{0})\leq [\oint_{B_{x}(\mu d_{x})}|\xi|^{2}dy]^{\frac{1}{2}}\leq \frac{C[\int_{B_{x}(\mu d_{x})}|\xi|^{2}r^{-9+2\gamma}(-\log r)^{2b}dy]^{\frac{1}{2}}}{r_{x}^{-\frac{9}{2}+\gamma}(-\log r_{x})^{b}(\mu d_{x})^{\frac{7}{2}}}.\end{aligned}$$
For any $x\in B$, $r_{x}$ is comparable to $r_{q}$, we then compute $$\begin{aligned}
& & d^{\frac{7}{2}}_{x}|\xi|(x)
=d^{\frac{7}{2}}_{x}[|\xi|(x)-|\xi|(x_{0})]+d^{\frac{7}{2}}_{x}|\xi|(x_{0})
\\&\leq &2\mu d_{x}^{\frac{9}{2}}\sup_{y\in B_{x}(\mu d_{x})}|\nabla \xi|(y)+\frac{C_{\mu}r_{q}^{\frac{9}{2}-\gamma}}{(-\log r_{q})^{b}}|\xi|_{L^{2}_{-\frac{9}{2}+\gamma,b}(B)}.\end{aligned}$$
Replacing $2\mu$ by $\mu$, by definition, we deduce (\[equ intermediate C0 estimate\]). Then (\[equ Thm C0 estimate 2\]) is a corollary of Proposition \[prop log weighted Schauder estimate\] and (\[equ Thm C0 estimate 1\]).
Compact imbedding
-----------------
\[lem compact imbedding\] Suppose $p_{1}-1<p_{2}$, or $p_{1}-1=p_{2}$ and $b_{1}>b_{2}$. Then for any ball $B$ such that $\partial B$ does not intersect the singularities, the imbedding $W^{1,2}_{p_{1},b_{1}}(B) \rightarrow L^{2}_{p_{2},b_{2}}(B)$ is compact.
Consequently, the imbedding $W^{1,2}_{p_{1},b_{1}} \rightarrow L^{2}_{p_{2},b_{2}}$ (of global spaces) is compact.
It suffices to assume $B$ is centred at a singular point $O$ and does not contain any other singular point. We only prove the case when $p_{1}-1=p_{2}$ and $b_{1}=b_{2}+1$. The proof in general is the same, except that we have to spell out more notation. By definition, the imbedding from $W^{1,2}_{p_{1},b_{1}}(B)$ to $L^{2}_{p_{1}-1,b_{1}}(B)$ is bounded. For any concentric smaller ball $B(R)$, we have $$\int_{B(R)}|u|^2w_{p_{2},b_{2}}dx \leq \frac{1}{(-\log R)^{2}}\int_{B(R)}|u|^2w_{p_{1}-1,b_{1}}dx.$$ Now suppose $|u|_{W^{1,2}_{p_{1},b_{1}}(B)}\leq C_{1}$; then we can choose $R_{m}$, depending on $b$ and $C_{1}$, such that $$(\int_{B(R_{m})}|u|^2w_{p_{2},b_{2}}dx )^{\frac{1}{2}}\leq \frac{1}{m}.$$
Given a sequence $u_{i}$ such that $|u_{i}|_{W^{1,2}_{p_{1},b_{1}}(B)}\leq C$, using compactness of the imbedding $W^{1,2}(B\setminus B(R_{m}))\rightarrow L^{2}(B\setminus B(R_{m}))$, for each $m$ we obtain a Cauchy subsequence $u_{m,j}$ in $L^{2}_{p_{2},b_{2}}(B\setminus B(R_{m}))$ with $$|u_{m,j}|_{L^{2}_{p_{2},b_{2}}[B(R_{m})]}\leq \frac{1}{m}.$$ This means that for any $m$ there is an $N_{m}$ such that $j,l>N_{m}$ implies $$|u_{m,j}-u_{m,l}|_{L^{2}_{p_{2},b_{2}}(B)}<\frac{4}{m}.$$ When $b>a$, we choose $(u_{b,j})_{j}$ to be a subsequence of $(u_{a,j})_{j}$, and we consider the diagonal sequence $u_{aa}$. For any $m$ and any $a,b\geq m$, we can write $u_{aa}$ as $u_{m,j_{a}}$ and $u_{bb}$ as $u_{m,j_{b}}$; since these are terms of the subsequence $u_{m,j}$, we have $j_{a}\geq a$ and $j_{b}\geq b$. Hence for any $m$, when $a,\ b>m+N_{m}$, we have $$|u_{aa}-u_{bb}|_{L^{2}_{p_{2},b_{2}}(B)}<\frac{4}{m}.$$ Thus the diagonal sequence $u_{aa}$ is a Cauchy sequence in $L^{2}_{p_{2},b_{2}}(B)$.
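In the case treated above ($p_{2}=p_{1}-1$, $b_{1}=b_{2}+1$), the weight comparison reads $w_{p_{2},b_{2}}(r)=(-\log r)^{-2}\,w_{p_{1}-1,b_{1}}(r)\leq(-\log R)^{-2}\,w_{p_{1}-1,b_{1}}(r)$ for $r\leq R$, which is what makes the small balls $B(R_{m})$ contribute arbitrarily little mass. A numerical sketch with sample exponents (an illustration only):

```python
import math

def w(r, p, b):
    """Model weight r^(2p) * (-log r)^(2b) near a singular point."""
    return r ** (2 * p) * (-math.log(r)) ** (2 * b)

p1, b1 = -1.0, 1.5            # sample exponents with p1 < 0, b1 >= 0
p2, b2 = p1 - 1, b1 - 1       # the case p2 = p1 - 1, b1 = b2 + 1
R = 0.01
for r in (R / 10, R / 100, R * 1e-6):
    ratio = w(r, p2, b2) / w(r, p1 - 1, b1)
    # exact ratio of the two weights is (-log r)^(-2) ...
    assert abs(ratio - (-math.log(r)) ** (-2)) < 1e-12 * ratio
    # ... hence on B(R) the L^2_{p2,b2} mass is damped by (-log R)^(-2)
    assert w(r, p2, b2) <= (-math.log(R)) ** (-2) * w(r, p1 - 1, b1)
```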
Fredholm Theory \[section Global Fredholm and Schauder Theory\]
---------------------------------------------------------------
**Using the local inverse in Corollary \[Cor solving model laplacian equation over the ball without the compact support RHS condition\] and the above compact Sobolev imbedding, it is almost standard to prove that $L_{A}$ is Fredholm**. $$\label{equ deformation equation for A on a small ball}
\textrm{We consider the linearized equation}\ L_{A}\xi=f\ \textrm{on a small ball}.$$
\[prop solving the deformation equation locally for the noncone connection\] Let $A,p,b$ be as in Theorem \[Thm Fredholm\]. There is a $\tau_{0}>0$ such that for any singular point $O$, there exists a bounded linear map $Q_{A,p,b}$ from $L^{2}_{p,b}[B_{O}(\tau_{0})]$ to $W^{1,2}_{p,b}[B_{O}(\tau_{0})]$ with the following properties.
- $L_{A}Q_{A,p,b}=Id$ from $L^{2}_{p,b}[B_{O}(\tau_{0})]$ to itself.
- The bound on $Q_{A,p,b}$ is less than a $\bar{C}$ as in Definition \[Def special constants\].
Consequently, equation (\[equ deformation equation for A on a small ball\]) admits a solution $\xi$ such that $$\label{equ lem local inverse near O is bounded 2}|\xi|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}\leq \bar{C} |f|_{L^{2}_{p,b}[B_{O}(\tau_{0})]}.\ \textrm{In particular},\ \bar{C}\ \textrm{does not depend on}\ \tau_{0}.$$
Note that for the $W^{1,2}_{p,b}-$estimate, we do not have to shrink the domain. This is similar to the case of the standard Laplace equation (Theorem 9.9 in [@GilbargTrudinger]).
The proof is similar to that of Theorem 5.2 in [@GilbargTrudinger].
Equation (\[equ deformation equation for A on a small ball\]) is equivalent to $$\label{equ lem local inverse near O 1}
\xi=Q_{p,b,A_{O}}f+Q_{p,b,A_{O}}P_{A_{O},A}\xi\triangleq \square \xi,\ P_{A_{O},A}=L_{A_{O}}-L_{A},$$ where $Q_{p,b,A_{O}}$ is the one in Corollary \[Cor solving model laplacian equation over the ball without the compact support RHS condition\] ($\tau=\tau_{0}$).
By Definition \[Def Admissable connection with polynomial or exponential convergence\] and Lemma \[lem bounding local perturbation of deformation operator \], for any $\epsilon_{0}$, when $\tau_{0}$ is small enough with respect to $\epsilon_{0}$ and $\psi$, we obtain the following $\textrm{for any}\ \xi_{1},\xi_{2}\in W^{1,2}_{p,b}[B_{O}(\tau_{0})]$. $$\label{equ lem local inverse near O 2}
|P_{A_{O},A}(\xi_{1}-\xi_{2})|_{L^{2}_{p,b}[B_{O}(\tau_{0})]}\leq \bar{C}\epsilon_{0}|\xi_{1}-\xi_{2}|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}.$$ By the optimal $W^{1,2}_{p,b}-$ estimate of $Q_{p,b,A_{O}}$, we obtain $$\label{equ contract mapping for linear equation with Cepsilon0}
|\square (\xi_{1}-\xi_{2})|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}\leq \bar{C}\epsilon_{0}|\xi_{1}-\xi_{2}|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}.$$
When $ \bar{C}\epsilon_{0}<\frac{1}{4}$, $\square$ is a contraction mapping from $W^{1,2}_{p,b}[B_{O}(\tau_{0})]$ to itself, hence there is a unique fixed point $\xi$, which solves (\[equ deformation equation for A on a small ball\]). We define $Q_{A,p,b}f$ to be this $\xi$; the uniqueness also implies that **$Q_{A,p,b}$ is linear**. The condition $ \bar{C}\epsilon_{0}<\frac{1}{4}$, (\[equ lem local inverse near O 1\]), (\[equ lem local inverse near O 2\]), and Corollary \[Cor solving model laplacian equation over the ball without the compact support RHS condition\] imply $$\label{equ lem local inverse near O is bounded 1}
|\xi|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}\leq |Q_{p,b,A_{O}}f|_{W^{1,2}_{p,b}}+ \frac{1}{4}|\xi|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}.$$ Therefore (\[equ lem local inverse near O is bounded 2\]) is true.
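Spelled out, absorbing the $\frac{1}{4}|\xi|$ term on the right of (\[equ lem local inverse near O is bounded 1\]) gives the following sketch, where $\bar{C}_{0}$ denotes the bound on $Q_{p,b,A_{O}}$ from Corollary \[Cor solving model laplacian equation over the ball without the compact support RHS condition\]:

```latex
\frac{3}{4}\,|\xi|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}
\leq |Q_{p,b,A_{O}}f|_{W^{1,2}_{p,b}}
\quad\Longrightarrow\quad
|\xi|_{W^{1,2}_{p,b}[B_{O}(\tau_{0})]}
\leq \frac{4}{3}\,\bar{C}_{0}\,|f|_{L^{2}_{p,b}[B_{O}(\tau_{0})]},
```

so the constant in (\[equ lem local inverse near O is bounded 2\]) can be taken as $\frac{4}{3}\bar{C}_{0}$, which is independent of $\tau_{0}$.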
The following Lemma is well known.
\[lem parametrice in the smooth part\](Lemma 1.3.1 of [@Gilkey], Theorem 4.3 of [@Lawson]). Under the same assumptions in Proposition \[prop solving the deformation equation locally for the noncone connection\] on $A$, for the $\tau_{0}$ obtained there, the $\tau_{0}-$admissible cover $\mathbb{U}_{\tau_{0}}$ satisfies the following property. For any ball $B_{l}\in \mathbb{U}_{\tau_{0}}$ away from the singularity, there exists a local parametrix $Q_{l}:\ L^{2}(B_{l})\rightarrow W^{1,2}(B_{l})$ such that $L_{A}Q_{l}=Id+K_{l}, \ K_{l} \ \textrm{is compact}\ (L^{2}(B_{l})\rightarrow L^{2}(B_{l})).
$
For the reader’s convenience we sketch the argument: any section $\xi\in L^{2}(B_{l})$ can be extended by $0$ outside $B_{l}$, thus gives a section in “$H^{0}$” ($L^{2}$) defined on page 6 of [@Gilkey]. Choosing the “$\phi$” in (a) of Lemma 1.3.1 in [@Gilkey] to be the standard cutoff function which is identically $1$ in $B_{l}$ and vanishes outside $2B_{l}$, Lemma 1.3.1 in [@Gilkey] says that $\phi(L_{A}Q_{l}-Id)$ is an infinitely-smoothing operator (defined in the first line of page 12 in [@Gilkey]). Then by restricting $\phi(L_{A}Q_{l}-Id)\xi$ to $B_{l}$, the proof is complete.
\[thm global parametrix\]Let $A,p,b$ be as in Theorem \[Thm Fredholm\], there is a bounded linear operator $Q\ (L^{2}_{p,b}\rightarrow W^{1,2}_{p,b})$ such that $K_{p,b}\triangleq L_{A}Q-id\ (L^{2}_{p,b}\rightarrow L^{2}_{p,b})$ is compact.
We consider the $\tau_{0}-$admissible cover $\mathbb{U}_{\tau_{0}}$ in Definition (\[Def admissable open cover\]), for the $\tau_{0}$ in Proposition \[prop solving the deformation equation locally for the noncone connection\]. In the $B_{O_j}$’s (near the singular points), we use the right inverse $Q_{j}$ constructed in Proposition \[prop solving the deformation equation locally for the noncone connection\]. In the $B_{l}$’s (away from the singular points), we use the $Q_{l}$’s in Lemma \[lem parametrice in the smooth part\]. Then let $$\label{equ Thm global Sobolev parametrix}Q(f)=\Sigma_{j}\varphi_{j}Q_{j}(\chi_{j}f)+\Sigma_{l}\varphi_{l}Q_{l}(\chi_{l}f),$$ where the $\varphi_{j},\ \varphi_{l}$’s are a partition of unity of $\frac{\mathbb{U}_{\tau_{0}}}{5}$ (the concentric balls with radius $\frac{1}{5}$ of the original ones), and $\chi_{j}$ ($\chi_{l}$) are smooth functions such that $$\chi_{j}\ (\chi_{l}) =\left \{ \begin{array}{cc}
1, & \textrm{over}\ supp \varphi_{j}\subset \frac{B_{O_{j}}}{5}\ (supp \varphi_{l}\subset \frac{B_l}{5});\\
0,& \textrm{outside}\ \frac{B_{O_{j}}}{4}\ (\frac{B_{l}}{4}).
\end{array}\right.$$ Then $\varphi_{j}\chi_{j}=\varphi_{j}$ ($\varphi_{l}\chi_{l}=\varphi_{l}$).
For any smooth function $\varphi$ and section $\xi=\left [
\begin{array}{c}
\sigma \\
a \\
\end{array}\right ]$, we calculate $$\label{equ parametrix for LA: def of G}
L_{A}(\varphi\xi)= \varphi L_{A}\xi+G(d\varphi,\xi),\ \textrm{where}$$ $$\label{equ Def G..} G(d\varphi,\xi)=\left [
\begin{array}{cc}
0 & -\star (d\varphi \wedge \star a) \\
d\varphi \wedge \sigma & \star (d\varphi \wedge a \wedge \psi) \\
\end{array}\right ]\ \textrm{is an algebraic operator. }$$ $$\label{equ formula for LQ-id compact operator}\textrm{Thus}\
L_{A}Q \xi=f+ \Sigma_{j}G[d\varphi_{j},Q_{j}(\chi_{j}f)]+\Sigma_{l}G[d\varphi_{l},Q_{l}(\chi_{l}f)]+\Sigma_{l}\varphi_{l}K_{l}(\chi_{l}f).$$ Since each $\varphi_{j}$ is smooth, Corollary \[Cor solving model laplacian equation over the ball without the compact support RHS condition\] yields $$|\Sigma_{j}G[d\varphi_{j},Q_{j}(\chi_{j}f)]|_{W^{1,2}_{p,b}}\leq C|f|_{L^{2}_{p,b}}.$$ $\textrm{Lemma \ref{lem parametrice in the smooth part} implies}\
|\Sigma_{l}\varphi_{l}K_{l}(\chi_{l}f)|_{W_{p,b}^{1,2}}\leq C|f|_{L_{p,b}^{2}}$. Let $$\label{equ formula for Kpb}K_{p,b}(f)=\Sigma_{j}G[d\varphi_{j},Q_{j}(\chi_{j}f)]+\Sigma_{l}G[d\varphi_{l},Q_{l}(\chi_{l}f)]+\Sigma_{l}\varphi_{l}K_{l}(\chi_{l}f),$$ we obtain $|K_{p,b} f|_{W^{1,2}_{p,b}}\leq C|f|_{L^{2}_{p,b}}.$
By Lemma \[lem compact imbedding\], $K_{p,b}$ is compact from $L^{2}_{p,b}$ to itself.
By Theorem \[thm global parametrix\], using the argument on page 50 of [@Donaldson] and in the proof of Theorem 4 in [@Evans], the proof of the Sobolev-theory part of Theorem \[Thm Fredholm\] is complete.
The Hybrid part in Theorem \[Thm Fredholm\] is a direct corollary of the crucial $C^{0}-$estimate in Theorem \[thm C0 est\]. Suppose $\bar{f}$ is in the cokernel; (\[equ coker contained in a bigger space\]) yields $L_{A}^{\star}\bar{f}=0$ away from the singularities. Then Theorem \[thm C0 est\] for $L^{\star}_{A}$ ($\gamma=\frac{9}{2}+p$, with “$f$” there being $0$ and “$\xi$” there being $\bar{f}$) yields $$\label{equ Thm paramatix 1}
|\bar{f}|_{C^{1,\alpha}_{(\frac{7}{2}+p,b)}(M)}\leq C |\bar{f}|_{L^{2}_{p,b}}.$$ This means $CokerL_{A}\subset N_{p,b}$ (Definition \[Def Hybrid spaces\]).
Since $Image L_{A}|_{W^{1,2}_{p,b}}$ is closed in $L^{2}_{p,b}$, for any $f\in N_{p,b}$, we have a decomposition into parallel and perpendicular components with respect to $Image L_{A}|_{W^{1,2}_{p,b}}$: $$\label{equ f=fparallel + f perp}
f=f^{\parallel}+f^{\perp},\ f^{\perp}\in CokerL_{A}.$$ The estimate (\[equ Thm paramatix 1\]) means $f^{\perp}\in N_{p,b}$ and $$|f^{\perp}|_{C^{1,\alpha}_{\frac{7}{2}+p,b}}\leq
C|f^{\perp}|_{L^{2}_{p,b}}\leq C|f|_{L^{2}_{p,b}}\leq C|f|_{N_{p,b}}.$$ The decomposition (\[equ f=fparallel + f perp\]) implies $f^{\parallel}\in N_{p,b}$. Thus we have proved
\[lem regularity of projection of f onto Image of L\]Suppose $f\in N_{p,b}$, then $$|f^{\parallel}|_{N_{p,b}}\leq C|f|_{N_{p,b}},\ |f^{\perp}|_{N_{p,b}}\leq C|f|_{N_{p,b}}.$$
Lemma \[lem regularity of projection of f onto Image of L\] and Theorem \[thm existence of a good solution when f is in the image\] yield a solution to $L_{A}\xi=f^{\parallel}$ which is orthogonal to the kernel and $$\label{equ proof of Thm Fredholm 1}
|\xi|_{L^{2}_{p-1,b}}\leq |\xi|_{W^{1,2}_{p,b}}\leq C|f|_{L^{2}_{p,b}}.$$ Theorem \[thm C0 est\] ($\gamma=\frac{7}{2}+p$) and (\[equ proof of Thm Fredholm 1\]) gives $$\label{equ proof of Thm Fredholm 2}|\xi|_{2,\alpha,M}^{(\frac{5}{2}+p,b)}\leq C|f|_{N_{p,b}}. \ \textrm{Therefore}\
|\xi|_{H_{p,b}}\leq C|f|_{N_{p,b}}.$$ The proof of the Hybrid-spaces part of Theorem \[Thm Fredholm\] is complete.
\[rmk Jpb\](The pre-image space $J_{p,b}$) We define $$J_{p,b}=\{\xi \in H_{p,b}| \xi\perp kerL_{A}\}.$$ Theorem \[Thm Fredholm\] says $L_{A}$ is an isomorphism from $J_{p,b}$ to $Image L_{A}|_{H_{p,b}}\subset N_{p,b}$. Denoting $|\xi|_{H_{p,b}}$ as $|\xi|_{J_{p,b}}$ when $\xi\in J_{p,b}$, we rewrite (\[equ proof of Thm Fredholm 2\]) as $$\label{equ in Jpb LA inverse is bounded}
|L_{A}^{-1}f|_{J_{p,b}}\leq C|f|_{N_{p,b}}, \ \textrm{for any}\ f\in Image L_{A}|_{H_{p,b}}.$$
Perturbation \[section Perturbation\]
=====================================
In this section we don’t need the log-weight, thus we conform to the abbreviation convention in Definition \[Def abbreviation of notations for spaces\] i.e. **there will be no “$b$” in the symbols of the function spaces.** We prove the most precise version of Theorem \[Thm Deforming instanton simple version\].
\[Thm Deforming instanton\] Let $M$ be a $7-$manifold with a smooth $G_{2}-$structure $(\phi,\psi)$, and $E\rightarrow M$ be an admissible $SO(m)-$bundle defined away from finitely-many points (Definition \[Def the bundle xi\]). Suppose $A$ is an admissible $\psi-$instanton of order $4$ (Definition \[Def Admissable connection with polynomial or exponential convergence\] and (\[equ instanton equation for A\])). Then for any $A-$generic $p\in (-\frac{5}{2},-\frac{3}{2})$ (Definition \[Def A generic\]), there exists a $\delta_{0}>0$ with the following property.
Suppose $cokerL_{A}=\{0\}$ (Theorem \[thm characterizing cokernel\]). Then for any admissible $\delta_{0}$-deformation $(\underline{\phi},\underline{\psi})$ of $(\phi,\psi)$ (Definition \[Def deformation of the G2 structure\]), there exists a $\underline{\psi}-G_{2}$ monopole $(\underline{A},\sigma)$ such that $\underline{A}$ satisfies Condition ${\circledS_{A,p}}$ (Definition \[Def condition SAp\]). In particular, the tangent connection of $\underline{A}$ at each singular point is the same as that of $A$. When $\underline{\psi}$ is closed, $\underline{A}$ is a $\underline{\psi}-$instanton.
\[prop C1 global bound for the error term\] Under the same conditions as in Theorem \[Thm Deforming instanton\], for any $\lambda_{1}>0$ and $\epsilon_{1}$, there is a $\delta_{0}$ with the following property. For any admissible $\delta_{0}-$deformation $\underline{\psi}$ of $\psi$, we have $$\label{equ in prop C1 global bound for the error term}
|\star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})|_{C^{1,\alpha}_{{(1+\lambda_{1})}}(M)}< \epsilon_{1},\ \textrm{for any}\ 0<\alpha\leq 1.$$ Thus for any $\lambda_{2}>0$ and $\epsilon_{1}$, the following is true when $\delta_{0}$ is small enough. $$\label{equ in prop C1 global bound for the error term 1}
|\star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})|_{N_{-\frac{5}{2}+\lambda_{2}}}< \epsilon_{1}.$$
The essential point can be illustrated by the $C^{0}-$estimate. The instanton condition (\[equ instanton equation for A\]) implies $$\label{equ decomposation of the full error term}
F_{A}\wedge \underline{\psi}= F_{A}\wedge (\underline{\psi}-\psi).$$
Let $\rho_{0}>0$ be small enough such that $B_{O}(\rho_{0})$ is within the coordinate chart near $O$. When $r\leq \rho_{0}$, the admissible condition implies $\underline{\psi}-\psi=[\underline{\psi}-\underline{\psi}(O)]+[\psi(O)-\psi].\ \textrm{Hence}\ \ |\underline{\psi}-\psi|<Cr.$ $\textrm{Adjusting}\ \rho_{0}\ \textrm{such that}\ C\rho_{0}^{\lambda_{1}}<\frac{\epsilon_{1}}{2},\ \textrm{we find}$ $$\label{equ C0 bound on the error step 1}
|F_{A}\wedge (\underline{\psi}-\psi)|\leq \frac{C}{r}\leq \frac{C\rho_{0}^{\lambda_{1}}}{r^{1+\lambda_{1}}}<\frac{\epsilon_{1}}{r^{1+\lambda_{1}}}.$$ When $r\geq \rho_{0}$, still by the condition in Proposition \[prop C1 global bound for the error term\], we have $$\label{equ C0 bound on the error step 2}
|F_{A}\wedge (\underline{\psi}-\psi)|\leq C\delta_{0}\rho^{-2}_{0}=C\delta_{0}(\frac{\epsilon_{1}}{2})^{-\frac{2}{\lambda_{1}}}<\epsilon_{1},\ \textrm{when}\ \delta_{0}\ \ \textrm{is small enough}.$$
Thus we obtain the $C^{0}-$bound $$\label{equ C0 bound on the error}
|F_{A}\wedge \underline{\psi}|_{C^{0}_{{(1+\lambda_{1})}}(M)}\approx|\star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})|_{C^{0}_{{(1+\lambda_{1})}}(M)}< C\epsilon_{1},$$ where “$\approx$” means equivalent up to a constant in the sense of Definition \[Def Dependence of the constants\].
The bounds on $|\nabla_{A} \star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})|_{C^{0}_{{(2+\lambda_{1})}}(M)}$ and $|\nabla^{2}_{A} \star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})|_{C^{0}_{{(3+\lambda_{1})}}(M)}$ are similar. For the reader’s convenience, we still do the gradient bound. $$\begin{aligned}
& &\nabla_{A,\psi}\star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})=\nabla_{A,\psi}\star_{\underline{\psi}}(F_{A}\wedge [\underline{\psi}-\psi]) \\&=&[\nabla_{\psi}(\star_{\underline{\psi}})](F_{A}\wedge [\underline{\psi}-\psi])+\star_{\underline{\psi}}(F_{A}\wedge \nabla_{\psi}[\underline{\psi}-\psi])+\star_{\underline{\psi}}(\nabla_{A,\psi}F_{A}\wedge[\underline{\psi}-\psi]) \nonumber
\end{aligned}$$
For the first term, by (\[equ C0 bound on the error\]) and that $|\underline{\psi}|_{C^{5}(M)}\leq C$, we have $$\begin{aligned}
& &|[\nabla_{\psi}(\star_{\underline{\psi}})](F_{A}\wedge [\underline{\psi}-\psi])|_{C^{0}_{{(2+\lambda_{1})}}(M)}\leq |[\nabla_{\psi}(\star_{\underline{\psi}})](F_{A}\wedge [\underline{\psi}-\psi])|_{C^{0}_{{(1+\lambda_{1})}}(M)}\nonumber
\\&\leq & C\epsilon_{1}.
\end{aligned}$$
For the second term, we have the following crude estimate: $$|\star_{\underline{\psi}}(F_{A}\wedge \nabla_{\psi}[\underline{\psi}-\psi])|\leq \frac{C\delta_{0}}{r^{2}}.$$ By exactly the same trick (relaxing the weight a little) as from (\[equ C0 bound on the error step 1\]) to (\[equ C0 bound on the error\]), we obtain for any $\lambda_{1}>0$ that $$|\star_{\underline{\psi}}(F_{A}\wedge \nabla_{\psi}[\underline{\psi}-\psi])|_{C^{0}_{{(2+\lambda_{1})}}(M)}< \epsilon_{1}\ \textrm{when}\ \delta_{0}\ \textrm{is small enough}.$$ In the same way it follows that the $C^{0}_{{(2+\lambda_{1})}}(M)-$norm of the third term is less than $\epsilon_{1}$ (using $|\nabla_{A,\psi}F_{A}|\leq \frac{C}{r^{3}}$). Then we obtain $$\label{equ error bound 2}|\nabla_{A,\psi}\star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})|_{C^{0}_{{(2+\lambda_{1})}}(M)}< C\epsilon_{1}\ \textrm{when}\ \delta_{0}\ \ \textrm{is small enough}.$$ $$\label{equ error bound 3}\textrm{The proof of the Hessian estimate }\ |\nabla_{A,\psi}^{2}\star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})|_{C^{0}_{{(3+\lambda_{1})}}(M)}< C\epsilon_{1}$$ is exactly the same, except that we have one more negative power of $r$.
By replacing “$C\epsilon_{1}$” by $\epsilon_{1}$ (since $\epsilon_{1}$ is arbitrary and $C$ does not depend on it), the estimates (\[equ C0 bound on the error\]), (\[equ error bound 2\]), (\[equ error bound 3\]) amount to $$|\star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})|_{C^{2}_{{(1+\lambda_{1})}}(M)}< \epsilon_{1}.$$
By Lemma \[lem C1 C0 intepolate Calpha\], the proof of (\[equ in prop C1 global bound for the error term\]) is complete. Using Definition \[Def Hybrid spaces\], the proof of (\[equ in prop C1 global bound for the error term 1\]) is done by letting $\lambda_{1}=\frac{\lambda_{2}}{2}$ in (\[equ in prop C1 global bound for the error term\]) and $\delta_{0}$ small enough.
The monopole equation with respect to $\underline{\psi}$ is $$\label{equ instantonequation without cokernel}\star_{\underline{\psi}}(F_{A+a}\wedge \underline{\psi})+d_{A+a}\sigma=0, \ \textrm{with gauge fixing equation}\ d^{\star_{\underline{\psi}}}_{A}a=0.$$ It is equivalent to $$\label{equ instanton equation without cokernel without LA}
\star_{\underline{\psi}}(d_{A }a\wedge \underline{\psi})+\frac{1}{2}\star_{\underline{\psi}}([a,a]\wedge \underline{\psi})+d_{A}\sigma+[a,\sigma]+\star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})=0$$ with gauge fixing. Equation (\[equ instantonequation without cokernel\]) and (\[equ instanton equation without cokernel without LA\]) can be written as $$\label{equ instanton column vector equation without cokernel}
L_{A,\underline{\psi}}[\begin{array}{c}
\sigma \\
a
\end{array}]=\left[\begin{array}{c}
0 \\
-\frac{1}{2}\star_{\underline{\psi}}([a,a]\wedge \underline{\psi})
-[a,\sigma]-\star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})
\end{array}\right ]$$
\[lem Launderline psi is an isomorphism from Jp to Np\] Under the same conditions as in Theorem \[Thm Deforming instanton\], $L_{A,\underline{\psi}}$ is an isomorphism from $J_{p}$ to $N_{p}$, and the bounds (on itself and the inverse of it) are uniform for all admissible $\delta_{0}-$deformations of $\psi$, when $\delta_{0}$ is small enough with respect to the data in Definition \[Def Dependence of the constants\].
The proof is exactly like that of Proposition \[prop solving the deformation equation locally for the noncone connection\]; for the reader’s convenience we include the crucial detail. $$\label{equ linearized equation without cokernel }
\textrm{For any} \ f\in N_{p},\ \textrm{the equation}\ L_{A,\underline{\psi}}\left[\begin{array}{c}
\sigma \\
a
\end{array}\right ] =f\ \textrm{is equivalent to}$$ $$\label{equ linearized equation for iteration no cokernel}
[\begin{array}{c}
\sigma \\
a
\end{array}]=L^{-1}_{A}(L_{A}-L_{A,\underline{\psi}})[\begin{array}{c}
\sigma \\
a
\end{array}]+L^{-1}_{A}f.$$
The right hand side of (\[equ linearized equation for iteration no cokernel\]) is a contraction mapping in terms of $[\begin{array}{c}
\sigma \\
a
\end{array}]$ from $J_{p}$ to itself, thus iteration implies that (\[equ linearized equation without cokernel \]) can be uniquely solved in $J_{p}$. The bound on $L^{-1}_{A}$ follows from the last part of the proof of Proposition \[prop solving the deformation equation locally for the noncone connection\].
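Schematically, the iteration (\[equ linearized equation for iteration no cokernel\]) is a Neumann series: writing $T \triangleq L^{-1}_{A}(L_{A}-L_{A,\underline{\psi}})$, a contraction on $J_{p}$ with, say, $\|T\|_{J_{p}\rightarrow J_{p}}\leq \frac{1}{2}$ for $\delta_{0}$ small, the inverse and its uniform bound can be read off as follows (a sketch under this assumption):

```latex
L_{A,\underline{\psi}} = L_{A}\,(\mathrm{Id}-T)
\;\Longrightarrow\;
L^{-1}_{A,\underline{\psi}} = \Bigl(\sum_{k\geq 0} T^{k}\Bigr)\,L^{-1}_{A},
\qquad
\|L^{-1}_{A,\underline{\psi}}\|_{N_{p}\rightarrow J_{p}}
\leq \frac{\|L^{-1}_{A}\|_{N_{p}\rightarrow J_{p}}}{1-\|T\|_{J_{p}\rightarrow J_{p}}}
\leq 2\,\|L^{-1}_{A}\|_{N_{p}\rightarrow J_{p}}.
```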
By Lemma \[lem Launderline psi is an isomorphism from Jp to Np\], (\[equ instanton column vector equation without cokernel\]) is equivalent to the following equation $$\left[\begin{array}{c}
\sigma \\
a
\end{array}\right ]=L_{A,\underline{\psi}}^{-1}P\left[\begin{array}{c}
\sigma \\
a
\end{array}\right ],$$ where $
P\left[\begin{array}{c}
\sigma \\
a
\end{array}\right ]$ means the right hand side of (\[equ instanton column vector equation without cokernel\]).$$\label{equ first iteration}
\textrm{The first iteration is}\ L_{A,\underline{\psi}}^{-1}P\left[\begin{array}{c}
0 \\
0\\
\end{array}\right ]=L_{A,\underline{\psi}}^{-1}\left[\begin{array}{c}
0 \\
-\star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})
\end{array}\right ].$$
For any $p$ as in Theorem \[Thm Deforming instanton\], there is a $\lambda_{2}$ such that $p>-\frac{5}{2}+\lambda_{2}$, thus Proposition \[prop C1 global bound for the error term\] implies the following bound on the first iteration $$\label{equ first iteration is small when cokernel is present}
|L_{A,\underline{\psi}}^{-1}P\left[\begin{array}{c}
0 \\
0\\
\end{array}\right ]|_{J_{p}}\leq C|\star_{\underline{\psi}}(F_{A}\wedge \underline{\psi})|_{N_{p}}< C\epsilon_{1}\ \textrm{when}\ \delta_{0}\ \textrm{is small enough}.$$
To solve the taming-pair equation (\[equ instantonequation without cokernel\]), it suffices to show that $L_{A,\underline{\psi}}^{-1}P$ is a contraction mapping when restricted to a small enough neighbourhood of $0$ in $J_{p}$; then iteration as in section 3 of [@Myself2013] implies the existence of a unique solution close enough to $0$.
Since $p<-\frac{3}{2}$, this is an easy consequence of the multiplicative property of $J_{p}\ (H_{p}),N_{p}$ in Lemma \[lem multiplicative property of hybrid spaces\], and of the fact that the two terms $$\label{equ quadratic terms 1 in the instanton equation with cokernel}-\frac{1}{2}\star_{\underline{\psi}}([a,a]\wedge \underline{\psi})\ \textrm{and}\ -[a,\sigma]$$ are quadratic. For the reader’s convenience, we include the crucial detail. $$\begin{aligned}
\label{eqnarray 1 in proof of Main thm without cokernel}
\textrm{We compute} & & P\left[\begin{array}{c}
\sigma_{1} \\
a_{1}
\end{array}\right ]-
P\left[\begin{array}{c}
\sigma_{2} \\
a_{2}
\end{array}\right]
\\&=&-\frac{1}{2}\{\star_{\underline{\psi}}([a_{1}-a_{2},a_{1}]\wedge \underline{\psi})+\star_{\underline{\psi}}([a_{2},a_{1}-a_{2}]\wedge \underline{\psi})\}\nonumber\\& &-\{ [a_{1}-a_{2},\sigma_{1}]+[a_{2},\sigma_{1}-\sigma_{2}]\}\nonumber\end{aligned}$$ Then Lemma \[lem multiplicative property of hybrid spaces\] implies $$\label{equ 1 in proof of Main thm without cokernel}
|P\left[\begin{array}{c}
\sigma_{1} \\
a_{1}
\end{array}\right ]-
P\left[\begin{array}{c}
\sigma_{2} \\
a_{2}
\end{array}\right]|_{N_{p}}
\leq C |\left[\begin{array}{c}
\sigma_{1} \\
a_{1}
\end{array}\right]-
\left[\begin{array}{c}
\sigma_{2} \\
a_{2}
\end{array}\right]|_{J_{p}}
\times (\left|\begin{array}{c}
\sigma_{1} \\
a_{1}
\end{array}\right|_{J_{p}}+
\left|\begin{array}{c}
\sigma_{2} \\
a_{2}
\end{array}\right|_{J_{p}})$$
Thus, taking $\epsilon_{1}$ small enough with respect to the “$C$” above and the “$C$” in (\[equ in Jpb LA inverse is bounded\]), the condition $(\left|\begin{array}{c}
\sigma_{1} \\
a_{1}
\end{array}\right|_{J_{p}}+
\left|\begin{array}{c}
\sigma_{2} \\
a_{2}
\end{array}\right|_{J_{p}})<\epsilon_{1}$ implies $$|L_{A,\underline{\psi}}^{-1}P\left[\begin{array}{c}
\sigma_{1} \\
a_{1}
\end{array}\right ]-
L_{A,\underline{\psi}}^{-1}P\left[\begin{array}{c}
\sigma_{2} \\
a_{2}
\end{array}\right]|_{J_{p}}
\leq \frac{1}{2} |\left[\begin{array}{c}
\sigma_{1} \\
a_{1}
\end{array}\right]-
\left[\begin{array}{c}
\sigma_{2} \\
a_{2}
\end{array}\right]|_{J_{p}}.$$
The proof of the contraction-mapping property is complete.
Denote $A+a$ as $\underline{A}$. When $\underline{\psi}$ is closed, note that by applying $d^{\star_{\underline{\phi}}}_{\underline{A}}$ to (\[equ instanton equation without cokernel without LA\]) away from the singularity, we obtain $d^{\star_{\underline{\phi}}}_{\underline{A}}d_{\underline{A}}\sigma=0$. Then we choose the cut-off function $\eta_{\epsilon}$ as in (\[equ cut-off function bound near the singular point\]), and calculate $$\label{equ Thm deforming instanton 3}
0= \int_{M}<d^{\star_{\underline{\phi}}}_{\underline{A}}d_{\underline{A}}\sigma, \eta_{\epsilon}\sigma>dV=\int_{M}<d_{\underline{A}}\sigma,(d\eta_{\epsilon})\wedge\sigma>+\int_{M} \eta_{\epsilon}|d_{\underline{A}}\sigma|^{2}dV.$$ $\sigma\in J_{p}$ implies $|d_{\underline{A}}\sigma|\leq \frac{C_{\sigma}}{r^{\frac{7}{2}+p}},\ |\sigma|\leq \frac{C}{r^{\frac{5}{2}+p}}$. Then $$\label{equ Thm deforming instanton 4}
|\int_{M}<d_{\underline{A}}\sigma,(d\eta_{\epsilon})\wedge\sigma>|\leq C_{\sigma}\int_{B(2\epsilon)\setminus B(\epsilon)}\frac{1}{r^{\frac{7}{2}+p}}\frac{1}{r^{\frac{5}{2}+p}}\frac{1}{r}\leq C\epsilon^{-2p}\rightarrow 0\ \textrm{as}\ \epsilon\rightarrow 0.$$
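For the reader’s convenience, here is the power count behind (\[equ Thm deforming instanton 4\]), assuming the volume form near the singular point of the $7-$manifold behaves like $r^{6}\,dr$ and $|d\eta_{\epsilon}|\leq \frac{C}{r}$:

```latex
\int_{B(2\epsilon)\setminus B(\epsilon)}
r^{-(\frac{7}{2}+p)}\, r^{-(\frac{5}{2}+p)}\, r^{-1}\, dV
\;\leq\; C\int_{\epsilon}^{2\epsilon} r^{-7-2p}\, r^{6}\, dr
\;=\; C\int_{\epsilon}^{2\epsilon} r^{-1-2p}\, dr
\;\leq\; C\,\epsilon^{-2p},
```

which tends to $0$ as $\epsilon\rightarrow 0$ since $-2p>0$ (indeed $p\in(-\frac{5}{2},-\frac{3}{2})$).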
Letting $\epsilon\rightarrow 0$ in (\[equ Thm deforming instanton 3\]), the monotone convergence theorem and (\[equ Thm deforming instanton 4\]) imply $\int_{M}|d_{\underline{A}}\sigma|^{2}dV=0.$ This means $d_{\underline{A}}\sigma=0$, and thus (\[equ instantonequation without cokernel\]) says $\underline{A}$ is a $\underline{\psi}-$instanton.
Using Corollary \[Cor solving model laplacian equation over the ball without the compact support RHS condition\], this is much easier than Theorem \[Thm Deforming instanton\]. The crucial trick is to cut off the nonlinear term and the error in (\[equ instantonequation without cokernel\]) \[(\[equ instanton column vector equation without cokernel\])\]; i.e., letting $\xi$ denote $\left[\begin{array}{c} a \\
\sigma
\end{array}\right]$, we should consider the equation $$\label{equ instanton column vector equation without cokernel cutted off}
L_{A_{O},\underline{\psi}}\xi=\chi\left[\begin{array}{c} 0 \\
\xi\otimes \xi-\star_{\underline{\psi}}(F_{A_{O}}\wedge \underline{\psi})
\end{array}\right],\ \xi\otimes \xi=-\frac{1}{2}\star_{\underline{\psi}}([a,a]\wedge \underline{\psi})
-[a,\sigma],$$ where $\chi$ is the standard cut-off function, $\equiv 1$ in $B_{O}(\frac{1}{4})$ and $\equiv 0$ outside $B_{O}(\frac{5}{16})$. Using (\[equ 0 estimate local version\]) with $\gamma=\frac{7}{2}+p$ (**for any $p$ as in Theorem \[Thm Deforming instanton\]**), and the proof of Theorem 1 in [@Nirenberg] \[in $B_{O}(\frac{7}{16})$ and near $\partial B_{O}(\frac{7}{16})$\], we obtain $$\label{equ Thm deformation local instanton 1}
|\xi|^{\{\frac{5}{2}+p\},\frac{7}{2}}_{2,\alpha,B_{O}(\frac{7}{16})}\leq \bar{C}\{|L_{A_{O}}\xi|^{\{\frac{7}{2}+p\},\frac{9}{2}}_{1,\alpha,B_{O}(\frac{7}{16})}+|\xi|_{L^{2}_{p-1}[B_{O}(\frac{7}{16})]}\}\ (\textrm{see Definition}\ \ref{Def local Schauder adapted to local perturbation}).$$ We define
- $\bar{H}_{p}=\{\xi\ \textrm{is}\ C^{2,\alpha}\ \textrm{away from}\ O|\ |\xi|_{W^{1,2}_{p}(B_{O}(\frac{7}{16}))}+|\xi|^{\{\frac{5}{2}+p\},\frac{7}{2}}_{2,\alpha,B_{O}(\frac{7}{16})}<\infty\}$ and
- $\bar{N}_{p}=\{\xi\ \textrm{is}\ C^{1,\alpha}\ \textrm{away from}\ O|\ |\xi|_{L^{2}_{p}(B_{O}(\frac{7}{16}))}+|\xi|^{\{\frac{7}{2}+p\},\frac{9}{2}}_{1,\alpha,B_{O}(\frac{7}{16})}<\infty\}$.
Using Corollary \[Cor solving model laplacian equation over the ball without the compact support RHS condition\] and (\[equ Thm deformation local instanton 1\]), the operator $L_{A_{O},\psi_{0}}:\ \bar{H}_{p}\rightarrow \bar{N}_{p}$ is inverted by $Q_{p,A_{O}}$, which is linear and bounded by a $\bar{C}$ as in Definition \[Def special constants\]. Moreover, the advantage of cutting off the monopole equation is
\[clm multiplicative property of cutting off\] We have $|\chi f|_{C^{1,\alpha}_{\{\frac{7}{2}+p\},\frac{9}{2}}[B_{O}(\frac{7}{16})]}\leq C |f|_{C^{1,\alpha}_{(\frac{7}{2}+p)}[B_{O}(\frac{6}{16})]}$. Consequently, $$|\chi \xi_{1}\otimes \xi_{2}|_{\bar{N}_{p}}\leq C|\xi_{1}|_{\bar{H}_{p}}|\xi_{2}|_{\bar{H}_{p}},\ \textrm{where}\ \otimes\ \textrm{and}\ \chi\ \textrm{are as in}\ (\ref{equ instanton column vector equation without cokernel cutted off}).$$
The proof of the above is similar to that of Lemma \[lem multiplicative property of hybrid spaces\]; the cutoff function $\chi$ plays the key role. **Claim \[clm multiplicative property of cutting off\] means we can avoid the boundary estimate near $\partial B$**, where $B$ is as in Theorem \[Thm Deforming local instanton\]. With the help of Proposition \[prop C1 global bound for the error term\], by going through the proofs of Lemma \[lem Launderline psi is an isomorphism from Jp to Np\] and Theorem \[Thm Deforming instanton\], and
- replacing the $A$ and $\psi$ (Proposition \[prop C1 global bound for the error term\]) by $A_{O}$ and $\psi_{0}$,
- replacing the $L_{A,\underline{\psi}}$, $L_{A}$ in Lemma \[lem Launderline psi is an isomorphism from Jp to Np\] by $L_{A_{O},\underline{\psi}}$, $L_{A_{O}}$ respectively,
- replacing the $L^{-1}_{A,\underline{\psi}}$ in proof of Theorem \[Thm Deforming instanton\] by $L^{-1}_{A_{O},\underline{\psi}}$,
- replacing the $J_{p},N_{p}$ in proof of Theorem \[Thm Deforming instanton\] by $\bar{H}_{p},\bar{N}_{p}$ respectively,
we obtain a solution $\xi$ to (\[equ instanton column vector equation without cokernel cutted off\]) in $\bar{H}_{p}$, **for any $p$ as in Theorem \[Thm Deforming instanton\]**. Since $\chi\equiv 1$ in $B_{O}(\frac{1}{4})$, $A_{O}+a$ solves the monopole-equation therein \[see (\[equ instanton column vector equation without cokernel\]) and the discussion above it\]. The proof of Theorem \[Thm Deforming local instanton\] is complete.
Characterizing the cokernel \[section Characterizing the cokernel\]
===================================================================
The formal adjoint of $L_{A}$ is $$\label{equ LA star formula} L_{A}^{\star}f=L_{A}f+G(\frac{d\omega_{p,b}}{\omega_{p,b}},f)\ \textrm{defined away from the singularities}.$$
The cokernel is defined as $Image^{\perp}L_{A}$ i.e. $$\label{equ Def of Coker} cokerL_{A}\triangleq \{f\in L^{2}_{p,b}|\int_{M}<L_{A}\xi,f>w_{p,b}dV=0\ \textrm{for all}\ \xi\in W^{1,2}_{p,b}\}.$$ Taking the test sections $\xi$ in (\[equ Def of Coker\]) as smooth sections supported away from the singularities, using Theorem \[thm C0 est\] (for $\gamma=\frac{9}{2}+p$), we deduce $$\label{equ coker contained in a bigger space}
cokerL_{A}\subset \{f\in N_{p,b}|L^{\star}_{A}f=0\ \textrm{away from the singularities}\}.$$ However, a priori, there is no guarantee that the right hand side of (\[equ coker contained in a bigger space\]) is finite-dimensional. **Fortunately, when $p$ is $A-$generic, the “blowing-up” rate of elements of the cokernel can be improved as follows.**
\[thm characterizing cokernel\] Let $A,p,b$ be as in Theorem \[Thm Fredholm\]. For all $0<\mu<\vartheta_{1-p}$, $$\label{equ thm characterizing cokernel}
cokerL_{A}=\{f\in C^{1,\alpha}_{(\frac{7}{2}+p-\mu)}(M)| L^{\star}_{A}f=0\ \textrm{away from the singularities}\}.$$ Moreover, $|f|^{(\frac{7}{2}+p-\mu)}_{1,\alpha,M}\leq C|f|_{L^{2}_{p,b}}$ for any $f\in cokerL_{A}$. In particular, if $f$ satisfies the two conditions in the parentheses of (\[equ thm characterizing cokernel\]) for some $\mu>0$, then $f$ satisfies them for all $\mu<\vartheta_{1-p}$.
By Theorem 5 of Appendix D.5 in [@Evans], $cokerL_{A}\subset ker(Id-K^{\star}_{p,b})|_{L^{2}_{p,b}}$. Lemma \[lem boostrap Kstar\] implies that any $f\in ker(Id-K^{\star}_{p,b})$ is actually in $L^{2}_{p-\mu,b}$, with a uniform bound in terms of the $L^{2}_{p,b}-$norm of $f$. Then Theorem \[thm characterizing cokernel\] is a direct corollary of (\[equ coker contained in a bigger space\]), Lemmas \[lem boostrap Kstar\] and \[lem cokernel contains the more vanishing sections\], and Theorem \[thm C0 est\] (for $L_{A}^{\star}$, taking $\gamma=\frac{9}{2}+p-\mu$).
The crucial observation is that in the setting of Theorem \[thm characterizing cokernel\], we have $$\label{equ Crucial observation that Q is the same for neary by weights}
Q_{A_{O_{j}}, p,b}f=Q_{A_{O_{j}}, p+\mu,b}f \ \textrm{for any} \ j.$$ The reason is that the $v-$spectrum of the tangential operators is fixed. Thus, if $v>(<)\ 1-p$, the same holds with $p$ replaced by $p+\mu$ or $p-\mu$. Hence the solution formulas (in (\[equ solution when v is real and p>1-v is integral from 1 to r\]), (\[equ solution formula when v positive and p less than 1-v\]), (\[equ solution when v is purely imaginary\]), and (\[equ solution when v=0\])) do not change.
This follows directly from Lemmas \[lem boostrap K\], \[lem boostrap Kstar\], and \[lem cokernel contains the more vanishing sections\].
\[lem boostrap K\]Let $A,p,b,\mu$ be as in Theorem \[thm characterizing cokernel\], then $L^{2}_{p,b}\subset L^{2}_{p+\mu,b}$ is an invariant subspace of $K_{p+\mu,b}$, and $K_{p+\mu,b}=K_{p,b}$ when restricted to $L^{2}_{p,b}$. Consequently, for any $f\in L^{2}_{p,b}$, we have $$|K_{p,b}f|_{L^{2}_{p,b}}\leq C_{\mu}|f|_{L^{2}_{p+\mu,b}}.$$
By (\[equ formula for Kpb\]), we only need to show that the parametrices $Q_{j}$ near the singularities satisfy the asserted property. The parametrices away from the singularities do not depend on the weight chosen.
In the setting of (\[equ lem local inverse near O 2\]) and (\[equ contract mapping for linear equation with Cepsilon0\]), let $Q_{j,p,b}$ denote $Q_{A,p,b}$ near $O_{j}$. By (\[equ Crucial observation that Q is the same for neary by weights\]) and the uniqueness of fixed points of contraction mappings, we find $Q_{j,p,b}f=Q_{j,p+\mu,b}f$ when $f\in C^{\infty}_{c}[B_{O_{j}}\setminus O_{j}]$. Since $Q_{j,p,b}\ (Q_{j,p+\mu,b})$ is bounded from $L^{2}_{p,b}(B_{O_{j}})\ (L^{2}_{p+\mu,b}(B_{O_{j}}))$ to $W^{1,2}_{p,b}(B_{O_{j}})\ (W^{1,2}_{p+\mu,b}(B_{O_{j}}))$, their extensions (as in the proof of Theorem \[thm W22 estimate on 1-forms\]) agree on $L^{2}_{p,b}(B_{O_{j}})\subset L^{2}_{p+\mu,b}(B_{O_{j}})$. Hence $$|Q_{j,p,b}f|_{L^{2}_{p,b}(B_{O_{j}})}\leq C|f|_{L^{2}_{p+\mu,b}(B_{O_{j}})}\ \textrm{when}\ f\in L^{2}_{p,b}(B_{O_{j}}).$$
\[lem boostrap Kstar\] Let $A,p,b,\mu$ be as in Theorem \[thm characterizing cokernel\], then $K_{p,b}^{\star}$ is a bounded operator from $L^{2}_{p,b}$ to $L^{2}_{p-\mu,b}$. The bound is uniform as in Definition \[Def Dependence of the constants\].
Assuming $\xi\in L^{2}_{p,b}$ vanishes near the singularities (thus $\xi\in L^{2}_{p,b}$ for any $p$), by Lemma \[lem boostrap K\], we find $$\label{equ lem boostrap Kstar 1}
\int_{M}<K_{p,b}(\frac{\xi}{r^\mu}),\ f>w_{p,b}dV\leq C|K_{p,b}(\frac{\xi}{r^\mu})|_{L^{2}_{p,b}}|f|_{L^{2}_{p,b}}\leq C|\frac{\xi}{r^\mu}|_{L^{2}_{p+\mu,b}}|f|_{L^{2}_{p,b}}.$$
On the other hand, let $\eta_{\epsilon}$ be the cutoff function of the singular points satisfying condition (\[equ cut-off function bound near the singular point\]). Letting $\xi=\eta^{2}_{\epsilon}\frac{K_{p,b}^{\star}f}{r^\mu}$, we obtain $$\label{equ lem boostrap Kstar 2}
\int_{M}<K_{p,b}(\frac{\xi}{r^\mu}),\ f>w_{p,b}dV=\int_{M}\eta^{2}_{\epsilon}\frac{|K_{p,b}^{\star}f|^{2}}{r^{2\mu}}w_{p,b}dV\geq C |\eta_{\epsilon}K_{p,b}^{\star}f|^{2}_{L^{2}_{p-\mu,b}}.$$ Hence (\[equ lem boostrap Kstar 1\]) and (\[equ lem boostrap Kstar 2\]) imply $$|\eta_{\epsilon}K_{p,b}^{\star}f|^{2}_{L^{2}_{p-\mu,b}}\leq C|\eta^{2}_{\epsilon}K_{p,b}^{\star}f|_{L^{2}_{p-\mu,b}}|f|_{L^{2}_{p,b}}\leq C|\eta_{\epsilon}K_{p,b}^{\star}f|_{L^{2}_{p-\mu,b}}|f|_{L^{2}_{p,b}}.$$ Then $|\eta_{\epsilon}K_{p,b}^{\star}f|_{L^{2}_{p-\mu,b}}\leq C|f|_{L^{2}_{p,b}}$. Letting $\epsilon\rightarrow 0$, by the monotone convergence theorem, the proof of Lemma \[lem boostrap Kstar\] is complete.
\[lem cokernel contains the more vanishing sections\] Suppose $L_{A}^{\star}f=0$ away from the singular points and $f\in C^{1,\alpha}_{\frac{7}{2}+p-\mu}(M)$ for some $\mu>0$. Then $f\in coker L_{A}$.
With the same $\eta_{\epsilon}$ as in Lemma \[lem boostrap Kstar\], we compute $$\begin{aligned}
& &\int_{M}<L_{A}\xi,f>w_{p,b}dV=\lim_{\epsilon\rightarrow 0}\int_{M}<L_{A}\xi,\eta_{\epsilon}f>w_{p,b}dV\nonumber
\\&=&\lim_{\epsilon\rightarrow 0}\int_{M}<\xi,G(d\eta_{\epsilon},f)>w_{p,b}dV \triangleq \lim_{\epsilon\rightarrow 0}\Pi_{\epsilon}. \nonumber\end{aligned}$$
By (\[equ cut-off function bound near the singular point\]), (\[equ LA star formula\]), (\[equ Def of Coker\]), and the Hölder inequality, since $\mu>0$, we obtain $$\begin{aligned}
& &\Pi_{\epsilon}
\leq C|\xi|_{L^{2}_{p-1,b}}(\int_{M}|G(d\eta_{\epsilon},f)|^{2}r^{2}w_{p,b}dV)^{\frac{1}{2}}\\&\leq & C\Sigma_{j}|\xi|_{L^{2}_{p-1,b}}(\epsilon^{2\mu}\int_{B_{O_{j}}(2\epsilon)-B_{O_{j}}(\epsilon)}|f|^{2}\frac{w_{p,b}}{r^{2\mu}}dV)^{\frac{1}{2}}
\\&\leq & C\epsilon^{\mu}|\xi|_{L^{2}_{p-1,b}}|f|_{L^{2}_{p-\mu,b}}\rightarrow 0\ \textrm{as}\ \epsilon\rightarrow 0.\end{aligned}$$ The proof is complete.
Appendix
========
Appendix A: Weitzenböck formula in the model case. \[section Appendix A: Weitzenb formula in the model case\]
-------------------------------------------------------------------------------------------------------------
Suppose $p+q\leq n$, $\Phi$ is a $p-$form, $\nu$ is a $q-$form, and both are possibly $adE-$valued. We have $$\label{equ appendix A star wedge}
\star(\Phi\wedge \nu)= (\star \Phi)\llcorner\nu=(-1)^{pq} \Phi \lrcorner(\star\nu).$$
Let the $e_{i}$’s be the standard coordinate vectors in $\R^{7}$. The standard (Euclidean) $G_{2}-$structure over $\R^{7}\setminus O$ is $$\begin{aligned}
\label{eqnarray Euc G2 forms}& &\phi_{0}=e^{123}-e^{145}-e^{167}-e^{246}+e^{257}-e^{347}-e^{356}
\\& & \psi_{0}=-e^{1247}-e^{1256}+e^{1346}-e^{1357}-e^{2345}-e^{2367}+e^{4567}\nonumber\end{aligned}$$
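The $4-$form $\psi_{0}$ is exactly the Euclidean Hodge dual of $\phi_{0}$, i.e. $\psi_{0}=\star\phi_{0}$, and the sign conventions above can be sanity-checked mechanically. A minimal sketch (the helper names are ours), representing a constant-coefficient form as a dictionary from sorted index tuples to coefficients:

```python
def perm_sign(p):
    # parity of the permutation p (a tuple of distinct integers)
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

def hodge_star(form, n=7):
    # Euclidean Hodge star on R^n: star(e^I) = sign(I, I^c) e^(I^c)
    star = {}
    for idx, c in form.items():
        comp = tuple(i for i in range(1, n + 1) if i not in idx)
        star[comp] = c * perm_sign(idx + comp)
    return star

# phi0 = e123 - e145 - e167 - e246 + e257 - e347 - e356
phi0 = {(1, 2, 3): 1, (1, 4, 5): -1, (1, 6, 7): -1, (2, 4, 6): -1,
        (2, 5, 7): 1, (3, 4, 7): -1, (3, 5, 6): -1}
# psi0 = -e1247 - e1256 + e1346 - e1357 - e2345 - e2367 + e4567
psi0 = {(1, 2, 4, 7): -1, (1, 2, 5, 6): -1, (1, 3, 4, 6): 1, (1, 3, 5, 7): -1,
        (2, 3, 4, 5): -1, (2, 3, 6, 7): -1, (4, 5, 6, 7): 1}

assert hodge_star(phi0) == psi0  # psi0 = star(phi0)
```

Since $\star^{2}=(-1)^{3\cdot 4}=+1$ on $3-$forms in $\R^{7}$, the same routine also confirms $\star\psi_{0}=\phi_{0}$.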
Notice the $e_{i}$’s here are not the same as the ones in Section \[Appendix B: proof of cone formula for laplacian\]. We abuse notation for frames across different sections.
\[lem formula for LA squared\]Suppose $A_{O}$ is a cone connection over $E\rightarrow \R^{7}\setminus O$. Let $L_{A_{O}}$ be the deformation operator of $A_{O}$ with respect to $\phi_{0},\psi_{0}$ (see (\[equ introduction formula for deformation operator\])), and $[\begin{array}{c}
\sigma \\
a
\end{array}]$ be as in Section \[section Seperation of variable for the system in the model case\]. Then $$\label{equ bochner identity for general cone connection}
L_{A_{O}}^{2}[\begin{array}{c}
\sigma \\
a
\end{array}]=[\begin{array}{c}
\nabla^{\star}\nabla \sigma -\star([F_{A_{O}},a]\wedge\psi_{0}) \\
\star([F_{A_{O}},\sigma]\wedge\psi_{0})+\nabla^{\star}\nabla a+F_{A_{O}}\underline{\otimes} a-[F_{A_{O}}, a]\lrcorner \psi_{0}.
\end{array}]$$
Suppose $A_{O}$ is a $G_{2}-$instanton with respect to the standard (Euclidean) $G_{2}-$structure, i.e. $\star(F_{A_{O}}\wedge \phi_{0})=-F_{A_{O}}$ (equivalently, $F_{A_{O}}\wedge \psi_{0}=0$); then $$\label{equ bochner identity for instanton cone connection}
L_{A_{O}}^{2}[\begin{array}{c}
\sigma \\
a
\end{array}]=[\begin{array}{c}
\nabla^{\star}\nabla \sigma \\
\nabla^{\star}\nabla a+2F_{A_{O}}\underline{\otimes} a.
\end{array}]$$
All the forms in this proof can possibly be $adE-$valued.
By (\[equ appendix A star wedge\]) and (\[equ introduction formula for deformation operator\]), we have $L_{A_{O}}[\begin{array}{c}
\sigma \\
a
\end{array}]=[\begin{array}{c}
d^{\star}a \\
d\sigma+da\lrcorner \phi_{0}.
\end{array}]$ Hence $$L^{2}_{A_{O}}[\begin{array}{c}
\sigma \\
a
\end{array}]=[\begin{array}{c}
d^{\star}d\sigma -\star([F_{A_{O}},a]\wedge\psi_{0}) \\
\star([F_{A_{O}},\sigma]\wedge\psi_{0})+ dd^{\star}a+\{d[da\lrcorner \phi_{0}]\}\lrcorner \phi_{0}.
\end{array}]$$
We first prove the general formula (\[equ bochner identity for general cone connection\]). For any 1-form $B=B_{i}e^{i}$, we compute $$(dB\lrcorner \phi_{0})(e_{1})=-d_{6}B_{7}+d_{7}B_{6}-d_{4}B_{5}+d_{5}B_{4}-d_{3}B_{2}+d_{2}B_{3}.$$ Let $b$ be a $2-$form and write $b=\Sigma_{i<j}b_{ij}e^{ij}$. Let $B=b\lrcorner \phi_{0}$; then $$\begin{aligned}
& & B_{7}=-b_{16}+b_{25}-b_{34},\ \ B_{6}=-b_{24}+b_{17}-b_{35},\ \ B_{5}=-b_{27}-b_{14}+b_{36}.\nonumber
\\& & B_{4}=b_{15}+b_{37}+b_{26},\ \ B_{3}=-b_{47}-b_{56}+b_{12},\ \ B_{2}=-b_{13}-b_{46}+b_{57}.\end{aligned}$$ $$\begin{aligned}
\textrm{Then} & & \{[d (b\lrcorner \phi_{0})] \lrcorner \phi_{0} \}(e_{1})
\\&=&-\Sigma_{i=2}^{7}d_{i}b_{i1}-<db,e^{625}>+<db,e^{634}>+<db,e^{427}>+<db,e^{537}>\nonumber
\\&=& d^{\star}b(e_{1})-(db\lrcorner \psi_{0})(e_{1}).\nonumber\end{aligned}$$
Therefore, by computing the components on $e_{2},\ldots,e_{7}$ similarly, we arrive at the following intermediate result.
\[equ intermediate formula for LA[2]{}\] For any $adE$-valued $2-$form $b$, the following formula holds. $$[d (b\lrcorner \phi_{0})] \lrcorner \phi_{0}=d^{\star}b-db\lrcorner \psi_{0}.$$
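The component formulas for $B=b\lrcorner \phi_{0}$ displayed above follow from the convention $(b\lrcorner \phi_{0})(e_{k})=\Sigma_{i<j}b_{ij}\,\phi_{0}(e_{i},e_{j},e_{k})$ and can be checked mechanically. A minimal numerical sketch (variable names are ours):

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    # parity of the permutation p
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

# phi0 as a fully antisymmetric 3-tensor (0-based indices for e_1,...,e_7)
TERMS = {(1, 2, 3): 1, (1, 4, 5): -1, (1, 6, 7): -1, (2, 4, 6): -1,
         (2, 5, 7): 1, (3, 4, 7): -1, (3, 5, 6): -1}
phi = np.zeros((7, 7, 7))
for (i, j, k), c in TERMS.items():
    for p in permutations((i - 1, j - 1, k - 1)):
        phi[p] = c * perm_sign(p)

# a random 2-form b = sum_{i<j} b_ij e^{ij}
rng = np.random.default_rng(0)
b = np.triu(rng.standard_normal((7, 7)), k=1)

# (b contracted into phi0)(e_k) = sum_{i<j} b_ij phi0(e_i, e_j, e_k)
B = np.array([sum(b[i, j] * phi[i, j, k]
                  for i in range(7) for j in range(i + 1, 7))
              for k in range(7)])

def bb(i, j):  # 1-based access to b_ij
    return b[i - 1, j - 1]

expected = {7: -bb(1, 6) + bb(2, 5) - bb(3, 4),
            6: -bb(2, 4) + bb(1, 7) - bb(3, 5),
            5: -bb(2, 7) - bb(1, 4) + bb(3, 6),
            4:  bb(1, 5) + bb(3, 7) + bb(2, 6),
            3: -bb(4, 7) - bb(5, 6) + bb(1, 2),
            2: -bb(1, 3) - bb(4, 6) + bb(5, 7)}
for k, val in expected.items():
    assert abs(B[k - 1] - val) < 1e-12
```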
Letting $b=da$, we obtain $[d (da\lrcorner \phi_{0})] \lrcorner \phi_{0}=d^{\star}da-[F_{A_{O}},a]\lrcorner \psi_{0}$. Using the Bochner identity $d^{\star}da+dd^{\star}a=\nabla^{\star}\nabla a+F_{A_{O}}\underline{\otimes} a$, we find $$dd^{\star}a+\{d[da\lrcorner \phi_{0}]\}\lrcorner \phi_{0}= \nabla^{\star}\nabla a+F_{A_{O}}\underline{\otimes} a-[F_{A_{O}},a]\lrcorner \psi_{0}.$$ The proof of (\[equ bochner identity for general cone connection\]) is complete.
Next, suppose $A_{O}$ is a $G_{2}-$instanton with respect to $\phi_{0}$; we prove (\[equ bochner identity for instanton cone connection\]). Notice that in this case we automatically have $\star([F_{A_{O}},a]\wedge\psi_{0})=0$ and $\star([F_{A_{O}},\sigma]\wedge\psi_{0})=0$. We compute $$-[F_{A_{O}},a]\lrcorner \psi_{0} =-\star(\phi_{0}\wedge [F_{A_{O}},a])=\star\{-\phi_{0}\wedge F_{A_{O}}\wedge a\}+\star\{\phi_{0}\wedge a\wedge F_{A_{O}}\}.$$
The instanton equation implies $$\star\{-\phi_{0}\wedge F_{A_{O}}\wedge a\}=-[\star(\phi_{0}\wedge F_{A_{O}})]\lrcorner a=F_{A_{O}}\llcorner a,$$ and $$\star\{\phi_{0}\wedge a\wedge F_{A_{O}}\}=-\star\{ a\wedge \phi_{0} \wedge F_{A_{O}}\}=a\lrcorner \star( \phi_{0} \wedge F_{A_{O}})=-a\lrcorner F_{A_{O}}.$$
Then we get $$-[F_{A_{O}},a]\lrcorner \psi_{0} =F_{A_{O}}\llcorner a-a\lrcorner F_{A_{O}}=\Sigma_{i,j}[F_{ij},a_{i}]e^{j}\triangleq F_{A_{O}}\underline{\otimes} a.$$
The proof of (\[equ bochner identity for instanton cone connection\]) and Lemma \[lem formula for LA squared\] is complete.
Appendix B: Proof of Lemma \[lem cone formula for laplacian\] \[Appendix B: proof of cone formula for laplacian\]
-----------------------------------------------------------------------------------------------------------------
The polar coordinate formula for $1-$forms is more involved. We need some preliminary identities. The Euclidean metric is equal to $dr^{2}+r^{2}g_{S^{n-1}}$. For any point $(\mathfrak{p},r)\in \R^{7}\setminus O$, we choose $e_{1},\ldots,e_{n-1}$ as an orthonormal frame on $S^{n-1}$ near $\mathfrak{p}$. Furthermore, we require $e_{1},\ldots,e_{n-1}$ to be the geodesic coordinates on the sphere at $\mathfrak{p}$.
As vector fields defined in polar coordinates near $\mathfrak{p}\times (0,1)$, for any $r$, let $\nabla^{S}$ denote the covariant derivative induced on $S^{n-1}(r)$; $\nabla^{S}$ is just the Levi-Civita connection of the induced metric. Since (the metric on) $S^{n-1}(r)$ differs from the unit sphere by a constant rescaling, and the $e_{i}$'s are the geodesic coordinates at $\mathfrak{p}$, then $$\label{equ cone formula the ei are geodesic coordinate at p}
\nabla^{S}_{e_{j}}e_{i}=0 \ \textrm{along}\ \mathfrak{p}\times (0,1),\ \textrm{for all}\ i,\ j.$$ Notice the $e_{i}$’s here are not the same as the ones in Section \[section Appendix A: Weitzenb formula in the model case\]. We again abuse notation.
The vector fields $\frac{\partial}{\partial r},\frac{e_{i}}{r}$, $i=1,\ldots,n-1$, form an orthonormal basis for the Euclidean metric over $\R^{7}\setminus O$.
\[lem covariant derivatives\] In a neighbourhood of $\mathfrak{p}\times (0,1)$ in $\R^{n}\setminus O$, we have $$\nabla_{\frac{\partial}{\partial r}}e_{i}=\frac{e_{i}}{r};\ \ \ \ \ \ \ \nabla_{\frac{\partial}{\partial r}}e^{i}=-\frac{e^{i}}{r};\ \ \ \ \ \ \;\ \nabla_{e_{i}}dr=re^{i}.$$ $$\label{equ at p ei are orthonormal basis on sphere}
\nabla_{e_{i}}e^{k}=\nabla^{S}_{e_{i}}e^{k}-\delta_{ik}\frac{dr}{r};\ \ \ \ \ \ \nabla_{e_{i}}e_{j}=\nabla^{S}_{e_{i}}e_{j}-\delta_{ij}r\frac{\partial }{\partial r}.$$
Hence we compute for any $\phi\in \Omega^{1}_{\Xi}(S^{n-1})$ that $$\label{equ radial derivative of a spherical form}
\nabla_{\frac{\partial}{\partial r}}\phi=\nabla_{\frac{\partial}{\partial r}}(\phi_{i}e^{i})=-\frac{\phi}{r};\ \nabla_{\frac{\partial}{\partial r}}\nabla_{\frac{\partial}{\partial r}}\phi=\frac{2\phi}{r^{2}}.$$
We first observe that, for any bundle-valued form $b$, $$-\nabla^{\star}\nabla b=\Sigma_{k=1}^{n}\nabla^{2}b(v_{k},v_{k}),\ (v_{k})\ \textrm{is an orthonormal basis}.$$ Since this definition does not depend on the orthonormal basis chosen, we can use $\frac{\partial}{\partial r},\frac{e_{1}}{r},\ldots,\frac{e_{n-1}}{r}$ to obtain $$\begin{aligned}
\label{eqnarray cone formula 1 forms 1}& &-\nabla^{\star}\nabla b=\nabla^{2}b(\frac{\partial}{\partial r},\frac{\partial}{\partial r})+\frac{1}{r^{2}}\Sigma_{i=1}^{n-1}\nabla^{2}b(e_{i},e_{i})
\\&=&\nabla_{\frac{\partial}{\partial r}}\nabla_{\frac{\partial}{\partial r}}b+\frac{1}{r^{2}}\Sigma_{i=1}^{n-1}\nabla_{i}\nabla_{i}b-\frac{1}{r^{2}}\nabla_{(\Sigma_{i}\nabla_{i}e_{i})}b\nonumber
\\&=&\nabla_{\frac{\partial}{\partial r}}\nabla_{\frac{\partial}{\partial r}}b+\frac{1}{r^{2}}\Sigma_{i=1}^{n-1}\nabla_{i}\nabla_{i}b-\frac{1}{r^{2}}\nabla_{(\Sigma_{i}\nabla^{S}_{i}e_{i})}b+\frac{n-1}{r}\nabla_{\frac{\partial}{\partial r}}b.\nonumber\end{aligned}$$ The $\nabla_{i}\nabla_{i}b$ should be understood as $\nabla_{i}(\nabla_{i}b)$.
Part I: we compute the rough laplacian of the radial part $a_{r}\frac{dr}{r}$. By (\[eqnarray cone formula 1 forms 1\]), $$\begin{aligned}
\label{eqnarray cone formula 1 forms 2}& & -\nabla^{\star}\nabla(a_{r}\frac{dr}{r})
\\&=&\nabla_{\frac{\partial}{\partial r}}\nabla_{\frac{\partial}{\partial r}}(a_{r}\frac{dr}{r})+\frac{1}{r^{2}}\Sigma_{i=1}^{n-1}\nabla_{i}\nabla_{i}(a_{r}\frac{dr}{r})-\frac{1}{r^{2}}\nabla_{(\Sigma_{i}\nabla^{S}_{i}e_{i})}(a_{r}\frac{dr}{r})+\frac{n-1}{r}\nabla_{\frac{\partial}{\partial r}}(a_{r}\frac{dr}{r}).\nonumber\end{aligned}$$ Term-wise computation gives $$\begin{aligned}
\label{eqnarray termwise computation for laplacian of the radial term}
& &\nabla_{\frac{\partial}{\partial r}}(a_{r}\frac{dr}{r})
=(\nabla_{\frac{\partial }{\partial r}}a_{r}) \frac{dr}{r}-a_{r}\frac{dr}{r^2};\\& & \nabla_{\frac{\partial}{\partial r}}\nabla_{\frac{\partial}{\partial r}}(a_{r}\frac{dr}{r})=(\nabla_{\frac{\partial }{\partial r}}\nabla_{\frac{\partial }{\partial r}}a_{r}) \frac{dr}{r}-2(\nabla_{\frac{\partial}{\partial r}}a_{r})\frac{dr}{r^{2}}+\frac{2}{r^{3}}a_{r}dr.\nonumber\end{aligned}$$ For the hardest term $\Sigma_{i=1}^{n-1}\nabla_{i}\nabla_{i}(a_{r}\frac{dr}{r})$, fix $i$, we compute $$\label{equ cone formula 1 forms 1}
\nabla_{i}\nabla_{i}(a_{r}\frac{dr}{r})=(\nabla_{i}\nabla_{i}a_{r})\frac{dr}{r}+2(\nabla_{i}a_{r})(\nabla_{i}\frac{dr}{r})+a_{r}\nabla_{i}\nabla_{i}\frac{dr}{r}.$$
Using $$\nabla_{i}\frac{dr}{r}=e^{i},\ \nabla_{i}\nabla_{i}\frac{dr}{r}=-\frac{dr}{r}+\nabla^{S}_{i}e^{i},\ \textrm{and}\ \Sigma_{i=1}^{n-1}\nabla_{i}\nabla_{i}a_{r}=\Delta_{S}a_{r}+\nabla^{S}_{\Sigma_{i}\nabla^{S}_{i}e_{i}}a_{r},$$ we obtain (by summing up $i$ in (\[equ cone formula 1 forms 1\])) $$\label{equ hardest term in rough laplacian of radial term}
\Sigma_{i=1}^{n-1}\nabla_{i}\nabla_{i}(a_{r}\frac{dr}{r})=(\Delta_{S}a_{r})\frac{dr}{r}+(\nabla^{S}_{\Sigma_{i}\nabla^{S}_{i}e_{i}}a_{r})\frac{dr}{r}+2d_{S}a_{r}-(n-1)a_{r}\frac{dr}{r}+a_{r}(\Sigma_{i}\nabla^{S}_{i}e^{i})$$
By (\[equ cone formula the ei are geodesic coordinate at p\]), (\[eqnarray termwise computation for laplacian of the radial term\]), (\[eqnarray cone formula 1 forms 2\]), and (\[equ hardest term in rough laplacian of radial term\]), along $\mathfrak{p}\times (0,1)$, we have $$\begin{aligned}
\label{equ rough lapla of the radial term}
& &-\nabla^{\star}\nabla (a_{r}\frac{dr}{r})
\\&=&(\nabla_{\frac{\partial}{\partial r}}\nabla_{\frac{\partial}{\partial r}}a_{r})\frac{dr}{r}+\frac{n-3}{r}(\nabla_{\frac{\partial }{\partial r}}a_{r})\frac{dr}{r}+\frac{1}{r^{2}}[\Delta_{S}a_{r}-(2n-4)a_{r}]\frac{dr}{r}+\frac{2}{r^{2}}d_{S}a_{r}.\nonumber\end{aligned}$$
Since (\[equ rough lapla of the radial term\]) is independent of the coordinate chosen, and $\mathfrak{p}$ is arbitrary, then it holds everywhere on $\R^{7}\setminus O$.
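Formula (\[equ rough lapla of the radial term\]) can be verified independently: on flat space the rough Laplacian of a $1-$form is the componentwise Euclidean Laplacian in Cartesian coordinates. The following symbolic sketch checks the $n=3$ instance for the radial family $a_{r}=r^{k}$ (our choice of test family), where the right hand side reduces to $(k^{2}-k-2)r^{k-2}\frac{dr}{r}$:

```python
import sympy as sp

x, y, z, k = sp.symbols('x y z k')
r2 = x**2 + y**2 + z**2  # r^2

# the 1-form a_r dr/r with a_r = r^k: since dr/r = (x dx + y dy + z dz)/r^2,
# its dx-component in Cartesian coordinates is r^(k-2) x
omega_x = r2**((k - 2) / 2) * x

# on flat R^3 the rough Laplacian acts componentwise in Cartesian coordinates
lap = sum(sp.diff(omega_x, v, 2) for v in (x, y, z))

# the polar formula (n = 3, Delta_S a_r = d_S a_r = 0) predicts the coefficient
# a_r'' + (n-3) a_r'/r - (2n-4) a_r/r^2 = (k^2 - k - 2) r^(k-2) of dr/r,
# whose dx-component is (k^2 - k - 2) r^(k-4) x
predicted = (k**2 - k - 2) * r2**((k - 4) / 2) * x

for kv in (2, 4, 6, 8):  # exact rational checks at a sample point
    pt = {x: sp.Rational(2, 3), y: sp.Rational(1, 2), z: sp.Rational(1, 5), k: kv}
    assert sp.simplify((lap - predicted).subs(pt)) == 0
```

The same componentwise computation for general $n$ reproduces the coefficient $a_{r}''+\frac{n-3}{r}a_{r}'-\frac{2n-4}{r^{2}}a_{r}$ above.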
Part II: we compute the rough laplacian of the spherical part $a_{s}$ (which has no radial component). First we have $$-\nabla^{\star}\nabla a_{s}
=\nabla_{\frac{\partial}{\partial r}}\nabla_{\frac{\partial}{\partial r}}a_{s}+\frac{1}{r^{2}}\Sigma_{i=1}^{n-1}\nabla_{i}\nabla_{i}a_{s}-\frac{1}{r^{2}}\nabla_{(\Sigma_{i}\nabla^{S}_{i}e_{i})}a_{s}+\frac{n-1}{r}\nabla_{\frac{\partial}{\partial r}}a_{s}.$$ In this case, we only have to compute the crucial term $\nabla_{i}\nabla_{i}a_{s}$. We write $a_{s}=\Sigma_{i=1}^{n-1}a_{i}e^{i}$; then $$\nabla_{i}a_{s}=(\nabla_{i}a_{k})e^{k}+a_{k}\nabla_{i}e^{k}=\nabla^{S}_{i}a_{s}+a_{k}(\nabla_{i}e^{k}-\nabla_{i}^{S}e^{k})
=\nabla^{S}_{i}a_{s}-a_{s}(e_{i})\frac{dr}{r}.$$
=\nabla^{S}_{i}a_{s}-a_{s}(e_{i})\frac{dr}{r}.$$
To compute $\nabla_{i}\nabla_{i}a_{s}$, it suffices to compute $\nabla_{i}\nabla_{i}^{S}a_{s}$ and $\nabla_{i}[a_{s}(e_{i})\frac{dr}{r}]$.
$$\Sigma_{i}\nabla_{i}\nabla_{i}^{S}a_{s}=\Sigma_{i}\nabla^{S}_{i}\nabla_{i}^{S}a_{s}-\Sigma_{i}(\nabla_{i}^{S}a_{s})(e_{i})\frac{dr}{r}=\Delta_{S}a_{s}+\nabla^{S}_{\nabla^{S}_{i}e_{i}}a_{s}+d^{\star}a_{s}\frac{dr}{r}.$$
$$\begin{aligned}
\textrm{
On the other hand}& &\Sigma_{i}\nabla_{i}[a_{s}(e_{i})\frac{dr}{r}]=\Sigma_{i}\{[\nabla^{S}_{i}a_{s}(e_{i})]\frac{dr}{r}+\frac{1}{r}a_{i}\nabla_{i}dr\}\nonumber
\\&=& \Sigma_{i}\{(\nabla^{S}_{i}a_{s})(e_{i})\frac{dr}{r}+a_{s}(\nabla_{i}^{S}e_{i})\frac{dr}{r}+a_{i}e^{i}\}\nonumber
\\&=& -(d^{\star}a_{s})\frac{dr}{r}+a_{s}(\nabla_{i}^{S}e_{i})\frac{dr}{r}+a_{s}.\end{aligned}$$
$$\label{equ cone formula 1 forms 2}\textrm{Then}\
\Sigma_{i}\nabla_{i}\nabla_{i}a_{s}=\Delta_{S}a_{s}+2(d^{\star}a_{s})\frac{dr}{r}+\nabla^{S}_{\nabla^{S}_{i}e_{i}}a_{s}-a_{s}(\nabla_{i}^{S}e_{i})\frac{dr}{r}-a_{s}.$$
By (\[equ at p ei are orthonormal basis on sphere\]), (\[equ cone formula 1 forms 2\]), and (\[equ cone formula the ei are geodesic coordinate at p\]), the following holds along $\mathfrak{p}\times (0,1)$ $$\label{equ rough lapla of the spherical part} -\nabla^{\star}\nabla a_{s}=
\nabla_{\frac{\partial}{\partial r}}\nabla_{\frac{\partial}{\partial r}}a_{s}+\frac{n-1}{r}\nabla_{\frac{\partial}{\partial r}}a_{s}
+\frac{1}{r^{2}}(\Delta_{S}a_{s}-a_{s})+(2d^{\star}a_{s})\frac{dr}{r^{3}}.$$ It does not depend on the coordinates chosen, thus holds everywhere. The proof of Lemma \[lem cone formula for laplacian\] is completed by combining (\[equ rough lapla of the radial term\]) and (\[equ rough lapla of the spherical part\]).
Appendix C: Fundamental facts on elliptic systems
-------------------------------------------------
**In this section, we work under the same conditions as in Theorem \[Thm Fredholm\]**. For any $q$ near a singular point $O$, denote $$\label{equ abbreviation for the ball}B=B_{q}(\frac{r_{q}}{100}),$$ then $B$ lies in one coordinate sector. Let $L_{\psi}$ be the local elliptic operator with $A=0$ in $B$ i.e. $L_{\psi}[\begin{array}{c}
\sigma \\
a
\end{array}]=[\begin{array}{c}
d^{\star}a \\
d\sigma+da\lrcorner \phi
\end{array}]$. For any locally defined $G_{2}-$structure $(\phi,\psi)$, we have the following weighted estimate due to Nirenberg-Douglis [@Nirenberg]. $$|\xi|^{\star}_{2,\alpha,B}\leq C|L_{\psi} \xi|^{(1)}_{1,\alpha,B}+C|\xi|_{0,B},$$ where “$C$” depends at most on the $C^{5}-$norm and the non-degeneracy of $\phi$.
An easy but important building-block is
\[lem Schauder estimate in local coordinates in small balls\] Suppose $\xi\in C^{k,\alpha}(B)$; then the following estimate holds. $$|\xi|^{\star}_{k,\alpha,B}\leq C|L_{A}\xi|^{[1]}_{k-1,\alpha,B}+C|\xi|_{0,B},\ k=2,3,4.$$ The same estimate holds with $L_{A}$ replaced by $L_{A}^{\star}$.
We only prove it for $L_{A}$ when $k=2$. For $L_{A}^{\star}$, note that Definition \[Def global weight and Sobolev spaces\] implies $|\frac{dw_{p,b}}{w_{p,b}}|\leq \frac{C}{r}$; thus we verify that $|\frac{dw_{p,b}}{w_{p,b}}|^{[1]}_{2,0,B}\leq |\frac{dw_{p,b}}{w_{p,b}}|^{(1)}_{2,0,B}<C$. Then formula (\[equ LA star formula\]) implies that the proof for $L_{A}$ directly carries over to $L_{A}^{\star}$.
The admissible conditions imply $|A|^{[1]}_{2,0,B}
\leq C$, hence $|A|^{[1]}_{1,\alpha,B}
\leq C$. Using $|(L_{A}-L_{\psi})\xi|=|A\otimes_{g_{\phi}}\xi|$ and splitting of weight, we have $$|A\otimes_{g}\xi|^{[1]}_{1,\alpha,B}\leq |A|^{[1]}_{1,\alpha,B}|\xi|^{\star}_{1,\alpha,B}\leq C|\xi|^{\star}_{1,\alpha,B}.$$ Thus $|(L_{A}-L_{\psi})\xi|^{[1]}_{1,\alpha,B}\leq C|\xi|^{\star}_{1,\alpha,B}$, and this implies $$\label{equ Schauder est with junk terms}
|\xi|^{\star}_{2,\alpha,B}\leq C(|L_{A}\xi|^{[1]}_{1,\alpha,B}+|\xi|_{0,B})+|(L_{A}-L_{\psi})\xi|^{[1]}_{1,\alpha,B}\leq C(|L_{A}\xi|^{[1]}_{1,\alpha,B}+|\xi|^{\star}_{1,\alpha,B})$$ By Lemma 6.32 in [@GilbargTrudinger] (standard interpolation), for any $\mu\in (0,\frac{1}{100})$, we have $$\label{equ intepolation for Schauder est}
|\xi|^{\star}_{1,\alpha,B}\leq \mu[\nabla^{2}\xi]^{[2]}_{0,B}+C_{\mu}|\xi|_{0,B}.$$ Then (\[equ Schauder est with junk terms\]) and (\[equ intepolation for Schauder est\]) imply Lemma \[lem Schauder estimate in local coordinates in small balls\].
\[prop log weighted Schauder estimate\] Let $\gamma$ be any real number. Suppose $\xi$ is $C^{2,\alpha}$ away from the singularity, $L_{A}\xi\in C_{(\gamma,b)}^{1,\alpha}(M)$, and $\xi\in C_{(\gamma-1,b)}^{0}(M)$. Then $\xi\in C_{(\gamma-1,b)}^{2,\alpha}(M)$, and $$|\xi|^{(\gamma-1,b)}_{2,\alpha,M}\leq C|L_{A}\xi|^{(\gamma,b)}_{1,\alpha,M}+C|\xi|^{(\gamma-1,b)}_{0,M}.$$
We only consider the case when $1+\gamma+\alpha\geq 0$; by our choice of $x$ and $y$ (the paragraph above (\[equ Prop Apriori Schauder 1 Appendix C\])), the proof is similar when $1+\gamma+\alpha<0$. Without loss of generality, for any singular point $O$, we only consider $x\in B_{O}(\rho_{0})$ and $B_{x}(\frac{r_{x}}{1000})$. For any $y\in B_{x}(\frac{r_{x}}{1000})$, the distance from $y$ to $\partial B_{x}(\frac{r_{x}}{1000})$ is less than $r_{y}$, and we have $2r_{y}>r_{x}>\frac{r_{y}}{2}$, $10(-\log r_{y})>-\log r_{x}>\frac{-\log r_{y}}{10}$. Thus $r_{x,y}\simeq r_{y} \simeq r_{x}\simeq \underline{r_{x,y}}$. Hence, using the proof of (4.20) in Theorem 4.8 of [@GilbargTrudinger] and Lemma \[lem Schauder estimate in local coordinates in small balls\], multiplying both sides of the estimate in Lemma \[lem Schauder estimate in local coordinates in small balls\] by $r_{q}^{\gamma-1}(-\log r_{q})^{b}$, we obtain the following lower order estimate
$$\label{equ C20 estimate in the apriori Schauder}
|\xi|^{(\gamma-1,b)}_{2,0,M}\leq C|L_{A}\xi|^{(\gamma,b)}_{1,\alpha,M}+C|\xi|^{(\gamma-1,b)}_{0,M}.$$
It remains to estimate the following highest-order term. $$Q_{x,y}=(-\log \underline{r_{x,y}})^{b}r^{1+\gamma+\alpha}_{x,y}\frac{|\nabla^{2}\xi(x)-\nabla^{2} \xi(y)|}{|x-y|^{\alpha}}.$$ This can be done in a standard way. We may assume $r_{x}\geq r_{y}$, since otherwise we only need to interchange them. We note that this choice is different from the paragraph above (6.15) in [@GilbargTrudinger]. Suppose $y\in B_{x}(\frac{r_{x}}{2000})$; the proof of (\[equ C20 estimate in the apriori Schauder\]) implies one more conclusion: $$\label{equ Prop Apriori Schauder 1 Appendix C}
Q_{x,y}\leq C|L_{A}\xi|^{(\gamma,b)}_{1,\alpha,M}+C|\xi|^{(\gamma-1,b)}_{0,M}.$$ When $y\notin B_{x}(\frac{r_{x}}{2000})$ (we only need to consider $y$ in $B_{O}(\rho_{0})$), we find $$\begin{aligned}
& &Q_{x,y}
\\&\leq & (-\log r_{x})^{b}r^{1+\gamma}_{x}|\nabla^{2}\xi(x)|+(-\log r_{y})^{b}r^{1+\gamma}_{y}|\nabla^{2} \xi(y)|\leq C|\xi|^{(\gamma-1,b)}_{2,0,M}\nonumber
\\& \leq & C|L_{A}\xi|^{(\gamma,b)}_{\alpha,M}+C|\xi|^{(\gamma-1,b)}_{0,M},\ \textrm{by}\ (\ref{equ C20 estimate in the apriori Schauder}).\nonumber\end{aligned}$$
The proof of Proposition \[prop log weighted Schauder estimate\] is complete.
Working on each coordinate patch separately, by the proof of Lemma 6.32 in [@GilbargTrudinger] with a slight modification (to handle the log weight, as in Proposition \[prop log weighted Schauder estimate\]), we obtain
(Interpolation)\[lem C1 C0 intepolate Calpha\] For any $\mu<\frac{1}{3000}$, $b\geq 0$, $0<\alpha<1$, and any real number $k$, there exists a constant $C_{\mu,k,\alpha,b}$ with the following property. For any section $\xi$ and non-negative integer $j$, the following interpolation holds. $$\label{equ intepolation 1 for calpha norm}
|\nabla^{j}\xi|^{(k,b)}_{\alpha,M}\leq \mu |\nabla^{j+1} \xi|^{(k+1,b)}_{0,M}+C_{\mu,k,\alpha,b} |\nabla^{j}\xi|^{(k,b)}_{0,M},$$ where $\nabla^{j}\xi$ is viewed as a combination of locally defined matrix-valued tensors in each chart of $\mathbb{U}_{\rho_{0}}$.
\[lem regularity of solution to the deformation equation with C1alpha rhs\] Suppose $B$ is a ball such that $2B$ is contained in a single coordinate neighbourhood away from the singularity. Suppose $\xi\in W^{1,2}(2B)$ and $L_{A}\xi\in C^{1,\alpha}(2B)$ (in the sense of strong solutions). Then $\xi\in C^{2,\alpha}(B)$.
We believe Lemma \[lem regularity of solution to the deformation equation with C1alpha rhs\] exists in the literature. Since the author cannot find an exact reference, we give a proof for the reader's convenience.
We apply $-L_{A}$ to the equation again to obtain $-L^{2}_{A}\xi=-L_{A}f\in C^{\alpha}(2B)$, where $f\triangleq L_{A}\xi$. From the proof of Lemma \[lem formula for LA squared\], we see that the difference of the Weitzenböck formula between the model case and the general case is some lower order term (involving at most first derivatives of $\xi$ in local coordinates). Then $$\label{equ in regularity main lem}
\Delta_{g_{\phi}}\xi=\nabla\xi\otimes T_{\phi,\psi,A,1}+\xi\otimes T_{\phi,\psi,A,0}-L_{A}f,$$ where the $T$’s are tensors depending algebraically on $\phi$, $\psi$, $A$, and their derivatives. The $T$’s might be only locally defined, but this is sufficient.
The important point is that $\Delta_{g_{\phi}}\xi$ means the metric Laplacian of each entry of $\xi$ in local coordinates, thus (\[equ in regularity main lem\]) is actually a system of scalar equations. Then Lemma \[lem regularity of solution to the deformation equation with C1alpha rhs\] follows by applying Lemma \[lem regularity of the laplace equation\] successively.
([@GilbargTrudinger])\[lem regularity of the laplace equation\] Under the same conditions on $B$ as in Lemma \[lem regularity of solution to the deformation equation with C1alpha rhs\], suppose $\xi \in W^{1,2}(2B)$ is a weak solution to $$\label{equ standard laplace equation in lem regularity of laplace equation}
\Delta_{g_{\phi}}\xi=h\ \textrm{in}\ 2B.$$ If $h\in L^{p}(2B)$, $p\geq 2$, then $\xi\in L^{2,p}(B)$. If $h\in C^{\alpha}(2B)$, $0<\alpha<1$, then $\xi\in C^{2,\alpha}(B)$.
It suffices to construct local solutions (with optimal regularity) to (\[equ standard laplace equation in lem regularity of laplace equation\]). Viewing (\[equ standard laplace equation in lem regularity of laplace equation\]) as a system of scalar equations and prescribing boundary value $0$ over $\partial B$: when $h\in L^{p}(2B)$ and $p\geq 2$, Theorem 9.15 in [@GilbargTrudinger] implies that (\[equ standard laplace equation in lem regularity of laplace equation\]) admits a solution $\bar{\xi}\in L^{2,p}(B)$ (in $B$); when $h\in C^{\alpha}(2B)$, Theorem 6.14 in [@GilbargTrudinger] gives a solution $\bar{\xi}\in C^{2,\alpha}(B)$ to (\[equ standard laplace equation in lem regularity of laplace equation\]). In both cases, $\Delta_{g}(\bar{\xi}-\xi)=0$, therefore $\bar{\xi}-\xi$ is smooth by Lemma \[lem regularity of kernel and cokernel\]. Then $\xi=(\xi-\bar{\xi})+\bar{\xi}\in L^{2,p}(B)$ or $C^{2,\alpha}(B)$, when $h\in L^{p}(B)$ or $C^{\alpha}(B)$, respectively.
\[lem regularity of kernel and cokernel\](see Gilkey [@Gilkey]) Suppose $f\in L^{2}_{p,b}$ belongs to the cokernel of $L_{A}$ in the distribution sense (see (\[equ Def of Coker\])). Then $f$ is smooth away from the singular points. Similarly, $\xi\in ker L_{A}\subset W^{1,2}_{p,b}$ implies that $\xi$ is smooth away from the singularities.
This is an easy exercise on pseudo-differential operators. We only need to show that the conditions imply $L^{\star}_{A}f=0$, where we view $L^{\star}_{A}f$ as an element of $H^{-1}$ (see Lemma 1.2.1 in [@Gilkey]). Then Lemma 1.3.1 of [@Gilkey] is directly applicable.
To achieve this, for any ball $B$ such that $100B$ is still away from the singularities, we choose $\eta$ as the standard cutoff function which vanishes outside $2B$ and is identically $1$ in $B$. We also choose $\chi$ as the standard cutoff function which vanishes outside $3B$ and is identically $1$ in $2B$. By Lemmas 1.2.1 and 1.1.6 in [@Gilkey], using a limiting argument with respect to the smoothing of $\chi f$, we obtain $\eta L^{\star}_{A} (\chi f)=0$ as an element of $H^{-1}$. Since $L_{A}^{\star}$ is elliptic, we conclude by Lemma 1.3.1 of [@Gilkey] that $f$ is smooth in $B$.
\[lem bounding local perturbation of deformation operator \]Suppose $\tau_{0}\leq \delta$, and $\psi_{1},\psi_{2}$ are two $G_{2}-$structures over $B_{O}(2\tau_{0})$ with $|\psi_{1}-\psi_{2}|<\delta$. Suppose $A_{1},\ A_{2}$ are two connections over $B_{O}(\tau_{0})$ with $|A_{1}-A_{2}|<\frac{\delta}{r}$. $$\label{equ Lem bounding local perturbation of deformation operator }
\textrm{Then}\ |(L_{A_{1},\psi_{1}}-L_{A_{2},\psi_{2}})\xi|\leq C\delta|\nabla_{A_{2}}\xi|+\frac{C\delta|\xi|}{r}\ \textrm{in}\ B_{O}(\tau_{0}).$$
The $C$ depends on $C^{2}-$norms of $\psi_{1},\psi_{2}$ and $C^{0}-$norm of $rA_{1}$ in local coordinates. The $\nabla_{A_2}$ is the covariant derivative with respect to the Euclidean metric in the coordinate. The estimate (\[equ Lem bounding local perturbation of deformation operator \]) still holds for $\nabla_{A_{1}}$ and with respect to any smooth metric.
In (\[equ introduction formula for deformation operator\]), we only estimate the difference from $d_{A}^{\star}a$; the errors from the other terms are similar. By $d_{A}^{\star}=-\star d_{A} \star$, we note $\star_{\psi_{1}}d_{A_{1}}\star_{\psi_{1}}-\star_{\psi_{2}}d_{A_{2}}\star_{\psi_{2}}
=(\star_{\psi_{1}}-\star_{\psi_{2}})d_{A_{1}}\star_{\psi_{1}}+\star_{\psi_{2}}(d_{A_{1}}-d_{A_{2}})\star_{\psi_{1}}+\star_{\psi_{2}}d_{A_{2}}(\star_{\psi_{1}}-\star_{\psi_{2}}).$ We only estimate the last term $\star_{\psi_{2}}d_{A_{2}}(\star_{\psi_{1}}-\star_{\psi_{2}})$; the estimates of the other terms are similar and easier. Using $
d_{A_{2}}[(\star_{\psi_{1}}-\star_{\psi_{2}})a]= (\star_{\psi_{1}}-\star_{\psi_{2}})\otimes \nabla_{A_{2}}a+[\nabla (\star_{\psi_{1}}-\star_{\psi_{2}})]\otimes a$, we get $|(L_{A_{1},\psi_{1}}-L_{A_{2},\psi_{2}})\xi|\leq C\delta|\nabla_{A_{2}}\xi|+C|\xi|$. Then we obtain (\[equ Lem bounding local perturbation of deformation operator \]) when $r< \tau_{0}$.
Appendix D: Density and smooth convergence of Fourier Series \[section Appendix D: Density and smooth convergence of Fourier Series\]
-------------------------------------------------------------------------------------------------------------------------------------
\[Lemma Appendix D\] Let $S$ be a closed Riemannian manifold (of any dimension), and $\Xi\rightarrow S$ be a smooth $SO(m)-$vector bundle with an inner product. Suppose $A_{S}$ is a smooth connection on $\Xi$, and $\Delta_{A_{S}}=\nabla^{\star}_{A_{S}}\nabla_{A_{S}}+\mathfrak{F}$ is a **self-adjoint** Laplacian-type operator acting on sections of $\Xi$, where $\nabla^{\star}_{A_{S}}\nabla_{A_{S}}$ is the rough Laplacian of $A_{S}$ and the Riemannian metric, and $\mathfrak{F}$ is a smooth algebraic operator (which does not involve any covariant derivative). Let the $\beta$'s be the real eigenvalues of $\Delta_{A_{S}}$, repeated according to their multiplicities, and the $\Psi_{\beta}$'s be the corresponding orthonormal basis of eigensections in $L^{2}_{\Xi}(S)$. Then for any smooth section $\underline{f}$ of $\Xi$, the Fourier-series $\Sigma_{\beta}\underline{f}_{\beta}\Psi_{\beta}$ (of $\underline{f}$) converges to $\underline{f}$ in the $C^{\infty}-$topology.
Moreover, the speed of convergence only depends on the $C^{\infty}-$norm of $\underline{f}$, i.e. there exists an integer $\tau>0$ depending only on $\Delta_{A_{S}}$, such that for any $\epsilon>0$ and integer $s\geq 0$, there exists a $k$ depending only on $|\underline{f}|_{W^{2\tau+2s,2}(S)}$, $\epsilon$, and $\Delta_{A_{S}}$, such that $|\underline{f}-\Sigma_{\beta< k}\underline{f}_{\beta}\Psi_{\beta}|_{W^{2s,2}(S)}<\epsilon$.
\[cor Appendix D:\] In the setting of Lemma \[lem density of smooth functions in weighted L2 space\] and Theorem \[thm W22 estimate on 1-forms\], let $\underline{f}\in C^{\infty}_{c}[B_{O}(\rho)\setminus O]$, then the Fourier-series in (\[equ raw ODE for 1 forms\]) converges in $C^{0}[B_{O}(\rho)]$ to $\underline{f}$.
Since $\Delta_{A_{S}}$ is bounded from below, by considering $\Delta_{A_{S}}+a_{0}I$ for some sufficiently large $a_{0}$, we may assume all the $\beta$'s are larger than $1000$.
In Claim \[clm by Zeta function\], we note that $\Delta_{A_{S}}^{l}(\Sigma_{\beta<k_{l}}\underline{f}_{\beta}\Psi_{\beta})$ is the Fourier partial sum of $\Delta_{A_{S}}^{l}\underline{f}$. Let $F_{k}$ denote $\Sigma_{\beta\geq k}\underline{f}_{\beta}\Psi_{\beta}$, $F_{k,m}$ denote the partial sum $\Sigma_{k\leq \beta<m}\underline{f}_{\beta}\Psi_{\beta}$, and $k=100+\sup_{0\leq l\leq s}k_{l}$. The standard $W^{2,2}-$estimate for $\Delta_{A_{S}}$ and Claim \[clm by Zeta function\] imply $$\label{equ Cor appendix D 1}
|\Delta^{s-1}_{A_{S}}F_{k,m}|_{W^{2,2}(S)}\leq \bar{C}|\Delta^{s}_{A_{S}} F_{k,m}|_{L^{2}(S)}+\bar{C}|\Delta^{s-1}_{A_{S}}F_{k,m}|_{L^{2}(S)}\leq \bar{C}\epsilon$$ uniformly in $m$. Letting $m\rightarrow \infty$, we find $\Delta^{s-1}_{A_{S}}F_{k}\in W^{2,2}(S)$ and $$\label{equ Cor appendix D 2}
|\Delta^{s-1}_{A_{S}}F_{k}|_{W^{2,2}(S)}\leq \bar{C}|\Delta^{s}_{A_{S}} F_{k}|_{L^{2}(S)}+\bar{C}|\Delta^{s-1}_{A_{S}} F_{k}|_{L^{2}(S)}\leq \bar{C}\epsilon.$$ By induction, using Theorem 5.2 in [@Lawson] and estimates similar to (\[equ Cor appendix D 1\]) and (\[equ Cor appendix D 2\]), we obtain $|F_{k}|_{W^{2s,2}(S)}\leq \bar{C}_{s}\epsilon.$ Replacing $\bar{C}_{s}\epsilon$ by $\epsilon$, the proof of Lemma \[Lemma Appendix D\] is complete, assuming the following claim.
\[clm by Zeta function\] For any $\epsilon>0$, integer $l\geq 0$, there exists a $k_{l}$ depending only on $|\underline{f}|_{W^{2l+2\tau,2}(S)}$, $\epsilon$, $l$, $\Delta_{A_{S}}$, such that $|\Delta_{A_{S}}^{l}\underline{f}-\Delta_{A_{S}}^{l}(\Sigma_{\beta<k_{l}}\underline{f}_{\beta}\Psi_{\beta})|_{L^{2}(S^{6})}<\epsilon$.
The proof of the Claim is by the asymptotic property of zeta-functions. For any positive integer $t$, using $$\underline{f}_{\beta}=\int_{S}<\underline{f},\Psi_{\beta}>=\frac{\int_{S}<\underline{f},\Delta^{t}_{A_{S}}\Psi_{\beta}>}{\beta^{t}}=\frac{\int_{S}<\Delta^{t}_{A_{S}}\underline{f},\Psi_{\beta}>}{\beta^{t}},$$ we get $|\underline{f}_{\beta}|<\bar{C}\frac{|\underline{f}|_{W^{2t,2}(S)}}{\beta^{t}}.$ Then $
|\Delta_{A_{S}}^{l}F_{k_{l}}|_{L^{2}(S^{6})}\leq \bar{C}|\underline{f}|_{W^{2t,2}(S)}\Sigma_{\beta\geq k_{l}}\frac{1}{\beta^{t-l}}.$ The sum $\Sigma_{\beta\geq k_{l}}\frac{1}{\beta^{t-l}}$ is part of the zeta-function of $\Delta_{A_{S}}$. There exists a large enough $\tau$ with respect to the data in Lemma \[Lemma Appendix D\], such that $\Sigma_{\beta}\frac{1}{\beta^{t-l}}$ converges to an analytic function of $t-l\geq \tau$. By Corollary 2.43 in [@Getzler], or Lemma 1.10.1 in [@Gilkey], we can take $\tau=\frac{dim S}{2}+2$. Nevertheless, we don’t need $\tau$ to be explicit.
Taking $t=\tau+l$ and $k_{l}$ large enough with respect to $\epsilon$ and the zeta function of $\Delta_{A_{S}}$, the proof of Claim \[clm by Zeta function\] is complete.
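The decay mechanism behind Claim \[clm by Zeta function\] can be illustrated numerically in the simplest model setting, which is an assumption for illustration only (the circle $S^{1}$ rather than the operator $\Delta_{A_{S}}$ on $S$): the eigenfunctions are Fourier modes with eigenvalues $m^{2}$, and for a smooth function the coefficients decay faster than any power of the eigenvalue, so the zeta-type tail series is negligible beyond a modest cutoff.

```python
import numpy as np

# Model-case illustration on S^1 (not the operator Delta_{A_S} of the
# text): eigenfunctions are Fourier modes e^{imx} with eigenvalues m^2.
# For a smooth function the coefficients decay faster than any power of
# m, so the tail sum controlled by the zeta-type series is tiny.
n = 1024
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.exp(np.cos(x))                # smooth periodic test function
coeff = np.abs(np.fft.rfft(f)) / n   # |Fourier coefficients|
head = coeff[:32].sum()              # low modes carry essentially all mass
tail = coeff[32:].sum()              # tail beyond the first 32 modes
```

Here the tail is already far below any practical tolerance, mirroring how $k_{l}$ in the Claim depends only mildly on $\epsilon$.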
The condition $\underline{f}\in C^{\infty}_{c}[B_{O}(\rho)\setminus O]$ implies that, by viewing $\underline{f}$ as an $r-$dependent smooth section, the Sobolev norms of $\underline{f}$ are uniformly bounded in $r$, i.e., $|\underline{f}(r,\cdot)|_{W^{2t,2}(S^{n-1})}\leq C_{f,t}$. Moreover, $\underline{f}$ (and its Fourier coefficients) vanishes when $r$ is small enough. Then for any $\epsilon$, taking $s=\frac{n-1}{4}+10$, by the Sobolev imbedding there exists a $k$ as in Lemma \[Lemma Appendix D\] such that the estimate $$|\underline{f}(r,\cdot)-\Sigma_{\beta< k}\underline{f}_{\beta}(r)\Psi_{\beta}|_{C^{0}(S^{n-1})}\leq \bar{C}_{\underline{f}}|\underline{f}(r,\cdot)-\Sigma_{\beta< k}\underline{f}_{\beta}(r)\Psi_{\beta}|_{W^{2s,2}(S^{n-1})}<\epsilon$$ holds uniformly in $r$. The proof of Corollary \[cor Appendix D:\] is complete.
Without loss of generality, we assume $\rho=1$. Dropping the last condition in Definition \[Def of L22 norm model case\], we first show that $C^{\infty}_{c}[B_{O}(1)\setminus O]$ is dense in $L^{2}_{p,b} [B_{O}(1)]$. We assume $\underline{f}$ satisfies the condition after the “which” in Lemma \[lem density of smooth functions in weighted L2 space\]. In local coordinates, $\underline{f}$ is a matrix-valued function. For any $\epsilon>0$, by absolute continuity of the Lebesgue integral (Theorem 4.12 in [@Zhouminqiang]), for any small enough $h>0$, we can decompose $\underline{f}=\underline{f}_{0}+\underline{f}_{1}$. $\underline{f}_{0}$ is supported in $V_{+,O}\setminus V_{+,h}$ and $
\int_{ V_{+,O}\setminus V_{+,h}}|\underline{f}_{0}|^{2}wdV<(\frac{\epsilon}{2})^{2},\ \underline{f}_{1}\ \textrm{is supported in}\ V_{+,h},$ where $V_{+,h}$ is the set of points with distance to $\partial V_{+,O}$ greater than $h$.
Since $\underline{f}_{1}$ is supported away from the singular point, using Lemma 7.2 in [@GilbargTrudinger], we can find $\bar{\underline{f}}_{+}$ such that $|\bar{\underline{f}}_{+}-\underline{f}_{1}|_{L^{2}_{p,b}(V_{+,O})}<\frac{\epsilon}{2}, supp \bar{\underline{f}}_{+} \subset\ V_{+,\frac{h}{2}}.$ Then $|\bar{\underline{f}}_{+}-\underline{f}|_{L^{2}_{p,b}(V_{+,O})}\leq |\bar{\underline{f}}_{+}-\underline{f}_{1}|_{L^{2}_{p,b}(V_{+,O})}+|\underline{f}_{0}|_{L^{2}_{p,b}(V_{+,O})}<\epsilon$. $\bar{\underline{f}}_{+}$ is exactly the desired approximation. Let $\bar{\underline{f}}_{-}$ be the same approximation in $V_{-,O}$. We denote the partition of unity over $S^{n-1}$ subordinate to $U_{+}, U_{-}$ as $\eta_{+},\eta_{-}$, and pull them back to $\R^{7}\setminus O$. Letting $\underline{f}_{\epsilon}=\eta_{+}\bar{\underline{f}}_{+}+\eta_{-}\bar{\underline{f}}_{-}$, we obtain $$\label{equ Density in L2 1}
|\underline{f}_{\epsilon}-\underline{f}|_{L^{2}_{p,b}[B_{O}(1)]}\leq |\eta_{+}\underline{f}-\eta_{+}\bar{\underline{f}}_{+}|_{L^{2}_{p,b}[V_{+,O}]}+|\eta_{-}\underline{f}-\eta_{-}\bar{\underline{f}}_{-}|_{L^{2}_{p,b}[V_{-,O}]}<2\epsilon.$$
Viewing $\underline{f}_{\epsilon}$ as an $r-$dependent smooth section of $\Xi\rightarrow S^{6}(1)$, Corollary \[cor Appendix D:\] implies that the series $\Sigma_{v}\underline{f}_{\epsilon,v}(r)\Psi_{v}$ (see (\[equ raw ODE for 1 forms\])) converges to $\underline{f}_{\epsilon}$ in $C^{0}[B_{O}(1)]$, and the following holds for some large enough $v_{0}>0$. $$\label{equ Density in L2 2}
\int_{B_{O}(1)}|\underline{f}_{\epsilon}-\underline{f}_{[v_{0}],\epsilon}|^{2}wdV<\epsilon^{2},\ \textrm{where}\ \underline{f}_{[v_{0}],\epsilon}\triangleq \Sigma_{v< v_{0}}\underline{f}_{\epsilon,v}(r)\Psi_{v}.$$ Inequalities (\[equ Density in L2 1\]) and (\[equ Density in L2 2\]) imply $|\underline{f}_{[v_{0}],\epsilon}-\underline{f}|_{L^{2}_{p,b}[B_{O}(1)]}<3\epsilon$.
The following proof does not depend on Corollary \[cor Appendix D:\].
We only consider the case $k=1$, and assume $\rho=1$. The assertion $W^{1,2}_{p,b}[B_{O}(1)]\subset \mathfrak{W}^{1,2}_{p,b}[B_{O}(1)]$ is an easy exercise using the monotone convergence theorem and Theorem 7.4 in [@GilbargTrudinger] away from the singularity.
The assertion $ \mathfrak{W}^{1,2}_{p,b}[B_{O}(1)]\subset W^{1,2}_{p,b}[B_{O}(1)]$ means every section $\xi\in \mathfrak{W}^{1,2}_{p,b}[B_{O}(1)]$ can be approximated by smooth sections defined in $B_{O}(1)$ (away from $O$) in the $W^{1,2}_{p,b}[B_{O}(1)]$-topology. It suffices to show that every $\xi\in \mathfrak{W}^{1,2}_{p,b}(V_{+,O})$ (in local coordinates) can be approximated by smooth multi-matrix-valued functions defined in $V_{+,O}$. This is done by using the proof of Theorem 7.9 in [@GilbargTrudinger] with the “$\frac{\epsilon}{2^{j}}$” (in (7.25) there) replaced by $\frac{\epsilon}{2^{j}\tau_{j}}$, where $\tau_{j}=1+\sup_{\Omega_{j}}\frac{\omega}{r^{2}}$, and $\Omega_{j}$ is the corresponding open set in a natural cover of $V_{+,O}$.
Appendix E: Various integral identities and proof of Proposition \[prop bounding L2 norm of Hessian for the model cone laplace equation\] \[section Appendix E: Various integral identities and proof\]
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
\[lem rough L2 identity integration by parts \]Under the conditions as in Proposition \[prop existence of solution with lowest order estimate\] and \[prop final ODE W22 estimate when v=0\], let $u$ be as in (\[equ solution when v=0\]), let $\bar{u}$ and $\bar{f}$ be as in Claim \[clm weight change on the ODE\] and (\[weighted ODE\]), we have $$\begin{aligned}
\label{equ rough L2 identity integration by parts}
& &\int^{\frac{1}{2}}_{0}\bar{f}^{2}rw_{0} dr
\\&=& \int^{\frac{1}{2}}_{0}|\frac{d^{2} \bar{u}}{d r^{2}}|^{2}rw_{0}dr+(k^{2}+2a^{2})\int^{\frac{1}{2}}_{0}|\frac{d \bar{u}}{d r}|^{2}\frac{w_{0}}{r} dr\nonumber
+ (a^{4}-2ka^{2}-2a^{2})\int^{\frac{1}{2}}_{0}\frac{ \bar{u}^{2} w_{0}}{ r^{3}} dr\\& &-k\int^{\frac{1}{2}}_{0}|\frac{d \bar{u}}{d r}|^{2}\frac{d w_{0}}{d r} dr
+(2+k)a^{2}\int^{\frac{1}{2}}_{0}\frac{ \bar{u}^{2} }{ r^{2}} \frac{d w_{0}}{d r}dr-a^{2}\int^{\frac{1}{2}}_{0}\frac{ \bar{u}^{2} }{ r}\frac{d^{2} w_{0}}{d r^{2}} dr\nonumber
\\& &+ k |\frac{d \bar{u}}{d r}|^{2}w_{0}|^{\frac{1}{2}}_{0}- (k+1)a^{2} \frac{ \bar{u}^{2}}{ r^{2}}w_{0}|^{\frac{1}{2}}_{0}-2a^{2} \frac{d \bar{u}}{d r}\frac{ \bar{u}}{ r}w_{0}|^{\frac{1}{2}}_{0}+a^{2}\frac{ \bar{u}^{2}}{ r}\frac{d w_{0}}{d r}|^{\frac{1}{2}}_{0}\nonumber\end{aligned}$$
Integrating the square of both hand sides of (\[weighted ODE\]) over $(0,\frac{1}{2})$ with respect to $ rw_{0}$, we obtain $$\begin{aligned}
\label{eqnarray simply integrate the square of both hand sides}
& &\int^{\frac{1}{2}}_{0}\bar{f}^{2}rw_{0} dr
\\&=& \int^{\frac{1}{2}}_{0}|\frac{d^{2} \bar{u}}{d r^{2}}|^{2}rw_{0}dr+k^{2}\int^{\frac{1}{2}}_{0}|\frac{d \bar{u}}{d r}|^{2}\frac{w_{0}}{r} dr\nonumber
+ a^{4}\int^{\frac{1}{2}}_{0}\frac{ \bar{u}^{2} w_{0}}{ r^{3}} dr\\& &+2k\int^{\frac{1}{2}}_{0}\frac{d^{2} \bar{u}}{d r^{2}}\frac{d \bar{u}}{d r}w_{0} dr
-2a^{2}\int^{\frac{1}{2}}_{0}\frac{d^{2} \bar{u}}{d r^{2}}\frac{\bar{u}}{ r}w_{0}dr-2ka^{2}\int^{\frac{1}{2}}_{0}\frac{d\bar{u}}{d r}\frac{\bar{u}}{ r^{2}}w_{0}dr\nonumber.\end{aligned}$$ Integration by parts (of $\frac{d}{d r}$) gives $$\begin{aligned}
& &2k\int^{\frac{1}{2}}_{0}\frac{d^{2} \bar{u}}{d r^{2}}\frac{d \bar{u}}{d r}w_{0} dr=-k\int^{\frac{1}{2}}_{0}|\frac{d \bar{u}}{d r}|^{2}\frac{d w_{0}}{d r}dr
+k|\frac{d \bar{u}}{d r}|^{2}w_{0}\mid^{\frac{1}{2}}_{0}
\\& &-2ka^{2}\int^{\frac{1}{2}}_{0}\frac{d\bar{u}}{d r}\frac{\bar{u}}{ r^{2}}w_{0}dr
=-2ka^{2}\int^{\frac{1}{2}}_{0}\frac{\bar{u}^{2}w_{0}}{ r^{3}}dr+ka^{2}\int^{\frac{1}{2}}_{0}\frac{\bar{u}^{2}}{ r^{2}}\frac{d w_{0}}{d r}dr-ka^{2}\frac{\bar{u}^{2}w_{0}}{r^{2}}\mid^{\frac{1}{2}}_{0}.\nonumber\end{aligned}$$ $$\begin{aligned}
& &-2a^{2}\int^{\frac{1}{2}}_{0}\frac{d^{2}\bar{u}}{d r^{2}}\frac{\bar{u}}{ r}w_{0}dr
\\&=&-2a^{2}\frac{d \bar{u}}{d r}\frac{\bar{u}}{r}w_{0}\mid^{\frac{1}{2}}_{0} -2a^{2}\int^{\frac{1}{2}}_{0}\frac{\bar{u}^{2}w_{0}}{ r^{3}}dr+2a^{2}\int^{\frac{1}{2}}_{0}\frac{\bar{u}^{2}}{ r^{2}}\frac{d w_{0}}{d r}dr-a^{2}\frac{\bar{u}^{2}w_{0}}{r^{2}}\mid^{\frac{1}{2}}_{0}\nonumber
\\& &+a^{2}\frac{\bar{u}^{2}}{r}\frac{d w_{0}}{d r}\mid^{\frac{1}{2}}_{0}+2a^{2}\int^{\frac{1}{2}}_{0}|\frac{d \bar{u}}{d r}|^{2}\frac{w_{0}}{r}dr-a^{2}\int^{\frac{1}{2}}_{0}\frac{ \bar{u}^{2}}{ r}\frac{d^{2}w_{0}}{d r^{2}}dr.\nonumber\end{aligned}$$ Plugging the above in (\[eqnarray simply integrate the square of both hand sides\]), the proof of Lemma \[lem rough L2 identity integration by parts \] is complete.
\[lem log estimate\] Let $b\geq 0$. For any $k$, there exists a constant $C_{k,b}$ which depends only on the positive lower bound of $|k-1|$ (not on the upper bound), with the following properties. Suppose $k<1$, then $$\label{equ the log integral 1}
\int^{r}_{0}x^{-k}(-\log x)^{2b}dx\leq \frac{C_{k,b}}{1-k}r^{1-k}(-\log r)^{2b},\ \textrm{for all}\ r\in\ [0,\frac{1}{2}].$$ Suppose $k>1$, then $$\label{equ the log integral 2}
\int^{\frac{1}{2}}_{r}x^{-k}(-\log x)^{2b}dx\leq \frac{C_{k,b}}{k-1}r^{1-k}(-\log r)^{2b},\ \textrm{for all}\ r\in\ [0,\frac{1}{2}].$$
This is an elementary calculus exercise. We only prove (\[equ the log integral 1\]) assuming $b<\frac{1}{2}$. The general case and the proof of (\[equ the log integral 2\]) are similar, except that we have more terms of the same nature. We compute $$\label{eqnarray log integral with induction}\int^{r}_{0}x^{-k}(-\log x)^{2b}dx
= \frac{x^{1-k}}{1-k}(-\log x)^{2b}|^{r}_{0}+\frac{2b}{1-k}\int^{r}_{0}x^{-k}(-\log x)^{2b-1}dx.$$ Since $2b-1<0$, we have $\int^{r}_{0}x^{-k}(-\log x)^{2b-1}dx\leq C\int^{r}_{0}x^{-k}dx= \frac{Cr^{1-k}}{1-k}$. Thus the right hand side of (\[eqnarray log integral with induction\]) is bounded by $\frac{C_{k}}{1-k}r^{1-k}(-\log r)^{2b}$.
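As a numerical sanity check of (\[equ the log integral 1\]) — not part of the proof, and with an empirical constant $C=4$ in place of the $C_{k,b}$ of the lemma — one can compare the cumulative integral with the claimed bound on a grid refined near the singularity:

```python
import numpy as np

def check_log_bound(k, b, C=4.0):
    """Check numerically, for sample k < 1 and b >= 0, that
    int_0^r x^{-k}(-log x)^{2b} dx <= C/(1-k) * r^{1-k}(-log r)^{2b}
    holds for all r in (0, 1/2]; C = 4 is an empirical constant."""
    x = np.logspace(-12, np.log10(0.5), 400001)  # dense near x = 0
    g = x ** (-k) * (-np.log(x)) ** (2 * b)
    # cumulative trapezoid rule for int_0^{x_i} g dx
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(x))))
    bound = C / (1.0 - k) * x ** (1.0 - k) * (-np.log(x)) ** (2 * b)
    return bool(np.all(integral <= bound))
```

The check passes comfortably for, e.g., $k=0.5$, $b=0.25$; for $b=0$ the two sides differ exactly by the factor $C$.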
Let $\eta_{\epsilon}$ be as in (\[equ cut-off function bound near the singular point\]), and let $d_{j}$ denote $\nabla_{A_{O}, \frac{\partial}{\partial x_{j}}}$ (under the Euclidean metric and coordinates). We compute for any $\varrho>0$ that $$\begin{aligned}
\label{eqnarray L22 identity from integration by parts}& &\int_{B_{O}(\frac{\varrho}{4})}|\nabla^{\star}_{A_{O}}\nabla_{A_{O}}\xi|^{2}\eta_{\epsilon}\chi^{2}wdV=\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}d_{j}\xi,d_{i}d_{i}\xi>\eta_{\epsilon}\chi^{2}w dV
\\&=& -\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,d_{j}d_{i}d_{i}\xi>\eta_{\epsilon}\chi^{2}w dV-\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,d_{i}d_{i}\xi>[\nabla_{j}(\eta_{\epsilon}\chi^{2}w )]dV\nonumber
\\&=& -\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,d_{i}d_{j}d_{i}\xi>\eta_{\epsilon}\chi^{2}w dV+\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,[F_{ij},d_{i}\xi]>\eta_{\epsilon}\chi^{2}w dV \nonumber
\\& &-\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,d_{i}d_{i}\xi>[\nabla_{j}(\eta_{\epsilon}\chi^{2}w )]dV \nonumber\end{aligned}$$ $$\begin{aligned}
&=& \Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{i}d_{j}\xi,d_{j}d_{i}\xi>\eta_{\epsilon}\chi^{2}w dV
+\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,d_{j}d_{i}\xi>[\nabla_{i}(\eta_{\epsilon}\chi^{2}w )]dV\nonumber
\\& &+\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,[F_{ij},d_{i}\xi]>\eta_{\epsilon}\chi^{2}w dV \nonumber
-\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,d_{i}d_{i}\xi>[\nabla_{j}(\eta_{\epsilon}\chi^{2}w )]dV
\\&=& \Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}d_{i}\xi,d_{j}d_{i}\xi>\eta_{\epsilon}\chi^{2}w dV
+\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <[F_{ij},\xi],d_{j}d_{i}\xi>\eta_{\epsilon}\chi^{2}w dV \nonumber
\\& &+\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,d_{j}d_{i}\xi>[\nabla_{i}(\eta_{\epsilon}\chi^{2}w )]dV+\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,[F_{ij},d_{i}\xi]>\eta_{\epsilon}\chi^{2}w dV\nonumber
\\& &- \Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,d_{i}d_{i}\xi>[\nabla_{j}(\eta_{\epsilon}\chi^{2}w )]dV. \nonumber\end{aligned}$$ We then expand all derivatives of the form $\nabla (\eta_{\epsilon}\chi^{2}w)$. By the method in (\[equ integration by parts holds true in the case of L12 model estimate wrt to cone\]), Lemma \[lem bound on C3 norm of solution to laplace equation when f is smooth and vainishes near O\], and the proof of (\[equ bounding L2 norm of gradient for the model cone laplace equation\]), all the integrals containing $\nabla \eta_{\epsilon}$ tend to $0$ as $\epsilon\rightarrow 0$; thus the equality between the top and bottom of (\[eqnarray L22 identity from integration by parts\]) gives $$\begin{aligned}
\label{eqnarray prop hessian L2 bound model case}& & \int_{B_{O}(\frac{\varrho}{4})}|\nabla^{2}_{A_{O}}\xi|^{2}\chi^{2}wdV
\\&=& \int_{B_{O}(\frac{\varrho}{4})}|\nabla^{\star}_{A_{O}}\nabla_{A_{O}}\xi|^{2}\chi^{2}wdV-\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <[F_{ij},\xi],d_{j}d_{i}\xi>\chi^{2}w dV \nonumber
\\& &-\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,d_{j}d_{i}\xi>[\nabla_{i}(\chi^{2}w )]dV-\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,[F_{ij},d_{i}\xi]>\chi^{2}w dV\nonumber
\\& &+\Sigma_{j,i}\int_{B_{O}(\frac{\varrho}{4})} <d_{j}\xi,d_{i}d_{i}\xi>[\nabla_{j}(\chi^{2}w )]dV. \nonumber
\end{aligned}$$
Using (\[equ gradient of the cutoff function and weight\]), (\[equ in L2 bound on the gradient consequence of Bochner formula\]), and (\[eqnarray prop hessian L2 bound model case\]), by the proof of (\[eqnarray in model L12 bound 1\]) and (\[eqnarray the L12 estimate model case with a small error term on the right to be absorbed\]), we deduce $$\begin{aligned}
\label{eqnarray prop hessian L2 bound 2}& & \int_{B_{O}(\frac{\varrho}{4})}|\nabla^{2}_{A_{O}}\xi|^{2}\chi^{2}wdV
\\& \leq & \bar{C}_{\vartheta}\int_{B_{O}(\frac{\varrho}{4})}\frac{|\nabla_{A_{O}}\xi|^{2}}{r^{2}}\chi^{2}wdV+\bar{C}_{\vartheta}\int_{B_{O}(\frac{\varrho}{4})}|\nabla\chi|^{2}|\nabla_{A_{O}}\xi|^{2}wdV\nonumber
\\& &+\bar{C}_{\vartheta} \int_{B_{O}(\frac{\varrho}{4})}\frac{|\xi|^{2}}{r^{4}}\chi^{2}wdV+\bar{C}\int_{B_{O}(\frac{\varrho}{4})}|\underline{f}|^{2} \chi^{2}wdV
+\vartheta \int_{B_{O}(\frac{\varrho}{4})}|\nabla_{A_{O}}^{2}\xi|^{2}\chi^{2}wdV.\nonumber
\end{aligned}$$
Let $\chi$ be the standard cutoff function that is identically $1$ over $B_{O}(\frac{\varrho}{5})$ and vanishes outside $B_{O}(\frac{\varrho}{4.5})$; then the proof of (\[equ in model L12 bound 1\]) implies $$\label{eqnarray prop hessian L2 bound 3}\bar{C}_{\vartheta}\int_{B_{O}(\frac{\varrho}{4})}|\nabla\chi|^{2}|\nabla_{A_{O}}\xi|^{2}wdV\leq \bar{C}_{\vartheta}\int_{B_{O}(\frac{\varrho}{4.5})}\frac{|\nabla_{A_{O}}\xi|^{2}}{r^{2}}wdV.$$
Taking $\vartheta=\frac{1}{10}$ and combining (\[eqnarray prop hessian L2 bound 2\]) and (\[eqnarray prop hessian L2 bound 3\]), the proof is complete.
Notation and Subject Index
==========================
The locations given in the “definition” column include the nearby material.
Subject or Notation definition
----------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
$A-$generic Def \[Def A generic\]
admissible connections Def \[Def Admissable connection with polynomial or exponential convergence\]
$H_{p,b}$, $H_{p}$, $N_{p,b}$, $N_{p}$ Def \[Def Hybrid spaces\], Def \[Def abbreviation of notations for spaces\]
$C^{k,\alpha}_{\gamma,b}$, $C^{k,\alpha}_{\gamma}$, $|\cdot|^{(\gamma,b)}_{k,\alpha}$,$|\cdot|^{(\gamma)}_{k,\alpha}$ Def \[Def Schauder spaces\], Def \[Def Global Schauder norms\], Def \[Def abbreviation of notations for spaces\]
$|\cdot|^{[y]}_{k,\alpha,B}$, $|\cdot|^{\star}_{k,\alpha,B}$ Proof of Theorem \[thm C0 est\],(\[equ norm III\])
$\mathbb{U}_{\tau_{0}}$, $\tau_{0}-$admissible cover, $\mathbb{U}_{\rho_{0}}$ Def \[Def admissable open cover\], Def \[Def Global Schauder norms\]
condition ${\circledS_{A,p}}$ Def \[Def condition SAp\]
admissible $\delta_{0}-$deformation of the $G_{2}-$structure, $\phi$, $\psi$, $\phi_{0}$, $\psi_{0}$ Def \[Def deformation of the G2 structure\], (\[eqnarray Euc G2 forms\])
$W^{2,2}_{p,b}$, $W^{1,2}_{p,b}$, $W^{1,2}_{p}$,$L^{2}_{p,b}$, $L^{2}_{p}$ Def \[Def global weight and Sobolev spaces\], Def \[Def of L22 norm model case\], Def \[Def abbreviation of notations for spaces\]
$\underline{\otimes}$, $\llcorner$, $\lrcorner$, $\otimes$ Def \[Def Two tensor products\], Def \[Def tensor product\]
$\bar{C}$ Def \[Def special constants\], Def \[Def tensor product\]
$L_{A}$, $L_{A_{O}}$, $L_{A,\underline{\psi}}$, $L_{A}^{\star}$ (\[equ Def of equation for square of model LAO\]), Lemma \[lem Launderline psi is an isomorphism from Jp to Np\], (\[equ introduction formula for deformation operator\]), (\[equ LA star formula\])
$Q_{A,p,b}$, $Q_{A,p}$ Corollary \[Cor solving model laplacian equation over the ball without the compact support RHS condition\], Def \[Def abbreviation of notations for spaces\]
$J_{p,b}$, $J_{p}$ Remark \[rmk Jpb\], Def \[Def abbreviation of notations for spaces\]
$G(\cdot,\cdot)$ (\[equ Def G..\])
$v$, $v-$spectrum, $\beta$, $\Psi_{\beta}$, $\Psi_{v}$ Def \[Def v spectrum\]
$K_{p,b}$ Theorem \[thm global parametrix\]
$\Xi$ Def \[Def the bundle xi\]
$\perp$, $\parallel$ Def \[Def Fredholm operators and isomorphisms\]
$O$, $O_{j}$, $V_{O,+}$, $V_{O,-}$, $U_{+}$, $U_{-}$ Def \[Def admissable open cover\]
$\Upsilon_{A_{O}}$, $\Upsilon_{A_{O_{j}}}$ Proposition \[prop seperation of variable for general cone\]
$\Delta_{s}$ (\[equ cone formula for the 0form with proper basis\])
$w$, $w_{p,b}$ Def \[Def of L22 norm model case\], Def \[Def global weight and Sobolev spaces\]
$dV$ Def \[Def volume forms\]
$\vartheta_{-p}$, $\vartheta_{1-p}$ Def \[Def spectrum gap\]
$coker L_{A}$ (\[equ Def of Coker\])
$d_{j}$, $d_{i}$ (\[eqnarray L22 identity from integration by parts\])
$r,\ r_{x},\ r_{x,y},\ \underline{r_{x,y}}$ Remark \[rmk homotopy property and def of r\], Def \[Def local Schauder norms\]
(p,b)-Fredholm Def \[Def Fredholm operators and isomorphisms\]
[0]{}
K. Akutagawa, G. Carron, R. Mazzeo. *The Yamabe problem on stratified spaces*. arXiv:1210.8054. To appear in Geometric and Functional Analysis.
S.B. Angenent. *Shrinking doughnuts*. Nonlinear Diffusion Equations and their Equilibrium States (Gregynog, 1989).
N. Berline, E. Getzler, M. Vergne. *Heat Kernels and Dirac Operators*. Grundlehren Text Editions. 2004.
R. Bott, L.W. Tu. *Differential Forms in Algebraic Topology*. Graduate Texts in Mathematics. 1982.
R.L. Bryant. *Some remarks on $G_{2}-$structures*. Proceedings of the 12th Gökova Geometry-Topology Conference, 75-109.
T.H. Colding, W.P. Minicozzi II. *Generic mean curvature flow I; generic singularities*. Annals of Mathematics 175 (2012), 755-833.
B. Charbonneau, D. Harland. *Deformations of nearly Kähler instantons*. arXiv:1510.07720.
X.X. Chen, Y.Q. Wang. *Bessel functions, heat kernel and the Conical Kähler-Ricci flow*. Journal of Functional Analysis, Volume 269, Issue 2.
A. Degeratu, R. Mazzeo. *Fredholm theory for elliptic operators on quasi-asymptotically conical spaces*. arXiv:1406.3465.
S.K. Donaldson. *Floer homology groups in Yang-Mills theory*. Cambridge Tracts in Mathematics, 147.
S.K. Donaldson. *Kähler metrics with cone singularities along a divisor*. In: Essays on Mathematics and its applications (P.M. Pardalos et al., Eds.), Springer, 2012, pp. 49-79.
S.K. Donaldson, E. Segal. *Gauge Theory in higher dimensions, II*. From: “Geometry of special holonomy and related topics” (NC Leung, ST Yau, editors), Surv. Differ. Geom. 16, International Press (2011), 1–41.
S.K. Donaldson, R.P. Thomas. *Gauge Theory in Higher Dimensions*. From: “The Geometric Universe” (SA Huggett, LJ Mason, KP Tod, S Tsou, NMJ Woodhouse, editors), Oxford Univ. Press (1998), 31–47.
A. Douglis, L. Nirenberg. *Interior estimates for elliptic systems of partial differential equations*. Communications on Pure and Applied Mathematics, Volume 8, Issue 4, pages 503-538, November 1955.
L.C. Evans. *Partial differential equations*. Graduate Studies in Mathematics, Vol. 19. AMS.
L. Foscolo. *Deformation theory of periodic monopoles (with singularities)*. arXiv:1411.6946.
L. Foscolo, M. Haskins. *New $G_{2}-$holonomy cones and exotic nearly Kähler structures on $S^{6}$ and $S^{3}\times S^{3}$*. arXiv:1501.07838.
D. Gilbarg, N.S. Trudinger. *Elliptic Partial Differential Equations of Second Order*. Springer.
P.B. Gilkey. *Invariance theory, the heat equation, and the Atiyah-Singer index theorem*. Library of Congress Catalog Card Number 84-061166. ISBN 0-914098-20-9.
V. Gol’dshtein, A. Ukhlov. *Weighted Sobolev spaces and embedding theorems*. Trans. Amer. Math. Soc. 361 (2009), 3829-3850.
M. Haskins, H.J. Hein, J. Nordström. *Asymptotically cylindrical Calabi-Yau manifolds*. arXiv:1212.6929. To appear in J. Diff. Geom.
D.D. Joyce. *Compact manifolds with special holonomy*. Oxford Mathematical Monographs. Oxford University Press. 2000.
H.B. Lawson, M.-L. Michelsohn. *Spin Geometry*. Princeton Mathematical Series 38.
R.B. Lockhart, R.C. McOwen. *Elliptic differential operators on noncompact manifolds*. Annali della Scuola Normale Superiore di Pisa - Classe di Scienze (1985), Volume 12, Issue 3, pages 409-447.
R. Mazzeo. *Elliptic theory of differential edge operators I*. Comm. Partial Differential Equations 16 (1991), no. 10, 1615-1664.
R.B. Melrose, G. Mendoza. *Elliptic operators of totally characteristic type*. MSRI, Berkeley, CA, June 1983, MSRI 047-83.
G. Oliveira. *$G_{2}$-monopoles with singularities (examples)*. Unpublished work.
G. Oliveira. *Monopoles on the Bryant-Salamon $G_{2}$-manifolds*. Journal of Geometry and Physics, vol. 86 (2014), pp. 599-632, ISSN 0393-0440.
G. Oliveira. *Calabi-Yau Monopoles for the Stenzel Metric*. To appear in Communications in Mathematical Physics.
G. Oliveira. *Monopoles on 3 dimensional AC manifolds*. arXiv:1412.2252.
H. Sa Earp, T. Walpuski. *$G_{2}-$instantons over twisted connected sums*. Geometry and Topology 19 (2015), 1263-1285.
D.A. Salamon, T. Walpuski. *Notes on the octonians*. arXiv:1005.2820.
J. Song, X. Wang. *The greatest Ricci lower bound, conical Einstein metrics and the Chern number inequality*. arXiv:1207.4839. To appear in Geometry and Topology.
T. Walpuski. *$G_{2}-$instantons on generalised Kummer constructions*. Geometry and Topology 17 (2013), 2345-2388.
G. Tian. *Gauge theory and calibrated geometry*. Ann. Math. 151 (2000), 193-268.
F. Xu. *On instantons on nearly Kähler 6-manifolds*. Asian J. Math., Vol. 13, No. 4, pp. 535-568, December 2009. International Press.
B.Z. Yang. *The uniqueness of tangent cones for Yang-Mills connections with isolated singularities*. Advances in Mathematics 180 (2003), 648-691.
M.Q. Zhou. *Theory of Real Functions* (Mandarin Chinese). Peking University Press, 2nd edition (1991).
Yuanqi Wang, Department of Mathematics, Stony Brook University, Stony Brook, NY, USA; [email protected].
---
abstract: 'The Advanced Wakefield Experiment (AWAKE) develops the first plasma wakefield accelerator with a high-energy proton bunch as driver. The bunch from the CERN Super Proton Synchrotron (SPS) propagates through a long rubidium plasma, ionized by a laser pulse co-propagating with the proton bunch. The relativistic ionization front seeds a self-modulation process. The seeded self-modulation transforms the bunch into a train of micro-bunches resonantly driving wakefields. We measure the density modulation of the bunch, in time, with a streak camera with picosecond resolution. The observed effect corresponds to alternating focusing and defocusing fields. We present a procedure recovering the charge of the bunch from the experimental streak camera images containing the charge density. These studies are important to determine the charge per micro-bunch along the modulated proton bunch and to understand the wakefields driven by the modulated bunch.'
address:
- '$^1$ CERN, Geneva, Switzerland'
- '$^2$ Max-Planck Institute for Physics, Munich, Germany'
- '$^3$ Technical University Munich, Munich, Germany'
author:
- 'A.-M. Bachmann$^{1,2,3}$, P. Muggli$^{2,1}$'
title: 'Determination of the Charge per Micro-Bunch of a Self-Modulated Proton Bunch using a Streak Camera'
---
Introduction {#sec:intro}
============
AWAKE uses the CERN SPS proton bunch as a plasma wakefield driver. The bunch propagates through plasma created by laser ionization of a rubidium (Rb) vapor. The laser pulse co-propagates with the proton bunch, seeding the self-modulation with the relativistic ionization front, i.e., with the abrupt onset of the beam-plasma interaction within the bunch [@AWAKE]. Along the plasma (with a density of $n_{pe}=2\cdot 10^{14} \, \textnormal{cm}^{-3}$ for the measurements reported here) the long proton bunch ($\sigma_{\zeta} \approx 9 \, \textnormal{cm}$) divides into micro-bunches, spaced by the plasma wavelength ($\lambda_{pe} \approx 2.4 \, \textnormal{mm}$) [@Marlene; @Karl]. The micro-bunches resonantly drive wakefields in the plasma. The wakefields accelerate an injected electron witness bunch [@Nature]. The principle of the experiment is sketched in Figure \[fig:sketchofawakeprinciple\].\
In the following we present a method to determine the charge in each micro-bunch from time-resolved images of the proton bunch transverse distribution. The images are produced by a streak camera.
Method {#sec:method}
======
In this section we explain the analysis of the streak camera images [@KarlSc] applied to determine the charge per micro-bunch.
Streak Camera as Diagnostic
---------------------------
After the Rb plasma the modulated proton bunch propagates through an optical transition radiation (OTR) screen (a Silicon wafer coated with mirror-finished aluminium), placed after the plasma exit [@Karl], see Figure \[fig:transportOTR\]. We collect the backwards emitted OTR that contains the spatio-temporal information of the proton bunch charge distribution and transport it to a streak camera (Hamamatsu C10910-05 model, 16-bit, 2048 x 2048 pixel ORCA-Flash4.0 CMOS sensor, binned to 512 x 512 pixels for streak operation). The imaging system has a limited aperture ($\pm 4 \, \textnormal{mm}$ in Figures \[fig:example\_streakcameraimage\_laseroff\] and \[fig:example\_streakcameraimage\_laseron\] and in later figures). We operate the camera with a fixed slit width, an MCP gain of $40$, and a time window of $73 \, \textnormal{ps}$. The time resolution is $\approx 1 \, \textnormal{ps}$ in this time window. Light is collected by the streak camera through a slit for temporal resolution. Thus, for a cylindrically symmetric light signal larger than the slit width, such as the transverse image of the proton bunch, the larger the signal size, the smaller the fraction of light transmitted through the slit (Figure \[fig:sketchslitsc\]). The streak camera image thus contains information about the bunch charge density and not the charge.
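The slit effect can be sketched quantitatively under the simplifying assumption of a round Gaussian light spot; the numbers below are illustrative, not the actual AWAKE slit settings:

```python
import math

def slit_fraction(sigma, w):
    """Fraction of light from a round Gaussian spot of transverse rms
    size sigma passing a slit of full width w (both in mm): the
    Gaussian integrated over the opening across the slit direction."""
    return math.erf(0.5 * w / (math.sqrt(2.0) * sigma))
```

A narrow (focused) spot thus transmits a larger fraction of its light than a wide one, which is why the streak image encodes charge density rather than charge.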
Streak Camera Images
--------------------
The streak camera produces a time resolved image of the proton bunch transverse charge distribution [@KarlSc]. The temporal evolution of the streak voltage leads to a time interval per pixel that varies along the image. Therefore we interpolate the original image to linearize the time axis. We subtract from each image a background image, obtained by averaging seven images without proton bunch. Images are weighted by the measured incoming proton bunch population. We acquire two series of images (each 20 images with plasma, two images without plasma) with $\approx 50 \, \textnormal{ps}$ delay between series. Together with the proton bunch OTR, we send to the streak camera a replica of the ionizing laser pulse ($\approx 120 \, \textnormal{fs}$ long) that we also delay by $50 \, \textnormal{ps}$ for each series. This reference laser pulse is sent onto the edge of the image to minimize the overlap with the proton bunch signal (see top edge on Figure \[fig:example\_streakcameraimage\_laseroff\] and \[fig:example\_streakcameraimage\_laseron\]). With this laser pulse time reference we can sum images in a series with the same time delay, despite the $\approx 20 \, \textnormal{ps}$ trigger jitter of the streak camera. We stitch the series together to obtain long time scale images with short time scale resolution. This method is described in reference [@FBMarker] of these proceedings.\
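A minimal sketch of the jitter correction, under the assumed layout that time runs along the second image axis and the reference laser pulse is confined to a known edge row (the actual analysis additionally linearizes the time axis and stitches the delayed series):

```python
import numpy as np

def align_by_marker(images, marker_row=0):
    """Remove the ~20 ps trigger jitter: locate the reference laser
    pulse (brightest time bin in the marker row) in every image, then
    roll each image along the time axis so the pulse positions match."""
    shifts = [int(np.argmax(img[marker_row])) for img in images]
    ref = shifts[0]                     # align everything to the first image
    return [np.roll(img, ref - s, axis=1) for img, s in zip(images, shifts)]
```

After alignment, images within one series can be summed without smearing the micro-bunch structure.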
The result without plasma is shown in Figure \[fig:example\_streakcameraimage\_laseroff\] and with plasma in Figure \[fig:example\_streakcameraimage\_laseron\]. Here $t=0$ corresponds to $6 \, \textnormal{ps}$ behind the proton bunch center, with a bunch length of $\sigma_{\zeta} = 300 \, \textnormal{ps}$ and a total population of $N_{p^+} = (2.8 \pm 0.2) \cdot 10^{11}$. The transverse center of the bunch, $x=0$, was determined as the peak of a Gaussian fit of the unmodulated head of the bunch, before the ionizing laser pulse ($t < 24 \, \textnormal{ps}$). After propagation through Rb vapor, the bunch charge distribution is uniform (Figure \[fig:example\_streakcameraimage\_laseroff\]). After propagating through plasma (Figure \[fig:example\_streakcameraimage\_laseron\]), the proton bunch is self-modulated. One can see the micro-bunches along the propagation axis as well as defocused protons in between. The image shows that the charge density of the micro-bunches decreases along the bunch. In the following we present a method to determine the charge per micro-bunch that accounts for the change in width (radius) along the bunch.
Micro-Bunch Temporal Structure
------------------------------
For further analysis we first determine the longitudinal (temporal) center of the micro-bunches. The central projection ($-0.1 \, \textnormal{mm} \le x \le 0.1 \, \textnormal{mm}$) of Figure \[fig:example\_streakcameraimage\_laseron\] is shown in Figures \[fig:findMBsc\] and \[fig:findDefocusedsc\] with the blue solid line. To determine the center of the micro-bunches (Figure \[fig:findMBsc\]), we fit a second-order polynomial $f(t_i,\vec{\lambda}) = \lambda_1 + \lambda_2 \, t_i + \lambda_3 \, {t_i}^2$, with fit parameters $\vec{\lambda} = \{ \lambda_1, \lambda_2, \lambda_3 \}$, constraining $\lambda_3<0$, i.e., a downward-opening parabola, to the bunch projection. We slide the fit along the projection, centering it at time $t_i$ and fitting over the range $\{ t_i - \Delta t :t_i+ \Delta t \}$ with $\Delta t = 2.7 \, \textnormal{ps}$, to include most of the data points of a micro-bunch while avoiding covering more than one bunch for the given plasma wakefield period ($T_{pe} = 7.9 \, \textnormal{ps}$ at $n_{pe}= 2\cdot 10^{14} \, \textnormal{cm}^{-3}$). The weighted squared-distance function $\chi^2$, measuring the difference between the model expectation $f(t_i|\vec{\lambda})$ and the measured projection $y_i$, is given by $$\chi^2 = \sum_i \frac{(y_i - f(t_i|\pmb{\vec{\lambda}}))^2}{\omega_i^2}.$$ We weight the fit with the curvature of the parabola, $\omega_i = \lambda_3$, as we expect the strongest curvature at the center of the micro-bunch. The result of the $\chi^2$ fit is shown with the orange solid line in Figure \[fig:findMBsc\], restricting the plot to the low values of the function for better visualization. The temporal centers of the micro-bunches are defined as the minima of this function, indicated with the green vertical dashed lines.\
We use a similar analysis, but constraining the curvature fit parameter to $\lambda_3>0$, i.e. an upward-opening parabola, to find the minimum between two micro-bunches, corresponding to the maximally defocused regions, see Figure \[fig:findDefocusedsc\]. Figures \[fig:findMBsc\] and \[fig:findDefocusedsc\] show that this automatic procedure finds the micro-bunch centers, as well as their beginning and end.\
Micro-Bunch Size Determination
------------------------------
\[fig:exampleProfileMB\]
We use the temporal center of the micro-bunches and defocused areas to determine the transverse and longitudinal width of the micro-bunches. A transverse profile of the first micro-bunch of Figure \[fig:example\_streakcameraimage\_laseron\] (at $t= 28\, \textnormal{ps}$, obtained from Figure \[fig:findMBsc\], averaged over $\pm 0.4 \, \textnormal{ps}$) is shown in Figure \[fig:exampleProfileMB\] as an example, demonstrating the typical transverse shape of the micro-bunches. The profiles suggest that there is more charge in the micro-bunch (red line) than in the incoming bunch (blue line) over the same time range. This is not possible, since the proton bunch is strongly relativistic, i.e. protons cannot move longitudinally with respect to each other. This is a good illustration of the slit effect. The micro-bunch width is less than that of the incoming bunch, thus more light is collected through the slit, giving the impression that it contains more charge. Instead, only its charge density is larger (see below).\
In order to avoid having to assume a transverse (or longitudinal) profile for the micro-bunches, we plot the running sum of counts over each micro-bunch. For the transverse width we calculate the running sum of counts from the bunch center ($x=0$, Figure \[fig:exampleProfileMB\]) to the edge of the image over the time range of each micro-bunch, as determined in Figure \[fig:findDefocusedsc\]. We use the time ranges of the micro-bunches of the modulated bunch to calculate the corresponding sums in the unmodulated bunch. Figure \[fig:transverseWidthprofiles\]a) shows the profiles of the unmodulated bunch (Figure \[fig:example\_streakcameraimage\_laseroff\]) for comparison, and Figure \[fig:transverseWidthprofiles\]b) those of the modulated bunch (Figure \[fig:example\_streakcameraimage\_laseron\]). We linearly fit the profiles for $\Delta x>4 \, \textnormal{mm}$, corresponding to summation of background, as the light collection from the proton bunch is limited by the imaging aperture. The subtraction of the linear function (dashed lines in Figure \[fig:transverseWidthprofiles\]) leads to saturation of the profiles. We define the bunch width (radius) as the radial position at which the sum reaches $60 \%$ of the final value, indicated with the vertical dash-dotted lines. As expected, the width along the unmodulated bunch remains essentially constant, see \[fig:transverseWidthprofiles\]a). For the modulated bunch, \[fig:transverseWidthprofiles\]b) shows the running sum over micro-bunches one, four and nine, also representing the shape of the other micro-bunches. Unlike for the unmodulated bunch, the width (vertical lines) of the individual micro-bunches changes along the bunch.\
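The running-sum width determination can be condensed into a small helper. This is a sketch under simplifying assumptions (uniform sampling, background linear beyond `bg_from`; `numpy` assumed, naming ours):

```python
import numpy as np

def width_from_running_sum(x, counts, frac=0.60, bg_from=4.0):
    """Running sum of counts from the bunch center (x = 0) outwards.
    A line fitted to the tail (x > bg_from, i.e. pure background) gives
    the background slope; subtracting it makes the sum saturate.  The
    width is the radius where the corrected sum first reaches `frac`
    of its final (saturated) value."""
    s = np.cumsum(counts)
    tail = x > bg_from
    slope, _ = np.polyfit(x[tail], s[tail], 1)
    s_corr = s - slope * x                      # background-subtracted running sum
    idx = np.argmax(s_corr >= frac * s_corr[-1])
    return x[idx]
```

For a half-Gaussian profile of width $\sigma$ on a flat background, the returned radius is the point where the cumulative Gaussian reaches $60\%$ of its asymptote, independent of any assumed profile shape.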
![Transverse width of the first nine micro-bunches of the modulated bunch (red circles) and the unmodulated bunch (blue crosses), determined with the procedure shown in Figure \[fig:transverseWidthprofiles\].[]{data-label="fig:AllWidthsMicroBunches"}](TransverseWidths_0_1sigma.png){width="16pc"}
The transverse width of each micro-bunch along the bunch is plotted in Figure \[fig:AllWidthsMicroBunches\] (red circles) and compared to the width of the incoming bunch (blue crosses) with a mean of $540\, ( \pm $ ). One can see that the width of the signal is increasing along the image.\
We use a similar approach to determine the length of the micro-bunches, see Figure \[fig:longitudinalWidthprofiles\]. The micro-bunch lengths before and after the micro-bunch center differ from each other and are thus treated individually. We calculate the running sum of counts from the center of the micro-bunch (Figure \[fig:findMBsc\]) to its beginning and end (Figure \[fig:findDefocusedsc\]). For the given measurement, the bunch is not fully modulated near the ionization front, thus the counts between the micro-bunches do not reach the value 0, i.e. the sums do not saturate. Therefore we limit the range of summation to the beginning (\[fig:longitudinalWidthprofiles\]a) and end (\[fig:longitudinalWidthprofiles\]b) of the micro-bunch, taking the time of the ionizing laser pulse as the beginning of the first micro-bunch. Here we sum over the transverse range $\SI{-70}{\micro\m} <x< \SI{70}{\micro\m} $ for a less noisy profile. The vertical dash-dotted lines indicate the determined micro-bunch length, where the sum reaches $60 \%$ of the final value.\
The lengths are summarized in Figure \[fig:AllLengthsMicrobunches\] for each micro-bunch along the bunch. The length from the center to the beginning is shown in blue, from the center to the end in red, and the sum of the two as green crosses. One can see that the first two micro-bunches are longer and that after the third micro-bunch the length saturates to $3.1 \, (\pm \, 0.1) \, \textnormal{ps}$. In the following we apply the lengths of the micro-bunches also to the unmodulated bunch, to compare the charge within the same length along the bunch. Note that the temporal resolution might lower the counts per pixel for a signal with time structure, such as the modulated bunch, while it should not affect a signal without, such as the unmodulated bunch.\
Results {#sec:results}
=======
Since the proton bunch is cylindrically symmetric, its image onto the streak camera slit is also symmetric. With the minimum bunch diameter ( at the OTR screen from Figure \[fig:AllWidthsMicroBunches\], corresponding to at the streak camera slit due to the de-magnification by the OTR light transport) being larger than the slit width (), the streak camera image profile at each time (Figure \[fig:exampleProfileMB\]) can be interpreted as a measurement of the bunch charge density as a function of time $n(r,t)$ or $n(x,t)$ on the images. The charge at each time of the image, or in each micro-bunch as determined above, can be calculated by multiplying the charge density by $2 \pi \, r \, dr$.
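The cylindrical-symmetry argument amounts to a one-line weighting of the slit profile. As a sketch (`numpy` assumed; the normalization in the check is chosen for illustration only):

```python
import numpy as np

def charge_from_density(r, n_r):
    """Total charge of a cylindrically symmetric distribution from a
    radial density slice n(r), via Q = sum over r of n(r) * 2*pi*r * dr."""
    dr = r[1] - r[0]                  # assumes uniform radial sampling
    return np.sum(n_r * 2.0 * np.pi * r) * dr
```

Applied to a slit-sampled 2D Gaussian of unit total charge, the weighted sum recovers that charge, which is the operation performed on the streak camera images below.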
Charge Determination of the Proton Bunch
----------------------------------------
![Charge summed transversely over the unmodulated bunch (blue solid line) and over the modulated bunch (red solid line), theoretical Gaussian profile (blue dashed line) and timing of the ionizing laser pulse (black dashed line).[]{data-label="fig:TotalChargeNormOn"}](Image_TotalChargeConservation_0_1sigma_onlySum.png){width="16pc"}
We apply this procedure to calculate the charge of the image of the unmodulated (Figure \[fig:example\_streakcameraimage\_laseroff\]) and modulated bunch (Figure \[fig:example\_streakcameraimage\_laseron\]). To avoid the signal from the timing reference laser pulse we use only half of the image ($x > 0$). We compare the charge density summed transversely over the entire modulated bunch (red curve) with the unmodulated bunch (blue curve) in Figure \[fig:TotalChargeNormOff\]. As expected, the shape for the unmodulated bunch follows the Gaussian bunch distribution (with length $\sigma_{\zeta} = 300 \, \textnormal{ps}$, $t=0$ being $6 \, \textnormal{ps}$ behind the bunch center, and the amplitude normalized to the measured profile), indicated with the blue dashed line. The summation of the modulated bunch includes focused and defocused protons. Summing the charge density transversely over the bunch before the start of the plasma ($t<24 \, \textnormal{ps}$) leads to similar values for the modulated bunch and the incoming bunch, as expected. Summing the charge density transversely over the bunch within the plasma ($t>24 \, \textnormal{ps}$) yields significantly lower counts for the modulated bunch than for the incoming bunch. The sum of the modulated bunch also exhibits the periodic modulation from the self-modulation process.\
The decrease in signal along the image in Figure \[fig:TotalChargeNormOff\] for the modulated bunch (red curve) is caused by the increase in width (see Figure \[fig:AllWidthsMicroBunches\], red circles). We can account for the effect of the slit and determine the charge of the image by multiplying the images containing the charge density with $2\pi \, r \, dr$. Figure \[fig:TotalChargeNormOn\] shows the sum of the charge over the modulated bunch (red curve) compared to the incoming bunch (blue curve). It shows that the charge along the self-modulated bunch is very close to that of the unmodulated bunch, following the Gaussian profile. The procedure recovers the same charge for parts of the bunch before the plasma ($t<24 \, \textnormal{ps}$). However, it retains some of the modulation in the charge density and the recovered charge decreases along the bunch when compared to the incoming bunch charge. These deviations are probably due to light collection and detection limitations of the diagnostic. Protons are more and more defocused along the bunch (see [@Marlene]) and images show that they fall outside the imaged field ($-4 \, \textnormal{mm} < x <4 \, \textnormal{mm}$) later along the bunch. Also, the bunch charge density decreases further along the bunch. The streak camera has a limited signal-to-noise ratio and low-level light is not detected, falling below the detection threshold. These effects increase along the bunch.\
Figure \[fig:TotalChargeNormOn\] shows how well the charge along the bunch can be determined with this diagnostic and procedure. Now that we have developed a procedure to recover the charge in the bunch from the measurement of its charge density, and determined its limitations, we can measure the charge carried by each micro-bunch, i.e. not including the charge of defocused protons, and compare it to the incoming charge.\
Charge Determination of Individual Micro-Bunches
------------------------------------------------
We use the transverse width and length of the incoming bunch and the micro-bunches, as obtained using the procedure detailed in section \[sec:method\] (Figures \[fig:AllWidthsMicroBunches\] and \[fig:AllLengthsMicrobunches\]), to determine the relative charge per micro-bunch. For comparison of the charge in each micro-bunch with the charge in the incoming bunch, we use the same time ranges for the summation on the images of the modulated and unmodulated bunch.\
Summation of the charge density, as given by the original streak camera image, over the range of the micro-bunches, is shown in Figure \[fig:Charge\_norm\_Off\]. The image of the unmodulated bunch, where the squares indicate the transverse width of the unmodulated bunch and the length of the micro-bunches, is shown in a); the image of the modulated bunch, with the ellipses of the width and length of the micro-bunches, in b). In c) the charge density summed over the squared areas in a) (blue crosses) and over the ellipses in b) (red circles) is shown. One can see that the summed charge density for the unmodulated bunch remains essentially constant (considering the limitation of this procedure, the longitudinal bunch position close to the center, and the difference in length for the first micro-bunches and thus in summation length being small). In contrast, the summed charge density of the micro-bunches decreases rapidly along the bunch, due to the increasing radial size of the signal (see Figure \[fig:AllWidthsMicroBunches\]). This is consistent with Figure \[fig:TotalChargeNormOff\], where the charge density of the unmodulated bunch follows the Gaussian bunch distribution, while the charge density of the modulated bunch decreases along the bunch.\
In order to determine the charge per micro-bunch we multiply the streak camera images $n(r,t)$ by $2\pi \, r \, dr$, as shown in Figure \[fig:Charge\_norm\_On\]a) for the unmodulated and b) for the modulated bunch. We sum over the same squares and ellipses as described above in order to determine the charge per micro-bunch. Figure \[fig:Charge\_norm\_On\]c) compares the charge per micro-bunch with the charge of the incoming bunch. Again, we expect the charge of the unmodulated bunch to be essentially constant (central position within the long Gaussian bunch and small changes in micro-bunch length), which is confirmed by the measurement in blue. The red points demonstrate that the charge per micro-bunch is also roughly constant along the bunch. The mean charge per micro-bunch covered in the ellipse is $64 \, ( \pm \, 9 ) \%$ of the charge covered in the squares of the incoming bunch.
Summary {#sec:conclusion}
=======
We showed that because of the streak camera slit the streak camera images must be interpreted as charge density of the proton bunch after $3.5 \, \textnormal{m}$ of propagation in vacuum and not in the plasma. We presented a procedure that recovers the charge in the bunch from the streak camera images. Applying the procedure we demonstrated that the charge in each micro-bunch is constant along the bunch for the first nine micro-bunches. The charge in the micro-bunches corresponds to more than $60 \%$ of the charge of the incoming bunch, over the same time period, within limitations of the diagnostic. This procedure will be used to characterize the result of the self-modulation process on the proton bunch and potentially for studying the resulting wakefields.
Acknowledgments {#sec:Acknowledgments}
===============
This work is sponsored by the Wolfgang Gentner Program of the German Federal Ministry of Education and Research (05E15CHA).
References {#references .unnumbered}
==========
P. Muggli et al. (AWAKE Collaboration), Plasma Physics and Controlled Fusion, 60(1) 014046 (2017). M. Turner et al. (AWAKE Collaboration), Phys. Rev. Lett. 122, 054801 (2019). E. Adli et al. (AWAKE Collaboration), Phys. Rev. Lett. 122, 054802 (2019). E. Adli et al. (AWAKE Collaboration), Nature 561, 363–367 (2018).
K. Rieger et al., Review of Scientific Instruments 88, 025110 (2017). F. Batsch et al., submitted for EAAC2019 proceedings, arXiv:1911.12201.
---
abstract: 'In this paper we introduce three combinatorial models for symmetrized poly-Bernoulli numbers. Based on our models we derive generalizations of some identities for poly-Bernoulli numbers. Finally, we set open questions and directions of further studies.'
address:
- 'Faculty of Water Sciences, University of Public Service, Budapest, HUNGARY'
- 'Institute for Advanced Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, JAPAN'
author:
- Beáta Bényi
- Toshiki Matsusaka
title: 'On the combinatorics of symmetrized poly-Bernoulli numbers'
---
Introduction
============
The symmetrized poly-Bernoulli numbers were introduced by Kaneko-Sakurai-Tsumura [@KST] in order to generalize the dual formula of poly-Bernoulli numbers. The poly-Bernoulli polynomials $B_n^{(k)}(x)$ of index $k\in {{\mathbb Z}}$ are defined by the generating function $$\begin{aligned}
\sum_{n=0}^{\infty}B_n^{(k)}(x)\frac{t^n}{n!}=e^{-xt}\frac{\li_k(1-e^{-t})}{1-e^{-t}},\end{aligned}$$ where $\li_k(z)$ is the polylogarithm function, $$\begin{aligned}
\li_k(z)=\sum_{m=1}^\infty \frac{z^m}{m^k}\quad (|z|<1).\end{aligned}$$ The two types of poly-Bernoulli numbers, $B_n^{(k)}$ and $C_{n}^{(k)}$ [@AIK; @K97; @K10] are special values of the poly-Bernoulli polynomials at $x=0$ and $x=1$. $$\begin{aligned}
B_n^{(k)}(0)=B_n^{(k)}\quad\mbox{and}\quad B_n^{(k)}(1)=C_n^{(k)}.\end{aligned}$$
For negative index $k$ these number sequences are integers (A099594 and A136126 [@OEIS]) and have several interesting combinatorial interpretations [@BH15; @BH17; @B08]. Both $B_n^{(-k)}$ and $C_n^{(-k)}$ are symmetric number arrays. These properties are special cases of a more general identity on poly-Bernoulli polynomials, which holds for all non-negative integers $n$, $k$ and $m$: $$\begin{aligned}
\sum_{j=0}^{m}\st{m}{j} B_{n}^{(-k-j)}(m)=\sum_{j=0}^{m}\st{m}{j} B_{k}^{(-n-j)}(m),\end{aligned}$$ where $\st{n}{k}$ is the (unsigned) Stirling number of the first kind, which counts the number of permutations of $[n]=\{1,2,\ldots, n\}$ with $k$ disjoint cycles.
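For negative upper index the poly-Bernoulli numbers admit the well-known closed formula $B_n^{(-k)}=\sum_{j\geq 0}(j!)^2\sts{n+1}{j+1}\sts{k+1}{j+1}$, with $\sts{n}{k}$ the Stirling numbers of the second kind. A short Python sketch (naming ours) reproduces the first entries of A099594 and the symmetry $B_n^{(-k)}=B_k^{(-n)}$:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def S2(n, k):
    """Stirling numbers of the second kind."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def poly_bernoulli(n, k):
    """B_n^{(-k)} via the closed formula with Stirling numbers."""
    return sum(factorial(j)**2 * S2(n + 1, j + 1) * S2(k + 1, j + 1)
               for j in range(min(n, k) + 1))
```

The symmetry is visible term by term, since the formula is symmetric in $n$ and $k$.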
Kaneko-Sakurai-Tsumura [@KST] defined this expression as the *symmetrized poly-Bernoulli numbers*. $$\begin{aligned}
{\mathscr{B}}_n^{(-k)}(m):=\sum_{j=0}^{m}\st{m}{j} B_{n}^{(-k-j)}(m).\end{aligned}$$ Note that $$\begin{aligned}
{\mathscr{B}}_n^{(-k)}(0)=B_n^{(-k)}\quad \mbox{and}\quad {\mathscr{B}}_n^{(-k)}(1)=C_n^{(-k-1)}.\end{aligned}$$ The authors [@KST] suggested the combinatorial investigations of these number sequences. The first result in this direction is due to the second author. Matsusaka [@M20] showed that the alternating diagonal sums of symmetrized poly-Bernoulli numbers coincide with certain values of the Dumont-Foata polynomials/Gandhi polynomials. $$\begin{aligned}
\label{Gandhi}
\sum_{j=0}^{n} (-1)^j {\mathscr{B}}_{n-j}^{(-j)}(k)=k!(-1)^{n/2}G_n(k),\end{aligned}$$ where $G_n(z)$ denotes the Gandhi polynomials satisfying $$\begin{aligned}
G_{n+2}(z)=z(z+1)G_n(z+1)-z^2G_n(z)\end{aligned}$$ with $G_0(z)=1$ and $G_1(z)=0$. Special cases of the theorem [@M20] are $$\begin{aligned}
\sum_{j=0}^{n} (-1)^j B_{n-j}^{(-j)} = \begin{cases}
1, &\text{if } n = 0,\\
0, &\text{if } n > 0,
\end{cases}\end{aligned}$$ which was proven analytically in [@AK99] and combinatorially in [@BH15], and $$\begin{aligned}
\sum_{j=0}^{n} (-1)^j C_{n-j}^{(-j-1)} = -G_{n+2},\end{aligned}$$ where $G_n := (2 - 2^{n+1}) B_n^{(1)} (1)$ are the Genocchi numbers $0,1, -1,0,1,0,-3,0,17,0,-155\ldots$ A001469 [@OEIS]. This last identity was proven by using analytical methods in [@KST], but providing a combinatorial explanation is still open and seems to be a difficult problem.
The paper is organized as follows. In the first three sections after the introduction we introduce three combinatorial models for the normalized symmetrized poly-Bernoulli numbers. In Section 5 we prove some recurrence relations. In the last section we formulate a conjecture and pose some open questions.
Barred Callan sequences
=======================
In this section we present a model of the *normalized symmetrized poly-Bernoulli numbers* $\widehat{\mathscr{B}}_n^k(m)$. We are interested in the combinatorics of symmetrized poly-Bernoulli numbers with negative $k$ indices (since these numbers are positive integers). To keep the notation simpler, we define for non-negative integers $n$, $k$ and $m$, $$\widehat{{\mathscr{B}}}_n^k(m):=\frac{1}{m!}{\mathscr{B}}_n^{(-k)}(m) \in {{\mathbb Z}}.$$ In A099594 [@OEIS] Callan has given a combinatorial interpretation of the poly-Bernoulli numbers in terms of a certain type of permutations. Namely, $B_n^{(-k)}$ is the number of permutations of $[n+k]=\{1,\ldots, n+k \}$ such that all substrings of elements $\leq n$ and all substrings of elements $>n$ are in increasing order. In the literature [@BH15; @BH17] such permutations are called *Callan permutations*. Essentially the same objects are *Callan sequences*, which we define as follows. Consider the set $N=\{\piros{1},\ldots, \piros{n}\}\cup\{\piros{*}\}$ (referred to as red elements) and $K=\{\kek{1},\ldots, \kek{k}\}\cup\{\kek{*}\}$ (referred to as blue elements). Let $R_1,\ldots, R_r,R^*$ be a partition of the set $N$ into $r+1$ non-empty blocks ($0 \leq r \leq n$) and $B_1,\ldots,B_r,B^*$ a partition of the set $K$ into $r+1$ non-empty blocks. The blocks containing $\kek{*}$ and $\piros{*}$ are denoted by $B^*$ and $R^*$, respectively. We call $B^*$ and $R^*$ *extra blocks*, and the other blocks *ordinary blocks*. We call a pair $(B_i;R_i)$ of a blue and a red block a *Callan pair*. A Callan sequence is a linear arrangement of Callan pairs augmented by the extra pair $$(B_1;R_1)(B_2;R_2)\cdots(B_r;R_r)\cup(B^*;R^*).$$
It is easy to check that this definition is equivalent to the one given by Callan in [@OEIS]. Given a Callan sequence, write the elements of the blocks in increasing order, record the blocks in the given order and, if there are elements in $R^*$ besides $\piros{*}$, move these red elements to the front of the sequence and the elements in $B^*$ (besides $\kek{*}$) to the end of the sequence. Delete $\kek{*}$ and $\piros{*}$, and shift the blue elements by $n$, $\kek{i}\rightarrow i+n$.
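As a sanity check of this equivalence, Callan permutations can be counted by brute force (a Python sketch, naming ours); for $n=k=2$ this gives the $14$ sequences listed in the following example:

```python
from itertools import permutations

def is_callan(perm, n):
    """A Callan permutation of [n+k]: every maximal substring of elements
    <= n, and every maximal substring of elements > n, is increasing,
    i.e. no adjacent same-type pair may decrease."""
    return all(not ((a <= n) == (b <= n) and a > b)
               for a, b in zip(perm, perm[1:]))

def count_callan(n, k):
    return sum(1 for p in permutations(range(1, n + k + 1)) if is_callan(p, n))
```

The counts agree with $B_n^{(-k)}$, including the symmetry in $n$ and $k$.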
$$\begin{aligned}
&(\kek{1},\kek{2},\kek{*};\piros{1},\piros{2},\piros{*}), & &(\kek{1},\kek{2};\piros{1},\piros{2})(\kek{*};\piros{*}), & &(\kek{1};\piros{1},\piros{2})(\kek{2},\kek{*};\piros{*}), & &(\kek{2};\piros{1},\piros{2})(\kek{1},\kek{*};\piros{*}), & &(\kek{1},\kek{2};\piros{1})(\kek{*};\piros{2},\piros{*}),\\
&(\kek{1},\kek{2};\piros{2})(\kek{*};\piros{1},\piros{*}), & &(\kek{1};\piros{1})(\kek{2},\kek{*};\piros{2},\piros{*}), & &(\kek{2};\piros{1})(\kek{1},\kek{*};\piros{2},\piros{*}), & &(\kek{1};\piros{2})(\kek{2},\kek{*};\piros{1},\piros{*}), & &(\kek{2};\piros{2})(\kek{1},\kek{*};\piros{1},\piros{*}),\\
&(\kek{1};\piros{1})(\kek{2};\piros{2})(\kek{*};\piros{*}), & &(\kek{1};\piros{2})(\kek{2};\piros{1})(\kek{*};\piros{*}), & &(\kek{2};\piros{1})(\kek{1};\piros{2})(\kek{*};\piros{*}), & &(\kek{2};\piros{2})(\kek{1};\piros{1})(\kek{*};\piros{*}).
\end{aligned}$$
We list the corresponding Callan permutations in the same order as above $$\begin{aligned}
&\piros{12} \kek{34}, & &\kek{34}\piros{12}, & &\kek{3}\piros{12}\kek{4}, & &\kek{4}\piros{12}\kek{3}, & &\piros{2}\kek{34}\piros{1},\\
&\piros{1}\kek{34}\piros{2}, & &\piros{2}\kek{3}\piros{1}\kek{4}, & &\piros{2}\kek{4}\piros{1}\kek{3}, & &\piros{1}\kek{3}\piros{2}\kek{4}, & &\piros{1}\kek{4}\piros{2}\kek{3},\\
&\kek{3}\piros{1}\kek{4}\piros{2}, & &\kek{3}\piros{2}\kek{4}\piros{1}, & &\kek{4}\piros{1}\kek{3}\piros{2}, & &\kek{4}\piros{2}\kek{3}\piros{1}.
\end{aligned}$$
For integers $n,k > 0$ and $m \geq 0$, an $m$-barred Callan sequence of size $n \times k$ is a Callan sequence with $m$ bars inserted into the gaps between, before, and after the ordinary pairs. We let $\mathcal{C}_n^k(m)$ denote the number of all $m$-barred Callan sequences of size $n \times k$.
\[All $2$-barred Callan sequences with $n = 3$ and $k = 1$\] $$\begin{aligned}
&||(\kek{1}, \kek{*}; \piros{1}, \piros{2}, \piros{3}, \piros{*}), & &||(\kek{1}; \piros{1}, \piros{2}, \piros{3}) (\kek{*}; \piros{*}), & &||(\kek{1}; \piros{1}, \piros{2})(\kek{*}; \piros{3}, \piros{*}), & &||(\kek{1}; \piros{1},\piros{3}) (\kek{*}; \piros{2}, \piros{*}),\\
&||(\kek{1}; \piros{2}, \piros{3}) (\kek{*}; \piros{1}, \piros{*}), & &||(\kek{1}; \piros{1}) (\kek{*}; \piros{2}, \piros{3}, \piros{*}), & &||(\kek{1}; \piros{2}) (\kek{*}; \piros{1}, \piros{3}, \piros{*}), & &||(\kek{1}; \piros{3}) (\kek{*}; \piros{1}, \piros{2}, \piros{*}),\\
&|(\kek{1}; \piros{1}, \piros{2}, \piros{3}) | (\kek{*}; \piros{*}), & &|(\kek{1}; \piros{1}, \piros{2}) | (\kek{*}; \piros{3}, \piros{*}), & &|(\kek{1}; \piros{1},\piros{3}) | (\kek{*}; \piros{2}, \piros{*}), & &|(\kek{1}; \piros{2}, \piros{3}) | (\kek{*}; \piros{1}, \piros{*}),\\
&|(\kek{1}; \piros{1}) | (\kek{*}; \piros{2}, \piros{3}, \piros{*}), & &|(\kek{1}; \piros{2}) | (\kek{*}; \piros{1}, \piros{3}, \piros{*}), & &|(\kek{1}; \piros{3}) | (\kek{*}; \piros{1}, \piros{2}, \piros{*}),\\
&(\kek{1}; \piros{1}, \piros{2}, \piros{3}) || (\kek{*}; \piros{*}), & &(\kek{1}; \piros{1}, \piros{2}) || (\kek{*}; \piros{3}, \piros{*}), & &(\kek{1}; \piros{1},\piros{3}) || (\kek{*}; \piros{2}, \piros{*}), & &(\kek{1}; \piros{2}, \piros{3}) || (\kek{*}; \piros{1}, \piros{*}),\\
&(\kek{1}; \piros{1}) || (\kek{*}; \piros{2}, \piros{3}, \piros{*}), & &(\kek{1}; \piros{2}) || (\kek{*}; \piros{1}, \piros{3}, \piros{*}), & &(\kek{1}; \piros{3}) || (\kek{*}; \piros{1}, \piros{2}, \piros{*}).
\end{aligned}$$
$m$-barred Callan sequences can in fact be viewed as a pair $(P, BP)$, where $P$ is a preferential arrangement of a subset of $\{1,2,\ldots,n\}$ and $BP$ is a barred preferential arrangement of a subset of $\{1,2,\ldots, k\}$. Barred preferential arrangements were introduced in [@AUP] and were used, for instance, in [@NBCC] for the combinatorial analysis of generalizations of geometric polynomials.
The number $\mathcal{C}_n^k(m)$ of $m$-barred Callan sequences of size $n \times k$ is given by the normalized symmetrized poly-Bernoulli number $\widehat{{\mathscr{B}}}_n^k(m)$.
Let $r$ be the number of ordinary pairs. Partition the elements of $N$ into $r+1$ blocks in $\sts{n+1}{r+1}$ ways, and similarly $K$ into $r+1$ blocks in $\sts{k+1}{r+1}$ ways. ($\sts{n}{k}$ denotes the Stirling number of the second kind, counting the number of partitions of an $n$-element set into $k$ non-empty blocks.) Order the ordinary blocks of each color in $r!$ ways, and choose the positions of the $m$ bars from the $r+1$ places between, before, and after the ordinary blocks (repetition allowed) in $\binom{r+1+m-1}{m}$ ways. Summing over $r$, we have $$\begin{aligned}
\label{Cal-exp}
\mathcal{C}_n^k(m)=\sum_{r=0}^{\min(n,k)}\binom{r+m}{m}(r!)^2\sts{n+1}{r+1}\sts{k+1}{r+1}.\end{aligned}$$ By comparing this expression with the closed formula derived in [@KST (2.9)] for the symmetrized poly-Bernoulli numbers, the theorem follows.
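Formula (\[Cal-exp\]) is easy to evaluate. The sketch below (Python, naming ours) reproduces the $14$ Callan sequences of size $2\times 2$ and the $22$ two-barred sequences of size $3\times 1$ listed in the examples above, as well as the symmetry:

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def S2(n, k):
    """Stirling numbers of the second kind."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def barred_callan(n, k, m):
    """Number of m-barred Callan sequences of size n x k (closed formula)."""
    return sum(comb(r + m, m) * factorial(r)**2 * S2(n + 1, r + 1) * S2(k + 1, r + 1)
               for r in range(min(n, k) + 1))
```

For $m=0$ the binomial factor is $1$ and the formula reduces to the closed formula for $B_n^{(-k)}$.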
It obviously follows from the definition that $$\mathcal{C}_n^k(m) =\mathcal{C}_k^n(m).$$
A *labeled* $m$-barred Callan sequence is an $m$-barred Callan sequence such that the bars are labeled. The number of labeled $m$-barred Callan sequences of size $n \times k$ is given by ${\mathscr{B}}_n^{(-k)}(m)$. Clearly, ${\mathscr{B}}_n^{(-k)}(m)={\mathscr{B}}_k^{(-n)}(m)$.
By the right-hand side of (\[Cal-exp\]), we define $\mathcal{C}_n^k(m)$ for $n= 0$ or $k=0$. Namely, $\mathcal{C}_n^0 (m) = \mathcal{C}_0^k (m) := 1$.
For integers $n \geq 0$ and $k > 0$, the number $\mathcal{C}_n^k(m)$ obeys the recurrence relation $$\mathcal{C}_n^k(m) = \mathcal{C}_n^{k-1} (m) + \sum_{j=1}^n {n \choose j} \mathcal{C}_{n-j+1}^{k-1} (m) + m \sum_{j=1}^n {n \choose j} \mathcal{C}_{n-j}^{k-1} (m).$$
We count $m$-barred Callan sequences of size $n \times k$ according to the following cases. We let $|_\ell$ denote $\ell$ consecutive bars.
- $(0)$: $|_m (\kek{1}, \kek{2}, \dots, \kek{k}, \kek{*}; \piros{1}, \piros{2}, \dots, \piros{n}, \piros{*})$.
- $(1)$: $(\kek{1}, \kek{B}; \piros{R})$ is the first ordinary Callan pair with $\kek{B} \neq \emptyset$.
- $(2)_\ell$: $|_\ell (\kek{1}; \piros{R})$ is the first ordinary Callan pair.
- $(3)_\ell$: $(\kek{B'}; \piros{R})|_\ell (\kek{1}, \kek{B}; \piros{R'})$ for some $(\kek{B'}; \piros{R})$ and $\kek{B} \neq \emptyset$.
- $(4)_0$: $(\kek{B'}; \piros{R}) |_0 (\kek{1}; \piros{R'})$ for some $(\kek{B'}; \piros{R})$.
- $(4)_\ell$: $(\kek{B'}; \piros{R'}) |_\ell (\kek{1}; \piros{R})$ for some $\ell > 0$ and $(\kek{B'}; \piros{R'})$.
The cases (0) and $(1)$ are in bijection with $m$-barred Callan sequences of size $n \times (k-1)$ by deleting $\kek{1}$. So the number of such cases is $\mathcal{C}_n^{k-1} (m)$.
Next, we consider the cases $(2)_0$, $(3)_\ell$, and $(4)_0$. In these cases, we delete $\kek{1}$ and $\piros{R}$, and insert the additional number $\piros{0}$ as follows. We assume that $\piros{R}$ contains $j$ elements ($1 \leq j \leq n$).
- Insert $\piros{0}$ into the extra red block. $$|_0 (\kek{1}; \piros{R}) |_{\ell'} (\kek{B'}; \piros{R'}) \cdots (\kek{B''}, \kek{*}; \piros{R''}, \piros{*}) \leftrightarrow |_{\ell'} (\kek{B'}; \piros{R'}) \cdots (\kek{B''}, \kek{*}; \piros{0}, \piros{R''}, \piros{*}).$$ This gives $m$-barred Callan sequences of size $(n-j+1) \times (k-1)$ such that $\piros{0}$ is in the extra pair.
- Replace $\piros{R}$ with $\piros{0}$. $$(\kek{B'}; \piros{R}) |_\ell (\kek{1}, \kek{B}; \piros{R'}) \leftrightarrow (\kek{B'}; \piros{0}) |_\ell (\kek{B}; \piros{R'}).$$ This gives $m$-barred Callan sequences of size $(n-j+1) \times (k-1)$ such that $\piros{0}$ is alone in an ordinary pair.
- Replace $\piros{R}$ with $\piros{0}$, and merge with $\piros{R'}$. $$(\kek{B'}; \piros{R}) |_0 (\kek{1}; \piros{R'}) \leftrightarrow (\kek{B'}; \piros{0}, \piros{R'}).$$ This gives $m$-barred Callan sequences of size $(n-j+1) \times (k-1)$ such that the block that contains $\piros{0}$ includes also other red elements.
Clearly, the number of ways to create the $\piros{R}$ with $j$ elements is ${n \choose j}$. Thus, the number of patterns in the cases $(2)_0$, $(3)_\ell$ ($0 \leq \ell \leq m$), and $(4)_0$ is $$\sum_{j=1}^n {n \choose j} \mathcal{C}_{n-j+1}^{k-1}(m).$$
Finally, consider the remaining cases $(2)_\ell$ and $(4)_\ell$ with $1 \leq \ell \leq m$. If we delete the pair $(\kek{1}; \piros{R})$, we obtain $m$-barred Callan sequences of size $(n-j) \times (k-1)$. However, we obtain the same sequence $m$-times since $(\kek{1}; \piros{R})$ could have been after any bar. Indeed, conversely, take an $m$-barred Callan sequence of size $(n-j) \times (k-1)$ and insert the pair $(\kek{1}; \piros{R})$ after any bar. Thus, now we have $$m \sum_{j=1}^n {n \choose j} \mathcal{C}_{n-j}^{k-1} (m).$$ This concludes the proof.
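The recurrence can be cross-checked against the closed formula (\[Cal-exp\]) for small parameters. A self-contained Python sketch (naming ours; the convention $\mathcal{C}_n^0(m)=\mathcal{C}_0^k(m)=1$ is built into the formula):

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def S2(n, k):
    """Stirling numbers of the second kind."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def barred_callan(n, k, m):
    """Number of m-barred Callan sequences of size n x k (closed formula)."""
    return sum(comb(r + m, m) * factorial(r)**2 * S2(n + 1, r + 1) * S2(k + 1, r + 1)
               for r in range(min(n, k) + 1))
```

Checking the recurrence over a grid of small $n$, $k$, $m$ is then a three-line loop, as in the verification below.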
We give another type of recursion. Let $\widehat{{\mathscr{B}}}_n^k(m;r)$ denote the number of $m$-barred Callan sequences with $r$ ordinary blocks. Then we have the following recursion.
For integers $n,k >0$ and $m \geq 0$, it holds that $$\begin{aligned}
\widehat{{\mathscr{B}}}_n^{k}(m)=\sum_{j=1}^{n}\binom{n}{j}\sum_{r=0}^{\min{(n-j,k-1)}}(m+r+1)\widehat{{\mathscr{B}}}_{n-j}^{k-1}(m;r)+
\sum_{r=0}^{\min{(n,k-1)}}(r+1)\widehat{{\mathscr{B}}}_n^{k-1}(m; r).\end{aligned}$$
Consider an $m$-barred Callan sequence. There are two cases: either $\kek{k}$ is in an ordinary pair as a singleton, or not, i.e., it is in an ordinary pair with other blue elements or in the extra pair. If it is in an ordinary pair as a singleton, let $j$ be the number of red elements in this pair. Such a Callan pair can be chosen in $\binom{n}{j}$ ways. Since it is an ordinary pair, $j$ is at least $1$. This new pair can be inserted into the arrangement of the ordinary blocks and bars formed by an $m$-barred Callan sequence of size $(n-j) \times (k-1)$ with $r$ ordinary blocks, i.e., in $m+r+1$ ways. This gives the first part of our sum.
On the other hand, if we insert $\kek{k}$ into any block that contains a blue element already, or into the extra block, that can be done in $r+1$ ways, which gives the second part of the sum.
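The refined counts $\widehat{\mathscr{B}}_n^k(m;r)$ are just the $r$-th summands of the closed formula (\[Cal-exp\]), so this recursion can also be verified directly (a Python sketch, naming ours):

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def S2(n, k):
    """Stirling numbers of the second kind."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def bhat_r(n, k, m, r):
    """m-barred Callan sequences of size n x k with exactly r ordinary pairs."""
    return comb(r + m, m) * factorial(r)**2 * S2(n + 1, r + 1) * S2(k + 1, r + 1)

def bhat(n, k, m):
    return sum(bhat_r(n, k, m, r) for r in range(min(n, k) + 1))
```

The two summands of the recursion correspond exactly to the two cases of the proof: a new singleton pair for $\kek{k}$ versus inserting $\kek{k}$ into an existing blue block.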
Weighted barred Callan sequences
================================
In this section we present a combinatorial interpretation which allows us to extend $m$, the number of bars inserted between the Callan pairs in our previous model, to an arbitrary number.
To this end, we first introduce a weight on permutations. Let $\pi=\pi_1\pi_2\ldots\pi_n \in \mathfrak{S}_n$ be a permutation. Consider the maximal sequence $\pi_{i_0}>\pi_{i_1}>\pi_{i_2}>\cdots>\pi_{i_r}$, where $\pi_{i_0}=\pi_1$ and $\pi_{i_{j+1}}$ is the first element to the right of $\pi_{i_j}$ that is smaller than it, for all $j$. Let $w(\pi)=r$, i.e., the length of this maximal sequence reduced by $1$. In other words, considering the elements of the permutation from left to right, mark an element if it is smaller than the previously marked element. Then $w(\pi)$ is the number of marked elements reduced by one. For instance, for $\pi=\piros{8}\piros{6}9\piros{5}7\piros{2}34\piros{1}$ we have $w(\pi)=4$. Let $x^{\overline{n}}=x(x+1)(x+2)\cdots(x+n-1)$ denote the rising factorial. We have the following lemma.
$$\sum_{\pi \in \mathfrak{S}_n} x^{w(\pi)} = (x+1)^{\overline{n-1}}.$$
The left-hand side obeys the same recurrence as the right-hand side. The initial value is $\sum_{\pi \in \mathfrak{S}_{1}} x^{w(\pi)} = 1$. By inserting the element $n$ into a permutation $\pi\in \mathfrak{S}_{n-1}$ at one of the $n$ possible positions, the weight increases by one if we add it at the front as starting element, and is preserved in each of the remaining $n-1$ positions. Hence, $$\sum_{\pi\in \mathfrak{S}_{n}} x^{w(\pi)} = (x+n-1) \sum_{\pi \in \mathfrak{S}_{n-1}} x^{w(\pi)}.$$
$$\begin{aligned}
&\textcolor{red}{1}234, \textcolor{red}{1}243, \textcolor{red}{1}324, \textcolor{red}{1}342, \textcolor{red}{1}423, \textcolor{red}{1}432, \textcolor{red}{21}34, \textcolor{red}{21}43, \textcolor{red}{2}3\textcolor{red}{1}4, \textcolor{red}{2}34\textcolor{red}{1}, \textcolor{red}{2}4\textcolor{red}{1}3, \textcolor{red}{2}43\textcolor{red}{1},\\
&\textcolor{red}{31}24, \textcolor{red}{31}42, \textcolor{red}{321}4, \textcolor{red}{32}4\textcolor{red}{1}, \textcolor{red}{3}4\textcolor{red}{1}2, \textcolor{red}{3}4\textcolor{red}{21}, \textcolor{red}{41}23, \textcolor{red}{41}32, \textcolor{red}{421}3, \textcolor{red}{42}3\textcolor{red}{1}, \textcolor{red}{431}2, \textcolor{red}{4321}
\end{aligned}$$
We have $$\sum_{\pi \in \mathfrak{S}_4} x^{w(\pi)} = x^3 + 6x^2 + 11x + 6 = (x+1)(x+2)(x+3).$$
We now define a weight on a $1$-barred Callan sequence (from now on, barred Callan sequence) using the weight on permutations above. Let $\mathcal{B}_n^k$ denote the set of barred Callan sequences of size $n \times k$ and $\alpha\in \mathcal{B}_n^k$. The natural order of the blocks in a partition $\sigma=B_1/B_2/\ldots/B_n$ is given by the least elements. For instance, the blocks of the partition $\{1,3,9\}/\{2,4,7\}/\{5,6\}/\{8\}$ are listed in the natural order. We now consider the set of blue blocks of the Callan sequence with this natural order and add $|$ as the smallest element to the set. The weight $w(\alpha)$ is the weight of the permutation of the blue blocks (and the bar) in the barred Callan sequence $\alpha$.
\[All $1$-barred Callan sequences with $n = 2$ and $k = 2$ with indication of their weight\] $$\begin{aligned}
&\underline{|}(\kek{1}, \kek{2}, \kek{*}; \piros{1}, \piros{2}, \piros{*}), & &\underline{|}(\kek{1}, \kek{2}; \piros{1}, \piros{2})(\kek{*};\piros{*}), & &\underline{|}(\kek{1}; \piros{1}, \piros{2})(\kek{2}, \kek{*}; \piros{*}), & &\underline{|}(\kek{2}; \piros{1}, \piros{2})(\kek{1}, \kek{*}; \piros{*}), & &\underline{|}(\kek{1}, \kek{2}; \piros{1}) (\kek{*}; \piros{2}, \piros{*}),\\
&\underline{|}(\kek{1}; \piros{1})(\kek{2},\kek{*}; \piros{2}, \piros{*}), & &\underline{|}(\kek{2}; \piros{1})(\kek{1}, \kek{*}; \piros{2}, \piros{*}), & &\underline{|}(\kek{1}, \kek{2}; \piros{2}) (\kek{*}; \piros{1}, \piros{*}), & &\underline{|}(\kek{1}; \piros{2}) (\kek{2}, \kek{*}; \piros{1}, \piros{*}), & &\underline{|}(\kek{2}; \piros{2})(\kek{1}, \kek{*}; \piros{1}, \piros{*}),\\
&\underline{|}(\kek{1}; \piros{1})(\kek{2}; \piros{2})(\kek{*}; \piros{*}), & &\underline{|}(\kek{2}; \piros{1})(\kek{1}; \piros{2})(\kek{*}; \piros{*}), & &\underline{|}(\kek{1};\piros{2})(\kek{2}; \piros{1})(\kek{*}; \piros{*}), & &\underline{|}(\kek{2};\piros{2})(\kek{1}; \piros{1}) (\kek{*}; \piros{*}), & &(\underline{\kek{1}}, \kek{2}; \piros{1}, \piros{2})\underline{|} (\kek{*};\piros{*}),\\
&(\underline{\kek{1}}; \piros{1}, \piros{2}) \underline{|} (\kek{2}, \kek{*}; \piros{*}), & &(\underline{\kek{2}}; \piros{1}, \piros{2}) \underline{|} (\kek{1}, \kek{*}; \piros{*}), & &(\underline{\kek{1}}, \kek{2}; \piros{1}) \underline{|} (\kek{*}; \piros{2}, \piros{*}), & &(\underline{\kek{1}}; \piros{1}) \underline{|} (\kek{2},\kek{*}; \piros{2}, \piros{*}), & &(\underline{\kek{2}}; \piros{1}) \underline{|} (\kek{1}, \kek{*}; \piros{2}, \piros{*}),\\
&(\underline{\kek{1}}, \kek{2}; \piros{2}) \underline{|} (\kek{*}; \piros{1}, \piros{*}), & &(\underline{\kek{1}}; \piros{2}) \underline{|} (\kek{2}, \kek{*}; \piros{1}, \piros{*}), & &(\underline{\kek{2}}; \piros{2}) \underline{|} (\kek{1}, \kek{*}; \piros{1}, \piros{*}), & &(\underline{\kek{1}}; \piros{1}) \underline{|} (\kek{2}; \piros{2})(\kek{*}; \piros{*}), & &(\underline{\kek{2}}; \piros{1}) \underline{|} (\kek{1}; \piros{2})(\kek{*}; \piros{*}),\\
&(\underline{\kek{1}};\piros{2}) \underline{|} (\kek{2}; \piros{1})(\kek{*}; \piros{*}), & &(\underline{\kek{2}};\piros{2}) \underline{|} (\kek{1}; \piros{1}) (\kek{*}; \piros{*}), & &(\underline{\kek{1}}; \piros{1}) (\kek{2}; \piros{2}) \underline{|} (\kek{*}; \piros{*}), & &(\underline{\kek{2}}; \piros{1}) (\underline{\kek{1}}; \piros{2}) \underline{|} (\kek{*}; \piros{*}), & &(\underline{\kek{1}};\piros{2}) (\kek{2}; \piros{1}) \underline{|} (\kek{*}; \piros{*}),\\
&(\underline{\kek{2}};\piros{2})(\underline{\kek{1}}; \piros{1}) \underline{|} (\kek{*}; \piros{*}).
\end{aligned}$$
We define the *Callan polynomial* for any positive integers $n$ and $k$ as $$C_n^k(x)=\sum_{\alpha\in \mathcal{B}_n^k}x^{w(\alpha)}.$$
By the above example, we see that $C_2^2(x) = 2x^2 + 15x + 14$.
\[C-exp\] The polynomials $C_n^k(x)$ are given by $$C_n^k(x)=\sum_{j=0}^{\min(n,k)}j!(x+1)^{\overline{j}}\sts{n+1}{j+1}\sts{k+1}{j+1}.$$
This follows directly from the definition of barred Callan sequences and the definition of the weight.
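As a quick numerical check of the proposition, one can evaluate the right-hand side for $n=k=2$ and compare with $C_2^2(x) = 2x^2+15x+14$ obtained from the example above. The sketch below (helper names are ours) does so at integer points.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S2(n, k):
    # Stirling numbers of the second kind {n, k}
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def rising(x, n):
    # rising factorial x(x+1)...(x+n-1)
    out = 1
    for i in range(n):
        out *= x + i
    return out

def fact(n):
    out = 1
    for i in range(2, n + 1):
        out *= i
    return out

def C(n, k, x):
    # Proposition: C_n^k(x) = sum_j j! (x+1)^{rising j} {n+1, j+1} {k+1, j+1}
    return sum(fact(j) * rising(x + 1, j) * S2(n + 1, j + 1) * S2(k + 1, j + 1)
               for j in range(min(n, k) + 1))

# C_2^2(x) = 2x^2 + 15x + 14, checked at several integer points
for x in range(5):
    assert C(2, 2, x) == 2 * x * x + 15 * x + 14

# the formula is visibly symmetric in n and k
assert C(3, 5, 2) == C(5, 3, 2)
```

The symmetry $C_n^k(x)=C_k^n(x)$, used later in the paper, is immediate from the formula.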
Next, we prove a recursion by appropriately modifying the proof of the previous section. We define $C_n^0(x)=C_0^k(x)=1$.
\[Callan-poly\] For any integers $n \geq 0$ and $k > 0$, we have $$\begin{aligned}
\label{recperm}
C_n^{k}(x)=C_{n}^{k-1}(x)+\sum_{j=1}^n\binom{n}{j}C_{n-j+1}^{k-1} (x) +x\sum_{j=1}^{n}\binom{n}{j}C_{n-j}^{k-1} (x).\end{aligned}$$
We split the set $\mathcal{B}_n^k$ into disjoint subsets as follows: Let $A$ denote the set of $\alpha\in \mathcal{B}_n^k$ such that $\kek{k}$ is in the extra pair with $\kek{*}$. Let $B$ denote the set of $\beta\in \mathcal{B}_{n}^k$ such that $\kek{k}$ is alone in the first Callan pair and there is no bar before it. Let $C$ denote the set of $\gamma\in \mathcal{B}_n^k$ such that $\kek{k}$ is in an ordinary block; further, if it is alone in the first Callan pair, then the bar is before it.
If $\kek{k}$ is in the extra blue block $B^*$, we simply take a barred Callan sequence with $k-1$ blue elements and $n$ red elements and insert $\kek{k}$ into the extra block. The extra block does not affect the weight. Thus, we have $$\sum_{\alpha\in A}x^{w(\alpha)}=C_{n}^{k-1}(x).$$
We obtain a Callan sequence $\beta\in B$ by choosing in $\binom{n}{j}$ ways $j$ red elements for the first Callan pair $(\kek{k}; \piros{R_1})$, and constructing from the remaining $n-j$ red elements and $k-1$ blue elements a barred Callan sequence. The pair $(\kek{k}; \piros{R_1})$ is simply glued before the sequence. The weight increases by one, since the block $(\kek{k}; \piros{R_1})$ is the greatest among the blocks. Hence, we have $$\sum_{\beta\in B}x^{w(\beta)}=x\sum_{j=1}^{n}\binom{n}{j}C_{n-j}^{k-1}(x).$$
We split the set $C$ into further disjoint subsets as follows. $C_1$ are the Callan sequences, where $\kek{k}$ is alone in its ordinary block and the bar is directly before it. $C_2$ consists of the Callan sequences, where $\kek{k}$ is alone, a bar is not before it and it is not in the first Callan pair. Finally, $C_3$ are the Callan sequences, where $\kek{k}$ is not alone in its blue block. Clearly, $C=C_1\dot{\cup}C_2\dot{\cup}C_3$.
Choose again $j$ red elements in $\binom{n}{j}$ ways for $\kek{k}$ to create a block $(\kek{k};\widehat{\piros{R}})$. Construct a barred Callan sequence with red elements $([n] \backslash \widehat{\piros{R}} )\cup \{\piros{0}\}$ and blue elements $[k-1]$. We have three cases:
If $\piros{0}$ is in the extra block, delete $\piros{0}$ and insert $(\kek{k};\widehat{\piros{R}})$ directly after the bar. In this case we obtain the set $C_1$. The weight does not change, since $\kek{k}$ is “greater” than $|$.
If $\piros{0}$ is in an ordinary block and there is no other red element in its block, merge $(\kek{k};\widehat{\piros{R}})$ into this Callan pair by $(\kek{B}; \piros{0}) \to (\kek{B}, \kek{k}; \widehat{\piros{R}})$. Do not change the position of the bar. This case gives the set $C_3$. The weight does not change, since the resulting blue block contains smaller elements than $\kek{k}$, and the order of the blocks is determined by their least elements.
If $\piros{0}$ is in an ordinary pair, say $(\kek{B}; \piros{0}, \piros{R})$, and this block also contains other red elements, then delete $\piros{0}$ and insert $(\kek{k};\widehat{\piros{R}})$ after this Callan pair, that is, $(\kek{B}; \piros{0}, \piros{R}) \to (\kek{B}; \piros{R}) (\kek{k}; \widehat{\piros{R}})$. If the bar was directly after the pair $(\kek{B}; \piros{0}, \piros{R})$, then delete it from there and place it after $(\kek{k};\widehat{\piros{R}})$. This case gives the set $C_2$. The weight does not change, since there is a block with smaller value (with respect to the order of blocks) to the left of the block $(\kek{k};\widehat{\piros{R}})$; hence, $(\kek{k};\widehat{\piros{R}})$ no longer affects the weight.
We have $$\sum_{\gamma\in C}x^{w(\gamma)}=\sum_{j=1}^n\binom{n}{j}C_{n-j+1}^{k-1}(x),$$ which concludes the proof.
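The recursion can be cross-checked against the closed form of Proposition \[C-exp\] for small parameters; the sketch below (helper names are ours) evaluates both at integer points.

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def S2(n, k):
    # Stirling numbers of the second kind {n, k}
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def rising(x, n):
    out = 1
    for i in range(n):
        out *= x + i
    return out

def C_closed(n, k, x):
    # Proposition C-exp
    return sum(factorial(j) * rising(x + 1, j) * S2(n + 1, j + 1) * S2(k + 1, j + 1)
               for j in range(min(n, k) + 1))

def C_rec(n, k, x):
    # Theorem: C_n^k = C_n^{k-1} + sum_j C(n,j) C_{n-j+1}^{k-1}
    #                             + x sum_j C(n,j) C_{n-j}^{k-1}
    if n == 0 or k == 0:
        return 1
    return (C_rec(n, k - 1, x)
            + sum(comb(n, j) * C_rec(n - j + 1, k - 1, x) for j in range(1, n + 1))
            + x * sum(comb(n, j) * C_rec(n - j, k - 1, x) for j in range(1, n + 1)))

for n in range(5):
    for k in range(5):
        for x in range(4):
            assert C_rec(n, k, x) == C_closed(n, k, x)
```

Agreement at these integer points, for polynomials of the degrees involved, is strong evidence that the two definitions match on the tested range.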
For any integers $n, k \geq 0$ and $m \geq 0$, we have $$C_n^k(m) = \mathcal{C}_n^k(m) = \widehat{{\mathscr{B}}}_n^k (m).$$
Weighted alternative tableaux of rectangular shape {#s5}
==================================================
In this section we introduce a weight on alternative tableaux of rectangular shapes and show that the polynomials so obtained are identical to the Callan polynomials; hence, the numbers of such tableaux are the normalized symmetrized poly-Bernoulli numbers. Alternative tableaux were introduced by Viennot [@Viennot]. The literature on alternative tableaux and related topics is extremely rich. For instance, a combinatorial interpretation of the generalized Dumont-Foata polynomial in terms of alternative tableaux was given in [@Josuat].
[@N11 Definition 1.2] An *alternative tableau* of rectangular shape of size $n \times k$ is a rectangle with a partial filling of the cells with left arrows $\leftarrow$ and down arrows $\downarrow$, such that all cells pointed by an arrow are empty. We let ${\mathcal{T}}_n^k$ denote the set of all alternative tableaux of rectangular shape of size $n \times k$.
\[Ex-T\] In Figure \[alttabl\] we give an example of an alternative tableau of size $5 \times 6$, with its weight defined later.
![An alternative tableau of size $5 \times 6$[]{data-label="alttabl"}](image1){width="40mm"}
We introduce a weight on alternative tableaux as follows. For each $\lambda \in {\mathcal{T}}_n^k$,
- Consider the maximal sequence of consecutive rows, starting from the top row, each of which contains a left arrow $\leftarrow$.
- Count the number of left arrows $\leftarrow$ in these rows such that all $\leftarrow$ in the rows above are located strictly further to the right.
We let $w(\lambda)$ denote the number of such left arrows. For instance, the alternative tableau in Figure \[alttabl\] has the weight $w(\lambda) = 3$. In Figure \[altweight\] we list all $31$ elements in ${\mathcal{T}}_2^2$ with their weights.
![Alternative tableaux of size $2\times 2$ with their weights[]{data-label="altweight"}](image3){width="160mm"}
We define the polynomial $T_n^k(x)$ by $$T_n^k(x) := \sum_{\lambda \in {\mathcal{T}}_n^k} x^{w(\lambda)}.$$ From the above example, $T_2^2(x) = 2x^2 + 15x + 14$, which coincides with the Callan polynomial $C_2^2(x)$. In general, the following holds.
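Since the tableaux are finite objects, the coincidence $T_2^2(x) = C_2^2(x)$ can be verified by exhaustive enumeration. The sketch below encodes cells as 'E' (empty), 'L' ($\leftarrow$), 'D' ($\downarrow$); the weight routine implements our reading of the definition, in which the rows considered form the maximal run of consecutive rows, starting from the top row, each containing a $\leftarrow$ (and at most one $\leftarrow$ per row is possible, since the cells to its left must be empty).

```python
from itertools import product

def tableaux(n, k):
    """All n x k alternative tableaux. Every cell pointed by an arrow must be
    empty: 'L' points at the cells to its left in its row, 'D' at the cells
    below it in its column."""
    for cells in product('ELD', repeat=n * k):
        tab = [cells[r * k:(r + 1) * k] for r in range(n)]
        if all((tab[r][c] != 'L' or all(tab[r][j] == 'E' for j in range(c)))
               and (tab[r][c] != 'D' or all(tab[i][c] == 'E' for i in range(r + 1, n)))
               for r in range(n) for c in range(k)):
            yield tab

def w_left(tab):
    # walk rows from the top while each row has an 'L'; count the arrows that
    # lie strictly to the left of every arrow above them
    w, best = 0, None
    for row in tab:
        if 'L' not in row:
            break
        c = row.index('L')
        if best is None or c < best:
            w, best = w + 1, c
    return w

tabs = list(tableaux(2, 2))
print(len(tabs))                  # 31 tableaux, as listed in the figure
poly = {}
for t in tabs:
    poly[w_left(t)] = poly.get(w_left(t), 0) + 1
print(poly)                       # {0: 14, 1: 15, 2: 2} -> 2x^2 + 15x + 14
```

The resulting weight distribution reproduces $T_2^2(x) = 2x^2 + 15x + 14 = C_2^2(x)$.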
We define $T_n^0(x) = T_0^k (x) = 1$. For any integers $n, k \geq 0$, the polynomial $T_n^k(x)$ coincides with the Callan polynomial $C_n^k(x)$.
For each $\lambda \in {\mathcal{T}}_n^k$, we let $R = R(\lambda)$ denote the rightmost column of $\lambda$. We split the set ${\mathcal{T}}_n^k$ into disjoint subsets as follows: Let $A$ denote the set of $\lambda \in {\mathcal{T}}_n^k$ such that $R$ contains no $\leftarrow$. Let $B$ denote the set of $\lambda \in {\mathcal{T}}_n^k$ such that the top-right box is empty and $R$ contains at least one $\leftarrow$. Let $C$ denote the set of $\lambda \in {\mathcal{T}}_n^k$ such that the top-right box contains $\leftarrow$.
If $\lambda \in A$, then $R$ is empty or contains a unique $\downarrow$. The remaining rectangle $\lambda^- := \lambda \backslash R$ defines a sub-rectangle in ${\mathcal{T}}_n^{k-1}$, and we see that $w(\lambda) = w(\lambda^-)$. The number of patterns of $R$ is $n+1$ (empty or one $\downarrow$). Thus, we get $$\sum_{\lambda \in A} x^{w(\lambda)} = (n+1) T_n^{k-1}(x).$$
If $\lambda \in B$, then $R$ contains $j \leftarrow$’s ($1 \leq j \leq n-1$). For each $j$, the number of patterns of $R$ is ${n \choose j+1}$ ($j$ $\leftarrow$’s and zero or one $\downarrow$). In the rectangle $\lambda \backslash R$, $j$ rows are killed, and the remaining rows define a sub-rectangle $\lambda^- \in {\mathcal{T}}_{n-j}^{k-1}$. In this case the weight satisfies $w(\lambda^-) = w(\lambda)$, and hence $$\sum_{\lambda \in B} x^{w(\lambda)} = \sum_{j=1}^{n-1} {n \choose j+1} T_{n-j}^{k-1} (x) = \sum_{j=1}^{n-1} {n \choose j-1} T_j^{k-1} (x).$$
Finally, if $\lambda \in C$, then $R$ contains $(j+1) \leftarrow$’s ($0 \leq j \leq n-1$). For each $j$, the number of patterns of $R$ is ${n \choose j+1}$. In the rectangle $\lambda \backslash R$, $(j+1)$ rows are killed, and the remaining rows define a sub-rectangle $\lambda^- \in {\mathcal{T}}_{n-j-1}^{k-1}$. In this case, the $\leftarrow$ in the corner affects the weight of $\lambda$, thus $w(\lambda^-) = w(\lambda) - 1$. Hence, $$\sum_{\lambda \in C} x^{w(\lambda)} = x \sum_{j=0}^{n-1} {n \choose j+1} T_{n-j-1}^{k-1} (x) = x \sum_{j=0}^{n-1} {n \choose j} T_j^{k-1} (x).$$
Therefore, we have $$\begin{aligned}
\label{T-rec}
T_n^k(x) = (n+1) T_n^{k-1}(x) + \sum_{j=1}^{n-1} {n \choose j-1} T_j^{k-1} (x) + x \sum_{j=0}^{n-1} {n \choose j} T_j^{k-1} (x),
\end{aligned}$$ which is equivalent to the recursion formula for the Callan polynomial in Theorem \[Callan-poly\].
For any integers $n, k \geq 0$ and $m \geq 0$, we have $$T_n^k(m) = \widehat{{\mathscr{B}}}_n^k(m).$$
Applications
============
First, we present a generalization of Ohno-Sasaki’s result on poly-Bernoulli numbers [@OS20 Theorem 1] (see also [@OS20+]): $$\begin{aligned}
\label{OS-eq}
\sum_{0 \leq i \leq \ell \leq m} (-1)^{i} \st{m+2}{i+1} B_{n+\ell}^{(-k)} = 0 \qquad (n \geq 0, m \geq k > 0).\end{aligned}$$ The following theorem gives a new type of recurrence relation for the (normalized) symmetrized poly-Bernoulli numbers $\widehat{{\mathscr{B}}}_n^k(m)$ with the single index $k$ (see also a related question in [@AIK Remark 14.5]).
\[OS\] For any $n \geq 0, m > k \geq 0$, we have $$\sum_{\ell = 0}^m (-1)^\ell \st{m+1}{\ell+1} C_{n+\ell}^k (x) = 0.$$
By Proposition \[C-exp\], the left-hand side equals $$\begin{aligned}
\sum_{j=0}^k j! (x+1)^{\overline{j}} \sts{k+1}{j+1} \sum_{\ell = 0}^\infty (-1)^\ell \st{m+1}{\ell+1} \sts{n+\ell+1}{j+1}.
\end{aligned}$$ By showing the identity $$\begin{aligned}
\label{iden}
\sum_{\ell = 0}^\infty (-1)^\ell \st{m+1}{\ell+1} \sts{n+\ell+1}{j+1} = 0 \qquad \text{for } j < m,
\end{aligned}$$ the theorem holds by the assumption $k < m$. We prove Equation (\[iden\]) by induction on $n$. Let $\delta_{i,j}$ denote the Kronecker delta defined by $\delta_{i,j} = 1$ if $i = j$ and $\delta_{i,j} = 0$ otherwise. For $n = 0$, by [@AIK Proposition 2.6 (5.2)], we have $$\sum_{\ell=0}^\infty (-1)^\ell \st{m+1}{\ell+1} \sts{\ell+1}{j+1} = (-1)^j \delta_{j, m},$$ which equals $0$ if $j <m$. For any positive $n$, by the recurrence relation of the Stirling numbers of the second kind, $$\sum_{\ell=0}^\infty (-1)^\ell \st{m+1}{\ell+1} \sts{n+\ell+1}{j+1} = \sum_{\ell=0}^\infty (-1)^\ell \st{m+1}{\ell+1} \left(\sts{n+\ell}{j} + (j+1) \sts{n+\ell}{j+1} \right),$$ which also equals $0$ by the induction hypothesis.
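Identity (\[iden\]) is a finite sum for each $m$, since the Stirling factor $\st{m+1}{\ell+1}$ vanishes for $\ell > m$, so it can also be checked numerically; the sketch below (our own helper names) does so for small parameters.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s1(n, k):
    # unsigned Stirling numbers of the first kind [n, k]
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return (n - 1) * s1(n - 1, k) + s1(n - 1, k - 1)

@lru_cache(maxsize=None)
def S2(n, k):
    # Stirling numbers of the second kind {n, k}
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def iden(n, m, j):
    # the (finite) alternating sum in the identity
    return sum((-1) ** l * s1(m + 1, l + 1) * S2(n + l + 1, j + 1)
               for l in range(m + 1))

for n in range(5):
    for m in range(1, 6):
        for j in range(m):      # the identity asserts 0 exactly for j < m
            assert iden(n, m, j) == 0
```

For $j = m$ and $n = 0$ the sum is $(-1)^m$ rather than $0$, in agreement with the Kronecker-delta formula used in the base case.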
For example, since $C_n^k(0) = \widehat{{\mathscr{B}}}_n^k(0) = B_n^{(-k)}$, we get $$\begin{aligned}
\label{Ber-rec}
\sum_{\ell = 0}^m (-1)^\ell \st{m+1}{\ell+1} B_{n+\ell}^{(-k)} = 0 \qquad (n \geq 0, m > k \geq 0).\end{aligned}$$ Our formula looks simpler than Ohno-Sasaki’s formula (\[OS-eq\]). Here we show the relation between these two results. Let $\mathrm{OS}(n)$ be the left-hand side of (\[OS-eq\]). By a direct calculation, $$\begin{aligned}
\mathrm{OS}(n) - \mathrm{OS}(n+1) &= \sum_{0 \leq i \leq \ell \leq m} (-1)^{i} \st{m+2}{i+1} B_{n+\ell}^{(-k)} + \sum_{1 \leq i \leq \ell \leq m+1} (-1)^i \st{m+2}{i} B_{n+\ell}^{(-k)}\\
&= \sum_{\ell = 0}^m (-1)^\ell \st{m+2}{\ell+1} B_{n+\ell}^{(-k)} + \sum_{i = 1}^{m+1} (-1)^i \st{m+2}{i} B_{n +m+1}^{(-k)}.\end{aligned}$$ Since $$\begin{aligned}
\sum_{j=0}^n \st{n+1}{j+1} x^j = (x+1)^{\overline{n}}\end{aligned}$$ and $\st{n}{n} = 1$ hold, the last sum becomes $\sum_{i=1}^{m+1} (-1)^i \st{m+2}{i} = (-1)^{m+1}$. Hence, $$\mathrm{OS}(n) - \mathrm{OS}(n+1) = \sum_{\ell=0}^{m+1} (-1)^\ell \st{m+2}{\ell+1} B_{n+\ell}^{(-k)},$$ which coincides with the left-hand side of (\[Ber-rec\]) with $m$ shifted by one. This shows that equation (\[OS-eq\]) implies (\[Ber-rec\]).
Next, we give another recurrence formula.
\[diag-sum\] For any integers $n, k \geq 0$, we have $$\sum_{\ell=0}^n \st{n+1}{\ell+1} C_\ell^k (x) = n! \sum_{j=0}^{\min(n,k)} (x+1)^{\overline{j}} \sts{k+1}{j+1} {n+1 \choose j+1}.$$
By using Proposition \[C-exp\] again, the left-hand side becomes $$\sum_{j=0}^\infty j! (x+1)^{\overline{j}} \sts{k+1}{j+1} \sum_{\ell=0}^n \st{n+1}{\ell+1} \sts{\ell+1}{j+1}.$$ The inner sum is an expression for the Lah numbers, which satisfies $$\sum_{\ell=0}^n \st{n+1}{\ell+1} \sts{\ell+1}{j+1} = {n \choose j} \frac{(n+1)!}{(j+1)!}.$$ Both sides of the identity count the ways of partitioning $\{1,2,\ldots,n+1\}$ into $j+1$ linear arrangements, i.e., lists. In order to obtain a set of lists, first split the $n+1$ elements into $\ell+1$ cycles, then partition the $\ell+1$ cycles into $j+1$ blocks. The product of the cycles in a block determines its list. On the other hand, take a permutation of $[n+1]$ and place bars to split it into $j+1$ pieces (from the $n$ places between the elements we choose $j$ for the bars, in $\binom{n}{j}$ ways). Since the order of the lists is irrelevant, we divide by the number of permutations of the lists, $(j+1)!$.
The theorem follows.
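Both the Lah-number identity and the theorem itself can be verified numerically for small parameters; the sketch below (our own helper names) evaluates both sides at integer points.

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def s1(n, k):
    # unsigned Stirling numbers of the first kind [n, k]
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return (n - 1) * s1(n - 1, k) + s1(n - 1, k - 1)

@lru_cache(maxsize=None)
def S2(n, k):
    # Stirling numbers of the second kind {n, k}
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def rising(x, n):
    out = 1
    for i in range(n):
        out *= x + i
    return out

def C_closed(n, k, x):
    # Proposition C-exp
    return sum(factorial(j) * rising(x + 1, j) * S2(n + 1, j + 1) * S2(k + 1, j + 1)
               for j in range(min(n, k) + 1))

# the inner (Lah-number) identity
for n in range(7):
    for j in range(n + 1):
        assert (sum(s1(n + 1, l + 1) * S2(l + 1, j + 1) for l in range(n + 1))
                == comb(n, j) * factorial(n + 1) // factorial(j + 1))

# the theorem itself, at integer points x
for n in range(5):
    for k in range(5):
        for x in range(4):
            lhs = sum(s1(n + 1, l + 1) * C_closed(l, k, x) for l in range(n + 1))
            rhs = factorial(n) * sum(rising(x + 1, j) * S2(k + 1, j + 1)
                                     * comb(n + 1, j + 1)
                                     for j in range(min(n, k) + 1))
            assert lhs == rhs
```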
To apply the theorem in the special cases $x = 0$ and $x=1$, we recall the following identity.
\[Faul\] For any integers $n > 0, k \geq 0$, we have $$\begin{aligned}
\label{Seki}
\sum_{j=0}^\infty j! \sts{k+1}{j+1} {n \choose j+1} = \sum_{i=1}^{n} i^k =:S_k(n).
\end{aligned}$$
Let $s_k(n)$ denote the left-hand side of (\[Seki\]), and consider the generating function $$\sum_{k=0}^\infty s_k(n) \frac{t^k}{k!} = \sum_{j=0}^\infty j! {n \choose j+1} \sum_{k=0}^\infty \sts{k+1}{j+1} \frac{t^k}{k!} = e^t \sum_{j=0}^\infty {n \choose j+1} (e^t-1)^j.$$ The last equality follows from the fact [@AIK Proposition 2.6, (7)] $$\sum_{k=0}^\infty \sts{k+1}{j+1} \frac{t^k}{k!} = \frac{e^t (e^t-1)^j}{j!}.$$ This implies that $$\sum_{k=0}^\infty (s_k(n+1) - s_k(n)) \frac{t^k}{k!} = e^t \sum_{j=0}^n {n \choose j} (e^t-1)^j = e^{(n+1)t} = \sum_{k=0}^\infty (n+1)^k \frac{t^k}{k!},$$ that is, $s_k(1) = 1$ and $s_k(n+1) = s_k(n) +(n+1)^k$. Hence $s_k(n) = S_k(n)$.
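Lemma \[Faul\] can likewise be verified numerically; the sketch below (helper names are ours) compares the left-hand side with the power sum $S_k(n)$ directly.

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def S2(n, k):
    # Stirling numbers of the second kind {n, k}
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def s_lhs(k, n):
    # left-hand side of the lemma: sum_j j! {k+1, j+1} C(n, j+1)
    return sum(factorial(j) * S2(k + 1, j + 1) * comb(n, j + 1) for j in range(n))

def power_sum(k, n):
    # S_k(n) = 1^k + 2^k + ... + n^k
    return sum(i ** k for i in range(1, n + 1))

for k in range(6):
    for n in range(1, 8):
        assert s_lhs(k, n) == power_sum(k, n)
```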
We can also prove the equation $$(s_k(n+1) - s_k(n) = ) \sum_{j=0}^\infty j! \sts{k+1}{j+1} {n \choose j} = (n+1)^k$$ combinatorially. The term $(n+1)^k$ counts the number of words $w_1 w_2 \cdots w_k$ of length $k$ over an alphabet of $n+1$ distinct letters $\{0, 1, \dots, n\}$. Such a word can also be obtained as follows: add the special position $w_0 := 0$ and partition the $k+1$ positions of the word into $j+1$ subsets; within a subset, all entries are the same. We choose the remaining entries in $j! {n \choose j}$ ways.
At $x = 0$, $$\begin{aligned}
\label{B-Seki}
\sum_{\ell=0}^n \st{n+1}{\ell+1} B_\ell^{(-k)} = n! S_k(n+1).
\end{aligned}$$ At $x = 1$, we also get $$\begin{aligned}
\label{diag-sum-C}
\sum_{\ell=0}^n \st{n+1}{\ell+1} \widehat{{\mathscr{B}}}_\ell^k (1) = n! (n+1)^{k+1}.
\end{aligned}$$
The first equation (\[B-Seki\]) immediately follows from Theorem \[diag-sum\] and Lemma \[Faul\].
For the second equation (\[diag-sum-C\]), by Theorem \[diag-sum\] again, we have $$\sum_{\ell=0}^n \st{n+1}{\ell+1} \widehat{{\mathscr{B}}}_\ell^k (1) = n! \sum_{j=0}^{\min(n,k)} (j+1)! \sts{k+1}{j+1} {n+1 \choose j+1} = n! (n+1)^{k+1}.$$ The last equation is given combinatorially by counting the term $(n+1)^{k+1}$ in a similar way as in the proof of Lemma \[Faul\]. In this argument, we do not need the special position $w_0$.
We also give a direct combinatorial proof for the identity (\[diag-sum-C\]). Both sides of the equation count the number of permutations of $[n+(k+1)]$ such that all substrings of consecutive elements greater than $n$ are in increasing order. Such a permutation can be encoded by a pair $(\pi, w)$, where $\pi \in \mathfrak{S}_n$ is a permutation of $[n]$ and $w=w_1\ldots w_k w_{k+1}$ is a word of length $k+1$ over the alphabet $\{0,1,\ldots, n\}$. Let $\sigma$ be a permutation with the above property. Then the subsequence of the elements $\{1,2,\ldots, n\}$ is $\pi$, while $w_i$ is the number of elements to the left of $i+n$ that are smaller than or equal to $n$. Clearly, the number of such pairs is given by $n!(n+1)^{k+1}$. For instance, for $n=7$ and $k=6$ the permutation $\sigma={\bf 11}-6-{\bf 8-10}-3-1-{\bf 13-14}-7-5-4-2-{\bf 9-12}$ is encoded by the pair $(\pi;w)=(6-3-1-7-5-4-2;1710733)$.
On the other hand, we obtain such a permutation $\sigma$ using Callan sequences as follows. A *$C$-Callan permutation* is a Callan permutation starting with an element greater than $n$. Equivalently, a *$C$-Callan sequence* is a ($0$-barred) Callan sequence of size $n \times k$ with an extra red block $R^* = \{\piros{*}\}$. It can be shown that $C$-Callan permutations are in bijection with $1$-barred Callan sequences, and hence, their number is $B_n^{(-k)} (1) = \widehat{{\mathscr{B}}}_n^{k-1}(1)$. Take a $C$-Callan sequence with red elements $\{\piros{1}, \dots, \piros{\ell}, \piros{*}\}$ and blue elements $\{\kek{1}, \dots, \kek{k}, \kek{k+1}, \kek{*}\}$. By the definition of the $C$-Callan sequence, it ends with $(\kek{B}\cup\{ \kek{*}\}; \piros{*})$. Construct a permutation of $\{0, 1, \dots, n\}$ with $\ell+1$ cycles $c_0, c_1, \dots, c_\ell$ in $\st{n+1}{\ell+1}$ ways. Let $c_i$ denote the $i$th cycle in the natural order of the cycles determined by their smallest elements; for instance, $c_0$ denotes the cycle that contains $0$. Replace $\kek{*} \piros{*}$ in the $C$-Callan sequence by $c_0$, replace each red element $\piros{i}$ with $i > 0$ by the cycle $c_i$, and take the product of the cycles in each red block. Finally, delete $0$ and shift the blue elements by $n$, $\kek{i} \to i+n$. The permutation so obtained is $\sigma$. For instance, the $C$-Callan sequence $(\kek{4}; \piros{4}) (\kek{1}, \kek{3}; \piros{1}) (\kek{6}, \kek{7}; \piros{2}, \piros{3}) (\kek{2}, \kek{5}, \kek{*}; \piros{*})$ and the cycles $c_0 = (0), c_1 = (1, 3), c_2 = (2, 7), c_3 = (4,5), c_4 = (6)$ with $n = 7, k = 6, \ell = 4$ correspond to the above $\sigma$ by $$\begin{aligned}
(\kek{4}; \piros{4}) (\kek{1}, \kek{3}; \piros{1}) (\kek{6}, \kek{7}; \piros{2}, \piros{3}) &(\kek{2}, \kek{5}, \kek{*}; \piros{*}) \to \kek{4} (6) \kek{13} (1,3) \kek{67} (2,7)(4,5) \kek{25} (0)\\
&\to \kek{4}-6-\kek{1}-\kek{3}-3-1-\kek{6}-\kek{7}-7-5-4-2-\kek{2}-\kek{5}-0\\
&\to {\bf 11}-6-{\bf 8-10}-3-1-{\bf 13-14}-7-5-4-2-{\bf 9-12}.\end{aligned}$$
Further problems
================
In Section \[s5\], we defined the weight $w_{\leftarrow}(\lambda) := w(\lambda)$ on alternative tableaux by using left arrows $\leftarrow$. We let $w_\downarrow (\lambda)$ denote another weight on alternative tableaux, defined analogously via down arrows. More precisely, for each $\lambda \in {\mathcal{T}}_n^k$, the weight $w_\downarrow(\lambda)$ is defined as follows.
- Consider the maximal sequence of consecutive columns, starting from the rightmost column, each of which contains a down arrow $\downarrow$.
- Count the number of down arrows $\downarrow$ in these columns such that all $\downarrow$ in the columns to the right are located in strictly upper rows.
Figure \[altweight2\] shows the list of all elements in ${\mathcal{T}}_2^2$ with the weight $w_\downarrow(\lambda)$.
![Alternative tableaux of size $2 \times 2$ with their weights $w_\downarrow(\lambda)$[]{data-label="altweight2"}](image4){width="160mm"}
We define the two-variable polynomial $$T_n^k(x,y) := \sum_{\lambda \in {\mathcal{T}}_n^k} x^{w_\leftarrow(\lambda)} y^{w_\downarrow(\lambda)}.$$ From the above example, $T_2^2(x,y) = x^2 y + xy^2 + x^2 + 7xy + y^2 + 7x + 7y + 6$. By simple observations, we also see that $$T_n^1(x,y) = T_1^n (x,y) = (2^{n-1} -1)xy + 2^{n-1} x + 2^{n-1} y + 2^{n-1}.$$
We put $$\begin{aligned}
t_n^0(x,y) &= t_0^k(x,y) = 1,\\
t_n^1(x,y) &= t_1^n(x,y) = (2^{n-1}-1)xy + 2^{n-1}x + 2^{n-1}y + 2^{n-1}
\end{aligned}$$ as initial values. The polynomials defined by $$\begin{aligned}
t_n^k(x,y) &:= \sum_{j=0}^n {n+1 \choose j} t_j^{k-1} (x,y) + (x-1) \sum_{j=0}^{n-1} {n \choose j} t_j^{k-1} (x,y)\\
& \qquad + (y-1) \sum_{j=0}^{n-1} {n \choose j} t_j^{k-1} (x,y) + (x-1)(y-1) \sum_{j=0}^{n-1} {n-1 \choose j} t_j^{k-1} (x,y)
\end{aligned}$$ coincide with $T_n^k(x,y)$.
We checked the coincidence for $(n,k) = (2,2), (3,2)$, and $(2,3)$ by hand. By comparing the recurrence formula at $x =1$ or $y=1$ with that in (\[T-rec\]), we easily see that $t_n^k(x,1) = t_n^k(1,x) = T_n^k(x)$.
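The hand checks can be extended by machine. The sketch below enumerates the tableaux, implements our reading of the two weights, and compares the enumeration with the conjectured recursion at sample integer points (all helper names are ours, and the weight routines follow our interpretation of the definitions).

```python
from itertools import product
from math import comb

def tableaux(n, k):
    # all n x k alternative tableaux; 'E' empty, 'L' left arrow, 'D' down arrow
    for cells in product('ELD', repeat=n * k):
        tab = [cells[r * k:(r + 1) * k] for r in range(n)]
        if all((tab[r][c] != 'L' or all(tab[r][j] == 'E' for j in range(c)))
               and (tab[r][c] != 'D' or all(tab[i][c] == 'E' for i in range(r + 1, n)))
               for r in range(n) for c in range(k)):
            yield tab

def w_left(tab):
    # rows from the top while each contains an 'L'; count strict left-records
    w, best = 0, None
    for row in tab:
        if 'L' not in row:
            break
        c = row.index('L')
        if best is None or c < best:
            w, best = w + 1, c
    return w

def w_down(tab):
    # columns from the right while each contains a 'D'; count strict down-records
    if not tab:
        return 0
    w, best = 0, None
    for c in range(len(tab[0]) - 1, -1, -1):
        col = [row[c] for row in tab]
        if 'D' not in col:
            break
        r = col.index('D')
        if best is None or r > best:
            w, best = w + 1, r
    return w

def T(n, k, x, y):
    return sum(x ** w_left(t) * y ** w_down(t) for t in tableaux(n, k))

def t_conj(n, k, x, y):
    # conjectured recursion with the stated initial values
    if n == 0 or k == 0:
        return 1
    if k == 1:
        return (2 ** (n - 1) - 1) * x * y + 2 ** (n - 1) * (x + y + 1)
    prev = [t_conj(j, k - 1, x, y) for j in range(n + 1)]
    return (sum(comb(n + 1, j) * prev[j] for j in range(n + 1))
            + (x - 1) * sum(comb(n, j) * prev[j] for j in range(n))
            + (y - 1) * sum(comb(n, j) * prev[j] for j in range(n))
            + (x - 1) * (y - 1) * sum(comb(n - 1, j) * prev[j] for j in range(n)))

# T_2^2(x, y) at (2, 3): x^2 y + x y^2 + x^2 + 7xy + y^2 + 7x + 7y + 6 -> 126
assert T(2, 2, 2, 3) == 126
# enumeration vs. conjectured recursion at sample integer points
for n in range(4):
    for k in range(4):
        for (x, y) in [(1, 1), (2, 3), (3, 2)]:
            assert T(n, k, x, y) == t_conj(n, k, x, y)
```

Agreement at sample points for $n, k \le 3$ supports the conjecture beyond the cases checked by hand, though it is of course not a proof.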
Another direction is to consider the polynomial at other values, for instance at negative integers.
By applying Theorem \[OS\] for $n=-1$ formally, we get $$``C_k^{-1} (x) = -\frac{1}{m!} \sum_{\ell=1}^m (-1)^\ell \st{m+1}{\ell+1} C_k^{\ell-1} (x)".$$ Here we used the symmetric property $C_n^k(x) = C_k^n(x)$. Recalling the condition $m > k \geq 0$ on $m$, and specializing to $m = k+1$, we tentatively define $C_k^{-1}(x)$ by $$C_k^{-1}(x) := \frac{1}{(k+1)!} \sum_{\ell=0}^k (-1)^\ell \st{k+2}{\ell+2} C_k^\ell (x).$$
For any integer $k \geq 0$, we have $$C_k^{-1} (x) = -\frac{S_k(-x)}{x},$$ where $S_k(x)$ is the Seki-Bernoulli polynomial [@AIK Section 1.2] defined by $$S_k(x) := \frac{1}{k+1} \sum_{j=0}^k {k+1 \choose j} B_j x^{k+1-j}$$ with the classical Bernoulli number $B_k = B_k^{(1)} (0)$.
Let $s_k(x) := x C_k^{-1}(-x)$. By Proposition \[C-exp\], $$s_k(x) = x C_k^{-1} (-x) = -\frac{1}{(k+1)!} \sum_{j=0}^k j! (-x)^{\overline{j+1}} \sts{k+1}{j+1} \sum_{\ell=0}^\infty (-1)^\ell \st{k+2}{\ell+2} \sts{\ell+1}{j+1}.$$ Since the inner sum over $\ell$ equals $(-1)^j (k+1)!/(j+1)!$ for $0 \leq j \leq k$ (we prove it in Lemma \[last-lem\]), we have $$\begin{aligned}
s_k(x) = \sum_{j=0}^k (-1)^{j+1} (-x)^{\overline{j+1}} \sts{k+1}{j+1} \frac{1}{j+1}.
\end{aligned}$$ For any positive integer $n > 0$, $$s_k(n) = \sum_{j=0}^k j! {n \choose j+1} \sts{k+1}{j+1}.$$ By Lemma \[Faul\], this equals $S_k(n)$. Since $s_k(x)$ and $S_k(x)$ are polynomials agreeing at all positive integers, they coincide, $s_k(x) = S_k(x)$, which concludes the proof.
\[last-lem\] For integers $k \geq j \geq 0$, we have $$\sum_{\ell=j}^k (-1)^{\ell+j} \st{k+2}{\ell+2} \sts{\ell+1}{j+1} = \frac{(k+1)!}{(j+1)!}.$$
Consider the generating function of the left-hand side with respect to $k$. By [@AIK Proposition 2.6 (7) and (9)], $$\begin{aligned}
\sum_{k=j}^\infty \sum_{\ell=j}^k (-1)^{\ell+j} \st{k+2}{\ell+2} \sts{\ell+1}{j+1} \frac{t^{k+1}}{(k+1)!} &= \sum_{\ell=j}^\infty (-1)^{\ell+j}\sts{\ell+1}{j+1} \sum_{k=\ell}^\infty \st{k+2}{\ell+2} \frac{t^{k+1}}{(k+1)!}\\
&= \frac{(-1)^{j+1}}{1-t} \sum_{\ell=j}^\infty \sts{\ell+1}{j+1} \frac{(\log(1-t))^{\ell+1}}{(\ell+1)!}\\
&= \frac{1}{(j+1)!} \frac{t^{j+1}}{1-t} = \sum_{k=j}^\infty \frac{(k+1)!}{(j+1)!} \frac{t^{k+1}}{(k+1)!}.
\end{aligned}$$ This concludes the proof.
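Lemma \[last-lem\] also admits a direct numerical check; the sketch below (helper names are ours) sums the finite left-hand side for small $k$ and $j$.

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def s1(n, k):
    # unsigned Stirling numbers of the first kind [n, k]
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return (n - 1) * s1(n - 1, k) + s1(n - 1, k - 1)

@lru_cache(maxsize=None)
def S2(n, k):
    # Stirling numbers of the second kind {n, k}
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def lemma_lhs(k, j):
    return sum((-1) ** (l + j) * s1(k + 2, l + 2) * S2(l + 1, j + 1)
               for l in range(j, k + 1))

for k in range(7):
    for j in range(k + 1):
        assert lemma_lhs(k, j) == factorial(k + 1) // factorial(j + 1)
```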
One natural question is whether there exists a suitable generalization of the Callan polynomial $C_n^k(x)$ or the symmetrized poly-Bernoulli numbers $\widehat{{\mathscr{B}}}_n^k(m)$ for negative integers $k$ and $m$.
It would be interesting to investigate the polynomials that arise by the weight function on alternative tableaux of other special shapes or on arbitrary shapes.
In this paper we did not provide bijections between our models. It would be interesting to find simple bijections, especially between alternative tableaux and the Callan sequences. Also there should exist combinatorial proofs of Theorem \[OS\] and so on.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank Yasuo Ohno and Yoshitaka Sasaki for sending us their preprint and for some helpful comments. Further, we thank Sithembele Nkonkobe for helpful conversations. The second author was supported by JSPS KAKENHI Grant Number 20K14292.
[99]{}

C. Ahlbach, J. Usatine, and N. Pippenger, Barred preferential arrangements, *Electron. J. Combin.* **20(2)** (2013), P55.

T. Arakawa, T. Ibukiyama, and M. Kaneko, *Bernoulli Numbers and Zeta Functions*, with an appendix by Don Zagier, Springer, 2014.

T. Arakawa and M. Kaneko, On poly-Bernoulli numbers, *Comment. Math. Univ. St. Paul.* **48** (1999), 159–167.

B. Bényi and P. Hajnal, Combinatorics of poly-Bernoulli numbers, *Studia Sci. Math. Hungarica* **52** (2015), 537–558.

B. Bényi and P. Hajnal, Combinatorial properties of poly-Bernoulli relatives, *Integers* **17** (2017), A31.

C. R. Brewbaker, A combinatorial interpretation of the poly-Bernoulli numbers and two Fermat analogues, *Integers* **8** (2008), A02.

M. Josuat-Verges, Generalized Dumont-Foata polynomials and alternative tableaux, *Sém. Lothar. Combin.* **64** (2010/11), Art. B64b, 17 pp.

M. Kaneko, Poly-Bernoulli numbers, *J. Théor. Nombres Bordeaux* **9** (1997), 221–228.

M. Kaneko, Poly-Bernoulli numbers and related zeta functions, *Algebraic and Analytic Aspects of Zeta Functions and L-functions* (Ed. by G. Bhowmik, K. Matsumoto and H. Tsumura), MSJ Memoir **21** (2010), 73–85.

M. Kaneko, F. Sakurai, and H. Tsumura, On a duality formula for certain sums of values of poly-Bernoulli polynomials and its application, *J. Théor. Nombres Bordeaux* **30**-1 (2018), 203–218.

T. Matsusaka, Symmetrized poly-Bernoulli numbers and combinatorics, *Journal of Integer Sequences*, to appear.

P. Nadeau, The structure of alternative tableaux, *J. Combin. Theory Ser. A* **118** (2011), no. 5, 1638–1660.

S. Nkonkobe, B. Bényi, R. Corcino, and C. Corcino, A combinatorial analysis of higher order generalised geometric polynomials: a generalisation of barred preferential arrangements, *Discrete Mathematics* **343(3)** (2020), 111729.

Y. Ohno and Y. Sasaki, Recurrence formulas for poly-Bernoulli polynomials, *Adv. Stud. Pure Math.*, Various Aspects of Multiple Zeta Functions – in honor of Professor Kohji Matsumoto’s 60th birthday, H. Mishou, T. Nakamura, M. Suzuki and Y. Umegaki, eds. (Tokyo: Mathematical Society of Japan, 2020), 353–360.

Y. Ohno and Y. Sasaki, Recursion formulas for poly-Bernoulli numbers and their applications, *Int. J. Number Theory*, to appear.

N. J. A. Sloane, The On-Line Encyclopedia of Integer Sequences, <http://oeis.org>.

X. Viennot, Alternative tableaux, permutations and partially asymmetric exclusion process, Isaac Newton Institute, 2008, http://wwwold.newton.ac.uk/webseminars/pg+ws/2008/csm/csmw04/0423/viennot/.
---
abstract: 'We present the largest near-infrared (NIR) data sets, $JHK_{\rm S}$, ever collected for classical Cepheids in the Magellanic Clouds (MCs). We selected fundamental (FU) and first overtone (FO) pulsators, and found 4150 (2571 FU, 1579 FO) Cepheids for the Small Magellanic Cloud (SMC) and 3042 (1840 FU, 1202 FO) for the Large Magellanic Cloud (LMC). The current sample is 2–3 times larger than any sample used in previous investigations with NIR photometry. We also discuss optical $VI$ photometry from OGLE-III. NIR and optical–NIR Period-Wesenheit (PW) relations are linear over the entire period range ($0.0<\log P_{\rm FU} \le1.65 $) and their slopes are, within the intrinsic dispersions, common between the MCs. These findings are consistent with recent results from pulsation models and observations suggesting that the PW relations are minimally affected by the metal content. The new FU and FO PW relations were calibrated using a sample of Galactic Cepheids with distances based on trigonometric parallaxes and Cepheid pulsation models. By using FU Cepheids we found true distance moduli of $18.45\pm0.02{\rm(random)}\pm0.10{\rm(systematic)}$ mag (LMC) and $18.93\pm0.02{\rm(random)}\pm0.10{\rm(systematic)}$ mag (SMC). These estimates are the weighted mean over 10 PW relations, and the systematic errors account for uncertainties in the zero–point and in the reddening law. We found similar distances using FO Cepheids ($18.60\pm0.03{\rm(random)}\pm0.10{\rm(systematic)}$ mag \[LMC\] and $19.12\pm0.03{\rm(random)}\pm0.10{\rm(systematic)}$ mag \[SMC\]). These new MC distances lead to a relative distance of $\Delta\mu=0.48\pm0.03$ mag (FU, $\log P=1$) and $\Delta\mu=0.52\pm0.03$ mag (FO, $\log P=0.5$), which agrees quite well with previous estimates based on robust distance indicators.'
author:
- 'L. Inno, N. Matsunaga, G. Bono, F. Caputo, R. Buonanno, K. Genovali, C.D. Laney, M. Marconi, A.M. Piersimoni, F. Primas, M. Romaniello'
date: 'drafted / Received / Accepted '
title: 'On the distance of the Magellanic Clouds using Cepheid NIR and optical–NIR Period–Wesenheit Relations'
---
Introduction
============
Recent detailed investigations indicate that 2%–3% of the systematic error affecting the Hubble constant estimate is due to the Cepheid distance to the Large Magellanic Cloud [LMC @madore10; @riess11; @freedman12]. Moreover, the Magellanic Clouds (MCs) are fundamental benchmarks to constrain the accuracy and the precision of the most popular primary distance indicators [@pietrzynski10; @matsunaga09 2011]. The decrease by a factor of two in metallicity between the LMC and the Small Magellanic Cloud (SMC) also makes these galaxies excellent laboratories to constrain the possible dependence of different standard candles on the metal content. Although MC Cepheids play a crucial role in many astrophysical problems, the number of homogeneous optical (*B,V,R,I*) and near-infrared (NIR; *J,H,K$_{\rm S}$*) data sets is quite limited.
The most extensive surveys in the optical bands (*V,I*) were performed by micro–lensing experiments (MACHO, EROS, OGLE). The MACHO project collected $R$, $I$ band data for $\sim$1900 Cepheids in the LMC [@skrutskie06; @allsman00; @welch96], while EROS collected $V,R$ band data for $\sim$300 and $\sim$600 Cepheids in the LMC and in the SMC, respectively [@marquette00]. The most complete sample of MC Cepheids was collected by OGLE-III [@sos2008 2010]. Their catalog includes $V,I$ band light curves for $\sim$7000 Cepheids (LMC: 2000 fundamental \[FU\], 1000 first overtone \[FO\]; SMC: 2500 FU, 1500 FO). Accurate distance determinations to the MCs based on optical Period–Wesenheit (PW) relations have also been provided by @udalski99, @bono02, @greo03 and @ngeow09. More recently, @dicriscienzo12 provided a detailed theoretical investigation concerning the PW relations in the Sloan Digital Sky Survey bands.
The NIR data bases for MC Cepheids are significantly smaller: @laney86 [1994] collected NIR light curves for 44 MC Cepheids (21 LMC, 23 SMC), while @welch97 collected them for 91 SMC Cepheids, and @persson04 [hereinafter P04] collected NIR light curves for 92 LMC Cepheids. More recently, accurate $K$ band photometry (12 phase points per variable) was collected by @ripepi12 [hereinafter R12] in two LMC fields located around 30 Doradus (172 FU, 152 FO) and the South Ecliptic Pole (11 Cepheids). They provided, by using also literature data, accurate estimates of Period–Luminosity (PL), PW, and Period-Luminosity-Color relations for both FU and FO Cepheids. One of the key advantages in using NIR data is that the pulsation amplitude decreases for increasing wavelength, and the estimate of the mean magnitude becomes easier. The previous largest set of single epoch measurements for MC Cepheids was collected by @groenewegen00 (LMC: 713 FU, 450 FO; SMC: 1200 FU, 675 FO) using 2MASS and DENIS data sets. The same approach was also followed by @nikolaev04 using MACHO and 2MASS data sets (LMC: 1357 FU, 749 FO) and more recently by @ngeow09 using the 2MASS data set (LMC: 1761 FU) and the mid-infrared SAGE *Spitzer* [@meixner06] data set (LMC: 1759 FU).
In this investigation, we provide new MC distances using a new sample of single-epoch *J,H,K$_{\rm S}$* measurements of a significant fraction of MC Cepheids ($\sim$80%) detected by OGLE. In particular, in Section 2 we discuss the NIR and optical data sets we adopted in this investigation together with the criteria to select both FU and FO Cepheids. In Section 3 we present the PW relations, while in Section 3.1 we focus our attention on the linearity and the metallicity dependence of NIR and optical-NIR PW relations. Empirical and theoretical absolute calibrations of the PW relations are addressed in Section 3.2. The summary of the results and more detailed discussion concerning pros and cons of the two independent calibrations are given in Section 4, while in Section 5 we briefly outline some possible future avenues concerning the developments of this project.
Data sets and data selection
============================
The Cepheid intrinsic parameters are taken from the OGLE-III catalog [@sos2008 2010]. We adopted the following Cepheid parameters: pulsation period, position (right ascension and declination), mean $V$ and $I$ band magnitude, $I$ band amplitude, epoch of maximum and pulsation mode. The optical OGLE-III Cepheid catalog was cross-correlated with the NIR catalog of the IRSF/SIRIUS Near-Infrared Magellanic Clouds Survey provided by @Kato07. The single-epoch *J,H,K$_{\rm S}$* magnitudes for the OGLE-III FU Cepheids were extracted by @matsunaga11. In this investigation we also included FO Cepheids. We ended up with a sample of 3042 LMC (1840 FU, 1202 FO) and 4150 SMC (2571 FU, 1579 FO) Cepheids with three NIR (*J,H,K$_{\rm S}$*) single-epoch measurements. The IRSF/SIRIUS *J,H,K$_{\rm S}$* measurements were transformed into the 2MASS NIR photometric system following @Kato07.
The mean magnitudes of FU Cepheids were estimated using the NIR template light curves provided by @templ. To assess the accuracy of this method, we compared our estimates of mean magnitudes with the mean magnitudes for LMC Cepheids based on finely sampled light curves (P04). The template light curves and the mean magnitudes by P04 were also transformed into the 2MASS NIR photometric system following @carpenter01. Figure 1 shows that the intrinsic dispersion decreases by a factor of two when moving from the single-epoch measurements to the mean magnitudes based on the template (0.12 versus 0.05 mag). The total error budget of the mean magnitudes estimated using the template light curve is given by $\sigma^{2}_{\lambda_i}=\sigma^{2}_{m_i}+\sigma^{2}_{\rm cal}+\sigma^{2}_{\rm rph}$, where $\sigma_{m_i}$ is the intrinsic photometric error, with a typical value of $\sim$0.03 mag at 16 mag in *J,H,K$_{\rm S}$*; $\sigma_{\rm cal}$ is the error due to the transformation into the 2MASS photometric system, which is of the order of 0.01 mag for the UKIRT, LCO and IRSF systems; and $\sigma_{\rm rph}$ is the scatter due to the random phase sampling, which is given by the template algorithm and is $\sim$0.05 mag [@templ].
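For illustration, the quadrature sum above can be sketched as follows; this is our own snippet (the function name is ours), plugging in the typical values quoted in the text:

```python
import math

def mean_mag_error(sigma_phot, sigma_cal, sigma_rph):
    """Total error budget of a template-based mean magnitude:
    quadrature sum of the photometric, calibration, and
    random-phase terms (all in mag)."""
    return math.sqrt(sigma_phot**2 + sigma_cal**2 + sigma_rph**2)

# Typical values quoted in the text for a 16 mag Cepheid:
# sigma_m ~ 0.03, sigma_cal ~ 0.01, sigma_rph ~ 0.05 mag.
sigma_total = mean_mag_error(0.03, 0.01, 0.05)
```

For these typical values the random-phase term dominates and the total error is $\sim$0.06 mag.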
For the FO Cepheids the mean magnitude is based on the single epoch measurements, since template light curves are not available for these pulsators. It is worth mentioning that the mean magnitudes of these pulsators are less affected by the random phase sampling, since their luminosity amplitude is on average three times smaller than for FU Cepheids [@madfreed10]. The errors of the FO mean magnitudes were estimated using the above relation, but with the term $\sigma_{\rm rph}$ set to the typical semi-amplitude of FO light curves ($\sim$0.10 mag). We plan to address the discussion concerning the template light curves for FO Cepheids and their errors in a forthcoming paper.
Period Wesenheit relations
==========================
The Wesenheit indices, introduced by @madore82, are pseudo-magnitudes closely related to apparent magnitudes, but minimally affected by uncertainties on reddening. On the basis of two magnitudes, $m_{\lambda_1}$ and $m_{\lambda_2}$, we can define a Wesenheit index: $$W(\lambda_2,\lambda_1)=m_{\lambda_1}-\left[\frac{A(\lambda_1)}{E (m_{\lambda_2}-m_{ \lambda_1})}\right]\times (m_{\lambda_2}-m_{ \lambda_1});
\label{eq: W}$$ where $\lambda_1>\lambda_2$ and $\frac{A(\lambda_1)}{E(m_{\lambda_2}-m_{\lambda_1})}$ is the ratio of total to selective extinction for the given filters –$\{\lambda_i=V, I, J, H, K_{\rm S}\}$– and for the adopted reddening law. The clear advantage in using the Wesenheit indices is that they are minimally affected by uncertainties affecting reddening corrections for Galactic and extragalactic Cepheids. Once we fix the reddening law [@cardelli89] and assume $R_{V}$=$\frac{A(V)}{A(B)-A(V)}$=3.23, we obtain the following selective absorption ratios: $A_I$/$A_V$=0.61; $A_J$/$A_V$=0.29; $A_H$/$A_V$=0.18; $A_{K_S}$/$A_V$=0.12.
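As an illustrative check, the color coefficient of each Wesenheit index follows directly from the quoted absorption ratios. The helper below is our own sketch (names are ours); small differences with the coefficient values quoted later in the text stem from rounding of the $A_\lambda/A_V$ ratios:

```python
# Selective absorption ratios A_lambda / A_V from the Cardelli et al.
# reddening law with R_V = 3.23, as quoted in the text.
A_over_AV = {'V': 1.00, 'I': 0.61, 'J': 0.29, 'H': 0.18, 'Ks': 0.12}

def wesenheit_coeff(short, long):
    """Color coefficient A(long) / [A(short) - A(long)] of the
    Wesenheit index W(short, long)."""
    return A_over_AV[long] / (A_over_AV[short] - A_over_AV[long])

def wesenheit(m_short, m_long, short, long):
    """W(short, long) = m_long - coeff * (m_short - m_long)."""
    return m_long - wesenheit_coeff(short, long) * (m_short - m_long)
```

With these ratios, `wesenheit_coeff('V', 'I')` $\approx$ 1.56, `wesenheit_coeff('V', 'Ks')` $\approx$ 0.14 and `wesenheit_coeff('J', 'H')` $\approx$ 1.64, i.e., the color term of the optical–NIR indices is much smaller than that of the purely NIR ones.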
By combining the five optical–NIR (*VIJHK$_{\rm S}$*) mean magnitudes, we can compute 10 Wesenheit indices for each Cepheid in the sample, and in turn, 10 PW relations of the form $W(\lambda_2,\lambda_1)=a+b\times\log P$, where P is the pulsation period in days. We decided to analyze FU and FO Cepheids separately, to overcome possible systematic uncertainties that the 'fundamentalization' of the FO periods might introduce in the estimate of the PL relations [@feast97; @marengo10]. Therefore, we computed independent PW relations for FU and FO Cepheids. We performed a linear fit of the data to identify possible outliers, and included only data within 4$\sigma$ of the central location estimated using the robust *Biweight* location estimator [@bw]. We ended up with a sample of $\sim$4000 SMC ($\sim$2500, FU; $\sim$1500, FO) and $\sim$3000 LMC ($\sim$1800, FU; $\sim$1100, FO) Cepheids. For approximately three dozen Cepheids with period $\gtrsim$ 20 days the $I$ band is saturated in the OGLE-III data set, and therefore we cannot apply the template. The NIR mean magnitudes for these Cepheids were taken from P04 and transformed into the 2MASS photometric system (see green dots in Figure 2).
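A minimal numpy-only sketch of this selection scheme (our own implementation of the biweight location and of the 4$\sigma$ clipping, not the exact pipeline used here):

```python
import numpy as np

def biweight_location(x, c=6.0):
    """Tukey biweight location estimator (robust central value)."""
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:
        return med
    u = (x - med) / (c * mad)
    w = (1 - u**2)**2
    w[np.abs(u) >= 1] = 0.0
    return med + np.sum((x - med) * w) / np.sum(w)

def clipped_pw_fit(logP, W, nsigma=4.0, iters=3):
    """Linear fit W = a + b*logP, iteratively rejecting points more
    than nsigma * std from the biweight location of the residuals."""
    keep = np.ones(len(W), dtype=bool)
    for _ in range(iters):
        b, a = np.polyfit(logP[keep], W[keep], 1)
        res = W - (a + b * logP)
        center = biweight_location(res[keep])
        keep = np.abs(res - center) < nsigma * np.std(res[keep])
    return a, b, keep
```

On synthetic data with a handful of bright blends injected, the clipping recovers the input slope and flags the contaminated points.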
We then performed a linear regression of the NIR data and the results for the three PW relations are shown in Figure 2 (see also Table 1). From top to bottom each panel shows FU (red and green dots) and FO (blue dots) LMC (left) and SMC (right) Cepheids. The PW relations are over–plotted as black lines. Data plotted in Figure 2 display four relevant findings concerning the NIR PW relations. (1) The FU and the FO PW relations are linear over the entire period range. (2) The intrinsic dispersions of the SMC PW relations are a factor of two larger than those of the LMC PW relations. The difference is mainly caused by depth effects in the former system [@vandenbergh00]. (3) For each PW relation the difference in the slope between LMC and SMC Cepheids is small. Data listed in Tables 1 and 2 indicate that it is, on average, smaller than 0.8$\sigma_{tot}$, where $\sigma_{tot}$ is the sum in quadrature of the dispersions of the LMC and SMC PW relations. This indicates a minimal dependence of the NIR PW relations on the metal content. (4) The width in temperature of the FO instability strip is narrower than that of the instability strip for FU Cepheids [@bono00], but the intrinsic dispersions are not significantly different. The lack of a template light curve for FO Cepheids increases the dispersion of their mean magnitudes.
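Finding (3) can be made concrete with a small helper (illustrative; the function name is ours) expressing a slope difference in units of $\sigma_{tot}$ as defined above:

```python
import math

def slope_difference_sigma(b1, b2, disp1, disp2):
    """|b1 - b2| in units of sigma_tot, the quadrature sum of the
    dispersions of the two PW relations."""
    return abs(b1 - b2) / math.sqrt(disp1**2 + disp2**2)
```

For example, two slopes differing by 0.06 with dispersions of 0.09 and 0.08 mag differ by only $\approx$0.5$\sigma_{tot}$.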
The above findings support recent theoretical [@bono10] and empirical [@majess11] investigations. The main advantage of the current approach is that the results rely on a sample of NIR single epoch measurements that is 2–3 times larger than in any previous investigation [@groenewegen00]. In order to constrain the possible occurrence of systematic errors in the NIR PW relations, we also computed the optical–NIR PW relations using the $V$, $I$ mean magnitudes provided by OGLE-III. Figures 3 and 4 show the optical–NIR PW relations for FU (red dots) and FO (blue dots) LMC and SMC Cepheids, respectively. Once again, we found that the PW relations are linear and the slopes are minimally affected by the difference in metal content (see Table 1). Current results concerning the linearity of NIR and optical–NIR PW relations support previous findings by @ngeow05 and @madore09.
Linearity of PW Relations
-------------------------
To constrain the linearity of NIR PW relations on a more quantitative basis, we estimated the distance of individual Cepheids from the least-squares solution. The residuals for FU Cepheids plotted in the top panels of Figure 5 do not show any trend as a function of the pulsation period. To further constrain this evidence, we performed a linear fit to the residuals and we found that the zero–points, the slopes and the means attain vanishing values. Moreover, the dispersions are typically smaller than 0.2 mag. The same outcome applies to the FO Cepheids (see bottom panels of Figure 5). Note that the residuals of the FO PW relations are larger than the residuals of the FU PW relations, since for the former we lack template light curves. The residuals for the SMC are larger than those for the LMC due to depth effects. The anonymous referee explicitly asked for a quantitative analysis concerning the linearity of the PW relations for both FU and FO SMC Cepheids. To our knowledge there is no clear physical reason why FU and FO NIR PW relations should show a break; therefore, we decided to constrain its possible occurrence using different breaks in period. We split the entire Cepheid sample by adopting a break in period at $\log P$=0.45. This means that we assume as short-period Cepheids those with $\log P\le$0.45, while the long–period ones are those with 0.45$< \log P \le$1.65 (FU) and with 0.45$< \log P\le $0.65 (FO). The zero–points and the slopes for FU NIR PW relations listed in Table 3 show that their errors are a factor of 3–4 larger than the errors of the linear regressions based on the entire sample. This trend is expected, since the number of Cepheids included in the two new linear regressions decreases from a factor of three (long–period) to 50% (short–period). On the other hand, the dispersions of the new PW relations are either similar (short–period) or on average smaller (long–period).
The new FO PW relations show similar trends concerning the intrinsic errors on the zero–points, on the slopes and on the dispersions.
The break in period is defined somewhat arbitrarily; therefore, we decided to perform the same test, but using a break at $\log P$=0.40 and $\log P$=0.35. Current empirical evidence suggests that optical PL relations of SMC Cepheids show a break in period at $\log P$$\approx$0.4 [@sandage09], while for LMC Cepheids the break seems to be at $\log P$$\approx$1 [@sandage04]. The results concerning the new NIR PW relations are listed in Table 3 and indicate that the short–period PW relations are quite similar to the global PW relations, i.e., the PW relations covering the entire period range. This trend is –once again– expected, since more than 2/3 of the Cepheid sample is in the short–period range. The evidence that linear regressions with an arbitrary break in period give PW relations with either similar or marginally smaller dispersion is also expected. This is the consequence of the increase in the degrees of freedom of the linear regressions. However, this does not mean that the PW relations with a break in period are a better representation of the observations. To address this issue on a more quantitative basis, we devised a new empirical test based on the relative distance between the SMC and the LMC. The MC relative distance is quite solid, since different standard candles provide similar estimates.
By adopting both NIR and optical–NIR PW relations, we found that the relative distance modulus based on FU Cepheids and at $\log P$=1 is $\Delta \mu$=0.48$\pm$0.03 mag. This evaluation agrees quite well with similar estimates available in the literature (see Section 4). To further constrain the intrinsic accuracy of the NIR PW relations with a break in period, we computed three new PW($J$,$K_{\rm S}$) relations for LMC Cepheids. Following @sandage04, we adopted a break in period at $\log P$=1. The zero–points and the slopes of the new NIR PW relations are listed in Table 4. We estimated the MC relative distances by using the short–period and the long–period PW relations. The relative distances based on the former were estimated at $\log P$=0.3, while those based on the latter were estimated at $\log P$=1.0. The results listed in Table 5 indicate –as expected– that the MC relative distances based on short–period PW relations agree quite well with the MC relative distances based on global PW relations. The main difference is that the relative distances based on short–period PW relations have intrinsic errors, estimated by propagating the errors on both the coefficients and the dispersion of the individual PW relations, that are on average a factor of two larger than those based on the global PW relations. The same outcome applies to the MC relative distances based on the long–period PW relations. However, their intrinsic errors are larger and they also show a larger spread among the three different NIR PW relations. Note that the MC relative distances based on the long–period PW($J$,$H$) relations are systematically smaller than the others, because the zero–point of the long–period PW relation for LMC Cepheids is larger than the zero–point of the global PW relation (15.949 versus 15.876).
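The relative-distance test used above can be sketched as follows; this is an illustrative helper (names are ours) with the coefficient errors propagated in quadrature, and the numbers in the usage note are made up:

```python
import math

def relative_distance(a_smc, b_smc, a_lmc, b_lmc, logP,
                      err_a_smc, err_b_smc, err_a_lmc, err_b_lmc):
    """Apparent SMC-LMC relative distance modulus evaluated at a pivot
    period logP, with the errors on the four PW coefficients
    propagated in quadrature."""
    dmu = (a_smc + b_smc * logP) - (a_lmc + b_lmc * logP)
    err = math.sqrt(err_a_smc**2 + err_a_lmc**2
                    + (logP * err_b_smc)**2 + (logP * err_b_lmc)**2)
    return dmu, err
```

For instance, two relations with zero–points 16.5 and 16.0 mag, a common slope, and 0.01 mag errors on every coefficient give $\Delta\mu$=0.50$\pm$0.02 mag at $\log P$=1.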
We repeated the same test by using two different pivot periods, namely $\log P$=0.2 for the short–period and $\log P$=1.2 for the long–period PW relations and the results are quite similar. We also performed the same test using NIR FO PW relations and the outcome is –once again– quite similar. Note that the intrinsic errors on the coefficients of the long–period FO PW relations are larger than the errors of the short–period ones, since the Cepheid sample in the former period interval is from a factor of five to a factor of 10 smaller than in the latter one.
The above findings indicate that the PW relations with arbitrary breaks in period, when compared with global PW relations, have larger intrinsic errors on the coefficients of the linear regressions and roughly equivalent dispersions. However, the MC relative distances based on the former are characterized by intrinsic errors that are, on average, a factor of two larger than those based on the latter, thus further supporting the use of global NIR PW relations.
This provides independent support to the results concerning the linearity of both optical and NIR PW relations for FU Cepheids by @persson04 [@bono10; @ngeow12 and references therein]. We found that optical and NIR PW relations for FO Cepheids are also linear over the entire period range, supporting previous findings by @ngeow05 and @madore09. We are thus facing the empirical evidence that optical and NIR PL relations for FU Cepheids do show a change in the slope for $\log P\approx$0.4 [@sasselov97; @bono99; @ngeow05; @koen07; @matsunaga11], while the PW relations do not. The difference between the PL and the PW relations is mainly due to the fact that the latter mimics, as originally suggested by @bonomarconi99, a PLC relation.
Metallicity dependence of the PW relations
------------------------------------------
To further constrain the metallicity dependence of the NIR PW relations, we performed a detailed comparison with similar estimates available in the literature. The middle panel of Figure 6 shows the difference between the slopes of the PW($J$,$K_{\rm S}$) relations we estimated for LMC (black line) and SMC (green line) Cepheids and similar PW relations for Galactic Cepheids (see Table 6) derived by @storm [hereinafter S11a; red line] and by @ngeow12 [hereinafter N12; purple line]. The standard deviations plotted in the same figure clearly indicate that current Magellanic and Galactic NIR PW relations do agree within 1$\sigma$. The difference in the slope between our SMC and Galactic PW($J$,$K_{\rm S}$) relations is, on average, smaller than 0.3$\sigma$ (N12) and 0.4$\sigma$ (S11a).
The anonymous referee suggested performing the same comparison for the optical PW($V$,$I$) relation. The top panel of Figure 6 shows the difference between the slopes of the PW($V$,$I$) relations we estimated for LMC (black line) and SMC (green line) Cepheids and similar PW relations for Galactic Cepheids (see Table 6) derived by S11a (red line) and @benedict07 [hereinafter B07; blue line]. The standard deviations plotted in the same figure clearly indicate that current Magellanic and Galactic optical PW relations do agree within 1$\sigma$. The difference in the slope of the PW($V$,$I$) relation between our metal-poor stellar system (SMC, \[Fe/H\]=-0.75) and our metal-rich stellar system (Galaxy, \[Fe/H\]=-0.18 to +0.25) is, on average, smaller than $\sim$0.1$\sigma$ (B07) and $\sim$0.9$\sigma$ (S11a). The bottom panel of Figure 6 shows the difference between the slopes of the PW($V$,$K_{\rm S}$) relations we estimated for LMC (black line) and SMC (green line) Cepheids and the PW relation for LMC Cepheids (see Table 6) derived by R12 (gray line). Data plotted in this figure clearly indicate the good agreement between the two independent LMC slopes. Moreover, current SMC and LMC PW relations do agree within 1$\sigma$. The other NIR and optical-NIR PW relations provide similar results. The quoted numbers indicate that the PW relations are, in the metallicity range covered by Magellanic Cepheids, independent of metal abundance. The extension into the more metal-rich regime does require more accurate measurements for Galactic Cepheids.
Absolute calibration of the PW relations
----------------------------------------
To estimate the distances to the MCs, we combined our new comprehensive sets of PW relations with recent findings concerning absolute magnitudes of Galactic Cepheids. We followed the same approach suggested by P04 to calibrate the LMC PW relations and adopted the 10 FU Galactic Cepheids with *Hubble Space Telescope* trigonometric parallaxes [@benedict07]. To calibrate the FO PW relations, we adopted the *Hipparcos* trigonometric parallax for Polaris provided by @vanleeuwen07. The mean *J,H,K$_{\rm S}$* magnitudes for the calibrating Galactic Cepheids are from @laney92. We estimated the true distance modulus –$\mu$– of both LMC and SMC by using the quoted calibrators and by imposing the slope of individual PW relations for FU and FO Cepheids (see Column 6 of Table 1).
Note that the true distance moduli for FU Cepheids were estimated as the weighted mean of the $\mu_i$ of individual calibrating Cepheids. The associated error on $\mu$ is the sum in quadrature of the weighted error on the distance and of the intrinsic dispersion associated with the linear regression (see Column 5 of Table 1). The weighted means based on the FU PW relations give $\mu$(LMC)=18.45$\pm$0.02 and $\mu$(SMC)=18.93$\pm$0.02 mag, while the weighted means based on FO PW relations give $\mu$(LMC)=18.60$\pm$0.03 and $\mu$(SMC)=19.12$\pm$0.03 mag. To constrain the possible occurrence of deceptive errors in the absolute zero-point, we performed an independent zero-point calibration using predicted FU PW relations for Magellanic Cepheids provided by @bono99 and @marconi05. Recent investigations indicate that theory and observations agree quite well concerning optical and optical-NIR PW relations [@bono10]. The true distance moduli based on the new zero-point calibration are listed in Column 7 of Table 1. The error associated with individual distance moduli is the standard deviation from the theoretical PW relation. The new weighted means based on the FU PW relations give $\mu$(LMC)=18.56$\pm$0.02 and $\mu$(SMC)=18.93$\pm$0.02 mag. Interestingly enough, the two independent calibrations for FU Cepheids do provide weighted true distance moduli to the MCs that agree quite well ($\Delta\mu \lesssim 0.11$ mag). This finding appears even more compelling if we take into account that we are using independent NIR and optical data sets together with independent theoretical and empirical calibrators.
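A minimal sketch of this weighted-mean procedure (our own helper; the dispersion of the adopted PW relation is added in quadrature to the weighted error, as described above):

```python
import math

def weighted_mean_modulus(mus, errs, dispersion=0.0):
    """Inverse-variance weighted mean of individual distance moduli.
    The total error is the quadrature sum of the weighted-mean error
    and of the dispersion of the adopted PW relation."""
    weights = [1.0 / e**2 for e in errs]
    mu = sum(w * m for w, m in zip(weights, mus)) / sum(weights)
    err = math.sqrt(1.0 / sum(weights) + dispersion**2)
    return mu, err
```

For two moduli of 18.4 and 18.5 mag, each with a 0.1 mag error, the helper returns 18.45 mag with a $\sim$0.07 mag error; adding a 0.05 mag dispersion increases the total error to $\sim$0.09 mag.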
The current zero-point calibration for FO PW relations relies on the trigonometric parallax of a single object [Polaris, @vanleeuwen07]. Absolute distances for FO Galactic and Magellanic Cepheids based on the IRSB method are not available. To overcome this problem, we decided to use predicted FO PW relations for Magellanic Cepheids provided by @bono99 and @marconi05. The true distance moduli based on the new zero-point calibration are listed in Column 7 of Table 1. The errors associated with individual distance moduli are the dispersions from the theoretical PW relation. The new weighted means based on the FO PW relations give $\mu$(LMC)=18.51$\pm$0.02 and $\mu$(SMC)=19.02$\pm$0.02 mag. The two independent empirical calibrations for FU and FO Cepheids provide weighted true distance moduli to the MCs that differ by 0.15 (LMC) to 0.19 (SMC) mag. On the other hand, the weighted true distance moduli based on the theoretical calibrations differ at the level of a few hundredths of a mag. The difference between the two empirical calibrations is due to the fact that the empirical calibration for FO PW relations relies on a single object (see Section 4).
However, data listed in Table 1 indicate that the PW($J,H$) and the PW($I,H$) relations for FU and FO Cepheids, calibrated using the Galactic Cepheids with trigonometric distances, provide true distance moduli that differ at the 2$\sigma$-3$\sigma$ level from the weighted mean. The evidence that distances based on PW relations, calibrated using theoretical predictions for MC Cepheids (L. Inno et al. 2013, in preparation), show smaller differences indicates that the main culprit seems to be the precision of the $H$ band zero-point calibration. However, the difference in the weighted mean distances, provided by the two independent zero-point calibrations for FU Cepheids, is smaller than 5% (LMC: 49.0 $\pm$1.2 versus 51.5$\pm$1.2 kpc; SMC: 61.1$\pm$2.2 versus 68.8$\pm$2.3 kpc). Moreover, the total uncertainty of current LMC and SMC distances is at the $\sim$2% and at the $\sim$4% level, respectively. Note that we obtain very similar distances if we neglect the distances based on the PW($J,H$) and PW($I,H$) relations, namely 18.47$\pm$0.02 (trigonometric parallaxes) and 18.57$\pm$0.03 (theory) mag.
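The kpc values quoted above follow from the standard relation $d = 10^{\mu/5 + 1}$ pc; a sketch with first-order error propagation (helper name is ours):

```python
import math

def modulus_to_kpc(mu, sigma_mu=0.0):
    """Distance (kpc) from a true distance modulus mu, with the error
    propagated to first order: sigma_d = 0.2 * ln(10) * d * sigma_mu."""
    d_kpc = 10.0 ** (mu / 5.0 + 1.0) / 1000.0
    return d_kpc, 0.2 * math.log(10.0) * d_kpc * sigma_mu
```

For $\mu$=18.45 mag the helper gives $d \approx$ 49.0 kpc, matching the LMC value quoted above.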
To further constrain the possible sources of systematic errors in current distance estimates, we also investigated the impact of the adopted reddening law. In a recent investigation @kudritzki11 suggested that distance determinations based on the PW relations may be affected by changes in the reddening law either in the Galaxy or in external stellar systems. To quantify this effect, we computed a new set of PW relations by adopting the reddening law by @mccall04. We found that the difference in the true distance moduli, based on the two different reddening laws, is on average $\sim$0.01 mag. The mild dependence of current distance determinations on the reddening law might also be due to the fact that the selective absorption ratios of optical–NIR PW relations are less sensitive to the fine structure of the reddening law. The selective absorption ratios given in Section 3 indicate that the coefficient of the color term of the PW($V,K$) relation is at least one order of magnitude smaller than the coefficients of the PW($J,H$) and PW($H,K_{\rm S}$) relations (0.13 versus 1.63 and 1.92). This evidence indicates that the difference in the true distance moduli based on the PW($J,H$) and PW($H,K_{\rm S}$) relations might also be caused either by photometric errors in the mean magnitudes or by changes in the reddening law along the line-of-sight of the *HST* Galactic calibrating Cepheids.
Summary and discussion
======================
We present new true distance modulus determinations of the MCs using NIR and optical-NIR PW relations. The NIR PW relations were estimated adopting the largest data set of *J,H,K$_{\rm S}$* single epoch measurements ever collected for MC Cepheids. The optical $V,I$ measurements come from the OGLE-III data set. We ended up with a sample of 4150 (2571, FU; 1579, FO; SMC) and 3042 (1840, FU; 1202, FO; LMC) Cepheids. We estimated independent PW relations for both FU and FO Cepheids. The slopes of the current FU PW relations agree quite well with similar estimates available in the literature. We found that they agree at the 1.2$\sigma$ level with the slopes of the NIR PW relations for LMC Cepheids derived by P04. The agreement is even better if we compare our slopes for the PW($J,K_{\rm S}$) relations with the slopes recently provided by @stormb [hereinafter S11b] for the LMC (LMC: -3.31$\pm$0.09 versus -3.365$\pm$0.008). The above findings are even more relevant if we take into account that current slopes are based on data samples that are from $\sim$3 times [@groenewegen00] up to 30 (P04) and 80 (S11b) times larger than the quoted samples. We cannot perform a similar comparison concerning the slopes of the FO PW relations, since to our knowledge they are not available in the literature.
Moreover, we found that both FU and FO PW relations are linear over the entire period range and their slopes attain, within the intrinsic dispersions, similar values in the MCs. The difference is, on average, smaller than 0.8$\sigma$. The difference between the slopes of our SMC and Galactic PW($J$,$K_{\rm S}$) relations available in the literature is, on average, smaller than 0.5$\sigma$ (0.3$\sigma$, N12; 0.4$\sigma$, S11b). The same outcome applies to the optical bands, and indeed the difference in the slope between our SMC and Galactic PW($V$,$I$) relations available in the literature is, on average, smaller than $\sim$0.1$\sigma$ (B07) and $\sim$0.9$\sigma$ (S11a). This supports the evidence for a marginal dependence of the NIR and PW($V$,$I$) relations on the metal content, as suggested by pulsation predictions and recent empirical investigations.
The new PW relations were calibrated using two independent sets of Galactic Cepheids with individual distances based either on trigonometric parallaxes or on theoretical models. By using FU Cepheids we found a true distance modulus to the LMC of 18.45$\pm$0.02 (random) $\pm$0.10 (systematic) mag and to the SMC of 18.93$\pm$0.02 (random) $\pm$0.10 (systematic) mag. These estimates are the weighted mean over the entire set of distance determinations. The random error was estimated by taking into account the intrinsic dispersion of individual PW relations. The systematic error is the sum in quadrature of the difference in $\mu$ introduced by the change in reddening law and in the zero-point calibration.
We found similar distances using FO Cepheids: 18.60$\pm$0.03 (random) $\pm$0.10 (systematic) mag (LMC) and 19.12$\pm$0.03 (random) $\pm$0.10 (systematic) mag (SMC). Once again the random errors were estimated by taking into account the intrinsic dispersion of individual PW relations, while the systematic ones are the sum in quadrature of the difference in $\mu$ introduced by the change in reddening law and in the zero-point calibration.
The two independent empirical calibrations for FU and FO Cepheids provide weighted true distance moduli to the MCs that differ by 0.15 (LMC) and 0.19 (SMC) mag. On the other hand, the weighted true distance moduli based on the theoretical calibrations differ at the level of a few hundredths of a mag. The difference between the empirical and theoretical calibrations is due to the fact that the empirical calibration for FO PW relations relies on a single object (see Section 3.2).
The relative distance of the MCs, for distance indicators minimally affected by the metal content, is independent of uncertainties affecting the zero-point calibration. We found that the weighted mean relative distance between SMC and LMC using FU Cepheids and the PW relation listed in Table 1 ($\log P$=1) is $\Delta\mu$=0.48$\pm$0.03 mag. We applied the same approach by using FO Cepheids and we found $\Delta\mu$=0.52$\pm$0.03 mag ($\log P$=0.5). The errors on the weighted mean relative distances were estimated by using the dispersions of individual PW relations. The quoted determinations agree quite well with each other and with the recent estimate $\Delta\mu$=0.47$\pm$0.15 mag provided by S11b by using the IRSB method [see also @groenewegen00; @bono10; @matsunaga11].
The distance modulus we obtained for the LMC agrees quite well with the recent estimates provided by S11b (18.45$\pm$0.04 mag) and by P04 (18.50$\pm$0.05 mag) by using the NIR PL, PLC and PW relations. The difference is also minimal with the “classical” value –18.50$\pm$0.10 mag– [@freedman01]. The same conclusion can be reached if we compare the current estimate with recent distance moduli provided by @benedict07 [18.50$\pm$0.03; $HST$ trigonometric parallaxes for Galactic Cepheids and the LMC slope of the optical PW relation]; by @ngeow08 [18.49$\pm$0.04; optical PL and PLC relations]; by @madore10 [18.44$\pm$0.03 (random) $\pm$0.06 (systematic); PW($V,I$) relation for Galactic and LMC Cepheids]; and by @ngeow12 [18.531$\pm$0.043 mag; NIR and optical-NIR PW relations] [^1]. Moreover, our result also agrees with the most recent distance modulus –18.477$\pm$0.033– provided by @scowcroft11 and @freedman12, using the $Spitzer$ mid-IR band PL relations.
The distance modulus we obtained for the SMC is, once again, in very good agreement with the independent estimates provided by [@groenewegen00 19.11$\pm$0.11 mag; *Hipparcos* trigonometric parallaxes and PW($V,I$) relation] and S11b (18.92$\pm$0.14 mag).
Final remarks
=============
The key feature of the current findings is that the random errors associated with our distance determinations are very small, both because we adopt a homogeneous and accurate NIR data set and because we fully exploit the NIR and optical–NIR PW relations. Moreover, the use of two independent zero-point calibrations and two different reddening laws indicates that the global uncertainty on the MC distances is of the order of 1% when using either the 10 NIR/optical–NIR PW relations or the seven optical–NIR PW relations.
However, there are a few pending issues that need to be addressed in more detail in the near future.
1. The very good intrinsic accuracy of the NIR and optical–NIR PW relations further supports the findings by @bono10 [see their Figures 13 and 14] indicating that the difference between optical ($B,V$) and NIR ($J,K_{\rm S}$) PW relations can be adopted to constrain the metallicity correction(s) to the Cepheid distance scale based on optical PL relations. Moreover, current findings indicate that the error budget of the absolute distances based on PW relations is dominated by uncertainties in the zero-point. The solution of this problem appears quite promising, given that *Gaia* will be launched in approximately one year and that the number of double eclipsing binaries including classical Cepheids has been steadily increasing in recent years [@pietrzynski10; @pietrzynski11]. Moreover, new optical [OGLE-IV; @sos2012] and NIR [Galaxy: VVV, @minniti10]; [MCs: VMC, @cioni11 R12] surveys will also provide new, homogeneous and accurate mean magnitudes.
2. The above results provide independent support for the plausibility of the physical assumptions adopted in current hydrodynamical models of variable stars. Indeed, the distance moduli based on theoretical calibrations agree well with the distance moduli based on empirical calibrations. However, we still lack detailed investigations concerning the pulsation properties of classical Cepheids in the metal-intermediate regime. In particular, we need a comprehensive analysis of the metallicity dependence of both PW relations and Period-Luminosity-Color relations in the optical and in the NIR regime.
3. Accurate spectroscopic iron abundances are only available for roughly 50% of Galactic Cepheids [@romaniello08; @pedicelli09; @luck11 and references therein] and for a few dozen MC Cepheids. However, the empirical scenario will improve substantially thanks to the ongoing massive ground-based spectroscopic surveys at 8m-class telescopes [*Gaia ESO Survey*, @gilmore12; @tolstoy09].
4. Plain physical arguments indicate that FO Cepheids have the potential to be robust distance indicators [@bono00]. However, we still lack NIR template light curves for these variables. Moreover, current FO absolute calibrations are also hampered by the lack of precise distance determinations based on trigonometric parallaxes for a good sample of Galactic calibrators. This circumstantial evidence limits the precision of MC distance determinations based on FO Cepheids.
5. Absolute distances based on PW relations including the H band are characterized by a large spread. The reasons for this behavior are not clear. New high-resolution, high signal-to-noise NIR spectra of Galactic Cepheids [@bono12] will no doubt shed new light on this open problem.
It is a pleasure to thank an anonymous referee for his/her pertinent suggestions and criticisms that helped us to improve the readability of the paper. We also acknowledge M. Fabrizio for many useful discussions concerning the use of Biweight and data set cleaning. One of us (G.B.) thanks the ESO for support as a science visitor. This work was partially supported by PRIN-INAF 2011 “Tracing the formation and evolution of the Galactic halo with VST" (P.I.: M. Marconi) and by PRIN-MIUR (2010LY5N2T) “Chemical and dynamical evolution of the Milky Way and Local Group galaxies" (P.I.: F. Matteucci).
Alcock, C., Allsman, R. A., Alves, D. R., et al. 2000, , 542, 281\
Benedict, G. F., McArthur, B. E., Feast, M. W., et al. 2007, , 133, 1810\
Beaulieu, J. P., & Marquette, J. B. 2000, in ASP Conf. Ser. 203, The Impact of Large-Scale Surveys on Pulsating Star Research, ed. L. Szabados & D. Kurtz (IAU Colloq. 176; San Francisco, CA: ASP), 139\
Bono, G., Caputo, F., Marconi, M., & Musella, I. 2010, , 715, 277\
Bono, G., Caputo, F., Castellani, V., & Marconi, M. 1999, , 512, 711\
Bono, G., Castellani, V., & Marconi, M. 2000, , 529, 293\
Bono, G., Groenewegen, M. A. T., Marconi, M., & Caputo, F. 2002, , 574, L33\
Bono, G., & Marconi, M. 1999, in IAU Symp. 190, New Views of the Magellanic Clouds, ed. Y.-H. Chu, N. Suntzeff, J. Hesser, & D. Bohlender (Cambridge: Cambridge University Press), 527\
Bono, G., Matsunaga, N., Inno, L., Lagioia, E. P., & Genovali, K. 2013, in Cosmic-Ray in Star-Forming Environments, ed. D. F. Torres & O. Reimer (Sant Cugat Forum in Astrophysics; Berlin: Springer), in press\
Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, , 345, 245\
Carpenter, J. M. 2001, , 121, 2851\
Cioni, M.-R. L., Clementini, G., Girardi, L., et al. 2011, , 527, A116\
Di Criscienzo, M., Marconi, M., Musella, I., Cignoni, M., & Ripepi, V. 2012, , 36, 212\
Fabrizio, M., Nonino, M., Bono, G., et al. 2011, , 123, 384\
Feast, M. W., & Catchpole, R. M. 1997, , 286, L1\
Freedman, W. L., & Madore, B. F. 2010a, , 48, 673\
Freedman, W. L., & Madore, B. F. 2010b, , 719, 335\
Freedman, W. L., Madore, B. F., Gibson, B. K., et al. 2001, , 553, 47\
Freedman, W. L., Madore, B. F., Scowcroft, V., et al. 2012, , 758, 24\
Gilmore, G., Randich, S., Asplund, M., et al. 2012, The Messenger, 147, 25\
Groenewegen, M. A. T. 2000, , 363, 901\
Groenewegen, M. A. T., & Salaris, M. 2003, , 410, 887\
Kato, D., Nagashima, C., Nagayama, T., et al. 2007, , 59, 615\
Koen, C., Kanbur, S., & Ngeow, C. 2007, , 380, 1440\
Kudritzki, R.-P., & Urbaneja, M. A. 2012, , 65, 131\
Laney, C. D., & Stobie, R. S. 1986, , 222, 449\
Laney, C. D., & Stobie, R. S. 1992, , 93, 93\
Laney, C. D., & Stobie, R. S. 1994, , 266, 441\
Luck, R. E., & Lambert, D. L. 2011, , 142, 136\
Madore, B. F. 1982, , 253, 575\
Madore, B. F., & Freedman, W. L. 2009, , 696, 1498\
Majaess, D., Turner, D., & Gieren, W. 2011, , 741, L36\
Marconi, M., Musella, I., & Fiorentino, G. 2005, , 632, 590\
Marengo, M., Evans, N. R., Barmby, P., et al. 2010, , 709, 120\
Matsunaga, N., Feast, M. W., & Menzies, J. W. 2009, , 397, 933\
Matsunaga, N., Feast, M. W., & Soszyński, I. 2011, , 413, 223\
McCall, M. L. 2004, , 128, 2144\
Meixner, M., Gordon, K. D., Indebetouw, R., et al. 2006, , 132, 2268\
Minniti, D., Lucas, P. W., Emerson, J. P., et al. 2010, New Astronomy, 15, 433\
Ngeow, C., & Kanbur, S. M. 2008, in Galaxies in the Local Volume, ed. B. S. Koribalski & H. Jerjen (Amsterdam: Springer), 317\
Ngeow, C.-C. 2012, ApJ, 747, 50 (N12)\
Ngeow, C.-C., Kanbur, S. M., Neilson, H. R., et al. 2009, , 693, 691\
Ngeow, C.-C., Kanbur, S. M., Nikolaev, S., et al. 2005, , 363, 831\
Nikolaev, S., Drake, A. J., Keller, S. C., et al. 2004, , 601, 260\
Pedicelli, S., Bono, G., Lemasle, B., et al. 2009, , 504, 81\
Persson, S. E., Madore, B. F., Krzemiński, W., et al. 2004, , 128, 2239 (P04)\
Pietrzyński, G., Thompson, I. B., Gieren, W., et al. 2010, , 468, 542\
Pietrzyński, G., Thompson, I. B., Graczyk, D., et al. 2011, , 742, L20\
Riess, A. G., Macri, L., Casertano, S., et al. 2011, , 730, 119\
Ripepi, V., Moretti, M. I., Marconi, M., et al. 2012, , 424, 1807 (R12)\
Romaniello, M., Primas, F., Mottini, M., et al. 2008, , 488, 731\
Sandage, A., Tammann, G. A., & Reindl, B. 2004, , 424, 43\
Sandage, A., Tammann, G. A., & Reindl, B. 2009, , 493, 471\
Sasselov, D. D., Beaulieu, J. P., Renault, C., et al. 1997, , 324, 471\
Scowcroft, V., Freedman, W. L., Madore, B. F., et al. 2011, , 743, 76\
Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, , 131, 1163\
Soszyński, I., Gieren, W., & Pietrzyński, G. 2005, , 117, 823\
Soszyński, I., Poleski, R., Udalski, A., et al. 2008, AcA, 58, 163\
Soszyński, I., Poleski, R., Udalski, A., et al. 2010, AcA, 60, 17\
Soszyński, I., Udalski, A., Poleski, R., et al. 2012, AcA, 62, 219\
Storm, J., Gieren, W., Fouqué, P., et al. 2011, , 534, A94 (S11a)\
Storm, J., Gieren, W., Fouqué, P., et al. 2011, , 534, A95 (S11b)\
Tolstoy, E., Hill, V., & Tosi, M. 2009, , 47, 371\
Udalski, A., Szymanski, M., Kubiak, M., et al. 1999, AcA, 49, 201\
van den Bergh, S. 2000, The Galaxies of the Local Group (Cambridge: Cambridge University Press), 92\
van Leeuwen, F., Feast, M. W., Whitelock, P. A., & Laney, C. D. 2007, , 379, 723\
Welch, D. L., & MACHO Collaboration 1999, in IAU Symp. 190, New Views of the Magellanic Clouds, ed. Y.-H. Chu, N. Suntzeff, J. Hesser, & D. Bohlender (Cambridge: Cambridge Univ. Press), 513\
Welch, D. L., McLaren, R. A., Madore, B. F., & McAlary, C. W. 1987, , 321, 162
[llrrcll]{}
\[tab\]\
\
W($J,K_{\rm S}$) & FU (1708)& 15.876 $\pm$ 0.005 & -3.365 $\pm$ 0.008&0.08 & 18.44 $\pm$0.05 & 18.53 $\pm$ 0.07\
W($J,H$) & FU (1701)& 15.630 $\pm$ 0.006 & -3.373 $\pm$ 0.008& 0.08 & 18.30 $\pm$0.05 & 18.65 $\pm$ 0.04\
W($H,K_{\rm S}$) & FU (1709)& 16.058 $\pm$ 0.006 & -3.360 $\pm$ 0.010& 0.10 & 18.54 $\pm$0.05 & 18.46 $\pm$ 0.12\
W($V,K_{\rm S}$) & FU (1737)& 15.901 $\pm$ 0.005 & -3.326 $\pm$ 0.008&0.07 & 18.46 $\pm$0.05 & 18.51 $\pm$ 0.08\
W($V,H$) & FU (1730)& 15.816 $\pm$ 0.005 & -3.315 $\pm$ 0.008&0.07 & 18.40 $\pm$0.05 & 18.56 $\pm$ 0.06\
W($V,J$) & FU (1732)& 15.978 $\pm$ 0.006 & -3.272 $\pm$ 0.009&0.08 & 18.49 $\pm$0.05 & 18.47 $\pm$ 0.12\
W($I,K_{\rm S}$) & FU (1737)& 15.902 $\pm$ 0.005 & -3.325 $\pm$ 0.008&0.07 & 18.46 $\pm$0.05 & 18.52 $\pm$ 0.08\
W($I,H$) & FU (1734)& 15.801 $\pm$ 0.005 & -3.317 $\pm$ 0.008&0.08 & 18.39 $\pm$0.05 & 18.55 $\pm$ 0.06\
W($I,J$) & FU (1735)& 16.002 $\pm$ 0.007 & -3.243 $\pm$ 0.011&0.10 & 18.50 $\pm$0.05 & 18.46 $\pm$ 0.12\
W($V,I$) & FU (1700)& 15.899 $\pm$ 0.005 & -3.327 $\pm$ 0.008&0.07 & 18.47 $\pm$0.05 & 18.53 $\pm$ 0.13\
&&&&&&\
MEAN (FU) &&&&& 18.45 $\pm$ 0.02 & 18.56 $\pm$ 0.02\
&&&&&&\
W($J,K_{\rm S}$)& FO (1057) & 15.370 $\pm$ 0.005 & -3.471 $\pm$ 0.013 & 0.08 & 18.60 $\pm$0.08 & 18.52 $\pm$ 0.06\
W($J,H$)& FO (1064) & 15.207 $\pm$ 0.005 & -3.507 $\pm$ 0.015& 0.09 & 18.60 $\pm$0.08 & 18.56 $\pm$ 0.06\
W($H,K_{\rm S}$)& FO (1063) & 15.483 $\pm$ 0.007 & -3.425 $\pm$ 0.017 &0.10 & 18.59 $\pm$0.08 & 18.49 $\pm$ 0.07\
W($V,K_{\rm S}$)& FO (1061) & 15.410 $\pm$ 0.005 & -3.456 $\pm$ 0.013 &0.07 & 18.61 $\pm$0.08 & 18.51 $\pm$ 0.06\
W($V,H$)& FO (1071) & 15.357 $\pm$ 0.004 & -3.485 $\pm$ 0.011 &0.08 & 18.61 $\pm$0.08 & 18.52 $\pm$ 0.06\
W($V,J$)& FO (1086) & 15.475 $\pm$ 0.005 & -3.434 $\pm$ 0.014 &0.10 & 18.62 $\pm$0.08 & 18.48 $\pm$ 0.06\
W($I,K_{\rm S}$)& FO (1059) & 15.402 $\pm$ 0.005& -3.448 $\pm$ 0.013 &0.08 & 18.61 $\pm$0.08 & 18.50 $\pm$ 0.06\
W($I,H$)& FO (1072) & 15.351 $\pm$ 0.004 & -3.489 $\pm$ 0.012 &0.08 & 18.62 $\pm$0.08 & 18.51 $\pm$ 0.06\
W($I,J$)& FO (1100) & 15.499 $\pm$ 0.006& -3.423 $\pm$ 0.020 & 0.13 & 18.66 $\pm$0.08 & 18.45 $\pm$ 0.06\
W($V,I$)& FO (1081) & 15.399 $\pm$ 0.003 &-3.460 $\pm$ 0.009 & 0.07 & 18.52 $\pm$0.06 & 18.56 $\pm$ 0.06\
\
MEAN (FO) &&&&& 18.60$\pm$0.03 & 18.51 $\pm$0.02\
\
\
W($J,K_{\rm S}$) & FU (2448)& 16.457 $\pm$ 0.006 & -3.480 $\pm$ 0.011 &0.16 & 18.92 $\pm$0.05 & 19.01 $ \pm$ 0.10\
W($J,H$) & FU (2448)& 16.217 $\pm$ 0.006 & -3.542 $\pm$ 0.011 &0.17 & 18.74 $\pm$0.05 & 19.02 $ \pm$ 0.07\
W($H,K_{\rm S}$) & FU (2448)& 16.638 $\pm$ 0.006 & -3.445 $\pm$ 0.011 &0.19 & 19.05 $\pm$0.05 & 19.01 $ \pm$ 0.14\
W($V,K_{\rm S}$) & FU (2295)& 16.507 $\pm$ 0.005 & -3.461 $\pm$ 0.011 &0.15 & 18.95 $\pm$0.05 & 19.00 $ \pm$ 0.11\
W($V,H$) & FU (2285)& 16.426 $\pm$ 0.005 & -3.475 $\pm$ 0.010 &0.15 & 18.88 $\pm$0.05 & 19.00 $ \pm$ 0.10\
W($V,J$) & FU (2286)& 16.614 $\pm$ 0.005 & -3.427 $\pm$ 0.011 &0.16 & 19.00 $\pm$0.05 & 18.98 $ \pm$ 0.14\
W($I,K_{\rm S}$) & FU (2294)& 16.511 $\pm$ 0.005 & -3.464 $\pm$ 0.011 &0.16 & 18.95 $\pm$0.05 & 19.00 $ \pm$ 0.10\
W($I,H$) & FU (2202)& 16.417 $\pm$ 0.005 & -3.480 $\pm$ 0.011 &0.15 & 18.87 $\pm$0.05 & 19.00 $ \pm$ 0.10\
W($I,J$) & FU (2279)& 16.662 $\pm$ 0.006 & -3.424 $\pm$ 0.013 &0.18 & 19.01 $\pm$0.05 & 18.92 $ \pm$ 0.14\
W($V,I$) & FU (2260)& 16.482 $\pm$ 0.005 & -3.449 $\pm$ 0.010 &0.13 & 18.95 $\pm$0.05 & 19.03 $ \pm$ 0.12\
\
MEAN (FU) &&&&& 18.93$\pm$0.02 & 18.99$\pm$0.03\
&&&&&&\
W($J,K_{\rm S}$)& FO (1461) & 15.947 $\pm$ 0.005 &-3.651 $\pm$ 0.022 &0.16 & 19.06 $\pm$0.08 & 19.02 $ \pm$ 0.04\
W($J,H$)& FO (1473) & 15.778 $\pm$ 0.006 &-3.722 $\pm$ 0.023 &0.17 & 19.17 $\pm$0.08 & 19.05 $ \pm$ 0.03\
W($H,K_{\rm S}$)& FO (1456) & 16.069 $\pm$ 0.007 &-3.579 $\pm$ 0.027 &0.19 & 19.00 $\pm$0.08 & 19.01 $ \pm$ 0.04\
W($V,K_{\rm S}$)& FO (1472) & 15.992 $\pm$ 0.005 &-3.624 $\pm$ 0.021 &0.16 & 19.09 $\pm$0.08 & 19.02 $ \pm$ 0.04\
W($V,H$)& FO (1482) & 15.937 $\pm$ 0.005 &-3.660 $\pm$ 0.020 &0.15 & 19.16 $\pm$0.08 & 19.03 $ \pm$ 0.05\
W($V,J$)& FO (1494)& 16.074 $\pm$ 0.006 &-3.578 $\pm$ 0.023 &0.18 & 19.17 $\pm$0.08 & 19.02 $ \pm$ 0.04\
W($I,K_{\rm S}$)& FO (1471) & 15.990 $\pm$ 0.005 &-3.630 $\pm$ 0.020 &0.16 & 19.09 $\pm$0.08 & 19.02 $ \pm$ 0.05\
W($I,H$)& FO (1477) & 15.932 $\pm$ 0.005 &-3.667 $\pm$ 0.020 &0.16 & 19.17 $\pm$0.08 & 19.02 $ \pm$ 0.04\
W($I,J$)& FO (1499) & 16.113 $\pm$ 0.007 &-3.595 $\pm$ 0.027 &0.20 & 19.17 $\pm$0.08 & 18.00 $ \pm$ 0.05\
W($V,I$)& FO (1465) & 15.958 $\pm$ 0.005 &-3.599 $\pm$ 0.019 &0.14 & 19.12 $\pm$0.06 & 19.05 $ \pm$ 0.03\
\
MEAN (FO) &&&&& 19.12$ \pm$ 0.03 & 19.02 $\pm$0.02\
[llrrcll]{}
\[tab\] W($J,K_{\rm S}$)& FU & 0.115 $\pm$ 0.014 & 0.18\
W($J,H$)& FU & 0.169 $\pm$ 0.014 & 0.19\
W($H,K_{\rm S}$)& FU & 0.120 $\pm$ 0.015 & 0.21\
W($V,K_{\rm S}$)& FU & 0.135 $\pm$ 0.014 & 0.17\
W($V,H$)& FU & 0.160 $\pm$ 0.013 &0.17\
W($V,J$)& FU & 0.155 $\pm$ 0.014 &0.18\
W($I,K_{\rm S}$)& FU & 0.139 $\pm$ 0.014 & 0.17\
W($I,H$)& FU & 0.163 $\pm$ 0.014 & 0.18\
W($I,J$)& FU & 0.181 $\pm$ 0.017 & 0.21\
W($V,I$)& FU & 0.122 $\pm$ 0.014 & 0.15\
&&&\
\
W($J,K_{\rm S}$)& FO & 0.180 $\pm$ 0.026 & 0.18\
W($J,H$)& FO & 0.215 $\pm$ 0.027 & 0.19\
W($H,K_{\rm S}$)& FO & 0.154 $\pm$ 0.032 &0.21\
W($V,K_{\rm S}$)& FO & 0.172 $\pm$ 0.024 &0.18\
W($V,H$)& FO & 0.175 $\pm$ 0.023 & 0.17\
W($V,J$)& FO & 0.143 $\pm$ 0.027 &0.21\
W($I,K_{\rm S}$)& FO & 0.182 $\pm$ 0.024 &0.18\
W($I,H$)& FO & 0.178 $\pm$ 0.023 &0.18\
W($I,J$)& FO & 0.172 $\pm$ 0.034 & 0.23\
W($V,I$)& FO & 0.139 $\pm$ 0.021 & 0.16\
[llrrrlr]{}
\[tab\]\
&&&&&&\
W($J,K_{\rm S}$)& FU & $\log$ P$\leqslant$0.35 & 16.532 $\pm$ 0.011&-3.751 $\pm$ 0.052&0.16& 1335\
W($J,H$)& FU & $\log$ P$\leqslant$0.35 & 16.282 $\pm$ 0.011&-3.767 $\pm$ 0.053&0.16& 1343\
W($H,K_{\rm S}$)& FU & $\log$ P$\leqslant$0.35 & 16.712 $\pm$ 0.014&-3.720 $\pm$ 0.066&0.20& 1331\
W($J,K_{\rm S}$)& FU & 0.35$<$ $\log$ P$\leqslant$1.65 & 16.395 $\pm$ 0.016&-3.429 $\pm$ 0.026&0.14& 942\
W($J,H$)& FU & 0.35$<$ $\log$ P$\leqslant$1.65 & 16.156 $\pm$ 0.016&-3.490 $\pm$ 0.026&0.14& 938\
W($H,K_{\rm S}$)& FU & 0.35$<$ $\log$ P$\leqslant$1.65 & 16.574 $\pm$ 0.018&-3.392 $\pm$ 0.029&0.16& 936\
W($J,K_{\rm S}$)& FO & $\log$ P$\leqslant$0.35 & 15.953 $\pm$ 0.006&-3.714 $\pm$ 0.032&0.17& 1206\
W($J,H$)& FO & $\log$ P$\leqslant$0.35 & 15.780 $\pm$ 0.005&-3.757 $\pm$ 0.032&0.17& 1213\
W($H,K_{\rm S}$)& FO & $\log$ P$\leqslant$0.35 & 16.071 $\pm$ 0.007&-3.625 $\pm$ 0.041&0.21& 1204\
W($J,K_{\rm S}$) & FO & 0.35$<$ $\log$ P$\leqslant$0.65 & 15.870 $\pm$ 0.055&-3.460 $\pm$ 0.120&0.14& 261\
W($J,H$) & FO & 0.35$<$ $\log$ P$\leqslant$0.65 & 15.697 $\pm$ 0.053&-3.528 $\pm$ 0.116&0.13& 259\
W($H,K_{\rm S}$) & FO & 0.35$<$ $\log$ P$\leqslant$0.65 & 15.996 $\pm$ 0.061&-3.405 $\pm$ 0.132&0.15& 260\
\
\
&&&&&&\
W($J,K_{\rm S}$)& FU & $\log$ P$\leqslant$0.40 & 16.531 $\pm$ 0.010&-3.746 $\pm$ 0.043&0.16& 1460\
W($J,H$)& FU & $\log$ P$\leqslant$0.40 & 16.281 $\pm$ 0.010&-3.770 $\pm$ 0.044&0.16& 1464\
W($H,K_{\rm S}$)& FU & $\log$ P$\leqslant$0.40 & 16.710 $\pm$ 0.013&-3.706 $\pm$ 0.056&0.20& 1457\
W($J,K_{\rm S}$)& FU & 0.40$<$ $\log$ P$\leqslant$1.65 &16.376 $\pm$ 0.019 & -3.403 $\pm$ 0.029 & 0.14 & 816\
W($J,H$)& FU & 0.40$<$ $\log$ P$\leqslant$1.65 & 16.137 $\pm$ 0.019 & -3.464 $\pm$ 0.029 & 0.14 & 816\
W($H,K_{\rm S}$)& FU & 0.40$<$ $\log$ P$\leqslant$1.65 & 16.543 $\pm$ 0.021 & -3.350 $\pm$ 0.032 & 0.15& 814\
W($J,K_{\rm S}$)& FO & $\log$ P$\leqslant$0.40 & 15.950 $\pm$ 0.005 & -3.687 $\pm$ 0.029 & 0.16 & 1277\
W($J,H$)& FO & $\log$ P$\leqslant$0.40 & 15.780 $\pm$ 0.005&-3.748 $\pm$ 0.029&0.16 & 1290\
W($H,K_{\rm S}$)& FO & $\log$ P$\leqslant$0.40 & 16.069 $\pm$ 0.007&-3.599 $\pm$ 0.036 & 0.21& 1278\
W($J,K_{\rm S}$)& FO & 0.40$<$ $\log$ P$\leqslant$0.65 & 15.779 $\pm$ 0.089&-3.282 $\pm$ 0.181 & 0.14& 186\
W($J,H$)& FO & 0.40$<$ $\log$ P$\leqslant$0.65 & 15.570 $\pm$ 0.082&-3.277 $\pm$ 0.167 & 0.13 & 182\
W($H,K_{\rm S}$)& FO & 0.40$<$ $\log$ P$\leqslant$0.65 &15.913 $\pm$ 0.096&-3.244 $\pm$ 0.196 & 0.15& 185\
\
\
&&&&&&\
W($J,K_{\rm S}$)& FU & $\log$ P$\leqslant$0.45 & 16.533 $\pm$ 0.009 & -3.758 $\pm$ 0.038 & 0.16 & 1565\
W($J,H$)& FU & $\log$ P$\leqslant$0.45 & 16.284 $\pm$ 0.010 & -3.787 $\pm$ 0.038 & 0.16 & 1568\
W($H,K_{\rm S}$)& FU & $\log$ P$\leqslant$0.45 & 16.714 $\pm$ 0.012&-3.722 $\pm$ 0.048 & 0.20 & 1563\
W($J,K_{\rm S}$)& FU & 0.45$<$ $\log$ P$\leqslant$1.65 & 16.375 $\pm$ 0.021&-3.401 $\pm$ 0.031& 0.13 & 707\
W($J,H$)& FU & 0.45$<$ $\log$ P$\leqslant$1.65 & 16.138 $\pm$ 0.022&-3.465 $\pm$ 0.032 & 0.14 & 712\
W($H,K_{\rm S}$)& FU & 0.45$<$ $\log$ P$\leqslant$1.65 &16.536 $\pm$ 0.024&-3.339 $\pm$ 0.035 & 0.15 & 711\
W($J,K_{\rm S}$)& FO & $\log$ P$\leqslant$0.45 & 15.950 $\pm$ 0.005&-3.688 $\pm$ 0.026&0.16& 1335\
W($J,H$)& FO & $\log$ P$\leqslant$0.45 & 15.780 $\pm$ 0.005&-3.754 $\pm$ 0.026&0.16& 1348\
W($H,K_{\rm S}$)& FO & $\log$ P$\leqslant$0.45 &16.069 $\pm$ 0.007&-3.600 $\pm$ 0.033&0.20& 1333\
W($J,K_{\rm S}$)& FO & 0.45$<$ $\log$ P$\leqslant$0.65 & 15.831 $\pm$ 0.135&-3.379 $\pm$ 0.261&0.14& 128\
W($J,H$)& FO & 0.45$<$ $\log$ P$\leqslant$0.65 & 15.613 $\pm$ 0.135&-3.369 $\pm$ 0.262&0.14& 128\
W($H,K_{\rm S}$)& FO & 0.45$<$ $\log$ P$\leqslant$0.65 & 15.954 $\pm$ 0.144&-3.321 $\pm$ 0.279&0.15& 127\
\
[llrrrlr]{}
\[tab\] W($J,K_{\rm S}$)& FU & $\log$P$\leqslant$1.0 & 15.884 $\pm$ 0.007 &-3.380 $\pm$ 0.011 &0.07& 1674\
W($J,H$) & FU & $\log$P$\leqslant$1.0 & 15.676 $\pm$ 0.007 &-3.457 $\pm$ 0.012 &0.08 & 1675\
W($H,K_{\rm S}$)& FU & $\log$P$\leqslant$1.0 & 16.039 $\pm$ 0.008 &-3.324 $\pm$ 0.014 &0.10 & 1684\
W($J,K_{\rm S}$)& FU & 1.0$<$ $\log$ P$\leqslant$1.65 & 15.950 $\pm$ 0.071 &-3.413 $\pm$ 0.056 &0.08& 69\
W($J,H$)& FU & 1.0$<$ $\log$ P$\leqslant$1.65 & 15.778 $\pm$ 0.084 &-3.419 $\pm$ 0.067 & 0.10& 68\
W($H,K_{\rm S}$)& FU & 1.0$<$ $\log$ P$\leqslant$1.65 & 16.107 $\pm$ 0.075 &-3.437 $\pm$ 0.060 &0.09& 69\
[clrrlll]{}
\[tab\] W($J,K_{\rm S}$) & FU & 0.55 $\pm$ 0.06 & 0.47 $\pm$ 0.06 & …& 0.3 & 1.0\
” & FU & 0.53 $\pm$ 0.11 & 0.48 $\pm$ 0.13 & 0.35 & 0.3 &1.0\
” & FU & 0.54 $\pm$ 0.12 & 0.44 $\pm$ 0.13 & 0.40 & 0.3 & 1.0\
” & FU & 0.54 $\pm$ 0.09 & 0.48 $\pm$ 0.13 & 0.45 & 0.3 &1.0\
\
W($J,H$) & FU & 0.55 $\pm$ 0.05 & 0.42 $\pm$ 0.06 & …& 0.3 & 1.0\
” & FU & 0.51 $\pm$ 0.11 & 0.31 $\pm$ 0.16 & 0.35 & 0.3 &1.0\
” & FU & 0.51 $\pm$ 0.10 & 0.31 $\pm$ 0.16 & 0.40 & 0.3 & 1.0\
” & FU & 0.50 $\pm$ 0.10 & 0.31 $\pm$ 0.16 & 0.45 & 0.3 &1.0\
\
W($H,K_{\rm S}$) & FU & 0.56 $\pm$ 0.07 & 0.50 $\pm$ 0.08 & …& 0.3 & 1.0\
” & FU & 0.56 $\pm$ 0.12 & 0.52 $\pm$ 0.16 & 0.35 & 0.3 &1.0\
” & FU & 0.56 $\pm$ 0.14 & 0.51 $\pm$ 0.15 & 0.40 & 0.3 & 1.0\
” & FU & 0.56 $\pm$ 0.13 & 0.52 $\pm$ 0.17 & 0.45 & 0.3 &1.0\
[llrlccc]{}
\[tab\] W($J,K_{\rm S}$) & FU (229) & -2.65 $\pm$ 0.02 & -3.34 $\pm$ 0.03 & 0.10 & MW & N12\
W($J,K_{\rm S}$) & FU (70) & -2.52 $\pm$ 0.12 & -3.44 $\pm$ 0.09 & 0.23 & MW & S11a\
W($V,I$) & FU (70) & -2.70 $\pm$ 0.15 & -3.26 $\pm$ 0.11 & 0.26 & MW & S11a\
W($V,I$) & FU (10) & -2.48 $\pm$ 0.15 & -3.37 $\pm$ 0.12 & 0.11 & MW & B07\
W($V,K_{\rm S}$) & FU (10) & -2.60 $\pm$ 0.07 & -3.325 $\pm$ 0.014 & 0.08 & LMC &R12\
[^1]: Note that in the comparison of LMC distance moduli, we adopted the estimates that neglect the metallicity dependence.
---
abstract: 'We measure the intrinsic relation between velocity dispersion ($\sigma$) and luminosity ($L$) for massive, luminous red galaxies (LRGs) at redshift $z \sim 0.55$. We achieve unprecedented precision by using a sample of 600,000 galaxies with spectra from the Baryon Oscillation Spectroscopic Survey (BOSS) of the third Sloan Digital Sky Survey (SDSS-III), covering a range of stellar masses $M_* \gtrsim 10^{11} M_{\odot}$. We deconvolve the effects of photometric errors, limited spectroscopic signal-to-noise ratio, and red–blue galaxy confusion using a novel hierarchical Bayesian formalism that is generally applicable to any combination of photometric and spectroscopic observables. For an L-$\sigma$ relation of the form $L \propto \sigma^{\beta}$, we find $\beta = 7.8 \pm 1.1$ for $\sigma$ corrected to the effective radius, and a very small intrinsic scatter of $s = 0.047 \pm 0.004$ in $\log_{10} \sigma$ at fixed $L$. No significant redshift evolution is found for these parameters. The evolution of the zero-point within the redshift range considered is consistent with the passive evolution of a galaxy population that formed at redshift $z=2-3$, assuming single stellar populations. An analysis of previously reported results seems to indicate that the passively-evolved high-mass L-$\sigma$ relation at $z\sim0.55$ is consistent with the one measured at $z=0.1$. Our results, in combination with those presented in [@MonteroDorta2014], provide a detailed description of the high-mass end of the red sequence (RS) at $z\sim0.55$. This characterization, in light of previous literature, suggests that the high-mass RS distribution corresponds to the “core” elliptical population.'
author:
- |
\
$^1$ Department of Physics and Astronomy, The University of Utah, 115 South 1400 East, Salt Lake City, UT 84112, USA\
$^2$ Steward Observatory, 933 N. Cherry Ave., University of Arizona, Tucson, AZ 85721, USA\
bibliography:
- './paper.bib'
date: 'Accepted —. Received —;in original form —'
title: 'A Steep Slope and Small Scatter for the High-Mass End of the L-$\sigma$ Relation at $z\sim0.55$'
---
surveys - galaxies: evolution - galaxies: kinematics and dynamics - galaxies: statistics - methods: analytical - methods: statistical
Introduction {#sec:intro}
============
In the 1960s and 1970s, several empirical scaling relations between the kinematic and photometric properties of early-type galaxies (ETGs) were identified. Thanks to the seminal work of [@Djorgovski1987] and [@Dressler1987], today we know that these relations are different projections of the so-called *fundamental plane* (FP) of ETGs, which is a thin plane outlined by the occupation of ETGs in the three-dimensional space spanned by velocity dispersion, effective radius and surface brightness, i.e., $\log_{10} \sigma$, $\log_{10} R_e$, and $\log_{10} \langle I \rangle_e$, respectively.
Of particular interest among these scaling relations is the L-$\sigma$ relation, a specific two-dimensional projection of the FP that relates the luminosity $L$ and the central stellar velocity dispersion $\sigma$ of ETGs. This relation was first reported by [@Minkowski1962], using a sample of only 13 ETGs, and by [@Morton1973] a decade later, although no quantification was provided. The relation was first quantified in a sample of 25 galaxies by [@FJR], as a power law in the form $L \propto \sigma^4$, and has been commonly called the *Faber-Jackson Relation* (F-J relation) since then. As a link between a distance-independent quantity $\sigma$ and an intrinsic property $L$, the F-J relation immediately became a useful distance estimator and consequently a cosmological probe [e.g., @deVaucouleurs1982a; @deVaucouleurs1982b; @Pature1992].
With the emergence of large-scale structure (LSS) galaxy surveys in the last decades, the size of the ETG samples available increased dramatically, especially at low redshift, i.e. $z\sim0.1$ (mostly with the Sloan Digital Sky Survey, SDSS, @York2000). It was then confirmed with statistical significance that the L-$\sigma$ relation at intermediate masses/luminosities approximately follows a canonical F-J relation with a slope of $\sim4$ (note that the words “slope" and “exponent" are commonly used interchangeably, as the relation is often expressed in the form $M = a + b \log_{10} \sigma$, where $M$ is the absolute magnitude). This result is reported in, e.g., [@Bernardi2003b] and [@Desroches2007], along with a typical scatter in $\log_{10} \sigma$ of $\sim0.1$ dex.
Almost since the very discovery of the L-$\sigma$ relation, however, it became clear that the slope depended on the luminosity range of the sample under analysis. A value of $\gtrsim 4.5$ has been measured in luminous ETGs [e.g., @Schechter1980; @Malumuth1981; @Cappellari2013_XX; @Kormendy2013] and $\sim 2$ in faint ETGs [e.g., @Tonry1981; @Davies1983; @Held1992; @deRijcke2005; @Matkovic2007]. A confirmation of the curvature of the L-$\sigma$ relation towards the high-mass end with high statistical significance has been possible thanks to the SDSS (see @Desroches2007 [@Hyde2009a; @NigocheNetro2010; @Bernardi2011]). The $z\sim0.1$ sample of @Bernardi2011, as an example, contains $\sim18,000$ ETGs with $\log_{10} M_* \gtrsim 11.2/11.5$. As the F-J relation, where $L \propto \sigma^4$, is therefore a particular, canonical case that only appears to hold for morphologically selected samples at intermediate mass ranges, in the remainder of this paper we adopt the generic “L-$\sigma$ relation" terminology.
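Because absolute magnitude is $M = \mathrm{const} - 2.5\log_{10} L$, a power law $L \propto \sigma^{\beta}$ is equivalent to the linear form $M = M_0 - 2.5\,\beta\,\log_{10}\sigma$, so the canonical $\beta = 4$ corresponds to a magnitude slope of $-10$. A trivial sketch of the conversion:

```python
def mag_slope_from_beta(beta):
    # L ∝ sigma**beta  =>  dM / dlog10(sigma) = -2.5 * beta
    return -2.5 * beta

# Canonical Faber-Jackson slope, and the steeper high-mass slope found here
canonical = mag_slope_from_beta(4.0)   # -10.0
steep = mag_slope_from_beta(7.8)       # -19.5
```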
The curvature of the L-$\sigma$ relation is a consequence of the curvature of the FP itself. In fact, the phenomenology of the FP has turned out to be rather complex. Enough evidence has been gathered that its characteristics depend, not only on luminosity/stellar mass but, to a greater or lesser degree, on a variety of other galaxy properties. In addition, the sensitivity of the measurements to selection effects and low-number statistics has often led to contradictory results. In terms of redshift evolution, it appears clear that the zero-point of the FP must evolve from $z\sim1$ in a way that approximates that of a passively-evolving galaxy population [e.g., @Bernardi2003c; @Jorgensen2006]. However, there are some indications that the FP might be significantly steeper at higher redshifts [@Jorgensen2006; @Fritz2009], whereas for the intrinsic scatter, conclusions are so far somewhat contradictory (e.g., @NigocheNetro2011 measure a decrease in the scatter of the L-$\sigma$ relation, whereas results from @Shu2012 appear to indicate the opposite). It seems that the properties of the FP also depend, to some extent, on the wavelength range probed, which is an indication that stellar population properties have an impact on the FP. While these properties appear to remain fairly unchanged in the optical [@Bernardi2003c; @Hyde2009b], there are clear indications that the FP is intrinsically different in the infrared [e.g., @Pahre1998; @LaBarbera2010; @Magoulas2012]. The dependence of the FP on environment has also been the subject of a number of works (see e.g. @Treu2001 [@Bernardi2003c; @Reda2004; @Reda2005; @Denicolo2005; @LaBarbera2010], for the FP; @Focardi2012 for the L-$\sigma$ relation). This is not by any means a complete list; a number of other galaxy properties have been investigated in recent years.
While the use of the FP and its projections as a cosmological probe has been largely eclipsed by other techniques, the FP has gained increasing attention in the last decades as a source of observational constraints for galaxy formation and evolution theories. In this sense, the variation and evolution trends with respect to galaxy properties in the slope and intrinsic scatter of the FP and its projections are believed to be the imprints of non-homological physical processes that occur during the formation and evolution of galaxies. Understanding these observed trends provides crucial insight into these fundamental processes. Accordingly, recent cosmological simulations have investigated the impacts of various physical processes such as major/minor mergers and disc instabilities on shaping the internal structure and kinematic properties of ETGs [e.g., @Oser2012; @Shankar2013; @Posti2014; @Porter2014].
In this paper, we use data from the Baryon Oscillation Spectroscopic Survey (BOSS, @Dawson2013) of the SDSS-III [@Eisenstein2011] to measure, for the first time, the high-mass end of the L-$\sigma$ relation at $0.5<z<0.7$. This is a continuation of the work presented in [@MonteroDorta2014], hereafter MD2014, where the intrinsic colour-colour-magnitude red sequence (RS) distribution is deconvolved from photometric errors and selection effects in order to compute the evolution of the RS luminosity function (LF). An important conclusion of MD2014 is that, at fixed apparent magnitude, and for a narrow redshift slice, the RS is an extremely narrow distribution ($<0.05$ mag), consistent with a single point in the optical colour-colour plane. This work is intended to measure the L-$\sigma$ relation that this photometrically distinct population obeys. At intermediate redshifts, the BOSS capability to characterize the massive RS population is unrivaled, with a huge sample of more than 1 million luminous red galaxies (LRGs) with stellar masses $M_* \gtrsim 10^{11} M_{\odot}$. No other previous survey or sample has been able to probe this population with comparable statistics, hence the unique value of the measurements reported here (see @Shu2012 for a preliminary BOSS analysis in which incompleteness is only partially addressed). Importantly, our intrinsic RS was identified in MD2014 using exclusively photometric information, i.e. our red-blue deconvolution is based on the phenomenology of the colour-colour plane, and not on any morphological classification. We will, therefore, use the “RS" terminology instead of the “ETG” terminology when referring to our sample/results.
The mass range covered by BOSS has been hard to probe at $z\sim0.55$. In fact, most of the information that we have about the high-mass end of the L-$\sigma$ relation comes from the SDSS at $z\sim0.1$ or from small samples of very-nearby ETGs, using high-resolution observations (the latter being strongly affected by low-number statistics and selection effects). One of the most important discoveries from these studies is that the slope of the L-$\sigma$ relation is steeper at higher masses/luminosities, as mentioned above. In recent years, a picture that attempts to explain this mass dependence has emerged. The curvature in the scaling relations has been associated with a characteristic stellar mass scale of $\sim 2 \times 10^{11} M_{\odot}$. This scale, which was first reported at high significance by [@Bernardi2011] with the SDSS, is thought to be related to a change in the assembly history of galaxies (recently, @Bernardi2014 have shown that this scale is also special for the late-type galaxy population).
High-resolution imaging has shown that the high-mass scale marks the separation between two distinct ETG populations: [*[core ellipticals]{}*]{} are defined by the fact that the central light profile is a shallow power law separated by a break from the outer, steep Sersic function profile, whereas in [*[coreless ellipticals]{}*]{} (also known as “power-law" or “cusp" ellipticals) this feature is not present (see e.g. @Lauer2007a [@Lauer2007b]). It has been shown in small samples that core ellipticals dominate at the high-mass end, while coreless ellipticals are predominant at lower masses [@Kormendy1996; @Faber1997; @Hyde2008; @Cappellari2013_XX; @Kormendy2013]. Importantly, this bimodality in the central surface brightness profile extends to a variety of other properties. The distinct characteristics of each of these types, including the fact that core ellipticals obey an L-$\sigma$ relation with a significantly steeper slope [@Lauer2007a; @Kormendy2013], have been associated with 2 different evolutionary paths for these objects. Core ellipticals are thought to be formed through major dissipation-less mergers [@Desroches2007; @vonderLinden2007; @Hyde2008; @Lauer2007b; @Bernardi2011; @Cappellari2013_XX; @Kormendy2013], whereas coreless ellipticals might have undergone more recent episodes of star formation (@Kormendy2009 review evidence that they are formed in wet mergers with starbursts).
To complement the work done in MD2014, here we present a novel method to combine photometric and spectroscopic quantities in low signal-to-noise (SN) large-photometric-error samples that we call [*[Photometric Deconvolution of Spectroscopic Observables]{}*]{} (hereafter, PDSO). The PDSO is a hierarchical Bayesian statistical method that allows us to combine the velocity dispersion likelihood function measurements of [@Shu2012] with the photometric red/blue deconvolution of MD2014 to provide the most precise measurement ever performed of the high-mass end of the intrinsic L-$\sigma$ relation within the redshift range $0.5<z<0.7$.
This paper is organized as follows. Section \[sec:overview\] provides an overview of methods and motivations. In Section \[sec:data\] we briefly describe the target selection for the galaxy sample that we use, the BOSS CMASS sample (\[sec:cmass\]), and the computation of stellar velocity dispersion likelihood functions from [@Shu2012] (\[sec:vdisp\]). Section \[sec:intrinsic\] is devoted to summarizing the results of MD2014 regarding the intrinsic RS colour-colour-magnitude distribution. In Section \[sec:formalism0\], we present our PDSO method in a general form (\[sec:formalism\]) and address the application of our method to the BOSS CMASS sample (\[sec:application\]). In Section \[sec:aperture\], we describe our aperture correction procedure. In Section \[sec:results\] we present the best-fit parameters for the $\sigma$ – apparent magnitude relation (\[sec:parameters\]), discuss the effect of addressing completeness and the red/blue population deconvolution (\[sec:effect\]) and present our L-$\sigma$ relation results (\[sec:F-J relation\]). In Section \[sec:discussion\], we compare our measurements with previous results from the literature (\[sec:comparison\]) and discuss the physical implications of our measurements (\[sec:interpretation\]). Finally, we summarize our main conclusions and discuss future applications in Section \[sec:conclusions\]. Throughout this paper we adopt a cosmology with $\Omega_M=0.274$, $\Omega_\Lambda=0.726$ and $H_0 = 100h$ km s$^{-1}$ Mpc$^{-1}$ with $h=0.70$ (WMAP7, @Komatsu2011), and use AB magnitudes [@OkeGunn1983].
Overview of methods and motivations {#sec:overview}
===================================
The statistical power of BOSS to cover the very-massive RS population is unrivaled at $z\sim 0.55$, with a sample of $\sim1$ million galaxies (in the latest data release, see @Alam2015) with stellar masses $M_* \gtrsim 10^{11} M_{\odot}$ and a median stellar mass of $M_* \simeq 10^{11.3} M_{\odot}$ (as measured by @Maraston2013, assuming a [@Kroupa2001] initial mass function). The samples, however, present significant challenges, including low SN ratio for the spectra, large photometric errors and a selection scheme that allows for a fraction of bluer objects that increases with redshift. The photometric issues are addressed in MD2014, where we photometrically deconvolve the intrinsic red sequence distribution from photometric errors and selection effects. Our red–blue population deconvolution allows us to characterize completeness in the CMASS sample, which is the main LRG sample from BOSS, covering a redshift range $0.4<z<0.7$ (see next section for a complete description of the sample). This characterization allows us to analyze the luminosity function and colour evolution of the LRG population.
The aforementioned analysis in MD2014 is performed within the framework of a Bayesian hierarchical statistical method that is aimed at constraining distributions of galaxy properties, instead of individual-galaxy properties (since an object-by-object approach is discouraged by the characteristics of the BOSS data). The same philosophy is applied in this work to compute the high-mass end of the L-$\sigma$ relation. The main steps of our analysis are:
- Development of a general Bayesian hierarchical statistical method that combines the photometric red-blue deconvolution and selection function from MD2014 with probability density information from a spectroscopic observable. We call this formalism [*[photometric deconvolution of spectroscopic observables]{}*]{} or PDSO. Our method is aimed at constraining the hyper-parameters of a model for the joint pdf of survey galaxies in physical parameter space, by marginalizing over the physical parameter likelihood functions of individual galaxies given the survey data.
- Application of the above formalism to the computation of the best-fit intrinsic L-$\sigma$ relation from BOSS. The idea is to parametrize the intrinsic distribution in L-$\sigma$ space, and use the PDSO method to constrain the parameters of this distribution: the slope, zero-point and the intrinsic scatter of the L-$\sigma$ relation.
- Evaluation of the redshift evolution of the best-fit parameters that define the L-$\sigma$ relation within a suitable redshift range, namely $0.5<z<0.7$.
The results of the above analysis will add important constraints to the evolution of massive RS galaxies. In addition, the PDSO method will lay the foundations for future BOSS studies, where the intrinsic distributions of spectroscopically-derived quantities can be determined. In the broader picture, an important goal of this paper is to complement the detailed characterization of the main statistical properties of the LRG population initiated in MD2014 that will be eventually used, in combination with N-body numerical simulations, to investigate the intrinsic clustering properties of these systems and the halo-galaxy connection in a fully consistent way. The connection between galaxies and halos will be performed by applying the techniques of halo occupation distributions (HOD: e.g., @Berlind2002; @Zehavi2005) and halo abundance matching (HAM: e.g., @Vale2004; @Trujillo2011). These future applications are addressed in more detail in the last section of the paper.
The data {#sec:data}
========
The CMASS sample {#sec:cmass}
----------------
In this work we make use of both spectroscopic and photometric data from the Tenth Data Release of the SDSS (DR10, @Ahn2014), which corresponds to the third data release of the SDSS-III program and the second release that includes BOSS data. We choose the DR10 instead of the recently published Data Release 12 (DR12, @Alam2015), in order to be consistent with the luminosity function results shown in MD2014. The spectroscopic DR10 BOSS sample contains a total of $927,844$ galaxy spectra and $535,995$ quasar spectra (this is a growth of almost a factor two as compared to the SDSS DR9, @Ahn2012). The baseline imaging sample is the final SDSS imaging data set, which contains not only the new SDSS-III imaging but also the previous SDSS-I and II imaging data. This imaging data set was released as part of the DR8 [@Aihara2011]. These imaging programs provide five-band [*[ugriz]{}*]{} imaging over 7600 sq deg in the Northern Galactic Hemisphere and $\sim$ 3100 sq deg in the Southern Galactic Hemisphere. The typical magnitude corresponding to the $50\%$ completeness limit for detection of point sources is $r = 22.5$. The following papers provide comprehensive information about technical aspects of the SDSS survey: [@Fukugita1996] describes the SDSS [*[ugriz]{}*]{} photometric system; [@Gunn1998] and [@Gunn2006] describe the SDSS camera and the SDSS telescope, respectively; [@Smee2013] provides detailed information about the SDSS/BOSS spectrographs.
The catalog that we used to compute the RS LF in MD2014 is the DR10 Large Scale Structure catalog (DR10 LSS). The DR10 LSS, which is basically built from the BOSS spectroscopic catalog and is thoroughly described in [@Anderson2014], contains a small number of galaxies from the SDSS Legacy Survey. The SDSS Legacy Survey includes the SDSS-I survey and a small fraction of the SDSS-II survey. In the previous SDSS programs, the spectra were observed through 3 arcsec fibers, while the aperture size is only 2 arcsec for BOSS. This varying aperture size makes the LSS catalog slightly heterogeneous as far as the velocity-dispersion measurement is concerned. To avoid this problem, we opt to use the spectroscopic catalog, which is very similar to the LSS catalog, and essentially maps the same intrinsic population as that described by the RS LF presented in MD2014. We have also checked that SDSS Legacy galaxies are confined to a very small region of low redshift and high luminosity within the redshift-luminosity space. Importantly, they are mostly found at $z\lesssim0.5$, which is below the redshift range where our LF results are reliable ($0.52 \lesssim z \lesssim 0.65$, see MD2014).
We restrict our analysis to the CMASS (for “Constant MASS") spectroscopic sample of the BOSS spectroscopic catalog. This sample is built within the official SDSS-III pipeline by first applying an imaging pre-selection to ensure that only detections passing a particular set of quality criteria are chosen as targets. Secondly, a set of colour-magnitude cuts is applied to the resulting catalog, intended to select the LRG sample required to effectively measure the BAO within a nominal redshift range $0.43<z<0.70$ (the sample extends slightly beyond these limits). In a similar way, the low-redshift LOWZ sample (not used in this paper) is constructed, covering a nominal redshift range $0.15<z<0.43$. For more information on the BOSS selection refer to [@Eisenstein2011], [@Dawson2013] and [@Reid2015] (also a summary is provided in MD2014). Importantly, the CMASS selection allows for a fraction of $\sim37\%$ of blue cloud objects, as measured by MD2014. The stellar masses for the red population, as measured by [@Maraston2013], are $M_* \gtrsim 10^{11} M_{\odot}$, peaking at $M_* \simeq 10^{11.3} M_{\odot}$, assuming a Kroupa initial stellar mass function [@Kroupa2001]. All colours quoted in this paper are [*[model]{}*]{} colours and all magnitudes [*[cmodel]{}*]{} magnitudes.
The total number of unique CMASS galaxies with a good redshift estimate and with model and cmodel apparent magnitudes and photometric errors in all of the g, r and i bands in the catalog is 549,005. The mean redshift in the sample is $0.532$ and the standard deviation $0.128$; approximately $\sim7.5 \%$ and $\sim4.5 \%$ of galaxies lie below and above the nominal low redshift and high redshift limits, i.e., $z=0.43$ and $z=0.70$, respectively. For a complete discussion on the selection effects affecting the CMASS data, see MD2014.
Velocity dispersion likelihood functions {#sec:vdisp}
----------------------------------------
One of the key ingredients in our L-$\sigma$ relation study is the likelihood function of the central stellar velocity dispersion. The method for determining this likelihood function is described in detail in [@Shu2012]. Here we provide a brief summary. For every galaxy in our sample, the line-of-sight stellar velocity dispersion within the central circular region of radius $1$ arcsec is measured spectroscopically by fitting a linear combination of broadened stellar [*[eigenspectra]{}*]{} to the observed galaxy spectrum (note that the typical seeing for BOSS is $1.5$ arcsec). Thus, for the $i^{th}$ galaxy, the $\chi^2$ of the fit as a function of the trial velocity dispersion is converted into the likelihood function of velocity dispersion with respect to the observational data $d_i$ as $$p (d_i | \log_{10} \sigma) \propto \exp[-\chi^2_i(\log_{10} \sigma)/2].$$ The best-fit velocity dispersion and its uncertainty can then be inferred from the $\chi^2$ function.
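As a rough numerical sketch (not the [@Shu2012] pipeline), the conversion from a $\chi^2(\log_{10}\sigma)$ curve to a normalized likelihood can be written as follows; the grid and the parabolic $\chi^2$ curve are toy stand-ins for the real eigenspectra fits:

```python
import numpy as np

# Trial grid in log10(sigma / km s^-1) and a toy parabolic chi^2 curve
# (stand-in for the chi^2 of the eigenspectra fit; values are illustrative).
log_sigma = np.linspace(1.8, 2.8, 201)
chi2 = 1740.0 + ((log_sigma - 2.35) / 0.06) ** 2

# p(d | log10 sigma) ~ exp(-chi^2 / 2); subtract the minimum for stability
dx = log_sigma[1] - log_sigma[0]
like = np.exp(-0.5 * (chi2 - chi2.min()))
like /= like.sum() * dx                       # normalize over the grid

best = log_sigma[np.argmax(like)]             # best-fit log10 sigma
mean = np.sum(log_sigma * like) * dx
sd = np.sqrt(np.sum((log_sigma - mean) ** 2 * like) * dx)
```

The point is that `best` and `sd` are derived summaries; the full normalized `like` array is what enters the hierarchical analysis below.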
Figure \[fig:vdisp\_likelihood\] illustrates the type of velocity dispersion data that we have in BOSS. In the bottom panel, the best-fit central velocity dispersion and its uncertainty are shown in a scatter plot for all red CMASS galaxies within the redshift slice $z=0.55 \pm 0.005$. Here, a simple colour cut $g-i > 2.35$ is used to approximately remove blue galaxies in observed space (following @Masters2011). This is just illustrative, as in MD2014 we demonstrate that a simple colour cut is not efficient in terms of isolating the intrinsic RS, as large photometric errors scatter objects in and out of the colour demarcation. At $z=0.55$ the colour cut removes $20\%$ of the sample. The fraction of intrinsically non-RS objects in the CMASS sample at the same redshift according to the population deconvolution of MD2014 is $38\%$.
The two quantities shown in Figure \[fig:vdisp\_likelihood\] are inferred from the velocity-dispersion likelihood functions. Figure \[fig:vdisp\_likelihood\] shows that the distribution is centered around $\log_{10} \sigma \simeq 2.35$, or 220 km/s, and $\Delta \log_{10} \sigma \simeq 0.06$ dex, or 30 km/s. More than two-thirds of the subsample have velocity dispersions between $120$ km/s and $300$ km/s with $20-50$ km/s uncertainties. The top panel displays an example of a typical $\chi^2$ function of a galaxy with both velocity dispersion and uncertainty consistent with the aforementioned central values.
![ \[fig:vdisp\_likelihood\] An illustration of the BOSS velocity dispersion data. *Top panel*: a typical $\chi^2 (\sigma)$ curve as a function of the trial $\sigma$ (D.O.F.=1736). This particular galaxy within the redshift range $z=0.55 \pm 0.005$ is chosen to have best-fit velocity dispersion and uncertainty matching the densest region shown in the bottom panel plot. *Bottom panel*: scatter plot of the best-fit velocity dispersion in logarithmic scale and its uncertainty (also in logarithmic scale, black dots) for red CMASS galaxies ($g-i > 2.35$) within the redshift range $z=0.55 \pm 0.005$. Contours that enclose $30\%$, $67\%$, $95\%$, and $99.7\%$ of this subsample are overplotted in red.](figures/vdisp_likelihood_v4.eps){width="8.5cm"}
As the broad shape of the $\chi^2$ function displayed in the top panel of Figure \[fig:vdisp\_likelihood\] indicates (and as also emphasized in @Shu2012), a point estimate for the velocity dispersion is only partially informative, even after a dedicated treatment is adopted for the new $\chi^2$ calculation. Note that an appreciable fraction of this subsample has best-fit velocity dispersions equal to $0$ km/s due to the limitations of fitting low SN spectra. Using the likelihood function propagates all information to the higher-level analysis of the entire population. That is one of the main motivations for employing a hierarchical Bayesian statistical approach in this paper.
Bimodality in the colour-colour plane: The intrinsic Red Sequence distribution {#sec:intrinsic}
==============================================================================
A key element in the computation of the intrinsic L-$\sigma$ relation for the RS from the CMASS sample is the underlying intrinsic RS magnitude and colour distribution. This aspect is thoroughly addressed in MD2014, where we present an analytical method for deconvolving the observed (g-r) colour - (r-i) colour - (i)-band magnitude CMASS distributions from the blurring effect produced by photometric errors and selection cuts. The CMASS sample comprises a considerable fraction of bluer objects that can scatter into the red side of the colour-colour plane due to photometric errors. In MD2014, this aspect is treated by modeling the BC and the RS separately (red/blue deconvolution), which allows us to correct the RS intrinsic distribution for the contamination caused by BC objects. Importantly, this modeling is not performed on the basis of previous assumptions about “blue" or “red" objects based on stellar population synthesis models, but is intended to describe the bimodality found in the colour-colour plane. With this consideration, these components present the following characteristics:
- [*[Red Sequence (RS)]{}*]{}: The RS is so narrow that it is consistent, within the errors, at fixed magnitude and for a narrow redshift slice, with a delta function in the colour-colour plane (width $<0.05$ mag), with only a shallow colour-magnitude relation shifting the location of this point. The results reported in this paper regarding the L-$\sigma$ relation are based on this intrinsic distribution.
- [*[Blue Cloud (BC)]{}*]{}: The BC is defined as a background distribution that contains [*[everything not belonging to the RS]{}*]{} and is well described by a more extended 2-D Gaussian in the colour-colour plane. The RS is superimposed upon the BC, which extends through the red side of the colour-colour plane. Again, the name “BC" is not meant to imply that this distribution contains only blue, young objects; the BC is a spectroscopically and photometrically heterogeneous population to which other types of ETGs can belong, such as dusty ellipticals not belonging to the narrow RS.
The intrinsic distribution of magnitudes for the RS is the key ingredient in the computation of the RS L-$\sigma$ relation, and this is provided in MD2014 in the form of the RS LF. For the sake of convenience, and given the very narrow redshift slices that we consider, we will constrain the $\log_{10} \sigma - m_i$ (apparent magnitude in the i band) relation (note that this is not the L-$\sigma$ relation, which involves absolute magnitudes). As we show in the following section, for constraining this relation, it suffices to know the shape of the intrinsic distribution of apparent magnitudes, which is given by a Schechter function of the form:
$$\begin{aligned}
\displaystyle
n_{sch}(m_i;\phi_*,m_*,\alpha) = 0.4 \ln(10)\, \phi_* \left[10^{0.4(m_*-m_i)(\alpha+1)}\right] \times \nonumber \\
\exp\left(-10^{0.4(m_*-m_i)}\right)
\label{eq:schechter}\end{aligned}$$
where $\alpha = -1$ and $\phi_* = 1$, for the sake of simplicity. The assumption that $\alpha = -1$ is dictated by the narrow magnitude range, which prohibits fitting for $\alpha$. The intrinsic RS i-band magnitude distribution, as a function of redshift, can be obtained by inserting the following linear relation for the redshift evolution of the characteristic magnitude $m_*$ into Equation \[eq:schechter\]:
$$\begin{aligned}
\displaystyle
m_*^{RS} (z) = (4.425\pm0.125)~(z-0.55) + (20.370\pm0.007)
\label{eq:linear_fits_rs}\end{aligned}$$
Similarly, for the BC:
$$\begin{aligned}
\displaystyle
m_*^{BC} (z) = (4.011\pm0.178)~(z-0.55) + (20.730\pm0.010)
\label{eq:linear_fits_bc}\end{aligned}$$
All the relations provided in this section come from the analysis presented in MD2014, although they are not explicitly reported there. By using these linear relations instead of the best fit values at each redshift we avoid introducing unnecessary noise into the analysis. These relations are obtained within the redshift range $0.525 < z < 0.65$, where selection effects are less severe. Note that the BC LF was not reported in MD2014 due to the extreme incompleteness affecting the BC in the CMASS sample. Instead, it is used as a means of accounting for the contribution of the BC in the sample and of subtracting this contribution from the RS LF. Following the same strategy, we compute the $\log_{10} \sigma - m_i$ relation for the BC (and later the BC L-$\sigma$ relation) only as a means of subtracting the contribution of the BC to the RS $\log_{10} \sigma - m_i$ relation.
The last ingredient in the computation of the RS $\log_{10} \sigma - m_i$ relation, as we show in the next section, is the fraction of BC objects in the CMASS sample, i.e., $f_{blue}$, which is also provided in MD2014. Again, to avoid introducing noise in the computation, we use the following linear relation for the redshift dependence of $f_{blue}$:
$$\begin{aligned}
\displaystyle
f_{blue} (z) = (0.890\pm0.051)~(z-0.55) + (0.381\pm0.003)
\label{eq:linear_fits_bc2}\end{aligned}$$
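As a minimal sketch, the intrinsic number counts of Equation \[eq:schechter\] with $\alpha=-1$ and $\phi_*=1$, together with the linear fits of Equations \[eq:linear\_fits\_rs\], \[eq:linear\_fits\_bc\] and \[eq:linear\_fits\_bc2\] (central values only, uncertainties dropped), can be coded as:

```python
import numpy as np

def n_schechter(m_i, m_star, alpha=-1.0, phi_star=1.0):
    """Schechter number counts in apparent magnitudes (Eq. eq:schechter)."""
    x = 10.0 ** (0.4 * (m_star - m_i))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

def m_star_rs(z):   # characteristic magnitude of the Red Sequence
    return 4.425 * (z - 0.55) + 20.370

def m_star_bc(z):   # characteristic magnitude of the Blue Cloud
    return 4.011 * (z - 0.55) + 20.730

def f_blue(z):      # fraction of BC objects in the CMASS sample
    return 0.890 * (z - 0.55) + 0.381

# Example: intrinsic RS counts at z = 0.55, evaluated at m_i = m_*
n = n_schechter(20.370, m_star_rs(0.55))
```

Only the shape of these counts matters for what follows, since any normalization cancels in the deconvolution.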
PDSO: Formalism for the photometric deconvolution of spectroscopic observables {#sec:formalism0}
==============================================================================
The method {#sec:formalism}
----------
The L-$\sigma$ relation is a relation between a spectroscopic observable, the stellar velocity dispersion $\sigma$, and a photometric observable, the luminosity $L$ (or, in a more practical fashion, the $\log_{10} \sigma$ and the absolute magnitude $M$). As previously mentioned, measuring velocity dispersions from BOSS galaxies is hindered by low SN spectra, which dictates that instead of point measurements we use likelihood functions for the velocity dispersions. The photometric aspect of the L-$\sigma$ relation computation presents challenges as well: as discussed in MD2014, the observed CMASS distribution is strongly affected by photometric errors and selection effects. In this section we present a formalism that combines our likelihood measurements for the velocity dispersions and our red/blue deconvolution of photometric quantities. This formalism, which we call PDSO, can be used for the photometric deconvolution of any spectroscopic observables, so we will present it in a general way.
Let us start by assuming that we have a sample of galaxies indexed by $i$, each of which has a spectroscopic data vector $\mathbf{d}_i$ (in our case, velocity dispersions) and a photometric data vector $\mathbf{c}_i$ (colours and/or magnitudes). These galaxies are selected (i.e., for measuring the spectra) according to some photometric cuts described by $P(\mathbf{c})$, which is the probability (between 0 and 1) of selecting a galaxy with photometric data vector $\mathbf{c}$. $P(\mathbf{c})$ in the case of the BOSS CMASS sample can be straightforwardly derived from the CMASS selection scheme (see @Dawson2013).
Assume that we have quantified the probabilistic mapping from *intrinsic* (noise-free) photometric values $\mathbf{C}$ into observed (noisy) photometric values $\mathbf{c}$, which we denote as $p(\mathbf{c} | \mathbf{C})$. In MD2014 this mapping is obtained by modeling the covariance matrix of magnitudes and colours using SDSS Stripe 82 multi-epoch data.
Assume that these galaxies have been characterized in terms of one or more photometrically distinct population components indexed by $k$, with the results expressed as the intrinsic number density of galaxies per unit $\textbf{C}$ in component $k$, denoted by $n_k(\mathbf{C})$, such that $n_k(\mathbf{C}) \, d\mathbf{C}$ has units of spatial number density. In practice, as we will show below, this quantity can also have units of number. The above characterization can come in the form of the decomposition into RS and BC galaxy colour–luminosity functions. Note that $n_k(\mathbf{C})$ need not be normalized over $\mathbf{C}$, and indeed may be divergent.
As a result of the previous analysis, we have derived the individual probability density functions (PDFs) of each component in observed space within the selected sample, as well as the fraction of sampled objects in each component. To express these in terms of previously introduced quantities, we introduce the following function for notational convenience: $$\Phi_k(\mathbf{c}) = P(\mathbf{c}) \int d\mathbf{C} \, p(\mathbf{c} | \mathbf{C}) \, n_k(\mathbf{C}).$$ This is the number (density) of galaxies in observed photometric space for component $k$.
The number of galaxies in the sample in component $k$ is $$N_k = \int d\mathbf{c} \, \Phi_k(\mathbf{c})$$
The fraction of sample galaxies pertaining to component $k$ is $$f_k = N_k \left/ \sum_{k\prime} N_{k\prime} \right. .$$
Note that $f_k$ would correspond to $f_{blue}$ (and $1-f_{blue}$) within our deconvolution framework. The PDF of galaxies in observed photometric space in component $k$ is $$p_k(\mathbf{c}) = \Phi_k(\mathbf{c}) / N_k .$$
The full PDF of the observed sample in observed photometric space is $$p(\mathbf{c}) = \sum_k f_k \, p_k(\mathbf{c}).$$
A derived function that we will make use of below is the PDF of a sample galaxy in *intrinsic* photometric space given its position in *observed* photometric space, which we can derive by noting that the joint PDF of observed and intrinsic photometric vectors for component $k$ can be written as $$p_k(\mathbf{C},\mathbf{c}) \propto P(\mathbf{c}) \, p(\mathbf{c} | \mathbf{C}) \, n_k(\mathbf{C})$$
so that then $$\begin{aligned}
p_k(\mathbf{C} | \mathbf{c}) &=& {{p_k(\mathbf{C},\mathbf{c})} \over {\int d\mathbf{C} \, p_k(\mathbf{C},\mathbf{c})}} \\
&=& {{P(\mathbf{c}) \, p(\mathbf{c} | \mathbf{C}) \, n_k(\mathbf{C})}
\over {\int d\mathbf{C} \, P(\mathbf{c}) \, p(\mathbf{c} | \mathbf{C}) \, n_k(\mathbf{C})}} \\
&=& {{p(\mathbf{c} | \mathbf{C}) \, n_k(\mathbf{C})}
\over {\int d\mathbf{C} \, p(\mathbf{c} | \mathbf{C}) \, n_k(\mathbf{C})}}\end{aligned}$$
Importantly, $p_k(\mathbf{C} | \mathbf{c})$, which is the function that we will use ultimately, is independent of some of the choices that we make about the intrinsic distributions, $n_k(\mathbf{C})$. In particular, any normalization will cancel out, so we can simply use the intrinsic number counts shown in Equation \[eq:schechter\], where volume is not taken into account and $\phi_* = 1$ by definition.
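As an illustrative one-dimensional sketch of $p_k(\mathbf{C} | \mathbf{c})$, assuming a Gaussian photometric-error kernel for $p(\mathbf{c} | \mathbf{C})$ and placeholder values for the error and $m_*$ (none of these numbers are the MD2014 measurements):

```python
import numpy as np

C_grid = np.linspace(18.0, 23.0, 501)          # intrinsic i-band magnitude grid
dC = C_grid[1] - C_grid[0]

def p_c_given_C(c, C, sig_phot=0.1):
    """Toy Gaussian photometric-error model p(c | C)."""
    return np.exp(-0.5 * ((c - C) / sig_phot) ** 2) / (np.sqrt(2 * np.pi) * sig_phot)

def n_k(C, m_star=20.37):
    """Schechter number counts with alpha = -1 (normalization irrelevant)."""
    x = 10.0 ** (0.4 * (m_star - C))
    return 0.4 * np.log(10.0) * np.exp(-x)

def p_C_given_c(c):
    """Posterior over intrinsic magnitude given an observed one.
    The selection P(c) cancels, as noted in the text."""
    w = p_c_given_C(c, C_grid) * n_k(C_grid)
    return w / (w.sum() * dC)

post = p_C_given_c(20.5)    # normalized posterior on the grid
```

Note that changing the overall normalization of `n_k` leaves `post` unchanged, which is the cancellation stated above.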
Now assume that there is some parameter or vector of parameters $\mathbf{x}$ that can be measured from the spectroscopic data vector $\mathbf{d}_i$ in the sense that we can write down and compute the function $$p(\mathbf{d}_i | \mathbf{x}_i)$$ for each galaxy $i$. In our case, $\mathbf{x}$ is the velocity dispersion $\sigma$, and $p(\mathbf{d}_i | \mathbf{x}_i)$ is proportional to the function $\exp[-\chi^2_i(\log_{10} \sigma)/2]$.
Next we assume a parameterized model for the variation of the spectroscopic observable within the sample populations as a function of photometric values. This is expressed as $$p_k (\mathbf{x} | \mathbf{C} ; \mathbf{t}).$$ The vector $\mathbf{t}$ will denote the “hyperparameters” that describe this PDF. Our goal is to infer the elements of $\mathbf{t}$. To do this, we proceed to express the likelihood function of $\mathbf{t}$ given the spectroscopic data and the photometric data. In our framework, the hyperparameters $\mathbf{t}$ only affect the probabilities of the spectroscopic observables *given* the photometric observations. $$\begin{aligned}
&&\!\!\!\!\! \mathcal{L} (\mathbf{t} | \left\{\mathbf{d}_i\right\}, \left\{\mathbf{c}_i\right\}) = p(\left\{\mathbf{d}_i\right\} | \left\{\mathbf{c}_i\right\}, \mathbf{t}) \\
&=& \prod_i p(\mathbf{d}_i | \mathbf{c}_i, \mathbf{t}) \\
&=& \prod_i \int d\mathbf{x} \, p(\mathbf{d}_i | \mathbf{x}) \, p(\mathbf{x} | \mathbf{c}_i, \mathbf{t}) \\
&=& \prod_i \int d\mathbf{x} \, p(\mathbf{d}_i | \mathbf{x}) \sum_k f_k \, p_k(\mathbf{x} | \mathbf{c}_i, \mathbf{t}) \\
&=& \prod_i \int d\mathbf{x} \, p(\mathbf{d}_i | \mathbf{x}) \sum_k f_k \int d\mathbf{C} \, p_k(\mathbf{x} | \mathbf{C}, \mathbf{t})
\, p_k(\mathbf{C} | \mathbf{c}_i)
\label{eq:likelihood}\end{aligned}$$
At this point, we have arrived at an expression on the right-hand side entirely in terms of quantities that we have introduced above, and we can proceed to map and/or maximize the likelihood function of the hyperparameters $\mathbf{t}$.
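A minimal numerical sketch of this likelihood, assuming a single population component, a Gaussian model for $p(\mathbf{x} | \mathbf{C}; \mathbf{t})$ with a linear mean relation, and toy inputs (the grids, the measured-$\sigma$ likelihood and the magnitude posterior are all illustrative), could look like:

```python
import numpy as np

x_grid = np.linspace(1.8, 2.8, 201)    # log10 sigma grid
C_grid = np.linspace(18.0, 23.0, 251)  # intrinsic magnitude grid
dx, dC = x_grid[1] - x_grid[0], C_grid[1] - C_grid[0]

def gauss(u, mu, s):
    return np.exp(-0.5 * ((u - mu) / s) ** 2) / (np.sqrt(2 * np.pi) * s)

def log_like(t, p_d_given_x, p_C_given_c):
    """ln L(t) for one galaxy; t = (c1, c2, s): zero-point, slope and
    intrinsic scatter of an assumed linear <log10 sigma>(m_i) relation."""
    c1, c2, s = t
    mean = c1 + c2 * (C_grid - 19.0)                # model mean at each C
    # p(x | c, t) = integral dC p(x | C, t) p(C | c)
    p_x = (gauss(x_grid[:, None], mean[None, :], s)
           * p_C_given_c[None, :]).sum(axis=1) * dC
    # integral dx p(d | x) p(x | c, t)
    return np.log((p_d_given_x * p_x).sum() * dx)

# Toy galaxy: log10 sigma measured near 2.35, intrinsic magnitude near 20.5
p_d = gauss(x_grid, 2.35, 0.05)
p_C = gauss(C_grid, 20.5, 0.1)
ll = log_like((2.5, -0.1, 0.08), p_d, p_C)
```

In the real analysis this per-galaxy term is summed in log over all galaxies and all components (weighted by $f_k$), and the result is maximized over $\mathbf{t}$.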
Application to the computation of the L-$\sigma$ relation in BOSS {#sec:application}
-----------------------------------------------------------------
For the sake of convenience, we proceed by applying our formalism to the computation of the $\log_{10} \sigma -m_i$ relation from the CMASS sample. Our spectroscopic observable is therefore the logarithm of the velocity dispersion, i.e. $\log_{10} \sigma$, and the intrinsic distributions ($n_k$) are represented by the Schechter number counts of Equation \[eq:schechter\], with a redshift dependence given by Equations \[eq:linear\_fits\_rs\] and \[eq:linear\_fits\_bc\] for the RS and the BC, respectively. This choice implies that $\mathbf{C}$ and $\mathbf{c}$ correspond to the i-band apparent magnitude, $m_i$, in intrinsic and observed space, respectively.
The parameterized model for the variation of the spectroscopic observable ($\log_{10} \sigma$) within the sample populations as a function of photometric values ($m_i$) encodes the $\log_{10} \sigma-m_i$ relation. Motivated by results from [@Bernardi2003b], we approximate the intrinsic distribution of velocity dispersions at fixed L as a Gaussian distribution in $\log_{10} \sigma$ with mean $<\log_{10} \sigma>$ and intrinsic scatter $s$. For component k this has the form:
$$p_k (\log_{10} \sigma | \mathbf{m_i} ; \mathbf{t_k}) = \frac{1}{\sqrt{2 \pi} s_k} \exp\left[\frac{-(\log_{10} \sigma - <\log_{10} \sigma>_k)^2}{2 s_k^2}\right]
\label{eq:gaussian}$$
The mean of the velocity dispersion for component k, i.e., $<\log_{10} \sigma>_k$, is assumed to follow a linear relation with apparent magnitude, $m_i$, of the form:
$$<\log_{10} \sigma>_k = c_{1,k} + 2.5 + c_{2,k} ( m_i - 19)
\label{eq:F-J relation}$$
This expression, as we will show in following sections, can be easily transformed, within our framework, into the L-$\sigma$ relation, which is expressed in terms of absolute magnitudes.
Aperture Correction {#sec:aperture}
===================
BOSS velocity dispersions are measured within the 2 arcsec diameter aperture of the BOSS fibers. As we move to higher redshift within the CMASS sample, the angular size of the fiber probes progressively larger physical scales. This effect is accounted for [*[a posteriori]{}*]{}, by applying an aperture correction (AC) to the best-fit relations obtained by maximizing the likelihood function of Equation \[eq:likelihood\]. By assuming a de Vaucouleurs profile for the variation of the surface brightness as a function of apparent distance to the center of the galaxy, we have obtained the following relation:
$$\sigma_{obs} / \sigma(<R_e) = 0.98~(R_e/R_{\mathrm{aperture}})^{0.048}
\label{eq:aperture}$$
that relates the observed velocity dispersion $\sigma_{obs}$ that we measure in BOSS, the velocity dispersion averaged within the effective radius, $R_e$, and the effective radius itself, in arcsec. As part of the derivation of the above relation, the blurring produced by an average seeing of 1.8 arcsec has been assumed.
In order to perform a realistic AC we need to take into account the variation of $R_e$ as a function of apparent magnitude, $m_i$, for each redshift slice, which may affect the slope of the $\log_{10} \sigma -m_i$ relation. To this end, we fit a linear relation to the mean observed i-band $\log_{10} R_e$ measured by the pipeline (in arcsec) as a function of $m_i$ [^1]. For a given redshift slice, this relation takes the form:
$$<\log_{10} R_{e}> = a + b (m_i - 19)
\label{eq:r_e}$$
where we have shifted the reference magnitude to $m_i = 19$, similarly to Equation \[eq:F-J relation\], for consistency. Figure \[fig:aperture\_correction2\] displays the observed i-band $\log_{10} R_e$ in arcsec as a function of $m_i$ for the redshift slice centered at $z=0.55$, in contours enclosing 67$\%$, 95$\%$ and 99.7$\%$ of the entire sample, respectively. The squares show the mean values in magnitude bins of $0.1$ mag and the errors the 1-$\sigma$ scatter around the mean. The solid line shows the linear fit to the mean values that we use to correct, on average, our $\log_{10} \sigma -m_i$ relation.
By using Equation \[eq:aperture\], it can be easily demonstrated that the aperture-corrected $c_1$ parameter, i.e. $c_1^{ac}$, is related to parameter $a$ in Equation \[eq:r\_e\] in the following way:
$$c_1^{ac} = c_1 - 0.048 a - \log_{10}(0.98)
\label{eq:m1_corrected}$$
And, similarly, for $c_2^{ac}$:
$$c_2^{ac} = c_2 - 0.048 b
\label{eq:m2_corrected}$$
Within the same redshift slice, more luminous galaxies are larger in size (see Figure \[fig:aperture\_correction2\]), which implies $b<0$. The AC therefore tends, by construction, to steepen the L-$\sigma$ relation.
In order to avoid introducing any extra noise we use a linear fit to the values of $a$ and $b$ as a function of redshift, $a(z)$, $b(z)$. This is shown in Figure \[fig:aperture\_correction1\] for the redshift range of interest, $0.5<z<0.7$. The linear fit that we obtain for $a(z)$ is:
![The observed i-band $\log_{10} R_e$ in arcsec as a function of the i-band apparent magnitude $m_i$ for the redshift slice centered at $z=0.55$, in contours enclosing 67$\%$, 95$\%$ and 99.7$\%$ of the entire subsample, respectively. The squares show the mean values in magnitude bins of $0.1$ mag and the errors the 1-$\sigma$ scatter around the mean. The solid line shows a linear fit to the mean values. []{data-label="fig:aperture_correction2"}](figures/logR_e-m_i_at_z_eq_055_v2.eps)
![The redshift dependence of parameters $b$ (upper panel, squares) and $a$ (lower panel, squares), representing, respectively, the slope and the zero-point of the mean relation between the observed i-band $\log_{10} R_e$ (in arcsec) and the i-band apparent magnitude $m_i$. Errors correspond to uncertainty in the fit associated to each parameter. The solid lines show the linear fits that we use to correct our L-$\sigma$ relation measurements.[]{data-label="fig:aperture_correction1"}](figures/slope_zp_z_v2.eps)
$$a(z) = ( 1.47 \pm 0.16) z+( -0.61\pm 0.09)
\label{eq:a_z}$$
and for $b(z)$:
$$b(z) = ( -0.57\pm 0.32) z+( -0.08\pm 0.19)
\label{eq:b_z}$$
Note that the error on the slope of $b(z)$ is large, so we will consider also the case where $b(z)$ is constant and equal to the mean value within the redshift range $0.52<z<0.65$, i.e. $b(z) =-0.402\pm0.020$.
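As an illustration, the aperture correction of Equations \[eq:m1\_corrected\] and \[eq:m2\_corrected\] can be scripted directly. The sketch below is our own (the function names are hypothetical); it uses the best-fit coefficients of Equations \[eq:a\_z\] and \[eq:b\_z\], and optionally the constant mean $b = -0.402$ discussed above:

```python
import math

def a_z(z):
    """Zero-point of the log10(Re)-m_i relation, Equation [eq:a_z]."""
    return 1.47 * z - 0.61

def b_z(z):
    """Slope of the log10(Re)-m_i relation, Equation [eq:b_z]."""
    return -0.57 * z - 0.08

def aperture_correct(c1, c2, z, b_mean=None):
    """Apply Equations [eq:m1_corrected] and [eq:m2_corrected].

    If b_mean is given, it replaces b(z) (e.g. the constant
    b = -0.402 quoted in the text)."""
    b = b_mean if b_mean is not None else b_z(z)
    c1_ac = c1 - 0.048 * a_z(z) - math.log10(0.98)
    c2_ac = c2 - 0.048 * b
    return c1_ac, c2_ac

# For b = -0.402 the shift on the slope is -0.048 * (-0.402) ~ +0.019:
# c2 moves toward less negative values, which steepens the L-sigma
# slope beta = -0.4 / c2.
```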
A typical value of $b(z) =-0.402$ implies a correction on the slope of the $\log_{10} \sigma$-$m_i$ relation (the $c_2$ parameter) of $\sim0.02$. Within our framework, the slope of the $\log_{10} \sigma$-$m_i$ relation at a given redshift slice coincides with the slope of the L-$\sigma$ relation. As a reference, typical values for this slope are $\sim-0.1$ (or equivalently 4 for the exponent $\beta$ in the form $L \propto \sigma^\beta$; this is the F-J relation). This implies that the AC has a significant effect on the slope of the L-$\sigma$ relation when computed from BOSS. The main idea is that brighter (and hence larger) galaxies have their velocity dispersions corrected by a different factor than fainter (and hence smaller) galaxies, for a given fixed angular aperture, which affects the slope of the L-$\sigma$ relation.
It is interesting to compare the effect of adopting a different aperture correction on the $\log_{10} \sigma -m_i$ relation. Equations \[eq:m1\_corrected\] and \[eq:m2\_corrected\], in combination with the values of parameters $a(z)$ and $b(z)$, dictate the sensitivity of the AC to the exponent in Equation \[eq:aperture\]. By varying this exponent within a reasonable range, we can evaluate the effect on the zero-point and slope of the $\log_{10} \sigma -m_i$ relation. We have chosen a range between $0.04$ (corresponding to the value derived by @Jorgensen1995) and $0.066$ (found by @Cappellari2006). The majority of values adopted in the literature are within this range (e.g. @Mehlert2003 estimate a value of $0.06$).
Using Equation \[eq:m1\_corrected\], and given that the typical variation of $a(z)$ within the redshift range considered is $\sim0.2$ (see Figure \[fig:aperture\_correction1\]), we find that the variation in the AC for the zero-point, $\Delta AC$, within the redshift range considered (note that $a$ is a function of redshift), for an exponent of $0.048$, is $0.0096$ dex, in absolute value. Increasing the exponent of the AC function to $0.066$ would translate into a $38\%$ increase in $\Delta AC$ for the zero-point. On the other hand, decreasing the exponent of the AC function to $0.04$ would result in a $\Delta AC$ that is $17\%$ smaller. Although the net effect is small, these variations can modify the zero-point - redshift trend. In Section \[sec:F-J relation\] we discuss the redshift evolution of the zero-point, concluding that the effect of the uncertainties on the AC on the zero-point is significant, given the narrow redshift range that we probe and the mild evolution that we measure for the zero-point.
With regard to the slope, from Equation \[eq:m2\_corrected\] and Figure \[fig:aperture\_correction1\], as mentioned above, we find that the typical AC on this parameter is $\sim 0.02$ (i.e. $b(z) \times 0.048$), basically independent of redshift. Adopting a range of values for the exponent of the AC correction $0.04-0.066$ would translate into corrections within the range $0.016-0.026$. Although the effect is not negligible, adopting a different AC would not modify the main conclusion of the paper, in terms of the steep slope of the L-$\sigma$ relation, in any qualitative way (see following sections).
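The percentages quoted above follow from the linear dependence of both corrections on the AC exponent. A minimal numerical check (a sketch of our own, assuming the typical variation $\Delta a \approx 0.2$ and the mean $b = -0.402$ quoted above):

```python
DELTA_A = 0.2      # typical variation of a(z) over 0.5 < z < 0.7
B_MEAN = -0.402    # mean slope of the log10(Re)-m_i relation

def zero_point_dAC(exponent):
    """Variation of the zero-point AC across the redshift range, in dex."""
    return abs(exponent * DELTA_A)

def slope_AC(exponent):
    """Magnitude of the aperture correction on the slope c2."""
    return abs(exponent * B_MEAN)

base = zero_point_dAC(0.048)
print(round(base, 4))                              # 0.0096
print(round(zero_point_dAC(0.066) / base - 1, 2))  # 0.38 (38% larger)
print(round(1 - zero_point_dAC(0.040) / base, 2))  # 0.17 (17% smaller)
# Slope corrections for exponents 0.04 and 0.066:
print(round(slope_AC(0.040), 3), round(slope_AC(0.066), 3))
# ~0.016 and ~0.027, i.e. the 0.016-0.026 range quoted above, up to rounding.
```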
Results {#sec:results}
=======
Best-fit parameters for the $\log_{10} \sigma$-$m_i$ relation {#sec:parameters}
-------------------------------------------------------------
By maximizing the likelihood function of Equation \[eq:likelihood\], we obtain the best-fit values for the hyperparameters $\mathbf{t}$. These parameters are the following, for each component $k$ (RS and BC): $c_{1,k}$, the zero-point of the $\log_{10} \sigma$-$m_i$ relation, that corresponds to the mean $\log_{10} \sigma$ at $m_i = 19$; $c_{2,k}$, the slope of this linear relation, and $s_k$, the intrinsic scatter. The optimization of the likelihood function has been performed from $z=0.40$ to $z=0.70$, using a bin size of $\Delta z = 0.01$. This redshift range exceeds the redshift interval where the computation of the RS LF is more reliable, i.e. $0.52 \lesssim z \lesssim 0.65$. Results outside this high-confidence range are obtained by extrapolation of the linear fits to the redshift evolution of the intrinsic distribution and the fraction of blue objects in the sample.
Figure \[fig:best\_fit\] displays the redshift evolution of the 3 best-fit hyperparameters $\mathbf{t}$, for both the RS and the BC. It is important to bear in mind that, due to extreme incompleteness, our BC component is not representative of the entire BC population, so we simply use it as a means of correcting for the contribution of BC objects in the sample. In each panel, this BC component is represented in blue lines/symbols. The aperture-corrected RS parameters are represented in red and the uncorrected RS parameters, in green.

Statistical errors on each parameter are obtained by mapping the likelihood function within the 6-dimensional parameter space and computing the posterior probability distribution for each parameter. This computation yields very small errors, which is consistent with the rapid convergence of the algorithm. While this procedure might not take all the possible sources of error into account, other more realistic options, such as a bootstrap analysis, are computationally challenging. In order to avoid underestimating our uncertainties, we take a conservative approach and, instead of the aforementioned error, we quote, as the final error, the scatter on each parameter (standard deviation) with respect to the best-fit relation with redshift.
In the upper panel of Figure \[fig:best\_fit\], the quantity 2.5 + $c_{1, RS}$ increases linearly with redshift, from a value of $\sim$ 2.43 at $z=0.50$ to $\sim$ 2.53 at $z=0.70$. The aperture correction has very little effect on the zero-point of the $\log_{10} \sigma$-$m_i$ relation. The following linear relation is obtained for the aperture-corrected RS $c_{1}$ (hereafter we drop the $ac$ superindex for simplicity):
$$2.5 + c_{1,RS} (z) = (2.235\pm0.009) + (0.381\pm0.016)~z
\label{eq:m1}$$
At fixed apparent magnitude ($m_i = 19$), we look at progressively more luminous galaxies as we move to higher redshift. The L-$\sigma$ relation implies that more luminous RS galaxies have higher velocity dispersions, which explains the $c_{1,RS}$ - redshift trend. On the other hand, the BC represents a photometrically and spectroscopically heterogeneous population that contains a large fraction of blue, spiral galaxies (or even disky ellipticals) for which we expect smaller velocity dispersions. At fixed absolute magnitude, the BC component is expected, therefore, to have smaller mean $\log_{10} \sigma$ than the RS, which is exactly what the upper panel of Figure \[fig:best\_fit\] shows.
The middle panel of Figure \[fig:best\_fit\] shows the redshift evolution of the slope of the $\log_{10} \sigma$-$m_i$ relation, $c_{2}$. The RS is consistent with a single point in the colour-colour plane, with a shallow colour-magnitude relation that we neglect in the computation of K-corrections. Absolute magnitudes are simply obtained by rescaling the apparent magnitudes at each redshift slice. This result implies that $c_{2,RS}$ coincides with the slope of the L-$\sigma$ relation, as we explicitly show in the next section. Our results for $c_{2,RS}$ indicate little evolution in the slope of the $\log_{10} \sigma$-$m_i$ relation within the redshift range considered. Importantly, the AC, as discussed previously, has a significant effect on $c_{2,RS}$, which changes from $\sim-0.07$ to $\sim-0.05$. The following linear relation with redshift is obtained for the aperture-corrected $c_{2,RS}$:
$$c_{2,RS} (z) = - (0.033\pm0.012) - (0.029\pm0.021)~z
\label{eq:m2}$$
As mentioned above, this dependence of $c_{2,RS}$ with redshift is not significant given the scatter in the data. The $\Delta \chi^2$ with respect to the redshift-independent assumption is only $1.57$ (i.e. a significance of $1.25 \sigma$). The slope in the $\log_{10} \sigma - m_i$ relation that we measure within the redshift range $0.5<z<0.7$ is therefore $-0.070\pm0.006$ before AC and $-0.050\pm0.007$ after. The middle panel of Figure \[fig:best\_fit\] also shows in a black dashed line our results for $c_{2,RS}$ assuming a constant value for the AC parameter $b$ of $b(z) = -0.402$. Neglecting the redshift evolution of $b$ in Figure \[fig:aperture\_correction1\] has little impact on $c_{2,RS}$.
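The quoted significances here and below are consistent with the standard rule for nested models differing by one parameter: $\Delta \chi^2$ is distributed as $\chi^2$ with one degree of freedom, so the Gaussian-equivalent significance is $\sqrt{\Delta \chi^2}$. A minimal check:

```python
import math

def significance(delta_chi2):
    """Gaussian-equivalent significance for a one-parameter Delta chi^2."""
    return math.sqrt(delta_chi2)

print(round(significance(1.57), 2))   # 1.25 sigma (slope evolution)
print(round(significance(1.36), 2))   # 1.17 sigma (scatter evolution)
print(round(significance(10.24), 1))  # 3.2 sigma (full-range passive model)
```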
The bottom panel of Figure \[fig:best\_fit\] displays the redshift evolution of the intrinsic scatter in the $\log_{10} \sigma - m_i$ relation for both the RS and the BC component. This is the intrinsic RMS scatter in $\log_{10} \sigma$, at fixed $m_i$. Note that an “orthogonal” version of our result (i.e., scatter perpendicular to the $\log_{10} \sigma - m_i$ relation) would be essentially the same, given the steep slope that we measure for this relation. The scatter for the RS, $s_{RS}$, increases slightly with redshift, but this trend is not significant given the computed errors (the $\Delta \chi^2$ with respect to the redshift-independent assumption is only $1.36$, i.e., a significance of $1.17 \sigma$). The mean value that we obtain is only $s_{RS} = 0.047\pm0.004$ in $\log_{10} \sigma$. This value for the scatter in the $\log_{10} \sigma - m_i$ relation is very small as compared to the typical value of $\sim0.1$ found at intermediate-mass ranges and low redshift. In Section \[sec:discussion\] we show that this result is in excellent agreement with previous low-z high-mass results.
The hyperparameter $s$ for the BC component is significantly larger, i.e., $\sim 0.12$. Keep in mind that the BC component in our sample is by no means complete, and therefore, the corresponding $s$ value that we measure does not represent the true scatter for the entire BC population. It does provide, however, some indication of the scatter of the BC population relative to the RS population. The value of $\sim 0.12$ is consistent with the intrinsic scatter trends shown by @Shu2012, who, without accounting for the contamination effect caused by the BC objects, find that the overall intrinsic scatter value for the CMASS galaxy sample is $\sim 0.1$, an intermediate value due to the mixture of both RS and BC galaxies. A trend of higher intrinsic scatter at higher redshift is also found by @Shu2012, which can be explained by the fact that the relative fraction of the BC objects in the CMASS sample increases with redshift.
The $\log_{10} \sigma$-$m_i$ relation in a magnitude-$\log_{10} \sigma$ diagram for 4 different redshift slices is explicitly shown in Figure \[fig:effect1\] (Method 1, see next section).
The effect of the various corrections implemented {#sec:effect}
-------------------------------------------------

Our computation of the $\log_{10} \sigma - m_i$ (equivalent to the L-$\sigma$ relation) at $z\sim0.55$ incorporates accurate treatments of various issues affecting the BOSS data. In particular, we correct for incompleteness within the colour-colour-magnitude space, and we separately model the intrinsic RS and BC populations, which allows the statistical removal of all BC objects in the CMASS sample, including those that are predicted to scatter through the red side of the colour-colour plane (see MD2014 for more details). This information is incorporated into our PDSO method, a hierarchical Bayesian statistical framework that allows us to utilize the stellar velocity dispersion likelihood functions of individual objects, instead of the point estimates associated with them. This section is intended to provide a sense of the impact that these corrections have on our $\log_{10} \sigma - m_i$ results.
In order to illustrate the effect of each correction, we have computed the $\log_{10} \sigma - m_i$ relation using the following 4 methods:
- [**[Method 1]{}**]{}: This is our optimized method, including all the corrections mentioned above.
- [**[Method 2]{}**]{}: This uses exactly the same methodology as the one implemented in [@Shu2012], applied to the dataset used in this work. [@Shu2012] developed a hierarchical approach that incorporates velocity dispersion likelihood functions. The main differences with Method 1 are that 1) the whole CMASS sample is used for the computation, without any red-blue deconvolution, and 2) only an approximate completeness correction is applied. For the sake of comparison, here we exclude blue objects by imposing a simple colour cut $g-i > 2.35$ in observed space.
- [**[Method 3]{}**]{}: The observed L-$\sigma$ relation for the red subsample defined using a simple colour cut $g-i > 2.35$. No red-blue deconvolution or completeness correction is applied. Instead of velocity dispersion likelihood functions, point estimates for the velocity dispersion are used.
- [**[Method 4]{}**]{}: Same as Method 3, but for the entire CMASS sample.
### Effect on the slope and the zero-point
In Figure \[fig:effect1\] and Figure \[fig:effect2\] we compare in 4 different redshift slices ($z=0.52, 0.55, 0.60, 0.65$) the $\log_{10} \sigma$-$m_i$ relation computed using the 4 methods presented above. Figure \[fig:effect1\] displays these relations in a $\log_{10} \sigma$ vs. $m_i$ diagram, where the solid contours show the best-fit central velocity dispersions (point estimates) as a function of apparent magnitude once the colour demarcation is applied, and the dashed contours represent the entire sample. Figure \[fig:effect2\] shows the redshift trends for the zero-point and the slope (not aperture-corrected, for simplicity).
With regard to the zero-point, Figure \[fig:effect1\] and especially the upper panel of Figure \[fig:effect2\] indicate that an inadequate or partial removal of blue objects in the sample tends to artificially push this parameter towards smaller values. This is expected, given that the velocity dispersion is obviously smaller in bluer (typically spiral) galaxies (see also the upper panel of Figure \[fig:best\_fit\]). A partial removal of blue objects using a colour cut (Method 2) only palliates this effect slightly. We would still measure a zero-point $10\%$ smaller than that of the intrinsic RS distribution. Part of this difference may also be due to the fact that in Method 2 completeness is only partially addressed, by just applying a rough correction to account for the scatter of objects in and out of different magnitude bins, due to photometric errors. Interestingly, a comparison between the zero-point obtained from Method 2 and Method 3 shows that, for the same sample (both using a colour cut in observed space, no deconvolution), the use of velocity dispersion likelihood functions in the context of a hierarchical Bayesian approach (Method 2) has a minor effect on the measured zero-point.
The effect of corrections on the pre-aperture-corrected slope of the $\log_{10} \sigma - m_i$ relation is less obvious. The lower panel of Figure \[fig:effect2\] shows that the measured slope obtained with the optimized Method 1 is only slightly steeper (larger in this figure) than what we would measure using the observed distribution alone (with a colour cut, i.e., Method 3). These differences would likely be within the errors once the slope is aperture-corrected (see following section). Interestingly, using the method of [@Shu2012], i.e. Method 2, would lead to a pre-aperture-corrected slope much closer to the canonical value of 4 (i.e. $-0.1$ in this figure), $15 - 25 \%$ shallower than the values obtained with Method 1 (note that we would still measure a slope steeper than the canonical value once the aperture correction is applied).
### Effect on the scatter
The use of velocity dispersion likelihood functions in combination with completeness/intrinsic distribution results within a hierarchical Bayesian statistical framework has a tremendous impact on our ability to recover the intrinsic scatter in the $\log_{10} \sigma - m_i$ relation (which coincides with the scatter in the L-$\sigma$ relation). The typical observed scatter in Figure \[fig:effect1\] ranges from $\sim0.1$ at the bright end to $\sim0.16$ at the faint end. For the entire, partially-completeness-corrected observed distribution, [@Shu2012] report a value of $\sim0.1$, which is in agreement with the scatter in the higher-SN regime (within the sample). Our comprehensive analysis allows us to dig even deeper, unveiling the scatter of the intrinsic RS distribution: a tiny $0.05$ dex, as Figure \[fig:best\_fit\] shows.
The difference between the scatter that we measure and the one reported by [@Shu2012] illustrates the importance of the red-blue deconvolution. As shown in the bottom panel of Figure \[fig:best\_fit\], the scatter that we expect for the BC population is significantly larger. This is consistent with the fact that the BC is a much more extended distribution (photometrically and spectroscopically heterogeneous) in the colour-colour plane. Mixing RS and BC objects and not properly correcting for completeness inevitably leads to an increase in the reported intrinsic scatter of the L-$\sigma$ relation.
![ Zero-point (upper panel) and slope (lower panel) as a function of redshift obtained using the 4 different methods explained in the text (results from Method 1, 2, 3, and 4 are shown in red, blue, black and green, respectively).[]{data-label="fig:effect2"}](figures/methods_fjr_v2.eps)
In order to demonstrate that we actually have the statistical power to measure such a small scatter, we have performed a simple Monte Carlo variance-estimation analysis to estimate the resolution that we can expect given the number of objects that we have in a typical redshift bin ($\sim20,000$ objects) and the typical error that we have in our individual velocity dispersion measurements ($\sim0.1$ dex, as Figure \[fig:vdisp\_likelihood\] shows). By simulating the observed distribution of velocity dispersions assuming an intrinsic scatter of $0.047$ dex in $\log_{10} \sigma$ and the aforementioned typical measurement error on $\log_{10}\sigma$, we can evaluate our capacity to recover the intrinsic scatter. Our analysis shows that the typical uncertainty on the intrinsic scatter is of the order of $0.001$ dex, which is more than one order of magnitude smaller than the value that we find for the scatter, i.e. $0.047$ dex in $\log_{10} \sigma$. Note that the error that we estimate for the scatter is $0.004$ dex. This test gives us confidence that our measurement is not an artifact that results from working below our resolution limit.
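A minimal version of this Monte Carlo can be sketched as follows. This is our own sketch, not the exact implementation: it uses the sample size, intrinsic scatter, and measurement error quoted above, convolves the two Gaussian terms, and recovers the intrinsic scatter by subtracting the known error variance from the observed variance:

```python
import math
import random

random.seed(1)

N_OBJ = 20_000   # objects per redshift bin
S_INT = 0.047    # intrinsic scatter in log10(sigma), dex
S_ERR = 0.10     # typical measurement error, dex

def recovered_scatter():
    """Simulate one bin and recover the intrinsic scatter."""
    obs = [random.gauss(0.0, S_INT) + random.gauss(0.0, S_ERR)
           for _ in range(N_OBJ)]
    mean = sum(obs) / N_OBJ
    var_obs = sum((x - mean) ** 2 for x in obs) / (N_OBJ - 1)
    return math.sqrt(var_obs - S_ERR ** 2)

trials = [recovered_scatter() for _ in range(100)]
mean_s = sum(trials) / len(trials)
spread = math.sqrt(sum((s - mean_s) ** 2 for s in trials) / len(trials))
# mean_s comes out close to 0.047 and spread is of order 0.001 dex,
# i.e. the recovery uncertainty is far below the measured scatter.
```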
The L-$\sigma$ relation {#sec:F-J relation}
-----------------------
Translating the best-fit hyperparameters shown in Figure \[fig:best\_fit\] into the standard L-$\sigma$ relation ($\log_{10} \sigma$ as a function of absolute magnitude) for the RS is straightforward, due to the intrinsic characteristics of the RS colour-colour-magnitude distribution. In MD2014 we show that, at a given narrow redshift slice (width $\Delta z = 0.01$), and magnitude bin, the RS intrinsic distribution is consistent with a single point in the colour-colour plane, with only a shallow colour-magnitude relation that shifts this point slightly with L. Under such conditions, the K-correction, independently of the stellar population synthesis model chosen, changes very little within the apparent magnitude range of the CMASS sample, so we can basically assume it to be constant. We can, therefore, convert from apparent magnitudes to absolute magnitudes (K-corrected to $z=0.55$) at a given redshift slice by simply rescaling the apparent magnitude using the standard equation:
$$^{0.55}M_i = m_i - DM(z) - ^{0.55}K_i (z)
\label{eq:absmag}$$
The width of each redshift slice is small enough that the variation of DM within the redshift bin can also be neglected. By substituting Equation \[eq:absmag\] into Equation \[eq:F-J relation\] and rearranging terms (adding and subtracting $M_0 c_{2}$), we arrive at the following expression:
$$<\log_{10} \sigma> = c_{1}^{\prime} + 2.5 + c_{2}^{\prime} \left(^{0.55}M_i - M_0\right)
\label{eq:F-J relation2}$$
where:
$$\begin{aligned}
\displaystyle
c_{1}^{\prime} (z) = c_{1} (z) - c_{2} (z) (19 - DM(z) - ^{0.55}K_i (z) - M_0) \nonumber \\
c_{2}^{\prime} (z) = c_{2} (z)
\label{eq:F-J relation3}\end{aligned}$$
The slope of the L-$\sigma$ relation is independent of whether we use apparent magnitudes or absolute magnitudes, again due to the characteristic shape of the RS in colour-colour-magnitude space. The K-correction as a function of redshift, $^{0.55}K_i (z)$, is computed using a grid of models generated with the Flexible Stellar Population Synthesis code (FSPS, @Conroy2009) in the way described in MD2014. This grid spans a range of plausible stellar population properties for redshift-dependent models within the CMASS redshift range. An average K-correction assuming the colours of the RS is computed at every redshift.
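The conversion in Equations \[eq:absmag\] and \[eq:F-J relation3\] is straightforward to script. The sketch below is illustrative only: it assumes a flat $\Lambda$CDM cosmology with $H_0 = 70$ km/s/Mpc and $\Omega_m = 0.3$ (values not stated in this section) and takes the K-correction as an input, since in practice it comes from the FSPS grid:

```python
import math

H0 = 70.0          # km/s/Mpc (assumed)
OMEGA_M = 0.3      # (assumed)
C_KMS = 299792.458

def comoving_distance(z, n=2000):
    """Comoving distance in Mpc (trapezoidal integration, flat LCDM)."""
    inv_h = lambda zp: C_KMS / (H0 * math.sqrt(
        OMEGA_M * (1 + zp) ** 3 + (1 - OMEGA_M)))
    dz = z / n
    return dz * (0.5 * inv_h(0.0) + 0.5 * inv_h(z)
                 + sum(inv_h(i * dz) for i in range(1, n)))

def distance_modulus(z):
    d_l = (1 + z) * comoving_distance(z)  # luminosity distance, Mpc
    return 5 * math.log10(d_l * 1e5)      # DM = 5 log10(d_L / 10 pc)

def c1_prime(c1, c2, z, k_corr, m0=-23.0):
    """Zero-point of the L-sigma relation, Equation [eq:F-J relation3]."""
    return c1 - c2 * (19.0 - distance_modulus(z) - k_corr - m0)
```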
Figure \[fig:F-J relation\_absmag\] displays parameters $c_{1}^{\prime}$ and $c_{2}^{\prime}$ for the L-$\sigma$ relation as a function of redshift, assuming $^{0.55}M_0 = -23$ (black). Errors are assumed to be equal to the standard deviation of the data for each parameter. The value of $^{0.55}M_0 = -23$ for the reference absolute magnitude has been chosen because it falls within the CMASS magnitude range across the entire redshift range considered (see MD2014).
As Figure \[fig:F-J relation\_absmag\] indicates, the zero-point that we measure for the L-$\sigma$ relation has a slight dependence on redshift, so that larger values are found at higher redshifts. It is, however, a very small effect, of $\sim0.005$ dex within the redshift range considered ($0.5<z<0.7$, $\sim 1.3$ Gyr of cosmic time). The best-fit linear relation that we measure is $2.429\pm0.007 + (0.023\pm0.011)~z$ (black dashed line). This redshift dependence is significant as compared to a best-fit redshift-independent value of $c_{1}^{\prime} = 2.443\pm0.004$ if we consider the whole redshift range ($\Delta \chi^2 = 4.49$, i.e., a significance of $\sim2 \sigma$), but not if we restrict ourselves to the high-confidence redshift range $0.52<z<0.65$ ($\Delta \chi^2 = 1.579$, i.e., a significance of $\sim1.25 \sigma$).
In MD2014 we conclude that the LF evolution of the LRG population at $z\sim0.55$ is consistent with that of a passively-evolving population that fades at a rate of $1.18$ mag per unit redshift. Assuming plausible single-stellar-population models, including both Flexible Stellar Population Synthesis (FSPS, @Conroy2009) and M09 [@Maraston2009] models, such a fading rate, at that redshift, translates into a formation redshift for the LRGs of $z=2-3$. The evolution of the zero-point assuming a best-fit passive model of the aforementioned characteristics is shown in a red dashed line in Figure \[fig:F-J relation\_absmag\]. Here, by best-fit model we mean that the normalization of the zero-point of the passive model is fit to the data points, so only the redshift evolution is meaningful. The deviations found between our fiducial model (the best-fit model that we obtained from our photometric deconvolution formalism) and the best-fit passive model are of the order of $\pm0.004$ dex within the entire redshift range $0.5<z<0.7$. These deviations are again significant ($\Delta \chi^2 = 10.24$, i.e., a significance of $\sim3.2 \sigma$) if we consider the entire redshift range, but that significance is questionable if we restrict ourselves to the high-confidence redshift range ($\Delta \chi^2 = 3.01$, i.e., a significance of $\sim1.73 \sigma$).
![ The aperture-corrected L-$\sigma$ relation parameters as a function of redshift within the redshift range $0.5<z<0.7$. Upper panel: The zero-point or, equivalently, the mean of the $\log_{10} \sigma$ at a K-corrected reference magnitude of $^{0.55}M_i = -23$ (black dots), and the best-fit linear model its redshift evolution (black dashed line). In addition, we show the best-fit passive model that fades at a rate of $1.18$ mag per unit redshift (as measured in MD2014). Lower panel: the slope of the L-$\sigma$ relation, $c_{2}^{\prime}$. Errors are assumed to be equal to the standard deviation with respect to the best-fit model for each parameter. []{data-label="fig:F-J relation_absmag"}](figures/fj_figure2_v5.eps)
The fact that the significance of the discrepancies with the best-fit passive model from MD2014 depends strongly on the redshift range considered, combined with the fact that the effect is so small ($\pm 0.004$ dex), suggests that our results for the zero-point - redshift trend are consistent with the best-fit passive model. This idea is reinforced when we consider the uncertainty on the computation of the AC. A modification of the AC, within reasonable limits, would produce an effect of the same magnitude as the discrepancies that we find between the data and the best-fit passive model. In fact, a variation of the $a(z)$ and $b(z)$ functions involved in the AC, within the reported uncertainties, could account for the discrepancies found. More importantly, the trend shown in Figure \[fig:F-J relation\_absmag\] is sensitive to the exponent of the AC, as discussed in Section \[sec:aperture\]. We have checked that by slightly increasing this exponent we can rapidly reach a much better agreement between the model and the data. A value of $0.06$ (instead of $0.048$), which is within the typical range of values previously used in the literature, would produce almost perfect agreement (a tiny $\Delta \chi^2 = 0.11$ when using the entire redshift range).
In summary, we conclude that our results are consistent with a passive model with formation redshift $z=2-3$, given the small variation of the zero-point that we measure within such a narrow redshift range and the uncertainties that the AC is subject to. This result is in good agreement with a variety of studies on the evolution of the FP, as we discuss in Section \[sec:comparison\]. An improvement in the AC or $R_e$ estimation in BOSS will be necessary to further constrain the evolution of the zero-point. The possibility remains that the discrepancies found may also be partially due to the fading rate of the passive model being too high; a value closer to $\sim0.9$ mag per unit redshift would suffice to make these discrepancies clearly not significant. This, in any case, would not imply a formation redshift much higher than $z=2-3$, given the typical fading-rate evolution of passive models (see MD2014 for a discussion).
Discussion {#sec:discussion}
==========
Comparison with previous results {#sec:comparison}
--------------------------------
In this section we show that our results for the high-mass L-$\sigma$ relation at $z\sim0.55$ are in good agreement with previous findings at low redshift.
The canonical form of the L-$\sigma$ relation for our chosen bandpass is $L_i \propto \sigma^{\beta}$, where $L_i$ represents the i-band luminosity. The slope of this relation, $\beta$, is directly related to the measured slope of the $\log_{10} \sigma$-$M_i$ relation, $c_{2}^{\prime}$, as $$\beta = - \frac{0.4}{c_{2}^{\prime}}.$$ As we have shown, the mean value of $c_{2}^{\prime}$ for the RS, within the high-confidence redshift range, is $-0.070\pm0.006$, before AC is applied. This value implies an L-$\sigma$ relation slope of $\beta=5.71 \pm 0.49$. Applying the constant shift of $+0.019$ to the slope (i.e. $-0.048\,b$, as introduced by the AC according to Equation \[eq:m2\_corrected\]), the aperture-corrected L-$\sigma$ relation becomes even steeper, with a mean slope of $\beta=7.83 \pm 1.11$.
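The propagation from $c_{2}^{\prime}$ to $\beta$ is a one-line exercise. The sketch below (our own) reproduces the quoted pre-AC slope exactly, and the aperture-corrected one approximately (the quoted error on the corrected slope depends on the rounding of $c_{2}^{\prime}$):

```python
def fj_exponent(c2p, c2p_err):
    """Slope beta of L ~ sigma^beta and its propagated error,
    given the slope c2' of the log10(sigma)-M_i relation."""
    beta = -0.4 / c2p
    return beta, abs(beta) * c2p_err / abs(c2p)

beta, err = fj_exponent(-0.070, 0.006)
# beta ~ 5.71, err ~ 0.49: the pre-AC value quoted in the text.
beta_ac, err_ac = fj_exponent(-0.070 + 0.019, 0.007)
# beta_ac ~ 7.84, close to the quoted aperture-corrected 7.83.
```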
The above measurement has been performed with a sample of more than $600,000$ massive LRGs, with a mean stellar mass of $M_* \simeq 10^{11.3} M_{\odot}$, within the redshift range $0.5<z<0.7$. At intermediate mass ranges and $z\sim0.1$, using ETG samples extracted from the SDSS, it has been shown that the slope of the L-$\sigma$ relation is consistent with that of a canonical F-J relation, i.e. $\sim 4$ (@Bernardi2003b [@Desroches2007]). Even though this is a fairly robust result, studies at these mass ranges might still be subject to an inadequate treatment of selection effects. As an example, [@LaBarbera2010], using the SDSS-UKIDSS survey, report a slope of $\sim 5$ for a sample at a similar mass range.
At higher mass ranges, the curvature of the L-$\sigma$ relation has also been detected at high statistical significance at $z\sim0.1$ using the SDSS (@Desroches2007, @Hyde2009a, @Bernardi2011). However, a definite quantification of the high-mass slope has not emerged. [@Desroches2007] report a slope of $\sim4.5$ at $M_r \geq -24$ (or $\log_{10} \sigma \gtrsim 2.4$). The authors find a much steeper slope, of $\sim5.9$, for a subsample of brightest cluster galaxies (BCGs).
Some other works have focused on small samples of ETGs in the nearby universe (of dozens to a couple of hundred objects). While these samples are strongly affected by low-number statistics, they have the advantage that galaxy properties can be measured with higher precision. [@Lauer2007b], using a compilation of HST observations of a sample of 219 ETGs, measure a slope for the L-$\sigma$ relation of $\sim7$ for a subsample of core and BCG ellipticals, the latter being predominantly core ellipticals as well (note that @Bernardi2007, using an SDSS sample, report a significantly steeper slope for BCGs as compared to normal ETGs; from their Figure 6 we can visually estimate a slope of $\sim8$). Core ellipticals are identified by the fact that the central light profile is a shallow power law separated by a break from the outer, steep Sersic function profile. A value of $\sim2$ is reported for the coreless or “power-law” elliptical subsample, a class of objects that present steep cusps in surface brightness. Interestingly, a number of studies suggest that core ellipticals dominate at the high-mass end, whereas coreless ellipticals are predominant at lower masses (see e.g. @Faber2007 [@Lauer2007a; @Lauer2007b; @Hyde2008]). Subsequently, [@Kormendy2013], using a similar sample with some corrections and slight modifications in the core/coreless classification, estimate a slope of $\sim8$ for the core sample alone, and close to the canonical value of $4$ for the coreless sample. Our results for the slope of the L-$\sigma$ relation are in excellent agreement with these studies, in terms of core ellipticals.
Results consistent with the above picture are also obtained from the ATLAS$^{3D}$ sample [@Cappellari2011], comprising 260 local-volume ETGs. [@Cappellari2013_XX] measure a high-mass slope of $\sim4.7$ (for the related stellar mass M-$\sigma$ relation). This result is obtained above a characteristic stellar mass of $\sim 2 \times 10^{11} M_{\odot}$, a range of masses where the ETG population in this sample is again reportedly dominated by core ellipticals. Note that a detection of this mass scale in a [*[statistically-significant]{}*]{} sample came with the SDSS [@Bernardi2011]. This scale corresponds approximately to the range of masses covered by BOSS.
Independently of the central brightness profile classification, a steep slope at the bright/massive end is confirmed by other studies. In the environment analysis of [@Focardi2012] (using a small sample of a few hundred objects extracted from HYPERLEDA, @Paturel2003) a value of $\sim5.6$ is reported at high luminosities, although the authors indicate that an alternative method to compute this slope could yield a value closer to $4.5$.
To summarize, we have drawn two main conclusions about the slope of the high-mass L-$\sigma$ relation from a study of previous literature at low redshift. Firstly, enough evidence has been gathered of a steeper slope at the high-mass end from SDSS works at $z\sim0.1$. Secondly, the reported value for the slope varies among different works, from $\sim4.5$ to $\sim8$. Note that the L-$\sigma$ relation is sensitive to low-number / selection effects (which nearby samples are affected by), but also to the region of the FP probed (in particular, to the exact luminosity range under analysis). Our work provides, for the first time, a measurement at unprecedented statistical significance of the slope at an intermediate redshift and at the highest-mass end, where the L-$\sigma$ relation “saturates” (using the terminology of @Kormendy2013, among other authors).
Another important result from this work is the small intrinsic scatter of the L-$\sigma$ relation at the high-mass end at $z\sim0.55$; a measurement that has been performed with high statistical significance for the first time. The scatter, quantified by the hyperparameter $s$ in this work, is found to have a mean value of $s=0.047 \pm 0.004$ in $\log_{10} \sigma$ at fixed L and redshift slice (with no significant redshift dependence). Again, in order to place these results in the context of previous literature we can only compare with nearby/low-redshift samples, where some clear indications have been reported that the intrinsic scatter is smaller at higher masses.
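As a minimal illustration of how an intrinsic-scatter hyperparameter of this kind can be separated from measurement error, the sketch below simulates data and applies a simple moment-based estimator (the relation coefficients, error sizes and sample size are made-up values; the hierarchical Bayesian deconvolution actually used in this work is considerably more involved):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
a, b = -1.5, 0.128    # toy zero-point and slope of log10(sigma) vs log10(L)
s_true = 0.047        # intrinsic scatter in log10(sigma) at fixed L
err = 0.03            # per-object measurement error in log10(sigma)

logL = rng.uniform(10.8, 11.6, n)
logsig = a + b * logL + rng.normal(0.0, np.hypot(s_true, err), n)

# Residuals about the (here, known) mean relation; in a real fit a and b
# would be estimated jointly with the scatter.
resid = logsig - (a + b * logL)

# Total residual variance = intrinsic variance + error variance, so the
# intrinsic scatter follows by subtracting the known error contribution.
s_hat = np.sqrt(max(resid.var() - err**2, 0.0))
```

With these numbers `s_hat` recovers a value close to the input 0.047, which shows why a per-object error model is essential: the raw residual scatter, roughly 0.056 dex here, would otherwise overestimate the intrinsic one.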
Figure 9 (right panel) from [@Hyde2009a] clearly shows that the scatter in the L-$\sigma$ relation (in particular, in the $\log_{10} \sigma$ - $M_r$ relation) decreases towards high luminosities. In particular, we can visually estimate a value of $\sim0.05$ dex at $M_r < -23$. A similar value for the scatter of the mass-$\sigma$ relation at the high-mass end can be visually estimated from Figure 1 of [@Bernardi2011]. Although these values are not explicitly reported, they are obtained from relatively large SDSS samples (the @Bernardi2011 sample contains $\sim18,000$ massive ETGs), which indicates that these results are statistically significant.
In nearby samples, however, low-number statistics prevent a reliable estimation of the scatter. In any case, indications have been reported that the intrinsic scatter is smaller in core ellipticals, which would be consistent with our measurement. At the high-mass end, @Kormendy2013, by adopting measurement errors of 0.1 mag in magnitude and 0.03 in $\log_{10} \sigma$, report an intrinsic physical scatter of 0.06 in $\log_{10} \sigma$ for core galaxies and 0.10 in $\log_{10} \sigma$ for coreless galaxies at a given magnitude.
Even though we must be careful when comparing results at the high-mass end from a quantitative point of view, a mass (or luminosity) dependence of the intrinsic scatter of the L-$\sigma$ relation has been reported in other works, including [@Sheth2003], [@Desroches2007], [@NigocheNetro2011] and [@Focardi2012].
Finally, our result that the evolution of the zero-point of the L-$\sigma$ relation is consistent with a passive model with a formation redshift of $z=2-3$ is in good agreement with the redshift evolution of the FP as measured in galaxy clusters up to $z\sim1$ (see e.g. @vanDokkum1996 at $z=0.39$; @Kelson1997 at $z=0.58$; @vanDokkum1998 at $z=0.83$; @Wuyts2004 at $z=0.583$ and $z=0.83$; @vanDokkum2003 [@Holden2005] at $z\sim1.25$). More generally, the result that the high-mass RS population evolves passively from a high formation redshift is in agreement with a wide array of analyses (see, e.g., @Wake2006 [@Cool2008], MD2014 for LF results; @Maraston2013 for LRG-SED evolution results; @Guo2013 [@Guo2014] for LRG-clustering results).
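The passive-evolution consistency check can be sketched numerically. The toy model below assumes a flat $\Lambda$CDM cosmology and a single-burst population whose luminosity fades as a power law of stellar age, $L \propto \mathrm{age}^{-\alpha}$; the cosmological parameters and the fading index $\alpha = 0.9$ are illustrative assumptions, not values used in this work:

```python
import math

def age_gyr(z, h0=70.0, om=0.3, ol=0.7, steps=20000):
    """Age of a flat LCDM universe at redshift z, in Gyr.

    Midpoint integration of t(z) = integral_0^{1/(1+z)} da / (a H(a)).
    """
    h0_gyr = h0 / 3.0857e19 * 3.1557e16  # km/s/Mpc -> 1/Gyr
    a_end = 1.0 / (1.0 + z)
    da = a_end / steps
    t = 0.0
    for i in range(steps):
        a = (i + 0.5) * da
        t += da / (a * h0_gyr * math.sqrt(om / a**3 + ol))
    return t

zf, alpha = 2.5, 0.9                      # assumed formation redshift, fading index
age_lo = age_gyr(0.5) - age_gyr(zf)       # stellar age observed at z = 0.5
age_hi = age_gyr(0.7) - age_gyr(zf)       # stellar age observed at z = 0.7
dm = 2.5 * alpha * math.log10(age_lo / age_hi)  # fading (mag) from z=0.7 to z=0.5
```

Under these assumptions the predicted dimming across $0.5<z<0.7$ is of order a couple of tenths of a magnitude; a genuine comparison would of course use full stellar population models, as in the K+E corrections cited above.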
In light of the above comparison, it appears that the high-mass end of the L-$\sigma$ relation not only remains unchanged within the redshift range $0.5<z<0.7$, but would also be consistent with $z\sim0$ results.
Physical interpretation {#sec:interpretation}
-----------------------
The very steep slope of the L-$\sigma$ relation at the high-mass end implies that the interplay between the different processes involved in shaping the evolution of RS galaxies at different mass ranges is systematically different. As galaxies grow, central velocity dispersions do not change as much as expected according to the scaling relations at lower masses. This result is consistent with the systematic variation of the total mass profile as a function of mass in the central region of ETGs found by [@Shu2015], so that more massive galaxies have shallower profiles. One possibility to explain this behavior is that the relative efficiencies of gas cooling and feedback in RS galaxies vary at different mass scales. Gas cooling permits baryons to condense in the central regions of galaxies, and therefore it is believed to make the mass distribution more centrally concentrated [e.g. @Gnedin2004; @Gustafsson2006; @Abadi2010; @Velliscig2014]. Heating due to dynamical friction and supernovae (SN)/Active Galactic Nucleus (AGN) feedback, in contrast, can soften the central density concentration [e.g. @Nipoti2004; @Romano-Diaz2008; @Governato2010; @Duffy2010; @Martizzi2012; @Dubois2013; @Velliscig2014]. If feedback became more efficient in more massive galaxies, that could explain both the results presented here and in [@Shu2015].
A more straightforward explanation comes from a scenario where massive and intermediate-mass ETGs have a different evolutionary history. From high-resolution images of a small number of local ETGs, some evidence has been gathered that the high-mass end of the RS population might be occupied almost exclusively by core ellipticals, whereas coreless ellipticals dominate at intermediate and lower masses [@Lauer2007a; @Lauer2007b; @Hyde2008; @Cappellari2013_XX]. This transition occurs at $\log_{10}M_* \sim 11.2$, a mass scale that was first detected [*[with high statistical significance]{}*]{} by [@Bernardi2011]. Even though this classification arises from the shape of the central surface brightness profile, several studies have shown that this bimodality, known as the “E-E dichotomy", extends to a number of other galaxy properties: core ellipticals have boxy isophotes and are slow rotators, while coreless ellipticals have more disky isophotes and rotate faster (to name but a few properties; see e.g. @Kormendy1996 [@Faber1997; @Lauer2007a; @Lauer2007b; @Hyde2008; @Cappellari2013_XX; @Kormendy2013] for a complete discussion). Importantly, the above mass scale has been associated, with high statistical significance, with the curvature of the scaling relations (see @Hyde2009a [@Bernardi2011]).
This characterization confirms the ideas of @Kormendy1996, who proposed a revision of the Hubble Sequence for elliptical galaxies, where isophote shape is used as an implicit indicator of velocity anisotropy. The general consensus is that the properties of these two distinct populations are the consequence of two different evolutionary paths. Massive core ellipticals appear to be formed through major dissipationless mergers [@Lauer2007a; @Lauer2007b; @Bernardi2011; @Cappellari2013_XX; @Kormendy2013], whereas the less-massive coreless ellipticals seem to have undergone a more complex evolution (@Kormendy2009 review evidence that they are formed in wet mergers with starbursts).
This paper provides the most precise measurements of the L-$\sigma$ relation at the highest mass range ever probed with statistical significance. Unfortunately, however, BOSS does not provide the type of data required to perform a central surface brightness profile analysis that could confirm that the sample is dominated by core ellipticals, as previous results suggest. A complete analysis of this type would answer the question as to whether core ellipticals are fully responsible for the steep slope. This possibility is supported by [@Kormendy2013], who show that the slope of the L-$\sigma$ relation is significantly steeper for core galaxies even in the luminosity region where core and coreless galaxies overlap (recall the statistical limitations of the study). Independently of this discussion, the shallow dependence of $\sigma$ on galaxy mass (obtained from an L-$\sigma$ relation very similar to ours) is reported in [@Kormendy2013] to be approximately consistent with N-body predictions [@Nipoti2003; @BoylanKolchin2006; @Hilz2012] for dissipationless major mergers.
Previous literature about the “E-E dichotomy" along with the similarities between our measurements and those reported by [@Lauer2007a] and [@Kormendy2013] suggest that the intrinsic high-mass RS distribution characterized in MD2014 (the same distribution for which we compute the L-$\sigma$ relation here) can be identified as a core-elliptical population. In MD2014, the intrinsic RS distribution is photometrically deconvolved from photometric errors and selection effects. The resulting distribution is so narrow in the colour-colour plane that it is consistent with a delta function, at fixed magnitude and narrow redshift slice (with a shallow colour-magnitude dependence for its location). The second component of the intrinsic model, the BC, is a more extended distribution, well described by a Gaussian function in the colour-colour plane, upon which the RS is superimposed. Our BC is defined as a background distribution including everything not belonging to the very-pronounced RS (see MD2014 for details). This BC actually extends through the red side of the colour-colour plane. This characterization could reflect the “E-E dichotomy" on the red side of the colour-colour plane, with this intrinsic BC comprising a large fraction of coreless ellipticals, for which more scatter on the colour-colour plane is to be expected. Follow-up work will be needed to confirm this picture.
The intrinsic scatter of the L-$\sigma$ relation at $z=0.55$ adds to the result reported in MD2014 that the intrinsic RS distribution is extremely concentrated in the colour-colour plane as well (at fixed magnitude and narrow redshift slice), which is an indication that there is little variability as far as the stellar populations are concerned within this population. The small scatter in these relations is consistent with the idea that this is an aged quiescent population that has seen little activity in a long time.
The above idea is reinforced by the fact that the high-mass L-$\sigma$ relation at $z=0.55$ appears to be very similar to that reported at $z=0.1$. Also, the evolution of the zero-point within the redshift range $0.5<z<0.7$ is consistent with that of a passively-evolving population that formed at redshift $z=2-3$, in agreement with the LF-evolution results shown in MD2014 for the same population. The picture of the evolution of the high-mass end of the RS is, however, not completely clarified yet. [@Bernardi2015] have recently analyzed the high-mass luminosity and stellar mass function evolution from the CMASS to the SDSS, reporting a puzzling result. The evolution of these functions appears to be “impressively” passive, when K+E corrections computed from the [@Maraston2009] models are used. However, when matched in comoving number- or luminosity-density, the SDSS galaxies are less strongly clustered than CMASS galaxies, which is obviously inconsistent with a passive evolution scenario.
Conclusions and Future Applications {#sec:conclusions}
===================================
We have measured the intrinsic L-$\sigma$ relation for massive, luminous red galaxies within the redshift range $0.5<z<0.7$. We achieve unprecedented precision at the high-mass end ($M_* \gtrsim 10^{11} M_{\odot}$) on the measurement of the parameters of the L-$\sigma$ relation by using a sample of 600,000 galaxies from the BOSS CMASS sample (SDSS-III). We have deconvolved the effects of photometric and spectroscopic uncertainties and red–blue galaxy confusion using a novel hierarchical Bayesian formalism that is generally applicable to any combination of photometric and spectroscopic observables. The main conclusions of our analysis can be summarized as follows:
- At $z\sim0.55$, the passively-evolved L-$\sigma$ relation at $M_* \gtrsim 10^{11} M_{\odot}$ appears to be consistent with that at $z=0.1$.
- The slope of the $z=0.55$ L-$\sigma$ relation at the high-mass end is $\beta = 7.83 \pm 1.11$, corresponding to the canonical form $L_i \propto \sigma^{\beta}$. This value confirms, with the highest statistical significance ever achieved, the idea of a curved mass-dependent L-$\sigma$ relation. Scaling relations for the most massive LRGs are systematically different than the relations defined at lower masses.
- The intrinsic scatter on the L-$\sigma$ relation is $s=0.047 \pm 0.004$ in $\log_{10} \sigma$ at fixed L. This value confirms, with the highest statistical significance ever achieved, that the intrinsic scatter decreases as a function of mass.
- We detect no significant evolution in the slope and scatter of the L-$\sigma$ relation within the redshift range considered. Under a single stellar population assumption, the redshift evolution of the zero-point is consistent within the errors with that of a passively-evolving galaxy population that formed at redshift $z=2-3$. This is in agreement with the LF-evolution results reported in MD2014 for the same population.
- Our results, in combination with those reported in MD2014, provide an accurate description of the high-mass end of the red sequence population at $z\sim0.55$, which is characterized in MD2014 as an extremely narrow population in the optical colour-colour plane.
- Our results for the L-$\sigma$ relation, in the light of previous literature, suggest that our high-mass RS distribution might be identified with the “core-elliptical" galaxy population. In light of the ETG dichotomy, the second component identified in MD2014, a much more extended distribution upon which the RS is superimposed, would contain a significant fraction of “coreless" ellipticals towards the red side. The larger scatter in colour found for this population would be consistent with the evolutionary path that has been proposed for coreless ellipticals.
The above results lead us to consider followup work intended to investigate core–coreless elliptical demographics in BOSS. This project will require the use of high-resolution data.
The success of our algorithm for the photometric deconvolution of spectroscopic observables opens a field of future applications, as it can be used to constrain the intrinsic distribution of a variety of spectroscopically-derived quantities in BOSS. In the broader picture, the statistical techniques developed in this work and in MD2014 lay the foundations for galaxy-evolution studies using other current and future dark energy surveys, like eBOSS, which are subject to the same type of S/N limitations and selection effects that we face in BOSS.
The extensive characterization of the high-mass RS presented in this work and in MD2014 will be used in combination with N-body numerical simulations to investigate the intrinsic clustering properties of this galaxy population, along with the intrinsic connection between these galaxies and the dark matter haloes that they inhabit. The connection between galaxies and halos will be performed by applying the techniques of halo occupation distributions (HOD: e.g., @Berlind2002; @Zehavi2005) and halo abundance matching (HAM: e.g., @Vale2004; @Trujillo2011). This is a novel approach as compared with the previous clustering/halo-galaxy-connection studies in BOSS, which have focused on the observed galaxy distribution and lacked a proper completeness correction.
Finally, the velocity-dispersion distributions implied by the L-$\sigma$ relation that we have obtained in this work can be used in combination with the luminosity-function results of MD2014 to determine the statistical strong gravitational lensing cross section of the CMASS sample. This cross section can in turn be used to predict and interpret the incidence of spectroscopically selected strong lenses within large redshift surveys (e.g., @Bolton2008, @Brownstein2012, @Arneson2012, @Bolton2012b), and to derive constraints on cosmological parameters from the statistics of gravitationally lensed quasars (e.g., @Kochanek1996, @Chae2002, @Mitchell2005).
Acknowledgments {#acknowledgments .unnumbered}
===============
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0010331.
The support and resources from the Center for High Performance Computing at the University of Utah are gratefully acknowledged.
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III Web site is http://www.sdss3.org/.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, University of Florida, the French Participation Group, the German Participation Group, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, The University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
\[lastpage\]
[^1]: @Beifiori2014 present an analysis on the evolution of the effective radius in BOSS, but the $R_e$ - magnitude relation necessary to derive the aperture correction is not reported.
---
abstract: 'We study a single product market with affine inverse demand function in which two producers/retailers, producing under capacity constraints, may order additional quantities from a single profit-maximizing supplier. We express this chain as a two-stage game and show that it has a unique subgame perfect Nash equilibrium, which we explicitly calculate. If the supplier faces uncertainty about the demand intercept and whenever a Bayesian Nash equilibrium exists, the supplier’s profit margin at equilibrium is a fixed point of a translation of the MRL function of his belief, under the assumption that his belief is non-atomic and the retailers are identical. If the supplier’s belief is of Decreasing Mean Residual Lifetime (DMRL), the game has a unique subgame perfect Bayesian Nash equilibrium. Various properties of the equilibrium solution are established, inefficiencies at equilibrium generated by the lack of information of the supplier are investigated, and examples are provided for various interesting cases. Finally, the main results are generalized for more than two identical producers/retailers.'
author:
- 'Stefanos Leonardos[^1]'
- 'Costis Melolidakis[^2]'
bibliography:
- 'mybib.bib'
title: 'Endogenizing the cost parameter in Cournot oligopoly[^3]'
---
**Keywords:** Cournot Nash, Nash Equilibrium, Duopoly, Incomplete Information, Existence, Uniqueness\
**Mathematics Subject Classification (2000):** 91A10, 91A40
Introduction {#sec1}
============
The main objective of this paper is to examine equilibria in a classic Cournot market with affine inverse demand function, when the competing firms’ cost is affected exogenously by a supplier, who may even be uncertain about the demand parameter of the market. As @Ma15 point out, although Cournot’s model has become the workhorse model of oligopoly theory, there has been very little discussion about the origin of the competing firms costs in the quite vast Cournot literature. As they underline, if these costs represent purchases from a third party, then important strategic considerations come into play, which are not addressed by the prevailing approach. @Ma15 question the robustness of the results obtained thus far in the Cournot oligopoly theory due to this negligence.
The key in studying the effects of an exogenous source of supply for Cournot oligopolists is the relation between demand and the various costs. If the demand is high enough, the competing firms may have incentive to place orders to an external supplier, depending of course on the price he asks. In the classic original Cournot model, the crucial parameter concerning demand is the y-intercept of the affine inverse demand function, which implies its important role in studying Cournot markets with exogenous supply. Next, in such a study, various questions have to be addressed: Do the firms have production capacities of their own? If yes, then these capacities have to be assumed bounded (as they are in reality), so that need may arise for additional quantities. Does the supplier know the actual demand the oligopolists face? If no, then he may ask for a price that the competing firms will not accept, even if it is to his advantage not to do so. The latter question implies an incomplete information game setup.
It is the aim of this paper to address the questions raised above and contribute to extending the classic Cournot theory to oligopolists that may purchase additional quantities from an external supplier. By viewing the interaction of the competing Cournot oligopolists with their supplier as a two stage game, the cost parameter of the classic Cournot model is endogenized and answers can be worked out.
As we shall see, obtaining the equilibrium strategies for the Cournot firms in such a market may be very tedious, but nevertheless manageable [*after*]{} one formulates the setup. However, the case for the supplier, who is also a player in this game, is somewhat more demanding. In the duopoly case, if the supplier knows the true value of the demand parameter, a unique subgame perfect equilibrium always exists, although its description is quite complicated (see Propositions \[prop3.3full\] and \[prop3.4full\] in the Appendix). In all other cases (three or more oligopolists and/or incomplete information), the study of equilibria is possible only if we assume that the competing firms are identical. So, assuming that the oligopolists are identical, we obtain concrete results concerning the price the supplier will ask in both the complete and the incomplete information case under equilibrium. In the latter, when the supplier does not know the demand parameter, we give necessary and sufficient conditions for the existence of a subgame perfect equilibrium via a fixed point equation involving a translation of the Mean Residual Lifetime (MRL) of his belief, provided the measure this belief induces on the demand parameter space is non-atomic. If they exist, the evaluation of all subgame perfect equilibria is possible by using this equation, i.e. our approach is constructive. The MRL function is well known among actuarials, reliability engineers, and applied probabilists but, to our knowledge, has never arisen before in a Cournot context.
In detail, inspired by the classic Cournot oligopoly where producers/retailers compete over quantity, we study the market of a homogeneous good differing from the classic model in the following aspects. Each producer/retailer may produce only a limited quantity of the good up to a specified and commonly known production capacity. If needed, the retailers may refer to a single supplier to order additional quantities. The supplier may produce unlimited quantities but at a higher cost than the retailers, making it best for the retailers to exhaust their production capacities before placing additional orders. The market clears at a price that is determined by an affine inverse demand function. The demand parameter or demand intercept is considered to be a random variable with a commonly known non-atomic distribution having a finite expectation.
Depending on the time of the demand realization, two variations of this market structure are examined, which correspond to a complete and an incomplete information two-stage game played as follows. In the first stage the supplier fixes a price by deciding his profit margin and then in the second stage, the retailers, knowing this price and the true value of the demand parameter, decide their production quantity (up to capacity) and also place their orders (if any). The decisions of the producers/retailers are simultaneous. If the supplier knows the true value of the demand parameter before making his decision as well, this is an ordinary two stage game (and the demand random variable is just a formality). On the other hand, if the random demand parameter is realized after the supplier has set his price, then this is a two stage game of incomplete information where the demand distribution is the belief of the supplier about this variable. So, in both variations the demand uncertainty is resolved prior to the decision of the retailers, who always know the true value of the demand intercept. In this respect, our setup differs from the usual incomplete information models of Cournot markets, where demand uncertainty involves the producers (e.g. see @Ei10 and references given therein).
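As a rough numerical illustration of the second-stage quantity game (a sketch with made-up parameter values; the paper derives this equilibrium in closed form), best-response iteration over a quantity grid locates the retailers' equilibrium under the piecewise unit cost induced by the capacity $K$ and the supplier's price $w$. With $K = 0$ the cost is linear and the iteration recovers the textbook Cournot duopoly quantity $(\alpha - w)/(3\beta)$:

```python
import numpy as np

def cournot_capacity_eq(alpha=10.0, beta=1.0, c=0.5, w=1.0, K=0.0, iters=300):
    """Best-response dynamics for two identical capacity-constrained retailers.

    Each retailer produces up to K units at unit cost c and orders any excess
    from the supplier at unit price w > c (all parameter values illustrative).
    """
    grid = np.arange(0.0, alpha / beta, 0.005)
    cost = c * np.minimum(grid, K) + w * np.maximum(grid - K, 0.0)

    def best_response(q_other):
        # Profit with inverse demand P = alpha - beta * (q_i + q_other).
        profit = (alpha - beta * (grid + q_other)) * grid - cost
        return grid[np.argmax(profit)]

    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = best_response(q2)
        q2 = best_response(q1)
    return q1, q2

q1, q2 = cournot_capacity_eq()  # K = 0 reduces to classic Cournot: q_i = (10 - 1)/3
```

Raising $K$ above zero lets the retailers serve part of the market at the cheaper in-house cost, which is why, at equilibrium, capacities are exhausted before any order is placed with the supplier.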
The limited production capacities of the producers/retailers may equivalently be viewed as inventory quantities that may be drawn at a fixed cost per unit. The latter setting corresponds to the operation of a retailer’s cooperative (hence the characterization of the ‘producers’ as ‘retailers’ also). In case the production capacities are 0, this is a classic Cournot model with the cost input determined exogenously. [@Ma15] point out the scarceness of research along this line and the need of endogenizing the cost parameter in the Cournot model. To do so, they study the game that arises when competing Cournot firms purchase their inputs from a common supplier based on contracts, with the competing firms having the bargaining power.
We study the equilibrium behavior of the two-stage game by first determining the unique equilibrium solution of the second stage game, which concerns the quantities that the producers/retailers will produce and the additional quantities they will order from the supplier. This stage is the same in both the complete and the incomplete information case and is solved completely. Then, we examine the first stage game, which concerns the price the supplier will ask for the product. In the case of the complete information duopoly (two producers/retailers; the supplier knows the demand parameter), a unique subgame-perfect Nash equilibrium always exists, which we give explicitly in the Appendix. Its description is quite complicated, as it is very sensitive to the relationship of all parameters of the problem. Otherwise, we proceed by considering identical producers/retailers. Under this assumption, the complete information case simplifies significantly. However, our focus is mainly on the incomplete information case. For that case, we show that if a subgame perfect Bayesian-Nash equilibrium exists, then it will necessarily be a fixed point of a translation of the Mean Residual Lifetime (MRL) function of the supplier’s belief. If the support of the supplier’s belief is bounded, then such an equilibrium exists. If the MRL function is decreasing (as in most “well behaved” distributions), then a subgame perfect equilibrium always exists and is unique, irrespective of the support of the supplier’s belief. In addition, we investigate the behavior of this subgame perfect equilibrium in various interesting cases.
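The role of the MRL function can be made concrete with a small numerical sketch. Below, the supplier's belief is taken to be uniform on $[0,1]$ (an illustrative choice; the actual equilibrium condition involves a translation of the MRL that depends on the model's cost and capacity parameters, which is omitted here). Since $m(t) = \mathbb{E}[X - t \mid X > t] = (1-t)/2$ is decreasing, the belief is DMRL and the fixed point of $r = m(r)$ is unique:

```python
def mrl_uniform(t):
    # Mean residual lifetime of X ~ U(0, 1): m(t) = E[X - t | X > t] = (1 - t) / 2.
    # m is decreasing in t, i.e. U(0, 1) is a DMRL distribution.
    return (1.0 - t) / 2.0

def mrl_fixed_point(m, lo=0.0, hi=1.0, tol=1e-10):
    # For a DMRL belief, g(r) = m(r) - r is strictly decreasing, so bisection
    # isolates the unique fixed point r* solving m(r*) = r*.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if m(mid) > mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r_star = mrl_fixed_point(mrl_uniform)  # analytically: (1 - r)/2 = r  =>  r = 1/3
```

Replacing `mrl_uniform` with the MRL of any other DMRL belief (for instance, any distribution with a log-concave survival function) leaves existence and uniqueness of the fixed point intact, mirroring the uniqueness result described above.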
A well known byproduct of incomplete information is the appearance of inefficiencies in the sense that transactions that would have been beneficial for all market participants are excluded at equilibrium under incomplete information, while this wouldn’t have been the case under complete information. In the present paper, an upper bound on a measure of inefficiency over all DMRL distributions is derived, it is shown that it is tight, and the implications of the structure of the supplier’s belief on the efficiency of the incomplete information equilibrium are investigated.
Under incomplete information, the DMRL property is sufficient for the existence of a unique optimal strategy for the supplier, but not necessary. Possible relaxations of this condition are examined, basically in terms of the increasing generalized failure rate (IGFR) condition, see @La06. As far as necessary conditions are concerned, we don’t have a characterization of the distribution when an equilibrium exists, although we have a characterization of the equilibrium itself through the MRL function. Finally, as far as the inverse demand function is concerned, its affinity assumption may not be dropped if one still wants to express the supplier’s payoff in terms of the MRL function. Hence, a generalization in models with non-affine inverse demand function is open.
As a dynamic two-stage model, the current work combines elements from various active research areas. Cournot competition with an external supplier having uncertainty about the demand intercept may be viewed as an application of game theoretic tools and concepts in supply chain management. However, we are interested in the equilibrium behavior of the incomplete information two stage game rather than in building coordination mechanisms. The incomplete information of the supplier relates our work to incomplete information Cournot models, although the lack of information refers to the producers in most papers in this area. Having capacity constraints for the producers/retailers, as is necessary for our model, has also attracted attention in the Cournot literature, see e.g. @Bi09. Finally, the application of the concept of the mean residual lifetime, which originates from reliability theory, to a purely economic setting is another research area to which our paper is related.
@Ca04 survey the applications of game theoretic concepts in supply chain analysis in detail and provide a thorough literature review up to 2004. They state that the concept that is mostly used is that of static non-cooperative games and highlight the need for a game theoretic analysis of more dynamic settings. At the time of their survey, they report only two papers that apply the solution concept of the subgame perfect equilibrium (in settings quite different to ours). @Be05 investigate the equilibrium behavior of a decentralized supply chain with competing retailers under demand uncertainty. Their model accounts for demand uncertainty from the retailers’ point of view also and focuses mainly on the design of contracts that will coordinate the supply chain. Based on whether the uncertainty of the demand is resolved prior to or after the retailers’ decisions, they identify two main streams of literature. For the first stream, in which the present paper may be placed, they refer to @Vi01 for an extensive survey.
@Ei10 examine Cournot competition under incomplete producers’ information about the demand and production costs. They provide examples of such games without a Bayesian Cournot equilibrium in pure strategies, they discuss the implication of not allowing negative prices and provide additional sufficient conditions that will guarantee existence and uniqueness of equilibrium. @Ri13 discusses Cournot competition under incomplete producers’ information about their production capacities and proves the existence of equilibrium under the assumption of stochastic independence of the unknown capacities. He also discusses simplifications of the inverse demand function that result to symmetric equilibria and implications of information sharing among the producers.
Capacity constrained duopolies are mostly studied in the view of price rather than quantity competition. @Os86 and the references therein are among the classics in this field. Equally common is the study of models where the capacity constraints are viewed as inventories kept by the retailers at a lower cost. Papers in this direction focus mainly on determining optimal policies in building the inventory over more than one periods. @Ha15 and the references therein are indicative of this field of research.
The condition of decreasing mean residual lifetime that arises in our treatment is closely related to the more general concept of log-concave probability. In an inspiring survey, @Ba05 examine a series of theorems relating log-concavity and/or log-convexity of probability density functions, distribution functions, reliability functions, and their integrals and point out numerous applications of log-concave probability that arise, among other areas, in mechanism design, monopoly theory and in the analysis of auctions. @Lag06 examines a Cournot duopoly where the producers do not know the demand intercept but have a common belief about it. Under the condition that the distribution of the demand intercept has a monotone hazard rate and an additional mild technical assumption, he proves uniqueness of equilibrium. @Gu88 survey earlier advances on the mean residual lifetime function, its properties and its applications. Among many other fields, they mention a single economic application, in a setting about optimal inventory policies for perishable items with variable supply, which is however rather distinct from ours.
Outline
-------
The rest of the paper is structured as follows. In we build up the formal setting for a duopoly. In we present some preliminary results and treat the complete information case. In we analyze the model of incomplete information, state our main results and highlight the special case with no production capacities for the retailers. In we investigate inefficiencies caused by the lack of information on the true value of the demand parameter by the supplier. The use of results exhibited in the previous sections is highlighted by various examples and calculations of the subgame perfect equilibrium in . Finally, in we generalize our results to the case of $n>2$ identical retailers. The proofs of and proceed by an extensive case discrimination and are presented in Appendix \[app1\].
Throughout the rest of this paper we drop the double name “producers/retailers” and keep the term “retailers” only.
The Model {#sec2}
=========
Notation and preliminaries
--------------------------
We consider the market of a homogeneous good that consists of two producers/retailers ($R_1$ and $R_2$) that compete over quantity ($R_i$ places quantity $Q_i$) and a supplier (or wholesaler), under the following assumptions:
1. The retailers may produce quantities $t_1$ and $t_2$ up to capacities $T_1$ and $T_2$ respectively at a common fixed cost $h$ per unit, normalized to zero[^4].
2. Additionally, they may order quantities $q_1, q_2$ from the supplier at a price $w$ set prior to and independently of their orders. The total quantity $Q_i{\left(}w{\right)}, i=1,2$ that each retailer releases to the market is equal to the sum $$Q_i{\left(}w{\right)}:=t_i{\left(}w{\right)}+q_i{\left(}w{\right)}$$ or shortly $Q_i:=t_i+q_i$, where, for $i=1,2$, the variable $t_i\le T_i$ is the quantity that retailer $R_i$ produces by himself or draws from his inventory (at normalized zero cost) and $q_i$ is the quantity that the retailer $R_i$ orders from the supplier at price $w$.
3. The supplier may produce unlimited quantity of the good at a cost $c$ per unit. We assume that the retailers are more efficient in the production of the good or equivalently that $c>h$. After the normalization of the retailers’ production cost $h$ to $0$, the rest of the parameters i.e. $w$ and $c$ are also normalized. Thus, $w$ represents a normalized price, i.e. the initial price that was set by the supplier minus the retailers’ production cost and $c$ a normalized cost, i.e. the supplier’s initial cost minus the retailers’ production cost. The supplier’s profit margin $r$ is not affected by the normalization and is equal to $$r:=w-c$$
4. After the total quantity $Q=Q_1+Q_2$ that will be released to the market is set by the retailers, the market clears at a price $p$ that is determined by an inverse demand function, which we assume to be affine[^5] $$\label{1} p=\alpha-Q$$
5. The demand parameter $\alpha$ is a non-negative random variable with finite expectation ${\mathbb{E}}{\left(}\alpha{\right)}<+\infty$ and a continuous cumulative distribution function (cdf) $F$ (i.e. the measure induced on the space of $\alpha$ is non-atomic; singular distribution functions, like the Cantor function, are acceptable). We will assume that $\alpha \ge h$ for all values of $\alpha$, i.e. that the demand parameter is greater than or equal to the retailers’ production cost. The latter assumption is consistent with the classic Cournot duopoly model, which is resembled by the second stage of the game (however, the second stage game is not a classic Cournot duopoly due to the capacity constraints $T_1$ and $T_2$ and to the possibility of $w>\alpha$). After normalization, in what follows we will use the term $\alpha_H$ (resp. $\alpha_L$) to denote the least upper bound (resp. the greatest lower bound) of the support of $\alpha$.
6. The capacities $T_1, T_2$ and the distribution of the random demand parameter $\alpha$ are common knowledge among the three participants of the market (the retailers and the supplier).
Based on these assumptions, a strategy $s_i{\left(}w{\right)}$ for retailer $R_i, i=1,2$ is a vector valued function $s_i{\left(}w{\right)}={\left(}t_i(w),q_i{\left(}w{\right)}{\right)}$ or shortly a pair $s_i=(t_i, q_i)$ for $i=1,2$. Equation implies that $Q_i$ may not exceed $\alpha$ and hence the strategy set $\tilde{S}^i$ of $R_i$ will satisfy $$\label{2} \tilde{S}^i \subset \left\{(t_i,q_i) : 0\le t_i \le T_i \text{ and } 0\le q_i+t_i \le \alpha\right\}$$ Denoting by $s:={\left(}s_1, s_2{\right)}$ a strategy profile, the payoff $u^i{\left(}s \mid {w}{\right)}$ of retailer $R_i, i=1,2$ will be given by $$\label{3} u^i{\left(}s\mid w{\right)}= Q_i{\left(}\alpha-Q{\right)}-w q_i=Q_i{\left(}\alpha-w-Q{\right)}+w t_i$$ Whenever no confusion may arise we will write $u^i(s)$ instead of $u^i(s\mid w)$. A strategy for the supplier is the price $w$ he charges to the retailers or equivalently his profit margin $r$. From we see that $w$ may not exceed $\alpha_H$, otherwise the retailers will surely not order. Additionally, it may not be lower than $c$, since in that case his payoff will become negative. Hence, in terms of his profit margin $r$, the strategy set $R$ of the supplier satisfies $$\label{4}R\subset \{r : 0 \le r \le \alpha_H-c\}$$ Consequently, a reasonable assumption is that $c<\alpha_H$, otherwise the problem becomes trivial from the supplier’s perspective. For a given value of $\alpha$, the supplier’s payoff function, stated in terms of $r$ rather than $w=r+c$, is given by $$\label{5}u^s{\left(}r\mid \alpha{\right)}=r{\left(}q_1{\left(}w{\right)}+q_2{\left(}w{\right)}{\right)}$$ On the other hand, for the retailers it is not necessary to know the exact values of $c$ and $r$ and hence, from their point of view (2nd stage game), we keep the notation $w=r+c$.
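As a quick numeric sanity check of the payoff identity above, the following Python snippet (with arbitrary, purely illustrative parameter values) confirms that the two expressions $Q_i{\left(}\alpha-Q{\right)}-wq_i$ and $Q_i{\left(}\alpha-w-Q{\right)}+wt_i$ coincide:

```python
# Numeric check of the identity
#   u_i = Q_i*(alpha - Q) - w*q_i = Q_i*(alpha - w - Q) + w*t_i
# with Q_i = t_i + q_i and Q = Q_i + Q_j. Parameter values are arbitrary.

def payoff_form_1(alpha, w, t_i, q_i, Q_j):
    Q_i = t_i + q_i
    Q = Q_i + Q_j
    return Q_i * (alpha - Q) - w * q_i

def payoff_form_2(alpha, w, t_i, q_i, Q_j):
    Q_i = t_i + q_i
    Q = Q_i + Q_j
    return Q_i * (alpha - w - Q) + w * t_i

alpha, w = 10.0, 2.0
for (t_i, q_i, Q_j) in [(1.0, 0.5, 2.0), (0.0, 3.0, 1.0), (2.0, 0.0, 4.0)]:
    assert abs(payoff_form_1(alpha, w, t_i, q_i, Q_j)
               - payoff_form_2(alpha, w, t_i, q_i, Q_j)) < 1e-12
```

The second form makes transparent that a retailer effectively pays $w$ per unit of total quantity but is reimbursed $w$ for every unit drawn from his own inventory.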
If the supplier does not know $\alpha$ (incomplete information case), then his payoff function will be $$\label{6}{u^s{\left(}r{\right)}}:={\mathbb{E}}u^s{\left(}r\mid \alpha{\right)}$$ where the expectation is taken with respect to the distribution of $\alpha$.
To proceed with the formal two-stage game model, we recall that both the production/inventory capacities $T_1$ and $T_2$ of the retailers and the distribution of the demand parameter $\alpha$ are common knowledge of the three market participants. We then have,
- [**Complete Information Case:**]{} The demand parameter $\alpha$ is realized and observed by both the supplier and the retailers[^6]. Then, at stage 1, the supplier fixes his profit margin $r$ and hence his price $w$. His strategy set and payoff function are given by and . At stage 2, based on the value of $w$, each competing retailer chooses the quantity $Q_i, i=1,2$ that he will release to the market by determining how much quantity $t_i$ he will draw (at zero cost) from his inventory $T_i$ and how much additional quantity $q_i$ he will order (at price $w$) from the supplier. The strategy sets and payoff functions of the retailers are given by and respectively.

- [**Incomplete Information Case:**]{} At stage 1, the supplier chooses $r$ without knowing the true value of $\alpha$. His strategy set remains the same but his payoff function is now given by . After $r$ and hence $w=r+c$ is fixed, the demand parameter $\alpha$ is realized and along with the price $w$ is observed by the retailers. Then we proceed to stage 2, which is identical to that of the Complete Information Case.

- In both cases, the above are assumed to be common knowledge of the three players.
Subgame-perfect equilibria under complete information {#sec3}
=====================================================
At first we treat the case with no uncertainty on the side of the supplier about the demand parameter $\alpha$. In subsections \[sub3.1\] and \[sub3.2\] we determine the subgame perfect equilibria of this two stage game.
Equilibrium strategies of the second stage {#sub3.1}
------------------------------------------
We begin with the intuitive observation that it is best for the retailers to produce up to their capacity constraints or equivalently to exhaust their inventories before ordering additional quantities from the supplier at unit price $w$. For simplicity in the notation of Lemma \[lem3.1\] and Lemma \[lem3.2\], fix $i \in \{1,2\}$ and let $T_i:=T$. As above, $Q_i=t_i+q_i$.
\[lem3.1\] Any strategy $s_i={\left(}t_i, q_i{\right)}\in \tilde{S}^i$ with $t_i<T$ and $q_i>0$ is strictly dominated by a strategy $s_i'={\left(}t_i',q_i'{\right)}$ with $${\left(}t_i',q_i'{\right)}=\begin{cases}{\left(}T, Q_i-T{\right)}, & \text{ if }Q_i\ge T \\ {\left(}Q_i, 0{\right)}, & \text{ if }Q_i< T \end{cases}$$ or equivalently by ${\left(}t_i',q_i'{\right)}={\left(}\min{\{Q_i,T\}}, {\left(}Q_i-T{\right)}^+{\right)}$.
Let $Q_i\ge T$. Then for any $s_j \in \tilde{S}^j$, implies that $u^i{\left(}s_i, s_j{\right)}=u^i{\left(}s'_i,s_j{\right)}+w{\left(}t_i-T{\right)}$. Since $t_i<T$, the result follows. Similarly, if $Q_i< T$, then for any $s_j \in \tilde{S}^j$, implies that $u^i{\left(}s_i, s_j{\right)}=u^i{\left(}s'_i,s_j{\right)}-wq_i.$
Accordingly, we restrict attention to the strategies in $$S^i=\bigg\{{\left(}t_i,q_i{\right)}: 0\le t_i < \min{\{\alpha, T\}}, q_i=0\bigg\} \bigcup \bigg\{{\left(}t_i,q_i{\right)}: T=\min{\{\alpha, T\}}, 0\le q_i\le {\left(}\alpha-T{\right)}^+\bigg\}$$ depicts the set $S^i$ when $T<\alpha$ and when $T\ge\alpha$. As shown below, Lemma \[lem3.1\] considerably simplifies the maximization of $u^i{\left(}\cdot, s_j{\right)}$, since for any strategy $s_j$ of retailer $R_j$, the maximum of $u^i{\left(}\cdot, s_j{\right)}$ will be attained at the bottom or right hand side boundary of the region $[0,\min{\{\alpha,T\}}]\times\left[0,{\left(}\alpha-T{\right)}^+\right]$.
*(Figure: the strategy set $S^i$ of retailer $R_i$ for $T<\alpha$ (left panel) and $T\ge\alpha$ (right panel); along each line of constant $Q_i=t_i+q_i$, the dominant strategies of Lemma \[lem3.1\] lie on the bottom and right-hand boundary of the region.)*
Moreover, Lemma \[lem3.1\] implies that when $\alpha\le T$, retailer $R_i$ will order no additional quantity from the supplier[^7]. Although trivial, we may not exclude this case in general, since in we consider $\alpha$ to be varying.
Restricting attention to $S^i$ for $i=1,2$, we obtain the best reply correspondences $\operatorname{BR^1}$ and $\operatorname{BR^2}$ of retailers $R_1$ and $R_2$ respectively. To proceed, we notice that the payoff of retailer $R_i, i=1,2$ depends on the total quantity $Q_j$ that retailer $R_j, j=3-i$ releases to the market and not on the explicit values of $t_j, q_j$, cf. .
\[lem3.2\] The best reply correspondence $\operatorname{BR}^i$ of retailer $R_i$ for $i=1,2$ is given by $$\begin{aligned}
\operatorname{BR}^i(Q_j)=\left\{\begin{alignedat}{5}
&t_i=T,& & q_i= \dfrac{\alpha-w-Q_j}{2}-T, &\,\,\,&\text{ if }\,\, 0\le Q_j< \alpha-w-2T&\hspace{30pt}& (1)\\
&t_i=T, &&q_i= 0, &&\text{ if }\,\, \alpha-w-2T\le Q_j< \alpha-2T&&(2)\\
&t_i=\dfrac{\alpha-Q_j}{2},& \,\,\,& q_i= 0, & &\text{ if }\,\, \alpha-2T\le Q_j &&(3)
\end{alignedat} \right. \end{aligned}$$
The enumeration $(1), (2), (3)$ of the different parts of the best reply correspondence will be used for a more clear case discrimination in the subsequent equilibrium analysis, see Appendix \[app1\].
See Appendix \[app1\].
A generic graph of the best reply correspondence $\operatorname{BR^1}$ of retailer $R_1$ to the total quantity $Q_2$ that retailer $R_2$ releases to the market is given in .
*(Figure: a generic graph of the best reply correspondence $\operatorname{BR}^1$ of retailer $R_1$, with branch $(1)$: $(t_1,q_1)={\left(}T,\frac{\alpha-w-Q_2}{2}-T{\right)}$ for $0\le Q_2<\alpha-w-2T$, branch $(2)$: $(t_1,q_1)={\left(}T,0{\right)}$ for $\alpha-w-2T\le Q_2<\alpha-2T$, and branch $(3)$: $(t_1,q_1)={\left(}\frac{\alpha-Q_2}{2},0{\right)}$ for $Q_2\ge\alpha-2T$.)*
The equilibrium analysis of the second stage of the game proceeds in the standard way, i.e. with the identification of the Nash equilibria through the intersection of the best reply correspondences $\operatorname{BR^1}$ and $\operatorname{BR^2}$. For a better exposition of the results we restrict attention from now on to the case of identical retailers, i.e. $T_1=T_2=T$. The general case $T_1\ge T_2$ is treated in Appendix \[app1\], where the complete statements and the proofs of Proposition \[prop3.3\] and Proposition \[prop3.4\] are provided.
If the retailers are identical, i.e. $T_1=T_2=T$, only symmetric equilibria may occur in the second stage. The equilibrium strategies depend on the value of $\alpha$ and its relative position to $3T$.
\[prop3.3\] If $T_1=T_2=T$, then for all values of $\alpha$ the second stage equilibrium strategies between retailers $R_1$ and $R_2$ are symmetric and for $i=1,2$ they are given by $s^*_i{\left(}w{\right)}={\left(}t^*_i{\left(}w{\right)}, q^*_i{\left(}w{\right)}{\right)}$, or shortly $s^*_i={\left(}t^*_i, q^*_i{\right)}$, with $$t^*_i(w)= T-\frac13{\left(}3T-\alpha{\right)}^+, \quad q^*_i{\left(}w{\right)}= \frac13{\left(}\alpha-3T-w{\right)}^+.$$
See Appendix \[app1\], where Proposition \[prop3.3\] is stated and proved for the general case $T_1 \ge T_2$.
The quantity $Q^*_i=t^*_i+q^*_i$ is the total quantity of the good that each retailer releases to the market in equilibrium. If $\alpha \le 3T$ then $Q^*_i$ is equal to the equilibrium quantity of a classic Cournot duopolist who faces linear inverse demand with intercept equal to $\alpha$ and has $0$ cost per unit. If $\alpha>3T$, then the equilibrium quantities of the retailers depend on the supplier’s price $w$. If $w$ is low enough, i.e. if $w<\alpha-3T$, they will be willing to order additional quantities from the supplier. In this case, $Q^*_i=t^*_i+q^*_i=T+\frac13{\left(}\alpha-3T-w{\right)}=\frac13{\left(}\alpha-w{\right)}$, which is equal to the equilibrium quantity of a Cournot duopolist who faces linear demand with intercept equal to $\alpha$ and has $w$ cost per unit for all product units, despite the fact that in our model the retailers face a cost of 0 for the first $T$ units and $w$ for the rest. Finally, for $w \ge\alpha-3T$ the retailers will avoid ordering and will release their inventories to the market.
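The equilibrium formulas of Proposition \[prop3.3\] can also be checked numerically. The following Python sketch (parameter values are purely illustrative) implements them and verifies, by brute-force grid search over the reduced strategy set of Lemma \[lem3.1\], that each retailer's equilibrium strategy is approximately a best response to the opponent's equilibrium quantity:

```python
# Symmetric second-stage equilibrium (Proposition [prop3.3]):
#   t* = T - (3T - alpha)^+ / 3,  q* = (alpha - 3T - w)^+ / 3.
# We check numerically that (t*, q*) best-responds to the opponent
# playing Q* = t* + q*. Parameter values are illustrative only.

def equilibrium(alpha, w, T):
    t = T - max(3 * T - alpha, 0.0) / 3.0
    q = max(alpha - 3 * T - w, 0.0) / 3.0
    return t, q

def payoff(alpha, w, t, q, Q_other):
    Q_i = t + q
    return Q_i * (alpha - Q_i - Q_other) - w * q

def best_payoff(alpha, w, T, Q_other, grid=2000):
    # brute-force search over the reduced strategy set S^i of Lemma [lem3.1]
    best = float("-inf")
    for k in range(grid + 1):
        Q_i = k * alpha / grid          # total quantity released
        t = min(Q_i, T)                 # exhaust capacity/inventory first
        q = max(Q_i - T, 0.0)
        best = max(best, payoff(alpha, w, t, q, Q_other))
    return best

# Three illustrative regimes: ordering, capacity slack, no ordering.
for alpha, w, T in [(10.0, 1.0, 2.0), (5.0, 1.0, 2.0), (10.0, 8.0, 2.0)]:
    t, q = equilibrium(alpha, w, T)
    u_eq = payoff(alpha, w, t, q, t + q)
    assert abs(best_payoff(alpha, w, T, t + q) - u_eq) < 1e-3
```

For instance, with $\alpha=10$, $w=1$, $T=2$ (so $\alpha>3T+w$), the sketch returns $t^*=T=2$ and $q^*=1$, giving $Q^*_i=3=\frac13{\left(}\alpha-w{\right)}$ as discussed above.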
The total quantity that is released to the market has a direct impact on the consumers welfare as expressed in terms of the consumers surplus[^8]. For lower values of the demand parameter the consumers benefit from the “low cost” inventories or production capacities of the retailers, since otherwise no goods would have been released to the market. On the other hand, for higher values of $\alpha$ the total quantity that is released to the market does not depend directly on $T$ and thus the benefits from keeping inventory are eliminated for the consumers.
Equilibrium strategies of the first stage {#sub3.2}
-----------------------------------------
The general form of the supplier’s payoff function $u^s{\left(}r\mid \alpha{\right)}$ is given in and his strategy set in . Obviously, the supplier will not be willing to charge prices lower than his cost $c$. Based on the discussion of Proposition \[prop3.3\] above and the constraint $w\ge c$ (i.e. $r\ge 0$), we conclude that a transaction will take place for values of $\alpha>3T+c$ and for $r\in [0, \alpha -3T -c)$. As we shall see, in that case, the optimal profit margin of the supplier is at the midpoint of the $r$-interval.
To see this, let $q^*{\left(}w{\right)}:=q^*_1{\left(}w{\right)}+q^*_2{\left(}w{\right)}$ denote the total quantity that the supplier will receive as an order from the retailers when they respond optimally. By Proposition \[prop3.3\], $q^*(w)=\frac23{\left(}\alpha-3T-w{\right)}^+$. Hence, on the equilibrium path and for $r \ge 0$, the payoff of the supplier is $$\label{pay}u^s{\left(}r\mid \alpha{\right)}=rq^*{\left(}w{\right)}=\frac23r{\left(}\alpha-3T-c-r{\right)}^+$$ We then have
\[prop3.4\] For $T_1=T_2=T$ and for all values of $\alpha$, the subgame perfect equilibrium strategy $r^*{\left(}\alpha{\right)}$ of the supplier is given by $$r^*{\left(}\alpha{\right)}=\frac12{\left(}\alpha-3T-c{\right)}^+$$
See Appendix \[app1\], where Proposition \[prop3.4\] is stated and proved for the general case $T_1 \ge T_2$.
Proposition \[prop3.4\] implies that if $\alpha<3T+c$, then the optimal profit of the supplier is equal to $0$, i.e. he will set a price equal to his cost. Actually, he is indifferent among all prices $w\ge c$, since in that case he knows that the retailers will order no additional quantity.
In sum, Proposition \[prop3.3\] and Proposition \[prop3.4\] provide the subgame perfect equilibrium of the two stage game in the case of identical (i.e. $T_1=T_2=T$) retailers.
\[thm3.5\] If the capacities of the producers (retailers) are identical, i.e. if $T_1=T_2=T$, then the complete information two stage game has a unique subgame perfect Nash equilibrium, under which the supplier sells with profit margin $r^*{\left(}\alpha{\right)}=\frac12{\left(}\alpha-3T-c{\right)}^+$ and each of the producers (retailers) orders quantity $q^*{\left(}w{\right)}= \frac13{\left(}\alpha-3T-w{\right)}^+$ and produces (releases from his inventory) quantity $t^*{\left(}w{\right)}= T-\frac13{\left(}3T-\alpha{\right)}^+$.
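For concreteness, the complete-information equilibrium path of Theorem \[thm3.5\] can be traced end to end numerically; the parameter values below are purely illustrative:

```python
# Subgame perfect equilibrium path under complete information
# (Theorem [thm3.5], identical retailers T_1 = T_2 = T):
#   r* = (alpha - 3T - c)^+ / 2,  w* = c + r*,
#   q* = (alpha - 3T - w*)^+ / 3,  t* = T - (3T - alpha)^+ / 3.

def spe_complete_info(alpha, T, c):
    r = max(alpha - 3 * T - c, 0.0) / 2.0
    w = c + r
    q = max(alpha - 3 * T - w, 0.0) / 3.0
    t = T - max(3 * T - alpha, 0.0) / 3.0
    supplier_profit = r * 2 * q         # both retailers order q* each
    return r, w, q, t, supplier_profit

# Example: alpha = 16, T = 2, c = 2  =>  r* = 4, w* = 6,
# q* = (16 - 6 - 6)/3 = 4/3, t* = 2, supplier profit = 32/3.
r, w, q, t, profit = spe_complete_info(16.0, 2.0, 2.0)
assert (r, w, t) == (4.0, 6.0, 2.0)
assert abs(q - 4.0 / 3.0) < 1e-12 and abs(profit - 32.0 / 3.0) < 1e-12
```

Note that the supplier's profit in the example matches equation \[pay\]: $\frac23 r^*{\left(}\alpha-3T-c-r^*{\right)}=\frac23\cdot4\cdot4=\frac{32}{3}$.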
Subgame-perfect equilibrium under demand uncertainty {#sec4}
====================================================
We investigate the equilibrium behavior of the supply chain assuming that the supplier has no information about the true value of the demand parameter $\alpha$ when he sets his price, while the retailers know it when they place their orders (if any). In the two stage game context, we assume that the demand is realized after the first stage (i.e. after the supplier sets his price) but prior to the second stage of the game (i.e. prior to the decision of the retailers about the quantity that they will release to the market). The case $T_1\ge T_2$ exhibits significant computational difficulties and we will restrict our attention to the symmetric case $T_1=T_2=T$.
Under these assumptions the equilibrium analysis of the second stage (as presented in \[sub3.1\]) remains unaffected: the retailers observe the actual demand parameter $\alpha$ and the price $w$ set by the supplier and choose the quantities $t_i$ and $q_i$ for $i=1,2$. However, in the first stage, the actual payoff of the supplier depends on the unknown parameter $\alpha$ for which he has a belief: the distribution $F$ of $\alpha$, which induces a non-atomic measure on $[0, \infty)$ with finite expectation, as assumed. Given the value of $\alpha$ and taking as granted that the retailers respond to the supplier’s choice $w$ with their unique equilibrium strategies, the supplier’s actual payoff is provided by equation . Taking the expectation with respect to his belief (see also ), we derive the payoff function of the supplier when $\alpha$ is unknown and distributed according to $F$ $$\label{8} {u^s{\left(}r{\right)}}=\frac23r \,{\mathbb{E}}{\left(}\alpha-3T-c-r{\right)}^{+} \;\; \mbox{for $r \ge 0$}.$$ Let $r_H:=\alpha_H-3T-c$ and $r_L:=\alpha_L-3T-c$. If $r_H \leq 0$, then $\alpha \leq 3T+c$ for all $\alpha$; in that case ${u^s{\left(}r{\right)}}\equiv 0$ and the problem is trivial. Hence, in order to proceed we assume that $r_H>0 \Leftrightarrow 3T+c<\alpha_H$. Then, since ${u^s{\left(}r{\right)}}=0$ for $r\ge r_H$, we may restrict the domain of ${u^s{\left(}r{\right)}}$ and take it to be the interval $[0,r_H)$ in . Next, observe that since ${\left(}\alpha-3T-c-r{\right)}^{+}$ is non-negative, ${\mathbb{E}}{\left(}\alpha-3T-c-r{\right)}^{+} = \int_{0}^{\infty}P{\left(}{\left(}\alpha-3T-c-r{\right)}^{+} >y{\right)}dy$ [see e.g. @Bi86] and hence, by a simple change of variable, ${\mathbb{E}}{\left(}\alpha-3T-c-r{\right)}^{+} = \int_{3T+c+r}^{\infty}{\left(}1-F{\left(}x{\right)}{\right)}dx$ which implies that $$\label{9} {u^s{\left(}r{\right)}}=\frac23r\int_{3T+c+r}^{\infty}{\left(}1-F{\left(}x{\right)}{\right)}dx \;\; \mbox{for $0 \le r<r_H$}.$$ We are now able to prove the following lemma.
\[lem diffa\] The supplier’s payoff function ${u^s{\left(}r{\right)}}$ is continuously differentiable on ${\left(}0,r_H{\right)}$ and $$\label{10} {\frac{\mathop{du^s}}{\mathop{dr}}{\left(}r{\right)}}=\frac23\int_{3T+c+r}^{\infty}{\left(}1-F{\left(}x{\right)}{\right)}dx-\frac23r{\left(}1-F{\left(}3T+c+r{\right)}{\right)}.$$
It suffices to show that $\frac{\mathop{d}}{\mathop{dr}}{\mathbb{E}}{\left(}\alpha-3T-c-r{\right)}^{+} = -{\left(}1-F{\left(}3T+c+r{\right)}{\right)}$. Then, equation as well as continuity are implied. So, let $$\label{11} K_{h}{\left(}\alpha{\right)}:=-\frac1h\,\left[{\left(}\alpha-3T-c-r-h{\right)}^+-{\left(}\alpha-3T-c-r{\right)}^+\right]$$ and take $h>0$. Then, $K_h{\left(}\alpha{\right)}= \mathbf{1}_{\{\alpha>3T+c+r+h\}} + \dfrac{\alpha-3T-c-r}{h}\mathbf{1}_{\{3T+c+r< \alpha \le 3T+c+r+h\}}$ and therefore $$\lim_{h\to 0+}K_{h}{\left(}\alpha{\right)}=\mathbf{1}_{\{\alpha>3T+c+r\}}$$ Since $0\le K_h{\left(}\alpha{\right)}\le 1$ for all $\alpha$, the dominated convergence theorem implies that $$\lim_{h\to0+}{\mathbb{E}}{\left(}K_h{\left(}\alpha{\right)}{\right)}= P{\left(}\alpha>3T+c+r{\right)}$$ In a similar fashion, one may show that $\lim_{h\to0-}{\mathbb{E}}{\left(}K_h{\left(}\alpha{\right)}{\right)}= P{\left(}\alpha \geq 3T+c+r{\right)}$. Since the distribution of $\alpha$ is non-atomic, $P{\left(}\alpha>3T+c+r{\right)}= P{\left(}\alpha \geq 3T+c+r{\right)}$ and hence, $\lim_{h\to0}{\mathbb{E}}{\left(}K_h{\left(}\alpha{\right)}{\right)}= 1-F{\left(}3T+c+r{\right)}$. By , $\lim_{h\to0}{\mathbb{E}}{\left(}K_h{\left(}\alpha{\right)}{\right)}= -\frac{\mathop{d}}{\mathop{dr}}{\mathbb{E}}{\left(}\alpha-3T-c-r{\right)}^{+}$, which concludes the proof.
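The derivative formula of Lemma \[lem diffa\] can be validated against a central finite difference for a concrete distribution. The sketch below does this for a uniformly distributed $\alpha$, for which the tail integral $\int_{x_0}^{\infty}{\left(}1-F{\left(}x{\right)}{\right)}dx$ has a simple closed form; all parameter values are illustrative:

```python
# Finite-difference check of the derivative formula [10]:
#   du/dr = (2/3) [ I(3T+c+r) - r (1 - F(3T+c+r)) ],
# where I(x0) = integral of (1 - F(x)) over [x0, infinity),
# for alpha ~ U[a_L, a_H]. Parameter values are illustrative only.

def tail_cdf(x, a_L, a_H):                    # 1 - F(x)
    if x <= a_L: return 1.0
    if x >= a_H: return 0.0
    return (a_H - x) / (a_H - a_L)

def tail_integral(x0, a_L, a_H):              # I(x0), closed form
    if x0 <= a_L:
        return (a_L - x0) + (a_H - a_L) / 2.0
    if x0 >= a_H:
        return 0.0
    return (a_H - x0) ** 2 / (2.0 * (a_H - a_L))

def u_s(r, a_L, a_H, T, c):                   # equation [9]
    return (2.0 / 3.0) * r * tail_integral(3 * T + c + r, a_L, a_H)

def du_s(r, a_L, a_H, T, c):                  # equation [10]
    x0 = 3 * T + c + r
    return (2.0 / 3.0) * (tail_integral(x0, a_L, a_H)
                          - r * tail_cdf(x0, a_L, a_H))

a_L, a_H, T, c, h = 11.0, 30.0, 2.0, 4.0, 1e-6
for r in [1.0, 5.0, 10.0, 15.0]:
    fd = (u_s(r + h, a_L, a_H, T, c) - u_s(r - h, a_L, a_H, T, c)) / (2 * h)
    assert abs(fd - du_s(r, a_L, a_H, T, c)) < 1e-5
```
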
At this point, we need to introduce the Mean Residual Lifetime function (MRL) of $\alpha$. For a more thorough discussion of the definition and properties of ${\mathrm{m}}{\left(}\cdot{\right)}$, one is referred to applied probability books and papers; @Ha81 and @Gu88 are considered classics. It is given by $$\label{imrl} {\mathrm{m}}{\left(}t{\right)}:=\begin{cases}\,{\mathbb{E}}{\left(}\alpha-t \mid \alpha >t{\right)}& \mbox{if $P{\left(}\alpha >t{\right)}>0$} \\
\,0 & \mbox{otherwise} \end{cases} \,
= \, \begin{cases}\,\dfrac{\int_{t}^{\infty}{\left(}1-F{\left(}x{\right)}{\right)}dx}{1-F{\left(}t{\right)}} & \mbox{if $P{\left(}\alpha >t{\right)}>0$} \\
\,0 & \mbox{otherwise} \end{cases}$$ Then, the following holds.
\[lem diff\] For $r \in {\left(}0,r_H{\right)}$, the supplier’s payoff function and its derivative are expressed through the MRL function by ${u^s{\left(}r{\right)}}=\frac23r{\mathrm{m}}{\left(}3T+c+r{\right)}{\left(}1-F{\left(}3T+c+r{\right)}{\right)}$ and $$\label{der}{\frac{\mathop{du^s}}{\mathop{dr}}{\left(}r{\right)}}=\frac23{\left(}{\mathrm{m}}{\left(}3T+c+r{\right)}-r{\right)}{\left(}1-F{\left(}3T+c+r{\right)}{\right)}$$ respectively. In addition, the roots of ${\frac{\mathop{du^s}}{\mathop{dr}}{\left(}r{\right)}}$ in ${\left(}0,r_H{\right)}$ (if any) satisfy the fixed point equation $$\label{14}r^*={\mathrm{m}}{\left(}r^*+3T+c{\right)}$$
The formulas for ${u^s{\left(}r{\right)}}$ and ${\frac{\mathop{du^s}}{\mathop{dr}}{\left(}r{\right)}}$ are immediate using \[9\], \[10\], and \[imrl\]. As an interesting observation, notice that the two terms of ${u^s{\left(}r{\right)}}$, as given in the lemma, may not be separately differentiable. Finally, if $r \in {\left(}0,r_H{\right)}$, then $1-F{\left(}3T+c+r{\right)}$ is positive. Hence, equation implies that the critical points $r^*$ of ${u^s{\left(}r{\right)}}$ (if any) satisfy $r^*={\mathrm{m}}{\left(}r^*+3T+c{\right)}$.
We remark that since ${\mathbb{E}}{\left(}\alpha-3T-c-r{\right)}^{+} = {\mathrm{m}}{\left(}3T+c+r{\right)}{\left(}1-F{\left(}3T+c+r{\right)}{\right)}$, one may be tempted to use the product rule to derive its derivative and hence show that ${u^s{\left(}r{\right)}}$ is differentiable. The problem is that the product rule does not apply, since the two terms in this expression of ${\mathbb{E}}{\left(}\alpha-3T-c-r{\right)}^{+}$ may both be non-differentiable, even if $\alpha$ has a density and its support is connected (e.g. consider the point $r_L$ in case $r_L>0$).
Due to Lemma \[lem diff\], if a non-zero optimal response of the supplier exists at equilibrium, it will be a critical point of ${u^s{\left(}r{\right)}}$, i.e. it will satisfy . It is easy to see that such a response always exists when the support of $\alpha$ is bounded, i.e. when $\alpha_H < \infty$. However, this is not the case when $\alpha_H = \infty$. So, let us determine conditions under which such a critical point exists, is unique and corresponds to a global maximum of the supplier’s payoff function. To this end, we study the first term of , namely $g(r):={\mathrm{m}}{\left(}r+3T+c{\right)}- r$.
Clearly, $g{\left(}r{\right)}$ is continuous on ${\left(}0,r_H{\right)}$. We first show that $\lim_{r \to 0+} g(r) > 0$ by considering cases. If $r_L > 0$, then for $0 < r < r_L$, ${\mathrm{m}}{\left(}r+3T+c{\right)}= {\mathbb{E}}(\alpha) -r -3T -c$. Hence, $\lim_{r \to 0+} g(r) = {\mathbb{E}}(\alpha) -3T -c> r_L >0$. If $r_L \leq 0$, then we use Proposition 1f of @Ha81, according to which ${\mathrm{m}}{\left(}t{\right)}\ge {\left(}{\mathbb{E}}{\left(}\alpha{\right)}-t{\right)}^+$ with equality if and only if $F{\left(}t{\right)}=0$ or $F{\left(}t{\right)}=1$. So, take first $r_L < 0$, which implies $\alpha_L < 3T+c < \alpha_H$. Hence, $0< F(3T+c) < 1$ and (by Proposition 1f) ${\mathrm{m}}{\left(}3T+c{\right)}> {\mathbb{E}}{\left(}\alpha -3T-c{\right)}^+ \geq 0$. Hence, $\lim_{r \to 0+} g{\left(}r{\right)}> 0$ if $r_L < 0$. Finally, if $r_L = 0$, then ${\mathrm{m}}{\left(}3T+c{\right)}={\mathrm{m}}{\left(}\alpha_L{\right)}={\mathbb{E}}{\left(}\alpha{\right)}-\alpha_L >0$, which again implies that $\lim_{r \to 0+} g{\left(}r{\right)}> 0$.
We then examine the behavior of $g{\left(}r{\right)}$ near $r_H$. If $\alpha_H<+\infty$, then $\lim_{r \to r_H-} g{\left(}r{\right)}= -r_H < 0$ and by the intermediate value theorem an $r^* \in {\left(}0,r_H{\right)}$ exists such that $g{\left(}r^*{\right)}=0$. For $\alpha_H<+\infty$, we also notice that to get uniqueness of the critical point $r^*$, it suffices to assume that the Mean Residual Lifetime (MRL) of the distribution of $\alpha$ is decreasing[^9], in short that $F$ has the DMRL property. On the other hand, if $\alpha_H=+\infty$, then the limiting behavior of ${\mathrm{m}}{\left(}r{\right)}$ as $r$ increases to infinity may vary, see @Br03, and an optimal solution may not exist, see below. But, if we assume as before that ${\mathrm{m}}{\left(}r{\right)}$ is decreasing, then $g{\left(}r{\right)}$ will eventually become negative and stay negative as $r$ increases and hence, existence along with uniqueness of an $r^*$ such that $g{\left(}r^*{\right)}=0$ is again established.
Now, $1-F{\left(}3T+c{\right)}>0$ since $3T+c < \alpha_H$ and hence $\lim_{r \to 0+} {\frac{\mathop{du^s}}{\mathop{dr}}{\left(}r{\right)}}> 0$, i.e. ${u^s{\left(}r{\right)}}$ starts increasing on ${\left(}0, r_H{\right)}$. Assuming that $F$ has the DMRL property, the first term of is negative in a neighborhood of $r_H$ while the second term goes to 0 from positive values. Hence, ${\frac{\mathop{du^s}}{\mathop{dr}}{\left(}r{\right)}}< 0$ in a neighborhood of $r_H$, i.e ${u^s{\left(}r{\right)}}$ is decreasing as $r$ approaches $r_H$. Clearly, for $\epsilon$ sufficiently small, ${u^s{\left(}r{\right)}}$ will take a maximum in the interior of the interval $[\epsilon, r_H -\epsilon]$ if $r_H < \infty$ or a maximum in the interior of the interval $[\epsilon, \infty)$ if $r_H = \infty$. Since ${u^s{\left(}r{\right)}}$ is differentiable, the maximum will be attained at a critical point of ${u^s{\left(}r{\right)}}$, i.e. at the unique $r^*$ given implicitly by .
actually characterizes $r^*$ as the fixed point of a translation of the MRL function ${\mathrm{m}}{\left(}\cdot{\right)}$, namely of ${\mathrm{m}}{\left(}\cdot +3T+c{\right)}$. Its evaluation sometimes has to be numeric, but in one interesting case it may be evaluated explicitly: If $r_L >0$, then $\alpha -3T-c > 0$ for all $\alpha$, which implies that ${\mathbb{E}}(\alpha) -3T-c > 0$. Then, if $$\label{cond} \frac12{\left(}{\mathbb{E}}{\left(}\alpha{\right)}-3T-c{\right)}\leq r_L \ ,$$ we get $r^* = \frac12{\left(}{\mathbb{E}}(\alpha) -3T-c{\right)}$. To see this, notice that is equivalent to ${\mathrm{m}}{\left(}\alpha_L{\right)}\le r_L$, i.e. to $${\mathrm{m}}{\left(}r_L+3T+c{\right)}\le r_L$$ Then, by the DMRL property ${\mathrm{m}}{\left(}r+3T+c{\right)}<r$ for all $r>r_L$. This implies that $r^*\le r_L$ or equivalently that $r^*+3T+c \le \alpha_L$. In this case ${\mathrm{m}}{\left(}r^*+3T+c{\right)}={\mathbb{E}}{\left(}\alpha{\right)}-{\left(}r^*+3T+c{\right)}$ and hence $r^*$ will be given explicitly by $$r^*=\frac12{\left(}{\mathbb{E}}{\left(}\alpha{\right)}-3T-c{\right)}\, .$$ Intuitively, this special case occurs under the conditions that (a) the lower bound of the demand $\alpha_L$ exceeds the particular threshold $3T+c$, (i.e. $\alpha_L > 3T+c$ or $r_L > 0$), and (b) the expected excess of $\alpha$ over its lower bound $\alpha_L$ is at most equal to the excess of $\alpha_L$ over $3T+c$ (i.e. ${\mathbb{E}}{\left(}\alpha{\right)}- \alpha_L \leq \alpha_L -3T-c = r_L$). Of course, since ${\mathbb{E}}{\left(}\alpha{\right)}- \alpha_L >0$, condition (b) suffices. In that case, compare with the optimal $r^*$ of the complete information case (Proposition \[prop3.4\]).
Finally, if $r_L >0$ and $\frac12{\left(}{\mathbb{E}}{\left(}\alpha{\right)}-3T-c{\right)}> r_L$ (i.e, if ${\mathbb{E}}{\left(}\alpha{\right)}> r_L +\alpha_L$) we get that $r^* > r_L$, for if $r^* \leq r_L$, then ${\mathrm{m}}{\left(}r^*+3T+c{\right)}\ge {\mathrm{m}}{\left(}\alpha_L{\right)}$, i.e. $r^* \ge {\mathbb{E}}{\left(}\alpha{\right)}- \alpha_L$. The latter implies that then $r_L \ge {\mathbb{E}}{\left(}\alpha{\right)}- \alpha_L$ which contradicts the assumption.
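The fixed point equation and the explicit solution under condition \[cond\] can be illustrated with a uniformly distributed demand parameter $\alpha\sim U[\alpha_L,\alpha_H]$, whose MRL is decreasing. The bisection sketch below (illustrative parameter values only) recovers both regimes; for the interior regime, solving $r=\frac12{\left(}\alpha_H-3T-c-r{\right)}$ gives $r^*=r_H/3$:

```python
# Fixed point r* = m(r* + 3T + c) of equation [14] for a uniform
# demand parameter alpha ~ U[a_L, a_H], whose MRL is
#   m(t) = (a_L + a_H)/2 - t  for t <  a_L   (= E(alpha) - t)
#   m(t) = (a_H - t)/2        for a_L <= t <= a_H,
# which is decreasing (DMRL). Solved by bisection; values illustrative.

def mrl_uniform(t, a_L, a_H):
    if t < a_L:
        return (a_L + a_H) / 2.0 - t
    if t <= a_H:
        return (a_H - t) / 2.0
    return 0.0

def r_star(a_L, a_H, T, c, tol=1e-10):
    g = lambda r: mrl_uniform(r + 3 * T + c, a_L, a_H) - r
    lo, hi = 0.0, a_H - 3 * T - c       # (0, r_H); g(lo) > 0 > g(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2.0

# Regime 1: condition [cond] holds (r_H <= 3 r_L), explicit solution
# r* = (E(alpha) - 3T - c)/2. Here a_L = 20, a_H = 30, T = 2, c = 4:
# r_L = 10, r_H = 20 <= 30 = 3 r_L, so r* = (25 - 10)/2 = 7.5.
assert abs(r_star(20.0, 30.0, 2.0, 4.0) - 7.5) < 1e-6

# Regime 2: condition fails; the fixed point is interior.
# Here a_L = 11, a_H = 30 gives r_L = 1, r_H = 20 and r* = r_H/3 = 20/3.
assert abs(r_star(11.0, 30.0, 2.0, 4.0) - 20.0 / 3.0) < 1e-6
```

The bisection is valid because $g{\left(}r{\right)}={\mathrm{m}}{\left(}r+3T+c{\right)}-r$ is strictly decreasing here, positive near $0$ and negative near $r_H$, exactly as argued above.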
To sum up, we have obtained the following.
\[thm4.1\] Under incomplete information with identical producers/retailers (i.e. for $T_1=T_2=T$) for the non-trivial case $r_H > 0$ and assuming the supplier’s belief induces a non-atomic measure on the demand parameter space:
1. [**(necessary condition)**]{} If the optimal profit margin $r^*$ of the supplier exists when the producers/retailers follow their equilibrium strategies in the second stage, then it satisfies the fixed point equation $r^*={\mathrm{m}}{\left(}r^*+3T+c{\right)}$.
2. [**(sufficient condition)**]{} If the mean residual lifetime ${\mathrm{m}}{\left(}\cdot{\right)}$ of the demand parameter $\alpha$ is decreasing, then the optimal profit margin $r^*$ of the supplier exists under equilibrium and it is the unique solution of the equation $r^*={\mathrm{m}}{\left(}r^*+3T+c{\right)}$. In that case, if ${\mathbb{E}}{\left(}\alpha{\right)}- \alpha_L \leq \alpha_L -3T-c \ {\left(}=r_L{\right)}$, then $r^*$ is given explicitly by $r^*=\frac12{\left(}{\mathbb{E}}{\left(}\alpha{\right)}-3T-c{\right)}$. Moreover, if ${\mathbb{E}}(\alpha) - \alpha_L > \alpha_L -3T-c \ {\left(}=r_L{\right)}$, then $r^* \in {\left(}r_L^+, r_H{\right)}$.
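Under the DMRL assumption the fixed point can be located by simple bisection on the strictly decreasing map $g(r)={\mathrm{m}}{\left(}r+3T+c{\right)}-r$. The following is a minimal numerical sketch (not part of the paper's analysis; the uniform distribution is chosen purely for illustration):

```python
# Minimal numerical sketch (illustration only): bisection on the strictly
# decreasing g(r) = m(r + 3T + c) - r, with alpha ~ U[a_L, a_H] as a
# convenient DMRL example.

def mrl_uniform(t, a_L, a_H):
    """Mean residual lifetime m(t) = E(alpha - t | alpha > t) for U[a_L, a_H]."""
    mean = (a_L + a_H) / 2.0
    if t <= a_L:
        return mean - t
    if t < a_H:
        return (a_H - t) / 2.0
    return 0.0

def optimal_margin(a_L, a_H, T, c, tol=1e-10):
    lo, hi = 0.0, a_H - 3*T - c          # the non-trivial range (0, r_H)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mrl_uniform(mid + 3*T + c, a_L, a_H) > mid:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# U[1,2]: E(alpha) - a_L <= r_L, so the explicit case gives r* = E(alpha)/2;
# U[0,1]: r* solves r = (1 - r)/2, i.e. r* = 1/3.
print(round(optimal_margin(1.0, 2.0, 0.0, 0.0), 6),
      round(optimal_margin(0.0, 1.0, 0.0, 0.0), 6))  # 0.75 0.333333
```

The two printed values agree with the closed forms derived in the uniform example of Section 6.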
Proposition \[prop3.3\] and lead to
\[cor4.2\] If the capacities of the producers (retailers) are identical, i.e. if $T_1=T_2=T$, and if the distribution $F$ of the demand intercept $\alpha$ is of decreasing mean residual lifetime ${\mathrm{m}}{\left(}\cdot{\right)}$, then the incomplete information two stage game has a unique subgame perfect Bayesian Nash equilibrium for the non-trivial case $r_H > 0$. At equilibrium, the supplier sells with profit margin $r^*$, which is the unique solution of the fixed point equation $r^*={\mathrm{m}}{\left(}r^*+3T+c{\right)}$ and each of the producers (retailers) orders quantity $q^*{\left(}w{\right)}= \frac13{\left(}\alpha-3T-w{\right)}^+$ and produces (releases from his inventory) quantity $t^*{\left(}w{\right)}= T-\frac13{\left(}3T-\alpha{\right)}^+$.
enables one to show the intuitive result that under the DMRL assumption, if everything else stays the same but the inventories of the retailers rise (while staying below $(\alpha_H -c)/3$ because of the non-triviality assumption), the supplier will strictly decrease or keep constant the price he charges at equilibrium, because his profit margin $r^*$ will be non-increasing. The reason is that as $T$ increases, the graph of ${\mathrm{m}}{\left(}\cdot, T, c{\right)}:={\mathrm{m}}{\left(}\cdot +3T+c{\right)}$ shifts to the left and therefore its intercept with the line bisecting the first quadrant decreases[^10]. By , this intercept is $r^*$ and hence the price $w^*=c+r^*$ the supplier asks at equilibrium will be decreasing. By the same argument, if we take the supplier’s cost $c$ to be increasing (but staying below $\alpha_H -3T$) while everything else stays fixed, then the supplier’s profit margin $r^*$ will again be decreasing. However, this time the price $w^*=c+r^*$ the supplier asks at equilibrium will be increasing. To see this, let $c_1 < c_2$. Then, since ${\mathrm{m}}{\left(}\cdot{\right)}$ is decreasing, $r_2^* \leq r_1^*$. If $r_2^* = r_1^*$, then $w_1^* = r_1^* + c_1 < r_2^* + c_2 = w_2^*$. If $r_2^* < r_1^*$, by , ${\mathrm{m}}(r_2^* + 3T + c_2) < {\mathrm{m}}(r_1^* + 3T + c_1)$. The DMRL property then implies that $r_1^* + c_1 \leq r_2^* + c_2$, i.e. $w_2^* \geq w_1^*$. So, we have proved that
\[cor4.3\] Under the DMRL property:
If everything else stays the same and the inventory quantity of the retailers $T$ increases in the interval $\left[0, {\left(}\alpha_H -c{\right)}/3{\right)}$, then at equilibrium, the supplier’s profit margin $r^*$, and hence the price $w^*=c+r^*$ he asks, both decrease.
If everything else stays the same and the cost of the supplier $c$ increases in the interval $[0, \alpha_H - 3T)$, then at equilibrium, the supplier’s profit margin $r^*$ decreases while the price $w^*$ he asks increases.
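Both comparative statics are easy to observe numerically. A hedged sketch (the demand distribution $U[0,4]$ is an assumption made purely for illustration, not taken from the paper):

```python
# Illustrative check (assumption: alpha ~ U[0, 4]) of Corollary 4.3:
# r* falls as T or c rises, while w* = c + r* rises in c.

def mrl_uniform(t, a_L=0.0, a_H=4.0):
    mean = (a_L + a_H) / 2.0
    if t <= a_L:
        return mean - t
    return max((a_H - t) / 2.0, 0.0)

def r_star(T, c, tol=1e-10):
    lo, hi = 0.0, 4.0 - 3*T - c              # search over (0, r_H)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mrl_uniform(mid + 3*T + c) > mid else (lo, mid)
    return (lo + hi) / 2

margins_T = [r_star(T, 0.0) for T in (0.0, 0.2, 0.4)]     # decreasing in T
prices_c = [c + r_star(0.0, c) for c in (0.0, 0.5, 1.0)]  # increasing in c
print([round(x, 4) for x in margins_T],
      [round(x, 4) for x in prices_c])
```

For this distribution the fixed point is available in closed form, $r^*=(4-3T-c)/3$, so the monotonicity in $T$ and $c$ can be read off directly.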
We close this section with some remarks and observations.
- By the DMRL property, implies that $r^*\le {\mathbb{E}}{\left(}\alpha{\right)}$, since $$\label{16}r^*={\mathrm{m}}{\left(}r^*+3T+c{\right)}\le {\mathrm{m}}{\left(}u{\right)}\le {\mathrm{m}}{\left(}0{\right)}={\mathbb{E}}{\left(}\alpha{\right)}\;\; \mbox{for all $u$ such that \; $0\le u \le r^*+3T+c$} \; .$$
- A sufficient condition for the mean residual lifetime to be decreasing is that $F$ is absolutely continuous (i.e. has a density) and is of increasing failure rate (IFR). The fact that the IFR assumption on $F$ yields the desired properties for the critical point $r^*$ can also be seen by taking the second derivative of ${u^s{\left(}r{\right)}}$, which one may check is given by $$\label{17}{\frac{\mathop{d^2u^s}}{\mathop{dr^2}}{\left(}r{\right)}}=\frac23{\left(}r{\mathrm{h}}{\left(}r+3T+c{\right)}-2{\right)}{\left(}1-F{\left(}r+3T+c{\right)}{\right)}\, ,$$ and studying the behavior of ${\frac{\mathop{du^s}}{\mathop{dr}}{\left(}r{\right)}}$ through ${\frac{\mathop{d^2u^s}}{\mathop{dr^2}}{\left(}r{\right)}}$. If the IFR property applies, then finiteness of every moment of $F$, in particular of the expectation ${\mathbb{E}}{\left(}\alpha{\right)}$, is implied, so that this does not have to be explicitly assumed (cf. assumption 5 in ).
- If $\alpha$ has a density, then by the explicit form of the second derivative, the weaker assumption that $r{\mathrm{h}}{\left(}r+c+3T{\right)}$ is increasing would suffice under the additional assumption that the term $r{\mathrm{h}}{\left(}r+c+3T{\right)}$ exceeds $2$. This is always the case if $\alpha_H<+\infty$, since then $\lim_{r\to r_H-}{\mathrm{h}}{\left(}r{\right)}=+\infty$, but has to be assumed if $\alpha_H=+\infty$. @La01 and @La06 introduce and examine a similar condition, the *increasing generalized failure rate (IGFR)*. Specifically, $F$ has the IGFR property if $r{\mathrm{h}}{\left(}r{\right)}$ is increasing. However, the assumption that $r{\mathrm{h}}{\left(}r{\right)}$ is increasing does not imply in general that $r{\mathrm{h}}{\left(}r+k{\right)}$, for some constant $k \in \mathbb R$, is also increasing and therefore, assuming that $F$ is of IGFR would not suffice in our model to ensure existence of a solution in the unbounded support case.
- Finally, we note that a necessary and sufficient condition for $F$ to be of DMRL is that the integral $$H{\left(}t{\right)}:=\int_{t}^{\alpha_H}{\left(}1-F{\left(}\alpha{\right)}{\right)}\mathop{d\alpha}$$ is log-concave, see @Ba05.
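The log-concavity criterion lends itself to a quick numerical check. The sketch below (an illustration, not from the paper) contrasts the exponential distribution, which is DMRL, with the Pareto distribution of Section 6, which is not:

```python
import math

# Numerical check of the criterion: H(t) = int_t^{a_H} (1 - F(u)) du is
# log-concave iff F is DMRL.  For exp(1), H(t) = e^{-t}, so log H(t) = -t
# (linear, hence concave).  For Pareto with a_L = 1, k = 3, H(t) = t^{-2}/2
# on [1, inf), so log H(t) = -2 log t - log 2, which is convex there,
# consistent with the Pareto distribution not being DMRL.

def second_diff(f, t, h=1e-3):
    """Central second difference, approximating f''(t)."""
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

log_H_exponential = lambda t: -t
log_H_pareto = lambda t: -2.0 * math.log(t) - math.log(2.0)

print(abs(second_diff(log_H_exponential, 2.0)) < 1e-6,   # concave (linear)
      second_diff(log_H_pareto, 2.0) > 0)                # convex
```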
Classic Cournot with no production capacities and a single supplier {#sub4.1}
-------------------------------------------------------------------
If $T=0$, then our model corresponds to a classic Cournot duopoly in which the retailers’ cost equals the price that is set by a single profit-maximizing supplier. In this case, we may normalize $c$ to $0$ (the non-negativity assumption on $\alpha$ now implies that $\alpha_L>c$) and the first derivative of ${u^s{\left(}r{\right)}}$, see , simplifies to $${\frac{\mathop{du^s}}{\mathop{dr}}{\left(}r{\right)}}=\frac23{\left(}{\mathrm{m}}{\left(}r{\right)}-r{\right)}{\left(}1-F{\left(}r{\right)}{\right)}\, .$$ Thus, cf. , the unique (under the DMRL assumption) critical point of ${u^s{\left(}r{\right)}}$ now satisfies $r^*={\mathrm{m}}{\left(}r^*{\right)}$, i.e. the optimal profit margin $r^*$ of the supplier is the unique fixed point of the mean residual lifetime function. So we have
\[cor4.4\] [*\[A Cournot supply chain with demand uncertainty for the supplier\]*]{} Consider a market with a linear inverse demand function for a single product distributed by two retailers without inventories, who order the quantities they will release to the market from a single supplier, who sells at a common price. Assume that the supplier does not know the actual value of the demand intercept $\alpha$ but has a belief about it, expressed by a probability distribution function $F$. Then, under the assumption that $F$ is of decreasing mean residual lifetime ${\mathrm{m}}{\left(}\cdot{\right)}$ and that the supplier’s cost is below all values of $\alpha$, the supply chain formed by the supplier and the retailers has a unique Bayesian Nash equilibrium. At equilibrium and after normalizing the supplier’s cost to 0, the supplier sells at a price $w^*=r^*$, which is the unique fixed point of ${\mathrm{m}}{\left(}\cdot{\right)}$, and each of the producers (retailers) orders quantity $q^*{\left(}w{\right)}= \frac13{\left(}\alpha-w{\right)}^+$.
In case $T=0$, the second derivative of the supplier’s payoff, see , is simplified to $${\frac{\mathop{d^2u^s}}{\mathop{dr^2}}{\left(}r{\right)}}=\frac23{\left(}r{\mathrm{h}}{\left(}r{\right)}-2{\right)}{\left(}1-F{\left(}r{\right)}{\right)}$$ so that now the assumption that $F$ has the IGFR property, i.e. that $r{\mathrm{h}}{\left(}r{\right)}$ is increasing, is sufficient to ensure existence of a unique solution, if one additionally assumes that the term $r{\mathrm{h}}{\left(}r{\right)}$ exceeds $2$. Note that in that case, $r^*$ will still be a fixed point of ${\mathrm{m}}{\left(}r{\right)}$, as it will have to be a root of ${\frac{\mathop{du^s}}{\mathop{dr}}{\left(}r{\right)}}$. Example 4 of Section 6 deals with such a case, where ${\mathrm{m}}(r)$ is not a decreasing function.
Inefficiency of equilibrium in the incomplete information case {#sec5}
==============================================================
It is well known that markets with incomplete information may be inefficient in equilibrium, in the sense that trades that would be beneficial for all players may not occur under Bayesian Nash equilibrium, e.g. see @My83. In this section, we discuss the inefficiency exhibited by the Bayesian Nash equilibrium we derived for the incomplete information case of our chain. By inefficiency we mean that, under equilibrium, there exist values of $\alpha$ for which a transaction would have occurred in the complete information case but will not occur in the incomplete information case. Inefficiency [*should not*]{} be interpreted as a comparison between the actual payments of market participants in the complete and in the incomplete information case for various realizations of $\alpha$.
So, for the incomplete information case and for a particular distribution $F$ of $\alpha$, let $U$ be the event that a transaction would have occurred under equilibrium, if we had been in the complete information case, and let $V$ be the event that a transaction does not occur in the incomplete information case under equilibrium. Our aim is to measure the inefficiency of our market by studying $P{\left(}V \cap U{\right)}$ and $P{\left(}V \mid U{\right)}$.
By Proposition \[prop3.3\], a transaction will take place under equilibrium if and only if $$\label{ntr} \alpha > r^* +3T +c$$ where $r^*$ stands for the profit margin of the supplier in the incomplete information case under equilibrium. Using (\[ntr\]) and Proposition \[prop3.4\], we conclude that a necessary and sufficient condition for a transaction to take place under equilibrium in the complete information case is $\alpha > 3T +c$. Hence, using $S$ for the support of $F$, we get $U = \left\{\alpha\mid \alpha \in S \mbox{ and } \alpha > 3T +c\right\}$ and $V = \left\{\alpha\mid \alpha \in S \; \mbox{and} \; \alpha \leq r^* +3T +c\right\}$. So, $V \cap U = \{\alpha \mid \alpha \in S \; \mbox{and} \; 3T +c < \alpha \le r^* +3T +c\}$, and therefore $P{\left(}V \cap U{\right)}= F{\left(}r^* +3T +c{\right)}- F{\left(}3T +c{\right)}$ and $$P{\left(}V \mid U{\right)}= \frac{F{\left(}r^* +3T +c{\right)}- F{\left(}3T +c{\right)}} {1-F{\left(}3T +c{\right)}}$$
In order to clarify matters and to check that the MRL function ${\mathrm{m}}(\cdot)$ is non-zero at certain points of later interest, and also because it is needed for the discussion at the end of this section, let us examine the cases that may occur. Using and the steps at the end of its proof, it is straightforward to check that if ${\mathbb{E}}{\left(}\alpha{\right)}- \alpha_L \leq r_L$, then is satisfied for all $\alpha > \alpha_L$ since the ordering is $3T +c < r^* +3T +c \leq \alpha_L$. Hence $V \cap U=\emptyset$. So let us assume that ${\mathbb{E}}{\left(}\alpha{\right)}- \alpha_L > r_L$. Then, by , the supplier will sell at $r^*\in {\left(}r_L^+, r_H{\right)}$ in equilibrium. So, there are two cases, either (a) $r_L > 0$ or (b) $r_L \leq 0$. In case (a), we get $3T + c < \alpha_L < r^* +3T +c <\alpha_H$, hence $V \cap U = \left[\alpha_L, r^* +3T +c\right]\cap S$. In case (b), $\alpha_L \leq 3T + c < r^* +3T +c <\alpha_H$, hence $V \cap U = {\left(}3T +c, r^* +3T +c\right]\cap S$ (notice that for $\alpha \leq 3T +c$ no transaction would have taken place under complete information either). So, we have proved that
\[lem5.1\] For a given distribution $F$ of $\alpha$ with the DMRL property, let $V$ be the event that a transaction does not occur in the incomplete information case under equilibrium and let $U$ be the event that a transaction would have occurred under equilibrium if we had been in the complete information case. Then, $P{\left(}V \cap U{\right)}= F{\left(}r^* +3T +c{\right)}- F{\left(}3T +c{\right)}$ and $P{\left(}V \mid U{\right)}= \frac{F{\left(}r^* +3T +c{\right)}- F{\left(}3T +c{\right)}}{1-F{\left(}3T +c{\right)}}$ . In particular, if $ {\mathbb{E}}(\alpha) - \alpha_L \leq \alpha_L -3T-c \ {\left(}=r_L{\right)}$, then $P{\left(}V \cap U{\right)}= P{\left(}V \mid U{\right)}= 0$.
Our next aim is to obtain a robust upper bound for $P{\left(}V \mid U{\right)}$, i.e. a bound independent of the distribution $F$. First, we express the distribution function $F$ in terms of the MRL function, e.g. see [@Gu88], to get $$\label{inv} 1-F{\left(}t{\right)}=\frac{{\mathbb{E}}{\left(}\alpha{\right)}}{{\mathrm{m}}{\left(}t{\right)}}\exp{\left\{-\int_{0}^{t}\frac{1}{{\mathrm{m}}{\left(}u{\right)}}\mathop{du}\right\}} \; \; \mbox{for}\; \, 0\le t < \alpha_H$$ We will use the DMRL property, cf. , to derive an upper bound for $P{\left(}V \mid U{\right)}$. To this end, we first use for $t=r^*+3T+c$, then for $t=3T+c$, and then, dividing the two equations (division by 0 is no danger, see the discussion preceding Lemma \[lem5.1\]), we get $$1-F{\left(}r^*+3T+c{\right)}={\left(}1-F{\left(}3T+c{\right)}{\right)}\frac{{\mathrm{m}}{\left(}3T+c{\right)}}{{\mathrm{m}}{\left(}r^*+3T+c{\right)}}\exp{\left\{-\int_{3T+c}^{r^*+3T+c}\frac{1}{{\mathrm{m}}{\left(}u{\right)}}du\right\}} \, .$$ So, $$\begin{aligned}
P{\left(}V \cap U{\right)}&=1-F{\left(}3T+c{\right)}-{\left(}1-F{\left(}r^*+3T+c{\right)}{\right)}\\&={\left(}1-F{\left(}3T+c{\right)}{\right)}{\left(}1-\frac{{\mathrm{m}}{\left(}3T+c{\right)}}{{\mathrm{m}}{\left(}r^*+3T+c{\right)}}\exp{\left\{-\int_{3T+c}^{r^*+3T+c}\frac{1}{{\mathrm{m}}{\left(}u{\right)}}du\right\}}{\right)}\, ,\end{aligned}$$ which shows that $$\label{20} P{\left(}V \mid U{\right)}= 1-\frac{{\mathrm{m}}{\left(}3T+c{\right)}}{{\mathrm{m}}{\left(}r^*+3T+c{\right)}}\exp{\left\{-\int_{3T+c}^{r^*+3T+c}\frac{1}{{\mathrm{m}}{\left(}u{\right)}}du\right\}} \,.$$ By inequality for $3T +c \leq u \leq r^* +3T +c$ and the monotonicity of the exponential function $$\exp{\left\{-\frac{1}{{\mathrm{m}}{\left(}r^*+3T+c{\right)}}\int_{3T+c}^{r^*+3T+c}du\right\}}\le \exp{\left\{-\int_{3T+c}^{r^*+3T+c}\frac{1}{{\mathrm{m}}{\left(}u{\right)}}du\right\}} \, .$$ Using the fact that $r^*={\mathrm{m}}{\left(}r^*+3T+c{\right)}$, the last inequality becomes $$\exp{\left\{-1\right\}}\le \exp{\left\{-\int_{3T+c}^{r^*+3T+c}\frac{1}{{\mathrm{m}}{\left(}u{\right)}}du\right\}} \, .$$ Substituting in we derive the following upper bound for $P{\left(}V \mid U{\right)}$ $$P{\left(}V \mid U{\right)}\le 1-\frac{{\mathrm{m}}{\left(}3T+c{\right)}}{{\mathrm{m}}{\left(}r^*+3T+c{\right)}} e^{-1} \, .$$ Since ${\mathrm{m}}{\left(}r^*+3T+c{\right)}\le {\mathrm{m}}{\left(}3T+c{\right)}$ by the DMRL property, the upper bound may be relaxed to $$\label{21} P{\left(}V \mid U{\right)}\le 1-e^{-1} \, .$$ This upper bound is robust in the sense that it is independent of the particular distribution $F$ over the family of DMRL distributions of $\alpha$. Surprisingly enough, it is also tight over all DMRL distributions of $\alpha$, since it is attained by the exponential distribution, which thus is the least favorable as far as efficiency at equilibrium is concerned. It is also asymptotically attained by a parametric family of Beta distributions, see and below. So, we have proved that
\[thm5.2\] For any distribution $F$ of $\alpha$ with the DMRL property, the conditional probability $P{\left(}V \mid U{\right)}$ that a transaction does not occur under equilibrium in the incomplete information case, given that a transaction would have occurred under equilibrium if we had been in the complete information case, can not exceed the bound $1-e^{-1}$. This bound is tight over all DMRL distributions, as it is attained by the exponential distribution.
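That the exponential distribution attains the bound for any $T$ and $c$ can be confirmed directly from the formula of Lemma \[lem5.1\]. A short numerical sketch (illustration only, not the authors' code):

```python
import math

# Sketch: Lemma 5.1's formula for P(V|U) evaluated for exponential demand
# (rate lam), where r* = 1/lam is the fixed point of the constant MRL.
# The result equals the bound 1 - 1/e of Theorem 5.2 for any T, c >= 0.

def p_no_trade_given_trade(lam, T, c):
    F = lambda t: 1.0 - math.exp(-lam * t) if t > 0 else 0.0
    r_star = 1.0 / lam
    return (F(r_star + 3*T + c) - F(3*T + c)) / (1.0 - F(3*T + c))

for lam, T, c in [(1.0, 0.0, 0.0), (2.5, 0.3, 0.7)]:
    print(round(p_no_trade_given_trade(lam, T, c), 6))   # 0.632121 each time
```

The independence from $T$ and $c$ reflects the memoryless property used in Example \[ex2\].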
To gain more intuition about the reasons that make the supplier charge a price that runs a (sometimes) considerable risk of no transaction, although such a transaction would be beneficial for all market participants, recall the discussion preceding Lemma \[lem5.1\]. To have $P{\left(}V \mid U{\right)}= 0$, the expectation restriction ${\mathbb{E}}{\left(}\alpha{\right)}\leq \alpha_L + r_L$ must apply. So, assume this restriction applies and, keeping everything else the same (i.e. $\alpha_L$, $T$ and $c$), start moving probability mass to the right. Then, ${\mathbb{E}}(\alpha)$ will be increasing, which results in the supplier’s optimal profit margin, $r^* = \frac{1}{2} {\left(}{\mathbb{E}}(\alpha) -3T -c{\right)}$, increasing. So, there is a threshold for ${\mathbb{E}}(\alpha)$, namely $\alpha_L + r_L$ (where the supplier charges $r^* = r_L$), above which the supplier is willing to charge a price so high (i.e. above $r_L$) that he will run the risk of receiving no orders. Next, assume the expectation restriction applies and, keeping $T$ and $c$ the same, start moving $\alpha_L$ to the left. Now, ${\mathbb{E}}(\alpha)$ will be decreasing, which results in the supplier’s optimal profit margin, $r^*$, decreasing. However, since $r_L$ will eventually become non-positive while ${\mathbb{E}}(\alpha) - \alpha_L$ always stays positive, it is easy to see that the same threshold applies for the expectation condition, i.e. eventually we will get ${\mathbb{E}}(\alpha) = \alpha_L + r_L$, the supplier will charge $r^* = r_L$ there, and for lower values of $\alpha_L$ inefficiencies will appear. In other words, for values of $\alpha_L$ below the threshold, although he is reducing his profit margin, the supplier is asking for a [*relatively*]{} high price (i.e. above $r_L$) and is willing to take the risk of no transaction.
Finally, using similar arguments we can see that if we either increase the inventory level $T$ of the retailers or the cost $c$ of the supplier, all else being kept the same (see also Corollary \[cor4.3\]), the same threshold applies for the appearance of inefficiencies.
Examples {#sec6}
========
To illustrate the results of Sections 4 and 5, we present some examples, in which, whenever $T = 0$, we have normalized $c$ to 0. A list of probability distributions with the DMRL property can be found, among others, in @Ba05 [Tables 1 and 3].
*Uniform distribution.*\
Let $\alpha \sim U \left[ \alpha_L, \alpha_H \right]$ with $0\le \alpha_L< \alpha_H$ and $T=0$. Since $${\mathrm{m}}{\left(}t{\right)}=\begin{cases}{\mathbb{E}}{\left(}\alpha{\right)}-t & \mbox{if $-\infty < t \le \alpha_L$} \\\dfrac{\alpha_H-t}{2} & \mbox{if $\alpha_L < t\le \alpha_H$},\end{cases}$$ $F$ has the DMRL property. Since, ${\mathbb{E}}{\left(}\alpha{\right)}=\frac{\alpha_L+\alpha_H}{2}$, by the optimal strategy of the supplier is given by $$r^*= \begin{cases}\dfrac12{\mathbb{E}}{\left(}\alpha{\right)}& \mbox{if $\alpha_H \le 3\alpha_L$} \\[0.2cm] \dfrac13\alpha_H & \mbox{if $\alpha_H > 3\alpha_L$}.\end{cases}$$ Consequently, if $\alpha \sim U{\left(}1,2{\right)}$ then $r^*=\frac34$, while if $\alpha \sim U{\left(}0,1{\right)}$ then $r^*=\frac13$. In the first case a transaction takes place for any value of $\alpha$, whereas in the second case, the probability of no transaction is $F{\left(}r^*{\right)}=\frac13$.
A rather distinct example is the uniform distribution on two disconnected intervals, e.g. $$f{\left(}\alpha{\right)}=\frac12\mathbf{1}_{\{1\le \alpha\le2\}\cup\{3\le \alpha\le 4\}}$$ It is left to the reader to check that this is not a DMRL distribution. However, we still get a unique $r^*$ satisfying $r^*={\mathrm{m}}{\left(}r^*{\right)}$ since $g{\left(}r{\right)}={\mathrm{m}}{\left(}r{\right)}-r$ is strictly decreasing for all $r<4$ and satisfies $g{\left(}0+{\right)}>0$ and $g{\left(}4-{\right)}<0$.
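The unique root of $g{\left(}r{\right)}={\mathrm{m}}{\left(}r{\right)}-r$ can be found numerically. A sketch (illustration only) that builds ${\mathrm{m}}(\cdot)$ by integrating the survival function and then bisects:

```python
# Numerical sketch for the two-interval uniform density: m(.) is obtained
# by numerically integrating the survival function, and the unique root of
# g(r) = m(r) - r is found by bisection on [0, 3.9].

def survival(t):
    """1 - F(t) for density 1/2 on [1,2] and [3,4]."""
    if t <= 1: return 1.0
    if t <= 2: return (3.0 - t) / 2
    if t <= 3: return 0.5
    if t <= 4: return (4.0 - t) / 2
    return 0.0

def mrl(t, n=4000):
    """m(t) = int_t^4 (1 - F(u)) du / (1 - F(t)) via the trapezoidal rule."""
    h = (4.0 - t) / n
    s = 0.5 * (survival(t) + survival(4.0))
    for k in range(1, n):
        s += survival(t + k * h)
    return s * h / survival(t)

def fixed_point(tol=1e-6):
    lo, hi = 0.0, 3.9                    # g(0) > 0 and g(3.9) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mrl(mid) > mid else (lo, mid)
    return (lo + hi) / 2

r = fixed_point()
print(round(r, 3), round(mrl(r), 3))     # the two printed values agree
```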
\[ex2\]*Exponential distribution.*\
Let $\alpha \sim \exp{\left(}\lambda{\right)}$, i.e. $f(\alpha)=\lambda e^{-\lambda \alpha} 1_{\{0 \leq \alpha < \infty\}}$, with $\lambda >0$ and let $T\ge 0$. Since $${\mathrm{m}}{\left(}t{\right)}= \begin{cases}\dfrac{1}{\lambda} -t & \mbox{if $-\infty < t \le 0$} \\[0.2cm] \dfrac{1}{\lambda} & \mbox{if $0 < t < \infty$}\end{cases}$$ $F$ is of DMRL. By , the optimal strategy $r^*$ of the supplier is independent of $T,c$ and is given by $r^*=\frac1\lambda$. The conditional probability of no transaction $P{\left(}V \mid U{\right)}$ is also independent of $T,c$ and equal to $1-e^{-1}$ due to the memoryless property of the exponential distribution (i.e. $P{\left(}\alpha>s+t\mid \alpha>t{\right)}=P{\left(}\alpha>s{\right)}$). Indeed, $$\begin{aligned}
P{\left(}V \mid U{\right)}&= P{\left(}\alpha \leq r^*+3T+c \mid \alpha> 3T+c{\right)}=1-P{\left(}\alpha> r^*+3T+c \mid \alpha> 3T+c{\right)}\\&=1-P{\left(}\alpha>r^*{\right)}=F{\left(}1/\lambda{\right)}=1-e^{-1}\end{aligned}$$ implying that the inequality in is tight over all DMRL distributions.
The next distribution is a special case of the beta distribution, also known as the Kumaraswamy distribution, see @Jo09.
\[ex3\]*Beta distribution.*\
Let $T=0$ and let $\alpha \sim Be{\left(}1,\lambda{\right)}$ with $\lambda>1$, i.e. the pdf of $\alpha$ is given by $f{\left(}\alpha{\right)}= \lambda {\left(}1-\alpha{\right)}^{\lambda-1}1_{\{0<\alpha<1\}}$. Then, for $0 \le \alpha <1$, we have that $1-F{\left(}\alpha{\right)}= {\left(}1-\alpha{\right)}^{\lambda}$ and using the equivalent expression for the mean residual lifetime $${\mathrm{m}}{\left(}t{\right)}=\dfrac{\int_{t}^{1}{\left(}1-F{\left(}\alpha{\right)}{\right)}\,d\alpha}{1-F{\left(}t{\right)}} \; \; \mbox{if $t < 1$},$$ the MRL function can be calculated to be $${\mathrm{m}}{\left(}t{\right)}= \begin{cases}\dfrac{1}{1+\lambda} -t & \mbox{if $-\infty < t \le 0$} \\[0.2cm]\dfrac{1-t}{1+\lambda} & \mbox{if $0 < t < 1$}.\end{cases}$$ Once more, since the MRL function is decreasing, applies and the optimal profit margin of the supplier will be $r^*=\frac{1}{\lambda+2}$. Since, $T=0$, we have that $$P{\left(}V\mid U{\right)}= F{\left(}r^*{\right)}= 1-{\left(}1-\frac{1}{\lambda+2}{\right)}^{\lambda} \, \stackrel{\lambda \to \infty}{\longrightarrow} \, 1-e^{-1} .$$ This example shows that our upper bound of $P{\left(}V\mid U{\right)}$ is still tight over distributions with strictly decreasing MRL, i.e. it is not the flatness of the exponential MRL that generated the large inefficiency.
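The convergence of the inefficiency to $1-e^{-1}$ as $\lambda$ grows is easy to tabulate. A brief sketch (illustration only):

```python
import math

# Sketch of the Beta/Kumaraswamy example: with T = c = 0 and
# alpha ~ Be(1, lam), r* = 1/(lam + 2), and the inefficiency
# P(V|U) = F(r*) = 1 - (1 - 1/(lam + 2))**lam climbs towards 1 - 1/e.

def inefficiency(lam):
    r_star = 1.0 / (lam + 2.0)
    return 1.0 - (1.0 - r_star) ** lam

for lam in (2, 10, 100, 10000):
    print(lam, round(inefficiency(lam), 6))
print("limit:", round(1 - math.exp(-1), 6))
```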
Finally, the Pareto distribution provides an example of a distribution for which no optimal solution exists for the supplier. This distribution has the IGFR property, but not the DMRL property.
\[ex4\]*Pareto distribution.*\
Let $\alpha$ be Pareto distributed with pdf $f(\alpha)=\frac{k\alpha_L^k}{\alpha^{k+1}}$ for $\alpha \ge \alpha_L>0$, where $k > 1$ is a constant (for $0<k\le 1$ we get ${\mathbb{E}}{\left(}\alpha{\right)}=+\infty$, which contradicts the basic assumptions of our model). To simplify, let $\alpha_L=1$, so that $f(\alpha)=k \alpha^{-k-1}1_{\{1 \leq \alpha < \infty\}}$, $F{\left(}\alpha{\right)}= {\left(}1 - \alpha^{-k}{\right)}1_{\{1 \leq \alpha < \infty\}}$ and ${\mathbb{E}}{\left(}\alpha{\right)}=\frac{k}{k-1}$. Then, the mean residual lifetime of $\alpha$ is given by $${\mathrm{m}}{\left(}t{\right)}= \begin{cases}\dfrac{k}{k-1} - t & \mbox{if $-\infty < t \leq 1$} \\[0.2cm] \dfrac{t}{k-1} & \mbox{if $t > 1$}. \end{cases}$$ and hence it is increasing on $[1, \infty)$, so that does not apply. Notice that for $1\le r$ the failure (hazard) rate ${\mathrm{h}}{\left(}r{\right)}=kr^{-1}$ is decreasing, but the generalized failure rate $r {\mathrm{h}}{\left(}r{\right)}=k$ is constant and hence $F$ is of IGFR. Assuming that $T$ was taken to be 0 and that $c$ was also normalized to 0, the payoff function of the supplier is $${u^s{\left(}r{\right)}}=\frac23r \, {\mathrm{m}}{\left(}r{\right)}{\left(}1-F{\left(}r{\right)}{\right)}=\begin{cases}\dfrac{2}{3} r {\left(}\dfrac{k}{k-1}-r{\right)}& \mbox{if $0 \leq r \leq 1$} \\[0.3cm]
\dfrac{2}{3{\left(}k-1{\right)}} r^{2-k} & \mbox{if $r > 1$}, \end{cases}$$ which diverges as $r\to +\infty$ if we take $k < 2$ (also, notice that the second moment of $\alpha$ is infinite for $1 <k < 2$, which sheds light on the pathology of this example). Now, let us recall the remark after Corollary \[cor4.4\]. Then, we see that for $F$ being of IGFR, the assumption that the generalized failure rate $r{\mathrm{h}}{\left(}r{\right)}$ must exceed 2 may not be dropped if we want to guarantee a solution. On the other hand, in our example and for $k > 2$, we get a unique solution as expected, namely $r^* = \frac{k}{2{\left(}k-1{\right)}}$, which is indeed the unique fixed point of ${\mathrm{m}}(r)$.
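The contrast between the divergent case $1<k<2$ and the well-behaved case $k>2$ can be seen directly from the piecewise payoff above. A sketch (illustration only; $\alpha_L=1$, $T=c=0$ as in the example):

```python
# Sketch of the Pareto pathology (alpha_L = 1, T = c = 0): for 1 < k < 2
# the supplier's payoff u^s(r) grows without bound in r, while for k > 2
# it peaks at r* = k / (2(k - 1)), the unique fixed point of m(r).

def payoff(r, k):
    """Piecewise form of u^s(r) = (2/3) r m(r) (1 - F(r)) for Pareto(k)."""
    if r <= 1:
        return (2.0 / 3.0) * r * (k / (k - 1.0) - r)
    return (2.0 / (3.0 * (k - 1.0))) * r ** (2.0 - k)

# k = 1.5: the payoff keeps increasing, so no optimum exists.
print([round(payoff(r, 1.5), 3) for r in (1, 10, 100)])

# k = 3: interior optimum at r* = k / (2(k-1)) = 0.75.
r_star = 3.0 / (2 * (3 - 1))
print(r_star, payoff(r_star, 3) > payoff(0.5, 3),
      payoff(r_star, 3) > payoff(1.0, 3))
```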
General case with $n$ identical retailers {#sec7}
=======================================
The results of the previous sections admit a straightforward extension to the case of $n>2$ identical retailers, i.e. $n$ retailers each having capacity constraint $T_i=T$. Formally, let $N=\{1,2, \ldots, n\}$, with $n\ge 2$ and denote with $R_i$ retailer $i$, for $i\in N$. As in , a strategy profile is denoted with $s={\left(}s_1, s_2, \ldots, s_n{\right)}$. The payoff function of $R_i$ depends on the total quantity of the remaining $n-1$ retailers and is given by , where now $Q$ denotes the total quantity sold by all $n$ retailers, i.e. $$Q=\sum_{j=1}^nQ_j=\sum_{j=1}^n (t_j+q_j)$$ Following common notation, let $s={\left(}s_{-i},s_i{\right)}$ and $Q_{-i}=Q-Q_i$ for $i \in N$. It is immediate that Lemma \[lem3.1\] and Lemma \[lem3.2\] still apply, if one replaces $Q_j$ with $Q_{-i}$ and $s_j$ with $s_{-i}$. Hence, one may generalize Proposition \[prop3.3\] as follows
\[prop7.1\] If $T_i=T$ for $i \in N$, then for all values of $\alpha$ the strategies $s^*_i{\left(}w{\right)}={\left(}t^*_i{\left(}w{\right)}, q^*_i{\left(}w{\right)}{\right)}$, or shortly $s^*_i={\left(}t^*_i, q^*_i{\right)}$, with $$t^*_i(w)= T-\frac{1}{n+1}{\left(}{\left(}n+1{\right)}T-\alpha{\right)}^+, \quad q^*_i{\left(}w{\right)}= \frac{1}{n+1}{\left(}\alpha-{\left(}n+1{\right)}T-w{\right)}^+$$ are second stage equilibrium strategies among the retailers $R_i$, for $i \in N$.
See Appendix \[app1\].
Turning attention to the first stage, the payoff function of the supplier in the complete information case (cf. ) will be given by ${u^s{\left(}r{\right)}}=rq^*{\left(}w{\right)}=\frac{n}{n+1}r{\left(}\alpha-{\left(}n+1{\right)}T-c-r{\right)}^+$ and hence it is maximized at $$r^*{\left(}\alpha{\right)}=\frac12{\left(}\alpha-{\left(}n+1{\right)}T-c{\right)}^+$$ which generalizes Proposition \[prop3.4\]. Similarly, if the supplier knows only the distribution and not the true value of $\alpha$, the arguments of still apply. Then, the payoff function of the supplier - cf. - becomes $${u^s{\left(}r{\right)}}=\frac{n}{n+1}r \,{\mathbb{E}}{\left(}\alpha-{\left(}n+1{\right)}T-c-r{\right)}^{+} \;\; \mbox{for $r \ge 0$}.$$ and hence $${u^s{\left(}r{\right)}}=\frac{n}{n+1}r \,{\mathrm{m}}{\left(}r+{\left(}n+1{\right)}T+c{\right)}{\left(}1-F{\left(}r+{\left(}n+1{\right)}T+c{\right)}{\right)}\;\; \mbox{for $0\le r<\alpha_H-{\left(}n+1{\right)}T-c$}.$$ Proceeding as in , we may generalize to the case of $n\ge 2$ identical retailers. To this end, let $r_L^{\,n}:=\alpha_L-{\left(}n+1{\right)}T-c$ and $r_H^{\,n}:=\alpha_H-{\left(}n+1{\right)}T-c$. Then,
\[thm7.2\] Under incomplete information, for identical producers/retailers (i.e. for $T_i=T,\,i \in N$) for the non-trivial case $r_H^{\,n} >0$ and assuming the supplier’s belief induces a non-atomic measure on the demand parameter space:
1. [**(necessary condition)**]{} If the optimal profit margin $r^*$ of the supplier exists when the producers/retailers follow their equilibrium strategies in the second stage, then it satisfies the fixed point equation $r^*={\mathrm{m}}{\left(}r^*+{\left(}n+1{\right)}T+c{\right)}$.
2. [**(sufficient condition)**]{} If the mean residual lifetime ${\mathrm{m}}{\left(}\cdot{\right)}$ of the demand parameter $\alpha$ is decreasing, then the optimal profit margin $r^*$ of the supplier exists under equilibrium and it is the unique solution of the equation $r^*={\mathrm{m}}{\left(}r^*+{\left(}n+1{\right)}T+c{\right)}$. In that case, if ${\mathbb{E}}{\left(}\alpha{\right)}- \alpha_L \leq \alpha_L -{\left(}n+1{\right)}T-c\ {\left(}=r_L^{\,n}{\right)}$, then $r^*$ is given explicitly by $r^*=\frac12{\left(}{\mathbb{E}}{\left(}\alpha{\right)}-{\left(}n+1{\right)}T-c{\right)}$. Moreover, if ${\mathbb{E}}(\alpha) - \alpha_L > \alpha_L -{\left(}n+1{\right)}T-c {\left(}=r_L^{\,n}{\right)}$, then $r^* \in {\left(}{\left(}r_L^{\,n}{\right)}^+, r_H^{\,n}{\right)}$.
The results of and may be generalized in a similar manner.
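As a sanity check on the $n$-retailer fixed point, a short numerical sketch (the uniform demand $U[0,4]$ and the values of $T$, $c$ are assumptions made purely for illustration):

```python
# Sketch of Theorem 7.2 with alpha ~ U[0, 4]: the fixed point
# r* = m(r* + (n+1)T + c) is solved by bisection, showing how the
# supplier's margin shrinks as the number n of identical retailers
# grows, for fixed T > 0.

def mrl(t, a_H=4.0):
    """U[0, a_H]: m(t) = (a_H - t)/2 on [0, a_H], and 0 beyond."""
    return max((a_H - t) / 2.0, 0.0)

def r_star(n, T=0.2, c=0.0, tol=1e-10):
    lo, hi = 0.0, 4.0 - (n + 1) * T - c      # the non-trivial range (0, r_H^n)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mrl(mid + (n + 1) * T + c) > mid else (lo, mid)
    return (lo + hi) / 2

print([round(r_star(n), 4) for n in (2, 5, 10)])  # [1.1333, 0.9333, 0.6]
```

Here the closed form is $r^*=\bigl(4-(n+1)T-c\bigr)/3$, so the decrease in $n$ mirrors the decrease in $T$ from Corollary \[cor4.3\].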
Appendix: General statements and proofs {#app1}
=======================================
Equilibrium strategies of the second stage {#equilibrium-strategies-of-the-second-stage}
------------------------------------------
For convenience in notation, we give the proof for $i=1$ and $j=2$; this entails no loss of generality by the symmetry of the retailers. Let $w\ge c$ and $s_2={\left(}t_2, q_2{\right)}\in S^2$ with $Q_2=t_2+q_2$. Restricting ourselves first to strategies with $q_1=0$, we have that $$u^1{\left(}{\left(}t_1,0{\right)},s_2{\right)}=t_1{\left(}\alpha-Q_2-t_1{\right)}$$ Let $t_1^*=\operatorname*{arg\,max}_{0\le t_1 \le T} u^1{\left(}{\left(}t_1,0{\right)}, s_2 {\right)}$. Since $\alpha-Q_i\ge 0, \, i=1,2$, we have that if $\alpha-2T\le Q_2$, then $t_1^*=\frac12{\left(}\alpha-Q_2{\right)}$, while if $\alpha-2T>Q_2$, then the maximum of $u^1{\left(}{\left(}t_1,0{\right)},s_2{\right)}$ is attained at the highest admissible (due to the production capacity constraint) value of $t_1$ and hence $t^*_1=T$. Similarly, for strategies with $t_1=T$ and $q_1>0$ we have that the payoff function of retailer $R_1$ $$u^1{\left(}{\left(}T,q_1{\right)},s_2{\right)}=Q_1{\left(}\alpha-w-Q_2-Q_1{\right)}+Tw=T{\left(}\alpha-Q_2-T{\right)}+q_1{\left(}\alpha-w-2T-Q_2-q_1{\right)}$$ is maximized at $q^*_1=\dfrac12{\left(}\alpha-w-2T-Q_2{\right)}$ under the constraint that $q^*_1>0$, or equivalently $Q_2 < \alpha-w-2T$. If instead $q^*_1\le0$, then the maximum of $u^1{\left(}{\left(}T,q_1{\right)},s_2{\right)}$ is attained at the lowest admissible value of $q_1$, i.e. $q_1=0$. Since the conditions $\alpha-2T\le Q_2$ and $Q_2<\alpha-w-2T$ cannot hold at the same time, the result follows.
From the explicit form of the best reply correspondence that is given in Lemma \[lem3.2\] (see also ), it is straightforward that the equilibrium strategies depend on the values of $\alpha$ and $w$ and their relation to $T_1, T_2$. For convenience we will denote with $\Gamma_{ij}$ the case that the equilibrium occurs as an intersection of parts $i,j$ for $i, j=1,2,3$. Since by assumption $T_1\ge T_2$, only the cases with $i\ge j$ (instead of all possible $9$ cases) may occur.
\[prop3.3full\] Given the values of $\alpha$ and $w$, the second stage equilibrium strategies between retailers $R_1$ and $R_2$ for all possible values of $T_1 \ge T_2$ are given by
| **Case** | **$\alpha$ is in** | $t_1^*$ | $q_1^*$ | $t_2^*$ | $q_2^*$ |
|:---:|:---|:---:|:---:|:---:|:---:|
| $\Gamma_{11}$ | $(3T_1+w, \infty)$ | $T_1$ | $\frac{\alpha-w}{3}-T_1$ | $T_2$ | $\frac{\alpha-w}{3}-T_2$ |
| $\Gamma_{21}$ | ${\left(}\max\left\{3T_1-w, T_1+2T_2+w\right\}, 3T_1+w\right]$ | $T_1$ | $0$ | $T_2$ | $\frac{\alpha-w-T_1}{2}-T_2$ |
| $\Gamma_{22}$ | $(2T_1+T_2, T_1+2T_2+w]$ | $T_1$ | $0$ | $T_2$ | $0$ |
| $\Gamma_{31}$ | $(3T_2+2w, 3T_1-w]$ | $\frac{\alpha+w}{3}$ | $0$ | $T_2$ | $\frac{\alpha-2w}{3}-T_2$ |
| $\Gamma_{32}$ | $\left(3T_2, \min\left\{2T_1+T_2, 3T_2+2w\right\}\right]$ | $\frac{\alpha-T_2}{2}$ | $0$ | $T_2$ | $0$ |
| $\Gamma_{33}$ | $[0, 3T_2]$ | $\frac{\alpha}{3}$ | $0$ | $\frac{\alpha}{3}$ | $0$ |

\[tab:1\]
The second stage equilibria regions in the $T_1-T_2$ plane are depicted in .
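The equilibrium map of the table can be implemented directly and verified against a brute-force best-reply search. The following sketch is our own illustration ($T_1=2$, $T_2=1$, $w=1$ and the chosen $\alpha$ values are assumed test points, not from the text):

```python
# Sketch: second-stage equilibrium (t1*, q1*, t2*, q2*) as a function of
# alpha, following the table, plus a numerical check that neither retailer
# gains by deviating. Payoff: u^i = Q_i*(alpha - Q_j - Q_i) - w*q_i.

def equilibrium(alpha, w, T1, T2):
    if alpha > 3 * T1 + w:                                   # Gamma_11
        return (T1, (alpha - w) / 3 - T1, T2, (alpha - w) / 3 - T2)
    if max(3 * T1 - w, T1 + 2 * T2 + w) < alpha:             # Gamma_21
        return (T1, 0.0, T2, (alpha - w - T1) / 2 - T2)
    if 2 * T1 + T2 < alpha <= T1 + 2 * T2 + w:               # Gamma_22
        return (T1, 0.0, T2, 0.0)
    if 3 * T2 + 2 * w < alpha <= 3 * T1 - w:                 # Gamma_31
        return ((alpha + w) / 3, 0.0, T2, (alpha - 2 * w) / 3 - T2)
    if 3 * T2 < alpha:                                       # Gamma_32
        return ((alpha - T2) / 2, 0.0, T2, 0.0)
    return (alpha / 3, 0.0, alpha / 3, 0.0)                  # Gamma_33

def best_payoff(alpha, w, T, Q_other, steps=400):
    # brute-force max of u over 0 <= t <= T, 0 <= q <= 6
    best = float("-inf")
    for i in range(steps + 1):
        t = T * i / steps
        for j in range(steps + 1):
            q = 6.0 * j / steps
            u = (t + q) * (alpha - Q_other - (t + q)) - w * q
            if u > best:
                best = u
    return best

T1, T2, w = 2.0, 1.0, 1.0            # illustrative assumption
for alpha in (12.0, 6.0, 4.5, 2.0):  # Gamma_11, _21, _32, _33
    t1, q1, t2, q2 = equilibrium(alpha, w, T1, T2)
    u1 = (t1 + q1) * (alpha - (t2 + q2) - (t1 + q1)) - w * q1
    u2 = (t2 + q2) * (alpha - (t1 + q1) - (t2 + q2)) - w * q2
    # equilibrium payoff must (weakly) beat any grid deviation
    assert u1 >= best_payoff(alpha, w, T1, t2 + q2) - 1e-9
    assert u2 >= best_payoff(alpha, w, T2, t1 + q1) - 1e-9
```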
The proof proceeds by examining under what conditions a second stage equilibrium occurs as an intersection of parts $i,j = (1), (2), (3)$ of the best reply correspondences of the retailers.
- $\Gamma_{11}$: In this case the best reply correspondences intersect in their parts denoted by (1). By rearranging part (1) of the best reply correspondence in Lemma \[lem3.2\] and assuming that $R_i$ replies optimally to $R_j$ for $i,j=1,2$, we obtain that the total quantities released to the market under equilibrium will be given by $Q_1^*=Q_2^*=\frac{\alpha-w}{3}$ subject to the constraints $Q_2^*< \alpha-w-2T_1$ and $Q_1^*< \alpha-w-2T_2$. Substituting the values $Q^*_1$ and $Q^*_2$ in the constraints we find that these solutions are acceptable if $\frac{\alpha-w}{3}< \alpha-w-2T_1$ and $\frac{\alpha-w}{3}< \alpha-w-2T_2$ or equivalently if $\max{\left\{T_1, T_2\right\}} < \frac{\alpha-w}{3}$ which may be reduced to $T_1 <\dfrac{\alpha-w}{3}$ since $T_2\le T_1$ is assumed. Combining the above relations and decomposing $Q^*_i$ as in Lemma \[lem3.2\] we obtain the first line of that settles case $\Gamma_{11}$.
- $\Gamma_{21}$: Now, $Q^*_1=T_1$ and $Q^*_2=\frac{\alpha-w-T_1}{2}$ subject to $\alpha-w-2T_1\le \frac{\alpha-w-T_1}{2} < \alpha-2T_1$ and $0\le T_1 < \alpha-w-2T_2$. Solving the constraints yields $\max{\left\{3T_1-w, T_1+2T_2+w\right\}}< \alpha \le 3T_1+w$.
- $\Gamma_{22}$: Now, $Q^*_i=T_i$, for $i=1,2$. These strategies are acceptable if $\alpha-w-2T_1\le T_2 < \alpha-2T_1$ and $\alpha-w-2T_2\le T_1 < \alpha-2T_2$ which gives $2T_1+T_2 < \alpha \le T_1+2T_2+w$.
- $\Gamma_{31}$: Now, $Q^*_1=\frac{\alpha-Q^*_2}{2}$ and $Q^*_2=\frac{\alpha-w-Q^*_1}{2}$. Hence, $Q^*_1=\frac{\alpha+w}{3}$ and $Q^*_2=\frac{\alpha-2w}{3}$ subject to $\alpha-2T_1\le \frac{\alpha-2w}{3}$ and $0\le \frac{\alpha+w}{3}< \alpha-w-2T_2$ or equivalently $3T_2+2w< \alpha \le 3T_1-w$.
- $\Gamma_{32}$: It is easy to see that $Q^*_1=\frac{\alpha-T_2}{2}$ and $Q^*_2=T_2$ subject to $\alpha-2T_1\le T_2$ and $\alpha-w-2T_2\le \frac{\alpha-T_2}{2} < \alpha-2T_2$, which yields $3T_2 < \alpha \le 3T_2+2w$.
- $\Gamma_{33}$: $Q^*_1=Q^*_2=\frac{\alpha}{3}$ subject to $\alpha-2T_1\le \frac{\alpha}{3}$ and $\alpha-2T_2\le \frac{\alpha}{3}$ or equivalently $\alpha \le 3T_2$.
One should notice that the conditions under which the cases $\Gamma_{ij}$ (second column of ) apply are mutually exclusive, hence there exists a unique second stage equilibrium. In more detail, if $T_1 > T_2$, exactly one of the following two mutually exclusive arrangements of the $\alpha$-intervals will obtain: Either (a) $0 < 3T_2 < 2T_1+T_2 < T_1+2T_2+w < 3T_1+w < \infty$ with case $\Gamma_{31}$ empty or (b) $0 < 3T_2 < 3T_2+2w < 3T_1-w < 3T_1+w < \infty$ with case $\Gamma_{22}$ empty. If $T_1 = T_2$, then (a) obtains and the $\alpha$-intervals simplify to $0 < 3T < 3T+w < \infty$. Also, the retailers’ equilibrium strategies are continuous at the cutting points of the $\alpha$-intervals, i.e. the “$\alpha$ [*is in*]{}” intervals of can be taken as left hand side closed also. Generically, in the $T_1-T_2$ plane, the result of Proposition \[prop3.3full\] is summarized in .
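The claimed mutual exclusivity and the two interval arrangements can be verified mechanically. In the sketch below the parameter triples are illustrative choices of our own, covering arrangements (a) and (b):

```python
# Sketch: for each alpha > 0, exactly one of the six Gamma_ij conditions
# (second column of the table) should hold. We scan a grid of alpha values
# for one instance of arrangement (a) (T1 - T2 <= w) and one of (b).

def active_cases(alpha, w, T1, T2):
    cases = {
        "G11": 3 * T1 + w < alpha,
        "G21": max(3 * T1 - w, T1 + 2 * T2 + w) < alpha <= 3 * T1 + w,
        "G22": 2 * T1 + T2 < alpha <= T1 + 2 * T2 + w,
        "G31": 3 * T2 + 2 * w < alpha <= 3 * T1 - w,
        "G32": 3 * T2 < alpha <= min(2 * T1 + T2, 3 * T2 + 2 * w),
        "G33": 0 <= alpha <= 3 * T2,
    }
    return [name for name, holds in cases.items() if holds]

for (T1, T2, w) in [(2.0, 1.8, 1.0),   # D = 0.2 < w : arrangement (a)
                    (3.0, 1.0, 1.0)]:  # D = 2.0 > w : arrangement (b)
    hit = set()
    for k in range(1, 2000):
        alpha = 0.01 * k + 0.0037   # offset avoids interval endpoints
        names = active_cases(alpha, w, T1, T2)
        assert len(names) == 1, (alpha, names)
        hit.update(names)
    # arrangement (a) skips Gamma_31, arrangement (b) skips Gamma_22
    missing = {"G11", "G21", "G22", "G31", "G32", "G33"} - hit
    assert missing in ({"G31"}, {"G22"})
```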
(Figure: the second-stage equilibrium regions $\Gamma_{11}$, $\Gamma_{21}$, $\Gamma_{22}$, $\Gamma_{31}$, $\Gamma_{32}$ and $\Gamma_{33}$ in the $T_1-T_2$ plane, bounded by the lines $\alpha=2T_1+T_2$, $\alpha=T_1+2T_2+w$ and $T_1=T_2$.)
The boundaries of the different regions in (or equivalently the conditions in ) depend on the values of both $w$ and $\alpha$. However, the point $\frac{\alpha}{3}$ on both the $T_1$ and the $T_2$ axes and the line $\alpha=2T_1+T_2$ that separates the cases $\Gamma_{22}$ and $\Gamma_{32}$ depend only on the value of $\alpha$ and not on $w$. Thus, they will serve as a basis for case discrimination when solving for the optimal strategy of the supplier in the first stage.
Equilibrium strategies of the first stage {#equilibrium-strategies-of-the-first-stage}
-----------------------------------------
Based on (or equivalently on ) and using equation , we can calculate the payoff function, $u^s{\left(}r\mid \alpha{\right)}$, of the supplier as $\alpha$ varies, when the retailers use their equilibrium strategies at the second stage. To proceed, we denote by $q^*_{ij}{\left(}w{\right)}:=q^*_1{\left(}w{\right)}+q^*_2{\left(}w{\right)}$ the total quantity that the retailers order from the supplier when case $\Gamma_{ij}$ for $j\le i \in \{1,2,3\}$ occurs in the second stage. By the above, $$\begin{aligned}
q^*_{31}{\left(}w{\right)}&=\frac13 {\left(}\alpha-2w-3T_2{\right)}, &&\hspace{-70pt}\text{ if } 3T_2+2w < \alpha \le 3T_1-w \\
q^*_{21}{\left(}w{\right)}&=\frac12 {\left(}\alpha-w-T_1-2T_2{\right)}, &&\hspace{-70pt} \text{ if } \max{\{3T_1-w, T_1+2T_2+w\}} < \alpha \le 3T_1+w \\
q^*_{11}{\left(}w{\right)}&=\frac23 {\left(}\alpha-w-\frac32T_1-\frac32T_2{\right)}, &&\hspace{-70pt} \text{ if } 3T_1+w < \alpha \\
q^*_{ij}{\left(}w{\right)}&=0, &&\hspace{-70pt} \text{ else }\end{aligned}$$ and since $w=r+c$, with $c>0$ being a constant and $r \geq 0$ being the strategic variable, the payoff function of the supplier may be written as $$u^s{\left(}r\mid \alpha{\right)}= r\cdot \begin{cases}
q^*_{31}{\left(}r+c{\right)}, & 3T_2+2c+2r< \alpha \le 3T_1-c-r \\
q^*_{21}{\left(}r+c{\right)}, & \max{\left\{3T_1-c-r, T_1+2T_2+c+r\right\}}< \alpha \le 3T_1+c+r \\
q^*_{11}{\left(}r+c{\right)}, & 3T_1+c+r < \alpha \\
0, & \text{ else} \end{cases}$$ for $r\ge 0$, assuming that a subgame perfect equilibrium is played in the second stage. Re-arranging the conditions in the last column and taking into account the continuity of $u^s{\left(}r\mid \alpha{\right)}$ at the cutting points of the $\alpha$-intervals and the non-negativity constraint for $r$, we obtain that $$u^s{\left(}r\mid \alpha{\right)}= r\cdot \begin{cases}
q^*_{31}{\left(}r+c{\right)}, & 0\le r \le \min{\left\{\frac12 {\left(}\alpha-2c-3T_2{\right)}, 3T_1-c-\alpha\right\}}\\
q^*_{21}{\left(}r+c{\right)}, & \max{\left\{0, \alpha-c-3T_1, 3T_1-c-\alpha\right\}}\le r \le \alpha-c-T_1-2T_2\\
q^*_{11}{\left(}r+c{\right)}, & 0\le r < \alpha-c-3T_1 \\
0, & \text{ else}\end{cases}$$ It is immediate that some cases may not occur for different values of the parameters. Hence, in order to derive the optimal strategy $r^*$ of the supplier (see the proof of Proposition \[prop3.4full\] below), a further case discrimination is necessary. As we have already noted in the previous paragraph, the relative position of $\alpha$ to the quantities $3T_1, 3T_2$ and $2T_1+T_2$ will serve as a basis for case discrimination. Maximizing the payoff function of the supplier for each case yields his optimal strategy $r^*$ (i.e. the profit margin that maximizes his profits given that the retailers will play their equilibrium strategies in the next stage) for all possible values of $\alpha$ and $T_1, T_2$. The optimal values $r^*$ are denoted by $r^*_{ij}, j\le i\in \{1,2,3\}$, according to the equilibrium that is played in the second stage. It will be convenient to discriminate two cases, **A** and **B**, depending on whether $0<c\le T_1-T_2$ or $T_1-T_2<c$. Also, as mentioned above, the second stage equilibrium quantities are continuous at the points where their expression changes and therefore we allow the subsequent cases to overlap on the cutting points. Finally, to simplify the expressions below, let $S:=T_1+T_2, \; D:=T_1-T_2, \; \Delta:=\frac{\sqrt{3}+3}{2}D$.
\[prop3.4full\] For given $T_1\ge T_2$, the optimal strategy $r^*$ of the supplier for all possible values of $\alpha$ is given by
- $0<c\le D$ $$r^*(\alpha)=\left\{ \begin{alignedat}{3}
& r \in \mathbb R_+, & 0\le \alpha &\le 3T_2+2c \\
& r^*_{31}=\frac14{\left(}\alpha-2c-3T_2{\right)}, & 3T_2+2c \le \alpha &\le 3T_2+\Delta-\frac{\sqrt{3}-1}{2}c\\
& r^*_{21}=\frac12{\left(}\alpha-c-T_1 -2T_2{\right)}, & 3T_2+\Delta-\frac{\sqrt{3}-1}{2}c \le \alpha &\le 3T_2+2\Delta+c\\
& r^*_{11}=\frac12{\left(}\alpha-c-\frac32T_1-\frac32T_2{\right)}, \quad & 3T_2+2\Delta+c \le \alpha
\end{alignedat} \right.$$
- $0\le D<c$ $$r^*(\alpha)=\left\{ \begin{alignedat}{3}
& r \in \mathbb R_+, & 0\le \alpha &\le T_1+2T_2+c \\
& r^*_{21}=\frac12{\left(}\alpha-c-T_1 -2T_2{\right)}, & T_1+2T_2+c \le \alpha &\le 3T_2+2\Delta+c\\
& r^*_{11}=\frac12{\left(}\alpha-c-\frac32T_1-\frac32T_2{\right)}, \quad & 3T_2+2\Delta+c \le \alpha
\end{alignedat} \right.$$
<!-- -->
- Let $0<c\le D$. Then, $3T_2+2c \le T_1+2T_2+c \le 2T_1+T_2 \le 3T_1-c \le 3T_1+c$ and hence for the different values of $\alpha$ we have
- $0\le \alpha \le 3T_2+2c$. For all $r \in \mathbb R_+$ we have that $u^s{\left(}r\mid \alpha{\right)}\equiv 0$ and hence $r^*=r \in \mathbb R_+$.
- $3T_2+2c \le \alpha \le 2T_1+T_2$. The supplier’s payoff function is given by $$u^s{\left(}r\mid \alpha{\right)}=r\cdot \begin{cases}q^*_{31}{\left(}r+c{\right)}, & 0\le r \le \frac12{\left(}\alpha-2c-3T_2{\right)}\\0, & \text{ else}\end{cases}$$ and hence, as a quadratic polynomial in $r$, its first part is maximized at $r^*=r^*_{31}$.
- $2T_1+T_2\le \alpha \le 3T_1-c$. Now $$u^s{\left(}r\mid \alpha{\right)}=r\cdot \begin{cases}q^*_{31}{\left(}r+c{\right)}, & 0\le r \le 3T_1-c-\alpha \\q^*_{21}{\left(}r+c{\right)}, & 3T_1-c-\alpha \le r \le \alpha-c-T_1-2T_2 \\ 0, & \text{ else}\end{cases}$$ As quadratic polynomials in $r$, the first part is maximized at $r^*_{31}$ and the second at $r^*_{21}$. It is easy to see that $0\le r^*_{31}\le r^*_{21}\le \alpha-c-T_1-2T_2$. Hence, in order to determine the maximum of $u^s{\left(}r\mid \alpha{\right)}$ with respect to $r$ we distinguish three sub-cases.
- $0 \le r^*_{31} \le r^*_{21}\le 3T_1-c-\alpha$. Then the overall maximum of $u^s{\left(}r\mid \alpha{\right)}$ is attained at $r^*_{31}$. The inequality $r^*_{21}\le 3T_1-c-\alpha $ holds iff $\frac12{\left(}\alpha-c-T_1-2T_2{\right)}\le 3T_1-c-\alpha \Leftrightarrow \alpha \le \frac32S+\frac56D-\frac13c$.
- $3T_1-c-\alpha \le r^*_{31}\le r^*_{21}$. Then the overall maximum of $u^s{\left(}r\mid \alpha{\right)}$ is attained at $r^*_{21}$. The inequality $3T_1-c-\alpha \le r^*_{31}$ holds iff $ 3T_1-c-\alpha \le \frac14{\left(}\alpha-2c-3T_2{\right)}\Leftrightarrow \alpha \ge \frac32S+\frac9{10}D-\frac25c$.
- $0 \le r^*_{31} \le 3T_1-c-\alpha \le r^*_{21}$. In this case we need to compare the payoffs $u^s{\left(}r^*_{31}\mid \alpha{\right)}$ and $u^s{\left(}r^*_{21}\mid \alpha{\right)}$. The overall maximum is attained at $r^*_{31}$ iff $u^s{\left(}r^*_{21}\mid \alpha{\right)}\le u^s{\left(}r^*_{31}\mid \alpha{\right)}\Leftrightarrow \frac18{\left(}\alpha-c-T_1-2T_2{\right)}^2 \le \frac1{24}(\alpha-2c-3T_2)^2$. Both terms ${\left(}\alpha-c-T_1-2T_2{\right)}$ and ${\left(}\alpha-2c-3T_2{\right)}$ are non-negative since $\alpha \ge 2T_1+T_2$ and $c \le T_1-T_2$ hold by assumption. Hence, we may take the square root of both sides to obtain $$\begin{aligned}
u^s{\left(}r^*_{21}\mid \alpha{\right)}\le u^s{\left(}r^*_{31}\mid \alpha{\right)}&\iff {\left(}\sqrt{3}-1{\right)}\alpha \le \sqrt{3}T_1+{\left(}2\sqrt{3}-3{\right)}T_2+{\left(}\sqrt{3}-2{\right)}c \\[0.2cm]&\iff \alpha \le \frac32S+\frac{\sqrt{3}}{2}D-\frac{\sqrt{3}-1}{2}c\end{aligned}$$ Now, it is straightforward to check that the following ordering is equivalent to $c\leq D$, which is true by the defining condition of Case A. $$2T_1+T_2 \leq \frac32S+\frac56D-\frac13c \leq \frac32S+\frac{\sqrt{3}}{2}D-\frac{\sqrt{3}-1}{2}c \le \frac32S+\frac9{10}D-\frac25c \le 3T_1-c$$
Hence, by the previous discussion and after observing that $\frac32S+\frac{\sqrt{3}}{2}D-\frac{\sqrt{3}-1}{2}c = 3T_2+\Delta-\frac{\sqrt{3}-1}{2}c$, we conclude that the optimal solution $r^*$ in this case is given by $$r^*=\begin{cases}r^*_{31}=\frac14{\left(}\alpha-2c-3T_2{\right)}, & \text{ if }\,\, 2T_1+T_2\le \alpha \le 3T_2+\Delta-\frac{\sqrt{3}-1}{2}c\\[0.2cm]
r^*_{21}=\frac12{\left(}\alpha-c-T_1-2T_2{\right)}, & \text{ if } \,\,\, 3T_2+\Delta-\frac{\sqrt{3}-1}{2}c \le \alpha < 3T_1-c\end{cases}$$
- $3T_1-c \le \alpha \le 3T_1+c$. Now, the supplier’s payoff function is given by $$u^s{\left(}r\mid \alpha{\right)}=r\cdot \begin{cases}q^*_{21}{\left(}r+c{\right)}, & 0\le r \le \alpha-c-T_1-2T_2 \\0, & \text{ else}\end{cases}$$ and hence, as a quadratic polynomial in $r$, it is maximized at $r^*=r^*_{21}$.
- $3T_1+c\le \alpha$. Now $$u^s{\left(}r\mid \alpha{\right)}=r\cdot \begin{cases}q^*_{11}{\left(}r+c{\right)}, & 0\le r \le \alpha-c-3T_1 \\q^*_{21}{\left(}r+c{\right)}, & \alpha-c-3T_1 \le r \le \alpha-c-T_1-2T_2 \\ 0, & \text{ else} \end{cases}$$ Now, the first part is maximized at $r^*_{11}$ and the second at $r^*_{21}$. Again, one checks easily that $0 < r^*_{11} < r^*_{21} < \alpha-c-T_1-2T_2$. As in Case A3, in order to determine the maximum of $u^s{\left(}r\mid \alpha{\right)}$ with respect to $r$ we distinguish three sub-cases.
- $0< r^*_{11}< r^*_{21}\le \alpha-c-3T_1$. Then the overall maximum of $u^s{\left(}r\mid \alpha{\right)}$ is attained at $r^*_{11}$. The inequality $r^*_{21}\le \alpha-c-3T_1$ holds iff $\frac12{\left(}\alpha-c-T_1-2T_2{\right)}\le \alpha-c-3T_1 \Leftrightarrow \alpha \geq \frac32 S+\frac72 D+c$.
- $\alpha-c-3T_1\le r^*_{11}< r^*_{21}$. Then the overall maximum of $u^s{\left(}r\mid \alpha{\right)}$ is attained at $r^*_{21}$. The inequality $\alpha-c-3T_1\le r^*_{11}$ holds iff $\alpha-c-3T_1\le \frac12{\left(}\alpha-c-\frac32T_1-\frac32T_2{\right)}\Leftrightarrow \alpha \leq \frac32 S+3D+c$.
- $0<r^*_{11} < \alpha-c-3T_1 < r^*_{21}$. In this case we need to compare the payoffs $u^s{\left(}r^*_{11}\mid \alpha{\right)}$ and $u^s{\left(}r^*_{21}\mid \alpha{\right)}$. The overall maximum is attained at $r^*_{11}$ iff $$u^s{\left(}r^*_{21}\mid \alpha{\right)}\le u^s{\left(}r^*_{11}\mid \alpha{\right)}\Leftrightarrow \frac18(\alpha-c-T_1-2T_2)^2 \le \frac16{\left(}\alpha-c-\frac32T_1-\frac32T_2{\right)}^2$$ Both terms ${\left(}\alpha-c-T_1-2T_2{\right)}$ and ${\left(}\alpha-c-\frac32T_1-\frac32T_2{\right)}$ are positive since $\alpha \ge 3T_1+c$ holds by assumption. Hence, we may take the square root of both sides to obtain $$\begin{aligned}
u^s{\left(}r^*_{21}\mid \alpha{\right)}\le u^s{\left(}r^*_{11}\mid \alpha{\right)}&\iff {\left(}3-\sqrt{3}{\right)}T_1+{\left(}3-2\sqrt{3}{\right)}T_2+{\left(}2-\sqrt{3}{\right)}c \le {\left(}2-\sqrt{3}{\right)}\alpha \\[0.2cm]&\iff \frac32 S+ {\left(}\frac32 + \sqrt{3}{\right)}D +c \le \alpha\end{aligned}$$
Now, since $\frac32 S+{\left(}\frac32+\sqrt{3}{\right)}D+c = 3T_2+2\Delta+c$ and $3T_1+c \le \frac32 S+3D+c \le \frac32 S+{\left(}\frac32+\sqrt{3}{\right)}D+c \le \frac32 S+\frac72 D+c$, we conclude that the optimal solution $r^*$ in this case is given by $$r^*=\begin{cases}r^*_{21}=\frac12{\left(}\alpha-c-T_1-2T_2{\right)}, & \text{ if }\,\, 3T_1+c\le \alpha \le 3T_2+2\Delta+c\\[0.2cm]
r^*_{11}=\frac12{\left(}\alpha-c-\frac32T_1-\frac32T_2{\right)}, & \text{ if } \,\,\, 3T_2+2\Delta+c \le \alpha \end{cases}$$
This concludes case **A**.
- Let $0\le D<c$. Then, $3T_1-c< 2T_1+T_2 < T_1+2T_2+c < \min {\left(}3T_1+c,3T_2+2c{\right)}$ and hence for the different values of $\alpha$ we have
- $0\le \alpha \le T_1+2T_2+c$. Now $u^s{\left(}r\mid \alpha{\right)}\equiv 0$ for all $r \in \mathbb R_+$ and hence $r^*=r \in \mathbb R_+$.
- $T_1+2T_2+c \le \alpha \le 3T_1+c$. Now, the supplier’s payoff function is given by $$u^s{\left(}r\mid \alpha{\right)}=r\cdot \begin{cases}q^*_{21}{\left(}r+c{\right)}, & 0\le r \le \alpha-T_1-2T_2-c \\0, & \text{ else}\end{cases}$$ and hence as a quadratic polynomial in $r$, it is maximized at $r^*=r^*_{21}$.
- $3T_1+c\le \alpha$. This case is identical to A5 above.
Collecting the results about the optimal strategy of the supplier in all sub-cases of **A** and **B**, we obtain the claim of Proposition \[prop3.4full\].
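The closed forms of Proposition \[prop3.4full\] can be double-checked by brute force: implement $u^s{\left(}r\mid \alpha{\right)}$ from the second-stage order quantities and compare its numerical maximizer with $r^*$. The parameter values below are illustrative choices of our own:

```python
import math

# Sketch: supplier payoff u^s(r|alpha) = r * q*(r + c), where q* is the
# total second-stage order quantity, and a grid check that the closed-form
# r* of the proposition maximizes it.

def q_total(w, alpha, T1, T2):
    if 3 * T2 + 2 * w < alpha <= 3 * T1 - w:                    # Gamma_31
        return (alpha - 2 * w - 3 * T2) / 3
    if max(3 * T1 - w, T1 + 2 * T2 + w) < alpha <= 3 * T1 + w:  # Gamma_21
        return (alpha - w - T1 - 2 * T2) / 2
    if 3 * T1 + w < alpha:                                      # Gamma_11
        return 2 * (alpha - w - 1.5 * (T1 + T2)) / 3
    return 0.0

def r_star(alpha, c, T1, T2):
    S, D = T1 + T2, T1 - T2
    Delta = (math.sqrt(3) + 3) / 2 * D
    cut_31_21 = 3 * T2 + Delta - (math.sqrt(3) - 1) / 2 * c
    cut_21_11 = 3 * T2 + 2 * Delta + c
    if alpha > cut_21_11:
        return (alpha - c - 1.5 * S) / 2              # r*_11
    if c <= D and alpha <= cut_31_21:                 # case A only
        return (alpha - 2 * c - 3 * T2) / 4           # r*_31
    return (alpha - c - T1 - 2 * T2) / 2              # r*_21

for (T1, T2, c, alphas) in [
        (2.0, 1.0, 0.5, (4.6, 6.0, 10.0)),   # case A: c <= T1 - T2
        (2.0, 1.8, 1.0, (7.0, 9.0))]:        # case B: c >  T1 - T2
    for alpha in alphas:
        grid = [0.0005 * k for k in range(12001)]     # r in [0, 6]
        r_num = max(grid, key=lambda r: r * q_total(r + c, alpha, T1, T2))
        assert abs(r_num - r_star(alpha, c, T1, T2)) < 1e-3
```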
Although, as has already been pointed out, the supplier’s payoff function $u^s{\left(}r\mid \alpha{\right)}$ is continuous at the cutting points of the $\alpha$-intervals, the same is [*not*]{} true for the supplier’s optimal strategy, as can be checked from Proposition \[prop3.4full\]. For given $T_1, T_2, \, T_1 \ge T_2$, not only is $r^*{\left(}\alpha{\right)}$ discontinuous in $\alpha$, but it is also not increasing in $\alpha$ (although it is increasing on each sub-interval). At the points of discontinuity the supplier is indifferent between the left hand side and right hand side strategies. However, a mixture of these strategies does not yield the same payoff, since his payoff is not linear in $r^*$. The reason for these discontinuities is that the supplier faces a piecewise linear demand instead of a linear demand. Therefore, at the cutting points he “jumps” from the $\operatorname*{arg\,max}$ of one linear part to the $\operatorname*{arg\,max}$ of the other linear part, hence the discontinuity. This also explains the decrease of the optimal price as we move from certain cutting points to the right, even if the increase in the demand intercept $\alpha$ is $\epsilon$-small. Of course, one may check that under his optimal strategy, not only is the supplier’s payoff continuous, but it is also an increasing function of the demand intercept $\alpha$, as expected.
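The discontinuity at the cutting point between the $r^*_{21}$- and $r^*_{11}$-regimes can be made concrete: there the two candidate payoffs coincide, while the maximizers differ by exactly $D/4$. A small check with illustrative values $T_1=2$, $T_2=1$, $c=0.5$ (our own choices):

```python
import math

# At alpha_1 = 3*T2 + 2*Delta + c (with 2*Delta = (sqrt(3)+3)*D) the
# supplier is indifferent between r*_21 and r*_11: the candidate payoffs
# u^s(r*_21) = (1/8)(alpha - c - T1 - 2 T2)^2 and
# u^s(r*_11) = (1/6)(alpha - c - (3/2)(T1 + T2))^2 coincide,
# while the optimal margin itself drops by D/4.

T1, T2, c = 2.0, 1.0, 0.5                 # illustrative, c <= T1 - T2
S, D = T1 + T2, T1 - T2
alpha1 = 3 * T2 + (math.sqrt(3) + 3) * D + c

r21 = (alpha1 - c - T1 - 2 * T2) / 2
r11 = (alpha1 - c - 1.5 * S) / 2
u21 = (alpha1 - c - T1 - 2 * T2) ** 2 / 8
u11 = (alpha1 - c - 1.5 * S) ** 2 / 6

assert abs(u21 - u11) < 1e-12             # equal payoffs: indifference
assert abs((r21 - r11) - D / 4) < 1e-12   # optimal margin jumps down by D/4
```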
Case of $n$ identical retailers
------------------------------
For $i\in N$, the best reply correspondence of retailer $R_i$ is given by Lemma \[lem3.2\] if we replace $Q_j$ by $Q_{-i}$. Hence, we may simplify the proof by fixing $i \in N$ and distinguishing the following cases:
- Let $0 \le \alpha \le {\left(}n+1{\right)}T$ and for $N \ni j\neq i$ let $t^*_j=\frac1{n+1}\alpha, q^*_j=0$. Then, $Q^*_{-i}=\frac{n-1}{n+1}\alpha\ge \alpha-2T$. Hence, by Lemma \[lem3.2\], $\operatorname{BR}^i{\left(}Q^*_{-i}{\right)}={\left(}t^*_i, q^*_i{\right)}={\left(}\frac{\alpha}{n+1}, 0{\right)}$.
- Let ${\left(}n+1{\right)}T< \alpha \le {\left(}n+1{\right)}T+w$ and for $N\ni j \neq i$ let $t^*_j=T, q^*_j=0$. Then $Q^*_{-i}={\left(}n-1{\right)}T$ with $\alpha-w-2T\le Q^*_{-i}<\alpha-2T$. Hence, by Lemma \[lem3.2\], $\operatorname{BR}^i{\left(}Q^*_{-i}{\right)}={\left(}t^*_i, q^*_i{\right)}={\left(}T, 0{\right)}$.
- Let ${\left(}n+1{\right)}T+w<\alpha$ and for $N\ni j\neq i$ let $t^*_j=T, q^*_j=\frac1{n+1}{\left(}\alpha-{\left(}n+1{\right)}T-w{\right)}$. Then $Q^*_{-i}=\frac{n-1}{n+1}{\left(}\alpha-w{\right)}<\alpha-w-2T$. As above $\operatorname{BR}^i{\left(}Q^*_{-i}{\right)}={\left(}t^*_i, q^*_i{\right)}={\left(}T, \frac1{n+1}{\left(}\alpha-{\left(}n+1{\right)}T-w{\right)}{\right)}$.
Summing up, we obtain the equilibrium strategies as given in Proposition \[prop7.1\].
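The three cases of the symmetric equilibrium can be checked numerically in the same best-reply fashion; the values $n=3$, $T=1$, $w=1$ below are illustrative assumptions, not from the text:

```python
# Sketch: with n identical retailers, fix retailer i, set the other n-1
# retailers to the claimed equilibrium strategy, and verify by grid search
# that the claimed strategy is indeed a best reply for retailer i.

def claimed(alpha, w, T, n):
    if alpha <= (n + 1) * T:
        return (alpha / (n + 1), 0.0)
    if alpha <= (n + 1) * T + w:
        return (T, 0.0)
    return (T, (alpha - (n + 1) * T - w) / (n + 1))

def grid_best_reply(alpha, w, T, Q_other, steps=400):
    best, arg = float("-inf"), None
    for i in range(steps + 1):
        for j in range(steps + 1):
            t, q = T * i / steps, 4.0 * j / steps
            u = (t + q) * (alpha - Q_other - (t + q)) - w * q
            if u > best:
                best, arg = u, (t, q)
    return arg

n, T, w = 3, 1.0, 1.0          # illustrative assumption
for alpha in (3.0, 4.5, 8.0):  # one value per case
    t_eq, q_eq = claimed(alpha, w, T, n)
    Q_other = (n - 1) * (t_eq + q_eq)
    t_br, q_br = grid_best_reply(alpha, w, T, Q_other)
    assert abs(t_br - t_eq) < 0.02 and abs(q_br - q_eq) < 0.02
```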
[^1]: [email protected]
[^2]: [email protected]
[^3]: This research has been co-financed by the European Union (European Social Fund – ESF) and Greek national funds through the Operational Program “Education and Lifelong Learning” of the National Strategic Reference Framework (NSRF) - Research Funding Program "ARISTEIA II: Optimization of Stochastic Systems Under Partial Information and Application, Investing in knowledge society through the European Social Fund”. Stefanos Leonardos gratefully acknowledges support by a scholarship of the Alexander S. Onassis Public Benefit Foundation.
[^4]: See for an alternative interpretation of these quantities as inventories.
[^5]: After normalization of the slope parameter (initially denoted with $\beta$) of the inverse demand function to $1$, all variables in are expressed in the units of quantity and not in monetary units and therefore any interpretations or comparisons of the subsequent results should be done with caution.
[^6]: Of course, this means that there is no randomness in $\alpha$, so the description of $\alpha$ as a random variable is redundant in the complete information case. We use it just to give a common formal description of both the complete and the incomplete information case.
[^7]: In Proposition \[prop3.3\] we will see that in equilibrium retailer $R_i$ will order no additional quantity if $\alpha\le 3T$.
[^8]: We remind that the consumers surplus is proportional to the square of the total quantity that is released to the market.
[^9]: We use the term “decreasing” in the sense of “non-increasing” (i.e. flat spots are permitted), as is common in the pertinent literature, where this use of the term has been established.
[^10]: To avoid confusion, we use the term “decreases” in the sense of “decreases non-strictly” or “does not increase”, as before.
---
abstract: 'We study a general correspondence principle between discrete-variable quantum states and continuous-variable (especially, restricted to Gaussian) states via a quantum purification method. In previous work, we investigated an information-theoretic correspondence between the Gaussian maximally mixed states (GMMSs) and their purifications, known as Gaussian maximally entangled states (GMESs), in \[Phys. Lett. A [**380**]{}, 3607 (2016)\]. We here compare an $N\times N$-dimensional maximally entangled state to the GMES we proposed previously, through an explicit calculation of the quantum fidelity between those entangled states. By exploiting the results, we naturally conclude that our GMES better fits the concept of a *maximally entangled* state in Gaussian quantum information, and thus it might be more useful or applicable for quantum information tasks than the two-mode squeezed vacuum (TMSV) state in the Gaussian regime.'
author:
- Youngrong Lim
- Jaewan Kim
- Soojoon Lee
- Kabgyun Jeong
title: Correspondence between maximally entangled states in discrete and Gaussian regimes
---
Introduction {#intro}
============
Quantum information science in a finite dimensional Hilbert space has been well established, not only in theory but also in experiment, since the 1980s [@NC00; @W13]. It enables us to go beyond the limits of classical physics for many computation and communication tasks, such as quantum speed-up algorithms [@S97; @G97], quantum key distribution (QKD) protocols [@BB84; @E91; @B92], and non-additive channel-capacity problems [@SY08; @H09; @LWZG09].
However, for the *infinite*-dimensional Hilbert space, there is still a subtle gap to the finite case in its concrete interpretation, because such a space has infinitely many degrees of freedom in general, and thus we must face many unphysical situations. For example, we know that the EPR state [@EPR] is a kind of maximally entangled state in the continuous-variable (CV) regime, but it is actually unphysical (i.e., it is unnormalizable). Nevertheless, investigations of CV systems are steadily performed, since the corresponding experiments work well and are comparatively easy in the continuous-variable setting. In particular, the usual components of quantum optics, such as beam splitters, squeezers, and lasers, are well described by Gaussian states and Gaussian unitary operations alone, which are special kinds of continuous-variable states and operations.
Though both discrete-variable (DV) and CV quantum information theory are important, they have been investigated separately, without any obvious correspondence, in the quantum information community. Finding a general correspondence principle between the CV and DV quantum regimes was suggested in Ref. [@Caslav03], but a comprehensible connection is still missing. As an easier question, we can first ask for a correspondence relation between DV quantum states and Gaussian states, instead of general CV states themselves. Gaussian states still live in the infinite-dimensional Hilbert space (i.e., phase space), but all the properties we are interested in are encoded in their covariance matrices, whose dimensions are finite. Much research has been performed on Gaussian quantum information [@rmp], including maximally entangled states in the Gaussian regime [@GMMES]; nevertheless, there is still no clear connection from infinite- to finite-dimensional quantum states.
In the previous work [@JeongLim], we proposed several candidates for Gaussian maximally entangled states (GMESs), given by the purification process over Gaussian maximally mixed states (GMMSs) with coherent states as well as arbitrary squeezed Gaussian states [@Bradler; @JKL15]. Here, we compare our specific GMES with the maximally entangled state (MES) of $N \times N$ dimension. When an appropriate parameter is chosen, the quantum fidelity between the MES ($N >200$) and our GMES is close to 0.99, a much greater value than that for the two-mode squeezed vacuum (TMSV) state, which is another well-known candidate for the GMES. We also check Bell violation for the two-qutrit case, and this result confirms that our GMES violates Bell’s inequality more strongly than the TMSV state under the constraint of the same average photon number.
In Section \[genco\], we introduce several ideas for a corresponding framework between DV quantum states and general CV (or Gaussian) quantum states. In Section \[CVDV\], we focus on the GMES case and give a brief review of previous works, including the quantum purification method in the Gaussian regime. By analyzing Bell functions and the quantum fidelity of Gaussian quantum states, our main results are described in Section \[main\], with several remarks. Finally, we summarize our results and discuss possible future work in Section \[discussion\].
Correspondence framework: quantum DV vs. CV systems {#genco}
===================================================
Since a CV quantum state lives in an infinite-dimensional Hilbert space, it has infinitely many degrees of freedom in general. Therefore, it is no overstatement that a generic CV quantum state potentially possesses more resources than a DV state for performing quantum information processing. In this sense, finding a specific correspondence relation between CV states and DV states of arbitrary dimension is a reasonable and important task. One method is to first find a mapping between DV states of different dimensions, and then take the limit of the dimension going to infinity for one of those states. For an $n$-dimensional quantum state, we have the generators of the SU($n$) algebra from the transition operators [@Hioe]. Then we consider another $N$-dimensional quantum state with $N\geq n$ and assume, for simplicity, that $N$ is divisible by $n$. By using an appropriate coarse-graining method, we can obtain $N/n$ copies of the SU($n$) algebra and sum them up to finally get the generators of an SU($n$) algebra fully spanned in $N$ dimensions [@Caslav03]. A problem arises, however, when we take the limit $N \rightarrow \infty$ to obtain a CV state. In this limit, roughly speaking, a non-MES can be mapped onto the MES of finite dimension; in other words, there might be several ways of mapping a finite-dimensional MES to the CV state.
There is another example showing the correspondence between CV and DV states under nonlinear quantum optical settings [@Van; @Lee; @Kim]. In fact, a coherent state is a superposition of pseudo-number states, which are $d$-modulo photon number states, and can be written as ${\left|\alpha\right>}={1 \over \sqrt{d}} \sum_{k=0}^{d-1}{\left|k_d\right>}$, where ${\left|k_d\right>}$ is a pseudo-number state with ‘$k \mod d$’ number of photons [@Kim]. After a cross-Kerr interaction represented by $e^{{{2\pi i}\over{d}}\hat{n}_1\hat{n}_2}$ on the two-mode initial state ${\left|\alpha\right>}_1{\left|\alpha\right>}_2$, the MES of pseudo-number states and pseudo-phase states can be produced, if $|\alpha| > d$ holds, as follows: $$\label{selfkerr}
{\left|\alpha\right>}_1\otimes{\left|\alpha\right>}_2\overset{\underset{{{\textnormal{Cross-Kerr}}}}{}}{\longmapsto}{1\over\sqrt{d}}\sum_{k=0}^{d-1}{\left|k_d\right>}_1\otimes|\tilde{k}_d\rangle_2,$$ where $|\tilde{k}_d\rangle$ is a pseudo-phase state which is equivalent to a ${2\pi k}\over{d}$-phase-shifted coherent state $|e^{{2\pi k i}\over{d}} \alpha\rangle$.
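The role of the condition $|\alpha| > d$ can be illustrated with the standard coherent-state overlap formula $|\langle\beta|\gamma\rangle| = e^{-|\beta-\gamma|^2/2}$: neighboring pseudo-phase states $|e^{2\pi i k/d}\alpha\rangle$ become nearly orthogonal only for sufficiently large $|\alpha|$. A small check ($d=4$ and the amplitudes are illustrative choices of our own):

```python
import cmath
import math

# Overlap magnitude of two coherent states:
# |<beta|gamma>| = exp(-|beta - gamma|^2 / 2)  (standard result).
def coherent_overlap(beta, gamma):
    return math.exp(-abs(beta - gamma) ** 2 / 2)

d = 4
for amp, nearly_orthogonal in [(4.0, True), (0.5, False)]:
    alpha = complex(amp, 0.0)
    beta = cmath.exp(2j * math.pi / d) * alpha  # neighboring pseudo-phase state
    ov = coherent_overlap(alpha, beta)
    if nearly_orthogonal:
        assert ov < 1e-6          # |alpha| = d: overlap ~ e^{-16}
    else:
        assert ov > 0.5           # small |alpha|: far from orthogonal
```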
The simplest example is quantum information processing with even- and odd-cat states representing the logical 0- and 1-qubit [@Yurke; @Sanders]. This scheme is quite simple, but the pseudo-number states become orthogonal only when $|\alpha|$ gets larger than $d$, and then a stronger cross-Kerr nonlinearity is required. Experimental generation of this type of MES from coherent states with large amplitude is quite challenging, since the strength of available Kerr interactions is usually extremely weak.
Although it is possible to create maximally entangled states through nonlinear media as in Eq. (\[selfkerr\]) above, we can easily observe that such processes involving general CV states are still problematic as well as unsuccessful in practice. Instead, we move our focus to Gaussian states in order to deal with finitely many degrees of freedom only. A Gaussian state is a quantum state whose Wigner function has a Gaussian shape, i.e., a normal distribution. Since we can always shift the mean value by a local displacement operation, the only relevant information of a Gaussian state lies in the second moments of the canonical variables, which are collected in the covariance matrix (CvM). The CvM is a positive, real symmetric $2n\times 2n$ matrix for any $n$-mode Gaussian state. A Gaussian operation on Gaussian states is defined as a unitary operation preserving the Gaussian character, which corresponds to the group Sp($2n$,${{\mathbb{R}}}$) (together with displacements) for an $n$-mode Gaussian state [@Lupo14].
There still exists a subtle problem of dimension-mode matching, however, even though we are dealing only with the finite degrees of freedom of Gaussian states. For the simplest example, let us consider a discrete quantum state of dimension 2 (i.e., a qubit) and a single-mode Gaussian state. In this case, the number of free parameters is 3 in both cases, taking normalization into account. The next non-trivial example is a quantum state of dimension 4 (the two-qubit case) and a two-mode Gaussian state. Here, the number of free parameters of the two-qubit state is 15, but that of the two-mode Gaussian state is only 10 [@Strang]. Consequently, we cannot make a simple correspondence between $d$-dimensional DV and $n$-mode Gaussian quantum states in general.
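The counting can be stated compactly: a $d$-dimensional density matrix has $d^2-1$ real free parameters, while a zero-mean $n$-mode Gaussian state is fixed by the $n(2n+1)$ independent entries of its symmetric $2n\times 2n$ CvM. A quick check of the numbers quoted above:

```python
# Free-parameter counts: DV density matrix vs. (zero-mean) Gaussian state.
def dv_params(d):          # d x d Hermitian, unit trace
    return d * d - 1

def gaussian_params(n):    # symmetric 2n x 2n covariance matrix
    return n * (2 * n + 1)

assert dv_params(2) == 3 and gaussian_params(1) == 3    # qubit vs. one mode
assert dv_params(4) == 15 and gaussian_params(2) == 10  # two qubits vs. two modes
# beyond the first pair, the counts never agree
mismatches = [(d, n) for d, n in [(4, 2), (8, 3), (16, 4)]
              if dv_params(d) != gaussian_params(n)]
assert mismatches == [(4, 2), (8, 3), (16, 4)]
```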
Correspondence between MES and Gaussian MES {#CVDV}
===========================================
As seen in Section \[genco\], finding a general correspondence relation between Gaussian states and DV quantum states is still challenging. Thus we shift our focus to the simpler case, namely, the correspondence between maximally entangled states (MESs) in both regimes. Before describing our main results in detail, we briefly review what Gaussian states and Gaussian operations are, and how the maximally mixed state (MMS) and MES can be defined in the Gaussian regime.
The EPR state can be matched to a MES of arbitrary dimension. In the Gaussian regime, the TMSV state approaches the EPR state in the limit of *infinite* squeezing [@NOPA]. Acting on the two-mode vacuum state, it is given by $$\label{TMSV}
{\left|\psi(r)\right>}_{{\textnormal{TMSV}}}=e^{\frac{r}{2}(\hat{a}\hat{b}-\hat{a}^\dag\hat{b}^\dag)}{\left|0\right>}_A{\left|0\right>}_B,$$ where $r$ is the squeezing parameter and $\hat{a}$ and $\hat{b}$ are the bosonic field operators. The state can also be derived as the quantum purification of the thermal state, ${\left|\psi(r)\right>}_{{\textnormal{TMSV}}}=\sum_{n=0}^\infty\frac{(\tanh r)^n}{\cosh r}{\left|n\right>}_A{\left|n\right>}_B$ in the Fock basis, with average photon number $\bar{n}=\sinh^2r$. Since infinite squeezing is not a physically realizable operation, we have to consider only finite squeezing. In other words, when dealing with Gaussian states physically, we need to confine the states within an appropriate energy bound. We may then expect that a TMSV state with finite squeezing parameter matches the MES of some specific dimension.
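The Fock-basis form above is easy to verify numerically: the coefficients $(\tanh r)^n/\cosh r$ are normalized and reproduce $\bar{n}=\sinh^2 r$. A minimal sketch (the squeezing value and the Fock cutoff are illustrative choices, not values used in the paper):

```python
import numpy as np

# Fock-basis Schmidt coefficients of the TMSV state: c_n = (tanh r)^n / cosh r
r = 1.2                        # squeezing parameter (illustrative)
n = np.arange(400)             # Fock cutoff; the geometric tail is negligible here
c = np.tanh(r)**n / np.cosh(r)
probs = c**2

print(probs.sum())             # ~1: the state is normalized
print((n * probs).sum())       # ~sinh^2(r): the mean photon number
```

The two printed values confirm normalization and the stated relation $\bar{n}=\sinh^2 r$ for any finite $r$.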
The thermal state can be expressed in the coherent-state basis as $\rho_{{\textnormal{th}}}(\bar{n})=\frac{1}{\bar{n}\pi}\int e^{-\frac{|\alpha|^2}{\bar{n}}}|\alpha\rangle\!\langle\alpha| d^2\alpha$, where $\bar{n}$ is the average photon number. As $\bar{n}$ goes to infinity, the thermal state clearly acquires a uniform distribution with respect to the coherent-state basis over the entire phase space; in this sense it is a Gaussian maximally mixed state (GMMS). Alternatively, another version of the GMMS is possible [@Bradler], namely $$\begin{aligned}
\label{CVMMS}
\rho_{{{\textnormal{GMMS}}}}|_b&:=\frac{{{\mathbbm{1}}}_b}{C_b}= \frac{1}{C}\int_b {| \alpha \rangle\!\langle \alpha |}d^2\alpha \nonumber\\
&=\frac{1}{b^2}\sum_{n=0}^\infty\left(1-\sum_{k=0}^n\frac{b^{2k}}{k!}e^{-b^2}\right){| n \rangle\!\langle n |},\end{aligned}$$ where $C=\pi b^2$ is the normalization constant. Since the coherent state ${\left|\alpha\right>}$ is a Gaussian state, this GMMS is a convex sum of all coherent states within radius $b$ of the origin of phase space. This state indeed has the property of an MMS in the sense that it can be used for a private quantum channel, and the same holds for its generalization including squeezing operations [@JKL15].
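Eq. (\[CVMMS\]) can be checked numerically: the photon-number weight $1-\sum_{k=0}^n b^{2k}e^{-b^2}/k!$ is the regularized lower incomplete gamma function $P(n+1,b^2)$, and the weights sum to one for any boundary $b$. A minimal sketch (the value of $b$ and the Fock cutoff are illustrative):

```python
import numpy as np
from scipy.special import gammainc

b = 3.0                        # boundary radius (illustrative)
n = np.arange(400)             # Fock cutoff; ample for b^2 = 9

# p_n = (1 - sum_{k<=n} b^{2k} e^{-b^2}/k!) / b^2
#     = gammainc(n+1, b^2) / b^2  (regularized lower incomplete gamma)
p = gammainc(n + 1, b**2) / b**2

print(p.sum())                 # ~1: the GMMS is normalized
```

The identity behind the check is $\sum_{n\ge 0} \Pr[K>n] = \mathbb{E}[K] = b^2$ for a Poisson variable $K$ with mean $b^2$.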
We can consider a purification of $\rho_{{{\textnormal{GMMS}}}}|_b$ as $$|\psi(b)\rangle_{{{\textnormal{GMES}}}}=\sum_{n=0}^{\infty}\sqrt{f(n,b)}|n\rangle _A |n\rangle _B,$$ where the coefficient is explicitly given by $f(n,b)=(1-\sum_{k=0}^n\frac{b^{2k}}{k!}e^{-b^2})/b^2$ [@JeongLim]. Like the TMSV state, this state is another candidate for the GMES as $b \rightarrow \infty$, and we infer that it can be related to the MES of a certain dimension when $b$ is finite. This raises a natural question: which candidate is more appropriate as the GMES? We answer this question quantitatively in the next section, which contains our main results.
![Maximal Bell functions of the two-qutrit case for our GMES and the TMSV state versus the average photon number $\bar{n}$. The maximal Bell function of the GMES is larger than that of the TMSV state once it exceeds the local realistic bound 2. []{data-label="fig1"}](Fig1.pdf){width="9.3cm"}
Main results {#main}
============
We first introduce a Bell test on two-qutrit systems and investigate the violation of Bell’s inequality by comparing two specific entangled candidates. For the MES of a bipartite qudit system, the amount of violation of Bell’s inequality is already known [@CGLMP]. However, the coefficients of the GMESs we propose are not uniform for finite squeezing or at a given boundary, so we cannot directly apply this method to our case. An analytic formula for the maximal Bell function is, however, known in the less general two-qutrit case [@qutrit] for states of the form $$\label{qutrits}
{\left|\psi\right>}=\sum_{k=0}^2 a_k {\left|k\right>}_A{\left|k\right>}_B,$$ where the $a_k$ are real coefficients and ${\left|k\right>}_{A(B)}$ denotes the orthonormal basis of the qutrit on system $A(B)$. For the two-qutrit case, the Bell-CHSH inequality can be expressed as [@CHSH1; @CHSH2] $$\label{CHSH}
-4\leq {{\mathcal{B}}} \leq 2.$$ The Bell function (operator) ${{\mathcal{B}}}$ is written in the form $$\begin{aligned}
\label{Bell}
{{\mathcal{B}}}:=&{{\textnormal{Re}}}\left[Q_{11}+Q_{12}-Q_{21}+Q_{22}\right] \nonumber\\
&+\tfrac{1}{\sqrt{3}}{{\textnormal{Im}}}\left[Q_{11}-Q_{12}-Q_{21}+Q_{22}\right],\end{aligned}$$ where $Q_{ij}$ is the correlation function between Alice’s and Bob’s measurements for two observables each ($i,j\in\{1,2\}$). The maximal value of the Bell function, Eq. (\[Bell\]), is known to be ${{\mathcal{B}}}_{\max}=4|a_0a_1|+4/\sqrt{3}(|a_0a_2|+|a_1a_2|)$ [@qutrit], provided $\max\{ |a_0|,|a_1|,|a_2|\} \leq \sqrt{18+9\sqrt{3}}/2$, which holds in our case. We have $a_k={(\tanh{r})^k / \sqrt{1+(\tanh{r})^2+(\tanh{r})^4}}~$ for the TMSV state, and $$a_k=\frac{\sqrt{1-\frac{\Gamma \left(k+1,b^2\right)}{\Gamma (k+1)}}}{\sqrt{-e^{-b^2}-\Gamma \left(2,b^2\right)-\frac{\Gamma \left(3,b^2\right)}{2}+3}}(:=b_k)$$ for our GMES, where $\Gamma(n+1,b^2)=\int_{b^2}^\infty t^n e^{-t}dt$ is the incomplete gamma function. (For convenience, the notation $b_k$ emphasizes the boundary radius $b$ of the Gaussian state.) Fig. \[fig1\] shows that our GMES attains a higher maximal Bell function than the TMSV state once the local realistic bound is passed, though the difference is small. Note that in the limit $\bar{n}\rightarrow \infty$ the maximal Bell function approaches $\sim$2.873, the known value for maximally entangled two-qutrit states. In the low range of the average photon number $\bar{n}$, our GMES achieves maximal entanglement more efficiently; this matters because the average photon number is a physical resource for generating quantum states in real experiments.
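As a quick numerical check, the two-qutrit formula for ${\mathcal{B}}_{\max}$ can be evaluated on the TMSV coefficients above; this minimal sketch (squeezing values are illustrative) reproduces the quoted strong-squeezing limit of about 2.873:

```python
import numpy as np

def bell_max(a):
    # B_max = 4|a0 a1| + (4/sqrt(3)) (|a0 a2| + |a1 a2|)  [two-qutrit formula]
    a0, a1, a2 = np.abs(a)
    return 4 * a0 * a1 + (4 / np.sqrt(3)) * (a0 * a2 + a1 * a2)

def tmsv_qutrit(r):
    # a_k = (tanh r)^k / sqrt(1 + tanh^2 r + tanh^4 r)
    t = np.tanh(r)
    a = np.array([1.0, t, t**2])
    return a / np.linalg.norm(a)

# strong squeezing: a_k -> 1/sqrt(3) and B_max -> ~2.873,
# the known value for the maximally entangled two-qutrit state
print(bell_max(tmsv_qutrit(10.0)))
```

For moderate squeezing (e.g., $r=1$) the value already exceeds the local realistic bound 2.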
![(a), (b) Fidelities between GMESs/TMSV states and MESs of $N \times N$ dimension. The fidelities of GMESs are much higher than those of TMSV states for $N=5,20,200$, and 1000. (c) Fidelity between various MESs and the GMES with $b=15$; the maximum fidelity is around 0.99. (d) Fidelity between various MESs and the TMSV state with $r=5$; the maximum fidelity is around 0.9.[]{data-label="fig2"}](Fig2.pdf){width="9.2cm"}
For our original purpose, however, we need a more direct comparison with maximally entangled states. For this reason, we check which dimension of MES corresponds to the TMSV state or the GMES with finite parameters by calculating the fidelity between those states. The fidelity between two pure states ${\left|\psi\right>}$ and ${\left|\phi\right>}$ is defined by $F:=| \langle \psi | \phi \rangle |$; we can therefore easily calculate and plot these results, as in Fig. \[fig2\]. Note that in the limit $N \rightarrow \infty$ the fidelities converge to unity in both cases. However, our GMES is much more practical in the sense that its fidelity already exceeds 0.99 for $N=200$, whereas the fidelity of the TMSV state remains around 0.9 even for very high $N$. In other words, if we want a Gaussian state corresponding to an $N\times N$ maximally entangled state, our GMES is more appropriate than the TMSV state. Conversely, as in Fig. \[fig2\] (c) and (d), we also obtain high fidelities when we use the GMES with a specific $b$. This can be understood from the fact that our GMES derives from the uniform distribution of Eq. (\[CVMMS\]), whereas the TMSV state derives from the thermal state, whose distribution is Gaussian. Since our GMMS already has “infinite temperature” within the boundary $b$, the purified GMES has a more uniform distribution, which corresponds better to the discrete-variable MES.
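The fidelity comparison can be sketched numerically. Note the assumptions here: the Fock cutoff, the choice $b=15$, and matching the TMSV mean photon number to that of the GMES (which works out to $b^2/2$) are our choices for a fair comparison, not parameters taken from the figures. The qualitative gap mirrors Fig. \[fig2\]: the GMES fidelity peaks well above the TMSV fidelity.

```python
import numpy as np
from scipy.special import gammainc

def fidelity_with_mes(coeffs, N):
    # F = |<MES_N|psi>|, with |MES_N> = (1/sqrt(N)) sum_{n<N} |n>|n>
    return coeffs[:N].sum() / np.sqrt(N)

cutoff = 2000
n = np.arange(cutoff)

b = 15.0
gmes = np.sqrt(gammainc(n + 1, b**2) / b**2)    # sqrt(f(n, b))

r = np.arcsinh(b / np.sqrt(2))                  # sinh^2 r = b^2/2, same mean photon number
tmsv = np.tanh(r)**n / np.cosh(r)

best_g = max(fidelity_with_mes(gmes, N) for N in range(1, 1001))
best_t = max(fidelity_with_mes(tmsv, N) for N in range(1, 1001))
print(best_g, best_t)    # GMES peak is markedly higher than the TMSV peak
```

The flat Schmidt spectrum of the GMES (roughly uniform up to $n\approx b^2$) is what pushes its overlap with a finite-dimensional MES close to unity, while the geometric TMSV spectrum saturates near 0.9.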
Experimentally, it is well known that the TMSV state can be generated for relatively small squeezing parameters [@TMSV]. Although we do not yet have a concrete scheme for generating our GMES, we do have one for the GMMS: an input coherent state passes through a phase shifter and a beam splitter whose phase and transmissivity are unknown. The resulting state then has completely unknown phase and amplitude, and only the boundary $b$ can be estimated from the input amplitude. This is exactly the GMMS of Eq. (\[CVMMS\]).
Discussion
==========
We showed that the two-mode Gaussian maximally entangled state (GMES) we propose corresponds more closely to the discrete MES than the TMSV state does. In contrast to the discrete case, there may be many kinds of Gaussian maximally entangled states. Our GMES might be *optimal*, since it derives from the uniform distribution of Eq. (\[CVMMS\]), although a more rigorous treatment of this point is needed.
Several problems remain. First of all, we do not yet know a method for the experimental realization of the GMES; it might be obtained from the purification of the GMMS or by some other means. Second, we only investigated the correspondence between MMS/MES in the Gaussian and discrete regimes. In particular, a two-mode state suffices for this MES-GMES correspondence, but we should consider multi-mode Gaussian states for the general case. Multi-mode Gaussian states have a rich structure, including genuine multipartite entanglement, and behave very differently even in the three-mode case [@multi].
Another interesting problem concerns bound entangled states, which are entangled but not distillable. In the discrete case, the minimal dimension for a bound entangled state is $2 \otimes 4$ or $3 \otimes 3$. By contrast, $2 \oplus 2$ modes is the minimum for Gaussian states, and there is no bound entanglement for $1 \oplus n$ modes [@bound]. This implies that the mode-dimension correspondence problem is still open, and we will study bound entangled states in both regimes in the next simplest case.
ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============
This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1A6A3A01007264) and the Ministry of Science and ICT (NRF-2016R1A2B4014928). J.K. appreciates the financial support by the KIST Institutional Program (Project No. 2E26680-16-P025). K.J. acknowledges financial support by the National Research Foundation of Korea (NRF) through a grant funded by the Korean government (Ministry of Science and ICT) (NRF-2017R1E1A1A03070510 & NRF-2017R1A5A1015626).
[32]{} M. A. Nielsen and I. L. Chuang, [*Quantum Computation and Quantum Information*]{} (Cambridge University Press, Cambridge, 2000).
M. M. Wilde, [*Quantum Information Theory*]{} (Cambridge University Press, Cambridge, 2013).
P. W. Shor, SIAM Journal on Computing [**26**]{}, 1484 (1997).
L. K. Grover, Phys. Rev. Lett. [**79**]{}, 325 (1997).
C. H. Bennett and G. Brassard, In *Proceedings of IEEE International Conference on Computers, Systems and Signal Processing*, [**175**]{}, p. 8 (New York, 1984).
A. K. Ekert, Phys. Rev. Lett. [**67**]{}, 661 (1991).
C. H. Bennett, Phys. Rev. Lett. [**68**]{}, 3121 (1992).
G. Smith and J. Yard, Science [**321**]{}, 1812 (2008).
M. B. Hastings, Nature Phys. [**5**]{}, 255 (2009).
K. Li, A. Winter, X. Zou, and G. Guo, Phys. Rev. Lett. [**103**]{}, 120501 (2009).
A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. [**47**]{}, 777 (1935).
C. Brukner, M. S. Kim, J.-W. Pan, and A. Zeilinger, Phys. Rev. A [**68**]{}, 062105 (2003).
C. Weedbrook, S. Pirandola, R. García-Patrón, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, Rev. Mod. Phys. [**84**]{}, 621 (2012).
P. Facchi, G. Florio, C. Lupo, S. Mancini, and S. Pascazio, Phys. Rev. A [**80**]{}, 062311 (2009).
K. Jeong and Y. Lim, Phys. Lett. A [**380**]{}, 3607 (2016).
K. Brádler, Phys. Rev. A [**72**]{}, 042313 (2005).
K. Jeong, J. Kim, and S.-Y. Lee, Sci. Rep. [**5**]{}, 13974 (2015).
F. T. Hioe and J. H. Eberly, Phys. Rev. Lett. [**47**]{}, 838 (1981).
S. van Enk, Phys. Rev. Lett. [**91**]{}, 017902 (2003).
Y. W. Cheong and J. Lee, J. Korean Phys. Soc. [**51**]{}, 1513 (2007).
J. Kim, J. Lee, S.-W. Ji, H. Nha, P. M. Anisimov, and J. P. Dowling, Opt. Comm. [**337**]{}, 79 (2015).
B. Yurke and D. Stoler, Phys. Rev. Lett. [**57**]{}, 13 (1986).
B. C. Sanders, Phys. Rev. A [**45**]{}, 6811 (1992).
C. Lupo, S. Mancini, A. De Pasquale, P. Facchi, G. Florio, and S. Pascazio, J. Math. Phys. [**53**]{}, 122209 (2012).
G. Strang, [*Introduction to Linear Algebra*]{} (Wellesley-Cambridge Press, Wellesley, 1993).
K. Banaszek and K. Wódkiewicz, Acta Phys. Slov. [**49**]{}, 491 (1999).
D. Collins, N. Gisin, N. Linden, S. Massar, and S. Popescu, Phys. Rev. Lett. [**88**]{}, 040404 (2002).
L.-B. Fu, J.-L. Chen, and X.-G. Zhao, Phys. Rev. A [**68**]{}, 022323 (2003).
D. Kaszlikowski, L. C. Kwek, J. L. Chen, M. Żukowski, and C. H. Oh, Phys. Rev. A [**65**]{}, 032118 (2002).
J.-L. Chen, D. Kaszlikowski, L. C. Kwek, and C. H. Oh, Mod. Phys. Lett. A [**17**]{}, 2231 (2002).
Y. Kurochkin, A. S. Prasad, and A. I. Lvovsky, Phys. Rev. Lett. [**112**]{}, 070402 (2014).
G. Adesso, A. Serafini, and F. Illuminati, Phys. Rev. A [**73**]{}, 032345 (2006).
R. F. Werner and M. M. Wolf, Phys. Rev. Lett. [**86**]{}, 3658 (2001).
---
abstract: 'Identifying misinformation is increasingly being recognized as an important computational task with high potential social impact. Misinformation and fake content are injected into almost every domain of news, including politics, health, science, and business; among these, fake content in the health domain can seriously scare or harm society. Such misinformation often contains scientific claims or content from social media exaggerated with strong emotional content to attract eyeballs. In this paper, we consider the utility of the affective character of news articles for fake news identification in the health domain and present evidence that emotion-cognizant representations are significantly better suited for the task. We outline a technique to leverage emotion intensity lexicons to develop emotionized text representations, and evaluate the utility of such representations for identifying fake health news in various supervised and unsupervised scenarios. The consistent and significant empirical gains that we observe over a range of technique types and parameter settings establish the utility of the emotional information in news articles, an often overlooked aspect, for the task of misinformation identification in the health domain.'
author:
- Anoop K
- 'Deepak P\*'
- Lajish V L
bibliography:
- 'refs.bib'
date: 'Received: date / Accepted: date'
title: 'Emotion Cognizance improves Health Fake News Identification [^1] '
---
Introduction {#intro}
============
The spread of misinformation is increasingly being recognized as an enormous problem. In recent times, misinformation has been reported to have grave consequences such as causing accidents [@ma2016detecting], while fake news around election times has reportedly reached millions of people [@allcott2017social], raising concerns as to whether it might have influenced the electoral outcome. [*Post-Truth*]{} was recognized as the Oxford Dictionary Word of the Year in 2016[^2]. These developments have spawned extensive interest in the data analytics community in devising techniques to detect fake news in social and online media leveraging content, temporal and structural features (e.g., [@kwon2013prominent]). A large majority of research efforts on misinformation detection has focused on the political domain within microblogging environments (e.g., [@zhao2015enquiring; @ma2016detecting; @ma2017detect; @qazvinian2011rumor; @zubiaga2016learning; @castillo2013predicting; @zhang2017detecting]) where structural (e.g., the user network) and temporal propagation information (e.g., re-tweets in Twitter) are available in plenty.
Fake news and misinformation within the health domain have been increasingly recognized as a problem of immense significance. As a New York Times article suggests, [*‘Fake news threatens our democracy. Fake medical news threatens our lives’*]{} [^3]. Fake health news is markedly different from fake news in politics or event-based contexts on at least two major counts; first, they originate in online websites with limited potential for dense and vivid digital footprints unlike social media channels, and secondly, the core point is conveyed through long and nuanced textual narratives. Perhaps in order to aid their spread, the core misinformation is often intertwined with trustworthy information. They may also be observed to make use of an abundance of anecdotes, conceivably to appeal to the readers’ own experiences or self-conscious emotions (defined in [@tracy2004putting]). This makes health misinformation detection a challenge more relevant to NLP than other fields of data analytics.
We target detection of health fake news within quasi-conventional online media sources which contain information in the form of articles, with content generation performed by a limited set of people responsible for it. We observe that the misinformation in these sources is typically of the kind where scientific claims or content from social media are exaggerated or distilled either knowingly or maliciously (to attract eyeballs). Some example headlines and excerpts from health fake news articles we crawled are shown in Table \[tab:examples\]; these illustrate, besides other factors, the profusion of trustworthy information within them and the abundantly emotion-oriented narrative they employ. Such sources resemble newspaper websites in that consumers are passive readers whose consumption of the content happens outside social media platforms. This makes fake news detection a challenging problem in this realm since techniques are primarily left to work with just the article content - as against within social media where structural and temporal data offer ample clues - in order to determine their veracity.
Our Contribution {#sec:1.1}
----------------
In this paper, we consider the utility of the affective character of article content for the task of health fake news identification, a novel direction of inquiry though related to the backdrop of fake news identification approaches that target exploiting satire and stance [@rubin2016fake; @chopra2017towards]. We posit that fake and legitimate health news articles espouse different emotional characters that may be effectively utilized to improve fake news identification. We develop a simple method to amplify emotion information within documents by leveraging emotion lexicons, and empirically illustrate that such amplification helps significantly in improving the accuracy of health fake news identification within both supervised and unsupervised settings. Our emotion-enrichment method is intentionally of simple design in order to illustrate the generality of the point that emotion cognizance improves health fake news detection. While the influence of emotions on persuasion has been discussed in recent studies [@vosoughi2018spread; @majeed2017want], our work provides the first focused data-driven analysis and quantification of the relationship between emotions and health fake news. Through illustrating that there are significant differences in the emotional character of fake and legitimate news in the health domain in that exaggerating the emotional content aids techniques that would differentiate them, our work sets the stage for further inquiry into identifying the nature of the differences in the emotional content.
The objective of our study is motivated by the need to illustrate the generality of the point that emotion cognizance improves fake news detection (as indicated or informally observed in various studies e.g., [@vosoughi2018spread; @majeed2017want]). Accordingly, we devise a methodology to leverage external emotion lexicons to derive emotion-enriched textual documents. Our empirical study in using these emotion-enriched documents for supervised and unsupervised fake news identification tasks establish that emotion cognizance improves the accuracy of fake news identification. This study is orthogonal but complementary to efforts that rely heavily on non-content features (e.g., [@wu2018tracing]).
Related Work {#sec:rel}
============
Our particular task, that of understanding the prevalence of emotions and its utility in detecting fake news in the health domain, has not been subject to much attention from the scholarly community. Herein, we survey two streams of related work very pertinent to our task, that of general fake news detection, and secondly, those relating to the analysis of emotions in fake news.
Fake News Detection {#sec:2.1}
-------------------
Owing to the emergence of much recent interest in the task of fake news detection, there have been many publications on this topic in the last few years. A representative and a non-comprehensive snapshot of work in the area appears in Table \[tab:lit\]. As may be seen therein, most efforts have focused on detecting misinformation within microblogging platforms using content, network (e.g., user network) and temporal (e.g., re-tweets in Twitter) features within the platform itself [@anoop2019leveraging]; some of them, notably [@wu2018tracing], target scenarios where the candidate article itself resides outside the microblogging platform, but the classification task is largely dependent on information within. An emerging trend, as exemplified by [@ma2017detect; @wu2018tracing], has been to focus on how information propagates within the microblogging platform, to distinguish between misinformation and legitimate ones. Unsupervised misinformation detection techniques [@zhang2017detecting; @zhang2016distance] start with the premise that misinformation is rare and of differing character from the large majority, and use techniques that resemble outlier detection methods in flavor. Of particular interest is a recent work [@guo2019exploiting] that targets to exploit emotions for fake news detection within microblogging platforms. This makes extensive usage of the [*publisher emotions*]{}, the emotions expressed in the content, and [*social emotions*]{}, the emotions expressed in responses, in order to improve upon the state-of-the-art in fake news detection accuracies. To contrast with this stream of work on fake news detection, it may be noted that our focus is on the health domain where information is usually in the form of long textual narratives, with limited information on the responses, temporal propagation and author/spreader/reader network structure available for the technique to make a veracity decision.
[llccc]{} & &\
& & Content & Network & Temporal\
\
[@kwon2013prominent] & Twitter & & &\
[@zubiaga2017exploiting] & Twitter & & &\
[@qazvinian2011rumor] & Twitter & & &\
[@wu2018tracing] & Twitter & & &\
[@ma2016detecting] & Twitter & & &\
[@zhao2015enquiring] & Twitter & & &\
[@ma2017detect] & Twitter & & &\
[@guo2019exploiting] & Weibo & & &\
\
[@zhang2017detecting]& Weibo & & &\
[@zhang2016distance] & Weibo & & &\
Emotions and Fake News {#sec:2.2}
----------------------
Fake news is generally crafted with the intent to mislead, and thus narratives powered with strong emotion content may be naturally expected within them. [@bakir2018fake] analyze fake news vis-a-vis emotions and argue that what is most significant about the contemporary fake news furore is what it portends: the use of personally and emotionally targeted news produced by journalism referring to what they call as “empathic media”. They further go on to suggest that the commercial and political phenomenon of empathically optimised automated fake news is on the near-horizon, and is a challenge needing significant attention from the scholarly community. A recent study, [@paschen2019investigating], conducts an empirical analysis on 150 real and 150 fake news articles from the political domain, and report finding significantly more negative emotions in the titles of the latter. Apart from being distinctly different in terms of domain, our focus being health (vs. politics for them), we also significantly differ from them in the intent of the research; our work is focused not on identifying the tell-tale emotional signatures of real vis-a-vis fake news, but on providing empirical evidence that there are differences in emotional content which may be exploited through simple mechanisms such as word-addition-based text transformations. In particular, our focus is on establishing that there are differences, and we keep identification of the nature of differences outside the scope of our present investigation. A recent tutorial survey on fake news in social media, [@shu2019detecting], also places significant emphasis on the importance of emotional information within the context of fake news detection.
Our Work in Context {#sec:2.3}
-------------------
To put our work in context, we note that the affective character of the content has not been a focus of health fake news detection so far, to our best knowledge. Our effort is orthogonal but complementary to most work described above in that we provide evidence that emotion cognizance in general, and our emotion-enriched data representations in particular, are likely to be of much use in supervised and unsupervised fake news identification for the health domain. As observed earlier, identifying the nature of emotional differences between fake and real news in the health domain is outside the scope of our work, but would evidently lead to interesting follow-on work.
Emotionizing Text {#sec:3}
=================
The intent in this paper is to provide evidence that the affective character of fake news and legitimate articles differ in a way that such differences can be leveraged to improve the task of fake news identification. First, we outline our methodology to leverage an external emotion lexicon to build emotion amplified (i.e., [*emotionized*]{}) text representations. The methodology is designed to be very simple to describe and implement, so any gains out of emotionized text derived from the method can be attributed to emotion-enrichment in general and not to some nuances of the details, as could be the case if the transformation method were to involve sophisticated steps. The empirical analysis of our emotionized representations [*vis-a-vis*]{} raw text for fake news identification will be detailed in the next section.
The Task {#sec:3.1}
--------
The task of emotionizing is to leverage an emotion lexicon $\mathcal{L}$ to transform a text document $D$ to an emotionized document $D'$. We would like $D'$ also to be similar in format to $D$ in being a sequence of words so that it can be fed into any standard text processing pipeline; retaining the document format in the output, it may be noted, is critical for the uptake of the method. In short:
$
D, \mathcal{L} \xrightarrow[]{Emotionization} D'
$
Without loss of generality, we expect that the emotion lexicon $\mathcal{L}$ would comprise of many 3-tuples, e.g., $[w, e, s]$, each of which indicate the affinity of a word $w$ to an emotion $e$, along with the intensity quantified as a score $s \in [0,1]$. An example entry could be $[unlucky, sadness, 0.7]$ indicating that the word [*unlucky*]{} is associated with the [*sadness*]{} emotion with an intensity of 0.7.
Methodology {#sec:3.2}
-----------
Inspired by recent methods leveraging lexical neighborhoods to derive word [@mikolov2013distributed] and document [@le2014distributed] embeddings, we design our emotionization methodology to alter the neighborhood of highly emotional words in $D$ by adding emotion labels. As illustrated in Algorithm \[algorithm\], we sift through each word in $D$ in order, outputting that word into $D'$ followed by its associated emotion from the lexicon $\mathcal{L}$, as long as the word-emotion association in the lexicon is stronger than a pre-defined threshold $\tau$. In cases where the word is not associated with any emotion with a score greater than $\tau$, no emotion label is output into $D'$. In summary, $D'$ is an ‘enlarged’ version of $D$ in which every word that is strongly associated with an emotion is additionally followed by its emotion label. This ingestion of ‘artificial’ words is similar in spirit to [*sprinkling*]{} topic labels to enhance text classification [@DBLP:conf/acl/HingmireC14], where appending topic labels to documents is the focus.
Let $D = [w_1, w_2, \ldots, w_n]$; initialize $D'$ to be empty. For each $i$ from $1$ to $n$: append $w_i$ to $D'$; if there is an entry $[w_i, e, s] \in \mathcal{L}$ with $s > \tau$, additionally append the emotion label $e$ to $D'$. Output $D'$.
A sample of article excerpts and their emotionized versions appear in Table \[tab:emotionize\_example\].
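The emotionization step can be sketched in a few lines of Python. The lexicon entries below are hypothetical stand-ins for NRC-style $[w, e, s]$ tuples, and $\tau$ defaults to $0.6$ as in our experiments:

```python
def emotionize(doc, lexicon, tau=0.6):
    """Append the emotion label after each word whose lexicon
    intensity exceeds tau. `lexicon` maps word -> (emotion, score)."""
    out = []
    for w in doc:
        out.append(w)
        if w in lexicon:
            emotion, score = lexicon[w]
            if score > tau:
                out.append(emotion)
    return out

# toy lexicon entries (hypothetical values, not taken from the NRC lexicon)
lex = {"unlucky": ("sadness", 0.7), "terrified": ("fear", 0.9)}
print(emotionize("she was unlucky but not terrified".split(), lex))
# -> ['she', 'was', 'unlucky', 'sadness', 'but', 'not', 'terrified', 'fear']
```

Because the output is still a plain sequence of tokens, it can be fed unchanged into any downstream embedding or classification pipeline.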
Empirical Study {#sec:4}
===============
Given our focus on evaluating the effectiveness of emotionized text representations over raw representations, we consider a variety of unsupervised and supervised methods (in lieu of evaluating on a particular state-of-the-art method) in the interest of generality. Data-driven fake news identification, much like any analytics task, uses a corpus of documents to learn a statistical model that is intended to be able to tell apart fake news from legitimate articles. Our empirical evaluation is centered on the following observation: [*for the same analytics model learned over different data representations, differences in effectiveness (e.g., classification or clustering accuracy) over the target task can intuitively be attributed to the data representation*]{}. In short, if our emotionized text consistently yields better classification/clustering models over those learned over raw text, emotion cognizance and amplification may be judged to influence fake news identification positively. We first describe our dataset, followed by the empirical study settings and their corresponding results.
Dataset and Emotion Lexicon {#sec:4.1}
---------------------------
With most fake news datasets being focused on microblogging websites in the political domain, making them less suitable for content-focused misinformation identification as warranted by the health domain, we curated a new dataset of fake and legitimate news articles on the topic of [*health and well-being*]{}, which will be publicly released upon publication to aid future research. For legitimate news, we crawled $500$ health and well-being articles from reputable sources such as CNN, NYTimes, New Indian Express and many others. For fake news, we crawled $500$ articles on similar topics from well-reported misinformation websites such as BeforeItsNews, Nephef, MadWorldNews, and many others. These were manually verified for category suitability. The detailed dataset statistics are shown in Table \[tab:dataset\].
For the lexicon, we use the NRC Emotion Intensity Lexicon [@mohammad2017word], which has data in the 3-tuple form outlined earlier. For simplicity, we filter the lexicon to retain only one entry per word, choosing the emotion entry with which the word has the highest intensity; this filtered out around 22% of the entries in our lexicon. This filtering entails that each word in $D$ can introduce at most one extra token in $D'$; emotionization using the filtered lexicon was seen to lengthen documents by an average of 2%, a very modest increase in document size. To put this in perspective, only around one in fifty words triggered the lexicon label attachment step. Interestingly, there was only a slight difference in the lengthening across the classes: fake news documents were enlarged by $2.2\%$ on average, while legitimate news articles recorded an average lengthening of $1.8\%$. For example, out of 1923 word-sense entries that satisfy the threshold $\tau = 0.6$, our keep-only-the-best heuristic filtered out 424 entries (i.e., 22%); thus, only slightly more than one-fifth of the entries were affected. This heuristic was motivated by the need to ensure that document structures are not altered much (by the introduction of too many lexicon words), so that assumptions made by the downstream representation learning procedure, such as document well-formedness, are not particularly disadvantaged.
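The keep-only-the-best-entry filtering described above can be sketched as follows; the tuples are hypothetical examples in the $[w, e, s]$ format, not actual NRC lexicon entries:

```python
def keep_best_entry(entries):
    """Retain, for each word, only the (emotion, score) pair with the
    highest intensity, so that each word maps to at most one emotion."""
    best = {}
    for word, emotion, score in entries:
        if word not in best or score > best[word][1]:
            best[word] = (emotion, score)
    return best

raw = [("unlucky", "sadness", 0.7),
       ("unlucky", "fear", 0.4),
       ("outrage", "anger", 0.95)]
print(keep_best_entry(raw))
# -> {'unlucky': ('sadness', 0.7), 'outrage': ('anger', 0.95)}
```

The resulting word-keyed dictionary is exactly the lexicon shape assumed by the emotionization sketch earlier.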
  Class   Documents   Average Words   Average Sentences   Total Words
  ------- ----------- --------------- ------------------- -------------
  Real    500         724             31                  362117
  Fake    500         578             28                  289477

  : Dataset statistics.[]{data-label="tab:dataset"}
Supervised Setting - Conventional Classifiers {#sec:4.2}
---------------------------------------------
Let $\mathcal{D} = \{ \ldots, D, \ldots \}$ be the corpus of all news articles, and $\mathcal{D}' = \{ \ldots, D', \ldots \}$ be the corresponding emotionized corpus. Each document is labeled as either fake or not (0/1). With word/document embeddings gaining increasing popularity, we use the DBOW doc2vec model[^4] to build vectors over each of the above corpora separately, yielding two datasets of vectors, correspondingly called $\mathcal{V}$ and $\mathcal{V}'$. While the document embeddings are learnt over the corpora ($\mathcal{D}$ or $\mathcal{D}'$), the output comprises one vector for each document in the corpus that the learning is performed over. The doc2vec model uses an internal parameter $d$, the dimensionality of the embedding space, i.e., the length of the vectors in $\mathcal{V}$ or $\mathcal{V}'$.
Each of these vector datasets is separately used to train a conventional classifier using train and test splits within it. By a conventional classifier, we mean a model such as random forests, kNN, SVM, Naive Bayes, Decision Tree or AdaBoost. The classification model learns to predict a class label (one of [*fake*]{} or [*real*]{}) given a $d$-dimensional embedding vector. For generalizability of results, we use multiple train/test splits, where the chosen dataset (either $\mathcal{V}$ or $\mathcal{V}'$) is partitioned into $k$ random splits (we use $k=10$); these lead to $k$ separate experiments with $k$ models learnt, each model learnt by excluding one of the $k$ splits and evaluated over its corresponding held-out split. The accuracies obtained from the $k$ separate experiments are then simply averaged to obtain a single accuracy score for the chosen dataset ($Acc(\mathcal{D})$ and $Acc(\mathcal{D}')$ respectively). The quantum of improvement achieved, i.e., $Acc(\mathcal{D}') - Acc(\mathcal{D})$, is illustrative of the improvement brought in by emotion cognizance.
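The $k$-split evaluation protocol above can be sketched as below. A simple nearest-centroid rule stands in for the actual classifiers (random forests, SVM, AdaBoost, etc.), and the data is synthetic, so this only illustrates the split-and-average scheme, not the paper's models.

```python
import random

# Sketch of the k-split evaluation: partition indices into k random splits,
# hold out one split per experiment, and average the k held-out accuracies.
# A nearest-centroid classifier is a stand-in assumption for the real models.

def nearest_centroid_accuracy(vectors, labels, test_idx):
    train_idx = [i for i in range(len(vectors)) if i not in test_idx]
    centroids = {}
    for c in set(labels):
        members = [vectors[i] for i in train_idx if labels[i] == c]
        centroids[c] = tuple(sum(x) / len(members) for x in zip(*members))
    def predict(v):
        return min(centroids, key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])))
    return sum(predict(vectors[i]) == labels[i] for i in test_idx) / len(test_idx)

def k_fold_accuracy(vectors, labels, k=10, seed=0):
    idx = list(range(len(vectors)))
    random.Random(seed).shuffle(idx)
    splits = [idx[i::k] for i in range(k)]  # k random splits
    accs = [nearest_centroid_accuracy(vectors, labels, set(s)) for s in splits]
    return sum(accs) / k  # single averaged accuracy score

# Two well-separated synthetic "classes" of 2-d document vectors.
vecs = [(i * 0.01, 0.0) for i in range(20)] + [(5.0 + i * 0.01, 5.0) for i in range(20)]
labs = ["fake"] * 20 + ["real"] * 20
print(k_fold_accuracy(vecs, labs, k=5))  # -> 1.0 on this separable toy data
```

The same averaged score, computed once over $\mathcal{V}$ and once over $\mathcal{V}'$, yields $Acc(\mathcal{D})$ and $Acc(\mathcal{D}')$.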
Supervised Setting - Neural Networks
------------------------------------
Neural network models such as LSTMs and CNNs are designed to work with vector sequences, one for each word in the document, rather than a single embedding for the document. This allows them to identify and leverage sequential patterns and localized patterns, respectively, for the classification task. These models, especially LSTMs, have become very popular for building text processing pipelines, making them pertinent for a text-data-oriented study such as ours.
Adapting the experimental settings used for the conventional classifiers in Section \[sec:4.2\], we learn LSTM and CNN classifiers with learnable word embeddings of dimensionality either $100$ or $300$. Unlike in Section \[sec:4.2\], where the document embeddings are learnt separately and then used in a classifier, this model interleaves the training of the classifier with the learning of the embeddings, so the word embeddings are trained, in the process, to benefit the task. The overall evaluation framework remains the same as before, with the classifier-embedding combination being learnt separately for $\mathcal{D}$ and $\mathcal{D}'$, and the quantum by which $Acc(\mathcal{D}')$ surpasses $Acc(\mathcal{D})$ used as an indication of the improvement brought about by the emotionization.
### Results and Discussion
Table \[tab:classification\] lists the classification results of a variety of standard classifiers as well as those based on CNN and LSTM, across two values of $d$ and various values of $\tau$. $d$ is overloaded for convenience in representing results; while it indicates the dimensionality of the document vector for the conventional classifiers, it indicates the dimensionality of the word vectors for the CNN and LSTM classifiers. Classification [*models learned over the emotionized text are seen to be consistently more effective for the classification task*]{}, as exemplified by the higher values achieved by $Acc(\mathcal{D}')$ over $Acc(\mathcal{D})$ (highest values in each row are indicated in bold). While gains are observed across a wide spectrum of values of $\tau$, the gains are seen to peak around $\tau \approx 0.6$. Lower values of $\tau$ allow words of low emotion intensity to influence $D'$, while setting it to a very high value would add very few labels to $D'$ (at the extreme, using $\tau=1.0$ would mean $D=D'$). Thus the observed peakiness is along expected lines, with $\tau \approx 0.6$ achieving a middle ground between the extremes. The quantum of gains achieved, i.e., $|Acc(\mathcal{D}')-Acc(\mathcal{D})|$, is seen to be significant, sometimes even bringing $Acc(\mathcal{D}')$ very close to the upper bound of $1.0$; this establishes that emotionized text is much more suitable for supervised misinformation identification. It is further notable that the highest accuracy is achieved by AdaBoost as against the CNN and LSTM models; this may be due to the lexical distortions brought about by the addition of emotion labels limiting the emotionization gains in the LSTM and CNN classifiers, which attempt to make use of the word sequences explicitly.
Unsupervised Setting {#sec:4.3}
--------------------
The corresponding evaluation for the unsupervised setting involves clustering both $\mathcal{V}$ and $\mathcal{V}'$ (Ref. Sec. \[sec:4.2\]) using the same method and profiling the clustering against the labels on the clustering purity measure[^5]; as may be obvious, the labels are used only for evaluating the clustering, clustering being an unsupervised learning method. We used K-Means [@macqueen1967some] and DBSCAN [@ester1996density] clustering methods, two very popular clustering methods that come from distinct families. K-Means uses a top-down approach to discover clusters, estimating cluster centroids and memberships at the dataset level, followed by iteratively refining them. DBSCAN, on the other hand, uses a more bottom-up approach, forming clusters and enlarging them by adding proximal data points progressively. Another aspect of difference is that K-Means allows the user to specify the number of clusters desired in the output, whereas DBSCAN has a substantively different mechanism, based on neighborhood density. For K-Means we measured purities, averaged over 1000 random initializations, across varying values of $k$ (desired number of output clusters); it may be noted that purity is expected to increase with $k$ with finer clustering granularities leading to better purities (at the extreme, each document in its own cluster would yield a purity of $1.0$). For DBSCAN we measured purities across varying values of $ms$ (minimum samples to form a cluster); the [*ms*]{} parameter is the handle available to the user within the DBSCAN framework to indirectly control the granularity of the clustering (i.e., the number of clusters in the output). Analogous to the $Acc(.)$ measurements in classification, the quantum of purity improvements achieved by the emotionized text, i.e., $Pur(\mathcal{D}')-Pur(\mathcal{D})$, indicate any improved effectiveness of emotionized representations.
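The purity measure used for evaluating the clusterings can be computed as follows; the toy labels and cluster assignments below are illustrative only.

```python
from collections import Counter

# Clustering purity: assign each cluster its majority class, and report the
# fraction of documents covered by these majority assignments. The ground-truth
# labels are used only for evaluation, never during clustering.

def purity(cluster_assignments, true_labels):
    clusters = {}
    for c, y in zip(cluster_assignments, true_labels):
        clusters.setdefault(c, []).append(y)
    majority_total = sum(Counter(members).most_common(1)[0][1]
                         for members in clusters.values())
    return majority_total / len(true_labels)

# Toy example: 6 documents, labels fake (F) / real (R), two clusters.
labels   = ["F", "F", "R", "R", "R", "F"]
clusters = [ 0,   0,   0,   1,   1,   1 ]
# Cluster 0: majority F (2 of 3); cluster 1: majority R (2 of 3) -> 4/6.
print(purity(clusters, labels))
```

This also makes the granularity effect concrete: with each document in its own cluster, every cluster's majority covers its single member, so purity is $1.0$.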
### Results and Discussion
Table \[tab:clustering\] lists the clustering results in a format similar to that of the classification study. With the unsupervised setting posing a harder task, the quantum of improvements ($Pur(\mathcal{D}')-Pur(\mathcal{D})$) achieved by emotionization is correspondingly lower. We believe this is because most conventional combinations of document representation and clustering algorithm are suited to generating topically coherent clusters, and thus fare poorly on the substantially different task of fakeness identification. However, the trends are consistent with the earlier observations in that emotionization has a positive effect, with gains peaking around $\tau \approx 0.6$.
Conclusions {#sec:con}
===========
In this paper, we considered the utility of the affective character of news articles for the task of misinformation detection in the health domain. We illustrated that amplifying the emotions within a news story (and, in a sense, uplifting their importance) helps downstream algorithms, supervised and unsupervised, to identify health fake news better. In a way, our results indicate that fake and real news differ in the nature of the emotional information within them, so exaggerating the emotional information within both stretches them further apart in any representation, helping to distinguish them from each other. In particular, our simple method to emotionize text using external emotion intensity lexicons was seen to yield text representations that are empirically much better suited for the task of identifying health fake news. In the interest of making a broader point establishing the utility of affective information for the task, we empirically evaluated the representations over a wide variety of supervised and unsupervised techniques and methods over varying parameter settings, across which consistent and significant gains were observed. This firmly establishes the utility of emotion information in improving health fake news identification.
Future Work {#sec:5.1}
-----------
As a next step, we are considering developing emotion-aware end-to-end methods for supervised and unsupervised health fake news identification. Secondly, we are considering the use of lexicons learned from data [@bandhakavi2014generating] which may be better suited for fake news identification in niche domains. Third, we are exploring the usage of the affective content of responses to social media posts.
[^1]: Pre-print at https://arxiv.org/abs/1906.10365. On behalf of all authors, the corresponding author states that there is no conflict of interest.
[^2]: https://en.oxforddictionaries.com/word-of-the-year/word-of-the-year-2016
[^3]: https://www.nytimes.com/2018/12/16/opinion/statin-side-effects-cancer.html
[^4]: https://radimrehurek.com/gensim/models/doc2vec.html
[^5]: https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html
---
abstract: |
Inspired by the new measurements on $B^{-}\to \eta^{(\prime)}\ell
\bar\nu_{\ell}$ from the BaBar Collaboration, we examine the constraint on the flavor-singlet mechanism, proposed to understand the large branching ratios for $B\to \eta^{\prime} K$ decays. Based on the mechanism, we study the decays of $\bar B_{d,s}\to
\eta^{(\prime)} \ell^{+} \ell^{-}$ and find that they are sensitive to the flavor-singlet effects. In particular, we show that the decay branching ratios of $\bar B_{d,s}\to \eta^{\prime} \ell^{+}
\ell^{-}$ can be as large as $O(10^{-8})$ and $O(10^{-6})$, respectively.
author:
- '**Chuan-Hung Chen$^{1,2}$[^1] and Chao-Qiang Geng$^{3,4}$[^2]**'
title: '**$\eta^{(\prime)}$ productions in semileptonic B decays**'
---
Until now, the unexpectedly large branching ratios (BRs) for the decays $B\to \eta^{\prime} K $ remain a mysterious phenomenon among the enormous number of measured exclusive $B$ decays at $B$ factories [@belle_0603; @babar_0608]. One of the most promising mechanisms to understand the anomaly is to introduce a flavor-singlet state, produced by the two gluons emitted from the light quarks in $\eta^{(\prime)}$ [@Kroll; @BN_NPB651]. In this mechanism, the form factors in the $B\to\eta^{(\prime)}$ transitions receive leading power corrections. Consequently, the authors in Ref. [@KOY] have studied the implications for the semileptonic decays of $\bar
B\to P\ell \nu_{\ell}$ with $P=\eta^{(\prime)}$ and $\ell=e,\ \mu$. In particular, they find that the decay BRs of the $\eta^{\prime}$ modes can be enhanced by one order of magnitude. Recently, the BaBar Collaboration [@Babar_ICHEP06] has measured the semileptonic decays with the data as follows: $$\begin{aligned}
BR(B^{+}\to \eta \ell^{+} \nu_{\ell} )&=&(0.84\pm 0.27 \pm
0.21)\times
10^{-4}< 1.4\times 10^{-4} (\rm 90\%\ C.L.)\, ,\nonumber \\
BR(B^{+}\to \eta^{\prime} \ell^{+} \nu_{\ell} )&=&(0.33\pm 0.60 \pm
0.30)\times 10^{-4}< 1.3\times 10^{-4} (\rm 90\%\ C.L.)\, .
\label{Data}\end{aligned}$$ Although the measurements in Eq. (\[Data\]) are only $2.55\sigma$ and $0.95\sigma$ significances, respectively, it is important to examine if they give some constraints on the form factors due to the flavor-singlet state in the decays of $\bar B\to
\eta^{(\prime)}\ell \bar\nu_{\ell}$. It should also be interesting to investigate the implications of the measurements in Eq. (\[Data\]) by concentrating on the flavor-singlet contributions to the flavor changing neutral current (FCNC) decays of $\bar B_{d,s} \to \eta^{(\prime)} \ell^{+} \ell^{-}$.
We start by writing the effective Hamiltonians for $B^{-}\to
\eta^{(\prime)}\ell \nu_{\ell}$ and $\bar B\to \eta^{(\prime)}\ell^{+} \ell^{-}$ at quark level in the SM as $$\begin{aligned}
{\cal H}_{I}&=& \frac{G_FV_{ub}}{\sqrt{2}} \bar u\gamma_{\mu}
(1-\gamma_5) b\, \bar \ell \gamma^{\mu} (1-\gamma_5) \nu_{\ell}\,,
\label{eq:heff_lnu}
\\
{\cal H}_{II} &=& \frac{G_F\alpha_{em} \lambda^{q^{\prime}}_t}{\sqrt{2}
\pi}\left[ H_{1\mu} L^{\mu} +H_{2\mu}L^{5\mu} \right]\,,
\label{eq:heff_ll}
\end{aligned}$$ respectively, with $$\begin{aligned}
H_{1\mu } &=&C^{\rm eff}_{9}(\mu )\bar{q}^{\prime}\gamma _{\mu }P_{L}b\ -\frac{2m_{b}}{%
q^{2}}C_{7}(\mu )\bar{q}^{\prime}i\sigma _{\mu \nu }q^{\nu }P_{R}b \,,
\nonumber \\
H_{2\mu } &=&C_{10}\bar{q}^{\prime}\gamma _{\mu }P_{L}b \,,
\nonumber\\
L^{\mu } &=&\bar{\ell}\gamma ^{\mu }\ell\,, \ \ \ L^{5\mu } =\bar{\ell}\gamma ^{\mu }\gamma
_{5}\ell\,,
\label{heffc}
\end{aligned}$$ where $\alpha_{em}$ is the fine structure constant, $\lambda^{q^{\prime}}_t=V_{tb}V_{tq^{\prime}}^*$, $m_b$ is the current b-quark mass, $q$ is the momentum transfer, $P_{L(R)}=(1\mp
\gamma_5)/2$ and $C_{i}$ are the Wilson coefficients (WCs) with their explicit expressions given in Ref. [@BBL]. In particular, $C^{\rm eff}_{9}$, which contains the contribution from the on-shell charm-loop, is given by [@BBL] $$\begin{aligned}
C_{9}^{\rm eff}(\mu)&=&C_{9}( \mu ) +\left( 3C_{1}\left( \mu \right)
+C_{2}\left( \mu \right) \right) h\left( z,s^{\prime}\right)
%-\frac{3}{\alpha^{2}_{em}}\sum_{V=\Psi ,\Psi ^{\prime
%}}k_{V}\frac{\pi \Gamma \left( V\rightarrow \ell^{+}\ell^{-}\right)
%M_{V}}{M_{V}^{2}-q^{2}-iM_{V}\Gamma _{V}}\right)
\,, \nonumber \\
%
h(z,s^{\prime})&=&-\frac{8}{9}\ln\frac{m_b}{\mu}-\frac{8}{9}\ln z
+\frac{8}{27} +\frac{4}{9}x -\frac{2}{9}(2+x)|1-x|^{1/2} \nonumber
\\
&\times& \left\{
\begin{array}{c}
\ln \left|\frac{\sqrt{1-x}+1}{\sqrt{1-x}-1} \right|-i\, \pi, \ {\rm for}\ x\equiv 4z^2/s^{\prime}<1 \, , \\
2\, arctan\frac{1}{\sqrt{x-1}},\ {\rm for}\ x\equiv 4z^2/s^{\prime}>1 \, ,\\
\end{array}
\right.
\label{C9eff}\end{aligned}$$ where $h(z,s^{\prime})$ describes the one-loop matrix elements of operators $O_{1}= \bar{s}_{\alpha }\gamma ^{\mu }P_{L}b_{\beta }\
\bar{c}_{\beta }\gamma _{\mu }P_{L}c_{\alpha }$ and $O_{2}=\bar{s}\gamma ^{\mu }P_{L}b\ \bar{c}\gamma _{\mu }P_{L}c$ [@BBL] with $z=m_c/m_b$ and $s^{\prime}=q^2/m^2_b$. Here, we have ignored the resonant contributions [@Res; @CGPRD66] as they are irrelevant to our analysis. In Table \[tab:wcs\], we show the values of the dominant WCs at $\mu=4.4$ GeV at the next-to-leading-logarithmic (NLL) order. We note that since the value of $|h(z,s^{\prime})|$ is less than 2, the influence of the charm-loop is much smaller than that of $C_{9, 10}$, which are dominated by the top-quark contributions.
$C_1$ $C_2$ $C_7$ $C_9$ $C_{10}$
---------- --------- ---------- --------- ----------
$-0.226$ $1.096$ $-0.305$ $4.344$ $-4.599$
: WCs at $\mu=4.4$ GeV in the NLL order. []{data-label="tab:wcs"}
To study the exclusive semileptonic decays, the hadronic QCD effects for the $\bar B\to P$ transitions are parametrized by $$\begin{aligned}
\langle P(p_{P}) | \bar q^{\prime} \gamma^{\mu} b| \bar
B(p_B)\rangle &=& f^{P}_{+}(q^2)\left(P^{\mu}-\frac{P\cdot
q}{q^2}q^{\mu} \right)+f^{P}_{0}(q^2) \frac{P\cdot q}{q^2} q_{\mu}\,
, \nonumber
\\
%
\langle P(p_{P} )| \bar q^{\prime} i\sigma_{\mu\nu} q^{\nu}b| \bar B
(p_{B})\rangle &=& {f^{P}_{T}(q^2)\over m_{B}+m_{P}}\left[P\cdot q\,
q_{\mu}-q^{2}P_{\mu}\right]\,, \label{eq:bpff}\end{aligned}$$ with $P_{\mu}=(p_{B}+p_{P})_{\mu}$ and $q_{\mu}=(p_{B}-p_{P})_{\mu}$. Consequently, the transition amplitudes associated with the interactions in Eqs. (\[eq:heff\_lnu\]) and (\[eq:heff\_ll\]) can be written as $$\begin{aligned}
{\cal M}_{I}&=&\frac{\sqrt{2}G_{F}V_{ub}}{\pi }
f^{P}_{+}(q^2) \bar{\ell} \not{p}_{P} \ell\,,
\label{amppln}
\\
{\cal M}_{II}&=&\frac{G_{F}\alpha_{em} \lambda^{q^{\prime}} _{t}}{\sqrt{2}\pi }
\left[ \tm_{97} \bar{\ell} \not{p}_P \ell + \tm_{10} \bar{\ell} \not{p}_P \gamma_5 \ell
\right]\,,\label{amppll}
\end{aligned}$$ for $\bar B\rightarrow P \ell \bar\nu_{\ell}$ and $\bar B\rightarrow P \ell^{+} \ell^{-}$, respectively, with $$\begin{aligned}
\tm_{97}&=& C^{\rm eff}_9 f^{P}_+(q^2) +\frac{2m_b}{m_B+m_{P}}C_7
f^{P}_T(q^2) \,, \ \ \ \tm_{10}= C_{10} f^{P}_+(q^2)\, .
\label{eq:m7910}
\end{aligned}$$ Since we concentrate on the productions of the light leptons, we have neglected the terms explicitly related to $m_{\ell}$. By choosing the coordinates for various particles: $$\begin{aligned}
q&=&(\sqrt{q^2},\,0,\, 0,\, 0), \ \ \ p_{B}=(E_{B},\, 0,\, 0,\, |\vec{p}_{P}|), \nonumber \\
p_{P}&=& (E_{P},\, 0,\, 0,\, |\vec{p}_{P}|), \ \ \
p_{\ell}=E_{\ell}(1,\, \sin\thl,\, 0,\, \cos\thl)\,,
\label{eq:coordinates}\end{aligned}$$ where $E_{P}=(m^{2}_{B}-q^2-m^2_{P})/(2\sqrt{q^2})$, $|\vec{p}_{P}|=\sqrt{E^2_{P}-m^2_{P}}$ and $\theta_{\ell}$ is the polar angle, the differential decay rates for $ B^{-}\to P \ell
\bar\nu_{\ell}$ and $\bar
B_{d} \to P \ell^+ \ell^-$ as functions of $q^2$ are given by $$\begin{aligned}
\frac{d\Gamma_{I}}{dq^2 }&=& \frac{G^{2}_{F} |V_{ub}|^2
m^3_{B}}{3\cdot 2^6 \pi^3}\sqrt{(1-s+\hmp^2)^2-4\hmp^2}
\left(f^{P}_{+}(q^2) \hat P_{P}\right)^2\,,
\label{eq:diffplnu}
\\
\frac{d\Gamma _{II} }{dq^2
}&=&\frac{G_{F}^{2}\alpha^{2}_{em}m^{3}_{B}}{ 3\cdot 2^{9} \pi ^{5}}
|\lambda^{q^{\prime}} _{t}|^{2}\sqrt{(1-s+\hmp^2)^2-4\hmp^2} \hat
P^2_{P} \left( |\tilde{m}_{97}|^2+|\tilde{m}_{10}|^2\right) \label{eq:difpll}\, ,
\end{aligned}$$ respectively, with $\hat P_{P}=2\sqrt{s}
|\vec{p}_{P}|/m_{B}=\sqrt{(1-s-\hmp^2 )^2-4s\hmp^2}$, $\hat{m}_{P}=m_{P}/m_B$ and $s=q^2/m^2_{B}$.
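As a quick numerical sanity check (an illustration we add here, not part of the original derivation), the closed form for $\hat P_{P}$ quoted above follows from the kinematics in Eq. (\[eq:coordinates\]): with $E_{P}=(m^{2}_{B}-q^2-m^2_{P})/(2\sqrt{q^2})$ and $|\vec{p}_{P}|=\sqrt{E^2_{P}-m^2_{P}}$, one finds $2\sqrt{s}|\vec p_P|/m_B=\sqrt{(1-s-\hat m_P^2)^2-4s\hat m_P^2}$.

```python
import math

# Check that P_hat = 2*sqrt(s)*|p_P|/m_B equals the quoted closed form.
# The B and eta masses below are standard PDG-like values, used only for
# illustration (assumed, not taken from the paper).

def p_hat_from_kinematics(q2, m_B, m_P):
    E_P = (m_B**2 - q2 - m_P**2) / (2.0 * math.sqrt(q2))  # eta energy in the dilepton frame
    p = math.sqrt(E_P**2 - m_P**2)                        # |p_P|
    return 2.0 * math.sqrt(q2 / m_B**2) * p / m_B

def p_hat_closed_form(q2, m_B, m_P):
    s, m_hat = q2 / m_B**2, m_P / m_B
    return math.sqrt((1.0 - s - m_hat**2)**2 - 4.0 * s * m_hat**2)

m_B, m_eta = 5.279, 0.548  # GeV
for q2 in (1.0, 5.0, 10.0, 15.0):
    assert abs(p_hat_from_kinematics(q2, m_B, m_eta) - p_hat_closed_form(q2, m_B, m_eta)) < 1e-9
```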
To discuss the $P=\eta^{(\prime)}$ modes, we employ the quark-flavor scheme to describe the states $\eta$ and $\eta^{\prime}$, expressed by [@flavor0; @flavor] $$\begin{aligned}
\left( {\begin{array}{*{20}c}
\eta \\
{\eta '} \\
\end{array}} \right) = \left( {\begin{array}{*{20}c}
{\cos \phi } & { - \sin \phi } \\
{\sin \phi } & {\cos \phi } \\
\end{array}} \right)\left( {\begin{array}{*{20}c}
{\eta _{q} } \\
{\eta _{s} } \\
\end{array}} \right) \label{eq:flavor}\end{aligned}$$ with $\eta _{q} = ( {u\bar u + d\bar d})/\sqrt{2}$, $\eta_{s} =
s\bar s $ and $\phi=39.3^{\circ}\pm 1.0^{\circ}$. Based on this scheme, it is found that the form factors in Eq. (\[eq:bpff\]) at $q^2=0$ with the flavor-singlet contributions are given by [@BN_NPB651] $$\begin{aligned}
f^{\eta}_{i}(0)&=&\frac{\cos\phi}{\sqrt{2}} \frac{f_{q}}{f_{\pi}}
f^{\pi}_{i}(0) + \frac{1}{\sqrt{3}} \left( \sqrt{2} \cos\phi\frac{
f_{q}}{f_{\pi}} - \sin\phi \frac{f_{s}}{f_{\pi}}\right) f^{\rm
sing}_{i}(0)\, ,\nonumber \\
%
f^{\eta^{\prime}}_{i}(0)&=&\frac{\sin\phi}{\sqrt{2}}
\frac{f_{q}}{f_{\pi}} f^{\pi}_{i}(0) + \frac{1}{\sqrt{3}} \left(
\sqrt{2} \sin\phi\frac{ f_{q}}{f_{\pi}} + \cos\phi
\frac{f_{s}}{f_{\pi}}\right) f^{\rm sing}_{i}(0)\,, \label{eq:fs}\end{aligned}$$ where $i=+,T$, $f_{q}=(1.07\pm 0.02)f_{\pi}$, $f_{s}=(1.34\pm 0.06)f_{\pi}$ [@flavor] and $f^{\rm sing}_{i}(0)$ denote the unknown transition form factors in the flavor-singlet mechanism. We note that the flavor-singlet contributions to $\bar B\to\eta^{(\prime)}$ have also been considered in the soft collinear effective theory [@SCET]. For the $q^2$-dependence form factors $f^{\pi}_{+(T)}(q^2)$, we quote the results calculated by the light-cone sum rules (LCSR) [@BZ_PRD71], given by $$\begin{aligned}
f^{\pi}_{+(T)}(q^2)&=&
\frac{f^{\pi}_{+(T)}(0)}{(1-q^2/m^2_{B^*})(1-\alpha_{+(T)}
q^2/m^2_{B^*})}\end{aligned}$$ with $f^{\pi}_{+(T)}(0)=0.27$, $\alpha_{+(T)}=0.52(0.84)$ and $m_{B^*}=5.32$ GeV. Since $f^{\rm sing}_{+,T}(q^2)$ are unknown, as usual, we parametrize them to be [@KOY] $$\begin{aligned}
f^{\rm sing}_{+(T)}(q^2)&=& \frac{f^{\rm
sing}_{+(T)}(0)}{(1-q^2/m^2_{B^*})(1-\beta_{+(T)} q^2/m^2_{B^*})}
\label{eq:ffsing}\end{aligned}$$ with $\beta_{+(T)}$ being free parameters. We will demonstrate that the BRs for the semileptonic decays are not sensitive to the values of $\beta_{+(T)}$, but rather to those of $f^{\rm sing}_{+(T)}(0)$. Moreover, based on the result of $f^{\pi}_{+}(0)\sim f^{\pi}_{T}(0)$ in the large energy effective theory (LEET) [@LEET], we may relate the singlet form factors $f^{\rm sing}_{+}(0)$ and $f^{\rm sing}_{T}(0)$. Explicitly, we assume that $f^{\rm sing}_{T}(0)\sim f^{\rm sing}_{+}(0)$. Note that this assumption will not cause a large deviation from the real case since the effects of $f^{P}_{T}(q^2)$ on the dilepton decays are small due to $C_9\gg C_7$ in Eq. (\[eq:m7910\]). Hence, the value of $f^{\rm sing}_{+}(0)$ can be constrained by the decays $\bar B\to \eta^{(\prime)} \ell \bar\nu_{\ell}$.
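To make the role of $f^{\rm sing}_{+}(0)$ concrete, the mixing formula in Eq. (\[eq:fs\]) can be evaluated numerically with the central values quoted in the text ($f_{q}=1.07\,f_{\pi}$, $f_{s}=1.34\,f_{\pi}$, $\phi=39.3^{\circ}$, $f^{\pi}_{+}(0)=0.27$). This is only an illustrative sketch we add here:

```python
import math

# Evaluate f^eta_+(0) and f^eta'_+(0) from the quark-flavor mixing formula
# (Eq. (eq:fs)), using the central parameter values quoted in the text.

def eta_form_factors(f_sing, f_pi0=0.27, fq=1.07, fs=1.34, phi_deg=39.3):
    phi = math.radians(phi_deg)
    f_eta = (math.cos(phi) / math.sqrt(2)) * fq * f_pi0 \
        + (math.sqrt(2) * math.cos(phi) * fq - math.sin(phi) * fs) * f_sing / math.sqrt(3)
    f_etap = (math.sin(phi) / math.sqrt(2)) * fq * f_pi0 \
        + (math.sqrt(2) * math.sin(phi) * fq + math.cos(phi) * fs) * f_sing / math.sqrt(3)
    return f_eta, f_etap

f_eta0, f_etap0 = eta_form_factors(f_sing=0.0)  # no singlet contribution
f_eta2, f_etap2 = eta_form_factors(f_sing=0.2)  # singlet at the constrained limit

# The eta' form factor is far more sensitive to f^sing_+(0) than the eta one,
# consistent with the sensitivity of the eta' modes discussed in the text.
assert (f_etap2 - f_etap0) > (f_eta2 - f_eta0)
print(round(f_eta0, 3), round(f_etap0, 3), round(f_eta2, 3), round(f_etap2, 3))
```

In particular, the ordering $f^{\eta}_{+}(0)>f^{\eta^{\prime}}_{+}(0)$ without the singlet term is reversed once $f^{\rm sing}_{+}(0)$ is switched on, mirroring the behavior of the BRs discussed below.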
Before studying the effects of $f^{\rm sing}_{+}(q^2)$ on the BRs of semileptonic decays, we examine the $\beta_{+}$ dependence on the BRs. By taking $|V_{ub}|=3.67\times 10^{-3}$ and Eqs. (\[eq:diffplnu\]) and (\[eq:ffsing\]), in Table \[table:beta\], we present $BR(B^{-}\to \eta^{(\prime)} \ell\bar\nu_{\ell})$ and $BR(B_{d}\to
\eta^{(\prime)} \ell^{+} \ell^{-})$ with $f^{\rm sing}_{+}(0)=0.2$ and various values of $\beta_{+}$. From the table, we see clearly that the uncertainties in the BRs induced by $60\%$ variations in $\beta_{+}$ are less than $7\%$ and $14\%$ for the $\eta$ and $\eta^{\prime}$ modes, respectively. Hence, it is a good approximation to take the $q^{2}$ dependence of $f^{\rm sing}_{+}(q^2)$ to be the same as that of $f^{\pi}_{+}(q^2)$. Consequently, the essential quantity affecting the BRs of the semileptonic decays is the value of $f^{\rm sing}_{+}(0)$.
  $\beta_{+}$   $B^{-}\to \eta \ell \bar \nu_{\ell}$   $B^{-}\to \eta^{\prime} \ell \bar \nu_{\ell}$   $\bar B_{d}\to \eta \ell^{+} \ell^{-}$   $\bar B_{d}\to \eta^{\prime} \ell^{+} \ell^{-}$
  ------------- -------------------------------------- ----------------------------------------------- ---------------------------------------- -------------------------------------------------
  $0.2$         $0.61$                                 $1.37$                                          $0.080$                                  $0.190$
  $0.4$         $0.62$                                 $1.46$                                          $0.082$                                  $0.204$
  $0.5$         $0.63$                                 $1.52$                                          $0.083$                                  $0.211$
  $0.6$         $0.64$                                 $1.58$                                          $0.085$                                  $0.220$
  $0.8$         $0.67$                                 $1.73$                                          $0.088$                                  $0.242$
: BRs of $B^{-}\to \eta^{(\prime)} \ell \bar
\nu_{\ell}$ ( in units of $10^{-4}$) and $\bar B_{d} \to
\eta^{(\prime)} \ell^{+} \ell^{-}$ ( in units of $10^{-7}$) with $f^{\rm
sing}_{+}(0)=0.2$ and $\beta_{+}=0.2$, $0.4$, $0.5$, $0.6$ and $0.8$, respectively.[]{data-label="table:beta"}
With $|V_{td}|=8.1\times 10^{-3}$ [@PDG06] and Eqs. (\[eq:diffplnu\]) and (\[eq:difpll\]), the decay BRs of $B^{-}\to\eta^{(\prime)} \ell \bar\nu_{\ell}$ and $\bar
B_{d}\to\eta^{(\prime)} \ell^{+} \ell^{-}$ as functions of $f^{\rm sing}_{+}(0)$ are shown in Fig. \[fig:beta\].
![BRs of (a)\[(b)\] $B^{-}\to \eta^{[\prime]} \ell
\bar\nu_{\ell}$ (in units of $10^{-4}$) and (c)\[(d)\] $\bar B_{d}\to
\eta^{[\prime]} \ell^{+} \ell^{-}$ (in units of $10^{-7}$) as functions of $f^{\rm sing}_{+}(0)$, where the horizontal solid and dashed lines in (a) denote the central and upper and lower values of the current data at $90\%$ C.L., while the dashed line in (b) is the upper limit of the data.[]{data-label="fig:beta"}](beta){width="4.5"}
In Table \[table:rate\], we also explicitly display the BRs with $f^{\rm sing}_{+}(0)=0$, $0.1$ and $0.2$.
$f^{\rm sing}_{+}(0)$ $B^{-}\to \eta \ell \bar \nu_{\ell}$ $B^{-}\to \eta^{\prime} \ell \bar \nu_{\ell}$ $\bar B_{d}\to \eta \ell^{+} \ell^{-}$ $\bar B_{d}\to \eta^{\prime} \ell^{+} \ell^{-}$
----------------------- -------------------------------------- ----------------------------------------------- ---------------------------------------- -------------------------------------------------
$0.0$ $0.41$ $0.20 $ $0.06$ $0.03$
$0.1$ $ 0.52$ $ 0.71$ $0.07$ $0.10$
$0.2$ $0.63$ $1.53$ $0.08$ $0.21$
: BRs of $B^{-}\to \eta^{(\prime)} \ell \bar
\nu_{\ell}$ ( in units of $10^{-4}$) and $\bar B_{d} \to
\eta^{(\prime)} \ell^{+} \ell^{-}$ ( in units of $10^{-7}$) with $\phi=39.3^{\circ}$ and $f^{\rm sing}_{+}(0)=0.0$, $0.1$ and $0.2$.[]{data-label="table:rate"}
From Table \[table:rate\], we find that without the flavor-singlet effects, the result for $BR(B^{-}\to \eta \ell
\bar\nu_{\ell})$ is a factor of 2 smaller than the central value of the BaBar data in Eq. (\[Data\]). Clearly, if the data shows a correct tendency, it indicates that there exist some mechanisms, such as the one with the flavor-singlet state, to enhance the decay of $\bar B\to \eta$ as illustrated in Table \[table:rate\]. Moreover, as shown in Fig. \[fig:beta\]b and Table \[table:rate\], the decays of $B^{-}\to \eta^{\prime} \ell \nu_{\ell}$ are very sensitive to $f^{\rm sing}_{+}(0)$. In particular, the current data has constrained that $$\begin{aligned}
f^{\rm sing}_{+}(0)&\leq& 0.2.
\label{Limit}\end{aligned}$$ It is interesting to note that for $f^{\rm
sing}_{+}(0)=0.2$, $BR(\bar B_{d}\to \eta^{\prime} \ell^{+} \ell^{-})=
0.21\times 10^{-7}$, which is as large as $BR(B^{-}\to \pi^{-} \ell^{+} \ell^{-})$, while that of $\bar B_{d}\to \eta \ell^{+} \ell^{-}$ is slightly enhanced. In addition, it is easy to see that the flavor-singlet contributions could cause the BRs of the $\eta^{\prime}$ modes to exceed those of the $\eta$ ones.
Our investigation of the flavor-singlet effects can be extended to the dileptonic decays of $\bar B_{s}\to\eta^{(\prime)} \ell^{+} \ell^{-}$ [@GL]. In the following, we use a tilde to denote the form factors associated with $B_{s}$. Hence, similar to Eq. (\[eq:fs\]), we express the form factors for $\bar{B}_{s}\to\eta^{(\prime)}$ with the flavor-singlet effects at $q^{2}=0$ to be $$\begin{aligned}
\tilde{f}^{\eta}_{+}(0)&=&-\sin\phi
\tilde{f}^{\eta_s[m_{\eta}]}_{+}(0) + \frac{1}{\sqrt{3}} \left(
\sqrt{2} \cos\phi\frac{ f_{q}}{f_{K}} - \sin\phi
\frac{f_{s}}{f_{K}}\right) \tilde{f}^{\rm
sing}_{+}(0)\, ,\nonumber \\
%
\tilde{f}^{\eta^{\prime}}_{+}(0)&=&\cos\phi
\tilde{f}^{\eta_s[m_{\eta^{\prime}}]}_{+}(0) + \frac{1}{\sqrt{3}}
\left( \sqrt{2} \sin\phi\frac{ f_{q}}{f_{K}} + \cos\phi
\frac{f_{s}}{f_{K}}\right) \tilde{f}^{\rm sing}_{+}(0)\,.\end{aligned}$$ For the $q^2$-dependence form factors of $\tilde{f}^{\eta_{s}[m_{\eta^{(\prime)}}]}_{+,T}$, we adopt the results calculated by the constituent quark model (CQM) [@CQM], given by $$\begin{aligned}
\tilde{f}^{\eta_{s}[m_{\eta^{(\prime)}}]}_{+,T}(q^{2})=\frac{\tilde{f}^{\eta_{s}[m_{\eta^{(\prime)}}]}_{+,T}(0)}{1-a^{\eta^{(\prime)}}_{+,T}
q^2/m^{2}_{B_s}+b^{\eta^{(\prime)}}_{+,T} (q^2/m^2_{B_s})^2}\end{aligned}$$ with $\tilde{f}^{\eta_{s}[m_{\eta}]}_{+}(0)=\tilde{f}^{\eta_{s}[m_{\eta^{\prime}}]}_{+}(0)=\tilde{f}^{\eta_s[m_{\eta}]}_{T}(0)=0.36$, $\tilde{f}^{\eta_s[m_{\eta^{\prime}}]}_{T}(0)=0.39$, $a^{\eta}_{+}=a^{\eta^{\prime}}_{+}=0.60$, $b^{\eta}_{+}=b^{\eta^{\prime}}_{+}=0.20$, $a^{\eta}_{T}=a^{\eta^{\prime}}_{T}=0.58$ and $b^{\eta}_{T}=b^{\eta^{\prime}}_{T}=0.18$. By using $m_{B_s}=5.37$ GeV and $V_{ts}=-0.04$ instead of $m_{B}$ and $V_{td}$ in Eq. (\[eq:difpll\]), we present the BRs of $\bar B_{s}\to
\eta^{(\prime)} \ell^{+} \ell^{-}$ in Table \[table:ratebs\]. We also display the BRs as functions of $\tilde{f}^{\rm
sing}_{+}(0)$ in Fig. \[fig:bseta\].
$\tilde{f}^{\rm sing}_{+}(0)$ $0.0$ $0.1$ $0.2$
-------------------------------------------------- -------- -------- --------
$\bar B_{s}\to \eta \ell^{+} \ell^{-}$ $3.71$ $3.27$ $2.84$
$\bar B_{s} \to \eta^{\prime} \ell^{+} \ell^{-}$ $3.35$ $5.97$ $9.35$
: BRs of $\bar B_{s}\to \eta^{(\prime)} \ell^{+} \ell^{-}$ ( in units of $10^{-7}$) with $\phi=39.3^{\circ}$ and $\tilde{f}^{\rm
sing}_{+}(0)=0.0$, $0.1$ and $0.2$.[]{data-label="table:ratebs"}
![(a)\[(b)\] BRs (in units of $10^{-7}$) of $\bar B_{s}\to
\eta^{[\prime]} \ell^{+} \ell^{-}$ as functions of $\tilde{f}^{\rm sing}_{+}(0)$. []{data-label="fig:bseta"}](bseta){width="4.in"}
As seen from Table \[table:ratebs\] and Fig. \[fig:bseta\], due to the flavor-singlet effects, the BRs of $\bar B_{s} \to \eta^{\prime}\ell^{+}
\ell^{-}$ are enhanced by around a factor of $3$ and could be as large as $O(10^{-6})$, whereas those of $\bar B_{s} \to \eta\ell^{+}\ell^{-}$ decrease with increasing $\tilde{f}^{\rm sing}_{+}(0)$; both effects can be tested at future hadron colliders.
In summary, we have studied the effects of the flavor-singlet state on the $\eta^{(\prime)}$ productions in the semileptonic B decays. In terms of the constraints from the current data of $B^{-}\to \eta^{(\prime)} \ell
\nu_{\ell}$, we have found that the BRs of $\bar B_{d,s}\to
\eta^{\prime}\ell^{+} \ell^{-}$ could be enhanced to be $O(10^{-8})$ and $O(10^{-6})$, respectively. Finally, we remark that the flavor-singlet effects could result in the BRs of the $\bar B_{d,s}\to
\eta^{\prime}$ modes being larger than those of the corresponding $\bar B\to \eta$ modes, whereas the ordering is reversed if the effects are neglected.
Note added: After we presented the paper, Charng, Kurimoto and Li [@Li] calculated the flavor singlet contribution to the $B\to \eta^{(')}$ transition form factors from the gluonic content of $\eta^{(')}$ in the large recoil region by using the perturbative QCD (PQCD) approach. Here, we make some comparisons as follows:\
1. While Ref. [@Li] gives a theoretical calculation of the flavor singlet contribution to the form factors in the PQCD, we consider the direct constraint from the experimental data. The conclusion in Ref. [@Li] that the singlet contribution is negligible (large) in the $B\to \eta^{(')}$ form factors is the same as ours. However, the overall ratios in Ref. [@Li] for the singlet contributions are about a factor of 4 smaller than ours. On the other hand, as stressed in Ref. [@Li], they have used a small Gegenbauer coefficient, which corresponds to smaller gluonic contributions. For a larger allowed value of the Gegenbauer coefficient in Ref. [@BN_NPB651], the overall ratios would be a few factors larger. In other words, the real gluonic contributions rely on future experimental measurements.\
2. Although our assumption of $f^{\rm sing}_{T}(0)\sim f^{\rm
sing}_{+}(0)$ seems to be somewhat different from the PQCD calculation as pointed out in Ref. [@Li] due to an additional term, the numerical values of $f^{\rm sing}_{+}(0)=0.042$ and $f^{\rm sing}_{T}(0)=0.035$ by the PQCD [@Li] do not change our assumption very much. After all, as stated in point 1, there exist large uncertainties in the wave functions in the PQCD. In addition, the difference is actually not important for our results, as explained in the text.\
[**Acknowledgments**]{}\
This work is supported in part by the National Science Council of R.O.C. under Grant \#s:NSC-95-2112-M-006-013-MY2, NSC-94-2112-M-007-(004,005) and NSC-95-2112-M-007-059-MY3.
[99]{}
BELLE Collaboration, K. Abe [*et al.*]{}, arXiv:hep-ex/0603001.
BABAR Collaboration, B. Aubert [*et al.*]{}, Phys. Rev. Lett. [**94**]{}, 191802 (2005); arXiv:hep-ex/0608005.
P. Kroll and K. Passek-Kumericki, Phys. Rev. D [**67**]{}, 054017 (2003) \[arXiv:hep-ph/0210045\].
M. Beneke and M. Neubert, Nucl. Phys. B[**651**]{}, 225 (2003) \[arXiv:hep-ph/0210085\].
C.S. Kim, S. Oh and C. Yu, Phys. Lett. B[**590**]{}, 223 (2004) \[arXiv:hep-ph/0305032\].
Babar Collaboration, B. Aubert [*et al.*]{}, arXiv:hep-ex/0607066.
G. Buchalla, A. J. Buras and M. E. Lautenbacher, Rev. Mod. Phys. [**68**]{}, 1230 (1996) \[arXiv:hep-ph/9512380\].
C. S. Lim, T. Morozumi and A. I. Sanda, Phys. Lett. B [**218**]{}, 343 (1989); N. G. Deshpande, J. Trampetic and K. Panose, Phys. Rev. D [**39**]{}, 1461 (1989); P. J. O’Donnell and H. K. K. Tung, Phys. Rev. D [**43**]{}, 2067 (1991).
C.H. Chen and C.Q. Geng, Phys. Rev. D[**66**]{}, 034006 (2002) \[arXiv:hep-ph/0207038\].
J. Schechter, A. Subbaraman and H. Weigel, Phys. Rev. D [**48**]{}, 339 (1993).
T. Feldmann, P. Kroll and B. Stech, Phys. Rev. D[**58**]{}, 114006 (1998) \[arXiv:hep-ph/9802409\].
A. R. Williamson and J. Zupan, Phys. Rev. D [**74**]{}, 014003 (2006) \[Erratum-ibid. D [**74**]{}, 039901 (2006)\] \[arXiv:hep-ph/0601214\].
P. Ball and R. Zwicky, Phys. Rev. D[**71**]{}, 014015 (2005) \[arXiv:hep-ph/0406232\]; Phys. Lett. B[**625**]{}, 225 (2005) \[arXiv:hep-ph/0507076\].
M.J. Dugan and B. Grinstein, Phys. Lett. B[**255**]{}, 583 (1991); J. Charles [*et al.*]{}, Phys. Rev. D[**60**]{}, 014001 (1999).
Particle Data Group, W.M. Yao [*et al.*]{}, J. Phys. G [**33**]{}, 1 (2006).
C.Q. Geng and C.C. Liu, J. Phys. G [**29**]{}, 1103 (2003) \[arXiv:hep-ph/0303246\].
D. Melikhov and B. Stech, Phys. Rev. D[**62**]{}, 014006 (2000) \[arXiv:hep-ph/0001113\].
Y. Y. Charng, T. Kurimoto and H. n. Li, Phys. Rev. D [**74**]{}, 074024 (2006) \[arXiv:hep-ph/0609165\].
[^1]: Email: [email protected]
[^2]: Email: [email protected]
---
abstract: 'Set partitions and permutations with restrictions on the sizes of their blocks and cycles give rise to important combinatorial sequences. Counting these objects leads to sequences generalizing the classical Stirling and Bell numbers. The main focus of the present article is the analysis of their combinatorial and arithmetical properties. The results include several combinatorial identities and recurrences, as well as some properties of their $p$-adic valuations.'
address:
- 'Department of Mathematics, Tulane University, New Orleans, LA 70118'
- 'Departamento de Matemáticas, Universidad Nacional de Colombia, Bogotá, Colombia'
- 'Department of Mathematics, Tulane University, New Orleans, LA 70118'
author:
- 'Victor H. Moll'
- 'José L. Ramirez'
- Diego Villamizar
title: Combinatorial and Arithmetical Properties of the Restricted and Associated Bell and Factorial Numbers
---
Introduction {#intro}
============
The (unsigned) Stirling numbers of the first kind denoted by $c(n,k)$ or ${n \brack k}$ enumerate the number of permutations on $n$ elements with $k$ cycles. The corresponding Stirling numbers of the second kind, denoted by $S(n,k)$ or ${n \brace k}$, enumerate the number of partitions of a set with $n$ elements into $k$ non-empty blocks; see [@comtet-1974a] for general information about them. The recurrences $$\begin{aligned}
{n+1 \brack k}&={n \brack k-1}+n{n \brack k} \quad \text{and}\\
{n+1 \brace k}&={n \brace k-1}+k{n \brace k},\end{aligned}$$ with the initial conditions $$\begin{aligned}
{0 \brack 0}&=1, \quad {n \brack 0}={0 \brack n}=0,\\
{0 \brace 0}&=1, \quad {n \brace 0}={0 \brace n}=0,
\end{aligned}$$ hold for $n\geq 1$. They are related to each other by the orthogonality relation $$\sum_{k\geq 0} {n \brace k}{k \brack m}(-1)^{n-k} =\delta_{n,m},$$ where $\delta_{n,m}$ is the Kronecker delta function.
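The two recurrences and the orthogonality relation are easy to check numerically. The following Python sketch (the function names are ours, not from the paper) implements both recurrences and verifies orthogonality for small $n$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1(n, k):
    """Unsigned Stirling numbers of the first kind c(n, k):
    permutations of n elements with k cycles."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    # c(n, k) = c(n-1, k-1) + (n-1) c(n-1, k)
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind S(n, k):
    partitions of an n-set into k non-empty blocks."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    # S(n, k) = S(n-1, k-1) + k S(n-1, k)
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def orthogonality(n, m):
    """sum_k S(n,k) c(k,m) (-1)^(n-k), expected to be delta_{n,m}."""
    return sum(stirling2(n, k) * stirling1(k, m) * (-1) ** (n - k)
               for k in range(n + 1))

# spot checks: S(4,2) = 7, c(4,2) = 11, and orthogonality up to n = 6
assert stirling2(4, 2) == 7 and stirling1(4, 2) == 11
assert all(orthogonality(n, m) == (1 if n == m else 0)
           for n in range(7) for m in range(7))
```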
The Bell numbers, $B_n$, enumerate the set partitions of a set with $n$ elements, so that $\begin{displaystyle} B_n=\sum_{k=0}^n{n \brace k} \end{displaystyle}$. *Spivey’s formula* [@spivey-2008a] $$\begin{aligned}
\label{spiveye}
B_{n+m}=\sum_{k=0}^n\sum_{j=0}^mj^{n-k} {m \brace j} \binom{n}{k}B_k,\end{aligned}$$ gives a recurrence for them. Further properties of this sequence appear in [@comtet-1974a; @mansour-2012a].\
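Spivey's formula lends itself to a direct numerical check. In the sketch below (helper names ours), note that Python's convention $0^0=1$ matches the term $j^{n-k}$ when $j=0$ and $k=n$:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def stirling2(n, k):
    # S(n, k) = S(n-1, k-1) + k S(n-1, k)
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def bell(n):
    """B_n = sum_k S(n, k)."""
    return sum(stirling2(n, k) for k in range(n + 1))

def spivey_rhs(n, m):
    """Right-hand side of Spivey's formula for B_{n+m}."""
    return sum(j ** (n - k) * stirling2(m, j) * comb(n, k) * bell(k)
               for k in range(n + 1) for j in range(m + 1))

assert [bell(n) for n in range(7)] == [1, 1, 2, 5, 15, 52, 203]
assert all(spivey_rhs(n, m) == bell(n + m)
           for n in range(6) for m in range(6))
```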
The literature contains several generalizations of Stirling numbers; see [@mansour-2015a]. Among them, the so-called restricted and associated Stirling numbers of both kinds (cf. [@bona-2016a; @choijy-2005a; @choijy-2006b; @choijy-2006a; @comtet-1974a; @komatsu-2015c; @komatsu-2015b; @mezo-2014a]) constitute the central character of the work presented here.
The *restricted Stirling numbers of the second kind* $ {n \brace k}_{\leq m }$ give the number of partitions of $n$ elements into $k$ subsets, with the additional restriction that none of the blocks contain more than $m$ elements. Komatsu et al. [@komatsu-2015c] derived the recurrences $$\begin{aligned}
\label{recur-1}
{n+1 \brace k}_{\leq m} =\sum_{j=0}^{m-1} \binom{n}{j}
{n-j \brace k-1}_{\leq m} = k {n \brace k}_{\leq m} + {n \brace k-1}_{\leq m} - \binom{n}{m} {n-m \brace k-1}_{\leq m},
\end{aligned}$$ with initial conditions $ {0 \brace 0}_{\leq m} = 1$ and $ {n\brace 0}_{\leq m} = 0$, for $n \geq 1$. The *restricted Bell numbers* defined by [@miksa-1958a] $$B_{n, \leq m} = \sum_{k=0}^{n} {n \brace k}_{\leq m },$$ enumerate partitions of $n$ elements into blocks, each one of them with at most $m$ elements. For example, $B_{4, \leq 3} = 14$, the partitions being $$\begin{aligned}
& \left\{ \{1 \}, \{ 2 \}, \{ 3 \}, \{ 4 \} \right\}, & & \left\{ \{ 1, \, 2 \}, \{ 3 \}, \{ 4 \} \right\}, & & \left\{ \{ 1, \, 2 \}, \{ 3, \, 4 \} \right\}, & & \left\{ \{ 1, \, 3 \}, \{ 2 \}, \{ 4 \} \right\}, & \\
& \left\{ \{ 1, \, 3 \}, \{ 2, \, 4 \} \right\}, & & \left\{ \{ 1, \, 4 \}, \{ 2 \}, \{ 3 \} \right\}, & & \left\{ \{ 1, \, 4 \}, \{ 2, \, 3 \} \right\}, & & \left\{ \{ 1, \, 2, \, 3 \}, \{ 4 \} \right\}, &\\
& \left\{ \{ 1, \, 2, \, 4 \}, \{ 3 \} \right\}, & & \left\{ \{ 1, \, 3, \, 4 \}, \{ 2 \} \right\}, & & \left\{\{ 1 \}, \{ 2, \, 3, \, 4 \}\right\}, & & \left\{ \{ 1 \}, \{ 2 \}, \, \{ 3, 4 \} \right\}, & \\
& \left\{ \{ 1 \}, \{ 2, 4 \}, \{ 3 \} \right\}, & & \left\{ \{ 1 \}, \{ 2, 3 \}, \{ 4 \} \right\}. & & &
\end{aligned}$$
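A direct implementation of the first recurrence in (\[recur-1\]) (function names ours) reproduces the example $B_{4,\leq 3}=14$:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def rstirling2(n, k, m):
    """Restricted Stirling numbers of the second kind {n brace k}_{<= m}."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    # {n brace k}_{<=m} = sum_{j=0}^{m-1} C(n-1, j) {n-1-j brace k-1}_{<=m}
    return sum(comb(n - 1, j) * rstirling2(n - 1 - j, k - 1, m)
               for j in range(min(m, n)))

def restricted_bell(n, m):
    """B_{n,<=m} = sum_k {n brace k}_{<=m}."""
    return sum(rstirling2(n, k, m) for k in range(n + 1))

assert restricted_bell(4, 3) == 14   # the 14 partitions listed above
assert restricted_bell(4, 2) == 10   # involutions of [4]
assert restricted_bell(4, 4) == 15   # no restriction: the Bell number B_4
```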
An associated sequence is the *restricted Stirling numbers of the first kind* $ {n \brack k}_{\leq m }$. This gives the number of permutations on $n$ elements with $k$ cycles with the restriction that none of the cycles contain more than $m$ items (see [@mezo-2014a] for more information). Komatsu et al. [@komatsu-2015b] established the recurrence $$\begin{aligned}
{n+1 \brack k}_{\leq m} =\sum_{j=0}^{m-1} \frac{n!}{(n-j)!}
{n-j \brack k-1}_{\leq m} = n {n \brack k}_{\leq m} + {n \brack k-1}_{\leq m} - \frac{n!}{(n-m)!} {n-m \brack k-1}_{\leq m},
\end{aligned}$$ with initial conditions $ {0 \brack 0}_{\leq m} = 1$ and $ {n\brack 0}_{\leq m} = 0$. The *restricted factorial numbers*, see [@mezo-2014a], are defined by $$A_{n, \leq m} = \sum_{k=0}^{n} {n \brack k}_{\leq m }.$$ These enumerate all permutations of $n$ elements into cycles with the condition that every cycle has at most $m$ items. For example, $A_{4, \leq 3} = 18$ with the permutations being $$\begin{aligned}
& (1)(2)(3)(4), && (1)(2)(43), & & (1)(32)(4), & & (1)(342), & & (1)(432), & \\
& (1)(42)(3), & & (21)(3)(4), & & (21)(43), & & (231)(4), & & (241)(3), & \\
& (321)(4), & & (31)(2)(4), & & (341)(2), & & (31)(42), & & (421)(3), & \\
& (431)(2), & & (41)(2)(3), && (41)(32).
\end{aligned}$$
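The analogous implementation for the first kind (names ours) reproduces $A_{4,\leq 3}=18$; here `math.perm(n, j)` computes the falling factorial $n!/(n-j)!$:

```python
from functools import lru_cache
from math import perm

@lru_cache(maxsize=None)
def rstirling1(n, k, m):
    """Restricted Stirling numbers of the first kind [n brack k]_{<= m}."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    # [n brack k]_{<=m} = sum_{j=0}^{m-1} (n-1)!/(n-1-j)! [n-1-j brack k-1]_{<=m}
    return sum(perm(n - 1, j) * rstirling1(n - 1 - j, k - 1, m)
               for j in range(min(m, n)))

def restricted_factorial(n, m):
    """A_{n,<=m} = sum_k [n brack k]_{<=m}."""
    return sum(rstirling1(n, k, m) for k in range(n + 1))

assert restricted_factorial(4, 3) == 18  # the 18 permutations listed above
assert restricted_factorial(4, 2) == 10  # involutions: A_{n,<=2} = B_{n,<=2}
assert restricted_factorial(4, 4) == 24  # no restriction: 4!
```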
The paper is organized as follows: Section 2 contains some known identities of the restricted Bell numbers $B_{n, \leq 2}$. In this case, $m=2$, the restricted Bell and restricted factorial numbers coincide, i.e., $B_{n, \leq 2}=A_{n, \leq 2}$. Information about their Hankel transform is included. Section 3 contains extensions of these properties to $m=3$ and Sections 4 and 5 present the general case. Section 6 establishes the log-convexity of the restricted Bell and factorial sequences, extending classical results. Some conjectures on the roots of the restricted Bell polynomials are proposed here. Finally, Section 7 presents some preliminary results on the $p$-adic valuations of these sequences. Explicit expressions for the prime $p=2$ are established. A more complete discussion of these issues is in preparation.
Restricted Bell numbers $B_{n, \leq 2}$ and restricted factorial numbers $A_{n, \leq 2}$ {#sec-restricted2}
========================================================================================
This section discusses the sequence $B_{n, \leq 2}$, which enumerates partitions of $n$ elements into blocks of length at most $2$. Then $B_{n, \leq 2}=A_{n, \leq 2}$ is precisely the number of *involutions* of the $n$ elements, denoted in [@amdeberhan-2015b] by $\text{Inv}_{1}(n)$. This sequence is also called *Bessel numbers of the second kind*, see [@choijy-2003a] for further information.
The well-known recurrence $$\label{recBn2}
B_{n, \leq 2} = B_{n-1,\leq 2} + (n-1) B_{n-2, \leq 2},$$ with initial conditions $B_{0,\leq 2} = B_{1, \leq 2} = 1$, yields the exponential generating function $$\label{expogen}
\sum_{n=0}^{\infty} B_{n, \leq 2} \, \frac{x^{n} }{n!} = \exp{ \left( x + \tfrac{1}{2} x^{2} \right)}$$ as well as the closed-form expression $$\label{eqr1}
B_{n, \leq 2} = \sum_{j=0}^{\lfloor n/2 \rfloor} \binom{n}{2j} \frac{(2j)!}{2^{j} \, j!}.$$ The recurrence $$B_{n_{1}+n_{2}, \leq 2} = \sum_{k \geq 0} k! \binom{n_{1}}{k} \binom{n_{2}}{k} B_{n_{1}-k, \leq 2} B_{n_{2}-k, \leq 2}
\label{recu-1a}$$ is established in [@amdeberhan-2015b].
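The recurrence (\[recBn2\]), the closed form (\[eqr1\]) and the splitting recurrence (\[recu-1a\]) can be cross-checked as follows (names ours):

```python
from math import comb, factorial

def inv_rec(N):
    """Involution numbers via B_n = B_{n-1} + (n-1) B_{n-2}."""
    b = [1, 1]
    for n in range(2, N + 1):
        b.append(b[n - 1] + (n - 1) * b[n - 2])
    return b

def inv_closed(n):
    """Closed form: sum_j C(n, 2j) (2j)! / (2^j j!)."""
    return sum(comb(n, 2 * j) * factorial(2 * j) // (2 ** j * factorial(j))
               for j in range(n // 2 + 1))

b = inv_rec(12)
assert all(inv_closed(n) == b[n] for n in range(13))

def split(n1, n2):
    """Splitting recurrence: B_{n1+n2} = sum_k k! C(n1,k) C(n2,k) B_{n1-k} B_{n2-k}."""
    return sum(factorial(k) * comb(n1, k) * comb(n2, k) * b[n1 - k] * b[n2 - k]
               for k in range(min(n1, n2) + 1))

assert all(split(n1, n2) == b[n1 + n2]
           for n1 in range(6) for n2 in range(6))
```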
Congruences for the involution numbers appeared in Mező [@mezo-2014a], in a problem on the distribution of last digits of related sequences. These include $$B_{n, \leq 2} \equiv B_{n+5, \leq 2} \bmod 10 \text{ if } n > 1 \ \text{ and }
B_{n, \leq 3} \equiv B_{n+5, \leq 3} \bmod 10 \text{ if } n > 3.$$
The Hankel Transform of $B_{n,\leq 2}$
--------------------------------------
For a sequence $A =\left(a_n\right)_{n\in{{\mathbb N}}}$, its *Hankel matrix* $H_n$ of order $n$ is defined by $$\begin{aligned}
H_n=\begin{bmatrix}
a_0 & a_1 & a_2 & \cdots & a_n\\
a_1 & a_2 & a_3 & \cdots & a_{n+1}\\
\vdots & \vdots & \vdots & & \vdots \\
a_n & a_{n+1} & a_{n+2} & \cdots & a_{2n}
\end{bmatrix}.\end{aligned}$$ The *Hankel transform* of $A$ is the sequence $\left( \det H_{n} \right)_{n \in \mathbb{N}}$. Aigner [@aigner-1999a] showed that the Hankel transform of the Bell numbers is the sequence of products of the first $n$ factorials, the so-called *superfactorials*, i.e., $(1!, 1!2!, 1!2!3!, \dots)$. Theorem \[hankel-2\] below shows that the Hankel transform of $B_{n,\leq 2}$ is also given by the superfactorials.
The first result gives the binomial transform of $B_{n, \leq 2}$. This involves the *double factorials* $$(2n-1)!!=\prod_{k=1}^n(2k-1)=\frac{(2n)!}{n!2^n}.$$
\[propo\] The binomial transform of the sequence $B_{n,\leq 2}$ is $$\sum_{i=0}^n(-1)^i\binom{n}{i}B_{i,\leq 2}=\begin{cases}(n-1)!!,& \text{if} \ n \ \text{is even;} \\ 0,& \text{if} \ n \text{\ is odd.} \end{cases}$$ The numbers on the right are called the aerated double factorial.
The exponential generating function $A(x)$ of a sequence $(a_n)_{n\geq 0}$ and that of its binomial transform $S(x)$ are related by $S(x)=e^{-x}A(x)$. The result now follows from (\[expogen\]).
*Combinatorial Proof of Proposition \[propo\]:* Let $\mathcal{B}_{n,\leq 2}$ be the set of all partitions of $[n]$ into blocks of size at most 2. Let $\mathcal{S}_{n,i}=\{\pi \in \mathcal{B}_{n,\leq 2}:\{i\}\in \pi\}$ be the set of such partitions in which $\{i\}$ is a singleton block. There are $B_{n-1,\leq 2}$ of them. Then $$\mathcal{B}_{n,\leq 2}=\bigcup _{i=1}^{n}\mathcal{S}_{n,i} \,\, \bigcup \underbrace{(\mathcal{B}_{n,\leq 2}\setminus (\bigcup _{i=1}^{n}\mathcal{S}_{n,i}))}_{\text{Denote this
by $L_{n}$}}.$$ The inclusion-exclusion principle gives $$B_{n,\leq 2}=\sum _{i=1}^n(-1)^{i-1}\binom{n}{i}B_{n-i,\leq 2}+|L_n|,$$ that yields $$|L_n|=B_{n,\leq 2}-\sum _{i=1}^n(-1)^{i-1}\binom{n}{i}B_{n-i,\leq 2}=\sum _{i=0}^n(-1)^{i}\binom{n}{i}B_{n-i,\leq 2}=\sum _{i=0}^n(-1)^{n-i}\binom{n}{i}B_{i,\leq 2}.$$ On the other hand, $L_n=\{\pi \in \mathcal{B}_{n,\leq 2}: \text{such that if $B\in \pi$ then $|B|=2$}\}$, because it is the complement of the partitions with at least one singleton. Thus $$|L_n|=\frac{\binom{n}{2,2,\dots ,2}}{ \left(\frac{n}{2} \right)!}=\frac{n!}{2^{n/2} \left(\frac{n}{2} \right)!}$$ if $n$ is even and $0$ if $n$ is odd. This establishes the identity.\
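A quick numerical check of Proposition \[propo\] (names ours):

```python
from math import comb, factorial

def involution(n):
    """B_{n,<=2} via B_n = B_{n-1} + (n-1) B_{n-2}."""
    a, b = 1, 1
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return b

def binom_transform(n):
    """sum_i (-1)^i C(n,i) B_{i,<=2}."""
    return sum((-1) ** i * comb(n, i) * involution(i) for i in range(n + 1))

def aerated_double_factorial(n):
    """(n-1)!! = n!/(2^{n/2} (n/2)!) for even n, and 0 for odd n."""
    if n % 2 == 1:
        return 0
    return factorial(n) // (2 ** (n // 2) * factorial(n // 2))

assert all(binom_transform(n) == aerated_double_factorial(n)
           for n in range(12))
```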
Barry and Hennessy [@barry-2010a Example 16] show that the Hankel transform of the aerated double factorials is the sequence of superfactorials. The fact that any integer sequence has the same Hankel transform as its binomial transform [@layman-2001a; @spivey-2006a] gives the next result.
\[hankel-2\] The Hankel transform of the restricted Bell numbers $B_{n,\leq 2}$ is the sequence of superfactorials; that is, for any fixed $n$, $$\det\begin{bmatrix}
B_{0,\leq 2} & B_{1,\leq 2} & B_{2,\leq 2} & \cdots & B_{n,\leq 2}\\
B_{1,\leq 2} & B_{2,\leq 2} & B_{3,\leq 2} & \cdots & B_{n+1,\leq 2}\\
\vdots & \vdots & \vdots & & \vdots \\
B_{n,\leq 2} & B_{n+1,\leq 2} & B_{n+2,\leq 2} & \cdots & B_{2n,\leq 2}
\end{bmatrix}=\prod_{i=0}^{n}i!.$$
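Theorem \[hankel-2\] can be verified for small $n$ by computing the Hankel determinants directly (names ours; the Laplace expansion suffices at these sizes):

```python
from math import factorial

def involutions(N):
    # B_{n,<=2} via the recurrence (recBn2)
    b = [1, 1]
    for n in range(2, N + 1):
        b.append(b[-1] + (n - 1) * b[-2])
    return b

def det(M):
    """Determinant by Laplace expansion along the first row
    (fine for the small sizes used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def superfactorial(n):
    p = 1
    for i in range(n + 1):
        p *= factorial(i)
    return p

b = involutions(8)
for n in range(4):
    H = [[b[i + j] for j in range(n + 1)] for i in range(n + 1)]
    assert det(H) == superfactorial(n)
```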
The Restricted Bell numbers $B_{n, \leq 3}$ and the restricted factorial numbers $A_{n, \leq 3}$ {#sec-recurrence}
================================================================================================
The goal of the current section is to extend some results in the previous section to the case $m=3$. Recurrences established here are employed in Section \[sec-arith\] to discuss arithmetic properties of $B_{n, \leq 3}$ and $A_{n, \leq 3}$.
The first statement relates $B_{n, \leq 3}$ to the involution numbers $B_{n, \leq 2}$.
\[teorecr1\] The restricted Bell numbers $B_{n, \leq 3}$ are given by $$B_{n, \leq 3} = \sum_{j=0}^{\lfloor{ n/3 \rfloor}} \binom{n}{3j} \frac{(3j)!}{(3!)^{j} \, j!} B_{n-3j, \leq 2}.
\label{rec-1}$$
Count the set of all partitions of $[n]$ into blocks of size at most 3 with exactly $j$ blocks of size 3. To do so, first choose a subset of $[n]$ of size $3j$ to place the $j$ blocks of size 3. This is done in $\binom{n}{3j}$ ways. Then, the number of set partitions of $[3j]$ such that each block has three elements is $\frac{(3j)!}{(3!)^{j} \, j!}$. The remaining $n-3j$ elements produce $B_{n-3j,\leq 2}$ partitions. Summing over $j$ completes the argument.
The next result gives a recurrence for $B_{n, \leq 3}$.
\[req1\] The restricted Bell numbers $B_{n, \leq 3 }$ satisfy the recurrence $$B_{n, \leq 3} = B_{n-1, \leq 3} + \binom{n-1}{1} B_{n-2, \leq 3} + \binom{n-1}{2} B_{n-3, \leq 3},
\label{rec-stir1}$$ with initial conditions $B_{0, \leq 3} = 1, \, B_{1, \leq 3} = 1, \, B_{2, \leq 3} = 2.$
The expression for $B_{n, \leq 2}$ in (\[eqr1\]) and the identity (\[rec-1\]) produce $$B_{n, \leq 3} = \sum_{i=0}^{\lfloor \tfrac{n}{3} \rfloor } \sum_{j=0}^{\lfloor \tfrac{n}{2} \rfloor }
\binom{n}{3i} \frac{(3i)!}{6^{i} i!} \binom{n-3i}{2j} \binom{2j}{j} \frac{j!}{2^{j}},$$ that may be written as $$B_{n, \leq 3} = \sum_{i=0}^{\lfloor \tfrac{n}{3} \rfloor } \sum_{j=0}^{\lfloor \tfrac{n}{2} \rfloor}
\binom{n}{3i+2j} \binom{3i+2j}{2j} \binom{2j}{j} \frac{(3i)! \, j!}{6^{i} \, i! \, 2^{j}}.$$ The recurrence is obtained as a routine application of the WZ-method [@nemesi-1997a; @petkovsek-1996a].
*Combinatorial proof of Theorem \[req1\]*: Suppose the first block has size $i$, with $i=1, 2$ or 3. Since this block contains the minimal element, one only needs to choose the remaining $i-1$ elements. Therefore, the number of set partitions of $[n]$ with exactly $i$ elements in the first block is given by $\binom{n-1}{i-1}B_{n-i,\leq3}$ for $i=1, 2, 3$. Summing over $i$ completes the argument.
The above recurrence produces the exponential generating function $$\label{genfun3}
\sum_{n=0}^{\infty} B_{n, \leq 3} \, \frac{x^{n} }{n!} = \exp{ \left( x + \tfrac{1}{2} x^{2} + \tfrac{1}{3!} x^{3} \right)}.$$
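The three descriptions of $B_{n,\leq 3}$, the reduction of Theorem \[teorecr1\], the recurrence of Theorem \[req1\], and the exponential generating function (\[genfun3\]), agree, as the following sketch verifies using exact rational arithmetic for the series coefficients (names ours):

```python
from math import comb, factorial
from fractions import Fraction

def involution(n):
    # B_{n,<=2} via B_n = B_{n-1} + (n-1) B_{n-2}
    a, b = 1, 1
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return b

def b3_formula(n):
    # Theorem: B_{n,<=3} = sum_j C(n,3j) (3j)!/(6^j j!) B_{n-3j,<=2}
    return sum(comb(n, 3 * j) * factorial(3 * j) // (6 ** j * factorial(j))
               * involution(n - 3 * j) for j in range(n // 3 + 1))

def b3_rec(N):
    # recurrence B_n = B_{n-1} + (n-1) B_{n-2} + C(n-1,2) B_{n-3}
    b = [1, 1, 2]
    for n in range(3, N + 1):
        b.append(b[n - 1] + (n - 1) * b[n - 2] + comb(n - 1, 2) * b[n - 3])
    return b

def b3_egf(N):
    # coefficients of exp(g), g = x + x^2/2 + x^3/6, via f' = g' f (exactly)
    g1 = [Fraction(1), Fraction(1), Fraction(1, 2)]   # g'(x) = 1 + x + x^2/2
    f = [Fraction(1)]
    for n in range(N):
        s = sum(g1[k] * f[n - k] for k in range(min(n, 2) + 1))
        f.append(s / (n + 1))
    return [int(factorial(n) * c) for n, c in enumerate(f)]

vals = b3_rec(10)
assert vals == b3_egf(10) == [b3_formula(n) for n in range(11)]
assert vals[4] == 14
```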
The next result is an extension of (\[recu-1a\]) to the case of partitions with blocks of length at most $3$. It is an analog of Spivey’s formula (\[spiveye\]).
\[teo1a\] Define $a(i,j) = \frac{1}{3}(2i-j-k)$, where $k$ is the innermost summation index below. Then the restricted Bell numbers $B_{n, \leq 3}$ satisfy the relation $$\begin{aligned}
B_{n+m, \leq 3} & = & \sum_{i=0}^{n} \binom{n}{i} B_{n-i, \leq 3}
\sum_{j= \left\lceil \tfrac{i}{2} \right\rceil}^{\min \{m, 2i \} } \binom{m}{j} B_{m-j, \leq 3} \\
& & \times \sum_{\substack{k=0 \\ k\equiv -i-j \bmod{3}}}^{\min \{ i, j, 2i-j,2j-i \}} \binom{i}{k} \binom{j}{k} k!
\binom{i-k}{a(j,i)} \binom{j-k}{a(i,j)} \frac{(2a(i,j))! \, (2a(j,i))!}{2^{a(i,j) + a(j,i)}} \nonumber\\
& = &
\sum _{i=0}^n \sum _{j=\lceil \frac{i}{2}\rceil}^{\min \{m,2i\}} \sum_{\substack{k=0 \\ k\equiv -i-j \bmod{3}}}^{\min \{i,j,2i-j,2j-i\}}\frac{n!m!B_{n-i,\leq 3}B_{m-j, \leq 3}}{k!(n-i)!(m-j)!a(i,j)!a(j,i)!2^{\frac{i+j-2k}{3}}}.
\end{aligned}$$
The set of $n+m$ elements whose partitions are enumerated by $B_{n+m, \leq 3}$ is split into two disjoint sets $I_{1}$ and $I_{2}$ of cardinality $n$ and $m$, respectively. Any such partition $\pi$ can be written uniquely in the form $\pi = \pi_{1} \cup \pi_{2} \cup \pi_{3}$, where $\pi_{1}$ is a partition of a subset of $I_{1}, \, \pi_{2}$ is a partition of a subset of $I_{2}$ and the blocks in $\pi_{3}$ contain elements of both $I_{1}$ and $I_{2}$. Denote by $a_{2}$ the number of elements in the blocks of $\pi_{1}$ and by $a_{5}$ the number of elements in the blocks of $\pi_{2}$.
The blocks in $\pi_{3}$ come in three different forms:
*Type 1*. The block is of the form $x = \{ \alpha_{1}, \, \beta_{1} \}$ with $\alpha_{1} \in I_{1}$ and $\beta_{1} \in I_{2}$. Let $a_{1}$ be the number of them. The $n+m$ elements can be placed into blocks of this type in $$\binom{n}{a_{1}} \binom{m}{a_{1}} a_{1}! \quad \text{ ways}.$$
*Type 2*. The form is now $x = \{ \alpha_{1}, \beta_{1}, \beta_{2} \}$ with $\alpha_{1} \in I_{1}$ and $\beta_{j} \in I_{2}$, for $j = 1, 2$. Let $a_{3}$ denote the number of blocks of this type. These contribute $$\binom{n}{a_{3}} \binom{m}{2a_{3}} \frac{(2a_{3})!}{2^{a_{3}}} \quad \text{ to the placement of the } n+m \text{ elements}.$$
*Type 3*. The final form is $x = \{ \alpha_{1}, \alpha_{2}, \beta_{1} \}$ with $\alpha_{j} \in I_{1}, \, j=1, \, 2$ and $\beta_{1} \in I_{2}$. Denote by $a_{4}$ the number of such blocks. These contribute $$\binom{n}{2a_{4}} \binom{m}{a_{4}} \frac{(2a_{4})!}{2^{a_{4}}} \quad \text{ to the count}.$$
Therefore the total number of partitions is given by $$\begin{aligned}
\label{mess-1} \\
B_{n+m, \leq 3} =
\sum \binom{n}{a_{1}, a_2, a_3, 2a_4} \binom{m}{a_{1}, a_5, a_4, 2a_3} a_{1}! \frac{(2a_{3})!}{2^{a_{3}}} \frac{(2a_{4})!}{2^{a_{4}}}
B_{a_{2}, \leq 3} B_{a_{5}, \leq 3}, \nonumber\end{aligned}$$ where the sum extends over all indices $0 \leq a_{1}, a_{2}, a_{3}, a_{4}, a_{5}$ such that $$a_{1} + a_{2} + a_{3} + 2a_{4} = n \text{ and } a_{1} + a_{5} + 2a_{3} + a_{4} = m.$$
Introduce the notation $i = n-a_{2}, \, j = m -a_{5}$ and $k = a_{1}$ (so that $i, \, j, \, k \geq 0$) and solve for $a_{3}$ and $a_{4}$ from $$\begin{aligned}
a_{3} + 2a_{4} &=i-k, \\
2a_{3} + a_{4} &=j-k\end{aligned}$$ to obtain $$a_{3} = \frac{2j-i-k}{3} \quad \text{ and } \quad a_{4} = \frac{2i-j-k}{3}.$$ The fact that $a_{3}, \, a_{4} \in \mathbb{N}$ is equivalent to $i+j+k \equiv 0 \bmod 3$. This gives the result.
The following theorem is the analog of Theorems \[teorecr1\], \[req1\] and (\[genfun3\]). The proof is similar, so it is omitted.
\[teoas3\] The restricted factorial sequence $A_{n, \leq 3}$ is given by $$A_{n, \leq 3} = \sum_{j=0}^{\lfloor{ n/3 \rfloor}} \binom{n}{3j} \frac{(3j)!}{3^{j} \, j!} A_{n-3j, \leq 2}.$$ Moreover, it satisfies the recurrence $$A_{n, \leq 3} = A_{n-1, \leq 3} + (n-1) A_{n-2, \leq 3} + (n-1)(n-2)A_{n-3, \leq 3},
$$ with initial conditions $A_{0, \leq 3} = 1, \, A_{1, \leq 3} = 1, \, A_{2, \leq 3} = 2.$ Its generating function is $$\sum_{n=0}^{\infty} A_{n, \leq 3} \, \frac{x^{n} }{n!} = \exp{ \left( x + \tfrac{1}{2} x^{2} + \tfrac{1}{3} x^{3} \right)}.$$
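The analogous checks for $A_{n,\leq 3}$ (names ours):

```python
from math import comb, factorial

def a2(n):
    """A_{n,<=2} = B_{n,<=2}: involutions."""
    x, y = 1, 1
    for k in range(2, n + 1):
        x, y = y, y + (k - 1) * x
    return y

def a3_formula(n):
    # Theorem: A_{n,<=3} = sum_j C(n,3j) (3j)!/(3^j j!) A_{n-3j,<=2}
    return sum(comb(n, 3 * j) * factorial(3 * j) // (3 ** j * factorial(j))
               * a2(n - 3 * j) for j in range(n // 3 + 1))

def a3_rec(N):
    # recurrence A_n = A_{n-1} + (n-1) A_{n-2} + (n-1)(n-2) A_{n-3}
    a = [1, 1, 2]
    for n in range(3, N + 1):
        a.append(a[n - 1] + (n - 1) * a[n - 2]
                 + (n - 1) * (n - 2) * a[n - 3])
    return a

vals = a3_rec(10)
assert vals == [a3_formula(n) for n in range(11)]
assert vals[4] == 18
```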
The General Case $B_{n,\leq m}$. {#sec4}
================================
In this section, some recurrences of the restricted Bell numbers are generalized. A relation between this sequence and the associated Bell numbers is established. The first statement generalizes (\[eqr1\]) and (\[rec-1\]).
\[formular1\] The restricted Bell numbers $B_{n,\leq m}$ satisfy the recurrence $$B_{n,\leq m}=\sum _{i=0}^{\lfloor \frac{n}{m}\rfloor} \frac{n!}{i!(m!)^i(n-im)!}B_{n-im,\leq m-1}.$$
Count the set of all partitions of $[n]$ with blocks of size at most $m$ that contain exactly $i$ blocks of size $m$. To do so, select the $m\cdot i$ elements that fill the blocks of size $m$, in order. This is done in $\binom{n}{\underbrace{m,m,\dots,m}_{\text{ $i$ times}}}=\frac{n!}{m!^i(n-im)!}$ ways. Now divide by $i!$, since the order of these blocks is irrelevant. The $n-im$ remaining elements of $[n]$ are placed in blocks with at most $m-1$ elements, counted by $B_{n-im,\leq m-1}$. The result follows by summing over $i$.
A direct argument generalizes Theorem \[req1\] and (\[genfun3\]). This result appears in [@miksa-1958a].
\[gfun\] The restricted Bell numbers $B_{n, \leq m}$ satisfy the recurrence $$B_{n, \leq m} = \sum_{k=0}^{m-1} \binom{n-1}{k} B_{n-k-1, \leq m}.$$ Moreover, their exponential generating function is $$\sum_{n=0}^\infty \frac{B_{n,\leq m}x^n}{n!}=\exp\left(\sum_{i=1}^m \frac{x^i}{i!}\right).$$
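The reduction of Theorem \[formular1\] and the recurrence of Theorem \[gfun\] can be checked against each other, and against the classical Bell numbers when $m \geq n$ (names ours):

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def B(n, m):
    """B_{n,<=m} via the recurrence of Theorem [gfun]."""
    if n == 0:
        return 1
    return sum(comb(n - 1, k) * B(n - k - 1, m) for k in range(min(m, n)))

@lru_cache(maxsize=None)
def B_red(n, m):
    """B_{n,<=m} via the reduction to B_{n,<=m-1} of Theorem [formular1]."""
    if m == 1:
        return 1  # only singleton blocks remain
    return sum(factorial(n) // (factorial(i) * factorial(m) ** i
                                * factorial(n - i * m))
               * B_red(n - i * m, m - 1) for i in range(n // m + 1))

assert all(B(n, m) == B_red(n, m) for n in range(11) for m in range(1, 8))
assert B(4, 3) == 14 and B(4, 2) == 10
# for m >= n the restriction is vacuous: the classical Bell numbers
assert [B(n, 8) for n in range(8)] == [1, 1, 2, 5, 15, 52, 203, 877]
```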
The next result generalizes Theorem \[teo1a\].
\[teo1ag\] Let $f(i,j)=2+j+\binom{i-1}{2}$. Then $$\begin{aligned}
B_{n+m,\leq k}=n!m!\sum_{X}\frac{B_{a_1,\leq k}B_{a_2,\leq k}}{a_1!a_2!\prod _{i=2}^k\prod _{j=1}^{i-1}{j!^{a_{f(i,j)}}(i-j)!^{a_{f(i,j)}}a_{f(i,j)}!}},\end{aligned}$$ where $X$ stands for the following set of indices $$X=\{(a_1,a_2,\dots ,a_{1+k+\binom{k-1}{2}}): a_1+\sum _{i=2}^k\sum _{j=1}^{i-1}ja_{f(i,j)}=n \wedge a_2+\sum _{i=2}^k\sum _{j=1}^{i-1}(i-j)a_{f(i,j)}=m \}.$$
The set of $n + m$ elements, whose partitions are enumerated by $B_{n+m,\leq k}$, is split into two disjoint sets $I_1=[n]$ and $I_2=[n+m]\setminus [n]$ of cardinality $n$ and $m$, respectively. Any such partition $\pi$ can be written uniquely in the form $\pi = \pi _1 \cup \pi _2 \cup \pi _3$, where blocks in $\pi _1$ are subsets of $I_1$, blocks in $\pi _2$ are subsets of $I_2$ and the blocks in $\pi _3$ contain elements of both $I_1$ and $I_2$. Denote by $a_1$ the number of elements that are going to be in blocks of $\pi _1$ and by $a_2$ the number of elements that are going to be in blocks of $\pi _2$. These are counted by $B_{a_1,\leq k}B_{a_2,\leq k}$.
The blocks in $\pi _3$ come in different forms depending on how many elements are in the blocks, how many come from $[n]$, and how many from $[n+m]\setminus [n]$. Denote by $a_{f(i,j)}$ the number of blocks in $\pi _3$ which have $j>0$ elements of $[n]$ and $i-j>0$ from $[n+m]\setminus [n]$. It is required to choose $ja_{f(i,j)}$ elements from $[n]$ and $(i-j)a_{f(i,j)}$ from $[n+m]\setminus [n]$. The total number of choices for grouping the $a_{f(i,j)}$ blocks is given by $$\binom{(i-j)a_{f(i,j)}}{\underbrace{i-j,i-j,\dots ,i-j}_{\text{ $a_{f(i,j)}$ times}}}\binom{ja_{f(i,j)}}{\underbrace{j,j,\dots,j}_{\text{$a_{f(i,j)}$ times}}}\frac{1}{a_{f(i,j)}!}=\frac{(ja_{f(i,j)})!((i-j)a_{f(i,j)})!}{j!^{a_{f(i,j)}}(i-j)!^{a_{f(i,j)}}a_{f(i,j)}!}.$$ The multinomial coefficients account for the possible groups on each side and the factorial in the denominator accounts for the order of the blocks. Summing over all possible configurations gives the result.
Relations between restricted and associated Bell numbers
--------------------------------------------------------
The *associated Stirling numbers of the second kind* $ {n \brace k}_{\geq m }$ give the number of partitions of $n$ elements into $k$ subsets under the restriction that every block contains *at least* $m$ elements. Komatsu et al. [@komatsu-2015c] derived the recurrence $$\begin{aligned}
{n+1 \brace k}_{\geq m} =\sum_{j=m-1}^{n} \binom{n}{j}{n-j \brace k-1}_{\geq m} = k {n \brace k}_{\geq m} + \binom{n}{m-1} {n-m +1\brace k-1}_{\geq m},
\end{aligned}$$ with initial conditions $ {0 \brace 0}_{\geq m} = 1$ and ${n\brace 0}_{\geq m} = 0.$ The *associated Bell numbers* are defined by $$B_{n, \geq m} = \sum_{k=0}^{n} {n \brace k}_{\geq m }.$$ They enumerate partitions of $n$ elements into blocks with the condition that every block has at least $m$ elements. For example, $B_{4, \geq 3} =1 $ with the partition being $ \left\{ \{1 , 2 , 3, 4 \} \right\}$. Their generating function is $$\sum_{n=0}^\infty \frac{B_{n,\geq m}x^n}{n!}=\exp\left(\exp(x) - \sum_{i=0}^{m-1} \frac{x^i}{i!}\right).$$
In the case $m=2$, $B_{n, \geq 2}$ enumerates partitions of $n$ elements without singleton blocks; it satisfies (cf. [@bona-2016a]) $$\label{idbell2}
B_{n} = B_{n,\geq 2} + B_{n+1, \geq 2},$$ and its exponential generating function is given by $$\sum_{n=0}^{\infty} B_{n, \geq 2} \, \frac{x^{n} }{n!} = \exp{ \left(\exp(x) - 1- x \right)}.$$ Therefore, the binomial transform of $B_{n,\geq 2}$ is the Bell sequence $B_n$, i.e., $$\sum_{i=0}^n\binom{n}{i}B_{i,\geq 2}=B_n.$$ The fact that integer sequences and their inverse binomial transform have the same Hankel transform [@spivey-2006a] gives the following result.
The Hankel transform of the associated Bell numbers $B_{n,\geq 2}$ is the sequence of superfactorials. That is, for any fixed $n$, $$\det\begin{bmatrix}
B_{0,\geq 2} & B_{1,\geq 2} & B_{2,\geq 2} & \cdots & B_{n,\geq 2}\\
B_{1,\geq 2} & B_{2,\geq 2} & B_{3,\geq 2} & \cdots & B_{n+1,\geq 2}\\
\vdots & \vdots & \vdots & & \vdots \\
B_{n,\geq 2} & B_{n+1,\geq 2} & B_{n+2,\geq 2} & \cdots & B_{2n,\geq 2}
\end{bmatrix}=\prod_{i=0}^{n}i!.$$
The associated Bell numbers $B_{n,\geq 2}$ and the Bell numbers $B_n$ are related by $$B_{n,\geq 2}=\sum _{i=0}^n(-1)^i\binom{n}{i}B_{n-i}.$$
Let $\mathcal{B}_n$ be the set of all partitions of $[n]$, and let $\mathcal{B}_{n,\geq 2}$ be the set of all partitions into blocks of length at least $2$. Denote by $\mathcal{S}_{n,i}$ the set of partitions where $i$ is in a singleton block. Then $$\mathcal{B}_{n,\geq 2}=\mathcal{B}_{n}\setminus \bigcup _{i\in [n]} \mathcal{S}_{n,i},$$ and the inclusion-exclusion principle produces $$B_{n,\geq 2}=B_n-\sum _{i=1}^n(-1)^{i-1}\sum _{a_1<a_2<\cdots <a_i} \left| \bigcap _{j=1}^i S_{n,a_j} \right|.$$ The identity now follows from $ \left| \displaystyle \bigcap _{j=1}^i S_{n,a_j} \right|=B_{n-i}$.
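The identity (\[idbell2\]), the binomial transform relation, and the alternating-sum inversion can all be tested numerically; here $B_{n,\geq 2}$ is computed independently from the associated Stirling recurrence (names ours):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def astirling2(n, k, m):
    """Associated Stirling numbers {n brace k}_{>= m}."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    # {n brace k}_{>=m} = sum_{j=m-1}^{n-1} C(n-1, j) {n-1-j brace k-1}_{>=m}
    return sum(comb(n - 1, j) * astirling2(n - 1 - j, k - 1, m)
               for j in range(m - 1, n))

def bell_ge2(n):
    return sum(astirling2(n, k, 2) for k in range(n + 1))

def bell(n):
    # classical Bell numbers via B_{i+1} = sum_k C(i, k) B_k
    b = [1]
    for i in range(n):
        b.append(sum(comb(i, k) * b[k] for k in range(i + 1)))
    return b[n]

# identity (idbell2): B_n = B_{n,>=2} + B_{n+1,>=2}
assert all(bell(n) == bell_ge2(n) + bell_ge2(n + 1) for n in range(9))
# binomial transform of B_{n,>=2} is B_n
assert all(sum(comb(n, i) * bell_ge2(i) for i in range(n + 1)) == bell(n)
           for n in range(9))
# inversion: B_{n,>=2} = sum_i (-1)^i C(n,i) B_{n-i}
assert all(bell_ge2(n) == sum((-1) ** i * comb(n, i) * bell(n - i)
                              for i in range(n + 1)) for n in range(9))
```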
The next result gives a reduction for the associated Bell numbers $B_{n, \geq k}$, in the index $k$ counting the minimal number of elements in a block.
\[asbell\] The associated Bell numbers $B_{n,\geq k}$ satisfy $$B_{n,\geq k}=B_{n,\geq k-1}-\sum _{i=1}^{\lfloor \frac{n}{k-1}\rfloor}\frac{n!}{(k-1)!^i(n-(k-1)i)!i!}B_{n-(k-1)i,\geq k}.$$
Denote by $\mathcal{B}_{n,\geq k}$ the set of all partitions with blocks of length at least $k$. Then $\mathcal{B}_{n,\geq k}\subseteq \mathcal{B}_{n,\geq k-1}$ and let $A=\mathcal{B}_{n,\geq k-1}\setminus \mathcal{B}_{n,\geq k}$ be the set difference. For $1 \leq i \leq n$, define $$A_i = \{\pi \in \mathcal{B}_{n,\geq k-1}: \text{ the number of blocks of size $k-1$ is $i$}\},$$ and observe that $$A= \bigcup _{i=1}^n A_i =\mathcal{B}_{n,\geq k-1}\setminus \mathcal{B}_{n,\geq k}.$$ The sets $\{A_i\}$ form a partition of $A$ with $$|A_i|= \frac{1}{i!} \binom{n}{k-1,k-1,\dots ,k-1}B_{n-i(k-1),\geq k}.$$ The identity follows from this.
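Theorem \[asbell\] can be checked against an independent computation of $B_{n,\geq k}$ obtained by conditioning on the size of the block containing the element $1$ (this alternative recurrence, and the function names, are ours):

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def bell_ge(n, m):
    """B_{n,>=m}: the block containing 1 has some size j >= m."""
    if n == 0:
        return 1
    return sum(comb(n - 1, j - 1) * bell_ge(n - j, m)
               for j in range(m, n + 1))

def reduction_rhs(n, k):
    """Right-hand side of Theorem [asbell]:
    B_{n,>=k-1} - sum_i n!/((k-1)!^i (n-(k-1)i)! i!) B_{n-(k-1)i,>=k}."""
    s = sum(factorial(n) // (factorial(k - 1) ** i * factorial(i)
                             * factorial(n - (k - 1) * i))
            * bell_ge(n - (k - 1) * i, k)
            for i in range(1, n // (k - 1) + 1))
    return bell_ge(n, k - 1) - s

assert all(bell_ge(n, k) == reduction_rhs(n, k)
           for k in range(2, 5) for n in range(10))
```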
The associated Bell numbers can be calculated from the Bell numbers and restricted Bell numbers via $$B_{n,\geq k}=B_n-\sum _{i =1}^n\binom{n}{i}B_{i,\leq k-1}B_{n-i,\geq k}.$$
Recall that $\mathcal{B}_n$ is the set of partitions of $[n]$. Split it as $\mathcal{B}_n = A \cup B$, where $
A=\{\pi \in \mathcal{B}_n: \text{if }D\in \pi, \text{ then }|D|\geq k \},
$ and $B$ is the complement of $A$ in $\mathcal{B}_{n}$. Then $|A|+|B|=B_n$. Now $|A|=B_{n,\geq k}$ and $B$ can be partitioned into $\{C_i\}_{i\in [n]}$, where $C_i$ contains the partitions in which exactly $i$ elements of $[n]$ lie in blocks of length less than $k$ and the remaining $n-i$ lie in blocks of length at least $k$. Therefore $$|C_i|=\binom{n}{i}B_{i,\leq k-1}B_{n-i,\geq k},$$ and the result follows by summing over $i$.
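The same style of check applies to the reduction via restricted Bell numbers (the helper recurrences and names are ours):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def bell_ge(n, m):
    """B_{n,>=m}: condition on the size j >= m of the block containing 1."""
    if n == 0:
        return 1
    return sum(comb(n - 1, j - 1) * bell_ge(n - j, m)
               for j in range(m, n + 1))

@lru_cache(maxsize=None)
def bell_le(n, m):
    """B_{n,<=m}: condition on the size j <= m of the block containing 1."""
    if n == 0:
        return 1
    return sum(comb(n - 1, j - 1) * bell_le(n - j, m)
               for j in range(1, min(m, n) + 1))

def bell(n):
    return bell_ge(n, 1)  # no restriction

# B_{n,>=k} = B_n - sum_i C(n,i) B_{i,<=k-1} B_{n-i,>=k}
assert all(bell_ge(n, k) ==
           bell(n) - sum(comb(n, i) * bell_le(i, k - 1) * bell_ge(n - i, k)
                         for i in range(1, n + 1))
           for k in range(2, 5) for n in range(9))
```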
The next result is the analog of Theorem \[teo1ag\] for the case of the associated Bell numbers.
Denote $f(i,j)=2+j+\binom{i-1}{2}$, then $$B_{n+m,\geq k}=n!m!\sum_{X}\frac{B_{a_1,\geq k}B_{a_2, \geq k}}{a_1!a_2!\prod _{i=k}^{n+m}\prod _{j=1}^{i-1}{j!^{a_{f(i,j)}}(i-j)!^{a_{f(i,j)}}a_{f(i,j)}!}},$$ where $X$ stands for the following set of variables $$\begin{gathered}
X=\left\{ (a_1,a_2,a_{3+\binom{k-1}{2}}, \dots ,a_{2+n+m-1+\binom{n+m-1}{2}}): \right. \\
\left. a_1+\sum _{i=k}^{n+m}\sum _{j=1}^{i-1}ja_{f(i,j)}=n \wedge a_2+\sum _{i=k}^{n+m}\sum _{j=1}^{i-1}(i-j)a_{f(i,j)}=m \right\}.
\end{gathered}$$
The General Case $A_{n,\leq m}$. {#sec5}
================================
This section presents the analogs, for the sequence $A_{n, \leq m}$, of the results in the previous section.
The first statement generalizes Theorem \[teoas3\] and is the analog of Theorem \[formular1\]. The proof is similar to the one given above. Details are omitted.
The restricted factorial numbers $A_{n, \leq m}$ are given by $$A_{n, \leq m} =\sum _{i=0}^{\lfloor \frac{n}{m}\rfloor} \frac{n!}{m^ii!(n-im)!}A_{n-im,\leq m-1}.$$
The next statement is found in [@mezo-2014a].
\[teo52\] The restricted factorial sequence $A_{n, \leq m}$ satisfies the recurrence $$A_{n, \leq m} = \sum_{j=0}^{m-1} \frac{(n-1)!}{(n-1-j)!} A_{n-1-j,\leq m},
$$ with initial conditions $A_{0, \leq m} = 1\ \text{and} \ A_{1, \leq m} = 1.$ Its generating function is $$\sum_{n=0}^{\infty} A_{n, \leq m} \, \frac{x^{n} }{n!} = \exp{ \left( x + \tfrac{1}{2} x^{2} + \tfrac{1}{3} x^{3} +\cdots + \tfrac{1}{m}x^m\right)}.$$
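Both descriptions of $A_{n,\leq m}$ can be cross-checked numerically (names ours):

```python
from functools import lru_cache
from math import factorial, perm

@lru_cache(maxsize=None)
def A(n, m):
    """A_{n,<=m} via the recurrence of Theorem [teo52]."""
    if n == 0:
        return 1
    # A_n = sum_{j=0}^{m-1} (n-1)!/(n-1-j)! A_{n-1-j}
    return sum(perm(n - 1, j) * A(n - 1 - j, m) for j in range(min(m, n)))

@lru_cache(maxsize=None)
def A_red(n, m):
    """A_{n,<=m} via the reduction to A_{n,<=m-1}."""
    if m == 1:
        return 1  # only fixed points: the identity permutation
    return sum(factorial(n) // (m ** i * factorial(i) * factorial(n - i * m))
               * A_red(n - i * m, m - 1) for i in range(n // m + 1))

assert all(A(n, m) == A_red(n, m) for n in range(11) for m in range(1, 8))
assert A(4, 3) == 18 and A(4, 2) == 10
# for m >= n the restriction is vacuous: n! permutations
assert [A(n, 8) for n in range(8)] == [factorial(n) for n in range(8)]
```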
The next reduction formula gives $A_{n+m, \leq k}$ in terms of lower values of the first index.
\[spiveyres\] Let $f(i,j)=2+j+\binom{i-1}{2}$. Then $$A_{n+m,\leq k}=n!m!\sum_{X}\frac{A_{a_1,\leq k}A_{a_2,\leq k}}{a_1!a_2!}\prod _{i=2}^k\prod _{j=1}^{i-1}\binom{i}{j}^{a_{f(i,j)}}\frac{1}{i^{a_{f(i,j)}} \cdot a_{f(i,j)}!},$$ where $X$ stands for the following set of indices $$X=\{(a_1,a_2,\dots ,a_{1+k+\binom{k-1}{2}}): a_1+\sum _{i=2}^k\sum _{j=1}^{i-1}ja_{f(i,j)}=n \wedge a_2+\sum _{i=2}^k\sum _{j=1}^{i-1}(i-j)a_{f(i,j)}=m \}.$$
The special case $k=3$ gives $$A_{n+m, \leq 3} = \sum _{i=0}^n\sum _{j=0}^m\sum _{\overset{l=0}{{l\equiv -n-m+i+j\bmod 3}} }^{\min \{n-i,m-j\}}\frac{n!m!A_{i,\leq 3}A_{j,\leq 3}}{i!j!l! \left( \frac{2m-n+i-2j-l}{3} \right)! \, \left(\frac{2n-m-2i+j-l}{3}\right)!}.$$
The Associated Factorial Numbers $A_{n, \geq m} $
-------------------------------------------------
This section presents analogous results for the sequence built from the *associated Stirling numbers of the first kind* $ {n \brack k}_{\geq m }$. These numbers satisfy the following recurrence [@komatsu-2015b] $$\begin{aligned}
{n+1 \brack k}_{\geq m} =\sum_{j=m-1}^{n} \frac{n!}{(n-j)!}
{n-j \brack k-1}_{\geq m} = n {n \brack k}_{\geq m} + \frac{n!}{(n-m+1)!} {n-m + 1 \brack k-1}_{\geq m},
\end{aligned}$$ with the initial conditions $
{0 \brack 0}_{\geq m} = 1
\text{ and }
{n\brack 0}_{\geq m} = 0.
$ The *associated factorial numbers* defined by $$A_{n, \geq m} = \sum_{k=0}^{n} {n \brack k}_{\geq m },$$ enumerate all permutations of $n$ elements into cycles with the condition that every cycle has at least $m$ items. Its generating function is given by [@wilf-1990a] $$\sum_{n=0}^{\infty} A_{n, \geq m} \, \frac{x^{n} }{n!} = \exp\left(\sum_{n=m}^{\infty}\frac{x^n}{n}\right)=\exp\left(\log\frac{1}{1-x}-\sum_{n=1}^{m-1}\frac{x^n}{n}\right).$$ In particular, if $m=2$ we obtain the number of permutations of $n$ elements with no fixed points, the classical derangement numbers. This sequence satisfies (cf. [@bona-2012a]) $$\begin{aligned}
A_{n, \geq 2} &= nA_{n-1, \geq 2} + (-1)^{n}, \quad n\geq 1, \label{bonab}\\
&=(n-1)(A_{n-1,\geq2}+A_{n-2,\geq 2}). \label{bonab2}
\end{aligned}$$ Radoux [@radoux-1991a] has shown that the Hankel transform of the associated factorial numbers $A_{n,\geq 2}$ is given by $\begin{displaystyle}\prod_{i=1}^{n}i!^2.\end{displaystyle}$
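The two derangement recurrences (\[bonab\]) and (\[bonab2\]), together with Radoux's Hankel evaluation, can be verified for small $n$ (names ours):

```python
from math import factorial

def derangements(N):
    """D_n via D_n = n D_{n-1} + (-1)^n."""
    d = [1]
    for n in range(1, N + 1):
        d.append(n * d[n - 1] + (-1) ** n)
    return d

d = derangements(10)
assert d[:7] == [1, 0, 1, 2, 9, 44, 265]
# the second recurrence D_n = (n-1)(D_{n-1} + D_{n-2})
assert all(d[n] == (n - 1) * (d[n - 1] + d[n - 2]) for n in range(2, 11))

def det(M):
    """Determinant by Laplace expansion (fine at these sizes)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(n))

# Radoux: the Hankel determinant of (D_n) is prod_{i=1}^n (i!)^2
for n in range(4):
    H = [[d[i + j] for j in range(n + 1)] for i in range(n + 1)]
    expected = 1
    for i in range(1, n + 1):
        expected *= factorial(i) ** 2
    assert det(H) == expected
```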
The following theorem is the analog of Theorem \[asbell\], with a similar proof. The details are omitted.
For $n, \, k \in \mathbb{N}$ with $k > 1$, the associated factorial numbers $A_{n,\geq k}$ satisfy $$A_{n,\geq k}=A_{n,\geq k-1}-\sum _{i=1}^{\lfloor \frac{n}{k-1}\rfloor}\frac{n!}{(k-1)^i(n-(k-1)i)!i!}A_{n-(k-1)i,\geq k}.$$
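A quick way to test this identity numerically is to generate $A_{n,\geq k}$ independently by conditioning on the length of the cycle containing the element $n$; the following sketch (ours, not from the paper) does exactly that.

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def A_ge(n, k):
    # A_{n,>=k}: condition on the length j of the cycle through element n;
    # there are (n-1)!/(n-j)! cycles of length j containing n
    if n == 0:
        return 1
    return sum(factorial(n - 1) // factorial(n - j) * A_ge(n - j, k)
               for j in range(k, n + 1))

def rhs(n, k):
    # right-hand side of the identity relating A_{n,>=k} to A_{n,>=k-1};
    # n!/((k-1)^i (n-(k-1)i)! i!) counts unordered collections of i
    # disjoint (k-1)-cycles, so each summand is an integer
    return A_ge(n, k - 1) - sum(
        factorial(n) * A_ge(n - (k - 1) * i, k)
        // ((k - 1) ** i * factorial(n - (k - 1) * i) * factorial(i))
        for i in range(1, n // (k - 1) + 1))
```

For $k=2$ the subtracted sum reduces to $\sum_i \binom{n}{i} A_{n-i,\geq 2} - A_{n,\geq 2} = n! - A_{n,\geq 2}$, the classical binomial identity for derangements.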
The following result corresponds to Theorem \[spiveyres\].
Denote $f(i,j)=2+j+\binom{i-1}{2}$, then $$A_{n+m,\geq k}=n!m!\sum_{X}\frac{A_{a_1,\geq k}A_{a_2,\geq k}}{a_1!a_2!}\prod _{i=k}^{n+m}\prod _{j=1}^{i-1}\binom{i}{j}^{a_{f(i,j)}}\frac{1}{i^{a_{f(i,j)}}\cdot a_{f(i,j)}!},$$ where $X$ stands for the following set of variables $$X=\{(a_1,a_2,\dots ,a_{1+k+\binom{k-1}{2}}): a_1+\sum _{i=k}^{n+m}\sum _{j=1}^{i-1}ja_{f(i,j)}=n \wedge a_2+\sum _{i=k}^{n+m}\sum _{j=1}^{i-1}(i-j)a_{f(i,j)}=m \}.$$
The next statement generalizes the recurrence (\[bonab2\]).
\[genAigner\] The associated factorial numbers $A_{n,\geq k}$ satisfy $$A_{n,\geq k}=(n-1)A_{n-1,\geq k}+ (n-1)^{\underline{k-1}}A_{n-k,\geq k}, \quad n\geq 1,$$ where $n^{\underline{k}}:=n(n-1)\cdots (n-(k-1)) = \begin{displaystyle} \frac{n!}{(n-k)!} \end{displaystyle}$ and $n^{\underline{0}}=1$.
Denote by $\mathcal{A}_{n,\geq k}$ the set of permutations $\sigma$ on $n$ elements such that the length of every cycle in $\sigma$ is at least $k$ (i.e., $\mathcal{A}_{n,\geq k}=\{\sigma \in \mathcal{S}_n:| \langle i \rangle|\geq k \text{ for all } i\in [n]\}$, where $\langle i \rangle$ denotes the cycle of $i\in [n]$, regarded as a set). For $\sigma \in \mathcal{A}_{n,\geq k}$, there are two cases for the element $n\in[n]$:
- *Case 1*: here $|\langle n \rangle |=k$. It is required to construct a cycle of length $k$ containing $n$. In order to do this, one must choose $k-1$ numbers from $[n-1]$ and place them in the same cycle. This can be done in $\binom{n-1}{k-1}$ ways and the total number of possible cycles is $\binom{n-1}{k-1}(k-1)!=(n-1)^{\underline{k-1}}$. The other cycles are counted by $A_{n-k,\geq k}$, for a total of $(n-1)^{\underline{k-1}}A_{n-k,\geq k}$.
- *Case 2*: now $| \langle n \rangle|>k$. Then one needs to place $n$ in any cycle of a permutation $\sigma ' \in \mathcal{A}_{n-1,\geq k}$. There are $(n-1)A_{n-1,\geq k}$ ways to do it.
The identity follows from this discussion.
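The counting argument can be confirmed by brute force for small $n$; the sketch below (ours, for illustration) enumerates $\mathcal{S}_n$ directly and checks the recurrence of Theorem \[genAigner\].

```python
from itertools import permutations
from math import factorial

def min_cycle_len(perm):
    # smallest cycle length of a permutation given in one-line notation
    n, seen, best = len(perm), set(), len(perm) + 1
    for s in range(n):
        if s not in seen:
            length, j = 0, s
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            best = min(best, length)
    return best

def A_ge_brute(n, k):
    # A_{n,>=k} by direct enumeration of S_n (feasible for small n)
    if n == 0:
        return 1
    return sum(1 for p in permutations(range(n)) if min_cycle_len(p) >= k)
```

The brute-force counts agree with $(n-1)A_{n-1,\geq k}+(n-1)^{\underline{k-1}}A_{n-k,\geq k}$ for every $n\geq k$ in the tested range.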
Log-Convex and Log-Concavity Properties {#sec-logconvex}
=======================================
A sequence $(a_n)_{n\geq0}$ of nonnegative real numbers is called *log-concave* if $a_na_{n+2}\leq a_{n+1}^2$, for all $n\geq 0$. It is called *log-convex* if $a_na_{n+2}\geq a_{n+1}^2$ for all $n\geq 0$. There is a large collection of results on log-concavity and log-convexity and their relation to combinatorial sequences. Some of these appear in [@brenti-1989a], [@mcnamara-2010a], [@medinal-2016a], [@sagan-1998a] and [@wilf-1990a]. The Bell sequence is log-convex [@asai-2000a] and it is not difficult to verify that the same is true for restricted Bell numbers and restricted factorial numbers.
A sequence $(a_n)_{n\in{{\mathbb N}}}$ has no internal zeros if there do not exist integers $0\leq i<j<k$ such that $a_i\neq 0, a_j=0, a_k\neq0$.
\[BCT\] Let $\{ 1, w_1, w_2,\dots \}$ be a log-concave sequence of nonnegative real numbers with no internal zeros. Define the sequence $(a_n)_{n\geq0}$ by $$\sum_{n=0}^\infty \frac{a_n}{n!}x^n=\exp\left(\sum_{j=1}^\infty \frac{w_j}{j}x^j\right).$$ Then the sequence $(a_n)_{n\geq0}$ is log-convex and the sequence $(a_n/n!)_{n\geq0}$ is log-concave.
This result is now used to verify the log-convexity of $B_{n, \leq m}$.
The restricted Bell sequence $(B_{n,\leq m})_{n\geq 0}$ is log-convex and the sequence $(B_{n,\leq m}/n!)_{n\geq 0}$ is log-concave.
The result follows from Theorems \[gfun\] and \[BCT\] and the log-concavity of the sequence $$w_i=\begin{cases}
\frac{1}{(i-1)!},& \ \text{if} \ 1\leq i \leq m;\\
0,& i>m. \end{cases}$$
The next statement is similar.
The restricted factorial sequence $(A_{n,\leq m})_{n\geq 0}$ is log-convex and the sequence $(A_{n,\leq m}/n!)_{n\geq 0}$ is log-concave.
Now use Theorems \[teo52\] and \[BCT\] and the sequence $$w_i=\begin{cases}
1,& \ \text{if} \ 1\leq i \leq m;\\
0,& i>m. \end{cases}$$ to produce the result.
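Both statements are easy to probe numerically. The sketch below (ours) generates $B_{n,\leq m}$ and $A_{n,\leq m}$ from their exponential generating functions, using the weights of the two proofs above and the differential equation $e'=g'e$ for $e=\exp g$; function names are our own.

```python
from fractions import Fraction
from math import factorial

def counts_from_weights(w, N):
    # returns n! [x^n] exp( sum_{j=1}^{m} (w[j]/j) x^j ) for n = 0..N,
    # where w is a list with w[0] unused; computed from e' = g' e,
    # i.e. n e_n = sum_{j<=m} w_j e_{n-j}
    m = len(w) - 1
    e = [Fraction(1)] + [Fraction(0)] * N
    for n in range(1, N + 1):
        e[n] = sum(Fraction(w[j]) * e[n - j]
                   for j in range(1, min(m, n) + 1)) / n
    return [e[n] * factorial(n) for n in range(N + 1)]
```

With $w_j = 1/(j-1)!$ this produces $B_{n,\leq m}$, and with $w_j = 1$ it produces $A_{n,\leq m}$; both pass the log-convexity and log-concavity inequalities in the tested range.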
Open questions
--------------
Some conjectured statements are collected here. The restricted Bell polynomials are defined by $$B_{n,\leq m}(x):=\sum_{k=0}^n{n \brace k}_{\leq m} x^k.$$
The recurrence (\[recur-1\]) produces $$B_{n+1,\leq m}(x)=xB_{n,\leq m}(x)+xB'_{n,\leq m}(x)-\binom{n}{m}xB_{n-m,\leq m}(x).$$ This can be verified directly: $$\begin{aligned}
B_{n+1,\leq m}(x)&=x\sum_{k=0}^{n} k{n \brace k}_{\leq m}x^{k-1} + x\sum_{k=0}^{n} {n \brace k}_{\leq m}x^{k} - \binom{n}{m} x\sum_{k=0}^{n-m} {n-m \brace k}_{\leq m} x^{k} \\
&= xB'_{n,\leq m}(x) + xB_{n,\leq m}(x)-\binom{n}{m}xB_{n-m,\leq m}(x).
\end{aligned}$$ The authors have tried, without success, to establish the next two statements:
\[conj1\] The roots of the polynomial $B_{n,\leq m}(x)$ are real and non-positive if $m \neq 3, 4$.
Recall that a finite sequence $\{ a_{j}, \, 0 \leq j \leq n \}$ of non-negative real numbers is called *unimodal* if there is an index $j^{*}$ such that $a_{j- 1} \leq a_{j}$ for $1 \leq j \leq j^{*}$ and $a_{j-1} \geq a_{j}$ for $j^{*} +1 \leq j \leq n$. An elementary argument shows that a log-concave sequence must be unimodal. The unimodality of the restricted Stirling numbers $\left({n \brace k}_{\leq 2} \right)_{k\geq 0}$ was proved by Choi and Smith in [@choijy-2003a]. Moreover, Han and Seo [@hanh-2004a] gave a combinatorial proof of the log-concavity of this sequence. The log-concavity of the associated Stirling numbers of the first kind was studied by Brenti in [@brenti-1993a].
\[conj2\] The sequence of restricted Stirling numbers $\left({n \brace k}_{\leq m} \right)_{k\geq 0}$ is log-concave.
One of the main sources of log-concave sequences comes from the following fact: if $P(x)$ is a polynomial all of whose zeros are real and negative, then its coefficient sequence is log-concave. (See [@wilf-1990a Theorem 4.5.2] for a proof). Therefore the first conjecture implies the second one.
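The polynomial recurrence and Conjecture \[conj2\] can both be spot-checked for small parameters. In the sketch below (ours), the restricted Stirling numbers are generated by conditioning on the block containing the element $n$.

```python
from math import comb

def stirling2_le(n, k, m):
    # {n brace k}_{<= m}: partitions of [n] into k blocks of size <= m;
    # classify by the number j of other elements in the block of n
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return sum(comb(n - 1, j) * stirling2_le(n - 1 - j, k - 1, m)
               for j in range(min(m - 1, n - 1) + 1))

def B_poly(n, m):
    # coefficient list of the restricted Bell polynomial B_{n,<=m}(x)
    return [stirling2_le(n, k, m) for k in range(n + 1)]
```

Comparing coefficients, $B_{n+1,\leq m}(x)$ matches $xB_{n,\leq m}(x)+xB'_{n,\leq m}(x)-\binom{n}{m}xB_{n-m,\leq m}(x)$ for all tested $n\geq m$, and each row $\left({n\brace k}_{\leq m}\right)_k$ is log-concave in the tested range.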
Some Arithmetical Properties {#sec-arith}
============================
Given a prime $p$, the $p$-adic valuation of $x \in \mathbb{N}$, denoted by $\nu_{p}(x)$, is the highest power of $p$ that divides $x$. For a given sequence of positive integers $(a_{n})_{n\geq 0}$ a description of the sequence of valuations $\nu_{p}(a_{n})$ often presents interesting questions. The classical formula of Legendre for factorials $$\nu_{p}(n!) = \sum_{r=1}^{\infty} \left\lfloor \frac{n}{p^{r}} \right\rfloor$$ is one of the earliest such descriptions. This may also be expressed in closed-form as $$\nu_{p}(n!) = \frac{n - s_{p}(n)}{p-1},$$ where $s_{p}(n)$ is the sum of the digits of $n$ in its expansion in base $p$. The reader will find in [@amdeberhan-2008a; @amdeberhan-2008b; @byrnes-2015a; @cohn-1999a; @kamano-2011a; @moll-2010a; @straub-2009a; @sunx-2009a] a selection of results on this topic.
The 2-adic valuation of the Bell numbers has been described in [@amdeberhan-2013f].
\[2adbell\] The 2-adic valuation of the Bell numbers satisfies $\nu_2(B_n)=0$ if $n \equiv 0, 1 \bmod 3$. In the missing case, $n\equiv 2 \bmod 3$, $\nu_2(B_{3n+2})$ is a periodic sequence of period 4. The repeating values are $\{1, 2, 2, 1\}$.
The $2$-adic valuation of the restricted Bell sequence $B_{n, \leq 2}$ was described in [@amdeberhan-2015b].
\[val-bell\] The 2-adic valuation of the restricted Bell numbers $B_{n, \leq 2}$ satisfies $$\nu_{2}(B_{n, \leq 2}) = \left\lfloor \frac{n}{2} \right\rfloor -2 \left\lfloor \frac{n}{4} \right\rfloor +
\left\lfloor \frac{n+1}{4} \right\rfloor =
\begin{cases}
k, & \quad \text{ if } n = 4k; \nonumber \\
k, & \quad \text{ if } n = 4k+1; \nonumber \\
k+1, & \quad \text{ if } n = 4k+2; \nonumber \\
k+2, & \quad \text{ if } n = 4k+3. \nonumber
\end{cases}$$
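As a numerical illustration (ours, not from [@amdeberhan-2015b]), the closed form can be confirmed directly from the involution recurrence $B_{n,\leq 2}=B_{n-1,\leq 2}+(n-1)B_{n-2,\leq 2}$.

```python
def nu2(x):
    # 2-adic valuation of a positive integer
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

def involutions(N):
    # B_{n,<=2}: number of involutions of [n]
    b = [1, 1]
    for n in range(2, N + 1):
        b.append(b[n-1] + (n - 1) * b[n-2])
    return b
```

For $n\leq 40$ the valuation $\nu_2(B_{n,\leq 2})$ agrees with both the floor formula and the four-case description.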
This section discusses the 2-adic valuation of the numbers $B_{n, \geq 2}$ and $A_{n, \geq 2}$. Figure \[figurea\] shows the first 100 values.
![The $2$-adic valuation of $B_{n,\geq 2}$ and $A_{n, \geq 2}$.[]{data-label="figurea"}](fig-bn.eps "fig:") ![The $2$-adic valuation of $B_{n,\geq 2}$ and $A_{n, \geq 2}$.[]{data-label="figurea"}](fig-an.eps "fig:")
The 2-adic valuation of the associated Bell numbers $B_{n, \geq 2}$ is given by $$\nu_2(B_{n, \geq 2})=0 \text{ if }n\equiv 0, \, 2 \bmod 3.$$ For $n \equiv 1 \bmod 3$, the valuation satisfies $\nu_{2}(B_{n, \geq 2}) \geq 1$.
The proof is by induction on $n$. Divide the analysis into three cases according to the residue of $n$ modulo 3. If $n=3k$, then the identity gives $B_{3k-1}=B_{3k-1,\geq 2} + B_{3k, \geq 2}$. Theorem \[2adbell\] shows that $B_{3k-1}$ is even and by the induction hypothesis $B_{3k-1,\geq 2}$ is odd. Thus $B_{3k, \geq 2}$ is odd, so that $\nu_2(B_{3k, \geq 2})=0$. The proof is analogous for the case $3k+2$. The case $n \equiv 1 \bmod 3$ follows from the identity in the form $B_{3k} = B_{3k,\geq 2} + B_{3k+1, \geq 2}$ and the fact that $B_{3k}$ is odd (by Theorem \[2adbell\]) and so is $B_{3k, \geq 2}$ by the previous analysis.
A partial description of the valuations of $B_{n, \geq 2}$ for $n \equiv 1 \bmod 3$ is given in the next conjecture.
The sequence of valuations $\nu_{2}(B_{3k+1, \geq 2})$ satisfies the following pattern: $$\nu_{2}(B_{n, \geq 2}) = \begin{cases}
2, & \text{ if } n \equiv 4 \,\,\,\,\,\,\,\,\,\,\,\, \bmod 12; \\
1, & \text{ if } n \equiv 7, \, 10 \,\, \bmod 12.
\end{cases}$$ The remaining case $n \equiv 1 \bmod 12$, considered modulo $24$, obeys the rule $$\nu_{2}(B_{24n+1, \geq 2}) = 5 + \nu_{2}(n), \quad \text{ for } n \geq 1,$$ with the case $n \equiv 13 \bmod 24$ remaining to be determined. Continuing this process yields the conjecture $$\nu_{2}(B_{48n+37, \geq 2}) = 5 \text{ and } \nu_{2}(B_{96n+61, \geq 2}) = 6.$$ The details of this analysis will appear elsewhere.
A closed-form for the valuation $\nu_{2}(A_{n, \geq 2})$ is simpler to obtain.
The 2-adic valuation of the associated factorial numbers $A_{n, \geq 2}$ is given by $$\begin{aligned}
\nu_2(A_{n, \geq 2})=
\begin{cases}
0, & \text{if} \ n=2k \,\,\,\,\,\,\,\,\,\, \text{ and } k \geq 0;\\
\nu_2(k)+1, & \text{if} \ n=2k+1 \text{ and } k \geq 1.
\end{cases}\end{aligned}$$
If $n$ is even, then (\[bonab\]) shows that $A_{n, \geq 2}$ is odd, so that $\nu_2(A_{n, \geq 2})=0$. If $n=2k+1$ is odd, then (\[bonab2\]) gives $\nu_2(A_{2k+1, \geq 2})=\nu_2(2k)=\nu_2(k)+1$.
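This closed form is again easy to confirm by machine; the short sketch below (ours) checks it against the derangement recurrence (\[bonab2\]).

```python
def nu2(x):
    # 2-adic valuation of a positive integer
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

def derangements(N):
    # A_{n,>=2} via the recurrence (bonab2)
    d = [1, 0]
    for n in range(2, N + 1):
        d.append((n - 1) * (d[n-1] + d[n-2]))
    return d
```

In the tested range, $A_{2k,\geq 2}$ is odd and $\nu_2(A_{2k+1,\geq 2})=\nu_2(k)+1$ for $k\geq 1$.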
Some additional patterns
------------------------
In this subsection we show some additional examples of the $p$-adic valuation of the restricted and associated Bell and factorial sequences.\
Theorems \[teoas3\] and \[genAigner\] are now used to produce explicit formulas for the 2-adic valuation of the restricted and associated factorial numbers for $m=3$.
The 2-adic valuation of the restricted factorial numbers $A_{n, \leq 3}$, for $n \geq 1$, is given by $$\begin{aligned}
\nu_2(A_{n, \leq 3})=
\begin{cases}
k, & \text{if} \ n=4k;\\
k, & \text{if} \ n=4k+1;\\
k+1, & \text{if} \ n=4k+2;\\
k+1, & \text{if} \ n=4k+3.
\end{cases}\end{aligned}$$
The proof is by induction on $n$. It is divided into four cases according to the residue of $n$ modulo 4. The symbols $O_i$ denote odd numbers. If $n=4k$ then Theorem \[teoas3\] and the induction hypothesis give $$\begin{aligned}
A_{4k, \leq 3}&=A_{4k-1, \leq 3}+(4k-1)A_{4k-2, \leq 3}+(4k-1)(4k-2)A_{4k-3, \leq 3}\\
&=2^kO_1+(4k-1)2^kO_2+(4k-1)(4k-2)2^{k-1}O_3\\
&=2^k(O_1+(4k-1)O_2+(4k-1)(2k-1)O_3)\\
&=2^kO_4.\end{aligned}$$ Therefore $\nu_2(A_{4k, \leq 3})=k$. The remaining cases are analyzed in a similar manner.
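The full four-case statement can be verified numerically from the recurrence of Theorem \[teoas3\]; the sketch below (ours) does so.

```python
def nu2(x):
    # 2-adic valuation of a positive integer
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

def A_le3_seq(N):
    # A_{n,<=3} via A_n = A_{n-1} + (n-1)A_{n-2} + (n-1)(n-2)A_{n-3}
    a = [1, 1, 2]
    for n in range(3, N + 1):
        a.append(a[n-1] + (n-1)*a[n-2] + (n-1)*(n-2)*a[n-3])
    return a
```

Writing $n=4k+r$, the valuation $\nu_2(A_{n,\leq 3})$ equals $k$ for $r\in\{0,1\}$ and $k+1$ for $r\in\{2,3\}$ throughout the tested range.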
The 2-adic valuation of the associated factorial numbers $A_{n, \geq 3}$, for $n \geq 1$, is given by $$\begin{aligned}
\nu_2(A_{n, \geq 3})=
\begin{cases}
k, & \text{if} \ n=4k;\\
\nu_2(k)+k+2, & \text{if} \ n=4k+1;\\
\nu_2(k)+k+4, & \text{if} \ n=4k+2;\\
k+1, & \text{if} \ n=4k+3.
\end{cases}\end{aligned}$$
The proof is as in the previous theorem. If $n=4k$ then Theorem \[genAigner\] and the induction hypothesis give $$\begin{aligned}
A_{4k, \geq 3}&=(4k-1)A_{4k-1, \geq 3}+(4k-1)(4k-2)A_{4k-3, \geq 3}\\
&=2^kO_1+(4k-1)(4k-2)2^{\nu_2(k-1)+k+1}O_2\\
&=2^k(O_1+(4k-1)(2k-1)2^{\nu_2(k-1)+2}O_2)\\
&=2^kO_3.\end{aligned}$$ Therefore $\nu_2(A_{4k, \geq 3})=k$.
If $n=4k+1$ then Theorem \[genAigner\] and the induction hypothesis now give $$\begin{aligned}
A_{4k+1, \geq 3}&=(4k)A_{4k, \geq 3}+(4k)(4k-1)A_{4k-2, \geq 3}\\
&=2^{k+2}kO_1+(4k)(4k-1)2^{\nu_2(k-1)+k+3}O_2\\
&=2^{k+2}kO_1+k2^{\nu_2(k-1)+k+5}O_3\\
&=2^{k+2}k(O_1+2^{\nu_2(k-1)+3}O_3)\\
&=2^{k+2}kO_4.\end{aligned}$$ Therefore $\nu_2(A_{4k+1, \geq 3})=\nu_2(k)+k+2$. The remaining cases are analyzed in a similar manner.
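The same check works for the associated sequence, using the recurrence of Theorem \[genAigner\] with $k=3$; the code below is our illustration.

```python
def nu2(x):
    # 2-adic valuation of a positive integer
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

def A_ge3_seq(N):
    # A_{n,>=3} via A_n = (n-1)A_{n-1} + (n-1)(n-2)A_{n-3}  (Theorem [genAigner])
    a = [1, 0, 0]
    for n in range(3, N + 1):
        a.append((n - 1) * a[n-1] + (n - 1) * (n - 2) * a[n-3])
    return a
```

Writing $n=4k+r$ with $n\geq 3$, the computed valuations match the four cases of the theorem in the tested range.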
Divisibility properties of the sequences $B_{n, \leq 2}$ and $B_{n, \leq 3}$ by the prime $p=3$ turn out to be much simpler: $3$ does not divide any element of these sequences. The proofs are based on the recurrences for $B_{n, \leq 2}$ and $B_{n, \leq 3}$.
The sequence of residues $B_{n, \leq 2}$ modulo $3$ is a periodic sequence of period $3$, with fundamental period $\{ 1, \, 1, \, 2\}$.
Assume $n \equiv 0 \bmod 3$ and write $n = 3k$. Then the recurrence for $B_{n, \leq 2}$ gives $$\begin{aligned}
B_{3k, \leq 2} & = & B_{3k-1, \leq 2} + (3k-1)B_{3k-2, \leq 2} \\
& \equiv & 2 - 1 = 1 \bmod 3,
\end{aligned}$$ so that $B_{n, \leq 2} \equiv 1 \bmod 3$ in this case. The remaining two cases for $n$ modulo $3$ are treated in the same way.
The sequence of residues $B_{n, \leq 3}$ modulo $3$ is a periodic sequence of period $6$, with fundamental period $\{ 1, \, 1, \, 2, \, 2, \, 2, \, 1 \}$.
Assume $n \equiv 0 \bmod 6$ and write $n = 6k$. Then the recurrence for $B_{n, \leq 3}$ gives $$\begin{aligned}
B_{6k, \leq 3} & = & B_{6k-1, \leq 3} + (6k-1) B_{6k-2, \leq 3} + (3k-1)(6k-1)B_{6k-3, \leq 3} \\
& \equiv & 1 - 2 + 2 = 1 \bmod 3,
\end{aligned}$$ showing that $B_{n, \leq 3} \equiv 1 \bmod 3$. The remaining five cases for $n$ modulo $6$ are treated in the same way.
The restricted Bell numbers $B_{n,\leq 2}$ and $B_{n,\leq 3}$ are not divisible by $3$.
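The two periodicity statements (and hence the corollary) can be confirmed by machine over many periods; note that by Lucas' theorem the binomial coefficients in the recurrence are themselves periodic modulo $3$, so checking one full period suffices. The sketch below is ours.

```python
from math import comb

def B_le(N, m):
    # B_{n,<=m}: condition on the number j of other elements
    # in the block containing n
    b = [1]
    for n in range(1, N + 1):
        b.append(sum(comb(n - 1, j) * b[n - 1 - j]
                     for j in range(min(m - 1, n - 1) + 1)))
    return b
```

The residues of $B_{n,\leq 2}$ modulo $3$ cycle through $\{1,1,2\}$ and those of $B_{n,\leq 3}$ through $\{1,1,2,2,2,1\}$; in particular no term is divisible by $3$.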
Using this type of analysis it is possible to prove the following results:
- The $5$-adic valuation of the sequence $B_{n, \leq 3}$ is given by $$\nu_{5}(B_{n, \leq 3}) = \begin{cases}
1, & \quad \text{ if } n \equiv 3 \bmod 5; \\
0, & \quad \text{ if } n \not \equiv 3 \bmod 5.
\end{cases}$$
- The $7$-adic valuation of the sequence $B_{n, \leq 3}$ satisfies $\nu_{7}(B_{n, \leq 3}) = 0$ if $n \not \equiv 4 \bmod 7$.
- The sequence of residues $B_{n, \leq 5}$ modulo $7$ is a periodic sequence of period $7$, with fundamental period $\{1, \ 1,\ 2,\ 5,\ 1,\ 3,\ 6 \}$.
- The 3-adic valuation of the associated factorial numbers $A_{n, \geq 3}$ satisfies $\nu_3(A_{n, \geq 3})=0$ if $n \equiv 0 \bmod 3$. For $n=3k+1$, the valuation is given by $\nu_3(A_{n, \geq 3})=\nu_3(A_{n+1, \geq 3})=\nu_3(n-1)$. This covers all cases.
- The sequence of residues $A_{n, \leq 5}$ modulo $7$ is a periodic sequence of period $7$, with fundamental period $\{1, \ 1, \ 2, \ 6, \ 3, \ 1, \ 5\}$.
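Several items of this list are easy to reproduce experimentally. The sketch below (ours) spot-checks the $5$-adic valuation of $B_{n,\leq 3}$ and the mod-$7$ periods of $B_{n,\leq 5}$ and $A_{n,\leq 5}$.

```python
from math import comb

def nu(p, x):
    # p-adic valuation of a positive integer
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def B_le(N, m):
    # restricted Bell numbers: condition on the block containing n
    b = [1]
    for n in range(1, N + 1):
        b.append(sum(comb(n - 1, j) * b[n - 1 - j]
                     for j in range(min(m - 1, n - 1) + 1)))
    return b

def A_le(N, m):
    # restricted factorial numbers: condition on the length j
    # of the cycle containing n
    a = [1]
    for n in range(1, N + 1):
        total, falling = 0, 1
        for j in range(1, min(m, n) + 1):
            total += falling * a[n - j]
            falling *= (n - j)          # builds (n-1)(n-2)...(n-j)
        a.append(total)
    return a
```

In the tested ranges, $\nu_5(B_{n,\leq 3})$ is $1$ exactly when $n\equiv 3 \bmod 5$, and the claimed fundamental periods modulo $7$ are reproduced.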
Many other results of this type can be discovered experimentally. A discussion of a general theory is in preparation.
[**Acknowledgements**]{}. The first author acknowledges the partial support of NSF-DMS 1112656. The last author is a graduate student at Tulane University. The first author is grateful for an invitation from the Department of Mathematics of Universidad Sergio Arboleda, Bogotá, Colombia, where the presented work was initiated.
[10]{}
M. Aigner. A characterization of the [B]{}ell numbers. , 205:207–210, 1999.
T. Amdeberhan, V. [D]{}e [A]{}ngelis, and V. Moll. Complementary [B]{}ell numbers: [A]{}rithmetical properties and [W]{}ilf’s conjecture. In I. S. Kotsireas and E. Zima, editors, [*Advances in [C]{}ombinatorics. In memory of [H]{}erbert [S]{}. [W]{}ilf*]{}, pages 23–56. Springer-Verlag, 2013.
T. Amdeberhan, D. Manna, and V. Moll. The $2$-adic valuation of a sequence arising from a rational integral. , 115:1474–1486, 2008.
T. Amdeberhan, D. Manna, and V. Moll. The $2$-adic valuation of [S]{}tirling numbers. , 17:69–82, 2008.
T. Amdeberhan and V. Moll. Involutions and their progenies. , 6:483–508, 2015.
N. Asai, I. Kubo, and H. H. Kuo. Bell numbers, log-concavity, and log-convexity. , 63:79–87, 2000.
P. Barry and A. [H]{}ennessy. The [E]{}uler-[S]{}eidel matrix, [H]{}ankel matrices and moment sequences. , 13:10.8.2, 2010.
C. M. Bender and E. R. Canfield. Log-concavity and related properties of the cycle index polynomials. , 74:57–70, 1996.
M. Bona. [*Combinatorics of Permutations*]{}. CRC Press, 2nd edition, 2012.
M. Bona and I. Mező. Real zeros and partitions without singleton blocks. , 51:500–510, 2016.
F. Brenti. Unimodal, log-concave and [P]{}olya frequency sequences in combinatorics. , 81, 1989.
F. Brenti. Permutation enumeration, symmetric functions and unimodality. , 157:1–28, 1993.
A. Byrnes, J. Fink, G. Lavigne, I. Nogues, S. Rajasekaran, A. Yuan, L. Almodovar, X. Guan, A. Kesarwani, L. Medina, E. Rowland, and V. Moll. A closed-form solution might be given by a tree. [T]{}he valuation of quadratic polynomials. , 2015.
J. Y. Choi and J. D. H. Smith. On the unimodality and combinatorics of [B]{}essel numbers. , 264:45–53, 2003.
J. Y. Choi and J. D. H. Smith. On combinatorics of multi-restricted numbers. , 75:45–53, 2005.
J. Y. Choi and J. D. H. Smith. Reciprocity for multi-restricted numbers. , 113:1050–1060, 2006.
J. Y. Choi and J. D. H. Smith. Recurrences for tri-restricted numbers. , 58:3–11, 2006.
H. Cohn. $2$-adic behavior of numbers of domino tilings. , 6:1–14, 1999.
L. Comtet. [*Advanced Combinatorics*]{}. D. [R]{}eidel [P]{}ublishing [C]{}o. ([D]{}ordrecht, [H]{}olland), 1974.
H. Han and S. Seo. Combinatorial proofs of inverse relations and log-concavity for [B]{}essel numbers. , 29:1544–1554, 2008.
K. Kamano. On $3$-adic valuations of generalized harmonic numbers. , 11:A69:1–12, 2011.
T. Komatsu, K. Liptai, and I. Mező. Incomplete poly-[B]{}ernoulli numbers associated with incomplete [S]{}tirling numbers. , 88:357–368, 2016.
T. Komatsu, I. Mező, and L. Szalay. Incomplete [C]{}auchy numbers. , To appear, 2016.
J. W. Layman. The [H]{}ankel transform and some of its properties. , 4: 01.1.5, 2001.
T. Mansour. [*Combinatorics of Set Partitions*]{}. CRC Press, 2012.
T. Mansour and M. Schork. [*Commutation Relations, Normal Ordering, and Stirling Numbers*]{}. CRC Press, 2015.
P. R. McNamara and B. Sagan. Infinite log-concavity: developments and conjectures. , 94:79–96, 2010.
L. Medina and A. Straub. On multiple and infinite log-concavity. , 20:125–138, 2016.
I. Mező. Periodicity of the last digits of some combinatorial sequences. , 17, 14.1.1:1–18, 2014.
F. L. Miksa, L. [M]{}oser, and M. [W]{}yman. Restricted partitions of finite sets. , 1:87–96, 1958.
V. Moll and X. Sun. A binary tree representation for the $2$-adic valuation of a sequence arising from a rational integral. , 10:211–222, 2010.
I. Nemes, M. Petkovsek, H. Wilf, and D. Zeilberger. How to do [MONTHLY]{} problems with your computer. , 104:505–519, 1997.
M. Petkovsek, H. Wilf, and D. Zeilberger. [*A=B*]{}. A. K. Peters, 1st. edition, 1996.
C. Radoux. Déterminant de hankel construit sur des polynomes liés aux nombres de dérangements. , 12:327–329, 1991.
B. Sagan. Inductive and injective proofs of log-concavity results. , 68:281–292, 1988.
M. Spivey. A generalized recurrence for [B]{}ell numbers. , 11:article 08.2.5, 2008.
M. Spivey and L. Steil. The $k$-binomial transforms and the [H]{}ankel transform. , 9:article 06.0.1, 2006.
A. Straub, V. Moll, and T. Amdeberhan. The $p$-adic valuation of $k$-central binomial coefficients. , 149:31–42, 2009.
X. Sun and V. Moll. The $p$-adic valuation of sequences counting alternating sign matrices. , 12:09.3.8, 2009.
H. S. Wilf. [*generatingfunctionology*]{}. Academic Press, 1st edition, 1990.
---
address: |
Institut für Theoretische Physik, Technische Universität Wien\
Wiedner Hauptstraße 8–10, A-1040 Wien, Austria
title: The role of the field redefinition in noncommutative Maxwell theory
---
[*Abstract.*]{} We discuss $\theta$-deformed Maxwell theory at first order in $\theta$ with the help of the Seiberg-Witten (SW) map. With an appropriate field redefinition consistent with the SW-map we analyse the one-loop corrections of the vacuum polarization of photons. We show that the radiative corrections obtained in a previous work may be described by the Ward-identity of the BRST-shift symmetry corresponding to a field redefinition.
Introduction
============
Discussing noncommutative quantum field theories, especially noncommutative gauge field models, one has in principle many possibilities to formulate such models. The traditional approach widely used in many papers, [@Moyal:sk], [@Hayakawa:1999zf], [@Hayakawa:1999yt], [@Matusis:2000jf], is based on the fact that noncommutative gauge theory is realized as a theory on the set of ordinary functions by modifying the product of two functions in terms of the $\star$-product [@Moyal:sk]. The simplest candidate would be a noncommutative counterpart of QED. Due to the noncommutativity also the noncommutative $U(1)$ gauge field model (with and without matter) has a non-Abelian structure implying the usual BRST-quantization procedure involving the appearance of $\phi\pi$-ghost fields.
Unfortunately, the perturbative realization of such gauge field models develops unexpected new features: the so-called UV/IR mixing emerging from nonplanar graphs. Explicitly, this can be seen by computing one-loop corrections of the vacuum polarization of the photon [@Hayakawa:1999zf], [@Hayakawa:1999yt]. The evaluation of the finite part of the gauge field vacuum polarization shows the existence of singularities in the infrared limit. Those IR-singularities forbid the computation of Feynman graphs at higher loop orders. Presently, one believes that such IR-singularities can perhaps be avoided by an appropriate field redefinition [@x].
A second formulation of noncommutative gauge field models simultaneously incorporates the deformed product and the Seiberg-Witten (SW) map. The SW-map ensures the gauge equivalence between an ordinary gauge field model and its noncommutative counterpart [@Seiberg:1999vs], [@Madore:2000en], [@Jurco:2000ja], [@Jurco:2001rq]. In this sense it is possible to expand the noncommutative gauge field as a series in the ordinary gauge field and the deformation parameter of the noncommutative space-time geometry $\theta^{\mu\nu}$.
Additionally, one has the possibility to formulate gauge field models on noncommutative spaces via covariant coordinates [@Madore:2000en], [@Jurco:2000ja], [@Jurco:2001rq]. Also in these approaches the use of the SW-map emerges quite naturally.
The present paper is devoted to study $U(1)$ noncommutative Yang-Mills ($U(1)$-NCYM) theory in the context of the SW-map for the simplest case allowing only a linear dependence on the deformation parameter $\theta^{\mu\nu}$. The corresponding $U(1)$-deformed gauge invariant action has been derived in [@Jurco:2001rq], [@Bichl:2001nf] and recently been used very often to study physical consequences of such a deformed Maxwell theory [@Kruglov:2001dm], [@Jackiw:2001dj], [@Guralnik:2001ax], [@Cai:2001az].
It is remarkable to note that in this simple $\theta$-deformed Maxwell theory (in its perturbative realization) at the one-loop level no IR-singularities in the above sense exist [@Bichl:2001nf], [@Bichl:2001gu]. As is further explained in [@Bichl:2001cq] a consistent field redefinition (in agreement with the SW-map) allows one to add to the gauge field of the $\theta$-deformed Maxwell theory further $\theta$-dependent and gauge invariant terms. Such additional terms are very useful for studying one-loop corrections of the vacuum polarization of the photon [@Bichl:2001cq]. Since the interaction contains terms linear in $\theta$ and trilinear in the Abelian field strength one obtains terms proportional to the square of $\theta$ in the corresponding vacuum polarization at the one-loop level. However, by redefining the relevant unphysical free parameters of the above mentioned field redefinition one is able to carry out the renormalization procedure in the usual sense at least for this simple case. More recently, it has been argued that the linear $\theta$-deformed QED (with the inclusion of fermions) leads to a nonrenormalizable theory [@Wulkenhaar:2001sq] at the one-loop level.
The aim of this paper is to focus on the connection between the above mentioned field redefinition and the so-called BRST-shift symmetry described in [@Bastianelli:1990ey], [@Alfaro:1992cs]. Especially, we try to explain that the perturbative corrections of the photon self-energy are compatible with the Ward-identity (WI) of the shift symmetry.
The paper is organized as follows: Sec. \[2\] is concerned with the presentation of the simplest $\theta$-deformed Maxwell theory (in terms of the usual Abelian gauge symmetry of the photon field). This is realized by starting with $U(1)$-NCYM and using the SW-map to lowest order in $\theta^{\mu\nu}$. In the next step one performs a field redefinition in consistency with the SW-map in order to derive the physical consequences leading to the WI for the BRST-shift symmetry.
In order to understand this BRST-shift symmetry we assume the existence of an Abelian gauge invariant action with an appropriate gauge fixing implemented by a multiplier field $B$: $$\label{def-1}
\Gamma^{(0)}[A,B]=\Gamma_\mathrm{inv}[A]+\Gamma_\mathrm{gf}[B,A].$$ The description of the shift symmetry is based on the existence of the following redefinition $$\label{def-2}
A_{\mu }=\tilde{A}_{\mu }+ \mathbbm{A} ^{(2)}_{\mu }(\tilde{A}),$$ where the upper index indicates quadratic dependence on $\theta^{\mu\nu}$ which will be needed in the future.
The derivation of the BRST-shift symmetry needs a new type of gauge fixing besides $\Gamma_\mathrm{gf}[B,A]$. Here we follow [@Bastianelli:1990ey], [@Alfaro:1992cs] adapted for our special case[^1] by defining: $$\begin{aligned}
\label{def-3}
\Sigma &= \Gamma ^{0}[A_\mu,B]+\Gamma_\mathrm{shift}+\Gamma_\mathrm{gf-shift}
\\ \nonumber
&= \Gamma ^{0}[A_\mu,B]+
\int d^{4} x \int d^{4} y \, \bar{c}^{\mu }(x)\star\frac{\delta \varphi_{\mu
}(x)}{\delta \tilde{A}_{\sigma }(y)}\star c_{\sigma }(y)
\\ \nonumber
&\phantom{=}+ \int d^{4} x \, \Pi^\mu \star \Big(\varphi_\mu-A_\mu\Big),\end{aligned}$$ where $$\label{def-4}
\varphi_\mu=\tilde{A}_\mu+\phi_\mu^{(2)}(\tilde{A}).$$ The vectorial antighost $\bar{c}^{\mu }$ and ghost field $c^{\mu }$ carry a Faddeev-Popov ghost charge of $-1$ and $+1$, respectively.
Using (\[def-2\]) and (\[def-4\]) one gets $$\begin{gathered}
\label{def-5}
\Sigma = \Gamma ^{0}[A_\mu,B]+ \int d^{4} x \, \bar{c}^{\mu }(x)c_{\mu}(x)
+\int d^{4} x \int d^{4} y \, \bar{c}^{\mu }(x)\frac{\delta \phi^{(2)} _{\mu
}(x)}{\delta \tilde{A}_{\sigma }(y)}c_{\sigma }(y)
\\
+ \int d^{4} x \, \Pi^\mu \Big(\phi^{(2)}_\mu-\mathbbm{A}^{(2)}_\mu(\tilde{A})\Big).\end{gathered}$$ Since $\phi_\mu^{(2)}(\tilde{A})$ and $\mathbbm{A}_\mu^{(2)}(\tilde{A})$ are of order $2$ in $\theta$ no $\star$-products are needed. The BRST-shift symmetry is defined as $$\label{def-6}
s\bar{c}^{\mu } = -\Pi_\mu,\quad
s\tilde{A}_\mu = c_\mu = -s\mathbbm{A}^{(2)}_\mu, \quad sA_\mu = s\Pi_\mu = sc_\mu = 0.$$ These transformations are manifestly nilpotent. However, eliminating the multiplier field $\Pi_\mu$ via an algebraic equation of motion yields $$\label{def-7}
\frac{\delta \Sigma}{\delta \Pi_\mu} =
\phi^{(2)}_\mu - \mathbbm{A}^{(2)}_\mu(\tilde{A}) = 0,$$ and, additionally, from (\[def-3\]) follows $$\label{def-8}
\frac{\delta \Sigma}{\delta A_\mu} =
\frac{\delta \Gamma^{(0)}}{\delta A_\mu} - \Pi_\mu = 0.$$ This implies for the transformations (\[def-6\]) $$\label{def-9}
s\bar{c}^{\mu } = -\;\frac{\delta \Gamma^{(0)}}{\delta A_\mu},\quad
s\tilde{A}_\mu = c_\mu,\quad sc_\mu = 0.$$ These equations show explicitly that off-shell nilpotency is lost and that the shift symmetry is no longer linear.
One has to introduce an unquantized external source to describe the nonlinear transformation of $s\bar{c}^{\mu }$. This will be used for the construction of the corresponding WI for the shift symmetry in sec. \[2\].
Deformed Maxwell theory—Consequences of the BRST-shift symmetry at the classical level {#2}
======================================================================================
In order to derive the corresponding $\theta$-deformed Maxwell theory one starts with the $U(1)$-NCYM model [@Madore:2000en], [@Bichl:2001nf]: $$\label{def-2-1}
\Gamma _\mathrm{inv}^{(0)}=-\frac{1}{4}\int d^{4} x \, \hat{F}_{\mu \nu }
\star \hat{F}^{\mu \nu},$$ where the field strength in terms of the noncommutative $U(1)$ gauge field $\hat{A}_{\mu}$ is given by $$\label{def-2-2}
\hat{F}_{\mu \nu }=\partial_\mu\hat{A}_{\nu }-\partial_\nu\hat{A}_{\mu }
-i\Big[\hat{A}_{\mu },\hat{A}_{\nu }\Big]_M.$$ The Moyal bracket is defined by $$\label{def-2-3}
\Big[\hat{A}_{\mu },\hat{A}_{\nu }\Big]_M=
\hat{A}_{\mu }\star \hat{A}_{\nu }-\hat{A}_{\nu }\star \hat{A}_{\mu },$$ using the $\star$-product $$\label{def-2-4}
A(x)\star B(x)
=e^{\frac{i}{2}\theta ^{\mu \nu }\partial _{\mu }^{\alpha }\partial _{\nu }^{\beta }}
A(x+\alpha )B(x+\beta )\Big |_{\alpha =\beta =0},$$ where $\theta ^{\mu \nu }$ is the deformation parameter of the noncommutative geometry.
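For intuition, the first two orders of the expansion of (\[def-2-4\]) can be checked mechanically on polynomials: in two dimensions with $\theta^{xy}=\theta$ one has $f\star g = fg + \tfrac{i}{2}\theta\,(\partial_xf\,\partial_yg-\partial_yf\,\partial_xg) + O(\theta^2)$, so the Moyal bracket reduces at first order to $i\theta$ times the Poisson bracket, while the $\theta^2$ contributions are symmetric in $f$ and $g$ and cancel in the commutator. The sketch below (ours, not from the paper; coefficients are dyadic, so float arithmetic is exact) tracks the orders of $\theta$ separately.

```python
# 2-D polynomials encoded as {(i, j): coeff} meaning sum of coeff * x^i y^j
def dx(f): return {(i - 1, j): c * i for (i, j), c in f.items() if i}
def dy(f): return {(i, j - 1): c * j for (i, j), c in f.items() if j}

def mul(f, g):
    h = {}
    for (i, j), a in f.items():
        for (k, l), b in g.items():
            h[i + k, j + l] = h.get((i + k, j + l), 0) + a * b
    return h

def add(f, g, sign=1):
    h = dict(f)
    for mono, c in g.items():
        h[mono] = h.get(mono, 0) + sign * c
    return {mono: c for mono, c in h.items() if c != 0}

def sub(f, g):
    return add(f, g, -1)

def star(f, g):
    # orders theta^0, theta^1, theta^2 of f * g for theta^{xy} = theta:
    #   O(1):       f g
    #   O(theta):   (i/2) (f_x g_y - f_y g_x)
    #   O(theta^2): (1/2)(i/2)^2 (f_xx g_yy - 2 f_xy g_xy + f_yy g_xx)
    o0 = mul(f, g)
    o1 = {m: 0.5j * c for m, c in
          sub(mul(dx(f), dy(g)), mul(dy(f), dx(g))).items()}
    p2 = add(add(mul(dx(dx(f)), dy(dy(g))), mul(dy(dy(f)), dx(dx(g)))),
             mul(dx(dy(f)), dx(dy(g))), -2)
    o2 = {m: -0.125 * c for m, c in p2.items()}   # (1/2)(i/2)^2 = -1/8
    return o0, o1, o2
```

For example, with $f=x^2y$ and $g=xy^3$ the commutator has no $\theta^0$ or $\theta^2$ part, and its $\theta^1$ part is $i\{f,g\}_{\mathrm{PB}} = 5i\,x^2y^3$.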
The action (\[def-2-1\]) is invariant under the infinitesimal noncommutative gauge transformation $$\label{def-2-5}
\hat{\delta}_{\hat{\lambda}} \hat{A}_\mu =
\partial_\mu \hat{\lambda} - i \hat{A}_\mu \star \hat{\lambda}
+ i \hat{\lambda} \star \hat{A}_\mu
\equiv \hat{D}_\mu \hat{\lambda}.$$ It was shown by Seiberg and Witten [@Seiberg:1999vs] that an expansion in $\theta ^{\mu \nu }$ leads to a map between the noncommutative gauge field $\hat{A}_\mu$ and the commutative gauge field $A_\mu$ as well as their respective gauge parameters $\hat{\lambda}$ and $\lambda$, known as the SW-map. To lowest order in $\theta$ one has in the Abelian case [@Seiberg:1999vs], [@Madore:2000en], [@Bichl:2001nf] $$\begin{aligned}
\label{def-2-6}
\hat{A}_{\mu }(A_{\mu }) &= A_{\mu }-\frac{1}{2}\theta ^{\rho \sigma }A_{\rho}
\Big (\partial _{\sigma}A_{\mu }+F_{\sigma \mu }\Big )
+ O(\theta^2),
\\ \nonumber
\hat{\lambda} (A_{\mu },\lambda)
&= \lambda -\frac{1}{2}\theta ^{\rho \sigma}A_{\rho}\partial _{\sigma}\lambda
+ O(\theta^2).\end{aligned}$$ In (\[def-2-6\]) $F_{\sigma \mu }$ is the ordinary Abelian field strength given by $$\label{def-2-6a}
F_{\sigma \mu } = \partial _{\sigma}A_{\mu } - \partial _{\mu}A_{\sigma}.$$ Using (\[def-2-1\]), (\[def-2-2\]), (\[def-2-3\]), (\[def-2-4\]) and (\[def-2-6\]) one gets to lowest order in $\theta ^{\mu \nu }$ [@Jurco:2001rq], [@Bichl:2001nf] $$\label{def-2-7}
\Gamma _\mathrm{inv}=\int d^4 x \, \Big (-\frac{1}{4}F_{\mu \nu }F^{\mu \nu
}-\frac{1}{2}\theta ^{\rho \sigma }\Big (F_{\mu \rho }F_{\nu \sigma }F^{\mu \nu
}-\frac{1}{4}F_{\rho \sigma }F_{\mu \nu }F^{\mu \nu }\Big )\Big )
+ O(\theta^2),$$ which is invariant under the usual Abelian gauge transformation $$\label{def-2-8}
\delta A_{\mu } = \partial _{\mu} \lambda.$$ The action (\[def-2-7\]) has in its full form, involving all orders of $\theta^{\mu \nu }$, infinitely many interactions of infinitely high order in the field. Additionally, since $\theta^{\mu \nu }$ has dimension $-2$, the model is power-counting nonrenormalizable in the traditional sense.
In order to quantize the model one introduces a Landau gauge fixing $$\label{def-2-9}
\Gamma _\mathrm{gf}=\int d^4 x \, B\partial ^{\mu }A_{\mu },$$ where $B$ is the multiplier field implementing the gauge $\partial ^{\mu }A_{\mu
}=0$.
Then one establishes the shift symmetry according to [@Bastianelli:1990ey], [@Alfaro:1992cs]. As is explained in [@Bichl:2001cq] the relevant field redefinition, compatible with the SW map, takes the following form: $$\label{def-2-10}
A_{\mu }=\tilde{A}_{\mu }+ \mathbbm{A} ^{(2)}_{\mu }(\tilde{A}),$$ where the upper index again indicates that $ \mathbbm{A} ^{(2)}_{\mu }(\tilde{A}) $ depends quadratically on $ \theta^{\mu\nu} $. Additionally, $ \mathbbm{A} ^{(2)}_{\mu }(\tilde{A}) $ is gauge invariant with respect to (\[def-2-8\]) $$\label{def-2-11}
\delta_\lambda \mathbbm{A} ^{(2)}_{\mu }(\tilde{A}) = 0.$$ Terms linear in $ \theta^{\mu\nu} $ are excluded due to the topological nature of the corresponding action.[^2]
On the other hand the formula (\[def-2-10\]) allows the introduction of terms with a quadratic dependence already in the classical action. Such terms are needed for the one-loop renormalization procedure of the vacuum polarization of photons.
In the spirit of [@Bastianelli:1990ey], [@Alfaro:1992cs] one defines now for the deformed Maxwell theory, eq. (\[def-2-7\]), the corresponding shift-action in the following way, see formula (\[def-5\]) $$\begin{aligned}
\label{def-2-12}
\Gamma _\mathrm{shift}&=
\int d^{4}x\int d^4 y \, \bar{c}^{\mu }(x)\star\frac{\delta A_{\mu }(x)}{\delta
\tilde{A}_{\sigma }(y)}\star c_{\sigma }(y)
\\ \nonumber
&=\int d^4 x \, \bar{c}^{\mu }(x)c_{\mu}(x)
+\int d^{4}x\int d^4 y \, \bar{c}^{\mu }(x)\frac{\delta \mathbbm{A}^{(2)} _{\mu
}(x)}{\delta \tilde{A}_{\sigma }(y)}c_{\sigma }(y), & & \end{aligned}$$ where $\bar{c}^{\mu }$ and $c^{\mu }$ are the vectorial shift ghost and antighost fields, respectively. Due to the fact that $\mathbbm{A}^{(2)} _{\mu}$ is already of second order in $\theta$ one can neglect the stars in the second term of (\[def-2-12\]).[^3]
Thus, the total action of the model under consideration ready for quantization is given by $$\begin{gathered}
\label{def-2-12a}
\Gamma ^{(0)}=\Gamma ^{(0)} _\mathrm{inv}+\Gamma _\mathrm{gf}+\Gamma _\mathrm{shift}
\\
=\int d^4 x \, \Big (-\frac{1}{4}F_{\mu \nu }F^{\mu \nu }-\frac{1}{2}\theta ^{\rho
\sigma }\Big (F_{\mu \rho }F_{\nu \sigma }F^{\mu \nu }-\frac{1}{4}F_{\rho \sigma
}F_{\mu \nu }F^{\mu \nu }\Big )\Big ) \\
+\int d^4 x \, B\partial ^{\mu }A_{\mu }
+\int d^4 x \, \bar{c}^{\mu }(x)c_{\mu }(x)+\int d^{4}x\int d^4 y \, \bar{c}^{\mu
}(x)\frac{\delta \mathbbm{A}^{(2)} _{\mu }(x)}{\delta \tilde{A}_{\sigma
}(y)}c_{\sigma }(y). \end{gathered}$$ More explicitly, one has $$\begin{gathered}
\label{def-2-13}
\Gamma ^{(0)}=\int d^4 x \, \Big (-\frac{1}{4}\Big (\tilde{F}_{\mu \nu
}\tilde{F}^{\mu \nu }
-4\partial ^{\mu }\tilde{F}_{\mu \nu } \mathbbm{A} ^{(2)\nu }\Big )
\\ -\frac{1}{2}\theta ^{\rho \sigma }\Big (\tilde{F}_{\mu \rho }\tilde{F}_{\nu
\sigma }\tilde{F}^{\mu \nu }-\frac{1}{4}\tilde{F}_{\rho \sigma }\tilde{F}_{\mu
\nu }\tilde{F}^{\mu \nu }\Big )\Big )
+\int d^4 x \, \Big (B\partial ^{\mu }\tilde{A}_{\mu }+B\partial ^{\mu }
\mathbbm{A}^{(2)}_{\mu }\Big )
\\ +\int d^4 x \, \bar{c}^{\mu }(x)c_{\mu }(x)+\int d^{4}x\int d^4 y \, \bar{c}^{\mu
}(x)\frac{\delta \mathbbm{A}^{(2)} _{\mu }(x)}{\delta \tilde{A}_{\sigma
}(y)}c_{\sigma }(y), \end{gathered}$$ where $\tilde{F}_{\alpha\beta}$ is the usual Abelian field strength in terms of $\tilde{A}_{\beta}$.
From eq. (\[def-9\]) the BRST-shift symmetry of the action (\[def-2-13\]) is given by $$\begin{aligned}
\label{def-2-14}
s\bar{c}^{\tau }&=\frac{\delta \Gamma ^{(0)}}{\delta A_{\tau }}
\Bigg |_{A_{\mu }=\tilde{A}_{\mu }+ \mathbbm{A} ^{(2)}_{\mu }(\tilde{A})}
\\ \nonumber
&=\partial _{\mu }F^{\mu \tau }+\theta ^{\tau \sigma }\partial _{\mu }(F_{\nu \sigma }F^{\mu \nu })-\theta
^{\rho \sigma }\partial _{\rho }(F_{\nu \sigma }F^{\tau \nu })+\theta ^{\rho
\sigma }\partial _{\mu }(F^{\mu }_{\rho }F_{\sigma }^{\tau })
\\ \nonumber
&\phantom{=}-\frac{1}{4} \theta ^{\rho \tau }\partial _{\rho }\Big (F_{\mu \nu }F^{\mu \nu
}\Big )-\frac{1}{2} \theta ^{\rho \sigma }\partial _{\mu }\Big (F_{\rho \sigma
}F^{\mu \tau }\Big )-\partial ^{\tau }B
\\ \nonumber
&=\partial _{\mu }(\tilde{F}^{\mu \tau }+ \partial^\mu \mathbbm{A} ^{(2)\tau}
- \partial^\tau \mathbbm{A} ^{(2)\mu})
+\theta ^{\tau \sigma }\partial _{\mu }(\tilde{F}_{\nu \sigma }\tilde{F}^{\mu
\nu })-\theta ^{\rho \sigma }\partial _{\rho }(\tilde{F}_{\nu \sigma
}\tilde{F}^{\tau \nu })
\\ \nonumber
&\phantom{=}+\theta ^{\rho \sigma }\partial _{\mu }(\tilde{F}^{\mu
}_{\rho }\tilde{F}_{\sigma }^{\tau })
-\frac{1}{4} \theta ^{\rho \tau }\partial _{\rho }\Big (\tilde{F}_{\mu \nu
}\tilde{F}^{\mu \nu }\Big )-\frac{1}{2} \theta ^{\rho \sigma }\partial _{\mu
}\Big (\tilde{F}_{\rho \sigma }\tilde{F}^{\mu \tau }\Big )-\partial ^{\tau }B \\
\nonumber
&=:\partial _{\mu }(\tilde{F}^{\mu \tau }+ \partial^\mu \mathbbm{A} ^{(2)\tau}
- \partial^\tau \mathbbm{A} ^{(2)\mu})
-\partial ^{\tau }B+\mathcal{F}^{(1)\tau }(\tilde{A}),
\\ \nonumber
s\tilde{A}_\mu &= c_\mu, \\ \nonumber
sc_{\mu} &= sB = 0,\end{aligned}$$ where $$\begin{aligned}
\label{def-2-14a}
\mathcal{F}^{(1)\tau }(\tilde{A}):=
\theta ^{\tau \sigma }\partial _{\mu }(\tilde{F}_{\nu \sigma }\tilde{F}^{\mu \nu
})-\theta ^{\rho \sigma }\partial _{\rho }(\tilde{F}_{\nu \sigma
}\tilde{F}^{\tau \nu })+\theta ^{\rho \sigma }\partial _{\mu }(\tilde{F}^{\mu
}_{\rho }\tilde{F}_{\sigma }^{\tau })
\\ \nonumber \phantom{:=}
-\frac{1}{4} \theta ^{\rho \tau }\partial _{\rho }\Big (\tilde{F}_{\mu \nu
}\tilde{F}^{\mu \nu }\Big )-\frac{1}{2} \theta ^{\rho \sigma }\partial
_{\mu }\Big (\tilde{F}_{\rho \sigma }\tilde{F}^{\mu \tau }\Big ).\end{aligned}$$ One has to comment at this point that the BRST-shift symmetry for the vectorial antighost field $\bar{c}^{\mu }$ is highly nonlinear. Additionally, as is explained in the introduction, the off-shell nilpotency is also lost: $$\label{def-2-15}
s^2\bar{c}^{\mu } \neq 0.$$ Since the transformation of the antighost field $\bar{c}^{\mu }$ contains nonlinear expressions, one must introduce an external unquantized source $\rho_\mu$ for the term $\mathcal{F}^{(1)\mu}(\tilde{A})$. This implies a further piece in the action (\[def-2-13\]) $$\label{def-2-16}
\Gamma_{ext}=\int d^4 x \, \rho_\mu \mathcal{F}^{(1)\mu}(\tilde{A}),$$ where $\rho_\mu$ is gauge invariant.
The new total action becomes therefore $$\begin{aligned}
\label{def-2-17}
\Gamma ^{(0)}&=\int d^4 x \, \Big (-\frac{1}{4}\Big (\tilde{F}_{\mu \nu }
\tilde{F}^{\mu \nu }
-4\partial ^{\mu }\tilde{F}_{\mu \nu } \mathbbm{A} ^{(2)\nu }\Big)
\\ \nonumber
&\phantom{=}-\frac{1}{2}\theta ^{\rho \sigma }\Big (\tilde{F}_{\mu \rho }
\tilde{F}_{\nu\sigma }\tilde{F}^{\mu \nu }
-\frac{1}{4}\tilde{F}_{\rho \sigma }\tilde{F}_{\mu\nu }
\tilde{F}^{\mu \nu }\Big )\Big )
+\int d^4 x \, \Big (B\partial ^{\mu }\tilde{A}_{\mu }+B\partial ^{\mu }
\mathbbm{A}^{(2)} _{\mu }\Big ) \\ \nonumber
&\phantom{=}+\int d^4 x \, \bar{c}^{\mu }(x)c_{\mu }(x)+\int d^{4}x\int d^4 y \, \bar{c}^{\mu
}(x)\frac{\delta \mathbbm{A}^{(2)} _{\mu }(x)}{\delta \tilde{A}_{\sigma }(y)}c_{\sigma }(y)
\\ \nonumber
&\phantom{=}+\int d^4 x \, \rho_\mu \mathcal{F}^{(1)\mu}(\tilde{A}). \end{aligned}$$ Now we are able to characterize the symmetry content of the BRST-shift symmetry by the following nonlinear WI: $$\begin{gathered}
\label{def-2-18}
\mathcal{S}(\hat{\Gamma} ^{(0)}) = \\
\int d^{4} x \, \Big (\Big (\partial _{\rho }(\tilde{F}^{\rho \mu}
+ \partial^{\rho} \mathbbm{A}^{(2)\mu}
- \partial^\mu \mathbbm{A} ^{(2)\rho})
- \partial^\mu B+\frac{\delta \hat{\Gamma}^{(0)}}{\delta \rho^\mu} \Big)
\frac{\delta\hat{\Gamma}^{(0)}}{\delta \bar{c}^\mu}
+c_\mu \frac{\delta\hat{\Gamma}^{(0)}}{\delta \tilde{A}^\mu}\Big)
= 0.\end{gathered}$$ Eq. (\[def-2-18\]) will be the key for the understanding of the radiative corrections of the 2-point vertex-functional at the one-loop-level [@Bichl:2001cq].
Additionally, our model is also characterized by the gauge symmetry (\[def-2-8\]) with $$\label{def-2-19}
\delta_\lambda \bar{c}^\mu = \delta_\lambda c^\mu = \delta_\lambda B = 0.$$ This ordinary gauge invariance is described by the following WI operator $$\label{def-2-20}
W_\lambda=\int d^4 x \, \partial_\mu \lambda
\frac{\delta}{\delta A_\mu(x)}$$ and, as usual, the gauge symmetry is broken by the gauge fixing. This leads to $$\label{def-2-21}
W_\lambda \hat{\Gamma}^{(0)}
=\int d^4 x \, B \Box \lambda \neq 0.$$ By functional differentiation with respect to $\lambda(y)$ one obtains the local version of (\[def-2-21\]) $$\label{def-2-22}
W(x) \hat{\Gamma}^{(0)}
= - \partial_\mu \frac{\delta \hat{\Gamma}^{(0)}}{\delta \tilde{A}_\mu(x)}
= \Box B(x) \neq 0.$$ Since this breaking is linear in the quantum field $B$, no problems arise in the discussion of the gauge symmetry at higher orders of perturbation theory [@Boresch], [@Piguet:er].
We would like to point out that our model is characterized by two symmetries at the classical level: the gauge symmetry and the BRST-shift symmetry. This implies the existence of two WI’s, (\[def-2-18\]) and (\[def-2-21\]). These WI’s have severe consequences for the computation of the 2-point vertex functional.
From eq. (\[def-2-22\]) follows immediately the well-known transversality condition $$\label{def-2-23}
\partial_x^\mu \frac{\delta^2\hat{\Gamma}^{(0)}}
{\delta \tilde{A}^\mu(x)\delta\tilde{A}^\rho(y)} \Bigg |_0 = 0,$$ where the subscript indicates vanishing classical fields. However, the WI (\[def-2-18\]) furnishes a further possibility to calculate $$\label{def-2-24}
\Pi_{\mu\rho} = \frac{\delta^2\hat{\Gamma}^{(0)}}
{\delta \tilde{A}^\mu(x)\delta\tilde{A}^\rho(y)} \Bigg |_0.$$ The result obtained by direct calculation (functional variation) must be consistent with the result emerging from the WI (\[def-2-18\]).
However, one has to stress that all considerations in this section are purely classical, i.e., valid in the tree approximation.
Therefore, one has to study the consequences of (\[def-2-18\]). Functional differentiation with respect to $\tilde{A}^\rho(z)$ and $c_\mu(y)$ of $\mathcal{S}(\hat{\Gamma} ^{(0)})$ gives: $$\begin{gathered}
\label{def-2-25}
\frac{\delta^2}{\delta c^\mu(y) \delta \tilde{A}^\rho(z)}
\mathcal{S}(\hat{\Gamma} ^{(0)}) \Bigg |_0 =
\\ \Big (\Box g_{\rho\sigma}
-\partial_{\rho }\partial_{\sigma }\Big)(z)
\frac{\delta^2\hat{\Gamma}^{(0)}}{\delta c^\mu(y) \delta
\bar{c}_{\sigma }(z)} \Bigg |_0
+ \frac{\delta^2\hat{\Gamma}^{(0)}}
{\delta \tilde{A}^\rho(z)\delta\tilde{A}^\mu(y)}\Bigg |_0
\\ +\int d^4 x \,
\frac{\delta \mathbbm{A}^{(2)\lambda}(x)}{\delta \tilde{A}^\rho(z)}
\Big (\Box g_{\lambda\sigma }-\partial_\lambda\partial_{\sigma
}\Big)(x)
\frac{\delta^2\hat{\Gamma}^{(0)}}{\delta c^\mu(x) \delta
\bar{c}_{\sigma }(z)} \Bigg |_0
= 0.\end{gathered}$$ From (\[def-2-17\]) one gets additionally $$\label{def-2-26}
\frac{\delta^2\hat{\Gamma}^{(0)}}
{\delta c^\mu(y) \delta\bar{c}_{\sigma }(z)} \Bigg |_0
=
\delta^\sigma_\mu \delta(y-z)
+\frac{\delta \mathbbm{A}^{(2)} _{\sigma}(z)}
{\delta \tilde{A}^{\mu}(y)}.$$ At order $\theta^2$ one has therefore $$\begin{gathered}
\label{def-2-27}
\frac{\delta^2\hat{\Gamma}^{(0)}}
{\delta \tilde{A}^\rho(z)\delta\tilde{A}^\mu(y)} \Bigg |_0 =
\\ -\Big (\Box g_{\rho\mu} - \partial_{\rho }\partial_\mu\Big)(z)
\delta(y-z)
-\Big (\Box g_{\rho\sigma} - \partial_{\rho }\partial_\sigma\Big)(z)
\frac{\delta \mathbbm{A}^{(2)\sigma}(z)}{\delta \tilde{A}^\mu(y)}
\\ -\Big (\Box g_{\lambda\mu}-\partial_{\lambda }\partial_\mu\Big)(y)
\frac{\delta \mathbbm{A}^{(2)\lambda}(y)}{\delta \tilde{A}^\rho(z)} .\end{gathered}$$ The result (\[def-2-27\]) is fully transversal—a consequence of the gauge symmetry. Since $\mathbbm{A}^{(2)\mu}$ is of order $\theta^2$, (\[def-2-27\]) yields the well known result for the 2-point free vertex functional for $\theta^{\rho\sigma} = 0$ $$\label{def-2-28}
\frac{\delta^2\hat{\Gamma}^{(0)}}
{\delta \tilde{A}^\rho(z)\delta\tilde{A}^\mu(y)} \Bigg |_0 =
-\Big (\Box g_{\rho\mu} - \partial_{\rho }\partial_\mu\Big)(z)
\delta(y-z).$$ In the tree approximation there only exists a linear dependence on $\theta^{\rho\sigma}$—therefore $\mathbbm{A}^{(2)\mu}$ is not needed in the action, $$\begin{aligned}
\label{def-2-29}
\Gamma _\mathrm{inv} &=
\int d^4 x \, \Big (-\frac{1}{4}F_{\mu \nu }F^{\mu \nu
}-\frac{1}{2}\theta ^{\rho \sigma }\Big (F_{\mu \rho }F_{\nu \sigma }F^{\mu \nu
}-\frac{1}{4}F_{\rho \sigma }F_{\mu \nu }F^{\mu \nu }\Big )\Big )
\nonumber \\
&\phantom{=}+\int d^4 x \, B\partial ^{\mu }A_{\mu },\end{aligned}$$]{} which is sufficient to calculate eq. (\[def-2-28\]).
The additional terms proportional to $\frac{\delta \mathbbm{A}^{(2)} _{\mu}(x)}{\delta \tilde{A}_{\sigma }(y)}$ in (\[def-2-27\]) become relevant once one considers one-loop corrections.
The result (\[def-2-27\]) is easily reproduced by direct twofold functional derivation of the action (\[def-2-17\]) with respect to the gauge field $\tilde{A}_\mu(x)$.
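The transversality condition (\[def-2-23\]) is easy to verify in momentum space, where the free 2-point function (\[def-2-28\]) becomes $\Pi_{\rho\mu}(k) \propto k^2 g_{\rho\mu} - k_\rho k_\mu$ and the statement reads $k^\rho \Pi_{\rho\mu} = 0$. A minimal numerical sketch in plain Python; the metric signature $(+,-,-,-)$ and the momentum components are illustrative choices:

```python
# Momentum-space check of the transversality condition (def-2-23):
# the free 2-point function (def-2-28) is Pi_{rho mu}(k) = k^2 g_{rho mu} - k_rho k_mu,
# and gauge invariance demands k^rho Pi_{rho mu} = 0 for every mu.
g = [[1, 0, 0, 0],
     [0, -1, 0, 0],
     [0, 0, -1, 0],
     [0, 0, 0, -1]]                      # metric, signature (+, -, -, -)
k = [1.3, 0.7, -0.2, 2.1]                # arbitrary off-shell momentum k_mu

k_up = [sum(g[a][b] * k[b] for b in range(4)) for a in range(4)]   # k^mu = g^{mu nu} k_nu
k2 = sum(k_up[a] * k[a] for a in range(4))                         # k^mu k_mu

pi = [[k2 * g[r][m] - k[r] * k[m] for m in range(4)] for r in range(4)]
contracted = [sum(k_up[r] * pi[r][m] for r in range(4)) for m in range(4)]
print(contracted)   # all four components vanish up to rounding
```

The cancellation $k^\rho \Pi_{\rho\mu} = k^2 k_\mu - (k^\rho k_\rho)k_\mu = 0$ holds for any off-shell momentum, which is the momentum-space image of $\partial_x^\mu$ acting on (\[def-2-28\]).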
$\theta$-deformed Maxwell theory: one-loop corrections {#3}
======================================================
If one considers only the photon sector of the noncommutative Maxwell theory, the relevant action[^4] is given by: $$\begin{aligned}
\label{def-3-1}
\Gamma ^{(0)} &= \Gamma ^{(1)}
+ \int d^4 x \, \partial_\mu \tilde{F}^{\mu \nu}\mathbbm{A}^{(2)}_{\nu }
\nonumber \\
&= \int d^4 x \, \Big (-\frac{1}{4}\Big (\tilde{F}_{\mu \nu }\tilde{F}^{\mu \nu }
-4\partial ^{\mu }\tilde{F}_{\mu \nu } \mathbbm{A} ^{(2)\nu }\Big ) \nonumber \\
&\phantom{=} -\frac{1}{2}\theta ^{\rho \sigma }\Big (\tilde{F}_{\mu \rho }\tilde{F}_{\nu
\sigma }\tilde{F}^{\mu \nu }-\frac{1}{4}\tilde{F}_{\rho \sigma }
\tilde{F}_{\mu\nu }\tilde{F}^{\mu \nu }\Big )\Big ),\end{aligned}$$]{} where $\Gamma^{(1)}$ denotes terms of order 0 and 1 in $\theta$. Eq. (\[def-3-1\]) follows from (\[def-2-17\]). In order to compensate the one-loop self-energy corrections one needs the explicit form of $\mathbbm{A} ^{(2)}_\nu $ [@Bichl:2001cq]: $$\begin{aligned}
\label{def-3-2}
\mathbbm{A}^{(2)}_\mu
&=\phantom{+}
\kappa^{(2)}_1
g^{\alpha\gamma}g^{\beta\delta}g^{\lambda\rho}g^{\sigma\tau}
\theta_{\alpha\beta}\theta_{\gamma\delta}
\partial_\lambda\partial_\rho\partial_\sigma
\tilde{F}_{\tau\mu}
\nonumber \\
&\phantom{=}+ \kappa^{(2)}_2
g^{\alpha\gamma}g^{\beta\lambda}g^{\delta\rho}g^{\sigma\tau}
\theta_{\alpha\beta}\theta_{\gamma\delta}
\partial_\lambda\partial_\rho\partial_\sigma
\tilde{F}_{\tau\mu}
\nonumber \\
&\phantom{=}+ \kappa^{(2)}_3
g^{\beta\sigma}g^{\gamma\tau}g^{\alpha\lambda}g^{\delta\rho}
\theta_{\mu\beta}\theta_{\gamma\delta}
\partial_\alpha\partial_\lambda\partial_\rho
\tilde{F}_{\sigma\tau}
\nonumber \\
&\phantom{=}+ \kappa^{(2)}_4
g^{\gamma\tau}g^{\beta\delta}g^{\alpha\lambda}g^{\rho\sigma}
\theta_{\mu\beta}\theta_{\gamma\delta}
\partial_\alpha\partial_\lambda\partial_\rho
\tilde{F}_{\sigma\tau},\end{aligned}$$ which is gauge invariant.
Inserting (\[def-3-2\]) into (\[def-3-1\]) one gets for the terms quadratic in $\theta^{\mu\nu}$ $$\begin{gathered}
\label{def-3-3}
\Gamma ^{(0)} = \Gamma ^{(1)} +\int d^{4} x \, \tilde{A}_\mu\Bigg(
\Big(g^{\mu\nu}\Box-\partial^\mu\partial^\nu \Big)
\Big( \kappa^{(2)}_1 \theta^2\Box^2
+ \kappa^{(2)}_2 \Tilde{\Tilde{\overset{}{\Box}}}\Box \Big) \\
+ \kappa^{(2)}_3 \tilde{\partial}^\mu \tilde{\partial}^\nu \Box^2
+ \kappa^{(2)}_4 \Big(\theta^{\mu\alpha}\theta^\nu_\alpha \Box^3
+ \Big( \Tilde{\Tilde{\partial}}^\mu\partial^\nu
+ \Tilde{\Tilde{\partial}}^\nu\partial^\mu \Big) \Box^2
+ \partial^\mu\partial^\nu \Tilde{\Tilde{\overset{}{\Box}}} \Box
\Big)
\Bigg)\tilde{A}_\nu,\end{gathered}$$ where $\Box=\partial^\alpha\partial_\alpha$, $\tilde{\partial}^\alpha=\theta^{\alpha\beta}\partial_\beta$, $\Tilde{\Tilde{\partial}}^\alpha=\theta^{\alpha\beta}\tilde{\partial}_\beta$, $\Tilde{\Tilde{\overset{}{\Box}}}=\tilde{\partial}^\alpha\tilde{\partial}_\alpha$ and $\theta^2=\theta^{\alpha\beta}\theta_{\alpha\beta}$.
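These tilde operators inherit simple identities from the antisymmetry of $\theta^{\alpha\beta}$; in particular $\partial_\alpha \tilde{\partial}^\alpha = \theta^{\alpha\beta}\partial_\alpha\partial_\beta = 0$, which is why no term $\partial_\mu\tilde{\partial}^\mu$ can appear in (\[def-3-3\]). In momentum space this is $k_\alpha\theta^{\alpha\beta}k_\beta = 0$, easily checked numerically (plain Python; the entries of $\theta$ and $k$ are random illustrative values):

```python
import random

random.seed(0)

# Any antisymmetric theta^{alpha beta} contracted with k_alpha k_beta vanishes,
# i.e. partial_alpha tilde-partial^alpha = theta^{alpha beta} partial_alpha partial_beta = 0.
a = [[random.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(4)]
theta = [[a[i][j] - a[j][i] for j in range(4)] for i in range(4)]  # antisymmetric by construction
k = [random.uniform(-1.0, 1.0) for _ in range(4)]

s = sum(k[i] * theta[i][j] * k[j] for i in range(4) for j in range(4))
print(s)   # zero up to floating-point rounding
```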
With the help of (\[def-3-2\]) one verifies by direct calculation that (\[def-3-3\]) and (\[def-2-27\]) are consistent.
At the one-loop level this means that the shift symmetry controls the radiative corrections of the perturbative calculation.
In order to cancel the one-loop divergences one performs the following renormalization of $\kappa^{(2)}_1$, $\kappa^{(2)}_2$, $\kappa^{(2)}_3$ and $\kappa^{(2)}_4$ [@Bichl:2001cq]: $$\begin{aligned}
{2}\label{def-3-4}
\kappa^{(2)}_1 \; & \rightarrow \; \kappa^{(2)}_1
-\frac{1}{16}\frac{\hbar}{(4\pi)^2 \epsilon} \, ,
\qquad &
\kappa^{(2)}_2 \; & \rightarrow \; \kappa^{(2)}_2
+\frac{1}{20}\frac{\hbar}{(4\pi)^2 \epsilon} \, ,
\\ \nonumber
\kappa^{(2)}_3 \; & \rightarrow \; \kappa^{(2)}_3
+\frac{1}{60}\frac{\hbar}{(4\pi)^2 \epsilon} \, ,
\qquad &
\kappa^{(2)}_4 \; & \rightarrow \; \kappa^{(2)}_4
+\frac{1}{8}\frac{\hbar}{(4\pi)^2 \epsilon}\, .\end{aligned}$$ However, one has to stress that (\[def-3-4\]) represent unphysical renormalizations because the $\kappa^{(2)}_i$ parametrize the field redefinition (\[def-2\]) and (\[def-2-10\]).
Conclusion {#4}
==========
In this paper we have demonstrated the usefulness of the BRST-shift symmetry in connection with the renormalization program of the vacuum polarization of the $\theta$-deformed Maxwell theory at the one-loop level. Gauge symmetry and BRST-shift symmetry can be implemented consistently. Unfortunately the non-Abelian extension is plagued by several difficulties.
[99]{}
J. E. Moyal, “Quantum Mechanics As A Statistical Theory,” Proc. Cambridge Phil. Soc. [**45**]{} (1949) 99.

M. Hayakawa, “Perturbative analysis on infrared and ultraviolet aspects of noncommutative QED on R\*\*4,” arXiv:hep-th/9912167.

M. Hayakawa, “Perturbative analysis on infrared aspects of noncommutative QED on R\*\*4,” Phys. Lett. B [**478**]{} (2000) 394 \[arXiv:hep-th/9912094\].

A. Matusis, L. Susskind and N. Toumbas, “The IR/UV connection in the non-commutative gauge theories,” JHEP [**0012**]{} (2000) 002 \[arXiv:hep-th/0002075\].

Work in preparation.
N. Seiberg and E. Witten, “String theory and noncommutative geometry,” JHEP [**9909**]{} (1999) 032 \[arXiv:hep-th/9908142\].
J. Madore, S. Schraml, P. Schupp and J. Wess, “Gauge theory on noncommutative spaces,” Eur. Phys. J. C [**16**]{} (2000) 161 \[arXiv:hep-th/0001203\].
B. Jurco, S. Schraml, P. Schupp and J. Wess, “Enveloping algebra valued gauge transformations for non-Abelian gauge groups on non-commutative spaces,” Eur. Phys. J. C [**17**]{} (2000) 521 \[arXiv:hep-th/0006246\].

B. Jurco, L. Moller, S. Schraml, P. Schupp and J. Wess, “Construction of non-Abelian gauge theories on noncommutative spaces,” Eur. Phys. J. C [**21**]{} (2001) 383 \[arXiv:hep-th/0104153\].

A. A. Bichl, J. M. Grimstrup, L. Popp, M. Schweda and R. Wulkenhaar, “Perturbative analysis of the Seiberg-Witten map,” arXiv:hep-th/0102044.

J. Alfaro and P. H. Damgaard, “BRST symmetry of field redefinitions,” Annals Phys. [**220**]{} (1992) 188.
S. I. Kruglov, “Maxwell’s theory on non-commutative spaces and quaternions,” arXiv:hep-th/0110059.
R. Jackiw, “Physical instances of noncommuting coordinates,” arXiv:hep-th/0110057.
Z. Guralnik, R. Jackiw, S. Y. Pi and A. P. Polychronakos, “Testing non-commutative QED, constructing non-commutative MHD,” Phys. Lett. B [**517**]{} (2001) 450 \[arXiv:hep-th/0106044\].

R. G. Cai, “Superluminal noncommutative photons,” Phys. Lett. B [**517**]{} (2001) 457 \[arXiv:hep-th/0106047\].
A. A. Bichl, J. M. Grimstrup, L. Popp, M. Schweda and R. Wulkenhaar, “Deformed QED via Seiberg-Witten map,” arXiv:hep-th/0102103.
A. Bichl, J. Grimstrup, H. Grosse, L. Popp, M. Schweda and R. Wulkenhaar, “Renormalization of the noncommutative photon selfenergy to all orders via Seiberg-Witten map,” JHEP [**0106**]{} (2001) 013 \[arXiv:hep-th/0104097\].
R. Wulkenhaar, “Non-renormalizability of theta-expanded noncommutative QED,” arXiv:hep-th/0112248.

F. Bastianelli, “BRST Symmetry From A Change Of Variables And The Gauged WZNW Models,” Nucl. Phys. B [**361**]{} (1991) 555.

A. Boresch, S. Emery, O. Moritsch, M. Schweda, T. Sommer, H. Zerrouki, “Applications of Noncovariant Gauges in the Algebraic Renormalization Procedure,” [*Singapore, Singapore: World Scientific (1998) 150 p*]{}.
O. Piguet and S. P. Sorella, “Algebraic Renormalization: Perturbative Renormalization, Symmetries And Anomalies,” Lect. Notes Phys. [**M28**]{} (1995) 1.
[^1]: For consistency with the formulation of noncommutative gauge field models we use $\star$-products here.
[^2]: In [@Bichl:2001cq] one finds the most general $\mathbbm{A} ^{(2)}_{\mu }(\tilde{A})$.
[^3]: In (\[def-2-12\]) we have used $\int d^4 x \, \bar{c}^{\mu}(x) \star c_{\mu}(x)
= \int d^4 x \, \bar{c}^{\mu }(x)c_{\mu}(x)$.
[^4]: Here, the vectorial ghost, the antighost, the $B$ field and the external sources are assumed to be zero.
---
abstract: 'The multiplicity distributions of hadrons produced in central nucleus-nucleus collisions are studied within the hadron-resonance gas model in the large volume limit. The microscopic correlator method is used to enforce conservation of three charges – baryon number, electric charge, and strangeness – in the canonical ensemble. In addition, in the micro-canonical ensemble energy conservation is included. An analytical method is used to account for resonance decays. The multiplicity distributions and the scaled variances for negatively, positively, and all charged hadrons are calculated along the chemical freeze-out line of central Pb+Pb (Au+Au) collisions from SIS to LHC energies. Predictions obtained within different statistical ensembles are compared with the preliminary NA49 experimental results on central Pb+Pb collisions in the SPS energy range. The measured fluctuations are significantly narrower than the Poisson ones and clearly favor expectations for the micro-canonical ensemble. Thus this is a first observation of the recently predicted suppression of the multiplicity fluctuations in relativistic gases in the thermodynamical limit due to conservation laws.'
author:
- 'V.V. Begun'
- 'M. Gaździcki'
- 'M.I. Gorenstein'
- 'M. Hauer'
- 'V.P. Konchakovski'
- 'B. Lungwitz'
title: |
Multiplicity fluctuations\
in relativistic nuclear collisions:\
statistical model versus experimental data
---
Introduction
============
For more than 50 years statistical models of strong interactions [@fermi; @landau; @hagedorn] have served as an important tool to investigate high energy nuclear collisions. The main subject of the past study has been the mean multiplicity of produced hadrons (see e.g. Refs. [@stat1; @FOC; @FOP; @pbm]). Only recently, due to a rapid development of experimental techniques, first measurements of fluctuations of particle multiplicity [@fluc-mult] and transverse momenta [@fluc-pT] were performed. The growing interest in the study of fluctuations in strong interactions (see e.g., reviews [@fluc1]) is motivated by expectations of anomalies in the vicinity of the onset of deconfinement [@ood] and in the case when the expanding system goes through the transition line between the quark-gluon plasma and the hadron gas [@fluc2]. In particular, a critical point of strongly interacting matter may be signaled by a characteristic power-law pattern in fluctuations [@fluc3]. Apart from being an important tool in an effort to study the critical behavior, the study of fluctuations in the statistical hadronization model constitutes a further test of its validity. In this paper we make, for the first time, predictions for the multiplicity fluctuations in central collisions of heavy nuclei calculated within the micro-canonical formulation of the hadron-resonance gas model. Fluctuations are quantified by the ratio of the variance of the multiplicity distribution and its mean value, the so-called scaled variance. The model calculations are compared with the corresponding preliminary results [@NA49] of NA49 on central Pb+Pb collisions at the CERN SPS energies.
There is a qualitative difference in the properties of the mean multiplicity and the scaled variance of multiplicity distribution in statistical models. In the case of the mean multiplicity results obtained with the grand canonical ensemble (GCE), canonical ensemble (CE), and micro-canonical ensemble (MCE) approach each other in the large volume limit. One refers here to the thermodynamical equivalence of the statistical ensembles. It was recently found [@CE; @res] that corresponding results for the scaled variance are different in different ensembles, and thus the scaled variance is sensitive to conservation laws obeyed by a statistical system. The differences are preserved in the thermodynamic limit.
The paper is organized as follows. In Section II the microscopic correlators for a relativistic quantum gas are calculated in the MCE in the thermodynamical limit. This allows to take into account conservation of baryon number, electric charge, and strangeness in the CE formulation and, additionally, energy conservation in the MCE. In Section III the relevant formulas for the scaled variance of multiplicity fluctuations are presented for different statistical ensembles within the hadron-resonance gas model. The scaled variance of negative, positive and all charged hadrons is then calculated along the chemical freeze-out line in the temperature–baryon chemical potential plane. The fluctuations of hadron multiplicities in central Pb+Pb (Au+Au) collisions are presented for different collision energies from SIS to LHC. The results for the GCE, CE, and MCE are compared. In Section IV the statistical model predictions for the scaled variances and multiplicity distributions of negatively and positively charged hadrons are compared with the preliminary NA49 data of central Pb+Pb collisions in the SPS energy range. A summary, presented in Section V, closes the paper. New features of resonance decays within the MCE are discussed in Appendix A, and the acceptance procedure for all charged hadrons is considered in Appendix B.
Multiplicity Fluctuations in Statistical Models
===============================================
The mean multiplicities of positively, negatively and all charged particles are defined as: [$$\begin{aligned}
\langle N_-\rangle \;=\; \sum_{i,q_i<0} \langle N_i\rangle\;,~~~~
\langle N_{+}\rangle \;=\; \sum_{i,q_i>0} \langle
N_i\rangle\;,~~~~
\langle N_{ch}\rangle \;=\; \sum_{i,q_i\neq 0} \langle
N_i\rangle\;,
\label{pminch}
\end{aligned}$$]{} where the average final state (after resonance decays) multiplicities $\langle N_i\rangle$ are equal to: [$$\begin{aligned}
\label{<N>}
\langle N_i\rangle
\;=\;
\langle N_i^*\rangle + \sum_R \langle N_R\rangle \langle
n_{i}\rangle_R\;.
\end{aligned}$$]{} In Eq. (\[<N>\]), $N_i^*$ denotes the number of stable primary hadrons of species $i$, the summation $\sum_R$ runs over all types of resonances $R$, and $\langle n_i\rangle_R \equiv \sum_r
b_r^R n_{i,r}^R$ is the average over resonance decay channels. The parameters $b^R_r$ are the branching ratios of the $r$-th branches, $n_{i,r}^R$ is the number of particles of species $i$ produced in resonance $R$ decays via a decay mode $r$. The index $r$ runs over all decay channels of a resonance $R$, with the requirement $\sum_{r} b_r^R=1$. In the GCE formulation of the hadron-resonance gas model the mean number of stable primary particles, $\langle N_i^* \rangle$, and the mean number of resonances, $\langle N_R \rangle$, can be calculated as: [$$\begin{aligned}
\label{Ni-gce}
\langle N_j\rangle \;\equiv\; \sum_{\bf p} \langle n_{{\bf p},j}\rangle
\;=\; \frac{g_j V}{2\pi^{2}}\int_{0}^{\infty}p^{2}dp\; \langle
n_{{\bf p},j}\rangle\;,
\end{aligned}$$]{} where $V$ is the system volume and $g_j$ is the degeneracy factor of particle of the species $j$ (number of spin states). In the thermodynamic limit, $V\rightarrow \infty$, the sum over the momentum states can be substituted by a momentum integral. The $\langle n_{{\bf p},j} \rangle$ denotes the mean occupation number of a single quantum state labelled by the momentum vector ${\bf
p}$, [$$\begin{aligned}
\langle n_{{\bf p},j} \rangle
~& = ~\frac {1} {\exp \left[\left( \epsilon_{{\bf p}j} - \mu_j \right)/ T\right]
~-~ \alpha_j}~, \label{np-aver}
\end{aligned}$$]{} where $T$ is the system temperature, $m_j$ is the mass of a particle $j$, $\epsilon_{{\bf p}j}=\sqrt{{\bf p}^{2}+m_j^{2}}$ is a single particle energy. A value of $\alpha_j$ depends on quantum statistics, it is $+1$ for bosons and $-1$ for fermions, while $\alpha_j=0$ gives the Boltzmann approximation. The chemical potential $\mu_j$ of a species $j$ equals to: [$$\begin{aligned}
\mu_j~=~q_j~\mu_Q~+~b_j~\mu_B~+~s_j~\mu_S ~,\label{chempot} \end{aligned}$$]{} where $q_j,~b_j,~s_j$ are the particle electric charge, baryon number, and strangeness, respectively, while $\mu_Q,~\mu_B,~\mu_S$ are the corresponding chemical potentials which regulate the average values of these global conserved charges in the GCE. Eqs. (\[Ni-gce\]-\[chempot\]) are valid in the GCE. In the limit $V\rightarrow\infty$, Eqs. (\[Ni-gce\]-\[chempot\]) are also valid for the CE and MCE, if the energy density and conserved charge densities are the same in all three ensembles. This is usually referred to as the thermodynamical equivalence of all statistical ensembles. However, the thermodynamical equivalence does not apply to fluctuations.
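As a numerical illustration of Eqs. (\[Ni-gce\])-(\[np-aver\]), the momentum integral can be evaluated directly. A sketch in plain Python; the values $T = 160$ MeV, $m = 140$ MeV, $\mu = 0$ (roughly a pion at chemical freeze-out) are illustrative assumptions, and the density comes out in natural units (MeV$^3$):

```python
import math

def n_mean(eps, mu, T, alpha):
    # Mean occupation number of a single level, eq. (np-aver):
    # alpha = +1 (Bose), -1 (Fermi), 0 (Boltzmann approximation).
    return 1.0 / (math.exp((eps - mu) / T) - alpha)

def density(m, mu, T, alpha, g=1, pmax=4000.0, steps=40000):
    # Particle density <N>/V of eq. (Ni-gce): (g / 2 pi^2) * int p^2 <n_p> dp,
    # evaluated with a simple midpoint rule (natural units, MeV).
    dp = pmax / steps
    total = 0.0
    for i in range(steps):
        p = (i + 0.5) * dp
        eps = math.sqrt(p * p + m * m)
        total += p * p * n_mean(eps, mu, T, alpha)
    return g * total * dp / (2.0 * math.pi ** 2)

T, m, mu = 160.0, 140.0, 0.0      # illustrative pion-like freeze-out values
n_bose = density(m, mu, T, +1)
n_boltz = density(m, mu, T, 0)
n_fermi = density(m, mu, T, -1)
print(n_bose, n_boltz, n_fermi)   # Bose enhancement / Fermi suppression of the density
```

At these parameters the Bose-Einstein density exceeds the Boltzmann one by roughly ten percent, while Fermi statistics would suppress it; this is the density-level image of the single-state enhancement and suppression discussed below.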
In statistical models a natural measure of multiplicity fluctuations is the scaled variance of the multiplicity distribution. For negatively, positively, and all charged particles the scaled variances read: [$$\begin{aligned}
\omega^- ~=~ \frac{\langle \left( \Delta N_- \right)^2
\rangle}{\langle N_-
\rangle}~,~~~~
\omega^+~ =~ \frac{\langle \left( \Delta N_+ \right)^2
\rangle}{\langle N_+
\rangle}~,~~~~
\omega^{ch}~ =~ \frac{\langle \left( \Delta N_{ch} \right)^2
\rangle}{\langle N_{ch}
\rangle}~.
\label{omega-all}
\end{aligned}$$]{} The variances in Eq. (\[omega-all\]) can be presented as a sum of the correlators: [$$\begin{aligned}
\langle \left( \Delta N_- \right)^2 \rangle
~& =~
\sum_{i,j;~q_i<0,q_j<0} \langle \Delta N_i \Delta N_j
\rangle~,~~~~ \langle \left( \Delta N_+ \right)^2 \rangle
~ =~
\sum_{i,j;~q_i>0,q_j>0} \langle \Delta N_i \Delta N_j \rangle~,\nonumber \\
\langle \left( \Delta N_{ch} \right)^2 \rangle
~ &=~
\sum_{i,j;~q_i\neq 0,q_j\neq 0} \langle \Delta N_i \Delta N_j
\rangle~, \label{DNpm}
\end{aligned}$$]{} where $\Delta N_i\equiv N_i -\langle N_i\rangle$. The correlators in Eq. (\[DNpm\]) include both the correlations between primordial hadrons and those of final state hadrons due to the resonance decays (resonance decays obey charge as well as energy-momentum conservation).
In the GCE the final state correlators can be calculated as [@Koch]: [$$\begin{aligned}
\label{corr-GCE}
\langle \Delta N_i\,\Delta N_j\rangle_{g.c.e.}
~=~
\langle\Delta N_i^* \Delta N_j^*\rangle_{g.c.e.}
\;+\; \sum_R \left[ \langle\Delta N_R^2\rangle\;
\langle n_{i}\rangle_R\;\langle n_{j}\rangle_R
\;+\; \langle N_R\rangle\; \langle \Delta n_{i}\Delta n_{j}\rangle_R
\right]~,
\end{aligned}$$]{} where $\langle \Delta n_i~\Delta n_j\rangle_R\equiv \sum_r b_r^R
n_{i,r}^R n_{j,r}^R~-~\langle n_i\rangle_R\langle n_j\rangle_R$ . The occupation numbers, $n_{{\bf p},j}$, of single quantum states (with fixed projection of particle spin) are equal to $n_{{\bf p},j}=0,1,\ldots,\infty$ for bosons and $n_{{\bf p},j}=0,1$ for fermions. Their average values are given by Eq. (\[np-aver\]), and their fluctuations read: [$$\begin{aligned}
\langle~\left(\Delta n_{{\bf p},j}\right)^2~\rangle
~ \equiv ~ \langle \left( n_{{\bf p},j}~-~\langle
n_{{\bf p},j}\rangle\right)^2\rangle ~=~ \langle n_{{\bf p},j}\rangle \left(1~
+ ~\alpha_j ~\langle n_{{\bf p},j} \rangle\right)~\equiv~v^{ 2}_{{\bf p},j}~.
\label{np-fluc}
\end{aligned}$$]{} It is convenient to introduce a microscopic correlator, $\langle \Delta n_{{\bf p},j} \Delta n_{{\bf k},i} \rangle$, which in the GCE has a simple form: [$$\begin{aligned}
\label{mcc-gce}
\langle \Delta n_{{\bf p},j}~ \Delta n_{{\bf k},i} \rangle_{g.c.e.}~=~
\upsilon_{{\bf p},j}^2\,\delta_{ij}\,\delta_{{\bf p}{\bf k}}~.
\end{aligned}$$]{} Hence there are no correlations between different particle species, $i\neq j$, and/or between different momentum states, ${\bf p} \neq {\bf k}$. Only the Bose enhancement, $v_{{\bf p},j}^2>\langle n_{{\bf p},j}\rangle$ for $\alpha_j=1$, and the Fermi suppression, $v_{{\bf p},j}^2<\langle
n_{{\bf p},j}\rangle$ for $\alpha_j=-1$, exist for fluctuations of primary particles in the GCE. The correlator in Eq. (\[corr-GCE\]) can be presented in terms of microscopic correlators (\[mcc-gce\]): [$$\begin{aligned}
\label{dNidNj}
\langle \Delta N_j^* ~\Delta N_i^*~\rangle_{g.c.e.} ~=~
\sum_{{\bf p},{\bf k}}~\langle \Delta n_{{\bf p},j}~\Delta
n_{{\bf k},i}\rangle_{g.c.e.}~=~\delta_{ij}~\sum_{\bf p}~ v_{{\bf p},j}^2~.
\end{aligned}$$]{} In the case of $i=j$ the above equation gives the scaled variance of primordial particles (before resonance decays) in the GCE.
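The single-level fluctuation formula (\[np-fluc\]) can be checked against the explicit grand canonical level distributions: for bosons the single-level distribution is geometric, $P(n) = (1-x)x^n$ with $x = e^{-(\epsilon-\mu)/T}$, and its variance must reproduce the Bose enhancement $v^2 = \langle n\rangle(1+\langle n\rangle)$. A sketch in exact rational arithmetic (plain Python; $x = 2/5$ is an arbitrary illustrative value of the Boltzmann factor):

```python
from fractions import Fraction

x = Fraction(2, 5)        # x = exp(-(eps - mu)/T), illustrative Boltzmann factor

# Bose level distribution: P(n) = (1 - x) x^n, n = 0, 1, 2, ...
# (truncated at N; with exact Fractions the truncation is the only error source).
N = 200
probs = [(1 - x) * x ** n for n in range(N)]
mean = sum(n * p for n, p in enumerate(probs))
second = sum(n * n * p for n, p in enumerate(probs))
var = second - mean ** 2

exact_mean = x / (1 - x)                   # <n> = x/(1 - x)
exact_var = exact_mean * (1 + exact_mean)  # eq. (np-fluc) with alpha = +1
print(float(mean), float(exact_mean), float(var), float(exact_var))
```

The truncated series reproduces $\langle n\rangle = x/(1-x) = 2/3$ and $v^2 = x/(1-x)^2 = 10/9 = \langle n\rangle(1+\langle n\rangle)$; replacing the geometric weights by the Bernoulli pair $\{1/(1+x),\, x/(1+x)\}$ gives the Fermi suppression $v^2 = \langle n\rangle(1-\langle n\rangle)$ in the same way.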
In the MCE, the energy and conserved charges are fixed exactly for each microscopic state of the system. This leads to two modifications in a comparison with the GCE. First, the additional terms appear for the primordial microscopic correlators in the MCE. They reflect the (anti)correlations between different particles, $i\neq j$, and different momentum levels, ${\bf p}\neq {\bf k}$, due to charge and energy conservation in the MCE, [$$\begin{aligned}
\label{corr}
&
\langle \Delta n_{{\bf p},j} \Delta n_{{\bf k},i} \rangle_{m.c.e.}
~=\; \upsilon_{{\bf p},j}^2\,\delta_{ij}\,\delta_{{\bf p}{\bf k}}
\;-\; \frac{\upsilon_{{\bf p},j}^2v_{{\bf k},i}^2}{|A|}\;
[\;q_iq_j M_{qq} + b_ib_j M_{bb} + s_is_j M_{ss} \nonumber
\\
&+ ~\left(q_is_j + q_js_i\right) M_{qs}~
- ~\left(q_ib_j + q_jb_i\right) M_{qb}~
- ~\left(b_is_j + b_js_i\right) M_{bs}\nonumber
\\
&+~ \epsilon_{{\bf p}j}\epsilon_{{\bf k}i} M_{\epsilon\epsilon}~-~
\left(q_i \epsilon_{{\bf p}j} + q_j\epsilon_{{\bf k}i} \right)
M_{q\epsilon}~
+~ \left(b_i \epsilon_{{\bf p}j} + b_j\epsilon_{{\bf k}i} \right)
M_{b\epsilon}~
- ~\left(s_i \epsilon_{{\bf p}j} + s_j\epsilon_{{\bf k}i} \right) M_{s\epsilon}
\;]\;,
\end{aligned}$$]{} where $|A|$ is the determinant and $M_{ij}$ are the minors of the following matrix, [$$\begin{aligned}
\label{matrix}
A =
\begin{pmatrix}
\Delta (q^2) & \Delta (bq) & \Delta (sq) & \Delta (\epsilon q)\\
\Delta (q b) & \Delta (b^2) & \Delta (sb) & \Delta (\epsilon b)\\
\Delta (q s) & \Delta (b s) & \Delta (s^2) & \Delta (\epsilon s)\\
\Delta (q \epsilon) & \Delta (b \epsilon) & \Delta (s \epsilon) & \Delta (\epsilon^2)
\end{pmatrix}\;,
\end{aligned}$$]{} with the elements, $\;\Delta (q^2)\equiv\sum_{{\bf p},j}
q_{j}^2\upsilon_{{\bf p},j}^2\;$, $\;\Delta (qb)\equiv \sum_{{\bf p},j}
q_{j}b_{j}\upsilon_{{\bf p},j}^2\;$, $\;\Delta (q\epsilon)\equiv \sum_{{\bf p},j}
q_{j}\epsilon_{{\bf p}j}\upsilon_{{\bf p},j}^2\;$, etc. The sum $\sum_{{\bf p},j}$ denotes integration over momentum ${\bf p}$ and summation over all hadron-resonance species $j$ contained in the model. The first term on the r.h.s. of Eq. (\[corr\]) coincides with the microscopic correlator (\[mcc-gce\]) of the GCE. Note that the presence of the terms containing the single-particle energy, $\epsilon_{{\bf p}j}=\sqrt{{\bf p}^{2}+m_j^{2}}$, in Eq. (\[corr\]) is a consequence of energy conservation. In the CE only charges are conserved, thus the terms containing $\epsilon_{{\bf p}j}$ in Eq. (\[corr\]) are absent, and $A$ in Eq. (\[matrix\]) reduces to a $3\times 3$ matrix (see Ref. [@res]). An important property of the microscopic correlator method is that the particle number fluctuations and correlations in the MCE or CE, although different from those in the GCE, are expressed in terms of quantities calculated within the GCE. The microscopic correlator (\[corr\]) can be used to calculate the primordial particle correlator in the MCE (or in the CE): [$$\begin{aligned}
\langle \Delta N_{i} ~\Delta N_{j}~\rangle_{m.c.e.}
&~= \sum_{{\bf p},{\bf k}}~\langle \Delta n_{{\bf p},i}~\Delta
n_{{\bf k},j}\rangle_{m.c.e.}\;. \label{mc-corr-mce}
\end{aligned}$$]{} The second feature of the MCE (or CE) is the modification of the resonance decay contribution to the fluctuations in comparison with the GCE result (\[corr-GCE\]). In the MCE it reads: [$$\begin{aligned}
\langle \Delta N_i\,\Delta N_j\rangle_{m.c.e.}
~&=\; \langle\Delta N_i^* \Delta N_j^*\rangle_{m.c.e.}
\;+\; \sum_R \langle N_R\rangle\; \langle \Delta n_{i}\; \Delta n_{j}\rangle_R
\;+\; \sum_R \langle\Delta N_i^*\; \Delta N_R\rangle_{m.c.e.}\; \langle n_{j}\rangle_R
\; \nonumber
\\
&+\; \sum_R \langle\Delta N_j^*\;\Delta N_R\rangle_{m.c.e.}\; \langle n_{i}\rangle_R
\;+\; \sum_{R, R'} \langle\Delta N_R\;\Delta N_{R'}\rangle_{m.c.e.}
\; \langle n_{i}\rangle_R\;
\langle n_{j}\rangle_{R^{'}}\;.\label{corr-MCE}
\end{aligned}$$]{} The additional terms in Eq. (\[corr-MCE\]) compared to Eq. (\[corr-GCE\]) are due to the correlations of primordial particles induced by energy and charge conservation in the MCE. Eq. (\[corr-MCE\]) has the same form in the CE [@res] and the MCE; the difference between these two ensembles arises from the different microscopic correlators (\[corr\]). The microscopic correlators of the MCE together with Eq. (\[mc-corr-mce\]) should be used to calculate the correlators $\langle\Delta N_i^* \Delta N_j^*\rangle_{m.c.e.}$, $\langle\Delta N_i^*\; \Delta N_R\rangle_{m.c.e.}$, $\langle\Delta N_j^*\;\Delta N_R\rangle_{m.c.e.}$, and $\langle\Delta N_R\;\Delta N_{R'}\rangle_{m.c.e.}$ entering Eq. (\[corr-MCE\]).
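For concreteness, once the elements $\Delta(q^2)$, $\Delta(qb)$, etc. of the matrix (\[matrix\]) have been computed, the determinant $|A|$ and the minors $M_{ij}$ entering Eq. (\[corr\]) reduce to standard linear algebra. The following NumPy sketch uses made-up symmetric matrix elements rather than actual hadron-resonance gas momentum integrals; it only demonstrates this step:

```python
import numpy as np

def minor(A, i, j):
    """Minor M_ij: determinant of A with row i and column j removed."""
    return np.linalg.det(np.delete(np.delete(A, i, axis=0), j, axis=1))

# Illustrative (hypothetical) values of Delta(q^2), Delta(qb), ...;
# rows/columns are ordered as (q, b, s, epsilon), as in Eq. (matrix).
A = np.array([
    [2.0, 0.3, 0.1, 1.2],
    [0.3, 1.5, 0.2, 0.9],
    [0.1, 0.2, 1.1, 0.4],
    [1.2, 0.9, 0.4, 3.0],
])

det_A = np.linalg.det(A)
M_qq, M_bb, M_ee = minor(A, 0, 0), minor(A, 1, 1), minor(A, 3, 3)
print(round(det_A, 4), round(M_qq, 4), round(M_bb, 4), round(M_ee, 4))
```

Since $A$ is symmetric, $M_{ij}=M_{ji}$, and the cofactor expansion $\sum_{j} (-1)^{i+j} A_{ij} M_{ij} = |A|$ provides a convenient consistency check.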
The microscopic correlators and the scaled variance are connected with the width of the multiplicity distribution. It can be shown [@CLT] that in statistical models the multiplicity distribution derived within any ensemble (e.g. the GCE, CE, and MCE) approaches the Gaussian distribution, $$\label{Gauss}
P_G(N) = \frac{1}{\sqrt{2 \pi ~\omega~\langle N \rangle}} ~\exp \left[ -
\frac{\left(N~-~\langle N \rangle \right)^2}{2 ~\omega ~\langle N \rangle} \right]~,$$ in the large volume limit, i.e., $\langle N \rangle \rightarrow \infty$. The width of this Gaussian, $\sigma = \sqrt{\omega~ \langle N \rangle}$, is determined by the choice of the statistical ensemble, while the thermodynamic equivalence of the statistical ensembles guarantees that the expectation value $\langle N \rangle$ remains the same.
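As a simple numerical check of this limit (the mean value below is chosen arbitrarily and is not taken from the analysis), one can compare a Poisson distribution, for which $\omega=1$, with the Gaussian (\[Gauss\]) already at a moderate $\langle N \rangle$:

```python
import math

def p_gauss(N, mean, omega):
    """Eq. (Gauss): Gaussian with variance omega * <N>."""
    var = omega * mean
    return math.exp(-(N - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def p_poisson(N, mean):
    # Poisson pmf evaluated in log space to avoid overflow at large N
    return math.exp(N * math.log(mean) - mean - math.lgamma(N + 1))

mean = 400.0
for N in (360, 380, 400, 420, 440):
    print(N, round(p_poisson(N, mean), 5), round(p_gauss(N, mean, omega=1.0), 5))
```

Near the mean the two distributions agree to a fraction of a percent already for $\langle N\rangle=400$; the residual difference in the tails reflects the Poisson skewness, which vanishes as $\langle N\rangle^{-1/2}$.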
Multiplicity Fluctuations at Chemical Freeze-out {#sec-HG}
================================================
In this section we present the hadron-resonance gas results for the scaled variances in the GCE, CE, and MCE along the chemical freeze-out line of central Pb+Pb (Au+Au) collisions for the whole energy range from SIS to LHC. Mean hadron multiplicities in heavy-ion collisions at high energies can be approximately fitted by the GCE hadron-resonance gas model. The fit parameters are the volume $V$, temperature $T$, baryon chemical potential $\mu_B$, and the strangeness saturation parameter $\gamma_S$; the latter allows for non-equilibrium strange hadron yields. A recent discussion of the system size and energy dependence of the freeze-out parameters and a comparison of freeze-out criteria can be found in Refs. [@FOP; @FOC]. Several programs are designed for the analysis of particle multiplicities in relativistic heavy-ion collisions within the hadron-resonance gas model, see, e.g., SHARE [@Share], THERMUS [@Thermus], and THERMINATOR [@Therminator]. The set of model parameters, $V,T,\mu_B$, and $\gamma_S$, corresponds to the chemical freeze-out conditions in heavy-ion collisions. The numerical values and the evolution of the model parameters with the collision energy are taken from a previous analysis of multiplicity data. The dependence of $\mu_B$ on the collision energy is parameterized as [@FOC]: $\mu_B \left( \sqrt{s_{NN}} \right) =1.308~\mbox{GeV}\cdot(1+
0.273~ \sqrt{s_{NN}})^{-1}~,$ where the c.m. nucleon-nucleon collision energy, $\sqrt{s_{NN}}$, is taken in GeV units. The system is assumed to be net strangeness free, $S=0$, and to have the charge-to-baryon ratio of the initial colliding nuclei, $Q/B = 0.4$. These two conditions define the strange, $\mu_S$, and electric, $\mu_Q$, chemical potentials of the system. For the chemical freeze-out condition we choose the average energy per particle, $\langle E \rangle/\langle N \rangle = 1~$GeV [@Cl-Red]. Finally, the strangeness saturation factor is parametrized as [@FOP]: $ \gamma_S~ =~ 1 - 0.396~ \exp \left( - ~1.23~ T/\mu_B \right). $ This determines all parameters of the model. In this paper an extended version of the THERMUS framework [@Thermus] is used, with a numerical procedure applied to satisfy the above constraints simultaneously. Other choices of the freeze-out parameters will be discussed in the next section. The $T$, $\mu_B$, and $\gamma_S$ values used for the different c.m. energies are given in Table I. Some further details should be mentioned here. We use quantum statistics but disregard the non-zero widths of resonances. The thermodynamic limit is assumed for the calculations of the scaled variance; thus $\omega$ reaches its limiting value, and the volume $V$ is not a parameter of our model calculations. We also do not consider momentum conservation explicitly, as it can be shown to drop out completely in the thermodynamic limit [@CLT]. Excluded volume corrections due to a hadron hard-core volume are not taken into account; they will be considered elsewhere [@ExclVol]. The standard THERMUS particle table includes all known particles and resonances up to a mass of about 2.5 GeV together with their respective decay channels. Heavy resonances do not always have well-established decay channels. Where necessary, we re-scaled the branching ratios given in THERMUS to unity in order to ensure global charge conservation.
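The two parametrizations quoted above are straightforward to evaluate numerically; the following sketch (the function names are ours, not part of THERMUS) reproduces, e.g., the $\sqrt{s_{NN}}=17.3$ GeV entry of Table I:

```python
import math

def mu_B(sqrt_s_NN):
    """Baryon chemical potential in GeV; sqrt_s_NN in GeV (parametrization of Ref. [FOC])."""
    return 1.308 / (1.0 + 0.273 * sqrt_s_NN)

def gamma_S(T, mu_b):
    """Strangeness saturation factor (parametrization of Ref. [FOP]); T and mu_b in GeV."""
    return 1.0 - 0.396 * math.exp(-1.23 * T / mu_b)

# Top SPS energy: Table I quotes mu_B = 228.6 MeV and gamma_S = 0.830 at T = 157.0 MeV
print(round(1000.0 * mu_B(17.3), 1), "MeV")
print(round(gamma_S(0.157, mu_B(17.3)), 3))
```

As $\mu_B\rightarrow 0$ at the highest energies, $\gamma_S\rightarrow 1$, in accordance with the LHC entry of Table I.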
Usually the resonance decays are considered in a successive manner: each resonance decays into lighter ones until only stable particles are left. Here, however, we implement a different procedure, in which the decay branches are defined such that final states containing only stable hadrons are counted. This distinction does not affect mean quantities, but it is crucial for fluctuations. To allow a comparison with the NA49 data, both strong and electromagnetic decays are taken into account, while weak decays are omitted.
| $\sqrt{s_{NN}}$ [GeV] | $T$ [MeV] | $\mu_B$ [MeV] | $\gamma_S$ | $\omega^-_{g.c.e.}$ | $\omega^-_{c.e.}$ | $\omega^-_{m.c.e.}$ | $\omega^+_{g.c.e.}$ | $\omega^+_{c.e.}$ | $\omega^+_{m.c.e.}$ | $\omega^{ch}_{g.c.e.}$ | $\omega^{ch}_{c.e.}$ | $\omega^{ch}_{m.c.e.}$ |
|------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 2.32 | 64.9 | 800.8 | 0.641 | 1.025 | 0.777 | 0.578 | 1.020 | 0.116 | 0.086 | 1.048 | 0.403 | 0.300 |
| 4.86 | 118.5 | 562.2 | 0.694 | 1.058 | 0.619 | 0.368 | 1.196 | 0.324 | 0.192 | 1.361 | 0.850 | 0.505 |
| 6.27 | 130.7 | 482.4 | 0.716 | 1.069 | 0.640 | 0.346 | 1.203 | 0.390 | 0.211 | 1.431 | 0.969 | 0.524 |
| 7.62 | 138.3 | 424.6 | 0.735 | 1.078 | 0.664 | 0.334 | 1.200 | 0.442 | 0.222 | 1.476 | 1.060 | 0.534 |
| 8.77 | 142.9 | 385.4 | 0.749 | 1.084 | 0.683 | 0.328 | 1.197 | 0.479 | 0.230 | 1.504 | 1.126 | 0.541 |
| 12.3 | 151.5 | 300.1 | 0.787 | 1.097 | 0.729 | 0.320 | 1.185 | 0.563 | 0.247 | 1.557 | 1.271 | 0.558 |
| 17.3 | 157.0 | 228.6 | 0.830 | 1.108 | 0.768 | 0.318 | 1.174 | 0.637 | 0.263 | 1.593 | 1.393 | 0.576 |
| 62.4 | 163.1 | 72.7 | 0.975 | 1.127 | 0.827 | 0.316 | 1.147 | 0.782 | 0.298 | 1.636 | 1.609 | 0.613 |
| 130 | 163.6 | 36.1 | 0.998 | 1.131 | 0.827 | 0.313 | 1.141 | 0.805 | 0.305 | 1.639 | 1.631 | 0.618 |
| 200 | 163.7 | 23.4 | 1.000 | 1.133 | 0.826 | 0.312 | 1.140 | 0.811 | 0.307 | 1.639 | 1.636 | 0.619 |
| 5500 | 163.8 | 0.9 | 1.000 | 1.136 | 0.820 | 0.310 | 1.137 | 0.820 | 0.309 | 1.640 | 1.640 | 0.619 |
: The chemical freeze-out parameters $T$, $\mu_B$, $\gamma_S$, and final state scaled variances in the GCE, CE, and MCE for central Pb+Pb (Au+Au) collisions at different c.m. energies, $\sqrt{s_{NN}}$.[]{data-label="OmegaTableFinal"}
Once a suitable set of thermodynamic parameters is determined for each collision energy, the scaled variances of negatively, positively, and all charged particles can be calculated using Eqs. (\[omega-all\]-\[DNpm\]). Eqs. (\[corr-GCE\]-\[dNidNj\]) lead to the scaled variance in the GCE, whereas Eqs. (\[corr\]-\[corr-MCE\]) correspond to the MCE (or CE) results. The values of $\omega^{-},~\omega^+,~\omega^{ch}$ in the different ensembles are presented in Table I for different collision energies. The values of $\sqrt{s_{NN}}$ quoted in Table I correspond to the beam energies at SIS (2$A$ GeV), AGS (11.6$A$ GeV), and SPS ($20A$, $30A$, $40A$, $80A$, and $158A$ GeV), and to the collider energies at RHIC ($\sqrt{s_{NN}}=62.4$ GeV, $130$ GeV, and $200$ GeV) and the LHC ($\sqrt{s_{NN}}=5500$ GeV).
The mean multiplicities, $\langle N_i\rangle$, used for the calculation of the scaled variances (see Eq. (\[omega-all\])) are given by Eqs. (\[<N>\]) and (\[Ni-gce\]) and remain the same in all three ensembles. The variances in Eq. (\[omega-all\]) are calculated using the corresponding correlators $\langle \Delta N_i \Delta N_j \rangle$ in the GCE, CE, and MCE. For the calculation of the final state correlators, the summation in Eq. (\[corr-MCE\]) should include all resonances $R$ and $R^{\prime}$ which have particles of the species $i$ and/or $j$ in their decay channels. The resulting scaled variances are presented in Table I and shown in Figs. \[omega\_m\]-\[omega\_ch\] as functions of $\sqrt{s_{NN}}$.
At the chemical freeze-out of heavy-ion collisions the Bose effect for pions and the resonance decays are important; thus (see also Ref. [@res]) $\omega^-_{g.c.e.}\cong 1.1$, $\omega^+_{g.c.e.}\cong 1.2$, and $\omega^{ch}_{g.c.e.}\cong 1.4\div 1.6$ at the SPS energies. Note that in the Boltzmann approximation, and neglecting the resonance decay effect, one finds $\omega^-_{g.c.e.}=\omega^+_{g.c.e.}=\omega^{ch}_{g.c.e.}=1$.
Some qualitative features of the results should be mentioned. The effect of Bose and Fermi statistics is seen in the primordial GCE values. At low temperatures most charged hadrons are protons, and Fermi statistics dominates, $\omega^{+}_{g.c.e.}, ~\omega^{ch}_{g.c.e.}<1$. In the limit of high temperature (low $\mu_B/T$), on the other hand, most charged hadrons are pions and the effect of Bose statistics dominates, $\omega_{g.c.e.}^{\pm},~\omega_{g.c.e.}^{ch}>1$. Along the chemical freeze-out line, $\omega_{g.c.e.}^-$ is always slightly larger than 1, as $\pi^-$ mesons dominate at both low and high temperatures. The bump in $\omega^+_{g.c.e.}$ for final state particles seen in Fig. \[omega\_p\] at small collision energies is due to the correlated production of a proton and a $\pi^+$ meson in $\Delta^{++}$ decays. This single-resonance contribution dominates $\omega^+_{g.c.e.}$ at small collision energies (small temperatures), but becomes relatively unimportant at high collision energies.
A minimum in $\omega_{c.e.}^{-}$ for final particles is seen in Fig. \[omega\_m\]. It results from two competing effects. As the number of negatively charged particles is relatively small at low collision energies, $\langle N_-\rangle \ll \langle N_+\rangle$, both the CE suppression and the resonance decay effect are small there. With increasing $\sqrt{s_{NN}}$, the CE effect alone leads to a decrease of $\omega^-_{c.e.}$, while the resonance decay effect alone leads to an increase of $\omega^-_{c.e.}$. The combination of the two, the CE suppression and the resonance enhancement, produces a minimum of $\omega^-_{c.e.}$. As expected, $\omega_{m.c.e.}<\omega_{c.e.}$, since energy conservation further suppresses the particle number fluctuations. A new feature of the MCE is the additional suppression of the fluctuations after resonance decays. This is discussed in Appendix A.
Comparison with NA49 Data
=========================
Centrality Selection
--------------------
The fluctuations in nucleus-nucleus collisions are studied on an event-by-event basis: a given quantity is measured for each collision, and the distribution of this quantity is obtained for a selected sample of these collisions. It has been found that fluctuations in the number of nucleon participants give the dominant contribution to hadron multiplicity fluctuations. In the language of statistical models, fluctuations of the number of nucleon participants correspond to volume fluctuations caused by variations in the collision geometry. Mean hadron multiplicities are proportional (in the large volume limit) to the volume; hence, volume fluctuations translate directly into multiplicity fluctuations. A comparison between data and predictions of statistical models should therefore be performed for results which correspond to collisions with a fixed number of nucleon participants.
Due to experimental limitations, in fixed-target experiments (e.g. NA49 at the CERN SPS) it is only possible to measure approximately the number of participants of the projectile nucleus, $N_P^{proj}$. In NA49 this is done by measuring the energy deposited in a downstream Veto calorimeter. A large fraction of this energy is due to projectile spectators, $N_S^{proj}$. Using baryon number conservation for the projectile nucleus ($A = N_P^{proj}+N_S^{proj}$), the number of projectile participants can be estimated. However, a fraction of non-spectator particles, mostly protons and neutrons, also contributes to the Veto energy [@NA49]. Furthermore, the total number of nucleons participating in the collision can fluctuate considerably even for collisions with a fixed number of projectile participants (see Ref. [@Voka]), due to fluctuations of the number of target participants. The consequences of this asymmetry in the event selection depend on the dynamics of nucleus-nucleus collisions (see Ref. [@MGMG] for details). Still, for the most central Pb+Pb collisions selected by the number of projectile participants, the resulting increase of the scaled variance due to target participant fluctuations is estimated to be smaller than a few % [@MGMG]. In the following our predictions are compared with the preliminary NA49 data on the 1% most central Pb+Pb collisions at 20$A$-158$A$ GeV [@NA49]. The number of projectile participants for these collisions is estimated to be larger than 193.
Modelling of Acceptance
-----------------------
In the experimental study of nuclear collisions at high energies only a fraction of all produced particles is registered. Thus, the multiplicity distribution of the measured particles is expected to differ from the distribution of all produced particles. Let us consider the production of $N$ particles with probability $P_{4\pi}(N)$ in the full momentum space. If particle detection is uncorrelated, i.e., the detection of one particle has no influence on the probability of detecting another one, the binomial distribution can be used. For a fixed number of produced particles $N$, the multiplicity distribution of accepted particles reads: $$\begin{aligned}
\label{bin}
P_{acc}(n,N)~=~q^n(1-q)^{N-n}~\cdot \frac{N!}{n!(N-n)!}~,\end{aligned}$$ where $n\le N$ and $q$ is the probability of a single particle to be accepted (i.e. it is the ratio between mean multiplicity of accepted and all hadrons). Consequently one gets, $\overline{n}=q~N~,~~
\overline{n^2}-\overline{n}^2=q(1-q)N~$, where $\overline{n^k}\equiv\sum_{n=0}^{N} n^k P_{acc}(n,N)~$, for $k=1,2,\ldots$ . The probability distribution $P(n)$ of the accepted particles reads: [$$\begin{aligned}
\label{W-acc}
P(n)~=~\sum_{N=n}^{\infty}P_{4\pi}(N)~P_{acc}(n,N)~.
\end{aligned}$$]{} The first two moments of the distribution $P(n)$ are calculated as: $$\begin{aligned}
\label{ac1}
\langle n \rangle & ~\equiv~\sum
_{N=0}^{\infty}P_{4\pi}(N)~ \sum_{n=0}^{N}
n~P_{acc}(n,N)~=~q \cdot \langle N\rangle~,\\
\langle n^2 \rangle & ~\equiv~\sum
_{N=0}^{\infty}P_{4\pi}(N)~ \sum_{n=0}^{N}
n^2~P_{acc}(n,N)~=~q^2\cdot\langle N^2\rangle~
+~q (1-q) \cdot \langle N \rangle ~,\label{ac2}\end{aligned}$$ where ($k=1, 2,\ldots$) $$\begin{aligned}
\label{ac3}
\langle N^k \rangle ~ \equiv~ \sum_{N=0}^{\infty}
N^k~P_{4\pi}(N)~.\end{aligned}$$ Finally, the scaled variance for the accepted particles can be obtained: $$\begin{aligned}
\label{ac4}
\omega~\equiv~\frac{\langle n^2 \rangle~-~\langle n \rangle ^2}{\langle n\rangle}~ =~1~-~q~ +q\cdot \omega_{4\pi}~,\end{aligned}$$ where $\omega_{4\pi}$ is the scaled variance of the $P_{4\pi}(N)$ distribution. The limiting behavior of $\omega$ agrees with the expectations. In the large acceptance limit ($q \approx 1$) the distribution of measured particles approaches the distribution in the full acceptance. For a very small acceptance ($q \approx 0$) the measured distribution approaches the Poisson one independent of the shape of the distribution in the full acceptance.
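Eq. (\[ac4\]) can also be verified by a small Monte Carlo experiment in which $4\pi$ multiplicities are drawn from toy distributions with known $\omega_{4\pi}$ and then thinned binomially. The toy distributions below are illustrative choices, not model output; the value $q=0.131$ is one of the NA49 acceptance probabilities quoted later:

```python
import numpy as np

rng = np.random.default_rng(7)
n_events = 200_000
q = 0.131   # single-particle acceptance probability

def scaled_variance(n):
    return n.var() / n.mean()

# (a) omega_4pi = 2: particles emitted in pairs, N = 2 * Poisson(50)
N = 2 * rng.poisson(50.0, n_events)
n_acc = rng.binomial(N, q)          # uncorrelated acceptance, Eq. (bin)
print(scaled_variance(n_acc), 1.0 - q + q * 2.0)

# (b) omega_4pi = 0.5: N = Binomial(200, 0.5)
N = rng.binomial(200, 0.5, n_events)
n_acc = rng.binomial(N, q)
print(scaled_variance(n_acc), 1.0 - q + q * 0.5)
```

In both cases the thinned sample reproduces $\omega = 1 - q + q\,\omega_{4\pi}$; in particular, $\omega_{4\pi}<1$ stays below unity and $\omega_{4\pi}>1$ stays above it for any $0<q\le 1$.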
Model results on multiplicity fluctuations presented in Sec. III correspond to the ideal situation in which all final hadrons are accepted by the detector. For a comparison with experimental data the limited detector acceptance has to be taken into account. Even if primordial particles at chemical freeze-out are only weakly correlated in momentum space, this is no longer valid for final state particles, as resonance decays lead to momentum correlations among final hadrons. In general, in statistical models the correlations in momentum space are caused by resonance decays, quantum statistics, and the energy-momentum conservation law implied in the MCE. In this paper we neglect these correlations and use Eqs. (\[W-acc\]) and (\[ac4\]). This may be approximately valid for $\omega^+$ and $\omega^-$, as most decay channels contain only one positively (or negatively) charged particle, but it is certainly much worse for $\omega^{ch}$, for instance due to decays of neutral resonances into two charged particles. In order to limit the correlations caused by resonance decays, we focus on the results for negatively and positively charged hadrons. A discussion of the effect of resonance decays on the acceptance procedure and a comparison with the data for $\omega^{ch}$ are presented in Appendix B. An improved modelling of the limited experimental acceptance will be the subject of a future study.
Comparison with the NA49 Data for $\omega^-$ and $\omega^+$
-----------------------------------------------------------
Fig. \[stat-acc\] presents the scaled variances $\omega^-$ and $\omega^+$ calculated with Eq. (\[ac4\]).
The hadron-resonance gas calculations in the GCE, CE, and MCE shown in Figs. 1 and 2 are used for $\omega_{4\pi}^{\pm}$. The NA49 acceptance used for the fluctuation measurements is located in the forward hemisphere ($1<y(\pi)<y_{beam}$, where $y(\pi)$ is the hadron rapidity calculated assuming the pion mass and shifted to the collision c.m. system [@NA49]). The acceptance probabilities for positively and negatively charged hadrons are approximately equal, $q^+\approx q^-$, with numerical values at the different SPS energies of $q^{\pm}=0.038,~0.063,~0.085,~ 0.131,~0.163$ at $\sqrt{s_{NN}}=6.27,~7.62,~8.77,~12.3,~17.3$ GeV, respectively. Eq. (\[ac4\]) has the following property: if $\omega_{4\pi}$ is smaller or larger than 1, the same inequality remains valid for $\omega$ at any value of $0<q\le 1$. Thus there is a strong qualitative difference between the predictions of the statistical ensembles, valid for any freeze-out conditions and experimental acceptances: the CE and MCE give $\omega_{m.c.e.}^{\pm}<\omega^{\pm}_{c.e.}<1$, whereas the GCE gives $\omega_{g.c.e.}^{\pm}> 1$.
From Fig. \[stat-acc\] it follows that the NA49 data for $\omega^{\pm}$, extracted from the 1% most central Pb+Pb collisions, are best described at all SPS energies by the hadron-resonance gas model calculated within the MCE. The data reveal an even stronger suppression of the particle number fluctuations.
Dependence on the Freeze-out Parameters
---------------------------------------
The relation $E/N=1$ GeV [@Cl-Red] was used in our calculations to define the freeze-out conditions. It does not give the best fit to the multiplicity data at each specific energy. In this subsection we check the dependence of the statistical model results for the scaled variances on the choice of the freeze-out parameters. For this purpose we compare the results obtained with the parameters used in this paper (model A) with two other sets of freeze-out parameters at the SPS energies: model B [@FOP] and model C [@pbm]. The corresponding values of $T$ and $\mu_B$ are presented in Fig. 5.
\[T\_muB\] ![Chemical freeze-out points in the $T$-$\mu_B$ plane for central Pb+Pb collisions. The solid line shows $\langle E \rangle /\langle N \rangle =
1$ GeV, the squares are from our parametrization (model A) and denote SPS beam energies from $20A$ GeV (right) to $158A$ GeV (left), and the full and open circles are the best-fit parameters from Ref. [@FOP] (model B) and Ref. [@pbm] (model C), respectively. ](fig5.eps "fig:")
\[4piModelTable\]
| $\sqrt{s_{NN}}$ [GeV] | $\omega^-_{m.c.e.}$ (A) | $\omega^-_{m.c.e.}$ (B) | $\omega^-_{m.c.e.}$ (C) | $\omega^+_{m.c.e.}$ (A) | $\omega^+_{m.c.e.}$ (B) | $\omega^+_{m.c.e.}$ (C) |
|------|-------|-------|-------|-------|-------|-------|
| 6.27 | 0.346 | 0.345 | 0.361 | 0.211 | 0.214 | 0.210 |
| 7.62 | 0.334 | 0.334 | 0.347 | 0.222 | 0.225 | 0.221 |
| 8.77 | 0.328 | 0.330 | 0.330 | 0.230 | 0.232 | 0.236 |
| 12.3 | 0.320 | 0.318 | 0.325 | 0.247 | 0.249 | 0.248 |
| 17.3 | 0.318 | 0.317 | 0.321 | 0.263 | 0.264 | 0.259 |

  : Final state scaled variances calculated in the MCE for the full $4\pi$ acceptance using the freeze-out conditions A, B, and C.
The scaled variances $\omega_{m.c.e.}^-$ and $\omega_{m.c.e.}^+$ calculated in the full phase space within the MCE vary by less than 1% when the parameter set is changed. In the NA49 acceptance the difference is almost completely washed out. The differences are somewhat stronger in the GCE and CE, but are not considered here.
Comparison of Distributions
---------------------------
As discussed in Section II, the multiplicity distribution in statistical models in the full phase space approaches a normal distribution in the large volume limit. If the particle detection is modelled by the simple procedure presented in Section IV B, then the results (\[ac1\]-\[ac4\]) are valid for any form of the full acceptance distribution $P_{4\pi} (N)$. In the following we discuss the properties of the multiplicity distribution in the limited acceptance, $P(n)$ (\[W-acc\]), and compare the statistical model results in different ensembles with the data on negatively and positively charged hadrons.
For the Poisson distribution in the full acceptance, the summation in Eq. (\[W-acc\]) leads again to a Poisson distribution in the limited acceptance, with the expectation value $\langle n \rangle = q\langle N \rangle$: $$P(n)~ =~ \sum_{N=n}^{\infty}\frac{\langle N\rangle^N \exp[-\langle N\rangle]}{N!}~\cdot~ \frac{q^n (1-q)^{N-n}~
N!}{n!(N-n)!}~=~
\exp[- q ~\langle N \rangle] ~\frac{\left(q~
\langle N \rangle \right)^n}{n!} ~.$$ However, the same does not hold when the summation in Eq. (\[W-acc\]) is applied to other forms of the distribution $P_{4\pi}(N)$. In particular, the normal distribution (\[Gauss\]) is transformed into $$\label{model_acc}
P(n)~ = ~\sum_{N=n}^{\infty} P_{G}(N)~P_{acc}(n,N)~,$$ which is no longer Gaussian. It is enough to note that a Gaussian is symmetric around its mean value, while the distribution (\[model\_acc\]) is not.
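The asymmetry can be made explicit by folding a Gaussian $P_{4\pi}$ through the binomial kernel of Eq. (\[W-acc\]) and computing the skewness of the resulting $P(n)$. The parameter values in the following sketch ($\langle N\rangle=60$, $\omega=0.6$, $q=0.2$) are chosen purely for illustration:

```python
import math

def binom_pmf(n, N, q):
    return math.comb(N, n) * q ** n * (1.0 - q) ** (N - n)

def fold(p4pi, q, n_max, N_max):
    """Eq. (W-acc): P(n) = sum_N P_4pi(N) P_acc(n, N)."""
    return [sum(p4pi(N) * binom_pmf(n, N, q) for N in range(n, N_max))
            for n in range(n_max)]

mean, omega, q = 60.0, 0.6, 0.2

def p_gauss(N):
    var = omega * mean
    return math.exp(-(N - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

P = fold(p_gauss, q, 40, 200)
m1 = sum(n * p for n, p in enumerate(P))
m2 = sum(n ** 2 * p for n, p in enumerate(P))
m3 = sum(n ** 3 * p for n, p in enumerate(P))
var = m2 - m1 ** 2
skew = (m3 - 3.0 * m1 * var - m1 ** 3) / var ** 1.5
print(round(m1, 2), round(var / m1, 3), round(skew, 3))
```

The folded distribution has mean $q\langle N\rangle$ and scaled variance $1-q+q\,\omega$, in accordance with Eqs. (\[ac1\]) and (\[ac4\]), but a positive skewness, i.e., it is indeed no longer symmetric.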
The average number of particles accepted by the detector is: $$\langle n \rangle~ \equiv \sum_{n=0}^{\infty}n~P(n)~=~ q ~\langle N \rangle \equiv q ~\rho~ V~,$$ where $\rho \equiv \langle N \rangle/V$ is the corresponding particle density. Hence, the volume can be determined as $$\label{FindVolume}
V ~= ~\frac{\langle N \rangle}{q ~ \rho}~.$$ In the following, for each beam energy we adjust the volume to match the condition of Eq. (\[FindVolume\]) for negatively ($V^-$) and positively ($V^+$) charged yields separately. Note that the values of the volume are about $10-20$% larger than the ones in Refs. [@FOP; @pbm], which were obtained using a much less stringent centrality selection (here only the $1$% most central data are analyzed). We find that the $V^-$ and $V^+$ volume parameters deviate from each other by less than 10%. Deviations of a similar magnitude are observed between the data on hadron yield systematics and the hadron-resonance gas model fits. Here we are only interested in the shape of the multiplicity distributions and do not attempt to optimize the model to fit the yields of positively and negatively charged particles simultaneously. As a typical example, the multiplicity distributions for negatively and positively charged hadrons produced in central Pb+Pb collisions at 40$A$ GeV are shown in Fig. \[data1\].
![ The multiplicity distributions for negatively (left) and positively (right) charged hadrons produced in central (1%) Pb+Pb collisions at 40$A$ GeV in the NA49 acceptance [@NA49]. The preliminary experimental data (solid points) of NA49 [@NA49] are compared with the prediction of the hadron-resonance gas model obtained within different statistical ensembles, the GCE (dotted lines), the CE (dashed-dotted lines) and the MCE (solid lines). []{data-label="data1"}](fig6a.eps "fig:") ![ The multiplicity distributions for negatively (left) and positively (right) charged hadrons produced in central (1%) Pb+Pb collisions at 40$A$ GeV in the NA49 acceptance [@NA49]. The preliminary experimental data (solid points) of NA49 [@NA49] are compared with the prediction of the hadron-resonance gas model obtained within different statistical ensembles, the GCE (dotted lines), the CE (dashed-dotted lines) and the MCE (solid lines). []{data-label="data1"}](fig6b.eps "fig:")
The bell-like shape of the measured spectra is well reproduced by the statistical model. In the semi-logarithmic plot the differences between the data and the model lines obtained within the different statistical ensembles are hardly visible. In order to allow for a detailed comparison of the distributions, the ratios of the data and the model distributions to the Poisson one are presented in Fig. \[data2\].
\[DistNegPlot\] ![ The ratio of the multiplicity distributions to Poisson ones for negatively (upper panel) and positively (lower panel) charged hadrons produced in central (1%) Pb+Pb collisions at 20$A$ GeV, 30$A$ GeV, 40$A$ GeV, 80$A$ GeV, and 158$A$ GeV (from left to right) in the NA49 acceptance [@NA49]. The preliminary experimental data (solid points) of NA49 [@NA49] are compared with the prediction of the hadron-resonance gas model obtained within different statistical ensembles, the GCE (dotted lines), the CE (dashed-dotted lines), and the MCE (solid lines). []{data-label="data2"}](fig7a.eps "fig:") ![ The ratio of the multiplicity distributions to Poisson ones for negatively (upper panel) and positively (lower panel) charged hadrons produced in central (1%) Pb+Pb collisions at 20$A$ GeV, 30$A$ GeV, 40$A$ GeV, 80$A$ GeV, and 158$A$ GeV (from left to right) in the NA49 acceptance [@NA49]. The preliminary experimental data (solid points) of NA49 [@NA49] are compared with the prediction of the hadron-resonance gas model obtained within different statistical ensembles, the GCE (dotted lines), the CE (dashed-dotted lines), and the MCE (solid lines). []{data-label="data2"}](fig7b.eps "fig:")
The results for negatively and positively charged hadrons at 20$A$ GeV, 30$A$ GeV, 40$A$ GeV, 80$A$ GeV, and 158$A$ GeV are shown separately. The convex shape of the data reflects the fact that the measured distributions are significantly narrower than the Poisson one. This suppression of fluctuations is observed for both charges at all five SPS energies, and it is consistent with the results for the scaled variance shown and discussed previously. The GCE hadron-resonance gas distributions are broader than the corresponding Poisson distribution, and their ratio has a concave shape. The introduction of quantum number conservation laws (the CE results) leads to a convex shape and significantly improves the agreement with the data. A further improvement is obtained by additionally introducing the energy conservation law (the MCE results). The measured spectra agree surprisingly well with the MCE predictions.
Discussion
----------
The high resolution of the NA49 experimental data makes it possible to distinguish between the multiplicity fluctuations expected in the hadron-resonance gas model for different statistical ensembles. The measured spectra clearly favor the predictions of the micro-canonical ensemble. A much worse description is obtained with the canonical ensemble, and a strong disagreement is seen for the grand canonical one. All calculations are performed in the thermodynamic limit, which is an appropriate approximation for the considered reactions. These results should thus be treated as a first observation of the recently predicted [@CE] suppression of multiplicity fluctuations due to conservation laws in relativistic gases in the large volume limit.
The validity of the micro-canonical description is surprising even within the framework of the statistical hadronization model used in this paper. This is because in the calculations the parameters of the model (e.g. energy, volume, temperature, and chemical potentials) were assumed to be the same in all collisions, whereas significant event-by-event fluctuations of these parameters may be expected. For instance, only a part of the total energy is available for the hadronization process. This part should be used in the hadron-resonance gas calculations, while the remaining energy is contained in the collective motion of matter. The ratio between the hadronization and collective energies may vary from collision to collision and consequently increase the multiplicity fluctuations.
The agreement between the data and the MCE predictions is even more surprising when processes beyond the statistical hadronization model are considered. Examples are jet and mini-jet production, heavy cluster formation, and effects related to the phase transition or to instabilities of the quark-gluon plasma. Naively, all of them are expected to increase multiplicity fluctuations and thus to lead to a disagreement between the data and the MCE predictions. A comparison of the data with models which include these processes is obviously needed for significant conclusions. Here we consider only one example.
In Ref. [@ood] a non-monotonic dependence of the relative fluctuations, [$$\begin{aligned}
\label{Re}
R_e~=~\frac{(\delta S)^2/S^2}{(\delta E)^2/E^2}~,
\end{aligned}$$]{} has been suggested as a signal for the onset of deconfinement. Here $S$ and $E$ denote the system entropy and the thermalized energy at the early stage of the collision, respectively. This prediction assumes event-by-event fluctuations of the thermalized energy, which result in fluctuations of the produced entropy. The ratio of the entropy to energy fluctuations (\[Re\]) depends on the equation of state and thus on the form of the created matter. The ratio $R_e$ is approximately independent of collision energy and equals about $0.6$ in the pure hadron or quark-gluon plasma phases. An increase of $R_e$ up to its maximum value, $R_e\approx 0.8$, is expected [@ood] in the transition domain. Anomalies in the energy dependence of the hadron production properties measured in central Pb+Pb collisions [@NA49-1] indicate [@mg2] that the transition domain is located at the low CERN SPS energies, from 30$A$ to 80$A$ GeV. Thus an anomaly in the energy dependence of multiplicity fluctuations is expected in the same energy domain [@ood].
In any case the fluctuations of the thermalized energy will lead to additional multiplicity fluctuations (“dynamical fluctuations”). The resulting contribution to the scaled variance can be calculated to be: [$$\begin{aligned}
\label{dyn}
\omega^{-}_{dyn}~=~R_e~\langle n_-\rangle~ \frac{(\delta
E)^2}{E^2}~.
\end{aligned}$$]{} The above assumes that the mean particle multiplicity is proportional to the early stage entropy. In order to perform a quantitative estimate of the effect, the fluctuations of the energy of the produced particles were calculated within the HSD [@hsd] and UrQMD [@urqmd] string-hadronic models. For central (impact parameter zero) Pb+Pb collisions in the energy range from 30$A$ to 80$A$ GeV we have obtained $\sqrt{(\delta E)^2}/E \leq 0.03$. The number of accepted negatively charged particles is $\langle n_-\rangle\approx 30$ at 40$A$ GeV (see Fig. \[data2\]). Thus, an increase of $\omega$ due to the “dynamical fluctuations” estimated by Eq. (\[dyn\]) is $\omega^{-}_{dyn} \leq 0.02$ for $R_e=0.6$, which is smaller than the experimental error of the preliminary NA49 data of about 0.05 [@NA49]. In particular, an additional increase due to the phase transition, $\Delta \omega^{-}_{dyn}\approx 0.005$ for $R_e=0.8$, can hardly be observed.
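As a sanity check, the numbers quoted above follow directly from Eq. (\[dyn\]); the short sketch below (purely illustrative, with the input values taken from the text) reproduces them.

```python
# Illustrative estimate of the "dynamical" contribution to the scaled
# variance, Eq. (dyn): omega_dyn = R_e * <n_-> * ((delta E)^2 / E^2).
# Input values are those quoted in the text (HSD/UrQMD energy
# fluctuations and the NA49 acceptance at 40A GeV).

def omega_dyn(R_e, mean_n_minus, rel_energy_fluct):
    """Scaled-variance increase from event-by-event energy fluctuations."""
    return R_e * mean_n_minus * rel_energy_fluct ** 2

n_minus = 30        # accepted negatively charged particles at 40A GeV
dE_over_E = 0.03    # upper estimate from the HSD and UrQMD models

w_hadron = omega_dyn(0.6, n_minus, dE_over_E)  # pure phases, R_e = 0.6
w_mixed = omega_dyn(0.8, n_minus, dE_over_E)   # transition domain, R_e = 0.8

print(round(w_hadron, 4))            # 0.0162, below the quoted bound of 0.02
print(round(w_mixed - w_hadron, 4))  # 0.0054, the extra increase for R_e = 0.8
```

Both numbers sit well below the experimental error of about 0.05, consistent with the conclusion drawn in the text.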
In conclusion, the predicted [@ood] increase of the scaled variance of the multiplicity distribution due to the onset of deconfinement is too small to be observed in the current data. These data neither confirm nor refute the interpretation [@mg2] of the measured [@NA49-1] anomalies in the energy dependence of other hadron production properties as being due to the onset of deconfinement at the CERN SPS energies.
More differential data on multiplicity fluctuations and correlations are required for further tests of the validity of the statistical models and for the observation of possible signals of the phase transition. The experimental sensitivity to the enhanced fluctuations due to the onset of deconfinement can be increased by enlarging the acceptance; for example, $\omega_{dyn}^-\propto \langle n_-\rangle\propto q$. The present acceptance of NA49 at 40$A$ GeV is about $q\cong 0.06$, and it can be increased up to about $q\cong 0.5$ in future studies. This will give a chance to observe, for example, the dynamical fluctuations discussed in Ref. [@ood]. The observation of the MCE suppression of the multiplicity fluctuations by NA49 was possible only because of the selection of a sample of collisions without projectile spectators. This selection seems to be possible only in fixed target experiments. In the collider kinematics, nuclear fragments which follow the beam direction cannot be measured.
On the model side, further study is needed to improve the description of the effect of the limited experimental acceptance. Furthermore, a finite volume of hadrons is expected to lead to a reduction of the particle number fluctuations [@ExclVol]. A quantitative estimate of this effect is needed.
Summary
=======
The hadron multiplicity fluctuations in relativistic nucleus-nucleus collisions have been calculated in the statistical hadron-resonance gas model within the grand canonical, canonical, and micro-canonical ensembles in the thermodynamical limit. The microscopic correlator method has been extended to include three conserved charges – baryon number, electric charge, and strangeness – in the canonical ensemble, and additionally energy conservation in the micro-canonical ensemble. Analytical formulas are used for the resonance decay contributions to the correlations and fluctuations. The scaled variances of negatively, positively, and all charged particles for primordial and final state hadrons have been calculated at the chemical freeze-out in central Pb+Pb (Au+Au) collisions for different collision energies from SIS to LHC. A comparison of the multiplicity distributions and the scaled variances with the preliminary NA49 data on Pb+Pb collisions at the SPS energies has been performed for samples of about 1% of the most central collisions, selected by the number of projectile participants. This selection makes it possible to eliminate the effect of fluctuations of the number of nucleon participants. The effect of the limited experimental acceptance was taken into account by using the uncorrelated particle approximation.
The measured multiplicity distributions are significantly narrower than the Poisson one and make it possible to distinguish between the model results derived within the different statistical ensembles. The data agree surprisingly well with the expectations for the micro-canonical ensemble and exclude the canonical and grand-canonical ensembles. Thus this is a first experimental observation of the predicted suppression of multiplicity fluctuations in relativistic gases in the thermodynamical limit due to conservation laws.
We would like to thank F. Becattini, E.L. Bratkovskaya, K.A. Bugaev, A.I. Bugrij, W. Greiner, A.P. Kostyuk, I.N. Mishustin, St. Mrówczyński, L.M. Satarov, H. Stöcker, and O.S. Zozulya for numerous discussions. We thank E.V. Begun for help in the preparation of the manuscript. The work was supported in part by US Civilian Research and Development Foundation (CRDF) Cooperative Grants Program, Project Agreement UKP1-2613-KV-04, Ukraine-Hungary cooperative project No. M/101-2005, and Virtual Institute on Strongly Interacting Matter (VI-146) of Helmholtz Association, Germany.
Resonance decays in the MCE
===========================
A comparison of the primordial scaled variances with those for final hadrons demonstrates that the fluctuations generally increase after resonance decays in the GCE and CE (see more details in Ref. [@res]), but they decrease in the MCE. In order to understand this effect let us consider a toy model $(\pi^+,\pi^-,\rho^0)$-system with a zero net charge, $Q=0$. Due to this last condition there is a full symmetry between positively and negatively charged pions, and thus $\omega^+=\omega^-$. Each $\rho^0$-meson decays into a $\pi^+\pi^-$-pair with 100% probability, i.e. $b_1^{\rho}=1$ and $\langle
n_-\rangle_{\rho^0}=1$. The predictions of the CE and MCE for $(\pi^+,\pi^-,\rho^0)$-system are shown in Fig. \[pi-rho\].
One observes that $\rho^0$ decays lead to an enhancement of $\omega^-$ in the CE, and to its suppression in the MCE. In the CE one finds from Eqs. (\[<N>\]) and (\[corr-MCE\]) for $(\pi^+,\pi^-,\rho^0)$-system: [$$\begin{aligned}
\langle N_-\rangle
~ = ~ \langle
N_{\pi^-}^* \rangle~+~
\langle N_{\rho^0}\rangle\;,~~~~
\langle \left(\Delta N_-\right)^2\rangle_{c.e.}
~ = ~ \langle\left(\Delta
N_{\pi^-}^*\right)^2\rangle_{c.e.}~+~
\langle\left(\Delta N_{\rho^0}\right)^2\rangle_{c.e.}\;.
\label{pi-rho-CE}
\end{aligned}$$]{} Note that the average multiplicities, $\langle N_{\pi^-}^* \rangle$ and $\langle N_{\rho^0}\rangle$, remain the same in the CE, and the MCE. From Eq. (\[pi-rho-CE\]) it follows: [$$\begin{aligned}
\omega_{c.e.}^-~=~\omega_{c.e.}^{-*}~\left[\frac{\langle
N_{\pi^-}^*\rangle
~+~(\omega_{c.e.}^{\rho^0}/\omega_{c.e.}^{-*})\cdot \langle
N_{\rho^0}\rangle}{\langle N_-\rangle}\right]~.
\label{omega-pi-rho-CE}
\end{aligned}$$]{} The quantity $\omega_{c.e.}^{-*}$ is substantially smaller than 1 due to the strong CE suppression (see Fig. \[pi-rho\], left). On the other hand, there is no CE suppression of the $\rho^0$ fluctuations, $\omega_{c.e.}^{\rho^0}=\omega_{g.c.e.}^{\rho^0}\cong 1$. Therefore, one finds that $\omega_{c.e.}^{\rho^0}/\omega_{c.e.}^{-*}>1$, and from Eq. (\[omega-pi-rho-CE\]) it immediately follows that $\omega_{c.e.}^-~>~\omega_{c.e.}^{-*}$. Note that $\omega_{g.c.e.}^{-*}\cong\omega_{g.c.e.}^{\rho^0}\cong 1$; thus there is no enhancement of $\omega_{g.c.e.}^-$ due to $\rho^0$ decays. In the MCE the multiplicity $\langle N_{-}\rangle$ remains the same as in Eq. (\[pi-rho-CE\]). The variance $\langle \left(\Delta N_-\right)^2\rangle_{m.c.e.}$ is, however, modified because of the anti-correlation between the primordial $\pi^{-*}$ and $\rho^0$ mesons in the MCE. From Eq. (\[corr-MCE\]) one finds for our $(\pi^+,\pi^-,\rho^0)$-system, [$$\begin{aligned}
\langle \left(\Delta N_-\right)^2\rangle_{m.c.e.}
~ = ~ \langle\left(\Delta
N_{\pi^-}^*\right)^2\rangle_{m.c.e.}~+~
\langle\left(\Delta N_{\rho^0}\right)^2\rangle_{m.c.e.}~+~
2~ \langle\Delta N_{\pi^-}^*~ \Delta N_{\rho^0}\rangle_{m.c.e.}~.
\label{pi-rho-MCE}
\end{aligned}$$]{} The last term in Eq. (\[pi-rho-MCE\]) appears due to energy conservation in the MCE (it is absent in the CE). This term is negative, which means that an anti-correlation occurs: a large (small) number of primordial pions, $\Delta N_{\pi^-}^*>0~ (<0)$, requires a small (large) number of $\rho^0$-mesons, $\Delta N_{\rho^0} <0~(>0)$, to keep the total energy fixed. This anti-correlation between the primordial pions and $\rho^0$-mesons makes the $\pi^-$ number fluctuations smaller after resonance decays, i.e. $\omega_{m.c.e.}^- < \omega_{m.c.e.}^{-*}$, as depicted in Fig. \[pi-rho\] (right). The same mechanism works in the MCE for the full hadron-resonance gas.
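The anti-correlation mechanism can be made explicit in an even cruder, hypothetical discretization of the $(\pi^+,\pi^-,\rho^0)$ toy model: assign each pion an energy of 1 and each $\rho^0$ an energy of 2 (arbitrary units), impose $Q=0$ and a fixed total energy, and weight all allowed microstates equally. The flat weights stand in for the actual MCE weights, so the sketch only illustrates the mechanism, not the quantitative suppression.

```python
# Toy illustration of the MCE anti-correlation induced by resonance
# decays.  Deliberately crude, hypothetical discretization of the
# (pi+, pi-, rho0) system: each pion carries energy 1, each rho0
# energy 2 (arbitrary units), the net charge is zero (N_pi+ = N_pi-),
# and all microstates with fixed total energy E get equal weight.

def scaled_variance(values, weights):
    norm = sum(weights)
    mean = sum(v * w for v, w in zip(values, weights)) / norm
    var = sum((v - mean) ** 2 * w for v, w in zip(values, weights)) / norm
    return var / mean

E = 20  # fixed total energy  =>  N_pi* + N_rho = E/2 in every microstate
states = [(n_pi, E // 2 - n_pi) for n_pi in range(E // 2 + 1)]
weights = [1.0] * len(states)

primordial = [n_pi for n_pi, _ in states]          # pi- before decays
final = [n_pi + n_rho for n_pi, n_rho in states]   # each rho0 -> pi+ pi-

w_primordial = scaled_variance(primordial, weights)
w_final = scaled_variance(final, weights)
# Energy conservation forces N_pi* + N_rho = E/2, so the final pi-
# multiplicity is identical in every microstate and its fluctuations
# vanish completely in this limit.
print(w_primordial, w_final)  # 2.0 0.0
```

In this extreme limit the suppression is total ($\omega^-=0$ after decays); in the full hadron-resonance gas the anti-correlation is only partial, so $\omega_{m.c.e.}^-$ is reduced but remains finite.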
Acceptance effect for all charged particles
===========================================
In order to better understand the influence of the momentum correlations due to resonance decays on the multiplicity fluctuations, we define a toy model. Let us assume that two kinds of particles are produced. The first kind ($N$) is either stable or originates from decay channels which contain only one particle of the type under investigation, while the second kind ($M$) produces two particles of the selected type. This is described by the (unknown) probability distribution $P_{4\pi} \left(N,M\right)$. We further assume that for both types of particles, $N$ and $M$, separately the acceptance procedure defined by Eq. (\[bin\]) is applicable. We also assume that once a particle of type $M$ is inside the experimental acceptance, both of its decay products will be as well. Hence, the average number of observed particles will be: $$\langle n \rangle ~=~ \sum_{N=0}^{\infty} \sum_{M=0}^{\infty}~ P_{4\pi}
\left(N,M\right)~ \sum_{n=0}^{N} \sum_{m=0}^{M}~ (n+2~m) ~P_{acc} \left(n,N
\right) P_{acc} \left(m,M \right)~.$$ This leads immediately to: $$\langle n \rangle ~=~ q \cdot \Big[ \langle N \rangle + 2~\langle M \rangle
\Big]~.$$ One finds the second moment, $$\langle n^2 \rangle ~=~ \sum_{N=0}^{\infty} \sum_{M=0}^{\infty} ~P_{4\pi}
\left(N,M\right)~ \sum_{n=0}^{N} \sum_{m=0}^{M}~ (n+2m)^2 ~P_{acc} \left(n,N
\right) P_{acc} \left(m,M \right)~.$$ Making use of the relation (\[ac2\]) one obtains: $$\begin{aligned}
\langle n^2 \rangle ~=~ q \left( 1-q\right) \cdot \langle N \rangle ~+~ q^2
\cdot \langle N^2\rangle ~+~ 4 q^2 \cdot \langle N M \rangle
~+~ 4 q \left( 1-q\right)\cdot \langle M \rangle ~+~ 4q^2 \cdot \langle M^2
\rangle~.\end{aligned}$$ Thus, for the scaled variance it follows: $$\label{ac6}
\omega ~\equiv~ \frac{\langle n^2 \rangle -\langle n \rangle^2}{\langle n
\rangle} ~=~ q \cdot \omega_{4\pi} ~+~ \left(1-q \right) \cdot \Bigg[ \frac{\langle N \rangle
+ 4~\langle M \rangle}{\langle N \rangle + 2~\langle M \rangle} \Bigg]~,$$ where $\omega_{4\pi}$ is obtained from the case $q=1$ and corresponds to the original distribution $P_{4\pi} \left(N,M\right)$. For the second limiting case of Eq. (\[ac6\]), $q\rightarrow 0$, one finds a scaled variance which corresponds to that of two uncorrelated Poisson distributions with means $q\langle N \rangle$ and $q\langle M \rangle$, respectively. In this case, all primordial correlations due to energy and charge conservation or Bose (Fermi) statistics are lost, but particles produced by resonances of type $M$ are still detected in pairs.
In the general case, we denote by $k$ the fraction of particles originating from decays (always two relevant daughters) of particle kind $M$; hence, $$\langle N \rangle~ =~ \left( 1-k\right) ~\langle N_{tot} \rangle~, \qquad
\langle M \rangle~ = ~\frac{k}{2} ~ \langle N_{tot} \rangle~.$$ Finally, one finds for the scaled variance: $$\label{ac5}
\omega ~=~ q \cdot \omega_{4\pi} ~+~ \left(1-q \right) \left(1+k \right)~.$$ From the hadron-resonance gas model we can estimate the fraction $k$ of the final yield which originates from decays of resonances into two (or more) charged particles. From Fig. \[DC\] (left) we find the fraction $k$ of the charged particle yield to range from 35% ($20A$ GeV) to 45% ($158A$ GeV) in the SPS energy range (and about 10% for positively and for negatively charged particles). For the definition of decay channels see section III. As examples of two-particle decay channels, $\rho^0 \rightarrow \pi^+ + \pi^-$ would be counted as a two-particle decay in ‘ch’, but neither in ‘+’ nor in ‘-’, while $\Delta^{++} \rightarrow p + \pi^+$ would contribute to ‘ch’ and ‘+’, but not to ‘-’. The assumption that both decay products are detected is certainly not justifiable for small values of the total acceptance $q$; hence Eq. (\[ac5\]) overestimates the effect. However, this consideration gives a useful upper bound (see Fig. \[DC\], right). The typical width of decays is comparable to the width of the acceptance window; therefore, about half of all decays will have one (or both) decay products missing. Yet the same 50% will be contributed by decays whose parents are outside the acceptance but whose products contribute to the final yield. Hence, one expects no change in the average multiplicity, but a sizable effect on the fluctuations.
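The step from the bracket in Eq. (\[ac6\]) to the factor $(1+k)$ in Eq. (\[ac5\]), and the resulting small-acceptance limit, can be verified in a few lines. The sketch is illustrative only; the values of $k$ are the SPS estimates quoted above, and $\langle N_{tot}\rangle$ drops out of the ratio.

```python
# Check that the bracket in Eq. (ac6) reduces to (1 + k) once
# <N> = (1 - k) <N_tot> and <M> = (k/2) <N_tot>, and evaluate the
# small-acceptance limit of Eq. (ac5) for the SPS range of k.

def bracket(k, n_tot=100.0):
    n = (1.0 - k) * n_tot   # singles and single relevant daughters
    m = 0.5 * k * n_tot     # parents contributing two daughters each
    return (n + 4.0 * m) / (n + 2.0 * m)

for k in (0.35, 0.40, 0.45):
    assert abs(bracket(k) - (1.0 + k)) < 1e-9

# q -> 0 limit of Eq. (ac5): all primordial correlations are lost, but
# resonance daughters still arrive in pairs, so omega -> 1 + k
# independently of omega_4pi.
def omega_small_q(k):
    return 1.0 + k

print(omega_small_q(0.35), omega_small_q(0.45))  # 1.35 1.45
```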
[100]{}
E. Fermi, Prog. Theor. Phys. [**5**]{}, 570 (1950).
L. D. Landau, Izv. Akad. Nauk SSSR, Ser. Fiz. [**17**]{}, 51 (1953).
R. Hagedorn, Nucl. Phys. B [**24**]{}, 93 (1970).
For a recent review see Proceedings of the [*3rd International Workshop: The Critical Point and Onset of Deconfinement*]{}, PoS(CPOD2006) (http://pos.sissa.it/), ed. F. Becattini, Firenze, Italy 3-6 July 2006.
J. Cleymans, H. Oeschler, K. Redlich, and S. Wheaton, Phys. Rev. C [**73**]{}, 034905 (2006).
F. Becattini, J. Manninen, and M. Gaździcki, Phys. Rev. C [**73**]{}, 044905 (2006).
A. Andronic, P. Braun-Munzinger, J. Stachel, Nucl. Phys. A [**772**]{}, 167 (2006).
S.V. Afanasev [*et al*]{}., \[NA49 Collaboration\], Phys. Rev. Lett. [**86**]{}, 1965 (2001); M.M. Aggarwal [*et al*]{}., \[WA98 Collaboration\], Phys. Rev. C [**65**]{}, 054912 (2002); J. Adams [*et al*]{}., \[STAR Collaboration\], Phys. Rev. C [**68**]{}, 044905 (2003); C. Roland [*et al*]{}., \[NA49 Collaboration\], J. Phys. G [**30**]{} S1381 (2004); Z.W. Chai [*et al*]{}., \[PHOBOS Collaboration\], J. Phys. Conf. Ser. [**37**]{}, 128 (2005); M. Rybczynski [*et al.*]{} \[NA49 Collaboration\], J. Phys. Conf. Ser. [**5**]{}, 74 (2005).
H. Appelshauser [*et al.*]{} \[NA49 Collaboration\], Phys. Lett. B [**459**]{}, 679 (1999); D. Adamova [*et al*]{}., \[CERES Collaboration\], Nucl. Phys. A [**727**]{}, 97 (2003); T. Anticic [*et al*]{}., \[NA49 Collaboration\], Phys. Rev. C [**70**]{}, 034902 (2004); S.S. Adler [*et al*]{}., \[PHENIX Collaboration\], Phys. Rev. Lett. [**93**]{}, 092301 (2004); J. Adams [*et al*]{}., \[STAR Collaboration\], Phys. Rev. C [**71**]{}, 064906 (2005).
H. Heiselberg, Phys. Rep. [**351**]{}, 161 (2001); S. Jeon and V. Koch, Review for Quark-Gluon Plasma 3, eds. R.C. Hwa and X.-N. Wang, World Scientific, Singapore, 430-490 (2004) \[arXiv:hep-ph/0304012\].
M. Gazdzicki, M. I. Gorenstein and S. Mrowczynski, Phys. Lett. B [**585**]{}, 115 (2004); M. I. Gorenstein, M. Gazdzicki and O. S. Zozulya, Phys. Lett. B [**585**]{}, 237 (2004).
I.N. Mishustin, Phys. Rev. Lett. [**82**]{}, 4779 (1999); Nucl. Phys. A [**681**]{}, 56c (2001); H. Heiselberg and A.D. Jackson, Phys. Rev. C [**63**]{}, 064904 (2001).
M. Stephanov, K. Rajagopal, and E. Shuryak, Phys. Rev. Lett. [**81**]{}, 4816 (1998); Phys. Rev. D [**60**]{}, 114028 (1999); M. Stephanov, Acta Phys. Polon. B [**35**]{}, 2939 (2004);
B. Lungwitzt [*et al.*]{} \[NA49 Collaboration\], arXiv:nucl-ex/0610046.
V.V. Begun, M. Gaździcki, M.I. Gorenstein, and O.S. Zozulya, Phys. Rev. C [**70**]{}, 034901 (2004); V.V. Begun, M.I. Gorenstein, and O.S. Zozulya, Phys. Rev. C [**72**]{}, 014902 (2005); A. Keränen, F. Becattini, V.V. Begun, M.I. Gorenstein, and O.S. Zozulya, J. Phys. G [**31**]{}, S1095 (2005); F. Becattini, A. Keränen, L. Ferroni, and T. Gabbriellini, Phys. Rev. C [**72**]{}, 064904 (2005); V.V. Begun, M.I. Gorenstein, A.P. Kostyuk, and O.S. Zozulya, Phys. Rev. C [**71**]{}, 054904 (2005); J. Cleymans, K. Redlich, and L. Turko, Phys. Rev. C [**71**]{}, 047902 (2005); J. Phys. G [**31**]{}, 1421 (2005); V.V. Begun, M.I. Gorenstein, A.P. Kostyuk, and O.S. Zozulya, J. Phys. G [**32**]{}, 935 (2006). V.V. Begun and M.I. Gorenstein, Phys. Rev. C [**73**]{}, 054904 (2006).
V.V. Begun, M.I. Gorenstein, M. Hauer, V.P. Konchakovski, and O.S. Zozulya, Phys. Rev. C [**74**]{}, 044903 (2006).
M. Hauer, V.V. Begun, and M.I. Gorenstein, in preparation.
S. Jeon and V. Koch, Phys. Rev. Lett. [**83**]{}, 5435 (1999).
G. Torrieri, S. Steinke, W. Broniowski, W. Florkowski, J. Letessier, and J. Rafelski, Comput. Phys. Commun. [**167**]{}, 229 (2005).
S. Wheaton and J. Cleymans, arXiv:hep-ph/0407174.
A. Kisiel, T. Taluc, W. Broniowski, and W. Florkowski, Comput. Phys. Commun. [**174**]{}, 669 (2006).
J. Cleymans and K. Redlich, Phys. Rev. Lett. [**81**]{}, 5284 (1998).
M.I. Gorenstein, M. Hauer and D. Nikolajenko, nucl-th/0702081.
V.P. Konchakovski, S. Haussler, M.I. Gorenstein, E.L. Bratkovskaya, M. Bleicher, and H. Stöcker, Phys. Rev. C [**73**]{}, 034902 (2006); V.P. Konchakovski, M.I. Gorenstein, E.L. Bratkovskaya, H. Stöcker, Phys. Rev. C [**74**]{}, 064901 (2006).
M. Gaździcki and M.I. Gorenstein, Phys. Lett. B [**640**]{}, 155 (2006).
S. V. Afanasiev [*et al.*]{} \[The NA49 Collaboration\], Phys. Rev. C [**66**]{}, 054902 (2002) \[arXiv:nucl-ex/0205002\],\
M. Gazdzicki [*et al.*]{} \[NA49 Collaboration\], J. Phys. G [**30**]{}, S701 (2004) \[arXiv:nucl-ex/0403023\].
M. Gazdzicki and M. I. Gorenstein, Acta Phys. Polon. B [**30**]{}, 2705 (1999) \[arXiv:hep-ph/9803462\],\
M. I. Gorenstein, M. Gazdzicki and K. A. Bugaev, Phys. Lett. B [**567**]{}, 175 (2003) \[arXiv:hep-ph/0303041\].
W. Cassing, E. L. Bratkovskaya and S. Juchem, Nucl. Phys. A [**674**]{}, 249 (2000). S. A. Bass [*et al.*]{}, Prog. Part. Nucl. Phys. [**41**]{}, 225 (1998).
---
abstract: 'The method of weighted addition of multi-frequency maps, more commonly referred to as [*Internal Linear Combination*]{} (ILC), has been extensively employed in the measurement of cosmic microwave background (CMB) anisotropies and its secondaries along with similar application in 21cm data analysis. The performance of the simple ILC method is, however, limited, but can be significantly improved by adding constraints informed by physics and existing empirical information. In recent work, a moment description has been introduced as a technique of carrying out high precision modeling of foregrounds in the presence of inevitable averaging effects. We combine these two approaches to construct a heavily constrained form of the ILC, dubbed [[MILC]{}]{}, which can be used to recover tiny spectral distortion signals in the presence of realistic foregrounds and instrumental noise. This is a first demonstration for measurements of the monopolar and anisotropic spectral distortion signals using ILC and extended moment methods. We also show that CMB anisotropy measurements can be improved, reducing foreground biases and signal uncertainties when using the [[MILC]{}]{}. While here we focus on CMB spectral distortions, the scope extends to the 21cm monopole signal and $B$-mode analysis. We briefly discuss augmentations that need further study to reach the full potential of the method.'
author:
- |
Aditya Rotti[^1] and Jens Chluba[^2]\
Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, U.K.
bibliography:
- 'Lit.bib'
- 'Lit1.bib'
- 'mendeley\_cmb.bib'
---
Cosmology - cosmic microwave background; Cosmology - observations and foreground; Cosmology - theory and analysis methods;
[^1]: E-mail:[email protected]
[^2]: E-mail:[email protected]
---
abstract: 'We investigate the coherence and steady-state properties of the Jaynes-Cummings model subjected to time-delayed coherent feedback in the regime of multiple excitations. The introduced feedback qualitatively modifies the dynamical response and steady-state quantum properties of the system by enforcing a non-Markovian evolution. This leads to recovered collapses and revivals as well as non-equilibrium steady states when the two-level system (TLS) is directly driven by a laser. The latter are characterized by narrowed spectral linewidth and diverging correlation functions that are robust against the time delay and feedback phase choices. These effects are also demonstrated in experimentally accessible quantities such as the power spectrum and the second-order correlation function $g^{(2)}(\tau)$ in standard and widely available photon-detection setups.'
author:
- 'Nikolett N[é]{}met'
- Scott Parkins
- Victor Canela
- Alexander Carmele
bibliography:
- 'citations.bib'
title: 'Feedback-induced instabilities and dynamics in the Jaynes-Cummings model'
---
*Introduction.—* Time-delayed feedback combines the effects of information coupling back from the environment with the non-trivial dynamics introduced by the memory of the process both in classical and non-classical (coherent) systems [@Lakshmanan-SenthilkumarBOOK; @Bernd-HinkeBOOK; @Bellen-ZennaroBOOK; @Scholl2008Handbook; @Scholl2016Control; @Lloyd2000Coherent]. In case of a short feedback, where time delay is negligible, the evolution of the system shows reduced or enhanced system-reservoir coupling, which can be modelled within a Markovian framework [@Gough2009Series; @Combes2017SLH; @Fang2017Multiple]. For longer loops, however, the non-Markovian nature of the process becomes significant, which introduces non-trivial, time-delayed dynamics [@Naumann2014Steady-state; @Grimsmo2014Rapid; @Kopylov2015Time-delayed; @Kopylov2015Dissipative; @Joshi2016Quantum; @Zhang2017Quantum; @Loos2017Force; @Li2018Concepts; @Fang2018Non-markovian; @Carmele2019Non-markovian]. This dynamical aspect has been used for classical control in the field of nonlinear dynamics and chaos [@Strogatz2000BOOK; @Scholl2008Handbook; @Scholl2016Control], with special focus on Pyragas-type feedback for laser dynamics based on the Lang-Kobayashi semiclassical description [@Lang1980External; @Sano1994Antimode; @Albert2011Observing; @Grimsmo2014Rapid; @Kreinberg2018Quantum; @Holzinger2019Quantum]. In the realm of quantum optics, these dynamical features are complemented with a direct influence on the system-reservoir coupling, resulting in suppressed decoherence. The combination of non-trivial dynamics and enhanced coherence provides a wider range of control over such intrinsic quantum features as squeezing or antibunching that are potentially detectable at the system output [@Lu2017Intensified; @Kraft2016Time-delayed; @Nemet2016Enhanced].
In the simplest case time-delayed coherent feedback (TDCF) can be realized by directly – without any intermediate measurement – coupling back one of the output channels of the system into one of the input channels, as shown in FIG. \[fig:scheme\]. This structured system-reservoir coupling affects one degree of freedom in the system, and, if this is the only system variable, the dissipative dynamics leads to a fixed steady state. A classic example is the driven two-level system (TLS) in front of a mirror [@Dorner2002Laser-driven; @Glaetzle2010Single; @Pichler2016Photonic], which has also been extensively studied experimentally [@Eschner2001Light; @Wilson2003Vacuum; @Dubin2007Photon; @Andersson2019Non-exponential]. Probing the TLS in this setup with a coherent excitation shows feedback-induced peaks in the power spectrum as well as enhanced or reduced bunching or anti-bunching, which are sensitive to the exact value of the feedback phase. These properties are related to the entanglement building up between system, feedback loop and reservoir [@Pichler2016Photonic]. As soon as an enhanced and localized interaction is introduced between light and matter, such as in a cavity, where only the optical field is affected by feedback, signatures of more complex long-time dynamics, such as persistent oscillations, have been shown [@Carmele2013Single; @Kabuss2015Analytical; @Kraft2016Time-delayed; @Nemet2016Enhanced; @Nemet2019Comparison; @Crowder2020Quantum]. These solutions are related to the internal coherent dynamics of the system that is protected from the intrinsically dissipative nature of TDCF and, thus, can be enhanced with the help of its coherence-recovering properties. This, however, so far has only been demonstrated in the single-excitation or linear regime, which limits the feasibility of experimental characterization and verification. 
To overcome these limitations, considerable efforts have been made to develop a numerical method that enables the description of a coherently probed system [@Pichler2016Photonic; @Grimsmo2015Time-delayed; @Whalen2017Open; @Chalabi2018Interaction; @Fang2019FDTD; @Crowder2020Quantum].
![Schematic of the setup. We consider the standard Jaynes-Cummings model, where the TLS couples to the cavity field with a strength of $g$. The waveguide field is coupled to the cavity field at two points, with respective decay rates $\kappa_{1,2}$, forming a coherent unidirectional feedback loop. A coherent driving field can also be considered for the TLS with a strength $\mathcal{E}_{\rm A}$. []{data-label="fig:scheme"}](myFig1-setup.pdf){width=".5\linewidth"}
In this Letter, making use of one of the most well-established techniques [@Pichler2016Photonic], we consider the Jaynes-Cummings model with a coherent initial photonic state or with coherent driving of the TLS. We show that the oscillatory steady-state is not unique to the single-excitation subspace. Proving the truly coherent nature of TDCF, we recover the well-known collapse-revival phenomenon in the non-driven case [@Eberly1980Periodic; @Rempe1987Observation; @Chough1996Nonlinear; @Carmele2019Non-markovian]. Additionally, a considerable robustness of stabilized oscillations against the choice of the feedback phase and delay time is demonstrated in the driven scenario. The long-time dynamics is accompanied by persistent oscillations in both the first- and second-order correlation functions, with diverging correlation lengths, which translates as a linewidth narrowing in the power spectrum. In the strongly driven case a collapse-revival-type phenomenon is found with extra frequencies emerging as a result of TDCF [@Crowder2020Quantum].
*Model.—* We consider the Jaynes-Cummings model with a potential, direct coherent excitation of the TLS. The Hamiltonian is a combination of three contributions; the Jaynes-Cummings closed system Hamiltonian ${\hat{H}}_{\rm JC}$, the coherent driving ${\hat{H}}_{\rm dr}$, and the system-reservoir interaction ${\hat{H}}_{\rm SR}$. All interactions are considered in the rotating-wave and dipole approximations: $$\begin{aligned}
\label{eq:H_tot}
{\hat{H}}&={\hat{H}}_{\rm JC}+H_{\rm dr}+H_{\rm SR},\\
\label{eq:H_JC}
{\hat{H}}_{\rm JC}
&=
\hbar \omega_\text{C} {\hat{a}^\dagger}{\hat{a}}+ \hbar \omega_\text{A} {\hat{\sigma}_z}+ \hbar g
\left( {\hat{a}^\dagger}{\hat{\sigma}^-}+ {\hat{\sigma}^+}{\hat{a}}\right), \\
\label{eq:H_drive}
{\hat{H}}_{\rm dr}
&=
\hbar \mathcal{E}_\text{A}
{\left(}{\hat{\sigma}^+}e^{-i\omega_\text{L} t} + {\hat{\sigma}^-}e^{i\omega_\text{L} t}{\right)}, \\
\label{eq:H_SR}
{\hat{H}}_\text{\rm SR}
&=
\hbar
\int \hspace{-.1cm}
{\left\{}
\newcommand{\rka}{\right\}}\omega {\hat{b}^\dagger}_\omega {\hat{b}}_\omega
+i
{\left[}\gamma^*(\omega){\hat{b}^\dagger}_\omega {\hat{a}}-
\gamma(\omega){\hat{a}^\dagger}{\hat{b}}_\omega
{\right]}\rka {\rm d}\omega, \end{aligned}$$]{} where ${\hat{a}}$, ${\hat{\sigma}^-}$, ${\hat{b}}_\omega$ are lowering or annihilation operators, $\omega_C$, $\omega_A$, $\omega$ are the frequencies of the cavity, TLS and the reservoir excitations, respectively, $\mathcal{E}_A$ is the driving field amplitude for the TLS, and $g$ is the coupling strength between the TLS and the cavity. In the following, we assume resonant cavity-emitter and laser-emitter interactions ($\omega_\text{C}=\omega_\text{L}=\omega_\text{A}$). The coupling between the cavity and the reservoir becomes frequency dependent due to TDCF [@Nemet2019Comparison]: $\gamma(\omega) = \gamma_1 \exp[-i(\omega\tau/2-\phi_1)] + \gamma_2\exp[i(\omega\tau/2+\phi_2)]$, where $\gamma_1$ ($\gamma_2$) is the coupling strength through the left (right) mirror [@supplemental]. For the sake of simplicity, the free emission of the TLS, which we expect to contribute as an extra linewidth broadening, is ignored. Moving into a frame rotating at the TLS resonance frequency we obtain $$\begin{aligned}
{\hat{H}}(t) = &
\hbar g
{\left(}{\hat{a}^\dagger}{\hat{\sigma}^-}+ {\hat{\sigma}^+}{\hat{a}}{\right)}+
\hbar \mathcal{E}_\text{A}
\left({\hat{\sigma}^+}+ {\hat{\sigma}^-}\right) \\
&+i\hbar
{\left\{}
\newcommand{\rka}{\right\}}{\left[}\sqrt{2\kappa_1}{\hat{b}^\dagger}(t)+
\sqrt{2\kappa_2}{\hat{b}^\dagger}(t-\tau)
e^{i\phi}
{\right]}{\hat{a}}\right.{\nonumber}\\
&\hspace{1cm}\left.-{\text{H.c.}}\rka,\end{aligned}$$ where we use the Fourier transformed reservoir operator ${\hat{b}^\dagger}(t)
= \frac{1}{\sqrt{2\pi}}\int e^{i[(\omega-\omega_\text{A})(t+\tau/2)-\phi_1]} {\hat{b}^\dagger}_\omega\text{d}\omega$, and $\sqrt{2\kappa_j}=\gamma_j\sqrt{2\pi}$. The feedback phase, $\phi={\rm mod}_{2\pi}{\left(}\omega_A\tau+\phi_2-\phi_1{\right)}$, describes the phase relationship between the returning and emitted field at mirror 1.
This Hamiltonian considers a feedback reservoir that couples to the system at two different times. In other words, a memory is introduced for the vast environment that supports non-Markovian system dynamics. In some sense, the feedback loop together with the original system constitutes an effective coherent quantum system. Due to the fundamental issue of keeping track of the system excitations through a vast environment, numerical methods have been introduced based on various approximations to overcome this limitation [@Pichler2016Photonic; @Grimsmo2015Time-delayed; @Whalen2017Open; @Chalabi2018Interaction; @Fang2019FDTD; @Crowder2020Quantum]. One of the most efficient techniques represents the system and reservoir states together at various times as a Matrix Product State (MPS) [@Schollwock2011Density]. Following Ref. [@Pichler2016Photonic] we model the dynamics by representing the state of the system at a given time and the state of the reservoir in short timebins together as an MPS $({| \Psi(t) \rangle})$. Then, using the quantum stochastic Schrödinger equation approach [@Gardiner-ZollerBOOK], each time step in the evolution is obtained by acting with the discretized unitary ${\rm d}{\hat{U}}(t)={\rm exp}{\left(}-\frac{i}{\hbar}\int_t^{t+{\rm d}t}{\hat{H}}(t^\prime){\rm d}t^{\prime}{\right)}$ on the wave function ${| \Psi(t+{\rm d}t) \rangle}={\rm d}{\hat{U}}(t){| \Psi(t) \rangle}$ [@Pichler2016Photonic; @supplemental].
![Suppressed and recovered revivals in the TLS population inversion (upper panel) and the Fourier transform (lower panel) as a result of destructive and constructive feedback, respectively. Rabi frequencies are shown with dashed grey lines. $\kappa_1=g/100$, $\kappa_2=4g/100$, $g\tau=0.04$, $\left|\alpha\right|^2=6$, constr fb: $\phi=\pi$, destr fb: $\phi=0$. []{data-label="fig:collapse-revival"}](myFig2-Collapse-revival.pdf){width=".5\textwidth"}
*Transient dynamics without driving.—* One of the most prominent features of the Jaynes-Cummings model is the collapse-revival phenomenon. In this case a closed system is considered with the cavity field initialized in a coherent state, and the TLS in its ground state. After an initial destructive interference-induced collapse, revivals of the cavity and TLS populations can be observed as the coherence in the system enables rephasing. This phenomenon stems from the uniquely quantum mechanical nature of the system and the strong coupling between cavity and TLS [@Shore1993Jaynes].
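The closed-system collapse and revival described above can be reproduced with a few lines of exact numerics. The following sketch is our illustration only (it is not the paper's MPS/feedback method): it builds the resonant Jaynes-Cummings Hamiltonian in a truncated Fock space, starts from a coherent field with $\left|\alpha\right|^2=6$ and the TLS in its ground state, and propagates with a fixed short-time unitary (numpy/scipy assumed available).

```python
# Illustrative sketch (not the paper's MPS/feedback simulation): closed-system
# Jaynes-Cummings collapse and revival in a truncated Fock space.
import math
import numpy as np
from scipy.linalg import expm

n_fock = 30                    # photon-number truncation
g = 1.0                        # cavity-TLS coupling (time in units of 1/g)
alpha = math.sqrt(6.0)         # coherent-state amplitude, |alpha|^2 = 6

a = np.diag(np.sqrt(np.arange(1, n_fock)), k=1)     # cavity annihilation
sm = np.array([[0.0, 1.0], [0.0, 0.0]])             # sigma_-  (basis: |g>, |e>)
sz = np.diag([-1.0, 1.0])                           # TLS inversion operator
A = np.kron(a, np.eye(2))
Sm = np.kron(np.eye(n_fock), sm)
Sz = np.kron(np.eye(n_fock), sz)
H = g * (A.conj().T @ Sm + A @ Sm.conj().T)         # resonant JC Hamiltonian

n = np.arange(n_fock)
coh = np.exp(-alpha**2 / 2) * alpha**n / np.sqrt(
    np.array([math.factorial(k) for k in n], dtype=float))
psi = np.kron(coh, [1.0, 0.0]).astype(complex)      # coherent field, TLS ground

ts = np.linspace(0.0, 20.0, 400)
U = expm(-1j * H * (ts[1] - ts[0]))                 # one-step propagator
inversion = np.empty(len(ts))
for k in range(len(ts)):
    inversion[k] = np.real(psi.conj() @ (Sz @ psi))
    psi = U @ psi
# The inversion starts at -1, collapses near g*t ~ 2-5, and partially
# revives around g*t ~ 2*pi*sqrt(6) ~ 15.
```

Opening the cavity or adding the feedback loop requires the MPS machinery described above; this closed-system baseline only fixes the frequencies and timescales against which the feedback-modified dynamics are compared.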
Opening the system to its surroundings gives an overall exponentially decaying envelope to the cavity and TLS populations. In order to demonstrate qualitative change as a result of coherent feedback, we choose a regime where no revival can be observed without feedback (blue dash-dotted line in FIG. \[fig:collapse-revival\]). Introducing constructive feedback in this case (destructive interference between the returning and emitted field, i.e. $\phi=\pi$) recovers a similar evolution (green solid) as expected for a closed system (black dotted line), with partial revivals. Meanwhile, a destructive feedback (constructive interference at the point of interaction $(\phi=0)$) accelerates the population damping. This finding is further emphasized by the Fourier transform of the time trace, where the distinct peaks representing the Rabi frequencies of the closed system [@Shore1993Jaynes] become more (less) pronounced as a result of constructive (destructive) feedback [@supplemental].
With a lower number of excitations, our simulation can also determine steady-state properties of the system. However, in order to obtain a non-trivial steady state, we consider continuous, coherent driving of the TLS that is strong enough to place more than one excitation at a time in the system, yet weak enough for our simulation method to reliably determine the steady-state correlation functions and power spectrum.
![ First-order (upper panel) and second-order (lower panel) correlation functions settling around $1$ without feedback and oscillating with feedback. $\phi=\pi/2, g=0.2, \mathcal{E}_\text{A}=0.01, \kappa_1/g=0.6125, \kappa_2/g=0.6, g\tau=1.8.$[]{data-label="fig:diverging_correlation_length"}](myFig3-corr.pdf){width=".5\textwidth"}
*Diverging correlation functions with TLS driving.—* Previous works have shown stabilization of Rabi oscillations as a result of TDCF in the single-excitation limit when the condition $g\tau+\phi=(2n+1)\pi$ $(n\in\mathbb{Z})$ is satisfied [@Kabuss2015Analytical; @Nemet2019Comparison]. In this Letter, we extend the scope of these works by considering coherent driving of the TLS, generating multiple excitations in the system. We choose to excite the TLS instead of the cavity as this scheme proves to be more efficient in activating the intrinsic non-linearity of the system [@Hamsen2017Two-photon]. Starting from the cavity vacuum and TLS ground states, the TLS excitation and cavity photon number show transient initial oscillations due to excitation exchange after turn-on of the driving field, before converging, in the case of no feedback, to a constant steady state after a time depending on the cavity loss and driving strength. With feedback, however, the time evolution can reach a limit cycle around $g\tau+\phi=n\pi\ (n\in\mathbb{Z})$, giving rise to persistent oscillations [@supplemental]. To connect the impact of TDCF to experimentally accessible quantities, we consider the photon correlation functions of first- and second-order in the long-time limit: $$\begin{aligned}
g^{(1)}(\tau_p) &= \lim_{t\rightarrow\infty}\frac{{\pmb{\big\langle} {\hat{b}^\dagger}(t){\hat{b}}(t+\tau_p) \pmb{\big\rangle}}}{{\pmb{\big\langle} {\hat{b}^\dagger}(t){\hat{b}}(t) \pmb{\big\rangle}}},\\
g^{(2)}(\tau_p) &= \lim_{t\rightarrow\infty}\frac{{\pmb{\big\langle} {\hat{b}^\dagger}(t){\hat{b}^\dagger}(t+\tau_p){\hat{b}}(t+\tau_p){\hat{b}}(t) \pmb{\big\rangle}}}{{\pmb{\big\langle} {\hat{b}^\dagger}(t){\hat{b}}(t) \pmb{\big\rangle}}^2}.\end{aligned}$$
Without feedback these correlation functions tend to $1$ due to the coherent driving field, as can be seen in FIG. \[fig:diverging\_correlation\_length\] (maroon and navy blue dashed lines). With feedback, persistent oscillations are evident in the correlation functions as well, signalling a highly non-classical output field. In the parameter regime of FIG. \[fig:diverging\_correlation\_length\], these oscillations only damp due to the imbalance of the cavity mirror transmissions resulting in an effective decay of the cavity field (orange and blue solid lines) [@supplemental].
Note that as the second-order correlation function deviates from 1, this highly non-classical process cannot be described using a linear or semiclassical model [@supplemental] and is a result of a feedback-induced enhanced coherence in the system. The reported characteristic second-order correlation function can, in principle, be observed experimentally using a coincidence measurement on the output field [@Kimble1977Photon].
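For reference, the zero-delay value $g^{(2)}(0)=\pmb{\langle}{\hat{b}^\dagger}{\hat{b}^\dagger}{\hat{b}}{\hat{b}}\pmb{\rangle}/\pmb{\langle}{\hat{b}^\dagger}{\hat{b}}\pmb{\rangle}^2$ can be checked directly on benchmark field states in a truncated Fock space. This is an illustrative numpy sketch (not the feedback simulation): coherent light gives $g^{(2)}(0)=1$, while a Fock state $|n\rangle$ gives the antibunched value $1-1/n$.

```python
# Quick check of g2(0) = <a†a†aa>/<a†a>^2 for benchmark field states
# in a truncated Fock space (illustrative only; numpy assumed).
import math
import numpy as np

n_fock = 40
a = np.diag(np.sqrt(np.arange(1, n_fock)), k=1)     # annihilation operator

def g2_zero(psi):
    num = psi.conj() @ (a.conj().T @ a.conj().T @ a @ a @ psi)
    den = (psi.conj() @ (a.conj().T @ a @ psi)) ** 2
    return np.real(num / den)

alpha = 2.0                                          # coherent amplitude
n = np.arange(n_fock)
coherent = np.exp(-alpha**2 / 2) * alpha**n / np.sqrt(
    [float(math.factorial(k)) for k in n])
fock2 = np.zeros(n_fock); fock2[2] = 1.0             # Fock state |2>

print(g2_zero(coherent))   # ~ 1   (coherent light)
print(g2_zero(fock2))      # = 0.5 (Fock n = 2: 1 - 1/n, antibunched)
```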
Sweeping through a range of time delays while keeping the feedback phase fixed, we find that the non-linear character of the delayed dynamics together with the driving ensures an increased robustness of the above described unique features against the variation of the time delay [@supplemental]. This is in contrast with what was observed in the case of, for example, the degenerate parametric amplifier with feedback, where the parameters had to be set precisely [@Nemet2016Enhanced].
![Incoherent power spectrum of the output field over a range of feedback time delays. []{data-label="fig:spectrum"}]
*Power spectrum.—* The characteristic dynamical features of the first-order correlation function can, in principle, be observed experimentally using a spectrum analyzer. The incoherent part of the obtained power spectrum is evaluated by taking the Fourier transform of $g^{(1)}$ as $$\begin{aligned}
S(\omega) = 2\Re{\int_0^\infty {\left[}g^{(1)}(\tau_p)-g^{(1)}(\infty){\right]}e^{i\omega\tau_p} \text{d}\tau_p}.\end{aligned}$$
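As a sanity check of this formula, the integral can be evaluated numerically for an assumed damped-oscillation correlation $g^{(1)}(\tau_p)=e^{-\kappa\tau_p}\cos(g\tau_p)+1$ (so that $g^{(1)}(\infty)=1$); this toy form is our illustration, not output of the MPS simulation. For $\kappa\ll g$ the resulting spectrum shows sharp peaks at $\omega=\pm g$, as in the narrowed resonances discussed below.

```python
# Numerical evaluation of S(w) = 2 Re ∫ [g1(tau) - g1(inf)] e^{i w tau} dtau
# for an assumed test correlation function (illustrative; numpy assumed).
import numpy as np

g, kappa = 1.0, 0.1
tau = np.linspace(0.0, 80.0, 8000)
g1 = np.exp(-kappa * tau) * np.cos(g * tau) + 1.0   # assumed test g1(tau)
g1_inf = 1.0

omega = np.linspace(-3.0, 3.0, 601)
dtau = tau[1] - tau[0]
S = np.array([2.0 * np.real(np.sum((g1 - g1_inf) * np.exp(1j * w * tau)) * dtau)
              for w in omega])                      # Riemann-sum Fourier integral
peak = omega[np.argmax(S)]                          # lies near +g or -g
```

Tiny negative values in the numerically evaluated spectrum are discretization artifacts of the finite Fourier integral, consistent with the footnote on FIG. \[fig:spectrum\].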
Plotting power spectra over a range of time delays in FIG. \[fig:spectrum\] [^1], the above mentioned resonances are distinguished by narrowed linewidth at $\pm g$. Note that these sharp features can be observed over a wide range of feedback delays, which confirms the previously mentioned robustness against experimental parameter fluctuations.
The specific value of the time delay in the feedback loop has a non-trivial influence over the dominant frequencies in the dynamics. For short delays $(g\tau\approx0.6)$ the effective coupling between the cavity and the TLS is reduced, shifting the side peaks closer to resonance. As the delay increases, other peaks appear in the spectrum that can be interpreted as a result of a strong dynamical coupling between the timescale of the feedback and the cavity-TLS coupling. These spectral features are the results of the non-linear delayed dynamics and, thus, cannot be recovered by considering a linear model [@supplemental].
Focusing on the special case where the feedback phase is $\phi=\pi$, the effective dissipation rate of a symmetric cavity ($\kappa_1=\kappa_2$) approaches zero for times longer than the delay time [@supplemental]. In this case, the above presented condition for persistent oscillations simplifies to $g\tau=\pi$. Due to the phase difference, a destructive interference between the cavity field, the feedback, and the external driving causes suppressed excitation at the place of the TLS – similar to that found in [@Alsing1992Suppression]. Therefore, in this exceptional case, the above condition also means increased mean TLS population in comparison to the cases with different delay values [@supplemental]. Meanwhile, the excitations in the cavity form an almost coherent field with $g^{(2)}(\infty)=1$.
![Time evolution of the system for strong TLS driving with and without feedback (upper panel). The Fourier transform of the dynamics with grey dashed lines representing the Rabi frequencies (lower panel). $\kappa_1=g/100$, $\kappa_2=4g/100$, $\mathcal{E}_A=2g$, $g\tau=0.04$, constr fb: $\phi=\pi$, destr fb: $\phi=0$. []{data-label="fig:strong_atom_drive"}](myFig5-Atomic_drive.pdf){width=".5\textwidth"}
*Transient dynamics with strong TLS driving.—* Increasing the atomic driving strength, the mean photon number grows in the cavity even with no initial excitation. Introducing an imbalance in the mirror transmissions, the TLS population increases as well. Considering a regime where the system populations decay without feedback (dash-dotted blue curve in FIG. \[fig:strong\_atom\_drive\]), constructive feedback ($\phi=\pi$, green solid curve) causes a similar collapse-revival as in the case of FIG. \[fig:collapse-revival\]. Comparing the irregular revivals with the closed system dynamics at the same driving strength (black dotted curve) in FIG. \[fig:strong\_atom\_drive\], a qualitative agreement can be observed. The quasi-eigenstates of this Hamiltonian are displaced Rabi doublets [@Alsing1992Stark]. As such they involve a coherent cavity-field contribution supporting the emergence of revivals which become mostly dominant at large driving strengths ($\mathcal{E}_A>g$) [@supplemental].
Taking the Fourier transform of these time traces, extra peaks can be observed for constructive feedback compared to the intrinsic frequencies of the closed system (lower panel of FIG. \[fig:strong\_atom\_drive\]). Examining the cavity field instead, these frequencies appear in the closed-system dynamics as well [@supplemental]. Thus, we suggest that the extra peaks are a result of TDCF – consisting mainly of coherent cavity field contributions – driving the TLS. It is important to note that the Fourier transform was only taken over a short time trace; nevertheless, signatures of such feedback-induced “half-frequencies” have also been reported for coherent cavity driving in [@Crowder2020Quantum].
*Conclusion.—* In this Letter we investigate the effect of TDCF on the dynamical and steady-state properties of the Jaynes-Cummings model with multiple excitations. The presented characteristics are explored using an MPS-based approach in the limiting cases of high excitation and long time delay. TDCF is demonstrated to recover the well-known collapse-revival dynamics of TLS and cavity populations without driving, and causes similar TLS population dynamics in the case of strong coherent TLS driving. For weaker driving we observe persistent population oscillations that involve multiple excitations (cf. [@Kabuss2015Analytical; @Nemet2019Comparison]) and are accompanied by first- and second-order correlation functions oscillating around $1$. The peculiar behaviour of the correlation functions underscores the quantum mechanical origin of these features. The presented results highlight the most crucial properties of TDCF. They show that coherence can be recovered and/or enhanced [@Nemet2019Stabilizing] while combining the diverse dynamical and quantum properties of a system [@Grimsmo2014Rapid; @Nemet2016Enhanced; @Kraft2016Time-delayed]. This is possible due to the strong entanglement building up between part of the environment – the feedback loop – and the system. The reported striking behaviour in the observables can also be experimentally verified using common spectroscopic and coincidence measurements.
*Acknowledgements. —* We would like to thank Andr[á]{}s Vukics and P[é]{}ter Domokos for stimulating discussions. We are also grateful to the New Zealand eScience Infrastructure (NeSI) for providing the high-performance resources for our numerical calculations. AC gratefully acknowledges support from the Deutsche Forschungsgemeinschaft (DFG) through the project B1 of the SFB 910, and from the European Union’s Horizon 2020 research and innovation program under the SONAR grant agreement no. \[734690\].
[^1]: The negative values are caused by the numerics of the Fourier Transform algorithm.
---
abstract: 'Prior information can be incorporated in matrix completion to improve estimation accuracy and extrapolate the missing entries. Reproducing kernel Hilbert spaces provide tools to leverage the said prior information, and derive more reliable algorithms. This paper analyzes the generalization error of such approaches, and presents numerical tests confirming the theoretical results.'
author:
- 'Pere Giménez-Febrer, Alba Pagès-Zamora, and Georgios B. Giannakis [^1]'
title: Generalization error bounds for kernel matrix completion and extrapolation
---
Introduction
============
Matrix completion (MC) deals with the recovery of missing entries in a matrix – a task emerging in several applications such as image restoration [@ji2010robust], collaborative filtering [@rao2015collaborative] or positioning [@nguyen2016]. MC relies on the low rank of data matrices to enable reliable, even exact [@candes], recovery of the full unknown matrix. Exploiting this property, mainstream approaches to MC involve the minimization of the nuclear norm [@cai2010singular; @ma2011fixed] or a surrogate involving the data matrix factorization into a product of two low-rank matrices [@koren2009matrix; @sun2015matrix].
One main assumption in the aforementioned approaches to MC is that the unknown matrix is incoherent, meaning the entries of its singular vectors are uniformly distributed, which implies that matrices with structured form are not allowed. For instance, data matrices with clustered form lead to segmented singular vectors that violate the incoherence assumption. Such structures may be induced by prior information embedded in, e.g., graphs [@kalofolias2014matrix], dictionaries [@yi2015partial], or heuristic assumptions [@cheng2013stcdg]. Main approaches to MC leverage prior information with proper regularization [@chen; @gogna2015matrix; @zhou2012; @gimenez], or, by restricting the solution space [@jain2013; @bazerque2013; @gimenez2018; @abernethy2006low3333]. Most of these approaches can be unified using a reproducing kernel Hilbert space (RKHS) framework [@bazerque2013; @gimenez2018], which presents theoretical tools to exploit prior information.
When analyzing the performance of MC algorithms, several works, e.g. [@candes2010matrix; @cai2010singular; @rao2015collaborative; @jain2013], focus on the derivation of sample complexity bounds; that is, on how the distance to the optimum evolves with the number of samples and iterations. Other analyses are based on the generalization error (GE) [@shamir2014; @srebro; @foygel2011], a metric that measures the difference between the value of the loss function applied to a training dataset, and its expected value [@shawe]. When the probability distribution of the data is unknown, the expected value is replaced by the average loss on a testing dataset [@yaniv2009]. Due to the potentially large matrix sizes and the small size of the training dataset, it is important that the estimated matrix exhibits low GE in order to prevent overfitting.
In [@gimenez2018], we introduced a novel Kronecker kernel matrix completion and extrapolation (KKMCEX) algorithm for MC. This algorithm relies on kernel ridge regression with the number of coefficients equal to the number of observations, thus being attractive for imputing matrices with a minimal number of observations. The present paper presents a GE analysis for MC with prior information, and establishes that, unlike other MC approaches, the GE of KKMCEX does not depend on the matrix size, thus making it more reliable when dealing with few observations.
MC with prior information {#sec:sideinfo}
=========================
Consider a matrix $\M=\F+\E$, where $\F\in\bbR^{N\times L}$ denotes an unknown rank $r$ matrix, and $\E$ is a noise matrix. We can only observe a subset of the entries in $\M$ whose indices are given by the sampling set $\mS_m\subseteq\{1,\ldots,N\}\times \{1,\ldots,L\}$ of cardinality $m=|\mS_m|$. Factorizing the unknown matrix as $\F=\W\H$, where $\W\in\bbR^{N\times p}$, $\H\in\bbR^{L\times p}$ and $p\geq r$, the unknown entries can be recovered by estimating $$\begin{aligned}
\!\{\hat{\W}\!,\!\hat{\H}\} \!=\! {\arg\!\min}_{\!\!\!\!\!\!\!\!\!\!\!\substack{\W\in\mathbb{R}^{N\times p}\\\H\in\mathbb{R}^{L\times p}}}\! {\left|\left|P_{\mS_m}\!(\!\M\!-\!\W\H^T)\!\right| \right|_\text{F}^2} \!+\! \mu\!\left({\left|\left|\W\right| \right|_\text{F}^2} \!+\! {\left|\left|\H\right| \right|_\text{F}^2}\right)\raisetag{10pt}\label{eq:mclag}
\vspace{-0.30cm}\end{aligned}$$ where $P_{\mS_m}(\cdot)$ denotes an operator that sets to zero the entries with index $(i,j)\notin \mS_m$ and leaves the rest unchanged, while $\mu$ is a regularization scalar. Hereafter we refer to this as the base MC formulation, which can also be written with the nuclear norm as a regularizer through the property ${\left|\left|\F\right|\right|_*} = \min_{\F=\W\H^T} {1\over 2}\left({\left|\left|\W\right| \right|_\text{F}^2} + {\left|\left|\H\right| \right|_\text{F}^2}\right)$ [@srebro].
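The factorized estimator above is typically solved by alternating minimization: fixing $\H$, each row of $\W$ solves a small ridge regression over the observed entries of that row, and vice versa. A minimal numpy sketch follows (our illustration with arbitrary sizes and hyperparameters, not the authors' implementation):

```python
# Sketch of the base factorized MC estimator via alternating ridge
# regression (illustrative sizes/hyperparameters; numpy assumed).
import numpy as np

rng = np.random.default_rng(0)
N, L, r, p, mu = 30, 30, 2, 4, 1e-3
F = rng.standard_normal((N, r)) @ rng.standard_normal((r, L))  # rank-r truth
mask = rng.random((N, L)) < 0.6                                # observed entries
M = F.copy()                                                   # noiseless data

W = rng.standard_normal((N, p))
H = rng.standard_normal((L, p))
for _ in range(100):
    for i in range(N):                       # row-wise ridge update of W
        idx = mask[i]
        A = H[idx]                           # (#observed in row i) x p
        W[i] = np.linalg.solve(A.T @ A + mu * np.eye(p), A.T @ M[i, idx])
    for j in range(L):                       # column-wise ridge update of H
        idx = mask[:, j]
        A = W[idx]
        H[j] = np.linalg.solve(A.T @ A + mu * np.eye(p), A.T @ M[idx, j])

Fhat = W @ H.T
# relative error on the *unobserved* entries
err = np.linalg.norm((Fhat - F)[~mask]) / np.linalg.norm(F[~mask])
```

With enough observed entries and $p\geq r$, the unobserved entries are recovered from the low-rank structure alone; this is the behavior the GE analysis below quantifies.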
While the basic MC formulation makes no use of prior information, kernel (K)MC incorporates such knowledge by means of kernel functions that measure similarities between points in their input spaces. Let $\mathcal{X}:=\{x_1,\ldots,x_N\}$ and $\mathcal{Y}:=\{y_1,\ldots,y_L\}$ be spaces of entities with one-to-one correspondence with the rows and columns of $\F$, respectively. Given the input spaces $\mathcal{X}$ and $\mathcal{Y}$, KMC defines the pair of RKHSs $\mathcal{H}_w := \left\{ w:\: w(x)=\sum\nolimits_{n=1}^{N}b_j\kappa_w(x,x_j), \: b_j\in\mathbb{R} \right\}$ and $ \mathcal{H}_h := \left\{ h:\: h(y)=\sum\nolimits_{l=1}^{L}c_j\kappa_h(y,y_j), \: c_j\in\mathbb{R} \right\}$, where $\kappa_w:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$ and $\kappa_h:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}$ are kernel functions. Then, KMC postulates that the columns of the factor matrices in are functions in $\mathcal{H}_w$ and $\mathcal{H}_h$. Thus, we write $\W=\K_w\B$ and $\H=\K_h\C$, where $\B$ and $\C$ are coefficient matrices, while $\K_w\in\mathbb{R}^{N\times N}$ and $\K_h\in\mathbb{R}^{L\times L}$ are the kernel matrices with entries $(\K_w)_{i,j}=\kappa_w(x_i,x_j)$ and $(\K_h)_{i,j}=\kappa_h(y_i,y_j)$. The KMC formulations proposed in [@bazerque2013; @zhou2012], recover the factor matrices as $$\begin{aligned}
\label{eq:kmc}
\{\hat{\W}\!,\!\hat{\H}\} \!=\! {\arg\!\min}_{\!\!\!\!\!\!\!\substack{\W\in\mathbb{R}^{N\times p}\\\H\in\mathbb{R}^{L\times p}}} &{\left|\left|P_{\mS_m}(\!\M\!-\!\W\H^T)\right| \right|_\text{F}^2} \\[-1em]&\!+ \mu\!\left({\text{\normalfont Tr}(\W^T\K_w^{-1}\W)} \!+\! {\text{\normalfont Tr}(\H^T\K_h^{-1}\H)}\right). \nonumber\end{aligned}$$ The coefficient matrices are obtained as $\hat{\B}\!=\!\K_w^{-1}\hat{\W}$ and $\hat{\C}\!=\!\K_h^{-1}\hat{\H}$, although this step is usually omitted [@bazerque2013; @zhou2012].
Algorithms solving and rely on alternating minimization schemes that do not converge to the optimum in a finite number of iterations [@jain2013low]. To overcome this limitation and obtain a closed-form solution, we introduced the Kronecker kernel MC and extrapolation (KKMCEX) method [@gimenez2018]. Associated with entries of $\F$, consider the two-dimensional function $f:\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}$ with $f(x_i,y_j) = \F_{i,j}$, and the RKHS it belongs to $$\mathcal{H}_f \!:= \!\left\{\!f: f(x,y)\!=\!\!\sum_{n=1}^{N}\sum_{l=1}^{L}d_{n,l}\kappa_f((x,x_n),(y,y_l)), d_{n,l}\!\in\mathbb{R}\! \right\} . \nonumber$$ Upon vectorizing $\F$, we obtain $\f=\text{vec}(\F) =$ $\K_f\gam$, where $\K_f$ has entries $\kappa_f$ and $\gam :=[d_{1,1},\ldots,d_{N,1},\ldots,d_{N,L}]^T$. Accordingly, the data matrix is vectorized as $\overline{\m} = \S\text{vec}(\M)$, where $\S$ is an $m\times NL$ binary sampling matrix with a single nonzero entry per row, and $\bar{\e}=\S\text{vec}(\E)$ denotes the noise vector. With these definitions, the signal model for the observed entries becomes $$\label{eq:sigmodelvec}
\overline{\m} = \S\f + \bar{\e} = \S\K_f\gam + \bar{\e}.$$ Recovery of the vectorized matrix is then performed using the kernel ridge regression estimate of $\gam$ given by $$\begin{aligned}
\label{eq:kkmcex}
\hat{\gam} =& {\arg\!\min}_{\gam\in\bbR^{NL}} {\left|\left|\overline{\m} - \S\K_f\gam\right|\right|^2_2} + \mu\gam^T\K_f\gam.\end{aligned}$$ The closed-form solution satisfies $\hat{\gam}=\S^T\hbgam$, where $$\hbgam=(\S\K_f\S^T+\mu\I)^{-1}\overline{\m} \label{eq:kkmcexsol}$$ results from applying the matrix inversion lemma to the solution. Since $\hbgam$ only depends on the observations in $\mS_m$, KKMCEX can be equivalently rewritten as $$\begin{aligned}
\label{eq:kkmcexrep}
\hbgam={\arg\!\min}_{\bgam\in\bbR^n} {\left|\left|\overline{\m} - \bK_f\bgam\right|\right|^2_2} + \mu\bgam^T\bK_f\bgam \end{aligned}$$ where $\bK_f =\S\K_f\S^T$. Given $\kappa_w$ and $\kappa_h$, it becomes possible to use $\kappa_f((x,x_n),(y,y_l))=\kappa_w(x,x_n)\kappa_h(y,y_l)$ as a kernel, which corresponds to a kernel matrix $\K_f=\K_h\otimes\K_w$ [@gimenez2018].
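The closed form above is short enough to state directly in code. The following numpy sketch (illustrative Gaussian kernels, sizes, and test matrix of our choosing) forms $\K_f=\K_h\otimes\K_w$, computes $\hbgam=(\S\K_f\S^T+\mu\I)^{-1}\overline{\m}$, and imputes the full vectorized matrix as $\hat{\f}=\K_f\S^T\hbgam$:

```python
# Sketch of the KKMCEX closed form with a Kronecker Gaussian kernel
# (illustrative kernels/sizes; numpy assumed).
import numpy as np

rng = np.random.default_rng(1)
N, L, m, mu, width = 20, 20, 150, 1e-4, 3.0

x = np.arange(N); y = np.arange(L)
Kw = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * width ** 2))
Kh = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * width ** 2))
Kf = np.kron(Kh, Kw)                      # matches column-major vec(F)

F = np.outer(np.sin(2 * np.pi * x / N), np.cos(2 * np.pi * y / L))  # smooth test
f = F.flatten(order="F")                  # vec(F)

obs = rng.choice(N * L, size=m, replace=False)    # sampled indices S_m
S = np.zeros((m, N * L)); S[np.arange(m), obs] = 1.0

m_bar = S @ f                                     # observed entries (noiseless)
beta = np.linalg.solve(S @ Kf @ S.T + mu * np.eye(m), m_bar)
f_hat = Kf @ S.T @ beta                           # imputes and extrapolates

miss = np.setdiff1d(np.arange(N * L), obs)
rmse = np.sqrt(np.mean((f_hat[miss] - f[miss]) ** 2))
```

Note the solve involves only an $m\times m$ system, one reason the method is attractive when only a minimal number of observations is available.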
Generalization error in MC {#sec:radbounds}
==========================
In this section, we derive bounds for the GE of the MC, KMC and KKMCEX algorithms. There are two approaches to GE analysis, namely the inductive [@shawe] and the transductive one [@yaniv2009]. In the inductive approach, the GE measures the difference between the expected value of a loss function and the empirical loss over a finite number of samples. Consider rewriting MC in the general form $$\hat{\F}={\arg\!\min}_{\F\in\mathcal{F}} {1\over m} \sum\nolimits_{(i,j)\in\mS_m} l(\M_{i,j},\F_{i,j})$$ where $l:\bbR\times\bbR\rightarrow\bbR$ denotes the loss, and $\mathcal{F}$ is the hypothesis class. For instance, choosing the square loss and setting the class to the set of matrices with a nuclear norm smaller than a constant $t$ results in the base MC formulation. Assuming a sampling distribution $\mathcal{D}$ over $\{1,\ldots,N\}\times\{1,\ldots,L\}$ for the observed indices in $\mS_m$, the GE for a specific estimate $\hat{\F}$ is given by the expected difference $\bbE_\mathcal{D}\{l(\M_{i,j},\hat{\F}_{i,j})\} - {(1/m)}\sum_{(i,j)\in\mS_m}l(\M_{i,j},\hat{\F}_{i,j})$. However, this definition of GE does not fit the MC framework because it assumes that: i) the data distribution is known; and, ii) the entries are sampled with repetition. In order to come up with distribution-free claims for MC, one may resort to the transductive GE analysis [@yaniv2009]. In this scenario, we are given a set $\mS_n=\mS_m\cup\mS_u$ of $n$ data points comprising the union of the training set $\mS_m$ and the testing set $\mS_u$, where $|\mS_u|=u$. These data are taken without repetition, and the objective is to minimize the loss on the testing set. Thus, the GE is the difference between the testing and training loss functions $$\label{eq:tge}\\[-0.00cm]
{1 \over u} \sum\nolimits_{(i,j)\in\mS_u} \hspace{-0.5em}l(\M_{i,j},\hat{\F}_{i,j}) - {1 \over m} \sum\nolimits_{(i,j)\in\mS_m} \hspace{-0.5em} l(\M_{i,j},\hat{\F}_{i,j}).\\[-0.00cm]$$ By making this difference as small as possible, we ensure that the chosen $\hat{\F}$ has good generalization properties, meaning we expect to obtain a similar empirical loss when we choose a different testing set of samples. Since MC algorithms find their solution among a class of matrices under different restrictions or hypotheses, we are interested in bounding this difference for any matrix in the solution space. Before we present such bounds, we need to introduce the notion of transductive Rademacher complexity (TRC) as follows.
**Transductive Rademacher complexity**[@yaniv2009] Given a set $\mS_n=\mS_m\cup\mS_u$ with $q :={1 \over u}+{1\over m}$, the transductive Rademacher complexity (TRC) of a matrix class $\mF$ is $$\label{eq:trc}
R_n(\mF) = q{\mathbb{E}_\sigma\left\{ \sup_{\F\in\mF}\sum\nolimits_{(i,j)\in\mS_n}\sigma_{i,j}\F_{i,j}\right\}}\\[-0.1cm]$$ where $\sigma_{i,j}$ is a Rademacher random variable that takes the values $\pm 1$, each with probability $0.5$. We may also write the TRC in vectorized form as $R_n(\mF)= q{\mathbb{E}_\sigma\left\{ \sup_{\F\in\mF}\bm \sigma^T\text{\normalfont vec}(\F)\right\}}$, where $\bm \sigma = \text{vec}(\Sig)$, and $\Sig\in\mathbb{R}^{N\times L}$ has entries $\Sig_{i,j} = \sigma_{i,j}$ if $(i,j)\in\mS_n$, and $\Sig_{i,j} = 0$ otherwise.
TRC measures the expected maximum correlation between any function in the class and the random vector $\bm \sigma$. Intuitively, the greater this correlation is, the larger is the chance of finding a solution in the hypothesis class that will fit any observation draw, that is, $\hat{\F}_{i,j} \!\simeq\! \M_{i,j} \forall \: (i,j)\in\mS_n$. Although TRC measures the ability to fit both the testing and training data at once, a model for $\F$ is learnt using only the training data. While having a small loss across all entries in $\mS_n$ is desirable, making it too small can lead to overfitting, and an increased error when predicting entries outside $\mS_n$. Using the TRC, the GE is bounded as follows.
\[th:tgebound\][@yaniv2009] Let $\mF$ be a matrix hypothesis class. For a loss function $l$ with Lipschitz constant $\gamma$, and any $\F\in\mF$, it holds with probability $1-\delta$ that $$\begin{aligned}
&{1 \over u} \sum\nolimits_{(i,j)\in\mS_u} l(\M_{i,j},\F_{i,j}) - {1 \over m} \sum\nolimits_{(i,j)\in\mS_m}l(\M_{i,j},\F_{i,j}) \nonumber \\ &\leq R_n(l\circ \mF) + 5.05q\sqrt{\min(m,u)}+ \sqrt{{2q}\ln{(1 / \delta)}}\;. \label{eq:tgebound}\\[-0.8cm]\nonumber
\end{aligned}$$
Theorem \[th:tgebound\] asserts that in order to bound the GE, it suffices to bound the TRC. Moreover, using the contraction property, which states that $R_n(l\circ\mathcal{F}) \leq \gamma R_n(\mathcal{F})$ [@yaniv2009], we only need to calculate the TRC of $\mF$. Given that the same loss function is used in MC, KMC and KKMCEX, in order to assess the GE upper bound of the three methods we will pursue the TRC for the hypothesis class of each algorithm.
Rademacher complexity for base MC
---------------------------------
In the base MC formulation, the hypothesis class is $\mF_{MC}:=\{\F:{\left|\left|\F\right|\right|_*}\leq t,\: t\in\bbR\}$, where the value of $t$ is regulated by $\mu$. As derived in [@shamir2014], the TRC for this class of matrices is bounded as $$\!\!\!\! R_n(\mF_{MC}) \!\leq\! q {\mathbb{E}_\sigma\left\{ \!\!\!\!\sup_{\,\,\,\F\in\mF_{MC}}\!\!\! ||\Sig||_2\!{\left|\left|\F\right|\right|_*}\!\right\}} \!\leq\! Gqt(\sqrt{N}\!\!+\!\!\sqrt{L})\label{eq:trcmcbound}\\[-0.00cm]$$ where $G$ is a universal constant. This bound decays as $\mathcal{O}({1\over m}+{1\over u})\subseteq\mathcal{O}\left(1/\min(m,u)\right)$ for fixed $N$ and $L$. However, the GE bound does not, since the sum of the second and third terms on the right-hand side of the bound in Theorem \[th:tgebound\] decays as $\mathcal{O}(1/\sqrt{\min{(m,u)}}\,)$. Ideally, the sizes of the training and testing datasets should be comparable for the TRC to scale well with $n$. Concerning the matrix size, the bound shows that increasing $N$ or $L$ results in a larger TRC bound regardless of the number of data points $n$. Moreover, the nuclear norm of a matrix is $\mathcal{O}(\sqrt{NL})$ since ${\left|\left|\F\right| \right|_\text{F}}\leq{\left|\left|\F\right|\right|_*}\leq \sqrt{r}{\left|\left|\F\right| \right|_\text{F}}$. Therefore, $t$ should also scale with $N$ and $L$ in order to match the hypothesis class, and obtain a good estimate of $\F$.
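Since the spectral norm is dual to the nuclear norm, the supremum inside the expectation equals $t\|\Sig\|_2$ exactly, so $R_n(\mF_{MC})=qt\,\mathbb{E}_\sigma\{\|\Sig\|_2\}$ and the $\sqrt{N}+\sqrt{L}$ scaling can be checked by Monte Carlo. The sketch below (our illustration; full sampling $\mS_n$ with $m=u=NL/2$, sizes arbitrary) estimates this expectation with numpy:

```python
# Monte Carlo check of the sqrt(N)+sqrt(L) scaling of the TRC of the
# nuclear-norm class (illustrative; full sampling, m = u = NL/2).
import numpy as np

rng = np.random.default_rng(2)
t, trials = 1.0, 200

for N in (20, 40, 80):
    L = N
    m = u = (N * L) // 2
    q = 1.0 / m + 1.0 / u
    # spectral norm of a random +/-1 sign matrix (ord=2 = largest singular value)
    norms = [np.linalg.norm(rng.choice([-1.0, 1.0], size=(N, L)), ord=2)
             for _ in range(trials)]
    est = q * t * float(np.mean(norms))       # Monte Carlo estimate of R_n(F_MC)
    ref = q * t * (np.sqrt(N) + np.sqrt(L))   # sqrt(N)+sqrt(L) scaling (G = 1)
    print(N, round(est, 4), round(float(ref), 4))
```

The estimate tracks `ref` up to a modest constant, consistent with $G$ being universal.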
Rademacher complexity for KMC
-----------------------------
Unlike base MC, which minimizes the nuclear norm of the data matrix, KMC does not directly employ the rank in its objective function. Instead, it imposes constraints on the maximum norm of the factor matrices in their respective RKHSs. Similar to [@shamir2014], the TRC for KMC is bounded as follows.
\[th:kmc\] If the KMC hypothesis class is $\mF_K:=\left\{\F\right.:$ $\left. \F=\K_w\B\C^T\K_h, {\text{\normalfont Tr}(\B^T\K_w\B)} \!+\! {\text{\normalfont Tr}(\C^T\K_h\C)} \!<\! t_B\right\}$, then $$\\[-0.0cm]
R_n(\mF_K)\leq \lambda_{\max} Gqt_B(\sqrt{N}+\sqrt{L})$$ where $\lambda_{\max}$ is the largest eigenvalue of $\K_w$ and $\K_h$.
Rewrite the nuclear norm in terms of the KMC constraint as $$\begin{aligned}
\label{eq:mcvskmc}
&\hspace*{-0.4cm}{\left|\left|\F\right|\right|_*} \!=\! {1\over 2}({\left|\left|\W\right| \right|_\text{F}^2}\!+\!{\left|\left|\H\right| \right|_\text{F}^2}) \!=\! {1\over 2}({\text{\normalfont Tr}(\B^T\!\K_w^2\B)} \!+\! {\text{\normalfont Tr}(\C^T\!\K_h^2\C)}) \nonumber\\&\leq {\lambda_{\max}\over 2}[{\text{\normalfont Tr}(\B^T\K_w\B)} +{\text{\normalfont Tr}(\C^T\K_h\C)}] \leq {\lambda_{\max}t_B\over 2} \end{aligned}$$ where we used that ${\text{\normalfont Tr}(\B^T\K_w^2\B)} = \sum_{i=1}^p \b_i^T\K_w^2\b_i$ with $\b_i$ denoting the $i^{th}$ column of $\B$, and $\b_i^T\K_w^{1\over 2}\K_w\K_w^{1\over 2}\b_i \leq \lambda_{\max} \b_i^T\K_w\b_i$.
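The key step in the proof, $\text{Tr}(\B^T\K_w^2\B)\leq\lambda_{\max}\text{Tr}(\B^T\K_w\B)$, is easy to verify numerically for a random positive semidefinite kernel (an illustrative numpy check, not part of the proof):

```python
# Numerical sanity check of Tr(B^T K^2 B) <= lambda_max * Tr(B^T K B)
# for a random PSD matrix K (illustrative; numpy assumed).
import numpy as np

rng = np.random.default_rng(3)
N, p = 15, 4
Z = rng.standard_normal((N, N))
K = Z @ Z.T                                  # random PSD "kernel" matrix
B = rng.standard_normal((N, p))

lhs = np.trace(B.T @ K @ K @ B)              # Tr(B^T K^2 B)
lam_max = np.linalg.eigvalsh(K).max()        # largest eigenvalue of K
rhs = lam_max * np.trace(B.T @ K @ B)        # lambda_max * Tr(B^T K B)
assert lhs <= rhs + 1e-8
```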
Theorem \[th:kmc\] establishes that the TRC bound expressions of KMC and MC are identical within a scale. With $t_B=t$, $\lambda_{\max}$ controls whether KMC has a larger or smaller TRC bound than MC. Thus, according to Theorem \[th:kmc\], the GE bound for KMC shrinks with $n$ and grows with $N,~L$ and $\lambda_{\max}$.
Interestingly, we will show next that it is possible to have a TRC bound that does not depend on the matrix size.
Consider the factorizations $\K_w=\bphi_w\bphi_w^T$ and $\K_h=\bphi_h\bphi_h^T$, where $\bphi_w\in\bbR^{N\times d_w}$ and $\bphi_h\in\bbR^{L\times d_h}$. Plugging the latter into the objective and substituting $\W=\K_w\B$ and $\H=\K_h\C$, yields $$\begin{aligned}
&{\left|\left|P_{\mS_m}(\M-\bphi_w\bphi_w^T\B\C^T\bphi_h\bphi_h^T)\right| \right|_\text{F}^2} \!+ \mu\!\left({\text{\normalfont Tr}(\B^T\bphi_w\bphi_w^T\B)} \right.\nonumber\\&\left. + {\text{\normalfont Tr}(\C^T\bphi_h\bphi_h^T\C)}\right) \label{eq:kmcphi} \\
&\!=\!{\left|\left|P_{\mS_m}\!(\M\!-\!\bphi_w\A_w\A_h^T\bphi_h^T)\!\right| \right|_\text{F}^2} \!+\! \mu\!\left(\!{\left|\left|\A_w\right| \right|_\text{F}^2}\!+\!{\left|\left|\A_h\right| \right|_\text{F}^2}\!\right)\label{eq:kmcphi2}\end{aligned}$$ where $\A_w = \bphi_w^T\B$ and $\A_h = \bphi_h^T\C$ are coefficient matrices of size $d_w\times p$ and $d_h \times p$, respectively. Optimizing for $\{\B,\C\}$ or for $\{\A_w,\A_h\}$ yields the same $\hat{\F}$, provided that $\bphi_w$ and $\bphi_h$ have full column rank. Under this assumption, we consider the hypothesis class $\mF_I:=\left\{\F:
\F=\bphi_w\A_w\A_h^T\bphi_h^T, {\left|\left|\A_w\right| \right|_\text{F}^2} \leq t_w, {\left|\left|\A_h\right| \right|_\text{F}^2} \leq t_h\right\}$, which satisfies $\mF_I=\mF_K$. Clearly, this is the objective used by the inductive MC [@jain2013]; and therefore, we have shown that inductive MC is a special case of KMC. This leads to the following result.
\[th:kmclin\] If $\K=(\bphi_h\otimes\bphi_w)(\bphi_h\otimes\bphi_w)^T$, and $\S_n$ is a binary sampling matrix that selects the entries in $\mS_n$, then $$R_n(\mF_I)\leq q\sqrt{t_wt_h}\sqrt{{\text{\normalfont Tr}(\S_n\K\S_n^T)}}.$$
With $\bm \sigma := \text{vec}(\Sig)$, $b_w :={\left|\left|\A_w\right| \right|_\text{F}^2}$, and $b_h:={\left|\left|\A_h\right| \right|_\text{F}^2}$, we have that $$\begin{aligned}
&R_n(\mF_I) =q{\mathbb{E}_\sigma\left\{ \hspace{-0.5cm}\sup_{\hspace{0.5cm}\substack{b_w\leq t_w,b_h\leq t_h}}\hspace{-0.5cm}\bm \sigma^T \text{vec}(\bphi_w\A_w\A_h^T\bphi_h^T)\right\}} \nonumber \\ \nonumber\\[-0.7cm]&
= q{\mathbb{E}_\sigma\left\{ \hspace{-0.5cm}\sup_{\hspace{0.5cm}\substack{b_w\leq t_w,b_h\leq t_h}}\hspace{-0.5cm}\bm \sigma^T (\bphi_h\otimes\bphi_w)\text{vec}(\A_w\A_h^T)\right\}} \nonumber \\
&\leq q{\mathbb{E}_\sigma\left\{ \hspace{-0.5cm}\sup_{\hspace{0.5cm}\substack{b_w\leq t_w,b_h\leq t_h}}\hspace{-0.5cm}{\left|\left|\bm\sigma^T (\bphi_h\otimes\bphi_w)\right|\right|_2} {\left|\left|\text{vec}(\A_w\A_h^T)\right|\right|_2}\right\}} \nonumber\\
&= q{\mathbb{E}_\sigma\left\{ \hspace{-0.5cm}\sup_{\hspace{0.5cm}\substack{b_w\leq t_w,b_h\leq t_h}}\hspace{-0.5cm}\sqrt{\bm\sigma^T \K\bm\sigma} {\left|\left|\A_w\A_h^T\right| \right|_\text{F}}\right\}} \nonumber \\
&\leq q{\mathbb{E}_\sigma\left\{ \hspace{-0.5cm}\sup_{\hspace{0.5cm}\substack{b_w\leq t_w,b_h\leq t_h}}\hspace{-0.5cm}\sqrt{\bm\sigma^T \K\bm\sigma} {\left|\left|\A_w\right| \right|_\text{F}}{\left|\left|\A_h^T\right| \right|_\text{F}}\right\}} \nonumber\\&
\leq q\sqrt{t_wt_h}\sqrt{{\mathbb{E}_\sigma\left\{ \bm\sigma^T \K\bm\sigma\right\}}}
= q\sqrt{t_wt_h}\sqrt{{\text{\normalfont Tr}(\S_n\K\S_n^T)}}\nonumber\\[-0.7cm] \nonumber
\end{aligned}$$ where we have successively used the Cauchy-Schwarz inequality, the sub-multiplicative property of the Frobenius norm, and Jensen’s inequality in the first, second and third inequalities, respectively.
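The last two steps rest on the identity $\mathbb{E}_\sigma\{\bm\sigma^T\K\bm\sigma\}=\text{Tr}(\K)$ for a Rademacher vector $\bm\sigma$, combined with Jensen's inequality. Both facts can be checked numerically; the sketch below uses an arbitrary positive semi-definite matrix standing in for the sampled kernel $\S_n\K\S_n^T$ (the matrix and sizes are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary positive semi-definite matrix standing in for the sampled kernel.
n = 8
A = rng.standard_normal((n, n))
K = A @ A.T

# Monte Carlo estimate of E_sigma[sigma^T K sigma] over Rademacher vectors:
# since E[sigma_i sigma_j] = delta_ij, the expectation equals Tr(K).
trials = 200_000
sigma = rng.choice([-1.0, 1.0], size=(trials, n))
quad = np.einsum('ti,ij,tj->t', sigma, K, sigma)

print(quad.mean(), np.trace(K))                      # the two values agree closely
# Jensen's inequality: E[sqrt(sigma^T K sigma)] <= sqrt(E[sigma^T K sigma]).
print(np.sqrt(quad).mean() <= np.sqrt(quad.mean()))  # True
```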
If the entries on the diagonal of $\K$ are bounded by a constant, and $m=u$, Theorem \[th:kmclin\] provides a bound that decays as $\mathcal{O}({\sqrt{t_wt_h\over m}})$. Thus, if $t_w$ and $t_h$ are constant, the bound does not grow with $N$ or $L$. These values can reasonably be kept constant when the coefficients in $\{\A_w,\A_h\}$ are not expected to change much as new rows or columns are added to $\F$, e.g., when the existing entries in the kernel matrices are largely unchanged as the matrices grow. For instance, let us rewrite the loss in as ${\left|\left|\overline{\m}-\S(\bphi_h\otimes\bphi_w)\text{vec}(\A_w\A_h^T)\right|\right|^2_2}$. If, when increasing $N$ or $L$, we only add a few rows to $\bphi_w$ or $\bphi_h$, as would happen with a linear kernel, optimizing for $\{\A_w,\A_h\}$ in should yield results similar to those obtained with smaller $N$ and $L$, as long as the space spanned by $\S(\bphi_h\otimes\bphi_w)$ is not significantly altered.
Rademacher complexity for KKMCEX
--------------------------------
In KKMCEX, the restriction is set on the magnitude of $\bgam^T\bK_f\bgam$, which depends on $\S$. Therefore, the hypothesis class for KKMCEX is not altered by changes in the matrix size. The TRC bound is then given by the next theorem.
\[th:trckkmcex\] If $\mF_R:=\{\F\!:\!\F=\text{\normalfont unvec}(\K_f\S^T\bgam), \bgam^T\bK_f\bgam \leq b^2,\: b\in\bbR\}$ is the hypothesis class for KKMCEX, it holds that $$\label{eq:trckkmcex}
R_n(\mF_R) \leq qb\sqrt{{\text{\normalfont Tr}(\S_n\K_f\S^T\bK_f^{-1}\S\K_f\S_n^T)}}.$$
$$\begin{aligned}
\nonumber\\[-1.0cm]
&R_n(\mF_R) =q{\mathbb{E}_\sigma\left\{ \sup_{\bgam^T\bK_f\bgam\leq b^2}\bm\sigma^T\K_f\S^T\bgam\right\}} \nonumber\\&
= q{\mathbb{E}_\sigma\left\{ \sup_{\bgam^T\bK_f\bgam\leq b^2}\bm\sigma^T\K_f\S^T\bK_f^{-{1\over2}}\bK_f^{1\over2}\bgam\right\}} \nonumber\\
&\leq q{\mathbb{E}_\sigma\left\{ \sup_{\bgam^T\bK_f\bgam\leq b^2}{\left|\left|\bm\sigma^T\K_f\S^T\bK_f^{-{1\over2}}\right|\right|_2}{\left|\left|\bK_f^{1\over2}\bgam\right|\right|_2}\right\}} \nonumber\\&
\leq qb{\mathbb{E}_\sigma\left\{ {\left|\left|\bm\sigma^T\K_f\S^T\bK_f^{-{1\over2}}\right|\right|_2}\right\}} \nonumber\\
&\leq qb\sqrt{{\text{\normalfont Tr}(\S_n\K_f\S^T\bK_f^{-1}\S\K_f\S_n^T)}}\\[-0.95cm]\nonumber\end{aligned}$$ where we have successively used the Cauchy-Schwarz inequality, the norm constraint on $\bgam$, and Jensen's inequality.
Supposing that the diagonal entries of $\K_f$ are bounded by a constant, the bound in decays as $\mathcal{O}(\sqrt{n}/\min(m,u))$. For $m=u$, this yields a rate $\mathcal{O}({1\over\sqrt{m}})$. Thus, the GE bound induced by only scales with the number of samples. As a result, we can expect the same performance on the testing dataset regardless of the data matrix size. Moreover, thanks to its simplicity and speed [@gimenez2018], KKMCEX can be used to confidently initialize other algorithms when needed, e.g., when the prior information is not accurate enough to provide a reliable hypothesis space.
Numerical tests
===============
This section compares the GE of MC and KMC, solved via alternating least-squares (ALS) [@jain2013low], with the KKMCEX solved with . Besides comparing the GE of these algorithms, we also assess how the matrix size impacts the GE. To this end, we use a fixed-rank synthetic data matrix with $N=L$ generated as $\F = \K_w\B\C^T\K_h + \E$. The kernel matrices are $\K_w=\K_h=\text{abs}(\R\D\R^T)$, where $\R \in\mathbb{C}^{N\times N}$ is the DFT basis and $\D\in\mathbb{R}^{N\times N}$ is a diagonal matrix with decreasing weights on its diagonal. The coefficient matrices $\{\B,\C\}$ have $p=30$ columns, with entries drawn from a zero-mean Gaussian distribution with variance 1. The entries of $\E\in\mathbb{R}^{N\times N}$ are drawn from a zero-mean Gaussian distribution with variance set according to the signal-to-noise ratio $snr={\left|\left|\F\right| \right|_\text{F}^2}/{\left|\left|\E\right| \right|_\text{F}^2}$.
The tests are run over 1,000 realizations. A new matrix $\F$ is generated per realization with $m\!=\!1,000$ entries drawn uniformly at random, followed by a run of each algorithm. Then, the loss on the testing set, which consists of the remaining $u=N^2-m$ entries, is measured. A single value of $\mu$ chosen by cross-validation is used for all realizations. For KMC and KKMCEX, $\mu$ is scaled with the matrix size to compensate for the trace growth of the kernel matrices, and thus keep the loss and regularization terms balanced.
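The synthetic setup described above can be sketched in a few lines of numpy. The exact decay of the diagonal weights in $\D$ and the noise scaling are assumptions, since the text only states that the weights are decreasing and that the noise variance is set by the $snr$:

```python
import numpy as np

def make_synthetic(N=100, p=30, snr=4.0, m=1000, seed=0):
    """Sketch of the synthetic model F = K_w B C^T K_h + E described above.

    The 1/i decay of the diagonal weights in D is an assumed form; the text
    only states that the weights are decreasing."""
    rng = np.random.default_rng(seed)
    R = np.fft.fft(np.eye(N)) / np.sqrt(N)      # DFT basis
    D = np.diag(1.0 / np.arange(1, N + 1))      # decreasing weights (assumed form)
    K = np.abs(R @ D @ R.T)                     # K_w = K_h = abs(R D R^T)
    B = rng.standard_normal((N, p))
    C = rng.standard_normal((N, p))
    F = K @ B @ C.T @ K
    E = rng.standard_normal((N, N))
    E *= np.linalg.norm(F) / (np.sqrt(snr) * np.linalg.norm(E))  # snr = ||F||_F^2/||E||_F^2
    M = F + E
    train_idx = rng.choice(N * N, size=m, replace=False)         # observed entries
    return M, K, train_idx

M, K, train_idx = make_synthetic()
print(M.shape, len(train_idx))   # (100, 100) 1000
```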
Fig. \[fig:noisy\]a shows the training, testing, and GEs for square matrices with size ranging from $N=100$ to $N=3,200$, and $snr = \infty$. For MC, we observe that the training loss is small, whereas the testing loss is much larger and grows with $N$. Moreover, since the training loss is minimal, the GE coincides with the testing loss. Clearly, the MC solution is not able to predict the unobserved entries due to the lack of prior information that would allow for extrapolation. In addition, the GE approaches saturation for large matrix sizes since most entries in the estimated matrix are $0$, and the testing loss tends to the average ${1\over u}\sum_{(i,j)\in\mS_u}\M_{i,j}$. Regarding the performance of KMC and KKMCEX, we observe that both algorithms achieve a constant training loss. Although not visible on the plot, the training loss of KKMCEX is one order of magnitude smaller than that of KMC. On the other hand, the testing loss and GE of KKMCEX are constant, unlike those of KMC, which are higher and grow with $N$. These results confirm what was asserted by the TRC bounds in Section \[sec:radbounds\].
Fig. \[fig:noisy\]b shows the same simulation results as Fig. \[fig:noisy\]a, but with noisy data at $snr=4$. We observe that MC overfits the noisy observations since the training loss is, again, very small, while the testing loss is much larger. For KMC and KKMCEX, the presence of noise increases the training and testing losses. Due to the noise, a larger $\mu$ is selected to prevent overfitting at the cost of a higher training loss. Nevertheless, the testing loss of KMC grows slightly with $N$. In terms of GE, KKMCEX outperforms KMC with a lower value that tends to a constant.
[^1]: P. Giménez-Febrer and A. Pagès-Zamora are with the SPCOM Group, Universitat Politècnica de Catalunya-Barcelona Tech, Spain. G. B. Giannakis is with the Dept. of ECE and Digital Technology Center, University of Minnesota, USA. This work is supported by ERDF funds (TEC2013-41315-R and TEC2016-75067-C4-2), the Catalan Government (2017 SGR 578), and NSF grants (1500713, 1514056, 1711471 and 1509040).
---
abstract: 'Recent improvements and remaining problems in the prediction of thermonuclear rates are reviewed. The main emphasis is on statistical model calculations, but the challenge to include direct reactions close to the driplines is also briefly addressed. Further theoretical as well as experimental investigations are motivated.'
address: ' Departement für Physik und Astronomie, Universität Basel, Klingelbergstr. 82, CH-4056 Basel, Switzerland '
author:
- Thomas Rauscher
---
Introduction
============
The investigation of explosive nuclear burning in astrophysical environments is a challenge for both theoretical and experimental nuclear physicists. Highly unstable nuclei are produced in such processes which again can be targets for subsequent reactions. Cross sections and astrophysical reaction rates for a large number of nuclei are required to perform complete network calculations which take into account all possible reaction links and do not postulate a priori simplifications. Most of the involved nuclei are currently not accessible in the laboratory and therefore theoretical models have to be invoked in order to predict reaction rates.
In astrophysical applications, different aspects are usually emphasized than in pure nuclear physics investigations. Many of the latter, in this long and well-established field, have focused on specific reactions, where all or most “ingredients” (see Sec. \[hf\]) were deduced from experiments. As long as the reaction mechanism is identified properly, this will produce highly accurate cross sections. For the majority of nuclei in astrophysical applications such information is not available. The real challenge is thus not the application of well-established models, but rather to provide all the necessary ingredients as reliably as possible, also for nuclei where no such information is available.
Nuclear Cross Sections and Reaction Rates
=========================================
The nuclear cross section $\sigma$ is defined as the number of reactions $\xi$ target$^{-1}$ s$^{-1}$ divided by the flux $\Phi$ of incoming particles: $\sigma=\xi / \Phi$. To compute the number of reactions $r$ per volume and time, the velocity (energy) distribution between the interacting particles has to be considered. Nuclei in an astrophysical plasma follow a Maxwell-Boltzmann distribution (MBD) and the thermonuclear reaction rates will have the form [@fow67] $$\begin{aligned}
r_{j,k} & = &\left< \sigma v\right> n_j n_k \nonumber \\
\left< \sigma v\right> :&=&({8 \over {M \pi}})^{1/2} (kT)^{-3/2}
\int_0 ^\infty E \sigma (E) {\rm exp}(-E/kT) dE.
\label{rate}\end{aligned}$$ Here $M$ denotes the reduced mass of the target-projectile system and $n_{j,k}$ is the number of projectiles and target nuclei, respectively. In astrophysical plasmas with high densities and/or low temperatures, electron screening becomes highly important, which reduces the Coulomb repulsion.
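As a quick numerical illustration (not part of the paper), the integral in Eq. (\[rate\]) can be evaluated directly; a useful check is that for a constant cross section $\sigma(E)=\sigma_0$ the integral equals $\sigma_0(kT)^2$, so $\left<\sigma v\right>$ must reduce to $\sigma_0\sqrt{8kT/(\pi M)}$, i.e. $\sigma_0$ times the mean Maxwell-Boltzmann relative speed. Units and parameter values below are arbitrary:

```python
import numpy as np

def sigma_v(sigma, M, kT, n=200_000):
    """Numerically evaluate <sigma v> from Eq. (rate) in arbitrary consistent
    units; sigma is a function of the relative energy E."""
    E = np.linspace(1e-8, 50.0 * kT, n)          # integrate well past the exponential cutoff
    f = E * sigma(E) * np.exp(-E / kT)
    integral = np.sum((f[1:] + f[:-1]) * np.diff(E)) / 2.0   # trapezoidal rule
    return np.sqrt(8.0 / (M * np.pi)) * kT**-1.5 * integral

# Check against the constant-cross-section limit.
M_red, kT, sigma0 = 2.0, 0.3, 1.5
num = sigma_v(lambda E: sigma0 * np.ones_like(E), M_red, kT)
ana = sigma0 * np.sqrt(8.0 * kT / (np.pi * M_red))
print(num, ana)    # agree to high precision
```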
In the laboratory, the cross section $\sigma^{0\nu}$ for targets in the ground state is usually measured. However, if the plasma is in thermal equilibrium – and this is also a prerequisite for the application of the MBD – the nuclei will rather be thermally excited [@arn72]. This has to be accounted for by summing over the excited target states and weighting each contribution with a factor describing the probability of the thermal excitation. The ratio of the [*stellar*]{} cross section $\sigma^*$ and the laboratory cross section $\sigma^{0\nu}$ is called stellar enhancement factor (SEF=$\sigma^*/\sigma^{0\nu}$). The stellar reaction rate is then obtained by inserting $\sigma^*$ into Eq. (\[rate\]): $r^*=\left< \sigma^* v \right> n_j n_k = \left<
\sigma v \right>^* n_j n_k$. It should be noted that only the stellar rate $r^*$ (involving the stellar cross section) always obeys reciprocity (because $\sigma^*$ does) and that therefore only $r^*$ can be used to compute the reverse rate. Thus, it is very important when measuring reaction cross sections in the laboratory for astrophysical application to measure the cross section for the reaction in the direction that is least affected by excited states in the target. This is almost always the exoergic reaction (i.e. $Q>0$). Even so, the stellar rate at temperatures in excess of a few billion degrees will vary considerably from that of a determination based on targets in their ground states. The experimentally determined laboratory rate should then be multiplied by the SEF, which can, in most cases, only be determined by a theoretical calculation. Even at the low temperatures of the $s$ process, the SEF can already be important. Neutron capture cross sections of isotopes in the rare earth region have recently been measured to such a high accuracy that the knowledge of precise SEFs becomes crucial for further constraining $s$ process conditions [@voss98]. In principle, sufficiently reliable SEF calculations would provide an additional thermometer for the $s$-process environment, completely independent of the usual estimates via $s$-process branchings [@KBW89].
In general, the cross section will be the sum of the cross sections resulting from compound reactions via an average over overlapping resonances (HF) and via single resonances (BW), direct reactions (DI) and interference terms: $$\label{sumcs}
\sigma(E)=\sigma^{\mathrm{HF}}(E)+
\sigma^{\mathrm{BW}}(E)+
\sigma^{\mathrm{DI}}(E)+
\sigma^{\mathrm{int}}(E) \quad .$$ Depending on the number of levels per energy interval in the system projectile+target, different reaction mechanisms will dominate [@wag69; @rau97]. Since different regimes of level densities are probed at the various projectile energies, the application of a specific description depends on the energy. In astrophysics, one is interested in energies in the range from a few tens of MeV down to keV or even thermal energies (depending on the charge of the projectile). For instance, when varying the energy of a neutron beam from 10 MeV down to thermal energy, the resulting cross sections will be dominated by mainly three different contributions: At the highest energy, many close resonances overlap and allow one to use an average cross section calculated in the statistical model (Hauser-Feshbach, HF). Toward lower energies, the nuclear states become more and more widely spaced until one can identify single resonances, which can be included in the HF equation as a special case, yielding the well-known Breit-Wigner (BW) shape. In between resonances, the cross sections are determined by the tails of resonances and the direct (DI) contribution. At the lowest energies, the levels can be so widely spaced that the cross section is described well by the direct component alone. Extrapolations from one regime into another can be extremely misleading.
The relevant nuclear levels are found in the effective energy window of a reaction, i.e. the energy range which is mainly contributing to the determination of the nuclear reaction rate. This energy window is usually well defined, due to the sharply peaking integrand of Eq. (\[rate\]). For neutrons, it is given by the width of the peak of the MBD. For charged particles the cross section includes the penetrability through the Coulomb barrier which is exponentially increasing with increasing energy. Folding this cross section with the velocity distribution gives rise to a broader peak shifted to higher energies, the so-called Gamow peak. Using experimental information or a theoretical level density description, it is possible to determine the number of levels within the effective energy window and thus derive the applicability of the statistical model as a function of temperature [@rau97]. Below a critical temperature, averaging over too few resonances is not appropriate anymore and the HF will misjudge the cross section. The critical temperature is especially high for targets with closed shells which exhibit widely spaced nuclear levels, and for targets close to the driplines which have low particle separation energies (and $Q$ values).
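The shift of the effective energy window for charged particles can be made concrete with the standard textbook form of the integrand (not written out above): $\exp(-E/kT)\,\exp(-b/\sqrt{E})$, where the second factor approximates the Coulomb penetrability. The product peaks at the Gamow energy $E_0=(bkT/2)^{2/3}$, well above $kT$. A quick numerical check in arbitrary units:

```python
import numpy as np

# Gamow-peak sketch in arbitrary units; b encodes the height of the Coulomb
# barrier. kT and b below are illustrative values, not from the text.
kT, b = 0.1, 10.0
E = np.linspace(0.01, 5.0, 100_000)
integrand = np.exp(-E / kT - b / np.sqrt(E))

E_peak = E[np.argmax(integrand)]
E0 = (b * kT / 2.0) ** (2.0 / 3.0)     # analytic Gamow energy
print(E_peak, E0, kT)   # numeric and analytic peak positions agree, both well above kT
```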
The Statistical Model {#hf}
=====================
The majority of nuclear reactions in astrophysics can be described in the framework of the statistical model (HF) [@hau52]. This description assumes that the reaction proceeds via a compound nucleus which finally decays into the reaction products. With a sufficiently high level density, average cross sections $$\sigma^{\rm HF}=\sigma_{\rm form} b_{\rm dec} = \sigma_{\rm form}
{\Gamma_{\rm final} \over \Gamma_{\rm tot}}$$ can be calculated which can be factorized into a cross section $\sigma_{\rm form}$ for the formation of the compound nucleus and a branching ratio $b_{\rm dec}$, describing the probability of the decay into the channel of interest compared with the total decay probability into all possible exit channels. The partial widths $\Gamma$ as well as $\sigma_{\rm form}$ are related to (averaged) transmission coefficients, which comprise the central quantities in any HF calculation.
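A toy numerical illustration of this factorization (the numbers below are arbitrary placeholders, not real nuclear data): the formation cross section is shared among the exit channels in proportion to their transmission coefficients, so the branching ratios sum to one and the channel cross sections add back up to $\sigma_{\rm form}$.

```python
# Hauser-Feshbach factorization with made-up numbers, for illustration only.
sigma_form = 120.0                                         # formation cross section (mb)
T = {"n": 0.8, "p": 0.3, "alpha": 0.05, "gamma": 0.02}     # toy transmission coefficients
T_tot = sum(T.values())

# sigma_HF(channel) = sigma_form * Gamma_channel / Gamma_tot,
# with partial widths proportional to the transmission coefficients.
sigma_HF = {ch: sigma_form * t / T_tot for ch, t in T.items()}

for ch, s in sigma_HF.items():
    print(f"{ch:>5}: {s:7.2f} mb")
print("sum :", sum(sigma_HF.values()))   # ~ sigma_form, since the branching ratios sum to 1
```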
Many nuclear properties enter the computation of the transmission coefficients: mass differences (separation energies), optical potentials, GDR widths, level densities. The transmission coefficients can be modified due to pre-equilibrium effects which are included in width fluctuation corrections [@tep74] (see also [@rau97] and references therein) and by isospin effects. It is in the description of the nuclear properties where the various HF models differ.
In the following sections, the most important ingredients and the usual parametrizations used in astrophysical applications are briefly discussed. A choice of what are currently thought to be the best parametrizations is incorporated in the new HF code NON-SMOKER [@nonsmoker], which is based on the well-known code SMOKER [@thi87].
Optical Potentials
------------------
Early astrophysical studies (e.g. [@arn72; @hol76; @woo78]) made use of simplified equivalent square well potentials and the black nucleus approximation. The latter is equivalent to a fully absorptive potential once a particle has entered the potential well, and therefore does not permit resonance effects. This leads to deviations from experimental data at low energies, especially in mass regions where broad resonances in the continuum can be populated [@hof98]. An additional effect, which is only pronounced for $\alpha$ particles, is that absorption in the Coulomb barrier [@mic70] is neglected in this approach.
Improved calculations have to employ appropriate [*global*]{} optical potentials which make use of imaginary parts describing the absorption. The situation is different for nucleon-nucleus and $\alpha$-nucleus potentials. Global optical potentials are quite well defined for neutrons and protons. It was shown [@thi83; @cow91] that the best fit of s-wave neutron strength functions is obtained with the optical potential by [@jeu77], based on microscopic infinite nuclear matter calculations for a given density, applied with a local density approximation. It includes corrections of the imaginary part [@fantoni81; @mahaux82]. A similar description can be used for protons. Numerous experimental data document the reliability of the neutron potential for astrophysical applications. For protons, data are scarcer, but recent investigations [@bork98] also show good agreement.
In the case of $\alpha$-nucleus potentials, there are only a few [*global*]{} parametrizations available at astrophysical energies. Most global potentials are of the Saxon-Woods form, parametrized at energies above about 70 MeV, e.g. [@sin76; @nol87]. The high Coulomb barrier makes a direct experimental approach very difficult at low energies. More recently, there were attempts to extend those parametrizations to energies below 70 MeV [@avr94]. Astrophysical calculations mostly employed a phenomenological Saxon-Woods potential based on extensive data [@mcf66]. This potential is an energy- and mass-independent mean potential. However, especially at low energies the imaginary part of the potential should be highly energy-dependent. Nevertheless, this potential has proved very successful in describing HF cross sections. It has failed so far in only one case, the recently measured $^{144}$Sm($\alpha$,$\gamma$)$^{148}$Gd low-energy cross section [@som98]. Nevertheless, this showed that future improved $\alpha$ potentials have to take into account the mass- and energy-dependence of the potential. Several attempts have been made to construct such an improved potential. Extended investigations of $\alpha$ scattering data [@mohr94; @atz96] have shown that the data can best be described with folding potentials [@sat79]. They also found a systematic mass- and energy-dependence. Very recently, that description was used for a global approach [@nonsmoker; @hirschegg]. The idea is to parametrize the data including nuclear structure information. The accuracy reached [@hirschegg] is comparable to the potential of Ref. [@mcf66]. The same approach was used by [@gra98], without including further microscopic information. However, the limitation of this method is that a Saxon-Woods term with fixed geometry is still used for the imaginary part. The resulting transmission coefficients are very sensitive to the shape of the imaginary part, which leads to an ambiguity in the parametrization.
Experimental data indicate that the geometry may also be energy-dependent [@avr94; @mohr97; @bud78]. This can be understood in terms of the semi-classical theory of elastic scattering [@bri77], which shows that different radial parts of the potential are probed as the energy varies. The effect can be explicitly considered in a global potential [@ringberg]. Nevertheless, more experimental data are needed, which should be consistently analyzed at different energies. Further complications arise from the fact that it is as yet unclear whether potentials extracted from scattering data can indeed describe transmission coefficients well [@avr94]. Clearly, further effort has to be put into the improvement of global $\alpha$-nucleus potentials at astrophysically relevant energies.
$\gamma$ Width
--------------
The $\gamma$-transmission coefficients have to include both E1 and M1 $\gamma$ transitions. The smaller, less important M1 transitions have usually been treated with the simple single-particle approach $T\propto E^3$ [@bla52]. The dominant E1 transitions are usually calculated on the basis of the Lorentzian representation of the Giant Dipole Resonance (GDR). Many microscopic and macroscopic models have been devoted to the calculation of GDR energies and widths. Analytical fits as a function of mass number $A$ and charge $Z$ were also used in astrophysical calculations [@hol76; @woo78]. An excellent fit to the GDR energies is obtained with the hydrodynamic droplet model [@mye77]. An improved microscopic-macroscopic approach is used in most modern reaction rate calculations, based on dissipation and the coupling to quadrupole surface vibrations (see [@cow91]).
Most recently it was shown [@gor98] that the inclusion of pygmy resonances might have important consequences on the E1 transitions in neutron-rich nuclei far off stability. The pygmy resonances can be caused by a neutron skin which generates soft vibrational modes [@isa92]. While the effect close to stability is small, neutron capture cross sections could be considerably enhanced close to the neutron dripline.
Level Density
-------------
Until recently, the nuclear level density (NLD) has given rise to the largest uncertainties in the description of nuclear reactions [@woo78; @cow91]. For large scale astrophysical applications it is necessary to not only find reliable methods for NLD predictions but also computationally feasible ones. Such a model is the non-interacting Fermi-gas model. Most statistical model calculations use the back-shifted Fermi-gas description [@gil65]. More sophisticated Monte Carlo shell model calculations (e.g. [@dea95]), as well as combinatorial approaches (e.g. [@paa97]), have shown excellent agreement with this phenomenological approach and justified the application of the Fermi-gas description. While different fits to different mass regions to obtain the free parameters were performed in many investigations [@hol76; @woo78; @cow91], a most recent study was able to arrive at considerably improved NLDs with fewer parameters in the mass range $20\leq A\leq 245$ [@rau97]. They applied an energy-dependent NLD parameter [@ign] together with microscopic corrections from nuclear mass models. The fit to experimental NLDs is also better than a recent analytical BCS approach [@gor96] which implemented level spacings from a microscopic mass model. (In fact, see [@pea96; @doba96] for doubts on the reliability of the BCS model for neutron-rich nuclei). Further work has to be invested in the problem of the prediction of the parity distribution at low excitation energies of the nucleus.
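For concreteness, the back-shifted Fermi-gas description mentioned above has the standard textbook form $\rho(E)\propto\exp(2\sqrt{aU})/U^{5/4}$ with $U=E-\delta$; the expression and the parameter values in the sketch below are generic illustrations, not the fitted values of [@rau97]:

```python
import numpy as np

def bsfg_density(E, a=15.0, delta=0.5, sigma_c=3.0):
    """Back-shifted Fermi-gas level density (standard textbook form):
    a is the level-density parameter (MeV^-1), delta the back-shift (MeV),
    sigma_c the spin-cutoff parameter. All values are illustrative."""
    U = E - delta
    return np.exp(2.0 * np.sqrt(a * U)) / (
        12.0 * np.sqrt(2.0) * sigma_c * a**0.25 * U**1.25)

E = np.array([2.0, 5.0, 8.0])      # excitation energies in MeV
rho = bsfg_density(E)
print(rho)    # the level density grows steeply with excitation energy
```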
Isospin Effects
---------------
The original HF equation [@hau52] implicitly assumes complete isospin mixing but can be generalized to explicitly treat the contributions of the dense background states with isospin $T^<=T^{\rm
g.s.}$ and the isobaric analog states with $T^>=T^<+1$ [@gri71; @har77]. The inclusion of the isospin treatment has two major effects on statistical cross section calculations in astrophysics [@nonsmoker]: the suppression of $\gamma$ widths for reactions involving self-conjugate nuclei and the suppression of the neutron emission in proton-induced reactions. (Non-statistical effects, i.e. the appearance of isobaric analog resonances, will not be discussed here.) Firstly, in the case of ($\alpha$,$\gamma$) reactions on targets with $N=Z$, the cross sections will be heavily suppressed because $T=1$ states cannot be populated due to isospin conservation. A suppression will also be found for capture reactions leading into self-conjugate nuclei, although somewhat less pronounced because $T=1$ states can be populated according to the isospin coupling coefficients. In previous reaction rate calculations [@woo78; @cow91] the suppression of the $\gamma$–widths was treated completely phenomenologically by employing arbitrary and mass-independent suppression factors. In the new NON-SMOKER code [@nonsmoker], the appropriate $\gamma$ widths are automatically obtained, by explicitly accounting for $T^<$ and $T^>$ states.
Secondly, assuming incomplete isospin mixing, the strength of the neutron channel will be suppressed in comparison to the proton channel in reactions p+target [@gri71; @sar82]. This leads to a smaller cross section for (p,n) reactions and an increase in the cross section of (p,$\gamma$) reactions above the neutron threshold. Such an effect has recently been found in a comparison of experimental data and NON-SMOKER results [@bork98].
Direct Reactions
================
As stated above, the HF approach can only be applied for sufficiently high NLDs [@rau97]. At low NLDs, the other terms in Eq. (\[sumcs\]) will begin to dominate. Many investigations (e.g. [@ohu96; @mei96; @bee95; @kra96; @mohr98; @mohr98a]) have been devoted to the calculation of direct neutron capture for light nuclei and nuclei close to magic neutron numbers. Utilizing folding potentials, these calculations can yield reliable cross sections provided that information on the bound states and the spectroscopic factors is known. Even in the regime of single resonances, the feeble DI contribution can be seen nowadays, when comparing highly precise resonance data and activation measurements (e.g. [@koe98]).
The prediction of the DI contribution to neutron capture cross sections close to the dripline (which may be important in the $r$ process) remains a challenge. Far off stability, the required nuclear properties are not known and have to be taken from other theories [@mat; @gor97; @rau98]. However, it was shown [@rau98] that a straightforward application produces cross sections which are highly sensitive to slight changes in the predicted masses and level energies. Furthermore, it is not yet clear which spectroscopic factors to employ and how to model interference between DI and HF in a simple manner. Further work is clearly needed.
Summary
=======
The new generation of HF models can make reliable predictions of nuclear cross sections. Furthermore, the applicability range of HF has been quantified and thus the boundary between different reaction mechanisms clarified. Although the phenomenological parametrizations of nuclear properties already display good quality, there is a clear need for more experimental data for checking and further improving current models. Especially investigations over a large mass range would prove useful to fill in gaps in the knowledge of the nuclear structure of many isotopes and to construct more powerful parameter systematics, which sometimes are badly known even at the line of stability. Such investigations should include neutron-, proton- and $\alpha$-strength functions, as well as radiative widths, and charged particle scattering and reaction cross sections for [*stable*]{} and unstable isotopes. More capture data with self-conjugate final nuclei would also be highly desirable.
The new code NON-SMOKER [@nonsmoker] contains updated nuclear information as well as additional effects. The NON-SMOKER reaction rate library is electronically available at [*http://quasar.physik.unibas.ch/~tommy/reaclib.html*]{} .
[99]{}[
Fowler W.A., Caughlan G.E., & Zimmerman B.A., 1967, Ann. Rev. Astron. Astrophys. 5, 525
Arnould M., 1972, 19, 92
Voss F., , 1998, , in press; Wisshak K., , this volume
Käppeler F., Beer H., & Wisshak K., 1989, Rep. Prog. Phys. 52, 945
Wagoner R.V., 1969, Ap. J. Suppl. 18, 247
Rauscher T., Thielemann F.-K., & Kratz K.-L., 1997, Phys. Rev. C 56, 1613
Hauser W., Feshbach H., 1952, Phys. Rev. 87, 366
Tepel J.W., Hoffmann H.M., & Weidenmüller H.A., 1974, Phys. Lett. 49B, 1
Rauscher T., Thielemann F.-K., 1998, ed Mezzacappa A., in [*Stellar Evolution, Stellar Explosions, and Galactic Chemical Evolution*]{}. IOP Publishing, Bristol, p. 519; preprint nucl-th/9802040
Thielemann F.-K., Arnould M., & Truran J., 1987, ed Vangioni-Flam E., in [*Advances in Nuclear Astrophysics*]{}. Editions Frontières, Gif-sur-Yvette, p. 525
Holmes J.A., , 1976, ADNDT 18, 306
Woosley S.E., , 1978, ADNDT 22, 371
Hoffman R.D., , 1998, , submitted; preprint astro-ph/9809240
Michaud G., Scherk L., & Vogt E., 1970, Phys. Rev. C 1, 864
Thielemann F.-K., Metzinger J., & Klapdor H.V., 1983, Z. Phys. A 309, 301
Cowan J.J., Thielemann F.-K., & Truran J.W., 1991, Phys. Rep. 208, 267
Jeukenne J.P., Lejeune A., & Mahaux C., 1977, Phys. Rev. C 16, 80
Fantoni S., Friman B.L., & Pandharipande V.R., 1981, Phys. Rev. Lett. 48, 1089
Mahaux C., 1982, Phys. Rev. C 82, 1848
Bork J., , 1998, Phys. Rev. C 58, 524
Singh P.P., Schwandt P., 1976, Nukleonika 21, 451
Nolte M., Machner H., & Bojowald J., 1987, Phys. Rev. C 36, 1312
Avrigeanu V., Hodgson P.E., & Avrigeanu M., 1994, Phys. Rev. C 49, 2136
McFadden L., Satchler G.R., 1966, Nucl. Phys. 84, 177
Somorjai E., , 1998, 333, 1112
Mohr P., , 1994, eds Somorjai E., Fülöp Zs., in [*Proc. Europ. Workshop on Heavy Element Nucleosynthesis*]{}. Institute of Nuclear Research, Debrecen, p. 176
Atzrott U., , 1996, Phys. Rev. C 53, 1336
Satchler G.R., Love W.G., 1979, Phys. Rep. 55, 183
Rauscher T., 1998, eds Buballa M., , in [*Nuclear Astrophysics*]{}. GSI, Darmstadt, p. 288; preprint nucl-th/9802026
Grama C., Goriely S., this volume
Mohr P., , 1997, Phys. Rev. C 55, 1523
Budzanowski A., , 1978, Phys. Rev. C 17, 951
Brink D.M., Takigawa N., 1977, Nucl. Phys. A279, 159
Rauscher T., 1998, ed Müller E., in [*Ringberg Proceedings*]{}. MPA, Garching, in press
Blatt J.M., Weisskopf V.F., 1952, [*Theoretical Nuclear Physics*]{}. Wiley, New York
Myers W.D., , 1977, Phys. Rev. C 15, 2032
Goriely S., 1998, Phys. Lett. B, in press; Goriely S., this volume
Van Isacker P., Nagarajan M.A., & Warner D.D., 1992, Phys. Rev. C 45, R13
Gilbert A., Cameron A.G.W., 1965, Can. J. Phys. 43, 1446
Dean D.J., , 1995, Phys. Rev. Lett. 74, 2909
Paar V., Pezer R., 1997, Phys. Rev. C 55, R1637
Ignatyuk A.V., Smirenkin G.N., & Tishin A.S., 1975, Yad. Fiz. 21, 485
Goriely S., 1996, Nucl. Phys. A605, 28
Pearson J.M., Nayak R.C., & Goriely S., 1996, Phys. Lett. B387, 455
Dobaczewski J., , 1996, Phys. Rev. C 53, 1
Grimes S.M., , 1972, Phys. Rev. C 5, 85
Harney H.L., Weidenmüller H.A., & Richter A., 1977, Phys. Rev. C 16, 1774
Sargood D.G., 1982, Phys. Rep. 93, 61
Oberhummer H., , 1996, Surv. Geophys. 17, 665
Meißner J., , 1996, Phys. Rev. C 53, 459/977
Beer H., , 1995, Phys. Rev. C 52, 3342
Krausmann E., , 1996, Phys. Rev. C 53, 469
Mohr P., , 1998, Phys. Rev. C 58, 932
Mohr P., , 1998, this volume
Koehler P., , 1998, this volume
Mathews G.J., , 1983, 270, 740
Goriely S., 1997, 325, 414
Rauscher T., , 1998, Phys. Rev. C 57, 2031
]{}
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Absorption spectra of HD have been recorded in the wavelength range of 75 to 90 nm at 100 K using the vacuum ultraviolet Fourier transform spectrometer at the Synchrotron SOLEIL. The present wavelength resolution represents an order of magnitude improvement over that of previous studies. We present a detailed study of the [$D^{1}\Pi_{u}$]{} - X$^{1}\Sigma^{+}_{g}$ system observed up to $v''=18$. The Q-branch transitions probing levels of $\Pi^{-}$ symmetry are observed as narrow resonances limited by the Doppler width at 100 K. Line positions for these transitions are determined to an estimated absolute accuracy of 0.06 [cm$^{-1}$]{}. Predissociation line widths of $\Pi^{+}$ levels are extracted from the absorption spectra. A comparison with the recent results on a study of the [$D^{1}\Pi_{u}$]{} state in H$_{2}$ and D$_{2}$ reveals that the predissociation widths scale as $\mu^{-2}J(J+1)$, with $\mu$ the reduced mass of the molecule and $J$ the rotational angular momentum quantum number, as expected from an interaction with the $B''^{1}\Sigma_{u}^{+}$ continuum causing the predissociation.'
address: '$^{1}$Institute for Lasers, Life and Biophotonics Amsterdam, VU University, de Boelelaan 1081, 1081HV, Amsterdam, The Netherlands'
author:
- 'G.D. Dickenson$^{1}$, W. Ubachs$^{1}$'
bibliography:
- '/Users/garydee/Documents/Articles/CompleteDataBase.bib'
title: 'The [$D^{1}\Pi_{u}$]{} state of HD and the mass scaling relation of its predissociation widths'
---
Introduction
============
Since the early years of quantum mechanics the hydrogen molecule has been studied extensively and has provided theorists with an ideal testing ground for calculations. The stable isotopic variants of H$_{2}$, namely D$_{2}$ and HD, allow for testing mass scaling effects. The [$D^{1}\Pi_{u}$]{} state was found to undergo predissociation above the third vibrational level (the fourth vibrational level for D$_2$ and HD), which can be accurately described by Fano’s theory of a single bound state interacting with a continuum [@Fano1961].
The [$D^{1}\Pi_{u}$]{} state of HD has received considerably less attention than those of the other two stable isotopic variants. On the experimental side, early work determined Q(1) transitions for the lowest three vibrations accurate to within a few [cm$^{-1}$]{}, and subsequent measurements yielded level energies for both $\Pi^{+}$ and $\Pi^{-}$ parity components up to $v'=8$ with accuracies of $\sim$5 [cm$^{-1}$]{}. A profile analysis of the predissociated line shapes was conducted by Dehmer and Chupka for $v'=7$ and 9, as well as a separate study focussing on the line positions of the R(0), R(1) and Q(1) transitions for $v'=7-16$ [@Dehmer1980; @Dehmer1983], with accuracies of $\sim$4 [cm$^{-1}$]{}. Theoretically, the vibrational levels up to the $n=3$ dissociation limit of the [$D^{1}\Pi_{u}$]{} state have been calculated [@Kolos1976], and term values for the lowest three vibrations ($v'=0-2$) were computed in a study focussing mainly on the Lyman and Werner bands of HD [@Abgrall2006].
The present work on HD is an extension to the studies of the [$D^{1}\Pi_{u}$]{} state in H$_{2}$ and D$_{2}$ [@Dickenson2010a; @Dickenson2011]. The measurements were obtained with the vacuum ultraviolet (VUV) Fourier transform spectrometer (FTS) at the DESIRS beamline of the synchrotron SOLEIL. The line widths of transitions probing $\Pi^{+}$ levels for all three hydrogen isotopomers are used to verify scaling laws for the predissociation in the [$D^{1}\Pi_{u}$]{} state.
Experiment {#sec:Experiment}
==========
![An overview of the recorded spectra analysed in the present study. The band heads of the [$D^{1}\Pi_{u}$]{} state are indicated up to $v'=18$. Other prominent spectral features are associated with the $B^{1}\Sigma^{+}_{u}$, $B'^{1}\Sigma^{+}_{u}$ and $C^{1}\Pi_{u}$ states below the $n=2$ dissociation limit [@Ivanov2010] and $B''^{1}\Sigma^{+}_{u}$ states above $n=2$. Above the ionization limit, 124568 [cm$^{-1}$]{}[@Sprecher2010] many auto-ionization resonances appear in the spectrum.[]{data-label="Fig:Overview"}](AssignedBands.eps){width="1\linewidth"}
The VUV FTS is a scanning wavefront-division interferometer operational from 40-200 nm. It has been used previously in a study of the Lyman and Werner bands of HD [@Ivanov2010]. We provide only a short description of the experimental configuration; for a detailed explanation we refer to [@deOliveira2009; @deOliveira2011]. The light source is undulator based and can be tuned continuously to produce a bell-shaped output window spanning approximately 5 nm, as illustrated in figure \[Fig:Overview\]. The undulator radiation passes through a windowless T-shaped cell, 10 cm in length, which contains a quasi-static HD sample slowly flowing out of either side of the cell. The purity of the HD gas is estimated at $\geq$ 99%, with some traces of H$_{2}$ resulting in weak spectral features associated with H$_{2}$ Lyman bands in the low energy region. The HD is cooled to a temperature of 100 K by liquid nitrogen flowing over the outside of the T-shaped cell.
Each measurement was recorded by taking 512 kilosamples of data over the optical path difference, resulting in an instrumental width of 0.33 [cm$^{-1}$]{}. Each final spectral window was averaged over 100 individual interferograms and took about two hours to accumulate. The pressure inside the absorption cell can be regulated, thereby changing the column density. Spectra were recorded at a column density high enough that transitions appear with optimal signal-to-noise ratio but without saturation. A spectral range spanning 112 000 - 134 000 [cm$^{-1}$]{} (75-90 nm) was covered by three spectral windows, each set at a different central wavelength, as shown in figure \[Fig:Overview\].
The wavelength scale of the FT spectra displays strict linearity, so that only one fixed point is required for calibration. This is provided by a transition in atomic argon, present in the gas filter that is used to remove the higher-order harmonics of the radiation produced by the undulator. The transition is the $(3p)^{5}(^{2}\rm{P}_{3/2})9d([3/2]) - (3p)^{6}$ $^{1}\rm{S}_{0}$ transition at 125718.13 [cm$^{-1}$]{}, known to an accuracy of 0.03 [cm$^{-1}$]{} [@Sommavilla2002].
Theory {#sec:Theory}
======
The predissociation of the [$D^{1}\Pi_{u}$]{} state is due to a strong Coriolis coupling to the continuum of the $B'^{1}\Sigma_{u}^{+}$ state [@Monfils1961]. Owing to the $\Sigma^{+}$ character of this continuum, transitions probing levels of $\Pi^{-}$ symmetry are not affected by the interaction and are only very weakly predissociated through coupling with the lower-lying $C^{1}\Pi_{u}$ continuum [@Glass-Maujean2010a]. A single continuum interacting with a bound state is described by Fano’s theory [@Fano1961] and gives rise to broadened, asymmetric absorption profiles described by the Fano function. For more details we refer to our previous works [@Dickenson2010a; @Dickenson2011]. The widths, broadened by life-time shortening due to the predissociation, are given by
$$\begin{aligned}
\Gamma_{v'} = 2 \pi | \langle \psi_{B'\epsilon}|H(R)|\psi_{Dv'} \rangle |^{2}
\label{eqn:Widths}\end{aligned}$$
where $\psi_{B'\epsilon}$ and $\psi_{D}$ are the wavefunctions of the $B'$ continuum and the discrete $D$ state respectively. Here the energy value $B'_{\epsilon}$ of the $B'$ state is taken equal to the non-perturbed energy of the discrete level $D_{v'}$. The rotational operator, $H(R)$, is the $\overrightarrow{L} \cdot \overrightarrow{R}$ operator (also responsible for the $\Lambda$-doublet splitting) which causes the predissociation widths of levels $D_{v'}$ to scale as
$$\begin{aligned}
\Gamma_{v'} \propto \frac{1}{\mu^{2}}J(J+1)
\label{eqn:V}\end{aligned}$$
where $\mu$ is the reduced mass of the molecule and $J$ is the rotational quantum number. The reduced mass for the three isotopomers H$_{2}$, HD and D$_{2}$ are 0.5, 0.67 and 1.0 a.m.u. respectively.
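A minimal numerical sketch of this scaling law; it reduces a measured width by the $\mu^{-2}J(J+1)$ factor and predicts width ratios between isotopomers at equal excess energy:

```python
# Reduced masses (a.m.u.) of the three stable isotopomers, as quoted above.
MU = {"H2": 0.5, "HD": 0.67, "D2": 1.0}

def scaled_width(gamma, molecule, j):
    """Remove the mu^-2 J(J+1) dependence of the scaling law from a width."""
    return gamma * MU[molecule] ** 2 / (j * (j + 1))

def predicted_ratio(mol_a, mol_b):
    """Predicted width ratio Gamma_a / Gamma_b at equal excess energy and J."""
    return (MU[mol_b] / MU[mol_a]) ** 2

# Example: H2 widths are predicted to be 4 times the D2 widths at the same
# excess energy and J, and an HD R(1) width (J'=2) reduces by 0.67**2 / 6.
ratio = predicted_ratio("H2", "D2")
reduced = scaled_width(7.3, "HD", 2)
```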
Results and Discussion {#sec:ResultsAndDiscussion}
======================
![Detailed spectrum of the $D^{1}\Pi_{u}(v'=6)$ - $X^{1}\Sigma^{+}_{g}(v''=0)$ band with R(0), R(1) and Q(1) transitions. The dotted white line represents a least squares fit of the data with the appropriate convoluted functions (see text for details).[]{data-label="Fig:Dv=6"}](Dv6R0R1Q1.eps){width="1\linewidth"}
The region above the second dissociation limit in HD is a complex multi-line spectrum that, when cooled to liquid nitrogen temperatures, consists of six overlapping Rydberg series [@Dehmer1983]. The absorption spectrum is heavily congested, making a complete analysis of all spectral features a challenge. In particular, the [$D^{1}\Pi_{u}$]{} state is recognisable from its broadened Beutler-Fano profiles, which aid the assignment. Our assignments agree with the previous works cited above, which are accurate to within 4-5 [cm$^{-1}$]{}. The largest discrepancy occurs for the $D^{1}\Pi_{u}(v'=9)-X^{1}\Sigma^{+}_{g}(v''=0)$ band, differing from the present line positions by $\sim 8$ [cm$^{-1}$]{}, possibly attributable to wavelength drive slippage of the monochromator, as mentioned by the authors. Beyond $v'=16$ the identifications are aided by published calculations of the band heads, accurate to within 1-3 [cm$^{-1}$]{}. From an estimate of the $\Lambda$-doublet splitting, the Q(1) transitions could be identified. The R(1) transitions beyond $v'=15$ were too weak to be observed.
The Q-branch transitions, narrow resonances limited by Doppler broadening, were observed up to $v'=18$ and are listed in table \[Tab:Pi-\]. The R-branch transitions, which are broadened for $v'\geq4$, were also observed up to $v'=18$ (R(0) transitions only). Transition energies and predissociation widths for these transitions are listed in table \[Tab:Pi+\]. All line positions and widths listed in the tables stem from a deconvolution procedure as described in previous work on D$_{2}$ [@Dickenson2011]. Briefly, the absorption profiles are first convolved with a Gaussian function representative of the Doppler profile at 100 K. In a second step the Beer-Lambert law is included, accounting for the non-linear absorption depth. Finally, the resulting profiles are convolved with the instrument function, and the fit parameters are optimised by a standard least-squares fitting routine. The fit parameters also include the points of an unconstrained cubic-spline representation of the background, which are optimised along with the line-shape parameters. A sample fit of the Q(1), R(1) and R(0) transitions belonging to the $D-X$ (6,0) band is shown in figure \[Fig:Dv=6\]. The Q(1) transition has a width of approximately 0.6 [cm$^{-1}$]{}, which results from the combined contributions of the instrument width of 0.33 [cm$^{-1}$]{} and the Doppler width of 0.5 [cm$^{-1}$]{}, and represents the limiting resolution for the particular configuration of the FTS used.
In this analysis of the line widths, the Beutler-Fano asymmetry of the line shape, represented by the Fano $q$-parameter, was included by fixing the $q$ values to their theoretical prediction. Upon mass-scaling the $q$-parameters ($q \propto \mu$) it follows that $q \sim -25$ for R(0) and $q \sim -15$ for R(1) transitions in HD [@Glass-Maujean1979; @Dickenson2011]. The present data did not permit a reliable two-parameter fit to extract both $q$ and $\Gamma$, in part because the spectra are recorded in absorption against a fluctuating continuum level. For further discussion see the previous work on D$_{2}$ [@Dickenson2011].
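A minimal numpy sketch of this fit model (Fano cross-section convolved with the Doppler Gaussian, Beer-Lambert absorption, then convolution with the instrument function); the parameter values below are illustrative and the cubic-spline background of the actual analysis is omitted:

```python
import numpy as np

def fano(nu, nu0, gamma, q):
    """Fano profile (q + eps)^2 / (1 + eps^2), with eps = 2(nu - nu0)/gamma."""
    eps = 2.0 * (nu - nu0) / gamma
    return (q + eps) ** 2 / (1.0 + eps ** 2)

def gaussian_kernel(x, fwhm):
    """Area-normalized Gaussian kernel on the grid x (centered at x = 0)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

def model_transmission(nu, nu0, gamma, q, tau0, doppler_fwhm, instr_fwhm):
    """Fano cross-section -> Doppler convolution -> Beer-Lambert -> instrument."""
    x = nu - nu.mean()  # symmetric grid for the convolution kernels
    tau = tau0 * np.convolve(fano(nu, nu0, gamma, q),
                             gaussian_kernel(x, doppler_fwhm), mode="same")
    transmission = np.exp(-tau)  # Beer-Lambert law
    return np.convolve(transmission, gaussian_kernel(x, instr_fwhm), mode="same")

# Illustrative parameters only: q fixed as in the analysis, tau0 arbitrary,
# Doppler FWHM 0.5 cm^-1 and instrument width 0.33 cm^-1 as quoted above.
nu = np.linspace(-20.0, 20.0, 2001)  # cm^-1, relative to the line center
spectrum = model_transmission(nu, 0.0, 2.8, -25.0, 0.005, 0.5, 0.33)
```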
Spectroscopic Results {#subsec:SpectroscopicResults}
---------------------
![Small portions of spectral windows containing the $D^{1}\Pi_{u}(v'=7-9)$ - $X^{1}\Sigma^{+}_{g}(v''=0)$ bands of HD. These bands are predissociated and display typical broadened Beutler-Fano profiles.[]{data-label="Fig:BeutlerFano"}](Dv7-9.eps){width="1\linewidth"}
The Q-branch transitions, *i.e.* transitions probing states of $\Pi^{-}$ symmetry, and transitions belonging to bands with $v' \leq 3$ are not predissociated and are observed as narrow features with widths of $\sim$0.6 [cm$^{-1}$]{}, equal to the Doppler width of HD at 100 K convolved with the instrument width. The uncertainty in the line positions reported in table \[Tab:Pi-\] and for the unpredissociated bands listed in table \[Tab:Pi+\] is estimated at 0.06 [cm$^{-1}$]{}. For slightly saturated, blended and weak lines the uncertainty estimate increases to 0.08 [cm$^{-1}$]{}.
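These widths follow from the standard Doppler formula combined in quadrature with the instrument width; a short check (the HD mass and physical constants below are standard values):

```python
import math

K_B = 1.380649e-23       # Boltzmann constant (J/K)
AMU = 1.66053906660e-27  # atomic mass unit (kg)
C_LIGHT = 2.99792458e8   # speed of light (m/s)

def doppler_fwhm(nu, mass_amu, temp):
    """Doppler FWHM (cm^-1) of a line at wavenumber nu (cm^-1)."""
    return nu * math.sqrt(8.0 * math.log(2.0) * K_B * temp
                          / (mass_amu * AMU * C_LIGHT ** 2))

def quadrature(*widths):
    """Combine independent Gaussian widths in quadrature."""
    return math.sqrt(sum(w * w for w in widths))

# HD (~3.022 a.m.u.) at 100 K near 123 200 cm^-1: Doppler width ~0.5 cm^-1,
# combined with the 0.33 cm^-1 instrument width giving the observed ~0.6.
dw = doppler_fwhm(123203.0, 3.022, 100.0)
total = quadrature(dw, 0.33)
```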
The R- and P-branch transitions are broadened owing to the life-time shortening caused by predissociation. Several small portions of the spectra are displayed in figure \[Fig:BeutlerFano\], including the $D^{1}\Pi_{u}(v'=7-9)$-$X^{1}\Sigma_{g}^{+}$ bands, which display typical predissociation broadening. These transitions were fitted with convolved profiles, and the resultant line positions and widths are listed in table \[Tab:Pi+\]. The P(2) and R(2) transitions of bands with $v' \geq 3$ were extremely weak because most of the rotational population resides in the $J''=0$ and 1 levels. We estimate an uncertainty of 0.20 [cm$^{-1}$]{} on the line positions of the R(0) transitions, which were observed to be $\sim$3 [cm$^{-1}$]{} broad, and of 0.4 [cm$^{-1}$]{} on the R(1) transitions, observed with widths of $\sim$7 [cm$^{-1}$]{}. A number of lines were blended, most of which could still be fitted. Those lines severely affected by blending are indicated in the tables, and for these the estimated line position uncertainty is doubled.
![(color online) The predissociation widths scaled by multiplying by the reduced mass squared ($\mu^{2}$). The energy scale on the $x$-axis is referenced to the $n$=2 dissociation limit of each molecule as determined in [@Eyler1993]. See text for details.[]{data-label="Fig:Widths"}](WidthsMassScalingn2.eps){width="1\linewidth"}
Predissociated Widths {#subsec:PredissociatedWidths}
---------------------
Figure \[Fig:Widths\] depicts the predissociation widths as a function of the excess energy above the $n$=2 dissociation limit for all three stable isotopomers: H$_{2}$ and D$_{2}$ as determined in previous work [@Dickenson2010a; @Dickenson2011], and the newly determined HD widths. The dissociation limits used for H$_{2}$, D$_{2}$ and HD are the H(1$S$)+H(2$S$), D(1$S$)+D(2$S$) and H(1$S$)+D(2$S$) limits, respectively [@Eyler1993]. The measured widths have been scaled by their respective reduced masses squared, and the rotational dependence has been removed by dividing by $J(J+1)$. The data for H$_{2}$ between 5000 and 8000 [cm$^{-1}$]{} are missing owing to blending with the $B''$ state in this region. The agreement between the three isotopomers for both the $J'$=1 (derived from R(0) transitions) and $J'=2$ (derived from R(1) transitions) rotational levels is good, yielding further proof of the applicability of the simple two-state model to the predissociation of the [$D^{1}\Pi_{u}$]{} state in all three stable hydrogen isotopomers. At the present level of accuracy the data indicate that $u-g$ symmetry breaking effects in HD do not play a role in the predissociated life-times and that the predissociation is fully described by the $| \langle \psi_{B'\epsilon}|H(R)|\psi_{Dv'} \rangle |$ interaction.
$\Lambda$-Doublet {#subsec:LambdaDoublet}
-----------------
![The $\Lambda$-doublet splittings in the [$D^{1}\Pi_{u}$]{} state of HD for the $J'$=1 and 2 rotational states. The lines joining the points are to guide the eye.[]{data-label="Fig:LambdaDoublet"}](LambdaDoublet.eps){width="1\linewidth"}
The $\Lambda$-doublet splitting, as depicted in figure \[Fig:LambdaDoublet\], was determined by adding the ground state level energy [@Komasa2011] to the Q-branch transitions and subtracting this from the R-branch transitions probing the same $J'$ but opposite $(e)-(f)$ parity. The results mirror those obtained for H$_{2}$ and D$_{2}$. The $\Lambda$-doublet splittings follow an erratic behaviour for $v' < 4$, caused by the interactions between the discrete $B'$ and $D$ state levels. For $v'\geq4$ the splitting follows a relatively smoothly decaying trend, similar to the observations on H$_{2}$ and D$_{2}$ [@Dickenson2010a; @Dickenson2011]. If the assumption can be made that the $B'$ state is the sole perturber causing the $\Lambda$-doubling, the $\Lambda$-doublet splitting can be represented as $$\begin{aligned}
\Lambda_{v'}(J') \propto \sum_{B'v,\epsilon} \frac{\vert \langle \psi_{B'v,\epsilon} \vert H(R) \vert \psi_{Dv'} \rangle \vert^{2}}{E_{\Pi^{+}_{v}}-E_{B'v,\epsilon}} \end{aligned}$$ where the summation over all $B'$ levels includes the bound levels below $n=2$ and an integral over the $B'$ continuum. Interaction with the $B'$ levels of $^{1}\Sigma^{+}$ symmetry causes the $\Pi^{+}(e)$ levels of the [$D^{1}\Pi_{u}$]{} state to shift upward, while the $\Pi^{-}(f)$ levels are unaffected. As in the derivation of the predissociation widths, the $\Lambda$-doublet splitting then scales as $\mu^{-2}J(J+1)$. The present results on HD and the results on H$_{2}$ [@Dickenson2010a] match this scaling well, while the $\Lambda$-doublet splittings in D$_{2}$ [@Dickenson2011] are somewhat too large in this comparison.
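As an illustration of this procedure for $J'=1$ of the (6,0) band, one can combine the R(0) and Q(1) positions from tables \[Tab:Pi+\] and \[Tab:Pi-\] with the ground-state $J''=0 \rightarrow 1$ interval of HD; the value 89.23 [cm$^{-1}$]{} used below is an assumed approximate literature value:

```python
# X(v''=0) J''=0 -> 1 interval of HD (cm^-1); assumed literature value.
E_J1 = 89.23

def lambda_doublet_j1(r0, q1):
    """Pi+(J'=1) - Pi-(J'=1) splitting from R(0) and Q(1) line positions."""
    # R(0) reaches Pi+(J'=1) from J''=0; Q(1) reaches Pi-(J'=1) from J''=1.
    return r0 - (q1 + E_J1)

# (6,0) band positions from the tables: splitting of order 1 cm^-1.
split = lambda_doublet_j1(123293.27, 123203.12)
```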
Conclusion
==========
The VUV FTS observations on the [$D^{1}\Pi_{u}$]{} state have been extended to HD. The present work represents the highest resolution study on this state performed so far. The predissociated line shapes were analysed resulting in predissociated line widths determined to a high level of accuracy. The present and previous studies show through the mass scaling and rotational scaling that the predissociation in the $\Pi^{+}$ parity states of the [$D^{1}\Pi_{u}$]{} state can be modelled by a rotational interaction to the continuum of the $B'^{1}\Sigma^{+}_{u}$ state. In the case of HD the $u-g$ symmetry breaking does not play a role in the predissociated widths at the present level of accuracy.
Acknowledgements
================
GDD is grateful to the SOLEIL staff scientists L. Nahon, N. de Oliveira and D. Joyeux for the hospitality and for the collaboration. Dr A. Heays is thanked for valuable advice regarding the fit model. The EU provided financial support through the transnational funding scheme. This work was supported by the Netherlands Foundation for Fundamental Research of Matter (FOM). The authors thank two anonymous referees for valuable suggestions.
References {#references .unnumbered}
==========
| $v'$ | Transition | Line position | $\Delta$ |
|:----:|:-----------|--------------:|---------:|
| 0 | Q(1)$^{s}$ | 112975.18 | 1.11 |
| 0 | Q(2) | 112886.03 | 1.35 |
| 0 | Q(3) | 112753.22 | 1.77 |
| 1 | Q(1)$^{s}$ | 114916.47 | 5.10 |
| 1 | Q(2) | 114822.09 | -0.07 |
| 1 | Q(3) | 114684.45 | 2.26 |
| 2 | Q(1)$^{s}$ | 116760.16 | 2.73 |
| 2 | Q(2)$^{s}$ | 116663.06 | 1.69 |
| 2 | Q(3) | 116518.29 | 1.36 |
| 3 | Q(1)$^{s}$ | 118508.76 | 2.42 |
| 3 | Q(2)$^{s}$ | 118407.81 | 2.58 |
| 3 | Q(3) | 118256.35 | 1.39 |
| 4 | Q(1)$^{s}$ | 120164.47 | -1.01 |
| 4 | Q(2) | 120059.79 | 0.96 |
| 4 | Q(3) | 119900.37 | 0.46 |
| 5 | Q(1) | 121729.21 | 0.90 |
| 5 | Q(2) | 121620.99 | 0.99 |
| 5 | Q(3) | 121459.67 | 1.20 |
| 6 | Q(1) | 123203.12 | 1.95 |
| 6 | Q(2) | 123091.11 | 1.09 |
| 6 | Q(3) | 122924.14 | 0.69 |
| 7 | Q(1) | 124588.35 | 1.25 |
| 7 | Q(2) | 124472.72 | |
| 7 | Q(3) | 124300.33 | |
| 8 | Q(1) | 125885.02 | 1.62 |
| 8 | Q(2) | 125765.66 | |
| 9 | Q(1) | 127091.67 | -7.23 |
| 9 | Q(2) | 126968.71 | |
| 10 | Q(1) | 128208.79 | 5.29 |
| 10 | Q(2) | 128082.25 | |
| 11 | Q(1) | 129234.35 | 1.15 |
| 11 | Q(2) | 129103.84 | |
| 12 | Q(1) | 130166.57 | -0.43 |
| 12 | Q(2) | 130032.21 | |
| 13 | Q(1) | 131002.36 | 2.56 |
| 13 | Q(2) | 130864.00 | |
| 14 | Q(1)$^{b}$ | 131737.82 | -1.88 |
| 14 | Q(2) | 131595.19 | |
| 15 | Q(1) | 132369.36 | 3.66 |
| 16 | Q(1)$^{b}$ | 132891.54 | -0.06 |
| 17 | Q(1) | 133299.89 | |
| 18 | Q(1) | 133588.59 | |

: Line positions (in [cm$^{-1}$]{}) of Q-branch transitions of the $D^{1}\Pi_{u}(v')$ - $X^{1}\Sigma^{+}_{g}(v''=0)$ bands of HD, probing levels of $\Pi^{-}$ symmetry. $\Delta$ denotes the difference from previously reported line positions. $^{s}$: saturated; $^{b}$: blended. []{data-label="Tab:Pi-"}
| $v'$ | Transition | Line position | $\Gamma$ | $\Delta$ |
|:----:|:-----------|--------------:|---------:|---------:|
| 0 | R(0)$^{s}$ | 113066.07 | | -0.75 |
| 0 | R(1)$^{s}$ | 113068.72 | | 2.84 |
| 1 | R(0)$^{s}$ | 115005.76 | | 1.17 |
| 1 | R(1)$^{s}$ | 115000.70 | | -4.21 |
| 2 | R(0)$^{s}$ | 116851.42 | | 4.44 |
| 2 | R(1)$^{s}$ | 116846.89 | | 1.53 |
| 3 | R(0)$^{s}$ | 118598.93 | | 0.79 |
| 3 | R(1)$^{s}$ | 118591.70 | | 2.86 |
| 4 | R(0) | 120254.63 | 2.4 | 0.08 |
| 4 | R(1) | 120240.26 | 7.3 | -1.05 |
| 5 | R(0) | 121819.48 | 2.6 | 0.78 |
| 5 | R(1) | 121801.44 | 6.0 | 2.62 |
| 6 | R(0) | 123293.27 | 2.8 | 0.43 |
| 6 | R(1) | 123271.74 | 6.8 | 0.91 |
| 7 | R(0) | 124678.40 | 2.4 | 0.70 |
| 7 | R(1) | 124652.34 | 5.8 | 0.56 |
| 8 | R(0) | 125974.74 | 2.3 | 1.15 |
| 8 | R(1) | 125945.60 | 6.8 | 0.09 |
| 9 | R(0) | 127181.58 | 2.2 | -7.82 |
| 9 | R(1) | 127148.23 | 6.4 | -8.97 |
| 10 | R(0) | 128298.77 | 2.5 | 3.67 |
| 10 | R(1) | 128261.88 | 6.2 | 3.18 |
| 11 | R(0) | 129324.08 | 2.1 | 2.28 |
| 11 | R(1) | 129282.73 | 5.9 | 1.23 |
| 12 | R(0)$^{b}$ | 130256.22 | 2.1 | -3.38 |
| 12 | R(1) | 130211.60 | 4.8 | -1.70 |
| 13 | R(0) | 131092.40 | 1.2 | 1.60 |
| 13 | R(1)$^{b}$ | 131043.31 | 3.5 | 2.31 |
| 14 | R(0) | 131827.56 | 1.3 | -1.54 |
| 14 | R(1) | 131773.73 | 5.4 | -2.57 |
| 15 | R(0)$^{b}$ | 132459.67 | 0.9 | 3.37 |
| 15 | R(1) | 132400.58 | 5.5 | |
| 16 | R(0) | 132980.87 | 1.7 | -1.53 |
| 17 | R(0) | 133389.31 | 1.1 | |
| 18 | R(0)$^{b}$ | 133677.72 | | |

: Line positions and predissociation widths $\Gamma$ (both in [cm$^{-1}$]{}) of R-branch transitions of the $D^{1}\Pi_{u}(v')$ - $X^{1}\Sigma^{+}_{g}(v''=0)$ bands of HD, probing levels of $\Pi^{+}$ symmetry. Bands with $v' \leq 3$ are not predissociated and no width is listed. $\Delta$ denotes the difference from previously reported line positions. $^{s}$: saturated; $^{b}$: blended. []{data-label="Tab:Pi+"}
---
abstract: |
We revisit the lower bound on binary tidal deformability $\tilde{\Lambda}$ imposed by a luminous kilonova/macronova, AT 2017gfo, by numerical-relativity simulations of models that are consistent with gravitational waves from the binary neutron star merger GW170817. Contrary to the claim made in the literature, we find that binaries with $\tilde{\Lambda} \lesssim 400$ can explain the luminosity of AT 2017gfo, as long as moderate mass ejection from the remnant is assumed as had been done in previous work. The reason is that the maximum mass of a neutron star is not strongly correlated with the tidal deformability of neutron stars with a typical mass of $\approx
1.4 M_\odot$. If the maximum mass is so large that the binary does not collapse into a black hole immediately after merger, the mass of the ejecta can be sufficiently large irrespective of the binary tidal deformability. We present models of binary mergers with $\tilde{\Lambda}$ down to $242$ that satisfy the requirement on the mass of the ejecta from the luminosity of AT 2017gfo. We further find that the luminosity of AT 2017gfo could be explained by models that do not experience bounce after merger. We conclude that the luminosity of AT 2017gfo is not very useful for constraining the binary tidal deformability. Accurate estimation of the mass ratio will be necessary to establish a lower bound using electromagnetic counterparts in the future. We also caution that merger simulations that employ a limited class of tabulated equations of state could be severely biased due to the lack of generality.
author:
- 'Kenta Kiuchi, Koutarou Kyutoku, Masaru Shibata, Keisuke Taniguchi'
title: Revisiting the lower bound on tidal deformability derived by AT 2017gfo
---
Introduction {#sec:intro}
============
The first binary neutron star merger was observed as the multi-messenger event GW170817/GRB 170817A/AT 2017gfo [@ligovirgo2017-3; @ligovirgogamma2017; @ligovirgoem2017]. Gravitational and electromagnetic signals have been combined to derive various information about physics and astrophysics. Examples include the velocity of gravitational waves [@ligovirgogamma2017], Hubble’s constant [@ligovirgoem2017-2], the central engine of a type of short gamma-ray burst [@mooley_dgnhbfhch2018], and the origin of (at least a part of) *r*-process elements [@kasen_mbqr2017; @tanaka_etal2017].
The multi-messenger observations also constrain properties of neutron stars. Gravitational waves, GW170817, constrain the so-called binary tidal deformability to $100 \lesssim \tilde{\Lambda} \lesssim 800$, where precise values depend on the method of analysis and adopted theoretical waveforms [@de_flbbb2018; @ligovirgo2018; @ligovirgo2019]. At the same time, some researchers have argued that the maximum mass of a neutron star $M_\mathrm{max}$ cannot be significantly larger than $\approx
2.15$–$2.2M_\odot$ based on the electromagnetic features, e.g. the absence of magnetar-powered radiation [@margalit_metzger2017; @shibata_fhkkst2017; @rezzolla_mw2018; @ruiz_st2018]. @bauswein_jjs2017 also proposed lower bounds on the radii of massive neutron stars, assuming that the electromagnetic signals may imply the avoidance of the prompt collapse. These inferences suggest that supranuclear-density matter is unlikely to be very stiff.
@radice_pzb2018 proposed a novel idea: $\tilde{\Lambda} \gtrsim
400$ is required to eject material heavier than $0.05 M_\odot$, which the authors assumed to be required by the high luminosity of AT 2017gfo.[^1] The logic is that no binary model with $\tilde{\Lambda} \lesssim 400$ is capable of ejecting $0.05
M_\odot$, even if all of the baryonic remnant can be ejected, in their numerical-relativity simulations performed with four tabulated equations of state derived from mean-field theory. This constraint approximately translates into a lower bound on the neutron-star radius [@zhao_lattimer2018], and thus it could reject mildly soft equations of state if reliable. Indeed, this constraint has been used by various researchers to infer properties of nuclear matter [@most_wrs2018; @lim_holt2018; @malik_afpajkp2018; @burgio_dpsw2018]. Later, @radice_dai2019 loosened the limit to $\tilde{\Lambda} \gtrsim
300$ by Bayesian inferences; they allowed a standard deviation of 50% in the fitting formula of disk masses, which they required to be $> 0.04
M_\odot$, derived using results of @radice_pzb2018. @coughlin_dmm2018 also derived a lower limit of $\tilde{\Lambda} \gtrsim 279$ by Bayesian inferences, with the improvement of the fit of disk masses via incorporation of the ratio of the total mass to the threshold mass for the prompt collapse as an additional parameter. Note that these two works also use other signals, such as gravitational waves, in a different manner.
@tews_mr2018 critically examined this idea by using parameterized, general nuclear-matter equations of state. Their key finding is that the maximum mass is correlated only very weakly with binary tidal deformability for the masses consistent with GW170817. They found that some equations of state can support a neutron star with $>2.6
M_\odot$ even if $\tilde{\Lambda}$ is significantly lower than 400. Because the remnant massive neutron star should survive for a long time, or possibly permanently, after merger for these cases [@hotokezaka_kosk2011; @hotokezaka_kkmsst2013], the argument of @radice_pzb2018 based on the mass of the ejecta cannot reject such equations of state and then binary tidal deformability. However, the maximum mass of a neutron star might also be constrained to $\lesssim 2.2 M_\odot$ as described above. Whether this constraint on the maximum mass is compatible with the luminosity of AT 2017gfo is not trivial.
In this Letter, we demonstrate that the lower bound on $\tilde{\Lambda}$ is not as significant as what @radice_pzb2018 proposed, even if the maximum mass is only moderately large, $M_\mathrm{max} \lesssim 2.1
M_\odot$, by a suite of numerical-relativity simulations. Specifically, we find that various models with $\tilde{\Lambda} < 400$ can eject $0.05
M_\odot$ and can explain the luminosity of AT 2017gfo. The models include asymmetric binary neutron stars with $\tilde{\Lambda} = 242$, which may not collapse at least until after merger. In addition, we also show that the luminosity of AT 2017gfo could be explained even if the merger remnant does not experience bounce after merger, when the binary is asymmetric.
Model and equation of state {#sec:model}
===========================
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
$\Gamma$ $\log P_{14.7}~(\si{dyne.cm^{-2}})$ $R_{1.35}$ (km) $M_\mathrm{max}~( M_\odot )$ $q$ $\tilde{\Lambda}$ Type $M_\mathrm{dyn}~( M_\odot )$ $M_\mathrm{disk}~( M_\odot )$ $\Delta
x$ (m)
---------- ------------------------------------- ----------------- ------------------------------ --------- ------------------- ----------- ------------------------------ ------------------------------- ----------
$3.765$ $34.1$ $10.4$ $2.00$ $1$ $208$ no bounce $<\num{e-3}$ $<\num{e-3}$ $117$
$0.774$ $218$ no bounce $<\num{e-3}$ $0.023$ $121$
$3.887$ $34.1$ $10.5$ $2.05$ $1$ $221$ no bounce $<\num{e-3}$ $<\num{e-3}$ $118$
$0.774$ $230$ no bounce $\num{5.2e-3}$ $0.029$ $126$
$4.007$ $34.1$ $10.5$ $2.10$ $1$ $232$ no bounce $\num{1.9e-3}$ $\num{2.7e-3}$ $118$
$0.774$ $242$ long $0.013$ $0.26\;(0.16,0.097)$ $121$
$3.446$ $34.2$ $10.6$ $2.00$ $1$ $232$ no bounce $<\num{e-3}$ $<\num{e-3}$ $121$
$0.774$ $245$ no bounce $\num{2.3e-3}$ $0.036$ $124$
$3.568$ $34.2$ $10.7$ $2.05$ $1$ $247$ no bounce $<\num{e-3}$ $<\num{e-3}$ $122$
$0.774$ $259$ no bounce $0.014$ $0.038$ $126$
$3.687$ $34.2$ $10.8$ $2.10$ $1$ $260$ short $\num{1.4e-3}$ $\num{7.8e-3}$ $124$
$0.774$ $272$ long $0.011$ $0.26\;(0.17,0.092)$ $126$
$3.132$ $34.3$ $11.0$ $2.00$ $1$ $272$ no bounce $<\num{e-3}$ $<\num{e-3}$ $126$
$0.774$ $290$ no bounce $0.012$ $0.063$ $131$
$3.252$ $34.3$ $11.1$ $2.05$ $1$ $288$ no bounce $\num{1.2e-3}$ $\num{1.9e-3}$ $128$
$0.774$ $305$ short $0.015$ $0.12$ $131$
$3.370$ $34.3$ $11.1$ $2.10$ $1$ $303$ short $\num{2.0e-3}$ $0.031$ $128$
$0.774$ $319$ long $0.011$ $0.25\;(0.19,0.12)$ $131$
$2.825$ $34.4$ $11.6$ $2.00$ $1$ $345$ short $\num{6.5e-3}$ $0.018$ $134$
$0.774$ $373$ short $0.011$ $0.087$ $141$
$2.942$ $34.4$ $11.6$ $2.05$ $1$ $362$ short $\num{2.5e-3}$ $0.016$ $134$
$0.774$ $387$ short $0.011$ $0.12$ $139$
$3.058$ $34.4$ $11.6$ $2.10$ $1$ $377$ long $\num{9.7e-3}$ $0.17\;(0.13,0.11)$ $134$
$0.774$ $400$ short $\num{9.0e-3}$ $0.16$ $139$
$2.528$ $34.5$ $12.5$ $2.00$ $1$ $508$ short $\num{9.4e-3}$ $0.053$ $149$
$0.774$ $558$ short $\num{5.6e-3}$ $0.16$ $156$
$2.640$ $34.5$ $12.4$ $2.05$ $1$ $516$ short $0.012$ $0.12$ $147$
$0.774$ $560$ short $\num{6.4e-3}$ $0.18$ $154$
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
\[table:model\]
We simulate mergers of equal-mass binaries with $1.375 M_\odot$–$1.375
M_\odot$ and unequal-mass binaries with $1.2 M_\odot$–$1.55
M_\odot$. The total mass, $m_0 = 2.75 M_\odot$, and the mass ratios, $q
= 1$ or $0.774$, are consistent with GW170817 [@ligovirgo2017-3; @ligovirgo2019] and also with observed Galactic binary neutron stars [e.g. @tauris_etal2017; @ferdman2018]. This should be contrasted with @radice_pzb2018, where many models are significantly heavier than GW170817, particularly those with $\tilde{\Lambda} \lesssim 400$, and the mass ratio is restricted to $q >
0.857$. The initial orbital angular velocity $\Omega$ of the binary is chosen to be $G m_0 \Omega / c^3 \approx 0.025$, with eccentricity reduction applied [@kyutoku_st2014], where $G$ and $c$ are the gravitational constant and the speed of light, respectively. The binaries spend about six orbits before merger.
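The dimensionless initial frequency $G m_0 \Omega / c^3 \approx 0.025$ can be converted to physical frequencies as follows; the constants are standard values and the resulting numbers are approximate:

```python
import math

G = 6.67430e-11      # gravitational constant (SI)
C = 2.99792458e8     # speed of light (m/s)
M_SUN = 1.98892e30   # solar mass (kg)

def initial_frequencies(m0_msun=2.75, gm0_omega_over_c3=0.025):
    """Orbital and quadrupole GW frequencies (Hz) for the initial data."""
    omega = gm0_omega_over_c3 * C ** 3 / (G * m0_msun * M_SUN)  # rad/s
    f_orb = omega / (2.0 * math.pi)
    return f_orb, 2.0 * f_orb  # GW frequency is twice the orbital frequency

# Roughly 290 Hz orbital and 590 Hz gravitational-wave frequency.
f_orb, f_gw = initial_frequencies()
```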
Equations of state for neutron star matter are varied systematically by adopting piecewise polytropes with three segments [@read_lof2009]. This choice allows us to investigate more generic models rather than particular nuclear-theory models, e.g. mean-field theory. The low-density segment is identical to that adopted in @hotokezaka_kosk2011. The middle-density segment is specified by the pressure at the density $\SI{e14.7}{g.cm^{-3}}$, denoted by $P_\mathrm{14.7}$, and an adiabatic index $\Gamma$. This segment is matched to the low-density part at the density where the pressures of the two segments coincide. The value of $P_\mathrm{14.7}$ is known to be correlated with the neutron star radius [@lattimer_prakash2001; @read_lof2009], and we choose $\log
P_\mathrm{14.7}~(\si{dyne.cm^{-2}})$ from $\{ 34.1, 34.2, 34.3, 34.4,
34.5 \}$. The value of $\Gamma$ is determined, in conjunction with the high-density segment, by requiring the maximum mass of neutron stars to become $2.00 M_\odot$, $2.05 M_\odot$, or $2.10 M_\odot$. The high-density segment is given by changing the adiabatic index to 2.8 at a fixed high density.
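The matching construction described above can be sketched numerically. The following is a minimal illustration, assuming a single low-density polytrope with placeholder constants (`K1`, `Gamma1`) and a hypothetical matching density `rho23` for the high-density segment; these are illustrative choices, not the values used in the paper:

```python
# Minimal sketch of a three-segment piecewise polytrope (CGS units).
# K1, Gamma1 and rho23 are hypothetical placeholders, not the paper's values.
RHO147 = 10**14.7  # g/cm^3, density at which P_14.7 is specified


def piecewise_polytrope(log_P147, Gamma2, K1=3.6e13, Gamma1=1.357,
                        rho23=10**15.0, Gamma3=2.8):
    """Return P(rho) for a three-segment piecewise polytrope.

    The middle segment is fixed by (P_14.7, Gamma2); it joins the
    low-density segment at the density where the two pressures coincide,
    and the adiabatic index changes to Gamma3 at rho23.
    """
    K2 = 10**log_P147 / RHO147**Gamma2
    rho12 = (K1 / K2) ** (1.0 / (Gamma2 - Gamma1))  # pressures equal here
    K3 = K2 * rho23 ** (Gamma2 - Gamma3)            # continuity at rho23

    def P(rho):
        if rho < rho12:
            return K1 * rho**Gamma1
        if rho < rho23:
            return K2 * rho**Gamma2
        return K3 * rho**Gamma3

    return P
```

By construction the pressure is continuous across both segment boundaries and reproduces the prescribed $P_\mathrm{14.7}$ exactly.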
The first two columns of Table \[table:model\] list $\Gamma$ and $P_\mathrm{14.7}$ for 14 equations of state[^2] adopted in this study. The radius of a $1.35 M_\odot$ neutron star and the maximum mass are shown in the third and fourth columns, respectively. We checked that all of them are causal; i.e. the sound velocity does not exceed $c$, up to the central density of the spherical maximum mass configuration. Although our radii are typically smaller than those favored in @most_wrs2018, their probability distribution may be affected significantly by the small number of available equations of state with small radii [@raithel_op2018]. As shown in @annala_gkv2018, our models are compatible with current understanding of nuclear physics and astronomical observations.
Table \[table:model\] also presents the binary tidal deformability of our equal-mass and unequal-mass binaries in the sixth column, where the mass ratio is given in the fifth column. All are consistent with constraints obtained by GW170817, irrespective of the details of the analysis [@ligovirgo2017-3; @de_flbbb2018; @ligovirgo2018; @ligovirgo2019]. As pointed out by @tews_mr2018, the binary tidal deformability is not directly correlated with the maximum mass.
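For reference, the binary tidal deformability used throughout is the standard mass-weighted combination of the individual tidal deformabilities $\Lambda_1$ and $\Lambda_2$ of the two stars; the text does not restate the definition, so we quote the usual one from the gravitational-wave literature:

```latex
\tilde{\Lambda} = \frac{16}{13}\,
  \frac{(m_1 + 12 m_2)\, m_1^4\, \Lambda_1 + (m_2 + 12 m_1)\, m_2^4\, \Lambda_2}
       {(m_1 + m_2)^5}
```

For equal masses this reduces to $\tilde{\Lambda} = \Lambda_1 = \Lambda_2$.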
Method of simulations {#sec:sim}
=====================
Numerical simulations are performed in full general relativity with the SACRA code [@yamamoto_st2008; @kiuchi_kksst2017]. The finite-temperature effect is incorporated by an ideal-gas prescription following @hotokezaka_kosk2011 with the fiducial value of $\Gamma_\mathrm{th} = 1.8$, which may be appropriate for capturing the dynamics of remnant neutron stars [@bauswein_jo2010]. We also performed simulations with $\Gamma_\mathrm{th} = 1.5, 1.6$, and $1.7$ for some models with low values of $\tilde{\Lambda}$; the dependence of our results on $\Gamma_\mathrm{th}$ will be discussed. Because whether or not the merger remnant collapses into a black hole on a short time scale is important for this study, detailed physical effects such as magnetic fields and neutrino transport are neglected. They are known to play a central role only on time scales longer than the durations of our simulations, which are performed until 10–$\SI{20}{\milli\second}$ after merger [@hotokezaka_kkmsst2013]; thus our results should depend only weakly on these effects. Although we cannot determine the electron fraction of the ejecta, which is important for deriving nucleosynthetic yields and characteristics of the kilonova/macronova [@wanajo_snkks2014; @tanaka_etal2017; @kasen_mbqr2017], it is not relevant to the purpose of this work.
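The ideal-gas prescription referred to here is the standard hybrid equation of state, in which the pressure is split into a cold part given by the piecewise polytrope and a thermal part; in its usual form (not restated in the text),

```latex
P_\mathrm{th} = (\Gamma_\mathrm{th} - 1)\,\rho\,\varepsilon_\mathrm{th},
```

where $\varepsilon_\mathrm{th}$ is the specific internal energy in excess of the cold contribution, so that larger $\Gamma_\mathrm{th}$ yields stronger thermal pressure support of the remnant.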
We classify the fate of merger remnants into three types. If the remnant collapses into a black hole without experiencing bounce after merger, we call it a no-bounce collapse. Note that such collapses are called prompt collapses in @bauswein_jjs2017; we avoid this name, however, taking into account the fact that some asymmetric models survive longer than the dynamical time scale up to a few milliseconds even if they do not experience bounce. If the remnant evades the no-bounce collapse but still collapses by $\SI{20}{\milli\second}$ after merger, it is regarded as a short-lived remnant. This time scale is approximately identical to that adopted in @radice_pzb2018. If the remnant massive neutron star does not collapse in our simulations, it is called a long-lived remnant. These three types will be denoted by “no bounce”, “short”, and “long” in Table \[table:model\], respectively.
We derive the baryonic mass of the unbound dynamical ejecta, $M_\mathrm{dyn}$, and that of the bound material outside the black hole, or exceeding the threshold density for the long-lived remnant, $M_\mathrm{disk}$, from the simulations. The threshold density is chosen after @radice_pzb2018, and the dependence of our results on this value will be described later. The ejecta as a whole should consist of the dynamical ejecta and the late-time outflow from the merger remnant [e.g. @fernandez_metzger2013; @metzger_fernandez2014; @just_bagj2015; @fujibayashi_knss2018]. Because our simulations do not include magnetic fields or corresponding viscosity required to launch the outflow, we simply assume that some fraction of $M_\mathrm{disk}$ will be ejected by such processes following @radice_pzb2018. While @radice_pzb2018 conservatively (for their purpose) adopted 100% efficiency for the ejection from the accretion torus, this efficiency is likely to be lower than 50%, particularly when the remnant is a black hole, because the outflow is a result of the accretion.
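The bookkeeping implied here is straightforward; a minimal sketch (the helper names are ours; the $0.05 M_\odot$ requirement and the 50% fiducial efficiency are taken from the surrounding discussion):

```python
AT2017GFO_MASS = 0.05  # M_sun, ejecta mass required to explain the luminosity


def total_ejecta(m_dyn, m_disk, efficiency=0.5):
    """Dynamical ejecta plus an assumed fraction of the bound material."""
    return m_dyn + efficiency * m_disk


def explains_at2017gfo(m_dyn, m_disk, efficiency=0.5):
    return total_ejecta(m_dyn, m_disk, efficiency) >= AT2017GFO_MASS


# Two rows from the resolution table (q = 0.774, finest grids):
print(explains_at2017gfo(0.013, 0.26))   # long-lived remnant -> True
print(explains_at2017gfo(0.012, 0.063))  # no-bounce collapse -> False
```

Consistently with the footnote on the $\tilde{\Lambda} = 508$ model, `total_ejecta(0.0094, 0.053, 0.77)` just crosses the $0.05 M_\odot$ threshold at an efficiency of about 77%.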
-----------------------------------------------------------------------------------------------------------------
  $q$       $\tilde{\Lambda}$   $\Delta x$ (m)   Type        $M_\mathrm{dyn}~(M_\odot)$   $M_\mathrm{disk}~(M_\odot)$
  --------- ------------------- ---------------- ----------- ---------------------------- ------------------------------ --
$1$ $288$ $128$ no bounce $\num{1.2e-3}$ $\num{1.9e-3}$
$148$ no bounce $\num{2.1e-3}$ $\num{4.8e-3}$
$164$ short $\num{6.9e-3}$ $0.013$
$1$ $508$ $149$ short $\num{9.4e-3}$ $0.053$
$172$ short $0.011$ $0.055$
$191$ short $\num{8.5e-3}$ $0.045$
$1$ $516$ $147$ short $0.012$ $0.12$
$170$ short $0.013$ $0.089$
$189$ short $0.012$ $0.095$
$0.774$ $242$ $121$ long $0.013$ $0.26$
$140$ long $0.017$ $0.26$
$156$ long $0.019$ $0.25$
$0.774$ $259$ $128$ no bounce $0.014$ $0.038$
$148$ no bounce $0.014$ $0.041$
$164$ short $0.015$ $0.31$
$0.774$ $290$ $131$ no bounce $0.012$ $0.063$
$152$ no bounce $0.013$ $0.063$
$169$ no bounce $0.014$ $0.069$
$0.774$ $558$ $156$ short $\num{5.6e-3}$ $0.16$
$180$ short $\num{4.7e-3}$ $0.14$
$201$ short $\num{4.5e-3}$ $0.16$
$0.774$ $560$ $154$ short $\num{6.4e-3}$ $0.18$
$178$ short $\num{5.5e-3}$ $0.19$
$198$ short $\num{5.4e-3}$ $0.15$
-----------------------------------------------------------------------------------------------------------------
: Dependence of the Fate of the Remnant, $M_\mathrm{dyn}$, and $M_\mathrm{disk}$ on the Grid Spacing, $\Delta x$
\[table:resolution\]
Our results depend weakly on grid resolutions as shown in Table \[table:resolution\]. By simulating selected models with three different resolutions, we estimate that the mass of the ejecta has a relative error of about a factor of two and an absolute error of $\num{e-3} M_\odot$ for typical cases with hypothetical first-order convergence. However, the nominal error reaches an order of magnitude for marginally stable short-lived remnants, because the fate varies between the no-bounce collapse and the short-lived remnant. We think that this is reasonable and inevitable for models near the threshold, and these errors should be kept in mind when we discuss implications for AT 2017gfo. In the rest of this Letter, we only show the results of highest-resolution runs, in which the neutron star radius is covered by $\approx 65$–70 points with the grid spacing at the finest domain shown in the tenth column of Table \[table:model\].
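The error estimate under hypothetical first-order convergence amounts to Richardson extrapolation in the grid spacing; a sketch using the $q = 0.774$, $\tilde{\Lambda} = 558$ rows of Table \[table:resolution\] (the extrapolation itself is our illustration of the procedure, not code from the paper):

```python
def richardson_first_order(h1, f1, h2, f2):
    """Extrapolate f(h) ~ f0 + c*h to h -> 0 from two resolutions (h1 < h2)."""
    c = (f2 - f1) / (h2 - h1)
    return f1 - c * h1


# M_dyn at grid spacings 156 m and 180 m for q = 0.774, Lambda ~ 558:
m_extrap = richardson_first_order(156.0, 5.6e-3, 180.0, 4.7e-3)
print(m_extrap)  # ~0.0114, about twice the finest-grid value,
                 # consistent with the quoted factor-of-two relative error
```

The extrapolated value is roughly twice the finest-grid result, matching the relative error quoted above.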
Result {#sec:res}
======
The merger of binary neutron stars results in dynamical mass ejection and formation of a remnant, a massive neutron star or a black hole, surrounded by an accretion torus. Because their dynamics and mechanisms have been thoroughly described in previous publications [e.g. @hotokezaka_kkosst2013; @bauswein_gj2013; @radice_glror2016], we do not repeat detailed explanations. The fate of the merger remnant (seventh column), the mass of the dynamical ejecta (eighth column), and the mass of the bound material outside the black hole, or exceeding the threshold density for the long-lived remnant (ninth column), are presented in Table \[table:model\]. The mass of the bound material, $M_\mathrm{disk}$, for a given equation of state is usually larger for unequal-mass binaries than for equal-mass binaries because of the efficient tidal interaction and angular momentum transfer during merger. In particular, some of the asymmetric models leave a baryonic mass of $\gtrsim 0.03 M_\odot$ even for the no-bounce collapse. This is because the light components are deformed significantly before merger and the collapses are gradually induced by the accretion for these models (see <http://www2.yukawa.kyoto-u.ac.jp/~kenta.kiuchi/GWRC/index.html> for visualization).
![Mass of the ejecta vs. the binary tidal deformability. The error bars indicate the range obtained by varying the ejection of the material bound to the remnant from 0% (i.e. only dynamical mass ejection occurs) to 100% (i.e. all the mass outside the black hole is ejected), and the 50% ejection of the baryonic mass surrounding the black hole is marked with symbols. Open and filled symbols denote equal-mass and unequal-mass models, respectively. Large triangles on the top axis denote the models for which remnant massive neutron stars survive as long-lived remnants and thus the luminosity of AT 2017gfo can be explained. Such a model is found even at $\tilde{\Lambda} = 242$. The vertical dashed line at $\tilde{\Lambda} =
400$ is the threshold proposed by @radice_pzb2018. The horizontal dashed line at $0.05 M_\odot$ indicates the mass required to explain AT 2017gfo [@radice_pzb2018].[]{data-label="fig:ejecta"}](f1.pdf){width="0.95\linewidth"}
The masses of the ejecta are summarized visually in Fig. \[fig:ejecta\] against the binary tidal deformability, $\tilde{\Lambda}$. It is obvious that many binary models with $\tilde{\Lambda} < 400$ can eject more than $0.05 M_\odot$ and are capable of explaining the luminosity of AT 2017gfo as far as the mass of the ejecta is concerned. Indeed, we find that a handful of binary models with $\tilde{\Lambda} < 400$ result in the formation of a long-lived remnant, for which $M_\mathrm{disk}$ is always larger than $0.1M_\odot$. We have verified that the luminosity of AT 2017gfo can be explained with 50% ejection efficiency even if the threshold density is decreased (see Table \[table:model\]). They serve as counterexamples to the claim that $\tilde{\Lambda} \gtrsim 400$ is required to explain AT 2017gfo [@radice_pzb2018].
![Summary of whether the luminosity of AT 2017gfo can be explained by each model in the binary tidal deformability ($\tilde{\Lambda}$)-maximum mass ($M_\mathrm{max}$) plane. The large symbols denote models that can eject $0.05 M_\odot$ with hypothetical 50% efficiency and can explain the luminosity of AT 2017gfo, and the small ones denote those that cannot.[]{data-label="fig:score"}](f2.pdf){width="0.95\linewidth"}
The key ingredients are the not-so-small maximum mass, $M_\mathrm{max}$, and the mass asymmetry represented by the small mass ratio, $q$. Their importance is understood from Fig. \[fig:score\], where we summarize which model can explain the luminosity of AT 2017gfo in the $\tilde{\Lambda}$–$M_\mathrm{max}$ plane. Here, we assume a 50% ejection efficiency of the bound material for concreteness. On the one hand, in the case that the maximum mass is $2.00 M_\odot$, all the models collapse by $\SI{20}{\milli\second}$ after merger. Not only do equal-mass models have no chance of ejecting $0.05 M_\odot$,[^3] but the mass asymmetry of $q = 0.774$ also fails to save any model with $\tilde{\Lambda} < 377$. On the other hand, if the maximum mass is as large as $2.1 M_\odot$, many models produce long-lived remnants. Actually, all the asymmetric binaries considered here are capable of explaining the luminosity of AT 2017gfo. The lowest value of $\tilde{\Lambda}$ of models that can eject $0.05 M_\odot$ is 242. Figure \[fig:score\] suggests that, if $M_\mathrm{max}$ is larger than $2.1 M_\odot$, then the lower bound on $\tilde{\Lambda}$ derived by AT 2017gfo may become looser than that found in this study.
We also find that all the models with $\tilde{\Lambda} > 400$ are capable of ejecting $0.05 M_\odot$ if 100% ejection efficiency is adopted. This is consistent with the findings of @radice_pzb2018.
  $q$       $\tilde{\Lambda}$   $\Gamma_\mathrm{th}$   Type    $M_\mathrm{dyn}~(M_\odot)$   $M_\mathrm{disk}~(M_\odot)$
--------- ------------------- ---------------------- ------- ---------------------------- ----------------------------- --
$0.774$ $242$ $1.8$ long $0.013$ $0.26$
$1.7$ short $0.011$ $0.045$
$1.6$ short $\num{7.6e-3}$ $0.036$
$1.5$ short $\num{6.5e-3}$ $0.033$
$0.774$ $272$ $1.8$ long $0.011$ $0.26$
$1.7$ long $0.013$ $0.26$
$1.6$ long $0.014$ $0.27$
$1.5$ short $\num{9.8e-3}$ $0.042$
: Dependence of the Fate of the Remnant, $M_\mathrm{dyn}$, and $M_\mathrm{disk}$ on $\Gamma_\mathrm{th}$
\[table:thermal\]
The fate of the merger remnant depends on the strength of the finite-temperature effect for marginal cases. For example, the lowest value of $\tilde{\Lambda}$ that can explain the luminosity of AT 2017gfo is $242$ in our models if the fiducial $\Gamma_\mathrm{th} = 1.8$ is adopted, where the outcome is a long-lived remnant. However, the remnant becomes short lived for $\Gamma_\mathrm{th} \le 1.7$ because of the reduced thermal pressure and fails to eject $0.05 M_\odot$. This indicates that the finite-temperature effect must be moderately strong for this model to account for AT 2017gfo. We also find that the model with $\tilde{\Lambda} = 272$ results in the long-lived remnant only when $\Gamma_\mathrm{th} \ge 1.6$, whereas the short-lived remnant for a very small value of $\Gamma_\mathrm{th} = 1.5$ can eject $0.05 M_\odot$ if 100% efficiency is assumed. The results for them are summarized in Table \[table:thermal\]. Although our conclusion that binaries with $\tilde{\Lambda} \lesssim 400$ are capable of explaining the luminosity of AT 2017gfo is unchanged, these observations imply that accurate incorporation of the finite-temperature effect is also crucial to infer precise properties of the zero-temperature equation of state from electromagnetic counterparts.
Discussion {#sec:disc}
==========
We conclude that the lower bound on the binary tidal deformability can be as low as $\tilde{\Lambda} = 242$ if an ejection of $0.05 M_\odot$ is required. We speculate that lower values of $\tilde{\Lambda}$ than this could even be acceptable if we employ an equation of state that supports a maximum mass larger than $2.1 M_\odot$ and/or increase the degree of asymmetry. The precise value of the threshold depends also on the strength of the finite-temperature effect, represented by $\Gamma_\mathrm{th}$ in our study.
We also find that an asymmetric binary that results in a no-bounce collapse can explain the luminosity of AT 2017gfo, if a moderately high $\approx 60\%$ ejection efficiency from the remnant is assumed. The lower bounds proposed in @bauswein_jjs2017 are satisfied for the equation of state of this model, with which the radii of $1.6 M_\odot$ and maximum-mass configurations are $10.93$ and , respectively. However, our finding would potentially invalidate the argument of @bauswein_jjs2017 and its future application.
![Disk mass vs. the binary tidal deformability. The error bars denote the typical relative error of a factor of two and absolute error of $\num{e-3}
M_\odot$ (see Sec. \[sec:sim\]). The values for lower choices of the threshold density are shown with small symbols for long-lived remnants. We also show the fit derived in [@radice_dai2019]. The correlation between $M_\mathrm{disk}$ and $\tilde{\Lambda}$ is not significant in our models, and the applicability of the fit of [@radice_dai2019] is very limited.[]{data-label="fig:disk"}](f3.pdf){width="0.95\linewidth"}
Our results indicate that the mass ratio is critically important to derive reliable constraints on neutron star properties from electromagnetic emission as also argued in @radice_pzb2018. If the binary turns out to be symmetric, it is possible that $\tilde{\Lambda} \gtrsim 400$ is necessary as [@radice_pzb2018] originally proposed. Indeed, we find no symmetric model with $\tilde{\Lambda} < 377$ that can eject $0.05 M_\odot$. However, Fig. \[fig:disk\] shows that the mass asymmetry significantly obscures the correlation between the disk mass and binary tidal deformability, which is the basis of previous attempts to constrain $\tilde{\Lambda}$ from AT 2017gfo. In light of our results, fitting formulas adopted in @radice_dai2019 and @coughlin_dmm2018 have severe systematic errors. Further investigation is required to clarify precisely the effect of asymmetry. Although the mass ratio can be determined from gravitational-wave data analysis, the degeneracy with the spin must be resolved to achieve high precision [@hannam_bffh2013].
The velocity and the composition can potentially be used as additional information to examine binary models. Some previous work attempted to associate either the blue or red component of AT 2017gfo with dynamical ejecta to improve parameter estimation [@gao_caz2017; @coughlin_etal2018]. However, the derived binary parameters, in particular the mass ratio, disagree between these works. As shown by @kawaguchi_st2018, such an association is not necessarily justified once interaction among multiple ejecta components is taken into account. Detailed modeling of the emission is required if we wish to utilize the velocity and/or the composition to put constraints on properties of neutron stars.
Another lesson drawn from our study is that the possible parameter space of nuclear physics may not be satisfactorily covered by current tabulated equations of state [@tews_mr2018]. For example, equations of state derived by relativistic mean-field theory tend to predict a large maximum mass only when the typical radius is large [@radice_phfbr2018], and thus the value of binary tidal deformability is also high. Such a correlation is not likely to be physical but is rather ascribed to the method of quantum many-body calculations. Specifically, the large maximum mass and the small radius can be accommodated in variational calculations [e.g. @togashi_ntyst2017]. As Fig. \[fig:score\] shows, the outcome of the merger depends significantly on the maximum mass, even if the binary tidal deformability is unchanged. It should be remarked that models with $\tilde{\Lambda} < 400$ of @radice_pzb2018 are generated by assigning total masses larger than those allowed by GW170817 [@ligovirgo2017-3; @ligovirgo2019] except for the SFHo equation of state [@steiner_hf2013]. It is impossible for other equations of state adopted by them to produce binary models equipped with $\tilde{\Lambda} \lesssim 400$ and the total mass allowed by GW170817 simultaneously. This feature artificially enhances the chance of the early collapse. If we wish to put reliable constraints on neutron stars via numerical simulations, care must be taken regarding the limitation of the adopted models including the finite-temperature effect.
We thank Andreas Bauswein, Sebastiano Bernuzzi, Kenta Hotokezaka, David Radice, and Masaomi Tanaka for valuable comments. Numerical computations were performed at Oakforest-PACS at Information Technology Center of the University of Tokyo, Cray XC50 at CfCA of National Astronomical Observatory of Japan, and Cray XC30 at Yukawa Institute for Theoretical Physics of Kyoto University. This work is supported by Japanese Society for the Promotion of Science (JSPS) KAKENHI grant Nos. JP16H02183, JP16H06342, JP17H01131, JP17K05447, JP17H06361, JP18H01213, JP18H04595, and JP18H05236, and by a post-K project hp180179.
, B. P., [et al.]{} 2017, , 551, 85
—. 2017, , 848, L13
—. 2017, , 119, 161101
—. 2017, , 848, L12
—. 2018, , 121, 161101
—. 2019, [Phys. Rev. X]{}, 9, 011001
Annala, E., Gorda, T., Kurkela, A., & Vuorinen, A. 2018, , 120, 172703
Bauswein, A., Goriely, S., & Janka, H.-T. 2013, , 773, 78
Bauswein, A., Janka, H.-T., & Oechslin, R. 2010, , 82, 084043
Bauswein, A., Just, O., Janka, H.-T., & Stergioulas, N. 2017, , 850, L34
Burgio, G. F., Drago, A., Pagliara, G., Schulze, H.-J., & Wei, J.-B. 2018, , 860, 139
Coughlin, M. W., [et al.]{} 2018, , 480, 3871
Coughlin, M. W., Dietrich, T., Margalit, B., & Metzger, B. D. 2018, arXiv:1812.04803
De, S., Finstad, D., Lattimer, J. M., Brown, D. A., Berger, E., & Biwer, C. M. 2018, , 121, 091102
Ferdman, R. D., & [PALFA Collaboration]{}. 2018, in IAU Symposium, Vol. 337, Pulsar Astrophysics the Next Fifty Years, ed. P. [Weltevrede]{}, B. B. P. [Perera]{}, L. L. [Preston]{}, & S. [Sanidas]{}, 146–149
Fern[á]{}ndez, R., & Metzger, B. D. 2013, , 435, 502
Fujibayashi, S., Kiuchi, K., Nishimura, N., Sekiguchi, Y., & Shibata, M. 2018, , 860, 64
Gao, H., Cao, Z., Ai, S., & Zhang, B. 2017, , 851, L45
Hannam, M., Brown, D. A., Fairhurst, S., Fryer, C. L., & Harry, I. W. 2013, , 766, L14
Hotokezaka, K., Kiuchi, K., Kyutoku, K., Muranushi, T., Sekiguchi, Y.-i., Shibata, M., & Taniguchi, K. 2013, , 88, 044026
Hotokezaka, K., Kiuchi, K., Kyutoku, K., Okawa, H., Sekiguchi, Y.-i., Shibata, M., & Taniguchi, K. 2013, , 87, 024001
Hotokezaka, K., Kyutoku, K., Okawa, H., Shibata, M., & Kiuchi, K. 2011, , 83, 124008
Just, O., Bauswein, A., Pulpillo, R. A., Goriely, S., & Janka, H.-T. 2015, , 448, 541
Kasen, D., Metzger, B., Barnes, J., Quataert, E., & Ramirez-Ruiz, E. 2017, , 551, 80
Kawaguchi, K., Shibata, M., & Tanaka, M. 2018, , 865, L21
Kiuchi, K., Kawaguchi, K., Kyutoku, K., Sekiguchi, Y., Shibata, M., & Taniguchi, K. 2017, , 96, 084060
Kyutoku, K., Shibata, M., & Taniguchi, K. 2014, , 90, 064006
Lattimer, J. M., & Prakash, M. 2001, , 550, 426
Lim, Y., & Holt, J. W. 2018, , 121, 062701
Malik, T., [et al.]{} 2018, , 98, 035804
Margalit, B., & Metzger, B. D. 2017, , 850, L19
Metzger, B. D., & Fern[á]{}ndez, R. 2014, , 441, 3444
Mooley, K. P., [et al.]{} 2018, , 561, 355
Most, E. R., Weih, L. R., Rezzolla, L., & Schaffner-Bielich, J. 2018, , 120, 261103
Radice, D., & Dai, L. 2019, Eur. Phys. J. A, 55, 50
Radice, D., Galeazzi, F., Lippuner, J., Roberts, L. F., Ott, C. D., & Rezzolla, L. 2016, , 460, 3255
Radice, D., Perego, A., Hotokezaka, K., Fromm, S. A., Bernuzzi, S., & Roberts, L. F. 2018, , 869, 130
Radice, D., Perego, A., Zappa, F., & Bernuzzi, S. 2018, , 852, L29
Raithel, C. A., [Ö]{}zel, F., & Psaltis, D. 2018, , 857, L23
Read, J. S., Lackey, B. D., Owen, B. J., & Friedman, J. L. 2009, , 79, 124032
Rezzolla, L., Most, E. R., & Weih, L. R. 2018, , 852, L25
Ruiz, M., Shapiro, S. L., & Tsokaros, A. 2018, , 97, 021501
Shibata, M., Fujibayashi, S., Hotokezaka, K., Kiuchi, K., Kyutoku, K., Sekiguchi, Y., & Tanaka, M. 2017, , 96, 123012
Steiner, A. W., Hempel, M., & Fischer, T. 2013, , 774, 17
, M., [et al.]{} 2017, , 69, 102
Tauris, T. M., [et al.]{} 2017, , 846, 170
Tews, I., Margueron, J., & Reddy, S. 2018, , 98, 045804
Togashi, H., Nakazato, K., Takehara, Y., Yamamuro, S., Suzuki, H., & Takano, M. 2017, , 961, 78
Wanajo, S., Sekiguchi, Y., Nishimura, N., Kiuchi, K., Kyutoku, K., & Shibata, M. 2014, , 789, L39
Yamamoto, T., Shibata, M., & Taniguchi, K. 2008, , 78, 064054
Zhao, T., & Lattimer, J. M. 2018, , 98, 063020
[^1]: More precisely, this threshold is derived by fitting the multi-color evolution of AT 2017gfo.
[^2]: We do not adopt $(
\log P_\mathrm{14.7} , M_\mathrm{max} ) = ( 34.5 , 2.1 M_\odot )$, because it is unnecessary for our purpose.
[^3]: A model with $\tilde{\Lambda} = 508$ can eject $0.05 M_\odot$ if the efficiency exceeds 77%.
---
abstract: 'We classify all convex polyomino ideals which are linearly related or have a linear resolution. Convex stack polyominoes whose ideals are extremal Gorenstein are also classified. In addition, we characterize, in combinatorial terms, the distributive lattices whose join-meet ideals are extremal Gorenstein or have a linear resolution.'
address:
- 'Viviana Ene, Faculty of Mathematics and Computer Science, Ovidius University, Bd. Mamaia 124, 900527 Constanta, Romania, and Simion Stoilow Institute of Mathematics of the Romanian Academy, Research group of the project ID-PCE-2011-1023, P.O.Box 1-764, Bucharest 014700, Romania'
- 'Jürgen Herzog, Fachbereich Mathematik, Universität Duisburg-Essen, Campus Essen, 45117 Essen, Germany'
- 'Takayuki Hibi, Department of Pure and Applied Mathematics, Graduate School of Information Science and Technology, Osaka University, Toyonaka, Osaka 560-0043, Japan'
author:
- 'Viviana Ene, Jürgen Herzog, Takayuki Hibi'
title: Linearly related polyominoes
---
[^1]
[^2]
Introduction {#introduction .unnumbered}
============
The ideal of inner minors of a polyomino, a so-called polyomino ideal, is generated by certain subsets of $2$-minors of an $m\times n$-matrix $X$ of indeterminates. Such ideals have first been studied by Qureshi in [@Q]. They include the two-sided ladder determinantal ideals of $2$-minors which may also be viewed as the join-meet ideal of a planar distributive lattice. It is a challenging problem to understand the graded free resolution of such ideals. In [@ERQ], Ene, Rauf and Qureshi succeeded in computing the regularity of such join-meet ideals. Sharpe [@S1; @S2] showed that the ideal $I_2(X)$ of all $2$-minors of $X$ is linearly related, which means that $I_2(X)$ has linear relations. Moreover, he described these relations explicitly and conjectured that also the ideals of $t$-minors $I_t(X)$ are generated by a certain type of linear relations. This conjecture was then proved by Kurano [@K]. In the case that the base field over which $I_t(X)$ is defined contains the rational numbers, Lascoux [@L] gives the explicit free resolution of all ideals of $t$-minors. Unfortunately, the resolution of $I_t(X)$ in general may depend on the characteristic of the base field. Indeed, Hashimoto [@H] showed that for $2 \leq t \leq \min(m, n)-3$, the second Betti number $\beta_2$ of $I_t(X)$ depends on the characteristic. On the other hand, by using squarefree divisor complexes [@BH] as introduced by Bruns and the second author of this paper, it follows from [@BH Theorem 1.3] that $\beta_2$ for $t=2$ is independent of the characteristic.
In this paper we use as a main tool squarefree divisor complexes to study the first syzygy module of a polyomino ideal. In particular, we classify all convex polyominoes which are linearly related; see Theorem \[main\]. This is the main result of this paper. In the first section we recall the concept of polyomino ideals and show that the polyomino ideal of a convex polyomino has a quadratic Gröbner basis. The second section of the paper is devoted to state and to prove Theorem \[main\]. As mentioned before, the proof heavily depends on the theory of squarefree divisor complexes, which allows one to compute the multi-graded Betti numbers of a toric ideal. To apply this theory, one observes that the polyomino ideal of a convex polyomino may be naturally identified with a toric ideal. The crucial conclusion deduced from this observation, formulated in Corollary \[inducedsubpolyomino\], is then that the Betti numbers of a polyomino ideal are bounded below by the Betti numbers of the polyomino ideal of any induced subpolyomino. Corollary \[inducedsubpolyomino\] allows us to reduce the study of the relations of polyomino ideals to that of a finite number of polyominoes with a small number of cells which all can be analyzed by the use of a computer algebra system.
In the last section, we classify all convex polyominoes whose polyomino ideal has a linear resolution (Theorem \[linear\]) and all convex stack polyominoes whose polyomino ideal is extremal Gorenstein (Theorem \[stack\]). Since polyomino ideals overlap with join-meet ideals, it is of interest to determine which join-meet ideals have a linear resolution or are extremal Gorenstein. The answers are given in Theorem \[hibione\] and Theorem \[hibitwo\]. It turns out that the classifications for both classes of ideals almost lead to the same result.
Polyominoes
===========
In this section we consider polyomino ideals. This class of ideals of $2$-minors was introduced by Qureshi [@Q]. To this end, we consider on $\NN^2$ the natural partial order defined as follows: $(i,j) \leq (k,l)$ if and only if $i \leq k$ and $j \leq l$. The set $\NN^2$ together with this partial order is a distributive lattice.
If $a,b \in \NN^2$ with $a \leq b$, then the set $[a,b]= \{ c \in \NN^2|\; a \leq c \leq b\}$ is an interval of $\NN^2$. The interval $C=[a,b]$ with $b=a+(1,1)$ is called a [*cell*]{} of $\NN^2$. The elements of $C$ are called the [*vertices*]{} of $C$ and $a$ is called the [*left lower corner*]{} of $C.$ The [*edges*]{} of the cell $C$ are the sets $\{a, a+(1,0)\}$, $\{a, a+(0,1)\}$, $\{a+(1,0), a+(1,1)\}$ and $\{a+(0,1), a+(1,1)\}$.
Let $\MP$ be a finite collection of cells and $C,D\in \MP$. Then $C$ and $D$ are [*connected*]{}, if there is a sequence of cells of $\MP$ given by $C= C_1, \ldots, C_m =D$ such that $C_i \cap C_{i+1}$ is an edge of $C_i$ for $i=1, \ldots, m-1$. If, in addition, $C_i \neq C_j$ for all $i \neq j$, then the sequence $C_1, \ldots, C_m$ is called a [*path*]{} (connecting $C$ and $D$). The collection of cells $\MP$ is called a [*polyomino*]{} if any two cells of $\MP$ are connected; see Figure \[polyomino\]. The set of vertices of $\MP$, denoted $V(\MP)$, is the union of the vertices of all cells belonging to $\MP$. Two polyominoes are called [*isomorphic*]{} if they are mapped to each other by a composition of translations, reflections and rotations.
(4.5,-1)(4.5,3.5) [ (4,0)(4,1)(5,1)(5,0) (5,0)(5,1)(6,1)(6,0) (3,2)(3,3)(4,3)(4,2) (5,1)(5,2)(6,2)(6,1) (4,2)(4,3)(5,3)(5,2) (5,2)(5,3)(6,3)(6,2) (6,0)(6,1)(7,1)(7,0) (6,1)(6,2)(7,2)(7,1) (4,-1)(4,0)(5,0)(5,-1) (5,-1)(5,0)(6,0)(6,-1) (2,-1)(2,0)(3,0)(3,-1) (3,-1)(3,0)(4,0)(4,-1) (2,-0)(2,1)(3,1)(3,0) ]{}
We call a polyomino $\MP$ [*row convex*]{}, if for any two cells $C,D$ of $\MP$ with left lower corner $a=(i,j)$ and $b=(k,j)$ respectively, and such that $k>i$, it follows that all cells with left lower corner $(l,j)$ with $i\leq l\leq k$ belong to $\MP$. Similarly, one defines [*column convex*]{} polyominoes. The polyomino $\MP$ is called [*convex*]{} if it is row and column convex.
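These definitions are easy to check algorithmically. A small sketch in Python (cells represented by their left lower corners; the function names are ours, and connectivity is not checked):

```python
def is_row_convex(cells):
    """Cells are given as left lower corners (i, j)."""
    cells = set(cells)
    for (i, j) in cells:
        for (k, l) in cells:
            if l == j and k > i:
                # every cell with left lower corner (m, j), i <= m <= k,
                # must belong to the collection
                if any((m, j) not in cells for m in range(i, k + 1)):
                    return False
    return True


def is_column_convex(cells):
    # column convexity is row convexity of the transposed collection
    return is_row_convex({(j, i) for (i, j) in cells})


def is_convex(cells):
    return is_row_convex(cells) and is_column_convex(cells)
```

For example, an L-shaped tromino is convex in this sense, while a U-shaped pentomino is not.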
The polyomino displayed in Figure \[polyomino\] is not convex, while Figure \[convex\] shows a convex polyomino. Note that a convex polyomino need not be convex in the common geometric sense.
(4.5,-1)(4.5,3.5) [ (2.8,0)(2.8,1)(3.8,1)(3.8,0) (3.8,-1)(3.8,0)(4.8,0)(4.8,-1) (4.8,0)(4.8,1)(5.8,1)(5.8,0) (3.8,1)(3.8,2)(4.8,2)(4.8,1) (3.8,0)(3.8,1)(4.8,1)(4.8,0) ]{}
Now let $\MP$ be any collection of cells. We may assume that the vertices of all the cells of $\MP$ belong to the interval $[(1,1),(m,n)]$. Fix a field $K$ and let $S$ be the polynomial ring over $K$ in the variables $x_{ij}$ with $(i,j)\in \MP$. The [*ideal of inner minors*]{} $I_\MP\subset S$ of $\MP$, is the ideal generated by all $2$-minors $x_{il}x_{kj}-x_{kl}x_{ij}$ for which $[(i,j),(k,l)]\subset V(\MP)$. Furthermore, we denote by $K[\MP]$ the $K$-algebra $S/I_\MP$. If $\MP$ happens to be a polyomino, then $I_\MP$ will also be called a [*polyomino ideal*]{}.
For example, the polyomino $\MP$ displayed in Figure \[convex\] may be embedded into the interval $[(1,1),(4,4)]$. Then, in these coordinates, $I_\MP$ is generated by the $2$-minors $$\begin{aligned}
&& x_{22}x_{31}-x_{32}x_{21}, x_{23}x_{31}-x_{33}x_{21},x_{24}x_{31}-x_{34}x_{21}, x_{23}x_{32}-x_{33}x_{22},\\
&& x_{24}x_{32}-x_{34}x_{22}, x_{24}x_{33}-x_{34}x_{23},
x_{13}x_{22}-x_{12}x_{23}, x_{13}x_{32}-x_{12}x_{33}, \\
&& x_{13}x_{42}-x_{12}x_{43}, x_{23}x_{42}-x_{22}x_{43}, x_{33}x_{42}-x_{32}x_{43}.\end{aligned}$$
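The generating set of $I_\MP$ can be enumerated mechanically from the vertex set. A small sketch (ours; for a full $m \times n$ grid of vertices the count is $\binom{m}{2}\binom{n}{2}$, which the example checks for $m = 2$, $n = 3$):

```python
from itertools import combinations


def inner_minors(vertices):
    """Return the index quadruples ((i,l), (k,j), (k,l), (i,j)) of the
    2-minors x_il*x_kj - x_kl*x_ij for which the whole interval
    [(i,j), (k,l)] is contained in the vertex set."""
    V = set(vertices)
    rows = sorted({i for i, _ in V})
    cols = sorted({j for _, j in V})
    minors = []
    for i, k in combinations(rows, 2):
        for j, l in combinations(cols, 2):
            if all((a, b) in V
                   for a in range(i, k + 1) for b in range(j, l + 1)):
                minors.append(((i, l), (k, j), (k, l), (i, j)))
    return minors


grid = {(i, j) for i in (1, 2) for j in (1, 2, 3)}
print(len(inner_minors(grid)))  # -> 3 = binom(2,2) * binom(3,2)
```

Applied to the vertex set of a polyomino, this reproduces generator lists such as the one displayed above.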
The following result has been shown by Qureshi in [@Q Theorem 2.2].
\[ayesha\] Let $\MP$ be a convex polyomino. Then $K[\MP]$ is a normal Cohen–Macaulay domain.
The proof of this theorem is based on the fact that $I_\MP$ may be viewed as follows as a toric ideal: with the assumptions and notation as introduced before, we may assume that $V(\MP)\subset [(1,1),(m,n)]$. Consider the $K$-algebra homomorphism $\varphi\: S\to T$ with $\varphi(x_{ij})=s_it_j$ for all $(i,j)\in V(\MP)$. Here $T=K[s_1,\ldots,s_m, t_1,\ldots,t_n]$ is the polynomial ring over $K$ in the variables $s_i$ and $t_j$. Then, as observed by Qureshi, $I_\MP=\Ker \varphi$. It follows that $K[\MP]$ may be identified with the edge ring of the bipartite graph $G_\MP$ on the vertex set $\{s_1,\ldots,s_m\}\union\{t_1,\ldots,t_n\}$ and edges $\{s_i,t_j\}$ with $(i,j)\in V(\MP)$. With this interpretation of $K[\MP]$ in mind and by using [@HO], we obtain
\[hibiohsugi\] Let $\MP$ be a convex polyomino. Then $I_\MP$ has a quadratic Gröbner basis.
We use the crucial fact, proved in [@HO], that the toric ideal which defines the edge ring of a bipartite graph has a quadratic Gröbner basis if and only if each $2r$-cycle with $r\geq 3$ has a chord. By what we explained before, a $2r$-cycle, after identifying the vertices of $\MP$ with the edges of a bipartite graph, is nothing but a sequence of vertices $a_1,\ldots, a_{2r}$ of $\MP$ with $$a_{2k-1}= (i_k,j_k) \quad \text{and}\quad a_{2k}= (i_{k+1}, j_k) \quad\text{for $k=1,\ldots,r$}$$ such that $i_{r+1}=i_1$, $i_k\neq i_\ell$ and $j_k\neq j_\ell$ for all $k,\ell\leq r$ and $k\neq \ell$.
A typical such sequence of pairs of integers is the following: $$\begin{aligned}
&3 \quad 2 \quad 2 \quad 4 \quad 4 \quad 5 \quad 5 \quad 3&\\
&1 \quad 1 \quad 3 \quad 3 \quad 2 \quad 2 \quad 4 \quad 4&\end{aligned}$$ Here the first row is the sequence of the first components and the second row the sequence of the second components of the vertices $a_i$. This pair of sequences represents an $8$-cycle. It follows from Lemma \[goodforstudents\] that there exist integers $s$ and $t$ with $1\leq t, s\leq r$ and $t\neq s,s+1$ such that either $i_s<i_t<i_{s+1}$ or $i_{s+1}<i_t<i_{s}$. Suppose that $i_s<i_t<i_{s+1}$. Since $a_{2s-1}=(i_s,j_s)$ and $a_{2s}=(i_{s+1},j_s)$ are vertices of $\MP$ and since $\MP$ is convex, it follows that $(i_t,j_s)\in V(\MP)$. This vertex corresponds to a chord of the cycle $a_1,\ldots, a_{2r}$. Similarly one argues if $i_{s+1}<i_t<i_{s}$.
\[goodforstudents\] Let $r \geq 3$ be an integer and $f:[r+1]\to \ZZ$ a function such that $f(i) \neq f(j)$ for $1 \leq i < j \leq r$ and $f(r+1) = f(1)$. Then there exist $1 \leq s, \, t \leq r$ such that one has either $f(s) < f(t) < f(s+1)$ or $f(s+1) < f(t) < f(s)$.
Let, say, $f(1) < f(2)$. Since $f(r+1) = f(1)$, there is $2 \leq q \leq r$ with $$f(1) < f(2) < \cdots < f(q) > f(q + 1).$$
- Let $q = r$. Then, since $q = r \geq 3$, one has $(f(1) =) \, f(r+1)
< f(2) < f(r)$.
- Let $q < r$ and $f(q+1) > f(1)$. Since $f(q+1) \not\in \{f(1), f(2), \ldots, f(q) \}$, it follows that there is $1 \leq s < q$ with $f(s) < f(q+1) < f(s+1)$.
- Let $q < r$ and $f(q+1) < f(1)$. Then one has $f(q+1) < f(1) < f(q)$.
The case of $f(1) > f(2)$ can be discussed similarly.
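The conclusion of the lemma is also easily confirmed by exhaustive search for small $r$; a throwaway Python sketch (all names are ours):

```python
from itertools import permutations

def has_intermediate_value(f):
    """Does the cyclic sequence f = (f(1), ..., f(r), f(r+1)) with
    f(r+1) = f(1) admit indices s, t with f(s) < f(t) < f(s+1) or
    f(s+1) < f(t) < f(s)?  (t != s, s+1 is then automatic.)"""
    r = len(f) - 1
    for s in range(r):                    # pairs (f(s), f(s+1)), s = 1..r
        lo, hi = sorted((f[s], f[s + 1]))
        if any(lo < f[t] < hi for t in range(r)):
            return True
    return False

# exhaustive check over all injective f given by permutations of {1, ..., r}
for r in (3, 4, 5):
    assert all(has_intermediate_value(list(p) + [p[0]])
               for p in permutations(range(1, r + 1)))
print("lemma confirmed for r = 3, 4, 5")
```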
We denote the graded Betti numbers of $I_\MP$ by $\beta_{ij}(I_\MP)$.
\[no\] Let $\MP$ be a convex polyomino. Then $\beta_{1j}(I_\MP)=0$ for $j>4$.
By Proposition \[hibiohsugi\], there exists a monomial order $<$ such that $\ini_<(I_\MP)$ is generated in degree 2. Therefore, it follows from [@HS Corollary 4] that $\beta_{1j}(\ini_<(I_\MP))=0$ for $j>4$. Since $\beta_{1j}(I_\MP)\leq \beta_{1j}(\ini_<(I_\MP))$ (see, for example, [@HH Corollary 3.3.3]), the desired conclusion follows.
The first syzygy module of a polyomino ideal
============================================
Let $\MP$ be a convex polyomino and let $f_1,\ldots,f_m$ be the minors generating $I_\MP$. In this section we study the relation module $\Syz_1(I_\MP)$ of $I_\MP$, which is the kernel of the $S$-module homomorphism $\Dirsum_{i=1}^mSe_i\to I_\MP$ with $e_i\mapsto f_i$ for $i=1,\ldots,m$. The graded module $\Syz_1(I_\MP)$ has generators in degree $3$ and no generators in degree $>4$, as we have seen in Corollary \[no\]. We say that $I_\MP$ (or simply $\MP$) is [*linearly related*]{} if $\Syz_1(I_\MP)$ is generated only in degree $3$.
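The degree-$3$ component of $\Syz_1(I_\MP)$ is the kernel of a linear map between finite-dimensional vector spaces, so its dimension can be computed by exact Gaussian elimination. Below is a self-contained Python sketch (all encodings and function names are ours), run on the $3\times 2$ grid of vertices, that is, two horizontally adjacent cells; here the Eagon–Northcott resolution of the $2$-minors of a generic $2\times 3$ matrix predicts a two-dimensional space of linear syzygies.

```python
from fractions import Fraction
from itertools import combinations

def quadric_minors(V):
    """Inner 2-minors, each as a dictionary {pair of variables: coefficient}."""
    Vs = set(V)
    out = []
    for (i, j), (k, l) in combinations(sorted(Vs), 2):
        if i < k and j < l and all((a, b) in Vs
                                   for a in range(i, k + 1)
                                   for b in range(j, l + 1)):
            out.append({tuple(sorted([(i, l), (k, j)])): 1,
                        tuple(sorted([(k, l), (i, j)])): -1})
    return out

def linear_syzygy_dimension(V):
    """Dimension of { (l_1, ..., l_m) linear forms : sum l_a f_a = 0 }."""
    fs = quadric_minors(V)
    cols, monomials = [], {}
    for f in fs:                       # one column per pair (minor, variable)
        for v in sorted(set(V)):
            col = {}
            for quad, c in f.items():
                mono = tuple(sorted(quad + (v,)))   # cubic monomial
                col[mono] = col.get(mono, 0) + c
            cols.append(col)
            for mono in col:
                monomials.setdefault(mono, len(monomials))
    A = [[Fraction(col.get(m, 0)) for col in cols] for m in monomials]
    rank = 0                           # Gaussian elimination over the rationals
    for c in range(len(cols)):
        piv = next((r for r in range(rank, len(A)) if A[r][c]), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        A[rank] = [x / A[rank][c] for x in A[rank]]
        for r in range(len(A)):
            if r != rank and A[r][c]:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[rank])]
        rank += 1
    return len(cols) - rank

# two horizontally adjacent cells: the 2-minors of a generic 2 x 3 matrix
grid = [(i, j) for i in (1, 2, 3) for j in (1, 2)]
print(linear_syzygy_dimension(grid))  # expected: 2
```

For serious computations one would of course use a computer algebra system, as the paper does with CoCoA and Singular; the sketch only illustrates that the degree-$3$ relation space is an ordinary kernel computation.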
Let $f_i$ and $f_j$ be two distinct generators of $I_\MP$. Then the Koszul relation $f_ie_j-f_je_i$ belongs to $\Syz_1(I_\MP)$. We call $f_i,f_j$ a [*Koszul relation pair*]{} if $f_ie_j-f_je_i$ is a minimal generator of $\Syz_1(I_\MP)$. The main result of this section is the following.
\[main\] Let $\MP$ be a convex polyomino. The following conditions are equivalent:
(a) $\MP$ is linearly related;

(b) $I_\MP$ admits no Koszul relation pairs;

(c) Let, as we may assume, $[(1,1),(m,n)]$ be the smallest interval with the property that $V(\MP)\subset [(1,1),(m,n)]$. We refer to the elements $(1,1), (m,1), (1,n)$ and $(m,n)$ as the corners. Then $\MP$ has the shape as displayed in Figure \[shape\], and one of the following conditions holds:

    (i) at most one of the corners does not belong to $V(\MP)$;

    (ii) two of the corners do not belong to $V(\MP)$, but they are not opposite to each other. In other words, the missing corners are neither the corners $(1,1),(m,n)$, nor the corners $(m,1),(1,n)$;

    (iii) three of the corners do not belong to $V(\MP)$. If the missing corners are $(m,1),(1,n)$ and $(m,n)$ (which one may assume without loss of generality), then, referring to Figure \[shape\], the following conditions must be satisfied: either $i_2=m-1$ and $j_4\leq j_2$, or $j_2=n-1$ and $i_4\leq i_2$.
As an essential tool in the proof of this theorem we recall the so-called squarefree divisor complex, as introduced in [@HH]. Let $K$ be a field, $H\subset
\NN^n$ an affine semigroup and $K[H]$ the semigroup ring attached to it. Suppose that $h_1,\ldots, h_m\in \NN^n$ is the unique minimal set of generators of $H$. We consider the polynomial ring $T=K[t_1,\ldots,t_n]$ in the variables $t_1,\ldots,t_n$. Then $K[H]=K[u_1,\ldots,u_m]\subset T$ where $u_i=\prod_{j=1}^nt_j^{h_i(j)}$ and where $h_i(j)$ denotes the $j$th component of the integer vector $h_i$. We choose a presentation $S=K[x_1,\ldots,x_m]\to K[H]$ with $x_i\mapsto u_i$ for $i=1,\ldots,m$. The kernel $I_H$ of this $K$-algebra homomorphism is called the toric ideal of $H$. We assign a $\ZZ^n$-grading to $S$ by setting $\deg x_i=h_i$. Then $K[H]$ as well as $I_H$ become $\ZZ^n$-graded $S$-modules. Thus $K[H]$ admits a minimal $\ZZ^n$-graded $S$-resolution $\FF$ with $F_i=\Dirsum_{h\in H} S(-h)^{\beta_{ih}(K[H])}$.
In the case that all $u_i$ are monomials of the same degree, one can assign to $K[H]$ the structure of a standard graded $K$-algebra by setting $\deg u_i=1$ for all $i$. The degree of $h$ with respect to this standard grading will be denoted $|h|$.
Given $h\in H$, we define the [*squarefree divisor complex*]{} $\Delta_h$ as follows: $\Delta_h$ is the simplicial complex whose faces $F=\{i_1,\ldots,i_k\}$ are the subsets of $[m]$ such that $u_{i_1}\cdots u_{i_k}$ divides $t_1^{h(1)}\cdots t_n^{h(n)}$ in $K[H]$. We denote by $\tilde{H}_{i}(\Gamma, K)$ the $i$th reduced simplicial homology of a simplicial complex $\Gamma$.
\[bh\] With the notation and assumptions introduced one has $\Tor_i(K[H],K)_h\iso\tilde{H}_{i-1}(\Delta_h, K)$. In particular, $$\beta_{ih}(K[H])=\dim_K\tilde{H}_{i-1}(\Delta_h, K).$$
Let $H'$ be a subsemigroup of $H$ generated by a subset of the set of generators of $H$, and let $S'$ be the polynomial ring over $K$ in the variables $x_i$ with $h_i$ a generator of $H'$. Furthermore, let $\FF'$ be the $\ZZ^{n}$-graded free $S'$-resolution of $K[H']$. Then, since $S$ is a flat $S'$-module, $\FF'\tensor_{S'}S$ is a $\ZZ^n$-graded free $S$-resolution of $S/I_{H'}S$. The inclusion $K[H']\to K[H]$ induces a $\ZZ^n$-graded complex homomorphism $\FF'\tensor_{S'}S\to \FF$. Tensoring this complex homomorphism with $K=S/\mm$, where $\mm$ is the graded maximal ideal of $S$, we obtain the following sequence of isomorphisms and natural maps of $\ZZ^n$-graded $K$-modules $$\Tor_i^{S'}(K[H'],K)\iso H_i(\FF'\tensor_{S'}K)\iso H_i((\FF'\tensor_{S'}S)\tensor_SK)\to H_i(\FF\tensor_SK)\iso \Tor_i^S(K[H],K).$$
For later applications we need
\[refinement\] With the notation and assumptions introduced, let $H'$ be a subsemigroup of $H$ generated by a subset of the set of generators of $H$, and let $h$ be an element of $H'$ with the property that $h_i\in H'$ whenever $h-h_i\in H$. Then the natural $K$-vector space homomorphism $\Tor_i^{S'}(K[H'],K)_h\to
\Tor_i^S(K[H],K)_h$ is an isomorphism for all $i$.
Let $\Delta_h'$ be the squarefree divisor complex of $h$ where $h$ is viewed as an element of $H'$. Then we obtain the following commutative diagram $$\begin{aligned}
\label{diagram}
\begin{CD}
\Tor_i(K[H'],K)_h @>>> \Tor_i(K[H],K)_h\\
@VVV @VVV\\
\tilde{H}_{i-1}(\Delta_h', K)@>>> \tilde{H}_{i-1}(\Delta_h, K).
\end{CD}\end{aligned}$$ The vertical maps are isomorphisms, and also the lower horizontal map is an isomorphism, simply because $\Delta_h'=\Delta_h$ due to the assumptions on $h$. This yields the desired conclusion.
Let $H\subset \NN^n$ be an affine semigroup generated by $h_1,\ldots, h_m$. An affine subsemigroup $H'\subset H$ generated by a subset of $\{h_1,\ldots, h_m\}$ will be called a [*homologically pure*]{} subsemigroup of $H$ if for all $h\in H'$ and all $h_i$ with $h-h_i\in H$ it follows that $h_i\in H'$.
As an immediate consequence of Corollary \[refinement\] we obtain
\[homologicallypure\] Let $H'$ be a homologically pure subsemigroup of $H$. Then $$\Tor_i^{S'}(K[H'],K)\to \Tor_i^S(K[H],K)$$ is injective for all $i$. In other words, if $\FF'$ is the minimal $\ZZ^n$-graded free $S'$-resolution of $K[H']$ and $\FF$ is the minimal $\ZZ^n$-graded free $S$-resolution of $K[H]$, then the complex homomorphism $\FF'\tensor S\to \FF$ induces an injective map $\FF'\tensor K\to \FF\tensor K$. In particular, any minimal set of generators of $\Syz_i(K[H'])$ is part of a minimal set of generators of $\Syz_i(K[H])$. Moreover, $\beta_{ij}(I_{H'})\leq \beta_{ij}(I_H)$ for all $i$ and $j$.
We fix a field $K$ and let $\MP\subset [(1,1),(m,n)]$ be a convex polyomino. Let as before $S$ be the polynomial ring over $K$ in the variables $x_{ij}$ with $(i,j)\in V(\MP)$ and $K[\MP]$ the $K$-subalgebra of the polynomial ring $T=K[s_1,\ldots,s_m,t_1,\ldots,t_n]$ generated by the monomials $u_{ij}=s_it_j$ with $(i,j)\in V(\MP)$. Viewing $K[\MP]$ as a semigroup ring $K[H]$, it is convenient to identify the semigroup elements with the monomial they represent.
Given sets $\{i_1,i_2,\ldots, i_s\}$ and $\{j_1,j_2,\ldots, j_t\}$ of integers with $i_k\in [m]$ and $j_k\in [n]$ for all $k$, we let $H'$ be the subsemigroup of $H$ generated by the elements $s_{i_k}t_{j_l}$ with $(i_k,j_l)\in V(\MP)$. Then $H'$ is a homologically pure subsemigroup of $H$. Note that $H'$ is also a combinatorially pure subsemigroup of $H$ in the sense of [@HHO].
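The defining condition of homological purity can be tested naively for small edge semigroups, with membership in $H$ decided by exhaustive decomposition into generators. A brute-force Python sketch follows; the encodings, function names, and the degree bound are ours.

```python
from itertools import combinations_with_replacement

def decomposable(s_deg, t_deg, edges):
    """Can the pair of degree vectors be written as a sum of edges (i, j),
    where each edge contributes 1 to s_deg[i] and 1 to t_deg[j]?"""
    if all(v == 0 for v in s_deg):
        return all(v == 0 for v in t_deg)
    i = next(k for k, v in enumerate(s_deg) if v > 0)
    for (a, b) in edges:
        if a == i and t_deg[b] > 0:
            s2, t2 = list(s_deg), list(t_deg)
            s2[a] -= 1
            t2[b] -= 1
            if decomposable(s2, t2, edges):
                return True
    return False

def is_homologically_pure(sub_edges, edges, m, n, degree_bound=3):
    """Check, up to the given degree, that h in H' and h - g in H
    force the generator g to lie in H'."""
    sub = set(sub_edges)
    for d in range(1, degree_bound + 1):
        for combo in combinations_with_replacement(sorted(sub), d):
            s_deg, t_deg = [0] * m, [0] * n
            for (a, b) in combo:
                s_deg[a] += 1
                t_deg[b] += 1
            for g in edges:
                if g in sub:
                    continue
                a, b = g
                if s_deg[a] > 0 and t_deg[b] > 0:
                    s2, t2 = list(s_deg), list(t_deg)
                    s2[a] -= 1
                    t2[b] -= 1
                    if decomposable(s2, t2, edges):
                        return False   # h - g lies in H, but g is not in H'
    return True

# H: edge semigroup of the complete bipartite graph on {s_0, s_1} x {t_0, t_1}
full = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(is_homologically_pure([(0, 0), (0, 1)], full, 2, 2))  # row-induced: True
print(is_homologically_pure([(0, 0), (1, 1)], full, 2, 2))  # diagonal: False
```

The second call illustrates why an arbitrary subset of generators fails: for $h=s_0t_0\cdot s_1t_1$ one has $h-s_0t_1=s_1t_0\in H$, yet $s_0t_1$ is not among the chosen generators.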
A collection of cells $\MP'$ will be called a [*collection of cells*]{} of $\MP$ [*induced*]{} by the columns $i_1,i_2,\ldots, i_s$ and the rows $j_1,j_2,\ldots, j_t$, if the following holds: $(k,l)\in V(\MP')$ if and only if $(i_k,j_l)\in V(\MP)$. Observe that $K[\MP']$ is always a domain, since it is a $K$-subalgebra of $K[\MP]$. The map $V(\MP')\to V(\MP)$, $(k,l)\mapsto (i_k,j_l)$ identifies $I_{\MP'}$ with the ideal contained in $I_{\MP}$ generated by those $2$-minors of $I_{\MP}$ which only involve the variables $x_{i_k,j_l}$. In the following we always identify $I_{\MP'}$ with this subideal of $I_{\MP}$.
If the induced collection of cells of $\MP'$ is a polyomino, we call it an [*induced polyomino*]{}. Any induced polyomino $\MP'$ of $\MP$ is again convex.
Consider for example the polyomino $\MP$ on the left side of Figure \[combinatorial\] with left lower corner $(1,1)$. Then the induced polyomino $\MP'$ shown on the right side of Figure \[combinatorial\] is induced by the columns $1,3,4$ and the rows $1,2,3,4$.
*(Figure \[combinatorial\]: the polyomino $\MP$, left, and the induced polyomino $\MP'$, right.)*
Obviously Corollary \[homologicallypure\] implies
\[inducedsubpolyomino\] Let $\MP'$ be an induced collection of cells of $\MP$. Then $\beta_{ij}(I_{\MP'})\leq \beta_{ij}(I_\MP)$ for all $i$ and $j$, and each minimal relation of $I_{\MP'}$ is also a minimal relation of $I_\MP$.
We will now use Corollary \[inducedsubpolyomino\] to isolate step by step the linearly related polyominoes.
\[begin\] Suppose $\MP$ admits an induced collection of cells $\MP^\prime$ isomorphic to one of those displayed in Figure \[restricted\]. Then $I_\MP$ has a Koszul relation pair.
We may assume that $V(\MP')\subset [(1,1),(4,4)]$. By using CoCoA [@Co] or Singular [@DGPS] to compute $\Syz_1(I_{\MP'})$ we see that the minors $f_a=[12|12]$ and $f_b=[34|34]$ form a Koszul relation pair of $I_{\MP'}$. Thus the assertion follows from Corollary \[inducedsubpolyomino\].
*(Figure \[restricted\]: the two collections of cells (a) and (b).)*
\[corner\] Let $\MP$ be a convex polyomino, and let $[(1,1),(m,n)]$ be the smallest interval with the property that $V(\MP)\subset [(1,1),(m,n)]$. We assume that $m,n\geq
4$. If one of the vertices $(2,2), (m-1,2), (m-1,n-1)$ or $(2,n-1)$ does not belong to $V(\MP)$, then $I_{\MP}$ has a Koszul relation pair, and, hence, $I_{\MP}$ is not linearly related.
We may assume that $(2,2)\not\in V(\MP)$. Then the vertices of the interval $[(1,1),(2,2)]$ do not belong to $V(\MP)$. Since $[(1,1),(m,n)]$ is the smallest interval containing $V(\MP)$, there exist, therefore, integers $i$ and $j$ with $2< i\leq m-1$ and $2< j\leq n-1$ such that the cells $[(i,1),(i+1,2)]$ and $[(1,j),(2,j+1)]$ belong to $\MP$. Then the collection of cells induced by the columns $1,2,i,i+1$ and the rows $1,2,j,j+1$ is isomorphic to one of the collections $\MP'$ of Figure \[restricted\]. Thus the assertion follows from Lemma \[begin\] and Corollary \[inducedsubpolyomino\].
Corollary \[corner\] shows that the convex polyomino $\MP$ must contain all the vertices $(2,2), (m-1,2), (m-1,n-1)$ and $(2,n-1)$ in order to be linearly related. Thus a polyomino which is linearly related must have the shape as indicated in Figure \[shape\]. The number $i_1$ is also allowed to be $1$, in which case also $j_1=1$. In this case the polyomino contains the corner $(1,1)$. A similar convention applies to the other corners. In Figure \[shape\] all four corners $(1,1), (1,n), (m,1)$ and $(m,n)$ are missing.
*(Figure \[shape\]: the general shape of a linearly related convex polyomino, with the vertices $(2,2)$, $(m-1,2)$, $(2,n-1)$, $(m-1,n-1)$ marked and the parameters $i_1,i_2,i_3,i_4$ and $j_1,j_2,j_3,j_4$ indicated on the boundary.)*
The convex polyomino displayed in Figure \[not\] however is not linearly related, though it has the shape as shown in Figure \[shape\]. Thus there must still be other obstructions for a polyomino to be linearly related.
*(Figure \[not\]: a convex polyomino of the shape of Figure \[shape\] which is not linearly related.)*
Now we proceed further in eliminating those polyominoes which are not linearly related.
\[opposite\] Let $\MP$ be a convex polyomino, and let $[(1,1),(m,n)]$ be the smallest interval with the property that $V(\MP)\subset [(1,1),(m,n)]$. If $\MP$ misses only two opposite corners, say $(1,1)$ and $(m,n)$, or $\MP$ misses all four corners $(1,1)$, $(1,n)$, $(m,1)$ and $(m,n)$, then $I_\MP$ admits a Koszul relation pair and hence is not linearly related.
Let us first assume that $(1,1)$ and $(m,n)$ do not belong to $V(\MP)$, but $(1,n)$ and $(m,1)$ belong to $V(\MP)$. The collection of cells $\MP_1$ induced by the rows $1,2,m-1,m$ and the columns $1,2,n-1,n$ is shown in Figure \[stairs\]. All the light colored cells, some of them or none of them are present according to whether or not all, some or none of the equations $i_1=2$, $j_1=2$, $i_4=m-1$ and $j_4=n-1$ hold. For example, if $i_1=2$, $j_1\neq 2$, $i_4=m-1$ and $j_4\neq n-1$, then the light colored cells $[(2,1),(3,2)]$ and $[(2,3),(3,4)]$ belong to $\MP_1$ and the other two light colored cells do not belong to $\MP_1$.
*(Figure \[stairs\]: the induced collection of cells $\MP_1$, with the optional cells light colored.)*
It can easily be checked that the ideal $I_{\MP_1}$ of the collection of cells displayed in Figure \[stairs\] has a Koszul relation pair in all possible cases, and so does $I_\MP$ by Corollary \[inducedsubpolyomino\].
Next, we assume that none of the four corners $(1,1)$, $(1,n)$, $(m,1)$ and $(m,n)$ belong to $\MP$. In the following arguments we refer to Figure \[shape\]. In the first case suppose $[i_3,i_4]\subset [i_1,i_2]$ and $[j_3,j_4]\subset [j_1,j_2]$. Then the collection of cells induced by the columns $2,i_3,i_4,m-1$ and the rows $1,j_3,j_4,n$ is the polyomino displayed in Figure \[convex\] which has a Koszul relation pair as can be verified by computer. Thus $\MP$ has a Koszul relation pair. A similar argument applies if $[i_1,i_2]\subset [i_3,i_4]$ or $[j_1,j_2]\subset [j_3,j_4]$.
Next assume that $[i_3,i_4]\not\subset [i_1,i_2]$ or $[j_3,j_4]\not\subset [j_1,j_2]$. By symmetry, we may discuss only $[i_3,i_4]\not\subset [i_1,i_2]$. Then we may assume that $i_3<i_1$ and $i_4< i_2$. We choose the columns $i_1,i_2,i_3,i_4$ and the rows $1,2,n-1,n$. Then the polyomino induced by these rows and columns is $\MP_1$ if $i_1<i_4$, $\MP_2$ if $i_4=i_1$ and $\MP_3$ if $i_4<i_1$; see Figure \[cases\]. In all three cases the corresponding induced polyomino ideal has a Koszul relation pair, and hence so does $I_\MP$.
*(Figure \[cases\]: the three induced polyominoes $\MP_1$, $\MP_2$, $\MP_3$ for the cases $i_1<i_4$, $i_4=i_1$ and $i_4<i_1$.)*
\[threecorners\] Let $\MP$ be a convex polyomino, and let $[(1,1),(m,n)]$ be the smallest interval with the property that $V(\MP)\subset [(1,1),(m,n)]$. Suppose $\MP$ misses three corners, say $(1,n),(m,1),(m,n)$, and suppose that $i_2<m-1$ and $j_2<n-1$, or $i_2=m-1$ and $j_2<j_4$, or $j_2=n-1$ and $i_2< i_4$. Then $I_\MP$ has a Koszul relation pair and hence is not linearly related.
We proceed as in the proofs of the previous lemmata. In the case that $i_2<m-1$ and $j_2<n-1$, we consider the collection of cells $\MP'$ induced by the columns $1,2,m-1$ and the rows $1,2,n-1$. This collection of cells $\MP'$ is depicted in Figure \[two\]. It is easily seen that $I_{\MP'}$ is generated by a regular sequence of length $2$, which is a Koszul relation pair. In the case that $i_2=m-1$ and $j_2< j_4$ we choose the columns $1,2,m-1,m$ and the rows $1,2,j_4-1,j_4$. The polyomino $\MP''$ induced by this choice of rows and columns has two opposite missing corners, hence, by Lemma \[opposite\], it has a Koszul relation pair. The case $j_2=n-1$ and $i_2< i_4$ is symmetric. In both cases the induced polyomino ideal has a Koszul relation pair. Hence in all three cases $I_\MP$ itself has a Koszul relation pair.
*(Figure \[two\]: the collection of cells $\MP'$ consisting of two diagonally placed cells.)*
Implication (a)$\Rightarrow$(b) is obvious. Implication (b)$\Rightarrow$(c) follows by Corollary \[corner\], Lemma \[opposite\], and Lemma \[threecorners\].
It remains to prove (c)$\Rightarrow$(a). Let $\MP$ be a convex polyomino which satisfies one of the conditions (i)–(iii). We have to show that $\MP$ is linearly related. By Corollary \[no\], we only need to prove that $\beta_{14}(I_{\MP})=0.$ Viewing $K[\MP]$ as a semigroup ring $K[H],$ it follows that one has to check that $\beta_{1h}(I_{\MP})=0$ for all $h\in H$ with $|h|=4.$ The main idea of this proof is to use Corollary \[refinement\].
Let $h=h_1h_2h_3h_4$ with $h_q=s_{i_q}t_{j_q}$ for $1\leq q\leq 4$, and set $i=\min_{q}\{i_q\}$, $k=\max_{q}\{i_q\}$, $j=\min_{q}\{j_q\}$, and $\ell=\max_{q}\{j_q\}$. Therefore, all the points $h_q$ lie in the (possibly degenerate) rectangle $\MQ$ with vertices $(i,j), (k,j), (i,\ell),(k,\ell)$. If $\MQ$ is degenerate, that is, all the vertices of $\MQ$ are contained in a vertical or horizontal line segment in $\MP$, then $\beta_{1h}(I_{\MP})=0$ since in this case the simplicial complex $\Delta_h$ is just a simplex. Let us now consider $\MQ$ non-degenerate. If all the vertices of $\MQ$ belong to $\MP$, then the rectangle $\MQ$ is an induced subpolyomino of $\MP$. Therefore, by Corollary \[refinement\], we have $\beta_{1h}(I_{\MP})=\beta_{1h}(I_\MQ)=0$, the latter equality being true since $\MQ$ is linearly related.
Next, let us assume that some of the vertices of $\MQ$ do not belong to $\MP.$ As $\MP$ has one of the forms (i)–(iii), it follows that at most three vertices of $\MQ$ do not belong to $\MP.$ Consequently, we have to analyze the following cases.
[*Case 1.*]{} Exactly one vertex of $\MQ$ does not belong to $\MP.$ Without loss of generality, we may assume that $(k,\ell)\notin \MP$, which implies that $k=m$ and $\ell=n.$ In this case, any relation in degree $h$ of $\MP$ is a relation of the same degree of one of the polyominoes displayed in Figure \[proof1corner\].
*(Figure \[proof1corner\]: the four polyominoes (a)–(d).)*
One may check with a computer algebra system that all polyominoes displayed in Figure \[proof1corner\] are linearly related, hence they do not have any relation in degree $h.$ Actually, one has to check only the shapes (a), (b), and (d) since the polyomino displayed in (c) is isomorphic to that one from (b). Hence, $\beta_{1h}(I_{\MP})=0.$
[*Case 2.*]{} Two vertices of $\MQ$ do not belong to $\MP.$ We may assume that the missing vertices from $\MP$ are $(i,\ell)$ and $(k,\ell)$. Hence, we have $i=1,$ $k=m$, and $\ell=n.$ In this case, any relation in degree $h$ of $\MP$ is a relation of the same degree of one of the polyominoes displayed in Figure \[proof23corner\] (a)–(c). Note that the polyominoes (b) and (c) are isomorphic. One easily checks with the computer that all these polyominoes are linearly related, thus $\beta_{1h}(I_{\MP})=0.$
*(Figure \[proof23corner\]: the four polyominoes (a)–(d).)*
[*Case 3.*]{} Finally, we assume that there are three vertices of $\MQ$ which do not belong to $\MP.$ We may assume that these vertices are $(i,\ell), (k,\ell),$ and $(k,j)$. In this case, any relation in degree $h$ of $\MP$ is a relation of the same degree of the polyomino displayed in Figure \[proof23corner\] (d), which is linearly related as one may easily check with the computer. Therefore, we get again $\beta_{1h}(I_{\MP})=0.$
Polyomino ideals with linear resolution
=======================================
In this final section, we classify all convex polyominoes which have a linear resolution and the convex stack polyominoes which are extremal Gorenstein.
\[linear\] Let $\MP$ be a convex polyomino. Then the following conditions are equivalent:
(a) $I_\MP$ has a linear resolution;

(b) there exists a positive integer $m$ such that $\MP$ is isomorphic to the polyomino with cells $[(i,i), (i+1,i+1)]$, $i=1,\ldots,m-1$.
(b)$\Rightarrow$(a): If the polyomino is of the shape as described in (b), then $I_\MP$ is just the ideal of $2$-minors of a $2\times m$-matrix. It is well-known that the ideal of $2$-minors of such a matrix has a linear resolution. Indeed, the Eagon–Northcott complex, whose chain maps are described by matrices with linear entries, provides a free resolution of the ideal of maximal minors of any matrix of indeterminates, see for example [@Ei Page 600].
(a)$\Rightarrow$(b): We may assume that $[(1,1),(m,n)]$ is the smallest interval containing $V(\MP)$. We may further assume that $m\geq 4$ or $n\geq 4$. The few remaining cases can easily be checked with the computer. So let us assume that $m\geq 4$. Then we have to show that $n=2$. Suppose that $n\geq 3$. We first assume that all the corners $(1,1),(1,n),(m,1)$ and $(m,n)$ belong to $V(\MP)$. Then the polyomino $\MP'$ induced by the columns $1,2,m$ and the rows $1,2,n$ is the polyomino which is displayed on the right of Figure \[extremalstack\]. The ideal $I_{\MP'}$ is a Gorenstein ideal, and hence it does not have a linear resolution. Therefore, by Corollary \[inducedsubpolyomino\], the ideal $I_\MP$ does not have a linear resolution either, a contradiction.
Next assume that one of the corners, say $(1,1)$, is missing. Since $I_\MP$ has a linear resolution, $I_\MP$ is linearly related and hence has a shape as indicated in Figure \[shape\]. Let $i_1$ and $j_1$ be the numbers as shown in Figure \[shape\], and let $\MP'$ be the polyomino of $\MP$ induced by the columns $1,2,3$ and the rows $a,j_1,j_1+1$ where $a=1$ if $i_1=2$ and $a=2$ if $i_1>2, j_1>2$. If $j_1=2$ and $i_1>2,$ we let $\MP'$ be the polyomino induced by the columns $1,i_1,i_1+1$ and the rows $1,2,3.$ In any case, $\MP'$ is isomorphic to the one displayed on the left of Figure \[extremalstack\]. Since $I_{\MP'}$ is again a Gorenstein ideal, we conclude, as in the first case, that $I_\MP$ does not have a linear resolution, a contradiction.
As mentioned in the introduction, polyomino ideals overlap with join-meet ideals of planar lattices. In the next result we show that the join-meet ideal of a lattice has a linear resolution if and only if the lattice, viewed as a polyomino, is of the form described in Theorem \[linear\]. With methods different from those which are used in this paper, the classification of join-meet ideals with linear resolution was first given in [@ERQ Corollary 10].
Let $L$ be a finite distributive lattice [@hibiredsbook pp. 118]. A [*join-irreducible*]{} element of $L$ is an element $\alpha \in L$ which is not the unique minimal element and which possesses the property that $\alpha \neq \beta \vee \gamma$ for all $\beta, \, \gamma \in L \setminus \{\alpha\}$. Let $P$ be the set of join-irreducible elements of $L$. We regard $P$ as a poset (partially ordered set) which inherits its ordering from that of $L$. A subset $J$ of $P$ is called an [*order ideal*]{} of $P$ if $a \in J$, $b \in P$ together with $b \leq a$ imply $b \in J$. In particular, the empty set of $P$ is an order ideal of $P$. Let ${\mathcal J}(P)$ denote the set of order ideals of $P$, ordered by inclusion. It then follows that ${\mathcal J}(P)$ is a distributive lattice. Moreover, Birkhoff’s fundamental structure theorem of finite distributive lattices [@hibiredsbook Proposition 37.13] guarantees that $L$ coincides with ${\mathcal J}(P)$.
Let $L = {\mathcal J}(P)$ be a finite distributive lattice and $K[L] = K[ \, x_{\alpha} : \alpha \in L \, ]$ the polynomial ring in $|L|$ variables over $K$. The [*join-meet ideal*]{} $I_{L}$ of $L$ is the ideal of $K[L]$ which is generated by those binomials $$x_{\alpha}x_{\beta} - x_{\alpha \wedge \beta}x_{\alpha \vee \beta},$$ where $\alpha, \, \beta \in L$ are incomparable in $L$. It is known [@Hibi] that $I_{L}$ is a prime ideal and the quotient ring $K[L]/I_{L}$ is normal and Cohen–Macaulay. Moreover, $K[L]/I_{L}$ is Gorenstein if and only if $P$ is pure. (A finite poset is [*pure*]{} if every maximal chain (totally ordered subset) of $P$ has the same cardinality.)
Now, let $P = \{ \xi_{1}, \ldots, \xi_{d} \}$ be a finite poset, where $i < j$ if $\xi_{i} < \xi_{j}$, and $L = {\mathcal J}(P)$. A [*linear extension*]{} of $P$ is a permutation $\pi = i_{1} \cdots i_{d}$ of $[d] = \{ 1, \ldots, d \}$ such that $j < j'$ if $\xi_{i_{j}} < \xi_{i_{j'}}$. A [*descent*]{} of $\pi = i_{1} \cdots i_{d}$ is an index $j$ with $i_{j} > i_{j+1}$. Let $D(\pi)$ denote the set of descents of $\pi$. The [*$h$-vector*]{} of $L$ is the sequence $h(L) = (h_{0}, h_{1}, \ldots, h_{d-1})$, where $h_{i}$ is the number of linear extensions $\pi$ of $P$ with $|D(\pi)| = i$. Thus, in particular, $h_{0} = 1$. It follows from [@BGS] that the Hilbert series of $K[L]/I_{L}$ is of the form $$\frac{h_{0} + h_{1} \lambda + \cdots + h_{d-1} \lambda^{d-1}}{(1 - \lambda)^{d+1}}.$$
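This description of the $h$-vector can be evaluated directly by listing linear extensions and counting descents. A small Python sketch follows; the poset encoding, the function name, and the naive transitive closure are ours.

```python
from itertools import permutations

def h_vector(d, relations):
    """h-vector of the distributive lattice J(P) for a poset P on 1..d,
    given as a list of pairs (a, b) meaning a < b in P.  The labelling is
    assumed compatible: a < b in P implies a < b as integers."""
    # naive transitive closure of the order relation
    less = set(relations)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(less):
            for (c, e) in list(less):
                if b == c and (a, e) not in less:
                    less.add((a, e))
                    changed = True
    h = [0] * d
    for pi in permutations(range(1, d + 1)):
        pos = {x: i for i, x in enumerate(pi)}
        if all(pos[a] < pos[b] for (a, b) in less):      # linear extension
            descents = sum(1 for i in range(d - 1) if pi[i] > pi[i + 1])
            h[descents] += 1
    return h

print(h_vector(3, [(1, 2), (2, 3)]))   # chain: one linear extension, [1, 0, 0]
print(h_vector(2, []))                 # two-element antichain: [1, 1]
```

The two sample posets illustrate the dichotomy used below: a chain contributes no descents, while already a two-element antichain produces a linear extension with one descent.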
We say that a finite distributive lattice $L = {\mathcal J}(P)$ is [*simple*]{} if $L$ has no elements $\alpha$ and $\beta$ with $\beta < \alpha$ such that each element $\gamma \in L \setminus \{\alpha, \beta\}$ satisfies either $\gamma < \beta$ or $\gamma > \alpha$. In other words, $L$ is simple if and only if $P$ possesses no element $\xi$ for which every $\mu \in P$ satisfies either $\mu \leq \xi$ or $\mu \geq \xi$.
\[hibione\] Let $L = {\mathcal J}(P)$ be a simple finite distributive lattice. Then the join-meet ideal $I_{L}$ has a linear resolution if and only if $L$ is of the form shown in Figure \[plane\].
*(Figure \[plane\]: the distributive lattice of Theorem \[hibione\].)*
Since $I_L$ is generated in degree $2$, it follows that $I_L$ has a linear resolution if and only if the regularity of $K[L]/I_{L}$ is equal to $1$. We may assume that $K$ is infinite. Since $K[L]/I_{L}$ is Cohen–Macaulay, we may divide by a regular sequence of linear forms to obtain a $0$-dimensional $K$-algebra $A$ with $\reg A=\reg K[L]/I_{L}$ whose $h$-vector coincides with that of $K[L]/I_{L}$. Since $\reg A=\max\{i\: A_i\neq 0\}$ (see for example [@Ei Exercise 20.18]), it follows that $I_{L}$ has a linear resolution if and only if the $h$-vector of $L$ is of the form $h(L) = (1, q, 0, \ldots, 0)$, where $q \geq 0$ is an integer. Clearly, if $P$ is a finite poset of Figure \[plane\], then $|D(\pi)| \leq 1$ for each linear extension $\pi$ of $P$. Thus $I_{L}$ has a linear resolution.
Conversely, suppose that $I_{L}$ has a linear resolution. In other words, one has $|D(\pi)| \leq 1$ for each linear extension $\pi$ of $P$. Then $P$ has no three-element clutter. (A [*clutter*]{} of $P$ is a subset $A$ of $P$ with the property that no two elements belonging to $A$ are comparable in $P$.) Since $L = {\mathcal J}(P)$ is simple, it follows that $P$ contains a two-element clutter. Hence Dilworth’s theorem [@Dil] says that $P = C \cup C'$, where $C$ and $C'$ are chains of $P$ with $C \cap C' = \emptyset$. Suppose that $|C| \geq 2$ and $|C'| \geq 2$. Let $\xi \in C$ and $\mu \in C'$ be minimal elements of $P$, and let $\xi' \in C$ and $\mu' \in C'$ be maximal elements of $P$. Since $L = {\mathcal J}(P)$ is simple, it follows that $\xi \neq \mu$ and $\xi' \neq \mu'$. Thus there is a linear extension $\pi$ of $P$ with $|D(\pi)| \geq 2$, so that $I_{L}$ cannot have a linear resolution. Hence either $|C| = 1$ or $|C'| = 1$, as desired.
A Gorenstein ideal can never have a linear resolution, unless it is a principal ideal. However, if the resolution is as linear as possible, then the ideal is called extremal Gorenstein. Since polyomino ideals are generated in degree $2$, we restrict ourselves in the following definition of extremal Gorenstein ideals to graded ideals generated in degree $2$.
Let $S$ be a polynomial ring over a field, and $I\subset S$ a graded ideal which is not principal and is generated in degree $2$. Following [@Sch] we say that $I$ is an [*extremal Gorenstein ideal*]{} if $S/I$ is Gorenstein and if the shifts of the graded minimal free resolution are $$-2-p-1, -2-(p-1), -2-(p-2), \ldots, - 3, -2,$$ where $p$ is the projective dimension of $I$.
With similar arguments as in the proof of Theorem \[hibione\], we see that $I$ is an extremal Gorenstein ideal if and only if $I$ is a Gorenstein ideal and $\reg S/I=2$, and that this is the case if and only if $S/I$ is Cohen–Macaulay and the $h$-vector of $S/I$ is of the form $$h(S/I) = (1, q, 1, 0, \ldots, 0),$$ where $q > 1$ is an integer.
In the following theorem we classify all convex stack polyominoes $\MP$ for which $I_\MP$ is extremal Gorenstein. Convex stack polyominoes have been considered in [@Q]. In that paper Qureshi characterizes those convex stack polyominoes $\MP$ for which $I_\MP$ is Gorenstein.
Let $\MP$ be a polyomino. We may assume that $[(1,1),(m,n)]$ is the smallest interval containing $V(\MP)$. Then $\MP$ is called a [*stack polyomino*]{} if it is column convex and for $i=1,\ldots,m-1$ the cells $[(i,1),(i+1,2)]$ belong to $\MP$. Figure \[stackpolyomino\] displays two stack polyominoes; the right one is convex, the left one is not. The number of cells of the bottom row is called the [*width*]{} of $\MP$ and the number of cells in a maximal column is called the [*height*]{} of $\MP$.
[Figure \[stackpolyomino\]: two stack polyominoes; the right one is convex, the left one is not.]
Let $\MP$ be a convex stack polyomino. Removing the first $k$ bottom rows of cells of $\MP$ we obtain again a convex stack polyomino which we denote by $\MP_k$. We also set $\MP_0=\MP$. Let $h$ be the height of the polyomino, and let $1\leq k_1<k_2 < \cdots <k_r <h$ be the numbers $k$ with the property that $\width(\MP_{k})<\width(\MP_{k-1})$. Furthermore, we set $k_0=0$. For example, for the convex stack polyomino in Figure \[stackpolyomino\] we have $k_1=1$, $k_2=2$ and $k_3=3$.
With the terminology and notation introduced, the characterization of Gorenstein convex stack polyominoes is given in the following theorem.
\[gorensteinstack\] Let $\MP$ be a convex stack polyomino of height $h$. Then the following conditions are equivalent:
1. $I_\MP$ is a Gorenstein ideal.
2. $\width(\MP_{k_i})=\height(\MP_{k_i})$ for $i=0,\ldots,r$.
According to this theorem, the convex stack polyomino displayed in Figure \[stackpolyomino\] is not Gorenstein, because $\width(\MP_{k_0})=5$ and $\height(\MP_{k_0})=4$. An example of a Gorenstein stack polyomino is shown in Figure \[gorenstein\].
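The criterion of Theorem \[gorensteinstack\] and the bookkeeping with the numbers $k_i$ can be automated. The following Python sketch (an illustration, not part of the paper) encodes a convex stack polyomino by the list of its row widths, bottom row first; the row widths used below are read off the figures and are an assumption of this sketch.

```python
def drop_positions(widths):
    """The numbers 1 <= k_1 < ... < k_r < h at which the width drops, where
    widths[k] is the width of P_k (P with its lowest k rows removed, bottom
    row listed first) and h = len(widths) is the height of P."""
    return [k for k in range(1, len(widths)) if widths[k] < widths[k - 1]]

def is_gorenstein(widths):
    """Criterion of Theorem [gorensteinstack]: I_P is Gorenstein iff
    width(P_{k_i}) == height(P_{k_i}) for i = 0, ..., r, with k_0 = 0 so
    that P_{k_0} = P.  Note that height(P_k) == len(widths) - k."""
    h = len(widths)
    return all(widths[k] == h - k for k in [0] + drop_positions(widths))

# Right-hand polyomino of Figure [stackpolyomino], row widths 5, 4, 2, 1:
# here k_1, k_2, k_3 = 1, 2, 3 and width(P_{k_0}) = 5 != 4 = height(P_{k_0}).
print(drop_positions([5, 4, 2, 1]), is_gorenstein([5, 4, 2, 1]))  # [1, 2, 3] False

# Polyomino of Figure [gorenstein], row widths 4, 3, 2, 1: Gorenstein.
print(is_gorenstein([4, 3, 2, 1]))  # True
```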
[Figure \[gorenstein\]: a Gorenstein convex stack polyomino with row widths $4,3,2,1$.]
Combining Theorem \[gorensteinstack\] with the results of Section 2, we obtain
\[stack\] Let $\MP$ be a convex stack polyomino. Then $I_\MP$ is extremal Gorenstein if and only if $\MP$ is isomorphic to one of the polyominoes in Figure \[extremalstack\].
[Figure \[extremalstack\]: the two convex stack polyominoes with extremal Gorenstein polyomino ideal, an L-shaped polyomino with three cells and the $2\times 2$ square.]
It can be easily checked that $I_\MP$ is extremal Gorenstein, if $\MP$ is isomorphic to one of the two polyominoes shown in Figure \[extremalstack\].
Conversely, assume that $I_\MP$ is extremal Gorenstein. Without loss of generality we may assume that $[(1,1),(m,n)]$ is the smallest interval containing $V(\MP)$. Then Theorem \[gorensteinstack\] implies that $m=n$. Suppose first that $V(\MP)= [(1,1),(n,n)]$. Then, by [@ERQ Theorem 4] of Ene, Rauf and Qureshi, it follows that the regularity of $I_\MP$ is equal to $n$. Since $I_\MP$ is extremal Gorenstein, its regularity is equal to $3$. Thus $n=3$.
Next, assume that $V(\MP)$ is properly contained in $[(1,1),(n,n)]$. Since $I_\MP$ is linearly related, Corollary \[corner\] together with Theorem \[gorensteinstack\] implies that the top row of $\MP$ consists of only one cell and that $[(2,1),(n-1,n-1)]\subset V(\MP)$. Let $\MP'$ be the polyomino induced by the rows $2,3,\ldots,n-1$ and the columns $1,2,\ldots, n-1$. Then $\MP'$ is the polyomino with $V(\MP')=[(1,1),(n-2,n-1)]$. By applying again [@ERQ Theorem 4] it follows that $\reg I_{\MP'}=n-2$. Corollary \[inducedsubpolyomino\] then implies that $\reg I_\MP\geq \reg I_{\MP'}= n-2$, and since $\reg I_\MP=3$ we deduce that $n\leq 5$. If $n=5$, then $I_{\MP'}$ is the ideal of $2$-minors of a $3\times 4$-matrix, which has Betti numbers $\beta_{3,5}\neq 0$ and $\beta_{3,6}\neq 0$. Since $\MP'$ is an induced polyomino of $\MP$ and since $I_\MP$ is extremal Gorenstein, Corollary \[inducedsubpolyomino\] yields a contradiction.
For $n=4$ there exist, up to isomorphism, precisely the Gorenstein polyominoes displayed in Figure \[width\]. None of them is extremal Gorenstein, as can easily be checked with CoCoA or Singular. For $n=3$ any Gorenstein polyomino is isomorphic to one of the two polyominoes shown in Figure \[extremalstack\]. This yields the desired conclusion.
[Figure \[width\]: the three Gorenstein polyominoes for $n=4$, up to isomorphism.]
The following theorem shows that besides the two polyominoes listed in Theorem \[stack\] whose polyomino ideal is extremal Gorenstein, there exist precisely two more join-meet ideals having this property.
\[hibitwo\] Let $L = {\mathcal J}(P)$ be a simple finite distributive lattice. Then the join-meet ideal $I_{L}$ is an extremal Gorenstein ideal if and only if $L$ is one of the lattices displayed in Figure \[Flattice\].
[Figure \[Flattice\]: the four simple finite distributive lattices $L$ for which $I_L$ is an extremal Gorenstein ideal.]
Suppose that $L = {\mathcal J}(P)$ is simple and that $K[L]/I_{L}$ is Gorenstein. It then follows that $P$ is pure and there is no element $\xi \in P$ for which every $\mu \in P$ satisfies either $\mu \leq \xi$ or $\mu \geq \xi$. Since $h(L) = (1, q, 1, 0, \ldots, 0)$, no $4$-element clutter is contained in $P$.
Suppose that a three-element clutter $A$ is contained in $P$. If none of the elements belonging to $A$ is a minimal element of $P$, then, since $L = {\mathcal J}(P)$ is simple, there exist at least two minimal elements. Hence there exists a linear extension $\pi$ of $P$ with $|D(\pi)| \geq 3$, a contradiction. Thus at least one of the elements belonging to $A$ is a minimal element of $P$. Similarly, at least one of the elements belonging to $A$ is a maximal element. Suppose first that some element $x \in A$ is both minimal and maximal. Then, since $P$ is pure, one has $P = A$. Now let $A = \{ \xi_{1}, \xi_{2}, \xi_{3} \}$ with $A \neq P$, where $\xi_{1}$ is a minimal element and $\xi_{2}$ is a maximal element. Let $\mu_{1}$ be a maximal element with $\xi_{1} < \mu_{1}$ and $\mu_{2}$ a minimal element with $\mu_{2} < \xi_{2}$. Then neither $\mu_{1}$ nor $\mu_{2}$ belongs to $A$. If $\xi_{3}$ is either minimal or maximal, then there exists a linear extension $\pi$ of $P$ with $|D(\pi)| \geq 3$, a contradiction. Hence $\xi_{3}$ can be neither minimal nor maximal. Then, since $P$ is pure, there exist $\nu_{1}$ with $\xi_{1} < \nu_{1} < \mu_{1}$ and $\nu_{2}$ with $\mu_{2} < \nu_{2} < \xi_{2}$ such that $\{ \nu_{1}, \nu_{2}, \xi_{3}\}$ is a three-element clutter. Hence there exists a linear extension $\pi$ of $P$ with $|D(\pi)| \geq 4$, a contradiction. Consequently, if $P$ contains a three-element clutter $A$, then $P$ must coincide with $A$. Moreover, if $P$ is a three-element clutter, then $h(L) = (1,4,1)$ and $I_{L}$ is an extremal Gorenstein ideal.
Now, suppose that $P$ contains no clutter $A$ with $|A| \geq 3$. Suppose that a chain $C$ with $|C| \geq 3$ is contained in $P$. Let $\xi, \, \xi'$ be the minimal elements of $P$ and $\mu, \, \mu'$ the maximal elements of $P$ with $\xi < \mu$ and $\xi' < \mu'$. Since $L = {\mathcal J}(P)$ is simple and since $P$ is pure, it follows that there exist maximal chains $\xi < \nu_{1} < \cdots < \nu_{r} < \mu$ and $\xi' < \nu'_{1} < \cdots < \nu'_{r} < \mu'$ such that $\nu_{i} \neq \nu'_{i}$ for $1 \leq i \leq r$. Then one has a linear extension $\pi$ of $P$ with $|D(\pi)| = 2 + r \geq 3$, a contradiction. Hence every maximal chain of $P$ has at most two elements. However, if every maximal chain of $P$ has exactly one element, then $h(L) = (1,1)$. Thus $I_{L}$ cannot be an extremal Gorenstein ideal. If every maximal chain of $P$ has exactly two elements, then $P$ is one of the posets displayed in Figure \[poset\]. For each of them the join-meet ideal $I_{L}$ is an extremal Gorenstein ideal.
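The three $h$-vectors recorded in Figure \[poset\] can be confirmed by brute force. Under the standard fact that $h_i(L)$ counts the linear extensions $\pi$ of a naturally labelled $P$ with $|D(\pi)| = i$, a short Python computation (an illustration, not part of the proof) over the permutations of a four-element ground set, with labels $0,1$ on the minimal and $2,3$ on the maximal elements, gives:

```python
from itertools import permutations

def h_vector(n, relations):
    """Descent distribution of the linear extensions of a naturally labelled
    poset on {0, ..., n-1}, read as the h-vector (h_0, h_1, ...); a pair
    (a, b) in relations means a < b in P."""
    counts = {}
    for word in permutations(range(n)):
        pos = {v: i for i, v in enumerate(word)}
        if all(pos[a] < pos[b] for a, b in relations):
            d = sum(1 for i in range(n - 1) if word[i] > word[i + 1])
            counts[d] = counts.get(d, 0) + 1
    return tuple(counts.get(i, 0) for i in range(max(counts) + 1))

# All maximal chains have two elements in each of the three cases.
print(h_vector(4, [(0, 2), (1, 3)]))                   # (1, 4, 1): two disjoint 2-chains
print(h_vector(4, [(0, 2), (1, 3), (0, 3)]))           # (1, 3, 1): one extra relation
print(h_vector(4, [(0, 2), (1, 3), (0, 3), (1, 2)]))   # (1, 2, 1): all four relations
```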
[Figure \[poset\]: the three posets whose maximal chains all have two elements, with $h$-vectors $h(L) = (1,2,1)$, $(1,3,1)$ and $(1,4,1)$.]
A. Björner, A. M. Garsia and R. P. Stanley, *An introduction to Cohen–Macaulay partially ordered sets*, In: “Ordered Sets” (I. Rival, Ed.), Springer Netherlands, 1982, pp. 583–615.

W. Bruns, J. Herzog, *Semigroup rings and simplicial complexes*, J. Pure Appl. Algebra [**122**]{} (1997), 185–208.
CoCoATeam, CoCoA: a system for doing Computations in Commutative Algebra. Available at http://cocoa.dima.unige.it
W. Decker, G.-M. Greuel, G. Pfister, H. Sch[ö]{}nemann, *Singular: A computer algebra system for polynomial computations* (2012). Available at http://www.singular.uni-kl.de
R. P. Dilworth, *A decomposition theorem for partially ordered sets*, Annals of Math. [**51**]{} (1950), 161–166.
D. Eisenbud, *Commutative Algebra with a View Toward Algebraic Geometry*, Graduate Texts in Mathematics **150**, Springer, 1995.
V. Ene, A. A. Qureshi, A. Rauf, *Regularity of join-meet ideals of distributive lattices*, Electron. J. Combin. [**20**]{} (3) (2013), \#P20.
M. Hashimoto, *Determinantal ideals without minimal free resolutions*, Nagoya Math. J. [**118**]{} (1990), 203–216.
J. Herzog, T. Hibi, *Monomial ideals*, Graduate Texts in Mathematics **260**, Springer, 2010.
J. Herzog, H. Srinivasan, *A note on the subadditivity problem for maximal shifts in free resolutions*, to appear in MSRI Proc., arXiv:1303.6214.
T. Hibi, *Algebraic Combinatorics on Convex Polytopes*, Carslaw Publications, Glebe, N.S.W., Australia, 1992.
T. Hibi, *Distributive lattices, affine semigroup rings and algebras with straightening laws*, In: “Commutative Algebra and Combinatorics” (M. Nagata and H. Matsumura, Eds.), Adv. Stud. Pure Math. **11**, North–Holland, Amsterdam, 1987, pp. 93–109.
K. Kurano, *The first syzygies of determinantal ideals*, J. Algebra [**124**]{} (1989), 414–436.
A. Lascoux, *Syzygies des variétés determinantales*, Adv. in Math. [**30**]{} (1978), 202–237.
H. Ohsugi, J. Herzog, T. Hibi, *Combinatorial pure subrings,* Osaka J. Math. [**37**]{} (2000), 745–757.
H. Ohsugi, T. Hibi, *Koszul bipartite graphs*, Adv. in Appl. Math. [**22**]{} (1999), 25–28.
A. Qureshi, *Ideals generated by $2$-minors, collections of cells and stack polyominoes*, J. Algebra [**357**]{} (2012), 279–303.
P. Schenzel, *Uber die freien Auflösungen extremaler Cohen-Macaulay Ringe*, J. Algebra [**64**]{} (1980), 93–101.
D. W. Sharpe, *On certain polynomial ideals defined by matrices*, Quart. J. Math. Oxford (2) [**15**]{} (1964), 155–175.
D. W. Sharpe, *The syzygies and semi-regularity of certain ideals defined by matrices*, Proc. London Math. Soc. [**15**]{} (1965), 645–679.
[^1]: The first author was supported by the grant UEFISCDI, PN-II-ID-PCE- 2011-3-1023.
---
abstract: 'Recently, Chen and Koenig in [@CheKoe] and Iyama and Solberg in [@IyaSol] independently introduced and characterised algebras with dominant dimension coinciding with the Gorenstein dimension and both dimensions being larger than or equal to two. In [@IyaSol], such algebras are named Auslander-Gorenstein algebras. Auslander-Gorenstein algebras generalise the well known class of higher Auslander algebras, where the dominant dimension additionally coincides with the global dimension. In this article we generalise Auslander-Gorenstein algebras further to algebras having the property that the dominant dimension coincides with the finitistic dimension and both dimensions are at least two. We call such algebras finitistic Auslander algebras. As an application we can specialise to reobtain known results about Auslander-Gorenstein algebras and higher Auslander algebras such as the higher Auslander correspondence with a very short proof. We then give several conjectures and classes of examples for finitistic Auslander algebras. For a local Hopf algebra $A$ and an indecomposable non-projective $A$-module $M$, we show that $End_A(A \oplus M)$ is always a finitistic Auslander algebra of dominant dimension two. In particular this shows that $Ext_A^1(M,M)$ is always non-zero, which generalises a result of Tachikawa, who proved that $Ext_A^1(M,M) \neq 0$ for indecomposable non-projective modules $M$ over group algebras of $p$-groups. We furthermore conjecture that every algebra of dominant dimension at least two which has exactly one projective non-injective indecomposable module is a finitistic Auslander algebra. We prove this conjecture for a large class of algebras which includes all representation-finite algebras.'
address: 'Institute of algebra and number theory, University of Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany'
author:
- René Marczinzik
title: Finitistic Auslander algebras
---
Introduction {#introduction .unnumbered}
============
Let an algebra always be a finite dimensional connected algebra over a field $K$, which is not semi-simple. All modules are finite dimensional right modules if nothing is stated otherwise. In this article we generalise Auslander-Gorenstein algebras introduced in [@IyaSol] as algebras having dominant dimension equal to the Gorenstein dimension and both dimensions being larger than or equal to two (in fact Iyama and Solberg also include selfinjective algebras in their definition of Auslander-Gorenstein algebras, but we do not include selfinjective algebras here as the developed theory for selfinjective algebras is trivial). Auslander-Gorenstein algebras contain the important class of higher Auslander algebras, introduced in [@Iya]. Recall that the finitistic dimension of an algebra is defined as the supremum of projective dimensions of all modules having finite projective dimension. It is a major open problem in the representation theory of finite dimensional algebras whether the finitistic dimension is always finite. Note that in case an algebra is Gorenstein, the Gorenstein dimension equals the finitistic dimension (see for example [@Che]). Thus algebras with finitistic dimension equal to the dominant dimension, which is larger than or equal to two, generalise Auslander-Gorenstein algebras. We call such algebras *finitistic Auslander algebras* and deduce some of their properties, including a generalisation of the celebrated higher Auslander correspondence for finite dimensional algebras first proven in [@Iya]. We characterise finitistic Auslander algebras in terms of Gorenstein homological algebra and the category of modules having a certain dominant dimension. 
Let $Dom_d(A)$ denote the full subcategory of modules having dominant dimension at least $d$, $Gp(A)$ the subcategory of Gorenstein projective modules and $Gp_{\infty}(A)$ the full subcategory of modules having infinite Gorenstein projective dimension, $Proj(A)$ the full subcategory of modules which are projective and $Proj_{\infty}(A)$ the full subcategory of modules having infinite projective dimension. We refer to the preliminaries for more information and definitions.
Let $A \cong End_B(M)$ be an algebra of dominant dimension $d \geq 2$, where $M$ is a generator-cogenerator of $mod-B$. The following are equivalent:
1. $A$ is a finitistic Auslander algebra.
2. $Dom_d(A) \subseteq Proj(A) \cup Proj_{\infty}(A)$.
3. $Dom_d(A) \subseteq Gp(A) \cup Gp_{\infty}(A)$.
4. $add(M)-resdim(X)= \infty$ for all $X \in M^{\perp d-2} \setminus add(M)$.
Thus generator-cogenerators $M$ with $add(M)-resdim(X)= \infty$ for all $X \in M^{\perp d-2} \setminus add(M)$ generalise the classical cluster tilting objects introduced in [@Iya] and the precluster tilting objects introduced in [@IyaSol].
In particular, specialising our results to finite Gorenstein or finite global dimension, we obtain quick proofs of some known facts such as the higher Auslander correspondence relating higher Auslander algebras and cluster tilting objects or the classification of Gorenstein projective modules over Auslander-Gorenstein algebras.
The rest of the article is guided by conjectures, which we motivate and prove in special cases.
Let $A$ be a local selfinjective algebra and $M$ an indecomposable non-projective $A$-module. Then $End_A(A \oplus M)$ is a finitistic Auslander algebra of dominant dimension 2.
We prove this conjecture in a special case:
Let $A$ be a local Hopf algebra and $M$ an indecomposable non-projective $A$-module. Then $End_A(A \oplus M)$ is a finitistic Auslander algebra of dominant dimension 2.
This theorem implies that $Ext_A^1(M,M) \neq 0$ for $M$ as in the theorem. This generalises an old theorem of Tachikawa who proved this for group algebras of $p$-groups. We also include an example of a local Hopf algebra that is not isomorphic to a group algebra of a $p$-group to show that we give a proper generalisation of the theorem of Tachikawa. We then give some other examples that motivate this conjecture on local selfinjective algebras and also show that in general the class of finitistic Auslander algebras is much larger than the class of Auslander-Gorenstein algebras. For example we show that for a local selfinjective algebra $A$ with simple module $S$, the algebra $B=End_A(A \oplus S)$ is a standardly stratified finitistic Auslander algebra of dominant dimension 2. This algebra $B$ is an Auslander-Gorenstein algebra iff $A \cong K[x]/(x^n)$ for some $n \geq 2$ and a higher Auslander algebra iff $A \cong K[x]/(x^2)$.
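For the smallest case this can be made explicit (an illustration, not taken from the text): take $A = K[x]/(x^2)$, so that $S = K$ and $\Omega(K) \cong K$. Then $Ext_A^{i}(K,K) \neq 0$ for all $i \geq 1$, and Mueller's formula for the dominant dimension of an endomorphism ring (recalled in the preliminaries) gives

```latex
domdim(End_A(A \oplus K)) = \inf \{ i \geq 1 \mid Ext_A^{i}(A \oplus K, A \oplus K) \neq 0 \} + 1 = 1 + 1 = 2.
```

Indeed, $End_A(A \oplus K)$ is the Auslander algebra of $K[x]/(x^2)$, which has global dimension $2$ and hence is a higher Auslander algebra, in accordance with the last statement above.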
For some of our theory we can include algebras that have finitistic dimension equal to the dominant dimension, even when the dominant dimension is equal to one. We call an algebra whose finitistic dimension equals its non-zero dominant dimension a *weak finitistic Auslander algebra*. Our next conjecture is related to the finitistic dimension conjecture, as we will see later. The truth of the following conjecture would give a large and easy construction of finitistic Auslander algebras.
Let $A$ be an algebra of dominant dimension at least one that has exactly one indecomposable projective non-injective $A$-module. Then $A$ is a weak finitistic Auslander algebra.
Note that the class of algebras with dominant dimension at least one that have exactly one indecomposable projective non-injective $A$-module is very large; for example, it generalises the class of proper almost selfinjective algebras from [@FHK]. Let $P^{< \infty}(A)$ denote the full subcategory of modules having finite projective dimension. Note that in a representation-finite algebra all subcategories are contravariantly finite and thus the next theorem applies to all representation-finite algebras.
Let $A$ be a finite dimensional algebra such that $P^{< \infty}(A)$ is contravariantly finite. Assume furthermore that $A$ has dominant dimension at least one and exactly one indecomposable projective non-injective module. Then $A$ is a weak finitistic Auslander algebra.
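A minimal worked example (standard, and not taken from the text) illustrating the theorem: let $A = KQ$ be the path algebra of the quiver $Q: 1 \rightarrow 2$ over a field $K$. Then $A$ is representation-finite, $P_1 = I_2$ is projective-injective, and $P_2 = S_2$ is the unique indecomposable projective non-injective module. The minimal injective coresolution of the regular module reads

```latex
0 \rightarrow A = P_1 \oplus P_2 \rightarrow P_1 \oplus P_1 \rightarrow S_1 \rightarrow 0,
```

where $P_1 \oplus P_1$ is projective-injective and $S_1 = I_1$ is not projective, so $domdim(A) = 1$. Since $gldim(A) = 1$, we also have $findim(A) = 1 = domdim(A)$, so $A$ is a weak finitistic Auslander algebra, as the theorem predicts.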
The examples and results in this article motivate the following conjecture:
Let $n \geq 2$. There exists a polynomial function $f(n)$ such that the following is true: Every connected non-selfinjective algebra with $n$ simple modules that has dominant dimension at least $f(n)$ is a finitistic Auslander algebra.
We think that one might choose a polynomial function with $n \leq f(n) \leq 2n$ for each $n \geq 2$. In particular, the author is not aware of a non-selfinjective algebra with dominant dimension $\geq 2$ and having two simple modules that is not a finitistic Auslander algebra. The author thanks Jeremy Rickard for allowing him to use the theorem \[Ricktheo\] in this article. This answered a question of the author raised in mathoverflow, see http://mathoverflow.net/questions/257744/finite-addn-resolution. The author thanks Matthew Pressland for helpful discussions on shifted tilting modules that lead to an improvement of \[propocotilting\]. The author thanks Xingting Wang for suggesting the example in \[hopfexample\] and he thanks Zhao Tiwei for useful comments. Many results in this article were tested with the GAP-package QPA and the author is thankful to the QPA-team for their work, see [@QPA].
Preliminaries
=============
Throughout $A$ is a finite dimensional and connected algebra over a field $K$. Furthermore, we assume that $A$ is not semisimple. We always work with finite dimensional right modules, if not stated otherwise. By $mod-A$, we denote the category of finite dimensional right $A$-modules. For background on representation theory of finite dimensional algebras and their homological algebra, we refer to [@ASS] or [@SkoYam]. For a module $M$, $add(M)$ denotes the full subcategory of $mod-A$ consisting of direct summands of $M^n$ for some $n \geq 1$. A module $M$ is called *basic* in case $M \cong M_1 \oplus M_2 \oplus ... \oplus M_n$, where every $M_i$ is indecomposable and $M_i$ is not isomorphic to $M_j$ for $i \neq j$. The *basic version* of a module $N$ is the unique (up to isomorphism) module $M$ such that $add(M)=add(N)$ and such that $M$ is basic. An algebra is called basic in case the regular module is basic. We denote by $S_i=e_iA/e_iJ$, $P_i=e_i A$ and $I_i=D(Ae_i)$ the simple, indecomposable projective and indecomposable injective module, respectively, corresponding to the primitive idempotent $e_i$. The *dominant dimension* domdim($M$) of a module $M$ with a minimal injective resolution $(I_i): 0 \rightarrow M \rightarrow I_0 \rightarrow I_1 \rightarrow ...$ is defined as: domdim($M$):=$\sup \{ n | I_i $ is projective for $i=0,1,...,n \}$+1, if $I_0$ is projective, and domdim($M$):=0, if $I_0$ is not projective. The *codominant dimension* of a module $M$ is defined as the dominant dimension of the $A^{op}$-module $D(M)$. The dominant dimension of a finite dimensional algebra is defined as the dominant dimension of the regular module. It can be shown that the dominant dimension of an algebra always equals the dominant dimension of the opposite algebra, see for example [@Ta].
So domdim($A$)$ \geq 1$ means that the injective hull of the regular module $A$ is projective or equivalently, that there exists an idempotent $e$ such that $eA$ is a minimal faithful projective-injective module. Algebras with dominant dimension larger than or equal to 1 are called QF-3 algebras. For more information on dominant dimensions and QF-3 algebras, we refer to [@Ta]. An algebra $A$ is called *Gorenstein* in case $Gdim(A):=injdim(A)$ equals $projdim(D(A))< \infty$. In this case $Gdim(A)$ is called the *Gorenstein dimension* of $A$ and we say that $A$ has infinite Gorenstein dimension if $injdim(A)= \infty$ or $projdim(D(A))= \infty$. Note that $Gdim(A)= max \{ injdim(e_iA) | e_i$ a primitive idempotent$ \}$ and $domdim(A)= min \{ domdim(e_iA) | e_i $ a primitive idempotent $ \}$. We denote by $Proj(A)$ the full subcategory of projective modules and by $Proj_{\infty}(A)$ the full subcategory of modules of infinite projective dimension. $Dom_d(A)$ denotes the full subcategory of modules having dominant dimension at least $d$. The Morita-Tachikawa correspondence (see for example [@Ta]) says that an algebra $A$ has dominant dimension at least two iff $A \cong End_B(M)$ for some generator-cogenerator $M$ of $mod-B$ and some algebra $B$ that is then isomorphic to $eAe$, when $eA$ is a minimal faithful projective-injective $A$-module. Mueller’s theorem says that in this case the dominant dimension of $A$ equals $\inf \{ i \geq 1 | Ext_B^{i}(M,M) \neq 0 \} +1$, see [@Mue]. We will need the following results that can be viewed as refinements of results of Mueller. The theorem can be found in [@Mar] as theorem 2.2. with detailed references to the article [@APT].
\[ARSmaintheorem\] Let $A$ be an algebra of dominant dimension at least two with minimal faithful projective-injective left module $P$ and minimal faithful projective-injective right module $I$. Let $B=End_A(P)$. We have $B \cong End_A(I)$.
1. $F:=Hom_A(P,-) : Dom_2(A) \rightarrow mod-B$ is an equivalence of categories. $F$ restricts to an equivalence between add($I$) and the category of injective $B$-modules.
2. The functor $G:=Hom_{B}(P,-) : mod-B \rightarrow Dom_2(A)$ is inverse to $F$.
3. For $i \geq 3$, $F$ restricts to an equivalence $F: Dom_i(A) \rightarrow (P)^{\perp i-2}$, where $P$ is viewed as a $B$-module.
An algebra $A$ is called *higher Auslander algebra* in case $\infty>domdim(A)=gldim(A) \geq 2$, see [@Iya], and $A$ is called *Auslander-Gorenstein algebra* in case $\infty>domdim(A)=Gdim(A) \geq 2$, see [@IyaSol] (here we exclude selfinjective algebras, which are not interesting for the theory we develop). A module $M$ is called *Gorenstein projective* in case $Ext^{i}(D(A),\tau(M)) \cong Ext^{i}(M,A) \cong 0$ for all $i \geq 1$. Every non-projective Gorenstein projective module has infinite projective dimension. As in the case of usual projective resolutions, every module $M$ has a resolution by (possibly infinitely generated) Gorenstein projective modules and a corresponding *Gorenstein projective dimension* $Gpd(M)$, see [@Che] for more details. $\Omega^{i}(A-mod)$ denotes the full subcategory of all projective modules and modules which are $i$-th syzygies, $Gp(A)$ denotes the full subcategory of Gorenstein projective modules and $Gp_{\infty}(A)$ denotes the full subcategory of modules having infinite Gorenstein projective dimension. We will need the following proposition:
\[marvil\] Let $A$ be an algebra of dominant dimension $d \geq 1$. Then $\Omega^{i}(A-mod)=Dom_i(A)$ for every $i \leq d$.
See [@MarVil], proposition 4.
For a given subcategory $C$ of $mod-A$, a *minimal right $C$-approximation* of a module $X$ is a right minimal map $f: N \rightarrow X$ with $N \in C$ such that $Hom(L,f)$ is surjective for every $L \in C$. Such minimal right approximations always exist and are unique up to isomorphism in case $C=add(M)$ for some module $M$. In case $C=add(M)$, one defines $\Omega_M^{0}(X):=X$, $\Omega_M^{1}(X)$ as the kernel of such an $f$ and inductively $\Omega_M^{n}(X):=\Omega_M^{n-1}(\Omega_M^{1}(X))$. One then defines $add(M)-resdim(X):= inf \{ n \geq 0 | \Omega_M^{n}(X) \in add(M) \}$. Dually, one can define minimal left $C$-approximations. Given an algebra $A$, which is isomorphic to $End_B(M)$ for some algebra $B$ with generator-cogenerator $M$, one can show that minimal $add(M)$-resolutions in $mod-B$ of a module $X$ correspond to minimal projective resolutions of the module $Hom_B(M,X)$ in $mod-A$. See [@CheKoe] section 2.1. for more information on this. A subcategory $C$ of $mod-A$ is called *contravariantly finite* in case every module $X \in mod-A$ has a minimal right $C$-approximation. A subcategory $C$ is called *resolving* in case it contains the projective modules, is closed under extensions and closed under kernels of surjections. We will need the following result, that can be found in [@AR], 3.9.:
\[ARpropo\] Let $C$ be a resolving contravariantly finite subcategory of $mod-A$. Then every module in $C$ has finite projective dimension bounded by $t$ in case all of the modules $X_i$ have finite projective dimension bounded by $t$, where $f_i: X_i \rightarrow S_i$ are minimal right $C$-approximations of the simple modules $S_i$.
For a module $M$, we define $M^{\perp n}:= \{ X \in mod-A | Ext^{i}(M,X)=0$ for all $i=1,...,n \}$. The *finitistic dimension* of an algebra is defined as $findim(A)= \sup \{ pd(N) | pd(N) < \infty \}$. The *global Gorenstein projective dimension* of an algebra is defined as the supremum of all Gorenstein projective dimensions of modules. It is known that the global Gorenstein projective dimension is finite iff the algebra is Gorenstein, see [@Che] corollary 3.2.6. The *finitistic Gorenstein projective dimension* is defined as $Gfindim(A)= \sup \{ Gpd(N) | Gpd(N) < \infty \}$, and in [@Che], theorem 3.2.7., one finds a quick proof that this always equals the usual finitistic dimension. We call a module $M$ $d$-rigid in case $Ext^{i}(M,M)=0$ for $i=1,2,...,d$. We will also need the following theorem, which can be found as theorem 3.2.5. in [@Che] and can be used as a characterisation of the Gorenstein projective dimension of a module.
\[gordimchara\] Let $M$ be a module. $M$ has finite Gorenstein projective dimension at most $n$ iff in every exact sequence of the form $0 \rightarrow K \rightarrow G_{n-1} \rightarrow ... \rightarrow G_1 \rightarrow G_0 \rightarrow M \rightarrow 0$ with Gorenstein projective modules $G_i$, also the module $K$ is Gorenstein projective.
Recall that the enveloping algebra $A^{e}$ for an arbitrary algebra $A$ is defined as $A^{e}:=A^{op} \otimes_K A$ and an algebra is called $m$-periodic in case the $A^{e}$-module $A$ has $\Omega$-period $m$. Being $m$-periodic implies that the algebra is selfinjective and that every indecomposable non-projective module $M$ is periodic of period at most $m$, that is $\Omega^i(M) \cong M$ for some $i$ with $1 \leq i \leq m$. See chapter IV.11. of [@SkoYam] for this and more on periodic algebras.
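As a small concrete illustration of module periodicity (not from the text), consider the selfinjective Nakayama algebra $A = K[x]/(x^n)$, whose indecomposable modules are $K[x]/(x^j)$ with $1 \leq j \leq n$. The following Python sketch iterates first syzygies and shows that every indecomposable non-projective module has $\Omega$-period at most $2$:

```python
def syzygy(j, n):
    """Over A = K[x]/(x^n), the indecomposable module K[x]/(x^j) (0 < j < n)
    has projective cover A, and its first syzygy is (x^j)/(x^n), which is
    isomorphic to K[x]/(x^(n-j))."""
    assert 0 < j < n
    return n - j

def omega_orbit(j, n, steps):
    """Iterated syzygies of K[x]/(x^j); the orbit alternates j and n - j,
    so the Omega-period of every indecomposable non-projective module is
    at most 2."""
    orbit = [j]
    for _ in range(steps):
        orbit.append(syzygy(orbit[-1], n))
    return orbit

print(omega_orbit(1, 4, 4))  # [1, 3, 1, 3, 1]
print(omega_orbit(2, 4, 4))  # [2, 2, 2, 2, 2]
```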
Finitistic Auslander algebras
=============================
This section introduces finitistic Auslander algebras and gives new relations between dominant dimension and the finitistic dimension.
\[findimlemma\] Let $A$ be an algebra of dominant dimension $d \geq 1$. We have $findim(A)=d+\sup \{ pd(N) | domdim(N) \geq d , pd(N) < \infty \}$.
First note that the finitistic dimension is larger than or equal to the dominant dimension: the exact sequence coming from a minimal injective coresolution of the regular module, $0 \rightarrow A \rightarrow I_0 \rightarrow \cdots \rightarrow I_{d-1} \rightarrow \Omega^{-d}(A) \rightarrow 0$, shows that the module $\Omega^{-d}(A)$ has finite projective dimension $d$. Thus the finitistic dimension is at least $d$. Assume the finitistic dimension $s \geq d$ is attained at the module $X$: $pd(X)=s$. Looking at the minimal projective resolution of $X$, $0 \rightarrow P_s \rightarrow \cdots \rightarrow P_0 \rightarrow X \rightarrow 0$, and using $pd(X)=pd(\Omega^{d}(X))+d$, one immediately obtains the lemma, since $\Omega^{d}(X)$ has dominant dimension at least $d$ by \[marvil\] and its projective dimension equals $s-d$.
Let $P^{< \infty}(A)$ denote the full subcategory of modules having finite projective dimension. It is well known that the finitistic dimension of an algebra $A$ is finite in case $P^{< \infty}(A)$ is contravariantly finite, for example using \[ARpropo\]. Here we show that it is enough that the smaller subcategory $Dom_l(A) \cap P^{< \infty}(A)$ is contravariantly finite for some $l \leq d$, in case the algebra has dominant dimension at least $d$.
Let $A$ be an algebra of positive dominant dimension $d$. Then $A$ has finite finitistic dimension in case the subcategory $Dom_l(A) \cap P^{< \infty}(A)$ is contravariantly finite for some $l \leq d$.
Let $C:=Dom_l(A) \cap P^{< \infty}(A)$ for some $l \leq d$. We want to use \[ARpropo\]. First note that the intersection of two resolving subcategories is resolving and that $Dom_l(A)$ is resolving (see for example [@MarVil] proposition 1), while it is well known that $P^{< \infty}(A)$ is resolving. Thus $C$ is a contravariantly finite resolving subcategory and the $X_i$ (defined as the modules such that $f_i: X_i \rightarrow S_i$ are minimal right $C$-approximations of the simples) have finite projective dimension bounded by some number $t$, since they are contained in $P^{< \infty}(A)$. Thus all modules in $C$ have finite projective dimension bounded by $t$ by \[ARpropo\]. Since every module $N$ with $domdim(N) \geq d$ and $pd(N) < \infty$ lies in $C$, the result follows from \[findimlemma\].
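For later reference, we note the explicit bound that the above proof extracts from \[ARpropo\] and \[findimlemma\]; here the $X_i$ are the approximations of the simple modules as above, and the first equality is just \[findimlemma\]:

```latex
% Every N with domdim(N) >= d and pd(N) < infinity lies in C = Dom_l(A) \cap P^{<\infty}(A),
% so by [ARpropo] its projective dimension is bounded by t := max_i pd(X_i). Hence:
findim(A) \;=\; d + \sup \{ pd(N) \mid domdim(N) \geq d ,\; pd(N) < \infty \} \;\leq\; d + \max_i \, pd(X_i).
```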
The proposition is useful in situations where it is hard to calculate $P^{< \infty}(A)$ but the subcategory $Dom_d(A)$ is representation-finite. For example, for the large class of monomial algebras $A$ of dominant dimension at least two, we have $Dom_2(A)= \Omega^2(mod-A)$ and this category is representation-finite for monomial algebras (see [@Z]). Thus the subcategory $Dom_2(A) \cap P^{< \infty}(A)$ is also representation-finite and we can conclude directly that such algebras have finite finitistic dimension. Moreover, we can calculate the finitistic dimension by calculating approximations of the simple modules in the subcategory $Dom_2(A) \cap P^{< \infty}(A)$, which is usually much smaller than the subcategory $P^{< \infty}(A)$. We remark that we are not aware of an algebra with dominant dimension $d \geq 1$ such that $Dom_l(A) \cap P^{< \infty}(A)$ is not contravariantly finite for any $0 \leq l \leq d$. We formulate this as a question:
Given an algebra $A$ of dominant dimension $d \geq 1$, is $Dom_l(A) \cap P^{< \infty}(A)$ contravariantly finite for some $l$ with $0 \leq l \leq d$?
A positive answer to the previous question would prove the finitistic dimension conjecture for algebras with positive dominant dimension and thus prove the Nakayama conjecture. Now we come to the generalisation of Auslander-Gorenstein algebras:
An algebra with finite dominant dimension $d \geq 2$ is called a *finitistic Auslander algebra* in case its finitistic dimension equals its dominant dimension.
Note that by the Morita-Tachikawa correspondence, every finitistic Auslander algebra $A$ is isomorphic to an algebra of the form $End_B(M)$ for some algebra $B$ with generator-cogenerator $M$, since by assumption $A$ has dominant dimension at least two. We remark that every Auslander-Gorenstein algebra and thus every higher Auslander algebra is a finitistic Auslander algebra, since the finitistic dimension equals the Gorenstein dimension in case the Gorenstein dimension is finite (see for example [@Che]). We will later see many examples of finitistic Auslander algebras of infinite Gorenstein dimension, showing that the class of finitistic Auslander algebras is much bigger than the class of Auslander-Gorenstein algebras.
The next theorem gives another characterisation of finitistic Auslander algebras using the subcategory of modules having dominant dimension at least $d$.
\[maintheorem\] Let $A \cong End_B(M)$ be an algebra of finite dominant dimension $d \geq 2$, where $M$ is a generator-cogenerator. The following are equivalent:
1. $A$ is a finitistic Auslander algebra.
2. $Dom_d(A) \subseteq Proj(A) \cup Proj_{\infty}(A)$.
3. $Dom_d(A) \subseteq Gp(A) \cup Gp_{\infty}(A)$.
4. $add(M)-resdim(X)= \infty$ for all $X \in M^{\perp d-2} \setminus add(M)$
First we show that (1) and (2) are equivalent: just note that by \[findimlemma\], $A$ is a finitistic Auslander algebra iff every module of dominant dimension at least $d$ has infinite projective dimension or is projective. Next we show that (1) implies (3). Assume (1), that is, the finitistic dimension of the algebra equals the dominant dimension $d$, and assume $X \in Dom_d(A)$ has finite and non-zero Gorenstein projective dimension $s$. Splicing a minimal Gorenstein projective resolution of $X$ with a minimal injective coresolution of $X$ yields an exact sequence $0 \rightarrow G_s \rightarrow \cdots \rightarrow G_0 \rightarrow I_0 \rightarrow \cdots \rightarrow I_{d-1} \rightarrow \Omega^{-d}(X) \rightarrow 0$, in which the modules $I_0,...,I_{d-1}$ are projective-injective (and hence Gorenstein projective) since $X$ has dominant dimension at least $d$. This shows that the module $\Omega^{-d}(X)$ has finite Gorenstein projective dimension at least $d+1$: it is at most $s+d$ by the displayed sequence, and if it were at most $d$, then \[gordimchara\] applied to the exact sequence $0 \rightarrow X \rightarrow I_0 \rightarrow \cdots \rightarrow I_{d-1} \rightarrow \Omega^{-d}(X) \rightarrow 0$ would force $X$ to be Gorenstein projective, contradicting $s>0$.
This contradicts the fact that the finitistic Gorenstein projective dimension equals the finitistic dimension, which equals $d$ by (1). This shows that $(1)$ implies $(3)$. Now assume (3), that is $Dom_d(A) \subseteq Gp(A) \cup Gp_{\infty}(A)$. We use \[findimlemma\] and show that $\sup \{ pd(N) | domdim(N) \geq d , pd(N) < \infty \}=0$. But this is obvious, since every non-projective module in $Dom_d(A) \subseteq Gp(A) \cup Gp_{\infty}(A)$ has infinite projective dimension (recall that Gorenstein projective modules are projective or have infinite projective dimension). This shows that $(3)$ implies $(1)$. Now we show that $(4)$ is equivalent to $(1)$: Assume $A$ has dominant dimension $d \geq 2$. By \[findimlemma\], the finitistic dimension equals the dominant dimension iff every non-projective module of dominant dimension at least $d$ has infinite projective dimension. This translates into the condition $add(M)-resdim(X)= \infty$ for all $X \in M^{\perp d-2} \setminus add(M)$, since $add(M)$-resolutions over $B$ correspond to minimal projective resolutions over $A$ and the subcategory $Dom_d(A)$ without the projectives corresponds to $M^{\perp d-2} \setminus add(M)$ by (3) of \[ARSmaintheorem\].
The next lemma was also noted in [@Mar].
\[gpdomdim\] Let $A$ be an algebra of dominant dimension $d \geq 1$, then every Gorenstein projective module has dominant dimension at least $d$.
By definition every Gorenstein projective module is in $\Omega^{i}(mod-A)$ for every $i \geq 1$. Now $\Omega^{d}(mod-A)=Dom_d(A)$ by \[marvil\] and thus $Gp(A) \subseteq \Omega^{d}(mod-A)=Dom_d(A)$.
Note that in the next proposition, (2) contains the higher Auslander correspondence from [@Iya], where generator-cogenerators with the condition $add(M)=M^{\perp d-2}$ are called cluster tilting objects. We give a very quick proof of the higher Auslander correspondence in (2) but refer to [@CheKoe] or [@IyaSol] for the second equivalence in (1).
\[correspondences\] Let $B$ an algebra with a generator-cogenerator $M$ and $A=End_B(M)$. Assume $A$ has finite dominant dimension $d \geq 2$, which by Mueller’s theorem is equivalent to $M$ being $d-2$ rigid and not $d-1$ rigid.
1. $A$ is an Auslander-Gorenstein algebra iff $Dom_d(A)=Gp(A)$ iff $add(M)=add(\tau(\Omega^{d-2}(M \oplus D(A))))$.
2. $A$ is a higher Auslander algebra iff $Dom_d(A)=Proj(A)$ iff $add(M)=M^{\perp d-2}$.
<!-- -->
1. Being an Auslander-Gorenstein algebra is equivalent to being a finitistic Auslander algebra and additionally having finite Gorenstein dimension. An algebra is Gorenstein iff it has finite global Gorenstein dimension and thus iff $Gp_{\infty}(A)$ is empty. But by \[gpdomdim\] $Gp(A) \subseteq Dom_d(A)$, and thus $A$ is an Auslander-Gorenstein algebra iff $Dom_d(A)=Gp(A)$, using (3) of \[maintheorem\]. For the second equivalence, see [@CheKoe] corollary 3.18.
2. Recall that an algebra has finite global dimension iff it is Gorenstein and every Gorenstein projective module is projective, see for example [@Che]. Thus the first equivalence follows by the first equivalence in (1). Now let $Af$ be the minimal faithful projective-injective left $A$-module. Then the functor $(-)f$ is an equivalence between $Proj(A)$ and $add(M)$ and between $Dom_d(A)$ and $M^{\perp d-2}$ by \[ARSmaintheorem\] and this shows the second equivalence.
We explicitly state the case $d=2$ since here finitistic Auslander algebras generalise the well known Auslander algebras.
Let $B$ be an algebra with generator-cogenerator $M$ and $A=End_B(M)$. Then $A$ is a finitistic Auslander algebra with finitistic dimension two iff $Ext^{1}(M,M) \neq 0$ and $add(M)-resdim(X)= \infty$ for all $X \in mod-B \setminus add(M)$.
By Mueller’s theorem $Ext^{1}(M,M) \neq 0$ implies that $A$ has dominant dimension $d=2$ and by \[maintheorem\] (4) the result follows by noting that $M^{\perp 0}=mod-B$.
The next question is motivated by the characterisation $Dom_d(A)=Gp(A)$ for Auslander-Gorenstein algebras.
Let $A$ be a finitistic Auslander algebra. Can the subcategory of Gorenstein projective $A$-modules be explicitly described in terms of other subcategories?
Interlude on Hopf algebras
==========================
Before we construct large classes of finitistic Auslander algebras in the next section, we prove in this section several results about local Hopf algebras that we will use later. We assume that the reader is familiar with the basics on finite dimensional Hopf algebras over a field $K$ as explained for example in the last chapter of the book [@SkoYam]. Recall that we assume that all algebras are non-semisimple unless stated otherwise.
We need several results on Hopf algebras that we quote in the following from the literature.
\[hopflemmas\] Let $A$ be a finite dimensional Hopf algebra and let $M_1$, $M_2$ and $M_3$ be $A$-modules. Then the following hold:
1. $Ext_A^{i}(M_1 \otimes_K M_2 , M_3) \cong Ext_A^{i}(M_1,Hom_K(M_2,M_3))$, for every $i \geq 0$.
2. $Hom_A(M_1,M_2) \cong M_1^{*} \otimes_K M_2$
3. $M_1$ is projective iff $M_1 \otimes_K M_1^{*}$ is projective.
4. $A$ is selfinjective.
<!-- -->
1. See [@SkoYam], theorem 6.4. for $i$=0 and for $i>0$ the proof is as in proposition 3.1.8. (ii) of [@Ben].
2. See [@SkoYam], chapter VI. exercise 24.
3. See [@SkoYam], chapter VI. exercise 27.
4. See [@SkoYam], theorem 3.6.
\[extcrit\] The following are equivalent for two modules $X,Y$ over a local Hopf algebra $A$:
1. $Ext_A^{1}(X,Y)=0$
2. $Hom_K(X,Y)$ is projective.
Using (1) and (2) of \[hopflemmas\] we have: $Ext_A^{1}(X,Y) \cong Ext_A^{1}(K \otimes_K X , Y) \cong Ext_A^{1}(K,Hom_K(X,Y)).$ Now since $K$ is the unique simple module of the local selfinjective algebra $A$, we have $Ext_A^{1}(K,Hom_K(X,Y))=0$ iff $Hom_K(X,Y)$ is projective.
\[tachtheo\] For a local finite dimensional Hopf algebra $A$ we have $Ext_A^1(M,M) \neq 0$ for each non-projective module $M$.
By \[extcrit\], $Ext^{1}(M,M)=0$ for a module $M$ iff $Hom_K(M,M) \cong M^{*} \otimes_K M$ is projective. By \[hopflemmas\] (3) this is the case iff $M$ is projective.
While the proof of the previous theorem might appear easy and short, recall that we had to use several non-trivial theorems from \[hopflemmas\]. In theorem 8.6. of [@Ta], Tachikawa proved the previous theorem for the special case of group algebras of $p$-groups. The next corollary is an immediate consequence of Mueller’s theorem.
Let $B=End_A(M)$, where $A$ is a local Hopf algebra and $M$ a non-projective generator of $mod-A$, then $B$ has dominant dimension equal to two.
The next proposition shows that \[extcrit\] can also be used to show that $Ext^{1}(X,Y) \neq 0$ for every indecomposable non-projective modules $X,Y$ in certain local Hopf algebras. First we need a lemma:
\[kxlemma\] Let $A=K[x]/(x^n)$ for some $n \geq 2$. Then $Ext_A^{1}(X,Y) \neq 0$ for arbitrary indecomposable non-projective modules $X,Y$.
This is elementary to check. See for example the preliminaries of [@ChMar] for the calculation of $Ext^1$ in symmetric Nakayama algebras.
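For the convenience of the reader, here is a minimal sketch of the calculation behind \[kxlemma\] (the indexing of the indecomposables is our own convention):

```latex
% The indecomposable A-modules for A = K[x]/(x^n) are X_i := A/(x^i), 1 <= i <= n,
% and X_i is non-projective exactly for 1 <= i <= n-1. A minimal projective resolution
% of X_i is periodic:
\cdots \rightarrow A \xrightarrow{\,\cdot x^{n-i}\,} A \xrightarrow{\,\cdot x^{i}\,} A \rightarrow X_i \rightarrow 0.
% Applying Hom_A(-,X_j) and taking homology at the relevant spot gives
\dim_K Ext_A^{1}(X_i,X_j) \;=\; \min(j,\,n-i) \,-\, \max(j-i,\,0),
% which a short case check (j <= i, then i < j <= n-i, then j > n-i) shows to be
% positive whenever 1 <= i,j <= n-1.
```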
We also need the following theorem, see [@Ch]:
\[chtheorem\] Let $K$ be a field of characteristic $p$ and $G$ be a finite group such that $p$ divides the group order. Then a $KG$-module is projective iff it is free on restriction to all elementary abelian $p$-subgroups of $G$.
The next proposition is a slight generalisation of an example by Jeremy Rickard given in http://mathoverflow.net/questions/259344/classification-of-certain-selfinjective-algebras.
Let $K$ be a field of characteristic $p$ and $G$ be a $p$-group having only one non-trivial elementary abelian subgroup $Z$. Let $A=KG$. Then $Ext_A^{1}(X,Y) \neq 0$ for arbitrary indecomposable non-projective modules $X,Y$.
By \[extcrit\], we have to show that $Hom_K(X,Y)$ is never projective for the given $X,Y$. Since $X$ and $Y$ are assumed to be non-projective, their restrictions to $KZ$ are not projective by \[chtheorem\]. Thus as $KZ$-modules $X \cong M_1 \oplus N_1$ and $Y \cong M_2 \oplus N_2$ for some indecomposable non-projective $KZ$-modules $M_1$ and $M_2$. Now note that $KZ$ is isomorphic to an algebra of the form $K[x]/(x^n)$ for some $n \geq 2$. Using \[kxlemma\], one sees that $Hom_K(X,Y)$ is not projective as a $KZ$-module and thus $Hom_K(X,Y)$ is not projective as a $KG$-module. This gives the proposition.
The previous proposition applies for example to the quaternion group over a field of characteristic 2.
To show that \[tachtheo\] is really a generalisation of the result of Tachikawa, one has to find a finite dimensional local Hopf algebra that is not isomorphic to a group algebra. Xingting Wang suggested trying example (A5) from theorem 1.1 in the paper [@NWW]. Here we sketch the proof that it is not isomorphic to a group algebra by calculating a quiver with relations presentation of the algebra and then calculating the beginning of a minimal projective resolution of the simple module. The reader can skip this example as it will not be used later.
\[hopfexample\] Fix an algebraically closed field $K$ of characteristic 2. The algebra $A$ is defined as $K<x,y,z>/(x^2,y^2,xy-yx,xz-zy,yz-zy-x,z^2-xy)$ (for the Hopf algebra structure see [@NWW]). Note first that $A$ is local of dimension 8 over the field $K$ with basis $\{1,x,y,z,z^2,xz,yz,zy\}$ and the Jacobson radical $J$ is the ideal generated by $x,y$ and $z$. Now we have to calculate the second power $J^2$ of the Jacobson radical: It contains $x$, since $x=yz-zy \in J^2$. Since $A$ is not commutative, its quiver cannot have just one loop. Thus the dimension of $J^2$ is at most 5. It is clear that $J^2$ contains every basis element except possibly $y$ and $z$. Thus, since the dimension of $J^2$ is at most 5, $J^2$ has basis $x,z^2,xz,yz,zy$. We will now show that $A$ is isomorphic to $K<a,b>/(a^2,b^2-aba)$. Clearly $K<a,b>$ maps onto $A$ by a map $f$ with $f(a)=y$ and $f(b)=z$. Note that $(a^2,b^2-aba)$ is contained in the kernel of $f$, since $y^2=0$ and $z^2-yzy=z^2-(zy+x)y=z^2-xy=0$. Thus there is a surjective map $\hat{f}:K<a,b>/(a^2,b^2-aba) \rightarrow A$ induced by $f$. But since $K<a,b>/(a^2,b^2-aba)$ also has dimension 8, $\hat{f}$ is in fact an isomorphism. Now we show that $A=K<a,b>/(a^2,b^2-aba)$ (we identify $A$ in the following with $K<a,b>/(a^2,b^2-aba)$) is not isomorphic to a group algebra. Since $A$ has dimension 8 and is not commutative, there are only two candidate group algebras that could be isomorphic to $A$: the group algebra of the dihedral group of order 8 and the group algebra of the quaternion group. Let $S_1$ be the simple $A$-module; then it is elementary to check that $\Omega^{4}(S_1)$ has dimension 9. The dimension of $\Omega^{4}(S_1)$ is the crucial information that we need to distinguish $A$ from group algebras of dimension 8 over the field. Let $B$ be the group algebra of the dihedral group of order 8 over $K$ with simple module $S_2$. 
Then by [@Ben2] chapter 5.13., $\Omega^{4}(S_2)$ has dimension 17 and thus $A$ is not isomorphic to $B$. Let $C$ be the group algebra of the quaternion group of order 8 over $K$ with simple module $S_3$. Then this algebra is 4-periodic (see for example [@Erd]) and thus $\Omega^{4}(S_3) \cong S_3$ and $A$ is not isomorphic to $C$. This shows that $A$ is not isomorphic to any group algebra.
While looking at local Hopf algebras, the author noticed that there seems to be no known example of a local Hopf algebra that is not a symmetric algebra. We pose this as a question:
Is every local Hopf algebra a symmetric algebra?
Examples of finitistic Auslander algebras
=========================================
In this section we construct several examples of finitistic Auslander algebras using different methods.
Finitistic Auslander algebras from local Hopf algebras
------------------------------------------------------
First we use the results on local Hopf algebras from the previous section to construct finitistic Auslander algebras. The next lemma is due to Jeremy Rickard.
Let $A$ be a local selfinjective algebra, $M$ an indecomposable module and $\alpha : M^m \rightarrow M^n$ with $n,m >0$ a map between direct sums of copies of $M$ all of whose components are radical maps. Let $F$ be an additive functor such that $F( \alpha)$ is injective. Then $F(M)=0$.
We can assume that $n$ is a multiple of $m$ by possibly adding extra summands to $M^n$. Write $n=dm$ for some integer $d$. We then have maps $M^{d^s m} \rightarrow M^{d^{s+1} m}$ for $s \geq 0$ by taking direct sums of the map $\alpha$. This gives a sequence of maps $M^m \rightarrow M^{dm} \rightarrow M^{d^2 m} \rightarrow ... \rightarrow M^{d^k m}$, all of which become injective when applying $F$, since $F$ is additive. But choosing $k$ greater than the Loewy length of $End_A(M)$, the composition of this sequence of maps is zero, while $F$ applied to the composition is injective as a composition of injective maps. Thus $F(M^m)=0$, giving also $F(M)=0$ using that $F$ is additive.
The next theorem is due to Jeremy Rickard.
\[Ricktheo\] Let $A$ be a local selfinjective algebra and let $M$ be an indecomposable nonprojective module with $Ext^{1}(M,M) \neq 0$. Then $B:=End_A(A \oplus M)$ is a finitistic Auslander algebra of finitistic dimension 2.
The condition $Ext^{1}(M,M) \neq 0$ gives us that $B$ has dominant dimension equal to two. By \[maintheorem\], we have to show that every non-projective module of dominant dimension at least two has infinite projective dimension. Let $N:=A \oplus M$. This translates into the condition that every $A$-module not in $add(N)$ has infinite $add(N)$-resolution dimension. Assume there is a module with a finite $add(N)$-resolution that is not in $add(N)$. Passing to the end of a minimal $add(N)$-resolution, there is then a short exact sequence as follows, where the maps are minimal right $add(N)$-approximations: $$0 \rightarrow N_1 \rightarrow N_0 \rightarrow U \rightarrow 0,$$ with $N_0,N_1 \in add(N)$ and $N_1$ being a direct sum of copies of $M$ because of the minimality. Now applying the functor $Hom(M,-)$ to this short exact sequence we obtain a long exact sequence of the form: $$0 \rightarrow Hom(M,N_1) \rightarrow Hom(M,N_0) \rightarrow Hom(M,U) \rightarrow Ext^1(M,N_1) \rightarrow Ext^1(M,N_0) \rightarrow \cdots .$$
In this long exact sequence the map $Ext^{1}(M,N_1) \rightarrow Ext^{1}(M,N_0)$ has to be injective, since the right map in the short exact sequence is assumed to be a minimal $add(N)$-approximation, which gives that $Hom(M,N_0) \rightarrow Hom(M,U)$ is surjective. Now, after removing free summands from the left map in the short exact sequence (which does not change the $Ext^1$-terms, since $Ext^1(M,-)$ vanishes on projective modules), we obtain a map $\alpha: N_1 \rightarrow N_0'$ between direct sums of copies of $M$ with the property that all components of this map are radical maps. Now $Ext^{1}(M,-)$ is an additive functor sending $\alpha$ to an injective map. By the previous lemma this is only possible if $Ext^{1}(M,M)=0$. This contradicts our assumptions and thus there is no module outside of $add(N)$ with a finite $add(N)$-resolution.
Combining the previous result and \[tachtheo\] we obtain our main result in this section:
Let $A$ be a local Hopf algebra (for example a group algebra of a $p$-group over a field of characteristic $p$) with a non-projective indecomposable module $M$. Then $B:=End_A(A \oplus M)$ is a finitistic Auslander algebra of finitistic dimension 2.
By \[tachtheo\] we have $Ext_A^1(M,M) \neq 0$ and thus the previous theorem \[Ricktheo\] applies to give the result.
The previous theorem motivates the following conjecture:
Let $A$ be a local selfinjective algebra and $M$ a non-projective indecomposable $A$-module. Then the algebra $B:=End_A(A \oplus M)$ is a finitistic Auslander algebra of finitistic dimension 2.
By \[Ricktheo\], the conjecture can equivalently be stated as: $Ext_A^1(M,M) \neq 0$ for any non-projective indecomposable module $M$ over a local selfinjective algebra $A$. We refer to [@Mar4] for more on $Ext_A^1(M,M)$ for selfinjective local algebras $A$.
Finitistic Auslander algebras from standardly stratified algebras
-----------------------------------------------------------------
Standardly stratified algebras are a well studied class of algebras that generalise quasi-hereditary algebras. For the basics on standardly stratified algebras we refer to [@Rei] and for relations to Auslander-Gorenstein algebras we refer to [@Mar3]. Recall that a quasi-hereditary algebra is a standardly stratified algebra with finite global dimension. This section gives the construction of standardly stratified finitistic Auslander algebras for local selfinjective algebras and special choices of modules. Recall the following result:
\[standstratfindimbound\] (see [@AHLU]) Let $A$ be a standardly stratified algebra with $n$ simple modules. Then the finitistic dimension of $A$ is bounded by $2n-2$.
Using this theorem, we can give several examples of finitistic Auslander algebras inside the class of standardly stratified algebras. We need the following result, which is the main result of [@CheDl]:
\[chedlabtheo\] Let $A$ be a local, commutative selfinjective algebra over an algebraically closed field. Let $\mathcal{X}=(A=X(1),X(2),...,X(n))$ be a sequence of local-colocal modules (meaning that all modules have simple socle and top and therefore can be viewed as ideals of $A$) with $X(i) \subseteq X(j)$ implying $j \leq i$. Let $X= \bigoplus\limits_{i=1}^{n}{X(i)}$ and $B=End_A(X)$. Then $B$ is properly stratified with a duality iff the following two conditions are satisfied:

1. $X(i) \cap X(j)$ is generated by suitable $X(t)$ of $\mathcal{X}$ for any $1 \leq i,j \leq n$.

2. $X(j) \cap \sum\limits_{t=j+1}^{n}{X(t)}=\sum\limits_{t=j+1}^{n}{X(j) \cap X(t)}$ for any $1 \leq j \leq n$.
\[commtheorem\] Let $A$ be a commutative selfinjective algebra with an ideal $I$ with $D(I) \cong I$. Then $B:=End_A(A \oplus I)$ is a finitistic Auslander algebra with finitistic dimension equal to 2 that is standardly stratified.
First note that $I$, being an ideal, has simple socle, because $A$ is selfinjective and thus has simple socle. Now $top(I) \cong top(D(I)) \cong D(soc(I))$ is again simple. Thus \[chedlabtheo\] applies to give that $B$ is standardly stratified. Since $B$ has two simple modules, the finitistic dimension of $B$ is bounded by 2 by \[standstratfindimbound\]. But since $B$ is the endomorphism ring of a generator-cogenerator, its dominant dimension is at least two. Since the dominant dimension is bounded by the finitistic dimension for non-selfinjective algebras, $B$ is a finitistic Auslander algebra with finitistic dimension equal to two.
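As a concrete illustration (this example is our own and not taken from [@CheDl]), every proper non-zero ideal of $K[x]/(x^n)$ satisfies the hypothesis of the theorem:

```latex
% Let A = K[x]/(x^n) with n >= 2 and I = (x^i) for some 1 <= i <= n-1. As an A-module
% one has I \cong A/(x^{n-i}), and since the duality D preserves dimension and
% indecomposability and the indecomposable A-modules are the A/(x^j):
I \;\cong\; A/(x^{n-i}) \;\cong\; D\big(A/(x^{n-i})\big) \;\cong\; D(I).
% Hence B = End_A(A \oplus (x^i)) is a standardly stratified finitistic Auslander
% algebra with findim(B) = domdim(B) = 2 by the theorem above.
```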
The next result illustrates that being an Auslander-Gorenstein algebra might be extremely rare compared to the more general concept of being a finitistic Auslander algebra.
\[theoremexamples\] Let $A$ be a $K$-algebra.
1. Let $A$ be a commutative selfinjective algebra with enveloping algebra $A^{e}=A \otimes_K A$. Then $B:=End_{A^{e}}(A^{e} \oplus A)$ is a finitistic Auslander algebra of finitistic dimension equal to two that is standardly stratified. It is an Auslander-Gorenstein algebra iff $A$ is a 2-periodic algebra iff $A \cong K[x]/(x^n)$ for some $n \geq 2$. It is never a higher Auslander algebra.
2. Let $A$ be a selfinjective local algebra with simple module $S$. Then $End_A(A \oplus S)$ is a finitistic Auslander algebra with finitistic dimension equal to two that is standardly stratified. It is an Auslander-Gorenstein algebra iff $A \cong K[x]/(x^n)$ for some $n \geq 2$ and it is a higher Auslander algebra iff $A \cong K[x]/(x^2)$.
<!-- -->
1. Note that being commutative selfinjective implies that $A$ is even symmetric and thus $D(A) \cong A$ as $A^{e}$-modules. Then \[commtheorem\] applies to show that $B$ is a finitistic Auslander algebra with finitistic dimension equal to two. Now by (1) of \[correspondences\], $B$ is an Auslander-Gorenstein algebra iff $\tau(A) \cong A$ as $A^{e}$-modules. Since $A^{e}$ is symmetric, $\tau \cong \Omega^{2}$, and thus $B$ is an Auslander-Gorenstein algebra iff $A$ is 2-periodic iff $A \cong K[x]/(x^n)$ for some $n \geq 2$ by corollary 2.10. of [@Sko]. What is left is to determine when the algebra $B$ has finite global dimension in case $A \cong K[x]/(x^n)$. But $B$ has finite global dimension iff its global dimension equals its dominant dimension two, iff it is an Auslander algebra, iff $A^{e}$ has $A^{e}$ and $A$ as its only indecomposable modules. This is never the case, since $A^{e}$ has at least two loops in its quiver and thus is never representation-finite.
2. In theorem 1.1. of [@We], it was proven that $End_A(A \oplus S)$ is always standardly stratified in case $A$ is a selfinjective local algebra with simple module $S$. One has $Ext^{1}(S,S) \neq 0$ since the algebra is local. Thus the dominant dimension is equal to two and, by \[standstratfindimbound\], so is the finitistic dimension. Again by corollary 2.10. of [@Sko], the module $S$ is 2-periodic iff $A \cong K[x]/(x^n)$, and the global dimension is equal to two iff it is finite iff $End_A(A \oplus S)$ is an Auslander algebra iff $A \cong K[x]/(x^2)$.
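In the smallest case of (2), the algebra can be made completely explicit. The following quiver presentation of the Auslander algebra of $K[x]/(x^2)$ is well known; we include it as a sketch for the convenience of the reader:

```latex
% Let A = K[x]/(x^2) with simple module S. The quiver Q of End_A(A \oplus S) has two
% vertices, with vertex 1 corresponding to A and vertex 2 to S, and arrows
%   \alpha : 1 \to 2  (induced by the projection A -> S),
%   \beta  : 2 \to 1  (induced by the socle inclusion S -> A).
% The only relation is that the composite S -> A -> S vanishes (as soc(A) = rad(A)):
End_A(A \oplus S) \;\cong\; KQ/(\alpha\beta), \qquad \alpha\beta := (S \hookrightarrow A \twoheadrightarrow S).
% This is a 5-dimensional algebra with global dimension = dominant dimension = 2.
```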
We give another example, where high dominant dimension automatically leads to being a finitistic Auslander algebra and the bound $2n-2$ for the finitistic dimension of standardly stratified algebras is attained for an arbitrary $n \geq 2$.
Let $A$ be a representation-finite block of a Schur algebra with $n$ simple modules. Then $A$ has dominant dimension equal to $2n-2$. This was noted and proven in [@ChMar] and [@Mar]. By \[standstratfindimbound\] and the fact that the dominant dimension is bounded by the finitistic dimension, it is a finitistic Auslander algebra and even a higher Auslander algebra since it has finite global dimension, being a block of a Schur algebra.
The next proposition shows how to construct a lot of examples of finitistic Auslander algebras from known ones:
Let $A$ be a finitistic Auslander algebra of finitistic dimension equal to $m \geq 2$ and $B$ a selfinjective algebra. Then $A \otimes_K B$ is a finitistic Auslander algebra of finitistic dimension equal to $m$.
By [@ERZ] theorem 16, the finitistic dimension of the tensor product of two algebras equals the sum of the finitistic dimensions of the two algebras. Thus $A \otimes_K B$ has finitistic dimension equal to $m$, since $B$ has finitistic dimension zero as a selfinjective algebra. Now the dominant dimension of the tensor product of two algebras equals the minimum of the dominant dimension of the two algebras by [@Mue] lemma 6. Thus $A \otimes_K B$ also has dominant dimension equal to $m$, since $B$ has infinite dominant dimension as a selfinjective algebra.
Algebras with exactly one indecomposable projective non-injective module
========================================================================
We assume that all algebras in this section are basic. This is no restriction since every algebra is Morita equivalent to its basic algebra and all results and notions of this section are invariant under Morita equivalence. For a finite dimensional algebra $A$, let $\nu_A:=DHom_A(-,A)$ denote the Nakayama functor. Following [@FHK], an algebra $A$ is called *almost selfinjective* in case there is at most one indecomposable projective non-injective module and the injective envelope $I(A)$ of the regular module $A$ has the property that $\nu_A^i(I(A))$ is projective for all $i \geq 0$. We define a *generalised almost selfinjective algebra* as an algebra with dominant dimension at least one that has at most one indecomposable projective non-injective module. These algebras generalise the almost selfinjective algebras from [@FHK], see lemma 2.6. in [@FHK]. Again we are mainly interested in the non-selfinjective generalised almost selfinjective algebras, which we call *NGAS algebras* for short in the following. We first show how to construct large classes of NGAS algebras. We recall the construction of SGC-extension algebras from [@CIM]:
Let $A$ be a non-selfinjective algebra. For $m \geq 1$ define the *$m$-th SGC-extension algebra* of $A$ as $A^{(m+1)}:=End_{A^{(m)}}(N_m)$, where $A^{(0)}=A$ and $N_m$ denotes the basic version of the module $A^{(m)} \oplus D(A^{(m)})$.
The next proposition provides us with the construction of large classes of NGAS algebras:
1. Let $A$ be an algebra with exactly one indecomposable projective non-injective module. Then the $m$-th SGC-extension algebra of $A$ is an NGAS algebra for each $m \geq 1$.
2. Let $A$ be a selfinjective algebra and $M$ an indecomposable non-projective $A$-module. Then $B:=End_A(A \oplus M)$ is an NGAS algebra.
<!-- -->
1. Let $A$ be an algebra with exactly one indecomposable projective non-injective module and $B$ the SGC-extension of $A$. Assume $A$ has $n$ simple modules. Recall from \[ARSmaintheorem\] (1) that the number of indecomposable projective-injective $B$-modules equals the number of indecomposable injective $A$-modules, which is $n$. The number of simple $B$-modules equals $n+1$ by the assumption on $A$. Since $B$ is isomorphic to the endomorphism ring of a generator-cogenerator, $B$ has dominant dimension at least two and thus is an NGAS algebra.
2. As before, from \[ARSmaintheorem\] (1) it follows that $B$ has $n+1$ simple modules and $n$ indecomposable projective-injective modules and dominant dimension at least two when $A$ has $n$ simple modules.
\(1) of the previous proposition applies for example to every local non-selfinjective algebra $A$ to provide infinitely many NGAS algebras $A^{(m)}$ for $m \geq 1$. The previous proposition also shows that NGAS algebras are closed under SGC-extensions.

Let $A$ be an algebra of dominant dimension $d \geq 1$ with minimal faithful projective-injective module $eA$. Following [@PS], we call the basic version of a module of the form $eA \oplus \Omega^{-i}(A)$ a *shifted tilting module* for each $0 \leq i \leq d$. That those modules are really tilting modules can be found in [@PS]. Dually, the basic versions of the modules of the form $eA \oplus \Omega^i(D(A))$ are called *coshifted cotilting modules* for $0 \leq i \leq d$.

For our next results, we recall several results from [@BS]. Let $M$ be a module and $$0 \rightarrow A \xrightarrow{f_1} M_1 \xrightarrow{f_2} \cdots \xrightarrow{f_n} M_n \rightarrow \cdots$$ a complex with $K_i = coker(f_i)$ for $i \geq 1$ and $K_0=A$ such that each map $K_i \rightarrow M_{i+1}$ is a minimal left $add(M)$-approximation. Let $\eta_n$ be the truncated complex ending in $M_n$ obtained from the above complex. Then $M$ is said to have *faithful dimension* $fadim(M)$ equal to $n$ if $\eta_n$ is exact, but $\eta_{n+1}$ is not. For a module $M$, let $\delta(M)$ denote the number of indecomposable non-isomorphic direct summands of $M$. Recall that an *almost complete cotilting module* $M$ over an algebra $A$ is a direct summand of a cotilting module $N$ such that $\delta(M)=\delta(A)-1$. A *complement* of an almost complete cotilting module $M$ is a module $X$ such that $M \oplus X$ is a cotilting module.
\[BStheorem\] Let $M$ be an $A$-module and $C:=End_A(M)$. Note that we can view $M$ as a left $C$-module.
1. $M$ has faithful dimension at least two if and only if $A \cong End_C(M)$.
2. In case $M$ has faithful dimension at least two, we have $fadim(M)=n$ if and only if $Ext_C^i(M,M)=0$ for $i=1,...,n-2$ and $Ext_C^{n-1}(M,M) \neq 0$.
3. Let $M$ be an almost complete cotilting module. Then $M$ has exactly $n+1$ indecomposable complements if and only if $fadim(M)=n$.
4. Let $M$ be an almost complete cotilting module that is faithful. Then $M$ has at least two complements.
<!-- -->
1. See proposition 2.1 of [@BS].
2. See proposition 2.2 of [@BS].
3. See theorem 3.6 of [@BS].
4. See theorem 3.1 of [@BS].
The next proposition gives a nice characterisation of NASG algebras using such shifted tilting modules and coshifted cotilting modules.
\[propocotilting\] Let $A$ be an algebra with finite dominant dimension $n \geq 1$. Then the following are equivalent:
1. $A$ is a NASG algebra.
2. $A$ has exactly $n+1$ basic cotilting modules.
3. $A$ has exactly $n+1$ basic tilting modules.
All basic cotilting modules are in this case isomorphic to the basic version of $\Omega^{i}(D(A)) \oplus eA$ and all basic tilting modules are in this case isomorphic to the basic version of $\Omega^{-i}(A) \oplus eA$ for some $0 \leq i \leq n$.
We show the equivalence of (1) and (2). The equivalence of (1) and (3) is shown similarly. Assume first that $A$ has exactly one indecomposable projective non-injective module and let $eA$ be the minimal faithful projective-injective $A$-module. By the definition of the faithful dimension, the faithful dimension of $eA$ is equal to the dominant dimension of $A$. Since $eA$ is projective-injective, every cotilting module has $eA$ as a direct summand and we have $\delta(eA)=\delta(A)-1$ by assumption on $A$. Then it follows from (3) of \[BStheorem\] that $eA$ has exactly $n+1$ complements. Now the coshifted cotilting modules $\Omega^{i}(D(A)) \oplus eA$ are $n+1$ non-isomorphic cotilting modules and thus the proposition follows.
Assume now that $A$ has exactly $n+1$ basic cotilting modules. Since $A$ has $n+1$ coshifted cotilting modules, every cotilting module must be a coshifted cotilting module. We show that this forces $A$ to have exactly one indecomposable projective non-injective module. Assume otherwise, that is, $A$ has more than one indecomposable projective non-injective module. Since $A$ is assumed to have positive dominant dimension, there exists a minimal faithful projective-injective module $eA$. Then $A$ also has more than one indecomposable injective non-projective module; let $I_1$ and $I_2$ be two non-isomorphic indecomposable injective non-projective modules. Let $M:=D(A)/I_1$ and note that $M$ has $eA$ as a direct summand. Thus $M$ is faithful and an almost complete cotilting module, since $M \oplus I_1=D(A)$ is a cotilting module. By \[BStheorem\] (4), there exists at least one indecomposable module $X$ not isomorphic to $I_1$ such that $T:=M \oplus X$ is a cotilting module. We show that $T$ cannot be a coshifted cotilting module. Let $\delta(A)=r$. Since we assume that $A$ is not a NASG algebra, there are $s$ indecomposable projective-injective modules with $s \leq r-2$. Note that $T$ is not isomorphic to $D(A)$ and that $T$ has $r-1$ indecomposable injective direct summands, while every coshifted cotilting module that is not injective has exactly $s \leq r-2$ indecomposable injective direct summands. Thus $T$ is a cotilting module that is not isomorphic to any of the coshifted cotilting modules, and $A$ therefore has more than $n+1$ basic cotilting modules. This is a contradiction, and thus $A$ has to be a NASG algebra.
We define the *finitistic injective dimension* of an algebra as the supremum of injective dimensions of modules having finite injective dimension. Note that in general the finitistic dimension does not coincide with the finitistic injective dimension. We call an algebra a *weak finitistic Auslander algebra* in case it has positive dominant dimension equal to the finitistic dimension. We call an algebra a *weak co-finitistic Auslander algebra* in case the dominant dimension equals the finitistic injective dimension of the algebra (note that the definition is really dual, since the dominant dimension of an algebra always coincides with the codominant dimension of an algebra). We do not know whether every weak finitistic Auslander algebra is a weak co-finitistic Auslander algebra. In fact the author is not even aware of an algebra that has dominant dimension at least one such that the finitistic dimension is not equal to the finitistic injective dimension. We call an algebra *tilting-finitistic* in the following in case the finitistic dimension of $A$ equals the supremum of the projective dimensions of tilting modules. We can apply \[propocotilting\] to obtain the following theorem.
\[nasgtheo\] Let $A$ be a NASG algebra with finite finitistic dimension. Then $A$ is a weak finitistic Auslander algebra in case $A$ is tilting-finitistic.
Since $A$ is assumed to be tilting-finitistic, the finitistic dimension of $A$ equals the supremum of the projective dimensions of tilting modules. Since $A$ is a NASG algebra, every basic tilting module is a shifted tilting module by \[propocotilting\], so this supremum equals $d$, where $d$ denotes the dominant dimension of $A$, because the module $eA \oplus \Omega^{-d}(A)$ is in this case the tilting module of highest projective dimension.
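The chain of equalities in the proof can be written out explicitly (this display only restates the argument above and contains no additional claim; $d$ denotes the dominant dimension of $A$): $$findim(A) = \sup \{ pd(T) \mid T \text{ a basic tilting module} \} = \max_{0 \leq i \leq d} pd(eA \oplus \Omega^{-i}(A)) = pd(eA \oplus \Omega^{-d}(A)) = d.$$ The first equality is the tilting-finitistic assumption, the second uses that by \[propocotilting\] every basic tilting module of a NASG algebra is a shifted tilting module, and the last equality holds because $eA \oplus \Omega^{-d}(A)$ has the highest projective dimension, namely $d$, among the shifted tilting modules.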
This motivates the following question:
Is every finite dimensional algebra tilting-finitistic?
This question is for example studied in [@AT] and it seems to be open in general. We now obtain several corollaries.
\[nasgcorollary\] Let $A$ be a NASG algebra. Then $A$ is a weak finitistic Auslander algebra in each of the following cases:
1. The full subcategory of modules having finite projective dimension is contravariantly finite.
2. $A$ is representation-finite.
3. $A$ is Gorenstein.
We show that $A$ is tilting-finitistic in each case:
1. This follows from [@AT], theorem 2.6.
2. This follows from (1), since every subcategory of the module category of a representation-finite algebra is contravariantly finite.
3. Recall that for Gorenstein algebras, the finitistic dimension equals the Gorenstein dimension. Then the module $D(A)$ is a tilting module of projective dimension equal to the Gorenstein dimension.
This large class of examples motivates the following conjecture:
\[nasgconjecture\] Let $A$ be a NASG algebra. Then $A$ is tilting-finitistic and thus a weak finitistic Auslander algebra.
Let $A$ be a representation-finite NASG algebra. Then $A$ is a weak finitistic Auslander algebra and a weak co-finitistic Auslander algebra.
This follows directly from \[nasgcorollary\] and \[nasgtheo\] and their duals.
We also obtain a corollary for higher Auslander algebras and Auslander-Gorenstein algebras that seems to be new:
Let $A$ be a NASG algebra of dominant dimension at least two.
1. $A$ is an Auslander-Gorenstein algebra if and only if $A$ is Gorenstein.
2. $A$ is a higher Auslander algebra if and only if $A$ has finite global dimension.
<!-- -->
1. If $A$ is Auslander-Gorenstein, then $A$ is by definition also Gorenstein. Assume that $A$ is Gorenstein and a NASG algebra. Then by \[nasgcorollary\] (3), it is a weak finitistic Auslander algebra and thus an Auslander-Gorenstein algebra because it has finite Gorenstein dimension.
2. If $A$ is a higher Auslander algebra, then $A$ has by definition finite global dimension. Assume that $A$ has finite global dimension and is a NASG algebra. Then by \[nasgcorollary\] (3) (recall that every algebra of finite global dimension is a Gorenstein algebra), it is a weak finitistic Auslander algebra and thus a higher Auslander algebra because it has finite global dimension.
We give a large class of concrete examples of representation-finite NASG algebras:
Let $A$ be a Nakayama algebra. Then $A$ always has dominant dimension at least one and thus is a NASG algebra if and only if it has exactly one indecomposable projective non-injective module. This is the case if and only if $A$ has Kupisch series (see for example [@Mar2] for background on Nakayama algebras and their Kupisch series) equal to $[2,2,...,2,1]$ or $[a,a,...,a,a+1,a+1,...,a+1]$ for some natural number $a \geq 2$. Thus for any number $n \geq 2$ we obtain in this way infinitely many representation-finite weak finitistic Auslander algebras having $n$ simple modules. Proving directly from the definitions of the finitistic dimension and dominant dimension that those algebras are weak finitistic Auslander algebras seems very hard and the author is not aware of a direct proof. Computer experiments suggest that every non-selfinjective Nakayama algebra with exactly $n \geq 2$ simple modules and dominant dimension at least $n$ is automatically a NASG algebra and thus also a weak finitistic Auslander algebra. This has been verified by computer for $n \leq 13$ and we will give more results on this problem in forthcoming work.
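The two families of Kupisch series above can be enumerated mechanically. The following Python sketch (not part of the paper; it performs no representation-theoretic computation and merely reproduces the stated classification, reading the pattern $[a,a,...,a,a+1,a+1,...,a+1]$ with an arbitrary split position) lists Kupisch series of NASG Nakayama algebras with $n$ simple modules:

```python
def nasg_kupisch_series(n, max_a=4):
    """List Kupisch series of the two stated families for n simple modules.

    - the linear family [2, 2, ..., 2, 1], and
    - the cyclic family [a, ..., a, a+1, ..., a+1] for a >= 2, with k entries
      equal to a and n - k entries equal to a + 1 (1 <= k <= n - 1).
    The parameter max_a truncates the (infinite) cyclic family for display.
    """
    series = [[2] * (n - 1) + [1]]          # the linear family
    for a in range(2, max_a + 1):
        for k in range(1, n):               # split position within the cycle
            series.append([a] * k + [a + 1] * (n - k))
    return series

for s in nasg_kupisch_series(3, max_a=2):
    print(s)
```

Since $a$ is unbounded, the cyclic family is infinite for every fixed $n$, which is the source of the infinitely many examples mentioned above.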
The previous example and all other examples which the author considered lead to the following conjecture:
Let $n \geq 2$. There exists a polynomial function $f(n)$ such that the following is true: Every connected non-selfinjective algebra with $n$ simple modules that has dominant dimension at least $f(n)$ is a weak finitistic Auslander algebra.
We think that a polynomial function $f(n)$ with $n \leq f(n) \leq 2n$ might do the job. One can verify this conjecture for several classes of algebras, as we will do in forthcoming work. We give one more example.
In [@ChMar] the class of representation-finite gendo-symmetric biserial algebras was classified (generalising the classical Brauer tree algebras). All those algebras were Gorenstein and thus their finitistic dimension coincides with the Gorenstein dimension. Explicit values for the dominant and Gorenstein dimension are obtained and one can easily show that the above conjecture is true for this class of algebras with $f(n)=n$.
[Gus]{} Agoston, Istvan; Happel, Dieter; Lukacs, Erzsebet; Unger, Luise :[*Finitistic dimension of standardly stratified algebras.*]{} Comm. Algebra 28 (2000), no. 6, 2745-2752. Angeleri-Huegel, Lidia; Trlifaj, Jan: [*Tilting theory and the finitistic dimension conjectures.* ]{} Trans. Amer. Math. Soc. 354 (2002), no. 11, 4345-4358. Assem, Ibrahim; Simson, Daniel; Skowronski, Andrzej: [*Elements of the Representation Theory of Associative Algebras, Volume 1: Representation-Infinite Tilted Algebras.*]{} London Mathematical Society Student Texts, Volume 72, (2007). Auslander, Maurice; Platzeck, Maria Ines; Todorov, Gordana: [*Homological theory of idempotent Ideals*]{} Transactions of the American Mathematical Society, Volume 332, Number 2 , August 1992. Auslander, Maurice; Reiten, Idun: [Applications of contravariantly finite subcategories. ]{} Adv. Math. 86 (1991), no. 1, 111-152. Benson, David J.: [*Representations and cohomology I: Basic representation theory of finite groups and associative algebras.*]{} Cambridge Studies in Advanced Mathematics, Cambridge University Press, 1991. Benson, David J.: [*Representations and cohomology II: Cohomology of Groups and Modules.*]{} Cambridge Studies in Advanced Mathematics, Cambridge University Press, 1998. Buan, Aslak Bakke; Solberg, [[Ø]{}]{}yvind: [*Relative cotilting theory and almost complete cotilting modules. Algebras and modules, II*]{} (Geiranger, 1996), 77-92, CMS Conf. Proc., 24, Amer. Math. Soc., Providence, RI, 1998. Chan, Aaron; Iyama, Osamu; Marczinzik, René: [*Auslander-Gorenstein algebras from Serre-formal algebras via replication.*]{} https://arxiv.org/abs/1707.03996. Chan, Aaron; Marczinzik, René: [*On representation-finite gendo-symmetric biserial algebras.*]{} Algebras and Representation Theory, Dezember 2017. Chen, Xueqing; Dlab, Vlastimil: [*Properly stratified endomorphism algebras.*]{} J. Algebra 283 (2005), no. 1, 63-79. 
Chen, Xiao-Wu: [*Gorenstein Homological Algebra of Artin Algebras.*]{} https://arxiv.org/abs/1712.04587. Chen, Hongxing; Koenig, Steffen: [*Ortho-symmetric modules, Gorenstein algebras and derived equivalences.*]{} International Mathematics Research Notices (2016), electronically published. doi:10.1093/imrn/rnv368. Chouinard, Leo: [*Projectivity and relative projectivity over group rings.*]{} J. Pure Appl. Algebra 7 (1976), no. 3, 287-302. Eilenberg, Samuel; Rosenberg, Alex; Zelinsky, Daniel :[*On the dimension of modules and algebras. VIII. Dimension of tensor products.*]{} Nagoya Math. J. 1957, 71-93. Erdmann, Karin :[*Blocks of Tame Representation Type and Related Algebras.*]{} Lecture Notes in Mathematics, Springer, Volume 1428. Fang, Ming; Hu, Wei; Koenig, Steffen : [*Derived equivalences, restriction to self-injective subalgebras and invariance of homological dimensions.*]{}https://arxiv.org/pdf/1607.03513.pdf. Iyama, Osamu: [*Auslander correspondence.*]{} Adv. Math. 210 (2007), no. 1, 51-82. Iyama, Osamu; Solberg, [[Ø]{}]{}yvind: [*Auslander-Gorenstein algebras and precluster tilting.*]{} http://arxiv.org/abs/1608.04179. Marczinzik, René: [*Gendo-symmetric algebras, dominant dimensions and Gorenstein homological algebra.*]{} http://arxiv.org/abs/1608.04212. Marczinzik, René: [*Upper bounds for the dominant dimension of Nakayama and related algebras.*]{} Journal of Algebra Volume 496, 15 February 2018, Pages 216-241. Marczinzik, René: [*Auslander-Gorenstein algebras, standardly stratified algebras and dominant dimensions.*]{} https://arxiv.org/abs/1610.02966. Marczinzik, Reneé :[*Upper bounds for dominant dimensions of gendo-symmetric algebras.*]{} Archiv der Mathematik, September 2017, Volume 109, Issue 3, pp 231-243 . Martinez Villa, Roberto: [*Modules of dominant and codominant dimension.*]{} Communications in algebra, 20(12), 3515-3540, (1992). 
Mueller, Bruno: [*The classification of algebras by dominant dimension.*]{} Canadian Journal of Mathematics, Volume 20, pages 398-409, 1968. Nguyen, Van; Wang, Linhong; Wang, Xingting : [*Classification of connected Hopf algebras of dimension $p^3$ I.*]{} Journal of Algebra Volume 424, 15 February 2015, Pages 473-505. Pressland, Matthew; Sauter, Julia :[*Special tilting modules for algebras with positive dominant dimension.*]{} https://arxiv.org/abs/1705.03367. The QPA-team, QPA - Quivers, path algebras and representations - a GAP package, Version 1.25; 2016 (https://folk.ntnu.no/oyvinso/QPA/) Reiten, Idun: [*Tilting theory and homologically finite subcategories with applications to quasihereditary algebras.*]{} Handbook of tilting theory, 179-214, London Math. Soc. Lecture Note Ser., 332, Cambridge Univ. Press, Cambridge, 2007. Skowronski, Andrzej: [*Periodicity in representation theory of algebras.*]{} https://webusers.imj-prg.fr/$\sim$bernhard.keller/ictp2006/lecturenotes/skowronski.pdf. Skowronski, Andrzej; Yamagata, Kunio: [*Frobenius Algebras I: Basic Representation Theory.*]{} EMS Textbooks in Mathematics, (2011). Tachikawa, Hiroyuki: [*Quasi-Frobenius Rings and Generalizations: QF-3 and QF-1 Rings (Lecture Notes in Mathematics 351)* ]{} Springer; (1973). Wen, Daowei: [*On self-injective algebras and standardly stratified algebras.*]{} J. Algebra 291 (2005), no. 1, 55-71. Yamagata, Kunio: [*Frobenius Algebras*]{} in [Hazewinkel, M. (editor): Handbook of Algebra, North-Holland, Amsterdam, Volume I, pages 841-887, 1996.]{} Zimmermann-Huisgen, Birge: [*Predicting syzygies over monomial relations algebras.*]{} Manuscripta mathematica, 1991, Volume 70, Issue 1, pp 157-182.
---
abstract: 'We consider a scalar field interacting with a quantized metric varying only on a submanifold (e.g. a scalar field interacting with a quantized gravitational wave). We explicitly sum up the perturbation series for the time-ordered vacuum expectation values of the scalar field. As a result we obtain a modified non-canonical short distance behavior.'
author:
- |
Z. Haba\
Institute of Theoretical Physics, University of Wroclaw,\
50-204 Wroclaw, Plac Maxa Borna 9, Poland\
e-mail:[email protected]
title: Resummation of the perturbation series for an interaction of a scalar field with a quantized metric
---
Introduction
=============
We consider a model of a scalar field interacting with a quantized metric. In order to simplify the model we assume that the metric does not depend on all the coordinates. There is a physical model for such a metric tensor: the field of a gravitational wave. Hence, the model would describe an interaction of a scalar particle with the graviton. We have obtained in [@jmp] a path integral representation of the scalar field correlation functions. Then, we assumed an exact scale invariance of the quantum gravitational field. In this paper we insert a flat background metric and consider an expansion around this metric. We show that the path integral formula can be considered as a resummation of the conventional perturbation expansion. Then, the assumption that the perturbation around the flat metric is scale invariant leads to the same conclusions as in [@jmp]: the singular quantum metric substantially changes the short distance behavior of the scalar field (in particular, it can make the scalar field more regular).
The conventional perturbation expansion
=======================================
We consider the Lagrangian for a complex scalar field in $D$-dimensions interacting with gravity $${\cal L}={\cal L}(g)
+g^{AB}\partial_{A}\overline{\phi}\partial_{B}\phi
+ M^{2}\overline{\phi}\phi$$ where ${\cal L}(g)$ is a gravitational Lagrangian which we do not specify. The metric is assumed to be a perturbation of the flat one $$g^{\mu\nu}=\eta^{\mu\nu}+h^{\mu\nu}$$ where $\mu,\nu=1,...,D-d$ and $h^{\mu\nu}$ varies only on a $d$-dimensional submanifold, and $$g^{ik}=\eta^{ik}$$ for $i,k=D-d+1,...,D$; here $\eta^{AB}=\pm\delta^{AB}$, but we do not specify the signature. We split the coordinates as $x=({\bf X},{\bf x})$ with ${\bf x}\in R^{d}$.
As an example we could consider in $D=4$ the metric corresponding to a gravitational wave moving in the $x_{3}$ direction. In such a case the metric is $$ds^{2}=-(dx_{4})^{2}+(dx_{3})^{2}+g^{\mu\nu}dx_{\mu}dx_{\nu}$$ where $g^{\mu\nu}(x_{3}-x_{4})$ is an exact solution of the Einstein equations; see e.g. [@inv] (this case would correspond to $d=1$). More general solutions are known describing a scattering of gravitational waves and depending on both $x_{4}-x_{3}$ and $x_{4}+x_{3}$ (hence, $d=2$) (for higher dimensions see [@pen][@tse]). The assumption of a weak gravitational field (2) is a realistic approximation. In such a case it can be shown that any solution of the linearized Einstein equations can be brought by a coordinate transformation to a form where all components of $h^{\mu\nu}$ vanish except those with $\mu,\nu=1,2$. These components describe the transverse polarization states of the spin 2 field.
Without a self-interaction the $\phi\overline{\phi}$ correlation function averaged over the metric is $$\langle ({\cal A}+\frac{1}{2}M^{2})^{-1}(x,y)\rangle$$ where $$\begin{array}{l}
-{\cal A}= \frac{1}{2}\Box_{D-d}+\frac{1}{2}
\sum_{\mu=1,\nu=1}^{D-d}h^{\mu\nu}({\bf
x})\partial_{\mu}\partial_{\nu}+
\frac{1}{2}\sum_{k=D-d+1}^{D}\partial_{k}^{2} \cr
\equiv\frac{1}{2}\Box_{D}+\frac{1}{2} \sum h^{\mu\nu}({\bf
x})\partial_{\mu}\partial_{\nu}
\end{array}$$ and the index at the d'Alembertian denotes its dimensionality. The inverse in eq.(4) is not unique for the pseudo-Riemannian metric (2)-(3). However, we define it uniquely later on by means of the Feynman proper-time representation together with the $i\epsilon$ prescription.
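The $i\epsilon$ prescription just mentioned can be illustrated on the scalar analogue of the proper-time representation, $a^{-1} = i\int_0^\infty e^{-i(a-i\epsilon)\tau}\,d\tau$ for $\epsilon \rightarrow 0^+$, which the proper-time representation used below applies operator-wise. The following numerical sketch (an illustration only, not part of the paper) checks this identity:

```python
import numpy as np

# Scalar analogue of the Feynman proper-time representation with i*epsilon:
#   1/a = i * int_0^infty exp(-i * (a - i*eps) * tau) d tau,   eps -> 0+
a, eps = 2.0, 0.01
tau = np.linspace(0.0, 2000.0, 2_000_001)   # decay length 1/eps = 100
dtau = tau[1] - tau[0]
integrand = 1j * np.exp(-1j * (a - 1j * eps) * tau)
integral = np.sum(integrand) * dtau          # simple Riemann sum
print(integral, 1.0 / a)                     # integral ~ 1/(a - i*eps) ~ 1/a
```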
Let us begin with the two-point function (4) and consider the Neumann expansion $$\begin{array}{l}
\langle (-\Box_{D}-h\partial \partial +M^{2})^{-1}(x,y)\rangle=
\langle\sum_{n=0}^{\infty} ((Gh\partial\partial
)^{n}G)(x,y)\rangle \cr = G(x,y)+
\partial\partial\partial\partial G G G(x,y)\langle hh\rangle
+\partial\partial\partial\partial\partial\partial\partial
\partial GGGGG(x,y) \langle hhhh\rangle+...
\end{array}$$ where $G=(-\Box_{D}+M^{2})^{-1} $ denotes the Feynman propagator for the scalar field of mass $M$. In eq.(6) we have used an integration by parts in order to move the derivatives to the beginning (this is possible because the correlations of $h$ depend only on $R^{d}$ and the derivatives concern the complementary $D-d$ variables). For example, the first non-trivial term in eq.(6) reads $$\int dz_{1}dz_{2}\partial^{X}_{\mu}\partial^{X}_{\nu}\partial_{\sigma}^{X}
\partial_{\rho}^{X}G(x-z_{1})G(z_{1}-z_{2})G(z_{2}-y)
{\cal D}^{\mu\nu;\sigma\rho}({\bf z}_{1}-{\bf z}_{2})$$ where $$\langle h^{\mu\nu}({\bf z}_{1})h^{\sigma\rho}({\bf z}_{2})\rangle
={\cal D}^{\mu\nu;\sigma\rho}({\bf z}_{1}-{\bf z}_{2})$$ In order to compare the conventional expansion with the stochastic one of ref.[@jmp] we apply the proper-time representation for $G$ $$\prod_{k=1}^{n+1}G(v_{k})=\frac{1}{2}\prod_{k=1}^{n+1}\int_{0}^{\infty}id\tau_{k}
\exp(-\frac{i}{2}\tau_{k} M^{2}) p(\tau_{k},{\bf
v}_{k})p(\tau_{k}, {\bf V}_{k})$$ where in the dimension $r$ the Feynman kernel $p$ is $$p(\tau,{\bf x})=(2\pi i\tau)^{-\frac{r}{2}}\exp(-\frac{{\bf
x}^{2}}{2i\tau})$$ and we made use of the property of the Feynman kernel in $D$-dimensions that it is a product of the kernels in $D-d$ and $d$-dimensions. Then, in the product (8) we change the integration variables, so that $$\begin{array}{l}
G(x-z_{n})G(z_{n}-z_{n-1}).....G(z_{1}-y) = i^{n+1}2^{-n-1}
\int_{0}^{\infty}d\tau\int_{0}^{\tau}ds_{n}
\int_{0}^{s_{n}}.....\int_{0}^{s_{2}}ds_{1} \cr p(\tau-s_{n},
x-z_{n})p(s_{n}-s_{n-1},z_{n}-z_{n-1})..... p(s_{1},z_{1}-y)
\end{array}$$ where in eqs.(8)-(9) we introduced new variables $$\tau_{1}=s_{1}$$ $$\tau_{1}+\tau_{2}=s_{2}$$ ......... $$\tau_{1}+.....+\tau_{n}=s_{n}$$ $$\tau_{1}+.....+\tau_{n+1}=\tau$$ We write $${\bf z}_{k}={\bf x}_{k}+{\bf y}$$ Then, in the representation (9) we apply many times the Smoluchowski-Kolmogorov equation (the semigroup composition law) for the $D-d$ dimensional kernels $$\int p(s-s^{\prime},{\bf X}-{\bf Z})p(s^{\prime},{\bf Z})d {\bf
Z}=p(s,{\bf X})$$ Performing the integration by parts we can see that the derivatives in the expansion (6) (with the representation (9)) finally act on $p(\tau,{\bf X}-{\bf Y})$. We may write the result in the momentum representation in the form (which will come out in the stochastic version of the next section) $$\partial_{\mu}^{X}p(\tau,{\bf X}-{\bf Y})=
\int d{\bf P}\exp(i{\bf P}({\bf Y}-{\bf
X}))\exp(-i\frac{\tau}{2}{\bf P}^{2} -i\frac{\tau}{2}M^{2} )(-iP_{\mu})$$ We can compute higher order correlation functions in the Gaussian model of scalar fields. They are expressed by the two-point function. For example, the four-point function expanded around the flat metric reads $$\begin{array}{l}
\langle
\phi(x)\phi(x^{\prime})\overline{\phi}(y)\overline{\phi}(y^{\prime})\rangle
=\langle( {\cal A}+\frac{1}{2}M^{2})^{-1}\left(x,y\right)( {\cal
A}+\frac{1}{2}M^{2})^{-1}\left(x^{\prime},y^{\prime}\right)\rangle
+(x\rightarrow x^{\prime})
\cr
=\langle\sum_{n=0,m=0}^{\infty}
((Gh\partial\partial )^{n}G)(x,y) ((Gh\partial\partial
)^{m}G)(x^{\prime},y^{\prime})\rangle + (x\rightarrow x^{\prime})
\end{array}$$ where $x\rightarrow x^{\prime}$ means the same expression with the exchanged coordinates. In the next section we sum up the perturbation series (4) and (11). This will allow us to determine the functional dependence of the scalar correlation functions on the metric explicitly.
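The Smoluchowski-Kolmogorov composition law used above can also be checked numerically in its Euclidean form, where the Feynman kernel becomes the heat kernel $p(\tau,{\bf x})=(2\pi\tau)^{-r/2}\exp(-{\bf x}^{2}/2\tau)$. The following sketch (for $r=1$; an illustration only, not part of the derivation) verifies that the convolution of two kernels reproduces the kernel at the summed proper time:

```python
import numpy as np

def heat_kernel(t, x):
    """One-dimensional heat kernel, the Euclidean version of the Feynman kernel."""
    return np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

# Check  int dZ p(s - s', X - Z) p(s', Z) = p(s, X)  on a wide, fine grid.
s, s_prime, X = 1.5, 0.6, 0.7
Z = np.linspace(-30.0, 30.0, 200001)
dZ = Z[1] - Z[0]
lhs = np.sum(heat_kernel(s - s_prime, X - Z) * heat_kernel(s_prime, Z)) * dZ
rhs = heat_kernel(s, X)
print(lhs, rhs)    # the two values agree to high accuracy
```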
Let us note that the expectation value over the metric field in eqs.(4) and (11) depends on the scalar determinant. It can involve a complex dependence on the metric. We cannot compute the effective gravitational action explicitly. However, its scaling behavior is sufficient to determine the short distance behavior of the scalar fields.
The stochastic representation
=============================
Let us recall a representation of the correlation functions (6) and (11) by means of the Feynman integral [@jmp][@hababook]. In this paper we work with a real time. It makes no difference for a perturbation expansion whether we work in Minkowski or in Euclidean space. However, the non-perturbative Euclidean version may not exist (e.g. this is the case when the Hamiltonian is unbounded from below). For this reason we consider here a representation with an indefinite metric alternative to the Euclidean version of ref.[@jmp].
We represent the scalar field two-point function by means of the proper time method $$({\cal
A}+\frac{1}{2}M^{2})^{-1}(x,y)=i\int_{0}^{\infty}d\tau
\exp(-\frac{i}{2}M^{2}\tau)\left(\exp\left(-i\tau {\cal A}\right)
\right)(x,y)$$ The kernel $\left(\exp\left(-i\tau {\cal A}\right) \right)(x,y)
$ can be expressed by the Feynman integral $$\begin{array}{l}
K_{\tau}(x,y)\equiv\left(\exp\left(-i\tau {\cal A}\right)
\right)(x,y)=\int {\cal D}x\exp(\frac{i}{2}\int \frac{d{\bf
x}}{dt}
\frac{d{\bf x}}{dt}+\frac{i}{2}\int (h^{\mu\nu}({\bf x})+\eta^{\mu\nu})\frac{dX_{\mu}}{dt}
\frac{dX_{\nu}}{dt})
\cr
\delta\left(x\left(0\right)-x\right)
\delta\left(x\left(\tau\right)-y\right)
\end{array}$$ In the Feynman integral (13) we make a change of variables ($x
\rightarrow b$) defined by the solution of the Stratonovich stochastic differential equations [@ike] $$dx^{\Omega}(s)=e_{A}^{\Omega}\left(
x\left(s\right)\right)db^{A}(s)$$ where for $\Omega=1,....,D-d$ $$e^{\mu}_{a}e^{\nu}_{a}=\eta^{\mu\nu}+h^{\mu\nu}$$ and $e^{\Omega}_{A}=\delta^{\Omega}_{A}$ if $\Omega>D-d$.
The change of variables (14) transforms the functional integral (13) into a Gaussian integral with the covariance $$E[b_{A}(t)b_{C}(s)]=i\delta_{AC}\min(s,t)$$ It can be interpreted as an analytic continuation of the standard Wiener integral. This means that on a formal level $$E[F]=\int {\cal D}b\exp\Big(\frac{i}{2}\int ds(\frac{db}{ds}
)^{2}\Big)F(b)$$ The solution $q_{\tau}$ of eq.(14) is defined by two vectors $({\bf Q},{\bf q})$ where $${\bf q}(\tau,{\bf x})={\bf x}+ {\bf b}(\tau)$$ and ${\bf Q}$ has the components (for $\mu=1,...,D-d$) $$Q^{\mu}(\tau,{\bf X})=X^{\mu}+\int_{0}^{\tau}
e_{a}^{\mu}\left({\bf q}\left(s,{\bf x}\right)\right)dB^{a}(s)$$ The kernel (13) can be expressed by the solution (15)-(16) $$\begin{array}{l}
K_{\tau}(x,y)=E[\delta(y-q_{\tau}(x))] \cr = E[\delta({\bf y}
-{\bf x}-{\bf b}(\tau)) \prod_{\mu}\delta\left(Y^{\mu}-Q^{\mu}
\left(\tau,{\bf X}\right)\right)]
\end{array}$$ When the $\delta$-function is defined by its Fourier transform then the kernel $K_{\tau}$ takes the form $$\begin{array}{l}
K_{\tau}(x,y)=(2\pi)^{-D+d}\int d{\bf P}
\exp\left(i{\bf P}\left({\bf
Y}-{\bf X}\right)\right)
\cr E[\delta\left(
{\bf y}-{\bf x}-{\bf
b}\left(\tau\right)\right)
\exp\left(-i\int P_{\mu}e^{\mu}_{a}\left ({\bf
q}\left(s,{\bf x}\right)\right)dB^{a}\left(s\right)\right)]
\end{array}$$ We can perform the $B$-integral. The random variables ${\bf b}$ and $B^{a}$ are independent. We can use the formula [@ike] $$E[\exp i\int f_{a}({\bf b})dB^{a}]= E[\exp(-\frac{i}{2}\int
f_{a}f_{a}ds)]$$ Then, the Feynman kernel is expressed by the metric tensor $$\begin{array}{l}
K_{\tau}(x,y)=(2\pi)^{-D+d}\int d{\bf P}
\exp\left(i{\bf P}\left({\bf Y}-{\bf
X}\right)\right)\exp(-i\frac{\tau}{2}{\bf P}^{2})
\cr E[\delta\left(
{\bf y}-{\bf x}-{\bf b}\left(\tau\right)\right)
\exp\left(-\frac{i}{2}\int_{0}^{\tau} P_{\mu}h^{\mu\nu}\left
({\bf q}\left(s,{\bf x}\right)\right)P_{\nu}ds\right)]
\end{array}$$ Note that in eq.(18) instead of Feynman paths we may use the Brownian paths $\tilde{b}$ [@hababook] (the $\delta$-function can be taken away by replacing the Brownian motion by the Brownian bridge as in [@jmp]) $$\tilde{b}=\exp(-i\frac{\pi}{4})b$$ Then, the expectation value really is an average over the random process.
The perturbation expansion in $h$ reads $$\begin{array}{l}
\langle ({\cal A}+\frac{1}{2}M^{2})^{-1}(x,y)\rangle=
i\int_{0}^{\infty}d\tau
\exp(-\frac{i\tau}{2}M^{2})K_{\tau}(x,y)\cr =(2\pi)^{-D+d}\int
d{\bf P}
\exp\left(i{\bf P}\left({\bf Y}-{\bf
X}\right)\right) i\int_{0}^{\infty}d\tau
\exp(-\frac{i\tau}{2}{\bf P}^{2}-\frac{i\tau}{2}M^{2})
E[\delta\left(
{\bf y}-{\bf x}-{\bf b}\left(\tau\right)\right) \cr
\sum_{n=0}^{\infty}(2i)^{-n}\int_{0}^{\tau}ds_{n}\int_{0}^{s_{n}}d
s_{n-1}......\int_{0}^{s_{2}}ds_{1} P_{\mu}h^{\mu\nu}\left ({\bf
q}\left(s_{n},{\bf x}\right)\right)P_{\nu} \cr P_{\sigma}
h^{\sigma\rho}\left ({\bf q}\left(s_{n-1},{\bf
x}\right)\right)P_{\rho}... P_{\alpha}h^{\alpha\beta}\left ({\bf
q}\left(s_{1},{\bf x}\right)\right)P_{\beta}]\end{array}$$ The expectation value over the Feynman paths (the complex Brownian motion) can be computed and we obtain as a result the expectation value depending on the correlation functions of the metric field $$\begin{array}{l}
\langle\int_{0}^{\infty}d\tau
\exp(-\frac{i\tau}{2}M^{2})K_{\tau}(x,y)\rangle =(2\pi)^{-D+d}\int
d{\bf P}
\exp\left(i{\bf P}\left({\bf Y}-{\bf
X}\right)\right) i \int_{0}^{\infty}d\tau
\cr
\exp(-\frac{i\tau}{2}{\bf P}^{2}-\frac{i\tau}{2}M^{2})
\sum_{n=0}^{\infty}\int d{\bf x}_{1}....d{\bf x}_{n}d{\bf z}\cr
(2i)^{-n}\int_{0}^{\tau}ds_{n}\int_{0}^{s_{n}}d
s_{n-1}......\int_{0}^{s_{2}} ds_{1}\cr p(s_{1},{\bf
x}_{1})p(s_{2}-s_{1},{\bf x}_{2}-{\bf x}_{1})..... \cr
p(s_{n}-s_{n-1},{\bf x}_{n}-{\bf x}_{n-1})p(\tau-s_{n},{\bf
z}-{\bf x}_{n}) \delta({\bf y}-{\bf x}-{\bf z})\langle
P_{\mu}h^{\mu\nu}\left ({\bf x}+{\bf x}_{1}\right)P_{\nu} \cr ....
P_{\alpha}h^{\alpha\beta}\left ({\bf x}+{\bf
x}_{n}\right)P_{\beta}\rangle\end{array}$$ It is easy to see that this formula coincides with eq.(6) where the representation (9) is applied with the simplification concerning the $X$-dependence (that the result is finally expressed by $p(\tau,{\bf X}-{\bf Y})$) discussed at the end of sec.2.
Representing the four-point function by means of the stochastic method we obtain $$\begin{array}{l}
\langle
\phi(x)\phi(x^{\prime})\overline{\phi}(y)\overline{\phi}(y^{\prime})\rangle
= -(2\pi)^{-2D+2d}\int d\tau_{1}d\tau_{2} \int d{\bf P}d{\bf
P}^{\prime}
\cr
\exp(-\frac{i}{2}\tau_{1}{\bf P}^{2}-\frac{i}{2}\tau_{2}{\bf P}^{\prime 2}
-\frac{i}{2}\tau_{1}M^{2}-\frac{i}{2}\tau_{2}M^{ 2})
\cr
\exp\left(i{\bf
P}\left({\bf Y}-{\bf X}\right)
+i{\bf P}^{\prime}\left({\bf Y}^{\prime}-{\bf X}^{\prime}\right) \right)
\cr E[
\delta\left({\bf y}-{\bf x}-{\bf b}\left(\tau_{1}\right)\right)
\delta\left({\bf y}^{\prime}-{\bf x}^{\prime}-{\bf b}^{\prime}
\left(\tau_{2}\right)\right)
\cr
\exp\left(-\frac{i}{2}\int_{0}^{\tau_{1}} P_{\mu}h^{\mu\nu}\left
({\bf q}\left(s,{\bf x}\right)\right)P_{\nu}ds\right)
\exp\left(-\frac{i}{2}\int_{0}^{\tau_{2}} P^{\prime}_{\mu}h^{\mu\nu}\left
({\bf q}^{\prime}\left(s^{\prime},{\bf x}^{\prime}\right)\right)P^{\prime}_{\nu}
ds^{\prime}\right)
]
\cr
+ (x\rightarrow x^{\prime})
\end{array}$$ We make a perturbation expansion in the metric $$\begin{array}{l}
\sum_{n=0}^{\infty}(2i)^{-n}\int_{0}^{\tau_{1}}ds_{n}\int_{0}^{s_{n}}d
s_{n-1}......\cr ...\int_{0}^{s_{2}}ds_{1} P_{\mu}h^{\mu\nu}\left
({\bf q}\left(s_{n},{\bf x}\right)\right)P_{\nu} \cr P_{\sigma}
h^{\sigma\rho}\left ({\bf q}\left(s_{n-1},{\bf
x}\right)\right)P_{\rho}.... P_{\alpha}h^{\alpha\beta}\left ({\bf
q}\left(s_{1},{\bf x}\right)\right)P_{\beta} \cr
\sum_{m=0}^{\infty}(2i)^{-m}\int_{0}^{\tau_{2}}ds^{\prime}_{m}
\int_{0}^{s^{\prime}_{m}}d
s^{\prime}_{m-1}......\int_{0}^{s^{\prime}_{2}}ds^{\prime}_{1}
P^{\prime}_{\mu}h^{\mu\nu}\left ({\bf
q}^{\prime}\left(s^{\prime}_{m},{\bf
x}\right)\right)P^{\prime}_{\nu} \cr P^{\prime}_{\sigma}
h^{\sigma\rho}\left ({\bf q}^{\prime}\left(s^{\prime}_{m-1},{\bf
x}^{\prime}\right)\right) P^{\prime}_{\rho}....
P^{\prime}_{\alpha}h^{\alpha\beta}\left ({\bf
q}^{\prime}\left(s^{\prime}_{1},{\bf
x}\right)\right)P^{\prime}_{\beta} + (x\rightarrow x^{\prime})
\end{array}$$ The paths ${\bf q}$ and ${\bf q}^{\prime}$ are independent. We calculate the expectation values according to the formula (21). In the conventional perturbation expansion (11) we make the same change of the proper-time variables (9) as in the two-point function (for each two-point function separately). Then, it becomes evident by the same argument as for the two-point function that the stochastic formula (23) and the conventional one (11) coincide (regardless of the correlation functions of the metric field). It is now easy to see that the argument concerning the four-point function generalizes to any $2n$-point function.
Ultraviolet behavior of the perturbation series
===============================================
In order to study the ultraviolet behavior of the perturbation expansion (6) and (11) it is useful to express the functional integral in the momentum space. First, we write down the scalar part of the action (1) in the form $$\begin{array}{l}
L=\int d{\bf x}d{\bf P}(\nabla_{\bf x}\overline{\phi}({\bf x},{\bf
P}) \nabla_{\bf x}\phi({\bf x},{\bf P}) +{\bf
P}^{2}\overline{\phi}({\bf x},{\bf P}) \phi({\bf x},{\bf P})\cr
+M^{2}\overline{\phi}({\bf x},{\bf P}) \phi({\bf x},{\bf P})
+h^{\mu\nu}({\bf x})P_{\mu}P_{\nu}\overline{\phi}({\bf x},{\bf P})
\phi({\bf x},{\bf P}))
\end{array}$$ Then, it is easy to see from the functional integral for the correlation functions $$\int {\cal D}\phi\exp(-L)\overline{\phi}....\phi$$ that there is a direct correspondence between the model (1) and the trilinear interaction $P_{\mu}P_{\nu}h^{\mu\nu}\overline{\phi}\phi$ where $P$ is treated just as a parameter. The Fourier transform over $P$ is performed in the final formula. For example $$\begin{array}{l}
\langle \overline{\phi}({\bf x},{\bf X})\phi({\bf x}^{\prime},{\bf
X}^{\prime})\rangle \cr =\int d{\bf P}d{\bf P}^{\prime}\langle
\overline{\phi}({\bf x},{\bf P}) \phi({\bf x}^{\prime},{\bf
P}^{\prime})\rangle
\exp(i{\bf PX}-i{\bf P}^{\prime}{\bf X}^{\prime})
\cr =\int d{\bf P}\langle(-\triangle_{d}+{\bf P}^{2}+h^{\mu\nu}
P_{\mu}P_{\nu} )^{-1}({\bf x},{\bf x}^{\prime})\rangle
\exp(i{\bf P}({\bf X}-{\bf X}^{\prime}))
\end{array}$$ whereas for the four-point function $$\begin{array}{l}
\langle \overline{\phi}({\bf x},{\bf X})\overline{\phi}({\bf
x}^{\prime},{\bf X}^{\prime}) \phi({\bf y},{\bf Y}) \phi({\bf
y}^{\prime},{\bf Y}^{\prime})\rangle \cr
=\int d{\bf P}d{\bf P}^{\prime}d{\bf K}d{\bf K}^{\prime} \langle
\overline{\phi}({\bf x},{\bf P})\overline{\phi}({\bf
x}^{\prime},{\bf P}^{\prime}) \phi({\bf y},{\bf K}) \phi({\bf
y}^{\prime},{\bf K}^{\prime})\rangle \cr
\exp(i{\bf PX}+i{\bf P}^{\prime}{\bf X}^{\prime}-
i{\bf KY}-i{\bf K}^{\prime}{\bf Y}^{\prime})
\cr =\int d{\bf P}d{\bf K}\langle(-\triangle_{d}+{\bf
P}^{2}+h^{\mu\nu} P_{\mu}P_{\nu} )^{-1}({\bf x},{\bf y})
\cr
(-\triangle_{d}+{\bf K}^{2}+h^{\mu\nu}
K_{\mu}K_{\nu} )^{-1}({\bf x}^{\prime},{\bf y}^{\prime})\rangle
\exp(i{\bf P}({\bf X}-{\bf Y}))
\exp(i{\bf K}({\bf X}^{\prime}-{\bf Y}^{\prime}))
\cr
+ (x\rightarrow x^{\prime} )
\end{array}$$ The integration over the $D-d$ momenta ${\bf P}$ does not lead to any additional infinities. It follows that the divergences depend on the dimension $d$ and on the singularity of the metric field correlations. Hence, if $d<4$ and $h$ has the canonical short-distance behavior $\vert
{\bf x}-{\bf y}\vert^{-d+2}$, then there are no divergences at all.
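The restriction $d<4$ can be made plausible by a short power-counting estimate. This is our heuristic sketch, not part of the original derivation; $G_{0}$ denotes the free $d$-dimensional propagator at fixed ${\bf P}$:

```latex
% Heuristic power counting: at second order in h the self-energy kernel
% behaves at short distances like the product of one free propagator,
%   G_{0}({\bf x},{\bf y}) \sim \vert{\bf x}-{\bf y}\vert^{-(d-2)},
% and one metric correlation with the canonical behavior,
%   \langle h({\bf x})h({\bf y})\rangle \sim \vert{\bf x}-{\bf y}\vert^{-(d-2)}.
% Integrability of the coincidence region in d dimensions then requires
\int_{\vert {\bf x}-{\bf y}\vert \le 1} d^{d}x\,
\vert {\bf x}-{\bf y}\vert^{-2(d-2)} < \infty
\;\Longleftrightarrow\; 2(d-2) < d
\;\Longleftrightarrow\; d < 4 .
```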
Modified short distance behavior
=================================
The correlation functions (12) and (22) have a different scaling behavior in the ${\bf x}$ and ${\bf X}$ directions. We obtain a fixed scaling behavior either by setting ${\bf x}={\bf y}=0$ or by setting ${\bf X}={\bf Y}=0$. Assume that $h^{\mu\nu}$ is scale invariant at short distances $$h^{\mu\nu}({\bf x})\simeq \lambda^{2\gamma}h^{\mu\nu}(\lambda{\bf
x})$$ Then, by scaling (just as in [@jmp], although $g^{\mu\nu}=\eta^{\mu\nu} + h^{\mu\nu}$ is not scale invariant) and using the stochastic formula (18) we obtain $$\langle {\cal A}^{-1}(x,y)\rangle\simeq \vert
{\bf X}-{\bf Y}\vert^{-D+2-\frac{\gamma}{1-\gamma}(d-2)}$$ at short distances. This argument can be extended to all correlation functions showing that $$\phi(0,{\bf X})\simeq
\lambda^{\frac{D-2}{2}+\frac{\gamma}{1-\gamma}\frac{d-2}{2}}
\phi(0,\lambda{\bf X})$$ where the equivalence means that both sides have the same correlation functions at short distances.
For the behavior in the ${\bf x}$ direction we let $X=Y=0$. In such a case just by scaling of momenta in eq.(18) we can bring the propagator to the form $$\langle {\cal A}^{-1}(x,y)\rangle=\int_{0}^{\infty}d\tau
\tau^{-\frac{d}{2}-(D-d)(1-\gamma)/2}F_{2}(\tau^ {-\frac{1}{2}}({\bf
y} -{\bf x}))$$ Hence $$\langle {\cal A}^{-1}({\bf x},{\bf y})\rangle= R \vert
{\bf x}-{\bf y}\vert^{-d+2-(D-d)(1-\gamma)}$$ where $R$ is a constant. In general one can show similarly as in eq.(29) that $$\phi({\bf x},0)\simeq
\lambda^{\frac{d-2}{2}+\frac{(D-d)(1-\gamma)}{2}}
\phi(\lambda{\bf x},0)$$
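The constant $R$ and the exponent in eq.(31) follow from eq.(30) by an explicit change of variables; we spell out this routine step under the assumption that the $u$-integral below converges:

```latex
% Substitute \tau = \vert{\bf y}-{\bf x}\vert^{2} u in eq.(30),
% with a = -\frac{d}{2} - \frac{(D-d)(1-\gamma)}{2}:
\langle {\cal A}^{-1}({\bf x},{\bf y})\rangle
=\vert {\bf y}-{\bf x}\vert^{2+2a}
\int_{0}^{\infty} du\, u^{a}\,
F_{2}\big(u^{-\frac{1}{2}}{\bf n}\big),
\qquad
{\bf n}=\frac{{\bf y}-{\bf x}}{\vert {\bf y}-{\bf x}\vert},
% so that 2+2a = -d+2-(D-d)(1-\gamma), and
% R = \int_{0}^{\infty} du\, u^{a} F_{2}(u^{-1/2}{\bf n}),
% independent of the direction {\bf n} by rotation invariance.
```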
The results (29) and (32) can also be obtained without the stochastic representation (18). For the two-point function we write the expansion (6) in the (equivalent) form (21). Using the scale invariance (27) and changing the integration variables we can rewrite the perturbation series (21) in the form $$\begin{array}{l}
\langle {\cal A}^{-1}(x,y)\rangle = \lambda^{-2+d+(1-\gamma)(D-d)}
\int_{0}^{\infty}d\tau\int d{\bf P}\exp(i{\bf P}\lambda^{1-\gamma}
({\bf X}-{\bf Y})) \cr \exp(-i\tau \lambda^{-2\gamma}({\bf
P}^{2}+M^{2}))\sum_{n}f_{n}(\tau,{\bf P},\lambda({\bf y}-{\bf x}))
\end{array}$$ We set $\lambda=\vert {\bf y}-{\bf x}\vert^{-1}$ if ${\bf X}={\bf
Y}$ and $\lambda=\vert {\bf X}-{\bf Y}\vert^{-\frac{1}{1-\gamma}}
$ if ${\bf x}={\bf y}$. The formulas (28) and (31) follow under the assumption that, after setting ${\bf P}^{2}+M^{2}=0$ in eq.(33), the integral over $\tau$ of the sum of the $f_{n}$ is finite. In a similar way we can obtain the short distance behavior of higher order correlation functions. As an example we could consider a dimensional reduction of the gravitational action (a static approximation) from $d=4$ to $d=3$. Then, in the quadratic approximation to three-dimensional quantum gravity we obtain $\gamma=\frac{1}{4}$ and an explicit finite average over $h$ in eq.(18).
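For this dimensional-reduction example the anomalous exponents can be evaluated explicitly; the following is our arithmetic check of eqs.(29) and (32) with $d=3$ and $\gamma=\frac{1}{4}$:

```latex
% With \gamma = 1/4 one has \gamma/(1-\gamma) = 1/3, so eq.(29) gives
\phi(0,{\bf X})\simeq
\lambda^{\frac{D-2}{2}+\frac{1}{3}\cdot\frac{1}{2}}\,\phi(0,\lambda{\bf X})
=\lambda^{\frac{D-2}{2}+\frac{1}{6}}\,\phi(0,\lambda{\bf X}),
% while eq.(32) with d = 3 and 1-\gamma = 3/4 gives
\phi({\bf x},0)\simeq
\lambda^{\frac{1}{2}+\frac{3(D-3)}{8}}\,\phi(\lambda{\bf x},0).
```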
Discussion
==========
We have obtained formulas expressing quantum scalar field correlation functions by quantum gravitational correlation functions in a model where the metric field depends only on a lower dimensional submanifold. Such models can either be considered as an approximation to a realistic theory or they may come from a dimensional reduction of a higher dimensional Einstein gravity [@cr]- [@nicolai]. In either case the non-perturbative phenomena are essential for the short distance behavior. In this way, by a resummation of the perturbation series, we have confirmed our results [@PLB], which were based on a formal functional integral. The result is that a singular quantum gravitational field substantially modifies the short distance behavior of the matter fields. In particular, it can make the matter fields more regular.
[99]{} Z. Brzezniak and Z. Haba, Journ. Phys. [**A34**]{}, L139 (2001)
Z. Haba, Journ. Math. Phys. [**43**]{}, 5483 (2002)
R. d'Inverno, Introducing Einstein's Relativity, Clarendon Press, Oxford, 1992
D. Kramer, H. Stephani, M. MacCallum and E. Herlt, Exact Solutions of the Einstein Field Equations, Deutsch. Verlag Wiss., Berlin, 1980
R. Penrose, Rev. Mod. Phys. [**37**]{}, 215 (1965)
R. R. Metsaev and A. A. Tseytlin, Nucl. Phys. [**B625**]{}, 70 (2002)
Z. Haba, Journ. Math. Phys. [**35**]{}, 6344 (1994)
Z. Haba, The Feynman Integral and Random Dynamics in Quantum Physics, Kluwer, Dordrecht, 1999
N. Ikeda and S. Watanabe, Stochastic Differential Equations and Diffusion Processes, North Holland, 1981
E. Cremmer and B. Julia, Nucl. Phys. [**B159**]{}, 141 (1979)
H. Nicolai, in Recent Aspects of Quantum Fields, H. Mitter and H. Gauterer, Eds., Springer, Berlin, 1991
Z. Haba, Phys. Lett. [**B528**]{}, 129 (2002)
---
abstract: |
We analyze the mapping class group $\H_x(W)$ of automorphisms of the exterior boundary $W$ of a compression body $(Q,V)$ of dimension 3 or 4 which extend over the compression body. Here $V$ is the interior boundary of the compression body $(Q,V)$. Those automorphisms which extend as automorphisms of $(Q,V)$ rel $V$ are called discrepant automorphisms, forming the mapping class group $\H_d(W)$ of discrepant automorphisms of $W$ in $Q$. If $\H(V)$ denotes the mapping class group of $V$ we describe a short exact sequence of mapping class groups relating $\H_d(W)$, $\H_x(W)$, and $\H(V)$.
For an orientable, compact, reducible 3-manifold $W$, there is a canonical “maximal" 4-dimensional compression body whose exterior boundary is $W$ and whose interior boundary is the disjoint union of the irreducible summands of $W$. Using the canonical compression body $Q$, we show that the mapping class group $\H(W)$ of $W$ can be identified with $\H_x(W)$. Thus we obtain a short exact sequence for the mapping class group of a 3-manifold, which gives the mapping class group of the disjoint union of irreducible summands as a quotient of the entire mapping class group by the group of discrepant automorphisms. The group of discrepant automorphisms is described in terms of generators, namely certain “slide" automorphisms, “spins," and “Dehn twists" on 2-spheres.
Much of the motivation for the results in this paper comes from a research program for classifying automorphisms of compact 3-manifolds, in the spirit of the Nielsen-Thurston classification of automorphisms of surfaces.
author:
- Ulrich Oertel
bibliography:
- 'ReferencesUO3.bib'
date: 'January, 2006; revised June, 2007'
title: 'Mapping Class Groups of Compression Bodies and 3-Manifolds'
---
Introduction {#Intro}
============
A [*compression body*]{} is a manifold triple $(Q,W,V)$ of any dimension $n\ge 3$ constructed as $V\times I$ with 1-handles attached to $V\times 1$, and with $V=V\times 0$. Here $V$ is a manifold, possibly with boundary, of dimension $n-1\ge 2$. We use the symbol $A$ to denote $\bdry
V\times I$ and $W$ to denote $\bdry Q-(V\cup
\intr(A))$. See Figure \[MapCompression\]. When $W$ is closed, we can regard the compression body as a manifold pair, and even when $\bdry W\ne\emptyset$, we will often denote the compression body simply as $(Q,V)$.
[**Remark:**]{} In most applications, a compression body satisfies the condition that $V$ contains no sphere components, but we choose not to make this a part of the definition.
We note that, when $\bdry V\ne\emptyset$, it is crucial in this paper that there be a product “buffer zone" $A$ between $\bdry V$ and $\bdry W$. Some applications of the isotopy extension theorem would fail without this buffer.
The symbol $\H$ will be used to denote the mapping class group of orientation preserving automorphisms of an orientable manifold, a manifold pair, or a manifold triple. When $(Q,V)$ is a compression body, and $\bdry
V=\emptyset$, $\H(Q,V)$ is the mapping class group of the manifold pair; when $\bdry V\ne\emptyset$ by abuse of notation, we use $\H(Q,V)$ to denote $\H(Q, W,V)$.
[**Convention:**]{} Throughout this paper, manifolds will be compact and orientable; automorphisms will be orientation preserving.
\[BasicDefinitions\] Suppose $(Q,V)$ is a compression body. Let $\H_x (W)$ denote the image of the restriction homomorphism $\H(Q,V)\to \H(W)$, i.e. the subgroup of $\H(W)$ consisting of automorphisms of $W$ which extend over $Q$ to yield an element of $\H (Q,V)$. We call the elements of $\H_x (W)$ [*extendible*]{} automorphisms of $W$. Let $\H_d(W)$ denote the image of the restriction map $\H(Q
\rel V)\to \H(W)$, i.e. the subgroup of $\H_x(W)$ consisting of isotopy classes of automorphisms $f$ of $W$ which extend over $Q$ such that $f|_V$ becomes the identity. We call the elements of $\H_d (W)$ [*discrepant*]{} automorphisms of $W$.
A fundamental fact well known for compression bodies of dimension 3 is that an automorphism of $W$ which extends over a compression body $Q$ with $\bdry_eQ=W$ uniquely determines up to isotopy the restriction of the extended automorphism to $V=\bdry_iQ$. In general we have a weaker statement:
\[FundamentalTheorem\] Suppose $(Q,V)$ is a compression body of dimension $n\ge 3$. Suppose at most one component of $V$ is homeomorphic to a sphere $S^{n-1}$ and suppose each component of $V$ has universal cover either contractible or homeomorphic to $S^{n-1}$. If $f:(Q,V)\to (Q,V)$ is an automorphism with $f|_W$ isotopic to the identity, then:
\(a) [$f|_V$ is homotopic to the identity. It follows that for an automorphism $f$ of $(Q,V)$, $f|_W$ determines $f|_V$ up to homotopy.]{}
\(b) [If $Q$ is 3-dimensional, $f|_W$ determines $f|_V$ up to isotopy, and $\H_x(W)\approx \H(Q,V)$. ]{}
\(c) [If $Q$ is 4-dimensional, and each component of $V$ is irreducible, then $f|_W$ determines $f|_V$ up to isotopy.]{}
Suppose $(Q,V)$ is a compression body of dimension 3 or 4 satisfying the conditions of Theorem \[FundamentalTheorem\] (b) or (c). Then there is an [*eduction*]{} homomorphism $e: \H_x(W)\to \H(V)$. For $f\in \H_x(W)$, we extend $f$ to $\bar
f:(Q,V)\to (Q,V)$, then we restrict $\bar f$ to $V$ obtaining $e(f):V\to V$. By the theorem, $e$ is well-defined.
\[SequenceThm\] Suppose $(Q,V)$ is a compression body of dimension 3 or 4. Suppose at most one component of $V$ is a sphere. In case $Q$ has dimension 4, suppose every component of $V$ is a 3-manifold whose universal cover is contractible or $S^3$. Then the following is a short exact sequence of mapping class groups. $$1\to \H_d(W)\to \H_x(W)\xrightarrow{e} \H(V)\to 1.$$
The first map is inclusion, the second map is eduction.
There is a [*canonical 4-dimensional compression body*]{} $(Q,V)$ associated to a compact reducible 3-manifold $W$ without boundary spheres, see Section \[ThreeManifoldSection\]. It has the following properties.
\[UniquenessProposition\] The canonical compression body $(Q,V)$ associated to a compact reducible 3-manifold $W$ has exterior boundary $W$ and interior boundary the disjoint union of its irreducible summands. It is unique in the following sense: If $(Q_1,W,V_1)$ and $(Q_2,W,V_2)$ are two canonical compression bodies, and $v:V_1\to V_2$ is any homeomorphism, then there is a homeomorphism $g:(Q_1,V_1)\to (Q_2,V_2)$ with $g|_{V_1}=v$.
We will initially construct a canonical compression body using a “marking" by a certain type of system $\S$ of essential spheres in $W$, called a [*symmetric system*]{}. The result is the [*canonical compression body associated to $W$ with the symmetric system $\S$*]{}. The canonical compression body associated to $W$ with $\S$ is unique in a different sense: If $(Q_1,W,V_1)$ and $(Q_2,W,V_2)$ are canonical compression bodies associated to $W$ with $\S$, then the identity on $W$ extends to a homeomorphism $g:Q_1\to Q_2$, and $g|_{V_1}$ is unique up to isotopy. Only later will we recognize the uniqueness described in Proposition \[UniquenessProposition\] when we compare canonical compression bodies constructed from different symmetric systems of spheres.
There are well-known automorphisms of $W$ which, as we shall see, lie in $\H_d(W)$ with respect to the canonical compression body $Q$. The most important ones are called [*slides*]{}. Briefly, a slide is an automorphism of $W$ obtained by cutting on an essential sphere $S$ to obtain $W|S$ ($W$ cut on $S$), capping one of the duplicate boundary spheres coming from $S$ with a ball $B$ to obtain $W'$, sliding the ball along a closed curve in this capped manifold and extending the isotopy of $B$ to the manifold, then removing $B$ and reglueing on the two boundary spheres. An [*interchanging slide*]{} is done using two separating essential spheres cutting homeomorphic manifolds from $W$. Cut on the two spheres $S_1$ and $S_2$, remove the homeomorphic submanifolds, cap the two boundary spheres with two balls $B_1$ and $B_2$, choose two paths to move $B_1$ to $B_2$ and $B_2$ to $B_1$, extend the isotopy, remove the balls, and reglue. Interchanging slides are used to interchange prime summands. A [*spin*]{} does a “half rotation" on an $S^2\times S^1$ summand. In addition, for each embedded essential sphere, there is an automorphism of at most order 2 called a [*Dehn twist*]{} on the sphere. If slides, Dehn twists, interchanging slides, and spins are done using spheres from a given symmetric system $\S$ (or certain separating spheres associated to non-separating spheres of $\S$) then they are called [*$\S$-slides*]{}, [*$\S$-Dehn twists*]{}, [*$\S$-interchanging slides*]{}, and [*$\S$-spins*]{}. An $\S$-spin is a “half-Dehn-twist" on a separating sphere associated to a non-separating sphere of $\S$, not on a sphere of $\S$.
\[CharacterizationProp\] If $Q$ is the canonical compression body associated to the compact orientable, 3-manifold $W$ with symmetric system $\S$ and $W=\bdry_eQ$, $V=\bdry_iQ$, then
\(a) $\H_x(W)=\H(W)$.
\(b) The mapping class group $\H_d(W)$ is generated by $\S$-slides, $\S$-Dehn twists, $\S$-slide interchanges of $S^2\times S^1$ summands, and $\S$-spins.
Proposition \[CharacterizationProp\] (b) is a version of a standard result due to E. C[é]{}sar de S[á]{} [@EC:Automorphisms], see also M. Scharlemann in Appendix A of [@FB:CompressionBody], and D. McCullough in [@DM:MappingSurvey] (p. 69).
From Proposition \[CharacterizationProp\] and Theorem \[SequenceThm\] we obtain:
\[ThreeManifoldSequenceThm\] If $W$ is a compact, orientable, reducible 3-manifold, and $V$ is the disjoint union of its irreducible summands, then the following sequence is exact:
$$1\to \H_d(W)\to \H(W)\xrightarrow{e} \H(V)\to 1.$$
Here $\H_d(W)$ is the group of discrepant automorphisms for a canonical compression body, the first map is inclusion, and the second map is eduction with respect to a canonical compression body.
Using automorphisms of compression bodies, one can reinterpret some other useful results. Suppose $W$ is an $m$-manifold ($m=2$ or $m=3$) with some sphere boundary components. Let $V_0$ denote the manifold obtained by capping the sphere boundary components with balls and let $P$ denote the union of capping balls in $V_0$. There is another appropriately chosen canonical compression body $(Q,V)$ for $W$, such that $V$ is the disjoint union $V_0\sqcup P$. With respect to this compression body we have a group of discrepant automorphisms, $\H_d(W)$. We say the pair $(V_0, P)$ is a [*spotted manifold*]{}. The mapping class group of this spotted manifold, as a pair, is the same as $\H(W)$. The following result probably exists in the literature in some form and can be proven more directly with other terminology; for example, a special case is mentioned in [@JB:Braids]. The reason for including it here is to make an analogy with Theorem \[SequenceThm\]. Let $f:W\to W$ be an automorphism. Then $f$ clearly induces a [*capped automorphism*]{} $f_c:V_0\to V_0$: simply extend $f$ to the balls of $P$ to obtain $f_c$. Then $f_c$ is determined up to isotopy by $f$.
\[SpottedThm\] With $W$ and $(Q,V)$ as above, and $V_0$ a connected surface or a connected irreducible 3-manifold, there is an exact sequence
$$1\to \H_d(W)\to \H(W)\xrightarrow{e} \H(V_0)\times \H(P)\to 1.$$
$\H(P)$ is the group of permutations of the balls of $P$; $\H(V_0)$ is the mapping class group of the capped manifold. The group $\H_d(W)$ of discrepant automorphisms is the subgroup of automorphisms $f$ of $W$ which map each boundary sphere to itself such that the capped automorphism $f_c$ is isotopic to the identity. The map $\H_d(W)\to
\H(W)$ is inclusion. The eduction map $e:\H(W)\to \H(V_0)\times \H(P)$ takes $f\in \H(W)$ to $(f_c,\rho)$ where $\rho$ is the permutation of $P$ induced by $f$.
In Section \[Sequence\] we prove a lemma about mapping class groups of triples. This is the basis for the results about mapping class groups of compression bodies. Let $M$ be an $n$-manifold, and let $W$ and $V$ be disjoint compact submanifolds of any (possibly different) dimensions $\le n$. The submanifolds $W$ and $V$ need not be properly embedded (in the sense that $\bdry W\embed \bdry M$) but they must be submanifolds for which the isotopy extension theorem applies, for example smooth submanifolds with respect to some smooth structure. In this more general setting, we make some of the same definitions as in Definition \[BasicDefinitions\], as well as some new ones.
Let $(M,W,V)$ be a triple as above. We define $\H_x(V)$, the group of [*extendible automorphisms*]{} of $V$, as the image of the restriction map $\H(M,W,V)\to \H(V)$. Similarly, we define $\H_x(W,V)$, the group of [*extendible automorphisms*]{} of the pair $(W,V)$, as the image of the restriction map $\H(M,W,V)\to \H(W,V)$, and the group $\H_x (W)$ of [*extendible automorphisms*]{} of $W$ as the image of the restriction homomorphism $\H(M,W,V)\to \H(W)$. The group $\H_d(W)$ of [*discrepant*]{} automorphisms of $W$ is the image of the restriction map $\H((M,W) \rel V)\to \H(W)$.
\[SequenceGeneralLemma\] Suppose $(M,W,V)$ is a triple as above. The following is a short exact sequence of mapping class groups. $$1\to \H_d(W)\to \H_x(W,V)\to \H_x(V)\to 1.$$
The first map is inclusion of an element of $\H_d(W)$ extended to $(W,V)$ by the identity on $V$, the second map is restriction.
Let $M$ be an irreducible, $\bdry$-reducible 3-manifold. There is a 3-dimensional characteristic compression body embedded in $M$ and containing $\bdry M$ as its exterior boundary, first described by Francis Bonahon, which is unique up to isotopy, see [@FB:CompressionBody], or see [@UO:Autos], [@OC:Autos]. Removing this from $M$ yields an irreducible, $\bdry$-irreducible 3-manifold. The following is an application of Lemma \[SequenceGeneralLemma\] which describes how mapping class groups interact with the characteristic compression body.
\[MappingCharacteristicThm\] Let $M$ be an irreducible, $\bdry$-reducible 3-manifold with characteristic compression body $Q$ having exterior boundary $W=\bdry M$ and interior boundary $V$. Cutting $M$ on $V$ yields $Q$ and an irreducible, $\bdry$-irreducible manifold $\hat M$. There is an exact sequence
$$1\to \H_d(W)\to \H_x(W)\to \H_x(V)\to 1,$$
where $ \H_d(W)$ denotes the usual group of discrepant automorphisms for $Q$, $\H_x(W)$ denotes the group of automorphisms which extend to $M$ and $\H_x(V)$ denotes the automorphisms of $V$ which extend to $\hat M$.
To a large degree, this paper was motivated by a research program of the author and Leonardo N. Carvalho, to classify automorphisms of compact 3-manifolds in the spirit of the Nielsen-Thurston classification. This began with a paper of the author classifying automorphisms of 3-dimensional handlebodies and compression bodies, see [@UO:Autos], and continued with a paper of Carvalho, [@LNC:Tightness]. Recently, Carvalho and the author completed a paper giving a classification of automorphisms for compact 3-manifolds, see [@OC:Autos]. At this writing, the paper [@OC:Autos] needs to be revised to make it consistent with this paper. In the case of a reducible manifold $W$, the classification applies to elements of $\H_d(W)$ and $\H(V)$, where $V$ is the disjoint union of the irreducible summands of $W$, as above. The elements of $\H_d(W)$ are represented by automorphisms of $W$ each having support in a connected sum embedded in $W$ of $S^2\times S^1$’s and handlebodies. It is these automorphisms, above all, which are remarkably difficult to understand from the point of view of finding dynamically nice representatives. The theory developed so far describes these in terms of automorphisms of compression bodies and handlebodies. Throughout the theory, both in [@UO:Autos] and in [@OC:Autos], the notion of a “spotted manifold" arises as an irritating technical detail, which we point out, but do not fully explain. Theorem \[SpottedThm\] gives some explanation of the phenomenon.
I thank Leonardo N. Carvalho and Allen Hatcher for helpful discussions related to this paper.
Proof of Theorem \[FundamentalTheorem\]. {#Fundamental}
========================================
Suppose $(Q,V)$ is a compression body of dimension $n\ge 3$. We suppose that every component $V_i$ of $V$ has universal cover either contractible or homeomorphic to $S^{n-1}$. We suppose also that at most one component of $V$ is homeomorphic to a sphere. A compression body structure for $Q$ is defined by the system $\E$ of $(n-1)$-balls together with a product structure on some components of $Q|\E$, and an identification of each of the remaining components of $Q|\E$ with the $n$-ball. As elsewhere in this paper, we choose a particular type of compression body structure corresponding to a system of balls $\E$ such that exactly one duplicate $E_i'$ of a ball $E_i$ of $\E$ appears on $V_i\times 1$ for each product component of $V_i\times I$ of $Q|\E$. In terms of handles, this means we are regarding $Q$ as being obtained from $n$-balls and products by attaching just one 1-handle to each $V_i\times I$. For each component $V_i$ of $V$ we have an embedding $V_i\times I\embed Q$.
Suppose $f:(Q,V)\to (Q,V)$ is the identity on $W$. We will eventually show that for every component $V_0$ of $V$, $f|_{V_0}$ is homotopic to the identity on $V_0$ (via a homotopy of pairs preserving the boundary). If $V$ contains a single sphere component and $V_0$ is this sphere component, then clearly $f(V_0)=V_0$ and $f|_{V_0}$ is homotopic to the identity, since every degree 1 map of $S^{n-1}$ is homotopic to the identity.
If $V_0$ is not a sphere, we will begin by showing that $f(V_0)=V_0$. Consider the case where $\pi_1(V_0)$ is non-trivial. Clearly $\pi_1(Q)$ can be expressed as a free product of the $\pi_1(V_i)$’s and some $\integers$’s. If $f$ non-trivially permuted $V_i$’s homeomorphic to $V_0$, the induced map $f_*:\pi_1(Q)\to \pi_1(Q)$ would not be the identity, since it would conjugate factors of the free product. On the other hand $f_*$ must be the identity, since the inclusion $j:W\to
Q$ induces a surjection on $\pi_1$ and $f:W\to W$ induces the identity on $\pi_1$. Next we consider the case that $V_0$ is contractible. Then $\bdry V_0\ne \emptyset$. In this case also, we see that $f(V_0)=V_0$, since $f$ is the identity on $(\bdry V_0\times 1)\subset \bdry W$, and so $f(\bdry V_0)=\bdry V_0$. From our assumptions, it is always the case that $V_0$ is contractible, is a sphere, or has non-trivial $\pi_1$. We conclude that $f(V_0)=V_0$ in all cases.
It remains to show that if $V_0$ is not a sphere, then $f|_{V_0}$ is homotopic to the identity. When $Q$ has dimension 3, this is a standard result, see for example [@UO:Autos], [@KJ:Combinatorics3Mfds]. The method of proof, using “vertical essential surfaces" in $V_0\times
I$, depends on the fact that $V_0$ is not a sphere. In fact, the method shows that $f|_{V_0}$ is isotopic to the identity, which proves the theorem in this dimension.
Finally now it remains to show that $f|V_0$ is homotopic to the identity when $n\ge 4$, and $V_0$ is not a sphere. To do this, we will work in a cover of $Q$. We may assume $V_0$ is either finitely covered by the sphere, but not the sphere, or it has a contractible universal cover. We will denote by $\tilde Q_0$ the cover of $Q$ corresponding to the subgroup $\pi_1(V_0)$ in $\pi_1(Q)$. The compression body $Q$ can be constructed from a disjoint union of connected product components and balls, joined by 1-handles. Covering space theory then shows that the cover $\tilde Q_0$ can also be expressed as a disjoint union of connected products and balls, joined by 1-handles. The balls and 1-handles are lifts of the balls and 1-handles used to construct $Q$. The products involved are: one product of the form $V_0\times I$, products of the form $S^{n-1}\times I$, and products of the form $U\times I$, where $U$ is contractible, these being connected components of the preimages of the connected products used to construct $Q$. There is a distinguished component of the preimage of $V_0$ in $\tilde Q_0$ which is homeomorphic to $V_0$; we denote it $V_0$ rather than $\tilde V_0$, since the latter would probably be interpreted as the entire preimage of $V_0$ in $\tilde Q_0$. If $V_0$ has non-trivial fundamental group, then this is the only component of the preimage of $V_0$ homeomorphic to $V_0$, otherwise it is distinguished simply by the construction of the cover corresponding to $\pi_1(V_0)$, via choices of base points. Now let $\breve Q_0$ denote the manifold obtained from $\tilde Q_0$ by capping all $S^{n-1}$ boundary spheres by $n$-balls. Evidently, $\breve Q_0$ has the homotopy type of $V_0$. This can be seen, for example, by noting that $\tilde Q_0$ is obtained from $V_0\times I$ and a collection of $n$-balls by using 1-handles to attach universal covers of $V_i\times
I$, $i\ne 0$. The universal covers of $V_i\times I$ have the form $S^{n-1}\times I$ or $U\times I$ where $U$ is contractible. When we cap the boundary spheres of $\tilde Q_0$ to obtain $\breve Q_0$, the $(S^{n-1}\times I)$’s are capped and become balls.
Having described the cover $\tilde Q_0$, and the capped cover $\breve Q_0$, we begin the argument for showing that if $f:(Q,V)\to (Q,V)$ is an automorphism which extends the identity on $W$, and $V_0$ is a non-sphere component of $V$, then $f|_{V_0}$ is homotopic to the identity. We have chosen a compression body structure for $Q$ in terms of $\E$, the product $V\times I$, and the collection of balls of $Q|\E$. Using the automorphism $f$ we obtain another compression body structure: $Q|f(\E)$ contains the product $f(V\times I)$, which includes $f(V_0\times I)$ as a component. Recall that we chose $\E$ such that exactly one duplicate $E_0'$ of exactly one ball $E_0$ of $\E$ appears on $V_0\times I$. We can lift the inclusion map $i:V_0\times I\to Q$ to an embedding $\tilde i: V_0\times I\embed
\tilde Q_0\subset \breve Q_0$. Again we abuse notation by denoting the image product as $V_0\times I$, and we use $E_0$ to denote the lift of $E_0$ as well. Similarly we lift $f\circ i:V_0\times I\to
Q$ to an embedding $\widetilde{f\circ i}:V_0\times I\to
\tilde Q_0\subset \breve Q_0$.
The lifts $\tilde i$ and $\widetilde
{f\circ i}$ coincide on $(V_0\times 1)-\intr(E_0)$. In other words, $\tilde f$ is the identity on $(V_0\times 1)-\intr(E_0)$.
We consider two cases:
[*Case 1: $\bdry V_0\ne \emptyset$.*]{} The lift $\tilde f$ of $f$ must take $\bdry (V_0\times I)-\intr(E_0)$ in $\tilde Q_0$ to itself by uniqueness of lifting, since in this case $\bdry (V_0\times I)-\intr(E_0)$ is connected.
[*Case 2: $V_0$ has nontrivial fundamental group.*]{} In this case, lifting $f\circ i$ and $i$, we see that $\widetilde {f\circ
i}(V_0\times 1-E_0)$ and $\tilde {i}(V_0\times 1-E_0)$ are components in the cover $\tilde
Q_0$ of the preimage of $V_0\times 1-E_0\subset W\subset Q$. From the description of the cover $\tilde Q_0$, there is only one such component homeomorphic to $V_0-E_0$, hence both lifts must map $(V_0\times 1)-E_0$ to the same component, and the lifts $\widetilde{f\circ i}$ and $\tilde i$ must coincide on $(V_0\times 1)-\intr(E_0)$ by uniqueness of lifting.
These two cases cover all the possibilities for $V_0$, for if $V_0$ has trivial fundamental group, then $V_0$ has a contractible cover by assumption, since we are ruling out the case $V_0$ a sphere. But this implies the boundary is non-empty, and we apply Case 1. Otherwise, we apply Case 2.
This completes the proof of the claim.
Following our convention of writing $V_0\times I$ for $\tilde i(V_0\times I)$, we now conclude that though in general $\tilde i|_{E_0}\ne
\widetilde {f\circ i}|_{E_0}$, these lifts are equal on $\bdry E_0$.
The map $\widetilde{f\circ i}|_{E_0}$ is homotopic rel boundary to $\tilde i|_{E_0}$ in $\breve Q_0$.
We consider two cases; in Case 1 the universal cover of $V_0$ is contractible and in Case 2 the universal cover of $V_0$ is $S^{n-1}$. Let $\tilde W_0$ denote the preimage of $W$ in $\tilde Q_0\subset \breve Q_0$, let $C\embed \tilde W_0$ denote $\bdry E_0$, an $(n-2)$-sphere, and choose a base point $w_0$ in $C$.
[*Case 1: The universal cover of $V_0$ is contractible.*]{} In this case $\breve Q_0$ deformation retracts to $V_0\subset V_0\times I$ and the universal cover $\widetilde{\breve Q}_0$ of $\breve Q_0$ is contractible, by the discussion above. Hence all higher homotopy groups of $\breve Q_0$ are trivial. Using a trivial application of the long exact sequence for relative homotopy groups we obtain:
$$\cdots \pi_{n-1}(\breve Q_0,w_0)\to \pi_{n-1}(\breve Q_0,C,w_0)\to \pi_{n-2}(C,w_0)\to \pi_{n-2}(\breve
Q_0,w_0)\cdots$$
or
$$\cdots 0\to \pi_{n-1}(\breve Q_0,C,w_0)\to\integers\to 0 \cdots,$$
which implies that all maps of $(n-1)$-balls to $(\breve Q_0,C)$ which are degree one on the boundary are homotopic.
[*Case 2: The universal cover of $V_0$ is $S^{n-1}$ (but $V_0$ is not homeomorphic to $S^{n-1}$).*]{} In this case again $\breve
Q_0$ deformation retracts to $V_0\subset V_0\times I\subset \breve Q_0$ by the discussion above. The universal cover $\widetilde{\breve Q}_0$ of $\breve Q_0$ is homotopy equivalent to $S^{n-1}$, and it has the structure of $S^{n-1}\times I$ attached by 1-handles to products of the form $U\times I$ with $U$ contractible, and $n$-balls, some of the $n$-balls coming from capped preimages of $V_i\times I$ where $V_i$ is covered by $S^{n-1}$. Thus the higher homotopy groups of $\breve Q_0$ are the same as those of $\widetilde{\breve Q}_0$ or $S^{n-1}$. Again using an application of the long exact sequence for relative homotopy groups we obtain:
$$\cdots\pi_{n-1}(C,w_0)\to \pi_{n-1}(\breve Q_0,w_0)\to \pi_{n-1}(\breve Q_0,C,w_0)\to \pi_{n-2}(C,w_0)\to
\pi_{n-2}(\breve Q_0,w_0)\cdots$$
Here, $\pi_{n-1}(C,w_0)$ need not be trivial, but $C$ bounds a ball in $\breve Q_0$, hence the image of $\pi_{n-1}(C,w_0)$ in $\pi_{n-1}(\breve Q_0,w_0)$ is trivial. On the other hand, $\pi_{n-2}(\breve Q_0,w_0)=0$ since $\breve Q_0$ has the homotopy type of $S^{n-1}$. Thus we obtain the sequence:
$$\cdots 0\to\integers \to \pi_{n-1}(\breve Q_0,C,w_0)\to \integers\to 0\cdots$$
This sequence splits, since we can define a homomorphism $\pi_{n-2}(C,w_0)\to \pi_{n-1}(\breve
Q_0,C,w_0)$ which takes the generator $[C]$ to the class $[E_0]$, and the composition with the boundary operator is the identity. Thus $\pi_{n-1}(\breve Q_0,C,w_0)\approx \integers\oplus \integers$. At first glance it appears that the lift $\tilde f$ of $f$ could take the class $\beta= [\tilde i|_{E_0}]$ to a different class obtained by adding a multiple of the image of the generator of $\pi_{n-1}(\breve Q_0,w_0)$. We will show this is impossible.
If the universal cover $S^{n-1}$ of $V_0$ is a degree $s$ finite cover, then we observe that the generator $\alpha$ of $\pi_{n-1}(\breve Q_0,w_0)$ is represented by a degree $s$ covering map $\varphi_1:S^{n-1}\to \tilde i(V_0\times 1)$, whose image includes $s$ copies of $E_0$. Specifically, $\varphi_1$ is the covering map $\widetilde{\breve Q}_0\to \breve Q_0$ restricted to the lift to $\widetilde{\breve Q}_0$ of $V_0\times 1=\tilde
i(V_0\times 1)$. The map $\varphi_1$ is homotopic to a similar map $\varphi_0:S^{n-1}\to \tilde i(V_0\times 0)$. Since $\tilde f$ preserves orientation on $V_0\times 0$, it takes the homotopy class of $\varphi_0$ to itself, so $\tilde f_*(\alpha)=\alpha$ in $\pi_{n-1}(\breve Q_0)$. It follows that $\tilde f$ also preserves the homotopy class of $\varphi_1$ viewed as an element of $\pi_{n-1}(\breve Q_0,C,w_0)$, since $\tilde f(C)=C$. We denote the image of $\alpha$ in the relative homotopy group by the same symbol $\alpha$. The preimage $\varphi_1\inverse(E_0)$ consists of $s$ discs. Corresponding to the decomposition of $S^{n-1}$ into these $s$ discs and the complement $X$ of their union, there is an equation in $\pi_{n-1}(\breve Q_0,C,w_0)$: $\alpha=x+s\beta$, where x is represented by $\varphi_1|_X$ and $\beta=[\tilde i|_{E_0}]$. To interpret $\varphi_1|_X$ as an element of relative $\pi_{n-1}$, we can use the relative Hurewicz theorem, which says that $\pi_{n-1}(\breve Q_0,C,w_0)\approx H_{n-1}(\tilde Q_0,C)$.
Applying $\tilde f$ to the equation $\alpha=x+s\beta$, we obtain the equation $\tilde f_*(\alpha)=\tilde f_*(x)+s\gamma$, where $\gamma=\tilde f_*(\beta)=[\widetilde
{f\circ i}|_{E_0}]$. Since ${\tilde f}_*(\alpha)=\alpha$, and also $\tilde f_*(x)=x$ since $\tilde f$ is the identity on $(V_0\times 1)-E_0$, we obtain the equation $\alpha=x+s\gamma$. Combining with the earlier equation $\alpha=x+s\beta$, we obtain $s\beta=s\gamma$. Since this holds in the torsion free group $\integers\oplus \integers$, it follows that $\beta=\gamma$.
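For the reader's convenience, the computation of the preceding two paragraphs, carried out in the torsion-free group $\pi_{n-1}(\breve Q_0,C,w_0)\approx \integers\oplus\integers$, can be collected into a single chain; this is only a restatement, with the fact used at each step indicated on the right:

```latex
$$\begin{aligned}
\alpha &= x + s\beta
  &&\text{(decomposition of $\varphi_1$ along the $s$ discs)}\\
\tilde f_*(\alpha) &= \tilde f_*(x) + s\,\tilde f_*(\beta) = x + s\gamma
  &&\text{($\tilde f_*(x)=x$, since $\tilde f=\id$ on $(V_0\times 1)-E_0$)}\\
\alpha &= x + s\gamma
  &&\text{($\tilde f_*(\alpha)=\alpha$)}\\
s\beta &= s\gamma \ \Longrightarrow\ \beta=\gamma
  &&\text{($\integers\oplus\integers$ is torsion free)}
\end{aligned}$$
```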
This completes the proof of the claim.
From the claim, we now know that $\widetilde {f\circ i}|_{E_0}$ is homotopic to $\tilde i|_{E_0}$. We perform the homotopy rel boundary on $\widetilde {f\circ i}|_{E_0}$ and extend to $V_0\times I$ rel $[\bdry (V_0\times I)-E_0]$. We obtain a map $\tilde i':V_0\times I\to \breve Q_0$ homotopic to $\widetilde {f\circ i}$ rel $(\bdry (V_0\times
I)-E_0)$ such that $\tilde i=\tilde i'$ on $V_0\times 1$.
Pasting the two maps $\tilde i$ and $\tilde i'$ so that the domain is two copies of $V_0\times I$ doubled on $V_0\times 1$, we obtain a new map $H:V_0\times I\to \breve Q_0$ with $H|_{V_0\times 0}=\id$ and $H|_{V_0\times 1}=\tilde f|_{V_0\times 0}$.
Finally $H$ can be homotoped rel $(V_0\times \bdry I)$ to $ V_0\subset \breve Q_0$ by applying a deformation retraction from $\breve Q_0$ to $V_0=V_0\times 0$. After this homotopy of $H$, we have $H:V_0\times I\to V_0$ a homotopy in $V_0$ from the identity to $f$. In case $\bdry V_0\ne \emptyset$ in the above argument (for homotoping $H$ to $V_0$), one must perform the homotopy as a homotopy of pairs, or first homotope $H|_{\bdry V_0\times I}$ to $\bdry V_0$. Whenever $\bdry V_0\ne \emptyset$, our construction gives a homotopy of pairs $(f,\bdry f)$ on $(V_0,\bdry V_0)$.
For $Q$ of dimension 4, with components of $V$ irreducible, Grigori Perelman’s proof of Thurston’s Geometrization Conjecture shows that the universal cover of each closed component of $V$ is homeomorphic to $\reals^3$ or $S^3$. Components of $V$ which have non-empty boundary are Haken, possibly $\bdry$-reducible, and also have contractible universal covers. By the first part of the lemma, it follows that $f|_V$ is uniquely determined up to homotopy. But it is known that automorphisms of such 3-manifolds which are homotopic (homotopic as pairs) are isotopic, see [@FW:SufficientlyLarge], [@HS:HomotopyIsotopy], [@BO:HomotopyIsotopy], [@BR:HomotopyIsotopy], [@DG:Rigidity], [@GMT:HomotopyHyperbolic]. It follows that $f|_V$ is determined by $f|_W$ up to isotopy.
Proof of Theorem \[SequenceThm\]. {#Sequence}
=================================
We begin with a basic lemma which is a consequence of the isotopy extension theorem.
Let $M$ be an $n$-manifold, and let $W$ and $V$ be disjoint compact submanifolds of any (possibly different) dimensions $\le n$. The submanifolds $W$ and $V$ need not be properly embedded but they must be submanifolds for which isotopy extension holds, e.g. smooth submanifolds. Various mapping class groups of the triple $(M,W,V)$ were defined in the introduction, before the statement of Lemma \[SequenceGeneralLemma\], which we now prove.
We need to prove the exactness of the sequence $$1\to \H_d(W)\to \H_x(W,V)\to \H_x(V)\to 1,$$
where the first map is inclusion of an element of $\H_d(W)$, extended to $(W,V)$ by the identity on $V$, and the second map is restriction. The restriction map takes the group $\H_x(W,V)$ to $\H_x(V)$ and is clearly surjective. It remains to show that the kernel of this restriction map is $\H_d(W)$. If $(f,g)\in \H_x(W,V)$ maps to the identity in $\H_x(V)$, then $g$ is isotopic to the identity. By isotopy extension, we can assume it actually is the identity. Thus the kernel consists of elements $(f,\id)\in \H_x(W,V)$ such that $(f,\id)$ extends over $M$, which is the definition of $\H_d(W)$.
We note that $ \H(W,V)\approx \H(W)\times \H(V)$ is a product, but this does not show that the short exact sequence of Lemma \[SequenceGeneralLemma\] splits, or that the group $\H_x(W,V)$ is a product.
\[Proof of Theorem \[SequenceThm\]\] We apply Lemma \[SequenceGeneralLemma\] to the triple $(Q,V,W)$, obtaining the exact sequence $$1\to \H_d(W)\to \H_x(W,V)\to \H_x(V)\to 1.$$ Given an element $f\in \H_x(W,V)$, this extends to an automorphism of $(Q,V)$, and by Theorem \[FundamentalTheorem\], $f|_V$ is determined up to isotopy, so $\H_x(W,V)$ can be replaced by $\H_x(W)$.
To see that $\H_x(V)$ in the sequence can be replaced by $\H(V)$, we must show that every element of $\H(V)$ is realized as the restriction of an automorphism of $(Q,V)$. We begin by supposing $f$ is an automorphism of $V$. Observe that $(Q,V)$ can be represented as a (connected) handlebody $H$ attached to $V\times 1\subset V\times I$ by 1-handles, with one 1-handle connecting $H$ to each component of $V\times 1$. In other words, if $V$ has $c$ components, we obtain $Q$ by identifying $c$ disjointly embedded balls in $\bdry H$ with $c$ balls in $V\times 1$, one in each component of $V\times 1$. Denote the set, or union, of balls in $\bdry H$ by $\B_0$ and the set, or union, of balls in $V\times 1$ by $\B_1$. There is a homeomorphism $\phi:\B_0\to \B_1$ which we use as a glueing map. Now $(f\times \id)(\B_1)$ is another collection of balls in $V\times 1$, again with the property that there is exactly one ball in each component of $V\times 1$. We can then isotope $f\times \id$ to obtain $F:V\times I\to V\times I$ such that $F(\B_1)=\B_1$. Then $F$ permutes the balls of $\B_1$, and $\phi\inverse\circ F\circ \phi$ permutes the balls of $\B_0$. We use isotopy extension to obtain $G:H\to H$ with $G|_{\B_0}=\phi\inverse\circ F\circ \phi$. Finally, we can combine $F$ and $G$ by identifying $F(\B_1)$ with $G(\B_0)$ to obtain an automorphism $g:(Q,V)\to (Q,V)$ with $g|_V=f$.
Using ideas in the above proof, we can also prove Theorem \[MappingCharacteristicThm\].
We apply Lemma \[SequenceGeneralLemma\], letting $W=\bdry M=\bdry_eQ$ and $V=\bdry_iQ$, where $Q$ is the characteristic compression body for $M$. Thus we know that the sequence $$1\to \H_d(W)\to \H_x(W,V)\to \H_x(V)\to 1$$ is exact.
In the statement of the theorem, $\H_x(V)$ is defined to be the group of automorphisms of $V$ which extend to $\hat M$, whereas in Lemma \[SequenceGeneralLemma\], it is defined to be the group of automorphisms of $V$ which extend to maps of the triple $(M,W,V)$. In fact, these are the same, since any automorphism of $V$ also extends to $Q$, as we showed in the above proof of Theorem \[SequenceThm\].
To verify the exactness of the sequence in the statement, it remains to show that $\H_x(W,V)$ is the same as $\H_x(W)$. To obtain an isomorphism $\phi:\H_x(W,V)\to\H_x(W)$, we simply restrict to $W$, $\phi(h)=h|_W$. Given an automorphism $f$ of $\H_x(W)$ we know there is an automorphism $g$ of $M$ which extends $f$. Since the characteristic compression body is unique up to isotopy, we can, using isotopy extension, replace $g$ by an isotopic automorphism preserving the pair $(W,V)$, and thus preserving $Q$, then restrict $g$ to $(W,V)$ to obtain an automorphism $\psi(f)$ of the pair. This is well-defined by the well-known case of Theorem \[FundamentalTheorem\]; namely, for an automorphism of a 3-dimensional compression body, the restriction to the exterior boundary determines the restriction to the interior boundary up to isotopy. Clearly $\phi\circ\psi$ is the identity. Now using Theorem \[FundamentalTheorem\] again, we can check that $\psi\circ \phi$ is the identity.
[The Canonical Compression Body]{} \[ThreeManifoldSection\]
===========================================================
Let $W$ be a compact 3-manifold with no sphere boundary components. A [*symmetric system*]{} $\S$ of disjointly embedded essential spheres is a system with the property that $W|\S$ ($W$ cut on $\S$) consists of one ball-with-holes, $\hat V_*$, and other components $\hat V_i$, $i=1,\ldots k$, bounded by separating spheres $S_i$ of $\S$, each of which is an irreducible 3-manifold $V_i$ with one open ball removed. $\hat V_*$ has one boundary sphere $S_i$ for each $V_i$, $i=1,\ldots k$, and in addition it has $2\ell$ more spheres, two for each non-separating $S_i$ of $\S$. If $W$ has $\ell$ $S^2\times S^1$ prime summands, and $k$ irreducible summands, then there are $\ell$ non-separating $S_i$’s in $\S$ and $k$ separating spheres $S_i$ in $\S$. The symbol $\S$ denotes the set or the union of the spheres of $\S$.
We denote by $\S'$ the set of duplicates of spheres of $\S$ appearing on $\bdry \hat V_*$. Choose an orientation for $\hat V_*$. Each sphere of $\S'$ corresponds either to a separating sphere $\S$, which obtains an orientation from $\hat V_*$, or it corresponds to a non-separating sphere of $\S$ with an orientation.
We construct a [*canonical compression body*]{} associated to the 3-manifold $W$ together with a symmetric system $\S$ by thickening to replace $W$ by $W\times I$, then attaching a 3-handle along $S_i\times 0$ for each $S_i\in \S$ to obtain a 4-manifold with boundary. Thus for each 3-handle attached to $S_i\times 0$, we have a 3-ball $E_i$ in the 4-manifold with $\bdry E_i=S_i$. The boundary of the resulting 4-manifold consists of $W$, one 3-sphere $V_*$, and the disjoint union of the irreducible non-sphere $V_i$’s. We cap the 3-sphere $V_*$ with a 4-ball to obtain the [*canonical compression body*]{} $Q$ associated to $W$ marked by $\S$. Note that the exterior boundary of $Q$ is $W$ and the interior boundary is $V=\sqcup V_i$.
For each non-separating sphere $S$ in $\S$ we have an [*associated separating sphere*]{} $C(S)$, where $C(S)$ lies in $\hat V_*$ and separates a 3-holed ball containing the duplicates of $S$ coming from cutting on $S$. Also, we define $ C(E)$ to be the obvious 3-ball in $Q$ bounded by $C(S)$.
We have defined the canonical compression body as an object associated to the 3-manifold $W$ [*together with*]{} the system $\S$. This will be our working definition. Later, we prove the uniqueness result of Proposition \[UniquenessProposition\], which allows us to view the canonical compression body as being associated to $W$.
In some ways, the dual construction of the canonical compression body may be easier to understand: Let $V_i$, $i=1\ldots k$ be the irreducible summands of $W$. Begin with the disjoint union of the $V_i\times I$ together with a 4-ball $Z$. For each $i$, identify a 3-ball $E_i'$ in $V_i\times 1$ with a 3-ball $E_i''$ in $\bdry Z$ to obtain a ball $E_i$ in the quotient space. Then, for each $S^2\times S^1$ summand of $W$ identify a 3-ball $E_j'$ in $\bdry Z$ with another 3-ball $E_j''$ in $\bdry Z$ to obtain a disc $E_j$ in the quotient. (Of course new $E_j'$’s and $E_j''$’s are chosen to be disjoint in $\bdry Z$ from previous ones.) The result of all the identifications is the canonical compression body $Q$ for $W$ with balls $E_i$ properly embedded. We denote by $\E$ the union of $E_i$’s. The system of spheres $\S=\bdry \E$ is a symmetric system.
We now describe uniqueness for compression bodies associated to a 3-manifold $W$ with a symmetric system.
\[UniquenessProp\] If the canonical compression bodies $(Q_1,V_1)$ and $(Q_2,V_2)$ are associated to a compact 3-manifold $W$ with symmetric systems $\S_1$ and $\S_2$ respectively, and $f:W\to W$ is any automorphism with $f(\S_1)=\S_2$, then there is a homeomorphism $g:(Q_1,V_1)\to (Q_2,V_2)$ with $g|_W=f$. Further, $g|_{V_1}$ is determined up to isotopy.
In particular, if $(Q_1,V_1)$ and $(Q_2,V_2)$ are canonical compression bodies associated to $W$ with the system $\S$, then there is a homeomorphism $g:(Q_1,V_1)\to (Q_2,V_2)$ with $g|_W=\text{id}$. Further, $g|_{V_1}$ is determined up to isotopy.
Starting with $W\times I$, if we attach 3-handles along the spheres of $\S_1\times 0$, then cap with a 4-ball we obtain $Q_1$. Similarly we obtain $Q_2$ from $W\times I$ and $\S_2\times 0$. Starting with $f\times \text{id}:W\times I\to W\times I$, we can then construct the homeomorphism $g$ by mapping each 3-handle attached on a sphere of $\S_1$ to a 3-handle attached to the image sphere in $\S_2$. Finally we map the 4-ball in $Q_1$ to the 4-ball in $Q_2$. This yields $g$ with $g|_W=f$. The uniqueness up to isotopy of $g|_{V_1}$ follows from Theorem \[FundamentalTheorem\].
Letting $\S_1=\S_2$ and $f=\text{id}$, we get the second statement.
\[BoundingLemma\] The canonical compression body $(Q,V)$ associated to a compact 3-manifold $W$ with symmetric system $\S$ has the property that every 2-sphere $S\embed W$ bounds a 3-ball properly embedded in $Q$.
Make $S$ transverse to $\S$ and consider a curve of intersection innermost in $S$ and bounding a disc $D$ whose interior is disjoint from $\S$. If $D$ lies in a holed irreducible $\hat V_i$, $i\ge 1$, then we can isotope to reduce the number of curves of intersection. If $D$ lies in $\hat V_*$, then $D$ union a disc $D'$ in $\S$ forms a sphere bounding a 3-ball in $Q$. (The 3-ball is embedded in the 4-ball $Z$.) We replace $S$ by $S'=S\cup D'-D$, which can be pushed slightly to have fewer intersections with $\S$ than $S$. Now if $S'$ bounds a 3-ball, so does $S$, so by induction on the number of circles of intersection of $S$ with $\S$, we have proved the lemma if we can start the induction by showing that a 2-sphere $S$ whose intersection with $\S$ is empty must bound a ball in $Q$. If such a 2-sphere $S$ lies in $\hat V_*$, then it is obvious that it bounds a 3-ball in $Z$. If it lies in $\hat V_i$ for some $i$, then it must be inessential or isotopic to $S_i$, hence it also bounds a ball (isotopic to $E_i$ if $S$ is isotopic to $S_i$).
Let $W$ be a reducible 3-manifold. Let $S$ be an essential sphere in $W$ from a symmetric system $\S$. Or let $S$ be a sphere associated to a non-separating sphere in $\S$. An [*$\S$-slide automorphism*]{} of $W$ is an automorphism obtained by cutting on $S$, capping one of the resulting boundary spheres of $W|S$ with a ball $B$ to obtain $W'$, isotoping the ball along a simple closed curve $\gamma$ in $W'$ returning the ball to itself, extending the isotopy to $W'$, removing the interior of the ball, and reglueing the two boundary spheres of the manifold thus obtained from $W'$. We emphasize that when $S$ separates a holed irreducible summand, either that irreducible summand can be replaced by a ball or the remainder of the manifold can be replaced by a ball. An $\S$-slide automorphism on a sphere $S$ can be realized as the boundary of a [*$\E$-slide automorphism*]{} of $Q$ rel $V$: cut $Q$ on the ball $B$ bounded by $S$ ($B$ is either an $E$ in $\E$ or it is a ball $C(E)$ associated to a non-separating ball $E$ in $\E$) to obtain a compression body $Q'$ with two spots $B'$ and $B''$, duplicate copies of $B$. Slide the duplicate $B'$ in $\bdry_eQ'$ along a closed curve $\gamma$ in $\bdry_e Q'$, extending the isotopy to $Q'$. Then reglue $B'$ to $B''$ to recover the compression body $Q$.
There is another kind of automorphism, called an [*$\S$-interchanging slide*]{} defined by cutting $W$ on two separating spheres $S_1$ and $S_2$, either both in $\S$ cutting off holed irreducible summands $\hat V_1$ and $\hat V_2$ from the remainder $\hat W_0$ of $W$, or both spheres associated to different non-separating $S$ in $\S$ and cutting off holed $S^2\times S^1$ summands $\hat V_1$ and $\hat V_2$ from $W$. Cap both boundary spheres in $\hat W_0$ and slide the capping balls along two paths from one ball to the other, interchanging them. Then reglue. The interchanging slide can also be realized as the boundary of an [*$\E$-interchanging slide*]{} of $Q$ rel $V$. We will need to distinguish [*$\S$-interchanging slides of $S^2\times S^1$ summands*]{}.
We define a further type of automorphism of $W$ or $Q$. It is called an $\S$-[*spin*]{}. Let $S$ be a non-separating sphere in $\S$. We do a half Dehn twist on the separating sphere $C(S)$ to map $S$ to itself with the opposite orientation. Again, this spin can be realized as a half Dehn twist on the 3-ball $C(E)$ bounded by $C(S)$ in $Q$, which is an automorphism of $Q$ rel $V$.
Finally, we consider Dehn twists on spheres of $\S$. These are well known to have order at most two in the mapping class group, and we call them [*$\S$-Dehn twists*]{}. These also are boundaries of [*$\E$-Dehn twists*]{}, automorphisms of $Q$ rel $V$.
Slides, spins, Dehn twists on non-separating spheres, and interchanging slides of $S^2\times S^1$ summands all have the property that they extend over $Q$ as automorphisms rel $V$, so they represent elements of $\H_d(W)$.
We have defined $\S$-slides, and other automorphisms, in terms of the system $\S$ in order to be able to describe the corresponding $\E$-slides in the canonical maximal compression body $Q$ associated to $W$ with the marking system $\S$. If we have not chosen a system $\S$ with the associated canonical compression bodies, there are obvious definitions of [*slides*]{} (or interchanging slides, Dehn twists, or spins), without the requirement that the essential spheres involved be in $\S$ or be associated to spheres in $\S$.
Let $\S_1$ and $\S_2$ be two symmetric systems for a 3-manifold $W$ and let $\S_1'$ and $\S_2'$ denote the corresponding sets of duplicates appearing as boundary spheres of the holed spheres $\hat V_{1*}$ and $\hat V_{2*}$, which are components of $W|\S_1$ and $W|\S_2$ respectively. Then an [*allowable assignment*]{} is a bijection $a:\S_1'\to \S_2'$ such that $a$ assigns to a duplicate of a non-separating sphere $S$, another duplicate of a non-separating sphere $a(S)$, and it assigns to the duplicate in $\S_1'$ of a separating sphere of $\S_1$ a duplicate in $\S_2'$ of a separating sphere of $\S_2$ cutting off a homeomorphic holed irreducible summand.
If an automorphism $f:W\to W$ satisfies $f(\S_1)=\S_2$, then $f$ induces an allowable assignment $a:\S_1'\to \S_2'$.
The following is closely related to results due to E. C[é]{}sar de S[á]{} [@EC:Automorphisms], see also M. Scharlemann in Appendix A of [@FB:CompressionBody], and D. McCullough in [@DM:MappingSurvey] (p. 69).
\[DiscrepantLemma\] Suppose $\S$ is a symmetric system for $W$ and $Q$ the corresponding compression body. Given any symmetric system $\S^\sharp$ for $W$, and an allowable assignment $a:\S'\to {\S^\sharp}'$, there is a composition $f$ of $\S$-slides, $\S$-slide interchanges, and $\S$-spins such that $f(\S)=\S^\sharp$, respecting orientation and inducing the map $a:\S'\to {\S^\sharp}'$. The composition $f$ extends over $Q$ as a composition of $\E$-slides, $\E$-slide interchanges and $\E$-spins, all of which are automorphisms of $Q\rel V$.
$W$ cut on the system $\S$ is a manifold with components $\hat V_i$ and $\hat V_*$; $W$ cut on the system $\S^\sharp$ is a manifold with components $\hat V_i^\sharp$ and $\hat V_*^\sharp$. Make $\S^\sharp$ transverse to $\S$ and consider a curve $\alpha$ of intersection innermost on $\S^\sharp$ and bounding a disc $D^\sharp$ in $\S^\sharp$. The curve $\alpha$ also bounds a disc $D$ in $\S$; in fact, there are two choices for $D$. If for either of these choices $D\cup D^\sharp$ bounds a ball, then we can isotope $\S^\sharp$ to eliminate some curves of intersection. If $D^\sharp$ is contained in a $\hat V_i$, $i\ge 1$, so that $\hat V_i$ is an irreducible 3-manifold with one hole, then one of the choices of $D$ must give $D\cup D^\sharp$ bounding a ball, so we may assume now that $D^\sharp\subset \hat V_*$, the holed 3-sphere. We can also assume that for both choices of $D$, $D\cup D^\sharp$ is a sphere which (when pushed away from $\bdry \hat V_*$) cuts $\hat V_*$ into two balls-with-holes, neither of which is a ball. Now we perform slides to move boundary spheres of $\hat V_*$ contained in a chosen ball-with-holes bounded by $D\cup
D^\sharp$ out of that ball-with-holes. That is, for each boundary sphere of $\hat V_*$ in the ball-with-holes, perform a slide by cutting $W$ on the sphere, capping to replace $\hat V_i$ with a ball, sliding, and reglueing. One must use a slide path disjoint from $\S^\sharp$, close to $\S^\sharp$, to return to a different component of $\hat V_*-\S^\sharp$, but this of course does not yield a closed slide path. To form a closed slide path, it is necessary to isotope $\S^\sharp\cap V_*$ rel $\bdry V_*$. Seeing the existence of such a slide path requires a little thought; the key is to follow $\S^\sharp$ without intersecting it. After performing slides to remove all holes of $\hat V_*$ in the holed ball in $V_*$ bounded by $D^\sharp\cup D$, we can finally isotope $D^\sharp$ to $D$, and beyond, to eliminate curves of intersection in $\S\cap \S^\sharp$. Repeating, we can apply slide automorphisms and isotopies of $\S^\sharp$ to eliminate all curves of intersection.
We know that $\S^\sharp$ also cuts $W$ into manifolds $\hat V_i^\sharp$ homeomorphic to $\hat V_i$, and a holed sphere $\hat V_{*}^\sharp$ and so we conclude that each sphere $S$ of $\S$ bounding $\hat V_i$, which is an irreducible manifold with an open ball removed, is isotopic to a sphere $S^\sharp$ in $\S^\sharp$ bounding $\hat V_i^\sharp$ which is also an irreducible manifold with an open ball removed. Clearly also the $\hat V_i$ bounded by $S$ is homeomorphic to the $\hat V_i^\sharp$. Further isotopy therefore makes it possible to assume $f(\S)=\S^\sharp$. At this point, however, $f$ does not necessarily assign each $S'\in \S'$ to the desired $a(S')$. This can be achieved using interchanging slides and/or spins. The interchanging slides are used to permute spheres of $\S^\sharp$; the spins are used to reverse the orientation of a non-separating $S^\sharp \in \S^\sharp$, which switches two spheres of ${\S^\sharp}'$.
Clearly we have constructed $f$ as a composition of automorphisms which extend over $Q$ as $\E$-slides, $\E$-slide interchanges and $\E$-spins, all of which are automorphisms of $Q\rel V$, hence $f$ also extends as an automorphism of $Q\rel V$.
Before proving Proposition \[CharacterizationProp\], we mention some known results we will need. The first result says that an automorphism of a 3-sphere with $r$ holes which maps each boundary sphere to itself is isotopic to the identity. The second result concerns an automorphism of an arbitrary 3-manifold: If it is isotopic to the identity and the isotopy moves a given point along a null-homotopic closed path in the 3-manifold, then the isotopy can be replaced by an isotopy fixing the point at all times. These facts can be proved by applying a result of R. Palais, [@RP:RestrictionMaps], [@EL:RestrictionMaps].
Let $g:W\to W$ be an automorphism. Then by Lemma \[DiscrepantLemma\], we conclude that there is a discrepant automorphism $f:(Q,V)\to (Q,V)$ mapping $\S$ to $g(\S)$ according to the assignment $\S'\to g(\S)'$ induced by $g$. Thus $f\inverse\circ g$ is the identity on $\S$, preserving orientation. Now we construct $Q$ by attaching 3-handles to $W\times 0\subset W\times I$, a handle attached to each sphere of $\S$. The automorphisms $f\inverse\circ g$ and $f$ clearly both extend over $Q$, hence so does $g$. This shows $\H(W)=\H_x(W)$.
Showing that $\H_d(W)$ has the indicated generators is more subtle. Suppose we are given $g:W\to W$ with the property that $g$ extends to $Q$ as an automorphism rel $V$. We also use $g$ to denote the extended automorphism. We apply the method of proof in Lemma \[DiscrepantLemma\] to find a discrepant automorphism $f$, a composition of slides of holed irreducible summands $\hat V_i$, interchanging slides of $S^2\times S^1$ summands, and spins of $S^2\times S^1$ summands, to make $f\circ g(\S)$ coincide with $\S$, preserving orientation. Note that $f$ extends over $Q$, and that will be the case below as well, as we modify $f$. Thus we can regard $f$ as an automorphism of $Q$. Notice also that in the above we do not need interchanging slides of irreducible summands, since $g$ is the identity on $V$, hence maps each $V_i$ to itself.
At this point $f\circ g$ maps each $\hat V_i$ to itself, and of course it maps $\hat V_*$ to itself, also mapping each boundary sphere of $\hat V_*$ to itself. An automorphism of a holed sphere $\hat V_*$ which maps each boundary sphere to itself is isotopic to the identity, as we mentioned above. However, we must consider this as an automorphism rel the boundary spheres, and so it may be a composition of twists on boundary spheres. On $W-\cup_i
\interior(\hat V_i)$ (a holed connected sum of $S^2\times S^1$’s) this yields some twists on non-separating spheres of $\S$ corresponding to $S^2\times S^1$ summands. Precomposing these twists with $f$, we can now assume that $f\circ g$ is the identity on $W-\cup_i \interior(\hat V_i)$.
It remains to analyze $f\circ g$ restricted to $\hat V_i$. We do this by considering the restriction of $f\circ g$ to $V_i\times I$, noting that we should regard this automorphism as an automorphism of the triple $(V_i\times I, E_i', V_i\times 0)$, where $E_i'$ is a duplicate of the disc $E_i$ cutting $V_i\times I$ from $Q$. The product gives a homotopy $h_t=f\circ g|_{V_i\times t}$ from $h_0=f\circ g|_{V_i\times 0}=\text{id}$ to $h_1=f\circ
g|_{V_i\times 1}$, where we regard these maps as being defined on $V_i$. Since $V_i$ is an irreducible 3-manifold, we conclude as in the proof of Theorem \[FundamentalTheorem\], that $h_0$ is isotopic to $h_1$ via an isotopy $h_t$. From this isotopy, we obtain a path $\gamma(t)=h_t(p)$ where $p$ is a point in $E_i'$ when $V_i\times 1$ is identified with $V_i$. Consider a slide $s$ along $\bar \gamma$: Cut $Q$ on $E_i$, then slide the duplicate $E_i'$ along the path $\bar \gamma$, extending the isotopy as usual. By construction, $s$ is isotopic to the identity via $s_t$, $s_0=\text{id}$, $s_1=s$, and $s_t(p)$ is the path $\bar \gamma$. Combining the isotopies $h_t$ and $s_t$, we see that $(s\circ f\circ g)|_{V_i\times 1}$ is isotopic to the identity via an isotopy $r_t$ with $r_t(p)$ tracing the path $\gamma\bar\gamma$. This isotopy can be replaced by an isotopy fixing $p$ or $E_i'$. Thus after replacing $f$ by $s\circ f$, we have $f\circ g$ isotopic to the identity on $\hat V_i$. Doing the same thing for every $i$, we finally obtain $f$ so that $f\circ g$ is isotopic to the identity on $W$. This shows that $g$ is a composition of the inverses of the generators used to construct $f$.
Now we can give a proof that the canonical compression body is unique in a stronger sense.
\[Proof of Proposition \[UniquenessProposition\]\] We suppose $(Q_1,V_1)$ is the canonical compression body associated to $W$ with symmetric system $\S_1$ and $(Q_2,V_2)$ is the canonical compression body associated to $W$ with symmetric system $\S_2$. From Proposition \[UniquenessProp\] we obtain a homeomorphism $h:(Q_1,V_1)\to (Q_2,V_2)$ such that $h(\S_1)=\S_2$. We are given a homeomorphism $v:V_1\to V_2$. From the exact sequence of Theorem \[SequenceThm\], we know that there is an automorphism $k:(Q_2,V_2)\to
(Q_2,V_2)$ which extends the automorphism $v\circ (h\inverse|_{V_2}):V_2\to V_2$. Then $g= k\circ h$ is the required homeomorphism $g:(Q_1,V_1)\to
(Q_2,V_2)$ with the property that $g|_{V_1}=v$.
Spotted Manifolds. {#SpottedSection}
==================
In this section we present a quick “compression body point of view” on automorphisms of manifolds with sphere boundary components, or “spotted manifolds.”
Suppose $W$ is an $m$-manifold, $m=2$ or $m=3$, with some sphere boundary components. For such a manifold, there is a canonical collection $\S$ of essential spheres, namely a collection of spheres isotopic to the boundary spheres. We can construct the compression body associated to this collection of spheres, and we obtain an $n=m+1$ dimensional compression body $Q$ whose interior boundary consists of $m$-balls, one for each sphere of $\S$, and a closed manifold $V_0$, which is the manifold obtained by capping the sphere boundary components. We denote the union of the $m$-balls by $P=\cup P_i$ where each $P_i$ is a ball. Thus the interior boundary is $V=V_0\cup P$. See Figure \[MapSpotted\].
Suppose $V_0$ has universal cover which is either $S^m$ or is contractible. Applying Theorem \[SequenceThm\], we obtain the exact sequence
$$1\to \H_d(W)\to \H_x(W)\to \H(V)\to 1.$$
It remains to interpret the sequence. Notice that we can view $Q$ as a [*spotted product*]{} $(V_0\times I,P)$, where $P$ is a collection of balls in $V_0\times 1$. However, in our context we have a buffer zone of the form $S^{m-1}\times I\subset A$ separating each $P_i$ from $W$. We refer to $W$ as a holed manifold, since it is obtained from the manifold $V_0$ by removing open balls. $\H_d(W)$ is the mapping class group of automorphisms which extend to $Q$ as automorphisms rel $V=V_0\cup P$. Either directly, or using Theorem \[FundamentalTheorem\], we see that the elements of $\H_d(W)$ are automorphisms $f:W\to
W$ such that the capped automorphism $f_c$ is homotopic, hence isotopic, to the identity.
The mapping class group $\H_x(W)$ can be identified with $\H(W)$. This is because any sphere of $\S$ is mapped up to isotopy by any element $f\in \H(W)$ to another sphere of $\S$, hence $f$ extends to $Q$. Finally $\H(V)$ can be identified with $\H(V_0)\times
\H(P)$, where $\H(P)$ is clearly just a permutation group. We obtain the exact sequence
$$1\to \H_d(W)\to \H(W)\to \H(V_0)\times \H(P)\to 1,$$
and we have proved Theorem \[SpottedThm\].
---
abstract: 'The differential decay rates of the processes $J/\psi\to p\bar{p}\pi^{0}$ and $J/\psi\to p\bar{p}\eta$ close to the $p\bar{p}$ threshold are calculated with the help of the $N\bar{N}$ optical potential. The same calculations are made for the decays of $\psi(2S)$. We use the potential which has been suggested to fit the cross sections of $N\bar{N}$ scattering together with $N\bar{N}$ and six pion production in $e^{+}e^{-}$ annihilation close to the $p\bar{p}$ threshold. The $p\bar{p}$ invariant mass spectra are in agreement with the available experimental data. The anisotropy of the angular distributions, which appears due to the tensor forces in the $N\bar{N}$ interaction, is predicted close to the $p\bar{p}$ threshold. This anisotropy is large enough to be investigated experimentally. Such measurements would allow one to check the accuracy of the model of $N\bar{N}$ interaction.'
author:
- 'V. F. Dmitriev'
- 'A. I. Milstein'
- 'S. G. Salnikov'
bibliography:
- '/home/sergey/Documents/BINP/BibTeX/library.bib'
title: 'Angular distributions in $J/\psi\to p\bar{p}\pi^{0}(\eta)$ decays'
---
Introduction
============
The cross section of the process reveals an enhancement near the threshold . The enhancement near the $p\bar{p}$ threshold has also been observed in the decays , , and . These observations led to numerous speculations about a new resonance [@Bai2003], a $p\bar{p}$ bound state, or even a glueball state with a mass near twice the proton mass. This enhancement could appear due to the nucleon-antinucleon final-state interaction. It has been shown that the behavior of the cross sections of $N\bar{N}$ production in $e^{+}e^{-}$ annihilation can be explained with the help of the Jülich model or a slightly modified Paris model . These models also describe the energy dependence of the proton electromagnetic form factor ratio $\left|G_{E}^{p}/G_{M}^{p}\right|$. A strong dependence of the ratio on the energy close to the $p\bar{p}$ threshold is a consequence of the tensor part of the $N\bar{N}$ interaction.
Another phenomenon has been observed in the process of $e^{+}e^{-}$ annihilation to mesons. A sharp dip in the cross section of the process has been found in the vicinity of the $N\bar{N}$ threshold . This feature is related to the virtual $N\bar{N}$ pair production with subsequent annihilation to mesons . In Ref. [@Dmitriev2016] a potential model has been proposed to fit simultaneously the cross sections of $N\bar{N}$ scattering and $N\bar{N}$ production in $e^{+}e^{-}$ annihilation. This model describes the cross section of the process near the $N\bar{N}$ threshold as well. A qualitative description of this process was also achieved using the Jülich model [@Haidenbauer2015].
In this paper we investigate the decays and taking the $p\bar{p}$ final-state interaction into account. An investigation of these processes has been performed in Ref. [@Kang2015] using the chiral model. However, the tensor part of the $p\bar{p}$ interaction was neglected in that paper. To describe the $p\bar{p}$ interaction we use the potential model proposed in Ref. [@Dmitriev2016], where the tensor forces play an important role. Accounting for the tensor interaction allows us to analyze the angular distributions in the decays of $J/\psi$ and $\psi(2S)$ to $p\bar{p}\pi^{0}(\eta)$ near the $p\bar{p}$ threshold. The parameter of anisotropy is large enough to be studied experimentally.
Decay amplitude
===============
Possible states for a $p\bar{p}$ pair in the decays and have quantum numbers $J^{PC}=1^{--}$ and $J^{PC}=1^{+-}$. The dominating mechanism of the $p\bar{p}$ pair creation is the following. The $p\bar{p}$ pair is created at small distances in the $^{3}S_{1}$ state and acquires an admixture of $^{3}D_{1}$ partial wave at large distances due to the tensor forces in the nucleon-antinucleon interaction. The $p\bar{p}$ pairs have different isospins for the two final states under consideration ($I=1$ for the $p\bar{p}\pi^{0}$ state, and $I=0$ for the $p\bar{p}\eta$ state), that allows one to analyze two isospin states independently. Therefore, these decays are easier to investigate theoretically than the process $e^{+}e^{-}\to p\bar{p}$, where the $p\bar{p}$ pair is a mixture of different isospin states.
We derive the formulas for the decay rate of the process , where $x$ is one of the pseudoscalar mesons $\pi^{0}$ or $\eta$. The following kinematics is considered: $\bm{k}$ and $\varepsilon_{k}$ are the momentum and the energy of the $x$ meson in the $J/\psi$ rest frame, $\bm{p}$ is the proton momentum in the $p\bar{p}$ center-of-mass frame, $M$ is the invariant mass of the $p\bar{p}$ system. The following relations hold: $$\begin{aligned}
& p=\left|\bm{p}\right|=\sqrt{\frac{M^{2}}{4}-m_{p}^{2}}\,, & & k=\left|\bm{k}\right|=\sqrt{\varepsilon_{k}^{2}-m^{2}}\,, & & \varepsilon_{k}=\frac{m_{J/\psi}^{2}+m^{2}-M^{2}}{2m_{J/\psi}}\,,\label{eq:kinematics}\end{aligned}$$ where $m$ is the mass of the $x$ meson, $m_{J/\psi}$ and $m_{p}$ are the masses of a $J/\psi$ meson and a proton, respectively, and $\hbar=c=1$. Since we consider the $p\bar{p}$ invariant mass region , the proton and antiproton are nonrelativistic in their center-of-mass frame, while $\varepsilon_{k}$ is about $\unit[1]{GeV}$.
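The kinematic relations above translate directly into code. A minimal Python sketch (the rounded PDG mass values are an addition, not taken from the paper; the $\pi^{0}$ final state is used for concreteness):

```python
import math

M_JPSI = 3.0969   # J/psi mass, GeV (rounded PDG value)
M_P    = 0.93827  # proton mass, GeV
M_PI0  = 0.13498  # pi0 mass, GeV

def kinematics(M, m_x=M_PI0, m_res=M_JPSI):
    """Kinematics for a given p-pbar invariant mass M (GeV).

    Returns (p, k, eps_k): proton momentum in the p-pbar center-of-mass
    frame, and meson momentum and energy in the resonance rest frame.
    """
    eps_k = (m_res**2 + m_x**2 - M**2) / (2.0 * m_res)
    k = math.sqrt(eps_k**2 - m_x**2)
    p = math.sqrt(M**2 / 4.0 - M_P**2)
    return p, k, eps_k
```

At threshold ($M = 2m_{p}$) the proton momentum vanishes while the meson stays relativistic with $\varepsilon_{k}$ close to $1$ GeV, consistent with the remark above.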
The spin-1 wave function of the $p\bar{p}$ pair in the center-of-mass frame has the form [@dmitriev2014isoscalar] $$\bm{\psi}_{\lambda}^{I}=\bm{\mathrm{e}}_{\lambda}u_{1}^{I}(0)+\frac{u_{2}^{I}(0)}{\sqrt{2}}\left[\bm{\mathrm{e}}_{\lambda}-3\hat{\bm{p}}(\bm{\mathrm{e}}_{\lambda}\cdot\hat{\bm{p}})\right],$$ where $\hat{\bm{p}}=\bm{p}/p$, $\bm{\mathrm{e}}_{\lambda}$ is the polarization vector of the spin-1 $p\bar{p}$ pair, $$\sum_{\lambda=1}^{3}\mathrm{e}_{\lambda}^{i}\mathrm{e}_{\lambda}^{j*}=\delta_{ij}\,,$$ $u_{1}^{I}(r)$ and $u_{2}^{I}(r)$ are the components of two independent solutions of the coupled-channels radial Schrödinger equations $$\begin{aligned}
& \frac{p_{r}^{2}}{m_{p}}\chi_{n}+{\cal V}\chi_{n}=2E\chi_{n}\,,\nonumber \\
& {\cal V}=\begin{pmatrix}V_{S}^{I} & -2\sqrt{2}\,V_{T}^{I}\\
-2\sqrt{2}\,V_{T}^{I} & \quad V_{D}^{I}-2V_{T}^{I}+{\displaystyle \frac{6}{m_{p}r^{2}}}
\end{pmatrix},\qquad\chi_{n}=\begin{pmatrix}u_{n}^{I}\\
w_{n}^{I}
\end{pmatrix}.\end{aligned}$$ Here $E=p^{2}/2m_{p}$, $V_{S}^{I}$ and $V_{D}^{I}$ are the $N\bar{N}$ potentials in $S$- and $D$-wave channels, and $V_{T}^{I}$ is the tensor potential. Two independent regular solutions of these equations are determined by their asymptotic forms at large distances [@dmitriev2014isoscalar] $$\begin{aligned}
& u_{1}^{I}(r)=\frac{1}{2ipr}\Big[S_{11}^{I}\,e^{ipr}-e^{-ipr}\Big], & & u_{2}^{I}(r)=\frac{1}{2ipr}S_{21}^{I}\,e^{ipr},\nonumber \\
& w_{1}^{I}(r)=-\frac{1}{2ipr}S_{12}^{I}\,e^{ipr}, & & w_{2}^{I}(r)=\frac{1}{2ipr}\Big[-S_{22}^{I}e^{ipr}+e^{-ipr}\Big],\end{aligned}$$ where $S_{ij}^{I}$ are some functions of energy.
The Lorentz transformation for the spin-1 wave function of the $p\bar{p}$ pair can be written as $$\tilde{\bm{\psi}}_{\lambda}^{I}=\bm{\psi}_{\lambda}^{I}+\left(\gamma-1\right)\hat{\bm{k}}(\bm{\psi}_{\lambda}^{I}\cdot\hat{\bm{k}})\,,$$ where $\tilde{\bm{\psi}}_{\lambda}^{I}$ is the wave function in the $J/\psi$ rest frame, $\hat{\bm{k}}=\bm{k}/k$, and $\gamma$ is the $\gamma$-factor of the $p\bar{p}$ center-of-mass frame. The component collinear to $\bm{k}$ does not contribute to the amplitude of the decay under consideration because the amplitude is transverse to $\bm{k}$. As a result, the dimensionless amplitude of the decay with the corresponding isospin of the $p\bar{p}$ pair can be written as $$T_{\lambda\lambda'}^{I}=\frac{\mathcal{G}_{I}}{m_{J/\psi}}\bm{\psi}_{\lambda}^{I}\left[\bm{k}\times\bm{\epsilon}_{\lambda'}\right].\label{eq:amplitude}$$ Here $\mathcal{G}_{I}$ is an energy-independent dimensionless constant, $\bm{\epsilon}_{\lambda'}$ is the polarization vector of $J/\psi$, $$\sum_{\lambda'=1}^{2}\epsilon_{\lambda'}^{i}\epsilon_{\lambda'}^{j*}=\delta_{ij}-n^{i}n^{j},$$ where $\bm{n}$ is the unit vector collinear to the momentum of electrons in the beam.
The decay rate of the process $J/\psi\to p\bar{p}x$ can be written in terms of the dimensionless amplitude $T_{\lambda\lambda'}^{I}$ as (see, e.g., [@Sibirtsev2005]) $$\frac{d\Gamma}{dMd\Omega_{p}d\Omega_{k}}=\frac{pk}{2^{9}\pi^{5}m_{J/\psi}^{2}}\left|T_{\lambda\lambda'}^{I}\right|^{2},\label{eq:gamma}$$ where $\Omega_{p}$ is the proton solid angle in the $p\bar{p}$ center-of-mass frame and $\Omega_{k}$ is the solid angle of the $x$ meson in the $J/\psi$ rest frame.
Substituting the amplitude in Eq. and averaging over the spin states, we obtain the $p\bar{p}$ invariant mass and angular distribution for the decay rate $$\begin{gathered}
\frac{d\Gamma}{dMd\Omega_{p}d\Omega_{k}}=\frac{\mathcal{G}_{I}^{2}pk^{3}}{2^{11}\pi^{5}m_{J/\psi}^{4}}\left\{ \left|u_{1}^{I}(0)+{\textstyle \frac{1}{\sqrt{2}}}u_{2}^{I}(0)\right|^{2}+\left|u_{1}^{I}(0)-\sqrt{2}u_{2}^{I}(0)\right|^{2}(\bm{n}\cdot\hat{\bm{k}})^{2}\right.\\
\left.{}+\frac{3}{2}\left[\left|u_{2}^{I}(0)\right|^{2}-2\sqrt{2}\re{\left(u_{1}^{I}(0)u_{2}^{I*}(0)\right)}\right]\left[(\bm{n}\cdot\hat{\bm{p}})^{2}-2(\bm{n}\cdot\hat{\bm{k}})(\bm{n}\cdot\hat{\bm{p}})(\hat{\bm{p}}\cdot\hat{\bm{k}})\right]\right\} .\label{eq:distribution}\end{gathered}$$ The invariant mass distribution can be obtained by integrating Eq. over the solid angles $\Omega_{p}$ and $\Omega_{k}$: $$\frac{d\Gamma}{dM}=\frac{\mathcal{G}_{I}^{2}pk^{3}}{2^{5}\thinspace3\pi^{3}m_{J/\psi}^{4}}\left(\left|u_{1}^{I}(0)\right|^{2}+\left|u_{2}^{I}(0)\right|^{2}\right).\label{eq:MassSpectrum}$$ The sum in the brackets is the so-called enhancement factor, which equals unity if the $p\bar{p}$ final-state interaction is turned off.
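The invariant mass distribution above can be evaluated numerically once the wave functions at the origin are known. A Python sketch for the $\pi^{0}$ final state (rounded PDG masses and the unit coupling $\mathcal{G}_{I}=1$ are illustrative assumptions, not the authors' values):

```python
import math

M_JPSI, M_P, M_PI0 = 3.0969, 0.93827, 0.13498  # GeV, rounded PDG values

def dGamma_dM(M, u1, u2, G=1.0):
    """Invariant-mass distribution dGamma/dM for J/psi -> p pbar pi0.

    u1, u2 are the (complex) wave functions u_1^I(0), u_2^I(0); their
    squared moduli form the enhancement factor of the text.
    """
    p = math.sqrt(M**2 / 4.0 - M_P**2)                      # proton momentum
    eps_k = (M_JPSI**2 + M_PI0**2 - M**2) / (2.0 * M_JPSI)  # pi0 energy
    k = math.sqrt(eps_k**2 - M_PI0**2)                      # pi0 momentum
    prefactor = G**2 * p * k**3 / (2**5 * 3 * math.pi**3 * M_JPSI**4)
    return prefactor * (abs(u1)**2 + abs(u2)**2)
```

Without final-state interaction ($u_{1}=1$, $u_{2}=0$) this reduces to the phase-space behavior shown by the dashed curves in the figures.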
More information about the properties of the $N\bar{N}$ interaction can be extracted from the angular distributions. Integrating Eq. over $\Omega_{p}$ we obtain $$\frac{d\Gamma}{dMd\Omega_{k}}=\frac{\mathcal{G}_{I}^{2}pk^{3}}{2^{9}\pi^{4}m_{J/\psi}^{4}}\left(\left|u_{1}^{I}(0)\right|^{2}+\left|u_{2}^{I}(0)\right|^{2}\right)\left[1+\cos^{2}\vartheta_{k}\right],$$ where $\vartheta_{k}$ is the angle between $\bm{n}$ and $\bm{k}$. However, the angular part of this distribution does not depend on the features of the $p\bar{p}$ interaction. The proton angular distribution in the $p\bar{p}$ center-of-mass frame is more interesting. To obtain this distribution we integrate Eq. over the solid angle $\Omega_{k}$: $$\frac{d\Gamma}{dMd\Omega_{p}}=\frac{\mathcal{G}_{I}^{2}pk^{3}}{2^{7}\,3\pi^{4}m_{J/\psi}^{4}}\left(\left|u_{1}^{I}(0)\right|^{2}+\left|u_{2}^{I}(0)\right|^{2}\right)\left[1+\gamma^{I}P_{2}(\cos\vartheta_{p})\right],\label{eq:Pdistrib}$$ where $\vartheta_{p}$ is the angle between $\bm{n}$ and $\bm{p}$, $P_{2}(x)=\frac{3x^{2}-1}{2}$ is the Legendre polynomial, and $\gamma^{I}$ is the parameter of anisotropy: $$\gamma^{I}=\frac{1}{4}\frac{\left|u_{2}^{I}(0)\right|^{2}-2\sqrt{2}\re{\left[u_{1}^{I}(0)u_{2}^{I*}(0)\right]}}{\left|u_{1}^{I}(0)\right|^{2}+\left|u_{2}^{I}(0)\right|^{2}}\,.\label{eq:anisotropy}$$ Averaging over the direction of $\bm{n}$ gives the distribution over the angle $\vartheta_{pk}$ between $\bm{p}$ and $\bm{k}$: $$\frac{d\Gamma}{dMd\Omega_{pk}}=\frac{\mathcal{G}_{I}^{2}pk^{3}}{2^{7}\,3\pi^{4}m_{J/\psi}^{4}}\left(\left|u_{1}^{I}(0)\right|^{2}+\left|u_{2}^{I}(0)\right|^{2}\right)\left[1-2\gamma^{I}P_{2}(\cos\vartheta_{pk})\right].\label{eq:Ppidistrib}$$ Note that this distribution can be written in terms of the same anisotropy parameter .
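The anisotropy parameter is a simple function of the two wave-function components at the origin. A hedged Python sketch (an illustration of the formula, not the authors' code):

```python
import math

def anisotropy(u1, u2):
    """Anisotropy parameter gamma^I from the wave functions at the origin.

    u1, u2 are the (complex) components u_1^I(0), u_2^I(0); u2 encodes
    the D-wave admixture generated by the tensor forces.
    """
    numerator = abs(u2)**2 - 2.0 * math.sqrt(2.0) * (u1 * u2.conjugate()).real
    denominator = abs(u1)**2 + abs(u2)**2
    return 0.25 * numerator / denominator
```

With no D-wave admixture ($u_{2}=0$) the proton angular distribution is isotropic ($\gamma^{I}=0$), while a pure D-wave term gives $\gamma^{I}=1/4$.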
The mass spectrum and the anisotropy parameter are sensitive to the tensor part of the $N\bar{N}$ potential and, therefore, provide a way to verify the potential model.
Results and Discussion
======================
In the present work we use the potential model suggested in Ref. [@Dmitriev2016]. The parameters of this model have been fitted using the $p\bar{p}$ scattering data, the cross section of $N\bar{N}$ pair production in $e^{+}e^{-}$ annihilation near the threshold, and the ratio of the electromagnetic form factors of the proton in the timelike region. By means of this model and Eq. , we predict the $p\bar{p}$ invariant mass spectra in the processes and . The isospin of the $p\bar{p}$ pair is $I=1$ and $I=0$ for, respectively, a pion and $\eta$ meson in the final state. The model [@Dmitriev2016] predicts the enhancement of the decay rates of both processes near the threshold of $p\bar{p}$ pair production (see the red band in Fig. \[fig:JPsidecays\]). The invariant mass spectra predicted by our model are similar to those predicted in Ref. [@Kang2015] with the use of the chiral model. Very close to the threshold the enhancement factor turned out to be slightly overestimated in comparison with the experimental data, as is seen from Fig. \[fig:JPsidecays\]. We have tried to refit the parameters of our model in order to achieve a better description of the invariant mass spectra of the decays considered. The predictions of the refitted model are shown in Fig. \[fig:JPsidecays\] with the green band. It is seen that the refitted model fits the invariant mass spectra of $J/\psi$ decays better. However, the discrepancy in the cross sections of $n\bar{n}$ production in $e^{+}e^{-}$ annihilation and the charge-exchange process has slightly increased after refitting.
![\[fig:JPsidecays\]The invariant mass spectra of $J/\psi$ decays to $p\bar{p}\pi^{0}$ (left) and $p\bar{p}\eta$ (right). The red/dark band corresponds to the model [@Dmitriev2016] and the green/light band corresponds to the refitted model. The phase space behavior is shown by the dashed curve. The experimental data are taken from Refs. [@Bai2003; @Ablikim2009; @Bai2001]. The measurement of Ref. [@Bai2003] is adopted for the scale of the left plot.](JPsi_to_pi "fig:"){height="5.35cm"}![\[fig:JPsidecays\]The invariant mass spectra of $J/\psi$ decays to $p\bar{p}\pi^{0}$ (left) and $p\bar{p}\eta$ (right). The red/dark band corresponds to the model [@Dmitriev2016] and the green/light band corresponds to the refitted model. The phase space behavior is shown by the dashed curve. The experimental data are taken from Refs. [@Bai2003; @Ablikim2009; @Bai2001]. The measurement of Ref. [@Bai2003] is adopted for the scale of the left plot.](JPsi_to_eta "fig:"){height="5.35cm"}
![\[fig:anisotropy\]The dependence of the anisotropy parameters $\gamma^{I}$ on $p\bar{p}$ invariant mass (left) and the distributions over the angle between the proton momentum and the momentum of the electrons in the beam at $M-2m_{p}=\unit[150]{MeV}$ (right). The red/dark band corresponds to the model [@Dmitriev2016] and the green/light band corresponds to the refitted model.](gamma1 "fig:"){height="5.35cm"}![\[fig:anisotropy\]The dependence of the anisotropy parameters $\gamma^{I}$ on $p\bar{p}$ invariant mass (left) and the distributions over the angle between the proton momentum and the momentum of the electrons in the beam at $M-2m_{p}=\unit[150]{MeV}$ (right). The red/dark band corresponds to the model [@Dmitriev2016] and the green/light band corresponds to the refitted model.](Psi_distrib1 "fig:"){height="5.35cm"}
![\[fig:anisotropy\]The dependence of the anisotropy parameters $\gamma^{I}$ on $p\bar{p}$ invariant mass (left) and the distributions over the angle between the proton momentum and the momentum of the electrons in the beam at $M-2m_{p}=\unit[150]{MeV}$ (right). The red/dark band corresponds to the model [@Dmitriev2016] and the green/light band corresponds to the refitted model.](gamma0 "fig:"){height="5.35cm"}![\[fig:anisotropy\]The dependence of the anisotropy parameters $\gamma^{I}$ on $p\bar{p}$ invariant mass (left) and the distributions over the angle between the proton momentum and the momentum of the electrons in the beam at $M-2m_{p}=\unit[150]{MeV}$ (right). The red/dark band corresponds to the model [@Dmitriev2016] and the green/light band corresponds to the refitted model.](Psi_distrib0 "fig:"){height="5.35cm"}
An important prediction of our model is the angular anisotropy of the $J/\psi$ decays. This anisotropy is the result of $D$-wave admixture due to the tensor forces in $N\bar{N}$ interaction. The anisotropy (see Eqs. and ) is characterized by the parameters $\gamma^{1}$ and $\gamma^{0}$ for the $p\bar{p}\pi^{0}$ and $p\bar{p}\eta$ final states, respectively. The dependence of the parameters $\gamma^{I}$ on the invariant mass of the $p\bar{p}$ pair is shown in the left side of Fig. \[fig:anisotropy\]. For $p\bar{p}$ invariant mass about $\unit[100-200]{MeV}$ above the threshold, significant anisotropy of the angular distributions is predicted. The distributions over the angle between the proton momentum and the momentum of the electrons in the beam are shown in the right side of Fig. \[fig:anisotropy\]. Note that the anisotropy in the distribution over the angle $\vartheta_{pk}$ is expected to be two times larger than in the distribution over the angle $\vartheta_{p}$ (compare Eqs. and ).
There are some data on the angular distributions in the decays [@Ablikim2009] and [@Bai2001]. However, these distributions are obtained by integration over the whole $p\bar{p}$ invariant mass region. Unfortunately, our predictions are valid only in the narrow energy region above the $p\bar{p}$ threshold. Therefore, we cannot compare the predictions with the available experimental data. The measurements of the angular distributions at $p\bar{p}$ invariant mass close to the $p\bar{p}$ threshold would be very helpful. Such measurements would provide another possibility to verify the available models of $N\bar{N}$ interaction in the low-energy region.
The formulas written above are also valid for the decays and with the replacement of $m_{J/\psi}$ by the mass of $\psi(2S)$. The invariant mass spectra for these decays are shown in Fig. \[fig:PsiPrime\]. The angular distributions for these processes are the same as for the decays of $J/\psi$ because they depend only on the invariant mass of the $p\bar{p}$ pair.
![\[fig:PsiPrime\]The invariant mass spectra for the decays $\psi(2S)\to p\bar{p}\pi^{0}$ (left) and $\psi(2S)\to p\bar{p}\eta$ (right). The red/dark band corresponds to the model [@Dmitriev2016] and the green/light band corresponds to the refitted model. The phase space behavior is shown by the dashed curve. The experimental data are taken from Refs. [@Alexander2010; @Ablikim2013a; @Ablikim2013]. The measurement of Ref. [@Alexander2010] is adopted for the scale of both plots.](Psi2S_to_pi "fig:"){height="5.35cm"}![\[fig:PsiPrime\]The invariant mass spectra for the decays $\psi(2S)\to p\bar{p}\pi^{0}$ (left) and $\psi(2S)\to p\bar{p}\eta$ (right). The red/dark band corresponds to the model [@Dmitriev2016] and the green/light band corresponds to the refitted model. The phase space behavior is shown by the dashed curve. The experimental data are taken from Refs. [@Alexander2010; @Ablikim2013a; @Ablikim2013]. The measurement of Ref. [@Alexander2010] is adopted for the scale of both plots.](Psi2S_to_eta "fig:"){height="5.35cm"}
Conclusions
===========
Using the model proposed in Ref. [@Dmitriev2016], we have calculated the effects of $p\bar{p}$ final-state interaction in the decays and . Our results for the $p\bar{p}$ invariant mass spectra close to the $p\bar{p}$ threshold are in agreement with the available experimental data. The tensor forces in the $p\bar{p}$ interaction result in the anisotropy of the angular distributions. The anisotropy in the decay and especially in the decay are large enough to be measured. The observation of such anisotropy close to the $p\bar{p}$ threshold would allow one to refine the model of $N\bar{N}$ interaction.
---
abstract: 'Musical preferences have been considered a mirror of the self. In this age of Big Data, online music streaming services allow us to capture ecologically valid music listening behavior and provide a rich source of information to identify several user-specific aspects. Studies have shown musical engagement to be an indirect representation of internal states including internalized symptomatology and depression. The current study aims at unearthing patterns and trends in the individuals at risk for depression as it manifests in naturally occurring music listening behavior. Mental well-being scores, musical engagement measures, and listening histories of Last.fm users (N=541) were acquired. Social tags associated with each listener’s most popular tracks were analyzed to unearth the mood/emotions and genres associated with the users. Results revealed that social tags prevalent in the users at risk for depression were predominantly related to emotions depicting *Sadness* associated with genre tags representing *neo-psychedelic-, avant garde-, dream-pop*. This study will open up avenues for an MIR-based approach to characterizing and predicting risk for depression which can be helpful in early detection and additionally provide bases for designing music recommendations accordingly.'
bibliography:
- 'ISMIRtemplate.bib'
title: 'Tag2Risk: Harnessing Social Music Tags for Characterizing Depression Risk'
---
\[methodology\] {width="78.00000%"}
Introduction {#sec:introduction}
============
According to reports from the World Health Organization, an estimated 322 million people worldwide were affected by depression, the leading cause of disability [@world2017depression]. Recent times have witnessed a surge in studies using social multimedia content, such as content from Facebook, Twitter, and Instagram, to detect mental disorders including depression [@munmun; @copper; @munmun_14(2); @munmun_16; @reece]. Music plays a vital role in mental well-being by impacting moods, emotions, and other affective states [@baltazar]. Musical preferences and habits have been associated with the individual’s need to satisfy and reinforce their psychological needs [@nave; @qiu]. Empirical evidence exists linking musical engagement strategies to measures of ill-health, including internalized symptomatology and depression [@litlink; @rumin]. Also, increased emotional dependency on music during periods of depression has been reported [@mcferran]. Specifically, the Healthy-Unhealthy Music Scale (HUMS), a 13-item questionnaire, was developed for assessing musical engagement strategies and identifying maladaptive ways of using music. Such strategies are characterized by using music to avoid other people, by ruminative thinking, and by feeling worse after music engagement. Such unhealthy musical engagement was found to correlate with higher psychological distress and was indicative of depressive tendencies [@hums]. Furthermore, the high predictive power of machine learning models in predicting risk for depression from HUMS further bolsters its efficacy as an indirect tool for assessing mental states [@ragarwal]. Research suggests that such musical engagement does not always lead to alleviating depressive symptoms [@stewart]. This calls for developing intervention strategies that allow for altering music listening behavior to suit the individual’s state, traits, and general musical preferences, which may lead to a positive outcome.
Thus, it is of vital importance not only to identify individuals with depressive tendencies but also to unearth music listening habits of such individuals that will provide bases for designing music recommendations accordingly.
Past research studying the link between music listening habits and depression has relied on self-reported data and controlled listening experiments, wherein participants may have wished to conform to social expectations, or their responses might have been influenced by how they want other people to perceive them, thereby introducing demand characteristics [@greenberg2017social]. This has also been identified as a limitation by Nave et al. [@nave], who have proposed collecting data in more ecologically valid settings, such as user listening histories from music streaming platforms, which are a better reflection of the users’ true preferences and behaviours. To date, no studies have looked at the link between active music listening and depression using the music listening histories of users, which motivates this study.
In this age of big data, online music streaming platforms such as Last.fm, Spotify, and Apple Music provide access to millions of songs of varying genres, and this has allowed for assessing users’ features from their digital traces on music streaming platforms. To the best of our knowledge, Last.fm is the only platform that makes it possible to extract the listening history of users and other metadata describing their listening behavior using its own public API. Last.fm has been used extensively by researchers for various purposes such as music emotion classification, user behavior analysis, and social tag categorization [@saari; @laurier]. Last.fm has an abundance of social tags that provide a wide range of information about the musical tracks, including low- and high-level audio feature descriptions, evoked emotions and experiences, genre, etc. These tags have been found to predict short-term user music preferences [@gupta] and to successfully predict the next played songs in the design of a recommendation system [@polignano]. Our aim is to identify the tags and their respective occurrences in the listening behavior of individuals at risk for depression, which makes Last.fm an apt choice for this study. The data was collected using an online survey comprising Last.fm music listening histories, in addition to music engagement strategies (HUMS), and mental well-being scores of the participants. Specifically, each track in the data was semantically represented by the tags assigned to it. We leverage these representations of tags as social descriptors of music to uncover emotional experiences and concepts that are associated with users at risk for depression.
Research Objectives and Hypotheses
----------------------------------
In this study we investigated whether people’s music listening history, in terms of social tags, could be used to predict a risk for depression. Our research questions were:
- What are the social tags associated with music chosen by At-Risk users?
- What emotions do these tags signify in the context of musically evoked emotions?
- What genres are mostly associated with At-Risk users?
- How well can we classify users as At-Risk given user-specific social tags?
We expected the social tags linked with At-Risk listeners to contain emotions with low arousal and low valence, typical of a depressive mood. Owing to the lack of research associating music genres with risk for depression [@stewart], this part of the study was exploratory.
Methodology {#sec:page_size}
===========
The methodological approach and procedure of our study are illustrated in . The steps of data collection, processing, and analysis are described below.
Data Collection
---------------
An online survey was designed wherein participants were asked to fill their Last.fm usernames and demographics followed by standard scales for assessing their mental well-being, musical engagement strategies and personality. Participants were solicited on the Last.fm groups of social media platforms like Reddit and Facebook. The inclusion criterion required being an active listener on Last.fm for at least a year prior to filling the survey. The survey form required the users’ consent to access their Last.fm music history.
### Participants
A total of 541 individuals (mean age = 25.4, SD = 7.3) were eligible and willing to participate in the study, consisting of 444 males, 82 females, and 15 others. Most of them were from the United States and the United Kingdom, accounting for about 30% and 10% of the participants, respectively. Every other country contributed less than 5% of the total participants.
### Measure of Well-Being, Musical Engagement, and Personality
The Kessler’s Psychological Distress Scale (K10) questionnaire [@k10] was used to assess mental well-being. It is a measure of psychological distress, particularly assessing anxiety and depression symptoms. Individuals scoring 29 and above on K-10 are likely to be at severe risk for depression and hence, constitute the “At-Risk” group. Those scoring below 20 are labeled as the “No-Risk” group [@sakka] as they are likely to be well. There were 193 participants in the No-Risk group and 142 in the At-Risk group. The HUMS survey was administered to assess musical engagement strategies which resulted in two scores per participant, *Healthy* and *Unhealthy*. Personality information was obtained using the Mini-IPIP questionnaire [@topo] which results in scores for the Big Five traits of Personality namely *Openness*, *Conscientiousness*, *Extraversion*, *Agreeableness* and *Neuroticism*. HUMS and personality data were collected in order to identify if specific personality traits engage more in *Unhealthy* music listening and as additional measures to assess internal validity.
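The K10 cut-offs described above can be encoded directly; a minimal sketch, with the thresholds taken from the text:

```python
def k10_group(score):
    """Assign a K10 psychological-distress score to a study group.

    Scores of 29 and above form the At-Risk group, scores below 20 the
    No-Risk group; intermediate scores belong to neither group.
    """
    if score >= 29:
        return "At-Risk"
    if score < 20:
        return "No-Risk"
    return None
```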
### Music Listening History
Each participant’s music listening history was extracted using a publicly available API. The data included tracks, artists, and social tags associated with the tracks. For each participant, the top *n* (n = 500, 200, 100) tracks based on play-counts were extracted, centered around the time *t* (t = $\pm$ 3 months, $\pm$ 2 months) they filled in the questionnaire. The reason for varying *n* and *t* was to find converging evidence in music listening behavior in order to make our results more robust. For each track, the top 50 social tags based on tag weight (the number of times the tag has been assigned to the track) were chosen for subsequent analysis.
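Fetching a user's top tracks goes through the public Last.fm web API; `user.gettoptracks` is a documented API method, but the exact request parameters used in the study (in particular, windows centered on the survey date) are assumptions here. A stdlib-only sketch:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_ROOT = "http://ws.audioscrobbler.com/2.0/"

def top_tracks_url(user, api_key, limit=500, period="3month"):
    """Build the request URL for the user.gettoptracks API method."""
    params = {
        "method": "user.gettoptracks",
        "user": user,
        "api_key": api_key,
        "format": "json",
        "limit": limit,
        "period": period,
    }
    return API_ROOT + "?" + urlencode(params)

def fetch_top_tracks(user, api_key, limit=500):
    """Return the list of track records for a user (performs a network call)."""
    with urlopen(top_tracks_url(user, api_key, limit)) as resp:
        return json.load(resp)["toptracks"]["track"]
```

Per-track tags would be retrieved analogously with the `track.gettoptags` method.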
Social Tags Processing
----------------------
### Tag Filtering
Music-related social tags are known to be descriptors of genre, perceived emotion, artist, and album, amongst others. It is therefore important to filter them and organize them according to some structure and interpretable dimensions for the task at hand. The purpose of this preprocessing step was to retrieve tags that could be mapped onto a semantic space representing music-evoked emotions. To this end, we used four filtering stages: first, lower-casing, removal of punctuation and stop-words, spell-checking, and checking for the existence of tag words in the English corpus; second, retaining tags that are most frequently assigned adverbs or adjectives via POS (Part-Of-Speech) tagging, since POS tags representing nouns and pronouns have no emotional relevance in this context; third, removing tags containing 2 or more words, to avoid valence shifters [@valence_shifters] and sentence-like descriptions in our Last.fm corpus; fourth, manually discarding tags without any mood/emotion associations.
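The four filtering stages might be sketched as follows. The stop-word list and the manual exclusion list below are illustrative stand-ins, and the POS predicate is left abstract (a real one could be backed by, e.g., `nltk.pos_tag` — an assumption, not the authors' tooling); spell-checking and the English-corpus lookup are omitted:

```python
import string

STOPWORDS = {"the", "a", "an", "and", "of", "to"}   # illustrative stand-in list
NON_EMOTION = {"seen", "live", "favourite"}          # stage 4: manual exclusions

def filter_tags(tags, is_adj_or_adv):
    """Apply the four filtering stages to raw social tags.

    is_adj_or_adv(tag) -> bool is the POS check of stage 2.
    """
    kept = []
    for tag in tags:
        t = tag.lower().strip()
        t = t.translate(str.maketrans("", "", string.punctuation))
        if not t or t in STOPWORDS:          # stage 1: normalize, drop stop-words
            continue
        if not is_adj_or_adv(t):             # stage 2: adverbs/adjectives only
            continue
        if len(t.split()) >= 2:              # stage 3: drop multi-word tags
            continue
        if t in NON_EMOTION:                 # stage 4: manual filtering
            continue
        kept.append(t)
    return kept
```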
### Tag Emotion Induction {#induction}
To project the tags onto an emotion space, we used dimensional models that represent the emotions. Multiple research studies have shown the usefulness of both two-dimensional and three-dimensional models to represent emotions [@dimension1; @eurola; @dimension2]. We therefore used both these models for further analysis in order to check for trends and the effect of the third dimension when dealing with emotions.
The first model is one of the most popular dimensional models, Russell’s Circumplex Model of Affect [@VAmodel], where an emotion is a point in a two-dimensional continuous space representing *Valence* and *Arousal* (VA). *Valence* reflects pleasantness and *Arousal* describes the energy content of the emotion. The second model is an extension of Russell’s model with an added *Dominance* dimension (VAD), which represents control of the emotional state. The VAD model has been a popular framework used to construct emotion lexicons in the field of Natural Language Processing. The projection in the VAD space is based on semantic similarity and has largely been used to obtain affective ratings for large corpora of English words [@wei; @vad]. Another common emotion model is the VAT model, wherein the third dimension represents *Tension* (VAT); it has been used in retrieving mood information from Last.fm tags [@saari]. However, Saari et al.’s [@saari] approach was based on tag co-occurrence rather than semantic similarity. Moreover, a subsequent study by the same authors reported a positive correlation (r=0.85) between *tension* and *dominance* [@paasi_2]. Also, multiple studies have supported the use of the VAD space for analysing emotions in the context of music [@musicvad; @musicvad2]. We have therefore chosen the VAD framework for the purpose of our study. Since the VA dimensions alone were found to sufficiently capture musical emotions [@eurola], we also repeat our analysis based on the VA model to observe the effect of the omitted *Dominance* dimension.
The tags were projected onto the VAD space using the word-emotion induction model introduced by Buechel and Hahn [@wei]. We used the FastText embeddings of the tags as input to a 3-layer multi-layer perceptron that produced VAD values ranging from 1 to 9 on each dimension. FastText has shown better accuracy for word-emotion induction [@wei] than other commonly used models such as Word2vec and GloVe. Moreover, FastText embeddings incorporate sub-word character n-grams that enable the handling of out-of-vocabulary words, a large advantage over the other models [@fasttext]. In addition, FastText works well with rarely occurring words because their character n-grams are still shared with other words. This made it a suitable choice, since some of the user-assigned tags may be infrequent or absent in the training corpus of the embedding model. We used the same approach to project the tags onto the VA space by changing the number of nodes in the output layer from 3 to 2.
Both models were trained on the EN+ dataset, which contains *valence*, *arousal* and *dominance* ratings (on a 9-point scale) for a majority of well-known English words [@Warriner2013]. This module produced an n-dimensional vector (n=3 for VAD, n=2 for VA) for each tag. The remainder of the pipeline describes the processing of the 3-dimensional VAD vectors; the same procedure is repeated for the VA scores.
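A minimal sketch of this induction step: a one-hidden-layer perceptron (simpler than the 3-layer network used in the paper) trained by full-batch gradient descent to map word embeddings to (valence, arousal, dominance) ratings. Random vectors stand in for FastText embeddings and a synthetic lexicon stands in for EN+; layer sizes and learning rate are illustrative choices, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, OUT = 50, 32, 3                  # embedding dim, hidden units, V/A/D
N = 200                                # toy lexicon size

X = rng.normal(size=(N, D))            # stand-in word embeddings
W_true = rng.normal(size=(D, OUT))
y = 5 + 2 * np.tanh(X @ W_true / np.sqrt(D))   # synthetic 1..9 ratings

W1 = rng.normal(size=(D, H)) * 0.1
b1 = np.zeros(H)
W2 = rng.normal(size=(H, OUT)) * 0.1
b2 = np.zeros(OUT)

lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)           # hidden layer
    pred = h @ W2 + b2                 # linear output: V, A, D
    grad = (pred - y) / N              # dMSE/dpred (up to a factor of 2)
    W2 -= lr * (h.T @ grad)
    b2 -= lr * grad.sum(0)
    gh = (grad @ W2.T) * (1 - h ** 2)  # backprop through tanh
    W1 -= lr * (X.T @ gh)
    b1 -= lr * gh.sum(0)

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(round(mse, 3))
```

A real run would load pretrained FastText vectors and the EN+ lexicon in place of the synthetic data; switching the output width from 3 to 2 gives the VA variant described above.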
### Tag Emotion Mapping {#clustering}
The social tags were grouped into broader emotion categories. These categories consisted of the 9 first-order factors of the Geneva Emotional Music Scale (GEMS) [@gems]: *Wonder, Transcendence, Nostalgia, Tenderness, Peacefulness, Power, Joyful activation, Tension*, and *Sadness*. Table 1 in the supplementary material displays the factor loadings for these first-order factors of GEMS. GEMS contains 40 emotion terms that were consistently chosen to describe musically evoked emotive states across a wide range of music genres; these were subsequently grouped to provide a taxonomy for music-evoked emotions. This scale has outperformed other discrete and dimensional emotion models in accounting for music-evoked emotions [@gemsvsVAD]. In order to project these 9 emotion categories onto the VAD space, we first obtained the VAD values for the 40 emotion terms. Next, the VAD values were weighted and summed according to the weights provided in the original GEMS study to obtain a VAD value for each emotion category. Figures 1 & 2 in the supplementary material display the projections of these emotion categories onto the VAD and VA spaces. Each tag was then assigned to the nearest emotion category in the VAD space, as measured by Euclidean distance.
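The nearest-category assignment can be sketched directly; the VAD centroids below are invented for illustration (the paper derives them as weighted sums of the VAD values of the 40 GEMS terms, and uses all 9 categories rather than the 5 shown here).

```python
import numpy as np

categories = {                      # hypothetical (V, A, D) centroids
    "Joyful activation": (7.8, 6.9, 6.5),
    "Peacefulness":      (7.0, 2.4, 5.8),
    "Tenderness":        (7.2, 3.4, 5.2),
    "Sadness":           (2.4, 3.1, 3.0),
    "Tension":           (3.0, 6.8, 4.0),
}
names = list(categories)
centroids = np.array([categories[n] for n in names])

def assign_category(tag_vad):
    """Return the GEMS category whose centroid is closest in VAD space."""
    d = np.linalg.norm(centroids - np.asarray(tag_vad), axis=1)
    return names[int(np.argmin(d))]

print(assign_category((2.1, 2.9, 3.3)))   # low-valence, low-arousal tag -> Sadness
```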
User-Specific Emotion Prevalence Score {#scoring}
--------------------------------------
After every user’s tags had been mapped onto the 9 emotion categories, we calculated an *Emotion Prevalence Score* $S_{u,c}$ for every user $u$ and emotion category $c$, representing the presence of tags belonging to that emotion category in the user’s listening history. $$\label{userscore}
S_{u,c}=\frac{\sum_{j\in V_{tr}}\left(N_{j,c}\times tr_{u,j}\right)}{\sum_{i\in T_{u}} tr_{u,i}}$$ where $$\label{n_tagweight}
N_{j,c}=\sum_{k\in Tags_{c}}\frac{tw_{j,k}}{\sum_{l\in V_{tg}}tw_{j,l}}$$ $c$ : emotion category\
$N_{j,c}$ : the association of track $j$ with $c$\
$T_u$ : all tracks for user $u$\
$V_{tg}$ : all tags obtained after tag filtering\
$V_{tr}$ : all tracks having at least one tag from $V_{tg}$\
$tr_{u,i}$ : playcount of track $i$ for user $u$\
$tw_{j,k}$ : tag weight of tag $k$ for track $j$\
$Tags_c$ : all tags in $V_{tg}$ which belong to $c$\
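The definitions above can be turned into a direct sketch (with invented toy data, not study data): $N_{j,c}$ is the normalized weight of track $j$'s tags lying in category $c$, and $S_{u,c}$ is the playcount-weighted average of $N_{j,c}$ over the user's tracks.

```python
def track_association(track_tags, category_tags):
    """N_{j,c}: fraction of track j's total tag weight lying in category c."""
    total = sum(track_tags.values())          # sum over l in V_tg of tw_{j,l}
    in_c = sum(w for t, w in track_tags.items() if t in category_tags)
    return in_c / total if total else 0.0

def prevalence_score(playcounts, tags_by_track, category_tags):
    """S_{u,c} for one user, weighted by playcounts tr_{u,j}."""
    num = sum(track_association(tags_by_track[j], category_tags) * n
              for j, n in playcounts.items() if j in tags_by_track)
    den = sum(playcounts.values())            # sum over i in T_u of tr_{u,i}
    return num / den

# Toy example: two tracks; t1 is mostly tagged "sad", t2 is not
tags_by_track = {"t1": {"sad": 100, "mellow": 50}, "t2": {"happy": 80}}
playcounts = {"t1": 3, "t2": 1}
sadness_tags = {"sad", "depressing"}
print(prevalence_score(playcounts, tags_by_track, sadness_tags))  # 0.5
```

Here $N_{t1,\text{Sadness}} = 100/150 = 2/3$ and $N_{t2,\text{Sadness}} = 0$, so $S = (2/3 \cdot 3 + 0 \cdot 1)/4 = 0.5$.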
Since the objective of this work was to identify which of the 9 categories are most characteristic of At-Risk individuals when compared to No-Risk individuals, we performed group-level statistical tests of difference as described in the following section.
Emotion-based Analysis: Group Differences and Bootstrapping
------------------------------------------------------------
For each emotion category, we performed a two-tailed Mann-Whitney U (MWU) test on the *Emotion Prevalence Scores* of the No-Risk and At-Risk groups. For a given category, the group with the higher mean rank from the MWU test has the stronger association with that category. For the emotion categories that exhibited significant differences (*p* < .05), we further performed bootstrapping to guard against Type I error and ensure that the observed differences were not due to chance. Bootstrapping (random sampling with replacement) was performed with 10,000 iterations: each iteration randomly assigned participants to the At-Risk or No-Risk group and the U-statistic was recalculated. This yields a bootstrap distribution for the U-statistic, from which we estimated the significance of the observed statistic.
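A sketch of this significance check with synthetic scores. The loop reassigns group labels by shuffling (a permutation-style stand-in for the paper's resampling with replacement) and uses fewer iterations than the paper's 10,000 for speed; it relies on scipy's `mannwhitneyu`.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
no_risk = rng.normal(0.009, 0.003, 60)   # toy prevalence scores
at_risk = rng.normal(0.013, 0.003, 40)

obs_u = mannwhitneyu(at_risk, no_risk, alternative="two-sided").statistic

pooled = np.concatenate([no_risk, at_risk])
n_at, n_no = len(at_risk), len(no_risk)
null_u = []
for _ in range(2000):
    rng.shuffle(pooled)                  # random reassignment to groups
    u = mannwhitneyu(pooled[:n_at], pooled[n_at:],
                     alternative="two-sided").statistic
    null_u.append(u)

# two-sided p: how extreme is obs_u relative to the null distribution,
# measured as deviation from the null expectation n_at * n_no / 2
null_u = np.asarray(null_u)
center = n_at * n_no / 2
p = float(np.mean(np.abs(null_u - center) >= abs(obs_u - center)))
print(round(p, 4))
```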
Genre-Prevalence Analysis
-------------------------
To further analyse the types of music associated with the tags of each emotion category, we explored genre-related social tags. To select the genre-related tags from our data, we used the results of the multi-stage model proposed by Ferrer et al. [@beyondgenre], which assigns Last.fm tags to different semantic layers, namely genre, artist, affect, instrument, etc. To understand the underlying genre tag structure and obtain broader genre categories, we employed the approach described by Ferrer et al. [@rafael] to cluster genre tags (details in Equation 1 in the Supplementary material). In this approach, music tags are hierarchically organized by means of latent semantic analysis, revealing a taxonomy of music tags. The resulting clusters were labelled based on the genre tags constituting the core points of each cluster [@treecut].
For the emotion categories that exhibited significant group differences, the genre tags co-occurring with their tags were used to calculate a user-specific *Genre Prevalence Score* for each genre-tag cluster. The formula was similar to that of the *Emotion Prevalence Score*, with the following terms redefined: $c$ represents the genre cluster, $T_u$ is the set of all tracks of user $u$ that have a tag belonging to the particular emotion category, and $V_{tg}$ is the set of all genre tags. Finally, we computed a biserial correlation between the *Genre Prevalence Scores* for each genre-tag cluster and the users’ risk for depression (represented as a dichotomous variable with 0 = No-Risk; 1 = At-Risk).
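This correlation step can be sketched with scipy's point-biserial correlation (the standard form of a biserial correlation against a truly dichotomous variable); the scores below are toy values, not data from the study.

```python
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(2)
risk = np.array([0] * 50 + [1] * 50)      # 0 = No-Risk, 1 = At-Risk
# toy genre-prevalence scores, slightly elevated for the At-Risk half
scores = rng.normal(0.10, 0.02, 100) + 0.02 * risk

r, p = pointbiserialr(risk, scores)
print(round(float(r), 2))
```

A positive `r` indicates that higher prevalence of the genre cluster goes with At-Risk membership, matching the sign convention used in the Results section.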
Results {#Results}
=======
Internal Consistency and Criterion Validity
-------------------------------------------
The Cronbach’s alphas for the *Unhealthy* scores obtained from the HUMS and for the K10 scores were relatively high, at 0.80 and 0.91 respectively. A significant correlation (r=0.55, df=539, p<0.001) between the *Unhealthy* score and K10 was found, which is in concordance with past research in the field [@hums]. Also, in line with previous research [@klein; @mckenzie], a significant positive correlation was observed between the K10 score and Neuroticism (r=0.68, p<0.0001), adding to the internal consistency of the data and confirming construct validity. As can be seen in Figure \[boxplot\], the At-Risk group displayed higher mean and median *Unhealthy* scores than the No-Risk group, while *Healthy* scores were comparable.
\[boxplot\]
![Boxplot of HUMS scores for No-Risk and At-Risk Groups.[]{data-label="boxplot"}](boxplot_newf.png){width="0.7\columnwidth"}
Partial correlations between *Unhealthy*, *Healthy*, and K10 are presented in . K10 scores exhibit significant positive correlation only with *Unhealthy* for both the groups. The moderate correlation between *Healthy* and *Unhealthy* scores for the No-Risk population indicates that both of these subscales capture a shared element, most likely active music listening.
Emotion-based Results
---------------------
The data (*t*=$\pm$3, *n*=500) consisted of 380,261 social tags. The tag filtering process resulted in a final set of 1254 unique tags (Mean = 109, SD = 24 tags per user), which were then mapped onto the VA and VAD emotion spaces. Figure 3 in the supplementary material displays the tags closest to each of the emotion categories based on the VA and VAD models.
\[gemsboxplot\]
![Boxplot of Emotion Prevalence Scores for No-Risk and At-Risk based on VA.[]{data-label="gemsboxplot"}](prevalence_bp.png){height="0.55\columnwidth" width="0.85\columnwidth"}
Figure \[gemsboxplot\] illustrates the *Emotion Prevalence Scores* of both groups for the VA mapping (Supplementary Figure 4 displays the same for VAD, showing a similar distribution). The overall pattern appears similar between the two groups, with minor observable differences for the emotion categories *wonder, transcendence, tenderness, tension, and sadness*. displays the emotion categories that exhibited significant differences between the groups (MWU U-statistic and bootstrap p-values in Table 2 of the Supplementary material). The At-Risk group consistently exhibits higher Prevalence Scores in *Sadness*, while the No-Risk group vacillates between *Wonder* and *Transcendence*. The most significant difference was observed in *Sadness* (VA model, *t*=$\pm$3, *n*=100), with a significantly greater *Emotion Prevalence Score* for the At-Risk group (Median = 0.0117) than the No-Risk group (Median = 0.0091), U=11414.5, p=0.009. A significant difference was also observed for *Tenderness*, with a greater *Emotion Prevalence Score* for the At-Risk group (Median = 0.1271) than the No-Risk group (Median = 0.1189), U=11905.0, p=0.04. On the other hand, the *Emotion Prevalence Score* in *Wonder* (VA model, *t*=$\pm$2, *n*=100) was significantly greater for the No-Risk group (Median = 0.0131) than the At-Risk group (Median = 0.0086), U=16270.0, p=0.003. The word-clouds of tags comprising *Sadness* and *Tenderness* are displayed in and . A score per tag is computed for each group (Equation 2 in the Supplementary material), and a rank was assigned to each tag based on the absolute difference of its scores between the No-Risk and At-Risk groups. The size of a tag in the word-cloud is directly proportional to its rank in the category. Supplementary figures 5 and 6 depict the word-clouds for *Transcendence* and *Wonder*.
We also assessed the predictive power of social tags for risk of depression by classifying participants into the At-Risk or No-Risk group using their tag information (feature details in Equation 4 in the Supplementary material). An SVM model with an RBF kernel (C=2301, gamma=101) gave the best results, with a 5-fold cross-validation accuracy of 66.4%.
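A sketch of this evaluation with scikit-learn. The features are random stand-ins for the per-user tag features, and default hyperparameters are used since the paper's reported best settings (C=2301, gamma=101) were tuned to its own feature space.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 9))           # toy per-user feature vectors
y = (X[:, 0] > 0).astype(int)           # toy binary risk labels

clf = SVC(kernel="rbf")                 # paper: C=2301, gamma=101 via grid search
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(round(float(acc), 2))
```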
Genre-Prevalence Results
------------------------
Out of the 5062 tags assigned to the genre layer in [@beyondgenre], 94% (4766) were present in our data. The clustering of the genre tags resulted in 17 clusters, displayed in Table 3 of the Supplementary material. Figure 7 in the Supplementary material displays the mean genre prevalence scores of both groups for these 17 clusters. Overall, the genre cluster representing *indie-, alternative-pop/rock* (Cluster 4) is predominant in both groups. Genre prevalence scores were then evaluated specific to the tracks associated with the emotion categories that exhibited the most significant group differences, that is, *Wonder* and *Sadness* (VA model, *t*=$\pm$3, *n*=100). For *Sadness*-specific tracks, the highest correlation (r=0.2, p<0.01) was observed between the Genre-Prevalence scores in the cluster representing *neo psychedelic-, avant garde-, dream-pop* and the K-10 scores. Also, genre clusters representing *electronic rock* (r=0.17, p<0.01), *indie-, alternative-pop/rock* (r=0.12, p<0.05), and *world music* (r=-0.11, p<0.05) demonstrated significant correlations for *Tenderness*. For *Wonder* (VA model, *t*=$\pm$2, *n*=100), the K-10 scores exhibited significant negative correlations with the *Genre Prevalence Scores* of the clusters representing *black metal* (r=-0.11, p<0.05) and *neo-progressive rock* (r=-0.13, p<0.05).
Discussion
==========
This study is the first of its kind to examine the association between risk for depression and social tags related to music listening habits as they occur naturally, as opposed to self-reported or lab-based studies. A clear difference in music listening profiles was observed between the At-Risk and No-Risk groups, particularly in terms of the emotional content of the tags. *Sadness* was significantly more prevalent in the At-Risk group, and the word-cloud of sadness was highly illustrative of other low-arousal, low-valence emotions such as *dead*, *low*, *depressed, miserable, broken*, and *lonely*. The stronger association of the At-Risk group with sadness is in concordance with past research in the field [@garrido] and confirms our hypothesis. The At-Risk group is attracted to music that reflects and resonates with their internal state. Whether this provides emotional consolation as an adaptive resource or whether it only worsens repetitive negative feelings and fuels rumination remains an open question. Nonetheless, statistically, such a listening style can be seen as a highly predictive factor of psychological distress.
In addition, *Tenderness*, which represents low arousal and high valence, was also more prevalent in the At-Risk group, especially for shorter-term ($\pm$ 2 months) music listening habits. *Tenderness* appears to be more significant in the shorter time period in addition to *Sadness*, possibly indicating that At-Risk people tend to oscillate between positive and negative states within a general state of low arousal. These findings appear to be very much in line with the results of Houben et al. [@houben2015relation], who found high levels of emotional inertia and emotional variability to be linked with depression and ill-being. The consistent results related to *Sadness* in our study reflect the overall inert states in which the At-Risk tend to be. On the other hand, the *Tenderness* results reflect their tendency to jump to positive affective states while retaining low arousal, thereby demonstrating emotional variability. Furthermore, the omission of the *Dominance* dimension causes most of the tags to shift from *Tenderness* to *Transcendence* and from *Transcendence* to *Wonder*, which explains the reversal of the group association evidenced in the results. Nevertheless, *Sadness* appears to be the predominant state, as it is largely consistent for $\pm$3 months as well as $\pm$2 months of music listening histories.
The At-Risk group also exhibited a tendency to gravitate towards music with genre tags such as *neo-psychedelic-*, *avant garde-*, *dream-pop* co-occurring with *Sadness*. Such genres are characterized by ethereal-sounding mixtures that often result in a wall of sound comprising electronic textures with obscured vocals. Similarly, the genres co-occurring with *Tenderness* (VAD model) or *Transcendence* (VA model) comprise similar mixtures with heavy synthesizer-based sounds (such as the mellotron) that seem otherworldly. Such out-of-this-world soundscapes have also been associated with transcendent, druggy, and mystical imagery and immersive experiences [@goddard2013resonances]. These results strengthen the claim that depression may foster musical immersion as an escape from a reality that is perceived to be adverse. This is somewhat in line with prior research that has linked depression with the use of music for avoidant coping [@miranda2009music]. On the other hand, the music listening history of the No-Risk group was characterized by an inclination to listen to music tagged with positive valence and higher arousal, as characterized by *Wonder*, with a predilection for *Heavy metal* and *Progressive Rock* genres.
The use of only single-word tags in the third stage of tag filtering is one limitation of this study, due to the incompatibility of the word-emotion induction model with multi-word tags. Our results could potentially be extended to find significant differences in emotional concepts after considering multi-word social tags. We achieve a decent classification accuracy of 66.4%, significantly above chance level, which indicates that social tags may indeed be indicative of At-Risk behavior. This may be improved further by considering additional descriptors of music such as acoustic features and the lyrical content of the tracks. Another future direction is to incorporate the temporal evolution of these emotion categories in the listening histories to characterize depression, since past research suggests depression to be a result of the gradual development of daily emotional experiences [@psych]. This study is intended to be one of many that will be helpful in the early detection of depression and other potential mental disorders in individuals using their digital music footprints.
---
abstract: |
A 3-manifold is Haken if it contains a topologically essential surface. The Virtual Haken Conjecture says that every irreducible 3-manifold with infinite fundamental group has a finite cover which is Haken. Here, we discuss two interrelated topics concerning this conjecture.
First, we describe computer experiments which give strong evidence that the Virtual Haken Conjecture is true for hyperbolic 3-manifolds. We took the complete Hodgson-Weeks census of 10,986 small-volume closed hyperbolic 3-manifolds, and for each of them found finite covers which are Haken. There are interesting and unexplained patterns in the data which may lead to a better understanding of this problem.
Second, we discuss a method for transferring the virtual Haken property under Dehn filling. In particular, we show that if a 3-manifold with torus boundary has a Seifert fibered Dehn filling with hyperbolic base orbifold, then most of the Dehn filled manifolds are virtually Haken. We use this to show that every non-trivial Dehn surgery on the figure-8 knot is virtually Haken.
address: |
Department of Mathematics, Harvard University\
Cambridge MA, 02138, USA
author:
- |
Nathan M Dunfield\
William P Thurston
title: |
The virtual Haken conjecture:\
Experiments and examples
---
Introduction
============
Let $M$ be an orientable 3-manifold. A properly embedded orientable surface $S \neq S^2$ in $M$ is *incompressible* if it is not boundary parallel, and the inclusion $\pi_1(S) \to \pi_1(M)$ is injective. A manifold is *Haken* if it is irreducible and contains an incompressible surface. Haken manifolds are by far the best understood class of 3-manifolds. This is because splitting a Haken manifold along an incompressible surface results in a simpler Haken manifold. This allows induction arguments for these manifolds.
However, many irreducible 3-manifolds with infinite fundamental group are not Haken, e.g. all but 4 Dehn surgeries on the figure-8 knot. It has been very hard to prove anything about non-Haken manifolds, at least without assuming some sort of additional Haken-like structure, such as a foliation or lamination.
Sometimes, a non-Haken 3-manifold $M$ has a finite cover which is Haken. Most of the known properties for Haken manifolds can then be pushed down to $M$ (though showing this can be difficult). Thus, one of the most interesting conjectures about 3-manifolds is Waldhausen’s conjecture [@Waldhausen68]:
Suppose $M$ is an irreducible 3-manifold with infinite fundamental group. Then $M$ has a finite cover which is Haken.
A 3-manifold satisfying this conjecture is called *virtually Haken*. For more background and references on this conjecture see Kirby’s problem list [@KirbyList], problems 3.2, 3.50, and 3.51. See also [@CooperLong99; @CooperLongSSSSS] and [@Lubotzky96; @Lubotzky98] for some of the latest results toward this conjecture. The importance of this conjecture is enhanced because it’s now known that 3-manifolds which are virtually Haken are geometrizable [@GabaiMeyerhoffThurston; @Gabai97; @Scott83; @MeeksSimonYau; @Mess; @Gabai92; @CassonJungreis].
There are several stronger forms of this conjecture, including asking that the finite cover be not just Haken but a surface bundle over the circle. We will be interested in the following version. Let $M$ be a closed irreducible 3-manifold. If $H_2(M,{{\mathbb Z}}) \neq 0$ then $M$ is Haken, as any non-zero class in $H_2(M,{{\mathbb Z}})$ can be represented by an incompressible surface. Now $H_2(M, {{\mathbb Z}})$ is isomorphic to $H^1(M,{{\mathbb Z}})$ by Poincar[é]{} duality, and $H^1(M,{{\mathbb Z}})$ is a free abelian group. So if the first Betti number of $M$ is $\beta_1(M) = \dim H_1(M, {{\mathbb R}}) = \dim H^1(M, {{\mathbb R}})$, then $\beta_1(M) > 0$ implies $M$ is Haken. As the cover of an irreducible 3-manifold is irreducible [@MeeksSimonYau], a stronger form of the Virtual Haken Conjecture is:
\[vpbn\] Suppose $M$ is an irreducible 3-manifold with infinite fundamental group. Then $M$ has a finite cover $N$ where $\beta_1(N) > 0$.
We will say that such an $M$ has *virtual positive betti number*. Note that $\beta_1(N) > 0$ if and only if $H_1(N, {{\mathbb Z}})$, the abelianization of $\pi_1(N)$, is infinite. So an equivalent, more algebraic, formulation of Conjecture \[vpbn\] is:
Suppose $M$ is an irreducible 3-manifold. Assume that $\pi_1(M)$ is infinite. Then $\pi_1(M)$ has a finite index subgroup with infinite abelianization.
Here, we focus on this form of the Virtual Haken Conjecture because its algebraic nature makes it easier to examine both theoretically and computationally. While in theory one can use normal surface algorithms to decide if a manifold is Haken [@JacoOertel], in practice these algorithms are prohibitively slow in all but the simplest examples. Computing homology is much easier, as it boils down to computing the rank of a matrix. Also, it’s probably true that having virtual positive betti number isn’t much stronger than being virtually Haken (see the discussion of [@Lubotzky96] in Section \[dehn\_questions\] below).
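The remark about matrix ranks can be made concrete: the abelianization of a finitely presented group is determined by the matrix of exponent sums of generators in the relators, and the betti number of the corresponding cover equals the number of generators minus the rank of that matrix over $\mathbb{Q}$. A self-contained sketch (the presentation at the end is an invented toy example, not a 3-manifold group):

```python
from fractions import Fraction

def rank_over_Q(rows):
    """Rank of an integer matrix over the rationals, by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    nrows = len(m)
    ncols = len(m[0]) if m else 0
    rank, col = 0, 0
    while rank < nrows and col < ncols:
        pivot = next((r for r in range(rank, nrows) if m[r][col]), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(nrows):
            if r != rank and m[r][col]:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

def betti_1(num_gens, relators):
    """beta_1 of a presentation; relators given as exponent-sum vectors."""
    return num_gens - rank_over_Q(relators)

# <a, b, c | a^2 b^-2, b^2 c^-2>: abelianization has rank 1
print(betti_1(3, [[2, -2, 0], [0, 2, -2]]))  # → 1
```

In practice one also wants torsion, for which the Smith normal form (as computed by [`GAP`]{}'s abelian-invariants routines) replaces the plain rank computation.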
Outline of the paper
--------------------
This paper examines the Virtual Haken Conjecture in two interrelated parts:
Here, we describe experiments which strongly support the Virtual Positive Betti Number Conjecture. We looked at the 10,986 small-volume hyperbolic manifolds in the Hodgson-Weeks census, and tried to show that they had virtual positive betti number. In all cases, we succeeded. It was natural to restrict to hyperbolic 3-manifolds for our experiment since, in practice, all 3-manifolds are geometrizable and the Virtual Positive Betti Number Conjecture is known for all other kinds of geometrizable 3-manifolds.
Section \[experiment\] gives an overview of the experiment and discusses the results and limitations of the survey. Sections \[techniques\] and \[comp\_rank\] describe the techniques used to compute the homology of the covers. Section \[simple\] discusses some interesting patterns that we found among the covers where the covering group is a simple group. Some further questions are given in Section \[questions\].
Here we consider Dehn fillings of a fixed 3-manifold $M$ with torus boundary. Generalizing work of Boyer and Zhang [@BoyerZhang00], we give a method for transferring virtual positive betti number from one filling of $M$ to another. Roughly, Theorem \[SF\_filling\] says that if $M$ has a filling which is Seifert fibered with hyperbolic base orbifold, then most Dehn fillings have virtual positive betti number. We use this to give new examples of manifolds $M$ where all but finitely many Dehn fillings have virtual positive betti number. In Section \[Whitehead\], we show this holds for most surgeries on one component of the Whitehead link.
In the case of figure-8 knot, we use work of Holt and Plesken [@HoltPlesken92] to amplify our results, and prove that every non-trivial Dehn surgery on the figure-8 knot has virtual positive betti number (Theorem \[fig8thm\]).
In Section \[dehn\_questions\], we discuss possible avenues to other results using fillings which are Haken rather than Seifert fibered. This approach is easiest in the case of toroidal Dehn fillings, and using these techniques we prove (Theorem \[sister\_thm\]) that all Dehn fillings on the sister of the figure-8 complement satisfy the Virtual Positive Betti Number Conjecture.
The first author was partially supported by an NSF Postdoctoral Fellowship. The second author was partially supported by NSF grants DMS-9704135 and DMS-0072540. We would like to thank Ian Agol, Daniel Allcock, Matt Baker, Danny Calegari, Greg Kuperberg, Darren Long, Alex Lubotzky, Alan Reid, William Stein, and Dylan Thurston for useful conversations. We also thank the authors of the computer programs [`SnapPea`]{} [@SnapPea] and [`GAP`]{} [@GAP4], which were critical for our computations.
The experiment {#experiment}
==============
The manifolds
-------------
We looked at the 10,986 hyperbolic 3-manifolds in the Hodgson-Weeks census of small-volume closed hyperbolic 3-manifolds [@SnapPea]. The volumes of these manifolds range from that of the smallest known manifold ($0.942707...$) to $6.5$. While there are infinitely many closed hyperbolic 3-manifolds with volume less than $6.5$, there are only finitely many if we also bound the injectivity radius from below. The census manifolds are an approximation to all closed hyperbolic 3-manifolds with volume $<6.5$ and injectivity radius $>0.3$.
A more precise description of these manifolds is this. Start with the Callahan-Hildebrand-Weeks census of cusped finite-volume hyperbolic 3-manifolds, which is a complete list of those having ideal triangulations with 7 or fewer tetrahedra [@HildebrandWeeks; @CallahanHildebrandWeeks]. The closed census consists of all the Dehn fillings on the 1-cusped manifolds in the cusped census where the closed manifold has shortest geodesic of length $>0.3$.
Only 132 of the 10,986 manifolds have positive betti number. It is also worth mentioning that many (probably the vast majority) of these manifolds are non-Haken. For the 246 manifolds with volume less than 3, exactly 15 are Haken [@Dunfield:haken].
Computational framework
-----------------------
For each 3-manifold, we started with a finite presentation of its fundamental group $G$, and then looked for a finite index subgroup $H$ of $G$ which has infinite abelianization. There is a fair amount of literature on how to find such an $H$, because finding a finite index subgroup with infinite abelianization is one of the main computational techniques for proving that a given finitely presented group is infinite. See [@Plesken99] for a survey. The key idea which simplifies the computations is contained in [@HoltPlesken92], which we used in the form described in Section \[techniques\].
We used [`SnapPea`]{} [@SnapPea] to give presentations for the fundamental groups of each of the manifolds in the closed census. We then used [`GAP`]{} [@GAP4] to find various finite index subgroups and compute the homology of the subgroups (see Sections \[techniques\]-\[comp\_rank\]).
Types of covers
---------------
When looking for a subgroup with positive betti number, we tried a number of different types of subgroups. Some types were much better at producing homology than others. Those that worked well were:
- Abelian/$p$-group covers with exponent $2$ or $3$.
- Low ($<20$) index subgroups. Coset enumeration techniques allow one to enumerate low-index subgroups [@Sims94]. Given such a subgroup $H < G$, we looked at the largest normal subgroup contained in $H$, to maximize the chance of finding homology.
- Normal subgroups where the quotient is a finite simple group. These were found by choosing the simple group in advance and then finding all epimorphisms of $G$ onto that group.
The following types were inefficient in producing homology:
- Abelian/nilpotent covers with exponents $> 3$.
- Dihedral covers.
- Intersections of subgroups of the types listed in the first list (the useful types).
It would be nice to have heuristics which explain why some things worked and others didn’t (we plan to explore this further in [@DunfieldThurstonHeuristics]). Also, while intersecting subgroups was not efficient in general, there were certain manifolds where the only positive betti number covers we could find were of this type.
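The "simple quotient" search in the list above can be sketched by brute force for a tiny target group: enumerate all pairs of images of the generators, keep those satisfying the relators, and test surjectivity. The presentation $\langle a, b \mid a^2, b^3 \rangle$ and target $S_3$ below are toy choices for illustration (the actual computations used [`GAP`]{} on 3-manifold groups and much larger simple groups).

```python
from itertools import permutations, product

def compose(p, q):                     # (p ∘ q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def power(p, n):
    r = tuple(range(len(p)))
    for _ in range(n):
        r = compose(p, r)
    return r

S3 = list(permutations(range(3)))
identity = (0, 1, 2)

def is_hom(a, b):                      # relators: a^2 = b^3 = 1
    return power(a, 2) == identity and power(b, 3) == identity

def generates_all(a, b):               # does <a, b> equal S_3?
    seen = {identity}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for h in (compose(g, a), compose(g, b)):
            if h not in seen:
                seen.add(h)
                frontier.append(h)
    return len(seen) == len(S3)

epis = [(a, b) for a, b in product(S3, S3)
        if is_hom(a, b) and generates_all(a, b)]
print(len(epis))  # → 6: any transposition for a, any 3-cycle for b
```

For each epimorphism found, the kernel is the fundamental group of a degree-6 cover, whose homology is then checked as in Section \[comp\_rank\].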
Results
-------
We were able to find positive betti number covers for all of the Hodgson-Weeks census manifolds. For most of the manifolds, it was easy to find such a cover. For instance, just looking at abelian covers and subgroups of index $\leq 6$ works for $42\%$ of the manifolds. See Table \[quick\_success\] for more about the degrees of the covers we used.
For each of the manifolds, we stored a presentation of the fundamental group and a homomorphism from that finitely presented group to $S_n$ whose kernel has positive betti number. This information is available on the web at [@VirtHakenWebsite], together with the [`GAP`]{} code we used for the computations, and will hopefully be useful as a source of examples. The amount of computer time used to find all the covers was in excess of one CPU-year, but the amount of time needed to check all the covers for homology, given the data available at [@VirtHakenWebsite], is only a few hours.
There was one manifold in particular where it was very difficult to find a cover with positive betti number. This manifold is $N = s633(2,3)$. Its volume is $4.49769817315...$ and $H_1(N) = {{\mathbb Z}}/79{{\mathbb Z}}$. The manifold $N$ has a genus-2 Heegaard splitting, and is the 2-fold branched cover of the 3-bridge knot in Figure \[knot\].
![ The 2-fold cover branched over this knot is the manifold $N$. Figure created with [@Knotscape].[]{data-label="knot"}](knot_dt.eps)
One of the reasons that $N$ was so difficult is that $\pi_1(N)$ has very few low-index subgroups (the smallest index is 13). In the end, a search using `Magma` [@MAGMA] turned up a subgroup of index 14 which has positive betti number. It is very hard to enumerate all finite-index subgroups for an index as large as 14, roughly because the size of $S_n$ is $n!$; finding this index 14 subgroup took 2 days of computer time.
While $\pi_1(N)$ has few subgroups of low index, it does have a reasonable number of simple quotients, and might be a good place to look for a co-final sequence of covers which fail to have positive betti number. The manifold $N$ is non-Haken, but it contains an essential lamination (and thus a genuine lamination [@Calegari:promoting]). Arithmetically, it is quite a complicated manifold—[`Snap`]{} [@Snap] computes that the trace field has a minimal polynomial $p(x)$ whose degree is 51 and whose largest coefficient is about $4 \times 10^7$. The coefficients of $p$ are, starting with the constant term:
$1$, $24$, $223$, $929$, $909$, $-6163$, $-20232$, $-2935$, $79745$, $121259$, $-57077$, $-428280$, $-507427$, $689749$, $2245466$, $-519994$, $-5455251$, $355551$, $9513149$, $-1958013$, $-12213255$, $7478063$, $10535124$, $-17696676$, $-4109720$, $30159462$, $-2803266$, $-39076707$, $5291640$, $39199917$, $-3032906$, $-30650313$, $-365203$, $18711624$, $1997701$, $-8892931$, $-1776338$, $3259601$, $951237$, $-903591$, $-352258$, $182336$, $93101$, $-24677$, $-17396$, $1748$, $2197$, $33$, $-169$, $-17$, $6$, $1$.
Overlap with known results
--------------------------
The manifolds we examined have little overlap with those covered by the known results about the Virtual Haken Conjecture. The only general results are those of Cooper and Long [@CooperLong99; @CooperLongSSSSS] building on work of Freedman and Freedman [@FreedmanFreedman]. These are Dehn surgery results—they say that many “large” Dehn fillings on a 1-cusped hyperbolic 3-manifold are virtually Haken. Because “large” Dehn fillings usually have short geodesics, the Cooper-Long results probably apply to very few, if any, of the census manifolds.
Limitations
-----------
It’s possible that the behavior we found might not hold in general, because the census manifolds are non-generic in a couple of ways. First, they all have fundamental groups with presentations with at most 3 generators. About 75% have 2-generator presentations. For these manifolds, it seems that (at least most of the time) the number of generators and the Heegaard genus coincide. So most of these manifolds have Heegaard genus 2 or 3.
![The minimally-twisted 5-chain link.[]{data-label="5chain"}](5chain.eps)
Moreover, Callahan, Hodgson, and Weeks (unpublished) showed that almost all of the census manifolds are Dehn surgeries on a single 5-component link, the minimally twisted 5-chain shown in Figure \[5chain\]. Let $L$ be this link and $M = S^3 \setminus N(L)$ be its exterior. The link $L$ is invariant under rotation of $\pi$ about the dotted grey axis. The induced involution of $M$ acts on each torus in ${\partial}M$ by the elliptic involution. Thus the involution of $M$ extends to an orientation preserving involution of every Dehn filling of $M$. So almost all of the census manifolds have an orientation preserving involution whose fixed point set is a link and whose quotient has underlying space $S^3$. While any manifold which has a genus-2 Heegaard splitting has such an involution [@BirmanHilden], this says that the other 25% of the census manifolds are also special. The presence of such an involution has proven useful in the past. For instance, it implies that the manifold is geometrizable. So it’s possible that our computations only reflect the situation for manifolds of this type.
The 5-chain $L$ is a truly beautiful link, and it’s worth describing some of its properties here. The orbifold $N$ which is $M$ modulo this involution is easy to describe. Take the triangulation ${{\mathcal T}}$ of $S^3$ gotten by thinking of $S^3$ as the boundary of the 4-simplex. The 1-skeleton of ${{\mathcal T}}$ is called the *pentacle*, see Figure \[pentacle\].
![The pentacle.[]{data-label="pentacle"}](pentacle.eps)
If we take $S^3$ minus an open ball about each vertex in ${{\mathcal T}}$, and label what’s left of each edge of the pentacle by ${{\mathbb Z}}/2{{\mathbb Z}}$, we get exactly the orbifold $N$!
We can put a hyperbolic structure on $N$ and thus $M$ by making each tetrahedron in ${{\mathcal T}}$ a regular ideal tetrahedron. Thus the volume of $M$ is $10 v_3 = 10.149416064...$, and further $M$ is arithmetic and commensurable with the Bianchi group ${\mathrm{PSL}_{2} \mathcal{O}(
\sqrt{-3})}$. The symmetric group $S_5$ acts on the 4-simplex by permuting the vertices, inducing an action of $S_5$ on $N$. This action is exactly the group of isometries of $N$. The isometry group of $M$ is $S_5 \times {{\mathbb Z}}/2{{\mathbb Z}}$, where the ${{\mathbb Z}}/2{{\mathbb Z}}$ is the rotation about the axis.
The manifold $M$ fibers over the circle, and in fact every face of the Thurston norm ball is fibered. Here’s an explicit way to see that $N$ fibers over the interval $I$ with mirrored endpoints (this fibration lifts to a fibration of $M$ over $S^1$). Take any Hamiltonian cycle in the 1-skeleton of ${{\mathcal T}}$. The complementary edges also form a Hamiltonian cycle. Split the fat vertices of ${{\mathcal T}}$ (the cusps of $N$) in the obvious way in space so that these two cycles become the unlink, with cusps stretched between them. Then the special fibers over the ${{\mathbb Z}}/2{{\mathbb Z}}$ endpoints of $I$ are two pentagons, spanning the two Hamiltonian cycles. The other fibers are 5-punctured spheres.
Techniques for computing homology {#techniques}
=================================
Given a finite index subgroup $H$ of a finitely presented group $G$, a simplified version of the Reidemeister-Schreier method produces a matrix $A$ with integer entries whose cokernel is the abelianization of $H$. Computing this matrix is not very time-consuming. The hard part of computing the rank of the abelianization of $H$ is finding the rank of $A$. Computing the rank of a matrix is $O(n^3)$ if field operations are constant time. We need to compute the rank over ${{\mathbb Q}}$, so the time needed is somewhat more than that (see Section \[comp\_rank\]). The side lengths of $A$ are usually about $n = [G:H]$, which at $O(n^3)$ is prohibitive for many of the covers that we looked at (the largest covering group we needed was ${\mathrm{PSL}_{2} {{\mathbb F}}_{101}}$, whose order is 515,100).
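To make the linear algebra concrete: the search phase only needs ranks over ${{\mathbb F}}_p$, which is straightforward Gaussian elimination. Our actual computations used [`GAP`]{} and [`MAGMA`]{}; the following Python sketch (function name ours) is purely illustrative.

```python
def rank_mod_p(A, p):
    """Rank of an integer matrix A over the finite field F_p, by
    straightforward Gaussian elimination (O(n^3) field operations)."""
    M = [[x % p for x in row] for row in A]
    rows, cols = len(M), len(M[0]) if M else 0
    rank, r = 0, 0
    for c in range(cols):
        # Find a pivot in column c at or below row r.
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        inv = pow(M[r][c], -1, p)        # p prime, so inverses exist
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        r += 1
        rank += 1
        if r == rows:
            break
    return rank
```

The ${{\mathbb F}}_p$-betti number of the cover is then the number of generators of the abelianization (the corank side of $A$) minus this rank.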
So one wants to keep the degree of the cover, or really the size of the matrix involved, as small as possible. One way to do this, first used in this context by Holt and Plesken [@HoltPlesken92], is the following application of the representation theory of finite groups. Suppose $H$ is a finite index subgroup of $G$. Assume that $H$ is normal, so the corresponding cover is regular. Set $Q = G/H$ and let $f{\colon\thinspace}G \to Q$ be the quotient map. The group $Q$ acts on the homology of the cover $H_1(H, {{\mathbb C}})$, giving a representation of $Q$ on the vector space $H_1(H, {{\mathbb C}})$. Another description of $H_1(H, {{\mathbb C}})$ is that it is the homology with twisted coefficients $H_1(G, {{\mathbb C}}Q)$. As a $Q$-module, ${{\mathbb C}}Q$ decomposes as ${{\mathbb C}}Q = V_1^{n_1} \oplus V_2^{n_2}
\oplus \dots \oplus V_k^{n_k}$ where the $V_i$ are simple $Q$-modules and $\dim V_i = n_i$. So $$H_1(H) = H_1(G, {{\mathbb C}}Q) = H_1(G, V_1)^{n_1} \oplus H_1(G, V_2)^{n_2}
\oplus \dots \oplus H_1(G, V_k)^{n_k}.$$ Since the dimensions of the $V_i$ are usually much less than the order of $Q$, the matrices involved in computing $H_1(G, V_i)$ are much smaller than the one you would get by applying Reidemeister-Schreier to the subgroup $H$. For instance, ${\mathrm{PSL}_{2} {{\mathbb F}}_p}$ has order about $(1/2) p^3$, but every $V_i$ has dimension about $p$. If we want to show that $H_1(H, {{\mathbb C}})$ is non-zero, we just have to compute that a single $H_1(G, V_i)$ is non-zero.
There are a couple of difficulties in computing $H_1(G, V_i)$. First, to do the computation rigorously, we need to compute not over ${{\mathbb C}}$ but over a finite extension of ${{\mathbb Q}}$. Now there is a field $k$ so that $k
Q$ splits over $k$ the same way as ${{\mathbb C}}Q$ splits over ${{\mathbb C}}$. However, the matrices we need to compute $H_1(G, V_i)$ will have entries in $k$, whereas the matrix given to us by Reidemeister-Schreier has integer entries. If $A$ is a matrix with entries in $k$, to compute its rank over ${{\mathbb Q}}$ one can form an associated ${{\mathbb Q}}$-matrix $B$ by embedding $k$ as a subalgebra of ${\mathrm{GL}_{n} {{\mathbb Q}}}$ where $n$ is $[k : {{\mathbb Q}}]$ (see e.g. [@PleskenSouvignier98]). The rank of $B$ can then be computed using one of the techniques for integer matrices. However, the size of $B$ is the size of $A$ times $[k : {{\mathbb Q}}]$, so this eats up part of the apparent advantage of computing just the $H_1(G, V_i)$.
The other problem is that we may not know what the irreducible representations of $Q$ are, especially if we don’t know much about $Q$. While computing the character table of a finite group is a well-studied problem, the problem of finding the actual representations is harder and not one of the things that [`GAP`]{} or other standard programs can do. Even when the representations of $Q$ are explicitly known (e.g. $Q = {\mathrm{PSL}_{2} {{\mathbb F}}_p}$), it can be time-consuming to tell the computer how to construct the representations. For more on computing the actual representations see [@Dixon70; @PleskenSouvignier97].
We used the following modified approach, which avoids the two difficulties just mentioned while still reducing the size of the matrices considerably. Suppose we are given a normal subgroup $H$ and we want to determine if $H_1(H, {{\mathbb C}})$ is non-zero. Suppose $U$ is a subgroup of $Q$. Note $U$ is not assumed to be normal. The permutation representation of $Q$ on ${{\mathbb C}}[Q/U]$ decomposes into irreducible representations, say ${{\mathbb C}}[Q/U] = V_1^{e_1} \oplus V_2^{e_2}
\oplus \dots \oplus V_k^{e_k}$. Let $K = f^{-1}(U)$, a finite index subgroup of $G$ containing $H$. Then $$H_1(K) = H_1(G,{{\mathbb C}}[Q/U]) = H_1(G, V_1)^{e_1} \oplus H_1(G, V_2)^{e_2} \oplus \dots \oplus
H_1(G, V_k)^{e_k}.$$ Suppose that $U$ is chosen so that every irreducible representation appears in ${{\mathbb C}}[Q/U]$, that is, every $e_i > 0$. Then we see that $H_1(H)$ is non-zero if and only if $H_1(K)$ is. As long as $U$ is non-trivial, the index $[G:K] = [Q:U]$ is smaller than $[G:H] = \#Q$, so computing $H_1(K)$ is easier than computing $H_1(H)$. Returning to the example of ${\mathrm{PSL}_{2} {{\mathbb F}}_p}$, there is such a $U$ of index about $p^2$, whereas the order of ${\mathrm{PSL}_{2} {{\mathbb F}}_p}$ is about $p^3/2$. Looking at a matrix with side $O(p^2)$ is a big savings over one of side $O(p^3)$.
Moreover, finding such a $U$ given $Q$ is easy. First compute the character table of $Q$ and the conjugacy classes of subgroups of $Q$ (these are both well-studied problems). For each subgroup $U$ of $Q$ compute the character $\chi_U$ of the permutation representation of $Q$ on ${{\mathbb C}}[Q/U]$. Expressing $\chi_U$ as a linear combination of the irreducible characters tells us exactly what the $e_i$ are. Running through the $U$, we can find the subgroup of lowest index where all of the $e_i > 0$.
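To make the search for $U$ concrete, here is a toy Python example with the character table of $S_3$ hard-coded (in actual computations such tables come from [`GAP`]{}; the function names here are ours). For $U = \langle (1\,2) \rangle$ the sign representation is missing from ${{\mathbb C}}[Q/U]$, so this $U$ would be rejected and the search would move on to the next subgroup.

```python
def multiplicities(perm_char, irr_chars, class_sizes):
    """Multiplicity e_i of each irreducible character in a permutation
    character, via the inner product <chi_U, chi_i>.  All characters
    here are rational, so no complex conjugation is needed."""
    order = sum(class_sizes)
    out = []
    for chi in irr_chars:
        dot = sum(s * a * b for s, a, b in zip(class_sizes, perm_char, chi))
        assert dot % order == 0          # multiplicities are integers
        out.append(dot // order)
    return out

# Character table of S_3; conjugacy classes: e, (1 2), (1 2 3)
class_sizes = [1, 3, 2]
irr = [(1, 1, 1),    # trivial
       (1, -1, 1),   # sign
       (2, 0, -1)]   # standard 2-dimensional

# Action on cosets of U = <(1 2)>: chi_U(g) = number of fixed cosets
chi_U = (3, 1, 0)
print(multiplicities(chi_U, irr, class_sizes))    # [1, 0, 1]: sign missing

# The regular representation (U trivial) contains every V_i with e_i = dim V_i
chi_reg = (6, 0, 0)
print(multiplicities(chi_reg, irr, class_sizes))  # [1, 1, 2]
```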
When we were searching for positive betti number covers, we used this method of replacing $H$ with $K = f^{-1}(U)$ and computed the ranks of the resulting matrices over a finite field ${{\mathbb F}}_p$. Once we had found an $H$ with positive ${{\mathbb F}}_p$-betti number, we did the following to check rigorously that $H$ has infinite abelianization. First, we went through *all* the subgroups $U$ of $Q$, till we found the $U$ of smallest index such that $f^{-1}(U)$ has positive ${{\mathbb F}}_p$-betti number. For this $U$, we computed the ${{\mathbb Q}}$-betti number of $f^{-1}(U)$ using one of the methods described in Section \[comp\_rank\]. Doing this kept small the matrices whose ${{\mathbb Q}}$-rank we needed to compute, and was the key to checking that the covers really had positive ${{\mathbb Q}}$-betti number. For instance, for the ${\mathrm{PSL}_{2} {{\mathbb F}}_{101}}$-cover of degree 515,100 there was a $U$ so that the intermediate cover $f^{-1}(U)$ with positive betti number had degree “only” 5,050.
It’s worth mentioning that the rank over ${{\mathbb Q}}$ was very rarely different from the rank over a small finite field. Initially, for each manifold we found a cover where the ${{\mathbb F}}_{31991}$-betti number was positive. All but 3 of those 10,986 covers had positive ${{\mathbb Q}}$-betti number.
Computing the rank over ${{\mathbb Q}}$ {#comp_rank}
=======================================
Here, we describe how we computed the ${{\mathbb Q}}$-rank of the matrices produced in the last section. Normally, one thinks of linear algebra as “easy”, but standard row-reduction is polynomial time only if field operations are constant time. To compute the rank of an integer matrix $A$ rigorously one has to work over ${{\mathbb Q}}$. Here, doing row reduction causes the size of the fractions involved to explode. There are a number of ways to try to avoid this.
The first is to use a clever pivoting strategy to minimize the size of the fractions involved [@HavasMajewski97; @HavasMajewski94; @HavasHoltRees]. This is the method built into [`GAP`]{}, and was what we used for the covers of degree less than 500, which sufficed for $99.2\%$ of the manifolds.
For all but about 7 of the remaining 94 manifolds, we used a simplified version of the $p$-adic algorithm of Dixon given in [@Dixon83]. Over a large finite field ${{\mathbb F}}_p$, we computed a basis of the kernel of the matrix. Then we used “rational reconstruction”, a partial inverse to the map ${{\mathbb Q}}\to {{\mathbb F}}_p$, to try to lift each of the ${{\mathbb F}}_p$-vectors to ${{\mathbb Q}}$-vectors (see [@Dixon83 pg. 139]). If we succeeded, we then checked that the lifted vectors were actually in the kernel over ${{\mathbb Q}}$.
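The rational reconstruction step is short enough to sketch in full. The following Python fragment (names ours) is the standard extended-Euclidean reconstruction; the prime $31991$ below is the one from our searches, but the example is purely illustrative.

```python
from math import gcd, isqrt

def rational_reconstruct(a, m):
    """Try to lift a residue a mod m to a fraction n/d with
    |n|, d <= sqrt(m/2); return None if no such fraction exists.
    Standard extended-Euclidean rational reconstruction."""
    bound = isqrt(m // 2)
    r0, r1 = m, a % m
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound or gcd(r1, abs(t1)) != 1:
        return None
    if t1 < 0:
        r1, t1 = -r1, -t1
    return r1, t1                     # (numerator, denominator)

p = 31991                             # the prime used in our searches
a = 3 * pow(7, -1, p) % p             # the residue of 3/7 in F_p
print(rational_reconstruct(a, p))     # (3, 7)
```

If the lift succeeds for every basis vector of the ${{\mathbb F}}_p$-kernel, one still has to verify, as above, that the lifted vectors lie in the kernel over ${{\mathbb Q}}$.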
For 7 of the largest covers (degree 1,000–5,000), this simplification of Dixon’s algorithm fails, and we used the program [`MAGMA`]{} [@MAGMA], which has a very sophisticated $p$-adic algorithm, to check the ranks of the matrices involved.
Simple covers {#simple}
=============
To gain more insight into this problem, we looked at a range of simple covers for a randomly selected 1,000 of the census manifolds which have 2-generator fundamental groups. For these 1,000 manifolds we found all the covers where the covering group was a non-abelian finite simple group of order less than 33,000. For each cover we computed the homology. We will describe some interesting patterns we found.
-------------- ----------- --------- ------------ ------------- -------------
**Quotient** **Order** **Hit** **HavCov** **SucRate1** **SucRate2**
$A_5$ 60 14.0 26.9 52.0 52.9
$L_2(7)$ 168 17.8 28.2 63.1 66.3
$A_6$ 360 21.6 31.4 68.8 68.7
$L_2(8)$ 504 15.4 21.7 71.0 72.6
$L_2(11)$ 660 24.1 32.8 73.5 71.8
$L_2(13)$ 1092 29.4 41.1 71.5 77.8
$L_2(17)$ 2448 29.4 43.1 68.2 69.6
$A_7$ 2520 41.1 45.8 89.7 90.9
$L_2(19)$ 3420 28.2 44.4 63.5 65.7
$L_2(16)$ 4080 11.3 18.3 61.7 65.3
$L_3(3)$ 5616 19.2 28.0 68.6 76.5
$U_3(3)$ 6048 16.4 18.0 91.1 92.8
$L_2(23)$ 6072 32.7 47.6 68.7 70.1
$L_2(25)$ 7800 24.7 33.0 74.8 75.5
$M_{11}$ 7920 14.6 17.1 85.4 88.8
$L_2(27)$ 9828 14.2 26.6 53.4 57.1
$L_2(29)$ 12180 42.0 57.1 73.6 74.1
$L_2(31)$ 14880 38.1 56.5 67.4 70.9
$A_8$ 20160 18.7 20.7 90.3 92.3
$L_3(4)$ 20160 42.8 50.2 85.3 89.1
$L_2(37)$ 25308 24.9 54.2 45.9 50.5
$U_4(2)$ 25920 26.6 27.8 95.7 97.5
$Sz(8)$ 29120 26.9 43.9 61.3 73.1
$L_2(32)$ 32736 12.4 17.9 69.3 72.1
-------------- ----------- --------- ------------ ------------- -------------
: **Hit** is the percentage of manifolds having a cover with this group which has positive betti number. **HavCov** is the percentage of manifolds having a cover with this group. **SucRate1** is the percentage of manifolds having a cover with this group which have such a cover with positive betti number. **SucRate2** is the percentage of covers with this group having positive betti number.[]{data-label="basic_table"}
First, look at Table \[basic\_table\]. There, the simple groups are listed by their ATLAS [@ATLAS] name (so, for instance, $L_n(q) =
{\mathrm{PSL}_{n} {{\mathbb F}}_q}$), together with basic information about how many covers there are, and how many have positive betti number. There is quite a bit of variation among the different groups. For instance, only $11.3\%$ of the manifolds have an $L_2(16)$-cover with positive betti number, but $42.8\%$ have such an $L_3(4)$-cover. Moreover, there are big differences in how successful the different kinds of covers are at producing homology. Only half of the $L_2(37)$ covers have positive betti number, but almost all ($97.5\%$) of the $U_4(2)$ covers do. There are no obvious reasons for these patterns (for instance, the success rates don’t correlate strongly with the order of the group). It would be very interesting to have heuristics which explain them, and we will explore these issues in [@DunfieldThurstonHeuristics].
![ This graph shows how quickly simple group covers generate homology. Each $+$ plotted is the pair $(\log( \#Q(n)) , V(n))$, where the $\log$ is base 10. Thus the leftmost $+$ corresponds to the fact that $14\%$ of the manifolds have an $A_5$ cover with positive betti number. The second leftmost $+$ corresponds to the fact that $29\%$ of the manifolds have either an $A_5$ or an $L_2(7)$ cover with positive betti number, and so on.[]{data-label="simple_groups"}](s_groups.eps)
In terms of showing manifolds are virtually Haken, even the least useful group has a **Hit** rate greater than $10\%$. That is, for any given group at least $10\%$ of the manifolds have a positive betti number cover with that group. So unless things are strongly correlated between different groups, one would expect that every manifold would have a positive betti number simple cover, and that one would generally find such a cover quickly. Let $Q(n)$ denote the $n^{\mathrm{th}}$ simple group as listed in Table \[basic\_table\]. Set $V(n)$ to be the proportion of the manifolds which have a positive betti number $Q(k)$-cover where $k \leq n$. We expect that the increasing function $V(n)$ should rapidly approach $1$ as $n$ increases. This is borne out in Figure \[simple\_groups\].
Figure \[simple\_groups\] shows that the groups behave pretty independently of each other, although, as we will see, not completely. Let $H(n)$ denote the hit rate for $Q(n)$, that is, the proportion of the manifolds with a $Q(n)$ cover with positive betti number. If everything were independent, then one would expect $$V(n) \approx V(n-1) + (1-V(n-1)) H(n).$$ If we let $E(n)$ be the right-hand side above, and compare $E(n)$ with $V(n)$, we find that $E(n) - V(n)$ is almost always positive. To judge the size of this deviation, we look at $$\frac{E(n) - V(n)}{1 - V(n-1)} {\quad\mbox{which lies in $[-0.007, 0.13]$,}\quad}$$ and which averages $0.022$. In other words, as a proportion of the possible increase $1 - V(n-1)$, the actual gain $V(n) - V(n-1)$ is usually about $2\%$ smaller than the predicted gain $E(n) - V(n-1)$.
For a graphical comparison, define $V'(n)$ by the recursion $$V'(n) = V'(n-1) + (1-V'(n-1)) H(n),$$ and compare with $V(n)$ in Figure \[simple\_compare\].
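The independence prediction $V'(n)$ can be evaluated directly from the **Hit** column of Table \[basic\_table\]; the short Python sketch below does this (the percentages are transcribed from the table above).

```python
# Hit rates H(n) in percent, from Table [basic_table], in order of group size.
hit = [14.0, 17.8, 21.6, 15.4, 24.1, 29.4, 29.4, 41.1, 28.2, 11.3,
       19.2, 16.4, 32.7, 24.7, 14.6, 14.2, 42.0, 38.1, 18.7, 42.8,
       24.9, 26.6, 26.9, 12.4]

def independent_prediction(hit_rates):
    """V'(n): the proportion of manifolds expected to have a positive
    betti number cover among the first n groups, under the assumption
    that the groups behave completely independently."""
    v, out = 0.0, []
    for h in hit_rates:
        v = v + (1 - v) * h / 100     # the recursion defining V'(n)
        out.append(v)
    return out

v_prime = independent_prediction(hit)
print(round(100 * v_prime[1], 1))     # 29.3: close to the observed V(2) = 29%
print(round(100 * v_prime[-1], 1))    # 99.9: under independence almost every
                                      # manifold gets a positive betti cover
```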
Asymptotically, almost every non-abelian finite simple group is of the form $L_2(q)$, and so it’s interesting to look at a modified $V(n)$ where we look only at the $Q(n)$ of this form. This is also shown in Figure \[simple\_compare\].
![The top line plots $(\log(\#Q(n)), V'(n))$, the middle line $(\log(\#Q(n)), V(n))$ (as in Figure \[simple\_groups\]), and the lowest line plots only the groups of the form $L_2(q)$.[]{data-label="simple_compare"}](s_gr2.eps)
Amount of homology
------------------
![This plot shows the relationship between the expected rank and the degree of the cover. The line shown is $y = x - 1.3$. []{data-label="homology"}](homology.eps)
Suppose we look at a simple cover of degree $d$; what is the expected rank of the homology of the cover? The data suggests that the expected rank is linearly proportional to $d$. For the simple group $Q(n)$, set $R(n)$ to be the mean of $\beta_1(N)$, where $N$ runs over all the $Q(n)$ covers of our manifolds (including those where $\beta_1(N) = 0$). Figure \[homology\] gives a plot of $\log R(n)$ versus $\log(\#Q(n))$. Also shown is the line $y = x - 1.3$ (which is almost the least squares fit line $y = 1.018 x - 1.303$). The data points follow that line, suggesting that: $$\log R(n) \approx \log(\#Q(n)) - 1.3 {\quad\mbox{and hence}\quad} R(n) \approx
\frac{\#Q(n)}{20}. \label{hom_rel}$$ Now each of the 3-manifold groups we are looking at here is a quotient of the free group on two generators $F_2$. Let $G$ be the fundamental group of one of our 3-manifolds, say $G = F_2/N$. Given a homomorphism $G \to Q(n)$, we can look at the composite homomorphism $F_2 \to Q(n)$. Let $H$ be the kernel of $G \to Q(n)$ and $K$ the kernel of $F_2 \to Q(n)$. Then the rank of $H_1(K)$ is $\#Q(n) + 1$. As $H_1(H)$ is a quotient of $H_1(K)$, Equation \[hom\_rel\] says that on average, $5\%$ of $H_1(K)$ survives to $H_1(H)$.
This amount of homology is not a priori forced by the high hit rate for the $Q(n)$. For instance, $L_2(p)$ has order $(p^3 - p)/2$ but has a rational representation of dimension $p$. Thus it would be possible for $L_2(p)$ covers to have $$\log(R(n)) \approx (1/3) \log(\#Q(n))+ C,$$ even if a large percentage of these covers had positive betti number. This data suggests that on a statistical level these 3-manifold groups are trying to behave like the fundamental group of a 2-dimensional orbifold of Euler characteristic $-1/20$.
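The rank of $H_1(K)$ quoted above is just Nielsen–Schreier, and the $-1/20$ at the end is the matching Euler-characteristic count:

```latex
% Nielsen--Schreier: a subgroup K of index d in the free group F_2 is
% free of rank 1 + d(2 - 1) = d + 1.  With d = #Q(n) this gives
\operatorname{rank} H_1(K) = \#Q(n) + 1, \qquad
\frac{R(n)}{\operatorname{rank} H_1(K)} \approx
   \frac{\#Q(n)/20}{\#Q(n) + 1} \approx \frac{1}{20}.
% For comparison, a 2-orbifold \Sigma with \chi(\Sigma) = -1/20 has
% degree-d covers \Sigma' with
b_1(\Sigma') \approx -\chi(\Sigma') = -d \, \chi(\Sigma) = \frac{d}{20}.
```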
The data in Figure \[homology\] is not based on the full $Q(n)$ covers but on subcovers coming from a fixed subgroup $U(n)
< Q(n)$, chosen as described in Section \[techniques\]. The degree plotted is the degree of the cover that was used, that is $[Q(n) :
U(n)]$, not the order of $Q(n)$ itself, so the above analysis is still valid. Throughout Section \[simple\], having positive betti number really means having positive betti number over ${{\mathbb F}}_{31991}$. Also, we originally used a list of the Hodgson-Weeks census which had a few duplicates, so there are actually 12 manifolds which appear twice in our list of 1000 random manifolds.
Homology of particular representations
--------------------------------------
As discussed in Section \[techniques\], if we look at a cover with covering group $Q$, the homology of the cover decomposes into $$H_1(G, V_1)^{n_1} \oplus H_1(G, V_2)^{n_2} \oplus \dots \oplus
H_1(G, V_k)^{n_k},$$ where $G$ is the fundamental group of the base manifold and the $V_i$ are the irreducible $Q$-modules. For $Q$ an alternating group, we looked at this decomposition and found that the ranks of the $H_1(G,
V_i)$ were very strongly positively correlated. This contrasts with the relative independence of the ranks of covers with different $Q(n)$.
We will describe what happens for $A_7$, the other alternating groups being similar. The rational representations of $A_7$ are easy to describe: they are the restrictions of the irreducible representations of $S_7$. They correspond to certain partitions of $7$. Table \[alt\_basic\_table\] lists the representations and their basic properties. Table \[alt\_correlations\] shows the correlations between the ranks of the $H_1(G, V_i)$. Many of the correlations are larger than $0.5$ and all are bigger than $0$ ($+1$ is perfect correlation, $-1$ perfect anti-correlation and $0$ the expected correlation for independent random variables). Figure \[alt\_homology\] shows the distribution of the homology of the covers.
[rrrr]{} Partition & Dim. of rep & Success rate & Mean homology\
7 & 1 & 2% & 0.0\
$1,6$ & 6 & 22% & 1.5\
$2,5$ & 14 & 63% & 19.8\
$1,1,5$ & 15 & 64% & 21.8\
$3,4$ & 14 & 41% & 11.0\
$1,2,4$ & 35 & 70% & 101.6\
$1,1,1,4$ & 20 & 61% & 20.7\
$1,3,3$ & 21 & 61% & 33.9
[r|cccccccc]{} & 7 & 16 & 25 & 115 & 34 & 124 & 1114 & 133\
7 &**1.00** & 0.01 & 0.11 & 0.08 & 0.15 & 0.17 & 0.02 & 0.13\
16 & 0.01 & **1.00** & 0.22 & 0.09 & 0.23 & 0.19 & 0.18 & 0.19\
25 & 0.11 & 0.22 & **1.00** & 0.63 & 0.65 & 0.79 & 0.37 & 0.61\
115 & 0.08 & 0.09 & 0.63 & **1.00** & 0.52 & 0.80 & 0.75 & 0.78\
34 & 0.15 & 0.23 & 0.65 & 0.52 & **1.00** & 0.73 & 0.50 & 0.65\
124 & 0.17 & 0.19 & 0.79 & 0.80 & 0.73 & **1.00** & 0.65 & 0.89\
1114& 0.02 & 0.18 & 0.37 & 0.75 & 0.50 & 0.65 & **1.00** & 0.66\
133 & 0.13 & 0.19 & 0.61 & 0.78 & 0.65 & 0.89 & 0.66 & **1.00**
![Plot showing the distribution of the ranks of the homology of the 964 covers with group $A_7$. The $x$-axis is the amount of homology and the $y$-axis the number of covers with homology in that range.[]{data-label="alt_homology"}](alt_hom.eps)
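The correlations in Table \[alt\_correlations\] are standard (Pearson) sample correlation coefficients computed over the covers; the following Python sketch (with illustrative data only) shows the computation we have in mind.

```python
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length lists;
    +1 is perfect correlation, -1 perfect anti-correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Perfectly correlated rank sequences give +1:
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0
```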
Correlations between groups {#correlation}
---------------------------
In the beginning of Section \[simple\] we saw that the two events $$\big(\mbox{having a $Q(n)$-cover with $\beta_1 > 0$}, \mbox{having a $Q(m)$-cover with $\beta_1>0$}\big)$$ were more or less independent of each other, though overall there was a slight positive correlation which dampened the growth of $V(n)$. In the appendix, there is a table giving these correlations, as well as one giving those between the events: $$\big(\mbox{having a $Q(n)$-cover}, \mbox{having a $Q(m)$-cover}\big).$$ Some of these correlations are much larger than one would expect by chance alone; for instance, the correlation between $$\big(\mbox{ having a $L_2(7)$-cover with $\beta_1 > 0$},
\mbox{ having a $L_2(8)$-cover with $\beta_1 > 0$}\big)$$ is $0.38$. Moreover, there are very few negative correlations and those that exist are quite small. Overall, the average correlation is positive as we would expect from Section \[simple\].
One way of trying to understand these correlations is to observe that almost all of these manifolds are Dehn surgeries on the minimally twisted $5$-chain. Let us focus on the simpler question of correlations between having a cover with group $Q(n)$ and having a cover with group $Q(m)$. Let $M$ be the complement of the $5$-chain. Consider all the homomorphisms $f_k {\colon\thinspace}\pi_1 M \to
Q(n)$. Suppose $X$ is a Dehn filling on $M$ along the five slopes $(\gamma_1, \gamma_2, \gamma_3, \gamma_4, \gamma_5)$ where $\gamma_i$ is in $\pi_1({\partial}_i M)$. The manifold $X$ has a cover with group $Q(n)$ if and only if there is an $f_k$ where each $\gamma_i$ lies in the kernel of $f_k$ restricted to $\pi_1( {\partial}_i M)$. Thus having a cover with group $Q(n)$ is determined by certain subgroups of the groups $\pi_1({\partial}_i M) = {{\mathbb Z}}^2$. If we consider a different group $Q(m)$ we get a different family of subgroups of the $\pi_1({\partial}_i M)$. If there is a lot of overlap between these two sets of subgroups, there will be a positive correlation between having a cover with group $Q(n)$ and having a cover with group $Q(m)$. If there is little overlap then there will be a negative correlation. However, even looked at in this way, there seems to be no reason that the average correlation should be positive.
If we look at the same question for manifolds which are Dehn surgeries on the figure-8 knot (a simplified version of this setup), there are many negative correlations and the overall average correlation is 0. If we look at the question for small surgeries on the Whitehead link, the overall average correlation is positive and of similar magnitude to that for the 5-chain. If we also look at larger surgeries on the Whitehead link, the average correlation drops somewhat. By changing the link we get a different pattern of correlations, and so it is unwise to attach much significance to these numbers.
Further questions {#questions}
=================
Here are some interesting further questions related to our experiment.
1. What happens for 3-manifolds bigger than the ones we looked at? Do the patterns we found persist? It is computationally difficult to deal with groups with large numbers of generators, which would limit the maximum size of the manifolds considered. But another difficulty is how to find a “representative” collection of such manifolds. (Some notions of a “random 3-manifold”, which help with this latter question, will be discussed in [@DunfieldThurstonHeuristics]).
2. How else could the virtually Haken covers we found be used to give insight into these conjectures? For instance, one could try to look at the virtual fibration conjecture. While there is no good algorithm for showing that a closed manifold is fibered, one could look at the following algebraic stand-in for this question. If a 3-manifold fibers over the circle, then one of the coefficients of the Alexander polynomial which is on a vertex of the Newton polytope is $\pm 1$ (see e.g. [@Dunfield:norms]). One could compute the Alexander polynomial of the covers with virtual positive betti number and see how often this occurred. As many of our covers are quite small, computing the Alexander polynomial should be feasible in many cases.
3. One could use our methods to look at the Virtual Positive Betti Number conjecture for lattices in the other rank-1 groups that don’t have Property T. This would be particularly interesting for the examples of complex hyperbolic manifolds where every congruence cover has $\beta_1 = 0$. These complex hyperbolic manifolds were discovered by Rogawski [@Rogawski90 Thm. 15.3.1] and are arithmetic.
Transferring virtual Haken via Dehn filling {#transfer}
===========================================
In the rest of this paper, we consider the following setup. Let $M$ be a compact 3-manifold with boundary a torus. The process of *Dehn filling* creates closed 3-manifolds from $M$ by taking a solid torus $D^2 \times S^1$ and gluing its boundary to the boundary of $M$. The resulting manifolds are parameterized by the isotopy class of an essential simple closed curve in ${\partial}M$ which bounds a disc in the attached solid torus. If $\alpha$ denotes such a class, called a *slope*, the corresponding Dehn filling is denoted by $M(\alpha)$. Though no orientation of $\alpha$ is needed for Dehn filling, we will often think of the possible $\alpha$ as being the primitive elements in $H_1({\partial}M, {{\mathbb Z}})$ and so $H_1({\partial}M, {{\mathbb Z}})$ parameterizes the possible Dehn fillings.
If you have a general conjecture which you can’t prove for all 3-manifolds, a standard thing to do is to try to prove it for most Dehn fillings on an arbitrary 3-manifold with torus boundary. For instance, in the case of the Geometrization Conjecture there is the following theorem:
[[@ThurstonLectureNotes]]{} Let $M$ be a compact 3-manifold with ${\partial}M$ a torus. Suppose the interior of $M$ has a complete hyperbolic metric of finite volume. Then all but finitely many Dehn fillings of $M$ are hyperbolic manifolds.
For the Virtual Haken Conjecture there is the following result of Cooper and Long. A properly embedded compact surface $S$ in $M$ is *essential* if it is incompressible, boundary incompressible, and not boundary parallel. Suppose $S$ is an essential surface in $M$. While $S$ may have several boundary components, they are all parallel and so have the same slope, called the boundary slope of $S$. If $\alpha$ and $\beta$ are two slopes, we denote their minimal intersection number, or *distance*, by $\Delta(\alpha, \beta)$.
[(Cooper-Long [@CooperLong99])]{} Let $M$ be a compact orientable 3-manifold with torus boundary which is hyperbolic. Suppose $S$ is a non-separating orientable essential surface in $M$ with non-empty boundary. Suppose that $S$ is not the fiber in a fibration over $S^1$. Let $\lambda$ be the boundary slope of $S$. Then there is a constant $N$ such that for all slopes $\alpha$ with $\Delta(\alpha, \lambda) \geq N$, the manifold $M(\alpha)$ is virtually Haken.
Explicitly, $N = 12 g - 8 + 4 b$ where $g$ is the genus of $S$ and $b$ is the number of boundary components.
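The constant in the Cooper-Long theorem is explicit, so it is trivial to evaluate; a one-line Python helper (name ours) for reference:

```python
def cooper_long_bound(genus, boundary_components):
    """The constant N = 12g - 8 + 4b from the Cooper-Long theorem above."""
    return 12 * genus - 8 + 4 * boundary_components

# E.g. a genus-2 surface with one boundary component gives N = 20:
print(cooper_long_bound(2, 1))   # 20
```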
This result differs from the Hyperbolic Dehn Surgery Theorem in that it excludes those fillings lying in an infinite strip in $H_1({\partial}M)$, instead of only excluding those in a compact set. Here, we will prove a Dehn surgery theorem about the Virtual Positive Betti Number Conjecture, assuming that $M$ has a very simple Dehn filling which strongly has virtual positive betti number. Our theorem is a generalization of the work of Boyer and Zhang [@BoyerZhang00], which we discuss below.
The basic idea is this. Suppose $M$ has a Dehn filling $M(\alpha)$ which has virtual positive betti number in a very strong way. By this we mean that there is a surjection $\pi_1(M(\alpha)) \to \Gamma$ where $\Gamma$ is a group all of whose finite index subgroups have lots of homology. In our application, $\Gamma$ will be the fundamental group of a hyperbolic 2-orbifold. Given some other Dehn filling $M(\beta)$, we would like to transfer virtual positive betti number from $M(\alpha)$ to $M(\beta)$. Look at $\pi_1(M)/{\left\langle \alpha, \beta \right\rangle}$, which we will call $\pi_1(M(\alpha, \beta))$. This group is a common quotient of $\pi_1(M(\alpha))$ and $\pi_1(M(\beta))$. Choose $\gamma
\in \pi_1({\partial}M)$ so that $\{\alpha, \gamma\}$ is a basis of $\pi_1({\partial}M)$. Then $\beta = \alpha^m \gamma^n$. If we think of $\pi_1(M(\alpha, \beta))$ as a quotient of $\pi_1(M(\alpha))$ we have: $$\pi_1(M(\alpha, \beta)) = \pi_1(M(\alpha)) /{\left\langle \beta \right\rangle} = \pi_1(M(\alpha))/{\left\langle \gamma^n \right\rangle}.$$ Thus $\pi_1(M(\alpha, \beta))$ surjects onto $\Gamma/{\left\langle \gamma^n \right\rangle}$, where here we are confusing $\gamma$ and its image in $\Gamma$. So $\pi_1(M)$ surjects onto $\Gamma/{\left\langle \gamma^n \right\rangle}$. If $\Gamma$ has rapid homology growth, one can hope that $\Gamma_n = \Gamma/{\left\langle \gamma^n \right\rangle}$ still has virtual positive betti number when $n$ is large enough. This is plausible because adding a relator which is a large power often doesn’t change the group too much. If there is an $N$ so that $\Gamma_n$ has virtual positive betti number for all $n \geq N$, then $M(\beta)$ has virtual positive betti number for all $\beta$ with $n = \Delta(\gamma, \alpha) \geq N$.
Our main theorem applies when $M(\alpha)$ is a Seifert fibered space whose base orbifold is hyperbolic:
\[SF\_filling\] Let $M$ be a compact 3-manifold with boundary a torus. Suppose $M(\alpha)$ is Seifert fibered with base orbifold $\Sigma$ hyperbolic. Assume also that the image of $\pi_1({\partial}M)$ under the induced map $\pi_1(M) \to \pi_1(\Sigma)$ contains no non-trivial element of finite order. Then there exists an $N$ so that $M(\beta)$ has virtual positive betti number whenever $\Delta(\alpha, \beta) \geq N$.
If $\Sigma$ is not a sphere with 3 cone points, then $N$ can be taken to be $7$.
In light of the above discussion, if we consider the homomorphism $\pi_1(M(\alpha)) \to \pi_1(\Sigma) = \Gamma$, Theorem \[SF\_filling\] follows immediately from:
\[orbifold\_quotient\] Let $\Sigma$ be a closed hyperbolic 2-orbifold without mirrors, and $\Gamma$ be its fundamental group. Let $\gamma \in \Gamma$ be an element of infinite order. Then there exists an $N$ such that for all $n \geq N$ the group $$\Gamma_n = \Gamma/{\left\langle \gamma^n \right\rangle}$$ has virtual positive betti number. In fact, $\Gamma_n$ has a finite index subgroup which surjects onto a free group of rank 2.
If $\Sigma$ is not a 2-sphere with 3 cone points, then $N =
\max\{3, 1 + 1/{{\left| \chi(\Sigma) \right|}}\}$. In this case, $N$ is at most $7$.
In applying Theorem \[SF\_filling\], the technical condition that the image of $\pi_1({\partial}M)$ not contain a non-trivial element of finite order holds in many cases. For instance, Theorem \[SF\_filling\] implies the following theorem about Dehn surgeries on the Whitehead link. Let $W$ be the exterior of the Whitehead link. Given a slope $\alpha$ on the first boundary component of $W$, we denote by $W(\alpha)$ the manifold with one torus boundary component obtained by filling along $\alpha$.
Let $W$ be the exterior of the Whitehead link. Then for all but finitely many slopes $\alpha$, the manifold $M = W(\alpha)$ has the following property: All but finitely many Dehn fillings of $M$ have virtual positive betti number.
In fact, our proof of this theorem excludes only 28 possible slopes $\alpha$ (see Section \[Whitehead\]). The complements of the twist knots in $S^3$ are exactly the $W(1/n)$ for $n \in {{\mathbb Z}}$. Theorem \[whiteheadthm\] applies to all of the slopes $1/n$ except for $n \in \{0, 1\}$ which correspond to the unknot and the trefoil. Thus we have:
Let $K$ be a twist knot in $S^3$ which is not the unknot or the trefoil. Then all but finitely many Dehn surgeries on $K$ have virtual positive betti number.
For the simplest hyperbolic knot, the figure-8, we can use a quantitative version of Theorem \[orbifold\_quotient\] due to Holt and Plesken [@HoltPlesken92] which applies in this special case. We will show:
Every non-trivial Dehn surgery on the figure-8 knot in $S^3$ has virtual positive betti number.
As we mentioned, Theorem \[SF\_filling\] generalizes the work of Boyer and Zhang [@BoyerZhang00]. They restricted to the case where the base orbifold was not a 2-sphere with 3 cone points. In particular, they proved:
[[@BoyerZhang00]]{}\[BoyerZhang\] Let $M$ have boundary a torus. Suppose $M(\alpha)$ is Seifert fibered with a hyperbolic base orbifold $\Sigma$ which is not a 2-sphere with 3 cone points. Assume also that $M$ is small, that is, contains no closed essential surface. Then $M(\beta)$ has virtual positive betti number whenever $\Delta( \alpha, \beta) \geq
7$.
The condition that $M$ is small is a natural one: if $M$ contains a closed essential surface, then there is an $\alpha$ so that $M(\beta)$ is actually Haken whenever $\Delta(\alpha, \beta) > 1$ [@CGLS; @Wu91].
Boyer and Zhang’s point of view is different than ours, in that they do not set out a restricted version of Theorem \[orbifold\_quotient\]. While the basic approach of both proofs comes from [@BaumslagMorganShalen], Boyer and Zhang’s proof of Theorem \[BoyerZhang\] also uses the Culler-Shalen theory of ${\mathrm{SL}_{2} {{\mathbb C}}}$-character varieties and surfaces arising from ideal points. From our point of view this is not needed, and Theorem \[BoyerZhang\] follows easily from Theorem \[SF\_filling\] (see the end of Section \[one\_rel\_quo\] for a proof).
In Section \[dehn\_questions\], we discuss possible generalizations of Theorem \[SF\_filling\] to other types of fillings. In a very special case, we use toroidal Dehn fillings to show (Theorem \[sister\_thm\]) that every Dehn filling of the sister of the figure-8 complement satisfies the Virtual Positive Betti Number Conjecture.
One-relator quotients of 2-orbifold groups {#one_rel_quo}
==========================================
This section is devoted to the proof of Theorem \[orbifold\_quotient\]. The basic ideas go back to [@BaumslagMorganShalen] which proves the analogous result for $\Gamma = {{\mathbb Z}}/p * {{\mathbb Z}}/q$. Fine, Roehl, and Rosenberger proved Theorem \[orbifold\_quotient\] in many, but not all, cases where $\Sigma$ is not a 2-sphere with 3 cone points [@FineRoehlRosenberger; @FineRosenberger99]. In the case $\Sigma =
S^2(a_1, a_2, a_3)$, Darren Long and Alan Reid suggested the proof given below, and Matt Baker provided invaluable help with the number theoretic details.
Let $\Sigma_n$ be the 2-complex with marked cone points consisting of $\Sigma$ together with a disc $D$ with a cone point of order $n$, where the boundary of $D$ is attached to $\Sigma$ along a curve representing $\gamma$. Thus $\Gamma_n = \pi_1(\Sigma_n)$. Now the Euler characteristic of $\Sigma_n$ is $\chi(\Sigma) + 1/n$, which is negative if $n > 1/{{\left| \chi(\Sigma) \right|}}$. From now on, assume that $n >
1/{{\left| \chi(\Sigma) \right|}}$. Suppose $\Gamma_n$ contains a subgroup $\Gamma_n'$ of finite index such that if $\alpha$ is a small loop about a cone point then $\alpha \not\in \Gamma_n'$. For instance, this is the case if $\Gamma_n'$ is torsion free. Let $\Sigma_n'$ be the corresponding cover of $\Sigma_n$, so $\Gamma_n' =
\pi_1(\Sigma_n')$. Then $\Sigma_n'$ is a 2-complex without any cone points. Since $\Sigma_n'$ has negative Euler characteristic and there is no homology in dimensions greater than two, we must have $H_1(\Sigma_n', {{\mathbb Q}}) \neq 0$. Thus $\Gamma_n$ has virtual positive betti number.
One can show more: Let $d$ be the degree of the cover $\Sigma_n' \to
\Sigma_n$. The complex $\Sigma_n'$ is a smooth hyperbolic surface $S$ with $d/n$ discs attached. From this description it is easy to check that $\Gamma_n'$ has a presentation where $$\begin{split}
(\mbox{\# of generators}) - (\mbox{\# of relations}) &= ({{\left| \chi(S) \right|}}+ 1) - \frac{d}{n} \\
&= 1 + d\left( {{\left| \chi(\Sigma) \right|}} - \frac{1}{n}
\right) \geq 2.
\end{split}$$ By a theorem of Baumslag and Pride [@BaumslagPride78], the group $\Gamma_n'$ has a finite-index subgroup which surjects onto ${{\mathbb Z}}* {{\mathbb Z}}$.
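For the suspicious reader, the Euler characteristic bookkeeping above is easy to check with exact rational arithmetic. Below is a short Python sketch; the cover degree $d$ is hypothetical, chosen only so that $d/n$ and $d\,\chi(\Sigma)$ are integers, and `orbifold_euler` is our own helper, not a standard function.

```python
from fractions import Fraction

def orbifold_euler(chi_top, cone_orders):
    # chi of the underlying surface minus (1 - 1/a) for each cone point of order a
    return Fraction(chi_top) - sum(1 - Fraction(1, a) for a in cone_orders)

chi = orbifold_euler(2, [2, 3, 7])     # Sigma = S^2(2,3,7), chi = -1/42
assert chi == Fraction(-1, 42)

# chi(Sigma_n) = chi(Sigma) + 1/n is negative exactly when n > 1/|chi| = 42
n = 43
assert chi + Fraction(1, n) < 0

# (# generators) - (# relations) = 1 + d(|chi(Sigma)| - 1/n) for a degree-d cover
d = 42 * 43
deficiency = 1 + d * (abs(chi) - Fraction(1, n))
assert deficiency == 2                 # >= 2, as required for Baumslag-Pride
```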
So it remains to produce the subgroups $\Gamma_n'$. First, we discuss the case where $\Sigma$ is not a sphere with 3 cone points. A homomorphism $f {\colon\thinspace}\Gamma \to Q$ is said to preserve torsion if for every torsion element $\alpha$ in $\Gamma$ the order of $f(\alpha)$ is equal to the order of $\alpha$. (Recall that the torsion elements of $\Gamma$ are exactly the loops around cone points.) The key is to show:
\[non\_triangle\_case\] Suppose $\Sigma$ is not a 2-sphere with 3 cone points, and that $\gamma \in \Gamma$ has infinite order. Given any $n > 2$, there exists a homomorphism $\rho {\colon\thinspace}\Gamma \to {\mathrm{PSL}_{2} {{\mathbb C}}}$ such that $\rho$ preserves torsion and $\rho(\gamma)$ has order $n$.
Suppose we have $\rho$ as in the lemma, which we will regard as a homomorphism from $\Gamma_n$ to ${\mathrm{PSL}_{2} {{\mathbb C}}}$. By Selberg’s lemma, the group $\rho(\Gamma)$ has a finite index subgroup $\Lambda$ which is torsion free. We can then take $\Gamma_n'$ to be $\rho^{-1}(\Lambda)$. Because the lemma only requires that $n>2$ and the preceding argument required that $n>1 /{{\left| \chi(\Sigma) \right|}}$, in this case we can take the $N$ in the statement of Theorem \[orbifold\_quotient\] to be $\max\{3, 1 +
1/{{\left| \chi(\Sigma) \right|}}\}$. A case check, done in [@BoyerZhang00], shows that $N$ is at most $7$. As we will see, the proof of Lemma \[non\_triangle\_case\] is relatively easy and involves deforming Fuchsian representations $\Gamma \to \operatorname{Isom}({{\mathbb H}}^2)$ to find $\rho$.
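The resulting bound can also be illustrated with exact arithmetic. The sketch below assumes (as in the case check cited above; we do not re-verify it here) that $S^2(2,2,2,3)$ and ${{ \mathbb{R}\, \mathrm{P}}}^2(2,3)$, both with $\chi = -1/6$, are extremal among non-triangle hyperbolic 2-orbifolds; `orbifold_euler` and `bound_N` are our own helpers.

```python
from fractions import Fraction
from math import ceil

def orbifold_euler(chi_top, cone_orders):
    # chi of the underlying surface minus (1 - 1/a) per cone point of order a
    return Fraction(chi_top) - sum(1 - Fraction(1, a) for a in cone_orders)

def bound_N(chi):
    # N = max{3, 1 + 1/|chi|}, rounded up to an integer threshold for n
    return max(3, ceil(1 + 1 / abs(chi)))

samples = {
    "S^2(2,2,2,3)": orbifold_euler(2, [2, 2, 2, 3]),   # -1/6
    "RP^2(2,3)":    orbifold_euler(1, [2, 3]),         # -1/6
    "S^2(2,2,3,3)": orbifold_euler(2, [2, 2, 3, 3]),   # -1/3
}
assert samples["S^2(2,2,2,3)"] == Fraction(-1, 6)
assert max(bound_N(chi) for chi in samples.values()) == 7
```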
The harder case is when $\Sigma$ is a 2-sphere with 3 cone points, which we denote $S^2(a_1, a_2, a_3)$. Here the fundamental group $\Gamma$ can be presented as $${{ \left\langle {x_1, x_2, x_3 } \ \left| \ { x_1^{a_1} = x_2^{a_2} = x_3^{a_3} = x_1
x_2 x_3 = 1} \right. \right\rangle }}.$$ Geometrically, $x_i$ is a loop around the $i$^th^ cone point. We will show:
\[triangle\_case\] Let $\Gamma = \pi_1(S^2(a_1, a_2, a_3))$ where $1/a_1 + 1/a_2 +
1/a_3 < 1$. Given an element $\gamma \in \Gamma$ of infinite order, there exists an $N$ such that for all $n \geq N$ the group $\Gamma$ has a finite quotient where the images of $(x_1, x_2, x_3, \gamma)$ have orders exactly $(a_1, a_2, a_3, n)$ respectively.
With this lemma, we can take $\Gamma'_n$ to be the kernel of the given finite quotient. The proof of Lemma \[triangle\_case\] involves using congruence quotients of $\Gamma$ and some number theory. Unfortunately, unlike the previous case, the proof of Lemma \[triangle\_case\] gives no explicit bound on $N$.
In any event, we’ve established Theorem \[orbifold\_quotient\] modulo Lemmas \[non\_triangle\_case\] and \[triangle\_case\].
The rest of this section is devoted to proving the two lemmas.
Because $\Sigma$ is not a 2-sphere with 3 cone points, the Teichm[ü]{}ller space of $\Sigma$ is positive dimensional. Thus there are many representations of $\Gamma$ into $\operatorname{Isom}({{\mathbb H}}^2)$. We can embed $\operatorname{Isom}({{\mathbb H}}^2)$ into $\operatorname{Isom}^+({{\mathbb H}}^3) = {\mathrm{PSL}_{2} {{\mathbb C}}}$ as the stabilizer of a geodesic plane. We will then deform these Fuchsian representations to produce $\rho$.
Pick a simple closed curve $\beta$ which intersects $\gamma$ essentially. There are two cases depending on whether a neighborhood of $\beta$ is an annulus or a M[ö]{}bius band.
Suppose the neighborhood is an annulus. First, let’s consider the case where $\beta$ separates $\Sigma$ into 2 pieces. In this case $\Gamma$ is a free product with amalgamation $A *_{{\left\langle \beta \right\rangle}} B$. Let $\rho_1 {\colon\thinspace}\Gamma \to {\mathrm{PSL}_{2} {{\mathbb C}}}$ be one of the Fuchsian representations. Conjugate $\rho_1$ so that $\rho_1(\beta)$ is diagonal. Then $\rho_1(\beta)$ commutes with the matrices $$C_t = \left(\begin{array}{cc} t & 0 \\
0 & t^{-1}
\end{array}\right) \quad \mbox{for $t$ in ${{\mathbb C}}^\times$}.$$ For $t$ in ${{\mathbb C}}^\times$, let $\rho_t$ be the representation of $\Gamma$ whose restriction to $A$ is $\rho_1$ and whose restriction to $B$ is $C_t \rho_1 C_t^{-1}$. Consider the function $f {\colon\thinspace}{{\mathbb C}}^\times \to {{\mathbb C}}$ which sends $t$ to $\operatorname{tr}^2(\rho_t(\gamma))$. It is easy to see that $f$ is a rational function of $t$ by expressing $\gamma$ as a word in elements of $A$ and $B$. We claim that $f$ is non-constant. First, suppose that neither of the two components of $\Sigma \setminus \beta$ is a disc with two cone points of order 2. In this case, $\beta$ can be taken to be a geodesic loop. If we restrict $t$ to ${{\mathbb R}}$, then the family $\{ \rho_t \}$ corresponds to twisting around $\beta$ in the Fenchel-Nielsen coordinates on $\operatorname{Teich}(\Sigma)$. As $\gamma$ intersects $\beta$ essentially, the length of $\gamma$ changes under this twisting and so $f$ is non-constant. From this same point of view, we see that $f$ has poles at $0$ and $\infty$. If one of the pieces of $\Sigma
\setminus \beta$ is a disc with two cone points of order 2, then $\beta$ naturally shrinks not to a closed geodesic, but to a geodesic arc joining the two cone points. There is still a Fenchel-Nielsen twist about $\beta$, and so we have the same observations about $f$ in this case (think of $\Sigma$ being obtained from a surface with a geodesic boundary component by pinching the boundary to an interval).
Since the rational function $f$ has poles at $\{ 0, \infty\}$, we have $f({{\mathbb C}}^\times) = {{\mathbb C}}$. So given $n > 1$, we can choose $t \in
{{\mathbb C}}^\times$ so that $\operatorname{tr}^2(\rho_t(\gamma)) = (\zeta_{2n} +
\zeta_{2n}^{-1})^2$ where $\zeta_{2n} = e^{\pi i /n}$. Then $\rho_t(\gamma)$ has order $n$. Moreover, $\rho_t$ preserves torsion because $\rho_1$ does, and so we have finished the proof of the lemma when $\beta$ is separating and has an annulus neighborhood. If $\beta$ has an annulus neighborhood and is non-separating, the proof is identical except that $\Gamma$ is an HNN-extension instead of a free product with amalgamation.
Now we consider the case where the neighborhood of $\beta$ is a M[ö]{}bius band. The difference here is that you can’t twist a hyperbolic structure of $\Sigma$ along $\beta$. To see this, think of constructing $\Sigma$ from a surface with geodesic boundary where the boundary is identified by the antipodal map to form $\beta$. Instead, we will deform the length of $\beta$ in $\operatorname{Teich}(\Sigma)$. Here we will need the hypothesis that $n > 2$, as you can see by looking at ${{ \mathbb{R}\, \mathrm{P}}}^2(3,5)$ with $\gamma$ a simple closed geodesic which has a M[ö]{}bius band neighborhood. The only quotient of $\pi_1({{ \mathbb{R}\, \mathrm{P}}}^2(3,5))$ where $\gamma$ has order $2$ is ${{\mathbb Z}}/2$ and this doesn’t preserve torsion.
The underlying surface of $\Sigma$ is non-orientable. We can assume that $\Sigma$ has at least one cone point since every non-orientable surface covers such an orbifold. Pick an arc $a$ joining $\beta$ to a cone point $p$. Let $A$ be a closed neighborhood of $\beta \cup
a$. The set $A$ is a M[ö]{}bius band with a cone point. Let $B$ be the closure of $\Sigma \setminus A$. Let $\alpha$ be the boundary of $A$. A small neighborhood of $\alpha$ is an annulus, so if $\gamma$ intersects $\alpha$ essentially, we can replace $\beta$ with $\alpha$ and use the argument above. So from now on, we can assume that $\gamma$ lies in $A$. Let $\psi {\colon\thinspace}\Gamma \to
{\mathrm{PSL}_{2} {{\mathbb C}}}$ be a Fuchsian representation. Suppose we construct a representation $\rho {\colon\thinspace}\pi_1(A) \to {\mathrm{PSL}_{2} {{\mathbb C}}}$ so that $\rho$ preserves torsion, $\rho(\gamma)$ has order $n$, and $\operatorname{tr}^2(\rho(\alpha)) = \operatorname{tr}^2(\psi(\alpha))$. Then as $\Gamma =
\pi_1(A) *_{{\left\langle \alpha \right\rangle}} \pi_1(B)$ and $\rho$ and $\psi$ are conjugate on ${\left\langle \alpha \right\rangle}$, we can glue $\rho$ and $\psi$ restricted to $\pi_1(B)$ together to get the required representation of $\Gamma$.
Thus we have reduced everything to a question about certain representations of $\pi_1(A)$. The group $\pi_1(A)$ is generated by $\alpha$ and $\beta$. Choosing orientations correctly, a small loop about the cone point $p$ is $\delta = \beta^2 \alpha$. If $p$ has order $r$, then $\pi_1(A)$ has the presentation $${{ \left\langle {\alpha, \beta, \delta} \ \left| \ {\delta = \beta^2 \alpha, \delta^r = 1} \right. \right\rangle }}.$$
Given any representation $\phi$ of $\pi_1(A)$, we will fix lifts of $\phi(\alpha)$ and $\phi(\beta)$ to ${\mathrm{SL}_{2} {{\mathbb C}}}$. Having done this, any word $w$ in $\alpha$ and $\beta$ has a canonical lift of $\phi(w)$ to ${\mathrm{SL}_{2} {{\mathbb C}}}$. We will abuse notation and denote this lift by $\phi(w)$ as well. In this way, we can treat $\phi$ as though it were a representation into ${\mathrm{SL}_{2} {{\mathbb C}}}$ so that, for instance, the trace of $\phi(w)$ is defined.
Define a 1-parameter family of representations $\rho_t$ for $t \in
{{\mathbb C}}^\times$ as follows. Set $$\rho_t(\beta) = \left(\begin{array}{cc}
0 & 1 \\
-1 & t
\end{array}\right),
{\quad\mbox{and}\quad}
\rho_t(\alpha) = \left(\begin{array}{cc}
e & s \\
0 & e^{-1}
\end{array}\right)$$ where $e + e^{-1} = \operatorname{tr}(\psi(\alpha))$ and $s = \frac{1}{t}\left(e^{-1} t^2 - (e + e^{-1}) - \operatorname{tr}(\psi(\delta))\right)$. This gives a representation of $\pi_1(A)$ because $s$ was chosen so that $\operatorname{tr}(\rho_t(\delta)) = \operatorname{tr}(\psi(\delta))$ and so $\rho_t(\delta)$ also has order $r$ in ${\mathrm{PSL}_{2} {{\mathbb C}}}$.
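The choice of $s$ can be verified symbolically: writing $T$ for $\operatorname{tr}(\psi(\delta))$, the trace of $\rho_t(\delta) = \rho_t(\beta)^2 \rho_t(\alpha)$ equals $T$ for every $t$. A short sympy sketch:

```python
import sympy as sp

t, e, T = sp.symbols('t e T')

B = sp.Matrix([[0, 1], [-1, t]])             # rho_t(beta)
s = (t**2 / e - (e + 1/e) - T) / t           # the choice of s in the text
A = sp.Matrix([[e, s], [0, 1/e]])            # rho_t(alpha)

assert B.det() == 1 and A.det() == 1         # both matrices lie in SL2
D = B * B * A                                # rho_t(delta), delta = beta^2 alpha
assert sp.simplify(D.trace() - T) == 0       # trace is T, independent of t
```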
Let $\operatorname{Teich}(A)$ denote hyperbolic structures on $A$ with geodesic boundary where the length of the boundary is fixed to be that of the Fuchsian representation $\psi$. This Teichm[ü]{}ller space is ${{\mathbb R}}$ with the single Fenchel-Nielsen coordinate being the length of $\beta$. Note that any irreducible representation of $\pi_1(A)$ is conjugate to some $\rho_t$, and so each point in $\operatorname{Teich}(A)$ yields a Fuchsian representation $\rho_t$. As $\beta$ gets short in $\operatorname{Teich}(A)$, the curve $\gamma$ gets long. Thus if we set $f =
\operatorname{tr}(\rho_t(\gamma))$, then $f$ is a non-constant Laurent polynomial in $t$.
Let $v = \zeta_{2n} + \zeta^{-1}_{2n}$. To finish the proof of the lemma, all we need to do is find a $t \in {{\mathbb C}}^\times$ so that $f(t)^2
= v^2$. As a map from the Riemann sphere to itself, $f$ is onto and there are $t_1$ and $t_2$ in $\widehat{{{\mathbb C}}}$ so that $f(t_1) = v$ and $f(t_2) = -v$. As $n > 2$, $v$ is not $0$ and so $t_1$ and $t_2$ are distinct. As $f$ is non-constant and finite on ${{\mathbb C}}^\times$, it has a pole at at least one of $0$ and $\infty$. Therefore, at least one of $t_1$ and $t_2$ is in ${{\mathbb C}}^\times$ and we are done.
The group $\Gamma$ is naturally a subgroup of ${\mathrm{PSL}_{2} {{\mathbb R}}}$. Set $b_i = 2 a_i$. Let $X_i$ be the matrix in ${\mathrm{PSL}_{2} {{\mathbb R}}}$ corresponding to the generator $x_i$. As $X_i$ has order $a_i$, it follows that $\operatorname{tr}(X_i) = \pm (\zeta_{b_i} + \zeta_{b_i}^{-1})$ where $\zeta_{b_i}$ is some primitive $b_i$^th^ root of unity. Any irreducible 2-generator subgroup of ${\mathrm{PSL}_{2} {{\mathbb C}}}$ is determined by its traces on the generators and their product, and so we can conjugate $\Gamma$ in ${\mathrm{PSL}_{2} {{\mathbb C}}}$ so the $X_i$ are: $$X_1 = \left(\begin{array}{cc}
0 & 1 \\
-1 & \zeta_{b_1} + \zeta_{b_1}^{-1}
\end{array}\right),
X_2 = \left(\begin{array}{cc}
\zeta_{b_2} +\zeta_{b_2}^{-1} & -\zeta_{b_3} \\
\zeta_{b_3}^{-1} & 0
\end{array}\right),
\ \mbox{and}\ X_3 = (X_1 X_2)^{-1}.$$ Henceforth we will identify $\Gamma$ with its image. The entries of the $X_i$ lie in ${{\mathbb Q}}(\zeta_{b_1}, \zeta_{b_2}, \zeta_{b_3})$, and moreover are integral, so $\Gamma$ is contained in the subgroup ${\mathrm{PSL}_{2} {{{\mathcal{O}}}}({{\mathbb Q}}(\zeta_{b_1}, \zeta_{b_2}, \zeta_{b_3}))}$. Let $G$ be a matrix in ${\mathrm{PSL}_{2} {{\mathbb C}}}$ representing $\gamma$. Let $a$ be one of the eigenvalues of $G$. Note that $a$ is an algebraic integer, in fact a unit, because it satisfies the equation $a^2 - (\operatorname{tr}G) a + 1 = 0$ and $\operatorname{tr}{G}$ is integral. Let $K$ be the field ${{\mathbb Q}}(\zeta_{b_1},
\zeta_{b_2}, \zeta_{b_3}, a)$. From now on, we will consider $\Gamma$ as a subgroup of ${\mathrm{PSL}_{2} {{{\mathcal{O}}}}(K)}$. We will construct the required quotients of $\Gamma$ from congruence quotients of ${\mathrm{PSL}_{2} {{{\mathcal{O}}}}(K)}$. Suppose ${\wp}$ is a prime ideal of ${{{\mathcal{O}}}}(K)$. Setting $k = {{{\mathcal{O}}}}(K)/{\wp}$, we have the finite quotient of $\Gamma$ given by $$\Gamma \to {\mathrm{PSL}_{2} {{{\mathcal{O}}}}(K)} \to {\mathrm{PSL}_{2} k}.$$ What conditions do we need so that $(x_1, x_2, x_3, \gamma)$ have the right orders in ${\mathrm{PSL}_{2} k}$? Well, the eigenvalues of $X_i$ are $\pm \{ \zeta_{b_i}, \zeta_{b_i}^{-1} \}$, so as long as $\bar{\zeta}_{b_i}$ has order $b_i$ in $k^\times$, the matrix $\bar{X}_i$ in ${\mathrm{PSL}_{2} k}$ has order $a_i$. Similarly, if we set $m = 2 n$, then $\bar{G}$ in ${\mathrm{PSL}_{2} k}$ has order $n$ if $\bar{a}$ has order $m$ in $k^\times$. Thus the following claim will complete the proof of the lemma:
There exists an $N$ such that for all $n \geq N$ there is a prime ideal ${\wp}$ such that if $k = {{{\mathcal{O}}}}(K)/{\wp}$ then the images of $(\zeta_{b_1}, \zeta_{b_2}, \zeta_{b_3}, a)$ in $k^\times$ have orders $(b_1, b_2, b_3, m)$.
Let’s prove the claim. The idea is to show that $a^m - 1$ is not a unit in ${{{\mathcal{O}}}}(K)$ for large $m$, and then just take ${\wp}$ to be a prime ideal dividing $a^m - 1$. We have to be careful, though, that $(\bar{\zeta}_{b_1}, \bar{\zeta}_{b_2}, \bar{\zeta}_{b_3}, \bar{a})$ don’t end up with lower orders than expected in $k^\times$.
A prime ideal is called *primitive* if it divides $a^m - 1$ and does not divide $a^r - 1$ for all $r < m$. Postnikova and Schinzel proved the following theorem:
[[@Schinzel74; @PostnikovaSchinzel]]{}\[num\_theory\] Suppose that $a$ is an algebraic integer which is not a root of unity. Then there is an $N$ such that for all $n \geq N$ the algebraic integer $a^n - 1$ has a primitive divisor.
The proof of Theorem \[num\_theory\] relies on deep theorems of Gelfond and A. Baker on the approximation by rationals of logarithms of algebraic numbers.
Because $\gamma$ has infinite order, we know that $a$ is not a root of unity. Thus Theorem \[num\_theory\] applies; let $N$ be as in the statement. By increasing $N$ if necessary, we can ensure that the primitive divisor ${\wp}$ given by Theorem \[num\_theory\] does not divide any element of the finite set $$R = {{ \left\{ { \zeta_{b_i}^r - 1} \ \left| \ { 1 \leq r < b_i} \right. \right\} }}.$$ Thus for all $m \geq N$, we have a prime ideal ${\wp}$ which divides $a^m
- 1$ but does not divide $a^r - 1$ for $r < m$. Thus $\bar{a}$ has order $m$ in $k^\times$. As ${\wp}$ does not divide any element of $R$, the element $\bar{\zeta}_{b_i}$ has order $b_i$ in $k^\times$. This proves the claim and thus the lemma.
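In the rational-integer case, Theorem \[num\_theory\] reduces to the classical Bang–Zsygmondy theorem, which makes for a concrete illustration: for $a = 2$, the only $m > 1$ for which $2^m - 1$ has no primitive prime divisor is $m = 6$. A sketch, using sympy for factoring (the helper `primitive_divisors` is ours):

```python
from sympy import primefactors

def primitive_divisors(a, m):
    """Primes dividing a^m - 1 but no a^r - 1 with 1 <= r < m."""
    return [p for p in primefactors(a**m - 1)
            if all((a**r - 1) % p != 0 for r in range(1, m))]

assert primitive_divisors(2, 6) == []    # 63 = 7 * 3^2: 7 | 2^3 - 1, 3 | 2^2 - 1
assert primitive_divisors(2, 7) == [127]
assert all(primitive_divisors(2, m) for m in range(2, 30) if m != 6)
```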
It would be nice to have a proof of Lemma \[triangle\_case\] which gave an explicit bound on $N$. The number theory used gives “an effectively computable constant” for $N$, but doesn’t actually compute it. Perhaps there are other proofs of Lemma \[triangle\_case\] more like that of Lemma \[non\_triangle\_case\]. While $\pi_1(S^2(a_1, a_2, a_3))$ has only a finite number of representations into ${\mathrm{PSL}_{2} {{\mathbb C}}}$, if one looks at representations into larger groups there are deformation spaces where you could hope to play the same game. For instance, if one embeds ${{\mathbb H}}^2$ as a totally geodesic subspace in complex hyperbolic space ${{ \mathbb{C}\, \mathrm{H}}}^2$, then a Fuchsian representation deforms in a one-real-parameter family in $\operatorname{Isom}^+({{ \mathbb{C}\, \mathrm{H}}}^2) = \mathrm{PU}(2,1)$. One could instead consider deformations in the space of real-projective structures, which gives rise to homomorphisms to ${\mathrm{PGL}_{3} {{\mathbb R}}}$ [@ChoiGoldman2001]. In general, the structure of the space of representations $\pi_1(S^2(a_1, a_2, a_3)) \to {\mathrm{SL}_{n} {{\mathbb C}}}$ is closely related to the Deligne-Simpson problem [@Simpson90].
We end this section by deducing Boyer and Zhang’s original Theorem \[BoyerZhang\] from Theorem \[SF\_filling\].
Let $M$ be a manifold with torus boundary which is small. Suppose that $M(\alpha)$ is Seifert fibered with hyperbolic base orbifold $\Sigma$ which is not a sphere with 3 cone points. We need to check that Theorem \[SF\_filling\] applies. Let $\beta$ be a curve so that $\{ \alpha, \beta \}$ is a basis for $\pi_1({\partial}M)$. It suffices to show the image of $\beta$ does not have finite order in $\Gamma = \pi_1(\Sigma)$. Suppose not. Then there are infinitely many Dehn fillings $M(\gamma_i)$ of $M$ where $\pi_1(M(\gamma_i))$ surjects onto $\Gamma$. The orbifold $\Sigma$ contains an essential simple closed curve which isn’t a loop around a cone point. Therefore, $\Gamma$ has a non-trivial splitting as a graph of groups and so acts non-trivially on a simplicial tree. Then each $\pi_1(M(\gamma_i))$ acts non-trivially on a tree and so $M(\gamma_i)$ contains an essential surface. As infinitely many fillings contain essential surfaces, a theorem of Hatcher [@Hatcher82] implies that $M$ contains a closed essential surface. This contradicts the assumption that $M$ is small. So the image of $\beta$ has infinite order and we are done.
Surgeries on the Whitehead link {#Whitehead}
===============================
Consider the Whitehead link pictured in Figure \[whitehead\_link\]. Let $W$ be its exterior.
![The Whitehead link, showing our orientation conventions for the meridians and longitudes.[]{data-label="whitehead_link"}](whitehea.eps)
We will denote the two boundary components of $W$ by ${\partial}_0 W$ and ${\partial}_1 W$. For each ${\partial}_i W$, we fix a meridian-longitude basis $\{ \mu_i, \lambda_i \}$ with the orientations shown in the figure. With respect to one of these bases, we will write boundary slopes as rational numbers, where $p \mu + q \lambda$ corresponds to $p/q$. We will denote Dehn filling of both boundary components of $W$ by $W(p_0/q_0;
p_1/q_1)$. Dehn filling on a single component of $W$ will be denoted $W(p_0/q_0; {\, \cdot \, })$ and $W({\, \cdot \, }; p_1/q_1)$. As $W(p/q;
{\, \cdot \, })$ is homeomorphic to $W({\, \cdot \, }; p/q)$, we will sometimes denote this manifold by $W(p/q)$. With our conventions, $W(1)$ is the trefoil complement, and $W(-1)$ is the figure-8 complement. The manifold $W(p/q)$ is hyperbolic except when $p/q$ is in $\{ \infty, 0, 1, 2,
3, 4 \}$. The point of this section is to show:
\[whiteheadthm\] Let $W$ be the complement of the Whitehead link. For any slope $p/q$ which is not in $E = \{\infty$, $0$, $1$, $2$, $3$, $4$, $5$, $5/2$, $6$, $7/1$, $7/2$, $8$, $8/3$, $9/2$, $10/3$, $11/2$, $11/3$, $13/3$, $13/4$, $14/3$, $15/4$, $16/3$, $16/5$, $17/5$, $18/5$, $19/4$, $24/5$, $24/7\}$ the manifold $W(\alpha)$ has the property that all but finitely many Dehn fillings have virtual positive betti number.
The proof goes by showing that except for $p/q$ in $E$, the manifold $W(p/q)$ has at least 2 distinct Dehn fillings which are Seifert fibered and to which Theorem \[SF\_filling\] applies. The reason that $W(p/q)$ has so many Seifert fibered fillings is that the manifolds $W(1)$, $W(2)$, and $W(3)$ are all Seifert fibered with base orbifold a disc with two cone points. In particular, the base orbifolds are $D^2(2,3)$, $D^2(2,4)$, and $D^2(3,3)$ respectively. Therefore, all but one Dehn surgery $W(1;p/q)$ on $W(1)$ is Seifert fibered with base orbifold a sphere with 3 cone points. Similarly for $W(2)$ and $W(3)$. In fact, you can check that
- $W(1; p/q)$ Seifert fibers over $S^2(2,3, {{\left| p - 6 q \right|}})$ if $p/q \neq 6$.
- $W(2; p/q)$ Seifert fibers over $S^2(2,4, {{\left| p - 4 q \right|}})$ if $p/q \neq 4$.
- $W(3; p/q)$ Seifert fibers over $S^2(3,3, {{\left| p - 3 q \right|}})$ if $p/q \neq 3$.
Now fix a slope $p/q$, and consider the manifold $M = W({\, \cdot \, };p/q)$. We want to know when we can apply Theorem \[SF\_filling\] to $M(1)$, $M(2)$, or $M(3)$. First, we need the base orbifold to be hyperbolic, i.e. that the reciprocals of the orders of the cone points sum to less than 1. This leads to the conditions: $$\begin{split}\label{pq_cond}
& \mbox{For $M(1)$ that ${{\left| p - 6 q \right|}} > 6$.} \\
& \mbox{For $M(2)$ that ${{\left| p - 4 q \right|}} > 4$.} \\
& \mbox{For $M(3)$ that ${{\left| p - 3 q \right|}} > 3$.}
\end{split}$$ We claim that as long as the base orbifold is hyperbolic, Theorem \[SF\_filling\] applies. Consider the map $\pi_1(M) \to \Gamma$ where $\Gamma$ is the fundamental group of one of the base orbifolds. Let $\mu$ in ${\partial}M$ be the meridian coming from our meridian $\mu_0$ of $W$. Since $\mu$ intersects any of the slopes $1, 2, 3$ once, its image in $\Gamma$ generates the image of $\pi_1({\partial}M)$. Thus we just need to check that the image of $\mu$ is an element of infinite order in $\Gamma$. One can work out what the image in $\Gamma$ is explicitly (most easily with the help of [`SnapPea`]{} [@SnapPea]): $$\begin{split}\label{quo_pres}
\mbox{For $M(1)$,} & \mbox{ $\mu \mapsto aba^{-1}b^{-1}$ where} \\
&\Gamma = {{ \left\langle {a,b} \ \left| \ { a^2 = b^3 = (ab)^{p - 6 q} = 1} \right. \right\rangle }}. \\
\mbox{ For $M(2)$,}& \mbox{ $\mu \mapsto ab^2$ where} \\
& \Gamma = {{ \left\langle {a,b} \ \left| \ { a^2 = b^4 = (ab)^{p - 4 q} = 1} \right. \right\rangle }}. \\
\mbox{ For $M(3)$,}& \mbox{ $\mu \mapsto a b^{-1}$ where} \\
&\Gamma = {{ \left\langle {a,b} \ \left| \ { a^3 = b^3 = (ab)^{p - 3 q} = 1} \right. \right\rangle }}.
\end{split}$$ It remains to check that the images of $\mu$ above always have infinite order in $\Gamma$. This is intuitively clear from looking at loops which represent these elements. The suspicious reader can check that this is really the case by using, say, the solution to the word problem for Coxeter groups [@Brown89 [§]{} II.3].
Thus, Theorem \[SF\_filling\] applies whenever one of the conditions in (\[pq\_cond\]) holds. If $p/q$ is such that two of (\[pq\_cond\]) hold, then all but finitely many Dehn surgeries on $M$ have virtual positive betti number. The set in $H_1({\partial}M,
{{\mathbb R}}) = {{\mathbb R}}^2$ where any one of the conditions fails is an infinite strip. So the set where a fixed pair of them fail is compact, namely a parallelogram. Hence, outside a union of 3 parallelograms, at least two of the conditions hold. These 3 parallelograms are all contained in the square where ${{\left| p \right|}}, {{\left| q \right|}} \leq 100$. To complete the proof of the theorem, one checks all the slopes in that square to find those where fewer than two of (\[pq\_cond\]) hold.
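This final enumeration is easy to mechanize. The sketch below normalizes slopes as pairs $(p, q)$ with $q \geq 1$ and $\gcd(p, q) = 1$, plus $(1, 0)$ for $\infty$, and recovers the 28 slopes of the set $E$ in Theorem \[whiteheadthm\]; the square ${{\left| p \right|}}, {{\left| q \right|}} \leq 100$ is more than large enough, since the strips separate once $q \geq 8$.

```python
from math import gcd

def conditions(p, q):
    # the three hyperbolicity conditions (pq_cond), for M(1), M(2), M(3)
    return [abs(p - 6*q) > 6, abs(p - 4*q) > 4, abs(p - 3*q) > 3]

exceptional = []        # slopes where fewer than two of the conditions hold
for q in range(0, 101):
    for p in range(-100, 101):
        if (q == 0 and p != 1) or (q > 0 and gcd(p, q) != 1):
            continue
        if sum(conditions(p, q)) < 2:
            exceptional.append((p, q))

assert len(exceptional) == 28            # exactly the set E
assert (1, 0) in exceptional             # infinity
assert (24, 7) in exceptional and (9, 2) in exceptional
assert (13, 2) not in exceptional        # here two of the conditions hold
```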
For most of the slopes in $E$, one of (\[pq\_cond\]) holds, and so one still has a partial result. The slopes where none of the conditions in (\[pq\_cond\]) hold are $$\{ \infty, 0, 1, 2, 3, 4, 5, 6, 7/2, 9/2\}.$$ One interesting manifold among these exceptions is the sister of the figure-8 complement $W(5)$. We will consider that manifold in detail in Section \[sister\].
The figure-8 knot
=================
Here we prove:
\[fig8thm\] Every non-trivial Dehn surgery on the figure-8 knot has virtual positive betti number.
Let $M$ be the figure-8 complement. As the figure-8 knot is isotopic to its mirror image, the Dehn filling $M(p/q)$ is homeomorphic to $M(-p/q)$. Now, if $W$ is the Whitehead complement as in the last section, $M = W(-1)$. Hence $M$ has at least 6 interesting Seifert fibered surgeries namely $M(\pm 1)$, $M(\pm 2)$ and $M(\pm 3)$. In (\[quo\_pres\]), we saw exactly which orbifold quotients $\Gamma/{\left\langle \mu^n \right\rangle}$ arise when we try our method of transferring virtual positive betti number. By a minor miracle, Holt and Plesken have looked at exactly these quotients and shown:
[[@HoltPlesken92]]{}Let $$\begin{aligned}
\Gamma^1_n &= {{ \left\langle {a,b} \ \left| \ { a^2 = b^3 = (ab)^7 =
(aba^{-1}b^{-1})^n = 1} \right. \right\rangle }} ,\\
\Gamma^2_n &= {{ \left\langle {a,b} \ \left| \ { a^2 = b^4 = (ab)^5 = (ab^2)^n = 1} \right. \right\rangle }} , \mbox{and}\\
\Gamma^3_n &= {{ \left\langle {a,b} \ \left| \ { a^3 = b^3 = (ab)^4 = (ab^{-1})^n = 1} \right. \right\rangle }}.
\end{aligned}$$ These groups have virtual positive betti number if $n \geq 11$ for $\Gamma^1_n$ and $n \geq 6$ for $\Gamma^2_n$ and $\Gamma^3_n$.
Thus $M(\alpha)$ has virtual positive betti number if any of the following hold: $$\Delta(\alpha, \pm 1) \geq 11, \Delta(\alpha, \pm2) \geq 6, {\quad\mbox{or}\quad} \Delta(\alpha, \pm 3) \geq 6.$$ It’s easy to check that the only slopes $\alpha$ for which none of these hold are $\{\infty, 0, \pm 1, \pm 2 \}$. Since $H_1(M(0)) = {{\mathbb Z}}$ and the Seifert fibered manifolds $M(\pm 1)$ and $M(\pm 2)$ have virtual positive betti number, we’ve proved the theorem.
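This last case check is also easily mechanized: with $\Delta(p/q, n/1) = {{\left| p - n q \right|}}$, the sketch below confirms that the only slopes where all six distance conditions fail are $\{\infty, 0, \pm 1, \pm 2 \}$ (slopes are normalized as in the previous section, with $(1, 0)$ for $\infty$):

```python
from math import gcd

def some_condition_holds(p, q):
    # Delta(p/q, n/1) = |p - n q| against the Holt-Plesken thresholds above
    return (abs(p - q) >= 11 or abs(p + q) >= 11 or
            abs(p - 2*q) >= 6 or abs(p + 2*q) >= 6 or
            abs(p - 3*q) >= 6 or abs(p + 3*q) >= 6)

bad = [(p, q) for q in range(0, 101) for p in range(-100, 101)
       if ((q == 0 and p == 1) or (q > 0 and gcd(p, q) == 1))
       and not some_condition_holds(p, q)]

# exactly infinity, 0, +/-1, +/-2
assert sorted(bad) == sorted([(1, 0), (0, 1), (1, 1), (-1, 1), (2, 1), (-2, 1)])
```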
Other groups of the form $\Gamma/{\left\langle \gamma^n \right\rangle}$ and further questions {#dehn_questions}
=============================================================================================
As we have seen, groups of the form $\Gamma/{\left\langle \gamma^n \right\rangle}$, where $\Gamma$ is a Fuchsian group, are very useful for studying the Virtual Haken Conjecture via Dehn filling. So it is natural to ask: what other types of $\Gamma$ give similar results? In this section, we consider $\Gamma$ which are free products with amalgamation of finite groups. The key source here is Lubotzky’s paper [@Lubotzky96], which gives a number of applications of these groups to the Virtual Positive Betti Number Conjecture.
For convenience, we will only discuss free products with amalgamation, but there are analogous statements for HNN extensions. Let $\Gamma =
A *_C B$ be an amalgam of finite groups where $C$ is a proper subgroup of $A$ and $B$. The group $\Gamma$ acts on a tree $T$ with finite point stabilizers. By [@SerreTrees §II.2.6], $\Gamma$ has a finite index subgroup $\Lambda$ which acts freely on $T$. The subgroup $\Lambda$ has to be free, and so $\Gamma$ is virtually free. It is not hard to show that if one of $[A : C]$ and $[B : C]$ is $\geq
3$ then $\Gamma$ is virtually a free group of rank $\geq 2$ [@Lubotzky96 Lemma 2.2]. From now on, we will assume $[A : C]
\geq 3$. Because $\Gamma$ is virtually free, it is natural to hope that the answer to the following question is yes:
Let $\Gamma$ be an amalgam of finite groups, and fix $\gamma \in
\Gamma$ of infinite order. Does there exist an $N$ such that for all $n \geq N$, the group $\Gamma_n = \Gamma/{\left\langle \gamma^n \right\rangle}$ has virtual positive betti number?
Note that by Gromov, there is an $N$ such that $\Gamma_n$ is a non-elementary word hyperbolic group for all $n \geq N$.
Now consider these groups in the context of Dehn filling. Suppose $M$ is a manifold with torus boundary, and suppose $\alpha$ is a slope where $\pi_1(M(\alpha))$ surjects onto $\Gamma$, an amalgam of finite groups. Choose $\gamma$ in $\pi_1({\partial}M)$ so that $\{ \alpha,
\gamma \}$ form a basis. The proof of Theorem \[BoyerZhang\] shows that if $M$ does not contain a closed incompressible surface, then the image of $\gamma$ in $\Gamma$ has infinite order.
There are candidate $\alpha$ where one expects that $\pi_1(M(\alpha))$ will surject onto an amalgam of finite groups. Suppose that $N = M(\alpha)$ contains a separating incompressible surface $S$. Then $\pi_1(N)$ splits as $\pi_1(N_1) *_{\pi_1(S)} \pi_1(N_2)$, where the $N_i$ are the components of $N \setminus S$. Recall that $\pi_1(S)$ is said to be *separable* in $\pi_1(N)$ if it is closed in the profinite topology on $\pi_1(N)$. Lubotzky showed [@Lubotzky96 Prop. 4.2] that if $\pi_1(S)$ is separable then there is a homomorphism from $\pi_1(N)$ to an amalgam of finite groups $\Gamma$, which respects the amalgam structure. Provided that $S$ is not a semi-fiber (that is, the $N_i$ are not both $I$-bundles), $\Gamma = A *_C B$ can be chosen so that $[A:C] \geq 3$.
In general, we will say that $\pi_1(S)$ is *weakly separable* when there is such an amalgam preserving map from $\pi_1(N)$ to an amalgam of finite groups. A priori, this is weaker than $\pi_1(S)$ being closed in $\pi_1(N)$, which is in turn weaker than $\pi_1(N)$ being subgroup separable (aka LERF).
Note that if $\pi_1(S)$ is weakly separable, then $N$ has virtual positive betti number as $\pi_1(N)$ virtually maps onto a free group. If $N$ is hyperbolic, it seems quite possible that the fundamental group of an embedded surface is always weakly separable. If this is the case, there is no difference between being virtually Haken and having virtual positive betti number. Subgroup separability properties for 3-manifold groups have been difficult to prove even in special cases. Weak separability also seems quite difficult to show even though the surface $S$ is embedded.
Let $M$ be a manifold with torus boundary which is hyperbolic. Assume that $M$ does not contain a closed incompressible surface. Then there are always at least two Dehn fillings of $M$ which contain an incompressible surface [@CullerShalen84; @CGLS]. If embedded surface subgroups are weakly separable, we would expect that for most $M$, there are at least two slopes where $\pi_1(M(\alpha))$ surjects onto an amalgam of finite groups. One has to say “most” here because $M(\alpha)$ might be a (semi-)fiber or the Poincaré conjecture might fail. This makes it plausible that, regardless of the truth of the virtual Haken conjecture in general, for a fixed $M$ all but finitely many Dehn fillings of $M$ have virtual positive betti number. In this context, it is worth mentioning the result of Cooper-Long [@CooperLongSSSSS] which says that for any such hyperbolic $M$ all but finitely many of the Dehn fillings contain a surface group. If fundamental groups of hyperbolic manifolds are subgroup separable, then this result would also imply that all but finitely many fillings of $M$ have virtual positive betti number.
One case where weak separability is known is when $N = M(\alpha)$ is irreducible and the incompressible surface $S$ in $N$ is a torus. Then $N$ is Haken and, by geometrization, $\pi_1(N)$ is residually finite. Using this it’s not too hard to show that $\pi_1(S)$ is a separable subgroup. So in this case $\pi_1(N)$ maps to an amalgam of finite groups. In the next section, we will use these ideas in this special case to show that all of the Dehn fillings on the sister of the figure-8 complement satisfy the Virtual Haken Conjecture.
The sister of the figure-8 complement {#sister}
=====================================
Let $M$ be the sister of the figure-8 complement. The manifold $M$ is the punctured torus bundle whose monodromy has trace $-3$, and is also the Dehn filling $W(5)$ of the Whitehead link complement. We will use the basis $(\mu, \lambda)$ of $\pi_1({\partial}M)$ coming from the standard basis on $W$. We will show:
\[sister\_thm\] Let $M$ be the sister of the figure-8 complement. Then every Dehn filling of $M$ which has infinite fundamental group has virtual positive betti number.
The manifold $M$ has a self-homeomorphism which acts on $\pi_1({\partial}M)$ via $(\mu, \lambda) \mapsto (\mu + \lambda, -\lambda)$. Let $N$ be the filling $M(4) {\cong}M(4/3)$. The manifold $N$ contains a separating incompressible torus. It turns out that this torus splits $N$ into a Seifert fibered space with base orbifold $D^2(2,3)$ and a twisted interval bundle over the Klein bottle. Rather than describe the details of this splitting, we will simply exhibit the final homomorphism from $\pi_1(N)$ onto an amalgam of finite groups. In fact, $\pi_1(N)$ surjects onto $\Gamma = S_3
*_{C_2} C_4$ where $C_n$ is a cyclic group of order $n$.
According to [`SnapPea`]{}, the group $\pi_1(N)$ has presentation: $${{ \left\langle {a, b} \ \left| \ { ab^2ab^{-1}a^3b^{-1} = ab^2a^{-2}b^2 = 1} \right. \right\rangle }}$$ where $\mu \in \pi_1({\partial}M)$ becomes $a b$ in $\pi_1(N)$. If we add the relators $a^3 = b^4 = 1$ to the presentation of $\pi_1(N)$, we get a surjection from $\pi_1(N)$ onto $$\Gamma = {{ \left\langle {a,b} \ \left| \ {a^3 = b^4 = (ab^2)^2 = 1} \right. \right\rangle }}.$$ As $S_3$ has presentation ${{ \left\langle {x,y} \ \left| \ {x^3 = y^2 = (xy)^2=1} \right. \right\rangle }}$, we see that $\Gamma$ is $S_3 *_{C_2} C_4$ where the first factor is generated by $\{a, b^2\}$ and the second by $b$.
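As a sanity check (not spelled out in the original text), one can verify directly that adding $a^3 = b^4 = 1$ to the [`SnapPea`]{} presentation yields exactly the group $\Gamma$ above. Since $a^3 = 1$ gives $a^{-2} = a$, the second relator of $\pi_1(N)$ becomes $ab^2ab^2 = 1$, that is $(ab^2)^2 = 1$. Conversely, in $\Gamma$ the relation $(ab^2)^2 = 1$ gives $ab^2a = b^{-2}$, so $$ab^2ab^{-1}a^3b^{-1} = ab^2ab^{-2} = (ab^2a)\,b^{-2} = b^{-4} = 1 {\quad\mbox{and}\quad} ab^2a^{-2}b^2 = (ab^2a)\,b^2 = b^{-2}b^2 = 1,$$ so both of the original relators hold in $\Gamma$.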
We will need:
\[amalgam\] Let $\Gamma$ be $S_3 *_{C_2} C_4$ and let $\gamma \in \Gamma$ be $ab$. The group $$\Gamma_n = \Gamma/{\left\langle \gamma^n \right\rangle}$$ has virtual positive betti number for all $n \geq 10$. For $n < 10$, the group $\Gamma_n$ is finite.
Assuming the lemma, the theorem follows easily. Given a slope $\alpha$ in $\pi_1({\partial}M)$, if either $\Delta(\alpha, 4) \geq 10$ or $\Delta(\alpha, 4/3) \geq 10$ then $M(\alpha)$ has virtual positive betti number. The only $\alpha$ which satisfy neither condition are $E = \{ 0$, $-1$, $\infty$, $1$, $1/2$, $2$, $3$, $3/2$, $4$, $4/3$, $5/2$, $5/3$, $7/3$, $7/4\}.$ One can check that the fillings along these slopes either have finite $\pi_1$ or have virtual positive betti number (the 6 hyperbolic fillings in $E$ are all among the census manifolds which we showed have virtual positive betti number in the earlier sections).
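The exceptional set $E$ can likewise be checked mechanically. The following Python sketch (not part of the original paper) enumerates primitive slopes and confirms that exactly the 14 listed slopes satisfy neither $\Delta(\alpha, 4) \geq 10$ nor $\Delta(\alpha, 4/3) \geq 10$; the search window is large enough since the two conditions together force $|p| \leq 9$ and $|q| \leq 4$.

```python
from math import gcd

# Distance Delta(p/q, r/s) = |ps - qr| between slopes on the torus.
def delta(p, q, r, s):
    return abs(p * s - q * r)

# Primitive slopes p/q; the pair (1, 0) is the slope "infinity".
slopes = [(1, 0)] + [(p, q) for q in range(1, 20)
                     for p in range(-40, 41) if gcd(abs(p), q) == 1]

# Slopes where neither Delta(alpha, 4) >= 10 nor Delta(alpha, 4/3) >= 10 holds.
E = sorted((p, q) for (p, q) in slopes
           if delta(p, q, 4, 1) < 10 and delta(p, q, 4, 3) < 10)
print(E)
```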
Now we will prove the lemma.
As in the case of a Fuchsian group the key is to show:
\[bigclaim\] Let $n \geq 12$. Then there is a homomorphism $f$ from $\Gamma$ to a finite group $Q$ where $f$ is injective on the amalgam factors $S_3$ and $C_4$ and where $\gamma$ has order $n$.
Assuming this claim, we will prove the lemma for $n \geq 12$. The Euler characteristic (in the sense of Wall [@Wall61]) of $\Gamma$ is $1/6 + 1/4 - 1/2 = -1/12$. Let $K$ be the kernel of $f$. The subgroup $K$ is free, and from its Euler characteristic we see that it has rank $1 + \#Q/12$. Let $K'$ be the kernel of the induced homomorphism $\Gamma_n \to Q$. Then $H_1(K', {{\mathbb Z}})$ is obtained from $H_1(K, {{\mathbb Z}})$ by adding $\#Q/n$ relators. As $n \geq 12$, this implies that $H_1(K', {{\mathbb Z}})$ is infinite and $\Gamma_n$ has virtual positive betti number.
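The arithmetic in this step is easy to confirm with exact fractions. The sketch below (illustrative, not from the paper, with sample values for $\#Q$ divisible by 12) checks that $\chi(\Gamma) = -1/12$, that the free kernel has rank $1 - \#Q\,\chi = 1 + \#Q/12$, and that this rank exceeds the number $\#Q/n$ of added relators whenever $n \geq 12$, which forces $H_1(K', {{\mathbb Z}})$ to be infinite.

```python
from fractions import Fraction as F

# Wall Euler characteristic of Gamma = S_3 *_{C_2} C_4:
# chi = 1/|S_3| + 1/|C_4| - 1/|C_2|.
chi = F(1, 6) + F(1, 4) - F(1, 2)
print(chi)  # -1/12

# For a finite quotient f: Gamma -> Q injective on the factors, the kernel K
# is free with chi(K) = |Q| * chi, so rank(K) = 1 - |Q| * chi = 1 + |Q|/12.
def rank_K(order_Q):
    return 1 - order_Q * chi

# H_1(K') is Z^rank(K) with |Q|/n relators added, hence infinite whenever
# rank(K) > |Q|/n; for n >= 12 this always holds.
for q in (12, 24, 120, 1200):
    for n in range(12, 40):
        assert rank_K(q) > F(q, n)
```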
To prove the rest of the lemma, one can check that $\Gamma_{10}$ and $\Gamma_{11}$ have homomorphisms into $S_{12}$ and ${\mathrm{PSL}_{2} {{\mathbb F}}_{23}}$ respectively whose kernels have infinite $H_1$. Using coset enumeration, it is easy to check that $\Gamma_n$ is finite for $n < 10$.
Now we establish the claim. For each $n$, we will inductively build a permutation representation $f {\colon\thinspace}\Gamma \to S_n$ where $f(\gamma)$ has order $n$. We will say that $f {\colon\thinspace}\Gamma \to S_n$ is *special* if it is faithful on the amalgam factors, $f( \gamma )$ is an $n$-cycle, and $f(b)$ fixes $n$. If $f$ satisfies these conditions except for $f(b)$ fixing $n$, we will say that $f$ is *almost special*. Our induction tool is:
\[subclaim\] Suppose that $f$ is a special representation of $\Gamma$ into $S_n$. Then there exists a special representation of $\Gamma$ into $S_{n +
6}$. Also, there exists an almost special representation of $\Gamma$ into $S_{n +7}$.
To see this, let $f$ be a special representation. First, we construct the representation into $S_{n+6}$. Let $$L = \{1, 2,\dots, n \} \cup \{p_1, p_2, p_3, p_4, p_5, p_6 \}.$$ We will find a special representation into $S_L$. Let $g {\colon\thinspace}\Gamma
\to S_{\{n, p_1, \dots, p_6\}}$ be the special representation given by $$g(a) = ( p_1 p_2 p_3) (p_4 p_5 p_6) {\quad\mbox{and}\quad} g(b) = (n p_1) (p_2 p_4 p_3 p_5).$$ It’s easy to check (using that $f(a)$ commutes with $g(b^2)$, etc.) that $h(a) = f(a) g(a)$ and $h(b) = f(b) g(b)$ induce a homomorphism $h {\colon\thinspace}\Gamma \to S_L$. Moreover, $h( a b ) = f(a) g(a) f(b) g(b) =
f(a)f(b) g(a) g(b) = f(ab) g(ab)$. Thus $h(ab)$ is the product of an $n$-cycle and a $7$-cycle which overlap only in $n$, and so is an $(n+6)$-cycle. So $h$ is special.
To construct the almost-special representation, do the same thing with a further point $p_7$ added to $L$, where $g$ is now defined by $$g(a) = ( p_1 p_2 p_3) (p_4 p_5 p_6) {\quad\mbox{and}\quad} g(b) = (n p_1) (p_2 p_4 p_3 p_5)(p_6 p_7).$$ This establishes the inductive Claim \[subclaim\].
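As a sanity check (not part of the original proof), the inductive step of Claim \[subclaim\] can be run in code: starting from the special representation for $n = 6$ listed below, multiplying by $g$ repeatedly produces special representations of degrees $12, 18, 24$. The composition convention here applies the right-hand factor first, which is an implementation choice.

```python
def perm(cycles, points):
    """Permutation (as a dict) given by disjoint cycles, identity elsewhere."""
    p = {x: x for x in points}
    for cyc in cycles:
        for i, x in enumerate(cyc):
            p[x] = cyc[(i + 1) % len(cyc)]
    return p

def compose(p, q):
    """(p o q)(x) = p(q(x)): apply q first, then p."""
    return {x: p[q[x]] for x in q}

def order(p):
    r, k = dict(p), 1
    while any(r[x] != x for x in r):
        r, k = compose(p, r), k + 1
    return k

def full_cycle(p):
    x0 = next(iter(p)); x, k = p[x0], 1
    while x != x0:
        x, k = p[x], k + 1
    return k == len(p)

def special(a, b, n):
    # Relations of Gamma, faithfulness on the factors, ab an n-cycle, b fixes n.
    b2 = compose(b, b)
    return (order(a) == 3 and order(b) == 4 and order(compose(a, b2)) == 2
            and full_cycle(compose(a, b)) and b[n] == n)

# Base case n = 6.
n = 6
pts = range(1, n + 1)
a = perm([(1, 2, 3), (4, 5, 6)], pts)
b = perm([(2, 4, 3, 5)], pts)

degrees = []
for _ in range(4):
    assert special(a, b, n)
    degrees.append(n)
    # Inductive step: adjoin p_1,...,p_6 = n+1,...,n+6 and multiply by g.
    pts = range(1, n + 7)
    ga = perm([(n + 1, n + 2, n + 3), (n + 4, n + 5, n + 6)], pts)
    gb = perm([(n, n + 1), (n + 2, n + 4, n + 3, n + 5)], pts)
    a = compose({x: a.get(x, x) for x in pts}, ga)
    b = compose({x: b.get(x, x) for x in pts}, gb)
    n += 6

print(degrees)
```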
Using the induction, to prove Claim \[bigclaim\] it suffices to show that there are special representations for $n = 6, 7, 15, 17$, and that there is an almost-special representation for $n = 16$. These are $$\begin{aligned}
n = 6 \quad & a \mapsto ( 1,2,3)( 4,5,6)\\
& b \mapsto ( 2, 4, 3, 5) \\
n = 7 \quad & a \mapsto ( 2, 3, 4)( 5, 6, 7) \\
& b \mapsto ( 1, 2)( 3, 5, 4, 6) \\
n = 15 \quad & a \mapsto ( 2, 3, 4)( 5, 7, 9)( 6, 8,11)(12,13,15) \\
& b \mapsto ( 1, 2)( 3, 5, 4, 6)( 7,10,11,14)( 8,12, 9,13)\\
n = 16 \quad & a \mapsto ( 2, 3, 4)( 5, 7, 9)( 6, 8,11)(12,13,15) \\
& b \mapsto ( 1, 2)( 3, 5, 4, 6)( 7,10,11,14)( 8,12, 9,13)(15,16) \\
n =17 \quad & a \mapsto ( 2, 3, 5)( 6, 8,11)( 7,10, 9)(12,15,13)(14,16,17) \\
& b \mapsto ( 1, 2, 4, 7)( 3, 6, 9,12)( 5, 8,10,13)(11,14,15,16).\end{aligned}$$ This completes the proof of the claim, the lemma, and thus the theorem.
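The five explicit representations can also be verified mechanically. The Python sketch below (not part of the original paper; composition applies the right-hand factor first) checks the relations $a^3 = b^4 = (ab^2)^2 = 1$, that $f(ab)$ is an $n$-cycle, and whether $f(b)$ fixes $n$, for each listed $n$.

```python
def perm(cycles, n):
    """Permutation of {1,...,n} (as a dict) given by disjoint cycles."""
    p = {x: x for x in range(1, n + 1)}
    for cyc in cycles:
        for i, x in enumerate(cyc):
            p[x] = cyc[(i + 1) % len(cyc)]
    return p

def compose(p, q):
    """(p o q)(x) = p(q(x)): apply q first, then p."""
    return {x: p[q[x]] for x in q}

def order(p):
    r, k = dict(p), 1
    while any(r[x] != x for x in r):
        r, k = compose(p, r), k + 1
    return k

def full_cycle(p):
    x0 = next(iter(p)); x, k = p[x0], 1
    while x != x0:
        x, k = p[x], k + 1
    return k == len(p)

# The representations displayed in the text, as (cycles of a, cycles of b).
reps = {
    6:  ([(1, 2, 3), (4, 5, 6)], [(2, 4, 3, 5)]),
    7:  ([(2, 3, 4), (5, 6, 7)], [(1, 2), (3, 5, 4, 6)]),
    15: ([(2, 3, 4), (5, 7, 9), (6, 8, 11), (12, 13, 15)],
         [(1, 2), (3, 5, 4, 6), (7, 10, 11, 14), (8, 12, 9, 13)]),
    16: ([(2, 3, 4), (5, 7, 9), (6, 8, 11), (12, 13, 15)],
         [(1, 2), (3, 5, 4, 6), (7, 10, 11, 14), (8, 12, 9, 13), (15, 16)]),
    17: ([(2, 3, 5), (6, 8, 11), (7, 10, 9), (12, 15, 13), (14, 16, 17)],
         [(1, 2, 4, 7), (3, 6, 9, 12), (5, 8, 10, 13), (11, 14, 15, 16)]),
}

results = {}
for n, (ca, cb) in reps.items():
    a, b = perm(ca, n), perm(cb, n)
    # Relations of Gamma = <a,b | a^3 = b^4 = (ab^2)^2 = 1>, faithfulness on
    # the amalgam factors, and f(ab) an n-cycle:
    assert order(a) == 3 and order(b) == 4
    assert order(compose(a, compose(b, b))) == 2
    assert full_cycle(compose(a, b))
    results[n] = 'special' if b[n] == n else 'almost special'
print(results)
```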
References
==========

, SJ Pride, , (1978) 425–426
, JW Morgan, PB Shalen, , (1987) 25–31
, HM. Hilden, , (1975) 315–352
, J Cannon, , (1994)
, X Zhang, , (2000) 103–114
, , Springer-Verlag, New York (1989)
, MV Hildebrand, JR Weeks, , (1999) 321–332
, , (1994) 441–456
, , preprint (2001) arXiv:math.GT/0107193
, RT Curtis, SP Norton, RA Parker, RA Wilson, , Oxford University Press, Eynsham (1985)
, DD Long, , (1999) 173–187
, DD Long, , Geometry and Topology 5 (2001) 347–367
, PB Shalen, , (1984) 537–545
, CM Gordon, J Luecke, PB Shalen, , (1987) 237–300
, , (1970) 707–712
, , (1982) 137–141
, , (2001) 43–58
, Slides from a talk at the University of Warwick (1999) available from `http://www.math.harvard.edu/~nathand`
, WP Thurston, , `http://www.computop.org/software/virtual_haken`
, WP Thurston, , in preparation
, F Roehl, G Rosenberger, , In: “Combinatorial and geometric group theory (Edinburgh, 1993)”, pages 73–86. Cambridge Univ. Press, 1995
, G Rosenberger, , Marcel Dekker Inc. New York (1999)
, MH Freedman, , (1998) 133–147
, , (1992) 447–510
, , (1997) 37–74
, R Meyerhoff, N Thurston, , preprint, to appear in Ann. of Math.
, , (2000) `http://www-gap.dcs.st-and.ac.uk/~gap`
, , (1982) 373–377
, DF Holt, S Rees, , (1993) 137–163
, BS Majewski, , from: “Proceedings of the Twenty-fifth Southeastern International Conference on Combinatorics, Graph Theory and Computing (Boca Raton, FL, 1994)”, Congr. Numer. 105 (1994) 87–96
, BS Majewski, , (1997) 399–408
, J Weeks, , from: “Computers and mathematics (Cambridge, MA, 1989)”, Springer (1989) 53–59
, W Plesken, , (1992) 469–480
, M Thistlethwaite, , `www.math.utk.edu/~morwen`
, U Oertel, , (1984) 195–209
, , from: “Geometric topology (Athens, GA, 1993)”, Amer. Math. Soc. Providence, RI (1997) 35–473, `http://www.math.berkeley.edu/~kirby/`
, , (1996) 441–452
, , (1996) 71–82
, L Simon, S-T Yau, , (1982) 621–659
, , preprint
, , from: “Algorithmic algebra and number theory (Heidelberg, 1997)”, Springer, Berlin (1999) 423–434
, B Souvignier, , (1996) 39–47
, B Souvignier, , (1998) 690–703
, A Schinzel, , (1968) 171–177
, , Princeton University Press, Princeton, NJ (1990)
, , (1974) 27–33
, , (1983) 35–70
, , Springer-Verlag, Berlin (1980) translated from the French by John Stillwell
, , from: “Differential geometry, global analysis, and topology (Halifax, NS, 1990)”, Amer. Math. Soc. Providence, RI (1991) 157–185.
, , Cambridge University Press, Cambridge (1994)
, , Lecture notes (1978) `http://www.msri.org/publications/books/gt3m/`
, , (1968) 272–280
, , (1961) 182–184
, , (1992) 271–279
$A_5$ $L_2(7)$ $A_6$ $L_2(8)$ $L_2(11)$ $L_2(13)$ $L_2(17)$ $A_7$ $L_2(19)$ $L_2(16)$ $L_3(3)$ $U_3(3)$
----------- ---------- ---------- ---------- ---------- ----------- ----------- ----------- ---------- ----------- ----------- ---------- ----------
$A_5$ **1.00** 0.02 0.13 0.05 0.17 0.03 -0.03 0.12 0.15 0.09 0.02 0.02
$L_2(7)$ 0.02 **1.00** 0.04 0.23 0.05 0.16 0.05 0.06 -0.02 -0.04 0.12 0.09
$A_6$ 0.13 0.04 **1.00** -0.04 0.13 -0.07 0.02 0.10 0.11 0.09 0.04 0.00
$L_2(8)$ 0.05 0.23 -0.04 **1.00** 0.02 0.20 0.06 0.08 0.05 -0.00 -0.00 0.11
$L_2(11)$ 0.17 0.05 0.13 0.02 **1.00** -0.01 0.03 0.11 0.11 0.14 0.07 0.05
$L_2(13)$ 0.03 0.16 -0.07 0.20 -0.01 **1.00** 0.00 -0.01 0.04 0.04 0.06 0.09
$L_2(17)$ -0.03 0.05 0.02 0.06 0.03 0.00 **1.00** 0.01 0.05 0.03 0.11 0.12
$A_7$ 0.12 0.06 0.10 0.08 0.11 -0.01 0.01 **1.00** 0.08 0.10 0.03 0.11
$L_2(19)$ 0.15 -0.02 0.11 0.05 0.11 0.04 0.05 0.08 **1.00** 0.11 0.03 0.03
$L_2(16)$ 0.09 -0.04 0.09 -0.00 0.14 0.04 0.03 0.10 0.11 **1.00** -0.02 0.07
$L_3(3)$ 0.02 0.12 0.04 -0.00 0.07 0.06 0.11 0.03 0.03 -0.02 **1.00** 0.10
$U_3(3)$ 0.02 0.09 0.00 0.11 0.05 0.09 0.12 0.11 0.03 0.07 0.10 **1.00**
$L_2(23)$ 0.01 0.10 0.03 0.07 0.05 0.03 0.12 -0.04 0.03 0.03 0.15 0.04
$L_2(25)$ 0.04 0.06 0.15 0.06 0.14 0.03 0.13 0.09 0.10 0.10 0.21 0.08
$M_{11}$ 0.16 0.03 0.21 -0.00 0.09 -0.02 0.09 0.12 0.01 0.05 0.05 0.06
$L_2(27)$ -0.01 0.19 -0.05 0.29 0.02 0.15 0.04 0.09 0.04 0.00 0.06 0.10
$L_2(29)$ 0.01 0.13 0.01 0.14 0.17 0.10 -0.00 0.19 0.15 0.06 0.00 0.01
$L_2(31)$ 0.08 0.08 0.18 0.00 0.10 -0.05 0.11 0.04 0.10 0.09 0.09 0.06
$A_8$ 0.11 0.14 0.12 0.11 0.08 0.08 0.07 0.17 0.10 0.07 0.04 0.11
$L_3(4)$ 0.15 0.03 0.13 0.02 0.11 -0.04 0.03 0.23 0.05 0.01 0.07 0.03
$L_2(37)$ 0.02 0.01 0.06 0.02 0.06 0.02 0.07 0.04 0.08 0.13 0.00 0.02
$U_4(2)$ 0.18 0.02 0.24 -0.00 0.07 -0.04 -0.01 0.13 0.05 0.05 0.02 -0.01
Sz(8) -0.00 0.02 0.11 -0.01 0.03 -0.03 0.00 -0.02 0.09 -0.03 -0.01 -0.03
$L_2(32)$ 0.07 0.06 -0.02 -0.02 0.01 0.03 0.00 -0.02 0.01 0.02 -0.00 0.05
: This table gives the correlations between: (having a cover with group 1, having a cover with group 2). The average off-diagonal correlation is 0.06.
$L_2(23)$ $L_2(25)$ $M_{11}$ $L_2(27)$ $L_2(29)$ $L_2(31)$ $A_8$ $L_3(4)$ $L_2(37)$ $U_4(2)$ Sz(8) $L_2(32)$
----------- ----------- ----------- ---------- ----------- ----------- ----------- ---------- ---------- ----------- ---------- ---------- -----------
$A_5$ 0.01 0.04 0.16 -0.01 0.01 0.08 0.11 0.15 0.02 0.18 -0.00 0.07
$L_2(7)$ 0.10 0.06 0.03 0.19 0.13 0.08 0.14 0.03 0.01 0.02 0.02 0.06
$A_6$ 0.03 0.15 0.21 -0.05 0.01 0.18 0.12 0.13 0.06 0.24 0.11 -0.02
$L_2(8)$ 0.07 0.06 -0.00 0.29 0.14 0.00 0.11 0.02 0.02 -0.00 -0.01 -0.02
$L_2(11)$ 0.05 0.14 0.09 0.02 0.17 0.10 0.08 0.11 0.06 0.07 0.03 0.01
$L_2(13)$ 0.03 0.03 -0.02 0.15 0.10 -0.05 0.08 -0.04 0.02 -0.04 -0.03 0.03
$L_2(17)$ 0.12 0.13 0.09 0.04 -0.00 0.11 0.07 0.03 0.07 -0.01 0.00 0.00
$A_7$ -0.04 0.09 0.12 0.09 0.19 0.04 0.17 0.23 0.04 0.13 -0.02 -0.02
$L_2(19)$ 0.03 0.10 0.01 0.04 0.15 0.10 0.10 0.05 0.08 0.05 0.09 0.01
$L_2(16)$ 0.03 0.10 0.05 0.00 0.06 0.09 0.07 0.01 0.13 0.05 -0.03 0.02
$L_3(3)$ 0.15 0.21 0.05 0.06 0.00 0.09 0.04 0.07 0.00 0.02 -0.01 -0.00
$U_3(3)$ 0.04 0.08 0.06 0.10 0.01 0.06 0.11 0.03 0.02 -0.01 -0.03 0.05
$L_2(23)$ **1.00** 0.09 0.04 0.07 0.02 0.08 -0.02 0.01 0.00 0.01 -0.04 0.08
$L_2(25)$ 0.09 **1.00** 0.05 0.15 0.07 0.14 0.12 0.05 0.06 0.10 0.03 0.03
$M_{11}$ 0.04 0.05 **1.00** -0.01 -0.00 0.14 0.14 0.19 0.00 0.21 0.09 0.04
$L_2(27)$ 0.07 0.15 -0.01 **1.00** 0.19 0.01 0.11 0.02 0.03 -0.01 -0.04 0.05
$L_2(29)$ 0.02 0.07 -0.00 0.19 **1.00** 0.07 0.12 0.11 0.08 0.03 -0.01 -0.02
$L_2(31)$ 0.08 0.14 0.14 0.01 0.07 **1.00** 0.09 0.10 0.02 0.13 0.08 0.08
$A_8$ -0.02 0.12 0.14 0.11 0.12 0.09 **1.00** 0.15 -0.01 0.14 0.08 -0.03
$L_3(4)$ 0.01 0.05 0.19 0.02 0.11 0.10 0.15 **1.00** -0.00 0.21 0.26 -0.04
$L_2(37)$ 0.00 0.06 0.00 0.03 0.08 0.02 -0.01 -0.00 **1.00** 0.01 -0.03 0.06
$U_4(2)$ 0.01 0.10 0.21 -0.01 0.03 0.13 0.14 0.21 0.01 **1.00** 0.02 0.04
Sz(8) -0.04 0.03 0.09 -0.04 -0.01 0.08 0.08 0.26 -0.03 0.02 **1.00** -0.03
$L_2(32)$ 0.08 0.03 0.04 0.05 -0.02 0.08 -0.03 -0.04 0.06 0.04 -0.03 **1.00**
: This table gives the correlations between: (having a cover with group 1, having a cover with group 2). The average off-diagonal correlation is 0.06.
$A_5$ $L_2(7)$ $A_6$ $L_2(8)$ $L_2(11)$ $L_2(13)$ $L_2(17)$ $A_7$ $L_2(19)$ $L_2(16)$ $L_3(3)$ $U_3(3)$
----------- ---------- ---------- ---------- ---------- ----------- ----------- ----------- ---------- ----------- ----------- ---------- ----------
$A_5$ **1.00** -0.01 0.28 0.05 0.25 0.01 0.06 0.11 0.23 0.11 0.02 0.02
$L_2(7)$ -0.01 **1.00** 0.05 0.38 0.04 0.25 0.14 0.11 0.02 0.01 0.17 0.13
$A_6$ 0.28 0.05 **1.00** 0.00 0.22 -0.07 0.12 0.13 0.17 0.10 0.08 0.02
$L_2(8)$ 0.05 0.38 0.00 **1.00** 0.05 0.36 0.11 0.12 0.06 0.06 0.03 0.12
$L_2(11)$ 0.25 0.04 0.22 0.05 **1.00** 0.03 0.07 0.06 0.18 0.12 0.08 0.04
$L_2(13)$ 0.01 0.25 -0.07 0.36 0.03 **1.00** 0.07 0.01 0.04 0.10 0.08 0.13
$L_2(17)$ 0.06 0.14 0.12 0.11 0.07 0.07 **1.00** 0.07 0.12 0.07 0.15 0.11
$A_7$ 0.11 0.11 0.13 0.12 0.06 0.01 0.07 **1.00** 0.07 0.09 0.07 0.13
$L_2(19)$ 0.23 0.02 0.17 0.06 0.18 0.04 0.12 0.07 **1.00** 0.09 0.08 0.05
$L_2(16)$ 0.11 0.01 0.10 0.06 0.12 0.10 0.07 0.09 0.09 **1.00** 0.03 0.10
$L_3(3)$ 0.02 0.17 0.08 0.03 0.08 0.08 0.15 0.07 0.08 0.03 **1.00** 0.14
$U_3(3)$ 0.02 0.13 0.02 0.12 0.04 0.13 0.11 0.13 0.05 0.10 0.14 **1.00**
$L_2(23)$ 0.06 0.13 0.02 0.04 0.06 0.05 0.13 -0.01 0.06 0.05 0.15 0.09
$L_2(25)$ 0.12 0.13 0.20 0.14 0.17 0.06 0.17 0.12 0.14 0.15 0.21 0.13
$M_{11}$ 0.19 0.04 0.33 0.03 0.12 0.00 0.11 0.17 0.07 0.08 0.06 0.07
$L_2(27)$ -0.03 0.38 -0.06 0.45 0.05 0.35 0.06 0.10 0.01 0.01 0.09 0.16
$L_2(29)$ 0.08 0.17 0.04 0.24 0.24 0.18 0.02 0.22 0.15 0.05 0.06 0.03
$L_2(31)$ 0.22 0.08 0.30 0.02 0.15 0.02 0.24 0.08 0.15 0.09 0.15 0.08
$A_8$ 0.11 0.15 0.15 0.14 0.08 0.12 0.14 0.20 0.08 0.09 0.09 0.12
$L_3(4)$ 0.21 0.08 0.27 0.04 0.15 -0.01 0.05 0.28 0.13 0.09 0.11 0.04
$L_2(37)$ 0.09 0.03 0.12 0.05 0.14 0.10 0.15 0.02 0.09 0.16 0.03 0.08
$U_4(2)$ 0.17 0.03 0.34 0.01 0.10 -0.01 0.05 0.15 0.10 0.08 0.05 0.02
Sz(8) 0.08 0.08 0.17 0.03 0.06 0.01 0.05 0.04 0.10 0.03 0.02 -0.01
$L_2(32)$ 0.06 0.05 -0.01 0.01 0.05 0.06 0.02 -0.01 0.04 0.05 0.01 0.08
: This table gives the correlations between: (having a cover with group 1 with positive betti number, having a cover with group 2 with positive betti number). The average off-diagonal correlation is 0.09.
$L_2(23)$ $L_2(25)$ $M_{11}$ $L_2(27)$ $L_2(29)$ $L_2(31)$ $A_8$ $L_3(4)$ $L_2(37)$ $U_4(2)$ Sz(8) $L_2(32)$
----------- ----------- ----------- ---------- ----------- ----------- ----------- ---------- ---------- ----------- ---------- ---------- -----------
$A_5$ 0.06 0.12 0.19 -0.03 0.08 0.22 0.11 0.21 0.09 0.17 0.08 0.06
$L_2(7)$ 0.13 0.13 0.04 0.38 0.17 0.08 0.15 0.08 0.03 0.03 0.08 0.05
$A_6$ 0.02 0.20 0.33 -0.06 0.04 0.30 0.15 0.27 0.12 0.34 0.17 -0.01
$L_2(8)$ 0.04 0.14 0.03 0.45 0.24 0.02 0.14 0.04 0.05 0.01 0.03 0.01
$L_2(11)$ 0.06 0.17 0.12 0.05 0.24 0.15 0.08 0.15 0.14 0.10 0.06 0.05
$L_2(13)$ 0.05 0.06 0.00 0.35 0.18 0.02 0.12 -0.01 0.10 -0.01 0.01 0.06
$L_2(17)$ 0.13 0.17 0.11 0.06 0.02 0.24 0.14 0.05 0.15 0.05 0.05 0.02
$A_7$ -0.01 0.12 0.17 0.10 0.22 0.08 0.20 0.28 0.02 0.15 0.04 -0.01
$L_2(19)$ 0.06 0.14 0.07 0.01 0.15 0.15 0.08 0.13 0.09 0.10 0.10 0.04
$L_2(16)$ 0.05 0.15 0.08 0.01 0.05 0.09 0.09 0.09 0.16 0.08 0.03 0.05
$L_3(3)$ 0.15 0.21 0.06 0.09 0.06 0.15 0.09 0.11 0.03 0.05 0.02 0.01
$U_3(3)$ 0.09 0.13 0.07 0.16 0.03 0.08 0.12 0.04 0.08 0.02 -0.01 0.08
$L_2(23)$ **1.00** 0.11 0.05 0.04 0.04 0.08 0.01 0.04 0.09 0.02 -0.05 0.15
$L_2(25)$ 0.11 **1.00** 0.12 0.15 0.15 0.18 0.20 0.06 0.16 0.14 0.04 0.02
$M_{11}$ 0.05 0.12 **1.00** -0.04 0.02 0.22 0.14 0.24 0.05 0.25 0.08 0.05
$L_2(27)$ 0.04 0.15 -0.04 **1.00** 0.25 -0.03 0.10 0.02 0.03 0.01 0.01 0.08
$L_2(29)$ 0.04 0.15 0.02 0.25 **1.00** 0.06 0.13 0.21 0.12 0.04 0.04 0.01
$L_2(31)$ 0.08 0.18 0.22 -0.03 0.06 **1.00** 0.10 0.12 0.12 0.15 0.10 0.06
$A_8$ 0.01 0.20 0.14 0.10 0.13 0.10 **1.00** 0.18 0.07 0.16 0.09 -0.03
$L_3(4)$ 0.04 0.06 0.24 0.02 0.21 0.12 0.18 **1.00** 0.02 0.25 0.30 -0.04
$L_2(37)$ 0.09 0.16 0.05 0.03 0.12 0.12 0.07 0.02 **1.00** 0.06 0.03 0.10
$U_4(2)$ 0.02 0.14 0.25 0.01 0.04 0.15 0.16 0.25 0.06 **1.00** -0.01 0.01
Sz(8) -0.05 0.04 0.08 0.01 0.04 0.10 0.09 0.30 0.03 -0.01 **1.00** -0.07
$L_2(32)$ 0.15 0.02 0.05 0.08 0.01 0.06 -0.03 -0.04 0.10 0.01 -0.07 **1.00**
: This table gives the correlations between: (having a cover with group 1 with positive betti number, having a cover with group 2 with positive betti number). The average off-diagonal correlation is 0.09.
---
abstract: 'Using the results found previously \[5\] for the cooling rates of the emittances, due to collisions between the electrons and the ions, a result is found for the friction force acting on the ions. It is shown that the friction force found here, when used to track the ion bunch, will give the same emittance cooling rates as those found using the intrabeam scattering theory for electron cooling \[5\]. For the case of the uniform in space electron bunch distribution, the friction force found here agrees with the friction force result found with the usual theory of electron cooling.'
author:
- George Parzen
date: |
NOVEMBER 2006\
BNL REPORT C-A/AP NO.261
title: Theory of the friction force using electron cooling as an intrabeam scattering process
---
Introduction
============
Using the results found previously \[5\] for the cooling rates of the emittances, due to collisions between the electrons and the ions, a result is found for the friction force acting on the ions. It is shown that the friction force found here, when used to track the ion bunch, will give the same emittance cooling rates as those found using the intrabeam scattering theory for electron cooling \[5\]. For the case of the uniform in space electron bunch distribution, the friction force found here agrees with the friction force result found with the usual theory of electron cooling.
Intrabeam scattering, ions on ions
==================================
Consider a beam which consists of a single bunch of completely ionized ions. The ions are doing betatron oscillations in the transverse direction, and synchrotron oscillations in the longitudinal direction. In addition, the ions are subject to the Coulomb repulsion between them. The scattering of each ion by the other ions is called intrabeam scattering. In RHIC, intrabeam scattering (IBS) causes the beam dimensions to grow slowly in all three directions. The growth of the beam can be computed using intrabeam scattering theory \[1-4\].
Intrabeam scattering, ions on electrons
=======================================
In electron cooling the ion bunch is overlapped by an electron bunch which is moving at the same velocity as the ion bunch. The ions can now scatter off each other or they can scatter off the electrons in the electron bunch. The scattering of the ions from each other occurs all around the accelerator ring and causes the emittances of the beam to grow. The scattering of the ions from the electrons occurs only in the cooling section and causes negative growth (cooling) in the ion emittances. Each kind of scattering may be considered as a kind of intrabeam scattering, and the growth of the ion bunch due to each kind of scattering can be computed in the same way \[5\].
Friction force definition using intrabeam scattering results for electron cooling
=================================================================================
The friction force will be defined as a force which, acting on each ion in the ion bunch, produces the same cooling rates, due to collisions between ions and electrons, for the three quantities $<p_i^2>, {\;\;\;}i=x,y,s$, for the ions as those found by the IBS theory for electron cooling. $<p_i^2>$ is the average of $p_i^2$ over all the ions in the bunch. It will be shown below that the friction force found using this definition, when used to track the ion bunch, will give the same emittance cooling rates as those found using the IBS theory of electron cooling.
Friction force results found using intrabeam scattering results for electron cooling
====================================================================================
Using the results for the cooling rates of $<p_i^2>, {\;\;\;}i=x,y,s$ found by the IBS theory for electron cooling \[5\], and the above definition of the friction force, one finds the following expression for the friction force. The subscripts $a,b$ indicate ions and electrons respectively. $N_b f_b(x,v_b)$ is the electron distribution function. $N_b$ is the total number of electrons in the electron bunch. $x$ is the location of the ion. The derivation of the friction force result is given below. Using the Coulomb cross-section, one gets $$\begin{aligned}
\sigma_{ab} &=& (\frac {r_{ab}} {\beta_{ab}^2})^2 \frac{1}{(1-cos \theta)^2}
{\;\;\;}{\;\;\;}coulomb\; cross-section\;in\; CMS \nonumber\\
r_{ab} &=& \frac{Z_aZ_b e^2}{\mu c^2} \nonumber\\
\beta_{ab} c &=& |\vec{v_a}-\vec{v_b}| \nonumber\\
\frac{1}{\mu} &=& \frac{1}{m_a}+\frac{1}{m_b} \nonumber\\
& & \nonumber\\
F_i &=& -4 \pi m_b N_b r_{ab}^2 c^4 \int d^3v_b {\;\;\;}\frac{(v_a-v_b)_i}{|v_a-v_b|^3}
f_b(x,v_b) ln {\left [}\frac{\beta_{ab}^2 b_{maxab}}{r_{ab}} {\right ]}\nonumber\\\end{aligned}$$ One can also find a result for any ${\sigma}_{ab}$, and not just the coulomb ${\sigma}_{ab}$. $$\begin{aligned}
F_i &=& -2 \pi m_b N_b \int d^3v_b {\;\;\;}{\left [}(v_a-v_b)_i |v_a-v_b| f_b(x,v_b)
\int d\theta sin \theta (1-cos\theta) {\sigma}_{ab} {\right ]}\nonumber\\\end{aligned}$$
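As a consistency sketch (not spelled out in the original text), the $\theta$-integral in the second expression reduces to the Coulomb logarithm for the Coulomb cross-section, recovering the first expression. Using $d(1-\cos\theta) = \sin\theta\, d\theta$ with a small-angle cut-off $\theta_{min}$, and the standard small-angle relation $\theta_{min} \simeq 2 r_{ab}/(\beta_{ab}^2 b_{maxab})$ (an assumption of this sketch), $$\int_{\theta_{min}}^{\pi} d\theta\, \sin\theta\,(1-\cos\theta)\, {\sigma}_{ab} = \Big(\frac{r_{ab}}{\beta_{ab}^2}\Big)^2 \ln\frac{2}{1-\cos\theta_{min}} \simeq 2 \Big(\frac{r_{ab}}{\beta_{ab}^2}\Big)^2 ln {\left [}\frac{\beta_{ab}^2 b_{maxab}}{r_{ab}} {\right ]},$$ where $1-\cos\theta_{min} \simeq \theta_{min}^2/2$. Substituting this, together with $\beta_{ab} c = |\vec{v_a}-\vec{v_b}|$, into the general expression reproduces the Coulomb-cross-section result above, including the prefactor $-4 \pi m_b N_b r_{ab}^2 c^4$.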
Uniform electron bunch case
---------------------------
For a uniform in space electron beam $$\begin{aligned}
f_b(x,v_b) &=& \frac{1}{volume}f_v(v_b) \nonumber\\
n_b &=& N_b/volume {\;\;\;}{\;\;\;}electron {\;\;\;}density \nonumber\\
& & \nonumber\\
F_i &=& -4 \pi m_b n_b r_{ab}^2 c^4 \int d^3v_b {\;\;\;}\frac{(v_a-v_b)_i}{|v_a-v_b|^3}
f_v(v_b) ln {\left [}\frac{\beta_{ab}^2 b_{maxab}}{r_{ab}} {\right ]}\nonumber\\\end{aligned}$$ This result for the friction force for a uniform in space electron beam is the same as the result found using the usual theory of electron cooling.
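As an illustrative numerical sketch (not from this report; all physical constants and the Coulomb logarithm are suppressed, and an isotropic Gaussian velocity distribution with rms spread $\sigma$ is assumed), the velocity integral in the uniform-beam friction force can be estimated by Monte Carlo, checking that the resulting force opposes the ion velocity. For an isotropic $f_v$, the integral should point along $v_a$ with magnitude $P(|v_b| < |v_a|)/|v_a|^2$ by the shell-theorem argument.

```python
import random

# Monte Carlo estimate of I_i(v_a) = Int d^3v_b (v_a - v_b)_i / |v_a - v_b|^3 f_v(v_b)
# for an isotropic Gaussian f_v; parameter values are purely illustrative.
random.seed(1)
sigma = 1.0
va = (3.0, 0.0, 0.0)          # ion velocity along x, in units of sigma

N = 200_000
acc = [0.0, 0.0, 0.0]
for _ in range(N):
    vb = [random.gauss(0.0, sigma) for _ in range(3)]
    d = [va[i] - vb[i] for i in range(3)]
    r3 = (d[0] ** 2 + d[1] ** 2 + d[2] ** 2) ** 1.5
    for i in range(3):
        acc[i] += d[i] / r3
I = [s / N for s in acc]

# The friction force is F_i = -(positive constants) * I_i, so I[0] > 0 means
# the force on the ion is antiparallel to its velocity, i.e. a drag.
print(I)
```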
Gaussian bunch case
-------------------
$$\begin{aligned}
f_b(x,v_b) &=& \frac
{exp[-x^2/(2{\sigma}_x^2)-y^2/(2{\sigma}_y^2)-s^2/(2{\sigma}_s^2)]}
{(2 \pi)^{3/2}{\sigma}_x {\sigma}_y {\sigma}_s }
f_{v}(v_b) \nonumber\\
& & \nonumber\\
F_i &=& -4 \pi m_b N_b r_{ab}^2 c^4
\frac {exp[-x^2/(2{\sigma}_x^2)-y^2/(2{\sigma}_y^2)-s^2/(2{\sigma}_s^2)]}
{(2 \pi)^{3/2}{\sigma}_x {\sigma}_y {\sigma}_s } \nonumber\\
& & \int d^3v_b
\frac{(v_a-v_b)_i}{|v_a-v_b|^3}
f_v(v_b) ln {\left [}\frac{\beta_{ab}^2 b_{maxab}}{r_{ab}} {\right ]}\nonumber\\\end{aligned}$$
This result can be generalized to apply to any electron bunch distribution that can be factored and written as $$f_b(x,v_b)=f_x(x)f_v(v_b).$$ One then finds $$\begin{aligned}
F_i &=& -4 \pi m_b N_b r_{ab}^2 c^4 f_x(x)
\int d^3v_b
\frac{(v_a-v_b)_i}{|v_a-v_b|^3}
f_v(v_b) ln {\left [}\frac{\beta_{ab}^2 b_{maxab}}{r_{ab}} {\right ]}\nonumber\\\end{aligned}$$ The results for the friction force given in this paper may differ from the usual friction force results when the electron bunch distribution cannot be factored. This may happen when the alpha function is not zero or when dispersion is present.
Cooling rates for $<p_ip_j>$, due to collisions, and for $<x_ip_i>$
===================================================================
If a horizontal dispersion is present in the cooling section, then the cooling rate of the emittances will also depend on the cooling rate of $<p_x p_s>$. It will be shown that the friction force obtained as described above, when used to track a particle sample of the ion distribution, will give the same cooling rate for $<p_x p_s>$ as that found using the IBS theory of electron cooling. Similar statements can be made for the vertical dispersion. Thus the friction force can be used to track a bunch of ions when dispersion is present to find the same emittance cooling rates as those found using the IBS theory of electron cooling.
The friction force as defined here to give the correct cooling rates, due to collisions, for $<p_i^2>, {\;\;\;}i=x,y,s$ also gives the correct cooling rates for all 6 of the moments $<p_i p_j>, {\;\;\;}i,j=x,y,s$. It will also be shown that it gives the correct cooling rates, due to collisions, for $<x_i p_i>, {\;\;\;}i=x,y,s$ which is required to compute the cooling rates of the emittances.
Derivation of the friction force using intrabeam scattering results for electron cooling
========================================================================================
To derive the results for the friction force, we will first find the cooling rates for $<p_i^2>, {\;\;\;}i=x,y,s$, due to collisions, using the friction force. We will then find the cooling rates for $<p_i^2>, {\;\;\;}i=x,y,s$ using the methods of IBS. Comparing these two results for the cooling rates , due to collisions, for $<p_i^2>, {\;\;\;}i=x,y,s$ will give us the result for the friction force.
Cooling rate of $<p_i^2>$ from the friction force
-------------------------------------------------
Let $p_{ik}, {\;\;\;}i=x,y,s$ be the components of the momentum of the $k$th ion. Let $v_{ik}, {\;\;\;}i=x,y,s$ be the components of the ion velocity. Let $N_a$ be the number of ions in the ion bunch. Let $F_i$ be the components of the friction force acting on the ion. If the ions are tracked using this friction force then $$\begin{aligned}
\frac{dp_{ik}}{dt} &=& F_i \nonumber\\
\frac{dp_{ik}^2}{dt} &=& 2 m_a v_{ik} F_i \nonumber\\
& & \nonumber\\
\frac{d<p_{ik}^2>}{dt} &=&
\frac{1}{N_a} \sum_{k=1}^{N_a} 2 m_a v_{ik} F_i \nonumber\\
\frac{d<p_{ia}^2>}{dt} &=& \int d^3x d^3v_a f_a(x,v_a) 2 m_a
v_{ia} F_i \nonumber\\ \end{aligned}$$ Note that $d/dt$ here gives only the rate of change of the relevant quantity due to collisions between ions and electrons.
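The tracking average above is straightforward to reproduce numerically. The following minimal Python sketch uses a hypothetical isotropic linear drag $F=-kv$ as a placeholder for the Coulomb friction force derived below; for that drag the exact rate is $-2km_a<v_i^2>$, which the sample average must reproduce:

```python
import numpy as np

# Monte Carlo version of the tracking average:
#   d<p_i^2>/dt = (1/N_a) sum_k 2 m_a v_ik F_i(v_k).
# The linear drag F = -k v is a placeholder (not the Coulomb friction
# force); for it the exact rate is -2 k m_a <v_i^2>.
rng = np.random.default_rng(0)
m_a, k_drag = 1.0, 0.3
v = rng.normal(0.0, 1.0, size=(200_000, 3))     # ion velocity sample

def dp2_dt(v, F, m_a):
    return (2.0 * m_a * v * F(v)).mean(axis=0)  # one value per i = x, y, s

rate = dp2_dt(v, lambda u: -k_drag * u, m_a)
exact = -2.0 * k_drag * m_a * (v**2).mean(axis=0)
```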
Cooling rate of $<p_i^2>$ from the IBS theory of electron cooling
-----------------------------------------------------------------
Let $\delta N_a$ be the number of ions with momentum $p_a$ in $d^3p_a$ and space coordinate $x$ in $d^3x$ which are scattered by the electrons with momentum $p_b$ in $d^3p_b$ which are also in $d^3x$, in the time interval $dt$, into the solid angle $d\Omega$. In a scattering event $p_a,p_b$ change to $p_a',p_b'$ and $q_a=p_a'-p_a$ is the change in the ion momentum. Then $\delta N_a$ is given by, Ref. \[4,5\], $$\begin{aligned}
\delta N_a &=& d\Omega {\;\;\;}{\sigma}_{ab} {\;\;\;}N_a f_a(x,v_a) d^3v_a |v_a-v_b|
{\;\;\;}N_b f_b(x,v_b) d^3v_b d^3x {\;\;\;}dt \nonumber\\\end{aligned}$$ $\sigma_{ab}$ is the scattering cross section for the scattering of the ions from the electrons. Using this result for $\delta N_a$ one can find that \[4,5\] $$\begin{aligned}
\delta<p_{ia}^2> &=& \int d^3v_bd^3xd^3v_a [ {\;\;\;}f_a(x,v_a) N_bf_b(x,v_b) |v_a-v_b| \nonumber\\
& & \int d\Omega {\;\;\;}{\sigma}_{ab} \delta (p_{ia}^2) ] {\;\;\;}dt
\nonumber\\
& & \nonumber\\
\delta (p_{ia}^2) &=& (p_{ia}+q_{ia})^2-p_{ia}^2 \nonumber\\
&=& 2 p_{ia} q_{ia} + q_{ia}^2 \nonumber\\
&=& 2 p_{ia} q_{ia} {\;\;\;}dropping\;q_{ia}^2\;(see\;below)
\nonumber\\
\int d\Omega {\;\;\;}{\sigma}_{ab} \delta (p_{ia}^2) &=& \int d\Omega {\;\;\;}{\sigma}_{ab} 2 p_{ia} q_{ia} \nonumber\\
q_{ia} &=& p_{ia}'-p_{ia} \nonumber\\\end{aligned}$$ In Eq.6, $p_{ia}$ does not depend on the scattering angles $\theta,\phi$. Let $d_i$ be defined as $$d_i=\int d\Omega {\;\;\;}{\sigma}_{ab} q_{ia}$$ $d\Omega {\sigma}_{ab}$ is an invariant and $q_{ia}$ is a vector in 3-space which has the same magnitude in the Rest CS and in the Center of Mass CS (CMS). Then $d_i$ is a vector in 3-space and can be evaluated in the CMS.
If this integral is evaluated in the CMS and the result is written in terms of tensors in 3-space then the result will also hold in the Rest CS.
In the CMS, we introduce a polar coordinate system $\theta,\phi$ where $\theta$ is measured relative to the direction of $\vec{p_a}$ and we assume that $\sigma_{ab}(\theta,\phi)$ is a function of $\theta$ only. We can then write $$\begin{aligned}
\vec{p_a} &=& (0,0,1)|\vec{p_a}| \nonumber\\
\vec{p_a \; '} &=& (\sin \theta \cos \phi,\sin \theta \sin \phi,
\cos \theta)|\vec{p_a}| \nonumber\\
\vec{q_a} &=& (\sin \theta \cos \phi,\sin \theta \sin \phi,
\cos \theta-1)|\vec{p_a}| \nonumber\\\end{aligned}$$ In the CMS, using Eq.7, one finds $$\begin{aligned}
d_i &=& -2 \pi \int d\theta \sin\theta
(1-\cos\theta) \sigma_{ab} (0,0,1) |p_a| \nonumber\\\end{aligned}$$ These results for $d_i$ in the CMS can be rewritten in terms of tensors in 3-space. In the CMS $$v_{ia}-v_{ib}=p_{ia}/m_a-p_{ib}/m_b=p_{ia}/\mu$$ $$p_{ia}=\mu (v_{ia}-v_{ib})$$ and $$\begin{aligned}
d_i &=& -2 \pi \int d\theta \sin\theta
(1-\cos\theta) \sigma_{ab} \mu (v_{ia}-v_{ib}) \nonumber\\\end{aligned}$$ In this form the result will also hold in the Rest CS.
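The angular structure just derived can be checked numerically: for any azimuthally symmetric $\sigma_{ab}(\theta)$, the transverse components of $\int d\Omega\, \sigma_{ab}\, q_{ia}$ average to zero and $d_i$ points along $\vec{p_a}$ with the weight $(1-\cos\theta)$. The Python sketch below uses a hypothetical smooth cross-section (a placeholder, chosen so the integral converges without a small-angle cutoff):

```python
import numpy as np

# Check of the solid-angle integral for d_i: the phi integration kills the
# transverse components of q_a, and the z component reduces to
# -2 pi * int dtheta sin(th) (1 - cos th) sigma(th).
def sigma(th):
    return 1.0 / (1.2 - np.cos(th))                  # placeholder cross-section

nth, nph = 400, 400
th = (np.arange(nth) + 0.5) * np.pi / nth            # midpoint rule in theta
ph = (np.arange(nph) + 0.5) * 2.0 * np.pi / nph      # midpoint rule in phi
TH, PH = np.meshgrid(th, ph, indexing="ij")
dOm = np.sin(TH) * (np.pi / nth) * (2.0 * np.pi / nph)

# components of q_a / |p_a| from the CMS parameterization above
qx = np.sin(TH) * np.cos(PH)
qy = np.sin(TH) * np.sin(PH)
qz = np.cos(TH) - 1.0
d = np.array([np.sum(dOm * sigma(TH) * q) for q in (qx, qy, qz)])

# closed angular form for the z component
dz = -2.0 * np.pi * np.sum(np.sin(th) * (1.0 - np.cos(th)) * sigma(th)) * (np.pi / nth)
```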
Using the above results for $\delta (p_{ia}^2)$, due to collisions, and for $d_i$ and putting them into the result for $\delta<p_{ia}^2>$ in Eq.6, one finds $$\begin{aligned}
\delta<p_{ia}^2> &=& \int d^3xd^3v_a f_a(x,v_a) 2 m_a v_{ia} \nonumber\\
& & [ -2 \pi m_b \int d^3v_b N_b (v_a-v_b)_i |v_a-v_b|
f_b(x,v_b) \nonumber\\
& &
(\int d\theta \sin\theta (1-\cos\theta) {\sigma}_{ab}) {\;\;\;}dt ]
\nonumber\\\end{aligned}$$
Friction force results
----------------------
Comparing the result for $\delta<p_{ia}^2>$, due to collisions, found here with the result for $\delta<p_{ia}^2>$ found in section 7.1, we get the result for the friction force $$\begin{aligned}
F_i &=& -2 \pi m_b N_b \int d^3v_b {\;\;\;}{\left [}(v_a-v_b)_i |v_a-v_b| f_b(x,v_b)
\int d\theta \sin\theta (1-\cos\theta) {\sigma}_{ab} {\right ]}\nonumber\\\end{aligned}$$ Using for $\sigma_{ab}$ the result for the Coulomb cross-section given in Eq.1 one finds $$\begin{aligned}
\sigma_{ab} &=& (\frac {r_{ab}} {\beta_{ab}^2})^2 \frac{1}{(1-cos \theta)^2}
{\;\;\;}{\;\;\;}Coulomb\; cross-section\;in\; CMS \nonumber\\
r_{ab} &=& \frac{Z_aZ_b e^2}{\mu c^2} \nonumber\\
\beta_{ab} c &=& |\vec{v_a}-\vec{v_b}| \nonumber\\
\frac{1}{\mu} &=& \frac{1}{m_a}+\frac{1}{m_b} \nonumber\\
& & \nonumber\\
F_i &=& -4 \pi m_b N_b r_{ab}^2 c^4 \int d^3v_b {\;\;\;}\frac{(v_a-v_b)_i}{|v_a-v_b|^3}
f_b(x,v_b) \ln {\left [}\frac{\beta_{ab}^2 b_{maxab}}{r_{ab}} {\right ]}\nonumber\\\end{aligned}$$ We can now justify dropping the $q_{ia}^2$ term in Eq.6. We will show that $|q_a|$ is smaller than $|p_a|$ in the Rest CS by the factor $m_b/m_a$. Thus the $q_{ia}^2$ term in Eq.6 is smaller than the $2p_{ia}q_{ia}$ term by the factor $m_b/m_a$.
$|q_a|$ has the same value in the CMS and in the Rest CS. In the CMS, $|q_a|$ has the magnitude of $|p_a|$ in the CMS. In RHIC, $|q_a|$ has the magnitude of $10^{-3}\,m_b c$ while $|p_a|$ in the Rest CS has the magnitude of $10^{-3}\,m_a c$. Thus $|q_a|$ is smaller than $|p_a|$ in the Rest CS by the factor $m_b/m_a$.
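The angular integral $\int d\theta\,\sin\theta\,(1-\cos\theta)\,\sigma_{ab}$ that enters the friction force diverges at small angles for the Coulomb cross-section, and the small-angle cutoff is what produces the logarithm in the final result (the identification of the cutoff with $b_{maxab}$ follows the text). A quick numerical check of the closed form with a cutoff $\theta_{min}$, namely $(r_{ab}/\beta_{ab}^2)^2\ln[2/(1-\cos\theta_{min})]$:

```python
import numpy as np

# Angular factor with the Coulomb cross-section sigma = A / (1 - cos th)^2,
# A = (r_ab / beta_ab^2)^2.  Substituting u = 1 - cos th turns the integral
# into A * int du / u from u_min to 2, i.e. A * ln(2 / (1 - cos th_min)).
A, th_min = 1.0, 1e-3

th = np.geomspace(th_min, np.pi, 400_000)        # dense log-spaced grid
integrand = np.sin(th) * (1.0 - np.cos(th)) * A / (1.0 - np.cos(th))**2
I = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(th))

closed = A * np.log(2.0 / (1.0 - np.cos(th_min)))
```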
Cooling rates for $<p_ip_j>$, due to collisions, required when dispersion is present
------------------------------------------------------------------------------------
If a horizontal dispersion is present in the cooling section, then the cooling rate of the emittances will also depend on the cooling rate of $<p_x p_s>$, due to collisions. It will be shown that the friction force obtained as described above, when used to track a particle sample of the ion distribution, will give the same cooling rate, due to collisions, for $<p_x p_s>$ as that found using the IBS theory of electron cooling. Similar statements can be made for the vertical dispersion. Thus the friction force can be used to track a bunch of ions when dispersion is present to find the same emittance cooling rates as those found using the IBS theory of electron cooling.
First let us find the cooling rate of $<p_{ia}p_{ja}>$ using the friction force. Using the same procedure as given in section 7.1 one gets $$\begin{aligned}
\frac{dp_{ik}}{dt} &=& F_i \nonumber\\
\frac{d(p_{ik} p_{jk})}{dt} &=& m_a( v_{ik} F_j+v_{jk} F_i) \nonumber\\
& & \nonumber\\
\frac{d<p_{ik}p_{jk}>}{dt} &=&
\frac{1}{N_a} \sum_{k=1}^{N_a} m_a ( v_{ik} F_j+v_{jk} F_i) \nonumber\\
\frac{d<p_{ia}p_{ja}>}{dt} &=& \int d^3x d^3v_a f_a(x,v_a) m_a
( v_{ia} F_j+v_{ja} F_i) \nonumber\\ \end{aligned}$$ This result for the cooling rate of $<p_{ia}p_{ja}>$, due to collisions, found using our result for the friction force will now be shown to be the same result as that found using the IBS theory of electron cooling \[5\]. The cooling rate of $<p_{ia}p_{ja}>$ using the IBS theory of electron cooling can be found using the same procedure as that given in section 7.2. $$\begin{aligned}
\delta<p_{ia}p_{ja}> &=& \int d^3v_bd^3xd^3v_a [ {\;\;\;}f_a(x,v_a) N_bf_b(x,v_b) |v_a-v_b| \nonumber\\
& & \int d\Omega {\;\;\;}{\sigma}_{ab} \delta (p_{ia}p_{ja}) ]
\nonumber\\
& & \nonumber\\
\delta (p_{ia}p_{ja}) &=& (p_{ia}+q_{ia})(p_{ja}+q_{ja})-p_{ia}p_{ja} \nonumber\\
&=& p_{ia} q_{ja}+p_{ja} q_{ia} + q_{ia}q_{ja} \nonumber\\
&=& p_{ia} q_{ja}+p_{ja} q_{ia} {\;\;\;}dropping\;q_{ia}q_{ja}
\nonumber\\
\int d\Omega {\;\;\;}{\sigma}_{ab} \delta (p_{ia}p_{ja}) &=& \int d\Omega {\;\;\;}{\sigma}_{ab} (p_{ia} q_{ja}+p_{ja} q_{ia}) \nonumber\\
q_a &=& p_a'-p_a \nonumber\\\end{aligned}$$ Eq.14 now becomes $$\begin{aligned}
\delta<p_{ia}p_{ja}> &=& \int d^3xd^3v_a f_a(x,v_a) m_a \nonumber\\
& & ( -2 \pi m_b \int d^3v_b
( v_{ia}(v_a-v_b)_j+v_{ja}(v_a-v_b)_i) |v_a-v_b| \nonumber\\
& &
N_bf_b(x,v_b) {\;\;\;}(\int d\theta \sin\theta (1-\cos\theta)
{\sigma}_{ab})
{\;\;\;}dt ) \nonumber\\\end{aligned}$$ which, using Eq.11 for the friction force, can be written as $$\begin{aligned}
\delta <p_{ia}p_{ja}> &=& \int d^3x d^3v_a f_a(x,v_a) m_a
( v_{ia} F_j+v_{ja} F_i) {\;\;\;}dt \nonumber\\ \end{aligned}$$ This is the same result as that found using the friction force, Eq.13.
Cooling rates for $<x_ip_i>$, due to collisions.
------------------------------------------------
First let us find the cooling rate of $<x_{i}p_{ia}>$ using the friction force. Using the same procedure as given in section 7.1 one gets $$\begin{aligned}
\frac{dp_{ik}}{dt} &=& F_i \nonumber\\
\frac{d(x_{ik}p_{ik})}{dt} &=& x_{ik} F_i \nonumber\\
& & \nonumber\\
\frac{d<x_{ik}p_{ik}>}{dt} &=&
\frac{1}{N_a} \sum_{k=1}^{N_a} x_{ik} F_i \nonumber\\
\frac{d<x_{i}p_{ia}>}{dt} &=& \int d^3x d^3v_a f_a(x,v_a)
x_{i} F_i \nonumber\\ \end{aligned}$$ Note that we are finding only the cooling rate due to collisions, and in collisions $x$ does not change.
This result for the cooling rate of $<x_{i}p_{ia}>$, due to collisions, found using our result for the friction force will now be shown to be the same result as that found using the IBS theory of electron cooling \[5\]. The cooling rate of $<x_{i}p_{ia}>$ using the IBS theory of electron cooling can be found using the same procedure as that given in section 7.2. $$\begin{aligned}
\delta<x_{i}p_{ia}> &=& \int d^3v_bd^3xd^3v_a [ {\;\;\;}f_a(x,v_a) N_bf_b(x,v_b) |v_a-v_b| \nonumber\\
& & \int d\Omega {\;\;\;}{\sigma}_{ab} \delta (x_{i}p_{ia}) ]
\nonumber\\
& & \nonumber\\
\delta (x_{i}p_{ia}) &=& x_{i}q_{ia} \nonumber\\
\int d\Omega {\;\;\;}{\sigma}_{ab} \delta (x_{i}p_{ia}) &=& \int d\Omega {\;\;\;}{\sigma}_{ab} x_{i}q_{ia} \nonumber\\
q_{ia} &=& p_{ia}'-p_{ia} \nonumber\\\end{aligned}$$ Eq.18 now becomes, using Eq.9 for $\int d\Omega {\;\;\;}{\sigma}_{ab}q_{ia}$, $$\begin{aligned}
\delta<x_{i}p_{ia}> &=& \int d^3xd^3v_a f_a(x,v_a) \nonumber\\
& & ( -2 \pi m_b \int d^3v_b
x_{i}(v_a-v_b)_i |v_a-v_b| \nonumber\\
& &
N_bf_b(x,v_b) {\;\;\;}(\int d\theta \sin\theta (1-\cos\theta)
{\sigma}_{ab})
{\;\;\;}dt ) \nonumber\\\end{aligned}$$ which, using Eq.11 for the friction force, can be written as $$\begin{aligned}
\delta <x_{i}p_{ia}> &=& \int d^3x d^3v_a f_a(x,v_a)
x_{i} F_i {\;\;\;}dt \nonumber\\ \end{aligned}$$ This is the same result as that found using the friction force, Eq.17.
Thanks are due to Alexei Fedotov for his assistance in comparing the results of the IBS treatment of electron cooling and the results found using the friction force.
References {#references .unnumbered}
==========
1\. A. Piwinski, Proc. 9th Int. Conf. on High Energy Accelerators (1974) 405
2\. J.D. Bjorken and S.K. Mtingwa, Part. Accel. 13 (1983) 115
3\. M. Martini, CERN PS/84-9 (1984)
4\. G. Parzen, BNL report C-A/AP/No. 150 (2004),
also at http://arxiv.org/ps_cache/physics/pdf/0405/0405019.pdf
5\. G. Parzen, BNL report C-A/AP/No. 243 (2006),
also at http://arxiv.org/abs/physics/0609076
---
abstract: |
This work presents the formalism and implementation of excited state nuclear forces within density functional linear response theory (TDDFT) using a plane wave basis set. An implicit differentiation technique is developed for computing nonadiabatic coupling between Kohn-Sham molecular orbital wavefunctions as well as gradients of orbital energies which are then used to calculate excited state nuclear forces. The algorithm has been implemented in a plane wave/pseudopotential code taking into account only a reduced active subspace of molecular orbitals. It is demonstrated for the H$_2$ and N$_2$ molecules that the analytical gradients rapidly converge to the exact forces when the active subspace of molecular orbitals approaches completeness.\
\
\
[***J. Chem. Phys. 122, 144101 (2005)***]{}
address: |
$^1$Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, D-44780 Bochum, Germany\
$^2$Department of Chemistry and Biochemistry, University of Maryland, College Park, MD 20742\
author:
- 'Nikos L. Doltsinis$^1$ and D. S. Kosov$^2$'
bibliography:
- 'dfg.bib'
title: 'Plane wave/pseudopotential implementation of excited state gradients in density functional linear response theory: a new route via implicit differentiation'
---
Introduction
============
The past decade has seen time-dependent density functional linear response theory (TDDFT) [@rg84; @c95; @ba96] become the most widely used electronic structure method for calculating vertical electronic excitation energies [@mbagl02; @th00]. Except for certain well-known problem cases such as, for instance, charge transfer [@bsh04; @dwh03; @cgggsd00] and double excitations [@hh99], TDDFT excitation energies are generally remarkably accurate, typically to within a fraction of an electron Volt [@agb03; @sggb00; @hgr00; @th98].
Excited state analytical nuclear forces within TDDFT have only been implemented recently [@ca99; @ca00; @fa02; @h03] in an attempt to extend the applicability of TDDFT beyond single point calculations. One complication has been the fact that TDDFT merely provides excitation energies, but excited state wavefunctions are not properly defined. The first excited state geometry optimization using analytical gradients was presented by van Caillie and Amos based on a Handy-Schaefer Z-vector method [@ca99; @ca00]. An extended Lagrangian ansatz was chosen by Furche and Ahlrichs [@fa02] and Hutter [@h03] for their Gaussian-type basis set and plane wave/pseudopotential implementations, respectively. The latter variant is of particular importance for condensed phase applications since it is used in conjunction with periodic boundary conditions. In order to ensure completeness, the number of Kohn-Sham (KS) orbitals included in constructing the response matrix in a molecular orbital (MO) basis must equal the number of basis functions. Since a plane wave basis typically consists of two orders of magnitude more basis functions than a Gaussian-type basis set, a complete MO formulation of TDDFT is impractical. A solution to this problem is to cast the working matrix equations directly into a plane wave basis as proposed by Hutter [@h03]. Earlier, Doltsinis and Sprik [@ds00] have proposed an alternative, [*active space*]{} approach to TDDFT in which only a subset of (active) KS orbitals is selected to construct the response matrix. For a large variety of excited states, convergence of the corresponding excitation energies has been shown to be rapid with respect to the number of orbitals included in the active space [@ds00; @cgggsd00]. In the present paper, we shall follow this active space ansatz and derive analytical expressions for excited state nuclear forces within an MO basis.
In contrast to previous work, we do not rely on a Lagrangian formulation [@fa02; @h03; @bk99], but employ an implicit differentiation scheme instead. This has the advantage that we obtain, in addition to the excited state energy gradients, also the gradients of KS energies and wavefunctions. The latter may be exploited to compute nonadiabatic coupling matrix elements between different electronic states. We have implemented the working equations within a plane wave/pseudopotential code [@cpmd] and we will demonstrate numerical accuracy and illustrate the convergence behaviour with respect to the number of MOs included in the active space for some prototypical test examples.
Theory
======
The linear response TDDFT eigenvalue problem can be written in Hermitian form as $$\label{eigen}
{\bf \Omega} |{\bf F}_n\rangle=\omega_n^2|{\bf F}_n\rangle\quad,$$ where the response matrix ${\bf \Omega}$ is defined as $$\label{omega}
\Omega_{ph\sigma,p'h'\sigma'}=\delta_{\sigma\sigma'}\delta_{pp'}
\delta_{hh'}(\epsilon_{p\sigma}-\epsilon_{h\sigma})^2 +
2\sqrt{\epsilon_{p\sigma}-\epsilon_{h\sigma}}K_{ph\sigma,p'h'\sigma'}
\sqrt{\epsilon_{p'\sigma}-\epsilon_{h'\sigma}}\quad,$$ with the coupling matrix $$K_{ph\sigma,p'h'\sigma'}=\int d{\bf r}\int d{\bf r^{\prime}}
\psi_{p\sigma}({\bf r})\psi_{h\sigma}({\bf r})f_{\rm H,xc}^{\sigma\sigma'}({\bf r,
r^{\prime}}) \psi_{p'\sigma'}({\bf r^\prime})\psi_{h'\sigma'}({\bf
r^\prime})\quad,$$ $\psi_{p\sigma}$ and $\psi_{h\sigma}$ being the KS particle (unoccupied) and hole (occupied) molecular orbitals with spin $\sigma$ corresponding to the KS energies $\epsilon_{p\sigma}$ and $\epsilon_{h\sigma}$, respectively. The response kernel $$f_{\rm H,xc}^{\sigma\sigma'}({\bf r,
r^{\prime}})=\frac{1}{|{\bf r -r^{\prime}}|}+\delta({\bf r}-{\bf
r}')\frac{\delta ^2E_{\rm xc}}{\delta\rho^\sigma({\bf r})\delta\rho^{\sigma^\prime}({\bf r}^\prime)}$$ containing a Hartree term and an exchange-correlation term is given in the usual adiabatic approximation [@gk90], i.e. the exchange-correlation contribution is taken to be simply the second derivative of the static ground state exchange-correlation energy, $E_{\rm xc}$, with respect to the spin density $\rho^\sigma$.
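To make the structure of eqs. (\[eigen\])–(\[omega\]) concrete, the following Python sketch assembles the response matrix for a toy set of particle-hole pairs in a single spin channel and diagonalizes it. The KS gaps and the coupling matrix are random placeholders, not the output of an actual DFT calculation:

```python
import numpy as np

# Toy construction of the response matrix, Eq. (2) (one spin channel):
#   Omega = diag((eps_p - eps_h)^2) + 2 sqrt(gap) K sqrt(gap),
# followed by the Hermitian eigenproblem Omega F_n = omega_n^2 F_n, Eq. (1).
rng = np.random.default_rng(1)
n_ph = 6                                   # number of particle-hole pairs
gap = rng.uniform(0.5, 1.0, n_ph)          # eps_p - eps_h > 0 (placeholder)
K = rng.normal(0.0, 0.005, (n_ph, n_ph))
K = 0.5 * (K + K.T)                        # coupling matrix is symmetric

s = np.sqrt(gap)
Omega = np.diag(gap**2) + 2.0 * np.outer(s, s) * K
w2, F = np.linalg.eigh(Omega)              # omega_n^2 and eigenvectors F_n
omega = np.sqrt(w2)                        # excitation energies
```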
For the sake of simplicity, the following derivation is for singlet excitations only (extension to triplet excitations is straightforward). We shall therefore drop the spin index $\sigma$. The reduced (singlet) response matrix is given by $$\Omega_{ph,p'h'}=\delta_{pp'}
\delta_{hh'}(\epsilon_p-\epsilon_h)^2 +
4\sqrt{\epsilon_p-\epsilon_h}K_{ph,p'h'}
\sqrt{\epsilon_{p'}-\epsilon_{h'}}\quad,$$ where $$K_{ph,p'h'}=\int d{\bf r}\int d{\bf r^{\prime}}
\psi_p({\bf r})\psi_h({\bf r})f_{\rm H,xc}({\bf r,
r^{\prime}}) \psi_{p'}({\bf r^\prime})\psi_{h'}({\bf
r^\prime})\quad,$$ and $$f_{\rm H,xc}({\bf r,
r^{\prime}})=\frac{1}{|{\bf r -r^{\prime}}|}+\delta({\bf r}-{\bf
r}')\frac{\delta ^2E_{\rm xc}}{\delta\rho({\bf r})^2}\quad.$$ Multiplying eq. (\[eigen\]) by $\langle {\bf F}_n|$ from the left we obtain $$\label{eigen2}
\langle {\bf F}_n|{\bf \Omega} |{\bf F}_n\rangle=\omega_n^2\quad.$$ Differentiation with respect to the nuclear coordinate $R_\alpha \quad (\alpha=1,\dots,3N)$ for a molecule consisting of $N$ atoms yields $$\label{exc-grad}
\omega_n^\alpha=\frac{1}{2\omega_n}\langle {\bf F}_n|{\bf
\Omega}^\alpha |{\bf
F}_n\rangle=\frac{1}{2\omega_n}\sum_{ph}\sum_{p'h'}F_{ph}^{(n)}\Omega_{ph,p'h'}^\alpha
F_{p'h'}^{(n)}\quad,$$ where we have used the short-hand notation $\frac{d\, f}{dR_\alpha}\equiv f^\alpha$; the $F_{ph}^{(n)}$ are the components of the linear response eigenvector ${\bf F}_n$. Carrying out the differentiation of the response matrix, eq. (\[exc-grad\]) becomes $$\begin{aligned}
\label{exc-grad2}\nonumber
\omega_n^\alpha & = &
\frac{1}{\omega_n}\left[\sum_{ph}(F_{ph}^{(n)})^2
(\epsilon_p^\alpha-\epsilon_h^\alpha)(\epsilon_p-\epsilon_h)\right.\\\nonumber
&&+ 2\int d{\bf r}\int d{\bf r}' \Gamma_1({\bf r})f_{\rm
H,xc}({\bf r,r^{\prime}})\Gamma_2({\bf r}')\\
&&\left.+ 2\int d{\bf r} \Gamma_1({\bf r})
\frac{\delta ^3E_{\rm xc}}{\delta\rho({\bf r})^3}
\rho^\alpha({\bf r})\Gamma_1({\bf r})\right]
\quad.\end{aligned}$$ Here we have defined the contracted densities $$\label{gamma1}
\Gamma_1({\bf r})=
\sum_{ph}F_{ph}^{(n)}\sqrt{\epsilon_p-\epsilon_h}\Gamma_{ph}({\bf r})$$ and $$\label{gamma2}
\Gamma_2({\bf r})=
\sum_{ph}F_{ph}^{(n)}\left[
\frac{\epsilon_p^\alpha-\epsilon_h^\alpha}{\sqrt{\epsilon_p-\epsilon_h}}
\Gamma_{ph}({\bf r})
+2\sqrt{\epsilon_p-\epsilon_h}\Gamma_{ph}^\alpha({\bf r})\right]$$ with $$\label{gamma-ph}
\Gamma_{ij}({\bf r})=\psi_i({\bf r})\psi_j({\bf r})$$ In order to compute the excitation energy gradient (eq. (\[exc-grad2\])), we require the nuclear derivatives of KS orbital energies and wavefunctions, $\epsilon_i^\alpha$ and $\psi_i^\alpha$ ($i=p,h$). These can be obtained using an implicit differentiation scheme as follows. We start by writing down the KS equations in matrix form $$\label{ks-eq}
F_{ij}\equiv H_{ij}-\epsilon_i\delta_{ij}=0\quad.$$ For the full differential of $F_{ij}$ we have $$\label{ks-eq-diff}
dF_{ij}=(\frac{\partial H_{ij}}{\partial R_{\alpha}}-\epsilon_i^\alpha\delta_{ij})dR_\alpha + \sum_k \int d{\bf
r}H_{ij}^k\delta\psi_k({\bf r}) =0\quad,$$ where $H_{ij}^k \equiv \frac{\delta H_{ij}}{\delta\psi_k({\bf
r})}$. Division by $dR_\alpha$ yields $$\label{ks-eq-diff2}
\frac{\partial H_{ij}}{\partial R_{\alpha}}-\epsilon_i^\alpha\delta_{ij}
=- \sum_k \int d{\bf r}H_{ij}^k\psi_k^\alpha({\bf r})
= - \sum_k \int d{\bf r}\int d{\bf r}'H_{ij}^k \delta({\bf r}-{\bf r}')\psi_k^\alpha({\bf r}')\quad.$$ On the rhs of eq. (\[ks-eq-diff2\]) we have inserted a delta function, which we now express in terms of KS orbitals $$\label{delta}
\delta({\bf r}-{\bf r}')=\sum_l\psi_l({\bf r})\psi_l({\bf r}')
\quad.$$ Thus eq. (\[ks-eq-diff2\]) becomes $$\label{ks-eq-diff3}
\frac{\partial H_{ij}}{\partial R_{\alpha}}-\epsilon_i^\alpha\delta_{ij}
=- \sum_{kl} H_{ij}^{kl}\psi_k^{\alpha l}
\quad,$$ where $$\begin{aligned}
\label{hklij}
\nonumber
H_{ij}^{kl}&\equiv& \int d{\bf r}H_{ij}^k\psi_l({\bf r})\\
&=&(\delta_{ik} \delta_{lj} + \delta_{jk} \delta_{li})
\epsilon_l +
2 n_k \int d{\bf r}\int d{\bf r'} \Gamma_{kl} ({\bf r})
f_{\rm H, xc}({\bf r}, {\bf r}') \Gamma_{ij} ({\bf r'})
\quad,\end{aligned}$$ and $$\label{nonadiab}
\psi_k^{\alpha l}\equiv \int d{\bf r}\psi_l({\bf r})\psi_k^\alpha({\bf r})
\quad,$$ $n_k$ being the number of electrons occupying orbital $k$.
Exploiting the symmetry of the nonadiabatic coupling matrix elements (\[nonadiab\]), i.e. $\psi_l^{\alpha k}=-\psi_k^{\alpha l}$ and therefore $\psi_l^{\alpha l}=0$, eq. (\[ks-eq-diff3\]) can be rewritten as $$\label{ks-eq-diff4}
\frac{\partial H_{ij}}{\partial R_{\alpha}}=\sum_{l<k}D_{ij}^{lk}\psi_k^{\alpha l}
\quad,(i<j)$$ and for the diagonal terms ($i=j$) $$\label{ks-energy-grad}
\epsilon_i^\alpha=\frac{\partial H_{ii}}{\partial R_{\alpha}}-\sum_{k<l}D_{ii}^{kl}
\psi_k^{\alpha l}
\quad.$$ where $$\label{dmat}
D_{ij}^{lk}=H_{ij}^{lk}-H_{ij}^{kl}=
(\delta_{il}\delta_{kj}+\delta_{ik}\delta_{lj})(\epsilon_k-\epsilon_l)
+2 (n_l-n_k) K_{ij,lk}$$ With the definition (\[dmat\]) eq. (\[ks-eq-diff4\]) becomes $$\label{ks-eq-diff5}
\frac{\partial H_{hp}}{\partial R_{\alpha}}=
\sum_{p'h'}((\epsilon_{p'}-\epsilon_{h'})\delta_{pp'} \delta_{hh'}+4 K_{p'h',ph}) \psi_{h'}^{\alpha p'}$$ for particle-hole states, and $$\label{ks-eq-diff6}
\frac{\partial H_{ij}}{\partial R_{\alpha}}=
4 \sum_{ph}K_{ij,ph} \psi_h^{\alpha p}
+ (\epsilon_i-\epsilon_j)\psi_i^{\alpha j}
\quad,(i<j,\; ij \notin ph)$$ for all remaining combinations. Eq. (\[ks-eq-diff6\]) allows us to express the nonadiabatic coupling elements between non-particle-hole states analytically as $$\label{non-ph-coupl}
\psi_i^{\alpha j}=\frac{
\frac{\partial H_{ij}}{\partial R_{\alpha}}+
4 \sum_{ph} K_{ij, ph}\psi_h^{\alpha p}}{
(\epsilon_i-\epsilon_j)}
\quad,(i<j,\; ij \notin ph)$$ The system of linear equations (\[ks-eq-diff5\]) is first solved for the particle-hole nonadiabatic coupling elements $\psi_h^{\alpha p}$, which are then inserted into eq. (\[non-ph-coupl\]) to obtain the remaining, non-particle-hole, elements. The second term in the numerator of eq. (\[non-ph-coupl\]) is most conveniently evaluated by introducing the contracted density $$\label{gamma3}
\Gamma_3({\bf r})=
\sum_{ph} \psi_p({\bf r})\psi_h({\bf r})\psi_h^{\alpha p}$$ Then $$\label{sum_ph}
\sum_{ph} K_{ij, ph}\psi_h^{\alpha p}=
\int d{\bf r}\int d{\bf r^{\prime}}
\Gamma_3({\bf r})f_{\rm H,xc}({\bf r,
r^{\prime}}) \psi_i({\bf r^\prime})\psi_j({\bf
r^\prime})\equiv K_{ij}'$$ Thus eq. (\[non-ph-coupl\]) becomes $$\label{non-ph-coupl2}
\psi_i^{\alpha j}=\frac{
\frac{\partial H_{ij}}{\partial R_{\alpha}}+
4 K_{ij}'}
{(\epsilon_i-\epsilon_j)}
\quad,(i<j,\; ij \notin ph)$$ Similarly the KS orbital energy gradients can now be obtained from the simplified eq. (\[ks-energy-grad\]) $$\label{ks-energy-grad2}
\epsilon_i^\alpha=\frac{\partial H_{ii}}{\partial R_{\alpha}}+4K_{ii}'
\quad.$$ Finally, the nuclear derivative of the KS orbital wavefunction is recovered by unfolding the nonadiabatic couplings $$\label{orb-grad}
\psi_k^\alpha({\bf r})=\sum_l\psi_l({\bf r})\psi_k^{\alpha l}
\quad.$$
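Numerically, the implicit-differentiation step amounts to one linear solve per nuclear coordinate: eq. (\[ks-eq-diff5\]) has the form $Ax=b$ with $A=\mathrm{diag}(\epsilon_p-\epsilon_h)+4K$ and $b$ the matrix elements $\partial H_{hp}/\partial R_\alpha$. A minimal Python sketch with random placeholder inputs standing in for actual KS/TDDFT quantities:

```python
import numpy as np

# Linear system of Eq. (25):  A x = b,
#   A = diag(eps_p - eps_h) + 4 K,   b_ph = dH_hp / dR_alpha,
# whose solution x are the particle-hole nonadiabatic couplings.
rng = np.random.default_rng(2)
n_ph = 8
gap = rng.uniform(0.5, 1.5, n_ph)          # eps_p - eps_h per pair (placeholder)
K = rng.normal(0.0, 0.01, (n_ph, n_ph))
K = 0.5 * (K + K.T)                        # symmetric coupling matrix
b = rng.normal(0.0, 0.1, n_ph)             # matrix elements dH_hp/dR_alpha

A = np.diag(gap) + 4.0 * K
coupling = np.linalg.solve(A, b)           # one coupling per (p, h) pair
```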
Equations (1)–(31) have been implemented with periodic boundary conditions using a plane wave expansion of the KS MOs at the $\Gamma$ point of the Brillouin zone. By making use of the periodic boundary conditions, the generalized densities $\Gamma_1$, $\Gamma_2$, $\Gamma_3$ and $\Gamma_{ij}$ can be expanded in reciprocal space via the three-dimensional Fourier transform, e.g. $$\Gamma_1({\bf r}) = \sum_{ {\bf G}} \Gamma_1({\bf G}) \exp(i{\bf Gr})$$ where ${\bf G}$ is the vector of the reciprocal lattice. The Hartree part of the matrix element $\int d{\bf r}\int d{\bf r}' \Gamma_1({\bf r})f_{\rm
H,xc}({\bf r,r^{\prime}})\Gamma_2({\bf r}')$ and $\int d{\bf r}\int d{\bf r'} \Gamma_{kl} ({\bf r})
f_{\rm H, xc}({\bf r}, {\bf r}') \Gamma_{ij} ({\bf r'})$ which enter the key equations (\[exc-grad2\]) and (\[hklij\]), respectively, can be readily computed in reciprocal space, e.g. $$\int d{\bf r}\int d{\bf r}' \Gamma_1({\bf r})\frac{1}{|{\bf
r}-{\bf r'}|}\Gamma_2({\bf r}')=
\Omega \sum_{{\bf G}\ne 0} \frac{2 \pi}{G^2} \Gamma_1({\bf G}) \Gamma_2({\bf G})$$ whereas the exchange-correlation parts of the matrix elements are calculated via direct numerical integration over a grid in coordinate space.
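The reciprocal-space evaluation of the Hartree term can be sketched with FFTs as follows. This is an illustrative Python sketch, not the CPMD implementation: the $2\pi/G^2$ prefactor follows the convention quoted above, the conjugate on the second density makes the sum manifestly real for real densities, and the grid size, box length and cosine test densities are hypothetical placeholders:

```python
import numpy as np

# Hartree matrix element  vol * sum_{G != 0} (2 pi / G^2) G1(G) conj(G2(G))
# with Fourier coefficients obtained from a 3D FFT of the real-space density.
def hartree_me(g1, g2, L):
    n = g1.shape[0]
    vol = L**3
    G1 = np.fft.fftn(g1) / g1.size          # coefficients Gamma(G)
    G2 = np.fft.fftn(g2) / g2.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # reciprocal lattice vectors
    KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
    Gsq = KX**2 + KY**2 + KZ**2
    Gsq[0, 0, 0] = np.inf                   # drop the G = 0 term
    return vol * np.sum(2.0 * np.pi / Gsq * G1 * np.conj(G2))

L, n = 6.0, 16                              # 6 a.u. cubic box, 16^3 grid
x = np.arange(n) * L / n
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
g = np.cos(2 * np.pi * X / L) * np.cos(2 * np.pi * Y / L)
E = hartree_me(g, g, L)                     # real and positive for g1 = g2
```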
Test results
============
To illustrate the performance and the convergence behaviour of our method, we have computed nuclear gradients of the first excited state energies of H$_2$ and N$_2$. The calculations were performed using our implementation of the formalism presented here in the CPMD package [@cpmd]. All the systems were treated employing periodic boundary conditions and the molecular orbitals were expanded in plane waves at the $\Gamma$ point of the Brillouin zone. We used Troullier-Martins normconserving pseudopotentials [@tm91]. The excitation energies and nuclear gradients were computed within the adiabatic local density approximation [@pgg96] to the linear response exchange-correlation kernel.
The central idea underlying the present active space approach originates from the observation that excitation energies for a large number of electronic transitions exhibit only a minor dependence on the size of the response matrix (\[omega\]). This is illustrated in Table 1 for the $3\sigma_g \rightarrow 1\pi_g$ transition of N$_2$. A simple two-state HOMO–LUMO response calculation is seen to give an excitation energy which is less than 0.2 eV away from an extended treatment including all 5 occupied and 100 virtual MOs. Generally, such behaviour is to be expected for excitations which can be characterized by only a few low-lying one-electron transitions without higher-lying continuum states mixing in.
In the following, we shall discuss the active space dependence of the excitation energy gradients and the KS orbital energy gradients. The upper panel of Fig. 1 displays the completeness of the active space as a function of the number of virtual KS orbitals included in the space. The integral $$\label{complete}
C(N)= \int d{\bf r} \sum_{i=1}^N \psi_i({\bf r}) \psi_i(0)$$ was used as a measure of completeness of the active space. It becomes unity when the active space of KS orbitals is complete, i.e. when the total number of the KS orbitals (virtual and occupied) equals the number of plane waves used to solve the KS equations. The total number of plane waves is 925 for the 6 a.u. cubic box and 40 Ry plane wave cutoff. With 450 virtual orbitals included the active space is almost complete and the value of the integral (\[complete\]) deviates from unity by approximately $10^{-3}$ which is already comparable with the accuracy of the numerical integration. The lower panel of Fig. 1 shows the absolute deviation of the analytic derivatives from the respective finite difference values for the first singlet excitation energy, $\omega_1$, as well as the HOMO and LUMO KS orbital energies, $\epsilon_1$ and $\epsilon_2$, of H$_2$ at a bond length of 1.0 a.u. as a function of the number of virtual KS orbitals included in the active space. The absolute deviation in analytical gradients vanishes rapidly as the number of virtual orbitals is increased and the errors in the analytical gradients of different states generally show the same patterns in the dependence upon the number of virtual orbitals included in the active space.
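The behaviour of the completeness measure (\[complete\]) can be mimicked in a discrete setting: a complete orthonormal basis resolves a delta function at the origin, so $C$ tends to unity as the truncation is lifted. In the Python sketch below, the columns of a random orthogonal matrix stand in for the KS MOs (a placeholder, not an actual plane-wave basis):

```python
import numpy as np

# Discrete analogue of C(N) = int dr sum_{i<=N} psi_i(r) psi_i(0):
# with a complete orthonormal basis, sum_i psi_i(r) psi_i(0) = delta_{r,0},
# so summing over r gives exactly 1.
rng = np.random.default_rng(3)
M = 400                                         # size of the discrete basis
U, _ = np.linalg.qr(rng.normal(size=(M, M)))    # orthonormal columns

def completeness(N):
    # sum over grid points r of sum_{i<=N} psi_i(r) psi_i(0)
    return float(np.sum(U[:, :N] @ U[0, :N]))

C_half, C_full = completeness(M // 2), completeness(M)   # C_full = 1 exactly
```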
Fig. 2 shows the absolute deviation of the analytical derivative from the respective finite difference value as a function of the size of the active space for the first three KS orbital energies $\epsilon_i$ ($i=1,2,3$) as well as the lowest response matrix eigenvalue $\omega_1$ of the N$_2$ molecule. The errors of the analytical gradients are seen to decrease rapidly as the number of orbitals included in the active space approaches the number of plane wave basis functions (in this case 925 plane waves). For the largest active space the deviations of all energies are of the order of $10^{-3}$ or smaller. At this point, the accuracy of the analytical derivatives is hard to assess because the finite difference reference values are also subject to numerical errors. We have further checked whether the excited state gradients are invariant under translation. The translational contribution to the excitation energy gradient is found to decrease rapidly as the active space increases. Interestingly, the underlying ground state calculation exhibits a significantly larger translational error than the excitation energy.
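The finite-difference cross-checks quoted above follow the standard central-difference recipe, shown here in generic form. A Morse-type model curve stands in for an actual excited state energy surface (the function and parameters are placeholders):

```python
import numpy as np

# Central-difference check of an analytic gradient:
#   f'(R) ~ (f(R + h) - f(R - h)) / (2h),  error O(h^2).
def f(R):
    return (1.0 - np.exp(-1.5 * (R - 2.0)))**2

def df_analytic(R):
    # d/dR of the Morse-type curve above
    return 3.0 * (1.0 - np.exp(-1.5 * (R - 2.0))) * np.exp(-1.5 * (R - 2.0))

R, h = 2.3, 1e-5
fd = (f(R + h) - f(R - h)) / (2.0 * h)
err = abs(fd - df_analytic(R))              # discretization + roundoff error
```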
To test the practical value of our derivatives, we have performed geometry optimizations of $N_2$ in the first excited state ($8\times 5.6 \times 5.6$ a.u. box, 40 Ry plane-wave cutoff, i.e. 600 basis functions). When we include only 100 virtual orbitals in the active space, we obtain a bond length of 2.44 a.u., which deviates by 0.02 a.u. from the value of 2.42 a.u. determined by a series of single point energy calculations. Upon increasing the number of virtual states to 200, the optimized bond length comes out as 2.42 a.u., correct to two decimal places. Our test calculations illustrate how the size of the active space may be adjusted to achieve any desired level of accuracy. For many practical purposes it will be sufficient to work with a reduced active space which is significantly smaller than the total number of basis functions.
The situation is different in the case of molecular dynamics simulations, where the nuclear forces need to be essentially exact derivatives of the potential in order to maintain conservation of energy. Although we have already carried out test excited state MD simulations for diatomic molecules, those results do not offer any additional insight into the general performance and convergence pattern of our method. We have not yet applied the formalism presented here to perform more realistic excited state MD simulations of polyatomic molecules, because our current implementation does not yet make use of more efficient iterative techniques, such as the Lanczos algorithm or related schemes [@ssf98], to solve the response eigenvalue problem (\[eigen\]). These numerical techniques, however, are standard and we plan to exploit them in future implementations. The scope of this article is merely the presentation of the formalism and the analysis of the convergence behaviour with respect to the choice of the active space.
We would like to emphasize, however, that the method described here is capable of providing additional information beyond excited state energy gradients. Fig. 3 shows, for instance, the nonadiabatic coupling strength between the second and third KS orbitals, $\psi_2$ and $\psi_3$, for the H$_2$ molecule as a function of its bond length. The nonadiabatic coupling values obtained from eqn (\[non-ph-coupl2\]) exhibit a singularity at the crossing point between the two KS orbital energies, as one would expect due to the KS energy difference in the denominator. This feature of our formalism may be exploited in future applications of TDDFT beyond the Born-Oppenheimer approximation.
Conclusions
===========
We have developed and implemented a novel, alternative formalism to calculate analytical nuclear forces for TDDFT excited states within a plane wave/pseudopotential framework. In addition to the excited state energy gradient, our method also provides derivatives of the molecular orbital wavefunctions and energies at a small computational overhead compared to the vertical excitation energies. The latter may, for instance, be employed as a powerful tool for understanding and interpreting various chemical phenomena such as molecular structures and reactivities [@yrgsf94]. Fundamental quantities in the present formalism are the nonadiabatic coupling elements in the molecular orbital basis, which are obtained as direct solutions of a system of linear equations. These matrix elements may provide the basis for the calculation of nonadiabatic couplings between the many-electron adiabatic wavefunctions. Trial calculations on the prototypical test molecules H$_2$ and N$_2$ demonstrate that our implementation reproduces the exact gradients when the number of molecular orbitals included in the active space approaches the number of plane wave basis functions. Excited state geometry optimizations of N$_2$ using different active spaces show that for many practical purposes it will be sufficient to work within a relatively small active space. The size of the latter may be tuned to achieve any desired level of accuracy.
[**Acknowledgments**]{}\
We are grateful to J. Hutter and F. Furche for helpful discussions.
Calculation of $\frac{\partial H_{ij}}{\partial R}$ in plane waves
==================================================================
The matrix elements of the derivatives of the KS Hamiltonian are required for the calculation of the KS molecular orbital gradients. Since the kinetic and exchange-correlation energies do not depend directly upon the atomic positions, the matrix element of the derivative reduces to the local and nonlocal pseudopotential and electrostatic interaction contributions: $$\frac{\partial H_{ij}}{\partial {\bf R}_I} =
\frac{\partial }{\partial {\bf R}_I} H^{pp,local}_{ij} +
\frac{\partial }{\partial {\bf R}_I} H^{pp,nonlocal}_{ij} +
\frac{\partial }{\partial {\bf R}_I} H^{es}_{ij}$$ All these matrix elements are computed in reciprocal space. The matrix element of the derivative of the local pseudopotential has the form $$\frac{\partial }{\partial {\bf R}_I} H^{pp,local}_{ij}
=- \Omega \sum_{{\bf G}} i {\bf G} V_{local}^I ({\bf G}) S_I({\bf G}) \Gamma_{ij}({\bf G})$$ where ${\bf R}_I$ denotes the position of nucleus $I$, $S_I({\bf G})= \exp(i {\bf G} \cdot {\bf R}_I)$ is its structure factor, and $\Gamma_{ij}({\bf G})$ is the three-dimensional Fourier transform of the contracted density (\[gamma-ph\]). The nonlocal pseudopotential contribution is $$\frac{\partial }{\partial {\bf R}_I} H^{pp,nonlocal}_{ij}=
\sum_{\mu \nu \in I} \left[ (
\frac{ \partial F^{\mu}_{I,i} }{ \partial {\bf R}_I})^* h^I_{\mu \nu} F^{\nu}_{I,j}
+ (F^{\mu}_{I,i})^* h^I_{\mu \nu} \frac{\partial F^{\nu}_{I,j}}{\partial {\bf R}_I}
\right] \;,$$ where the contribution from the projector derivative $\frac{\partial F^{\mu}_{I,i} }{ \partial {\bf R}_I}$ is calculated in the standard way [@mh00]. The contribution from the electrostatic energy is computed in the form $$\frac{\partial }{\partial {\bf R}_I} H^{es}_{ij}=
- \Omega \sum_{{\bf G} \ne 0} i \frac{{\bf G}}{G^2} \Gamma^*_{ij} ( {\bf G}) n^I_c ({\bf G})
S_I({\bf G})$$ where $n^I_c ({\bf G})$ is the Gaussian core charge distribution of nucleus $I$ in reciprocal space.
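For concreteness, the reciprocal-space sums above can be sketched in a few lines of NumPy. This is an illustrative sketch of ours, not the actual implementation: the array names (`G`, `V_loc`, `Gamma`) are hypothetical stand-ins for the ${\bf G}$-vectors, the local pseudopotential form factor and the contracted density, and the sign convention follows the local-pseudopotential equation as written.

```python
import numpy as np

def dH_local(G, V_loc, Gamma, R_I, Omega):
    """Evaluate -Omega * sum_G [ i G V_local(G) S_I(G) Gamma_ij(G) ].

    G      : (N, 3) array of reciprocal-lattice vectors
    V_loc  : (N,)   local pseudopotential form factor V_local^I(G)
    Gamma  : (N,)   Fourier transform of the contracted density Gamma_ij(G)
    R_I    : (3,)   position of nucleus I
    Omega  : cell volume
    For a Hermitian-symmetric Gamma (Gamma(-G) = Gamma(G)*), the sum is
    real, so only the real part is returned.
    """
    S_I = np.exp(1j * (G @ R_I))                      # structure factor exp(i G.R_I)
    summand = 1j * G * (V_loc * S_I * Gamma)[:, None]
    return np.real(-Omega * summand.sum(axis=0))
```

The electrostatic contribution has the same structure, with the ${\bf G}=0$ term omitted from the sum.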
Third functional derivatives of the exchange-correlation functional
===================================================================
Third derivative of Vosko-Wilk-Nusair correlation
-------------------------------------------------
$$E_c=\int \rho \epsilon_c d{\bf r}\quad,$$
where $$\epsilon_c= A \left\{ \ln{\frac{x^2}{X(x)}} +
\frac{2b}{Q}
\tan^{-1}{\frac{Q}{X'(x)}}
-\frac{bx_0}{X(x_0)}
\left[ \ln{\frac{(x-x_0)^2}{X(x)}}+
\frac{2X'(x_0)}{Q}\tan^{-1}{\frac{Q}{X'(x)}}
\right]\right\}$$ with $x=\sqrt{r_s}$, $r_s=(\frac{3}{4\pi \rho})^{\frac{1}{3}}$, $X(x)=x^2+bx+c$, $X'(x)\equiv\frac{dX}{dx}=2x+b$, $Q=\sqrt{4c-b^2}$, $A=0.0310907$, $b=3.72744$, $c=12.9352$, $x_0=-0.10498$. $$\label{dEdrho}
\frac{\delta E_c}{\delta \rho}=\epsilon_c+\rho\frac{\partial
\epsilon_c}{\partial \rho}$$ where $$\label{dedrho}
\frac{\partial \epsilon_c}{\partial \rho}=\frac{\partial
x}{\partial \rho}\frac{\partial \epsilon_c}{\partial x}=
-\frac{x}{6\rho}\frac{\partial \epsilon_c}{\partial x}$$ and $$\frac{\partial \epsilon_c}{\partial
x}=A\left\{\frac{2}{x}-\frac{X'}{X}-\frac{4b}{X'^2+Q^2}-\frac{bx_0}{X(x_0)}\left[\frac{2}{x-x_0}-\frac{X'}{X}-\frac{4X'(x_0)}{X'^2+Q^2}\right]\right\}$$ $$\label{d2Edrho2}
\frac{\delta^2 E_c}{\delta \rho^2}=2\frac{\partial
\epsilon_c}{\partial \rho}+\rho\frac{\partial^2
\epsilon_c}{\partial \rho^2}$$ with $$\begin{aligned}
\label{d2edrho2}
\frac{\partial^2 \epsilon_c}{\partial \rho^2}&=&\frac{\partial}{\partial
\rho}\left(-\frac{1}{6}\frac{x}{\rho}\frac{\partial
\epsilon_c}{\partial x}\right)
=-\frac{1}{6}\left[ \left(\frac{\partial}{\partial
\rho}\frac{x}{\rho}\right)\frac{\partial \epsilon_c}{\partial x}
+ \frac{x}{\rho}\left(\frac{\partial}{\partial \rho}\frac{\partial
\epsilon_c}{\partial x}\right) \right]\\
&=&\frac{7}{36}\frac{x}{\rho^2}\frac{\partial
\epsilon_c}{\partial x}+\frac{1}{36}\frac{x^2}{\rho^2}\frac{\partial^2
\epsilon_c}{\partial x^2}\end{aligned}$$ Using relation (\[dedrho\]), eqn (\[d2edrho2\]) can be rewritten as $$\label{d2edrho2f}
\frac{\partial^2 \epsilon_c}{\partial \rho^2}=-\frac{7}{6}\frac{1}{\rho}\frac{\partial
\epsilon_c}{\partial \rho}+\frac{1}{36}\frac{x^2}{\rho^2}\frac{\partial^2
\epsilon_c}{\partial x^2}$$ and thus eqn (\[d2Edrho2\]) becomes $$\frac{\delta^2 E_c}{\delta \rho^2}=\frac{5}{6}\frac{\partial
\epsilon_c}{\partial \rho}+\frac{1}{36}\frac{x^2}{\rho}\frac{\partial^2
\epsilon_c}{\partial x^2}$$ where $$\begin{aligned}
\frac{\partial^2 \epsilon_c}{\partial x^2}&=& A\left\{
-\frac{2}{x^2}-\frac{2}{X}+\left(\frac{X'}{X}\right)^2+\frac{16bX'}{(X'^2+Q^2)^2}\right.\\
&&\left.+\frac{bx_0}{X(x_0)}\left[\frac{2}{(x-x_0)^2}+\frac{2}{X}-\left(\frac{X'}{X}\right)^2-\frac{16X'(x_0)X'(x)}{(X'^2+Q^2)^2}\right]\right\}\end{aligned}$$ $$\begin{aligned}
\label{d3Edrho3}
\frac{\delta^3 E_c}{\delta \rho^3}&=&3\frac{\partial^2
\epsilon_c}{\partial \rho^2}+\rho\frac{\partial^3
\epsilon_c}{\partial \rho^3}\\
\label{d3Edrho3f}&=&-\frac{1}{36\rho}\left[35
\frac{\partial\epsilon_c}{\partial\rho}+
\frac{1}{2}\frac{x^2}{\rho}\frac{\partial^2
\epsilon_c}{\partial x^2}+\frac{1}{6}\frac{x^3}{\rho}\frac{\partial^3
\epsilon_c}{\partial x^3}\right]\end{aligned}$$ with $$\begin{aligned}
\frac{\partial^3 \epsilon_c}{\partial x^3}&=&
A\left\{\frac{4}{x^3}+6\frac{X'}{X^2}-2\left(\frac{X'}{X}\right)^3+
\frac{32b}{(X'^2+Q^2)^2} \left[1-\frac{4X'^2}{X'^2+Q^2}\right]\right.
\\
\nonumber &&\left.+\frac{bx_0}{X(x_0)}\left[-\frac{4}{(x-x_0)^3}-6\frac{X'}{X^2}+2
\left(\frac{X'}{X}\right)^3-\frac{32X'(x_0)}{(X'^2+Q^2)^2} \left[
1-\frac{4X'^2}{X'^2+Q^2}\right]\right] \right\}\end{aligned}$$ One arrives at eqn (\[d3Edrho3f\]) by substituting eqn (\[d2edrho2f\]) into eqn (\[d3Edrho3\]). For instance, $$\begin{aligned}
\frac{\partial^3\epsilon_c}{\partial\rho^3}&=&\frac{\partial}{\partial\rho}
\left[-\frac{7}{6}\frac{1}{\rho}\frac{\partial
\epsilon_c}{\partial \rho}+\frac{1}{36}\frac{x^2}{\rho^2}\frac{\partial^2
\epsilon_c}{\partial x^2}\right]\\
&=&-\frac{7}{6}\frac{1}{\rho}\left(\frac{\partial^2
\epsilon_c}{\partial \rho^2}-\frac{1}{\rho}\frac{\partial
\epsilon_c}{\partial
\rho}\right)-\frac{1}{108}\frac{x^2}{\rho^3}\left(
7\frac{\partial^2\epsilon_c}{\partial x^2}+\frac{1}{2}x\frac{\partial^3
\epsilon_c}{\partial x^3}\right)\end{aligned}$$ where we have used $$\frac{\partial}{\partial\rho}\left(\frac{x}{\rho}\right)^2=-\frac{7}{3}\frac{x^2}{\rho^3}$$
Third derivative of Slater exchange
-----------------------------------
$$E_x=\int \rho \epsilon_x d{\bf r}\quad,$$
where $$\epsilon_x=C\alpha\rho^{\frac{1}{3}}\; , \quad
C=-\frac{9}{8}\left(\frac{3}{\pi}\right)^{\frac{1}{3}} \; , \quad \alpha=\frac{2}{3}$$ The third derivatives can be readily computed $$\label{d3Exdrho3}
\frac{\delta^3 E_x}{\delta \rho^3}=-C\alpha^4\rho^{-\alpha-1}$$
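The closed form (\[d3Exdrho3\]) is equally easy to verify against a third-order finite difference of the integrand $\rho\,\epsilon_x = C\alpha\rho^{4/3}$; again, this is a sketch of ours rather than production code:

```python
import numpy as np

C = -9 / 8 * (3 / np.pi)**(1 / 3)
alpha = 2 / 3

def integrand(rho):
    """rho * eps_x = C * alpha * rho^(4/3)."""
    return C * alpha * rho**(4 / 3)

def d3Ex(rho):
    """Closed form: delta^3 E_x / delta rho^3 = -C alpha^4 rho^(-alpha-1)."""
    return -C * alpha**4 * rho**(-alpha - 1)

# third-order central finite difference, truncation error O(h^2)
rho, h = 0.7, 1e-2
num = (integrand(rho + 2 * h) - 2 * integrand(rho + h)
       + 2 * integrand(rho - h) - integrand(rho - 2 * h)) / (2 * h**3)
assert abs(num - d3Ex(rho)) < 1e-3
```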
active space $\Delta E_{\rm KS}$ 1o/1v 5o/1v 5o/5v 5o/50v 5o/100v
-------------- --------------------- ------- ------- ------- -------- ---------
exc. energy 8.39 9.47 9.40 9.40 9.33 9.29
: Dependence of N$_2$ TDLDA excitation energy ($^1\!\Pi_g$, $3\sigma_g \rightarrow 1\pi_g$) in eV using plane waves (p.w.) with a 70 Ry cutoff in a 10 $a_0$ periodic box on the number of occupied (o) and virtual (v) Kohn-Sham orbitals included in the active space. $\Delta E_{\rm KS}$ is the unperturbed Kohn-Sham energy difference.
---
abstract: 'We measure the recent star formation history (SFH) across M31 using optical images taken with the *Hubble Space Telescope* as part of the Panchromatic Hubble Andromeda Treasury (PHAT). We fit the color-magnitude diagrams in 9000 regions that are 100 pc $\times$ 100 pc in projected size, covering a 0.5 square degree area (380 kpc$^2$, deprojected) in the NE quadrant of M31. We show that the SFHs vary significantly on these small spatial scales but that there are also coherent galaxy-wide fluctuations in the SFH back to 500 Myr, most notably in M31’s 10-kpc star-forming ring. We find that the 10-kpc ring is at least 400 Myr old, showing ongoing star formation over the past 500 Myr. This indicates the presence of molecular gas in the ring over at least 2 dynamical times at this radius. We also find that the ring’s position is constant throughout this time, and is stationary at the level of 1 km s$^{-1}$, although there is evidence for broadening of the ring due to diffusion of stars into the disk. Based on existing models of M31’s ring features, the lack of evolution in the ring’s position makes a collisional ring origin highly unlikely. Besides the well-known 10-kpc ring, we observe two other ring-like features. There is an outer ring structure at 15 kpc with concentrated star formation starting 80 Myr ago. The inner ring structure at 5 kpc has a much lower star formation rate (SFR) and therefore lower contrast against the underlying stellar disk. It was most clearly defined 200 Myr ago, but is much more diffuse today. We find that the global SFR has been fairly constant over the last 500 Myr, though it does show a small increase at 50 Myr that is 1.3 times the average SFR over the past 100 Myr. During the last 500 Myr, 60% of all SF occurs in the 10-kpc ring.
Finally, we find that in the past 100 Myr, the average SFR over the PHAT survey area is $0.28\pm0.03$ M$_\odot$ yr$^{-1}$ with an average deprojected intensity of $7.3\times10^{-4}$ M$_\odot$ yr$^{-1}$ kpc$^{-2}$, which yields a total SFR of 0.7 M$_\odot$ yr$^{-1}$ when extrapolated to the entire area of M31’s disk. This SFR is consistent with measurements from broadband estimates.'
author:
- 'Alexia R. Lewis, Andrew E. Dolphin, Julianne J. Dalcanton, Daniel R. Weisz, Benjamin F. Williams, Eric F. Bell, Anil C. Seth, Jacob E. Simones, Evan D. Skillman, Yumi Choi, Morgan Fouesneau, Puragra Guhathakurta, Lent C. Johnson, Jason S. Kalirai, Adam K. Leroy, Antonela Monachesi, Hans-Walter Rix, Andreas Schruba'
bibliography:
- 'sfh.bib'
title: 'The Panchromatic Hubble Andromeda Treasury XI: The Spatially-Resolved Recent Star Formation History of M31'
---
Introduction {#sec:intro}
============
A galaxy’s star formation history (SFH) encodes much of the physics controlling its evolution. It tells us about the evolution of the star formation rate (SFR) throughout the galaxy, the evolution of the mass and metallicity distributions, and the movement of stars within the galaxy. In addition to the global evolution, focusing on the recent SFH ($<$1 Gyr) reveals the relationships between stars and the gas and dust from which they form and can be used to constrain models of star formation (SF) propagation and/or dissolution. For this type of study to be possible, however, we first need a spatially-resolved view of the past SFH with sufficient resolution to probe the relevant physical scales.
Broad SFH constraints can be derived by looking at the properties of the galaxy population across cosmic time. However, examining the integrated properties of distant galaxies provides limited information on how individual galaxies form and evolve. While such integrated light studies benefit from large sample sizes, the final results are limited to conclusions about the SFHs of general galaxy types (e.g., based on bins of mass, luminosity, or color), and cannot say anything definitive about the physics that controls the evolution of individual galaxies.
To appropriately examine the evolution of individual galaxies, it is necessary to study well-resolved nearby galaxies for which there exist large amounts of ancillary data. With such data, one can, for example, analyze the relationship between SF and gas in the spatially-resolved Kennicutt-Schmidt law [e.g., @Kennicutt2007a; @Bigiel2008a], understand the evolution of a galaxy’s gas reservoir [e.g., @Leroy2008a; @Schruba2010a; @Bigiel2011a; @Leroy2013b], and calibrate SFR indicators [e.g., @Calzetti2007a; @Li2013a], among many others. However, these studies have historically been restricted to using only the current SFR, where ‘current’ is the average over some timescale characteristic of a given SFR indicator. These studies, therefore, cannot probe the evolution of these relationships with time or on small physical scales where the SFR indicators break down [e.g., @Leroy2012a].
For a more detailed analysis of the recent SFH, resolved stellar populations are the gold standard. Using individual stars, we can examine the evolution of a galaxy archaeologically by analyzing the color-magnitude diagram (CMD) as a function of position within the galaxy. Embedded within the CMD is the history of SF and metallicity evolution of the galaxy. Although recovering this information is not completely assumption-free (we must make choices about the initial mass function (IMF), stellar models, constancy of SFR within time bins, etc.), it is the only way to make a time-resolved measurement of the SFR. It also has the ability to recover the SFR on much finer physical scales.
CMD fitting has most often been used to probe low mass galaxies [e.g., @Gallart1999a; @Harris2004a; @Cole2007a; @Harris2009a; @Monelli2010a; @Weisz2011a; @Monachesi2012a; @Weisz2014a] because they are the most numerous type of galaxy in the Local Volume, which is one of the few places where galaxies can be sufficiently well resolved. The technique has been used to examine a few individual larger galaxies [e.g., @Wyder1998a; @Hernandez2000a; @Bertelli2001a; @Williams2002a; @Williams2003a; @Brown2006a; @Brown2007a; @Brown2008a; @Williams2009a; @Williams2010a; @Gogarten2010a; @Bernard2012a; @Bernard2015a], but these studies have been limited to either small fields spread across the disk and/or halo or have low spatial resolution such that it is difficult to pick out detailed features present in the galaxy. This technique has never been used to contiguously and uniformly recover the recent SFH of an [L$_\star$ ]{}galaxy with high resolution.
In this paper, we present the first finely spatially-resolved recent SFH of a significant part of the [L$_\star$ ]{}galaxy, Andromeda (M31). M31 has been the target of many photometric [e.g., @Brown2006a; @Barmby2006a; @Gordon2006a; @Dalcanton2012a; @Bernard2012a; @Ford2013a; @Sick2014a] and spectroscopic [e.g., @Ibata2004a; @Guhathakurta2006a; @Kalirai2006a; @Koch2008a; @Gilbert2009a; @Dorman2012a; @Gilbert2014a] studies due to its proximity and similarity to the Milky Way. M31 is the ideal place to examine processes in [L$_\star$ ]{}galaxies; it is close enough to be well resolved into stars with the *Hubble Space Telescope* ([*HST*]{}) but does not face the same obstacles as studies in the Milky Way, which are plagued by uncertainties due to line-of-sight reddening and challenging distance measurements.
We generate maps of the recent ($<$0.5 Gyr) SFH in M31 using resolved stars from recent observations of M31 taken as part of the Panchromatic Hubble Andromeda Treasury [PHAT; @Dalcanton2012a]. While other studies have examined the SFH in M31 using resolved stars [@Williams2003a; @Brown2006a; @Brown2007a; @Brown2008a; @Davidge2012a; @Bernard2012a], none have been as finely resolved as the work we present here.
With the resulting spatially- and temporally-resolved recent SFHs, we can see where stars form within the galaxy and how that SF evolves across the galaxy, whether it’s a single star-forming event or propagation across the disk. The maps also provide clues about the evolution of spatial structure on a variety of different scales; while we know a great deal about small-scale SF within molecular clouds and large-scale SF within the galactic environment, the maps we derive bridge these two scales. Recent SFHs also enable the analysis of fluctuations in the recent SFR. This is especially significant for SF relations, such as the Kennicutt-Schmidt relation [@Schmidt1959a; @Kennicutt1989a], which often assumes a constant SFR over the timescale of the tracer used. While this paper deals only with the SFHs themselves, it is the first in a series of papers on the SF, dust, and ISM contents of M31 on small spatial scales (Lewis et al., in prep).
This paper is organized as follows: We describe the data used in Section \[sec:data\]. In Section \[sec:SFHderive\], we explain the method by which we recover the SFHs in each region. We present the resulting SFH maps in Section \[sec:results\] and discuss features of the maps in Section \[sec:discussion\]. We summarize the results in Section \[sec:conclusion\].
PHAT Data {#sec:data}
=========
We derive the spatially resolved SFHs using photometry from the PHAT survey. PHAT surveyed the northeast quadrant of M31 in six filters, from the near-UV to the near-IR, measuring the properties of 117 million stars. Full details of the survey can be found in @Dalcanton2012a and the photometry is described in @Williams2014a. Figure \[fig:phat\_area\] shows a 24 $\mu$m image [@Gordon2006a] of M31 with the PHAT footprint overlaid. In this paper, we examine the SFH inside the solid red region; we have excluded the region closest to the bulge (black dashed line) where crowding errors are large and the depth of the CMD is shallow, making reliable CMD fitting difficult.
![The PHAT NIR footprint overlaid on a 24 $\mu$m image [@Gordon2006a] of M31. The thick red line denotes the area for which we compute SFHs in this paper. We have left out the regions closest to the bulge along the major axis (dashed black line) because crowding errors are large and CMD depth is shallow, which makes fitting a reliable SFH difficult. The inset shows an example of our binning scheme, shown here for Brick 18, which is outlined in blue. We have included scale bars for the large image as well as for the inset. The bricks are labeled with their numbers just exterior to the brick outline. We use these brick numbers throughout the text. The image is oriented such that north is up and east is to the left.[]{data-label="fig:phat_area"}](fig1.pdf){width="\columnwidth"}
Photometry and Creation of Single-Brick Catalogs {#subsec:catalog}
------------------------------------------------
We use optical photometry (F475W and F814W filters) from the `.gst` catalogs, which were compiled following procedures developed by @Dolphin2000a as described in @Dalcanton2012a. The stars in this catalog have S/N $>$ 4 in both filters and pass stringent goodness-of-fit cuts. These cuts leave stars with the highest quality photometry but with higher incompleteness in more crowded regions. The incompleteness is worse in the inner galaxy (inside 3 kpc), which we exclude from our analysis, and also affects the centers of stellar clusters. This latter limitation is not a problem for this paper because clusters contain only a few percent of the recent SF (Johnson et al., in prep.). Moreover, we are interested in the SFHs of the field stars, and picking out stars in the centers of dense stellar clusters is not necessary.
The survey was split into 23 regions called ‘bricks’; each brick is 1.5 kpc $\times$ 3 kpc in projected size. Odd-numbered bricks extend from the galactic center to the outer disk along the major axis. Even-numbered bricks sit adjacent to them at larger radii along the minor axis. Each brick is subdivided into 18 ‘fields’, each with a (projected) size of 500 pc $\times$ 500 pc. In the optical, adjacent fields overlap to cover the ACS chip gap. The SFHs presented in this paper are derived on a brick-by-brick basis. To create a single brick catalog from the 18 ‘field’ catalogs, we use the smaller IR brick footprint divided into 18 non-overlapping regions which roughly describe the IR field footprints. In each of these fields, we select all of the stars in the corresponding optical catalog that fall within the IR field boundaries. We fill in the chip gap using two adjacent fields, selecting only the stars that fall within the desired portion of the chip gap. Three fields in Brick 11 were not observed; this area is completely covered by overlap from Brick 09 so that there is continuous coverage over the survey area.
The result of this process is the creation of a single brick catalog of all stars detected in the optical filters that fall within the IR footprint, filling in the chip gap and eliminating duplication of stars in the overlap regions. We then grid each brick into 450 approximately equal sized, non-overlapping regions that are 100 pc $\times$ 100 pc in projected size for a total of 9000 regions across the survey area. In Figure \[fig:phat\_area\], B18 is outlined in blue. The inset shows the binning scheme used within each brick.
Artificial Star Tests {#subsec:asts}
---------------------
Even with the resolution of [*HST*]{}, crowding in regions of high stellar density can strongly affect the photometry. Many faint stars cannot be resolved in the dense field of brighter stars. In addition, faint stars that would otherwise not be detected are biased brighter by blending with neighboring stars. This also affects brighter stars, but to a lesser degree.
To characterize photometric completeness and to account for the observational errors that result from crowding, we perform extensive artificial star tests (ASTs). Briefly, we insert fake stars into each image and run the photometry as normal. We then test for recovery of these fake stars and measure the difference between the input and recovered magnitude if a star was detected. We adopt the magnitude at which 50% of the stars are recovered as our limiting magnitude when solving for the SFHs. The completeness limits used for each brick are given in Table \[tab:completeness\]. We refer the reader to @Dalcanton2012a for further details.
We inserted 100,000 artificial stars individually into each ACS field-of-view. We combined the resulting ASTs into brick-wide catalogs in the same way as the photometry. When running the SFHs for a given 100 pc region, we select the fake stars from a 5$\times$5 grid of adjacent regions, such that the ASTs come from a 500$\times$500 pc$^2$ region centered on the region of interest. Each of these larger regions contains the results of 50,000 ASTs.
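Schematically, the 50% limiting magnitude can be extracted from an AST catalog along the following lines; the function and array names are our own invention and the actual pipeline differs in detail:

```python
import numpy as np

def completeness_limit(mag_in, recovered, bin_width=0.1, threshold=0.5):
    """Faintest magnitude down to which the AST recovery fraction stays >= threshold.

    mag_in    : input magnitudes of the inserted artificial stars
    recovered : boolean array, True where the photometry re-detected the star
    """
    edges = np.arange(mag_in.min(), mag_in.max() + bin_width, bin_width)
    idx = np.digitize(mag_in, edges) - 1
    limit = edges[0]
    for k in range(len(edges) - 1):
        in_bin = idx == k
        if not in_bin.any():
            continue
        if recovered[in_bin].mean() >= threshold:
            limit = edges[k + 1]   # bin still complete; push the limit one bin fainter
        else:
            break                  # first incomplete bin: stop
    return limit
```

Walking faintward until the recovered fraction first drops below 50% mirrors the definition quoted above.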
[ccccc]{} 02 & 27.2 & 26.1 & 0.019 & 0.22\
04 & 27.1 & 25.9 & 0.019 & 0.21\
05 & 26.4 & 25.0 & 0.019 & 0.18\
06 & 27.2 & 26.0 & 0.019 & 0.22\
07 & 26.8 & 25.4 & 0.019 & 0.18\
08 & 27.2 & 26.0 & 0.019 & 0.22\
09 & 27.1 & 25.8 & 0.019 & 0.21\
10 & 27.2 & 26.1 & 0.019 & 0.23\
11 & 27.1 & 25.9 & 0.019 & 0.21\
12 & 27.3 & 26.2 & 0.019 & 0.24\
13 & 27.2 & 26.1 & 0.019 & 0.23\
14 & 27.3 & 26.2 & 0.019 & 0.24\
15 & 27.4 & 26.2 & 0.019 & 0.25\
16 & 27.4 & 26.4 & 0.019 & 0.26\
17 & 27.5 & 26.4 & 0.020 & 0.26\
18 & 27.8 & 26.7 & 0.020 & 0.29\
19 & 27.6 & 26.7 & 0.020 & 0.28\
20 & 27.8 & 26.9 & 0.020 & 0.30\
21 & 27.8 & 26.9 & 0.020 & 0.30\
22 & 27.8 & 26.9 & 0.020 & 0.30\
23 & 27.8 & 26.9 & 0.020 & 0.30 \[tab:completeness\]
Derivation of the Star Formation Histories {#sec:SFHderive}
==========================================
We derive SFHs using only the optical data from the F475W and F814W filters. These filters provide the deepest CMDs and the greatest leverage for the recent SFHs of interest in this paper. A more detailed discussion of our filter choice can be found in Appendix \[app:filterchoice\].
Fitting the Star Formation History {#subsec:SFH_fit}
----------------------------------
We derive the SFHs using the CMD fitting code [`MATCH`]{} described in @Dolphin2002a. The user specifies desired ranges in age, metallicity, distance, and extinction. The code also requires a choice of IMF and a binary fraction. It then populates CMDs at each combination of age and metallicity, convolved with photometric errors and completeness as modeled by ASTs. The individual synthetic CMDs are linearly combined to form many possible SFHs. Each synthetic composite CMD is compared with the observed CMD via a Poisson maximum likelihood technique. The synthetic CMD that provides the best fit to the observed CMD is taken as the model SFH that best describes the data. For full implementation details, see @Dolphin2002a.
The fit quality is given by the [`MATCH`]{} *fit* statistic: *fit* = $-2\ln L$, where $L$ is the Poisson maximum likelihood. We estimate the $n\sigma$ confidence intervals as $fit - fit_{\mathrm{min}} \le n^2$; the 1$\sigma$ confidence interval includes all SFHs in a given region with $fit - fit_{\mathrm{min}} \le 1$, the 2$\sigma$ confidence interval includes all SFHs with $fit - fit_{\mathrm{min}} \le 4$, etc.
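For illustration only (this is a toy sketch, not the actual [`MATCH`]{} source), the fit statistic and the $n\sigma$ selection can be written as:

```python
import numpy as np

def poisson_fit_stat(observed, model):
    """fit = -2 ln L for binned Poisson data, up to a model-independent constant.

    observed, model : CMD bin counts (model must be > 0 wherever observed > 0).
    A perfect model (model == observed) gives fit = 0.
    """
    obs = np.asarray(observed, dtype=float)
    mod = np.asarray(model, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(obs > 0, obs * np.log(obs / mod), 0.0)
    return 2.0 * np.sum(mod - obs + term)

def n_sigma_members(fits, n):
    """Indices of all trial SFHs inside the n-sigma interval: fit - fit_min <= n^2."""
    fits = np.asarray(fits, dtype=float)
    return np.where(fits - fits.min() <= n**2)[0]
```

For example, `n_sigma_members([5.0, 5.5, 9.5, 12.0], 2)` keeps the first two entries, exactly the $fit - fit_{\mathrm{min}} \le 4$ rule.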
We use a fixed distance modulus of 24.47 [@McConnachie2005a], a binary fraction of 0.35 with the mass of the secondary drawn from a uniform distribution, and a @Kroupa2001a IMF. We solve the SFH in 34 time bins covering a range in log time (in years) from 6.6 to 10.15 with a resolution of 0.1 dex except for the range of log(time) = 9.9 – 10.15 which we combine into one bin. This time binning scheme was chosen to provide as much time resolution as possible while minimizing computing time. We found that using a finer time binning scheme with a resolution of 0.05 dex increased the computing time by at least a factor of two and only resulted in differences in the SFHs of 1%, which is much smaller than systematic and random uncertainties. We use the Padova [@Marigo2008a] isochrones with updated AGB tracks [@Girardi2010a]. The \[M/H\] range is \[-2.3, 0.1\] with a resolution of 0.1 dex. Because we are limited by the depth of the data, which does not reach the ancient main sequence turnoff, we also require that \[M/H\] only increases with time. We limit the oldest time bin to have \[M/H\] between -2.3 and -0.9 and the youngest time bin to have \[M/H\] between -0.4 and 0.1.
M31 contains significant amounts of dust [e.g., @Walterbos1987a; @Draine2014a; @Dalcanton2015a], which, broadly speaking, can be described by three components: a mid-plane component due to extinction internal to M31 that dominates the older, well-mixed stellar populations, a foreground component due to Milky Way extinction, and a differential component that affects the star-forming regions. In addition to the SFH, [`MATCH`]{} allows two free parameters to describe the dust distribution: a foreground extinction ([[$A_V$]{}]{}) and a differential extinction ([[$dA_V$]{}]{}) which describes the spread in extinction values for the stars in each region. The differential extinction is a step function starting at [[$A_V$]{}]{} with a width given by the value of [[$dA_V$]{}]{}. While foreground extinction is expected to be relatively constant across the galaxy, differential extinction can vary significantly from region to region as they probe very different star-forming and stellar density environments. To determine the best fit to the data, we search extinction space to find the combination of [[$A_V$]{}]{} and [[$dA_V$]{}]{} that best fits the data. However, the distribution of dust is different for young stars and old stars [e.g., @Zaritsky1999a]. The step function differential extinction model provides a good fit to the main sequence (MS) component, but it cannot reproduce the post-MS stellar populations.
We mitigate the effects of dust on our SFHs by simplifying the fitting process such that we exclude the redder portions of the CMD from the fit. Specifically, we have adopted the cuts in @Simones2014a, excluding all stars with F475W-F814W$>$1.25 and F475W$>$21 (shaded regions of the CMDs in Figures \[fig:image\_sfh\_cmd\] and \[fig:pg\]). This prevents contamination from the older populations. We therefore avoid extinction-related complications by excluding the RGB and the red clump, which is often poorly fit with a single step function, and which is not relevant when calculating the recent SFH.
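The CMD cut translates into a simple vector mask. The sketch below assumes the two conditions are applied jointly, i.e. a star is excluded only if it is both red and faint (our reading of the cut); the array names are hypothetical:

```python
import numpy as np

def fit_mask(f475w, f814w):
    """Boolean mask of stars kept in the CMD fit.

    Excludes the red/faint region: F475W - F814W > 1.25 AND F475W > 21
    (both conditions together; assumed reading of the cut).
    """
    f475w = np.asarray(f475w)
    color = f475w - np.asarray(f814w)
    excluded = (color > 1.25) & (f475w > 21.0)
    return ~excluded
```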
We note that age-metallicity degeneracy is an important concern in any kind of SFH work. When modeling composite CMDs, it primarily affects the RGB [e.g., @Gallart2005a], which we do not fit in this analysis. Instead, the vast majority of stars in the CMD are main sequence stars, for which the age-metallicity degeneracy is negligible compared to typical photometric uncertainties. In addition, the metallicity gradient of M31 has been extensively studied and found to be very shallow [e.g., @Blair1982a; @Zaritsky1994a; @Galarza1999a; @Trundle2002a; @Kwitter2012a; @Sanders2012a; @Balick2013a; @Lee2013a; @Pilyugin2014a Gregersen et al. in prep], and the age range we are fitting is small. As a result, we are not concerned that the age-metallicity degeneracy affects the results in this paper.
We compute the SFH in 450 regions per brick for 21 of the 23 bricks in the PHAT survey. To determine the best-fit SFH, we solve multiple SFHs with different combinations of [[$A_V$]{}]{} and [[$dA_V$]{}]{}, where the best-fit SFH is chosen to be the one whose combination of [[$A_V$]{}]{} and [[$dA_V$]{}]{} minimizes the fit value as given by the maximum likelihood technique. Consequently, for each of our regions, we must compute many possible SFHs. We minimize the total number of SFHs that must be run using an optimization scheme to limit the size of ([[$A_V$]{}]{}, [[$dA_V$]{}]{}) space that must be searched, as discussed in Appendix \[app:avdav\]. Based on this optimization, we set a constraint that [[$A_V$]{}]{} + [[$dA_V$]{}]{} $\le$ 2.5. In each region, we run a grid of SFHs in ([[$A_V$]{}]{}, [[$dA_V$]{}]{}) space with a step size of 0.3 over the range of [[$A_V$]{}]{} = \[0.0, 1.0\], also requiring that [[$A_V$]{}]{} + [[$dA_V$]{}]{} $\le$ 2.5. We take the resulting SFH with the best fit, determine the two-sigma range around that best ([[$A_V$]{}]{}, [[$dA_V$]{}]{}) pair, and then sample the grid in that region down to a finer spacing of 0.1 in [[$A_V$]{}]{} and [[$dA_V$]{}]{}. Not only does this ensure that we are finding the global minimum, but it also allows us to account for the uncertainty in extinction in the SFHs by including all fits in the result. In addition, the extinction parameters provide us with an additional method to verify our results, as we discuss in Section \[subsec:SFH\_dust\].
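The coarse-to-fine search over ($A_V$, $dA_V$) can be sketched as follows, with `fit_fn` a hypothetical stand-in for running the full CMD fit at one extinction pair; note that the real procedure refines around the full 2$\sigma$ region rather than a fixed window around the single best point:

```python
import numpy as np

def search_extinction(fit_fn, av_max=1.0, total_max=2.5):
    """Coarse-to-fine grid search for the (A_V, dA_V) pair minimizing fit_fn.

    fit_fn(av, dav) -> fit value (lower is better); here a stand-in for
    solving the SFH at that extinction combination.
    """
    def grid(av_lo, av_hi, step):
        for av in np.arange(av_lo, av_hi + 1e-9, step):
            # enforce the constraint A_V + dA_V <= total_max
            for dav in np.arange(0.0, total_max - av + 1e-9, step):
                yield float(av), float(dav)

    # stage 1: coarse grid with step 0.3 over A_V in [0, av_max]
    best = min(grid(0.0, av_max, 0.3), key=lambda p: fit_fn(*p))
    # stage 2: refine with step 0.1 in a window around the coarse optimum
    lo, hi = max(0.0, best[0] - 0.3), min(av_max, best[0] + 0.3)
    return min(grid(lo, hi, 0.1), key=lambda p: fit_fn(*p))
```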
{width="\textwidth"}
{width="\textwidth"}
As an example, in Figure \[fig:image\_sfh\_cmd\] we plot the CMDs and SFHs for two of the regions found near the 10-kpc ring. We show the location of these regions over-plotted on a *GALEX* FUV image [@Gil-de-Paz2007a]. The top region, in Brick 17, is located just off of a spur of the 10-kpc ring. The region itself shows very little FUV emission, and the resulting SFH is sparse with only moderate SFRs at all times. The lower region falls directly on an OB association [OB 54; @vandenBergh1964a] in Brick 15. As expected, there is elevated, on-going SF in that region over at least the past 100 Myr. The CMDs for each of these regions show a well-defined MS. The consequences of dust are very evident in the region but are most easily seen in the part of the CMD that we do not fit, where the red clump is elongated along the reddening vector.
We note that the Padova isochrones do not include tracks younger than 4 Myr. In the resulting SFHs, we renormalize the SFR in the youngest time bin to reach the present day (0 Myr), conserving the total mass formed in that time bin.
In Figure \[fig:pg\], we show the observed CMD, the best-fit modeled CMD, and the significance of the residuals (the observed CMD minus the modeled CMD with a weighting determined by the variance) for a region in Brick 15. We fit all stars that are outside the blue dashed region, which are primarily MS stars, with a smattering of short-lived blue helium-burning stars. The residuals show no distinct features, which means the model is a good fit.
Extinction {#subsec:SFH_dust}
----------
In this section we discuss how we incorporate IR-based dust maps as a prior in determining our dust parameters. After determining the best-fit SFH in each region as described in Section \[subsec:SFH\_fit\], we conducted additional verification by examining the map of total dust, [[$A_V$]{}]{} + [[$dA_V$]{}]{}.
We found that there were a handful of regions in which the best fit required large amounts of dust, at or very close to the limit of 2.5 mag, in spite of there being no evidence for SF within the last 100 Myr, based on a lack of luminous MS stars and low SFR averaged over the most recent 100 Myr. In these regions, we would expect very little dust because there are no dust-enshrouded young stars. Bad fits result in these cases because there are few stars in the CMD fitting region that can be used to constrain the dust.
To examine this discrepancy, we compared our dust parameters with the total dust mass in each region, as measured by @Draine2014a. We correct the low-SFR, high-dust regions by constructing a prior on [[$A_V$]{}]{} + [[$dA_V$]{}]{} based on the @Draine2014a dust mass maps and multiplying the prior by the likelihood calculated by [`MATCH`]{}. The details of the prior are described in Appendix \[app:prior\]. After applying the prior, we compute new fits for all of the SFHs measured in each region. We use these new fits to go back through [[$A_V$]{}]{}, [[$dA_V$]{}]{} parameter space to be sure that we properly sampled 2-$\sigma$ space around the new best-fits for each region. As a result, we were able to constrain the [`MATCH`]{} dust parameters in the regions of very-low SFR that are not properly anchored in the CMD analysis. Ultimately, because application of the prior primarily affects the very-low SFR regions, this processing did not significantly affect the SFH results of this paper.
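One way to picture the reweighting is as adding a log-prior on the total extinction to each fit's log-likelihood before re-selecting the best ([[$A_V$]{}]{}, [[$dA_V$]{}]{}) pair. The Gaussian prior shape and the numbers below are illustrative assumptions only; the actual prior, built from the @Draine2014a dust-mass maps, is specified in Appendix \[app:prior\]:

```python
def best_fit_with_prior(candidates, mu_total, sigma_total):
    """Pick the (A_V, dA_V, log_like) triple maximizing the
    log-likelihood plus a Gaussian log-prior on A_V + dA_V.
    The Gaussian form is a hypothetical stand-in for the
    dust-map-based prior described in the appendix."""
    def log_post(av, dav, log_like):
        return log_like - 0.5 * ((av + dav - mu_total) / sigma_total) ** 2
    return max(candidates, key=lambda c: log_post(*c))

# A low-SFR region: a slightly better raw likelihood at implausibly
# high total dust loses to a modest-dust fit once the prior is applied.
fits = [(0.1, 2.3, -100.0), (0.2, 0.6, -100.5)]
print(best_fit_with_prior(fits, mu_total=0.8, sigma_total=0.3))
```

This captures the qualitative effect in the text: fits that demand near-maximal dust in regions where the IR maps show little are down-weighted, while well-constrained regions are essentially unaffected.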
Uncertainties {#subsec:SFH_uncertainties}
-------------
There are three significant sources of uncertainty that affect the measured SFHs: random, systematic, and dust. In this section, we discuss each source in turn.
First, we consider random uncertainties. The random uncertainties are dominated by the number of stars on the CMD and are consequently larger for more sparsely populated CMDs. Random uncertainties were calculated using a hybrid Monte Carlo process [@Duane1987a], implemented as described in @Dolphin2013a. The result of this Markov Chain Monte Carlo routine is a sample of 10,000 SFHs with density proportional to the probability density, i.e., the density of samples is highest near the maximum likelihood point. Error bars are calculated by identifying the boundaries of the highest-density region containing 68% of the samples, corresponding to the percentage of a normal distribution falling between the $\pm 1\sigma$ bounds. This procedure provides meaningful uncertainties for time bins in which the best-fitting result indicates little or no star formation.
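A minimal version of the highest-density interval computation is to find the shortest window containing 68% of the Monte Carlo samples of a quantity (e.g., the SFR in one time bin). This is an illustrative sketch, not the @Dolphin2013a implementation:

```python
def hpd_interval(samples, frac=0.68):
    """Shortest interval containing `frac` of the samples: a simple
    stand-in for the highest-density region described in the text.
    Works for unimodal sample distributions."""
    xs = sorted(samples)
    n = len(xs)
    k = max(1, int(round(frac * n)))  # number of samples inside the window
    # Slide a k-sample window and keep the narrowest one.
    best = min(range(n - k + 1), key=lambda i: xs[i + k - 1] - xs[i])
    return xs[best], xs[best + k - 1]
```

Because the interval is the narrowest one containing the required mass (rather than symmetric percentiles), it behaves sensibly even when the best fit piles up at zero star formation, as noted above.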
Next, we consider the systematic uncertainties. Systematic uncertainties reflect deficiencies in the stellar models [i.e., uncertainties due to convection, mass loss, rotation, etc.; @Conroy2013a] such that different groups model these parts of stellar evolution differently, which leads to discrepant results for the same data, depending on the stellar models used [@Gallart2005a; @Aparicio2009a; @Weisz2011a; @Dolphin2012a]. These uncertainties primarily affect older populations that have evolved off the MS. The stellar models of the various groups generally agree quite well for MS stars that dominate our adopted fitting region.
Because we have used the same models across the whole survey, all regions experience similar systematic effects. We have estimated the size of the systematics for a number of regions, covering the range of stellar environments within M31. We computed the systematic uncertainties by running 50 Monte Carlo realizations on the best-fit SFHs as described in @Dolphin2002a. For each run, we shifted the model CMD in $\log(T_{\mathrm{eff}})$ and $M_{\mathrm{bol}}$ by an amount taken from a random draw from a Gaussian with sigma listed in Table \[tab:completeness\]. These shifts are designed to mimic differences in isochrone libraries. We then measured the resulting SFH. The range that contained 68% of the distributions from all 50 realizations is designated as the systematic uncertainties. The relative size of the systematics varies greatly from region to region but is generally less than half the size of the random uncertainties in individual regions and increases at larger lookback time. There is also more variation in the relative size of the uncertainties in the ring features than in the outermost regions where both the stellar density and the SFR are low.
Finally, the variable internal dust content introduces uncertainties. We select the best-fit SFH by choosing the ([[$A_V$]{}]{}, [[$dA_V$]{}]{}) pair that maximizes the likelihood. However, there are regions where the difference between the fit values of the two most likely SFHs is very small (i.e., both SFHs are almost equally likely). We also sample to a minimum spacing of only 0.1 in [[$A_V$]{}]{} and [[$dA_V$]{}]{}, and thus may have determined a slightly different best-fitting SFH than if we had sampled [[$A_V$]{}]{}, [[$dA_V$]{}]{} space more finely. To account for these variations, we calculate our uncertainties due to the dust distribution by combining all SFHs measured in a given region and determining the range that contains 68% of the samples. In this combination, the SFHs are weighted by their fit values such that the best-fit gets full weight and the $n\sigma$ fits are weighted by $e^{-0.5 \, n^2}$ (e.g., SFHs with fit values that are $2\sigma$ from the best-fit value are weighted by $e^{-0.5 \times 2^2}$, or $e^{-2}$).
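The weighting scheme can be sketched as a weighted 68% range over the SFR values from all retained ([[$A_V$]{}]{}, [[$dA_V$]{}]{}) fits, with each fit down-weighted by $e^{-0.5\,n^2}$ according to its $n\sigma$ distance from the best fit. The helper below is a simplified stand-in for the actual combination:

```python
import math

def combined_sigma_range(sfrs, nsig, frac=0.68):
    """Weighted 68% range over SFR values from many (A_V, dA_V) fits.
    `nsig[i]` is fit i's distance in sigma from the best fit, so the
    best fit (n = 0) gets full weight and an n-sigma fit gets
    exp(-0.5 * n**2)."""
    pairs = sorted(zip(sfrs, (math.exp(-0.5 * n * n) for n in nsig)))
    total = sum(w for _, w in pairs)
    lo_cut, hi_cut = 0.5 * (1 - frac) * total, 0.5 * (1 + frac) * total
    acc, lo, hi = 0.0, pairs[0][0], pairs[-1][0]
    for x, w in pairs:
        if acc < lo_cut <= acc + w:
            lo = x
        if acc < hi_cut <= acc + w:
            hi = x
        acc += w
    return lo, hi
```

A fit far from the best-fit value contributes almost nothing: an outlier SFR carried by a 3$\sigma$ fit, for example, is effectively excluded from the quoted range.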
A possible additional source of uncertainty is due to the choice of binary fraction, which is a free parameter in this analysis. We tested the effect of different binary fractions in two of our regions and found that the final fit is not very sensitive to binary fraction. This is because the inclusion of binaries in the model results in a color separation on the CMD that is washed out by dust. The fits and resulting SFHs are consistent with the uncertainties when choosing a binary fraction anywhere between about 0.2 and 0.7. This insensitivity of the SFH to binary fraction is consistent with more extensive tests presented in @Monelli2010a. Uncertainties due to binary fraction will be much smaller than those due to dust.
We note that we do not include the model systematics in our reported uncertainties. While there may be absolute uncertainties in the global SFR due to model uncertainties, the relative region-to-region uncertainties are dominated by the random and dust components.
Choice of Region Size {#subsec:SFH_regionsize}
---------------------
To generate the spatially-resolved SFH of M31, we divide each of the brick-wide catalogs into regions that are approximately 100 pc (projected; 25$''$) on a side, assuming a distance of 783 kpc to M31. Several considerations motivated this choice of region size.
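The quoted angular size follows from small-angle geometry: at the adopted distance of 783 kpc, 100 pc subtends roughly 26 arcsec, consistent with the approximately 25-arcsec regions used here.

```python
import math

def projected_arcsec(size_pc, distance_kpc):
    """Angle subtended by `size_pc` at `distance_kpc`, in arcseconds
    (small-angle approximation)."""
    radians = size_pc / (distance_kpc * 1000.0)
    return math.degrees(radians) * 3600.0

print(round(projected_arcsec(100.0, 783.0), 1))  # ≈ 26.3 arcsec
```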
Regions of this size are of scientific interest because they bridge the gap between existing knowledge of Galactic pc scale SF [e.g., @Bate2009a; @Schruba2010a] and SF in more distant galaxies on kpc scales [e.g., @Leroy2008a]. The resolution is also fine enough to resolve features such as large HII regions and giant molecular clouds.
While a finer grid would also be scientifically interesting, there are a couple of difficulties to consider. The main problem is that smaller regions would have insufficiently populated CMDs, increasing the random uncertainties of the SFHs to unacceptable levels. With our adopted 100 pc bin size, the number of stars within the CMD fitting region ranges from 110 to 3900. In 93% of the regions, we fit more than 500 stars, and in 80% we fit more than 1000 stars. Additionally, the SFHs are computationally expensive to run. For each region, [`MATCH`]{} must be run multiple times to determine the ([[$A_V$]{}]{}, [[$dA_V$]{}]{}) pair that provides the best fit to the data. Moving to 50 pc size regions would have resulted in four times as many regions, significantly increasing the time needed to derive the SFHs. Even at our 100 pc grid size, deriving the SFHs and uncertainties for the entire sample required more than 500,000 CPU hours using XSEDE resources [@Towns2014a].
Our overall technique is similar to that of @Simones2014a, who measured the SFHs of UV-bright regions within Brick 15 of the PHAT survey. Their goal was to convert the SFHs into FUV fluxes and compare with the observed fluxes in each star-forming region. About half of their regions had fewer than 500 stars on the part of the CMD they were fitting, but they still found reasonable agreement between the modeled and observed fluxes. This agreement indicates that the CMD fitting routine is robust, even with a modest number of stars in the part of the CMD occupied by young stars.
Reliability of the SFHs as a Function of Lookback Time {#subsec:SFH_reliability}
------------------------------------------------------
![Artificial CMDs over a range of extinction and photometric depth, assuming a constant SFH, solar metallicity, and three different extinction values; one low: \[[[$A_V$]{}]{}, [[$dA_V$]{}]{}\] = \[0.0,0.0\], one typical of our SFHs: \[[[$A_V$]{}]{}, [[$dA_V$]{}]{}\] = \[0.3,0.8\], and one high: \[[[$A_V$]{}]{}, [[$dA_V$]{}]{}\] = \[0.6,1.8\], shown in the left, middle, and right columns, respectively and labeled in the upper right corner of the top panel of plots. The brick number listed in the left panel of plots indicates the depth of the corresponding row. Within each panel, individual stars are color-coded by age. All stars older than 1 Gyr are dark red. In the top 3 rows, we have over-plotted 630 Myr, solar metallicity isochrones, one attenuated by [[$A_V$]{}]{}, and the second attenuated by [[$A_V$]{}]{} + [[$dA_V$]{}]{}. Individual stars within the CMD will be reddened to somewhere between the two isochrones. Increased extinction reddens some stars off of the MS and into the region we do not fit in the SFH recovery process (shaded gray). The bottom two rows each have isochrones of 4 different ages over-plotted. The ages are listed in the left panel.[]{data-label="fig:cmd_age_iso"}](fig4.pdf){width="\columnwidth"}
We have excluded the red side of the CMD in our SFH recovery process, so consequently, our fits are not sensitive to old stellar populations. The exact age at which sensitivity is lost is set by the oldest stars observable on the MS, which varies with stellar density and dust extinction.
We perform two different tests to examine the sensitivity of our results as a function of time. First, we create artificial CMDs using [`MATCH`]{}. The CMDs are generated with a constant SFH and solar metallicity, while modeling observational uncertainties by using the results of the ASTs in each region. The results are shown in Figure \[fig:cmd\_age\_iso\], where we plot simulated CMDs. The youngest stars are shades of blue, and all stars older than 1 Gyr are red. Each of the top three rows shows a CMD at a different depth, where the top row is the shallowest CMD closest to the bulge (B05), and the third row is the deepest in B23. The brick numbers are indicated in the left panel. The columns display varying amounts of extinction, which is applied to the CMD in the same way we apply it to the model CMDs when recovering our SFHs. Extinction is labeled in the top panel by \[[[$A_V$]{}]{}, [[$dA_V$]{}]{}\]. The left column is un-reddened, the middle column shows the effect of the median extinction found in each of our regions (\[[[$A_V$]{}]{}, [[$dA_V$]{}]{}\] = \[0.3, 0.8\]), and the right column shows the upper limit of extinction allowed in our SFHs. We also check the sensitivity by over-plotting isochrones of a single age (630 Myr) and solar metallicity. The isochrone in the first column has not been reddened. In the second and third columns of the first three rows, we plot two isochrones, one extincted by [[$A_V$]{}]{} and one extincted by [[$A_V$]{}]{} + [[$dA_V$]{}]{}. Individual stars can be extincted to anywhere between the two isochrones.

The variation in depth is large across the survey. The inner regions are very crowded and as a result the photometric depth is quite shallow. In Brick 05, where there is high crowding and moderate levels of extinction, we cannot detect many stars that are older than 500 Myr. As we move further away from the center of the galaxy, crowding and extinction generally decrease and we can probe stars that are 700 – 1000 Myr old (row 3 of Figure \[fig:cmd\_age\_iso\]). The exception, of course, is the 10-kpc ring, where extinction can be high; the CMDs in the second row of Figure \[fig:cmd\_age\_iso\] mimic the conditions found in these regions. At low or mid-range extinction, we still see many stars on the MS that are at least 600 – 700 Myr old. As we increase the extinction, many of these stars are reddened into the portion of the CMD that we do not fit, as can also be seen from the isochrones. Unreddened 630 Myr old stars are easily recovered; however, stars that have the highest level of extinction applied to them are entirely reddened into the neglected portion of the CMD.
In the last two rows, the colored CMD is identical to that in the second row (B15 depth). Here, though, we plot solar metallicity isochrones of different ages: the fourth row has isochrones of 100, 200, 300, and 400 Myr while the bottom row shows isochrones of 500, 630, 800, and 1000 Myr. From these tests, we can see that there will be a number of regions that are reliable back to 1 Gyr, while others that are more extincted may only have a handful of stars that are more than 500 Myr old.
Ultimately, due to the wide variety of stellar environments found across the survey, not all bricks can reliably cover the same age range. The inner regions are much more crowded and have shallower completeness limits than the outer regions; as a result, the age limit for regions closest to the bulge is only 400 Myr. Regions that fall within the ring and spiral arm features are much more extincted than other regions; although the completeness limit is deeper, stars may be reddened into the region of the CMD that we do not fit. These regions have an age limit of no more than 500 Myr. In the outer regions where extinction and crowding are less, the completeness limit is much deeper and the SFHs are reliable back to 700 – 1000 Myr. We therefore choose a conservative age limit of 400 Myr for consistency across the survey. For all scientific analysis, we examine only the most recent 400 Myr. In some of the plots that follow, we include results back to 630 Myr for illustrative purposes, with the caveat that they are only relevant for the outermost regions of the survey.
Results {#sec:results}
=======
We now present the results of the SFH fitting process.
Star Formation Rate Maps {#subsec:sfr}
------------------------
We combine the single best-fit SFHs in each region into maps of [$\Sigma_{\rm SFR}$]{} as a function of time. As an ensemble, they reveal the recent SFH of M31 in the PHAT footprint over the last 630 Myr. This SFH is shown in Figure \[fig:global\_sfh\_map\]. Each panel displays the SFR surface density within the time range specified in the lower left corner. We note that these time bins are not the native resolution of $\Delta(\log t)=0.1$; instead, we have binned the results within the most recent 100 Myr into 25 Myr bins. This helps to expose the continuous structure, especially in the most recent 25 Myr when SF is very low across the galaxy. The regions are colored according to $\Sigma_{\rm SFR}$ in that region during the given time range. To more clearly illuminate the structure, we have placed a lower limit on the SFRs visible within this map such that all regions with $\Sigma_{\rm SFR}$ lower than $10^{-4}$ M$_\odot$ yr$^{-1}$ kpc$^{-2}$ are colored black. We have also over-plotted a blue dashed line on each image to aid the eye in recognizing structural evolution between time bins. There is little large-scale change over the last 500 Myr. The upper left panel shows the *GALEX* FUV image smoothed to the same physical scale as our SFHs, which shows excellent morphological agreement with the most recent time bins. This agreement strengthens confidence in our SFH maps, given that measurements made in 9000 completely independent regions reproduce coherent large-scale structure seen in a well-established SFR tracer. The relationship between the SFR and the FUV flux will be examined in an upcoming paper (Simones et al. in prep.).
![SFR surface density as a function of radius and time. Here we plot the SFR surface density in all regions within a 40 degree arc about the major axis (shaded yellow in the inset) per time bin. We have binned the radius in 0.5 kpc bins. The SFR surface density in each radius bin is the mean of all regions that fall within that bin. The thick black line shows the overall mean. The dashed line shows the fractional RGB residuals (Seth et al., in prep.) with values given on the right axis.[]{data-label="fig:sigma_radius"}](fig6.pdf){width="\columnwidth"}
The SFHs reveal large-scale, long-lasting, coherent structures in the M31 disk. There are three star-forming ring-like features: a modest inner ring at 5 kpc, the well-known 10-kpc ring, and an outer, low-intensity ring at 15 kpc that partially merges with the 10-kpc ring due to a combination of projection effects and a possible warp that is visible in HI [e.g., @Brinks1984a; @Chemin2009a; @Corbelli2010a]. These rings have been observed previously in *Spitzer*/Infrared Array Camera (IRAC) images [@Barmby2006a] and *Spitzer*/Multiband Imaging Photometer (MIPS) images [@Gordon2006a], as well as in atomic [@Brinks1984a] and molecular gas images [@Nieten2006a]. There is also an observed over-density of red giant branch (RGB) stars in the 10-kpc ring [@Dalcanton2012a]. Recovery of these coherent, well-known features is further confirmation that our method is robust.
One of the most remarkable features of this SFH is that the 10-kpc ring is visible and actively forming stars throughout the past 500 Myr. Although SF occurs at a low level in the outer regions at all times, SF in the ring feature at 15 kpc is most concentrated starting at 80 Myr ago. This is likely due to SF in OB 102, which has had an elevated SFR compared to its surroundings over the last 100 Myr [@Williams2003b]. The inner ring feature at $\sim$5 kpc is also visible, with SF that is distinct from the surrounding populations; that feature gains definition 200 Myr ago but has largely dispersed in the last 25 Myr.

We further investigate these trends in Figure \[fig:sigma\_radius\], where we plot the average SFR surface density as a function of radius in each time bin for a subset of the regions along the major axis. We determine the distance to the center of each region and then divide the regions into bins of 0.5 kpc. The three rings at 5 kpc, 10 kpc, and 15 kpc are clearly visible as peaks in [$\Sigma_{\rm SFR}$]{}, providing further indication of ongoing (if low) SF in the ring features over 500 Myr. This trend is further supported by evidence of RGB star residuals in the ring features (Seth et al., in prep.), as shown by the thick dashed line. There is an over-density of RGB stars in the 10-kpc ring and in the inner ring where our oldest time bin shows the highest SF. The plot reveals that not only is the 10-kpc ring long-lived, it has also remained mostly stationary in galactocentric radius over 500 Myr, a result that has implications for the origin of the ring, which we discuss further in Section \[subsec:ring\].
Mass Maps {#subsec:mass}
---------
In Figure \[fig:global\_mass\_map\], we show the evolution of recent mass growth in the galaxy from 630 Myr to the present. The upper left panel shows the 3.6 $\mu$m image, which is a rough estimation of the total mass of a galaxy, smoothed to the same spatial resolution as our SFHs. The next 11 panels show the total mass formed over a given time range (the same as those in Figure \[fig:global\_sfh\_map\]). The upper middle panel shows the mass that formed in the last 25 Myr while the bottom right panel shows all of the mass formed in the last 630 Myr. The time range is written in the lower left of each plot. In the upper right, we have indicated the total mass formed and the average SFR over that time range.
Most of the SF in M31 occurred much earlier than the timescale probed by these SFHs; consequently, the amount of mass accumulated over the last 500 Myr has been minimal. As can be seen in the upper left image in Figure \[fig:global\_mass\_map\], the older stars are distributed fairly uniformly. This means that the structure that we see in our integrated mass maps only appears within the last 2 Gyr (more recent than the ages of stars that dominate the emission at 3.6 $\mu$m). Over the time range of our SFHs, SF has been confined primarily to the rings.
We further examine the impact of SF in the last 500 Myr by looking at the fraction of mass formed during this time. In Figure \[fig:mass\_fraction\_map\], we plot the fraction of mass formed in the last 400 Myr relative to the total mass in each region. We derive the total stellar mass by multiplying the 3.6 $\mu$m image [@Barmby2006a] by a constant mass-to-light ratio of 0.6 [@Meidt2014a]. Only 0.8% of the total stellar mass of this section of the galaxy formed in the last 400 Myr, which is 3% of a Hubble time. On a region-by-region basis, the mass fraction reaches a maximum of 7% and is highest in the ring features, specifically in the two outer features, where most of the gas is located (see Figure \[fig:sfr\_fuv\_hi\]). Consequently, though the SFR in the ring in the last 500 Myr is much higher than that in the rest of the galaxy, it must still be very small relative to past SFRs.
![Map of the fraction of mass formed in the last 400 Myr compared to the total mass as inferred from the 3.6 $\mu$m image [@Barmby2006a]. The amount of mass formed over the time range covered by these SFHs is very small, as expected, with the highest fractions coming in the 10 and 15-kpc ring features. The map is oriented as in Figure \[fig:phat\_area\].[]{data-label="fig:mass_fraction_map"}](fig8.pdf){width="\columnwidth"}
Discussion {#sec:discussion}
==========
We have presented maps of SF and mass evolution in M31 that show rich structure with ongoing SF and evolution. Much of this structure has been observed in maps of M31 in other tracers. As an example, in Figure \[fig:sfr\_fuv\_hi\], we plot our SFR map averaged over the last 100 Myr, next to maps of FUV flux [@Gil-de-Paz2007a] and HI [@Brinks1984a]. All three maps show M31’s ring structures. In addition, we note the good agreement between the 100 Myr SFR map and the map of FUV flux, which is sensitive to SF within the last 100 Myr. These regions of high SFR and flux are also broadly consistent with areas of largest gas content. We now discuss some of the features observed in these maps.
The Recent PHAT SFH {#subsec:totalSFH}
-------------------
The SFHs of the individual regions are extremely diverse. Some of the regions, such as those in the 10-kpc ring, have SF that is ongoing and seemingly long-lived; others, such as those that fall in the outer parts of the galaxy, have generally quiescent SFHs in the past 500 Myr. At the native time resolution of 0.1 dex, the uncertainties on the individual SFHs in each region are significant. By combining all the SFHs, we can reduce the amplitude of the uncertainties in those time bins and obtain a more significant and constrained result for the total SFH.
We derive the total SFH within the PHAT footprint by integrating over all regions in Figure \[fig:global\_sfh\_map\]. In Figure \[fig:global\_sfh\] we show the total SFR per time bin over the survey area. Figure \[fig:global\_sfh\_linear\] shows the same SFH but with linear time bins over 3 different time ranges. The dashed blue line indicates the average SFR over the past 100 Myr. The uncertainties in each of these figures include the random component as well as uncertainties in the [[$A_V$]{}]{}, [[$dA_V$]{}]{} combination of the best-fit SFH of each region. There are also systematic uncertainties due to isochrone mismatch (see Section \[subsec:SFH\_uncertainties\]), which we do not include but are 30% in all time bins. The SFRs and uncertainties in each time bin are listed in Table \[tab:totalSFH\].
![The total SFH of M31 within the PHAT footprint over the last 400 Myr, combining the individual SFHs of each field. The dashed line (blue) shows the average SFR density over the most recent 100 Myr. The red error bars are a combination of the random uncertainties and the uncertainties in [[$A_V$]{}]{} and [[$dA_V$]{}]{}. The time bins are $\Delta(\log t)=0.1$, so the youngest bins cover considerably less linear time than do the older bins.[]{data-label="fig:global_sfh"}](fig10.pdf){width="\columnwidth"}
| log(Age$_{\rm min}$/yr) | log(Age$_{\rm max}$/yr) | SFR (M$_\odot$ yr$^{-1}$) |
|---|---|---|
| 6.60 | 6.70 | 0.09$^{+0.04}_{-0.03}$ |
| 6.70 | 6.80 | 0.18$^{+0.03}_{-0.02}$ |
| 6.80 | 6.90 | 0.24$^{+0.02}_{-0.02}$ |
| 6.90 | 7.00 | 0.16$^{+0.02}_{-0.01}$ |
| 7.00 | 7.10 | 0.22$^{+0.02}_{-0.02}$ |
| 7.10 | 7.20 | 0.26$^{+0.01}_{-0.02}$ |
| 7.20 | 7.30 | 0.25$^{+0.01}_{-0.01}$ |
| 7.30 | 7.40 | 0.28$^{+0.01}_{-0.01}$ |
| 7.40 | 7.50 | 0.29$^{+0.01}_{-0.01}$ |
| 7.50 | 7.60 | 0.34$^{+0.01}_{-0.01}$ |
| 7.60 | 7.70 | 0.33$^{+0.01}_{-0.01}$ |
| 7.70 | 7.80 | 0.30$^{+0.01}_{-0.01}$ |
| 7.80 | 7.90 | 0.29$^{+0.01}_{-0.01}$ |
| 7.90 | 8.00 | 0.28$^{+0.01}_{-0.01}$ |
| 8.00 | 8.10 | 0.26$^{+0.01}_{-0.01}$ |
| 8.10 | 8.20 | 0.28$^{+0.01}_{-0.01}$ |
| 8.20 | 8.30 | 0.32$^{+0.01}_{-0.01}$ |
| 8.30 | 8.40 | 0.34$^{+0.01}_{-0.01}$ |
| 8.40 | 8.50 | 0.40$^{+0.01}_{-0.01}$ |
| 8.50 | 8.60 | 0.39$^{+0.01}_{-0.01}$ |

\[tab:totalSFH\]
We look more closely at the total SFH to determine which regions contribute in each time bin. While the SFR has been generally declining over the last 600 Myr, we see a bump at 50 Myr. Further examination reveals that this peak is not a galaxy-wide feature. To show this, in Figure \[fig:theta\_sfh\], we plot the SFH in 5 angular slices in $\theta$ about the major axis. We see that the slice that falls down the major axis (green line) shows a declining SFR over all periods of time, i.e., there is no 50 Myr bump. The west-most slice (blue line) shows a slight peak at 50 Myr and if we move to the slice that is out along the minor axis (purple line), the SFR shows the most prominent 50 Myr bump. These slices contain bright OB associations and are dominated by ring regions. Consequently, it is likely that the 50 Myr peak in the total SFH is due to specific star-forming regions in the rings.
In Figure \[fig:global\_sfh\], we have over-plotted a dashed line showing the average SFR over the past 100 Myr. The average SFR in the PHAT area over the past 100 Myr is 0.28$\pm$0.03 M$_\odot$ yr$^{-1}$, where the uncertainty is calculated from the tests of systematic uncertainties. To determine the average SFR over the past 100 Myr over the entire disk, we adopt a simple scaling argument. We use both the FUV and 24 $\mu$m maps of M31 and select the D25 ellipse [@Gil-de-Paz2007a] as an aperture. We then measure the total flux within the entire aperture and the flux inside the PHAT footprint (without Bricks 1 or 3). In both cases, the fraction of the total flux inside the PHAT footprint is 40%. Therefore, to scale the PHAT SFR to the entire D25 aperture, we must multiply by a factor of 1/0.4 = 2.5. We multiply the SFR by this scaling factor to yield a total SFR of 0.7 M$_\odot$ yr$^{-1}$. Previous studies have examined the global SFR (all values in M$_\odot$ yr$^{-1}$) using resolved stars [1; @Williams2003a], 8 $\mu$m emission [0.4; @Barmby2006a], FUV [0.6–0.7; @Kang2009a], H$\alpha$ [0.3; @Tabatabaei2010a; 0.44; @Azimlu2011a], and FUV + 24 $\mu$m [0.25; @Ford2013a]. Our result falls within this range, and is most consistent with methods that effectively average over longer timescales.
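The scaling argument above amounts to dividing the footprint SFR by the fraction of the tracer flux that falls inside the footprint. A one-line sketch with the paper's numbers:

```python
def scaled_sfr(sfr_in_footprint, flux_in_footprint, flux_total):
    """Scale an SFR measured inside the PHAT footprint to the full
    D25 aperture, assuming SFR traces the broadband flux."""
    return sfr_in_footprint * (flux_total / flux_in_footprint)

# From the text: 40% of the flux lies inside the footprint,
# and the footprint SFR is 0.28 Msun/yr.
print(round(scaled_sfr(0.28, 0.4, 1.0), 2))  # 0.7 Msun/yr
```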
Although this simple scaling argument neglects possible variations in the SFR from one side of the disk to the other, we do not expect these variations to be large enough to dramatically affect the overall SFR. A complete analysis of the SFR measured via different tracers is beyond the scope of this paper but will be examined more closely in an upcoming paper (Lewis et al., in prep).
Birthrate Parameter
-------------------
We further examine the total SFH by computing the birthrate parameter, $b$, which is defined as the ratio of the current SFR to the past averaged SFR:
$$b = \dfrac{\mathrm{SFR}}{\langle \mathrm{SFR} \rangle}$$
We calculated the past-averaged SFR by converting the 3.6 $\mu$m image to mass, as described in Section \[subsec:mass\], and setting the lifetime of the disk to the Hubble time.
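A per-region sketch of this calculation, taking the disk lifetime to be a Hubble time (13.8 Gyr here, an assumed round value):

```python
def birthrate(recent_mass, recent_span_yr, total_mass, disk_age_yr=13.8e9):
    """b = current SFR / lifetime-averaged SFR for one region.
    `recent_mass` is the mass formed over the last `recent_span_yr`;
    `total_mass` is the region's total stellar mass (e.g., from the
    3.6 micron image and an assumed mass-to-light ratio)."""
    current = recent_mass / recent_span_yr
    past_avg = total_mass / disk_age_yr
    return current / past_avg

# A region that formed 1% of its stellar mass in the last 400 Myr:
print(birthrate(0.01, 400e6, 1.0))  # 0.345, i.e. below the past average
```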
While the birthrate parameter is generally used to classify a galaxy as a whole as being in either a star-bursting or a quiescent phase, in this case we can examine $b$ for our individual regions. This gives us the ability to tell whether regions of current high SF, such as the 10-kpc ring, are currently undergoing unusually high SF relative to the rest of the galaxy (in which case $b$ should be large) or whether those regions have always had higher SFRs (in which case $b \le 1$). In Figure \[fig:birthrate\], we show a map of the birthrate parameter over two timescales: the lifetime of our SFHs (400 Myr) and 100 Myr. The regions are colored according to $b$ on a linear scale. We see over both time periods that the rings at 10 and 15 kpc have higher values of $b$ than the rest of the galaxy. While $b$ is greater than unity over most of these ring features, there are very few regions where it rises above 2, which indicates that while SF is certainly elevated in those regions compared to the overall SFR, it would not be considered ‘bursty’. The exceptions in the right panel of Figure \[fig:birthrate\] are the large OB associations in Bricks 15 and 21, where we know that SF is occurring at an elevated rate. These low values of $b$ are not wholly unexpected, since the 3.6 $\mu$m image is quite smooth (see the upper left panel in Figure \[fig:global\_mass\_map\]) though one can still make out the ring feature.
Integrated over the entire survey area, we find that the birthrate parameter is 0.23 over the last 100 Myr and 0.27 over the last 400 Myr. This further shows that M31 has been forming stars in the last 400 Myr at a much lower rate than it was in the past.
Comparison with Previous Work
-----------------------------
Here we examine the results presented in this paper in relation to earlier studies of the recent SFH of M31.
@Williams2002a measured the SFH using archival *HST* data in various fields throughout the galaxy, some of which overlap with the PHAT survey area. He also recovered a SFH using CMD analysis, with an earlier version of the code used in this work. The overlapping fields in his study fall primarily along the 10-kpc ring, so they probe regions of higher SFR. His ‘INNER’ field, which falls in B13, shows a SFH with a steep decline from 1 Gyr to 200 Myr ago, then a short rise to a peak in a time bin covering the range 40-80 Myr ago, and a decrease to the present day. Two nearby fields, G287 and G11, show a similar morphology in their SFHs, though the peak comes at slightly earlier times, closer to 100 Myr. Other fields fall along the northeast section of the 10-kpc ring and show strong SF at the most recent times. Our SFHs are consistent with this. As we showed in Figure \[fig:theta\_sfh\], the average SFR per time bin can vary significantly in different regions. If we were to define our regions differently, such that they fell only on the regions of strongest SF, we would also see a sharper rise to the present day.
@Williams2003a measured the SFH of M31 using ground-based data [@Massey2006a] and found an increase in the SFR from 25 Myr ago to the present. This increase was seen primarily in the northeast spiral arm. While the work we present here shows a decrease in the SFR from 50 Myr to the present, our last time bin in Figure \[fig:global\_sfh\_map\] from 25 Myr to the present shows a tightening of the ring structure and more localized SF along the northeast arm, suggesting that the difference in resolution between this study and that of @Williams2003a could be the cause of the discrepancy. For example, blends could be measured as upper MS stars, artificially increasing the measurements in the youngest time bins from ground-based data.
@Davidge2012a examined the recent SFH of the entire disk of M31 by comparing $u'$ luminosity functions with those derived from models assuming various SFHs. Their result indicates a factor of 2-3 rise in the SFR during the past 10 Myr, which is in broad agreement with the results of @Williams2003a.
While our results do not show a recent SFR increase, we note that @Williams2003a and @Davidge2012a looked at the SFH over the entire disk of M31, while we focus only on 1/3 of the disk. There is evidence that the SFR is elevated in the outer regions of the southern and western parts of the disk more than in the eastern parts. Our study does not include the southern disk, nor does it reach the outer regions of the western edge of the disk. It is possible that these regions could contribute to an increase in the last 10 – 25 Myr that we do not see in this work.
@Williams2003a also saw movement of SF across the disk, which he interpreted as evidence of propagating density waves from the northern to the southern disk. We do not see this movement. However, our survey does not cover the southern disk and our region sizes are much smaller, giving us the ability to more precisely locate SF. We find no evidence for propagation in the area covered by the PHAT survey. We do, however, agree that SF has been confined primarily to the ring structures over at least the last 500 Myr.
The Mystery of the 10-kpc Ring {#subsec:ring}
------------------------------
### The Ring is Long Lived
The results we present here indicate that the ring of SF at a radius of 10 kpc has persisted for at least 500 Myr. If we assume a rotational velocity of M31 at the 10-kpc ring of 250 km s$^{-1}$ [@Chemin2009a; @Corbelli2010a], then the dynamical time of M31 at the ring is 250 Myr. We have defined the dynamical time as the time to make one full rotation at a given radius: $t_{\rm dyn} = 2\pi \, r / v_r$. This means that SF has continued in the ring for at least two dynamical times. @Williams2003a found that SF has occurred in the ring for the last 250 Myr, and @Davidge2012a found SF in the ring for at least 100 Myr. @Dalcanton2012a showed that there is an over density of stars with ages $>$1 Gyr in the ring as well, further supporting the long-lived nature of this feature.
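The dynamical-time figure quoted above follows directly from $t_{\rm dyn} = 2\pi\,r/v_r$ with unit conversions. A minimal sketch (the function name and constants are ours; the conversion factors are standard):

```python
import math

KM_PER_KPC = 3.0857e16   # kilometres per kiloparsec
SEC_PER_MYR = 3.156e13   # seconds per megayear

def dynamical_time_myr(r_kpc, v_kms):
    """One full rotation at radius r: t_dyn = 2*pi*r / v_r, returned in Myr."""
    t_sec = 2.0 * math.pi * r_kpc * KM_PER_KPC / v_kms
    return t_sec / SEC_PER_MYR

t_dyn = dynamical_time_myr(10.0, 250.0)  # ~246 Myr, i.e. the ~250 Myr quoted
```

With these numbers, 500 Myr of continuous ring SF indeed corresponds to roughly two full rotations at 10 kpc.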
Looking at just the regions that fall inside the 10-kpc ring, we see that over the last 600 Myr, SF in the ring drives the overall SFR (see Figure \[fig:ring\_noring\_sfh\]). Within the last 400 Myr, the SFR in all regions that do not lie in the ring has been mostly constant, while the SFR inside the ring has varied substantially. This is also clearly visible in Figure \[fig:global\_sfh\_map\], where the ring is very prominent due to a lack of SF occurring in other parts of the galaxy. We find that in the last 400 Myr, 60% of all SF occurs in the 10-kpc ring feature.
![SFH showing the average SFR in each time bin for all regions within the 10-kpc ring (red) and all other regions (black). Error bars include random uncertainties primarily due to the number of stars on the CMD as well as uncertainties as a result of the search in [[$A_V$]{}]{} and [[$dA_V$]{}]{} for the best-fit SFH. Comparison with Figure \[fig:global\_sfh\] shows that the total SFH is driven by SF within the ring. SF outside of the ring feature was more important before 500 Myr ago. It is important to note that due to the high inclination of M31, our definition of the 10-kpc ring, shown in the inset, may also include some of the 15-kpc ring (if it is indeed a distinct feature). Nevertheless, this still confirms our result that SF occurs primarily in these ring features.[]{data-label="fig:ring_noring_sfh"}](fig14.pdf){width="\columnwidth"}
The long-lived nature of the ring is also visible in Figure \[fig:sigma\_radius\], where SF is elevated relative to the areas just outside of the ring at all times. Only in the most recent 25 Myr do we see a notable decrease in SFR in the 10-kpc ring. This plot only includes regions that fall along the major axis so as to minimize uncertainties in the deprojected distance to each region. As a result, this slice does not include many of the higher-SFR OB associations, such as OB 54 and those that fall along the northeast portion of the 10-kpc ring. These would likely act to increase the SFR over the most recent 50 Myr.
Not only is the ring long-lived, but it has also been mostly stationary. From Figure \[fig:sigma\_radius\], we can see that the 10-kpc ring has remained centered at about the same location, moving no more than 0.5 kpc over 500 Myr. This translates into a motion of at most about 1 km s$^{-1}$.
### Dispersion of the Ring {#subsec:diffusion}
Our maps of SF show that the 10-kpc ring is clearly broader at older ages than at younger ages. This suggests two possibilities: 1) Over time, SF in the ring has grown more concentrated, occurring in a much narrower strip, likely following the density of gas within the ring features or 2) SF has always occurred in the center of the ring and we see a broadening of the ring as the stars disperse. The second option is more probable, since we know qualitatively that the majority of stars form in clusters and that those clusters eventually disperse and the individual stars diffuse into the surrounding environment, becoming part of the larger galactic background [e.g., @Harris1999a; @Bastian2009a].
The simple kinematical argument is that if stars are born with an average random velocity of 10 km s$^{-1}$ and that motion is directed radially outward, then over the course of 100 Myr, those stars should move 1 kpc. For the results we present here, this would result in significant motion over the course of 500 Myr (5 kpc). It would also suggest a strong dispersion of the ring features over relatively short periods of time, which we do not see. In reality, these timescales are much longer. @Bastian2011a showed that the evolution of the spatial structure of stellar populations in nearby dwarf galaxies occurs on many different timescales, setting *lower* limits of tens to hundreds of Myr, longer than would be expected based on simple arguments.
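The kinematical arithmetic in this section (10 km s$^{-1}$ over 100 Myr $\approx$ 1 kpc, and 0.5 kpc over 500 Myr $\approx$ 1 km s$^{-1}$) reduces to one unit conversion. A small sketch with hypothetical helper names:

```python
KM_PER_KPC = 3.0857e16   # kilometres per kiloparsec
SEC_PER_MYR = 3.156e13   # seconds per megayear

def drift_kpc(v_kms, t_myr):
    """Distance in kpc covered at a constant speed v_kms over t_myr."""
    return v_kms * t_myr * SEC_PER_MYR / KM_PER_KPC

def mean_speed_kms(d_kpc, t_myr):
    """Average speed in km/s implied by moving d_kpc in t_myr."""
    return d_kpc * KM_PER_KPC / (t_myr * SEC_PER_MYR)

# 10 km/s sustained radially for 100 Myr covers ~1 kpc (so ~5 kpc in 500 Myr);
# conversely, the ring's observed <0.5 kpc shift over 500 Myr implies ~1 km/s.
drift = drift_kpc(10.0, 100.0)
ring_speed = mean_speed_kms(0.5, 500.0)
```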
A detailed quantitative discussion of the evolution of spatial structure will be the subject of a future paper.
### What is the Origin of the Ring?
The 10-kpc ring is the most prominent feature of M31, visible in wave bands from the UV through the radio [e.g., @Brinks1984a; @Barmby2006a; @Beaton2007a; @Block2006a; @Gordon2006a; @Massey2006a; @Nieten2006a; @Gil-de-Paz2007a; @Braun2009a; @Chemin2009a; @Fritz2012a]. Its ubiquity naturally leads to the question of its origin. Surprisingly, there have been few studies devoted to this question; its genesis remains largely uncertain.
Those studies that have attempted to answer the question have invoked one of two likely formation scenarios for a ring: (1) resonance ring due to a central bar, or (2) collisional ring due to the passage of a satellite galaxy through the disk of M31.
Bars can produce inner, outer, and/or nuclear rings in galaxies [@Buta1986a]. M31 shows evidence of a bar, as suggested by its boxy bulge seen in infrared imaging [@Beaton2007a], though it is likely not very strong [@Athanassoula2006a]. The bar length has been estimated to be 4 – 5 kpc [@Athanassoula2006a; @Beaton2007a]. If the 10-kpc ring is due to a bar, it is unlikely to be either a nuclear ring, which generally occurs within the bar [e.g., @Buta1996a] or an inner ring, originating at the end of the bar [@Schwarz1984a]. Instead, if the ring is indeed due to a rotating bar, it must be an outer ring, which occurs near the outer Lindblad resonance [OLR, e.g., @Schwarz1981a; @Buta1995a]. It has also been shown that in barred galaxies with rings, the ratio of the outer ring diameter to the bar diameter is 2 [@Kormendy1979a; @Athanassoula2009a]. By comparing observational data of M31 with N-body simulations, @Athanassoula2006a find that the OLR is at $45 \pm 4$ arcmin (9 – 10 kpc at the distance of M31). The outer ring radius to bar length ratio and the location of the OLR in M31 suggest that the 10-kpc ring could indeed be due to a bar.
Rings in galaxies can also be the result of collisions with satellite galaxies [e.g., @Lynds1976a]. In the case of M31, that satellite galaxy is likely M32. A handful of studies have attempted to model the effect of M32 crashing through the disk of M31 [@Gordon2006a; @Block2006a; @Dierickx2014a] and have found that such a collision could produce a ring with properties similar to that observed, including ring centers offset from the center of the galaxy, and the hole seen in the 10-kpc ring. These studies suggest that the impact event happened 20, 210, and 800 Myr ago, respectively.
There are inconsistencies with both scenarios. It is much more statistically likely that M31 is a barred galaxy with resonance rings simply because there are many barred galaxies that show this kind of ring structure. However, the offset centers of the 5 and 10-kpc rings [@Block2006a] and the hole in the 10-kpc ring are difficult to explain if the rings are all due to a bar. On the other hand, the collisional simulations suffer from uncertainties in the mass and orbit of M32, both of which strongly affect the collision and resulting morphological features. In addition, if a collision occurred, we would expect to see evidence of ring expansion. We note, however, that it is possible that existing spiral structure and bar disturbances could interact strongly with these much weaker collisional rings generated by a companion, preventing the expected classical propagation [@Struck2010a].
Our results indicate that the ring has been a distinct feature, actively forming stars for at least 500 Myr. @Davidge2012a found that the last major disturbance to the disk occurred at least 500 Myr ago. @Bernard2015a and @Williams2015a both find the last major event in M31’s history to be 2 – 3 Gyr ago. These are longer than the timescales suggested by @Gordon2006a and @Block2006a for a collision with M32. We cannot rule out a collision that happened 800 Myr ago [@Dierickx2014a] much less 2 – 3 Gyr ago. However, the velocity deviations suggested by collisional ring models do not match the observations presented here. @Block2006a expect the rings to expand radially at 7 – 10 km s$^{-1}$ and @Dierickx2014a find radial motions of at least 20 km s$^{-1}$ in their models. The systems described by @Struck2010a have radial motions of at least 10 – 20 km s$^{-1}$, though he does note that M31 seems to be an exception to the general class of colliding ring galaxies.
Consequently, based on existing models of the origin of M31’s ring features, we can rule out a purely collisional origin because of the additional time-resolved data we present in this paper. A bar-induced ring is more likely to be long-lived and resonance rings from bars are fairly common. If the ring is a resonance ring, it would make M31 a much more average galaxy than one that had recently suffered a dramatic collision. On the other hand, perhaps the morphological features we see in M31 result from a combination of the two scenarios. We note that no existing studies attempt to explain the existence of the outer ring at 15 kpc.
If we wish to fully understand the origin of the 10-kpc ring (or either of the other ring features) in M31, it is clear that more work is needed. One necessary step is to run simulations that follow the creation of the ring and its evolution over 1 Gyr or more. These simulations should account for multiple possible formation scenarios: collision with a satellite, resonance due to a bar, and a combination of the two. @Athanassoula2006a provide stellar velocity field predictions that need to be tested. Finally, obtaining deep photometry that goes below the oldest main sequence turnoff over a large area around the ring in combination with a more complex dust model would allow us to determine the SFH in M31 back to more than 5 Gyr ago, providing additional observational constraints on the lifetime of the ring.
Conclusions {#sec:conclusion}
===========
We have measured the SFH of 1/3 of the star-forming disk of M31 from *HST* images as part of the PHAT survey. We divided the survey area into 9000 approximately equal-sized regions and determined the SFH, as well as the extinction distribution, independently in each region.
We have found that SF in M31 has been largely confined to three ring features, including the well-known 10-kpc ring, over the past 500 Myr. The 15-kpc ring becomes most prominent and structured starting at about 80 Myr ago, with SF increasing to the present day. In the 5-kpc ring, SF reached a peak about 100 Myr ago and the ring feature has since begun to dissolve.
The 10-kpc ring is long-lived and stationary, producing stars over at least the past 400 Myr, which is about two dynamical times at that radius. Over this period of time, SF has been elevated relative to the surrounding regions. As a result, the ring drives the overall SFH. The shape of the total SFH follows that of the SFH of just regions that fall broadly in the 10-kpc ring feature. This feature has also shown little or no significant propagation in our survey area over the lifetime of these SFHs.
We have shown that the total mass formed over the last 400 Myr is less than 10% of the total mass of the galaxy, as measured from 3.6 $\mu$m images. M31 is a fairly quiescent galaxy today. Aside from a handful of bright regions that coincide with OB associations, the total SFR at the present day is very low. We have shown with maps of the birthrate parameter, $b$, that only a handful of regions have $b>2$ over the last 100 Myr or 400 Myr. In most cases, the current average SFR is very low relative to the past average. Globally, the galaxy has $b=0.23$ over the last 100 Myr and $b=0.27$ over the last 400 Myr, further confirming that M31 is a fairly quiescent galaxy today.
Finally, aside from a possible increase in SF 50 Myr ago, the SFH over the last 400 Myr has been relatively constant. We have computed the average SFR over the past 100 Myr to be $0.28\pm0.03$ $M_\odot$ yr$^{-1}$ within the PHAT footprint. Extrapolating to the entire galaxy, we find a global SFR over the last 100 Myr of 0.7 $M_\odot$ yr$^{-1}$, which is consistent with the range found in previous studies.
The authors thank the anonymous referee for comments that improved the clarity of this paper. We also thank Pauline Barmby for providing us with the *Spitzer* 3.6 $\mu$m image. Support for this work was provided by NASA through grant number HST-GO-12055 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. D.R.W. is supported by NASA through Hubble Fellowship grant HST-HF-51331.01 awarded by the Space Telescope Science Institute. This research made extensive use of NASA's Astrophysics Data System Bibliographic Services. The SFHs were run on the Stampede supercomputing system at TACC/UT Austin, funded by NSF award OCI-1134872. In large part, analysis and plots presented in this paper utilized iPython [@Perez2007a] and packages from Astropy, NumPy, SciPy, and Matplotlib [@Astropy2013a; @Oliphant2007a; @Hunter2007a].
Filter Choice {#app:filterchoice}
=============
We have chosen to use only the optical filters (F475W and F814W) for this analysis. In the PHAT survey, data were also taken in two NUV filters (F275W and F336W) and two NIR filters (F110W and F160W). The optical filters provide the greatest leverage for probing the recent SFH because they have by far the deepest CMDs of MS stars, which allows us to probe both further down the luminosity function and further back in lookback time.
In principle, the addition of the UV filters might allow for greater constraints on the recent SFH; however, in practice, the PHAT data do not support this. Only the brightest main sequence stars have measurements in the two UV filters, which significantly reduces the age range over which the SFH can be derived. As such, including the UV filters will not improve the analysis presented in this paper.
The NIR data are more complex. They are significantly shallower than the optical data. The mean depth in F160W is an absolute magnitude of 0, which is 2 magnitudes shallower than in the optical. At this depth, we lose the main sequence for populations older than about 200 Myr. In addition, there are not very many stars that fall on the main sequence to begin with, further reducing the usefulness of the NIR for analysis of the recent SFH. Finally, the NIR models of young stars are much more uncertain than the optical models, as there has been less verification of the models in this regime.
In Figure \[fig:2camiso\], we plot the color-magnitude diagrams of a bright, highly star-forming region in B15. The left panel shows the optical data and the right panel shows the NIR data. On each panel, we have plotted three sets of isochrones at 25, 100, and 400 Myr at two different metallicities, solar and slightly sub-solar (\[M/H\] = –0.3). M31 has a very flat metallicity gradient that is approximately solar, so these values are reasonable for the disk. The stars mark the main sequence turnoff points at each age and metallicity. The optical data are deep, and we have leverage back to at least 400 Myr. On the other hand, the IR data are much shallower and do not allow reliable SFH measurements beyond 200 Myr. Consequently, we have chosen to use just the optical data in this analysis of the recent SFH.
Determining the Best-Fit SFH by Searching [[$A_V$]{}]{}, [[$dA_V$]{}]{} Space {#app:avdav}
=============================================================================
The two main free parameters in the fitting process are those that describe the dust model: an extinction applied evenly to all stars, [[$A_V$]{}]{}, and a differential extinction, [[$dA_V$]{}]{}, that is applied in addition to [[$A_V$]{}]{}. The differential extinction is applied as a distribution to all stars, rather than uniformly, so that each star receives some additional extinction between 0 and [[$dA_V$]{}]{}. As a result, the total extinction applied to each star falls in the range \[[[$A_V$]{}]{}, [[$A_V$]{}]{} + [[$dA_V$]{}]{}\].
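The two-parameter dust model can be sketched as below. This is an illustration of the model as described in the text, not the [`MATCH`]{} implementation; we assume here that the differential component is drawn uniformly from \[0, [[$dA_V$]{}]{}\], and the function name is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_extinction(n_stars, a_v, d_a_v):
    """Per-star V-band extinction: a uniform foreground component A_V plus a
    differential component drawn from [0, dA_V] (uniform draw assumed here),
    so every star lands in [A_V, A_V + dA_V]."""
    return a_v + rng.uniform(0.0, d_a_v, size=n_stars)

ext = total_extinction(10000, a_v=0.5, d_a_v=1.5)
# all values fall between 0.5 and 2.0 mag
```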
Using B15, we sampled [[$A_V$]{}]{}, [[$dA_V$]{}]{} space on the range [[$A_V$]{}]{} = \[0.0, 1.0\] and [[$dA_V$]{}]{} = \[0.0, 6.0\] with step sizes of 0.25 in both parameters. The results for two of these regions are shown in the top panel of Figure \[fig:b15\_avdav\_trough\]. We found that while some regions had a well-defined best-fit SFH, other regions showed “troughs” of ([[$A_V$]{}]{}, [[$dA_V$]{}]{}) pairs going to arbitrarily high [[$dA_V$]{}]{} values with fits that were within 1-$\sigma$ of the best fit, defined as $fit_{\rm best}$ + 1. The resulting SFHs are indistinguishable within the error bars. These “troughs” are an artifact of the exclusion area we used in the SFH solution, which removed the red clump and RGB from the fitting process. At higher extinction, stars are artificially pushed into the excluded region on the CMD. To compensate for the reduced number of stars in a given CMD bin, the fitted SFR increases. Examination of the regions with these troughs shows that they tend to start at [[$A_V$]{}]{} + [[$dA_V$]{}]{} = 2.5. As a result, we limit our total extinction to 2.5 magnitudes.
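The constrained grid search described above can be sketched as follows. This is a schematic, not the actual fitting code: `fit_fn` stands in for the full CMD fit, and the toy fit surface at the end is our own assumption.

```python
import numpy as np

def grid_search(fit_fn, max_total=2.5, step=0.25, av_max=1.0, dav_max=6.0):
    """Evaluate fit_fn(av, dav) on a grid, skipping pairs with
    av + dav > max_total; return the best pair and all pairs within
    1-sigma of the best fit (i.e. fit <= fit_best + 1)."""
    results = {}
    for av in np.arange(0.0, av_max + step / 2, step):
        for dav in np.arange(0.0, dav_max + step / 2, step):
            if av + dav > max_total:
                continue
            results[(av, dav)] = fit_fn(av, dav)
    best_pair = min(results, key=results.get)
    best = results[best_pair]
    one_sigma = [p for p, f in results.items() if f <= best + 1.0]
    return best_pair, one_sigma

# Toy fit surface with a sharp minimum at (A_V, dA_V) = (0.5, 1.0):
pair, sig = grid_search(lambda av, dav: 40 * ((av - 0.5)**2 + (dav - 1.0)**2))
```

A "trough" region corresponds to the case where `one_sigma` contains a long run of pairs at ever-higher [[$dA_V$]{}]{}; the `max_total` cut removes those pairs from consideration.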
![Exploration of [[$A_V$]{}]{}, [[$dA_V$]{}]{} parameter space in two regions of B15. The top panels show examples of a grid search in [[$A_V$]{}]{}, [[$dA_V$]{}]{} space. One region has a very well defined best fit. In the second region, a “trough” of low fit values occurs: if [[$dA_V$]{}]{} is increased, the fit value does not change, but rather remains very close to the best fit. In the bottom panels, we show the results of the search over ([[$A_V$]{}]{}, [[$dA_V$]{}]{}) space for the same two regions with the requirement that [[$A_V$]{}]{} + [[$dA_V$]{}]{} $\le$ 2.5. Once again, the region on the left remains well-defined in [[$A_V$]{}]{}, [[$dA_V$]{}]{} space. In the second region, the constraint on extinction has forced a best fit. The lines show the 1-, 2-, and 3-$\sigma$ contours.[]{data-label="fig:b15_avdav_trough"}](fig16a.pdf "fig:"){width="50.00000%"} ![](fig16b.pdf "fig:"){width="50.00000%"}\
![](fig16c.pdf "fig:"){width="50.00000%"} ![](fig16d.pdf "fig:"){width="50.00000%"}
Applying a Dust Prior {#app:prior}
=====================
In some of the very low-SFR regions at the survey edges and between the ring features, we found that [`MATCH`]{} assigned large extinctions with very poor constraints. Given the low SFR and the small number of bright, young MS stars at these locations, there is no physical explanation for large extinction. To investigate this issue, we use the @Draine2014a dust mass maps to examine the relationship between the [`MATCH`]{} extinction and the total dust mass in those regions. On the left side of Figure \[fig:dust\_compare\], we plot the ratio of the [`MATCH`]{} total dust ([[$A_V$]{}]{} + [[$dA_V$]{}]{}) to the dust mass surface density as a function of SFR (averaged over the last 100 Myr) and number of MS stars. Each circle represents a single region, color-coded by the number of MS stars in that region, where we define MS stars as those with F475W $-$ F814W $<$ 1 and F475W $<$ 26. The bricks closer to the bulge have lower completeness, so the value of 26 was chosen to accommodate those bricks with 50% completeness just fainter than 26th magnitude. On the right axis, we converted the dust mass surface density into extinction according to Equation 7 in @Draine2014a: $$A_V = 0.74 \left( \dfrac{\Sigma_{Md}}{10^5 \; M_\odot \; \textrm{kpc}^{-2}} \right) \textrm{mag}.
\label{eq:avconvert}$$
We have made this conversion for ease of reader interpretation, though analysis of this result is beyond the scope of this paper. For the sake of this exercise, only the relative numbers are important.
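The conversion in Equation \[eq:avconvert\] is a single linear scaling. A one-line helper (the function name is ours; the coefficient is Equation 7 of @Draine2014a as quoted in the text):

```python
def dust_mass_to_av(sigma_md):
    """A_V in mag from dust mass surface density in M_sun / kpc^2,
    following Eq. 7 of Draine et al. (2014): A_V = 0.74 * (Sigma_Md / 1e5)."""
    return 0.74 * (sigma_md / 1e5)

# e.g. a surface density of 1e5 M_sun/kpc^2 corresponds to A_V = 0.74 mag
```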
If there were a perfect correlation between measured dust mass and the [`MATCH`]{} dust parameters, we would expect to see a straight horizontal line in this figure such that the ratio of the normalized [`MATCH`]{} dust to the normalized dust mass would be constant across all SFRs, with some scatter expected at the low-SFR end. Instead, we see that at very low SFRs there is wide scatter in this ratio, and, as may be expected, these regions also contain lower numbers of MS stars. If there are very few or no stars on the upper MS, it is more difficult to anchor the SFH. In order to fit the broadened lower MS, the amount of differential reddening is increased. Figure \[fig:dust\_compare\] shows that the ratio between [`MATCH`]{} dust model parameters and the dust mass is approximately constant in regions with high SFR and/or many MS stars. This implies that we should be able to fit a straight line to a plot of [`MATCH`]{} dust parameters vs. dust mass in these regions with the most robust fits. We therefore use this relation to set a prior on the total extinction in each region, adjusting the low-SFR, high-dust regions toward more physical extinction parameters.
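The MS-star selection used above (F475W $-$ F814W $<$ 1 and F475W $<$ 26) is a simple color-magnitude cut. A minimal sketch (the function name is ours; the cuts are those stated in the text):

```python
import numpy as np

def select_ms_stars(f475w, f814w, mag_cut=26.0, color_cut=1.0):
    """Boolean mask for upper-main-sequence stars, defined as those with
    F475W - F814W < 1 and F475W < 26 (the cuts quoted in the text)."""
    f475w = np.asarray(f475w, dtype=float)
    f814w = np.asarray(f814w, dtype=float)
    return (f475w - f814w < color_cut) & (f475w < mag_cut)

# Three toy stars: the first two pass both cuts, the third fails F475W < 26.
mask = select_ms_stars([24.0, 25.5, 27.0], [23.5, 25.2, 26.8])
```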
We compare the [`MATCH`]{} best-fit extinction ([[$A_V$]{}]{} + [[$dA_V$]{}]{}) with dust mass surface density ($M_{\rm dust}$) in each region. To do this, we fit a line with slope $m$ and scatter $\sigma$ to a plot of [`MATCH`]{} dust ([[$A_V$]{}]{} + [[$dA_V$]{}]{}) vs. $M_{\rm dust}$ of the $n$ regions with high SFR and many MS stars.
The slope and dispersion of that line are found by solving the following equation numerically for $m$ and $\sigma$ such that $\chi^2 = 1$:
$$\chi^2 = \dfrac{1}{n-1} \displaystyle \sum_n \begin{cases}
\dfrac{ \left(\left[A_{V} + dA_{V}\right] - m \times M_{\mathrm{dust}} \right)^2} { (\overline{\sigma_{A_{V}}}_{\mathrm{reg}}^2 + \sigma^2)}, & A_V + dA_V < 2\\
\displaystyle \int\limits_{2}^{\infty} \dfrac{\left( x - m \times M_{\mathrm{dust}} \right) ^2} {(\overline{\sigma_{A_{V}}}_{\mathrm{reg}}^2 + \sigma^2)} \, \mathrm{d}x , & A_V +dA_V \ge 2
\end{cases}$$
where [[$A_V$]{}]{} + [[$dA_V$]{}]{} is that of the best-fit SFH for that region and $\overline{\sigma_{A_{V}}}_{\mathrm{reg}} = C/\sqrt{N_{\textrm{stars}}}$. $N_\textrm{stars}$ is the number of stars in the $n$th region and $C$ is calculated such that the resulting values of $m$ and $\sigma$ do not depend on $N_\textrm{stars}$. In this case, $C=12$. We split $\chi^2$ into two cases to account for the fact that we have set an upper limit of [[$A_V$]{}]{} + [[$dA_V$]{}]{} = 2.5 in our analysis. As we have shown in Appendix \[app:avdav\], the best-fit extinction parameters are effectively lower limits in some of our regions. As a result, we assume that all regions with [[$A_V$]{}]{} + [[$dA_V$]{}]{} $\ge$ 2 are lower limits.
We find that a line with $m=4.5\times10^{-6}$ and $\sigma=0.6$ provides the best constraint on the regions with low SFR and high dust. We apply this prior to our results in all regions by recomputing the $fit$ values for each ([[$A_V$]{}]{}, [[$dA_V$]{}]{}) pair. The new $fit$ value is given by
$$fit_{\rm new} = fit_{\rm old} + \dfrac{ (A_V + dA_V) - m \times M_{\mathrm{dust}}} {\sigma^2}$$
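Applying the prior amounts to shifting each grid point's fit value by the penalty term above. A minimal sketch implementing the expression exactly as printed (note the penalty is linear in the residual as written; the function name and example numbers are ours):

```python
def apply_dust_prior(fit_old, a_v, d_a_v, m_dust, m=4.5e-6, sigma=0.6):
    """Recompute a fit value under the dust prior, following the printed
    expression: fit_new = fit_old + ((A_V + dA_V) - m * M_dust) / sigma^2.
    m and sigma are the best-fit slope and scatter quoted in the text."""
    return fit_old + ((a_v + d_a_v) - m * m_dust) / sigma**2

# A region with total extinction 2.0 mag but dust mass surface density
# 2e5 M_sun/kpc^2 (expected extinction m * M_dust = 0.9) gets penalized:
fit_new = apply_dust_prior(10.0, a_v=1.0, d_a_v=1.0, m_dust=2e5)
```

After recomputing `fit_new` for every ([[$A_V$]{}]{}, [[$dA_V$]{}]{}) pair, the best-fit pair is re-selected as the minimum of the adjusted values.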
We compare the [[$A_V$]{}]{} and [[$dA_V$]{}]{} values corresponding to the new fits in each region with the dust mass. The results are shown on the right side of Figure \[fig:dust\_compare\]. Applying the prior tightened the relation somewhat, reducing the number of outliers in both tails at the low-SFR end.
April 1996\
**Charge and color breaking minima and**

**constraints on the MSSM parameters**
**Alessandro Strumia**
*Departamento de Fisica Teórica, módulo C–XI*
*Universidad Autónoma de Madrid, 28049, Madrid, España*
and
*INFN, Sezione di Pisa, I-56126 Pisa, Italia*
**Abstract**
> The MSSM potential can have unphysical minima deeper than the physically acceptable one. We point out that their presence is quite generic in SO(10) unification with supergravity-mediated soft terms. However, at least for moderate values of $\tan\beta$, the physically acceptable vacuum has a lifetime longer than the age of the universe. Furthermore, by discussing the evolution of the universe in an inflationary scenario, we show that the correct vacuum is the natural expectation. This is not the case for other kinds of unphysical minima. Even in the SO(10) case, the weak assumptions necessary for this may prevent the use of the $\tilde{\nu} h^{\rm u}$ direction for baryogenesis [*à la*]{} Affleck-Dine.
Introduction {#Introduction}
============
In the supersymmetric limit the MSSM potential possesses some flat directions and many almost-flat directions, lifted only by Yukawa interactions with small couplings. In some portions of the MSSM parameter space the soft terms give negative corrections to the potential along such directions, so that other unwanted minima appear beyond the SM-like one. Typically the vacuum expectation values in the other minima break electric charge and/or colour and are of order $M_Z/\lambda_f$, where $\lambda_f$ are the Yukawa couplings of the matter fermions $f$.
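To make the $M_Z/\lambda_f$ scaling concrete, the sketch below evaluates the order-of-magnitude vev of an unwanted minimum for a few fermion directions. The Yukawa values are illustrative weak-scale numbers assumed by us (roughly appropriate at moderate $\tan\beta$), not taken from the paper:

```python
M_Z = 91.19  # GeV

def vev_scale_gev(yukawa):
    """Order-of-magnitude vev of an unwanted minimum along a direction
    lifted by a Yukawa coupling: ~ M_Z / lambda_f."""
    return M_Z / yukawa

# Illustrative (assumed) weak-scale Yukawa couplings at moderate tan(beta):
scales = {f: vev_scale_gev(y) for f, y in
          {"top": 1.0, "bottom": 0.024, "tau": 0.010}.items()}
# Top-direction minima sit near the weak scale, while bottom- and
# tau-direction minima sit at multi-TeV scales, where tunneling is
# suppressed by the factor exp(-O(1/lambda_f^2)).
```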
> *What kind of restrictions on the theory does the possible presence of other vacuum states beyond the SM-like one imply?*
One extremal point of view consists of excluding all the regions of parameter space where other, deeper minima are present. In this case one can derive strong constraints on the soft terms [@UFB] and, more interestingly, some scenarios would be excluded on these grounds: this happens, for example, in string theory when supersymmetry breaking is dominated by the dilaton [@UFB]. Similarly, as shown in section \[SO10\], the presence of deeper minima also afflicts an SO(10) unified theory with supergravity-mediated soft terms.
At the other extreme, the weakest possible requirement on the theory is obtained by accepting the possibility that the ‘true’ (physical) vacuum in which we live is a ‘false’ [@Coleman] (unstable) one[^1] that some mechanism has selected among the other minima. In this case one must at least require that the lifetime of the unstable SM-like vacuum be longer than the age of the universe. The quantum tunneling rate is proportional to an exponentially small factor $\exp{\cal O}(-1/\lambda_f^2)$, which makes the SM-like minimum stable enough in all cases except for decays towards possible minima with vacuum expectation values of order $M_Z/\lambda_t$ (or possibly even of order $M_Z/\lambda_b$ and $M_Z/\lambda_\tau$, if $\tan\beta$ is large). In such a case the restrictions on the theory are extremely weak. In practice, the only constraint derived so far is a weak upper bound on the top quark trilinear soft term, $A_t$ [@VacDec; @Thermal]. We will show that in the large-$\tan\beta$ case another well-defined region of the parameter space must be excluded.
The big difference between the constraints obtained following the two extreme attitudes shows that a better understanding of the problem is needed. Before excluding something due to the presence of a bad minimum it is necessary to ensure that it is bad enough. In other words, we must investigate what other unacceptable consequences the possible presence of minima beyond the SM-like one can have. While their presence does not constitute a problem for particle physics at the Fermi scale, there can be bad [*cosmological*]{} consequences. It may happen that the evolution of the universe (as we now understand it) cannot naturally end up in the desired SM-like vacuum, preferring instead some other unphysical minimum.
For example, if the universe has passed through a hot phase in which the temperature was higher than $M_Z$, and possibly of the order of the vacuum expectation values of some unphysical minimum, it is also necessary to impose that the SM-like vacuum be stable under thermal fluctuations. This requirement turns out to be very weak [@Thermal; @Riotto].
It seems however that a cosmological scenario compatible with particle physics must contain a stage of inflation. Such a scenario is most naturally realized by assuming a random initial distribution of the order of the gravitational scale for the various vacuum expectation values [@Linde]. If the potential during inflation were the usual low energy MSSM one, the natural end point of the evolution of the universe would be the ‘biggest’ unphysical minimum, the one with the largest vacuum expectation values. In this case it would be natural to avoid all the regions of parameter space in which another ‘bigger’ (and not necessarily deeper) minimum is present. The bounds would be even (slightly) stronger than the strong ones.
However the same positive vacuum energy which gives rise to inflation also produces effective soft terms of order $H$ [@Hsoft], where $H$ is the Hubble constant during inflation. In general, such new contributions to the soft terms are not directly linked to the low energy ones. Even when unphysical minima are generically present in the low energy potential, the inflationary potential can be safe. We will show that this naturally happens in the SO(10) case discussed in section \[SO10\]. As discussed in section \[Inflation\], in such cases the inflationary cosmological evolution chooses the SM-like minimum avoiding the other ones.
This paper is organized as follows. In section \[BadMinima\] we briefly review the unphysical minima which turn out to be most significant and discuss their main characteristics. In section \[SO10\] we will show that in an SO(10) theory with supergravity mediated soft terms the presence of a phenomenologically unacceptable deeper minimum is quite generic. In the large $\tan\beta$ region the resulting lifetime of the SM-like vacuum can be smaller than the age of the universe. Finally, in section \[Inflation\] we will discuss whether we can tolerate the presence of other minima or whether they constitute a problem for cosmology.
The flat directions and the unphysical minima {#BadMinima}
=============================================
The first example of an unphysical minimum was given in , where it was shown that the (approximately) necessary and sufficient conditions for the RGE improved MSSM potential not to develop a deeper minimum along the directions
$$\begin{aligned}
|{\langle h^{\rm u}_0 \rangle}| = |{\langle \tilde{Q}_u \rangle}| = |{\langle \tilde{u}_R \rangle}| &\sim& {\cal O}(M_Z/\lambda_u),\\
|{\langle h^{\rm d}_0 \rangle}| = |{\langle \tilde{Q}_d \rangle}| = |{\langle \tilde{d}_R \rangle}| &\sim& {\cal O}(M_Z/\lambda_d),\\
|{\langle h^{\rm d}_0 \rangle}| = |{\langle \tilde{L}_e \rangle}| = |{\langle \tilde{e}_R \rangle}| &\sim& {\cal O}(M_Z/\lambda_e) \label{sys:v1}\end{aligned}$$
are respectively
$$\begin{aligned}
A_u^2 &<& 3(\mu_{\rm u}^2 + m_{\tilde{Q}_u}^2 + m_{\tilde{u}_R}^2), \label{eq:At<3}\\
A_d^2 &<& 3(\mu_{\rm d}^2 + m_{\tilde{Q}_d}^2 + m_{\tilde{d}_R}^2),\\
A_e^2 &<& 3(\mu_{\rm d}^2 + m_{\tilde{L}_e}^2 + m_{\tilde{e}_R}^2) \label{sys:A<3}\end{aligned}$$
where $h^{\rm u}$ ($h^{\rm d}$) is the Higgs doublet coupled to up quarks (down quarks and leptons), $m_R^2$ is the soft mass for the field $R$, $\mu_{\rm u,d}^2 = m_{h^{\rm u,d}}^2 +|\mu|^2$, and the standard notations for the other quantities have been followed. The generation number, which can be 1, 2 or 3, has been omitted. Optimized versions of these directions were subsequently considered in , obtaining slightly more stringent constraints in which $\mu_{\rm u,d}^2$ is replaced by $m_{h^{\rm u,d}}^2$.
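As a quick sanity check, the three conditions (\[sys:A<3\]) can be tested numerically. A minimal sketch follows; the weak-scale input values are purely illustrative, not fits:

```python
# Check of the approximate CCB conditions A_f^2 < 3*(mu_f^2 + m_L^2 + m_R^2)
# of eq. [sys:A<3].  All masses squared in GeV^2, trilinear terms in GeV.

def ccb_safe(A_f, mu_f2, m_L2, m_R2):
    """True if the A-term bound against a charge/colour-breaking
    minimum along the corresponding flat direction is satisfied."""
    return A_f**2 < 3.0 * (mu_f2 + m_L2 + m_R2)

# Illustrative (hypothetical) weak-scale inputs for the up-type condition:
A_u = 500.0          # trilinear soft term
mu_u2 = 200.0**2     # mu_u^2 = m_{h_u}^2 + |mu|^2
m_Q2 = 400.0**2      # squark doublet soft mass squared
m_uR2 = 400.0**2     # right-handed squark soft mass squared

print(ccb_safe(A_u, mu_u2, m_Q2, m_uR2))   # bound satisfied for these inputs
```

A value $A_u = 2\,{\rm TeV}$ with the same soft masses would instead violate the bound and signal a deeper charge/colour-breaking minimum.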
Another potentially dangerous direction has subsequently been considered in $$\label{eq:Lhu}
{\langle h^{\rm u} \rangle} =-a^2 \frac{\mu}{\lambda_d},\qquad
{\langle \tilde{Q}_d \rangle}={\langle \tilde{d}_R \rangle}=a\frac{\mu}{\lambda_d},
\qquad\hbox{and}\qquad
{\langle \tilde{L}_\nu \rangle}=\frac{\mu}{\lambda_d}\cdot a\sqrt{1+a^2},$$ where $\tilde{L}$ and the down squarks can be of any generation and $a$ is a numerical constant of order one which depends on the shape of the potential. In this case the resulting (approximate) condition to avoid an unphysical minimum is $$\label{eq:L<hu}
m_{\tilde{L}}^2 + m_{h^{\rm u}}^2 > 0$$ where $m_{h^{\rm u}}^2$ is expected to be driven negative at some scale greater than $M_Z$ by the top quark Yukawa coupling, so as to induce the ‘radiative’ electroweak symmetry breaking. Optimized versions of these directions have recently been obtained in [@UFB]; in particular the down squark may be replaced by another slepton $\tilde{e}$ of a generation different from the one which already appears in (\[eq:Lhu\]). While the approximate form of the constraint remains the same as in eq. (\[eq:L<hu\]), these similar minima have vacuum expectation values of order $\mu/\lambda_e$.
The typical vacuum expectation value $v$ in all these minima is of the order of the electroweak scale $M_Z$ divided by some Yukawa coupling, which may be small. For this reason the RGE improved tree level potential is not a good approximation: choosing the renormalization scale at ${\cal O}(M_Z)$, the neglected full one loop potential contains terms proportional to $\ln v/M_Z$, which may be crucial [@V1loop]. A more correct bound is obtained by evaluating the running parameters at scales $Q\sim v$. While no significant variation is expected in the case of the first constraint (\[sys:A<3\]), the other constraint (\[eq:L<hu\]) becomes ineffective when the scale $Q_0$ at which $m_{\tilde{L}}^2 + m_{h^{\rm u}}^2$ becomes negative is smaller than the typical vacuum expectation value $\mu/\lambda_f$. Quantum corrections are very important in this second case, since this scale is mainly determined by the top Yukawa coupling, which radiatively induces the electroweak breaking. This point will be crucial in the following. Nowadays we know that the top quark is much heavier than was typically assumed ten years ago and that $\lambda_t$ at the GUT scale is larger than about $1/2$. This implies that the scale $Q_0$ is typically larger than $M_Z/\lambda_f$ and that the correct evaluation of the bound (\[eq:L<hu\]) is still important but no longer essential.
A general analysis of the various possibly dangerous flat directions was performed in [@UFB]. In the standard scenario of supergravity mediated soft terms it turns out that the two examples (\[sys:A<3\]) and (\[eq:L<hu\]) reported here essentially include all the others. The first constraint gives an upper bound on the $A$-terms, while the second one, eq. (\[eq:L<hu\]), gives a $\lambda_t$-dependent upper bound on the ratio between gaugino masses $M_{1/2}$ and scalar masses $m_0$ at the GUT scale, which, in the case of universal soft masses at the GUT scale, is approximately $M_{1/2}/m_0\circa{<}(0.5\div0.8)$. This restriction can be problematic for some scenarios where this ratio is predicted, for example in supergravity with dilaton dominated supersymmetry breaking [@UFB].
Constraints on the SO(10) parameter space {#SO10}
=========================================
We will now show that in a very large portion of the parameter space of an SO(10) unified theory with soft breaking terms mediated by minimal supergravity, unphysical deeper minima are present along the directions (\[eq:Lhu\]) due to the RGE effects generated by the unified top Yukawa coupling. These same radiative corrections to the unified soft terms produce significant rates for processes which violate the lepton flavour numbers (such as $\mu\to e\gamma$) and CP (such as $d_e$ and $d_N$) [@LFV]. For this reason the regions of parameter space in which the unphysical minimum is not present are the same in which the most interesting leptonic signals of SO(10) unification are more difficult to detect (see fig. \[fig:LFVgood\]).
To give a precise meaning to our computation we will adopt the ‘minimal’ SO(10) model presented in ref. [@LFV] in which the only relevant Yukawa coupling above the unification scale is the unified top quark one and the light Higgs doublets $h^{\rm u}$ and $h^{\rm d}$ are not unified in a single representation of SO(10), allowing for moderate values of $\tan\beta$. The opposite case of large $\tan\beta\sim m_t/m_b$ will be considered at the end of this section.
Let us explain why in an SO(10) GUT the constraint (\[eq:L<hu\]) is stronger than in the MSSM case. This is due to two different reasons:
- New RGE corrections are present from the Planck scale to the GUT scale, with the larger RGE coefficients typical of a unified theory. This means that, for a given $\lambda_t$ value, the scale $Q_0$ at which $m_{\tilde{L}}^2 + m_{h_{\rm u}}^2$ becomes negative is much larger than in the MSSM. The approximate strong form (\[eq:L<hu\]) of the condition necessary to avoid minima in the directions (\[eq:Lhu\]) now becomes more accurate.
- In an SO(10) theory the third generation slepton doublet also feels the unified Yukawa coupling of the top and becomes lighter than the corresponding sleptons of the first and second generations. For this reason the bound (\[eq:L<hu\]) with a third generation slepton, $\tilde{\nu}_\tau$, becomes much stronger.
This explains why the bound (\[eq:L<hu\]) is violated in a very large part of the SO(10) parameter space, approximately for $$M_2^2 \circa{>} [0.13-0.064\lambda_t^2(M_{\rm G})] \cdot m_{\tilde{e}_R}^2$$ where $M_2$, the wino mass parameter, and $m_{\tilde{e}_R}$, the right-handed selectron mass, are renormalized at the weak scale.
In the case of universal soft terms, the precise form of this restriction is shown in figure \[fig:L<hu10\], which covers all the parameter space for any moderate $\tan\beta$ value. Only in the shaded area inside the thin lines are no other minima present, for $\lambda_t(M_{\rm G})=0.5$ (dotted line), $0.75$ (dashed line), $1$ (dot-dashed line) and $1.25$ (continuous line). Outside the corresponding thick lines $m_{\tilde{\tau}}^2<0$ and the SM-like minimum is not present. Similar results are obtained including a possible ${\rm U}(1)_X$ $D$-term contribution at the SO(10) breaking scale or with non universal soft terms at the Planck scale. In the latter case, for example, unphysical minima are present in all the parameter space for $\lambda_t(M_{\rm G})\circa{>}1$ and $m_{16}^2\circa{>} 2 m_{10}^2$, where $m_{16}$ and $m_{10}$ are the soft masses at the Planck scale for the matter (16) and Higgs (10) SO(10) representations.
Let us make an aside remark. Possible non renormalizable operators, suppressed by inverse powers of the unification mass or of the Planck scale, can lift all the (almost) flat directions of the MSSM potential along which the unphysical minima may be present. However, the effects of such operators are totally negligible at vacuum expectation values of order $M_Z/\lambda_f$, where $f$ — in the case under examination — can be $d,s,b,e$ or $\mu$. On the other hand, a [*light*]{} right handed neutrino mass $M_{\nu_R}$, around $10^{11\div13}\,{\rm GeV}$ (which in SO(10) models can be naturally obtained as $\sim M_{\rm G}^2/M_{\rm Pl}$), is necessary if the usual seesaw mechanism is to produce neutrino masses in the range suggested by the MSW solution of the solar neutrino deficit. The superpotential then contains the non renormalizable operators $$(\lambda^\nu \frac{1}{M_{\nu_R}}\lambda^\nu)
(L h^{\rm u})^2,$$ which lift the vacuum degeneracy precisely along the very dangerous direction under examination, ${\langle \tilde{\nu}_\tau \rangle} \sim{\langle h^{\rm u}_0 \rangle}$. The corresponding dangerous minima are erased if $m_{\nu_\tau} \circa{>} \lambda_f^2 M_Z$. The mass of a stable $\tau$ neutrino must be smaller than $m_{\nu_\tau}\circa{<}100\,{\rm eV}$ in order not to overclose the universe (an unstable $\tau$ neutrino can be heavier), and a mass around $10\,{\rm eV}$ is preferred for the (necessary?) hot dark matter of the universe. In this case the unphysical minima with $f=e,d$ are no longer present and those with $f=\mu,s$ are only marginally affected. In the case $f=b$ non renormalizable operators are totally irrelevant.
We can then conclude that at least the unphysical minimum in (\[eq:Lhu\]) along the $h^{\rm u}\tilde{\nu}_\tau,~\tilde{b}_L,\tilde{b}_R$ direction is present in the low energy potential of an SO(10) GUT for quite generic initial tree level values of the soft terms. The small part of the parameter space in which unphysical minima are not present can be characterized as follows:
- A [*light chargino*]{} is present in the spectrum and the [*squarks and the gluinos cannot be significantly heavier than the sleptons*]{} (apart from a light stau), especially if the unified top quark Yukawa coupling is near its IR fixed point [@LFV], $\lambda_t(M_{\rm G})\circa{>}1$.
- With this particular spectrum, the gluino mass RGE contribution to the squark masses cannot efficiently restore their flavour universality, so that [*flavour and CP violating signals of SO(10) unification in the [*leptonic*]{} sector ($\mu\to e\gamma$, $\mu\to e$ conversion, $d_e$) are no more important than the corresponding signals in the [*hadronic*]{} sector ($d_N$, $\varepsilon_K$, $\varepsilon'_K$, $\Delta m_B$, $b\to s\gamma$, …).*]{} We illustrate this point in figure \[fig:LFVgood\]. We recall that the leptonic signals are correlated among themselves, and that, to a lesser extent, the same also happens for the hadronic signals [@LFV]. For this reason we have chosen the electric dipoles of the electron, $d_e$, and of the neutron, $d_N$, as representatives of the leptonic and hadronic signals, and we have plotted in fig. \[fig:LFVgood\] their predicted values in the plane $(d_N,d_e)$ for randomly chosen samples of reasonable supersymmetric spectra and $\lambda_t$ values. We clearly see that an observable $d_e$ one order of magnitude larger than $d_N$, which is a possible distinguishing feature of SO(10) unification [@dedN-SO10], is accompanied by an unphysical minimum.
However in the next section we will argue that [*unphysical minima of this kind do not necessarily constitute a problem*]{} and that there is no reason to restrict the parameter space to its small region where they are not present.
It is now interesting to study what happens in the large $\tan\beta$ case. The now significant $\lambda_\tau$ coupling below the unification scale (together with $\lambda_{\nu_\tau}$ if $M_{\nu_R}<M_{\rm G}$) makes the third generation sleptons even lighter and the bound (\[eq:L<hu\]) even stronger. The most important difference is however that the vacuum expectation values in the possible unphysical minima can now be of order $M_Z/\lambda_b\sim M_Z$. The quantum tunneling rate of the SM-like vacuum into the unphysical vacuum would no longer contain a safe exponentially small factor $\exp[-{\cal O}(1/\lambda_f^2)]$. It may happen that the resulting lifetime is shorter than the age of the universe, so that the corresponding regions must be excluded.
Is such an unphysical minimum again an almost generic feature of an SO(10) GUT with large $\tan\beta$? Of course, an appropriate numerical analysis is necessary to delimit the excluded regions. It is however easy to see that [*the ‘best’ part of the parameter space is safe.*]{} Let us recall that it is not possible to satisfy the conditions for a correct electroweak symmetry breaking with large $\tan\beta\sim m_t/m_b$, $$\label{eq:MinBigTan}
\frac{\mu B}{\mu_{\rm u}^2 + \mu_{\rm d}^2}\approx
\frac{1}{\tan\beta}\qquad{\rm and}\qquad
\mu_{\rm u}^2 \approx -{M_Z^2 \over2},$$ without fine tuning. A minimum fine tuning of order $\tan\beta$ is necessary even in the ‘best’ region [@LargeTan] where $$\label{eq:GoodRegion}
M_i,\mu,A_f,B \sim M_Z,\qquad\hbox{and}\qquad
m_0^2\sim M_Z^2\,\tan\beta\sim(1{\,{\rm TeV}})^2.$$ Accepting to pay the price of some fine tuning, we can exploit the full predictability of SO(10) gauge unification by considering models where the Higgs doublets and all the third generation Yukawa couplings are unified. In this case the resulting predictions are phenomenologically correct, and a non zero ${\rm U}(1)_X$ $D$-term contribution $m_X^2\sim m_0^2$ is necessary to split the $h^{\rm u}$ soft mass from the $h^{\rm d}$ one and satisfy the minimum conditions (\[eq:MinBigTan\]). In this way a possible but narrow region is obtained. The resulting particular low energy spectrum has been explored in [@LargeTan; @LargeTanLFV] and the associated rates for lepton flavour violating processes have been computed in [@LargeTanLFV], showing that large sfermion masses, $m_0\sim1{\,{\rm TeV}}$, are also preferred for compatibility with the experimental upper bounds.
In this particular narrow ‘good’ region (\[eq:GoodRegion\]), $m_{h^{\rm u}}^2 \approx \mu_{\rm u}^2 \approx -M_Z^2/2$ is much smaller than $m_{\tilde{L}_3}^2$, so that the bound (\[eq:L<hu\]) necessary to avoid the unphysical minimum (\[eq:Lhu\]) is already practically included in the obvious $m_{\tilde{L}_3}^2>0$ condition.
The same conclusion cannot be reached if, for example, $\mu\sim m_0$ or $m_0\sim M_Z$: these regions are however accessible only with a worse fine tuning [@LargeTan] and were for this reason discarded in the analysis of [@LargeTanLFV], making more definite predictions for the lepton flavour violating rates possible.
These qualitative considerations are confirmed by the numerical calculation. For example, in figure \[fig:largeTan\] we show the region of the $(M_2,|\mu|)$ plane where only the SM-like minimum is present (dark gray area) in the case of a minimal SO(10) theory with large $\tan\beta$ [@LargeTanLFV]. The figure is valid for small values of the selectron $A$-term, $A_e\sim M_Z$, and universal soft masses at the Planck scale larger than the $Z$ mass as in (\[eq:GoodRegion\]). In the white region delimited by the various continuous lines, along which some particle becomes too light, the SM-like minimum is not present [@LargeTanLFV]. In the remaining light gray area, which extends outside of the preferred region (\[eq:GoodRegion\]), the SM-like minimum is present together with other minima, since the bound (\[eq:L<hu\]) is violated. It is interesting that, if the tunneling rate of the SM-like vacuum were sufficiently large, one could exclude such regions on a more solid basis than fine tuning considerations. We recall that, in standard cosmology, the probability that the unstable SM-like vacuum has survived until today is given by $$\label{eq:tau}
p\approx\exp\big\{-(M_Z T)^4 e^{-S[\varphi^B_i]}\big\}$$ where $T\sim 10^{10}\,{\rm yrs}$ is the age of the universe, $\varphi^B_i(x)$ is the ‘bounce’ (a particular field configuration which interpolates between the true and the false vacuum) and $S[\varphi^B_i]$ is its action [@Coleman]. In our case $\varphi_i=
\{h_{\rm u},\tilde{\nu}_\tau,\tilde{b}_L,\tilde{b}_R\}$ involves more than one field, so that finding the bounce is a cumbersome numerical problem. However, we can approximate the problem with a single field one by restricting the trajectories in field space to the deepest direction (\[eq:Lhu\]). This is a good approximation at large field values, which give a dominant and large contribution to the bounce action, since the potential goes down only quadratically towards the unphysical minimum. In such an approximation the relevant Lagrangian is $${\cal L} = 2 Z(a)\, |\partial h^{\rm u}|^2 +
\big\{m_2^2 |h^{\rm u}|^2 -
\frac{|\mu|}{\lambda_b} m_3^2 |h^{\rm u}|\big\}$$ where $m_2^2\equiv |m_{h^{\rm u}}^2 + m_{\tilde{\nu}_\tau}^2|$, $m_3^2\equiv m_{\tilde{\nu}_\tau}^2+m_{\tilde{b}_L}^2+m_{\tilde{b}_R}^2$, and $$\label{eq:a}
a =\left|\frac{h^{\rm u}}{\mu/\lambda_b}\right| \ge 0,\qquad
Z(a) = \frac{8 a^2 + 10 a + 3}{8a(a+1)}.$$ At ‘large’ field values $a\circa{>}1$, where this Lagrangian is realistic, we can further approximate $Z\approx 1$. Note that we are now going to compute the tunneling rate between two minima using an approximated potential that does not have any minimum. Since this might seem suspect, it is better to add a few words of comment. The [*unphysical minimum*]{} appears at the (high) scale at which the scale-dependent $m_{h^{\rm u}}^2 + m_{\tilde{\nu}_\tau}^2$ term becomes positive. We can however neglect this scale dependence, since the behavior of the potential beyond the ‘escape’ point $\varphi_i^B(0)$, which in our case is of ${\cal O}({\,{\rm TeV}})$, is irrelevant [@Thermal]; in fact, the potential could even be unbounded from below along the direction (\[eq:Lhu\]). As for the [*unstable minimum*]{} at $a\approx 0$, we only need to proceed carefully and consider the potential as the limit of a ‘conventionally shaped’ one, different from our approximated form only for $a\to 0$. In this sense we can work with a potential without minima. Although somewhat drastic, our approximation constitutes a great simplification, since now the bounce action depends only on [*one*]{} dimensionless ratio, $[\mu m_3^2]^2/[m_2^2]^3$. Moreover, expressing the Lagrangian in terms of a $h^{\rm u}$ field normalized in units of $\mu/\lambda_b$ as in eq. (\[eq:a\]), we can see that, due to the particular form of the approximated potential, $S[\varphi^B_i]\propto (\mu/\lambda_b)^2$. These considerations fix the bounce action to be $$\label{eq:S}
S[\varphi^B_i] \approx c\cdot \frac{2\pi^2}{\lambda_b^2}
\frac{\mu^2(m_{\tilde{\nu}_\tau}^2+m_{\tilde{b}_L}^2+m_{\tilde{b}_R}^2)^2}
{|m_{\tilde{\nu}_\tau}^2+ m_{h^{\rm u}}^2|^3}.$$ The dimensionless proportionality constant $c$ can be easily computed since, under our approximations, it is also possible to find analytically the bounce $h^{{\rm u}B}$ in terms of the Bessel function $J_1$, $$h^{{\rm u}B}(r) = \frac{|\mu| m_3^2}{2 \lambda_b m_2^2}
\times\left\{\begin{array}{ll}
b + (1-b)j_1(m_2 r)&
\hbox{for $0\le r\le r_*$}\\
0 & \hbox{for $r\ge r_*$}\end{array}\right. ,\qquad\qquad$$ where $r$ is the Euclidean 4-radius, $j_1(x)\equiv 2J_1(x)/x$, $m_2r_*\approx 5.14$ is the position of its first minimum, and $b\approx 1/8.56$ has been chosen in such a way that $h^{{\rm u}B}(r_*)=0$. The relatively large value of $r_*$, due to the slow quadratic decrease of the potential at large $h^{\rm u}$, gives rise to a correspondingly large proportionality constant in eq. (\[eq:S\]), $c\approx 90$.
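The quoted numbers $m_2 r_*\approx 5.14$ and $b\approx 1/8.56$ can be checked numerically. A minimal standard-library sketch: it evaluates $J_n$ through its integral representation and uses the identity $\frac{d}{dx}[J_1(x)/x]=-J_2(x)/x$, so that the first minimum of $j_1$ coincides with the first zero of $J_2$:

```python
import math

def bessel_J(n, x, N=20000):
    # J_n(x) = (1/pi) * Integral_0^pi cos(n*t - x*sin(t)) dt, trapezoidal rule
    h = math.pi / N
    s = 0.5 * (1.0 + math.cos(n * math.pi))
    for k in range(1, N):
        t = k * h
        s += math.cos(n * t - x * math.sin(t))
    return s * h / math.pi

def j1(x):
    # profile function j_1(x) = 2 J_1(x) / x
    return 2.0 * bessel_J(1, x) / x

# First minimum of j_1 = first zero of J_2; bisect for it in [4, 6].
lo, hi = 4.0, 6.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if bessel_J(2, lo) * bessel_J(2, mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x_star = 0.5 * (lo + hi)

# Matching constant b fixed by h(r_*) = 0, i.e. b + (1 - b) * j1(x_*) = 0
b = -j1(x_star) / (1.0 - j1(x_star))
print(round(x_star, 2), round(1.0 / b, 2))
```

The bisection converges to $x_*\approx 5.14$ (the first zero of $J_2$) and the matching condition gives $1/b\approx 8.56$, in agreement with the values quoted above.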
Let us now apply these results to the minimal SO(10) case with universal soft terms at the Planck scale and large $\tan\beta$, so that $\lambda_b\approx 0.9$. In this model there are peculiar correlations between the soft parameters [@LargeTanLFV]. In particular, there exists no deep interior area where the bound (\[eq:L<hu\]) is sufficiently strongly violated, as shown in figure \[fig:largeTan\]. For this reason, in all the allowed parameter space $S[\varphi^B_i]\approx 10^{4\div5}$ is quite large, so that [*the lifetime of the SM-like minimum is always much larger than the age of the universe*]{}. The fact that the bounce action is always much larger than the limiting value ensures that the quality of our approximation is much better than would have been sufficient. For more general but less interesting supersymmetric models with large $\tan\beta$ — for example, not imposing exact SO(10) relations at the unification scale — the parameter space is more varied and the lifetime of the SM-like vacuum can be shorter than the age of the universe.
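The comparison with the critical action can be made explicit. From eq. (\[eq:tau\]), survival of the SM-like vacuum over the age of the universe requires roughly $S[\varphi^B_i]\gtrsim 4\ln(M_Z T)\approx 400$. The sketch below checks this number and evaluates eq. (\[eq:S\]) for an illustrative spectrum; the mass values are hypothetical, chosen only to reproduce the order of magnitude $S\sim 10^4$:

```python
import math

# Critical bounce action from eq. [eq:tau]: requiring the survival
# probability p ~ exp{-(M_Z T)^4 exp(-S)} to be close to 1 means
# S > 4 ln(M_Z T), in natural units (hbar = c = 1).
M_Z = 91.2                  # GeV
T = 1e10 * 3.156e7          # age of the universe in seconds
GeV = 1.519e24              # 1 GeV expressed as an inverse time, in s^-1
S_crit = 4.0 * math.log(M_Z * GeV * T)
print(round(S_crit))        # ~ 400

# Bounce action of eq. [eq:S] for a hypothetical large-tan(beta) spectrum.
def bounce_action(mu, m_nu2, m_bL2, m_bR2, m_hu2, lam_b=0.9, c=90.0):
    m3_sq = m_nu2 + m_bL2 + m_bR2      # m_3^2
    m2_sq = abs(m_nu2 + m_hu2)         # m_2^2 (bound eq. [eq:L<hu] violated)
    return c * 2.0 * math.pi**2 / lam_b**2 * mu**2 * m3_sq**2 / m2_sq**3

S = bounce_action(mu=300.0, m_nu2=200.0**2, m_bL2=500.0**2,
                  m_bR2=500.0**2, m_hu2=-470.0**2)
print(S > S_crit)           # lifetime far exceeds the age of the universe
```

Note the scaling $S\propto (\mu/\lambda_b)^2\, (m_3^2)^2/(m_2^2)^3$: a mild violation of the bound (small $m_2^2$) makes $S$ huge, which is why only regions where the bound is strongly violated could ever be excluded by vacuum decay.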
Cosmological evolution and unphysical minima {#Inflation}
============================================
We now come back to the question raised in the introduction. What are the consequences of the possible presence of phenomenologically unacceptable minima other than the SM-like one? For example, is it necessary to restrict an SO(10) theory to the narrow part of its parameter space where unphysical minima are not present?
As discussed in the introduction there can be cosmological problems: if the low energy potential possesses more than one minimum it may be impossible, or unnatural, for the SM-like minimum to be the actually populated one. Implementing this condition requires considering the evolution of the universe, so that the answer is not only a matter of particle physics at the Fermi scale. While obtaining precise results would require choosing some particular well defined cosmological model, the general ideas at the basis of our present understanding of the evolution of the universe are, for example, sufficient to clarify the situation in the SO(10) case presented in the previous section. For this reason we will not adopt any particular cosmological model, but rather base our discussion only on its most well established features. We can summarize them as follows. A period of inflation seems to be an essential feature of any consistent cosmological model. After inflation the inflaton field decays in a time of order $\Gamma_I^{-1}$, giving rise to the standard ‘hot big-bang’ scenario with a reheating temperature $T_R\sim (\Gamma_I M_{\rm Pl})^{1/2}$. The natural way of obtaining a sufficient amount of inflation consists in assuming random out of equilibrium initial values of the order of the Planck scale for the various fields. In such a case the non zero modes of the fields are rapidly red-shifted away and the various fields begin to move towards a minimum.
When more than one minimum is present, [*the most natural final vacuum state is the one with the largest vacuum expectation value.*]{} For example, in the case of a single field whose potential possesses two minima at different scales in field space, $v\sim M_Z$ and $v'\sim v/\lambda_f$, the relative probability is at least $p/p'\sim\lambda_f$, even in the case where the depths are comparable. This shows that the unphysical minima with the smallest $\lambda_f$ are the most dangerous ones, even if not deeper than the SM-like one. Here we are assuming that the unknown mechanism responsible for the small present value of the cosmological constant does not treat the SM-like minimum as special.
We thus reach the conclusion that the bounds can be even (slightly) stronger than in the extreme case of excluding only all deeper unphysical minima. However such conditions cannot be directly applied to the low energy potential, since, during inflation, the potential along the flat directions is expected to be significantly different from the low energy one [@Hsoft]. The reason is that the same positive vacuum energy density $V=|F_I|^2+\cdots$ which gives rise to inflation with a Hubble constant $H^2\sim V/M_{\rm Pl}^2$ also produces supergravity mediated soft terms of order $H$ [@Hsoft]. If the inflaton field $\varphi_I$ were more directly coupled to the SM fields than the usual ‘hidden’ sector responsible for low energy supersymmetry breaking, the resulting effective soft terms would be even bigger. For example $\varphi_I$ may have a Yukawa coupling to some charged field with mass of the order of the GUT scale [@Hsoft]. In the opposite case, for example in no-scale models, it is also possible that the inflationary soft terms are zero at tree level [@noHsoft]. In general the inflationary contributions to the soft terms are not directly linked to the low energy ones, even in the supergravity case. In fact, a vacuum expectation value of the inflaton away from the minimum and of the order of the Planck scale, ${\langle \varphi_I \rangle}\sim M_{\rm Pl}$, can distort the Kähler potential.
To summarize, we can say that, while the necessary bounds are possibly very strong, they do not depend only on physics at the Fermi scale, and their computation requires a knowledge of physics at the Planck scale. The well established features of cosmological evolution alone are not sufficient to determine whether a particular unphysical minimum is present even in the inflationary potential, and thus to decide whether it has intolerable consequences which force one to exclude its presence. On the contrary, the identification of the inflaton with some particular field of known behavior (such as the dilaton) would allow one to make the bounds on the low energy theory precise.
It is interesting that we can obtain a definite answer in the SO(10) case, where an unphysical minimum is present in the [*low*]{} energy potential for quite generic values of the soft terms, due to the combined effect of $\lambda_t$-induced renormalization corrections at all scales from $M_{\rm Pl}$ down to $M_Z$. We now show that unphysical minima do not afflict the potential during inflation with soft terms of order $H$. We recall that $H\sim 10^{14}{\,{\rm GeV}}$ is the preferred value for which the quantum de$\,$Sitter fluctuations of the inflaton give rise to density perturbations which produce the observed amount of large scale inhomogeneities. In the case where the dominant contribution to the inflationary soft terms is transmitted by some field at or below the GUT scale, the [*unified*]{} top quark Yukawa coupling can no longer affect the potential by distorting it to a dangerous form. Even in the simplest case of supergravity mediated soft terms, the quantum corrections at scales greater than $H$ alone are not sufficient for creating unphysical minima. We can conclude that, in SO(10) unification (but, of course, also in more general models), quantum corrections do not generate unphysical minima along the direction (\[eq:Lhu\]) in the inflationary potential, at least as long as $H$ is sufficiently large. Of course unphysical minima can reappear when inflation is over.
In a generic case it is possible that, just as in the SO(10) case, the unphysical minimum of the low energy potential is not present in the inflationary potential. We now show that in such cases the cosmological problems disappear. In fact, if the potential during inflation does not contain other minima, when inflation is over, the various fields $\varphi$ have already efficiently rolled towards zero [@AD2], ending up, ultimately, in the SM-like minimum. It is again crucial that the inflationary soft terms be of the order of the inflationary Hubble constant, so that the ‘forcing’ $V'$ term in the inflationary field equations $$\ddot{\varphi} + 3H\dot \varphi+ V'(\varphi)=0,\qquad
V(\varphi)={\cal O}(H^2) |\varphi|^2 +\cdots+
\frac{|\varphi|^{n+4}}{M^n}$$ is at least of the same order of the ‘damping’ term.
Let us assume that the inflationary soft masses squared ${\cal O}(H^2)$ are positive. This is indeed what happens for a minimal form of the Kähler potential [@Hsoft]. In this case the fields evolve as $\varphi(t)\sim \varphi_0 \cdot e^{-Ht}\cos Ht$, and, even starting from Planck scale values $\varphi_0 \sim M_{\rm Pl}$, at the end of inflation $\varphi< M_Z$ provided that the number of $e$-foldings is sufficiently large. This may be difficult to obtain for a given initial starting condition, but it is also exactly what is necessary if inflation is to dilute unwanted species and produce the necessary homogeneity and isotropy. Moreover, this is naturally obtained in the ‘chaotic’ inflationary scenario we have assumed, where the regions in which the most favorable starting conditions are satisfied are the ones which expand, experiencing a huge amount of inflation.
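The required amount of inflation can be quantified: since the amplitude decreases roughly by a factor $e$ per Hubble time, about $\ln(M_{\rm Pl}/M_Z)$ $e$-foldings suffice to bring $\varphi$ from $M_{\rm Pl}$ down below $M_Z$, fewer than the roughly 60 $e$-foldings usually invoked to solve the horizon and flatness problems anyway. A one-line check:

```python
import math

# Minimal number of e-foldings N for phi ~ M_Pl * exp(-N) to drop
# below M_Z (the oscillatory cos(Ht) factor only helps the decrease).
M_Pl = 2.4e18   # reduced Planck mass, GeV
M_Z = 91.2      # GeV
N_min = math.log(M_Pl / M_Z)
print(round(N_min))   # ~ 38 e-foldings
```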
We have shown that inflationary cosmology avoids unphysical minima present only in the low energy MSSM potential. If they were instead present also in the inflationary potential the ‘largest’ minimum would be preferred as the true vacuum of the universe.
It is also possible that another minimum is present in the inflationary potential but not in the low energy MSSM potential. This happens if one soft mass squared ${\cal O}(H^2)$ is negative and if, for example, the usual quantum corrections at energies $E\ll H$ stabilize the potential at low field values, $\varphi\circa{>}M_Z$. In this case the fields roll into the moving minimum ${\langle \varphi \rangle}(t)\sim [H(t) M^{n+1}]^{1/(n+2)}$, follow it [@AD2], and then relax towards the SM-like minimum when the Hubble constant $H$ and the temperature $T$ become small enough that the other minima disappear and only the SM-like minimum is present. This is not a problematic situation. On the contrary, it has been shown [@AD2; @AD] that if lepton (or baryon) number and CP are broken in the inflationary minimum a baryon asymmetry is produced, which may easily be the source of the observed one. Since the viability of alternative mechanisms for generating it at the electroweak scale is not ensured, another minimum in the inflationary potential is welcome. We only need to point out that if a corresponding unphysical minimum, or one sufficiently coupled to it in the inflationary field equations, is present also in the low energy MSSM potential, then one has to worry that the SM-like minimum might not be the populated one.
Having discussed how inflation can naturally select the SM-like minimum as the physical one even if bigger minima are present in the MSSM potential, before concluding that unphysical minima are not dangerous in these cases, we must first ensure that the universe remains in the SM-like vacuum until the present epoch. This might fail due to the following effects:
- [*Quantum fluctuations*]{} of the MSSM fields in the de$\,$Sitter inflationary universe with ${\langle \delta\varphi^2 \rangle}\sim H^2$ and correlation length of order $H^{-1}$ [@deSfluc] are not sufficient to populate the unphysical minima with large vacuum expectation values $H/\lambda_f\gg H$. At the end of inflation, when $H(t)\sim M_Z$, such fluctuations could be a problem [*if*]{} an unphysical minimum with ${\langle \varphi \rangle}\sim M_Z$ is present, as may happen if the trilinear soft term $A_t$ of the top quark exceeds the bound[ (\[eq:At<3\])]{}.
- In the subsequent ‘hot big-bang’ phase after inflation, where the reheating temperature $T_R$ is expected to be bigger than $M_Z$ and possibly of the order of the unphysical minima vacuum expectation values, it is also necessary to ensure that the ‘false’ SM-like vacuum be stable under [*thermal fluctuations*]{}. This requirement turns out to be very weak [@Thermal; @Riotto]. It is also possible that high temperature corrections sufficiently stabilize the potential, for example along the direction[ (\[eq:Lhu\])]{}, erasing the corresponding unphysical minima during the high temperature phase after inflation. In this case, excluding a small part of the MSSM parameter space with $m_{\tilde{t}_R}^2<0$ [@Thermal], the symmetric vacuum will undergo a phase transition towards the SM-like one when the temperature cools down below the electroweak scale.
- Finally, the metastable SM-like vacuum must not undergo [*quantum tunneling*]{} towards a deeper unphysical minimum in the subsequent $10^{10}\,{\rm years}$. The relative probability is $1-p$, with $p$ given in eq.[ (\[eq:tau\])]{} in terms of the ‘bounce’ action [@Coleman] $S[\varphi^B]={\cal O}(2\pi^2/\lambda_f^2)$. For moderate values of $\tan\beta$ only minima with vacuum expectation values of order $M_Z/\lambda_t$ can be dangerous, giving rise to a weak upper bound on the trilinear soft term $A_t$ of the top quark [@Thermal].
In the SO(10) case, where unphysical minima with vacuum expectation values of order $M_Z/\lambda_b$ or larger are present in the low energy MSSM potential, the SM-like minimum suffers no instability in the moderate $\tan\beta$ region. On the contrary quantum tunneling effects can be dangerous if $\tan\beta$ is large, excluding some region of the parameter space, as discussed in section \[SO10\].
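For orientation, the size of the bounce action needed for stability over the age of the universe follows from a standard order-of-magnitude estimate (a textbook sketch, not the precise expression for $p$ in eq. (\[eq:tau\])): the decay rate per unit volume is $\Gamma/V\sim m^4 e^{-S}$, and demanding fewer than one bubble nucleation in our past light-cone gives

```latex
\frac{\Gamma}{V}\,T_U^4 \lesssim 1
\quad\Longrightarrow\quad
S[\varphi^B] \gtrsim 4\ln(m\,T_U) \approx 400
\qquad (m\sim M_Z,\ T_U\sim 10^{10}\,{\rm yr}),
```

so that with $S[\varphi^B]={\cal O}(2\pi^2/\lambda_f^2)$ the bound can only be violated for sizeable couplings $\lambda_f={\cal O}(1)$, consistent with the statement that only the minima generated by $\lambda_t$ or $\lambda_b\sim 1$ can be dangerous.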
Conclusions
===========
The MSSM potential can have other minima deeper than the SM-like one. Their presence can be quite generic in some scenarios; for example in section \[SO10\] we have shown that, in SO(10) unification with supergravity mediated soft terms, unphysical minima with vacuum expectation values of order $M_Z/\lambda_b$ or larger are absent only in small, well defined regions of the parameter space with a peculiar phenomenology. In the large $\tan\beta$ case no unphysical minima are present in the preferred part[ (\[eq:GoodRegion\])]{} of the parameter space which is accessible with a minimum fine tuning of order $\tan\beta\sim m_t/m_b$. Because $\lambda_b\sim 1$, in some regions the quantum tunneling rate out of the SM-like minimum can be too large, and it is clear that such regions must be excluded. On the contrary, if $\tan\beta$ is small, the lifetime of the SM-like vacuum is exponentially larger than the age of the universe.
In general, it is not clear whether the presence of other deeper minima signals a problem of the theory. For this reason we have studied whether unphysical minima have other unacceptable consequences which would force us to exclude their presence, and consequently the scenarios that predict them. From the point of view of particle physics, the presence of other minima does not affect the physics in the SM-like minimum. It can instead be a problem for cosmology. However, once we have accepted that for some reason the SM-like minimum is the populated one, the conditions necessary to ensure sufficient stability are very weak. Stronger bounds could be obtained by studying why the SM-like vacuum should be the populated one. In particular this question can be attacked in an inflationary scenario, where chaotic vacuum expectation values of the order of the Planck scale are suggested as starting conditions in order to obtain the necessary amount of inflation. In this case, we have seen that if more than one vacuum is present in the [*inflationary*]{} potential, the subsequent evolution naturally prefers the minimum with the largest vacuum expectation value, which in many interesting cases is not the SM-like one. However, even if strong bounds must be imposed on the parameters of the inflationary potential, we presently cannot use them to constrain the low energy physics. The reason is that new dominant contributions to the soft terms, of the order of the inflationary Hubble constant, are present during inflation and we cannot compute them without a well defined model of inflation. In particular it turns out that if no other minimum is present during inflation (or at least during a sufficiently long stage of it), then the SM-like minimum, whose basin of attraction contains the origin of field space, is naturally selected even when the low energy potential contains other minima.
We have shown that this is exactly what happens in the $\SO(10)$ case, since the effect which generates the unphysical minima at low energy is not operative during inflation. In this case the SM-like minimum will be selected, provided that the tree level inflationary potential is safe. The opposite assumption is necessary if baryogenesis is to proceed via the Affleck-Dine mechanism [@AD], at least along the ‘ideal’ $\tilde{\nu}h^{\rm u}$ direction [@AD2].
Acknowledgments {#acknowledgments .unnumbered}
---------------
The author would like to thank R. Barbieri, G. Dvali, H.B. Kim and C. Muñoz for helpful discussions.
[nn]{}

J.M. Frere, D.R.T. Jones and S. Raby, [[*Nucl. Phys. **B222***]{} (1983) 11]{}.

H. Komatsu, [[*Phys. Lett. **B215***]{} (1988) 323]{}.

J.A. Casas, A. Lleyda and C. Muñoz, [[*Nucl. Phys. **B471***]{} (1996) 3]{} and [[*Phys. Lett. **B380***]{} (1996) 59]{}.

S. Coleman, [[*Phys. Rev. **D15***]{} (1977) 2929]{}; S. Coleman and C. Callan, [[*Phys. Rev. **D16***]{} (1977) 1762]{}; A. Kusenko, [[*Phys. Lett. **B358***]{} (1995) 47]{}.

M. Claudson, L. Hall and I. Hinchliffe, [[*Nucl. Phys. **B228***]{} (1983) 501]{}.

A. Kusenko, P. Langacker and G. Segre, hep-ph/9602414.

A. Riotto and E. Roulet, [[*Phys. Lett. **B377***]{} (1996) 60]{}.

A.D. Linde, [*“Inflation and quantum cosmology”*]{}, Academic Press, Inc. 1990.

M. Dine, W. Fishler and D. Nemeschansky, [[*Phys. Lett. **B136***]{} (1984) 169]{}; O. Bertolami and G. Ross, [[*Phys. Lett. **B183***]{} (1987) 163]{}; G. Dvali, [[*Phys. Lett. **B355***]{} (1995) 78]{} and hep-ph/9503259; M. Dine, L. Randall and S. Thomas, [[*Phys. Rev. Lett. **75***]{} (1995) 398]{}.

G. Gamberini, G. Ridolfi and F. Zwirner, [[*Nucl. Phys. **B331***]{} (1990) 331]{}.

R. Barbieri and L. Hall, [[*Phys. Lett. **B338***]{} (1994) 212]{}; S. Dimopoulos and L. Hall, [[*Phys. Lett. **B344***]{} (1995) 185]{}; R. Barbieri, L. Hall and A. Strumia, [[*Nucl. Phys. **B445***]{} (1995) 219]{} and [[*Nucl. Phys. **B449***]{} (1995) 437]{}.

R. Barbieri, A. Romanino and A. Strumia, [[*Phys. Lett. **B369***]{} (1996) 283]{}.

L. Hall, R. Rattazzi and U. Sarid, [[*Phys. Rev. **D50***]{} (1994) 7048]{}; R. Rattazzi and U. Sarid, [[*Phys. Rev. **D53***]{} (1996) 1553]{}.

P. Ciafaloni, A. Romanino and A. Strumia, [[*Nucl. Phys. **B458***]{} (1996) 3]{}.

M. Gaillard, H. Murayama and K.A. Olive, [[*Phys. Lett. **B355***]{} (1995) 71]{}.

M. Dine, L. Randall and S. Thomas, [[*Nucl. Phys. **B458***]{} (1996) 291]{}.

I. Affleck and M. Dine, [[*Nucl. Phys. **B249***]{} (1985) 361]{}.

T. Bunch and P. Davies, [[*Proc. R. Soc. **A360***]{} (1978) 117]{}; A. Linde, [[*Phys. Lett. **B116***]{} (1982) 335]{} and [[*Phys. Lett. **B131***]{} (1983) 330]{}; A.A. Starobinskii, [[*Phys. Lett. **B117***]{} (1982) 175]{}.
[^1]: To avoid confusion we will call ‘unphysical’ all unacceptable vacua different from the SM-like one.
---
abstract: 'In this paper we describe a uniform analysis of eight transits and eleven secondary eclipses of the extrasolar planet GJ 436b obtained in the 3.6, 4.5, and 8.0 bands using the IRAC instrument on the *Spitzer Space Telescope* between UT 2007 June 29 and UT 2009 Feb 4. We find that the best-fit transit depths for visits in the same bandpass can vary by as much as 8% of the total ($4.7\sigma$ significance) from one epoch to the next. Although we cannot entirely rule out residual detector effects or a time-varying, high-altitude cloud layer in the planet’s atmosphere as the cause of these variations, we consider the occultation of active regions on the star in a subset of the transit observations to be the most likely explanation. We find that for the deepest 3.6 transit the in-transit data has a higher standard deviation than the out-of-transit data, as would be expected if the planet occulted a star spot. We also compare all published transit observations for this object and find that transits observed in the infrared typically have smaller timing offsets than those observed in visible light. In this case the three deepest *Spitzer* transits are all measured within a period of five days, consistent with a single epoch of increased stellar activity. We reconcile the presence of magnetically active regions with the lack of significant visible or infrared flux variations from the star by proposing that the star’s spin axis is tilted with respect to our line of sight, and that the planet’s orbit is therefore likely to be misaligned. In contrast to the results reported by @beaulieu10, we find no convincing evidence for methane absorption in the planet’s transmission spectrum. If we exclude the transits that we believe to be most affected by stellar activity, we find that we prefer models with enhanced CO and reduced methane, consistent with GJ 436b’s dayside composition from @stevenson10. 
It is also possible that all transits are significantly affected by this activity, in which case it may not be feasible to characterize the planet’s transmission spectrum using broadband photometry obtained over multiple epochs. These observations serve to illustrate the challenges associated with transmission spectroscopy of planets orbiting late-type stars; we expect that other systems, such as GJ 1214, may display comparably variable transit depths. We compare the limb-darkening coefficients predicted by `PHOENIX` and `ATLAS` stellar atmosphere models, and discuss the effect that these coefficients have on the measured planet-star radius ratios given GJ 436b’s near-grazing transit geometry. Our measured 8 secondary eclipse depths are consistent with a constant value, and we place a $1\sigma$ upper limit of 17% on changes in the planet’s dayside flux in this band. These results are consistent with predictions from general circulation models for this planet, which find that the planet’s dayside flux varies by a few percent or less in the 8 band. Averaging over the eleven visits gives us an improved estimate of $0.0452\%\pm0.0027\%$ for the secondary eclipse depth; we also examine residuals from the eclipse ingress and egress and place an upper limit on deviations caused by a nonuniform surface brightness for GJ 436b. We combine timing information from our observations with previously published data to produce a refined orbital ephemeris, and determine that the best-fit transit and eclipse times are consistent with a constant orbital period. We find that the secondary eclipse occurs at a phase of $0.58672\pm0.00017$, corresponding to $e\cos(\omega)=0.13754\pm0.00027$ where $e$ is the planet’s orbital eccentricity and $\omega$ is the longitude of pericenter. We also present improved estimates for other system parameters, including the orbital inclination, $a/R_{\star}$, and the planet-star radius ratio.'
author:
- 'Heather A. Knutson, Nikku Madhusudhan, Nicolas B. Cowan, Jessie L. Christiansen, Eric Agol, Drake Deming, Jean-Michel Désert, David Charbonneau, Gregory W. Henry, Derek Homeier, Jonathan Langton, Gregory Laughlin, & Sara Seager'
title: 'A *Spitzer* Transmission Spectrum for the Exoplanet GJ 436b, Evidence for Stellar Variability, and Constraints on Dayside Flux Variations'
---
Introduction {#intro}
============
Transiting planet systems have proven to be a powerful tool for studying exoplanetary atmospheres. Observations of transiting systems have been used to detect the signatures of atomic and molecular absorption features at wavelengths ranging from the UV to the infrared [e.g., @charbonneau02; @vidal03; @swain08; @desert08; @pont08a; @linsky10], although sometimes the results have proven to be controversial [e.g., @gibson10]. They have enabled studies of the dayside emission spectra and pressure-temperature profiles of close-in planets [e.g., @charbonneau05; @deming05; @knutson08; @grillmair08], and they have informed us about their atmospheric circulation [e.g., @knutson07; @knutson09a; @cowan07; @crossfield10]. Although we currently know of 101 transiting planet systems, our knowledge of these planets (including a majority of the studies cited above) has so far been dominated by studies of the brightest and closest handful of systems, including HD 209458b and HD 189733b. Planets orbiting small stars offer additional advantages, as they produce proportionally deeper transits and secondary eclipses as a result of their favorable radius ratios and lower stellar effective temperatures. By this standard, GJ 436 [@butler04; @maness07; @gillon07a; @gillon07b; @deming07; @demory07; @torres07] represents an ideal target, as the primary in this system is an early M star with a K band magnitude of 6.1.
GJ 436b is currently one of the smallest known transiting planets, with a mass only 22 times that of the Earth [@torres07]. Of the planets orbiting stars brighter than 9th magnitude in K band, only GJ 1214b [@charbonneau09], which also orbits a nearby M dwarf, is smaller. New discoveries of low-mass transiting planets from space-based surveys such as the *Kepler* and CoRoT missions are unlikely to change this picture significantly, as both include relatively few bright stars. GJ 436b is also one of the coolest known transiting planets, with a dayside effective temperature of only 800 K [@stevenson10]. Like GJ 1214b, GJ 436b has a high average density indicative of a massive rocky or icy core. In GJ 436b’s case, models indicate that it must also maintain $1-3$ M$_\earth$ of its mass in the form of a H/He atmosphere [@adams08; @figueira09; @rogers10; @nettelmann10] in order to match the observed radius.
By measuring the wavelength-dependent transit depth as GJ 436b passes in front of its host star we can study its atmospheric composition at the day-night terminator, which should be dominated by methane, water, and carbon monoxide [@spiegel10; @stevenson10; @shabram10; @madhu10]. @pont08 observed two transits of GJ 436b with the NICMOS grism spectrograph on the *Hubble Space Telescope* (*HST*) and placed an upper limit on the amplitude of the predicted water absorption feature between $1-2$ . More recently @beaulieu10 reported the detection of strong methane absorption in the 3.6, 4.5, and 8.0 *Spitzer* bands.
We can compare these results to observations of the planet’s dayside emission spectrum, obtained by measuring the depth of the secondary eclipse when the planet passes behind the star. @stevenson10 measured secondary eclipse depths for GJ 436b in the 3.6, 4.5, 5.8, 8.0, 16, and 24 *Spitzer* bands, from which they concluded that the planet’s dayside atmosphere contained significantly less methane and more CO than the equilibrium chemistry predictions. In this work we present an analysis of eight transits and eleven secondary eclipses of GJ 436b observed with *Spitzer*, including an independent analysis of the transit data described in Beaulieu et al., and discuss the corresponding implications for GJ 436b’s atmospheric composition.
Unlike most close-in planets, which typically have circular orbits, GJ 436b has an orbital eccentricity of approximately $0.15$ [@maness07; @deming07; @demory07; @madhu09a]. Atmospheric circulation models for eccentric Jovian planets suggest that they may exhibit significant temperature variations from one orbit to the next [@langton08; @iro10], although @lewis10 find little evidence for significant temporal variability in general circulation models for GJ 436b. The extensive nature of our data set, which includes eleven secondary eclipse observations in the same bandpass obtained between $2007-2009$, allows us to test the predictions of these models by searching for changes in the planet’s 8 dayside emission on timescales ranging from weeks to years.
It has also been suggested [@ribas08] that GJ 436b’s orbital parameters are changing in time, perhaps through perturbations by an unseen second planet in the system. Such a planet could serve to maintain GJ 436b’s nonzero eccentricity despite ongoing orbital circularization, and would not necessarily produce transit timing variations large enough to be detected by earlier, ground-based studies [@batygin09]. Although more recent studies [@alonso08; @bean08; @coughlin08; @madhu09a; @caceres09; @shporer09; @ballard10a] have failed to find any evidence for either time-varying orbital parameters or a second transiting object in the system, *Spitzer*’s unparalleled sensitivity and stability allow us to extend the current baseline by nine months with new, high-precision transit observations.
[lrrrrcrrrrr]{} UT 2007 Jun 29 & Transit & 8.0 & 3.4 & 0.4 & 28,480 & 530.8 & 9,149.0 & 0.500%\
UT 2007 Jun 30 & Eclipse & 8.0 & 5.9 & 0.4 & 49,920 & 544.2 & 9,148.2 & 0.498%\
UT 2008 Jun 11 & Eclipse & 8.0 & 3.4 & 0.4 & 28,800 & 226.0 & 9,156.6 & 0.497%\
UT 2008 Jun 13 & Eclipse & 8.0 & 3.4 & 0.4 & 28,800 & 255.0 & 9,161.3 & 0.502%\
UT 2008 Jun 16 & Eclipse & 8.0 & 3.4 & 0.4 & 28,800 & 296.5 & 9,154.5 & 0.494%\
UT 2008 Jun 19 & Eclipse & 8.0 & 3.4 & 0.4 & 28,800 & 306.4 & 9,151.7 & 0.493%\
UT 2008 Jul 12 & Eclipse & 8.0 & 70 & 0.4 & 588,480 & 669.2 & 9,159.9 & 0.506%\
UT 2008 Jul 14 & Transit & 8.0 & 70 & 0.4 & 588,480 & 695.6 & 9,160.4 & 0.509%\
UT 2008 Jul 15 & Eclipse & 8.0 & 70 & 0.4 & 588,480 & 714.4 & 9,158.1 & 0.515%\
UT 2009 Jan 9 & Transit & 3.6 & 4.3 & 0.1 & 117,056 & 37.7 & 36,164.3 & 0.387%\
UT 2009 Jan 17 & Transit & 4.5 & 4.3 & 0.1 & 117,056 & 61.6 & 24,382.6 & 0.561%\
UT 2009 Jan 25 & Transit & 8.0 & 4.3 & 0.4 & 35,904 & 474.5 & 9,151.9 & 0.502%\
UT 2009 Jan 27 & Eclipse & 8.0 & 3.4 & 0.4 & 28,800 & 455.1 & 9,164.5 & 0.495%\
UT 2009 Jan 28 & Transit & 3.6 & 4.3 & 0.1 & 117,056 & 82.5 & 36,744.5 & 0.389%\
UT 2009 Jan 29 & Eclipse & 8.0 & 3.4 & 0.4 & 28,800 & 411.8 & 9,161.7 & 0.496%\
UT 2009 Jan 30 & Transit & 4.5 & 4.3 & 0.1 & 117,056 & 86.3 & 24,177.0 & 0.567%\
UT 2009 Feb 1 & Eclipse & 8.0 & 3.4 & 0.4 & 28,800 & 395.9 & 9,163.9 & 0.499%\
UT 2009 Feb 2 & Transit & 8.0 & 4.3 & 0.4 & 35,904 & 393.6 & 9,143.8 & 0.501%\
UT 2009 Feb 4 & Eclipse & 8.0 & 3.4 & 0.4 & 28,800 & 376.6 & 9,154.5 & 0.499%\
Observations {#obs}
============
We analyze nineteen separate observations of GJ 436, including two 3.6 transits, two 4.5 transits, four 8 transits, and eleven 8 secondary eclipses, as listed in Table \[obs\_table\]. All observations were obtained between 2007 and 2009 using the IRAC instrument [@faz04] on the *Spitzer Space Telescope* [@wern04] in subarray mode. Some of these data were previously published by other groups, including a transit and secondary eclipse observed on UT 2007 Jun 29/30 [@deming07; @gillon07a; @demory07] and six transits observed in 2009 [@beaulieu10]. Because the two shorter wavelength IRAC channels (3.6 and 4.5 ) use InSb detectors and the two longer wavelength channels (5.8 and 8.0 ) use Si:As detectors, each of which displays different detector effects, we describe our analysis for each type separately below.
We calculate the BJD\_UTC values at mid-exposure for each image using the DATE\_OBS keyword in the image headers and the position of *Spitzer*, which is in an earth-trailing orbit, as determined using the JPL Horizons ephemeris. Each set of 64 images obtained in subarray mode comes as a single FITS file with a time stamp corresponding to the start of the first image; we calculate the time stamps for individual images assuming uniform spacing and using the difference between the AINTBEG and ATIMEEND headers, which record the start and end of the 64-image series. We then use the routines described in @eastman10 to convert from *Spitzer* JD to BJD\_UTC. Eastman et al. further advocate a conversion from UTC to TT timing standards, which provide a more consistent treatment of leap seconds. We note that for the dates spanned by these observations the conversion from BJD\_UTC to BJD\_TT simply requires the addition of \[65.184,65.184,66.184\] s for data obtained in \[2007,2008,2009\], and we leave the dates listed in Table \[transit\_param\] in BJD\_UTC for consistency with other studies.
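The UTC$\rightarrow$TT offsets quoted above follow from TT = TAI + 32.184 s together with the cumulative leap-second count (TAI$-$UTC was 33 s throughout 2007$-$2008 and 34 s from 2009 Jan 1). A sketch of the conversion; the function name and interface are ours for illustration, not part of any *Spitzer* pipeline:

```python
# Convert BJD_UTC to BJD_TT for the epochs of these observations.
# TT = TAI + 32.184 s by definition, and TAI = UTC + (cumulative leap seconds).
TT_MINUS_TAI = 32.184                      # seconds
LEAP_SECONDS = {2007: 33, 2008: 33, 2009: 34}   # TAI - UTC for each year

def bjd_utc_to_tt(bjd_utc, year):
    """Add the UTC -> TT offset (converted to days) for the given year."""
    offset_s = TT_MINUS_TAI + LEAP_SECONDS[year]
    return bjd_utc + offset_s / 86400.0
```

This reproduces the [65.184, 65.184, 66.184] s offsets quoted in the text for 2007, 2008, and 2009.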
3.6 and 4.5 Photometry {#short_phot}
-----------------------
GJ 436 has a $K$ band magnitude of 6.07, and as a result we elect to use short 0.1 s exposures at 3.6 and 4.5 in order to ensure that we remain well below saturation. Subarray images have dimensions of $32\times32$ pixels, making it challenging to estimate the sky background independent of contamination from the wings of the star’s point spread function. We choose to exclude pixels within a radius of 12 pixels of the star’s position, as well as the 14th-17th rows, which contain a horizontal diffraction spike that extends close to the edges of the array. We also exclude the top (32nd) row of pixels, which have values that are consistently lower than those for the rest of the array. We then iteratively trim 3$\sigma$ outliers from the remaining subset of approximately six hundred pixels, create a histogram of the remaining values, and then fit a Gaussian to this histogram to determine the sky background for each image. We find that the background is $0.1-0.2$% and $0.3-0.4$% of the total flux in a 5 pixel aperture for the 3.6 and 4.5 arrays, respectively.
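The background-estimation recipe above (mask the star and the bad rows, iteratively clip 3$\sigma$ outliers, then characterize the remaining pixel distribution) can be sketched as follows. This is a simplified stand-in: we use the clipped mean in place of the Gaussian fit to the histogram, and the masked rows (0-indexed 13$-$16 and 31, i.e. the 14th$-$17th and 32nd rows) and the interface are our own choices:

```python
import numpy as np

def sky_background(image, star_xy, r_excl=12.0, rows_excl=(13, 14, 15, 16, 31)):
    """Estimate the sky level of a 32x32 subarray image.

    star_xy = (x, y) position of the star. Pixels within r_excl of the
    star, the diffraction-spike rows, and the top row are masked; the
    rest are iteratively 3-sigma clipped. The clipped mean stands in for
    the Gaussian-fit centroid of the pixel histogram used in the text."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    far = np.hypot(xx - star_xy[0], yy - star_xy[1]) > r_excl
    keep = far & ~np.isin(yy, rows_excl)
    vals = image[keep].astype(float)
    for _ in range(10):                     # iterative 3-sigma clipping
        m, s = vals.mean(), vals.std()
        good = np.abs(vals - m) < 3.0 * s
        if good.all():
            break
        vals = vals[good]
    return vals.mean()
```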
We correct for transient hot pixels by taking a 10-pixel running median of the fluxes at a given pixel position within each set of 64 images and replacing outliers greater than $4\sigma$ with the median value. We found that using a wider median filter or tighter upper limit for discriminating outliers increased the scatter in the final time series while failing to significantly reduce the number of images that are ultimately discarded. We find that approximately $0.4-0.8$% of our images have one or more pixels flagged as outliers using this filter.
Several recent papers have investigated optimal methods for estimating the position of the star on the array for *Spitzer* photometry, with the most extensive discussions appearing in @stevenson10 and @agol10. These papers conclude that flux-weighted centroiding [e.g., @knutson08; @charbonneau08] and parabola-fitting routines [e.g., @deming06; @deming07] tend to produce less than optimal results, while Gaussian fits and least asymmetry methods appear to have fewer systematic biases and a lower overall scatter. We confirm that for all three wavelengths we obtain better results (defined as a lower scatter in the final trimmed light curve after correcting for detector effects) with Gaussian fits than with flux-weighted centroiding, with a total reduction of $2-7$% in the standard deviation of the final time series binned in sets of 64 images. We obtain the best results in both the 3.6 and 4.5 bands when we first subtract the best-fit background flux from each image, correct bad pixels as described above, and then fit a two-dimensional Gaussian function to a circular region with a radius of 4 pixels centered on the position of the star. Using smaller or larger fitting regions does not significantly alter the time series but does result in a slightly higher scatter in the normalized light curve. Although error arrays are available as part of the standard *Spitzer* pipeline, we find that in this case we obtain better results using uniform error weighting for individual pixels. We use a radially symmetric Gaussian function and run our position estimation routines twice, once where the width is allowed to vary freely in the fits and a second time where we fix the width to the median value over the time series. Reducing the degrees of freedom by fixing the width produces fits that converge more consistently, with a corresponding improvement in the standard deviation of the normalized time series and fewer large outliers. 
@stevenson10 report that they obtain better position estimates when fitting Gaussians to images that have been interpolated to $5\times$ higher resolution, but we find that using interpolated images for our position fits resulted in a slight increase in the scatter in our final light curves.
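The essence of the Gaussian position fit can be illustrated with a linearized version: for a radially symmetric Gaussian PSF, the logarithm of the flux is a quadratic in $(x,y)$, so the centroid follows from a linear least-squares fit. This is a simplification of the nonlinear fits described above (no error weighting, no free-versus-fixed width machinery), with a hypothetical interface:

```python
import numpy as np

def gaussian_centroid(image, guess_xy, r_fit=4.0):
    """Centroid of a radially symmetric Gaussian PSF via a linear fit.

    log I = c0 + cx*x + cy*y + q*(x^2 + y^2)  with  q = -1/(2 w^2),
    so the center is (-cx/2q, -cy/2q). Pixels within r_fit of the
    initial guess (and with positive flux) enter the fit."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    sel = (np.hypot(xx - guess_xy[0], yy - guess_xy[1]) <= r_fit) & (image > 0)
    x = xx[sel].astype(float)
    y = yy[sel].astype(float)
    z = np.log(image[sel])
    A = np.column_stack([np.ones_like(x), x, y, x * x + y * y])
    c0, cx, cy, q = np.linalg.lstsq(A, z, rcond=None)[0]
    return -cx / (2.0 * q), -cy / (2.0 * q)
```

For a noiseless Gaussian this recovers the center essentially exactly; with noise and a real PSF the full nonlinear fit of the text is preferable.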
We perform aperture photometry on our images using the position estimates derived from our Gaussian fits; we expect that aperture photometry will produce the optimal results in light of the low background flux at these wavelengths. We use apertures with radii ranging between 2.5-7.0 pixels in half pixel steps. We find that apertures smaller than 3.5 pixels show excess noise, likely connected to position-dependent flux losses, while apertures larger than 5 pixels are more likely to include transient hot pixels and higher background levels, resulting in a higher root-mean-square variance in the final light curve. We use a 3.5 pixel aperture for our final analysis, but we find consistent results for apertures between $3.5-5.0$ pixels. We trim outliers from our final time series using a 50 point running median, where we discard outliers greater than $3\sigma$, approximately $2\%$ of the points in a typical light curve. We find that we trim fewer points when we use flux-weighted centroiding for our position estimates (typically $0.6\%$), but the uncertainties in our best-fit transit parameters are still larger than with the Gaussian fits due to the increased scatter in the final trimmed time series. We also trim the first 15 minutes in all observations except for the 4.5 transit on UT 2009 Jan 30, where we trim the first hour of data. Images taken at the start of a new observation tend to have larger pointing offsets, most likely due to the settling time of the telescope at the new pointing; we find that discarding these early data improves the quality of the fit to the subsequent points. For all visits other than the transit on UT 2009 Jan 30, we find that we achieve consistent results when we trim either the first 15, 30, or 60 minutes of data, and we therefore choose to trim the minimal 15-minute interval. 
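The running-median filter used to trim the final time series can be sketched as follows; the window size and clip level follow the text, while the implementation details are ours:

```python
import numpy as np

def trim_outliers(flux, window=50, nsigma=3.0):
    """Flag points deviating by more than nsigma from a running median.

    Returns a boolean mask of points to keep -- a simplified version of
    the 50-point median filter with 3-sigma rejection described above."""
    half = window // 2
    med = np.array([np.median(flux[max(0, i - half):i + half + 1])
                    for i in range(len(flux))])
    resid = flux - med
    return np.abs(resid) < nsigma * resid.std()
```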
For the UT 2009 Jan 30 observation we find that the data display an additional time dependence that is not well-described by the standard linear function of time in Eq. \[eq1\], but is instead better-described by a linear function of ln($dt$). This may be due to the fact that the star falls near the edge of a pixel in these observations, which could introduce additional time-dependent effects. Rather than changing the functional form used to fit these data, we instead opt to trim the first hour of observations, which removes the most steeply-varying part of the time series and leaves a trend that is well-described by the same linear function of time used in the other transit fits. We find that we obtain the same transit parameters for this visit when we either trim the first 15 minutes of data and fit with a linear function of ln($dt$) or trim the first hour of data and fit with a linear function of time, so this choice does not affect our final conclusions.
[lrrrrcrrrrr]{} UT 2007 Jun 29 & 8.0 & $0.08322\pm 0.00052$ & $0.6926\%\pm0.0087\%$ & $ 2454280.78193\pm0.00012$ & $12.5\pm10.2$\
UT 2008 Jul 14 & 8.0 & $0.08247\pm 0.00061$ & $0.6801\%\pm0.0101\%$ & $2454661.50314\pm0.00017$ & $6.2\pm14.4$\
UT 2009 Jan 9 & 3.6 & $0.08182\pm 0.00037$ & $0.6694\%\pm0.0061\%$ & $2454841.28821\pm0.00008$ & $6.9\pm6.5$\
UT 2009 Jan 17 & 4.5 & $0.08286\pm0.00047$ & $0.6865\%\pm0.0078\%$ & $2454849.21985\pm0.00012$ & $2.1\pm10.5$\
UT 2009 Jan 25 & 8.0 & $0.08224\pm0.00051$ & $0.6763\%\pm0.0084\%$ & $2454857.15155\pm0.00012$ & $3.2\pm10.1$\
UT 2009 Jan 28 & 3.6 & $0.08495\pm0.00056$ & $0.7216\%\pm0.0095\%$ & $2454859.79504\pm0.00012$ & $-31.9\pm10.3$\
UT 2009 Jan 30 & 4.5 & $0.08502\pm0.00057$ & $0.7227\%\pm0.0097\%$ & $2454862.43970\pm0.00011$ & $33.7\pm9.6$\
UT 2009 Feb 2 & 8.0 & $0.08424\pm0.00049$ & $0.7096\%\pm0.0083\%$ & $2454865.08345\pm0.00012$ & $20.8\pm10.6$\

Fluxes measured at these two wavelengths show a strong correlation with the changing position of the star on the array, at a level comparable to the depth of the secondary eclipse. This effect is due to a well-documented intra-pixel sensitivity variation [e.g., @reach05; @charbonneau05; @charbonneau08; @morales06; @knutson08], in which the sensitivity of an individual pixel differs by several percent between the center and the edge. The 3.6 array typically exhibits larger sensitivity variations than the 4.5 array, as demonstrated by the UT 2009 Jan 9 and 17 transits. The UT 2009 Jan 30 transit falls very near the edge of a pixel in the 4.5 subarray, and thus displays a sensitivity variation comparable to that of the more centrally-located 3.6 transit on UT 2009 Jan 28. We correct for these sensitivity variations by fitting a quadratic function of $x$ and $y$ position simultaneously with the transit light curve:

$$\begin{aligned}
\label{eq1}
f & = & f_0*(a_1+a_2(x-x_0)+a_3(x-x_0)^2 \nonumber \\
& & +a_4(y-y_0)+a_5(y-y_0)^2+a_6t)\end{aligned}$$
where $f_0$ is the original flux from the star, $f$ is the measured flux, $x$ and $y$ denote the location of the star on the array, $x_0$ and $y_0$ are the median $x$ and $y$ positions, $t$ is the time from the predicted eclipse center, and $a_1-a_6$ are free parameters in the fit. In both bandpasses we find that quadratic terms in both $x$ and $y$ are necessary to achieve a satisfactory fit to the observed variations, although the $\chi^2$ value for the fits is not improved by the addition of an $xy$ term, or higher-order terms in $x$ and $y$. We find that the fits are also improved by the addition of a linear term in time, consistent with previous observations at these wavelengths [e.g., @knutson09; @todorov10; @fressin10; @odonovan10; @deming11].
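As a concrete illustration of Eq. \[eq1\]: the intra-pixel model is linear in the coefficients $a_1$$-$$a_6$, so for out-of-transit data (where $f_0$ is constant and absorbed into the coefficients) it can be fit by ordinary linear least squares. A sketch with an interface of our own devising:

```python
import numpy as np

def intrapixel_fit(x, y, t, flux):
    """Linear least-squares fit of the quadratic intra-pixel model.

    Basis: [1, dx, dx^2, dy, dy^2, t] with dx, dy measured from the
    median position, mirroring the a_1..a_6 terms of Eq. (1).
    Returns the coefficients and the model flux."""
    dx, dy = x - np.median(x), y - np.median(y)
    A = np.column_stack([np.ones_like(dx), dx, dx ** 2, dy, dy ** 2, t])
    coeffs, *_ = np.linalg.lstsq(A, flux, rcond=None)
    return coeffs, A @ coeffs
```

In the actual analysis the coefficients are fit simultaneously with the transit parameters rather than in this two-step fashion; dividing the measured flux by the model is the corresponding decorrelation step.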
8.0 $\mu$m Photometry {#long\_phot}
---------------
We follow the same methods described in §\[short\_phot\] to estimate the sky background in the 8.0 $\mu$m images, except in this case we include pixels at distances of more than ten pixels from the position of the star in our estimate instead of the previous twelve-pixel radius. The background in these images ranges between $2.6-7.7$% of the total flux in a five pixel aperture, and we find that including pixels between $10-12$ pixels from the star’s position improves the accuracy of our background estimates without adding significant contamination from the star’s point spread function. In @agol10 we find that using a slightly larger 4.5 pixel aperture instead of 3.5 pixels minimizes correlated noise in 8 $\mu$m *Spitzer* observations (albeit at the cost of slightly higher Gaussian noise), and we therefore elect to use a 4.5 pixel aperture for our 8 $\mu$m data. Our choice of aperture has a negligible effect on the best-fit eclipse depths and times, as we find consistent results for apertures between $3.5-5.0$ pixels.
*Spitzer* fluxes for stars observed using the IRAC 8.0 $\mu$m array, the IRS 16 $\mu$m array, and the MIPS 24 $\mu$m array do not appear to have a significant position dependence, but do display a ramp-like behavior where higher-illumination pixels converge to a constant value within the first hour of observations while lower-illumination pixels show a continually increasing linear trend on the time scales of interest here. This effect is believed to be due to charge-trapping in the array, and is discussed in detail in @knutson07 [@knutson09c] and @agol10, among others. We mitigate this effect in our data by staring either at a bright star (HD 107158 in the case of the 8 $\mu$m secondary eclipse observations between UT 2008 Jun 11 and Jun 19) or an HII region with bright diffuse emission at 8 $\mu$m (LBN 543 for the 8 $\mu$m transit observations, and G111.612+0.374 for the 8 $\mu$m secondary eclipse observations between UT 2009 Jan 27 and Feb 4) for approximately 30 minutes prior to the start of our observations. The 2007 observations were obtained prior to the development of this preflash technique, but as discussed in @deming07 the transit observation happened to follow an observation of another bright object and thus was effectively preflashed in the same manner as the 2008 and 2009 data. The secondary eclipse observed in 2007 was not preflashed, and thus displays a much steeper ramp than the other observations. We examine the distribution of ramp slopes in Fig. \[eclipse\_phot\_raw\] and find no correlation between the relative offsets in the positions of GJ 436 and the preflash star and the slope of the subsequent ramp; the preflash star is offset by \[0.4, 0.2, 0.3, 0.1\] pixels in the UT 2008 Jun 11, 13, 16, and 19 observations, respectively, but the shallowest ramp occurs in the Jun 13 observation while the smallest offset occurs in the Jun 19 observation.
We speculate that the UT 2008 Jun 13 observation may have been effectively preflashed by the preceding science observations in the same way as the UT 2007 Jun 29 transit observation. We find that all forms of preflash reduce the slope of the subsequent ramp as compared to the non-preflashed secondary eclipse on UT 2007 June 30, but the HII regions consistently produce a larger reduction in the ramp slope than preflashes using a bright star.
We can describe the ramps in our 8 $\mu$m science data with the following functional form:
$$\begin{aligned}
\label{eq2}
f=c_1\left(1-c_2\exp(-\delta t/c_3)-c_4\exp(-\delta t/c_5)\right)\end{aligned}$$
where $f$ is the measured flux, $\delta t$ is the elapsed time from the start of the observations, and $c_1-c_5$ are free parameters in the fit. Previous studies have elected to use either a single exponential [e.g., @harrington07], a linear + log function of $\delta t$ [e.g., @deming07], or a quadratic function in $\log(\delta t)$ [e.g., @charbonneau08; @knutson08; @desert09]. However, in @agol10 we find that the functional forms involving $\log(\delta t)$ produce eclipse depths that are correlated with the slope of the observed ramp function, while the single exponential does not provide a good fit to data with a steep ramp. We conclude that a double exponential function has enough degrees of freedom to fit a range of ramp profiles, while still avoiding correlations between the measured eclipse depths and the slope of the detector ramp. Although we require a double exponential function in order to fit the steeper, non-preflashed 2007 secondary eclipse observation in this study, we obtain comparable results with a single exponential term for our preflashed 8 $\mu$m data. We therefore elect to use this simpler single exponential in our subsequent analysis for all 8 $\mu$m visits except the 2007 secondary eclipse.
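A minimal sketch of Eq. \[eq2\], in arbitrary flux units, with the single-exponential special case recovered by the default arguments:

```python
import math

def ramp_model(dt, c1, c2, c3, c4=0.0, c5=1.0):
    """Detector ramp of Eq. (2).  With the default c4 = 0 this reduces to
    the single exponential used for the preflashed 8 um visits; dt is the
    time elapsed since the start of the observations."""
    return c1 * (1.0 - c2 * math.exp(-dt / c3) - c4 * math.exp(-dt / c5))
```

The asymptotic flux level is $c_1$, reached once $\delta t$ greatly exceeds the decay times $c_3$ and $c_5$.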
For our fits to phase curve data obtained on UT 2008 July 12-15, we select a four hour subset of data centered on the position of the transit or eclipse and use that in our fits. The first eclipse takes place at the start of the observations, which exhibit a residual ramp, and we therefore fit this light curve with the same single exponential as our other data. We use a linear function of time to fit the out-of-eclipse trends in the transit, which occurs in the middle of the observations, as well as the secondary eclipse towards the end of the observations. We find that the scatter in the central region of the time series near the transit, when the star is closest to the edge of the pixel, is higher than for either secondary eclipse or for the other 8 $\mu$m transit observations. @stevenson10 found that the fluxes measured with the 5.8 $\mu$m *Spitzer* array sometimes display a weak dependence on the position of the star, which may be due to either flat-fielding errors or intrapixel sensitivity variations similar to those observed in the 3.6 and 4.5 $\mu$m arrays, although no such effect has been definitively detected in the 8 $\mu$m array to date. We test for the presence of position-dependent flux variations in our data by adding linear functions of $x$ and $y$ position to each of our 8 $\mu$m transit fits, and find that the $\chi^2$ value of the resulting fits is effectively unchanged in all cases except for the UT 2008 July 14 transit, where it decreased from 37,186.6 to 37,177.7 for 33,636 points and six degrees of freedom. Using the Bayesian Information Criterion described in @stevenson10, we conclude that this reduction in $\chi^2$ is not significant, and we exclude these position-dependent terms in our subsequent analysis of the 8 $\mu$m data.
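The Bayesian Information Criterion test amounts to checking whether the drop in $\chi^2$ exceeds the complexity penalty $\Delta k\,\ln N$; a minimal sketch using the numbers quoted above for the UT 2008 July 14 transit:

```python
import math

def delta_bic(chi2_simple, chi2_complex, extra_params, n_points):
    """BIC(complex) - BIC(simple); a positive value means the extra
    parameters are not justified by the improvement in chi^2."""
    return (chi2_complex - chi2_simple) + extra_params * math.log(n_points)

# UT 2008 July 14: chi^2 drops from 37,186.6 to 37,177.7
# for 33,636 points and six additional degrees of freedom
d = delta_bic(37186.6, 37177.7, 6, 33636)
# d > 0, so the position-dependent terms are excluded
```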
| UT Date | Eclipse Depth (%) | $T_c$ (BJD) | O$-$C (min) |
|---|---|---|---|
| UT 2007 Jun 30 | $0.0553\pm0.0083$ | $2454282.3329\pm0.0016$ | $-0.2\pm2.3$ |
| UT 2008 Jun 11 | $0.0506\pm0.0110$ | $2454628.6850\pm0.0017$ | $2.0\pm2.4$ |
| UT 2008 Jun 13 | $0.0395\pm0.0097$ | $2454631.3281\pm0.0021$ | $0.9\pm3.0$ |
| UT 2008 Jun 16 | $0.0497\pm0.0087$ | $2454633.9716\pm0.0013$ | $0.3\pm1.9$ |
| UT 2008 Jun 19 | $0.0368\pm0.0089$ | $2454636.6162\pm0.0021$ | $1.2\pm3.0$ |
| UT 2008 Jul 12 | $0.0523\pm0.0090$ | $2454660.4112\pm0.0019$ | $1.2\pm2.8$ |
| UT 2008 Jul 15 | $0.0422\pm0.0078$ | $2454663.0537\pm0.0040$ | $-0.9\pm5.8$ |
| UT 2009 Jan 27 | $0.0386\pm0.0087$ | $2454858.7047\pm0.0026$ | $2.8\pm3.8$ |
| UT 2009 Jan 29 | $0.0491\pm0.0088$ | $2454861.3460\pm0.0015$ | $-1.0\pm2.2$ |
| UT 2009 Feb 1 | $0.0398\pm0.0086$ | $2454863.9889\pm0.0017$ | $-2.4\pm2.4$ |
| UT 2009 Feb 4 | $0.0441\pm0.0087$ | $2454866.6355\pm0.0023$ | $1.4\pm3.3$ |
Transit and Eclipse Fits {#transits}
------------------------
We carry out simultaneous fits to determine the best-fit transit functions and detector corrections using a non-linear Levenberg-Marquardt minimization routine [@markwardt09]. We calculate our model light curves using the equations from @mand02, assuming a longitude of pericenter equal to $334\degr~\pm10\degr$ (updated using the complete set of published and unpublished radial velocity data; A. Howard, personal communication, 2010). The orbital eccentricity determined from the updated radial velocity data is $0.145\pm0.017$, but we choose to set the orbital eccentricity equal to 0.152 in our fits, which we calculate using the above longitude of pericenter and the published value of $e\cos(\omega)=0.1368\pm0.0004$ from @stevenson10. We find that the uncertainty in the calculated eccentricity is dominated by the uncertainty in $\omega$, but this has a minimal impact on our transit fits. Our best-fit parameters change by less than $1\sigma$ for eccentricity values between 0.142 and 0.169, corresponding to $\pm10\degr$ in $\omega$; the best-fit inclination is most sensitive to the assumed eccentricity ($0.9\sigma$ change), $a/R_{\star}$ is somewhat sensitive ($0.5\sigma$ change), and the best-fit radius ratios and transit times for individual fits are minimally sensitive ($<0.3\sigma$ change). Our nominal values for the eccentricity and longitude of pericenter result in a transit length of 60.9 minutes, 0.5 minutes longer than the zero-eccentricity case. Using the same parameters for the secondary eclipse, which occurs shortly before periastron passage, produces a length of 62.6 minutes.
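The adopted eccentricity follows directly from the published $e\cos(\omega)$ and the radial-velocity longitude of pericenter; a quick numerical check:

```python
import math

ecosw = 0.1368                  # e*cos(omega) from Stevenson et al. (2010)
omega = math.radians(334.0)     # longitude of pericenter from the RV fits
e = ecosw / math.cos(omega)
# -> e ~ 0.152, the value fixed in the transit fits
```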
| Model | $\lambda$ ($\mu$m) | $c_1$ | $c_2$ | $c_3$ | $c_4$ |
|---|---|---|---|---|---|
| `ATLAS` | 3.6 | 1.122 | $-1.852$ | 1.675 | $-0.582$ |
| `ATLAS` | 4.5 | 0.749 | $-0.917$ | 0.718 | $-0.230$ |
| `ATLAS` | 5.8 | 0.815 | $-1.147$ | 0.947 | $-0.310$ |
| `ATLAS` | 8.0 | 0.770 | $-1.141$ | 0.942 | $-0.304$ |
| `PHOENIX` | 3.6 | 1.284 | $-1.751$ | 1.433 | $-0.470$ |
| `PHOENIX` | 4.5 | 1.203 | $-1.796$ | 1.512 | $-0.500$ |
| `PHOENIX` | 5.8 | 0.918 | $-1.264$ | 1.064 | $-0.358$ |
| `PHOENIX` | 8.0 | 0.619 | $-0.762$ | 0.645 | $-0.220$ |
We fit the eight transits simultaneously and assume that the inclination and the ratio of the orbital semi-major axis to the stellar radius $a/R_{\star}$ are the same for all transits, but allow the planet-star radius ratio $R_p/R_{\star}$ and transit times to vary individually. Figure \[transit\_phot\_raw\] shows the final binned data from these fits with the best fit normalizations for the detector effects and transit light curves in each channel overplotted, and Figure \[transit\_phot\_norm\] shows the binned data once these trends are removed, with best-fit transit curves overplotted. Best-fit parameters are given in Tables \[transit\_param\] and \[global\_param\].
### A Comparison of `ATLAS` and `PHOENIX` Limb-Darkening Models
We derive limb-darkening coefficients for the star using a Kurucz `ATLAS` stellar atmosphere model with $T_\mathrm{eff}=3500$ K, $\log(g)=5.0$, and \[Fe/H\]$=0$ [@kurucz79; @kurucz94; @kurucz05], where we take the flux-weighted average of the intensity profile in each IRAC band and then fit this profile with four-parameter nonlinear limb-darkening coefficients [@claret00]. We also derive limb-darkening coefficients for a `PHOENIX` atmosphere model [@hauschildt99] with the same parameters, and list both sets of coefficients in Table \[limb\_dk\]. We trim the maximum stellar radius in the `PHOENIX` models, which is set to an optical depth of $10^{-9}$, to match the level of the $\tau=1$ surface in each *Spitzer* band. We estimate the location of this surface by determining when the intensity relative to that at the center of the star first drops below $e^{-1}$, and find that the new stellar radius is $0.09-0.10\%$ smaller than the old $\tau=10^{-9}$ value. We find that we can achieve satisfactory four-parameter nonlinear fits to the `PHOENIX` intensity profiles only when we exclude points where $\mu<0.025$, whereas the `ATLAS` models are well-described by fits including this region.
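For reference, the four-parameter nonlinear law of @claret00 has the form $I(\mu)/I(1)=1-\sum_{k=1}^{4}c_k(1-\mu^{k/2})$; a minimal evaluation (the coefficient ordering is assumed) using the `ATLAS` 3.6 $\mu$m coefficients from Table \[limb\_dk\]:

```python
def nonlinear_ld(mu, c):
    """Claret (2000) four-parameter nonlinear limb-darkening law:
    I(mu)/I(1) = 1 - sum_k c_k * (1 - mu**(k/2)), k = 1..4."""
    return 1.0 - sum(ck * (1.0 - mu ** (0.5 * k))
                     for k, ck in enumerate(c, start=1))

# ATLAS 3.6 um coefficients from Table [limb_dk]
c36 = (1.122, -1.852, 1.675, -0.582)
# nonlinear_ld(1.0, c36) = 1 at disk center; the intensity falls toward
# the limb (mu -> 0)
```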
As illustrated in Fig. \[limbdk\], the `PHOENIX` model predicts stronger limb darkening in all bands as compared to the `ATLAS` model, with the largest differences in the 3.6 $\mu$m band. When we compare our best-fit transit parameters using either the `ATLAS` or `PHOENIX` limb-darkening coefficients, we find that the best-fit planet-star radius ratios are $0.8-1.2\sigma$ ($0.5-0.6\%$) larger in the 3.6 $\mu$m band, $0.06-0.07\sigma$ ($0.04-0.05\%$) smaller in the 4.5 $\mu$m band, and $0.3-0.4\sigma$ ($0.2-0.3\%$) larger in the 8.0 $\mu$m band for the `PHOENIX` models. The best-fit values for the inclination and $a/R_{\star}$ increase by $1.0\sigma$ ($0.04\%$) and $0.9\sigma$ ($0.6\%$), respectively, for the `PHOENIX` model fits; this is a product of the stronger limb-darkening profile, as GJ 436b’s relatively high impact parameter creates a partial degeneracy between the limb-darkening profile and the other transit parameters.
We examine the relative importance of the assumed stellar parameters by comparing two `PHOENIX` models with effective temperatures of 3400 K and 3600 K. We find that for this 200 K range in effective temperature, the best-fit planet-star radius ratios change by $0.11-0.16\sigma$ at 3.6 $\mu$m, $0.07-0.09\sigma$ at 4.5 $\mu$m, and $0.10-0.12\sigma$ at 8.0 $\mu$m. The changes in the best-fit values for the inclination and $a/R_{\star}$ are similarly small, $0.04\sigma$ and $0.4\sigma$, respectively. We therefore conclude that changes in the stellar effective temperature of less than 200 K are negligible for the purposes of our transit fits. We also compute `PHOENIX` model intensity profiles for $0.0<$\[Fe/H\]$<+0.3$, but we find that the differences between models are much smaller than for our 200 K change in the effective temperature.
As there are currently few observational constraints on limb-darkening profiles for main-sequence stars [e.g., @claret08; @claret09], and even fewer constraints for M stars in the mid-infrared, we also consider simultaneous transit fits in which we allow quadratic limb-darkening coefficients in each band to vary as free parameters. As a result of the planet’s high impact parameter, our observations do not directly constrain the limb-darkened intensity for values of $\theta \le 50\degr$, corresponding to $\mu \ge 0.64$, as the planet does not cross this region on the star. However, we can infer the limb-darkening profile in this region if we assume a simple quadratic limb-darkening law. We require the intensity profile computed from these coefficients to be always less than or equal to one (i.e., no limb brightening), and we require the relative intensity at the edge of the star to be greater than or equal to the equivalent $K$ band limb-darkening from @claret00. The dotted lines in Fig. \[limbdk\] show the resulting best-fit limb-darkening profiles in each band; these profiles show less contrast than either model, but the `ATLAS` models appear to provide the closest match.
This agreement is reflected in the $\chi^2$ values for the simultaneous transit fits; the total $\chi^2$ for the best-fit quadratic coefficients is 536,729.25, for the 3500 K `ATLAS` limb-darkening coefficients it is 536,733.98, and for the \[3400, 3500, 3600\] K `PHOENIX` models it is \[536,740.69, 536,739.75, 536,738.81\], for 536,798 points and either 53 (with fixed limb-darkening) or 59 (with freely varying quadratic limb-darkening coefficients) free parameters. We use the `ATLAS` limb-darkening coefficients in our subsequent analysis, as they produce a marginally better agreement with the best fit profiles than the `PHOENIX` models. Although the $\chi^2$ value for the best-fit quadratic limb-darkening coefficients is formally smaller than that of either model, this fit also contains six additional degrees of freedom, making the difference negligible.
As an additional test, we also repeat our fits with the limb-darkening coefficients fixed to zero in all bands. This produces planet-star radius ratios that are $1.6-2.4\sigma$ ($1.1\%$) smaller in the 3.6 $\mu$m band, $1.7-2.0\sigma$ ($1.1\%$) smaller in the 4.5 $\mu$m band, and $0.9-1.1\sigma$ ($0.6-0.7\%$) smaller in the 8.0 $\mu$m band. The best-fit inclination and $a/R_{\star}$ are $2.7\sigma$ ($0.1\%$) and $2.0\sigma$ ($1.5\%$) smaller, respectively. The $\chi^2$ value for this fit is 536,738.04, equivalent to the `PHOENIX` model fits and marginally worse than the `ATLAS` models or the fitted limb-darkening coefficients. This fit confirms the pattern suggested earlier, namely that stronger limb darkening leads to larger planet-star radius ratios and larger values for the inclination and $a/R_{\star}$. If we consider the constraints imposed by the transit fits, stronger limb darkening means that for a grazing transit the planet must occult a relatively larger fraction of the star in order to produce the same apparent transit depth. This effect will be even larger in visible light, and we conclude that accurate limb-darkening coefficients are essential when calculating the planet-star radius ratio and corresponding transmission spectrum for near-grazing transits.
It is difficult to diagnose the origin of the disagreement between `ATLAS` and `PHOENIX` stellar atmosphere models for GJ 436; @kurucz05 note that the `ATLAS` models should not be used for stellar effective temperatures below 3500 K, as they do not include important low-temperature opacity sources such as TiO and VO. However, these molecules primarily affect the star’s visible and near-infrared spectra, and at 3500 K they are still relatively weak [@cushing05]. Both disk-integrated and intensity spectra for the `ATLAS` models in this temperature range are featureless longward of 2.4 $\mu$m, with the exception of the CO band between 4.3$-$5.0 $\mu$m, whereas `PHOENIX` spectra also show clear molecular band structures, mainly due to H$_2$O and OH, between 2.5$-$3.6 $\mu$m and 6.5$-$8.0 $\mu$m, with corresponding increases in the amount of limb darkening in these bands. The presence of the CO band in both model sets would appear to explain the relatively good agreement in limb-darkening profiles for the 4.5 $\mu$m *Spitzer* bandpass, but we were unable to determine the reason behind the missing mid-IR H$_2$O absorption features in the `ATLAS` models, which incorporate the strongest water lines [@cdrom26] from the Ames list of @AmesH2O. It is possible that the spherical geometry used in the `PHOENIX` models (`ATLAS` models use a plane-parallel geometry) may also affect the resulting limb-darkening profiles [@orosz00; @claret03], but we find that `PHOENIX` models computed with a plane-parallel geometry show nearly identical limb darkening, with the exception of an exponential decline in the optically thin limb. We conclude that the differing opacities in the 3.6 and 8.0 $\mu$m bands appear to be the most likely explanation for the disagreement between the limb-darkening profiles at these wavelengths.
In any case, the change in the $\chi^2$ value indicates that the differences between the two models are not statistically significant for this data set; near-IR grism spectroscopy of transits of GJ 436b, such as the observations obtained by @pont08, might help to better distinguish between these models.
| Parameter | Value |
|---|---|
| *Transit Parameters* | |
| $i$ (deg) | $86.699^{+0.034}_{-0.030}$ |
| $a/R_{\star}$ | $14.138^{+0.093}_{-0.104}$ |
| $R_p/R_{\star}$ | $0.08311\pm0.00026$ |
| Duration $T_{14}$ (d) | $0.04227\pm0.00016$ |
| $T_{12}$ ($\approx T_{34}$) (d) | $0.01044\pm0.00014$ |
| $b$ | $0.8521\pm0.0021$ |
| $a$ (A.U.) | $0.0287\pm0.0003$ |
| $R_{\star}$ ($R_\odot$) | $0.437\pm0.005$ |
| $R_p$ ($R_\Earth$) | $3.96\pm0.05$ |
| $T_c$ (BJD) | $2454865.083208\pm0.000042$ |
| $P$ (d) | $2.6438979\pm0.0000003$ |
| *Secondary Eclipse Parameters* | |
| 8 $\mu$m depth | $0.0452\%\pm0.0027\%$ |
| $T_{\mbox{bright}}$ | $740\pm16$ K |
| Duration $T_{14}$ (d) | $0.04347$ |
| $T_{12}$ ($\approx T_{34}$) (d) | $0.00700$ |
| Orbital phase | $0.58672\pm0.00017$ |
| $e\cos(\omega)$ | $0.13775\pm0.00027$ |
| $T_c(0)$ (BJD) | $2454866.63444\pm0.00082$ |
| $P$ (d) | $2.6438944\pm0.0000071$ |
### Error Analysis
We calculate uncertainties for our best-fit transit parameters using a Markov Chain Monte Carlo (MCMC) fit [see, for example, @ford05; @winn07b] with a total of $6\times 10^6$ steps, fourteen independent chains, and 53 free parameters. Free parameters in the fits include: $a/R_{\star}$, $i$, eight individual estimates of $R_P/R_{\star}$, eight transit times, eight constants, a linear function of time and linear and quadratic terms in $x$ and $y$ for each of the 3.6 and 4.5 $\mu$m transits (20 variables total), the amplitude $c_2$ and decay time $c_3$ from Eq. \[eq2\] for the exponential fits to three 8.0 $\mu$m transits (6 variables total), and a linear function of time for the other 8.0 $\mu$m visit. We assume a constant error for the points in each individual transit light curve, defined as the uncertainty needed to produce a reduced $\chi^2$ equal to one for the best-fit transit solution.
We initialize each chain at a position determined by randomly perturbing the best-fit parameters from our Levenberg-Marquardt minimization. After running the chain, we search for the point where the $\chi^2$ value first falls below the median of all the $\chi^2$ values in the chain (i.e., where the code has first found the optimal fit), and discard all steps up to that point. We calculate the uncertainty in each parameter as the symmetric range about the median containing 68% of the points in the distribution, except for the inclination and $a/R_{\star}$, which we allow to have asymmetric error bars spanning 34% of the points above and below the median, respectively. The distribution of values is very close to symmetric for all other parameters, and there do not appear to be any strong correlations between variables. As a check we also carry out a residual permutation error analysis [@gillon07b; @winn08], which is sensitive to correlated noise in the light curve, on each individual transit. At the start of each new permutation, we randomly draw values for the inclination and $a/R_{\star}$ from the simultaneous MCMC distribution and then fit for the corresponding best-fit values for the transit time and $R_p/R_{\star}$ in that step. This ensures that our resulting error distributions for individual transit times and $R_p/R_{\star}$ values also take into account the uncertainties in the best-fit values for the inclination and $a/R_{\star}$. In each case where both a MCMC and a residual permutation uncertainty are available for a given parameter we use the higher of the two values. We find that the MCMC fits generally produce larger uncertainties for the 8 $\mu$m observations, whereas for the 3.6 and 4.5 $\mu$m data sets, which have higher levels of correlated noise, the residual permutation uncertainties are typically 50% larger than the MCMC errors.
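The burn-in trimming and the symmetric interval estimate described above can be sketched as follows (a simplified illustration, not the production code):

```python
import numpy as np

def discard_burn_in(chain, chi2):
    """Drop all steps before chi^2 first falls below its median value."""
    start = int(np.argmax(np.asarray(chi2) < np.median(chi2)))
    return chain[start:]

def symmetric_interval(samples, frac=0.68):
    """Median and the half-width of the symmetric range about the median
    that contains `frac` of the samples."""
    med = np.median(samples)
    return med, np.percentile(np.abs(samples - med), 100.0 * frac)

# For a unit Gaussian chain the half-width approaches ~0.99 (the 68th
# percentile of |z|), close to the usual 1-sigma error bar.
```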
### Secondary Eclipse Fits
We fit the secondary eclipses individually using the best-fit values for inclination and $a/R_{\star}$ from our transit fits but allowing the eclipse depths and times to vary freely. Figure \[eclipse\_phot\_raw\] shows the final binned data from these fits with the best fit normalizations for the detector ramp in each channel overplotted, and Figure \[eclipse\_phot\_norm\] shows the binned data once these trends are removed, with best-fit eclipse curves overplotted. Best-fit parameters for individual eclipses are given in Table \[eclipse\_param\], and the error-weighted average (i.e., weights equal to $1/\sigma^2$) of these eclipse depths is reported in Table \[global\_param\]. We find that fixing the time of eclipse to a constant value, defined here as the best-fit orbital phase, does not significantly change our best-fit eclipse depths, nor does it reduce the uncertainties in our measurement of those depths. We calculate the uncertainties on individual eclipses using both a MCMC analysis and a residual permutation error analysis, again taking the higher of the two values as the final uncertainty for each parameter.
Results
=======
Orbital Ephemeris and Limits on Timing Variations {#timing}
-------------------------------------------------
We fit the transit times given in Table \[transit\_param\], together with the transit times published in @pont08a [@bean08; @coughlin08; @alonso08; @shporer09; @caceres09; @ballard10a], with the following equation,
$$\begin{aligned}
\label{eq3}
T_c(n)= T_c(0)+n\times P\end{aligned}$$
where $T_c$ is the predicted transit time as a function of the number of transits elapsed since $T_c(0)$ and $P$ is the orbital period. We find that $T_c=2454865.083208\pm0.000042$ BJD and $P=2.6438979\pm0.0000003$ days. As demonstrated by Fig. \[transit\_o\_c\], the 34 published transit times appear to be markedly inconsistent with a constant orbital period, with the most statistically significant outliers (6.2 and 7.1$\sigma$, respectively) occurring during the sequence of eight transits observed by the EPOXI mission between UT 2008 May 5-29 [@ballard10a]. The most significant deviations in the *Spitzer* transit data presented here occur during the last three visits (UT 2009 Jan 28 - Feb 2), and range between -3.1 and +3.5$\sigma$ in significance. Given the size of these discrepancies, it is perhaps not surprising that the reduced $\chi^2$ value for the linear fit to Eq. \[eq3\] is 6.8 (total $\chi^2$ of 216.4 for 34 points and two free parameters). It is unlikely that the observed deviations could be explained by perturbations from a previously unknown second planet in the system, as the measured transit times shift by as much as several minutes on time scales of only a few days (i.e., a single planet orbit). As we discuss in more detail in §\[star\_spots\], we believe the presence of occulted star spots in a subset of the transit light curves is the most likely explanation for the observed deviations.
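The fit of Eq. \[eq3\] is a two-parameter weighted least-squares problem; a minimal sketch with synthetic inputs (the example numbers are illustrative, not the measured times):

```python
import numpy as np

def fit_ephemeris(epoch, tc, sigma):
    """Weighted least-squares fit of T_c(n) = T_c(0) + n * P,
    returning (T_c(0), P)."""
    epoch = np.asarray(epoch, dtype=float)
    tc = np.asarray(tc, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    A = np.vstack([np.ones_like(epoch), epoch]).T
    t0, p = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * tc))
    return t0, p

# O - C residuals (in days) then follow as tc - (t0 + epoch * p).
```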
We carry out a similar fit to the secondary eclipse times given in Table \[eclipse\_param\], along with the additional secondary eclipse times reported in @stevenson10, and find that $T_c(0)= 2454866.63444\pm0.00082$ BJD and $P=2.6438944\pm0.0000071$ days. This period is consistent with the best-fit transit period to better than $1\sigma$, and we therefore conclude that there is no evidence for orbital precession in this system. We also see no evidence for statistically significant variations in the secondary eclipse times (see Fig. \[eclipse\_o\_c\]), as would be expected if the shifted transit times were due to occulted spots, but our measurements are not precise enough to rule out timing variations of the same magnitude as those observed in the transit data. If we fix the orbital period to the value from the transit fits and subtract the 28 s light travel time delay for this system [@loeb05], we find that the secondary eclipses occur at an orbital phase of $0.58672\pm0.00017$, consistent with the best-fit phase from @stevenson10.
We can use the offset in the best-fit secondary eclipse time to calculate a new estimate for $e\cos(\omega)$. We find that the secondary eclipse occurs $330.18\pm0.67$ minutes later on average than the predicted time for a circular orbit, including the correction for the light travel time. We can convert this to $e\cos(\omega)$ using the expression reported in Eq. 19 of @pal10. Note that this expression is more accurate than the commonly used approximation of $e\cos(\omega)\approx \frac{\pi \delta t}{2P}$ [e.g., @charbonneau05; @deming05], where $\delta t$ is the delay in the measured secondary eclipse time and $P$ is the planet’s orbital period. We find that using the less accurate approximation gives $e\cos(\omega)=0.13622\pm0.00026$, while the equation from Pál et al. yields $e\cos(\omega)=0.13754\pm0.00027$, a $4\sigma$ difference in this case [see also, @sterne40; @dekort54]. If we take the best-fit longitude of pericenter from the radial velocity fits, $334\degr \pm 10\degr$, we find an orbital eccentricity equal to $0.153\pm0.014$. This is consistent with the current best-fit orbital eccentricity from radial velocity data alone, $e=0.145\pm0.017$ (A. Howard, personal communication, 2010).
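The size of the bias introduced by the simple approximation is easy to verify numerically (the exact expression of @pal10 is not reproduced here):

```python
import math

dt_days = 330.18 / (24.0 * 60.0)  # measured eclipse delay, light-travel corrected
P = 2.6438979                     # orbital period in days
ecosw_approx = math.pi * dt_days / (2.0 * P)
# -> ~0.13622, about 4 sigma below the 0.13754 obtained from the exact
#    expression of Pal et al. (2010)
```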
| UT Date | $\lambda$ ($\mu$m) | $i$ (deg) | $a/R_{\star}$ |
|---|---|---|---|
| UT 2007 Jun 29 | 8.0 | $86.68\pm0.12$ | $14.11\pm0.35$ |
| UT 2008 Jul 14 | 8.0 | $86.54\pm0.12$ | $13.67\pm0.36$ |
| UT 2009 Jan 9 | 3.6 | $86.76\pm0.07$ | $14.40\pm0.22$ |
| UT 2009 Jan 17 | 4.5 | $86.85\pm0.10$ | $14.60\pm0.32$ |
| UT 2009 Jan 25 | 8.0 | $86.70\pm0.14$ | $14.19\pm0.43$ |
| UT 2009 Jan 28 | 3.6 | $86.67\pm0.07$ | $13.96\pm0.20$ |
| UT 2009 Jan 30 | 4.5 | $86.58\pm0.10$ | $13.76\pm0.28$ |
| UT 2009 Feb 2 | 8.0 | $86.80\pm0.14$ | $14.49\pm0.44$ |
System Parameters from Transit Fits
-----------------------------------
In this work we examine two transits obtained at 3.6 $\mu$m, two transits at 4.5 $\mu$m, and four transits at 8.0 $\mu$m. We carry out two sets of transit fits, one where the ratio of the orbital semi-major axis to the stellar radius $a/R_{\star}$ and the orbital inclination $i$ are allowed to vary freely, and the other where they have a single common value for all visits. In all cases we allow the planet-star radius ratio $R_p/R_{\star}$ and best-fit transit times to vary independently for each visit. In fits where $a/R_{\star}$ and $i$ are allowed to vary individually we find no evidence for statistically significant variations in either of these parameters (see Table \[free\_transit\_fits\]), and we therefore proceed assuming that these parameters have a single common value in our subsequent analysis. Our best-fit values for $i$, $a/R_{\star}$, and $R_p/R_{\star}$ are consistent with those reported by @ballard10a to better than $1\sigma$, and the impact parameter $b$ and transit duration $T=T_{14}-T_{12}=0.0318\pm0.0007$ days that we derive from our fits are similarly consistent with the values reported by @pont08.
Although the best-fit orbital inclination and $a/R_{\star}$ appear to be consistent with a constant value over the approximately two year period spanned by our observations, we do see evidence for statistically significant differences in the transit depths *within the same Spitzer bandpass* (see Fig. \[depth\_comparison\]). We would expect to see the transit depth vary with wavelength due to absorption from the planet’s atmosphere, but this signal should remain constant from epoch to epoch for observations in the same bandpass. If we compare individual visits in a given bandpass, we find that the two 3.6 $\mu$m radius ratios, measured on UT 2009 Jan 9 and 28, are inconsistent at the $4.7\sigma$ level. The two 4.5 $\mu$m radius ratios, measured on UT 2009 Jan 17 and 30, differ by $2.9\sigma$. The four 8 $\mu$m transits, measured on UT 2007 Jun 29, UT 2008 Jul 14, UT 2009 Jan 25, and UT 2009 Feb 2, differ from the error-weighted average by $0.2\sigma$, $1.0\sigma$, $1.5\sigma$, and $2.0\sigma$, respectively. These offsets are still present in the fits where the inclination and $a/R_{\star}$ are allowed to vary individually, indicating that the discrepancy cannot be due to a change in these two parameters.
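The quoted significances follow from combining the individual uncertainties in quadrature; for example, using the radius ratios from Table \[transit\_param\]:

```python
def n_sigma(r1, s1, r2, s2):
    """Significance of the difference between two independent measurements,
    in units of the quadrature-combined uncertainty."""
    return abs(r1 - r2) / (s1 ** 2 + s2 ** 2) ** 0.5

# 3.6 um transits, UT 2009 Jan 9 vs Jan 28: ~4.7 sigma
sig_36 = n_sigma(0.08182, 0.00037, 0.08495, 0.00056)
# 4.5 um transits, UT 2009 Jan 17 vs Jan 30: ~2.9 sigma
sig_45 = n_sigma(0.08286, 0.00047, 0.08502, 0.00057)
```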
Discussion
==========
Transit Depth Variations
------------------------
In the sections below we consider three possible explanations for the observed depth variations: first, that the effective radius of the planet is varying in time, second, that residual correlated noise in the data affected the best-fit transit solutions, and third, that spots or other stellar activity produced apparent depth variations.
### A Time-Varying Radius for the Planet {#time_var_radius}
We first consider the possibility that the radius of the planet is changing in time, either due to thermal expansion of the atmosphere or the presence of a variable cloud layer at sub-mbar pressures. We require a change in radius of approximately 4% in order to match both of the measured 3.6 $\mu$m transit depths; if this change is due to thermal expansion, we can estimate the energy input required using simple scaling arguments.
The effective change in the planet’s radius due to heating of the atmosphere depends on both the amount of heating and the range in pressures over which this heating takes place. We use the secondary eclipse depths in §\[dayside\_var\] to place an upper limit on the allowed change in temperature at the level of the mid-IR photosphere, and then calculate the corresponding range in pressure that must be heated by this amount in order to increase the radius of the planet by 4%. If we assume a hydrogen atmosphere with a baseline temperature of 700 K, we find a corresponding scale height of approximately 240 km, where the scale height is defined as $H=\frac{kT}{\mu g}$, $T$ is the temperature of the atmosphere, $\mu$ is the mean molecular weight, and $g$ is the surface gravity. We know from the secondary eclipse observations described in §\[dayside\_var\] that the temperature of the planet’s dayside atmosphere must change by less than 30%, which would correspond to an upper limit of 100 km on corresponding changes in the planet’s scale height.
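As a sanity check on the quoted scale height (assuming a mean molecular weight of 2 amu for H$_2$ and a surface gravity of roughly 12 m s$^{-2}$, both round values not stated in the text):

```python
K_B = 1.380649e-23    # Boltzmann constant, J/K
AMU = 1.660539e-27    # atomic mass unit, kg

def scale_height(T, mu_amu, g):
    """Atmospheric scale height H = k*T / (mu*g), in meters."""
    return K_B * T / (mu_amu * AMU * g)

# T = 700 K, mu = 2 amu (H2), g ~ 12 m/s^2 (assumed values)
H = scale_height(700.0, 2.0, 12.0)   # -> roughly 2.4e5 m, i.e. ~240 km
```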
In order to calculate the required energy input to produce the observed change in radius, we must first determine the range of pressures affected by this heating. We model the planet as an interior region with a constant temperature, surrounded by an outer envelope that expands and contracts freely with changing temperature. We set the upper boundary on this region equal to 50 mbar, corresponding to the approximate location of the $\tau=1$ surface in the mid-infrared. As illustrated in Fig. \[atm\_models\], opaque clouds at this pressure suppress but do not entirely remove absorption features in the planet’s transmission spectrum at these wavelengths, making this a reasonable estimate for the location of the $\tau=1$ surface. We assume that when the planet is heated the scale height changes by 100 km, which requires the lower boundary of the heated region to be located at a pressure of approximately 1 bar in order to produce a 1% expansion in radius. If we then calculate the change in the planet’s gravitational energy corresponding to this expansion, we find that an energy input of approximately $10^{26}$ J is required. Repeating this calculation for a 4% increase in radius, we find a lower boundary at 8,000 bars and a corresponding energy input of $10^{30}$ J. The insolation received by the planet is $10^{20}$ W, which gives an energy budget of $10^{25}$ J per orbit. When we examine Fig. \[depth\_comparison\] we find that the observed change in radius occurs primarily between the third and fourth visits (UT 2009 Jan 25-28). This would require an energy input as much as $10^5$ times higher than the total insolation over this epoch, which is clearly unphysical.
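The per-orbit energy budget quoted above follows from the insolation and the planet’s roughly 2.64-day orbital period (a literature value, not derived in the text); a minimal sketch:

```python
# Energy available per orbit versus the ~1e30 J needed for a 4% expansion.
# The 2.64-day orbital period is a literature value for GJ 436b.
insolation = 1.0e20                 # W, quoted total insolation
P_orb = 2.644 * 86400.0             # s, orbital period
E_per_orbit = insolation * P_orb    # ~2e25 J, matching the quoted budget

E_required = 1.0e30                 # J, quoted input for a 4% radius increase
deficit = E_required / E_per_orbit  # ~1e4-1e5: heating cannot supply this
```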
One alternative explanation for the observed change in radius would be to invoke the presence of intermittent, high-altitude clouds. Such clouds could produce a change in the apparent radius of the planet across multiple bands without requiring any actual heating or cooling of the atmosphere. In this picture, smaller radii for the planet would correspond to the cloud-free state, while larger radii would require the presence of an additional cloud layer. A change of 4% in apparent radius would require the clouds to form at a pressure approximately 100 times lower than the location of the nominal cloud-free radius. In §\[modeling\_disc\] we find that the average pressure of the $\tau=1$ surface for the nominal methane-poor (green) model between $3-10$ µm is 40 mbar, indicating that the clouds would have to extend to 0.4 mbar to explain the largest measured 3.6 µm radius for the planet. This conclusion is reasonably independent of our assumed composition, as the average $\tau=1$ surface for the methane-rich (blue) model is located at 30 mbar. Gravitational settling would presumably pose a challenge for cloud layers at sub-mbar levels, but vigorous updrafting of condensate particles might compensate for this effect. The broadband nature of the data presented here makes it difficult to directly test this hypothesis; we therefore recommend the acquisition of high signal-to-noise, near-infrared grism spectroscopy over multiple transits in order to resolve this issue. A 0.5 mbar cloud layer would lead to a near-featureless transmission spectrum, whereas a lower cloud layer would still exhibit many of the same absorption features as a cloud-free atmosphere. Such a data set would also allow us to test the theory, outlined in §\[star\_spots\], that the observed transit depth variations are due to the occultation of regions of non-uniform brightness on the surface of the star, as these regions should also produce a wavelength-dependent effect.
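The factor of ~100 in pressure follows from the exponential pressure profile of an isothermal atmosphere; a short sketch using the 240 km scale height quoted above and an assumed planet radius of roughly 4.2 Earth radii:

```python
import math

# Pressure at which clouds must sit to raise the apparent radius by 4%.
# In an isothermal atmosphere pressure falls as exp(-dR/H); the planet
# radius is an assumed representative value, not a fit from this work.
H = 240e3                  # m, scale height quoted in the text
R_p = 4.2 * 6.371e6        # m, assumed planet radius (~4.2 Earth radii)
dR = 0.04 * R_p            # 4% increase in apparent radius

pressure_ratio = math.exp(dR / H)  # ~1e2: clouds ~100x lower in pressure
```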
### Poorly Corrected Systematics {#det_norm}
It is possible that poorly corrected instrument effects, such as the intrapixel sensitivity variations at 3.6 and 4.5 µm, or the detector ramp at 8.0 µm, might lead to variations in the measured transit depth. Because there is complete overlap between the positions spanned by the star in the in-eclipse and out-of-eclipse data for all 3.6 and 4.5 µm visits, fits that inadequately describe the pixel response as a function of position should fail equally for both sections of the light curve. The UT 2009 Jan 30 transit serves as an example of imperfectly removed detector effects, as the residuals display a sawtooth signal with a shape and timescale similar to the original intrapixel sensitivity variations (see §\[short\_phot\] for a more detailed discussion of this light curve). Conversely, it is much more difficult to explain the 3.6 µm transit on UT 2009 Jan 28 with this scenario, as there appears to be a large dip in the residuals during ingress, but when the star spans the same pixels in the out-of-eclipse data we see no comparable deviations.
In this section we consider an alternate decorrelation function that better accounts for small-scale variations in the intrapixel sensitivity function as discussed in @ballard10. Following the discussion in Ballard et al., we describe the intrapixel sensitivity variations using a position-weighted average of the time series after the best-fit transit function and linear function of time from the position fits described above have been divided out. Unlike Eq. \[eq1\], this formalism does not assume a functional form for the intrapixel sensitivity variations and therefore should in principle produce an unbiased correction for these variations. We calculate the weighting function as follows:
$$\begin{aligned}
\label{eq4}
W(x_i,y_i)=\frac{\sum\limits_{i\neq j}{\exp\left(-\frac{(x_j-x_i)^2}{2\sigma _x^2}\right)\exp\left(-\frac{(y_j-y_i)^2}{2\sigma _y^2}\right)f_j}}{\sum\limits_{i\neq j}{\exp\left(-\frac{(x_j-x_i)^2}{2\sigma _x^2}\right)\exp\left(-\frac{(y_j-y_i)^2}{2\sigma _y^2}\right)}}\end{aligned}$$
where $x_i$ and $y_i$ are the $x$ and $y$ positions of the $i^{th}$ frame, $x_j$ and $y_j$ are the $x$ and $y$ positions for the rest of the time series, and $f_j$ is the flux measured in the $j^{th}$ frame. We optimize our choice of $\sigma _x$ and $\sigma_y$ to produce the smallest possible scatter in the final time series when we fix the transit light curve to the best-fit solutions listed in Table \[transit\_param\]. We find that the preferred values range between $0.0053-0.0120$ pixels in $\sigma _x$ and $0.0024-0.0045$ pixels in $\sigma_y$ for the four 3.6 and 4.5 µm transits examined here. For ease of computation we bin our time series in intervals corresponding to one point per original set of 64 images (in some instances there are fewer than 64 images in a given bin after removing outliers) and iteratively calculate the weighting function and the linear function of time plus transit fits until we converge to a consistent solution.
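A minimal sketch of this weighting function (Eq. \[eq4\]) in Python, written as a dense pairwise computation over `numpy` arrays; the actual analysis may use a different, more memory-efficient implementation:

```python
import numpy as np

# Kernel-weighted decorrelation of Eq. 4 (after Ballard et al. 2010):
# each frame's pixel-response estimate is a Gaussian-weighted average of
# the (transit-removed) fluxes of all *other* frames at nearby positions.
def intrapixel_weights(x, y, f, sigma_x, sigma_y):
    dx2 = ((x[:, None] - x[None, :]) / sigma_x) ** 2
    dy2 = ((y[:, None] - y[None, :]) / sigma_y) ** 2
    K = np.exp(-0.5 * (dx2 + dy2))
    np.fill_diagonal(K, 0.0)          # enforce the i != j sums in Eq. 4
    return (K @ f) / K.sum(axis=1)
```

For a constant flux series the weighting function is flat, as expected: the correction only responds to position-correlated structure in the residual fluxes.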
Once we have a final solution we calculate the weighting function for the unbinned data and carry out a final fit for the transit function to determine our best-fit transit depth. In this case we fix the inclination and $a/R_{\star}$ to their best-fit values from the simultaneous fits to all transits described in §\[transits\], which allows us to fit each transit individually using the weighting function while still preserving the constraints imposed in a simultaneous fit. We find that in all cases we obtain transit depths and times that are consistent with the values from our fits using Equation \[eq1\], with a standard deviation that is comparable to or slightly worse than that achieved with our polynomial fits.
We also carried out a second set of fits in which we derived our corrections for the intrapixel sensitivity variations using only the out-of-transit data, and found that our best-fit planet-star radius ratios changed by less than $0.4\sigma$ in all cases. Because the star samples the same regions of the pixel in both the in-transit and out-of-transit data, it is possible to obtain an equivalently good correction for the intrapixel sensitivity variations using only the out-of-transit points. Conversely, this means that poor corrections for this effect should produce equally large deviations in both the in-transit and out-of-transit regions of the light curve. As we will discuss in the following section, we find that the residuals for the deepest transits in these two bands have a significantly higher RMS in transit than out of transit. This behavior is inconsistent with our expectations for poorly corrected instrument effects, and we therefore conclude that it is unlikely that these effects are responsible for the discrepant transit depths measured at 3.6 and 4.5 .
At 8.0 µm we fit the data with a single or double exponential function to describe the smoothly varying detector ramp. In @agol10 we conclude that this functional form avoids correlations between the slope of the ramp and the measured transit or eclipse depth; however we check this assertion using our 8 µm data as well. For our 8 µm transit fits we find that the exponential term has a coefficient of \[0.00156, 0.00000, 0.00288, 0.00299\], corresponding to planet-star radius ratios of \[0.08234, 0.08162, 0.08138, 0.08336\], where we have set the amplitude of the exponential term to zero for the transit occurring in the middle of our 70-hour phase curve observation. For the eleven secondary eclipse observations we find coefficients of \[0.00645, 0.00627, 0.00194, 0.00433, 0.00534, 0.00140, 0.00000, 0.00359, 0.00321, 0.00353, 0.00262\], corresponding to eclipse depths of \[0.0552, 0.0507, 0.0395, 0.0495, 0.0367, 0.0523, 0.0421, 0.0386, 0.0491, 0.0397, 0.0441\], respectively, where we have set the exponential coefficient to zero for the secondary eclipse at the end of the phase curve observation. We find no evidence for any correlation between the slope of the exponential function and the measured transit or secondary eclipse depths. As an additional check we also confirm that there is no correlation between these depths and either the measured sky background or the total stellar flux given in Table \[obs\_table\].
### Stellar Variability {#star_spots}
The presence of spots or faculae on the visible face of the star can have two distinct effects on the measured light curve for a transiting planet. Non-occulted spots on the visible face of the star reduce the star’s total flux, increasing the measured transit depth, while spots occulted by the planet cause a small positive deviation in the light curve with a time scale proportional to the physical size of the occulted spot [e.g., @rabus09], and occulted faculae would have the opposite effect. The early K dwarf HD 189733 ($T_\mathrm{eff}= 5100$ K) is perhaps the best-studied example of an active star with a transiting hot Jupiter [e.g., @bakos06; @pont08a; @desert10], but the late G dwarf CoRoT-2 ($T_\mathrm{eff}=5600$ K) also exhibits a high level of spot activity that may have resulted in early overestimates of its planet’s inflated radius [@guillot10]. This problem is likely to be even more common for M dwarfs, and in fact several instances of occulted spots were reported in transit light curves for the super-Earth GJ 1214b, which orbits a 3000 K primary [@carter10; @berta10; @kundurthy10].
Although it is important to correct for these effects in any transit fit, it is particularly crucial when comparing non-simultaneous, multi-wavelength transit observations such as the ones described in this paper, which require a relative precision of better than one part in $10^{4}$ in the measured transit depths. We evaluate the likely impact of GJ 436’s activity on the measured transit depths using several complementary approaches. First, we estimate the average activity level on GJ 436 by measuring the amount of emission in the cores of the Ca II H & K lines; in @knutson10 we determined that GJ 436 had an average [$S_{\mbox{\scriptsize HK}}$]{} of 0.620. @isaacson10 found that other stars in the California Planet Search database with similar $B-V$ colors have [$S_{\mbox{\scriptsize HK}}$]{} values ranging between $0.5-2.0$, indicating that GJ 436 is relatively quiet for its spectral type. @demory07 report that this star’s rotation period is greater than 40 days, consistent with upper limits on [$v \sin i$]{} of 1 km/s [@jenkins09], also suggesting that it is likely to be relatively old and correspondingly quiet. The upper limit of 3 km/s on [$v \sin i$]{} from spectroscopy [@butler04] is also consistent with an inclined or pole-on viewing geometry, although it is not required as long as the star’s rotation period is longer than 7 days.
Rather than relying on these indirect measures of activity, we can also directly measure the amplitude of the star’s rotation-modulated flux variations using visible-light ground-based observations. We obtained observations of GJ 436 in Strömgren *b* and *y* filters over a span of approximately six months surrounding our 2009 *Spitzer* transit and secondary eclipse observations from an ongoing monitoring program carried out with the T12 0.8 m APT at Fairborn Observatory in southern Arizona [@henry99; @eat03; @henry08]. In these observations the telescope nodded between GJ 436 and three comparison stars of comparable or greater brightness, which were then used to correct for the effects of variable seeing and airmass. We find that during the period between UT 2009 Jan 9 - Feb 4, when a majority of our transit data was obtained, the star varied in flux by less than a few mmag in visible light (Figure \[stellar\_rot\]). We carry out a similar check for variability in the infrared using the fifteen 8 µm flux estimates listed in Table \[obs\_table\], which we find have a standard deviation of 0.07%. Both of these measurements indicate that the star is very nearly constant in flux in both visible and infrared light, and we can therefore rule out non-occulted spots as the cause of the observed transit depth variations.
We also use these same data to search for periodicities corresponding to GJ 436’s rotation period. If we fit the combined $b$ and $y$ band fluxes with a sine function plus a quadratic function of time as shown in Fig. \[stellar\_rot\], we find a best-fit period of 56.5 days. We calculate a Lomb-Scargle periodogram [@lomb76; @scargle82] for these data and find that this period has a false alarm probability of only 2%, which we determine using a bootstrap Monte Carlo analysis. We find a nearly identical best-fit period of 56.6 days in the [$S_{\mbox{\scriptsize HK}}$]{} values measured with Keck HIRES during this epoch [@isaacson10], but with a much higher false alarm probability of approximately 20%. We also examine the correlation between the measured $b$ fluxes and [$S_{\mbox{\scriptsize HK}}$]{} values over the six-year period in which both were available (Fig. \[bflux\_vs\_sval\]), and find that these parameters are negatively correlated. Taken together, these data indicate that the small observed variations in GJ 436’s visible-light fluxes are likely connected with the presence of regions of increased magnetic activity on the visible face of the star.
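For illustration, a simplified least-squares periodogram (equivalent in spirit to Lomb-Scargle for data of uniform quality) can recover such a periodicity; this is a sketch, not the @lomb76 / @scargle82 formalism used in the analysis:

```python
import numpy as np

# Minimal least-squares periodogram: at each trial frequency, fit
# y ~ a*sin(wt) + b*cos(wt) to the mean-subtracted data and record the
# power captured by the best-fit sinusoid.
def ls_power(t, y, freqs):
    y = y - y.mean()
    power = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        A = np.column_stack([np.sin(w * t), np.cos(w * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        power[k] = np.sum((A @ coef) ** 2)
    return power
```

Applied to a synthetic 56.5-day sinusoid sampled over a six-month baseline, the power spectrum peaks at the injected period; a false alarm probability would then be estimated by repeating the search on shuffled data, as in the bootstrap analysis described above.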
Although such low-amplitude flux variations generally indicate that a star has relatively few spots, there are two important exceptions. First, if the spots are uniformly distributed in longitude, it is theoretically possible to have a star with significant spot coverage and an effectively constant flux. It would not be surprising if the occurrence rate and distribution of spots were different for M stars than for G or K stars, but in GJ 436’s case the lack of any flux variations larger than a few mmag would seem to place a strong limit on the allowed spot distributions. We can quantify this limit if we assume that the deviation of approximately 0.08% in the first part of the 3.6 µm transit light curve from UT 2009 Jan 28 shown in Fig. \[transit\_phot\_norm\] is due to the occultation of a bright region on the star. This region must have a surface intensity that is 12% brighter than the rest of the star in order to produce the observed deviation. If we compare `PHOENIX` models with varying effective temperatures integrated over this band, we find that the star’s temperature must increase by approximately 200 K in the affected region in order to match this surface intensity. We know that the total rotational modulation in the star’s visible-light flux must remain below 0.1%, and we estimate that an increase of 12% in the 3.6 µm surface intensity should produce an increase of approximately 65% in the Strömgren (*b*+*y*)/2 band. In this case the fractional area covered by active regions on the star must vary by less than 0.15% from the most active to the least active hemisphere. Of course, it is possible that the stellar atmosphere models do not provide an accurate match for the spectra of these active regions; if we instead use the measured 3.6 µm flux contrast of 12%, we find a more conservative limit of 1% on variations in the area affected by stellar activity.
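The approximately 200 K contrast quoted above can be checked to order of magnitude with a blackbody in place of the `PHOENIX` models; the grey-body approximation lands somewhat higher (~250 K), as expected when line opacity is neglected:

```python
import math

# Blackbody estimate of the temperature contrast needed for a 12% surface-
# brightness increase at 3.6 um on a 3585 K star. The text uses PHOENIX
# model atmospheres; this grey-body sketch gives a comparable answer.
H_C_OVER_K = 6.62607015e-34 * 2.99792458e8 / 1.380649e-23  # h*c/k, m*K

def planck_ratio(dT, T=3585.0, lam=3.6e-6):
    # B(lam, T + dT) / B(lam, T) for a blackbody
    x = H_C_OVER_K / lam  # hc/(lam*k*1), expressed in K
    return (math.exp(x / T) - 1.0) / (math.exp(x / (T + dT)) - 1.0)

# bisect for the dT giving a 12% brightness increase
lo, hi = 0.0, 1000.0
for _ in range(60):
    dT = 0.5 * (lo + hi)
    if planck_ratio(dT) < 1.12:
        lo = dT
    else:
        hi = dT
```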
A second, more plausible scenario involves tilting the rotation axis of the star so that we are viewing it closer to pole-on, which would effectively suppress the amplitude of rotational flux variations regardless of spot coverage. If we assume that the star’s spin axis is randomly oriented with respect to our line of sight, the probability that it will fall within $45\degr$ of a pole-on view is 30%. In this scenario the star could be highly spotted, allowing for frequent occultations of spots by the planet, while still displaying a small rotational flux modulation. This scenario would require the planet’s orbit to be misaligned with respect to the star’s rotation axis, but such misalignments are commonly seen in other transiting planet systems [@winn10a]. Although the Rossiter-McLaughlin effect has never been successfully measured for GJ 436b, @winn10b find that the Neptune-mass planet HAT-P-11b, which is perhaps the best analogue to the GJ 436 system, has a sky-projected obliquity of $103\degr^{+26\degr}_{-10\degr}$, indicating that this system is significantly misaligned. If most close-in planets start out misaligned and are then gradually brought into alignment through tidal interactions with their host star as proposed by @winn10a, the fact that HAT-P-11b still maintains both a non-zero orbital eccentricity and a significant misalignment would seem to suggest that the same could also be true for GJ 436b.
If we proceed with the hypothesis that GJ 436 is both spotty and tilted with respect to our line of sight, we can then search for evidence of occulted spots in the light curves with discrepant transit depths. We first compare the relative standard deviations of the in-transit ($\sigma_\texttt{in}$) and out-of-transit ($\sigma_\texttt{out}$) residuals plotted in Fig. \[transit\_phot\_norm\]:
$$\begin{aligned}
\label{eq5}
\sigma_\texttt{rel} = \frac{\sigma_\texttt{in} - \sigma_\texttt{out}}{\sigma_\texttt{out}}\end{aligned}$$
We list the measured values of $\sigma_\texttt{rel}$ for all eight transit observations in Table \[sigma\_table\]. Both the 3.6 µm transit on UT 2009 Jan 28 and the 4.5 µm transit on UT 2009 Jan 30 appear to have inflated values of $\sigma_\texttt{rel}$, as would be expected if the planet occulted active regions on the star during these visits. We can quantify the statistical significance of the measured $\sigma_\texttt{rel}$ values if we assume that both the in-transit and out-of-transit points are drawn from the same underlying Gaussian distribution, and then ask how many times in a sample of 100,000 random trials we measure a value of $\sigma_\texttt{rel}$ greater than or equal to the value calculated directly from our observations. In each trial we generate two synthetic data sets, each with the appropriate length corresponding to either the in-transit or out-of-transit measurements, and then calculate the standard deviation of each distribution and the corresponding value of $\sigma_\texttt{rel}$. In the 3.6 µm transit observation on UT 2009 Jan 9 there are 81,848 out-of-transit flux measurements and 25,482 in-transit flux measurements, and we find that over 100,000 trials, we obtain a value of $\sigma_\texttt{rel}$ greater than or equal to the measured value of 0.2% approximately 36% of the time. Repeating the same calculation for the 3.6 µm transit observed on UT 2009 Jan 28, which has 82,238 out-of-transit points and 25,530 in-transit points, we obtain $\sigma_\texttt{rel}$ greater than or equal to the measured value of 1.4% only 0.23% of the time. We list the corresponding probabilities for all eight transits in Table \[sigma\_table\].
[lcrrrr]{} Date & Band (µm) & $N_\mathrm{in}$ & $N_\mathrm{out}$ & $\sigma_\texttt{rel}$ & Probability\
*Unbinned Data* & & & & &\
UT 2007 Jun 29 & 8.0 & 7,924 & 17,956 & -1.3% & 0.92\
UT 2008 Jul 14 & 8.0 & 7,895 & 25,012 & +1.4% & 0.059\
UT 2009 Jan 9 & 3.6 & 25,482 & 81,848 & +0.2% & 0.36\
UT 2009 Jan 17 & 4.5 & 25,954 & 82,212 & -0.3% & 0.70\
UT 2009 Jan 25 & 8.0 & 7,910 & 25,334 & +0.1% & 0.44\
UT 2009 Jan 28 & 3.6 & 25,536 & 82,238 & +1.4% & 0.0023\
UT 2009 Jan 30 & 4.5 & 25,955 & 62,334 & +1.1% & 0.018\
UT 2009 Feb 2 & 8.0 & 7,890 & 25,318 & +0.1% & 0.45\
*Binned Data* & & & & &\
UT 2007 Jun 29 & 8.0 & 126 & 287 & -13.6% & 0.97\
UT 2008 Jul 14 & 8.0 & 126 & 399 & -4.9% & 0.75\
UT 2009 Jan 9 & 3.6 & 411 & 1311 & +3.3% & 0.21\
UT 2009 Jan 17 & 4.5 & 412 & 1310 & -3.4% & 0.80\
UT 2009 Jan 25 & 8.0 & 126 & 403 & +9.7% & 0.093\
UT 2009 Jan 28 & 3.6 & 411 & 1311 & +37.5% & $1\times 10^{-6}$\
UT 2009 Jan 30 & 4.5 & 412 & 989 & -1.4% & 0.63\
UT 2009 Feb 2 & 8.0 & 126 & 403 & +2.9% & 0.34\
We also repeat this same test with data that has been binned in sets of 64 images, corresponding to 10 s bins at 3.6 and 4.5 µm and 30 s bins at 8 µm. This allows us to evaluate the relative contribution that correlated noise makes to the in-transit and out-of-transit variances, as the photon noise should be reduced by a factor of 8 in these bins (also see Fig. 3). In this case we carry out 1,000,000 random trials for each visit, as each simulated data set is much smaller and the computations are correspondingly fast. We find that for the binned Jan 9 light curve there are 1311 points out of eclipse and 411 points in eclipse. In this case $\sigma_\texttt{rel}$ is 3.3%, and we obtain values greater than or equal to this number in 21% of our random trials. Repeating this calculation for the UT 2009 Jan 28 visit, we find that the measured value of $\sigma_\texttt{rel}$ is 37% (i.e., a standard deviation that is 37% higher in eclipse than it is out of eclipse), with 1311 points out of eclipse and 411 points in eclipse. In our simulations assuming a single Gaussian probability distribution for both segments, this level of disagreement occurred only once in $10^6$ trials. We find that in all other visits, including the 4.5 µm transit observed on UT 2009 Jan 30, the binned data in and out of eclipse are consistent with a single distribution.
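This Monte Carlo test can be sketched compactly: since the sample standard deviation of $n$ Gaussian points is distributed as $\sqrt{\chi^2_{n-1}/(n-1)}$, the random trials can be drawn without simulating individual flux measurements (a shortcut assumed here for speed, not necessarily the implementation used in the analysis):

```python
import numpy as np

# Significance of sigma_rel under the null hypothesis that the in- and
# out-of-transit residuals share a single Gaussian distribution. Sample
# standard deviations are drawn directly from chi-squared distributions.
def sigma_rel_pvalue(n_in, n_out, observed, n_trials=100_000, seed=1):
    rng = np.random.default_rng(seed)
    s_in = np.sqrt(rng.chisquare(n_in - 1, n_trials) / (n_in - 1))
    s_out = np.sqrt(rng.chisquare(n_out - 1, n_trials) / (n_out - 1))
    return np.mean((s_in - s_out) / s_out >= observed)
```

For the binned UT 2009 Jan 9 case (411 in-transit, 1311 out-of-transit points, $\sigma_\texttt{rel}$ = 3.3%) this reproduces the ~21% probability quoted above, while the 37% value from UT 2009 Jan 28 essentially never occurs by chance.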
One consequence of a misalignment between the star’s rotation axis and the planet’s orbit is that the planet will not necessarily occult the same spot on successive transits, as would be expected for a well-aligned system; we therefore consider each transit individually. Our analysis above indicates that the 3.6 µm transit on UT 2009 Jan 28 displays a statistically significant increase in the standard deviation of the in-transit data that is dominated by contributions from correlated noise on time scales greater than 30 s, as would be expected if the planet occulted an active region on the surface of the star. Although the 4.5 µm transit from UT 2009 Jan 30 does not appear to display a similar increase, our imperfect correction for the intrapixel sensitivity variations in this visit means that we are less sensitive to variations in $\sigma_\texttt{rel}$. We argue that even if the star’s rotation axis and the planet’s orbit are misaligned, it is still likely that the planet would occult the same active region during both the UT 2009 Jan 28 and Jan 30 visits, as the interval between these visits is much shorter than the star’s approximately 50-day rotation period. As we discuss later in this section, the fact that both visits display increased transit depths and shifted transit times provides additional support for this hypothesis.
We also consider the possibility that the increased scatter in the in-transit residuals might be due to a change in the transit parameters, including the planet’s radius, orbital inclination, transit time, or $a/R_{\star}$, from one visit to the next. We test this hypothesis by taking the difference of the first and second visits in each bandpass from 2009 and comparing the shape of the residual light curve to the differences we would expect due to changes in these parameters, which should be distinct from the deviations created by occulted star spots (Fig. \[transit\_diff\]). Because we are directly differencing the two light curves, our results are independent of any assumptions about the shape of the transit light curve or the stellar limb-darkening. We inspect the deviations in the residuals plotted in Fig. \[transit\_diff\] and conclude that they do not appear to be well-matched by changes in the best-fit transit parameters, leaving occultations of active regions on the surface of the star as the most likely hypothesis.
If the planet occults a spot it can also cause a shift in the best-fit transit times, particularly when the spot is near the edge of the star and is occulted during ingress or egress. Indeed, we see that the UT 2009 Jan 28 3.6 µm transit appears to occur $31.4\pm9.5$ s early, while the 4.5 µm Jan 30 visit occurs $34.4\pm9.4$ s late (see Fig. \[transit\_o\_c\]) in the fits where we fix $a/R_{\star}$ and $i$ to a single common value. As a test we repeated our fit to the 3.6 µm transit excluding the first 1/3 of the transit light curve, and found that the best-fit transit time shifted forward by approximately 30 s. We would also expect that transits observed in visible light, where the contrast between the spots and the star is more pronounced, would show proportionally larger timing deviations when the planet crosses a spot. As noted in §\[timing\], the scatter in the measured visible-light transit times is inconsistent with a constant period, and the amplitude of the visible-light deviations is on average larger than the deviations in the infrared. We should also see this same wavelength-dependence in the measured transit depths in Fig. \[depth\_comparison\], and indeed we find that the 3.6 µm transit depth changes by 7.8%, the 4.5 µm transit depth changes by 5.3%, and the 8.0 µm transit depth changes by 4.9% during the period between UT 2009 Jan 9 and Feb 2. Lastly, we can examine the visible-light flux measurements for GJ 436 in Fig. \[stellar\_rot\] and see that these two transits were obtained near a minimum in the star’s flux, consistent with a relative increase in the fractional spot coverage as compared to earlier epochs. The measured values for [$S_{\mbox{\scriptsize HK}}$]{}, a common activity indicator, appear to be anti-correlated with the observed flux variations and reach a local maximum near this point.
Atmospheric Transmission Spectrum {#modeling_disc}
---------------------------------
In principle, the broadband transmission photometry of GJ 436b allows us to constrain the chemical composition and temperature structure near the limb of the planetary atmosphere [e.g., @madhu09]. However, time variability in either the properties of the star or of the planet poses a significant challenge to an analysis in which we are comparing transit observations at different wavelengths obtained days or weeks apart. As discussed in §\[time\_var\_radius\], we consider it unlikely that the discrepancies in the measured transit depths are due to changes in the properties of the planet, but instead conclude in §\[star\_spots\] that the occultations of regions of nonuniform brightness in a subset of the transits appear to be responsible for the observed depth variations.
If we set aside those transits which we believe to be most strongly affected by stellar activity, including the UT 2009 Jan 28 and 30 visits, we may attempt to estimate the shape of the planet’s transmission spectrum using the remaining transits. Although the evidence for spots in the final 8.0 µm transit on UT 2009 Feb 2 is somewhat weaker, we choose to exclude it on the grounds that it displays some of the same behaviors (increased depth, larger than usual timing offset) as the more strongly affected 3.6 and 4.5 µm transits immediately preceding it. If we then average the remaining three 8.0 µm depths, we find depths of \[$0.6694\%\pm0.0061\%$, $0.6865\%\pm0.0078\%$, $0.6831\%\pm0.0052\%$\] at 3.6, 4.5, and 8.0 µm, respectively. These three values are consistent with the near-IR transit depth from @pont08 of $0.6906\%\pm0.0083\%$ ($1.1-1.9$ µm), as well as the best-fit visible light transit depth from @ballard10, $0.663\%\pm0.014\%$ ($0.35-1.0$ µm). Ground-based data provide additional constraints in the near-IR, including a $H$ band transit depth of $0.707\%\pm0.019\%$ from @alonso08 and a $Ks$ band transit depth of $0.64\%\pm0.03\%$ from @caceres09, both from individual transit observations.
We fit these data using the retrieval technique described in @madhu09, which explores the parameter space of a one-dimensional, hydrogen-rich model atmosphere. We compute line-by-line radiative transfer with the assumption of hydrostatic equilibrium and we use parametric prescriptions for the relative abundances of H$_2$O, CH$_4$, CO, CO$_2$. We also include other dominant visible-light and infrared opacity sources, including Na, K, H$_2$-H$_2$ collision-induced absorption, and Rayleigh scattering. Our molecular line data are from @rothman05, @freedman08, Freedman (personal communication, 2009), @karkoschka10, and Karkoschka (personal communication, 2011). The H$_2$-H$_2$ opacities are from @borysow97 and @borysow02. We fix the pressure-temperature ($P$-$T$) profile to the best-fit dayside profile from @stevenson10 and @madhu10; it is possible to obtain a marginally improved fit to these data if we allow the $P$-$T$ profile to vary freely in the fit, but the differences are not significant. We find that the observations can be explained to within the 1-$\sigma$ uncertainties by a methane-poor model (green line in Fig. \[atm\_models\]) that contains mixing ratios of H$_2$O = $1.0\times 10^{-3}$, CO = $1.0\times 10^{-3}$, and CH$_4$ = $1.0\times 10^{-6}$; the data used in this fit appear to be inconsistent with methane abundances $\geq 10^{-5}$. This model also includes CO$_2$ = $1.0\times 10^{-5}$, but the concentration of this molecule is less well constrained, as it is degenerate with the CO abundance in the 4.5 band. We do not expect strong absorption due to atomic Na and K in this temperature regime [@sharp07], and we therefore adopt Na and K mixing ratios of $0.1 \times $ solar abundances. If we compare the visible-light transit depth of 0.650% from this model to the value reported by @ballard10 we find that it is consistent at the $0.5\sigma$ level. 
Model transmission spectra for GJ 436b from @shabram10, such as the rescaled model including higher-order hydrocarbons (model “g” in Shabram et al.), also provide a reasonably good match to these data.
We can reduce the disagreement between the measured transit depths and the green model in the $1-2$ µm wavelength range by introducing an opaque cloud layer at 50 mbar (grey model in Fig. \[atm\_models\]). However, such a cloud layer would be inconsistent with the dayside emission spectrum measured by @stevenson10 unless it was optically thin in the center of the dayside hemisphere, or only intermittently present as discussed in §\[time\_var\_radius\]. We also note that occultations of spots and other features on the star will have a stronger effect on the measured transit depth at shorter wavelengths, and it is therefore possible that these measurements (several of which were derived from individual transit observations) are unreliable for our purposes here.
Returning to the *Spitzer* data, we find that our conclusions about the atmospheric composition are strongly dependent on our choice of which transit depths to include in our analysis. We illustrate this with a blue model in Fig. \[atm\_models\], which contains H$_2$O and CH$_4$ mixing ratios of $5.0\times 10^{-4}$ each and no CO or CO$_2$, and is comparable to the model presented in @beaulieu10. @beaulieu10 excluded the shallower 3.6 µm transit on UT 2009 Jan 9 and kept the deeper 3.6 µm UT 2009 Jan 28 and 8.0 µm UT 2009 Feb 2 visits in their analysis, and as a result they concluded that the planet’s transmission spectrum contained strong methane features, as illustrated by this blue model. They argue that the correction for the intrapixel effect is degenerate with the transit depth for the UT 2009 Jan 9 visit and that this visit is therefore unreliable, but we find that there is good overlap between the $x$ and $y$ positions spanned by the in-transit and out-of-transit data. We obtain transit depths that are consistent at the $0.1\sigma$ level when we fit for our intrapixel sensitivity correction using either the entire light curve or the out-of-transit data alone. Although our 3.6 and 8.0 µm transit depths are in good agreement with the values obtained by Beaulieu et al., our best-fit transit depth for the 4.5 µm transit on UT 2009 Jan 17 is $2.5\sigma$ larger. We note that Beaulieu et al. allow $a/R_{\star}$ and $b$ to vary individually for each transit, and that their values for these parameters from the Jan 17 transit fit are outliers when compared to other visits; we conclude that this is likely the cause of their shallower best-fit radius ratio. Despite this disagreement, we find that if we include the same transits as Beaulieu et al. in our analysis, we also produce a transmission spectrum that is consistent with strong methane absorption.
If, as we propose, occulted regions of non-uniform brightness on the surface of the star are responsible for the discrepancies in the 3.6 and 4.5 \micron transit depths, it will be difficult to provide a definitive characterization of GJ 436b’s transmission spectrum with broadband *Spitzer* photometry. Our analysis suggests that the atmosphere of GJ 436b is likely under-abundant in methane and over-abundant in CO, consistent with the conclusions of @stevenson10 and @madhu10, but in order to reach these conclusions we have assumed that we have correctly identified and excluded all transits in which the planet occults active regions on the star. However, if the fractional spot coverage on the star is sufficiently high, it is possible that *all* transits are affected by these regions, in which case we cannot draw any robust conclusions about the shape of the planet’s transmission spectrum.
Dayside Emission Spectrum and Limits on Variability {#dayside_var}
---------------------------------------------------
We can use the eleven secondary eclipse depths listed in Table \[eclipse\_param\] to study the properties of the planet’s dayside atmosphere. We take the error-weighted average of the eclipse depths and find a combined value of $0.0452\%\pm0.0027\%$, consistent with the value of $0.054\%\pm0.008\%$ reported by @stevenson10. Next we construct a combined light curve incorporating all eleven secondary eclipse observations, shown in Fig. \[combined\_eclipse\]. Fig. \[combined\_transit\] shows the equivalent combined 8 \micron transit light curve for comparison. As a check we fit these combined data with a secondary eclipse light curve and find that the best-fit eclipse depth agrees exactly with this error-weighted average from the individual eclipse fits. Because the strongest constraints on the relative abundances of methane and CO come from the 3.6 and 4.5 \micron eclipse measurements, we do not expect the reduced 8 \micron error bar to affect the conclusions reached by Stevenson et al. regarding these molecules. If we compare our results to the two models plotted in Fig. 2 of Stevenson et al., we find that the revised 8 \micron eclipse depth is best described by a cooler model with an effective blackbody temperature of 790 K (defined as the temperature needed to match the total integrated flux at all wavelengths) and a modestly enhanced ($30\times$ higher) water abundance, rather than the hotter 860 K model with weaker water absorption. We also calculate a revised brightness temperature for the planet in the 8 \micron band, defined as the temperature required to match the observed planet-star flux ratio in this bandpass assuming that the planet radiates as a blackbody. We use the parameters in Table \[global\_param\] and assume a `Phoenix` atmosphere model with an effective temperature of 3585 K and $\log(g)$ equal to 4.843 [@torres07] for the star, and find that the planet has a best-fit brightness temperature of $740\pm16$ K.
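The error-weighted combination used above is straightforward to reproduce. The sketch below uses placeholder per-visit depths and uncertainties rather than the actual values from Table \[eclipse\_param\], with weights $w_i = 1/\sigma_i^2$:

```python
import numpy as np

def weighted_mean(depths, errors):
    """Error-weighted mean and its uncertainty (weights = 1/sigma^2)."""
    w = 1.0 / np.asarray(errors) ** 2
    mean = np.sum(w * depths) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, err

# Hypothetical per-visit 8 micron eclipse depths (percent) and 1-sigma errors;
# the real values are those tabulated in the paper.
depths = np.array([0.043, 0.047, 0.045, 0.044, 0.049, 0.046,
                   0.042, 0.048, 0.045, 0.044, 0.046])
errors = np.full(11, 0.009)

mean, err = weighted_mean(depths, errors)
print(f"combined depth = {mean:.4f}% +/- {err:.4f}%")
```

For eleven equal uncertainties the combined error shrinks by $\sqrt{11}$, which is why averaging the individual visits tightens the 8 \micron depth to the few-thousandths-of-a-percent level quoted in the text.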
Returning to Fig. \[combined\_eclipse\], we examine the residuals from our best-fit eclipse solution to search for evidence of deviations during ingress and egress caused by a non-uniform day-side surface brightness [@williams06; @rauscher07]. The primary effect of a non-uniform brightness distribution is to shift the best-fit eclipse time [e.g., @agol10], but in this case uncertainties in estimates for GJ 436b’s orbital eccentricity and longitude of periastron prevent us from detecting the small ($<1$ minute) timing offsets expected from this effect. This timing offset will also display a small wavelength-dependence, due to variations in the brightness distribution as seen in different bandpasses, but this signal is likely to be too weak to detect by comparing to the existing 3.6 and 5.8 \micron eclipse observations from @stevenson10. Instead, we seek to determine if the shape of the 8 \micron eclipse ingress and egress can be used to constrain the planet’s day-side brightness distribution. We compare the eclipse light curves for a uniform surface brightness disk to that of a local equilibrium model [i.e., one with the radiative timescale set to zero so that each region of the planet is at its local equilibrium temperature; @hansen08; @burrows08], and find that the peak-to-trough residuals between these light curves are only $0.002\%$, if the eclipse depth is a free parameter. This is approximately a factor of ten smaller than our measurement errors, as demonstrated by the binned residuals in Fig. \[combined\_eclipse\]. As we increase the amount of energy advected to the planet’s night side using the models described in @cowan10, the location of the hot spot on the planet’s day side shifts away from the substellar point and the overall temperature contrast decreases.
Because we are not sensitive to the timing offset caused by the shifted hot spot, the only effect of this increased advection is to homogenize the planet’s temperatures, producing light curves increasingly similar to the uniform disk light curves.
### A Variability Study for GJ 436b
Tidal dissipation is expected to have driven GJ 436b into a pseudo-synchronous rotation state in which the planet’s spin frequency is nearly commensurate with the planet’s instantaneous orbital frequency at periastron. There are several competing theories of the pseudosynchronization process [see, e.g. @ivanov07]. We adopt the expression given by @hut81: $$\begin{aligned}
\label{eq6}
{\Omega_{\rm spin}\over{\Omega_{\rm orbit}}}={
1+{15\over{2}}e^{2}+{45\over{8}}e^{4}+{5\over{16}}e^{6}
\over{(1+3e^2+{3\over{8}}e^{4})(1-e^{2})^{3/2}}}\, .\end{aligned}$$ For GJ 436b, this relation gives $P_{\rm spin}=2.32$ days, which yields a 19-day synodic period for the star as viewed from a fixed longitude on the planet. GJ 436b also experiences an 83% increase in incident flux during the 1.3-day interval between apoastron and periastron.
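Equation \[eq6\] is simple to evaluate numerically. A minimal check of the quoted spin period, assuming the illustrative values $e \approx 0.15$ and $P_{\rm orbit} \approx 2.644$ d for GJ 436b (the precise orbital solution from the fits should be used for real work):

```python
def hut_ratio(e):
    """Omega_spin / Omega_orbit for pseudo-synchronous rotation (Hut 1981)."""
    num = 1 + (15 / 2) * e**2 + (45 / 8) * e**4 + (5 / 16) * e**6
    den = (1 + 3 * e**2 + (3 / 8) * e**4) * (1 - e**2) ** 1.5
    return num / den

e, p_orbit = 0.15, 2.644           # assumed values for GJ 436b
p_spin = p_orbit / hut_ratio(e)    # close to the 2.32 d quoted in the text
print(f"P_spin = {p_spin:.2f} days")
```

Note that the ratio reduces to unity for a circular orbit ($e=0$), as it must for a tidally synchronized planet.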
We have computed simple hydrodynamical models to assess whether the asynchronous rotation and time-varying insolation are likely to generate atmospheric flows that are sufficiently chaotic to produce observable orbit-to-orbit variability in the secondary eclipse depths. Our two-dimensional hydrodynamical model contains three free parameters. The first, $p_{8 \mu \rm {m}}$, is the atmospheric pressure at the 8 \micron photosphere; the second, $X$, corresponds to the fraction of the incoming optical flux that is absorbed at or above the 8 \micron photosphere; and the third, $p_b$, corresponds to the pressure at the base of our modeled layer. We adopt parameter values of $p_{8 \mu \rm {m}}$ = 100 mbar, $p_b$ = 4.0 bar and $X$ = 1.0 for these models, which puts our model's light curve in good agreement with GJ 436b’s average 8 \micron secondary eclipse depth. The full details of the computational scheme are the same as those adopted in @langton08, with updates as described in @laughlin09. A model photometric light curve is then obtained by integrating at each time step over the planetary hemisphere visible from Earth, where we assume that each patch of the planet radiates with a black-body spectrum corresponding to the local temperature.
The model is run for a large number of orbits, and a quasi-steady state surface flow emerges. The temperature structure of this flow as seen by an observer in the direction of Earth at five equally spaced intervals in the orbit is shown in Figure \[orbit\_diagram\], and the model light curve over these five orbits is shown in Figure \[circ\_model\]. Over the course of a single orbit, the $8~\micron$ planet-to-star flux ratio varies nearly sinusoidally from $\Delta F/F=0.033\%$ to $0.043\%$. The model’s flux at secondary eclipse agrees well with the observed value, and varies by only 0.5% peak-to-peak from one orbit to the next. We note that more sophisticated three-dimensional general circulation models for GJ 436b from @lewis10 also predict very low ($1.3-1.5\%$) levels of variability in the 8 \micron band for a range of atmospheric metallicities.
Although these models indicate that GJ 436b’s modest orbital eccentricity is likely not sufficient to induce significant variability, they also do not include many processes such as clouds, photochemistry, and small scale turbulence that are known to contribute to temporal variability in planetary atmospheres. We therefore place empirical limits on GJ 436b’s dayside variability using the eleven 8 \micron secondary eclipse observations. We assume that the intrinsic dayside fluxes are drawn from either a Gaussian distribution with a standard deviation $\delta$, or from a boxcar distribution with a width equal to $2\delta$. In both cases we set the mean of the distribution equal to the error-weighted mean of the measured secondary eclipse depths given in Table \[global\_param\]. We then conduct 10,000 random trials, where we draw eleven measurements from each distribution and calculate the reduced $\chi^2$ of these values as compared to the measured secondary eclipse depths in Table \[eclipse\_param\]. We then determine the fraction of the 10,000 random trials in which the reduced $\chi^2$ is less than or equal to one, which should correspond to the probability that the underlying distribution is consistent with the measured eclipse depths. We repeat this calculation for a range of values for $\delta$, and plot the resulting probability distribution as a function of $\delta$ for both boxcar and Gaussian distributions. We find that for a boxcar distribution we can place \[$1\sigma$, $2\sigma$, $3\sigma$\] limits on the intrinsic variability of \[29%, 42%, 58%\], and for a Gaussian distribution our corresponding upper limits are \[17%, 27%, 42%\]. These limits are consistent with the predictions from general circulation models for this planet, but they are not low enough to provide meaningful constraints on these models.
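The trial procedure just described can be sketched as follows. The measured depths and errors below are placeholders rather than the values from Table \[eclipse\_param\], but the logic (draw eleven depths from a distribution of width $\delta$, compute the reduced $\chi^2$ against the measurements, count trials with reduced $\chi^2 \le 1$) follows the text:

```python
import numpy as np

def prob_consistent(measured, errors, delta, dist="gauss",
                    n_trials=10_000, seed=0):
    """Fraction of trials in which eleven depths drawn from a distribution of
    width delta have reduced chi^2 <= 1 relative to the measured depths."""
    rng = np.random.default_rng(seed)
    n = len(measured)
    # Center the distribution on the error-weighted mean of the measurements.
    center = np.sum(measured / errors**2) / np.sum(1.0 / errors**2)
    if dist == "gauss":
        draws = rng.normal(center, delta, size=(n_trials, n))
    else:  # boxcar of full width 2*delta
        draws = rng.uniform(center - delta, center + delta, size=(n_trials, n))
    chi2_red = np.sum(((draws - measured) / errors) ** 2, axis=1) / n
    return np.mean(chi2_red <= 1.0)

measured = np.full(11, 0.045)   # hypothetical eclipse depths (%)
errors = np.full(11, 0.009)     # hypothetical 1-sigma errors (%)
p_small = prob_consistent(measured, errors, delta=0.001)
p_large = prob_consistent(measured, errors, delta=0.05)
```

Scanning `delta` and recording the probability traces out the curve from which the quoted $1\sigma$, $2\sigma$, and $3\sigma$ variability limits are read off.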
Conclusions
===========
In this paper we present *Spitzer* observations of eight transits and eleven secondary eclipses of GJ 436b at 3.6, 4.5, and 8.0 \micron, which allow us to derive improved values for the planet’s orbital ephemeris, eccentricity, inclination, radius, and other system parameters. We discuss the effects that our assumptions about the longitude of periastron and stellar limb-darkening profiles have on our best-fit transit parameters, and find that our best-fit parameters vary by $1\sigma$ or less in all cases. We find that all parameters are consistent with a constant value over the two-year period spanned by our observations, with the exception of the measured transit depths and times in the 3.6 and 4.5 \micron bands. We find that the 3.6 \micron radius ratio measured on UT 2009 Jan 28 is $4.7\sigma$ larger than the value measured on UT 2009 Jan 9 in this same band, and the 4.5 \micron radius ratio from UT 2009 Jan 30 is $2.9\sigma$ larger than the value measured on UT 2009 Jan 17. The level of significance for these changing radius ratios remains high even after accounting for the effects of residual correlated noise in the data.
We also present an improved estimate for GJ 436b’s 8 \micron secondary eclipse depth, based on eleven eclipse observations in this bandpass. We find that the new depth is consistent with previous models described in @stevenson10 and @madhu10, although we prefer solutions with modestly lower effective temperatures (790 K instead of 860 K). We use the shape of the eclipse ingress and egress to search for the presence of a non-uniform temperature distribution in the planet’s dayside atmosphere, but uncertainties in the predicted time of secondary eclipse ultimately limit our ability to place meaningful constraints on this quantity. Our eclipse depths in this band are consistent with a constant value, and we place a $1\sigma$ upper limit of 17% on variability in the planet’s dayside atmosphere. This limit is in good agreement with the predictions of general circulation models for this planet, which are typically variable at the level of a few percent or less in this bandpass.
Although it is possible that residual correlated noise or a time-varying cloud layer at sub-mbar pressures could explain the apparent transit depth variations, the features observed in the transit light curves appear to be most consistent with the presence of occulted spots or other areas of non-uniform brightness on the surface of the star in the UT 2009 Jan 28 and 30 transits. We find that for the UT 2009 Jan 28 transit the in-transit data have a higher RMS than the out-of-transit data, as would be expected for occulted spots; we would expect poorly corrected systematics to produce an equivalently large RMS in both the in-transit and out-of-transit data, as the star spans the same region of the pixel in both segments. Although we are not as sensitive to such effects in the UT 2009 Jan 30 visit, which has higher levels of correlated noise due to an imperfect correction for intrapixel sensitivity variations, the short separation between these two observations relative to the star’s approximately 50-day rotation period means that the planet is likely to have occulted the same feature in both visits. We also see statistically significant variations in the measured transit times, where the amplitude of the variations is typically smaller for infrared observations than for those obtained in visible light, also suggesting the presence of occulted spots. We note that the anomalously deep transits observed on UT 2009 Jan 28 and 30 also have best-fit transit times that are offset by 30 s ($3.1-3.5\sigma$ significance) from the predicted values. The fact that the three deepest transits are all measured within the same five-day period is also consistent with a single epoch of increased stellar activity.
We reconcile this conclusion with the absence of any variations larger than a few mmag in the star’s visible and infrared fluxes by proposing that the star’s spin axis is likely inclined with respect to our line of sight, which has the effect of reducing the amplitude of any flux variations independent of spot coverage. If this is in fact the case, GJ 436b’s orbit will be misaligned with respect to the star’s spin axis.
If we examine the wavelength-dependent transit depths for the subset of visits that appear to be least affected by spots, we find that the resulting transmission spectrum is consistent with the same reduced methane and enhanced CO abundances used by @stevenson10 to fit the planet’s dayside emission spectrum. These same transit data are also consistent with models including an opaque cloud layer at a pressure of approximately 50 mbar or less in the planet’s atmosphere, which reduces the amplitude of the absorption features in the model spectra. We find no convincing evidence for the strong methane absorption reported by @beaulieu10, although we note that our conclusions vary significantly depending on which transits we include in our analysis. It is possible that all measured transit depths are affected to varying degrees by stellar activity, in which case it may not be feasible to characterize the planet’s transmission spectrum using broadband photometry obtained over multiple epochs. Because active regions occulted by the planet display a characteristic wavelength-dependence and also alter the local shape of the transit light curve, high signal-to-noise grism spectroscopy of the transit over multiple epochs would help to resolve this issue. Such observations would also provide an independent test of the reliability of the *Spitzer* transit data; if similar apparent depth variations were observed in other data sets, it would provide a strong argument against the hypothesis that the apparent depth variations in these data might be the result of poorly corrected instrument effects. Lastly, grism spectroscopy could also be used to search for time-varying clouds at sub-mbar pressures, which should produce a featureless transmission spectrum with a uniformly increased depth when present, as compared to the standard cloud-free transmission spectrum.
As indicated by its rotation rate and Ca II H & K emission, GJ 436 is an old and relatively quiet early M star. If the apparent transit depth variations we describe here are indeed due to the occultation of active regions on the star, as appears likely, we would expect similar features to occur frequently in the transit light curves of other planets orbiting M dwarfs at all activity levels. GJ 1214 is currently the only other M star known to host a transiting planet, and has a similar 53-day rotation period and a modestly lower effective temperature of 3000 K compared to GJ 436 [@charbonneau09; @berta10]. A majority of the published data on this system are at visible and near-infrared wavelengths where star spots should be prominent, and several recent papers report the presence of occulted spots in a subset of transit observations [@berta10; @carter10; @kundurthy10]. Such spots might also account for the apparent disagreement in measurements of the planet’s infrared transmission spectrum, which some authors find to be featureless [@bean10; @desert11], while others detect absorption features [@croll11]. HD 189733b is currently the only other exoplanet with repeated *Spitzer* transit observations in the same band; although this planet orbits a relatively active K star [e.g., @knutson10], it exhibits much smaller variations in the measured transit depths and times as compared to GJ 436b [@agol10; @desert10]. This is perhaps not surprising, as the relative fractional spot coverage, spot sizes, and spot temperatures may well be qualitatively different on K stars and M stars.
We would like to thank the anonymous referee for a very thoughtful report, as well as Jonathan Fortney, Megan Shabram, and Nikole Lewis for helpful discussions on the implications of our data for their published models of GJ 436b. We are also grateful to Eric Gaidos for his commentary on the nature of activity on M dwarfs, and Josh Winn for helpful discussions on spin-orbit alignment for GJ 436b. We would also like to thank Howard Isaacson for supplying the [$S_{\mbox{\scriptsize HK}}$]{} values for our activity study, and to acknowledge the Keck observers who obtained the HIRES spectra used for these measurements, including Andrew Howard, John Johnson, Debra Fischer, and Geoff Marcy. This work is based on observations made with the *Spitzer Space Telescope*, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. Support for this work was provided by NASA through an award issued by JPL/Caltech. HAK was supported by a fellowship from the Miller Institute for Basic Research in Science. EA was supported in part by the National Science Foundation under CAREER Grant No. 0645416.
Adams, E., Seager, S., & Elkins-Tanton, L. 2008, , 673, 1160 Agol, E., et al. 2010, , 721, 1861 Alonso, R., et al. 2008, , 487, L5 Bakos, G. Á., et al. 2006, , 650, 1160 Ballard, S., et al. 2010a, , 716, 1047 Ballard, S., et al. 2010b, , 122, 1341 Basri, G., et al. 2011, , 141, 20 Batygin, K., et al. 2009, , 699, 23 Bean, J. L. & Seifahrt, A. 2008, , 487, L25 Bean, J. L., Kempton, E. M.-R., & Homeier, D. 2010, Nature, 468, 669 Beaulieu, J.-P., et al. 2011, , 731, 16 Berta, Z. K., et al. 2011, submitted, arXiv:1012.0518 Borysow, A., Jorgensen, U. G., & Zheng, C. 1997, A&A, 324, 185 Borysow, A. 2002, A&A, 390, 779 Burrows, A., Budaj, J., & Hubeny, I. 2008, , 678, 1436 Butler, R. P., et al. 2004, , 617, 580 Cáceres, C., et al. 2009, , 507, 481 Carter, J. A., et al. 2011, , 730, 82 Charbonneau, D., et al. 2002, , 568, 377 Charbonneau, D., et al. 2005, , 626, 523 Charbonneau, D. et al. 2008, , 686, 1436 Charbonneau, D., et al. 2009, , 462, 891 Claret, A. 2000, , 363, 1081 Claret, A. 2008, , 482, 259 Claret, A. 2009, , 506, 1335 Claret, A., & Hauschildt, P. H. 2003, , 412, 241 Coughlin, J. L., et al. 2008, , 689, L149 Cowan, N. B., Agol, E., & Charbonneau, D. 2007, , 379, 641 Cowan, N. B. & Agol, E. 2011, , 726, 82 Croll, B., et al. 2011, in press, arXiv:1104.1011 Crossfield, I. J. M., et al. 2010, , 723, 1436 Cushing, M. C., Rayner, J. T., & Vacca, W. D. 2005, , 623, 1115 de Kort, J. J. M. A. 1954, Ricerche astronomiche, vol. 3, n. 5, Citta del Vaticano : Specola vaticana, 1954., p. 109 Deming, D., Seager, S., & Richardson, L. J. 2005, , 434, 740 Deming, D., Harrington, J., Seager, S., & Richardson, L. J. 2006, , 644, 560 Deming, D., et al. 2007, , 667, L199 Deming, D., et al. 2011, , 726, 95 Demory, B.-O., et al. 2007, , 475, 1125 Desert, J.-M., et al. 2008, , 492, 585 Desert, J.-M., et al. 2009, , 699, 478 Desert, J.-M., et al. 2011, , 526, A12 Desert, J.-M., et al. 2011, , 731, L40 Eastman, J., Siverd, R., & Gaudi, B. S. 2010, , 122, 894 Eaton, J. A., Henry, G. W., & Fekel, F. 
C. 2003, in The Future of Small Telescopes in the New Millennium, Volume II - The Telescopes We Use, ed. T. D. Oswalt (Dordrecht: Kluwer), 189 Fazio, G. G., et al., 2004, , 154, 10 Figueira, P. et al. 2009, , 493, 671 Ford, E. 2005, , 129, 1706 Freedman, R. S., Marley, M. S. & Lodders, K. 2008, S, 174, 504 Fressin, F., et al. 2010, , 711, 374 Gibson, N. P., Pont, F., & Aigrain, S. 2011, , 411, 2199 Gillon, M., et al. 2007a, , 471, L51 Gillon, M., et al. 2007b, , 472, L13 Grillmair, C. J. et al. 2008, , 456, 767 Guillot, T. & Havel, M. 2010, , 527, A20 Hansen, B. M. S. 2008, , 179, 484 Harrington, J., et al. 2007, , 447, 691 Hauschildt, P., Allard, F., Ferguson, J., Baron, E., & Alexander, D. R. 1999, , 525, 871 Henry, G. W. 1999, PASP, 111, 845 Henry, G. W. & Winn, J. N. 2008, , 135, 68 Hut, P. 1981, , 99, 126 Iro, N., & Deming, L. D., 2010, , 712, 218 Isaacson, H., & Fischer, D. 2010, , 725, 875 Ivanov, P. B., & Papaloizou, J. C. B. 2007, , 376, 682 Jenkins, J. S., et al. 2009, , 704, 975 Karkoschka, E. & Tomasko, M. 2010, Icarus, 205, 674 Knutson, H. A., et al. 2007, , 447, 183 Knutson, H. A., et al. 2008, , 673, 526 Knutson, H. A., et al. 2009a, , 690, 822 Knutson, H. A. et al. 2009b, , 703, 769 Knutson, H. A., et al. 2009c, , 703, 769 Knutson, H. A., Howard, A. W., & Isaacson, H. 2010, , 720, 1569 Kundurthy, P. et al. 2011, ApJ, 731, 123 Kurucz, R. 1979, , 40, 1 Kurucz, R. 1994, *Solar Abundance Model Atmospheres for 0, 1, 2, 4, and 8 km/s* CD-ROM No. 19 (Smithsonian Astrophysical Observatory, Cambridge, MA, 1994) Kurucz, R. 1999, *H$_2$O linelist from Partridge & Schwenke (1997), part 2 of 2.* CD-ROM No. 26 (Smithsonian Astrophysical Observatory, Cambridge, MA, 1994) Kurucz, R. 2005, in Memorie Della Societa Astronomica Italiana Supplement, v. 8, p. 14 Langton, J., & Laughlin, G. 2008, , 674, 1106 Laughlin, G., et al. 2009, , 457, 562 Léger, A., et al. 2009, , 506, 287 Lewis, N. K., et al. 2010, , 720, 344 Line, M. R., Liang, M. C., & Yung, Y. L. 
2010, , 717, 496 Linsky, J. L., et al. 2010, , 717, 1291 Loeb, A. 2005, , 623, L45 Lomb, N. R. 1976, Ap&SS, 39, 447 Madhusudhan, N., & Seager, S. 2009, , 707, 24 Madhusudhan, N., & Seager, S. 2011, , 729, 41 Madhusudhan, N., & Winn, J. N. 2009, , 693, 784 Mandel, K. & Agol, E. 2002, , 580, L171 Maness, H. L., et al. 2007, , 119, 90 Markwardt, C. B. 2009, *in* D. A. Bohlender, D. Durand, & P. Dowler, ed., ’Astronomical Society of the Pacific Conference Series’ Vol. 411 of *Astronomical Society of the Pacific Conference Series* pp. 251 Morales-Calderon, M., et al. 2006, , 653, 1454 Nettelmann, N., Kramm, U., Redmer, R., & Neuhäuser, R. 2010, , 523, A26 O'Donovan, F. T., et al. 2010, , 710, 1551 Orosz, J. A., & Hauschildt, P. H. 2000, , 364, 265 Pál, A., et al. 2010, , 401, 2665 Partridge, H., & Schwenke, D. W., 1997, , 106, 4618 Pont, F., et al. 2008a, , 385, 109 Pont, F., et al. 2008b, , 393, L6 Queloz, D., et al. 2009, , 506, 303 Rabus, M., et al. 2009, , 494, 391 Rauscher, E., et al. 2007, , 664, 1199 Reach, W. T. et al. 2005, PASP, 117, 978 Ribas, I., Font-Ribera, A., & Beaulieu, J.-P. 2008, , 677, L59 Rogers, L. A. & Seager, S. 2010, , 712, 974 Rothman, L. S., et al. 2005, J. Quant. Spec. & Rad. Transfer, 96, 139 Scargle, J. D. 1982, , 263, 835 Shabram, M., Fortney, J. J., Green, T. P., & Freedman, R. S. 2011, , 727, 65 Sharp, C. M. & Burrows, A. 2007, , 168, 140 Shporer, A., et al. 2009, , 694, 1559 Spiegel, D. S., Burrows, A., Ibgui, L., Hubeny, I., & Milsom, J. A. 2010, , 709, 149 Sterne, T. E. 1940, Proc. National Academy of Sciences, 26, 36 Stevenson, K. B., et al. 2010, , 464, 1161 Swain, M. R., Vasisht, G., & Tinetti, G. 2008, , 452, 329 Todorov, K., et al. 2010, , 708, 498 Torres, G. 2007, , 671, L65 Vidal-Madjar, A., et al. 2003, , 422, 143 Werner, M. W. et al., 2004, , 154, 1 Williams, P. K., Charbonneau, D., Cooper, C. S., Showman, A. P., & Fortney, J. J. 2006, , 649, 1020 Winn, J. N., et al. 2007a, , 133, 1828 Winn, J. N., et al. 
2007b, , 134, 1707 Winn, J. N., et al. 2008, , 683, 1076 Winn, J. N., Fabrycky, D., Albrecht, S., & Johnson, J. A. 2010a, , 718, L145 Winn, J. N., et al. 2010b, , 723, L223 Zahnle, K., Marley, M. S., Freedman, R. S., Lodders, K., & Fortney, J. J. 2009, , 701, L20 Zahnle, K., Marley, M. S., & Fortney, J. J. 2010, , submitted, arXiv:0911.0728
---
abstract: |
Polydispersity is found to have a significant effect on the potential energy landscape; the average inherent structure energy decreases with polydispersity. Increasing polydispersity at fixed volume fraction decreases the glass transition temperature and the fragility of glass formation analogous to the [*antiplasticization*]{} seen in some polymeric melts. An interesting temperature dependent crossover of heterogeneity with polydispersity is observed at low temperature due to the faster build-up of dynamic heterogeneity at lower polydispersity.
PACS numbers: 64.70.Pf, 82.70.Dd, 61.20.Lc
author:
- Sneha Elizabeth Abraham
- Sarika Maitra Bhattacharyya
- Biman Bagchi
title: 'Energy Landscape, Anti-Plasticization and Polydispersity Induced Crossover of Heterogeneity in Supercooled Polydisperse Liquids'
---
Polydispersity is ubiquitous in nature. It is present in clays, minerals, paint pigments, metal and ceramic powders, food preservatives and in simple homogeneous liquids. It is common in synthetic colloids, which frequently exhibit considerable size polydispersity [@pusey], and is also found in industrially produced polymers, which contain macromolecules with a range of chain lengths. Polydispersity has significant effects on both the structure and dynamics of the system. Experiments [@phan] and simulations [@rastogi; @sear] on colloidal systems show that increasing polydispersity, at a constant volume fraction, lowers structural correlations, pressure, energy and viscosity. Polydisperse colloidal systems are known to be excellent glass formers. Williams et al. [@william] suggest that colloidal glass formation results from a small degree of particle polydispersity. Crystal nucleation in a polydisperse colloid is suppressed due to the increase of the surface free energy [@frenkel]. Studies by several groups [@pinaki] have shown that the glass becomes the equilibrium phase beyond a terminal value of polydispersity.
Despite being natural glass formers, relationships between polydispersity, fragility, energy landscape and heterogeneous dynamics have not been adequately explored in these systems. Because these systems exist in the glassy phase over a wide range of polydispersity, they offer opportunity to test many of the theories and ideas developed in this area in recent years. We find that polydispersity introduces several unique features to the dynamics of these systems not present in the binary systems usually employed to study dynamical features in supercooled liquids and glasses.
In this work we particularly investigate how polydispersity influences the potential energy landscape, fragility and heterogeneous dynamics of polydisperse Lennard-Jones (LJ) systems in the supercooled regime near the glass transition [@murarka]. The polydispersity in size is introduced by random sampling from a Gaussian distribution of particle diameters, $\sigma$. The standard deviation $\delta$ of the distribution divided by its mean $\overline \sigma$ gives a dimensionless parameter, the polydispersity index $S = \frac {\delta} {\overline \sigma}$. The mass $m_{i}$ of particle $i$ is scaled by its diameter as $m_{i} = \overline m(\frac{\sigma_{i}}{\overline \sigma})^3$. Microcanonical (NVE) ensemble MD simulations are carried out at a fixed volume fraction $\phi$ on a system of $N=864$ particles of mean diameter $\overline \sigma=1.0$ and mean mass $\overline m=1.0$ for $S=0.10$, $0.15$ and $0.20$ at $\phi = 0.52$, and for $S=0.10$ and $0.20$ at $\phi=0.54$. All quantities in this study are given in reduced units (length in units of $\overline \sigma$, temperature in units of $\frac{\epsilon}{k_{B}}$ and time in units of $\tau=( \frac{\overline m \overline \sigma^{2}}{\epsilon } ) ^{\frac{1}{2}}$). The LJ interaction parameter $\epsilon$ is assumed to have the same value for all particle pairs.
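The particle setup described above—Gaussian diameters with polydispersity index $S=\delta/\overline\sigma$ and masses scaled as $m_i = \overline m (\sigma_i/\overline\sigma)^3$—can be sketched as follows (a minimal illustration of the initialization, not the simulation code used here):

```python
import numpy as np

def polydisperse_sample(n, sigma_mean=1.0, m_mean=1.0, s=0.10, seed=0):
    """Draw n diameters from a Gaussian with polydispersity index
    S = delta / sigma_mean, and scale masses with diameter cubed."""
    rng = np.random.default_rng(seed)
    sigma = rng.normal(sigma_mean, s * sigma_mean, size=n)
    mass = m_mean * (sigma / sigma_mean) ** 3
    return sigma, mass

# N = 864 particles at S = 0.10, in reduced units (mean diameter and mass = 1).
sigma, mass = polydisperse_sample(864, s=0.10)
s_meas = sigma.std() / sigma.mean()   # sampled polydispersity index
```

For a finite sample the measured index fluctuates slightly about the target $S$; in practice one can also rescale the drawn diameters so that the sample mean and standard deviation match $\overline\sigma$ and $\delta$ exactly.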
At large supercooling the system settles into glassy phase. We first analyze the system from the perspective of potential energy landscape (PEL), which has emerged as an important tool in the study of glass forming liquids [@sastry; @stillinger; @wales]. Fig \[eis-beta\](a) and (b) show the variation of the average inherent structure energy ($\langle e_{IS} \rangle$) with temperature ($T$) at both the volume fractions studied. The value of $\langle e_{IS} \rangle$ remains fairly insensitive to the variation in $T$ at high T before it starts to fall with $T$ (around $T \sim 1.0$). It has been established earlier in the context of the binary mixtures [@sastry; @dwc] that [*the start of fall in $\langle e_{IS} \rangle$ coincides with the onset of non-exponential relaxation in the time correlation functions of the system*]{}. We show in Fig \[eis-beta\](c) that this correlation continues to hold in polydisperse systems. The fall of $\langle e_{IS} \rangle$ with $T$ is consistent with the Gaussian landscape model.
The average inherent structure energy decreases with polydispersity (Fig \[eis-beta\](a) and (b)), which indicates that the packing is more efficient at higher S. In Fig \[rdf\] we plot the inherent structure (IS) and the parent liquid radial distribution functions (rdf). At $S=0.20$ there is hardly any difference between the rdf of the parent liquid and the IS. The coordination numbers $N_{c}$ at $S=0.10$ and $S=0.20$ obtained from the IS rdf are $13.1$ and $14.6$, respectively. This shows that packing is more efficient at higher S, and one would expect a slowing down of the dynamics at higher S. Instead, we find that, similar to colloidal hard spheres, polydisperse LJ systems show a speed-up of relaxation with S. The presence of smaller particles at higher S provides some sort of lubrication [@lionberger; @wil-megen], which speeds up the dynamics of the whole system. A plot of the Mode Coupling Theory (MCT) [@sarika] critical temperature $T_{c}^{i}$ for particles of different sizes $\sigma^{i}$ (inset of Fig \[rdf\](b)) shows that the $T_{c}^{i}$ for the largest-sized particles in the $S=0.20$ system is smaller than that for the smallest-sized particles in the $S=0.10$ system. This tells us that not only the smaller particles in the $S=0.20$ system but the whole system relaxes faster. The rate of growth of the relaxation time upon lowering of T decreases with S (Fig \[rdf\](b)). Hence, as the system is cooled, vitrification is expected to occur at a lower T for the system at higher S. This should lead to a lowering of the glass transition temperature with S.
Fragility characterizes and quantifies the non-Arrhenius transport behavior of glass-forming liquids as they approach the glass transition [@Angell]. To study the effect of polydispersity on fragility, we plot the diffusion coefficients in an Angell-like fragility plot in Fig \[fragility\]. The plot clearly shows that [*increasing polydispersity*]{} at fixed volume fraction [*reduces the fragility*]{} of the liquid, so that the system is a stronger glass former at higher polydispersity. This effect is analogous to the [*antiplasticization*]{} that has been observed in polymer melts [@plast1]. PEL analysis shows that the antiplasticized system has smaller barriers to overcome in order to explore the configuration space [@plast2]. In the rest of the paper we explore the correlations between fragility and non-exponential relaxation/heterogeneous dynamics.
Fragility is usually correlated with the stretch exponent $\beta$, a correlation found to hold for many materials [@ngai-niss]. From the PEL perspective, fragile liquids display a proliferation of well-separated basins, which results in a broad spectrum of relaxation times leading to stretched-exponential dynamics [@stillinger]. The correlation is also consistent within the framework of the coupling model (CM) [@CM], according to which the strength of the intermolecular coupling is given by $(1-\beta)$. The rate of growth of the intermolecular coupling with decreasing T is a measure of fragility, which according to the CM would depend on the rate of fall of $\beta$ with T. We indeed find that as S increases (fragility decreases), the rate of fall of $\beta$ with T decreases (Fig \[eis-beta\](c)). However, if we look only at the $\beta$ values and not their T-dependence, we find that at high T stretching is anti-correlated with fragility, whereas [*at low T, we get the reverse scenario, where the stretching is correlated with fragility*]{}. This leads to a cross-over of the $\beta$ values for different S at intermediate T, as shown in Fig \[eis-beta\](c). The $\beta$ values in Fig \[eis-beta\](c) are obtained by a KWW fit to $F_{s}(k_{max}, t)$. However, these cross-overs are independent of the $k$ values, as shown in Fig \[eis-beta\](d). The interplay between the T-independent [*intrinsic heterogeneity*]{} (due to the particle size and mass distribution) and the dynamic heterogeneity which builds up at low $T$ seems to be the microscopic origin of the anti-correlation between fragility and stretching at high T and of the observed crossover at intermediate T.
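As an aside, the KWW fit used to obtain the $\beta$ values can be sketched in a few lines: the stretched-exponential form $F_{s}(k,t)=\exp[-(t/\tau)^{\beta}]$ linearizes as $\log(-\log F_{s}) = \beta\log t - \beta\log\tau$, so $\beta$ and $\tau$ follow from a least-squares fit. The data below are synthetic (an illustrative assumption, not the simulation data of this work):

```python
import numpy as np

# KWW (stretched-exponential) form: F_s(k, t) = exp(-(t/tau)**beta)
t = np.logspace(-1, 3, 60)                 # lag times (arbitrary units)
fs = np.exp(-(t / 50.0) ** 0.7)            # synthetic data with tau=50, beta=0.7

# linearize: log(-log F_s) = beta*log(t) - beta*log(tau), then least squares
mask = (fs > 1e-6) & (fs < 1.0 - 1e-12)    # keep points where the log-log is defined
y = np.log(-np.log(fs[mask]))
A = np.vstack([np.log(t[mask]), np.ones(mask.sum())]).T
beta_fit, c = np.linalg.lstsq(A, y, rcond=None)[0]
tau_fit = np.exp(-c / beta_fit)            # recovers tau=50, beta=0.7 here
```

In practice one would fit noisy $F_{s}(k_{max},t)$ data from simulation, typically with a nonlinear fit restricted to the $\alpha$-relaxation window.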
To investigate this point in further detail, we study the non-Gaussian parameter $\alpha_{2}(t)$, which also shows a correlation with fragility for most materials [@sokolov]. The non-zero values of $\alpha_{2}(t)$ in a monodisperse system are purely due to the presence of dynamic heterogeneity, whereas in a polydisperse system, in addition to the dynamic heterogeneity, there is an ‘intrinsic heterogeneity’ due to the particle size and mass distribution, which is present at all $T$. Thus, for the latter, $\alpha_{2}(t)$ reflects a coupled effect of both these heterogeneities. As seen in Fig \[ngp\], for a polydisperse system $\alpha_{2}(t)$ is nonzero both in the short-time limit (due to the mass distribution [@poole]) and in the long-time limit (due to the spread in diffusion coefficients with particle size and mass). At high $T$, the non-zero value of $\alpha_{2}(t)$ is predominantly due to the intrinsic heterogeneity and thus increases with S (Fig \[ngp\](d)). As $T$ is lowered, the effects of dynamic heterogeneity start to dominate, as shown by the onset of connected clusters of fast-moving particles [@sear; @weeks] whose size increases as one approaches the glass transition. Since the relaxation time increases with decreasing S (Fig. \[rdf\](b)), there is a faster build-up of dynamic heterogeneity at lower S, which leads to the observed crossovers (Fig \[ngp\](b) and (c)) in the values of $\alpha_{2}(t)$ between different S (similar to that observed for $\beta$ in Fig \[eis-beta\](c)). Hence, at low $T$, one gets the scenario where $\alpha_{2}(t)$ decreases with polydispersity (Fig \[ngp\](d)). Since fragility decreases with S, these crossovers in $\beta$ and $\alpha_{2}(t)$ [*would mean that fragility is correlated only with the dynamic heterogeneity and not with the intrinsic heterogeneity in the system*]{}.
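For concreteness, the non-Gaussian parameter in three dimensions is the standard $\alpha_{2}(t) = 3\langle \Delta r^{4}(t)\rangle / \bigl(5\langle \Delta r^{2}(t)\rangle^{2}\bigr) - 1$, which vanishes for Gaussian displacement statistics. A minimal sketch of its computation from an array of particle displacements, using synthetic (hypothetical) data rather than the trajectories of this work:

```python
import numpy as np

def alpha2(disp):
    """Non-Gaussian parameter alpha_2 = 3<r^4>/(5<r^2>^2) - 1 (3-D definition),
    computed from an (N, 3) array of particle displacements at one lag time."""
    r2 = np.sum(disp**2, axis=1)
    return 3.0 * np.mean(r2**2) / (5.0 * np.mean(r2)**2) - 1.0

# Gaussian displacements give alpha_2 ~ 0; mixing two mobilities (a crude
# stand-in for intrinsic/dynamic heterogeneity) gives alpha_2 > 0
rng = np.random.default_rng(3)
gauss = rng.standard_normal((100000, 3))
mixed = np.concatenate([0.5 * gauss[:50000], 1.5 * gauss[50000:]])
```

Evaluating `alpha2` over a range of lag times traces out the familiar peak near the caging time.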
When the polydispersity is increased at constant volume, we obtain results that are opposite to those obtained from the constant-volume-fraction studies. We find that the dynamics slow down with increasing polydispersity [@poole]. This is because at constant volume, as polydispersity increases, the packing fraction increases [@cai], and hence we find a coupled effect of polydispersity and density.
Our results show that at constant volume fraction, although the increase of polydispersity leads to a more efficient packing, the dynamics become faster due to the lubrication effect. Fragility decreases with polydispersity and is found to be correlated only with the rate of growth of dynamic heterogeneity and not with the intrinsic molecular heterogeneity in the system. These results reveal that the rich dynamics of the polydisperse system can lead to new relaxation mechanisms that deserve further study.
We thank Dr. S. Sastry and Dr. D. Chakrabarti for discussions. This work was supported in part by grants from DST, India. S. E. Abraham acknowledges the CSIR (India) for financial support.
[99]{}
P. N. Pusey and W. van Megen, Nature [**320**]{}, 340 (1986); P. N. Pusey and W. van Megen, Phys. Rev. Lett. [**59**]{}, 2083 (1987).
S. Phan et al, Phys. Rev. E. [**54**]{}, 6633 (1996); C. G. de Kruif et al, J. Chem. Phys [**83**]{}, 4717 (1986); J. Mewis et al, AIChE J. [**35**]{}, 415 (1989); D. S. Pearson and T. Shikata, J. Rheol. [**38**]{}, 601 (1994); P. N. Segre et al, Phys. Rev. Lett. [**75**]{}, 958 (1995).
S. R. Rastogi, N. J. Wagner and S. R. Lustig, J. Chem. Phys. [**104**]{}, 9249 (1996).
R. P. Sear, J. Chem. Phys. [**113**]{}, 4732 (2000).
S. R. Williams, I. K. Snook and W. van Megen , Phys. Rev. E. [**64**]{}, 21506 (2001).
S. Auer and D. Frenkel, Nature [**413**]{}, 711 (2001).
P. Chaudhuri et al, Phys. Rev. Lett. [**95**]{}, 248301 (2005); D. A. Kofke and P. G. Bolhuis, Phys. Rev. E [**59**]{}, 618 (1999); D. J. Lacks and J. R. Wienhoff, J. Chem. Phys. [**111**]{}, 398 (1999).
R. K. Murarka and B. Bagchi, Phys. Rev. E. [**67**]{}, 051504 (2003).
S. Sastry, P. B. Debenedetti and F. H. Stillinger, Nature [**393**]{}, 554 (1998).
F. H. Stillinger, Science [**267**]{}, 1935 (1995); P. B. Debenedetti and F. H. Stillinger, Nature [**410**]{}, 259 (2001).
D. J. Wales, [**Energy Landscapes**]{} (Cambridge University Press, Cambridge, England, 2003).
D. Chakrabarti and B. Bagchi, Phys. Rev. Lett. [**96**]{}, 187801 (2006).
R. A. Lionberger, Phys. Rev. E. [**65**]{}, 61408 (2002).
S. R. Williams and W. van Megen, Phys. Rev. E. [**64**]{}, 041502 (2001); G. Foffi et al, Phys. Rev. Lett. [**91**]{}, 085701-1 (2003).
B. Bagchi and S. Bhattacharyya, Adv. Chem. Phys. [**116**]{}, 67 (2001).
R. A. Riggleman et al, Phys. Rev. Lett. [**97**]{}, 045502 (2006); A. Anopchenko et al, Phys. Rev. E [**74**]{}, 031501 (2006).
R. A. Riggleman, J. F. Douglas and J. J. de Pablo, Phys. Rev. E. [**76**]{}, 011504 (2007).
C. M. Roland and K. L. Ngai, Macromolecules [**24**]{}, 5315 (1991); R. Bohmer et al, J. Chem. Phys. [**99**]{}, 4201 (1993); K. Niss et al, J. Phys: Cond. Matt. [**19**]{}, 76102 (2007).
K. L. Ngai, Comments Solids State Phys. [**9**]{}, 121 (1979); K. L. Ngai, IEEE Trans. Dielectric. Electr. Insul. [**8**]{}, 329 (2001).
N. Kiriushcheva and P. H. Poole, Phys. Rev. E. [**65**]{}, 011402 (2001).
E. R. Weeks et al, Science [**287**]{}, 627 (2000); W. K. Kegel and A. van Blaaderen, Science [**287**]{}, 290 (2000).
V. N. Novikov, Y. Ding and A. P. Sokolov, Phys. Rev. E. [**71**]{}, 61501 (2005).
D. He, N. N. Ekere and L. Cai, Phys. Rev. E. [**60**]{}, 7098 (1999).
C. A. Angell, J. Phys. Chem. Solids [**49**]{}, 863 (1988); C. A. Angell, Science [**267**]{}, 1926 (1995).
---
abstract: 'We investigate generically applicable and intuitively appealing prediction intervals based on leave-one-out residuals. The conditional coverage probability of the proposed interval, given the observations in the training sample, is close to the nominal level, provided that the underlying algorithm used for computing point predictions is sufficiently stable under the omission of single feature-response pairs. Our results are based on a finite sample analysis of the empirical distribution function of the leave-one-out residuals and hold in a non-parametric setting with only minimal assumptions on the error distribution. To illustrate our results, we also apply them to high-dimensional linear predictors, where we obtain uniform asymptotic conditional validity as both sample size and dimension tend to infinity at the same rate. These results show that despite the serious problems of resampling procedures for inference on the unknown parameters [cf. @Bickel83; @Mammen96; @ElKaroui15], leave-one-out methods can be successfully applied to obtain reliable predictive inference even in high dimensions.'
address:
- |
Lukas Steinberger\
Department of Mathematical Stochastics\
University of Freiburg\
Ernst-Zermelo-Stra[ß]{}e 1\
79104 Freiburg im Breisgau, Germany\
- |
Hannes Leeb\
Department of Statistics and Operations Research\
University of Vienna\
Oskar-Morgenstern-Platz 1\
1090 Vienna, Austria\
author:
-
-
bibliography:
- '../../bibtex/lit.bib'
title: 'Conditional predictive inference for high-dimensional stable algorithms'
---
Introduction
============
It is a fundamental task of statistical learning, when given an i.i.d. training sample of feature-response pairs $(x_i,y_i)$ and an additional feature vector $x_0$, to provide a point prediction for the corresponding unobserved response variable $y_0$. In such a situation, a prediction interval that contains the unobserved response variable with a prescribed probability provides valuable additional information to the practitioner. In many applications, when measurements are costly, a training sample is obtained only once and is subsequently used to repeatedly construct point and interval predictions as new measurements of feature vectors become available. In such a situation, it is desirable to control the conditional coverage probability of the prediction interval given the observations in the training sample, rather than the unconditional probability.
We study a very simple method based on leave-one-out residuals which is generic in the sense that it applies to a large class of possible point predictors, while providing asymptotically valid prediction intervals. For an i.i.d. sample of $n$ feature-response pairs $T_n = (x_i,y_i)_{i=1}^n$ and an additional feature vector $x_0$, suppose that we have decided to use a prediction algorithm $M_n(T_n,x_0)$ to produce a point prediction $\hat{y}_0 = M_n(T_n,x_0)$ for the real unobserved response $y_0$. If $T_n^{[i]} = (x_j,y_j)_{j\ne i}$ is the sample without the $i$-th observation pair, compute leave-one-out residuals $\hat{u}_i = y_i - M_{n-1}(T_n^{[i]},x_i)$. Finally, to obtain a prediction interval for $y_0$, compute appropriate empirical quantiles $\hat{q}_{\alpha_1}$ and $\hat{q}_{\alpha_2}$ from the collection $\hat{u}_1, \dots, \hat{u}_n$ and report the leave-one-out prediction interval $$PI_{\alpha_1,\alpha_2}^{(L1O)}(T_n,x_0) = [\hat{y}_0+ \hat{q}_{\alpha_1}, \hat{y}_0+ \hat{q}_{\alpha_2}].$$ In this paper we investigate the conditional coverage probability $$P^{n+1}(y_0\in PI_{\alpha_1,\alpha_2}^{(L1O)}(T_n,x_0) \| T_n),$$ first in finite samples, and then in more specific asymptotic settings where the dimension $p$ of the feature vectors $x_i$ increases at the same rate as sample size $n$. We find that even in these challenging scenarios where both $n$ and $p$ are large, the conditional coverage of $PI_{\alpha_1,\alpha_2}^{(L1O)}(T_n,x_0)$ is close to the nominal level $\alpha_2-\alpha_1$. We point out that the analogous procedure based on ordinary residuals $y_i - M_n(T_n,x_i)$ instead of leave-one-out residuals would, in general, not be valid in such a large-$p$ scenario [cf. @Bickel83].
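The construction above can be sketched in a few lines; the use of ordinary least squares as the predictor $M_n$ below is purely illustrative (any learning algorithm can be plugged in), and the empirical quantiles are taken as order statistics $\hat{u}_{(\lceil n\alpha\rceil)}$ of the leave-one-out residuals:

```python
import numpy as np

def loo_prediction_interval(X, Y, x0, alpha1=0.05, alpha2=0.95):
    """Leave-one-out prediction interval for y0 around the OLS point prediction.

    OLS stands in for the predictor M_n for illustration only; any algorithm
    mapping a training sample to a prediction rule can be substituted.
    """
    n = X.shape[0]
    ols = lambda A, b: np.linalg.lstsq(A, b, rcond=None)[0]
    # leave-one-out residuals u_i = y_i - M_{n-1}(T_n^{[i]}, x_i)
    u = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        u[i] = Y[i] - X[i] @ ols(X[mask], Y[mask])
    u.sort()
    # empirical quantiles q_alpha = u_(ceil(n * alpha)) of the residuals
    q1 = u[int(np.ceil(n * alpha1)) - 1] if alpha1 > 0 else u[0] - np.exp(-n)
    q2 = u[int(np.ceil(n * alpha2)) - 1]
    y0_hat = x0 @ ols(X, Y)                 # point prediction for y0
    return y0_hat + q1, y0_hat + q2
```

On a simulated Gaussian linear model, intervals built this way cover new responses at close to the nominal level $\alpha_2-\alpha_1$.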
Despite the remarkable simplicity of this method, and its apparent similarity to the jackknife, we are not aware of any rigorous analysis of its statistical properties. The approach is very similar, in spirit, to the methods proposed in @ButlerRoth80, @Stine85, @Schmoyer92, @Olive07 and @Politis13, in the sense that it relies on resampling and leave-one-out ideas for predictive inference. But the methods from these references, like most resampling procedures in the literature, are investigated only in the classical large-sample asymptotic regime, in which the number of available explanatory variables is held fixed. Notable exceptions are @Bickel83, @Mammen96 and, recently, @ElKaroui15. However, the latter articles draw mainly negative conclusions about resampling methods in high dimensions, arguing, for instance, that the famous residual bootstrap in linear regression, which relies on the consistent estimation of the true unknown error distribution, is unreliable when the number of variables in the model is not small compared to sample size. In contrast, we show that the leave-one-out prediction interval $PI_{\alpha_1,\alpha_2}^{(L1O)}$ does not suffer from these problems because it relies on a direct estimation of the conditional distribution of the prediction error $P^{n+1}(y_0-\hat{y}_0\le t\| T_n)$ instead of an estimator for the true unknown distribution of the disturbances. That the use of leave-one-out residuals leads to more reliable methods in high dimensions was also observed by @ElKaroui15.
Our contribution is threefold. First, we show that the leave-one-out prediction interval is approximately conditionally valid given the training sample $T_n$, in the sense that $$P^{n+1}\left( y_0 \in PI_{\alpha_1, \alpha_2}^{(L1O)}(T_n,x_0)\Big\| T_n\right)\; =\; \alpha_2-\alpha_1 \;+\;\Delta_n.$$ The error term $\Delta_n$ can be controlled in finite samples and asymptotically, provided that the employed prediction algorithm $M_n$ is sufficiently stable under the omission of single feature-response pairs and that it has a bounded (in probability) estimation error as an estimator for the true unknown regression function. It is of paramount importance, however, to point out that we do not need to assume consistent estimation of the regression function and our leading examples are such that consistency fails.
Second, we show that the required stability and approximation properties are satisfied in many cases, including many linear predictors in high-dimensional regression problems, even if the true model is not exactly linear. In particular, the proposed method is always valid if the employed predictor is consistent for the unknown regression function (or for an appropriate surrogate target), and is therefore applicable to complex data structures and methods such as non-parametric regression or LASSO prediction.
Third, we discuss issues of interval length and find that in typical situations predictors with smaller mean squared prediction error lead to shorter prediction intervals. For ordinary least squares prediction, we also investigate the impact of the dimensionality of the regression problem on the interval length and discuss the relationship between the leave-one-out method and an obvious sample splitting technique. All our results hold uniformly over large classes of data generating processes and under weak assumptions on the unknown error distribution (e.g., the errors may be heavy tailed and non-symmetric, and the standardized design vectors $\operatorname{Cov}[x_i]^{-1/2}x_i$ may have dependent components and a non-spherical geometry).
Our work is greatly inspired by [@ElKaroui13b] and @Bean13 [see also @ElKaroui13; @ElKaroui18], who investigate efficiency of general $M$-estimators in linear regression when the number of regressors $p$ is of the same order of magnitude as sample size $n$. In particular, the $M$-estimators studied in these references provide one leading example of a class of linear predictors for which our construction of prediction intervals leads to conditionally valid predictive inference even in high dimensions.
The remainder of the paper is organized as follows. In the following Subsection \[sec:related\] we give a brief overview of alternative methods from the large body of literature on predictive inference in regression. Subsection \[sec:notation\] introduces the notation that is used throughout the paper. Sections \[sec:main\] and \[sec:linpred\] proceed along a general-to-specific scheme. We begin, in Subsection \[sec:L1OPI\], by introducing the general leave-one-out method and the notion of conditional validity and we take a first step towards proving that the latter property is satisfied. In Subsection \[sec:stability\], we draw the connection between conditional validity and algorithmic stability and provide generic sufficient conditions for conditional validity. In Section \[sec:linpred\] we then show that these conditions can even be verified in challenging statistical scenarios where consistent estimation of the regression function and bootstrap consistency usually fail. In particular, we consider linear predictors based on James-Stein-type estimators and based on regularized $M$-estimators in a situation where the number of regressors $p$ is not small relative to sample size $n$. We also take a closer look at the ordinary least squares estimator, because its simplicity allows for a rigorous discussion of the resulting interval length. In Section \[sec:consistency\] we then also discuss the important case where the employed predictor is consistent (possibly for some pseudo target rather than the true regression function) and we provide examples on non-parametric regression and high-dimensional LASSO. The case of consistency is an important test case for our method. Finally, Section \[sec:discussion\] provides some further discussions and we sketch possible extensions of our results. Most of the proofs are deferred to the supplementary material.
Related work {#sec:related}
------------
In a fully parametric setting, predictive inference is essentially a special case of parametric inference [see, e.g., @Cox74 Section 7.5]. Constructing valid prediction sets becomes much more challenging, however, if one is interested in a non-parametric setting. By non-parametric, we do not only mean that the statistical model under consideration cannot be indexed by a finite dimensional Euclidean space, but, more precisely, that the random fluctuations $y_i-{{\mathbb E}}[y_i\|x_i]$ about the conditional mean function cannot be described by a parametric family of distributions.
### Tolerance regions
A rather well-researched and classical topic in the statistics literature is the construction of so-called tolerance regions or tolerance limits, which are closely related to prediction regions. A tolerance region is a set-valued estimate $TR_\alpha(T_n)\subseteq{{\mathbb R}}^m$ based on i.i.d. $m$-variate data $z_1,\dots, z_n$, $T_n = (z_1,\dots, z_n)$, such that the probability of covering an independent copy $z_0$ is close to a prescribed confidence level. More precisely, a $(p,1-\alpha)$ tolerance region $TR$ is such that $P^n(P^{n+1}(z_0\in TR\|T_n)\ge p)=1-\alpha$, and $TR$ is called a $\beta$-expectation tolerance region if ${{\mathbb E}}_{P^n}[P^{n+1}(z_0\in TR\| T_n)] = P^{n+1}(z_0\in TR) = \beta$ [cf. @Krish09]. The study of non-parametric tolerance regions goes back at least to @Wilks41 [@Wilks42], @Wald43 and @Tukey47 (see @Krish09 for an overview and further references) and is traditionally based on the theory of order statistics of i.i.d. data. These researchers already obtained multivariate distribution-free methods, that is, tolerance regions that achieve a certain type of validity in finite samples without imposing parametric assumptions. The connection to prediction regions is apparent. If $z_i=(x_i,y_i)$, then a tolerance region $TR_\alpha(T_n)$ for $z_0 = (x_0,y_0)$ can be immediately used to define a prediction region for $y_0$, by setting $PR_\alpha(T_n,x_0) = \{y: (x_0,y)\in TR_\alpha(T_n)\}$. However, this is arguably not the most economical way of constructing a prediction region. In fact, the construction of a multivariate and possibly high-dimensional tolerance region appears to be a more ambitious goal than the construction of a prediction region for a univariate response variable.
In particular, since estimation of the full density of $z_0$ is usually not feasible if the dimension $m$ is non-negligible compared to sample size $n$, one has to specify a shape for the tolerance region $TR_\alpha$ and it is not obvious which shapes are preferable in a non-parametric setting. For example, @Bucc01 provide results for smallest possible hyperrectangles and ellipsoids, but obtain only the classical large sample asymptotic results with fixed dimension. @Chatterjee80 estimate the density non-parametrically, which fails in high dimensions. @Li08 use a notion of data depth to avoid the specification of the shape, but the fully data driven method, again, is only shown to be valid asymptotically, with the dimension fixed. Finally, numerically computing the $x_0$-cut of $TR_\alpha$ to obtain $PR_\alpha$ is computationally demanding and the result is sensitive to the shape of $TR_\alpha$.
### Conformal prediction
A strand of literature that has emerged from the early ideas of non-parametric tolerance regions, but is more prominent within the machine learning community than the statistics community, is called conformal prediction [@Vovk99; @Vovk05; @Vovk09]. Conformal prediction is a very flexible general framework for the construction of prediction regions that can be used in conjunction with any learning algorithm. The general idea is to construct a pivotal $p$-value $\pi(y)$ to test $H_0: y_0=y$ based on the sample $T_n$ and $x_0$ and to invert the test to obtain a prediction region for $y_0$, i.e., $PR_\alpha = \{y: \pi(y)\ge \alpha\}$. The method was primarily designed for an on-line learning setup [cf. @Vovk09], but has recently been popularized in the statistics community by @Lei13 [@Lei16] and @Lei14, who study it as a batch method. Aside from their flexibility, conformal prediction methods have the advantage that they are valid in finite samples, in the sense that the unconditional coverage probability $P^{n+1}(y_0\in PR_\alpha)$ is no less than the nominal level $1-\alpha$, provided only that the feature-response pairs $(x_0,y_0), (x_1,y_1), \dots, (x_n,y_n)$ are exchangeable. On the other hand, their practical implementation is not so straightforward, because for the test inversion, the $p$-value $\pi$ has to be evaluated on a grid of possible $y$ values, which is especially tricky if the conformal prediction region is not an interval (see @Chen17 and @Lei17 for further discussion of these issues). Moreover, it is not clear whether the classical conformal methods can also provide a form of conditional validity. In @Vovk12, a version of conformal prediction was presented that also achieves a certain type of (approximate) conditional validity. However, the method relies on a sample splitting idea, which usually makes the prediction region unnecessarily wide (see Sections \[sec:SS\] and \[sec:naive\] for further discussion of sample splitting techniques).
Preliminaries and notation {#sec:notation}
--------------------------
For $p\in{{\mathbb N}}$, let $\mathcal Y \subseteq {{\mathbb R}}$ and $\mathcal X\subseteq {{\mathbb R}}^p$ be Borel measurable sets and let $\mathcal Z= \mathcal X\times \mathcal Y$. Moreover, let $\mathcal P$ be some class of Borel probability measures on $\mathcal Z$ and, for $n\in{{\mathbb N}}$, $n\ge 2$, let $P^n$ denote the $n$-fold product measure of $P\in\mathcal P$. For $P\in\mathcal P$, we write $z_0=(x_0,y_0)$ for a random vector distributed according to $P$ and we write $T_n = (z_i)_{i=1}^n$, $z_i = (x_i, y_i)$, for a training sample, where $z_0,z_1,\dots, z_n$ are independent and identically distributed according to $P$. By $m_P(x) := {{\mathbb E}}_P[y_0\|x_0=x]$, $m_P:\mathcal X\to{{\mathbb R}}$, we denote (a version of) the true unknown regression function, if it exists. We sometimes express the training data $T_n$ in matrix form where $X = [x_1,\dots, x_n]'$ is of dimension $n\times p$ and $Y = (y_1,\dots, y_n)'$ is a random $n$-vector. Moreover, $X'$ denotes the transpose of $X$, and we write $(X'X)^\dagger$ for the Moore-Penrose inverse of $X'X$. Similarly, we write $X_{[i]} = [x_1,\dots, x_{i-1},x_{i+1},\dots, x_n]'$ and $Y_{[i]} = (y_1,\dots, y_{i-1},y_{i+1},\dots, y_n)'$.
Next, we formally define the notion of a (learning) algorithm and that of a predictor (or estimator) $\hat{m}_n$ and its leave-one-out equivalent $\hat{m}_n^{[i]}$. Consider a measurable function $M_{n,p} : \mathcal Z^n \times \mathcal X \to {{\mathbb R}}$. $M_{n,p}$ is also called a learning algorithm. For some fixed vector $x\in\mathcal X$, we set $\hat{m}_n(x) = M_{n,p}(T_n,x)$ and $\hat{m}_n^{[i]}(x) = M_{n-1,p}(T_n^{[i]},x)$, where $T_n^{[i]} = (z_j)_{j\ne i}$, $i=1,\dots, n$, denotes the reduced training sample where the observation $z_i=(x_i,y_i)$ has been deleted. Thus whenever we are talking about a predictor, we implicitly talk about the pair of functions $(M_{n,p}, M_{n-1,p})$. A predictor $\hat{m}_n$ is called *symmetric* if for every choice of $z_1,\dots, z_n\in\mathcal Z$, every $x\in\mathcal X$ and every permutation $\pi$ of $n$ elements, $M_{n,p}((z_i)_{i=1}^n,x) = M_{n,p}((z_{\pi(i)})_{i=1}^n,x)$, and if the same holds true for $M_{n-1,p}$. Since the training data $T_n = (z_1,\dots, z_n)$ are assumed to be i.i.d. it is natural to consider symmetric predictors. Also note that, although computationally demanding, in principle any predictor $\hat{m}_n$ can be symmetrized by averaging over all possible permutations of the training data.
If $t\in\mathcal Z^n$ and $A(t)\in\mathcal B(\mathcal Z)$ is a Borel subset of $\mathcal Z$, then we denote the conditional probability of $A(T_n)$ given the training sample $T_n=t$ by $P^{n+1}(A(T_n)\|T_n=t) := P(A(t))$. For example, if $PI(t,x)$ is an interval depending on $t\in\mathcal Z^n$ and $x\in\mathcal X$, then $P^{n+1}(y_0\in PI(T_n,x_0)\|T_n=t) := P(\{(x,y)\in\mathcal Z : y\in PI(t,x)\})$. If $f:D\to{{\mathbb R}}$ is a real function on some domain $D$, then $\|f\|_\infty = \sup_{s\in D}|f(s)|$. For $a,b\in{{\mathbb R}}$, we also write $a\lor b = \max(a,b)$, $a\land b = \min(a,b)$ and $a_+ = a\lor 0$, and let $\lceil \delta\rceil$ denote the smallest integer no less than $\delta\in{{\mathbb R}}$. We write $U\stackrel{\mathcal L}{=} V$, if the random quantities $U$ and $V$ are equal in distribution and the underlying probability space is clear from the context. By a slight abuse of notation, we also write $U\stackrel{\mathcal L}{=} \mathcal L_0$ if the random variable $U$ is distributed according to the probability law $\mathcal L_0$ and, again, the underlying probability space is clear from the context.
For our asymptotic statements, we will also need the following conventions. If for $n\in{{\mathbb N}}$, $\mathcal P_n$ is a collection of probability distributions on $\mathcal Z_n\subseteq{{\mathbb R}}^{p_n+1}$ and $\phi_n:\mathcal Z^n\times \mathcal P_n \to {{\mathbb R}}$ is a function such that for every $P\in\mathcal P_n$, $t\mapsto \phi_n(t,P)$ is measurable, then we say that $\phi_n$ is $\mathcal P_n$-uniformly bounded in probability if $\limsup_{n\to\infty}\sup_{P\in\mathcal P_n} P^n(|\phi_n(T_n,P)|>M) \to 0$, as $M\to\infty$, and write $\phi_n = O_{\mathcal P_n}(1)$. If $\sup_{P\in\mathcal P_n} P^n(|\phi_n(T_n,P)|>{\varepsilon}) \to 0$, as $n\to\infty$, for every ${\varepsilon}>0$, then we say that $\phi_n$ converges $\mathcal P_n$-uniformly in probability to zero and write $\phi_n = o_{\mathcal P_n}(1)$. Similarly, we say that $\phi_n$ converges $\mathcal P_n$-uniformly in probability to $\psi_n:\mathcal Z^n\times\mathcal P_n\to{{\mathbb R}}$, which is also assumed to be measurable in its first argument, if $|\phi_n-\psi_n| = o_{\mathcal P_n}(1)$.
Main results {#sec:main}
============
Leave-one-out prediction intervals and conditional validity {#sec:L1OPI}
-----------------------------------------------------------
For $\alpha\in(0,1)$, we want to construct a prediction interval $PI_{\alpha}(T_n,x_0) = [\hat{m}_n(x_0) + L_\alpha(T_n), \hat{m}_n(x_0) + U_\alpha(T_n)]$ for $y_0$ around the point prediction $\hat{y}_0=\hat{m}_n(x_0)$, where $L_\alpha$ and $U_\alpha$ are measurable functions on $\mathcal Z^n$, such that $$\begin{aligned}
\label{eq:valid}
\sup_{P\in \mathcal P} {{\mathbb E}}_{P^n} \left[
\left| P^{n+1} \left( y_0 \in PI_{\alpha}(T_n,x_0) \Big\| T_n \right) - (1-\alpha) \right| \right]\end{aligned}$$ is small. We cannot expect the expression in \[eq:valid\] to be equal to zero for some fixed $n$ and a reasonably large class $\mathcal P$ (see Remark \[rem:no-exact-validity\] below). Therefore, we are content with \[eq:valid\] being close to zero as $n$, and possibly also $p$, is large. This notion of conditional validity is related to what @Vovk13 calls *training conditional validity*, which is itself closely related to the conventional notion of a $(1-\alpha, {\varepsilon})$ tolerance region for small ${\varepsilon}$ [cf. @Krish09]. However, these conventional definitions require only that the conditional coverage probability $P^{n+1}(y_0\in PI_\alpha(T_n,x_0)\|T_n)$ is no less than the prescribed confidence level $1-\alpha$, with high probability, whereas the requirement that \[eq:valid\] is small also excludes overly conservative procedures. Note that if \[eq:valid\] is small, then also $$\begin{aligned}
&\left| P^{n+1} \left( y_0 \in PI_{\alpha}(T_n,x_0) \right) - (1-\alpha) \right|\\
&\quad\quad=
\left| {{\mathbb E}}_{P^n}\left[ P^{n+1} \left( y_0 \in PI_{\alpha}(T_n,x_0) \Big\| T_n \right) - (1-\alpha) \right]\right|\\
&\quad\quad\le
{{\mathbb E}}_{P^n}\left[ \left| P^{n+1} \left( y_0 \in PI_{\alpha}(T_n,x_0) \Big\| T_n \right) - (1-\alpha) \right| \right],\end{aligned}$$ will be small. Hence, the prediction interval is also approximately unconditionally valid.
If the training data $T_n$ and the distribution $P$ are such that the conditional distribution function $s\mapsto\tilde{F}_n(s) := P^{n+1}(y_0 - \hat{m}_n(x_0) \le s\| T_n)$ is continuous, then, for $0\le \alpha_1 < \alpha_2\le 1$ fixed, there is an optimal shortest but infeasible interval $$\begin{aligned}
PI_{\alpha_1,\alpha_2}^{(OPT)} = [\hat{m}_n(x_0) + \tilde{q}_{\alpha_1}, \hat{m}_n(x_0) + \tilde{q}_{\alpha_2}], \label{eq:OPTPI}\end{aligned}$$ among the set of all prediction intervals $PI$ that satisfy $$\begin{aligned}
P^{n+1}\left(y_0 \le \inf PI\Big\| T_n \right) &= \alpha_1, \quad\text{and}\label{eq:alpha1}\\
P^{n+1}\left(y_0 \ge \sup PI\Big\| T_n \right) &= 1 - \alpha_2, \label{eq:alpha2}\end{aligned}$$ and are of the form $PI = PI(T_n, x_0) = [\hat{m}_n(x_0) + L(T_n), \hat{m}_n(x_0) + U(T_n)]$. Simply choose $\tilde{q}_{\alpha_1}$ to be the largest $\alpha_1$-quantile of $\tilde{F}$ and $\tilde{q}_{\alpha_2}$ to be the smallest $\alpha_2$-quantile of $\tilde{F}_n$. This gives the user the flexibility to choose precisely what error probability of under and over-prediction she is willing to accept. Thus, for $PI_{\alpha_1,\alpha_2}^{(OPT)}$, is actually equal to zero (for $\alpha_1=1-\alpha_2=\alpha/2$), at least if $\mathcal P$ contains only probability distributions on $\mathcal Z$ for which $\tilde{F}_n:{{\mathbb R}}\to[0,1]$ is almost surely continuous.
We propose the following simple jackknife-type idea to approximate the optimal infeasible procedure: for $\alpha\in[0,1]$, let $\hat{q}_{\alpha}$ denote an empirical $\alpha$-quantile of the sample $\hat{u}_1,\dots, \hat{u}_n$ of leave-one-out residuals $\hat{u}_i = y_i - \hat{m}_n^{[i]}(x_i)$. To be more precise, we set $\hat{q}_\alpha = \hat{u}_{(\lceil n\alpha\rceil)}$ if $\alpha>0$ and $\hat{q}_0 = \hat{u}_{(1)}-e^{-n}$ (any number strictly less than $\hat{u}_{(1)}$ would do), where $\hat{u}_{(1)}\le \hat{u}_{(2)}\le\dots\le\hat{u}_{(n)}$ are the order statistics of the leave-one-out residuals. Then the leave-one-out prediction interval is given by $$\begin{aligned}
\label{eq:L1OPI}
PI_{\alpha_1,\alpha_2}^{(L1O)}(T_n,x_0) \;=\; \hat{m}_n(x_0) \;+\; \Big( \hat{q}_{\alpha_1}, \;\hat{q}_{\alpha_2}\Big].\end{aligned}$$ Excluding the left endpoint turns out to be convenient for proving Lemma \[lemma:basic\] below. The random distribution functions $$\begin{aligned}
\label{eq:Fhat}
\hat{F}_n(s) := \hat{F}_n(s; T_n) := \frac{1}{n}\sum_{i=1}^n \mathbbm{1}_{(-\infty,s]}(\hat{u}_i)\end{aligned}$$ and $$\begin{aligned}
\label{eq:Ftilde}
\tilde{F}_n(s) := \tilde{F}_n(s; T_n) := P^{n+1}(y_0 - \hat{m}_n(x_0)\le s\|T_n),\end{aligned}$$ $s\in{{\mathbb R}}$, play a crucial role in the analysis of the leave-one-out prediction intervals.
The idea behind the leave-one-out procedure is remarkably simple. To estimate the conditional distribution $\tilde{F}_n$ of the prediction error $y_0 - \hat{m}_n(x_0)$ we simply use the empirical distribution $\hat{F}_n$ of the leave-one-out residuals $\hat{u}_i = y_i -\hat{m}_n^{[i]}(x_i)$. Notice that $\hat{m}_n$ is independent of $(x_0,y_0)$, and $\hat{m}_n^{[i]}$ is independent of $(x_i,y_i)$, and thus, $\hat{u}_i$ has almost the same distribution as the prediction error, except that $\hat{m}_n^{[i]}$ is calculated from one observation less than $\hat{m}_n$. In many cases this difference turns out to be negligible if $n$ is large, even if $p$ is relatively large too. Note, however, that the leave-one-out residuals $(\hat{u}_i)_{i=1}^n$ are not independent.
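As a purely illustrative aside, the construction of $PI_{\alpha_1,\alpha_2}^{(L1O)}$ is easily sketched in code. The following Python snippet (our own; the helper name `loo_prediction_interval` and the simulated Gaussian linear model are not part of the paper) computes the leave-one-out residuals of the OLS predictor by refitting $n$ times, exactly mirroring the generic definition $\hat{u}_i = y_i - \hat{m}_n^{[i]}(x_i)$, and forms the interval from the empirical quantiles $\hat{q}_\alpha = \hat{u}_{(\lceil n\alpha\rceil)}$.

```python
import numpy as np

def loo_prediction_interval(X, Y, x0, alpha1=0.025, alpha2=0.975):
    """Leave-one-out prediction interval for the OLS predictor m_n(x) = x'beta_n.

    Returns (lower, upper) endpoints of PI^{(L1O)}_{alpha1,alpha2}(T_n, x0).
    """
    n = X.shape[0]
    # leave-one-out residuals u_i = y_i - m_n^{[i]}(x_i); for OLS this loop
    # could be avoided via u_i = (y_i - x_i'beta_n)/(1 - h_ii), but we follow
    # the generic definition that applies to any symmetric predictor
    u = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        beta_i, *_ = np.linalg.lstsq(X[mask], Y[mask], rcond=None)
        u[i] = Y[i] - X[i] @ beta_i
    u.sort()
    # empirical quantiles: q_alpha = u_(ceil(n*alpha)) for alpha > 0,
    # and q_0 strictly below the smallest residual
    q1 = u[int(np.ceil(n * alpha1)) - 1] if alpha1 > 0 else u[0] - np.exp(-n)
    q2 = u[int(np.ceil(n * alpha2)) - 1]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    m0 = x0 @ beta
    return m0 + q1, m0 + q2

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
Y = X @ beta + rng.standard_normal(n)
x0 = rng.standard_normal(p)
lo, hi = loo_prediction_interval(X, Y, x0)
```

Note that the interval is computed from the training sample alone; no knowledge of the error distribution is required.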
The following elementary result shows that, indeed, the main ingredient to establish conditional validity of the leave-one-out prediction interval in is consistent estimation of $\tilde{F}_n$ in Kolmogorov distance.
\[lemma:basic\] For $0\le\alpha_1<\alpha_2\le1$, and if the fixed (non-random) training sample $T_n\in\mathcal Z^n$ is such that the leave-one-out residuals $\hat{u}_i = \hat{u}_i(T_n)$, $i=1,\dots, n$, are all distinct, then the leave-one-out prediction interval defined in satisfies, $$\begin{aligned}
\left| P \left( y_0 \in PI_{\alpha_1,\alpha_2}^{(L1O)}(T_n,x_0) \right) \;-\; \frac{\lceil n\alpha_2\rceil - \lceil n\alpha_1 \rceil }{n}\right| \quad\le\quad
2\|\hat{F}_n - \tilde{F}_n\|_\infty.\end{aligned}$$
Note that the inequality of Lemma \[lemma:basic\] is a purely algebraic statement for a fixed training set $T_n$. Also note that the coverage probability $P ( y_0 \in PI_{\alpha_1,\alpha_2}^{(L1O)}(T_n,x_0) )$ is a version of the conditional probability $P^{n+1}( y_0 \in PI_{\alpha_1,\alpha_2}^{(L1O)}(T_n,x_0)\| T_n )$.
By definition, $$\begin{aligned}
&P \left( y_0 \in PI_{\alpha_1,\alpha_2}^{(L1O)}(T_n,x_0) \right)
\;=\;
\tilde{F}_n(\hat{q}_{\alpha_2}) - \tilde{F}_n(\hat{q}_{\alpha_1}) \\
&\quad\;=\;
\tilde{F}_n(\hat{q}_{\alpha_2}) - \hat{F}_n(\hat{q}_{\alpha_2})
+ \hat{F}_n(\hat{q}_{\alpha_1}) - \tilde{F}_n(\hat{q}_{\alpha_1})
+\hat{F}_n(\hat{q}_{\alpha_2}) - \hat{F}_n(\hat{q}_{\alpha_1}).\end{aligned}$$ For $\alpha_1>0$, $$\begin{aligned}
n\hat{F}_n(\hat{q}_{\alpha_2}) - n\hat{F}_n(\hat{q}_{\alpha_1})
&=
\left| \{ i\le n : \hat{u}_{(i)} \le \hat{u}_{(\lceil n\alpha_2\rceil)} \}\right|
-
\left| \{ i\le n : \hat{u}_{(i)} \le \hat{u}_{(\lceil n\alpha_1\rceil)} \}\right| \\
&=
\lceil n\alpha_2\rceil - \lceil n\alpha_1 \rceil,\end{aligned}$$ because the leave-one-out residuals are distinct. For $\alpha_1=0$, since $\hat{q}_0 < \hat{u}_{(1)}$ by construction, $$n\hat{F}_n(\hat{q}_{\alpha_2}) - n\hat{F}_n(\hat{q}_{0})
=
\left| \{ i\le n : \hat{u}_{(i)} \le \hat{u}_{(\lceil n\alpha_2\rceil)} \}\right|
- 0
=
\lceil n\alpha_2\rceil = \lceil n\alpha_2\rceil - \lceil n\alpha_1 \rceil.$$ Thus, in both cases, $$\begin{aligned}
\hat{F}_n(\hat{q}_{\alpha_2}) - \hat{F}_n(\hat{q}_{\alpha_1})
=
\frac{\lceil n\alpha_2\rceil - \lceil n\alpha_1 \rceil }{n},\end{aligned}$$ which concludes the proof.
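The counting identity underlying this proof step can also be checked numerically. The following sketch (ours, purely illustrative) verifies $\hat{F}_n(\hat{q}_{\alpha_2}) - \hat{F}_n(\hat{q}_{\alpha_1}) = (\lceil n\alpha_2\rceil - \lceil n\alpha_1\rceil)/n$ for a sample of almost surely distinct residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 137
u = np.sort(rng.standard_normal(n))  # a.s. distinct "leave-one-out residuals"

F_hat = lambda s: np.mean(u <= s)              # empirical cdf of the residuals
q_hat = lambda a: u[int(np.ceil(n * a)) - 1]   # empirical a-quantile, a > 0

# |lhs - rhs| of the identity for several (alpha1, alpha2) pairs
gaps = [
    abs((F_hat(q_hat(a2)) - F_hat(q_hat(a1)))
        - (np.ceil(n * a2) - np.ceil(n * a1)) / n)
    for a1, a2 in [(0.025, 0.975), (0.1, 0.6), (0.33, 0.34)]
]
```

The identity is exact (up to floating point) for every pair, as the proof shows; it fails only when residuals are tied, which is excluded in Lemma \[lemma:basic\].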
By virtue of Lemma \[lemma:basic\], most of what follows will be concerned with the analysis of $\|\hat{F}_n - \tilde{F}_n\|_\infty$. We are particularly interested in situations where, for a fixed $x\in\mathcal X$, $\hat{m}_n(x)$ does not concentrate around $m_P(x)$ with high probability but remains random (cf. Remark \[rem:Dicker\] below). In such a case, the unconditional distribution of the prediction error $P^{n+1}(y_0-\hat{m}_n(x_0)\le s) = {{\mathbb E}}_{P^n}[\tilde{F}_n(s)]$, the empirical distribution of the ordinary residuals $\frac{1}{n}\sum_{i=1}^n \mathbbm 1_{(-\infty,s]}(y_i-\hat{m}_n(x_i))$ and the true error distribution $P(y_0-m_P(x_0)\le s)$ may not be close to one another, because $\hat{m}_n$ does not contain enough information about the true regression function $m_P$ (see, for example, @Bickel83 and @Bean13 for a linear regression setting $m_P(x) = x'\beta_P$)[^1]. Nevertheless, we will see that even in such a challenging scenario, it is often possible to estimate the conditional distribution $\tilde{F}_n$ of $y_0 - \hat{m}_n(x_0)$, given the training sample $T_n$, by the empirical distribution $\hat{F}_n$ of the leave-one-out residuals.
The role of algorithmic stability {#sec:stability}
---------------------------------
In this section we present general results that relate the uniform estimation error $\|\hat{F}_n-\tilde{F}_n\|_\infty$ to a measure of stability of the estimator $\hat{m}_n$. For our first result, sample size $n\ge2$ and dimension $p\ge 1$ are fixed. We only need the following condition on the class of distributions $\mathcal P$ on $\mathcal Z = \mathcal X\times \mathcal Y$.
1. \[c.density\] Under every $P\in\mathcal P$, the distribution of $z_0 = (x_0,y_0)$ has the following properties:[^2] The regression function $m_P(x) = {{\mathbb E}}_P[y_0\|x_0=x]$ exists and the error term $u_0 := y_0 - m_P(x_0)$ is independent of the regressor vector $x_0$ and has a Lebesgue density $f_{u,P}$ with $\|f_{u,P}\|_\infty<\infty$.
The boundedness of the error density $f_{u,P}$ can be relaxed to a Hölder condition on the cdf of $u_0$ at the expense of a slightly more complicated theory.
Note that by continuity of the cdf of the error distribution $u_0$, for every $\alpha\in(0,1)$, there exists a quantile $q_{u,P}(\alpha)$ such that $P(u_0\le q_{u,P}(\alpha)) = \alpha$. However, $q_{u,P}(\alpha)$ may not be uniquely determined by this requirement.
Building on terminology from @Bousquet02 (see also @Devroye79), we use the following notion of algorithmic stability.
\[def:stable\] For $\eta>0$ and $\mathcal P$ as in \[c.density\], we say the predictor $\hat{m}_n$ is $\eta$-*stable* with respect to $\mathcal P$, if $$\begin{aligned}
\sup_{P\in\mathcal P}{{\mathbb E}}_{P^{n+1}}\left[\left(\|f_{u,P}\|_\infty \left|\hat{m}_n(x_0) - \hat{m}_n^{[i]}(x_0)\right|\right) \land 1\right] \;\le\; \eta, \quad \forall i=1,\dots, n.\end{aligned}$$
By exchangeability of $z_0, z_1,\dots, z_n$, it is easy to see that a symmetric predictor $\hat{m}_n$ is $\eta$-stable w.r.t. $\mathcal P$ if, and only if, ${{\mathbb E}}_{P^{n+1}}[(\|f_{u,P}\|_\infty|\hat{m}_n(x_0) - \hat{m}_n^{[1]}(x_0)|)\land 1] \le \eta$ for all $P\in\mathcal P$. Also note that a $0$-stable predictor cannot depend on the training data (cf. Lemma \[lemma:0stable\]).
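For intuition, the stability coefficient of a given predictor can be estimated by Monte Carlo. The sketch below (ours; a single Gaussian $P$ with $\|f_{u,P}\|_\infty = 1/\sqrt{2\pi}$, and $i=1$, which suffices by the symmetry remark above) approximates ${{\mathbb E}}_{P^{n+1}}[(\|f_{u,P}\|_\infty|\hat{m}_n(x_0) - \hat{m}_n^{[1]}(x_0)|)\land 1]$ for the OLS predictor.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, reps = 100, 10, 300
f_sup = 1.0 / np.sqrt(2 * np.pi)  # sup-norm of the N(0,1) error density

vals = []
for _ in range(reps):
    X = rng.standard_normal((n, p))
    beta = np.ones(p)
    Y = X @ beta + rng.standard_normal(n)
    x0 = rng.standard_normal(p)
    b_full, *_ = np.linalg.lstsq(X, Y, rcond=None)       # m_n
    b_loo, *_ = np.linalg.lstsq(X[1:], Y[1:], rcond=None)  # m_n^{[1]}
    vals.append(min(f_sup * abs(x0 @ (b_full - b_loo)), 1.0))

eta_hat = np.mean(vals)  # Monte Carlo estimate of the stability coefficient
```

For OLS with $p \ll n$ the estimate comes out small, consistent with the intuition that dropping one observation barely moves the prediction.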
We are now in the position to state our main result on the estimation of $\tilde{F}_n(s) = P(y_0 - \hat{m}_n(x_0)\le s)$ by $\hat{F}_n(s) = \frac{1}{n}\sum_{i=1}^n\mathbbm 1_{(-\infty, s]}(\hat{u}_i)$.
\[thm:density\] Suppose the class $\mathcal P$ satisfies Condition \[c.density\] and the estimator $\hat{m}_n$ is symmetric and $\eta$-stable w.r.t. $\mathcal P$. Then, for every $P\in\mathcal P$ and every $L\in(0,\infty)$, we have $$\begin{aligned}
{{\mathbb E}}_{P^{n}}\left[\|\hat{F}_n - \tilde{F}_n\|_\infty \right]
\;&\le\;
\int_{[-L,L]^c} f_{u,P}(s)\,ds + 3\left(\frac{1}{2n}+6\eta\right)^{1/2}\\
&\quad+ \inf_{\mu\in{{\mathbb R}}}P^{n+1}(|m_P(x_0) - \hat{m}_n(x_0)-\mu| > L)\\
&\quad+
2(L\|f_{u,P}\|_\infty)^{1/2}\left( \frac{1}{2n}+6\eta\right)^{1/4}.\end{aligned}$$
For illustration and later use we also provide an asymptotic version of this result.
\[cor:density\] For $n\in{{\mathbb N}}$, let $p=p_n$ be a sequence of positive integers and let $\mathcal P_n$ be as in \[c.density\] but with $\mathcal X = \mathcal X_n\subseteq{{\mathbb R}}^{p_n}$. Suppose that for $P\in\mathcal P_n$, there exists $\sigma_P^2\in(0,\infty)$ such that $\sup_{n\in{{\mathbb N}}}\sup_{P\in\mathcal P_n} \|f_{u/\sigma_P,P}\|_\infty<\infty$, where $f_{u/\sigma_P,P}(s):=\sigma_P f_{u,P}(s\sigma_P)$ is the scaled error density. Moreover, assume that the estimator $\hat{m}_n$ is symmetric and $\eta_n$-stable w.r.t. $\mathcal P_n$, such that $\eta_n\to0$ as $n\to\infty$, and that it has $\mathcal P_n$-uniformly bounded scaled estimation error, i.e., $$\limsup_{n\to\infty} \sup_{P\in\mathcal P_n}P^{n+1}\left(\frac{|m_P(x_0) - \hat{m}_n(x_0)|}{\sigma_P} > M\right) \xrightarrow[M\to\infty]{} 0.$$ If the family of distributions $\{f_{u/\sigma_P,P}: P\in\mathcal P_n, n\in{{\mathbb N}}\}$ on ${{\mathbb R}}$ is uniformly tight, then $$\sup_{P\in\mathcal P_n} {{\mathbb E}}_{P^{n}}\left[\|\hat{F}_n - \tilde{F}_n\|_\infty \right] \xrightarrow[n\to\infty]{} 0.$$ Moreover, for $0\le \alpha_1<\alpha_2\le 1$, the leave-one-out prediction interval is uniformly asymptotically conditionally valid, i.e., $$\sup_{P\in \mathcal P_n} {{\mathbb E}}_{P^{n}} \left[
\left| P^{n+1} \left( y_0 \in PI_{\alpha_1,\alpha_2}^{(L1O)}(T_n,x_0) \Big\| T_n \right) - (\alpha_2-\alpha_1) \right| \right]
\xrightarrow[n\to\infty]{}\;0.$$
Apply Theorem \[thm:density\] with $L=l_n\sigma_P$ and $l_n = o\left(\left( \frac{1}{2n}+6\eta_n\right)^{-1/2}\right)$, $l_n\to\infty$ as $n\to\infty$. For the second claim, note that under \[c.density\], $P^n(\hat{u}_1=\hat{u}_2)=0$ and apply Lemma \[lemma:basic\].
Theorem \[thm:density\] provides an upper bound on the risk of estimating the conditional prediction error distribution $\tilde{F}_n$ by the empirical distribution of the leave-one-out residuals $\hat{F}_n$. The upper bound crucially relies on the properties of the chosen estimator $\hat{m}_n$ for the true unknown regression function $m_P$. If the sample size is sufficiently large and if the estimator is sufficiently stable and has a moderate estimation error, then the parameter $L$ can be chosen such that the upper bound is small. This is what we do in Corollary \[cor:density\]. It is important to note that Theorem \[thm:density\] and Corollary \[cor:density\] are informative also in case the estimator $\hat{m}_n$ is not consistent for $m_P$. The bound of Theorem \[thm:density\] also exhibits an interesting trade-off between the $\eta$-stability of $\hat{m}_n$ and the magnitude of its estimation error. More stable estimators are allowed to be less accurate whereas less stable estimators need to achieve higher accuracy in order to be as reliable for predictive inference purposes as a more stable algorithm.
The proof of Theorem \[thm:density\], among other things, relies on a result of @Bousquet02 which bounds the $L^2$-distance between the generalization error of a predictor $\hat{m}_n$ (i.e., ${{\mathbb E}}_{P^{n+1}}[(y_0-\hat{m}_n(x_0))^2\|T_n]$) and its estimate based on leave-one-out residuals, in terms of the stability properties of $\hat{m}_n$. See Section \[sec:proof:thm:density\] for details.
Theorem \[thm:density\] and Corollary \[cor:density\] show that the leave-one-out prediction interval in is approximately uniformly conditionally valid, i.e., has the property that is small at least for large $n$, provided that the underlying estimator $\hat{m}_n$ has two essential properties. First, the estimator must be $\eta$-stable with respect to the class $\mathcal P$ over which uniformity is desired, with an $\eta$ value that is small if $n$ is large. More precisely, we require $$\begin{aligned}
\eta_n := \sup_{P\in\mathcal P_n}{{\mathbb E}}_{P^{n+1}}\left[\left( \|f_{u,P}\|_\infty |\hat{m}_n(x_0) - \hat{m}_n^{[1]}(x_0)|\right)\land 1\right] \xrightarrow[n\to\infty]{} 0. \label{eq:etan}\end{aligned}$$ This is an intuitively appealing assumption since otherwise the leave-one-out residuals $\hat{u}_i = y_i - \hat{m}_n^{[i]}(x_i)$ may not be well suited to estimate the distribution of the prediction error $y_0 - \hat{m}_n(x_0)$. Second, the estimation error $m_P(x_0) - \hat{m}_n(x_0)$ at the new observation $x_0$ must be bounded in probability, uniformly over the class $\mathcal P$. Formally, $$\begin{aligned}
\limsup_{n\to\infty} \sup_{P\in\mathcal P_n}P^{n+1}\left(\frac{|m_P(x_0) - \hat{m}_n(x_0)|}{\sigma_P} > M\right) \xrightarrow[M\to\infty]{} 0.\label{eq:estError}\end{aligned}$$ This is important to guarantee that the conditional distribution $\tilde{F}_n$ of the prediction error $y_0 - \hat{m}_n(x_0)$ given the training data is tight in an appropriate sense (cf. Lemma \[lemma:Ftilde\]\[l:Tightness\]), so that a pointwise bound on $|\hat{F}_n(t)-\tilde{F}_n(t)|$ can be turned into a uniform bound. The remainder of this paper is therefore mainly concerned with verifying these two conditions on the estimator $\hat{m}_n$ in several different contexts. From now on, as in Corollary \[cor:density\], we will take on an asymptotic point of view.
Linear prediction with many variables {#sec:linpred}
=====================================
In this section we investigate a scenario in which both consistent parameter estimation as well as bootstrap consistency fail [cf. @Bickel83; @ElKaroui15], but the leave-one-out prediction interval is still asymptotically uniformly conditionally valid. See Section \[sec:consistency\] for a discussion of scenarios where consistent parameter estimation is possible. For $\kappa\in[0,1)$, we fix a sequence of positive integers $(p_n)$, such that $p_n/n\to \kappa$ as $n\to\infty$ and $n>p_n+1$ for all $n\in {{\mathbb N}}$. This type of ‘large $p$, large $n$’ asymptotics has the advantage that certain finite sample features of the problem are preserved in the limit, while offering a workable simplification. It turns out that conclusions drawn from this type of asymptotic analysis often provide remarkably accurate descriptions of finite sample phenomena.
When working with linear predictors $\hat{m}_n(x_0) = x_0'\hat{\beta}_n$, and if the feature vectors $x_i$ have second moment matrix $\Sigma_P = {{\mathbb E}}_P[x_0x_0']$ under $P$, the conditions and can be verified as follows. For ${\varepsilon}>0$, $$\begin{aligned}
&{{\mathbb E}}_{P^{n+1}}\left[\left( \|f_{u,P}\|_\infty |\hat{m}_n(x_0) - \hat{m}_n^{[1]}(x_0)|\right)\land 1\right] \\
&\quad\le \left( 1\lor \|f_{u/\sigma_P,P}\|_\infty\right)
\left( P^{n+1}\left( \frac{|x_0'\hat{\beta}_n - x_0'\hat{\beta}_n^{[1]}|}{\sigma_P} > {\varepsilon}\right) + {\varepsilon}\right)\\
&\quad\le \left( 1\lor \|f_{u/\sigma_P,P}\|_\infty\right)
\left( {{\mathbb E}}_{P^n}\left[ \left(\frac{1}{{\varepsilon}^2} \left\|\Sigma_P^{1/2}\left( \hat{\beta}_n-\hat{\beta}_n^{[1]}\right)/\sigma_P\right\|_2^2\right)\land 1\right] + {\varepsilon}\right),\end{aligned}$$ where we have used the conditional Markov inequality along with independence of $x_0$ and $T_n$. Thus follows if the scaled error densities $f_{u/\sigma_P,P}$, $P\in\mathcal P_n$, $n\in{{\mathbb N}}$, are uniformly bounded and $$\label{eq:etanLin}
\sup_{P\in\mathcal P_n} P^n\left( \left\|\Sigma_P^{1/2}\left( \hat{\beta}_n-\hat{\beta}_n^{[1]}\right)\right\|_2/\sigma_P > {\varepsilon}\right) \xrightarrow[n\to\infty]{}0.$$ By a similar argument, we find that follows if $$\begin{aligned}
&\limsup_{n\to\infty} \sup_{P\in\mathcal P_n}P^{n}\left(\left\|\Sigma_P^{1/2}\left( \hat{\beta}_n-\beta_P\right)\right\|_2/\sigma_P > M\right) \xrightarrow[M\to\infty]{} 0 \quad\text{and} \label{eq:estErrorLin}\\
&\limsup_{n\to\infty} \sup_{P\in\mathcal P_n}P\left(\frac{|m_P(x_0) - x_0'\beta_P|}{\sigma_P} > M\right) \xrightarrow[M\to\infty]{} 0,\label{eq:AppLinMod}\end{aligned}$$ for some $\beta_P\in{{\mathbb R}}^{p_n}$.
James-Stein type estimators
---------------------------
Our first example is the class of linear predictors $\hat{m}_n(x_0) = x_0'\hat{\beta}_n^{(JS)}$ based on James-Stein type estimators $\hat{\beta}_n^{(JS)}$ defined below. Here, we can allow for the following class of data generating processes.
1. \[c.non-linmod\] Fix finite constants $C_0>0$ and $c_0>0$ and probability measures $\mathcal L_l$ and $\mathcal L_v$ on $({{\mathbb R}},\mathcal B({{\mathbb R}}))$, such that $\mathcal L_v$ has mean zero, unit variance and finite fourth moment, $\int s^2 \mathcal L_l(ds) = 1$ and $\mathcal L_l([c_0,\infty)) = 1$.
For every $n\in{{\mathbb N}}$, the class $\mathcal P_n = \mathcal P_n(\mathcal L_l, \mathcal L_v, C_0)$ consists of all probability measures on $\mathcal Z^n\subseteq{{\mathbb R}}^{p_n+1}$, such that the distribution of $(x_0,y_0)$ under $P\in\mathcal P_n$ has the following properties: The $x_0$-marginal distribution of $P$ is given by $$x_0\; \stackrel{\mathcal L}{=}\; l_0 \Sigma_P^{1/2}(v_1,\dots, v_{p_n})',$$ where $v_1,\dots, v_{p_n}$ are i.i.d. according to $\mathcal L_v$, $l_0\stackrel{\mathcal L}{=} \mathcal L_l$ is independent of the $v_j$ and $\Sigma_P^{1/2}$ is the unique symmetric positive definite square root of a positive definite $p_n\times p_n$ covariance matrix $\Sigma_P$.
The response $y_0$ has mean zero and its conditional distribution given the regressors $x_0$ under $P$ is $$y_0 \|x_0 \;\stackrel{\mathcal L}{=}\; m_P(x_0) + \sigma_Pu_0,$$ where $u_0$ is independent of $x_0$ and has mean zero, unit variance and fourth moment bounded by $C_0$, $m_P:{{\mathbb R}}^{p_n}\to {{\mathbb R}}$ is some measurable regression function with ${{\mathbb E}}_P[m_P(x_0)]=0$ and $\sigma_P\in(0,\infty)$.
In words, under the distributions in $\mathcal P_n$, the feature-response pair $(x_0,y_0)$ follows a non-Gaussian random design non-linear regression model with regression function $m_P$ and error variance $\sigma_P$. Moreover, the feature vectors $x_i$ are allowed to have a complex geometric structure, in the sense that the standardized design vector $\Sigma_P^{-1/2} x_1$ is not necessarily concentrated on a sphere of radius $\sqrt{p_n}$, as would be the case if $\mathcal L_l$ was supported on $\{-1,1\}$ (see, e.g., @ElKaroui10 [Section 3.2] and @ElKaroui18 [Section 2.3.1] for further discussion of this point). The model $\mathcal P_n$ in \[c.non-linmod\] is non-parametric, because the regression function $m_P$ is unrestricted, up to being centered, and the error distribution is arbitrary, up to the requirements ${{\mathbb E}}_P[u_0] =0$, ${{\mathbb E}}_P[u_0^2]=1$ and ${{\mathbb E}}_P[u_0^4]\le C_0$.
To predict the value of $y_0$ from $x_0$ and a training sample $T_n = (x_i,y_i)_{i=1}^n$ with $n\ge p_n+2$, generated from $P^n$, we consider linear predictors $\hat{m}_n(x_0)=x_0'\hat{\beta}_n(c)$, where $\hat{\beta}_n(c)$ is a James-Stein-type estimator given by $$\begin{aligned}
\hat{\beta}_n(c) \;=\;\begin{cases}
\left( 1 - \frac{cp_n\hat{\sigma}_n^2}{\hat{\beta}_n'X'X\hat{\beta}_n}\right)_+\hat{\beta}_n, \quad&\text{if } \hat{\beta}_n'X'X\hat{\beta}_n >0,\\
0, &\text{if } \hat{\beta}_n'X'X\hat{\beta}_n = 0,
\end{cases} \end{aligned}$$ for a tuning parameter $c\in[0,1]$. Here $\hat{\beta}_n = (X'X)^\dagger X'Y$, $\hat{\sigma}_n^2 = \|Y-X\hat{\beta}_n\|_2^2/(n-p_n)$. The corresponding leave-one-out estimator $\hat{\beta}_n^{[i]}(c)$ is defined equivalently, but with $X$ and $Y$ replaced by $X_{[i]}$ and $Y_{[i]}$. Note that the leave-one-out equivalent of $\hat{\sigma}_n^2 = \hat{\sigma}_n^2(X,Y)$ is given by $$\hat{\sigma}_{n,[i]}^2(X_{[i]},Y_{[i]}) = \hat{\sigma}_{n-1}^2(X_{[i]},Y_{[i]}) = \|Y_{[i]}-X_{[i]}\hat{\beta}_n^{[i]}\|_2^2/(n-1-p_n).$$ The ordinary least squares estimator $\hat{\beta}_n$ belongs to the class of James-Stein estimators. In particular, $\hat{\beta}_n(0) = \hat{\beta}_n$, because, with $P_X:=X(X'X)^\dagger X'$, we have $\|P_XY\|_2^2 = \hat{\beta}_n'X'X\hat{\beta}_n=0$ if, and only if, $Y\in\operatorname{span}(P_X)^\perp=\operatorname{span}(X)^\perp$, and the latter clearly implies $\hat{\beta}_n=0$.
Using James-Stein type estimators for prediction is motivated, e.g., by the optimality results of @Dicker13 and the discussion in @Huber13. The next result shows that in the model \[c.non-linmod\] with $p_n/n\to\kappa\in(0,1)$ and if the deviation from a linear model is not too severe, the James-Stein-type estimators are sufficiently stable and their estimation errors are uniformly bounded in probability, just as required in and .
\[thm:JSmisspec\] For every $n\in{{\mathbb N}}$, let $\mathcal P_n = \mathcal P_n(\mathcal L_l, \mathcal L_v, C_0)$ be as in Condition \[c.non-linmod\] and suppose that under every $P\in\mathcal P_n$, the error term $u_0$ in \[c.non-linmod\] has a Lebesgue density. For $P\in \mathcal P_n$, define $\beta_P$ to be the minimizer of $\beta \mapsto {{\mathbb E}}_P[(y_0-\beta'x_0)^2]$ over ${{\mathbb R}}^{p_n}$. If $p_n/n\to \kappa \in [0,1)$, $0\le c_n\le 1$ for all $n\in{{\mathbb N}}$, and $$\begin{aligned}
\label{eq:MisspecBound}
\sup_{n\in{{\mathbb N}}} \sup_{P\in\mathcal P_n} {{\mathbb E}}_P\left[\left(\frac{m_P(x_0) - \beta_P'x_0}{\sigma_P} \right)^2\right] \;<\;\infty,\end{aligned}$$ then the positive part James-Stein estimator $\hat{\beta}_n(c)$ satisfies , i.e., $$\limsup_{n\to\infty} \sup_{P\in\mathcal P_n} P^n \left( \left\|\Sigma_P^{1/2}(\hat{\beta}_n(c_n) - \beta_P)/\sigma_P\right\|_2 > M \right) \quad\xrightarrow[M\to \infty]{} \quad 0.$$ If, in addition, $\kappa>0$, then for every ${\varepsilon}>0$, is also satisfied, i.e., $$\sup_{P\in\mathcal P_n} P^n\left( \left\|\Sigma_P^{1/2}(\hat{\beta}_n(c_n) - \hat{\beta}_n^{[1]}(c_n))/\sigma_P\right\|_2 > {\varepsilon}\right) \quad\xrightarrow[n\to \infty]{} \quad 0.$$
Regularized $M$-estimators {#sec:ElKaroui}
--------------------------
Another class of linear predictors for which our theory on the leave-one-out prediction interval applies are those based on regularized $M$-estimators investigated by @ElKaroui18 in the challenging scenario where $p/n$ is not close to zero [see also @ElKaroui13b; @Bean13; @ElKaroui13]. For a given convex loss function $\rho:{{\mathbb R}}\to{{\mathbb R}}$ and a fixed tuning parameter $\gamma\in(0,\infty)$ (both not depending on $n$), consider the estimator $$\label{eq:robust}
\hat{\beta}_n^{(\rho)} := {{\operatorname{argmin}}_{b\in{{\mathbb R}}^{p}}} \frac{1}{n} \sum_{i=1}^n \rho(y_i-x_i'b) + \frac{\gamma}{2}\|b\|_2^2.$$ In a remarkable tour de force, @ElKaroui18 studied the estimation error $\|\hat{\beta}_n^{(\rho)} - \beta\|_2$ as $p/n\to \kappa\in(0,\infty)$, in a linear model $y_i=x_i'\beta + u_i$, allowing for heavy tailed errors (including the Cauchy distribution) and non-spherical design [see Section 2.1 in @ElKaroui18 for details on the technical assumptions]. In particular, the author shows that $\|\hat{\beta}_n^{(\rho)} - \beta\|_2$ converges in probability to a deterministic positive and finite quantity $r_\rho(\kappa)$ and characterizes the limit through a system of non-linear equations. On the way to this result, @ElKaroui18 [Theorem 3.9 together with Lemma 3.5 and the ensuing discussion] also establishes the stability property $\|\hat{\beta}_n^{(\rho)} - \hat{\beta}_{n,[1]}^{(\rho)}\|_2\to0$ in probability. Thus, under the assumptions maintained in that reference, , and hold, and the leave-one-out prediction interval based on the linear predictor $\hat{m}_n(x_0)=x_0'\hat{\beta}_n^{(\rho)}$ is asymptotically conditionally valid, provided that also the boundedness and tightness conditions on $\{f_{u/\sigma_P,P}:P\in\mathcal P_n\}$ of Corollary \[cor:density\] are satisfied. Finally, we note that an assessment of the predictive performance of $\hat{\beta}_n^{(\rho)}$ in dependence on $\rho$ requires a highly non-trivial analysis of $r_\rho(\kappa)$. For the asymptotic validity of the leave-one-out prediction interval, however, all the information needed on $r_\rho(\kappa)$ is, that it is finite.
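For concreteness, $\hat{\beta}_n^{(\rho)}$ can be computed for a specific convex loss by direct numerical minimization. The sketch below (ours, purely illustrative and not the asymptotic construction analyzed by @ElKaroui18) uses the Huber loss with a ridge penalty $\gamma=0.5$ and `scipy.optimize.minimize`; the tuning constant `k` and the simulated heavy-tailed model are our choices.

```python
import numpy as np
from scipy.optimize import minimize

def huber(r, k=1.345):
    """Huber loss rho(r): quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= k, 0.5 * r**2, k * a - 0.5 * k**2)

def m_estimator(X, Y, gamma=0.5, k=1.345):
    """Ridge-regularized M-estimator:
    argmin_b (1/n) sum_i rho(y_i - x_i'b) + (gamma/2) ||b||_2^2."""
    n, p = X.shape
    def objective(b):
        return np.mean(huber(Y - X @ b, k)) + 0.5 * gamma * (b @ b)
    res = minimize(objective, np.zeros(p), method="L-BFGS-B")
    return res.x

rng = np.random.default_rng(4)
n, p = 300, 60                       # p/n not close to zero
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)
Y = X @ beta + rng.standard_t(df=2, size=n)  # heavy-tailed errors
b_hat = m_estimator(X, Y)
```

Leave-one-out residuals for this predictor are then obtained exactly as before, by refitting on $T_n \setminus \{z_i\}$.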
Ordinary least squares and interval length {#sec:PIlength}
------------------------------------------
We investigate the special case of the ordinary least squares predictor $\hat{m}_n(x) = x'\hat{\beta}_n = x'(X'X)^\dagger X'Y$ in some more detail, because here also the length $$\left|PI_{\alpha_1,\alpha_2}^{(L1O)}\right| = \hat{q}_{\alpha_2} - \hat{q}_{\alpha_1},$$ of the leave-one-out prediction interval permits a reasonably simple asymptotic characterization. We consider a class $\mathcal P_n^{(lin)} = \mathcal P_n^{(lin)}(\mathcal L_l,\mathcal L_v, \mathcal L_u)$ which is similar to the one of Condition \[c.non-linmod\], but with the additional assumption that the regression function $m_P$ is linear and that the error distribution is fixed.
1. \[c.linmod\] Fix a finite constant $c_0>0$ and probability measures $\mathcal L_l$, $\mathcal L_v$ and $\mathcal L_u$ on $({{\mathbb R}},\mathcal B({{\mathbb R}}))$, such that $\mathcal L_v$ and $\mathcal L_u$ have mean zero, unit variance and finite fourth moment, $\int s^2 \mathcal L_l(ds) = 1$ and $\mathcal L_l([c_0,\infty)) = 1$.
For every $n\in{{\mathbb N}}$, the class $\mathcal P_n^{(lin)} = \mathcal P_n^{(lin)}(\mathcal L_l, \mathcal L_v, \mathcal L_u)$ consists of all probability measures on ${{\mathbb R}}^{p_n+1}$, such that the distribution of $(x_0,y_0)$ under $P\in\mathcal P_n$ has the following properties: The $x_0$-marginal distribution of $P$ is given by $$x_0\; \stackrel{\mathcal L}{=}\; l_0 \Sigma_P^{1/2}(v_1,\dots, v_{p_n})',$$ where $v_1,\dots, v_{p_n}$ are i.i.d. according to $\mathcal L_v$, $l_0\stackrel{\mathcal L}{=} \mathcal L_l$ is independent of the $v_j$ and $\Sigma_P^{1/2}$ is the unique symmetric positive definite square root of a positive definite $p_n\times p_n$ covariance matrix $\Sigma_P$.
The conditional distribution of the response $y_0$ given the regressors $x_0$ under $P$ is $$y_0 \|x_0 \;\stackrel{\mathcal L}{=}\; x_0'\beta_P + \sigma_Pu_0,$$ where $u_0\stackrel{\mathcal L}{=} \mathcal L_u$ is independent of $x_0$, and where $\beta_P\in{{\mathbb R}}^{p_n}$ and $\sigma_P\in(0,\infty)$.
Note that under \[c.linmod\], the distributions $\mathcal L_l$, $\mathcal L_v$ and $\mathcal L_u$ are fixed, so that $\mathcal P_n^{(lin)}$ is a parametric model indexed by $\beta_P$, $\Sigma_P$ and $\sigma_P$. However, these parameters may depend on sample size $n$ and the dimension $p_n$ of $\beta_P$ and $\Sigma_P$ may increase with $n$. Subsequently, we aim at uniformity in these parameters.
\[thm:PIlength\] Fix $\alpha\in[0,1]$. For every $n\in{{\mathbb N}}$, let $\mathcal P_n = \mathcal P_n^{(lin)}(\mathcal L_l, \mathcal L_v, \mathcal L_u)$ be as in \[c.linmod\]. If $p_n/n\to \kappa \in (0,1)$, then the scaled empirical $\alpha$-quantile $\hat{q}_\alpha/\sigma_{P_n}$ of the leave-one-out residuals $\hat{u}_i = y_i - x_i'\hat{\beta}_n^{[i]}$ based on the OLS estimator $\hat{\beta}_n = (X'X)^\dagger X'Y$ converges $\mathcal P_n$-uniformly in probability to the corresponding $\alpha$-quantile $q_\alpha$ of the distribution of $$l N \tau + u,$$ where $l, N, \tau$ and $u$ are defined as follows: $l\stackrel{\mathcal L}{=} \mathcal L_l$, $N\stackrel{\mathcal L}{=} \mathcal N(0,1)$, and $u\stackrel{\mathcal L}{=}\mathcal L_u$ are independent, and $\tau= \tau(\mathcal L_l,\kappa)$ is non-random. Moreover, $\tau = 0$ if, and only if, $\kappa=0$. If $\mathcal L_l(\{-1,1\})=1$, then $\tau = \sqrt{\kappa/(1-\kappa)}$.
The same statement holds also for $\kappa=0$, provided that, in addition, $\mathcal L_u$ has a continuous and strictly increasing cdf and $p_n\to\infty$ as $n\to\infty$.
The result can be intuitively understood as follows. If the true model $\mathcal P_n^{(lin)}$ is linear (and satisfies \[c.linmod\]) then the scaled prediction error under $P\in\mathcal P_n^{(lin)}$ is distributed as $$\frac{y_0- \hat{m}_n(x_0)}{\sigma_P} \stackrel{\mathcal L}{=} l_0 (v_1,\dots, v_{p_n}) \Sigma_P^{1/2}(\beta_P-\hat{\beta}_n)/\sigma_P + u_0,$$ and for $n$ large, $\|\Sigma_P^{1/2}(\beta_P-\hat{\beta}_n)/\sigma_P\|_2 \approx \tau$ is approximately non-random, so that $(v_1,\dots, v_{p_n}) \Sigma_P^{1/2}(\beta_P-\hat{\beta}_n)/\sigma_P \approx v_0' Z \tau$, where $Z:= \Sigma_P^{1/2}(\beta_P-\hat{\beta}_n)/\|\Sigma_P^{1/2}(\beta_P-\hat{\beta}_n)\|_2$ is a random unit vector independent of $v_0=(v_1,\dots, v_{p_n})'$. Thus, if $p_n$ is large and $Z$ satisfies the Lyapounov condition $\|Z\|_{2+\delta}\to 0$, then $v_0'Z \approx \mathcal N(0,1)$ (see Lemma \[lemma:UnifWeak\]). This effect of additional Gaussian noise in the prediction error was also observed by @ElKaroui13 [@ElKaroui13b; @ElKaroui15; @ElKaroui18]. Note, however, that the conditions $\|\Sigma_P^{1/2}(\beta_P-\hat{\beta}_n)/\sigma_P\|_2 \approx \tau$ and $\|Z\|_{2+\delta}\to0$ are not necessarily satisfied by every estimator $\hat{\beta}_n$. The former condition is indeed more generally satisfied by robust $M$-estimators of the form $$\hat{\beta}_n^{(\rho)} = {{\operatorname{argmin}}_{b\in{{\mathbb R}}^p}} \frac{1}{n}\sum_{i=1}^n \rho(y_i-x_i'b),$$ considered in @ElKaroui13 and under the model assumptions in that reference (including $\mathcal L_l(\{-1,1\})=1$ and further boundedness conditions on the error terms). Here, $\rho:{{\mathbb R}}\to{{\mathbb R}}$ is an appropriate convex loss function.
If $\|\Sigma_P^{1/2}(\beta_P-\hat{\beta}_n^{(\rho)})/\sigma_P\|_2\approx \tau<\infty$ holds, then the Lyapounov condition $\|Z\|_{2+\delta}\to0$ is also satisfied by $\hat{\beta}_n^{(\rho)}$, provided that the standardized design vectors $\Sigma_P^{-1/2}x_i$ follow an orthogonally invariant distribution, because then one easily sees that $$\hat{\beta}_n^{(\rho)} \;=\; \beta_P + \Sigma_P^{-1/2}\tilde{\beta}_n^{(\rho)}
\;\stackrel{\mathcal L}{=}\; \beta_P + \|\tilde{\beta}_n^{(\rho)}\|_2 \Sigma_P^{-1/2}U,$$ where $\tilde{\beta}_n^{(\rho)} = {{\operatorname{argmin}}_{b\in{{\mathbb R}}^p}} \frac{1}{n}\sum_{i=1}^n \rho(u_i-x_i'\Sigma_P^{-1/2}b)$ and $U$ is uniformly distributed on the unit sphere and independent of $\|\tilde{\beta}_n^{(\rho)}\|_2 = \|\Sigma_P^{1/2}(\beta_P-\hat{\beta}_n^{(\rho)})/\sigma_P\|_2$, which is itself approximately constant equal to $\tau$. However, this distributional invariance of the estimator, which is required for the Lyapounov property to hold, is not satisfied, e.g., by the James-Stein estimators (cf. Lemma \[lemma:JSneg\]). If the mentioned conditions are not satisfied, much more complicated limiting distributions of the prediction error than the one of Theorem \[thm:PIlength\] may arise.
Theorem \[thm:PIlength\] shows how the length $\hat{q}_{\alpha_2} - \hat{q}_{\alpha_1}$ of the leave-one-out prediction interval for the OLS predictor depends (asymptotically) on $\mathcal L_l$, $\mathcal L_u$ and $\kappa = \lim_{n\to\infty} p_n/n$. For simplicity, let $\mathcal L_l(\{-1,1\}) = 1$ and consider an equal tailed interval, i.e., $\alpha_1 = \alpha/2 = 1-\alpha_2$. Figure \[fig:PIlengths\] shows asymptotic interval lengths as functions of $\kappa\in[0,1]$ for different values of error level $\alpha$ in the cases $\mathcal L_u = \text{Unif}\{-1,1\}$ and $\mathcal L_u = \mathcal N(0,1)$. For a wide range of $\kappa$ values ($\kappa\in[0,0.8]$), the interval length is relatively stable. However, for high dimensional problems ($\kappa>0.8$) the interval length increases dramatically, as expected, because here the asymptotic estimation error $\tau=\sqrt{\kappa/(1-\kappa)}$ explodes. We also get an idea about the impact of the error distribution, on which the practitioner has no handle. In particular, for large error levels ($\alpha=0.6$) we even observe a non-monotonic dependence of the interval length on $\kappa$, which seems rather counterintuitive. This results from the non-monotonicity of $\tau^2 \mapsto IQR_\alpha(\mathcal N(0,\tau^2) * \mathcal L_u) = q_{1-\alpha/2}-q_{\alpha/2}$, which may only occur if the error distribution $\mathcal L_u$ is not log-concave (cf. the discussion in Section \[sec:efficiency\]). Finally, for large values of $\kappa$, and thus, for large values of $\tau$, the error distribution has little effect on the interval length, because in that case the term $N\tau$ dominates the distribution of $N\tau + u$.
![Lengths of leave-one-out prediction intervals as a function of $\kappa = \lim_{n\to\infty} p_n/n$ for confidence level $1-\alpha$ and with $\text{Unif}\{-1,1\}$ (binary) and $\mathcal N(0,1)$ (Gauss) errors.[]{data-label="fig:PIlengths"}](PIlengths.pdf){width="\textwidth"}
Sample splitting {#sec:SS}
----------------
An obvious alternative to the leave-one-out prediction interval is to use a sample splitting method as follows. Decide on a fraction $\nu\in(0,1)$ and use only a number $n_1 = \lceil \nu n\rceil$ of observation pairs $(x_i, y_i)$, $i\in S_\nu\subseteq \{1,\dots, n\}$, $|S_\nu|=n_1$, to compute an estimate $\hat{\beta}_n^{(\nu)}$. Note that in the present case of OLS, the estimator will not be unique if $n_1< p_n$, so that one usually requires $n_1\ge p_n$. Now use the remaining $n-n_1$ observations to compute residuals $\hat{u}_i^{(\nu)} = y_i - x_i'\hat{\beta}_n^{(\nu)}$, $i\in\{1,\dots, n\}\setminus S_\nu$. Since, conditionally on the observations corresponding to $S_\nu$, these residuals are i.i.d. and distributed as $y_0-x_0'\hat{\beta}_n^{(\nu)}$, constructing a prediction interval of the form $[x_0'\hat{\beta}_n^{(\nu)} + L, x_0'\hat{\beta}_n^{(\nu)} + U]$ for $y_0$ is now equivalent to constructing a tolerance interval for $y_0-x_0'\hat{\beta}_n^{(\nu)}$ based on i.i.d. observations with the same distribution. One can now simply use appropriate empirical quantiles $L=\hat{q}_{\alpha_1}^{(\nu)}$ and $U=\hat{q}_{\alpha_2}^{(\nu)}$ from the sample splitting residuals $\hat{u}_i^{(\nu)}$ (see also Section \[sec:naive\]). Such a procedure is suggested, e.g., by @Vovk12 and @Lei16. However, by the same mechanism as discussed in Section \[sec:PIlength\], the empirical quantiles of the residuals $\hat{u}_i^{(\nu)}$, $i\in S_\nu^c$, converge (unconditionally) to the quantiles of $l N \tau' + u$, where now $\tau'$ is the non-random limit of $\|\Sigma_P^{1/2}(\beta_P-\hat{\beta}_n^{(\nu)})/\sigma_P\|_2$. In particular, if $\mathcal L_l$ degenerates to $\{-1,1\}$, then $\tau' = \sqrt{\kappa'/(1-\kappa')}$, where $\kappa'=\lim_{n\to\infty} p_n/n_1 = \kappa/\nu$. Thus, we can read off the asymptotic interval length of the sample splitting procedure from Figure \[fig:PIlengths\] by simply adjusting the value of $\kappa$ to $\kappa/\nu$.
For instance, in the binary error case with $\alpha=0.05$, if $\kappa=0.4$ and we use sample splitting with $\nu=1/2$, then $\kappa'=0.8$ and the asymptotic length of the leave-one-out prediction interval is about $4.7$, while the asymptotic length of the sample splitting interval is about $9$, i.e., the latter is almost twice as wide.
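The sample splitting procedure described above is straightforward to implement. The following numpy sketch is a minimal illustrative version (the Gaussian linear model, the $50/50$ split and all parameter values are our own choices, not the setup of the cited references):

```python
import numpy as np

def split_pi(X, y, x0, alpha1=0.025, alpha2=0.975, nu=0.5, rng=None):
    """Sample-splitting prediction interval: fit OLS on a nu-fraction of
    the data and take empirical quantiles of the held-out residuals."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    n1 = int(np.ceil(nu * n))
    idx = rng.permutation(n)
    S, Sc = idx[:n1], idx[n1:]
    beta_hat = np.linalg.lstsq(X[S], y[S], rcond=None)[0]
    u = y[Sc] - X[Sc] @ beta_hat            # i.i.d. residuals given S
    qL, qU = np.quantile(u, [alpha1, alpha2])
    center = x0 @ beta_hat
    return center + qL, center + qU

# illustrative data: y = x'beta + N(0,1) noise
rng = np.random.default_rng(1)
n, p = 400, 10
X = rng.standard_normal((n, p))
beta = np.ones(p) / np.sqrt(p)
y = X @ beta + rng.standard_normal(n)
lo, hi = split_pi(X, y, rng.standard_normal(p), rng=rng)
```

Because only $n_1$ observations enter the fit, the held-out residuals are more dispersed than full-sample leave-one-out residuals, which is exactly the length disadvantage quantified above.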
Consistent estimators {#sec:consistency}
=====================
Another important class of problems where the conditions and of Subsection \[sec:stability\] are satisfied is given by those where the estimator $\hat{m}_n$ asymptotically degenerates to some non-random function which need not be the true regression function $m_P:\mathcal X\to{{\mathbb R}}$. However, we point out that in the scenario considered in this section, the naive approach that tries to estimate the true unknown distribution of the errors $u_i$ in the additive error model \[c.density\] based on the ordinary residuals $y_i-\hat{m}_n(x_i)$ is usually successful (asymptotically) in constructing conditionally valid prediction intervals. Nevertheless, we think that this less ambitious but more classical setting of asymptotically non-random predictors is an important test case for the leave-one-out method. We still consider asymptotic results where the number of explanatory variables $p=p_n$ can grow with the sample size $n$. Thus, we consider a sequence $(p_n)_{n\in{{\mathbb N}}}$ and a sequence $(\mathcal P_n)_{n\in{{\mathbb N}}}$ of collections of probability measures on $\mathcal Z_n \subseteq {{\mathbb R}}^{p_n+1}$. Moreover, we have to slightly extend the usual definition of uniform consistency of an estimator sequence so as to also cover the leave-one-out estimate and the possibility of an asymptotically non-vanishing bias.
For every $n\in{{\mathbb N}}$, let $p_n\in{{\mathbb N}}$, let $\mathcal P_n$ be a collection of probability measures on $\mathcal Z_n$ and let $\sigma^2_n : \mathcal P_n \to (0,\infty)$ be a positive functional on $\mathcal P_n$. We say that a sequence of symmetric predictors $\hat{m}_n(\cdot)= M_{n,p_n}(T_n,\cdot)$ is uniformly consistent for the (non-random) measurable function $g_P:{{\mathbb R}}^{p_n}\to{{\mathbb R}}$, with respect to $(\mathcal P_n)_{n\in{{\mathbb N}}}$ and relative to $(\sigma^2_n)_{n\in{{\mathbb N}}}$, if for every ${\varepsilon}>0$, $$\begin{aligned}
&\sup_{P\in\mathcal P_n} P^{n+1}\Big(|g_P(x_0) - M_{n,p_n}(T_n,x_0)|>{\varepsilon}\sigma_n(P)\Big) \xrightarrow[n\to\infty]{} 0\quad\text{and} \label{eq:defCons:a}\\
&\sup_{P\in\mathcal P_n} P^{n}\Big(|g_P(x_0) - M_{n-1,p_n}(T_n^{[1]},x_0)|>{\varepsilon}\sigma_n(P)\Big) \xrightarrow[n\to\infty]{} 0.\label{eq:defCons:b}\end{aligned}$$
The functional $\sigma_n^2(P)$ can be thought of, for instance, as the error variance $\sigma_n^2(P)=\operatorname{Var}_P[y_0-m_P(x_0)]$, if it exists. Of course, conditions and coincide if the sequences $(p_n)$, $(\sigma_n^2)$ and $(\mathcal P_n)$ are constant. It is also easy to see that uniform consistency of $\hat{m}_n$ for any $g_P$ with respect to $(\mathcal P_n)$ and relative to $(\sigma_n^2)_{n\in{{\mathbb N}}}$ implies that the sequence of stability constants $\eta_n$ satisfies , i.e., $$\begin{aligned}
\eta_n := \sup_{P\in\mathcal P_n}{{\mathbb E}}_{P^{n+1}}\left[\left(\sigma_n(P) \|f_{u,P}\|_\infty \frac{|\hat{m}_n(x_0) - \hat{m}_n^{[1]}(x_0)|}{\sigma_n(P)}\right)\land 1\right] \xrightarrow[n\to\infty]{} 0,\end{aligned}$$ provided that $\sup_{n\in{{\mathbb N}}}\sup_{P\in\mathcal P_n}\sigma_n(P) \|f_{u,P}\|_\infty < \infty$. Note that $f_{u/\sigma_n,P}(v) = \sigma_n f_{u,P}(\sigma_n v)$ is the density of the scaled error term $(y_0 - m_P(x_0))/\sigma_n$ under $P$. Furthermore, it is equally obvious that uniform consistency of $\hat{m}_n$ for $g_P$ together with $\limsup_{n\to\infty}\sup_{P\in\mathcal P_n} P(|m_P(x_0)-g_P(x_0)|>M\sigma_n(P))\to 0$, as $M\to\infty$, implies .
In the remainder of this section we list a number of examples where uniform consistency of $\hat{m}_n$, and therefore also asymptotic conditional validity of the leave-one-out prediction interval, holds. We emphasize that the conditions on the statistical model $\mathcal P$ that are imposed in the subsequent examples are taken from the respective references, and we do not claim that they are minimal.
\[ex:nonparametric\] Consider a constant sequence of dimension parameters $p_n=p\in{{\mathbb N}}$. For positive finite constants $L$ and $C$, let $\mathcal P(L,C)$ denote the class of probability distributions $P$ on $\mathcal Z = \mathcal X\times \mathcal Y\subseteq{{\mathbb R}}^{p+1}$ such that $P(|y_0|\le L) = 1 = P(\|x_0\|_2\le L)$ and whose corresponding regression function $m_P:{{\mathbb R}}^p\to {{\mathbb R}}$ is $C$-Lipschitz, i.e., $|m_P(x_1) - m_P(x_2)| \le C\|x_1-x_2\|_2$ for all $x_1,x_2\in\mathcal X$. @Gyorfi02 [Chapter 7] show that if $\hat{m}_n$ is either an appropriate kernel estimate, a partitioning estimate or a nearest-neighbor estimate, all with fully data driven choice of tuning parameter, then $$\begin{aligned}
\sup_{P\in\mathcal P(L,C)} P^{n+1}(|\hat{m}_n(x_0) - m_P(x_0)|>{\varepsilon}) \xrightarrow[n\to\infty]{} 0,\end{aligned}$$ for every ${\varepsilon}>0$. Because of the data driven choice of tuning parameter, which is usually done by a sample splitting procedure, the estimators in @Gyorfi02 are generally not symmetric in the input data. However, it is easy to see that symmetrized versions of those estimators are still uniformly consistent. Simply note that it is no restriction to assume $|\hat{m}_n(x_0) - m_P(x_0)|\le 2L$, so that convergence in probability and convergence in $L_1$ are equivalent, and then study the $L_1$ estimation error of the symmetrized estimator.
\[ex:LASSO\]
Consider a sequence $(K_n)_{n\in{{\mathbb N}}}$ of positive numbers and a sequence of dimension parameters $(p_n)_{n\in{{\mathbb N}}}$ such that $K_n^4\log(p_n)/n\to 0$ as $n\to\infty$. For a positive finite constant $M$, let $\mathcal P_n(M)$ denote the class of probability distributions on ${{\mathbb R}}^{p_n+1}$, such that under $P\in\mathcal P_n(M)$, the pair $(x_0,y_0)$ has the following properties:
- $\|x_0\|_\infty \le M$, almost surely.
- Conditional on $x_0$, $y_0$ is distributed as $\mathcal N(x_0'\beta_P, \sigma_P^2)$, for some $\beta_P\in{{\mathbb R}}^{p_n}$ and $\sigma_P^2\in(0,\infty)$.
- The parameters $\beta_P$ and $\sigma_P^2$ satisfy $\max(\|\beta_P\|_1, \sigma_P)\le K_n$.
In particular, we have $m_P(x_0) = x_0'\beta_P$. @Chatt13 [Theorem 1] shows that any estimator $\hat{\beta}_n^{(K_n)}$ which minimizes $$\beta \quad\mapsto \quad \sum_{i=1}^n (y_i - \beta'x_i)^2 \quad \text{subject to }\quad \|\beta\|_1\le K_n,$$ satisfies $$\begin{aligned}
\sup_{P\in\mathcal P_n(M)} P^{n+1}\left(\left| x_0'\hat{\beta}_n^{(K_n)} - m_P(x_0)\right|>{\varepsilon}\right) \xrightarrow[n\to\infty]{} 0,\end{aligned}$$ for every ${\varepsilon}>0$. Clearly, here the leave-one-out estimate has the same asymptotic property. Note that in this example, consistent estimation of the parameters $\beta_P$ and $\sigma_P^2$ would require additional assumptions on the distribution of the feature vector $x_0$ (so called ‘compatibility conditions’, see @Buehlmann11), and therefore, it is not immediately clear whether the standard Gaussian prediction interval based on estimates $\hat{\beta}_n$ and $\hat{\sigma}_n^2$ and a Gaussian quantile is asymptotically valid in the present setting. Furthermore, the result of @Chatt13 can be extended also to the non-Gaussian case, where the standard Gaussian prediction interval certainly fails.
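A constrained least-squares estimator of this type can be computed, for instance, by projected gradient descent, using the standard sorting-based Euclidean projection onto the $\ell_1$-ball. The following numpy sketch is one possible illustrative solver, not the algorithm analyzed in @Chatt13, and the toy data are our own:

```python
import numpy as np

def project_l1(v, K):
    # Euclidean projection of v onto {b : ||b||_1 <= K} (sort-based)
    if np.abs(v).sum() <= K:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.max(np.where(u > (css - K) / j)[0])
    theta = (css[rho] - K) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def constrained_ls(X, y, K, n_iter=500):
    # minimize ||y - X b||_2^2 subject to ||b||_1 <= K
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)   # 1/L, L = Lipschitz const.
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = project_l1(b - step * X.T @ (X @ b - y), K)
    return b

rng = np.random.default_rng(2)
n, p, K = 200, 50, 3.0
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = 1.0             # ||beta||_1 = 3 = K
y = X @ beta + 0.5 * rng.standard_normal(n)
b_hat = constrained_ls(X, y, K)
```

The projection step keeps every iterate inside the $\ell_1$-ball, so the constraint $\|\hat{\beta}_n^{(K_n)}\|_1\le K_n$ holds by construction.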
\[ex:RIDGE\] A qualitatively different parameter space is considered in @Lopes15, who shows uniform consistency of ridge regularized estimators in a linear model under a boundedness assumption on the regression parameter $\beta_P$ and a specific decay rate of eigenvalues of $\Sigma = {{\mathbb E}}_P[x_0x_0']$.
A classical strand of literature on the asymptotics of Maximum-Likelihood estimation under misspecification has established various conditions under which the MLE is consistent not for the true unknown parameter, but for a pseudo-parameter that corresponds to the projection of the true data generating distribution onto the maintained working model. See, for example, @Huber67, @White80a [@White80b] or @Fahrmeir90. A common pseudo-target in random design regression is the minimizer of $\beta\mapsto{{\mathbb E}}_P[(y_0-\beta'x_0)^2]$.
Further discussion and remarks {#sec:discussion}
==============================
In this section we collect several further thoughts on the leave-one-out prediction intervals. We discuss some properties of the proposed method that we have established above but of which we believe that they hold in much higher generality. We also draw some more connections to other methods such as sample splitting, tolerance regions and prediction regions based on non-parametric density estimation, and we provide further intuition. Finally, we sketch possible extensions and open problems.
Predictor efficiency and interval length {#sec:efficiency}
----------------------------------------
Recall that if $T_n\in\mathcal Z^n$ and $P$ are such that $$s\mapsto\tilde{F}_n(s;T_n) = P^{n+1}(y_0-\hat{m}_n(x_0)\le s\|T_n),$$ is continuous, the optimal infeasible interval $$PI_{\alpha_1,\alpha_2}^{(OPT)} = \hat{m}_n(x_0) + [\tilde{q}_{\alpha_1}, \tilde{q}_{\alpha_2}]$$ in is the shortest interval of the form $\hat{m}_n(x_0) + [L(T_n), U(T_n)]$ such that and are satisfied. In this infeasible scenario, the only way in which one can influence its length is via the choice of the predictor $\hat{m}_n$. This choice clearly affects the conditional distribution $\tilde{F}_n$ of the prediction error $y_0-\hat{m}_n(x_0)$, and thus, potentially, its inter-quantile range $\tilde{q}_{\alpha_2} - \tilde{q}_{\alpha_1}$. Since we only care about minimizing the inter-quantile range of the conditional distribution $\tilde{F}_n$, for the rest of this subsection we consider the training data $T_n$ to be fixed and non-random. Thus, the predictor $\hat{m}_n:{{\mathbb R}}^{p}\to {{\mathbb R}}$ is also non-random. Now we would like to use a predictor $\hat{m}_n$ such that the prediction error $y_0-\hat{m}_n(x_0)$ has a short inter-quantile range. For simplicity, assume that $y_0 = m_P(x_0) + u_0$, where the error term $u_0$ has mean zero and is independent of the features $x_0$. Therefore, the prediction error is given by $$y_0 - \hat{m}_n(x_0) \;=\; m_P(x_0)-\hat{m}_n(x_0) \;+\; u_0,$$ i.e., the convolution of the estimation error $m_P(x_0)-\hat{m}_n(x_0)$ and the innovation $u_0$. Following @Lewis81, we say that a continuous univariate distribution $P_1$ is more dispersed than $P_0$ if, and only if, any two quantiles of $P_1$ are further apart than the corresponding quantiles of $P_0$. Now we note that minimizing the inter-quantile range of the prediction error $y_0-\hat{m}_n(x_0)$ is, in general, not equivalent to minimizing the inter-quantile range of $m_P(x_0)-\hat{m}_n(x_0)$, because of the effect of the error term $u_0$.
However, if the distribution of the error term $u_0$ has a log-concave density, then the distribution of $y_0-\hat{m}_n^{(1)}(x_0)$ is more dispersed than that of $y_0-\hat{m}_n^{(0)}(x_0)$ if, and only if, $m_P(x_0)-\hat{m}_n^{(1)}(x_0)$ is more dispersed than $m_P(x_0)-\hat{m}_n^{(0)}(x_0)$ [see Theorem 8 of @Lewis81]. Thus, under log-concave error distributions, the interval length of $PI_{\alpha_1,\alpha_2}^{(OPT)}$ is directly related to the prediction accuracy of the employed point predictor $\hat{m}_n$. These considerations naturally carry over to the feasible analog $PI_{\alpha_1,\alpha_2}^{(L1O)}$ defined in . In Section \[sec:PIlength\], in the special case of a linear model and ordinary-least-squares prediction, we have discussed the issue of interval length in some more detail and provided a rigorous description of the asymptotic interval length in a high-dimensional regime. This sheds some more light on the connection between the length of $PI_{\alpha_1,\alpha_2}^{(L1O)}$ and the estimation error $m_P(x_0) - \hat{m}_n(x_0)$. However, the lessons learned from the linear model appear to remain valid in much more general situations. In particular, we see that, at least for log-concave error distributions, the lengths of leave-one-out prediction intervals can be used to evaluate the relative efficiency of competing predictors.
The case of a naive predictor and sample splitting {#sec:naive}
--------------------------------------------------
Next, we discuss the important special case where we naively decide to work with a predictor $M_{n,p}(T_n,x_0) = m(x_0)$, $m : \mathcal X\to{{\mathbb R}}$, that does not depend on the training data $T_n$ at all.[^3] In this case, the predictor and its leave-one-out analog coincide and the (leave-one-out) residuals $\hat{u}_i = y_i-m(x_i)$ for $i=1,\dots, n$, are actually independent and identically distributed according to the non-random distribution $\tilde{F}_n(s) = P^{n+1}(y_0-m(x_0)\le s \| T_n) = P(y_0-m(x_0)\le s)$ and $\hat{F}_n$ is their empirical distribution function. Therefore, by Lemma \[lemma:basic\] and the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality [@Massart90], if $\tilde{F}_n$ is continuous, we get for every ${\varepsilon}>0$, that $$\begin{aligned}
P^n\left(
\left| P \left( y_0 \in PI_{\alpha_1,\alpha_2}^{(L1O)}(T_n,x_0) \right) \;-\; \frac{\lceil n\alpha_2\rceil - \lceil n\alpha_1 \rceil }{n}\right| > {\varepsilon}\right)
\;\le\;
2\exp\left(-\frac{n{\varepsilon}^2}{2}\right).\end{aligned}$$ Integrating this tail probability also yields $$\begin{aligned}
\sup_{P\in\mathcal P}{{\mathbb E}}_{P^n}\left[
\left| P \left( y_0 \in PI_{\alpha_1,\alpha_2}^{(L1O)}(T_n,x_0) \right) \;-\; \frac{\lceil n\alpha_2\rceil - \lceil n\alpha_1 \rceil }{n}\right|
\right]
\;\le\;
\sqrt{\frac{2\pi}{n}},\end{aligned}$$ where $\mathcal P$ contains all probability measures on ${{\mathbb R}}^{p+1}$ for which $\tilde{F}_n$ is continuous. We also point out that in the present case where the predictor does not depend on $T_n$, the problem of constructing a prediction interval for $y_0$ can actually be reduced to finding a non-parametric univariate tolerance interval for $y_0-m(x_0)$ based on the i.i.d. copies $(y_i-m(x_i))_{i=1}^n$. For this problem, classical solutions are available, based on the theory of order statistics of i.i.d. data [cf. @Krish09 Chapter 8]. Unfortunately, the problem changes dramatically once we try to learn the true regression function $m_P$ from the training data $T_n$ and use $\hat{m}_n(x_0) = M_{n,p}(T_n,x_0)$ to predict $y_0$, because then the leave-one-out residuals are no longer independent and the conditional distribution function $\tilde{F}_n$ of the prediction error $y_0-\hat{m}_n(x_0)$ given $T_n$ is random. Thus, in the general case we cannot expect to obtain results as powerful and elegant as those above, and we cannot resort to the theory of order statistics of i.i.d. data. In particular, we note that the bound of Theorem \[thm:density\] is still somewhat sub-optimal in this trivial case where the estimator does not depend on the training sample $T_n$. Here, $\eta=0$, but the derived bound still depends on the distribution of the estimation error $m_P(x_0) - m(x_0)$, even though the alternative bound obtained above by the DKW inequality no longer involves the estimation error. It is an open problem to establish a concentration inequality for $\|\hat{F}_n-\tilde{F}_n\|_\infty$ analogous to the DKW inequality, but in the general case of dependent leave-one-out residuals and random $\tilde{F}_n$.
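For a predictor that ignores the training data, the coverage concentration implied by the DKW inequality is easy to check by simulation. The sketch below uses our own illustrative setup with standard normal errors, so that the conditional coverage of the empirical-quantile interval can be evaluated exactly via the normal CDF:

```python
import math
import numpy as np

def Phi(t):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

rng = np.random.default_rng(3)
n, p = 300, 5
alpha1, alpha2 = 0.025, 0.975

# fixed predictor m(x) = sum(x), independent of the training sample
X = rng.standard_normal((n, p))
y = X.sum(axis=1) + rng.standard_normal(n)   # true m_P(x) = sum(x)
u_hat = y - X.sum(axis=1)                    # i.i.d. N(0,1) residuals
qL, qU = np.quantile(u_hat, [alpha1, alpha2])

# conditional coverage of m(x0) + [qL, qU] given T_n is P(qL <= u_0 <= qU)
coverage = Phi(qU) - Phi(qL)
target = (math.ceil(n * alpha2) - math.ceil(n * alpha1)) / n   # = 0.95 here
```

The realized conditional coverage fluctuates around $(\lceil n\alpha_2\rceil - \lceil n\alpha_1\rceil)/n$ at the $n^{-1/2}$ rate, in line with the displayed DKW bound.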
The discussion of the previous paragraph also applies to the case where the predictor $m$ was obtained as an estimator for $m_P$, but from another independent training sample $S_k = (x_j^*, y_j^*)_{j=1}^k$ of $k$ i.i.d. copies of $(x_0,y_0)$. This situation can be seen as a sample splitting method, where $k$ of the overall $n+k$ observations are used to compute the point predictor $m=\hat{m}_k$ and the remaining $n$ observations in $T_n$ are used as a validation set to estimate the conditional distribution of the prediction error $y_0 - \hat{m}_k(x_0)$ given $S_k$ (and $T_n$), from the (conditionally on $S_k$) i.i.d. residuals $y_i-\hat{m}_k(x_i)$, $i=1,\dots, n$. Such a procedure is discussed, for instance, by @Lei16 and @Vovk12. Note that under the assumptions of the previous paragraph, such a method is asymptotically conditionally valid if the size $n$ of the validation set diverges to infinity. However, this method uses only $k$ of the $n+k$ available observation pairs, so that the point predictor $\hat{m}_k$ based on $S_k$ is not as efficient as the analogous predictor based on the full sample $S_k\cup T_n$. This typically results in a larger prediction interval than necessary, because then the conditional distribution of the prediction error $y_0 - \hat{m}_k(x_0)$ is usually more dispersed than that of $y_0 - \hat{m}_{k+n}(x_0)$. See also the discussion in Subsections \[sec:SS\] and \[sec:efficiency\].
Further remarks
---------------
\[rem:no-exact-validity\] Suppose that the class $\mathcal P$ contains at least the data generating distributions $P_0$ and $P_1$, where for $j\in\{0,1\}$ $$P_j \;=\; \mathcal N_{p+1}(0,\sigma_j^2 I_{p+1}), \quad \sigma_j^2>0, \;\sigma_0^2\ne\sigma_1^2,$$ and that we decide to predict $y_0$ by some linear predictor $\hat{m}_n(x_0)=x_0'\hat{\beta}_n$. We shall show that for every $\alpha\in(0,1/2)$, it is impossible to construct a prediction interval of the form $PI_\alpha(T_n,x_0) = x_0'\hat{\beta}_n + [L_\alpha(T_n), U_\alpha(T_n)]$ based on a finite sample $T_n$ and $x_0$, such that is equal to zero.
If is equal to zero, then for both $j=0,1$ and $P_j^n$-almost all samples $T_n$, $$\begin{aligned}
1-\alpha &= P_j^{n+1}(y_0\in PI_\alpha(T_n,x_0)\|T_n) \\
&=
P_j^{n+1}(L_\alpha(T_n) \le y_0 - x_0'\hat{\beta}_n \le U_\alpha(T_n)\|T_n) \\
&= \Phi\left( \frac{U_\alpha(T_n)}{(\|\hat{\beta}_n\|_2^2+1)^{1/2}\sigma_j} \right) - \Phi\left( \frac{L_\alpha(T_n)}{(\|\hat{\beta}_n\|_2^2+1)^{1/2}\sigma_j} \right),\end{aligned}$$ because, conditionally on $T_n$, $y_0 - x_0'\hat{\beta}_n \sim \mathcal N(0, (\|\hat{\beta}_n\|_2^2+1)\sigma_j^2)$ under $P_j$. Since $1-\alpha>1/2$, we must have $L_\alpha < 0 < U_\alpha$, almost surely, and it is easy to see that the function $$g_{l,u}(\nu) :=
\Phi\left( \frac{u}{\nu} \right) - \Phi\left( \frac{l}{\nu} \right), \quad g_{l,u} : (0,\infty) \to (0,1),$$ is continuous and strictly decreasing, provided that $l<0<u$, and thus, for such $l$ and $u$, $g_{l,u}$ is invertible. Therefore, for $j=0,1$ and for $P_j^{n}$-almost all samples $T_n$, we have $$\tilde{\sigma}_n^2(T_n) := \frac{\left(g_{L_\alpha, U_\alpha}^{-1}(1-\alpha)\right)^2}{\|\hat{\beta}_n\|_2^2+1} = \sigma_j^2.$$ In other words, there exists $T_n\in\mathcal Z^n$ such that $\sigma_0^2=\tilde{\sigma}^2_n(T_n) = \sigma_1^2$, a contradiction.
\[rem:Dicker\] Consistent estimation of the true regression function $m_P:\mathcal X\to {{\mathbb R}}$ from an i.i.d. sample of size $n$ is usually not possible if the dimension $p$ of $\mathcal X$ is non-negligible compared to $n$. For example, in a Gaussian linear model where the only unknown parameter is the $p$-vector $\beta$ of regression coefficients, it is impossible to consistently estimate the conditional mean $m_P(x_0)={{\mathbb E}}_P[y_0\|x_0] = \beta'x_0$, unless $p/n\to0$, or strong assumptions are imposed on the parameter space [cf. @Dicker12].
\[rem:ObjValid\] A natural approach for constructing non-parametric prediction sets is to estimate the conditional density of $y_0$ given $x_0$ (if it exists), because, as can be easily shown, a highest density region of the conditional density of $y_0$ given $x_0$ provides the smallest (in terms of Lebesgue measure) prediction region $PR_\alpha(x_0)$ for $y_0$ that controls the conditional coverage probability given $x_0$, i.e., that satisfies $$\begin{aligned}
\label{eq:condXvalid}
P(y_0\in C_\alpha(x) \| x_0 = x) \;\ge\; 1-\alpha \quad \text{for $P$-almost all $x$}.\end{aligned}$$ This condition has been called *object conditional validity* by @Vovk13. However, object conditional validity is often too much to ask for. First of all, as shown by @Lei14 [see also @Vovk13], for continuous distributions there are no non-trivial prediction sets based on a finite sample that satisfy . Moreover, even if we are content with *asymptotic* object conditional validity, learning the relevant properties of the conditional density of $y_0$ given $x_0$ is typically only possible if the dimension of the feature vector $x_0$ is much smaller than the available sample size (cf. Remark \[rem:Dicker\]). Therefore, since our focus in the present paper is on high-dimensional problems, we do not aim at object conditional validity.
The length of the leave-one-out prediction interval in , as it stands, does not depend on the value of $x_0$. An immediate way to account for heteroskedasticity is the following. Consider, in addition, an estimator $\hat{\sigma}_n^2(x) = S(T_n,x)$ of the conditional variance $\operatorname{Var}[y_0\|x_0=x]$. Then a prediction interval can be computed as $\hat{m}_n(x_0) + [\hat{q}_{\alpha_1},\hat{q}_{\alpha_2}]\hat{\sigma}_n(x_0)$, where now, $\hat{q}_\alpha$ is an empirical $\alpha$-quantile of the leave-one-out residuals $$\hat{u}_i = \frac{y_i-\hat{m}_n^{[i]}(x_i)}{\hat{\sigma}_{n,[i]}(x_i)}, \quad i=1,\dots, n.$$
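A minimal numpy sketch of this heteroskedasticity adjustment follows. The mean is fit by OLS and the conditional scale by a crude linear regression of absolute residuals on $(1, |x|)$; both are our own illustrative choices for the generic estimators $\hat{m}_n$ and $\hat{\sigma}_n$, and the clipping constant is arbitrary:

```python
import numpy as np

def fit_mean_scale(X, y):
    # illustrative choices: OLS for the mean; for the scale, a linear
    # regression of |residual| on (1, |x|), clipped away from zero
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    Z = np.column_stack([np.ones(len(y)), np.abs(X)])
    g = np.linalg.lstsq(Z, np.abs(y - X @ b), rcond=None)[0]
    return b, g

def scale_at(x, g):
    return max(np.concatenate(([1.0], np.abs(x))) @ g, 0.1)

def loo_pi_hetero(X, y, x0, alpha1=0.025, alpha2=0.975):
    n = len(y)
    u = np.empty(n)
    for i in range(n):                       # refit without observation i
        keep = np.arange(n) != i
        b_i, g_i = fit_mean_scale(X[keep], y[keep])
        u[i] = (y[i] - X[i] @ b_i) / scale_at(X[i], g_i)
    b, g = fit_mean_scale(X, y)
    qL, qU = np.quantile(u, [alpha1, alpha2])
    c, s0 = x0 @ b, scale_at(x0, g)
    return c + qL * s0, c + qU * s0

# heteroskedastic toy model: noise sd grows with |x_1|
rng = np.random.default_rng(4)
n, p = 120, 3
X = rng.standard_normal((n, p))
y = X @ np.ones(p) + (0.5 + np.abs(X[:, 0])) * rng.standard_normal(n)
lo_wide, hi_wide = loo_pi_hetero(X, y, np.array([2.0, 0.0, 0.0]))
lo_narrow, hi_narrow = loo_pi_hetero(X, y, np.array([0.0, 0.0, 0.0]))
```

As intended, the interval widens at feature values where the estimated conditional scale $\hat{\sigma}_n(x_0)$ is large.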
Computing the leave-one-out prediction interval may be computationally costly, because the model has to be refitted $n$ times, once for each of the reduced samples $T_n^{[i]}$, $i=1,\dots, n$, in order to compute the leave-one-out residuals $\hat{u}_i = y_i - \hat{m}_n^{[i]}(x_i)$. Sometimes, it is possible to devise a shortcut for the computation of these residuals. For example, in the case of ordinary least squares prediction $\hat{m}_n(x) = x'\hat{\beta}_n = x'(X'X)^\dagger X'Y$, if $X_{[i]}'X_{[i]}$ has full rank, we have the well-known identity $$\hat{u}_i = y_i-x_i'\hat{\beta}_n^{[i]} = \frac{y_i-x_i'\hat{\beta}_n}{1-x_i'(X'X)^{-1}x_i},$$ so that the $n$-vector of leave-one-out residuals can be computed as $$\left[\operatorname{diag}(I_n-X(X'X)^{-1}X')\right]^{-1}(I_n - X(X'X)^{-1}X')Y.$$ Hence, the model has to be fitted only once. If such a simplification is not possible, and the computation of all the residuals $\hat{u}_i$, $i=1,\dots, n$, is too costly, then one will typically restrict attention to a smaller number of those residuals, e.g., $\hat{u}_i$, $i=1,\dots, l$, with $l\ll n$.
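The closed-form identity for the ordinary least squares leave-one-out residuals is easily verified numerically; the data below are synthetic and the design is full rank:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 60, 8
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

# shortcut: all n leave-one-out residuals from a single fit
H = X @ np.linalg.solve(X.T @ X, X.T)        # hat matrix X(X'X)^{-1}X'
u_fast = (y - H @ y) / (1.0 - np.diag(H))

# brute force: refit the model n times on the reduced samples
u_slow = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    u_slow[i] = y[i] - X[i] @ b_i
```

Both computations agree up to floating-point error, so the leave-one-out prediction interval requires only a single fit in this case.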
Acknowledgements {#acknowledgements .unnumbered}
================
The authors thank the participants of the “ISOR Research Seminar in Statistics and Econometrics" at the University of Vienna for discussion of an early version of the paper. In particular, we want to thank Benedikt Pötscher and David Preinerstorfer for valuable comments.
Proofs of main results {#sec:MainProofs}
======================
Proof of Theorem \[thm:density\] {#sec:proof:thm:density}
--------------------------------
The proof relies on the following result, which is a special case of Lemma 9 in @Bousquet02 [see also @Devroye79] applied with the loss function $\ell(f,z) = \mathbbm{1}_{(-\infty,t]}(y-f(x))$, $f:\mathcal X\to\mathcal Y$, $z=(y,x)\in\mathcal Z$ and $t\in{{\mathbb R}}$, in their notation.
\[lemma:Bousquet\] If the estimator $\hat{m}_n$ is symmetric, then $$\begin{aligned}
&{{\mathbb E}}_{P^{n}}\left[\left(\hat{F}_n(t) - \tilde{F}_n(t)\right)^2\right] \\
&\hspace{1cm}\le\; \frac{1}{2n} \;+\; 3{{\mathbb E}}_{P^{n+1}}\left[\left|\mathbbm{1}_{(-\infty, t]}(y_0-\hat{m}_n(x_0)) - \mathbbm{1}_{(-\infty, t]}(y_0-\hat{m}_n^{[1]}(x_0)) \right|\right],\end{aligned}$$ for every $t\in{{\mathbb R}}$ and every probability distribution $P$ on $\mathcal Z$.
Under Condition \[c.density\], it is elementary to relate the upper bound of Lemma \[lemma:Bousquet\] to the $\eta$-stability of $\hat{m}_n$.
\[lemma:elementary\] Let $\mathcal P$ be a collection of probability measures on $\mathcal Z = \mathcal Y\times \mathcal X$ that satisfies Condition \[c.density\]. Then, for every $t\in{{\mathbb R}}$ and every $P\in\mathcal P$, $$\begin{aligned}
&{{\mathbb E}}_{P^{n+1}}\left[\left|\mathbbm{1}_{(-\infty, t]}(y_0-\hat{m}_n(x_0)) - \mathbbm{1}_{(-\infty, t]}(y_0-\hat{m}_n^{[1]}(x_0)) \right|\right]\\
&\quad\le
2{{\mathbb E}}_{P^{n+1}}\left[
\left(\|f_{u,P}\|_\infty |\hat{m}_n(x_0) - \hat{m}_n^{[1]}(x_0)| \right)\land 1\right].\end{aligned}$$
To turn the pointwise bound of Lemma \[lemma:Bousquet\] into a uniform one, we need a certain continuity and tightness property of $\tilde{F}_n$.
\[lemma:Ftilde\] Let $\mathcal P$ be a collection of probability measures on $\mathcal Z = \mathcal Y\times \mathcal X$ that satisfies Condition \[c.density\] and fix a training sample $T_n\in\mathcal Z^n$.
1. \[l:Lipschitz\] If $P\in\mathcal P$ and $t_1, t_2\in{{\mathbb R}}$, then $$\left|\tilde{F}_n( t_1; T_n) - \tilde{F}_n(t_2; T_n)\right| \;\le\; \|f_{u,P}\|_\infty |t_1-t_2|.$$
2. \[l:Tightness\] Let $P\in\mathcal P$, $(\delta_1,\delta_2)\in[0,1]^2$, $\delta_1\le\delta_2$, $\mu\in{{\mathbb R}}$ and $c\in(0,\infty)$, and define $\overline{t} = \mu+c+q_{u,P}(\delta_2)$ and $\underline{t} = \mu - c + q_{u,P}(\delta_1)$, where $q_{u,P}(\delta)\in\bar{{{\mathbb R}}}$ is an arbitrary $\delta$-quantile of $f_{u,P}$. Then, $$\tilde{F}_n( \overline{t}; T_n) - \tilde{F}_n(\underline{t}; T_n) \;\ge\; (\delta_2-\delta_1) \; P\left(|m_P(x_0) - M_{n,p}^P(T_n,x_0) - \mu| \le c\right).$$
We provide the proofs of Lemma \[lemma:elementary\] and Lemma \[lemma:Ftilde\] below, after the main argument is finished. The proof of Theorem \[thm:density\] is now a finite sample version of the proof of Polya’s theorem. Fix $P\in \mathcal P$, $T_n\in\mathcal Z^n$, $(\delta_1,\delta_2)\in(0,1)^2$, $\delta_1\le\delta_2$, $\mu\in{{\mathbb R}}$, ${\varepsilon}>0$ and $c>0$ and consider $\overline{t}$ and $\underline{t}$ as in Lemma \[lemma:Ftilde\]\[l:Tightness\]. Since $0<\delta_1\le\delta_2<1$, $\overline{t}$ and $\underline{t}$ are real numbers. We split up the interval $[\underline{t},\overline{t})$ into $K$ intervals $[t_{j-1},t_j)$, $j=1,\dots, K$, with endpoints $\underline{t} =: t_0 < t_1 < \dots < t_K := \overline{t}$, such that $t_j - t_{j-1} \le {\varepsilon}$. We may thus take $K = \lceil (\overline{t}-\underline{t})/{\varepsilon}\rceil = \lceil [2c + q_{u,P}(\delta_2)-q_{u,P}(\delta_1)]/{\varepsilon}\rceil$. If $t<t_0$, then $$\begin{aligned}
\hat{F}_n( t) - \tilde{F}_n(t) \;&\ge\; 0 - \tilde{F}_n(t_0) \ge - |\hat{F}_n(t_0) - \tilde{F}_n(t_0)| - \tilde{F}_n(t_0),\\
\hat{F}_n(t) - \tilde{F}_n(t) \;&\le\; \hat{F}_n(t_0) \le |\hat{F}_n(t_0) - \tilde{F}_n(t_0)| + \tilde{F}_n(t_0).\end{aligned}$$ Furthermore, if $t\ge t_K$, then $$\begin{aligned}
\hat{F}_n(t) - \tilde{F}_n(t) \;&\ge\; \hat{F}_n(t_K) - 1 \\&\ge\; - |\hat{F}_n(t_K) - \tilde{F}_n(t_K)| - \left( 1-\tilde{F}_n(t_K)\right) \\
\hat{F}_n(t) - \tilde{F}_n(t) \;&\le\; 1 - \tilde{F}_n(t_K) \\&\le\; |\hat{F}_n(t_K) - \tilde{F}_n(t_K)| + 1- \tilde{F}_n(t_K).\end{aligned}$$ Finally, for $j\in\{1,\dots, K\}$ and $t\in[t_{j-1},t_j)$, $$\begin{aligned}
\hat{F}_n(t) - \tilde{F}_n(t) \;&\ge\; - |\hat{F}_n(t_{j-1}) - \tilde{F}_n(t_{j-1})| - \left( \tilde{F}_n(t_j) - \tilde{F}_n(t_{j-1})\right) \\
\hat{F}_n(t) - \tilde{F}_n(t) \;&\le\; |\hat{F}_n(t_j) - \tilde{F}_n(t_j)| + \left( \tilde{F}_n(t_j) - \tilde{F}_n(t_{j-1}) \right).\end{aligned}$$ Thus, discretizing the supremum over ${{\mathbb R}}$, we get $$\begin{aligned}
\sup_{t\in{{\mathbb R}}} | \hat{F}_n(t) - \tilde{F}_n(t) | &=
\sup_{t<t_0} | \hat{F}_n(t) - \tilde{F}_n(t) | \lor \sup_{t\ge t_K} | \hat{F}_n(t) - \tilde{F}_n(t) |\\
&\quad\quad\lor \max_{j=1,\dots,K} \sup_{t\in[t_{j-1},t_j)} | \hat{F}_n(t) - \tilde{F}_n(t) | \\
&\le\quad
\left( |\hat{F}_n( t_0) - \tilde{F}_n( t_0) | + \tilde{F}_n( t_0) \right) \\
&\quad\quad\lor
\left( |\hat{F}_n( t_K) - \tilde{F}_n( t_K) | + 1 - \tilde{F}_n( t_K) \right)\\
&\quad\quad\lor \max_{j=1,\dots,K} \left( |\hat{F}_n( t_j) - \tilde{F}_n( t_j) | + \tilde{F}_n( t_j) - \tilde{F}_n( t_{j-1}) \right)\end{aligned}$$ Using Lemma \[lemma:Ftilde\]\[l:Lipschitz\] first, then bounding the maximum by the sum and applying Lemma \[lemma:Ftilde\]\[l:Tightness\], we arrive at the bound $$\begin{aligned}
\| \hat{F}_n - \tilde{F}_n \|_\infty
&\le
|\hat{F}_n( t_0) - \tilde{F}_n( t_0) | + 1 - \left(\tilde{F}_n( t_K) - \tilde{F}_n( t_0)\right) \\
&\quad+\;
|\hat{F}_n( t_K) - \tilde{F}_n( t_K) | \\
&\quad+\; \max_{j=1,\dots,K} \left( |\hat{F}_n( t_j) - \tilde{F}_n( t_j) | + {\varepsilon}\|f_{u,P}\|_\infty \right)\\
&\le
{\varepsilon}\|f_{u,P}\|_\infty + 1-(\delta_2-\delta_1) P(|m_P(x_0) - M_{n,p}^P(T_n,x_0)-\mu| \le c ) \\
&\quad+\; |\hat{F}_n(t_K) - \tilde{F}_n(t_K)| + \sum_{j=0}^K |\hat{F}_n(t_j) - \tilde{F}_n(t_j)|.\end{aligned}$$ Finally, taking expectation with respect to the training sample and applying Lemmas \[lemma:Bousquet\] and \[lemma:elementary\], we obtain $$\begin{aligned}
{{\mathbb E}}_{P^n}&[\| \hat{F}_n - \tilde{F}_n \|_\infty] \\
&\le
{\varepsilon}\|f_{u,P}\|_\infty + 1-\delta_2+\delta_1 \\
&\quad+ P^{n+1}(|m_P(x_0) - \hat{m}_n(x_0)-\mu| > c)\\
&\quad+
(K+2) \sqrt{\frac{1}{2n} + 6 {{\mathbb E}}_{P^{n+1}}\left[ \left(\|f_{u,P}\|_\infty | \hat{m}_n(x_0) - \hat{m}_n^{[1]}(x_0)|\right) \land 1\right] },\end{aligned}$$ where we had $K = \lceil [2c + q_{u,P}(\delta_2)-q_{u,P}(\delta_1)]/{\varepsilon}\rceil$. We now see that this inequality also holds if $\delta_1=0$ or $\delta_2=1$, but it may be trivial, depending on whether the support of $f_{u,P}$ is bounded or not. Next, let $L\in(0,\infty)$ be fixed, choose $c=L$ and set $\delta_1=F_{u,P}(-L)$ and $\delta_2 = F_{u,P}(L)$, where $F_{u,P}(t) := \int_{-\infty}^t f_{u,P}(s)\,ds$. Even if any one of the quantiles $q_{u,P}(\delta_1)$ and $q_{u,P}(\delta_2)$ is not unique, we can certainly choose them such that $q_{u,P}(\delta_2)-q_{u,P}(\delta_1)= 2L$, because, if $F_{u,P}(q)=\delta$, then $q$ is by definition a $\delta$-quantile and $q_{u,P}(\delta):=q$ is a valid choice of quantile. So far, the upper bound of the previous display reduces to $$\begin{aligned}
&{\varepsilon}\|f_{u,P}\|_\infty + 1-F_{u,P}(L)+ F_{u,P}(-L) + P^{n+1}(|m_P(x_0) - \hat{m}_n(x_0)-\mu| > L)\\
&\quad+
(\lceil 4L/{\varepsilon}\rceil+2) \sqrt{\frac{1}{2n} + 6 \eta }.\end{aligned}$$ To conclude, replace ${\varepsilon}$ by ${\varepsilon}L$, bound $\lceil 4/{\varepsilon}\rceil+ 2\le 4/{\varepsilon}+3$ and minimize in ${\varepsilon}$.
The integrand on the left of the desired inequality is equal to $$\begin{aligned}
\mathbbm{1}_{\left\{ y_0-\hat{m}_n(x_0) \le t < y_0 - \hat{m}_n^{[1]}(x_0)\right\}}
+
\mathbbm{1}_{\left\{ y_0-\hat{m}_n(x_0) > t \ge y_0 - \hat{m}_n^{[1]}(x_0)\right\}}.\end{aligned}$$ Using the abbreviations $e_n(P) = m_P(x_0) - \hat{m}_n(x_0)$ and $e_n^{[1]}(P) = m_P(x_0) - \hat{m}_n^{[1]}(x_0)$, the expectation of, say, the first of the two summands in the previous display can be bounded as $$\begin{aligned}
&P^{n+1}(y_0-\hat{m}_n(x_0) \le t < y_0 - \hat{m}_n^{[1]}(x_0))\\
&\quad=
{{\mathbb E}}_{P^{n+1}}\left[
P^{n+1}\left(
t - e_n(P) \ge u_0 > t - e_n^{[1]}(P)
\Big\| T_n, x_0\right)
\right]\\
&\quad\le
{{\mathbb E}}_{P^{n+1}}\left[
\left|\; \int\limits_{t - e_n^{[1]}(P)}^{t - e_n(P)} f_{u,P}(s)\;ds \right| \land 1
\right]\\
&\quad\le
{{\mathbb E}}_{P^{n+1}}\left[
\left(\|f_{u,P}\|_\infty |\hat{m}_n(x_0) - \hat{m}_n^{[1]}(x_0)| \right)\land 1
\right].\end{aligned}$$ The proof is finished by an analogous argument for the second summand.
For $P\in\mathcal P$, $T_n\in\mathcal Z^n$ and $t_1>t_2$, abbreviate $e_n(P) = m_P(x_0) - \hat{m}_n(x_0)$ and note $$\begin{aligned}
&\tilde{F}_n( t_1) - \tilde{F}_n( t_2)
=
P \left( t_2 < y_0 - \hat{m}_n(x_0) \le t_1 \right)\\
&\quad=
P \left( t_2 - e_n(P) < u_0 \le t_1 - e_n(P) \right)\\
&\quad=
{{\mathbb E}}_{P}\left[ \int\limits_{t_2 - e_n(P)}^{t_1 - e_n(P)} f_{u,P}(s)\;ds \right]
\quad\le\quad
\|f_{u,P}\|_\infty (t_1-t_2),\end{aligned}$$ in view of independence between $x_0$ and $u_0$ imposed by Condition \[c.density\], so the first claim follows upon reversing the roles of $t_1$ and $t_2$. For the second claim, take $\overline{t}$ and $\underline{t}$ as in the lemma to obtain $$\begin{aligned}
&\tilde{F}_n( \overline{t}) - \tilde{F}_n( \underline{t})
\;=\;
P \left( \underline{t} - e_n(P) < u_0 \le \overline{t} - e_n(P) \right)\\
&\ge\;
P \left( \underline{t} - e_n(P) < u_0 \le \overline{t} - e_n(P), |e_n(P) - \mu| \le c \right)\\
&\ge\;
P \left( q_{u,P}(\delta_1) < u_0 \le q_{u,P}(\delta_2), |e_n(P)-\mu| \le c \right)\\
&=\;
P \left( q_{u,P}(\delta_1) < u_0 \le q_{u,P}(\delta_2) \right)\cdot P \left( |e_n(P)-\mu| \le c \right)\\
&=\; (\delta_2-\delta_1) \;P \left( |m_P(x_0) - \hat{m}_n(x_0) - \mu| \le c \right).\end{aligned}$$ This finishes the proof.
Proof of Theorem \[thm:JSmisspec\]
----------------------------------
We begin by stating an analogous result for the OLS estimator, the proof of which is deferred to the end of the subsection.
\[lemma:OLSmisspec\] For every $n\in{{\mathbb N}}$, let $\mathcal P_n= \mathcal P_n(\mathcal L_l, \mathcal L_v, C_0)$ be as in Condition \[c.non-linmod\]. For $P\in \mathcal P_n$, define $\beta_P$ to be the minimizer of $\beta \mapsto {{\mathbb E}}_P[(y_0-\beta'x_0)^2]$ over ${{\mathbb R}}^{p_n}$. If $p_n/n\to \kappa \in [0,1)$ and $$\begin{aligned}
\label{eq:MisspecBound}
\sup_{n\in{{\mathbb N}}} \sup_{P\in\mathcal P_n} {{\mathbb E}}_P\left[\left(\frac{m_P(x_0) - \beta_P'x_0}{\sigma_P} \right)^2\right] \;<\;\infty,\end{aligned}$$ then the ordinary least squares estimator $\hat{\beta}_n = (X'X)^\dagger X'Y$ satisfies $$\limsup_{n\to\infty} \sup_{P\in\mathcal P_n} P^n \left( \left\|\Sigma_P^{1/2}(\hat{\beta}_n - \beta_P)/\sigma_P\right\|_2^2 > M \right) \quad\xrightarrow[M\to \infty]{} \quad 0,$$ and for every ${\varepsilon}>0$, $$\sup_{P\in\mathcal P_n} P^n\left( \left\|\Sigma_P^{1/2}(\hat{\beta}_n - \hat{\beta}_n^{[1]})/\sigma_P\right\|_2^2 > {\varepsilon}\right) \quad\xrightarrow[n\to \infty]{} \quad 0.$$
We proceed with the proof of Theorem \[thm:JSmisspec\]. In order to achieve uniformity over $\mathcal P_n$, we consider an arbitrary sequence $P_n\in\mathcal P_n$ and abbreviate $m_n = m_{P_n}$, $\beta_n = \beta_{P_n}$, $\Sigma_n = \Sigma_{P_n}$ and $\sigma_n = \sigma_{P_n}$ and we write ${{\mathbb E}}_n = {{\mathbb E}}_{P_n^n}$, $\operatorname{Var}_n = \operatorname{Var}_{P_n^n}$, etc. We have to show that $\limsup_{n\to\infty} P_n^n(\|\Sigma_n^{1/2}(\hat{\beta}_n(c_n) - \beta_n)/\sigma_n\|_2^2>M)\to 0$ as $M\to\infty$, and that $P_n^n(\|\Sigma_n^{1/2}(\hat{\beta}_n(c_n) - \hat{\beta}_n^{[1]}(c_n))/\sigma_n\|_2^2>{\varepsilon})\to 0$, as $n\to\infty$, for every ${\varepsilon}>0$.
Define $\delta_n^2 = \beta_n'\Sigma_n\beta_n/\sigma_n^2$, $t_n^2 = \hat{\beta}_n'X'X\hat{\beta}_n/(n\sigma_n^2)$ and $$\begin{aligned}
s_n = \begin{cases}
\left( 1 - \frac{p_n}{n}\frac{c_n}{t_n^2}\frac{\hat{\sigma}_n^2}{\sigma_n^2}\right)_+, \quad&\text{if } t_n^2 >0,\\
1, &\text{if } t_n^2=0,
\end{cases}\end{aligned}$$ so that $0\le s_n\le 1$, and $\hat{\beta}_n(c_n) = s_n \hat{\beta}_n$, because $t_n^2=0$ if, and only if, $\hat{\beta}_n=0$. We abbreviate $D:= \sup_n\sup_{P\in\mathcal P_n} {{\mathbb E}}_P[(m_P(x_0)-\beta_P'x_0)^2]/\sigma_P^2$. The following properties will be useful and will be verified after the main argument is finished.
\[lemma:JSmisspec\] $\hat{\sigma}_n^2/\sigma_n^2$ and $\sigma_n^2/\hat{\sigma}_n^2$ are $P_n$-uniformly bounded in probability, $P_n^n(\hat{\sigma}_n^2=0)=0$ and $P_n^n(t_n^2=0)\to 0$. Furthermore, we have $P_n^n(t_n^2 \ge \kappa/2) \to 1$, if $\delta_n\to\delta\in[0,\infty)$. All the statements of the lemma continue to hold also for the leave-one-out analogs $t_{n,[1]}^2:= \hat{\beta}_n^{[1]'}X_{[1]}'X_{[1]}\hat{\beta}_n^{[1]}/(n\sigma_n^2)$ and $\hat{\sigma}_{n,[1]}^2 = \|Y_{[1]}-X_{[1]}\hat{\beta}_n^{[1]}\|_2^2/(n-1-p_n)$ of $t_n^2$ and $\hat{\sigma}_n^2$.
The quantity of interest in the first claim of the theorem can be bounded as $$\begin{aligned}
\left\| \Sigma_n^{1/2}\left(\hat{\beta}_n(c_n) - \beta_n\right)/\sigma_n\right\|_2
&=
\left\| \Sigma_n^{1/2}s_n\left(\hat{\beta}_n - \beta_n\right)/\sigma_n + \Sigma_n^{1/2}(s_n-1)\beta_n/\sigma_n\right\|_2\notag\\
&\le
\left\| \Sigma_n^{1/2}\left(\hat{\beta}_n - \beta_n\right)/\sigma_n\right\|_2
+ (1-s_n) \delta_n.\label{eq:s_ndelta_n}\end{aligned}$$ Thus, the claim follows from Lemma \[lemma:OLSmisspec\] if we can show that $\limsup_{n\to\infty}Q_n(M)\to0$ as $M\to\infty$, where $Q_n(M) = P_n^n((1-s_n)\delta_n>M)$. For fixed $M\in (1,\infty)$ and fixed $n\in{{\mathbb N}}$, we distinguish the cases $\delta_n< M^{1/2}$ and $\delta_n\ge M^{1/2}$. In the former case, $Q_n(M) = 0$. In the latter case, we proceed as follows. First, notice that $$\begin{aligned}
Q_n(M) &= P_n^n\left((1-s_n)\delta_n>M, t_n^2>0\right) \notag\\
&\le
P_n^n\left( \frac{p_n}{n} \frac{c_n}{t_n^2} \frac{\hat{\sigma}_n^2}{ \sigma_n^2} \delta_n > M, t_n^2>0\right)
+
P_n^n\left(
\frac{p_n}{n} \frac{c_n}{t_n^2} \frac{\hat{\sigma}_n^2}{ \sigma_n^2}>1, t_n^2>0
\right)\notag\\
&=
P_n^n\left( \frac{p_n}{n} \frac{c_n}{t_n^2/\delta_n^2} \frac{\hat{\sigma}_n^2}{ \sigma_n^2} > M \delta_n, t_n^2>0\right)
+
P_n^n\left(
\frac{p_n}{n} \frac{c_n}{t_n^2/\delta_n^2} \frac{\hat{\sigma}_n^2}{ \sigma_n^2}>\delta_n^2, t_n^2>0
\right)\notag\\
&\le
2P_n^n\left( \frac{p_n}{n} \frac{c_n}{t_n^2/\delta_n^2} \frac{\hat{\sigma}_n^2}{ \sigma_n^2} > M, t_n^2>0\right). \label{eq:boundQn}\end{aligned}$$ Furthermore, we trivially have $Y = X\beta_n + \sigma_n\tilde{u}$, where $\tilde{u} := (Y-X\beta_n)/\sigma_n$ has components $\tilde{u}_i = (m_n(x_i)-\beta_n'x_i)/\sigma_n + (y_i-m_n(x_i))/\sigma_n$, and, using the reverse triangle inequality, we have $$\begin{aligned}
t_n &= \sqrt{\frac{1}{n}\frac{Y'P_XY}{\sigma_n^2} }
=\|X\beta_n + \sigma_n \tilde{u}\|_{P_X}(n\sigma_n^2)^{-1/2}\\
&\ge
\left|\|X\beta_n\|_{P_X} - \|\sigma_n \tilde{u}\|_{P_X}\right|(n\sigma_n^2)^{-1/2}\\
&=
\left|
\sqrt{\frac{\beta_n'\Sigma_n^{1/2}(V'V/n)\Sigma_n^{1/2}\beta_n}{\sigma_n^2}} - \sqrt{\frac{\tilde{u}'P_X\tilde{u}}{n}}
\right|,\end{aligned}$$ where $V:= X\Sigma_n^{-1/2}$ and $P_X := X(X'X)^\dagger X'$. Therefore, on the event $$A_n(M) = \{\|\tilde{u}/\sqrt{n}\|_2^2 \le M^{1/2}, {{\operatorname{\lambda_{\text{min}}}}{(}}V'V/n)> c_0^2(1-\sqrt{\kappa})^2/2>M^{-1/2}\},$$ we have $\tilde{u}'P_X\tilde{u}(n\delta_n^2)^{-1} \le M^{-1/2}$ and $$\beta_n'\Sigma_n^{1/2}(V'V/n)\Sigma_n^{1/2}\beta_n(\sigma_n^2\delta_n^2)^{-1}
>
c_0^2(1-\sqrt{\kappa})^2/2
>
M^{-1/2},$$ so that on this event $t_n/\delta_n \ge c_0(1-\sqrt{\kappa})/\sqrt{2} - M^{-1/4}\ge0$. Thus, turning back to \[eq:boundQn\] and using Markov’s inequality, we obtain $$\begin{aligned}
&P_n^n\left( \frac{p_n}{n} \frac{c_n}{t_n^2/\delta_n^2} \frac{\hat{\sigma}_n^2}{ \sigma_n^2} > M, t_n^2>0\right)
\le
P_n^n\left( \frac{\hat{\sigma}_n^2}{ \sigma_n^2}> M t_n^2/\delta_n^2, t_n^2>0\right)\\
&\quad\le
P_n^n\left( \frac{\hat{\sigma}_n^2}{ \sigma_n^2} > M t_n^2/\delta_n^2, A_n(M)\right)
+
P_n^n\left( A_n(M)^c\right)\\
&\quad\le
P_n^n\left(\frac{\hat{\sigma}_n^2}{ \sigma_n^2} > M\left(c_0(1-\sqrt{\kappa})/\sqrt{2} - M^{-1/4}\right)^2 \right)
+ \frac{D+1}{M^{1/2}} \\
&\quad\quad+ P_n^n\left( {{\operatorname{\lambda_{\text{min}}}}{(}}V'V/n) \le c_0^2(1-\sqrt{\kappa})^2/2\right)
+ P_n^n(c_0^2(1-\sqrt{\kappa})^2/2 \le M^{-1/2}).\end{aligned}$$ In view of Lemma \[lemma:traceConv\] in Appendix \[sec:proofsaux\] and $P_n$-boundedness of $\hat{\sigma}_n^2 /\sigma_n^2$ (Lemma \[lemma:JSmisspec\]), the limit superior of the upper bound is equal to a function $Q(M)\ge0$ that vanishes as $M\to\infty$. Therefore, we have shown that $\limsup_{n\to\infty} Q_n(M) \le Q(M) \to 0$ as $M\to\infty$.
To establish the claim about the stability of $\hat{\beta}_n(c_n)$ we proceed in a similar way. First, note that $$\begin{aligned}
&\|\Sigma_n^{1/2}(\hat{\beta}_n(c_n) - \hat{\beta}_n^{[1]}(c_n))\|_2/\sigma_n
=
\|\Sigma_n^{1/2}(s_n- s_n^{[1]})\hat{\beta}_n + s_n^{[1]}\Sigma_n^{1/2}(\hat{\beta}_n - \hat{\beta}_n^{[1]})\|_2/\sigma_n\\
&\,\le
|s_n - s_n^{[1]}| \|\Sigma_n^{1/2} \hat{\beta}_n/\sigma_n\|_2
+ |s_n^{[1]}|\|\Sigma_n^{1/2}(\hat{\beta}_n - \hat{\beta}_n^{[1]})/\sigma_n\|_2\\
&\,\le
|s_n - s_n^{[1]}| \|\Sigma_n^{1/2} (\hat{\beta}_n - \beta_n)/\sigma_n\|_2
+
|s_n - s_n^{[1]}| \delta_n
+
|s_n^{[1]}|\|\Sigma_n^{1/2}(\hat{\beta}_n - \hat{\beta}_n^{[1]})/\sigma_n\|_2,\end{aligned}$$ where we have used the notation $s_n^{[1]}$ to denote the leave-one-out equivalent of $s_n$. In view of Lemma \[lemma:OLSmisspec\], it remains to show that $|s_n - s_n^{[1]}|(1+\delta_n) = o_{P_n}(1)$. We argue along subsequences. Let $n'$ be an arbitrary subsequence of $n$. Then by compactness of the extended real line, there exists a further subsequence $n''$ of $n'$, such that $\delta_{n''}\to\delta\in[0,\infty]$. If we can show that for every ${\varepsilon}>0$ $$P_{n''}^{n''}(|s_{n''} - s_{n''}^{[1]}|(1+\delta_{n''})>{\varepsilon}) \xrightarrow[n''\to\infty]{} 0,$$ then the claim follows. For simplicity, we write $n$ instead of $n''$ and we distinguish the cases $\delta=\infty$ and $\delta\in[0,\infty)$.
If $\delta=\infty$, then it suffices to show that $(s_n-s_n^{[1]})\delta_n$ converges to zero in $P_n^n$-probability. By Lemma \[lemma:JSmisspec\] we have $P_n^n(t_n^2=0)\to0$ and the same for $t_{n,[1]}^2$, so that it suffices to show that $$P_n^n(|s_n-s_n^{[1]}|\delta_n > {\varepsilon}, t_n>0, t_{n,[1]}>0) \to 0.$$ If $t_n>0$, set $r_n = \frac{p_n}{n}\frac{c_n}{t_n^2}\frac{\hat{\sigma}_n^2}{\sigma_n^2}$, so that $s_n = (1-r_n)_+$, on this event, and define $r_{n}^{[1]} = \frac{p_n}{n}\frac{c_n}{t_{n,[1]}^2}\frac{\hat{\sigma}_{n,[1]}^2}{\sigma_n^2}$, provided that $t_{n,[1]}>0$. Thus, if both $t_n$ and $t_{n,[1]}$ are positive, we have $$|s_n-s_n^{[1]}|\delta_n\le |r_n - r_{n}^{[1]}|\delta_n
\le
\left|
\frac{\delta_n^2}{t_n^2}\frac{\hat{\sigma}_n^2}{ \sigma_n^2} - \frac{\delta_n^2}{t_{n,[1]}^2}\frac{\hat{\sigma}_{n,[1]}^2}{ \sigma_n^2}
\right|\frac{1}{\delta_n}.$$ But in the first part of the proof we have already established that $t_n^2/\delta_n^2$ is lower bounded by $c_0^2(1-\sqrt{\kappa})^2/4$ with asymptotic probability one, provided that $\delta_n^2\to\infty$ (recall the case $\delta_n\ge M^{1/2}$ and the set $A_n(M)$, and let $M=\delta_n^2\to\infty$), and an analogous argument applies to $t_{n,[1]}^2/\delta_n^2$. Thus, it follows from the $P_n$-boundedness of $\hat{\sigma}_n^2 /\sigma_n^2$ and $\hat{\sigma}_{n,[1]}^2 /\sigma_n^2$ that the upper bound in the previous display converges to zero in $P_n^n$-probability.
If $\delta\in[0,\infty)$, it suffices to show that $|s_n-s_n^{[1]}|$ converges to zero in $P_n^n$-probability. Note that due to the positive part mapping in the definition of $s_n$, the absolute difference $|s_n-s_n^{[1]}|$ vanishes if both $r_n$ and $r_n^{[1]}$ are greater than or equal to $1$, and is otherwise bounded by $|r_n-r_n^{[1]}| \le \max(|r_n/r_n^{[1]}-1|, |r_n^{[1]}/r_n - 1|)$, provided that $r_n$ and $r_n^{[1]}$ are positive. Thus, it remains to verify that $r_n^{[1]}/r_n$ converges to $1$ in $P_n^n$-probability and that both $P_n^n(r_n=0)$ and $P_n^n(r_n^{[1]}=0)$ converge to zero. The latter statement follows from Lemma \[lemma:JSmisspec\], in fact it shows that $P_n^n(r_n=0)=0=P_n^n(r_n^{[1]}=0)$. Finally, to show that $r_n^{[1]}/r_n \to 1$ in $P_n^n$-probability, define $S_1 := V_{[1]}'V_{[1]} = \sum_{i=2}^n v_iv_i'$ and note that by the Sherman-Morrison formula (see also the proof of Lemma \[lemma:OLSmisspec\] below) we have $$\begin{aligned}
\hat{\beta}_n'X'X\hat{\beta}_n
&=
\hat{\beta}_n^{[1]'}X_{[1]}'X_{[1]}\hat{\beta}_n^{[1]}
+
(x_1'\hat{\beta}_n^{[1]})^2
+ 2x_1'\hat{\beta}_n^{[1]}(y_1-x_1'\hat{\beta}_n^{[1]})\\
&\quad\quad+
(y_1-x_1'\hat{\beta}_n^{[1]})^2 \frac{v_1'S_1^{-1}v_1}{1+v_1'S_1^{-1}v_1},\end{aligned}$$ at least on the event $B_n:=\{{{\operatorname{\lambda_{\text{min}}}}{(}}S_1)>0\}$, which has asymptotic $P_n^n$-probability one by Lemma \[lemma:traceConv\]. Thus, on $B_n$, $t_n^2/t_{n,[1]}^2 = 1 + g_n$, where $$\begin{aligned}
\label{eq:g_nBound}
|g_n| \le 2\frac{(x_1'\hat{\beta}_n^{[1]})^2 + (y_1-x_1'\hat{\beta}_n^{[1]})^2}{\hat{\beta}_n^{[1]'}X_{[1]}'X_{[1]}\hat{\beta}_n^{[1]}}.\end{aligned}$$ By Lemma \[lemma:JSmisspec\] and since $\kappa>0$, $\hat{\beta}_n^{[1]'}X_{[1]}'X_{[1]}\hat{\beta}_n^{[1]}/(n\sigma_n^2) = t_{n,[1]}^2$ is bounded away from zero with asymptotic probability one. Thus, for the desired convergence of $t_n^2/t_{n,[1]}^2$ to $1$, it remains to show that the numerator in \[eq:g_nBound\] divided by $n\sigma_n^2$ converges to zero in $P_n^n$-probability. But this now follows from Lemma \[lemma:OLSmisspec\] and the fact that $\delta<\infty$, by evaluating the conditional expectation given $T_n^{[1]}$. The proof is finished if we can also show that $\hat{\sigma}_n^2/\hat{\sigma}_{n,[1]}^2$ converges to $1$, in $P_n^n$-probability. To this end, we apply the Sherman-Morrison formula once more to get $$\begin{aligned}
I_n - P_X = I_n - P_V
= \begin{pmatrix}
\frac{1}{1+v_1'S_1^{-1}v_1}, &-\frac{v_1'S_1^{-1}V_{[1]}'}{1+v_1'S_1^{-1}v_1}\\
- \frac{V_{[1]}S_1^{-1}v_1}{1+v_1'S_1^{-1}v_1}, &I_{n-1} - P_{V_{[1]}} + \frac{V_{[1]}S_1^{-1}v_1v_1'S_1^{-1}V_{[1]}'}{1+v_1'S_1^{-1}v_1}
\end{pmatrix},\end{aligned}$$ on the event $B_n$. Thus, on this event, $$\begin{aligned}
\hat{\sigma}_n^2(n-p_n) &= Y'(I_n-P_X)Y = Y_{[1]}'(I_{n-1}-P_{X_{[1]}})Y_{[1]}\\
&+ \frac{y_1^2}{1+v_1'S_1^{-1}v_1} - 2y_1 x_1'\hat{\beta}_n^{[1]} + \frac{(x_1'\hat{\beta}_n^{[1]})^2}{1+v_1'S_1^{-1}v_1},\end{aligned}$$ so that $\frac{\hat{\sigma}_n^2}{\hat{\sigma}_{n,[1]}^2} \frac{n-p_n}{n-1-p_n} =: 1 + h_n$, where $$|h_n| \le 2\frac{y_1^2 + (x_1'\hat{\beta}_n^{[1]})^2}{(n-1-p_n)\sigma_n^2}\frac{\sigma_n^2}{\hat{\sigma}_{n,[1]}^2}.$$ But it is easy to see that the upper bound converges to zero in $P_n^n$-probability by a simple moment computation, Lemmas \[lemma:OLSmisspec\] and \[lemma:JSmisspec\], and because $n-p_n\to\infty$ and $\delta<\infty$.
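The feasible shrinkage factor $s_n$ used throughout this argument depends on the data only through $\hat{\beta}_n$ and $\hat{\sigma}_n^2$; the unknown scale $\sigma_n^2$ cancels, since $(p_n/n)(c_n/t_n^2)(\hat{\sigma}_n^2/\sigma_n^2) = p_n c_n \hat{\sigma}_n^2/(\hat{\beta}_n'X'X\hat{\beta}_n)$. A minimal numerical sketch of the resulting estimator $\hat{\beta}_n(c) = s_n\hat{\beta}_n$ (function and variable names are ours, purely illustrative, not the authors' code):

```python
import numpy as np

def shrunken_ols(X, Y, c):
    """Shrinkage estimator beta_hat(c) = s * beta_hat with
    s = (1 - (p/n) * c * sigma2_hat / t2)_+ and t2 = beta_hat'X'X beta_hat / n.
    The population scale sigma_n^2 cancels, so only sigma2_hat enters.
    Illustrative sketch, assuming n > p and full-rank X."""
    n, p = X.shape
    beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)   # OLS via pseudoinverse
    t2 = beta_hat @ (X.T @ X) @ beta_hat / n
    if t2 == 0.0:                                      # t_n^2 = 0  <=>  beta_hat = 0
        return beta_hat                                # convention s_n = 1
    sigma2_hat = np.sum((Y - X @ beta_hat) ** 2) / (n - p)
    s = max(0.0, 1.0 - (p / n) * c * sigma2_hat / t2)
    return s * beta_hat
```

Since $s_n\in[0,1]$, the estimator never has larger Euclidean norm than the OLS estimator, and $c=0$ recovers OLS exactly.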
We use the notation $e_i := \frac{m(x_i)-x_i'\beta_n}{\sigma_n}$, $u_i := \frac{y_i- m(x_i)}{\sigma_n}$, $\tilde{u} := e + u$, so that $Y=X\beta_n + \sigma_n\tilde{u} = X\beta_n + \sigma_ne + \sigma_nu$. For the first claim simply observe that $$\begin{aligned}
\frac{\hat{\sigma}_n^2}{\sigma_n^2} = \frac{Y'(I_n-P_X)Y}{(n-p_n)\sigma_n^2} = \frac{n}{n-p_n}\frac{\tilde{u}'(I_n-P_X)\tilde{u}}{n}
\le \frac{n}{n-p_n}\left\|\frac{\tilde{u}}{\sqrt{n}}\right\|_2^2,
\end{aligned}$$ and that ${{\mathbb E}}_n\|\tilde{u}\|_2^2 = n{{\mathbb E}}_n[\tilde{u}_1^2] \le n(D+1)$. For boundedness of the reciprocal we first note that $P_n^n(\hat{\sigma}_n^2 = 0) = {{\mathbb E}}_n[P_n^n(Y'(I_n-P_X)Y=0\|X)]= P_n^n(I_n-P_X =0) = 0$, because the conditional distribution of $Y$ given $X$ under $P_n^n$ is absolutely continuous with respect to Lebesgue measure. The same argument shows that $P_n^n(t_n=0) = P_n^n(\hat{\beta}_n=0) = {{\mathbb E}}_n[P_n^n((X'X)^\dagger X'Y=0\|X)] = P_n^n(X(X'X)^\dagger=0) = P_n^n(X=0) = (\mathcal L_v(\{0\}))^{np_n} \to 0$. Next we show that $\hat{\sigma}_n^2/\sigma_n^2$ is bounded from below by $(1-\kappa)/2$ with asymptotic probability one. To this end, note that $$\frac{\hat{\sigma}_n^2}{\sigma_n^2}
=
\frac{Y'(I_n-P_X)Y}{(n-p_n)\sigma_n^2}
\ge
2\frac{e'(I_n-P_X)u}{n} + \left\|\frac{(I_n-P_X)u}{\sqrt{n}}\right\|_2^2,$$ where the conditional expectation of the mixed term given $X$ is equal to zero and its conditional variance converges to zero in $P_n^n$-probability because of ${{\mathbb E}}_n[e_i^2]\le D$. The conditional expectation of the last term in the previous display is $\operatorname{trace}(I_n-P_X)/n = \operatorname{trace}(I_n-P_V)/n \to 1-\kappa$, in view of Lemma \[lemma:traceConv\]. Using independence, its conditional variance can be computed as $$\begin{aligned}
\operatorname{Var}_n\left[\frac{u'(I_n-P_X)u}{n}\Big\|X\right]
&= \frac{2\operatorname{trace}((I_n-P_X)^2)}{n^2} + \frac{({{\mathbb E}}_n[u_1^4]-3)}{n^2} \sum_{i=1}^n(I_n-P_X)_{ii}^2
\notag\\
&\le 2\frac{n}{n^2} + \frac{(C_0-3)n}{n^2} \to 0.\label{eq:CondVarHatsigma}\end{aligned}$$ This establishes the boundedness of $\sigma_n^2/\hat{\sigma}_n^2$. For the remaining statement about $t_n^2$, suppose that $\delta_n^2\to\delta\in[0,\infty)$ and note that $$\begin{aligned}
t_n^2 &= \frac{Y'P_XY}{n\sigma_n^2}
=\left\|\frac{V\Sigma_n^{1/2}\beta_n}{\sqrt{n\sigma_n^2}} + \frac{P_Xe}{\sqrt{n}} + \frac{P_Xu}{\sqrt{n}}\right\|_2^2.\end{aligned}$$ Abbreviate $W_n := \frac{V\Sigma_n^{1/2}\beta_n}{\sqrt{n\sigma_n^2}} + \frac{P_Xe}{\sqrt{n}}$ and observe $t_n^2 \ge 2W_n'P_Xu/\sqrt{n} + \|P_Xu/\sqrt{n}\|_2^2$. The conditional expectation of the mixed term $W_n'P_Xu/\sqrt{n}$ given $X$ is equal to zero, and its conditional variance is bounded by $\|W_n/\sqrt{n}\|_2^2$. But $\|W_n\|_2^2$ is bounded in $P_n^n$-probability, in view of the facts that $\delta<\infty$, ${{\mathbb E}}_n[V'V/n] = I_n$ and ${{\mathbb E}}_n[e_i^2]\le D$. Thus, the mixed term is $o_{P_n^n}(1)$. For $\|P_Xu/\sqrt{n}\|_2^2$ one easily verifies that its conditional expectation given $X$ is $\operatorname{trace}(P_X)/n = \operatorname{trace}(P_V)/n$, which converges to $\kappa\in[0,1)$ in $P_n^n$-probability, because $P_n^n({{\operatorname{\lambda_{\text{min}}}}{(}}V'V)=0)\to0$ by Lemma \[lemma:traceConv\]. Furthermore, as above, its conditional variance can easily be computed as $$\begin{aligned}
\operatorname{Var}_n\left[\frac{u'P_Xu}{n}\Big\|X\right]
&= \frac{2\operatorname{trace}(P_X^2)}{n^2} + \frac{({{\mathbb E}}_n[u_1^4]-3)}{n^2} \sum_{i=1}^n(P_X)_{ii}^2\\
&\le \frac{2p_n}{n^2} + \frac{(C_0-3)p_n}{n^2} \to 0.\end{aligned}$$ Thus, $\|P_Xu/\sqrt{n}\|_2^2$ converges to $\kappa$, in $P_n^n$-probability, which establishes the asymptotic lower bound on $t_n^2$. The results about the leave-one-out quantities can be established analogously.
Fix $n\in{{\mathbb N}}$ and $P\in\mathcal P_n$. For simplicity, we write $m = m_P$, $\Sigma = \Sigma_P$, $\beta = \beta_P$ and $\sigma^2 = \sigma_P^2$ and abbreviate $V := X\Sigma^{-1/2}$. For $\xi>0$, consider the event $A_n := A_n(\xi) := \{T_n\in{{\mathbb R}}^{n\times(p_n+1)}: {{\operatorname{\lambda_{\text{min}}}}{(}}V'V/n)>\xi\}$. On this event, we observe that $\Sigma^{1/2}(\hat{\beta}_n - \beta)/\sigma = (V'V)^{-1}V'\tilde{u}$, where $\tilde{u} = (\tilde{u}_1,\dots, \tilde{u}_n)'$, $\tilde{u}_i = (m(x_i) - \beta'x_i)/\sigma + u_i$ and $u_i = (y_i - m(x_i))/\sigma$, for $i=1,\dots, n$. Thus, on $A_n$, $\|\Sigma^{1/2}(\hat{\beta}_n-\beta)/\sigma\|_2^2 = \tilde{u}'V(V'V)^{-2}V'\tilde{u} = \tilde{u}'V(V'V)^{-1/2}(V'V)^{-1}(V'V)^{-1/2}V'\tilde{u} \le \|\tilde{u}/\sqrt{n}\|_2^2 \|(V'V/n)^{-1}\|_2$. Consequently, using Condition \[c.non-linmod\], we obtain $$\begin{aligned}
&P^n(\|\Sigma^{1/2}(\hat{\beta}_n - \beta)/\sigma\|_2^2 > M)\\
&\quad\le
P^n(\|\tilde{u}/\sqrt{n}\|_2^2 \|(V'V/n)^{-1}\|_2 > M,A_n(\xi)) + P^n(A_n(\xi)^c)\\
&\quad\le
P^n(\|\tilde{u}/\sqrt{n}\|_2^2 /\xi > M) + P^n(A_n(\xi)^c)\\
&\quad\le
\frac{{{\mathbb E}}_P[\tilde{u}_1^2]}{M\xi} + P^n({{\operatorname{\lambda_{\text{min}}}}{(}}V'V/n) \le \xi) .\end{aligned}$$ Since ${{\mathbb E}}_P[\tilde{u}_1^2] = {{\mathbb E}}_P[(m(x_0)-\beta'x_0)^2/\sigma^2] + 1$, in view of \[c.non-linmod\], and because $P^n({{\operatorname{\lambda_{\text{min}}}}{(}}V'V/n) \le \xi) $ does not depend on the parameters $\beta$, $\Sigma$ and $\sigma^2$, Lemma \[lemma:traceConv\]\[lemma:traceConvA’\] implies the first claim if we set $\xi = c_0^2(1-\sqrt{\kappa})^2/2>0$.
For the stability property, we abbreviate $S_1 = V_{[1]}'V_{[1]}$, $\tilde{\beta}_n := \Sigma^{1/2}(\hat{\beta}_{n} - \beta)/\sigma$ and $\tilde{\beta}_n^{[1]} = \Sigma^{1/2}(\hat{\beta}_{n}^{[1]} - \beta)/\sigma$, and consider the event $B_n = \{{{\operatorname{\lambda_{\text{min}}}}{(}}S_1)>0\}$. On this event, also ${{\operatorname{\lambda_{\text{min}}}}{(}}V'V) = {{\operatorname{\lambda_{\text{min}}}}{(}}S_1 + v_1v_1')> 0$, where $V = [v_1,\dots, v_n]'$ and $V_{[1]} = [v_2,\dots, v_n]'$, and the Sherman-Morrison formula yields $$\begin{aligned}
&\tilde{\beta}_n\; =\; (V'V)^{-1}V'\tilde{u}\; =\; (S_1 + v_1v_1')^{-1}(V_{[1]}'\tilde{u}_{[1]} + v_1 \tilde{u}_1)\\
&\quad=
\left(S_1^{-1} - \frac{S_1^{-1}v_1v_1'S_1^{-1}}{1 + v_1'S_1^{-1}v_1} \right) (V_{[1]}'\tilde{u}_{[1]} + v_1 \tilde{u}_1) \\
&\quad=
\tilde{\beta}_n^{[1]} - \frac{S_1^{-1}v_1v_1'\tilde{\beta}_n^{[1]}}{1 + v_1'S_1^{-1}v_1} + S_1^{-1}v_1\tilde{u}_1 - S_1^{-1}v_1\tilde{u}_1\frac{v_1'S_1^{-1}v_1}{1 + v_1'S_1^{-1}v_1}\\
&\quad=
\tilde{\beta}_n^{[1]} + \frac{S_1^{-1}v_1(\tilde{u}_1-v_1'\tilde{\beta}_n^{[1]})}{1 + v_1'S_1^{-1}v_1},\end{aligned}$$ and thus, $\|\Sigma^{1/2}(\hat{\beta}_n - \hat{\beta}_{n}^{[1]})/\sigma\|_2^2 = (1+v_1'S_1^{-1}v_1)^{-2} v_1'S_1^{-2}v_1 (\tilde{u}_1 - v_1'\tilde{\beta}_n^{[1]})^2 \le 2 (\tilde{u}_1^2 + (v_1'\tilde{\beta}_n^{[1]})^2) v_1'S_1^{-2}v_1$. Clearly, the squared error term $\tilde{u}_1^2$ is $\mathcal P_n$-uniformly bounded in probability because ${{\mathbb E}}_P[\tilde{u}_1^2] = {{\mathbb E}}_P[(m(x_0)-\beta'x_0)^2/\sigma^2]+1$, as above; ${{\mathbb E}}[(v_1'\tilde{\beta}_{n}^{[1]})^2\|\tilde{\beta}_{n}^{[1]}] = \|\tilde{\beta}_{n}^{[1]}\|_2^2$ is also $\mathcal P_n$-uniformly bounded in probability, by the same argument as in the first paragraph, which implies that $(v_1'\tilde{\beta}_{n}^{[1]})^2$ is $\mathcal P_n$-uniformly bounded in probability; and ${{\mathbb E}}[v_1'S_1^{\dagger2}v_1\|S_1] = \operatorname{trace}S_1^{\dagger2} \to 0$, $\mathcal P_n$-uniformly in probability, by Lemma \[lemma:traceConv\]. Therefore, we have $P^n(\|\Sigma^{1/2}(\hat{\beta}_n - \hat{\beta}_{n}^{[1]})/\sigma\|_2^2>{\varepsilon}, B_n) \le P^n(2O_{\mathcal P_n}(1)o_{\mathcal P_n}(1)>{\varepsilon}, B_n) \to 0$. Moreover, $P^n(B_n^c) = P^{n-1}({{\operatorname{\lambda_{\text{min}}}}{(}}S_1)=0) \to 0$, uniformly over $\mathcal P_n$, in view of Lemma \[lemma:traceConv\].
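The rank-one (Sherman-Morrison) update behind this leave-one-out computation can be checked numerically. In the sketch below we work in the raw coordinates $x_i, y_i$ (i.e. $\Sigma = I_p$, $\sigma = 1$), where the identity reads $\hat{\beta}_n = \hat{\beta}_n^{[1]} + S_1^{-1}x_1(y_1 - x_1'\hat{\beta}_n^{[1]})/(1 + x_1'S_1^{-1}x_1)$ with $S_1 = X_{[1]}'X_{[1]}$; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 4
X = rng.standard_normal((n, p))
Y = X @ rng.standard_normal(p) + rng.standard_normal(n)

# full-sample OLS and OLS with the first observation removed
beta_full, *_ = np.linalg.lstsq(X, Y, rcond=None)
X1, Y1 = X[1:], Y[1:]
beta_loo, *_ = np.linalg.lstsq(X1, Y1, rcond=None)

# Sherman-Morrison rank-one update: adding the pair (x_1, y_1) back in
S1_inv = np.linalg.inv(X1.T @ X1)        # S_1 = X_{[1]}' X_{[1]}
x1, y1 = X[0], Y[0]
update = S1_inv @ x1 * (y1 - x1 @ beta_loo) / (1.0 + x1 @ S1_inv @ x1)
assert np.allclose(beta_full, beta_loo + update)
```

The identity is exact (up to floating-point error), so the assertion holds for any full-rank design.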
Proof of Theorem \[thm:PIlength\]
---------------------------------
We begin by stating a few more results on the OLS estimator that hold in the linear model \[c.linmod\]. The proof is deferred to the end of the subsection.
\[lemma:OLScorrSpec\] In the context of Theorem \[thm:PIlength\], the OLS estimator $\hat{\beta}_n = (X'X)^\dagger X'Y$, satisfies $$\begin{aligned}
&\sup_{P\in\mathcal P_n} P^n\left(\left|
\left\|\Sigma_P^{1/2}(\beta_P-\hat{\beta}_n)/\sigma_P\right\|_2 - \tau
\right|>{\varepsilon}\right) \xrightarrow[n\to\infty]{} 0, \quad\text{and}\\
&\sup_{P\in\mathcal P_n} P^n\left(\left\|\Sigma_P^{1/2}(\beta_P-\hat{\beta}_n)/\sigma_P\right\|_4>{\varepsilon}\right)
\xrightarrow[n\to\infty]{} 0,\end{aligned}$$ for every ${\varepsilon}>0$.
The next result will be instrumental to establish convergence of the conditional law $P(y_0-x_0'\hat{\beta}_n\le t\| T_n)$ to the distribution of $lN\tau + u$, as in the statement of the theorem. Its proof is also deferred until after the main argument is finished.
\[lemma:UnifWeak\] Fix arbitrary constants $\tau\in[0,\infty)$, $\delta\in(0,2]$ and $c\in(0,\infty)$, and let $(p_n)_{n\in{{\mathbb N}}}$ be a sequence of positive integers. On some probability space $(\Omega, \mathcal A, {{\mathbb P}})$, let $u_0$ and $l_0$ be real random variables and let $V_0 = (v_{0j})_{j=1}^\infty$ be a sequence of i.i.d. real random variables such that $V_0$, $u_0$ and $l_0$ are jointly independent, $|l_0|\ge c>0$, ${{\mathbb E}}[l_0^2]=1$, ${{\mathbb E}}[v_{01}]=0$, ${{\mathbb E}}[v_{01}^2]=1$ and ${{\mathbb E}}[|v_{01}|^{2+\delta}]<\infty$. For $n\in{{\mathbb N}}$ and $b\in{{\mathbb R}}^{p_n}$, define $v_n = (v_{01},\dots, v_{0p_n})'$, $G(t, b) = {{\mathbb P}}(l_0 v_n'b + u_0\le t)$ and $F(t) = {{\mathbb P}}(l_0 N \tau + u_0\le t)$, where $N \stackrel{\mathcal L}{=} \mathcal N(0,1)$ is independent of $(l_0,u_0)$. Consider positive sequences $g_1, g_2:{{\mathbb N}}\to (0,1)$, such that $g_j(n)\to 0$, as $n\to\infty$, $j=1,2$. Suppose that one of the following cases applies.
(i) \[l:UnifWeakT0\] $\tau=0$ and $t\mapsto {{\mathbb P}}(u_0\le t)$ is continuous. In this case, set $$B_n = \{b\in{{\mathbb R}}^{p_n}: \|b\|_2 \le g_1(n)\}.$$
(ii) \[l:UnifWeakT>0\] $\tau>0$ and $p_n\to\infty$ as $n\to\infty$. In this case, set $$B_n = \{b\in{{\mathbb R}}^{p_n} : b \ne 0, | \|b\|_2 - \tau| \le g_1(n), \|b\|_{2+\delta}/\|b\|_2 \le g_2(n) \}.$$
(iii) \[l:UnifWeakGauss\] $\tau>0$ and $v_{0j} \stackrel{\mathcal L}{=} \mathcal N(0,1)$ for all $j=1,\dots, p_n$. In this case, set $$B_n = \{b\in{{\mathbb R}}^{p_n} : | \|b\|_2 - \tau| \le g_1(n)\}.$$
Then, using the convention that $\sup \varnothing = 0$, $$\begin{aligned}
\label{eq:Lemma:UnifWeak}
\sup_{b\in B_n} \sup_{t\in{{\mathbb R}}} \left| G(t,b) - F(t) \right| \;\xrightarrow[n\to\infty]{}\; 0.\end{aligned}$$
We now turn to the proof of Theorem \[thm:PIlength\]. In order to achieve uniformity in $P_n\in\mathcal P_n^{{lin}}$, we consider sequences of parameters $\beta_n\in{{\mathbb R}}^{p_n}$, $\sigma_n^2 \in(0,\infty)$ and $\Sigma_n\in S_{p_n}$ (where $S_{p_n}$ is the set of all symmetric, positive definite $p_n\times p_n$ matrices). All the operators ${{\mathbb E}}$, $\operatorname{Var}$ and $\operatorname{Cov}$ are to be understood with respect to $P_n^n$.
We have to show that, for arbitrary but fixed $\alpha\in[0,1]$, $\hat{q}_\alpha/\sigma_n$ converges in $P_n^n$-probability to $q_\alpha$, the $\alpha$ quantile of the distribution of $l N \tau + u$, the cdf of which we denote by $F$. Note that in either case of Theorem \[thm:PIlength\] the quantile $q_\alpha$ is unique. Note further that for $\alpha\in(0,1]$, $\hat{q}_\alpha = \hat{F}_n^\dagger(\alpha) := \inf\{t\in{{\mathbb R}}:\hat{F}_n(t)\ge \alpha\}$. We treat the case $\alpha\in\{0,1\}$ separately at the end of the proof, because $q_1=-q_0=\infty$. To deal with the empirical quantiles we use a standard argument. For $\alpha\in(0,1)$ and ${\varepsilon}>0$, consider $$P_n^n(|\hat{q}_\alpha/\sigma_n - q_{\alpha}| > {\varepsilon})
=
P_n^n(\hat{q}_\alpha/\sigma_n > q_{\alpha} + {\varepsilon})
+
P_n^n(\hat{q}_\alpha/\sigma_n < q_{\alpha} - {\varepsilon}).$$ To bound the first probability on the right, abbreviate $J_i := \mathds 1_{\{\hat{u}_i/\sigma_n > q_{\alpha} + {\varepsilon}\}}$ and note that by definition of the OLS predictor, the leave-one-out residuals $\hat{u}_i = y_i - x_i'(X_{[i]}'X_{[i]})^\dagger X_{[i]}'Y_{[i]}$, $i=1,\dots, n$, and thus also the $J_i$, $i=1,\dots, n$, are exchangeable under $P_n^n$. A basic property of the quantile function $\hat{F}_n^\dagger$ [cf. @vanderVaart07 Lemma 21.1] yields $$\begin{aligned}
P_n^n(\hat{q}_\alpha/\sigma_n &> q_{\alpha} + {\varepsilon})
= P_n^n\left(\alpha > \hat{F}_n(\sigma_n(q_{\alpha} + {\varepsilon}))\right)\\
&= P_n^n\left(1-\hat{F}_n(\sigma_n(q_{\alpha} + {\varepsilon})) > 1-\alpha \right)\\
&=
P_n^n\left(\frac{1}{n}\sum_{i=1}^n (J_i - {{\mathbb E}}[J_1]) > 1-\alpha - {{\mathbb E}}[J_1] \right)\\
&=
P_n^n\left(\frac{1}{n}\sum_{i=1}^n (J_i - {{\mathbb E}}[J_i]) > F_n(q_{\alpha} + {\varepsilon}) - \alpha \right),\end{aligned}$$ where $F_n(t) := P_n^n(\hat{u}_1/\sigma_n\le t)$ is the marginal cdf of the scaled leave-one-out residuals. If we can show that $$\begin{aligned}
\label{eq:PIlengthToShow1}
F_n(t) \to F(t), \quad\quad\forall t\in{{\mathbb R}},\end{aligned}$$ as $n\to\infty$, then $F_n(q_\alpha+{\varepsilon}) \to F(q_\alpha+{\varepsilon})>\alpha$, because $q_\alpha$ is unique, and the probability in the preceding display can be bounded, at least for $n$ sufficiently large, using Markov’s inequality, by $$\begin{aligned}
&(F_n(q_{\alpha} + {\varepsilon}) - \alpha)^{-2} {{\mathbb E}}\left[\left| \frac{1}{n}\sum_{i=1}^n (J_i - {{\mathbb E}}[J_i])\right|^2 \right]\\
&\quad=
(F_n(q_{\alpha} + {\varepsilon}) - \alpha)^{-2} \left(
\frac{1}{n} \operatorname{Var}[J_1] + \frac{n(n-1)}{n^2} \operatorname{Cov}(J_1,J_2)
\right),\end{aligned}$$ where the equality holds in view of the exchangeability of the $J_i$. An analogous argument yields a similar upper bound for the probability $P_n^n(\hat{q}_\alpha/\sigma_n \le q_{\alpha} - {\varepsilon})$ but with $(F_n(q_{\alpha} + {\varepsilon}) - \alpha)^{-2}$ replaced by $(\alpha - F_n(q_{\alpha} - {\varepsilon}))^{-2}$, and $J_i$ replaced by $K_i = \mathds 1_{\{\hat{u}_i/\sigma_n \le q_{\alpha} - {\varepsilon}\}}$. The proof will thus be finished if we can establish \[eq:PIlengthToShow1\] and show that $\operatorname{Cov}(J_1,J_2)$ and $\operatorname{Cov}(K_1,K_2)$ converge to zero as $n\to\infty$. We only consider $\operatorname{Cov}(J_1,J_2) = \operatorname{Cov}(1-J_1,1-J_2)$, as the argument for $\operatorname{Cov}(K_1,K_2)$ is analogous. Write $\delta = q_\alpha + {\varepsilon}$ and $$\operatorname{Cov}(1-J_1,1-J_2) = P_n^n(\hat{u}_1/\sigma_n\le \delta, \hat{u}_2/\sigma_n \le \delta) - P_n^n(\hat{u}_1/\sigma_n\le \delta)P_n^n(\hat{u}_2/\sigma_n \le \delta).$$ Now, $$\begin{aligned}
\begin{pmatrix}
\hat{u}_1/\sigma_n \\
\hat{u}_2/\sigma_n
\end{pmatrix}
=
\begin{pmatrix}
\hat{u}_{1[2]}/\sigma_n \\
\hat{u}_{2[1]}/\sigma_n
\end{pmatrix}
+
\begin{pmatrix}
\hat{e}_1 \\
\hat{e}_2
\end{pmatrix},\end{aligned}$$ where $\hat{u}_{i[j]} = y_i - x_i'\hat{\beta}_n^{[ij]}$, $\hat{\beta}_n^{[ij]} = (X_{[ij]}'X_{[ij]})^\dagger X_{[ij]}'Y_{[ij]}$, and $\hat{e}_i = (\hat{u}_i - \hat{u}_{i[j]})/\sigma_n = x_i'(\hat{\beta}_n^{[ij]} - \hat{\beta}_n^{[i]})/\sigma_n$, for $i,j\in\{1,2\}$, $i\ne j$. Therefore, ${{\mathbb E}}[\hat{e}_i\|Y_{[i]}, X_{[i]}] = 0$, and ${{\mathbb E}}[\hat{e}_i^2 \| Y_{[i]},X_{[i]}] = \|\Sigma^{1/2}(\hat{\beta}_n^{[i]} - \hat{\beta}_n^{[ij]})/\sigma_n\|_2^2$, which converges to zero in $P_n^n$-probability, by Lemma \[lemma:OLSmisspec\], for a sample of size $n-1$ instead of $n$, which applies here because \[c.non-linmod\] is satisfied under \[c.linmod\]. Hence, $\hat{e}_1$ and $\hat{e}_2$ converge to zero in probability. The joint distribution function of $\hat{u}_{1[2]}/\sigma_n$ and $\hat{u}_{2[1]}/\sigma_n$ can be written as $$\begin{aligned}
&P_n^n(\hat{u}_{1[2]}/\sigma_n\le s, \hat{u}_{2[1]}/\sigma_n \le t)\label{eq:2CDF}\\
&\quad=
{{\mathbb E}}\left[ P_n^n\left(x_1'(\beta-\hat{\beta}_n^{[12]})/\sigma_n + u_1 \le s, x_2'(\beta-\hat{\beta}_n^{[12]})/\sigma_n + u_2 \le t \Big\| Y_{[12]},X_{[12]}\right)\right]\notag\\
&\quad=
{{\mathbb E}}\left[ G_n\left(s, \Sigma^{1/2}(\beta-\hat{\beta}_n^{[12]})/\sigma_n\right) G_n\left(t, \Sigma^{1/2}(\beta-\hat{\beta}_n^{[12]})/\sigma_n\right) \right],\notag\end{aligned}$$ where, for $t\in{{\mathbb R}}$ and $b\in{{\mathbb R}}^{p_n}$, $G_n$ is defined as $G_n(t, b) = P_n(b'\Sigma^{-1/2}x_0 + u_0 \le t)$. Note that $G_n$ depends only on $\mathcal L_l$, $\mathcal L_v$, $\mathcal L_u$ and on $n$, through $p_n$. If we abbreviate $\tilde{\beta}_n^{[12]} = \Sigma^{1/2}(\beta-\hat{\beta}_n^{[12]})/\sigma_n$ and $\tilde{\beta}_n^{[1]} = \Sigma^{1/2}(\beta-\hat{\beta}_n^{[1]})/\sigma_n$, we arrive at $$\operatorname{Cov}(1-J_1,1-J_2)
=
{{\mathbb E}}\left[ G_n\left(\delta,\tilde{\beta}_n^{[12]}\right)^2\right]
- {{\mathbb E}}\left[ G_n\left(\delta,\tilde{\beta}_n^{[1]}\right)\right]^2+ o(1),$$ provided the bivariate distribution in \[eq:2CDF\] converges weakly. We finish the proof by showing that for all $t\in{{\mathbb R}}$, the bounded random variables $G_n(t,\tilde{\beta}_n^{[12]})$ and $G_n(t,\tilde{\beta}_n^{[1]})$ both converge to $F(t)$, in $P_n^n$-probability. Note that this also implies \[eq:PIlengthToShow1\], because $F_n(t) = {{\mathbb E}}[ G_n(t,\tilde{\beta}_n^{[1]})]$.
To this end, we note that for an arbitrary measurable set $B_n\subseteq{{\mathbb R}}^{p_n}$ and for any ${\varepsilon}>0$, $$\begin{aligned}
P_n^n\left( \sup_{t\in{{\mathbb R}}}\left| G_n(t,\tilde{\beta}_n^{[1]}) - F(t)\right| > {\varepsilon}\right)
&\le
P_n^n\left( \sup_{t\in{{\mathbb R}}}\left| G_n(t,\tilde{\beta}_n^{[1]}) - F(t)\right| > {\varepsilon}, \tilde{\beta}_n^{[1]}\in B_n \right) \\
&\hspace{1cm}+
P_n^n\left( \tilde{\beta}_n^{[1]}\notin B_n \right)\\
&\le
a_n({\varepsilon}) \;+\; P_n^n\left( \tilde{\beta}_n^{[1]}\notin B_n \right),\end{aligned}$$ where $a_n({\varepsilon}) = 1$ if $\sup_{b\in B_n} \sup_{t\in{{\mathbb R}}} \left| G_n(t,b) - F(t) \right|>{\varepsilon}$, and $a_n({\varepsilon}) = 0$, else. Now, we first consider the case $\kappa=0$. Thus, Lemma \[lemma:OLScorrSpec\], which also applies to $\hat{\beta}_n^{[1]}$, yields $\|\tilde{\beta}_n^{[1]}\|_2 \to \tau = 0$, as $n\to\infty$, in $P_n^n$-probability. Therefore, the probability on the last line of the previous display converges to zero if we take $B_n =\{b\in{{\mathbb R}}^{p_n}: \|b\|_2 \le g_1(n)\}$ and $g_1(n)\to0$ sufficiently slowly, as $n\to\infty$. Hence, Lemma \[lemma:UnifWeak\] applies and shows that also $a_n({\varepsilon})\to 0$ as $n\to\infty$, for every ${\varepsilon}>0$. If $\kappa>0$, Lemma \[lemma:OLScorrSpec\] yields $\|\tilde{\beta}_n^{[1]}\|_2 \to \tau>0$ and $\|\tilde{\beta}_n^{[1]}\|_4 \to 0$, in $P_n^n$-probability, as $n\to\infty$. Thus, the probability on the last line of the previous display converges to zero if we take $B_n = \{b\in{{\mathbb R}}^{p_n} : b \ne 0, | \|b\|_2 - \tau| \le g_1(n), \|b\|_{4}/\|b\|_2 \le g_2(n) \}$ and sequences $g_1$ and $g_2$ that converge to zero sufficiently slowly. Now Lemma \[lemma:UnifWeak\] shows that also $a_n({\varepsilon})\to 0$ as $n\to\infty$, for every ${\varepsilon}>0$. The same argument applies to $\tilde{\beta}_n^{[12]}$ instead of $\tilde{\beta}_n^{[1]}$, which finishes the proof in the case $\alpha\in(0,1)$.
Next, we treat the case $\alpha=0$. In either case of the theorem, we have $\lim_{\gamma\to 0}q_\gamma = q_0=-\infty$. By definition, $\hat{q}_0 \le \hat{q}_\gamma$, for any $\gamma\in(0,1)$. Thus, for any $M>0$, there exists a $\gamma\in(0,1)$, such that $q_\gamma<-2M$ and $P_n^n(\hat{q}_0<-M) \ge P_n^n(\hat{q}_\gamma<-M) \to 1$, as $n\to\infty$, in view of the first part. In other words, $\hat{q}_0$ converges to $-\infty = q_0$ in $P_n^n$-probability. A similar argument can be used to treat the case $\alpha=1$.
Notice the identity $$\Sigma^{1/2}(\hat{\beta}_n - \beta)/\sigma = (\Sigma^{-1/2}X'X\Sigma^{-1/2})^\dagger\Sigma^{-1/2}X'u,$$ (with $u = (u_1,\dots, u_n)' = Y-X\beta$) whose distribution under $P\in\mathcal P_n$ does not depend on the parameters $\beta$, $\sigma^2$ and $\Sigma$. Hence, without loss of generality, we assume for the rest of this proof that $\beta=0$, $\sigma^2=1$ and $\Sigma = I_{p_n}$. First, we have to show that $\|\hat{\beta}_n\|_2 \to \tau \in [0,\infty)$, in probability, for a $\tau = \tau(\kappa)$ as in Theorem \[thm:PIlength\]. To this end, consider the conditional mean $${{\mathbb E}}\left[\|\hat{\beta}_n\|_2^2\Big\| X\right] = \operatorname{trace}(X'X)^\dagger X'X (X'X)^\dagger = \operatorname{trace}(X'X)^\dagger \xrightarrow[]{a.s.} \tau^2,$$ by Lemma \[lemma:traceConv\] and for $\tau$ as desired (cf. Remark \[rem:tau\]). From the same lemma we get convergence of the conditional variance $$\begin{aligned}
\operatorname{Var}\left[\|\hat{\beta}_n\|_2^2\Big\| X\right] &= \operatorname{Var}\left[u'X(X'X)^{\dagger2} X'u\Big\|X\right] =: \operatorname{Var}[u'Ku\|X] \\
&=
2\operatorname{trace}K^2 + ({{\mathbb E}}[u_1^4] - 3)\sum_{i=1}^n K_{ii}^2\\
&\le
2\operatorname{trace}K^2 + ({{\mathbb E}}[u_1^4] + 3)\sum_{i,j=1}^n K_{ij}^2
=
({{\mathbb E}}[u_1^4] + 5)\operatorname{trace}K^2 \\
&=
({{\mathbb E}}[u_1^4] + 5)
\operatorname{trace}X(X'X)^{\dagger2}X'X(X'X)^{\dagger2}X' \\
&= ({{\mathbb E}}[u_1^4] + 5)
\operatorname{trace}(X'X)^{\dagger2} \xrightarrow[]{a.s.} 0. \end{aligned}$$ For the second claim it suffices to show that $\|\hat{\beta}_n\|_4^4 \to 0$, in probability. Notice that for $M := (m_1, \dots, m_{p_n})' := (X'X)^\dagger X'$, we have $$\begin{aligned}
\|\hat{\beta}_n\|_4^4 = \|Mu\|_4^4 = \sum_{j=1}^{p_n} (m_j'u)^4
=
\sum_{j=1}^{p_n} \sum_{i_1,i_2,i_3,i_4=1}^n m_{ji_1}m_{ji_2}m_{ji_3}m_{ji_4}u_{i_1}u_{i_2}u_{i_3}u_{i_4}.\end{aligned}$$ After taking conditional expectation given $X$, only terms with paired indices remain and we get $$\begin{aligned}
{{\mathbb E}}\left[\|\hat{\beta}_n\|_4^4\Big\|X\right] &= \sum_{j=1}^{p_n}\left( {{\mathbb E}}[u_1^4] \sum_{i=1}^n m_{ji}^4 + 3 \sum_{i\ne k}^n m_{ji}^2m_{jk}^2\right)\\
&\le
\sum_{j=1}^{p_n}\left( {{\mathbb E}}[u_1^4] \sum_{i,k=1}^n m_{ji}^2m_{jk}^2 + 3 \sum_{i, k=1}^n m_{ji}^2m_{jk}^2\right)\\
&=
({{\mathbb E}}[u_1^4] + 3) \sum_{j=1}^{p_n} (m_j'm_j)^2
\le
({{\mathbb E}}[u_1^4] + 3) \operatorname{trace}\sum_{i,j=1}^{p_n} m_im_i'm_jm_j'\\
&=
({{\mathbb E}}[u_1^4] + 3) \operatorname{trace}(M'M)^2
=
({{\mathbb E}}[u_1^4] + 3) \operatorname{trace}(X'X)^{\dagger2} \xrightarrow[]{i.p.} 0,\end{aligned}$$ by Lemma \[lemma:traceConv\].
First, in the case , for every $n\in{{\mathbb N}}$, take $b_n \in B_n =\{b\in{{\mathbb R}}^{p_n}: \|b\|_2 \le g_1(n)\}$ and simply note that $l_0b_n'v_n \to 0$, in probability, and thus $G(t,b_n) \to F(t)$. Since the limit is continuous, Polya’s theorem yields uniform convergence in $t\in{{\mathbb R}}$. Since $b_n\in B_n$ was arbitrary, we also get uniform convergence over $B_n$.
Next, we consider the Gaussian case , so $B_n = \{b\in{{\mathbb R}}^{p_n} : | \|b\|_2 - \tau| \le g_1(n)\}$. For every $n\in{{\mathbb N}}$, choose an arbitrary $b_n\in B_n$, and note that $t\mapsto G(t,b_n)$ is the distribution function of $l_0 b_n'v_n + u_0$, where $v_n \stackrel{\mathcal L}{=} \mathcal N(0,I_{p_n})$, and $l_0, v_n, u_0$ are independent. Clearly, $l_0 b_n'v_n + u_0 \stackrel{\mathcal L}{=} l_0 N \|b_n\|_2 + u_0 \to l_0 N \tau + u_0$, weakly, and this limit has continuous distribution function $F$. Hence, by Polya’s theorem, $\sup_t |{{\mathbb P}}(l_0 b_n'v_n + u_0 \le t) - F(t)| \to 0$, as $n\to \infty$. Since the sequence $b_n\in B_n$ was arbitrary, the result follows.
In the general case, first note that $B_n$ may be empty. By our convention that $\sup \varnothing = 0$ it suffices to restrict to the subsequence $n'$ for which $B_{n'}\ne \varnothing$. If this is only a finite sequence, then the result is trivial. For convenience, we write $n=n'$. So let $b_n\in B_n$ and define the triangular array $z_{nj} := b_{nj} v_{0j}$, $j=1,\dots, p_n$, which satisfies ${{\mathbb E}}[z_{nj}]=0$ and $s_n^2 := \sum_{j=1}^{p_n} {{\mathbb E}}[z_{nj}^2] = \|b_n\|_2^2 \ne 0$. The Lyapounov condition is verified by $$\begin{aligned}
\sum_{j=1}^{p_n} s_n^{-(2+\delta)} {{\mathbb E}}[|z_{nj}|^{2+\delta}]
\;&=\;
{{\mathbb E}}\left[|v_{01}|^{2+\delta}\right]\left(\frac{\|b_n\|_{2+\delta}}{\|b_n\|_2}\right)^{2+\delta}\\
\;&\le\;
{{\mathbb E}}\left[|v_{01}|^{2+\delta}\right] \left[ g_2(n)\right]^{2+\delta}
\; \xrightarrow[n\to\infty]{} \; 0.\end{aligned}$$ Therefore, by Lyapounov’s CLT [@Billingsley95 Theorem 27.3], we have $$b_n'v_n/\|b_n\|_2 \;=\; \sum_{j=1}^{p_n} z_{nj}/s_n \;\xrightarrow[n\to\infty]{w}\; \mathcal N(0,1).$$ Since $b_n\in B_n$, we must have $\|b_n\|_2 \to \tau$ as $n\to\infty$, and thus, $b_n'v_n = \|b_n\|_2 b_n'v_n/\|b_n\|_2 \xrightarrow[]{w} N\tau$, where $N \stackrel{\mathcal L}{=} \mathcal N(0,1)$, as $n\to\infty$, and, by independence, $l_0 b_n'v_n + u_0 \xrightarrow[]{w} l_0 N \tau + u_0$. Since the distribution function of this limit is continuous, Polya’s theorem yields $\sup_t | G(t,b_n) - F(t)| \to 0$, as $n\to\infty$. Now the proof is finished because this convergence holds for arbitrary sequences $b_n\in B_n$.
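To illustrate the Lyapounov condition above in a simple special case (purely for intuition, not part of the proof), take the equal-weights vector $b_n = p_n^{-1/2}(1,\dots,1)'$, for which the ratio of norms decays at an explicit polynomial rate:

```latex
% With b_n = p_n^{-1/2}(1,...,1)' we have \|b_n\|_2 = 1 and
\left(\frac{\|b_n\|_{2+\delta}}{\|b_n\|_2}\right)^{2+\delta}
= \|b_n\|_{2+\delta}^{2+\delta}
= p_n \cdot p_n^{-(2+\delta)/2}
= p_n^{-\delta/2} \;\xrightarrow[n\to\infty]{}\; 0,
% so the Lyapounov ratio vanishes at the polynomial rate p_n^{-\delta/2}.
```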
Auxiliary results {#sec:proofsaux}
=================
\[lemma:traceConv\] On a common probability space $(\Omega, \mathcal F, {{\mathbb P}})$, consider an i.i.d. sequence $L_0 = \{l_i : i=1,2,\dots\}$ of random variables satisfying $|l_1|\ge c>0$, and a doubly infinite array $V_0 = \{v_{ij} : i,j=1,2,\dots\}$ of i.i.d. random variables with mean zero, unit variance and ${{\mathbb E}}[v_{11}^4]<\infty$, such that $L_0$ and $V_0$ are independent. For a sequence of positive integers $(p_n)$ with $p_n\le n$, consider the $n\times p_n$ random matrix $X = \Lambda V$, where $\Lambda = \operatorname{diag}(l_1,\dots, l_n)$ is diagonal and $V = \{v_{ij} : i=1,\dots, n; j=1,\dots, p_n\}$. Let $(X'X)^\dagger$ denote the Moore-Penrose pseudoinverse of $X'X$. If $p_n/n\to \kappa\in[0,1)$ then the following holds:
1. \[lemma:traceConvA\] $\liminf_{n\to\infty} {{\operatorname{\lambda_{\text{min}}}}{(}}X'X/n) \ge c^2(1-\sqrt{\kappa})^2$, almost surely.
2. \[lemma:traceConvA’\] $\lim_{n\to\infty} {{\mathbb P}}({{\operatorname{\lambda_{\text{min}}}}{(}}X'X/n) \le {\varepsilon}) = 0$ for all ${\varepsilon}< c^2(1-\sqrt{\kappa})^2$.
3. \[lemma:traceConvB\] If $m>1$, then $\operatorname{trace}{(X'X)^{\dagger m}} \to 0$, almost surely, as $n\to\infty$.
4. \[lemma:traceConvC\] $\operatorname{trace}{(X'X)^{\dagger}} \to \tau^2$ almost surely, as $n\to\infty$, for some constant $\tau=\tau(\kappa)\in [0,\infty)$ that depends only on $\kappa$ and on the distribution of $l_1^2$ and satisfies $\tau(\kappa)=0$ if, and only if, $\kappa=0$.
Let $\lambda_1\le \dots \le \lambda_p$ and $\mu_1\le \dots \le \mu_p$ denote the ordered eigenvalues of $X'X/n$ and $V'V/n$, respectively, and write $e_i\in{{\mathbb R}}^n$ for the $i$-th element of the canonical basis in ${{\mathbb R}}^n$. Then, $$\begin{aligned}
\lambda_1 &= \inf_{\|w\|=1} w'V'\Lambda^2Vw/n
= \inf_{\|w\|=1} \sum_{i=1}^n l_i^2 (e_i'Vw)^2/n \\
&\ge \left(\min_{i=1,\dots, n} l_i^2\right) \inf_{\|w\|=1} w'V'Vw/n
= c^{2} \mu_1,\end{aligned}$$ and from the Bai-Yin Theorem [@Bai93] it follows that $\mu_1 \to (1-\sqrt{\kappa})^{2}>0$, almost surely, as $p_n/n\to \kappa \in[0,1)$ [cf. @Huber13 for the case $\kappa=0$]. This finishes the proof of part \[lemma:traceConvA\]. Part \[lemma:traceConvA’\] is now a textbook argument. Simply note that ${{\mathbb P}}(\lambda_1 \le {\varepsilon}) \le {{\mathbb P}}(\inf_{n\ge k} \lambda_1 \le {\varepsilon})$ for $k\le n$, and that $\inf_{n\ge k} \lambda_1 \le \inf_{n\ge k+1} \lambda_1$ for all $k\in{{\mathbb N}}$, and thus $$\begin{aligned}
&\limsup_{n\to\infty} {{\mathbb P}}(\lambda_1 \le {\varepsilon})
=
\inf_{k\in{{\mathbb N}}}\sup_{n\ge k} {{\mathbb P}}(\lambda_1 \le {\varepsilon}) \\
&\quad
\le \inf_{k\in{{\mathbb N}}} {{\mathbb P}}\left(\inf_{n\ge k} \lambda_1 \le {\varepsilon}\right)
=
\lim_{k\to\infty} {{\mathbb P}}\left(\inf_{n\ge k} \lambda_1 \le {\varepsilon}\right) \\
&\quad
=
{{\mathbb P}}\left(\forall k\in{{\mathbb N}}: \inf_{n\ge k} \lambda_1 \le {\varepsilon}\right)
=
{{\mathbb P}}\left(\liminf_{n\to\infty} \lambda_1 \le {\varepsilon}\right) = 0.\end{aligned}$$
Next, set $\alpha_m := c^{2m} (1-\sqrt{\kappa})^{2m}$ and, for $\alpha>0$, define the functions $h_0$ and $h_\alpha$ by $h_0(y) = 1/|y|$ if $y\ne 0$ and $h_0(0)=0$, and by $h_\alpha(y) = 1/|y|$, if $|y|> \alpha/2$ and $h_\alpha(y) = 2/\alpha$, if $|y|\le\alpha/2$. With this notation, and from the previous considerations, we see that the difference between $$\operatorname{trace}{(X'X)^{\dagger m}} = n^{-m} \operatorname{trace}{(X'X/n)^{\dagger m}}
= \frac{p_n}{n^m} \frac{1}{p_n}\sum_{j=1}^{p_n} h_0(\lambda_j^m),$$ and $$\frac{p_n}{n^m} \frac{1}{p_n}\sum_{j=1}^{p_n} h_{\alpha_m}(\lambda_j^m)$$ converges to zero, almost surely, because $\lambda_j^m\ge \lambda_1^m \ge c^{2m}\mu_1^{m} \to \alpha_m > \alpha_m/2>0$, almost surely. But we have $n^{-m} \sum_{j=1}^{p_n} h_{\alpha_m}(\lambda_j^m) \le (p_n/n^m) (2/\alpha_m) \to 0$, if $m>1$, or if $m=1$ and $\kappa=0$. This finishes \[lemma:traceConvB\] and the case $\kappa=0$ of part \[lemma:traceConvC\].
For the remainder of part \[lemma:traceConvC\], let $m=1$ and $\kappa>0$, and first note that the empirical spectral distribution function $F_n^{\Lambda^2}$ of $\Lambda^2$ is simply given by the empirical distribution function of $l_1^2,\dots, l_n^2$, and this converges weakly (even uniformly) to the distribution function of $l_1^2$, almost surely. Hence, from Theorem 4.3 in @BaiSilv10, it follows that, almost surely, the empirical spectral distribution function $F_n^{X'X/n}$ of $X'X/n$ converges vaguely, as $p_n/n\to\kappa\in(0,1)$, to a non-random distribution function $F$ that depends only on $\kappa$ and on the distribution of $l_1^2$. From the argument in the previous paragraph we know that $\lambda_1 \ge c^2\mu_1 \to c^2(1-\sqrt{\kappa})^2=\alpha_1>0$, almost surely, and thus the support of $F$ must be lower bounded by $\alpha_1$. Since $h_{\alpha_1}$ is continuous and vanishes at infinity, by vague convergence, we have [cf. @Billingsley95 relation (28.2)] $$\begin{aligned}
\frac{1}{p_n}\sum_{j=1}^{p_n} h_{\alpha_1}(\lambda_j) = \int\limits_{-\infty}^\infty &h_{\alpha_1}(y) dF_n^{X'X/n}(y) \\
&\xrightarrow[]{a.s.} \int\limits_{-\infty}^\infty h_{\alpha_1}(y) dF(y)
= \int\limits_{-\infty}^\infty \frac{1}{y}\,dF(y) =: \tau_0^2 \in (0, 1/\alpha_1).\end{aligned}$$ Thus $$\frac{p_n}{n} \frac{1}{p_n} \sum_{j=1}^{p_n} h_{\alpha_1}(\lambda_j) \quad\xrightarrow[]{a.s.}\quad \kappa \tau_0^2 \;=:\; \tau^2 >0.$$
\[rem:tau\]If the $l_i$ in Lemma \[lemma:traceConv\] satisfy $|l_i|=1$, almost surely, then $\tau$ in part \[lemma:traceConvC\] is given by $\tau(\kappa)= \sqrt{\kappa/(1-\kappa)}$ [cf. @Huber13 Lemma B.2].
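For orientation, the value $\tau(\kappa) = \sqrt{\kappa/(1-\kappa)}$ in the remark can be recovered from part \[lemma:traceConvC\]: if $|l_i|=1$, the limiting spectral distribution $F$ of $X'X/n$ is the Marchenko–Pastur law with ratio $\kappa$, for which the following is a standard computation (quoted here without proof):

```latex
% Marchenko-Pastur computation (assuming |l_i| = 1, so F = MP law with ratio kappa):
\tau_0^2 = \int_{-\infty}^{\infty} \frac{1}{y}\, dF(y) = \frac{1}{1-\kappa},
\qquad
\tau^2 = \kappa\,\tau_0^2 = \frac{\kappa}{1-\kappa}.
```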
\[lemma:JSneg\] Suppose that for every $n\in{{\mathbb N}}$, the class $\mathcal P_n = \mathcal P_n(\mathcal L_l, \mathcal L_u, \mathcal L_v)$ is as in Condition \[c.linmod\] and $\mathcal L_l$ has a finite fourth moment. Furthermore, let $p_n/n\to\kappa>0$ and $n> p_n$ for all $n\in{{\mathbb N}}$. Then, for every $c\in[0,1]$, every $\eta\in(0,\infty]$ and every ${\varepsilon}\in(0,1)$, the James-Stein-type estimator $\hat{\beta}_n(c)$ satisfies $$\begin{aligned}
\sup_{P\in\mathcal P_n} P^{n}\left(
\Big\|\Sigma_P^{1/2}\left(\hat{\beta}_n(c) - \beta_P\right)/\sigma_P\Big\|_{2+\eta} \ge {\varepsilon}c\sqrt{\kappa}/2
\right) \quad\xrightarrow[n\to\infty]{}\quad1.\end{aligned}$$
Consider a sequence $P_n\in\mathcal P_n$, such that $\beta_{P_n} = \sigma_{P_n}\Sigma_{P_n}^{-1/2}(\sqrt{\kappa},0,\dots, 0)'$, so that $b_n := \Sigma_{P_n}^{1/2}\beta_{P_n}/\sigma_{P_n} = (\sqrt{\kappa}, 0,\dots,0)'\in{{\mathbb R}}^{p_n}$ and $\delta_n := \|b_n\|_2 = \|b_n\|_q = \sqrt{\kappa} =: \delta$, for every $q\in(0,\infty]$. Simple relations of $\ell^q$-norms yield $$\begin{aligned}
&\left\|\Sigma_{P_n}^{1/2}\left(\hat{\beta}_n(c) - \beta_{P_n}\right)/\sigma_{P_n}\right\|_{2+\eta}
\ge
\left\|\Sigma_{P_n}^{1/2}\left(\hat{\beta}_n(c) - \beta_{P_n}\right)/\sigma_{P_n}\right\|_{(2+\eta)\lor 4}\\
&\quad=
\left\|s_n\Sigma_{P_n}^{1/2}(\hat{\beta}_n - \beta_{P_n})/\sigma_{P_n} - (1-s_n)b_n\right\|_{(2+\eta)\lor 4} \\
&\quad\ge
\left|
|s_n|\left\|\Sigma_{P_n}^{1/2}(\hat{\beta}_n - \beta_{P_n})/\sigma_{P_n}\right\|_{(2+\eta)\lor 4}
-
|s_n-1|\sqrt{\kappa}
\right|\\
&\quad\ge
|s_n-1|\sqrt{\kappa} - |s_n|\left\|\Sigma_{P_n}^{1/2}(\hat{\beta}_n - \beta_{P_n})/\sigma_{P_n}\right\|_{(2+\eta)\lor 4},\end{aligned}$$ where $$\begin{aligned}
s_n = \begin{cases}
\left( 1 - \frac{p}{n}\frac{c}{t_n^2}\frac{\hat{\sigma}_n^2}{\sigma_n^2}\right)_+, \quad&\text{if } t_n^2 >0,\\
0, &\text{else,}
\end{cases}\end{aligned}$$ and $t_n^2 = \hat{\beta}_n'X'X\hat{\beta}_n/(n\sigma_n^2)$, so that $\hat{\beta}_n(c) = s_n \hat{\beta}_n$. Clearly, we have $|s_n|\le 1$ and $\left\|\Sigma_{P_n}^{1/2}(\hat{\beta}_n - \beta_{P_n})/\sigma_{P_n}\right\|_{(2+\eta)\lor 4} \le \left\|\Sigma_{P_n}^{1/2}(\hat{\beta}_n - \beta_{P_n})/\sigma_{P_n}\right\|_4 \to 0$, in $P_n^n$-probability, by Lemma \[lemma:OLScorrSpec\]. Therefore, we see that $$\begin{aligned}
&P_n^n\left( |s_n-1|\sqrt{\kappa} - o_{P_n^n}(1) \ge {\varepsilon}c\sqrt{\kappa}/2 \right)\\
&\quad\le
P_n^n\left( \left\|\Sigma_{P_n}^{1/2}\left(\hat{\beta}_n(c) - \beta_{P_n}\right)/\sigma_{P_n}\right\|_{2+\eta} \ge {\varepsilon}c\sqrt{\kappa}/2 \right).\end{aligned}$$ From the arguments in the proof of Lemma \[lemma:JSmisspec\], and noting that now the linear model is correct (i.e., $e=0$), we easily see that $t_n^2 \to \delta^2 + \kappa = 2\kappa$, in $P_n^n$-probability. Moreover, $\hat{\sigma}_n^2 /\sigma_n^2\to 1$, in $P_n^n$-probability, because its conditional mean given $X$ converges to $1$ and its conditional variance converges to zero (see the arguments in and in the lines immediately before that display). Thus $|s_n-1|\sqrt{\kappa} \to c \sqrt{\kappa}/2 > {\varepsilon}c \sqrt{\kappa}/2$, so that the left-hand side in the previous display converges to $1$ and the proof is finished.
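For the reader's convenience, the limiting value of the shrinkage factor invoked in the last step can be spelled out explicitly (combining $p_n/n\to\kappa$, $t_n^2\to 2\kappa$ and $\hat{\sigma}_n^2/\sigma_n^2\to 1$ established above):

```latex
% Limit of the shrinkage factor s_n on the event {t_n^2 > 0}:
s_n = \left( 1 - \frac{p}{n}\,\frac{c}{t_n^2}\,\frac{\hat{\sigma}_n^2}{\sigma_n^2}\right)_+
\;\longrightarrow\;
\left( 1 - \frac{\kappa\, c}{2\kappa} \right)_+ = 1 - \frac{c}{2}
\quad\text{in } P_n^n\text{-probability},
% since c \in [0,1] gives 1 - c/2 >= 1/2 > 0; hence |s_n - 1| -> c/2.
```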
\[lemma:0stable\] If $\hat{m}_n$ is a $0$-stable predictor w.r.t. some class $\mathcal P$ of distributions on $\mathcal Z$, then there exists a collection $\{g_P:P\in\mathcal P\}$ of measurable functions $g_P : \mathcal X\to {{\mathbb R}}$, such that for all $P\in\mathcal P$, $$\begin{aligned}
&P^{n+1}\left(\{(T_n,z_0)\in \mathcal Z^{n+1} : M_{n,p}^P(T_n,x_0) = g_P(x_0)\}\right)=1, \quad \text{and}\\
&P^n\left(\{(T_{n-1},z_0)\in \mathcal Z^n : M_{n-1,p}^P(T_{n-1},x_0) = g_P(x_0)\}\right)=1.\end{aligned}$$
Fix $P\in\mathcal P$. For $i=1,\dots, n$, let $z_i, z_i'\in\mathcal Z$ and $x_0\in\mathcal X$, and note that $$\begin{aligned}
&\left| M_{n,p}^P(z_1,\dots, z_n,x_0) - M_{n,p}^P(z_1',\dots, z_n',x_0) \right|\\
&=
\left|\sum_{i=1}^n \left[M_{n,p}^P(z_1',\dots, z_{i-1}',z_i,\dots, z_n,x_0) - M_{n,p}^P(z_1',\dots, z_i',z_{i+1},\dots, z_n,x_0) \right]\right|\\
&\le
\sum_{i=1}^n \Big[\left| M_{n,p}^P(z_1',\dots, z_{i-1}',z_i,\dots, z_n,x_0) - M_{n-1,p}^P(z_1',\dots, z_{i-1}',z_{i+1},\dots, z_n,x_0) \right|+\\
&\quad\quad \left|M_{n-1,p}^P(z_1',\dots, z_{i-1}',z_{i+1},\dots, z_n,x_0) - M_{n,p}^P(z_1',\dots, z_i',z_{i+1},\dots, z_n,x_0) \right|\Big].\end{aligned}$$ By $0$-stability, the integral of this upper bound with respect to $P^{2n+1}$ is equal to zero. Therefore, applying Lemma \[lemma:0stableKEY\] with $f= M_{n,p}^P$, $S = \mathcal Z^n$, $P_S = P^n$, $T = \mathcal X$ and $P_T$ equal to the $x$-marginal distribution of $P$, the first claim follows. The second claim is now a simple consequence of $0$-stability.
\[lemma:0stableKEY\] Let $(S,\mathcal S, P_S)$ and $(T, \mathcal T, P_T)$ be two probability spaces, and let $f:S\times T \to {{\mathbb R}}$ be measurable w.r.t. the product sigma algebra $\mathcal S \otimes \mathcal T$ and the Borel sigma algebra on ${{\mathbb R}}$. If $$\int_{S^2\times T} | f(s_1,t) - f(s_2,t)|\,d P_S\otimes P_S\otimes P_T(s_1,s_2, t) = 0,$$ then there exists a measurable function $g:T\to{{\mathbb R}}$, such that $$P_S\otimes P_T\Big((s,t): f(s,t) = g(t)\Big) = 1.$$
By Tonelli’s theorem we have $$\int_{T} | f(s_1,t) - f(s_2,t)|\,d P_T(t) = 0,$$ for $P_S\otimes P_S$-almost all $(s_1,s_2)$. Let $N\in \mathcal S\otimes\mathcal S$ be the corresponding null set. Furthermore, whenever $(s_1,s_2)\in N^c$, then $f(s_1,t) = f(s_2,t)$, for $P_T$-almost all $t$. The corresponding $P_T$-null set $M(s_1,s_2)\in\mathcal T$ therefore depends on $(s_1,s_2)\in N^c$. For $s_1\in S$, consider $N_{s_1} := \{s\in S: (s_1,s)\in N\}$, i.e., the $s_1$-section of $N$, and use Tonelli again, to see that there exists a $P_S$-null set $L\in\mathcal S$, such that $P_S(N_{s_1}) = 0$, for all $s_1\in L^c$.
Next, fix $s_1\in L^c$ and define the set $$A := A(s_1):= \{(s,t)\in S\times T : s\in N_{s_1}^c, t\in M(s_1,s)^c\},$$ as well as the function $g(t):= f(s_1,t)$, for $t\in T$.[^4] We therefore have $A \subseteq \{(s,t): f(s_1,t)=f(s,t)\} = \{(s,t): g(t)=f(s,t)\}$ and, for $s\in N_{s_1}^c$, $A_s = M(s_1,s)^c$ has $P_T$-probability one. To conclude, we use Tonelli again, to obtain $$P_S\otimes P_T(A) = \int_S P_T(A_s) \,dP_S(s)
=
\int_{N_{s_1}^c} P_T(A_s) \,dP_S(s)
=
P_S(N_{s_1}^c) = 1.$$
[^1]: It turns out, however, that in the linear model $m_P(x) = x'\beta_P$ and for appropriate estimators of $\beta_P$, the conditional distribution of the prediction error $\tilde{F}$ does stabilize at its mean, i.e., the unconditional distribution, even if $n$ and $p$ are of the same order of magnitude (cf. Section \[sec:PIlength\] and the proof of Theorem \[thm:PIlength\]).
[^2]: To be formally precise, one should interpret $x_0$ as the identity mapping of $\mathcal X\subseteq{{\mathbb R}}^{p}$ onto itself and $y_0$ as the identity mapping of $\mathcal Y\subseteq{{\mathbb R}}$ onto itself.
[^3]: Note that this covers, in particular, the case where we do not even use, or do not have available, the feature vectors $x_0, \dots, x_n$, i.e., $m\equiv 0$. In this case, a *prediction interval* for $y_0$ that is only based on $y_1,\dots, y_n$ is more commonly referred to as a *tolerance interval*.
[^4]: Note that by construction, the function $g$ depends not only on $f$, but also on the null set $L$, and thus on both the probability spaces $(S,\mathcal S, P_S)$ and $(T, \mathcal T, P_T)$.
**Exclusive evolution kernels in two-loop order:**
**parity even sector.**
**A.V. Belitsky[^1], D. Müller**
*Institut für Theoretische Physik, Universität Regensburg*
*D-93040 Regensburg, Germany*
**Abstract**
We complete the construction of the next-to-leading order non-forward evolution kernels responsible for the scale dependence of, e.g., parity even singlet distribution amplitudes. Our formalism is designed to avoid any explicit two-loop calculations, employing instead conformal and supersymmetric constraints as well as known splitting functions.
Keywords: evolution equation, two-loop exclusive kernels, supersymmetric and conformal constraints
PACS numbers: 11.10.Hi, 11.30.Ly, 12.38.Bx
Introduction.
=============
Hard exclusive processes [@BroLep89], i.e. processes involving a large momentum transfer, provide information about the internal structure of hadrons that is complementary, and equally important, to the information gained, e.g., in deep inelastic scattering in terms of inclusive parton densities. By means of QCD factorization theorems [@ColSopSte89] physical observables measurable in these reactions, i.e. form factors and cross sections, are expressed as a convolution of a hard parton rescattering subprocess and non-perturbative distribution amplitudes [@BroLep89] and/or skewed parton distributions [@MulRobGeyDitHor94; @Ji96; @Rad96; @ColFraStr96]. Moreover, it is implied that the main contribution to the latter comes from the lowest two-particle Fock state in the hadron wave function. The field-theoretical background for the study of the distribution amplitudes is provided by their expression in terms of matrix elements of non-local operators sandwiched between a hadron state and the vacuum (or hadron states with different momenta in the case of skewed parton distributions) $$\label{DefSPD}
\phi (x) = \frac{1}{2\pi} \int d z_- e^{i x z_-}
\langle 0 | \varphi^\dagger ( 0 ) \varphi ( z_- ) | h \rangle .$$ Due to the light-like character of the path separating the partons, $\varphi$, the operator in Eq. (\[DefSPD\]) diverges in perturbation theory and thus requires renormalization, which inevitably introduces a momentum scale, so that the distribution acquires a logarithmic dependence on it. This dependence is governed by the renormalization group, which within the present context is cast into the form of the Efremov-Radyushkin-Brodsky-Lepage (ER-BL) evolution equation [@EfrRad78; @BroLep79] $$\label{ER-BLequation}
\frac{d}{d \ln Q^2} \phi (x, Q) =
V \left(x, y | \alpha_s(Q) \right)
{\mathop{\otimes}}^{{\rm e}}\phi (y, Q),
\qquad\mbox{with}\qquad
{\mathop{\otimes}}^{{\rm e}}= \int_{0}^{1} dy .$$ Note that the restoration of the generalized skewed kinematics in the perturbative evolution kernel, $V$, is unambiguous and straightforward [@GeyDitHorMulRob88]. Therefore, in what follows we discuss only the case when the skewedness of the process equals unity.
Recently, we addressed the calculation of the two-loop approximation to the exclusive evolution kernels and gave our results for the parity-odd sector in Ref. [@BelMulFre99a]. The main tools of our analysis were constraints coming from the known pattern of conformal symmetry breaking in QCD and supersymmetric relations arising from super-Yang-Mills theory. In the present note we address the flavour singlet parity even case, which is responsible for the evolution of the vector distribution amplitude.
Anatomy of NLO evolution kernels.
=================================
Our derivation is based on the fairly well established structure of the ER-BL kernel in NLO. Up to two-loop order we have $$\mbox{\boldmath$V$} (x, y | \alpha_s )
= \frac{\alpha_s}{2\pi}
\mbox{\boldmath$V$}^{(0)} (x, y)
+ \left( \frac{\alpha_s}{2\pi} \right)^2
\mbox{\boldmath$V$}^{(1)} (x, y)
+ {{\cal O}}(\alpha_s^3) ,$$ with the purely diagonal LO kernel $\mbox{\boldmath$V$}^{(0)}$ in the basis of Gegenbauer polynomials and NLO one having the structure governed by the conformal constraints [@Mue94; @BelMul98a; @BelMul98b] $$\label{pred-Sing}
\mbox{\boldmath$V$}^{(1)}
= - \mbox{\boldmath$\dot V$} {\mathop{\otimes}}^{{\rm e}}\left(
\mbox{\boldmath$V$}^{(0)} + \frac{\beta_0}{2}\, \1
\right)
- \mbox{\boldmath$g$} {\mathop{\otimes}}^{{\rm e}}\mbox{\boldmath$V$}^{(0)}
+ \mbox{\boldmath$V$}^{(0)} {\mathop{\otimes}}^{{\rm e}}\mbox{\boldmath$g$}
+ \mbox{\boldmath$D$} + \mbox{\boldmath$G$}.$$ Here the first three terms are induced by conformal symmetry breaking counterterms in the $\overline{\mbox{MS}}$ scheme. In contrast to the LO kernel $\mbox{\boldmath$V$}^{(0)}$, the so-called dotted kernel $\mbox{\boldmath$\dot V$}^{(0)}$ and the $\mbox{\boldmath$g$}$ kernel are off-diagonal in the space of Gegenbauer moments. They have been obtained by an LO calculation [@Mue94; @BelMul98a; @BelMul98b]. The remaining two pieces are diagonal: the $\mbox{\boldmath$G$}^{V}$ kernel is related to the crossed ladder diagram and contains the most complicated structure in terms of Spence functions, while the $\mbox{\boldmath$D$}^{V}$ kernel originates from the remaining graphs and can be represented as a convolution of simple kernels.
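For orientation (a standard fact recalled here, not derived in this paper; the signs of the one-loop anomalous dimensions $\gamma_j^{(0)}$ and of $\beta_0$ depend on conventions), the diagonality of $\mbox{\boldmath$V$}^{(0)}$ in the Gegenbauer basis implies that at LO each conformal moment of the distribution amplitude evolves multiplicatively:

```latex
% LO solution of the ER-BL equation in the conformal basis (schematic;
% in the singlet case \phi_j is a two-vector and the exponent a matrix):
\phi_j(Q) =
\left[ \frac{\alpha_s(Q)}{\alpha_s(Q_0)} \right]^{\gamma^{(0)}_j/\beta_0}
\phi_j(Q_0),
% while at NLO the off-diagonal pieces of Eq. (\[pred-Sing\]) mix
% different conformal moments j.
```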
One of the ingredients of the NLO result is the set of one-loop kernels. For them we use the improved expressions of Ref. [@BelMul98a], which are completely diagonal in the physical as well as unphysical spaces of moments. In matrix form we have $$\begin{aligned}
\label{decomp-V-V}
\mbox{\boldmath$V$}^{(0)V} (x, y)
=
\left(
\begin{array}{ll}
C_F \left[ {^{QQ}\!v} (x, y) \right]_+
& - 2 T_F N_f \, {^{QG}\!v^V} (x, y) \\
C_F\, {^{GQ}\!v^V} (x, y)
& C_A {^{GG}\!v^V} (x, y)
- \frac{\beta_0}{2} \delta(x - y)
\end{array}
\right) ,\end{aligned}$$ where $\beta_0 = \frac{4}{3} T_F N_f - \frac{11}{3} C_A$ and $C_A = 3$, $C_F = 4/3$, $T_F = 1/2$ for QCD. The structure of the kernels reflects the supersymmetry in ${{\cal N}}= 1$ super-Yang-Mills theory [@BukFroKurLip85; @BelMulSch98; @BelMul99a] $$\begin{aligned}
\label{v-kernels}
&&{^{QQ}\!v} \equiv {^{QQ}\!v^a} + {^{QQ}\!v^b},
\quad
{^{QG}\!v^V} \equiv {^{QG}\!v^a} + 2 {^{QG}\!v^c},
\nonumber\\
&&{^{GQ}\!v^V} \equiv {^{GQ}\!v^a} + 2 {^{GQ}\!v^c} ,
\quad
{^{GG}\!v^V} \equiv \left[2\, {^{GG}\!v^a} + {^{GG}\!v^b} \right]_+
+ 2 {^{GG}\!v^c},\end{aligned}$$ where the functions $v^i$ are defined in the following way $${^{AB} v^i}(x, y)
= \theta(y - x) {^{AB}\! f^i}(x, y)
\pm \left\{ {x \to \bar x \atop y \to \bar y } \right\}
\quad
\mbox{for}
\quad
\left\{ {A = B \atop A \not = B } \right. ,$$ with (here and everywhere $\bar x\equiv 1 - x$) $$\begin{aligned}
\left\{{ {^{AB}\! f^a} \atop {^{AB}\! f^b} }\right\}
&=& \frac{ x^{\nu(A) - 1/2}}{y^{\nu(B) - 1/2}}
\left\{ { 1 \atop \frac{1}{y - x} } \right\},
\nonumber\\
{^{AA}\! f^c}
&=& \frac{ x^{\nu(A) - 1/2}}{y^{\nu(A) - 1/2}}
\left\{
{ 2 \bar x y \left[ \frac{4}{3} - \ln( \bar x y ) \right] + y - x
\atop
2 \bar x y + y - x }
\right\}
\quad \mbox{for} \quad
A = \left\{ {Q \atop G } \right. ,
\nonumber\\
{^{AB}\! f^c}
&=& \frac{ x^{\nu(A) - 1/2}}{y^{\nu(B) - 1/2}}
\left\{
{ 2 x \bar y - \bar x
\atop 2 \bar x y - \bar y }
\right\}
\quad \mbox{for} \quad
A = \left\{ {Q \atop G} \right\} \not = B .\end{aligned}$$ The index $\nu(A)$ coincides with the index of the Gegenbauer polynomials in the corresponding channel, i.e. $\nu(Q) = 3/2$ and $\nu(G) = 5/2$. The eigenvalues of the same $v^i$-kernel in different channels are related to each other (here $v_{jj} \equiv v_j$) $$\begin{aligned}
\label{eigenvalues-LO-a}
&&{^{QQ}\!v^a_j} = - \frac{1}{6} {^{QG}\!v^a_j} =
\frac{6}{j ( j + 3 )} {^{GQ}\!v^a_j} = \frac{1}{2} {^{GG}\!v^a_j}
= \frac{1}{(j + 1)(j + 2)},
\nonumber\\
\label{eigenvalues-LO-b}
&&{^{QQ}\!v^b_j} = {^{GG}\!v^b_j} - 1
= - 2 \psi( j + 2 ) + 2 \psi( 1 ) + 2,
\nonumber\\
\label{eigenvalues-LO-c}
&&{^{QQ}\!v^c_j} = - \frac{1}{6}{^{QG}\!v^c_j}
= \frac{6}{j ( j + 3 )}{^{GQ}\!v^c_j}
= \frac{1}{3}{^{GG}\!v^c_j}
=\frac{2}{j ( j + 1 )( j + 2 )( j + 3 )}.\end{aligned}$$ Note that we have the identity ${^{GQ}\!v^c_j}={^{QQ}\!v^a_j}/3 = {^{GG}\!v^a_j}/6$, which in the next section will serve as a guideline for the construction of $\mbox{\boldmath$\dot V$}$ and $\mbox{\boldmath$G$}$ kernels.
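The quoted identity can be checked directly from the eigenvalues listed above:

```latex
% Direct check of {}^{GQ}v^c_j = {}^{QQ}v^a_j/3 = {}^{GG}v^a_j/6:
{^{GQ}\!v^c_j}
= \frac{j(j+3)}{6}\,{^{QQ}\!v^c_j}
= \frac{j(j+3)}{6}\cdot\frac{2}{j(j+1)(j+2)(j+3)}
= \frac{1}{3(j+1)(j+2)}
= \frac{{^{QQ}\!v^a_j}}{3}
= \frac{{^{GG}\!v^a_j}}{6}\, .
```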
Construction of $\mbox{\boldmath$\dot V$}$ and $\mbox{\boldmath$G$}$ kernels.
=============================================================================
To proceed further, let us first consider the construction of the so-called dotted kernels, whose off-diagonal conformal moments are simply expressed in terms of the one-loop anomalous dimensions, ${^{AB}\!\gamma}_j^{(0)}$, of the conformal operators as $\theta_{j - 2,k} ({^{AB}\!\gamma}_j^{(0)} - {^{AB}\!\gamma}_k^{(0)})
d_{jk}$ with $d_{jk} = - \frac{1}{2}[ 1 + ( - 1)^{j - k} ]
\frac{(2k + 3)}{(j - k)(j + k + 3)}$. We introduce the matrix $$\mbox{\boldmath$\dot V$}^{(0)V} (x, y)
=
\left(
\begin{array}{ll}
C_F \left[ {^{QQ}\!\dot v} (x, y) \right]_+
& - 2 T_F N_f {^{QG}\!\dot v}^V (x, y) \\
C_F {^{GQ}\!\dot v}^V (x, y)
&
C_A {^{GG}\!\dot v}^V (x, y)
\end{array}
\right) ,$$ where we use the decomposition analogous to Eqs. (\[decomp-V-V\],\[v-kernels\]) for the LO kernels, including the same “+”-prescription, although this time the kernels are regular at the point $x = y$. The general structure of ${^{AB} \dot v^i}$ reads $$\label{DotKernel}
{^{AB} \dot v^i} (x, y) =
\theta(y - x) {^{AB}\! f^i} (x, y) \ln \frac{x}{y}
+ \Delta{^{AB}\! \dot{f}^i} (x, y)
\pm \left\{ {x \to \bar x \atop y \to \bar y } \right\} ,
\quad
\mbox{for}
\quad
\left\{ { A = B \atop A \not= B } \right. .$$ For the dotted $a$ and $b$-kernels we have $\Delta{^{AB}\! \dot{f}^i}
(x, y) \equiv 0$ with $i = a, b$. To find the dotted $c$-kernels we make use of the fact that kernels with the same conformal moments in different channels are related by differential equations owing to the following simple relations for the Gegenbauer polynomials $$\begin{aligned}
\frac{d}{dx} C_j^{3/2}(2 x - 1)
= 6 C_{j-1}^{5/2}(2 x - 1),
\quad
\frac{d}{dx} \frac{w(x|5/2)}{N_j(5/2)} C_{j - 1}^{5/2}(2 x - 1)
= - 6 \frac{w(x | 3/2)}{N_j (3/2)} C_{j}^{3/2}(2 x - 1),\end{aligned}$$ where $w(x|\nu)=(x\bar{x})^{\nu - 1/2}$ is the weight function and $N_j(\nu)= 2^{ - 4 \nu + 1 } \frac{ \Gamma^2 (\frac{1}{2}) \Gamma
( 2 \nu + j )}{\Gamma^2 (\nu) ( \nu + j ) j! }$ is the normalization coefficient. From the knowledge of conformal moments, which are determined by the eigenvalues of the corresponding kernels given in Eq. (\[eigenvalues-LO-c\]), and using the expansion of the kernels w.r.t. the Gegenbauer polynomials $$\begin{aligned}
{^{AB}\!v^i} (x, y)
= \sum_{j = 0}^\infty
\frac{w \left( x | \nu(A) \right)}{N_j \left( \nu (A) \right)}
C^{\nu(A)}_{j + 3/2 - \nu(A)} (2 x - 1) {^{AB}\!v^i_j} \,
C^{\nu(B)}_{j + 3/2 - \nu(B)} (2 y - 1)\end{aligned}$$ we then find the following differential equations $$\begin{aligned}
\frac{d}{dy} {^{GQ}\! \dot v^c} (x, y)
&=& {^{GG}\! \dot v^a} (x, y) + {^{GG}\! v^a} (x,y),
\label{DiffEq1}\\
\frac{d}{dx} {^{GQ}\! \dot v^c} (x, y)
&=& - 2 \,{^{QQ}\! \dot v^a} (x, y) -{^{QQ}\! v^a} (x,y),
\label{DiffEq2}\\
{^{QG}\! \dot v^c} (x, y)
&=& \frac{1}{3} \frac{d}{dx} {^{GG}\! \dot v^c} (x, y) .
\label{DiffEq3}\end{aligned}$$ One of the entries in Eq. (\[DotKernel\]), namely $$\Delta{^{GG}\! \dot{f}^c} (x, y) = 2 \frac{x^2}{y^2} (y - x),$$ has been obtained in Ref. [@BelMul98b]. The ${^{GG}\!\dot v^c}$ kernel thus defined possesses the correct conformal moments in both the unphysical and the physical sector. Eqs. (\[DiffEq1\],\[DiffEq2\]) allow us (after fixing the integration constant) to find $\Delta{^{GQ}\! \dot{f}^c}$, while from Eq. (\[DiffEq3\]) we conclude that $\Delta{^{QG}\! \dot{f}^c}$ is trivially induced by $\Delta{^{GG}\! \dot{f}^c}$. Therefore, we have finally $$\begin{aligned}
\Delta{^{GQ}\! \dot{f}^c}
= x^2 (2 x - 3) \ln\frac{x}{y},
\qquad
\Delta{^{QG}\! \dot{f}^c}
= - \frac{x}{3 y^2} \left( 4 x - 5 y + 2 x y \right).\end{aligned}$$
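As a quick consistency check of the Gegenbauer differential relations employed above, consider the lowest non-trivial case $j=1$ of the first relation (using $C_1^{\nu}(t) = 2\nu t$ and $C_0^{5/2} \equiv 1$):

```latex
% Check of d/dx C_j^{3/2}(2x-1) = 6 C_{j-1}^{5/2}(2x-1) at j = 1:
C_1^{3/2}(2x-1) = 3(2x-1)
\quad\Longrightarrow\quad
\frac{d}{dx}\, C_1^{3/2}(2x-1) = 6 = 6\, C_0^{5/2}(2x-1).
```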
Next, the $\mbox{\boldmath$g$}$ function is given by [@BelMul98a; @BelMul98b] $$\begin{aligned}
\label{set-g-kernels}
\mbox{\boldmath$g$} (x, y) =
\theta(y - x)
\left(
\begin{array}{cc}
- C_F \left[ \frac{ \ln \left( 1 - \frac{x}{y} \right) }{y - x} \right]_+
& 0 \\
C_F \frac{x}{y}
& - C_A\left[ \frac{ \ln \left( 1 - \frac{x}{y} \right) }{y - x} \right]_+
\end{array}
\right)
\pm
\left\{ x \to \bar x \atop y \to \bar y \right\},\end{aligned}$$ with ($-$) $+$ sign corresponding to (non-) diagonal elements.
The construction of the diagonal $\mbox{\boldmath$G$} (x, y)$ kernel related to the crossed ladder diagrams is straightforward up to a number of points that are not obvious and constitute the most non-trivial part of the machinery. Let us present its construction here in more detail than in Ref. [@BelMulFre99a]. From the result in the flavour non-singlet sector [@Sar84; @DitRad84; @MikRad85] and the general limiting procedure to the forward case [@MulRobGeyDitHor94; @GeyDitHorMulRob88] we know that all entries in the matrix $$\label{G-kernel-odd}
\mbox{\boldmath$G$}^i (x, y)
= - \frac{1}{2}
\left(
\begin{array}{cc}
2 C_F \left( C_F - \frac{C_A}{2} \right)
\left[ {^{QQ}\!G}^i (x, y) \right]_+
&
2 C_A T_F N_f \, {^{QG}\!G}^i (x, y)
\\
C_F C_A \, {^{GQ}\!G}^i (x, y)
&
C_A^2 \left[ {^{GG}\!G}^i (x, y) \right]_+
\end{array}
\right) ,$$ must have the following general structure $$\label{kernel-G}
{^{AB} G}^i (x, y)
= \theta (y - x)
\left( {^{AB}\! H}^i + \Delta{^{AB}\! H}^i \right) (x, y)
+ \theta (y - \bar x)
\left( {^{AB} \overline H}^i
+ \Delta{^{AB} \overline H}^i \right) (x, y),$$ with the following expressions for $H$ and $\overline{H}$ $$\begin{aligned}
\label{kernel-S-H}
{^{AB} H}^i (x, y)
\!\!&=&\!\! 2 \left[ \pm {^{AB} \overline f}^i
\left( {\rm Li}_2( \bar x ) + \ln y \ln \bar x \right)
- {^{AB}\! f}^i\, {\rm Li}_2( \bar y ) \right],
\\
\label{kernel-S-bH}
{^{AB} \overline{H}}^i (x, y)
\!\!&=&\!\! 2 \left[
\left( {^{AB}\! f}^i \mp {^{AB} \overline f}^i \right)
\left( {\rm Li}_2 \left( 1 - \frac{x}{y} \right)
+ \frac{1}{2} \ln^2 y \right)
+ {^{AB}\! f}^i \left( {\rm Li}_2 ( \bar y )
- {\rm Li}_2 (x) - \ln y \ln x \right) \right],
\nonumber\\\end{aligned}$$ where the upper (lower) sign corresponds to the $A = B$ ($A \not= B$) channels. For the $QQ$ sector we have $\Delta{^{QQ}\!H} =
\Delta{^{QQ}\!\overline H}=0$. However, in general these addenda are nonzero and are needed to ensure the diagonality of the kernels. From the known two-loop splitting functions we also have to require that in the forward limit these terms contribute only to rational functions and/or terms containing single logarithms of momentum fractions.
The reduction $P \to V^{\rm D}$ procedure from the forward to non-forward kinematics [@BelMul98a] is hard to handle for the restoration of the $\Delta H$ contributions, so we have to rely on different principles. We do this by exploiting the supersymmetry and conformal covariance of ${{\cal N}}= 1$ super-Yang-Mills theory [@BelMulSch98; @BelMul99a]. Although these assumptions fail for all-order results, they hold true within the present context, since the ${^{AB} G}$ kernels arise from the crossed ladder diagrams, which have no UV divergent subgraphs and, therefore, require no renormalization. Thus, these kernels can be constructed from six constraints on the anomalous dimensions of conformal operators. In principle these relations can also be written for the kernels in the ER-BL representation, so that taking the two known entries of the quark-quark channel one can deduce all other channels. Unfortunately, at first glance it seems that not all of these constraints have a simple solution in the ER-BL representation. For this reason we modify our construction in the following way. Because of both supersymmetry and conformal covariance, the mixed channels are related by the equation $$\begin{aligned}
\label{const-III}
{^{GQ}\!G^i} (x, y)
= \frac{(\bar x x)^2}{\bar y y} {^{QG}\!G^i} (y, x).\end{aligned}$$ Employing this relation we can get a further one, from the so-called Dokshitzer supersymmetry constraint, $$\begin{aligned}
\label{const-I+III}
\frac{d}{dy}
{^{QQ}\!G^i} (x, y)
+ \frac{d}{dx} {^{GG}\!G^i} (x, y)
= - 3 {^{QG}\!G^i} (x, y),\end{aligned}$$ which allows us to obtain the $GG$ entry provided we already know the kernel in the mixed channel.
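Written out explicitly (a schematic rewriting of the constraint in our own notation, not a formula taken from the original references), a single $x$-integration yields the $GG$ entry up to an $x$-independent function:

```latex
{^{GG}\!G^i} (x, y)
  = - \int_0^x dx' \left[ 3\, {^{QG}\!G^i} (x', y)
      + \frac{d}{dy}\, {^{QQ}\!G^i} (x', y) \right]
  + \varphi^i (y) ,
```

where the integration "constant" $\varphi^i(y)$ is precisely the residual degree of freedom that has to be fixed by the diagonality of the conformal moments.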
Let us first consider the parity odd sector. At LO the moments satisfy ${^{QG}\!v^A_j} = 6\, {^{QQ}\!v^a_j}$. Thus we can obtain the $QG$ kernel simply by differentiating the $QQ$ one. Fortunately, it turns out that the ${^{QG}\!G^A}$ kernel can be obtained[^2] in the same way $$\begin{aligned}
{^{QG}\!G^A} (x, y) = \frac{d}{dy} {^{QQ}\!G^a} (x, y),\end{aligned}$$ where ${^{QQ}\!G^a}$ is given by Eqs. (\[kernel-G\]-\[kernel-S-bH\]) with $\Delta{^{QQ}\!H}^a = \Delta{^{QQ}\!\overline H}^a = 0$. The $GQ$ entry is simply deduced from Eq. (\[const-III\]), while the $GG$ one comes from the solution of the differential equation (\[const-I+III\]). The integration constant as a function of $y$ is almost fixed by the necessary condition of diagonality $$\begin{aligned}
{^{GG}\!G^A}(x,y)
= \frac{(x \bar x)^2}{(y \bar y)^2} {^{GG}\!G^A}(y,x) .\end{aligned}$$ The remaining degree of freedom can be easily fixed from the requirement that the moments ${^{GG}\!G}^A_{ji}$ are diagonal for $i = 0, 1$. To simplify the result, we remove a symmetric function (w.r.t. the simultaneous interchange $x \to \bar x$ and $y \to \bar y$) which enters both the $\Delta{^{GG}\!H}^A$ and $\Delta{^{GG}\!\overline H}^A$ kernels, albeit with different overall signs, and therefore disappears from ${^{GG}\!G}^A$. We present our final results in a symmetric manner as (cf. [@BelMulFre99a]) $$\begin{aligned}
\Delta{^{QQ}\!H}^A (x, y)
&=& \Delta{^{QQ}\!\overline H}^A (x, y) = 0,
\\
\Delta {^{QG}\!H}^A (x, y)
&=& \Delta {^{QG}\!\overline H}^A (\bar x, y),
\quad
\Delta{^{QG}\!\overline H}^A (x, y)=
\frac{x \bar x}{(y \bar y)^2} \Delta{^{GQ}\!\overline H}^A (y,x)
\\
\Delta{^{GQ}\!H}^A (x, y)
&=& - \Delta{^{GQ}\!\overline H}^A (\bar x, y)
\quad
\Delta{^{GQ}\!\overline H}^A (x, y)
=
- 2 \frac{x \bar x}{y} \ln x + 2 \frac{x \bar x}{\bar y} \ln y,
\\
\Delta{^{GG}\!H}^A (x, y)
&=& -\Delta{^{GG}\!\overline H}^A(\bar x, y),
\\
\Delta{^{GG}\!\overline H}^A (x, y)
&=&
\frac{1 - 2x \bar x}{4 \bar y^2} + \frac{1 - 2 \bar x (1 + \bar x)}{4 y^2}
- 2 \frac{x (\bar x + y - 3 \bar x y)}{\bar y y^2} \ln x
- 2 \frac{\bar x (x + \bar y - 3 x \bar y)}{y \bar y^2} \ln y .
\nonumber\end{aligned}$$
Now, instead of dealing with the whole parity even sector, we can consider only the difference $H^\delta$ between the vector and axial-vector functions, $$H^V = H^A + H^\delta .$$ At LO we have the simple relation ${^{GQ}\!v^c_j} = {^{QQ}\!v^a_j}/3
= {^{GG}\!v^a_j}/6$, see Eq. (\[eigenvalues-LO-c\]), which allows us to write down a simple relation between the kernels in different channels. However, to preserve the generic form of the ${^{GQ}\!G^c}$ function in the forward limit [@FurPet80] we have used the following modified differential equations[^3]: $$\begin{aligned}
\frac{d}{dx} {^{GQ}\!H^c}
= - 4 \left( {^{QQ}\! H^a} + 9\, {^{QQ}\! f^c}
{\mathop{\otimes}}^{{\rm e}}{^{QQ}\! f^c} \right),
\quad
\frac{d}{dy} {^{GQ}\! H^c}
= 2 \left( {^{GG}\! H^a} + 2\, {^{GG}\! f^c}
{\mathop{\otimes}}^{{\rm e}}{^{GG}\! f^c} \right),
\nonumber\\
\frac{d}{dx} {^{GQ}\!\overline{H}^c}
= - 4 \left( {^{QQ}\!\overline{H}^a}
+ 9\, {^{QQ}\!\tilde{f}^c} {\mathop{\otimes}}^{{\rm e}}{^{QQ}\! f^c} \right),
\quad
\frac{d}{dy} {^{GQ}\! \overline{H}^c}
= 2 \left( {^{GG}\! \overline{H}^a}
+ 2\, {^{GG}\! \tilde{f}^c} {\mathop{\otimes}}^{{\rm e}}{^{GG}\! f^c} \right),\end{aligned}$$ where $\tilde{f}^c (x, y) \equiv f^c (\bar x, y)$. The kernels ${^{GG}\! H^a}$ and ${^{GG}\! \overline H^a}$ are the corresponding parts of the full parity odd functions, derived in the fashion already explained above. The two sets of differential equations can be solved up to two integration constants, which can easily be fixed from the diagonality of the conformal moments. Finally, we simplify the solution by adding purely diagonal pieces containing the $a$ and $c$ kernels and their convolutions, as well as by removing symmetric terms which drop out of ${^{GQ}\!G}$.
The entry in the $QG$ channel can be obtained from the supersymmetric relation (\[const-III\]). To construct the $GG$ kernel we then use the constraint (\[const-I+III\]) with ${^{QQ}\! G^c} \equiv 0$. We determine the integration constant as a function of $x$ in the same manner as described previously. Our findings for $\Delta{^{AB} H}^\delta$ and $\Delta{^{AB}\overline H}^\delta$ can be summarized in the formulae $$\begin{aligned}
\Delta{^{QG}\! H}^\delta (x, y)
&=& - \frac{x\bar x}{(y \bar y)^2}
\Delta{^{GQ}\!H}^\delta (\bar y, \bar x),
\quad
\Delta{^{QG}\!\overline H}^\delta (x, y)
= \frac{x \bar x}{(y \bar y)^2}
\Delta{^{GQ}\!\overline H}^\delta (y, x),
\\
\Delta {^{GQ}\!H}^\delta (x, y)
&=& \Delta {^{GQ}\!\overline H}^\delta (\bar x, y)
+ 20 \frac{x (x - \bar x)}{3y}
- 4 \frac{\bar x (3 + 2 \bar x)}{3y} \ln\bar x
+ 4 \frac{x (3 + 2 x)}{3\bar y} \ln y,
\\
\Delta{^{GQ}\!\overline H}^\delta (x, y)
&=&- \frac{61}{18}
+ 2 x \bar x \Big( 1 - ( 3 - 10 \bar x ) \ln y
+ ( 3 - 10 x ) \ln x \Big)
\nonumber\\
&+& \frac{\bar x \left( 6 - 19 \bar x + 6 \bar x^2 \right)}{3y}
- 2\frac{\bar x \left( y + x ( \bar x - x) \right)}{\bar y} \ln y
+ 2 \frac{x \left( \bar y + \bar x ( x - \bar x) \right)}{y} \ln x ,
\nonumber\\
\Delta{^{GG}\!H}^\delta (x, y)
&=& \Delta{^{GG}\!\overline H}^\delta (\bar x, y)
- \frac{20 - 18 x + 55 x \bar x}{6 y^2}
- \frac{20 - 23 x \bar x}{6 \bar y^2}
- \frac{17 + 32 x + 28 x^2}{6 y \bar y} \\
&-& 2 \frac{\bar x}{y}
\left( 2 \frac{ \bar x - x }{\bar y}
+ \frac{2 + 3 \bar x}{y} \right) \ln\bar x
- 2 \frac{x}{\bar y}
\left(
2 \frac{x - \bar x}{y}
+ \frac{2 + 3 x}{\bar y} \right)\ln y,
\nonumber\\
\Delta{^{GG}\!\overline H}^\delta (x, y)
&=&
-(1 - x - y) \left(
\frac{20 - 22 x + 21 x \bar x}{6 y^2}
+ \frac{20 - 22 \bar x + 21 x \bar x}{6 \bar y^2}
+ \frac{39 + 38 x \bar x}{6 y \bar y}
\right)
\nonumber\\
&+& 2 \left( \frac{x^3}{3 y^2}
- \frac{x^2 (21 - 20 x) }{3 y}
-2 \frac{x \bar x^2}{\bar y} \right)\ln x
+ 2 \left(
\frac{\bar x^3}{3\bar y^2}
- \frac{\bar x^2 (21 - 20 \bar x) }{3\bar y}
- 2 \frac{\bar x x^2}{y}
\right)\ln y .
\nonumber\end{aligned}$$
Restoration of remaining diagonal terms.
========================================
As in the twist-two axial and transversity sectors [@BelMulFre99a], it turns out that the remaining diagonal piece, $\mbox{\boldmath$D$}^V$, can be represented as a convolution of simple diagonal kernels. To find it, we first take the forward limit $$\begin{aligned}
\label{SingletLimit}
\mbox{\boldmath$P$} (z)
= {\rm LIM}\, \mbox{\boldmath$V$} (x, y)
\equiv \lim_{\tau\to 0} \frac{1}{|\tau|}
\left(
\begin{array}{rr}
{^{QQ} V}
&
\frac{1}{\tau}{^{QG} V}
\\
\frac{\tau}{z} {^{GQ} V}
&
\frac{1}{z}{^{GG} V}
\end{array}
\right)^{\rm ext}
\left( \frac{z}{\tau}, \frac{1}{\tau} \right) ,\end{aligned}$$ and compare our result with the known DGLAP kernel $\mbox{\boldmath$P$}^V$ [@FurPet80]. In this way, $$\begin{aligned}
\mbox{\boldmath$D$}^V (z)
= \mbox{\boldmath$P$}^V(z)
- {\rm LIM}
\left\{
- \mbox{\boldmath$\dot V$} {\mathop{\otimes}}^{{\rm e}}\left( \mbox{\boldmath$V$}^{(0)V} + \frac{\beta_0}{2} \mathbb{1} \right)
- \mbox{\boldmath$g$} {\mathop{\otimes}}^{{\rm e}}\mbox{\boldmath$V$}^{(0)V}
+ \mbox{\boldmath$V$}^{(0)V} {\mathop{\otimes}}^{{\rm e}}\mbox{\boldmath$g$}
+ \mbox{\boldmath$G$}^{V}
\right\},\end{aligned}$$ we extract the remaining part ${\rm LIM}\, \mbox{\boldmath$D$}^V(x,y)$ and then find the desired convolutions in the forward representation. Note that we map the antiparticle contribution, i.e. $z < 0$, into the region $z > 0$ by taking into account the underlying symmetry of the singlet parton distributions. Our findings can be immediately mapped back into the ER-BL representation: $$\begin{aligned}
\label{D-QQ-e}
{^{QQ}\! D}^V
&=& C_F^2 \left[ D_F \right]_+
- C_F \frac{\beta_0}{2} \left[ D_\beta \right]_+
- C_F \left( C_F - \frac{C_A}{2} \right)
\left[ \frac{4}{3} {^{QQ}\!v} + 2\, {^{QQ}\!v}^a \right]_+ \\
&+& 4\, C_F T_F N_f \left\{
\frac{1}{3} {^{QQ}\!v}^a {\mathop{\otimes}}^{{\rm e}}{^{QQ}\!v}^a
- 6{^{QQ}\!v}^c {\mathop{\otimes}}^{{\rm e}}{^{QQ}\!v}^c
- {^{QQ}\!v}^a + \frac{7}{6}{^{QQ}\!v}^c
\right\}, \nonumber\\
{^{QG}\!D}^V
&=& - C_F T_F N_f
\left\{
2\left[ {^{QQ}\!v} \right]_+ {\mathop{\otimes}}^{{\rm e}}{^{QG}\!v}^c
+ {^{QQ}\!v}^a {\mathop{\otimes}}^{{\rm e}}{^{QG}\!v}^a
+ \frac{3}{2} {^{QG}\!v}^a + 6\, {^{QG}\!v}^c
\right\} \\
&+& 2\, C_A T_F N_f
\Bigg\{
- \left[ \frac{8}{3} \left[{^{QQ}\!v} \right]_+
+ 56 {^{QQ}\!v}^c \right]
{\mathop{\otimes}}^{{\rm e}}{^{QG}\!v}^c + \frac{130}{3} {^{QQ}\!v}^a
{\mathop{\otimes}}^{{\rm e}}{^{QG}\!v}^a \nonumber\\
&&\hspace{2cm}+ \left[ \frac{55}{9} - 2 \zeta (2) \right] {^{QG}\!v}^a
- \left[\frac{301}{18} + 4 \zeta (2) \right] {^{QG}\!v}^c
\Bigg\}, \nonumber\\
{^{GQ}\!D}^V
&=& C_F^2
\left\{
-\left[ {^{GG}\!v}^A \right]_+
{\mathop{\otimes}}^{{\rm e}}\left[\frac{1}{2} {^{GQ}\!v}^a
+ 3 {^{GQ}\!v}^c \right]
-5 {^{GG}\!v}^a {\mathop{\otimes}}^{{\rm e}}{^{GQ}\!v}^a
- 3 {^{GQ}\!v}^a
\right\} \nonumber\\
&-& C_F \beta_0
\left\{\left[ {^{GG}\!v}^A \right]_+ {\mathop{\otimes}}^{{\rm e}}\left[\frac{1}{2} {^{GQ}\!v}^a + {^{GQ}\!v}^c \right]
+ \frac{3}{4} {^{GG}\!v}^a {\mathop{\otimes}}^{{\rm e}}{^{GQ}\!v}^a
+ \frac{5}{3} {^{GQ}\!v}^a
\right\} \\
&+& C_F C_A
\Bigg\{
- \left[{^{GG}\!v}^A \right]_+ {\mathop{\otimes}}^{{\rm e}}\left[ {^{GQ}\!v}^a - \frac{3}{2} {^{GQ}\!v}^c \right]
-\frac{25}{6} {^{GG}\!v}^a {\mathop{\otimes}}^{{\rm e}}{^{GQ}\!v}^a
+ 9 {^{GG}\!v}^c {\mathop{\otimes}}^{{\rm e}}{^{GQ}\!v}^c
\nonumber\\
&&\hspace{2cm} - \left( \frac{43}{9}
+ 2 \zeta(2) \right) {^{GQ}\!v}^a
+ \left( \frac{8}{9} - 4 \zeta(2) \right) {^{GQ}\!v}^c
\Bigg\} , \nonumber\\
{^{GG}\!D}^V
&=& C_A^2
\Bigg\{
\left[ {^{GG}\!v}^A \right]_+
{\mathop{\otimes}}^{{\rm e}}\left[ {^{GG}\!v}^a + \frac{11}{3} {^{GG}\!v}^c \right]
- 14 {^{GG}\!v}^a {\mathop{\otimes}}^{{\rm e}}{^{GG}\!v}^a
+ 12{^{GG}\!v}^c {\mathop{\otimes}}^{{\rm e}}{^{GG}\!v}^c \nonumber\\
&+& \frac{2}{3} \left[ {^{GG}\!v}^A \right]_+
- \frac{131}{12} {^{GG}\!v}^a
+ \frac{91}{18} {^{GG}\!v}^c - 2 \delta(x - y)
\Bigg\} \\
&-& C_A \frac{\beta_0}{2}
\left\{
- \frac{1}{2} {^{GG}\!v}^a {\mathop{\otimes}}^{{\rm e}}{^{GG}\!v}^a
+ \frac{5}{3} \left[ {^{GG}\!v}^V \right]_+
+ 3 {^{GG}\!v}^a + \frac{13}{3} {^{GG}\!v}^c + 2 \delta(x - y)
\right\} \nonumber\\
&+& C_F T_F N_f
\left\{
{^{GG}\!v}^a {\mathop{\otimes}}^{{\rm e}}{^{GG}\!v}^a
+ \frac{4}{3} {^{GG}\!v}^c -\delta(x-y)
\right\} . \nonumber\end{aligned}$$ where the $D_F$ and $D_\beta$ functions are known from the flavour non-singlet case [@BelMulFre99a]. In comparison to the parity odd sector, the convolution of $c$-kernels appears as a new entry. It is worth mentioning that our result for the evolution kernels in the parity even singlet sector possesses the correct conformal moments in both the physical and unphysical sectors. This is to be contrasted with an explicit momentum fraction space calculation at LO, and with quark bubble insertions in the NLO kernels for the mixed channels [@BelMul98a], where the improved kernels do not appear.
Conclusions.
============
To recapitulate the results presented here, we have reconstructed the two-loop singlet evolution kernels responsible for the scale dependence of the vector meson distribution amplitudes. We have avoided cumbersome next-to-leading order calculations by adhering to an extensive use of the conformal and supersymmetric constraints derived earlier, which thus play a paramount rôle in the formalism. The correctness of the results given here is verified by evaluating the Gegenbauer moments of the kernels, which coincide with the anomalous dimensions derived in Ref. [@BelMul98b]. Our findings now allow direct numerical integration (see Refs. [@FFGS98; @MusRad99] for a leading order analysis of non-forward parton distributions) of the generalized exclusive evolution equations, which provides a competitive alternative to the previously developed method of orthogonal polynomial reconstruction of skewed parton distributions pursued by us in Ref. [@Beletal97].
A.B. was supported by the Alexander von Humboldt Foundation.
[99]{} S.J. Brodsky, G.P. Lepage, [*Exclusive processes in Quantum Chromodynamics*]{}, In [*Perturbative Quantum Chromodynamics*]{}, ed. A.H. Mueller, World Scientific, Singapore (1989). J.C. Collins, D.E. Soper, G. Sterman, [*Factorization of hard processes in QCD*]{}, In [*Perturbative Quantum Chromodynamics*]{}, ed. A.H. Mueller, World Scientific, Singapore (1989). D. Müller, D. Robaschik, B. Geyer, F.-M. Dittes, J. Hořejši, Fortschr. Phys. 42 (1994) 101. X. Ji, Phys. Rev. D 55 (1997) 7114; J. Phys. G 24 (1998) 1181; A.V. Radyushkin, Phys. Rev. D 56 (1997) 5524. J.C. Collins, L. Frankfurt, M. Strikman, Phys. Rev. D 56 (1997) 2982. A.V. Efremov, A.V. Radyushkin, Theor. Math. Phys. 42 (1980) 97; Phys. Lett. B 94 (1980) 245. S.J. Brodsky, G.P. Lepage, Phys. Lett. B 87 (1979) 359; Phys. Rev. D 22 (1980) 2157. F.-M. Dittes, B. Geyer, D. Müller, D. Robaschik, J. Hořejši, Phys. Lett. B 209 (1988) 325; A.V. Belitsky, D. Müller, A. Freund, [*Reconstruction of non-forward evolution kernels*]{}, hep-ph/9904477. D. Müller, Phys. Rev. D 49 (1994) 2525. A.V. Belitsky, D. Müller, Nucl. Phys. B 527 (1998) 207. A.V. Belitsky, D. Müller, Nucl. Phys. B 537 (1999) 397. A.P. Bukhvostov, G.V. Frolov, E.A. Kuraev, L.N. Lipatov, Nucl. Phys. B 258 (1985) 601. A.V. Belitsky, D. Müller, A. Schäfer, Phys. Lett. B 450 (1999) 126. A.V. Belitsky, D. Müller, [*${\cal N} = 1$ supersymmetric constraints for evolution kernels*]{}, hep-ph/9905211. M.H. Sarmadi, Phys. Lett. B 143 (1984) 471. F.-M. Dittes, A.V. Radyushkin, Phys. Lett. B 134 (1984) 359. S.V. Mikhailov, A.V. Radyushkin, Nucl. Phys. B 254 (1985) 89. W. Furmanski, R. Petronzio, Phys. Lett. B 97 (1980) 437;\
E.G. Floratos, C. Kounnas, R. Lacaze, Nucl. Phys. B 192 (1981) 417;\
R.K. Ellis, W. Vogelsang, [*The evolution of parton distributions beyond leading order: singlet case*]{}, hep-ph/9602356. L. Frankfurt, A. Freund, V. Guzey, M. Strikman, Phys. Lett. B 418 (1998) 345; Phys. Lett. B 429 (1998) 414 (E). I.V. Musatov, A.V. Radyushkin, [*Evolution and models for skewed parton distributions*]{}, hep-ph/9905376. A.V. Belitsky, B. Geyer, D. Müller, A. Schäfer, Phys. Lett. B 421 (1998) 312;\
A.V. Belitsky, D. Müller, L. Niedermeier, A. Schäfer, Phys. Lett. B 437 (1998) 160; Nucl. Phys. B 546 (1999) 279;\
A.V. Belitsky, D. Müller, [*Scaling violations and off-forward parton distributions: leading order and beyond*]{}, hep-ph/9905263.
[^1]: Alexander von Humboldt Fellow.
[^2]: The correctness of this and subsequent results is checked by forming the Gegenbauer moments and comparing them with known NLO forward anomalous dimensions [@FurPet80].
[^3]: Note that we introduce a shorthand notation for the convolution, namely, ${^{QQ}\!f^i} {\mathop{\otimes}}^{{\rm e}}{^{QQ}\!f^j}$ is understood as convolution of the corresponding ER-BL kernels and then keeping only the part proportional to $\theta(y-x)$.
---
abstract: 'Explaining the late time acceleration is one of the most challenging tasks for theoretical physicists today. Infra-red modification of Einstein’s general theory of relativity (GR) is a possible route to model late time acceleration. In this regard, vector-tensor theory, as a part of gravitational interactions on large cosmological scales, has been proposed recently. This involves the generalization of the massive Proca lagrangian to curved space time. Black hole solutions in such theories have also been constructed. In this paper, we study different astrophysical signatures of such black holes. We first study the strong lensing and time delay effects of such static spherically symmetric black hole solutions, in particular for the case of gravitational lensing of the star S2 by Sagittarius A\* at the centre of the Milky Way. We also construct the rotating black hole solution from this static spherically symmetric solution in Proca theories using the Newman-Janis algorithm and subsequently study the lensing, time delay and black hole shadow effects in this rotating black hole space time. We discuss the possibility of detecting the Proca hair in future observations.'
author:
- Mostafizur Rahman
- 'Anjan A. Sen'
title: '**[Astrophysical Signatures of Black holes in Generalized Proca Theories]{}**'
---
Introduction {#intro}
============
Einstein’s General Theory of Relativity (GR) is an extremely successful theory to describe gravity from Solar System scales involving planetary motions up to Cosmological scales describing the expansion of the Universe, the formation of light elements, the existence of the cosmic microwave background radiation, and the formation of large scale structures. But the late time acceleration of the Universe, first confirmed by SNIa observations two decades ago [@Riess1998; @Perlmutter1999; @Tonry2003; @Knop2003; @Riess2004], is the first observed astrophysical phenomenon that attractive gravity fails to explain. Accelerated expansion of the Universe demands the existence of repulsive gravity at large cosmological scales. This can be achieved either by modifying the matter sector with exotic components having negative pressure, or by modifying gravity at large cosmological scales (see [@copeland; @Padmanabhan; @Peebles; @Sahni] for reviews on this topic). Although the cosmological constant ($\Lambda$), as introduced by Einstein to model a static Universe, is the simplest solution to the late time acceleration of the Universe, the large discrepancy between the observed value of $\Lambda$ and the value we expect from the field theory point of view is the greatest obstacle for it to be a successful explanation of the late time acceleration of the Universe (recent observations also suggest tensions between $\Lambda$CDM and the data [@R16; @R18]). A consistent theory of quantum gravity is needed to solve this cosmological constant problem.
Going beyond $\Lambda$, whether one modifies the matter sector or the gravity sector, scalar fields play the most important role in the late time acceleration of the Universe [@copeland]. Scalar fields do exist in nature; the Higgs field, which is a fundamental ingredient of the standard model of particle physics [@higgs], is the best example of a scalar field that exists in nature. Moreover, being a scalar, it can be naturally incorporated in an isotropic and homogeneous Universe. It can also give rise to repulsive gravity through its slow-roll property and hence can explain late time acceleration. But these scalar fields have to be very light in order to slow-roll at large cosmological scales and, without any mechanism to suppress their possible interactions with baryons, they give rise to a long-range fifth force on baryons that is not observed on solar system scales. To avoid such tensions, we need some screening mechanism that prevents the scalar field from interacting with baryons on small scales, but allows the scalar field to produce the desired late time accelerated expansion at large cosmological scales. The chameleon mechanism [@chameleon] and the Vainshtein mechanism [@vainstein] are examples of such screening processes.
Amongst the scalar field models for infra-red modification of gravity, the Galileon model is one of the most studied [@galileon; @covariant; @lategalileon]. It was first introduced as a natural extension of the DGP brane-world model [@dgp] in the decoupling limit [@decoup]. The lagrangian for the Galileon field respects the shift symmetry and contains higher derivative terms. Despite this, the equation of motion for the Galileon field is second order and hence the theory is free from Ostrogradsky ghosts [@ostro]. One can also implement the Vainshtein mechanism in this model to preserve the local physics and to satisfy the solar system constraints. The general Galileon action with second order equations of motion contains non-minimal derivative couplings with the Ricci and Einstein tensors. This is a subclass of the more general Horndeski theories [@horndeski] which contain scalar-tensor interactions with second order equations of motion on a curved background. Massive gravity theories [@massive; @galileon] are other examples of general scalar-tensor theories with second order equations of motion.
Similar to scalar-tensor theories, one can also have consistent models of vector-tensor theory as a part of the gravitational interactions on large scales, resulting in the late time acceleration of the Universe [@Tasinato:2014eka]. In Minkowski space, allowing a mass for the vector field leads to the Proca lagrangian. One can then generalize this massive Proca lagrangian to curved space time. This has been done in a recent paper by Heisenberg [@Heisenberg:2014rta], where a generalized massive Proca lagrangian in a curved background with second order equations of motion has been proposed. This constitutes a Galileon-type self interaction for the vector field including the non-minimal derivative coupling to gravity. Different cosmological aspects of such models, as well as constraints from cosmological observations, have been studied in several recent works [@cosmoproca]. In a recent paper, Heisenberg has studied, in a systematic way, different generalisations of Einstein gravity and their cosmological implications [@heisen_recent].
The recent results from the Advanced LIGO experiment for measuring gravitational waves [@ligo] have opened up the opportunity to probe astrophysical black holes. The latest gravitational wave measurements from two colliding neutron stars and their electromagnetic counterparts [@ligoneutron] have confirmed the validity of GR for these astrophysical processes. This puts extremely tight constraints on different modified gravity theories based on scalar fields [@ligode] explaining the late time acceleration of the Universe. In a recent work, Jimenez and Heisenberg [@Jimenez:2016isa] have put forward vector models for dark energy based on the Proca lagrangian with $c_{gw} =1$, making them consistent with the latest LIGO observations for the neutron star merger. But the model can still give non-trivial predictions for gravitational waves.
To probe any gravitational theory at astrophysical scales, black holes are the best candidates. Recently, Heisenberg et al. have constructed hairy black hole solutions in generalized Proca theories [@Heisenberg:2017xda]. For a power-law coupling, they found a class of asymptotically flat hairy black hole solutions. These are not exact solutions but iterative series solutions up to $\mathcal{O}(1/r^3)$ which match the numerical solutions excellently. These are hairy black hole solutions in a modified gravity scenario, and it is extremely interesting to study their astrophysical signatures to probe the underlying modified gravity theory.
Gravitational lensing is one of the most interesting astrophysical phenomena due to the gravitational effects of massive bodies. It is, broadly, the bending of light due to the curvature of the space time, and as the curvature of the space time depends on the gravitational properties of massive bodies, one can directly constrain different properties of a massive body, like its mass or angular momentum, by observing its gravitational lensing effect. In the solar system, observers first confirmed the validity of Einstein's GR through lensing effects. But in the solar system the effect is rather weak, with a deflection angle much smaller than $2\pi$ [@lense1; @lense2; @lense3]. It can, however, be large in the vicinity of strongly gravitating objects like black holes, where the photon can circle in closed loops around the black hole many times due to the strong gravitational effect before escaping. There exists a sphere around the black hole called the “[*photon sphere*]{}”, where the deflection angle for the photon can even diverge. Gravitational lensing in the space time of Schwarzschild black holes was first studied by Virbhadra and Ellis [@virbhadra] and later extended to Reissner-Nordström [@lensrn] and Kerr black holes [@lenskerr], black holes in brane-world models [@lensbrane] and Galileon models [@lensgal], in extra dimensions with a Kalb-Ramond field [@lenskalb], and so on. As strong gravitational lensing in the vicinity of black holes probes different properties of the black holes, it is also useful to probe different modified gravity theories, since the standard Schwarzschild or Kerr black hole solutions get modified in different versions of modified gravity theories. Moreover, through gravitational lensing, one can probe the region around black holes known as the “[*black hole shadow*]{}” [@shadow]. The shape and size of the black hole shadow is a direct probe of the black hole space time and hence of the underlying gravity theory.
With the prospects of the Event Horizon Telescope [@eht] as well as telescopes like SKA [@ska], one can resolve the black hole shadow with great accuracy, and hence probing modified gravity through such observations will be possible in the near future.
In this paper, we study the strong lensing phenomena for the black hole spacetimes in generalized Proca theories. Throughout the paper, we have used geometrized units $G=c=1$.
Hairy black hole solution in generalized Proca theories {#sec2}
=======================================================
The action for the generalized Proca theory is given by [@Heisenberg:2017xda; @Heisenberg:2014rta; @Jimenez:2016isa; @Heisenberg:2017hwb]: $${\label{intro1}}
S=\int d^4x \sqrt{-g} \biggl( F+\sum_{i=2}^{6}{\cal L}_i
\biggr)\,,$$ with $$\begin{aligned}
{\label{intro2}}
& &{\mathcal L}_2=G_2(X)\,,\qquad
{\mathcal L}_3=G_3(X) {\mathcal A^{\mu}}_{;\mu}\,, \nonumber \\
& &{\mathcal L}_4=G_4(X)R+G_{4,X} \left[
({\mathcal A^{\mu}}_{;\mu})^2 -{\mathcal A_{\nu}}_{;\mu} {\mathcal A^{\mu}}^{;\nu}
\right]-2g_4(X)F\,, \nonumber \\
& &{\mathcal L}_5=G_5(X)G_{\mu \nu} {\mathcal A^{\nu}}^{;\mu}
-\frac{G_{5,X}}{6} [ ({\mathcal A^{\mu}}_{;\mu})^3
-3{\mathcal A^{\mu}}_{;\mu} {\mathcal A_{\sigma}}_{;\rho}
{\mathcal A^{\rho}}^{;\sigma} \nonumber \\
& &~~~~~+2{\mathcal A_{\sigma}}_{;\rho}{\mathcal A^{\rho}}^{;\nu}
{\mathcal A_{\nu}}^{;\sigma}]
-g_5(X)\tilde{F}^{\alpha \mu} {\tilde{F}^{\beta}}_{\mu}
\mathcal A_{\beta;\alpha}\,,\nonumber \\
& &{\mathcal L}_6=G_6(X) L^{\mu \nu \alpha \beta}
\mathcal A_{\nu;\mu}\mathcal A_{\beta;\alpha}
+\frac{G_{6,X}}{2}\tilde{F}^{\alpha \beta} {\tilde{F}^{\mu \nu}}
\mathcal A_{\mu;\alpha}\mathcal A_{\nu;\beta}.\end{aligned}$$ Here $F=-F_{\mu \nu}F^{\mu \nu}/4$. The functions $G_{2} - G_{6}$ as well as $g_{4}$ and $g_{5}$ depend on $X=-\mathcal{A}_{\mu}\mathcal{A}^{\mu}/2$. We denote $G_{i,X}=\partial G_i/\partial X$. The vector field $\mathcal{A}^{\mu}$ has non-minimal couplings with the space time curvature through $L^{\mu \nu \alpha \beta}={\cal E}^{\mu \nu \rho \sigma}
{\cal E}^{\alpha \beta \gamma \delta}R_{\rho \sigma \gamma \delta}/4$, where ${\cal E}^{\mu \nu \rho \sigma}$ is the Levi-Civita tensor and $R_{\rho \sigma \gamma \delta}$ is the Riemann tensor. The dual field strength tensor is $\tilde{F}^{\mu \nu}={\cal E}^{\mu \nu \alpha \beta}F_{\alpha \beta}/2$. The Einstein-Hilbert term is recovered from the constant piece $M_{\rm pl}^2/2$ contained in $G_4(X)$.
To describe black holes in this generalized Proca theory, one assumes a static, spherically symmetric space time:
$${\label{intro3}}
ds^{2} =-A(r) dt^{2} +B(r)dr^{2} +
C(r) \left( d\vartheta^{2}+\sin^{2}\vartheta\,d\varphi^{2}
\right)\,,$$
together with the vector field $\mathcal{A}_{\mu}=(\mathcal{A}_{0}(r),\mathcal{A}_{1}(r),0,0)$. Here $A(r)$, $B(r)$, $\mathcal A_0(r)$, and $\mathcal A_1(r)$ are arbitrary functions of $r$. In [@Heisenberg:2017xda; @Heisenberg:2017hwb], the following action has been considered for general Proca theory:
$${\label{intro4}}
S=\int d^4x \sqrt{-g} \biggl( \frac{M_{Pl}^2}{2}R +\beta_{3} {\cal A}^{\mu}_{;\mu} X + F\biggr).$$
Up to $\mathcal{O}(1/r^3)$, the black hole solution of this theory is given by [@Heisenberg:2017xda; @Heisenberg:2017hwb]:
$${\label{intro5}}
\begin{aligned}
A(r)&=1-\frac{2}{r}-\frac{P^{2}}{6r^{3}} + \mathcal{O}(1/r^4)\\
B(r)^{-1}&=1-\frac{2}{r}-\frac{P^{2}}{2r^{2}}-\frac{P^{2}}{2r^{3}} + \mathcal{O}(1/r^4)\\C(r)&=r^{2}
\end{aligned}$$
where we have rescaled the radial coordinate as $r \to r/M$, with $M$ the mass of the black hole. Throughout the paper, all distances are measured in units of the mass of the black hole ($M=1$) unless otherwise specified. Here $P$ is the Proca hair, related to the time component of the vector field as $\mathcal{A}_0=(P-P/r-P/(2r^2))M_{Pl}+{\cal O}(1/r^3)$. We have similarly rescaled $P \to P/M_{Pl}$, where $M_{Pl}$ is the Planck mass. Clearly, the metric satisfies the asymptotic flatness condition, $\lim\limits_{r \to \infty}A(r)=\lim\limits_{r \to \infty}B(r)=1$. Note that in the limit $P\to 0$, i.e. when the Proca hair vanishes, the above metric elements reduce to those of the Schwarzschild metric.
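As a quick numerical illustration (our own sketch, not part of the original analysis; the function names are ours, and the result is only indicative since the metric is an expansion to $\mathcal{O}(1/r^3)$, strictly reliable for small $P$), one can evaluate these metric functions and locate the largest root of $A(r)=0$ by bisection:

```python
def metric_A(r, P):
    """g_tt metric function A(r) of the hairy black hole, to O(1/r^3), with M = 1."""
    return 1.0 - 2.0 / r - P**2 / (6.0 * r**3)

def metric_B(r, P):
    """g_rr metric function B(r), i.e. the inverse of B^{-1}(r) above, with M = 1."""
    return 1.0 / (1.0 - 2.0 / r - P**2 / (2.0 * r**2) - P**2 / (2.0 * r**3))

def horizon(P, lo=1.5, hi=3.0, tol=1e-12):
    """Largest root of A(r) = 0 in [lo, hi], found by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if metric_A(lo, P) * metric_A(mid, P) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For $P=0$ this returns the Schwarzschild value $r=2$, while a small Proca hair shifts the root slightly outward, consistent with the negative $-P^2/(6r^3)$ correction to $A(r)$.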
Lensing Effect in Strong Field Limit in a Static, Spherically symmetric metric
==============================================================================
Before considering the spacetime of our interest, we review the gravitational lensing effect in the strong field limit (SFL) in a general asymptotically flat, static and spherically symmetric space-time. In this section we discuss the main concepts and different observables related to gravitational lensing in the strong field limit, following Ref. [@Bozza:2002zj].
Observables in Strong Field Limit {#bb}
---------------------------------
Any generic static, spherically symmetric spacetime can be described by the line element (\[intro3\]). In order to study the photon trajectory, we will assume that the equation [@Claudel:2000yi; @virbhadra; @Bozza:2002zj] $${\label{2}}
C'(r)A(r)-A'(r)C(r)=0$$ admits at least one positive solution; the largest such solution defines the radius of the photon sphere, $r_{m}$. We further assume that $A(r)$, $B(r)$ and $C(r)$ are finite and positive for $r\geq r_{m}$ [@Bozza:2002zj]. Since the spacetime is spherically symmetric, we can restrict our attention to the equatorial plane ($\vartheta=\pi/2$) without loss of generality. Now we can formulate the lensing problem. Consider a black hole situated at the origin. A photon with impact parameter $u$, incoming from a source situated at $r_{S}$, deviates while approaching it. Suppose the photon approaches the black hole up to a minimum distance $r_{0}$ and is then deflected away from it. An observer situated at $r_{R}$ detects the photon (see Fig.-(\[fig:Lensing system\])). In the strong field limit, we consider only those photons whose closest approach distance $r_{0}$ is very near to $r_{m}$, so the deflection angle $\alpha$ can be expanded around the photon sphere radius $r_{m}$, or equivalently around the minimum impact parameter $u_{m}$. As long as the closest approach distance is greater than $r_{m}$, the photon is simply deflected (it may complete several loops around the black hole before reaching the observer). When it reaches the critical value $r_{0}=r_{m}$ (or $u=u_{m}$), $\alpha$ diverges and the photon gets captured. Following the method developed by Bozza [@Bozza:2002zj], one can show that this divergence is logarithmic in nature and the deflection angle can be written as $${\label{6}}
\alpha(\theta)=-\bar{a} \ln\left(\frac{\theta}{ \theta_{m}}-1\right)+\bar{b}$$ where the subscript ‘$m$’ denotes a function evaluated at $r=r_{m}$. Here $\theta$ is the incidence angle at the observer, whereas $\theta_{m}=u_{m}\sqrt{A(r_{R})/C(r_{R})}$ corresponds to an incoming photon with the minimum impact parameter $u_{m}=\sqrt{C_{m}/A_{m}}$. When $\theta\leq\theta_{m}$, the photon gets captured. The parameters $\bar{a}$ and $\bar{b}$ are called the strong lensing coefficients; their functional forms are given in Eqs. (35)-(36) of Ref. [@Bozza:2002zj].
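In code, the SFL deflection law reads as follows; a minimal sketch, where the coefficients $\bar a$ and $\bar b$ must be supplied from Eqs. (35)-(36) of Ref. [@Bozza:2002zj] (for the Schwarzschild case, $\bar a=1$ and $\bar b\simeq -0.4002$):

```python
import math

def deflection(theta, theta_m, abar, bbar):
    # alpha(theta) = -abar*ln(theta/theta_m - 1) + bbar; diverges as theta -> theta_m+
    if theta <= theta_m:
        raise ValueError("photon captured: theta <= theta_m")
    return -abar * math.log(theta / theta_m - 1) + bbar

# rays grazing theta_m wind more before escaping (Schwarzschild coefficients)
assert deflection(1.001, 1.0, 1.0, -0.4002) > deflection(1.01, 1.0, 1.0, -0.4002)
```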
![A schematic diagram of the lensing system. Light from the source S gets lensed by the black hole L and is incident on the observer O at an angle $\theta$. The image is formed at I. The line joining O and L is called the optic axis [@lenskalb]. []{data-label="fig:Lensing system"}](Systemoflens.eps){width="\linewidth"}
With the help of Eq. (\[6\]), we can calculate the observables for strong lensing corresponding to any given static and spherically symmetric metric using the lens equation. The corresponding observables are (i) the position of the innermost image, $\theta_{m}$, (ii) the angular separation $s$ between the first relativistic image (the outermost one) and the innermost image, and (iii) the relative flux between different images, $\mathcal{R}$. To this end, we introduce the coordinate-independent lens equation $\alpha=\theta-\theta_{S}+\phi_{RS}$, where $\theta$ and $\theta_{S}$ denote the angles measured at the receiver position and the source position, respectively, while $\phi_{RS}$ is the angle between the azimuthal coordinates of source and observer [@Ishihara:2016vdc]. The quantity $\alpha$ is geometrically invariant and, in the asymptotically flat limit, coincides with the deflection angle. If $\bar{\theta}$ denotes the impact angle as seen from the source, then the angle measured from the source position becomes $\theta_{S}=\pi-\bar{\theta}$. Let $\gamma$ be the angle between the optic axis (the line joining the observer and the lens) and the line joining the lens and the source (see Fig-\[fig:Lensing system\]); note that $ \phi_{RS}=\pi -\gamma$. Then the lens equation connecting the observer and source positions takes the form $${\label{8}}
\gamma=\theta+\bar{\theta}-\alpha(\theta)$$ This equation is known as the Ohanian lens equation [@1987AmJPh..55..428O] and, as discussed in Ref. [@Bozza:2008ev], is the best approximate lens equation in asymptotically flat spacetime. Since both the source and the observer are situated far away from the black hole, $\bar{\theta}$ can be approximated as $\bar{\theta}=\theta r_{R}/r_{S}$. With this condition and using Eqs. (\[6\]) and (\[8\]), one can obtain the position of the $n$-th order image [@Bozza:2002zj] $${\label{9}}
\theta_{n}= \theta_{m}\left(1+\exp\left(\dfrac{\bar{b}+\gamma-2n\pi}{\bar{a}}\right)\right)$$ where $n$ corresponds to the number of windings around the black hole. When $n\to\infty$, $\theta_{n}$ tends to $\theta_{m}$, so $\theta_{m}$ represents the position of the innermost relativistic image. In the simplest situation, we consider the outermost image (the first relativistic image) $\theta_{1}$ to be resolved as a single image, with all the other images packed together at $\theta_{m}$ [@Bozza:2002zj; @lenskalb]. The angular separation between these two images is then defined as [@Bozza:2002zj] $${\label{10}}
s= \theta_{1}-\theta_{m}=\theta_{m} \exp\left(\dfrac{\bar{b}+\gamma-2\pi}{\bar{a}}\right)$$
Magnification of an image is defined as the ratio of the solid angle subtended at the observer with the lens to the solid angle without the lens, i.e. $ \mu=\sin\theta d\theta/\sin\chi d\chi$, where $\chi$ is the angle of the source as seen by the observer with respect to the optic axis (see Fig-\[fig:Lensing system\]). Note that the lens equation Eq. (\[8\]) does not contain any term involving $\chi$. So, using the relation $ r_{S}\sin\gamma=D_{OS}\sin\chi$ and considering $D_{OS}\gg r_{S}$, one can easily show that the magnification of the $n$-th relativistic image can be written as [@Bozza:2002zj] $$\begin{aligned}
{\label{11}}
& &{\mu}_{n}=\left(\frac{D_{OS}}{r_{S}}\right)^{2}\dfrac{\theta_{m}^{2} e_{n}(1+e_{n})}{\bar{a}\sin\gamma}\,,\qquad
{e}_{n}=\exp \left(\dfrac{\bar{b}+\gamma-2n\pi}{\bar{a}}\right)\,. \nonumber \end{aligned}$$
where $D_{OS}$ is the distance between the source and the observer. The ratio of the magnification, and hence of the flux, of the first relativistic image to that of all the other images is given by [@Bozza:2002zj] $${\label{13}}
\mathcal{R}=2.5\log_{10}\left(\dfrac{\mu_{1}}{\sum\limits_{n=2}^{\infty}\mu_{n}}\right)=\frac{5\pi}{\bar{a}\ln 10}$$ If we have precise knowledge of $\gamma$ and of the observer-to-lens distance $r_{R}$, then we can infer the strong lensing coefficients $\bar{a}$, $\bar{b}$ and the minimum impact parameter $u_{m}$ by measuring $\mathcal{R}$, $s$ and $\theta_{m}$. By comparing them with the values predicted by given theoretical models, we can then identify the nature of the black hole.
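The three observables follow directly from Eqs. (\[9\]), (\[10\]) and (\[13\]); a minimal sketch (the function names are ours):

```python
import math

def theta_n(n, theta_m, abar, bbar, gamma):
    # position of the n-th relativistic image
    return theta_m * (1 + math.exp((bbar + gamma - 2*n*math.pi) / abar))

def separation(theta_m, abar, bbar, gamma):
    # s = theta_1 - theta_m
    return theta_n(1, theta_m, abar, bbar, gamma) - theta_m

def flux_ratio(abar):
    # R = 2.5 log10(mu_1 / sum_{n>=2} mu_n) = 5 pi / (abar ln 10)
    return 5 * math.pi / (abar * math.log(10))

# higher-order images accumulate at theta_m
assert abs(theta_n(20, 1.0, 1.0, -0.4, 0.0) - 1.0) < 1e-12
```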
Time delay in strong field gravitational lensing {#Td}
------------------------------------------------
In this section we briefly review the time delay effect in a static, spherically symmetric spacetime, following the method developed by Bozza and Mancini [@Bozza:2003cp]. From the discussion in the previous section, it is clear that the formation of multiple images is a key feature of strong lensing, and in general the times taken by photons following different paths (corresponding to different images) are not the same. There is thus a time delay between different images. Moreover, the time delay depends on which side of the lens the images are formed. When both images are on the same side of the lens, the time delay between the $m$-th and $n$-th relativistic images can be expressed as [@Bozza:2003cp] $${\label{Td1}}
\Delta T_{mn}^{s}=-u_{m} 2\pi(m-n)+2\sqrt{u_{m}}\sqrt{\frac{B_{m}}{A_{m}}}\left(\exp\left(\dfrac{\bar{b}+\gamma-2m\pi}{2\bar{a}}\right)-\exp\left(\dfrac{\bar{b}+\gamma-2n\pi}{2\bar{a}}\right)\right)$$ The sign of $\gamma$ depends on which side of the source the images are formed. When the images are formed on opposite sides of the lens, the time delay between the $m$-th and $n$-th relativistic images can be expressed as [@Bozza:2003cp] $${\label{Td2}}
\Delta T_{mn}^{o}=-u_{m} (2\pi(m-n)-2\gamma)+2\sqrt{u_{m}}\sqrt{\frac{B_{m}}{A_{m}}}\left(\exp\left(\dfrac{\bar{b}+\gamma-2m\pi}{2\bar{a}}\right)-\exp\left(\dfrac{\bar{b}-\gamma-2n\pi}{2\bar{a}}\right)\right)$$ Instruments with very high observational precision would be needed to detect the contribution of the second term, so for practical purposes we can approximate the time delay by the first term alone. In terms of $\theta_{m}$, one obtains an interesting result when both images are formed on the same side of the lens: the time delay between the first and second relativistic images can then be expressed as [@Bozza:2004kq] $${\label{Td3}}
\Delta T_{12}^{s}=\theta_{m}2\pi r_{R}$$ In principle, using this formula, we can get a very accurate estimate of the distance of the black hole. Note that for a distant observer $A(r_{R})$ is practically 1, so $\theta_{m}$ can be written as $\theta_{m}=r_{m}/\sqrt{A_{m}r_{R}^{2}}$. Using Eq. (\[Td3\]), we then find the interesting relation $${\label{Td4}}
\dfrac{r_{m}^{2}}{A(r_{m})}=\left(\dfrac{\Delta T_{12}^{s}}{2\pi}\right)^{2}$$ This equation neatly relates an observational quantity, the time delay $\Delta T_{12}^{s}$ between the first and second relativistic images, to a theoretical one, the metric function $A(r)$ evaluated at $r=r_{m}$. One can thus test a given theoretical model by solving this equation with the observational data for $\Delta T_{12}^{s}$.
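As a sanity check of this chain of relations, in the Schwarzschild limit ($P=0$, units $G=c=M=1$) one has $u_{m}=3\sqrt{3}$, so Eq. (\[Td3\]) gives $\Delta T^{s}_{12}=2\pi u_{m}\approx 32.65\,M$ independently of the observer distance; a minimal sketch:

```python
import math

def A_schw(r):
    # Schwarzschild lapse, M = 1
    return 1 - 2/r

r_m = 3.0                                  # photon sphere radius
u_m = math.sqrt(r_m**2 / A_schw(r_m))      # u_m = sqrt(C_m/A_m) = 3*sqrt(3)
r_R = 1e10                                 # any distant observer
theta_m = u_m / r_R                        # theta_m ~ u_m/r_R for a distant observer
dT12 = theta_m * 2 * math.pi * r_R         # Delta T_12^s = theta_m * 2*pi*r_R

# (dT12 / 2*pi)^2 = u_m^2 = C_m/A_m, independent of r_R
assert abs(dT12 - 2 * math.pi * math.sqrt(27)) < 1e-9
```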
Orbital Parameter $\varrho$ (pc) $T$ (yr) $e$ $T_{o}$ (yr) $i$ (deg) $\Omega$ (deg) $\omega$ (deg)
------------------- ---------------------- ---------- --------- -------------- ----------- ---------------- ----------------
Value $4.54\times 10^{-3}$ $15.92$ $ 0.89$ $ 2018.37$ $ 45.7$ $ 45.9$ $244.7$
: Orbital parameters for S2. Its orbit is an ellipse with semi-major axis $\varrho$ and eccentricity $e$. The inclination angle $i$ denotes the angle between the orbital plane and a reference plane in the line of sight; $\Omega$ and $\omega$ denote the position angle of the ascending node and the periapse anomaly with respect to the ascending node, respectively. $T$ is the orbital time period, and $T_{0}$ is the epoch at which the star reaches its periapse position [@Hees:2017aal; @Bozza:2004kq].[]{data-label="tab1"}
Numerical estimation of different observables for Gravitational lensing of the star S2 by Sgr A\*
=================================================================================================
In this section we will numerically estimate the values of different observable parameters related to strong lensing for a spacetime described by Eqs. (\[intro3\])-(\[intro5\]). For this purpose, we assume that the supermassive black hole (SMBH) at the center of our galaxy (Sgr A\*) is described by the solutions of second order generalized Proca theories, and we take the star S2 as the source. This star revolves around the SMBH in a highly elliptic orbit with an orbital time period of about 15.92 years, and it has the minimum average distance from the SMBH. In early 2018 it was at its periapse position, and previous studies have shown that the magnification of the images is maximal when the star reaches periapse [@Bozza:2004kq]. This gives us a unique opportunity to observe the different lensing parameters at this time, and thus makes it possible to test different theories of gravity. In this section, we first reconstruct the lensing system using the data given in Refs. [@Hees:2017aal; @Bozza:2004kq] and then numerically calculate the different observables related to strong lensing in this scenario.
The Lensing system
------------------
The mass of the black hole at the center of our galaxy is estimated to be $4.01\times 10^{6}M_{\odot}$, and it is located at a distance of $7.8$ kpc from us [@2016ApJ...830...17B]. S2 is one of the stars with the minimum average distance from it (S-102 has an even smaller minimum average distance, but it is 16 times fainter than S2 [@S2]). As stated earlier, S2 was at its periapse position in early 2018, where one expects maximum magnification [@Bozza:2004kq], so we use it as the source for gravitational lensing. Moreover, S2 has a radius of a few solar radii, so one can treat it as a point source. Its orbital motion (along with that of other short-period stars around the SMBH) has been studied for over 20 years, mainly by two groups, one at the Keck Observatory and the other with the New Technology Telescope (NTT) and the Very Large Telescope (VLT) [@Hees:2017aal]. From those studies, we now have a precise understanding of its orbital motion; its orbital parameters are reported in Table-\[tab1\] [@Hees:2017aal; @Bozza:2004kq]. Its position ($r_{S},\gamma$) can be expressed in terms of the orbital parameters of the system [@Bozza:2004kq; @BinNun:2010se] $$\begin{aligned}
{\label{17}}
r_{S}&=\dfrac{\varrho(1-e^{2})}{1+e\cos\xi}\\r_{R}&\simeq D_{os}=7.8\,\mathrm{kpc}\\\gamma&=\arccos[\sin(\xi+\omega)\sin i]\end{aligned}$$ where $ D_{os}$ is the distance between the observer and the source, $\varrho$ is the semi-major axis, $e$ is the eccentricity, $i$ is the inclination of the normal of the orbit with respect to the line of sight, and $\omega$ is the periapse anomaly with respect to the ascending node. $\xi$ is the anomaly angle from the periapse, determined by the differential equation and initial condition $$\label{20}
\begin{aligned}
\frac{T}{2\pi}\dfrac{(1-e^{2})^{3/2}}{(1+e\cos\xi)^{2}}\dot{\xi}=1\\
\xi(T_{0})=2 \kappa \pi
\end{aligned}$$ where $T$ is the orbital time period of S2, $T_{0}$ is the epoch of periapse and $\kappa$ is any integer. We have plotted the anomaly angle as a function of time in Fig–\[fig:positio\]; the plot shows that the star reached its periapse position in early 2018.
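Eq. (\[20\]) is the true-anomaly form of Kepler's equation, so instead of integrating the ODE one can solve it algebraically. A minimal sketch with the Table-\[tab1\] parameters (the function names are ours):

```python
import math

T, e, T0 = 15.92, 0.89, 2018.37              # S2 orbital parameters (Table 1)
rho, i_deg, omega_deg = 4.54e-3, 45.7, 244.7

def anomaly(t):
    """True anomaly xi(t) solving Eq. (20), via Kepler's equation."""
    M = 2 * math.pi * (t - T0) / T                      # mean anomaly
    E = math.pi                                         # eccentric anomaly, Newton's method
    for _ in range(100):
        E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
    return 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))

def source_position(t):
    # r_S (in pc) and gamma from Eq. (17)
    xi = anomaly(t)
    r_S = rho * (1 - e**2) / (1 + e * math.cos(xi))
    gamma = math.acos(math.sin(xi + math.radians(omega_deg)) * math.sin(math.radians(i_deg)))
    return r_S, gamma

assert abs(anomaly(T0 + T / 2) - math.pi) < 1e-9        # apoapse half a period after periapse
assert abs(source_position(T0)[0] - rho * (1 - e)) < 1e-12   # periapse distance rho*(1-e)
```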
![Orbital position of S2 as a function of time. Here $\xi$ represents the anomaly angle from the periapse position. Previous studies have shown that the maximum magnification of the images is obtained when S2 is at its periapse position, i.e. when $\xi=2 \kappa \pi$ with $\kappa$ an integer [@Bozza:2004kq]. The plot indicates that this occurred in early 2018. []{data-label="fig:positio"}](s2position.eps){width="\textwidth"}
Numerical Estimation of Different Lensing Parameters
----------------------------------------------------
In this section, we present the numerical estimation of different observational parameters, taking the SMBH at the center of our galaxy as the lens and the star S2 as the source. We assume that the black hole spacetime is described by the solutions of second order generalized Proca theories presented in Eq. (\[intro5\]), and that the S2 star is at its periapse position. The radius of the photon sphere is given by the largest positive solution of the equation (see Eq. (\[2\])) $${\label{23}}
12r^{3}-36r^{2}-5P^{2}=0$$ Clearly, in the limit $P=0$, the radius of the photon sphere reduces to $r_{m}=3$, the photon circular orbit of the Schwarzschild spacetime. By solving the above equation, one can express the radius of the photon sphere as $$r_{m}=1+\frac{2}{\sqrt[3]{K1}}+\frac{\sqrt[3]{K1}}{2}$$ where $$K1\nonumber=\left[{\left(\frac{5 P^2}{3}+8\right)+\frac{\sqrt{5}}{3} \sqrt{P^2 \left(5 P^2+48\right)}}\right]$$ As stated earlier, we assumed that $A(r)$, $B(r)$ and $C(r)$ are finite and positive for $r\geq r_{m}$. However, $B(r)$ fails to remain positive at $r=r_{m}$ for $P\geq2.48$, and hence in our analysis we concentrate on the range $P<2.48$. In Table-\[tab:tab2\], we present the numerical estimates of different observational parameters, namely the angular position of the innermost image $\theta_{m}$, the angular separation $s$ between the innermost and outermost images, the relative magnification $\mathcal{R}$ of the outermost relativistic image with respect to the other images, and the time delay $\Delta T^{s}_{12}$ between the first and second relativistic images (formed on the same side of the lens). We also compare the results with those obtained for a Reissner-Nordström (RN) black hole with charge $q$, whose line element can be expressed as [@1916AnP...355..106R; @chandra] $$\label{RN}
ds^{2}=-\left(1-\frac{2M}{r}+\frac{q^{2}}{r^{2}}\right)dt^{2}+\left(1-\frac{2M}{r}+\frac{q^{2}}{r^{2}}\right)^{-1}dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)$$
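The cubic (\[23\]) for $r_{m}$ and the closed-form root quoted above can be cross-checked numerically; a minimal sketch (assuming `numpy`; the function names are ours):

```python
import numpy as np

def r_photon(P):
    # largest positive root of 12 r^3 - 36 r^2 - 5 P^2 = 0
    roots = np.roots([12.0, -36.0, 0.0, -5.0 * P**2])
    real = roots[np.isreal(roots)].real
    return real[real > 0].max()

def r_photon_closed(P):
    # the closed-form expression quoted in the text
    K1 = (5 * P**2 / 3 + 8) + (np.sqrt(5) / 3) * np.sqrt(P**2 * (5 * P**2 + 48))
    return 1 + 2 / np.cbrt(K1) + np.cbrt(K1) / 2

assert abs(r_photon(0.0) - 3.0) < 1e-8                    # Schwarzschild limit
assert abs(r_photon(1.5) - r_photon_closed(1.5)) < 1e-8   # closed form matches the root
```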
  -------- ---------- --------- ---------- ---------- ---------- --------- ---------- ---------
  $hair$       $\theta_{m}$           $s$                 $\mathcal{R}$     $\Delta T_{12}^{s}$
            Proca BH   RN BH    Proca BH    RN BH     Proca BH    RN BH    Proca BH    RN BH
  0         19.0033    19.0033   0.182214   0.182214   15.708     15.708    32.6484    32.6484
  0.3       19.0191    18.713    0.185641   0.188735   15.5854    15.5434   32.6755    32.1498
  0.6       19.066     17.7691   0.196129   0.214982   15.2132    14.934    32.7563    30.5281
  0.9       19.143     15.7962   0.214176   0.314308   14.5764    13.0677   32.8885    27.1385
  -------- ---------- --------- ---------- ---------- ---------- --------- ---------- ---------
: Numerical estimates of the observables related to strong lensing ($\theta_{m}$, $s$, $\mathcal{R}$, $\Delta T_{12}^{s}$). The values obtained for generalized Proca black holes (Proca BH) are compared with those obtained for Reissner-Nordström black holes (RN BH). The parameter ‘$hair$’ corresponds to the Proca hair $P$ in the case of the Proca BH and to the charge hair $q$ in the case of the RN black hole; the $hair=0$ case corresponds to the Schwarzschild black hole. The SMBH at the center of our galaxy is taken as the lens and the star S2 as the source, and the observables are calculated at the epoch of periapse of S2 (early 2018).[]{data-label="tab:tab2"}
![Variation of the different observables: (a) angular position of the innermost image $\theta_{m}$ (top-left), (b) angular separation $s$ between the outermost and innermost images (top-right), (c) flux ratio $\mathcal{R}$ of the outermost image with respect to the others (bottom-left), and (d) time delay $\Delta T_{12}^{s}$ between the first and second relativistic images formed on the same side of the lens (bottom-right), as functions of the hair parameter. The parameter ‘$hair$’ corresponds to the Proca hair $P$ for generalized Proca black holes and to the charge hair $q$ for Reissner-Nordström black holes. The solid red lines indicate the behavior of the observables as a function of the Proca hair $P$, whereas the blue dashed lines indicate their variation as a function of the charge hair $q$.[]{data-label="observable"}](nonrotatingtheta.eps){width="\textwidth"}
![(b) Angular separation $s$ between the outermost and innermost images as a function of the hair parameter (top-right panel).[]{data-label="observable"}](nons.eps){width="\textwidth"}
![(c) Flux ratio $\mathcal{R}$ of the outermost image with respect to the others as a function of the hair parameter (bottom-left panel).[]{data-label="observable"}](nonrotatingflux.eps){width="\textwidth"}
![(d) Time delay $\Delta T_{12}^{s}$ between the first and second relativistic images as a function of the hair parameter (bottom-right panel).[]{data-label="observable"}](nonrotatingtimedelay.eps){width="\textwidth"}
In Table-\[tab:tab2\], ‘$hair$’ corresponds to the Proca hair $P$ in the case of generalized Proca black holes (Eq. (\[intro5\])) and to the charge hair $q$ in the case of Reissner-Nordström (RN) black holes (Eq. (\[RN\])). We have also plotted the observables as functions of the hair parameter for these two black hole spacetimes in Fig-\[observable\]. From Table-\[tab:tab2\], we find that for generalized Proca black holes the angular position of the innermost image $\theta_{m}$ increases as the $hair$-parameter increases, contrary to the RN case. This means that the innermost Einstein ring is bigger for the generalized Proca black hole spacetime than for the RN spacetime at the same value of the hair parameter. The angular separation $s$ increases with the $hair$-parameter, similar to the RN case, while the relative flux $\mathcal{R}$ decreases with increasing $hair$-parameter. From Table-\[tab:tab2\], one can also see that $\Delta T_{12}^{s}$ increases with the $hair$-parameter for generalized Proca black holes, again contrary to the RN case. Note that among the static, spherically symmetric spacetimes predicted by General Relativity, the size of the innermost Einstein ring $\theta_{m}$ (or the time delay $\Delta T_{12}^{s}$ between the first and second relativistic images) is maximal for $q=0$ (the Schwarzschild black hole). So any value of $\theta_{m}$ (or $\Delta T_{12}^{s}$) greater than the one predicted in the Schwarzschild spacetime implies the existence of Proca hair. Thus, by measuring the size of the innermost Einstein ring (or the time delay between the first and second relativistic images) in a static, spherically symmetric spacetime, one can observationally test the “no-hair theorem” [@nohair].
Now, in order to probe the Proca hair, one has to observationally measure both the position of the innermost image $\theta_{m}$ and the angular separation $s$. From Table-\[tab:tab2\], we can see that the angular separation between the images is $\sim \mathcal{O}(10^{-1})$ $\mu$arcsec, which is too small to detect with present technologies.
Before studying the Proca black hole further, we want to discuss an important issue regarding the validity of our estimates of the different astrophysical parameters. As mentioned in section \[sec2\], the solution (\[intro5\]) for the Proca black hole that we consider in our study is not an exact solution of the Einstein equations, but an approximate analytical solution which agrees well with the full numerical solution up to order $\mathcal{O}(1/r^{3})$. While this is adequate for regions away from the black hole horizon, the analytical approximation may break down in the near-horizon region. To see how much this affects the numerical estimates of the different observables, we consider the solution up to order $\mathcal{O}(1/r^{4})$ (the next order) [@Heisenberg:2017hwb], calculate the different observables, and study the percentage deviations from the corresponding values obtained with the solution up to order $\mathcal{O}(1/r^{3})$. The result is shown in Table-\[table3\]. As one can see, in most cases the deviation is around $1\%$ or less, except for the parameter $s$ at high values of the Proca hair, where the deviation is around $3\%$. Hence, as long as the errors in future observational estimates of these parameters are larger than these percentage deviations, our results are reliable.\
  ------------ -------------- ---------- --------------- ---------------------
   Proca hair   $\theta_{m}$      $s$     $\mathcal{R}$   $\Delta T_{12}^{s}$
   0.3          0.0550516      0.444458   0.111209        0.0550516
   0.6          0.214403       1.70292    0.447742        0.214403
   0.9          0.4623         3.57128    1.03114         0.4623
  ------------ -------------- ---------- --------------- ---------------------
: Percentage modification of the observables related to strong lensing ($\theta_{m}$, $s$, $\mathcal{R}$, $\Delta T_{12}^{s}$) for the metric (\[intro5\]) when the contribution from the next-to-leading order ($\mathcal{O}(1/r^{4})$) is taken into account, relative to the estimates of Table-\[tab:tab2\], which keep the metric components up to order $\mathcal{O}(1/r^{3})$ only. As one can see, in most cases the deviation is below $1\%$. []{data-label="table3"}
Lensing of Rotating Proca Black holes
=====================================
In the previous section, we studied the bending of light trajectories in the presence of a static and spherically symmetric black hole. However, several observations indicate that the supermassive black hole at the center of our galaxy possesses angular momentum [@spinSgr], so from an observational perspective it is important to consider the lensing effect for a rotating black hole. Moreover, the spacetime geometry is much richer in this case, so from a purely theoretical point of view we can also expect interesting results to emerge when we consider strong lensing around a rotating black hole. Indeed, previous studies have shown that for a rotating black hole the caustic points are no longer aligned with the optic axis, but are shifted according to the rotation of the black hole and acquire a finite extension [@Bozza:2005tg]. In this section, we discuss the gravitational lensing effect for a more general rotating black hole. First we obtain the metric of a rotating black hole in generalized Proca theories using the Newman-Janis algorithm, and then we study the null trajectories in this spacetime.
Null Geodesic equation and Photon Trajectory
--------------------------------------------
Applying the Newman-Janis algorithm [@NJ1] to the metric (\[intro5\]) and retaining only terms up to order $\mathcal{O}(\frac{P^{2}}{r^{3}})$, we find the stationary, axisymmetric solution of Einstein’s field equations, which in Boyer-Lindquist coordinates $(t,r,\vartheta,\phi)$ [@BL] can be written as $$\begin{aligned}
{\label{101}}
ds^{2}&=&-\left[1-\frac{2r}{\rho^{2}}-\frac{P^{2}}{6\rho^{2}r}\right]dt^{2}-\dfrac{4a \sin^{2}\vartheta}{\rho^{2}}\left[r+\frac{P^{2}}{8}+\frac{P^{2}}{6r}\right]dt d\phi +\dfrac{\rho^{2}}{\Delta}dr^{2}+\rho^{2}d\vartheta^{2}+ \nonumber\\
&&\left[r^{2}+a^{2}+\dfrac{2ra^{2}\sin^{2}\vartheta}{\rho^{2}}+\dfrac{ P^{2}a^{2}\sin^{2}\vartheta}{2\rho^{2}}\left(1+\frac{1}{r}\right)\right]\sin^{2}\vartheta d\phi^{2}
\end{aligned}$$ where $a=L/M^{2}$, where $L$ and $M$ denotes the angular momentum and mass of the black hole respectively. The Proca field in this case is turned out to be $$\label{Proca}
\mathcal{A}_{\mu}=\left(\tilde{\mathcal{A}_{0}},-\frac{\tilde{\mathcal{A}}_{0}\rho^{2}}{\Delta}\frac{1}{\sqrt{\tilde{A}\tilde{B}}}+\left(\tilde{\mathcal{A}}_{0}+\tilde{\mathcal{A}}_{1}\sqrt{\frac{\tilde{B}}{\tilde{A}}}\right)\left(1-\frac{a^{2}\sin^{2}\theta}{\Delta}\right),0,\tilde{\mathcal{A}}_{1}\sqrt{\frac{\tilde{B}}{\tilde{A}}}a\sin^{2}\theta\right)$$ where a tilde denotes components after complexification. We have checked that Eqs. (\[101\]) and (\[Proca\]) together satisfy Einstein’s equations for an axisymmetric rotating Proca black hole up to order $\mathcal{O}(1/r^{4})$. Since our original non-rotating black hole solution is valid up to order $\mathcal{O}(1/r^{3})$, we can safely take the metric (\[101\]) as describing a rotating Proca black hole in the following study.
We can also identify the quantity $a$ in Eq. (\[101\]) as the specific angular momentum of the black hole. The functions $\rho$ and $\Delta$ are given by $$\begin{aligned}
{\label{102}}
\rho^{2}&=r^{2}+a^{2}\cos^{2}\vartheta\\\Delta&=r^{2}+a^{2}-2r-\frac{P^{2}}{2}\left(1+\frac{1}{r}\right)
\end{aligned}$$ Here also, we have set $r=r/M$ and $P=P/M_{Pl}$. In the limit of vanishing Proca hair, $i.e.$ $P\to 0$, the solution coincides with the Kerr black hole. The horizon of the black hole is the surface where $\Delta=0$, and the outer horizon is determined by the largest positive solution of this equation, which in this case turns out to be $${\label{103}}
r_{H}=\dfrac{2}{3}-\dfrac{-16+12a^{2}-6P^{2}}{3\sqrt[3]{4}\sqrt[3]{4 \sqrt{K}-144 a^2+180 P^2+128}}+\dfrac{\sqrt[3]{4 \sqrt{K}-144 a^2+180 P^2+128}}{6\sqrt[3]{2}}$$ where the function $K=2 \left(6 a^2-3 P^2-8\right)^3+\left(-36 a^2+45 P^2+32\right)^2$. It is easy to see that in the limit $a,P\to 0$ the radius of the horizon reduces to $r_{H}=2$, as expected for the Schwarzschild case.
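Equivalently, clearing $\Delta=0$ of its $1/r$ term gives the cubic $r^{3}-2r^{2}+(a^{2}-P^{2}/2)r-P^{2}/2=0$, whose largest real root can be found numerically without the closed form; a minimal sketch (assuming `numpy`):

```python
import numpy as np

def r_H(a, P):
    # largest real root of r*Delta(r) = r^3 - 2r^2 + (a^2 - P^2/2) r - P^2/2 = 0
    roots = np.roots([1.0, -2.0, a**2 - P**2 / 2, -P**2 / 2])
    return roots[np.isreal(roots)].real.max()

assert abs(r_H(0.0, 0.0) - 2.0) < 1e-8                       # Schwarzschild
assert abs(r_H(0.5, 0.0) - (1 + np.sqrt(1 - 0.25))) < 1e-8   # Kerr, a = 0.5
```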
The null geodesic equations can be obtained using the Hamilton-Jacobi equation [@chandra]. For the metric (\[101\]), the relevant geodesic equations are given by $$\begin{aligned}
\rho^{2}\dot{r}&=\sqrt{{R}(r)}{\label{104}}\\\rho^{2}\dot{\vartheta}&=\sqrt{\Theta(\vartheta)}{\label{105}}\\\rho^{2}\dot{\phi}&=\frac{a}{\Delta}\left(r^{2}+a^{2}-aJ\right)+\dfrac{P^{2}a}{\Delta}\left(\frac{1}{2}+\frac{1}{3r}\right)+\left(J\sin^{-2}\vartheta-a\right){\label{106}}
\end{aligned}$$ where $$\begin{aligned}
{R}(r)&=\left(r^{2}+a^{2}-aJ\right)^{2}-P^{2}a\left(\frac{1}{2}+\frac{1}{3r}\right)-\Delta\left(Q+(J-a)^{2}\right){\label{107}}\\\Theta(\vartheta)&=Q-\left(J^{2}\sin^{-2}\vartheta-a^{2}\right)\cos^{2}\vartheta{\label{108}}
\end{aligned}$$ In Eqs. (\[104\])-(\[106\]), the dot denotes a derivative with respect to an affine parameter $\lambda$. In Eqs. (\[107\]) and (\[108\]), $Q$ denotes a separation constant called the Carter constant, and $J$ is the angular momentum of the photon with respect to the axis of the black hole. In our analysis, we have set $p_{t}=-E=-1$ by a suitable choice of the affine parameter.
Since we are interested in studying photon trajectories in an isolated black hole spacetime, we can ignore the effect of other celestial bodies and approximate the spacetime at large distances from the black hole as flat Minkowski spacetime. We assume that both the source and the observer are situated at a large distance from the black hole. The lensing problem can then be formulated as follows: the initial photon trajectory starts off as a straight line; if there were no black hole, it would continue along this straight line, but because of the black hole its path gets deflected near it, and it finally approaches the observer along this deflected path. In this scenario, we can relate the constants of motion $Q$ and $J$ to a set of geometric quantities $(\psi_{R},u,h)$ [@Bozza:2002af]. Here the inclination angle $\psi_{R}$ denotes the angle between the initial photon trajectory and the equatorial plane; the projected impact parameter $u$ is the minimum distance from the origin of the projection of the photon trajectory onto the equatorial plane in the absence of the black hole; and $h$ is the height of the light-ray trajectory above the equatorial plane at $u$.
Let $(\alpha,\beta)$ denote the celestial coordinates of the image as seen by an observer at $(r_{R},\vartheta_{R})$ in Boyer-Lindquist coordinates. The coordinates $\alpha$ and $\beta$ represent the apparent perpendicular distance of the image from the axis of symmetry and from its projection on the equatorial plane, respectively [@chandra]. Taking into account that the observer is situated far away from the black hole and using Eq. (\[104\]-\[108\]), we can express $\alpha$ and $\beta$ as [@chandra; @Gyulchev:2006zg] $$\begin{aligned}
\alpha&=-r_{R}^{2}\sin\vartheta_{R}\dfrac{d\phi}{dr}\Biggm|_{r_{R}\to \infty}=\frac{J}{\sin\vartheta_{R}}\label{Sh1}\\\beta&=r_{R}^{2}\dfrac{d\vartheta}{dr}\Biggm|_{r_{R}\to \infty}=h\sin\vartheta_{R}\label{Sh2}
\end{aligned}$$ Taking the asymptotic limit $\vartheta_{R}=\pi/2-\psi_{R}$ and $\alpha=u$, we can finally express the constants of motion in terms of geometric parameters of the incoming ray as $$\begin{aligned}
J&\approx u \cos\psi_{R}{\label{109}}\\Q&\approx h^{2}\cos^{2}\psi_{R}+(u^{2}-a^{2})\sin^{2}\psi_{R}{\label{110}}
\end{aligned}$$
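A minimal Python sketch of Eqs. (\[109\])-(\[110\]); the function name and the sample numbers are illustrative only:

```python
import math

def constants_of_motion(psi_R, u, h, a):
    """Eq. (109)-(110): constants of motion from the geometric ray parameters.
    psi_R: inclination of the incoming ray, u: projected impact parameter,
    h: height of the ray above the equatorial plane, a: black hole spin."""
    J = u * math.cos(psi_R)
    Q = h**2 * math.cos(psi_R)**2 + (u**2 - a**2) * math.sin(psi_R)**2
    return J, Q

# an equatorial ray (psi_R = 0, h = 0) reduces to J = u and Q = 0
J, Q = constants_of_motion(0.0, 10.0, 0.0, 0.9)
print(J, Q)   # -> 10.0 0.0
```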
Black Hole Shadow Analysis
==========================
In this section, we will describe the shadow of a rotating black hole. For a non-rotating black hole, the shadow is just a black circular disc in the observable sky, with a radius that corresponds to the position of the photon sphere. As we will see, things become more interesting in the case of a rotating black hole.
We will use the celestial coordinates $(\alpha,\beta)$ given in Eq. (\[Sh1\])-(\[Sh2\]) to describe the shadow. For simplicity, let us assume that the observer is situated in the equatorial plane. Then, using Eq. (\[Sh1\])-(\[110\]), it is easy to check that photons arriving from a generic point $(\alpha/r_{R},\beta/r_{R})$ can be characterized by $J = -\alpha$ and $Q=\beta^{2}$. In our calculation we have taken positive angular momentum $J$ to correspond to counterclockwise winding of the light rays as seen from above. So for $a>0$ the photons rotate in the same direction as the black hole (prograde/direct photons), while for $a<0$ they rotate in the opposite direction (retrograde photons). One can visualize the shape of the black hole shadow by plotting $\beta$ vs $\alpha$.
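As a sanity check of the $a=0$ limit, the radius of the circular shadow equals the critical impact parameter of the photon sphere. The following Python sketch recovers the Schwarzschild value $b_{c}=3\sqrt{3}M$ (with $M=1$, $P=0$) by locating the minimum of $b(r)=r/\sqrt{1-2M/r}$; the crude grid scan is for illustration only.

```python
import math

def impact_parameter(r, M=1.0):
    """Impact parameter of a photon with turning point r in Schwarzschild (M=1)."""
    return r / math.sqrt(1.0 - 2.0*M/r)

# locate the photon sphere as the minimum of b(r) over a grid outside the horizon
rs = [2.01 + 0.0001*i for i in range(50000)]
r_ph = min(rs, key=impact_parameter)
b_c = impact_parameter(r_ph)
print(r_ph, b_c)   # -> ~3.0, ~5.196 (= 3*sqrt(3))
```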
![Shadow cast by the rotating black hole in generalized Proca theories given by the metric (\[101\]) for different values of $a$ and $P$, as seen by an observer in the equatorial plane. The shadow region corresponds to the inside of each dashed curve. The case $a=0$ gives the shadow of a non-rotating black hole. An increase of $a$ causes a deformation of the black hole shadow. []{data-label="Shgola"}](shadow00.eps){width="\textwidth"}
![Shadow cast by the rotating black hole in generalized Proca theories given by the metric (\[101\]) for different values of $a$ and $P$, as seen by an observer in the equatorial plane. The shadow region corresponds to the inside of each dashed curve. The case $a=0$ gives the shadow of a non-rotating black hole. An increase of $a$ causes a deformation of the black hole shadow. []{data-label="Shgola"}](Shadow03.eps){width="\textwidth"}
![Shadow cast by the rotating black hole in generalized Proca theories given by the metric (\[101\]) for different values of $a$ and $P$, as seen by an observer in the equatorial plane. The shadow region corresponds to the inside of each dashed curve. The case $a=0$ gives the shadow of a non-rotating black hole. An increase of $a$ causes a deformation of the black hole shadow. []{data-label="Shgola"}](shadow06.eps){width="\textwidth"}
![Shadow cast by the rotating black hole in generalized Proca theories given by the metric (\[101\]) for different values of $a$ and $P$, as seen by an observer in the equatorial plane. The shadow region corresponds to the inside of each dashed curve. The case $a=0$ gives the shadow of a non-rotating black hole. An increase of $a$ causes a deformation of the black hole shadow. []{data-label="Shgola"}](shadow09.eps){width="\textwidth"}
In Fig-\[Shgola\], we have plotted the shadows cast by a black hole described by the metric (\[101\]) for different values of $a$ and $P$. For the non-rotating case ($a=0$), the shadow of the black hole is just a circular disc. When the spin of the black hole is non-vanishing, the shadow shape gets slightly distorted and displaced to the right. Physically, this means that the prograde photons (photons coming from the left side as seen by the observer) are allowed to get closer to the black hole, while the retrograde photons (photons coming from the right side as seen by the observer) are kept farther away.
Following Ref. [@Hioki:2009na], we define the observables for the black hole shadow as the radius $R_{s}$ of a reference circle and the distortion parameter $\delta_{s}$. We will consider a reference circle that passes through three points of the shadow: the top $(\alpha_{t},\beta_{t})$, the bottom $(\alpha_{b},\beta_{b})$ and the point corresponding to the unstable retrograde circular orbit $(\alpha_{r},0)$. The distortion parameter $\delta_{s}$ is the ratio of the difference between the endpoint of the circle $(\bar{\alpha}_{p},0)$ and the point corresponding to the prograde circular orbit $(\alpha_{p},0)$ (both of them on the side opposite to the point $(\alpha_{r},0)$) to the radius of the reference circle [@Amarilla:2011fx]. Typically, $R_{s}$ gives the approximate size of the black hole shadow, while $\delta_{s}$ is a measure of its deformation with respect to the reference circle. For an equatorial observer, the observables take the form $$\begin{aligned}
R_{s}&=\dfrac{(\alpha_{t}-\alpha_{r})^{2}+\beta_{t}^{2}}{2|\alpha_{t}-\alpha_{r}|}{\label{Sh3}}\\\delta_{s}&=\dfrac{\bar{\alpha}_{p}-\alpha_{p}}{R_{s}}{\label{Sh4}}
\end{aligned}$$ By measuring these two observables, one can predict the black hole parameters very accurately. A simple way to extract information about the parameters $a$ and $P$ is to plot the contour curves of constant $R_{s}$ and $\delta_{s}$ in the $(a,P)$ plane [@Amarilla:2013sj]. In Fig–\[ap\], we show such a contour plot. If we can obtain the values of $R_{s}$ and $\delta_{s}$ very accurately from observations, the point where the associated contours intersect gives the corresponding values of $a$ and $P$.
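The observables of Eq. (\[Sh3\])-(\[Sh4\]) are straightforward to evaluate once the boundary points are extracted from a computed shadow curve. A Python sketch follows; the sample points describe an undistorted circle, so $\delta_{s}=0$ by construction:

```python
def shadow_observables(alpha_t, beta_t, alpha_r, alpha_p, alpha_p_bar):
    """Eq. (Sh3)-(Sh4): reference-circle radius R_s and distortion delta_s
    from the top point (alpha_t, beta_t), the retrograde point (alpha_r, 0),
    the prograde edge of the shadow (alpha_p, 0) and of the circle (alpha_p_bar, 0)."""
    R_s = ((alpha_t - alpha_r)**2 + beta_t**2) / (2.0 * abs(alpha_t - alpha_r))
    delta_s = (alpha_p_bar - alpha_p) / R_s
    return R_s, delta_s

# undistorted (a = 0) shadow: a circle of radius 5.196 centred at the origin,
# whose prograde edge coincides with the reference circle
R_s, d_s = shadow_observables(0.0, 5.196, -5.196, 5.196, 5.196)
print(R_s, d_s)   # -> 5.196 0.0
```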
![The contour plot of constant $R_{s}$ (green solid lines) and $\delta_{s}$ (red dashed lines) curves in the $(a,P)$ plane. The intersection of the curves corresponding to the constant $R_{s}$ and $\delta_{s}$ obtained from observation gives the values of $a$ and $P$ of the black hole.[]{data-label="ap"}](shadowcontour.eps){width=".50\textwidth"}
Gravitational lensing by a rotating black hole in the strong field limit
=====================================================================
In this section, we briefly review the main concepts and the observables related to strong lensing in a stationary, axisymmetric spacetime, following the methods developed by Bozza [@Bozza:2002af]. Throughout our discussion, we consider that both the source and the observer are situated very far away from the black hole. For the sake of simplicity, we restrict our attention to trajectories that are very close to the equatorial plane. The advantage of considering such a scenario is that the angular positions of the images can still be described by those obtained in the equatorial plane, but one can now understand the problem at a deeper level, since the magnification of the images can be calculated from the two-dimensional lens equation.\
We formulate the lensing problem as follows: we consider that the observer and the source are situated at heights $h_{R}$ and $h_{S}$ from the equatorial plane, respectively. A photon with impact parameter $u$, incoming from the source situated at $r_{S}$, approaches the black hole to a minimum distance $r_{0}$ and then deviates away from it. An observer at $r_{R}$ receives the photon. We want to find the angular positions and the magnifications of the images. In order to do so, we first restrict our attention to light rays in the equatorial plane by setting $\vartheta=\pi/2$, or equivalently by taking $\psi=\pi/2-\vartheta=0$ and $h=0$. Substituting these conditions into Eq. (\[101\]), we get a reduced metric of the form $${\label{111}}
ds^{2}=-A(r)dt^{2}+B(r)dr^{2}+C(r)d\phi^{2}-D(r)dt d\phi$$ In order to study the photon trajectory in a stationary spacetime, we assume that the equation [@Gyulchev:2006zg] $$\label{123}
(A_{0}C'_{0}-A_{0}'C_{0})^{2}=(A_{0}'D_{0}-A_{0}D'_{0})(C_{0}'D_{0}-C_{0} D'_{0})$$ admits at least one positive solution; the largest positive root of this equation is defined as the radius of the photon sphere, $r_{m}$. Here the subscript ‘$0$’ indicates functions evaluated at the closest approach distance $r_{0}$. Note that, when we put $D_{0}=0$, this equation coincides with the condition for the photon sphere in the static case given by Eq. (\[2\]). In the strong field limit, we consider only those photons whose closest approach distance $r_{0}$ is very near $r_{m}$, and hence the deflection angle $\alpha$ can be expanded around the photon sphere $r_{m}$, or equivalently around the minimum impact parameter $u_{m}$. When the closest approach distance $r_{0}$ is greater than $r_{m}$, the photon simply gets deflected (it may complete several loops around the black hole before reaching the observer). When $r_{0}$ reaches the critical value $r_{m}$ (or $u=u_{m}$), $\alpha$ diverges and the photon gets captured. Using the same method as in the static case, we can express the total deflection angle as follows [@Bozza:2002af] $$\label{124}
\alpha_{f}(\theta)=-\bar{a}_{rot}\log\left(\dfrac{\theta}{\theta_{m}}-1\right)+\bar{b}_{rot}$$ where $u_{m}$ is the impact parameter evaluated at $r_{m}$. The parameters $\bar{a}_{rot}$, $\bar{b}_{rot}$ are the strong field coefficients for the rotating metric in the equatorial plane (for explicit expressions of $\bar{a}_{rot}$, $\bar{b}_{rot}$, see Eq. (34-35) of Ref. [@Bozza:2002af]). $\theta$ denotes the angular position of the image. From Eq. (\[124\]), we can see that the deflection angle diverges at $\theta=\theta_{m}=u_{m}/r_{R}$, which represents the position of the innermost image. Once the deflection angle is known, we can obtain the angular positions of the different images using the lens equation (\[8\]). In the simplest situation, one can express the angular position of the $n$-th order image as [@Bozza:2002af] $$\label{129}
\theta_{n}=\theta_{n}^{0}\left[1-\dfrac{u_{m}e_{n}}{\bar{a}_{rot}}\left(\dfrac{r_{R}+r_{S}}{r_{R}r_{S}}\right)\right]$$ where, $$\begin{aligned}
& &{\theta}_{n}^{0}=\theta_{m}(1+e_{n})\,,\qquad
{e}_{n}=\exp\left[{\dfrac{\bar{b}_{rot}+\gamma-2n\pi}{\bar{a}_{rot}}}\right]\,. \nonumber \end{aligned}$$ Now we turn our attention to trajectories that are very close to the equatorial plane, i.e., trajectories with a very small declination angle $\psi=\pi/2-\vartheta$. With this condition, and assuming that the height $h$ of the light-ray trajectory above the equatorial plane is small compared to the projected impact parameter $u$, it is easy to show that the inclination angle $\psi_{R}\approx h/u$. The constants of motion given in Eq. (\[109\])-(\[110\]) can then be written as $$\begin{aligned}
\label{130}
& &{J}\approx u\,,\qquad
{Q}\approx h^{2}+\bar{u}^{2}\psi_{R}^{2}\,.
\end{aligned}$$
where $\bar{u}=\sqrt{u^{2}-a^{2}}$. Moreover, we expect the declination angle $\psi$ to remain small (of the order of $\psi_{R}$) during the motion. The small-declination condition for the photon trajectory readily implies that $(h_{R},h_{S})\ll u\ll (r_{R},r_{S})$. If we neglect higher order terms, the polar lens equation can be written as [@Bozza:2002af] $$\label{140}
h_{S}=h_{R}\left[\frac{r_{R}}{\bar{u}}\sin\bar{\phi}_{f}-\cos\bar{\phi}_{f}\right]-\psi_{R}\left[(r_{R}+r_{S})\cos\bar{\phi}_{f}-\dfrac{r_{R}r_{S}}{\bar{u}}\sin\bar{\phi}_{f}\right]$$ where $$\label{137}
\bar{\phi}_{f}(\theta)=-\hat{a}_{rot}\ln\left(\dfrac{\theta}{\theta_{m}}-1\right)+\hat{b}_{rot}$$ Here, $\hat{a}_{rot}$ and $\hat{b}_{rot}$ denote two numerical parameters (for more details see Eq. (52-53) of Ref. [@Bozza:2002af]). Eq. (\[140\]), together with Eq. (\[8\]), represents the two-dimensional lensing equation. Using these two equations, one gets the magnification of the $n$-th image as $$\label{147}
\mu_{n}=\dfrac{(r_{R}+r_{S})^{2}}{(r_{R}r_{S})}\left(\frac{\bar{\mu}(a)}{\mathcal{K}(\gamma)}\right)$$ where $$\begin{aligned}
\label{148}
& &{\bar{\mu}}(a)=\dfrac{\bar{u}_{m}(a)u_{m}(a)e_{\gamma}}{\hat{a}_{rot}(a)}\,,\qquad
{e}_{\gamma}=\exp{(\dfrac{\hat{b}_{rot}+\gamma}{\hat{a}_{rot}})} \,,\qquad
{\mathcal{K}}(\gamma)=r_{R}r_{S}\sin{\bar{\phi}_{f,n}}-\bar{u}_{m}(r_{S}+r_{R})\cos{\bar{\phi}_{f,n}}\,.
\end{aligned}$$ where ${\bar{\phi}}_{f,n}$ is the phase of the $n$-th order image given by Eq. (\[137\]), with $\theta_{n}$ the solution of Eq. (\[129\]). Note that $\mu_{n}$ diverges when $\mathcal{K}(\gamma)$ vanishes. This condition gives the positions of the caustic points, which are formally defined as the positions of the source for which one gets infinite magnification of the images.
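The caustic condition $\mathcal{K}(\gamma)=0$ in Eq. (\[148\]) reduces to $\tan\bar{\phi}_{f}=\bar{u}_{m}(r_{R}+r_{S})/(r_{R}r_{S})$, up to multiples of $\pi$. A Python sketch with purely illustrative distances and critical impact parameter:

```python
import math

def K(phi_f, r_R, r_S, u_bar_m):
    """K(gamma) of Eq. (148); the magnification mu_n diverges where it vanishes."""
    return r_R * r_S * math.sin(phi_f) - u_bar_m * (r_S + r_R) * math.cos(phi_f)

# illustrative numbers only (not fitted to any physical lens configuration)
r_R, r_S, u_bar_m = 100.0, 100.0, 5.0

# first caustic phase from tan(phi) = u_bar_m (r_R + r_S) / (r_R r_S)
phi_caustic = math.atan(u_bar_m * (r_R + r_S) / (r_R * r_S))
print(phi_caustic, K(phi_caustic, r_R, r_S, u_bar_m))   # K vanishes at phi_caustic
```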
Time Delay between Different Images in Stationary space-time
============================================================
In this section we extend our study of the time delay effect to a rotating black hole spacetime. As stated earlier, the different images are formed by photons following different trajectories; the time taken by these photons is not the same, and hence there is a time delay between the images. Bozza [@Bozza:2003cp] solved this problem using the same method used to find the deflection angle, and showed that the leading term in the time delay between the $m$-th and $n$-th order images can be expressed as [@Bozza:2003cp] $$\label{Td6}
\Delta T_{mn}^{s}=2\pi (n-m)\frac{\widetilde{a}_{rot}(a)}{\bar{a}_{rot}(a)}$$ when both images are formed on the same side of the black hole. We can see that the time delay for direct photons ($a>0$) and retrograde photons ($a<0$) will be different.
When the images are formed on opposite sides of the lens, the time delay between the $m$-th and $n$-th order images can be expressed as [@Bozza:2003cp] $$\label{Td7}
\Delta T_{mn}^{o}=\frac{\widetilde{a}_{rot}(a)}{\bar{a}_{rot}(a)}\left(2\pi n+\gamma-\bar{b}_{rot}(a)\right)+\widetilde{b}_{rot}(a)-\frac{\widetilde{a}_{rot}(-a)}{\bar{a}_{rot}(-a)}\left(2\pi n-\gamma-\bar{b}_{rot}(-a)\right)-\widetilde{b}_{rot}(-a)$$ The extra contribution comes from the fact that the coefficients $\bar{b}_{rot}$ and $\widetilde{b}_{rot}$ are not the same for direct and retrograde photons in the stationary case. The functional forms of $\widetilde{a}_{rot}(a)$ and $\widetilde{b}_{rot}(a)$ are given in Eq. (35)-(36) of Ref. [@Bozza:2003cp].
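The same-side delay of Eq. (\[Td6\]) is a one-line formula; the snippet below is a sketch with illustrative coefficients. For a spherically symmetric lens, each extra winding adds a path length of roughly $2\pi u_{m}$, so with the Schwarzschild value $u_{m}=3\sqrt{3}$ (in units of $M$) consecutive same-side images are delayed by about $2\pi\cdot 3\sqrt{3}$.

```python
import math

def time_delay_same_side(n, m, a_tilde, a_bar):
    """Eq. (Td6): leading-order time delay between the m-th and n-th order
    images formed on the same side of the lens."""
    return 2.0 * math.pi * (n - m) * a_tilde / a_bar

# illustrative: for a Schwarzschild-like lens a_tilde/a_bar ~ u_m = 3*sqrt(3)
dT = time_delay_same_side(2, 1, a_tilde=3.0*math.sqrt(3.0), a_bar=1.0)
print(dT)   # -> ~32.648 (in units of the lens time-scale)
```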
-- ------ ---------- --------- ----------- ----------- ---------- --------- ---------- ---------
   $a$    $\theta_{m}$         $s$                     $\mathcal{R}$        $\Delta T_{12}^{s}$
          Proca BH   KN BH     Proca BH    KN BH       Proca BH   KN BH     Proca BH   KN BH
0.0 19.0033 19.0033 0.0237825 0.0237825 15.708 15.708 32.6484 32.6484
0.2 19.0103 18.8756 0.0241503 0.0243711 15.6536 15.6367 32.6605 32.429
0.4 19.0313 18.4796 0.0252867 0.0263996 15.4896 15.4039 32.6966 31.7488
0.6 19.066 17.7691 0.0272943 0.0310206 15.2132 14.934 32.7563 30.5281
0.0 18.2607 18.2607 0.0295151 0.0295151 15.708 15.708 31.3726 31.3726
0.2 18.2666 18.1332 0.0300417 0.0302463 15.6413 15.6371 31.3829 31.1537
0.4 18.2845 17.7382 0.031675 0.0327568 15.4398 15.4058 31.4135 30.475
0.6 18.314 17.0297 0.0345836 0.0384271 14.0986 14.9395 31.4642 29.2577
0.0 17.4932 17.4932 0.0371839 0.0371839 15.708 15.708 30.054 30.054
0.2 17.4978 17.3667 0.0379561 0.0380791 15.6247 15.6388 30.0619 29.8367
0.4 17.5115 16.9749 0.0403633 0.0411289 15.3721 15.4133 30.0855 29.1635
0.6 17.5343 16.2738 0.0446884 0.0478844 15.9415 14.9618 30.1247 27.9591
0.0 16.6958 16.6958 0.0476924 0.0476924 15.708 15.708 28.684 28.684
0.2 16.6986 16.5713 0.048857 0.048752 15.6018 15.6427 28.6889 28.4701
0.4 16.707 16.1864 0.0525067 0.0523042 15.2765 15.4316 28.7033 27.8089
0.6 16.7209 15.5015 0.0591226 0.0598653 14.7165 15.0155 28.7272 26.6322
-- ------ ---------- --------- ----------- ----------- ---------- --------- ---------- ---------
: Numerical estimates of the observables related to strong lensing ($\theta_{m}$, $s$, $\mathcal{R}$, $\Delta T_{12}^{s}$) by a rotating black hole. The values of the observables obtained for rotating black holes in generalized Proca theories (Proca BH) are compared to those obtained for Kerr-Newman black holes (KN BH). Here the parameter ‘$hair$’ corresponds to the Proca hair $P$ in the case of the rotating Proca BH and to the charge hair $q$ in the case of KN black holes. Note that the $hair=0$ case corresponds to the Kerr black hole. The SMBH at the center of our galaxy is taken as the lens. We have assumed that the lensing system is highly aligned and that both the source and the observer are situated at infinity. The time delay between the first and second relativistic images $\Delta T_{12}^{s}$ has been calculated under the assumption that both of these images are formed on the same side of the lens.[]{data-label="Tab3"}
![Variation of different observables: (a) angular position of the innermost image $\theta_{m}$ (top-left corner), (b) angular separation between the outermost and innermost images $s$ (top-right corner), (c) flux ratio of the outermost image with respect to the others, $\mathcal{R}$ (bottom-left corner), and (d) time delay between the first and second relativistic images $\Delta T^{s}_{12}$ when both images are formed on the same side of the lens (bottom-right corner), as functions of the hair parameter. Here ‘$hair$’ corresponds to the Proca hair $P$ for rotating black holes in generalized Proca theories and to the charge hair $q$ for Kerr-Newman black holes. The solid red lines show the behavior of the observables as a function of the Proca hair $P$, whereas the blue dashed lines show their variation as a function of the charge hair $q$. We have assumed that the lensing system is highly aligned and that both the source and observer are situated at infinity. The black hole spin is taken as $a=0.44$, the current estimated value of the spin of Sgr A\* [@spinSgr]. []{data-label="observablesspin"}](rotatingThvsP.eps){width="\textwidth"}
![Variation of different observables: (a) angular position of the innermost image $\theta_{m}$ (top-left corner), (b) angular separation between the outermost and innermost images $s$ (top-right corner), (c) flux ratio of the outermost image with respect to the others, $\mathcal{R}$ (bottom-left corner), and (d) time delay between the first and second relativistic images $\Delta T^{s}_{12}$ when both images are formed on the same side of the lens (bottom-right corner), as functions of the hair parameter. Here ‘$hair$’ corresponds to the Proca hair $P$ for rotating black holes in generalized Proca theories and to the charge hair $q$ for Kerr-Newman black holes. The solid red lines show the behavior of the observables as a function of the Proca hair $P$, whereas the blue dashed lines show their variation as a function of the charge hair $q$. We have assumed that the lensing system is highly aligned and that both the source and observer are situated at infinity. The black hole spin is taken as $a=0.44$, the current estimated value of the spin of Sgr A\* [@spinSgr]. []{data-label="observablesspin"}](rotatingsvsP.eps){width="\textwidth"}
![Variation of different observables: (a) angular position of the innermost image $\theta_{m}$ (top-left corner), (b) angular separation between the outermost and innermost images $s$ (top-right corner), (c) flux ratio of the outermost image with respect to the others, $\mathcal{R}$ (bottom-left corner), and (d) time delay between the first and second relativistic images $\Delta T^{s}_{12}$ when both images are formed on the same side of the lens (bottom-right corner), as functions of the hair parameter. Here ‘$hair$’ corresponds to the Proca hair $P$ for rotating black holes in generalized Proca theories and to the charge hair $q$ for Kerr-Newman black holes. The solid red lines show the behavior of the observables as a function of the Proca hair $P$, whereas the blue dashed lines show their variation as a function of the charge hair $q$. We have assumed that the lensing system is highly aligned and that both the source and observer are situated at infinity. The black hole spin is taken as $a=0.44$, the current estimated value of the spin of Sgr A\* [@spinSgr]. []{data-label="observablesspin"}](spinFvsP.eps){width="\textwidth"}
![Variation of different observables: (a) angular position of the innermost image $\theta_{m}$ (top-left corner), (b) angular separation between the outermost and innermost images $s$ (top-right corner), (c) flux ratio of the outermost image with respect to the others, $\mathcal{R}$ (bottom-left corner), and (d) time delay between the first and second relativistic images $\Delta T^{s}_{12}$ when both images are formed on the same side of the lens (bottom-right corner), as functions of the hair parameter. Here ‘$hair$’ corresponds to the Proca hair $P$ for rotating black holes in generalized Proca theories and to the charge hair $q$ for Kerr-Newman black holes. The solid red lines show the behavior of the observables as a function of the Proca hair $P$, whereas the blue dashed lines show their variation as a function of the charge hair $q$. We have assumed that the lensing system is highly aligned and that both the source and observer are situated at infinity. The black hole spin is taken as $a=0.44$, the current estimated value of the spin of Sgr A\* [@spinSgr]. []{data-label="observablesspin"}](timedealay.eps){width="\textwidth"}
Numerical Estimation of different Observational Parameters
==========================================================
In this section, we present numerical estimates of different observational parameters related to strong lensing in a stationary, axisymmetric spacetime, considering the SMBH at the center of our galaxy as the lens. We consider that the black hole spacetime is given by the solution of second order generalized Proca theories presented in Eq. (\[101\]), and we numerically estimate the values of different observables in the strong field limit for two separate lensing configurations, namely a) when the source, lens and observer are highly aligned, with both the source and the observer at infinity, and b) taking the star S2 as the source.\
We first consider a lensing system where the source, lens and observer are highly aligned and both the source and the observer are very far away from the lens. We numerically solved Eq. (\[123\]) to get the radius of the photon sphere $r_{m}$, and the angular radius of the innermost image follows from the relation $\theta_{m}=u(r_{m})/r_{R}$. We further assume that the outermost image $\theta_{1}$ is resolved as a single image and all the other images are packed together at $\theta_{m}$. Then the observables, namely the angular separation between the innermost and outermost images $s$, the ratio $\mathcal{R}$ of the flux from the outermost image to that from all the other images, and the time delay $\Delta T_{12}^{s}$ between the first and second relativistic images (formed on the same side of the lens), can be approximated by [@Ji:2013xua; @Bozza:2002zj; @Bozza:2003cp] $$\label{151}
\begin{aligned}
s&=\theta_{1}-\theta_{m}\approx\theta_{m}\exp\left[\frac{\bar{b}_{rot}-2\pi}{\bar{a}_{rot}}\right]\\\mathcal{R}&=2.5\log_{10}\left(\dfrac{\mu_{1}}{\sum\limits_{n=2}^{\infty}\mu_{n}}\right)=\frac{5\pi}{\hat{a}_{rot}\ln 10}\\\Delta T^{s}_{12}&\approx2\pi \frac{\widetilde{a}_{rot}(a)}{\bar{a}_{rot}(a)}
\end{aligned}$$ So by measuring $\theta_{m}$, $s$ and $\mathcal{R}$ we can correctly predict the strong lensing coefficients $\bar{a}_{rot}$, $\bar{b}_{rot}$ and the minimum impact parameter $u_{m}$, and by comparing them with the values predicted by a given theoretical model, we can identify the nature of the black hole. In Table-\[Tab3\], we present the numerical estimates of the different observational parameters ($\theta_{m}$, $s$, $\mathcal{R}$, $\Delta T_{12}^{s}$). We also compare the results with those obtained from the Kerr-Newman (KN) black hole solution with charge $q$, whose line element can be expressed as [@1965JMP.....6..915N] $$\begin{aligned}
{\label{KN}}
ds^{2}&=&-\left[1-\frac{2Mr}{\rho^{2}}+\frac{q^{2}}{\rho^{2}}\right]dt^{2}-\dfrac{4a \sin^{2}\vartheta}{\rho^{2}}\left[r-\frac{q^{2}}{2}\right]dt d\phi +\dfrac{\rho^{2}}{\Delta_{KN}}dr^{2}+\rho^{2}d\vartheta^{2}+ \nonumber\\
&&\left[r^{2}+a^{2}+\dfrac{2ra^{2}\sin^{2}\vartheta}{\rho^{2}}-\dfrac{ q^{2}a^{2}\sin^{2}\vartheta}{\rho^{2}}\right]\sin^{2}\vartheta d\phi^{2}
\end{aligned}$$ where $$\begin{aligned}
& &{\Delta}_{KN}=r^{2}-2r+a^{2}+q^{2}\,,\qquad
{\rho}^{2}=r^{2}+a^{2}\cos^{2}\vartheta\,. \nonumber \end{aligned}$$
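The three approximate observables of Eq. (\[151\]) depend only on $\theta_{m}$ and the strong-field coefficients. A Python sketch follows; all numbers in it, including the Schwarzschild-like inputs $\bar{a}_{rot}=1$ and $\bar{b}_{rot}\approx-0.4002$, are illustrative only:

```python
import math

def strong_lensing_observables(theta_m, a_bar, b_bar, a_hat, a_tilde):
    """Eq. (151): image separation s, flux ratio R (in magnitudes) and the
    same-side time delay between the first two relativistic images."""
    s = theta_m * math.exp((b_bar - 2.0*math.pi) / a_bar)
    flux_ratio = 5.0 * math.pi / (a_hat * math.log(10.0))
    dT12 = 2.0 * math.pi * a_tilde / a_bar
    return s, flux_ratio, dT12

# illustrative Schwarzschild-like inputs (theta_m in arbitrary angular units)
s, R, dT = strong_lensing_observables(26.5, 1.0, -0.4002, 1.0, 3.0*math.sqrt(3.0))
print(s, R, dT)
```

Note the exponential suppression of $s$ relative to $\theta_{m}$: the relativistic images crowd very close to the innermost one, which is what makes the separation so hard to resolve observationally.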
![Variation of different observables: (a) angular position of the innermost image $\theta_{m}$ (top-left corner) and (b) angular separation between the outermost and innermost images $s$ (top-right corner) for different values of the Proca hair, and (c) magnification of the second, third and fourth relativistic images, $\log \mu_{n}$ (bottom), as a function of the black hole spin $a$. Here we have taken S2 as the source. Note that the peaks in the magnification correspond to caustic points. For drawing the caustics we assume $P=0.5$, but the overall behavior remains the same for other values of $P$.[]{data-label="thetakvsa"}](ForS2thetavsP.eps){width="\textwidth"}
![Variation of different observables: (a) angular position of the innermost image $\theta_{m}$ (top-left corner) and (b) angular separation between the outermost and innermost images $s$ (top-right corner) for different values of the Proca hair, and (c) magnification of the second, third and fourth relativistic images, $\log \mu_{n}$ (bottom), as a function of the black hole spin $a$. Here we have taken S2 as the source. Note that the peaks in the magnification correspond to caustic points. For drawing the caustics we assume $P=0.5$, but the overall behavior remains the same for other values of $P$.[]{data-label="thetakvsa"}](ForS2svsP.eps){width="\textwidth"}
![Variation of different observables: (a) angular position of the innermost image $\theta_{m}$ (top-left corner) and (b) angular separation between the outermost and innermost images $s$ (top-right corner) for different values of the Proca hair, and (c) magnification of the second, third and fourth relativistic images, $\log \mu_{n}$ (bottom), as a function of the black hole spin $a$. Here we have taken S2 as the source. Note that the peaks in the magnification correspond to caustic points. For drawing the caustics we assume $P=0.5$, but the overall behavior remains the same for other values of $P$.[]{data-label="thetakvsa"}](ForS2magnification.eps){width="\textwidth"}
In Table-\[Tab3\], ‘$hair$’ corresponds to the Proca hair $P$ in the case of rotating black holes in generalized Proca theories (Eq. (\[101\])) and to the charge hair $q$ in the case of Kerr-Newman (KN) black holes (Eq. (\[KN\])). We have also plotted the observables as a function of the hair parameter for these two black hole spacetimes in Fig-\[observablesspin\]. From Table-\[Tab3\], we can see that the angular position of the innermost image $\theta_{m}$ decreases with the increase of the black hole spin $a$, but increases with the increase of the $hair$-parameter, contrary to the KN case. Physically, this implies that the size of the innermost Einstein ring is bigger for a slowly rotating black hole than for a rapidly rotating one. Moreover, the size of the ring is bigger for the generalized Proca black hole spacetime than for the KN spacetime at the same value of the hair parameter. In the Proca black hole spacetime, the angular separation between the innermost and outermost images increases with the increase of $a$. This angular separation increases monotonically with the increase of the $hair$-parameter, similar to the KN case. The relative flux $\mathcal{R}$ decreases with the increase of both $a$ and the $hair$-parameter, again similar to the KN case. The time delay between the first and second relativistic images (formed on the same side of the lens) $\Delta T_{12}^{s}$ increases with the increase of the $hair$-parameter, contrary to the KN case. Note that, among the black holes predicted by General Relativity, the size of the innermost Einstein ring $\theta_{m}$ (or the time delay between the first and second relativistic images $\Delta T_{12}^{s}$) is maximum for the case $a,q=0$ (Schwarzschild black hole). So any value of $\theta_{m}$ (or $\Delta T_{12}^{s}$) greater than the one predicted in Schwarzschild spacetime implies the existence of Proca hair.
Thus, by measuring the size of the innermost Einstein ring (or the time delay between the first and second relativistic images), one can observationally verify the “no-hair theorem” [@nohair].\
Note that in Table-\[Tab3\] we have considered a highly aligned lensing system where both the source and observer are situated at infinity, whereas in Table-\[tab:tab2\] we have taken S2 as the source. Now compare the $a=0$ case (corresponding to a non-rotating black hole with the source at infinity) for different values of the $hair$-parameter presented in Table-\[Tab3\] with the numerical values presented in Table-\[tab:tab2\]. One can see that the value of the angular separation $s$ between the innermost and outermost images is $\sim \mathcal{O}(10)$ times higher in the latter case than in Table-\[Tab3\]. Thus the observable is in a more detectable range when one considers S2 as the source. However, the small value of the angular separation ($\sim \mathcal{O}(10^{-1})$ $\mu$arc-sec) still makes its detection very difficult with present technologies.\
Now we turn our attention to gravitational lensing by a rotating black hole in generalized Proca theories with S2 as the source. In this case, a high alignment of the source with the optic axis does not occur, due to the inclination of the source orbit. So the simplified formulas presented in Eq. (\[151\]) do not apply; rather, we have to rely on the more general lensing formulas given by Eq. (\[129\]) and Eq. (\[147\]) to obtain a correct estimate of the angular positions and the magnifications of the images. Using those expressions, we have plotted the observables $\theta_{m}$ (top-left corner) and $s$ (top-right corner) for different values of the Proca hair $P$, and the magnification of the images corresponding to different winding numbers $n$ (bottom), as a function of $a$ in Fig–\[thetakvsa\]. The peaks in the magnification correspond to the caustic points of the given lensing configuration, where $\mathcal{K}(\gamma)$ vanishes (see Eq. (\[148\])). One can see that images corresponding to low winding numbers have fewer caustic points in the allowed range of $a$; as a result, the dimmer images meet caustics more often in the allowed range of $a$. For drawing the caustics we assume $P=0.5$, but the overall behavior remains the same for other values of $P$.
Discussion and Conclusion
=========================
In this paper, we discussed different astrophysical aspects of a black hole in second order generalized Proca theories with derivative vector field interactions coupled to gravity. These black hole solutions are hairy and hence give us a perfect opportunity to observationally verify the “No-Hair theorem”. We considered that the supermassive black hole at the center of our galaxy is described by these generalized Proca theories, and numerically estimated the values of different observables in the strong field limit for two separate lensing configurations, namely a) when the source, lens and observer are highly aligned, with both the source and the observer at infinity, and b) taking the star S2 as the source. For the latter case, we have shown that although the lensing system is not perfectly aligned, it yields observables in a more detectable range. In early 2018, S2 was at its periapse position, where one gets maximum magnification [@Bozza:2004kq], giving a perfect opportunity to measure different lensing parameters at that time.\
We also compare our results with those obtained for the Reissner-Nordström (Kerr-Newman in the rotating case) black hole to see how the generalized Proca theory modifies the observables, taking the stationary black holes predicted by GR as a reference. Our study shows that the size of the innermost Einstein ring increases with increasing Proca hair $P$ for Proca black holes, whereas the Einstein ring shrinks with increasing charge $q$ for the Reissner-Nordström (Kerr-Newman in the rotating case) black hole. The contribution of the black hole spin is best understood in the analysis of the black hole shadow: adding angular momentum to a black hole causes a slight distortion in the shape of its shadow. Following Ref. [@Hioki:2009na], we have shown that by measuring this distortion with respect to a reference circle, one can accurately measure the black hole parameters ($a, P$). Thus, by analyzing the black hole shadow, we can directly probe the black hole spacetime and hence the underlying gravity theory. With the prospects of the Event Horizon Telescope as well as telescopes like SKA, one can resolve the black hole shadow with great accuracy, and hence probing modified gravity through such observations is possible in the near future. The other two observables, the angular separation between the innermost and outermost images and the relative flux between them, exhibit the same behavior as in the case of the Reissner-Nordström (Kerr-Newman in the rotating case) black hole under a change of the hair parameter (the Proca hair $P$ for Proca black holes and the charge $q$ for the Reissner-Nordström/Kerr-Newman black hole): the angular separation between the images increases, while the relative flux between them decreases, with increasing hair parameter. The time delay between the first and second relativistic images, when both are formed on the same side of the lens, increases with increasing hair parameter, in contrast to the Reissner-Nordström (Kerr-Newman in the rotating case) black hole.
The size of the innermost Einstein ring $\theta_{m}$ (or the time delay between the first and second relativistic images, $\Delta T_{12}^{s}$) is maximal for the $a,q=0$ case (the Schwarzschild black hole) among the black holes predicted by General Relativity. So any value of $\theta_{m}$ (or $\Delta T_{12}^{s}$) greater than that predicted in Schwarzschild spacetime implies the existence of Proca hair. Thus, by measuring the size of the innermost Einstein ring (or the time delay between the first and second relativistic images), one can observationally test the “no-hair theorem".\
Unfortunately, the angular separation between the innermost and outermost relativistic images is extremely small ($\sim \mathcal{O}(10^{-1})$ $\mu$as when taking S2 as the source), which poses a great challenge for present technologies. However, modern near-infrared (NIR) instruments such as PRIMA [@PRIMA], GRAVITY [@GRAVITY1] and ASTRA [@ASTRA] aim to achieve an astrometric accuracy of $10$–$100$ $\mu$as in combination with milli-arcsec angular-resolution imaging. With the help of these techniques, one may probe the Proca hair in the near future.
M.R. thanks INSPIRE-DST, Government of India for a Junior Research Fellowship.
[99]{}
A. G. Riess et al., Astron. J. [**116**]{}, 1103 (1998)
S.Perlmutter et al., Astrophys.J. [**517**]{}, 565 (1999)
J.L Tonry et al., Astrophys.J. [**594**]{}, 1 (2003)
R.A Knop, et al., Astrophys.J. [**598**]{}, 102 (2003)
A.G Riess et al., Astrophys. J. [**607**]{}, 665 (2004)
E.J Copeland et al, Int. J. Mod. Phys. D [**15**]{}, 1753 (2006)
T.Padmanabhan, Phys.Rep. [**380**]{}, 235 (2003)
P.J.E. Peebles and B.Ratra, Rev. Mod. Phys. [**75**]{}, 554 (2003)
V.Sahni and A.Starobinsky, Int. J. Mod. Phys. [**D 9**]{}, 373 (2000)
A. G. Riess et al., Astrophys. J., [**826**]{}, 56 (2016).
A. G. Riess et al., arXiv:1804.10655 \[astro-ph.CO\].
P. W. Higgs, Phys. Rev. Lett., [**13**]{}, 508, (1964);\
F. Englert, Phys. Rev. Lett, [**13**]{}, 321 (1964);\
T. W. B. Kibble, Phys. Rev. D, [**155**]{}, 1554 (1967).
J. Khoury and A. Weltman, Phys. Rev. Lett., [**93**]{}, 171104 (2004).
A. I. Vainshtein, Phys. Lett., [**B 39**]{}, 393 (1972).
A. Nicolis, R. Rattazzi and E. Trincherini, Phys. Rev. D, [**79**]{}, 064036 (2009).
C. Deffayet, G. Esposito-Farese and A. Vikman, Phys. Rev. D, [**79**]{}, 084003 (2009); C. Deffayet, S. Deser and G. Esposito-Farese, Phys. Rev. D, [**80**]{}, 064015 (2009);\
N. Chow and J. Khoury, Phys. Rev.D, [**80**]{}, 024037 (2009);
A. Ali, R. Gannouji and M. Sami, Phys. Rev. D, [**82**]{}, 103015 (2010);\
R. Gannouji and M. Sami, Phys. Rev. D, [**82**]{}, 024011 (2010);\
A. De. Felice and S. Tsujikawa, Phys. Rev. Lett., [**105**]{}, 111301 (2010);\
S. A. Appleby and E. Linder, arXiv:1112.1981;\
E. Linder, arXiv:1201.5127.
G. R. Dvali, G. Gabadadze and M. Porrati, Phys. Lett. B, [**485**]{}, 208 (2000).
M. A. Luty, M. Porrati and R. Rattazzi, JHEP, [**09**]{}, 029 (2003);\
A. Nicolis and R. Rattazzi, JHEP, [**06**]{}, 059 (2004).
R. P. Woodard, Lect. Notes. Phys. [**720**]{}, 403 (2007).
J. D. Barrow, M. Thorsrud, JHEP, [**1302**]{}, 146 (2013);\
R. Kase and S. Tsujikawa, JCAP, [**1308**]{}, 054 (2013);\
C. Deffayet and D. A. Steer, Class. Quant. Grav., [**30**]{}, 214006 (2013);\
P. Martin-Moruno, N. J. Nunes and F. S. N. Lobo, JCAP, [**1505**]{}, 033 (2015);\
A. de Felice, K. Koyama and S. Tsujikawa, JCAP, [**1505**]{}, 058 (2015);\
S. Peirone, K. Koyama, L. Pogosian, M. Raveri and A. Silvestri, Phys. Rev. D, [**97**]{}, 043519 (2018).
G. Tasinato, JHEP [**1404**]{}, 067 (2014).
M. Fierz and W. Pauli, Proc. Roy. Soc. Lond., [**A173**]{}, 211 (1939);\
K. Hinterbichler, Rev. Mod. Phys., [**84**]{}, 671 (2012);\
M. V. Bebronne, P. G. Tinyakov, JHEP, [**0904**]{}, 100 (2009);\
C. de Rham and L. Heisenberg, Phys. Rev. D [**84**]{}, 043503 (2011);\
L. Heisenberg and A. Refregier, JCAP, [**1609**]{}, 020 (2016).
L. Heisenberg, JCAP [**1405**]{}, 015 (2014).
A. de. Felice, L. Heisenberg, R. Kase, S. Mukohyama, S. Tsujikawa, Y-Li Zhang, JCAP, [**1606**]{}, 048 (2016);\
A. de. Felice, L. Heisenberg, R. Kase, S. Mukohyama, S. Tsujikawa, Y-Li Zhang, Phys. Rev. D, [**94**]{}, 044024 (2016);\
A. de. Felice, L. Heisenberg, S. Tsujikawa, Phys. Rev. D, [**95**]{}, 123540 (2017)
L. Heisenberg, [*A systematic approach to generalisations of General Relativity and their cosmological implications*]{}, arXiv:1807.01725 \[gr-qc\].
B. P. Abbott et al., Phys. Rev. Lett., [**116**]{}, 061102 (2016).
B. P. Abbott et al., Phys. Rev. Lett., [**119**]{}, 161101 (2017).
P. Creminelli, F. Vernizzi, Phys. Rev. Lett, [**119**]{}, 251302 (2017);\
J. M. Ezquiaga, M. Zumalacárregui, Phys. Rev. Lett, [**119**]{}, 251304 (2017);\
J. Sakstein, B. Jain, Phys. Rev. Lett, [**119**]{}, 251303 (2017);\
L. Amendola, M. Kunz, I. D. Saltas, I. Sawicki, Phys. Rev. Lett, [**120**]{}, 131101 (2018).
J. Beltran Jimenez and L. Heisenberg, Phys. Lett. [**B 757**]{}, 405 (2016);\
E. Allys, P. Peter and Y. Rodriguez, JCAP [**1602**]{}, no. 02, 004 (2016).
L. Heisenberg, R. Kase, M. Minamitsuji and S. Tsujikawa, Phys. Rev. D [**96**]{}, 084049 (2017);\
L. Heisenberg, R. Kase, M. Minamitsuji and S. Tsujikawa, JCAP [**1708**]{}, no. 08, 024 (2017).
P. Schneider, J. Ehlers, and E. .E. Falco, [*Gravitational Lenses*]{}, Springer-Verlag, (1992).
S. Liebes, Phys. Rev. B, [**133**]{}, 835 (1964).
S. Refsdal and J. Surdej, Rept. Prog. Phys., [**57**]{}, 117 (1994).
K. S. Virbhadra and G. F. R. Ellis, Phys. Rev. D [**65**]{}, 103004 (2002).
E. F. Eiroa, G. E. Romero and D. F. Torres, Phys. Rev. D, [**66**]{}, 024010 (2002).
V. Bozza, F. De Luca, and G. Scarpetta, Phys. Rev. D, [**74**]{}, 063001 (2006);\
G. V. Kraniotis, Gen. Rel. Grav., [**46**]{}, 1818, (2014).
R. Whisker, Phys. Rev. D, [**71**]{}, 064004 (2005).
S.-S. Zhao and Y. Xie, JCAP, [**1607**]{}, 007 (2016).
S. Chakraborty and S. SenGupta, JCAP [**1707**]{}, 045 (2017).
P. V. P. Cunha, C. A. R. Herdeiro, E. Radu, H. F. Runarsson, Phys. Rev. Lett., [**115**]{}, 211102 (2015).
Ru-Sen Lu et. al., Astrophys. J., [**788**]{}, 120 (2014).
R. Deane et. al., PoS. AASKA, [**14**]{}, 151 (2015).
N. Tsukamoto, Phys. Rev. D [**95**]{}, 064035 (2017).
C. M. Claudel, K. S. Virbhadra and G. F. R. Ellis, J. Math. Phys. [**42**]{}, 818 (2001).
V. Bozza, Phys. Rev. D [**66**]{}, 103001 (2002).
A. Ishihara, Y. Suzuki, T. Ono, T. Kitamura and H. Asada, Phys. Rev. D [**94**]{}, 084015 (2016).
V. Bozza, Phys. Rev. D [**78**]{}, 103005 (2008).
A. Hees [*et al.*]{}, Phys. Rev. Lett. [**118**]{}, 211101 (2017).
Boehle, A., Ghez, A. M., Sch[ö]{}del, R., et al., Astrophys. J., [**830**]{}, 17 (2016);\
A. M. Ghez [*et al.*]{}, Astrophys. J. [**689**]{}, 1044 (2008).
R. Zhang, J. Jing and S. Chen, Phys. Rev. D [**95**]{}, 064054 (2017).
Meyer, L., Ghez, A. M., Sch[ö]{}del, R., et al., Science, [**338**]{}, 84 (2012).
V. Bozza and L. Mancini, Astrophys. J. [**611**]{}, 1045 (2004).
Ohanian, H. C., American Journal of Physics, [**55**]{}, 428 (1987).
V. Bozza and L. Mancini, Gen. Rel. Grav. [**36**]{}, 435 (2004).
A. Y. Bin-Nun, Phys. Rev. D [**82**]{}, 064009 (2010).
Reissner, H., Annalen der Physik, [**355**]{}, 106 (1916).
V. Bozza, F. De Luca, G. Scarpetta and M. Sereno, Phys. Rev. D [**72**]{}, 083003 (2005).
E. T. Newman and A. I. Janis, J. Math. Phys. 6 915 (1965);\
S. P. Drake and P. Szekeres, Gen. Rel. Grav. [**32**]{}, 445 (2000).
Boyer, R. H., & Lindquist, R. W., Journal of Mathematical Physics, [**8**]{}, 265 (1967).
S. Chandrasekhar, [*The Mathematical Theory of Black Holes*]{}, Oxford University Press (14 July 1983), ISBN-13: 978-0198512912.
V. Bozza, Phys. Rev. D [**67**]{}, 103006 (2003).
G. N. Gyulchev and S. S. Yazadjiev, Phys. Rev. D [**75**]{}, 023006 (2007).
K. Hioki and K. i. Maeda, Phys. Rev. D [**80**]{}, 024042 (2009).
L. Amarilla and E. F. Eiroa, Phys. Rev. D [**85**]{}, 064019 (2012);\
B. P. Singh, arXiv:1711.02898 \[gr-qc\];\
A. Abdujabbarov, M. Amir, B. Ahmedov and S. G. Ghosh, Phys. Rev. D [**93**]{}, no. 10, 104004 (2016).
C. W. Misner, K. S. Thorne, J. A. Wheeler, [*Gravitation*]{}, (1973), San Francisco: W. H. Freeman, ISBN 978-0-7167-0344-0;\
W. Israel, Phys. Rev. [**164**]{}, 1776 (1967);\
B. Carter, Phys. Rev. Lett., [**26**]{}, 331 (1971).
L. Amarilla and E. F. Eiroa, Phys. Rev. D [**87**]{}, 044057 (2013).
L. Ji, S. Chen and J. Jing, JHEP [**1403**]{}, 089 (2014).
Newman, E. T., & Janis, A. I., Journal of Mathematical Physics, [**6**]{}, 915 (1965);\
Newman, E. T., Couch, E., Chinnapared, K., et al., Journal of Mathematical Physics, [**6**]{}, 918 (1965).
Y. Kato, M. Miyoshi, R. Takahashi, H. Negoro and R. Matsumoto, MNRAS, [**403**]{}, L74 (2010).
H. Bartko [*et al.*]{}, Proc. SPIE Int. Soc. Opt. Eng. [**7013**]{}, 4K (2008).
F. Eisenhauer [*et al.*]{}, Proc. SPIE Int. Soc. Opt. Eng. [**7013**]{}, 2A (2008);\
V. Bozza and L. Mancini, Astrophys. J. [**753**]{}, 56 (2012).
J. U. Pott [*et al.*]{}, New Astron. Rev. [**53**]{}, 363 (2009).
---
abstract: |
Multi-core architectures can be leveraged to allow independent processes to run in parallel. However, due to resources shared across cores, such as caches, distinct processes may interfere with one another, e.g. affecting execution time. Analysing the extent of this interference is difficult due to: (1) the diversity of modern architectures, which may contain different implementations of shared resources, and (2) the complex nature of modern processors, in which interference might arise due to subtle interactions. To address this, we propose a black-box auto-tuning approach that searches for processes that are effective at causing slowdowns for a program when executed in parallel. Such slowdowns provide lower bounds on worst-case execution time; an important metric in systems with real-time constraints.
Our approach considers a set of parameterised “enemy” processes and “victim” programs, each targeting a shared resource. The autotuner searches for enemy process parameters that are effective at causing slowdowns in the victim programs. The idea is that victim programs behave as a proxy for shared resource usage of arbitrary programs. We evaluate our approach on: 5 different chips; 3 resources (cache, memory bus, and main memory); and consider several search strategies and slowdown metrics. Using enemy processes tuned per chip, we evaluate the slowdowns on the autobench and coremark benchmark suites and show that our method is able to achieve slowdowns in $98\%$ of benchmark/chip combinations and provide similar results to manually written enemy processes.
author:
- |
Dan Iorga\
Imperial College London\
[email protected]
- |
Tyler Sorensen\
Princeton University\
[email protected]
- |
Alastair F. Donaldson\
Imperial College London\
[email protected]
bibliography:
- 'references.bib'
title: |
Do Your Cores Play Nicely?\
A Portable Framework for Multi-core Interference Tuning and Analysis
---
``` {.numberLines numbers="left" xleftmargin="20em" basicstyle="\footnotesize\ttfamily"}
void vec_add(float *A, float *B, float *C, int SIZE) {
for (int i = 0; i < SIZE; i++)
C[i] = A[i] + B[i];
}
```
``` {.numberLines numbers="left" xleftmargin="20em" basicstyle="\footnotesize\ttfamily"}
void cache_enemy(byte* scratch) {
while(1)
for (int i = 0; i < BUFFER_SIZE; i += STRIDE)
ACCESS(&(scratch[i]));
}
```
Introduction
============
Multi-core processors have seen widespread adoption, with nearly every consumer device containing more than one independent processing unit. However, due to shared resources and their corresponding arbitration mechanisms, e.g. cache hierarchies and protocols, reasoning about program behaviours on multi-core processors can be significantly more complex than on their single-core predecessors. Interference can affect non-functional properties of otherwise entirely separate processes, e.g. the execution time of a program on a multi-core processor can vary greatly depending on shared resource contention. Because of these issues, the *Worst Case Execution Time* (WCET) of an application on a multi-core chip is difficult to derive, and as a result, multi-core processors remain challenging to deploy in systems with hard or soft real-time constraints.
Because of this, prior work has identified *interference paths*, where the contention and arbitration of shared resources might impact program execution time [@5347560; @5347562; @5347561; @Berthon2016; @Brindejonc2014]. Components of interference paths include caches, memory buses, and main memory systems. Although not widely adopted, various schemes have been proposed to limit this interference. Such schemes require either hardware support, e.g. cache partitioning [@6375555], or invasive software modifications, e.g. bank-aware memory allocation and bandwidth reservation [@6531079; @6925999]. However, even with these schemes, interference can still be substantial [@7461361]. As a result, there is immediate and pragmatic interest in detecting and quantifying interference effects rather than aiming to mitigate them entirely.
In this vein, various techniques have been investigated to quantify the effects of interference on real-time properties, and evaluated for specific multi-core architectures [@Radojkovic:2012:EIS:2086696.2086713; @6214768; @Fernandez:2012:ASN:2380356.2380389]. Typically, work in this domain consists of: manually developing small programs designed exclusively to stress an interference path (called *enemy processes* in this work), executing a set of enemy processes on all but one core of a multi-core system (called a *hostile environment* in this work), and evaluating the execution time of a sequential *Software Under Test* (SUT) on the remaining core. The slowdown observed in the SUT from the hostile environment is a useful quantification of interference effects.
This prior work requires the manual design of hostile environments and corresponding enemy processes, presenting two immediate limitations. First, manual enemy process design is not *portable* across architectures, as different architectures may have different implementations, or their shared resources may be configured differently. Thus, manual effort is required for each new target architecture. Second, hand-tuned enemy processes may not be as effective as possible, due to subtle interactions that are difficult to derive from available documentation, and thus unlikely to be considered by human designers.
\
In this work, we aim to address both limitations. The heart of our contribution is an auto-tuning method that can tune enemy process parameters to be effective at slowing down an SUT. For each interference path considered, our approach takes a parameterised enemy process (called an *enemy template*) and a corresponding *victim program*, which is designed to be especially vulnerable to the particular interference path. We then employ auto-tuning to search for enemy process parameters that are effective at causing a slowdown in the corresponding victim program. After obtaining tuned enemy processes, we employ a second level of tuning over all victim programs to obtain a combination of enemy processes, which can be deployed as a hostile environment. The tuning approach can be run on many chips using the same enemy templates and victim programs to produce chip-specialised hostile environments.
We illustrate the problems with manually-designed enemy processes, and explain at a high-level how our auto-tuning approach overcomes these limitations, using an example.
#### Example of Portability Limitation
------------------ ---------- ---------- ----------- -----------
                   Pi3 (HT)   Pi3 (AT)   570X (HT)   570X (AT)
[BUFFER\_SIZE]{}   512        20480      2048        40960
[STRIDE]{}         64         262        64          40960
[ACCESS]{}         SL         SLSSL      SL          SS
------------------ ---------- ---------- ----------- -----------

: Parameters for the enemy template of Figure \[fig:example-enemy\] for the [Pi3]{} and [570X]{} boards, hand-tuned (HT) and auto-tuned (AT): [BUFFER\_SIZE]{} is given in KB; [STRIDE]{} is given in bytes; and [ACCESS]{} is given by a sequence of store (S) and load (L) operations[]{data-label="tab:example-parameters"}
![Slowdowns caused by different hostile environments on the [Pi3]{}and [570X]{}: HT denotes enemy processes hand-tuned for the chip given in (), and AT denotes auto-tuned enemy processes for the target chip []{data-label="fig:example-slowdowns"}](images/Pi3_example.pdf "fig:"){width=".49\linewidth"} ![Slowdowns caused by different hostile environments on the [Pi3]{}and [570X]{}: HT denotes enemy processes hand-tuned for the chip given in (), and AT denotes auto-tuned enemy processes for the target chip []{data-label="fig:example-slowdowns"}](images/570X_example.pdf "fig:"){width=".49\linewidth"}
Figure \[fig:example-vec-add\] shows a simple vector addition program. The [SIZE]{} parameter can be set to a value (in this case we use 16K) such that the vector data is not able to fit entirely in a core-local cache; thus, memory accesses must go through shared caches at some point. Suppose we want to assess the potential interference between this program (the SUT) running on one core and a set of independent programs running on other cores. The program of Figure \[fig:example-enemy\] is an example of an enemy process: the sole task of the program is to exercise an interference path. This particular enemy process is designed to cause cache contention by looping over a memory region the size of the shared cache and accessing memory at a stride of the cache line size. To explore potential interference, the execution time of the SUT executing on one core can be measured while multiple instances of the cache enemy process are running on other cores in the system.
As the enemy process code of Figure \[fig:example-enemy\] shows, there are three parameters that need to be instantiated: (1) the size of the shared cache; (2) the cache line size; and (3) the memory instructions, i.e. loads or stores, used to access memory. To provide suitable values, the target processor must be known. For example, on a 4-core Raspberry Pi 3 B (abbreviated to [Pi3]{}), the shared cache size is 512KB, the cache line is 64 bytes and we use a single store followed by a load as the instructions. The execution time of the SUT shows a $1.14\times$ slowdown when executed in parallel with this enemy process running on the three additional cores.
Now, if we consider a different processor, say a 4-core Intel Joule 570X (abbreviated to [570X]{}), we might try the same experiment using the same parameters from the [Pi3]{}. In this case, we observe a slowdown of $1.06\times$. We then might try a different enemy process, tuned to the architectural details of the [570X]{}. This changes the size of the shared cache to 2048KB but keeps the same cache line size. With this new enemy process, a much greater slowdown of $1.36\times$ is observed. However, when the [570X]{} enemy process is used on the [Pi3]{}, the slowdown observed is $1.14\times$, exactly the same as with the original [Pi3]{} enemy process. Since the [Pi3]{} and the [570X]{} have the same cache line size and the same associativity but different cache sizes, the enemy processes access the cache in a similar manner, but the [Pi3]{} enemy process does not access the entire [570X]{} cache.
These results are intuitive: enemy processes tuned to a particular architecture will be less effective when running on a different architecture. Prior work in this area has largely focused on such enemy processes, i.e. hand-tuned for a particular architecture. In the case where enemy processes are written at the assembly level, the situation is even worse: enemy processes designed with respect to one ISA are inapplicable to a processor with a different ISA.
#### Example of Effectiveness Limitation
Recall that the hand-tuned enemy processes showed an SUT slowdown of $1.14\times$ and $1.36\times$ for the [Pi3]{} and [570X]{}, respectively. These slowdowns were achieved by making reasonable human judgements for enemy process parameters. If instead, using the methodology described in the remainder of this paper, the enemy template is *auto-tuned* for two hours, different parameters can be found (summarised in Table \[tab:example-parameters\]).
The values found by auto-tuning do not correspond to any architectural features that we are aware of, and it seems unlikely that a human designer with detailed knowledge of these processors would guess them. However, the slowdowns of the SUT using this auto-tuned hostile environment increase to $10.08\times$[^1] and $1.66\times$ for the [Pi3]{} and [570X]{}, respectively. The slowdown results are summarised in Figure \[fig:example-slowdowns\]. Thus, an auto-tuning methodology for enemy processes (1) requires less architectural knowledge and manual effort than the hand-tuned approaches of previous work, and (2) can provide greater slowdowns than reasonable hand-tuned enemy processes.
#### Contributions
We present an auto-tuning methodology for hostile environments and the corresponding enemy processes. This approach can be employed for different chips, removing the need for the detailed architectural expertise required by prior works in this area. Additionally, because of the many configurations explored by the tuning approach, our methods may discover non-intuitive parameters that cause slowdowns beyond what might be developed through hand-optimisation, as illustrated in the above example.
We illustrate our approach by creating a hostile environment on five different chips for three interference paths: cache, memory bus, and the main memory system. This requires three (enemy template, victim program) pairs. For tuning, we explore three search strategies (random search, simulated annealing and Bayesian optimisation), and report on the effectiveness of each.
Finally, we assess the effectiveness of our approach at causing slowdowns in SUTs by running benchmarks from the coremark and autobench application suites [@Coremark; @Autobench] in the hostile environments produced by our tuning methodology. We show that we can achieve statistically significant slowdowns for $98\%$ of benchmarks. We compare the slowdowns caused by our auto-tuned hostile environments with hand-tuned hostile environments from prior work and show that the slowdowns are comparable and that in some cases we are even able to achieve higher slowdowns.
In summary, our contributions, in order of presentation, are:
1. An auto-tuning methodology for hostile environments with the aim to cause slowdowns in an SUT; this methodology is portable and can be used to automatically tune hostile environments for different chips (Section \[sec:creating\_a\_hostile\_environment\]).
2. An illustration of our methodology on five different chips for three interference paths: cache, memory bus, and main memory systems; we evaluate several natural search strategies and slowdown metrics (Section \[sec:experimental\_setup\]).
3.  An assessment of the extent to which our tuned hostile environments slow down the coremark and autobench real-time application suites; we show that in many cases our auto-tuned hostile environments are as effective as the hand-optimised assembly environments of prior work [@Radojkovic:2012:EIS:2086696.2086713], and sometimes better (Section \[sec:results\]).
The source code for our framework can be found online.[^2]
Creating a Hostile Environment {#sec:creating_a_hostile_environment}
==============================
We now describe in detail our methodology for creating a hostile environment, which aims to be effective at causing slowdowns in an SUT through shared resource interference. After a high-level overview of our approach (Section \[sec:overview\]), we detail the interference paths (shared resources) targeted in this work along with the associated per-resource victim programs (Section \[sec:shared\_resources\]). We then explain how we use auto-tuning to search for enemy template parameters per resource (Section \[sec:tuningpertest\]) and how hostile environments are constructed from tuned enemy processes (Section \[sec:tuninghostile\]). Because different search strategies can be used in the tuning phase, we outline several natural choices (Section \[sec:searchstrategies\]). We conclude the section by describing the care we have taken to ensure validity of the measurements that form the basis of the tuning process (Section \[sec:meas\_validity\]).
Overview of our approach {#sec:overview}
------------------------
The first step in our method consists of identifying possible interference paths, i.e. shared resources for which multi-core contention might cause slowdowns. For each one of these paths, we create: (1) a parameterised *enemy template* that will run in an infinite loop and stress the resource and (2) a *victim program* that performs a fixed amount of synthetic work, making heavy use of the resource. We tune the parameters of the enemy template using the slowdown it can cause on its corresponding victim program as the objective function. Because the victim program is vulnerable to interference on the target resource, the degree to which it is slowed down serves as a proxy for measuring interference on the associated interference path.
Each enemy process is tuned to provoke interference to a *specific* resource. However, we want to develop a hostile environment that is effective at slowing down an arbitrary SUT. A black-box SUT is likely to use multiple resources in complex ways. Thus, we aim to find a combination of tuned enemy processes to be effective across all victim programs. The second step in our methodology involves searching for a combination of tuned enemy processes, with one enemy process running on each non-SUT core of the processor, that is effective in causing interference with respect to multiple resources.
  Parameter                 Discussed in this work
  ------------------------- ----------------------------------
  Victim program resource   cache; bus; main memory
  Enemy template resource   cache; bus; main memory
  Search strategy           random; sim. ann.; Bayesian opt.
  Metric                    median; max; quantile
: The available parameters of the tuning framework
\[table:framework\_parameters\]
Table \[table:framework\_parameters\] summarises the parameters that can be used to configure our auto-tuning framework. In the tuning phase of our methodology, different search strategies can be employed. We evaluate three natural choices: random search, simulated annealing and Bayesian optimisation (see Section \[sec:searchstrategies\] for more detailed descriptions of each), and compare their proficiency in finding effective parameters. Since measurements in this domain are noisy, we time multiple runs and analyse the results to best approximate the actual interference. These metrics, along with the steps we have taken to ensure measurement validity, are explained in more detail in Section \[sec:meas\_validity\].
Shared Resources and Victim Programs {#sec:shared_resources}
------------------------------------
We now discuss several shared resources that can lead to interference between independent applications running in parallel on separate cores. For each type of interference, we first outline the ANSI-C victim program we have designed to be vulnerable to this interference, and then describe a parameterised enemy template which aims to effectively provoke interference through this resource.
#### Bus
Buses are used to transfer data between memory and processing elements. To reduce the area of a processor, the bus is often shared between multiple processing cores, requiring some form of arbitration mechanism. There are three main classes of resource arbitration mechanisms: (1) *time-driven* arbitration uses a predefined bus schedule that assigns time slots to contending components; (2) *event-driven* arbitration resolves contention at runtime, e.g. via a round-robin or first-in-first-out strategy; and (3) a *hybrid* approach that uses different runtime policies for each time slot [@10.1007/978-3-642-40184-8_3].
[*Victim Program*]{} The bus victim program reads a series of numbers from a main memory data buffer and increments their values. The increment operation forces the victim program to bring the numbers from main memory to the CPU. Afterwards, the process writes the numbers back to a second buffer in main memory. The buffers are allocated using `malloc` and are sufficiently large not to fit in the cache. This entire process ensures that the bus will be kept busy with transfers between main memory and registers.
[*Enemy Template*]{} We have designed the enemy with the aim of hindering data transfers between the CPU and main memory. It achieves this by performing the same operation as the victim process and thus competes for the same bus resource. Each enemy template has distinct buffers to simplify the design and avoid using any synchronisation mechanism.
The configurable parameters are:
- Size of main memory data buffers.
-   Integer data type used for the buffers: `int8_t` or `int16_t`.
-   Which buffer is used for reads and which for writes.
Intuitively, the larger the size of the data being transferred, the more contention it will cause. However, there might be access patterns in which bursts of fast transfers through the bus followed by pauses trigger timing issues, as shown in [@7588111].
#### Cache
Caches are used to reduce latency between the processor and main memory. A cache is much smaller than the main memory and, as a result, only a small subset of main memory can be stored in the cache at one time. There are multiple cache levels, and usually the last level cache is shared between processor cores. As an example, the [570X]{} has two levels of cache, where the level 1 data and instruction caches are core-local, while the level 2 cache is shared between all cores. Once a cache becomes full, stale data is evicted and replaced with newer data using a variety of policies, e.g., first in first out, least recently used, and random replacement [@HennessyPatterson12]. Because these caching policies have a direct effect on memory latency, multi-core timing analysis must take into account potential interference from contending cores.
[*Victim Program*]{} The victim program is configured based on the size of the shared cache, its associativity and its line size. The program allocates an array of the size of the last level cache and then reads and writes to the array in a pattern given by the associativity and the line size. The processor optimises access by storing the array in the last level cache.
[*Enemy Template*]{} The cache enemy template works by striding over a data buffer and performing a sequence of memory operations (reads or writes). The configurable parameters will force it to access the cache in a chaotic manner, thus working against data locality for which caches are designed to optimise. The cache enemy template is configured by the following parameters:
- Data buffer size
- Stride value
- Series of operations; reads, writes (up to five)
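A hypothetical sketch of this striding behaviour, with the function name and the `'r'`/`'w'` encoding of the operation sequence chosen for illustration (the real template is C code):

```python
# Sketch of the cache enemy template (illustrative only).
# Parameters mirror the list above: buffer size (len(buf)), stride,
# and a sequence of up to five read ('r') / write ('w') operations.
def cache_enemy_pass(buf: bytearray, stride: int, ops: str) -> int:
    assert 1 <= len(ops) <= 5
    acc = 0
    for i in range(0, len(buf), stride):
        for op in ops:
            if op == "r":
                acc = (acc + buf[i]) & 0xFF   # read at this location
            else:
                buf[i] = (buf[i] + 1) & 0xFF  # write at this location
    return acc
```

A large stride relative to the cache line size defeats the spatial locality that caches are designed to exploit, which is exactly the "chaotic" access the tuner searches for.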
#### Main Memory
Access to main memory is granted by a shared controller. In a single read or write operation, only one bank of memory can be accessed for single-channel memories, so simultaneous requests can be delayed [@6925999]. The interference issues are similar to those of shared caches; however, the runtime impact of *contended* memory accesses is higher.
[*Victim Program*]{} The victim program allocates a data buffer in main memory and writes random values to it. We ensure that the values are actually written in main memory and not just in the cache by having a sufficiently large buffer that does not fit in the cache for any processor.
[*Enemy Template*]{} The goal of the enemy template is to touch as many memory banks as possible. It allocates a large data buffer and repeatedly selects a random, contiguous sub-region of this buffer, of a given fixed size, then chooses a random byte value and uses [memset]{} to write this value across the sub-region. The following parameters configure the enemy process:
- The size of the data buffer
- The size of the sub-region
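One round of this behaviour can be sketched as follows; the Python below only models the access pattern (slice assignment stands in for `memset`), and the function name is illustrative:

```python
import random

# Sketch of one round of the main-memory enemy (illustrative only).
# A large buffer stands in for main memory; each round "memsets" a
# random contiguous sub-region with a randomly chosen byte value,
# so repeated rounds touch many different memory banks.
def memory_enemy_round(buf: bytearray, region_size: int,
                       rng: random.Random) -> None:
    start = rng.randrange(0, len(buf) - region_size + 1)
    value = rng.randrange(256)
    buf[start:start + region_size] = bytes([value]) * region_size
```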
#### Overlap and Extensibility
There will be a certain level of overlap between enemy processes. The hardware cache policies can make the memory enemy process first write data to the cache and therefore also act as a cache enemy. We also expect the memory enemy process to stress the system bus by moving data between the processor and memory. However, each enemy process concentrates its attack on one resource; e.g. the bus enemy, in contrast to the memory enemy, only reads data from the same locations in main memory and therefore demands less work from the memory controller. In this way the enemy processes complement each other.
It is easy to extend the framework and stress different shared resources that we have not included and that might be unique to specific platforms. This extension can be done by adding a pair consisting of a victim program and an enemy template, including the tunable parameters.
Tuning enemy processes per Resource {#sec:tuningpertest}
-----------------------------------
For each processor, we tune the parameters of the enemy templates to cause the highest slowdown in their corresponding victim program. In this section, we describe how this process is performed.
In what follows, let $R$ denote the set of resources to be targeted. In our current work, $R = \{ {\mathsf{bus}}{}, {\mathsf{cache}}{}, {\mathsf{RAM}}{} \}$. For a resource $r \in R$ let $V_r$ denote the victim program associated with $r$, and $T_r$ the enemy template associated with $r$. For example $V_{{\mathsf{bus}}}$ denotes the victim programs associated with the ${\mathsf{bus}}$ resource, and $T_{{\mathsf{cache}}}$ the enemy template associated with the ${\mathsf{cache}}$ resource.
A template $T_r$ takes a set of parameters drawn from a parameter set $P_r$. For a given parameter valuation $p \in P_r$, let $T_r(p)$ denote the concrete enemy process obtained by instantiating $T_r$ with parameters $p$.
For a victim program $V_r$, template $T_r$ and parameter setting $p \in P_r$, let ${\mathsf{slowdown}}(V_r, T_r(p))$ denote the slowdown associated with (1) executing $V_r$ in isolation on core 0 (with all other cores unoccupied), compared with (2) executing $V_r$ on core 0, in parallel with an instance of $T_r(p)$ on every other available core.
Our aim is to compute:
$${\underset{p \in P_r}{\operatorname{arg}\,\operatorname{max}}\;} \; {\mathsf{slowdown}}(V_r, T_r(p))$$
Because $P_r$ is too large to search exhaustively, we use a search strategy to approximate the maximum. The strategies we consider are discussed in Section \[sec:searchstrategies\].
Let $p_r^{{\mathsf{tuned}}}$ denote the best parameter setting that was found via search using the chosen strategy. We refer to the set ${\mathsf{ResourceTunedEnemies}}{} = \{T_r(p_r^{{\mathsf{tuned}}})\; \mid r \in R\}$ as the set of *resource-tuned enemies*.
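The tuning step can be sketched as a sampling loop over the parameter space; here `measure_slowdown` is a stand-in for the real measurement (running $V_r$ alone versus alongside the instantiated enemies), and all names are illustrative:

```python
import random

# Sketch of per-resource tuning: approximate argmax_p slowdown(V_r, T_r(p))
# by sampling parameter valuations p from P_r within a fixed budget.
def tune_enemy(param_space, measure_slowdown, budget, rng):
    """param_space: dict mapping parameter name -> list of candidate values.
    Returns (best_params, best_slowdown) found within the budget."""
    best_p, best_s = None, float("-inf")
    for _ in range(budget):
        p = {name: rng.choice(vals) for name, vals in param_space.items()}
        s = measure_slowdown(p)   # stand-in for a real timed experiment
        if s > best_s:
            best_p, best_s = p, s
    return best_p, best_s
```

The random-sampling body here corresponds to the simplest of the three search strategies; the smarter strategies only change how the next `p` is proposed.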
Tuning a Hostile Environment {#sec:tuninghostile}
----------------------------
The tuning process described in Section \[sec:tuningpertest\] considers the same enemy process running on every available core other than that occupied by the SUT. We aim to devise a deployment of enemy processes that is effective at inducing interference across all resource types since we do not know the resource usage profile of the SUT a priori. We determine the best configuration of enemy processes by using a strategy similar to the one described in [@Sorensen:2016:EER:2980983.2908114].
We refer to a configuration of multiple possibly distinct enemy processes running on the non-SUT cores as a *hostile environment*. More formally, for an $n$-core processor where the SUT runs on core 0, a hostile environment is a mapping from $\{1, \dots, n-1\} \rightarrow {\mathsf{ResourceTunedEnemies}}{}$.
We now describe our strategy for choosing a suitable hostile environment from the set of $|R|^{n-1}$ possibilities. First, for each resource $r \in R$, we rank every possible hostile environment according to the extent to which it slows down $V_r$, with the environment that induces the largest slowdown ranked first. Let ${\mathsf{RankedEnvironments}}(r)$ denote this ranking. This set is much smaller than the tuning search space, so we can run these experiments exhaustively. For the most common processor in our case, a 4-core processor with 3 shared resources, this requires only 81 evaluations (each of the $3^{3} = 27$ environments measured against each of the three victim programs).
We then select a Pareto-optimal hostile environment. This is an environment $e$ such that there does not exist an environment $e' \neq e$ that, for all $r \in R$, is ranked more highly than $e$ in ${\mathsf{RankedEnvironments}}(r)$. The Pareto-optimal environment may not be unique, in which case we apply a tie-breaking mechanism: we select the environment that is ranked better in all but one of the rankings ${\mathsf{RankedEnvironments}}(r)$.
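A sketch of this selection, assuming each ranking lists environments best-first; note that the tie-breaking below is simplified to the best summed rank rather than the all-but-one rule used in the paper:

```python
# Sketch of Pareto-optimal hostile-environment selection.
# rankings: dict mapping resource -> list of environments, best first
# (index 0 = largest slowdown of that resource's victim program).
def pick_environment(rankings):
    resources = list(rankings)
    envs = rankings[resources[0]]
    rank = {r: {e: i for i, e in enumerate(rankings[r])} for r in resources}

    def dominates(a, b):  # a strictly better than b for every resource
        return all(rank[r][a] < rank[r][b] for r in resources)

    pareto = [e for e in envs
              if not any(dominates(o, e) for o in envs if o != e)]
    # Simplified tie-break: smallest summed rank across all resources.
    return min(pareto, key=lambda e: sum(rank[r][e] for r in resources))
```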
Search Strategies {#sec:searchstrategies}
-----------------
To estimate the maximum interference caused to the victim program, we need to find effective parameters for the enemy templates given in Section \[sec:shared\_resources\]. We intuitively expect the search space of enemy process configurations to be discontinuous with respect to interference, e.g. due to caches having fixed parameters that are typically powers of two, memory being organised in banks, etc. Therefore we utilise search strategies that do not make any explicit assumption about the convexity of the cost function and do not rely on gradient information. To do so, we evaluate the following candidates:
#### Random search
samples different configurations and remembers the best values. This approach has the advantage of being lightweight and providing a baseline for the more complicated techniques.
#### Simulated annealing
is a metaheuristic to approximate global optimisation in a large search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable.
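A minimal sketch of the acceptance rule at the heart of simulated annealing, with a geometric cooling schedule assumed for illustration (the actual schedule and parameters of our implementation are not prescribed here):

```python
import math
import random

# Minimal simulated-annealing sketch for maximising a score over a
# discrete space. Worse candidates are accepted with probability
# exp(delta / temperature); the temperature decays geometrically.
def anneal(start, score, neighbour, rng, steps=1000,
           temp=1.0, cooling=0.995):
    current, current_s = start, score(start)
    best, best_s = current, current_s
    for _ in range(steps):
        cand = neighbour(current, rng)
        cand_s = score(cand)
        delta = cand_s - current_s
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            current, current_s = cand, cand_s
        if current_s > best_s:
            best, best_s = current, current_s
        temp *= cooling
    return best, best_s
```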
#### Bayesian optimisation
Having an unknown objective function, the Bayesian strategy is to treat it as a random function and place a prior over it. The prior captures our beliefs about the behaviour of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function that determines what the next query point should be.
There are advantages and disadvantages to each of these strategies. Random search and simulated annealing can quickly determine the next query point. Bayesian optimisation needs time to remodel the objective function and the acquisition function after each new query is made. On the other hand, it is expected that Bayesian optimisation will only sample points that increase our knowledge of the problem. In general, one would expect to prefer the first two strategies when the cost function is cheap to evaluate, and Bayesian optimisation when the cost function is expensive to evaluate. We evaluate the effectiveness of these approaches in Section \[sec:comparing-searches\].
Measurement validity {#sec:meas_validity}
--------------------
A threat to the validity of our approach is that measurement errors and performance fluctuations due to external factors may cause us to wrongly conclude that our test harness is responsible for slowing down an SUT. Similar to other approaches that make use of enemy processes [@Fang:2015:MMD:2695583.2687356], we deal with factors related to the hardware, operating system, and compiler. We also make use of a statistical metric, more specifically quantiles, to refine our results.
![Frequency variation due to temperature on the [Pi3]{}.[]{data-label="fig:frequency_variation"}](images/rasp_freq.pdf){width="\linewidth"}
#### Hardware
Hardware mechanisms are generally designed to be transparent to the user but can be unpredictable. We took into account the following factors in our design:
1. *Frequency throttling due to increased temperature.* When the temperature of a processor increases beyond a limit, frequency throttling can kick in. Figure \[fig:frequency\_variation\] shows how frequency is affected by temperature on the [Pi3]{}. The data was gathered using a tool called *vcgencmd*. We want to guard against the risk of attributing an SUT slow-down to interference caused by an enemy process when the slow-down is actually due to the raised temperature of the processor. To mitigate this risk, we measure the temperature at the end of each experiment and discard the result if the temperature has risen above 80 degrees. We empirically found that using this temperature threshold works well on the other devices used.
2. *Alternating between hot and cold caches.* We flush the cache at the beginning of each experiment as data left from the previous experiment might affect the execution time of the next one.
#### Operating system
Modern operating systems are multithreaded and include a range of mechanisms aimed at the efficient execution of a large number of threads. However, this can make the execution of an individual thread less predictable. We use the following techniques to mitigate the effects that the operating system could have on our measurements.
1. *Thread migration.* The operating system might decide to migrate a thread to a different core for various reasons. We avoid this by pinning the SUT and the enemy processes to specific cores using the `taskset` Linux command.
2. *SUT preemption.* To prevent the kernel from stochastically preempting the SUT, which would add the cost of context switching to our measured time, we run the application at the maximum possible priority.
3. *Ensure parallel execution.* The operating system might postpone the startup of some of the enemy processes until after the SUT has started, rendering the experiment practically useless. To evaluate the maximum startup latency on the different platforms that we considered, we used the latency evaluation framework [@RT_tests] and discovered that the maximum startup latency is generally under 1 ms. To ensure that all the enemy processes are running before the SUT starts executing, we wait 10 ms after all the enemy processes have started, which should be a conservative margin on any of our platforms.
4. *Remove unnecessary software.* The interaction between different pieces of software can be difficult to predict. To reduce this risk, we removed all software that is not strictly required for our experiments, such as graphical capabilities, logging software and network managers.
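The pinning and startup-margin steps above can be sketched as follows; `taskset` is the Linux utility mentioned in item 1, while the binary names, the helper names, and the overall launcher are illustrative placeholders:

```python
import subprocess
import time

# Build a command line that pins a program to one core via taskset -c.
def pinned_cmd(core, argv):
    return ["taskset", "-c", str(core)] + list(argv)

# Illustrative launcher: start one enemy per non-SUT core, wait the
# 10 ms startup margin, then time the SUT pinned to core 0.
def run_with_enemies(sut_argv, enemy_argv, n_cores):
    enemies = [subprocess.Popen(pinned_cmd(c, enemy_argv))
               for c in range(1, n_cores)]
    time.sleep(0.01)  # conservative margin so all enemies are running
    start = time.perf_counter()
    subprocess.run(pinned_cmd(0, sut_argv), check=True)
    elapsed = time.perf_counter() - start
    for e in enemies:
        e.kill()
    return elapsed
```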
#### Compiler
The compiler might optimise away part of the code in the enemy process to reduce execution time, decreasing the stress it is intended to put on specific resources. To avoid this, we generate random numbers at runtime for certain elements, e.g. the number written by *memset*. Furthermore, we run the compiler with the `-O0` flag.
#### Statistical analysis
We can never truly get rid of all nondeterministic elements of our environment that can interfere with our measurements. For this reason, we need to measure multiple runs and apply a statistical analysis.
We need a metric that can reliably estimate the worst-case execution time while ignoring unreliable outliers. Oliveira et al. [@deOliveira:2013:WYC:2490301.2451140] show how quantiles can be used to compare the latency and end-to-end times of two different Linux schedulers. We follow a similar approach: we run the same experiment multiple times and calculate a quantile. Since we are interested in the worst-case behaviour, we would naturally want to select a quantile as close to the $100^{th}$ as possible. However, choosing too high a quantile will not properly disqualify outliers.
![Variation of the quantile.[]{data-label="fig:quantile_var"}](images/variance.pdf){width="\linewidth"}
Figure \[fig:quantile\_var\] shows the variance of different quantiles for our development platforms. For each board, we ran the Coremark benchmark alongside some manually tuned enemy processes and measured the execution time 1000 times. We divided these execution times into 40 sets of 250 data points each and computed the quantiles from the $75^{th}$ to the $100^{th}$. Afterwards we calculated the variance across the 40 sets for each development board and plotted the results. The figure shows how selecting too high a quantile results in noisy data. For that reason, we have chosen the $90^{th}$ quantile as our slowdown metric.
Another issue that we take into consideration is the number of measurements required to obtain a reliable value of the quantile. We take 20 measurements with the same configuration and calculate the $90\%$ confidence interval of the quantile. If the range of values within the interval is too wide, we add more measurements and repeat the process until it shrinks below a desired threshold: we calculate the difference between the quantile and each interval endpoint and check that it is no higher than $5\%$. However, it often happens that the measurements never converge to the desired threshold. For this reason, we limit the number of measurements to 200.
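This metric and convergence loop can be sketched as follows. The nearest-rank quantile and the normal-approximation confidence interval for an order statistic are our own illustrative choices, not necessarily those of the implementation; `measure` stands in for one timed run of the SUT:

```python
import math

def quantile(xs, q):
    """Nearest-rank quantile of a non-empty sample, q in (0, 1]."""
    xs = sorted(xs)
    return xs[max(0, math.ceil(q * len(xs)) - 1)]

def measure_until_stable(measure, q=0.9, start=20, cap=200, tol=0.05):
    """Collect runs until a 90% CI for the q-quantile lies within
    tol of the quantile itself, or the cap on runs is reached."""
    xs = [measure() for _ in range(start)]
    z = 1.645  # 90% two-sided normal quantile
    while True:
        xs_sorted = sorted(xs)
        n = len(xs_sorted)
        est = quantile(xs_sorted, q)
        # Order-statistic CI for the q-quantile (normal approximation).
        half = z * math.sqrt(n * q * (1 - q))
        lo = xs_sorted[max(0, math.floor(n * q - half) - 1)]
        hi = xs_sorted[min(n - 1, math.ceil(n * q + half) - 1)]
        if est > 0 and max(est - lo, hi - est) <= tol * est:
            return est, n
        if n >= cap:
            return est, n
        xs.append(measure())
```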
Experimental setup {#sec:experimental_setup}
==================
We now evaluate the effectiveness of our approach by running it on a collection of embedded development boards, and evaluating its effect on a series of industry benchmarks common in time-constrained domains. In Section \[sec:hard\_bench\] we present the utilised hardware and afterwards in Section \[sec:comparing-searches\] we compare the considered search strategies. We conclude this part with Section \[sec:create\_hostile\] where we describe how we developed a hostile environment for each platform.
Hardware and Benchmarks {#sec:hard_bench}
-----------------------
Suite Benchmark name Alias
------- -------------------------------- -------
coremark a
bitmnp-rspeed-puwmod-4K b
bitmnp-rspeed-puwmod-4M c
matrix-tblook-4K d
matrix-tblook-4M e
puwmod-rspeed-4K f
puwmod-rspeed-4M g
rspeed-idctrn-canrdr-4K h
rspeed-idctrn-canrdr-4M i
rspeed-idctrn-iirflt-4K j
rspeed-idctrn-iirflt-4M k
ttsprk-a2time-matrix-4K l
ttsprk-a2time-matrix-4M m
ttsprk-a2time-pntrch-4K n
ttsprk-a2time-pntrch-4M o
ttsprk-a2time-pntrch-aifirf-4K p
ttsprk-a2time-pntrch-aifirf-4M q
ttsprk-a2time-pntrch-idctrn-4K r
ttsprk-a2time-pntrch-idctrn-4M s
ttsprk-a2time-pntrch-tblook-4K t
ttsprk-a2time-pntrch-tblook-4M u
: Benchmarks used to evaluate our approach, along with a short alias to use in figures.
\[table:benchmarks\]
#### Benchmarks
The synthetic victim programs we created are designed to be especially vulnerable to shared resource interference. While these synthetic applications show how we can achieve extreme interference for a specific resource, we are also interested in observing the effects on industry standard benchmarks. These benchmarks are summarised in Table \[table:benchmarks\].
[*EEMBC Coremark*]{} [@Coremark] is a standardised benchmark used for evaluating processors. It is composed of implementations of the following algorithms: list processing (find and sort), matrix manipulation (common matrix operations), state machine (determine if an input stream contains valid numbers), and CRC (cyclic redundancy check).
[*EEMBC Autobench2*]{} [@Autobench] consists of automotive workloads, including: road speed calculation and finite impulse response filters. This benchmark suite is of interest for the real-time industry and has been used in the evaluation of other works in this domain, e.g. [@Fernandez:2012:ASN:2380356.2380389; @7588111].
#### Hardware
We have chosen a range of development boards, containing both ARM and x86 CPUs to evaluate the portability of our approach. Table \[table:experimental\_board\] shows the SoC, the architecture and the number of cores for each of them.
Name Short name SoC Arch Cores
------------------ ------------ ----------- --------- -------
Raspberry Pi 3 B [Pi3]{} BCM2837 ARM A53 4
DragonBoard 410c [410c]{} Adreno306 ARM A53 4
Intel Joule [570X]{} 570x Atom 4
Nano-PC T3 [T3]{} S5P6818 ARM A53 8
BananaPi M3 [M3]{} A837 A7 8
: Development boards used to evaluate our approach.
\[table:experimental\_board\]
The operating system can have an impact on the effectiveness of our approach. To minimise this impact and ensure a fair comparison between the different development boards, we used Debian Linux on all of them, as it was available across all platforms.
Comparing Search Strategies \[sec:comparing-searches\]
------------------------------------------------------
Board Cache Memory Bus
---------- -------------- -------------- --------------------
[Pi3]{} $<$$<$ $<$$\approx$ $<$$<$
[410c]{} $<$$<$ $<$$<$ $\approx$$<$
[570X]{} $\approx$$<$ $<$$<$ $<$$\approx$
[T3]{} $<$$\approx$ $<$$<$ $\approx$$\approx$
[M3]{} $<$$\approx$ $<$$<$ $\approx$$\approx$
: Comparing search strategies when tuning templates against litmus tests. The search strategies are placed in order of effectiveness, with the ordering symbols described in Section \[sec:comparing-searches\]
\[table:search\_strategies\]
We now compare the search strategies described in Section \[sec:searchstrategies\] and determine which one is the most proficient at finding effective parameters for the enemy templates. We tune the enemy templates using their corresponding victim programs as described in Section \[sec:tuningpertest\], with all three search strategies tuning for 2 hours. Since all search strategies have a certain degree of randomness and can sometimes get lucky or unlucky (even Bayesian optimisation starts by randomly sampling its starting points), we perform three runs of each search method for each shared resource. We use the Wilcoxon rank-sum method to test if values from one set are stochastically more likely to be greater than values from another set. This method is non-parametric, i.e. it does not assume any distribution of values, and returns a $p$-value indicating the confidence of the result.
The results of this experiment can be found in Table \[table:search\_strategies\], where we constructed an ordering of the effectiveness of each search method. However, some orderings are more confident than others, i.e. the ones with a low enough $p$-value. When the $p$-value is low (below 0.5) we have higher confidence in the ordering, denoted by the $<$ symbol. However, when the $p$-value is high (above 0.5) we are not as confident in the ordering, denoted by the $\approx$ symbol. In all cases simulated annealing seems to perform the worst. Bayesian optimisation performs well for the memory enemy process and for the bus enemy process. However, the difference is not clear for the cache enemy process, with Bayesian optimisation and random search randomly obtaining the best result.
Bayesian optimisation performs well due to the highly irregular search space induced by the parameters of our enemy templates. It can intelligently sample points of interest quickly and has a reduced chance of getting stuck in a local minimum. It is surprising that simulated annealing ranks last in our comparison. Most likely the search space is highly irregular, and the algorithm often gets stuck in a local minimum. Of course, simulated annealing can be configured to focus more on exploration, but then there would be no reason to use it in place of random search.
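The rank-sum comparison underpinning this ordering can be sketched as follows (normal approximation, no tie correction; for the three runs per method used here, an exact library implementation would be preferable in practice):

```python
import math
from statistics import NormalDist

# Sketch of the Wilcoxon rank-sum test: two-sided p-value for the null
# hypothesis that samples a and b come from the same distribution.
# Assumes no tied values, for simplicity.
def rank_sum_p(a, b):
    pooled = sorted(a + b)
    ranks = {v: i + 1 for i, v in enumerate(pooled)}  # 1-based ranks
    w = sum(ranks[v] for v in a)                      # rank sum of a
    n1, n2 = len(a), len(b)
    mean = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    return 2 * (1 - NormalDist().cdf(abs(z)))
```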
Creating a Hostile Environment\[sec:create\_hostile\]
-----------------------------------------------------
After determining the appropriate search strategy for each of the enemy processes on each board, we search for the most aggressive parameters, i.e. those that maximise interference. We tune each of the enemy templates against its corresponding victim program with the winning strategy for a more extended period (8 hours).
  ---------- ---------- --------- --------- --------
                         Cache     Memory    Bus
  [Pi3]{}    Slowdown   $16.53$   $6.71$    $1.38$
             Method
  [410c]{}   Slowdown   $1.81$    $2.65$    $1.07$
             Method
  [570X]{}   Slowdown   $1.96$    $2.65$    $1.07$
             Method
  [T3]{}     Slowdown   $5.29$    $1.27$    $1.17$
             Method
  [M3]{}     Slowdown   $7.50$    $49.47$   $2.18$
             Method
  ---------- ---------- --------- --------- --------
: Maximum slowdown obtained using the best search strategy found on the corresponding victim program
\[table:maximum\_slowdown\]
Table \[table:maximum\_slowdown\] presents the maximum slowdown obtained alongside the search strategy used. The most significant slowdowns were obtained for the cache or main memory resources. The bus appears less vulnerable to interference than the other two resources.
From Table \[table:experimental\_board\] we see that the [Pi3]{} and the [410c]{} have the same architecture, but implemented in different SoCs. The [Pi3]{} is especially vulnerable to cache interference while the [410c]{} is much less prone to the same type of interference. This can likely be explained by microarchitectural differences between the two boards; however, we are not aware of the exact mechanism that causes this difference, as low-level details are generally not available for most SoCs on the market today.
------ --------- ------- ------ --------- ------- ------ --------- -------
rank score rank score rank score
1 MMM 1.51 1 BBB 1.41 1 **MBM** 1.16
2 MMB 1.48 2 **MBM** 1.34 ... ... ...
3 **MBM** 1.47 ... ... ... ... ... ...
... ... ... ... ... ... ... ... ...
26 BCC 1.19 26 BMC 1.04 26 CBC 1.02
------ --------- ------- ------ --------- ------- ------ --------- -------
: Snippet of rank scores for [570X]{}. The environment *r* is described by a sequence of 3 letters describing what resource is stressed by each core. For example: CMB indicates that the first core stresses the cache, the second stresses the memory while the third stresses the bus.
\[table:ranked\_list\]
We now determine the optimal hostile environment for each of the boards using the methodology described in Section \[sec:tuninghostile\]. An example of this approach can be seen in Table \[table:ranked\_list\], which shows the ranked lists of the possible environments for the [570X]{}. For this platform, the **MBM** configuration is Pareto optimal, where **MBM** denotes the hostile environment in which the first core stresses the [**M**]{}ain memory, the second core stresses the [**B**]{}us and the last one also stresses the [**M**]{}ain memory.
Results {#sec:results}
=======
Now we evaluate the effectiveness of the hostile environment on the benchmarks of Section \[sec:hard\_bench\]. In Section \[sec:evaluate\_hostile\] we measure how the benchmark runtimes are influenced by our hostile environments and then compare our method with previous approaches in Section \[sec:assembly\_stress\].
Evaluating the Hostile Environment {#sec:evaluate_hostile}
----------------------------------
With the Pareto-optimal hostile environment determined for each SoC, we evaluate its effectiveness on the benchmarks described in Section \[sec:hard\_bench\]. Figure \[fig:final\_results\] shows the results of the hostile environment on the benchmarks for each of the boards. To determine if our slowdowns are statistically significant, we calculated the $90\%$ confidence intervals for the benchmarks running in isolation and running in the hostile environment, and then determined whether the two overlap. They overlap in only two cases: benchmark **a** on the [410c]{} and benchmark **f** on the [M3]{}. We can therefore summarise that our approach is effective at slowing down applications in $98\%$ of the cases across the benchmark suites we consider.
Figure \[fig:geometric\_mean\] shows the geometric mean of all benchmark slowdowns for each of the platforms. The [Pi3]{}is the most vulnerable development board in our experiments, while the [410c]{}is the most resilient one. This score does not provide a hard guarantee of the timing predictability of any of the tested boards, but it does offer a means by which we can quickly eliminate unreliable platforms. This experiment is in line with the results from Table \[table:maximum\_slowdown\] where only the victim processes were used, and each resource was stressed individually. More specifically, the large slowdowns in the victim programs can be directly correlated to the large slowdowns in the benchmarks.
Comparing with Hand-Written Assembly {#sec:assembly_stress}
------------------------------------
Previous approaches have relied on hand-crafted assembly enemy processes to assess the maximum slowdown that a given platform can experience. One example of such an approach involves implementing a pointer-chasing scenario [@Radojkovic:2012:EIS:2086696.2086713], in which the enemy process creates a large array of addresses in main memory where each address points to a different location in the same array. The enemy process then navigates the array using assembly code. Utilising assembly code prevents the compiler from performing any optimisations. The irregular access pattern contravenes the locality principle that caches rely on to store information efficiently.
{width="\linewidth"}
Using the code provided by the authors of [@Radojkovic:2012:EIS:2086696.2086713], we evaluated our hostile environment against this previous approach. Since the code is written in x86 assembly language, we were only able to execute it on the [570X]{}, the only x86 development board at our disposal. We ran the benchmarks alongside the hand-crafted assembly enemy processes and measured the slowdown. Figure \[fig:assembly\_comparison\] shows the slowdowns observed using our hostile environment and the hand-crafted assembly enemy processes with two different array sizes (4KB and 4MB). We calculated the $90\%$ confidence intervals for the obtained results to evaluate whether the differences between the two approaches are statistically significant. The confidence intervals proved to be relatively large, and there was overlap between the two methods. In 14 out of the 21 benchmarks, our method achieves a higher slowdown; however, only 11 of these are statistically significant. In the other 7 cases, the assembly approach can reach a higher slowdown, but the confidence intervals always overlap.
Our experiments show a statistically significant higher slowdown in 52% of the applications; in the remaining 48% of the cases, there was no statistically significant difference. While our method does not always outperform the hand-written tests, it has the advantage of being portable, i.e. it does not require crafting assembly code for each specific platform.
Related work
============
Applications with functionality similar to our enemy processes have been used before in the literature. They have been referred to as *resource stressing benchmarks* [@Radojkovic:2012:EIS:2086696.2086713], *resource stressing kernels* [@7588111] or *synthetic contenders* [@e49f8f7632ac4b36a20dd2965ea01d1f]. Radojkovic et al. [@Radojkovic:2012:EIS:2086696.2086713] were the first to utilise such techniques, deploying assembly code to measure multi-core interference on real application workloads. They propose a framework for quantifying the maximum slowdown obtained during simultaneous execution by stressing a single shared resource at a time. Their work examines several Intel processors, exploring the extent to which the interference from resource stressing benchmarks can slow down real-time software. Nowotsch et al. [@6214768] perform a similar experiment on a multi-core PowerPC-based processor platform and focus specifically on the memory system. The platform allows for different memory configurations and provides several methods for interference mitigation. Regardless of configuration, SUT slowdowns are still observed when executing the resource stressing kernels on distinct cores. Fernandez et al. [@Fernandez:2012:ASN:2380356.2380389] evaluate a multi-core LEON-based processor and run experiments with both a Linux and a real-time operating system. Unsurprisingly, the slowdown is mitigated on the real-time operating system, but not eliminated.
Fernandez et al. argue that most resource stressing benchmarks may fail at producing *safe* bounds [@7588111]. Under heavy contention, arbitration policies of shared resources such as round robin and first in first out produce a so-called “synchrony effect” that causes the SUT to suffer a delay that is not as severe as the potential worst-case. They propose a method to improve the bound by varying the injection time between requests to the shared hardware resources. Approaches such as [@7827636; @10.1007/978-3-319-60588-3_7] rely on randomisation of the source code to produce different memory mapping and therefore gather a large set of possible execution times. They utilise a statistical approach called “extreme value theory” and can provide multiple worst-case execution times alongside a confidence factor.
Tuning strategies have been used to optimise different computational aspects, with Ansel et al. [@ansel:pact:2014] showing how such an approach can be used for a variety of domain-specific issues. Wegener et al. [@Wegener1997] use genetic algorithms to find the inputs that cause the longest or shortest execution time. To do so, they formulate the search for such inputs as an optimisation problem. Law et al. [@Law2016] use simulated annealing on a single-core processor to maximise code coverage and thereby obtain an estimate of the WCET.
Griffin et al. [@e49f8f7632ac4b36a20dd2965ea01d1f] take a different approach and train a deep linear neural network to learn the relationship between interference and the effect of the SUT execution time. This approach is used to calculate an interference multiplier that can be applied to a previously calculated WCET without interference.
Previous approaches are limited by the need for developers to tune each resource-stressing benchmark for each specific SoC, and they cannot always detect hidden interference patterns that are specific to the underlying microarchitecture of the system. In contrast, our approach assumes no knowledge of the architecture or microarchitecture of the system and can detect hidden interference patterns automatically.
Conclusions
===========
We have devised a portable auto-tuning method for determining interference across a wide range of platforms. Our approach is based on configurable enemy processes and does not rely on advance knowledge of the microarchitectural details of a given platform. To determine the most effective parameters, we compared three different search strategies and identified the best candidates.
We evaluated this method across a wide range of processors, both ARM and x86, using industry-standard benchmarks. Our approach is capable of causing interference in $98\%$ of the cases. We compared the slowdowns caused by our auto-tuned hostile environments with hand-tuned hostile environments from prior work and showed that the slowdowns are comparable; in some cases our method even achieves statistically significantly higher slowdowns.
[^1]: This slowdown is alarming, but we have rigorously validated this result and see similar values for the [Pi3]{} throughout this work.
[^2]: https://github.com/mc-imperial/multicore-test-harness
---
abstract: 'We present the analysis of CP-violating effects in non-diagonal chargino pair production in $e^+e^-$ collisions. These effects appear only at the one-loop level. We show that CP-odd asymmetries in chargino production are sensitive to the phases of $\mu$ and $A_t$ parameters and can be of the order of a few %.'
address: |
Institute of Theoretical Physics, University of Warsaw\
Hoża 69, 00-681 Warsaw, Poland
author:
- Krzysztof Rolbiecki
- 'Jan Kalinowski[^1]'
title: |
CP VIOLATION IN CHARGINO PRODUCTION\
IN $e^+e^-$ COLLISIONS[^2]
---
Introduction
============
Supersymmetry (SUSY) is one of the most promising extensions of the Standard Model (SM) [@MSSM] since, among other things, it solves the hierarchy problem and provides a natural candidate for dark matter. It also introduces many new sources of CP violation that may be needed to explain the baryon asymmetry of the universe. These phases, if large, $\mathcal{O}(1)$, can cause problems with satisfying experimental bounds on lepton, neutron and mercury EDMs [@susycp]. This can be overcome by pushing sfermion spectra above the TeV scale or arranging internal cancellations [@Ibrahim:2007fb].
The most unambiguous way to detect the presence of CP-violating phases would be to study CP-odd observables measurable at future accelerators — the LHC and the ILC. Such observables in the chargino sector are, for example, triple products of momenta of the initial electrons, charginos and their decay products [@Kittel:2004kd]. However, they require polarized initial electron/positron beams or a measurement of the chargino polarization.
In this talk we present another possibility of detecting CP-violating phases in the chargino sector. As was recently pointed out [@Osland:2007xw; @my], in non-diagonal chargino pair production $$e^+e^-\to\tilde{\chi}_1^\pm\tilde{\chi}_2^\mp \label{produkcja}$$ a CP-odd observable can be constructed beyond tree level from the production cross section without polarized $e^+e^-$ beams or a measurement of the chargino polarization. We show here the results of the full one-loop calculation of this effect. In the reaction (\[produkcja\]) CP violation can be induced by the complex higgsino mass parameter $\mu$ or the complex trilinear coupling in the top squark sector, $A_t$. Since these asymmetries can reach a few percent, they can be detected in simple event-counting experiments at future colliders.
CP-odd asymmetry at one loop
============================
In $e^+e^-$ collisions charginos are produced at tree level via $s$-channel $\gamma,Z$ exchange and $t$-channel $\tilde{\nu}_e$ exchange. As was shown in [@Choi:1998ei], no CP-violation effects can be observed at tree level for the production of diagonal $\tilde{\chi}_i^+ \tilde{\chi}_i^-$ and non-diagonal $\tilde{\chi}_i^+ \tilde{\chi}_j^-$ chargino pairs without a measurement of the polarization of the final charginos. However, the situation is different for non-diagonal production if we go beyond the tree-level approximation.
Radiative corrections to the chargino pair production include the following generic one-loop Feynman diagrams: the virtual vertex corrections, the self-energy corrections to the $\tilde{\nu}$, $Z$ and $\gamma$ propagators, and the box diagrams contributions. We also have to include corrections on external chargino legs.
One-loop corrected matrix element squared is given by $$\begin{aligned}
|\mathcal{M}_{\mathrm{loop}}|^2 = |\mathcal{M}_{\mathrm{tree}}|^2 +
2 \mathrm{Re}(\mathcal{M}_{\mathrm{tree}}^*
\mathcal{M}_{\mathrm{loop}} )\, .\end{aligned}$$ Accordingly, the one-loop CP asymmetry for the non-diagonal chargino pair is defined as $$\begin{aligned}
&& A_{12}=\frac{\sigma^{12}_{\rm loop}-
\sigma^{21}_{\rm loop}}{\sigma^{12}_{\rm tree}+
\sigma^{21}_{\rm tree}}\, ,
\label{CPasym}\end{aligned}$$ where $\sigma^{12}$, $\sigma^{21}$ denote the cross sections for the production of $\tilde{\chi}_1^+ \tilde{\chi}_2^-$ and $\tilde{\chi}_2^+ \tilde{\chi}_1^-$, respectively. Since the asymmetry vanishes at tree level, it has to be finite at one loop; hence no renormalization is needed.
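Numerically, Eq. (\[CPasym\]) is a simple event-counting ratio. A minimal sketch with purely illustrative cross-section values (not results of this analysis):

```python
def cp_asymmetry(sigma12_loop, sigma21_loop, sigma12_tree, sigma21_tree):
    """CP-odd asymmetry A_12 built from the cross sections for
    chi_1^+ chi_2^- ("12") and chi_2^+ chi_1^- ("21") production."""
    return (sigma12_loop - sigma21_loop) / (sigma12_tree + sigma21_tree)

# Illustrative numbers only: a 2 fb loop-level difference on a
# 100 fb tree-level total corresponds to a 2% asymmetry.
a12 = cp_asymmetry(51.0, 49.0, 50.0, 50.0)  # -> 0.02
```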
![Box diagram with selectron exchange and its contribution to the asymmetry $A_{12}$ vs. center of mass energy. The selectron mass is 403 GeV.\[fig:thrs\_scan\]](Rolbiecki_fig1a.eps "fig:") ![Box diagram with selectron exchange and its contribution to the asymmetry $A_{12}$ vs. center of mass energy. The selectron mass is 403 GeV.\[fig:thrs\_scan\]](Rolbiecki_fig1b.eps "fig:")
The CP asymmetry, Eq. (\[CPasym\]), arises from the interference between complex couplings, which in our case originate from the complex mixing matrices of charginos or stops, and a non-trivial imaginary part of the Feynman diagrams, the absorptive part. Such contributions appear when some of the intermediate-state particles in loop diagrams go on-shell. This is illustrated in Fig. \[fig:thrs\_scan\], where the contribution to $A_{12}$ from double selectron exchange appears at the threshold for selectron pair production at $\sqrt{s}=806$ GeV.
Numerical results
=================
For the numerical results in this section we use two parameter sets (A) and (B) with gaugino/higgsino mass parameters defined as follows at the low scale: $$\begin{aligned}
&\mbox{A:}&\quad |M_1| = 100\mbox{ GeV},\quad M_2 = 200\mbox{ GeV},
\quad|\mu| = 400\mbox{ GeV},\nonumber \\
&\mbox{B:}&\quad |M_1| = 250\mbox{ GeV}, \quad M_2 = 200\mbox{ GeV},
\quad |\mu| = 300\mbox{ GeV},\nonumber\end{aligned}$$ and with $\tan\beta=10$. This gives the following chargino masses: $$\begin{aligned}
&\mbox{A:}&\quad m_{\tilde{\chi}^-_1} = 186.7 \mbox{ GeV},\quad
m_{\tilde{\chi}^-_2} = 421.8 \mbox{ GeV},\\
&\mbox{B:}&\quad m_{\tilde{\chi}^-_1} = 175.6 \mbox{ GeV},\quad
m_{\tilde{\chi}^-_2} = 334.5 \mbox{ GeV}.\end{aligned}$$ For the sfermion mass parameters in scenario (A) we assume $$\begin{aligned}
&&m_{\tilde{q}}\equiv M_{\tilde{Q}_{1,2}}=M_{\tilde{U}_{1,2}}=M_{\tilde{D}_{1,2}}=450\mbox{ GeV},\nonumber\\
&&M_{\tilde{Q}}\equiv M_{\tilde{Q}_{3}}=M_{\tilde{U}_{3}}=M_{\tilde{D}_{3}}=300\mbox{ GeV},\\
&&m_{\tilde{l}}\equiv
M_{\tilde{L}_{1,2,3}}=M_{\tilde{E}_{1,2,3}}=150\mbox{ GeV},\end{aligned}$$ and for the sfermion trilinear coupling: $|A_{t}|=-A_{b}=-A_{\tau}=A=400\mbox{ GeV}$. Scenario (B) is for comparison with Ref. [@Osland:2007xw] for which we take $$M_{S}= M_{\tilde{Q}}= M_{\tilde{U}} =M_{\tilde{D}} =M_{\tilde{L}}=
M_{\tilde{E}}=10\mbox{ TeV}.$$
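As a cross-check, the chargino masses quoted above follow from the singular values of the tree-level chargino mass matrix. A short numerical sketch (assuming $m_W = 80.4$ GeV and vanishing phases, i.e. real parameters) approximately reproduces the spectra of scenarios (A) and (B):

```python
import numpy as np

M_W = 80.4  # W boson mass in GeV (assumed value)

def chargino_masses(M2, mu, tan_beta):
    """Tree-level chargino masses: the singular values of the
    chargino mass matrix X in the (wino, higgsino) basis."""
    beta = np.arctan(tan_beta)
    X = np.array([[M2, np.sqrt(2.0) * M_W * np.sin(beta)],
                  [np.sqrt(2.0) * M_W * np.cos(beta), mu]])
    # svd returns singular values in descending order; sort ascending
    return np.sort(np.linalg.svd(X, compute_uv=False))

# Scenario A: M2 = 200 GeV, |mu| = 400 GeV, tan(beta) = 10
mA1, mA2 = chargino_masses(200.0, 400.0, 10.0)  # ~ 186.5, 422.1 GeV
# Scenario B: M2 = 200 GeV, |mu| = 300 GeV, tan(beta) = 10
mB1, mB2 = chargino_masses(200.0, 300.0, 10.0)  # ~ 175.3, 335.0 GeV
```

The small differences with respect to the quoted values reflect the assumed $m_W$ and rounding in the inputs.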
In our numerical analysis we consider the dependence of the asymmetry (\[CPasym\]) on the phase of the higgsino mass parameter $\mu = |\mu| e^{i \Phi_\mu}$ and of the soft trilinear top squark coupling $A_t = |A_t| e^{i \Phi_t}$. In Fig. \[fig2\] we show the CP asymmetry in scenario (A) as a function of the phase of $\mu$ and of $A_t$, in the left and middle panels, respectively. Contributions from the box, vertex and self-energy corrections are plotted in addition to the full result. In this scenario the asymmetry can reach $\sim 1\%$ for the $\mu$ parameter and $\sim 6\%$ for $A_t$. We note that for the asymmetry due to a non-zero phase of the higgsino mass parameter there are significant cancellations among the various contributions. In addition, the right panel of Fig. \[fig2\] shows the dependence of the asymmetry due to $A_t$ on $\tan\beta$.
For the asymmetry generated by the $\mu$ parameter, all possible one-loop diagrams containing an absorptive part contribute. The situation is different for the phase of the trilinear coupling $A_t$, for which the chargino mixing matrices remain real. In this case only vertex and self-energy diagrams containing stop lines contribute to the asymmetry [@my].
![Asymmetry $A_{12}$ in scenario (A) as a function of the phase of $\mu$ parameter (left), the phase of $A_t$ (middle), and as a function of $\tan\beta$ with $\Phi_t=\pi/3$ (right). Different lines denote full asymmetry (full line) and contributions from box (dashed), vertex (dotted) and self energy (dash-dotted) diagrams. \[fig2\]](Rolbiecki_fig2a.eps "fig:") ![Asymmetry $A_{12}$ in scenario (A) as a function of the phase of $\mu$ parameter (left), the phase of $A_t$ (middle), and as a function of $\tan\beta$ with $\Phi_t=\pi/3$ (right). Different lines denote full asymmetry (full line) and contributions from box (dashed), vertex (dotted) and self energy (dash-dotted) diagrams. \[fig2\]](Rolbiecki_fig2b.eps "fig:") ![Asymmetry $A_{12}$ in scenario (A) as a function of the phase of $\mu$ parameter (left), the phase of $A_t$ (middle), and as a function of $\tan\beta$ with $\Phi_t=\pi/3$ (right). Different lines denote full asymmetry (full line) and contributions from box (dashed), vertex (dotted) and self energy (dash-dotted) diagrams. \[fig2\]](Rolbiecki_fig2c.eps "fig:")
We also present results for the heavy sfermion scenario (B). This allows a comparison with [@Osland:2007xw], where only box diagrams with $\gamma$, $W$, $Z$ exchanges were calculated, neglecting all sfermion contributions. As can be seen in the left panel of Fig. \[fig3\], these gauge-box diagrams constitute the main part of the asymmetry $A_{12}$; however, this is due to a partial cancellation of the vertex and self-energy contributions. For lower values of the universal scalar mass $M_S$ the discrepancy between the full result and the approximate result of [@Osland:2007xw] increases significantly. This is illustrated in the middle and right panels of Fig. \[fig3\], where we show two paths along which the full result approaches the gauge-box approximation as a function of $M_S$. As can be seen, these paths depend strongly on the center-of-mass energy.
![Left: Asymmetry $A_{12}$ in scenario (B) as a function of the phase $\Phi_\mu$. Different lines denote full asymmetry (full line) and contributions from box (dashed), vertex (dotted) and self energy (dash-dotted) diagrams. Middle and Right: Asymmetry $A_{12}$ as a function of the universal scalar mass $M_S$ with $\Phi_\mu=\pi/2$ at different cms. The full lines denote full result and dashed lines show only the box contributions after neglecting diagrams with slepton exchange.\[fig3\]](Rolbiecki_fig3a.eps "fig:") ![Left: Asymmetry $A_{12}$ in scenario (B) as a function of the phase $\Phi_\mu$. Different lines denote full asymmetry (full line) and contributions from box (dashed), vertex (dotted) and self energy (dash-dotted) diagrams. Middle and Right: Asymmetry $A_{12}$ as a function of the universal scalar mass $M_S$ with $\Phi_\mu=\pi/2$ at different cms. The full lines denote full result and dashed lines show only the box contributions after neglecting diagrams with slepton exchange.\[fig3\]](Rolbiecki_fig3b.eps "fig:") ![Left: Asymmetry $A_{12}$ in scenario (B) as a function of the phase $\Phi_\mu$. Different lines denote full asymmetry (full line) and contributions from box (dashed), vertex (dotted) and self energy (dash-dotted) diagrams. Middle and Right: Asymmetry $A_{12}$ as a function of the universal scalar mass $M_S$ with $\Phi_\mu=\pi/2$ at different cms. The full lines denote full result and dashed lines show only the box contributions after neglecting diagrams with slepton exchange.\[fig3\]](Rolbiecki_fig3c.eps "fig:")
Summary
=======
It has been shown that a CP-odd asymmetry can be generated in non-diagonal chargino pair production with unpolarized electron/positron beams. The asymmetry is a pure one-loop effect and is generated by the interference between complex couplings and the absorptive parts of one-loop integrals. The effect is significant for the phases of the higgsino mass parameter $\mu$ and the trilinear coupling in the stop sector, $A_t$. At a future linear collider it may give information about CP violation in the chargino and stop sectors.
H. P. Nilles, Phys. Rept. [**110**]{} (1984) 1; H. E. Haber and G. L. Kane, Phys. Rept. [**117**]{} (1985) 75. V. Barger, T. Falk, T. Han, J. Jiang, T. Li and T. Plehn, Phys. Rev. D [**64**]{} (2001) 056007 \[arXiv:hep-ph/0101106\]. T. Ibrahim and P. Nath, arXiv:0705.2008 \[hep-ph\] and references therein. O. Kittel, A. Bartl, H. Fraas and W. Majerotto, Phys. Rev. D [**70**]{} (2004) 115005 \[arXiv:hep-ph/0410054\]. P. Osland and A. Vereshagin, Phys. Rev. D [**76**]{} (2007) 036001 \[arXiv:0704.2165 \[hep-ph\]\]. K. Rolbiecki and J. Kalinowski, arXiv:0709.2994 \[hep-ph\]. S. Y. Choi, A. Djouadi, H. S. Song and P. M. Zerwas, Eur. Phys. J. C [**8**]{} (1999) 669 \[arXiv:hep-ph/9812236\].
[^1]: The authors are supported by the Polish Ministry of Science and Higher Education Grant No. 1 P03B 108 30 and by the EU Network MRTN-CT-2006-035505 “Tools and Precision Calculations for Physics Discoveries at Colliders".
[^2]: Presented by K. Rolbiecki at the XXXI International Conference of Theoretical Physics, “Matter to the Deepest", Ustroń, Poland, September 5–11, 2007.
---
abstract: |
*K* band observations of the galaxy populations of three high redshift ($z=0.8$–$1.0$), X-ray selected, massive clusters are presented. The observations reach a depth of $K \simeq 21.5$, corresponding to $K^{*}+3.5$ mag. The evolution of the galaxy properties is discussed in terms of their *K* band luminosity functions and the *K* band Hubble diagram of brightest cluster galaxies.
The bulk of the galaxies, as characterised by the parameter $K^{*}$ from the Schechter (1976) function, are found to be consistent with passive evolution with a redshift of formation of $z_{f}\approx 1.5$–2. This is in agreement with observations of other high redshift clusters, but in disagreement with field galaxies at similar redshifts. The shape of the luminosity function at high redshift, after correcting for passive evolution, is not significantly different from that of the Coma cluster, again consistent with passive evolution.
author:
- |
S. C. Ellis and L. R. Jones,\
The University of Birmingham, UK
---
The *K* Band Luminosity Function\
of High Redshift Clusters
=================================
Introduction
------------
We present a study of three of the most massive ($\sim 10^{15}$M$\odot$), high redshift clusters known (Maughan et al. 2003a, Maughan et al. 2003b). They are thus ideal probes of galaxy evolution. In a hierarchical model galaxies are predicted to first form in regions with the highest overdensities which merge over time with other systems to become massive clusters. Thus such massive clusters at high redshift are relatively rare and we have an unusual opportunity to study the galaxy populations of rich, distant clusters and compare results with local rich clusters such as Coma. The high redshift of the clusters should make any evolution in the galaxy populations easier to observe.
The evolution is probed by means of their $K$ band galaxy luminosity functions. The $K$ band magnitude of a galaxy is a good indicator of its stellar mass, being relatively insensitive to star formation; furthermore, $k$-corrections are small in this band. Thus the evolution of the $K$ band luminosity function, parametrized by $K^{*}$ of the Schechter (1976) function, traces the epoch of assembly of the galaxies. In the monolithic collapse picture of structure formation all galaxies were formed at high redshift and have evolved only passively since, with galaxies at high redshift being intrinsically brighter than their present-day counterparts due to their younger stellar populations. In the hierarchical formation picture the number of bright galaxies grows with time through a series of mergers, and thus the shape of the luminosity function is altered with the passing of time.
The X-ray data show that two of the clusters (ClJ1226, $z=0.89$ and ClJ1415, $z=1.03$) appear relaxed, with regular X-ray contours (Maughan et al. 2003b, Ebeling et al. 2001), while one (ClJ0152, $z=0.83$) appears to be in a state of merging (Maughan et al. 2003a, Ebeling et al. 2000); thus we also have a small selection of different environments. The cluster X-ray properties are consistent with little or no evolution when compared to local clusters (Jones et al. 2003, these proceedings). All three clusters were discovered in the Wide Angle [Rosat]{} Pointed Survey (WARPS, Scharf et al. 1997).
Results
-------
The luminosity functions of all three clusters are shown in the first three panels of figure \[fig:klfs\]. Also shown are the best fitting Schechter functions with $\alpha$ held constant at $\alpha=-0.9$. Schechter functions were fit with the brightest cluster galaxies included and excluded. The results for all clusters are given in tables \[tab:schechter\_fits\] and \[tab:schechter\_fits\_bcg\].
The combined luminosity function, excluding BCGs, for our 3 high redshift clusters is shown in the bottom right panel of figure \[fig:klfs\]. This was fit with $\alpha$ as a free parameter yielding a value of $\alpha=-0.54 \pm 0.3$. Also shown, by the open squares, is the $K$ band luminosity function for the Coma cluster computed from the $H$ band luminosity function of de Propris et al. (1998) using their given value of $H-K=0.22$. The shapes of the luminosity functions are very similar and the difference in absolute magnitude can be accounted for by a passively evolving population with $z_{\rm{f}}=2$ as shown by the dashed line.
A Schechter function was also fit to the combined luminosity function including BCGs. Fits were made with $\alpha=-0.9$ and with $\alpha$ free. The results for all fits are given in tables \[tab:schechter\_fits\] and \[tab:schechter\_fits\_bcg\].
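The fits described above can be sketched as follows. The galaxy counts here are synthetic numbers generated around the quoted best-fit parameters, purely to illustrate the fitting procedure (this is not the actual cluster data):

```python
import numpy as np
from scipy.optimize import curve_fit

def schechter_mag(m, phi_star, m_star, alpha):
    """Schechter (1976) luminosity function in magnitude form."""
    x = 10.0 ** (0.4 * (m_star - m))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# Synthetic counts around K* = 17.8, alpha = -0.9 with small noise.
mags = np.arange(16.0, 21.5, 0.5)
rng = np.random.default_rng(0)
counts = schechter_mag(mags, 45.0, 17.8, -0.9) + rng.normal(0.0, 1.0, mags.size)

# Fit with alpha held fixed at -0.9, as done for the individual clusters.
popt, _ = curve_fit(lambda m, p, ms: schechter_mag(m, p, ms, -0.9),
                    mags, counts, p0=[40.0, 18.0])
phi_fit, kstar_fit = popt
```

Fitting with $\alpha$ free proceeds identically, with the third parameter left unfixed in the model passed to the fitter.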
![K Band luminosity function for ClJ1226, ClJ1415, ClJ0152 and the combined luminosity function of all three clusters compared to that of Coma.[]{data-label="fig:klfs"}](1226klf.eps)
![K Band luminosity function for ClJ1226, ClJ1415, ClJ0152 and the combined luminosity function of all three clusters compared to that of Coma.[]{data-label="fig:klfs"}](1415klf.eps)
![K Band luminosity function for ClJ1226, ClJ1415, ClJ0152 and the combined luminosity function of all three clusters compared to that of Coma.[]{data-label="fig:klfs"}](0152klf.eps)
![K Band luminosity function for ClJ1226, ClJ1415, ClJ0152 and the combined luminosity function of all three clusters compared to that of Coma.[]{data-label="fig:klfs"}](klfcomb2.eps)
$K^{*}$ (lower limit, upper limit) $\phi^{*}$ Prob($\chi^{2}$)
---------------------------------------- ------------------------------------ ------------ ------------------
Cl0152 17.59 (17.26,17.90) 47.89 0.56
Cl1226 17.79 (17.52,18.03) 57.37 $<10^{-4}$
Cl1415 17.96 (17.63,18.26) 42.53 0.80
Combined ($\alpha_{{\rm fixed}}=-0.9$) 17.81 (17.51,18.07) 44.69 $<10^{-4}$
Combined ($\alpha_{{\rm free}}=-0.94$) 17.83 (17.1,18.3) 45.34 $<10^{-4}$
: Best fitting parameters of a Schechter function for each cluster including BCGs.
\[tab:schechter\_fits\]
$K^{*}$ (lower limit, upper limit) $\phi^{*}$ Prob($\chi^{2}$)
---------------------------------------- ------------------------------------ ------------ ------------------
Cl0152 17.76 (17.43,18.07) 50.31 0.67
Cl1226 18.04 (17.79,18.28) 61.01 0.51
Cl1415 18.08 (17.75,18.38) 43.80 0.63
Combined ($\alpha_{{\rm fixed}}=-0.9$) 17.98 (17.68,18.26) 45.59 0.38
Combined ($\alpha_{{\rm free}}=-0.54$) 18.53 (18.0,18.9) 79.66 0.52
: Best fitting parameters of a Schechter function for each cluster excluding BCGs.
\[tab:schechter\_fits\_bcg\]
The evolution of $K^{*}$ is shown in figure \[fig:klf\_evol\] along with the high $L_{\rm{X}}$ data from de Propris et al. (1999). Models were computed for no evolution and for passively evolving stellar populations with redshifts of formation $z_{\rm{f}}=1.5$, 2 and 5. It can be seen that the bulk of the galaxies were significantly brighter in the past than the no-evolution prediction. This is consistent with a passively evolving population with $z_{\rm{f}} \approx 1.5$–2.
![The evolution of $K^{*}$. The circles are data from this paper. The squares are from de Propris et al. (1999), open symbols being low $L_{\mathrm{X}}$ systems and closed symbols being high $L_{\mathrm{X}}$ systems.[]{data-label="fig:klf_evol"}](kstar.eps)
Conclusions
-----------
The evolution of the galaxy populations of three high redshift clusters of galaxies has been studied. The bulk evolution of the galaxies, as characterised by $K^{*}$, is found to be consistent with passive evolution with a redshift of formation $z_{\mathrm{f}}\sim\ 1.5$–2. Further evidence for passive evolution is seen in the similarity of the shape of the high-redshift luminosity function with that of Coma.
Purely passive evolution of early-type galaxies is consistent with several other studies including the evolution of the $K$ band luminosity function (de Propris et al. 1999), evolution of mass-light ratios (van Dokkum et al. 1998), and studies of the scatter of the colour-magnitude relation (see eg. Ellis et al. 1997, Stanford et al. 1998).
The lack of evolution observed in $K$ band luminosity function is in contrast to the conclusions of Kauffmann and Charlot (1998) and Kauffmann et al. (1996) for galaxies in the field.
When discussing formation it is important to distinguish between the epoch at which the stars in the galaxies were formed and the epoch at which the galaxies were assembled. If merging were a dissipationless process, then a merger could trigger no extra star formation, and thus the stars within a galaxy could be older than the epoch of its assembly. A study of the cluster of galaxies MS 1054-03 at $z=0.83$ is presented by van Dokkum et al. (1999), in which a high fraction of merging red galaxies is observed. Very little star formation is seen in the merging galaxies, constituting evidence that the galaxies are in fact somewhat younger than the stars that reside within them. Is such merging reflected in the evolution of the luminosity function? The $K$ band luminosity of a galaxy is very nearly independent of star formation, but reflects the mass of the old stars within the galaxy. Thus $K$ magnitudes are a good measure of the stellar mass of a galaxy. It is expected from semi-analytic models (Kauffmann et al. 1993, Baugh et al. 1996) that the typical redshift of assembly of an elliptical will be higher in a rich cluster than in the field, because structures collapse earlier in denser environments. This is particularly relevant here since the systems investigated are all very massive. The semi-analytic models of Diaferio et al. (2001) predict very little evolution in the numbers of massive cluster galaxies since $z=0.8$ in a hierarchical model with dissipationless merging. A high redshift of assembly of massive galaxies is in qualitative agreement with our results. We conclude that the luminosity evolution of bright galaxies in massive clusters is consistent with pure passive evolution, but note that this may also be consistent with hierarchical models if most merging takes place at high redshifts.
Baugh C. M., Cole S., Frenk C. S., 1996, MNRAS, 283, 1361
De Propris R., Eisenhardt P. A., Stanford S. A., Dickinson M., 1998, ApJ, 503, L45
De Propris R., Stanford S. A., Eisenhardt P. A., Dickinson M., Elston R., 1999, AJ, 118, 719
Diaferio, A., Kauffmann, G., Balogh, M. L., White, S. D. M., Schade, D., Ellingson, E., 2001, MNRAS, 323, 999
Ebeling H., Jones L. R., Fairley B. W., Perlman E., Scharf C., Horner D., 2001, ApJ, 548, L23
Ebeling H., Jones L. R., Perlman E., Scharf C., Horner D., Wegner G., Malkan M., Fairley B., Mullis C. R., 2000, ApJ, 534, 133
Ellis R. S., Smail I., Dressler A., Couch W. J., Oemler A. J., Butcher H., Sharples R. M., 1997, ApJ, 483, 582
Jones L. R., Maughan B. J., Ebeling H., Scharf C., Perlman E., Lumb D., Gondoin P., Mason K. O., Cordova F., Priedhorsky W. C., these proceedings
Kauffmann G., White S. D. M., Guiderdoni B., 1993, MNRAS, 264, 201
Kauffmann G., Charlot S., White S. D. M., 1996, MNRAS, 283, L117
Kauffmann G., Charlot S., 1998, MNRAS, 297, L23
Maughan B. J., Jones L. R., Ebeling H., Perlman E., Rosati P., Frye C., Mullis C. R., 2003a, ApJ, in press, astro-ph/0301218
Maughan B. J., Jones L. R., et al., 2003b, in preparation
Scharf C., Jones L. R., Ebeling H., Perlman E., Malkan M., Wegner G., 1997, ApJ, 477, 79
Schechter P., 1976, ApJ, 203, 297
Stanford S. A., Eisenhardt P. R., Dickinson M., 1998, ApJ, 492, 461
van Dokkum P. G., Franx M., Kelson D. D., Illingworth G. D., 1998, ApJ, 504, L17
van Dokkum P. G., Franx M., Fabricant D., Kelson D., Illingworth G., 1999, ApJ, 520, L95
---
abstract: |
The role of power corrections and higher Fock-state contributions to exclusive charmonium decays will be discussed. It will be argued that the $\jp$ ($\psi'$) decays into baryon-antibaryon pairs are dominated by the valence Fock-state contributions. $P$-wave charmonium decays, on the other hand, receive strong contributions from the ${\rm{c}}{\overline{\rm{c}}}{\rm{g}}$ Fock states since the valence Fock-state contributions are suppressed in these reactions. Numerical results for $\jp (\psi') \to \BbB$ decay widths will be also presented and compared to data.\
Contribution to the QCD 97 conference, Montpellier (1997)
address: |
Fachbereich Physik, Universität Wuppertal\
Gaußstrasse 20, D-42097 Wuppertal, Germany
author:
- 'P. Kroll[^1]'
title: '[**Exclusive charmonium decays**]{} WU B 97-25'
---
INTRODUCTION
============
Exclusive charmonium decays have been investigated within perturbative QCD by many authors, e.g. [@dun80]. It has been argued that the dominant dynamical mechanism is ${\rm{c}}{\overline{\rm{c}}}$ annihilation into the minimal number of gluons allowed by colour conservation and charge conjugation, and subsequent creation of light quark-antiquark pairs forming the final state hadrons (${\rm{c}}{\overline{\rm{c}}}\to n{\rm{g}}^* \to m({\rm{q}}{\overline{\rm{q}}})$). The dominance of annihilation through gluons is most strikingly reflected in the narrow widths of charmonium decays into hadronic channels in a mass region where strong decays typically have widths of hundreds of MeV. Since the ${\rm{c}}$ and the ${\overline{\rm{c}}}$ quarks only annihilate if their mutual distance is less than about $1/{m_{{\rm{c}}}}$ (where ${m_{{\rm{c}}}}$ is the ${\rm{c}}$-quark mass) and since the average virtuality of the gluons is of the order of $1 - 2\, \gev^2$ one may indeed expect perturbative QCD to be at work although corrections are presumably substantial (${m_{{\rm{c}}}}$ is not very large).\
In hard exclusive reactions higher Fock-state contributions are usually suppressed by inverse powers of the hard scale, $Q$, appearing in the process ($Q=2{m_{{\rm{c}}}}$ for exclusive charmonium decays), as compared to the valence Fock-state contribution. Hence, higher Fock-state contributions are expected to be negligible in most cases. Charmonium decays are particularly interesting because the valence Fock-state contributions are often suppressed for one or the other reason. In such a case higher Fock-state contributions or other peculiar contributions such as power corrections or small components of the hadronic wave functions may become important. In the following I present a few examples of charmonium decays with suppressed valence Fock-state contributions:
1. [*Hadronic helicity non-conserving processes*]{} (e.g. $\jp\to\rho\pi$, $\eta_{{\rm{c}}}\to \BbB$, $\chi_{{\rm{c}}0}\to\BbB$). The standard method of calculating the valence Fock-state contributions leads to vanishing helicity non-conserving amplitudes. There are many attempts to understand the helicity non-conserving processes (e.g. intrinsic charm of the $\rho$ meson [@bro97]; diquarks in baryons [@kro:93a]) but a satisfactory explanation of all these processes is still lacking.
2. [*$G$ parity violating $\jp$ decays*]{} (e.g. $\pi^+\pi^-$, $\omega\pi^0$, $\rho\eta$). These reactions can proceed through the electromagnetic elementary process ${\rm{c}}{\overline{\rm{c}}}\to\gamma^*\to n({\rm{q}}{\overline{\rm{q}}})$ and/or may receive contributions from the isospin violating part of QCD. In general, these contributions are rather small and thus other contributions may play an important role here.
3. [*Radiative $\jp$ decays into light pseudoscalar mesons*]{}. The purely electromagnetic process ${\rm{c}}{\overline{\rm{c}}}\to\gamma^*\to\gamma{\rm{q}}{\overline{\rm{q}}}$, a contribution of order $\alpha^3$ being proportional to the time-like $\pi\gamma$ transition form factor at $s=\mjp^2$, together with a power correction provided by the VDM contribution $\jp\to\rho\pi$, $\rho\to\gamma$ [@CZ84], leads to $\Gamma(\jp\to\pi^0\gamma) = 2.86\,$ eV in agreement with experiment (3.43$\pm$1.2 eV [@PDG]). Similar estimates of the $\eta\gamma$ and $\eta'\gamma$ widths fail. Agreement with experiment is obtained here from a twist-4 gluon component of the singlet-$\eta$ state [@nov80], i.e. from a power correction. That gluon component can occur as a consequence of the $U(1)$ anomaly.
4. [*$\chi_{{\rm{c}}J}$ decays.*]{} For the $\chi_{{\rm{c}}J}$ mesons the ${\rm{c}}{\overline{\rm{c}}}$ pair forms a colour-singlet $P$-wave in the valence Fock state (notation: ${\rm{c}}{\overline{\rm{c}}}_{1}(^3P_J)$). The next-higher Fock state is a ${\rm{c}}{\overline{\rm{c}}}{\rm{g}}$ $S$-wave where the quark-antiquark pair forms a ${\rm{c}}{\overline{\rm{c}}}_{8} ({}^3S _1)$ state. For this reason the latter contribution is customarily referred to as the colour-octet contribution. The colour-singlet and octet contributions to the $\chi_{{\rm{c}}J}\to h\overline h$ decay amplitude behave as [@BKS] $$M_J^{(c)} \sim f_h^2\, f^{(c)}({}^3P_J)\, {m_{{\rm{c}}}}^{-n_c}\, . \label{dim}$$ The singlet decay constant, $f^{(1)}(^3P_J)$, which represents the derivative of a two-particle (non-relativistic) coordinate-space wave function at the origin, and the octet decay constant, $f^{(8)}(^3P_J)$, which represents a three-particle wave function at the origin, are of the same dimension, namely GeV$^2$. Hence, $n_1=n_8$. In fact, $n_c=1+2n_h$ where $n_h$ is the dimension of the light hadron's decay constant, $f_h$. It is, therefore, unjustified to neglect the colour-octet contributions in the $\chi_{{\rm{c}}J}$ decays.
VELOCITY SCALING
================
Recently, the importance of higher Fock states in understanding the production and the [*inclusive*]{} decays of charmonium has been pointed out [@BBL]. As has been shown, the long-distance matrix elements can be organized there into a hierarchy according to their scaling with $v$, the typical velocity of the ${\rm{c}}$ quark in the charmonium. One may apply the velocity expansion to [*exclusive*]{} charmonium decays as well [@BKS]. The Fock-state expansions of the charmonia start as $$\begin{aligned}
\label{Fockexpansion}
|\jp\rangle &=& O(1)\, |{\rm{c}}{\overline{\rm{c}}}_{1}({}^3S_1)\rangle + O(v)\, |{\rm{c}}{\overline{\rm{c}}}_{8}({}^3P_J)\,{\rm{g}}\rangle \nonumber\\
&& +\, O(v^2)\, |{\rm{c}}{\overline{\rm{c}}}_{8}({}^3S_1)\,{\rm{g}}{\rm{g}}\rangle + O(v^3)\, ,\nonumber\\
|\chi_{{\rm{c}}J}\rangle &=& O(1)\, |{\rm{c}}{\overline{\rm{c}}}_{1}({}^3P_J)\rangle + O(v)\, |{\rm{c}}{\overline{\rm{c}}}_{8}({}^3S_1)\,{\rm{g}}\rangle \nonumber\\
&& +\, O(v^2)\, .\end{aligned}$$ Combining the fact that the hard scattering amplitude involving a $P$-wave ${\rm{c}}{\overline{\rm{c}}}$ pair is of order $v$ with the Fock-state expansion (\[Fockexpansion\]), one finds for the amplitudes of $\chi_{{\rm{c}}J}$ decays into, say, a pair of pseudoscalar mesons the behaviour $$\begin{aligned}
\label{ampPP}
M(\chi_{{\rm{c}}J}\to P\overline{P}) &\sim& a\,\als^2\, v + b\,\als^2\, (\als^{{\scriptscriptstyle soft}} v)\nonumber\\
&+& O(v^2)\, ,\end{aligned}$$ where $a$ and $b$ are constants and $\als^{{\scriptscriptstyle soft}}$ comes from the coupling of the Fock-state gluon to the hard process. For the decay reaction $\jp\to\BbB$, on the other hand, one has $$\begin{aligned}
\label{ampBB}
M(\jp\to\BbB) &\sim& a\,\als^3 + b\,\als^3\, v\,(\als^{{\scriptscriptstyle soft}} v)\nonumber\\
&+& c\,\als^3\, v^2\, \als^{{\scriptscriptstyle soft}} + O(v^3)\, .\end{aligned}$$ Thus, one sees that in the case of the $\chi_{{\rm{c}}J}$ the colour-octet contributions are not suppressed by powers of either $v$ or $1/{m_{{\rm{c}}}}$ as compared to the contributions from the valence Fock states [@BKS]. Hence, the colour-octet contributions have to be included for a consistent analysis of $P$-wave charmonium decays. Indeed, as an explicit analysis reveals [@BKS], agreement between predictions and experiment for the $\chi_{{\rm{c}}J}\to P\overline{P}$ decay widths is obtained only if both contributions are taken into account. For more details and numerical results for decay widths, see the talk by G. Schuler in these proceedings. The situation is different for $\jp$ decays into baryon-antibaryon pairs: higher Fock-state contributions first start at $O(v^2)$, see (\[ampBB\]). Moreover, there is no obvious enhancement of the corresponding hard scattering amplitudes; they appear with at least the same power of $\als$ as the valence Fock-state contributions. Thus, despite the fact that ${m_{{\rm{c}}}}$ is not very large and $v$ not small ($v^2\simeq 0.3$), it seems reasonable to expect small higher Fock-state contributions to the baryonic decays of the $\jp$.\
In the following sections I will report on an analysis of the processes $\jp (\psi')\to \BbB$ performed with regard to these observations [@bol97].
THE MODIFIED PERTURBATIVE APPROACH
==================================
Recently, a modified perturbative approach has been proposed [@BLS] in which transverse degrees of freedom as well as Sudakov suppressions are taken into account, in contrast to the standard approach of Brodsky and Lepage [@lep80]. The modified perturbative approach possesses the advantage of strongly suppressed end-point regions, in which perturbative QCD cannot be applied. Moreover, the running of $\als$ and the evolution of the hadronic wave function can be taken into account properly.
Within the modified perturbative approach an amplitude for a ${}^{2S+1}L_J$ charmonium decay into two light hadrons, $h$ and $h'$, is written as a convolution with respect to the usual momentum fractions, $x_i$, $x_i'$, and the transverse separation scales, ${\bf b}_i$, ${\bf b}'_i$, canonically conjugated to the transverse momenta, of the light hadrons. Structurally, such an amplitude has the form $$\begin{aligned}
{\cal M}^{(c)} &\propto& f^{(c)} \int [{\rm d}x]\,[{\rm d}x']\,[{\rm d}^2 b]\,[{\rm d}^2 b']\; \hat{\Psi}_h^{\,*}(x,{\bf b})\, \hat{\Psi}_{h'}^{\,*}(x',{\bf b}')\nonumber\\
&&{}\times\, \hat{T}_H^{(c)}(x,x',{\bf b},{\bf b}',t)\, {\rm e}^{-S}, \label{structure}\end{aligned}$$ where $x^{(\prime)}$, ${\bf b}^{(\prime)}$ stand for sets of momentum fractions and transverse separations characterizing the hadron $h^{(\prime)}$. Each Fock state (see (\[Fockexpansion\])) provides such an amplitude (marked by the upper index $c$)[^2]. $\hat{\Psi}_h$ denotes the transverse configuration space (light-cone) wave function of a light hadron. The $f^{(c)}$ is the charmonium decay constant already introduced in Sect. 1. Because the annihilation radius is substantially smaller than the charmonium radius this is, to a reasonable approximation, the only information on the charmonium wave function required. $\hat{T}_H^{(c)}$ is the Fourier transform of the hard scattering amplitude to be calculated from a set of Feynman graphs relevant to the considered process. $t$ represents a set of renormalization scales at which the $\als$ appearing in $\hat{T}_H^{(c)}$ are to be evaluated. The $t_i$ are chosen as the maximum scale of either the longitudinal momentum or the inverse transverse separation associated with each of the internal gluons. Finally, $\exp[-S]$ represents the Sudakov factor which takes into account gluonic corrections not accounted for in the QCD evolution of the wave functions as well as a renormalization group transformation from the factorization scale $\mu_F$ to the renormalization scales. The gliding factorization scale to be used in the evolution of the wave functions is chosen to be $\mu_F=1/\tilde{b}$ where $\tilde b = \max\{b_i\}$. The $b$-space form of the Sudakov factor has been calculated in next-to-leading-log approximation by Botts and Sterman [@BLS].
As mentioned before, exclusive charmonium decays have been analysed several times before, e.g. [@dun80]. New refined analyses are however necessary for several reasons: in previous analyses the standard hard scattering approach has been used, with the running of $\als$ and the evolution of the wave function ignored. In the case of the $\chi_{{\rm{c}}J}$ also the colour-octet contributions have been disregarded. Another very important disadvantage of some of the previous analyses is the use of light hadron distribution amplitudes (DAs), representing wave functions integrated over transverse momenta, that are strongly concentrated in the end-point regions. Despite their frequent use, such [[DAs]{}]{} were always subject to severe criticism, see e.g. [@isg]. In the case of the pion, they lead to clear contradictions to the recent CLEO data [@CLEO] on the $\pi\gamma$ transition form factor, $F_{\pi\gamma}$. As detailed analyses revealed, the $F_{\pi\gamma}$ data require a [[DA]{}]{} that is close to the asymptotic form $\propto x(1-x)$ [@Rau96; @mus97]. Therefore, these end-point concentrated [[DAs]{}]{} should be discarded for the pion and perhaps also for other hadrons like the nucleon [@BK96].
RESULTS FOR $\jp\;(\psi')\to\BbB$
=================================
According to the statements put forward in Sects. 1 and 2, higher Fock-state contributions are neglected in this case.\
The $\jp$ colour-singlet component is written in a covariant fashion, $$|\jp; q,\lambda; c_1\rangle \;\propto\; f_{\jp}\, (\slashed{q}+\mjp)\, \slashed{\epsilon}(\lambda), \label{Jpsistate}$$ where $q$ and $\lambda$ denote the $\jp$ momentum and helicity and $\epsilon(\lambda)$ its polarization vector. The $\jp$ decay constant $f_{\jp}$ ($=f^{(1)}(^3S_1)$) is obtained from the electronic $\jp$ decay width and found to be 409 MeV. Except in phase space factors, the baryon masses are ignored and $\mjp$ is replaced by $2{m_{{\rm{c}}}}$ for consistency, since the binding energy is an $O(v^2)$ effect.
From the permutation symmetry between the two $u$ quarks and from the requirement that the three quarks have to be coupled in an isospin $1/2$ state it follows that there is only one independent scalar wave function in the case of the nucleon if the 3-component of the orbital angular momentum is assumed to be zero. Since $SU(3)$ is a good symmetry, only mildly broken by quark mass effects, one may also assume that the other octet baryons are described by one scalar wave function. It is parameterized as $$\Psi^{B_8}_{123}(x,{\bf k}_{\perp}) = \phi^{B_8}_{123}(x)\, \Omega_{B_8}(x,{\bf k}_{\perp}) \label{Psiansatz}$$ in the transverse momentum space. The set of indices 123 refers to the quark configuration ${\rm{u}}_+\,{\rm{u}}_-\,{\rm{d}}_+$; the wave functions for other quark configurations are to be obtained from (\[Psiansatz\]) by permutations of the indices. The transverse momentum dependent part $\Omega$ is parameterized as a Gaussian in $k^2_{\perp i}/x_i$ ($i=1,2,3$). The transverse size parameter, $a_{B_8}$, appearing in that Gaussian, as well as the octet-baryon decay constant, $f_{B_8}$, are assumed to have the same value for all octet baryons. In [@BK96] these two parameters as well as the nucleon [[DA]{}]{} have been determined from an analysis of the nucleon form factors and valence quark structure functions at large momentum transfer ($a_{B_8}=0.75 \gev^{-1}$; $f_{B_8}=6.64\times 10^{-3} \gev^2$ at a scale of reference $\mu_0=1 \gev$). The [[DA]{}]{} has been found to have the simple form $$\phi^N_{123}(x,\mu_0) = 60\, x_1 x_2 x_3\, (1+3\,x_1). \label{phiFIT}$$ It behaves similar to the asymptotic form, only the position of the maximum is shifted slightly. For the hyperon and decuplet baryon [[DAs]{}]{} suitable generalizations of the nucleon [[DA]{}]{} are used.
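As a quick consistency check, the nucleon [[DA]{}]{} of [@BK96] — here taken in the form $60\,x_1x_2x_3(1+3x_1)$, where the bracketed factor is our reading of the garbled source — should integrate to one over the momentum-fraction simplex $x_1+x_2+x_3=1$. A minimal Monte-Carlo sketch in Python (sample size and seed are arbitrary choices):

```python
import random

def da(x1, x2, x3):
    # Bolz-Kroll-type nucleon DA; the (1 + 3*x1) factor is an assumed reading
    return 60.0 * x1 * x2 * x3 * (1.0 + 3.0 * x1)

def simplex_integral(f, n_samples=200_000, seed=0):
    """Monte-Carlo integral of f over the simplex x1 + x2 + x3 = 1, xi >= 0."""
    rng = random.Random(seed)
    total = 0.0
    n = 0
    while n < n_samples:
        x1, x2 = rng.random(), rng.random()
        if x1 + x2 > 1.0:
            continue            # reject points outside the triangle
        total += f(x1, x2, 1.0 - x1 - x2)
        n += 1
    # the triangle {x1 + x2 <= 1} has area 1/2
    return 0.5 * total / n_samples

print(simplex_integral(da))     # close to 1, as required by normalization
```

The same answer follows analytically from the Dirichlet integrals $\int x_1x_2x_3 = 1/120$ and $\int x_1^2x_2x_3 = 1/360$ over the simplex.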
Calculating the hard scattering amplitude from the Feynman graphs for the elementary process ${\rm{c}}{\overline{\rm{c}}}\to 3{\rm{g}}^*\to 3({\rm{q}}{\overline{\rm{q}}})$ and working out the convolution (\[structure\]), one obtains the widths for the $\jp$ decays into $\BbB$ pairs listed and compared to experimental data in Table \[tab:1S\].
channel & $p\overline p$ & $\Sigma^0\overline{\Sigma}{}^0$ & $\Lambda\overline \Lambda$ & $\Xi^-\overline{\Xi}{}^+$ & $\Delta^{++}\overline{\Delta}{}^{--}$ & $\Sigma^{*-}\overline{\Sigma}{}^{*+}$
$\Gamma_{3g}$ (eV) & 174 & 113 & 117 & 62.5 & 65.1 & 40.8
$\Gamma_{\rm exp}$ (eV) [@PDG] & $188\pm 14$ & $110\pm 15$ & $117\pm 14$ & $78\pm 18$ & $96\pm 26$ & $45\pm 6$
As can be seen from that table, a rather good agreement with the data is obtained.
In addition to the three-gluon contribution there is also an isospin-violating electromagnetic one generated by the subprocess ${\rm{c}}{\overline{\rm{c}}}\to\gamma^*\to 3({\rm{q}}{\overline{\rm{q}}})$. According to [@bol97] this contribution seems to be small.\
The extension of the perturbative approach to the baryonic decays of the $\psi'$ is now a straightforward matter. One simply has to rescale the $\jp$ widths by the ratio of the corresponding electronic widths, $$\Gamma(\psi'\to\BbB)=\Gamma(\jp\to\BbB)\,
\frac{\Gamma(\psi'\to e^+e^-)}{\Gamma(\jp\to e^+e^-)}\,
\frac{\rho_{\mathrm{p.s.}}(m_B/M_{\psi'})}{\rho_{\mathrm{p.s.}}(m_B/\mjp)}, \label{sca}$$ where $\rho_{\mathrm{p.s.}}(z)=\sqrt{1-4z^2}$ is the phase space factor. Contrary to previous calculations, the $\psi'$ and the $\jp$ widths do not scale as $(\mjp/M_{\psi'})^8$ since the hard scattering amplitude is evaluated with $2{m_{{\rm{c}}}}$ instead of with the bound-state mass. Results for the baryonic decay widths of the $\psi'$ are presented in Table \[tab:2S\].
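As a rough numerical illustration of this rescaling, the following Python sketch reproduces the $\psi'\to p\overline p$ entry of Table \[tab:2S\] from the corresponding $\jp$ width; the masses and electronic widths are assumed (approximate mid-1990s PDG) values, not taken from the text:

```python
from math import sqrt

# assumed inputs (approximate mid-1990s PDG values)
M_JPSI, M_PSI2S = 3.097, 3.686               # charmonium masses [GeV]
M_P = 0.938                                  # proton mass [GeV]
GAMMA_EE_JPSI, GAMMA_EE_PSI2S = 5.26, 2.14   # electronic widths [keV]
GAMMA_JPSI_PP = 174.0                        # calculated J/psi -> p pbar width [eV]

def rho_ps(z):
    """Phase space factor rho_ps(z) = sqrt(1 - 4 z^2)."""
    return sqrt(1.0 - 4.0 * z * z)

gamma_psi2s_pp = (GAMMA_JPSI_PP
                  * GAMMA_EE_PSI2S / GAMMA_EE_JPSI
                  * rho_ps(M_P / M_PSI2S) / rho_ps(M_P / M_JPSI))
print(round(gamma_psi2s_pp, 1))              # ~77 eV, close to the quoted 76.8 eV

# the naive mass-ratio factor the text argues against:
print(round((M_JPSI / M_PSI2S) ** 8, 2))     # ~0.25
```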
channel & $p\overline p$ & $\Sigma^0\overline{\Sigma}{}^0$ & $\Lambda\overline \Lambda$ & $\Xi^-\overline{\Xi}{}^+$ & $\Delta^{++}\overline{\Delta}{}^{--}$ & $\Sigma^{*-}\overline{\Sigma}{}^{*+}$
$\Gamma_{3g}$ (eV) & 76.8 & 55.0 & 54.6 & 33.9 & 32.1 & 24.4
$\Gamma_{\rm exp}$ (eV) [@BESC] & $76\pm 14$ & $26\pm 14$ & $58\pm 12$ & $23\pm 9$ & $25\pm 8$ & $16\pm 8$
$\Gamma_{\rm exp}$ (eV) [@PDG] & $53\pm 15$ & & & & &
Again, good agreement between theory and experiment is to be seen, with perhaps the exception of the $\Sigma^0\overline{\Sigma}{}^0$ channel. An additional factor of $(\mjp/M_{\psi'})^8$ ($\approx 0.25$) in (\[sca\]) would clearly lead to disagreement with the data.
CONCLUSIONS
===========
Exclusive charmonium decays constitute an interesting laboratory for investigating power corrections and higher Fock-state contributions. In particular, one can show that in the decays of the $\chi_{{\rm{c}}J}$ the contributions from the next-higher charmonium Fock state, ${\rm{c}}{\overline{\rm{c}}}{\rm{g}}$, are not suppressed by powers of ${m_{{\rm{c}}}}$ or $v$ as compared to the ${\rm{c}}{\overline{\rm{c}}}$ Fock state and therefore have to be included for a consistent analysis of these decays. For $\jp$ ($\psi'$) decays into $\BbB$ pairs the situation is different: higher Fock-state contributions are suppressed by powers of $1/{m_{{\rm{c}}}}$ and $v$. Indeed, as an explicit analysis reveals, with plausible baryon wave functions a reasonable description of the baryonic $\jp$ ($\psi'$) decay widths can be obtained from the ${\rm{c}}{\overline{\rm{c}}}$ Fock state alone.
[9]{} A. Duncan and A.H. Mueller, ; S.J. Brodsky and G.P. Lepage, ; V.L. Chernyak and A.R. Zhitnitsky, . S.J. Brodsky and M. Karliner, . P. Kroll, Th. Pilsner, M. Schürmann and W. Schweiger, . V.L. Chernyak and A.R. Zhitnitsky, [*Phys. Rep. *]{}[**112**]{}, 173 (1984). Particle Data Group: Review of Particle Properties, . V.A. Novikov et al., . J. Bolz, P. Kroll and G.A. Schuler, and hep-ph/9704378, to be published in [*Z. Phys.*]{} C. G.T. Bodwin, E. Braaten and G.P. Lepage, . J. Bolz and P. Kroll, hep-ph/9703252, to be published in [*Z. Phys.*]{} C. J. Botts and G. Sterman, ; H.-N. Li and G. Sterman, . G.P. Lepage and S.J. Brodsky, . A.V. Radyushkin, . P. Kroll and M. Raulfs, . I.V. Musatov and A.V. Radyushkin, JLAB-THY-97-07, February 1997, hep-ph/9702443. J. Bolz and P. Kroll, . CLEO collaboration, V. Savinov et al., CLNS preprint 97/1477 (1997). Y. Zhu for the BES coll., talk presented at the XXVIII Int. Conf. on High Energy Physics, 25-31 July 1996, Warsaw, Poland.
[^1]: Supported in part by the TMR network ERB 4061 Pl 95 0115
[^2]: In higher Fock-state contributions additional integration variables may appear.
---
abstract: 'Invasion bond percolation (IBP) is mapped exactly into Prim’s algorithm for finding the shortest spanning tree of a weighted random graph. Exploring this mapping, which is valid for arbitrary dimensions and lattices, we introduce a new IBP model that belongs to the same universality class as IBP and generates the minimal energy tree spanning the IBP cluster.'
address: 'IBM, T. J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA'
author:
- 'Albert-László Barabási$^*$'
title: Invasion percolation and global optimization
---
Flow in porous media, a problem with important practical applications, has motivated a large number of theoretical and experimental studies [@wong]. Aiming to understand the complex interplay between the dynamics of flow processes and the randomness characterizing the porous medium, a number of models have been introduced that capture different aspects of various experimental situations. One of the most investigated models in this respect is invasion percolation [@Wilkinson83], which describes low flow rate drainage experiments or the secondary migration of oil during the formation of underground oil reservoirs [@wong; @Vicsek92].
When a wetting fluid (e.g. water) is injected [*slowly*]{} into a porous medium saturated with a non-wetting fluid (e.g. oil), capillary forces, inversely proportional to the local pore diameter, are the major driving forces determining the motion of the fluid. The invasion bond percolation (IBP) model captures the basic features of this invasion process. Consider a two dimensional square lattice and assign random numbers $p_{ij} \in [0,1]$ to bonds connecting the nearest neighbor vertices $x_i$ and $x_j$. Here $p_{ij}$ mimic the randomness of the porous medium, corresponding to the random diameter of the pores, and vertices correspond to throats. Invasion bond percolation without trapping is defined by the following steps: (i) Choose a vertex on the lattice. (ii) Find the bond with the smallest $p_{ij}$ connected to the occupied vertex and occupy it. At this point the IBP cluster has two vertices and one bond. (iii) In any subsequent step find the empty bond with the smallest $p_{ij}$ connected to the occupied vertices, and occupy the bond and the vertex connected to it.
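The three rules above can be transcribed into a minimal Python sketch (a naive frontier scan with uniform random bond weights standing in for the pore diameters; lattice size, step count, and seed are arbitrary illustrative choices):

```python
import random

def ibp_step_rule(L=16, n_steps=50, seed=0):
    """Invasion bond percolation without trapping on an L x L square lattice.

    Each step occupies the smallest-weight empty bond touching the cluster;
    a new vertex is added only when that bond is not trapped."""
    rng = random.Random(seed)
    weight = {}                         # lazily assigned bond weights p_ij

    def bond(u, v):
        return (min(u, v), max(u, v))

    def w(u, v):
        return weight.setdefault(bond(u, v), rng.random())

    def neighbours(v):
        x, y = v
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < L and 0 <= y + dy < L:
                yield (x + dx, y + dy)

    vertices = {(L // 2, L // 2)}       # step (i): choose a vertex
    bonds = set()
    for _ in range(n_steps):
        # steps (ii)/(iii): smallest-weight empty bond adjacent to the cluster
        candidates = [(w(u, v), u, v)
                      for u in vertices for v in neighbours(u)
                      if bond(u, v) not in bonds]
        if not candidates:
            break
        _, u, v = min(candidates)
        bonds.add(bond(u, v))
        vertices.add(v)                 # no effect if the bond was trapped
    return vertices, bonds

verts, occupied = ibp_step_rule()
print(len(occupied), len(verts))        # bonds grow by one per step; vertices may lag
```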
The various versions of the model are useful in matching the simulated dynamics to the microscopic effects at play when fluids with different wetting properties and compressibility are considered. Originally introduced to model fluid flow, invasion percolation is lately viewed as a key model in statistical mechanics, investigated to advance our understanding of irreversible and nonequilibrium growth processes with generic scaling properties [@Vicsek92].
Finding the shortest spanning tree of a weighted random graph is a well known problem in graph theory [@Chris75]. Consider a connected nondirected graph $G$ of $n$ vertices and $m$ bonds (links connecting vertices), with costs $p_{ij}$ associated with every bond $(x_i,x_j)$. A [*spanning tree*]{} on this graph is a connected graph of $n$ vertices and $n-1$ bonds. Of the many possible spanning trees one wants to find the one for which the sum of the weights $p_{ij}$ is the smallest. A well known example is designing a network that connects $n$ cities with direct city-to-city links (whose length is $p_{ij}$) and shortest possible total length. This is a problem of major interest in the planning of large scale communication networks and is one of the few problems in graph theory that can be considered completely solved. Since for a fully connected graph with $n$ vertices there are $n^{n-2}$ spanning trees [@Cayley1874], designing an algorithm that finds the shortest one in non-exponential time steps is a formidable global optimization problem.
An efficient algorithm for finding the shortest spanning tree of an arbitrary connected graph $G$ was introduced by Prim [@Prim57], and involves the following steps: (i) Choose an arbitrary vertex, $x_i$. (ii) Of all vertices connected to $x_i$ find the one for which $p_{ij}$ is the smallest, and join $x_i$ and $x_j$. (iii) At any subsequent step a new vertex is appended to the tree by searching for the bond that has the smallest weight $p_{ik}$, where $x_i$ belongs to the tree, and $x_k$ does not. Thus bonds that connect already occupied vertices are not eligible for growth. It has been shown by Prim that the tree generated by the previous algorithm is the smallest energy spanning tree for the graph $G$ [@proof]. Already at this point one can notice the formal similarity between Prim’s algorithm and the IBP model discussed above.
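Prim's algorithm admits an equally short sketch; the following heap-based version is a standard textbook rendition (the nested-dict graph encoding is an illustrative choice, not from [@Prim57]):

```python
import heapq

def prim(graph, start):
    """Shortest spanning tree of a weighted graph by Prim's algorithm.

    `graph` maps each vertex to a dict {neighbour: weight}; returns the
    tree edges and their total weight."""
    in_tree = {start}
    edges, total = [], 0.0
    heap = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue                  # bond between two tree vertices: skip
        in_tree.add(v)
        edges.append((u, v))
        total += w
        for x, wx in graph[v].items():
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return edges, total

g = {"a": {"b": 1.0, "c": 4.0},
     "b": {"a": 1.0, "c": 2.0, "d": 6.0},
     "c": {"a": 4.0, "b": 2.0, "d": 3.0},
     "d": {"b": 6.0, "c": 3.0}}
tree, cost = prim(g, "a")
print(tree, cost)   # MST edges a-b, b-c, c-d with total weight 6.0
```

The `continue` on already-connected vertices is precisely the trapping rule discussed below for the IBPO model.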
In this paper I show the equivalence of the IBP model with Prim’s algorithm for finding the shortest spanning tree of a weighted random graph [@Prim57], and explore the consequences of this interesting mapping. For this I introduce an invasion bond percolation model with a local trapping rule (hereafter called IBPO model). At [*every time step*]{} the bonds invaded by the IBPO model form the [*minimum energy tree*]{} spanning all vertices of the IBP cluster, where energy is defined as the sum of the invaded random bonds. Moreover, the clusters generated by the IBPO model have the same scaling and dynamic properties as the clusters of the standard IBP model. Thus the two models (and Prim’s algorithm) belong to the same universality class. Since the IBPO cluster forms a tree (i.e. is loopless), this result implies that loopless IBP belongs to the same universality class as IBP. The cluster formed by the invaded bonds coincides with the unique solution of the global optimization problem of finding the smallest energy branching self-avoiding walk connecting all vertices of a finite lattice. Furthermore, the IBPO model is computationally more efficient than the IBP model. The above results are [*exact*]{} and are valid for [*arbitrary dimensions*]{} and [*lattices*]{}.
The difference between the IBP and IBPO models comes in an additional trapping rule [@trap]: in the IBPO model only bonds connecting vertices of the cluster to empty vertices are eligible for growth (see Fig. \[fig1\]). Note that in the IBP model there may be bonds eligible for growth that connect two already occupied vertices (hereafter called [*trapped*]{} bonds, since an empty bond is trapped between two occupied vertices). In the IBPO model these trapped bonds are not eligible for growth [@lung].
Consider the invasion process described by the IBPO model, and assume that invasion ends when [*all vertices*]{} of a [*finite* ]{} lattice have been invaded [@note-bonds]. The energy of the obtained IBPO cluster is defined by $E=\sum p_{ij}$, where the sum goes over all [*occupied*]{} bonds.
With these definitions one can prove the following:
(a) The cluster generated by the IBPO model has the smallest energy of all possible clusters that span all vertices of the lattice.
(b) The obtained cluster is independent of the vertex chosen as the starting point of the invasion process.
(c) Defining time as the number of invaded [*vertices*]{}, at any time step the vertices invaded by the IBPO model coincide with those invaded by the IBP model, implying that the IBPO and IBP models belong to the same universality class.
(d) At any time step the bonds invaded by the IBPO model form the smallest energy tree spanning the vertices of the IBP cluster.
(e) The statements (a)-(d) are valid in any dimension and are independent of the lattice.
In the following we discuss (a)-(e) separately.
[*(a) Prim’s algorithm and IBPO—*]{} Comparing the definition of the IBPO model and Prim’s algorithm, we find that Prim’s algorithm is [*exactly*]{} the IBPO model acting on the graph $G$. Since the square lattice, for which the IBPO model is defined, is a particular case of an arbitrary graph, the cluster generated by the IBPO model coincides with the smallest energy spanning tree.
[*(b) Uniqueness of the IBPO cluster—*]{} Since the IBPO model selects the smallest energy tree, there is only one such tree, provided that the $p_{ij}$’s are distinct real numbers, since the chance of having two trees with the same number of bonds and the same energy is zero [@unique]. Thus starting from any vertex of the lattice one should obtain the same minimum energy cluster.
[*(c) Cluster properties—*]{} A $B$-cluster is the set of [*bonds*]{} occupied by the invasion process. Similarly, a $V$-cluster is the set of occupied [*vertices*]{}. In percolation and fluid flow one is interested in the scaling properties of the first spanning cluster generated by the invasion algorithm. In particular, it is known that clusters generated by the IBP model are fractal. However, the fractal dimension, and in general the scaling exponents, may depend on the trapping rule, thus one needs to establish the universality class to which the IBPO model belongs, since it differs from the IBP model in a trapping rule.
Defining time as the number of occupied [*vertices*]{}, at any time step the $V$-clusters generated by the IBP and IBPO models are identical [@vertex], the only difference being that within one time step the IBP model may occupy a number of trapped bonds without adding any new vertex to the cluster. The IBPO model with every occupied bond occupies a vertex as well. In conclusion, at any time step the [*$V$-clusters generated by the two models coincide*]{}, provided, that we start the invasion process from the same vertex. This implies that the IBP and IBPO belong to the same universality class, and the generated clusters have the same fractal dimension, whose value coincides with the fractal dimension of ordinary percolation [@univ].
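This coincidence of the $V$-clusters can also be checked numerically. The following sketch (all implementation details are illustrative) runs both growth rules on one and the same disorder realization and verifies that the occupied vertex sets agree, that the IBPO bonds form a subset of the IBP bonds, and that the IBPO $B$-cluster is a tree:

```python
import heapq
import random

def lattice_weights(L, seed):
    """One disorder realization: i.i.d. uniform weights on all n.n. bonds."""
    rng = random.Random(seed)
    return {((x, y), (x + dx, y + dy)): rng.random()
            for x in range(L) for y in range(L)
            for dx, dy in ((1, 0), (0, 1))
            if x + dx < L and y + dy < L}

def invade(L, weights, n_vertices, trapping):
    """Invasion bond percolation; trapping=True gives the IBPO rule."""
    def bond(u, v):
        return (min(u, v), max(u, v))
    def nbrs(v):
        x, y = v
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < L and 0 <= y + dy < L:
                yield (x + dx, y + dy)
    start = (L // 2, L // 2)
    vertices, bonds = {start}, set()
    heap = [(weights[bond(start, v)], start, v) for v in nbrs(start)]
    heapq.heapify(heap)
    while heap and len(vertices) < n_vertices:
        _, u, v = heapq.heappop(heap)
        if bond(u, v) in bonds:
            continue                        # stale heap entry
        if trapping and v in vertices:
            continue                        # IBPO: trapped bonds ineligible
        bonds.add(bond(u, v))
        if v not in vertices:
            vertices.add(v)
            for x in nbrs(v):
                if bond(v, x) not in bonds:
                    heapq.heappush(heap, (weights[bond(v, x)], v, x))
    return vertices, bonds

L, W = 14, lattice_weights(14, seed=7)
v_ibp, b_ibp = invade(L, W, 70, trapping=False)
v_ibpo, b_ibpo = invade(L, W, 70, trapping=True)
# same V-cluster, IBPO bonds a subset of IBP bonds, IBPO B-cluster a tree:
print(v_ibp == v_ibpo, b_ibpo <= b_ibp, len(b_ibpo) == len(v_ibpo) - 1)
```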
However, not only the static properties, but all dynamic properties measured in terms of the occupied vertices coincide as well. For example, the two models generate exactly the same set of avalanches [@Suki94; @Fruenberg] and the growth of the cluster obeys the same dynamic scaling form [@Fruenberg].
[*(d) Spanning trees and loopless percolation—*]{} Next I investigate the relation between the $B$-clusters generated by the two models. The bonds invaded by the IBPO model are a subset of the bonds invaded by the IBP model, i.e. at any time step $N^b_{IBP} \ge N^b_{IBPO}$, where $N^b_{IBPO}$ and $N^b_{IBP}$ are the number of bonds occupied by the IBPO and IBP models, respectively. According to (a) and Prim’s theorem, the bonds invaded by the IBPO model form the smallest energy spanning path connecting the selected vertices. Since the IBP and IBPO models share the same vertices, at [*every time step the IBPO $B$-cluster is the minimum energy tree spanning all vertices of the IBP cluster*]{}. This can be seen in Fig. \[fig2\], where the IBP and IBPO clusters are shown simultaneously.
Since the IBPO cluster forms a tree, removing any bond of the IBPO cluster breaks the cluster in two subclusters. This is not true for the IBP model, where cutting any trapped bond does not break the cluster (Fig. \[fig2\]). Since the cluster generated by the IBPO model is a tree, it has no loops. The fact that the IBPO and IBP models share the same scaling exponents shows that loopless IBP (which is the IBPO model) belongs to the same universality class as IBP, or ordinary percolation. Loopless percolation has been studied in great detail [@tzs89], and there is [*numerical*]{} evidence that removing loops does not change the universality class of the percolation model. However, to my knowledge the IBPO model is the first percolation model generating loopless percolation clusters for which the fact that the loopless model belongs to the same universality class as ordinary (invasion) percolation can be proven exactly.
[*(e) Dimension dependence—*]{} The proof of (a)-(d) does not assume anything specific about the structure of the lattice. Indeed, Prim’s theorem applies for an arbitrary weighted graph. Since any regular lattice, in any dimension, is a special case of a random graph, the above results are independent of the nature and dimension of the lattice, proving (e).
[*Complexity of the IBPO model—*]{} The number of spanning trees on a regular lattice is much smaller than on a fully connected graph, but still increases exponentially with $n$. The number of computations needed in the simulation of the invasion process, i.e. the [*complexity of the IBPO algorithm*]{}, is however algebraic in $n$ [@algo]. The most time consuming operation is finding at every time step the bond with the smallest weight eligible for growth. Since $N^b_{IBPO} \le N^b_{IBP}$, the IBPO model requires equal or less time to run on an arbitrary computer. Fig. \[fig3\] shows the number of trapped bonds, $(N^b_{IBP}-N^b_{IBPO})$, as a function of time. Since the two models belong to the same universality class, using the IBPO model for studying the scaling properties of IBP or ordinary percolation has considerable computational advantages.
In conclusion, I introduced a new bond invasion percolation model that belongs to the same universality class as IBP without trapping, or ordinary percolation. The cluster generated by the IBPO model forms the smallest energy tree spanning the IBP cluster. Exact enumeration, which is the only alternative solution to this global optimization problem, diverges exponentially with the number of vertices in the system. This is the first model, to my knowledge, that through a step-by-step optimization process finds the global minimum of the system.
The global optimization problem, to which IBP is shown to be equivalent, connects to another class of problems in statistical mechanics: that of understanding the zero temperature properties of various spatially extended random systems. Since the low temperature behavior is dominated by configurations with the smallest energy, such problems involve finding the minima of certain functions, most often of a Hamiltonian. Problems in physics that regularly deal with such minimization procedures range from directed polymers to spin glasses [@cieplak], or interface motion in disordered media [@alb]. The IBPO model provides the minimal energy cluster, implicitly solving a generic problem for a particular random system whose only other solution is exact enumeration.
I have benefited from enlightening discussions with A. Aharony, J. Feder, G. Grinstein, S. Havlin, J. Toner, J. Tøssang, and Y.Tu.
Permanent address: Department of Physics, University of Notre Dame, Notre Dame, IN 46556, Email: [email protected]
For a recent review see P-z. Wong, MRS Bulletin [**19**]{} (No 5), 32 (1994); M. Sahimi, [*Flow and transport in porous media and fractured rock*]{} (Weinheim, New York,1995).
D. Wilkinson, and J.F. Willemsen, J.Phys. A [**16**]{}, 3365 (1983); R. Lenormand, and S. Bories, C.R. Acad. Sci. Paris [**291B**]{}, 279 (1980); R. Chandler, J. Koplik, K. Lerman, and J.F. Willemsen, J. Fluid. Mech. [**119**]{}, 249 (1982).
T. Vicsek, [*Fractal Growth Phenomena*]{} (World Scientific, Singapore, 1992); J. Feder, [*Fractals*]{} (Plenum, New York, 1988); A. Bunde, and S. Havlin, [*Fractals and Disordered Systems*]{} (Springer Verlag, Berlin, 1991).
N. Christofides, [*Graph theory: An algorithmic approach*]{} (Academic Press, London, 1975).
A. Cayley, Philosophical Magazine [**67**]{}, 444 (1874).
R.C. Prim, The Bell Syst. Tech. Jl. [**36**]{} 1389 (1957).
The detailed proof that the cluster generated by Prim’s algorithm has the smallest energy is given in [@Chris75] and [@Prim57]. For a short outline of the proof, tailored to the IBPO model, consider a graph of $n$ vertices, and assume that we are at the last step of the invasion process. The IBPO cluster, $C_{n-1}$, has the smallest energy of all trees connecting the selected $n-1$ vertices. In the next step we connect the last vertex, denoted by ${\cal A}$, to the tree, generating the cluster $C_n$. This is done by selecting and occupying the smallest energy bond connecting ${\cal A}$ to the cluster $C_{n-1}$. The proof proceeds by [*reductio ad absurdum*]{}: assume that the obtained cluster $C_n$ does not have the smallest energy, i.e., that there exists a cluster $C_n'$ with energy $E(C'_n)<E(C_n)$. However, this means that $C'_n-{\cal A}$ has smaller energy than $C_{n-1}$, contradicting the hypothesis that $C_{n-1}$ is the tree with the smallest energy spanning the $n-1$ vertices. Thus $C_n$ has to be the smallest energy cluster existing in the system, concluding the proof of (a).
The trapping rule used in the IBPO model does not isolate complete clusters, but only bonds that have both ends occupied. Note that the investigated IBP model is the so called IBP [*without trapping*]{} [@Wilkinson83]. Similar trapping rules have been considered by M. Blunt [*et al.*]{} \[M. Blunt, M.J. King, and H. Scher, Phys. Rev. A [**46**]{}, 7680 (1992)\].
Opening mechanisms similar to the one described by the IBPO model are known to take place in the lung during inflation, as it was demonstrated experimentally by Suki [*et al.*]{} [@Suki94].
B. Suki [*et al.*]{}, Nature [**368**]{}, 615 (1994).
Note that occupying all vertices does not imply that all bonds have been occupied as well.
A formal proof of the uniqueness of the shortest spanning tree is given by J.B. Kruskal \[Proc. Amer. Math. Soc. [**7**]{}, 48 (1956)\].
The IBP model selects and occupies at any time step the smallest of the empty bonds connected to occupied vertices. However, if the selected bond is trapped, occupying it does not occupy any [ *new vertex*]{}, i.e. it occupies a bond adding it to the $B$-cluster, but the $V$-cluster remains unchanged. A new vertex is added only when the selected bond is not trapped (Fig. \[fig1\]). By definition, in the IBPO model only bonds that are not trapped are eligible for growth, which at every time step are identical to the IBP non-trapped bonds.
In the following I argue that the IBP and an IBPO cluster, that share the same vertices, must have the same scaling properties, and thus must belong to the same universality class. Take a cluster of fractal dimension $D_f$ generated by the invasion bond percolation algorithm and replace every bond with the vertices connected by the bonds. The performed [*local*]{} operation is not observable if the system is viewed at length scales larger than two bond length, thus going from bonds to vertices can not affect the scaling properties of the cluster. For example, if we measure the fractal dimension of the cluster, the differences between the B and the V-clusters come on length scales smaller than two lattice spacings, i.e. any method that is investigating the fractal (large scale) properties of the cluster will not see any difference. Thus the vertex cluster and the bond cluster belong to the same universality class. Since at any time step the IBP and IBPO models share the same vertices, we have proven that IBP and IBPO models belong to the same universality class.
L. Furuberg, J. Feder, A. Aharony, and T. Jøssang, Phys. Rev. Lett. [**61**]{}, 2117–2120 (1988).
F. Tzschichholz, A. Bunde, and S. Havlin, Phys. Rev A [**39**]{}, 5470 (1989).
The execution time of the shortest published program for generating the minimal spanning tree using Prim’s algorithm increases as $n^2$ \[V. Kevin and M. Whitney, Comm. of ACM [**15**]{}, 273 (1972)\].
M. Cieplak, A. Maritan, and J.R. Banavar, Phys. Rev. Lett. [**72**]{}, 2320 (1994); M. Cieplak, A. Maritan, and J.R. Banavar (preprint).
A.-L. Barabási and H.E. Stanley, [*Fractal Concepts in Surface Growth*]{} (Cambridge University Press, Cambridge, 1995).
---
author:
- Christian Rose and Peter Stollmann
title: 'Manifolds with Ricci curvature in the Kato class: heat kernel bounds and applications'
---
Introduction {#introduction .unnumbered}
============
The 1973 paper by Tosio Kato entitled Schrödinger operators with singular potentials, published in the Israel Journal of Mathematics [@Kato-72], was meant to establish essential selfadjointness for Schrödinger operators under very mild restrictions on the potential term. Along the way, the author introduced two concepts that bear his name and turned out to be useful in different contexts. Actually, those two concepts, *Kato’s inequality* and the *Kato class* of potentials, can be combined to give new insight into the analysis and geometry of Riemannian manifolds, and this is what the present survey is about.
We concentrate on the latter and record some of the implications that arise when the negative part of Ricci curvature obeys a Kato-type condition. Put very roughly, this is an application of methods from mathematical physics, more precisely, operator theory and Schrödinger operators, to questions about manifolds, namely, geometric properties that are related to the heat kernel. As of this writing, papers concerning this topic are relatively recent, and we have tried to record them all. If it should turn out that we missed a relevant paper, we would be grateful for references and include them in the future. Since ideas from two different communities are involved, we have decided to include some basics before stating the results.
In Section \[section\_Kato\] we start by introducing the Kato class or Kato condition in a general set-up and explain its use in analysis and probability. The Kato condition can be seen as a condition of relative boundedness of a function (potential) $V$ with respect to some reference operator $H_0$ on the space in question. This reference operator was the usual Laplacian in ${\mathbb{R}}^n$ in the case of Kato’s original paper and it will be the Laplace-Beltrami operator on a Riemannian manifold in the application that we have in mind. The Kato condition means that $V$ is, in a certain sense, small with respect to $H_0$ and that leads to the comforting fact that $H_0+V$ will inherit some of the good properties of $H_0$. In particular, mapping properties of the semigroup $\left({\mathrm{e}}^{-tH_0}\right)_{t\geq 0}$ carry over to the perturbed semigroup $\left({\mathrm{e}}^{-t(H_0+V)}\right)_{t\geq 0}$.
The next issue, also treated in the first Section, is the connection between heat kernel bounds for the Laplace-Beltrami operator and the validity of the Kato condition for functions in appropriate $L^p$-spaces.
In Section \[section\_domination\] we give a short introductory account on domination of semigroups, a notion that is intimately connected with Kato’s inequality and with the defining properties of Dirichlet forms in terms of the associated semigroups, known as the Beurling-Deny criteria. They allow for a pointwise comparison of the heat semigroup of the Laplace–Beltrami operator acting on functions and the heat semigroup of the Hodge–Laplacian acting on forms.
Throughout we will be concerned with a central point that makes the state of affairs somewhat complex. It is so important that we try to sketch it here and refer to the later sections for the technical details that we omit; we denote by $M$ a Riemannian manifold. The Hodge-Laplacian on $1$–forms, denoted by $\Delta^1$ (here you see that our sign convention differs from the preferred one in mathematical physics), can be calculated by the Weitzenböck formula as $$\Delta^1=\nabla^*\nabla+\operatorname{\mathop{Ric}},$$ where the latter summand is considered as a matrix-valued function. Therefore, $\Delta^1$ itself looks like a Schrödinger operator, albeit one acting on vector-valued functions. Using Kato’s inequality (introduced in that context by Hess, Schrader and Uhlenbrock in their paper [@HessSchraderUhlenbrock-80]) the heat semigroup $\left({\mathrm{e}}^{-t\Delta^1}\right)_{t\geq 0}$ of the Hodge-Laplacian is *dominated* by the semigroup of the Schrödinger operator $$\Delta+\rho\quad\text{on $L^2(M)$},$$ where $\Delta=\Delta^{\mathrm{LB}}$ is the Laplace-Beltrami operator on functions and $$\rho(x):=\min\{\sigma(\operatorname{\mathop{Ric}}_x)\},\quad x\in M,$$ picks the smallest eigenvalue of the symmetric matrix $\operatorname{\mathop{Ric}}_x$ considered as an endomorphism of the space of $1$–forms.
Knowing that ${\rho_-}$, the negative part of $\rho$, is in some sense small with respect to $\Delta$, e.g., in terms of a Kato condition, allows one to control ${\mathrm{e}}^{-t(\Delta+\rho)}$ which in turn gives useful information on ${\mathrm{e}}^{-t\Delta^1}$ that can be used to estimate the first Betti number $b_1(M)$ in certain cases.\
Now that looks like an easy lay-up but there is a very important drawback. The implications of the Kato condition are perturbation theoretic in spirit and require some smallness of ${\rho_-}$ with respect to $\Delta$. But here, we cannot view $\rho$ as a perturbation of $\Delta$ in the usual sense, since we cannot vary the potential $\rho$ independently of $\Delta$: both depend on the Riemannian metric that defines the manifold!\
The good news is that the Kato condition, in particular, provides good quantitative estimates, so we can arrive at interesting consequences, provided that ${\rho_-}$ satisfies a suitable Kato condition.\
This work should be seen as part of a general program concerning geometric properties under curvature assumptions that are less restrictive than uniform bounds. Here, as well, an important comment is in order: in the compact case, all the quantities we consider depend quite regularly on the space variable. In particular, ${\rho_-}$ is a pointwise minimum of smooth functions, hence continuous and, therefore, bounded. Hence, ${\rho_-}$ is certainly in the Kato class, which would also be true if the $L^p$-norm of ${\rho_-}$ for suitable $p$ were small enough. However, ${\rho_-}$ is not only relatively bounded, it is really bounded. So why would one care about integrability conditions imposed on ${\rho_-}$ or even the Kato condition? Well, the answer lies in the quantitative nature of our question and in the uniform control that is possible by assuming that, e.g., the $L^p$-norm of ${\rho_-}$ for a family of metrics on $M$ obeys a suitable bound. We could, of course, use the infimum of $\rho$ as well, but that would give much weaker estimates.\
This being understood, we want to mention here the work of Gallot and co-authors in particular, who made important contributions in establishing analytic and geometric properties of Riemannian manifolds under the condition that the $L^p$-norm of ${\rho_-}$ is small enough, see [@Gallot-88; @Gallot-88_2; @Berard-88; @BerardBesson-90]. Of course, the aforementioned program on geometric consequences of integral bounds on the Ricci curvature includes much more than the papers listed above, see, e.g., the original literature as well as [@PetersenWei-97; @PetersenWei-00; @PetersenSprouse-98; @RosenbergYang-94] for more information.
A natural question that comes up is whether a Kato condition on ${\rho_-}$ is sufficient to control the heat kernel. This was established by one of us in [@Rose-16a], building on an observation made in [@ZhangZhu-15]. This is given in Section \[section\_liyau\], where we also record similar results by Carron from [@Carron-16].
We used heat kernel bounds in relating $L^p$-properties and the Kato condition in order to prove upper bounds on $b_1(M)$ in [@RoseStollmann-15] and to present conditions under which $b_1(M)=0$, generalizing earlier results by Elworthy and Rosenberg [@ElworthyRosenberg-91]. Actually, the latter reference was the starting point and main source of motivation for our above mentioned paper. Related work by different authors is collected in Section \[section\_Betti\], starting from Bochner’s seminal work [@Bochner-46].
In Section \[section-more\] we mention some more consequences that arise from the control of the Kato property of Ricci curvature.
*Acknowledgement:* The second named author expresses his thanks to Daniel, Matthias and Radek for organizing such a wonderful conference and creating an atmosphere of open and respectful exchange of ideas.
The [K]{}ato class and the [K]{}ato condition {#section_Kato}
=============================================
As mentioned in the introduction, the original definition of the property that defines the Kato class goes back to the celebrated paper [@Kato-72] and was phrased as follows: the potential $V\colon {\mathbb{R}}^n\to{\mathbb{R}}$ is required to satisfy $$\begin{aligned}
\label{KCclassical}
\lim_{r\to 0}\left[\sup_{x\in{\mathbb{R}}^n}\int_{\vert y\vert\leq r}V(x-y)\vert
y\vert^{2-n}{\mathrm{d}}y\right]= 0,\end{aligned}$$ where we assume, in all that follows, that the space dimension satisfies $n\geq 3$ in order to avoid notational technicalities. Actually, an additional growth condition at $0$ is present in Kato’s paper. The important fact to notice is that condition (\[KCclassical\]) limits possible singularities, and uniformly so, in that $V$ is convolved with a singular kernel, in fact with the kernel of the fundamental solution of the Laplacian on ${\mathbb{R}}^n$ (up to a constant). We refer to [@Simon-82], Section A2 in particular, for more details on the prehistory of the Kato condition; we wish to underline one important point and cite from the latter article, p. 453f., that the naturalness of the Kato condition for $L^p$–properties was first noticed by Aizenman and Simon [@AizenmanSimon-82] in the path integral context (i.e., using the Feynman-Kac formula and Brownian motion) and by Agmon (cited as private communication in [@Simon-82]) in the PDE context. Let us point out the following facts, which can be found in [@Simon-82], where original references are given; we write $V\in \mathcal{K}_n$, and say that $V$ is in the Kato class, provided (\[KCclassical\]) holds.
\[prop1.1\] For $W\geq 0$ the following are equivalent:
(i) $W\in\mathcal{K}_n$,
(ii) $\Vert(-\Delta+\alpha)^{-1}W\Vert_\infty\to 0$ as $\alpha\to\infty$,
(iii) $\Vert{\mathrm{e}}^{-t(-\Delta-W)}\Vert_{\infty,\infty}\to 1$ as $t\to 0$.
See Theorem A.2.1 and Proposition A.2.3 in [@Simon-82 p.454], which go back to [@AizenmanSimon-82]. The analytic properties in the latter Proposition were the starting point of a generalization of the Kato class given in [@StollmannVoigt-96]. There, the Laplacian is generalized to a selfadjoint operator $H_0$ on some $L^2(X,m)$ that is associated with a Dirichlet form (under some mild assumptions concerning $X$, see [@StollmannVoigt-96] for details). The fact that $H_0$ is associated with a Dirichlet form is equivalent to the fact that its semigroup is positivity preserving and contractive in the $L^\infty$–sense, in which case we speak of a *Markovian semigroup*.
In the latter article a Kato class of measures has been introduced and it has been shown that for this class $\hat{S}_K$ and $\mu\in\hat{S}_K$, $H_0-\mu$ can be defined by form methods and that the semigroup $\left({\mathrm{e}}^{-t(H_0-\mu)}\right)_{t\geq 0}$ shares many of the good properties of $\left({\mathrm{e}}^{-tH_0}\right)_{t\geq
0}$. The main idea is that in order to control $L^p$-$L^q$-norms, e.g., of $\left({\mathrm{e}}^{-t(H_0-\mu)}\right)_{t\geq 0}$ uniformly in $\mu=W{\mathrm{d}}x$ for nice, say bounded $W\geq 0$, the relevant quantities are the following functions; as in [@RoseStollmann-15] we omit the dependence on $H_0$ in the notation and set: $$c_{\mathrm{Kato}}(W,\alpha):=\Vert(H_0+\alpha)^{-1}W\Vert_\infty\quad\text{for
} \alpha>0$$ as well as $$b_{\mathrm{Kato}}(W,\beta):=\int_0^\beta\Vert{\mathrm{e}}^{-tH_0}W\Vert_\infty{\mathrm{d}}t\quad\text{for } \beta>0\text.$$ As mentioned above, in most cases we are interested in dealing with bounded functions. In this case, the fact that the heat semigroup $\left({\mathrm{e}}^{-tH_0}\right)_{t\geq 0}$ is Markovian implies that $\Vert{\mathrm{e}}^{-tH_0}W\Vert_\infty<\infty$ for $t>0$ as well as $\Vert(H_0+\alpha)^{-1}W\Vert_\infty<\infty$ for $\alpha>0$.\
For general measurable $W\geq 0$ we can define $$c_{\mathrm{Kato}}(W,\alpha):=\sup_{n\in{\mathbb{N}}}\Vert(H_0+\alpha)^{-1}(W\wedge
n)\Vert_\infty\in[0,\infty]$$ and say that $W$ satisfies a *Kato condition*, provided the latter quantity is finite for some $\alpha>0$.\
The quantities above are related via functional calculus: $$\begin{aligned}
\label{constantsrelation}
(1-{\mathrm{e}}^{-\alpha\beta})c_{\mathrm{Kato}}(W,\alpha)\leq
b_{\mathrm{Kato}}(W,\beta)\leq {\mathrm{e}}^{\alpha\beta}c_{\mathrm{Kato}}(W,\alpha),\end{aligned}$$ see [@Gueneysu-14], and we get that the behavior of $c_{\mathrm{Kato}}(W,\alpha)$ for $\alpha\to\infty$ controls the behavior of $b_{\mathrm{Kato}}(W,\beta)$ for $\beta\to 0$ and vice versa.
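To indicate where the first inequality in (\[constantsrelation\]) comes from, here is a sketch under the simplifying assumption that $0\leq W$ is bounded: the function $s\mapsto\Vert{\mathrm{e}}^{-sH_0}W\Vert_\infty$ is non-increasing, since ${\mathrm{e}}^{-tH_0}$ is an $L^\infty$-contraction, and writing the resolvent as a Laplace transform of the semigroup yields $$\begin{aligned}
c_{\mathrm{Kato}}(W,\alpha)&\leq\int_0^\infty{\mathrm{e}}^{-\alpha t}\Vert{\mathrm{e}}^{-tH_0}W\Vert_\infty{\mathrm{d}}t=\sum_{k=0}^\infty\int_{k\beta}^{(k+1)\beta}{\mathrm{e}}^{-\alpha t}\Vert{\mathrm{e}}^{-tH_0}W\Vert_\infty{\mathrm{d}}t\\
&\leq\sum_{k=0}^\infty{\mathrm{e}}^{-\alpha k\beta}\int_0^\beta\Vert{\mathrm{e}}^{-sH_0}W\Vert_\infty{\mathrm{d}}s=\frac{b_{\mathrm{Kato}}(W,\beta)}{1-{\mathrm{e}}^{-\alpha\beta}}\text.\end{aligned}$$ The second inequality in (\[constantsrelation\]) requires a slightly more careful argument, for which we refer to [@Gueneysu-14].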
An important property is the stability of the boundedness of the semigroup in different $L^p$–spaces under Kato perturbations. It is implicit in the equivalence (i)$\Leftrightarrow$(iii) of the above Proposition \[prop1.1\]. The following explicit estimate goes back to [@StollmannVoigt-96 Theorem 3.3], and can be found in [@RoseStollmann-15] in an equivalent dual form.
\[propultraKato\] Let $H_0$ be as above (the generator of a Markovian semigroup on $L^2(X,m)$) and $V\in L^1_{\mathrm{loc}}(X)$ such that $b_{\rm{Kato}}(V_-,\beta)=:b<1$ for some $\beta > 0$. Then $$\| {\mathrm{e}}^{-t(H_0+V)}\|_{\infty, \infty}\le \frac{1}{1-b}{\mathrm{e}}^{
t\frac{1}{\beta}\log\frac{1}{1-b}} .$$
Consequently, if $({\mathrm{e}}^{-tH_0};t\ge 0)$ is ultracontractive, i.e., maps $L^1$ to $L^\infty$, then so is $({\mathrm{e}}^{-t(H_0+V)};t\ge 0)$, a fact that can be deduced from the above and an interpolation argument, see [@StollmannVoigt-96 Theorem 5.1].
While we used an analytic set-up in the latter paper, one of the useful features of the Kato condition is that it is well suited for probabilistic techniques. So it is equally well possible to start from a given Markov process and define respective perturbations in a probabilistic manner. We refer to [@AizenmanSimon-82; @Simon-82] for the starting point and to [@KuwaeT-06; @KuwaeT-07] for more recent contributions along these lines, as well as to [@Gueneysu-14] and the literature cited in these works.\
One main point of interest here is to study the question whether the abstract version of a Kato condition as above, using quantities like $b_{\mathrm{Kato}}$, $c_{\mathrm{Kato}}$, can still be expressed in terms of kernels. In other words, whether a generalization of Proposition \[prop1.1\] holds true. The answer is yes, provided the *heat kernel* $p_t(\cdot,\cdot)$, the integral kernel of the heat semigroup, is controlled in some sense.\
First of all, a very general condition for locally integrable functions to satisfy a Kato condition was given in [@KuwaeT-07] for general Markov processes associated to a Dirichlet form in $L^2(X,m)$ on a locally compact separable metric space $(X,d)$. The authors defined the *Kato class relative to the Green kernel* of the generator. For fixed $\nu\geq\beta>0$, a non-negative function $V\in
L^1_{\mathrm{loc}}(X,m)$ belongs to this Kato class $K_{\nu,\beta}(X)$ if $$\lim_{r\to 0} \sup_{x\in X}\int_{B(x,r)}G(x,y)V(y){\mathrm{d}}m(y)=0\text,$$ where $G(x,y):=G(d(x,y))$ with $G(r)=r^{\beta-\nu}$ and $B(x,r)$ denotes the metric ball around $x\in X$ of radius $r>0$. Denote by $L^p_{\mathrm{unif}}(X)$ the set of functions $f$ such that $$\sup_{x\in X}\int_{B(x,1)}\vert f\vert^p{\mathrm{d}}m<\infty\text.$$
\[KuwaeT\] For all $\nu\geq\beta>0$, $p>\nu/\beta$, we have $$f\in
L^p_{\mathrm{unif}}(X)\quad\Rightarrow\quad \vert f\vert \in K_{\nu,\beta}(X)$$ provided there is a positive increasing function $V$ on $(0,\infty)$ such that $r\mapsto V(r)/r^\nu$ is increasing or bounded and $\sup_{x\in X}m(B(x,r))\leq
V(r)$ for all $r>0$, and the heat kernel satisfies upper and lower bounds in the following way: there exist two positive increasing functions $\varphi_1,\varphi_2\colon (0,\infty)\to (0,\infty)$ such that $$\int_1^\infty\frac 1t \max\{V(t),t^\nu\}\varphi_2(t){\mathrm{d}}t<\infty$$ and for any $x,y\in X$, $t\in (0,t_0]$, we have $$\frac{1}{t^{\nu/\beta}}\varphi_1\left(\frac{d(x,y)}{t^{1/\beta}}\right)\leq
p_t(x,y)\leq
\frac{1}{t^{\nu/\beta}}\varphi_2\left(\frac{d(x,y)}{t^{1/\beta}}\right)\text.$$
More specifically, in the case of a geodesically complete Riemannian manifold with bounded geometry, bounds on the heat kernel are explicit, leading to the following.
Let $M$ be a geodesically complete Riemannian manifold of dimension $n\geq 2$ with Ricci curvature bounded below and assume that there are $C, R>0$ such that for all $x\in M$ and $r\in(0,R]$, one has $$\operatorname{\mathop{Vol}}(B(x,r))\geq C\, r^n\text.$$ Then we have
(i) $V\in K_{n,2}(M)$ if and only if $$\lim_{t\to 0}\sup_{x\in M}\int_0^t\int_M
p_s(x,y)\vert V(y)\vert {\mathrm{dvol}}(y)\,{\mathrm{d}}s =0\text.$$
(ii) for any $p>n/2$, we have $L^p_{\mathrm{unif}}(M)\subset K_{n,2}(M)$.
In particular, [@Gueneysu-14 Corollary 2.11] then gives $L^p(M)+L^\infty(M)\subset K_{n,2}(M)$ under the same assumptions. The non-collapsing condition on the volume of balls seems strong, but it can only be avoided by replacing it with a lower bound on the heat kernel, a condition that is even stronger than the volume bound.
Let $M$ be a Riemannian manifold of dimension $n\geq 2$ and $p>n/2$.
(i) If there are $C,t_0>0$ such that for all $t\in(0,t_0]$ and all $x\in M$ we have $p_t(x,x)\leq C\, t^{-n/2}$, then, for any $V\in L^p(M)+L^\infty(M)$, $$\lim_{t\to 0}\sup_{x\in M}\int_0^t\int_M p_s(x,y)\vert V(y)\vert {\mathrm{dvol}}(y)\,{\mathrm{d}}s =0\text.$$
(ii) Let $M$ be geodesically complete and assume that there are positive constants $C_1,\ldots, C_6, t_0>0$ such that for all $t\in (0,t_0]$, $x,y\in
M$, $r>0$ one has $\operatorname{\mathop{Vol}}(B(x,r))\leq C_1r^n{\mathrm{e}}^{C_2r}$ and $$C_3t^{-n/2}{\mathrm{e}}^{-C_4\frac{d(x,y)^2}{t}}\leq p_t(x,y)\leq
C_5t^{-n/2}{\mathrm{e}}^{-C_6\frac{d(x,y)^2}{t}}\text.$$ Then, one has $$L^p_{\mathrm{unif}}(M)+L^\infty(M)\subset K_{n,2}(M)\text.$$
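The mechanism behind statements of this kind can be sketched by a direct computation. Assume, for simplicity, the on-diagonal bound $p_t(x,x)\leq C\, t^{-n/2}$ for $t\in(0,t_0]$ and $\int_M p_t(x,y){\mathrm{dvol}}(y)\leq 1$; by symmetry and the semigroup property, $p_t(x,y)=\langle p_{t/2}(x,\cdot),p_{t/2}(y,\cdot)\rangle\leq p_t(x,x)^{1/2}p_t(y,y)^{1/2}\leq C\,t^{-n/2}$, and for $0\leq V\in L^p(M)$ with $p'=p/(p-1)$, Hölder’s inequality gives $$\begin{aligned}
\Vert{\mathrm{e}}^{-t\Delta}V\Vert_\infty\leq\sup_{x\in M}\left(\int_M p_t(x,y)^{p'}{\mathrm{dvol}}(y)\right)^{1/p'}\Vert V\Vert_p\leq\left(C\,t^{-n/2}\right)^{1/p}\Vert V\Vert_p\text,\end{aligned}$$ since $\int_M p_t(x,y)^{p'}{\mathrm{dvol}}(y)\leq(C\,t^{-n/2})^{p'-1}$. As $\int_0^\beta t^{-n/2p}\,{\mathrm{d}}t<\infty$ precisely when $p>n/2$, this explains the threshold $n/2$ appearing above.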
In the special case of compact manifolds, the potentials in the Kato class can be characterized with the help of uniform heat kernel estimates, i.e., there are some constants $C,k,t_0>0$ such that $$\begin{aligned}
\label{kernelunif}
\forall\, x,y\in M, t\in (0,t_0]\colon \quad p_t(x,y)\leq C t^{-k}.\end{aligned}$$ Classically, such estimates follow from so-called isoperimetric inequalities under certain assumptions on the Ricci curvature. In [@RoseStollmann-15], the authors developed the necessary analytic framework based on [@Gallot-88_2], leading to the following:
Let $D>0$ and $q>n\geq 3$. There is an explicit constant ${\varepsilon}>0$ such that for any compact Riemannian manifold $M$ with $\dim M=n$, $\operatorname{\mathop{diam}}(M)\leq D$, and $\Vert{\rho_-}\Vert_{q/2}<{\varepsilon}$, for any $p>q/2$ there is $C>0$ such that for any $0\leq V\in L^p(M)$ we have $$c_{\mathrm{Kato}}(V,\alpha)\leq C\, \Vert V\Vert_p\,
\int_0^\infty{\mathrm{e}}^{-\alpha t}\max\{1, t^{-q/2p}\}{\mathrm{d}}t\text.$$
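The form of the right-hand side can be traced back to the uniform bound (\[kernelunif\]): Hölder’s inequality and $p_t(x,y)\leq C\,t^{-k}$ with $k=q/2$ give $\Vert{\mathrm{e}}^{-t\Delta}V\Vert_\infty\leq C'\Vert V\Vert_p\,t^{-q/2p}$ for small $t$, while for large $t$ the $L^\infty$-contractivity of the semigroup yields $\Vert{\mathrm{e}}^{-t\Delta}V\Vert_\infty\leq C''\Vert V\Vert_p$. Writing the resolvent as a Laplace transform then gives, as a sketch with constants not tracked, $$\begin{aligned}
c_{\mathrm{Kato}}(V,\alpha)\leq\int_0^\infty{\mathrm{e}}^{-\alpha t}\Vert{\mathrm{e}}^{-t\Delta}V\Vert_\infty{\mathrm{d}}t\leq C\,\Vert V\Vert_p\int_0^\infty{\mathrm{e}}^{-\alpha t}\max\{1,t^{-q/2p}\}{\mathrm{d}}t\text,\end{aligned}$$ which is precisely the shape of the asserted estimate.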
An analogous estimate also holds for $b_{\mathrm{Kato}}(V,\beta)$, either by a direct computation or by using the relation (\[constantsrelation\]). Due to the fact that the decay rate $k$ of the heat kernel in (\[kernelunif\]) depends on the integrability of the negative part of the Ricci curvature, the integral on the right-hand side only converges for potentials with a higher integrability.
Domination of semigroups, Kato’s inequality and comparison for the heat semigroup on functions and on 1–forms {#section_domination}
=============================================================================================================
We start with some historical remarks and with the famous Kato’s inequality that reads $$\begin{aligned}
\label{famKato}
\Delta\vert u\vert \leq \Re( \operatorname{sgn}\bar u )\Delta u\end{aligned}$$ according to our sign convention. Actually, in its original form as Lemma A in [@Kato-72], magnetic fields were included on the left-hand side. We should note, in passing, that (\[famKato\]) is meant in the distributional sense and it is assumed that $\Delta u\in L^1_{\mathrm{loc}}$. The interesting feature of (\[famKato\]) is that it can be expressed equivalently in terms of the following positivity property for the semigroup: $$\vert {\mathrm{e}}^{-t\Delta}f \vert\leq {\mathrm{e}}^{-t\Delta}\vert f\vert,\quad (f\in
L^2, t\geq 0)$$ which can be seen as the property that the heat semigroup *dominates itself*. To explain that, we follow the paper [@HSU-77] in introducing the necessary concepts. See also [@Simon-77; @Simon-79; @HessSchraderUhlenbrock-80] as well as [@LSW-17] and the literature cited there for more recent contributions. The absolute value $\vert f\vert$ is replaced by a more general mapping, allowing vector valued functions. We will use some terminology without explanation. All the necessary facts can be found in the articles above.\
We start with a real Hilbert space $\mathcal{K}$ and a cone $\mathcal{K}^+$ that is compatible with the inner product $\langle\cdot,\cdot\rangle$. Given another, real or complex, Hilbert space $\mathcal{H}$, a map $\vert\cdot\vert\colon\mathcal{H}\to\mathcal{K}^+$ is called an absolutely pairing map, provided the following properties hold (we write $\langle\cdot,\cdot\rangle$ for both inner products in a slight abuse of notation):
- $\forall f_1,f_2\in\mathcal{H}\colon\quad \vert\langle
f_1,f_2\rangle\vert\leq \langle\vert f_1\vert,\vert f_2\vert\rangle$,
- $\forall f\in\mathcal{H}\colon\quad \langle f,f\rangle=\langle \vert
f\vert, \vert f\vert\rangle$,
- $\forall g\in\mathcal{K}^+\exists f_2\in\mathcal{H}\colon\quad \vert
f_2\vert=g$ and $\forall f_1\in\mathcal{H}\colon\quad \langle
f_1,f_2\rangle=\langle\vert f_1\vert ,g\rangle$.
Two elements satisfying the third condition are called absolutely paired. Given an absolutely pairing map, we can talk about domination of operators that act on $\mathcal{K}$ and $\mathcal{H}$ respectively. First, however, we give the example that is important for us here: $$\mathcal{K}=L^2(M),\quad \mathcal{H}=L^2(M,\Omega^1),$$ the latter consisting of square integrable sections of the cotangent bundle, where the forms $\omega(x)$ are measured in terms of the inner product induced by the Riemannian metric, written as $\vert \omega(x)\vert$. It is not hard to see that $$\vert\cdot\vert\colon L^2(M,\Omega^1)\to L^2(M),\quad
\omega\mapsto\vert\omega(\cdot)\vert$$ is an absolutely pairing map; of course, both $L^2$-spaces are built upon the Riemannian volume.\
Going back to the general case we can say that a bounded linear operator $A$ on $\mathcal{H}$ is *dominated* by a bounded linear operator $B$ on $\mathcal{K}$, provided $$\vert Af\vert\leq B\vert f\vert \quad (f\in\mathcal{H})\text.$$ Clearly, this notion depends on the absolutely pairing map $\vert\cdot\vert$. The following powerful characterization of semigroup domination can be found in [@HSU-77 Theorem 2.15]. Let $H$ and $K$ be (the negative of) generators of the strongly continuous semigroups $T_t={\mathrm{e}}^{-tH}$ and $S_t={\mathrm{e}}^{-tK}$.
In the situation above, the following statements are equivalent:
(i) $(T_t)_{t\geq 0}$ is dominated by $(S_t)_{t\geq 0}$, i.e., $$\vert
T_tf\vert\leq S_t\vert f\vert\quad ( f\in \mathcal{H}).$$
(ii) $H$ and $K$ satisfy a generalized Kato’s inequality: For all $f_1\in\operatorname{dom}(H)$ and $f_2\in\mathcal{H}$ such that $\vert f_2\vert\in\operatorname{dom}(K^*)$ and $f_1$ and $f_2$ absolutely paired: $$\Re\langle Hf_1,f_2\rangle\geq \langle \vert f_1\vert, K^*\vert
f_2\vert\rangle.$$
Let us mention that condition (ii) has a counterpart in a version of the first Beurling-Deny criterion, see [@Simon-79] and [@LSW-17]. The comforting fact for us is that $\left({\mathrm{e}}^{-t\bar\Delta}\right)_{t\geq 0}$ and $\left({\mathrm{e}}^{-t\Delta}\right)_{t\geq 0}$ enjoy the domination relation alluded to above, where $\bar\Delta=\nabla^*\nabla$ is the Bochner-Laplacian. This has been proven in terms of a Kato inequality by Hess, Schrader and Uhlenbrock in [@HessSchraderUhlenbrock-80], so that $$\vert{\mathrm{e}}^{-t\bar\Delta}\omega\vert\leq {\mathrm{e}}^{-t\Delta}\vert \omega\vert$$ for all $t\geq 0$ and $\omega\in L^2(M,\Omega^1)$. The Weitzenböck formula and abstract results on sums of generators give $$\vert{\mathrm{e}}^{-t\Delta^1}\omega\vert\leq {\mathrm{e}}^{-t(\Delta+\rho)}\vert
\omega\vert$$ and, moreover, $$\operatorname{\mathop{Tr}}\left({\mathrm{e}}^{-t\Delta^1}\right)\leq
n\,\operatorname{\mathop{Tr}}\left({\mathrm{e}}^{-t(\Delta+\rho)}\right).$$ Both these inequalities have geometrical implications, as we already mentioned above and as we will see in more detail in Section \[section\_Betti\] below.\
We mention in passing that domination of semigroups can also be treated probabilistically, cf. [@Rosenberg-88]. This gives a nice pointwise estimate on the heat kernels, as shown in Theorem 3.5 of the latter paper by Rosenberg, which we state here in the special case under consideration. To this end, denote by $p^{(1)}_t(x,y)$ the heat kernel of the Hodge-Laplacian $\Delta^{(1)}$ and $p^{(0)}_t(x,y)$ the heat kernel of $\Delta+\rho$ acting on functions.
For all $t>0, x,y\in M$, we have $$\vert p^{(1)}_t(x,y)\vert\leq n \, p^{(0)}_t(x,y).$$
The [K]{}ato condition implies heat kernel bounds {#section_liyau}
=================================================
Proofs of the fact mentioned in the title of this section are based on the following result by Qi S. Zhang and M. Zhu:
Let $M$ be a compact Riemannian manifold of dimension $n$, and $u$ a positive solution to the heat equation $$\partial_t u=-\Delta u\text.$$ Suppose that either one of the following conditions holds:
(i) $\Vert{\rho_-}\Vert_p<\infty$ for $p>n/2$, and that there is a $c>0$ such that for all $x\in M$ and $r\in(0,1]$, we have $\operatorname{\mathop{Vol}}(B(x,r))\geq c\, r^n$.
(ii) $\sup_{x\in M}\int_M {\rho_-}^2(y)d(x,y)^{2-n}{\mathrm{dvol}}(y)<\infty$ and the heat kernel is bounded from above.
Then, for any $\alpha\in(0,1)$, there are an explicit continuous function $J\colon (0,\infty)\to (0,1)$ and $c>0$ such that $$\begin{aligned}
\label{ZhangZhu}J(t) \frac{\vert\nabla
u\vert^2}{u^2}-\frac{\partial_t u}{u}\leq \frac{c}{J(t)\, t}, \quad (t\in
(0,\infty))\text.
\end{aligned}$$
An inequality of the type (\[ZhangZhu\]) yields an explicit upper bound on the heat kernel by a now-standard technique introduced by Li and Yau in [@LiYau-86]. A thorough inspection of the reasoning in [@ZhangZhu-15] shows that the Kato condition on ${\rho_-}$ indeed implies heat kernel estimates in the following sense:
\[heatkernelkato\] Let $n\ge 3$ and $\beta>0$. For any closed Riemannian manifold $M$ of dimension $n$ satisfying $\operatorname{\mathop{diam}}(M)\leq\sqrt\beta$, and $$b:=b_{\rm{Kato}}({\rho_-},\beta)<\frac 1{16n}\text,$$ there are explicit constants $C=C(n,b,\beta)>0$ and $\kappa=\kappa(n,b,\beta)>0$ such that we have $$\begin{aligned}
\label{upperboundondiag}
p_t(x,y)\leq \frac{C}{\operatorname{\mathop{Vol}}(M)}t^{-\kappa}\quad (t\in(0,\beta^2/4], x,y\in
M)\text.\end{aligned}$$
A different version, more explicit in the sense that it fits well with the Euclidean case, was obtained independently by G. Carron. For its formulation we use the notation of the present paper:
\[heatkernelkato\_carron\] There is a constant $c_n$ depending on $n$ alone with the following property: Let $D := \operatorname{\mathop{diam}}(M)$ and $T$ the largest time such that $$b_{\rm{Kato}}({\rho_-},T)\le\frac{1}{16n} .$$ Then $$\begin{aligned}
\label{upperboundondiag_Carron}
p_t(x,x)\leq \frac{c_n}{\operatorname{\mathop{Vol}}(B(x,\sqrt{t}))}\quad (t\in(0, T/2], x\in
M)\text.\end{aligned}$$
See Corollary 3.9 in [@Carron-16] and Section 3 of the latter article. Note that the results above are closely related via the so-called *volume doubling condition*. [@Carron-16 Proposition 3.8] also shows that the volume doubling condition is indeed satisfied under the curvature assumptions of the above theorems. See also [@Rose-17] for the connection.
Bounds on the first Betti number {#section_Betti}
================================
The first Betti number, $b_1(M)$, is a tool for classifying compact Riemannian manifolds $M$ of dimension $n$. By definition, $b_1(M)$ is the dimension of the first cohomology group, $b_1(M):=\dim\mathcal{H}^1(M),$ where $\mathcal{H}^1(M)$ is the real linear space quotient of the closed differential 1-forms on $M$ by the exact forms. This group describes in some sense the $(n-1)$-dimensional holes of $M$ and is therefore actually of topological nature, clarifying its relevance for the classification of manifolds. Bochner was the first to observe that it is quite easy to derive bounds on $b_1(M)$ if the Ricci tensor is non-negative or even positive at some point of $M$. More precisely, he showed in [@Bochner-46] the following theorem, although the result is not explicitly stated in the form below:
Let $M$ be a compact Riemannian manifold of dimension $n$. If the Ricci curvature is non-negative, then $$b_1(M)\leq n\text.$$ If, additionally, there is a point in $M$ such that the Ricci curvature is strictly positive at that point, then $$b_1(M)=0\text.$$
Actually, the above theorem follows implicitly from the Weitzenböck formula $$\Delta^1=\bar\Delta+\operatorname{\mathop{Ric}},$$ where $\Delta^1$ is the Hodge-Laplacian acting on one-forms, $\bar\Delta:=\nabla^*\nabla$ the so-called rough or Bochner-Laplacian, and $\operatorname{\mathop{Ric}}$ denotes the Ricci tensor interpreted as a section of endomorphisms on the space of one forms as above. Any equivalence class in $\mathcal{H}^1(M)$ can be represented by a harmonic one-form, so that $$\dim\ker(\Delta^1)=\dim\mathcal{H}^1(M).$$ Using the non-negativity of $\operatorname{\mathop{Ric}}$ in the quadratic form sense implies directly that there are only parallel forms in $\ker(\Delta^1)$, which vanish under the additional positivity assumption on $\operatorname{\mathop{Ric}}$. This classical ansatz, the first application of what is nowadays called the Bochner method, does not seem to lead to results allowing some negative Ricci curvature. However, instead of a geometric argument it is possible to use form methods to control the kernel of $\Delta^1$. The main observation here can be found in [@ElworthyRosenberg-91] by Elworthy and Rosenberg building on the domination property established by Hess, Schrader and Uhlenbrock in [@HessSchraderUhlenbrock-80], discussed in Section \[section\_domination\] above: namely, for any square integrable section of one-forms $\omega\in L^2(M;\Omega^1)$, we have: $$\begin{aligned}
\label{sgd}
|{\mathrm{e}}^{-t\Delta^1}\omega|\leq{\mathrm{e}}^{-t(\Delta+\rho)}
|\omega|\text,\end{aligned}$$ where the norms above are taken pointwise in the cotangent bundle of $M$. If $\omega$ is harmonic, the left-hand side equals $|\omega|$. If the semigroup on the right-hand side is generated by a positive operator $\Delta+\rho>0,$ we can let $t\to \infty$, so that ${\mathrm{e}}^{-t(\Delta+\rho)}
|\omega|\to 0$ which gives $\omega=0$, yielding a method to conclude the triviality of $\ker(\Delta^1)$. The issue here is that we cannot easily treat $\rho$ as a perturbation of $\Delta$ since both of them depend on the metric tensor of $M$. Therefore, it is not trivial to get positivity of the operator $\Delta+\rho$.
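In $L^2$-terms, the argument just described amounts to a single line: if $\omega$ is a harmonic $1$-form and $\lambda_0:=\inf\sigma(\Delta+\rho)>0$, then (\[sgd\]) gives $$\begin{aligned}
\Vert\omega\Vert_2=\left\Vert\,\vert{\mathrm{e}}^{-t\Delta^1}\omega\vert\,\right\Vert_2\leq\left\Vert{\mathrm{e}}^{-t(\Delta+\rho)}\vert\omega\vert\right\Vert_2\leq{\mathrm{e}}^{-t\lambda_0}\Vert\omega\Vert_2\to 0\quad(t\to\infty),\end{aligned}$$ so that $\omega=0$ and hence $b_1(M)=0$.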
A general strategy is to control the part of the Ricci curvature lying below a certain positive threshold. Elworthy and Rosenberg derived the following theorem along these lines, with a somewhat involved proof based on Sobolev embedding theorems and eigenfunction estimates:
Let $M$ be a compact manifold, $X\subset M$, $K,K_0>0$, $\operatorname{\mathop{Ric}}\geq -K_0$ on $X$, $\operatorname{\mathop{Ric}}\geq K$ on $M\setminus X$. There exists $a>0$ depending on the quantities above such that if $$\operatorname{\mathop{Vol}}(X)<a\,\operatorname{\mathop{Vol}}(M),$$ then $b_1(M)=0$.
Unfortunately, the constant $a$ in the above theorem is far from being explicit and it also still depends on the lower bound $K$ for the Ricci tensor.
When Elworthy and Rosenberg published their article, Gallot’s article [@Gallot-88_2] from 1988 already contained a result that implies the assertion of the latter theorem. In fact, Gallot proved an estimate of the first eigenvalue of $\Delta+V$ for some potential $V$ in terms of its $L^p$-norm, so that the domination (\[sgd\]) leads to the vanishing of $b_1(M)$; we also mention [@Berard-90], in which the same basic idea is nicely explained in a slightly more restrictive context.
Rosenberg and Yang also recognized that integral bounds are the right thing to look for and arrived at the following result, Theorem 4 in [@RosenbergYang-94]:
Let $M$ be an $n$-dimensional complete Riemannian manifold. Assume that there exist constants $A, B>0$ such that for any $f\in C^\infty_c(M)$ $$\left( \int_M |f(x)|^\frac{2n}{n-2}{\mathrm{dvol}}(x)\right)^\frac{n-2}{n}\le A\int_M
|\nabla f(x)|^2{\mathrm{dvol}}(x) +
B\int_M |f(x)|^2{\mathrm{dvol}}(x) .$$ Then, whenever for some $\rho_0>0$, $$\| ({\rho}-\rho_0)_-\|_\frac{n}{2} < \min\{ A^{-1}, \rho_0B^{-1}\} ,$$ it follows that $b_1(M)=0$.
The main idea is to decompose $$\Delta+\rho=\Delta+\rho_0+({\rho}-\rho_0)\geq \Delta+\rho_0-({\rho}-\rho_0)_-,$$ which is positive as soon as $({\rho}-\rho_0)_-$ is relatively form-bounded with respect to $\Delta$ with form-bound smaller than one, and such an estimate can be deduced from a Sobolev embedding theorem. The nice fact about the latter result is that it allows a statement in the threshold case $\frac{n}{2}$ as far as integrability of $({\rho}-\rho_0)_-$ is concerned. Moreover, the argument is quite direct and does not rely on explicit heat kernel estimates, an issue we turn to next.
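Under the assumed Sobolev inequality, the positivity in this decomposition can be made quantitative; as a sketch, for $f\in C^\infty_c(M)$, $f\neq 0$, Hölder’s inequality with exponents $\frac n2$ and $\frac n{n-2}$ yields $$\begin{aligned}
\int_M({\rho}-\rho_0)_- f^2\,{\mathrm{dvol}}\leq\Vert({\rho}-\rho_0)_-\Vert_{\frac n2}\Vert f\Vert_{\frac{2n}{n-2}}^2\leq\Vert({\rho}-\rho_0)_-\Vert_{\frac n2}\left(A\Vert\nabla f\Vert_2^2+B\Vert f\Vert_2^2\right)\end{aligned}$$ and therefore $$\begin{aligned}
\langle(\Delta+\rho)f,f\rangle\geq\left(1-A\Vert({\rho}-\rho_0)_-\Vert_{\frac n2}\right)\Vert\nabla f\Vert_2^2+\left(\rho_0-B\Vert({\rho}-\rho_0)_-\Vert_{\frac n2}\right)\Vert f\Vert_2^2>0,\end{aligned}$$ as soon as $\Vert({\rho}-\rho_0)_-\Vert_{\frac n2}<\min\{A^{-1},\rho_0 B^{-1}\}$.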
Assuming the semigroup generated by $\Delta$ is ultracontractive, i.e., there are constants $C,k,t_0>0$ such that $$\Vert{\mathrm{e}}^{-t\Delta}\Vert_{1,\infty}\leq
C\,
t^{-k}, \quad t\in (0,t_0),$$ perturbation techniques based on the Kato condition lead to quantitative results as well. With the decomposition of $\Delta+\rho$ as above, the assumption of ultracontractivity makes it possible to treat $({\rho}-\rho_0)_-$ as a Kato perturbation, that is, we look for conditions ensuring
$$\begin{aligned}
\label{kato}
b_{\rm{Kato}}(({\rho}-\rho_0)_-,\rho_0^{-1}):=\int_0^{\rho_0^{-1}}\Vert
{\mathrm{e}}^{-t\Delta}({\rho}-\rho_0)_-\Vert_\infty{\mathrm{d}}t<1.\end{aligned}$$
Ultracontractivity of the heat semigroup also implies its boundedness from $L^p(M)$ to $L^\infty(M)$ for all $p\in [1,\infty]$, so that $$\begin{aligned}
b_{\rm{Kato}}(({\rho}-\rho_0)_-,\rho_0^{-1})\leq
C\,\Vert({\rho}-\rho_0)_-\Vert_p\,\int_0^{\rho_0^{-1}} t^{-k/p}{\mathrm{d}}t<1\end{aligned}$$ if $p>k$ and the $L^p$-norm on the right-hand side is small enough. This explicitly computable quantity led to the result below.
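Indeed, for any exponent $\gamma\in(0,1)$ the time integral is elementary: $$\int_0^{\rho_0^{-1}}t^{-\gamma}\,{\mathrm{d}}t=\frac{\rho_0^{\gamma-1}}{1-\gamma}\,,$$ so the smallness condition on the $L^p$-norm of $({\rho}-\rho_0)_-$ can be written out completely explicitly in terms of $C$, $\rho_0$, and the ultracontractivity exponent $k$; this is the sense in which the bound is explicitly computable.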
Let $n\in{\mathbb{N}}$, $n\geq 3$, $p>n/2$, $D>0$. There is an explicitly computable ${\varepsilon}>0$ such that for all compact Riemannian manifolds $M$ of dimension $n$, $\operatorname{\mathop{diam}}(M)\leq D$, and $$\frac{1}{\operatorname{\mathop{Vol}}(M)}\int_M{\rho_-}^p\, {\mathrm{dvol}}<{\varepsilon},$$ we have $b_1(M)=0$.
At the heart of the proof lies a deep isoperimetric estimate from [@Gallot-88_2] that holds under the assumption that the averaged $L^p$-norm of the Ricci curvature is small enough, implying ultracontractivity of the heat semigroup.
We now turn to the question of whether smallness of the Kato constant $b_{\rm{Kato}}$ alone is enough to derive bounds on $b_1(M)$. The main observation is $$\begin{aligned}
\label{trace1}
\dim\ker(\Delta^1)\leq \operatorname{\mathop{Tr}}({\mathrm{e}}^{-t\Delta^1})\leq
n\,\operatorname{\mathop{Tr}}({\mathrm{e}}^{-t(\Delta+{\rho_-})})\leq
n\,\operatorname{\mathop{Vol}}(M)\Vert{\mathrm{e}}^{-t(\Delta+{\rho_-})}\Vert_{1,\infty},\end{aligned}$$ so that we get bounds on $b_1(M)$ as soon as we can control $\Vert{\mathrm{e}}^{-t(\Delta+{\rho_-})}\Vert_{1,\infty}$. The ultracontractivity estimate is crucial here as well as the stability of ultracontractivity under Kato-class perturbations, stated in Proposition \[propultraKato\] above.
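The two outer estimates in this chain are elementary. If $0\leq\lambda_1\leq\lambda_2\leq\ldots$ denote the eigenvalues of $\Delta^1$ counted with multiplicity, then for every $t>0$ $$\dim\ker(\Delta^1)\leq\sum_{i}{\mathrm{e}}^{-t\lambda_i}=\operatorname{\mathop{Tr}}({\mathrm{e}}^{-t\Delta^1})\,,$$ since every zero eigenvalue contributes exactly $1$ to the sum. Moreover, for any lower semibounded self-adjoint operator $H$ whose semigroup has an integral kernel $p_t^H$, $$\operatorname{\mathop{Tr}}({\mathrm{e}}^{-tH})=\int_M p_t^H(x,x)\,{\mathrm{dvol}}(x)\leq \operatorname{\mathop{Vol}}(M)\,\Vert {\mathrm{e}}^{-tH}\Vert_{1,\infty}\,,$$ because the kernel is bounded pointwise by the $L^1\to L^\infty$ operator norm. The middle estimate is the semigroup domination coming from the Bochner formula $\Delta^1=\nabla^*\nabla+\operatorname{\mathop{Ric}}$, in the spirit of Hess, Schrader, and Uhlenbrock; the factor $n$ accounts for the rank of the bundle of $1$-forms.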
A little more work, putting everything together, yields the following.
Let $3\leq n<p<2q$ and $D>0$. There is an ${\varepsilon}>0$ and a constant $K(p)>0$ depending only on $p$ such that for all compact Riemannian manifolds $M$ with $\dim M=n$, $\operatorname{\mathop{diam}}(M)\leq D$, and $\Vert{\rho_-}\Vert_q\leq {\varepsilon}$, we have $$b_1(M)\leq n\cdot
\left(\frac{2}{1-{\varepsilon}^{-1}\Vert{\rho_-}\Vert_p}\right)^{2\frac{1+{\varepsilon}^{-1}
\Vert{\rho_-}\Vert_p}{1-{\varepsilon}^{-1}\Vert{\rho_-}\Vert_p}+\frac
p2}\left(1+K(p)D^\frac{p}{2}\right)\text.$$
Even though the Kato condition on the part of Ricci curvature below a positive level is sufficient to obtain the triviality of $\mathcal{H}^1(M)$, we do not yet know whether a Kato bound on the negative part of Ricci curvature implies a non-trivial bound on $b_1(M)$. Ultracontractivity is a necessary assumption for equation (\[trace1\]) to be applied and evaluated. Fortunately, Theorem \[heatkernelkato\] above shows that smallness of $b_{\rm{Kato}}({\rho_-},\beta)$ for some $\beta>0$ implies a heat kernel upper bound, giving the desired ultracontractive bound and in turn a bound on the first Betti number.
Let $n\geq 2$ and $\beta>0$. Any compact Riemannian manifold with $\dim M=n$, $\operatorname{\mathop{diam}}M\leq \sqrt\beta$, and $$b:=b_{\rm{Kato}}({\rho_-},\beta)\leq\frac{1}{16n},$$ satisfies $$b_1(M)\leq n\cdot
\left(\frac{2}{1-b}\right)^{\left(1+\frac{1}{\beta}\right)\frac{1+b}{1-b}+\frac{
1}{n-1}}{\mathrm{e}}^\frac{3n}{n-1}\text.$$
Additionally, Carron showed in [@Carron-16] that a clever improvement of the upper bound of the heat kernel together with Gromov's trick leads to the following estimate.
Let $n\geq 2$ and $\beta>0$. There is an ${\varepsilon}>0$ such that any compact Riemannian manifold with $\operatorname{\mathop{diam}}M\leq \sqrt\beta$ and $b_{\rm{Kato}}({\rho_-},\beta)<{\varepsilon}$ satisfies $b_1(M)\leq n$.
Concluding remarks {#section-more}
==================
Here we first briefly mention some other results that have been obtained under the assumption that the negative part of Ricci curvature satisfies a Kato condition.
We have already heavily cited [@Carron-16] above. Apart from the results already referred to, Carron shows, among other things and under additional assumptions, that such a curvature condition allows one to control the volume growth of balls from above, giving volume doubling and an upper bound on the volume of balls that coincides with the Euclidean case.
In [@GueneysuPallara-15], the authors extend the heat semigroup characterization of functions of bounded variation to manifolds whose Ricci curvature is not necessarily bounded below, again assuming that the negative part of Ricci curvature satisfies a Kato condition.
There is also a substantial contribution by Güneysu, who extended the concept of Kato class potentials to the context of vector-valued functions on manifolds; see, e.g., [@Gueneysu-12; @Gueneysu-17; @Gueneysu-14; @Gueneysu-10; @Gueneysu-16] and the literature cited therein.
Let us end with a meta question: while it is by now quite well understood that Kato conditions on the negative part of Ricci curvature can be used to find bounds on $b_1(M)$, as we hopefully convinced our readers above, there is still some kind of mystery in the fact that Kato class Ricci curvature actually leads to heat kernel bounds and other geometric features. In fact, for the former results, one uses domination and a Schrödinger operator that features $\rho_-$ as a potential term. For the latter, however, the operator in question is the Laplace-Beltrami operator itself, which exhibits no potential term.
Apart from the obvious fact that the proofs work: why is it true? A better understanding is certainly needed, e.g., for an extension of some of the results mentioned here to the non-compact case.
[10]{}
M. Aizenman and B. Simon. Brownian motion and [H]{}arnack inequality for [S]{}chrödinger operators. , 35(2):209–273, 1982.
P. H. Bérard. From vanishing theorems to estimating theorems: the [B]{}ochner technique revisited. , 19(2):371–406, 1988.
P.H. Bérard. A lower bound for the least eigenvalue of [$\Delta+V$]{}. , 69(3):255–259, 1990.
S. Bochner. Vector fields and [R]{}icci curvature. , 52:776–797, 1946.
P. Bérard and G. Besson. Number of bound states and estimates on some geometric invariants. , 94(2):375–396, 1990.
G. Carron. Geometric inequalities for manifolds with [R]{}icci curvature in the [K]{}ato class. 2016. https://arxiv.org/abs/1612.03027 \[math.DG\].
K. D. Elworthy and S. Rosenberg. Manifolds with wells of negative curvature. , 103(3):471–495, 1991. With an appendix by Daniel Ruberman.
S. Gallot. Inégalités isopérimétriques et analytiques sur les variétés riemanniennes. , (163-164):5–6, 31–91, 281 (1989), 1988. On the geometry of differentiable manifolds (Rome, 1986).
S. Gallot. Isoperimetric inequalities based on integral norms of [R]{}icci curvature. , (157-158):191–216, 1988. Colloque Paul L[é]{}vy sur les Processus Stochastiques (Palaiseau, 1987).
B. G[ü]{}neysu. . Dissertation, Rheinische Friedrich-Wilhelms-Universit[ä]{}t Bonn, 2010.
B. G[ü]{}neysu. Kato’s inequality and form boundedness of [K]{}ato potentials on arbitrary [R]{}iemannian manifolds. , 142(4):1289–1300, 2014.
B. G[ü]{}neysu. . Habilitation, Humboldt-Universit[ä]{}t zu Berlin, 2016.
B. Güneysu and D. Pallara. Functions with bounded variation on a class of [R]{}iemannian manifolds with [R]{}icci curvature unbounded from below. , 363(3-4):1307–1331, 2015.
B. G[ü]{}neysu and O. Post. Path integrals and the essential self-adjointness of differential operators on noncompact manifolds. , 275(1-2):331–348, 2013.
Batu Güneysu. On generalized [S]{}chrödinger semigroups. , 262(11):4639–4674, 2012.
Batu Güneysu. Heat kernels in the context of [K]{}ato potentials on arbitrary manifolds. , 46(1):119–134, 2017.
H. Hess, R. Schrader, and D. A. Uhlenbrock. Domination of semigroups and generalization of [K]{}ato’s inequality. , 44(4):893–904, 1977.
H. Hess, R. Schrader, and D. A. Uhlenbrock. Kato’s inequality and the spectral distribution of [L]{}aplacians on compact [R]{}iemannian manifolds. , 15(1):27–37 (1981), 1980.
T. Kato. Schrödinger operators with singular potentials. In [*Proceedings of the [I]{}nternational [S]{}ymposium on [P]{}artial [D]{}ifferential [E]{}quations and the [G]{}eometry of [N]{}ormed [L]{}inear [S]{}paces ([J]{}erusalem, 1972)*]{}, volume 13, pages 135–148 (1973), 1972.
K. Kuwae and M. Takahashi. Kato class functions of [M]{}arkov processes under ultracontractivity. In [*Potential theory in [M]{}atsue*]{}, volume 44 of [*Adv. Stud. Pure Math.*]{}, pages 193–202. Math. Soc. Japan, Tokyo, 2006.
K. Kuwae and M. Takahashi. Kato class measures of symmetric [M]{}arkov processes under heat kernel estimates. , 250(1):86–113, 2007.
D. [Lenz]{}, M. [Schmidt]{}, and M. [Wirth]{}. . , November 2017.
P. Li and S.-T. Yau. On the parabolic kernel of the [S]{}chrödinger operator. , 156(3-4):153–201, 1986.
P. Petersen and C. Sprouse. Integral curvature bounds, distance estimates and applications. , 50(2):269–298, 1998.
P. Petersen and G. Wei. Relative volume comparison with integral curvature bounds. , 7(6):1031–1045, 1997.
P. Petersen and G. Wei. Analysis and geometry on manifolds with integral [R]{}icci curvature bounds. [II]{}. , 353(2):457–478, 2001.
C. Rose. Li-[Y]{}au gradient estimate for compact manifolds with negative part of [R]{}icci curvature in the [K]{}ato class. 2016. https://arxiv.org/abs/1608.04221 \[math.DG\].
C. Rose. . Dissertation, Technische Universit[ä]{}t Chemnitz, 2017.
C. Rose and P. Stollmann. The [K]{}ato class on compact manifolds with integral bounds of [R]{}icci curvature. To appear in Proc. Amer. Math. Soc., arXiv:1601.07441 \[math.DG\], 2016.
S. Rosenberg. Semigroup domination and vanishing theorems. In [*Geometry of random motion ([I]{}thaca, [N]{}.[Y]{}., 1987)*]{}, volume 73 of [*Contemp. Math.*]{}, pages 287–302. Amer. Math. Soc., Providence, RI, 1988.
S. Rosenberg and D. Yang. Bounds on the fundamental group of a manifold with almost nonnegative [R]{}icci curvature. , 46(2):267–287, 1994.
B. Simon. An abstract [K]{}ato’s inequality for generators of positivity preserving semigroups. , 26(6):1067–1073, 1977.
B. Simon. Kato’s inequality and the comparison of semigroups. , 32(1):97–101, 1979.
B. Simon. Schrödinger semigroups. , 7(3):447–526, 1982.
P. Stollmann and J. Voigt. Perturbation of [D]{}irichlet forms by measures. , 5(2):109–138, 1996.
Q. Zhang and M. Zhu. Li-[Y]{}au gradient bounds under nearly optimal curvature conditions. 2015. https://arxiv.org/pdf/1511.00791v2.pdf \[math.DG\].
---
abstract: 'The quantum Heisenberg model is studied in the geometrically frustrated body-centered tetragonal lattice (BCT lattice) with antiferromagnetic interlayer coupling $J_1$ and intralayer first and second neighbor coupling $J_2$ and $J_3$. Using a fermionic representation of the spin $1/2$ operators, we introduce a variational method: each interaction term can be decoupled partially in the purely magnetic Weiss and in the spin-liquid (SL) mean-field channels. We find that the most stable variational solutions correspond to the three different possible long range magnetic orders that are respectively governed by $J_1$, $J_2$, and $J_3$. We show that magnetic and SL parameters do not coexist, and we characterize three different purely SL non-magnetic solutions that are variationally the second most stable states after the purely magnetic ones. The degeneracy lines separating the purely magnetic phases do not coincide with the ones separating the purely SL phases. This suggests that quantum fluctuations induced by the frustration between $J_1$-$J_2$-$J_3$ coupling should destroy magnetic orders and stabilize the formation of SL in large areas of parameters. The SL solution governed by $J_1$ breaks the lattice translation symmetry. This Modulated SL is associated to a commensurate ordering wave vector (1,1,1). Remarking that four different fits of experimental data on URu$_2$Si$_2$ locate this material with BCT lattice very close to the degeneracy line between $J_1$ and $J_3$ but well inside the Modulated SL, we suggest that frustration might be a key ingredient for the formation of the Hidden order phase observed in this compound. Our results also underline possible analogies between different families of correlated systems with BCT lattice, including unconventional superconductors. Also, the general variational method introduced here can be applied to any other system where interaction terms can be decoupled in two different mean-field channels.'
author:
- Sébastien Burdin
- Christopher Thomas
- Catherine Pépin
- Alvaro Ferraz
- Claudine Lacroix
bibliography:
- 'biblio\_j1j2j3\_BCT.bib'
title: 'Spin liquid versus long range magnetic order in the frustrated body-centered tetragonal lattice'
---
Introduction
============
The body-centered tetragonal (BCT) lattice is one of the 14 three-dimensional lattice types [@Kittelbook]. This standard crystalline structure is realized in several strongly correlated electron materials with unusual magnetic and transport properties. Among the heavy fermion systems [@Stewart1984; @Fulde2006], different examples of materials with rare earth atoms on a BCT lattice have been intensively studied over the last decades: in URu$_2$Si$_2$, a still mysterious Hidden order (HO) phase was discovered in 1986; it appears below the critical temperature $T_{HO}\approx 17~{\rm K}$ close to a pressure-induced antiferromagnetic (AF) transition [@Palstra1985; @Mydosh2011]; in YbRh$_2$Si$_2$ and CeRu$_2$Si$_2$, non-Fermi liquid properties are observed in the vicinity of AF quantum phase transitions that are still poorly understood [@Steglich2003; @Steglich2009; @Flouquet1988; @Knafo2009]; CeCu$_2$Si$_2$ was the first (heavy fermion) material where unconventional superconductivity was discovered in 1979 close to an AF transition [@Steglich1979]; CePd$_2$Si$_2$ also exhibits unconventional superconductivity related to an AF transition [@Lonzarich1998; @Demuer2001]. Today, each of these compounds can be considered an entire field of research in its own right. It is noticeable that the link between AF ordering and unconventional superconductivity has also been suggested in other families of correlated materials with BCT symmetry: the cuprate superconductors, discovered in 1986 by Bednorz and Muller [@Bednorz1986], whose AF insulating parent compounds include La$_2$CuO$_4$ and Sr$_2$CuO$_2$Cl$_2$. In these cases, the AF order originates from the Cu atoms that form a BCT crystal. But the relevant physics there is mainly two-dimensional, the BCT structure being only involved in the formation of square-lattice layers of Cu atoms that order antiferromagnetically.
The BCT lattice can also be considered as a prototypical three-dimensional frustrated system. Important theoretical developments were made in past years on the unconventional magnetic properties of the BCT lattice using a classical Heisenberg model. These works were motivated by the rich magnetic phase diagram of iron based materials like FePd, possibly doped with Rh, with a main focus on the competition between ferromagnetic, AF, and helical orders [@Diep1989; @Diep1989a; @Rastelli1989; @Quartu1998; @Loison2000; @Sorokin2011]. Tuning the interaction parameters made it possible to describe different phases in the XY and Heisenberg models with thermal and quantum fluctuations. It was also shown that magnetic fluctuations such as magnon excitations can help stabilize long-range order.
![\[fig:bct\] BCT lattice and the $J_1$, $J_2$, and $J_3$ interactions. Lattice constants are $a$ in the ${\bf a},{\bf b}$ directions and $c$ along ${\bf c}$.](fig01_j1j2j3_bct_01){width="0.4\columnwidth"}
In this paper, we analyze the ground states of a frustrated $J_1$-$J_2$-$J_3$ quantum Heisenberg model on a BCT lattice as illustrated by figure \[fig:bct\]. A complete exact determination of its expectedly rich phase diagram would not be realistic, so we need to make some approximations. Here, we introduce and use a variational mean-field method that allows us to decouple the Heisenberg interaction terms partially in the standard Weiss and in the modulated spin liquid (MSL) channels. Since the MSL state was initially introduced as a scenario for the HO state in URu$_2$Si$_2$ [@Pepin2011; @Thomas2013; @Montiel2013], applications to this compound are one motivation. However, the method developed here could be adapted to other correlated systems with BCT structure.
The paper is organized as follows: section \[section\_Model\] introduces the concept of MSL, the model, the mean-field decoupling, and the variational method. General results including phase diagrams are analyzed for $J_3=0$ and all $T$ in section \[ResultsJ1J2\], and for $T=0$ and all $J_3$ in section \[sec:j3mod\]. We will see how the geometric frustration that is intrinsic to the model may help stabilize a MSL ordered ground state. Applications to real correlated materials with BCT lattice are discussed in section \[Applications\].
Model and method \[section\_Model\]
===================================
The concept of modulated spin liquid (MSL)
------------------------------------------
The expression spin liquid was originally introduced in 1976 by contrast with spin glasses, in order to describe the dynamical properties of a disordered spin system [@Aharony1976]. Nonetheless, the concept of spin liquid within quantum Heisenberg models on frustrated geometries usually also refers to the Resonant Valence Bond (RVB) state proposed by Fazekas and Anderson in 1974 on the triangular lattice [@Fazekas1974]. Later, Baskaran, Zou, and Anderson proposed that RVB spin-liquid correlations could act like a magnetic glue for the Cooper pair formation in cuprate superconductors [@Anderson1987; @Baskaran1987; @Rice1993; @Wen1996]. Within this scenario, the AF Néel ordered state formed by the Cu square lattice layers in the insulating parent compounds is destabilized by charge fluctuations induced by doping on the O sites. The MSL scenario proposed for URu$_2$Si$_2$ was inspired by the spin-liquid scenario for cuprates. Even if the underlying BCT lattice is shared by these two families of systems, the microscopic physics in URu$_2$Si$_2$ is of course quite different and the long range orders involve correlations in three dimensions. Whether a system can have a true spin-liquid ground state or not has been a long-standing issue, but good evidence for possible spin-liquid ground states has been put forward for the Heisenberg model on frustrated lattices [@Canals1998; @Wen2002; @Balents2010; @Frustratedmagnetismbook]. It has also been observed from numerical calculations that spin-liquid disordered states can be very close in energy to dimer ordered states [@Yan2011; @Iqbal2013]. In general, spin dimer orders refer to bond orders that are characterized by a given periodic pattern of disconnected dimers. The proposed MSL state can be thought of as a kind of spin dimer commensurate ordered state where two different dimers may be connected to the same site.
Such a dimer ordered state may also be called a valence bond crystal [@Frustratedmagnetismbook], especially when it is characterized by bosonic triplet excitations. Here, we prefer to use the name MSL because its magnetic excitations are deconfined Abrikosov fermions.
In previous works, the competition between AF and MSL orders on a square lattice [@Pepin2011] and on a BCT lattice [@Thomas2013] was tuned phenomenologically by introducing two independent nearest neighbor couplings $J_{AF}$ and $J_{SL}$. Here, we study this competition as an intrinsic effect associated with the geometric frustration in the $J_1$-$J_2$-$J_3$ quantum Heisenberg model on the BCT lattice. We then introduce a variational method that allows us to treat the system in a mean field approximation, where the interaction on each lattice bond can be decoupled in two channels, the magnetic and the spin liquid. The relative weight of each decoupling channel is determined by minimizing the free energy of the system.
Model and method of calculation
-------------------------------
### The J1-J2-J3 model
The $J_1$-$J_2$-$J_3$ model is defined by the following quantum Heisenberg Hamiltonian: $$\begin{aligned}
H&=\sum_{\langle {\bf R},{\bf R'}\rangle}\sum_{\sigma\sigma'}J_{{\bf RR}'}\chi_{{\bf R}\sigma}^{\dagger}\chi_{{\bf R}\sigma'}\chi_{{\bf R}'\sigma'}^{\dagger}\chi_{{\bf R}'\sigma}~,
\label{eq:hheis}\end{aligned}$$ where $\chi_{{\bf R}\sigma}^{\dagger}$ ($\chi_{{\bf R}\sigma}$) is the fermionic creation (annihilation) operator representing quantum spins $1/2$, satisfying the local constraint $\sum_{\sigma=\uparrow, \downarrow}\chi_{{\bf R}\sigma}^{\dagger}\chi_{{\bf R}\sigma}=1$ at every site. The antiferromagnetic interaction $J_{{\bf RR}'}$ connects two sites ${\bf R}$ and ${\bf R}'$ on a BCT lattice, and can take three possible values $J_1,~J_2,~J_3>0$, as indicated on figure \[fig:bct\].
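The quartic form in eq. (\[eq:hheis\]) is, on each bond, nothing but the Heisenberg exchange written in Abrikosov fermions: with ${\bf S}_{\bf R}=\frac{1}{2}\sum_{\sigma\sigma'}\chi_{{\bf R}\sigma}^{\dagger}{\boldsymbol{\tau}}_{\sigma\sigma'}\chi_{{\bf R}\sigma'}$ (${\boldsymbol{\tau}}$ the Pauli matrices) and $n_{\bf R}=\sum_{\sigma}\chi_{{\bf R}\sigma}^{\dagger}\chi_{{\bf R}\sigma}$, the Fierz identity for Pauli matrices gives $$\sum_{\sigma\sigma'}\chi_{{\bf R}\sigma}^{\dagger}\chi_{{\bf R}\sigma'}\chi_{{\bf R}'\sigma'}^{\dagger}\chi_{{\bf R}'\sigma}=2\,{\bf S}_{\bf R}\cdot{\bf S}_{{\bf R}'}+\frac{1}{2}\,n_{\bf R}n_{{\bf R}'}\,,$$ so that on the physical subspace $n_{\bf R}=1$ the model is the Heisenberg antiferromagnet up to a constant. The following short sketch verifies this operator identity numerically on the four-mode Fock space of a single bond (a minimal illustration, independent of the rest of the formalism):

```python
import numpy as np

# Four fermionic modes via Jordan-Wigner:
# 0 = (R, up), 1 = (R, down), 2 = (R', up), 3 = (R', down)
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a1 = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-mode annihilator |1> -> |0>

def ann(j, n=4):
    """Annihilation operator chi_j with the Jordan-Wigner string in front."""
    mats = [Z] * j + [a1] + [I2] * (n - 1 - j)
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

chi = [ann(j) for j in range(4)]
dag = [c.conj().T for c in chi]

# Pauli matrices
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def S(site):
    """Spin components S^a = (1/2) sum_{s,s'} chi^+_s tau^a_{ss'} chi_s'."""
    o = 2 * site  # modes (o, o+1) carry up/down of this site
    return [0.5 * sum(tau[a][s, sp] * dag[o + s] @ chi[o + sp]
                      for s in (0, 1) for sp in (0, 1)) for a in range(3)]

SR, SRp = S(0), S(1)
SdotS = sum(SR[a] @ SRp[a] for a in range(3))
nR = dag[0] @ chi[0] + dag[1] @ chi[1]
nRp = dag[2] @ chi[2] + dag[3] @ chi[3]

# Quartic interaction of one bond, exactly as written in the Hamiltonian
Q = sum(dag[0 + s] @ chi[0 + sp] @ dag[2 + sp] @ chi[2 + s]
        for s in (0, 1) for sp in (0, 1))

# Fierz identity: Q = 2 S_R . S_R' + (1/2) n_R n_R'
max_err = np.abs(Q - (2 * SdotS + 0.5 * nR @ nRp)).max()
print(max_err)  # vanishes up to floating point rounding
```

The maximal deviation vanishes up to rounding, confirming that with $J_{{\bf RR}'}>0$ eq. (\[eq:hheis\]) indeed penalizes parallel spins on every bond.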
### Variational method
In a very oversimplified classical mean field approach and considering the specific connectivity of the BCT lattice, we expect that competition between different Weiss mean fields may reveal degenerate frustrated ground states. Hereafter, we go beyond this classical picture, and we introduce quantum correlation effects at a mean field level within a spin-liquid RVB-like decoupling on each bond. First we formally split the interaction term on each bond into two different contributions: $$\begin{aligned}
J_i&\equiv J_{i}^{\text{Weiss}}+J_{i}^{\text{SL}}\equiv J_i\cos^2(\alpha_i)+J_i\sin^2(\alpha_i)\,,
\label{eq:DefJ1channels}\end{aligned}$$ where $\alpha_1$, $\alpha_2$, and $\alpha_3$ are variational parameters. Hereafter, each interaction term will be treated within a mixed mean-field approximation on each bond: the mean-field decoupling will be made partially in the Weiss channel, and partially in the SL channel. The extreme cases $\alpha_i=0$ and $\alpha_i=\pi/2$ correspond to a decoupling in the purely classical Weiss channel and in the purely SL channel, respectively. In the following, the three decoupling variational parameters $\alpha_{i}\in [0, \pi/2]$ will be determined self-consistently as functions of $J_1$, $J_2$ and $J_3$ in order to minimize the free energy of the system.
### General mean-field decoupling
Generalizing the procedure developed in Refs. , and invoking the variational splitting Eq. \[eq:DefJ1channels\], the Heisenberg Hamiltonian (eq. \[eq:hheis\]) is decoupled for each bond ${\bf RR}'$ using appropriate Hubbard-Stratonovich transformations as follows: $$\begin{aligned}
&J_{i}^{\text{Weiss}}&\sum_{\sigma\sigma'}\chi_{{\bf R}\sigma}^{\dagger}\chi_{{\bf R}\sigma'}\chi_{{\bf R}'\sigma'}^{\dagger}\chi_{{\bf R}'\sigma}\nonumber\\
&&\approx
J_{i}^{\text{Weiss}}\sum_{\sigma}
\left(\sigma m_{{{\bf R}}}\chi_{{{\bf R}'}{\sigma}}^{\dag}\chi_{{{\bf R}'}{\sigma}}+\sigma m_{{{\bf R}'}}\chi_{{{\bf R}}{\sigma}}^{\dag}\chi_{{{\bf R}}{\sigma}}\right)
\nonumber\\
&&~~-2J_{i}^{\text{Weiss}}m_{\bf R}m_{{\bf R}'}~,
\label{ApproxmeanfieldWeiss}\end{aligned}$$ where $m_{\bf R}$ is the local contribution from site ${\bf R}$ to the magnetic Weiss field, with $\sigma=\uparrow,\downarrow\equiv +,-$, and : $$\begin{aligned}
&J_{i}^{\text{SL}}&\sum_{\sigma\sigma'}\chi_{{\bf R}\sigma}^{\dagger}\chi_{{\bf R}\sigma'}\chi_{{\bf R}'\sigma'}^{\dagger}\chi_{{\bf R}'\sigma}\nonumber\\
&&\approx
J_{i}^{\text{SL}}\sum_{\sigma}\left(\varphi_{{\bf RR}'}^\star\chi_{{\bf R}\sigma}^{\dagger}\chi_{{\bf R}'\sigma}+c.c.\right)
+
J_{i}^{\text{SL}}\vert \varphi_{{\bf RR}'}\vert^2~, \nonumber\\
&~&
\label{ApproxmeanfieldSL}\end{aligned}$$ where $\varphi_{{{\bf R}}{{\bf R}'}}^\star\equiv\varphi_{{{\bf R}'}{{\bf R}}}$ denotes the spin-liquid field on the bond ${{\bf R}}{{\bf R}'}$. Hereafter, the Hubbard-Stratonovitch fields are replaced by their mean-field values, which are given by free energy saddle point conditions: $$\begin{aligned}
m_{{{\bf R}}}&=&\frac{1}{2}\sum_{{\sigma}}{\sigma}\langle \chi_{{{\bf R}}{\sigma}}^{\dag}\chi_{{{\bf R}}{\sigma}}\rangle~, \\
\varphi_{{{\bf R}}{{\bf R}'}}&=&-\sum_{{\sigma}}{\langle{\chi_{{{\bf R}}{\sigma}}^{\dag}\chi_{{{\bf R}'}{\sigma}}}\rangle}~. \end{aligned}$$ The self-consistency of the mean-fields is established from the following mean-field Lagrangian: $$\begin{aligned}
{\cal L}&={\cal L}_1+{\cal L}_2+{\cal L}_3+\sum_{{\bf R}\sigma}\chi_{{\bf R}\sigma}^{\dagger}\left(\partial_\tau+\lambda_{\bf R}\right)\chi_{{\bf R}\sigma}-\sum_{\bf R}\lambda_{\bf R}~, \notag\\
&~ \label{Lagrangian}\end{aligned}$$ with $$\begin{aligned}
{\cal L}_1&\equiv
\sum_{n}\sum_{\langle {{\bf R}}\in P_n,{{\bf R}'}\in P_{n+1}\rangle}\left[
J_1^{\text{SL}}\sum_{\sigma}\left(\varphi_{{{\bf R}}{{\bf R}'}}^\star\chi_{{{\bf R}}{\sigma}}^{\dag}\chi_{{{\bf R}'}{\sigma}}+c.c\right)
\right. \notag\\
&+
J_1^{\text{Weiss}}\sum_{\sigma}\left(\sigma
m_{{{\bf R}}}\chi_{{{\bf R}'}{\sigma}}^{\dag}\chi_{{{\bf R}'}{\sigma}}+\sigma m_{{{\bf R}'}}\chi_{{{\bf R}}{\sigma}}^{\dag}\chi_{{{\bf R}}{\sigma}}\right)\notag\\
&+\left. J_1^{\text{SL}}\vert \varphi_{{{\bf R}}{{\bf R}'}}\vert^{2}-2J_{1}^{\text{Weiss}}m_{\bf R}m_{{\bf R}'}
\right]\,, \end{aligned}$$ where $P_n$ denotes sites of the planar layer $n$ oriented in the $a,b$ crystallographic directions indicated on figure \[fig:bct\], and: $$\begin{aligned}
{\cal L}_2&\equiv
\sum_{n\sigma}\sum_{\langle {{\bf R}},{{\bf R}'}\rangle\in P_n}\left[
J_2^{\text{SL}}\sum_{\sigma}\left(\varphi_{{{\bf R}}{{\bf R}'}}^\star\chi_{{{\bf R}}{\sigma}}^{\dag}\chi_{{{\bf R}'}{\sigma}}+c.c.\right)\right. \notag\\
&+
J_2^{\text{Weiss}}\sum_{\sigma}\left(\sigma m_{{{\bf R}}}\chi_{{{\bf R}'}{\sigma}}^{\dag}\chi_{{{\bf R}'}{\sigma}}+\sigma m_{{{\bf R}'}}\chi_{{{\bf R}}{\sigma}}^{\dag}\chi_{{{\bf R}}{\sigma}}\right)\notag\\
&+\left. J_2^{\text{SL}}\vert \varphi_{{{\bf R}}{{\bf R}'}}\vert^{2}-2J_2^{\text{Weiss}}m_{\bf R}m_{{\bf R}'}\right]\,,
\notag\\
&~\notag\\
{\cal L}_3&\equiv
\sum_{n\sigma}\sum_{\langle \langle{{\bf R}},{{\bf R}'}\rangle\rangle\in P_n}\left[
J_3^{\text{SL}}\sum_{\sigma}\left(\varphi_{{{\bf R}}{{\bf R}'}}^\star\chi_{{{\bf R}}{\sigma}}^{\dag}\chi_{{{\bf R}'}{\sigma}}+c.c.\right)\right. \notag\\
&+
J_3^{\text{Weiss}}\sum_{\sigma}\left(\sigma m_{{{\bf R}}}\chi_{{{\bf R}'}{\sigma}}^{\dag}\chi_{{{\bf R}'}{\sigma}}+\sigma m_{{{\bf R}'}}\chi_{{{\bf R}}{\sigma}}^{\dag}\chi_{{{\bf R}}{\sigma}}\right)\notag\\
&+\left. J_3^{\text{SL}}\vert \varphi_{{{\bf R}}{{\bf R}'}}\vert^{2}-2J_3^{\text{Weiss}}m_{\bf R}m_{{\bf R}'}\right]\,. \end{aligned}$$ In these expressions of ${\cal L}_1$, ${\cal L}_2$, and ${\cal L}_3$, the sums over bonds ${{\bf R}},{{\bf R}'}$ are taken with the same connectivity as the couplings $J_1$, $J_2$, and $J_3$ respectively, which is indicated on figure \[fig:bct\]: in ${\cal L}_1$ the bonds are nearest neighbors in two different planes $P_n$ and $P_{n+1}$, in ${\cal L}_2$ the bonds are nearest neighbors in the same plane $P_n$, and in ${\cal L}_3$ the bonds are second nearest neighbors in the same plane. The convention used in these notations is that each pair ${{\bf R}}{{\bf R}'}$ is summed only once.
In the following, we will make an Ansatz for the mean-field parameters $m_{{{\bf R}}}$ and $\varphi_{{{\bf R}}{{\bf R}'}}$, which will generalize the approach of Ref. . This first requires introducing space Fourier transforms and using the momentum representation of the fermionic operators: $$\begin{aligned}
\chi_{{\bf k}\sigma}\equiv\frac{1}{\sqrt{N}}\sum_{\bf R}e^{-i{\bf k}\cdot{\bf R}}
\chi_{{\bf R}\sigma}\,, \end{aligned}$$ where $N$ is the number of lattice sites. The inverse relation is $$\begin{aligned}
\chi_{{\bf R}\sigma}\equiv\frac{1}{\sqrt{N}}\sum_{{{\bf k}}\in\text{\bf BZ}_{\text{site}}^{\text{BCT}}}e^{i{\bf k}\cdot{\bf R}}\chi_{{\bf k}\sigma}\,. \end{aligned}$$ Here, $\text{\bf BZ}_{\text{site}}^{\text{BCT}}$ refers to the first Brillouin zone of the BCT lattice of sites. This precision will be useful later since other Brillouin zones will emerge from the dual lattices made of inplane and interplane bonds (see appendix \[Apendixduallattice\]). We define the mean-fields in reciprocal space as: $$\begin{aligned}
m_{\bf k}&\equiv&\frac{1}{\sqrt{N}}\sum_{\bf R}e^{-i{\bf k}\cdot{\bf R}}m_{\bf R}\,,
\label{Defmk}\\
\varphi_{\bf q}^{1}&\equiv&\frac{e^{i\theta_{\bf q}}}{2\sqrt{N}}\sum_{n}
\sum_{\langle {\bf R}\in L_n, {\bf R}'\in L_{n+1}\rangle}e^{-i{\bf q}\cdot\left( \frac{{\bf R}+{\bf R}'}{2}\right)}\varphi_{{{\bf R}}{{\bf R}'}}^\star\,, \notag\\
&&~\label{Defvarphiq1}\\
\varphi_{\bf q}^{2}&\equiv&\frac{1}{\sqrt{2N}}\sum_{n}\sum_{\langle {\bf R}, {\bf R}'\rangle\in L_{n}}e^{-i{\bf q}\cdot\left( \frac{{\bf R}+{\bf R}'}{2}\right)}\varphi_{{{\bf R}}{{\bf R}'}}^\star\,,\label{Defvarphiq2}\\
\varphi_{\bf q}^{3}&\equiv&\frac{1}{\sqrt{2N}}\sum_{n}\sum_{\langle\langle {\bf R}, {\bf R}'\rangle\rangle\in L_{n}}e^{-i{\bf q}\cdot\left( \frac{{\bf R}+{\bf R}'}{2}\right)}\varphi_{{{\bf R}}{{\bf R}'}}^\star\,. \label{Defvarphiq3}\end{aligned}$$ Here, a phase factor $\theta_{\bf q}\equiv {\bf q}\cdot{\bf R_0}$ is introduced for the interlayer spin-liquid field $\varphi_{\bf q}^{1}$ in order to fix the origin of the interplane bond lattice at real space position ${\bf R_0}\equiv ({\bf a}+{\bf b}+{\bf c})/4$. Such global phase factors could be included for convenience in each mean-field. The site and bond dependence of the mean-fields can be recovered by the reciprocal Fourier relations: $$\begin{aligned}
m_{\bf R}\equiv\frac{1}{\sqrt{N}}\sum_{{{\bf k}}\in\text{\bf BZ}_{\text{site}}^{\text{BCT}}}e^{i{\bf k}\cdot{\bf R}}m_{\bf k}\,,\label{TFmag}\end{aligned}$$ and $$\varphi_{{{\bf R}}{{\bf R}'}}=
\begin{array}{|ll}
\varphi_{{{\bf R}}{{\bf R}'}}^i&\text{if }{{\bf R}}\text{ and }{{\bf R}'}\text{ are connected by }J_i\\
~~&\\
0&\text{else}~,
\end{array}$$ with $$\begin{aligned}
\varphi_{{{\bf R}'}{{\bf R}}}^{1}&\equiv\frac{1}{2\sqrt{N}}\sum_{{{\bf q}}\in\text{\bf BZ}_{\text{bond}}^{1}}e^{i{\bf q}\cdot\left(\frac{{\bf R}+{\bf R}'}{2}\right){-i\theta_{\bf q}}}\varphi_{\bf q}^{1}\,, \label{TFphi1}\\
\varphi_{{{\bf R}'}{{\bf R}}}^{2}&\equiv\frac{1}{\sqrt{2N}}\sum_{{{\bf q}}\in\text{\bf BZ}_{\text{bond}}^{2}}e^{i{\bf q}\cdot\left( \frac{{\bf R}+{\bf R}'}{2}\right)}\varphi_{\bf q}^{2}\,,\label{TFphi2}\\
\varphi_{{{\bf R}'}{{\bf R}}}^{3}&\equiv&\frac{1}{\sqrt{2N}}\sum_{{{\bf q}}\in\text{\bf BZ}_{\text{bond}}^{3}}e^{i{\bf q}\cdot\left( \frac{{\bf R}+{\bf R}'}{2}\right)}\varphi_{\bf q}^{3}\,. \label{TFphi3}\end{aligned}$$ The different Brillouin zones emerging here from the dual lattices of bonds are defined and discussed in appendix \[Apendixduallattice\]. At this general stage, the number of mean-field variables that can be considered is still huge. Concerning the Weiss mean-fields $m_{{\bf k}}$, we consider here magnetic structures described by a single$-{\bf k}$ ordering wave-vector ${\bf Q}_{\text{AF}}$, excluding multi$-{\bf k}$ structures. Hereafter, we will generalize this classical mean-field approach by making a similar Ansatz for the bond spin liquid mean-fields.
### Mean-field Ansatz
Hereafter, the Weiss and spin liquid mean-fields are approximated using the following Ansatz: $$\begin{aligned}
m_{\bf R}&=&S_{{\bf Q}_{\text{AF}}}e^{i{\bf Q}_{\text{AF}}\cdot{\bf R}}~, \label{Ansatzm}\\
\varphi_{{{\bf R}}{{\bf R}'}}^{1}&=&\frac{1}{2}\Big[\Phi_1+ie^{i{{\bf Q}}\cdot\big(\frac{{{\bf R}}+{{\bf R}'}}{2}\big)}\Phi_{{{\bf Q}}}\Big]~,
\label{Ansatzvarphi1}\\
\varphi_{{{\bf R}}{{\bf R}'}}^{2}&=&\Phi_2~, \label{Ansatzvarphi2}\\
\varphi_{{{\bf R}}{{\bf R}'}}^{3}&=&\Phi_3~. \label{Ansatzvarphi3}\end{aligned}$$ Here, $S_{{\bf Q}_{\text{AF}}}$ is the staggered magnetization characterizing an AF order. The ordering wave-vector ${\bf Q}_{\text{AF}}$ will be fixed by minimization of the spin-wave spectrum resulting from the Weiss field. The three fields $\Phi_1$, $\Phi_2$, and $\Phi_3$ correspond to the homogeneous parts of the spin liquid terms along the three kinds of bonds that are considered here. The emergence of three such homogeneous spin liquid fields is a natural BCT lattice generalization of the RVB decoupling introduced initially on the triangular lattice [@Fazekas1974] and later on the square lattice [@Anderson1987; @Baskaran1987]. The extra term $\Phi_{\bf Q}$ included in this Ansatz takes into account a possible spatial modulation of the spin liquid field. The specific choice of this spin-liquid modulation is motivated by previous work of Ref. , where only the interplane spin-liquid term $\varphi_{{{\bf R}}{{\bf R}'}}^{1}$ was considered. This modulation is defined on the bond lattice by a wave-vector ${\bf Q}$, and it can lower the lattice translation symmetry. Invoking the momentum representation given by Eqs. (\[Defmk\], \[Defvarphiq1\], \[Defvarphiq2\], \[Defvarphiq3\]), the mean-field Ansatz Eqs. (\[Ansatzm\], \[Ansatzvarphi1\], \[Ansatzvarphi2\], \[Ansatzvarphi3\]) can be expressed as $$\begin{aligned}
m_{{\bf k}}&=&S_{{\bf Q}_{\text{AF}}}\sqrt{N}\delta({{\bf k}}-{\bf Q}_{\text{AF}})~, \\
\varphi_{\bf q}^1&=&\Phi_1\sqrt{N}\delta({\bf q})+\Phi_{\bf Q}\sqrt{N}\delta({\bf q}-{\bf Q})~, \\
\varphi_{\bf q}^2&=&\Phi_2\sqrt{2N}\delta({\bf q})~,\\
\varphi_{\bf q}^3&=&\Phi_3\sqrt{2N}\delta({\bf q})~, \end{aligned}$$ where $\delta({\bf q})$ denotes the Dirac distribution. We also assume a homogeneous and constant Lagrange multiplier $\lambda_{\bf R}=\lambda_{0}$. Finally, within this mean-field Ansatz, the Lagrangian (\[Lagrangian\]) can be expressed explicitly in terms of the ${{\bf k}}$-dependent fermions as $$\begin{aligned}
{\cal L}&=\sum_{{\sigma}{{\bf k}}}\chi_{{{\bf k}}{\sigma}}^{\dag}\Big(\partial_{\tau}+\lambda_{0}\Big)\chi_{{{\bf k}}{\sigma}}
+N\lambda_0
\notag \\
&\,+4J_{1}^{\text{SL}}\Phi_1\sum_{{\sigma}{{\bf k}}}\gamma_{1,{{\bf k}}}\,\chi_{{{\bf k}}{\sigma}}^{\dag}\chi_{{{\bf k}}{\sigma}}+NJ_1^{\text{SL}}\left(\vert\Phi_1\vert^2+\vert\Phi_{{{\bf Q}}}\vert^2\right)\notag\\
&\,+2J_{1}^{\text{SL}}e^{-i\theta_{{{\bf Q}}}}\Phi_{{{\bf Q}}}\sum_{{\sigma}{{\bf k}}}\gamma_{{{\bf Q}},{{\bf k}}}\left[\chi_{{{\bf k}}{\sigma}}^{\dag}\chi_{{{\bf k}}+{{\bf Q}},{\sigma}}+c.c\right]\notag\\
&\,+2J_{2}^{\text{SL}}\Phi_2\sum_{{\sigma}{{\bf k}}}\gamma_{2,{{\bf k}}}\,\chi_{{{\bf k}}{\sigma}}^{\dag}\chi_{{{\bf k}}{\sigma}}+2NJ_2^{\text{SL}}\vert \Phi_2\vert^2\notag \\
&\,+4J_{3}^{\text{SL}}\Phi_3\sum_{{\sigma}{{\bf k}}}\gamma_{3,{{\bf k}}}\,\chi_{{{\bf k}}{\sigma}}^{\dag}\chi_{{{\bf k}}{\sigma}}+2NJ_3^{\text{SL}}\vert \Phi_3\vert^2\notag \\
&\,+\sum_{{\sigma}{{\bf k}}}{\sigma}J_{{{\bf Q}}_{\text{AF}}}S_{{{\bf Q}}_{\text{AF}}}\chi_{{{\bf k}}{\sigma}}^{\dag}\chi_{{{\bf k}}+{{\bf Q}}_{\text{AF}},{\sigma}}-NJ_{{{\bf Q}}_{\text{AF}}}\left|S_{{{\bf Q}}_{\text{AF}}}\right|^2\,,
\label{MeanfieldLagrangian}\end{aligned}$$ where the effective spin-wave dispersion is $$\begin{aligned}
J_{{{\bf Q}}_{\text{AF}}}&\equiv 8J_1^{\text{Weiss}}\gamma_{1,{{\bf Q}}_{\text{AF}}}+2J_2^{\text{Weiss}}\gamma_{2,{{\bf Q}}_{\text{AF}}}
+4J_3^{\text{Weiss}}\gamma_{3,{{\bf Q}}_{\text{AF}}}\,, \label{DefJQAF}\end{aligned}$$ and the effective dispersions resulting from the spin-liquid decoupling are given by: $$\begin{aligned}
\gamma_{1,{{\bf k}}}&\equiv\cos{\left(\frac{k_xa}{2}\right)}\cos{\left(\frac{k_ya}{2}\right)}
\cos{\left(\frac{k_zc}{2}\right)}\,,\label{defgamma1k}\\
\gamma_{{{\bf Q}},{{\bf k}}}&\equiv\gamma_{1,{{\bf k}}+{{\bf Q}}/2}\,,\label{defgammaQk}\\
\gamma_{2,{{\bf k}}}&\equiv\cos{(k_xa)}+\cos{(k_ya)}\,,\label{defgamma2k}\\
\gamma_{3,{{\bf k}}}&\equiv\cos{(k_xa)}\cos{(k_ya)}\,.\label{defgamma3k}\end{aligned}$$ The values considered for ${{\bf Q}}_{\text{AF}}$ will be those that minimize the spin-wave dispersion $J_{{{\bf Q}}_{\text{AF}}}$. Hereafter, we will restrict the analysis to some specific modulating vectors ${\bf Q}$ in $\text{\bf BZ}_{\text{bond}}^{1}$ that are equivalent to ${{\bf Q}}_{\text{AF}}$ in $\text{\bf BZ}_{\text{site}}^{\text{BCT}}$ (definitions of the various Brillouin zones are discussed in appendix \[Apendixduallattice\]). One key assumption made in the following is that we consider only symmetry breakings that lead to commensurate order with a doubling of the unit cell. This restrictive but realistic assumption has a crucial simplifying consequence: $2{\bf Q}$, ${\bf Q}+{{\bf Q}}_{\text{AF}}$, and $2{{\bf Q}}_{\text{AF}}$ are all equivalent to ${\bf 0}$. In the Lagrangian, the MSL and AF terms correlate fermions of momentum ${\bf k}$ with fermions of momenta ${\bf k}+{\bf Q}$ and ${\bf k}+{{\bf Q}}_{\text{AF}}$. Therefore, no new harmonics are generated by these interactions, since the second harmonics would simply correlate momenta ${\bf k}+{\bf Q}$ and ${\bf k}+{{\bf Q}}_{\text{AF}}$ back with ${\bf k}$. More solutions could be obtained by considering non-equivalent ${\bf Q}$ and ${{\bf Q}}_{\text{AF}}$, but such solutions would correspond to a lowering of the lattice symmetry associated with a larger unit cell made of more than two atoms.
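The selection of ${\bf Q}_{\text{AF}}$ by minimization of the spin-wave dispersion can be sketched numerically. The following Python fragment evaluates $J_{{{\bf Q}}_{\text{AF}}}$ of Eq. (\[DefJQAF\]) at three commensurate candidate wave-vectors; lattice constants are set to $a=c=1$ and the coupling values are illustrative choices, not fitted parameters.

```python
import numpy as np

# Structure factors of Eqs. (defgamma1k)-(defgamma3k), with a = c = 1
def g1(k):
    return np.cos(k[0] / 2) * np.cos(k[1] / 2) * np.cos(k[2] / 2)

def g2(k):
    return np.cos(k[0]) + np.cos(k[1])

def g3(k):
    return np.cos(k[0]) * np.cos(k[1])

# Spin-wave dispersion J_Q = 8 J1 g1(Q) + 2 J2 g2(Q) + 4 J3 g3(Q)
def J_Q(k, J1, J2, J3):
    return 8 * J1 * g1(k) + 2 * J2 * g2(k) + 4 * J3 * g3(k)

# Commensurate candidates in the reduced notation Q = (h, k, l),
# i.e. Q = 2*pi*(h, k, l) for a = c = 1
candidates = {
    "I":   2 * np.pi * np.array([1.0, 1.0, 1.0]),
    "II":  2 * np.pi * np.array([0.5, 0.5, 0.0]),
    "III": 2 * np.pi * np.array([0.5, 0.0, 0.0]),
}

def best_Q(J1, J2, J3):
    """Candidate that minimizes the spin-wave dispersion."""
    return min(candidates, key=lambda n: J_Q(candidates[n], J1, J2, J3))

# For J3 = 0 the minimum switches between Q^I and Q^II around J1 = J2
print(best_Q(1.0, 0.5, 0.0), best_Q(1.0, 1.5, 0.0))
```

At these illustrative points one finds ${\bf Q}_{\text{AF}}^{\rm I}$ for $J_2<J_1$ and ${\bf Q}_{\text{AF}}^{\rm II}$ for $J_2>J_1$, with the two dispersions degenerate at $J_1=J_2$, consistent with the classical $J_3=0$ phase boundary discussed below.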
### Free energy functional
Invoking the assumptions ${\bf Q}={{\bf Q}}_{\text{AF}}$ and $2{{\bf Q}}_{\text{AF}}={\bf 0}$, the free energy can be expressed from the mean-field Lagrangian Eq. (\[MeanfieldLagrangian\]) as $$\begin{aligned}
\label{eq:totalfj1j2j3}
F&(\alpha_1,\alpha_2,\alpha_3,\lambda_0,\Phi_{1},\Phi_{{{\bf Q}}},\Phi_2,\Phi_3,S_{{{\bf Q}}_{\text{AF}}})= \notag\\&
-\frac{k_B T}{2N}\sum_{{{\bf k}}\in\text{\bf BZ}_{\text{site}}^{\text{BCT}}}\sum_{{\sigma}, s=\pm}\ln{\left(1+e^{-\beta\Omega_{{{\bf k}}}^{s}}\right)}-\lambda_{0}
-J_{{{{\bf Q}}_{\text{AF}}}}\vert S_{{{\bf Q}}_{\text{AF}}}\vert^{2}\notag\\&
+J_{1}^{\text{SL}}\left(\vert\Phi_{1}\vert^{2}+\vert\Phi_{{{\bf Q}}}\vert^{2}\right)
+2J_{2}^{\text{SL}}\vert\Phi_{2}\vert^{2}+2J_{3}^{\text{SL}}\vert\Phi_{3}\vert^{2}\,, \end{aligned}$$ where the eigenenergies involved are given by $$\begin{aligned}
&\Omega_{{{\bf k}}}^{\pm}=\lambda_0+2J_2^{\text{SL}}\gamma_{2,{{\bf k}}}\Phi_2+4J_3^{\text{SL}}\gamma_{3,{{\bf k}}}\Phi_3\notag \\&
\pm
\sqrt{
(J_{{{{\bf Q}}_{\text{AF}}}})^2\vert S_{{{\bf Q}}_{\text{AF}}}\vert^2+16(J_1^{\text{SL}})^2\big[(\gamma_{1,{{\bf k}}})^2\vert\Phi_{1}\vert^{2}+(\gamma_{{{\bf Q}},{{\bf k}}})^2\vert\Phi_{{{\bf Q}}}\vert^{2}\big]
}\,.
$$ The explicit dependence of the free energy on the variational decoupling fields $\alpha_1$, $\alpha_2$, and $\alpha_3$ is obtained from the definition Eq. (\[eq:DefJ1channels\]) by identifying $J_{i}^{\text{Weiss}}=J_i\cos^2{(\alpha_i)}$ and $J_{i}^{\text{SL}}=J_i\sin^2{(\alpha_i)}$. The Weiss field and spin liquid dispersion terms are given by Eqs. (\[DefJQAF\], \[defgamma1k\], \[defgammaQk\], \[defgamma2k\], \[defgamma3k\]). The mean-field and variational parameters correspond to the minima of the free energy.
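Before minimization, the free energy Eq. (\[eq:totalfj1j2j3\]) can be evaluated on a discrete momentum grid. The Python sketch below is a schematic transcription for the case ${\bf Q}={\bf Q}_{\text{AF}}=(1,1,1)$ with real mean-fields and $k_B=1$; the cubic $k$-mesh with $a=c=1$ is an illustrative simplification of the true BCT Brillouin zone discussed in the appendix.

```python
import numpy as np

def free_energy(lam0, Phi1, PhiQ, Phi2, Phi3, S, J_QAF,
                J1sl, J2sl, J3sl, T, L=16):
    """Schematic mean-field free energy per site on an L^3 cubic k-mesh."""
    k = 2 * np.pi * np.arange(L) / L
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    g1 = np.cos(kx / 2) * np.cos(ky / 2) * np.cos(kz / 2)
    g2 = np.cos(kx) + np.cos(ky)
    g3 = np.cos(kx) * np.cos(ky)
    # gamma_{Q,k} = gamma_{1,k+Q/2}; for Q = (1,1,1) each half-argument is
    # shifted by pi/2, turning the cosines into minus sines
    gQ = -np.sin(kx / 2) * np.sin(ky / 2) * np.sin(kz / 2)
    base = lam0 + 2 * J2sl * g2 * Phi2 + 4 * J3sl * g3 * Phi3
    root = np.sqrt((J_QAF * S) ** 2
                   + 16 * J1sl ** 2 * (g1 ** 2 * Phi1 ** 2
                                       + gQ ** 2 * PhiQ ** 2))
    F = 0.0
    # The spin sum (factor 2) cancels the 1/2 prefactor, leaving a sum
    # over k and the two branches s = +/-
    for Om in (base + root, base - root):
        F -= T * np.mean(np.log1p(np.exp(-Om / T)))
    F += (-lam0 - J_QAF * S ** 2
          + J1sl * (Phi1 ** 2 + PhiQ ** 2)
          + 2 * J2sl * Phi2 ** 2 + 2 * J3sl * Phi3 ** 2)
    return F

# With all mean-fields switched off the two branches are degenerate and
# F reduces to the closed form -2*T*ln(1 + exp(-lam0/T)) - lam0
F0 = free_energy(0.5, 0, 0, 0, 0, 0, J_QAF=-4.0,
                 J1sl=1.0, J2sl=0.5, J3sl=0.0, T=0.1)
```

The fully decoupled limit provides a convenient consistency check of the normalization of the momentum sum.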
Temperature phase diagram for J3=0 \[ResultsJ1J2\]
==================================================
Before analyzing the ground state of the $J_1$-$J_2$-$J_3$ model, we start with the simplified situation where $J_3=0$. In this section we are thus not concerned with the fields $\alpha_3$ and $\Phi_3$. Hereafter, we use the reduced notation ${\bf Q}\equiv (h,k,l)$ for the ordering wave-vectors ${\bf Q}=2\pi(h/a,k/a,l/c)$. When stable, all magnetic phases are analyzed for the wave vectors ${{{\bf Q}}_{\text{AF}}}^{\rm I}=(1,1,1)$ and ${{{\bf Q}}_{\text{AF}}}^{\rm II}=(1/2,1/2,0)$, that correspond to the classical magnetic solution, i.e., with $\alpha_1=\alpha_2=0$. Experimental examples of these two kinds of classical Néel orders in BCT lattices are realized in the AF phases of URu$_2$Si$_2$ and cuprates for ${{{\bf Q}}_{\text{AF}}}^{\rm I}$ and ${{{\bf Q}}_{\text{AF}}}^{\rm II}$, respectively.
Method of calculation for J3=0
------------------------------
In order to find the stable configuration for the $J_3=0$ case, we have to minimize the free energy functional Eq. . It is first minimized as much as possible analytically as a function of the variational decoupling fields $\alpha_1$ and $\alpha_2$. To do this, we start by expressing the seven saddle point relations for $F(\alpha_1,\alpha_2,\lambda_0,\Phi_{1},\Phi_{{{\bf Q}}},\Phi_2,S_{{{\bf Q}}_{\text{AF}}})$. The resulting system of equations is detailed in appendix \[sec:sadpoint\], and invokes several formal sums over momenta ${{\bf k}}$. After nontrivial but straightforward algebraic transformations, this system can be rewritten as seven equations (\[eq:sa1\]-\[eq:slam\]) that involve five independent sums over ${{\bf k}}$. Explicit expressions of these five sums are given in Eqs. (\[DefAlambda0\]-\[DefASQ\]). The resolution of this system in general requires a numerical approach, but we also find some trivial solutions that may have a physical meaning. Hereafter, we analyze more precisely the trivial solutions that are obtained when the variational decoupling parameters $\alpha_1$ and $\alpha_2$ take the extreme values $0$ or $\pi/2$. Physically, such trivial solutions correspond to decoupling the corresponding Heisenberg interaction term (with $J_1$ or with $J_2$) in a pure channel that is either Weiss or spin-liquid.
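The kind of numerical approach involved can be illustrated on a much smaller problem. The damped fixed-point loop below solves a single Weiss-type self-consistency equation, $S=\frac{1}{2}\tanh(\beta J_{\rm eff} S)$, as a toy stand-in for the seven-equation saddle-point system of the appendix; the effective coupling, damping, and tolerance values are illustrative.

```python
import numpy as np

def solve_selfconsistent(Jeff, T, S0=0.4, damping=0.5, tol=1e-10, itmax=10000):
    """Damped fixed-point iteration for S = (1/2) tanh(beta * Jeff * S).

    Toy illustration of the self-consistent iterations used to solve
    saddle-point systems; not the actual seven-equation system."""
    beta = 1.0 / T
    S = S0
    for _ in range(itmax):
        S_new = 0.5 * np.tanh(beta * Jeff * S)
        if abs(S_new - S) < tol:
            return S_new
        # Damped update: mixing old and new values stabilizes convergence
        S = (1 - damping) * S + damping * S_new
    raise RuntimeError("fixed-point iteration did not converge")

# Below the mean-field transition (beta * Jeff / 2 > 1) a non-zero
# solution exists; above it only S = 0 survives
S_low = solve_selfconsistent(Jeff=1.0, T=0.2)
S_high = solve_selfconsistent(Jeff=1.0, T=1.0)
```

The same damped-mixing strategy carries over to the full system, where each saddle-point equation updates one mean-field parameter per iteration.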
$\alpha_1$ $\alpha_2$
-------- ---------------- ----------------
Case A $0$ or $\pi/2$ $0$ or $\pi/2$
Case B $0$ or $\pi/2$ free parameter
Case C free parameter $0$ or $\pi/2$
Case D free parameter free parameter
: Characteristics of the four possible cases for the variational decoupling parameters $\alpha_1$ and $\alpha_2$. []{data-label="Table1"}
Hereafter, we analyze the possible solutions by considering the four different cases as defined in table \[Table1\]:
### Case A
Here, we consider extreme values for $\alpha_1$ and $\alpha_2$ so that $\sin{(2\alpha_1)}=\sin{(2\alpha_2)}=0$. The saddle point equations (\[eq:sa1\]) and (\[eq:sa2\]) are thus trivially satisfied and we are left with Eqs. (\[eq:sphi\]-\[eq:slam\]). Among these five remaining equations, some may also be satisfied trivially.
The sub-case $(\alpha_1,\alpha_2)=(0,0)$ corresponds to the classical mean Weiss field approximation. The two possible antiferromagnetic ground states compete, characterized respectively by the ordering wave-vectors ${{{\bf Q}}_{\text{AF}}}^{\rm I}$ and ${{{\bf Q}}_{\text{AF}}}^{\rm II}$. The corresponding temperature-coupling classical phase diagram is depicted in figure \[fig:j1j2magclas\] as a function of the dimensionless parameters $T/J_1$ and $J_2/J_1$. The classical phase transition between these two kinds of AF orders is realized at finite temperature when $J_1=J_2$.
The sub-case $(\alpha_1,\alpha_2)=(0,\pi/2)$ does not correspond to a physically realistic situation. Indeed, decoupling $J_1$ in the pure Weiss channel and $J_2$ in the pure spin liquid channel artificially bypasses the underlying frustration problem. Such a solution induces ferromagnetic planes coupled antiferromagnetically among them, which is compatible with the $J_1$ interaction. But the inplane spin liquid term $\Phi_2$ vanishes, leading to an unphysical $J_2-$independent solution. It will not be considered in the following.
For $(\alpha_1,\alpha_2)=(\pi/2,0)$, the interplane SL field competes with the inplane magnetization Weiss field with ${{{\bf Q}}_{\text{AF}}}^{\rm II}=(1/2,1/2,0)$. Here, since we restrict our analysis to commensurate orders with at most a doubling of the unit cell, we enforce $\Phi_{\bf Q}=0$. The phase diagram presents a pure homogeneous SL solution with only $\Phi_1$ non-zero for $J_2/J_1\lesssim0.3$, and a purely magnetic solution is recovered for $J_2/J_1>0.5$. But these two extreme situations are more appropriately described by taking $(\alpha_1,\alpha_2)$ equal to $(\pi/2,\pi/2)$ and $(0,0)$ respectively. A more interesting solution is found in the range $0.3\lesssim J_2/J_1\lesssim0.5$, where the homogeneous SL field $\Phi_1$ coexists with the inplane antiferromagnetic order. Nevertheless, in this regime of parameters, the magnetic order obtained with $(\alpha_1,\alpha_2)=(0,0)$ has a much lower energy. Therefore, in the following we will not consider the sub-case $(\alpha_1,\alpha_2)=(\pi/2,0)$.
The last trivial sub-case is $(\alpha_1,\alpha_2)=(\pi/2,\pi/2)$, corresponding to a pure spin-liquid decoupling. Here, the interplane MSL phase competes with the intraplane SL phase. For $J_2<J_1$, the MSL is predominant. Comparing the values of the free energy obtained by considering three possible ordering wave vectors $(1,1,1)$, $(0,0,1)$, and $(1,0,0)$, we found that ${\bf Q}=(1,1,1)$ corresponds to the most stable MSL state. For $J_2\gtrsim J_1$ the intraplane SL takes over. The temperature-coupling phase diagram for this sub-case is depicted in figure \[fig:j1j2sl\]. Due to the lattice symmetry breaking associated with the MSL field, the critical line $T_{\Phi_{{{\bf Q}}}}$ indicates a true phase transition that would survive beyond the mean-field. The other mean-field critical temperature $T_{\Phi_2}$ rather describes a crossover, since the inplane spin-liquid field $\Phi_2$ here is homogeneous.
### Case B
In this case, the saddle point condition (\[eq:sa2\]) can be simplified as $\gamma_{2,{{{\bf Q}}_{\text{AF}}}}|S_{{{{\bf Q}}_{\text{AF}}}}|^2+|\Phi_2|^2=0$. Leaving aside the trivial high temperature solution where both $S_{{{{\bf Q}}_{\text{AF}}}}$ and $\Phi_2$ vanish, we consider here only the magnetic wave vector ${{{\bf Q}}_{\text{AF}}}^{\rm II}$. Indeed, Eq. (\[defgamma2k\]) gives $\gamma_{2,{{{\bf Q}}_{\text{AF}}}^{\rm I}}>0$ but $\gamma_{2,{{{\bf Q}}_{\text{AF}}}^{\rm II}}<0$. Here, as a consequence of relation (\[eq:sa2\]), the intraplane spin liquid field $\Phi_2$ is proportional to the local magnetization. Solving the remaining saddle point equations in the sub-case ${\alpha}_1=0$, we find the numerical value $\sin^2({\alpha}_2)=0.675\pm 0.01$. For the other sub-case, ${\alpha}_1=\pi/2$, the pure MSL state is the most stable configuration as long as $J_2\lesssim 2J_1$; the pure inplane solution with non-zero $S_{{{{\bf Q}}_{\text{AF}}}}$ and $\Phi_2$ takes over for higher $J_2$.
### Case C
Here, excluding the extreme solutions for $\alpha_1$, the saddle point Eq. (\[eq:sa1\]) is simplified as $8\gamma_{1,{{{\bf Q}}_{\text{AF}}}}\vert S_{{\bf Q}_{\rm AF}}\vert^2+\vert \Phi_1\vert^2+\vert \Phi_{\bf Q}\vert^2=0$. In this case, Eq. (\[defgamma1k\]) gives $\gamma_{1,{{{\bf Q}}_{\text{AF}}}^{\rm I}}<0$ but $\gamma_{1,{{{\bf Q}}_{\text{AF}}}^{\rm II}}=0$. Therefore, only the ordering wave-vector ${{{\bf Q}}_{\text{AF}}}^{\rm I}$ is considered for the magnetic phase. The trivial solution with vanishing $S_{{\bf Q}_{\rm AF}}$, $\Phi_1$, and $\Phi_{\bf Q}$ is not considered here, and we thus focus on the phases where magnetic order coexists with interlayer spin-liquid fields. For the first sub-case $\alpha_2=0$ we naturally explore the situation with $\Phi_2=0$. But for $\alpha_2=\pi/2$ all the competing mean-fields may coexist. Typically, this sub-case has similarities with the pure spin-liquid one discussed above and illustrated by figure \[fig:j1j2sl\]: the parameter $J_1/J_2$ tunes the competition between the interlayer MSL order and the inplane SL. However, here, a non-zero MSL field must coexist with a non-zero local magnetization field.
### Case D
This case is in principle the most general one since it corresponds to non-extreme values of both $\alpha_1$ and $\alpha_2$. Nevertheless, this situation cannot be realized, as it would require all mean-fields to vanish. Indeed, assuming that neither $\alpha_1$ nor $\alpha_2$ is extreme, the saddle point relations (\[eq:sa1\]) and (\[eq:sa2\]) give $8\gamma_{1,{{{\bf Q}}_{\text{AF}}}}\vert S_{{\bf Q}_{\rm AF}}\vert^2+\vert \Phi_1\vert^2+\vert \Phi_{\bf Q}\vert^2=0$ and $\gamma_{2,{{{\bf Q}}_{\text{AF}}}}|S_{{{{\bf Q}}_{\text{AF}}}}|^2+|\Phi_2|^2=0$. Non-zero solutions for the mean-field parameters would thus require an ordering wave-vector ${{{\bf Q}}_{\text{AF}}}$ such that both $\gamma_{1,{{{\bf Q}}_{\text{AF}}}}<0$ and $\gamma_{2,{{{\bf Q}}_{\text{AF}}}}<0$. Since these two conditions cannot be realized simultaneously, neither by ${{{\bf Q}}_{\text{AF}}}^{\rm I}$ nor by ${{{\bf Q}}_{\text{AF}}}^{\rm II}$, we exclude case D from our study.
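This sign incompatibility can be verified directly from the structure factors of Eqs. (\[defgamma1k\]) and (\[defgamma2k\]). The short Python check below evaluates them at the two candidate ordering wave-vectors; lattice constants are set to $a=c=1$ as an illustrative convention.

```python
import numpy as np

# Structure factors of Eqs. (defgamma1k) and (defgamma2k), a = c = 1
def g1(k):
    return np.cos(k[0] / 2) * np.cos(k[1] / 2) * np.cos(k[2] / 2)

def g2(k):
    return np.cos(k[0]) + np.cos(k[1])

Q_I = 2 * np.pi * np.array([1.0, 1.0, 1.0])    # (1,1,1) in reduced notation
Q_II = 2 * np.pi * np.array([0.5, 0.5, 0.0])   # (1/2,1/2,0)

for name, Q in (("Q_I", Q_I), ("Q_II", Q_II)):
    print(name, g1(Q), g2(Q))

# Case D would require g1(Q) < 0 and g2(Q) < 0 simultaneously; g1 is
# negative only at Q_I (where g2 > 0) and vanishes at Q_II (where g2 < 0)
tol = 1e-12
both_negative = any(g1(Q) < -tol and g2(Q) < -tol for Q in (Q_I, Q_II))
```

Neither candidate wave-vector makes both structure factors negative, which is the announced obstruction to case D.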
Results for J3=0
----------------
All possible cases described above are studied by solving numerically the saddle point equations given in appendix \[sec:sadpoint\]. We computed the free energy for each case, as a function of $J_2/J_1$ and $T/J_1$. For the sake of clarity, figure \[fig:fetot\] shows its evolution at $T=0$ only. The finite $T$ results are not presented here, but they do not exhibit any extra free energy “crossing” between these cases.
![\[fig:fetot\] Ground state energy of the model computed with $J_3=0$ as a function of $J_2/J_1$ for the various relevant cases discussed in this work and defined in table \[Table1\]. ](fig05_fenergy_temp005_j1_1_q222){width="0.65\columnwidth"}
The main result that emerges here from our variational approach for $J_3=0$ is the following: among all the considered cases, the classical purely AF mean-field solutions obtained with $\alpha_1=\alpha_2=0$ are always the most stable ones. The second most stable family of solutions is obtained with the pure spin-liquid decoupling channels $\alpha_1=\alpha_2=\pi/2$. All the other combinations are found to be energetically less favorable. Here, we describe the two phase diagrams obtained for these two variational sub-cases. The temperature-coupling phase diagrams for the configurations $\alpha_1=\alpha_2=0$ and $\alpha_1=\alpha_2=\pi/2$ are shown in figures \[fig:j1j2magclas\] and \[fig:j1j2sl\], respectively.
![Temperature-coupling phase diagram obtained with the purely magnetic configuration $\alpha_1=\alpha_2=0$ for $J_3=0$. The lines indicate the Néel ordering temperatures of the two magnetic orders corresponding to ${{{\bf Q}}_{\text{AF}}}^{\rm I}=(1,1,1)$ and ${{{\bf Q}}_{\text{AF}}}^{\rm II}=(1/2,1/2,0)$. \[fig:j1j2magclas\]](fig04_tc_mag_j1_222_110_q222){width="0.95\columnwidth"}
![Temperature-coupling phase diagram obtained with the purely spin-liquid decoupling channels $\alpha_1=\alpha_2=\pi/2$ for $J_3=0$. The lines indicate the critical temperatures below which the corresponding mean-field $\Phi_1$, $\Phi_2$, and $\Phi_{{\bf Q}}$ are non-zero. Among these lines, $T_{\Phi_1}=T_{\Phi_{{\bf Q}}}$ is still expected to indicate a transition beyond the mean field because $\Phi_{{\bf Q}}$ is associated to a lattice symmetry breaking. $T_{\Phi_2}$ is expected to mark a crossover beyond the mean-field. \[fig:j1j2sl\]](fig06_tc_msl_alp_pi2_pi2_j1_1_q222){width="0.95\columnwidth"}
![Amplitude of the SL mean-field parameters $\Phi_{\bf Q}$ and $\Phi_1$ (red squares) and $\Phi_2$ (blue circles) computed for $J_3=0$ and $\alpha_1=\alpha_2=\pi/2$. Left: as a function of temperature for fixed $J_2/J_1=0.5$ (a), $1.1$ (b), $1.5$ (c). Right: as a function of $J_2/J_1$ for fixed temperature $T/J_1=0.05$ (d), $0.4$ (e), $0.75$ (f). With numerical accuracy we find $\Phi_1=\Phi_{\bf Q}$. \[fig:j1j2meanfieldparameters\]](fig07b_phiq_phi1_phi2_msl_alp_pi2_pi2_j1_1_q222){width="0.95\columnwidth"}
While the purely AF solutions are the most stable, the purely SL ones are energetically very close. Any mixed solution where both Weiss and SL mean-fields would coexist is found to be much less favorable and can be excluded. Therefore, we can deduce that any fluctuation that destabilizes the AF order leaves some room for stabilizing a pure SL phase. We also find that the SL parameters $\Phi_1=\Phi_{\bf Q}$ and $\Phi_2$ do not coexist, as illustrated by figure \[fig:j1j2meanfieldparameters\]. Depending on the value of $J_2/J_1$, there are three different kinds of temperature behaviors, corresponding to cases $a$, $b$, and $c$. Furthermore, we remark that the transition between the modulated and the $\Phi_2-$dominated SL phases is characterized by a discontinuity of the corresponding mean-fields. This feature is in contrast with the continuous vanishing of these fields at the critical temperature separating the paramagnetic fully-decoupled phase from the SL ones. We thus conclude that the MSL transition is second order for $J_2<J_1$ and becomes first order for $J_2>J_1$. The transition temperature $T_{\Phi_{2}}$ is expected to indicate a crossover between the paramagnetic high $T$ and the SL low $T$ regimes when fluctuations beyond the mean-field approximation are included. Indeed, $\Phi_2$ is not associated with any symmetry breaking. But we expect the transition at $T_{\Phi_{\bf Q}}$ to survive beyond the mean-field since the MSL phase is characterized by a breaking of lattice symmetry.
An interesting feature also appears for the MSL solution: with a relatively high numerical accuracy, the modulation field $\Phi_{\bf Q}$ is found to be always equal to the homogeneous field $\Phi_1$. Invoking the Ansatz Eq. (\[Ansatzvarphi1\]), this leads to a very extreme situation for the inter-layer field $\varphi_{{{\bf R}}{{\bf R}'}}^{1}=\frac{1}{2}[\Phi_1\pm\Phi_{{{\bf Q}}}]$, which vanishes on half of the bonds while it keeps the finite value $\Phi_1=\Phi_{{{\bf Q}}}$ on the other bonds. Introducing the probability $p_{\bf RR'}^{singlet}$ that a given bond ${\bf RR'}$ forms a singlet (see Appendix \[Apendixsingletprobability\]), the formation of the MSL state can be interpreted here as follows: first, the interaction terms for all the inter-layer bonds such that ${{\bf Q}}\cdot ({\bf R}+{\bf R'})/2=\pi/2$ are effectively decoupled at the mean-field level, leading to a local probability $p_{\bf RR'}^{singlet}=1/4$ and vanishing spin-spin correlations $\langle \vec{S}_{\bf R}\cdot\vec{S}_{\bf R'}\rangle=0$. Then the spin-liquid with $\langle \vec{S}_{\bf R}\cdot\vec{S}_{\bf R'}\rangle\neq 0$ is formed on the other inter-layer bonds, with ${{\bf Q}}\cdot ({\bf R}+{\bf R'})/2=- \pi/2$, which remain effectively coupled. Using the numerical value $\Phi_1=\Phi_{\bf Q}\approx 0.45$ computed at $T=0$ in the MSL (see figure \[fig:j1j2meanfieldparameters\]), and invoking expression Eq. (\[probasinglet\]), we find that the singlet probability on these effectively coupled bonds is $p_{\bf RR'}^{singlet}\approx 0.60$. This value is, not surprisingly, higher than $1/4$, and it has to be compared with the value $\ln{(2)}\approx 0.69$ that is predicted for a one-dimensional Heisenberg chain using exact methods like the Bethe Ansatz [@Bethe1931] or numerical renormalization techniques [@Schollwock2005]. We may thus interpret the MSL as a crystal of interacting filaments formed by the connected effectively coupled bonds.
In this picture, spin excitations are deconfined fermions moving along the filaments. This may generalize the usual concept of a valence bond crystal, where localized spin-$1$ excitations correspond to confined fermions.
Mean-field ground state of the J1-J2-J3 model {#sec:j3mod}
=============================================
Here we analyze the ground state of the $J_1$-$J_2$-$J_3$ model within the mean-field Ansatz described above. In the previous section it was shown that for $J_3=0$ the most stable low temperature configuration is obtained by choosing purely magnetic Weiss mean-field decoupling channels. The second most stable solution corresponds to the purely spin liquid decoupling channels. Here, we assume that this result can be extended to the decoupling of the intraplane next nearest neighbor interaction $J_3$, and therefore restrict $\alpha_3$ to the extreme values $0$ or $\pi/2$.
![\[fig:magj1j2j3\] Phase diagram characterizing the ground state of the $J_1$-$J_2$-$J_3$ model obtained within the pure Weiss mean-field decoupling channels $\alpha_1=\alpha_2=\alpha_3=0$. Three different magnetic orders are found, characterized by the wave-vectors ${{{\bf Q}}_{\text{AF}}}^{\rm I}$, ${{{\bf Q}}_{\text{AF}}}^{\rm II}$ and ${{{\bf Q}}_{\text{AF}}}^{\rm III}$. We name [*magnetic tricritical point*]{} the highly degenerate point corresponding to the crossing of the three critical lines. Additionally, we include four points obtained from various fits of inelastic neutron scattering (INS) data on URu$_2$Si$_2$: from Broholm [*et al.*]{} [@Broholm1991], Kusunose [*et al.*]{} [@Kusunose2012], Sugiyama [*et al.*]{} [@Sugiyama1990] and Bourdarot [@BourdarotHDR]. ](fig05_magphase_j1Tfixo_j2_x_j3_222_110_100_q222){width="0.95\columnwidth"}
![\[fig:slj1j2j3\] Phase diagram characterizing the ground state of the $J_1$-$J_2$-$J_3$ model obtained within the pure spin-liquid mean-field decoupling channels $\alpha_1=\alpha_2=\alpha_3=\pi/2$. The MSL phase corresponds to finite $\Phi_1$ and $\Phi_{{\bf Q}}$. The two other spin-liquid phases correspond to a vanishing $\Phi_{{\bf Q}}$ and finite values of the nearest and next nearest neighbor inplane spin liquid fields $\Phi_2$ and $\Phi_3$ respectively. Among the three critical lines depicted here, only the ones indicating the MSL phase would still correspond to a transition when considering fluctuations beyond the mean-field approximation. The magnetic tricritical point is defined as the highly degenerate point in the purely magnetic phase diagram. The additional points obtained from INS data are included here with the same notations as in figure \[fig:magj1j2j3\]. ](fig05_slphase_j1Tfixo_j2_x_j3_q222){width="0.95\columnwidth"}
Solving numerically the two extreme cases, we find that, at the mean-field level, the classical magnetic solution with $\alpha_1=\alpha_2=\alpha_3=0$ is the most stable variational configuration. The resulting ground state phase diagram is presented in figure \[fig:magj1j2j3\] as a function of the dimensionless parameters $J_2/J_1$ and $J_3/J_1$. Three possible ordering wave-vectors are obtained, ${{\bf Q}}_{\text{AF}}^{\rm I}=(1,1,1)$, ${{\bf Q}}_{\text{AF}}^{\rm II}=(1/2,1/2,0)$, or ${{\bf Q}}_{\text{AF}}^{\rm III}=(1/2,0,0)$, that correspond to the three different regimes where the Weiss field can be dominated by $J_1$, $J_2$, or $J_3$ respectively. A highly degenerate point is found for $J_1=J_3=2J_2$, that we name [*magnetic tricritical point*]{}.
Figure \[fig:slj1j2j3\] depicts the phase diagram obtained within a purely spin-liquid mean-field decoupling $\alpha_1=\alpha_2=\alpha_3=\pi/2$. At the mean-field level we find three different phases, that are characterized by finite values of $\Phi_{{\bf Q}}$, $\Phi_2$, or $\Phi_3$. Beyond the mean-field, we expect that only the critical line defining finite $\Phi_{{\bf Q}}$ would still correspond to a phase transition, associated with a translation symmetry breaking. We remark that the MSL solution that we obtain corresponds to $\Phi_1=\Phi_{{\bf Q}}$, and it corresponds to the formation of a crystal of connected filaments as described above.
The position of the magnetic tricritical point is also indicated in the pure spin liquid phase diagram, figure \[fig:slj1j2j3\]. It is very surprising to see that this point, which is highly degenerate from a Weiss mean-field perspective, turns out to be located well inside the MSL phase. Several earlier works have been dedicated to the characterization of the magnetic ground state of a frustrated Heisenberg model on a square lattice [@Oitmaa1996; @Sushkov2001; @Kalz2011], which can be realized here for $J_1=0$. It was shown that quantum fluctuations can stabilize a non magnetic spin-liquid phase between the antiferromagnetic phases ${{\bf Q}}_{\text{AF}}^{\rm II}$ and ${{\bf Q}}_{\text{AF}}^{\rm III}$. For this reason, we expect strong quantum fluctuations of the Weiss mean-field to occur around all the critical lines separating the three possible phases ${{\bf Q}}_{\text{AF}}^{\rm I}$, ${{\bf Q}}_{\text{AF}}^{\rm II}$, and ${{\bf Q}}_{\text{AF}}^{\rm III}$. The position of the magnetic tricritical point inside the MSL phase suggests that the fluctuations of the MSL mean-field should be much less critical. Therefore, we expect that fluctuations beyond the mean-field will destabilize the magnetic solutions around all their degeneracy lines. We believe that the MSL mean-field solution should be more robust in all the regions that are sufficiently far from the MSL critical line. This is the case, for example, of the area around the magnetic tricritical point.
Discussion and applications to materials with BCT structure \[Applications\]
============================================================================
Relevance for Hidden order in URu2Si2
-------------------------------------
The HO phase in URu$_2$Si$_2$ cannot be explained by the ordering of local magnetic moments, which are measured to be far too tiny. Nevertheless, there is strong experimental evidence that the thermodynamic anomaly measured at the transition [@Maple1986] has a magnetic origin. For example, the HO phase is characterized by a peak revealed by Inelastic Neutron Scattering (INS) at the commensurate wave-vector ${{\bf Q}}_{\text{AF}}=(1,0,0)$ in reduced notation [@Broholm1987; @Villaume2008; @Bourdarot2010]. This wave-vector is surprisingly identical to the one that describes the pressure-induced AF phase of this compound. In the BCT structure, this AF order represents a ferromagnetic correlation in the ${ \bf a},{ \bf b}$ directions (see Fig. \[fig:bct\]), with antiferromagnetic correlations between nearest $({\bf a},{\bf b})$ planes. Recently, it was proposed that a quantum modulated spin liquid (MSL) phase could be stabilized by frustration and explain the origin of the hidden order phase in URu$_2$Si$_2$ [@Pepin2011; @Thomas2013; @Montiel2013]. A phase with an order similar to the MSL has also been proposed in terms of an unconventional spin-orbital density wave [@Riseborough2012; @Das2014; @Oppeneer2011], where the order parameter characterizes a spatial commensurate modulation of the intersite hybridization between $5f$ states.
The first Heisenberg model on a BCT lattice proposed for URu$_2$Si$_2$ was introduced by Broholm [*et al.*]{} [@Broholm1991], who tried to fit INS data in terms of spin density wave (SDW) excitations from an AF ground state. As we will see below, the resulting SDW model obtained by Broholm corresponds to a highly frustrated situation. The SDW scenario was later contradicted by several other experiments. Nonetheless, the classical version of a $J_1$-$J_2$-$J_3$ Heisenberg model was proposed by Sugiyama [*et al.*]{} [@Sugiyama1990; @Kim2003] as a frustration scenario to explain the cascade of metamagnetic-like transitions and magnetization plateaux that are observed in URu$_2$Si$_2$. More recently, INS data analysis was invoked by Kusunose, who proposed a competition between multipolar and AF Ising-like orders as a scenario for the HO-AF pressure-induced transition [@Kusunose2012]. Bourdarot also recently proposed numerical values for $J_1$, $J_2$ and $J_3$ in order to fit his INS data [@BourdarotHDR].
We are aware that modeling URu$_2$Si$_2$ with the present $J_1$-$J_2$-$J_3$ quantum Heisenberg model may constitute a very crude approximation in several respects: for example, the real system is metallic, and the local $5f$ electronic states require a highly anisotropic Ising-like multiplet description. Nevertheless, the numerous previous attempts to fit INS data using effective SDW dispersions make it worth checking where the fitted parameters would locate URu$_2$Si$_2$ on the mean-field phase diagrams analyzed here.
Hereafter, we use four different fits of various INS data: the original fit introduced by Broholm [*et al.*]{} in Ref. [@Broholm1991], the fit introduced more recently by Kusunose [@Kusunose2012] from Broholm’s data, the fit of INS data from Sugiyama [*et al.*]{} [@Sugiyama1990; @Kim2003], and the one from Bourdarot’s data [@BourdarotHDR]. These fits invoke not only $J_1$-$J_2$-$J_3$ terms but also up to seven Heisenberg-like interaction parameters in the BCT structure. Neglecting these extra parameters, we extracted the numerical values of $J_1$, $J_2$ and $J_3$ provided by each fit. The corresponding dimensionless pairs of ratios $J_2/J_1$ and $J_3/J_1$ thus provide specific points in the phase diagrams, as indicated in figures \[fig:magj1j2j3\] and \[fig:slj1j2j3\]. The absolute numerical values of $J_1$, $J_2$ and $J_3$ provided by these four different fits do not coincide. This quantitative difference between fits is easily understandable: different experimental INS data sets were involved, as well as different extra fitting parameters that we have not considered here. Nevertheless, it is remarkable that the four different fits all provide antiferromagnetic values for $J_1$, $J_2$, and $J_3$. Furthermore, the most interesting observation is the following: all of these different fits locate URu$_2$Si$_2$ in the very close vicinity of the transition line separating the two ordered states ${{{\bf Q}}_{\text{AF}}}^{\rm I}$ and ${{{\bf Q}}_{\text{AF}}}^{\rm III}$, as indicated in figure \[fig:magj1j2j3\]. We thus expect frustration to be very important, as also noticed by Sugiyama [*et al.*]{} [@Sugiyama1990; @Kim2003], and spin fluctuations may destabilize the magnetically ordered phase. Considering now the spin-liquid phase diagram in figure \[fig:slj1j2j3\], we find that the four points that correspond to the different fits of INS data are all located well inside the MSL phase.
This observation, together with the analysis presented here, suggests the MSL scenario as an alternative to the geometrical frustration problem that seems to prevent URu$_2$Si$_2$ from forming an AF order: the pressure-induced HO-AF transition observed in this compound at low temperature could be mostly controlled by the tuning of $J_3/J_1$. At ambient pressure, quantum fluctuations are too strong and only the MSL state is realized. Applying pressure pushes the system away from the critical line, reducing the fluctuations and thus stabilizing the AF state with ordering wave-vector ${{{\bf Q}}_{\text{AF}}}^{\rm I}$.
We are aware that this scenario should be completed by including charge-fluctuation effects and by taking into account the precise $5f$ local multiplet structure at the origin of the magnetic ordering. We believe that the concept of a spatially modulated, highly entangled state, which emerges here from frustration, would survive when such refinements are added to the $J_1$-$J_2$-$J_3$ model.
Relevance for other systems
---------------------------
Here we considered a model with only localized spins, but we know from previous works on cuprates and heavy fermions that charge fluctuations play a crucial role in destabilizing antiferromagnetic states.
In the context of cuprates, the AF Néel ordered phase of the insulating parent compounds corresponds to ${{{\bf Q}}_{\text{AF}}}^{\rm II}$. The spin-liquid phase introduced by Anderson [*et al.*]{} [@Fazekas1974; @Anderson1987; @Baskaran1987; @Rice1993; @Wen1996] corresponds to the homogeneous spin-liquid phase with nonzero $\Phi_2$. The relation between the spin-liquid field and the superconducting order parameter has been discussed by Wen, Lee [*et al.*]{} in terms of gauge transformations [@Lee2006; @Wenbook]. These gauge transformations are based on particle-hole transformations on the fermionic operators $\chi_{{\bf R}\sigma}$ that preserve the physical starting Heisenberg spin Hamiltonian but transform the spin-liquid fields into superconducting pairing terms.
Doping may be introduced more generally into the full $J_1$-$J_2$-$J_3$ model. In heavy fermions, we know that the localized quasiparticle states are associated with the $f$-electrons. These localized degrees of freedom, directly related to magnetism, are usually distinguishable from the itinerant charge degrees of freedom. Indeed, in Ce and Yb compounds, delocalized modes emerge from light conduction electrons; in actinides they emerge from the dual character of the $5f$ orbitals, which have a partially Mott-delocalized sector. In cuprates, such a clear separation between localized spin and delocalized charge degrees of freedom cannot be drawn. Especially at low doping, the adaptation of the present spin-fermion model to cuprates should include the physics of the Mott transition. Therefore, doping the $J_1$-$J_2$-$J_3$ model should be realized appropriately, in various manners adapted to each experimental motivation: typically, within Kondo+Heisenberg, $t$-$J$, or multi-orbital Hubbard models.
Inspired by the previous works of Wen, Lee [*et al.*]{}, we expect that the resulting charge fluctuations would strengthen the spin fluctuations and weaken the magnetically ordered phases that are predicted from a classical Heisenberg $J_1$-$J_2$-$J_3$ model. In turn, the spin-liquid phases are expected to remain stable, associated with superconducting instabilities. Invoking this general scenario, we predict that the symmetries of the resulting superconducting order parameters will be inherited from the point-group symmetries of the spin liquids. This scenario may be tested first with the superconducting instability observed in URu$_2$Si$_2$ inside the HO phase. More generally, this scenario also generalizes to 3D systems the spin-fluctuation pairing mechanism that was proposed for cuprates. Here, the link between the BCT lattice structure and the superconducting order parameter is natural. This spin-liquid mechanism driven by frustration on the BCT lattice may also be tested for the heavy-fermion superconductors CeRu$_2$Si$_2$ and CePd$_2$Si$_2$, but in these systems valence-fluctuation effects need to be carefully included.
Apart from superconductivity, we may also ask whether there is a connection between the HO in URu$_2$Si$_2$ and the magnetic-field-induced non-Fermi-liquid properties observed in YbRh$_2$Si$_2$. Indeed, this very unconventional heavy-fermion compound has a magnetically ordered ground state at ambient pressure, but the associated local moment is relatively small. This suggests that frustration on the BCT lattice may be analyzed together with Kondo screening in this system.
Conclusion {#sec:con}
==========
To summarize, we studied the frustrated $J_1$-$J_2$-$J_3$ quantum Heisenberg Hamiltonian on the BCT lattice using mean-field approximations. Introducing variational parameters $\alpha_i$, each intersite interaction is decoupled in the Weiss and the spin-liquid channels. Our first observation is that, variationally, the interactions always prefer a pure channel: any intermediate value of $\alpha_i$ corresponds to a higher free energy than the one obtained with the decoupling parameters $\alpha_i=0$ (pure Weiss) or $\pi/2$ (pure spin liquid).
Studying the model at $J_3=0$ for all temperatures $T$, and at $T=0$ for all values of the couplings $J_i$, we find that the most stable variational solutions correspond to the purely magnetically ordered ones. Nevertheless, we also analyze and characterize the purely SL solutions, which are the second most stable ones. Three different magnetically ordered phases emerge at low $T$, characterized by the ordering wave-vectors $(1,1,1)$, $(1/2,1/2,0)$, and $(1/2,0,0)$, which respectively correspond to the three different regimes dominated by $J_1$, $J_2$, or $J_3$. Similarly, three different SL phases are also identified, the one dominated by $J_1$ corresponding to a non-homogeneous MSL state with commensurate ordering wave-vector $(1,1,1)$, which is expected to survive beyond the mean field. We also remarked that other variational solutions, including MSL states with a different wave-vector $(0,0,1)$ or $(1,0,0)$ and mixed states with non-extreme $\alpha_i$, lie energetically above, but not far from, the three pure SL solutions analyzed here. Fluctuations might stabilize some of these solutions as well.
Whilst the purely magnetically ordered phases are the most stable at the mean-field level, we expect fluctuations to be strong in the vicinity of the degeneracy lines separating the different ordering wave-vectors. It is very interesting to notice that the analogous degeneracy lines obtained for the three different SL solutions do not coincide with the ones obtained for the magnetically ordered solutions. We thus conclude that fluctuations should open a large area of parameters where magnetic orders are destroyed, favoring the stabilization of SL phases.
Surprisingly, when considering four different fits of experimental INS data on URu$_2$Si$_2$, we find in each case that this compound is close to the degeneracy line separating the $(1,1,1)$ and $(1/2,0,0)$ antiferromagnetic orders. We also find that, when considering the SL solutions, each of these four fits locates URu$_2$Si$_2$ well inside the MSL phase. This result suggests that fluctuations and frustration between the $J_1$ and $J_3$ couplings should play a crucial role in the HO-AF transition that is induced by pressure at low $T$ in this compound. The possible formation of a spatially modulated, highly entangled state analogous to the MSL, emerging from frustration and fluctuations, could provide a key ingredient in the realization of the hidden-order phase.
The scenario presented here is very general and could be adapted and applied to study doped correlated systems with the BCT structure, possibly including unconventional superconductors. In these cases, the inclusion of charge fluctuations in the model is necessary and has to be done carefully, since they might also play a crucial direct role in the superconducting instabilities. Finally, the variational method that we introduced here could also be used for other models where a two-body interaction term can be decoupled in two different mean-field channels.
We acknowledge the financial support of Capes-Cofecub Ph 743-12. CT is [*bolsista Capes*]{}. This research was also supported in part by the Brazilian Ministry of Science, Technology and Innovation (MCTI) and the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). Research was carried out with the aid of the High Performance Computing System of the International Institute of Physics-UFRN, Natal, Brazil. The authors are grateful to Frédéric Bourdarot for useful discussions.
Brillouin zones for the dual (bond) lattice \[Apendixduallattice\]
==================================================================
The choice of the phase of the modulation $+$ or $-$ on a given bond ${{\bf R}}{{\bf R}'}$ in Eq. (\[Ansatzvarphi1\]) is of course not unique. At this stage we could not go further by considering the system in its full generality. Motivated by experimental applications to URu$_2$Si$_2$, we may thus assume that the order parameter $\Phi_{\bf Q}$ lowers the lattice translation symmetry from BCT to tetragonal. This translation symmetry breaking corresponds to a doubling of the lattice unit cell, and may as well be characterized by various point-group symmetry breakings. Indeed, the spin-liquid field $\varphi_{{{\bf R}}{{\bf R}'}}$ is defined on the dual (bond) lattice. Each of these possible point-group symmetry breakings results from a non-isotropic distribution of the phase modulation $+$ or $-$ on the bonds neighboring a given lattice site. Different possible orders belong to the same tetragonal lattice group but break different point-group symmetries. It is remarkable that a MSL order can equivalently be characterized by a point-group symmetry or by an ordering wave-vector ${\bf Q}$ belonging to the reciprocal space of the dual lattice. On the other hand, the AF order is characterized by a wave-vector ${\bf Q}_{\text{AF}}$ that belongs to the first Brillouin zone of the BCT lattice of sites, $\text{\bf BZ}_{\text{site}}^{\text{BCT}}$. We will thus later consider three other Brillouin zones, denoted $\text{\bf BZ}_{\text{bond}}^{1}$, $\text{\bf BZ}_{\text{bond}}^{2}$, and $\text{\bf BZ}_{\text{bond}}^{3}$, which correspond to the first Brillouin zones of the bonds connected by the couplings $J_1$, $J_2$, and $J_3$ respectively (see figure \[fig:bct\]). Note that $\text{\bf BZ}_{\text{bond}}^{2}$ and $\text{\bf BZ}_{\text{bond}}^{3}$ look like two-dimensional Brillouin zones since the couplings $J_2$ and $J_3$ are in-plane.
We remark here that the present formalism can, at this stage, be applied to study both two-dimensional magnetism in compounds like cuprates, where $J_1\approx 0$, and three-dimensional magnetism in compounds like URu$_2$Si$_2$, for which $J_1$ drives the AF order. Since the BCT lattice has four times more bonds of kind $1$ than sites, $\text{\bf BZ}_{\text{site}}^{\text{BCT}}$ is four times smaller than $\text{\bf BZ}_{\text{bond}}^{1}$. As a result, different wave-vectors ${\bf Q}$ in $\text{\bf BZ}_{\text{bond}}^{1}$, characterizing different MSL bond orders, can be equivalent to each other from the AF point of view. For example, the ordering wave-vector ${\bf Q}_{\text{AF}}^{\rm I}$ can be equivalently chosen to be $(1,1,1)$, $(1,0,0)$ or $(0,0,1)$ when characterizing the AF ordered phase on the BCT lattice. But these three vectors characterize three different MSL ordered states. A detailed analysis is given in Ref. [@Thomas2013], comparing the free energies of these three possible MSL ordered states. It was found that, when the degeneracy was lifted, $(1,1,1)$ characterized the MSL state with the lowest free energy. Therefore, we choose to consider in this article only the results obtained with the modulation wave-vector ${\bf Q}=(1,1,1)$. Finally, note that the prefactors $1/2\sqrt{N}$ and $1/\sqrt{2N}$ in the Fourier transform relations (\[TFphi1\], \[TFphi2\], \[TFphi3\]) are related to the number of sites or bonds that are relevant for each field: the BCT lattice considered here has $N$ sites, $4N$ bonds connected by $J_1$, and $2N+2N$ bonds connected by $J_2$ and $J_3$.
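The site/bond counting above can be checked directly from the coordination numbers (8 nearest neighbors coupled by $J_1$, and $4+4$ in-plane neighbors coupled by $J_2$ and $J_3$; these coordination numbers are the standard BCT geometry assumed here):

```python
# Each bond is shared by two sites, so a lattice of N sites with coordination
# number z for a given coupling carries z*N/2 bonds of that kind.
def bonds(N, z):
    return N * z // 2

N = 1000
print(bonds(N, 8), bonds(N, 4), bonds(N, 4))  # -> 4000 2000 2000, i.e. 4N, 2N, 2N
```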
Bond singlet probabilities\[Apendixsingletprobability\]
=======================================================
Keeping in mind that the fermionic operators $\chi_{\bf R\sigma}$ represent quantum spins $1/2$, each interaction term on a bond ${\bf RR'}$ in the $J_1$-$J_2$-$J_3$ Hamiltonian (\[eq:hheis\]) can be identified with an antiferromagnetic Heisenberg interaction: $$\begin{aligned}
\sum_{\sigma\sigma'}\chi_{{\bf R}\sigma}^{\dagger}\chi_{{\bf R}\sigma'}\chi_{{\bf R}'\sigma'}^{\dagger}\chi_{{\bf R}'\sigma}
=\frac{1}{2}+2\vec{S}_{\bf R}\cdot\vec{S}_{\bf R'}~, \end{aligned}$$ where $\vec{S}_{\bf R}$ and $\vec{S}_{\bf R'}$ are quantum spins $1/2$ on sites ${\bf R}$ and ${\bf R'}$. Formally integrating out the degrees of freedom of the other sites in the many-body state characterizing the lattice, each local bond ${\bf RR'}$ can be characterized by a probability $p_{\bf RR'}^{singlet}$ to be in a singlet state. Invoking standard quantum spin algebra, we find the very general identity: $$\begin{aligned}
p_{\bf RR'}^{singlet}=\frac{1}{4}-\langle \vec{S}_{\bf R}\cdot\vec{S}_{\bf R'}\rangle~. \end{aligned}$$ Introducing the variational parameter $\alpha_i$ that is appropriate to the bond ${\bf RR'}$ as defined by Eq. (\[eq:DefJ1channels\]), and invoking the mean-field approximation decoupling in Weiss and spin-liquid channels as defined by Eqs. (\[ApproxmeanfieldWeiss\]) and (\[ApproxmeanfieldSL\]), we find the average $$\begin{aligned}
\sum_{\sigma\sigma'}\langle\chi_{{\bf R}\sigma}^{\dagger}\chi_{{\bf R}\sigma'}\chi_{{\bf R}'\sigma'}^{\dagger}\chi_{{\bf R}'\sigma}\rangle
&=&
2m_{\bf R}m_{\bf R'}\cos^2(\alpha_i)\nonumber\\
&&-\vert\varphi_{\bf RR'}\vert^2\sin^2(\alpha_i)~. \end{aligned}$$ Finally, within the variational mean-field approximation, the probability that a bond ${\bf RR'}$ forms a singlet state is given by: $$\begin{aligned}
p_{\bf RR'}^{singlet}=\frac{1}{2}-m_{\bf R}m_{\bf R'}\cos^2(\alpha_i)
+\frac{\vert\varphi_{\bf RR'}\vert^2}{2}\sin^2(\alpha_i)~, \nonumber\\
~~
\label{probasinglet}\end{aligned}$$ where the kind of bond $i=1,~2$ or $3$ is defined on figure \[fig:bct\].
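The mean-field expression above is easy to evaluate in its two pure-channel limits; a direct transcription (with $m_{\bf R}$, $|\varphi_{\bf RR'}|$ and $\alpha_i$ taken as given inputs):

```python
import math

def p_singlet(m_R, m_Rp, phi_abs, alpha):
    """Mean-field singlet probability of a bond RR' (Eq. above):
    p = 1/2 - m_R m_R' cos^2(alpha) + |phi|^2/2 sin^2(alpha)."""
    return (0.5
            - m_R * m_Rp * math.cos(alpha) ** 2
            + 0.5 * phi_abs ** 2 * math.sin(alpha) ** 2)

# Pure spin-liquid channel (alpha = pi/2) with saturated bond field |phi| = 1:
# the bond is a full singlet.
print(p_singlet(0.0, 0.0, 1.0, math.pi / 2))   # -> 1.0
# Pure Weiss channel (alpha = 0) with saturated staggered moments m = +/- 1/2:
print(p_singlet(0.5, -0.5, 0.0, 0.0))          # -> 0.75
```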
Saddle point equations for J3=0 \[sec:sadpoint\]
================================================
Using expression (\[eq:totalfj1j2j3\]) with $\alpha_3=\Phi_3=J_3=0$, the seven saddle point equations for the free energy functional $F(\alpha_1,\alpha_2,\lambda_0,\Phi_{1},\Phi_{{{\bf Q}}},\Phi_2,S_{{{\bf Q}}_{\text{AF}}})$ are obtained from the following partial derivative expressions: $$\begin{aligned}
\frac{\partial F}{\partial {\alpha}_1}&=2J_1\cos{{\alpha}_1}\sin{{\alpha}_1}\Bigg\{16\sum_{{{\bf k}}}\frac{f(\Omega_{{{\bf k}}}^+)-f(\Omega_{{{\bf k}}}^-)}{\Delta \Omega_{{{\bf k}}}}\notag\\
&\times\Big[2J_1\sin^2{{\alpha}_1}(
\vert\Phi_{1}\gamma_{1,{{\bf k}}}\vert^2+\vert\Phi_{{{\bf Q}}}\gamma_{1,{{\bf k}}+{{\bf Q}}/2}\vert^2)-J_{{{{\bf Q}}_{\text{AF}}}}\gamma_{1,{{{\bf Q}}_{\text{AF}}}}|S_{{{{\bf Q}}_{\text{AF}}}}|^2\Big]\notag\\
&+|\Phi_1|^2+|\Phi_{{{\bf Q}}}|^2+8\gamma_{1,{{{\bf Q}}_{\text{AF}}}}|S_{{{{\bf Q}}_{\text{AF}}}}|^2\Bigg\}\,,\end{aligned}$$ $$\begin{aligned}
\frac{\partial F}{\partial {\alpha}_2}&=4J_2\sin{{\alpha}_2}\cos{{\alpha}_2}\Bigg\{\sum_{{{\bf k}}}\Big\{\big[f(\Omega_{{{\bf k}}}^+)+f(\Omega_{{{\bf k}}}^-)\big]|\Phi_2|\gamma_{2,{{\bf k}}}\notag \\
&-2\Big[\frac{f(\Omega_{{{\bf k}}}^+)-f(\Omega_{{{\bf k}}}^-)}{\Delta \Omega_{{{\bf k}}}}\Big]J_{{{{\bf Q}}_{\text{AF}}}}\gamma_{2,{{{\bf Q}}_{\text{AF}}}}|S_{{{{\bf Q}}_{\text{AF}}}}|^2\Big\}\notag\\
&+|\Phi_2|^2+\gamma_{2,{{{\bf Q}}_{\text{AF}}}}|S_{{{{\bf Q}}_{\text{AF}}}}|^2\Bigg\}\,,\end{aligned}$$ $$\begin{aligned}
\frac{\partial F}{\partial \Phi_2}&=2J_2\sin^2{{\alpha}_2}\notag\\
&\times\Bigg\{\sum_{{{\bf k}}}\Big[f(\Omega_{{{\bf k}}}^+)+f(\Omega_{{{\bf k}}}^-)\Big]\gamma_{2,{{\bf k}}}+2|\Phi_2|\Bigg\}\,,\end{aligned}$$ $$\begin{aligned}
\frac{\partial F}{\partial \Phi_1}&=2J_1\sin^2{{\alpha}_1}|\Phi_1|\notag\\
&\times\Bigg\{16\sum_{{{\bf k}}}\Big[\frac{f(\Omega_{{{\bf k}}}^+)-f(\Omega_{{{\bf k}}}^-)}{\Delta \Omega_{{{\bf k}}}}\Big]J_1\sin^2{{\alpha}_1}\gamma_{1,{{\bf k}}}^2+1\Bigg\}\,,\end{aligned}$$ $$\begin{aligned}
\frac{\partial F}{\partial \Phi_{{{\bf Q}}}}&=2J_1\sin^2{{\alpha}_1}|\Phi_{{{\bf Q}}}|\notag\\
&\times\Bigg\{16\sum_{{{\bf k}}}\Big[\frac{f(\Omega_{{{\bf k}}}^+)-f(\Omega_{{{\bf k}}}^-)}{\Delta \Omega_{{{\bf k}}}}\Big]J_1\sin^2{{\alpha}_1}\gamma_{1,{{\bf k}}+{{\bf Q}}/2}^2+1\Bigg\}\,,\end{aligned}$$ $$\begin{aligned}
\frac{\partial F}{\partial S_{{{{\bf Q}}_{\text{AF}}}}}&=2J_{{{{\bf Q}}_{\text{AF}}}}S_{{{{\bf Q}}_{\text{AF}}}}\notag\\
&\times\Bigg\{\sum_{{{\bf k}}}\Big[\frac{f(\Omega_{{{\bf k}}}^+)-f(\Omega_{{{\bf k}}}^-)}{\Delta \Omega_{{{\bf k}}}}\Big]J_{{{{\bf Q}}_{\text{AF}}}}-1\Bigg\}\,,\end{aligned}$$ $$\begin{aligned}
\frac{\partial F}{\partial \lambda_0}&=\sum_{{{\bf k}}}\Big[f(\Omega_{{{\bf k}}}^+)+f(\Omega_{{{\bf k}}}^-)\Big]-1\,.\end{aligned}$$ where $f(\omega)\equiv \frac{1}{1+\exp\beta\omega}$ denotes the Fermi function, and $\Delta\Omega_{{\bf k}}\equiv \Omega_{{{\bf k}}}^+-\Omega_{{{\bf k}}}^-=2\sqrt{
(J_{{{{\bf Q}}_{\text{AF}}}})^2\vert S_{{{\bf Q}}_{\text{AF}}}\vert^2+16(J_1^{\text{SL}})^2\big[(\gamma_{1,{{\bf k}}})^2\vert\Phi_{1}\vert^{2}+(\gamma_{{{\bf Q}},{{\bf k}}})^2\vert\Phi_{{{\bf Q}}}\vert^{2}\big]
}$. In the following it will be convenient to introduce the field-dependent sums: $$\begin{aligned}
A_{\lambda_0}&\equiv\frac{1}{N}\sum_{{{\bf k}}}\Big[f(\Omega_{{{\bf k}}}^+)+f(\Omega_{{{\bf k}}}^-)\Big]\,,
\label{DefAlambda0}\\
A_{\Phi_2}&\equiv\frac{1}{N}\sum_{{{\bf k}}}\Big[f(\Omega_{{{\bf k}}}^+)+f(\Omega_{{{\bf k}}}^-)\Big]\gamma_{2{{\bf k}}}\,,
\label{DefAPhi2}\\
A_{\Phi_1}&\equiv\frac{1}{N}\sum_{{{\bf k}}}\frac{f(\Omega_{{{\bf k}}}^+)-f(\Omega_{{{\bf k}}}^-)}{\Delta\Omega_{{{\bf k}}}}\gamma_{1{{\bf k}}}^2\,,
\label{DefAPhi1}\\
A_{\Phi_{{{\bf Q}}}}&\equiv\frac{1}{N}\sum_{{{\bf k}}}\frac{f(\Omega_{{{\bf k}}}^+)-f(\Omega_{{{\bf k}}}^-)}{\Delta\Omega_{{{\bf k}}}}\gamma_{1{{\bf k}}{{\bf Q}}}^2\,,
\label{DefAPhiQ}\\
A_{S_{{{\bf Q}}_{\text{AF}}}}&\equiv\frac{1}{N}\sum_{{{\bf k}}}\frac{f(\Omega_{{{\bf k}}}^+)-f(\Omega_{{{\bf k}}}^-)}{\Delta\Omega_{{{\bf k}}}}\,.
\label{DefASQ}\end{aligned}$$ After some standard algebra, the seven saddle point equations for $F(\alpha_1,\alpha_2,\lambda_0,\Phi_{1},\Phi_{{{\bf Q}}},\Phi_2,S_{{{\bf Q}}_{\text{AF}}})$ are rewritten as : $$\begin{aligned}
J_1\sin{2{\alpha}_1}\Big(8\gamma_{1,{{{\bf Q}}_{\text{AF}}}}|S_{{{{\bf Q}}_{\text{AF}}}}|^2+|\Phi_1|^2+|\Phi_{{{\bf Q}}}|^2\Big)&=0\,,\label{eq:sa1}\\
J_2\sin{2{\alpha}_2}\Big(|\Phi_2|^2+\gamma_{2,{{{\bf Q}}_{\text{AF}}}}|S_{{{{\bf Q}}_{\text{AF}}}}|^2\Big)&=0\,,\label{eq:sa2}\\
J_2\sin{{\alpha}_2}\Big(A_{\Phi_2}+2|\Phi_2|\Big)&=0\,,\label{eq:sphi}\\
J_1|\Phi_1|\sin{{\alpha}_1}\Big(16J_1\sin^2{{\alpha}_1}A_{\Phi_1}+1\Big)&=0\,,\label{eq:sph0}\\
J_1|\Phi_{{{\bf Q}}}|\sin{{\alpha}_1}\Big(16J_1\sin^2{{\alpha}_1}A_{\Phi_{{{\bf Q}}}}+1\Big)&=0\,,\label{eq:sphq}\\
J_{{{{\bf Q}}_{\text{AF}}}}S_{{{{\bf Q}}_{\text{AF}}}}\Big(A_{S_{{{{\bf Q}}_{\text{AF}}}}}J_{{{{\bf Q}}_{\text{AF}}}}-1\Big)&=0\,,\label{eq:ssqaf}\\
A_{\lambda_0}&=1\,.\label{eq:slam}\end{aligned}$$ These equations may have some trivial solutions that correspond to giving $\alpha_1$ and/or $\alpha_2$ the extreme values $0$ and $\pi/2$. This leads to four distinct cases that are defined in table \[Table1\]. Hereafter, the system of saddle-point relations (\[eq:sa1\], \[eq:sa2\], \[eq:sphi\], \[eq:sph0\], \[eq:sphq\], \[eq:ssqaf\], \[eq:slam\]) is rewritten according to the simplifications provided by each case. In all cases, we still have to solve the saddle-point equation for the Lagrange multiplier $\lambda_0$: $$\begin{aligned}
A_{\lambda_0}&=1\,. \label{eq:slam2}\end{aligned}$$ For the other fields we are thus left with:
Trivial solutions: [**Case A**]{}
---------------------------------
Here we consider the trivial cases where both ${\alpha}_1$ and ${\alpha}_2$ take extreme values $\pi/2$ or $0$. There are naturally four possibilities that are analyzed sub-case by sub-case hereafter. Most of the saddle point equations are trivially satisfied, and we analyze here the relevant relations that still remain.
### Sub-case $(\alpha_1,\alpha_2)=(0,0)$
This situation corresponds to the classical magnetic mean-field approximation. In this case, only magnetic order is considered, with the two possible ordering wave-vectors ${{{\bf Q}}_{\text{AF}}}^{\rm I}$ and ${{{\bf Q}}_{\text{AF}}}^{\rm II}$. The saddle point equation for $S_{{{{\bf Q}}_{\text{AF}}}}$ and a given ordering wave-vector is: $$\begin{aligned}
J_{{{{\bf Q}}_{\text{AF}}}}S_{{{{\bf Q}}_{\text{AF}}}}\Big(A_{S_{{{{\bf Q}}_{\text{AF}}}}}J_{{{{\bf Q}}_{\text{AF}}}}-1\Big)&=0\,.
\end{aligned}$$
### Sub-case $(\alpha_1,\alpha_2)=(\pi/2,0)$
Here, the inter-plane spin-liquid fields compete or coexist with the magnetic order originating from the in-plane Weiss field $J_{2}^{\text{Weiss}}$. The saddle point equations for $\Phi_1$, $\Phi_{{\bf Q}}$, and $S_{{{{\bf Q}}_{\text{AF}}}}$ are: $$\begin{aligned}
J_1|\Phi_1|\Big(16J_1A_{\Phi_1}+1\Big)&=0\,,\\
J_1|\Phi_{{{\bf Q}}}|\Big(16J_1A_{\Phi_{{{\bf Q}}}}+1\Big)&=0\,,\\
J_2\gamma_{2,{{{\bf Q}}_{\text{AF}}}}S_{{{{\bf Q}}_{\text{AF}}}}\Big(2J_2\gamma_{2,{{{\bf Q}}_{\text{AF}}}}A_{S_{{{{\bf Q}}_{\text{AF}}}}}-1\Big)&=0\,.\end{aligned}$$
### Sub-case $(\alpha_1,\alpha_2)=(0,\pi/2)$
Here, the different layers in $(a,b)$ directions are decoupled from each other in a pure Weiss field channel. Inside each layer, the mean-field decoupling is purely spin-liquid. The saddle point equations for $\Phi_2$ and $S_{{{{\bf Q}}_{\text{AF}}}}$ are: $$\begin{aligned}
J_2\Big(A_{\Phi_2}+2|\Phi_2|\Big)&=0\,,\\
J_1\gamma_{1,{{{\bf Q}}_{\text{AF}}}}S_{{{{\bf Q}}_{\text{AF}}}}\Big(8J_1\gamma_{1,{{{\bf Q}}_{\text{AF}}}}A_{S_{{{{\bf Q}}_{\text{AF}}}}}-1\Big)&=0\,.\end{aligned}$$
### Sub-case $(\alpha_1,\alpha_2)=(\pi/2,\pi/2)$
This corresponds to a pure spin-liquid state with inter-plane fields $\Phi_1$, $\Phi_{{\bf Q}}$, and in-plane field $\Phi_2$. The saddle point equations are: $$\begin{aligned}
J_2\Big(A_{\Phi_2}+2|\Phi_2|\Big)&=0\,,\\
J_1|\Phi_1|\Big(16J_1A_{\Phi_1}+1\Big)&=0\,,\\
J_1|\Phi_{{{\bf Q}}}|\Big(16J_1A_{\Phi_{{{\bf Q}}}}+1\Big)&=0\,. \end{aligned}$$
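In practice, the saddle-point systems above (and those of the following cases) are solved numerically by evaluating the Brillouin-zone averages $A_{\lambda_0}, A_{\Phi_1},\dots$ on a discrete $k$-grid. A minimal sketch of that discretization, with a toy particle-hole symmetric dispersion as a placeholder (the model's actual $\Omega_{{\bf k}}^\pm$ are not reproduced here):

```python
import math

def fermi(w, beta):
    """Fermi function f(w) = 1 / (1 + exp(beta*w))."""
    return 1.0 / (1.0 + math.exp(beta * w))

def bz_average(integrand, nk=24):
    """(1/N) sum_k integrand(kx, ky, kz) over an nk^3 grid of the Brillouin
    zone in reduced units -- the structure shared by A_lambda0, A_Phi1, etc."""
    ks = [i / nk for i in range(nk)]
    total = sum(integrand(kx, ky, kz) for kx in ks for ky in ks for kz in ks)
    return total / nk ** 3

# Toy dispersion Omega_k^± = lam0 ± cos(2*pi*kx): a placeholder, NOT the
# model's actual Omega_k^±. At lam0 = 0 the constraint A_lambda0 = 1 holds
# identically, because f(w) + f(-w) = 1.
beta, lam0 = 10.0, 0.0
A_lam0 = bz_average(lambda kx, ky, kz:
                    fermi(lam0 + math.cos(2 * math.pi * kx), beta)
                    + fermi(lam0 - math.cos(2 * math.pi * kx), beta))
print(abs(A_lam0 - 1.0) < 1e-9)  # -> True
```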
Case B
------
Here we consider that $\alpha_1$ is fixed to an extreme value ($0$ or $\pi/2$), while $\alpha_2$ is a free parameter. Since the extreme values of $\alpha_2$ have already been considered in case A, we thus assume the strict inequality $0<\alpha_2<\pi/2$. Eq. (\[eq:sa2\]) can thus be simplified as: $$\begin{aligned}
\label{eq:cb}
|\Phi_2|^2+\gamma_{2,{{{\bf Q}}_{\text{AF}}}}|S_{{{{\bf Q}}_{\text{AF}}}}|^2&=0\,,\end{aligned}$$ Putting aside the trivial solution with vanishing fields, this relation requires an ordering wave-vector such that $\gamma_{2,{{{\bf Q}}_{\text{AF}}}}<0$. Invoking the definition Eq. (\[defgamma2k\]), we check easily that $\gamma_{2,{{{\bf Q}}_{\text{AF}}}^{\rm II}}=-2$ and $\gamma_{2,{{{\bf Q}}_{\text{AF}}}^{\rm I}}=+2$. Therefore we consider only the ordering wave-vector ${{{\bf Q}}_{\text{AF}}}^{\rm II}=(1/2,1/2,0)$ for this case. Eq. (\[eq:cb\]) enforces proportionality between the fields: $$\begin{aligned}
\vert\Phi_2\vert=\vert S_{{{{\bf Q}}_{\text{AF}}}}\vert\sqrt{2}~.\label{eq:phi2eqSQ}\end{aligned}$$ This relation and Eq. (\[eq:slam2\]) have to be completed by the other relevant saddle point equations that are rewritten as follows: $$\begin{aligned}
J_2\Big(A_{\Phi_2}+2|\Phi_2|\Big)&=0\,,\\
J_2S_{{{{\bf Q}}_{\text{AF}}}}\Big[4J_2\cos^2{({\alpha}_2)}A_{S_{{{{\bf Q}}_{\text{AF}}}}}+1\Big]&=0\,, \end{aligned}$$ and also:
### Sub-case $\alpha_1=0$:
$$\begin{aligned}
\Phi_1=\Phi_{{\bf Q}}&=0\,. \end{aligned}$$
### Sub-case $\alpha_1=\pi/2$:
$$\begin{aligned}
J_1|\Phi_1|\Big(16J_1A_{\Phi_1}+1\Big)&=0\,,\\
J_1|\Phi_{{{\bf Q}}}|\Big(16J_1A_{\Phi_{{{\bf Q}}}}+1\Big)&=0\,.\end{aligned}$$
Case C
------
This case corresponds to $\sin{(2{\alpha}_2)}=0$ and a strict inequality $0<\alpha_1<\pi/2$. Here we first consider Eq. (\[eq:sa1\]), that is rewritten as: $$\begin{aligned}
\label{eq:sa1casc}
8\gamma_{1,{{{\bf Q}}_{\text{AF}}}}|S_{{{{\bf Q}}_{\text{AF}}}}|^2+|\Phi_1|^2+|\Phi_{{{\bf Q}}}|^2&=0\,.\end{aligned}$$ Excluding the trivial solution with all fields vanishing, the AF ordering wave-vector must satisfy $\gamma_{1,{{{\bf Q}}_{\text{AF}}}}<0$. Invoking the definition Eq. (\[defgamma1k\]), we check easily that $\gamma_{1,{{{\bf Q}}_{\text{AF}}}^{\rm I}}=-1$ and $\gamma_{1,{{{\bf Q}}_{\text{AF}}}^{\rm II}}=+1/2$. Therefore we consider only the ordering wave-vector ${{{\bf Q}}_{\text{AF}}}^{\rm I}=(1,1,1)$ for this case, and Eq. (\[eq:sa1casc\]) reads: $$\begin{aligned}
\label{eq:sa1cascbis}
8|S_{{{{\bf Q}}_{\text{AF}}}}|^2=|\Phi_1|^2+|\Phi_{{{\bf Q}}}|^2~.\end{aligned}$$ In case C, this relation, together with Eq. (\[eq:slam2\]), has to be completed by the following relevant saddle point equations: $$\begin{aligned}
J_1|\Phi_1|\Big(16J_1A_{\Phi_1}\sin^2{{\alpha}_1}+1\Big)&=0\,,\\
J_1|\Phi_{{{\bf Q}}}|\Big(16J_1A_{\Phi_{{{\bf Q}}}}\sin^2{{\alpha}_1}+1\Big)&=0\,.\end{aligned}$$ and also:
### Sub-case $\alpha_2=0$:
$$\begin{aligned}
S_{{{{\bf Q}}_{\text{AF}}}}\Big[A_{S_{{{{\bf Q}}_{\text{AF}}}}}\big(8J_1\cos^2{{\alpha}_1}-4J_2\big)+1\Big]&=0\,, \\
\Phi_2&=0\,.\end{aligned}$$
### Sub-case $\alpha_2=\pi/2$:
$$\begin{aligned}
J_2\Big(A_{\Phi_2}+2|\Phi_2|\Big)&=0\,,\\
S_{{{{\bf Q}}_{\text{AF}}}}\Big(8J_1A_{S_{{{{\bf Q}}_{\text{AF}}}}}\cos^2{{\alpha}_1}+1\Big)&=0\,.\end{aligned}$$
Case D
------
This case is in principle the most general one, where both $\alpha_1$ and $\alpha_2$ are considered as free parameters. Since the extreme values $0$ and $\pi/2$ have already been considered in the previous cases, we assume here the strict inequalities $0<\alpha_1<\pi/2$ and $0<\alpha_2<\pi/2$. Therefore, Eqs. (\[eq:sa1\]) and (\[eq:sa2\]) can be simplified as: $$\begin{aligned}
\label{eq:casd}
8\gamma_{1,{{{\bf Q}}_{\text{AF}}}}|S_{{{{\bf Q}}_{\text{AF}}}}|^2+|\Phi_1|^2+|\Phi_{{{\bf Q}}}|^2&=0\,,\\
|\Phi_2|^2+\gamma_{2,{{{\bf Q}}_{\text{AF}}}}|S_{{{{\bf Q}}_{\text{AF}}}}|^2&=0\,. \end{aligned}$$ The only way to obtain a solution without all fields vanishing would require at least $S_{{{{\bf Q}}_{\text{AF}}}}\neq 0$. The corresponding AF ordering wave-vector ${{{\bf Q}}_{\text{AF}}}$ would have to satisfy both $\gamma_{1,{{{\bf Q}}_{\text{AF}}}}<0$ and $\gamma_{2,{{{\bf Q}}_{\text{AF}}}}<0$. Nevertheless, invoking definitions (\[defgamma1k\]) and (\[defgamma2k\]), we check easily that $\gamma_{1,{{{\bf Q}}_{\text{AF}}}^{\rm I}}<0$ but $\gamma_{2,{{{\bf Q}}_{\text{AF}}}^{\rm I}}>0$, and $\gamma_{1,{{{\bf Q}}_{\text{AF}}}^{\rm II}}>0$ but $\gamma_{2,{{{\bf Q}}_{\text{AF}}}^{\rm II}}<0$. We thus conclude that neither ${{{\bf Q}}_{\text{AF}}}^{\rm I}$ nor ${{{\bf Q}}_{\text{AF}}}^{\rm II}$ ordering wave-vectors can lead to such a solution.
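The sign bookkeeping that rules out Case D can be stated compactly; a sketch using the $\gamma$ values quoted above:

```python
# gamma_1(Q) and gamma_2(Q) at the two candidate AF ordering wave-vectors,
# as quoted in the text (from Eqs. defgamma1k and defgamma2k).
gamma = {
    "Q_AF^I  = (1,1,1)":     (-1.0, +2.0),
    "Q_AF^II = (1/2,1/2,0)": (+0.5, -2.0),
}

# A nontrivial Case-D solution would need gamma_1 < 0 AND gamma_2 < 0
# simultaneously at the same wave-vector.
admissible = [q for q, (g1, g2) in gamma.items() if g1 < 0 and g2 < 0]
print(admissible)  # -> []  (no wave-vector supports a nontrivial Case D)
```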
---
abstract: 'In this paper we consider space-time codes where the code-words are restricted to either real or quaternionic matrices. We prove two separate diversity-multiplexing gain trade-off (DMT) upper bounds for such codes and provide a criterion for a lattice code to achieve these upper bounds. We also point out that lattice codes based on ${\mathbb{Q}}$-central division algebras satisfy this optimality criterion. As a corollary, this result provides a DMT classification for all ${\mathbb{Q}}$-central division algebra codes that are based on standard embeddings.'
bibliography:
- 'STC.bib'
title: The DMT classification of real and quaternionic lattice codes
---
Introduction
============
In [@EKPKL] the authors proved that for every number of transmit antennas $n$ there exists a DMT-optimal code in the space $M_{n}({\mathbb{C}})$. These codes are derived from division algebras whose center is a complex quadratic field. However, this result is actually more general, and their proof revealed that as long as a $2n^2$-dimensional lattice code in $M_n({\mathbb{C}})$ has the non-vanishing determinant (NVD) property, it is DMT optimal. Yet, this result does not tell us anything about space-time lattice codes that are not full-dimensional in $M_n({\mathbb{C}})$. Such codes naturally appear in the scenario where we have fewer receive antennas than transmit antennas and try to keep the decoding complexity limited.
One natural class of such space-time codes are the codes derived from ${\mathbb{Q}}$-central division algebras. In this paper we will measure their DMT. Unlike the case of a complex quadratic center, ${\mathbb{Q}}$-central division algebras are divided into two categories with respect to their DMT performance. This division is based on the ramification of the infinite Hasse invariant of the division algebra, which decides whether the lattice code corresponding to the division algebra can be embedded into real or quaternionic space.
Our DMT classification holds for any multiplexing gain, extending previous partial results in [@VLL2013; @ISIT2016_MIMO], which were based on the theory of Lie algebras. We note that the approach used in this paper is quite different and more general. In the spirit of [@EKPKL], we are not just considering division algebra codes, but all space-time codes where the code matrices are restricted to $M_n({\mathbb{R}})$ (resp. $M_{n/2}({\mathbb{H}})$), and we provide two different upper bounds for the DMT of such codes. We then prove that any $n^2$-dimensional NVD lattice inside $M_n({\mathbb{R}})$ (resp. $M_{n/2}({\mathbb{H}})$) achieves the respective upper bound. As the ${\mathbb{Q}}$-central division algebra codes are of this type, we get their DMT as a corollary.
Notation and preliminaries
==========================
#### Notation {#notation .unnumbered}
Given a matrix $X$, we denote its complex conjugate by $X^*$, its transpose by $X^T$ and its conjugate transpose by $X^{\dagger}$.\
We use the dotted inequality $f(\rho) {\mathrel{\dot{\leq}}}g(\rho)$ to mean $\lim_{\rho\to \infty}\frac{\log f(\rho)}{\log \rho} \leq \lim_{\rho\to \infty}\frac{\log g(\rho)}{\log \rho},$ and similarly for equality.
Subspaces and lattices {#basic}
----------------------
In this paper we will consider space-time codes that are subsets of certain subspaces of the $2n^2$-dimensional real vector space $M_n({\mathbb{C}})$. The first such subspace consists of all the real matrices inside $M_n({\mathbb{C}})$ and we denote it with $M_n({\mathbb{R}})$. The other subspace of interest consists of quaternionic matrices.
Let us assume that $2\mid n$. We denote with $M_{n/2}({\mathbb{H}})$ the set of quaternionic matrices $$\begin{pmatrix}
A & -B^* \\
B& A^*
\end{pmatrix}
\in M_{n}({\mathbb{C}}),$$ where $*$ refers to complex conjugation and $A$ and $B$ are complex matrices in $M_{n/2}({\mathbb{C}})$. Note that quaternionic matrices form an $n^2$-dimensional subspace of $M_n({\mathbb{C}})$.
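A quick numerical illustration of this block structure (a sketch; it also checks the standard fact that the determinant of a quaternionic matrix is real and non-negative):

```python
import numpy as np

def quaternionic(A, B):
    """Assemble [[A, -B*], [B, A*]] in M_n(C) from A, B in M_{n/2}(C)."""
    return np.block([[A, -B.conj()], [B, A.conj()]])

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
X = quaternionic(A, B)           # a 4x4 quaternionic matrix (n = 4)

d = np.linalg.det(X)
print(abs(d.imag) < 1e-9 and d.real >= 0.0)  # -> True
```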
The space-time codes we consider in this work are based on additive groups in $M_{n}({\mathbb{C}})$.
A [*matrix lattice*]{} $L \subseteq M_{n}({\mathbb{C}})$ has the form $$L={\mathbb{Z}}B_1\oplus {\mathbb{Z}}B_2\oplus \cdots \oplus {\mathbb{Z}}B_k,$$ where the matrices $B_1,\dots, B_k$ are linearly independent over ${\mathbb{R}}$, i.e., form a lattice basis, and $k$ is called the *dimension* of the lattice.
We immediately see that if we have a lattice inside the space $M_n({\mathbb{R}})$ or $M_{n/2}({\mathbb{H}})$ the maximal dimension it can have is $n^2$.
\[def:NVD\] If the *minimum determinant* of the lattice $L \subseteq M_{n}({\mathbb{C}})$ is non-zero, i.e. satisfies $${\hbox{\rm det}_{min}\left( L\right)}:=\inf_{{\bf 0} \neq X \in L}{\ensuremath{\left\lvert \det (X) \right\rvert}} > 0,$$ we say that the lattice satisfies the *non-vanishing determinant* (NVD) property.
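As a toy illustration of the NVD property for $n=2$: quaternionic matrices with scalar blocks $A=(a)$, $B=(b)$ and Gaussian-integer $a,b$ have $\det = |a|^2+|b|^2$, a non-negative rational integer, so the minimum determinant over nonzero lattice points is $1$. (This is only indicative of the division-algebra construction discussed below, not a formal claim about any specific order.)

```python
import itertools
import numpy as np

def x_mat(a, b):
    """Quaternionic 2x2 codeword [[a, -b*], [b, a*]]."""
    return np.array([[a, -np.conjugate(b)], [b, np.conjugate(a)]])

# Enumerate a finite window of the lattice (a, b Gaussian integers) and
# check that every nonzero point has |det| >= 1.
window = [m + 1j * k for m in range(-2, 3) for k in range(-2, 3)]
dets = [abs(np.linalg.det(x_mat(a, b)))
        for a, b in itertools.product(window, repeat=2) if (a, b) != (0, 0)]
print(round(min(dets), 9))  # -> 1.0 : the NVD property holds on this window
```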
Building high dimensional NVD lattices is a highly non-trivial task. A natural source of such lattices are division algebras. Let $\mathcal{D}$ be a degree $n$ ${\mathbb{Q}}$-central division algebra. We say that the algebra ${{\mathcal D}}$ is *ramified at the infinite place* if $
{{\mathcal D}}\otimes_{{\mathbb{Q}}}{\mathbb{R}}\simeq M_{n/2}(\mathbb{H})$. If it is not, then $
{{\mathcal D}}\otimes_{{\mathbb{Q}}}{\mathbb{R}}\simeq M_{n}({\mathbb{R}}).
$
Let $\Lambda$ be an *order* in $\mathcal{D}$.
[@VLL2013 Lemma 9.10]\[embeddings\] If the infinite prime is ramified in the algebra ${{\mathcal D}}$, then there exists an embedding $$\psi: \mathcal{D} \to M_{n/2}(\mathbb{H})$$ such that $\psi(\Lambda)$ is an $n^2$-dimensional NVD lattice. If ${{\mathcal D}}$ is not ramified at the infinite place, then there exists an embedding $$\psi: \mathcal{D} \to M_{n}({\mathbb{R}})$$ such that $\psi(\Lambda)$ is an $n^2$-dimensional NVD lattice.
Channel model
-------------
We consider a MIMO system with $n$ transmit and $m$ receive antennas, and minimal delay $T=n$. The received signal is $$\label{channel}
Y_c=\sqrt{\frac{\rho}{n}} H_c \bar{X} + W_c,$$ where $\bar{X} \in M_n({\mathbb{C}})$ is the transmitted codeword, $H_c \in M_{m,n}({\mathbb{C}})$ and $W_c \in M_{m,n}({\mathbb{C}})$ are the channel and noise matrices with i.i.d. circularly symmetric complex Gaussian entries $h_{ij}, w_{ij} \sim \mathcal{N}_{{\mathbb{C}}}(0,1)$, and $\rho$ is the signal-to-noise ratio (SNR). The set of transmitted codewords $\mathcal{C}$ satisfies the average power constraint $$\label{power_constraint}
\frac{1}{{\ensuremath{\left\lvert \mathcal{C} \right\rvert}}} \frac{1}{n^2} \sum_{X \in \mathcal{C}} {\ensuremath{\left\Vert X \right\Vert}}^2 \leq 1.$$ We suppose that perfect channel state information is available at the receiver but not at the transmitter, and that maximum likelihood decoding is performed. In the DMT setting [@ZT], we consider codes $\mathcal{C}(\rho)$ whose size grows with the SNR, and define the multiplexing gain as $$r=\lim_{\rho \to \infty} \frac{1}{n}\frac{\log {\ensuremath{\left\lvert \mathcal{C} \right\rvert}}}{\log \rho},$$ and the diversity gain as $$d(r)=-\lim_{\rho \to \infty}\frac{\log P_e}{\log \rho},$$ where $P_e$ is the average error probability.
#### Spherically shaped lattice codes {#spherically-shaped-lattice-codes .unnumbered}
Let now $L$ be a lattice in $M_n({\mathbb{C}})$. Given $M$, consider the subset of elements whose Frobenius norm is bounded by $M$: $$L(M)=\{ X \in L \;:\; {\ensuremath{\left\Vert X \right\Vert}} \leq M\}.$$ Let $k \leq 2n^2$ be the dimension of $L$ as a ${\mathbb{Z}}$-module. As in [@VLL2013], we choose $M=\rho^{\frac{rn}{k}}$ and consider codes of the form $$\mathcal{C}(\rho)=M^{-1} L(M)=\rho^{-\frac{rn}{k}} L(\rho^{\frac{rn}{k}}),$$ which satisfy the power constraint (\[power\_constraint\]). The multiplexing gain of this code is $r$.
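As a toy illustration of the shaping (our own example, not from the paper): take $n=1$ and $L={\mathbb{Z}}\oplus{\mathbb{Z}}i$ inside $M_1({\mathbb{C}})$. The sketch below builds $L(M)$, scales by $M^{-1}$, and checks that the resulting code meets the power constraint (\[power\_constraint\]):

```python
# Toy example (our own, n = 1): L = Z + Zi inside M_1(C); keep the points of
# Frobenius norm at most M, then scale by 1/M.  Every codeword then has norm
# at most 1, so the average power constraint holds automatically.
M = 5.0
L_M = [complex(a, b) for a in range(-6, 7) for b in range(-6, 7)
       if abs(complex(a, b)) <= M]          # L(M)
code = [z / M for z in L_M]                 # C = M^{-1} L(M)
avg_power = sum(abs(x) ** 2 for x in code) / len(code)
print(len(code), avg_power <= 1.0)
```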
Real lattice codes
==================
In this section, we focus on the special case where $\mathcal{C}(\rho) \subset M_n({\mathbb{R}})$, i.e. the code is a set of real matrices.
Equivalent real channel
-----------------------
First, we show that the channel model (\[channel\]) is equivalent to a real channel with $n$ transmit and $2m$ receive antennas.\
We can write $H_c=H_r + i H_i$, $W_c=W_r + i W_i$, where $H_r,H_i,W_r,W_i$ have i.i.d. real Gaussian entries with variance $1/2$. If $Y_c=Y_r + i Y_i$, with $Y_r, Y_i \in M_{m \times n}({\mathbb{R}})$, we can write an equivalent real system with $2m$ receive antennas: $$\label{real_channel}
Y=\begin{pmatrix} Y_r \\ Y_i \end{pmatrix} =
\sqrt{\frac{\rho}{n}} \begin{pmatrix} H_r \\ H_i \end{pmatrix} \bar{X} + \begin{pmatrix} W_r \\ W_i \end{pmatrix} =\sqrt{\frac{\rho}{n}} H\bar{X} + W,$$ where $H \in M_{2m \times n}({\mathbb{R}})$, $W \in M_{2m \times n}({\mathbb{R}})$ have real i.i.d. Gaussian entries with variance $1/2$.
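The equivalence is a direct bookkeeping identity, which the following sketch verifies numerically (our own check; the dimensions $m$, $n$ and the SNR below are arbitrary small test values):

```python
# Numerical check (our own sketch): for a REAL codeword X, stacking real and
# imaginary parts of the complex channel reproduces the equivalent real
# channel with 2m receive antennas.
import math, random

m, n, rho = 2, 3, 10.0
s = math.sqrt(rho / n)

def cg():  # complex Gaussian entry with variance 1/2 per real dimension
    return complex(random.gauss(0, math.sqrt(0.5)), random.gauss(0, math.sqrt(0.5)))

Hc = [[cg() for _ in range(n)] for _ in range(m)]
Wc = [[cg() for _ in range(n)] for _ in range(m)]
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]  # real codeword

# Complex channel: Yc = s * Hc X + Wc
Yc = [[s * sum(Hc[i][k] * X[k][j] for k in range(n)) + Wc[i][j]
       for j in range(n)] for i in range(m)]

# Stacked real channel: [Yr; Yi] = s [Hr; Hi] X + [Wr; Wi]
H = [[z.real for z in row] for row in Hc] + [[z.imag for z in row] for row in Hc]
W = [[z.real for z in row] for row in Wc] + [[z.imag for z in row] for row in Wc]
Y = [[s * sum(H[i][k] * X[k][j] for k in range(n)) + W[i][j]
      for j in range(n)] for i in range(2 * m)]

err = max(abs(Y[i][j] - (Yc[i][j].real if i < m else Yc[i - m][j].imag))
          for i in range(2 * m) for j in range(n))
print(err < 1e-12)  # True
```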
General DMT upper bound for real codes
--------------------------------------
Using the equivalent real channel in the previous section, we can now establish a general upper bound for the DMT of real codes.
\[theorem\_real\_upper\] Suppose that $\forall \rho$, $\mathcal{C}(\rho) \subset M_n({\mathbb{R}})$. Then the DMT of the code $\mathcal{C}$ is upper bounded by the piecewise linear function $d_1(r)$ connecting the points $(r,[(m-r)(n-2r)]^+)$ where $2r \in {\mathbb{Z}}$.
This part of the proof closely follows [@ZT]. Given a rate $R=r \log \rho$, consider the outage probability [@Telatar] $$\label{P_out}
P_{\operatorname*{out}}(R)=\inf_{Q \succ 0, \;\operatorname*{tr}(Q) \leq n} \mathbb{P}\left\{ \Psi(Q,H) \leq R\right\},$$ where $\Psi(Q,H)$ is the maximum mutual information per channel use of the real MIMO channel (\[real\_channel\]) with fixed $H$ and real input with fixed covariance matrix $Q$.[^1] Following a similar reasoning as in [@Telatar Section 3.2], it is not hard to see that $$\Psi(Q,H)=\frac{1}{2} \log \det (I + \frac{\rho}{n} H Q H^T).$$ As in [@ZT Section III.B], since $\log \det$ is increasing on the cone of positive definite symmetric matrices, for all $Q$ such that $\operatorname*{tr}(Q)\leq n$ we have $\frac{Q}{n}\preceq I$ and $$P_{\operatorname*{out}}(R) \geq \mathbb{P} \left\{\frac{1}{2} \log \det (I + \rho HH^T) \leq R\right\}.$$ Note that $\det(I+\rho HH^T)=\det(I+\rho H^T H)$. Let $l=\min(2m,n)$, and $\Delta={\ensuremath{\left\lvert n-2m \right\rvert}}$. Let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_l > 0$ be the nonzero eigenvalues of $H^T H$. The joint probability distribution of $\boldsymbol\lambda=(\lambda_1,\ldots,\lambda_l)$ is given by [@Edelman][^2]: $$\label{p_lambda_real}
p(\boldsymbol\lambda)=Ke^{-\sum\limits_{i=1}^l \lambda_i} \prod_{i=1}^l \lambda_i^{\frac{\Delta-1}{2}}\prod_{i<j}(\lambda_i-\lambda_j)$$ for some constant $K$. Consider the change of variables $\lambda_i=\rho^{-\alpha_i} \;\forall i$. The corresponding distribution for $\boldsymbol\alpha=(\alpha_1,\ldots,\alpha_l)$ in the set $\mathcal{A}=\{{\boldsymbol\alpha}\;:\; \alpha_1 \leq \cdots \leq \alpha_l\}$ is $$\label{p_alpha_real}
\!p(\boldsymbol\alpha)\!=\!K(\log \rho)^l e^{-\!\!\sum\limits_{i=1}^l \! \rho^{-\alpha_i}}\!\rho^{-\!\!\sum\limits_{i=1}^l\!\alpha_i \left(\frac{\Delta+1}{2}\right)}\!\prod_{i<j}\!\left(\rho^{-\alpha_i}\!\!-\!\rho^{-\alpha_j}\!\right)$$ Then we have [$$\begin{aligned}
&P_{\operatorname*{out}}(R) \doteq \mathbb{P}\left\{ \prod_{i=1}^l (1+ \rho \lambda_i) \leq \rho^{2r}\right\}\\
&=\mathbb{P}\left\{ \prod_{i=1}^l(1+\rho^{1-\alpha_i}) \leq \rho^{2r}\right\}.\end{aligned}$$ ]{}To simplify notation, we take $s=2r$. Note that $1+\rho^{1-\alpha_i} \leq 2 \rho^{(1-\alpha_i)^+} \doteq \rho^{(1-\alpha_i)^+}$, therefore [$$\begin{aligned}
& P_{\operatorname*{out}}(R) {\mathrel{\dot{\geq}}}\mathbb{P}\left\{ \prod_{i=1}^l \rho^{(1-\alpha_i)^{+}} \leq \rho^s\right\}\geq \mathbb{P}(\mathcal{A}_0),\end{aligned}$$ ]{}where [ $$\begin{aligned}
\label{A_0}
&\mathcal{A}_0=\left\{{\boldsymbol\alpha}\in \mathcal{A}:\;\alpha_i \geq 0 \;\forall i=1,\ldots,l,\; \sum_{i=1}^l (1-\alpha_i)^+ \leq s\right\} \notag\\
&=\!\left\{\!{\boldsymbol\alpha}\in \mathcal{A}\!:\;\!\alpha_j\geq 0,\;\sum_{i=1}^j (1-\alpha_i) \leq s \; \forall j=1,\ldots,l\right\}.\end{aligned}$$ ]{}In fact, given ${\boldsymbol\alpha}\in \mathcal{A}$, let $t=t({\boldsymbol\alpha})$ be such that $\alpha_{t+1} \geq 1 \geq \alpha_t$. Then $\forall j=1,\ldots,l$, $\sum_{i=1}^j (1-\alpha_i) \leq \sum_{i=1}^t (1-\alpha_i)=\sum_{i=1}^l (1-\alpha_i)^+.$\
Consider $S_{\delta}=\{{\boldsymbol\alpha}\in \mathcal{A}:\; {\ensuremath{\left\lvert \alpha_i-\alpha_j \right\rvert}}> \delta \; \forall i \neq j\}$. Then [$$\begin{aligned}
&P_{\operatorname*{out}}(R) {\mathrel{\dot{\geq}}}\int_{\mathcal{A}_0} e^{-\sum\limits_{i=1}^l \rho^{-\alpha_i}} \rho^{- \sum\limits_{i=1}^l\frac{(\Delta+1)\alpha_i}{2}} \prod_{i<j} (\rho^{-\alpha_i}-\rho^{-\alpha_j}) d {\boldsymbol\alpha}\\
&\geq \int_{\mathcal{A}_0 \cap S_{\delta}}e^{-\sum\limits_{i=1}^l \rho^{-\alpha_i}} \rho^{-\sum\limits_{i=1}^l \frac{(\Delta+1)\alpha_i}{2}} \prod_{i<j} (\rho^{-\alpha_i}-\rho^{-\alpha_j}) d {\boldsymbol\alpha}\\
&\geq \frac{(1-\rho^{-\delta})^l}{e^l} \int_{\mathcal{A}_0 \cap S_{\delta}} \rho^{-\sum\limits_{i=1}^l\alpha_i N_i} d{\boldsymbol\alpha}\doteq \int_{\mathcal{A}_0 \cap S_{\delta}} \rho^{-\sum\limits_{i=1}^l\alpha_i N_i} d{\boldsymbol\alpha},\end{aligned}$$ ]{}where $N_i=\frac{\Delta+2l-2i+1}{2}$. The previous inequality follows from the fact that $\rho^{-\alpha_i}-\rho^{-\alpha_j}>\rho^{-\alpha_i}(1-\rho^{-\delta})$ for ${\boldsymbol\alpha}\in S_{\delta}$, and $e^{-\sum\limits_{i=1}^l \rho^{-\alpha_i}}\geq \frac{1}{e}$ if $\alpha_i \geq 0$. (Note that for a fixed $i$, there are $l-i$ possible values for $j$ such that $i<j$.)
\[inf\_lemma\] Let $f({\boldsymbol\alpha})=\sum\limits_{i=1}^l (q+l+1-2i)\alpha_i$. Then $$\inf\limits_{{\boldsymbol\alpha}\in \mathcal{A}_0} f({\boldsymbol\alpha})=(-q-l+2{\ensuremath{\left\lfloor s \right\rfloor}}+1)s+ql-{\ensuremath{\left\lfloor s \right\rfloor}}({\ensuremath{\left\lfloor s \right\rfloor}}+1)=f(\boldsymbol\alpha^*),$$ where $k={\ensuremath{\left\lfloor s \right\rfloor}}+1$, $\alpha_1^*=\ldots=\alpha_{k-1}^*=0$, $\alpha_k^*=k-s$, $\alpha_{k+1}^*=\ldots=\alpha_l^*=1$.
The proof of Lemma \[inf\_lemma\] can be found in Appendix \[proof\_inf\_lemma\].\
Using Lemma \[inf\_lemma\] with $q=\Delta+l$, $s=2r$, we find that $\inf_{{\boldsymbol\alpha}\in \mathcal{A}_0} \sum_{i=1}^l N_i \alpha_i=\inf_{{\boldsymbol\alpha}\in \mathcal{A}_0} \frac{f({\boldsymbol\alpha})}{2}$ is equal to [$$\begin{aligned}
&\frac{1}{2}\left[(-\Delta-2l+2{\ensuremath{\left\lfloor 2r \right\rfloor}}+1)2r+(\Delta+l)l-{\ensuremath{\left\lfloor 2r \right\rfloor}}({\ensuremath{\left\lfloor 2r \right\rfloor}}+1)\right]\\
&=(-2m-n+2{\ensuremath{\left\lfloor 2r \right\rfloor}}+1)r+mn-\frac{{\ensuremath{\left\lfloor 2r \right\rfloor}}({\ensuremath{\left\lfloor 2r \right\rfloor}}+1)}{2}.\end{aligned}$$ ]{}This is the piecewise linear function $d_1(r)$ connecting the points $(r,[(m-r)(n-2r)]^+)$ where $2r \in {\mathbb{Z}}$.\
Using the Laplace principle, $\forall \delta>0$ we have $$\lim_{\rho \to \infty} -\frac{\log P_{\operatorname*{out}}(R)}{\log\rho}\leq \inf_{\mathcal{A}_0 \cap S_{\delta}} \frac{f(\boldsymbol\alpha)}{2}.$$ Note that $\forall \delta$, the point ${\boldsymbol\alpha}_{\delta}$ such that $\alpha_{\delta,i}=\alpha_i^*+\frac{\delta i}{l}$ is in $\mathcal{A}_0 \cap S_{\frac{\delta}{l}}$ and when $\delta \to 0$, ${\boldsymbol\alpha}_{\delta} \to {\boldsymbol\alpha}^*$. By continuity of $f$, $$\begin{aligned}
\lim_{\delta \to 0} \inf_{\mathcal{A}_0 \cap S_{\delta}} \frac{f({\boldsymbol\alpha})}{2}=\frac{f({\boldsymbol\alpha}^*)}{2}=d_1(r). \tag*{{\IEEEQEDopen}}\end{aligned}$$
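The closed form derived above can be checked numerically against the stated breakpoints; the sketch below (our own check, for the example of $n=4$ transmit and $m=2$ receive antennas) verifies that the formula interpolates $(r,(m-r)(n-2r))$ at half-integer $r$:

```python
# Numeric check (our own): the closed form from the proof,
#   d1(r) = (-2m - n + 2*floor(2r) + 1) r + m n - floor(2r)(floor(2r)+1)/2,
# interpolates the breakpoints (r, (m-r)(n-2r)) at half-integer r.
import math

def d1(r, m, n):
    t = math.floor(2 * r)
    return (-2 * m - n + 2 * t + 1) * r + m * n - t * (t + 1) / 2

m, n = 2, 4  # example: n = 4 transmit, m = 2 receive antennas
ok = all(abs(d1(t / 2, m, n) - (m - t / 2) * (n - t)) < 1e-12
         for t in range(2 * min(m, n // 2) + 1))
print(ok)  # True
```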
DMT of real lattice codes with NVD
----------------------------------
In this section, we show that real spherically shaped lattice codes with the NVD property achieve the DMT upper bound of Theorem \[theorem\_real\_upper\]. This result extends Proposition 4.2 in [@ISIT2016_MIMO].
\[theorem\_real\_lower\] Let $L$ be an $n^2$-dimensional lattice in $M_n({\mathbb{R}})$, and consider the spherically shaped code $\mathcal{C}(\rho)=\rho^{-\frac{r}{n}} L(\rho^{\frac{r}{n}})$.\
If $L$ has the NVD property, then the DMT of the code $\mathcal{C}(\rho)$ is the function $d_1(r)$ connecting the points $(r,[(m-r)(n-2r)]^+)$ where $2r \in {\mathbb{Z}}$.
Since the upper bound has already been established in Theorem \[theorem\_real\_upper\], we only need to prove that the DMT is lower bounded by $d_1(r)$. The argument closely follows the proof in [@EKPKL], and thus some details are omitted. To simplify notation, we assume that ${\hbox{\rm det}_{min}\left( L\right)}=1$.\
We consider the sphere bound for the error probability for the equivalent real channel (\[real\_channel\]): for a fixed channel realization $H$, $$P_e(H) \leq \mathbb{P}\left\{ {\ensuremath{\left\Vert W \right\Vert}}^2 > d_H^2/4\right\}$$ where $d_H^2$ is the squared minimum distance in the received constellation: $$\begin{aligned}
&d_H^2\doteq\rho \min_{\bar{X},\bar{X}' \in \mathcal{C}(\rho),\;\bar{X} \neq \bar{X}'} {\ensuremath{\left\Vert H(\bar{X}-\bar{X}') \right\Vert}}^2\\
&=\rho^{1-\frac{2r}{n}} \min_{X,X' \in L(\rho^{\frac{r}{n}}),\;X \neq X'} {\ensuremath{\left\Vert H(X-X') \right\Vert}}^2. \end{aligned}$$ We denote $\Delta X=X-X'$. Let $l=\min(2m,n)$, and $\Delta={\ensuremath{\left\lvert n-2m \right\rvert}}$. Let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_l >0$ be the non-zero eigenvalues of $H^T H$, and $0 \leq \mu_1 \leq \cdots \leq \mu_n$ the eigenvalues of $\Delta X \Delta X^T$. Using the mismatched eigenvalue bound and the arithmetic-geometric inequality as in [@EKPKL], for all $k=1, \ldots, l$ [$$\begin{aligned}
&d_H^2\doteq\rho^{1-\frac{2r}{n}}\min_{X,X' \in L(\rho^{\frac{r}{n}}),\;X \neq X'} \operatorname*{tr}(H\Delta X \Delta X^T H^T)\\
&\geq \rho^{1-\frac{2r}{n}} \sum_{i=1}^l \mu_i \lambda_i \geq k \rho^{1-\frac{2r}{n}} \left(\prod_{i=1}^k \lambda_i\right)^{\frac{1}{k}} \left(\prod_{i=1}^k \mu_i\right)^{\frac{1}{k}}.\end{aligned}$$ ]{} For all $i= 1, \ldots, n$, $\mu_i \leq {\ensuremath{\left\Vert \Delta X \right\Vert}}^2 \leq
4 \rho^{\frac{2r}{n}}$, and $$\prod_{i=1}^n \mu_i =\det(\Delta X \Delta X^T) \geq 1$$ due to the NVD property. Consequently, for all $k=1,\ldots,l$ $$\prod_{i=1}^k \mu_i =\frac{\det(\Delta X \Delta X^T)}{\prod_{j=k+1}^n \mu_j} \geq \frac{1}{\rho^{\frac{2r(n-k)}{n}}}.$$ With the change of variables $\lambda_i=\rho^{-\alpha_i}$ $\forall i=1,\ldots,l$, we can write [$$\begin{aligned}
&d_H^2 {\mathrel{\dot{\geq}}}\rho^{1-\frac{2r}{n}} \rho^{-\frac{1}{k} \sum\limits_{i=1}^k \alpha_i} \frac{1}{\rho^{\frac{2r(n-k)}{nk}}}= \rho^{-\frac{1}{k}\left(\sum\limits_{i=1}^k \alpha_i +2r -k \right)}\\
&= \rho^{\delta_k(\boldsymbol\alpha,2r)} \quad \forall k=1,\ldots,l,\end{aligned}$$ ]{} where we have set $\boldsymbol\alpha=(\alpha_1,\ldots,\alpha_l)$ and $$\label{delta}
\delta_k(\boldsymbol\alpha,s)=-\frac{1}{k}\left(\sum\limits_{i=1}^k \alpha_i +s -k \right).$$ To simplify the notation, we will take $s=2r$.\
Since $2 {\ensuremath{\left\Vert W \right\Vert}}^2$ is a $\chi^2(2mn)$ random variable, we have $$\mathbb{P}\left\{{\ensuremath{\left\Vert W \right\Vert}}^2 >d\right\}=\sum_{j=0}^{mn-1}e^{-d} \frac{d^j}{j!}.$$ Let $p({\boldsymbol\alpha})$ be the distribution of ${\boldsymbol\alpha}$ in (\[p\_alpha\_real\]). Note that for $i<j$, $\rho^{-\alpha_i} \geq \rho^{-\alpha_j}$ and for a fixed $i$, there are $l-i$ possible values for $j$. Consequently $$\label{p_prime}
p(\boldsymbol\alpha) \leq p'(\boldsymbol\alpha)=K e^{-\sum\limits_{i=1}^l \rho^{-\alpha_i}} \rho^{-\sum\limits_{i=1}^l\alpha_i N_i}(\log \rho)^l$$ where $N_i=\frac{\Delta+2l-2i+1}{2}$. By averaging over the channel, the error probability is bounded by $$P_e =\int P_e(\boldsymbol\alpha) p(\boldsymbol\alpha) d\boldsymbol\alpha \leq \int \mathbb{P}\left\{ {\ensuremath{\left\Vert W \right\Vert}}^2 > \frac{\rho^{\delta_k(\boldsymbol\alpha,s)}}{4}\right\} p(\boldsymbol\alpha) d\boldsymbol\alpha.$$ Finally, we get $\forall k=1,\ldots,l$, $$\label{integral_A}
P_e \leq \int_{\mathcal{A}} p'(\boldsymbol\alpha) \Phi(d_H^2)d\boldsymbol\alpha \leq\int_{\mathcal{A}} p'(\boldsymbol\alpha) \Phi(\rho^{\delta_k({\boldsymbol\alpha},s)})d\boldsymbol\alpha$$ where $\mathcal{A}=\{{\boldsymbol\alpha}\;:\; \alpha_1 \leq \cdots \leq \alpha_l\}$, and $$\label{Phi}
\Phi(d)= \mathbb{P}\left\{ {\ensuremath{\left\Vert W \right\Vert}}^2 > \frac{d}{4}\right\}= e^{-\frac{d}{4}} \sum_{j=0}^{2mn-1} \left(\frac{d}{4}\right)^j\frac{1}{j!}.$$
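The finite sum above is the standard closed-form tail of a chi-square variable with an even number of degrees of freedom, $\mathbb{P}\{\chi^2(2k) > x\}=e^{-x/2}\sum_{j=0}^{k-1}(x/2)^j/j!$. The following Monte Carlo sketch (our own check, with arbitrary small parameters) verifies the form used for ${\ensuremath{\left\Vert W \right\Vert}}^2$:

```python
# Monte Carlo check (our own): if 2||W||^2 ~ chi^2(2mn), then
# P{||W||^2 > d} = e^{-d} * sum_{j=0}^{mn-1} d^j / j!.
import math, random

m, n, d = 2, 2, 3.0
trials = 200_000
random.seed(1)
hits = 0
for _ in range(trials):
    # ||W||^2 for a 2m x n real matrix with i.i.d. N(0, 1/2) entries
    w2 = sum(random.gauss(0, math.sqrt(0.5)) ** 2 for _ in range(2 * m * n))
    hits += w2 > d
closed_form = math.exp(-d) * sum(d ** j / math.factorial(j) for j in range(m * n))
print(abs(hits / trials - closed_form) < 0.01)  # True within MC accuracy
```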
The following Lemma is proven in Appendix \[proof\_laplace\_lemma\]:
\[laplace\_lemma\] [$$\begin{aligned}
&\min_{k=1,\ldots,l}\left(-\lim_{\rho \to \infty} \frac{1}{\log \rho} \log \int_{\mathcal{A}} p'({\boldsymbol\alpha}) \Phi(\rho^{\delta_k({\boldsymbol\alpha},s)}) d{\boldsymbol\alpha}\right) \\
&\geq \inf_{{\boldsymbol\alpha}\in \mathcal{A}_0} \sum_{i=1}^l N_i \alpha_i,\end{aligned}$$ ]{} where $\mathcal{A}_0$ is defined in (\[A\_0\]).
The proof of the Theorem is concluded using Lemma \[inf\_lemma\] with $q = \Delta+l$, $s = 2r$.
*(Figure: DMT comparison for $n=4$ transmit and $m=2$ receive antennas, showing the real-code DMT $d_1(r)$ with breakpoints at half-integer multiplexing gains, the quaternionic-code DMT $d_2(r)$ with breakpoints at integer multiplexing gains, and the unconstrained complex-code DMT connecting the points $(r,(m-r)(n-r))$ for $r \in {\mathbb{Z}}$.)*
Quaternion lattice codes
========================
Suppose that $n=2p$ is even. We consider again the channel $$\label{channel_2}
Y_c=\sqrt{\frac{\rho}{n}} H_c \bar{X} + W_c,$$ and we suppose that the codewords $\bar{X}$ are of the form $$\bar{X}=\begin{pmatrix} A & -B^* \\ B & A^* \end{pmatrix} \in M_{2p}({\mathbb{C}}),$$ where $A,B \in M_p({\mathbb{C}})$.
Equivalent quaternion channel
-----------------------------
First, we derive an equivalent model where the channel has quaternionic form. We can write $$Y_c=\begin{pmatrix} Y_1 & Y_2\end{pmatrix}, \quad H_c=\begin{pmatrix} H_1 & H_2\end{pmatrix}, \quad W_c=\begin{pmatrix} W_1 & W_2\end{pmatrix},$$ where $Y_1, Y_2, H_1, H_2, W_1, W_2 \in M_{m \times p}({\mathbb{C}})$. Then $$Y_1\!=\!\sqrt{\frac{\rho}{n}}(H_1 A + H_2 B) + W_1, \; Y_2\!=\!\sqrt{\frac{\rho}{n}}(-H_1 B^* + H_2 A^*)+W_2,$$ and we have the equivalent [quaternionic channel]{}:
$$\underbrace{\begin{pmatrix} Y_1 & Y_2 \\ -Y_2^* & Y_1^* \end{pmatrix}}_{\text{\normalsize{$Y$}}} =\sqrt{\frac{\rho}{n}} \underbrace{\begin{pmatrix} H_1 & H_2 \\ -H_2^* & H_1^* \end{pmatrix}}_{\text{\normalsize{$H$}}} \underbrace{\begin{pmatrix} A & -B^* \\ B & A^* \end{pmatrix}}_{\text{\normalsize{$X$}}} + \underbrace{\begin{pmatrix} W_1 & W_2 \\ -W_2^* & W_1^* \end{pmatrix}}_{\text{\normalsize{$W$}}}$$
General DMT upper bound for quaternion codes
--------------------------------------------
\[theorem\_quaternion\_upper\] Suppose that $\forall \rho$, $\mathcal{C}(\rho) \subset M_{n/2}({\mathbb{H}})$. Then the DMT of the code $\mathcal{C}$ is upper bounded by the function $d_2(r)$ connecting the points $(r,[(m-r)(n-2r)]^+)$ for $r \in {\mathbb{Z}}$.
The quaternionic channel can be written in the complex MIMO channel form $$\label{quaternion_channel_2}
\begin{pmatrix} Y_1 \\ -Y_2^* \end{pmatrix} =\sqrt{\frac{\rho}{n}} \begin{pmatrix} H_1 & H_2 \\ -H_2^* & H_1^* \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix} + \begin{pmatrix} W_1 \\ -W_2^* \end{pmatrix}$$ If $r$ is the multiplexing gain of the original system (\[channel\_2\]), then the multiplexing gain of this channel is $2r$, since the same number of symbols is transmitted using half the frame length.\
Consider the eigenvalues $\lambda_1=\lambda_1' \geq \lambda_2=\lambda_2' \geq \cdots \geq \lambda_p=\lambda_p' \geq 0$ of $H^{\dagger}H$. Let $l=\min(m,p)$ be the number of pairs of nonzero eigenvalues, and $\Delta={\ensuremath{\left\lvert p-m \right\rvert}}$. For fixed $H$, the capacity of this channel is [@Telatar] $$C(H) \doteq \log \det (I+ \rho H^{\dagger} H)=2 \sum_{i=1}^p \log (1+\rho \lambda_i ).$$ The joint eigenvalue density $p(\boldsymbol\lambda)=p(\lambda_1,\ldots,\lambda_l)$ of a quaternion Wishart matrix is [@Edelman_Rao][^3] $$p(\lambda_1,\ldots,\lambda_l)=K \prod_{i<j} (\lambda_i -\lambda_j)^4 \prod_{i=1}^l \lambda_i^{2\Delta+1}e^{-\sum\limits_{i=1}^l \lambda_i}$$ for some constant $K$. Considering the change of variables $\lambda_i=\rho^{-\alpha_i}$ $\forall i=1,\ldots,l$, the distribution of ${\boldsymbol\alpha}=(\alpha_1,\ldots,\alpha_l)$ is $$p({\boldsymbol\alpha})\!\!=\!\!K(\log \rho)^le^{-\sum\limits_{i=1}^l \rho^{-\alpha_i}}\! \rho^{-2\sum\limits_{i=1}^l \alpha_i (\Delta+1)}\!\prod_{i<j}\!\left(\rho^{-\alpha_i}\!\!-\!\!\rho^{-\alpha_j}\right)^4$$The outage probability for rate $R=r \log \rho$ is given by [$$\begin{aligned}
&P_{\operatorname*{out}}(R)\doteq \mathbb{P}\left\{2 \sum_{i=1}^l \log(1+\rho \lambda_i)<2r \log \rho\right\}\\
&=\!\mathbb{P}\left\{\prod_{i=1}^l (1\!+\!\rho^{1-\alpha_i})\!<\!\rho^{r}\!\right\}\!\doteq\! \mathbb{P}\left\{\prod_{i=1}^l \rho^{(1-\alpha_i)^+}\!\!<\!\rho^{r}\!\right\} \!\geq\! \mathbb{P}(\mathcal{A}_0)\end{aligned}$$ ]{}where $\mathcal{A}_0=\left\{{\boldsymbol\alpha}: 0 \leq \alpha_1 \leq \ldots \leq \alpha_l,\; \sum_{i=1}^l (1-\alpha_i)^+ <r\right\}$. Given $\delta >0$, define $S_{\delta}=\{{\boldsymbol\alpha}:\; {\ensuremath{\left\lvert \alpha_i-\alpha_j \right\rvert}}> \delta \; \forall i \neq j\}$. Then [$$\begin{aligned}
&P_{\operatorname*{out}}(R)\\ & {\mathrel{\dot{\geq}}}\!\!\int_{\mathcal{A}_0 \cap S_{\delta}}\! e^{-\sum\limits_{i=1}^l \rho^{-\alpha_i}} \!\rho^{-2\sum\limits_{i=1}^l\alpha_i(\Delta +1)} \prod_{i<j} (\rho^{-\alpha_i} -\rho^{-\alpha_j})^4 d{\boldsymbol\alpha}\\
&\geq \frac{(1-\rho^{-\delta})^l}{e^{l}} \int_{\mathcal{A}_0 \cap S_{\delta}} \rho^{-\sum\limits_{i=1}^l N_i \alpha_i} d{\boldsymbol\alpha}\end{aligned}$$]{}where $N_i=2(\Delta+2l-2i+1)$. Let $f({\boldsymbol\alpha})=\sum_{i=1}^l \alpha_i N_i$. Using the Laplace principle, $\lim_{\rho \to \infty} -\frac{\log P_{\operatorname*{out}}(R)}{\log\rho}\leq \inf_{\mathcal{A}_0 \cap S_{\delta}} f({\boldsymbol\alpha})$ $\forall \delta>0.$ Using Lemma \[inf\_lemma\] with $s=r$, $q=\Delta+l$, we find that $\inf_{{\boldsymbol\alpha}\in \mathcal{A}_0} f({\boldsymbol\alpha})=f({\boldsymbol\alpha}^*)$ is the piecewise linear function $d_2(r)$ connecting the points $(r, \left[2(p-r)(m-r)\right]^+)=(r,\left[(n-2r)(m-r)\right]^+)$ for $r \in {\mathbb{Z}}$. Note that $\forall \delta$, the point ${\boldsymbol\alpha}_{\delta}$ such that ${\boldsymbol\alpha}_{\delta,i}=\alpha_i^*+\frac{\delta i}{l}$ is in $\mathcal{A}_0 \cap S_{\frac{\delta}{l}}$ and when $\delta \to 0$, ${\boldsymbol\alpha}_{\delta} \to {\boldsymbol\alpha}^*$. By continuity of $f$, $\lim_{\delta \to 0} \inf_{\mathcal{A}_0 \cap S_{\delta}} f({\boldsymbol\alpha})=f({\boldsymbol\alpha}^*)=d_2(r)$.
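Using $\Delta+2l=p+m$ and $(\Delta+l)l=pm$, the value given by Lemma \[inf\_lemma\] can be rewritten as $d_2(r)=2\big[(-(p+m)+2{\ensuremath{\left\lfloor r \right\rfloor}}+1)r+pm-{\ensuremath{\left\lfloor r \right\rfloor}}({\ensuremath{\left\lfloor r \right\rfloor}}+1)\big]$ (our own rewriting, introduced only for a numeric check); the sketch below verifies the breakpoints and piecewise linearity for $p=m=2$:

```python
# Numeric check (our own): d2(r) = 2[(-(p+m) + 2*floor(r) + 1) r + p m
# - floor(r)(floor(r)+1)] interpolates the breakpoints (r, 2(p-r)(m-r))
# at integer r, and is linear inside each segment.
import math

def d2(r, p, m):
    t = math.floor(r)
    return 2 * ((-(p + m) + 2 * t + 1) * r + p * m - t * (t + 1))

p, m = 2, 2
breakpoints_ok = all(abs(d2(t, p, m) - 2 * (p - t) * (m - t)) < 1e-12
                     for t in range(min(p, m) + 1))
# midpoint of the first segment equals the average of its endpoints
linear_ok = abs(d2(0.5, p, m) - (d2(0, p, m) + d2(1, p, m)) / 2) < 1e-12
print(breakpoints_ok and linear_ok)  # True
```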
DMT of quaternionic lattice codes with NVD
------------------------------------------
We now show that quaternionic lattice codes with NVD achieve the upper bound of Theorem \[theorem\_quaternion\_upper\]. This result extends Proposition 4.3 in [@ISIT2016_MIMO].
\[theorem\_quaternion\_lower\] Let $L$ be an $n^2$-dimensional lattice in $M_{n/2}({\mathbb{H}})$, and consider the spherically shaped code $\mathcal{C}(\rho)=\rho^{-\frac{r}{n}} L(\rho^{\frac{r}{n}})$. If $L$ has the NVD property, then the DMT of the code $\mathcal{C}(\rho)$ is the piecewise linear function $d_2(r)$ connecting the points $(r,[(m-r)(n-2r)]^+)$ for $r \in {\mathbb{Z}}$.
To simplify notation, assume ${\hbox{\rm det}_{min}\left( L\right)}=1$. For a fixed realization $H$, $P_e(H) \leq \mathbb{P}\left\{ {\ensuremath{\left\Vert W \right\Vert}}^2 > d_H^2/4\right\}$, where [$$\begin{aligned}
&d_H^2 \doteq\rho^{1-\frac{2r}{n}} \min_{X,X' \in L(\rho^{\frac{r}{n}}),\;X \neq X'} {\ensuremath{\left\Vert H(X-X') \right\Vert}}^2.\end{aligned}$$ ]{}Let $\Delta X=X-X'$. We denote by $\lambda_1=\lambda_1' \geq \lambda_2=\lambda_2' \geq \cdots \geq \lambda_p=\lambda_p' \geq 0$ the eigenvalues of $H^{\dagger} H$, and by $0 \leq \mu_1=\mu_1' \leq \cdots \leq \mu_p=\mu_p'$ the eigenvalues of $\Delta X \Delta X^{\dagger}$. Both sets of eigenvalues have multiplicity $2$ since $H$ and $X$ are quaternion matrices. Again we set $l=\min(m,p)$ and $\Delta={\ensuremath{\left\lvert p-m \right\rvert}}$.\
Using the mismatched eigenvalue bound and the arithmetic-geometric inequality as in [@EKPKL], we find that for all $k=1,\ldots,l$, [$$\begin{aligned}
&d_H^2\doteq\rho^{1-\frac{2r}{n}} \min_{X,X' \in \mathcal{C}(\rho),\;X \neq X'} \operatorname*{tr}(H\Delta X \Delta X^{\dagger} H^{\dagger}) \\
&\geq \rho^{1-\frac{2r}{n}} \sum_{i=1}^l (2 \mu_i \lambda_i) \geq 2k \rho^{1-\frac{2r}{n}} \left(\prod_{i=1}^k \lambda_i\right)^{\frac{1}{k}} \left(\prod_{i=1}^k \mu_i\right)^{\frac{1}{k}}. \end{aligned}$$ ]{} As before, for all $i=1,\ldots,p$, $\mu_i \leq {\ensuremath{\left\Vert \Delta X \right\Vert}}^2 \leq 4 \rho^{\frac{2r}{n}}$, and $ \prod_{i=1}^p \mu_i =\det(\Delta X \Delta X^{\dagger})^{\frac{1}{2}} \geq 1$ using the NVD property of the code. Consequently, for all $k=1,\ldots,l$ $$\prod_{i=1}^k \mu_i =\frac{\det(\Delta X \Delta X^{\dagger})^{\frac{1}{2}}}{\prod_{j=k+1}^p \mu_j} \geq \frac{1}{\rho^{\frac{2r(p-k)}{n}}}=\frac{1}{\rho^{\frac{r(p-k)}{p}}}.$$ With the change of variables $\lambda_i=\rho^{-\alpha_i}$ $\forall i=1,\ldots,l$, we have $\forall k=1,\ldots,l$ $$d_H^2 \!{\mathrel{\dot{\geq}}}\! 2\rho^{1-\frac{r}{p}} \rho^{-\frac{1}{k} \sum\limits_{i=1}^k \alpha_i} \rho^{-\frac{r(p-k)}{pk}}\!= 2\rho^{-\frac{1}{k}\big(\sum\limits_{i=1}^k \alpha_i +r -k \big)}\!\!=2 \rho^{\delta_k(\boldsymbol\alpha)}$$ where $\boldsymbol\alpha=(\alpha_1,\ldots,\alpha_l)$ and $\delta_k(\boldsymbol\alpha)=-\frac{1}{k}\left(\sum\limits_{i=1}^k \alpha_i +r -k \right)$.\
Since $2 {\ensuremath{\left\Vert W \right\Vert}}^2 \sim 2 \chi^2(2mp)$, we have [$$\begin{aligned}
&P_e(H) \leq \mathbb{P}\left\{ {\ensuremath{\left\Vert W \right\Vert}}^2 > \frac{\rho^{\delta_k({\boldsymbol\alpha})}}{2}\right\}\\
&=\sum_{j=0}^{mp-1}e^{-\frac{\rho^{\delta_k({\boldsymbol\alpha})}}{4}} \left(\frac{\rho^{\delta_k({\boldsymbol\alpha})}}{4}\right)^j\frac{1}{j!}=\Phi(\rho^{\delta_k({\boldsymbol\alpha},r)}).\end{aligned}$$ ]{} By averaging with respect to the distribution $p({\boldsymbol\alpha})$, we get $$P_e \leq \int_{\mathcal{A}} p(\boldsymbol\alpha) \Phi(\rho^{\delta_k({\boldsymbol\alpha},r)})d\boldsymbol\alpha \leq \int_{\mathcal{A}} p'(\boldsymbol\alpha) \Phi(\rho^{\delta_k({\boldsymbol\alpha},r)})d\boldsymbol\alpha$$ where $\mathcal{A}=\{{\boldsymbol\alpha}: \alpha_1 \leq \cdots \leq \alpha_l\}$, and $$p'(\boldsymbol\alpha)=K(\log \rho)^l e^{-\sum\limits_{i=1}^l \rho^{-\alpha_i}} \rho^{-\sum\limits_{i=1}^l \alpha_i N_i},$$ where $N_i=2(\Delta+2l-2i+1)$. Note that $p'({\boldsymbol\alpha})$ and $\Phi(\rho^{\delta_k({\boldsymbol\alpha},r)})$ have the same form as in (\[p\_prime\]) and (\[Phi\]). From Lemma \[laplace\_lemma\] we find $d(r) \geq \inf_{{\boldsymbol\alpha}\in \mathcal{A}_0} 2\sum_{i=1}^l \alpha_i (\Delta+2l-2i+1)$, which by Lemma \[inf\_lemma\] is the piecewise linear function connecting the points $(r,[(n-2r)(m-r)]^+)$ for $r \in {\mathbb{Z}}$.
Proof of Lemma \[inf\_lemma\] {#proof_inf_lemma}
-----------------------------
Let $\bar{d}(s)=(-q-l+2{\ensuremath{\left\lfloor s \right\rfloor}}+1)s+ql-{\ensuremath{\left\lfloor s \right\rfloor}}({\ensuremath{\left\lfloor s \right\rfloor}}+1)$. Without loss of generality, we can suppose that $k-1 \leq s < k$ for some $k \in {\mathbb{N}}$, i.e. $k-1={\ensuremath{\left\lfloor s \right\rfloor}}$, $k={\ensuremath{\left\lfloor s \right\rfloor}}+1$.\
First, we show that $\forall {\boldsymbol\alpha}\in \mathcal{A}_0$, we have $f({\boldsymbol\alpha}) \geq \bar{d}(s)$. In fact [$$\begin{aligned}
&f({\boldsymbol\alpha})=\left(q-l-1\right)\sum\limits_{i=1}^l \alpha_i +2\sum\limits_{i=1}^l (l-i+1)\alpha_i \\
& \geq \left(q-l-1\right)(l-s)+2\sum\limits_{i=k}^l \sum_{j=1}^i\alpha_j \\
&\geq \left(q-l-1\right)(l-s)+2\sum\limits_{i=k}^l (i-s)\\
&=\left(q-l-1\right)(l-s)+l(l+1)-(k-1)k-2(l-k+1)s\\
&=\bar{d}(s).\end{aligned}$$ ]{} Next, we show that $\exists {\boldsymbol\alpha}^*$ such that $f({\boldsymbol\alpha}^*)=\bar{d}(s)$.\
Let $\alpha_1^*=\ldots=\alpha_{k-1}^*=0$, $\alpha_k^*=k-s$, $\alpha_{k+1}^*=\ldots=\alpha_l^*=1$. Then [$$\begin{aligned}
&f({\boldsymbol\alpha}^*)=\sum_{i=1}^l \left(q+l+1\right)\alpha_i-2\sum_{i=1}^l i \alpha_i \\
&=\left(q+l+1\right)(l-s)-2k(k-s) -l(l+1)+k(k+1)\\
&=\bar{d}(s) \tag*{{\IEEEQEDopen}}\end{aligned}$$ ]{}
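The lemma can also be spot-checked numerically (our own sketch, with arbitrary test values of $q$, $l$, $s$): the point ${\boldsymbol\alpha}^*$ is feasible in $\mathcal{A}_0$, attains the claimed value, and randomly sampled feasible points never do better:

```python
# Spot check of the lemma (our own sketch; q, l, s are arbitrary test values
# with q >= l and 0 <= s < l).
import math, random

q, l, s = 5, 3, 1.4
k = math.floor(s) + 1

def f(a):
    return sum((q + l + 1 - 2 * (i + 1)) * a[i] for i in range(l))

def feasible(a):  # membership in A_0
    return (all(x >= 0 for x in a)
            and all(a[i] <= a[i + 1] for i in range(l - 1))
            and all(sum(1 - a[i] for i in range(j + 1)) <= s for j in range(l)))

astar = [0.0] * (k - 1) + [k - s] + [1.0] * (l - k)
fs = math.floor(s)
claimed = (-q - l + 2 * fs + 1) * s + q * l - fs * (fs + 1)
assert feasible(astar) and abs(f(astar) - claimed) < 1e-12

random.seed(0)
samples = (sorted(random.uniform(0, 2) for _ in range(l)) for _ in range(200_000))
best = min(f(a) for a in samples if feasible(a))
print(best >= claimed - 1e-9)  # True: no sampled feasible point beats alpha*
```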
Proof of Lemma \[laplace\_lemma\] {#proof_laplace_lemma}
---------------------------------
The proof closely follows [@EKPKLold], which is a preliminary version of [@EKPKL]. Note that $\Phi(\rho^{\delta_k({\boldsymbol\alpha},s)}) \leq 1$ since it is a probability. Given $\varepsilon>0$, we can bound the integral (\[integral\_A\]) as follows $$\label{integral_2}
P_e \leq \int_{\bar{\mathcal{A}}} p'(\boldsymbol\alpha) \Phi(\rho^{\delta_k({\boldsymbol\alpha},s)})d\boldsymbol\alpha + \sum_{j=1}^l \int_{\mathcal{A}_j} p'(\boldsymbol\alpha) \Phi(\rho^{\delta_k({\boldsymbol\alpha},s)})d\boldsymbol\alpha,$$ where $\bar{\mathcal{A}}=\{{\boldsymbol\alpha}\in \mathcal{A}\;:\;\alpha_i \geq -\varepsilon \;\;\forall i=1,\ldots,l\}$ and $\mathcal{A}_j=\{{\boldsymbol\alpha}\in \mathcal{A}\;:\;\alpha_j < - \varepsilon\}$. Note that [$$\begin{aligned}
&\int_{\mathcal{A}_j} p'(\boldsymbol\alpha) \Phi(\rho^{\delta_k({\boldsymbol\alpha},s)})d\boldsymbol\alpha \leq \int_{\mathcal{A}_j} p'(\boldsymbol\alpha)d\boldsymbol\alpha \\
&\leq \left(\prod_{i \neq j} \int_{-\infty}^{\infty} e^{-\rho^{-\alpha_i}} \rho^{-\alpha_i N_i} d\alpha_i\right)\int_{-\infty}^{-\varepsilon} e^{-\rho^{-\alpha_j}}\rho^{-\alpha_j N_j} d\alpha_j\\
&=\left(\prod_{i \neq j} \int_{0}^{\infty} \frac{e^{-\lambda_i} \lambda_i^{N_i-1}}{\log \rho}d\lambda_i\right)\int_{\rho^{\varepsilon}}^{\infty} \frac{\lambda_j^{N_j-1} e^{-\lambda_j}}{\log \rho} d\lambda_j \\
&\doteq \rho^{0} \int_{\rho^{\varepsilon}}^{\infty} \frac{\lambda_j^{N_j-1} e^{-\lambda_j}}{\log \rho} d\lambda_j \end{aligned}$$ ]{} which vanishes exponentially fast as a function of $\rho$. For the first term in (\[integral\_2\]), we have [$$\begin{gathered}
\int_{\bar{\mathcal{A}}} p'(\boldsymbol\alpha) \Phi(\rho^{\delta_k({\boldsymbol\alpha},s)})d\boldsymbol\alpha \leq \int\limits_{\substack{{\boldsymbol\alpha}> -\varepsilon \\ \boldsymbol\delta(\alpha,s)< \varepsilon}} p'(\boldsymbol\alpha) \Phi(\rho^{\delta_k({\boldsymbol\alpha},s)}) d{\boldsymbol\alpha}\\
+\sum_{j=1}^l \int\limits_{\substack{{\boldsymbol\alpha}> -\varepsilon,\\ \delta_j({\boldsymbol\alpha},s)\geq \varepsilon}}p'(\boldsymbol\alpha) \Phi(\rho^{\delta_k({\boldsymbol\alpha},s)}) d{\boldsymbol\alpha},\end{gathered}$$ ]{}where the notation ${\boldsymbol\alpha}> -\varepsilon$ means $\alpha_i >-\varepsilon \;\;\forall i=1,\ldots,l$. We have [$$\begin{aligned}
&\int\limits_{\substack{{\boldsymbol\alpha}> -\epsilon,\\ \delta_j({\boldsymbol\alpha},s)\geq \varepsilon}}p'(\boldsymbol\alpha) \Phi(\rho^{\delta_k({\boldsymbol\alpha},s)}) d{\boldsymbol\alpha}\label{d}\\
&\leq \int\limits_{\substack{{\boldsymbol\alpha}> -\epsilon,\\ \delta_j({\boldsymbol\alpha},s)\geq \varepsilon}} e^{-\frac{\rho^{\delta_j({\boldsymbol\alpha},s)}}{4}} \sum_{t=0}^{2mn-1}\left(\frac{\rho^{\delta_j({\boldsymbol\alpha},s)}}{4}\right)^t\frac{1}{t!} \prod_{i=1}^l \rho^{-\alpha_i N_i} d{\boldsymbol\alpha}\notag \\
&\leq \left(\prod_{i=j+1}^{l} \int\limits_{\alpha_i >-\varepsilon} \rho^{-\alpha_i N_i} d\alpha_i\right) \notag \\
&\cdot\!\! \int\limits_{\substack{\alpha_1,\ldots,\alpha_j >- \varepsilon\\ \delta_j({\boldsymbol\alpha},s) \geq \varepsilon}}\!\!e^{-\frac{\rho^{\delta_j({\boldsymbol\alpha},s)}}{4}}\sum_{t=0}^{2mn-1}\!\!\left(\frac{\rho^{\delta_j({\boldsymbol\alpha},s)}}{4}\right)^t \frac{1}{t!} \rho^{-\sum\limits_{i=1}^j N_i \alpha_i}d\alpha_1 \ldots d\alpha_j \notag\end{aligned}$$ ]{} since $\delta_j({\boldsymbol\alpha},s)$ is independent of $\alpha_i$ for $i >j$. Since $\delta_j({\boldsymbol\alpha},s) \geq \varepsilon$ together with $\alpha_i>-\varepsilon$ implies $\alpha_i \leq -j\varepsilon -s +j$, the second integral is over a bounded region and tends to zero exponentially fast as a function of $\rho$, while the first integral has a finite SNR exponent. Thus, (\[d\]) tends to zero exponentially fast.\
Finally, the SNR exponent of (\[integral\_A\]) is determined by the behavior of [ $$\begin{aligned}
&\int\limits_{\substack{{\boldsymbol\alpha}> -\varepsilon \\ \boldsymbol\delta({\boldsymbol\alpha},s)< \varepsilon}} p'(\boldsymbol\alpha) \Phi(\rho^{\delta_k({\boldsymbol\alpha},s)}) d{\boldsymbol\alpha}\leq \int\limits_{\substack{{\boldsymbol\alpha}> -\varepsilon \\ \boldsymbol\delta({\boldsymbol\alpha},s)< \varepsilon}} p'(\boldsymbol\alpha) d{\boldsymbol\alpha}\\
&\leq \int\limits_{\substack{{\boldsymbol\alpha}> -\varepsilon \\ \boldsymbol\delta({\boldsymbol\alpha},s)< \varepsilon}} \rho^{-\sum\limits_{i=1}^l N_i \alpha_i} d{\boldsymbol\alpha}\end{aligned}$$ ]{} The conclusion follows by using the Laplace principle, and taking $\varepsilon \to 0$. Note that [$$\begin{aligned}
&\mathcal{A}_0=\left\{{\boldsymbol\alpha}\in \mathcal{A}:\;\alpha_j\geq 0,\;\sum_{i=1}^j (1-\alpha_i) \leq s \; \forall j=1,\ldots,l\right\} \notag\\
&=\{{\boldsymbol\alpha}: \alpha_j \geq 0, \; \delta_j({\boldsymbol\alpha},s) \leq 0 \;\; \forall j=1,\ldots,l\}. \tag*{{\IEEEQEDopen}}\end{aligned}$$ ]{}
[^1]: Unlike [@Telatar] and [@ZT], we do not use a strict inequality in the definition (\[P\_out\]), but our definition is equivalent since the set of $H$ such that $\Psi(Q,H)=R$ has measure zero.
[^2]: We have slightly modified the expression to be consistent with our notation. In [@Edelman], the author considers a matrix $A^TA$ where each element of $A$ is $\mathcal{N}(0,1)$.
[^3]: The quaternion case corresponds to taking $\beta=4$ in [@Edelman_Rao equation (4.5)]. Note that we modify the distribution to take into account the fact that each entry of $H$ has variance $1/2$ per real dimension.
---
abstract: 'Using the results obtained by Staruszkiewicz in *Acta Phys. Pol. [**B 23**]{}, 591 (1992)* and in *Acta Phys. Pol. [**B 23**]{}, 927 (1992)* we show that the representations acting in the eigenspaces of the total charge operator corresponding to the eigenvalues $n_1, n_2$ whose absolute values are less than or equal to $\sqrt{\pi/e^2}$ are inequivalent if $|n_1| \neq |n_2|$ and contain a supplementary series component acting as a discrete component. On the other hand, the representations acting in the eigenspaces corresponding to eigenvalues whose absolute values are greater than $\sqrt{\pi/e^2}$ are all unitarily equivalent and do not contain any supplementary series component.'
author:
- |
Jaros[ł]{}aw Wawrzycki [^1]\
Institute of Nuclear Physics of PAS, ul. Radzikowskiego 152,\
31-342 Kraków, Poland
title: ' [**A new theorem on the representation structure of the $SL(2, \mathbb{C})$ group acting in the Hilbert space of the quantum Coulomb field**]{}'
---
Introduction
============
In this paper we prove a new theorem within the Quantum Theory of the Coulomb Field, [@Staruszkiewicz1987], [@Staruszkiewicz]. This paper can be regarded as an immediate continuation of the series of Staruszkiewicz’s papers [@Staruszkiewicz1992ERRATUM], [@Staruszkiewicz1992], [@Staruszkiewicz2004], on the structure of the unitary representation of $SL(2, \mathbb{C})$ acting in the Hilbert space of the quantum Coulomb field and the quantum phase field $S(x)$ of his theory, and its connection to the fine structure constant. We use the notation of these papers. Based on the results of these papers, we give here a proof of the following
THEOREM. Let $U|_{{}_{\mathcal{H}_{{}_{m}}}}$ be the restriction of the unitary representation $U$ of $SL(2, \mathbb{C})$ in the Hilbert space of the quantum phase field $S$ to the invariant eigenspace ${\mathcal{H}_{{}_{m}}}$ of the total charge operator $Q$ corresponding to the eigenvalue $me$ for some integer $m$. Then for all $m$ such that $$|m| > \textrm{Integer part} \Big( \sqrt{\frac{\pi}{e^2}} \Big)$$ the representations $U|_{{}_{\mathcal{H}_{{}_{m}}}}$ are unitarily equivalent: $$U|_{{}_{\mathcal{H}_{{}_{m}}}} \cong_{{}_{U}}
U|_{{}_{\mathcal{H}_{{}_{m'}}}}$$ whenever $$|m| > \textrm{Integer part} \Big( \sqrt{\frac{\pi}{e^2}} \Big), \,\,\,
|m'| > \textrm{Integer part} \Big( \sqrt{\frac{\pi}{e^2}} \Big).$$
On the other hand, if the two integers $m,m'$ have different absolute values $|m| \neq |m'|$ and are such that $$|m| < \sqrt{\frac{\pi}{e^2}}, \,\,\,
|m'| < \sqrt{\frac{\pi}{e^2}},$$ then the representations $U|_{{}_{\mathcal{H}_{{}_{m}}}}$ and $U|_{{}_{\mathcal{H}_{{}_{m'}}}}$ are inequivalent. Each representation $U|_{{}_{\mathcal{H}_{{}_{m}}}}$ contains a unique discrete supplementary component if $$|m| < \sqrt{\frac{\pi}{e^2}},$$ and the supplementary components contained in $U|_{{}_{\mathcal{H}_{{}_{m}}}}$ with different values of $|m|$ fulfilling the last inequality are inequivalent. If $$|m| > \textrm{Integer part} \Big( \sqrt{\frac{\pi}{e^2}} \Big)$$ then the representation $U|_{{}_{\mathcal{H}_{{}_{m}}}}$ does not contain any supplementary component in its decomposition.
This remarkable result can be compared to the well known and curious coincidence concerning self-adjointness of the Hamiltonian of the bound system composed of a heavy source (say a nucleus) of the classical Coulomb field and a relativistic charged particle in this field. Namely, it is a well known phenomenon in relativistic wave mechanics that whenever the charge of the nucleus is of an order of magnitude comparable to the inverse of the fine structure constant or greater, the Hamiltonian loses the self-adjointness property (which is sometimes interpreted as an indication that the system, when passing to the quantum field theory level, becomes unstable). On the other hand (and this is a coincidence which no one understands) the nuclei of real atoms are unstable whenever their charge reaches a value of the same order (the inverse of the fine structure constant). The mentioned breakdown of self-adjointness cannot, of course, explain this phenomenon, because it is mostly the strong (and not the electromagnetic) forces which govern the stability of nuclei. To these coincidences we add another, coming from the quantum theory of infrared photons of the quantized Coulomb field. We should emphasize, however, that the three phenomena mentioned come from three different regimes, and so far we are not able to answer the question whether these coincidences are merely accidental.
Proof of the theorem
====================
Let us concentrate our attention on the specific state $|u\rangle$ in the eigenspace $\mathcal{H}_{{}_{m=1}}$ corresponding to the eigenvalue $e$ of the charge operator $Q$. For any time like unit vector $u$ we can form the following unit vector (compare [@Staruszkiewicz1992ERRATUM] or [@Staruszkiewicz2004]) $$\label{kernel-AS}
|u\rangle = e^{-iS(u)} |0\rangle$$ in the Hilbert space of the quantum field $S$. It has the following properties
1. $|u\rangle$ is an eigenstate of the total charge $Q$: $Q|u\rangle = e |u\rangle$.
2. $|u\rangle$ is spherically symmetric in the rest frame of $u$: $\epsilon^{\alpha \beta \mu \nu}u_{\beta} M_{\mu \nu} |u\rangle = 0$, where $M_{\mu \nu}$ are the generators of the $SL(2, \mathbb{C})$ group.
3. $|u\rangle$ does not contain the (infrared) transversal photons: $N(u) |u\rangle = 0$, where $N(u)$ is the operator of the number of transversal photons in the rest frame of $u$. If $u$ is the four-velocity of the reference frame in which the partial waves $f_{lm}^{(+)}$ are computed, then in this reference frame $$N(u) = (4\pi e^2)^{-1} \sum \limits_{l=1}^{\infty} \sum \limits_{m = -l}^{l} c_{lm}^{+}c_{lm},$$ and (up to an irrelevant phase factor) $$|u\rangle = e^{-iS_0}|0\rangle.$$
These three conditions determine the state vector $|u\rangle$ up to a phase factor.
Now let us consider the subspace $\mathcal{H}_{{}_{|u\rangle}} \subset \mathcal{H}_{{}_{m=1}}$ spanned by the vectors of the form $U_{{}_{\alpha}}|u\rangle$, $\alpha \in SL(2,\mathbb{C})$.
Note that the above conditions 1) and 2) determine $|u\rangle$ as the “maximal” vector in $\mathcal{H}_{{}_{|u\rangle}}$ satisfying the conditions 1), 2), i.e. any state vector in the Hilbert subspace $\mathcal{H}_{{}_{|u\rangle}}$ of the quantum phase field $S$ which satisfies 1) and 2) and is orthogonal to $|u\rangle$ is equal to zero.
First: in the paper [@Staruszkiewicz] it was computed that the inner product $$\langle u| v \rangle = \exp\Big\{ - \frac{e^2}{\pi}(\lambda \textrm{coth} \lambda - 1) \Big\},$$ where $u\cdot v = g_{\mu \nu}u^\mu v^\nu = \textrm{cosh} \lambda$, so that $\lambda$ is the hyperbolic angle between $u$ and $v$; compare also [@Staruszkiewicz2002].
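As a simple consistency check of this formula, note that for $v = u$ we have $\lambda = 0$ and $\lambda \textrm{coth} \lambda \to 1$, so that $$\langle u| u \rangle = \exp\Big\{ - \frac{e^2}{\pi}(1 - 1) \Big\} = 1,$$ in accordance with $|u\rangle$ being a unit vector, while for large hyperbolic separation $\lambda \textrm{coth} \lambda - 1 \approx \lambda - 1$, so that the overlap $\langle u| v \rangle$ decays like $e^{-e^2 \lambda/\pi}$.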
Second: it was proved in [@Staruszkiewicz1992ERRATUM] (compare also [@Staruszkiewicz2004], [@Staruszkiewicz2009]) that the state $|u\rangle$, lying in the subspace $Q = e \boldsymbol{1}$ of the
Hilbert space of the field $S$, when decomposed into components corresponding to the decomposition of $U$ into irreducible sub-representations contains
1. only the principal series if $\frac{e^2}{\pi} >1$,
2. the principal series and a discrete component from the supplementary series with $$-\frac{1}{2}M_{\mu \nu}M^{\mu \nu} = z(2-z) \boldsymbol{1}, \,\,\, z = \frac{e^2}{\pi} ,$$ if $0 < \frac{e^2}{\pi} < 1$,
in the units in which $\hslash = c = 1$. In other units one should read $\frac{e^2}{\pi \hslash c}$ for $\frac{e^2}{\pi}$.
In particular from the result of [@Staruszkiewicz1992ERRATUM] it follows that for the restriction $U|_{{}_{\mathcal{H}_{{}_{|u\rangle}}}}$ of the representation $U$ of $SL(2, \mathbb{C})$ acting in the Hilbert space of the quantum “phase” field $S$ to the invariant subspace $\mathcal{H}_{{}_{|u\rangle}}$ we have the decomposition $$\label{decAS}
U|_{{}_{\mathcal{H}_{{}_{|u\rangle}}}} =
\left\{ \begin{array}{lll}
\mathfrak{D}(\rho_0) \bigoplus \int \limits_{\rho>0} \mathfrak{S}(n=0, \rho) \, {\mathrm{d}}\rho,
& \rho_0 = 1 - z_0, z_0 = \frac{e^2}{\pi}, & \textrm{if} \,\, 0 < \frac{e^2}{\pi} <1 \\
\int \limits_{\rho>0} \mathfrak{S}(n=0, \rho) \, {\mathrm{d}}\rho, & &
\textrm{if} \,\, 1 < \frac{e^2}{\pi},
\end{array} \right.$$ into the direct integral of the unitary irreducible representations $\mathfrak{S}(n=0, \rho)$ of the principal series, with real $\rho>0$ and $n = 0$, and a discrete direct summand of the supplementary series $\mathfrak{D}(\rho_0)$ corresponding to the value of the parameter $$\rho_0 = 1 - z_0, \,\,\, z_0 = \frac{e^2}{\pi};$$ and where ${\mathrm{d}}\rho$ is the ordinary Lebesgue measure on $\mathbb{R}_+$.
Note that the irreducible unitary representations $\mathfrak{S}(n, \rho)$ of the principal series correspond to the representations $(l_0 = \frac{n}{2}, l_1 = \frac{i\rho}{2})$, with $n \in \mathbb{Z}$ and $\rho \in \mathbb{R}$ in the notation of the book [@Geland-Minlos-Shapiro], and correspond to the character $\chi = (n_1, n_2) = \big(\frac{n}{2} + \frac{i\rho}{2}, - \frac{n}{2} + \frac{i\rho}{2}\big)$ in the notation of the book [@GelfandV], and finally to the irreducible unitary representations $$U^{{}^{\chi_{{}_{n,\rho}}}} = \mathfrak{S}(n, \rho)$$ induced by the unitary representations of the diagonal subgroup corresponding to the unitary character $\chi_{{}_{n,\rho}}$ of the diagonal subgroup of $SL(2, \mathbb{C})$ within the Mackey theory of induced representations.
And recall that the irreducible unitary representations $\mathfrak{D}(\rho)$ of $SL(2, \mathbb{C})$ of the supplementary series are numbered by the real parameter $0<\rho <1$, and correspond to the representations $(l_0 = 0, l_1 = \rho)$ in the notation of the book [@Geland-Minlos-Shapiro]. They also correspond to the character $\chi = (n_1, n_2) = \big(\rho, \rho)$ in the notation of the book [@GelfandV], and finally to the irreducible unitary representations $$U^{{}^{\chi_{{}_{\rho}}}} = \mathfrak{D}(\rho)$$ induced by the (non-unitary) representations of the diagonal subgroup of $SL(2, \mathbb{C})$ corresponding to the non-unitary character $\chi_{{}_{\rho}}$ of the diagonal subgroup of $SL(2, \mathbb{C})$ within the Mackey theory of induced representations.
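For the convenience of the reader the correspondences recalled in the last two paragraphs may be summarized as follows (in the notation of [@Geland-Minlos-Shapiro] and of [@GelfandV], respectively): $$\mathfrak{S}(n, \rho): \,\,\, (l_0, l_1) = \Big(\frac{n}{2}, \frac{i\rho}{2}\Big), \,\,\,
\chi = \Big(\frac{n}{2} + \frac{i\rho}{2}, - \frac{n}{2} + \frac{i\rho}{2}\Big), \,\,\, n \in \mathbb{Z}, \, \rho \in \mathbb{R},$$ $$\mathfrak{D}(\rho): \,\,\, (l_0, l_1) = (0, \rho), \,\,\, \chi = (\rho, \rho), \,\,\, 0 < \rho < 1.$$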
Next, for each integer $m \in \mathbb{Z}$ and each point $u$ in the Lobachevsky space we consider the spherically symmetric unit state vector $|m,u\rangle \in \mathcal{H}_{{}_{m}}$ $$|m,u\rangle = e^{-imS(u)} |0\rangle$$ in the Hilbert space of the quantum field $S$. If $u$ is the four-velocity of the reference frame in which the partial waves $f_{lm}^{(+)}$ are computed, then in this reference frame $$|m,u\rangle = e^{-imS_0}|0\rangle$$ up to an irrelevant phase factor. The unit vector $|m,u\rangle$ has the following properties
1. $|m,u\rangle$ is an eigenstate of the total charge $Q$: $Q|m,u\rangle = em \, |m,u\rangle$.
2. $|m,u\rangle$ is spherically symmetric in the rest frame of $u$: $\epsilon^{\alpha \beta \mu \nu}u_{\beta} M_{\mu \nu} |m,u\rangle = 0$, where $M_{\mu \nu}$ are the generators of the $SL(2, \mathbb{C})$ group.
3. $|m,u\rangle$ does not contain the (infrared) transversal photons: $N(u) |m,u\rangle = 0$.
Proceeding exactly as Staruszkiewicz in [@Staruszkiewicz] (compare also [@Staruszkiewicz2002]) we show that for any two points $u,v$ in the Lobachevsky space of unit time-like four-vectors $$\langle m,u|m, v \rangle = \exp\Big\{ - \frac{e^2m^2}{\pi}(\lambda \textrm{coth} \lambda - 1) \Big\},$$ where $\lambda$ is the hyperbolic angle between $u$ and $v$. Next, we construct the Hilbert subspace $\mathcal{H}_{{}_{|m,u\rangle}} \subset \mathcal{H}_{{}_{m}}$ spanned by $$U_\alpha |m, u\rangle, \,\,\, \alpha \in SL(2, \mathbb{C}).$$ Note that $\mathcal{H}_{{}_{|m,u\rangle}} \neq \mathcal{H}_{{}_{m}}$. Using the Gelfand-Neumark Fourier analysis on the Lobachevsky space as Staruszkiewicz in [@Staruszkiewicz1992ERRATUM] we show that $$\label{decASm}
U|_{{}_{\mathcal{H}_{{}_{|m,u\rangle}}}} =
\left\{ \begin{array}{lll}
\mathfrak{D}(\rho_0) \bigoplus \int \limits_{\rho>0} \mathfrak{S}(n=0, \rho) \, {\mathrm{d}}\rho,
& \rho_0 = 1 - z_0, z_0 = \frac{e^2 m^2}{\pi}, & \textrm{if} \,\, 0 < \frac{e^2 m^2}{\pi} <1 \\
\int \limits_{\rho>0} \mathfrak{S}(n=0, \rho) \, {\mathrm{d}}\rho, & &
\textrm{if} \,\, 1 < \frac{e^2 m^2}{\pi},
\end{array} \right.$$ where ${\mathrm{d}}\rho$ is the Lebesgue measure on $\mathbb{R}_+$.
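Two simple remarks connect this decomposition with the statement of the theorem. First, comparison of the two kernels shows that $$\langle m,u|m, v \rangle = \big( \langle u| v \rangle \big)^{m^2},$$ which explains why the parameter $z_0$ scales with $m^2$. Second, the dichotomy in (\[decASm\]) is governed by the equivalence $$\frac{e^2 m^2}{\pi} < 1 \,\, \Longleftrightarrow \,\, |m| < \sqrt{\frac{\pi}{e^2}},$$ and in the first case the supplementary parameter $\rho_0 = 1 - e^2 m^2/\pi$ takes distinct values for distinct $|m|$, so that the discrete components $\mathfrak{D}(\rho_0)$ occurring for different $|m|$ below the threshold are pairwise inequivalent, while above the threshold no supplementary component occurs at all.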
We need two Lemmata concerning the structure of the representation $U$ of $SL(2,\mathbb{C})$ in the Hilbert space of the quantum phase field $S$.
LEMMA. $$U|_{{}_{\mathcal{H}_{{}_{m=1}}}} = U|_{{}_{\mathcal{H}_{{}_{|u\rangle}}}} \otimes U|_{{}_{\mathcal{H}_{{}_{m=0}}}}.$$
[$\blacksquare$]{} First we show that (all tensor products in this Lemma are the Hilbert-space tensor products) $$\label{H0timesHu=H1}
\mathcal{H}_{{}_{m=1}} = \mathcal{H}_{{}_{|u\rangle}} \otimes \mathcal{H}_{{}_{m=0}}
= \mathcal{H}_{{}_{|u\rangle}} \otimes \Gamma(\mathcal{H}_{{}_{m=0}}^{1})$$ where $\mathcal{H}_{{}_{m=0}}^{1}$ is the single particle subspace of infrared transversal photons spanned by $$c_{lm}^{+} |0\rangle,$$ and $\Gamma(\mathcal{H}_{{}_{m=0}}^{1})$ stands for the boson Fock space over $\mathcal{H}_{{}_{m=0}}^{1}$, i.e. direct sum of symmetrized tensor products of $\mathcal{H}_{{}_{m=0}}^{1}$. The Hilbert subspace $\mathcal{H}_{{}_{|u\rangle}}$ is spanned by $|u\rangle$, and all its transforms $U_{{}_{\Lambda(\alpha)}}|u\rangle = |u'\rangle$ with $u' = \Lambda(\alpha)^{-1}u$ ranging over the Lobachevsky space $\mathscr{L}_3 \cong SL(2,\,\mathbb{C})/SU(2,\mathbb{C})$ of time like unit four-vectors $u'$ – the Lorentz images of the fixed $u$. The Hilbert space structure of $\mathcal{H}_{{}_{|u\rangle}}$ can be regarded as the one induced by the invariant kernel $$u \times v \mapsto \langle u| v \rangle = \exp\Big\{ - \frac{e^2}{\pi}(\lambda \textrm{coth} \lambda - 1) \Big\},$$ on the Lobachevsky space $\mathscr{L}_3$ as the RKHS corresponding to the kernel, compare e.g. [@PaulsenRaghupathi]. Because this kernel is continuous as a map $\mathscr{L}_3 \times \mathscr{L}_3
\to \mathbb{R}$, and the Lobachevsky space is separable, it is easily seen that there exists a denumerable subset $\{u_1, u_2, \ldots \} \subset \mathscr{L}_3$ such that $|u_1\rangle, |u_2\rangle, \ldots$ are linearly independent and such that the denumerable set of finite rational (with $b_i \in \mathbb{Q}$) linear combinations $$\sum_{i=1}^{k} b_i |u_i\rangle$$ of the elements $|u_1 \rangle, |u_2 \rangle, \ldots$ is dense in $\mathcal{H}_{{}_{|u\rangle}}$, compare e.g. [@Sikorski] Chap. XIII, §3. One can choose (Schmidt orthonormalization, [@Sikorski], Chap XIII, §3) out of them a denumerable and orthonormal system $$e_k(b_{1k}u_1, \ldots, b_{kk}u_k) = \sum_{i=1}^{k} b_{ik} |u_i\rangle
= \sum_{i=1}^{k} b_{ik} e^{-iS(u_i)}|0\rangle, \,\,\, k=1,2, \ldots,$$ which is complete in $\mathcal{H}_{{}_{|u\rangle}}$. Note that $$U_{{}_{\Lambda(\alpha)}}|u\rangle = U_{{}_{\Lambda(\alpha)}}e^{-iS(u)} |0\rangle = U_{{}_{\Lambda(\alpha)}}e^{-iS(u)}
U_{{}_{\Lambda(\alpha)}}^{-1} |0\rangle = e^{-iS(u')} |0\rangle$$ where $u' = \Lambda(\alpha)^{-1}u$ is the Lorentz image $u'$ in the Lobachevsky space of $u$ under the Lorentz transformation $\Lambda(\alpha)$, because $|0\rangle$ is Lorentz invariant: $U|0\rangle = |0\rangle$. In particular $$\begin{gathered}
U_{{}_{\Lambda(\alpha)}}e_k(b_{1k}u_1, \ldots, b_{kk}u_k) = e_k(b_{1k}u'_1, \ldots, b_{kk}u'_k), \\
= U_{{}_{\Lambda(\alpha)}} \big(\sum_{i=1}^{k} b_{ik} e^{-iS(u_i)}|0\rangle\big)
= \sum_{i=1}^{k} b_{ik} e^{-iS(u'_i)}|0\rangle,
\,\, u'_i = \Lambda(\alpha)^{-1}u_i, \,\,\, k=1,2,3, \ldots,\end{gathered}$$ forms another orthonormal and complete system in $\mathcal{H}_{{}_{|u\rangle}}$. In particular if $y \in \mathcal{H}_{{}_{|u\rangle}}$ then for some sequence of numbers $b^k \in \mathbb{C}$ such that $$||y||^2 = \sum_k |b^k|^2 < +\infty$$ we have $$\label{yinHu}
y = \sum_{k = 1,2, \ldots} b^k e_k(b_{1k}u_1, \ldots, b_{kk}u_k) =
\sum_{k = 1,2, \ldots, i= 1, \ldots, k} b^k b_{ik} e^{-iS(u_i)}|0\rangle$$ and $$U_{{}_{\Lambda(\alpha)}} y = \sum_{k = 1,2, \ldots} b^k e_k(b_{1k}u'_1, \ldots, b_{kk}u'_k) =
\sum_{k = 1,2, \ldots, i= 1, \ldots, k} b^k b_{ik} e^{-iS(u'_i)}|0\rangle.$$
Similarly let us write shortly $$c_{lm}^{+} = c_{\alpha}^{+} \,\,\, \textrm{and} \,\,\,
U_{{}_{\Lambda(\alpha)}} c_{lm}^{+} U_{{}_{\Lambda(\alpha)}}^{-1} = {c'}_{lm}^{+}.$$ Then if $x \in \Gamma(\mathcal{H}_{{}_{m=0}}^{1}) = \mathcal{H}_{{}_{m=0}}$, then there exists a multi-sequence of numbers $a^{\alpha_1 \ldots \alpha_n} \in \mathbb{C}$ such that $$|| x ||^2 = \sum_{n =1,2, \ldots, \alpha_1, \ldots, \alpha_n} (4\pi e^2)^n
\big| a^{\alpha_1 \ldots \alpha_n} \big|^2 < + \infty$$ and $$\label{xinH0}
x = \sum_{n =1,2, \ldots, \alpha_1, \ldots, \alpha_n}
a^{\alpha_1 \ldots \alpha_n} c_{\alpha_1}^{+} \ldots c_{\alpha_n}^{+} |0\rangle$$ $$U_{{}_{\Lambda(\alpha)}} x = \sum_{n =1,2, \ldots, \alpha_1, \ldots, \alpha_n}
a^{\alpha_1 \ldots \alpha_n} {c'}_{\alpha_1}^{+} \ldots {c'}_{\alpha_n}^{+} |0\rangle \,\,\,$$ where we have shortly written $\alpha_i$ for the pair $l_i, m_i$ with $-l_i \leq m_i \leq l_i$.
Before giving the definition of $x \otimes y$ for general elements $x,y$ of the form (\[xinH0\]) and respectively (\[yinHu\]), which yields the algebraic tensor product $\mathcal{H}_{{}_{m=0}} \widehat{\otimes} \mathcal{H}_{{}_{|u\rangle}}$ densely included in $\mathcal{H}_{{}_{m=1}}$, we need some further preliminaries. Namely, note that the operators $c_{lm} = c_\alpha$ depend on the reference frame. For the construction of $\otimes$ we need the operators in several reference frames. If the time-like axis of the reference frame has the unit versor $v \in \mathscr{L}_3$, then for the operator $c_\alpha = c_{lm}$ computed in this reference frame we will write
$$~^v\!c_{\alpha} \,\,\, \textrm{or} \,\,\, ~^v\!c_{lm}$$ and $$~^v\!c_{\alpha}^{+} \,\,\, \textrm{or} \,\,\, ~^v\!c_{lm}^{+}$$ for their adjoints. Only for the fixed vector $u \in \mathscr{L}_3$ we simply write $$~^u\!c_{\alpha} = c_{\alpha} \,\,\, \textrm{or} \,\,\, ~^u\!c_{lm} = c_{lm}$$ and $$~^u\!c_{\alpha}^{+} = c_{\alpha}^{+} \,\,\, \textrm{or} \,\,\, ~^u\!c_{lm}^{+} = c_{lm}^{+}$$ in order to simplify notation.
Now let $$\overset{u \mapsto v}{A}_{\alpha \beta}$$ be the unitary matrix transforming the orthonormal basis vectors $c_{\alpha}^{+} |0\rangle
= ~^u\!c_{\alpha}^{+} |0\rangle$ in $\mathcal{H}_{{}_{m=0}}$ $$\label{transformation-cbeta+Vacuum}
~^v\!c_{\alpha}^{+} |0\rangle=
\sum_{\beta} \overset{u \mapsto v}{A}_{\alpha \beta} ~^u\!c_{\beta}^{+}
|0\rangle =
\sum_{\beta} \overset{u \mapsto v}{A}_{\alpha \beta} c_{\beta}^{+}|0\rangle,$$ under the Lorentz transformation $\Lambda_{uv}(\lambda_{uv})$ transforming the reference frame time-like versor $u \in \mathscr{L}_3$ into the reference frame unit time-like versor $v \in \mathscr{L}_3$. In particular it gives the irreducible representation of the $SL(2, \mathbb{C})$ group in the single particle Hilbert subspace $\mathcal{H}_{{}_{m=0}}^{1}$ of infrared transversal photons spanned by $$c_{\alpha}^{+} |0\rangle = ~^u\!c_{\alpha}^{+} |0\rangle,$$ and equal to the Gelfand-Minlos-Shapiro irreducible unitary representation $(l_0 = 1, l_1 = 0) = \mathfrak{S}(n=2, \rho= 0)$, computed explicitly in [@Staruszkiewicz1995]. Then, as shown in [@Staruszkiewicz1992], it follows that $$\begin{gathered}
\label{transformation-vcalpha}
U_{{}_{\Lambda_{uv}(\lambda_{uv})}} ~^u\!c_{\alpha} U_{{}_{\Lambda_{uv}(\lambda_{uv})}}^{-1}
= U_{{}_{\Lambda_{uv}(\lambda_{uv})}} c_{\alpha} U_{{}_{\Lambda_{uv}(\lambda_{uv})}}^{-1}
= ~^v\!c_{\alpha} = \\
\sum_{\beta} \overline{\overset{u \mapsto v}{A}_{\alpha \beta}} ~^u\!c_{\beta}
+ \overline{\overset{u \mapsto v}{B}_\alpha} \, Q \\ =
\sum_{\beta} \overline{\overset{u \mapsto v}{A}_{\alpha \beta}} c_{\beta}
+ \overline{\overset{u \mapsto v}{B}_\alpha} \, Q,\end{gathered}$$ and[^2] $$\begin{gathered}
\label{transformation-S}
U_{{}_{\Lambda_{uv}(\lambda_{uv})}} S(u) U_{{}_{\Lambda_{uv}(\lambda_{uv})}}^{-1}
= S(v) = \\
S(u) + \frac{1}{4\pi i e}\sum_{\alpha \beta} \big( \overset{u \mapsto v}{B}_\alpha\overline{\overset{u \mapsto v}{A}_{\alpha \beta}} ~^u\!c_{\beta} -
\overline{\overset{u \mapsto v}{B}_\alpha} \overset{u \mapsto v}{A}_{\alpha \beta} ~^u\!c_{\beta}^{+} \big) \end{gathered}$$ and thus $$\begin{gathered}
\label{transformation-vcalpha+}
U_{{}_{\Lambda_{uv}(\lambda_{uv})}} ~^u\!c_{\alpha}^{+} U_{{}_{\Lambda_{uv}(\lambda_{uv})}}^{-1}
= U_{{}_{\Lambda_{uv}(\lambda_{uv})}} c_{\alpha}^{+} U_{{}_{\Lambda_{uv}(\lambda_{uv})}}^{-1}
= ~^v\!c_{\alpha}^{+} = \\
\sum_{\beta} \overset{u \mapsto v}{A}_{\alpha \beta} ~^u\!c_{\beta}^{+}
+ \overset{u \mapsto v}{B}_\alpha Q \\ =
\sum_{\beta} \overset{u \mapsto v}{A}_{\alpha \beta} c_{\beta}^{+}
+ \overset{u \mapsto v}{B}_\alpha Q,\end{gathered}$$ where $Q$ is the charge operator and where $\overset{u \mapsto v}{B}_\alpha$ are complex numbers depending on the transformation $\Lambda_{uv}(\lambda_{uv})$ mapping $u \mapsto v = \Lambda_{uv}(\lambda_{uv})^{-1}u$ such that $$\sum_{\alpha} |\overset{u \mapsto v}{B}_\alpha|^2 = 8e^2 (\lambda_{uv} \textrm{coth} \lambda_{uv} -1)$$ with $\lambda_{uv}$ equal to the hyperbolic angle between $u$ and $v$. Note that the charge operator is invariant (commutes with $U_{{}_{\Lambda_{uv}(\lambda_{uv})}}$) and is identical in each reference frame so that no superscript $u$ nor $v$ is needed for $Q$.
The limit on the right hand side of the equality (\[transformation-cbeta+Vacuum\]) should be understood in the sense of the ordinary Hilbert space norm in the Hilbert space of the quantum phase field $S$. In general all limits in the expressions containing linear combinations of operators acting on $|0\rangle$ should be understood in this manner.
Now let us explain why for each fixed $\alpha$ we need essentially all $~^v\!c_{\alpha}$, $v \in \mathscr{L}_3$ for the construction of the bilinear map $x\times y \mapsto x\otimes y$ which serves to define the algebraic tensor product $\mathcal{H}_{{}_{m=0}} \widehat{\otimes} \mathcal{H}_{{}_{|u\rangle}}$ of the Hilbert spaces $\mathcal{H}_{{}_{m=0}}$ and $\mathcal{H}_{{}_{|u\rangle}}$. In particular, consider two vectors $c_\alpha^{+} |0\rangle$ and $e^{-iS(v)}|0\rangle$ with $v$ not equal to the fixed time-like versor $u$ of the reference frame in which the partial waves $f_{lm}^{(+)}$ and the operators $c_{lm} = c_{\alpha} = ~^uc_{\alpha}$ are computed. It might be tempting to put $$c_\alpha^{+}e^{-iS(v)} |0\rangle$$ as the tensor product of $c_\alpha^{+} |0\rangle$ and $e^{-iS(v)}|0\rangle$, but this definition would be wrong. Indeed $$\begin{gathered}
\langle0|e^{iS(v)}~^u\!c_{\beta} ~^uc_{\alpha}^{+}e^{-iS(v)}|0\rangle
=
\langle0|e^{iS(v)}c_{\beta} c_{\alpha}^{+}e^{-iS(v)}|0\rangle \neq \\
\neq
\langle 0|~^uc_{\beta} ~^uc_{\alpha}^{+} |0\rangle
\langle0| e^{iS(v)}e^{-iS(v)}|0\rangle
=
\langle 0|c_{\beta} c_{\alpha}^{+} |0\rangle
\langle 0| e^{iS(v)}e^{-iS(v)}|0\rangle\end{gathered}$$ contrary to what is expected of the inner product for simple tensors. This is mainly because $c_{\alpha} = ~^u\!c_{\alpha}$ do not commute with $e^{-iS(v)}$ for $u \neq v$. However for any two $u,w \in \mathscr{L}_3$, $$\label{inn-prod-vacexp(S(v))c(v)c(w)+exp(S(w))vac}
\langle 0 |e^{iS(v)}~^v\!c_{\beta} ~^w\!c_{\alpha}^{+} e^{-iS(w)}|0\rangle
= \langle 0| ~^v\!c_{\beta} ~^w\!c_{\alpha}^{+} |0\rangle
\langle 0| e^{iS(v)} e^{-iS(w)}|0 \rangle$$ which easily follows from (\[transformation-vcalpha\]) - (\[transformation-vcalpha+\]) and from the canonical commutation relations. Similarly for the case when two (or more) creation operators are involved $$\begin{gathered}
\label{inn-prod-vacexp(S(v))c(v)c(v)c(w)+c(w+)exp(S(w))vac}
\langle 0 |e^{iS(v)}~^v\!c_{\beta_1}~^v\!c_{\beta_2} ~^w\!c_{\alpha_1}^{+} ~^w\!c_{\alpha_2}^{+} e^{-iS(w)}|0\rangle
= \langle 0| ~^v\!c_{\beta_1}~^v\!c_{\beta_2} ~^w\!c_{\alpha_1}^{+} ~^w\!c_{\alpha_2}^{+}|0\rangle
\langle 0| e^{iS(v)} e^{-iS(w)}|0 \rangle, \\
\langle 0 |e^{iS(v)}~^v\!c_{\beta_1} \ldots ~^v\!c_{\beta_n}
~^w\!c_{\alpha_1}^{+} \ldots ~^w\!c_{\alpha_n}^{+} e^{-iS(w)}|0\rangle \\
= \langle 0| ~^v\!c_{\beta_1} \ldots ~^v\!c_{\beta_n}
~^w\!c_{\alpha_1}^{+} \ldots ~^w\!c_{\alpha_n}^{+}|0\rangle
\langle 0| e^{iS(v)} e^{-iS(w)}|0 \rangle\end{gathered}$$ as expected of the inner product on simple tensors. This explains the need for using $~^v\!c_{lm} = ~^v\!c_{\alpha}$ in various reference frames $v$, since in composing any complete orthonormal system in $\mathcal{H}_{{}_{|u\rangle}}$ we need linear combinations of vectors
Therefore for any $v \in \mathscr{L}_3$ we put $$\begin{gathered}
\label{def-otimes-particular}
\big( ~^v\!c_{\alpha_1}^{+} ~^v\!c_{\alpha_2}^{+}|0\rangle) \otimes
\big( e^{-iS(v)}|0\rangle \big) =
~^v\!c_{\alpha_1}^{+} ~^v\!c_{\alpha_2}^{+}e^{-iS(v)}|0\rangle, \\
\big( ~^v\!c_{\alpha_1}^{+} \ldots ~^v\!c_{\alpha_n}^{+}|0\rangle) \otimes
\big( e^{-iS(v)}|0\rangle \big) =
~^v\!c_{\alpha_1}^{+} \ldots ~^v\!c_{\alpha_n}^{+}e^{-iS(v)}|0\rangle.\end{gathered}$$
In particular, let $U$ be the unitary representor of a Lorentz transformation which transforms $v$ into $v'$ and $w$ into $w'$. Then $$~^v\!c_{\alpha}^{+} = \\
\sum_{\beta} \overset{w \mapsto v}{A}_{\alpha \beta} ~^w\!c_{\beta}^{+}
+ \overset{w \mapsto v}{B}_\alpha Q$$ and $$\begin{gathered}
(U ~^v\!c_{\alpha}^{+} |0\rangle) \otimes (U e^{-iS(w)}|0\rangle) =
(~^{v'}\!c_{\alpha}^{+} |0\rangle) \otimes (e^{-iS(w')}|0\rangle) \\ =
\big( \sum_{\beta} \overset{w' \mapsto v'}{A}_{\alpha \beta} ~^{w'}\!c_{\beta}^{+} |0\rangle \big) \otimes
\big( e^{-iS(w')}|0\rangle \big) \\ =
\sum_{\beta} \overset{w' \mapsto v'}{A}_{\alpha \beta} ~^{w'}\!c_{\beta}^{+} e^{-iS(w')}|0\rangle \\
= \sum_{\beta} \overset{w \mapsto v}{A}_{\alpha \beta} ~^{w'}\!c_{\beta}^{+} e^{-iS(w')}|0\rangle \\
= U \big( \sum_{\beta} \overset{w \mapsto v}{A}_{\alpha \beta} ~^w\!c_{\beta}^{+} e^{-iS(w)}|0\rangle \big),\end{gathered}$$ so that $$(U ~^v\!c_{\alpha}^{+} |0\rangle) \otimes (U e^{-iS(w)}|0\rangle) =
U \big((~^v\!c_{\alpha}^{+} |0\rangle) \otimes (e^{-iS(w)}|0\rangle) \big)$$ and similarly we show that this is the case for more general simple tensors $$\label{U=UxU-on-simple-tensors}
\big( U \, ~^v\!c_{\alpha_1}^{+} \ldots ~^v\!c_{\alpha_n}^{+}|0\rangle) \otimes
\big( U \, e^{-iS(v)}|0\rangle \big) = \\
U \Big( \big( ~^v\!c_{\alpha_1}^{+} \ldots ~^v\!c_{\alpha_n}^{+}|0\rangle) \otimes
\big( e^{-iS(v)}|0\rangle \big) \Big).$$
Now in order to define $x \otimes y$ for general $x,y$ of the form (\[xinH0\]) and respectively (\[yinHu\]) we need to extend the formula (\[def-otimes-particular\]). In fact $x \otimes y$ is uniquely determined by (\[def-otimes-particular\]). We now derive the explicit formula for $x \otimes y$ from (\[def-otimes-particular\]).
Let $u_1, u_2, \ldots \in \mathscr{L}_3$ be the unit fourvectors which are used in the definition of the complete orthonormal system $$e_k(b_{1k}u_1, \ldots, b_{kk}u_k) = \sum_{i=1}^{k} b_{ik} |u_i\rangle
= \sum_{i=1}^{k} b_{ik} e^{-iS(u_i)}|0\rangle, \,\,\, k=1,2, \ldots,$$ in $\mathcal{H}_{{}_{|u\rangle}}$. Corresponding to them we define
$$~^{u_i}\!c_{\alpha} =
\sum_{\beta} \overline{\overset{u \mapsto u_i}{A}_{\alpha \beta}} \, ~^u\!c_{\beta}
+ \overline{\overset{u \mapsto u_i}{B}_\alpha} \, Q \\ =
\sum_{\beta} \overline{\overset{u \mapsto u_i}{A}_{\alpha \beta}} \, c_{\beta}
+ \overline{\overset{u \mapsto u_i}{B}_\alpha} \, Q,$$ and $$~^{u_i}\!c_{\alpha}^{+} =
\sum_{\beta} \overset{u \mapsto u_i}{A}_{\alpha \beta} ~^u\!c_{\beta}^{+}
+ \overset{u \mapsto u_i}{B}_\alpha \, Q \\ =
\sum_{\beta} \overset{u \mapsto u_i}{A}_{\alpha \beta} c_{\beta}^{+}
+ \overset{u \mapsto u_i}{B}_\alpha \, Q.$$ Having defined this we introduce for each $i=1,2, \ldots$ and the corresponding operator $~^{u_i}\!c_{\alpha}$ the operator $$\label{icalpha}
~^i\!c_{\alpha} =
\sum_{\beta} \overline{\overset{u_i\mapsto u}{A}_{\alpha \beta}} ~^{u_i}\!c_{\beta}$$ by discarding the part proportional to the total charge $Q$ in the operator $$c_{\alpha} = ~^u\!c_{\alpha} =
\sum_{\beta} \overline{\overset{u_i\mapsto u}{A}_{\alpha \beta}} \, ~^{u_i}\!c_{\beta}
+ \overline{\overset{u_i \mapsto u}{B}_\alpha} \, Q$$ as obtained by the transformation $u_i \mapsto u$ transforming the system of operators $~^{u_i}\!c_{\beta}$ into the system of operators $~^{u}\!c_{\alpha}$. Of course we have $$c_{\alpha}^{+}= ~^u\!c_{\alpha}^{+} =
\sum_{\beta} \overset{u_i\mapsto u}{A}_{\alpha \beta} ~^{u_i}\!c_{\beta}^{+}
+ \overset{u_i \mapsto u}{B}_\alpha \, Q.$$
The crucial facts for the computations which are to follow are the following. For each four-vector $v \in \mathscr{L}_3$ $$[~^v\!c_{\alpha}, e^{-iS(v)}] =0.$$ The commutation rules are preserved and $$[~^v\!c_{\alpha}, ~^v\!c_{\beta}] = 0, \,\,\,
[~^v\!c_{\alpha}, ~^v\!c_{\beta}^{+}] = 4\pi e^2 \, \delta_{{}_{\alpha \beta}}, \,\,\,
[Q, ~^v\!c_{\alpha}] = 0, \,\,\, ~^v\!c_{\alpha}|0\rangle = \langle0| ~^v\!c_{\alpha}^{+} = 0.$$ But moreover, if we fix arbitrarily $\alpha = (l,m)$ then because the operators $~^i\!c_{\alpha}$, $i= 1,2,\ldots$ all differ from the fixed operator $c_{\alpha} = ~^u\!c_{\alpha}$ with fixed $u \in \mathscr{L}_3$ by the operator (depending on $i$) which is always proportional to the total charge operator $Q$, as a consequence of the transformation rule (\[transformation-vcalpha\]) and (\[transformation-vcalpha+\]), then not only $$[~^i\!c_{\alpha}, ~^i\!c_{\beta}] = 0, \,\,\,
[~^i\!c_{\alpha}, ~^i\!c_{\beta}^{+}] = 4\pi e^2 \, \delta_{{}_{\alpha \beta}}, \,\,\,
[Q, ~^i\!c_{\alpha}] = 0, \,\,\, ~^i\!c_{\alpha}|0\rangle = \langle0| ~^i\!c_{\alpha}^{+} = 0,
\,\,\, i= 1,2, \ldots$$ for all $i = 1,2, \ldots$ but likewise $$[~^i\!c_{\alpha}, ~^j\!c_{\beta}] = 0, \,\,\,
[~^i\!c_{\alpha}, ~^j\!c_{\beta}^{+}] = 4\pi e^2 \, \delta_{{}_{\alpha \beta}}, \,\,\,
[Q, ~^i\!c_{\alpha}] = 0, \,\,\, ~^i\!c_{\alpha}|0\rangle = \langle0| ~^i\!c_{\alpha}^{+} = 0,
\,\,\, i,j = 1,2, \ldots.$$ Note also that $$c_{\alpha}^{+}|0\rangle = ~^i\!c_{\alpha}^{+} |0\rangle, \,\,\, i = 1,2, 3, \ldots.$$ Furthermore we have the following orthogonality relations $$\begin{gathered}
\label{orthonormality-vacOps+ccccc+c+c+c+Opkvac}
\Big\langle 0 \Big| \Big(\sum_{j=1}^{s} b_{js} e^{iS(u_j)} ~^j\!c_{\beta_1} \ldots ~^j\!c_{\beta_{m}} \Big)
\Big(\sum_{i=1}^{k} b_{ik} ~^i\!c_{\alpha_1}^{+} \ldots ~^i\!c_{\alpha_n}^{+} e^{-iS(u_i)} \Big) \Big| 0 \Big\rangle \\
=
(4\pi e^2)^n \,
\delta_{sk} \, \delta_{mn} \, \delta_{{}_{\{\alpha_1 \ldots \alpha_n\} \,\,\,\{\beta_1 \ldots \beta_m\}}}.\end{gathered}$$
Let $x \in \mathcal{H}_{{}_{m=0}}$ and $y \in \mathcal{H}_{{}_{|u\rangle}}$ be general elements of the form (\[xinH0\]) and respectively (\[yinHu\]). We define the following bilinear map $\otimes$ of $\mathcal{H}_{{}_{m=0}} \times \mathcal{H}_{{}_{|u\rangle}}$ into $\mathcal{H}_{{}_{m=1}}$ by the formula $$\begin{gathered}
x \times y \mapsto x \otimes y \\ =
\sum_{n =1,2, \ldots, k = 1,2, \ldots, i = 1, \ldots, k, \alpha_1, \ldots, \alpha_n}
a^{\alpha_1 \ldots \alpha_n} b^k b_{ik} ~^i\!c_{\alpha_1}^{+} \ldots ~^i\!c_{\alpha_n}^{+} e^{-iS(u_i)}|0\rangle.\end{gathered}$$ We show now that $\mathcal{H}_{{}_{m=0}}$ and $\mathcal{H}_{{}_{|u\rangle}}$ are $\otimes$-linearly disjoint [@treves], compare Part III, Chap. 39, Definition 39.1. Namely let $y_1, \ldots, y_r$ be a finite subset of generic elements $$y_j = \sum_{k = 1,2, \ldots} b^{k}_{j} e_k(b_{1k}u_1, \ldots, b_{kk}u_k) =
\sum_{k = 1,2, \ldots, i= 1, \ldots, k} b^{k}_{j} b_{ik} e^{-iS(u_i)}|0\rangle$$ in $\mathcal{H}_{{}_{|u\rangle}}$ for $j = 1, \ldots, r$; and similarly let $x_1, \ldots, x_r$ be a finite subset of generic elements $$x_j = \sum_{n =1,2, \ldots, \alpha_1, \ldots \alpha_n}
a^{\alpha_1 \ldots \alpha_n}_{j} c_{\alpha_1}^{+} \ldots c_{\alpha_n}^{+} |0\rangle$$ in $\mathcal{H}_{{}_{m=0}}$ for $j = 1, \ldots, r$. Let us suppose that $$\begin{gathered}
\label{x-tensor-y=0}
\sum_{j=1}^{r} x_j \otimes y_j \\ =
\sum_{j=1,\ldots,r, n =1,2, \ldots, k = 1,2, \ldots, i = 1, \ldots, k, \alpha_1, \ldots, \alpha_n}
a^{\alpha_1 \ldots \alpha_n}_{j} b^{k}_{j} b_{ik} ~^i\!c_{\alpha_1}^{+} \ldots ~^i\!c_{\alpha_n}^{+} e^{-iS(u_i)}|0\rangle = 0,\end{gathered}$$ and that $x_1, \ldots, x_r$ are linearly independent. We have to show that $y_1 = \ldots = y_r = 0$. The linear independence of the $x_j$ means that if for numbers $b^j$ we have $$\sum_{j=1}^{r} b^j a^{\alpha_1 \ldots \alpha_n}_{j} = 0$$ for all $n = 1,2, \ldots$ and $\alpha_i = (1,-1), (1,0), (1,1), (2,-2), \ldots$, then $b^1 = \ldots = b^r = 0$. Now consider the inner product of the left-hand side of (\[x-tensor-y=0\]) with $$\sum_{q=1}^{k} b_{qk} ~^q\!c_{\beta_1}^{+} \ldots ~^q\!c_{\beta_n}^{+} e^{-iS(u_q)}|0\rangle.$$ Then from (\[x-tensor-y=0\]) and the orthogonality relations (\[orthonormality-vacOps+ccccc+c+c+c+Opkvac\]) we get $$\sum_{j = 1}^{r}
a^{\beta_1 \ldots \beta_n}_{j} b^{k}_{j} = 0$$ for each $k = 1,2, \ldots$. Therefore by the linear independence of the $x_j$ we obtain $$b^{k}_{1} = \ldots = b^{k}_{r} = 0$$ for each $k = 1,2, \ldots$, so that $$y_1 = \ldots = y_r = 0.$$ Similarly, from (\[x-tensor-y=0\]) and the linear independence of $y_1, \ldots, y_r$ it follows that $$x_1 = \ldots = x_r = 0,$$ so that $\mathcal{H}_{{}_{m=0}}$ and $\mathcal{H}_{{}_{|u\rangle}}$ are $\otimes$-linearly disjoint.
By construction the linear span of the image of $ \otimes: \mathcal{H}_{{}_{m=0}} \times \mathcal{H}_{{}_{|u\rangle}}
\rightarrow \mathcal{H}_{{}_{m=1}}$ is dense in $\mathcal{H}_{{}_{m=1}}$. Therefore the image of $\otimes$ defines the algebraic tensor product $\mathcal{H}_{{}_{m=0}} \otimes_{{}_{\textrm{alg}}} \mathcal{H}_{{}_{|u\rangle}}$ of $\mathcal{H}_{{}_{m=0}}$ and $\mathcal{H}_{{}_{|u\rangle}}$, densely included in $\mathcal{H}_{{}_{m=1}}$.
Now we show that the inner product $\langle \cdot | \cdot \rangle$ on $\mathcal{H}_{{}_{m=1}}$, if restricted to the algebraic tensor product subspace $\mathcal{H}_{{}_{m=0}} \otimes_{{}_{\textrm{alg}}} \mathcal{H}_{{}_{|u\rangle}}$, coincides with the inner product of the algebraic Hilbert space tensor product: $$\langle x\otimes y |x' \otimes y'\rangle = \langle x|x'\rangle \langle y | y' \rangle$$ for any generic elements $x,x' \in \mathcal{H}_{{}_{m=0}}$ and any generic elements $y,y' \in \mathcal{H}_{{}_{|u\rangle}}$. Indeed let $x,y$ be generic elements of the form (\[xinH0\]) and (\[yinHu\]) respectively and similarly for the generic elements $x',y'$ we put $$x' = \sum_{q =1,2, \ldots, \beta_1, \ldots, \beta_q}
a'^{\beta_1 \ldots \beta_q} c_{\beta_1}^{+} \ldots c_{\beta_q}^{+} |0\rangle$$ and $$y' = \sum_{s = 1,2, \ldots} b'^{s} e_s(b_{1s}u_1, \ldots, b_{ss}u_s)
= \sum_{s = 1,2, \ldots, j= 1, \ldots, s} b'^{s} b_{js} e^{-iS(u_j)}|0\rangle.$$ Then $$\begin{gathered}
\langle x' \otimes y' |x \otimes y\rangle = \sum_{n,k, q, s, \alpha_1, \ldots, \alpha_n, \beta_1, \ldots, \beta_q}
\overline{a'^{\beta_1 \ldots \beta_q}}a^{\alpha_1 \ldots \alpha_n} \overline{b'^{s}}b^k \,\,\times
\\ \times \,\,
\Big\langle 0 \Big|
\Big(\sum_{j=1}^s b_{js} e^{iS(u_j)} ~^j\!c_{\beta_q} \ldots ~^j\!c_{\beta_1} \Big)
\Big(\sum_{i=1}^{k} b_{ik} \, ~^i\!c_{\alpha_1}^{+} \ldots ~^i\!c_{\alpha_n}^{+}e^{-iS(u_i)} \Big)
\Big|0 \Big\rangle\end{gathered}$$ which, on using (\[inn-prod-vacexp(S(v))c(v)c(v)c(w)+c(w+)exp(S(w))vac\]) and the orthogonality relations (\[orthonormality-vacOps+ccccc+c+c+c+Opkvac\]), is equal to $$\Big(\sum_{n,\alpha_1, \ldots \alpha_n} (4 \pi e^2)^n \,
\overline{a'^{\alpha_1 \ldots \alpha_n}}a^{\alpha_1 \ldots \alpha_n} \Big)
\Big( \sum_{k}
\overline{b'^{k}}b^k \Big) = \langle x'|x\rangle \langle y' | y \rangle.$$ Thus the proof of the equality (\[H0timesHu=H1\]) is now complete.
Now let $x,y$ be any generic elements of the form (\[xinH0\]) and (\[yinHu\]) respectively. Then by repeated application of (\[U=UxU-on-simple-tensors\]) and the continuity of each representor[^3] $U$ we obtain $$U(x \otimes y) = Ux \otimes Uy.$$ This ends the proof of our Lemma.
We observe now that the same proof can be repeated to establish the validity of the following
LEMMA. $$U|_{{}_{\mathcal{H}_{{}_{m}}}} = U|_{{}_{\mathcal{H}_{{}_{|m,u\rangle}}}} \otimes U|_{{}_{\mathcal{H}_{{}_{m=0}}}}.$$
Now let[^4] ‘$\textrm{Integer part} \, x$’ for any positive real number $x$ denote the least natural number $n$ with $x \leq n$. Combining the last Lemma with the result (\[decASm\]) of Staruszkiewicz [@Staruszkiewicz1992ERRATUM] we obtain the theorem formulated in the Introduction.
The author is indebted to prof. A. Staruszkiewicz for helpful discussions.
[99]{}
Gelfand, I. M., Minlos, R. A., Shapiro, Z. Ya.: Representations of the rotation and Lorentz groups and their applications. Pergamon Press Book, The Macmillan Company, New York, 1963.
Gelfand, I. M., Graev, M. I., Vilenkin, N. Ya.: Generalized Functions. Vol. V. Academic Press, New York and London, 1966.
Paulsen, V. I., Raghupathi, M.: An introduction to the theory of reproducing kernel Hilbert spaces. Cambridge University Press, Cambridge, 2016.
Sikorski, R.: Real functions, Vol. II. PWN, Warszawa, 1959 (in Polish).
Staruszkiewicz, A.: Quantum mechanics of phase and charge and quantization of the Coulomb field. Preprint TPJU-12/87, June 1987.
Staruszkiewicz, A.: Ann. Phys. (N.Y.) [**190**]{}, 354 (1989).
Staruszkiewicz, A.: Acta Phys. Polon. [**B23**]{}, 927 (1992).
Staruszkiewicz, A.: Acta Phys. Polon. [**B23**]{}, 591 (1992) and ERRATUM in Acta Phys. Polon. [**B23**]{}, 959 (1992).
Staruszkiewicz, A.: Acta Phys. Polon. [**B26**]{}, 1275 (1995).
Staruszkiewicz, A.: Foundations of Physics [**32**]{}, 1863 (2002).
Staruszkiewicz, A.: Acta Phys. Polon. [**B35**]{}, 2249 (2004).
Staruszkiewicz, A.: Reports on Math. Phys. [**64**]{}, 293 (2009).
Treves, F.: Topological vector spaces, distributions and kernels. Academic Press, 1967.
[^1]: Electronic address: [email protected] or [email protected]
[^2]: We are using a slightly different convention from [@Staruszkiewicz1992]: our $\overset{u \mapsto v}{A}_{\alpha \beta}$ corresponds to the complex conjugate $\overline{A_{\alpha \beta}}$ of the matrix elements $A_{\alpha \beta}$ used in [@Staruszkiewicz1992], and similarly our numbers $\overset{u \mapsto v}{B}_\alpha$ correspond to the complex conjugates $\overline{B_\alpha}$ of the numbers $B_\alpha$ used in [@Staruszkiewicz1992].
[^3]: Each representor $U_{{}_{\Lambda(\alpha)}}$ being unitary is bounded and thus continuous in the topology of the Hilbert space.
[^4]: Note that the standard definition of the integer part is slightly different.
---
author:
- 'Ryan Poling-Skutvik'
- 'Ali H. Slim'
- Suresh Narayanan
- 'Jacinta C. Conrad'
- Ramanan Krishnamoorti
bibliography:
- 'biblio.bib'
title: |
Supporting information for:\
Soft interactions modify the diffusive dynamics of polymer-grafted nanoparticles in solutions of free polymer
---
Predicted compression of PGNPs in solutions of linear chains
------------------------------------------------------------
The compression of star polymers in solutions of linear polymers was derived by balancing the elasticity of the star chains with the osmotic pressure of the solution, according to $$\label{eqn:compression}
\frac{3faR_\mathrm{s}}{N_\mathrm{s}} + \frac{4\pi b^3 \Phi_\mathrm{L}\left[1+P\left(x(\Phi_\mathrm{L})\right)\right]}{\delta^3 N_\mathrm{s}^{3\nu}f^{3/5}} - \frac{3\nu N_\mathrm{s}^2 f^2 a^3}{2R_\mathrm{s}^4} = 0,$$ where $$P(x) = \frac{1}{2} x \exp\left\{ \frac{1}{4} \left[ \frac{1}{x}+\left(1-\frac{1}{x} \right) \ln (x+1) \right]\right\},$$ $$x(\Phi_\mathrm{L}) = \frac{1}{2} \Phi_\mathrm{L} \pi^2 \left[ 1 + \frac{1}{4}\left( \ln 2 + \frac{1}{2} \right) \right],$$ $\Phi_\mathrm{L} = c/c^*$ is the volume fraction of linear chains, $f$ is the functionality of the stars, $a$ is the monomer size, $N_\mathrm{s}$ is the degree of polymerization of the star polymer arms, $b = 1.3$ represents the penetrability of the star, $\delta = R_\mathrm{L}/R_\mathrm{s}$ is the size ratio of linear to star polymer, and $\nu$ is the Flory exponent.[@Truzzolillo2011] Because PGNPs geometrically resemble star polymers, we use eqn. \[eqn:compression\] to estimate the compression of PGNPs dispersed into solutions of linear chains with varying molecular weights, as shown in Fig. \[fig:Compression\](a). For our system, $f = 388$, $a = 0.5$ nm, $N_\mathrm{s} = 3410$, and $\nu = 0.53$. The size ratio $\delta = 0.12$, 0.25, 0.35, and 1.36 for linear chains with $M_\mathrm{w} = 150$, 590, 1100, and 15000 kDa, respectively.
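For concreteness, the force balance above can be solved numerically for the compressed radius $R_\mathrm{s}$. The Python sketch below is our illustration, not the authors' analysis code: the composition $P(x(\Phi_\mathrm{L}))$, the bisection bracket, and the default parameters (taken from the values quoted in the text, with $\delta = 0.25$ as an example) are assumptions.

```python
import math

def P(x):
    # Osmotic correction factor P(x) from the star-polymer theory (x > 0)
    return 0.5 * x * math.exp(0.25 * (1.0 / x + (1.0 - 1.0 / x) * math.log(x + 1.0)))

def x_of(phi_L):
    # Auxiliary variable x(Phi_L)
    return 0.5 * phi_L * math.pi ** 2 * (1.0 + 0.25 * (math.log(2.0) + 0.5))

def force_balance(R, phi_L, f=388, a=0.5, Ns=3410, nu=0.53, b=1.3, delta=0.25):
    # Left-hand side of the compression equation; R and a in nm
    elastic = 3.0 * f * a * R / Ns
    osmotic = (4.0 * math.pi * b ** 3 * phi_L * (1.0 + P(x_of(phi_L)))
               / (delta ** 3 * Ns ** (3.0 * nu) * f ** 0.6))
    excluded = 3.0 * nu * Ns ** 2 * f ** 2 * a ** 3 / (2.0 * R ** 4)
    return elastic + osmotic - excluded

def solve_Rs(phi_L, lo=1e-2, hi=1e4, tol=1e-10):
    # Bisection: the balance is negative at small R (excluded-volume term
    # dominates) and positive at large R (elastic term dominates).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if force_balance(mid, phi_L) > 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

With these parameters, the recovered root shifts to smaller $R_\mathrm{s}$ as $\Phi_\mathrm{L}$ grows, reproducing the qualitative compression trend with concentration.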
According to this theory of how star polymers compress in solutions of linear chains, the size of the PGNPs decreases most drastically in solutions of low $M_\mathrm{w}$ chains (Fig. \[fig:Compression\](a)). As the free polymer concentration increases, the PGNPs compress, and the degree of compression is lower in solutions of higher $M_\mathrm{w}$. These predicted changes in PGNP size are consistent with the conformational changes of the grafted layer elucidated by SANS (Fig. 1 in main text). Because dynamics depend on PGNP size, this compression may affect how the PGNP diffusivity depends on free polymer $M_\mathrm{w}$. If the dynamics of PGNPs were solely controlled by PGNP compression, the particles should couple to the bulk solution viscosity $\eta$ according to the Stokes-Einstein expression, resulting in $$\frac{DR}{D_0 R_0} = \frac{\eta_0 }{\eta}$$ where $\eta_0$ is the solvent viscosity, $R_0$ is the particle size in neat solvent, and $R$ is the compressed size of the PGNP. This ratio, however, fails to collapse the measured PGNP diffusivities, does not agree with the measured bulk rheology, and actually increases the spread in the data (Fig. \[fig:Compression\](c)). This failure indicates that the PGNP dynamics and the predicted compression trend in opposite directions with free polymer $M_\mathrm{w}$. To further exemplify this trend, we calculate the effective size ratio $R_\mathrm{eff}/R_\mathrm{0} = \frac{\eta_0 D_0}{\eta D}$ that would result in the measured diffusivities using the Stokes-Einstein expression and the measured solution viscosity. This effective size ratio decreases with increasing polymer concentration but also decreases with increasing free chain $M_\mathrm{w}$, in stark contrast to the predicted compression behavior shown in Fig. \[fig:Compression\]. From this analysis, we conclude that compression of PGNPs cannot explain the observed dynamic behavior.
The PGNPs diffuse faster in solutions of high $M_\mathrm{w}$ free polymer *despite* being less compressed.
Dispersion of PGNPs in solutions of linear chains
-------------------------------------------------
We confirm that the PGNPs remain well dispersed in all solutions using small-angle X-ray scattering (SAXS). The scattering from the silica core of the PGNP dominates any scattering from the polymer. Thus, the SAXS intensity $I(Q)$ as a function of wavevector $Q$ is well fit by the hard-sphere form factor for a sphere of radius $R = 24$ nm, given by $$P(Q) = \left(\frac{\sin(QR)-QR\cos(QR)}{(QR)^3}\right)^2$$ with a log-normal polydispersity $\sigma = 0.28$ (Fig. \[fig:SAXS\]), in good agreement with our earlier work.[@Poling-Skutvik2016; @Poling-Skutvik2017] Observing no significant change in $I(Q)$ with increasing polymer concentration, we conclude that the PGNPs are well dispersed in solutions of free polymers at all measured concentrations.
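As a quick numerical check, the sphere form factor above can be evaluated directly. The snippet below is an illustrative sketch of ours (units of nm and nm$^{-1}$ assumed); it also encodes the small-$Q$ limit $P(Q) \to 1/9$ and can be used to locate the first form-factor minimum near $QR \approx 4.49$.

```python
import math

def sphere_form_factor(Q, R=24.0):
    # Hard-sphere form factor P(Q) for a homogeneous sphere of radius R
    # (R in nm, Q in 1/nm). The small-x limit of (sin x - x cos x)/x^3
    # is 1/3, so P(Q) -> 1/9 as Q -> 0.
    x = Q * R
    if x < 1e-4:
        return 1.0 / 9.0
    return ((math.sin(x) - x * math.cos(x)) / x ** 3) ** 2
```
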
Estimates of arm retraction dynamics
------------------------------------
The dynamics of star polymers in linear solutions often depend on the entanglements between the arms of the star polymer and the surrounding linear chains. These entanglements slow the diffusion of the star core through the linear system. To estimate whether similar entanglements affect the movement of the PGNP core, we calculate the number of entanglements per grafted chain using the total dissolved polymer concentration $\phi$ according to[@Rubinstein2003] $$Z = \frac{M_\mathrm{w,g}}{M_\mathrm{e}} = \frac{\phi^{4/3} M_\mathrm{w,g}}{M_\mathrm{e,0}},$$ where $M_\mathrm{e,0} = 13000$ Da is the melt entanglement molecular weight of polystyrene.[@Lin1987] For all $c/c^*$ and $M_\mathrm{w}$, $Z < 5$. The longest relaxation time of a star polymer is related to $Z$ according to[@McLeish2002] $$\tau_\mathrm{arm} = \frac{\pi^{5/2}}{\sqrt{6}} \tau_\mathrm{e} Z^{3/2} \exp (a Z)$$ where $a = 3/2$ is a constant, $\tau_\mathrm{e} = \tau_0 (M_\mathrm{w,g}/M_\mathrm{e})^2 \phi^{-5/3}$ is the relaxation time of an entanglement strand, and $\tau_0 = 6\pi \eta b^3/k_\mathrm{B}T$ is the diffusive relaxation time of a Kuhn segment with $b = 1.54$ Å.[@Rubinstein2003] After $\tau_\mathrm{arm}$, the core of the star should move diffusively with a diffusivity $D_\mathrm{star} = R_\mathrm{H}^2/\tau_\mathrm{arm}$. We plot the measured experimental diffusivities against the normalized values predicted by this theory for star polymers in Fig. \[fig:ArmRetraction\]. Although there is a positive correlation between experiment and the predicted entangled dynamics, there is a significant spread in the data as a function of $M_\mathrm{w}$. Additionally, whereas the diffusivity of PGNPs in low $M_\mathrm{w}$ polymer at high $c/c^*$ is larger than predicted, the diffusivity in high $M_\mathrm{w}$ solutions is much slower than predicted. Further, incorporating tube dilation would lead to faster predicted diffusivities and less agreement between theory and experiments. 
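The entanglement estimate can be reproduced numerically. In the sketch below (our illustration, not the authors' code), the grafted-chain molecular weight $M_\mathrm{w,g} \approx 355$ kDa is an assumption we infer from $N_\mathrm{s} = 3410$ styrene units, and we take the number of entanglements per chain as $Z = M_\mathrm{w,g}/M_\mathrm{e}(\phi)$ with the good-solvent scaling $M_\mathrm{e}(\phi) = M_\mathrm{e,0}\,\phi^{-4/3}$.

```python
import math

def entanglements_per_chain(phi, Mwg=355e3, Me0=13e3):
    # Z = Mwg / Me(phi), with Me(phi) = Me0 * phi**(-4/3) in good solvent.
    # Mwg ~ 355 kDa is an assumed grafted-chain molecular weight.
    return Mwg * phi ** (4.0 / 3.0) / Me0

def arm_retraction_factor(Z, a=1.5):
    # tau_arm / tau_e = (pi^(5/2) / sqrt(6)) * Z^(3/2) * exp(a * Z)
    return (math.pi ** 2.5 / math.sqrt(6.0)) * Z ** 1.5 * math.exp(a * Z)
```

For volume fractions up to roughly $\phi \approx 0.2$ this gives $Z$ of order unity to a few, and the exponential factor makes $\tau_\mathrm{arm}$ extremely sensitive to $Z$, which is why the entanglement count controls the predicted core mobility so strongly.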
Thus, entanglements between grafted chains and free linear chains cannot explain the dynamics of PGNPs in semidilute polymer solutions.