---
abstract: 'We establish relationships between the Castelnuovo-Mumford regularity of standard graded algebras and the Ratliff-Rush closure of ideals. These relationships can be used to compute the Ratliff-Rush closure and the regularities of the Rees algebra and the fiber ring. As a consequence, these regularities are equal for large classes of monomial ideals in two variables, thereby confirming a conjecture of Eisenbud and Ulrich for these cases.'
address:
- 'Department of Mathematics, FPT University, 8 Ton That Thuyet, My Dinh, Tu Liem, Hanoi, Vietnam'
- 'Department of Mathematics, University of Genoa, Via Dodecaneso 35, 16146 Genoa, Italy'
- 'Institute of Mathematics, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Hanoi, Vietnam'
author:
- Trung Thanh Dinh
- Maria Evelina Rossi
- Ngo Viet Trung
title: |
Castelnuovo-Mumford regularity\
and Ratliff-Rush closure
---
[^1]
*Dedicated to Giuseppe Valla on the occasion of his seventieth birthday*
Introduction {#introduction .unnumbered}
============
Let $R$ be a standard graded algebra over a commutative ring with unity. Let $H_{R_+}^i(R)$ denote the $i$-th local cohomology module of $R$ with respect to the graded ideal $R_+$ of elements of positive degree and set $a_i(R) = \max\{n|\ H_{R_+}^i(R)_n \neq 0\}$ with the convention $a_i(R) = -\infty$ if $H_{R_+}^i(R) = 0$. The Castelnuovo-Mumford regularity is defined by $$\reg R := \max\{a_i(R)+i|\ i \ge 0\}.$$ It is well known that $\reg R$ controls many important invariants of the graded structure of $R$ (see e.g. [@BM], [@EG], [@Tr3]).
The motivation for our work originates from the following conjecture of Eisenbud and Ulrich [@EU Conjecture 1.3].
[**Conjecture**]{}. Let $A$ be a standard graded algebra over a field $k$. Let $\mm$ be the maximal graded ideal of $A$ and $I$ a homogeneous $\mm$-primary ideal which is generated by forms of the same degree. Then $\reg R(I) = \reg F(I),$ where $R(I) = \oplus_{n \ge 0}I^n$ is the Rees algebra and $F(I) = \oplus_{n\ge 0}I^n/\mm I^n$ is the fiber ring of $I$.
In general, it is very difficult to estimate $\reg R(I)$ because $R(I)$ is a standard graded algebra over $A$. On the other hand, as $F(I)$ is a standard graded algebra over $k$, $\reg F(I)$ can be effectively computed in terms of a minimal free resolution. Note that if $d$ is the degree of the generators of $I$, then $F(I) \cong k[I_d]$, the subalgebra of $A$ generated by the elements of $I_d$.
Using the characterization of $\reg R(I)$ by means of a superficial sequence, the authors were able to settle the above conjecture in the affirmative when $I$ is an ideal in $k[x,y]$ generated by a set of monomials in degree $d$ which contains $x^d,x^{d-1}y,y^d$. The solution suggests that both $\reg R(I)$ and $\reg F(I)$ are related to the behavior of the Ratliff-Rush filtration. Inspired by this finding, this paper will study the relationships between the Castelnuovo-Mumford regularity and the Ratliff-Rush closure.
Let $(A,\mm)$ be an arbitrary local ring and $I$ an arbitrary ideal of $A$. Recall that the Ratliff-Rush closure of $I$ is defined as the ideal $$\tilde I = \bigcup_{n \ge 1} I^{n+1}:I^n.$$ It is a refinement of the integral closure of $I$ and $\tilde I = I$ if $I$ is integrally closed. If $I$ is a regular ideal, i.e. if $I$ contains non-zerodivisors, $\tilde I$ is the largest ideal sharing the same higher powers with $I$ [@RR]. In particular, the Ratliff-Rush filtration $\widetilde {I^n}, n \ge 0$, carries important information on the blowups, the associated graded ring of $I$, and the Hilbert function of an $\mm$-primary ideal $I$ (see e.g. [@HJLS], [@HLS], [@Ho], [@Sa]).
In general, the computation of $\tilde I$ is hard because $I^{n+1}:I^n = I^n:I^{n-1}$ does not imply $I^{n+2}:I^{n+1} = I^{n+1}:I^n$. We call the least integer $m \ge 0$ such that $\tilde I = I^{n+1}:I^n$ for all $n \ge m$ the Ratliff-Rush index of $I$ and denote it by $s(I)$. If we know an upper bound for $s(I)$, we can easily compute $\tilde I$. For an $\mm$-primary ideal $I$, Elias [@El] already gave a bound for $s(I)$ in terms of the postulation numbers of $I$ and of ideals of the form $I/(x)$, where $x$ belongs to a given superficial sequence of $I$. For an arbitrary ideal $I$, we will show that $I^{n+1}:I = I^n$ for $n \ge \reg R(I)$. From this it follows that $$s(I) \le \max\{\reg R(I)-1,0\}.$$ Since there are various bounds for $\reg R(I)$ in terms of other well known invariants of $I$ ([@DGV], [@Du], [@DH], [@Li1], [@Li2], [@RTV1], [@St], [@Va]), the above bound for $s(I)$ provides us with a practical tool to compute $\tilde I$.
A remarkable feature of the Ratliff-Rush closure is the property that $\widetilde {I^n} = I^n$ for all $n$ sufficiently large if $I$ is a regular ideal. Again, if $\widetilde {I^n} = I^n$, then it does not necessarily imply $\widetilde {I^{n+1}} = I^{n+1}$. We call the least integer $m \ge 1$ such that $\widetilde {I^n} = I^n$ for all $n \ge m$ the Ratliff-Rush regularity of the ideal $I$ and denote it by $s^*(I)$. We will show that $\widetilde {I^n} = I^{n+t}:I^t$ for $t \ge \reg R(I)-n$. From this it immediately follows that $$s^*(I) \le \max\{\reg R(I),1\}.$$ Using the strong result that $\reg R(I) = \reg G(I)$, where $G(I)$ denotes the associated graded ring of $I$, one can also deduce this bound from the bound $s^*(I) \le \max\{a_1(G(I))+1,1\}$ given by Puthenpurakal in [@Pu].
Now one may ask whether $s^*(I)$ can be used to estimate $\reg R(I)$. If $A$ is a two-dimensional Buchsbaum local ring with $\depth A > 0$ (e.g. if $A$ is Cohen-Macaulay) and $I$ is an $\mm$-primary ideal, which is not a parameter ideal, we show that $$\reg R(I) = \max\{r_J(I), s^*(I)\},$$ where $J$ is an arbitrary minimal reduction of $I$ and $r_J(I)$ denotes the reduction number of $I$ with respect to $J$. As an application we give a negative answer to a question of Rossi and Swanson [@RS Section 4] which asks whether $s^*(I) \le r_J(I)$ always holds. In fact, if the answer were yes, this would imply $r_J(I) = \reg R(I)$ independently of the choice of $J$. However, Huckaba [@Huc] already showed that $r_J(I)$ may depend on the choice of $J$.
Our interest in Buchsbaum rings comes from the fact that the conjecture of Eisenbud and Ulrich is not true if one does not put further assumptions on the standard graded algebra $A$. We shall see that if the conjecture were true for factor rings of $A$, then $A$ must be a Buchsbaum ring. If $A$ is a one-dimensional Buchsbaum ring, we will show that $\reg R(I) = \reg F(I)$ always holds. If $A$ is a two-dimensional Buchsbaum ring with $\depth A > 0$ and $I$ is not a parameter ideal, we show that $$\reg F(I) = \max\{r_J(I), s_\ini^*(I)\}.$$ Here, $s_\ini^*(I)$ denotes the least integer $m \ge 1$ such that $(\widetilde {I^n})_{nd} = (I^n)_{nd}$ for all $n \ge m$, where $d$ is the degree of the generators of $I$. Since $nd$ is the initial degree of $\widetilde {I^n}$, we call $s_\ini^*(I)$ the initial Ratliff-Rush regularity of $I$. The above formulas establish unexpected relationships between the Castelnuovo-Mumford regularity and the Ratliff-Rush closure, which can be used to compare $\reg R(I)$ and $\reg F(I)$.
If $I$ is an $\mm$-primary monomial ideal in a polynomial ring $k[x,y]$ which is generated by forms of degree $d$, then $I$ has a natural minimal reduction $J = (x^d,y^d)$. Using the above formulas we are able to show that $\reg R(I) = \reg F(I)$ in the following cases:
\(1) $I = (x^d,y^d) + (x^{d-i}y^i\mid a \le i \le b)$, where $a \le b \le d$ are given positive integers.
\(2) $x^d,x^{d-1}y,y^d \in I$.
These large classes of ideals indicate that the conjecture of Eisenbud and Ulrich may be true for polynomial rings over a field. In fact, Ulrich communicated to the last author that he and Eisenbud always thought of a polynomial ring in their conjecture.
Note that the equality $\reg R(I) = \reg F(I)$ was already studied by Cortadellas and Zarzuela [@CZ], and Jayanthan and Nanduri [@JN] for an ideal $I$ in a local ring. However, their results are too specific to be recalled here.
The paper is divided into four sections. In Section 1 we recall basic results on the Castelnuovo-Mumford regularity of the Rees algebra. The bound $s(I) \le \max\{\reg R(I)-1,0\}$ will be proved in this section. In Section 2 we study the relationship between $s^*(I)$ and $\reg R(I)$ and prove the bound $s^*(I) \le \max\{\reg R(I),1\}$ and the formula $\reg R(I) = \max\{r_J(I), s^*(I)\}$. In Section 3 we investigate the conjecture of Eisenbud and Ulrich and prove the formula $\reg k[I_d] = \max\{r_J(I), s_\ini^*(I)\}.$ In Section 4 we apply our approach to monomial ideals in two variables and settle the conjecture of Eisenbud and Ulrich in the aforementioned cases.
Regularity of the Rees algebra
==============================
Let $(A,\mm)$ be a local ring with $\dim A > 0$ and $I$ an ideal of $A$. Let $R(I) = \oplus_{n\ge 0}I^n$ be the Rees algebra of $I$. We shall see that $\reg R(I)$ can be characterized in terms of a superficial sequence which generates a reduction of $I$. Without loss of generality we may assume that the residue field of $A$ is infinite.
An element $x \in I$ is called [*superficial*]{} for $I$ if there is an integer $c$ such that $$(I^{n+1} : x) \cap I^c = I^n$$ for all large $n$. A system of elements $x_1,...,x_s$ in $I$ is called a [*superficial sequence*]{} of $I$ if $x_i$ is a superficial element of $I$ in $A/(x_1,...,x_{i-1})$, $i = 1,...,s$. Superficial sequences can be characterized by means of filter-regular sequences in the associated graded ring $G(I) = \oplus_{n\ge 0}I^n/I^{n+1}$.
A system of homogeneous elements $z_1,...,z_s$ in $G(I)$ is called [*filter-regular*]{} if $$[(z_1,...,z_{i-1}):z_i]_n = (z_1,...,z_{i-1})_n,$$ for sufficiently large $n$, $i = 1,...,s$. It is easy to see that $z_1,...,z_s$ is filter-regular if and only if $z_i \not\in P$ for all associated primes $P \not\supseteq G(I)_+$ of $(z_1,...,z_{i-1})$, $i = 1,...,s$ (see [@Tr1]). This characterization is especially useful in finding filter-regular sequences.
For every element $x \in I$ we denote by $x^*$ the residue class of $x$ in $I/I^2$.
\[superficial\] [@Tr2 Lemma 6.2] $x_1,...,x_s$ is a superficial sequence of $I$ if and only if $x_1^*,...,x_s^*$ form a filter-regular sequence of $G(I)$.
Note that the condition $x_i \not\in (x_1,...,x_{i-1})+I^2$, $i = 1,...,s$, in [@Tr2 Lemma 6.2] is superfluous.
An ideal $J \subseteq I$ is called a [*reduction*]{} of $I$ if there exists an integer $n$ such that $I^{n+1} = JI^n$. The least integer $n$ with this property is called the [*reduction number*]{} of $I$ with respect to $J$. We will denote it by $r_J(I)$. A reduction is minimal if it is minimal with respect to containment.
The following relationship between minimal reductions and superficial sequences is more or less known. For completeness we include a proof here.
\[generating\] Every minimal reduction $J$ of $I$ can be generated by a superficial sequence of $I$.
Let $Q$ denote the ideal in $G(I)$ generated by the elements $x^*$, $x \in J$. Then $Q$ is generated by $(J+I^2)/I^2$. Since $I^{n+1} = JI^n$, $Q_{n+1} = G(I)_{n+1}$. Therefore, $Q \not\subseteq P$ for any prime $P \not\supseteq G(I)_+$. Using prime avoidance we can find a filter-regular sequence $z_1,...,z_s \in G(I)$ such that $Q = (z_1,...,z_s)$. Choose $x_i \in J$ such that $x_i^* = z_i$, $i = 1,...,s$. Then $(x_1,...,x_s) + I^2 = J + I^2$. Hence $$(x_1,...,x_s)I^n + I^{n+2} = JI^n + I^{n+2} = I^{n+1}.$$ By Nakayama’s Lemma, this implies $I^{n+1} = (x_1,...,x_s)I^n$. Therefore, $(x_1,...,x_s)$ is a reduction of $I$. By the minimality of $J$, we must have $J =(x_1,...,x_s)$. The conclusion now follows from Lemma \[superficial\].
One can characterize $\reg R(I)$ in terms of a superficial sequence that generates a reduction of $I$. The following characterization is a reformulation of [@Tr2 Theorem 4.8], where it is assumed that $x_1^*,...,x_s^*$ is a filter-regular sequence.
\[regularity\] Let $x_1,...,x_s$ be a superficial sequence of $I$ such that $J = (x_1,...,x_s)$ is a reduction of $I$. Then $$\begin{aligned}
& \reg R(I) = \reg G(I)\\
& = \min\left\{n \ge r_J(I)|\ I^{n+1} \cap [(x_1,...,x_{i-1}):x_i] = (x_1,...,x_{i-1})I^n, i = 1,...,s\right\}.\end{aligned}$$
Theorem \[regularity\] will play a crucial role in our paper. One can use it to compute $\reg F(I)$ in terms of $J$ as we shall see later.
Superficial elements are related to the regularity by the following property, which is a reformulation of [@Tr2 Lemma 4.4 (i)].
\[intersection\] Let $x$ be a superficial element of $I$. Then $I^{n+1} \cap (x) = xI^n$ for $n \ge \reg R(I)$.
This lemma led us to the following property of colon ideals of powers of $I$.
\[colon\] Let $I$ be a regular ideal. Then $I^{n+1} : I = I^n$ for $n \ge \reg R(I)$.
It is well-known that if $I$ is a regular ideal, every superficial element of $I$ is a non-zerodivisor (see e.g. [@RV Lemma 1.2]). Therefore, $0:x = 0$ if $x$ is a superficial element for $I$. By Lemma \[intersection\], $I^{n+1}:x = I^n + (0:x) = I^n$ for $n \ge \reg R(I)$. Since $I^n \subseteq I^{n+1}:I \subseteq I^{n+1}:x$, this implies $I^{n+1}:I = I^n$ for $n \ge \reg R(I)$.
\[a\] [Actually, the proof of [@Tr2 Lemma 4.4 (i)] shows more, namely that $I^{n+1} \cap (x) = xI^n$ for $n \ge \max\{a_0(G(I)),a_1(G(I))+1\}$. If $I$ is a regular ideal, $a_0(G(I)) < a_1(G(I))$ [@Ho1 Theorem 5.2]. Therefore, $I^{n+1} : I = I^n$ for $n \ge a_1(G(I))+1$. By the definition of the Castelnuovo-Mumford regularity and Theorem \[regularity\], we always have $a_1(G(I))+1 \le \reg G(I) = \reg R(I).$]{}
Recall that the [*Ratliff-Rush closure*]{} of $I$ is defined as the ideal $$\tilde I = \bigcup_{n \ge 1}I^{n+1}:I^n.$$
In general, the computation of $\tilde I$ is hard because $I^{n+1}:I^n = I^n:I^{n-1}$ does not imply $I^{n+2}:I^{n+1} = I^{n+1}:I^n$ [@RS]. Therefore, it is of great interest to have an upper bound for the least integer $m \ge 0$ such that $\tilde I = I^{n+1}:I^n$ for all $n \ge m$. We call this integer the [*Ratliff-Rush index*]{} of $I$ and denote it by $s(I)$. Note that $s(I) = 0$ means $\tilde I = I$.
\[stable\] Let $I$ be a regular ideal. Then $s(I) \le \max\{\reg R(I)-1,0\}$.
Applying Proposition \[colon\] we have $$I^{n+1} : I^n = (I^{n+1}:I) : I^{n-1} = I^n:I^{n-1}$$ for $n\ge \reg R(I)$. Thus, $I^{n+1}:I^n$ is the same ideal for all $n \ge \reg R(I)-1$ and equals $\tilde I$.
There are plenty of examples with $s(I) = 0$ and $\reg R(I)$ arbitrarily large. For instance, take $I = \mm$. It is clear that $s(\mm) = 0$. Since $\reg R(\mm) \ge r_J(\mm)$ for any minimal reduction $J$ of $\mm$, one can easily construct local rings such that $\reg R(\mm)$ is arbitrarily large.
According to Theorem \[stable\], if $\reg R(I) \le c$ for some integer $c \ge 0$, then $\tilde I = I^{c+1}:I^c$. So one can compute $\tilde I$ if one knows an upper bound for $\reg R(I)$. There have been several works giving upper bounds for $\reg R(I)$ ([@DGV], [@Du], [@DH], [@Li1], [@Li2], [@RTV1], [@St], [@Va]). We consider here only a general upper bound for $\reg R(I)$ in terms of the extended degree.
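For monomial ideals in two variables (the setting of Section 4 below) this recipe is easy to automate. The following Python sketch is only an illustration: it encodes a monomial ideal of $k[x,y]$ by the exponent vectors of its generators, and the bound `c` passed to `rr_closure` is assumed to satisfy $c \ge \reg R(I)$.

```python
def minimalize(gens):
    """Minimal monomial generators: drop (a, b) if another generator divides it."""
    gens = set(gens)
    return {g for g in gens
            if not any(h != g and h[0] <= g[0] and h[1] <= g[1] for h in gens)}

def mul(I, J):
    """Generators of the product of two monomial ideals of k[x,y]."""
    return minimalize({(a + c, b + d) for (a, b) in I for (c, d) in J})

def power(I, n):
    """Generators of I^n, with I^0 = (1)."""
    P = {(0, 0)}
    for _ in range(n):
        P = mul(P, I)
    return P

def intersect(I, J):
    """Generators of the intersection of I and J (componentwise max = lcm of monomials)."""
    return minimalize({(max(a, c), max(b, d)) for (a, b) in I for (c, d) in J})

def colon(I, J):
    """Generators of I : J, as the intersection of I : m over the monomial generators m of J."""
    result = None
    for (p, q) in J:
        Ipq = minimalize({(max(a - p, 0), max(b - q, 0)) for (a, b) in I})
        result = Ipq if result is None else intersect(result, Ipq)
    return result

def rr_closure(I, c):
    """Ratliff-Rush closure of I as I^{c+1} : I^c, assuming c >= reg R(I) (Theorem [stable])."""
    return colon(power(I, c + 1), power(I, c))

# Example: I = (x^4, x^3*y, x*y^3, y^4) with c = 2 (one can check that reg R(I) = 2 here).
I = {(4, 0), (3, 1), (1, 3), (0, 4)}
print(sorted(rr_closure(I, 2)))   # [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)] = (x, y)^4
```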
Let $I$ be an $\mm$-primary ideal. Following [@DGV] and [@Li1] we call a numerical function $D(I,M)$ an [*extended degree*]{} of a finitely generated $A$-module $M$ with respect to $I$ if the following conditions are satisfied:
\(i) $D(I,M) = D(I,M/L) + \ell(L)$, where $L$ is the largest submodule of $M$ of finite length,
\(ii) $D(I,M) \ge D(I,M/xM)$ for a generic element $x$ in $I$,
\(iii) $D(I,M) = e(I,M)$ if $M$ is a Cohen-Macaulay module, where $e(I,M)$ denotes the multiplicity of $M$ with respect to $I$.
We refer the readers to [@DGV], [@Li1], [@Va] for several kinds of extended degrees. For $M = A$ we simply use the notations $D(I)$ and $e(I)$ instead of $D(I,A)$ and $e(I,A)$.
\[extended\] Let $A$ be a $d$-dimensional ring with $\depth A > 0$. Let $I$ be an $\mm$-primary ideal. Set $c(I) = D(I)-e(I)$, where $e(I)$ denotes the multiplicity of $I$.
[(i)]{} If $d = 1$, then $s(I) \le e(I)+c(I)-1$,
[(ii)]{} If $d \ge 2$, then $s(I) \le e(I)^{(d-1)!-1}[e(I)^2+ e(I)c(I)+2c(I)-e(I)]^{(d-1)!}-c(I)$.
The assertion follows from Theorem \[stable\] and [@RTV1 Theorem 3.3], where the right sides of the bounds were shown to be upper bounds for $\reg R(I)$. Note that [@RTV1 Theorem 3.3] was proved for the case $I = \mm$. However, the proof can be extended to an arbitrary $\mm$-primary ideal $I$. It was carried out in [@Li1 Theorem 4.4], where a more compact but weaker bound for $\reg R(I)$ is given.
\[CM\] Let $A$ be a $d$-dimensional Cohen-Macaulay ring. Let $I$ be an $\mm$-primary ideal.
[(i)]{} If $d = 1$, then $s(I) \le e(I)-1$,
[(ii)]{} If $d \ge 2$, then $s(I) \le e(I)^{2(d-1)!-1}[e(I)-1]^{(d-1)!}$.
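Since these bounds are purely numerical, they are trivial to evaluate; the following small Python sketch (an illustration only) does so, and one checks that setting $c(I) = 0$ in the bound of Theorem \[extended\] gives exactly the bound of Corollary \[CM\].

```python
from math import factorial

def s_bound(e, c, d):
    """Upper bound for the Ratliff-Rush index s(I) in Theorem [extended],
    where e = e(I), c = c(I) = D(I) - e(I) and d = dim A."""
    if d == 1:
        return e + c - 1
    f = factorial(d - 1)
    return e**(f - 1) * (e**2 + e*c + 2*c - e)**f - c

def s_bound_cm(e, d):
    """The Cohen-Macaulay bound of Corollary [CM], i.e. s_bound(e, 0, d)."""
    return e - 1 if d == 1 else e**(2*factorial(d - 1) - 1) * (e - 1)**factorial(d - 1)

# For a two-dimensional Cohen-Macaulay ring with e(I) = 3 both bounds give s(I) <= 6.
print(s_bound(3, 0, 2), s_bound_cm(3, 2))   # 6 6
```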
A similar upper bound for $s(I)$ was already given by Elias in [@El Theorem 2.1]; it is slightly worse than Corollary \[CM\](ii) in the case $d \ge 2$. His proof involves the postulation numbers of a set of quotient ideals $I/(x)$, where $x$ is an element of a given superficial sequence of $I$ generating a minimal reduction of $I$.
Ratliff-Rush filtration
=======================
Let $(A,\mm)$ be a local ring with $\dim A > 0$ and $I$ an ideal of $A$. One calls the sequence of ideals $\widetilde {I^n}$, $n \ge 1$, the [*Ratliff-Rush filtration*]{} with respect to $I$. It is well known that for $n \ge 1$, $$\widetilde {I^n} = \bigcup_{t \ge 0}I^{n+t}:I^t$$ and, if $I$ is a regular ideal, $\widetilde {I^n} = I^n$ for $n$ sufficiently large [@RR].
We call the least integer $m \ge 1$ such that $\widetilde {I^n} = I^n$ for $n \ge m$ the [*Ratliff-Rush regularity*]{} of $I$ and denote it by $s^*(I)$. Note that $\widetilde {I^n} = I^n$ does not necessarily imply $\widetilde {I^{n+1}} = I^{n+1}$ (see e.g. [@RS]).
In this section we shall see that $s^*(I)$ is strongly related to $\reg R(I)$.
\[bound\] Let $I$ be a regular ideal. Then
[(i)]{} $\widetilde {I^n} = I^{n+t}:I^t$ for all $t \ge \reg R(I)-n$,
[(ii)]{} $s^*(I) \le \max\{\reg R(I),1\}$.
By Proposition \[colon\] we have $I^{n+1}:I = I^n$ for $n \ge \reg R(I)$. Therefore, $$I^{n+t+1}: I^{t+1} = (I^{n+t+1}:I):I^t = I^{n+t}: I^t$$ for $t \ge \reg R(I)-n$, which proves (i). If $n \ge \reg R(I)$, we can put $t = 0$ in (i). Hence $\widetilde {I^n} = I^n:I^0 = I^n$, which proves (ii).
As pointed out in Remark \[a\], we can replace $\reg R(I)$ by $a_1(G(I))+1$ in Proposition \[bound\]. So we can recover the bound $s^*(I) \le \max\{a_1(G(I))+1,1\}$ proved by Puthenpurakal in [@Pu Theorem 4.3]. Note that Puthenpurakal considers the least integer $m \ge 0$ such that $\widetilde {I^n} = I^n$ for $n \ge m$, whereas we require $m \ge 1$ because one always has $\widetilde {I^0} = I^0 = A$. On the other hand, we can also deduce Proposition \[bound\] (ii) from Puthenpurakal’s result by using the inequality $a_1(G(I))+1 \le \reg G(I) = \reg R(I)$.
Proposition \[bound\] has the interesting consequence that if $c$ is an upper bound for $\reg R(I)$, then $\widetilde {I^n} = I^c:I^{c-n}$ for $n < c$ and $\widetilde {I^n} = I^n$ for $n \ge c$. In particular, if $\reg R(I) \le 1$, then $s^*(I) = 1$, i.e. $\widetilde {I^n} = I^n$ for all $n \ge 1$. We will use this fact to give a large class of ideals with $s^*(I) = 1$.
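For monomial ideals in two variables this recipe can be grafted onto the sketch given after Theorem \[stable\] in Section 1 (reusing its `power`, `colon` and `minimalize` helpers); as there, the bound `c` is an input which is assumed to satisfy $c \ge \reg R(I)$.

```python
def rr_power_from_bound(I, n, c):
    """Generators of the Ratliff-Rush closure of I^n, assuming c >= reg R(I):
    I^c : I^{c-n} for n < c, and I^n itself for n >= c (Proposition [bound])."""
    if n >= c:
        return power(I, n)
    return colon(power(I, c), power(I, c - n))
```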
Recall that a system of elements $x_1,...,x_r$ in $A$ is a [*$d$-sequence*]{} if the following two conditions are satisfied:
\(i) $x_i$ is not contained in the ideal generated by the rest of the system, $i = 1,...,r$,
\(ii) $(x_1,...,x_i):x_{i+1}x_k = (x_1,...,x_i):x_{i+1}$ for all $i = 0,...,r-1$ and $k = i+1,...,r$.
This notion was introduced by Huneke in [@Hun]. Examples of $d$-sequences are abundant, such as the maximal minors of an $r \times (r+1)$ generic matrix and systems of parameters in Buchsbaum rings. It was shown in [@Tr2 Corollary 5.7] that $\reg R(I) = 0$ if and only if $I$ is generated by a $d$-sequence. Therefore, Proposition \[bound\] (ii) implies the following result.
\[d-sequence\] Let $I$ be a regular ideal generated by a $d$-sequence. Then $\widetilde {I^n} = I^n$ for all $n \ge 1$.
It is also known that $\widetilde {I^n} = I^n$ for all $n \ge 1$ if and only if $G(I)$ contains a non-zerodivisor [@HLS (1.2)]. This fact can be used to find examples with $s^*(I) = 1$ and $\reg R(I)$ arbitrarily large.
\[example\] [Let $R = k[X]/P$, where $k[X]$ is a polynomial ring and $P$ is a homogeneous prime generated by forms of any given degree $d$. Let $A$ be the localization of $R$ at its maximal graded ideal. Then $G(\mm) \cong R$. Since $\depth R > 0$, $s^*(\mm) = 1$. By Theorem \[regularity\], $\reg R(\mm) = \reg G(\mm) = \reg R$. It is known that $\reg R+1$ is an upper bound for the degree of the generators of $P$ [@EG]. Thus, $\reg R(\mm) \ge d-1$.]{}
Despite the possible large difference between $\reg R(I)$ and $s^*(I)$ we can use $s^*(I)$ to characterize $\reg R(I)$ in the following case.
Recall that $A$ is called a [*Buchsbaum ring*]{} if every system of parameters $x_1,...,x_r$ of $A$ is a [*weak sequence*]{}, i.e. $$(x_1,...,x_{i-1}): x_i = (x_1,...,x_{i-1}):\mm$$ for $i = 1,...,r$. Huneke showed that $A$ is a Buchsbaum ring if and only if every system of parameters forms a $d$-sequence [@Hun Proposition 1.7]. Therefore, $\reg R(I) = 0$ and $s^*(I) = 1$ if $I$ is a parameter ideal in a Buchsbaum ring. If $I$ is not a parameter ideal, we have the following formula for $\reg R(I)$.
\[equality\] Let $A$ be a two-dimensional Buchsbaum ring with $\depth A > 0$. Let $I$ be an $\mm$-primary ideal, which is not a parameter ideal. Then $$\reg R(I) = \max\{r_J(I), s^*(I)\} = \min\{n \ge r_J(I)|\ \widetilde {I^n} = I^n\},$$ where $J$ is an arbitrary minimal reduction of $I$.
By Theorem \[regularity\], $\reg R(I) \ge r_J(I)$. Since $I$ is not a parameter ideal, $r_J(I) \ge 1$. Hence, $\reg R(I) \ge 1$. By Proposition \[bound\] (ii), this implies $\reg R(I) \ge s^*(I)$. Thus, $\reg R(I) \ge \max\{r_J(I),s^*(I)\}.$ Since $$\max\{r_J(I),s^*(I)\} \ge \min\{n \ge r_J(I)|\ \widetilde {I^n} = I^n\},$$ it suffices to show that $\reg R(I) \le \min\{n \ge r_J(I)|\ \widetilde {I^n} = I^n\}.$
By Lemma \[generating\], there is a superficial sequence $x,y$ of $I$ such that $J = (x,y)$. Since $\depth A > 0$, $I$ is a regular ideal. Hence $0:x = 0$. By Theorem \[regularity\], this implies $$\reg R(I) = \min\{n \ge r_J(I)|\ I^{n+1} \cap [(x):y] = xI^n\}.$$ We will show that $I^{n+1} \cap [(x):y] = I^{n+1} \cap (x)$ for $n \ge r_J(I)$. Let $f$ be an arbitrary element of $I^{n+1} \cap [(x):y]$. Since $I^{n+1} = (x,y)I^n$, there are elements $g,h \in I^n$ such that $f = gx +hy$. Since $fy \in (x)$, $h \in (x):y^2$. Since $x,y$ is a $d$-sequence, $(x):y^2 = (x):y$. Hence $hy \in y[(x):y^2] \subseteq y[(x):y] \subseteq (x)$. This implies $f \in (x)$. So we can conclude that $I^{n+1} \cap [(x):y] \subseteq I^{n+1} \cap (x)$. Since the converse inclusion is obvious, $I^{n+1} \cap [(x):y] = I^{n+1} \cap (x)$. Therefore, $$\begin{aligned}
\reg R(I) & = \min\{n \ge r_J(I)|\ I^{n+1} \cap (x) = xI^n\}\\
& = \min\{n \ge r_J(I)|\ I^{n+1}:x = I^n\}.\end{aligned}$$
Note that $I^n \subseteq I^{n+1}: x \subseteq \widetilde {I^{n+1}}:x = \widetilde {I^n}$ by [@RV Lemma 3.1 (5)]. If $\widetilde {I^n} = I^n$, this implies $I^{n+1}:x = I^n$. Thus, $\reg R(I) \le \min\{n \ge r_J(I)|\ \widetilde {I^n} = I^n\}.$
The formula $\reg R(I) = \min\{n \ge r_J(I)|\ \widetilde {I^n} = I^n\}$ provides us with a practical way to compute $\reg R(I)$ because we only need to check the condition $\widetilde {I^n} = I^n$ successively for $n \ge r_J(I)$. Moreover, compared with Theorem \[regularity\], we do not need a superficial sequence which generates a reduction of $I$. To find such a sequence is in general not easy.
It is known that the reduction numbers may be different for different minimal reductions [@Huc]. Since the reduction number is very useful in the study of local rings (see e.g. [@Va]), it is of great interest to know when $r_J(I)$ is independent of the choice of $J$. We can use Theorem \[equality\] to give a sufficient condition for the invariance of the reduction numbers.
\[invariance\] Let $A$ be a two-dimensional Buchsbaum ring with $\depth A > 0$ and $I$ an $\mm$-primary ideal. If there exists a minimal reduction $J$ of $I$ such that $s^*(I) < r_J(I)$, then the reduction numbers of all minimal reductions of $I$ equal $\reg R(I)$.
Since $r_J(I) > 1$, $I$ is not a parameter ideal. By Theorem \[equality\], the assumption implies $\reg R(I) = r_J(I)$. If there is a minimal reduction $J'$ of $I$ with $r_{J'}(I) \neq r_J(I)$, from the formula $\reg R(I) = \max\{r_{J'}(I),s^*(I)\}$ we can deduce that $\reg R(I) = s^*(I)$, a contradiction.
There are plenty of ideals with $s^*(I) = 1$ and $r_J(I)$ arbitrarily large. For instance, take Example \[example\], where $A$ is chosen to be a two-dimensional Buchsbaum ring. Then $r_J(\mm) = \reg R(\mm)$ by Theorem \[equality\]. As we have seen there, $s^*(\mm) = 1$ while $\reg R(\mm)$ can be arbitrarily large. We shall see in the last section that there are examples such that $s^*(I) = r_J(I)$ for a minimal reduction $J$, but $s^*(I) > r_{J'}(I)$ for another minimal reduction $J'$ of $I$.
Let $br(I)$ denote the big reduction number of $I$ which is defined by $$br(I) = \max\{r_J(I)|\ J \text{ is a minimal reduction of } I\}.$$ To estimate the big reduction number is usually a hard problem. If $s^*(I) = r_J(I)$ for some minimal reduction $J$ of $I$, we can deduce from Theorem \[equality\] that $br(I) = r_J(I)$. Under the assumption of Theorem \[equality\], we could not find any example with $br(I) < s^*(I)$. So we conjecture that $br(I) \ge s^*(I)$ in this case.
In the following we will give an alternative formula for $\reg R(I)$, which involves only the Ratliff-Rush closure of a power of $I$. This formula is based on the following observation.
\[reduction\] Let $A$ be a two-dimensional Buchsbaum ring with $\depth A > 0$ and $I$ an $\mm$-primary ideal. Let $J$ be an arbitrary minimal reduction of $I$. Let $x,y$ be a superficial sequence of $I$ such that $J = (x,y)$. Set $r = r_J(I)$. For $n \ge r$, we have $$I^{n+1}: x = I^n + y^{n-r}(I^{r+1}:x).$$
The case $n = r$ is trivial. Let $f$ be an arbitrary element of $I^{n+1}:x$, $n \ge r+1$. Since $I^{n+1} = (x,y)I^n$, there are elements $g,h \in I^n$ such that $xf = xg + yh$. From this it follows that $h \in I^n \cap [(x):y]$. As shown in the proof of Theorem \[equality\], $$I^n \cap [(x):y] = I^n \cap (x) = x(I^n:x).$$ Hence $h = xh'$ for some element $h' \in I^n:x$. Thus, $xf = xg + xyh'$. Since $x$ is a non-zerodivisor, $f = g +yh' \in I^n + y(I^n:x)$. So we have $I^{n+1}: x \subseteq I^n + y(I^n:x)$. Since the converse inclusion is obvious, we can conclude that $$I^{n+1}:x = I^n + y(I^n:x).$$ Applying this formula successively, we obtain $I^{n+1}: x = I^n + y^{n-r}(I^{r+1}:x).$
\[alternative\] Let $A$ be a two-dimensional Buchsbaum ring with $\depth A > 0$ and $I$ an $\mm$-primary ideal. Let $J$ be an arbitrary minimal reduction of $I$. Set $r = r_J(I)$. Then $$\reg R(I) = \min\{n \ge r|\ \widetilde {I^r} = I^n:I^{n-r}\}.$$
If $r = 0$, $I$ is a parameter ideal. Since $A$ is Buchsbaum, $I$ is generated by a $d$-sequence. Hence $\reg R(I) = 0$ by [@Tr2 Corollary 5.7]. In this case, the above formula is trivial. Therefore, we may assume that $I$ is not a parameter ideal.
By the proof of Theorem \[equality\], we have $$\reg R(I) = \min\{n \ge r|\ I^{n+1}: x = I^n\},$$ where $x$ is an element of a superficial sequence $x,y$ of $I$ such that $J = (x,y)$. By Lemma \[reduction\], if $n \ge r$, $$I^{n+1}: x = I^n + y^{n-r}(I^{r+1}:x) \subseteq I^n + I^{n-r}(\widetilde {I^{r+1}}:x).$$ By [@RV Lemma 3.1 (5)], $\widetilde {I^{r+1}}:x = \widetilde {I^r}$. If $\widetilde {I^r} = I^n:I^{n-r}$, we have $$I^n + I^{n-r}(\widetilde {I^{r+1}}:x) = I^n+ I^{n-r}\widetilde {I^r} \subseteq I^n \subseteq I^{n+1}:x.$$ From this it follows that $I^{n+1}: x = I^n$. Thus, $$\reg R(I) \ge \min\{n \ge r|\ \widetilde {I^r} = I^n:I^{n-r}\}.$$
To show the converse inequality we observe that for $n \ge r$, $$\widetilde {I^r} \subseteq \widetilde {I^n}: I^{n-r} \subseteq \widetilde {I^n}:x^{n-r} = \widetilde {I^r}$$ by [@RV Lemma 3.1 (5)]. From this it follows that $ \widetilde {I^r} = \widetilde {I^n}: I^{n-r}$. If $\widetilde {I^n} = I^n$, we have $\widetilde {I^r} = I^n:I^{n-r}$. Therefore, using Theorem \[equality\], we have $$\begin{aligned}
\reg R(I) & = \min\{n \ge r|\ \widetilde {I^n} = I^n\}\\
& \le \min\{n \ge r|\ \widetilde {I^r} = I^n:I^{n-r}\}.\end{aligned}$$
The conjecture of Eisenbud and Ulrich
=====================================
Throughout this section let $A$ be a finitely generated standard graded algebra over a field $k$ with $\dim A > 0$. Let $\mm$ be the maximal graded ideal of $A$ and $I$ an $\mm$-primary ideal generated by homogeneous elements of the same degree $d$, $d \ge 1$.
Motivated by the behavior of the function $\reg I^n$, Eisenbud and Ulrich conjectured that $\reg R(I) = \reg k[I_d]$, where $k[I_d]$ is the algebra generated by the elements of the component $I_d$ of $I$ [@EU Conjecture 1.3]. Note that $k[I_d]$ is isomorphic to the fiber ring $F(I) = \oplus_{n \ge 0}I^n/\mm I^n$ because $I$ is generated by elements of the same degree $d$.
The conjecture of Eisenbud and Ulrich is not true if one does not put further assumptions on $A$. This follows from the following observation on graded Buchsbaum rings, where $A$ is called a Buchsbaum ring if $A_\mm$ is a Buchsbaum ring.
\[Buchsbaum\] Assume that $\reg R(Q) = \reg F(Q)$ for every parameter ideal $Q$ generated by forms of the same degree in graded factor rings of $A$. Then $A$ is a Buchsbaum ring.
It is well known that every system of parameters is analytically independent. From this it follows that $F(Q)$ is isomorphic to a polynomial ring over $k$. Hence $\reg R(Q) = \reg F(Q) = 0$. By [@Tr2 Corollary 5.7], this implies that $Q$ is generated by a $d$-sequence. In particular, every system of parameters of $A$, which consists of forms of the same degree, is a $d$-sequence.
Let $x_1,...,x_s$ be a homogeneous system of parameters of degree 2 of $A$, $s = \dim A$. Applying the above fact to the factor ring $A/(x_1,...,x_i)$, $i < s$, we can deduce that every homogeneous system of parameters $x_1,...,x_i,y_1,...,y_{s-i}$ of $A$, where $y_1,...,y_{s-i}$ are linear forms, is a $d$-sequence. By [@Tr Corollary 2.6], a local ring $(B,\nn)$ is Buchsbaum if there exists a system of parameters $x_1',...,x_s'$ in $\nn^2$, $s = \dim B$, and a generating set $S$ for $\nn$ such that $x_1',...,x_i',y_1',...,y_{s-i}'$ is a $d$-sequence for every family $y_1',...,y_{s-i}'$ of $s-i$ elements of $S$, $i = 1,...,s$ (the term absolutely superficial sequence was used there for $d$-sequence). From this it follows that $A$ is a Buchsbaum ring.
The following example shows that $\reg R(I)$ can be arbitrarily larger than $\reg F(I)$ even when $I$ is a parameter ideal in a one-dimensional non-Buchsbaum ring.
[Let $A = k[x,y]/(x^t,xy^{t-1})$, $t \ge 2$. Then $A$ is a non-Buchsbaum ring for $t \ge 3$. Let $I = yA$. It is clear that $\reg F(I) = 0$. Using Theorem \[regularity\], we have $\reg R(I) = \min\{n \ge 0|\ y^{n+1}A \cap (0:yA) = 0\} = t-2.$]{}
Let $\nn$ denote the maximal graded ideal of $F(I)$. Since $F(I)$ is a standard graded algebra over $k$, $F(I) \cong G(\nn)$. By Theorem \[regularity\], $$\reg F(I) = \reg G(\nn) = \reg R(\nn),$$ and we can use a minimal reduction of $\nn$ to compute $\reg F(I)$.
A minimal reduction of $\nn$ is just a parameter ideal of $F(I)$ generated by linear forms. In general, there is a natural correspondence between such parameter ideals and minimal reductions of $I$ (see e.g. [@Va Section 1.3]). In our setting, this correspondence can be formulated as follows.
In the following we will identify $F(I)$ with $k[I_d]$ and we will consider it as the graded subalgebra $\oplus_{n \ge 0}(I^n)_{nd}$ of $R(I)$. Let $J$ be an arbitrary ideal generated by $s$ forms $x_1,...,x_s \in I_d$, $s = \dim A$. Let $\q$ be the ideal generated by these forms in $F(I)$.
\[correspondence\] $J$ is a minimal reduction of $I$ if and only if $\q$ is a parameter ideal of $F(I)$. Moreover, $r_\q(\nn) = r_J(I)$.
If $J$ is a minimal reduction of $I$, using the proof of Lemma \[generating\] we can find a generating sequence $x_1,...,x_s$ for $J$ such that it is superficial for both $I$ and $\nn$. Applying Theorem \[regularity\] and Lemma \[correspondence\] we have $$\begin{aligned}
& \reg R(I) = \\
& \min\left\{n \ge r_J(I)|\ I^{n+1} \cap [(x_1,...,x_{i-1}) :x_i] = (x_1,...,x_{i-1})I^n, i = 1,...,s\right\}, \nonumber\\
& \reg F(I) = \\
& \min\left\{n \ge r_J(I)|\ \nn^{n+1} \cap [(x_1,...,x_{i-1})F(I) :x_i] = (x_1,...,x_{i-1})\nn^n, i = 1,...,s\right\}. \nonumber \end{aligned}$$ Since $\nn^{n+1} = \oplus_{t \ge n}(I^{t+1})_{(t+1)d}$, one can easily check that $$\begin{aligned}
\nn^{n+1} \cap [(x_1,...,x_{i-1})F(I):x_i] & = \bigoplus_{t \ge n} \big(I^{t+1} \cap [(x_1,...,x_{i-1}):x_i]\big)_{(t+1)d},\\
(x_1,...,x_{i-1})\nn^n & = \bigoplus_{t \ge n}\big((x_1,...,x_{i-1})I^t\big)_{(t+1)d}.\end{aligned}$$ Therefore, one can use the formulas (1) and (2) to compare $\reg R(I)$ and $\reg F(I)$. In particular, one can easily see that $$\begin{aligned}
\reg R(I) & \ge \reg F(I),\end{aligned}$$ which was proved in [@EU Section 1] by different means.
\[Buch\] Let $A$ be a Buchsbaum ring with $\dim A \ge 1$ and $\depth G(I) \ge \dim A-1$. Then $\reg R(I) = \reg F(I)$.
By [@Tr1 Theorem 1.2], the assumption implies that $\reg R(I) = r_J(I)$ for every minimal reduction $J$ of $I$. Since $r_J(I) \le \reg F(I) \le \reg R(I)$ by (2) and (3), we obtain $\reg R(I) = \reg F(I)$.
The condition $\depth G(I) \ge \dim A-1$ is satisfied if $\dim A = 1$. Therefore, we have the following consequence.
Let $A$ be a one-dimensional Buchsbaum ring. Then $\reg R(I) = \reg F(I)$.
If $\dim A = 2$, we shall see that there is a formula for $\reg F(I)$, which is similar to the formula for $\reg R(I)$ in Theorem \[equality\].
Let $s^*_\ini(I)$ denote the least integer $m$ such that $(\widetilde {I^n})_{nd} = (I^n)_{nd}$ for all $n \ge m$. It is clear that $nd$ is the initial degree of the homogeneous elements of $\widetilde {I^n}$. For this reason we call $s^*_\ini(I)$ the [*initial Ratliff-Rush regularity*]{}. Obviously, $s^*_\ini(I) \le s^*(I)$.
\[initial\] Let $I$ be a regular ideal. Then $s^*_\ini(I) \le \max\{\reg F(I), 1\}.$
We have to show that $(\widetilde {I^n})_{nd} = (I^n)_{nd}$ for $n \ge \reg F(I)$. Since $\widetilde {I^n} = \cup_{t \ge 0}I^{n+t}:I^t$, it suffices to show that $(I^{n+t}:I^t)_{nd} \subseteq (I^n)_{nd}$ for $n \ge \reg F(I)$.
Let $x \in I_d$ be a superficial element of $\nn$. Since $\nn$ is a regular ideal, $x$ is a non-zerodivisor. By Lemma \[intersection\], $\nn^{n+1} \cap (x) = x\nn^n$ for $n \ge \reg F(I)$. This implies $\nn^{n+1}: x = \nn^n$. From this it follows that $\nn^{n+t}: x^t = \nn^n$ for $n \ge \reg F(I)$. Note that $(\nn^n)_d = (I^n)_{nd}$. Then $$(I^{n+t}:I^t)_{nd} \subseteq (I^{n+t}:x^t)_{nd} = (\nn^{n+t}:x^t)_d = (\nn^n)_d = (I^n)_{nd}$$ for $n \ge \reg F(I)$, which implies the conclusion.
\[equality 2\] Let $A$ be a two-dimensional Buchsbaum ring with $\depth A > 0$. Assume that $I$ is not a parameter ideal. Then $$\reg F(I) = \max\{r_J(I),s^*_\ini(I)\} = \min\{n \ge r_J(I)|\ (\widetilde {I^n})_{nd} = (I^n)_{nd}\},$$ where $J$ is an arbitrary homogeneous minimal reduction of $I$.
Let $J = (x,y)$, $x, y \in I_d$. By (2) we have $\reg F(I) \ge r_J(I)$. Since $I$ is not a parameter ideal, $\nn$ is not generated by two elements. From this it follows that the defining equations of $F(I)$ have degree $> 1$. Hence $\reg F(I) > 0$ [@EG]. By Lemma \[initial\], this implies $\reg F(I) \ge s^*_\ini(I)$. Thus, $\reg F(I) \ge \max\{r_J(I), s^*_\ini(I)\}$. Since $$\max\{r_J(I), s^*_\ini(I)\} \ge \min\{n \ge r_J(I)|\ (\widetilde {I^n})_{nd} = (I^n)_{nd}\},$$ it suffices to show that $\reg F(I) \le \min\{n \ge r_J(I)|\ (\widetilde {I^n})_{nd} = (I^n)_{nd}\}.$
By the proof of Lemma \[reduction\], we can choose $x, y$ such that $x,y$ is a superficial sequence for both $I$ and $\nn$. Since $I$ is a regular ideal, $x$ is a non-zerodivisor in $A$. Hence $0:x = 0$ in $F(I)$. By (2) we have $$\reg F(I) = \min\{n \ge r_J(I)|\ \nn^{n+1} \cap (xF(I) : y) = x\nn^n\}.$$ By (3) and (4), $\nn^{n+1} \cap (xF(I) : y) = x\nn^n$ if $[I^{t+1} \cap (xA:y)]_{(t+1)d} = (xI^t)_{(t+1)d}$ for $t \ge n$. On the other hand, by the proof of Theorem \[equality\], we have $$I^{t+1} \cap (xA:y) = I^{t+1} \cap xA = x(I^{t+1}:x)$$ for $t \ge r_J(I)$. Therefore, we only need to show that $(I^{t+1}:x)_{td} = (I^t)_{td}$ for $t \ge n$ if $(\widetilde {I^n})_{nd} = (I^n)_{nd}$ for $n \ge r_J(I)$.
By Lemma \[reduction\], we have $I^{t+1}:x = I^t + y^{t-n}(I^{n+1}:x)$ for $t \ge n \ge r_J(I)$. Since $I^{n+1}:x \subseteq \widetilde {I^{n+1}}:x = \widetilde {I^n}$ [@RV Lemma 3.1(5)], $I^{t+1}:x \subseteq I^t + y^{t-n}\widetilde {I^n}.$ If $(\widetilde {I^n})_{nd} = (I^n)_{nd}$, we obtain $$(I^{t+1}:x)_{td} \subseteq (I^t)_{td} + y^{t-n}(I^n)_{nd} = (I^t)_{td} \subseteq (I^{t+1}:x)_{td}.$$ From this it follows that $(I^{t+1}:x)_{td} = (I^t)_{td}$.
\[initial 1\] Let $A$ be a two-dimensional Buchsbaum ring with $\depth A > 0$. Let $J$ be an arbitrary homogeneous minimal reduction of $I$. Then $\reg R(I) = \reg F(I)$ if and only if $\widetilde {I^n} = I^n$ for the least integer $n \ge r_J(I)$ such that $(\widetilde {I^n})_{nd} = (I^n)_{nd}$.
If $I$ is a parameter ideal, we have $\reg R(I) = \reg F(I) = 0$ by [@Tr2 Corollary 5.7] and $\widetilde {I^n} = I^n$ for $n \ge 1$ by Corollary \[d-sequence\]. If $I$ is not a parameter ideal, the conclusion follows from Theorem \[equality\] and Theorem \[equality 2\].
Monomial ideals in two variables
================================
In this section we will use the relationship between Castelnuovo-Mumford regularity and Ratliff-Rush closure to investigate the conjecture of Eisenbud and Ulrich for monomial ideals in two variables.
Let $A = k[x,y]$ be a polynomial ring over a field $k$, $\mm = (x,y)$, and let $I$ be an $\mm$-primary ideal generated by monomials of degree $d$, $d \ge 1$. In this case, $I$ contains $x^d,y^d$ and $J = (x^d,y^d)$ is a minimal reduction of $I$. It is well-known [@Sa] and easy to see that $$\widetilde {I^n} = \bigcup_{t \ge 0}I^{n+t}: (x^{td},y^{td}).$$
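This description of $\widetilde{I^n}$ can be computed directly on exponent vectors, for instance by reusing the `power`, `colon` and `minimalize` helpers from the sketch in Section 1. The sketch below is illustrative only; the cutoff `t_max` is an input, and any value with `t_max` $\ge \reg R(I) - n$ already yields the full union by Proposition \[bound\] (i).

```python
def rr_power(I, n, d, t_max):
    """Generators of the Ratliff-Rush closure of I^n for a monomial ideal I of k[x,y]
    generated in degree d, as the union of I^{n+t} : (x^{td}, y^{td}) over 0 <= t <= t_max.
    Any t_max >= reg R(I) - n already gives the full union (Proposition [bound](i))."""
    gens = set()
    for t in range(t_max + 1):
        J = {(t * d, 0), (0, t * d)}          # the ideal (x^{td}, y^{td})
        gens |= colon(power(I, n + t), J)
    return minimalize(gens)
```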
\[middle 1\] Let $I= (x^d,y^d) + (x^{d-i}y^i\mid a \le i \le b)$, where $a \le b < d$ are given positive integers. Then $\widetilde {I^n} = I^n$ for all $n \ge 1$.
Let $x^iy^j$ be an arbitrary monomial of $\widetilde {I^n}$. Then $x^{i+td}y^j \in I^{n+t}$ for some $t \ge 1$. Since $I^{n+t}$ is generated by monomials of degree $(n+t)d$, $x^{i+td}y^j$ is divisible by a monomial $x^{(n+t)d-c}y^c \in I^{n+t}$. The divisibility implies $i+td \ge (n+t)d-c$ and $j \ge c$.
If $j < na$, then $$(n+t)d - c \ge (n+t)d - j > (n+t)d - na = td + n(d-a).$$ Let $M = \{x^d,x^{d-a}y^a,x^{d-a-1}y^{a+1},...,x^{d-b}y^b,y^d\}$ be the set of the monomial generators of $I$. Then $x^{(n+t)d-c}y^c$ is a product of $n+t$ monomials of $M$. Let $s$ be the number of copies of $x^d$ among these $n+t$ monomials of $M$. If $s < t$, we would have $(n+t)d -c \le sd + (n+t-s)(d-a)$ because the exponent of $x$ in each monomial in $M \setminus \{x^d\}$ is less than or equal to $d-a$. Since $$sd + (n+t-s)(d-a) = td + n(d-a) - (t-s)a < td + n(d-a),$$ we would get $(n+t)d -c < td+n(d-a)$, a contradiction. Therefore, we must have $s \ge t$. From this it follows that $x^{nd-c}y^c = x^{(n+t)d-c}y^c/x^{td}$ is a product of $n$ monomials in $M$. Hence $x^{nd-c}y^c \in I^n$. Since $x^iy^j$ is divisible by $x^{nd-c}y^c$, $x^iy^j \in I^n$.
By symmetry, if $i < n(d-b)$, we can also show that $x^iy^j \in I^n$.
Now, we may assume that $i \ge n(d-b)$ and $j \ge na$. Let $Q$ denote the ideal generated by the monomials $x^{d-j}y^j$, $a \le j \le b$. It is clear that $Q^n$ is generated by the monomials $x^{nd-j}y^j $, $na \le j \le nb$. If $ j < nb$, $x^iy^j$ is divisible by $x^{nd-j}y^j$ because $i \ge nd - c \ge nd-j$. Therefore, $x^iy^j \in Q^n$. Since $Q \subset I$, we obtain $x^iy^j \in I^n$. If $j \ge nb$, then $x^iy^j$ is divisible by $x^{n(d-b)}y^{nb} = (x^{d-b}y^b)^n \in I^n$. Thus, we always have $x^iy^j \in I^n$. Therefore, we can conclude that $\widetilde {I^n} = I^n$.
\[middle 2\] Let $I= (x^d,y^d) + (x^{d-i}y^i\mid a \le i \le b)$, where $a \le b < d$ are given positive integers. Then $$\reg R(I) = \reg F(I) = r_J(I)$$ for any homogeneous minimal reduction $J$ of $I$.
By Lemma \[middle 1\], we have $s^*(I) = 1$. Since $s^*(I) \ge s^*_\ini(I) \ge 1$, we also have $s^*_\ini(I) = 1$. Applying Theorem \[equality\] and Theorem \[equality 2\], we obtain $\reg R(I) = \reg F(I) = r_J(I)$.
Theorem \[middle 2\] gives a large class of monomial ideals in two variables for which the conjecture of Eisenbud and Ulrich holds. In particular, it contains the case where $I$ is an $\mm$-primary ideal generated by three monomials.
Assume that $I = (x^d,x^{d-a}y^a,y^d)$, $1 \le a <d$. Then $$\reg R(I) = \reg F(I) = d/(a,d)-1.$$
This is the case $a = b$ of Theorem \[middle 2\]. Therefore, $\reg R(I) = \reg F(I) = r_J(I)$. For $J = (x^d,y^d)$, it is easy to check that $r_J(I) = d/(a,d)-1$.
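The value $r_J(I) = d/(a,d)-1$ can also be checked mechanically; the snippet below (reusing the `power`, `mul` and `minimalize` helpers from the sketch in Section 1) does this for one sample choice of $d$ and $a$.

```python
from math import gcd

def reduction_number(I, J):
    """Least n with I^{n+1} = J * I^n; the loop terminates because J is a reduction of I."""
    n = 0
    while power(I, n + 1) != mul(J, power(I, n)):
        n += 1
    return n

d, a = 12, 8                                  # any 1 <= a < d would do
I = {(d, 0), (d - a, a), (0, d)}              # I = (x^d, x^{d-a} y^a, y^d)
J = {(d, 0), (0, d)}                          # J = (x^d, y^d)
print(reduction_number(I, J), d // gcd(a, d) - 1)   # both print 2
```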
Now we will present another large class of monomial ideals in two variables for which the conjecture of Eisenbud and Ulrich holds.
\[neighbor\] Let $I$ be an ideal in $k[x,y]$ which is generated by monomials of degree $d \ge 2$. Assume that $x^d,x^{d-1}y,y^d \in I$. Then $\reg R(I) = \reg F(I).$
Let $n$ be the least integer $n \ge r_J(I)$ such that $(\widetilde {I^n})_{nd} = (I^n)_{nd}$. By Corollary \[initial 1\], we only need to show that $\widetilde {I^n} = I^n$.
Let $x^iy^j$ be an arbitrary monomial of $\widetilde {I^n}$. Then $x^iy^{j+td} \in I^{n+t}$ for some $t \ge 1$. Since $I^{n+t}$ is generated by monomials of degree $(n+t)d$, there exists a monomial $x^ay^{(n+t)d-a} \in I^{n+t}$ such that $x^iy^{j+td}$ is divisible by $x^ay^{(n+t)d-a}$. By the divisibility, we have $i \ge a$ and $j + td \ge (n+t)d-a$. For what follows see Figure I, where each node represents the vector of exponents of a monomial.
If $i \ge nd$, then $x^iy^j$ is divisible by $x^{nd} \in I^n$. If $i < nd$, then $a < nd$. We have $$\begin{aligned}
(x^ay^{nd-a})x^{(nd-a)d} & = (x^{d-1}y)^{nd-a}x^{nd} \in I^{nd-a+n}\\
(x^ay^{nd-a})y^{td} & = x^ay^{(n+t)d-a} \in I^{n+t}.\end{aligned}$$
[Figure I: the exponent vectors $(d-1,1)$, $(a,nd-a)$, $(a+(nd-a)d,nd-a)$, $(a,(n+t)d-a)$, $(i,j)$ and $(i,j+td)$ plotted in the $(i,j)$-plane, with $nd$ marked on the $i$-axis.]
Set $s = \max\{(nd-a),t\}$. Then $(x^ay^{nd-a})x^{sd}, (x^ay^{nd-a})y^{sd} \in I^{n+s}$. Hence $x^ay^{nd-a} \in I^{n+s}:(x^{sd},y^{sd}) \subseteq \widetilde {I^n}$. Since $(\widetilde {I^n})_{nd} = (I^n)_{nd}$, $x^ay^{nd-a} \in I^n$. Since $i \ge a$ and $j \ge nd-a$, $x^iy^j$ is divisible by $x^ay^{nd-a}$. Thus, $x^iy^j \in I^n$. So we can conclude that $\widetilde {I^n} = I^n$.
Actually, we obtain the above cases by translating everything into the language of lattice points. For every set $Q \subseteq A$ we consider the set $E(Q)$ of the vectors of the exponents of the monomials in $Q$. We define the sum of two sets in $\NN^2$ simply as the set of the corresponding sums of the elements of the given sets. A multiple of a set is thus a sum of copies of the given set.
Let $E$ be the set of the vectors of the exponents of the monomial generators of $I$. Set $\e_1 = (d,0)$ and $\e_2 = (0,d)$. For $J = (x^d,y^d)$ we have $$r_J(I) = \min\big\{n|\ (n+1)E = \{\e_1,\e_2\}+nE\big\}.$$ Let $E_n$ resp. $F_n$ denote the set of the vectors $\a \in \NN^2$ such that $\a + t\e_1, \a+t\e_2 \in (n+t)E$ resp. $\a + t\e_1, \a+t\e_2 \in (n+t)E + \NN^2$ for some $t \ge 0$. Then $E((\widetilde {I^n})_{nd}) = E_n$ and $E(\widetilde {I^n}) = F_n.$ It is clear that $(\widetilde {I^n})_{nd} = (I^n)_{nd}$ resp. $\widetilde {I^n} = I^n$ if and only if $E_n = nE$ resp. $F_n = nE+\NN^2$.
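This lattice-point dictionary translates directly into the following Python sketch (an illustration only). The quantifier "for some $t \ge 0$" in the definition of $E_n$ is truncated at a cutoff `t_max`; the computed set is exact as soon as `t_max` $\ge \reg R(I) - n$, by Proposition \[bound\] (i).

```python
def msum(P, Q):
    """Minkowski sum of two finite subsets of N^2."""
    return {(p[0] + q[0], p[1] + q[1]) for p in P for q in Q}

def multiple(E, n):
    """The n-fold sum nE, with 0E = {(0, 0)}."""
    S = {(0, 0)}
    for _ in range(n):
        S = msum(S, E)
    return S

def r_J(E, d):
    """r_J(I) for J = (x^d, y^d), via the formula (n+1)E = {e_1, e_2} + nE;
    terminates because J is a reduction of I."""
    n = 0
    while multiple(E, n + 1) != msum({(d, 0), (0, d)}, multiple(E, n)):
        n += 1
    return n

def E_n(E, d, n, t_max):
    """The set E_n (all of whose elements have total degree nd), with the quantifier
    'for some t >= 0' truncated at t_max; exact once t_max >= reg R(I) - n."""
    out = set()
    for a in ((i, n * d - i) for i in range(n * d + 1)):   # the lattice points of degree nd
        for t in range(t_max + 1):
            big = multiple(E, n + t)
            if (a[0] + t * d, a[1]) in big and (a[0], a[1] + t * d) in big:
                out.add(a)
                break
    return out

# (tilde(I^n))_{nd} = (I^n)_{nd} if and only if E_n(E, d, n, t_max) == multiple(E, n).
```

By Theorem \[equality 2\], when $I$ is not a parameter ideal $\reg F(I)$ is then the least $n \ge$ `r_J(E, d)` with `E_n(E, d, n, t_max) == multiple(E, n)`.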
[ The sets $E_n$ have an interesting interpretation in the theory of affine semigroup rings. Let $S \subseteq \NN^2$ denote the additive monoid generated by $E$. Then $S$ is called an affine semigroup and $F(I)$ is the semigroup ring $k[S]$ of $S$. Set $$S^* = \{\a\in \NN^2|\ \a + t\e_1, \a +t\e_2 \in S \text{ for some $t \ge 0$}\}.$$ By [@Tr' Lemma 1.1], $k[S]$ is Cohen-Macaulay or Buchsbaum if and only if $S^* = S$ or $S^*+(S \setminus \{0\}) \subseteq S$, respectively. Actually, $S^*$ is an additive semigroup such that $k[S^*]$ is Cohen-Macaulay, and we can prove that $$H_\nn^1(k[S]) \cong k[S^*]/k[S],$$ where $\nn$ denotes the maximal graded ideal of $k[S]$ and $k[S^*]/k[S]$ is the vector space spanned by the elements of $S^* \setminus S$. It is easy to see that $S^* \setminus S = \cup_{n\ge 1} (E_n \setminus nE)$ and that the $n$-th graded component of $H_\nn^1(k[S])$ is a vector space spanned by the elements of the set $E_n \setminus nE$. ]{}
Now we will give an example such that $\reg R(I) = s^*(I) = r_J(I)$ for $J = (x^d,y^d)$, but $s^*(I) > r_{J'}(I)$ for another minimal reduction $J'$ of $I$.
\[Huckaba\] [Let $I = (x^7,x^6y,x^2y^5,y^7)$. First, we will show that $\reg R(I) = r_J(I)$ for $J = (x^7,y^7)$. By Theorem \[neighbor\] we know that $\reg R(I) = \reg F(I)$. Since $\reg F(I) \ge r_J(I)$ by (2), it suffices to show that $\reg F(I) = r_J(I)$. It is easy to check that $r_J(I) = 4$ and $(\widetilde {I^4})_{28} = (I^4)_{28}$. By Theorem \[equality 2\], this implies $\reg F(I) = 4$. On the other hand, it is shown in [@Huc Example 3.1] that $r_{J'}(I) \le 3$ for $J' = (x^7,x^6y+y^7)$. Hence the reduction numbers of $I$ depend on the choice of the minimal reductions. By Theorem \[equality\], this implies $\reg R(I) = s^*(I) = r_J(I)$. ]{}
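The numerical claims in this example can be re-checked with the lattice-point sketch above (reusing its `multiple`, `r_J` and `E_n` helpers); the cutoff `t_max = 10` is an ad hoc choice.

```python
E = {(7, 0), (6, 1), (2, 5), (0, 7)}      # exponent vectors of I = (x^7, x^6 y, x^2 y^5, y^7)
r = r_J(E, 7)                             # reduction number for J = (x^7, y^7)
print(r, E_n(E, 7, r, t_max=10) == multiple(E, r))   # expected, per the example: 4 True
```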
The following example shows that we may have $\reg R(I) = s^*(I) > r_J(I)$, where $J = (x^d,y^d)$.
[Let $I = (x^{17-i}y^i|\ i = 0,1,3,5,13,14,16,17)$. By [@HHS Example 3.2], we have $\reg F(I) = 4 > r_J(I) = 3$, where $J = (x^{17},y^{17})$. By Theorem \[neighbor\] we have $\reg R(I) = \reg F(I) = 4$. Hence, Theorem \[equality\] implies $\reg R(I) = s^*(I) > r_J(I)$. ]{}
[1]{}
D. Bayer and D. Mumford, What can be computed in algebraic geometry? In: Computational Algebraic Geometry and Commutative Algebra (Cortona 1991), 1–48, Cambridge University Press, 1993.
B. T. Cortadellas and S. Zarzuela, On the structure of the fiber cone of ideals with analytic spread one, J. Algebra 317 (2007), no. 2, 759–785.
L. R. Doering, T. Gunston, W. Vasconcelos, Cohomological degrees and Hilbert functions of graded modules, Amer. J. Math. 120 (1998), 493–504.
L. X. Dung, Castelnuovo-Mumford regularity of the associated graded module in dimension one, Acta Mathematica Vietnamica 38 (2013), 541–550.
L. X. Dung and L. T. Hoa, Castelnuovo-Mumford regularity of associated graded modules and fiber cones of filtered modules, Comm. in Algebra. 40 (2012), 404–422.
D. Eisenbud and S. Goto, Linear free resolutions and minimal multiplicities, J. Algebra 88 (1984), 89–133.
D. Eisenbud and B. Ulrich, Notes on regularity stabilization, Proc. Amer. Math. Soc. 140 (2012), no. 4, 1221–1232.
J. Elias, On the computation of Ratliff-Rush closure, J. Symbolic Comput. 37 (2004), 717–725.
A. V. Jayanthan and R. Nanduri, Castelnuovo-Mumford regularity and Gorensteinness of fiber cone, Comm. in Algebra 40 (2012), 1338–1351.
W. Heinzer, B. Johnston, D. Lantz and K. Shah, Coefficient ideals in and blowups of a commutative Noetherian domain, J. Algebra 162 (1993), 355–391.
W. Heinzer, D. Lantz and K. Shah, The Ratliff-Rush ideals in a Noetherian ring, Comm. in Algebra 20 (1992), 591–622.
M. Hellus, L.T. Hoa and J. Stückrad, Castelnuovo-Mumford regularity and the reduction number of some monomial curves, Proc. Amer. Math. Soc. 138 (2010), 27–35.
L.T. Hoa, Reduction numbers of equimultiple ideals, J. Pure Appl. Algebra 109 (1996), 111–126.
L.T. Hoa, A note on the Hilbert-Samuel Hilbert function in a two-dimensional local ring, Acta Math. Vietnamica 21 (1996), 335–347.
S. Huckaba, Reduction numbers for ideals of higher analytic spread, Math. Proc. Cambridge Phil. Soc. 102 (1987), 49–57.
C. Huneke, The theory of $d$-sequences and powers of ideals, Advances in Math. 46 (1982), 249–279.
C. H. Linh, Upper bound for the Castelnuovo-Mumford regularity of associated graded modules, Comm. in Algebra 33 ( 2005), 1817–1831.
C. H. Linh, Castelnuovo-Mumford regularity and degree of nilpotency, Math. Proc. Cambridge Philos. Soc. 142 (2007), 429–437.
T. Puthenpurakal, Ratliff-Rush filtration, regularity and depth of higher associated graded modules, J. Pure Appl. Algebra 208 (2007), No. 1, 159–176.
L. J. Ratliff, Jr. and D. E. Rush, Two notes on reductions of ideals, Indiana Univ. Math. J. 27 (1978), 929–934.
M. Rossi and I. Swanson, Notes on the behavior of the Ratliff-Rush filtration, in: Commutative Algebra (Grenoble/Lyon, 2001), Contemp. Math. 331, 313–328, Amer. Math. Soc., 2003.
M. Rossi and G. Valla, Hilbert functions of filtered modules, Lect. Notes of the Unione Matematica Italiana, vol. 9, Springer, 2010.
M. E. Rossi, N. V. Trung and G. Valla, Castelnuovo-Mumford regularity and extended degree, Trans. Amer. Math. Soc. 355 (2003), 1773–1786.
J. Sally, Ideals whose Hilbert function and Hilbert polynomial agree at $n = 1$, J. Algebra 157 (1993), 534–547.
B. Strunk, Castelnuovo-Mumford regularity, postulation numbers, and reduction numbers, J. Algebra 311 (2007), 538–550.
N. V. Trung, Absolutely superficial sequences, Math. Proc. Cambridge Philos. Soc. 93 (1983), 35–47.
N. V. Trung, Projections of one-dimensional Veronese varieties, Math. Nachr. 118 (1984), 47–67.
N. V. Trung, Reduction exponent and degree bound for the defining equations of graded rings, Proc. Amer. Math. Soc. 101 (1987), 229–236.
N. V. Trung, The Castelnuovo regularity of the Rees algebra and the associated graded ring, Trans. Amer. Math. Soc. 350 (1998), 2813–2832.
N. V. Trung, Castelnuovo-Mumford regularity and related invariants, in: Commutative Algebra, Lecture Notes Series 4, Ramanujan Mathematical Society, 2007, 157–180.
W. Vasconcelos, Integral Closure: Rees algebras, Multiplicities, and Algorithms, Springer, 2005.
[^1]: The first and the last authors are supported by Vietnam National Foundation for Science and Technology Development. Part of the paper was done when the last author visited Genova in July 2014. He would like to thank INdAM and the Department of Mathematics of the University of Genova for their support and hospitality.
---
author:
- |
J.A. Gracey,\
Theoretical Physics Division,\
Department of Mathematical Sciences,\
University of Liverpool,\
P.O. Box 147,\
Liverpool,\
L69 3BX,\
United Kingdom.
title: 'Four loop ${\overline{\mbox{MS}}}$ mass anomalous dimension in the Gross-Neveu model'
---
[**Abstract.**]{} We compute the four loop term of the mass anomalous dimension in the two dimensional Gross-Neveu model in the ${\overline{\mbox{MS}}}$ scheme. The absence of multiplicative renormalizability which results when using dimensional regularization means that the effect of the evanescent operator, which first appears at three loops in the $4$-point Green’s function, has to be properly treated in the construction of the renormalization group function. We repeat the calculation of the three loop ${\overline{\mbox{MS}}}$ $\beta$-function and construct the $\beta$-function of the evanescent operator coupling which corrects earlier computations.
Introduction.
=============
The Gross-Neveu model is a two dimensional asymptotically free renormalizable quantum field theory whose basic interaction is a simple quartic fermion self-interaction, [@1]. It has been widely studied since its introduction in [@1], as it has many interesting properties which are readily accessible given the space-time dimension the model is defined in. For instance, unlike the same interaction in four dimensions it is renormalizable in two dimensions and the dynamical generation of mass has been observed and studied in the large $N$ expansion, [@1]. Moreover, one property of interest is that it possesses an $S$-matrix whose [*exact*]{} form is known, [@2; @3], whence the mass gap is known exactly, [@4], in terms of the basic mass scale of the theory, $\Lambda_{{\overline{\mbox{\footnotesize{MS}}}}}$. Aside from these features the model itself underpins several problems in condensed matter physics. For instance, in the replica limit it is equivalent at the critical point to the two dimensional random bond Ising model. (See, for example, the review [@5].) Necessary to study the fixed point properties for such physical problems is knowledge of the renormalization group functions in some renormalization scheme, such as ${\overline{\mbox{MS}}}$. These have been computed to three loops in ${\overline{\mbox{MS}}}$ over a period of years. In [@1] the one loop $\beta$-function was computed demonstrating asymptotic freedom. This was extended to two loops in [@6], whilst the three loop $\beta$-function appeared more or less simultaneously in [@7] and [@8], though the methods of computation in the two articles were significantly different. For instance, given the quartic nature of the sole interaction, it can be rewritten in terms of a trivalent interaction with the introduction of an auxiliary field. This was the version of the theory used in [@8], as well as at two loops in [@6], not only to deduce the $\beta$-function but also to study the effective potential of the auxiliary field at three loops. However, renormalization effects will generate a quartic interaction. So [@8] had in effect to handle the intricate problem of renormalizing a version of the theory with two independent couplings. The three loop ${\overline{\mbox{MS}}}$ $\beta$-function of the original theory was eventually extracted when the effect of the newly generated interaction was properly accounted for in the renormalization group equations. By contrast, in [@7] the purely quartic version of the theory was treated with a massive fermion. The agreement of both three loop results was a reassuring non-trivial check on the final expression. At four loops only the wave function renormalization has been computed in [@9] and later verified in [@10]. Although apparently one loop further than the $\beta$-function of [@7; @8], or mass anomalous dimension, [@11], the fermionic nature of the interaction means that the anomalous dimension begins at two loops since the one loop snail graph is zero in the wave function channel of the $2$-point function. Thus in effect the four loop wave function is a computation on a par with the three loop mass anomalous dimension and $\beta$-function.
Given the range of problems which the Gross-Neveu model underlies, it is the purpose of this article to start the programme of completing our knowledge of the four loop structure by computing the mass anomalous dimension at this order in the ${\overline{\mbox{MS}}}$ scheme. This may appear to be the least important of the two outstanding quantities. However, as will become apparent from the calculation we will describe, the nature of the model is such that there are several difficult technical issues to be dealt with en route which do not arise in other four loop calculations in other important theories. Therefore, in one sense we are testing the viability of computing renormalization constants at four loops in an example in the Gross-Neveu model which contains a moderate number of Feynman diagrams rather than the $1000$ plus graphs which will occur in the $4$-point function renormalization [*and*]{} with a sensible investment of time. Moreover, the experience gained in such an exercise will be invaluable for any future such coupling constant renormalization. It will transpire that we will require properties of Feynman integrals in two space-time dimensions higher than the one we compute in, in order to determine the final renormalization group function. Whilst this may not be the unique way to determine this, it will suggest the importance of the structure of higher dimensional Feynman diagrams in complementing practical lower dimensional calculations and potentially [*equally*]{} vice versa at high loop order. Our main tool of computation will be the use of dimensional regularization in $d$ $=$ $2$ $-$ $\epsilon$ dimensions where the relevant part of the $2$-point function is written in terms of basic massive vacuum bubble graphs. Such a calculation could only be completed with the use of computer algebra and invaluable in this was the symbolic manipulation language [Form]{}, [@12]. At this level of loop order automatic Feynman diagram computation using computers is the only viable way of keeping a reliable account of the algebra within a reasonable time. Several other computer packages were also required.
As this is the first four loop calculation which involves [*four*]{} terms of the renormalization group functions, we will have to revisit and redo the earlier calculations using the same approach consistently as that which will be used here at four loops. For reasons which will become evident later this has entailed us carrying out the full renormalization of the $4$-point function in the theory with a massive fermion, where the mass will act as a natural infrared regulator. The articles [@7] and [@8] centred on the derivation of the three loop $\beta$-function itself, which, unlike in most field theories, is not the same as fully renormalizing the underlying $n$-point function. For the Gross-Neveu model this was first observed in [@13; @14] with explicit three loop calculations for the full $4$-point function given for a massless version of the theory discussed in the series of articles [@15; @16; @17] and examined in [@18] for the massive version. However, the two computations were not in complete agreement as to the final structure of the $4$-point function renormalization. Prior to considering any four loop mass anomalous dimension this discrepancy needs to be resolved in one way or another, which is a secondary consideration for this article. Any four loop $\beta$-function computation would also require this resolution but constructing the mass anomalous dimension in a consistent way is an easier environment in which to [*check*]{} any final explanation.
The paper is organised as follows. We review the current understanding of the renormalization properties of the Gross-Neveu model in section $2$. Given this we discuss the three loop vacuum bubble integrals needed for the full three loop renormalization in section $3$. We then resolve the discrepancy between [@17] and [@18] in the $4$-point function renormalization in section $4$. Section $5$ extends the discussion of the relevant vacuum bubble computations to four loops with the four loop mass anomalous dimension finally being derived in section $6$. Concluding remarks are given in section $7$.
Preliminaries.
==============
We turn now to the specific properties of the Gross-Neveu model. The bare two dimensional Lagrangian is, [@1], $$L ~=~ i \bar{\psi}_0^{i} {\partial \! \! \! /}\psi^{i}_0 ~-~ m_0 \bar{\psi}^{i}_0
\psi^{i}_0 ~+~ \frac{1}{2} g_0 ( \bar{\psi}^i_0 \psi^i_0 )^2
\label{baregn}$$ where the subscript ${}_0$ denotes a bare quantity and $g$ is the coupling constant which is dimensionless in two dimensions. Unlike [@7; @11] which used the symmetry group $O(N)$ we take the $SU(N)$ version of the theory so that the fermion field $\psi^i$ is a complex rather than Majorana fermion, with the former property deriving from the fields taking values in the group $SU(N)$ with $1$ $\leq$ $i$ $\leq$ $N$. An advantage of the choice of the group $SU(N)$ is that the Feynman rule of (\[baregn\]) involves two terms unlike the three of the $O(N)$ case. At four loops this reduces the number of terms needed to be substituted when the Feynman rules are implemented in [Form]{} which speeds up the calculation. We choose to work with the massive version where $m$ is the mass. This is primarily because we will need to redo the three loop renormalization of the coupling and the presence of a non-zero mass will ensure that any resulting divergences are purely ultraviolet and not deriving from spurious infrared infinities which could arise when external momenta are nullified in the basic divergent $4$-point Green’s function. In two dimensions the theory (\[baregn\]) is renormalizable to all orders in the coupling constant and is asymptotically free. Specifically we note that the ${\overline{\mbox{MS}}}$ scheme renormalization group functions of the model, as they currently stand, are, [@1; @6; @7; @8; @9; @11; @17], $$\begin{aligned}
\gamma(g) &=& (2N-1) \frac{g^2}{8\pi^2} ~-~ (N-1)(2N-1)
\frac{g^3}{16\pi^3} \nonumber \\
&& +~ (4N^2-14N+7)(2N-1) \frac{g^4}{128\pi^4} ~+~ O(g^5) \nonumber \\
\gamma_m(g) &=& -~ (2N-1) \frac{g}{2\pi} ~+~ (2N-1) \frac{g^2}{8\pi^2} ~+~
(4N-3)(2N-1) \frac{g^3}{32\pi^3} ~+~ O(g^4) \nonumber \\
\beta(g) &=& (d-2)g ~-~ ( N - 1 ) \frac{g^2}{\pi} ~+~
( N - 1 ) \frac{g^3}{2\pi^2} ~+~ ( N - 1 ) ( 2N - 7 ) \frac{g^4}{16\pi^4}
\nonumber \\
&& +~ O(g^5)
\label{threerge}\end{aligned}$$ where $\gamma(g)$, $\gamma_m(g)$ and $\beta(g)$ are respectively the field and mass anomalous dimensions and the $\beta$-function. Their formal definitions will be discussed later. Although several terms were determined for the $O(N)$ version of the model, we have converted the previous computations to the $SU(N)$ model whence the free field case emerges when $N$ $=$ ${\mbox{\small{$\frac{1}{2}$}}}$ as indicated by the vanishing of $\gamma(g)$ and $\gamma_m(g)$ for this value.
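As a quick independent check of this property, the truncated series in (\[threerge\]) can be coded directly; the following short Python sketch (our own illustration, using sympy, and not part of the computational setup described below) confirms that the quoted $\gamma(g)$ and $\gamma_m(g)$ terms all vanish identically at $N$ $=$ ${\mbox{\small{$\frac{1}{2}$}}}$.

```python
import sympy as sp

g, N = sp.symbols('g N')

# field and mass anomalous dimensions of (threerge), truncated as quoted
gamma_psi = ((2*N - 1)*g**2/(8*sp.pi**2) - (N - 1)*(2*N - 1)*g**3/(16*sp.pi**3)
             + (4*N**2 - 14*N + 7)*(2*N - 1)*g**4/(128*sp.pi**4))
gamma_m = (-(2*N - 1)*g/(2*sp.pi) + (2*N - 1)*g**2/(8*sp.pi**2)
           + (4*N - 3)*(2*N - 1)*g**3/(32*sp.pi**3))

# both vanish identically at N = 1/2, the free field value quoted in the text
print(sp.simplify(gamma_psi.subs(N, sp.Rational(1, 2))),
      sp.simplify(gamma_m.subs(N, sp.Rational(1, 2))))   # -> 0 0
```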
In the computations deriving (\[threerge\]) the main strategy was to dimensionally regularize (\[baregn\]) in $d$-dimensions and determine the renormalization constants as poles in the deviation from two dimensions. Here we will take $d$ $=$ $2$ $-$ $\epsilon$ with $\epsilon$ being regarded as small. Whilst the correct renormalization group functions emerged at three loops, [@8] overlooked a novel feature of the dimensionally regularized Lagrangian which was explicitly discussed in [@15; @16] after the observation in [@13; @14]. Basically (\[baregn\]) ceases being multiplicatively renormalizable in $d$-dimensions but crucially retains renormalizability. This is not a property solely restricted to the Gross-Neveu model but is a feature of any two dimensional model with a $4$-fermi interaction such as the abelian and non-abelian Thirring models, [@18]. At a certain loop order, which is different for different models, evanescent operators are generated through the renormalization which are non-trivial in $d$-dimensions but which are absent or evaporate in the limit to strictly two dimensions which corresponds to the lifting of the regularization. A comprehensive study of this problem was provided for general $4$-fermi theories in [@13; @14] and we recall those features which are relevant for our ultimate goal. The same problem in four dimensions has been considered in [@19; @20]. Though there $4$-fermi operators are of course non-renormalizable and treated in the context of effective field theories.
First, in $d$-dimensions the basis of $\gamma$-matrices based on the Clifford algebra $$\{ \gamma^\mu , \gamma^\nu \} ~=~ 2 \eta^{\mu\nu}$$ has to be extended to the set of objects $\Gamma_{(n)}^{\mu_1\ldots\mu_n}$ for integer $n$ $\geq$ $0$, [@13; @14; @15; @16; @17], which is totally antisymmetric in the Lorentz indices and defined by $$\Gamma_{(n)}^{\mu_1 \mu_2 \ldots \mu_n} ~=~ \gamma^{[\mu_1} \gamma^{\mu_2}
\ldots \gamma^{\mu_n]}$$ where we use the convention that the square brackets include division by $n!$ when all possible permutations of the $\gamma$-strings are written explicitly. Then $\Gamma_{(n)}^{\mu_1\ldots\mu_n}$ form a complete closed basis for $\gamma$-matrices in $d$-dimensions where $\Gamma_{(0)}$ is the unit matrix. Hence one can immediately see that the most general multiplicatively renormalizable $4$-fermi theory using dimensional regularization in $d$-dimensions is, [@13; @14; @15; @16; @17], $$L ~=~ i \bar{\psi}_0^{i} {\partial \! \! \! /}\psi^{i}_0 ~-~ m_0 \bar{\psi}^{i}_0
\psi^{i}_0 ~+~ \frac{1}{2} \sum_{n=0}^\infty g_{(n) \, 0} \, \bar{\psi}^i_0
\Gamma_{(n)}^{\mu_1\ldots\mu_n} \psi^i_0 \,
\bar{\psi}^i_0 \Gamma_{(n) ~ \mu_1\ldots\mu_n} \psi^i_0
\label{baregen}$$ where there is an infinite number of (bare) couplings $g_{(n)\,0}$ with $g_{(0)}$ $\equiv$ $g$ identified as the original one of the Gross-Neveu model (\[baregn\]). Though the Gross-Neveu model strictly will correspond to the case where $g_{(1)}$ $=$ $g_{(2)}$ $=$ $0$ as $\Gamma_{(1)}^\mu$ and $\Gamma_{(2)}^{\mu\nu}$ are not evanescent. Given (\[baregen\]), there are several points of view depending on the problem in hand. If (\[baregen\]) is the most general renormalizable theory in $d$-dimensions, then in principle for the Gross-Neveu model one must begin with (\[baregen\]) but omit $g_{(1)}$ and $g_{(2)}$. This will produce renormalization group functions dependent, in principle, on all evanescent couplings. The true renormalization group functions of the original theory would eventually emerge from this multiplicatively renormalizable theory by setting $g_{(n)}$ $=$ $0$ for $n$ $\geq$ $3$ at the end, [@13; @14; @15; @16; @17]. Clearly this would involve a significant amount of calculation much of which would be redundant in the production of the final renormalization group functions. From a practical point of view there is a less laborious route to follow if one abandons the insistence on multiplicative renormalizability, [@17; @18]. Then operators such as $${\cal O}_n ~=~ {\mbox{\small{$\frac{1}{2}$}}}\bar{\psi}^i \Gamma_{(n)}^{\mu_1\ldots\mu_n} \psi^i \,
\bar{\psi}^i \Gamma_{(n) ~ \mu_1\ldots\mu_n} \psi^i$$ for $n$ $\geq$ $3$ will be generated with $g_{(0)}$ $\equiv$ $g$ dependent coefficients. The problem for this point of view then becomes one of how to extract the [*true*]{} two dimensional renormalization group functions. It turns out that a formalism was developed in [@13; @14] and used in [@18; @10] for this evanescent operator issue. In essence the true renormalization group functions are not strictly determined from what we term the naive renormalization constants. By these we mean those required to render $2$ and $4$-point functions finite. Instead these naive renormalization group functions need to be amended by the effect the evanescent operators have on the divergence structure in $d$-dimensions relative to two dimensions. In [@13; @14] such a projection formula was introduced which involves projection functions, $\rho^{(k)}(g)$, $\rho^{(k)}_m(g)$ and $C^{(k)}(g)$, where the index $k$ ranges over the evanescent range $k$ $\geq$ $3$. These functions quantify the effect the evanescent operators have on the derivation of the renormalization group functions. The derivation of the projection formula is given in [@14] and applied additionally in [@18; @10]. We recall that it is $$\begin{aligned}
\left. \int d^d x \, {{\cal N}}[ {{\cal O}}_k ] \right|_{ g_{(i)} = 0 \, , \, d = 2 }
&=& \int d^d x \left( \, \rho^{(k)}(g) {{\cal N}}[ i
\bar{\psi} {\partial \! \! \! /}\psi ~-~ m \bar{\psi}\psi ~+~ 2g {{\cal O}}_0 ] \right.
\nonumber \\
&& \left. \left. ~~~~~~~~~-~ \rho^{(k)}_m(g) \, {{\cal N}}[ m \bar{\psi}\psi ] ~+~
C^{(k)}(g) {{\cal N}}[ {{\cal O}}_0 ] \right) \right|_{ g_{(i)} = 0 \, , \, d = 2 }
\label{projfm} \end{aligned}$$ where the normal ordering symbol, ${{\cal N}}$, is included, [@13; @14; @21; @22]. The relation strictly only has meaning when inserted in a $2$ or $4$-point Green’s function. In other words one inserts the evanescent operator of the left side of (\[projfm\]) in a Green’s function and evaluates it using the naive renormalization constants to yield a finite expression. Equally one inserts the right side of (\[projfm\]) into the same Green’s function to the same loop order and renders it finite. Then the coefficients of the perturbative expansion in the coupling constant $g$ are chosen order by order to render the equation consistent at each loop order after one has set $d$ $=$ $2$. This procedure is denoted by the restriction $\{g_{(i)}$ $=$ $0$, $d$ $=$ $2\}$ on both sides of (\[projfm\]). Once the explicit projection functions have been determined to the appropriate order, then the [*true*]{} renormalization group functions are given by, [@13; @14], $$\begin{aligned}
\beta(g) &=& \tilde{\beta}(g) ~+~ \sum_{k=3}^\infty C^{(k)}(g)
\beta_k(g) \nonumber \\
\gamma(g) &=& \tilde{\gamma}(g) ~+~ \sum_{k=3}^\infty \rho^{(k)}(g)
\beta_k(g) \nonumber \\
\gamma_m(g) &=& \tilde{\gamma}_m(g) ~+~ \sum_{k=3}^\infty \rho^{(k)}_m(g)
\beta_k(g)
\label{truerge}\end{aligned}$$ where $\tilde{}$ denotes the [*naive*]{} renormalization group functions.
For the Gross-Neveu model the first appearance of an evanescent operator is at three loops which was originally observed in [@14; @17]. Whilst this postdates the three loop ${\overline{\mbox{MS}}}$ $\beta$-functions of [@7; @8] the latter are unaffected by the generation of ${\cal O}_3$ since it occurs with a coupling dependence of $g^3$. So, coupled with $C^{(3)}(g)$, it will only affect the $\beta$-function itself at four loops. Equally the mass anomalous dimension of [@11] does not feel this evanescent operator presence until four loops either. We refrain from quoting the value of the associated $\beta$-function, $\beta_3(g)$, until later. This is primarily because there are two competing values given in [@17] and [@18]. In the former the renormalization was deduced in a massless version of (\[baregn\]) where it was claimed that only ladder style diagrams were the origin of ${\cal O}_3$. In that instance the nullification of two external momenta in the associated $4$-point function should not have resulted in spurious infrared singularities. Whilst a $\beta_3(g)$ was determined, it involved $\zeta(3)$, where $\zeta(x)$ is the Riemann zeta function; no such term was found in [@18], which used the massive version, (\[baregn\]). This clearly avoided infrared singularities when all the external momenta were nullified in the $4$-point function. Moreover it was claimed that the diagrams leading to ${\cal O}_3$ were akin to those analysed in [@17] but with no $\zeta(3)$ appearing in the published value of $\beta_3(g)$. Though both calculations agreed on the rational part of $\beta_3(g)$. The discrepancy between both computations needs to be resolved and a four loop calculation which requires $\beta_3(g)$ explicitly to obtain the true renormalization group function will provide a non-trivial forum in which to achieve this. The correct expression for $\beta_3(g)$ will be crucial for the four loop $\beta$-function. Given this structure of the Gross-Neveu model we can now write down the renormalized form of (\[baregn\]) we will use. It is, [@17; @18], $$L ~=~ i Z_\psi \bar{\psi}^{i} {\partial \! \! \! /}\psi^{i} ~-~ m Z_\psi Z_m
\bar{\psi}^{i} \psi^{i} ~+~ \frac{1}{2} g \mu^\epsilon Z_g Z^2_\psi
( \bar{\psi}^i \psi^i )^2 ~+~ \frac{1}{2} g \mu^\epsilon Z_{33} Z^2_\psi
\left( \bar{\psi}^i \Gamma_{(3)} \psi^i \right)^2
\label{rengn}$$ where the renormalized quantities are defined from their bare counterparts by $$\psi_0 ~=~ \psi Z_\psi^{{\mbox{\small{$\frac{1}{2}$}}}} ~~~,~~~ m_0 ~=~ m Z_m ~~~,~~~ g_0 ~=~
g Z_g \mu^\epsilon
\label{rencon}$$ in $d$-dimensions and $Z_{33}$ absorbs the infinity associated with the generation of ${\cal O}_3$ at this order. Unlike $Z_\psi$, $Z_m$ and $Z_g$ its coupling constant expansion does not commence with unity. Though we stress that (\[rengn\]) is valid only for $2$-point calculations to four loops. Only by renormalizing the $4$-point function at four loops would the full evanescent operator structure at that order emerge. For instance, it is not inconceivable given the $\gamma$-matrix structure of the four loop $4$-point function that a ${\cal O}_4$ evanescent operator will be generated. From these renormalization constants the naive renormalization group functions are given by $$\begin{aligned}
\tilde{\gamma}(g) &=& \mu \frac{\partial}{\partial \mu} \ln Z_\psi ~~~,~~~
\tilde{\gamma}_m(g) ~=~ -~ \tilde{\beta}(g) \frac{\partial}{\partial g} \ln Z_m
\nonumber \\
\tilde{\beta}(g) &=& (d-2) g ~-~ g \tilde{\beta}(g)
\frac{\partial}{\partial g} \ln Z_g \end{aligned}$$ to the order at which we are working. For $\beta_3(g)$ one deduces its explicit form from the simple pole in $\epsilon$ via standard methods, [@14]. Thus in the context of (\[truerge\]) and these observations, we note that for the mass anomalous dimension the result of [@18] for $\rho_m^{(3)}(g)$ is $$\rho_m^{(3)}(g) ~=~ -~ \frac{g}{\pi} ~+~ O(g^2) ~.$$ The higher terms are not required since the first term of $\beta_3(g)$ is $O(g^3)$.
Three loop calculations.
========================
We begin this section by discussing the computational strategy. To determine the mass anomalous dimension for (\[baregn\]) we consider the $2$-point function for the massive theory. In [@9] the four loop ${\overline{\mbox{MS}}}$ wave function anomalous dimension was calculated and independently verified in [@10]. Therefore, we assume that result for $Z_\psi$. However, this is effectively a calculation of only three orders since the one loop snail of Figure $1$ corresponding to $\langle \psi_\alpha(p) \bar{\psi}^\beta(-p) \rangle$ has no non-zero contributions in the ${p \! \! \! /}$ channel for the massive or massless Lagrangians where $p$ is the external momentum. Moreover, since for this case one is interested only in $Z_\psi$, it sufficed to consider the [*massless*]{} theory whence one only has to determine massless Feynman integrals. The component involving ${p \! \! \! /}_\alpha^{~\beta}$ can be deduced by multiplying all diagrams by ${p \! \! \! /}$ and taking the spinor trace. For the mass anomalous dimension one cannot follow this strategy, not only because the one loop diagram contributes but also because its contribution to $Z_m$ requires the presence of the mass itself. Therefore unlike the determination of $Z_\psi$ one cannot neglect the snail graph of Figure $1$ at one loop, nor the graphs where snails appear as subgraphs at higher order. However, given that one is only interested in the $m \delta_\alpha^{~\beta}$ channel of $\langle \psi_\alpha(p)
\bar{\psi}^\beta(-p) \rangle$ the Green’s function can be analysed by nullifying the external momentum. Taking the spinor trace produces Lorentz scalar integrals but with tensor structure resulting from internal momenta contractions. The presence of the common mass $m$ automatically protects against the appearance of spurious infrared infinities and relegates the problem of determining the ultraviolet structure to mapping the integrals with internal momenta contractions to a set of basic master vacuum bubbles at each loop order. The problem of studying massive vacuum bubbles in [*four*]{} dimensions has received much attention over the years, culminating in, for example, the [Matad]{} package at three loops, [@24], and the comprehensive study by Broadhurst of all combinations of massive and massless propagators in the Benz or tetrahedron topology, [@25]. The analogous problem relative to two dimensions has not been treated as systematically. Though the main results to three loops have appeared within various articles. Additionally, at four loops we will have to handle new integrals for topologies which do not simply break into products of lower loop vacuum bubbles. The main difficulty lies in having to handle the tensor structure emanating from the fermion propagator. Throughout we have made extensive use of the symbolic manipulation language [Form]{}, [@12], in which to code our algorithm where the contributing Feynman diagrams to the $2$ and $4$-point functions are generated automatically with the [Qgraf]{} package, [@23]. To summarize there are $1$ one loop, $2$ two loop, $7$ three loop and $36$ four loop graphs for the $2$-point function. For the $4$-point function there are $3$ one loop, $18$ two loop and $138$ three loop graphs to renormalize.
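To make the channel projections just described concrete, the following Python fragment is a strictly two dimensional toy illustration (our own; the explicit $\gamma$-matrix representation and the test amplitude are choices made purely for the example and are not taken from the main calculation). It extracts the ${p \! \! \! /}$ and mass channels of a $2$-point amplitude by multiplying by ${p \! \! \! /}$ and tracing, or by tracing directly.

```python
import numpy as np

# a two dimensional Minkowski gamma-matrix representation, eta = diag(+1,-1) (our choice)
g0 = np.array([[0., 1.], [1., 0.]])
g1 = np.array([[0., -1.], [1., 0.]])
eta = np.diag([1., -1.])
for a, ga in [(0, g0), (1, g1)]:
    for b, gb in [(0, g0), (1, g1)]:
        assert np.allclose(ga @ gb + gb @ ga, 2*eta[a, b]*np.eye(2))

# a mock amplitude Sigma(p) = A pslash + B m 1 with coefficients to be recovered
p0, p1, m = 0.7, 0.3, 1.0
A, B = 0.25, -0.4
pslash = p0*g0 - p1*g1                  # gamma^mu p_mu with the metric above
p2 = p0**2 - p1**2
Sigma = A*pslash + B*m*np.eye(2)

# channel projections: multiply by pslash and trace, or trace directly
print(np.trace(pslash @ Sigma)/(2*p2))  # recovers A = 0.25
print(np.trace(Sigma)/(2*m))            # recovers B = -0.4
```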
At three loops we now summarize the few master (massive) integrals which will be of interest to us, in our notation and conventions. First, Figure $2$ denotes the basic vacuum bubbles to two loops. We define the first graph of Figure $2$ as $$I ~=~ i \int_k \frac{1}{[k^2-m^2]} ~.$$ We work in Minkowski space and choose to include a factor of $i$ with each integration measure which is abbreviated by $$\int_k ~=~ \int \frac{d^dk}{(2\pi)^d} ~.$$ The integral $I$ is trivial to deduce from the Euler $\beta$-function as $$I ~=~ \frac{\Gamma(1-d/2)}{(4\pi)^{d/2}} (m^2)^{d/2-1} ~.$$ Hence the middle graph of Figure $2$ is $I^2$.
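As a numerical sanity check of this closed form (our own illustration, assuming the standard Wick rotation so that $I$ becomes the Euclidean integral of $1/(k_E^2+m^2)$), one can evaluate the radial integral directly below two dimensions, where it converges.

```python
from mpmath import mp, mpf, quad, gamma, pi

mp.dps = 25

def I_closed(d, m2):
    return gamma(1 - d/2)/(4*pi)**(d/2) * m2**(d/2 - 1)

def I_direct(d, m2):
    # Euclidean radial form of I; converges for 0 < d < 2
    Sd = 2*pi**(d/2)/gamma(d/2)        # unit sphere area in d dimensions
    return Sd/(2*pi)**d * quad(lambda r: r**(d - 1)/(r**2 + m2), [0, mp.inf])

for d in [mpf(1), mpf('1.5')]:
    print(d, I_closed(d, mpf(1)), I_direct(d, mpf(1)))   # columns agree
```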
The final graph we denote by $\Delta(0)$, [@11], where $$\Delta(0) ~=~ i \int_k \frac{J(k^2)}{[k^2-m^2]}$$ and $$J(p^2) ~=~ i \int_k \frac{1}{[k^2-m^2][(k-p)^2 - m^2]}
\label{jdef}$$ is the basic one loop self-energy bubble. There is a sequence of integrals related to $J(p)$ defined by $$J_{\alpha\beta}(p) ~=~ i \int_k
\frac{1}{[k^2-m^2]^\alpha[(k-p)^2 - m^2]^\beta}$$ where we choose $J_{21}(p)$ $\equiv$ $K(p^2)$. In this form a Gauss relation of the hypergeometric functions gives the relation $$(p^2-4m^2) K(p^2) ~=~ J(0) ~-~ (d-3) J(p^2)
\label{cut1}$$ with $$J(0) ~=~ -~ \frac{\Gamma(2-d/2)}{(4\pi)^{d/2}} (m^2)^{d/2-2}$$ since explicit calculations produce $$J(p^2) ~=~ -~ \frac{\Gamma(2-d/2)}{(4\pi)^{d/2}} \left( \frac{4m^2-p^2}{4}
\right)^{d/2-2} \, {}_2F_1 \left( 2 - \frac{d}{2}, \frac{1}{2}; \frac{3}{2};
\frac{p^2}{p^2-4m^2} \right)$$ and $$K(p^2) ~=~ \frac{\Gamma(3-d/2)}{2(4\pi)^{d/2}} \left( \frac{4m^2-p^2}{4}
\right)^{d/2-3} \, {}_2F_1 \left( 3 - \frac{d}{2}, \frac{1}{2}; \frac{3}{2};
\frac{p^2}{p^2-4m^2} \right) ~.$$ Likewise, at the subsequent level $$(p^2-4m^2) \left( J_{22}(p^2) ~+~ 2 J_{31}(p^2) \right) ~=~ 2 K(0) ~-~ 2 (d-5)
K(p^2)
\label{cut2}$$ whence $$\begin{aligned}
J_{31}(p^2) &=& \frac{(d-6)}{2p^2} K(p^2) ~+~ \frac{(p^2-2m^2)}{p^2(p^2-4m^2)}
\left( K(0) - (d-5) K(p^2) \right) \nonumber \\
J_{22}(p^2) &=& -~ \frac{(d-6)}{p^2} K(p^2) ~+~ \frac{4m^2}{p^2(p^2-4m^2)}
\left( K(0) - (d-5) K(p^2) \right) ~.
\label{cut3}\end{aligned}$$ These rules are used extensively for the two and higher loop Feynman integrals. The integral $\Delta(0)$ is finite in two dimensions and can be evaluated in an expansion in powers of $\epsilon$ as $$\Delta(0) ~=~ -~ \frac{9s_2}{16\pi^2m^2} ~+~ O(\epsilon)
\label{del0}$$ where $s_2$ $=$ $(2\sqrt{3}/9) \mbox{Cl}_2(2\pi/3)$ with $\mbox{Cl}_2(x)$ the Clausen function. The analogous four dimensional vacuum bubble also contains $s_2$ in its finite part but is divergent. In principle the $O(\epsilon)$ term of (\[del0\]) can be deduced. However, throughout our computations we left $\Delta(0)$ itself unevaluated since on renormalizability grounds it must be absent from the final renormalization constants at higher loops. This is because if the $2$-point function did not have its external momenta nullified then the integral $\Delta(p)$ would emerge, where $$\Delta(p) ~=~ i \int_k \frac{J(k^2)}{[(k-p)^2 - m^2]} ~.$$ Clearly such a non-local function of the external momenta could not be retained when all the counterterms are included.
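The relation (\[cut1\]) and the two hypergeometric forms quoted above can be checked against one another numerically; the following Python/mpmath sketch (our own cross-check, with the kinematic point chosen arbitrarily so that the ${}_2F_1$ argument lies in $(0,1)$) confirms the identity to working precision.

```python
from mpmath import mp, mpf, gamma, hyp2f1, pi

mp.dps = 30
d, m2, p2 = mpf('1.7'), mpf('1.0'), mpf('-0.6')   # spacelike p^2 keeps z = p^2/(p^2-4m^2) in (0,1)

z = p2/(p2 - 4*m2)
M = (4*m2 - p2)/4

J0 = -gamma(2 - d/2)/(4*pi)**(d/2) * m2**(d/2 - 2)
Jp = -gamma(2 - d/2)/(4*pi)**(d/2) * M**(d/2 - 2) * hyp2f1(2 - d/2, mpf('0.5'), mpf('1.5'), z)
Kp =  gamma(3 - d/2)/(2*(4*pi)**(d/2)) * M**(d/2 - 3) * hyp2f1(3 - d/2, mpf('0.5'), mpf('1.5'), z)

print((p2 - 4*m2)*Kp)        # left hand side of (cut1)
print(J0 - (d - 3)*Jp)       # right hand side of (cut1): agrees
```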
At three loops there are several more basic topologies. If one ignores for the moment the complication due to the presence of internal momenta contractions in integral numerators, then the basic graphs are given in Figure $3$. The first involves $I$ and its derivatives with respect to $m^2$. Also the second is a variation on $\Delta(0)$ and we note that the two loop subgraph is $$i \int_k \frac{J(k^2)}{[k^2-m^2]^2} ~=~ \frac{(d-3)}{3m^2} \Delta(0)$$ which is established by differentiating $\Delta(0)$ with respect to $m^2$. Aside from the Benz topology the remaining vacuum bubbles of Figure $3$ are related to the integrals $i\int_k J^2(k^2)/[k^2-m^2]$, $i\int_k J^2(k^2)$ and $i\int_k J(k^2) K(k^2)$. Similar to $\Delta(0)$ these are finite in two dimensions but their values are required at four loops when multiplied by counterterms. As only the leading term in $\epsilon$ is required in each case it transpires that we can set $d$ $=$ $2$ and use the fact that in Euclidean space, denoted by the subscript $E$, $$\left. J_E(-k^2) \right|_{d=2} ~=~ \left. \frac{\theta}{\sinh\theta} J(0)
\right|_{d=2}$$ upon the change of variables $k^2$ $=$ $4m^2\sinh^2(\theta/2)$ where $\left. J(0) \right|_{d=2}$ $=$ $-$ $1/(4\pi m^2)$. Then, for instance, $$i \int_k J^2(k^2) ~=~ -~ \frac{m^2}{2\pi} \left. J^2(0) \right|_{d=2}
\int_0^\infty d \theta \, \frac{\theta^2}{\sinh\theta} ~+~ O(\epsilon) ~.$$ This can be evaluated from standard integrals to give $$i \int_k J^2(k^2) ~=~ -~ \frac{7\zeta(3)}{64\pi^3m^2} ~+~ O(\epsilon) ~.
\label{j2def}$$ Equally we find $$i \int_k \frac{J^2(k^2)}{[k^2-m^2]} ~=~ \frac{11\zeta(3)}{576\pi^3m^4} ~+~
O(\epsilon) ~.
\label{j21def}$$ Though in $d$-dimensions one can derive the relation $$i \int_k J(k^2) K(k^2) ~=~ \frac{(3d-8)}{8m^2} \, i \int_k J^2(k^2) ~.$$ The presence of $K(k^2)$ in several of the master integrals with topology similar to those of Figure $3$ produces similar finite integrals whose finite part is required and which is determined in an analogous way. We note that $$\begin{aligned}
i \int_k \frac{J(k^2)}{[k^2-4m^2]} &=& \left. \frac{\ln(2)}{2\pi} J(0)
\right|_{d=2} ~+~ O(\epsilon)
\nonumber \\
i \int_k \frac{J^2(k^2)}{[k^2-4m^2]} &=& \left[ \frac{7}{8} \zeta(3) ~-~
\ln(2) \right] \left. \frac{J^2(0)}{2\pi} \right|_{d=2} ~+~ O(\epsilon) ~.\end{aligned}$$ For the mass anomalous dimension these basic vacuum bubbles suffice to determine the renormalization constants to three loops. Given the nature of the $4$-point interaction in (\[baregn\]) the Benz topology does not occur in the $2$-point function at this loop order.
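The $\theta$ integral underlying (\[j2def\]) is elementary to check numerically; the short sketch below (our own cross-check in Python/mpmath) confirms both the standard value $\frac{7}{2}\zeta(3)$ of that integral and the assembled finite part quoted in (\[j2def\]).

```python
from mpmath import mp, mpf, quad, zeta, pi, sinh

mp.dps = 25
m2 = mpf(1)

theta_int = quad(lambda t: t**2/sinh(t), [0, mp.inf])
print(theta_int, mpf(7)/2*zeta(3))                  # the standard integral: 7 zeta(3)/2

J0 = -1/(4*pi*m2)                                   # J(0) at d = 2
print(-(m2/(2*pi))*J0**2*theta_int)                 # i int_k J^2(k^2)
print(-7*zeta(3)/(64*pi**3*m2))                     # the finite part quoted in (j2def)
```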
Having discussed the basic scalar master integrals which result we briefly note the algorithm dealing with the numerator structure of the integrals. This has been systematically quantified in [@11]. However, we make repeated use of $$kl ~=~ \frac{1}{2} \left[ k^2 ~+~ l^2 ~-~ [(k-l)^2-m^2] ~-~ m^2 \right]
\label{propdec1}$$ and then $$k^2 ~=~ [k^2-m^2] ~+~ m^2 ~~~,~~~ l^2 ~=~ [l^2-m^2] ~+~ m^2
\label{propdec2}$$ in each contributing topology where there are $[k^2-m^2]$, $[l^2-m^2]$ and $[(k-l)^2-m^2]$ propagators already. This is done in such a way that powers of $kl$ can remain when all mixed $[(k-l)^2-m^2]$ propagators are absent and one does not then continue substituting for $kl$. In such integrals one can use Lorentz symmetry in the $k$ and $l$ subgraph integrals to redefine even powers of $kl$ as proportional to $k^2 l^2$ or zero if there are an odd number of factors of $kl$. Then (\[propdec2\]) is repeated. Consequently several variations in the basic bubble graphs of Figure $3$ emerge and we note that $$\begin{aligned}
i \int_k k^2 J^2(k^2) &=& \frac{4}{3} I^3 ~+~ \frac{4}{3} m^2 i \int_k J^2(k^2)
\nonumber \\
i \int_k (k^2)^2 J^2(k^2) &=& \frac{8m^2}{3(3d-4)} \left[ (5d-6) I^3 ~+~
2d m^2 \, i \int_k J^2(k^2) \right] \end{aligned}$$ where these are exact and no finite parts have been omitted since these are crucial for the next loop order. In essence this summarizes the key ingredients in the algorithm for evaluating the three loop mass anomalous dimension which has been coded in [Form]{} and reproduces the previous three loop ${\overline{\mbox{MS}}}$ Gross-Neveu mass anomalous dimension.
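To illustrate the effect of (\[propdec1\]) and (\[propdec2\]) on a single scalar product, the following small sympy sketch (our own illustration of the bookkeeping; the actual implementation is the [Form]{} module described above) shows how a numerator $kl$ over the three propagators partial fractions into lower topologies plus an $m^2$ remainder.

```python
import sympy as sp

k2, l2, kl, m2 = sp.symbols('k2 l2 kl m2')   # k^2, l^2, k.l and m^2 treated as independent symbols

D1 = k2 - m2                                 # [k^2 - m^2]
D2 = l2 - m2                                 # [l^2 - m^2]
D3 = k2 - 2*kl + l2 - m2                     # [(k-l)^2 - m^2]

# (propdec1): k.l = (1/2)[ k^2 + l^2 - ((k-l)^2 - m^2) - m^2 ]
assert sp.simplify(kl - sp.Rational(1, 2)*(k2 + l2 - D3 - m2)) == 0

# hence a numerator k.l over all three propagators reduces to simpler integrals
reduced = sp.Rational(1, 2)*(1/(D1*D3) + 1/(D2*D3) - 1/(D1*D2) + m2/(D1*D2*D3))
assert sp.simplify(kl/(D1*D2*D3) - reduced) == 0

# (propdec2) then trades any remaining k^2 or l^2 for a propagator plus m^2
assert sp.simplify(k2 - (D1 + m2)) == 0 and sp.simplify(l2 - (D2 + m2)) == 0
print("numerator reduction consistent")
```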
Three loop $4$-point function renormalization.
==============================================
At this point we turn to our secondary aim which is to resolve the discrepancy in the renormalization associated with the generation of ${\cal O}_3$. This requires the complete determination of the $4$-point function divergence structure at three loops. Whilst the algorithm to do this is very similar to that of the $2$-point function there are several key differences. First, the $4$-point function divergences will be independent of the external momenta which means that they can be immediately nullified. The mass again protects against spurious infrared divergences. However, we cannot now take the Lorentz traces since that would prevent one from seeing the emergence of any $\Gamma_{(3)}^{\mu\nu\sigma} \otimes \Gamma_{(3) ~ \mu\nu\sigma}$ $\gamma$-matrix structure. Instead we have to retain $\gamma$-strings and also unentangle the internal momenta within them. Hence one decouples the Feynman diagrams into $\gamma$-strings and Lorentz tensor vacuum bubbles. At one and two loops the resulting tensor integrals for the whole integral can be straightforwardly reduced by noting that at one loop $$\int_k k^\mu k^\nu f_1(k^2) ~=~ \frac{\eta^{\mu\nu}}{d} \int_k k^2 f_1(k^2)$$ where $k$ is the sole loop momentum and at two loops $$\begin{aligned}
\int_{kl} k_1^{\mu_1} k_2^{\mu_2} k_3^{\mu_3} k_4^{\mu_4}
f_2(k,l) &=& \frac{1}{d(d-1)(d+2)} \nonumber \\
&& \int_{kl} \! \! f_2(k,l) \left[
\left[ (d+1) k_1.k_2 k_3.k_4 - k_1.k_3 k_2.k_4 - k_1.k_4 k_2.k_3 \right]
\eta^{\mu_1\mu_2} \eta^{\mu_3\mu_4} \right. \nonumber \\
&& \left. ~~~~~~~~+
\left[ (d+1) k_1.k_3 k_2.k_4 - k_1.k_2 k_3.k_4 - k_1.k_4 k_2.k_3 \right]
\eta^{\mu_1\mu_3} \eta^{\mu_2\mu_4} \right. \nonumber \\
&& \left. ~~~~~~~~+
\left[ (d+1) k_1.k_4 k_2.k_3 - k_1.k_2 k_3.k_4 - k_1.k_3 k_2.k_4 \right]
\eta^{\mu_1\mu_4} \eta^{\mu_2\mu_3} \right] \nonumber \\ \end{aligned}$$ where $k_i$ $\in$ $\{k,l\}$ and in the Lorentz tensor of the integrand all possible combinations of the two internal momenta are covered. The functions $f_i(\{k_i\})$ represent the various possible propagator combinations. For clarity we have included the dot of the scalar products explicitly. At three loops the situation is complicated by the observation that the extension of both these formulae gives $$\begin{aligned}
\int_{klq} k_1^{\mu_1} k_2^{\mu_2} k_3^{\mu_3} k_4^{\mu_4} k_5^{\mu_5}
k_6^{\mu_6} f_3(k,l,q)
&=& \frac{\eta^{\mu_1\mu_2} \eta^{\mu_3\mu_4} \eta^{\mu_5\mu_6}}
{d(d-1)(d-2)(d+2)(d+4)} \nonumber \\
&& \int_{klq} f_3(k,l,q) \left[ (d^2+3 d-2) k_1.k_2 k_3.k_4 k_5.k_6
\right. \nonumber \\
&& \left. ~~
- (d+2) k_1.k_2 k_3.k_5 k_6.k_4
- (d+2) k_1.k_2 k_3.k_6 k_4.k_5
\right. \nonumber \\
&& \left. ~~
- (d+2) k_1.k_3 k_2.k_4 k_5.k_6
+ 2 k_1.k_3 k_2.k_5 k_6.k_4
\right. \nonumber \\
&& \left. ~~
+ 2 k_1.k_3 k_2.k_6 k_4.k_5
- (d+2) k_1.k_4 k_2.k_3 k_5.k_6
\right. \nonumber \\
&& \left. ~~
+ 2 k_1.k_4 k_2.k_5 k_6.k_3
+ 2 k_1.k_4 k_2.k_6 k_3.k_5
\right. \nonumber \\
&& \left. ~~
+ 2 k_1.k_5 k_2.k_3 k_4.k_6
+ 2 k_1.k_5 k_2.k_4 k_6.k_3
\right. \nonumber \\
&& \left. ~~
- (d+2) k_1.k_5 k_2.k_6 k_3.k_4
+ 2 k_1.k_6 k_2.k_3 k_4.k_5
\right. \nonumber \\
&& \left. ~~
+ 2 k_1.k_6 k_2.k_4 k_5.k_3
- (d+2) k_1.k_6 k_2.k_5 k_3.k_4 \right]
\nonumber \\
&& ~+~
\mbox{$14$ similar terms}
\label{tensdec3}\end{aligned}$$ where $k_i$ $\in$ $\{k,l,q\}$. The full decomposition is clearly quite large. However, it is the appearance of the $1/(d-2)$ factor which is novel. In [@17; @18] the full set of three loop graphs in both massless and massive cases where a divergent $\Gamma_{(3)}^{\mu\nu\sigma} \otimes \Gamma_{(3) ~
\mu\nu\sigma}$ structure emerged, was noted. The sets of graphs appear to be the same. Though in [@17] it is not fully clear which graphs are the actual ladder graphs referred to. However, the seemingly [*finite*]{} graph of Figure $4$ was regarded as fully finite in [*all*]{} $\gamma$-string channels, [@18]. In our present reconsideration it transpires that within the integral of the graph of Figure $4$ there is a divergent contribution to the $\Gamma_{(3)}^{\mu\nu\sigma} \otimes \Gamma_{(3) ~ \mu\nu\sigma}$ channel but [*not*]{} for the $\Gamma_{(0)} \otimes \Gamma_{(0)}$ one. This derives from the pole $1/(d-2)$ in (\[tensdec3\]) producing the massive Benz integral corresponding to the final graph of Figure $3$. The key part is then $$\frac{1}{(d-2)}
\int_{klq} \frac{1}{[k^2-m^2] [l^2-m^2] [q^2-m^2] [(k-l)^2-m^2] [(k-q)^2-m^2]
[(l-q)^2-m^2]}
\label{betabenz}$$ where the actual integral itself is finite in two dimensions. It remains after repeated application of (\[propdec1\]) and (\[propdec2\]) in the scalar integrals of (\[tensdec3\]). However, to have the complete divergence structure the integral needs to be evaluated since it will contribute to $Z_{33}$. The remaining integrals with this $1/(d-2)$ pole in the $\Gamma_{(3)}^{\mu\nu\sigma} \otimes \Gamma_{(3) ~ \mu\nu\sigma}$ channel correspond to (\[betabenz\]) but with one or more propagators removed after application of (\[propdec1\]) and (\[propdec2\]). These can be evaluated from the three loop techniques discussed earlier. In [@18] this contribution, (\[betabenz\]), was overlooked since it was assumed that the parent integral with the internal momenta contracted was finite without noting the possibility of the $1/(d-2)$ factor deriving from the tensor decomposition. In relation to [@17] we can only comment that in the massless version of (\[betabenz\]) the integral will be zero. However, given that the method of calculating the $4$-point function in the massless case was totally different from that of [@18], a contribution analogous to (\[betabenz\]) could possibly arise elsewhere.
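The coefficients in these decompositions follow from contracting a general $\eta$-tensor ansatz and solving the resulting linear system; the rank six system produces the $1/(d(d-1)(d-2)(d+2)(d+4))$ prefactor displayed in (\[tensdec3\]), and hence the $1/(d-2)$ pole discussed above. As an illustration (our own sympy sketch, for the simpler two loop rank four case) the quoted coefficient of $\eta^{\mu_1\mu_2}\eta^{\mu_3\mu_4}$ is reproduced as follows.

```python
import sympy as sp

d, SA, SB, SC = sp.symbols('d S_A S_B S_C')
A, B, C = sp.symbols('A B C')

# ansatz: int k1 k2 k3 k4 f = A eta12 eta34 + B eta13 eta24 + C eta14 eta23.
# Contracting with eta_12 eta_34, eta_13 eta_24, eta_14 eta_23 and writing
# S_A = int k1.k2 k3.k4 f, S_B = int k1.k3 k2.k4 f, S_C = int k1.k4 k2.k3 f gives
eqs = [sp.Eq(d**2*A + d*B + d*C, SA),
       sp.Eq(d*A + d**2*B + d*C, SB),
       sp.Eq(d*A + d*B + d**2*C, SC)]
sol = sp.solve(eqs, [A, B, C])

target = ((d + 1)*SA - SB - SC)/(d*(d - 1)*(d + 2))
print(sp.simplify(sol[A] - target))     # 0: matches the two loop coefficient quoted above
```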
There remains the task now of evaluating the [*integral*]{} of (\[betabenz\]). Although finite it clearly cannot be reduced to any of the three loop master vacuum bubbles already discussed even using, say, integration by parts. Instead we have had to resort to the more extensive experience of four dimensional vacuum bubble diagrams and specifically the Benz graphs discussed in [@25]. To promote (\[betabenz\]) to four dimensions we exploit Tarasov’s observation of relating $d$-dimensional integrals to $(d+2)$-dimensional ones, [@26; @27]. Moreover, this is straightforward to do via the [Tarcer]{} package, [@28], written in [Mathematica]{} for the basic two loop self energy topology given in Figure $5$. Specifically one feature of [Tarcer]{} is that one can relate this two loop self energy graph in $d$-dimensions to that in $(d+2)$-dimensions. This is a subgraph of Figure $4$ with nullified external momenta and given that this is a three loop vacuum bubble, the final three loop integration measure can be rewritten as $$\int \frac{d^dk}{(2\pi)^d} ~=~ 2 \pi d \int \frac{d^{d+2}k}{(2\pi)^{d+2}}
\frac{1}{k^2}
\label{intmes}$$ in our conventions since the two loop subgraph will clearly be a function of $k^2$ only. From (\[intmes\]) a massless propagator will appear in the higher dimensional integral. Since all the lines of Figures $4$ and $5$ are massive and both final integrations involve functions of the square of the momentum, then we find the relation between the $d$-dimensional massive Benz graph and similar topologies in two dimensions higher is $$\begin{aligned}
&& \mbox{Be}(1,1,1,1,1,1,m^2,m^2,m^2,m^2,m^2,m^2,d) \nonumber \\
&& = -~ \frac{1}{12m^4} i \int_k J^2(k^2) ~-~ \frac{3}{4m^2}
i \int_k \frac{J^2(k^2)}{[k^2-m^2]} \nonumber \\
&& ~~~~+~ \frac{\pi d(d-1)(d-2)}{m^6} \left[
\mbox{Be}(1,1,1,1,1,1,m^2,m^2,m^2,m^2,m^2,m^2,d+2) \right. \nonumber \\
&& \left. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-~
\mbox{Be}(1,1,1,1,1,1,0,m^2,m^2,m^2,m^2,m^2,d+2) \right]
\label{dimreln}\end{aligned}$$ where we define $$\begin{aligned}
&& \mbox{Be}(\alpha,\beta,\gamma,\rho,\lambda,\theta,m_1^2,m_2^2,m_3^2,
m_4^2,m_5^2,m_6^2,d) \nonumber \\
&& =~ i^3 \int_{klq} \frac{1}{[k^2-m_1^2]^\alpha [l^2-m_2^2]^\beta
[q^2-m_3^2]^\gamma [(k-l)^2-m_4^2]^\rho [(k-q)^2-m_5^2]^\lambda
[(l-q)^2-m_6^2]^\theta} \nonumber \\ \end{aligned}$$ and emphasise that $\int_k$ indicates a $d$-dimensional integration. The key part is the piece which represents the difference in two Benz topologies in $(d+2)$-dimensions where one is completely massive and the other has one massless line. However, since these are multiplied by $(d-2)$ then in our $\epsilon$ expansion relative to two dimensions we note that the leading term of each is $O(\epsilon)$ meaning that $$\begin{aligned}
&& i^3 \int_{klq} \frac{1}{[k^2-m^2] [l^2-m^2] [q^2-m^2] [(k-l)^2-m^2]
[(k-q)^2-m^2] [(l-q)^2-m^2]} \nonumber \\
&& ~~~=~ -~ \frac{\zeta(3)}{192\pi^3m^6} ~+~ O(\epsilon) \end{aligned}$$ from (\[j2def\]) and (\[j21def\]). This is because whilst each of the two $(d+2)$-dimensional integrals are divergent in four dimensions due to the presence of a simple pole in the regularizing parameter, the difference in (\[dimreln\]) is finite and the residue is independent of the masses in either Benz topology, [@25].
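Two small numerical cross-checks are possible here (our own, in Python/mpmath, and independent of the [Tarcer]{} manipulations): the measure relation (\[intmes\]) can be tested on a radially symmetric Euclidean test function, and the first two terms of (\[dimreln\]) combined with the quoted finite parts of (\[j2def\]) and (\[j21def\]) reproduce the Benz value just displayed.

```python
from mpmath import mp, mpf, quad, gamma, pi, exp, zeta

mp.dps = 25

# (a) the measure relation (intmes), tested on the radially symmetric function exp(-k^2)
def surface(dim):
    return 2*pi**(dim/2)/gamma(dim/2)          # unit sphere area in dim dimensions

def lhs(dim):
    return surface(dim)/(2*pi)**dim * quad(lambda r: r**(dim - 1)*exp(-r**2), [0, mp.inf])

def rhs(dim):
    return 2*pi*dim * surface(dim + 2)/(2*pi)**(dim + 2) * \
           quad(lambda r: r**(dim + 1)*exp(-r**2)/r**2, [0, mp.inf])

print(lhs(mpf('1.7')), rhs(mpf('1.7')))        # agree for any d

# (b) first two terms of (dimreln) with the quoted finite parts of (j2def) and (j21def)
m2 = mpf(1)
intJ2 = -7*zeta(3)/(64*pi**3*m2)
intJ21 = 11*zeta(3)/(576*pi**3*m2**2)
be = -intJ2/(12*m2**2) - 3*intJ21/(4*m2)
print(be, -zeta(3)/(192*pi**3*m2**3))          # both reproduce the quoted Benz value
```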
With this observation all the ingredients are assembled to repeat the full three loop renormalization of the $4$-point function of (\[baregn\]). In [@7] only the $\Gamma_{(0)} \otimes \Gamma_{(0)}$ part was isolated but this was sufficient to deduce the $\beta$-function at three loops. It is satisfying to record that we have verified the previous three loop ${\overline{\mbox{MS}}}$ result of [@7; @8]. However, by contrast we find that a different renormalization constant from [@17] and [@18] emerges for $Z_{33}$. We find $$Z_{33} ~=~ \left[ \frac{\zeta(3)}{64} - \frac{1}{48} \right]
\frac{g^3}{\pi^3\epsilon} ~+~ O(g^4)$$ whence $$\beta_3(g) ~=~ \left[ \frac{3\zeta(3)}{64} ~-~ \frac{1}{16} \right]
\frac{g^3}{\pi^3} ~+~ O(g^4) ~.
\label{beta3}$$ Though there is universal agreement on the rational part of (\[beta3\]), [@17; @18], only the contribution from the diagram of Figure $5$ to the $\Gamma_{(3)}^{\mu\nu\sigma} \otimes \Gamma_{(3) ~ \mu\nu\sigma}$ channel produces the irrational piece thereby confirming the overall structure observed in [@17]. However, rather than finding that we produce one of the previous values for $Z_{33}$ we are in the seemingly unfortunate position of finding a new alternative. To determine which is correct and consistent it will transpire that the four loop mass anomalous dimension is the correct testing ground for this in the context of (\[truerge\]).
Four loop vacuum bubbles.
=========================
In this section we return to our initial aim and summarize the evaluation of the underlying four loop vacuum bubbles required for the mass anomalous dimension. For the $2$-point function there are $18$ distinct topologies and $36$ Feynman diagrams to be considered. Of these topologies $14$ involve snail insertions in one way or another and hence their determination is in effect relegated to the straightforward extension of the three loop topology discussion. One effect of a snail is to produce two propagators on a line of a three loop graph but this can be reproduced by differentiating that line with respect to $m^2$. This is also a reason why the three loop vacuum bubbles were required to be evaluated to the finite part exactly or left in terms of $\Delta(0)$ and other known integrals whose $\epsilon$ expansion could be substituted when required, if at all. Several topologies contributing to the $2$-point function, however, have a more demanding evaluation. These are illustrated in Figure $6$ and we concentrate on these for the main part. Essentially the main complication now derives from rewriting the scalar products of internal momenta in terms of the propagator structure. For all the integrals which result we used several interconnected techniques.
First was the use of the [Tarcer]{} package, [@28], again, particularly for the third and fourth graphs of Figure $6$. Clearly the last graph contains the two loop self-energy topology of Figure $5$ as a subgraph and the third has a similar two loop subgraph but with one line removed. Unlike the properties of [Tarcer]{} we described previously, the feature exploited in this instance is the ability to relate diagrams with different powers of the propagators in Figure $5$ to that with unit power. Further, [Tarcer]{} reduces integrals involving powers of the scalar products $kl$, $kp$ and $lp$ where $k$ and $l$ are internal and $p$ is the external momentum in Figure $5$. The point is that the Lorentz tensor reduction for these situations can only be performed by this route. Any one loop subgraph of Figure $5$ will involve three external legs and the invariant decomposition in this case is too intricate. By contrast where possible we did exploit the Lorentz structure of subgraphs with [*one*]{} internal momentum flowing through it which can be regarded as a $2$-point function external momentum for that subgraph. Then integrals can be rewritten using results such as $$\begin{aligned}
i \int_l \frac{l^\mu l^\nu}{[l^2-m^2][(k-l)^2-m^2]} &=& \frac{1}{(d-1)}
i \int_l \frac{1}{[l^2-m^2][(k-l)^2-m^2]} \nonumber \\
&& ~~~~~~~~~~~~~\times \! \left[ \eta^{\mu\nu} \left( l^2
- \frac{(kl)^2}{k^2} \right) - \frac{k^\mu k^\nu}{k^2} \left( l^2
- d \frac{(kl)^2}{k^2} \right) \right] .
\label{lordec} \end{aligned}$$ The outcome of the [Tarcer]{} implementation is to reduce these more complicated tensor integrals to a set of master scalar four loop vacuum bubbles since the resulting combination of internal momenta allows for the repeated application of (\[propdec1\]) and (\[propdec2\]).
The use of (\[lordec\]) and [Tarcer]{} though may appear to introduce potential infrared difficulties. However, it transpires that in the full sum of all contributing pieces to a Feynman graph it can be checked that no integral retains an unprotected factor of $1/k^2$ which would give an infrared divergence upon integrating over the internal momentum $k$. For one instance checking this proved to be a tedious non-trivial exercise which we document for completeness. In all bar the second graph of Figure $6$ the following combination of integrals emerges $$V_\Delta ~=~ i \int_k \left[ J(k^2) ~-~ J(0) \right] \frac{\Delta(k)}{k^2} ~.$$ Clearly each could be infrared divergent but the above combination always appears. Defining $$K_\mu(p) ~=~ i \int_k \frac{k_\mu}{[k^2-m^2]^2[(k-p)^2 - m^2]}
\label{vdeldef}$$ then one can show $$p^\mu K_\mu(p) ~=~ 2 m^2 K(p^2) ~-~ \frac{1}{2} (d-4) J(p^2) ~=~
\frac{1}{2} \left[ p^2 K(p^2) ~+~ J(p^2) ~-~ J(0) \right] ~.$$ Using this and integration by parts in (\[vdeldef\]) one finds $$V_\Delta ~=~ (d-3) i \int_k \frac{1}{k^2} J(k^2) \Delta(k^2) ~-~ 2 i \int_k
\Delta(k) K(k^2) ~+~ i^2 \int_{kl}
\frac{(k^2-m^2) J(k^2) J(l^2)}{l^2[(k-l)^2-m^2]^2} ~.
\label{vdelman}$$ The final integral can be reduced using [Tarcer]{} if one regards the $k$ momentum as external to the self-energy graph of Figure $5$ and $J(l^2)$ is replaced by the Feynman integral of (\[jdef\]). Consequently, [Tarcer]{} produces $$\begin{aligned}
i \int_{l} \frac{J(l^2)}{l^2[(k-l)^2-m^2]^2} &=&
\frac{(d-2)^2(d-4)I^2}{2(d-3)m^2[k^2-m^2]^2} ~-~
\frac{(d-2)(d-3)}{2m^2[k^2-m^2]} i \int_l \frac{1}{l^2[(k-l)^2-m^2]}
\nonumber \\
&& +~ \frac{(d-4)(3d-8)}{[k^2-m^2]^2} i^2 \int_{lq}
\frac{1}{[l^2-m^2][(k-q)^2-m^2][(l-q)^2-m^2]} \nonumber \\
&& +~ \frac{[(d-2)[k^2-m^2]-8(d-4)m^2]}{[k^2-m^2]^2} \nonumber \\
&& ~~~~ \times ~ i^2 \int_{lq} \frac{1}{[l^2-m^2]^2 [(k-q)^2-m^2]
[(l-q)^2-m^2]} ~.
\label{vdeltar} \end{aligned}$$ The benefit of rearranging the two loop integral of the left hand side is to isolate the potential infrared singularity into a simple term on the right hand side. Moreover, the appearance of powers of $1/[k^2-m^2]$ will lead to simplifications when substituted back into the expression for $V_\Delta$ and the term with the singularity will actually combine with the first term of (\[vdelman\]) to produce $V_\Delta$ but with a factor of $(d-3)$. Hence, evaluating the remaining integrals of (\[vdeltar\]) in the context of (\[vdelman\]) one arrives at the expression $$\begin{aligned}
V_\Delta &=& -~ i \int_k \Delta(k) K(k^2) ~-~
\frac{(d-2)^2I^2\Delta(0)}{2(d-3)m^2} \nonumber \\
&& -~ (3d-8) i \int_k \frac{J(k^2)\Delta(k)}{[k^2-m^2]} ~+~ 8 m^2 i^2 \int_{kl}
\frac{J(k)K(l)}{[k^2-m^2] [(k-l)^2-m^2]}\end{aligned}$$ which has no potential infrared singular term. Moreover, each term of the right side of this is ultraviolet finite in two dimensions. So, in fact, when the combination $V_\Delta$ appears in our computation, it can actually be dropped as there is no contribution to the renormalization of the mass at four loops.
Having completed the tensor reduction of the scalar propagators, all that remains is the evaluation of a set of divergent four loop master integrals akin to those discussed earlier. Most of these are elementary given the results of (\[cut1\]), (\[cut2\]) and (\[cut3\]) and the observation that as all integrals are infrared finite then one can ignore those four loop ones which are clearly ultraviolet finite by the usual counting rules. Though one integral is worth recording and that is $$i \int_k (k^2)^2 J^3(k^2) ~=~ 2 I^4 ~+~ \frac{(7d-13)m^2I}{(2d-5)} \, i
\int_k J^2(k^2) ~+~ \frac{2(d-1)(d-3)}{(2d-5)(d-2)} m^4 \, i \int_k J^3(k^2)$$ because in the determination of this relation a pole in $(d-2)$ emerges in the standard $d$-dimensional manipulations such as differentiating the original integral with respect to $m^2$ and using (\[cut1\]). This pole gives rise to a problem similar to that discussed for (\[betabenz\]) but with a simpler resolution since we merely apply the technique used to deduce (\[j2def\]) and (\[j21def\]), to find $$i \int_k J^3(k^2) ~=~ \frac{3\zeta(3)}{256\pi^4m^4} ~+~ O(\epsilon) ~.$$ The need to evaluate $i\int_k J^2(k^2)$ to the finite part as well is also illustrated by this equation.
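The finite part of $i\int_k J^3(k^2)$ can be checked with the same $\theta$ parametrization used at three loops; the sketch below (our own numerical cross-check) confirms the quoted value.

```python
from mpmath import mp, mpf, quad, zeta, pi, sinh

mp.dps = 25
m2 = mpf(1)

# same change of variables as before: k^2 = 4 m^2 sinh^2(theta/2), J_E -> (theta/sinh theta) J(0)
theta_int = quad(lambda t: t**3/sinh(t)**2, [0, mp.inf])
print(theta_int, mpf(3)/2*zeta(3))                  # the standard integral: 3 zeta(3)/2

J0 = -1/(4*pi*m2)                                   # J(0) at d = 2
print(-(m2/(2*pi))*J0**3*theta_int)                 # i int_k J^3(k^2)
print(3*zeta(3)/(256*pi**4*m2**2))                  # the value quoted above
```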
This completes the discussion of the construction of the relevant basic Feynman integrals. For each of the eighteen topologies a [Form]{} module was created within which the algorithm to break the original Feynman graphs up into their basic components was encoded. The tedious identification with the above results together with the remaining more elementary ones were also contained in each module. Finally, prior to summing all the results from the $36$ four loop diagrams, the $\epsilon$ expansions of $I$ and the other integrals were evaluated to the appropriate order in $\epsilon$. The resulting sum produced the divergent part of the mass component of the $2$-point function to the simple pole in $\epsilon$ as a function of the bare parameters.
Four loop renormalization.
==========================
The final piece of the calculation rests in determining the overall renormalization constant $Z_m$ at four loops in ${\overline{\mbox{MS}}}$. However, prior to this we must consider the full theory. To this point we have tacitly assumed that only the basic $4$-point vertex of (\[baregn\]) is responsible for all the Feynman diagrams we have discussed. The presence of the generated evanescent operator in (\[rengn\]) needs to be included. As noted earlier since the operator appears with a coupling $g^4$ the effect of this operator cannot arise before four loops. Therefore, we now have to include the additional graph of Figure $7$ where the circle with a cross in it denotes the insertion of the operator ${\cal O}_3$ with its associated renormalization constant $Z_{33}$. The integration routine to determine its contribution is the same as that for the original vertex except that one has to first replace the $\Gamma_{(3)}^{\mu\nu\sigma}$ matrices by the corresponding string of ordinary $\gamma$-matrices.
With this additional graph included the overall renormalization constant is extracted using the standard method for automatic Feynman diagram computations developed in [@29]. Briefly one computes the Green’s function of interest as a function of all the bare parameters such as the coupling constant and the mass. Then the renormalized parameters are introduced by the rescaling defined by the renormalization constants. In the present context these are the renormalization constants leading to the [*naive*]{} anomalous dimensions as defined in (\[rengn\]) and (\[rencon\]). This rescaling in effect reproduces the counterterms to remove subgraph divergences. Moreover the Green’s function is multiplied by the associated renormalization constant which in our case and conventions is $Z_\psi Z_m$. As the former is already known, [@9], then the divergences which remain in the $2$-point function are absorbed by the unknown pieces of $Z_m$. We recall that at four loops the anomalous dimension of [@9] corresponds to the naive anomalous dimension $\tilde{\gamma}(g)$ since there is no contribution from the graph of Figure $7$ in the wave function channel. Therefore, having followed this procedure we find the naive mass anomalous dimension in ${\overline{\mbox{MS}}}$ is $$\begin{aligned}
\tilde{\gamma}_m(g) &=& -~ (2N-1) \frac{g}{2\pi} ~+~ (2N-1)
\frac{g^2}{8\pi^2} ~+~ (4N-3)(2N-1) \frac{g^3}{32\pi^3} \nonumber \\
&& +~ \left[ ( 48 N^3 - 384 N^2 + 492 N - 138 ) \zeta(3) - 40 N^3 - 72 N^2
+ 160 N - 81 \right] \frac{g^4}{384\pi^4} \nonumber \\
&& +~ O(g^5) ~.
\label{naigam}\end{aligned}$$ At this stage several comments are necessary. First, there are several checks on the underlying renormalization constant itself. Whilst the evanescent operator issue arises at four loops, it will manifest itself in the simple pole in $\epsilon$ of $Z_m$. Therefore, the quartic, triple and double poles in $\epsilon$ are in fact already predetermined by the structure of previous loop order poles from the renormalization group equation. For (\[naigam\]) we have verified that this is in fact correct. One other useful check was the explicit cancellation of divergences of the form $\Delta(0)/\epsilon^n$ for $n$ $=$ $1$ and $2$ at four loops. This is non-trivial since, for instance, $\Delta(0)$ arises at three loops both associated with a simple pole in $\epsilon$ and in the finite part. Therefore, one needs to write $\Delta(0)$ as a formal expansion in powers of $\epsilon$ prior to the rescaling of the bare quantities. This is because the $O(1)$ piece at three loops will be multiplied by $1/\epsilon$ poles. Moreover, since $\Delta(0)$ has dependence $(m^2)^{d-3}$, then this has to be explicitly factored off since this mass is bare and needs to be renormalized too. Once written in this way we have checked that the poles in $\epsilon$ involving the $O(1)$ and $O(\epsilon)$ residues stemming from the $\epsilon$ expansion of $\Delta(0)$ do indeed cancel completely.
Again one can partially check part of (\[naigam\]) from another point of view. In [@9] the structure of the mass anomalous dimension has been given in the large $N$ expansion to $O(1/N^2)$ based on the results from a series of articles [@30; @31; @32; @33; @34; @35; @36]. Again at this level of expansion the evanescent operator is not manifested and so the $O(1/N^2)$ coefficients of the mass anomalous dimension which are given there at four and higher loops in fact equate to those of the naive mass anomalous dimension $\tilde{\gamma}_m(g)$. In other words if it were possible to compute the critical exponent corresponding to the mass anomalous dimension at the $d$-dimensional fixed point of the theory at the next order in large $N$, $O(1/N^3)$, then unless the effect of the evanescent operator could be included, it would not correspond to the true mass anomalous dimension, [@9]. From the expression given in [@9] we note that when the same convention is used, that part of (\[naigam\]) at four loops which corresponds to the $O(1/N^2)$ piece agrees precisely with [@9]. This is a reassuring cross-check on a significant part of our four loop computation since, within the computer setup, one can examine the $N$-dependence multiplying all the basic integrals which we have had to compute for all topologies. The vast majority are at least touched by a quadratic or cubic in $N$ which are related respectively to the $O(1/N^2)$ or $O(1/N)$ large $N$ piece already determined in [@9]. For the small number of remaining pieces which have linear factors in $N$ we have been careful in evaluating the corresponding, though invariably simple, vacuum bubbles. Therefore, we are confident that (\[naigam\]) is correct.
One clear problem remains which is related to the structure of the expression (\[naigam\]). Unlike the previous orders the four loop part does not vanish when $N$ $=$ ${\mbox{\small{$\frac{1}{2}$}}}$ which corresponds to the free field theory. Moreover, it transpires that of the eighteen underlying topologies only the graphs for one do not vanish for this value of $N$. (Though actually the parts from the second and third graphs of Figure $6$ cancel against each other, which is similar to what occurs at three loops for analogous graphs.) The topology which gives a contribution for $N$ $=$ ${\mbox{\small{$\frac{1}{2}$}}}$ is the final graph of Figure $6$. However, given our discussion in several places concerning the evanescent operator, the resolution is clearly straightforward. More concretely one can see the evidence for this if one evaluates (\[naigam\]) at $N$ $=$ ${\mbox{\small{$\frac{1}{2}$}}}$ to find $$\left. \frac{}{} \tilde{\gamma}_m(g) \right|_{N={\mbox{\small{$\frac{1}{2}$}}}} ~=~ [ 3 \zeta(3) - 4 ]
\frac{g^4}{64\pi^4} ~+~ O(g^5) ~.
\label{gamdisc}$$ This is the piece which needs to be cancelled in order to have a mass dimension consistent with a free field theory. Indeed this is the relative combination of rationals and $\zeta(3)$ which our three loop $4$-point function renormalization reevaluation produced. Therefore, using (\[truerge\]) and (\[beta3\]) we can derive the true mass anomalous dimension as $$\begin{aligned}
\gamma_m(g) &=& -~ (2N-1) \frac{g}{2\pi} ~+~ (2N-1) \frac{g^2}{8\pi^2} ~+~
(4N-3)(2N-1) \frac{g^3}{32\pi^3} \nonumber \\
&& +~ \left[ 12 (2N-13)(N-1) \zeta(3) - 20N^2 - 46N + 57 \right] (2N-1)
\frac{g^4}{384\pi^4} \nonumber \\
&& +~ O(g^5) ~.
\label{truegam} \end{aligned}$$ Clearly this has the correct expected $N$ $=$ ${\mbox{\small{$\frac{1}{2}$}}}$ property and given our earlier checks on (\[naigam\]) we will regard (\[truegam\]) as the completion of our original aim. Also, it is worth stressing that the discrepancy in the $4$-point function renormalization has now been crucially resolved simultaneously. It turns out that neither of the previous expressions for $\beta_3(g)$, [@17; @18], could be correct to preserve the vanishing of $\gamma_m(g)$ in the free field case. So we can regard this mass anomalous dimension calculation as also a non-trivial check on the full [*three*]{} loop ${\overline{\mbox{MS}}}$ renormalization.
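The bookkeeping behind (\[truegam\]) is easily verified symbolically; the following sympy sketch (our own consistency check of the quoted series, with $\zeta(3)$ treated as a symbol) confirms that (\[naigam\]) plus $\rho^{(3)}_m(g)\beta_3(g)$ reproduces (\[truegam\]) for all $N$ at $O(g^4)$, that (\[truegam\]) vanishes at $N$ $=$ ${\mbox{\small{$\frac{1}{2}$}}}$, and that (\[naigam\]) at $N$ $=$ ${\mbox{\small{$\frac{1}{2}$}}}$ reproduces (\[gamdisc\]).

```python
import sympy as sp

g, N, z3 = sp.symbols('g N zeta3')

lower = (-(2*N - 1)*g/(2*sp.pi) + (2*N - 1)*g**2/(8*sp.pi**2)
         + (4*N - 3)*(2*N - 1)*g**3/(32*sp.pi**3))

# naive mass anomalous dimension (naigam) and true one (truegam), both to O(g^4)
naive = lower + ((48*N**3 - 384*N**2 + 492*N - 138)*z3
                 - 40*N**3 - 72*N**2 + 160*N - 81)*g**4/(384*sp.pi**4)
true = lower + (12*(2*N - 13)*(N - 1)*z3
                - 20*N**2 - 46*N + 57)*(2*N - 1)*g**4/(384*sp.pi**4)

# projection term rho_m^(3)(g) beta_3(g) at leading order, from the quoted expansions
correction = (-g/sp.pi)*(sp.Rational(3, 64)*z3 - sp.Rational(1, 16))*g**3/sp.pi**3

print(sp.simplify(naive + correction - true))                     # 0 for all N
print(sp.simplify(true.subs(N, sp.Rational(1, 2))))               # 0: free field limit
print(sp.simplify(naive.subs(N, sp.Rational(1, 2))
                  - (3*z3 - 4)*g**4/(64*sp.pi**4)))               # 0: reproduces (gamdisc)
```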
Discussion.
===========
We have completed the four loop renormalization of the mass anomalous dimension of the Gross-Neveu model in the ${\overline{\mbox{MS}}}$ scheme. Despite the lack of multiplicative renormalizability when the Lagrangian is regularized dimensionally, it has been possible to compute an expression which passes all possible internal checks. Not least of these is the correct implementation of the projection formula formalism of [@13; @14] which has been justified by the consistency with the free field case. Concerning this the previous attempts to deduce $\beta_3(g)$ appear to indicate that the only approach which is truly reliable for renormalizing the model is the one where there is a non-zero mass. This seems to be the conclusion one must draw from the origin of the necessary $\zeta(3)$ part missing from [@17] required to balance the discrepancy of (\[gamdisc\]). Given these remarks one possible extension would now be to repeat the derivation of the mass anomalous dimension at four loops in other $4$-fermi models in two dimensions. Whilst considering the most general possible interactions involving ${\cal O}_0$, ${\cal O}_1$ and ${\cal O}_2$ of [@13; @14] would perhaps be too ambitious, there is the interesting case of the non-abelian Thirring model, [@37; @38]. The seed interaction involves ${\cal O}_1$ but includes colour group generators too. It has been renormalized at three loops in ${\overline{\mbox{MS}}}$ in [@18] and the four loop wave function is also known, [@10]. Though in light of our comments on the $4$-point function in the Gross-Neveu model, the corresponding $4$-point function renormalization would clearly need to be reconsidered to deduce the correct evanescent operator $\beta$-functions. One motivation for determining the mass anomalous dimension in the non-abelian Thirring model would be to examine the colour group Casimir structure of the final expression since, given the similarity with QCD, it is thought that it should involve the same structures as the corresponding expression for the quark mass anomalous dimension, [@39; @40]. This was the case for the wave function, [@10].
[**Acknowledgements.**]{} The author thanks the Max Planck Institute for the Physics of Complex Systems, Dresden, Germany where part of this work was carried out. Also the author is grateful to Dr R. Mertig for assistance with setting up [Tarcer]{}.
[99]{} D. Gross & A. Neveu, Phys. Rev. [**D10**]{} (1974), 3235. A.B. Zamolodchikov & A.B. Zamolodchikov, Ann. Phys. [**120**]{} (1979), 253. A.B. Zamolodchikov & A.B. Zamolodchikov, Nucl. Phys. [**B133**]{} (1978), 525. P. Forgacs, F. Niedermayer & P. Weisz, Nucl. Phys. [**B367**]{} (1991), 123. B.N. Shalaev, Phys. Rept. [**237**]{} (1994), 129. W. Wetzel, Phys. Lett. [**B153**]{} (1985), 297. J.A. Gracey, Nucl. Phys. [**B367**]{} (1991), 657. C. Luperini & P. Rossi, Ann. Phys. [**212**]{} (1991), 371. N.A. Kivel, A.S. Stepanenko & A.N. Vasil’ev, Nucl. Phys. [ **B424**]{} (1994), 619. D.B. Ali & J.A. Gracey, Nucl. Phys. [**B605**]{} (2001), 337. J.A. Gracey, Nucl. Phys. [**B341**]{} (1990), 403. J.A.M. Vermaseren, math-ph/0010025. A. Bondi, G. Curci, G. Paffuti & P. Rossi, Ann. Phys. [**199**]{} (1990), 268. A. Bondi, G. Curci, G. Paffuti & P. Rossi, Phys. Lett. [**B216**]{} (1989), 349. A.N. Vasil’ev, M.I. Vyazovskii, S.É. Derkachov & N.A. Kivel, Theor. Math. Phys. [**107**]{} (1996), 441. A.N. Vasil’ev, M.I. Vyazovskii, S.É. Derkachov & N.A. Kivel, Theor. Math. Phys. [**107**]{} (1996), 359. A.N. Vasil’ev & M.I. Vyazovskii, Theor. Math. Phys. [**113**]{} (1997), 1277. J.F. Bennett & J.A. Gracey, Nucl. Phys. [**B563**]{} (1999), 390. M.J. Dugan & B. Grinstein, Phys. Lett. [**B256**]{} (1991), 239. S. Herrlich & U. Nierste, Nucl. Phys. [**B455**]{} (1995), 39. W. Zimmermann, Ann. Phys. [**77**]{} (1973), 536. W. Zimmermann, Ann. Phys. [**77**]{} (1973), 570. P. Nogueira, J. Comput. Phys. [**105**]{} (1993), 406. M. Steinhauser, Comput. Phys. Commun. [**134**]{} (2001), 335. D.J. Broadhurst, Eur. Phys. J. [**C8**]{} (1999), 311. O.V. Tarasov, Phys. Rev. [**D54**]{} (1996), 6479. O.V. Tarasov, Nucl. Phys. [**B502**]{} (1997), 455. R. Mertig & R. Scharf, Comput. Phys. Commun. [**111**]{} (1998), 265. S.A. Larin & J.A.M. Vermaseren, Phys. Lett. [**B303**]{} (1993), 334. S.É. Derkachov, N.A. Kivel, A.S. Stepanenko & A.N. Vasil’ev, hep-th/9302034. A.N. Vasil’ev, S.É. Derkachov, N.A. Kivel & A.S. Stepanenko, Theor. Math. Phys. [**94**]{} (1993), 179. A.N. Vasil’ev, & A.S. Stepanenko, Theor. Math. Phys. [**97**]{} (1993), 364. J.A. Gracey, Phys. Lett. [**B297**]{} (1992), 293. J.A. Gracey, Int. J. Mod. Phys. [**A6**]{} (1991), 395, 2755(E). J.A. Gracey, Int. J. Mod. Phys. [**A9**]{} (1994), 567. J.A. Gracey, Int. J. Mod. Phys. [**A9**]{} (1994), 727. R. Dashen & Y. Frishman, Phys. Lett. [**B46**]{} (1973), 439. R. Dashen & Y. Frishman, Phys. Rev. [**D11**]{} (1975), 2781. J.A.M. Vermaseren, S.A. Larin & T. van Ritbergen, Phys. Lett. [**B405**]{} (1997), 327 T. van Ritbergen, J.A.M. Vermaseren & S.A. Larin, Phys. Lett. [**B400**]{} (1997), 379
---
author:
- Neri Merhav
title: 'Codeword or Noise? Exact Random Coding Exponents for Slotted Asynchronism[^1]'
---
Department of Electrical Engineering\
Technion - Israel Institute of Technology\
Technion City, Haifa 32000, ISRAEL\
E–mail: [[email protected]]{}\
[**Abstract**]{}
We consider the problem of slotted asynchronous coded communication, where in each time frame (slot), the transmitter is either silent or transmits a codeword from a given (randomly selected) codebook. The task of the decoder is to decide whether transmission has taken place, and if so, to decode the message. We derive the optimum detection/decoding rule in the sense of the best trade-off among the probabilities of decoding error, false alarm, and misdetection. For this detection/decoding rule, we then derive single–letter characterizations of the exact exponential rates of these three probabilities for the average code in the ensemble.\
[**Index Terms:**]{} Synchronization, error exponent, false alarm, misdetection, random coding.
Introduction
============
The problem of synchronization has been a long–standing, important issue in communication throughout several decades (see, e.g., [@Barker53], [@GDRTS63], [@Franks80], [@Massey72], [@Scholtz80], [@TCW08], [@TKW06], [@Wang10], [@WCCW11] and references therein, for a non–exhaustive sample of earlier works).
The general problem setting under consideration allows the transmitter to send messages only part of the time, and to be ‘silent’ (non–transmitting) when it has no messages ready to be conveyed. The receiver then has to be able to reliably detect the existence of the message, locate its starting time instant, and decode it. The traditional approach has been to separate the problems of synchronization and coding/decoding, where in the former, a special pattern of symbols (synchronization word) is used to mark the beginning of a message transmission. This transmission of a synchronization word is, however, an undesired overhead.
Following [@Wang10] and [@WCCW11], in this work, we treat the synchronization and coding jointly and we adopt the simplified model of [*slotted*]{} communication. According to this model, a transmission can start only at time instants that are integer multiples of the slot length, which is also the block length. Thus, in each slot (or block), the transmitter is either entirely silent, or it transmits a codeword corresponding to one of $M$ possible messages. In the silent mode, it is assumed that the transmitter repetitively feeds the channel by a special channel input symbol denoted by ‘$0$’ (indeed, in the case of a continuous input alphabet, it is natural to assign a zero input signal), and then the channel output vector is thought of as “pure noise.” The decoder in turn has to decide whether a message has been sent or the received channel output vector is pure noise. In case it decides in favor of the former, it then has to decode the message.
In [@Wang10] and [@WCCW11], three figures of merit were defined in order to judge performance: (i) the probability of [*false alarm*]{} (FA) – i.e., deciding that a message has been sent when, actually, the transmitter was silent and the channel output was pure noise, (ii) the probability of [*misdetection*]{} (MD) – that is, deciding that the transmitter was silent when it actually transmitted some message, and (iii) the probability of [*decoding error*]{} (DE) – namely, not deciding on the correct message sent. Wang [@Wang10] and Wang [*et al.*]{} [@WCCW11] have posed the problem of characterizing the best achievable region of the error exponents associated with these three probabilities for a given discrete memoryless channel (DMC). It was stated in [@WCCW11] that this general problem is open, and so, the focus both in [@Wang10] and [@WCCW11] was directed to the narrower problem of trading off the FA exponent and the MD exponent when the DE exponent constraint is completely relaxed, that is, when there is no demand on the exponential decay rate of the DE probability. Upper and lower bounds on the maximum achievable FA exponent for a given MD exponent were derived in these works. In the extreme case where the MD exponent constraint is omitted (set to zero), these bounds coincide, and so, the characterization of the best achievable FA exponent is exact.
In this paper, we adopt the same problem setting of slotted asynchronous communication as in [@Wang10] and [@WCCW11]. We first derive, for a given code, the optimum detection–decoding rule that minimizes the DE probability subject to given constraints on the FA and the MD probabilities. This detection–decoding rule turns out to be completely different from the one in the achievability parts of [@Wang10] and [@WCCW11]. In particular, denoting the codewords by $\{{\mbox{\boldmath $x$}}_m\}$, the channel output vector by ${\mbox{\boldmath $y$}}$ (all of length $n$), and the channel conditional probability by $W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)$, then according to this rule, a transmission is detected iff $$e^{n\alpha}\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)+\max_{1\le m\le M}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\ge
e^{n\beta}W({\mbox{\boldmath $y$}}|0^n)$$ where $\alpha$ and $\beta$ are chosen to meet the MD and FA constraints. Of course, whenever the received ${\mbox{\boldmath $y$}}$ passes this test, the maximum likelihood (ML) decoder is applied, assuming that all messages are equiprobable [*a-priori*]{}. The performance of this optimum detector/decoder is analyzed under the random coding regime of fixed composition codes, and the achievable trade-off between the three error exponents is given in full generality, that is, not merely in the margin where at least one of the exponents vanishes. It should be pointed out that our analysis technique, which is based on type class enumeration (see, e.g., [@Merhav09], [@SBM11] and references therein), provides the [*exact*]{} random coding exponents, not just bounds. These relationships between the random coding exponents and the parameters $\alpha$ and $\beta$ can, in principle, be inverted (in a certain domain) in order to find the assignments of $\alpha$ and $\beta$ needed to satisfy given constraints on the exponents of the FA and the MD probabilities. For the sake of fairness, on the other hand, it should also be made clear that since we consider only the random coding regime, these are merely achievability results, with no converse bounds pertaining to optimal codes.
The outline of the paper is as follows. In Section 2, we establish some notation conventions, provide some preliminaries, and finally, formulate the problem. In Section 3, we derive the optimum detector/decoder and discuss some of its properties. In Section 4, we present our main theorem, which is about single–letter formulas for the various error exponents. Finally, in Section 5, we prove this theorem.
Notation Conventions, Preliminaries and Problem Formulation
===========================================================
Notation Conventions and Preliminaries
--------------------------------------
Throughout the paper, random variables will be denoted by capital letters, specific values they may take will be denoted by the corresponding lower case letters, and their alphabets, similarly as other sets, will be denoted by calligraphic letters. Random vectors and their realizations will be denoted, respectively, by capital letters and the corresponding lower case letters, both in the bold face font. Their alphabets will be superscripted by their dimensions. For example, the random vector ${\mbox{\boldmath $X$}}=(X_1,\ldots,X_n)$, ($n$ – positive integer) may take a specific vector value ${\mbox{\boldmath $x$}}=(x_1,\ldots,x_n)$ in ${{\cal X}}^n$, the $n$–th order Cartesian power of ${{\cal X}}$, which is the alphabet of each component of this vector.
For a given vector ${\mbox{\boldmath $x$}}$, let $\hat{Q}_X$ denote[^2] the empirical distribution, that is, the vector $\{\hat{Q}_X(x),~x\in{{\cal X}}\}$, where $\hat{Q}_X(x)$ is the relative frequency of the letter $x$ in the vector ${\mbox{\boldmath $x$}}$. Let ${{\cal T}}_P$ denote the type class associated with $P$, that is, the set of all sequences $\{{\mbox{\boldmath $x$}}\}$ for which ${{\hat{Q}}}_X=P$. Similarly, for a pair of vectors $({\mbox{\boldmath $x$}},{\mbox{\boldmath $y$}})$, the empirical joint distribution will be denoted by $\hat{Q}_{XY}$ or simply ${{\hat{Q}}}$ for short. Conditional empirical distributions will be denoted by ${{\hat{Q}}}_{X|Y}$ and ${{\hat{Q}}}_{Y|X}$, the $y$–marginal by ${{\hat{Q}}}_Y$, etc. Accordingly, the empirical mutual information induced by $({\mbox{\boldmath $x$}},{\mbox{\boldmath $y$}})$ will be denoted by $I({{\hat{Q}}}_{XY})$ or $I({{\hat{Q}}})$, the divergence between ${{\hat{Q}}}_X$ and $P=\{P(x),~x\in{{\cal X}}\}$ – by ${{\cal D}}({{\hat{Q}}}_X\|P)$, and the conditional divergence between the empirical conditional distribution ${{\hat{Q}}}_{Y|X}$ and the channel $W=\{W(y|x),~x\in{{\cal X}},~y\in{{\cal Y}}\}$, will be denoted by ${{\cal D}}({{\hat{Q}}}_{Y|X}\|W|{{\hat{Q}}}_X)$, that is, $${{\cal D}}({{\hat{Q}}}_{Y|X}\|W|{{\hat{Q}}}_X)=\sum_{x\in{{\cal X}}}{{\hat{Q}}}_X(x)\sum_{y\in{{\cal Y}}}{{\hat{Q}}}_{Y|X}(y|x)\log
\frac{{{\hat{Q}}}_{Y|X}(y|x)}{W(y|x)},$$ and so on. The joint distribution induced by ${{\hat{Q}}}_X$ and ${{\hat{Q}}}_{Y|X}$ will be denoted by ${{\hat{Q}}}_X\times{{\hat{Q}}}_{Y|X}$, and a similar notation will be used when the roles of $X$ and $Y$ are switched. The marginal of $X$, induced by ${{\hat{Q}}}_Y$ and ${{\hat{Q}}}_{X|Y}$ will be denoted by $({{\hat{Q}}}_Y\times{{\hat{Q}}}_{X|Y})_X$, and so on. Similar notation conventions will apply, of course, to generic distributions $Q_{XY}$, $Q_X$, $Q_Y$, $Q_{Y|X}$, and $Q_{X|Y}$, which are not necessarily empirical distributions (without “hats”).
The expectation operator will be denoted by ${\mbox{\boldmath $E$}}\{\cdot\}$. Whenever there is room for ambiguity, the underlying probability distribution will appear as a subscript, e.g., ${\mbox{\boldmath $E$}}_Q\{\cdot\}$. Logarithms and exponents will be understood to be taken to the natural base unless specified otherwise. The indicator function will be denoted by ${{\cal I}}(\cdot)$. Sets will normally be denoted by calligraphic letters. The complement of a set ${{\cal A}}$ will be denoted by $\overline{{{\cal A}}}$. The notation $[t]_+$ will stand for $\max\{t,0\}$. For two positive sequences, $\{a_n\}$ and $\{b_n\}$, the notation $a_n{\stackrel{\cdot} {=}}b_n$ will mean asymptotic equivalence in the exponential scale, that is, $\lim_{n\to\infty}\frac{1}{n}\log(\frac{a_n}{b_n})=0$. Similarly, $a_n{\stackrel{\cdot} {\le}}b_n$ will mean $\limsup_{n\to\infty}\frac{1}{n}\log(\frac{a_n}{b_n})\le 0$, and so on. Throughout the sequel, we will make frequent use of the fact that $\sum_{i=1}^{k_n} a_i(n) {\stackrel{\cdot} {=}}\max_{1\le i\le k_n} a_i(n)$ as long as $\{a_i(n)\}$ are positive and $k_n{\stackrel{\cdot} {=}}1$. Accordingly, for $k_n$ sequences of positive random variables $\{A_i(n)\}$, all defined on a common probability space, and a deterministic sequence $B_n$, $$\begin{aligned}
\label{pullout}
\mbox{Pr}\left\{\sum_{i=1}^{k_n} A_i(n)\ge B_n\right\}
&{\stackrel{\cdot} {=}}&\mbox{Pr}\left\{\max_{1\le i\le k_n}A_i(n)\ge B_n\right\}\nonumber\\
&=&\mbox{Pr}\bigcup_{i=1}^{k_n}\left\{A_i(n)\ge B_n\right\}\nonumber\\
&{\stackrel{\cdot} {=}}&\sum_{i=1}^{k_n}\mbox{Pr}\left\{A_i(n)\ge B_n\right\}\nonumber\\
&{\stackrel{\cdot} {=}}&\max_{1\le i\le k_n}\mbox{Pr}\left\{A_i(n)\ge B_n\right\},\end{aligned}$$ provided that $B_n'{\stackrel{\cdot} {=}}B_n$ implies $\mbox{Pr}\{A_i(n)\ge B_n'\}{\stackrel{\cdot} {=}}\mbox{Pr}\{A_i(n)\ge B_n\}$.[^3] In simple words, summations and maximizations are equivalent and can be both “pulled out outside” $\mbox{Pr}\{\cdot\}$ without changing the exponential order, as long as $k_n{\stackrel{\cdot} {=}}1$. By the same token, $$\begin{aligned}
\label{intersect}
\mbox{Pr}\left\{\sum_{i=1}^{k_n} A_i(n)\le B_n\right\}&{\stackrel{\cdot} {=}}&
\mbox{Pr}\left\{\max_{1\le i\le k_n} A_i(n)\le B_n\right\}\nonumber\\
&=&\mbox{Pr}\bigcap_{i=1}^{k_n} \{A_i(n)\le B_n\}.\end{aligned}$$ Another fact that will be used extensively is that for a given set of $M$ pairwise independent events $\{{{\cal A}}_i\}_{i=1}^M$, $$\label{shulman}
\mbox{Pr}\left\{\bigcup_{i=1}^M{{\cal A}}_i\right\}{\stackrel{\cdot} {=}}\min\left\{1,\sum_{i=1}^M
\mbox{Pr}\{{{\cal A}}_i\}\right\}.$$ The right–hand side (r.h.s.) is obviously the union bound, which holds true even if the events are not pairwise independent. On the other hand, when multiplied by a factor of $1/2$, the r.h.s. becomes a lower bound to $\mbox{Pr}\{\bigcup_{i=1}^M{{\cal A}}_i\}$, provided that $\{A_i\}$ are pairwise independent [@Shulman03 Lemma A.2], [@SBM07 Lemma 1].
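To make these conventions concrete, the following small Python sketch (not part of the paper; all numerical choices below are arbitrary illustrative assumptions) checks numerically that the normalized log of a sum of exponentially large terms coincides with that of the largest term, and that for independent (hence pairwise independent) events the union bound in (\[shulman\]) is tight within a factor of $2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# (i) Sum vs. max on the exponential scale: a_i(n) = exp(n * c_i) with a
#     sub-exponential number of terms k_n, so (1/n) log k_n -> 0.
n, k_n = 200, 50
c = rng.uniform(-1.0, 1.0, size=k_n)
log_sum = np.logaddexp.reduce(n * c)       # log sum_i exp(n * c_i), numerically stable
log_max = np.max(n * c)
print("(1/n) log sum =", log_sum / n, "  (1/n) log max =", log_max / n)

# (ii) Union bound vs. truth for independent events: Pr{union of A_i} lies
#      between (1/2) * min{1, M*p} and min{1, M*p}.
M, p, trials = 200, 1e-3, 20000
hits = rng.random((trials, M)) < p         # A_i occurs with probability p
pr_union = hits.any(axis=1).mean()
bound = min(1.0, M * p)
print("Pr{union} ~", pr_union, "  union bound:", bound, "  half bound:", bound / 2)
```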
Problem Formulation
-------------------
Consider a discrete memoryless channel (DMC), characterized by a finite input alphabet ${{\cal X}}_0$, a finite output alphabet ${{\cal Y}}$ and a given matrix of single–letter transition probabilities $\{W(y|x),~x\in{{\cal X}}_0,~y\in{{\cal Y}}\}$. It is further assumed that ${{\cal X}}_0$ contains a special symbol denoted by ‘$0$’, which designates the channel input in the absence of transmission. We shall denote ${{\cal X}}={{\cal X}}_0\setminus\{0\}$ and $Q_0(y)=W(y|x=0)$.
We assume an ensemble of random codes, where each codeword is selected independently at random, uniformly within a type class ${{\cal T}}_P$. Let ${{\cal C}}=\{{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $x$}}_2\ldots,{\mbox{\boldmath $x$}}_M\}$, ${\mbox{\boldmath $x$}}_m\in{{\cal X}}^n$, $m=1,\ldots,M$, $M=e^{nR}$ ($R$ being the coding rate in nats per channel use), denote the (randomly chosen) code, which is revealed to both the encoder and the decoder.
A detector/decoder, for a code operating in the setting of slotted asynchronous communication, is a partition of ${{\cal Y}}^n$ into $M+1$ regions, denoted ${{\cal R}}_0,{{\cal R}}_1,\ldots,{{\cal R}}_M$. If ${\mbox{\boldmath $y$}}\in{{\cal R}}_m$, $m=1,2,\ldots,M$, then the decoder decodes the message to be $m$. If ${\mbox{\boldmath $y$}}\in{{\cal R}}_0$, then the decoder declares that nothing has been transmitted, that is, ${\mbox{\boldmath $x$}}=0^n$ and then ${\mbox{\boldmath $y$}}$ is “pure noise.” The probability of decoding error (DE) is defined as $$P_{\mbox{\tiny DE}}=\frac{1}{M}\sum_{m=1}^MW(\overline{{{\cal R}}_m}|{\mbox{\boldmath $x$}}_m)=
\frac{1}{M}\sum_{m=1}^M\sum_{k\ne m}W({{\cal R}}_k|{\mbox{\boldmath $x$}}_m),$$ where the inner summation at the right–most side [*includes*]{} $k=0$. The probability of false alarm (FA) is defined as $$P_{\mbox{\tiny FA}}=Q_0(\overline{{{\cal R}}_0})=\sum_{m=1}^MQ_0({{\cal R}}_m),$$ and the probability of misdetection (MD) is defined as $$P_{\mbox{\tiny MD}}=\frac{1}{M}\sum_{m=1}^M W({{\cal R}}_0|{\mbox{\boldmath $x$}}_m).$$ For a given code ${{\cal C}}$, we are basically interested in achievable trade-offs between $P_{\mbox{\tiny
DE}}$, $P_{\mbox{\tiny FA}}$, and $P_{\mbox{\tiny MD}}$. Consider the following problem: $$\begin{aligned}
\label{min}
& &\mbox{minimize}~~~P_{\mbox{\tiny DE}}\nonumber\\
& &\mbox{subject to}~~P_{\mbox{\tiny FA}}\le \epsilon_{\mbox{\tiny FA}}\nonumber\\
& &~~~~~~~~~~~~~~~P_{\mbox{\tiny MD}}\le \epsilon_{\mbox{\tiny MD}}\end{aligned}$$ where $\epsilon_{\mbox{\tiny FA}}$ and $\epsilon_{\mbox{\tiny MD}}$ are given prescribed quantities, and it is assumed that these two constraints are not contradictory.[^4]
Our goal is to find the optimum detector/decoder and then analyze the random coding exponents associated with the resulting error probabilities.
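Before turning to the optimal rule, the following brute-force Python sketch (purely illustrative; the channel matrix, the two codewords, and the naive decision rule are assumptions made up for the example) evaluates the three probabilities exactly on a toy channel by enumerating all output sequences. The particular partition used here simply treats $0^n$ as an extra codeword and decodes by maximum likelihood; it is generally not the optimum rule derived in the next section.

```python
import itertools
import numpy as np

# Toy DMC: inputs {0,1,2} (0 = silent symbol), outputs {0,1}.
# W[x][y] = W(y|x); rows sum to 1. All values are illustrative assumptions.
W = np.array([[0.9, 0.1],    # x = 0 (noise), i.e. Q_0
              [0.2, 0.8],    # x = 1
              [0.6, 0.4]])   # x = 2
n = 4
codebook = [(1, 1, 2, 1), (2, 2, 1, 2)]      # M = 2 hand-picked codewords
M = len(codebook)

def lik(x_vec, y_vec):
    """W(y|x) for sequences over a memoryless channel."""
    return np.prod([W[x, y] for x, y in zip(x_vec, y_vec)])

def decode(y_vec):
    """Return 0 for 'silence', or the index 1..M of the decoded message.
    Naive rule: treat 0^n as an extra codeword and pick the ML hypothesis."""
    scores = [lik((0,) * n, y_vec)] + [lik(c, y_vec) for c in codebook]
    return int(np.argmax(scores))

# Exact error probabilities by enumerating all |Y|^n output sequences.
P_FA = P_MD = P_DE = 0.0
for y in itertools.product(range(2), repeat=n):
    q0 = lik((0,) * n, y)
    d = decode(y)
    if d != 0:
        P_FA += q0                                # noise declared as a message
    for m, c in enumerate(codebook, start=1):
        w = lik(c, y) / M
        if d == 0:
            P_MD += w                             # message declared as noise
        if d != m:
            P_DE += w                             # includes d = 0 (the k = 0 term)
print(f"P_FA={P_FA:.4f}  P_MD={P_MD:.4f}  P_DE={P_DE:.4f}")
```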
The Optimum Detector/Decoder
============================
Let us define the following detector/decoder: $$\begin{aligned}
{{\cal R}}_0^*&=&\left\{{\mbox{\boldmath $y$}}:~a\cdot\sum_{m=1}^M W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)+\max_m W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\le
b\cdot Q_0({\mbox{\boldmath $y$}})\right\}\\
{{\cal R}}_m^*&=&\overline{{{\cal R}}_0^*}\bigcap\left\{{\mbox{\boldmath $y$}}:~W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)> \max_{k\ne m}
W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_k)\right\},~~~~m=1,2,\ldots,M,\end{aligned}$$ where ties are broken arbitrarily, and where $a\ge 0$ and $b\ge 0$ are deterministic constants. The following lemma establishes the optimality of the decision rule ${{\cal R}}^*=\{{{\cal R}}_0^*,{{\cal R}}_1^*,\ldots,{{\cal R}}_M^*\}$ in the sense of the trade-off among the probabilities $P_{\mbox{\tiny MD}}$, $P_{\mbox{\tiny
FA}}$ and $P_{\mbox{\tiny DE}}$. It tells us that there is no other decision rule that simultaneously yields strictly smaller error probabilities of all three kinds.
[**Lemma 1.**]{} Let ${{\cal R}}^*=\{{{\cal R}}_0^*,{{\cal R}}_1^*,\ldots,{{\cal R}}_M^*\}$ be as above and let ${{\cal R}}=\{{{\cal R}}_0,{{\cal R}}_1,\ldots,{{\cal R}}_M\}$ be any other partition of ${{\cal Y}}^n$ into $M+1$ regions. If $$Q_0(\overline{{{\cal R}}_0})\le Q_0(\overline{{{\cal R}}_0^*})$$ and $$\frac{1}{M}\sum_{m=1}^MW({{\cal R}}_0|{\mbox{\boldmath $x$}}_m)\le
\frac{1}{M}\sum_{m=1}^MW({{\cal R}}_0^*|{\mbox{\boldmath $x$}}_m),$$ then $$\frac{1}{M}\sum_{m=1}^MW(\overline{{{\cal R}}_m^*}|{\mbox{\boldmath $x$}}_m)\le
\frac{1}{M}\sum_{m=1}^MW(\overline{{{\cal R}}_m}|{\mbox{\boldmath $x$}}_m).$$
[*Proof.*]{} We begin from the obvious observation that for a given choice of ${{\cal R}}_0$, the optimum choice of the other decision regions is always: $$\label{optrm}
{{\cal R}}_m=
\overline{{{\cal R}}_0}\bigcap\left\{{\mbox{\boldmath $y$}}:~W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)> \max_{k\ne
m}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_k)\right\},~~~~~~~m=1,2,\ldots,M.$$ In other words, once a transmission has been detected, the best decoding rule is the ML decoding rule. Similarly as in classical hypothesis testing theory, this is true because the probability of correct decoding, $$P_{\mbox{\tiny CD}}=\frac{1}{M}\sum_{m=1}^M\sum_{{\mbox{\boldmath $y$}}\in{{\cal R}}_m}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m),$$ is upper bounded by $$P_{\mbox{\tiny CD}}\le\frac{1}{M}\sum_{m=1}^M\sum_{{\mbox{\boldmath $y$}}\in{{\cal R}}_m}\max_kW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_k)
=\frac{1}{M}\sum_{{\mbox{\boldmath $y$}}\in\overline{{{\cal R}}_0}}\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)$$ and this bound is achieved by (\[optrm\]). Thus, upon adopting (\[optrm\]) for a given choice of ${{\cal R}}_0$, it remains to prove that the choice ${{\cal R}}_0^*$ satisfies the assertion of the lemma.
The proof of this fact is similar to the proof of the Neyman–Pearson lemma. Let ${{\cal R}}_0^*$ be as above and let ${{\cal R}}_0$ be another, competing rejection region. First, observe that for every ${\mbox{\boldmath $y$}}\in{{\cal Y}}^n$ $$[{{\cal I}}\{{\mbox{\boldmath $y$}}\in{{\cal R}}_0^*\}-{{\cal I}}\{{\mbox{\boldmath $y$}}\in{{\cal R}}_0\}]\cdot\left[b\cdot
Q_0({\mbox{\boldmath $y$}})-a\cdot\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)-\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\right]\ge 0.$$ This is true because, by definition of ${{\cal R}}_0^*$, the two factors of the product at the left–hand side (l.h.s.) are either both non–positive or both non–negative. Thus, taking the summation over all ${\mbox{\boldmath $y$}}\in{{\cal Y}}^n$, we have: $$\begin{aligned}
0&\le&\sum_{{\mbox{\boldmath $y$}}\in{{\cal Y}}^n}
[{{\cal I}}\{{\mbox{\boldmath $y$}}\in{{\cal R}}_0^*\}-{{\cal I}}\{{\mbox{\boldmath $y$}}\in{{\cal R}}_0\}]\cdot\left[b\cdot
Q_0({\mbox{\boldmath $y$}})-a\cdot\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)-\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\right]\nonumber\\
&=&b\cdot[Q_0({{\cal R}}_0^*)-Q_0({{\cal R}}_0)]-
a\cdot\left[\sum_{m=1}^MW({{\cal R}}_0^*|{\mbox{\boldmath $x$}}_m)-\sum_{m=1}^MW({{\cal R}}_0|{\mbox{\boldmath $x$}}_m)\right]-\nonumber\\
& &\left[\sum_{{\mbox{\boldmath $y$}}\in{{\cal R}}_0^*}\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)-\sum_{{\mbox{\boldmath $y$}}\in{{\cal R}}_0}\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\right]\end{aligned}$$ which yields $$\begin{aligned}
& &\sum_{{\mbox{\boldmath $y$}}\in{{\cal R}}_0^*}\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)-\sum_{{\mbox{\boldmath $y$}}\in{{\cal R}}_0}\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\nonumber\\
&\le&
b\cdot[Q_0({{\cal R}}_0^*)-Q_0({{\cal R}}_0)]-
a\cdot\left[\sum_{m=1}^MW({{\cal R}}_0^*|{\mbox{\boldmath $x$}}_m)-\sum_{m=1}^MW({{\cal R}}_0|{\mbox{\boldmath $x$}}_m)\right]\nonumber\\
&=&
b\cdot[Q_0(\overline{{{\cal R}}_0})-Q_0(\overline{{{\cal R}}_0^*})]+
a\cdot\left[\sum_{m=1}^MW({{\cal R}}_0|{\mbox{\boldmath $x$}}_m)-\sum_{m=1}^MW({{\cal R}}_0^*|{\mbox{\boldmath $x$}}_m)\right]\end{aligned}$$ Since $a\ge 0$ and $b\ge 0$, it follows that $$Q_0(\overline{{{\cal R}}_0})\le Q_0(\overline{{{\cal R}}_0^*})$$ and $$\frac{1}{M}\sum_{m=1}^MW({{\cal R}}_0|{\mbox{\boldmath $x$}}_m)\le\frac{1}{M}\sum_{m=1}^MW({{\cal R}}_0^*|{\mbox{\boldmath $x$}}_m)$$ together imply that $$\sum_{{\mbox{\boldmath $y$}}\in{{\cal R}}_0^*}\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\le
\sum_{{\mbox{\boldmath $y$}}\in{{\cal R}}_0}\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)$$ or equivalently, $$\sum_{{\mbox{\boldmath $y$}}\in\overline{{{\cal R}}_0^*}}\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\ge
\sum_{{\mbox{\boldmath $y$}}\in\overline{{{\cal R}}_0}}\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m),$$ which in turn yields $$\begin{aligned}
\frac{1}{M}\sum_{m=1}^MW(\overline{{{\cal R}}_m^*}|{\mbox{\boldmath $x$}}_m)&\equiv&1-
\frac{1}{M}\sum_{{\mbox{\boldmath $y$}}\in\overline{{{\cal R}}_0^*}}\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\nonumber\\
&\le&
1-\frac{1}{M}\sum_{{\mbox{\boldmath $y$}}\in\overline{{{\cal R}}_0}}\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\nonumber\\
&\equiv&\frac{1}{M}\sum_{m=1}^MW(\overline{{{\cal R}}_m}|{\mbox{\boldmath $x$}}_m).\end{aligned}$$ This completes the proof of Lemma 1. $\Box$
[**Discussion.**]{} At this point, two comments are in order.
1\. The results thus far hold for any given code ${{\cal C}}$. As mentioned earlier, in this work, we analyze the ensemble performance. Specifically, let $\bar{P}_{\mbox{\tiny DE}}$, $\bar{P}_{\mbox{\tiny FA}}$, and $\bar{P}_{\mbox{\tiny MD}}$ denote the corresponding ensemble averages of $P_{\mbox{\tiny DE}}$, $P_{\mbox{\tiny FA}}$, and $P_{\mbox{\tiny MD}}$, respectively. We will assess the random coding exponents of these three probabilities. The constants $a$ and $b$ can be thought of as Lagrange multipliers that are tuned to meet the given FA and MD constraints. For these Lagrange multipliers to have an impact on error exponents, we let them be exponential functions of $n$, that is, $a=e^{n\alpha}$ and $b=e^{n\beta}$, where $\alpha$ and $\beta$ are real numbers, independent of $n$. The rejection region is then of the form $$\label{finalR0}
{{\cal R}}_0^*=\left\{{\mbox{\boldmath $y$}}:~e^{n\alpha}\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)+\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\le
e^{n\beta}Q_0({\mbox{\boldmath $y$}})\right\}.$$ By the same token, we impose exponential constraints on the FA and MD probabilities, that is, $\epsilon_{\mbox{\tiny FA}}=e^{-nE_{\mbox{\tiny FA}}}$ and $\epsilon_{\mbox{\tiny MD}}=e^{-nE_{\mbox{\tiny MD}}}$, where $E_{\mbox{\tiny FA}} \ge 0$ and $E_{\mbox{\tiny MD}} \ge 0$ are given numbers, independent of $n$.
2\. The detection/rejection rule defined by (\[finalR0\]) involves a linear combination of $\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)$ and $\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)$, or equivalently, the overall output distribution induced by the code $$Q_{{{\cal C}}}({\mbox{\boldmath $y$}}){\stackrel{\Delta} {=}}\frac{1}{M}\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m).$$ In this context, the intuition behind the optimality of this detection rule is not trivial (at least for the author of this article), and as mentioned earlier, it is very different from that of [@Wang10] and [@WCCW11]. It is instructive, nonetheless, to examine some special cases. The first observation is that for $\alpha\ge 0$, the term $e^{n\alpha}\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)$ dominates the term $\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)$, and so, the rejection region is essentially equivalent to $$\label{approxr0}
{{\cal R}}_0'=\left\{{\mbox{\boldmath $y$}}:~e^{n\alpha}\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\le
e^{n\beta}Q_0({\mbox{\boldmath $y$}})\right\}=
\left\{{\mbox{\boldmath $y$}}:~e^{n(\alpha+R)}Q_{{{\cal C}}}({\mbox{\boldmath $y$}})\le
e^{n\beta}Q_0({\mbox{\boldmath $y$}})\right\},$$ which is exactly the Neyman–Pearson test between $Q_{{{\cal C}}}({\mbox{\boldmath $y$}})$ and $Q_0({\mbox{\boldmath $y$}})$. This means that $\alpha\ge 0$ corresponds to a regime of full tension between the FA and the MD constraints (see footnote no. 2). In this case, $E_{\mbox{\tiny FA}}$ and $E_{\mbox{\tiny MD}}$ are related via the Neyman–Pearson lemma, and there are no degrees of freedom left for minimizing $\bar{P}_{\mbox{\tiny DE}}$ (or equivalently, maximizing its exponent). Indeed, the detection–rejection rule (\[approxr0\]) depends only on one degree of freedom, which is the difference $\alpha-\beta$, and hence so are the FA and MD error exponents associated with it. At the other extreme, where $e^{n\alpha} \ll 1$, and the term $\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)$ dominates, the detection rule becomes equivalent to $${{\cal R}}_0''=\left\{{\mbox{\boldmath $y$}}:~\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\le
e^{n\beta}Q_0({\mbox{\boldmath $y$}})\right\}.$$ In this case, the silent mode is essentially treated as corresponding to yet another codeword – ${\mbox{\boldmath $x$}}_0=0^n$, although it still has a special stature due to the factor $e^{n\beta}$. But for $\beta=0$, this “silent codeword” is just an additional codeword with no special standing, and the decoding is completely ordinary. The interesting range is therefore the range where $\alpha$ is negative, but not too small, where both $Q_{{{\cal C}}}({\mbox{\boldmath $y$}})$ and $\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)$ play a considerable role.
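As a concrete illustration of the rule (\[finalR0\]) (a minimal sketch, not from the paper; the toy channel and codebook are the same arbitrary assumptions as in the earlier sketch, and the values of $\alpha$ and $\beta$ are picked only for demonstration), the following Python code applies the optimal detector/decoder and evaluates its exact FA, MD, and DE probabilities by enumeration. Sweeping $\alpha$ and $\beta$ traces out the achievable trade-off for this particular code.

```python
import itertools
import numpy as np

# Toy DMC with silent input symbol 0; all numbers are illustrative assumptions.
W = np.array([[0.9, 0.1],    # W(y|x=0) = Q_0(y)
              [0.2, 0.8],
              [0.6, 0.4]])
n, codebook = 4, [(1, 1, 2, 1), (2, 2, 1, 2)]
M = len(codebook)
alpha, beta = -0.1, 0.05     # exponents of the two Lagrange multipliers a, b

def lik(x_vec, y_vec):
    return np.prod([W[x, y] for x, y in zip(x_vec, y_vec)])

def detect_decode(y_vec):
    """Optimal rule: reject (output 0) iff
    e^{n*alpha} * sum_m W(y|x_m) + max_m W(y|x_m) <= e^{n*beta} * Q_0(y);
    otherwise decode by maximum likelihood."""
    liks = np.array([lik(c, y_vec) for c in codebook])
    q0 = lik((0,) * n, y_vec)
    if np.exp(n * alpha) * liks.sum() + liks.max() <= np.exp(n * beta) * q0:
        return 0
    return int(np.argmax(liks)) + 1

# Exact FA / MD / DE probabilities of this rule, by enumeration over Y^n.
P_FA = P_MD = P_DE = 0.0
for y in itertools.product(range(2), repeat=n):
    d = detect_decode(y)
    if d != 0:
        P_FA += lik((0,) * n, y)
    for m, c in enumerate(codebook, start=1):
        w = lik(c, y) / M
        P_MD += w * (d == 0)
        P_DE += w * (d != m)
print(f"alpha={alpha} beta={beta}:  P_FA={P_FA:.4f}  P_MD={P_MD:.4f}  P_DE={P_DE:.4f}")
```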
Performance
===========
In this section, we present our main theorem, which provides exact single–letter characterizations for all three exponents as functions of $\alpha$ and $\beta$. We first need some definitions. Let $$d(x,y){\stackrel{\Delta} {=}}\ln\left[\frac{Q_0(y)}{W(y|x)}\right],~~~~x\in{{\cal X}},~y\in{{\cal Y}}$$ and denote $D(Q)={\mbox{\boldmath $E$}}_Qd(X,Y)$. For a given output distribution $Q_Y=\{Q_Y(y),~y\in{{\cal Y}}\}$, define[^5] $${\mbox{\boldmath $R$}}(\Delta;Q_Y){\stackrel{\Delta} {=}}\inf_{\{Q_{Y|X}:~D(Q)\le \Delta,~(P\times
Q_{Y|X})_Y=Q_Y\}}I(Q).$$ Next, define $$\mu(Q_Y,R){\stackrel{\Delta} {=}}\min_{Q_{X|Y}\in{{\cal Q}}_P,~I(Q)\le R}\{I(Q)+D(Q)\},$$ $$\tilde{{\mbox{\boldmath $R$}}}(\Delta,R;Q_Y){\stackrel{\Delta} {=}}\left\{\begin{array}{ll}
{\mbox{\boldmath $R$}}(\Delta;Q_Y)-R & \Delta\le \mu(Q_Y,R)-R\\
0 & \Delta> \mu(Q_Y,R)-R\end{array}\right.$$ $$E_A{\stackrel{\Delta} {=}}\inf_{Q_Y}[{{\cal D}}(Q_Y\|Q_0)+\tilde{{\mbox{\boldmath $R$}}}(\alpha-\beta,R;Q_Y)],$$ $$E_B{\stackrel{\Delta} {=}}\inf_{Q_Y}\left\{{{\cal D}}(Q_Y\|Q_0)+[{\mbox{\boldmath $R$}}(-\beta;Q_Y)-R]_+\right\},$$ and $$\label{efa}
E_{\mbox{\tiny
FA}}{\stackrel{\Delta} {=}}\min\{E_A,E_B\}.$$ The inverse function of ${\mbox{\boldmath $R$}}(D;Q_Y)$ will be denoted by ${\mbox{\boldmath $D$}}(R;Q_Y)$, i.e., $${\mbox{\boldmath $D$}}(R;Q_Y)=\inf_{\{Q_{Y|X}:~I(Q)\le R,~(P\times
Q_{Y|X})_Y=Q_Y\}}D(Q).$$ Also, let $R_1(Q_Y)$ and $D_1(Q_Y)$ denote $I(Q^*)$ and $D(Q^*)$, where $Q^*$ minimizes $I(Q)+D(Q)$. Now, let $$\label{emd}
E_{\mbox{\tiny MD}}{\stackrel{\Delta} {=}}\inf{{\cal D}}(Q_{Y|X}\|W|P)$$ where the infimum is subject to the constraints:
1. ${\mbox{\boldmath $D$}}(R;Q_Y)\le[\alpha]_+-\beta\le D(P\times Q_{Y|X})$
2. $D_1(Q_Y)\le[\alpha]_+-\beta$ implies ${\mbox{\boldmath $R$}}([\alpha]_+-\beta;Q_Y)\ge
R-[-\alpha]_+$
3. $D_1(Q_Y)>[\alpha]_+-\beta$ implies $R_1(Q_Y)+D_1(Q_Y)\ge
R+\alpha-\beta$
with $Q_Y=(P\times Q_{Y|X})_Y$. Next define $$E_1=\inf_{\{Q_{Y|X}:~D(P\times Q_{Y|X})\le[\alpha]_+-\beta\}}
\left\{{{\cal D}}(Q_{Y|X}\|W|P)+\left[{\mbox{\boldmath $R$}}(D(P\times Q_{Y|X});(P\times Q_{Y|X})_Y)-R\right]_+\right\},$$ $$E_2=\inf_{Q_{Y|X}}
\left\{{{\cal D}}(Q_{Y|X}\|W|P)+\left[{\mbox{\boldmath $R$}}(\alpha-\beta;(P\times Q_{Y|X})_Y)-R\right]_+\right\},$$ and finally, $$\label{ede}
E_{\mbox{\tiny DE}}{\stackrel{\Delta} {=}}\min\{E_1,E_2,E_{\mbox{\tiny MD}}\}.$$
[**Theorem 1.**]{} Let $W$ be a DMC as defined in Section 2.2 and let ${{\cal R}}^*$ be the detector/decoder defined in Section 3. Let the codewords of ${{\cal C}}=\{{\mbox{\boldmath $x$}}_1,\ldots,{\mbox{\boldmath $x$}}_M\}$, $M=e^{nR}$, be selected independently at random under the uniform distribution across a given type class ${{\cal T}}_P$. Then, the asymptotic exponents associated with $\bar{P}_{\mbox{\tiny FA}}$, $\bar{P}_{\mbox{\tiny MD}}$, and $\bar{P}_{\mbox{\tiny DE}}$ are given, respectively, by $E_{\mbox{\tiny FA}}$, $E_{\mbox{\tiny MD}}$, and $E_{\mbox{\tiny DE}}$, as defined in eqs. (\[efa\]), (\[emd\]), and (\[ede\]).
[**Discussion.**]{} As discussed in Section 3, we observe that for $\alpha\ge 0$, all three exponents depend on $\alpha$ and $\beta$ only via the difference $\alpha-\beta$. It is also seen that there is nothing to lose by replacing a positive value of $\alpha$ by $\alpha=0$, as long as the difference $\alpha-\beta$ is kept. For $\alpha < 0$, the various exponents depend on $\alpha$ and $\beta$ individually, so there are two degrees of freedom to adjust both the FA and the MD exponents to pre–specified values in a certain range.
It is instructive to find out the maximum achievable information rate for which the average probability of decoding error still tends to zero, that is, the smallest rate $R$ for which $E_{\mbox{\tiny DE}}=0$, for given $E_{\mbox{\tiny MD}}$ and $E_{\mbox{\tiny FA}}$. This happens as soon as either $E_1=0$ or $E_2=0$. The exponent $E_1$ vanishes for $R={\mbox{\boldmath $R$}}(D(P\times W);(P\times W)_Y)$. But $$\begin{aligned}
{\mbox{\boldmath $R$}}(D(P\times W);(P\times W)_Y)&=&\min\{I(Q):~D(Q)\le D(P\times W),~(P\times
Q_{Y|X})_Y=(P\times W)_Y\}\nonumber\\
&\le& I(P\times W).\end{aligned}$$ On the other hand, since ${{\cal D}}(Q_{Y|X}\|W|P)\ge 0$, it is easy to see that the constraint set $\{Q:~D(Q)\le D(P\times W),~(P\times Q_{Y|X})_Y=(P\times
W)_Y\}$ is a subset of $\{Q:~I(Q)\ge I(P\times W)\}$, and so, $${\mbox{\boldmath $R$}}(D(P\times W);(P\times W)_Y)
\ge\min\{I(Q):~I(Q)\ge I(P\times W)\}=I(P\times W),$$ therefore, ${\mbox{\boldmath $R$}}(D(P\times W);(P\times W)_Y)=I(P\times W)$, which is the ordinary achievable rate one would expect from a constant composition code of type class ${{\cal T}}_P$. The exponent $E_2$ vanishes at the rate ${\mbox{\boldmath $R$}}(\alpha-\beta;(P\times W)_Y)$. Therefore, there is no rate loss, compared to ordinary decoding, as long as $$\alpha-\beta
\le D(P\times W).$$
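The quantities entering Theorem 1 are single–letter and can be evaluated numerically. The following Python sketch (a crude brute-force illustration under assumed toy parameters, not an efficient or provably accurate solver) computes ${\mbox{\boldmath $R$}}(\Delta;Q_Y)$ by sweeping the one free parameter of $Q_{Y|X}$ that remains once the output marginal is fixed, and then evaluates the exponent $E_B$ of (\[efa\]) on a grid of output marginals; $E_A$, $E_{\mbox{\tiny MD}}$ and the remaining exponents can be handled in the same brute-force manner for small alphabets.

```python
import numpy as np

# Crude grid-search illustration (toy binary example; all values are assumptions).
Q0 = np.array([0.9, 0.1])                  # Q_0(y) = W(y|0), y in {0,1}
Wx = np.array([[0.2, 0.8],                 # W(y|x=1)
               [0.6, 0.4]])                # W(y|x=2)
P = np.array([0.5, 0.5])                   # composition of the type class T_P
R, beta = 0.05, 0.02                       # rate (nats/channel use) and beta

def xlogx(p):
    return np.where(p > 0, p * np.log(np.maximum(p, 1e-300)), 0.0)

def I_and_D(q):
    """Mutual information I(Q) and D(Q) = E_Q[d(X,Y)] for Q = P x Q_{Y|X},
    where q[x] = Q(Y=1 | X=x)."""
    Qyx = np.stack([1.0 - q, q], axis=1)   # rows: inputs, columns: outputs
    Qy = P @ Qyx
    I = float(np.sum(P[:, None] * (xlogx(Qyx) - Qyx * np.log(Qy))))
    D = float(np.sum(P[:, None] * Qyx * (np.log(Q0) - np.log(Wx))))
    return I, D

def R_of(Delta, t, grid=501):
    """R(Delta; Q_Y) for Q_Y = (1-t, t): sweep the single free parameter of
    Q_{Y|X} that is left once the output marginal is constrained to Q_Y."""
    lo, hi = max(0.0, 2 * t - 1.0), min(1.0, 2 * t)
    best = np.inf
    for q1 in np.linspace(lo, hi, grid):
        I, D = I_and_D(np.array([q1, 2 * t - q1]))
        if D <= Delta:
            best = min(best, I)
    return best

# E_B = min over Q_Y of  D(Q_Y || Q_0) + [R(-beta; Q_Y) - R]_+  (outer grid on Q_Y).
EB = np.inf
for t in np.linspace(0.01, 0.99, 99):
    Qy = np.array([1.0 - t, t])
    div = float(np.sum(Qy * (np.log(Qy) - np.log(Q0))))
    r = R_of(-beta, t)
    if np.isfinite(r):
        EB = min(EB, div + max(r - R, 0.0))
print("grid estimate of E_B:", EB)
```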
Proof of Theorem 1
==================
This section is divided into three subsections, each one devoted to the analysis of one of the three error exponents.
The False Alarm Error Exponent
------------------------------
Let ${\mbox{\boldmath $y$}}$ be given and consider $\{{\mbox{\boldmath $X$}}_m\}$ as random. Then, $$\begin{aligned}
\bar{P}_{\mbox{\tiny FA}}({\mbox{\boldmath $y$}})&=&Q_0\left\{
e^{n\alpha}\cdot\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)+\max_m W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)>
e^{n\beta}Q_0({\mbox{\boldmath $y$}})\right\}\\
&{\stackrel{\cdot} {=}}&Q_0\left\{e^{n\alpha}\cdot\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)
> e^{n\beta}Q_0({\mbox{\boldmath $y$}})\right\}+ Q_0\left\{\max_m W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)>
e^{n\beta}Q_0({\mbox{\boldmath $y$}})\right\}\\
&=&Q_0\left\{\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)
> e^{n(\beta-\alpha)}Q_0({\mbox{\boldmath $y$}})\right\}+ Q_0\left\{\max_m W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)>
e^{n\beta}Q_0({\mbox{\boldmath $y$}})\right\}\\
&{\stackrel{\Delta} {=}}& A({\mbox{\boldmath $y$}})+B({\mbox{\boldmath $y$}}),\end{aligned}$$ where we have used (\[pullout\]). It is sufficient now to show that $A={\mbox{\boldmath $E$}}\{A({\mbox{\boldmath $Y$}})\}{\stackrel{\cdot} {=}}e^{-nE_A}$ and $B={\mbox{\boldmath $E$}}\{B({\mbox{\boldmath $Y$}})\}{\stackrel{\cdot} {=}}e^{-nE_B}$. Now, for a given ${\mbox{\boldmath $y$}}$, let $N({{\hat{Q}}}|{\mbox{\boldmath $y$}})$ be the number of codewords in ${{\cal C}}$ whose joint empirical distribution with ${\mbox{\boldmath $y$}}$ is $\hat{Q}=\{\hat{Q}(x,y),~x\in{{\cal X}},~y\in{{\cal Y}}\}$. Next, define $$f({{\hat{Q}}})=\sum_{x,y}{{\hat{Q}}}(x,y)\ln W(y|x)$$ and $$g({{\hat{Q}}}_Y)=\sum_y{{\hat{Q}}}_Y(y)\ln Q_0(y)+\beta-\alpha.$$ Then, $$\begin{aligned}
A({\mbox{\boldmath $y$}})&=&Q_0\left\{\sum_{{{\hat{Q}}}_{X|Y}}N({{\hat{Q}}}|{\mbox{\boldmath $y$}})e^{nf({{\hat{Q}}})}
> e^{ng({{\hat{Q}}}_Y)}\right\}\nonumber\\
&{\stackrel{\cdot} {=}}&Q_0\left\{\max_{{{\hat{Q}}}_{X|Y}}N({{\hat{Q}}}|{\mbox{\boldmath $y$}})e^{nf({{\hat{Q}}})}>
e^{ng({{\hat{Q}}}_Y)}\right\}\\
&=&Q_0\bigcup_{{{\hat{Q}}}_{X|Y}}\left\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})e^{nf({{\hat{Q}}})}>
e^{ng({{\hat{Q}}}_Y)}\right\}\\
&{\stackrel{\cdot} {=}}&\sum_{{{\hat{Q}}}_{X|Y}}Q_0\left\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})>
e^{n[g({{\hat{Q}}}_Y)-f({{\hat{Q}}})]}\right\}\\
&{\stackrel{\cdot} {=}}&\max_{{{\hat{Q}}}_{X|Y}}Q_0\left\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})>
e^{nu({{\hat{Q}}})}\right\},\end{aligned}$$ where we have used again eq. (\[pullout\]) and where we have defined $$u({{\hat{Q}}}){\stackrel{\Delta} {=}}g({{\hat{Q}}}_Y)-f({{\hat{Q}}})=\sum_{x,y\in{{\cal X}}\times{{\cal Y}}}{{\hat{Q}}}(x,y)\ln\frac{Q_0(y)}{W(y|x)}+\beta-\alpha
=D({{\hat{Q}}})+\beta-\alpha.$$ Now, since $N({{\hat{Q}}}|{\mbox{\boldmath $y$}})$ is a binomial random variable pertaining to $e^{nR}$ trials and probability of success of the exponential order of $e^{-nI({{\hat{Q}}})}$, we have, similarly as in [@Merhav09 Subsection 6.3] $$\mbox{Pr}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge e^{nu({{\hat{Q}}})}\}{\stackrel{\cdot} {=}}\exp\left\{-e^{n[u({{\hat{Q}}})]_+}(n[I({{\hat{Q}}})-R+[u({{\hat{Q}}})]_+]-1)\right\},$$ provided that for $u({{\hat{Q}}})> 0$, $I({{\hat{Q}}})-R+u({{\hat{Q}}})> 0$ (otherwise, $\mbox{Pr}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge e^{nu({{\hat{Q}}})}\}\to 1$).[^6] Therefore, the exponential rate $E({{\hat{Q}}})$ of $\mbox{Pr}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge
e^{nu({{\hat{Q}}})}\}$ is as follows: $$E({{\hat{Q}}})=\left\{\begin{array}{ll}
[I({{\hat{Q}}})-R]_+ & u({{\hat{Q}}})\le 0\\
\infty & u({{\hat{Q}}})> 0,~u({{\hat{Q}}})> R-I({{\hat{Q}}})\\
0 & u({{\hat{Q}}})> 0,~u({{\hat{Q}}})< R-I({{\hat{Q}}})\end{array}\right.$$ For a given ${{\hat{Q}}}_Y$, let ${{\cal Q}}_P$ be the set of $\{{{\hat{Q}}}_{X|Y}\}$ such that $({{\hat{Q}}}_Y\times{{\hat{Q}}}_{X|Y})_X=P$. Then, $$\begin{aligned}
\min_{{{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P}E({{\hat{Q}}})&=&\left\{\begin{array}{ll}
\infty & \forall {{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P:~u({{\hat{Q}}})> 0,~u({{\hat{Q}}})>R-I({{\hat{Q}}})\\
0 & \exists {{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P:~0\le u({{\hat{Q}}})\le R-I({{\hat{Q}}})\\
0 & \exists {{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P:~u({{\hat{Q}}})\le 0,~I({{\hat{Q}}})\le R\\
\min_{\{{{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P:~u({{\hat{Q}}})\le 0\}}[I({{\hat{Q}}})-R]_+ &
\mbox{otherwise}\end{array}\right.\nonumber\\
&=&\left\{\begin{array}{ll}
\infty & \forall {{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P:~u({{\hat{Q}}})> [R-I({{\hat{Q}}})]_+\\
0 & \exists {{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P:~I({{\hat{Q}}})\le \min\{R,R-u({{\hat{Q}}})\}\\
\min_{\{{{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P:~u({{\hat{Q}}})\le 0\}}[I({{\hat{Q}}})-R]_+ &
\mbox{otherwise}\end{array}\right.\nonumber\end{aligned}$$ The condition for $\min_{{{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P}E({{\hat{Q}}})$ to vanish becomes $$\begin{aligned}
\alpha-\beta+R&\ge&\mu({{\hat{Q}}}_Y,R)=
\min_{\{{{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P:~I({{\hat{Q}}})\le R\}}[I({{\hat{Q}}})+D({{\hat{Q}}})]\nonumber\\
&=&\left\{\begin{array}{ll}
R+{\mbox{\boldmath $D$}}(R;{{\hat{Q}}}_Y) & R < R_1({{\hat{Q}}}_Y)\\
R_1({{\hat{Q}}}_Y)+D_1({{\hat{Q}}}_Y) & R \ge R_1({{\hat{Q}}}_Y)\end{array}\right.\end{aligned}$$ The condition for an infinite exponent is as follows: For $u({{\hat{Q}}})$ to be non-negative for all ${{\hat{Q}}}_{X|Y}$, we need $$\alpha-\beta\le D_{\min}({{\hat{Q}}}_Y){\stackrel{\Delta} {=}}\min_{{{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P}D({{\hat{Q}}}).$$ For $u({{\hat{Q}}})\ge R-I({{\hat{Q}}})$ for all ${{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P$, we need $\alpha-\beta+R<\mu({{\hat{Q}}}_Y,\infty)$. Thus, in summary, $$\begin{aligned}
\min_{{{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P}E({{\hat{Q}}})&=&\left\{\begin{array}{ll}
0 & \alpha-\beta \ge \mu({{\hat{Q}}}_Y,R)-R\\
\infty & \alpha-\beta < \min\{\mu({{\hat{Q}}}_Y,\infty)-R,D_{\min}({{\hat{Q}}}_Y)\}\\
\min_{\{{{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P:~u({{\hat{Q}}})\le 0\}}[I({{\hat{Q}}})-R]_+ &
\mbox{elsewhere}\end{array}\right.\nonumber\\
&=&\left\{\begin{array}{ll}
0 & \alpha-\beta \ge \mu({{\hat{Q}}}_Y,R)-R\\
\infty & \alpha-\beta < \min\{\mu({{\hat{Q}}}_Y,\infty)-R,D_{\min}({{\hat{Q}}}_Y)\}\\
\left[{\mbox{\boldmath $R$}}(\alpha-\beta;{{\hat{Q}}}_Y)-R\right]_+ &
\mbox{elsewhere}\end{array}\right.\nonumber\\
&=&\left\{\begin{array}{ll}
{\mbox{\boldmath $R$}}(\alpha-\beta;{{\hat{Q}}}_Y)-R & \alpha-\beta< \mu({{\hat{Q}}}_Y,R)-R\\
0 & \alpha-\beta\ge \mu({{\hat{Q}}}_Y,R)-R\end{array}\right.\nonumber\\
&=&\tilde{{\mbox{\boldmath $R$}}}(\alpha-\beta,R;{{\hat{Q}}}_Y),\end{aligned}$$ where we have used the convention that the minimum over an empty set is infinity and the fact that ${\mbox{\boldmath $D$}}(R;{{\hat{Q}}}_Y)\ge \mu(Q_Y,R)-R$. For the overall exponent associated with $A$, we need to average over ${\mbox{\boldmath $Y$}}$, which gives $A{\stackrel{\cdot} {=}}e^{-nE_A}$ with $$E_A=\min_{Q_Y}\{{{\cal D}}(Q_Y\|Q_0)+\tilde{{\mbox{\boldmath $R$}}}(\alpha-\beta,R;Q_Y)\}.$$
Moving on to the analysis of $B({\mbox{\boldmath $y$}})$, $$\begin{aligned}
B({\mbox{\boldmath $y$}})&=&Q_0\left\{\max_m W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)> e^{n\beta}Q_0({\mbox{\boldmath $y$}})\right\}\\
&=&Q_0\bigcup_{m=1}^M\left\{W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)> e^{n\beta}Q_0({\mbox{\boldmath $y$}})\right\}\\
&{\stackrel{\cdot} {=}}&\min\left\{1,M\cdot Q_0\{W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_1)> e^{n\beta}Q_0({\mbox{\boldmath $y$}})\}\right\},\end{aligned}$$ where in the last line, we have used (\[shulman\]). Now, $$Q_0\{W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_1)> e^{n\beta}Q_0({\mbox{\boldmath $y$}})\}{\stackrel{\cdot} {=}}e^{-nI_0({{\hat{Q}}}_Y)},$$ where $$\begin{aligned}
I_0({{\hat{Q}}}_Y)&=&\min_{{{\hat{Q}}}_{X|Y}}\left\{I({{\hat{Q}}}):~D({{\hat{Q}}})\le -\beta,
~~{{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P
\right\}\nonumber\\
&=&{\mbox{\boldmath $R$}}(-\beta;{{\hat{Q}}}_Y).\end{aligned}$$ Thus, $B{\stackrel{\cdot} {=}}e^{-nE_B}$ with $$E_B=\min_{Q_Y}\{{{\cal D}}(Q_Y\|Q_0)+[{\mbox{\boldmath $R$}}(-\beta;Q_Y)-R]_+\}.$$
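The type-class enumerator $N({{\hat{Q}}}|{\mbox{\boldmath $y$}})$ used above is a binomial with $e^{nR}$ trials and success probability of the exponential order of $e^{-nI({{\hat{Q}}})}$. The following Monte Carlo sketch (alphabet sizes and block length are assumptions chosen only to keep the simulation light) estimates this success probability directly, by drawing codewords uniformly from a type class ${{\cal T}}_P$ and counting how often their joint type with a fixed ${\mbox{\boldmath $y$}}$ hits a prescribed ${{\hat{Q}}}$; the estimate matches $e^{-nI({{\hat{Q}}})}$ to first order in the exponent.

```python
import numpy as np
from math import log

rng = np.random.default_rng(1)

# Illustration of the type-class enumeration step (toy values, assumptions only):
# binary "active" alphabet {1,2} with composition P = (1/2, 1/2), binary output.
n = 60
y = np.array([0] * (n // 2) + [1] * (n // 2))   # fixed output of type (1/2, 1/2)

# Target joint type \hat{Q}: joint counts n_{x,y}; with both marginals fixed,
# the whole table is determined by n_{1,0} alone.
n10 = 20
Q = np.array([[n10, n // 2 - n10],
              [n // 2 - n10, n10]], dtype=float) / n
Qx, Qy = Q.sum(axis=1), Q.sum(axis=0)
I_hat = float(np.sum(Q * np.log(Q / (Qx[:, None] * Qy[None, :]))))   # I(\hat{Q})

# Monte Carlo estimate of p = Pr{ X uniform over T_P has joint type \hat{Q} with y }.
# The method of types gives p = exp(-n*I(\hat{Q}) + O(log n)), so the two printed
# numbers agree to first order in the exponent (the gap shrinks like (log n)/n).
base = np.array([1] * (n // 2) + [2] * (n // 2))
trials, hits = 100000, 0
for _ in range(trials):
    x = rng.permutation(base)                   # uniform draw from T_P
    if np.sum((x == 1) & (y == 0)) == n10:
        hits += 1
p_hat = hits / trials
print("-(1/n) log p_hat =", -log(p_hat) / n, "   I(Q_hat) =", I_hat)
```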
The Misdetection Error Exponent
-------------------------------
Without loss of generality, we will assume that ${\mbox{\boldmath $X$}}_1={\mbox{\boldmath $x$}}_1$ was transmitted. We first condition on ${\mbox{\boldmath $x$}}_1$ and ${\mbox{\boldmath $y$}}$. $$\begin{aligned}
\bar{P}_{\mbox{\tiny MD}}({\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}})&=&
\mbox{Pr}\left\{e^{n\alpha}\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)+\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\le e^{n\beta}Q_0({\mbox{\boldmath $y$}})
\bigg|{\mbox{\boldmath $X$}}_1={\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\nonumber\\
&=&\mbox{Pr}\left\{e^{n\alpha}\sum_{m=1}^MW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)+\right.\nonumber\\
& &\left.\max\{W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1),\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\}\le e^{n\beta}Q_0({\mbox{\boldmath $y$}})
\bigg|{\mbox{\boldmath $X$}}_1={\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\nonumber\\
&{\stackrel{\cdot} {=}}&\mbox{Pr}\left\{e^{n\alpha}\left[W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)+\sum_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\right]+W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)+
\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\le e^{n\beta}Q_0({\mbox{\boldmath $y$}})
\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\nonumber\\
&{\stackrel{\cdot} {=}}&\mbox{Pr}\left\{e^{n[\alpha]_+}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)+e^{n\alpha}\sum_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)+
\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\le e^{n\beta}Q_0({\mbox{\boldmath $y$}})
\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\nonumber\\
&{\stackrel{\cdot} {=}}&\mbox{Pr}\left\{e^{n[\alpha]_+}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)< e^{n\beta}Q_0({\mbox{\boldmath $y$}}),
e^{n\alpha}\sum_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)+
\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\le e^{n\beta}Q_0({\mbox{\boldmath $y$}})
\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\nonumber\\
&=&{{\cal I}}\left\{e^{n[\alpha]_+}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)< e^{n\beta}Q_0({\mbox{\boldmath $y$}})\right\}\times\nonumber\\
& & \mbox{Pr}\left\{e^{n\alpha}\sum_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)+
\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\le e^{n\beta}Q_0({\mbox{\boldmath $y$}})
\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\nonumber\\
&{\stackrel{\Delta} {=}}&C\cdot D.\end{aligned}$$ Using the identity $$\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)\equiv\max_{{{\hat{Q}}}_{X|Y}}{{\cal I}}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge 1\}\cdot
e^{nf({{\hat{Q}}})}$$ (where now $N({{\hat{Q}}}|{\mbox{\boldmath $y$}})$ does not count ${\mbox{\boldmath $x$}}_1$), we now have $$\begin{aligned}
D&=&\mbox{Pr}\left\{e^{n\alpha}\sum_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)+
\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\le e^{n\beta}Q_0({\mbox{\boldmath $y$}})
\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\\
&=&\mbox{Pr}\left\{e^{n\alpha}\sum_{{{\hat{Q}}}_{X|Y}}N({{\hat{Q}}}|{\mbox{\boldmath $y$}})e^{nf({{\hat{Q}}})}+
\max_{{{\hat{Q}}}_{X|Y}}{{\cal I}}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge 1\}\cdot e^{nf({{\hat{Q}}})}\le
e^{n[g({{\hat{Q}}}_Y)+\alpha]}
\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\\
&{\stackrel{\cdot} {=}}&\mbox{Pr}\left\{e^{n\alpha}\sum_{{{\hat{Q}}}_{X|Y}}N({{\hat{Q}}}|{\mbox{\boldmath $y$}})e^{nf({{\hat{Q}}})}+
\sum_{{{\hat{Q}}}_{X|Y}}{{\cal I}}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge 1\}e^{nf({{\hat{Q}}})}\le e^{n[g({{\hat{Q}}}_Y)+\alpha]}
\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\\
&=&\mbox{Pr}\left\{\sum_{{{\hat{Q}}}_{X|Y}}[e^{n\alpha}N({{\hat{Q}}}|{\mbox{\boldmath $y$}})+
{{\cal I}}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge 1\}]e^{nf({{\hat{Q}}})}\le e^{n[g({{\hat{Q}}}_Y)+\alpha]}
\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\\
&{\stackrel{\cdot} {=}}&\mbox{Pr}\left\{\max_{{{\hat{Q}}}_{X|Y}}[e^{n\alpha}N({{\hat{Q}}}|{\mbox{\boldmath $y$}})+
{{\cal I}}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge 1\}]e^{nf({{\hat{Q}}})}\le e^{n[g({{\hat{Q}}}_Y)+\alpha]}
\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\\
&=&\mbox{Pr}\bigcap_{{{\hat{Q}}}_{X|Y}}\left\{e^{n\alpha}N({{\hat{Q}}}|{\mbox{\boldmath $y$}})+
{{\cal I}}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge 1\}\le e^{n[u({{\hat{Q}}})+\alpha]}
\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\\
&=&\mbox{Pr}\bigcap_{{{\hat{Q}}}_{X|Y}}\left\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\le
e^{nv({{\hat{Q}}})}
\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\},\end{aligned}$$ where $$v({{\hat{Q}}})=\left\{\begin{array}{ll}
u({{\hat{Q}}}) & u({{\hat{Q}}})+\alpha > 0\\
-\infty & u({{\hat{Q}}})+\alpha \le 0\end{array}\right.$$ Now, if there exists at least one ${{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P$ for which $I({{\hat{Q}}}) < R$ and $R-I({{\hat{Q}}})>v({{\hat{Q}}})$, then this ${{\hat{Q}}}_{X|Y}$ alone is responsible for a double exponential decay of $D$ (because then the event in question would be a large deviations event whose probability decays exponentially with $M=e^{nR}$, thus double–exponentially with $n$), let alone the intersection over all $\{{{\hat{Q}}}_{X|Y}\}$. The condition for this to happen is $R>R_0({{\hat{Q}}}_Y){\stackrel{\Delta} {=}}\min_{Q_{X|Y}\in{{\cal Q}}_P}\max\{I({{\hat{Q}}}),I({{\hat{Q}}})+v({{\hat{Q}}})\}$. Conversely, if for every ${{\hat{Q}}}$ with ${{\hat{Q}}}_{X|Y}\in{{\cal Q}}_P$, we have $I({{\hat{Q}}}) > R$ or $R-I({{\hat{Q}}})<v({{\hat{Q}}})$, that is, $R<R_0({{\hat{Q}}}_Y)$, then $D$ is close to 1 since the intersection is over a sub–exponential number of events with very high probability. It follows that $D$ behaves like ${{\cal I}}\{R_0({{\hat{Q}}}_Y)>R\}$, Thus, $$\begin{aligned}
P_{\mbox{\tiny MD}}&{\stackrel{\cdot} {=}}&{\mbox{\boldmath $E$}}{{\cal I}}\left\{R_0({{\hat{Q}}}_Y)>R,~
W({\mbox{\boldmath $Y$}}|{\mbox{\boldmath $X$}}_1)\le
e^{n(\beta-[\alpha]_+)}Q_0({\mbox{\boldmath $Y$}})\right\}\nonumber\\
&=&\exp\left[-n\inf_{Q_{Y|X}\in{{\cal Q}}_P}\left\{{{\cal D}}(Q_{Y|X}\|W|P):~R_0(Q_Y)>R,~
D(Q)>[\alpha]_+-\beta\right\}\right].\end{aligned}$$ Now, let us take a closer look at $R_0(Q_Y)$: $$\begin{aligned}
\max\{I(Q),I(Q)+v(Q)\}&=&\left\{\begin{array}{ll}
\max\{I(Q),I(Q)+u(Q)\} & u(Q)>-\alpha\\
I(Q) & u(Q)\le-\alpha\end{array}\right.\\
&=&I(Q)+u(Q)\cdot{{\cal I}}\{u(Q)>[-\alpha]_+\}.\end{aligned}$$ Thus, $$\begin{aligned}
R_0(Q)&=&\min_{Q_{X|Y}\in{{\cal Q}}_P}[I(Q)+u(Q)\cdot{{\cal I}}\{u(Q)>[-\alpha]_+\}]\\
&=&\min\left\{\min_{Q_{X|Y}\in{{\cal Q}}_P:~u(Q)\le[-\alpha]_+}I(Q),
\min_{Q_{X|Y}\in{{\cal Q}}_P:~u(Q)>[-\alpha]_+}[I(Q)+u(Q)]\right\}.\end{aligned}$$ Now, $$\begin{aligned}
\min_{Q_{X|Y}\in{{\cal Q}}_P:~u(Q)\le[-\alpha]_+}I(Q)&=&{\mbox{\boldmath $R$}}(\alpha+[-\alpha]_+-\beta;Q_Y)\\
&=&{\mbox{\boldmath $R$}}([\alpha]_+-\beta;Q_Y)\end{aligned}$$ and $$\begin{aligned}
& &\min_{Q_{X|Y}\in{{\cal Q}}_P:~u(Q)>[-\alpha]_+}[I(Q)+u(Q)]\\
&=&\beta-\alpha+\min_{Q_{X|Y}\in{{\cal Q}}_P:~D(Q)>[\alpha]_+-\beta}[I(Q)+D(Q)]\\
&=&\beta-\alpha+\left\{\begin{array}{ll}
R_1(Q_Y)+D_1(Q_Y) & [\alpha]_+-\beta < D_1(Q_Y)\\
{\mbox{\boldmath $R$}}([\alpha]_+-\beta;Q_Y)+[\alpha]_+-\beta & \mbox{otherwise}
\end{array}\right.\\
&=&\left\{\begin{array}{ll}
R_1(Q_Y)+D_1(Q_Y) +\beta-\alpha & [\alpha]_+-\beta < D_1(Q_Y)\\
{\mbox{\boldmath $R$}}([\alpha]_+-\beta;Q_Y)+[\alpha]_+-\alpha & \mbox{otherwise}
\end{array}\right.\\
&=&\left\{\begin{array}{ll}
R_1(Q_Y)+D_1(Q_Y) +\beta-\alpha & [\alpha]_+-\beta < D_1(Q_Y)\\
{\mbox{\boldmath $R$}}([\alpha]_+-\beta;Q_Y)+[-\alpha]_+ & \mbox{otherwise}
\end{array}\right.\end{aligned}$$ Thus, $$E_{\mbox{\tiny MD}}=\inf{{\cal D}}(Q_{Y|X}\|W|P),$$ where the infimum is over all $\{Q_{Y|X}\}$ that satisfy the following conditions:
1. ${\mbox{\boldmath $D$}}(R;Q_Y)\le[\alpha]_+-\beta\le D(P\times Q_{Y|X})$
2. $D_1(Q_Y)\le[\alpha]_+-\beta$ implies ${\mbox{\boldmath $R$}}([\alpha]_+-\beta;Q_Y)\ge
R-[-\alpha]_+$
3. $D_1(Q_Y)>[\alpha]_+-\beta$ implies $R_1(Q_Y)+D_1(Q_Y)\ge
R+\alpha-\beta$
where $Q_Y=(P\times Q_{Y|X})_Y$.
The Decoding Error Exponent
---------------------------
Let us denote $$\Omega_m{\stackrel{\Delta} {=}}\left\{{\mbox{\boldmath $y$}}:~W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_m)>\max_{k\ne m}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_k)\right\}.$$ Then, for $m\ge 1$, ${{\cal R}}_m^*=\overline{{{\cal R}}_0^*}\cap\Omega_m$. For a given code, the probability of decoding error is given by $$\begin{aligned}
P_{\mbox{\tiny DE}}&=&\frac{1}{M}\sum_{m=1}^MW(\overline{{{\cal R}}_m^*}|{\mbox{\boldmath $x$}}_m)\\
&=&\frac{1}{M}\sum_{m=1}^MW({{\cal R}}_0^*\cup\overline{\Omega_m}|{\mbox{\boldmath $x$}}_m)\\
&=&\frac{1}{M}\sum_{m=1}^MW(\overline{{{\cal R}}_0^*}\cap\overline{\Omega_m}|{\mbox{\boldmath $x$}}_m)+
\frac{1}{M}\sum_{m=1}^MW({{\cal R}}_0^*|{\mbox{\boldmath $x$}}_m).\end{aligned}$$ Upon taking the ensemble average, the second term becomes $\bar{P}_{\mbox{\tiny MD}}$, which we have already analyzed in the previous subsection. Its error exponent, $E_{\mbox{\tiny MD}}$, indeed appears as one of the arguments of the $\min\{\cdot\}$ operator in eq. (\[ede\]), and so, it remains to show that the exponent of the ensemble average of the first term is $\min\{E_1,E_2\}$. Let ${\mbox{\boldmath $X$}}_1={\mbox{\boldmath $x$}}_1$ be transmitted and let ${\mbox{\boldmath $Y$}}={\mbox{\boldmath $y$}}$ be received. As before, we first condition on $({\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}})$. $$\begin{aligned}
\mbox{Pr}\{\overline{{{\cal R}}_0^*}\cap\overline{\Omega_1}|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\}
&=&\mbox{Pr}\left\{e^{n\alpha}\sum_m
W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)+\max_mW({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)>e^{n\beta}Q_0({\mbox{\boldmath $y$}}),\right.\nonumber\\
& &\left.~\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\ge
W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\nonumber\\
&{\stackrel{\cdot} {=}}&\mbox{Pr}\left\{e^{n[\alpha]_+}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)+e^{n\alpha}\sum_{m>1}
W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)+\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)>e^{n\beta}Q_0({\mbox{\boldmath $y$}}),\right.\nonumber\\
& &\left.~\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\ge
W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\nonumber\\
&{\stackrel{\cdot} {=}}&A({\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}})+B({\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}})+C({\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}})\end{aligned}$$ where $$A({\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}})={{\cal I}}\left\{W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)\ge e^{n(\beta-[\alpha]_+)}Q_0({\mbox{\boldmath $y$}})\right\}
\cdot\mbox{Pr}\left\{
\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\ge
W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\},$$ $$B({\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}})=\mbox{Pr}\left\{\sum_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\ge
e^{n(\beta-\alpha)}Q_0({\mbox{\boldmath $y$}}),~\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\ge
W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\},$$ and $$C({\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}})=\mbox{Pr}\left\{\max_{m>1}W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)\ge
\max\{e^{n\beta}Q_0({\mbox{\boldmath $y$}}),~W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)\}\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}.$$ We next analyze each one of these terms. First, observe that for a given constant $S$ (which may depend on the given ${\mbox{\boldmath $x$}}_1$ and ${\mbox{\boldmath $y$}}$), we have $$\begin{aligned}
\mbox{Pr}\left\{
\max_{m>1}\frac{W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)}{Q_0({\mbox{\boldmath $y$}})}\ge
e^{-nS}\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}
&{\stackrel{\cdot} {=}}&\min\left\{1, e^{nR}\cdot\mbox{Pr}\left\{\frac{W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_2)}{Q_0({\mbox{\boldmath $y$}})}>
e^{-nS}|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\right\}\\
&{\stackrel{\cdot} {=}}& \exp\{-n[{\mbox{\boldmath $R$}}(S,{{\hat{Q}}}_Y)-R]_+\}.\end{aligned}$$ In our case, $S=D({{\tilde{Q}}})$, where ${{\tilde{Q}}}$ is the empirical joint distribution of ${\mbox{\boldmath $x$}}_1$ and ${\mbox{\boldmath $y$}}$. Thus, $$\begin{aligned}
A&{\stackrel{\Delta} {=}}& {\mbox{\boldmath $E$}}\{A({\mbox{\boldmath $X$}}_1,{\mbox{\boldmath $Y$}})\}\nonumber\\
&{\stackrel{\cdot} {=}}&\exp\left[-n\min_{\{Q:~Q_{X|Y}\in{{\cal Q}}_P:~D(Q)\le
[\alpha]_+-\beta\}}\{{{\cal D}}(Q_{Y|X}\|W|P)+[{\mbox{\boldmath $R$}}(D(Q),Q_Y)-R]_+\}\right]\nonumber\\
&=& e^{-nE_1}.\end{aligned}$$ Concerning $C({\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}})$, we similarly have: $$\begin{aligned}
C({\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}})&=&\mbox{Pr}\left\{\max_{m>1}\frac{W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_m)}{Q_0({\mbox{\boldmath $y$}})}\ge
\max\left\{e^{n\beta},~\frac{W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)}{Q_0({\mbox{\boldmath $y$}})}\right\}\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\nonumber\\
&{\stackrel{\cdot} {=}}&\min\left\{1,e^{nR}\cdot\mbox{Pr}\left\{\frac{W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $X$}}_2)}{Q_0({\mbox{\boldmath $y$}})}\ge
\max\left\{e^{n\beta},~\frac{W({\mbox{\boldmath $y$}}|{\mbox{\boldmath $x$}}_1)}
{Q_0({\mbox{\boldmath $y$}})}\right\}\bigg|{\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}}\right\}\right\}\nonumber\\
&{\stackrel{\cdot} {=}}& \exp\left\{-n[{\mbox{\boldmath $R$}}(\min\{-\beta,D({{\tilde{Q}}})\};{{\hat{Q}}}_Y)-R]_+\right\},\end{aligned}$$ and so, $$\begin{aligned}
C&{\stackrel{\Delta} {=}}& {\mbox{\boldmath $E$}}\{C({\mbox{\boldmath $X$}}_1,{\mbox{\boldmath $Y$}})\}\nonumber\\
&{\stackrel{\cdot} {=}}&
\exp\left\{-n\min_{Q:~Q_{X|Y}\in{{\cal Q}}_P}\{{{\cal D}}(Q_{Y|X}\|W|P)+
[{\mbox{\boldmath $R$}}(\min\{-\beta,D(Q)\};Q_Y)-R]_+\}\right\}\nonumber\\
&{\stackrel{\cdot} {\le}}& e^{-nE_1},\end{aligned}$$ therefore, $C$ is always dominated by $A$. It remains then to show that $B={\mbox{\boldmath $E$}}\{B({\mbox{\boldmath $X$}}_1,{\mbox{\boldmath $Y$}})\}{\stackrel{\cdot} {=}}e^{-nE_2}$. First, for given $({\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}})$, $$\begin{aligned}
B({\mbox{\boldmath $x$}}_1,{\mbox{\boldmath $y$}})&{\stackrel{\cdot} {=}}&\mbox{Pr}\left\{\sum_{{{\hat{Q}}}_{X|Y}}N({{\hat{Q}}}|{\mbox{\boldmath $y$}})e^{nf({{\hat{Q}}})}\ge e^{ng({{\hat{Q}}})},~
\sum_{{{\hat{Q}}}_{X|Y}}{{\cal I}}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge 1\}\cdot e^{nf({{\hat{Q}}})}\ge
e^{nf({{\tilde{Q}}})}\right\}\nonumber\\
&{\stackrel{\cdot} {=}}&\mbox{Pr}\left\{\max_{{{\hat{Q}}}_{X|Y}}N({{\hat{Q}}}|{\mbox{\boldmath $y$}})e^{nf({{\hat{Q}}})}\ge e^{ng({{\hat{Q}}})},~
\max_{{{\hat{Q}}}_{X|Y}}{{\cal I}}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge 1\}\cdot e^{nf({{\hat{Q}}})}\ge
e^{nf({{\tilde{Q}}})}\right\}\nonumber\\
&{\stackrel{\cdot} {=}}&\mbox{Pr}\left[\bigcup_{{{\hat{Q}}}_{X|Y}}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge
e^{nu({{\hat{Q}}})}\}\right]\bigcap\left[\bigcup_{{{\hat{Q}}}_{X|Y}}\{{{\cal I}}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge 1\}\ge
e^{n[f({{\tilde{Q}}})-f({{\hat{Q}}})]}\}\right]\nonumber\\
&=&\mbox{Pr}\left[\bigcup_{{{\hat{Q}}}_{X|Y}}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge
e^{nu({{\hat{Q}}})}\}\right]\bigcap\left[\bigcup_{{{\hat{Q}}}_{X|Y}:~f({{\tilde{Q}}})\le f({{\hat{Q}}})}\{{{\cal I}}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge 1\}\ge
e^{n[f({{\tilde{Q}}})-f({{\hat{Q}}})]}\}\right]\nonumber\\
&=&\mbox{Pr}\bigcup_{\{{{\hat{Q}}}_{X|Y},Q_{X|Y}':~f({{\tilde{Q}}})\le f(Q')\}}\left\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge
e^{nu({{\hat{Q}}})},~N(Q'|{\mbox{\boldmath $y$}})\ge 1\right\}\nonumber\\
&{\stackrel{\cdot} {=}}&\mbox{Pr}\bigcup_{{{\hat{Q}}}}\left\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge
e^{n[u({{\hat{Q}}})]_+}\right\}+\nonumber\\
& &\sum_{{{\hat{Q}}}_{X|Y}\ne Q_{X|Y}':~f({{\tilde{Q}}})\le f(Q')}\mbox{Pr}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge
e^{n[u({{\hat{Q}}})]_+},~N(Q'|{\mbox{\boldmath $y$}})\ge 1\}\nonumber\\
&{\stackrel{\cdot} {=}}&\max_{{{\hat{Q}}}_{X|Y}}\mbox{Pr}\left\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge
e^{n[u({{\hat{Q}}})]_+}\right\}+\nonumber\\
& &\max_{{{\hat{Q}}}_{X|Y}\ne Q_{X|Y}':f({{\tilde{Q}}})\le f(Q')}\mbox{Pr}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge
e^{n[u({{\hat{Q}}})]_+},~N(Q'|{\mbox{\boldmath $y$}})\ge
1\}\nonumber\\
&{\stackrel{\cdot} {=}}&\max_{{{\hat{Q}}}_{X|Y}}\mbox{Pr}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge
e^{n[u({{\hat{Q}}})]_+}\}\nonumber\\
&=&\exp\{-n[{\mbox{\boldmath $R$}}(\alpha-\beta;{{\hat{Q}}}_Y)-R]_+\}.\end{aligned}$$ where the last passage follows from an analysis almost identical to that of $E_A$ in Subsection 5.1. Thus, $$B={\mbox{\boldmath $E$}}\{B({\mbox{\boldmath $X$}}_1,{\mbox{\boldmath $Y$}})\}=\exp\{-n\min_{Q_{Y|X}}\{{{\cal D}}(Q_{Y|X}\|W|P)+
[{\mbox{\boldmath $R$}}(\alpha-\beta;Q_Y)-R]_+\}=e^{-nE_2}.$$
[^1]: This research was supported by the Israel Science Foundation (ISF), grant no. 412/12.
[^2]: In our notation, we do not index ${{\hat{Q}}}_X$ by ${\mbox{\boldmath $x$}}$ because the underlying sequence ${\mbox{\boldmath $x$}}$ will be clear from the context.
[^3]: Consider the case where $B_n{\stackrel{\cdot} {=}}e^{bn}$ ($b$ being a constant independent of $n$) and the exponent of $\mbox{Pr}\{A_i(n)\ge e^{bn}\}$ is a continuous function of $b$.
[^4]: Note that there is some tension between $P_{\mbox{\tiny MD}}$ and $P_{\mbox{\tiny
FA}}$ as they are related via the Neyman–Pearson lemma. For a given $\epsilon_{\mbox{\tiny FA}}$, the minimum achievable MD probability is positive, in general. It is assumed then that the prescribed value of $\epsilon_{\mbox{\tiny MD}}$ is not smaller than this minimum. In the problem under consideration, it makes sense to relax the tension between the two constraints to a certain extent, in order to allow some freedom to minimize $P_{DE}$ under these constraints.
[^5]: Conceptually, ${\mbox{\boldmath $R$}}(D,Q_Y)$ can be thought of as the rate–distortion function of the “source” $P$ subject to a constrained reproduction distribution $Q_Y$ (or vice versa), but note that the “distortion measure” $d(x,y)$ here is not necessarily non–negative for all $(x,y)$.
[^6]: Note also that $\mbox{Pr}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge e^{nu({{\hat{Q}}})}\}=
\mbox{Pr}\{N({{\hat{Q}}}|{\mbox{\boldmath $y$}})\ge e^{n[u({{\hat{Q}}})]_+}\}$ since $N({{\hat{Q}}}|{\mbox{\boldmath $y$}})$ is an integer valued random variable.
---
author:
- 'J. Hirtz'
- 'J.-C. David'
- 'A. Boudard'
- 'J. Cugnon'
- 'S. Leray'
- 'I. Leya'
- 'D. Mancusi'
title: Parametrization of cross sections for elementary hadronic collisions involving strange particles
---
Introduction
============
The modelling of nuclear reactions involving a light projectile and an atomic nucleus from a few tens of MeV to a few GeV is important for a large variety of applications, ranging from nuclear waste transmutation to spacecraft shielding and hadron therapy. These reactions are called spallation reactions. Technically, it is assumed that a proper description of spallation reactions starts at about 100-200 MeV. However, special attention to the low-energy domain showed that results down to a few tens of MeV can be as good as those obtained with models dedicated to the description of low-energy nuclear reactions [@Dav08]. Spallation reactions are usually described in two steps. The first step is called the intranuclear cascade (INC), because the incident projectile gives rise to a cascade of hadronic reactions within the nucleus with emission of energetic particles, leaving an excited remnant nucleus. The second step is the deexcitation of the nucleus via evaporation, fission, Fermi breakup, or multifragmentation. During the last twenty years great improvements have been achieved in modelling these reactions, often driven by projects on spallation neutron sources (shielding of neutron beams or transmutation of nuclear waste). In 2010, the IAEA tested the reliability of most of the models used worldwide [@Ler11]. Various observables made it possible to scrutinize the qualities and shortcomings of the models.
The INCL (Liège Intranuclear Cascade) model, which is developed by the authors of this paper, was recognized as one of the best spallation codes up to 2-3 GeV according to the IAEA 2010 benchmark. We then decided to improve and extend our model [@incl]. Among the different topics one can cite the improvement of low-energy cluster-induced reactions [@Dav13], the few-nucleon removal study [@Man15; @jose], and the extension to high energies (up to 15 GeV) [@Ped11; @Ped12]. For extending the model to high energies we introduced the main new channel, which is multiple pion emission in $NN$ and $\pi N$ interactions. This type of emission is based on the hypothesis that the produced baryonic resonances have such short lifetimes that their decay, into several pions, occurs before they interact with another particle in the nucleus. In addition, the overlap of their large widths makes the choice of a specific resonance difficult. Finally, the very good results obtained when comparing the new model predictions to experimental data and to other models confirmed that the main features can be described in this manner. However, other particles, especially strange particles, can be produced to a lesser extent when the energy goes up. Even if they only play a minor role during the cascade, strange-particle production contributes a few percent of the nucleon-nucleon inelastic cross section for energies from 2 GeV to 15 GeV; therefore, taking it into account could improve the modelling. The improvements and implementations will also bring new possibilities, which are important for simulating specific experiments involving, for instance, kaon emission. Comparisons with experimental data may also probe nuclear-medium effects. In addition, hypernuclei, whose interest grows with new facilities and experiments (*e.g.*, HypHI and PANDA at GSI and/or FAIR (Germany), JLab (USA), J-PARC (Japan)), can also be studied. We want to stress the particular interest of the possibility of studying hypernuclei with the extended INCL model. Beyond its generally high predictive power, the combination of the INCL model with the de-excitation model Abla is probably the most suitable tool to study the propagation of baryons in a nuclear medium, as attested by the IAEA intercomparison mentioned above.
The ingredients needed to account for strange particles (limited in this paper to Kaons, antiKaons, Lambdas, and Sigmas) are their characteristics, the reaction cross sections involving strange particles in the initial and/or final state, the angular distributions, and the momentum and charge repartition of the particles in the final state. This paper describes these ingredients and especially the parametrizations of the reaction cross sections involving strange particles. These ingredients are independent of the code considered and can be used in any other code. It is worth mentioning that hyperon and kaon production from a nucleus is already modelled in several codes, *e.g.*, GiBUU [@gibuu], JAM [@jam], LAQGSM [@Mas08], INCL2.0 [@joseph; @deneye], and Bertini [@bertini]. Numerous scenarios exist to treat the production of strange particles. Some models split the energy range in two parts: a low-energy part with a center-of-mass energy roughly below 3-4 GeV and a high-energy part. The low-energy part is described either by resonances or directly by their decay products. However, the cross sections are then often treated differently than in INCL; they are often given in resonant and non-resonant terms. For the high-energy part the LUND string model [@And83] is usually used. Some other models, like Bertini and INCL, which both focus on the energy domain considered here, *i.e.*, below 15 GeV, consider directly the decay products of the resonances and rely on experimental data, calculation results (*e.g.*, from string models), and approximations. Therefore, some information already exists. However, we investigated new parametrizations by using all available material (experimental data, hypotheses, and models), and here we take the opportunity to report our best knowledge of the thus determined cross sections and to improve some parametrizations. Our goal is also to provide a rather comprehensive set of cross sections and angular distributions in as simple and accurate a form as possible, so that it can be used by other model builders and/or end-users. In addition, our work attempts a systematic and coherent elaboration of fitted cross sections, largely based on symmetry and simple hadronic models, as explained in detail in this manuscript.
The paper starts with the list of particles and reactions considered. Then, the way the reaction cross sections have been parametrized is described in . is devoted to the particles in the final states and more precisely to their emission angles and momenta. There we also describe the charge repartition. Since such information already partly exist in literature, comparisons of the earlier data with the new results obtained here are given in . Finally, we draw some conclusions.
Particles and reactions {#II}
=======================
In a first step, only the non-resonant particles with one unit of strangeness were considered. Therefore Kaons ($K^0$ and $K^+$), antiKaons ($\overline{K}^0$ and $K^-$)(the difference between Kaons and antiKaons is relevant in this paper), Sigmas ($\Sigma^-$,$\Sigma^0$, and $\Sigma^+$), and the Lambda ($\Lambda$) were added, *i.e.*, particles with a nuclear spin $J=0$ and $J=1/2$ and with a strangeness $-1$ for baryons and $\pm 1$ for mesons.
The types of particles considered also define the types of reactions that must be considered. To select them, we use their relative importance, as given by the experimental cross sections. Knowing that the main particles evolving during the intranuclear cascade are nucleons and pions, we consider reactions contributing at least 1% to the $NN$ and $\pi N$ total cross sections and at least 10% of the total cross section for the $YN$ ($Y=\Lambda$ or $\Sigma$), $\overline{K}N$, and $KN$ reactions. The reactions taken into account in this work are listed in . This choice is based on the available experimental data.
------ ---------------- ----------------------- --------- ---------------- -------------------- ------------------ ---------------- -------------------------- -------------- ------------------ ---------------
$NN$ $ \rightarrow$ $ N \Lambda K$ $\pi N$ $ \rightarrow$ $\Lambda K $ $N \overline{K}$ $ \rightarrow$ $N \overline{K}$ $N K$ $ \rightarrow$ $N K $
$ \rightarrow$ $ N \Sigma K $ $ \rightarrow$ $\Sigma K $ $ \rightarrow$ $\Lambda \pi$ $\rightarrow $ $N K \pi$
$ \rightarrow$ $ N \Lambda K \pi$ $ \rightarrow$ $\Lambda K \pi$ $ \rightarrow$ $\Sigma \pi$ $\rightarrow $ $N K \pi \pi$
$ \rightarrow$ $ N \Sigma K \pi$ $ \rightarrow$ $\Sigma K \pi$ $ \rightarrow$ $N \overline{K} \pi$ $N \Lambda $ $ \rightarrow$ $ N \Lambda$
$ \rightarrow$ $ N \Lambda K \pi\pi$ $ \rightarrow$ $\Lambda K \pi\pi$ $ \rightarrow$ $\Lambda \pi \pi$ $ \rightarrow $ $ N \Sigma$
$ \rightarrow$ $ N \Sigma K \pi\pi$ $ \rightarrow$ $\Lambda \pi \pi$ $ \rightarrow$ $\Sigma \pi \pi$ $N \Sigma $ $ \rightarrow$ $ N \Lambda$
$ \rightarrow$ $ NN K \overline{K}$ $ \rightarrow$ $N K \overline{K}$ $ \rightarrow$ $N \overline{K} \pi \pi$ $\rightarrow$ $ N \Sigma$
------ ---------------- ----------------------- --------- ---------------- -------------------- ------------------ ---------------- -------------------------- -------------- ------------------ ---------------
: \[reac1\] List of considered reactions involving strangeness based on experimental data.
In addition, we include two other types of reactions. The first one considers strangeness production via $\Delta N$ reactions. $\Delta$'s are less numerous than nucleons and $\pi$'s, but are nevertheless expected to contribute significantly to strangeness production according to the study of Tsushima et *al.* [@tsushima]. The second type is strangeness production in reactions where many particles are produced in the final state but no measurements are available. Since their contributions increase significantly with increasing energy, a specific study was necessary to get the correct inclusive strangeness-production cross section. lists the channels for both types of reactions, which are also taken into account.
------------ ---------------- ---------------------- --------- ---------------- ---------
$\Delta N$ $\rightarrow$ $N \Lambda K$ $NN$ $\rightarrow$ $K + X$
$ \rightarrow$ $N \Sigma K $
$ \rightarrow$ $ \Delta \Lambda K$ $\pi N$ $ \rightarrow$ $K + X$
$ \rightarrow$ $ \Delta \Sigma K$
$ \rightarrow$ $ NN K \overline{K}$
------------ ---------------- ---------------------- --------- ---------------- ---------
: \[reac2\] List of the reactions involving strangeness and requiring information to be taken exclusively from models. Meaning of $X$ is explained in and excludes the reactions cited in
In the reactions listed in , Kaon production is equivalent to strangeness production, since the Kaon is the only particle with strangeness $+1$, in the energy range under consideration in this paper, that can counterbalance the strangeness $-1$ of the $\Lambda$, $\Sigma$, and $\overline{K}$ particles (strangeness is conserved in strong-interaction processes).
Considering isospin, there are 488 channels, excluding the reactions $NN~\rightarrow~K~+~X$ and $\pi~N~\rightarrow~K~+~X$ of , which must be characterized by their reaction cross sections () and their final state, *i.e.*, charge repartitions, emission angle, and energy of the particles ().
Reaction cross sections {#III}
=======================
Among the ingredients needed to include new particles in an INC model, the reaction cross sections are the most important. As far as possible they are taken from experimental data. However, measurements do not always cover the entire energy range, are rarely available for all isospin channels, and are often nonexistent when numerous particles are present in the final state. To overcome these limitations, a step-by-step procedure has been developed to obtain parametrizations of the required cross sections. First, an overview of the available experimental data was performed. Second, two methods based on isospin symmetry allowed us to extend our database by increasing the available information. Third, the still missing cross sections were determined using models and/or similar reactions with the help of plausible hypotheses. Finally, generic formulae, which can be applied to parametrize the cross sections, are given in the last subsection.
Available experimental data
---------------------------
The number of measured data for each reaction from are given in . The energy range goes up to 32 GeV and the data are taken from Landolt-Börnstein [@landolt] and two other papers [@hires; @sibir2].
Since some of the published experimental data are rather old, our study offers the possibility to check and summarize our knowledge of the cross sections. We therefore give for each reaction the number of isospin channels, number of experimental data points, and the Gini coefficient.
The Gini coefficient [@gini] is a statistical tool typically used in economics to measure the dispersion of a distribution (usually the income distribution of the residents of a nation). The coefficient takes values between 0 (perfect repartition) and 1 (maximal inequality). The Gini coefficient for the discrete case is calculated as follows:
  Reaction                                              \# of channels   \# of data   Gini coefficient
  ----------------------------------------------------- ---------------- ------------ ------------------
  $NN \rightarrow N \Lambda K$                          4                31           0.62
  $NN \rightarrow N \Sigma K$                           10               44           0.69
  $NN \rightarrow N \Lambda K \pi$                      10               29           0.63
  $NN \rightarrow N \Sigma K \pi$                       26               43           0.74
  $NN \rightarrow N \Lambda K \pi\pi$                   16               14           0.77
  $NN \rightarrow N \Sigma K \pi\pi$                    44               15           0.87
  $NN \rightarrow NN K \overline{K}$                    10               16           0.71
  $\pi N \rightarrow \Lambda K$                         4                108          0.75
  $\pi N \rightarrow \Sigma K$                          10               158          0.74
  $\pi N \rightarrow \Lambda K \pi$                     10               68           0.72
  $\pi N \rightarrow \Sigma K \pi$                      26               148          0.77
  $\pi N \rightarrow \Lambda K \pi\pi$                  16               60           0.81
  $\pi N \rightarrow \Sigma K \pi\pi$                   44               63           0.86
  $\pi N \rightarrow N K \overline{K}$                  14               57           0.81
  $N \Lambda \rightarrow N \Lambda$                     2                44           0.5
  $N \Lambda \rightarrow N \Sigma$                      4                11           0.75
  $N \Sigma \rightarrow N \Lambda$                      4                11           0.75
  $N \Sigma \rightarrow N \Sigma$                       10               21           0.80
  $N \overline{K} \rightarrow N \overline{K}$           6                687          0.61
  $N \overline{K} \rightarrow N \overline{K} \pi$       14               500          0.72
  $N \overline{K} \rightarrow N \overline{K} \pi \pi$   22               124          0.87
  $N \overline{K} \rightarrow \Lambda \pi$              4                349          0.52
  $N \overline{K} \rightarrow \Sigma \pi$               10               685          0.59
  $N \overline{K} \rightarrow \Lambda \pi \pi$          6                256          0.66
  $N \overline{K} \rightarrow \Sigma \pi \pi$           16               496          0.73
  $N K \rightarrow N K$                                 4                134          0.69
  $N K \rightarrow N K \pi$                             14               223          0.62
  $N K \rightarrow N K \pi \pi$                         22               123          0.89

  : Available experimental data points for reactions studied in this work[]{data-label="tab_data"}
$$G = \frac{2 \sum\limits_{i=1}^{n} i \ y_i}{n \sum\limits_{i=1}^{n} \ y_i} - \frac{n+1}{n},$$
with $y_i$ the number of data points in the $i^{th}$ channel, arranged such that $y_{i+1} \geq y_i$ (non-cumulative). This coefficient measures the repartition of data among the different isospin channels and, in our case (with a very high inequality and a high number of channels), corresponds to the missing part of the data for each reaction. For example, if $G = 0.8$, approximately 80% more data are needed to complete the existing 20% and to make the entire database for each channel as precise as for the most precise channel.
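For readers who wish to reproduce this statistic, a minimal Python sketch of the discrete Gini coefficient defined above is given here; the channel counts in the example are purely illustrative and are not taken from our database.

```python
# Minimal sketch: discrete Gini coefficient of the data repartition over
# isospin channels, following the formula above. Counts are illustrative only.
def gini(counts):
    """Gini coefficient of a list of per-channel numbers of data points."""
    y = sorted(counts)                      # arrange so that y_{i+1} >= y_i
    n, total = len(y), sum(y)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum(i * yi for i, yi in enumerate(y, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

print(gini([25, 3, 2, 1]))   # ~0.59: most of the data sit in a single channel
```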
The table shows that the number of data points depends strongly on the given reaction. For example, on average there are only 2-3 points per channel for the $NN$ collisions, while for the $N \overline{K}$ reactions more than 35 data points per channel are available. However, the Gini coefficients exhibit an important inhomogeneity ($G > 0.5$) with respect to the isospin channels. There is also a significant inhomogeneity, not captured by the Gini coefficient, with respect to the energy range studied, with more data at the threshold and in the resonance region (see ). When available, these data nevertheless enable a reliable parametrization over the entire considered energy range (up to 15 GeV).
Using only experimental data the reaction cross sections were determined (sometimes partially) only for about 17% of the channels listed in . For the remaining 83% various hypotheses were necessary, which are explained in some detail in the next subsections.
The Bystricky procedure {#bystricky}
-----------------------
The first method used to get information on missing isospin channels is based on the assumption of isospin symmetry, which is described in detail by Bystricky et *al.* [@bystricky]. Their goal was to provide a phenomenological calculation tool for elastic and inelastic cross sections in the framework of isospin symmetry for the reactions $N \! N \! \rightarrow \! N \! N \! \pi$ and $N \! N \! \rightarrow \! N \! N \! \pi \! \pi$.
The procedure, which is based on the isospin decomposition of systems, was used by Sophie Pedoux [@Ped11] to find missing cross sections in channels involving multiple pion production. The procedure was applied up to the production of four pions, and the determined cross sections were then implemented in a previous version of INCL. Briefly, the initial state of two nucleons $\ket{NN}$ is projected on the final state, decomposed into the nucleon final state $\bra{NN}$ and the pion final state $\bra{x \pi}$. The amplitude of the reaction is given by the following equation: $$\label{ampli1}
\mathcal{M}(N \! N \rightarrow N \! N x \pi ) = \left(\bra{N \! N} \otimes \bra{x \pi} \right) M \ket{N \! N},$$ with $M$ the reduced matrix element. This amplitude is subsequently decomposed using isospin projection: $$\bra{I^{(1)} I^{(1)}_3 \ I^{(2)} I^{(2)}_3} M \ket{I^i I^i_3} = CG \ M_{I^{(1)}I^{(2)}I^i},$$ with $CG$ the associated Clebsch-Gordan coefficient, $I^{(1)}$ and $I^{(1)}_3$ the $NN$ system isospin and its projection, $I^{(2)}$ and $I^{(2)}_3$ the $x\pi$ system isospin and its projection, $I^i$ and $I^i_3$ the initial-state isospin and its projection, and $M_{I^{(1)}I^{(2)}I^i}$ the reduced matrix element for the isospin decomposition $I^i I^{(1)}I^{(2)}$. This equation can be read as the isospin decomposition of each multiplet system involved in the initial and final states, contracted with the reduced matrix element.
Next, by integrating over all kinematic variables of the final state and summing over all permutations we obtain a decomposition of the cross section on isospin states, which is then compared with others to establish relations between the different cross sections.
The same procedure was then applied to reactions involving strange particles. In our case, the amplitude can be written as the tensor product of the nucleon, pion, Kaon, antiKaon, Lambda, and Sigma systems of the initial and final states, contracted with the reduced matrix element. With this, the amplitude becomes:
$$\begin{aligned}
\label{ampli2}
\mathcal{M}\left( Initial~state \rightarrow x_N \! N \ x_{\pi} \! \pi \ x_Y \! Y \ x_K \! K \ x_{\overline{K}} \! \overline{K} \right) &= \left( \bra{x_N \! N} \otimes \bra{x_{\pi} \! \pi} \otimes \bra{x_Y \! Y} \otimes \bra{x_K \! K} \otimes \bra{x_{\overline{K}} \! \overline{K}} \right) M \ket{Initial~state} \nonumber \\
&= \left( \bra{system 1} \otimes \bra{system 2} \right) M \ket{Initial~state},\end{aligned}$$
with $\bra{system 1}$ and $\bra{system 2}$ a contraction of the final multiplet systems in two arbitrary systems. Note that the final result does not depend on the choice of contraction.
The results thus obtained are either simple equalities between individual cross sections, resulting from the Clebsch-Gordan coefficients associated with isospin symmetry, or equations between several cross sections, resulting from the fact that the cross sections associated with a given total isospin can be expressed as sums of partial cross sections over various final charge states. Non-trivial expressions of this kind are reported in in bold. As an example, for the reaction $N \! \pi \! \rightarrow \! N \! K \! \overline{K}$ we get: $$\begin{aligned}
\sigma(\pi^+ p\! \rightarrow p K^+ \! \overline{K}^0) &= \! \sigma(\pi^- n\! \rightarrow n K^0 \! K^-), \\
\nonumber \\
\sigma(\pi^- p \! \rightarrow n K^0 \! \overline{K}^0) + \sigma(\pi^- p \! \rightarrow n K^+ \! K^-) &+ \sigma(\pi^- p \! \rightarrow p K^0 \! K^-) + \sigma(\pi^+ p \! \rightarrow p K^+ \! \overline{K}^0) \nonumber \\
= 2 \sigma(\pi^0 p \! \rightarrow n K^+ \! \overline{K}^0) + 2 \sigma(\pi^0 p\! &\rightarrow p K^0 \! \overline{K}^0) + 2 \sigma(\pi^0 p \! \rightarrow p K^+ \! K^-).
\label{sym_NpitoNKKb}\end{aligned}$$
Errors arising from this procedure are introduced by the isospin-invariance hypothesis and are estimated to be in the range of a few percent, which corresponds approximately to the relative mass differences between particles belonging to the same multiplet.
The Bystricky procedure allowed us to reduce the missing information on the reaction cross sections by approximately a factor of 2, *i.e.*, we increased the knowledge of the reaction cross sections by about a factor of 2. Thus, at this stage 35% of the channels were parametrized, while 65% were still missing. To establish a complete database, another method, also based on isospin symmetry, was used (see next section).
Hadron exchange model {#HEM}
---------------------
In order to complete the dataset, a procedure based on the hadron exchange model (HEM) was developed. The basic idea of the model is to apply isospin symmetry at the level of the Feynman diagrams, considering only diagrams at leading order, to obtain cross-section ratios. This way, once again, unknown cross sections can be determined from known cross sections.
This procedure is an adaptation of the method used by Li [@li] and Sibirtsev [@sibirtsev]. In this method, complete Feynman diagrams are considered, and not only the initial and final states as in the Bystricky procedure [@bystricky]. The method used by Li and Sibirtsev treats the case of pion and kaon exchange. Here, baryon exchange is also considered because of the type of cross sections studied. Initially, the hadron exchange model was developed with the idea of calculating one cross section explicitly and then using isospin symmetry to determine easily the other channel cross sections for a specific type of reaction. Here, the explicit calculation is replaced by a fit of experimental data. In the following, the method is explained and illustrated with an example.
Similar to the Bystricky method, the procedure determines in a first step relations between matrix elements and, in a second step, the cross section ratios by integrating over all kinematic variables of the squared matrix elements: $$\label{HEM_1}
\sigma = \int |\mathcal{M}_{fi}|^2 d\Omega.$$
To make things easier, the method used by Li and Sibirtsev neglects interferences between diagrams. They estimated that this hypothesis could change their results by about 30%. In our case, we first consider only the ratios between cross sections and, second, we check the results, as far as possible, by comparing them to experimental data or to results arising from the Bystricky procedure. Doing so, the cross section of a specific isospin channel can be rewritten as the sum of all individual diagram contributions: $$\label{HEM_2}
\sigma(channel) = \sum_i \int |\mathcal{M}_{X_i}(channel)|^2 d\Omega,$$ with $\mathcal{M}_{X_i}(channel)$ the diagram amplitude of the isospin channel with the exchange particle $X_i$. In the reduced matrix element amplitude, there are three types of contribution: the initial and final fields, the propagators, and the vertices. Due to isospin symmetry, in the case of the same type of exchange particles, propagators and fields are identical. Therefore, the only difference between matrix elements comes from the vertices. However, the vertices have the same structure when the same particle types are involved. Consequently, these vertices are linked together by the isospin symmetry and this link can be obtained using Clebsch-Gordan coefficients. Note that Kaons and antiKaons have the same field and the same propagator because of the matter/antimatter symmetry. Considering a specific vertex with two incoming particles and one outgoing particle, the contribution can be written as: $$\label{HEM_3}
\bra{I^{out} I^{out}_3} \mathcal{V} \ket{I^{in(1)} I^{in(1)}_3, I^{in(2)} I^{in(2)}_3} = CG \ \mathcal{V}_{X,Y,Z},$$ with $I^{out}$ and $I^{out}_3$ the outgoing particle isospin and its projection, $I^{in(i)}$ and $I^{in(i)}_3$ the isospin and the projection of the $i^{th}$ incoming particle, $\mathcal{V}$ the matrix element associated to the vertex, $CG$ the associated Clebsch-Gordan coefficient, and $\mathcal{V}_{X,Y,Z}$ the projected matrix element for the incoming and outgoing particles of type $X,Y,Z$. Since Clebsch-Gordan coefficients are scalar, diagrams with the same type of exchange particle are linked by a coefficient that is independent of energy. The matrix element of one diagram can be rewritten as: $$\label{HEM_4}
\mathcal{M}_{X_i}(channel) = a_{X_i}(channel) \times \mathfrak{M}_{X_i},$$ with $\mathfrak{M}_{X_i}$ the isospin-independent part of the matrix element and $a_{X_i}(channel)$ the product of all Clebsch-Gordan coefficients coming from each vertex (isospin-dependent part). A factor $n!$ appears in the case of n identical particles in the final state. The matrix element $\mathfrak{M}_{X_i}$ contains all the propagators, field contributions, and the structure of the vertices. The $a_{X_i}(channel)$ coefficient is a real scalar, which contains only the factor linking the different matrix elements. Using , can be rewritten as: $$\label{HEM_5}
\sigma(channel) = \sum_i |a_{X_i}(channel)|^2 \int |\mathfrak{M}_{X_i}|^2 d\Omega.$$
Two cases must be distinguished. In the first case, all $|a_{X}(channel_j)/a_{X}(channel_k)|$ ratios are equal, independent of the diagram. In such a case, the cross section ratio of the two channels can easily be determined. In the second case with unequal ratios, extra information and hypotheses are required. In a first step, global relations obtained from the Bystricky procedure were systematically used as extra information. In a second step, hypotheses linking diagrams together or neglecting some diagrams are needed. Small coupling constants involved and/or small disintegration rates of the intermediate particles allow to leave out some diagrams. Note that all resonances (the $\Delta$ particle is not considered as a nucleon resonance from an isospin point of view: $J_\Delta \neq J_N$) are automatically considered because a given particle and its resonances have the same isospin and the same isospin projection. Therefore the $a_{X}$ coefficients are identical. Consequently, the sum over all diagram amplitudes with the same type of exchange particle can be treated as:
$$\begin{aligned}
\sum_{X^{(*)}_i} |a_{X_i} (channel)|^2 \int |\mathfrak{M}_{X_i}|^2 d\Omega & = |a_{\mathcal{X}}(channel)|^2 \sum_{X^{(*)}_i} \int |\mathfrak{M}_{X_i}|^2 d\Omega \\
& = |a_{\mathcal{X}}(channel)|^2 \int |\mathfrak{M}_{\mathcal{X}}|^2 d\Omega, \nonumber\end{aligned}$$
with ${X^{(*)}_i}$ the particle and its resonances $(*)$ and $\mathfrak{M}_{\mathcal{X}}$ the isospin independent general matrix element of the particle type $\mathcal{X}$ defined as: $$|\mathfrak{M}_{\mathcal{X}}|^2 = \sum_{X^{(*)}_i} |\mathfrak{M}_{X_i}|^2.$$
In order to illustrate the basic procedure, the way to overcome its difficulties, but also to demonstrate its limits, we discuss an illustrative case based on the $\pi N~\rightarrow~\Sigma K$ reaction. Unfortunately, the hadron exchange model gives no satisfactory solution in this case, but it has the rare advantage of being relatively simple while exhibiting numerous problems that often appear in more complex cases.
Five diagrams (three types), listed in , are considered.
Using , the cross section is given by: $$\begin{aligned}
\label{HEM_6}
\sigma (\pi N \rightarrow \Sigma K) = a_K^2 \int |\mathfrak{M}_K|^2 d\Omega & + a_{\Lambda}^2 \int |\mathfrak{M}_\Lambda|^2 d\Omega + a_{\Sigma}^2 \int |\mathfrak{M}_\Sigma|^2 d\Omega \nonumber \\
& + a_{N}^2 \int |\mathfrak{M}_N|^2 d\Omega + a_{\Delta}^2 \int |\mathfrak{M}_\Delta|^2 d\Omega. \end{aligned}$$
In this example, there are two vertices in each diagram called $v_1^X$ and $v_2^X$ as shown in . The $\Sigma$ exchange in the case $\pi^+p\rightarrow~\Sigma^+K^+$ is a $\Sigma^0$. Then, the projection on isospin eigenstates at $v_1^\Sigma$ is: $$\begin{aligned}
P_r(v_1^\Sigma)( \pi^+ p \rightarrow \Sigma^+ K^+) & = (\bra{K^+} \otimes \bra{\Sigma^0})\mathcal{V} \ket{p} \nonumber \\
& = \left( \left\langle \frac{1}{2} \frac{1}{2} \right| \otimes \bra{10} \right) \left|\frac{1}{2} \frac{1}{2} \right\rangle \mathcal{V}_{K\Sigma N} = \sqrt{\frac{1}{3}} \mathcal{V}_{K\Sigma N}.\end{aligned}$$
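The Clebsch-Gordan algebra entering such projections is easy to check numerically. The short sketch below (not part of INCL) recovers the $\sqrt{1/3}$ factor of the vertex above with sympy; the overall sign depends on phase conventions, but only the squared modulus enters the $a^2_{X_i}$ coefficients.

```python
# Sketch: check the isospin projection <K^+ Sigma^0 | p> with sympy's
# Clebsch-Gordan coefficients. Kaon: I=1/2, I3=+1/2; Sigma^0: I=1, I3=0;
# proton: I=1/2, I3=+1/2. Only |coeff|^2 matters for the a_X^2 table.
from sympy import S
from sympy.physics.quantum.cg import CG

coeff = CG(S(1)/2, S(1)/2, 1, 0, S(1)/2, S(1)/2).doit()
print(coeff, coeff**2)   # magnitude sqrt(1/3), squared value 1/3
```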
  Channel                              $a_K^2$   $a_\Lambda^2$   $a_\Sigma^2$   $a_N^2$   $a_\Delta^2$
  ------------------------------------ --------- --------------- -------------- --------- --------------
  $\pi^+ p \rightarrow \Sigma^+ K^+$   1         1               1/2            0         1
  $\pi^0 p \rightarrow \Sigma^+ K^0$   1/2       0               1              1/2       2/9
  $\pi^0 p \rightarrow \Sigma^0 K^+$   1/4       1               0              1/4       4/9
  $\pi^- p \rightarrow \Sigma^0 K^0$   1/2       0               1              1/2       2/9
  $\pi^- p \rightarrow \Sigma^- K^+$   0         1               1/2            1         1/9
Doing the same calculation for each diagram, each channel, and each vertex gives the coefficients $a_{X_i}$, once a global normalization has been chosen. The counterpart of this normalization is hidden in the isospin-independent part of the matrix element. Here the choice is that the largest $a_{X_i}$ is equal to 1. All $a^2_{X_i}$ are given in . Only channels with an incoming proton are given here, since channels with an incoming neutron can easily be deduced. It can be seen that the $a^2_{X_i}$ coefficients of the $\pi^0p\rightarrow~\Sigma^+K^0$ channel are equal to those of the $\pi^- p \rightarrow \Sigma^0 K^0$ channel. Therefore, we can infer: $$\sigma(\pi^0 p \rightarrow \Sigma^+ K^0) = \sigma(\pi^- p \rightarrow \Sigma^0 K^0).$$
Second, another interesting point is given by the following relations: $$\begin{aligned}
2 a_N^2 = a_\Lambda^2 + 2 a_\Sigma^2 - 2 a_K^2,\\
9 a_\Delta^2 = 2 a_\Lambda^2 - 2 a_\Sigma^2 + 8 a_K^2.\end{aligned}$$ Thus, if we define three new matrix elements: $$\begin{aligned}
|\mathfrak{M}_1|^2 = |\mathfrak{M}_K|^2 - |\mathfrak{M}_N|^2 + \frac{8}{9}|\mathfrak{M}_\Delta|^2, \\
|\mathfrak{M}_2|^2 = |\mathfrak{M}_\Lambda|^2 + \frac{1}{2} |\mathfrak{M}_N|^2 + \frac{2}{9}|\mathfrak{M}_\Delta|^2, \\
|\mathfrak{M}_3|^2 = |\mathfrak{M}_\Sigma)|^2 + |\mathfrak{M}_N|^2 - \frac{2}{9}|\mathfrak{M}_\Delta|^2,\end{aligned}$$ becomes: $$\sigma (\pi \! N \! \rightarrow \! \Sigma \! K) = a_K^2 \int |\mathfrak{M}_1|^2 d\Omega + a_{\Lambda}^2 \int |\mathfrak{M}_2|^2 d\Omega + a_{\Sigma}^2 \int |\mathfrak{M}_3|^2 d\Omega.$$
The $|\mathfrak{M}_i|^2$ being unknown, extra hypotheses are needed to obtain other relations between the cross sections of the different channels. Their reliability will, however, directly affect the reliability of the final result. The hypothesis for this show-case is the following: the experimental data exhibit some similarities between the known channel cross sections (3 of the 10 channels that should be parametrized are reasonably well measured). It can therefore reasonably be argued that: $$\sigma (\pi^- p \rightarrow \Sigma^0 K^0) \approx \sigma(\pi^- p \rightarrow \Sigma^- K^+).$$ This implies: $$|\mathfrak{M}_1|^2 = 2 |\mathfrak{M}_2|^2 - |\mathfrak{M}_3|^2.$$
Finally, two more hypotheses are necessary to link the isospin channel cross sections of the reaction $\pi N~\rightarrow~\Sigma K$. First $N$ and/or $\Delta$ exchanges were neglected, because the strange decay ratio is very weak for most of the resonances. Second, the graphs with a $\Lambda$ exchange and a $\Sigma$ exchange are supposed to be equivalent, because of their similar nature. Doing so, it follows: $$|\mathfrak{M}_K|^2 = |\mathfrak{M}_\Lambda|^2 = |\mathfrak{M}_\Sigma|^2.$$ We finally get: $$\begin{aligned}
\sigma(\pi^+ p \rightarrow \Sigma^+ K^+) & = \sigma(n \pi^- \rightarrow \Sigma^- K^0) \nonumber \\
= \frac{5}{3} \sigma(\pi^0 p \rightarrow \Sigma^+ K^0) & = \frac{5}{3} \sigma(\pi^- p \rightarrow \Sigma^0 K^0) \nonumber \\
= \frac{5}{3} \sigma(n \pi^+ \rightarrow \Sigma^0 K^+) & = \frac{5}{3} \sigma(n \pi^0 \rightarrow \Sigma^- K^+) \nonumber \\
= 2 \sigma(\pi^0 p \rightarrow \Sigma^0 K^+) & = 2 \sigma(n \pi^0 \rightarrow \Sigma^0 K^0) \nonumber \\
= \frac{5}{3} \sigma(\pi^- p \rightarrow \Sigma^- K^+) & = \frac{5}{3}\sigma(n \pi^+ \rightarrow \Sigma^+ K^0).\end{aligned}$$
After all necessary relations have been found, the result is always compared to the experimental data and/or to the predictions of the Bystricky procedure, if available, in order to check whether the hypotheses used are reasonable. Unfortunately, in this special case the result obtained with the HEM procedure is not very satisfactory (see ), likely due to unreliable hypotheses.
We anticipate that the Bystricky-procedure predictions, combined with the available experimental data, are sufficient to parametrize all $\pi N \rightarrow \Sigma K$ channels. The exclusive cross sections were then fitted channel by channel for those with experimental data, and the remaining cross sections were determined using the symmetries from the Bystricky procedure. However, in cases without enough experimental data, the relations obtained with sometimes questionable hypotheses must be kept. In general, the reliability of the relations found using this method decreases with an increasing number of outgoing particles. This is due to the increasing number of Feynman diagrams that should be taken into account, which in turn increases the number of hypotheses needed. An example of a case that works well, even if the prediction does not match the experimental data perfectly over the entire energy range for many channels, is shown in .
The errors introduced by this method on the isospin-averaged cross sections are estimated to be around 10%-20%, provided that the hypotheses are wisely chosen, because, even if a specific isospin channel is under- or overestimated by a large factor, the Bystricky procedure provides relatively strong constraints on the isospin-averaged cross sections. The list of all graphs considered and relations found is available in .
Thanks to the use of isospin symmetry in the hadron exchange model, combined with experimental data and the Bystricky procedure, around 72% of the required information () can be obtained.
Enlarging the data set
----------------------
Unfortunately, both methods, which are based on isospin symmetries in combination with experimental data, are not sufficient to provide a parametrization for all reactions listed in . The missing cross sections were either obtained from models or from our best knowledge of [*similar*]{} reactions (notably based on reactions already studied in a previous version of INCL [@incl]). The reactions of interest are:
- $NN \rightarrow NN K \overline{K}$,
- $NN \rightarrow N \Lambda K \pi$, $NN \rightarrow N \Sigma K \pi$,
- $NN \rightarrow N \Lambda K \pi\pi$, $NN \rightarrow N \Sigma K \pi\pi$.
The parametrization of the $NN \rightarrow NN K \overline{K}$ reaction cross section is taken from [@sibirtsev] (Eq. 21).
For the other four reactions we assume similarities with the already included reactions $\sigma(NN \rightarrow NN \pi)$ and $\sigma(NN \rightarrow NN \pi\pi)$, taking into account the center-of-mass energy ($\sqrt s$ in MeV in the following equations). Actually, in these cases, the change in the shape of the cross sections when adding a pion in the final state is assumed to be the same as when a hyperon and a kaon replace a nucleon and a pion. $$\sigma_{NN \rightarrow N \Lambda K \pi}(\sqrt s) = 3\ \sigma_{NN \rightarrow N \Lambda K}(\sqrt s) \times \frac{\sigma_{NN \rightarrow NN \pi\pi}(\sqrt s - 540)}{\sigma_{NN \rightarrow NN \pi}(\sqrt s - 540)},
\label{NLKpi}$$ $$\sigma_{NN \rightarrow N \Sigma K \pi}(\sqrt s) = 3\ \sigma_{NN \rightarrow N \Sigma K}(\sqrt s) \times \frac{\sigma_{NN \rightarrow NN \pi\pi}(\sqrt s - 620)}{\sigma_{NN \rightarrow NN \pi}(\sqrt s - 620)},
\label{NSKpi}$$ $$\sigma_{NN \rightarrow N \Lambda K \pi\pi}(\sqrt s) = \sigma_{NN \rightarrow N \Lambda K \pi}(\sqrt s) \times \frac{\sigma_{NN \rightarrow NN \pi\pi}(\sqrt s - 675)}{\sigma_{NN \rightarrow NN \pi}(\sqrt s - 675)},$$ $$\sigma_{NN \rightarrow N \Sigma K \pi\pi}(\sqrt s) = \sigma_{NN \rightarrow N \Sigma K \pi}(\sqrt s) \times \frac{\sigma_{NN \rightarrow NN \pi\pi}(\sqrt s - 755)}{\sigma_{NN \rightarrow NN \pi}(\sqrt s - 755)}.$$
The factor 3 used in and is a normalization factor needed to fit the few available experimental data. The method was tested using the same type of reaction cross sections (with or without strangeness production) with the $\pi N$ initial state, which are already relatively well described. A factor of approximately 3 also appears between the cross-section ratio $\sigma_{\pi N~\rightarrow~N \pi\pi\pi}/\sigma_{\pi N~\rightarrow~N \pi\pi}$ and the cross-section ratio $\sigma_{\pi N \rightarrow Y K \pi}/\sigma_{\pi N \rightarrow Y K}$ with the appropriately shifted center-of-mass energy. Note that this verification starts with the $\pi N~\rightarrow~N \pi \pi$ reaction, because the reaction $\pi N~\rightarrow~N \pi$ is elastic and therefore clearly not similar to $\pi N \rightarrow Y K$.
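As an illustration of how such a similarity-based estimate can be coded, a hedged sketch of the $NN \rightarrow N \Lambda K \pi$ estimate above is given below; the three base cross sections are passed as callables and stand for fitted parametrizations that are not reproduced here.

```python
# Sketch of the similarity-based estimate for sigma(NN -> N Lambda K pi).
# sigma_NLK, sigma_NNpi, sigma_NNpipi are callables returning cross sections
# (e.g. in mb) as functions of sqrt(s) in MeV; they are placeholders for the
# fitted parametrizations and are not provided here.
def sigma_NLK_pi(sqrt_s, sigma_NLK, sigma_NNpi, sigma_NNpipi, shift=540.0):
    s_shifted = sqrt_s - shift              # shifted center-of-mass energy
    ratio = sigma_NNpipi(s_shifted) / sigma_NNpi(s_shifted)
    return 3.0 * sigma_NLK(sqrt_s) * ratio  # factor 3: normalization from data
```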
The charge repartition is determined using the work done in for the $NN~\rightarrow~NN K \overline{K}$, $NN \rightarrow N \Lambda K \pi$, and $NN \rightarrow N \Sigma K \pi$ reactions. As discussed previously, the method based on the hadron exchange model is not used to calculate the total cross sections for those reactions (too many hypotheses would be needed), but it can be used to determine the charge repartition. The charge repartitions for $NN \rightarrow N \Lambda K \pi\pi$ and $NN \rightarrow N \Sigma K \pi\pi$ were determined using an approach by Iljinov et *al.* [@iljinov], simplified so as to take into account only the combinatorics of the final state, as was done in the Bertini model [@bertini]. The method determines the ratio of channel cross sections of the same reaction based only on the particle multiplicities in the final state as: $$\frac{\sigma \left( A+B\rightarrow \hspace{-2mm} \sum\limits_{i=n,p,\pi^+,...} \hspace{-2mm} x_i i \right) }{\sigma \left( A'+B'\rightarrow \hspace{-2mm} \sum\limits_{j=n,p,\pi^+,...} \hspace{-2mm} x_j j \right) } = \frac{\prod\limits_{i=n,p,\pi^+,...} x_i!}{\prod\limits_{j=n,p,\pi^+,...}^{} x_j!}$$ with $x_i$ the number of particles of type $i$ in the final state.
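A minimal sketch of this combinatorial prescription is given below; the particle labels are purely illustrative and the weights simply follow the factorial formula above.

```python
# Sketch: relative combinatorial weight of a charge channel, taken as the
# product of factorials of the final-state multiplicities (formula above).
from math import factorial
from collections import Counter

def combinatorial_weight(final_state):
    """final_state: list of particle labels, e.g. ['p', 'Lambda', 'K+', 'pi0']."""
    w = 1
    for multiplicity in Counter(final_state).values():
        w *= factorial(multiplicity)
    return w

w1 = combinatorial_weight(['p', 'Lambda', 'K+', 'pi0', 'pi0'])
w2 = combinatorial_weight(['p', 'Lambda', 'K0', 'pi+', 'pi0'])
print(w1 / w2)   # ratio of the two channel cross sections: 2
```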
In addition, and as mentioned in and , two additional reaction types must be taken into account: strangeness production reactions with numerous particles in final states and $\Delta$-induced strange production reactions.
With increasing energy, kaon production is associated with an increasing number of particles in the final state and, consequently, the reactions listed in are not sufficient to account for kaon production. Actually, the additional particles are mostly pions, as indicated by the Fritiof model [@fritiof] (see ). Therefore, regarding the high-energy reactions $NN \rightarrow K + X$ and $\pi N \rightarrow K + X$, inclusive parametrizations of the cross sections are determined from experimental measurements, and individual cross sections can be generated by trying to reproduce as well as possible the particle multiplicities given by the Fritiof model [@fritiof] using a random generator.
![Particle rate per reaction in $pp \rightarrow K + X$ reactions in the Fritiof model as a function of the incident proton momentum.[]{data-label="fritiof"}](fritiof-rate.png){width="0.5\columnwidth"}
The parametrizations for the $\Delta$-induced strangeness-production cross sections listed in are taken from [@tsushima], except for the reaction $\Delta N \rightarrow NN K \overline{K}$, which is discussed below and given in . Since the estimates given by [@tsushima] for the cross sections related to $\Delta N$ collisions are very large compared to the cross sections related to $NN$ collisions with the same final states (factor $\sim$10), it was decided to take the isospin-averaged cross section $\sigma(\Delta N\rightarrow~NNK\overline{K})$ as 10 times the isospin-averaged cross section $\sigma(NN\rightarrow~NNK\overline{K})$.
Even if the number of $\Delta$ particles present in the nuclear volume during the collision is significantly lower than the number of pions and nucleons, $\Delta$-induced reactions are expected to contribute significantly to strangeness production. Indeed, the cross sections calculated by Tsushima et *al.* [@tsushima] for $\Delta$-induced reactions are much larger than those measured for pion-induced or nucleon-induced reactions. However, for these parametrizations they used hypotheses that are not obviously valid over the entire energy range studied in this work, and the experimental data for $NN\rightarrow~NYK$, calculated with the same hypotheses, are not always well reproduced (see [@tsushima], Fig. 7). Considering the rather large uncertainties associated with these theoretical cross sections, this kind of reaction is expected to be the largest source of error on strangeness production in our code.
The charge repartition was determined based on information obtained from the Bystricky procedure and the Hadron Exchange Model.
Parametrizations {#fitt}
----------------
Different generic formulae were used to parametrize the reaction cross sections. The reactions considered are of two types: elastic and inelastic. This section presents our choice of fit functions. We give below the generic formulae and, in , the parametrizations for all reactions in the whole energy domain considered (momentum in the laboratory frame of reference below $15~GeV$).
The elastic scattering cross sections become extremely large when the incoming particle momentum goes to zero. Upper limits are placed at low energies to avoid cross-section divergences. These limits have no consequences on the final result if placed high enough, because the cross sections are only used to determine which reaction will contribute. The elastic cross sections are too complex in the energy range studied here to be described by a single function. Therefore, the energy range was split into several parts in order to obtain better parametrizations of the cross sections. The following functions were used: $$\sigma(p_{Lab}) = a + b \ e^{-c \ p_{Lab}},$$ $$\sigma(p_{Lab}) = a + b \ p_{Lab}^{-c}.$$
Note that this kind of reaction is often resonant; the resonances are fitted by adding bumps of Gaussian shape on the underlying background.
The quasi-elastic reactions, which are $N K~\rightarrow~N' K'$, $N \overline{K}~\rightarrow~N' \overline{K}'$, and $N \Sigma~\rightarrow~N' \Sigma'$, are especially problematic at low energies with respect to the assumption of isospin symmetry because of the existence or absence of reaction thresholds. This asymmetry is taken into account by a cross section shift, which “breaks” the isospin symmetry hypothesis for both reactions.
The inelastic cross sections are the most important for the physics studied here. A lot of different formulae were tested. The following function, which is similar to formulae found in literature, gives good results for most reactions. We used the basic formula over the entire energy range even for those reactions where only few data concentrated in a narrow energy range exist. $$\label{zea}
\sigma(p_{Lab}) = a \frac{(p_{Lab} - p_0)^b}{(p_{Lab} + p_0)^c \ p_{Lab}^d},$$ with $p_0$ the threshold momentum and $a$, $b$, $c$, and $d$ positive fitting parameters. In a few cases, Gaussian functions are added in order to fit resonances.
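To make the use of this generic shape concrete, a small sketch is given below showing how it could be implemented and adjusted with scipy; the data arrays, threshold, and starting values are placeholders, not the parameters actually retained in our tables.

```python
# Sketch of the generic inelastic fit function a (p - p0)^b / ((p + p0)^c p^d);
# below the threshold p0 the cross section is set to zero. Placeholder values only.
import numpy as np
from scipy.optimize import curve_fit

def sigma_inel(p_lab, a, b, c, d, p0):
    p_lab = np.asarray(p_lab, dtype=float)
    sigma = np.zeros_like(p_lab)
    above = p_lab > p0
    sigma[above] = (a * (p_lab[above] - p0)**b
                    / ((p_lab[above] + p0)**c * p_lab[above]**d))
    return sigma

# Example of adjusting a, b, c, d to data with a fixed, kinematical threshold:
# popt, _ = curve_fit(lambda p, a, b, c, d: sigma_inel(p, a, b, c, d, p0=1.05),
#                     p_data, sigma_data, p0=[1.0, 1.0, 1.0, 0.5])
```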
Characteristics of the final states {#IV}
===================================
After fixing the type of reaction, the final state must be determined. Doing so, charge and momentum must be assigned to each particle in the final state.
In most cases, the charge repartition is determined using isospin symmetry and the hadron exchange model, which both predict relations between the isospin channel cross sections. The ratios are given in . We then randomly choose the charge repartition using the ratios determined before. For the reaction $NN \rightarrow N Y K \pi\pi$, the Bystricky procedure and the hadron exchange model discussed in are not able to provide any ratio. Therefore, the simplified Iljinov et *al.* approach [@iljinov] is used.
The other information needed to define the final state is the three-momentum of outgoing particles. In INCL, there are two different options to determine the kinematics of outgoing particles: the first one is to provide an angular distribution based on experimental measurements. The second one is to use a phase space generator, which is isotropic for the simplest cases or more sophisticated for more complex cases (Kopylov [@kopylov] or Raubold-Lynch [@james]). Typically, no experimental data are available and therefore, phase space generators are used. Nevertheless, studies providing Legendre coefficient have been carried out for $\overline{K}N$[@DA6; @DA20; @DA21; @DA212; @DA24; @DA34; @DA85; @DA90; @DA93; @DA96; @DA105; @DAa; @DAb; @DAc] and $\pi N$[@piN1; @piN2; @piN3; @piN4; @piN5; @piN6; @piN7] elastic and quasi-elastic reactions. The results are used to provide angular distributions for $\overline{K}N$ and $\pi N$ reactions. Details are given and summarized in .
$\Delta p(MeV/c)$ Reaction Refs
------------------- --------------------------------------- ----------------------------------------------------------------
225 - 2374 $K^-p \rightarrow K^-p$ [@DAb; @DA96; @DA21; @DA34; @DA105; @DA212; @DA93; @DAa; @DA6]
235 - 1355 $K^-p \rightarrow \overline{K}^0 n$ [@DAb; @DA21; @DA34; @DAc; @DA90; @DA105]
436 - 1843 $K^-p \rightarrow \Lambda \pi^0$ [@DA21; @DA24; @DA85; @DA90; @DA105; @DA20]
436 - 865 $K^-p \rightarrow \Sigma^0 \pi^0$ [@DA21; @DA24; @DA85]
436 - 1843 $K^-p \rightarrow \Sigma^\pm \pi^\mp$ [@DA21; @DA90; @DA105; @DA24]
930 - 2375 $\pi^- p \rightarrow K^0 \Lambda^0$ [@piN1; @piN2; @piN3]
1040 - 2375 $\pi^- p \rightarrow K^0 \Sigma^0$ [@piN4; @piN5]
1105 - 2473 $\pi^+ p \rightarrow K^+ \Sigma^-$ [@piN6; @piN7]
: List of reactions where the angular distributions were studied experimentally. Momentum range and references are given.[]{data-label="ref_AD"}
The angular distributions for a given energy are usually parametrized using Legendre polynomials as follows:
$$\label{Leg}
\frac{d\sigma (\sqrt{s},\Theta_{c.m.})}{d\Omega} = {\mbox{\makebox[-0.7ex][l]{$\lambda$} \raisebox{0.5ex}[0pt][0pt]{\textbf{--}}}}^2(\sqrt{s}) \sum_{l=0}^{n} A_l(\sqrt{s}) P_l(\cos\Theta_{c.m.}),$$
with ${\mbox{\makebox[-0.7ex][l]{$\lambda$} \raisebox{0.5ex}[0pt][0pt]{\textbf{--}}}}$ the c.m. reduced wavelength, $A_l$ the $l^{th}$ Legendre coefficient, $\sqrt{s}$ the center of mass energy, $\Theta_{c.m.}$ the angle of the outgoing particle with its initial momentum in the center of mass reference frame, and $P_l$ the $l^{th}$ order Legendre polynomial.
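For illustration, the sketch below evaluates such an angular distribution from a set of tabulated Legendre coefficients and draws an emission angle from it; the coefficients are illustrative and not taken from the fitted tables.

```python
# Sketch: angular distribution from Legendre coefficients A_l (illustrative
# values) and a simple grid-based sampling of cos(theta) in the c.m. frame.
import numpy as np
from numpy.polynomial.legendre import legval

def dsigma_domega(cos_theta, A_l, lambdabar2=1.0):
    """lambdabar^2 * sum_l A_l P_l(cos theta)."""
    return lambdabar2 * legval(cos_theta, A_l)

A_l = [1.0, 0.4, -0.1]                          # A_0, A_1, A_2 (illustrative)
cos_grid = np.linspace(-1.0, 1.0, 201)
w = np.clip(dsigma_domega(cos_grid, A_l), 0.0, None)  # negative densities set to zero
cos_sample = np.random.default_rng(0).choice(cos_grid, p=w / w.sum())
```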
The experimental papers treating the angular distributions often provide the $A_l$ at different energies [@DA6; @DA20; @DA21; @DA212; @DA24; @DA34; @DA85; @DA90; @DA93; @DA96; @DA105; @DAa; @DAb; @DAc; @piN1; @piN2; @piN3; @piN4; @piN5; @piN6]. If this is not the case, as in [@piN7], the Legendre coefficients were determined by us (c.f. ). However, the Legendre coefficients determined in experiments depend strongly on the experimental set-up, e.g., the backward detection and the angular binning, and can therefore provide an angular distribution that is only valid in a partial angular range. Sometimes aberrations, like negative probability densities, also appear. In an intranuclear cascade model, a description of the Legendre coefficients as a function of energy is needed. Therefore, a direct (non-parametric) fit of the $A_l$, using all Legendre coefficients coming from the experiments, was performed. Using these fitted $A_l$, we observed that most of the negative-probability-density problems disappeared. When such problems persist, the density is set to zero. Thanks to the cross-section parametrization (see ), only the $A_i(\sqrt s)/A_0(\sqrt s)$ ratios need to be fitted. Below, we elaborate on the two methods used to define the $A_i(\sqrt s)/A_0(\sqrt s)$ ratios in the given energy range.
The first method used is the Nadaraya–Watson kernel regression [@nadaraya]. The parametrization of the ratios is obtained by determining the function $\hat{m_h}(x)$ given by: $$\label{Kernel}
\hat{m_h}(x) = \frac{\sum_{i=1}^n K_h(x-x_i) \cdot y_i}{\sum_{i=1}^n K_h(x-x_i)},$$ with $(x_i,y_i)$ the set of $n$ data points and $K_h$ a kernel, here a Gaussian whose standard deviation is defined so that its quartiles (viewed as a probability density) are at $\pm 0.25 h$. The denominator in is the normalization term. In our analysis, the bandwidth was chosen as $h = 25$, $50$, $100$, $150$, or $200~MeV/c$, either over the whole energy range or according to energy bins. The latter case is used when complex structures or narrow resonances appear, taking care to avoid fitting non-physical fluctuations.
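A compact implementation of this estimator is sketched below; the relation between the bandwidth $h$ and the Gaussian width (quartiles at $\pm 0.25 h$ in the text) is absorbed into a single width parameter here.

```python
# Sketch of the Nadaraya-Watson estimator with a Gaussian kernel; 'width' plays
# the role of the kernel scale (the exact quartile convention is not reproduced).
import numpy as np

def nadaraya_watson(x_eval, x_data, y_data, width):
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    diffs = (x_eval[:, None] - np.asarray(x_data, dtype=float)[None, :]) / width
    weights = np.exp(-0.5 * diffs**2)           # Gaussian kernel
    return weights @ np.asarray(y_data, dtype=float) / weights.sum(axis=1)
```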
The second method used is the smoothing-spline regression [@smoothingsplines]. This method consists in minimizing the following function: $$\sum_{i=1}^n \left(y_i - \hat{\mu}(x_i) \right) ^2 + \lambda \int_{x_1}^{x_n} \left(\hat{\mu}^{''}(x)\right)^2 dx,$$ with $(x_i,y_i)$ the set of $n$ data points, $\hat{\mu}$ the non-parametric fit function (a spline), and $\lambda$ the smoothing parameter. This method corresponds to the common $\rchi^2$ minimization with a second term used to limit fast variations of the fit function. The smoothing parameter was optimized by hand in each case to obtain a good compromise between smoothness and proximity to the data, in order to fit resonances while avoiding fitting the noise.
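A discrete analogue of this penalized fit (a Whittaker-type smoother on the data grid) is sketched below to illustrate the role of the smoothing parameter $\lambda$; it is not the exact spline implementation used here.

```python
# Sketch: minimize sum_i (y_i - mu_i)^2 + lam * sum (second difference of mu)^2
# on the data grid, i.e. solve (I + lam * D^T D) mu = y.
import numpy as np

def penalized_smooth(y, lam):
    y = np.asarray(y, dtype=float)
    n = y.size
    D = np.diff(np.eye(n), n=2, axis=0)     # second-difference operator
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```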
As already mentioned, there is no fit function for the two non-parametric methods. The result is a tabulation of Legendre coefficients as a function of the momentum with bins as small as needed. An example is shown in .
![Example of $A_1(\sqrt{s})/A_0(\sqrt{s})$ fit in the case $K^- p \rightarrow \Lambda \pi^0$ using Nadaraya-Watson kernel regression (blue), and smoothing spline regression (red).[]{data-label="regression"}](splines_vs_kernel.png){width="0.5\columnwidth"}
The two methods use completely different ways of fitting but give very similar results, as shown in . The choice to use one or the other was made case by case. Outside the data range, it was decided to use an isotropic distribution in the energy range below the experimental data and a more and more forward-peaked distribution at higher energies.
Tables used in INCL are available as electronic supplementary material in “tabulation.pdf”. Note that the extrapolation of the $A_i(\sqrt s)/A_0(\sqrt s)$ outside the energy range considered here is not reliable and is likely to produce unphysical results.
Comparison with other models {#V}
============================
Here we compare the input ingredients determined in this paper, namely cross sections, charge repartition, and phase-space generation, to the same ingredients available in the literature and already used in other models considering strangeness production in the same energy range. These models are: (i) INCL2.0 [@joseph; @deneye], a version developed to study anti-proton physics and including kaon physics, (ii) the Bertini cascade model [@bertini], and (iii) the GiBUU model [@gibuu]. For this comparison, different examples will be discussed in order to show the strengths and weaknesses of each model.
{width="0.5\columnwidth"}
The different models parametrize the reactions using different methods. The Bertini cascade model tabulates the cross sections based on parametrization and calculation at 30 kinetic energies corresponding to as many intervals whose width is increasing logarithmically with the incident energy and spanning the $0$ to $32~$GeV domain. In INCL2.0, cross sections were parametrized only for reactions with two particles in the final state. The parametrization is often a fit in one or two parts using a formula like $\sigma = a~p^b$, with $p$ the momentum in the laboratory frame of reference. In the GiBUU model, the energy range is divided in two parts: the *low-energy* part is fitted with parametrizations and the *high-energy* part is treated using PYTHIA [@pythia], which is based on the Lund string model [@lund]. The transition between the *low-energy* parametrization and the PYTHIA predictions is a smooth linear transition in an energy transition range. The energy range considered in GiBUU is $\sqrt{s} = 2.2\pm 0.2~$GeV in meson-baryon collisions, which corresponds, in term of momentum, to $2.1\pm 0.5~$GeV/c for pion nucleon collisions and to $1.9\pm 0.2~$GeV/c for kaon nucleon collisions, and $\sqrt{s} = 3.4\pm 0.1~$GeV in baryon-baryon collision, which corresponds to $5.1\pm 0.4~$GeV/c for nucleon-nucleon collisions.
Nucleon-nucleon collisions contribute strongly to strangeness production. The first open reaction channel with a proton as projectile is $pp \rightarrow p \Lambda K^+$, which is important at low energies but contributes less and less at high energies. As shown in , all models reproduce the experimental cross sections well. However, in the range $3.7-5~$GeV/c, where there are no experimental data, there are significant differences between the fits. Such differences are very common when experimental data are not available in some energy range and/or are rather inconsistent.
{width="0.5\columnwidth"}
![The $\pi^- p \rightarrow \Lambda K^0$ and $\pi^+ p \rightarrow \Sigma^+ K^+$ cross-section fits from the Bertini cascade model (green line), GiBUU (blue line), INCL2.0 (orange line), and this work (red line) compared to experimental data (black dots) as a function of the incident pion momentum.[]{data-label="pipLK"}](pipLK.png "fig:"){width="0.45\columnwidth"} ![The $\pi^- p \rightarrow \Lambda K^0$ and $\pi^+ p \rightarrow \Sigma^+ K^+$ cross-section fits from the Bertini cascade model (green line), GiBUU (blue line), INCL2.0 (orange line), and this work (red line) compared to experimental data (black dots) as a function of the incident pion momentum.[]{data-label="pipLK"}](pipSK.png "fig:"){width="0.45\columnwidth"}
A typical problematic channel is $pp \rightarrow~p \Sigma^+ K^0$. The parametrization from our work matches the experimental data relatively well at momenta up to $4~$GeV/c but underestimates the high-energy part. This is due to the compromise between inclusive calculations from the Fritiof model [@fritiof] and exclusive cross-section measurements. We chose to artificially reduce our fit in order to be consistent with the inclusive cross-section data. However, this type of reaction could deserve extra work, depending on its contribution in INC models. Another crucial point for this type of reaction is the inconsistency of the experimental data. For example, the two measurements around $3.7~$GeV/c differ by a factor of $3$, and the data point at $10~$GeV is suspiciously high, not only compared to the other data for this reaction but also compared to other isospin channels, which seem to show decreasing cross sections with increasing energy. The parametrizations in the other models differ strongly from our work. The Bertini cascade model and the GiBUU model, which uses the formula from [@tsushima] divided by a factor of $1.5$, represent other compromises with the experimental data.
{width="0.45\columnwidth"}
Figure \[pipLK\] highlights a problem with the INCL2.0 parametrizations. The parametrization describes the magnitude of the cross sections correctly but does not reproduce their energy dependence well. As seen in Fig. \[pipLK\], the cross section is slightly overestimated in the range $1.5-2~$GeV/c for the $\pi^- p \rightarrow \Lambda K^0$ reaction and in the range $1.5-10~$GeV/c for the $\pi^+ p \rightarrow \Sigma^+ K^+$ reaction. In the Bertini cascade model tabulations, because of the small number of energy intervals, rapid variations of the cross sections with energy can be missed. For example, for the reaction $\pi^- p \rightarrow \Lambda K^0$ shown in Fig. \[pipLK\], the Bertini cascade model reproduces the experimental data well near the threshold and at high energies but, the first interval being too wide, part of the cross section is underestimated. The $\pi^- p \rightarrow \Lambda K^0$ cross section from the GiBUU model is close to the experimental data up to $1.4~$GeV/c but, surprisingly, there are relatively large deviations from the experimental data at higher momenta. However, this deviation lies in the energy range of the transition between the parametrization and the PYTHIA model (see above). Note also that the parametrization for the reaction $\pi^+ p \rightarrow \Sigma^+ K^+$ from this work is slightly shifted to higher energies (by about $10~$MeV, so visible only at low energies) because isospin invariance assumes an equal mass for all particles belonging to the same multiplet. Here, the mass of a multiplet was taken as the heaviest mass in the multiplet, which can produce this artefact.
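As a rough cross-check of the quoted shift of about $10~$MeV, the following estimate assumes that the shift simply reflects the mass differences within the $\Sigma$ and $K$ multiplets when the heaviest member is used for the whole multiplet (PDG masses rounded to $0.1~$MeV); this is only an order-of-magnitude check, not the exact procedure of the code.

```python
# Threshold shift for pi+ p -> Sigma+ K+ when each multiplet is assigned
# its heaviest mass (values in MeV, PDG masses rounded).
m_sigma_plus, m_sigma_minus = 1189.4, 1197.4   # Sigma+ (physical) / Sigma- (heaviest)
m_k_plus, m_k_zero          = 493.7, 497.6     # K+ (physical) / K0 (heaviest)

true_threshold    = m_sigma_plus + m_k_plus    # physical final-state masses
shifted_threshold = m_sigma_minus + m_k_zero   # heaviest-mass convention
print(f"threshold shift ~ {shifted_threshold - true_threshold:.1f} MeV")  # ~ 12 MeV
```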
{width="0.45\columnwidth"} {width="0.45\columnwidth"}
These comparisons also illustrate another important result: the predictions at high energies from the Bertini cascade model are significantly different from our results. However, since there are only very few experimental data in this energy range, we cannot state which model is more reliable. This phenomenon is also visible in other channels, though with more physical relevance. Deviations between experimental data and predictions are not very problematic when cross sections are relatively low, because other reactions dominate. However, deviations of two orders of magnitude, as seen for the reaction $K^- n\rightarrow~\Sigma^{0} \pi^-$, are much more significant. Again, looking only at the experimental data, it is not obvious which of the parametrizations is correct. Fortunately, for this special case the deviations have a low impact on the entire cascade, because antiKaons, except when they are projectiles, play a minor role (very low production yield).
{width="0.45\columnwidth"}
Resonances are not treated directly in our work. However, they appear as Gaussians in the cross-section parametrizations. If the hadron exchange model is used to determine a missing channel, those resonances also appear in the missing-channel cross section, even when they cannot be the intermediate state because of quantum-number considerations. As an example, the resonances fitted for the reaction $K^- p \rightarrow \Sigma^{0} \pi^0$ also appear in the $K^- n\rightarrow~\Sigma^{0} \pi^-$ cross-section fit, even though the third component of the isospin differs ($0$ for the former and $-1$ for the latter). Note that the GiBUU parametrization is not shown here because this reaction is treated in a different way, using resonant and non-resonant cross sections; therefore, no simple formula can be given. These two channels also reveal another problem with the earlier INCL2.0 parametrizations: resonances are not reproduced. In contrast, and as an improvement, the parametrizations proposed in this work and in the Bertini cascade model have no difficulty reproducing resonant cross sections.
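To fix ideas, the sketch below shows the generic shape used for such parametrizations: a smooth power-law background plus a Gaussian resonance term. All numerical values (background coefficients, resonance position, width, and amplitude) are arbitrary placeholders and not the coefficients fitted in this work.

```python
import numpy as np

def resonant_cross_section(p, a, b, amp, p0, width):
    """Smooth power-law background plus one Gaussian resonance term."""
    background = a * p**b
    resonance  = amp * np.exp(-0.5 * ((p - p0) / width) ** 2)
    return background + resonance

# Illustrative evaluation at a few laboratory momenta [GeV/c]
p = np.linspace(0.5, 3.0, 6)
print(resonant_cross_section(p, a=30.0, b=-1.0, amp=15.0, p0=1.05, width=0.1))
```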
Unlike the antiKaon-nucleon collision cross sections discussed above, the $K^+ p$ elastic cross section is important for the spallation process with either nucleons or pions as projectiles. This is due to the low production rate of antiKaons compared to Kaons. The comparison shows that the cross section is well reproduced by the results from this work and by the Bertini cascade model. The GiBUU model also gives a good description of the experimental data. Differences between the three approaches are observable at low energies, where they are not very relevant because of the lack of competing processes in this energy range. In contrast, the INCL2.0 model underestimates the cross section over the entire energy range.
{width="0.5\columnwidth"}
In general, the parametrizations of the different models fit the experimental data, when available, rather well. However, when experimental data are missing in an energy range, the fits can differ strongly.
The last two subjects developed in our work are the charge repartition and the phase-space generation. Since information about phase-space generation in other models is too scarce, a comparison between the different models is not possible. Concerning the charge repartition, different methods are used by the different models. The Bertini cascade model uses a simplified version [@bertini] of the Iljinov et *al.* approach [@iljinov]. In the GiBUU model, the charge repartition is determined using isospin rules and, in the case $\pi N \rightarrow NK\overline{K}$, using the hadron exchange model with $K^*$ and $\pi$ exchange diagrams. In INCL2.0, the charge repartition was determined using isospin-invariance rules, neglecting interferences.
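As an illustration of what a charge-repartition step amounts to in practice, the sketch below draws a final charge state with probability proportional to fixed relative weights. The weights follow the relative factors listed for $pp \rightarrow N \Sigma K$ in the appendix, $\sigma(p\Sigma^+K^0):\sigma(p\Sigma^0K^+):\sigma(n\Sigma^+K^+) = 2:1:8$, but the snippet is only a schematic of the sampling mechanism, not the implementation used in any of the models discussed.

```python
import random

# Relative weights for the pp -> N Sigma K charge states, following the ratios
# sigma(p Sigma+ K0) : sigma(p Sigma0 K+) : sigma(n Sigma+ K+) = 2 : 1 : 8
# given in the appendix (illustrative use only).
channels = {
    "p Sigma+ K0": 2.0,
    "p Sigma0 K+": 1.0,
    "n Sigma+ K+": 8.0,
}

def draw_charge_state(weights, rng=random):
    """Pick one final state with probability proportional to its weight."""
    total = sum(weights.values())
    r = rng.uniform(0.0, total)
    for state, w in weights.items():
        r -= w
        if r <= 0.0:
            return state
    return state  # numerical safety net

print(draw_charge_state(channels))
```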
Conclusion {#VII}
==========
A comprehensive and consistent description of all relevant elementary reactions involving strangeness production, scattering, and absorption when a light particle hits a nucleus was performed. Here we focused on energies below 15 GeV. The considered reactions are compiled in Tables 1 and 2. This work was motivated by the implementation of strange-particle physics into the intranuclear cascade model INCL, with two major goals: the refinement of the high-energy modelling (beyond 2-3 GeV) and the possibility to contribute to hypernucleus studies.
This description includes the parametrization of the reaction cross sections, the charge repartition, and the phase-space generation. These parametrizations are based on experimental measurements, when available, in order to be as model-independent as possible. Unfortunately, for the reaction cross sections, less than 20% of the needed information can be obtained directly in this way. Therefore, hypotheses and models were used to complete the parametrization. Isospin symmetry allows a large number of cross sections to be parametrized by linking known and unknown cross sections. This is applied in two different ways, either by taking into account only the initial and final states (the Bystricky procedure) or by considering the isospin symmetry at each vertex of the Feynman diagrams used in a hadron exchange model. Nevertheless, roughly one third of the cross sections still needed additional information for a full characterization. In a few cases where experimental data were scarce, it was then necessary to use similarities, *e.g.*, in the cross-section ratios when one pion is added. Finally, two types of reactions were fully based on modelling, *i.e.*, without possible confrontation with experimental data: reactions with numerous particles in the final state (increasingly important with energy) and Delta-induced reactions.
For quality control, we compared our cross sections to the experimental data and to the parametrizations used in other models. They reproduce the measurements quite well, but assessing the quality of our cross sections for reactions and energy ranges where no experimental data exist remains a problem. It is worth mentioning that the parametrizations often differ where no data point has been measured. A typical case is the $\Delta$-induced reactions, which should play an interesting role. No measurements exist, and our parametrization relies on a theoretical model stating that those channels contribute significantly to kaon and hyperon production \[17\].
This set of newly parametrized cross sections, dealing with strangeness, will be implemented in the INCL code; the formulae are given in Appendix B. Calculations of Kaon and hyperon production, as well as of hypernucleus production, from interactions of a light particle with a nucleus will soon be performed and compared to experimental data. Depending on the available measurements, not only will the reliability of the parametrizations obtained in this work be estimated, but also the role and the weight of the different elementary reactions analysed. Those comparisons could add new constraints on the latter.
We hope that this compilation of formulae will be useful not only to the users of transport codes, but also to model developers and physicists interested in hypernuclear physics.
Acknowledgements
================
The authors would like to thank Janus Weil and the GiBUU collaboration, Denis Wright, Nikolai Mokhov, and Gudima Konstantin for the calculations with the different models. We also thank Georg Schnabel and Jose-Luis Rodriguez-Sanchez for useful and productive discussions.
J.-C. David et al, [Memorie della Società Astronomica Italiana, Vol 82 N.4 - 909-912 (2011)](http://sait.oat.ts.astro.it/MSAIt820411/PDF/2011MmSAI..82..909D.pdf) and references therein.
S. Leray et *al.*, [J. Korean Phys. Soc. 59 791-796 (2011)](http://www.jkps.or.kr/journal/view.html?uid=12708&vmd=Full)
D. Mancusi et *al.*, [Phys. Rev. C 90 054602(2014)](https://doi.org/10.1103/PhysRevC.90.054602).
J.-C. David et *al.*, [Eur. Phys. J. A49 29 (2013)](https://link.springer.com/article/10.1140%2Fepja%2Fi2013-13029-4)
D. Mancusi et *al.*, [Phys. Rev. C91 034602 (2015)](https://journals.aps.org/prc/abstract/10.1103/PhysRevC.91.034602),
J.-L. Rodríguez-Sánchez et *al.*, [Phys. Rev. C96 054602 (2017)](https://link.aps.org/doi/10.1103/PhysRevC.96.054602)
S. Pedoux and J. Cugnon, [Nucl. Phys. A866 16-36 (2011)](http://www.sciencedirect.com/science/article/pii/S0375947411005483?via%3Dihub).
S. Pedoux, PhD thesis, University of Liège, 2012. [BICTEL/e - ULg](http://bictel.ulg.ac.be/ETD-db/collection/available/ULgetd-09022011-153156/)
O. Buss et *al.*, [Physics Reports 512 1-124 (2012)](http://dx.doi.org/10.1016/j.physrep.2011.12.001)
Y. Nara et *al.*, [Phys. Rev. C61 (1999) 024901](https://journals.aps.org/prc/abstract/10.1103/PhysRevC.61.024901), [arXiv:nucl-th/9904059 \[nucl-th\] (1999)](https://arxiv.org/abs/nucl-th/9904059).
S. G. Mashnik et *al.*, LANL Report LA-UR-08-2931, [arXiv:0805.0751v2 \[nucl-th\] (2008)](https://arxiv.org/abs/0805.0751).
J. Cugnon, P. Deneye, and J. Vandermeulen. [Phys. Rev. C41, 1701 (1990)](https://journals.aps.org/prc/abstract/10.1103/PhysRevC.41.1701).
Pierre Deneye, PhD thesis, University of Liège, 1991.
D.H. Wright, M.H. Kelsey, [Nuclear Instruments and Methods in Physics Research A804 175-188 (2015)](http://www.sciencedirect.com/science/article/pii/S0168900215011134)
B. Andersson et *al.*, [Phys. Rep. 97, 31-145 (1983)](http://www.sciencedirect.com/science/article/pii/0370157383900807?via%3Dihub).
K. Tsushima, A. Sibirtsev, A. W. Thomas, and G. Q. Li, [Phys. Rev. C59, 369](http://journals.aps.org/prc/abstract/10.1103/PhysRevC.59.369), [Erratum: Phys. Rev. C61, 029903 (2000)](http://journals.aps.org/prc/abstract/10.1103/PhysRevC.61.029903).
A. Baldini et *al.*, Landolt-Börnstein Numerical Data and Functional Relationships in Science and Technology. (Springer-Verlag, Berlin, 1988), Vol. 12 a and b.
HIRES Collaboration. [Physics Letters B692 10-14 (2010)](http://dx.doi.org/10.1016/j.physletb.2010.07.015)
A. Sibirtsev et *al.*, [Eur. Phys. J. A32, 229-241 (2007)](http://dx.doi.org/10.1140/epja/i2007-10370-1)
C. Gini, C. Cuppini, Bologna, 1912, 156 pages.
J. Bystricky, P. La France, F. Lehar, F. Perrot, T. Siemiarczuk, et P. Winternitz. [Journal de Physique, 1987, 48(11), pp.1901-1924.](https://www.researchgate.net/publication/45246173_Energy_dependence_of_nucleon-nucleon_inelastic_total_cross-sections)
G.Q. Li, C.M. Ko, [Nucl. Phys. A594 439-459 (1995)](http://www.sciencedirect.com/science/article/pii/037594749500369C)
Sibirtsev, A., Cassing, W. and Ko, C.M. [Z Phys A - Particles and Fields (1997) 358: 101](http://link.springer.com/article/10.1007%2Fs002180050282)
A.S. Iljinov, et *al.* [Nucl. Phys. A616 575 (1997)](http://www.sciencedirect.com/science/article/pii/S0375947496004782)
Vladimir Uzhinsky [(SNA + MC2010) Hitotsubashi Memorial Hall, Tokyo, Japan, October 17-21, 2010](https://geant4.web.cern.ch/geant4/results/papers/Fritiof-MC2010.pdf)
G.I. Kopylov, [Soviet Physics JETP 8, 996 (1959), translation](http://www.jetp.ac.ru/cgi-bin/dn/e_008_06_0996.pdf)
F. James. [CERN 68-15 (1968)](https://cds.cern.ch/record/275743/files/CERN-68-15.pdf)
C. Daum et *al.* [Nucl. Phys. B6 273-324 (1968)](http://www.sciencedirect.com/science/article/pii/0550321368900758)
S. Andersson-Almehed et *al.* [Nucl. Phys. B21 515-527 (1970)](http://www.sciencedirect.com/science/article/pii/0550321370905419)
J. Griselin et *al.* [Nucl. Phys. B93 189-216 (1975)](http://www.sciencedirect.com/science/article/pii/0550321375905696)
C.J. Adams et *al.* [Nucl. Phys. B96 54-66 (1975)](http://www.sciencedirect.com/science/article/pii/0550321375904575)
K. Abe et *al.* [Phys. Rev. D 12, 6 (1975)](http://journals.aps.org/prd/pdf/10.1103/PhysRevD.12.6)
B. Conforto et *al.* [Nucl. Phys. B34 41-70 (1971)](http://www.sciencedirect.com/science/article/pii/0550321371901088)
Terry S. Mast et *al.* [Phys. Rev. D14, 13 (1976)](http://journals.aps.org/prd/abstract/10.1103/PhysRevD.14.13)
R. Armenteros et *al.* [Nucl. Phys. B21 15-76 (1970)](http://www.sciencedirect.com/science/article/pii/055032137090461X)
B. Conforto et *al.* [Nucl. Phys. B105 189-221 (1976)](http://www.sciencedirect.com/science/article/pii/0550321376902625)
M. Jones, R. Levi Setti and D. Merrill. [Nucl. Phys. B90 349-383 (1975)](http://www.sciencedirect.com/science/article/pii/0550321375906525)
M. Alston-Garnjost et *al.* [Phys. Rev. D17, 2226 (1978)](http://journals.aps.org/prd/abstract/10.1103/PhysRevD.17.2226)
G.W. London et *al.* [Nucl. Phys. B85 289-310 (1975)](http://www.sciencedirect.com/science/article/pii/0550321375900097)
A. Berthon, L.K. Rangan, J. Vrana. [Nucl. Phys. B20 476-492 (1970)](http://www.sciencedirect.com/science/article/pii/0550321370903834)
A. Berthon, J. Vrana. [Nucl. Phys. B24 417-440 (1970)](http://www.sciencedirect.com/science/article/pii/0550321370902464)
T.M. Knasel et *al.* [Phys.Rev.D11, 1 (1975)](https://journals.aps.org/prd/abstract/10.1103/PhysRevD.11.1)
R.D. Baker et *al.* [Nucl. Phys. B141, 29 (1978)](https://www.sciencedirect.com/science/article/pii/0550321378903322)
D.H. Saxon et *al.* [Nucl. Phys. B162, 522 (1980)](https://www.sciencedirect.com/science/article/pii/0550321380903545)
R.D. Baker et *al.* [Nucl. Phys. B145 402-408 (1978)](http://www.sciencedirect.com/science/article/pii/0550321378900913)
J.C. Hart et *al.* [Nucl. Phys. B166, 73 (1980)](https://www.sciencedirect.com/science/article/pii/0550321380904903)
M. Winik, S. Toaff, D. Revel, J. Goldberg, L. Berny. [Nucl. Phys. B128, 66 (1977)](https://www.sciencedirect.com/science/article/pii/0550321377903005)
D.J. Candlin et *al.* [Nucl. Phys. B226, 1 (1983)](https://www.sciencedirect.com/science/article/pii/0550321383904613)
E. A. Nadaraya, [Theory Probab. Appl. 9(1), 141-142 (1964)](http://epubs.siam.org/doi/abs/10.1137/1109020)
C.H. Reinsch, [Numer. Math. 10, 177 (1967)](http://link.springer.com/article/10.1007%2FBF02162161)
T. Sjöstrand, S. Mrenna, and P. Z. Skands. [JHEP 05 (2006) 026](http://iopscience.iop.org/article/10.1088/1126-6708/2006/05/026/meta;jsessionid=0348763612409B948F69597BAE4398CA.c4.iopscience.cld.iop.org)
B. Andersson, G. Gustafson, G. Ingelman and T. Sjöstrand, [Phys. Rep. 97, 31-145 (1983)](http://www.sciencedirect.com/science/article/pii/0370157383900807)
Relations extracted from the hadron exchange model and from the Bystricky procedure {#channel}
====================================================================================
This appendix summarizes the relations obtained from the hadron exchange model (normal style) and the relations obtained from the Bystricky procedure (in bold).
In what follows, $N$ represents a nucleon, $\Delta$ a Delta particle, $B$ a nucleon or a Delta particle, $Y$ a hyperon, $\pi$ a pion, $K$ a Kaon (excluding $\overline{K}^0$ and $K^-$), and $\overline{K}$ an antiKaon.
The reliability of the equations displayed here is discussed in the paper. In summary, the bold equations (coming from the Bystricky procedure) are highly reliable. The normal-style equations (coming from the hadron exchange model) often rely on debatable hypotheses, which can produce surprising results, but they remain consistent with the equations in bold.
The reactions $N K~\rightarrow~N K$, $N \overline{K}~\rightarrow~N \overline{K}$, and $N \Lambda~\rightarrow~N \Lambda$ do not have symmetries, except the trivial ones. They also exhibit threshold effects; therefore, the hadron exchange model is not relevant for these reactions.
$$\begin{aligned}
\sigma(pp \rightarrow p\Lambda K^+) & = \sigma(nn \rightarrow n\Lambda K^0) \\
\sigma(pn \rightarrow p\Lambda K^0) & = \sigma(pn \rightarrow n\Lambda K^+)\end{aligned}$$
------------------------------------------------------------------------
$$\begin{aligned}
4 \sigma(pp \rightarrow p \Sigma^+ K^0) = 4 \sigma(nn \rightarrow n \Sigma^- K^+) & = 8 \sigma(pp \rightarrow p \Sigma^0 K^+) = 8 \sigma(nn \rightarrow n \Sigma^0 K^0) \\
= \sigma(pp \rightarrow n \Sigma^+ K^+) = \sigma(nn \rightarrow p \Sigma^- K^0) & = \frac{8}{5} \sigma(pn \rightarrow p \Sigma^0 K^0) = \frac{8}{5}\sigma(pn \rightarrow n \Sigma^0 K^+) \\
= 4 \sigma(pn \rightarrow p \Sigma^- K^+) & = 4 \sigma(pn \rightarrow n \Sigma^+ K^0)\end{aligned}$$
$$\begin{aligned}
\sigma(pn \rightarrow p \Sigma^- K^+) + \sigma(pp & \rightarrow n \Sigma^+ K^+) + \sigma(pp \rightarrow p \Sigma^+ K^0) \\
= 2 \sigma(pn \rightarrow p \Sigma^0 K^0) & + 2 \sigma(pp \rightarrow p \Sigma^0 K^+)\end{aligned}$$
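As a quick numerical illustration of the consistency between the two sets of relations, the sketch below expresses the $NN \rightarrow N\Sigma K$ cross sections in terms of $\sigma(pp \rightarrow p\Sigma^+K^0)$ using the hadron-exchange ratios above and verifies that the sum rule just above (from the Bystricky procedure) is satisfied; the overall normalization is arbitrary.

```python
# Express the NN -> N Sigma K cross sections through sigma(pp -> p Sigma+ K0) = x
# using the hadron-exchange ratios above, then check the Bystricky sum rule.
x = 1.0  # arbitrary normalization

sigma = {
    "pp->pS+K0": x,
    "pp->pS0K+": x / 2.0,        # 4 x = 8 sigma
    "pp->nS+K+": 4.0 * x,        # 4 x = sigma
    "pn->pS0K0": 5.0 * x / 2.0,  # 4 x = (8/5) sigma
    "pn->pS-K+": x,              # 4 x = 4 sigma
}

lhs = sigma["pn->pS-K+"] + sigma["pp->nS+K+"] + sigma["pp->pS+K0"]
rhs = 2.0 * sigma["pn->pS0K0"] + 2.0 * sigma["pp->pS0K+"]
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)  # both equal 6 x
```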
Calculations are based on $NN \rightarrow \Delta YK \rightarrow NYK\pi$.
$$\begin{aligned}
\frac{4}{9}\sigma(pp \rightarrow p\Lambda K^0 \pi^+) = \frac{4}{9}\sigma(nn \rightarrow n\Lambda K^+ \pi^-) & = 2\sigma(pp \rightarrow p\Lambda K^+ \pi^0) = 2\sigma(nn \rightarrow n\Lambda K^0 \pi^0) \\
= 4\sigma(pp \rightarrow n\Lambda K^+ \pi^+) = 4\sigma(nn \rightarrow p\Lambda K^0 \pi^-) & = 2\sigma(pn \rightarrow p\Lambda K^+ \pi^-) = 2\sigma(pn \rightarrow n\Lambda K^0 \pi^+) \\
= \sigma(pn \rightarrow p\Lambda K^0 \pi^0) & = \sigma(pn \rightarrow n\Lambda K^+ \pi^0)\end{aligned}$$
$$\begin{aligned}
\sigma(pn \rightarrow p\Lambda K^+ \pi^-) + \sigma(pp & \rightarrow n\Lambda K^+ \pi^+) + \sigma(pp \rightarrow p\Lambda K^0 \pi^+) \\
= 2 \sigma(pn \rightarrow p\Lambda K^0 \pi^0) & + 2 \sigma(pp \rightarrow p\Lambda K^+ \pi^0)\end{aligned}$$
------------------------------------------------------------------------
$$\begin{aligned}
\sigma(pn \rightarrow p \Sigma^- K^+ \pi^0) + \sigma(pp \rightarrow n \Sigma^+ K^+ \pi^0) & + \sigma(pp \rightarrow p \Sigma^+ K^0 \pi^0) \\
= \sigma(pn \rightarrow p \Sigma^0 K^+ \pi^-) + \sigma(pp \rightarrow n \Sigma^0 K^+ \pi^+) & + \sigma(pp \rightarrow p \Sigma^0 K^0 \pi^+)\end{aligned}$$ $$\begin{aligned}
\sigma(pn \rightarrow p \Sigma^- K^0 \pi^+) + \sigma(pn & \rightarrow p \Sigma^+ K^0 \pi^-) + \sigma(pp \rightarrow n \Sigma^+ K^0 \pi^+) \\
+ \sigma(pp \rightarrow p \Sigma^- K^+ \pi^+) & + \sigma(pp \rightarrow p \Sigma^+ K^+ \pi^-) \\
= \sigma(pn \rightarrow p \Sigma^0 K^+ \pi^-) + 2 \sigma(pn & \rightarrow p \Sigma^0 K^0 \pi^0) + \sigma(pp \rightarrow n \Sigma^0 K^+ \pi^+) \\
+ 2 \sigma(pp \rightarrow p \Sigma^0 K^+ \pi^0) & + \sigma(pp \rightarrow p \Sigma^0 K^0 \pi^+)\end{aligned}$$$$\begin{aligned}
\sigma(pp \rightarrow p \Sigma^+ K^0 \pi^0) = \sigma(nn \rightarrow n \Sigma^- K^+ \pi^0) & = 2\sigma(pp \rightarrow n \Sigma^+ K^0 \pi^+) = 2\sigma(nn \rightarrow p \Sigma^- K^+ \pi^-) \\
= \sigma(pp \rightarrow n \Sigma^+ K^+ \pi^0) = \sigma(nn \rightarrow p \Sigma^- K^0 \pi^0) & = 2\sigma(pp \rightarrow p \Sigma^+ K^+ \pi^-) = 2\sigma(nn \rightarrow n \Sigma^- K^0 \pi^+) \\
= \sigma(pp \rightarrow p \Sigma^0 K^+ \pi^0) = \sigma(nn \rightarrow n \Sigma^0 K^0 \pi^0) & = 2\sigma(pp \rightarrow n \Sigma^0 K^+ \pi^+) = 2\sigma(nn \rightarrow p \Sigma^0 K^0 \pi^-) \\
= \frac{4}{9}\sigma(pp \rightarrow p \Sigma^0 K^0 \pi^+) = \frac{4}{9}\sigma(nn \rightarrow n \Sigma^0 K^+ \pi^-) & = \frac{4}{9}\sigma(pp \rightarrow p \Sigma^- K^+ \pi^+) = \frac{4}{9}\sigma(nn \rightarrow n \Sigma^+ K^0 \pi^-) \\
= \frac{4}{9}\sigma(pn \rightarrow p \Sigma^- K^0 \pi^+) = \frac{4}{9}\sigma(pn \rightarrow n \Sigma^+ K^+ \pi^-) & = 2\sigma(pn \rightarrow p \Sigma^0 K^0 \pi^0) = 2\sigma(pn \rightarrow n \Sigma^0 K^+ \pi^0) \\
= 4\sigma(pn \rightarrow p \Sigma^0 K^+ \pi^-) = 4\sigma(pn \rightarrow n \Sigma^0 K^0 \pi^+) & = \sigma(pn \rightarrow p \Sigma^- K^+ \pi^0) = \sigma(pn \rightarrow n \Sigma^+ K^0 \pi^0) \\
= 2\sigma(pn \rightarrow p \Sigma^+ K^0 \pi^-) & = 2\sigma(pn \rightarrow n \Sigma^- K^+ \pi^+) \end{aligned}$$
$$\begin{aligned}
4 \sigma(pp \rightarrow pp K^+ K^-) = 4 \sigma(nn \rightarrow nn K^0 \overline{K}^0) & = 4 \sigma(pp \rightarrow pp K^0 \overline{K}^0) = 4 \sigma(nn \rightarrow nn K^+ K^-) \\
= \sigma(pp \rightarrow pn K^+ \overline{K}^0) = \sigma(nn \rightarrow pn K^0 K^-) & = \sigma(pn \rightarrow pp K^0 K^-) = \sigma(pn \rightarrow nn K^+ \overline{K}^0) \\
= \frac{4}{9} \sigma(pn \rightarrow pn K^+ K^-) & = \frac{4}{9} \sigma(pn \rightarrow pn K^0 \overline{K}^0)\end{aligned}$$
**No solution with the Bystricky procedure**
$$\begin{aligned}
0.83 \sigma(p K^+ \rightarrow p K^+ \pi^0) = 0.83 \sigma(n K^0 \rightarrow n K^0 \pi^0) & = \frac{1}{3} \sigma(p K^+ \rightarrow p K^0 \pi^+) = \frac{1}{3} \sigma(n K^0 \rightarrow n K^+ \pi^-) \\
= 1.25 \sigma(p K^+ \rightarrow n K^+ \pi^+) = 1.25 \sigma(n K^0 \rightarrow p K^0 \pi^-) & = \sigma(p K^0 \rightarrow p K^+ \pi^-) = \sigma(n K^+ \rightarrow n K^0 \pi^+) \\
= 1.18 \sigma(p K^0 \rightarrow p K^0 \pi^0) = 1.18 \sigma(n K^+ \rightarrow n K^+ \pi^0) & = 0.68 \sigma(p K^0 \rightarrow n K^+ \pi^0) = 0.68 \sigma(n K^+ \rightarrow p K^0 \pi^0) \\
= 0.45 \sigma(p K^0 \rightarrow n K^0 \pi^+) & = 0.45 \sigma(n K^+ \rightarrow p K^+ \pi^-)\end{aligned}$$
$$\begin{aligned}
\sigma(p K^0 \rightarrow n K^0 \pi^+) + \sigma(p K^0 \rightarrow p K^+ \pi^-) & + \sigma(p K^+ \rightarrow n K^+ \pi^+) + \sigma(p K^+ \rightarrow p K^0 \pi^+) \\
= 2 \sigma(p K^0 \rightarrow n K^+ \pi^0) + 2 \sigma(p K^0 & \rightarrow p K^0 \pi^0) + 2 \sigma(p K^+ \rightarrow p K^+ \pi^0)\end{aligned}$$
$$\begin{aligned}
\sigma(p K^+ \rightarrow p K^+ \pi^+ \pi^-) = \sigma(n K^0 \rightarrow n K^0 \pi^+ \pi^-) & = 8 \sigma(p K^+ \rightarrow p K^+ \pi^0 \pi^0) = 8 \sigma(n K^0 \rightarrow n K^0 \pi^0 \pi^0) \\
= \sigma(p K^+ \rightarrow p K^0 \pi^+ \pi^0) = \sigma(n K^0 \rightarrow n K^+ \pi^0 \pi^-) & = 2 \sigma(p K^+ \rightarrow n K^+ \pi^+ \pi^0) = 2 \sigma(n K^0 \rightarrow p K^0 \pi^0 \pi^-) \\
= 4 \sigma(p K^+ \rightarrow n K^0 \pi^+ \pi^+) = 4 \sigma(n K^0 \rightarrow p K^+ \pi^- \pi^-) & = \sigma(p K^0 \rightarrow p K^+ \pi^0 \pi^-) = \sigma(n K^+ \rightarrow n K^0 \pi^+ \pi^0) \\
= \sigma(p K^0 \rightarrow p K^0 \pi^+ \pi^-) = \sigma(n K^+ \rightarrow n K^+ \pi^+ \pi^-) & = 8 \sigma(p K^0 \rightarrow p K^0 \pi^0 \pi^0) = 8 \sigma(n K^+ \rightarrow n K^+ \pi^0 \pi^0) \\
= 4 \sigma(p K^0 \rightarrow n K^+ \pi^+ \pi^-) = 4 \sigma(n K^+ \rightarrow p K^0 \pi^+ \pi^-) & = 4 \sigma(p K^0 \rightarrow n K^+ \pi^0 \pi^0) = 4 \sigma(n K^+ \rightarrow p K^0 \pi^0 \pi^0) \\
= 2 \sigma(p K^0 \rightarrow n K^0 \pi^+ \pi^0) & = 2 \sigma(n K^+ \rightarrow p K^+ \pi^0 \pi^-)\end{aligned}$$
$$\begin{aligned}
\sigma(p K^0 \rightarrow n K^0 \pi^+ \pi^0) + 4 \sigma(p K^0 \rightarrow n K^+ \pi^0 \pi^0) & + 4 \sigma(p K^0 \rightarrow p K^0 \pi^0 \pi^0) + \sigma(p K^0 \rightarrow p K^+ \pi^0 \pi^-) \\
+ \sigma(p K^+ \rightarrow n K^+ \pi^+ \pi^0) + \sigma(p K^+ \rightarrow p K^0 \pi^+ \pi^0) & + 4 \sigma(p K^+ \rightarrow p K^+ \pi^0 \pi^0) = 2 \sigma(p K^0 \rightarrow n K^+ \pi^+ \pi^-) \\
+ 2 \sigma(p K^0 \rightarrow p K^0 \pi^+ \pi^-) + 2 \sigma(p K^+ & \rightarrow n K^0 \pi^+ \pi^+) + 2 \sigma(p K^+ \rightarrow p K^+ \pi^+ \pi^-)\end{aligned}$$
$$\begin{aligned}
12 \sigma(p\overline{K}^0 \rightarrow p\overline{K}^0 \pi^0) = 12 \sigma(nK^- \rightarrow nK^- \pi^0) & = 6\sigma(p\overline{K}^0 \rightarrow pK^- \pi^+) = 6\sigma(nK^- \rightarrow n\overline{K}^0 \pi^-) \\
= 12 \sigma(p\overline{K}^0 \rightarrow n\overline{K}^0 \pi^+) = 12 \sigma(nK^- \rightarrow pK^- \pi^-) & = 9 \sigma(pK^- \rightarrow p\overline{K}^0 \pi^-) = 9 \sigma(n\overline{K}^0 \rightarrow nK^- \pi^+) \\
= 12 \sigma(pK^- \rightarrow pK^- \pi^0) = 12 \sigma(n\overline{K}^0 \rightarrow n\overline{K}^0 \pi^0) & = 3 \sigma(pK^- \rightarrow n\overline{K}^0 \pi^0) = 3 \sigma(n\overline{K}^0 \rightarrow pK^- \pi^0) \\
= 8 \sigma(pK^- \rightarrow nK^- \pi^+) & = 8 \sigma(n\overline{K}^0 \rightarrow p\overline{K}^0 \pi^-)\end{aligned}$$
$$\begin{aligned}
\sigma(pK^- \rightarrow nK^- \pi^+) + \sigma(pK^- \rightarrow p\overline{K}^0 \pi^-) & + \sigma(p\overline{K}^0 \rightarrow n\overline{K}^0 \pi^+) + \sigma(p\overline{K}^0 \rightarrow pK^- \pi^+) \\
= 2 \sigma(pK^- \rightarrow n\overline{K}^0 \pi^0) + 2 \sigma(pK^- & \rightarrow pK^- \pi^0) + 2 \sigma(p\overline{K}^0 \rightarrow p\overline{K}^0 \pi^0)\end{aligned}$$
$$\begin{aligned}
\sigma(p\overline{K}^0 \rightarrow p\overline{K}^0 \pi^+ \pi^-) = \sigma(nK^- \rightarrow nK^- \pi^+ \pi^-) & = 4 \sigma(p\overline{K}^0 \rightarrow p\overline{K}^0 \pi^0 \pi^0) = 4 \sigma(nK^- \rightarrow nK^- \pi^0 \pi^0) \\
= \sigma(p\overline{K}^0 \rightarrow pK^- \pi^+ \pi^0) = \sigma(nK^- \rightarrow n\overline{K}^0 \pi^0 \pi^-) & = \sigma(p\overline{K}^0 \rightarrow n\overline{K}^0 \pi^+ \pi^0) = \sigma(nK^- \rightarrow pK^- \pi^0 \pi^-) \\
= \sigma(p\overline{K}^0 \rightarrow nK^- \pi^+ \pi^+) = \sigma(nK^- \rightarrow p\overline{K}^0 \pi^- \pi^-) & = \sigma(pK^- \rightarrow p\overline{K}^0 \pi^0 \pi^-) = \sigma(n\overline{K}^0 \rightarrow nK^- \pi^+ \pi^0) \\
= \sigma(pK^- \rightarrow pK^- \pi^+ \pi^-) = \sigma(n\overline{K}^0 \rightarrow n\overline{K}^0 \pi^+ \pi^-) & = 4 \sigma(pK^- \rightarrow pK^- \pi^0 \pi^0) = 4 \sigma(n\overline{K}^0 \rightarrow n\overline{K}^0 \pi^0 \pi^0) \\
= \sigma(pK^- \rightarrow n\overline{K}^0 \pi^+ \pi^-) = \sigma(n\overline{K}^0 \rightarrow pK^- \pi^+ \pi^-) & = 2 \sigma(pK^- \rightarrow n\overline{K}^0 \pi^0 \pi^0) = 2 \sigma(n\overline{K}^0 \rightarrow pK^- \pi^0 \pi^0) \\
= \sigma(pK^- \rightarrow nK^- \pi^+ \pi^0) & = \sigma(n\overline{K}^0 \rightarrow p\overline{K}^0 \pi^0 \pi^-)\end{aligned}$$
$$\begin{aligned}
\sigma(pK^- \rightarrow nK^- \pi^+ \pi^0) + 4 \sigma(pK^- & \rightarrow n\overline{K}^0 \pi^0 \pi^0) + 4 \sigma(pK^- \rightarrow pK^- \pi^0 \pi^0) \\
+ \sigma(pK^- \rightarrow p\overline{K}^0 \pi^0 \pi^-) & + \sigma(p\overline{K}^0 \rightarrow n\overline{K}^0 \pi^+ \pi^0) \\
+ \sigma(p\overline{K}^0 \rightarrow pK^- \pi^+ \pi^0) & + 4 \sigma(p\overline{K}^0 \rightarrow p\overline{K}^0 \pi^0 \pi^0)\\
= 2 \sigma(pK^- \rightarrow n\overline{K}^0 \pi^+ \pi^-) & + 2 \sigma(pK^- \rightarrow pK^- \pi^+ \pi^-) \\
+ 2 \sigma(p\overline{K}^0 \rightarrow nK^- \pi^+ \pi^+) & + 2 \sigma(p\overline{K}^0 \rightarrow p\overline{K}^0 \pi^+ \pi^-)\end{aligned}$$
$$\begin{aligned}
\sigma(p\overline{K}^0 \rightarrow \Lambda \pi^+) = \sigma(nK^- \rightarrow \Lambda \pi^-) = 2 \sigma(pK^- \rightarrow \Lambda \pi^0) = 2 \sigma(n\overline{K}^0 \rightarrow \Lambda \pi^0)\end{aligned}$$
$$\sigma(p\overline{K}^0 \rightarrow \Lambda \pi^+) = 2 \sigma(pK^- \rightarrow \Lambda \pi^0)$$
------------------------------------------------------------------------
$$\begin{aligned}
\sigma(p\overline{K}^0 \rightarrow \Sigma^+ \pi^0) = \sigma(nK^- \rightarrow \Sigma^- \pi^0) & = \sigma(p\overline{K}^0 \rightarrow \Sigma^0 \pi^+) = \sigma(nK^- \rightarrow \Sigma^0 \pi^-) \\
= \frac{3}{4} \sigma(pK^- \rightarrow \Sigma^+ \pi^-) = \frac{3}{4} \sigma(n\overline{K}^0 \rightarrow \Sigma^- \pi^+) & = \frac{3}{2} \sigma(pK^- \rightarrow \Sigma^0 \pi^0) = \frac{3}{2} \sigma(n\overline{K}^0 \rightarrow \Sigma^0 \pi^0) \\
= \sigma(pK^- \rightarrow \Sigma^- \pi^+) & = \sigma(n\overline{K}^0 \rightarrow \Sigma^+ \pi^-)\end{aligned}$$
$$\begin{aligned}
\sigma(p\overline{K}^0 \rightarrow \Sigma^+ \pi^0) & =\sigma(p\overline{K}^0 \rightarrow \Sigma^0 \pi^+) \\
\\
\sigma(pK^- \rightarrow \Sigma^- \pi^+) + \sigma(pK^- \rightarrow \Sigma^+ \pi^-) & = 2 \sigma(pK^- \rightarrow \Sigma^0 \pi^0) + \sigma(p\overline{K}^0 \rightarrow \Sigma^+ \pi^0)\end{aligned}$$
$$\begin{aligned}
\sigma(p\overline{K}^0 \rightarrow \Lambda \pi^+ \pi^0) = \sigma(nK^- \rightarrow \Lambda \pi^0 \pi^-) & = \sigma(pK^- \rightarrow \Lambda \pi^+ \pi^-) = \sigma(n\overline{K}^0 \rightarrow \Lambda \pi^+ \pi^-) \\
= 4 \sigma(pK^- \rightarrow \Lambda \pi^0 \pi^0) & = 4 \sigma(n\overline{K}^0 \rightarrow \Lambda \pi^0 \pi^0)\end{aligned}$$
$$\begin{aligned}
4 \sigma(pK^- \rightarrow \Lambda \pi^0 \pi^0) + \sigma(p\overline{K}^0 \rightarrow \Lambda \pi^+ \pi^0)= 2 \sigma(pK^- \rightarrow \Lambda \pi^+ \pi^-)\end{aligned}$$
------------------------------------------------------------------------
$$\begin{aligned}
\frac{3}{2} \sigma(p\overline{K}^0 \rightarrow \Sigma^+ \pi^+ \pi^-) = \frac{3}{2} \sigma(nK^- \rightarrow \Sigma^- \pi^+ \pi^-) & = 4 \sigma(p\overline{K}^0 \rightarrow \Sigma^+ \pi^0 \pi^0) = 4 \sigma(nK^- \rightarrow \Sigma^- \pi^0 \pi^0) \\
= \frac{6}{5} \sigma(p\overline{K}^0 \rightarrow \Sigma^0 \pi^+ \pi^0) = \frac{6}{5} \sigma(nK^- \rightarrow \Sigma^0 \pi^0 \pi^-) & = \frac{3}{2} \sigma(p\overline{K}^0 \rightarrow \Sigma^- \pi^+ \pi^+) = \frac{3}{2} \sigma(nK^- \rightarrow \Sigma^+ \pi^- \pi^-) \\
= \sigma(pK^- \rightarrow \Sigma^+ \pi^0 \pi^-) = \sigma(n\overline{K}^0 \rightarrow \Sigma^- \pi^+ \pi^0) & = \frac{3}{2} \sigma(pK^- \rightarrow \Sigma^0 \pi^+ \pi^-) = \frac{3}{2} \sigma(n\overline{K}^0 \rightarrow \Sigma^0 \pi^+ \pi^-) \\
= 8 \sigma(pK^- \rightarrow \Sigma^0 \pi^0 \pi^0) = 8 \sigma(n\overline{K}^0 \rightarrow \Sigma^0 \pi^0 \pi^0) & = \frac{3}{2} \sigma(pK^- \rightarrow \Sigma^- \pi^+ \pi^0) = \frac{3}{2} \sigma(n\overline{K}^0 \rightarrow \Sigma^+ \pi^0 \pi^-)\end{aligned}$$
$$\begin{aligned}
\sigma(pK^- \rightarrow \Sigma^- \pi^+ \pi^0) + \sigma(pK^- & \rightarrow \Sigma^+ \pi^0 \pi^-)
+ 2 \sigma(p\overline{K}^0 \rightarrow \Sigma^+ \pi^0 \pi^0)\\
= 2 \sigma(pK^- \rightarrow \Sigma^0 \pi^+ \pi^-) & + \sigma(p\overline{K}^0 \rightarrow \Sigma^0 \pi^+ \pi^0) \\
\\
\sigma(p\overline{K}^0 \rightarrow \Sigma^- \pi^+ \pi^+) & + \sigma(p\overline{K}^0 \rightarrow \Sigma^+ \pi^+ \pi^-) \\
= 2 \sigma(pK^- \rightarrow \Sigma^0 \pi^0 \pi^0) + \sigma(p\overline{K}^0 & \rightarrow \Sigma^0 \pi^+ \pi^0) + \sigma(p\overline{K}^0 \rightarrow \Sigma^+ \pi^0 \pi^0)\end{aligned}$$
$$\begin{aligned}
2 \sigma(p\Lambda \rightarrow p \Sigma^0) = 2 \sigma(n\Lambda \rightarrow n \Sigma^0) = \sigma(p\Lambda \rightarrow n \Sigma^+) = \sigma(n\Lambda \rightarrow p \Sigma^-)\end{aligned}$$
$$\sigma(p\Lambda \rightarrow n \Sigma^+) = 2 \sigma(p\Lambda \rightarrow p \Sigma^0)$$
------------------------------------------------------------------------
$$\begin{aligned}
2 \sigma(p \Sigma^0 \rightarrow p \Lambda) = 2 \sigma(n \Sigma^0 \rightarrow n \Lambda) = \sigma(p \Sigma^- \rightarrow n \Lambda) = \sigma(n \Sigma^+ \rightarrow p \Lambda)\end{aligned}$$
$$\sigma(p \Sigma^- \rightarrow n \Lambda) = 2 \sigma(p \Sigma^0 \rightarrow p \Lambda)$$
------------------------------------------------------------------------
$$\begin{aligned}
\sigma(p \Sigma^- \rightarrow p \Sigma^-) = \sigma(n \Sigma^+ \rightarrow n \Sigma^+) & = \sigma(p \Sigma^+ \rightarrow p \Sigma^+) = \sigma(n \Sigma^- \rightarrow n \Sigma^-)\\
\sigma(p \Sigma^0 \rightarrow n \Sigma^+) = \sigma(n \Sigma^0 \rightarrow p \Sigma^-) & = \sigma(p \Sigma^0 \rightarrow p \Sigma^0) = \sigma(n \Sigma^0 \rightarrow n \Sigma^0)\end{aligned}$$
$$\begin{aligned}
\sigma( \Delta^{++} p \rightarrow p p K^+ \overline{K}^0 ) = \sigma( \Delta^- n \rightarrow n n K^0 K^- ) & = 2 \sigma( \Delta^{++} n \rightarrow p p K^+ K^- ) = 2 \sigma( \Delta^- p \rightarrow n n K^0 \overline{K}^0 ) \\
= 2 \sigma( \Delta^{++} n \rightarrow p n K^+ \overline{K}^0 ) = 2 \sigma( \Delta^- p \rightarrow n p K^0 K^- ) & = 2 \sigma( \Delta^{++} n \rightarrow p p K^0 \overline{K}^0 ) = 2 \sigma( \Delta^- p \rightarrow n n K^+ K^- ) \\
= 2 \sigma( \Delta^+ p \rightarrow p p K^+ K^- ) = 2 \sigma( \Delta^0 n \rightarrow n n K^0 \overline{K}^0 ) & = 6 \sigma( \Delta^+ p \rightarrow p p K^0 \overline{K}^0 ) = 6 \sigma( \Delta^0 n \rightarrow n n K^+ K^- ) \\
= 2 \sigma( \Delta^+ p \rightarrow p n K^+ \overline{K}^0 ) = 2 \sigma( \Delta^0 n \rightarrow n p K^0 K^- ) & = 3 \sigma( \Delta^+ n \rightarrow p p K^0 K^- ) = 3 \sigma( \Delta^0 p \rightarrow n n K^+ \overline{K}^0 ) \\
= 6 \sigma( \Delta^+ n \rightarrow p n K^+ K^- ) = 6 \sigma( \Delta^0 p \rightarrow n p K^0 \overline{K}^0 ) & = 3 \sigma( \Delta^+ n \rightarrow p n K^0 \overline{K}^0 ) = 3 \sigma( \Delta^0 p \rightarrow n p K^+ K^- ) \\
= 2 \sigma( \Delta^+ n \rightarrow n n K^+ \overline{K}^0 ) &= 2 \sigma( \Delta^0 p \rightarrow p p K^0 K^- )\end{aligned}$$
$$\begin{aligned}
3 \sigma( \Delta^+ n \rightarrow nn K^+ \overline{K}^0 ) &= 2 \sigma( \Delta^{++} n \rightarrow pn K^+ \overline{K}^0 ) \\
\\
3 \sigma( \Delta^+ n \rightarrow pn K^0 \overline{K}^0 ) &+ \sigma( \Delta^{++} n \rightarrow pn K^+ \overline{K}^0 ) \\
= 3 \sigma( \Delta^+ p \rightarrow pp K^+ K^- ) &+ \sigma( \Delta^{++} n \rightarrow pp K^0 \overline{K}^0 ) \\
\\
3 \sigma( \Delta^+ n \rightarrow pn K^+ K^- ) &+ \sigma( \Delta^{++} n \rightarrow pn K^+ \overline{K}^0 ) \\
= 3 \sigma( \Delta^+ p \rightarrow pp K^0 \overline{K}^0 ) &+ \sigma( \Delta^{++} n \rightarrow pp K^+ K^- ) \\\end{aligned}$$ $$\begin{aligned}
3 \sigma( \Delta^+ n \rightarrow pp K^0 K^- ) + 3 \sigma( \Delta^+ p & \rightarrow pp K^0 \overline{K}^0 ) + 3 \sigma( \Delta^+ p \rightarrow pp K^+ K^- ) \\
= 2 \sigma( \Delta^{++} n \rightarrow pn K^+ \overline{K}^0 ) & + \sigma( \Delta^{++} n \rightarrow pp K^0 \overline{K}^0 ) \\
+ \sigma( \Delta^{++} n \rightarrow pp K^+ K^- ) & + \sigma( \Delta^{++} p \rightarrow pp K^+ \overline{K}^0 ) \\
\\
3 \sigma( \Delta^+ p \rightarrow pn K^+ \overline{K}^0 ) + 3 \sigma( \Delta^+ p & \rightarrow pp K^0 \overline{K}^0 ) + 3 \sigma( \Delta^+ p \rightarrow pp K^+ K^- ) \\
= \sigma( \Delta^{++} n \rightarrow pn K^+ \overline{K}^0 ) & + \sigma( \Delta^{++} n \rightarrow pp K^0 \overline{K}^0 )\\
+ \sigma( \Delta^{++} n \rightarrow pp K^+ K^- ) & + 2 \sigma( \Delta^{++} p \rightarrow pp K^+ \overline{K}^0 ) \end{aligned}$$
$$\begin{aligned}
\sigma(\Delta^{++} n \rightarrow p \Lambda K^+) = \sigma(\Delta^- p \rightarrow n \Lambda K^0) & =3 \sigma(\Delta^{+} p \rightarrow p \Lambda K^+) = 3\sigma(\Delta^0 n \rightarrow n \Lambda K^0) \\
=3 \sigma(\Delta^{+} n \rightarrow p \Lambda K^0) = 3\sigma(\Delta^0 p \rightarrow n \Lambda K^+) & =3 \sigma(\Delta^{+} n \rightarrow n \Lambda K^+) = 3\sigma(\Delta^0 p \rightarrow p \Lambda K^0)\end{aligned}$$
$$\begin{aligned}
\sigma(\Delta^{++} n \rightarrow p \Lambda K^+) =3 \sigma(\Delta^{+} p \rightarrow p \Lambda K^+)=3 \sigma(\Delta^{+} p \rightarrow p \Lambda K^0) =3 \sigma(\Delta^{+} n \rightarrow p \Lambda K^+)\end{aligned}$$
------------------------------------------------------------------------
$$\begin{aligned}
\sigma(\Delta^{++} p \rightarrow p \Sigma^+ K^+) = \sigma(\Delta^- n \rightarrow n \Sigma^- K^0) & = 2\sigma(\Delta^{++} n \rightarrow p \Sigma^+ K^0) = 2 \sigma(\Delta^- p \rightarrow n \Sigma^- K^+) \\
= 2\sigma(\Delta^{++} n \rightarrow p \Sigma^0 K^+) = 2 \sigma(\Delta^- p \rightarrow n \Sigma^0 K^0) & = 2\sigma(\Delta^{++} n \rightarrow n \Sigma^+ K^+) = 2 \sigma(\Delta^- p \rightarrow p \Sigma^- K^0) \\
= 3\sigma(\Delta^+ p \rightarrow p \Sigma^+ K^0) = 3 \sigma(\Delta^0 n \rightarrow n \Sigma^- K^+) & = 3\sigma(\Delta^+ p \rightarrow p \Sigma^0 K^+) = 3 \sigma(\Delta^0 n \rightarrow n \Sigma^0 K^0) \\
= 2\sigma(\Delta^+ p \rightarrow n \Sigma^+ K^+) = 2 \sigma(\Delta^0 n \rightarrow p \Sigma^- K^0) & = 2\sigma(\Delta^+ n \rightarrow p \Sigma^0 K^0) = 2 \sigma(\Delta^0 p \rightarrow n \Sigma^0 K^+) \\
= 3\sigma(\Delta^+ n \rightarrow p \Sigma^- K^+) = 3 \sigma(\Delta^0 p \rightarrow n \Sigma^+ K^0) & = 3\sigma(\Delta^+ n \rightarrow n \Sigma^+ K^0) = 3 \sigma(\Delta^0 p \rightarrow p \Sigma^- K^+) \\
= 3\sigma(\Delta^+ n \rightarrow n \Sigma^0 K^+) &= 3 \sigma(\Delta^0 p \rightarrow p \Sigma^0 K^0)\end{aligned}$$
$$\begin{aligned}
3 \sigma(\Delta^+n \rightarrow n \Sigma^0 K^+) + \sigma(\Delta^{++}n \rightarrow p \Sigma^0 K^+) & = 3 \sigma(\Delta^+p \rightarrow p \Sigma^+ K^0) + \sigma(\Delta^{++}n \rightarrow n \Sigma^+ K^+) \\
\\
3 \sigma(\Delta^+n \rightarrow n \Sigma^+ K^0) + \sigma(\Delta^{++}p \rightarrow p \Sigma^+ K^+) & = 3 \sigma(\Delta^+p \rightarrow p \Sigma^0 K^+) + \sigma(\Delta^{++}n \rightarrow p \Sigma^0 K^+)\end{aligned}$$ $$\begin{aligned}
3 \sigma(\Delta^+n \rightarrow p \Sigma^- K^+) &= 2 \sigma(\Delta^{++}n \rightarrow p \Sigma^0 K^+) \\
\\
3 \sigma(\Delta^+n \rightarrow p \Sigma^0 K^0) + 3 \sigma(\Delta^+p & \rightarrow p \Sigma^0 K^+) + 3 \sigma(\Delta^+p \rightarrow p \Sigma^+ K^0) \\
= \sigma(\Delta^{++}n \rightarrow n \Sigma^+ K^+) + 2 \sigma(\Delta^{++}n & \rightarrow p \Sigma^+ K^0) + 2 \sigma(\Delta^{++}p \rightarrow p \Sigma^+ K^+) \\
\\
3 \sigma(\Delta^+p \rightarrow n \Sigma^+ K^+) + 3 \sigma(\Delta^+p & \rightarrow p \Sigma^0 K^+) + 3 \sigma(\Delta^+p \rightarrow p \Sigma^+ K^0) \\
= \sigma(\Delta^{++}n \rightarrow n \Sigma^+ K^+) & + \sigma(\Delta^{++}n \rightarrow p \Sigma^0 K^+) \\
+ \sigma(\Delta^{++}n \rightarrow p \Sigma^+ K^0) & + 2 \sigma(\Delta^{++}p \rightarrow p \Sigma^+ K^+)\end{aligned}$$
------------------------------------------------------------------------
$$\begin{aligned}
3 \sigma( \Delta^{++} p \rightarrow \Lambda K^+ \Delta^{++} ) = 3 \sigma( \Delta^- n \rightarrow \Lambda K^0 \Delta^- ) & = 4\sigma( \Delta^{++} n \rightarrow \Lambda K^+ \Delta^+ ) = 4\sigma( \Delta^- p \rightarrow \Lambda K^0 \Delta^0 ) \\
= 3 \sigma( \Delta^{++} n \rightarrow \Lambda K^0 \Delta^{++} ) = 3\sigma( \Delta^- p \rightarrow \Lambda K^+ \Delta^- ) & = 4\sigma( \Delta^+ p \rightarrow \Lambda K^+ \Delta^+ ) = 4\sigma( \Delta^0 n \rightarrow \Lambda K^0 \Delta^0 ) \\
= 6\sigma( \Delta^+ p \rightarrow \Lambda K^0 \Delta^{++} ) = 6\sigma( \Delta^0 n \rightarrow \Lambda K^+ \Delta^- ) & = 3 \sigma( \Delta^+ n \rightarrow \Lambda K^+ \Delta^0 ) = 3\sigma( \Delta^0 p \rightarrow \Lambda K^0 \Delta^+ ) \\
= 6 \sigma( \Delta^+ n \rightarrow \Lambda K^0 \Delta^+ ) &= 6\sigma( \Delta^0 p \rightarrow \Lambda K^+ \Delta^0 )\end{aligned}$$
$$\begin{aligned}
3 \sigma( \Delta^+ n \rightarrow \Lambda K^0 \Delta^+ ) + 2 \sigma( \Delta^{++} n \rightarrow \Lambda K^+ \Delta^+ ) & = 2 \sigma( \Delta^{++} n \rightarrow \Lambda K^0 \Delta^{++} ) + \sigma( \Delta^{++} p \rightarrow \Lambda K^+ \Delta^{++} ) \\
\\
3 \sigma( \Delta^+ n \rightarrow \Lambda K^+ \Delta^0 ) &= 4 \sigma( \Delta^{++} n \rightarrow \Lambda K^+ \Delta^+ )\\
\\
3 \sigma( \Delta^+ p \rightarrow \Lambda K^+ \Delta^+ ) + 2 \sigma( \Delta^{++} n \rightarrow \Lambda K^+ \Delta^+ ) & = \sigma( \Delta^{++} n \rightarrow \Lambda K^0 \Delta^{++} ) + 2 \sigma( \Delta^{++} p \rightarrow \Lambda K^+ \Delta^{++} ) \end{aligned}$$
------------------------------------------------------------------------
$$\begin{aligned}
6 \sigma( \Delta^{++} p \rightarrow \Sigma^0 K^+ \Delta^{++} ) = 6 \sigma( \Delta^- n \rightarrow \Sigma^0 K^0 \Delta^- ) &= 6 \sigma( \Delta^{++} n \rightarrow \Sigma^0 K^+ \Delta^+ ) = 6 \sigma( \Delta^- p \rightarrow \Sigma^0 K^0 \Delta^0 ) \\
= 12 \sigma( \Delta^{++} p \rightarrow \Sigma^+ K^0 \Delta^{++} ) = 12 \sigma( \Delta^- n \rightarrow \Sigma^- K^+ \Delta^- ) &= 12 \sigma( \Delta^{++} n \rightarrow \Sigma^0 K^0 \Delta^{++} ) = 12 \sigma( \Delta^- p \rightarrow \Sigma^0 K^+ \Delta^- ) \\
= 2 \sigma( \Delta^{++} n \rightarrow \Sigma^+ K^+ \Delta^0 ) = 2 \sigma( \Delta^- p \rightarrow \Sigma^- K^0 \Delta^+ ) &= 2 \sigma( \Delta^{++} n \rightarrow \Sigma^+ K^0 \Delta^+ ) = 2 \sigma( \Delta^- p \rightarrow \Sigma^- K^+ \Delta^0 ) \\
= 3 \sigma( \Delta^{++} n \rightarrow \Sigma^- K^+ \Delta^{++} ) = 3 \sigma( \Delta^- p \rightarrow \Sigma^+ K^0 \Delta^- ) &= 3 \sigma( \Delta^+ p \rightarrow \Sigma^0 K^0 \Delta^{++} ) = 3 \sigma( \Delta^0 n \rightarrow \Sigma^0 K^+ \Delta^- ) \\
= 6 \sigma( \Delta^+ p \rightarrow \Sigma^+ K^+ \Delta^0 ) = 6 \sigma( \Delta^0 n \rightarrow \Sigma^- K^0 \Delta^+ ) &= 6 \sigma( \Delta^+ p \rightarrow \Sigma^- K^+ \Delta^{++} ) = 6 \sigma( \Delta^0 n \rightarrow \Sigma^+ K^0 \Delta^- ) \\
= 12 \sigma( \Delta^+ p \rightarrow \Sigma^0 K^+ \Delta^+ ) = 12 \sigma( \Delta^0 n \rightarrow \Sigma^0 K^0 \Delta^0 ) &= 12 \sigma( \Delta^+ n \rightarrow \Sigma^0 K^0 \Delta^+ ) = 12 \sigma( \Delta^0 p \rightarrow \Sigma^0 K^+ \Delta^0 ) \\
= 6 \sigma( \Delta^+ p \rightarrow \Sigma^+ K^0 \Delta^+ ) = 6 \sigma( \Delta^0 n \rightarrow \Sigma^- K^+ \Delta^0 ) &= 6 \sigma( \Delta^+ n \rightarrow \Sigma^+ K^+ \Delta^- ) = 6 \sigma( \Delta^0 p \rightarrow \Sigma^- K^0 \Delta^{++} ) \\
= 3 \sigma( \Delta^+ n \rightarrow \Sigma^0 K^+ \Delta^0 ) = 3 \sigma( \Delta^0 p \rightarrow \Sigma^0 K^0 \Delta^+ ) &= 6 \sigma( \Delta^+ n \rightarrow \Sigma^- K^+ \Delta^+ ) = 6 \sigma( \Delta^0 p \rightarrow \Sigma^+ K^0 \Delta^0 ) \\
= 6 \sigma( \Delta^+ n \rightarrow \Sigma^+ K^0 \Delta^0 ) = 6 \sigma( \Delta^0 p \rightarrow \Sigma^- K^+ \Delta^+ ) &= 6 \sigma( \Delta^+ n \rightarrow \Sigma^- K^0 \Delta^{++} ) = 6 \sigma( \Delta^0 p \rightarrow \Sigma^+ K^+ \Delta^- )\end{aligned}$$
$$\begin{aligned}
2 \sigma(\Delta^+ n \rightarrow \Sigma^- K^0 \Delta^{++} ) & + 2 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^+ \Delta^+ ) \\
= 3 \sigma(\Delta^+ p \rightarrow \Sigma^+ K^+ \Delta^0) & + \sigma(\Delta^{++}n \rightarrow \Sigma^+ K^+ \Delta^0)\end{aligned}$$ $$\begin{aligned}
12 \sigma(\Delta^+ n \rightarrow \Sigma^0 K^0 \Delta^+ ) + 15 \sigma(\Delta^+ p & \rightarrow \Sigma^+ K^+ \Delta^0) + 2 \sigma(\Delta^{++}n \rightarrow \Sigma^+ K^0 \Delta^+ ) \\
+ 2 \sigma(\Delta^{++}n \rightarrow \Sigma^- K^+ \Delta^{++} ) + 9 \sigma(\Delta^{++}n & \rightarrow \Sigma^+ K^+ \Delta^0) + 2 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^0 \Delta^{++} ) \\
+ 4 \sigma(\Delta^{++}p \rightarrow \Sigma^0 K^+ \Delta^{++} ) & = 18 \sigma(\Delta^+ p \rightarrow \Sigma^0 K^+ \Delta^+ ) \\
+ 6 \sigma(\Delta^{++}n \rightarrow \Sigma^0 K^0 \Delta^{++} ) + 8 \sigma(\Delta^{++}n & \rightarrow \Sigma^0 K^+ \Delta^+ ) + 18 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^+ \Delta^+ )\end{aligned}$$ $$\begin{aligned}
6 \sigma(\Delta^+ n \rightarrow \Sigma^+ K^0 \Delta^0 ) + 9 \sigma(\Delta^+ p & \rightarrow \Sigma^+ K^+ \Delta^0 ) + 2 \sigma(\Delta^{++}n \rightarrow \Sigma^- K^+ \Delta^{++} ) \\
+ 9 \sigma(\Delta^{++}n \rightarrow \Sigma^+ K^+ \Delta^0 ) & + 2 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^0 \Delta^{++} ) \\
= 6 \sigma(\Delta^+ p \rightarrow \Sigma^0 K^+ \Delta^+ ) + 2 \sigma(\Delta^{++}n & \rightarrow \Sigma^0 K^0 \Delta^{++} ) + 6 \sigma(\Delta^{++}n \rightarrow \Sigma^+ K^0 \Delta^+ ) \\
+ 8 \sigma(\Delta^{++}n \rightarrow \Sigma^0 K^+ \Delta^+ ) & + 10 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^+ \Delta^+ )\end{aligned}$$ $$\begin{aligned}
12 \sigma(\Delta^+ n \rightarrow \Sigma^- K^+ \Delta^+ ) + 18 \sigma(\Delta^+ p & \rightarrow \Sigma^0 K^+ \Delta^+ ) + 6 \sigma(\Delta^{++}n \rightarrow \Sigma^+ K^0 \Delta^+ ) \\
+ 16 \sigma(\Delta^{++}n \rightarrow \Sigma^0 K^+ \Delta^+ ) &+ 18 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^+ \Delta^+ )\\
= 9 \sigma(\Delta^+ p \rightarrow \Sigma^+ K^+ \Delta^0 ) + 2 \sigma(\Delta^{++}n & \rightarrow \Sigma^0 K^0 \Delta^{++} ) + 10 \sigma(\Delta^{++}n \rightarrow \Sigma^- K^+ \Delta^{++} ) \\
+ 15 \sigma(\Delta^{++}n \rightarrow \Sigma^+ K^+ \Delta^0 ) + 6 \sigma(\Delta^{++}p & \rightarrow \Sigma^+ K^0 \Delta^{++} ) + 8 \sigma(\Delta^{++}p \rightarrow \Sigma^0 K^+ \Delta^{++} )\end{aligned}$$ $$\begin{aligned}
6 \sigma(\Delta^+ n \rightarrow \Sigma^0 K^+ \Delta^0 ) + 6 \sigma(\Delta^+ p & \rightarrow \Sigma^0 K^+ \Delta^+ ) + 2 \sigma(\Delta^{++}n \rightarrow \Sigma^0 K^0 \Delta^{++} ) \\
+ 2 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^+ \Delta^+ ) = 3 \sigma(\Delta^+ p & \rightarrow \Sigma^+ K^+ \Delta^0 ) + 2 \sigma(\Delta^{++}n \rightarrow \Sigma^+ K^0 \Delta^+ ) \\
+ 2 \sigma(\Delta^{++}n \rightarrow \Sigma^- K^+ \Delta^{++} ) + \sigma(\Delta^{++}n & \rightarrow \Sigma^+ K^+ \Delta^0 ) + 2 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^0 \Delta^{++} )\end{aligned}$$ $$\begin{aligned}
4 \sigma(\Delta^+ p \rightarrow \Sigma^0 K^0 \Delta^{++} ) + 6 \sigma(\Delta^+ p & \rightarrow \Sigma^0 K^+ \Delta^+ ) + 2 \sigma(\Delta^{++}n \rightarrow \Sigma^0 K^0 \Delta^{++} ) \\
+ 4 \sigma(\Delta^{++}n \rightarrow \Sigma^0 K^+ \Delta^+ ) & + 2 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^+ \Delta^+ ) \\
= 3 \sigma(\Delta^+ p \rightarrow \Sigma^+ K^+ \Delta^0) + 2 \sigma(\Delta^{++}n & \rightarrow \Sigma^+ K^0 \Delta^+ ) + 2 \sigma(\Delta^{++}n \rightarrow \Sigma^- K^+ \Delta^{++} ) \\
+ 5 \sigma(\Delta^{++}n \rightarrow \Sigma^+ K^+ \Delta^0 ) & + 2 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^0 \Delta^{++} ) \end{aligned}$$ $$\begin{aligned}
6 \sigma(\Delta^+ p \rightarrow \Sigma^+ K^0 \Delta^+ ) + 6 \sigma(\Delta^+ p & \rightarrow \Sigma^0 K^+ \Delta^+ ) + 4 \sigma(\Delta^{++}n \rightarrow \Sigma^+ K^0 \Delta^+ ) \\
+ 4 \sigma(\Delta^{++}n \rightarrow \Sigma^0 K^+ \Delta^+ ) & + 8 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^+ \Delta^+ ) \\
= 3 \sigma(\Delta^+ p \rightarrow \Sigma^+ K^+ \Delta^0) + 2 \sigma(\Delta^{++}n & \rightarrow \Sigma^0 K^0 \Delta^{++} ) + 2 \sigma(\Delta^{++}n \rightarrow \Sigma^- K^+ \Delta^{++} ) \\
+ 5 \sigma(\Delta^{++}n \rightarrow \Sigma^+ K^+ \Delta^0 ) + 4 \sigma(\Delta^{++}p & \rightarrow \Sigma^+ K^0 \Delta^{++} ) + 4 \sigma(\Delta^{++}p \rightarrow \Sigma^0 K^+ \Delta^{++} )\end{aligned}$$ $$\begin{aligned}
4 \sigma(\Delta^+ p \rightarrow \Sigma^- K^+ \Delta^{++} ) + 9 \sigma(\Delta^+ p & \rightarrow \Sigma^+ K^+ \Delta^0) + 2 \sigma(\Delta^{++}n \rightarrow \Sigma^- K^+ \Delta^{++} ) \\
+ 7 \sigma(\Delta^{++}n \rightarrow \Sigma^+ K^+ \Delta^0) & + 2 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^0 \Delta^{++} ) \\
= 6 \sigma(\Delta^+ p \rightarrow \Sigma^0 K^+ \Delta^+ ) + 2 \sigma(\Delta^{++}n & \rightarrow \Sigma^0 K^0 \Delta^{++} ) + 2 \sigma(\Delta^{++}n \rightarrow \Sigma^+ K^0 \Delta^+ ) \\
+ 8 \sigma(\Delta^{++}n \rightarrow \Sigma^0 K^+ \Delta^+ ) & + 10 \sigma(\Delta^{++}p \rightarrow \Sigma^+ K^+ \Delta^+ ) \end{aligned}$$
$$\begin{aligned}
2 \sigma(\pi^0 p \rightarrow \Lambda K^+) = 2 \sigma(\pi^0 n \rightarrow \Lambda K^0) = \sigma(\pi^- p \rightarrow \Lambda K^0) = \sigma(\pi^+ n \rightarrow \Lambda K^+)\end{aligned}$$
$$\sigma(\pi^- p \rightarrow \Lambda K^0) = 2 \sigma(\pi^0 p \rightarrow \Lambda K^+)$$
The case of the reaction $\pi N \rightarrow \Sigma K$ is detailed in the paper.
$$\begin{aligned}
\sigma(\pi^+ p \rightarrow \Lambda K^+ \pi^+) = \sigma(\pi^- n \rightarrow \Lambda K^0 \pi^-) & = \sigma(\pi^0 p \rightarrow \Lambda K^0 \pi^+) = \sigma(\pi^- p \rightarrow \Lambda K^0 \pi^0) \\
= \sigma(\pi^+ n \rightarrow \Lambda K^+ \pi^0) = \sigma(\pi^0 n \rightarrow \Lambda K^+ \pi^-) & = 2\sigma(\pi^0 p \rightarrow \Lambda K^+ \pi^0) = 2 \sigma(\pi^0 n \rightarrow \Lambda K^0 \pi^0) \\
= \sigma(\pi^- p \rightarrow \Lambda K^+ \pi^-) & = \sigma(\pi^+ n \rightarrow \Lambda K^0 \pi^+)\end{aligned}$$
$$\begin{aligned}
\sigma(\pi^0 p \rightarrow \Lambda K^0 \pi^+) & = \sigma(\pi^- p \rightarrow \Lambda K^0 \pi^0)\\
\\
\sigma(\pi^- p \rightarrow \Lambda \pi^- K^+) + \sigma(\pi^+ p \rightarrow \Lambda \pi^+ K^+) & = 2 \sigma(\pi^0 p \rightarrow \Lambda \pi^0 K^+) + \sigma(\pi^0 p \rightarrow \Lambda \pi^+ K^0)\end{aligned}$$
------------------------------------------------------------------------
$$\begin{aligned}
\frac{4}{5} \sigma(\pi^+ p \rightarrow \Sigma^+ K^0 \pi^+) = \frac{4}{5} \sigma(\pi^- n \rightarrow \Sigma^- K^+ \pi^-) & = \frac{4}{3} \sigma(\pi^+ p \rightarrow \Sigma^+ K^+ \pi^0) = \frac{4}{3} \sigma(\pi^- n \rightarrow \Sigma^- K^0 \pi^0) \\
= 4 \sigma(\pi^+ p \rightarrow \Sigma^0 K^+ \pi^+) = 4 \sigma(\pi^- n \rightarrow \Sigma^0 K^0 \pi^-) & = 2 \sigma(\pi^0 p \rightarrow \Sigma^+ K^0 \pi^0) = 2 \sigma(\pi^0 n \rightarrow \Sigma^- K^+ \pi^0) \\
= 2 \sigma(\pi^0 p \rightarrow \Sigma^+ K^+ \pi^-) = 2 \sigma(\pi^0 n \rightarrow \Sigma^- K^0 \pi^+) & = \frac{4}{3} \sigma(\pi^0 p \rightarrow \Sigma^0 K^0 \pi^+) = \frac{4}{3} \sigma(\pi^0 n \rightarrow \Sigma^0 K^+ \pi^-) \\
= \frac{8}{3} \sigma(\pi^0 p \rightarrow \Sigma^0 K^+ \pi^0) = \frac{8}{3} \sigma(\pi^0 n \rightarrow \Sigma^0 K^0 \pi^0) & = 2 \sigma(\pi^0 p \rightarrow \Sigma^- K^+ \pi^+) = 2 \sigma(\pi^0 n \rightarrow \Sigma^+ K^0 \pi^-) \\
= \frac{8}{3} \sigma(\pi^- p \rightarrow \Sigma^+ K^0 \pi^-) = \frac{8}{3} \sigma(\pi^+ n \rightarrow \Sigma^- K^+ \pi^+) & = \frac{8}{5} \sigma(\pi^- p \rightarrow \Sigma^0 K^0 \pi^0) = \frac{8}{5} \sigma(\pi^+ n \rightarrow \Sigma^0 K^+ \pi^0) \\
= \frac{8}{5} \sigma(\pi^- p \rightarrow \Sigma^0 K^+ \pi^-) = \frac{8}{5} \sigma(\pi^+ n \rightarrow \Sigma^0 K^0 \pi^+) & = \sigma(\pi^- p \rightarrow \Sigma^- K^0 \pi^+) = \sigma(\pi^+ n \rightarrow \Sigma^+ K^+ \pi^-) \\
= \frac{8}{3} \sigma(\pi^- p \rightarrow \Sigma^- K^+ \pi^0) & = \frac{8}{3} \sigma(\pi^+ n \rightarrow \Sigma^+ K^0 \pi^0)\end{aligned}$$
$$\begin{aligned}
\sigma(\pi^- p \rightarrow \Sigma^- \pi^0 K^+) + \sigma(\pi^- p & \rightarrow \Sigma^0 \pi^0 K^0)
+ \sigma(\pi^+ p \rightarrow \Sigma^+ \pi^0 K^+) \\
= \sigma(\pi^0 p \rightarrow \Sigma^- \pi^+ K^+) + \sigma(\pi^0 p & \rightarrow \Sigma^0 \pi^+ K^0) + \sigma(\pi^0 p \rightarrow \Sigma^+ \pi^- K^+)\end{aligned}$$
$$\begin{aligned}
\sigma(\pi^- p \rightarrow \Sigma^- \pi^+ K^0) + \sigma(\pi^- p & \rightarrow \Sigma^+ \pi^- K^0) + \sigma(\pi^+ p \rightarrow \Sigma^+ \pi^+ K^0) \\
= \sigma(\pi^- p \rightarrow \Sigma^0 \pi^0 K^0) + 2 \sigma(\pi^0 p & \rightarrow \Sigma^0 \pi^0 K^+) + \sigma(\pi^0 p \rightarrow \Sigma^0 \pi^+ K^0) + \sigma(\pi^0 p \rightarrow \Sigma^+ \pi^0 K^0)\end{aligned}$$
$$\begin{aligned}
\sigma(\pi^- p \rightarrow \Sigma^0 \pi^- K^+) + \sigma(\pi^- p & \rightarrow \Sigma^0 \pi^0 K^0) + \sigma(\pi^+ p \rightarrow \Sigma^0 \pi^+ K^+) \\
= \sigma(\pi^0 p \rightarrow \Sigma^- \pi^+ K^+) + \sigma(\pi^0 p & \rightarrow \Sigma^+ \pi^- K^+) + \sigma(\pi^0 p \rightarrow \Sigma^+ \pi^0 K^0)\end{aligned}$$
$$\begin{aligned}
2 \sigma(\pi^+ p \rightarrow p K^+ \overline{K}^0) = 2 \sigma(\pi^- n \rightarrow n K^0 K^-) & = 4 \sigma(\pi^0 p \rightarrow p K^+ K^-) = 4 \sigma(\pi^0 n \rightarrow n K^0 \overline{K}^0) \\
= 4 \sigma(\pi^0 p \rightarrow p K^0 \overline{K}^0) = 4 \sigma(\pi^0 n \rightarrow n K^+ K^-) & = \sigma(\pi^0 p \rightarrow n K^+ \overline{K}^0) = \sigma(\pi^0 n \rightarrow p K^0 K^-) \\
= 2 \sigma(\pi^- p \rightarrow p K^0 K^-) = 2 \sigma(\pi^+ n \rightarrow n K^+ \overline{K}^0) & = \sigma(\pi^- p \rightarrow n K^+ K^-) = \sigma(\pi^+ n \rightarrow p K^0 \overline{K}^0) \\
= \sigma(\pi^- p \rightarrow n K^0 \overline{K}^0) & = \sigma(\pi^+ n \rightarrow p K^+ K^-)\end{aligned}$$
$$\begin{aligned}
\sigma(\pi^- p \rightarrow n K^0 \overline{K}^0) + \sigma(\pi^- p \rightarrow n K^+ K^-) & + \sigma(\pi^- p \rightarrow p K^0 K^-) + \sigma(\pi^+ p \rightarrow p K^+ \overline{K}^0) \\
= 2 \sigma(\pi^0 p \rightarrow n K^+ \overline{K}^0) + 2 \sigma(\pi^0 p & \rightarrow p K^0 \overline{K}^0) + 2 \sigma(\pi^0 p \rightarrow p K^+ K^-)\end{aligned}$$
$$\begin{aligned}
\sigma(\pi^+ p \rightarrow \Lambda K^0 \pi^+ \pi^+) & = \sigma(\pi^- n \rightarrow \Lambda K^+ \pi^- \pi^-) &
= \sigma(\pi^+ p \rightarrow \Lambda K^+ \pi^+ \pi^0) & = \sigma(\pi^- n \rightarrow \Lambda K^0 \pi^0 \pi^-) \\
= 2 \sigma(\pi^0 p \rightarrow \Lambda K^0 \pi^+ \pi^0) & = 2 \sigma(\pi^0 n \rightarrow \Lambda K^+ \pi^0 \pi^-) &
= \sigma(\pi^0 p \rightarrow \Lambda K^+ \pi^+ \pi^-) & = \sigma(\pi^0 n \rightarrow \Lambda K^0 \pi^+ \pi^-) \\
= 4 \sigma(\pi^0 p \rightarrow \Lambda K^+ \pi^0 \pi^0) & = 4 \sigma(\pi^0 n \rightarrow \Lambda K^0 \pi^0 \pi^0) &
= \sigma(\pi^- p \rightarrow \Lambda K^0 \pi^+ \pi^-) & = \sigma(\pi^+ n \rightarrow \Lambda K^+ \pi^+ \pi^-) \\
= 2 \sigma(\pi^- p \rightarrow \Lambda K^0 \pi^0 \pi^0) & = 2 \sigma(\pi^+ n \rightarrow \Lambda K^+ \pi^0 \pi^0) &
= \sigma(\pi^- p \rightarrow \Lambda K^+ \pi^0 \pi^-) & = \sigma(\pi^+ n \rightarrow \Lambda K^0 \pi^+ \pi^0)\end{aligned}$$
$$\begin{aligned}
\sigma(\pi^- p \rightarrow \Lambda K^+ \pi^0 \pi^-) & + 2 \sigma(\pi^- p \rightarrow \Lambda K^0 \pi^+ \pi^-) \\
+ \sigma(\pi^+ p \rightarrow \Lambda K^+ \pi^+ \pi^0) & + 2 \sigma(\pi^+ p \rightarrow \Lambda K^0 \pi^+ \pi^+) \\
= 4 \sigma(\pi^0 p \rightarrow \Lambda K^+ \pi^0 \pi^0) + 2 \sigma(\pi^0 p & \rightarrow \Lambda K^+ \pi^+ \pi^-) + 3 \sigma(\pi^0 p \rightarrow \Lambda K^0 \pi^+ \pi^0)\end{aligned}$$ $$\begin{aligned}
\sigma(\pi^- p \rightarrow & \Lambda K^0 \pi^0 \pi^0) + 2 \sigma(\pi^0 p \rightarrow \Lambda K^+ \pi^0 \pi^0) + \sigma(\pi^0 p \rightarrow \Lambda K^0 \pi^+ \pi^0) \\
& = \sigma(\pi^- p \rightarrow \Lambda K^0 \pi^+ \pi^-) + \sigma(\pi^+ p \rightarrow \Lambda K^0 \pi^+ \pi^+)\end{aligned}$$
$$\begin{aligned}
\sigma(\pi^+ p \rightarrow \Sigma^+ K^+ \pi^+ \pi^-) & = \sigma(\pi^- n \rightarrow \Sigma^- K^0 \pi^+ \pi^-) = 4 \sigma(\pi^+ p \rightarrow \Sigma^+ K^+ \pi^0 \pi^0) \\
= 4 \sigma(\pi^- n \rightarrow \Sigma^- K^0 \pi^0 \pi^0) & = 2 \sigma(\pi^+ p \rightarrow \Sigma^0 K^+ \pi^+ \pi^0) = 2 \sigma(\pi^- n \rightarrow \Sigma^0 K^0 \pi^0 \pi^-) \\
= 4 \sigma(\pi^+ p \rightarrow \Sigma^- K^+ \pi^+ \pi^+) & = 4 \sigma(\pi^- n \rightarrow \Sigma^+ K^0 \pi^- \pi^-) = \sigma(\pi^+ p \rightarrow \Sigma^+ K^0 \pi^+ \pi^0) \\
= \sigma(\pi^- n \rightarrow \Sigma^- K^+ \pi^0 \pi^-) & = 4 \sigma(\pi^+ p \rightarrow \Sigma^0 K^0 \pi^+ \pi^+) = 4 \sigma(\pi^- n \rightarrow \Sigma^0 K^+ \pi^- \pi^-) \\
= 2 \sigma(\pi^0 p \rightarrow \Sigma^+ K^+ \pi^0 \pi^-) & = 2 \sigma(\pi^0 n \rightarrow \Sigma^- K^0 \pi^+ \pi^0) = 2 \sigma(\pi^0 p \rightarrow \Sigma^0 K^+ \pi^+ \pi^-) \\
= 2 \sigma(\pi^0 n \rightarrow \Sigma^0 K^0 \pi^+ \pi^-) & = 4 \sigma(\pi^0 p \rightarrow \Sigma^0 K^+ \pi^0 \pi^0) = 4 \sigma(\pi^0 n \rightarrow \Sigma^0 K^0 \pi^0 \pi^0) \\
= 4 \sigma(\pi^0 p \rightarrow \Sigma^- K^+ \pi^+ \pi^0) & = 4 \sigma(\pi^0 n \rightarrow \Sigma^+ K^0 \pi^0 \pi^-) = \sigma(\pi^0 p \rightarrow \Sigma^+ K^0 \pi^+ \pi^-) \\
= \sigma(\pi^0 n \rightarrow \Sigma^- K^+ \pi^+ \pi^-) & = 4 \sigma(\pi^0 p \rightarrow \Sigma^+ K^0 \pi^0 \pi^0) = 4 \sigma(\pi^0 n \rightarrow \Sigma^- K^+ \pi^0 \pi^0) \\
= 4 \sigma(\pi^0 p \rightarrow \Sigma^0 K^0 \pi^+ \pi^0) & = 4 \sigma(\pi^0 n \rightarrow \Sigma^0 K^+ \pi^0 \pi^-) = 2 \sigma(\pi^0 p \rightarrow \Sigma^- K^0 \pi^+ \pi^+) \\
= 2 \sigma(\pi^0 n \rightarrow \Sigma^+ K^+ \pi^- \pi^-) & = 4 \sigma(\pi^- p \rightarrow \Sigma^+ K^+ \pi^- \pi^-) = 4 \sigma(\pi^+ n \rightarrow \Sigma^- K^0 \pi^+ \pi^+) \\
= 2 \sigma(\pi^- p \rightarrow \Sigma^0 K^+ \pi^0 \pi^-) & = 2 \sigma(\pi^+ n \rightarrow \Sigma^0 K^0 \pi^+ \pi^0) = 4 \sigma(\pi^- p \rightarrow \Sigma^- K^+ \pi^+ \pi^-) \\
= 4 \sigma(\pi^+ n \rightarrow \Sigma^+ K^0 \pi^+ \pi^-) & = 4 \sigma(\pi^- p \rightarrow \Sigma^- K^+ \pi^0 \pi^0) = 4 \sigma(\pi^+ n \rightarrow \Sigma^+ K^0 \pi^0 \pi^0) \\
= 2 \sigma(\pi^- p \rightarrow \Sigma^+ K^0 \pi^0 \pi^-) & = 2 \sigma(\pi^+ n \rightarrow \Sigma^- K^+ \pi^+ \pi^0) = \sigma(\pi^- p \rightarrow \Sigma^0 K^0 \pi^+ \pi^-) \\
= \sigma(\pi^+ n \rightarrow \Sigma^0 K^+ \pi^+ \pi^-) & = 2 \sigma(\pi^- p \rightarrow \Sigma^0 K^0 \pi^0 \pi^0) = 2 \sigma(\pi^+ n \rightarrow \Sigma^0 K^+ \pi^0 \pi^0) \\
= 2 \sigma(\pi^- p \rightarrow \Sigma^- K^0 \pi^+ \pi^0) & = 2 \sigma(\pi^+ n \rightarrow \Sigma^+ K^+ \pi^0 \pi^-)\end{aligned}$$
$$\begin{aligned}
\sigma(\pi^- p \rightarrow \Sigma^- K^0 \pi^+ \pi^0) &+ \sigma(\pi^- p \rightarrow \Sigma^+ K^0 \pi^0 \pi^-) + \sigma(\pi^- p \rightarrow \Sigma^- K^+ \pi^0 \pi^0) \\
+ \sigma(\pi^- p \rightarrow \Sigma^- K^+ \pi^+ \pi^-) &+ \sigma(\pi^- p \rightarrow \Sigma^+ K^+ \pi^- \pi^-) + \sigma(\pi^+ p \rightarrow \Sigma^+ K^0 \pi^+ \pi^0) \\
+ \sigma(\pi^+ p \rightarrow \Sigma^- K^+ \pi^+ \pi^+) &+ \sigma(\pi^+ p \rightarrow \Sigma^+ K^+ \pi^0 \pi^0) + \sigma(\pi^+ p \rightarrow \Sigma^+ K^+ \pi^+ \pi^-) \\
= \sigma(\pi^0 p \rightarrow \Sigma^- K^0 \pi^+ \pi^+) &+ 2 \sigma(\pi^0 p \rightarrow \Sigma^0 K^0 \pi^+ \pi^0) + \sigma(\pi^0 p \rightarrow \Sigma^+ K^0 \pi^0 \pi^0) \\
+ \sigma(\pi^0 p \rightarrow \Sigma^+ K^0 \pi^+ \pi^-) &+ \sigma(\pi^0 p \rightarrow \Sigma^- K^+ \pi^+ \pi^0) + 2 \sigma(\pi^0 p \rightarrow \Sigma^0 K^+ \pi^0 \pi^0) \\
+ 2 \sigma(\pi^0 p \rightarrow & \Sigma^0 K^+ \pi^+ \pi^-) + \sigma(\pi^0 p \rightarrow \Sigma^+ K^+ \pi^0 \pi^-)\end{aligned}$$ $$\begin{aligned}
\sigma(\pi^- p \rightarrow \Sigma^- K^+ \pi^+ \pi^-) & + \sigma(\pi^- p \rightarrow \Sigma^+ K^+ \pi^- \pi^-) + \sigma(\pi^0 p \rightarrow \Sigma^- K^0 \pi^+ \pi^+) \\
+ \sigma(\pi^0 p \rightarrow \Sigma^+ K^0 \pi^+ \pi^-) & + \sigma(\pi^+ p \rightarrow \Sigma^- K^+ \pi^+ \pi^+) + \sigma(\pi^+ p \rightarrow \Sigma^+ K^+ \pi^+ \pi^-) \\
= 2 \sigma(\pi^- p \rightarrow \Sigma^0 K^0 \pi^0 \pi^0) & + \sigma(\pi^- p \rightarrow \Sigma^- K^+ \pi^0 \pi^0) + \sigma(\pi^- p \rightarrow \Sigma^0 K^+ \pi^0 \pi^-) \\
+ \sigma(\pi^0 p \rightarrow \Sigma^0 K^0 \pi^+ \pi^0) & + \sigma(\pi^0 p \rightarrow \Sigma^+ K^0 \pi^0 \pi^0) + 2 \sigma(\pi^0 p \rightarrow \Sigma^0 K^+ \pi^0 \pi^0) \\
+ \sigma(\pi^+ p \rightarrow & \Sigma^0 K^+ \pi^+ \pi^0) + \sigma(\pi^+ p \rightarrow \Sigma^+ K^+ \pi^0 \pi^0)\end{aligned}$$ $$\begin{aligned}
\sigma(\pi^- p \rightarrow \Sigma^- K^+ \pi^0 \pi^0) & + \sigma(\pi^0 p \rightarrow \Sigma^- K^0 \pi^+ \pi^+) + \sigma(\pi^0 p \rightarrow \Sigma^0 K^0 \pi^+ \pi^0) \\
+ 3 \sigma(\pi^0 p \rightarrow \Sigma^+ K^0 \pi^0 \pi^0) & +\sigma(\pi^0 p \rightarrow \Sigma^+ K^0 \pi^+ \pi^-) + 2 \sigma(\pi^0 p \rightarrow \Sigma^- K^+ \pi^+ \pi^0) \\
+ 2 \sigma(\pi^0 p \rightarrow \Sigma^0 K^+ \pi^0 \pi^0) & + 2 \sigma(\pi^0 p \rightarrow \Sigma^+ K^+ \pi^0 \pi^-) + \sigma(\pi^+ p \rightarrow \Sigma^+ K^+ \pi^0 \pi^0) \\
= 2 \sigma(\pi^- p \rightarrow \Sigma^0 K^0 \pi^+ \pi^-) & + \sigma(\pi^- p \rightarrow \Sigma^- K^+ \pi^+ \pi^-) + \sigma(\pi^- p \rightarrow \Sigma^0 K^+ \pi^0 \pi^-) \\
+ \sigma(\pi^- p \rightarrow \Sigma^+ K^+ \pi^- \pi^-) & + 2 \sigma(\pi^+ p \rightarrow \Sigma^0 K^0 \pi^+ \pi^+) + \sigma(\pi^+ p \rightarrow \Sigma^- K^+ \pi^+ \pi^+) \\
+ \sigma(\pi^+ p \rightarrow & \Sigma^0 K^+ \pi^+ \pi^0) + \sigma(\pi^+ p \rightarrow \Sigma^+ K^+ \pi^+ \pi^-)\end{aligned}$$
Parametrizations of elementary cross sections involving strange particles ($\mathbf{K}$ $\mathbf{\overline{K}}$ $\mathbf{\Lambda}$ $\mathbf{\Sigma}$) for incident energies from threshold up to 15 GeV. {#param}
========================================================================================================================================================================================================
Here we give the full parametrization for each new channel considered. If several cross sections are linked by a symmetry, only one of them is parametrized explicitly; see the list of symmetries between the cross sections given above.
In the following, $P_{lab}$ is the projectile momentum in the rest frame of the target nucleon. Note that in INCL protons and neutrons are assigned the same mass inside the nucleus, so the formulae given below are valid for both proton and neutron targets.
Pions are also given a common mass. Lambdas, (anti)kaons, and Sigmas are treated with their real masses. The threshold for every channel of the same reaction is the same (the highest one, calculated with the INCL masses) in order to remain consistent with the isospin-invariance hypothesis.
Cross sections are always given in mb.
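As an illustration of the threshold convention just described, a lab-momentum threshold can be computed from the invariant-mass threshold of the final state. The sketch below is not part of INCL and uses approximate real (PDG-like) masses, whereas INCL uses its own mass table; the corresponding parametrization below quotes $0.911~GeV/c$ for this channel with the INCL masses.

```python
import math

def plab_threshold(m_beam, m_target, m_finals):
    """Lab momentum (GeV/c) at which the final state opens, for a beam of mass
    m_beam hitting a target of mass m_target at rest (all masses in GeV)."""
    s_thr = sum(m_finals) ** 2                      # threshold invariant mass squared
    e_beam = (s_thr - m_beam ** 2 - m_target ** 2) / (2.0 * m_target)
    return math.sqrt(max(e_beam ** 2 - m_beam ** 2, 0.0))

# pi- p -> Lambda K0 with approximate real masses
print(plab_threshold(0.1396, 0.9383, [1.1157, 0.4976]))  # ~0.90 GeV/c
```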
Elastic
-------
Considering that the data available for the elastic and quasi-elastic reactions $\Sigma N \rightarrow \Sigma N$, $\Sigma N \rightarrow \Lambda N$, and $\Lambda N \rightarrow \Sigma N$ are very scarce and carry large uncertainties, these channels are treated as equivalent to $\Lambda N \rightarrow \Lambda N$.
$$\text{\hspace*{-2cm}}\sigma = \left\{
\begin{aligned}
&200 & \ P_{lab} < 145 \ MeV/c \\
&869 \exp(-P_{lab}[MeV/c]/100) & \ 145 \ MeV/c \leq \ P_{lab} < 425 \ MeV/c \\
&12.8 \exp(-6.2 \ 10^{-5} \ P_{lab}[MeV/c]) & \ 425 \ MeV/c \leq \ P_{lab} < 30 \ GeV/c \\
\end{aligned}
\right.$$
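For concreteness, here is a minimal sketch of how such a piecewise parametrization is evaluated, taking the first formula above (assumed here to be the common hyperon-nucleon elastic form just discussed; that channel assignment is an assumption of this sketch, not stated in the source).

```python
import math

def sigma_hyperon_nucleon_elastic(p_lab_mev):
    """First piecewise elastic parametrization above (mb), p_lab in MeV/c."""
    if p_lab_mev < 145.0:
        return 200.0
    if p_lab_mev < 425.0:
        return 869.0 * math.exp(-p_lab_mev / 100.0)
    if p_lab_mev < 30000.0:
        return 12.8 * math.exp(-6.2e-5 * p_lab_mev)
    raise ValueError("parametrization given only up to 30 GeV/c")
```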
$$\text{\hspace*{-2cm}}\sigma = \left\{
\begin{aligned}
&12 & \ P_{lab} < 935 \ MeV/c \\
&17.4-3\exp(6.3 \ 10^{-4} \ P_{lab}[MeV/c]) & \ 935 \ MeV/c \leq \ P_{lab} < 2080 \ MeV/c \\
&832 \ P_{lab}[MeV/c]^{-0.64} & \ 2080 \ MeV/c \leq \ P_{lab} < 5.5 \ GeV/c \\
&3.36 & \ 5.5 \ GeV/c \leq \ P_{lab} < 30 \ GeV/c \\
\end{aligned}
\right.$$
$$\left.
\begin{aligned}
\sigma& = 6.132 P_{lab}[GeV/c]^{-0.2437}+12.98 \exp \frac{-(P_{lab}[GeV/c]-0.9902)^2}{0.05558} \\
&+2.928 \exp\frac{-(P_{lab}[GeV/c]-1.649)^2}{0.772}+564.3\exp \frac{-(P_{lab}[GeV/c]+0.9901)^2}{0.5995}
\end{aligned}
\right.$$
Inelastic
---------
In this section, unless otherwise specified, the momentum is given in $GeV/c$.
$$\sigma(pp \rightarrow p \Lambda K^+) = 1.11875 \frac{(P_{lab}-2.3393)^{1.0951}}{(P_{lab}+2.3393)^{2.0958}} \ \ 2.3393 \leq \ P_{lab}< 30 \ GeV/c$$
$$\sigma(pp \rightarrow n \Sigma^+ K^+) = 6.38 (P_{lab}-2.593)^{2.1}/P_{lab}^{4.162} \ \ P_{lab}\geq2.593 \ GeV/c$$
$$\sigma(pp \rightarrow p p K^+ K^-) = 3/38 \left(1-\frac{2.872^2}{s[GeV^2]}\right)^{3} \left(\frac{2.872^2}{s[GeV^2]} \right)^{0.8} \ \sqrt{s} \geq 2.872 \ GeV$$
The two following formulae come from the results of the Fritiof model [@fritiof].
$$\left.
\begin{aligned}
\sigma(pp) &= 8.12 (P_{lab}-6)^{2.157}/P_{lab}^{2.333} \\
\sigma(pn) &= 10.15 (P_{lab}-6)^{2.157}/P_{lab}^{2.333} \\
\end{aligned}
\right\} \ P_{lab}\geq 6 \ GeV$$
$$\sigma(\Delta^{++} p \rightarrow p p K^+ \overline{K}^0) = 6.6 \left(1-\frac{2.872^2}{s[GeV^2]}\right)^{3} \left(\frac{2.872^2}{s[GeV^2]} \right)^{0.8} \ \ \sqrt{s}\geq2.872 \ GeV\\$$
$$\left.
\begin{aligned}
\sigma(\pi^- p \rightarrow \Lambda K^0) &= 0.3936 \ P_{lab}^{-1.357}\\
&-6.052 \exp(-(P_{Lab}-0.7154)^2/0.02026) \\
&+0.489 \exp(-(P_{Lab}-0.8886)^2/0.08378) \\
&-0.16 \exp(-(P_{Lab}-0.9684)^2/0.001432) \\
\end{aligned}
\right.
\ P_{lab} \geq 0.911 \ GeV/c$$
$$\sigma(\pi^- p \rightarrow \Sigma^- K^+) = 4.352 \frac{(P_{Lab}-1.0356)^{1.006}}{(P_{lab}+1.0356)^{0.0978} P_{lab}^{5.375}} \qquad \ P_{lab} \geq 1.0356 \ GeV/c$$
$$\sigma(\pi^+ p \rightarrow \Sigma^+ K^+) = 1.897 \ 10^{-3} \frac{(P_{Lab}-1.0428)^{2.869} \left(P_{Lab}+1.0428\right)^{16.68}}{P_{Lab}^{19.1}} \qquad \ P_{lab} \geq 1.0428 \ GeV/c$$
$$\sigma(\pi^- p \rightarrow \Sigma^0 K^0) = 0.3474 (P_{Lab}-1.034)^{0.07678}/ \ P_{lab}^{1.627} \qquad \ P_{lab} \geq 1.034 \ GeV/c$$
$$\sigma(\pi^0 p \rightarrow \Sigma^0 K^+) = 3.624 (P_{Lab}-1.0356)^{1.4}/ \ P_{lab}^{5.14} \qquad \ P_{lab} \geq 1.0356 \ GeV/c$$
$$\sigma(\pi^+ p \rightarrow \Lambda K^+ \pi ^+) = 146.2 \frac{(P_{Lab}-1.147)^{1.996}}{(P_{Lab}+1.147)^{5.921}} \qquad P_{Lab}\geq1.147 \ GeV/c$$
$$\sigma(\pi^- p \rightarrow \Sigma^- K^0 \pi ^+) = 8.139 \frac{(P_{Lab}-1.3041)^{2.431}}{(P_{Lab})^{5.298}} \qquad \ P_{Lab}\geq1.3041 \ GeV/c$$
$$\sigma(\pi^+ p \rightarrow \Lambda K^+ \pi ^+ \pi^0) = 18.77 \frac{(P_{Lab}-1.4162)^{4.597}}{(P_{Lab})^{6.877}} \qquad \ P_{Lab}\geq1.4162 \ GeV/c$$
$$\sigma(\pi^+ p \rightarrow \Sigma^+ K^+ \pi ^+ \pi^-) = 137.6 \frac{(P_{Lab}-1.5851)^{5.856}}{(P_{Lab})^{9.295}} \qquad \ P_{Lab}\geq1.5851 \ GeV/c$$
$$\sigma(\pi^0 p \rightarrow n K^+ \overline{K}^0) = 2.996 \frac{(P_{Lab}-1.5066)^{1.929}}{(P_{Lab})^{3.582}} \qquad \ P_{lab} \geq1.5066 \ GeV/c$$
The three following formulae come from the results of the Fritiof model [@fritiof].
$$\left.
\begin{aligned}
\sigma(\pi^+ p) &= 3.851 \frac{(P_{Lab} - 2.2)^2}{P_{Lab}^{1.88286}} \\
\sigma(\pi^0 p) &= 4.4755 \frac{(P_{Lab} - 2.2)^{1.927}}{P_{Lab}^{1.89343}} \\
\sigma(\pi^- p) &= 5.1 \frac{(P_{Lab} - 2.2)^{1.854}}{P_{Lab}^{1.904}} \\
\end{aligned}
\right\} \ P_{lab} \geq 2.2 \ GeV/c$$
$$\sigma(p \Lambda \rightarrow \Sigma^+ n) = 8.74 \frac{(P_{Lab}-0.664)^{0.438}}{(P_{Lab})^{2.717}} \ \ P_{lab} \geq 0.664 \ GeV/c$$
$$\sigma(p \Sigma^0 \rightarrow p \Lambda) = \left\{
\begin{aligned}
& 100 \qquad \ &P_{lab} < 0.1 \ GeV/c \\
& 8.23 \ P_{lab}^{-1.087} &0.1 \ GeV/c \leq P_{lab}\\
\end{aligned}
\right.$$
$$\left.
\begin{aligned}
\sigma(n \Sigma^0 \rightarrow p \Sigma^-)\\
\sigma(n \Sigma^+ \rightarrow p \Sigma^0)\\
\end{aligned}
\right\} =
\left\{
\begin{aligned}
&0 & P_{lab} < 162 \ MeV/c \\
&13.79 \ P_{lab}^{-1.181} \ & P_{lab} \geq 162 \ MeV/c \\
\end{aligned}
\right.$$
$$\left.
\begin{aligned}
\sigma(p \Sigma^0 \rightarrow n \Sigma^+)\\
\sigma(p \Sigma^- \rightarrow n \Sigma^0)\\
\end{aligned}
\right\} =
\left\{
\begin{aligned}
& 200 & P_{lab} < 103.5 \ MeV/c \\
& 13.79 \ P_{lab}^{-1.181} \ & P_{lab} \geq 103.5 \ MeV/c \\
\end{aligned}
\right.$$
$$\sigma(p K^- \rightarrow \Lambda \pi^0) =
\left\{
\begin{aligned}
&\begin{aligned}
&40.24 && \ \ P_{lab} < 86.636 \ MeV/c \\
&0.97 \ P_{lab}^{-1.523} && \ 86.636 \ MeV/c \leq P_{lab} < 500 \ MeV/c \\
\end{aligned}\\
&\begin{aligned}
\\
& 1.23 \ P_{lab}^{-1.467} + 0.872\exp \left(-\frac{(P_{lab}-0.749)^2}{0.0045} \right)+ 2.337 \exp \left(-\frac{(P_{lab}-0.957)^2}{0.017} \right)\\
& \qquad + 0.476 \exp \left(-\frac{(P_{lab}-1.434)^2}{0.136} \right) \qquad \qquad \ 500 \ MeV/c \leq P_{lab} < 2 \ GeV/c \\
\end{aligned}\\
\\
&\begin{aligned}
&3 \ P_{lab}^{-2.57} && \ 2 \ GeV/c \leq P_{lab} \\
\end{aligned}
\end{aligned}
\right.$$
$$\sigma(p K^- \rightarrow \Sigma^+ \pi^-) =
\left\{
\begin{aligned}
& 70.166 \qquad \qquad \ P_{lab} < 100 \ MeV/c \\
& 1.4 \ P_{lab}^{-1.7} + 1.88 \exp \left(-\frac{(P_{lab}-0.747)^2}{0.005} \right) & \ P_{lab} \geq 100 \ MeV/c \\
&\qquad + 8 \exp \left(-\frac{(P_{lab}-0.4)^2}{0.002} \right)+ 0.8 \exp \left(-\frac{(P_{lab}-1.07)^2}{0.01} \right)& \\
\end{aligned}
\right.$$
$$\sigma(p K^- \rightarrow n \overline{K}^0) =
\begin{dcases}
0 & \ P_{lab} < 89.21 \ MeV/c \\
0.4977 \frac{(P_{lab} - 0.08921)^{0.5581}}{P_{lab}^{2.704}} & \ 89.21 \ MeV/c \leq \ P_{lab} < 0.2 \ GeV/c\\
2 \ P_{lab}^{-1.2} + \ 6.493 \exp \left(-0.5 \left( \frac{P_{lab}-0.3962}{0.02} \right)^2\right) & \ 0.2 \ GeV/c \leq \ P_{lab} < 0.73 \ GeV/c\\
2.3 \ P_{lab}^{-0.9} + 1.1 \exp \left(-0.5 \left( \frac{P_{lab}-0.82}{0.04} \right)^2\right) & \\
\hspace{1cm}+ \ 5 \exp\left(-0.5 \left( \frac{P_{lab}-1.04}{0.1} \right)^2\right) & 0.73 \ GeV/c \leq \ P_{lab} < 1.38 \ GeV/c \\
\ 2.5 \ P_{lab}^{-1.68} + \ 0.7 \exp \left(-0.5 \left( \frac{P_{lab}-1.6}{0.2} \right)^2\right) & \\
\hspace{1cm}+ \ 0.2 \exp \left(-0.5 \left( \frac{P_{lab}-2.3}{0.2} \right)^2\right) & \ 1.38 \ GeV/c \leq \ P_{lab}\\
\end{dcases}$$
The following cross section is obtained by detailed balance from the previous cross section.
$$\sigma(n \overline{K}^0 \rightarrow p K^-) =
\begin{dcases}
30 & \ P_{lab} < 100 \ MeV/c \\
2 \ P_{lab}^{-1.2} + 6.493 \exp \left(-0.5 \left( \frac{P_{lab}-0.3962}{0.02} \right)^2\right) & \ 0.1 \ GeV/c \leq \ P_{lab} < 0.73 \ GeV/c\\
2.3 \ P_{lab}^{-0.9} + 1.1 \exp \left(-0.5 \left( \frac{P_{lab}-0.82}{0.04} \right)^2\right) & \\
\qquad + 5 \exp \left(-0.5 \left( \frac{P_{lab}-1.04}{0.1} \right)^2\right) & \ 0.73 \ GeV/c \leq P_{lab} < 1.38 \ GeV/c\\
2.5 \ P_{lab}^{-1.68} + 0.7 \exp \left(-0.5 \left( \frac{P_{lab}-1.6}{0.2} \right)^2\right) & \\
\qquad + 0.2 \exp \left(-0.5 \left( \frac{P_{lab}-2.3}{0.2} \right)^2\right) & \ 1.38 \ GeV/c \leq \ P_{lab} \\
\end{dcases}$$
$$\sigma(p \overline{K}^0 \rightarrow p K^- \pi^+) = 101.3\frac{(P_{lab}-0.526)^{5.846}}{P_{lab}^{8.343}} \qquad P_{lab} \geq 526 \ MeV/c$$
$$\sigma(p K^- \rightarrow \Sigma^+ \pi^0 \pi^-) = 73.67 \frac{(P_{lab}-0.260)^{6.398}}{(P_{lab}+0.260)^{9.732}} + 0.21396 \exp \left(-\frac{(P_{lab}-0.4031)^2}{0.00115} \right) \quad \ P_{lab} \geq 260 \ MeV/c$$
$$\sigma(p K^- \rightarrow \Lambda \pi^+ \pi^-) =
\left\{
\begin{aligned}
&6364 \frac{P_{lab}^{6.07}}{(P_{lab}+1)^{10.58}} + 2.158 \exp \left(-\frac{1}{2} \left(\frac{P_{lab}-0.395}{0.01984} \right)^2 \right) &\ P_{lab} < 970 \ MeV/c \\
& 46.3 \frac{P_{lab}^{0.62}}{(P_{lab}+1)^{3.565}} &\ P_{lab} \geq \ 970\ MeV/c \\
\end{aligned}
\right.$$
$$\sigma(p K^- \rightarrow p K^- \pi^+ \pi^-) = 26.8 \frac{(P_{lab}-0.85)^{4.9}}{P_{lab}^{6.34}} \qquad \ P_{lab} \geq 850 \ MeV/c$$
$$\begin{aligned}
\sigma(n K^+ \rightarrow p K^0) & = 12.84 \frac{(P_{lab}-0.0774)^{18.19}}{(P_{lab})^{20.41}} \qquad \ P_{lab} \geq 77.4 \ MeV/c \\
\sigma(p K^0 \rightarrow n K^+) & = 12.84 \frac{(P_{lab}+0.0774)^{18.19}}{(P_{lab}+0.1548)^{20.41}} \qquad \ P_{lab} \geq 0 \ MeV/c\end{aligned}$$
$$\sigma(p K^0 \rightarrow p K^+ \pi^-) = 116.8 \frac{(P_{lab}-0.53)^{6.874}}{P_{lab}^{10.11}} \qquad \ P_{lab} \geq 530 \ MeV/c \\$$
$$\sigma(p K^0 \rightarrow p K^+ \pi^0 \pi^-) =
\left\{
\begin{aligned}
& 26.41 \frac{(P_{lab}-0.812)^{7.138}}{P_{lab}^{5.337}} & \ 812 \ MeV/c \leq \ P_{lab}& < 1.744 \ GeV/c \\
& 1572 \frac{(P_{lab}-0.812)^{9.069}}{P_{lab}^{12.44}} & \ 1.744 \ GeV/c \leq \ P_{lab}& < 3.728 \ GeV/c \\
& 60.23 \frac{(P_{lab}-0.812)^{5.084}}{P_{lab}^{6.72}} & \ 3.728 \ GeV/c \leq \ P_{lab}.&\\
\end{aligned}
\right.$$
The $\boldsymbol{\pi^+ p \rightarrow K^+ \Sigma^+}$ Legendre coefficients with pion momentum from $\boldsymbol{1282}$ up to $\boldsymbol{2473~MeV/c}$ {#Leg_Table}
=====================================================================================================================================================
In this appendix we summarize in a table the first nine Legendre coefficients extracted from the differential cross sections published in [@piN7]. The reaction studied is $\pi^+ p \rightarrow K^+ \Sigma^+$ with pion momentum from $1282$ up to $2473~MeV/c$. These coefficients were determined using a ROOT minimization with a smoothness constraint.
[|c|ccccccccc|c|]{}
: Legendre coefficients extracted from differential cross sections published in [@piN7].[]{data-label="leg1"}
$P_{lab}$ $(MeV/c)$ & $A_0$ & $A_1$ & $A_2$ & $A_3$ & $A_4$ & $A_5$ & $A_6$ & $A_7$ & $A_8$ & $\chi^2/NDF$\
1282 & 0.120 & -0.030 & -0.011 & 0.121 & -0.001 & -0.012 & -0.026 & 0.008 & -0.008 & 1.476\
1328 & 0.144 & -0.029 & -0.014 & 0.135 & 0.018 & 0.007 & -0.020 & -0.004 & -0.003 & 1.665\
1377 & 0.175 & -0.033 & 0.006 & 0.168 & 0.032 & 0.010 & -0.003 & 0.016 & -0.004 & 1.240\
1419 & 0.203 & -0.023 & 0.004 & 0.201 & 0.058 & 0.031 & -0.025 & -0.005 & -0.028 & 1.330\
1490 & 0.247 & -0.042 & 0.111 & 0.174 & 0.142 & -0.015 & 0.059 & -0.027 & -0.004 & 1.289\
1518 & 0.264 & -0.043 & 0.142 & 0.189 & 0.175 & -0.003 & 0.089 & -0.055 & -0.023 & 0.861\
1582 & 0.247 & -0.018 & 0.138 & 0.176 & 0.161 & 0.008 & 0.084 & -0.031 & -0.008 & 1.177\
1614 & 0.266 & -0.007 & 0.174 & 0.181 & 0.195 & 0.039 & 0.118 & -0.003 & -0.039 & 1.094\
1687 & 0.259 & 0.015 & 0.170 & 0.165 & 0.211 & 0.075 & 0.162 & -0.036 & -0.009 & 1.155\
1712 & 0.261 & 0.021 & 0.199 & 0.158 & 0.252 & 0.088 & 0.192 & -0.003 & -0.018 & 1.781\
1775 & 0.267 & 0.037 & 0.226 & 0.133 & 0.260 & 0.107 & 0.170 & -0.003 & -0.007 & 1.201\
1808 & 0.256 & 0.066 & 0.231 & 0.108 & 0.269 & 0.104 & 0.180 & -0.020 & 0.030 & 1.033\
1879 & 0.230 & 0.076 & 0.220 & 0.102 & 0.249 & 0.072 & 0.153 & -0.063 & 0.002 & 1.914\
1906 & 0.262 & 0.065 & 0.202 & 0.110 & 0.233 & 0.082 & 0.185 & -0.025 & -0.031 & 1.194\
1971 & 0.265 & 0.085 & 0.218 & 0.100 & 0.263 & 0.131 & 0.165 & -0.048 & -0.015 & 1.108\
1997 & 0.238 & 0.085 & 0.207 & 0.056 & 0.224 & 0.122 & 0.154 & -0.009 & -0.027 & 0.981\
2067 & 0.259 & 0.103 & 0.186 & 0.081 & 0.203 & 0.181 & 0.148 & -0.008 & -0.063 & 1.011\
2099 & 0.246 & 0.158 & 0.183 & 0.112 & 0.200 & 0.200 & 0.114 & 0.001 & -0.085 & 0.779\
2152 & 0.242 & 0.121 & 0.224 & 0.064 & 0.209 & 0.188 & 0.174 & 0.041 & -0.086 & 1.339\
2197 & 0.248 & 0.101 & 0.230 & 0.051 & 0.218 & 0.223 & 0.211 & -0.013 & -0.058 & 1.491\
2241 & 0.252 & 0.121 & 0.246 & 0.061 & 0.186 & 0.199 & 0.161 & 0.044 & -0.070 & 1.129\
2291 & 0.254 & 0.154 & 0.235 & 0.125 & 0.170 & 0.269 & 0.208 & 0.092 & -0.071 & 1.324\
2344 & 0.264 & 0.144 & 0.279 & 0.110 & 0.242 & 0.254 & 0.215 & 0.087 & -0.040 & 0.911\
2379 & 0.245 & 0.172 & 0.246 & 0.114 & 0.206 & 0.278 & 0.237 & 0.133 & -0.040 & 1.239\
2437 & 0.262 & 0.167 & 0.315 & 0.106 & 0.286 & 0.249 & 0.281 & 0.150 & 0.016 & 1.308\
2473 & 0.281 & 0.158 & 0.347 & 0.095 & 0.344 & 0.230 & 0.345 & 0.083 & 0.088 & 1.306\
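For reference, the differential cross section can be rebuilt from a row of this table, assuming the conventional expansion $d\sigma/d\Omega(\theta) = \sum_{l} A_l P_l(\cos\theta)$; that normalization convention is an assumption here (it follows [@piN7] but is not restated in this excerpt). A minimal sketch:

```python
import numpy as np
from numpy.polynomial import legendre

def dsigma_domega(cos_theta, coeffs):
    """Sum_l A_l P_l(cos_theta) for the Legendre coefficients of one table row."""
    return legendre.legval(cos_theta, coeffs)

# A_0 ... A_8 at P_lab = 1282 MeV/c (first row of the table above)
a_1282 = [0.120, -0.030, -0.011, 0.121, -0.001, -0.012, -0.026, 0.008, -0.008]
angles = np.cos(np.radians([0.0, 90.0, 180.0]))
print(dsigma_domega(angles, a_1282))   # values at 0, 90 and 180 degrees
```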
|
---
abstract: |
Given a tesselation of the plane, defined by a planar straight-line graph $G$, we want to find a minimal set $S$ of points in the plane, such that the Voronoi diagram associated with $S$ fits $G$. This is the Generalized Inverse Voronoi Problem (GIVP), defined in [@Trin07] and rediscovered recently in [@Baner12]. Here we give an algorithm that solves this problem with a number of points that is linear in the size of $G$, assuming that the smallest angle in $G$ is constant.
Voronoi diagram, Dirichlet tesselation, planar tesselation, inverse Voronoi problem
author:
- Greg Aloupis
- 'Hebert Pérez-Rosés'
- 'Guillermo Pineda-Villavicencio'
- |
\
Perouz Taslakian
- 'Dannier Trinchet-Almaguer'
title: 'Fitting Voronoi Diagrams to Planar Tesselations[^1]'
---
Introduction {#intro}
============
Any planar straight-line graph (PSLG) subdivides the plane into cells, some of which may be unbounded. The Voronoi diagram (also commonly referred to as *Dirichlet tesselation*, or *Thiessen polygon*) of a set $S$ of $n$ points is a PSLG with $n$ cells, where each cell belongs to one point from $S$ and consists of all points in the plane that are closer to that point than to any other in $S$.
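For readers who wish to experiment, the Voronoi diagram of a point set can be computed with off-the-shelf tools; the following minimal sketch (not part of this paper) uses SciPy.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
points = rng.random((10, 2))      # 10 random sites in the unit square
vor = Voronoi(points)             # Voronoi diagram (a PSLG) of the sites

print(vor.vertices.shape)         # coordinates of the Voronoi vertices
print(vor.ridge_points[:5])       # pairs of sites whose cells share an edge
```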
Let $G$ be a given PSLG, whose cells can be considered bounded and convex for all practical purposes. Indeed, if some cell is not convex, it can always be partitioned into convex subcells, thus yielding a finer tesselation. The asymptotic size complexity of the PSLG remains the same under this convexification operation.
The *Inverse Voronoi Problem* (IVP) consists of deciding whether $G$ coincides with the Voronoi diagram of some set $S$ of points in the plane, and if so, finding $S$. This problem was first studied by Ash and Bolker [@ash-bolker]. Subsequently, Aurenhammer presented a more efficient algorithm [@aurenhammer], which in turn was improved by Hartvigsen, with the aid of linear programming [@hartvig92], and later by Schoenberg, Ferguson and Li [@schoen03]. Yeganova also used linear programming to determine the location of $S$ [@yega01; @yeganova-thesis].
In the IVP, the set $S$ is limited to have one point per cell; a generalized version of this problem (GIVP) allows more than one point per cell. In this case, new vertices and edges may be added to $G$, but the original ones must be kept, as shown in Figure \[givp1\]. With this relaxation the set $S$ always exists, hence we are interested in minimizing its size.
{width="70.00000%"}
\[givp1\]
The GIVP in ${\mathbb{R}}^2$ was indirectly mentioned in [@yega01; @yeganova-thesis], in the context of set separation. It was formally stated and discussed in the III Cuban Workshop on Algorithms and Data Structures, held in Havana in 2003, where an algorithm for solving the problem in ${\mathbb{R}}^2$ was sketched by the current authors. However, the manuscript remained dormant for several years, and the algorithm was only published in Spanish in 2007 [@Trin07]. Recently, the problem was revisited in [@Baner12], where another algorithm for the GIVP in ${\mathbb{R}}^2$ is given, and the special case of a rectangular tesselation is discussed in greater detail. The authors of [@Baner12] were unaware of [@Trin07], however the two algorithms turn out to have certain common aspects.
This paper is an expanded and updated English version of [@Trin07]. It contains a description and analysis of the aforementioned algorithm for solving the GIVP in ${\mathbb{R}}^2$. This is followed by the description of an implementation of the algorithm, which was used to make a first (if only preliminary) experimental study of the algorithm’s performance. Our algorithm generates ${\mathcal{O}}(E)$ sites in the worst case, where $E$ is the number of edges of $G$ (provided that the smallest angle of $G$ is constant). This bound is asymptotically optimal for tesselations with such angular constraints.
In comparison, the analysis given for the algorithm in [@Baner12] states that ${\mathcal{O}}(V^3)$ sites are generated, where $V$ is the size of a refinement of $G$ such that all faces are triangles with acute angles. Given an arbitrary PSLG, there does not appear to be any known polynomial upper bound on the size of its associated acute triangulation. Even though it seems to us that the analysis in [@Baner12] should have given a tighter upper bound in terms of $V$, even a linear bound would not make much of a difference, given that $V$ can be very large compared to the size of $G$. The analysis in [@Baner12] is purely theoretical, so it would be interesting to perform an experimental study to shed some light on the algorithm’s performance in practice.
This paper is organized as follows: In Section \[aljuarizmi\] we describe the algorithm and discuss its correctness and performance. In Section \[implement\] we derive some variants of the general strategy, and deal with several implementation issues of each variant. Section \[exper\] is devoted to an experimental analysis of the algorithm’s performance. Finally, in Section \[open\] we summarize our results and discuss some open problems arising as a result of our work.
The Algorithm {#aljuarizmi}
=============
First we establish some notation and definitions. In that respect we have followed some standard texts, such as [@deBerg].
Let $p$ and $q$ be points of the plane; as customary, $\overline{pq}$ is the segment that joins $p$ and $q$, and $\vert \overline{pq} \vert$ denotes its length. $B_{pq}$ denotes the bisector of $p$ and $q$, and $H_{pq}$ is the half-plane determined by $B_{pq}$, containing $p$. For a set $S$ of points in the plane, Vor($S$) denotes the *Voronoi diagram* generated by $S$. The points in $S$ are called *Voronoi sites* or *generators*.
If $p \in S$, $V(p)$ denotes the cell of Vor($S$) corresponding to the site $p$. For any point $q$, $C_S(q)$ is the largest empty circle centered at $q$, with respect to $S$ (the subscript $S$ can be dropped if it is clear from the context). Two points $p, q \in S$ are said to be *(strong) neighbors* (with respect to $S$) if their cells share an edge in Vor($S$); in this case $E_{pq}$ denotes that edge.
We will make frequent use of the following basic property of Voronoi diagrams:
\[when-edge\] The bisector $B_{pq}$ defines an edge of the Voronoi diagram if, and only if, there exists a point $x$ on $B_{pq}$ such that $C_S(x)$ contains both $p$ and $q$ on its boundary, but no other site. The (open) edge in question consists of all points $x$ with that property.
The technique used by the algorithm is to place pairs of points (*sentinels*) along each edge $e$ of the PSLG (each pair is placed so that it is bisected by $e$) in order to guard or protect $e$. The number of sentinels required to protect $e$ depends on its length and the relative positions of its neighboring edges. Each pair of sentinels meant to guard $e$ is placed on the boundary of some circle, whose center lies on $e$. Furthermore this circle will not touch any other edge. The only exception is when the circle is centered on an endpoint of $e$, in which case it is allowed to touch all other edges sharing that endpoint. More formally, we have the following.
Let $G$ be a PSLG, and let $e$ be an edge of $G$. Let $S$ be a set of points, and $p, q \in S$. The pair of points $p, q$ is said to be a **pair of sentinels** of $e$ if they are strong neighbors with respect to $S$, and $E_{pq}$ is a subsegment of $e$. In this case, $e$ (or more precisely, the segment $E_{pq}$) is said to be **guarded** by $p$ and $q$.
The algorithm works in two stages: First, for each vertex $v$ of $G$ we draw a circle centered on $v$. This is our set of *initial circles* (this is described in more detail below). Then we proceed to cover each edge $e$ of $G$ by non-overlapping *inner circles*, whose centers lie on $e$, and which do not intersect any other edges of $G$.
Let $u$ be a given vertex of $G$, and let $\lambda$ be the length of the shortest edge of $G$ incident to $u$. We denote as $\xi_G(u)$ the initial circle centered at $u$, which will be taken as the largest circle with radius $\rho_0 \leq \lambda / 2$ that does not intersect any edge of $G$, except those that are incident to $u$. Once we have drawn $\xi_G(u)$, for each edge $e$ incident to $u$ we can choose a pair of sentinels $p, q$, placed on $\xi_G(u)$, one on each side of $e$, at a suitably small distance $\epsilon$ from $e$, as in Figure \[circle1\]. Later in this section we discuss how to choose $\epsilon$ appropriately.
![Initial circle $\xi_G(u)$ for vertex $u$ and sentinels of $e$[]{data-label="circle1"}](circles1.pdf){width="60.00000%"}
Let $w$ be the point of intersection between $\xi_G(u)$ and $e$; now $p$ and $q$ guard the segment $\overline{uw}$ of $e$, which means that $\overline{uw}$ will appear in the Voronoi diagram that will be constructed, provided that we do not include any new points inside $\xi_G(u)$ (see Lemma \[when-edge\]).
Let $e = \overline{uv}$ be an edge of $G$, and $w_1, w_2$ the intersection points of $\xi(u)$ and $\xi(v)$ with $e$, respectively.[^2] The segments $\overline{u w_1}$ and $\overline{u w_2}$ are now guarded, whereas the (possibly empty) segment $\overline{w_1 w_2}$ still remains unguarded. In order to guard $\overline{w_1 w_2}$ it suffices to cover that segment with circles centered on it, not intersecting with any edge other than $e$, and not including any sentinel belonging to another circle. Then we can choose pairs of sentinels on each covering circle, each sentinel being at distance $\epsilon$ from $e$, as shown in Figure \[circle2\].
![Edge covered by circles[]{data-label="circle2"}](circles2.pdf){width="90.00000%"}
As a consequence of Lemma \[when-edge\], $e$ will be guarded in all its length, provided that no new point is later included inside one of the circles centered on $e$. To ensure this, we will not allow an inner circle of $e$ to get closer than $\epsilon$ to another edge $f$, because then a sentinel of $f$ might fall inside the circle. With this precaution, the sentinels guarding $e$ will not interfere with other edges, since they will not be included in any circle belonging to another edge.
In summary, an outline of the algorithm is:
1. For each vertex $u \in G$, draw initial circle $\xi_G(u)$ centered on $u$.
2. Choose a suitable value of $\epsilon$.
3. For each vertex $u$ and for each edge $e$ incident to $u$, place a pair of sentinels on $\xi_G(u)$, symmetric to one another with respect to $e$, at distance $\epsilon$ from $e$.
4. For each edge $e \in G$, cover the unguarded segment of $e$ with inner circles centered on $e$, and then place pairs of sentinels on each circle.
This algorithm is a general strategy that leads to several variants when Step 4 is specified in more detail, as will be seen in Section \[implement\]. In order to prove that the algorithm works it suffices to show that:
1. The algorithm terminates after constructing a finite number of circles (and sentinels).
2. After termination, every edge of $G$ is guarded (see the discussion above).
In order to show that the algorithm terminates we will establish some facts. Let $\rho_0>0$ be the radius of the smallest initial circle. Now let $\alpha$ be the smallest angle formed by any two incident edges of $G$, say $e$ and $f$. By taking $\epsilon \leq \rho_0 \sin \frac{\alpha}{2}$ we make sure that any sentinel will be closer to the edge that it is meant to guard than to any other edge. This is valid for all initial circles.
After all initial circles have been constructed, together with their corresponding sets of sentinels, for every edge $e$ there may be a [*middle segment*]{} that remains unguarded. This segment must be covered by a finite number of inner circles. Take one edge, say $e$, with middle unguarded segment of length $\delta$. If we use circles of radius $\epsilon$ to cover the unguarded segment, then we can be sure that these circles will not intersect any circle belonging to another edge. Exactly $\lfloor \delta / 2\epsilon \rfloor +1$ such circles will suffice to cover the middle segment, where the last one may have a radius $\epsilon'$ smaller than $\epsilon$. For this last circle, the sentinels could be placed at distance $\epsilon' < \epsilon$ from $e$ (c.f. Figure \[circle3\]).
![Covering the middle segment of edge $\overline{uv}$ by inner circles of radius $\epsilon$[]{data-label="circle3"}](circles3.pdf){width="80.00000%"}
Using circles of radius $\epsilon$ is, among all the possible variants mentioned here, the one that yields the largest number of circles, and hence the largest number of sentinels (generators of the Voronoi diagram). Now let $e$ be the longest edge of $G$, with length $\Delta$. In the worst case, the number of inner circles that cover $e$ will be $\lfloor (\Delta-2\rho_0) / 2\epsilon \rfloor +1$, and the number of sentinels will be twice that number plus four (corresponding to the sentinels of both initial circles). Therefore, the algorithm generates a number of points that is linear in $E$, the number of edges, which is asymptotically optimal, since a lower bound for the number of points is the number of faces in $G$.
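A minimal sketch (not from the original implementation) of the bookkeeping in this worst-case analysis, assuming $\epsilon$ is taken as $\rho_0 \sin(\alpha/2)$ and all inner circles have radius $\epsilon$:

```python
import math

def epsilon_bound(rho0, alpha):
    """Largest epsilon allowed by the analysis: rho0 * sin(alpha/2), alpha in radians."""
    return rho0 * math.sin(alpha / 2.0)

def worst_case_sentinels(edge_length, rho0, eps):
    """Sentinels guarding one edge when all inner circles have radius eps:
    two per inner circle plus the four placed on the two initial circles."""
    middle = max(edge_length - 2.0 * rho0, 0.0)      # unguarded middle segment
    inner_circles = math.floor(middle / (2.0 * eps)) + 1
    return 2 * inner_circles + 4
```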
Note that by letting $G$ become part of the problem instance, the number of generators becomes a function of $\alpha$, and it is no longer linear in $E$. In practice, however, screen resolution and computer arithmetic impose lower bounds on $\alpha$. Under such constraints, the above analysis remains valid. This leads to our main result:
Let $G$ be a planar straight-line graph, whose smallest angle $\alpha$ is larger than a fixed constant. Then, the corresponding Generalized Inverse Voronoi Problem can be solved with ${\mathcal{O}}(E)$ generators, where $E$ is the number of edges of $G$.
Implementation {#implement}
==============
In step 4 of the algorithm given in the previous section, the method to construct the inner circles was left unspecified. Taking the circles with radius $\epsilon$, as suggested in the preceding analysis, is essentially a brute-force approach, and may easily result in too many sentinels being used. In this section we discuss two different methods for constructing the inner circles.
First let us note that in order to reduce the number of sentinels in our construction we may allow two adjacent circles on the same edge to overlap a little, so that they can share a pair of sentinels (see Figure \[circle2B\]). This observation is valid for all variants of the algorithm.
![Adjacent circles share a pair of sentinels[]{data-label="circle2B"}](circles4.pdf){width="80.00000%"}
The first variant for the construction of the inner circles along an edge is to place them sequentially (iteratively), letting them grow as much as possible, provided that they do not enter the $\epsilon$-wide security area of another edge. Obviously, this greedy heuristic must yield a smaller number of Voronoi generators than the naive approach of taking all circles with radius $\epsilon$.
Suppose we want to construct an inner circle $\chi$ for edge $e$, adjacent to another circle on $e$ that has already been fixed and on which we have already placed two sentinels: $a=(x_a, y_a)$, and $b=(x_b, y_b)$. Let $f$ be the first edge that will be touched by $\chi$ as it grows, while constrained to have its center on $e$ and $a,b$ on its boundary. Let $f'$ be a straight line parallel to $f$, at distance $\epsilon$ from $f$, and closer to $\chi$ than $f$. Let $e$ be defined by the equation $y=mx+n$, and $f'$ by the equation $Ax+By+C = 0$.[^3] The distance of any point $(x, y)$ to $f'$ is given by $\frac{ \vert Ax+By+C \vert }{\sqrt{A^2+B^2}}$. The radius of $\chi$ must be equal to this distance. Hence the $x$-coordinate of the center satisfies the following quadratic equation: $$(A^2+B^2)((x_a - x)^2+(y_a -(m x + n))^2)=(A x + B(m x + n) + C)^2$$
or
$$\begin{aligned}
&-(A^2 + 2ABm + B^2m^2 - D(m^2+1))\,x^2 \\
&-2(A(Bn+C) + B^2mn + BCm + D(x_a - m(n-y_a)))\,x \\
&-B^2n^2 - 2BCn - C^2 + D(n^2-2ny_a +x_a^2+y_a^2)=0\end{aligned}$$
where $D=A^2+B^2$.\
\
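As an illustration (not part of the original implementation), the quadratic above can be solved directly for the $x$-coordinate of the center; the names mirror the text, both roots are returned, and degenerate configurations (e.g. a vanishing leading coefficient) are left unhandled.

```python
import math

def inner_circle_center_x(m, n, A, B, C, xa, ya):
    """Roots of the quadratic above: x-coordinates of centers on y = m*x + n of a
    circle through the sentinel (xa, ya) and tangent to the line A*x + B*y + C = 0."""
    D = A * A + B * B
    P = A + B * m                       # coefficient of x in A*x + B*(m*x + n) + C
    Q = B * n + C
    a2 = D * (1.0 + m * m) - P * P
    a1 = -2.0 * (P * Q + D * (xa - m * (n - ya)))
    a0 = D * (xa * xa + (ya - n) ** 2) - Q * Q
    disc = a1 * a1 - 4.0 * a2 * a0
    if disc < 0.0:
        raise ValueError("no tangent circle through the given sentinel")
    r = math.sqrt(disc)
    return (-a1 - r) / (2.0 * a2), (-a1 + r) / (2.0 * a2)
```

The center is then $(x, mx+n)$, and the radius is the distance from that point to $f'$; the caller selects the admissible root.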
Our second variant for constructing inner circles is also based on the principle of letting them grow until they come within distance $\epsilon$ of some edge. Yet, instead of growing the circles sequentially along the edge that is to be covered, we center the first inner circle on the midpoint of the unguarded middle segment. This will yield at most two smaller disjoint unguarded segments, on which we recurse. In the worst case, a branch of the recursion will end when an unguarded segment can be covered by a circle of radius $\epsilon$. The advantage of this approach is that the coordinates of the center can be determined with much less computation, thus avoiding potential roundoff errors. Additionally, this variant is more suitable for parallel implementation than the previous one. On the other hand, we need an extra data structure to handle the unguarded segments.
We end this discussion with a word about the choice of $\epsilon$. On one hand, $\epsilon$ must be sufficiently small for the construction to be carried out. On the other hand, for the sake of robustness to numerical errors, it is convenient to take $\epsilon$ as large as possible. That is why we defer the actual choice of $\epsilon$ until the initial circles have been drawn. A different approach might be to use a variable-sized $\epsilon$, which would lead to a more complicated, yet (hopefully) more robust algorithm.
A final remark: For the sake of simplicity we have assumed throughout the whole discussion that the cells of the input tesselation are convex, but our algorithm could be easily generalized to accept tesselations with non-convex cells.
Experimental Analysis {#exper}
=====================
From the analysis in Section \[aljuarizmi\] we know that the number of sites generated by our algorithm is linear in the size of the input, provided that the smallest angle $\alpha$ is constant. However, we would like to get a more precise idea about the algorithm’s performance, and the difference between the two strategies we have suggested for Step 4. For that purpose, we have implemented the algorithm and carried out a set of experiments.
Our experimental workbench consists of a Graphical User Interface, which can generate a tesselation on a random point set, store it in a DCEL data structure, and then apply one of the two variants of the algorithm for solving the GIVP, described in Section \[aljuarizmi\].
The GUI is described in more detail in [@Trin05; @Trin07], and a beta Windows version can be downloaded from <https://www.researchgate.net/publication/239994361_Voronoi_data>. The file Voronoi data.rar contains the Windows executable and a few DCEL files, consisting of sample tesselations. The user can generate additional tesselations randomly, and apply either variant of the algorithm on them.
The tesselations are generated as follows: First, the vertex set of $G$ is randomly generated from the uniform distribution in a rectangular region. Then, pairs of vertices are chosen randomly to create edges. If a new edge intersects existing edges, then the intersection points are added as new vertices, and the intersecting edges are decomposed into their non-intersecting segments. Finally, some edges are added to connect disjoint connected components and dangling vertices, so as to make the PSLG biconnected.
Table \[tab:results\] displays some statistics about 40 such randomly generated tesselations: Number of vertices, number of edges, number of regions, number of Voronoi sites with the recursive version of Step 4, number of Voronoi sites with the sequential version of Step 4, the smallest angle $\alpha$, and the width $\epsilon$ of the security area. The tesselations have been listed in increasing order of the number of edges. For each parameter, the table also provides the median (MED), the mean value (AVG), and the standard deviation (STD).
[|cc|\*[3]{}[c]{}|\*[2]{}[c]{}|c|c|]{}
& **Vertices** & **Edges** & **Regions** & **Recursive version** & **Sequential version** & **Smallest angle (degrees)** & $\epsilon$**-neigh. (pixels)**\
& 72 & 142 & 66 & 1 020 & 852 & 1.63 & 1.20\
& 117 & 206 & 91 & 916 & 870 & 4.09 & 12.30\
& 194 & 252 & 60 & 1 468 & 1 296 & 1.07 & 9.61\
& 274 & 376 & 105 & 1 672 & 1 596 & 1.60 & 12.26\
& 229 & 429 & 202 & 2 400 & 2 148 & 3.38 & 0.91\
& 314 & 441 & 129 & 2 208 & 2 020 & 0.56 & 5.70\
& 336 & 472 & 138 & 2 656 & 2 374 & 0.47 & 4.03\
& 339 & 475 & 138 & 3 098 & 2 618 & 3.95 & 0.18\
& 344 & 480 & 138 & 3 140 & 2 720 & 0.23 & 4.48\
& 357 & 493 & 138 & 2 844 & 2 530 & 0.13 & 7.24\
& 339 & 501 & 164 & 2 580 & 2 364 & 0.38 & 3.81\
& 390 & 568 & 180 & 2 680 & 2 520 & 8.92 & 0.60\
& 438 & 637 & 281 & 3 320 & 3 028 & 0.21 & 6.34\
& 403 & 641 & 240 & 3 838 & 3 382 & 0.16 & 7.07\
& 472 & 684 & 214 & 3 432 & 3 144 & 0.25 & 2.56\
& 397 & 721 & 319 & 4 244 & 3 718 & 0.11 & 3.16\
& 421 & 784 & 365 & 5 112 & 4 406 & 1.30 & 0.12\
& 564 & 826 & 264 & 4 092 & 3 790 & 0.70 & 0.11\
& 504 & 986 & 463 & 4 148 & 4 020 & 2.37 & 14.16\
& 512 & 999 & 472 & 4 276 & 4 134 & 1.52 & 3.68\
& 552 & 1 056 & 506 & 4 048 & 4 796 & 0.25 & 4.40\
& 574 & 1 107 & 535 & 4 856 & 4 689 & 3.68 & 0.75\
& 601 & 1 166 & 567 & 5 240 & 5 009 & 0.80 & 3.43\
& 645 & 1 256 & 613 & 5 852 & 5 521 & 0.77 & 3.62\
& 672 & 1 292 & 622 & 5 720 & 5 992 & 0.25 & 3.44\
& 738 & 1 311 & 575 & 6 124 & 5 832 & 0.34 & 1.84\
& 724 & 1 399 & 677 & 6 440 & 6 194 & 0.23 & 3.23\
& 815 & 1 441 & 628 & 6 832 & 6 478 & 1.20 & 0.30\
& 763 & 1 479 & 718 & 6 960 & 6 599 & 2.14 & 0.19\
& 772 & 1 495 & 725 & 6 900 & 6 610 & 2.54 & 0.29\
& 855 & 1 522 & 669 & 7 684 & 7 158 & 1.43 & 0.31\
& 894 & 1 607 & 712 & 9 062 & 8 685 & 0.36 & 0.23\
& 898 & 1 615 & 716 & 8 152 & 7 580 & 1.19 & 0.33\
& 963 & 1 750 & 789 & 9 637 & 9 045 & 0.88 & 0.29\
& 1 006 & 1 842 & 838 & 9 236 & 8 582 & 1.85 & 0.34\
& 1 018 & 1 874 & 858 & 10 144 & 9 228 & 1.09 & 0.27\
& 984 & 1 902 & 920 & 7 924 & 7 792 & 4.40 & 0.25\
& 1 015 & 1 962 & 949 & 8 396 & 8 198 & 3.34 & 0.31\
& 1 066 & 1 973 & 909 & 10 392 & 9 492 & 0.63 & 0.30\
& 1 019 & 1 999 & 982 & 8 952 & 8 616 & 1.49 & 0.45\
**MED** & **558** & **1 027.5** & **489** & **4 566** & **4 547.5** & **3.4** & **0.3**\
**AVG** & **590** & **1 054** & **467** & **5 192** & **4 891** & **0.59** & **4.06**\
**STD** & **282.7** & **571.24** & **292.73** & **2 715.5** & **2 600** & **3.36** & **0.74**\
From the tabulated data we can also get empirical estimates about the correlation among different parameters, especially $\alpha$ and $\epsilon$, and about the distribution of their values. The parameters $\alpha$ and $\epsilon$ show a weak negative correlation with the number of edges, of $-0.548$ and $-0.358$ respectively. In turn they are positively correlated with one another, with a correlation of $0.67$. These empirical findings agree with intuition.
Figures \[histoalpha\] and \[histoepsilon\] display the histograms of $\alpha$ and $\epsilon$ with 20 bins. They can be well approximated by Poisson distributions, with $\lambda = 4.06$ and $\lambda = 0.59$, respectively, and with 95% confidence intervals $[3.437; 4.686]$ and $[0.375; 0.8784]$.
![Histogram of $\alpha$[]{data-label="histoalpha"}](hist-alpha.pdf){width="100.00000%"}
![Histogram of $\epsilon$[]{data-label="histoepsilon"}](hist-epsilon.pdf){width="100.00000%"}
The comparison between the two variants of the algorithm is shown in Figure \[plot1B\]. We can see that the sequential variant is slightly better than the recursive variant, as it generates a smaller number of sites in most cases. However, the difference between both variants is not significant. Indeed, the linear regression fits have very similar slopes: The linear fit for the sequential variant is $y = 4.4831 x + 158.59$, whereas the linear fit in the recursive case is $y = 4.6241 x + 318.51$.
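Such linear fits can be reproduced from the tabulated (edges, sites) pairs with a standard least-squares routine; a minimal sketch using an illustrative subset of Table \[tab:results\]:

```python
import numpy as np

edges = np.array([142, 206, 252, 376, 429])           # subset of the "Edges" column
seq_sites = np.array([852, 870, 1296, 1596, 2148])    # sequential-version site counts

slope, intercept = np.polyfit(edges, seq_sites, 1)    # degree-1 least-squares fit
print(slope, intercept)
```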
![Plot of the results in Table \[tab:results\][]{data-label="plot1B"}](plot2A.pdf){width="180.00000%"}
Conclusions and Open Problems {#open}
=============================
Our results show that the Generalized Inverse Voronoi Problem can be solved with a number of generators that is linear in the size of the input tesselation, provided that we enforce a lower bound on the size of the smallest angle. On the other hand, the algorithm described in [@Baner12] produces ${\mathcal{O}}(V^3)$ generators, where $V$ is the number of vertices of an acute triangulation of $G$. As the performance of the two algorithms is given as a function of different parameters, a theoretical comparison between them is not straightforward. An experimental study could be helpful, but that would require an implementation of the algorithm in [@Baner12]. In practice, our algorithm generates approximately $4.48E + 159$ Voronoi sites, where $E$ is the number of edges of the input tesselation.
In any case, the number of generators produced by both algorithms may still be too large, and it may be possible to reduce it to a number closer to $F$, the number of faces of the tesselation, which is the trivial lower bound. This lower bound can only be achieved if the tesselation is a Voronoi tesselation. In the more general case, how close to $F$ can we get?
In particular, our algorithm still has plenty of room for improvement. In Section \[implement\] we have already mentioned several strategies that can decrease the number of Voronoi sites produced. The design of a parallel version, and a version that is robust against degenerate cases and numerical roundoff errors, are other issues to consider. Roundoff errors have long been an important concern in Computational Geometry in general, and in Voronoi diagram computation, in particular (see e.g. [@sugi92]).
Other practical questions have to do with the experimental analysis of our algorithms. We have devised a method to generate a PSLG on a random point set, but we have not analyzed how this compares to generating such graphs uniformly from the set of all PSLGs that can be defined on a given point set. Regarding certain properties of our generated graphs (expected number of vertices, edges, and faces, expected area of the faces, distribution of the smallest angle, etc.), we have not attempted a theoretical analysis, but we have estimated some of these parameters empirically. A more comprehensive set of experiments will reveal how these tesselations compare with those generated by other methods.
As a final remark, we point out that our algorithm could also be generalized to other metrics, continuous or discrete, including graph metrics. Potential applications include image representation and compression, as described in [@Mar07], and pattern recognition (e.g. given a partition of some sample space, we could select a set of representatives for each class). In the case of graphs, Voronoi partitions can be used to find approximate shortest paths (see [@Som10; @Ra12], for instance). In social networks, node clustering around a set of representative nodes, or super-vertices, is a popular technique for network visualization and$/$or anonymization [@Zhou08].
Acknowledgements {#acknowledgements .unnumbered}
================
Janos Pach contributed some key ideas for the algorithm, at the early stages of this work. Hebert Pérez-Rosés was partially supported by the Spanish Ministry of Economy and Competitiveness, under project TIN2010-18978. Guillermo Pineda-Villavicencio was supported by a postdoctoral fellowship funded by the Skirball Foundation, via the Center for Advanced Studies in Mathematics at the Ben-Gurion University of the Negev, Israel, and by an ISF grant.
[4]{}
Ash, P., Bolker, E.D. Recognizing Dirichlet Tesselations. Geometriae Dedicata 19, 175–206 (1985).
Aurenhammer, F. Recognising Polytopical Cell Complexes and Constructing Projection Polyhedra. J. Symbolic Computation 3, 249–255 (1987).
Banerjee, S., Bhattacharya, B.B., Das, S., Karmakar, A., Maheshwari, A., Roy, S. On the Construction of a Generalized Voronoi Inverse of a Rectangular Tesselation. In: Procs. 9th Int. IEEE Symp. on Voronoi Diagrams in Science and Engineering, pp. 132–137. IEEE, New Brunswick, NJ (2012).
de Berg, M., Cheong, O., van Kreveld, M., Overmars, M.: Computational Geometry. Algorithms and Applications. Springer, Berlin, third ed. (2008).
Hartvigsen, D. Recognizing Voronoi Diagrams with Linear Programming. ORSA J. Comput. 4, 369–374 (1992).
Martínez, A., Martínez, J., Pérez-Rosés, H., Quirós, R. Image Processing using Voronoi diagrams. In: Procs. 2007 Int. Conf. on Image Proc., Comp. Vision, and Pat. Rec., pp 485-491. CSREA Press (2007).
Ratti, B., Sommer, C. Approximating Shortest Paths in Spatial Social Networks. In: Procs. 2012 ASE/IEEE Int. Conf. on Social Computing and 2012 ASE/IEEE Int. Conf. on Privacy, Security, Risk and Trust, pp 585–586. IEEE Comp. Soc. (2012).
Schoenberg, F.P., Ferguson, T., Li, C. Inverting Dirichlet Tesselations. The Computer J. 46, 76–83 (2003).
Sommer, C.: Approximate Shortest Path and Distance Queries in Networks. PhD Thesis, Department of Computer Science, The University of Tokyo, Japan (2010).
Sugihara, K., Iri, M. Construction of the Voronoi Diagram for One Million Generators in Single-Precision Arithmetic. Procs. IEEE 80, 1471–1484 (1992).
Trinchet-Almaguer, D.: Algorithm for Solving the Generalized Inverse Voronoi Problem. Honour’s Thesis (in Spanish), Department of Computer Science, University of Oriente, Cuba (2005).
Trinchet-Almaguer, D., Pérez-Rosés, H.: Algorithm for Solving the Generalized Inverse Voronoi Problem (in Spanish). Revista Cubana de Ciencias Informaticas 1 (4), 58–71 (2007).
Yeganova, L., Falk, J.E., Dandurova, Y.V. Robust Separation of Multiple Sets. Nonlinear Analysis 47, 1845–1856 (2001).
Yeganova, L.E. Robust linear separation of multiple finite sets. Ph.D. Thesis, George Washington University, 2001.
Zhou, B., Pei, J., Luk, W-S. A brief survey on anonymization techniques for privacy preserving publishing of social network data. ACM SIGKDD Explorations Newsletter 10, 12–22 (2008).
[^1]: Mathematics Subject Classification: 52C45, 65D18, 68U05.
[^2]: For convenience, we have dropped the subscript $G$.
[^3]: The equation of $f'$ can be obtained easily after the initial circles have been constructed and their sentinels placed.
|
---
abstract: 'We observe an enormous $\textit{spontaneous}$ exchange bias ($\sim$30-60 mT) - measured in an unmagnetized state following zero-field cooling - in a nanocomposite of BiFeO$_3$ ($\sim$94%)-Bi$_2$Fe$_4$O$_9$ ($\sim$6%) over the temperature range 5-300 K. Depending on the path followed in tracing the hysteresis loop - positive or negative - as well as the maximum field applied, the exchange bias ($H_E$) varies significantly, with $\mid-H_E\mid$ $>$ $\mid+H_E\mid$. The temperature dependence of $H_E$ is nonmonotonic. It increases, initially, till $\sim$150 K and then decreases as the blocking temperature $T_B$ is approached. All these rich features appear to originate from the spontaneous symmetry breaking and the consequent onset of unidirectional anisotropy driven by “superexchange bias coupling” between the ferromagnetic core of Bi$_2$Fe$_4$O$_9$ (of average size $\sim$19 nm) and the canted antiferromagnetic structure of BiFeO$_3$ (of average size $\sim$112 nm) via superspin glass moments at the shell.'
author:
- Tuhin Maity
- Sudipta Goswami
- Dipten Bhattacharya
- Saibal Roy
title: '**Superspin glass mediated giant spontaneous exchange bias in a nanocomposite of BiFeO$_3$-Bi$_2$Fe$_4$O$_9$**'
---
The spontaneous exchange bias (SEB), where the unidirectional anisotropy sets in $\textit{spontaneously}$ under the application of the first field of a hysteresis loop even in an unmagnetized state, is a consequence, primarily, of biaxial symmetry in the antiferromagnetic (AFM) structure of the ferromagnetic (FM)/AFM interface [@Saha; @Wang]. It can also develop for a set of AFM grains with uniaxial anisotropy because of an interplay among the exchange-coupled FM/AFM systems in the presence of an applied field H. In a spin glass (SG)/FM structure, on the other hand, the anisotropy sets in under field cooling via the oscillatory RKKY interaction [@Ali]. The interfacial roughness has earlier been shown [@Malozemoff] to influence the exchange bias coupling between FM and AFM significantly. However, we show in this paper, for the first time, that glassy moments at the interface, in fact, introduce an additional magnetic degree of freedom in between the exchange-coupled FM and AFM grains and break the symmetry truly spontaneously, even in the absence of the first field of a loop, to set the unidirectional anisotropy in an unmagnetized state. As discussed later, the consequence of this is a hitherto unheard of yet profound asymmetry in the spontaneous exchange bias (SEB) depending on the path followed in tracing the hysteresis loop - positive or negative. We report that in a nanocomposite of BiFeO$_3$($\sim$94%)-Bi$_2$Fe$_4$O$_9$ ($\sim$6%), we observe (i) a large SEB ($\sim$30-60 mT) across 5-300 K, (ii) an asymmetry in the dependence of the SEB on the path followed in tracing the hysteresis loop - positive or negative, and (iii) a nonmonotonic variation of the SEB with temperature - it increases till $\sim$150 K and then decreases as the blocking temperature $T_B$ is approached. The magnitude of the SEB itself is far higher than what has so far been observed in all the bulk or thin film based composites of BiFeO$_3$ [@Martin; @Ramesh; @Chu; @Heron; @Lebeugle] even under magnetic annealing. We have also observed the conventional magnetic-annealing-dependent exchange bias (CEB) with all its regular features such as dependence on annealing field, rate, and training. The terms SEB and CEB have been used in this paper following the parlance of Ref. 2. The random field generated by the glassy moments at the shell appears to influence the superexchange coupling between the FM core [@Tian] of the finer ($\sim$19 nm) Bi$_2$Fe$_4$O$_9$ and the local moments of AFM order in the coarser BiFeO$_3$ ($\sim$112 nm) and to induce the SEB, its path dependence, and its nonmonotonic variation with temperature.
The nanocomposite of BiFeO$_3$-Bi$_2$Fe$_4$O$_9$ has been synthesized by the sonochemical route [@Goswami]. The particle morphology, crystallographic details, and the misalignment angle between the two component phases have been determined by transmission electron microscopy (TEM), selected area electron diffraction (SAED), and high resolution transmission electron microscopy (HRTEM). The Rietveld refinement of the high resolution powder x-ray diffraction pattern too offers information about the crystallographic details of the component phases in addition to the crystallite sizes and volume fraction of each phase. The supplementary document [@supplementary] gives all the results and analyses of the data. The average misalignment angle turns out to be $\sim$19$^o$. The magnetic measurements have been carried out in a SQUID magnetometer (MPMS, Quantum Design) across 5-300 K under a 5T magnetic field. While the SEB has been determined from the hysteresis loops measured without any prior magnetic annealing, the CEB has been measured following the conventional magnetic annealing treatment. Prior to the measurement of SEB, the sample has been demagnetized using an appropriate protocol in order to ensure that there is no trapped flux. The details of the protocol have been given in the supplementary document [@supplementary]. We have also measured the SEB following zero-field cooling from a high temperature ($\sim$700 K) for a test case. The comparison of the results shows that the demagnetization protocol used here was appropriate in ensuring an unmagnetized state of the sample prior to the measurement.
We report here mainly the results obtained in a nanocomposite of $\sim$6% Bi$_2$Fe$_4$O$_9$ and $\sim$94% BiFeO$_3$ (sample-A) which exhibits maximum SEB and CEB. In Fig. 1, the results from the magnetic measurements are shown. In Fig. 1a, we show the hysteresis loops across 5-300 K which yield the SEB. The region near the origin is blown up to show the extent of EB clearly (full loops are given in the supplementary document). In each case, the presence of a large shift in the loop along the field axis is conspicuous. The EB $H_E$ is given by ($H_{c1}$ - $H_{c2}$)/2 while the coercivity $H_C$ is given by ($H_{c1}$ + $H_{c2}$)/2; $H_{c1}$ and $H_{c2}$ are the fields corresponding to the points in the forward and reverse branches of the hysteresis loop at which the magnetization reaches zero. The extent of SEB observed here right across 5-300 K is quite large and comparable to what has been reported by Wang $\textit{et al}$. [@Wang] in Ni-Mn-In bulk alloys at 10 K. The observation of SEB itself in a BiFeO$_3$ based bilayer or composite system is unheard of as yet and, for the first time, we are reporting it in the nanocomposite of BiFeO$_3$-Bi$_2$Fe$_4$O$_9$. In Fig. 1b, the asymmetry and hence the $\textit{tunability}$ of the SEB has been demonstrated. Depending on the sign of the starting field +5T(-5T), the sign of the SEB is negative (positive), and $\mid$-H$_E$$\mid$ $>$ $\mid$+H$_E$$\mid$. This is also remarkable and has not yet been observed in any other system exhibiting SEB [@Wang]. Fig. 1c shows the CEB measured after a magnetic annealing treatment with +1T. In this case a positive (negative) magnetic field of 1T has been applied at room temperature and then the temperature was ramped down to the given point at a cooling rate of 2.5 K/min. Like the SEB, the CEB too turns out to be negative, i.e., annealing under a positive (negative) field yields a hysteresis loop shift in the negative (positive) direction along the field axis. Even more interesting is that, in this case too, the exchange bias $H_E$ for positive (negative) annealing field is $\textit{asymmetric}$ with $\mid$-H$_E$$\mid$ $>$ $\mid$+H$_E$$\mid$. This has been demonstrated clearly in Fig. 1d which shows the asymmetry in the shift of the loop along the field axis depending on whether the sample has been field-cooled under +5T or -5T. Finally, as expected, the CEB is found to be dependent on the magnetic history and is, therefore, tunable via different cooling fields and ramping rates. The CEB increases linearly with the rise in annealing field right up to the field limit of our experiment, 5T. In Figs. 1e and 1f, respectively, we show the spontaneous and conventional $H_E$ and $H_C$ as a function of temperature. The H$_E$ and H$_C$ in both the cases of SEB and CEB are nonmonotonic: the peak in $H_C$ appears at $\sim$50 K for both SEB and CEB whereas the peak in $H_E$ appears at $\sim$150 and $\sim$50 K, respectively, for SEB and CEB.
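A minimal sketch (not from the paper) of how $H_{c1}$, $H_{c2}$, $H_E$, and $H_C$ can be extracted from digitized loop branches; signed zero-crossing fields are used, which reproduces the definitions above when the negative crossing is quoted by its magnitude.

```python
import numpy as np

def loop_shift_and_coercivity(h_down, m_down, h_up, m_up):
    """Exchange bias H_E (loop shift) and coercivity H_C from the descending
    (h_down, m_down) and ascending (h_up, m_up) branches of an M(H) loop."""
    def zero_crossing(h, m):
        h, m = np.asarray(h, float), np.asarray(m, float)
        i = np.nonzero(np.diff(np.sign(m)))[0][0]   # first sign change of M
        return h[i] - m[i] * (h[i + 1] - h[i]) / (m[i + 1] - m[i])
    hc1 = zero_crossing(h_down, m_down)             # signed crossing, descending branch
    hc2 = zero_crossing(h_up, m_up)                 # signed crossing, ascending branch
    return 0.5 * (hc1 + hc2), 0.5 * abs(hc1 - hc2)  # (H_E, H_C)
```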
In order to trace the origin of all these features, we investigated the spin structure both in the bulk of the BiFeO$_3$ and Bi$_2$Fe$_4$O$_9$ particles as well as at their interfaces through well-designed, protocol-dependent magnetic moment versus temperature and magnetic training effect measurements. In Fig. 2a, the ZFC, FC, and remanent magnetization for sample-A have been shown. The rapid rise in magnetic moment at low temperature (Fig. 2a inset) signifies the presence of superparamagnetic moments in Bi$_2$Fe$_4$O$_9$ [@Cong]. The superparamagnetic domains could undergo a transition and be frozen at the blocking temperature $T_B$. The $T_B$ for sample-A turns out to be $>$350 K. In contrast, for sample-B, where the volume fraction of the Bi$_2$Fe$_4$O$_9$ phase is greater than 10% while the particle size is smaller ($\sim$8 nm), $T_B$ $\sim$ 60 K. Interestingly, we did observe a signature of the presence even of superspin glass (SSG) moments [@Chen] in this system below $T_B$. Since it is difficult to confirm the presence of SSG in the case of a composite - as this structure, if present, is always associated with the antiferromagnetism of BiFeO$_3$ - we used a ’stop-and-wait’ protocol to measure the memory effect, which is an unequivocal signature of the presence of SSG [@Sasaki]. The sample was first cooled down to 2 K from room temperature under zero field and an M(T) pattern (reference line) was measured under 200 Oe. After the sample temperature reached 300 K, it was again brought back to 2 K under zero field. The M(T) measurement was then repeated but with a ’stop-and-wait’ protocol. When the temperature reached $T_w$ $\sim$21 K, the measurement was stopped and the sample was kept at that temperature for $\sim$10$^4$ s. The difference between the two patterns $\delta$M(T) is shown in Fig. 2b main frame. The memory effect is shown as a dip at $\sim$21 K which confirms the presence of the SSG phase in the nanocomposite. The entire measurement has been repeated for $T_w$ $\sim$100 K. The memory effect could be observed at $\sim$100 K as well (Fig. 2b inset).

The dynamics of the spin structure at the interface has been probed by studying the training effect on the CEB. The dependence of $H_E$ and $H_C$ on the number of repeated cycles is shown in Fig. 2c. Both parameters decrease monotonically with the number of cycles, indicating spin rearrangement at the interface. It appears that the empirical law [@Paccard] for purely antiferromagnetic spin rearrangement at the interface, $H_E^n = H_E^\infty + k\, n^{-\frac{1}{2}}$ with $k = 788$ Oe and $H_E^\infty = 544$ Oe, cannot describe our data well (green line in Fig. 2c). Instead, a model [@Mishra] which considers a mixed scenario of two different relaxation rates for frozen and rotatable uncompensated spin components at the interface, $H_E^n = H_E^\infty + A_f\exp(-n/P_f) + A_r\exp(-n/P_r)$ (where f and r denote the frozen and rotatable spin components), fits the data very well (brown line in Fig. 2c) and yields the fitting parameters $H_E^\infty$ = 853 Oe, $A_f$ = 3244 Oe, $P_f$ = 0.39, $A_r$ = 370 Oe, and $P_r$ = 2.7. The ratio $P_r$/$P_f$ $\sim$ 8 indicates that the rotatable spins rearrange nearly 8 times faster than the frozen spins. The 'memory effect' and 'training effect' experiments thus show that the interface region is also populated by SSG moments, which influence the SEB and CEB significantly.
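For illustration, the comparison between the two training laws can be reproduced with a short fitting script of the following kind; the $H_E(n)$ series below is synthetic (generated from the quoted fit parameters) and merely stands in for the measured points of Fig. 2c.

```python
# Minimal sketch of comparing the two training laws quoted above.
# The H_E(n) series is synthetic, generated from the quoted fit parameters;
# it only stands in for the measured points of Fig. 2c.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, h_inf, k):                       # H_E^n = H_E^inf + k n^{-1/2}
    return h_inf + k / np.sqrt(n)

def two_component(n, h_inf, a_f, p_f, a_r, p_r):  # frozen + rotatable relaxation
    return h_inf + a_f * np.exp(-n / p_f) + a_r * np.exp(-n / p_r)

n = np.arange(1, 9, dtype=float)                  # loop-cycle index
h_e = two_component(n, 853.0, 3244.0, 0.39, 370.0, 2.7)   # synthetic data (Oe)

p_pow, _ = curve_fit(power_law, n, h_e, p0=[800.0, 500.0])
p_two, _ = curve_fit(two_component, n, h_e,
                     p0=[900.0, 3000.0, 0.5, 300.0, 3.0], maxfev=20000)
print("power-law fit:    ", p_pow)
print("two-component fit:", p_two)
```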
We have also examined the SEB in two other samples with higher ($\sim$10%) and lower ($<$3%) volume fractions of Bi$_2$Fe$_4$O$_9$ (samples B and C, respectively). The corresponding full hysteresis loops are given in the supplementary document [@supplementary]. The ZFC, FC, and remanent magnetization as well as the $H_E$ and $H_C$ for sample-B are shown in Figs. 3a and 3b, respectively. Finally, in Fig. 3c, the comparison of the SEB among all three samples (A, B, and C) is shown. The $T_B$ decreases to $\sim$60 K in sample-B because of the finer Bi$_2$Fe$_4$O$_9$ particles ($\sim$8 nm). The SEB too could be observed only below $T_B$. Interestingly, the memory effect could be seen only below $T_B$ ($\sim$60 K) and not above it [@supplementary]. Nearly zero remanence below $T_B$ proves the presence of superparamagnetic domains in sample-B as well. The $T_B$, however, could not be located within the range 5-300 K for sample-C, and no exchange bias could be observed within the same temperature range.
We show that all these results can be qualitatively understood by considering a model of “superexchange bias coupling” between the ferromagnetic core of the finer Bi$_2$Fe$_4$O$_9$ particles and local uncompensated moments of the antiferromagnetic order in the coarser BiFeO$_3$ particles via the superspin glass shell at the interface. The model is shown schematically in Fig. 4 and draws essentially from the model proposed in Ref. 3. The dotted line marks the direction of the applied field. The shell SSG moments $s_1$ and $s_2$ are coupled to the FM moment $S_F$ by a coupling parameter $J_F$ and to the AFM moment $S_{AF}$ by $J_{AF}$, while the coupling between $s_1$ and $s_2$ is $J$. The net coupling parameter $b$ depends on $J_{AF}$, $J_F$, and $J$ and, finally, $H_E$ $\propto$ $b$ [@Ali]. It has been shown [@Ali] that the random fields generated by spin glass moments at the core can act on the saturated ferromagnetic moment and set a unidirectional anisotropy (UA) via the RKKY interaction either along the direction of the applied field or opposite to it. The model that we propose in the present case is the following. The random field from the frozen SSG moments appears to induce a variation in the anisotropy of the AFM moments, including biaxiality with respect to the direction of the applied field. Thus, depending on the orientation of the principal easy axes of the AFM grains with respect to the direction of the applied field, the AFM grains can experience either no torque or a large torque and become (i) fully hysteretic, (ii) non-hysteretic, or (iii) partially hysteretic. While the fully hysteretic and non-hysteretic grains do not contribute to the bias in the loop, the partially hysteretic grains do. The partially hysteretic grains set the UA, primarily, in a direction opposite to that of the applied field. The SEB then becomes negative, i.e., for a positive (negative) starting field of the loop tracing, the SEB turns out to be negative (positive). Application of the first field for tracing the loop breaks the symmetry among the AFM grains and sets the UA. The FM moments are assumed to be saturated under the applied field. However, the most interesting aspect is that there is a $\textit{spontaneous symmetry breaking}$ as well, driven by the random field of the SSG moments at the interface, which yields a global minimum in the energy landscape and sets the UA universally along the negative field direction even in the absence of the first field of loop tracing. These grains are thus always partially hysteretic along the negative direction of the applied field. The grains which set the UA in a direction opposite to that of the applied field are partially hysteretic for both directions of the applied field, but the ones mentioned above are partially hysteretic $\textit{only}$ with respect to the negative field direction. This aspect, in fact, gives rise to the observed $\textit{asymmetry}$ in both SEB and CEB with $\mid$-H$_E$$\mid$ $>$ $\mid$+H$_E$$\mid$, which has not been reported so far in the context of either SEB or CEB. The role of the SSG moments, therefore, appears to be crucial in inducing this spontaneous symmetry breaking and setting the UA universally along the negative field direction. Such behavior has not been observed in other systems with frustrated or glassy interfaces.
The nonmonotonic temperature dependence of the SEB can be understood by drawing an analogy with the nonmonotonic variation of SEB with the thickness of the AFM grain. Well below $T_B$, an increase in temperature increases the interaction between the SSG and AFM moments which, in turn, induces the energy landscape necessary to set the UA in the system. The bias as well as the asymmetry therefore increase. However, as $T_B$ is approached, the number of grains turning superparamagnetic increases which, in turn, reduces the bias. The nonmonotonic variation of the SEB with the volume fraction of the Bi$_2$Fe$_4$O$_9$ phase, likewise, can be explained by the nonmonotonic variation of the interface density. With the increase in the volume fraction of the Bi$_2$Fe$_4$O$_9$ phase, large-scale phase segregation takes place, which reduces the interface density with UA. The interface density also decreases as the volume fraction drops, passing through an optimum at the composition of sample-A. Therefore, in both samples B and C, the SEB is either small or non-existent.
In summary, we report a giant as well as tunable spontaneous exchange bias of $\sim$30-60 mT across 5-300 K in a nanocomposite of BiFeO$_3$ ($\sim$94%) - Bi$_2$Fe$_4$O$_9$ ($\sim$6%). It originates from a superexchange bias coupling between the ferromagnetic core of the finer Bi$_2$Fe$_4$O$_9$ ($\sim$19 nm) particles and the antiferromagnetic moment in the coarser ($\sim$112 nm) BiFeO$_3$ particles via the superspin glass moments at the interface. The presence of the superspin glass moments turns out to be crucial, since they induce a variety of couplings across the interfaces and thus develop a complicated energy landscape for the interaction among the FM/AFM grains by breaking the symmetry spontaneously, even in the absence of the first field of loop tracing. This engineers a giant SEB and its tunability. Such a giant and tunable exchange bias can be utilized to improve manyfold the efficiency of electrically switching the magnetic anisotropy in a ferromagnetic system via “exchange coupling mediated multiferroicity”.
This work has been supported by the Indo-Ireland joint program (DST/INT/IRE/P-15/11) and the FORME SFI SRC project (07/SRC/I1172) of Science Foundation Ireland (SFI). One of the authors (S.G.) acknowledges support from a Research Associateship of CSIR.
[99]{}
J. Saha and R.H. Victora, Phys. Rev. B $\textbf{76}$, 100405 (2007).
B.M. Wang, Y. Liu, P. Ren, B. Xia, K.B. Ruan, J.B. Yi, J. Ding, X.G. Li, and L. Wang, Phys. Rev. Lett. $\textbf{106}$, 077203 (2011).
M. Ali, P. Adie, C.H. Marrows, D. Greig, B.J. Hickey, and R.L. Stamps, Nature Mater. $\textbf{6}$, 70 (2007).
See, e.g., A.P. Malozemoff, Phys. Rev. B $\textbf{35}$, 3679 (1987).
L.W. Martin, Y.-H. Chu, Q. Zhan, R. Ramesh, S.-J. Han, S.X. Wang, M. Warusawithana, and D.G. Schlom, Appl. Phys. Lett. $\textbf{91}$, 172513 (2007).
L.W. Martin, Y.-H. Chu, M.B. Holcomb, M. Hujiben, P. Yu, S.-J. Han, D. Lee, S.X. Wang, and R. Ramesh, Nano Lett. $\textbf{8}$, 2050 (2008).
Y-H. Chu, L.W. Martin, M.B. Holcomb, M. Gajek, S.J. Han, Q. He, N. Balke, C.H. Yang, D. Lee, W. Hu, Q. Zhan, P.L. Yang, A. Fraile-Rodriguez, A. Scholl, S.X. Wang, and R. Ramesh, Nature Mater. $\textbf{7}$, 478 (2008).
J.T. Heron, M. Trassin, K. Ashraf, M. Gajek, Q. He, S.Y. Yang, D.E. Nikonov, Y-H. Chu, S. Salahuddin, and R. Ramesh, Phys. Rev. Lett. $\textbf{107}$, 217202 (2011).
D. Lebeugle, A. Mougin, M. Viret, D. Colson, and L. Ranno, Phys. Rev. Lett. $\textbf{103}$, 257601 (2009).
Z.M. Tian, S.L. Yuan, X.L. Wang, X.F. Zheng, S.Y. Yin, C.H. Wang, and L. Liu, J. Appl. Phys. $\textbf{106}$, 103912 (2009).
S. Goswami, D. Bhattacharya, and P. Choudhury, J. Appl. Phys. $\textbf{109}$, 07D737 (2011).
The supplementary document contains additional data and is available at this url.
D.Y. Cong, S. Roth, J. Liu, Q. Luo, M. Potschke, C. Hurrich, and L. Schultz, Appl. Phys. Lett. $\textbf{96}$, 112504 (2010).
Xi Chen, S. Bedanta, O. Petracic, W. Kleemann, S. Sahoo, S. Cardoso, and P.P. Freitas, Phys. Rev. B $\textbf{72}$, 214436 (2005).
M. Sasaki, P.E. Jonsson, H. Takayama, and H. Mamiya, Phys. Rev. B $\textbf{71}$, 104405 (2005).
D. Paccard, C. Schlenker, O. Massenet, R. Montmory, and A. Yelon, Phys. Stat. Solid. $\textbf{16}$, 301 (1966).
S.K. Mishra, F. Radu, H.A. Durr, and W. Eberhardt, Phys. Rev. Lett. $\textbf{102}$, 177208 (2009).
|
---
abstract: 'We consider the mesoscopic normal persistent current (PC) in a very low-temperature superconductor with a bare transition temperature $T_c^0$ much smaller than the Thouless energy $E_c$. We show that in a rather broad range of pair-breaking strength, $T_c^0 \lesssim \hbar/\tau_s \lesssim E_c$, the transition temperature is renormalized to zero, but the PC is hardly affected. This may provide an explanation for the magnitude of the average PC’s in the noble metals, as well as a way to determine their $T_c^0$’s.'
author:
- 'H. Bary-Soroker'
- 'O. Entin-Wohlman'
- 'Y. Imry'
title: 'Effect of pair-breaking on mesoscopic persistent currents well above the superconducting transition temperature'
---
[**Introduction.**]{} The magnitude of the equilibrium averaged persistent currents (PC’s) [@BIL; @book] in normal metals has been a long-standing puzzle. Experiments [@LDDB; @JMKW; @DBRBM] produce a current larger by at least two orders of magnitude than the theoretical prediction for noninteracting electrons [@CGR; @RvO; @AGI] and seem to indicate that the low-flux response is diamagnetic. The average PC of a diffusive system with interactions was calculated first in this connection [@AL] in Refs. and . The resulting PC was found to be much larger than that of a noninteracting system, but nevertheless not large enough to explain the experiments.
Repulsive electron-electron interactions [@AEPRL] result in a paramagnetic response (at small magnetic fluxes) whose magnitude is smaller than the experiment by about a factor of five. This disagreement is due to the downward renormalization of the interaction [@dG; @MA]. Attractive interactions [@AEEPL] result in a diamagnetic response whose magnitude (due to the very low superconducting transition temperature) is again smaller by a factor of order five than the measured one. This is in spite of the upward renormalization of the attractive interaction. Attractive interactions, at low energies, imply (with no pair-breaking) a transition into a superconducting state, and the PC of such an interacting system depends on its transition temperature. These temperatures are very low [@Mota] for the noble metals used in the PC experiments – hence the too small predicted values for the PC.
Here we consider attractive interactions. We show that the presence of a very small amount of pair breakers, e.g., magnetic impurities (which seem to be very difficult to avoid in these metals [@PGAPEB]), may change the picture profoundly. Obviously one may consider other pair-breakers, such as two-level systems [@IFS] or simply a magnetic field [@SO]. In this Letter we treat specifically the case of magnetic impurities. We find that within a significant range of the pair-breaking strength, the magnetic impurities [*suppress the transition temperature down to immeasurable values, leaving concomitantly the PC almost unchanged*]{}. The physical reason for this remarkable observation is that the PC is determined by the interaction on the scale of the Thouless energy $E_c = \hbar D/L^2$ $(\sim 20$ mK for a typical experimental system), while the bare transition temperature, $T_c^0$, is much smaller. (The circumference of the ring is denoted by $L$ and $D$ is the diffusion coefficient.) This gives rise to a rather wide range of pair-breaking strengths, represented here by the spin-scattering time $\tau_{s}$, $$\begin{aligned}
T_c^0 \lesssim \hbar/\tau_s \lesssim E_c
, \label{1}\end{aligned}$$ in which the actual transition temperature $T_c$ will drop to zero [@AG], but the PC will be hardly affected. As a result, it is the [*bare*]{} transition temperature of the system [*without*]{} the magnetic impurities, $T_c^0$, as opposed to $T_{c}$, which dominates the expression for the PC; see Fig. \[fig1\].
![The first flux harmonic \[$m=1$, see Eq. \[MAIN\]\], in units of $I(s=0)$, of the PC at $T=E_c$ (full line) and $T_c/T_c^0$ (dashed line) as functions of the pair-breaking strength, $s=1/(\pi T_c^0 \tau_s)$, displayed on a logarithmic scale.[]{data-label="fig1"}](figS1.eps){width="8.6cm"}
We concentrate here on the experimental results of Ref. [@future]. In order to explain them, it is necessary to assume a $T_c^0$ in the $1$ mK range for copper. [*Our basic assertion is that this may indeed be the correct order of magnitude of $T_c^0$ for ideally clean copper, but that it is knocked down to zero or to a very low value by a minute, $\lesssim$ ppm, amount of unwanted [@PGAPEB] pair-breakers.*]{} We emphasize, however, that our result concerning the fundamentally different sensitivities of $T_c$ and PC to pair-breaking in the range given by Eq. (\[1\]) [*remains valid regardless of the situation in specific materials.*]{} The Kondo screening of the spins is not considered here. Other effects of magnetic impurities have previously been considered in Ref. .
![The first flux harmonic of the PC in units of $I^*=- e E_c$ as a function of the temperature, for two values of $T_c^0/E_c$ and several values of $s$. Keeping up to the 100 lowest values of $|\nu|$ was necessary for convergence at $T\lesssim E_c$. Note that the $s=0$ curve in the upper panel is valid only for $T/T_c\geq 1+Gi$, where $Gi$ is the Ginzburg parameter ($Gi\sim0.1$ for the samples of Ref. ).[]{data-label="fig2"}](fig2.eps){width="8.6cm"}
[**Results.**]{} The expression we obtained for the PC in a diffusive ring with magnetic impurities can be expressed as a sum over the harmonics of the magnetic flux through the ring $\phi$, in units of the flux quantum $h/e$, $$\begin{aligned}
&I=-8eE_{c}\sum_{m=1}^{\infty} \frac{\sin(4\pi
m\phi)}{m^2}\nonumber\\
&\times\sum_\nu \int_0^\infty dx\frac{x \sin(2\pi x) \Psi
'(F(x, \nu ))}{\ln(T/T_c^0)+\Psi(F(x, \nu ))-\Psi(\frac{1}{2})} \ ,\nonumber\\
&F(x,\nu )=\frac{1}{2}+\frac{|\nu|+2/\tau_s}{4\pi T}+\frac{\pi
E_c x^2}{m^2 T}\ ,\label{MAIN}\end{aligned}$$ (using $\hbar=1$). Here $\nu$ denotes the bosonic Matsubara frequency [@COM1], $\Psi$ and $\Psi '$ are the digamma function and its derivative, and $T$ is the temperature. Our expression (\[MAIN\]) generalizes the result of Ref. for the case where spin-scattering is present: the Matsubara frequency $|\nu |$ is shifted by $2/\tau_{s}$. However, the superconducting transition temperature (which appears formally in the denominator of the integrand) is [*not*]{} the one modified by the pair-breakers, but retains its bare (magnetic impurities free) value. Interestingly enough, it follows that by measuring the PC one may determine $T_{c}^{0}$ (which would be directly measurable only if all low-temperature pair breaking could be eliminated).
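As an illustration of how Eq. (\[MAIN\]) can be evaluated in practice, the following minimal numerical sketch computes the amplitude of the $m$-th flux harmonic by truncating the Matsubara sum and the $x$ integral; the parameter values and cutoffs are illustrative choices of ours, not those used to produce the figures.

```python
# Minimal numerical sketch of the m-th flux harmonic of Eq. (MAIN), with
# hbar = k_B = 1 and all energies in units of E_c. The Matsubara sum and the
# x integral are truncated; parameter values and cutoffs are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.special import polygamma

def harmonic_amplitude(m, T, Tc0, inv_tau_s, Ec=1.0, n_max=100, x_max=20.0):
    """Amplitude multiplying sin(4*pi*m*phi), in units of -8 e E_c."""
    total = 0.0
    for n in range(-n_max, n_max + 1):
        nu = 2.0 * np.pi * T * n                     # bosonic Matsubara frequency
        def integrand(x):
            F = 0.5 + (abs(nu) + 2.0 * inv_tau_s) / (4.0 * np.pi * T) \
                + np.pi * Ec * x**2 / (m**2 * T)
            num = x * np.sin(2.0 * np.pi * x) * polygamma(1, F)
            den = np.log(T / Tc0) + polygamma(0, F) - polygamma(0, 0.5)
            return num / den
        total += quad(integrand, 0.0, x_max, limit=200)[0]
    return total / m**2

# Example: T = E_c, T_c^0 = 0.1 E_c, pair breaking 1/tau_s = T_c^0
print(harmonic_amplitude(m=1, T=1.0, Tc0=0.1, inv_tau_s=0.1))
```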
In Fig. \[fig2\] the PC is plotted using Eq. (\[MAIN\]). At the critical pair-breaking strength $1/\tau_s \simeq T_c^0$, corresponding to $s =1/(\pi \tau_s T_c^0)\simeq 1/\pi$, the transition temperature vanishes [@AG], while the PC is hardly affected. The measured PC in the copper samples of Ref. is $I(T\lesssim E_c)\simeq -eE_c$. The curve with $s=1$ in the upper panel, taken with $T_c^0 = 1.5$ mK and $E_c=15$ mK (the value for the samples of Ref. ), gives a PC lower by only $25\%$. A better fit is possible by changing the parameters somewhat, but we do not regard this as crucial at the present stage. Likewise, we can qualitatively explain the result of Ref. . The high-frequency results of Ref. require a separate discussion [@DBRBM]. The PC is reduced significantly once $1/\tau_s\geq
E_c$, or $L_s\equiv\sqrt{D\tau_s}\leq L$. For $T_c^0/E_c=0.1
\;(0.01)$, the condition for $E_c\tau_s\sim1$ is $s=10\;(100)$.
[**Derivation.**]{} For completeness, we outline below the derivation of the PC in the presence of magnetic scattering [@future]. The PC, Eq. (\[MAIN\]), is obtained by differentiating the free energy with respect to the flux. Our system is described by the Hamiltonian [@AG] $$\begin{aligned}
{\cal H}&=\int d{\bf r}\Bigl (\psi^{\dagger}_{\alpha}({\bf r})
\Bigl [({\cal H}_{0}+u_{1}({\bf r}))\delta_{\alpha\gamma}+u_{2}
({\bf r}){\bf S}\cdot\sig^{\alpha\gamma}\Bigr ]\psi^{}_{\gamma}({\bf r})\nonumber\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ -\frac{g}{2}\psi^{\dagger}_{\alpha}
({\bf r})\psi^{\dagger}_{\gamma}({\bf r})\psi^{}_{\gamma}({\bf
r}) \psi^{}_{\alpha}({\bf r})\Bigr )\ ,\label{HAM}\end{aligned}$$ in which the last term is the attractive interaction, of coupling $g$. The spin components are $\alpha$ and $\gamma$, $\sig$ is the vector of the Pauli matrices, and ${\cal
H}_{0}=(-i\nab -e {\bf A})^{2}/2m -\mu $ ($\mu $ is the chemical potential and ${\bf A}$ is the vector potential describing the flux through the ring). The scattering, both nonmagnetic and magnetic, is assumed to result from $N_{i}$ point-like impurities, such that $$\begin{aligned}
&u_{1}({\bf r})+u_{2}({\bf r}){\bf S}\cdot\sig\nonumber\\
&\equiv \sum_{i=1}^{N_{i}}\Bigl (\delta ({\bf r}-{\bf
R}_{i})-\frac{1}{V}\Bigr ) (u_{1}+u_{2}{\bf S}^{}_{{\bf
R}_{i}}\cdot\sig )\ ,\end{aligned}$$ where $V$ is the system volume. In averaging over the impurity disorder one assumes that the impurity locations, ${\bf R}_{i}$, are random, and so are their classical spins, such that $\langle {\bf S}^{}_{{\bf R}_{i}}\rangle =0$, and $\langle {\bf
S}^{}_{{\bf R}_{i}}\cdot{\bf S}^{}_{{\bf R}_{j}}\rangle
=\delta_{ij}S(S+1)$.
The partition function, ${\cal Z}$, is calculated by the method of Feynman path integrals [@AS], combined with the Grassmann algebra of many-body fermionic coherent states in terms of the variables $\psi_\alpha({\bf r},\tau)$ ($\bar\psi_\alpha({\bf
r},\tau)$). Introducing the bosonic fields $\Delta ({\bf
r},\tau)$ via the Hubbard-Stratonovich transformation leads to the partition function $\mathcal Z=\int D(\psi({\bf
r},\tau),\bar\psi({\bf r},\tau))D(\Delta({\bf
r},\tau),\Delta^{\ast}({\bf r},\tau)) e^{-{\cal S} }$ with $$\begin{aligned}
{\cal S}&=\int d{\bf r}\int_{0}^{\beta} d\tau \Bigl
(\frac{|\Delta ({\bf r},\tau )|^{2}}{g}
\nonumber\\
& -\frac{1}{2}\bar{\Psi}({\bf r},\tau )G^{-1}_{{\bf r},{\bf
r};\tau ,\tau}\Psi({\bf r},\tau )\Bigr )\ ,\label{ACT1}\end{aligned}$$ where $\bar{\Psi}=(\bar{\psi}_{{\uparrow}},\bar{\psi}_{{\downarrow}},\psi_{{\uparrow}},\psi_{{\downarrow}})$. The inverse Green function $G^{-1}$ (at equal positions ${\bf
r}$ and equal imaginary times $\tau$) is given by
$$\begin{aligned}
G^{-1}_{{\bf r}={\bf r}';\tau =\tau '} =\left
[\begin{array}{cccc}
-\partial_{\tau}-h^{\phi}_{{\uparrow}}&-2u_{2}S_{-} &0&\Delta\\
-2u_{2}S_{+}& -\partial_{\tau}-h^{\phi}_{{\downarrow}}&-\Delta &0\\
0&-\Delta^{\ast}&-\partial_{\tau}+h^{-\phi}_{{\uparrow}}& 2u_{2}S_{+}\\
\Delta^{\ast}&0&2u_{2}S_{-}&-\partial_{\tau}+h^{-\phi}_{{\downarrow}}\end{array}\right
]\equiv\left [\begin{array}{cccc}\hat{G}^{-1}_{\rm p}&\ &\ &\hat{\Delta}\\
\ &\ &\ &\ \\
\ &\ &\ &\ \\
\hat{\Delta}^{\dagger}&\ &\ &\hat{G}^{-1}_{\rm
h}\end{array}\right ]\ ,\label{MAT}\end{aligned}$$
where $h_{\alpha}^{\pm\phi}={\cal H}_{0}(\pm {\bf A})+u_{1}+{\rm
sgn}(\alpha )S_{z}u_{2}$, and $S_\pm=(S_x\pm iS_y)/2$.
The integration over the fermionic part of the action (\[ACT1\]) yields $$\begin{aligned}
&{\cal Z}=\int D(\Delta({\bf r},\tau),\Delta^{\ast}({\bf
r},\tau))\nonumber\\
&\times\exp \Bigl (\frac{1}{2}{\rm Tr}\ln (\beta G^{-1})- \int
d{\bf r }\int_{0}^{\beta} d\tau\frac{|\Delta({\bf r},\tau
)|^{2}}{g} \Bigr )\ .\label{ZwithTr}\end{aligned}$$ In order to treat the boson fields $\Delta$, we expand ${\rm
Tr}\ln (\beta G^{-1})$ up to second order in $\Delta$. This expansion is valid for temperatures well above the transition temperature, and, strictly speaking, above the Ginzburg critical region. The zeroth order is omitted as it leads to the tiny PC of noninteracting, grand-canonical, normal-metal rings [@CGR]. The result in Fourier space reads (the dependence on the magnetic flux is specified below)
$$\begin{aligned}
{\rm Tr}\ln (\beta G^{-1})\Big |^{2\textrm{nd}}= -\sum_{{\bf
q}_1,{\bf q}_2,\nu}\sum_{{\bf k}_1,{\bf k}_2,\omega} {\rm Tr
}\left[\hat G_{\rm p}({\bf k}_1+{\bf q}_1,{\bf k}_2+{\bf
q}_2,\omega+\nu)\hat\Delta({\bf q}_2,\nu)\hat G_{\rm h}({\bf
k}_2,{\bf k}_1,-\omega)\hat\Delta^\dag({\bf q}_1,\nu)\right] \
.\label{SEC}\end{aligned}$$
The resulting expression for the partition function may be simplified considerably. Firstly, the terms that survive the disorder-average in the sum of Eq. (\[SEC\]) are those for which [@AGD] ${\bf q}_{1}={\bf q}_{2}$. Secondly, the particle and the hole Green functions, $\hat G_{\rm p}$ and $\hat G_{\rm h}$, \[see Eq. (\[MAT\])\] are related, $$\begin{aligned}
\hat G_{\rm h}({\bf k},{\bf k}';\omega)=- \hat G_{\rm
p}^{t}(-{\bf k},-{\bf k}',\omega)\ ,\end{aligned}$$ where the superscript $t$ denotes the transpose. Carrying out the integration in Eq. (\[ZwithTr\]), $$\begin{aligned}
{\cal Z}=\prod_{{\bf q},\nu}{\cal N}(0)\Bigl (
\frac{V}{g}-T\;\Pi({\bf q},\nu) \Bigr )^{-1} \ ,\label{ZwithPi}\end{aligned}$$ where ${\cal N}(0)$ denotes the extensive density of states at the Fermi level. The polarization is $$\begin{aligned}
&\Pi({\bf q},\nu)= \frac{1}{2} \sum_{\omega}
\varepsilon_{\alpha\gamma} K_{\omega\alpha\gamma}({\bf q},\nu)\ \end{aligned}$$ with $$\begin{aligned}
K_{\omega\alpha\gamma}({\bf q},\nu)&= \sum_{{\bf k}_1, {\bf
k}_2}\langle G_{\alpha\alpha '}({\bf k}_1+{\bf q},{\bf k}_2+{\bf
q},\omega+\nu)\nonumber\\
&\times \varepsilon_{\alpha '\gamma '} G_{\gamma\gamma '}(-{\bf
k}_2,-{\bf k}_1,-\omega)\rangle \ .\label{FUNK}\end{aligned}$$ Here $\varepsilon$ is the anti-symmetric tensor, $\varepsilon_{\alpha\alpha}=0$, and $\varepsilon_{{\uparrow}{\downarrow}}=-\varepsilon_{{\downarrow}{\uparrow}}=1$, and $G$ denotes the particle Green function.
In Ref. $K(0,0)$ was calculated using a Dyson equation. We generalize their calculation to obtain $K({\bf
q},\nu)$ and consequently the polarization becomes [@future] $$\begin{aligned}
\frac{T}{{\cal N}(0)}&\Pi({\bf q},\nu)= \Psi \Bigl
(\frac{1}{2}+\frac{
\omega_D}{2\pi T}+\frac{|\nu|+D{\bf q}^2}{4\pi T}\Bigr )\nonumber\\
&-\Psi \Bigl (\frac{1}{2}+\frac{D{\bf q}^2+|\nu|+2/\tau_s}{4\pi
T}\Bigr )\ .\end{aligned}$$ Here $\omega_{D}$ is the cutoff frequency on the attractive interaction, and the pair-breaking time $\tau_{s}$ is given by $$\begin{aligned}
\frac{1}{\tau_{s}}= 2\pi {\cal N}(0)N_i S(S+1)u_2^2\ .\end{aligned}$$
The transition temperature of the system in the [*absence*]{} of pair breakers, $T_{c}^{0}$, is obtained from the ${\bf q}=0, \nu
=0$ pole of ${\cal Z}$, upon setting $1/\tau_{s}=0$, $$\begin{aligned}
\frac{V}{g{\cal N}(0)}=\Psi\Bigl
(\frac{1}{2}+\frac{\omega_D}{2\pi T_c^0} \Bigr )- \Psi\Bigl
(\frac{1}{2}\Bigr )\ .\end{aligned}$$ (Note that the same procedure in the [*presence*]{} of the pair breaking reproduces the decrease in the transition temperature $T_{c}$, as found in Ref. .) Since $\omega_D\gg
T_c^0,T$ we may use the asymptotic expansion of the digamma function. In this way we obtain $$\begin{aligned}
{\cal Z}&=\prod_{{\bf q},\nu}\Bigl [ \ln\Bigl
(\frac{T}{T_c^0}\Bigr )\nonumber\\
&+ \Psi\Bigl (\frac{1}{2}+ \frac{D{\bf q}^2+|\nu|+2/\tau_s}{4\pi
T}\Bigr )-\Psi\Bigl (\frac{1}{2}\Bigr )\Bigr ]^{-1}\
.\label{SOF}\end{aligned}$$
Finally, the PC is given by $ I=(e/h)\;\partial T\ln{\cal
Z}/\partial \phi$. In our ring geometry, the flux enters the longitudinal component, $q_{\parallel}$, of the vector ${\bf q}$ as $$\begin{aligned}
q_{\parallel}=\frac{2\pi}{L}(n+2\phi)\ ,\end{aligned}$$ where $n$ is an integer. Only the zero transverse momentum contributes significantly to the current. Our result (\[MAIN\]) is obtained upon inserting Eq. (\[SOF\]) into the definition of the current and employing the Poisson summation formula. It then follows from Eq. (\[MAIN\]) that values of $\tau_s$ which are detrimental to $T_c$ may hardly affect the PC (see Fig. \[fig1\]).
We conclude by further explaining the physical argument behind our result. Very roughly, the renormalization of the dimensionless attractive interaction $\lambda$ $(>0)$ from a higher frequency scale $\omega_>$ to a lower one, $\omega_<$, is given by $\lambda (\omega_<) = \frac {\lambda (\omega_>) }{1- \lambda( \omega _>)\ln(\frac{\omega_>} {\omega_<})}\;$. At $T^0_c$ and $1/\tau_s=0$, the attractive interaction should diverge. Using this to eliminate $\lambda (\omega_D)$ ($\equiv g N(0)/V$), we obtain that for $T^0_c \lesssim \omega \ll \omega_D$, $\lambda(\omega) \sim 1/\ln (\omega /T_c^0)$, which, around the Thouless scale, is close to the value found in Ref. . The pair-breaking stops the renormalization at $1/\tau_s$, but does not significantly change the interaction on the much larger scale of $E_c$. Our prediction can also be tested with very small rings made of known low-$T_c$ superconductors.
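The following toy sketch is a cartoon of this argument only (not the full calculation of Eq. (\[MAIN\])): it shows that stopping the logarithmic renormalization at $1/\tau_s$ leaves the coupling at the Thouless scale essentially unchanged as long as $1/\tau_s \lesssim E_c$; the numerical values are illustrative.

```python
# Toy illustration (not the full calculation of Eq. (MAIN)): the running
# coupling lambda(omega) ~ 1/ln(omega/T_c^0), with the renormalization cut off
# at the pair-breaking rate 1/tau_s. Energies in units of E_c; values are
# illustrative only.
import numpy as np

def lam(omega, tc0, inv_tau_s):
    omega_eff = max(omega, inv_tau_s)     # the running stops at 1/tau_s
    return 1.0 / np.log(omega_eff / tc0)

tc0 = 1.0e-3                              # bare T_c^0 = 10^-3 E_c
for inv_tau_s in (0.0, 10 * tc0, 1.0, 10.0):
    print(f"1/tau_s = {inv_tau_s:8.4f} E_c   lambda(E_c) = {lam(1.0, tc0, inv_tau_s):.3f}")
```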
We point out that the mechanism suggested by Kravtsov and Altshuler [@KA], relating extrinsic dephasing to an enhanced PC, is different from ours, since it relies on the rectification of the noise.\
[**Acknowledgements:**]{} We thank E. Altman, A. M. Finkel’stein, L. Gunther, K. Michaeli, A. C. Mota, F. von Oppen, Y. Oreg, G. Schwiete and A. A. Varlamov for very helpful discussions. This work was supported by the German Federal Ministry of Education and Research (BMBF) within the framework of the German-Israeli project cooperation (DIP), and by the Israel Science Foundation (ISF).
[999]{}
M. Büttiker, Y. Imry, and R. Landauer, Phys. Lett. [**96A**]{}, 365 (1983).
Y. Imry, [*Introduction to Mesoscopic Physics*]{}, 2nd ed (Oxford University Press, Oxford, 2002).
L. P. Levy, G. Dolan, J. Dunsmuir, and H. Bouchiat, Phys. Rev. Lett. [**64**]{}, 2074 (1990).
E. M. Q. Jariwala, P. Mohanty, M. B. Ketchen, and R. A. Webb, Phys. Rev. Lett. [**86**]{}, 1594 (2001).
R. Deblock, R. Bel, B. Reulet, H. Bouchiat, and D. Mailly, Phys. Rev. Lett. [**89**]{}, 206803 (2002).
H. F. Cheung, E. K. Riedel, and Y. Gefen, Phys. Rev. Lett. [**62**]{}, 587 (1989).
E. K. Riedel and F. von Oppen, Phys. Rev. B [**47**]{}, 15449 (1993).
B. L. Altshuler, Y. Gefen, and Y. Imry, Phys. Rev. Lett. [**66**]{}, 88 (1991).
The fluctuation correction to the orbital magnetic response above $T_c$ was calculated first by L. G. Aslamazov and A. I. Larkin, Sov. Phys. JETP [**40**]{}, 321 (1975).
V. Ambegaokar and U. Eckern, Europhys. Lett. [**13**]{}, 733 (1990).
V. Ambegaokar and U. Eckern, Phys. Rev. Lett. [**65**]{}, 381 (1990).
P. G. de Gennes, [*Superconductivity of Metals and Alloys*]{} (Addison-Wesley Publishing Co., 1989).
P. Morel and P. W. Anderson, Phys. Rev. [**125**]{}, 1263 (1962).
The $T_c's$ of the noble metals were estimated using varying amounts of alloying by R. F. Hoyt and A. C. Mota, Solid State Commun. [**18**]{}, 139 (1976). The pair-breaking strengths in these alloys are not precisely known.
F. Pierre, A. B. Gougam, A. Anthore, H. Pothier, D. Esteve, and N. O. Birge, Phys. Rev. B [**68**]{}, 085413 (2003). This work highlighted experimentally the role of minute amounts of magnetic impurities in producing the extra low temperature dephasing in the noble metal samples.
Y. Imry, H. Fukuyama, and P. Schwab, Europhys. Lett. [**47**]{}, 608 (1999).
G. Schwiete and Y. Oreg, submitted in parallel with the present work, have considered the “strong" Little-Parks effect including fluctuations for rings shorter than the coherence length. There the $T_c$ is driven to zero due to the pair breaking effect of the flux. Like in our case, the PC can still be large outside the superconducting regime. These results are relevant to recent experiments in Al rings, see N. C. Koshnick, H. Bluhm, M. E. Huber, and K. A. Moler, Science [**318**]{}, 1440 (2007).
A. A. Abrikosov and L. P. Gorkov, Soviet Physics JETP [**12**]{}, 1243 (1961).
More details about the derivation and the results, including comparison with Refs. , will be given elsewhere.
U. Eckern and P. Schwab, J. of Low Temp. Phys. [**126**]{}, 1291 (2002).
The classical limit i.e., retaining only the lowest Matsubara frequency $\nu =0$, holds [*only*]{} when the temperature $T$ is larger than $E_{c}$ (see Fig. 1 in Ref. ).
A. Altland and B. Simons, [*Condensed Matter Field Theory*]{} (Cambridge University Press, Cambridge, 2006).
A. A. Abrikosov, L. P. Gorkov, and I. E. Dzyaloshinski, [*Methods of Quantum Field Theory in Statistical Physics*]{} (Prentice-Hall, Englewood Cliffs, NJ, 1963).
V. E. Kravtsov and B. L. Altshuler, Phys. Rev. Lett. [**84**]{}, 3394 (2000).
|
---
abstract: |
Motivated by the excess in the diphoton production rate of the Higgs boson at the Large Hadron Collider (LHC), we investigate the possibility that one of the CP-even Higgs bosons of the extra $U(1)$ extended minimal supersymmetric standard model can give a consistent result. We scan the parameter space for a standard-model-like Higgs boson such that the mass is in the range of $124-127$ GeV and the production rate $\sigma \cdot B$ of the $W
W^*$, $ZZ^*$ modes is consistent with the standard model (SM) values while that of $\gamma\gamma$ is enhanced relative to the SM value. We find that the SM-like Higgs boson is mostly the lightest CP-even Higgs boson and it has a strong mixing with the second lightest one, which is largely singletlike. The implications on $Z\gamma$ production rate and properties of the other Higgs bosons are also studied.
author:
- 'Kingman Cheung$^{1,2}$, Chih-Ting Lu$^{1}$, and Tzu-Chiang Yuan$^3$'
title: |
Diphoton rate of the standard-model-Like Higgs boson\
in the extra $U(1)$ extended MSSM\
---
Introduction
============
A boson of mass 125 GeV, largely consistent with the standard model (SM) Higgs boson, was recently discovered by the LHC experiments [@cms; @atlas]. The production rates of $pp \to h \to WW^*, ZZ^*$ are consistent with the SM values while that of $pp \to h \to \gamma\gamma$ is somewhat higher than the SM expectation. On the other hand, the fermionic modes $b\bar b$ and $\tau\tau$ seem to be suppressed in the present data set; however, the uncertainties are still too large to say anything concrete. Nevertheless, the diphoton rate has stayed above the SM prediction since 2011. If it is confirmed after more data are collected at the end of 2012, this would become a strong constraint on supersymmetry models and on other extended Higgs models.
In the minimal supersymmetric standard model (MSSM), the mass of the lightest CP-even Higgs boson can be raised to 125 GeV by a large radiative correction with a relatively large soft parameter $A_t$ [@mssm; @carena]. However, the more difficult requirement is to achieve an enhanced diphoton production rate $gg \to h \to \gamma\gamma$ relative to the SM prediction. One possibility is to have a light scalar tau (or stau), as light as 100 GeV, which is made possible by choosing the third-generation slepton masses $m_{L_3}, m_{E_3} \sim 200-450$ GeV, the parameter $\mu \sim 200-1000$ GeV, and large $\tan\beta \sim 60$ [@carena]. Such a light stau will soon be confirmed or ruled out at the LHC. Another possibility is to identify the heavier CP-even Higgs boson as the observed 125 GeV boson [^1]; the enhancement of the diphoton rate is then made possible by a reduction in the $b\bar b$ width [@mssm-d]. In this case, all the other Higgs bosons are around or below 125 GeV, which will soon be uncovered at the LHC. Yet, one can also fine-tune the mixing angle ($\alpha$) between the two CP-even Higgs bosons such that the observed one is mostly $H_u$, the Higgs doublet that couples to the right-handed up-type quarks. In this case, one can achieve enhancement of the diphoton rate and suppression of the $\tau\tau$ mode [@yee]. Among all these possible scenarios, certain levels of fine-tuning are necessary to achieve a 125 GeV Higgs boson and an enhanced diphoton rate.
It is quite well known that the next-to-minimal supersymmetric standard model (NMSSM) gives additional tree-level contributions to the Higgs boson mass arising from the terms $\lambda S H_u H_d$ and $\kappa S^3/3$ in the superpotential and the corresponding soft terms. The 125 GeV CP-even Higgs boson can be obtained as either the lightest or the second lightest one, without putting stress on the stop sector [@ellw; @nmssm; @kai]. The diphoton production rate can be enhanced through the singlet-doublet mixing. The second lightest CP-even Higgs boson is the SM-like one while the lightest CP-even one is more singletlike. The mixing between these two states substantially reduces the $b\bar b$ width of the SM-like Higgs boson, which then causes an increase of the branching ratio into $\gamma\gamma$ [@ellw; @nmssm].
In this work, we consider the $U(1)'$-extended minimal supersymmetric standard model (UMSSM), which involves an extra $U(1)$ symmetry and a Higgs singlet superfield $S$. It is well known that by adding the singlet Higgs field one can easily raise the Higgs boson mass. The scalar component of the Higgs singlet superfield develops a vacuum expectation value (VEV), which breaks the $U(1)'$ symmetry and gives a mass to the $U(1)'$ gauge boson, denoted by $Z'$. At the same time, the VEV together with the Yukawa coupling can form an effective $\mu_{\rm eff}$ parameter from the term $\lambda \langle S \rangle H_u H_d = \mu_{\rm eff} H_u H_d$ in the superpotential, thus solving the $\mu$ problem of MSSM. Also, because of the presence of the $U(1)'$ symmetry, terms like $S, S^2$, or $S^3$ are disallowed in the superpotential.
The existence of extra neutral gauge bosons has been predicted in many extensions of the SM [@paul]. String-inspired models and grand-unified theory (GUT) models usually contain a number of extra $U(1)$ symmetries, beyond the hypercharge $U(1)_Y$ of the SM. The exceptional group $E_6$ is one of the famous examples of this type. Phenomenologically, the most interesting option is the breaking of these $U(1)$’s at around TeV scales, giving rise to an extra neutral gauge boson observable at the Tevatron and the LHC. Previously, in the works of Refs. [@ours; @kang], a scenario of $U(1)'$ symmetry breaking at around the TeV scale by the VEV of a Higgs singlet superfield in the context of weak-scale supersymmetry was considered. The $Z'$ boson obtains a mass from the breaking of this $U(1)'$ symmetry that is proportional to the VEVs. Such a $Z'$ can decay into SUSY particles such as neutralinos, charginos, and sleptons, in addition to the SM particles. Thus, the current mass limits are reduced by a substantial amount and so is the sensitivity reach at the LHC [@ours; @kang]. We have also considered the SM-like boson and its decay branching ratios into $WW^*,\; ZZ^*$, and $\tilde{\chi}_1^0 \tilde{\chi}_1^0$ with the Higgs boson mass in the ranges of $120-130$ and $130-141$ GeV [@mimic]. In the first mass range, $120-130$ GeV, we selected the parameter space such that the SM-like Higgs boson behaves like the SM Higgs boson, while in the second mass range, $130-141$ GeV, we selected the parameter-space points to make sure that the Higgs boson is hiding from the existing data.
The goal in this work is to refine the previous analyses [@mimic] to scan for the parameter space such that
1. the SM-like Higgs boson falls in the mass range $124-127$ GeV;
2. the production rates for $gg \to h \to WW^*, ZZ^*$ are consistent with the SM within uncertainties;
3. the production rate for $gg \to h \to \gamma\gamma$ is enhanced relative to the SM prediction, namely, $$R_{\gamma\gamma} \equiv \frac{\sigma(gg \to h) \times B(h \to \gamma\gamma)}
{\sigma(gg \to h_{\rm SM}) \times B(h_{\rm SM} \to \gamma\gamma)}
> 1 \; ;$$
4. other existing constraints, such as the $Z$ invisible width and the chargino mass bound, are fulfilled.
In the chosen parameter space, we calculate the $Z\gamma$ production rate and study the properties of the other Higgs bosons.
We organize the paper as follows. In the next section, we briefly describe the model (UMSSM) and summarize the formulas for the one loop decays of the CP-even Higgs bosons. In Sec. III, we search for the parameter space in the model that satisfies the above requirements, and present the numerical results. We discuss and conclude in Sec. IV. Detailed expressions for the loop functions in the decay formulas are relegated to the Appendix. Some recent studies on extended MSSM can be found in Ref. [@others] and on extended electroweak models in Ref. [@others1].
UMSSM
=====
For illustrative purposes we use the popular grand unified models based on the exceptional group $E_6$, which is anomaly free. The two most studied $U(1)$ subgroups in the symmetry breaking chain of $E_6$ are $$E_6 \to SO(10) \times U(1)_\psi\,, \qquad
SO(10) \to SU(5) \times U(1)_\chi$$ In $E_6$ each family of the left-handed fermions is embedded into a fundamental $\mathbf{27}$-plet, which decomposes under $E_6 \to SO(10) \to SU(5)$ as $$\mathbf{27} \to \mathbf{16} + \mathbf{10} + \mathbf{1} \to
( \mathbf{10} + \mathbf{5^*} + \mathbf{1} ) + (\mathbf{5} +
\mathbf{5^*} ) + \mathbf{1}$$ The SM fermions of each family together with an extra state identified as the conjugate of a right-handed neutrino are embedded into the $\mathbf{10}$, $\mathbf {5^*}$, and $\mathbf{1}$ of the $\mathbf{16}$. All the other states are exotic states required for the $\mathbf{27}$-plet of $E_6$ unification. In general, the two $U(1)_\psi$ and $U(1)_\chi$ are allowed to mix as $$Q'(\theta_{E_6} ) = \cos \theta_{E_6} Q'_\chi + \sin \theta_{E_6} Q'_\psi \;,$$ where $0 \le \theta_{E_6} < \pi$ is the mixing angle. The commonly studied $Z'_\eta$ model assumes the mixing angle $\theta_{E_6} = \pi - \tan^{-1} \sqrt{5/3} \sim 0.71 \pi$ such that $$\label{eta-model}
Q'_\eta = \sqrt{ \frac{3}{8} } Q'_\chi - \sqrt{ \frac{5}{8} } Q'_\psi \;.$$ Here we follow the common practice by assuming that all the exotic particles, other than the particle contents of the MSSM, are very heavy and well beyond the reaches of all current and planned colliders. For an excellent review of $Z'$ models, see Ref. [@paul].
The effective superpotential $W_{\rm eff}$ involving the matter and Higgs superfields in UMSSM can be written as $$\label{sp}
W_{\rm eff} = \epsilon_{ab} \left [ y^u_{ij} Q^a_j H_u^b U^{\rm c}_i
- y^d_{ij} Q^a_j H_d^b D^{\rm c}_i
- y^l_{ij} L^a_j H_d^b E^{\rm c}_i
+ h_s S H_u^a H_d^b \right ] \;,$$ where $\epsilon_{12}= - \,\epsilon_{21} =1$, $i,j$ are family indices, and $y^u$ and $y^d$ represent the Yukawa matrices for the up-type and down-type quarks respectively. Here $Q, L, U^{\rm c}, D^{\rm c}, E^{\rm c}, H_u$, and $H_d$ denote the MSSM superfields for the quark doublet, lepton doublet, up-type quark singlet, down-type quark singlet, lepton singlet, up-type Higgs doublet, and down-type Higgs doublet respectively, and $S$ is the singlet superfield. The $U(1)'$ charges of the fields $H_u, H_d,$ and $S$ are chosen such that the relation $Q'_{H_u} + Q'_{H_d} + Q'_S = 0$ holds. Thus $S H_u H_d$ is the only term in the superpotential allowed by the $U(1)'$ symmetry beyond the MSSM. Once the singlet scalar field $S$ develops a VEV, it generates an effective $\mu$ parameter: $\mu_{\rm eff} =
h_s \langle S \rangle$.
The singlet superfield will give rise to a singlet scalar boson and a singlino. The real part of the scalar boson will mix with the real part of $H_u^0$ and $H_d^0$ to form three physical CP-even Higgs bosons. The imaginary part of the singlet scalar will be eaten and become the longitudinal part of the $Z'$ boson according to the Higgs mechanism in the process of spontaneous symmetry breaking of $U(1)'$. The singlino, together with the $Z'$-ino, will mix with the neutral gauginos and neutral Higgsinos to form six physical neutralinos. Studies of various singlet extensions of the MSSM can be found in Refs. [@vernon-n; @vernon-h; @sy]. The Higgs doublet and singlet fields are $$H_d = \left( \begin{array}{c}
H_d^0 \\
H_d^- \end{array} \right ) \;\; , \qquad
H_u = \left( \begin{array}{c}
H_u^+ \\
H_u^0 \end{array} \right ) \;\; \qquad {\rm and} \qquad
S \;.$$ The scalar interactions are obtained by calculating the $F$- and $D$-terms of the superpotential, and by including the soft-SUSY-breaking terms. They are given in Refs. [@ours; @mimic].
Now we can expand the Higgs fields after taking on VEVs as $$\begin{aligned}
H_d^0 &=& \frac{1}{\sqrt{2}} \, \left( v_d + \phi_d + i \chi_d \right)
\,,\nonumber\\
H_u^0 &=& \frac{1}{\sqrt{2}} \, \left( v_u + \phi_u + i \chi_u \right )
\,,\nonumber\\
S &=& \frac{1}{\sqrt{2}} \, \left( v_s + \phi_s + i \chi_s \right )
\,.\nonumber\end{aligned}$$ It is well known that the lightest CP-even Higgs boson mass receives a substantial radiative mass correction in the MSSM. The same is true here for the UMSSM. Tree-level and radiative corrections to the mass matrix ${\cal M}^{\rm tree}$ have been given in Ref. [@vernon-h]. We have included radiative corrections in our calculation. The interaction eigenstates $\phi_u, \phi_d, \phi_s$ can be rotated into mass eigenstates via an orthogonal matrix $O$ $$\left( \begin{array}{c}
h_1 \\
h_2 \\
h_3 \end{array} \right ) = O \,
\left( \begin{array}{c}
\phi_d \\
\phi_u \\
\phi_s \end{array} \right ) \qquad \;,$$ such that $O{\cal M}^{\rm tree + loop}O^T = {\rm diag}( m^2_{h_1},\;
m^2_{h_2}, \; m^2_{h_3} )$ in ascending order. There are also one CP-odd Higgs boson and a pair of charged Higgs bosons, as in the MSSM. Note that the Higgs boson masses receive extra contributions from the $D$-term of the $U(1)'$ symmetry (proportional to $g_2$) and from the $F$-term for the mixing of the doublets with the singlet Higgs field (proportional to $h_s$).
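As a purely numerical illustration of this diagonalization step, the sketch below extracts the mixing matrix $O$ and the CP-even masses from a symmetric mass-squared matrix in the $(\phi_d,\phi_u,\phi_s)$ basis; the matrix entries are placeholders and do not come from the UMSSM expressions of Ref. [@vernon-h].

```python
# Numerical illustration of the diagonalization step: mixing matrix O and
# CP-even masses from a symmetric mass-squared matrix in the (phi_d, phi_u,
# phi_s) basis. The entries below are placeholders in GeV^2, not the UMSSM
# expressions.
import numpy as np

M2 = np.array([[90000.0, 12000.0,  8000.0],
               [12000.0, 16000.0,  6000.0],
               [ 8000.0,  6000.0, 26000.0]])

eigvals, eigvecs = np.linalg.eigh(M2)   # eigenvalues in ascending order
O = eigvecs.T                           # rows: h_1, h_2, h_3
masses = np.sqrt(eigvals)

print("CP-even masses (GeV):", masses)
print("singlet fractions |O_k3|^2:", O[:, 2]**2)
```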
Formulas for one loop decays of the CP-even Higgs bosons
--------------------------------------------------------
We will present the relevant formulas for the one loop processes of $h_j \to \gamma\gamma, Z\gamma$ and $gg$. The $gg$ width is relevant for the gluon-fusion production cross section. The couplings of the neutral CP-even Higgs bosons with the SM gauge bosons and fermions, charged Higgs bosons, sfermions, charginos and neutralinos have been given in Refs. [@ours; @mimic].
The $\gamma\gamma$ partial decay width of the CP-even Higgs boson ($h_j , \, j=1,2,3$) receives contributions from all charged particles running in the loop. It is given by $$\begin{aligned}
\label{hgam}
\Gamma (h_j \rightarrow \gamma\gamma)&=& \frac{\alpha^{2}m_{h_j}^{3}}{256\pi^{3}v^{2}}
\left | F_{\tau} + 3 \left( \frac{2}{3} \right)^{2} F_{t}+
3 \left(-\frac{1}{3}\right)^{2} F_{b} + F_{W}+ F_{h^{\pm}} \right. \nonumber \\
&& + \, F_{\tilde \tau} + \, 3 \left(\frac{2}{3}\right)^{2} F_{\tilde t}
\left. +3 \left(- \frac{1}{3} \right)^{2} F_{\tilde b} + F_{\tilde \chi^{\pm}} \right |^{2} \;,\end{aligned}$$ where the factor $3$ in front of $F_t, F_b, F_{\tilde t}$, and $F_{\tilde b}$ accounts for the color factor, and $v^2 = v_u^2 + v_d^2$. The expressions for the loop functions $F$ are given in the Appendix. For the decay $h_j \to gg$ where only colored particles are running in the loop, we have $$\label{hglue}
\Gamma (h_j \rightarrow gg) = \frac{\alpha_{s}^{2}m_{h_j}^{3}}{128\pi^{3} v^{2}}
\biggl\vert F_{t}+F_{b}+F_{\tilde t}+F_{\tilde b} \biggr\vert^{2} \;.$$ For the decay $h_j \to Z \gamma$, we have $$\begin{aligned}
\label{hZgam}
\Gamma (h_j \rightarrow Z\gamma) & =&
\frac{m_{h_j}^{3}}{32\pi}
\left(1-\frac{m_{Z}^{2}}{m_{h_j}^{2}}\right)^{3}
\frac{\alpha ^{2}g^{2}} {16\pi ^{2}m_{W}^{2}} \nonumber \\
&& \times \biggl\vert G_{\tau}+G_{t}+G_{b}+G_{W}+G_{h^{\pm}}
+G_{\tilde \tau}+G_{\tilde t}+G_{\tilde b} + G_{\tilde \chi ^{\pm}} \biggr\vert^{2} \; .\end{aligned}$$ The expressions for the loop functions $G$ are given in the Appendix.
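To make the use of Eq. (\[hgam\]) concrete, a minimal numerical sketch is given below; the loop-amplitude values $F_i$ are pure placeholders (the actual expressions are collected in the Appendix), and only the color factors, electric charges, and the overall prefactor follow the formula.

```python
# Minimal sketch of evaluating Eq. (hgam) once the loop amplitudes F_i of the
# Appendix are known. The F values below are pure placeholders (chosen only to
# give numbers of a realistic order); the color factors, charges, and prefactor
# follow the formula.
import math

ALPHA = 1.0 / 137.036      # fine-structure constant
V = 246.0                  # sqrt(v_u^2 + v_d^2) in GeV

def gamma_h_to_diphoton(m_h, F):
    """F: dict of (complex) loop amplitudes keyed as in Eq. (hgam)."""
    amp = (F['tau'] + 3 * (2 / 3)**2 * F['t'] + 3 * (1 / 3)**2 * F['b']
           + F['W'] + F['Hpm'] + F['stau']
           + 3 * (2 / 3)**2 * F['stop'] + 3 * (1 / 3)**2 * F['sbottom']
           + F['chargino'])
    return ALPHA**2 * m_h**3 / (256 * math.pi**3 * V**2) * abs(amp)**2

F = dict(tau=0.02, t=1.4, b=0.02, W=-8.3, Hpm=0.0,
         stau=0.0, stop=0.0, sbottom=0.0, chargino=0.0)
print(gamma_h_to_diphoton(125.0, F), "GeV")   # ~9e-6 GeV for these placeholder inputs
```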
Scanning of Parameter Space
===========================
The UMSSM has the following parameters: $M_{\tilde{Z}'}$, $A_s$, the VEV $ \langle S \rangle = v_s/\sqrt{2}$, and the Yukawa coupling $h_s$, other than those of the MSSM: gaugino masses $ M_{1,2,3}$, squark masses $M_{\tilde{q}}$, slepton masses $M_{\tilde{\ell}}$, soft parameters $A_{t,b,\tau}$, and $\tan\beta$. The soft parameter $M_S$ can be expressed in terms of VEVs and couplings through the tadpole conditions. The effective $\mu$ parameter is given as $\mu_{\rm eff} = h_s \langle S \rangle$. The other model parameters are fixed by the quantum numbers $Q'_{\phi}$ of various supermultiplets $\phi$.
The mass of the $Z'$ boson is determined by $m_{Z'} \approx {g_2} (Q'^2_{H_u} v_u^2 +Q'^2_{H_d} v_d^2 + Q'^2_{S} v_s^2 )^{1/2}$ if the $Z-Z'$ mixing is ignored. The most stringent limit on the $Z'$ boson comes from the dilepton resonance search by ATLAS [@atlas]. Nevertheless, we can avoid these $Z'$ mass limits by assuming that the leptonic decay mode is suppressed. The mixing between the SM $Z$ boson and the $Z'$ can be suppressed by carefully choosing $\tan\beta \approx (Q'_{H_d}/Q'_{H_u})^{1/2}$ [@vernon-h]. In this work we do not impose these constraints in our parameter scan. However, we note that we can always carefully choose the set of quantum numbers $Q'$ such that both the $Z'$ mass and mixing constraints can be evaded. [^2]
We first fix most of the MSSM parameters (unless stated otherwise): $$\begin{aligned}
&& M_{1} = M_{2} / 2 = 0.2 \;{\rm TeV}, \;\; M_3 = 2 \; {\rm TeV} \; ; \nonumber \\
&& M_{\tilde{Q}} = 0.7 \;{\rm TeV},\;
M_{\tilde{U}} = 0.7 \;{\rm TeV},\; M_{\tilde{D}} = 1 \;{\rm TeV},\;
M_{\tilde{L}} = M_{\tilde{E}} = 1 \;{\rm TeV}\; ; \\
&&A_b = A_{t} = A_\tau = 1\;{\rm TeV} \; . \nonumber\end{aligned}$$ We also fix the UMSSM parameter: $$A_s = 0.5 \, {\rm TeV} \;\; ,$$ while we scan the rest of the parameters in the following ranges $$0.2 < h_s < 0.6 ,\;\; 1.1 < \tan \beta < 40\; , \;
\label{scan1}$$ and $$0.2 \,{\rm TeV} < v_s < 2 \;{\rm TeV},\;\;
0.2 \, {\rm TeV} < M_{\tilde{Z}'} < 2 \, {\rm TeV} \; \; .
\label{scan2}$$ Note that the $U(1)'$ gaugino mass, $M_{\tilde{Z}'}$, is a soft-SUSY-breaking parameter, unlike the $Z'$ boson mass which is fixed by the $U(1)'$ coupling constant and quantum numbers, and the three VEVs.
Constraints
-----------
[*Chargino mass.–*]{} The chargino sector of the UMSSM is the same as that of the MSSM with the following chargino mass matrix $$M_{\tilde{\chi}^\pm} = \left ( \begin{array}{cc}
M_2 & \sqrt{2} m_W \sin\beta \\
\sqrt{2} m_W \cos\beta & \mu_{\rm eff} \end{array} \right ) \;.$$ Thus, the two chargino masses depend on $M_2$, $\mu_{\rm eff}=h_s v_s/\sqrt{2}$, and $\tan\beta$. The current bound on the lighter chargino mass is $m_{\tilde{\chi_1}^\pm}> 94$ GeV as long as its mass difference with the lightest supersymmetric particle (LSP) is larger than 3 GeV [@pdg]. We impose this chargino mass bound in our scans in the parameter space defined by (\[scan1\]) and (\[scan2\]).
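For illustration, the chargino mass bound can be checked from the matrix above with a few lines of code; the input values of $M_2$, $\mu_{\rm eff}$, and $\tan\beta$ below are illustrative, not scan points.

```python
# Minimal sketch of the chargino masses from the 2x2 matrix above; the
# physical masses are the singular values of M. Input values are illustrative.
import numpy as np

def chargino_masses(M2, mu_eff, tan_beta, m_W=80.4):
    beta = np.arctan(tan_beta)
    M = np.array([[M2,                              np.sqrt(2) * m_W * np.sin(beta)],
                  [np.sqrt(2) * m_W * np.cos(beta), mu_eff]])
    return np.sort(np.linalg.svd(M, compute_uv=False))

m_light, m_heavy = chargino_masses(M2=400.0, mu_eff=300.0, tan_beta=8.0)
print(m_light, m_heavy)   # the lighter mass must exceed ~94 GeV to pass the bound
```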
[*Invisible width of the $Z$ boson.–*]{} The lightest neutralino $\tilde{\chi}^0_1$ is the LSP of the model, and thus would be stable and invisible. When the $Z$ boson decays into a pair of LSPs, it would give rise to an invisible width of the $Z$ boson, which has been tightly constrained by experiments. The current bound on the $Z$ invisible width is $\Gamma_{\rm inv} (Z) < 3$ MeV at about 95% C.L. [@pdg]. The coupling of the $Z$ boson to the lightest neutralino is given by $${\cal L}_{Z\tilde{\chi}_1^0 \tilde{\chi}^0_1} =
\frac{g_1}{4} \, \left( | N_{13}|^2 - | N_{14} |^2 \right ) \,
Z_\mu \, \overline{\tilde{\chi}^0_1} \gamma^\mu \gamma_5
\, \tilde{\chi}^0_1\; ,$$ where $N$ is the orthogonal matrix that diagonalized the neutralino mass matrix. The contribution to the $Z$ boson invisible width is $$\Gamma(Z \to \tilde{\chi}^0_1 \tilde{\chi}^0_1 ) =
\frac{g_1^2} { 96 \pi} \left( | N_{13}|^2 - | N_{14} |^2 \right )^2 m_Z
\left( 1 - \frac{4 m_{\tilde{\chi}^0_1}^2 } { m_Z^2 } \right )^{3/2} \; .$$ Note that the $Z$ boson would not couple to the singlino component, and we have assumed negligible mixing between $Z$ and $Z'$ bosons; therefore the $Z$ boson would not couple to the $Z'$-ino component either. Here we impose the experimental constraint on the invisible $Z$ width. The constraint of fulfilling the relic density by the LSP will be ignored in this work.
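As a quick numerical check of this constraint, the width formula can be evaluated as follows; the coupling $g_1$ and the mixing entries $N_{13}$, $N_{14}$ are illustrative placeholders rather than outputs of our neutralino diagonalization.

```python
# Minimal sketch of the Z -> neutralino-pair width given above. The coupling
# g_1 and the mixing entries N_13, N_14 are illustrative placeholders; only
# the formula itself follows the text.
import math

def gamma_z_invisible(g1, N13, N14, m_chi, m_Z=91.19):
    if 2.0 * m_chi >= m_Z:
        return 0.0
    phase_space = (1.0 - 4.0 * m_chi**2 / m_Z**2) ** 1.5
    return g1**2 / (96.0 * math.pi) * (abs(N13)**2 - abs(N14)**2)**2 * m_Z * phase_space

width_GeV = gamma_z_invisible(g1=0.65, N13=0.3, N14=0.2, m_chi=30.0)
print(width_GeV * 1.0e3, "MeV")   # to be compared with the ~3 MeV bound above
```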
[*Mass of the Higgs boson and production rate of various decay modes.–*]{} The boson masses reported by CMS and ATLAS are $125.3 \pm 0.6$ [@cms] and $126.0 \pm 0.6$ GeV [@atlas], respectively. The current data indicate that the observed boson is similar to the SM Higgs boson. For our purpose, we define the SM-like Higgs boson $h_{\rm SM-like}$ in our scenario as the state whose squared singlet component is smaller than $1/3$, i.e., $O_{k3}^2 < \frac{1}{3}$, where $h_k = O_{k1} \phi_d + O_{k2} \phi_u + O_{k3} \phi_s$. For all the allowed points we have $k=1$ for the SM-like Higgs boson. We choose the allowable mass range for the SM-like Higgs boson in our analysis as $$124 \; {\rm GeV} < m_{h_{\rm SM-like}} < 127 \; {\rm GeV} \;.$$
The production rate of various channels of the Higgs boson relative to the SM prediction is defined as $$\label{R}
R_{ab} \equiv \frac{ \sigma(pp \to h+X ) \times B(h \to ab)}
{ \sigma(pp \to h_{\rm SM} +X ) \times B(h_{\rm SM} \to ab)}$$ where $ab = \gamma\gamma, W^+ W^-, ZZ, b\bar b, \tau^+ \tau^-$. At the LHC, the production of $h_{\rm SM}$ or the CP-even Higgs bosons in the UMSSM is dominated by gluon fusion. We shall focus on gluon fusion in Eq. (\[R\]). The production rates of $WW^*$ and $ZZ^*$ reported by CMS and ATLAS are close to the SM predictions: $$\begin{aligned}
0.2 < R_{WW^*} <1.1 \;, \;\; 0.4< R_{ZZ^*} <1.2 & \qquad {\rm CMS} \nonumber \\
0.8 < R_{WW^*} <1.7 \;, \;\; 0.6< R_{ZZ^*} <1.8 & \qquad {\rm ATLAS} \nonumber\end{aligned}$$ On the other hand, the diphoton production rates reported by CMS [@cms] and ATLAS [@atlas] are $$\begin{aligned}
1.1 < R_{\gamma\gamma} <2.0 \; , \nonumber \\
1.3 < R_{\gamma\gamma} < 2.2 \;, \nonumber\end{aligned}$$ respectively. We require in our scan $$\begin{aligned}
&& 0.5 < R_{WW^*}, R_{ZZ^*} < 1.5 \; , \nonumber \\
&& 1.0 < R_{\gamma\gamma} \; .\end{aligned}$$
Current limits on the pseudoscalar Higgs boson ($A$) come from the LEP searches for the associated production with a scalar Higgs boson ($H$), $e^+ e^- \to Z^* \to AH$. In MSSM-extended models, such as the NMSSM, where multiple scalar and pseudoscalar Higgs bosons exist, the constraint could be severe. However, there is only one pseudoscalar Higgs boson in the UMSSM and, in our choice of parameters, it is often heavier than a few hundred GeV. Thus, it is not constrained by the current limits. Similarly, the charged Higgs boson is also heavy and not constrained by current searches.
Numerical results
-----------------
![\[one\] Case for $h_s = 0.4$. Parameter space points satisfy $124\;{\rm GeV} < m_{h_{\rm SM-like}} < 127 \;{\rm GeV}$, the chargino mass, and the invisible $Z$ width constraints. Also, the relative production rates satisfy $0.5 < R_{WW^*,ZZ^*} < 1.5$ and $ 1 < R_{\gamma\gamma}$. ](hs04-rpp.eps "fig:"){width="3.2in"} ![](hs04-rpp-rww.eps "fig:"){width="3.2in"} ![](hs04-rpp-mh2.eps "fig:"){width="3.2in"} ![](hs04-ok3-mh2.eps "fig:"){width="3.2in"}
![\[new2\] Case for $h_s = 0.4$. Same as Fig. \[one\], but showing $B(h_1 \to b \bar b)$ versus (a) $m_{h_2}$ and (b) $R_{\gamma\gamma}$.](hs04-b-mh2.eps "fig:"){width="3.2in"} ![](hs04-b-rpp.eps "fig:"){width="3.2in"}
![\[two\] Same as Fig. \[one\]. Case for $h_s = 0.35$. ](hs035-rpp.eps "fig:"){width="3.2in"} ![](hs035-rpp-rww.eps "fig:"){width="3.2in"} ![](hs035-rpp-mh2.eps "fig:"){width="3.2in"} ![](hs035-ok3-mh2.eps "fig:"){width="3.2in"}
![\[three\] Same as Fig. \[one\]. Case for $h_s = 0.45$. ](hs045-rpp.eps "fig:"){width="3.2in"} ![](hs045-rpp-rww.eps "fig:"){width="3.2in"} ![](hs045-rpp-mh2.eps "fig:"){width="3.2in"} ![](hs045-ok3-mh2.eps "fig:"){width="3.2in"}
![\[four\] Correlation between $R_{\gamma\gamma}$ and $R_{Z\gamma}$ for $h_s = 0.35, 0.4, 0.45$. ](hs035-rpp-rpz.eps "fig:"){width="3.2in"} ![](hs04-rpp-rpz.eps "fig:"){width="3.2in"} ![](hs045-rpp-rpz.eps "fig:"){width="3.2in"}
We start with $h_s = 0.4$ and show relative production rates, as defined by Eq. (\[R\]), in Fig. \[one\]. We show $R_{\gamma\gamma}$ versus $\tan\beta$ in part (a), $R_{WW^*}$ versus $R_{\gamma\gamma}$ in part (b), $R_{\gamma\gamma}$ versus $m_{h_2}$ in part (c), and $O_{13}^2$ versus $m_{h_2}$ in part (d). The majority of the points have $R_{\gamma\gamma}$ between $1.3$ and $1.6$ while $R_{WW^*}$ (similarly $R_{ZZ^*}$) is between $1.0$ and $1.4$ with $\tan\beta$ between 3 and 9. The correlations between $R_{\gamma\gamma}$ and $m_{h_2}$, and between $O_{13}^2$ and $m_{h_2}$ show that the enhancement of $R_{\gamma\gamma}$ of $h_1$ is a result of mixing between the doublet and singlet components. When $m_{h_2}$ gets closer to $m_{h_1}$, the mixing between $h_1$ and $h_2$ gets stronger, and therefore the singlet component $O_{13}^2$ for $h_1$ becomes larger and so does $R_{\gamma\gamma}$. The $R_{\gamma\gamma}$ is enhanced mainly due to a reduced total width, which is dominated by the $b\bar b$ width. In order to fully understand the enhancement of diphotons, we show the branching ratio $B(h_1 \to b\bar b)$ versus (a) $m_{h_2}$ and (b) $R_{\gamma\gamma}$ in Fig. \[new2\]. In Fig. \[new2\] (a) we can see that the branching ratio into $b\bar b$ decreases as $m_{h_2}$ approaches $m_{h_1}$, where the mixing is the strongest. Also, in Fig. \[new2\](b) $R_{\gamma\gamma}$ increases as $B(h_1 \to b\bar b)$ decreases. It is now clear that the enhancement in diphotons is due to a reduced $b\bar
b$ branching ratio, which in turn results from the stronger mixing with the singlet.
We repeat the cases of $h_s =0.35$ and $h_s=0.45$ in Figs. \[two\] and \[three\], respectively. It is easy to see that the number of points for $h_s=0.35$ and $h_s=0.45$ is reduced substantially as compared with $h_s =0.4$. The range of $\tan\beta$ for $h_s=0.35$ stretches from 3 to 40, while for $h_s = 0.45$ it shrinks drastically to between 2.5 and 6. The correlations between $R_{WW^*}$ and $R_{\gamma\gamma}$, between $R_{\gamma\gamma}$ and $m_{h_2}$, and between $O_{13}^2$ and $m_{h_2}$ are similar to the case of $h_s =0.4$. Note that there is a gap in $m_{h_2}$ between $450$ and $475$ GeV in the case of $h_s=0.35$, which is mainly due to the combined constraints of $R_{\gamma\gamma}$ and $R_{WW^*}$. We have checked that there are many fewer points satisfying all the constraints below $h_s =0.3$ and above $h_s = 0.5$.
An interesting prediction is the relative production rate $R_{Z\gamma}$, which can probe various Higgs-sector extensions [@cw]. In the SM, $B(h_{\rm SM} \to Z\gamma)$ is smaller than $B(h_{\rm SM} \to \gamma\gamma)$. We show the correlation between $R_{Z\gamma}$ and $R_{\gamma\gamma}$ for $h_s=0.35,0.4,0.45$ in Fig. \[four\], in which the points shown already satisfy the constraints listed above. All the points that receive enhancement in the $\gamma\gamma$ channel also receive enhancement in the $Z\gamma$ channel. However, for most of the points $R_{Z\gamma}$ is less than $R_{\gamma\gamma}$, as indicated by the points lying below the green line ($R_{Z\gamma} = R_{\gamma\gamma}$).
-------------------------------------------------------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
[No. 1]{} [No. 2]{} [No. 3]{} [No. 1]{} [No. 2]{} [No. 3]{} [No. 1]{} [No. 2]{} [No. 3]{}
$m_{h_{\rm SM-like}}$ 124.42 124.02 126.04 124.02 124.01 125.45 124.13 125.51 124.35
$m_{h_{2}}$ 154.49 157.15 162.80 159.23 158.27 149.30 159.59 158.88 149.48
$m_{\tilde{\chi}_{0}}$ 54.32 27.77 64.61 28.75 25.33 64.71 27.84 87.70 67.11
$ |O_{13}|^{2} $ 0.316 0.296 0.210 0.248 0.256 0.319 0.215 0.188 0.313
$ \tan\beta $ 19.91 21.55 19.53 9.01 9.01 8.46 5.74 5.53 5.89
$B(h\rightarrow \gamma\gamma)\times 10^{3} $ $ 4.23 $ $ 4.31 $ $ 3.73 $ $ 3.83 $ $ 3.90$ $ 4.43 $ $ 3.75 $ $ 3.62$ $ 4.41 $
$ B(h\rightarrow b\overline{b}) $ 0.387 0.425 0.441 0.444 0.443 0.400 0.480 0.473 0.428
$ B(h\rightarrow \tilde{\chi}^0_1 \tilde{\chi}^0_1 ) $ 0.054 0.007 0.0 0.039 0.033 0.0 0.009 0.0 0.0
$ R_{\gamma\gamma} $ 1.63 1.72 1.61 1.63 1.64 1.66 1.65 1.61 1.70
$ R_{ZZ^{*}} $ 1.15 1.18 1.38 1.13 1.13 1.27 1.16 1.29 1.17
$ R_{WW^{*}} $ 1.15 1.18 1.34 1.13 1.14 1.25 1.16 1.26 1.17
$ R_{Z\gamma} $ 1.40 1.44 1.53 1.36 1.37 1.50 1.39 1.47 1.43
-------------------------------------------------------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
: \[table1\] Selected points (labeled 1, 2, and 3) in the allowed parameter space for $h_s=0.35$, $0.4$, and $0.45$. The masses are given in GeV. In our scan, $h_{\rm SM-like} = h_1$.
It is instructive to list a few selected points in the allowed parameter space, as shown in Table \[table1\]. The masses $m_{h_{\rm SM-like}}$ are all around $124-126$ GeV and the values of $m_{h_2}$ are around $150 - 160$ GeV, so that the singlet-doublet mixing is strong but not maximal. The $b\bar b$ width is reduced by a moderate amount because we have required that the singlet fraction not be too large ($O_{13}^2 < 1/3$). Therefore, we can see that $R_{WW^*}$ and $R_{ZZ^*}$ are enhanced by about $10-15$%. The $R_{\gamma\gamma}$ is enhanced by about 60% and $R_{Z\gamma}$ by about 40%. In the future, if experiments can measure $R_{\gamma\gamma}$, $R_{WW^*}$, and $R_{ZZ^*}$ to better precision, one could tell whether the enhancement in $R_{\gamma\gamma}$ is due to singlet-doublet mixing.
Note that the lightest neutralino $\tilde{\chi}^0_1$ could be lighter than $m_{h_{\rm SM-like}} /2$. In this case $h_1 \to \tilde{\chi}^0_1 \tilde{\chi}^0_1$ is possible, but the branching ratio $B(h_1 \to \tilde{\chi}^0_1 \tilde{\chi}^0_1)$ is very small, because we have required the production rates $R_{\gamma\gamma}$, $R_{WW^*}$, and $R_{ZZ^*}$ to be larger than certain values. In Fig. \[last\], we show the invisible branching ratio $B(h_1 \to \tilde{\chi}^0_1 \tilde{\chi}^0_1)$ versus $m_{h_2}$ and $O^2_{13}$. These are the parameter-space points satisfying all the constraints of chargino mass, invisible $Z$ width, Higgs boson mass, and the Higgs production rates. We can see that the majority of points lie at $B({\rm invisible}) =0$, because in most cases $h_1 \to \tilde{\chi}^0_1 \tilde{\chi}^0_1$ is not yet kinematically open; the other points have $B({\rm invisible}) \alt 0.25$. This is in accord with a recent model-independent study of the Higgs boson couplings, which constrains the nonstandard Higgs decay branching ratio to be less than about $0.25$ [@global].
![\[last\] Case for $h_s=0.4$. Same as Fig. \[one\]. Shown is the invisible branching ratio $B(h_1 \to \tilde{\chi}^0_1 \tilde{\chi}^0_1)$ versus (a) $m_{h_2}$ and (b) $O^2_{13}$. ](hs04-inv-mh2.eps "fig:"){width="3.2in"} ![\[last\] Case for $h_s=0.4$. Same as Fig. \[one\]. Shown is the invisible branching ratio $B(h_1 \to \tilde{\chi}^0_1 \tilde{\chi}^0_1)$ versus (a) $m_{h_2}$ and (b) $O^2_{13}$. ](hs04-inv-o13.eps "fig:"){width="3.2in"}
Discussion and Conclusions
==========================
There are two ways to enhance the diphoton production rate: either by increasing the partial width into $\gamma\gamma$ or by reducing the total width of the Higgs boson (which is dominated by the $b\bar b$ width at 125 GeV). The former is possible if an extra light charged particle runs in the triangle loop, e.g., a light stau in the MSSM [@carena]. The latter is possible if the SM-like Higgs boson has a large mixing with another singlet-like Higgs boson, e.g., in the NMSSM [@ellw], such that the $b\bar b$ width is reduced by the mixing and therefore the $\gamma\gamma$ branching ratio is enhanced.
For our choice of UMSSM parameters, all the extra charged particles, such as the stau, top squark, sbottom, and charged Higgs boson, are relatively heavy. We have searched the parameter space of the UMSSM under the constraints of the current Higgs boson data, the chargino-mass bound, and the $Z$ invisible width. We found that (1) the enhancement of the diphoton production rate is mainly due to the mixing between the Higgs doublets and the singlet, and (2) the lightest CP-even Higgs boson is SM-like while the second lightest is more singletlike. This is in contrast to the case of the NMSSM, in which the lightest is singletlike and the second lightest is SM-like.
Before closing, we offer a few more comments as follows.
1. The relative production rate $R_{Z\gamma}$ mostly moves in the same direction as $R_{\gamma\gamma}$, though the amount of enhancement in $R_{Z\gamma}$ is less than in $R_{\gamma\gamma}$. Probing the $Z\gamma$ mode of the observed Higgs boson is an interesting test of whether it is the Higgs boson of the SM or of one of its extensions. With the present luminosity, it is rather difficult to probe the $Z\gamma$ mode because it suffers an additional suppression from the leptonic branching ratio of the $Z$ boson.
2. Almost all of the points have $R_{WW^*}$ between $1.0$ and $1.4$. This is easy to understand because $R_{\gamma\gamma}$ is enhanced by a reduced total width. Therefore, the $WW^*$ and $ZZ^*$ branching ratios also increase.
3. The mass of the second lightest CP-even Higgs boson cannot be too large, as shown in the bottom panels of Figs. \[one\], \[two\], and \[three\]. Again, this is easy to understand: in order to achieve a large doublet-singlet mixing between $h_1$ and $h_2$, their mass difference cannot be too large. We found that $m_{h_2} < 580, 320, 260$ GeV for $h_s =0.35, 0.4, 0.45$, respectively. The detection of $h_2$ is rather difficult because of its singlet nature; its production cross section would be reduced significantly by the mixing.
4. There are six physical neutralinos in the mass spectrum of the UMSSM. The lightest one can be the dark matter candidate. If kinematically allowed, it may lead to invisible decay modes of the Higgs bosons, the $Z'$, or even the $Z$. Dark matter physics is therefore very rich in this model. We only touch upon it lightly in this work and would like to return to this issue in future publications.
Loop Functions
==============
The partial decay width for $h_j \to \gamma\gamma$ is given by Eq.(\[hgam\]). The loop functions are given by $$\begin{aligned}
\label{Ff}
F_{f}&=& -2 x_{f}\left[1+\left(1-x_{f}\right)f(x_{f})\right] R_{f} \qquad \qquad (f = \tau, t, b)\\
F_{W} &=& \left[2+3x_{W} +3x_{W}\left(2-x_{W}\right)f(x_{W}) \right] R_{W} \\
F_{h^{\pm}} &=& x_{h^{\pm}} \left[ 1-x_{h^{\pm}}f(x_{h^{\pm}})\right]
R_{h^{\pm}} \frac{m_{W}^{2}}{m_{h^{\pm}}^{2}} \end{aligned}$$ for non-SUSY particles and $$\begin{aligned}
\label{Fsf}
F_{\tilde f}&=& \sum_{i=1,2} x_{\tilde{f _{i}}}\left[1-x_{\tilde{f _{i}}}f(x_{\tilde{f _{i}}})\right]
R_{h_j \tilde{f _{i}}\tilde{f _{i}}} \frac{m_{Z}^{2}}{m_{\tilde{f _{i}}}^{2}} \qquad \qquad (\tilde f = \tilde \tau, \tilde t,\tilde b) \\
F_{\tilde \chi^{\pm}}&=& \sum_{i=1,2} -2 x_{\tilde{\chi}^{\pm}_{i}}
\left[ 1+ \left(1-x_{\tilde{\chi}^{\pm}_{i}}\right) f(x_{\tilde{\chi}^{\pm}_{i}})
\right ] R_{\tilde{\chi}^{\pm}_{i}} \frac{m_{W}}{m_{\tilde \chi^{\pm}_i} } \end{aligned}$$ for sparticles with $x_{X} =4 m_{X}^{2} / m_{h_j}^{2} \, (X = \tau, t, b, W, h^\pm , \tilde \tau_i, \tilde t_i, \tilde b_i , \tilde \chi_i^\pm)$. $$\label{f}
f \left( x \right) =
\begin{cases} \left[ \sin^{-1} \sqrt \frac{1}{x} \right]^2 & \;\; \mbox{for } x \ge 1 \\
-\frac{1}{4} \left[ \ln \left(\frac{1 + \sqrt{1 - x}}{1 - \sqrt{1 - x}} \right)
- i \pi\right]^2 & \;\; \mbox{for } x < 1
\end{cases}$$ The couplings entering into the loop functions for the non-SUSY particles are $$R_\tau = \frac{O_{j1}}{\cos\beta} \;\; , \;\; R_{t} = \frac{O_{j2}}{\sin\beta} \;\; , \;\; R_b = R_\tau$$ $$\begin{aligned}
R_{W} &=& O_{j2} \sin\beta + O_{j1} \cos\beta \\
R_{h^{\pm}} &=& \frac{3-2\sin^{2}\theta_{W}}{2\cos^{2}\theta_{W}} \sin\beta \cos\beta
\left( O_{j2} \cos\beta +O_{j1} \sin\beta \right) \nonumber \\
&&
+\frac{1-2\sin^{2}\theta_{W}}{2\cos^{2}\theta_{W}}\left( O_{j2} \sin^{3}\beta +O_{j1} \cos^{3}\beta \right)
\nonumber \\
&&
+\frac{2g_{2}^{2}}{g^{2}}Q'_{H_u} {}^2 O_{j2} \sin\beta
\cos^{2}\beta +\frac{2g_{2}^{2}}{g^{2}}Q'_{H_d} {}^{2} O_{j1} \sin^{2}\beta
\cos\beta \nonumber \\
&&
+\frac{2g_{2}^{2}}{g^{2}}Q'_{H_u}Q'_{H_d}
\left( O_{j2} \sin^{3}\beta +O_{j1}\cos^{3}\beta \right) \nonumber \\
&&
-\frac{2h_{s}^{2}}{g^{2}} \sin\beta \cos\beta
\left( O_{j2} \cos\beta + O_{j1} \sin\beta \right) \nonumber \\
&&
+ \left(\frac{h_{s}^{2} v_{s}}{gm_{W}}
+ \frac{g_{2}^{2}}{gm_{W}}Q'_{H_u}Q'_{S} v_s \cos^2\beta
+\frac{g_{2}^2}{gm_{W}}Q'_{H_d}Q'_{S} v_s \sin^2\beta \right. \nonumber \\
&& \left. \qquad
+ \; \frac{\sqrt{2}h_{s}A_{s}} {gm_{W}} \sin\beta \cos\beta \right) O_{j3} \end{aligned}$$ For the sfermions, we have the couplings $$\begin{aligned}
R_{h_j \tilde{f_{1}}\tilde{f_{1}}} &=& R^L_{\tilde f} \cos^{2}\theta_{\tilde f}
+ R^R_{\tilde f} \sin^{2} \theta_{\tilde f}+2 R^{RL}_{\tilde f} \sin\theta_{\tilde f} \cos\theta_{\tilde f}
\\
R_{h_j \tilde{f_{2}}\tilde{f_{2}}} &=& R^L_{\tilde f} \sin^{2}\theta_{\tilde f}
+ R^R_{\tilde f} \cos^{2}\theta_{\tilde f} - 2 R^{RL}_{\tilde f} \sin\theta_{\tilde f} \cos\theta_{\tilde f} \end{aligned}$$ where $\theta_{\tilde f}$ is the mixing angle between $\tilde f_L$ and $\tilde f_R$ to obtain the physical mass eigenstates $\tilde f_1$ and $\tilde f_2$. For the $Z\gamma$ case, we also need the off-diagonal term $$R_{h_j \tilde{f_{1}}\tilde{f_{2}}} = \left( R^R_{\tilde f} - R^L_{\tilde f}\right) \sin\theta_{\tilde f}\cos\theta_{\tilde f}
+R^{RL}_{\tilde f} \left(\cos^{2}\theta_{\tilde f}-\sin^{2}\theta_{\tilde f}\right) \; .$$ The expressions of $R^{L,R,RL}_{\tilde t , \tilde b}$ are given by $$\begin{aligned}
R^L_{\tilde t , \tilde b} &=& \frac{v m_{W}}{gm_{Z}^{2}} \Biggr [
\left(\frac{g^{2}}{2\cos^{2}\theta_{W}} \left( \sin^{2}\theta_{W} Q^{t,b} - T^{t,b}_3 \right) +
g_{2}^{2}Q'_{H_u}Q'_{Q_3} \right) O_{j2} \sin\beta +
\frac{2m_{t,b}^{2}}{v^2} R_{t,b} \nonumber\\
&& +
\left(-\frac{g^{2}}{2\cos^{2}\theta_{W}} \left( \sin^{2}\theta_{W} Q^{t,b} - T^{t,b}_3 \right) +
g_{2}^{2}Q'_{H_d}Q'_{Q_3}\right) O_{j1} \cos\beta \nonumber \\
&& + \; g_{2}^{2} \frac{v_s}{v} Q'_{S}Q'_{Q_3} O_{j3} \Biggr] \\
R^R_{\tilde t , \tilde b} &=&\frac{v m_{W}}{gm_{Z}^{2}}\Biggr[
\left(-\frac{g^{2}\sin^{2}\theta_{W}}{2\cos^{2}\theta_{W}}Q^{t,b}+g_{2}^{2}Q'_{H_u}Q'_{U_3^{\rm c},D_3^{\rm c}} \right)
O_{j2} \sin\beta +\frac{2m_{t,b}^{2}}{ v^2} R_{t,b} \nonumber\\
&& +
\left(\frac{g^{2}\sin^{2}\theta_{W}}{2\cos^{2}\theta_{W}}Q^{t,b}+g_{2}^{2}Q'_{H_d}Q'_{U_3^{\rm c},D_3^{\rm c}} \right)
O_{j1} \cos\beta +g_{2}^{2} \frac{v_s}{v} Q'_{S}Q'_{U_3^{\rm c},D_3^{\rm c}} O_{j3} \Biggr ] \\
R^{RL}_{\tilde t , \tilde b} &=&\frac{v m_{t,b}}{2m^{2}_{Z}}\left(\frac{g}{2m_{W}}A_{t,b}R_{t,b}
-h_{s}\left[\frac{gv_{s}}{2\sqrt{2}m_{W}}R'_{t,b}-\frac{1}{\sqrt{2}} R^{''}_{t,b} \right]\right) \end{aligned}$$ where we have defined $$R'_\tau = \frac{O_{j2}}{\cos\beta} \;\; , \;\; R'_{t} = \frac{O_{j1}}{\sin\beta} \;\; , \;\; R'_b = R'_\tau$$ $$R^{''}_\tau = O_{j3} \tan\beta \;\; , \;\; R^{''}_{t} = O_{j3} \cot\beta \;\; , \;\; R^{''}_b = R^{''}_\tau$$ The $R^{L,R,RL}_{\tilde \tau}$ can be obtained from the $R^{L,R,RL}_{\tilde b}$ by appropriate substitutions. For the chargino loop, we have $$R_{\tilde{\chi}^{\pm}_{i}} = 2 \left[\frac{1}{\sqrt{2}}V_{i1}U_{i2}O_{j1}
+\frac{1}{\sqrt{2}}V_{i2}U_{i1}O_{j2}
+\frac{h_{s}}{\sqrt{2}g}V_{i2}U_{i2}O_{j3} \right] \; ,$$ where $U$ and $V$ are the two unitary matrices that diagonalize the chargino mass matrix.
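For readers who wish to evaluate these expressions numerically, the following is a minimal Python sketch (not the code used for the scan in this paper) of the scalar function $f(x)$ in Eq. (\[f\]) and of the fermion and $W$-boson loop factors defined above. The masses and the choice $R_f = R_W = 1$ in the usage line are purely illustrative, SM-like inputs.

```python
import numpy as np

def f(x):
    """Scalar loop function of Eq. (f), with x = 4 m_X^2 / m_{h_j}^2."""
    if x >= 1.0:
        return np.arcsin(np.sqrt(1.0 / x)) ** 2
    s = np.sqrt(1.0 - x)
    return -0.25 * (np.log((1.0 + s) / (1.0 - s)) - 1j * np.pi) ** 2

def F_fermion(m_f, m_h, R_f):
    """F_f = -2 x_f [1 + (1 - x_f) f(x_f)] R_f  (f = tau, t, b)."""
    x = 4.0 * m_f ** 2 / m_h ** 2
    return -2.0 * x * (1.0 + (1.0 - x) * f(x)) * R_f

def F_W_boson(m_W, m_h, R_W):
    """F_W = [2 + 3 x_W + 3 x_W (2 - x_W) f(x_W)] R_W."""
    x = 4.0 * m_W ** 2 / m_h ** 2
    return (2.0 + 3.0 * x + 3.0 * x * (2.0 - x) * f(x)) * R_W

# Illustrative SM-like inputs (masses in GeV); colour and electric-charge
# factors are not part of F_f as defined in this appendix and are omitted here.
print(F_fermion(173.0, 125.0, 1.0), F_W_boson(80.4, 125.0, 1.0))
```

The opposite signs of the two printed contributions reflect the usual destructive interference between the top and $W$ loops.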
The partial decay width for $h_j \to Z \gamma$ is given by Eq.(\[hZgam\]). The loop functions for the non-SUSY particles are given by $$G_{f} = N^f_C \cdot R_{f} \cdot \frac{-2 Q^f \left[ T^f_3 -2 Q^f \sin^{2}\theta_{W}\right]}{\sin\theta_{W}\cos\theta_{W}}
\left[I_{1}(\tau _{f},\lambda _{f})-I_{2}(\tau _{f},\lambda _{f})\right] \qquad (f = \tau , t , b)$$ $$\begin{aligned}
G_{W} &=&-R_{W} \cot\theta_{W}\biggl(4\left(3-\tan^{2}\theta_{W}\right)I_{2}(\tau _{W},\lambda _{W}) \biggr. \nonumber \\
&& \left. +\left[ \left(1+\frac{2}{\tau _{W}}\right)\tan^{2}\theta_{W}
-\left(5+\frac{2}{\tau _{W}}\right)\right]I_{1}(\tau _{W},\lambda _{W}) \right)
\nonumber \\ \\
G_{h^{\pm}} &=&R_{h^{\pm}}\frac{1-2\sin^{2}\theta_{W}}{\sin\theta_{W}\cos\theta_{W}}I_{1}(\tau _{h^{\pm}},\lambda _{h^{\pm}})\frac{m_{W}^{2}}{m_{h^{\pm}}^{2}} \end{aligned}$$ Here, we define $$\tau_X = \frac{4m_X^2}{m_{h_j}^2} \; \; , \; \; \lambda_X = \frac{4 m_X^2}{m_Z^2}
\qquad \qquad (X = \tau, t, b, W, h^\pm) \; .$$ The definitions of $ I_{1}(\tau ,\lambda)$ and $I_{2}(\tau,\lambda)$ are the same as given in [@hunter]. $$\begin{aligned}
I_1(\tau, \lambda) & = &
\frac{\tau \lambda}{2\left( \tau - \lambda\right)}
+ \frac{\tau^2 \lambda^2}{2\left( \tau - \lambda\right)^2}\left[ f(\tau) - f(\lambda) \right]
+ \frac{\tau^2 \lambda}{\left( \tau - \lambda\right)^2}\left[ g(\tau) - g(\lambda) \right]
\\
I_2(\tau, \lambda) & = & - \frac{\tau \lambda}{2\left( \tau - \lambda\right)} \left[ f(\tau) - f(\lambda) \right]\end{aligned}$$ where $f(x)$ is given in Eq.(\[f\]) and $g(x)$ is defined as $$\label{g}
g \left( x \right) =
\begin{cases} \sqrt{x - 1} \left[ \sin^{-1} \sqrt \frac{1}{x} \right] & \;\; \mbox{for } x \ge 1 \\
\frac{1}{2} \sqrt{1 - x} \left[ \ln \left(\frac{1 + \sqrt{1 - x}}{1 - \sqrt{1 - x}} \right)
- i \pi\right] & \;\; \mbox{for } x < 1
\end{cases}$$ For the sparticles, we have $$\begin{aligned}
G_{\tilde f} &=& 8 \cdot N^f_C \cdot Q^f \cdot m_{Z}^{2}
\sum_{k,l=1,2}R_{h_j \tilde{f_{l}}\tilde{f_{k}}}R_{Z\tilde{f_{k}}\tilde{f_{l}}}
C_{2}(m_{\tilde{f_{l}}},m_{\tilde{f_{k}}},m_{\tilde{f_{k}}})
\\
G_{\tilde \chi ^{\pm}} &=&\sum_{k,l=1,2}\frac{m_{Z}m_{\tilde \chi ^{+}_{l}}}{\sin\theta_{W}}
f\left(m_{\tilde \chi ^{+}_{l}},m_{\tilde \chi ^{+}_{k}},m_{\tilde \chi ^{+}_{k}}\right)
\sum_{m,n=L,R}R^{m}_{Z\tilde \chi ^{+}_{l} \tilde \chi ^{-}_{k}}R^{n}_{h_j \tilde \chi ^{+}_{k} \tilde \chi ^{-}_{l}} \end{aligned}$$ The definitions of $C_{2}(m_{1},m_{2},m_{2})$ and $ f(m_{1},m_{2},m_{2})$ can be found in [@Djouadi:1996yq]. The couplings for the sfermions are $$\begin{aligned}
R_{Z\tilde{f_{1}}\tilde{f_{1}}} &=&\frac{1}{\sin\theta_{W}\cos\theta_{W}}
\left[ \left( T^f_3 - Q^f \sin^{2}\theta_{W} \right)\cos^{2}\theta_{\tilde f}
-Q^f\sin^{2}\theta_{W}\sin^{2}\theta_{\tilde f}\right]
\\
R_{Z\tilde{f_{2}}\tilde{f_{2}}} &=&\frac{1}{\sin\theta_{W}\cos\theta_{W}}
\left[ - Q^f \sin^{2}\theta_{W} \cos^{2}\theta_{\tilde f}
+\! \left( T^f_3 - Q^f \sin^{2}\theta_{W} \right)
\sin^{2}\theta_{\tilde f} \right]
\\
R_{Z\tilde{f_{1}}\tilde{f_{2}}} &=&\frac{-T^f_3}{\sin\theta_{W}\cos\theta_{W}}\sin\theta_{\tilde f}
\cos\theta_{\tilde f} \end{aligned}$$ For the charginos, the couplings are $$\begin{aligned}
R^{L}_{Z\tilde\chi ^{+}_{l}\tilde\chi ^{-}_{k}} &=&-\left(V_{l1}V_{k1}+\frac{1}{2}V_{l2}V_{k2}-\delta_{lk}\sin^{2}\theta_{W}\right)
\\
R^{R}_{Z\tilde \chi ^{+}_{l}\tilde\chi ^{-}_{k}} &=&-\left(U_{l1}U_{k1}+\frac{1}{2}U_{l2}U_{k2}-\delta_{lk}\sin^{2}\theta_{W}\right)
\\
R^{L}_{h_j \tilde\chi ^{+}_{i}\tilde\chi ^{-}_{l}} &=&\frac{1}{\sqrt{2}}\left[V_{l1}U_{i2}O_{j 1}+V_{l2}U_{i1}O_{j 2}
+\frac{h_{s}}{g}V_{l2}U_{i2}O_{j 3}\right]
\\
R^{R}_{h_j \tilde\chi ^{+}_{i}\tilde\chi ^{-}_{l}} &=&\frac{1}{\sqrt{2}}\left[V_{i1}U_{l2}O_{j 1}+V_{i2}U_{l1}O_{j 2}
+\frac{h_{s}}{g}V_{i2}U_{l2}O_{j 3}\right]\end{aligned}$$ The partial decay width for $h_j \to gg$ is given by Eq.(\[hglue\]). The loop functions $F_f$ and $F_{\tilde f}$ for the colored particles are the same as in the case of $h_j \to \gamma \gamma$ given by Eqs.(\[Ff\]) and (\[Fsf\]) respectively.
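The $h_j \to Z\gamma$ building blocks can be sketched in the same way. The snippet below is again only a sketch with illustrative masses; it implements $f(x)$, $g(x)$ of Eq. (\[g\]), and the integrals $I_1$ and $I_2$ exactly as written above.

```python
import numpy as np

def f(x):
    """Same scalar function as in the gamma-gamma sketch above (Eq. (f))."""
    if x >= 1.0:
        return np.arcsin(np.sqrt(1.0 / x)) ** 2
    s = np.sqrt(1.0 - x)
    return -0.25 * (np.log((1.0 + s) / (1.0 - s)) - 1j * np.pi) ** 2

def g(x):
    """Scalar loop function of Eq. (g)."""
    if x >= 1.0:
        return np.sqrt(x - 1.0) * np.arcsin(np.sqrt(1.0 / x))
    s = np.sqrt(1.0 - x)
    return 0.5 * s * (np.log((1.0 + s) / (1.0 - s)) - 1j * np.pi)

def I1(tau, lam):
    """I_1(tau, lambda) entering G_f, G_W and G_{h^+-}."""
    d = tau - lam
    return (tau * lam / (2.0 * d)
            + tau ** 2 * lam ** 2 / (2.0 * d ** 2) * (f(tau) - f(lam))
            + tau ** 2 * lam / d ** 2 * (g(tau) - g(lam)))

def I2(tau, lam):
    """I_2(tau, lambda) entering G_f and G_W."""
    return -tau * lam / (2.0 * (tau - lam)) * (f(tau) - f(lam))

# Illustrative arguments for the top-quark loop with m_h = 125 GeV:
m_t, m_h, m_Z = 173.0, 125.0, 91.19
tau_t, lam_t = 4 * m_t ** 2 / m_h ** 2, 4 * m_t ** 2 / m_Z ** 2
print(I1(tau_t, lam_t), I2(tau_t, lam_t))
```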
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported in part by the National Science Council of Taiwan under Grants No. 99-2112-M-007-005-MY3 and No. 101-2112-M-001-005-MY3, and by the WCU program through the KOSEF funded by the MEST (R31-2008-000-10057-0). TCY is grateful to the National Center for Theoretical Sciences of Taiwan for its warm hospitality.
[99]{}
S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Lett. B [**716**]{}, 30 (2012) \[arXiv:1207.7235 \[hep-ex\]\]. G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Lett. B [**716**]{}, 1 (2012) \[arXiv:1207.7214 \[hep-ex\]\]. See e.g., H. Baer, V. Barger and A. Mustafayev, Phys. Rev. D [**85**]{}, 075010 (2012) \[arXiv:1112.3017 \[hep-ph\]\]; S. Heinemeyer, O. Stal and G. Weiglein, Phys. Lett. B [**710**]{}, 201 (2012) \[arXiv:1112.3026 \[hep-ph\]\]; A. Arbey, M. Battaglia, A. Djouadi, F. Mahmoudi and J. Quevillon, Phys. Lett. B [**708**]{}, 162 (2012) \[arXiv:1112.3028 \[hep-ph\]\]; P. Draper, P. Meade, M. Reece and D. Shih, Phys. Rev. D [**85**]{}, 095007 (2012) \[arXiv:1112.3068 \[hep-ph\]\]; S. Akula, B. Altunkaynak, D. Feldman, P. Nath and G. Peim, Phys. Rev. D [**85**]{}, 075001 (2012) \[arXiv:1112.3645 \[hep-ph\]\]; M. Kadastik, K. Kannike, A. Racioppi and M. Raidal, JHEP [**1205**]{}, 061 (2012) \[arXiv:1112.3647 \[hep-ph\]\]; J. Cao, Z. Heng, D. Li and J. M. Yang, Phys. Lett. B [**710**]{}, 665 (2012) \[arXiv:1112.4391 \[hep-ph\]\]; J. L. Feng and D. Sanford, Phys. Rev. D [**86**]{}, 055015 (2012) \[arXiv:1205.2372 \[hep-ph\]\]. M. Carena, S. Gori, N. R. Shah, C. E. M. Wagner and L. -T. Wang, JHEP [**1207**]{}, 175 (2012) \[arXiv:1205.5842 \[hep-ph\]\]; M. Carena, S. Gori, N. R. Shah and C. E. M. Wagner, JHEP [**1203**]{}, 014 (2012) \[arXiv:1112.3336 \[hep-ph\]\]. A. Drozd, B. Grzadkowski, J. F. Gunion and Y. Jiang, arXiv:1211.3580 \[hep-ph\]. N. D. Christensen, T. Han and S. Su, Phys. Rev. D [**85**]{}, 115018 (2012) \[arXiv:1203.3207 \[hep-ph\]\]; K. Hagiwara, J. S. Lee and J. Nakamura, JHEP [**1210**]{}, 002 (2012) \[arXiv:1207.0802 \[hep-ph\]\]. V. Barger, M. Ishida and W. -Y. Keung, arXiv:1207.0779 \[hep-ph\]. U. Ellwanger, JHEP [**1203**]{}, 044 (2012) \[arXiv:1112.3548 \[hep-ph\]\]; U. Ellwanger and C. Hugonie, Adv. High Energy Phys. [**2012**]{}, 625389 (2012) \[arXiv:1203.5048 \[hep-ph\]\]. J. F. Gunion, Y. Jiang and S. Kraml, Phys. Lett. B [**710**]{}, 454 (2012) \[arXiv:1201.0982 \[hep-ph\]\]; S. F. King, M. Muhlleitner and R. Nevzorov, Nucl. Phys. B [**860**]{}, 207 (2012) \[arXiv:1201.2671 \[hep-ph\]\]; D. A. Vasquez, G. Belanger, C. Boehm, J. Da Silva, P. Richardson and C. Wymant, Phys. Rev. D [**86**]{}, 035023 (2012) \[arXiv:1203.3446 \[hep-ph\]\]; J. -J. Cao, Z. -X. Heng, J. M. Yang, Y. -M. Zhang and J. -Y. Zhu, JHEP [**1203**]{}, 086 (2012) \[arXiv:1202.5821 \[hep-ph\]\]; K. Kowalska, S. Munir, L. Roszkowski, E. M. Sessolo, S. Trojanowski and Y. -L. S. Tsai, arXiv:1211.1693 \[hep-ph\]. K. Schmidt-Hoberg and F. Staub, JHEP [**1210**]{}, 195 (2012) \[arXiv:1208.1683 \[hep-ph\]\].
For a review, see P. Langacker, Rev. Mod. Phys. [**81**]{}, 1199-1228 (2009) \[arXiv:0801.1345 \[hep-ph\]\].
C. -F. Chang, K. Cheung and T. -C. Yuan, JHEP [**1109**]{} 058 (2011) \[arXiv:1107.1133 \[hep-ph\]\].
J. Kang and P. Langacker, Phys. Rev. [**D71**]{}, 035014 (2005) \[hep-ph/0412190\]; M. Baumgart, T. Hartman, C. Kilic and L. -T. Wang, JHEP [**0711**]{}, 084 (2007) \[hep-ph/0608172\].
C. -F. Chang, K. Cheung, Y. -C. Lin and T. -C. Yuan, JHEP [**1206**]{}, 128 (2012) \[arXiv:1202.0054 \[hep-ph\]\]. H. An, T. Liu and L. -T. Wang, arXiv:1207.2473 \[hep-ph\]. A. Alves, A. G. Dias, E. R. Barreto, C. A. de S.Pires, F. S. Queiroz and P. S. R. da Silva, arXiv:1207.3699 \[hep-ph\].
V. Barger, P. Langacker and H. -S. Lee, Phys. Lett. [**B630**]{}, 85-99 (2005) \[hep-ph/0508027\]; V. Barger, P. Langacker and G. Shaughnessy, Phys. Lett. [**B644**]{}, 361-369 (2007) \[hep-ph/0609068\].
V. Barger, P. Langacker and G. Shaughnessy, Phys. Rev. [**D75**]{}, 055013 (2007) \[hep-ph/0611239\]; V. Barger, P. Langacker, H. -S. Lee and G. Shaughnessy, Phys. Rev. [**D73**]{}, 115010 (2006) \[hep-ph/0603247\].
S. Y. Choi, H. E. Haber, J. Kalinowski and P. M. Zerwas, Nucl. Phys. B [**778**]{}, 85 (2007) \[hep-ph/0612218\].
S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Lett. B [**704**]{}, 123 (2011) \[arXiv:1107.4771 \[hep-ex\]\]. T. Aaltonen [*et al.*]{} \[CDF Collaboration\], Phys. Rev. D [**79**]{}, 112002 (2009) \[arXiv:0812.4036 \[hep-ex\]\]. K. Cheung and J. Song, Phys. Rev. Lett. [**106**]{}, 211803 (2011) \[arXiv:1104.1375 \[hep-ph\]\].
K. Nakamura [*et al.*]{} (Particle Data Group), J. Phys. G [**37**]{}, No 7A, 075021 (2010).
C. -W. Chiang and K. Yagyu, arXiv:1207.1065 \[hep-ph\]. K. Cheung, J. S. Lee and P. -Y. Tseng, arXiv:1302.3794 \[hep-ph\]. J. F. Gunion, H. E. Haber, G. L. Kane and S. Dawson, “The Higgs Hunter’s Guide,” published in Front. Phys. [**80**]{}, 1 (2000) page 29. A. Djouadi, V. Driesen, W. Hollik and A. Kraft, Eur. Phys. J. C [**1**]{}, 163 (1998) \[hep-ph/9701342\].
[^1]: A similar consideration was also analyzed for the two-Higgs-doublet models in Ref.[@Drozd:2012vf].
[^2]: Such a $Z'$ boson is still subject to dijet resonance searches. The CMS Collaboration has published a search for dijet resonances [@cms-dijet], one of the benchmark models being a $Z'$ with SM couplings. The production cross-section curve of that $Z'$ barely touches the upper-limit curve and thus receives no constraint. The $Z'$ boson in our case has a coupling smaller by $g_2/g_1 \approx 0.62$, so the production cross section is reduced by $(0.62)^2 \approx 0.38$. The same holds for the dijet resonance search in the mass range $260-1400$ GeV by the CDF Collaboration [@cdf-dijet], which ruled out part of this $Z'$ mass range when the $Z'$ has SM couplings; again, with the production cross section reduced by $0.38$, the constraint is moot. For an even lower dijet resonance mass range the relevant data come from UA2; however, a $Z'$ with coupling $g_2/g_1 = 0.62$ has been shown to be consistent with the UA2 data in [@song].
|
---
abstract: 'A non-parametric smoothing method is presented that reduces noise in multi-wavelength imaging data sets. Using Principal Component Analysis (hereafter PCA) to associate pixels according to their $ugriz$-band colors, smoothing is done over pixels with a similar location in PCA space. This method smoothes over pixels with similar color, which reduces the amount of mixing of different colors within the smoothing region. The method is tested using a mock galaxy with signal-to-noise levels and color characteristics of SDSS data. When comparing this method to smoothing methods using a fixed radial profile or an adaptive radial profile, the $\chi^2$-like statistic for the method presented here is smaller. The method shows only a small dependence on its input parameters. Running this method on the mock galaxy and on SDSS data, and fitting theoretical stellar population models to the smoothed data, shows that the method reduces scatter in the best-fit stellar population parameters when compared to cases where no smoothing is done. For an area centered on the star forming region of the mock galaxy, the median and standard deviation of the best-fit age for the PCA-smoothed data are 7 Myr ($\pm$ 3 Myr), as compared to 10 Myr ($\pm$ 1 Myr) for a simple radial average, where the noise-free true value is 7.5 Myr ($\pm$ 3.7 Myr).'
author:
- James Pizagno
title: 'Preserving Structure in Multi-wavelength Images of Extended Objects'
---
Introduction
============
Galaxy formation theories predict that baryons cool in dark matter halos in such a way as to provide connections between galaxy observables and dark matter properties [@whi78; @col89]. For example, the Tully-Fisher relation [@tul77] shows the connection between galaxy luminosity and circular velocity, where the circular velocity depends on the dark matter and baryonic mass profiles. A key tool in studying galaxy evolution is the semi-analytic model approach to relating the observable properties of galaxies to the underlying formation physics [@eis96; @mo98]. Semi-analytic models of galaxy formation predict observables such as the luminosity function, radii, rotation curves, clustering statistics, colors, and the stellar mass of galaxies. [@gne07] and [@dut07] have shown that semi-analytic models can predict the joint distribution of galaxy observables, with model parameters that depend on the baryonic mass profile, which includes both the stellar mass and the gas mass. Budgeting the baryonic mass between stellar mass and gas mass is an important tunable ingredient in the modeling [@mcg05]. One major difficulty lies in converting the inherently noisy multi-wavelength imaging data into a stellar mass.
Large surveys produce multi-wavelength maps of galaxies, which are used to measure the baryonic properties of the galaxy population, and they yield data sets comparable in size to modern simulations [@del06; @bow06]. Since models of galaxy formation predict stellar mass and surface density, the multi-wavelength maps of galaxies must be converted into stellar populations using synthetic stellar population models [@mar05; @bru03]. In order to analyze the stellar populations in multi-wavelength images of galaxies, noise-reduction techniques must be employed. Large surveys are key to this task because they provide data uniformity over a large area of the sky. SDSS provides $ugriz$-band data over 25% of the sky with a photometric calibration accuracy good to 2% [@ive04]. The existence of these noisy, but uniform and large, data sets, along with the need for stellar population modeling, motivates a smoothing technique for multi-wavelength data sets.
A study similar to this one is ([@lan07]; hereafter L07), which studies the pixel color-magnitude relation for nearby galaxies. L07 noted the distinct differences between the pixel color-magnitude diagrams (pCMDs) of different Hubble types, where early-type galaxies have redder pCMDs. L07 also noted how morphological features were related to distinct features in the pCMD. Scatter in a pCMD is caused by extinction, showing the need for accurate ISM extinction models when modeling observed colors. The study by L07 shows how pixel maps of galaxies are correlated with galaxy type, and might reveal hidden features. L07 does not employ noise-reduction techniques, such as the ones presented in this paper, which may affect the structure of the pixel diagrams.
Another study similar to this one is [@wel08], which used the pixel-z technique, combining stellar population synthesis models with multi-wavelength pixel photometry of galaxies to study the stellar population content of SDSS galaxies. [@wel08] showed how the star formation rate varies with local galaxy density and with position in a galaxy, and studied the mean star formation rate. The pixel-z method does not include any smoothing techniques. The present work is complementary to that study, providing a smoothing technique that minimizes the effect that data noise has on the best-fit stellar population parameters.
Adaptsmooth [@zib09] is a multi-wavelength smoothing algorithm that is similar to this work. Adaptsmooth uses a circularly symmetric radial median to reduce noise. The radius of the circle is defined so that the median-smoothed data have a signal-to-noise of 20. All of the imaging data (i.e. the $ugriz$ bands) are smoothed to the same radius, which is determined from the band requiring the largest radius, usually the $u$ band or $z$ band in SDSS data as they have the lowest signal-to-noise. Adaptsmooth is adaptive, in the sense that the radius varies with position in the galaxy as the signal-to-noise varies. However, the radial median filter is still azimuthally symmetric. This means that blue star formation regions can get median-filtered together with the redder disk. The PCA-smoothing method presented in this paper median-filters over pixels that are associated in PCA space according to their color.
We focus on SDSS because it is a large and uniform data-set. SDSS has coverage over 25% of the sky, where the imaging data covers a large range of optical wavelengths, and does so in a uniform manner. The distribution of galaxy properties has been well studied for SDSS data sets [@bla03]. Many semi-analytic models have been constrained using SDSS data [@gne07; @li07].
In the remainder of this paper, the technique is described in Section 2, a comparison to other methods is made in Section 3, case studies are presented in Section 4, and conclusions are presented in Section 5.
Technique
=========
The goal of this method is to smooth data without mixing different colors. For example, a simple radial smoothing method may mix a red bulge with a blue star forming region, which will result in poorly fit stellar population models. The method must be automated so that it can be applied to large data sets, such as the SDSS, and not have input parameters that vary within the data-set. Since the noise characteristics may vary among different data sets (i.e. SDSS vs. 2MASS), the method must also have tunable parameters. It is shown below that an advantage of this method is that it is not overly sensitive to the input parameters.
The method presented here uses Principal Component Analysis (hereafter PCA) in order to quantitatively associate pixels according to their color. PCA can be used to reduce the dimensionality of data, illuminate hidden correlations, quantify levels of proportionality, and rotate data into new axes. PCA has been used in many different areas of data analysis, and has been described in detail in other papers using it as an analysis tool for astronomical spectra [@yip04; @con95]. PCA is used here as a way to associate pixels according to their color and spatial location. This method transforms the multi-wavelength data, from color as a function of position in the galaxy to eigenweights of basis colors as a function of position in PCA space. Poisson noise is reduced by averaging over pixels having similar PCA weights, or similar locations in PCA space.
Mock Galaxy
-----------
For several reasons, the method presented here is tested using a mock galaxy. First, the mock galaxy can be assigned a wide range of colors, representing those seen in real data sets. The colors used here resemble the colors of recent star formation for the HII-like regions, old stellar populations for the Bulge, intermediate populations for the Disk, and dust is applied to different spatial regions. Secondly, the effects of noise on different colors can be estimated by adding Poisson noise due to the source and background. Thirdly, the mock galaxy is assigned spatially variable colors with realistic spatial structures which may cause oversimplified smoothing methods to fail. For example, real galaxies tend to have a high signal-to-noise bulge with old reddish color, whereas the disk contains a fainter intermediate age color with a moderate-to-low signal-to-noise value, and there are asymmetric HII regions with young stellar populations that have high signal-to-noise values. Real galaxies have blue high SNR HII regions that are only a few FWHMs from a faint red dust lane. Oversimplified mapping techniques may average these colors together. Fourthly, using a mock galaxy provides the “truth”. Knowing the true noise-free colors will allow a figure of merit to be calculated that can be used to test the different techniques discussed in this paper.
The goal is to create a mock galaxy with realistic spatial variations of the underlying color and with SNR values similar to SDSS data. The components of the mock galaxy are a bulge, disk, spiral arms, star formation regions, and dust. Using GALFIT [@pen02], 5 different images are created, representing the $ugriz$ bands respectively. Each component has the color of a specific stellar population model. Star formation is spatially modeled as an arc of point sources along the spiral arms. A PSF, as a function of wavelength, was measured from real SDSS data and added to the GALFIT input file. The wavelength-dependent extinction is applied to the $ugriz$-band mock images using the extinction curve of [@car89] and assuming a selective extinction value of $R_V=3.1$. The PSF is then measured using stars at the edge of the image, which were added by GALFIT as point sources using the user-provided PSF. The resulting mock galaxy includes spatially varying colors, with dusty regions generated from low-pass filtering of a real galaxy image.
Typical SDSS sky values for each band are added to the $ugriz$ images of the mock galaxy. This is required to reproduce the SNR seen in SDSS data. Poisson noise, including source and sky, is added to the images of the mock galaxy. The resulting $g$-band and $r$-band images are shown in Figure \[fig:grmock\]. Figure \[fig:grmock\] shows the bulge, spiral arms, star formation regions, and dust regions. These features qualitatively represent real SDSS data, in the sense that the color varies with position and the features are asymmetric. The mock galaxy matches the observed properties of SDSS galaxies with blue star formation regions and a red bulge, with a color of $g-r=1.2$ for the bulge, $g-r=0.37$ for the exponential disk, $g-r=-0.63$ for the star formation regions, and the patches of dust produce regions that are typically $\Delta (g-r) \sim 0.12$ redder than the surrounding dust-free regions. Adding sky and source noise results in total signal-to-noise values that vary from 50.0 in the center of the bulge to 5-7 at the disk half-light radius, to nearly 0 where the outer parts of the disk fade into the sky background. For example, the bulge of the mock galaxy has an $r$-band flux $=$ 1500 DN on average, where the typical HII $r$-band flux $=$ 150-180, which is comparable to the galaxies from SDSS in Section 4.
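As an illustration of this noise model, the following Python/NumPy sketch adds a flat sky level and source-plus-sky Poisson noise to a band image; the sky level and gain used in the example are placeholders rather than the actual SDSS values adopted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_sky_and_noise(image_dn, sky_dn, gain):
    """Add a flat sky, draw Poisson noise on (source + sky) in electrons,
    convert back to DN, and return a sky-subtracted noisy image together
    with a per-pixel 1-sigma noise estimate."""
    electrons = (image_dn + sky_dn) * gain
    noisy_dn = rng.poisson(electrons) / gain
    sigma_dn = np.sqrt(electrons) / gain
    return noisy_dn - sky_dn, sigma_dn

# Placeholder numbers: a 1500 DN bulge pixel and a 160 DN HII-region pixel,
# a 120 DN sky level, and a gain of 4.7 e-/DN.
flux, sigma = add_sky_and_noise(np.array([1500.0, 160.0]), 120.0, 4.7)
print(flux, flux / sigma)   # noisy fluxes (DN) and their approximate SNR
```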
In the analysis steps below, the mock data is treated as if it were real SDSS data. PSFs are measured from point sources in the image. The PSF of each image is matched to the $u$-band PSF, because it is the broadest PSF. The background sky was measured from the corner of the images and subtracted from the data. Before this method can be applied, the data must have PSFs matched, all instrument artifacts removed, and the galaxy identified.
PCA Application
---------------
The method proposed here uses the results of PCA to determine a quantitative relationship between the colors of different pixels, which are associated during the smoothing process. It is assumed that the flux at each pixel is a linear combination of a set of weighted basis spectra. The normalized flux at pixel (x,y) can then be written as: $$F_{\lambda} (x,y) = \sum_{i=1}^{5} a_{i}(x,y) e_{\lambda,i}$$ where the sum runs over the 5 basis spectra (one per SDSS band), $x$ and $y$ give the spatial position in the galaxy, $\lambda$ denotes the band (i.e. $ugriz$), $a_i(x,y)$ is an eigenweight which varies as a function of position in the galaxy, and $e_{\lambda,i}$ is the $i$th basis spectrum at band $\lambda$.
The covariance matrix method is used to measure the eigenvectors and eigenweights. The data are first normalized to the $r$-band. All pixels within 2 disk scale lengths and having a SNR greater than a minimum value (discussed later) are included in a data matrix. Using signal-to-noise values lower than this includes pixels heavily influenced by background sky colors. The covariance matrix of this data matrix is calculated, and then the eigenvectors and eigenvalues of this covariance matrix are determined. This is carried out using Python procedures in the NUMPY.LINALG library, where the eigenvectors are solved using the LAPACK routines dgeev and zgeev [^1].
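A minimal NumPy sketch of this step is given below. The array layout, the band ordering $u,g,r,i,z$, and the variable names are illustrative assumptions rather than a description of the actual code used in this work.

```python
import numpy as np

def pca_basis(images, mask):
    """images: (5, ny, nx) array of ugriz fluxes (assumed u, g, r, i, z order);
    mask: boolean (ny, nx) array selecting pixels within 2 disk scale lengths
    and above the minimum SNR.  Returns the eigenvalues and eigenvectors
    (columns) of the 5x5 covariance matrix and the eigenweights a_i of every
    selected pixel."""
    data = images[:, mask].astype(float)     # shape (5, n_pix)
    data = data / data[2]                    # normalize each pixel to the r band
    cov = np.cov(data)                       # 5 x 5 covariance matrix
    evals, evecs = np.linalg.eig(cov)        # LAPACK dgeev under the hood
    order = np.argsort(evals.real)[::-1]     # sort by decreasing variance
    evals, evecs = evals[order].real, evecs[:, order].real
    weights = evecs.T @ data                 # eigenweights for each pixel
    return evals, evecs, weights

# Usage (hypothetical inputs):
# evals, evecs, w = pca_basis(ugriz_stack, good_pixels)
```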
Figure \[fig:PCAanalysis\] shows the location of pixels within a subsection of the mock galaxy image. The figure color-coding is made according to spatial location in the galaxy. Pixels in similar areas of the left-hand panel of Figure \[fig:PCAanalysis\] have similar colors. This method divides the PCA-space into evenly spaced angular bins. Angular bins are chosen, as opposed to a linear interpolation, to follow the observed trend seen in PCA-space. A small variation in color can result in a large variation in best-fit stellar population parameters. For example, a ($g$-$r$) color change of 0.3 can cause an estimate of the mass-to-light ratio to change by a factor of 3 (Figure 6 of [@bel03]). Therefore, the number of bins is chosen to have a standard deviation in $g$-$r$ color that will minimize the scatter in best-fit parameters. The number of bins used in smoothing is a free parameter, which can be adjusted by the user. The implementation here uses 10 bins because using too few bins will produce averaging over different colors, whereas using too many bins will have bins that are so narrow that almost no averaging is done over pixels that are in similar areas of both PCA-space and spatial regions of the galaxy. For example, using 5 bins breaks the PCA-space up so broadly that pixels in the same bin have a broad distribution with a standard deviation of $\delta (g-r) = 0.8$, whereas using 10 bins has a standard deviation of typically $\delta (g-r) = 0.3$. To enhance the SNR, smoothing is done over pixels within a range of angular bins, where the range of bins is inversely proportional to that pixel’s total signal-to-noise ratio. More specifically, for a pixel with total signal-to-noise $SNR(x,y)$, all pixels within the integer-value of $SNRenhanced/SNR(x,y)$ angular bins are median filtered together. For example, using an input parameter of $SNRenhanced=70$ for a pixel with a $SNR(x,y)=30.0$, median filters within $\pm$2 ($=int(70/30)$) bins in the left-hand panel of Figure \[fig:PCAanalysis\]. The parameter $SNRenhanced$ is a constant for a given data set, and will be determined using the mock galaxy for the SDSS data set described here. This formulation has two effects. First, pixels with lower signal-to-noise values are averaged over more pixels. Secondly, the averaging is over pixels with similar colors in PCA space (Figure \[fig:PCAanalysis\]).
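The bin-selection rule just described can be made concrete with the following sketch. The use of the first two eigenweights to define the angle in PCA space, and the handling of pixels outside the $[SNRmin, SNRmax]$ range, are assumptions made here for illustration; only the $\pm\, int(SNRenhanced/SNR)$ rule itself is taken from the text.

```python
import numpy as np

N_BINS = 10   # evenly spaced angular bins in PCA space (see text)

def angular_bins(w1, w2):
    """Assign each pixel an angular bin from the angle of its first two
    eigenweights (an assumption about how Fig. [fig:PCAanalysis] is built)."""
    angle = np.arctan2(w2, w1)                          # in (-pi, pi]
    return (((angle + np.pi) / (2 * np.pi)) * N_BINS).astype(int) % N_BINS

def pca_smooth_pixel(x, y, images, bins, snr, radius, snr_enh, snr_min, snr_max):
    """Median-filter the 5-band flux at (x, y) over pixels lying within
    `radius` pixels spatially and within +/- int(snr_enh / snr[y, x])
    angular bins in PCA space."""
    if not (snr_min <= snr[y, x] <= snr_max):
        return images[:, y, x]                          # leave such pixels untouched
    dbin = int(snr_enh / snr[y, x])
    ny, nx = snr.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    near = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    sep = np.abs(bins - bins[y, x])
    sep = np.minimum(sep, N_BINS - sep)                 # circular bin distance
    use = near & (sep <= dbin) & (snr >= snr_min)
    return np.median(images[:, use], axis=1)
```

For example, with $SNRenhanced=60$ and a pixel SNR of 30, the median is taken over spatial neighbours lying within $\pm$2 angular bins.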
A figure of merit ($FOM$) is used to quantitatively determine the best smoothing technique. It should measure the difference between the truth and the smoothed data, consider all wavelengths, and weight each term by the inverse of its noise variance. The $FOM$ is defined as: $$FOM(P,N) = \sum_{x,y}^N \sum_{\lambda=1}^5 \frac{(Truth_{\lambda}(x,y) - Smooth_{\lambda}(x,y|P))^2}{(\sigma_{\lambda}(x,y))^2},$$ where $Truth_{\lambda}(x,y)$ is the true flux at band $\lambda$ at spatial position $(x,y)$, $Smooth_{\lambda}(x,y|P)$ is the smoothed flux at band $\lambda$ at spatial position $(x,y)$, $P$ is the list of free parameters ($SNRmax$, $SNRmin$, $Radius$, $SNRenhanced$), $\sigma_{\lambda}$ is the noise in band $\lambda$, and $N$ is the number of pixels. $SNRmax$ is the maximum SNR of the pixels to which the method is applied. $SNRmin$ is the minimum SNR over which smoothing is applied. $Radius$ gives the size of the circle for pixels that may be included in the mean. $SNRenhanced$ is a constant controlling the range of pixels in PCA space that are included in the smoothing. The free parameters are then the ones mentioned above in $P$, along with the region within which the PCA eigenvectors are measured and the number of PCA bins. The best-fit parameters for the SDSS data set are determined using a grid-search method, which finds the parameters $P$ that provide the minimum $FOM$. A lower $FOM$ indicates a better estimate of the true colors.
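For completeness, the $FOM$ and the grid search over $P$ can be written compactly as follows; `smooth_with` and `parameter_grid` are hypothetical stand-ins for the smoothing routine and the parameter grid described above.

```python
import numpy as np

def figure_of_merit(truth, smooth, sigma, mask):
    """Sum over the masked pixels and the 5 bands of
    (Truth - Smooth)^2 / sigma^2; lower is better."""
    resid = (truth[:, mask] - smooth[:, mask]) / sigma[:, mask]
    return np.sum(resid ** 2)

# Hypothetical grid search over P = (SNRmax, SNRmin, Radius, SNRenhanced):
# best_fom, best_P = min(
#     (figure_of_merit(truth, smooth_with(P), sigma, mask), P) for P in parameter_grid
# )
```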
Figure \[fig:PCAsmoothed\] shows the region over which this experiment is run. The range of parameters searched was $6<SNRmax<200$, $0<Radius<$ size-of-image, $6<SNRenhanced<600$, and $0<SNRmin<9$, and results are shown below for most of that range. The bottom panels of Figure \[fig:PCAsmoothed\] show the PCA map and smoothed PCA image. The bottom left panel shows the combination of location of each pixel in the angular bin seen in Figure \[fig:PCAanalysis\] and which pixels are included in smoothing. The HII-like regions and bulge are clearly in different locations in PCA space, as can be seen by the fact that they have different gray scales (angular bins). The bottom right panel shows the image smoothed by pixels associated in PCA space (PCA-smoothing). The contrast in the smoothed image resembles the noise-free image (top left panel).
Analysis
--------
Figure \[fig:IMSTATall\] shows the $FOM$ versus the free parameters for the region in Figure \[fig:PCAsmoothed\]. The lines show how the $FOM$ varies with the variable on the x-axis as the parameters in the key are held fixed. The left panel of Figure \[fig:IMSTATall\] shows how the $FOM$ varies as the maximum SNR value is varied. The $FOM$ has a minimum between $30 < SNRmax < 50$. The $FOM$ does not get smaller as $SNRmax$ increases because there are so few pixels with such high SNR values. The middle panel of Figure \[fig:IMSTATall\] shows how the $FOM$ varies with the smoothing radius. For radii smaller than 3 the $FOM$ increases because less smoothing is done. For radii larger than 4, the $FOM$ does not increase because only pixels with similar positions in the PCA-space of Figure \[fig:PCAanalysis\] are included in the median. The right panel of Figure \[fig:IMSTATall\] shows the $FOM$ as the $SNRenhanced$ constant is changed. This figure shows that any $SNRenhanced$ value greater than 40 will give an acceptable fit. Increasing the $SNRenhanced$ value increases the number of bins over which pixels in PCA-space are smoothed. The $FOM$ does not increase as $SNRenhanced$ is increased because only pixels within a fixed spatial area (circle of radius 2-5) are included in the fit, and this mock galaxy does not have HII-like regions within a few pixels of the bright red bulge. The $FOM$ does not change drastically as the parameters are changed slightly, demonstrating the robustness of PCA smoothing.
Figure \[fig:ANALYZEHII\] shows the $FOM$ versus the method parameters for a region centered on the HII-like star forming arc. The left panel of Figure \[fig:ANALYZEHII\] shows how the $FOM$ varies with $SNRmax$. The $FOM$ has a minimum between $20 < SNRmax < 50$. The middle panel of Figure \[fig:ANALYZEHII\] shows how the $FOM$ varies with $Radius$. The $FOM$ has a minimum between $2< Radius< 4$. The right panel of Figure \[fig:ANALYZEHII\] shows how the $FOM$ varies with $SNRenhanced$. The $FOM$ decreases rapidly up to about $SNRenhanced=60$, and then does not decrease drastically. It does not increase for these data because the HII-like region is not close to the red bulge. The variation of $SNRmin$ is not shown, because as long as $SNRmin=3$ is used, there is an acceptable fit. These parameters, being only slightly different from the globally determined best-fit parameters, show the robustness of this method. Considering Figures \[fig:ANALYZEHII\] and \[fig:IMSTATall\], the best-fit parameters are $SNRmax=30.0$, $Radius=4.0$, $SNRenhanced=60.0$ and $SNRmin=3.0$.
Figure \[fig:kernel\] shows a map of pixels included in the smoothing for a single pixel in the mock galaxy image. The pixels with green crosses are included in the median average when smoothing for the central pixel. The circle is the best-fit radial aperture of $Radius=4$ pixels. The pixels with green crosses all have similar locations in PCA-space (left panel of Figure \[fig:PCAanalysis\]) and SNR range (right panel of Figure \[fig:PCAanalysis\]). This figure shows that most of the pixels included in the median filter are HII-like region pixels.
Figure \[fig:SNRchange\] shows the change in total SNR for pixels centered on the HII region. The SNR per pixel after PCA smoothing increases by a factor of 2-3 in the range of original signal-to-noise of 5-20. Above $SNRmax$ there is no change in the SNR. This is slightly less than the expected change in the SNR: in a simple circular average of radius $=3$ pixels, which contains roughly $\pi \times 3^2 \approx 28$ pixels, the SNR should increase by a factor of $\sqrt{28} \approx 5.3$. The SNR does not increase by a factor of 5.3 here because not all pixels within the circular aperture are used, as they are not in the same location in PCA space as the pixel being smoothed.
If the smoothing is not done over a range of angular bins in PCA-space (Figure \[fig:PCAanalysis\]), or if $SNRenhanced=SNR(x,y)$, then the $FOM$ always increases by 1.0. The $FOM$ increases because so few pixels fall in the same PCA-space bin and within the same spatial region. If the region is not restricted to a certain aperture, then the $FOM$ increases only slightly. For example, if the radius is set to a value larger than the image, essentially including all the pixels in the analysis, then the $FOM$ increases by only 0.8. This is much better than the simple radial average, whose $FOM$ increases by orders of magnitude when the radius is the size of the image. Changing the region over which the PCA eigenspectra are determined scatters the eigenvalues, reducing the correlation between location in PCA-space and color. Using too large a region includes pixels so dominated by sky noise that the PCA results are scattered throughout the PCA-space. Using too small a region does not include enough variation in color; for example, using only the central bulge half-light radius includes almost no blue star formation colors.
Comparison with Other Methods
=============================
We compare PCA smoothing with other smoothing techniques: a simple circular smoothing kernel and Adaptsmooth [@zib09]. Adaptsmooth uses a circular aperture smoothing kernel, where the radius of the circle is set to achieve a SNR of 20.
Adaptsmooth [@zib09] is compared to the PCA-smoothing technique for an area on the edge of an HII-like region of the mock galaxy. Adaptsmooth is run in default mode, where the radial aperture is determined by increasing the radius until the resulting SNR equals 20. Fixing the radius for all bands makes the results resemble those of the simple radial average. Figures \[fig:IMSTATall\] and \[fig:ANALYZEHII\] show the $FOM$ for Adaptsmooth. In almost all cases the $FOM$ for Adaptsmooth is larger than that of the PCA-smoothing method presented here. When the $FOM$ is calculated for the HII-like region, the results in Figure \[fig:ANALYZEHII\] show that Adaptsmooth has a $FOM$ worse than no smoothing at all. This is due to Adaptsmooth mixing pixels with different colors, which will be demonstrated below.
Figure \[fig:Deltagr\] shows a comparison between the ($g-r$) color predicted by various smoothing methods and the true color, for a pixel located on the edge of the spiral arm. For the simple circular smoothing kernel, an increase in the radius always produces a worse prediction of the color. The color is off by 0.1 mag at all radii greater than 2 pixels. Adaptsmooth picks a radius of 9 pixels for this pixel, as this was the radius required to reach SNR $=20$ in the lowest SNR band (the $z$-band). The radius of 9 pixels is so large that HII-like colors and redder disk colors get mixed together in the median. For PCA smoothing, with a best-fit $Radius=4$, the predicted $g-r$ color (open circle at $radius=4$ pix) is within 1-sigma of the true value (red filled square) and has a lower noise level. Adaptsmooth (open triangle) is more than 3 sigma away from the true value.
The true $g$-band flux at the pixel in Figure \[fig:Deltagr\] is 152.782 DN. Adaptsmooth predicts a $g$-band flux of 127.30 DN, whereas the PCA-smoothing predicts a more accurate flux of 157.93 DN. The original signal-to-noise for this pixel is 10.85, where the PCA-smoothed SNR $=$ 28.63 versus the Adaptsmooth SNR $=$ 36.73. There is clearly a trade-off when choosing the number of bins to use ($SNRenhanced$): smoothing over more bins increases the SNR but decreases the accuracy of the predicted color, and vice-versa for fewer bins. The higher SNR for the Adaptsmooth result is due to over-smoothing with too many pixels, which comes at the cost of a worse prediction of the SED (the $g-r$ color in Figure \[fig:Deltagr\]). Figure \[fig:Deltagr\] shows that the PCA-smoothing provides a better estimate of the true color, and has little dependence on the choice of radius.
Next, the effectiveness of each method at reproducing the stellar population of the mock galaxy is discussed. The [@mar05] stellar population models are fit to the true (noise-free) mock galaxy image, the mock galaxy image with no smoothing, a simple radially smoothed image, an Adaptsmoothed image, and a PCA-smoothed image. The method uses a grid-search chi-squared minimization routine to find the best-fit model stellar population parameters, given the pixel’s $ugriz$-band fluxes and the model’s $ugriz$-band fluxes. For an aperture centered on the HII region, the noise-free truth image has an age of 7.5 Myr ($\pm$ 3.7 Myr). The simple radial smoothing gives an age of 10.0 Myr ($\pm$ 1.0 Myr). Adaptsmooth results in an age of 6.0 Myr ($\pm$ 7.0 Myr). The PCA-smoothed image results in an age of 7.0 Myr ($\pm$ 4.1 Myr). The PCA-smoothed result is a better description of the noise-free true age.
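The grid-search fit used above (and again in Section 4) can be sketched as follows. The analytic solution for the flux normalization of each template is an implementation choice made here for illustration, and `model_grid` and `model_ages` are hypothetical arrays of template $ugriz$ fluxes and the corresponding ages.

```python
import numpy as np

def best_fit_age(pixel_flux, pixel_sigma, model_grid, model_ages):
    """Grid-search chi^2 fit of template SEDs to one pixel.
    pixel_flux, pixel_sigma: (5,) ugriz fluxes and errors;
    model_grid: (n_models, 5) template ugriz fluxes; model_ages: (n_models,)."""
    w = 1.0 / pixel_sigma ** 2
    # best-fit normalization of each template onto the data (analytic)
    amp = (model_grid * pixel_flux * w).sum(axis=1) / (model_grid ** 2 * w).sum(axis=1)
    chi2 = (w * (pixel_flux - amp[:, None] * model_grid) ** 2).sum(axis=1)
    best = np.argmin(chi2)
    return model_ages[best], chi2[best]
```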
Case Study
==========
NGC 450
-------
Next, PCA-smoothing is applied to real data sets. PCA-smoothing is first run on NGC 450, because it was also analyzed by [@wel08]. NGC 450 is a particularly interesting case because of the drastic spatial variation of its colors: there is a flocculent distribution of very blue star formation regions on top of a red disk. The PCA-smoothing was run on SDSS data of NGC 450 using the best-fit parameters determined in Section 2. After running the PCA-smoothing, stellar population models [@mar05] are fit to the smoothed and unsmoothed data using a simple grid-search $\chi^2$ minimization routine. Figure \[fig:ngc450color\] shows the results. Comparing the PCA-smoothed stellar population age maps to the unsmoothed maps shows that there is considerably more scatter in the unsmoothed maps. Figure \[fig:ngc450color\] also shows that the PCA-smoothing preserves the structural information: HII regions are still prominent in the PCA-smoothed image, and the contrast between bright HII regions and the faint older disk is preserved.
Figure \[fig:ngc450age2\] quantifies the level of scatter in the best-fit ages for a region of lower SNR pixels located in the disk. The scatter towards younger ages for the unsmoothed data is clear: the best-fit ages range from 0.1 Gyr to 10 Gyr, whereas the PCA-smoothed analysis has best-fit ages clustered between a few Gyr and 9 Gyr. Since this is real data, the true age distribution is not known and is not shown.
SDSS J235106.25+010324.1
------------------------
Next, PCA-smoothing is run on SDSS J235106.25+010324.1, which has well-defined blue spiral arms near a red bulge. Figure \[fig:sdss\] shows the SDSS color image, the PCA map, the PCA-smoothed $g$-band image, the best-fit model age of the unsmoothed data, and the best-fit model age of the smoothed data. The top middle panel of Figure \[fig:sdss\] clearly shows that this method separates the bulge and spiral arms into different areas in PCA space, as can be seen from the bulge pixels being bright and the spiral arm pixels being dark. The separation of the spiral arms from the bulge and inter-arm regions means that they will not be mixed by the PCA-smoothing routine.
Conclusions
===========
This paper presents a method for smoothing SDSS data using a variation of Principal Component Analysis. The method is performed by running PCA simultaneously on multi-wavelength images of galaxies, and then smoothing over pixels that have similar locations in PCA space and similar spatial locations within the galaxy. The advantages of the method are 1) no mixing of colors, 2) the method is geared towards stellar population analysis, 3) the parameters are tunable, and 4) the results are not extremely sensitive to the input parameters. The disadvantages of the method are 1) the need for an initial analysis to identify the galaxy, 2) the computational time required to run PCA, and 3) the need for well-understood and uniform noise characteristics across different wavelengths. The smoothing parameters can be tuned to adjust the trade-off between more smoothing with more color mixing versus less smoothing with more color purity. Increasing the $SNRenhanced$ constant results in an increased signal-to-noise of the smoothed pixel, at the cost of mixing over different colors. Lowering the $SNRenhanced$ constant results in a purer color with less smoothing over different colors, at the cost of a lower smoothed signal-to-noise.
The method was tested and demonstrated using a mock galaxy with $ugriz$-band images having SNRs similar to those seen in typical SDSS data. Figures \[fig:IMSTATall\] and \[fig:ANALYZEHII\] show that the $FOM$ for the PCA-smoothing method is always better (lower) when compared to azimuthally symmetric smoothing routines. Considering Figures \[fig:ANALYZEHII\] and \[fig:IMSTATall\], the best-fit parameters are $SNRmax=30.0$, $Radius=4.0$, $SNRenhanced=60.0$ and $SNRmin=3.0$. The lack of extreme peaks in the $FOM$ shows the robustness of the method. Figures \[fig:IMSTATall\] and \[fig:ANALYZEHII\] imply that as long as the user does not use extreme smoothing parameters, a reliable result will be obtained. Analysis of a region located on the boundary between an HII region and the red disk (Figure \[fig:Deltagr\]) shows that PCA smoothing is better at predicting the ($g$-$r$) color, at the 0.2 mag level, than simple radial smoothing or Adaptsmooth.
The PCA-smoothing algorithm can be run on the SDSS data set with the parameters described in this paper. The galaxies in the low-redshift NYU-VAGC [@bla05] would be ideal for analysis, as the catalog includes galaxies within a comoving distance range of $10 < d < 150$ Mpc/h. These nearby galaxies are spatially resolved and thus well suited to this type of analysis. The method is geared towards large-area surveys with multi-wavelength data over a large part of the sky and with uniform noise characteristics (e.g. COSMOS, DEEP, SDSS, 2MASS, DES). The method can also be applied to Galactic nebulae, which are likewise asymmetric extended objects with multi-wavelength data.
Bell, E. F., McIntosh, D. H., Katz, N., Weinberg, M. D. 2003, ApJS, 149, 289 Blanton, M. R. et al. 2003, ApJ, 594, 186 Blanton, M. R., et al. 2005, AJ, 129, 2562 Bower, R. G., Benson, A. J., Malbon, R., Helly, J. C., Frenk, C. S., Baugh, C. M., Cole, S., Lacey, C. G. 2006, MNRAS, 370, 645 Bruzual, G., Charlot, S. 2003, MNRAS, 344, 1000 Cardelli, J. A., Clayton, G. C., Mathis, J. S. 1989, ApJ, 345, 245 Cole, S., & Kaiser, N. 1989, MNRAS, 237, 1127 Connolly, A. J., Szalay, A. S., Bershady, M. A., Kinney, A. L., Calzetti, D. 1995, AJ, 110, 1071 De Lucia, G., Springel, V., White, S. D. M., Croton, D., Kauffmann, G. 2006, MNRAS, 366, 499 Dutton, A. A., van den Bosch, F. C., Dekel, A., Courteau, S. 2007, ApJ, 654, 27 Eisenstein, D. J., & Loeb, A. 1996, ApJ, 459, 432
Gnedin, O. Y., Weinberg, D. H., Pizagno, J., Prada, F., Rix, H.-W. 2007, ApJ, 671, 1115 Ivezić, Ž. et al. 2004, Astron. Nachr., 325, 583 Lanyon-Foster, M. M., Conselice, C. J., Merrifield, M. R. 2007, MNRAS, 380, 571 Li, C., Jing, Y. P., Kauffmann, G., Boerner, G., Kang, X., Wang, L. 2007, MNRAS, 376, 984 Maraston, C. 2005, MNRAS, 362, 799 McGaugh, S. S. 2005, Phys. Rev. Lett., 95, 171302 Mo, H. J., Mao, S., & White, S. D. M. 1998, MNRAS, 295, 319 Peng, C. Y., Ho, L. C., Impey, C. D., Rix, H.-W. 2002, AJ, 124, 266 Tully, R. B., & Fisher, J. R. 1977, A&A, 54, 661 Welikala, N., Connolly, A. J., Hopkins, A. M., Scranton, R., Conti, A. 2008, ApJ, 677, 970 White, S. D. M., & Rees, M. J. 1978, MNRAS, 183, 341 Yip, C. W., et al. 2004, AJ, 128, 2603 Zibetti, S., Charlot, S., Rix, H.-W. 2009, MNRAS, 400, 1181
[^1]: http://www.netlib.org/lapack/
|
---
abstract: 'The band structure and the Fermi surface of hexagonal diborides ZrB$_2$, VB$_2$, NbB$_2$, TaB$_2$ have been studied by the self-consistent full-potential LMTO method and compared with those for the isostructural superconductor MgB$_2$. Factors responsible for the superconducting properties of AlB$_2$-like diborides are analyzed, and the results obtained are compared with previous calculations and available experimental data.'
address: 'Institute of Solid State Chemistry, Ural Branch of the Russian Academy of Sciences, 620219 Ekaterinburg, Russia'
author:
- 'I.R. Shein, A.L. Ivanovskii'
title: |
The band structure of hexagonal diborides ZrB$_2$, VB$_2$, NbB$_2$ and TaB$_2$\
in comparison with the superconducting MgB$_2$
---
A recent discovery \[1\] of the critical transition (T$_c$ $\approx$ 40 K) in magnesium diboride (MgB$_2$) and the development of some promising superconducting materials on its basis (ceramics, films, extended wires, see review \[2\]) gave an impetus to an active search for novel superconductors (SC) among related compounds with structure and chemistry similar to MgB$_2$. It is reasonable that hexagonal (AlB$_2$-like) metal diborides isostructural with MgB$_2$ were considered as the first SC candidates. Detailed investigations of the band structure and coupling mechanism in MgB$_2$ \[2-7\] showed that diborides of group I, II metals of the Periodic System, for instance metastable CaB$_2$ \[6\], LiB$_2$, ZnB$_2$ \[7\], hold the greatest promise as SC candidates. The authors of \[8\] predicted the possibility of a critical transition with T$_c$ $>$ 50 K in AgB$_2$ and AuB$_2$. It is much less probable to find new SC (with T$_c$ $>$ 1 K) among AlB$_2$-like diborides of d metals (MB$_2$), see \[2-7\]. The first report (\[9\], 1970) on the superconductivity in NbB$_2$ (T$_c$ $\approx$ 3.9 K) was not confirmed by systematic studies \[10\] of the SC properties of a series of diborides MB$_2$ (M = Ti, Zr, Hf, V, Nb, Ta, Cr), according to which their T$_c$ are smaller than 0.7 K.\
Therefore, the recent reports \[11-13\] on rather high T$_c$ for ZrB$_2$ (5.5 K \[11\]), TaB$_2$ (9.5 K \[12\]) and NbB$_2$ (5.2 K \[13\]) were quite unexpected. It is remarkable that in the investigations of similar series of diborides (TiB$_2$, ZrB$_2$, HfB$_2$, VB$_2$, NbB$_2$, TaB$_2$ \[12\] and ZrB$_2$, NbB$_2$, TaB$_2$ \[11\]) each group revealed “its” superconductor, namely ZrB$_2$ \[11\] and TaB$_2$ \[12\], while all other MB$_2$ phases were classified as non-superconductors.\
The results published in \[12\] prompted the authors of \[14\] to examine in detail the temperature dependencies of the magnetic susceptibility and electrical resistance of TaB$_2$. It was established that no SC transition is observed for TaB$_2$ down to T $\approx$ 1.5 K. The SC properties of MgB$_2$ and TaB$_2$ discussed \[14, 15\] on the basis of band structure calculations were found to be essentially different due to strong hybridization effects of the Ta5d-B2p states. A weak (as compared with MgB$_2$) electronic interaction with the E$_{2g}$ mode of the phonon spectrum was noted in \[14\]. An abrupt lowering of T$_c$ for TaB$_2$ (and the absence of SC for VB$_2$) is explained in \[15\] by a considerable decrease in the contributions from B2p states to the density of states at the Fermi level (N(E$_F$)): MgB$_2$ (0.494) $>$ TaB$_2$ (0.114) $>$ VB$_2$ (0.043 states/eV). Analyzing fine features of the soft X-ray emission and absorption B K spectra of MgB$_2$, NbB$_2$ and TaB$_2$, the authors of \[16\] pointed out fundamental differences in the structure of their pre-Fermi edges, with dominating contributions from B2p$_\sigma$ (MgB$_2$) or B2p$_\pi$ states (NbB$_2$, TaB$_2$). We are not aware of any works reproducing the results of \[11\]. As is shown in \[2-7\], the superconductivity in MgB$_2$ and related borides is fairly well described in the context of the electron-phonon interaction theory. Therefore the peculiarities of the electronic spectrum, primarily the composition and structure of the near-Fermi bands, constitute the major factor responsible for the formation of this effect. In this work we present the results of a detailed band structure analysis of Zr, V, Nb and Ta diborides in comparison with the superconducting MgB$_2$. As is known, these diborides are isostructural (AlB$_2$ type, space group P6/mmm); their crystal lattices are made up of alternating hexagonal metal (M) monolayers and graphite-like boron layers \[17\]. The unit cell contains three atoms (M, 2B). The main differences are due to the metal sublattice type, viz. the electronic configurations of the metal atoms (Mg - 3s$^2$3p$^0$; Zr - 5s$^2$4d$^2$; V, Nb, Ta - (n+1)s$^2$nd$^3$, where n = 3, 4, 5 respectively), which determine the growth of the electron concentration (EC) (MgB$_2$ (8) $<$ ZrB$_2$ (10) $<$ VB$_2$, NbB$_2$, TaB$_2$ (11 e/cell)) and the variation in interatomic bonds, see \[2, 18, 19\].\
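For reference, the AlB$_2$-type geometry described above is easy to write down explicitly. The Python sketch below uses the standard fractional coordinates of the P6/mmm structure (metal at the origin, boron at (1/3, 2/3, 1/2) and (2/3, 1/3, 1/2)); the lattice constants in the example call are placeholders and not the equilibrium values of Table 1.

```python
import numpy as np

def alb2_cell(a, c, metal="Mg"):
    """Hexagonal AlB2-type cell (space group P6/mmm): one metal layer at z = 0
    and a graphite-like boron layer at z = 1/2; three atoms (M, 2B) per cell."""
    lattice = np.array([[a, 0.0, 0.0],
                        [-a / 2.0, a * np.sqrt(3.0) / 2.0, 0.0],
                        [0.0, 0.0, c]])
    fractional = {metal: [(0.0, 0.0, 0.0)],
                  "B": [(1/3, 2/3, 0.5), (2/3, 1/3, 0.5)]}
    cartesian = {el: [tuple(np.dot(f, lattice)) for f in pos]
                 for el, pos in fractional.items()}
    return lattice, cartesian

# nominal valence electron counts per cell, as discussed in the text:
#   MgB2 -> 2 + 2*3 = 8, ZrB2 -> 4 + 6 = 10, VB2/NbB2/TaB2 -> 5 + 6 = 11
lattice, atoms = alb2_cell(a=3.08, c=3.52, metal="Mg")   # illustrative values only
```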
The band structures of the MB$_2$ phases were calculated in the framework of the LDA by the self-consistent full-potential linear muffin-tin orbital (FP LMTO) method with allowance for relativistic effects and spin-orbit interactions \[20, 21\], with the exchange-correlation potential taken in accordance with \[22\]. The equilibrium parameters of the MB$_2$ cells (Table 1) were obtained from the condition of the minimum of the total energy of the system.\
**MgB$_2$**. As follows from Figs. 1 and 2, the peculiarities of the band structure of the superconducting MgB$_2$ are due to the B2p states, which form four $\sigma$(2p$_{x,y}$) and two $\pi$(p$_z$) energy bands. The B2p$_z$ states are oriented perpendicular to the boron layers and form weak interlayer $\pi$ bonds. The B2p$_{x,y}$ bands are of a two-dimensional (2D) type and form flat areas in the $\Gamma$-A direction of the Brillouin zone (BZ). A small dispersion of the $\sigma$ bands is also indicative of an insignificant interaction between the Mg-B layers. Two B2p$_{x,y}$ bands intersect E$_F$ and make an appreciable contribution to the density of states (DOS) at the Fermi level, being responsible for the metallic properties of MgB$_2$, see Table 2. One of the most important features of MgB$_2$ is the presence of hole B2p$_{x,y}$ states: in the $\Gamma$-A direction they are above E$_F$ and form cylindrical hole-type elements of the Fermi surface (FS), Fig. 1.\
[Table 1: equilibrium parameters of the MgB$_2$, ZrB$_2$, VB$_2$, NbB$_2$ and TaB$_2$ cells. $^*$ - \[11,12,14,15,23\]]
Thus, the distinguishing characteristics of the band spectrum of MgB$_2$ that are crucial for its superconducting properties, as well as for the intra- and interlayer interaction effects (see also \[2-7\]), include: (1) the location of the $\sigma$(p$_{x,y}$) bands relative to E$_F$ (the presence of hole states); (2) the value of their dispersion in the $\Gamma$-A direction ($\Delta$E$^\sigma$($\Gamma$-A), determined by the degree of interaction between the metal-boron layers); (3) the value and orbital composition of N(E$_F$) (the dominating contribution of the 2p states of boron atoms from the graphite-like layers). Let us consider in this context the band structure of the Zr, V, Nb and Ta diborides. First note that the most obvious consequence of the variation in the composition of the metal sublattice in the series of diborides is an increase in the EC with successive occupation of energy bands. As a result, the Fermi level of ZrB$_2$ is located in the pseudogap between the completely occupied bonding and the free antibonding states. This determines the maximum stability of ZrB$_2$ (and also of the isoelectronic and isostructural TiB$_2$ and HfB$_2$) in the series of AlB$_2$-like phases and their extreme thermomechanical characteristics \[23\]. These inferences were confirmed by recent FP LMTO calculations of the cohesive energy of some MB$_2$ phases (M = 3d, 4d and 5d metals) \[18, 19\].\
[Table 2: electronic structure parameters of MgB$_2$, ZrB$_2$, VB$_2$, NbB$_2$ and TaB$_2$.]
**MgB$_2$ and ZrB$_2$**. As is seen from Figs. 1, 2 and Table 2, the structures of the near-Fermi spectral edges of ZrB$_2$ and the SC MgB$_2$ differ radically. For ZrB$_2$, (1) the $\sigma$(p$_{x,y}$) bands of boron are located below E$_F$ ($-1.1$ eV at the A point of the BZ) and the corresponding hole states are absent; (2) a considerable dispersion of these bands appears in the $\Gamma$-A direction ($\Delta$E$^\sigma$($\Gamma$-A) = 1.73 eV): the $\sigma$ bands are no longer of the 2D type as a result of the formation of strong covalent d-p bonds between the metal and boron layers, in which the partially occupied $\pi$(p$_z$) bands take part; (3) the value of N(E$_F$) decreases drastically as compared with MgB$_2$ (from 0.719 to 0.163 states/eV), the maximum contribution ($\sim$80%) to N(E$_F$) being made by the Zr4d states (whereas the contributions from boron states are much smaller, $\sim$18%). The change in the type (2D $\rightarrow$ 3D) of the near-Fermi states can be easily traced by comparing the structure of the FS of MgB$_2$ and ZrB$_2$, Fig. 1. It is seen that the FS of ZrB$_2$ consists of three types of figures defined by mixed Zr4d,5p-B2p states: (a) a 3D figure of rotation around the $\Gamma$-A direction with hole-type conductivity; (b) a 3D figure near the centre of the M-K segment with electron-type conductivity; and (c) tiny 3D sections with electron-type conductivity.\
**VB$_2$, NbB$_2$ and TaB$_2$**. The energy bands, FS and DOS of these isoelectronic and isostructural diborides are shown in Figs. 3, 4, and some electronic structure parameters are listed in Tables 2, 3. The above-mentioned differences between ZrB$_2$ and the SC MgB$_2$ (filling of the $\sigma$(p$_{x,y}$) bands, decreased contributions of B2p states to N(E$_F$), variation in the type (2D $\rightarrow$ 3D) of the near-Fermi states) are typical also of VB$_2$, NbB$_2$, TaB$_2$. Besides, they have the following features in common (as compared with ZrB$_2$): (1) partial occupation of the antibonding d band responsible for the metallic-type conductivity; (2) considerable growth of N(E$_F$); and (3) increased filling of the $\pi$(p$_z$) bands. A peculiar transformation of the Fermi surface is observed, for example, for TaB$_2$ (Fig. 3): the Fermi surface contains two (“internal” and “external”) non-intersecting electron-type spheroids of rotation around the point A, defined by 3D B2p and Ta5d$_{xz,yz}$ states respectively. The bonding $\sigma$(p$_{x,y}$) bands of boron lie at $-1.3$, $-2.5$ and $-2.6$ eV at the A point of the BZ for VB$_2$, NbB$_2$ and TaB$_2$ respectively, below E$_F$, and have, as in the case of ZrB$_2$, a significant energy dispersion $\Delta$E$^\sigma$($\Gamma$-A), which has a maximum value for VB$_2$, Table 2. In the series of isoelectronic VB$_2$ $\rightarrow$ NbB$_2$ $\rightarrow$ TaB$_2$, N(E$_F$) decreases systematically and has a maximum value for VB$_2$ due to the contribution from the near-Fermi quasi-flat V3d$_{xz,yz}$ band in the $\Gamma$-A direction. By contrast, the contribution of B2p states (antibonding $\sigma$ and $\pi$ bands) to N(E$_F$) in this series changes non-monotonically and reaches its peak (0.190) for NbB$_2$, which is still much smaller than that for MgB$_2$ (0.448 states/eV). A higher concentration of B2p states in the vicinity of E$_F$ for NbB$_2$ (as compared with TaB$_2$) is also evident from spectroscopy experiments \[16\].\
[Table 3: total DOS and partial contributions of the Md and B2p states for MgB$_2$, TaB$_2$ and VB$_2$.]
Thus, the performed analysis of the band structure and the Fermi surface of the isostructural d-metal (Zr, V, Nb, Ta) diborides allows us to formulate their fundamental differences from those of the superconducting MgB$_2$: (1) complete occupation of the bonding $\sigma$(p$_{x,y}$) bands and the absence of hole $\sigma$ states; (2) increased covalent interactions between the boron and metal layers (due to the hybridization of B2p-Md states) and the loss of the two-dimensional character of the energy bands; (3) changes in the value and orbital composition of N(E$_F$), where the dominant role is played by the valence d states of the metals. The latter circumstance is typical of low-temperature superconductors, for example metal-like compounds of these d elements with carbon, nitrogen, silicon (NbN, V$_3$Si etc.), the T$_c$ values of which correlate with the values of N(E$_F$) \[24\]. In this case the results obtained suggest that the V, Nb, Ta diborides are more likely to exhibit low-temperature superconductivity, and among them the maximum T$_c$ value can be anticipated for VB$_2$. On the contrary, if we assume that the major electronic factor for the formation of the SC properties of MB$_2$ is the near-Fermi density of B2p states (by analogy with MgB$_2$ \[2-7\]), NbB$_2$ should possess the highest critical transition temperature. It is noteworthy that according to the coupling model proposed in \[25, 26\], not only the boron $\sigma$, but also the $\pi$ band occupation should be considered in this case. In any case, a superconducting transition in ZrB$_2$ is the least probable, and the results of \[11\] need to be re-examined.
[99]{}
J. Nagamatsu, N. Nakagawa, T. Muranaka et al., Nature, [**410**]{}, 63 (2001)
A.L. Ivanovskii. Uspekhi Khimii, [**70**]{}, no. 9 (2001) - in Russian, in press.
J. Kortus, I.I. Mazin, K.D. Belashchenko et al., Phys. Rev. Letters, [**86**]{}, 4656 (2001)
J.M. An, W.E. Pickett. Phys.Rev.Letters, [**86**]{}, 4366 (2001)
N.I. Medvedeva, A.L. Ivanovskii, J.E. Medvedeva, A.J. Freeman. Phys.Rev., [**B64**]{}, 20502 (2001)
K.D. Belashchenko, M. van Schilfgaarde, V.P. Antropov. cond-mat/0102391 (2001)
V.P. Antropov, K.D. Belashchenko, M. van Schilfgaarde, S.N. Rashkeev. cond-mat/0107123 (2001)
S.K. Kwon, S.J. Youn, K.S. Kim, B.I. Min. cond-mat/0106483 (2001)
A.S. Cooper, E. Corenzwit, L.D. Longinotti et al. Proc. Natl. Acad. Sci., [**67**]{}, 313 (1970)
L. Leyarovska, E. Leyarovski. J. Less-Common Metals, [**67**]{}, 249 (1979).
V.A. Gasparov, N.S. Sidorov, I.L. Zver’kova, M.P. Kulakov, JETP Letters, [**73**]{}, 532 (2001)
D. Kaczorowski, A.J. Zaleski, O.J. Zogal, J. Klamut, cond-mat/0103571 (2001)
J. Akimitsu, Annual Meeting Phys. Soc. Japan, [**3**]{}, 533 (2001)
H. Rosner, W.E. Pickett, S.-L. Drechsler et al., cond-mat/0106092 (2001)
P.P. Singh, cond-mat/0104580 (2001)
J. Nakamura, N. Yamada, K. Kuroki et al. cond-mat/0108215 (2001)
Yu.B. Kuzma. Kristallokhimia boridov. “Vishcha Shkola”, Lvov (1983) - in Russian.
A.L. Ivanovskii, N.I. Medvedeva, G.P. Shveikin et al. Metallofizika i noveishie tekhnologii, [**20**]{}, 41 (1998) - in Russian.
A.L. Ivanovskii, N.I. Medvedeva, Yu.Ye. Medvedeva. Metallofizika i noveishie tekhnologii, [**21**]{}, 19 (1999) - in Russian.
M. Methfessel, C. Rodriguez, O.K. Andersen. Phys. Rev., [**B40**]{}, 2009 (1989).
S.Y. Savrasov. Phys. Rev., [**B54**]{}, 16470 (1996).
J.P. Perdew and Y. Wang, Phys. Rev., [**B45**]{}, 13244 (1992).
G.V. Samsonov, I.M. Vinitskii. Refractory Compounds. Moscow: Metallurgia, 1976 - in Russian.
S.V. Vonsovskii, Yu.A. Izyumov, E.Z. Kurmaev. Superconductivity of Transition Metals, Their Alloys and Compounds. Moscow: Nauka, 1977 - in Russian.
I. Imada. cond-mat/0103006 (2001)
K. Furukawa. cond-mat/0103184 (2001)




|
---
abstract: 'Colloidal gels are a prototypical example of a heterogeneous network solid whose complex properties are governed by thermally-activated dynamics. In this Letter we experimentally establish the connection between the intermittent dynamics of individual particles and their local connectivity. We interpret our experiments with a model that describes single-particle dynamics based on highly cooperative thermal debonding. The model, in quantitative agreement with experiments, provides a microscopic picture for the structural origin of dynamical heterogeneity in colloidal gels and sheds new light on the link between structure and the complex mechanics of these heterogeneous solids.'
author:
- Jan Maarten van Doorn
- Jochem Bronkhorst
- Ruben Higler
- Ties van de Laar
- Joris Sprakel
bibliography:
- 'refs.bib'
title: Linking particle dynamics to local connectivity in colloidal gels
---
Attractive interactions can drive a dilute colloidal suspension towards a solid state formed by a sample-spanning and mechanically-rigid particle network [@zaccarelli2007colloidal; @trappe2004colloidal]. These colloidal gels are non-equilibrium solids, kinetically arrested en route to their equilibrium state of solid-liquid coexistence [@lu2008gelation]. Such particle gels are characterized by strong heterogeneity in their local connectivity, mesoscopic structure and their dynamics and mechanics [@duri2006length; @dibble2008structural; @gao2007direct; @0953-8984-14-33-303]. The microstructure and internal dynamics of colloidal gels can be directly observed with microscopy techniques at the single-particle level. As a consequence, it forms an interesting testing ground to explore the complex and length-scale dependent mechanics of heterogeneous solids. Colloidal gels derive their mechanical rigidity from physically bonded gel strands and nodes that form a percolating elastic network. The linear elasticity of gels is governed by the mechanics of the network architecture and its thermal fluctuations [@PhysRevLett.80.778; @rueb1997viscoelastic]. By contrast, the gradual aging of gels to a denser state [@cipelletti2000universal; @zaccarelli2007colloidal] and their non-linear response to applied stresses [@PhysRevLett.106.248303; @gibaud2016multiple] is governed by events occurring at the much smaller length scale of individual particles. Since the bonds between the particles are typically weak, single particles can debond from strands in the gel by thermally-activated bond breaking [@lindstrom2012structures]. On longer time scales, this results in a gradual restructuring of the gel network, causing it to coarsen, age and relax internal stresses that are built up during gelation [@negi2009dynamics]. Moreover, thermal activation at the single-particle level plays a crucial role in the processes of fatigue that preempt stress-induced failure of the gel network [@PhysRevLett.106.248303]. To date, quantitative descriptions of these thermally-activated phenomena have relied on mean-field approximations [@lindstrom2012structures]. Yet, the inhomogeneity in local coordination that is intrinsic to gels must play a large role in the intermittent debonding dynamics that are at the origin of this complex non-linear behavior. As a result, linking the structure of colloidal gels to their non-linear mechanics has remained challenging, in particular as the relationship between local connectivity and thermally-activated dynamics of single particles is not clearly established.\
In this Letter we explore the connection between the local connectivity and intermittent bonding-debonding dynamics of individual particles in colloidal gels. We use quantitative three-dimensional microscopy to experimentally probe this relationship in colloidal gels formed from colloids that interact by means of short-ranged attractions. We show how the experimental data can be quantitatively described with a microscopic model that treats particle debonding as a strongly cooperative, thermally-activated event governed by the local bonding structure. This allows us to explain how the complex ensemble-averaged mean-squared displacement results from the convolution of different particle species within a single gel. Our results illustrate how the heterogeneous dynamics characteristic of strongly disordered solids emerge from their complex and inhomogeneous local network structure.\
We study gels formed from poly(methyl methacrylate) (PMMA) particles, stabilised by a poly(hydroxystearic acid) comb polymer, synthesized as detailed elsewhere [@antl1986preparation]. The particles have a radius $a = 709$ nm and a polydispersity of $\sim 5$%, as determined from static light scattering experiments. The particles are equilibrated and suspended at a nominal volume fraction $\phi = 0.20$ in a density-matching solvent mixture of cyclohexyl bromide and decalin. The solvent is saturated with tetrabutylammonium bromide (TBAB) to partially screen charge interactions; we note that even at saturation, very weak electrostatic interactions remain [@C2SM26245B]. Attractive forces between the particles are induced by the addition of polystyrene ($M_w =$ 105 kg/mol, $M_w/M_n =$ 1.06) as a depletant. In our solvent, this polymer has a radius of gyration $R_g \approx$ 10 nm, resulting in a short-ranged depletion attraction with $\xi = R_g/a = 0.014$. Three-dimensional image stacks with a field-of-view of 41$\times$41$\times$21 $\mu$m$^3$ are acquired with confocal fluorescence microscopy at 1 Hz; for each sample we capture 5000 stacks to ensure sufficient statistics. The three-dimensional centroid positions and the trajectories of all particles in the field-of-view are then determined with a resolution $dr_{res} = 40$ nm [@gao2009accurate].\
![(color online) a-b) Computer-generated renderings of a liquid just before the gel point ($c_p = 21.0$ mg/ml) and a gel ($c_p = 37.1$ mg/ml) based on three-dimensional confocal microscopy data, with particles color coded according to their coordination number (from dark blue $Z \geq 6$ to yellow $Z = 1$). c) Coordination number distributions for a liquid $c_p = 21.0$ mg/ml (squares) and a gel $c_p = 37.1$ mg/ml (circles). d) Ensemble-averaged coordination number $\langle Z \rangle$ as a function of depletant concentration $c_p$, dotted lines to guide the eye. []{data-label="fig1"}](figure1smallrender.png){width="\linewidth"}
Upon increasing the polymer concentration $c_p$ in a suspension of these particles, the structure of the sample transitions from a fluid of isolated particles into a fluid of small and dynamic clusters [@lu2006fluids]. At a threshold depletant concentration a sample-spanning gel structure forms (Fig. \[fig1\]b). The phase behavior of this experimental system was studied in detail previously [@lu2008gelation; @pusey1994phase]. To evaluate the sample microstructure, we first calculate the ensemble-averaged and static coordination number $\langle Z \rangle$ from snapshots of the three-dimensional gel structure. As the attraction strength increases, we see a transition from a low but finite value of $\langle Z \rangle$ in the liquid state to a rapid growth in the coordination number as the sample transforms into an aggregated colloidal gel (Fig. \[fig1\]d) [@dibble2008structural]. However, the average coordination number does not provide insight into the strong intrinsic heterogeneity in the microstructure of colloidal gels, which becomes visible in a computer-generated representation of our experimental system in which the particles are color-coded according to their instantaneous value of $Z$ (Fig. \[fig1\]a-b). Indeed, calculation of the coordination number probability $P(Z)$ reveals a relatively wide distribution, both prior to and beyond the gelation threshold (Fig. \[fig1\]c).\
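A minimal sketch of how $Z$ and $P(Z)$ can be extracted from the measured particle centres is given below; the bond criterion (a cutoff slightly larger than one particle diameter) and the array layout are assumptions made for illustration, not necessarily the exact criterion used in our analysis.

```python
import numpy as np
from scipy.spatial import cKDTree

def coordination_numbers(positions, radius_a, bond_factor=1.1):
    """Count bonded neighbours for each particle.

    positions : (N, 3) array of particle centres (same units as radius_a).
    Two particles are called bonded when their centre-to-centre distance is
    below bond_factor * 2a, i.e. slightly beyond contact to absorb noise.
    """
    tree = cKDTree(positions)
    pairs = tree.query_pairs(r=bond_factor * 2.0 * radius_a)
    Z = np.zeros(len(positions), dtype=int)
    for i, j in pairs:
        Z[i] += 1
        Z[j] += 1
    return Z

def coordination_distribution(Z):
    """Return P(Z) as the normalised histogram of coordination numbers."""
    values, counts = np.unique(Z, return_counts=True)
    return values, counts / counts.sum()
```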
The microscopic dynamics of colloidal systems are conventionally probed by means of the time- and ensemble-averaged mean-squared displacement (MSD) $\langle \Delta r^2 \rangle$ (Fig. \[fig2\]a). In these data, a continuous transition between fluid and solid behavior is observed. At low attraction strengths a diffusive $\langle \Delta r^2 \rangle \propto t$ is found (Fig. \[fig2\]a). Beyond a threshold $c_p \sim 30$ mg/ml, $\langle \Delta r^2 \rangle$ decreases and begins to display a time-independent localisation plateau at short lag times. The height of this plateau $\delta^2$ decreases with increasing $c_p$, while it extends to increasingly large lag times. At even longer times $\langle \Delta r^2 \rangle$ exhibits an upturn to diffusive behavior. The value of $\langle \Delta r^2 \rangle$ at a lag time $t = 498$ s, as a proxy for the low-frequency particle mobility, exhibits a continuous transition from fluid-like behavior at low $c_p$ to a gel-like state for $c_p > 33$ mg/ml (Fig. \[fig2\]b).\
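For completeness, the time- and ensemble-averaged MSD used above can be computed from the tracked trajectories as sketched below; this simple loop over lag times only illustrates the definition (per-particle time average followed by an ensemble average) and is not the analysis code used for the figures.

```python
import numpy as np

def mean_squared_displacement(trajectories, lag_times):
    """Time- and ensemble-averaged self-part of the MSD.

    trajectories : (n_particles, n_frames, 3) array of positions
    lag_times    : iterable of integer frame lags
    """
    n_particles, n_frames, _ = trajectories.shape
    msd = []
    for lag in lag_times:
        disp = trajectories[:, lag:, :] - trajectories[:, :n_frames - lag, :]
        # time average per particle, then ensemble average over particles
        msd.append(np.mean(np.sum(disp**2, axis=-1)))
    return np.array(msd)
```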
![(color online) a) Ensemble- and time-averaged self-part of the mean squared displacements as a function of polymer volume fraction with (from top to bottom): $c_p = $0, 21.0, 30.9, 31.1, 33.3, 34.1, 34.6, 37.1, 38.3, 40.6 mg/ml. b) Value of $\langle \Delta r^2 \rangle$ at $t = 498$ s as a function of $c_p$. c-d) self (closed symbols) and distinct part (open symbols) of the mean-squared displacement for a sample in the liquid (c, $c_p = 21.0$ mg/ml) and in the gel (d, $c_p = 37.1$ mg/ml).[]{data-label="fig2"}](figure2new.png){width="1.0\linewidth"}
The conventional self-part of the MSD is a measure for the local dynamics of single particles. To illustrate the fact that the internal gel dynamics are strongly length-scale dependent, we compare these data to the distinct-part of the mean-squared displacements $\langle \Delta r^2 \rangle_D$ (Fig. \[fig2\]c-d). The distinct, or 2-point, mean-squared displacement, computed as described elsewhere [@crocker2000two], probes the correlated motion of particles transmitted through the medium. As such, it is a measure for the global, rather than local, properties of the gel. For samples in the fluid, just prior to the liquid-solid transition, the self- and distinct-parts of the MSD overlap within experimental error (Fig. \[fig2\]c). This indicates that there are no appreciable differences between local and global dynamics. By contrast, just above the gel threshold the distinct $\langle \Delta r^2 \rangle_D$ is almost an order-of-magnitude lower than the self-part of the MSD (Fig. \[fig2\]d). The gel is more rigid at the macroscopic scale than what is experienced by individual particles locally. Apparently, the dynamics of single particles in the gel are strongly affected by local structures; insight into these effects cannot be obtained by ensemble averaging.\
We hypothesize that single-particle dynamics, as measured by the self-part of the MSD, can be described by a specific sequence of events. Particles are first bonded to their neighbors in the gel network by bonds of strength $U/k_BT$. Under the action of thermal fluctuations, particles spontaneously debond from the gel with a characteristic rate $k_{d,Z}$; after debonding a particle will diffuse through the viscous medium with a rate $D$. This motion persists, until the particle collides with the gel network and re-attaches by forming new bonds. Thus, particles can exist in two states, bound and free, each characterised by different dynamics.\
We can experimentally evidence the existence of these two populations by determining the probability distribution $P(\Delta r^2 (t))$ of mean-squared displacement values for individual particles at a particular lag time $t = 498$ s. A sample in the fluid state exhibits a distribution with a single population of freely diffusing particles (Fig. \[fig3\]a), also illustrated by the linear dependence of the ensemble-averaged MSD with time (Fig. \[fig2\]a). By contrast, a sample in the gel state reveals two populations; a major fraction of the particles is bonded and exhibits a very low mobility, whereas a secondary peak signals the particles which have temporarily debonded and diffuse through the solution (Fig. \[fig3\]b). Note that this diffusive population has a lower effective diffusion coefficient than particles in the repulsive liquid, probably due to the fact that not only singlets, but also small clusters debond and diffuse.\
![(color online) Probability distributions $P$ of single-particle mean-squared displacements at $t = 498$ s for $c_p = 21.0$ (a) and 37.1 mg/ml (b).[]{data-label="fig3"}](figure3new.png){width="1.0\linewidth"}
The self-part of the mean-squared displacement of a single particle $\Delta r^2(t)$ can be split into two contributions: i) free diffusion during a characteristic time $\tau_{f}$, during which $\Delta r^2(t)=6Dt$ and ii) thermal vibrations of amplitude $\delta$ around an equilibrium bonded position, during a time $\tau_{b}$, for which $\Delta r^2(t) =\delta^2$. If we define $\alpha_{b} = {\tau_{b}} / {(\tau_{b} + \tau_{f})}$ as the fraction of time a particle resides in a bonded configuration, the time-averaged mean-squared displacement of a single particle can be approximated as: $$\label{MSD}
\Delta r^2(t) = (1-\alpha_b) 6Dt + \alpha_b \delta^2$$ For the sake of simplicity, we presume that the diffusion of debonded species occurs at a rate $D = k_BT/(6\pi\eta a)$, where $\eta = 2$ mPa s is the viscosity of the suspending medium.\
The localization length $\delta$ of a bonded particle is set by the curvature of the local minimum in the potential energy. In our experiments we use depletion interactions; this gives rise to an attraction of depth $U$ and a range of the order of the depletant size $R_g$. Approximating the bonds as a harmonic well $U = k \Delta r^2$, we estimate the spring constant of the bond between two colloids from dimensional analysis as $k \sim U / {R_g^2}$. Thermal excursions from their equilibrium position will occur with a typical squared amplitude $\delta^2 \sim {k_BT} / {k} = {k_BT R_g^2}/{U}$. Note that, in our experiments, only vibrations that exceed the spatial resolution $dr_{res}$ of the particle locating algorithm can be detected. Smaller vibrations will result in an observed mean-squared displacement plateau of $\delta^2 \approx dr_{res}^2$.\
The characteristic time a particle resides in a bound state is governed by thermally-activated dynamics. In an Eyring approach, the rate of dissociation of a single bond is described as $k_{d,1} = \omega_0 \exp \left[-U/k_BT\right]$, where $\omega_0$ is the attempt frequency [@eyring1935activated]. For a particle to detach from the gel network, all $Z$ bonds that connect it to its neighbors must be ruptured. Breaking one bond, while the particle stays in place due to the remaining $Z-1$ bonds, leads to rapid restoration of the broken bond with a rate $k_a$. Assuming that $k_a \gg k_{d,1}$, particle detachment from the network will only occur if all $Z$ bonds break simultaneously [@lindstrom2012structures]. Thus particle detachment is a strongly cooperative process with a rate $k_{d,Z} = (k_{d,1})^Z$. The typical time a particle remains bonded becomes $\tau_b = \frac{1}{\omega_0}\left[\exp\left({ZU}/{k_BT} \right)-1\right],$ where the term $-1$ ensures that the bonding time vanishes as $Z \rightarrow 0$. Substituting these results in Eq. \[MSD\] gives a microscopic expression for the single-particle mean-squared displacement as: $$\label{MSD2}
\Delta r^2(Z,t) = 6Dt + \frac{e^{ZU/k_BT}-1}{e^{ZU/k_BT}-1+\omega_0\tau_f} \left(\delta^2-6Dt \right)$$ This expression predicts a distinct dependence of the single-particle dynamics on its local coordination number $Z$. From our experimental data, we determine the value of $\Delta r^2 (Z,t)$ at a fixed lag time $t = 498$ s, and plot these as a function of the average coordination number for the particle during the length of our experimental observations (symbols in Fig. \[fig5\]a). We fit these experimental data to the theoretical model (Eq. \[MSD2\]), in which there are two fitting parameters: the effective energy of interparticle bonds $U$ and the dimensionless number $\omega_0 \tau_f$, which is the ratio of the frequencies of debonding attempts and reassociations. The predictions from the microscopic theory are in excellent agreement with the experimental data (solid lines in Fig. \[fig5\]a). Both data sets, for different polymer concentrations, can be fitted with $\omega_0 \tau_f \approx 0.1$, which indicates that particle reassociation is indeed significantly faster than debonding, thus confirming the validity of the assumption that $k_a \gg k_{d,1}$.
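A compact sketch of Eq. \[MSD2\] and of the two-parameter fit described above is given below. The Boltzmann energy, particle radius, viscosity, resolution and lag time echo the values quoted in the text; the use of scipy's `curve_fit`, the initial guesses and the synthetic placeholder data are our own choices for illustration and do not reproduce the actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

kT  = 4.11e-21      # J, thermal energy at room temperature
a   = 709e-9        # m, particle radius quoted in the text
eta = 2e-3          # Pa s, solvent viscosity quoted in the text
D   = kT / (6 * np.pi * eta * a)   # free diffusion coefficient of a debonded particle
delta2 = (40e-9)**2                # resolution-limited bonded plateau, dr_res^2

def msd_model(Z, t, U_over_kT, omega0_tau_f):
    """Single-particle MSD (Eq. [MSD2] in the text): bonded plateau weighted by
    the fraction of time alpha_b spent bonded, plus free diffusion otherwise."""
    x = np.minimum(np.asarray(Z, float) * U_over_kT, 500.0)   # cap exponent for safety
    alpha_b = (np.exp(x) - 1.0) / (np.exp(x) - 1.0 + omega0_tau_f)
    return 6 * D * t + alpha_b * (delta2 - 6 * D * t)

t_lag = 498.0                      # s, fixed lag time used for Fig. 5a
def fit_func(Z, U_over_kT, omega0_tau_f):
    return msd_model(Z, t_lag, U_over_kT, omega0_tau_f)

# Z_data and msd_data would come from the tracked trajectories; the values
# below are synthetic placeholders generated from the model itself.
Z_data = np.arange(0, 11)
msd_data = fit_func(Z_data, 5.0, 0.1)
popt, pcov = curve_fit(fit_func, Z_data, msd_data, p0=[4.0, 0.2])
```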
The effective bonding energies we need to fit the data in proximity to the gel point are of the order of $\sim$ 5 $k_BT$; these values are almost an order-of-magnitude lower than the depth of the depletion attraction calculated with the Asakura-Oosawa model [@asakura1958interaction], which assumes only hard sphere repulsions. We attribute this to the still significant electrostatic repulsion known to act between PMMA particles in apolar solvents even in presence of the TBAB electrolyte [@C2SM26245B].
![(color online) a) Single-particle mean-squared displacement at $t = 498$ s as a function of particle connectivity $Z$ from experimental data (symbols) and as predicted by the model described in the text (solid lines) for $c_p = $ 34.1 (circles) and 37.1 mg/ml (squares). b) Comparison between experimental ensemble-averaged $\langle \Delta r^2 \rangle$ (symbols) and that predicted by Equation \[eq3\] without adjustable parameters (solid blue line). Solid gray lines are the contributions to the mean-squared displacements as a function of particle coordination number with (top-to-bottom) $Z = $ 0, 2, 4, 6, 8, 10, as predicted by Eq. \[MSD2\].[]{data-label="fig5"}](figure4new.png){width="1.0\linewidth"}
These data illustrate the intimate link between single-particle dynamics and local connectivity. To further substantiate these findings we probe the evolution of the coordination number for a single particle as a function of time. For a weakly connected particle, strongly intermittent fluctuations occur between bonded $Z>0$ and unbonded $Z=0$ states (Fig. \[fig4\]a); the continuous debonding and diffusion allows the particle to travel significant distances over the course of several minutes before it exits the field-of-view (Fig. \[fig4\]c). By contrast, a strongly coordinated particle shows fluctuations in coordination number of $\pm 1$ (Fig. \[fig4\]b), but remains connected over the entire length of the experiment of 5000s, and as a consequence only exhibits strongly localised positional fluctuations (Fig. \[fig4\]d).\
![(color online) Thermally-activated fluctuations in the coordination number $Z$ of a single particle (a,c) and the corresponding particle displacement $\Delta r$ (b,d) for a weakly connected (a-b) and highly connected (c-d) particle in the same gel at $c_p = 37.1$ mg/ml. Note that the trajectory length is much shorter for the weakly connected particles as it diffuses out of the field-of-view after $\sim 700$ s.[]{data-label="fig4"}](figure5new.png){width="1.0\linewidth"}
Finally, with a quantitative microscopic description for the effect of connectivity on single-particle dynamics (Eq.\[MSD2\]), we attempt to reconstruct the ensemble-averaged mean-squared displacement. To do so, we must weight the ensemble-average using the distribution of coordination numbers $P(Z)$ as a weighting function: $$\label{eq3}
\langle \Delta r^2 (t) \rangle = \sum_Z P(Z) \Delta r^2(Z,t)$$ With the values of $U$ and $\omega_0 \tau_f$ determined from our experimental data (Fig. \[fig5\]a) and $P(Z)$ obtained directly from the static structure of the gel (Fig. \[fig1\]c), we can now predict the ensemble-averaged MSD. Indeed, without adjustable parameters, we find that the reconstructed $\langle \Delta r^2 (t) \rangle$ based on our model for single particle dynamics is in reasonable quantitative agreement with the ensemble-averaged MSD determined directly from experiments (Fig.\[fig5\]b). This highlights the self-consistency of our description. Moreover, it enables us to deconvolve the ensemble-average into the different populations of particles with different local coordination numbers $Z$ (solid gray lines, Fig.\[fig5\]b). This provides a direct and quantitative explanation for the distinct dynamical heterogeneities characteristic of colloidal gels.\
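The reconstruction of Eq. \[eq3\] is then a single weighted sum. The sketch below reuses the hypothetical `msd_model` from the previous snippet and assumes $P(Z)$ is available as a dictionary of measured probabilities; the numerical values of $P(Z)$ shown here are placeholders, not the measured distribution of Fig. \[fig1\]c.

```python
import numpy as np

def ensemble_msd(t, P_of_Z, U_over_kT, omega0_tau_f):
    """Ensemble-averaged MSD (Eq. [eq3] in the text): the P(Z)-weighted sum of
    the single-particle model msd_model(Z, t, ...) sketched above."""
    return sum(p * msd_model(Z, t, U_over_kT, omega0_tau_f)
               for Z, p in P_of_Z.items())

# P(Z) as measured from the static gel structure (placeholder values here)
P_of_Z = {0: 0.02, 1: 0.05, 2: 0.10, 3: 0.15, 4: 0.20, 5: 0.20,
          6: 0.15, 7: 0.08, 8: 0.05}
t = np.logspace(0, 3, 50)          # lag times in seconds
msd_pred = ensemble_msd(t, P_of_Z, U_over_kT=5.0, omega0_tau_f=0.1)
```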
We have presented experimental data and theoretical analysis that explains how the heterogeneous dynamics of colloidal gels derives from the large inhomogeneities in local connectivity. The quantitative description of single-particle dynamics based on the local structure could form a stepping stone to develop microscopic descriptions of processes, such as aging, syneresis or stress-induced fatigue, in which the local microstructure evolves over time under the action of thermally-activated particle rearrangements. In our current description, we have only considered particle rearrangements to occur through debonding and reassociation onto the gel network. Even though this provides a reasonable approximation, given the agreement between our experiments and the model, other thermally-activated modes of particle motion, such as the sliding of a particle along a gel strand without debonding entirely, may exist. Increasing the attraction range will make these types of rearrangements more likely to occur; extending our model to account for these “sliders” could lead to a more generalized description of local dynamics that is applicable to a wide range of disordered network materials, even those in which the local connectivity must be preserved [@montarnal2011silica].
Acknowledgements {#acknowledgements .unnumbered}
================
This work is part of an Industrial Partnership Programme of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research. The work of TvdL is carried out as part of a project of the Institute for Sustainable Process Technology: Produced Water Treatment (WP-20-03).
References {#references .unnumbered}
==========
|
---
abstract: |
If $\Omega$ is the interior of a convex polygon in $\mathbb{R}^{2}$ and $f,g$ two asymptotic geodesics, we show that the distance function $d\left(
f\left( t\right) ,g\left( t\right) \right) $ is convex for $t$ sufficiently large. The same result is obtained in the case $\partial \Omega$ is of class $C^{2}$ and the curvature of $\partial \Omega$ at the point $f\left( \infty \right) =g\left( \infty \right) $ does not vanish. An example is provided for the necessity of the curvature assumption. *[2010 Mathematics Subject Classification:]{} 52A41, 53C60, 51F99, 53A40.*
author:
- 'Charalampos Charitos, Ioannis Papadoperakis'
- |
and Georgios Tsapogas\
Agricultural University of Athens
title: Convexity of asymptotic geodesics in Hilbert Geometry
---
Introduction and statements of results
======================================
Let $\Omega$ be a bounded convex (open) domain in $\mathbb{R}^{n}$ and $h$ the Hilbert metric on $\Omega$ defined as follows: for any distinct points $p,q$ in $\Omega$ let $p^{\prime}$ and $q^{\prime}$ be the intersections of the line through $p$ and $q$ with $\partial \Omega$ closest to $p$ and $q$ respectively. Then $$h\left( p,q\right) =\log \frac{\left \vert p^{\prime}-q\right \vert
\cdot \left \vert q^{\prime}-p\right \vert }{\left \vert p^{\prime}-p\right \vert
\cdot \left \vert q^{\prime}-q\right \vert }$$ where $\left \vert z-w\right \vert $ denotes the usual Euclidean distance. The quantity $\frac{\left \vert p^{\prime}-q\right \vert \cdot \left \vert q^{\prime
}-p\right \vert }{\left \vert p^{\prime}-p\right \vert \cdot \left \vert q^{\prime
}-q\right \vert }$ is the cross ratio of the colinear points $p,q,q^{\prime
},p^{\prime}$ denoted by $\left[ p,q,q^{\prime},p^{\prime}\right] $ and is invariant under projective transformations of $\mathbb{R}^{n} .$ We refer to [@Bus], [@Har] and [@PaTr] for the basic properties of the distance $h$ as well as a presentation of classic and contemporary aspects of Hilbert Geometry.
It is well known that, contrary to non-positively curved Riemannian geometry, the distance between two points moving at unit speed along two geodesics is not necessarily convex. The behavior near infinity of the distance function $$t\rightarrow h\left( f\left( t\right) ,g\left( t\right) \right)$$ when $f,g$ are two intersecting geodesics is studied in detail in [@Soc]. In this note we are concerned with the case of asymptotic geodesics.
Each geodesic line $f$ determines two points at infinity denoted by $f\left(
-\infty \right) $ and $f\left( +\infty \right) $ which are distinct points in $\partial \Omega.$ Two geodesics $f,g$ are said to be *asymptotic* if $f\left( +\infty \right) =g\left( +\infty \right) .$ We first show that, up to re-parametrization, the distance function $h\left( f(t),g(t)\right) $ tends to $0$ when $t\rightarrow \infty,$ provided that $\partial \Omega$ is $C^1$ at the point $f\left( +\infty \right) =g\left( +\infty \right) .$ Then we show that, near infinity, the distance function is convex when $\Omega$ is the interior of a convex polytope in $\mathbb{R}^{n}$ as well as when $\Omega$ is a convex domain in $\mathbb{R}^{n}$ with $C^{2}$ boundary such that the curvature of $\partial \Omega$ at $f(+\infty
)=g(+\infty)$ along the plane determined by $f$ and $g$ is not zero. The precise statement is the following
\[mainth\]Suppose $f,g$ are two asymptotic geodesic lines in a convex bounded domain $\Omega$ with common boundary point $\xi=f(+\infty)=g(+\infty
)\in \partial \Omega$ and $P$ the plane determined by $f$ and $g.$ If $\Omega$ is either
- the interior of a convex polytope in $\mathbb{R}^{n},$ or
- a convex domain in $\mathbb{R}^{n}$ with $C^{2}$ boundary and the curvature of $\partial \Omega$ at $\xi$ along $P$ is not zero,
then, there exists $T>0$ such that the function $t\rightarrow h\left(
f\left( t\right) ,g\left( t\right) \right) $ is convex for $t>T.$
In the last Section and for the case of non-asymptotic geodesics, two simple examples are provided demonstrating non-convexity for either intersecting or disjoint geodesics. Finally, an example demonstrating the necessity of the curvature condition in Theorem \[mainth\]b above is provided (see Example 3).
Dynamical properties of the geodesic flow on $\Omega /\Gamma$ equipped with the Hilbert metric, where $\Gamma$ is a torsion free discrete group which divides the strictly convex domain $\Omega$, have been studied by Y. Benoist (see [@Ben]) using the Anosov properties of the flow. In view of Eberlein’s approach in the study of the geodesic flow (see [@Ebe1], [@Ebe2] and [@CPT]) which is based on the convexity of the distance function as well as the zero distance of asymptotic geodesics, mixing of geodesic flow in the Hilbert geometry setting can be established using Theorem \[mainth\]b and Proposition \[dzero\] below.
Distance of asymptotic geodesics
================================
We will always work with a pair of geodesic lines which determine a plane $P$ in $\mathbb{R}^{n}.$ As the distance function only depends on the affine section $P\cap \Omega$ we will assume for the rest of this paper that $\Omega$ is a bounded convex domain in $\mathbb{R}^{2}.$
Recall that the Euclidean line $\ell_{\xi}$ is called a *support line* for $\Omega$ at the point $\xi \in \partial \Omega$ if $\partial \Omega \cap
\ell_{\xi}\ni \xi$ and $\Omega \cap \ell_{\xi}=\varnothing.$ Note that if $\partial \Omega$ is smooth at $\xi,$ then the support line is the unique tangent line at $\xi.$
![Overview of notation for the proof of Proposition \[dzero\].[]{data-label="quardangle"}](condUfig1n.eps)
\[dzero\]Let $f,g$ be two asymptotic geodesic lines with common boundary point $\xi=f(+\infty)=g(+\infty)\in \partial \Omega.$ Assume that $\partial \Omega$ is $C^1$ at $\xi.$ Then there exists a (geodesic) re-parametrization of $f$ such that $$\lim_{t\rightarrow \infty}h\left( f(t),g(t)\right) =0.$$
Let $\ell_{\xi}$ be the tangent line at $\xi$ and $L$ the line containing $f(-\infty)$ and $g(-\infty).$ We first treat the case where $\ell_{\xi}\cap \partial \Omega=\left \{ \xi \right \} ,$ the other case being when $\ell_{\xi}\cap \partial \Omega$ is a Euclidean segment containing $\xi.$
If $\ell_{\xi}\cap L$ is a (finite) point $A,$ we may compose with a projective transformation which sends $A\ $to $\infty$ and, thus, we may assume that $\ell_{\xi},L$ are parallel. For each $t\in \mathbb{R},$ using the line $L_{t}$ containing $g(t)$ and parallel to $L$ we obtain a new parametrization for $f$ by setting $f(t):=L_{t}\cap \mathrm{Im}f.$ For all $t\in \mathbb{R},$ by similarity of the (Euclidean) triangles $\bigl(\xi
,f(t),g(t)\bigr)$ and $\bigl(\xi,f(0),g(0)\bigr),$ we have $$h(f(0),f(t))=h(g(0),g(t))=t$$ hence $f$ is re-parametrized by arc length.
Pick an arbitrary sequence $\left \{ t_{n}\right \} $ of positive reals converging to infinity. For each $n\in \mathbb{N},$ the geodesic segment $[f(t_{n}),g(t_{n})]$ determines two points in $\partial \Omega$ denoted by $x_{n}$ and $y_{n}$ so that $\left[ x_{n},g\left( t_{n}\right) \right] $ contains $f\left( t_{n}\right) $ and does not contain $y_{n}.$ For each $n>0$ extend the (Euclidean) segment $[\xi,x_{n}]$ (resp. $[\xi ,y_{n}]$) and denote by $x_{n}^{\prime}$ (resp. $y_{n}^{\prime}$) its intersection with the line $L_{0}.$ All the above notation is displayed in Figure \[quardangle\].
Denote by $T_{n}$ (resp. $T_{n}^{\prime}$) the triangle with vertices $\xi,x_{n}$ and $y_{n}$ (resp. $\xi ,x_{n}^{\prime}$ and $y_{n}^{\prime}$). Denote by $h_{T_{n}}$ and $h_{T_{n}^{\prime}}$ the corresponding Hilbert metrics. We have $$\begin{split}
h(f\left( t_{n}\right) ,g\left( t_{n}\right) )=h_{T_{n}}(f\left(
t_{n}\right) , & g\left( t_{n}\right) )=\log[f\left( t_{n}\right)
,g\left( t_{n}\right) ,y_{n},x_{n}]\\
& =\log[f\left( 0\right) ,g\left( 0\right) ,y_{n}^{\prime},x_{n}^{\prime
}]=h_{T_{n}^{\prime}}(f\left( 0\right) ,g\left( 0\right) ).
\end{split}$$ Hence, it suffices to show that $h_{T_{n}^{\prime}}(f\left( 0\right)
,g\left( 0\right) )\rightarrow0$ as $n\rightarrow \infty.$
As $\partial \Omega$ is smooth at $\xi$ and $\ell_{\xi}\cap \partial
\Omega=\left \{ \xi \right \} $ the angle $\theta(x_{n})$ (resp. $\theta
(y_{n})$) formed by $\ell_{\xi}$ and the segment $[x_{n},\xi]$ (resp. $[y_{n},\xi]$) is well defined. Clearly, $$\theta(x_{n})\rightarrow0\mathrm{\ \ and\ \ }\theta(y_{n})\rightarrow
0\mathrm{\ \ as\ \ }n\rightarrow \infty$$ which implies that $$|x_{n}^{\prime}-f\left( 0\right) |\rightarrow \infty \mathrm{\ \ and\ \ }|y_{n}^{\prime}-g\left( 0\right) |\rightarrow \infty.$$ Therefore, both fractions $$\begin{split}
\frac{|x_{n}^{\prime}-g\left( 0\right) |}{|x_{n}^{\prime}-f\left( 0\right)
|}= & \frac{|x_{n}^{\prime}-f\left( 0\right) | + |f\left( 0\right)
-g\left( 0\right) |}{|x_{n}^{\prime}-f\left( 0\right) |}\mathrm{\ ,\ }\\
& \frac{|y_{n}^{\prime}-f\left( 0\right) |}{|y_{n}^{\prime}-g\left(
0\right) |}=\frac{|y_{n}^{\prime}-g\left( 0\right) |+|f\left( 0\right)
-g\left( 0\right) |}{|y_{n}^{\prime}-g\left( 0\right) |}\end{split}$$ converge to $1$ and hence $$h_{T_{n}^{\prime}}(f\left( 0\right) ,g\left( 0\right) )=\log \left(
\frac{|x_{n}^{\prime}-g\left( 0\right) |\, \,|y_{n}^{\prime}-f\left(
0\right) |}{|x_{n}^{\prime}-f\left( 0\right) |\, \,|y_{n}^{\prime}-g\left(
0\right) |}\right) \rightarrow0.$$ We now treat the case where $\xi$ is contained in a segment $\sigma
\subset \partial \Omega.$ We may assume that $\sigma$ and the line $L$ containing $f\left( -\infty \right) $ and $g\left( -\infty \right) $ are parallel, otherwise, we may compose by a projective transformation sending the intersection point to infinity. As in the previous case, define a new geodesic parametrization of $f$ by setting $f(t):=L_{t}\cap \mathrm{Im}f$ where $L_{t}$ is the line containing $g(t)$ and parallel to $L.$ Pick a simple closed $C^{1}$ curve $\tau$ with the following properties:
\(1) $\tau$ bounds a convex domain $\Omega^{\prime}\subsetneq \Omega$
\(2) $\tau$ contains $\xi,f\left( -\infty \right) ,g\left( -\infty \right) $
\(3) the tangent line to $\tau$ at $\xi$ contains $\sigma.$ Denote by $h^{\prime}$ the Hilbert distance in $\Omega^{\prime}.$ By the previous case, $$h^{\prime}(f\left( t\right) ,g\left( t\right) )\rightarrow0 \text{\ as\ }
t\rightarrow \infty.$$ Since $\Omega^{\prime}\subset \Omega,$ we have $$h(f\left( t\right) ,g\left( t\right) )\leq h^{\prime}(f\left(
t\right) ,g\left( t\right) )$$ for all $t,$ which completes the proof of the proposition.
If $\partial \Omega$ is not smooth at $\xi$ then the distance function is bounded away from $0.$ To see this, pick two distinct support lines $\ell
_{1},\ell_{2}$ at $\xi.$ One of them, say $\ell_{1},$ intersects $L$ at a point, say $A.$ Then, a fraction involving the sines of the angles formed by the segments $\left[ A,\xi \right] ,\left[ f\left( -\infty \right)
,\xi \right] $ and $\left[ g\left( -\infty \right) ,\xi \right] $ at $\xi$ is a lower bound for the distance function.
Let $f,g$ be two asymptotic geodesic lines with common boundary point $\xi=f(+\infty)=g(+\infty)\in \partial \Omega.$ Assume that $\partial \Omega$ is $C^1$ at $\xi.$ Then $$\lim_{t\rightarrow \infty}h\left( f(t),g(t)\right) =C$$ for some non-negative real $C.$
In the proof of Proposition \[dzero\], a new parametrization for $f$ was defined. Denote by $\overline{f}$ the same geodesic line with the new parametrization and let $C$ be the unique real number so that $f\left(
C\right) =\overline{f}\left( 0\right) .$ Clearly, $f\left( t+C\right) =$ $\overline{f}\left( t\right) ,$ hence,$$\begin{array}
[c]{lll}h\left( f(t),g(t)\right) & \leq & h\left( f\left( t\right) ,f\left(
t+C\right) \right) +h\left( f\left( t+C\right) ,g\left( t\right)
\right) \\
& \leq & \left \vert C\right \vert +h\left( \overline{f}\left( t\right)
,g\left( t\right) \right)
\end{array}$$ and $$\begin{array}
[c]{lll}h\left( f(t),g(t)\right) & \geq & h\left( f\left( t\right) ,f\left(
t+C\right) \right) -h\left( f\left( t+C\right) ,g\left( t\right)
\right) \\
& = & \left \vert C\right \vert -h\left( \overline{f}\left( t\right)
,g\left( t\right) \right)
\end{array}$$ where $h\left( \overline{f}\left( t\right) ,g\left( t\right) \right)
\rightarrow0$ as $t\rightarrow \infty.$
Convexity of the distance function
==================================
We start with an elementary lemma concerning the Hilbert distance.
\[xift\] Let $\Omega$ be a convex domain and $f$ a geodesic line in $\Omega$ such that $\left \vert f\left( -\infty \right) -f\left( 0\right)
\right \vert =\left \vert f\left( +\infty \right) -f\left( 0\right)
\right \vert =1/2.$ Then for any $t>0$ the Euclidean distance $E\left(
t\right) =\left \vert f\left( t\right) -f\left( +\infty \right) \right \vert
$ is given by the formula $$E\left( t\right) =\frac{1}{e^{t}+1}.$$
The proof is straightforward by the definition of the Hilbert metric:$$h\left( f\left( 0\right) ,f\left( t\right) \right) =\log \frac{\left \vert
f\left( -\infty \right) -f\left( t\right) \right \vert \cdot \left \vert
f\left( +\infty \right) -f\left( 0\right) \right \vert }{\left \vert f\left(
-\infty \right) -f\left( 0\right) \right \vert \cdot \left \vert f\left(
+\infty \right) -f\left( t\right) \right \vert }=\log \frac{\left( 1-E\left(
t\right) \right) \cdot \frac{1}{2}}{\frac{1}{2} \cdot E\left( t\right) }$$ hence $\displaystyle
e^{t}=e^{h\left( f\left( 0\right) ,f\left( t\right) \right) }=
\frac{1-E\left( t\right) }{E\left( t\right) }$ and $\displaystyle E\left(
t\right) =\frac{1}{e^{t}+1}.$
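A quick numerical sanity check of this formula (again only an illustration, not part of the proof) can be done by inverting the defining cross-ratio relation numerically and comparing with the closed form; `brentq` and the bracketing interval are arbitrary implementation choices.

```python
import numpy as np
from scipy.optimize import brentq

def E_from_cross_ratio(t):
    """Invert t = h(f(0), f(t)) = log((1-E) * (1/2) / ((1/2) * E)) numerically,
    for a chord of Euclidean length 1 with f(0) at its midpoint."""
    return brentq(lambda E: np.log((1.0 - E) * 0.5 / (0.5 * E)) - t,
                  1e-12, 0.5 - 1e-12)

for t in (0.5, 1.0, 3.0, 8.0):
    assert abs(E_from_cross_ratio(t) - 1.0 / (np.exp(t) + 1.0)) < 1e-9
```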
We also need an elementary calculus lemma.
\[convy\] There exists $T>0$ such that the following function is convex $$\phi:\left[ T,+\infty \right) \rightarrow \mathbb{R}:\phi \left( t\right)
=\log \frac{\beta+\left( \alpha+\frac{1}{2}\right) E\left( t\right) }{\beta+\left( \alpha-\frac{1}{2}\right) E\left( t\right) }$$ where $\alpha,\beta$ are real numbers with $\beta>0$ and $E\left( t\right)
=\frac{1}{e^{t}+1}.$
![The four generic positions of the sides of $\partial \Omega$ with respect to $\xi$ and the coordinate axes (Theorem 1a).[]{data-label="figcases"}](conv1.eps)
An elementary calculation shows that $$\displaystyle \phi^{\prime}\left( t\right)=
\frac{\beta E^{\prime}\left( t\right)}{\left[ \beta+\left(
\alpha+\frac{1}{2}\right) E\left( t\right) \right] \left[
\beta+\left( \alpha-\frac{1}{2}\right) E\left( t\right) \right]}$$ and the second derivative of $\phi$ is $$\phi^{\prime \prime}\left( t\right) =\frac{\Phi}{\left[ \beta+\left(
\alpha+\frac{1}{2}\right) E\left( t\right) \right] ^{2}\left[
\beta+\left( \alpha-\frac{1}{2}\right) E\left( t\right) \right] ^{2}}$$ where the numerator $\Phi$ is\
$\Phi=\beta \left[ \beta+\left( \alpha+\frac{1}{2}\right)
E\left( t\right) \right] \left[ \beta+\left( \alpha-\frac{1}{2}\right)
E\left( t\right) \right] E^{\prime \prime}\left( t\right) $\
$-\beta E^{\prime}\left( t\right) \left( \alpha+\frac{1}{2}\right) E^{\prime}\left( t\right) \left[ \beta+\left( \alpha-\frac
{1}{2}\right) E\left( t\right) \right] $\
$-\beta
E^{\prime}\left( t\right) \left( \alpha-\frac{1}{2}\right) E^{\prime
}\left( t\right) \left[ \beta+\left( \alpha+\frac{1}{2}\right) E\left(
t\right) \right] .$\
As the asymptotic behavior of $E\left( t\right) ,E^{\prime}\left( t\right)
$ and $E^{\prime \prime}\left( t\right) $ is $\frac{1}{e^{t}},\frac{-1}{e^{t}}$ and $\frac{1}{e^{t}}$ respectively, the dominant summand of $\Phi$ is $$\beta^{3}E^{\prime \prime}\left( t\right) =\beta^{3}\frac{e^{3t}-e^{t}}{\left( e^{t}+1\right) ^{4}}.$$ Since $\beta>0,$ this completes the proof.
\
**Proof of Theorem \[mainth\](a).**
Let $\Omega$ be the interior of a convex polygon in $\mathbb{R}^{2}$ and $f,g$ two asymptotic geodesic lines with common boundary point $\xi=f(+\infty
)=g(+\infty)\in \partial \Omega.$ Recall that a projective transformation preserves straight lines, convexity and cross ratio of four colinear points. Moreover, a projective transformation is uniquely determined by its image on four points provided that no three of them are colinear. As the latter property is satisfied by the points $f(-\infty),$ $g(-\infty),$ $f(0)$ and $g(0),$ we may assume, after composing by the appropriate projective transformation that the coordinates of the four points mentioned above are:\
$f\left( -\infty \right) \equiv \left( \frac{1}{2},\frac{\sqrt{3}}{2}\right) ,g(-\infty)\equiv \left( -\frac{1}{2},\frac{\sqrt{3}}{2}\right)
,f(0)\equiv \left( \frac{1}{4},\frac{\sqrt{3}}{4}\right) \text{\ \ and\ \ }g(0)\equiv \left( -\frac{1}{4},\frac{\sqrt{3}}{4}\right) .$\
In particular, the point $\xi=f(+\infty)=g(+\infty)$ is the point $\left(
0,0\right) $ and the points $\xi,f\left( -\infty \right) ,g\left(
-\infty \right) $ form an equilateral triangle with side length $1.$ There are four generic cases to examine depending on whether the point $\xi \equiv \left( 0,0\right) $ is a vertex of the polygon $\partial \Omega$ or not and whether the side containing $\xi$ intersects the $x-$axis only at $\xi$ or not. In Figure \[figcases\] the thick segments represent sides of $\partial \Omega$ demonstrating the four cases.
![The calculation of the Hilbert distance $h\left(
f(t),g(t)\right) $ (Theorem 1a)[]{data-label="isosc1a"}](conv2.eps)
\[isosc\]
Clearly, in Case IV the distance $h\left( f(t),g(t)\right) $ is constant because for all sufficiently large $t\neq t^{\prime}$ the lines containing $f(t),g(t)$ and $f(t^{\prime}),g(t^{\prime})$ are parallel. We will deal in detail with Case I and the arguments will suffice for the remaining cases II and III.
By Lemma \[xift\] we have$$\left \vert f\left( t\right) -\xi \right \vert =\left \vert g\left( t\right)
-\xi \right \vert =\frac{1}{e^{t}+1}\equiv E(t).$$ Let $\left( 0,y\left( t\right) \right) $ be the intersection point of the line containing $f\left( t\right) ,g\left( t\right) $ with the $y-$axis. As the triangle formed by $\xi,g\left( t\right) ,f\left( t\right) $ is equilateral we have $$y\left( t\right) =E\left( t\right) \frac{\sqrt{3}}{2}=\frac{\sqrt{3}/2}{e^{t}+1}.$$ Then the unique side of $\partial \Omega$ not containing $\xi$ and intersecting the $x-$axis has the form$$z\left( t\right) =\left( \alpha y\left( t\right) +\beta,y\left(
t\right) \right)$$ and the side containing $\xi$ has the form$$w\left( t\right) =\left( - \alpha^{\prime}y\left( t\right) ,y\left(
t\right) \right)$$ for $\alpha \in \mathbb{R},$ $\beta>0$ and $\alpha^{\prime}>\frac{\sqrt{3}}{3}
.$ The latter holds because the angle formed at $\xi$ by the segment $\left[ \xi,w\left( t\right) \right] $ and the positive $x$-axis belongs to $\left( \frac{2\pi}{3},\pi \right) .$ All the above notation is visualized in Figure \[isosc1a\].
We next compute the Euclidean distances involved in the definition of the distance $h\left( f\left( t\right) ,g\left( t\right) \right) :$$$\begin{array}
[c]{ccl}\left \vert z(t)-f\left( t\right) \right \vert & = & \alpha y\left(
t\right) +\beta-\frac{1}{2}\left \vert g\left( t\right) -f\left( t\right)
\right \vert \, \,=\\[2mm]
& = & \alpha \frac{\sqrt{3}/2}{e^{t}+1}+\beta-\frac{1}{2}\frac{1}{e^{t}+1}=\left( \frac{\alpha \sqrt{3}}{2}-\frac{1}{2}\right) E\left( t\right)
+\beta \\[2mm]\left \vert z(t)-g\left( t\right) \right \vert & = & \left( \frac
{\alpha \sqrt{3}}{2}+\frac{1}{2}\right) E\left( t\right) +\beta \\[2mm]\left \vert w(t)-g\left( t\right) \right \vert & = & \alpha^{\prime}y\left(
t\right) -\frac{1}{2}\left \vert g\left( t\right) -f\left( t\right)
\right \vert \, \, =\\[2mm]
& = & \alpha^{\prime}\frac{\sqrt{3}/2}{e^{t}+1}-\frac{1}{2}\frac{1}{e^{t}+1}=\left( \frac{\alpha^{\prime}\sqrt{3}}{2}-\frac{1}{2}\right) E\left(
t\right) \\[2mm]\left \vert w(t)-f\left( t\right) \right \vert & = & \left( \frac
{\alpha^{\prime}\sqrt{3}}{2}+\frac{1}{2}\right) E\left( t\right)
\end{array}$$ It follows that $$h\left( f\left( t\right) ,g\left( t\right) \right) =\log \frac{\left(
\frac{\alpha \sqrt{3}}{2}+\frac{1}{2}\right) E\left( t\right) +\beta
}{\left( \frac{\alpha \sqrt{3}}{2}-\frac{1}{2}\right) E\left( t\right)
+\beta}+\log A^{\prime} \text{\ \ where\ \ }A^{\prime}=\frac{\frac{\alpha^{\prime}\sqrt{3}}{2}+\frac{1}{2}}{\frac
{\alpha^{\prime}\sqrt{3}}{2}-\frac{1}{2}}.$$ The second summand is constant, hence trivially convex; note that $\alpha^{\prime} >
\frac{\sqrt{3}}{3}$ guarantees $A^{\prime}>0,$ so its logarithm is defined. There exists, by Lemma \[convy\], $T>0$ such that the first summand is convex for $t>T$ as required. This completes the proof of part (a).\
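As a sanity check, the convexity of the first summand for large $t$ can also be observed numerically. The following short sketch (our own illustration, not part of the proof) evaluates its second difference; the values $\alpha=1$ and $\beta=1$ are sample choices made only for this experiment.

```python
# Numerical sketch (ours): second differences of the first summand
#   log( ((alpha*sqrt(3)/2 + 1/2) E(t) + beta) / ((alpha*sqrt(3)/2 - 1/2) E(t) + beta) )
# for the sample values alpha = 1, beta = 1.
import numpy as np

alpha, beta = 1.0, 1.0
A1, A2 = alpha * np.sqrt(3) / 2 + 0.5, alpha * np.sqrt(3) / 2 - 0.5

def first_summand(t):
    E = 1.0 / (np.exp(t) + 1.0)
    return np.log((A1 * E + beta) / (A2 * E + beta))

eps = 1e-2
for t in [2.0, 4.0, 6.0, 8.0]:
    second_diff = (first_summand(t + eps) - 2 * first_summand(t)
                   + first_summand(t - eps)) / eps**2
    print(t, second_diff)     # positive for large t, in accordance with Lemma [convy]
```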
**Proof of Theorem \[mainth\](b).** Let $\Omega$ be a convex domain in $\mathbb{R}^{2}$ with $C^{2}$ boundary and $f,g$ two asymptotic geodesic lines with common boundary point $\xi=f(+\infty)=g(+\infty)\in \partial \Omega.$ We will need the following well known Lemma whose proof is included for the reader’s convenience.
\[curvat\] Let $p$ be a projective transformation of $\mathbb{R}^{2}$ sending $\partial \Omega$ to a bounded curve. If the curvature of $\partial \Omega$ at the point $\xi \in \partial \Omega$ is not zero then the same holds for the point $p(\xi) \in p \left( \partial \Omega \right) .$
**Proof of Lemma.** Identify $\mathbb{R}^{2} $ with the plane $\Pi =\left\{ (x,y,z)|z=1 \right\}$ in the real projective space $\mathbb{R}P^{2} =\mathbb{R}^{3}- \{ (0,0,0)\} / \sim $ whose points are rays emanating from the origin. The image of $\partial \Omega$ under a projective transformation can be taken as the composition of
- an invertible linear transformation $A$ of $\mathbb{R}^3,$ and
- the projection of $A\left(\partial \Omega\right)\subset A\left(\Pi\right)$ onto $\Pi$ along the rays through the origin.
Since an invertible linear transformation of $\mathbb{R}^3$ sends $C^2$ curves to $C^2$ curves and preserves the non-vanishing curvature property, it suffices to check the desired property for the projection $A\left(\Pi\right)\longrightarrow \Pi .$
To see this, let $E_1 , E_2$ be two hyperplanes intersecting a $C^2$ cone $K$ through the origin and denote by $\sigma_i $ the simple closed convex $C^2$ curve determined by the intersection $E_i \cap K, i=1,2.$ Moreover, as $K$ is convex, the curvature of $\sigma_i $ at any point is $\geq 0. $ Let $\ell$ be a line through the origin contained in $K$ intersecting $\sigma_i$ at the point $\xi _i , i=1,2,$ and let $E^{\prime}_2$ be the hyperplane containing $\xi_2$ and parallel to $E_1.$ Denote by $\kappa (\xi _i)$ the curvature of $\sigma_i$ at $\xi_i$ and by $\kappa^{\prime} (\xi _2)$ the curvature of the curve $E^{\prime}_2 \cap K$ at $\xi_2 .$ Assume $\kappa (\xi _1)\neq 0.$ Since $E^{\prime}_2$ is parallel to $E_1,$ the curve $E^{\prime}_2 \cap K$ is a homothetic copy of $\sigma_1,$ hence $\kappa^{\prime} (\xi _2) \neq 0. $ We claim that $\kappa (\xi _2)\neq 0:$ if $\kappa (\xi _2)= 0$ then, since the ruling line $\ell \subset K$ contains $\xi_2 ,$ both principal curvatures of $K$ at $\xi_2$ would be $0,$ contradicting the fact that $\kappa^{\prime} (\xi _2) \neq 0. $
------------------------------------------------------------------------
\
![Cases of intersection of $\partial\Omega$ with the $x-$axis (Theorem 1b).[]{data-label="fig12cases"}](conv12.eps)
Returning to the proof of Theorem \[mainth\](b), the points $f\left(
-\infty \right) ,g(-\infty),f(0)$ and $g(0)$ form a non-trivial quadrilateral. After composing with a projective transformation we may assume that the coordinates of the four points mentioned above are: $$\begin{split}
f\left(-\infty\right)\equiv\left(\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2}\right),
g(-\infty)\equiv \left( -\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2}\right) ,
& f(0) \equiv \left( \frac{\sqrt{2}}{4},\frac{\sqrt{2}}{4}\right)
\\ & \text{\ \ and\ \ }
g(0)\equiv \left( -\frac{\sqrt{2}}{4},\frac{\sqrt{2}}{4}\right).
\end{split}$$ In particular, the point $\xi=f(+\infty)=g(+\infty)$ is the point $\left(
0,0\right) $ and the points $\xi,f\left( -\infty \right) ,g\left(
-\infty \right) $ form a right isosceles triangle. There are three cases to examine depending on the intersection of $\partial\Omega$ with the $x-$axis:
Case 1: $\partial\Omega \cap \left\{y=0\right\} = \left\{\xi\right\}.$
Case 2: $\partial\Omega \cap \left\{y=0\right\} =
\left\{\xi,\left(\lambda,0\right)\bigm\vert \lambda>0 \right\}.$
Case 3: $\partial\Omega \cap \left\{y=0\right\} =
\left\{\xi,\left(\lambda,0\right)\bigm\vert \lambda<0 \right\}.$\
In Figure \[fig12cases\], these cases are demonstrated with the additional consideration, in Case 2, of two subcases (2a and 2b defined below) depending on the tangent line at the point $\left(\lambda,0\right) .$
The Euclidean line containing $f(t)$ and $g(t)$ intersects the $y-$axis at the point $\left(0,y(t)\right)$ where, by Lemma \[xift\], $\left| f(t) -\xi \right|=\frac{1}{e^{t}+1},$ hence, $$y(t)=\frac{\sqrt{2}/2}{e^{t}+1}.\label{ytlength}$$ We also have that when $t\rightarrow+\infty$ $$\begin{split}
ye^{t}=\frac{\sqrt{2}/2}{e^{t}+1}e^t\longrightarrow \sqrt{2}/2,\, \, \, \,y^{\prime}e^{t}= &
\frac{-(\sqrt{2}/2)e^{t}}{\left( e^{t}+1\right) ^{2}}e^{t}\longrightarrow
-\sqrt{2}/2\text{\ and\ }\\
& y^{\prime \prime}e^{t}=\frac{\sqrt{2}}{2}\frac{e^{2t}-e^{t}}{\left( e^{t}+1\right) ^{3}}e^{t}\longrightarrow \sqrt{2}/2
\end{split}
\label{e2t}$$ The Euclidean line containing $f(t)$ and $g(t)$ also intersects $\partial\Omega$ at two points with coordinates $\left(x(t),y(t)\right) ,x(t)>0$ and $\left(\overline{x}(t),y(t)\right) ,\overline{x}(t)<0.$ For $t$ large enough, $y$ is a function of $x,$ say, $y(t)=K\left( x(t)\right) $ for some 1-1 and $C^{2}$ function $K$ and, hence, $x$ is a function of $y,$ namely, $x(t)=K^{-1}\left( y(t)\right) .$ Similarly, for $t$ large enough, $y(t)=\overline{K}\left( \overline{x}(t)\right) $ for some 1-1 and $C^{2}$ function $\overline{K}$ and $\overline{x}$ is a function of $y,\overline{x}(t)=\overline{K}^{-1}\left( y(t)\right) .$\
The Hilbert distance of $f(t),g(t)$ is given by $$h\left( f\left( t\right) ,g\left( t\right) \right) =\log \frac
{x(t)+y(t)}{x(t)-y(t)}+\log \frac{\left|\overline{x}(t)\right|+y(t)}{\left|\overline{x}(t)\right| -y(t)}\label{hdist}$$ and we denote by $\phi (t)$ and $\overline{\phi} (t)$ the first and second summand respectively. We re-write the above mentioned Cases using the notation just introduced:\
1. Both $x(t), \overline{x}(t)\longrightarrow 0$ as $t\rightarrow +\infty .$
2. $x(t) \rightarrow \lambda >0$ and $ \overline{x}(t)\longrightarrow 0$ as $t\rightarrow +\infty .$
3. $x(t) \rightarrow0$ and $ \overline{x}(t)\longrightarrow \lambda^{\prime}<0$ as $t\rightarrow +\infty .$
\
It suffices to deal only with the convexity of $\phi (t)$ in all three cases: in Case 1 the proof for the convexity of $\overline{\phi}$ is identical with that of $\phi$ and convexity of $\overline{\phi}$ in Case 2 (resp. Case 3) follows from convexity of $\phi$ in Case 3 (resp. Case 2).
We will suppress the parameter $t$ and we will be writing $\frac
{dy}{dx}$ instead of $\frac{dK}{dx}$ and $\frac{dx}{dy}$ instead of $\frac{d\left(K^{-1}\right)}{dy}.$ By the following calculation $$\begin{split} \frac{d^{2}x}{dy^{2}} = \frac{d}{dy}\left( \frac{dx}{dy}\right)
= \frac{d}{dy}\left( \frac{1}{\left( \frac{dy}{dx}\right)}\right)
= -\frac{1}{\left( \frac{dy}{dx}\right) ^2} \frac{d^{2}y}{dx^{2}}\frac{dx}{dy}
=- \frac{d^{2}y}{dx^{2}} \left( \frac{dx}{dy}\right) ^3
\end{split}$$ we have the formula $$\frac{d^{2}x}{dy^{2}}=-\frac{d^{2}y}{dx^{2}}\left( \frac{dx}{dy}\right)
^{3}\label{fir}$$ First and second derivatives of $\phi(t)$ are as follows: $$\phi^{\prime}(t)=2 \left( \frac{y}{x}\right) ^{\prime}
\frac{1}{1-\left(\frac{y}{x}\right)^2}
\text{\ \ and\ \ }\phi^{\prime \prime}(t)=2\left( \frac{y}{x}\right)
^{\prime \prime}\frac{1}{1-\left(\frac{y}{x}\right)^2}+
2 \left(\frac{y}{x}\right)^{\prime}
\frac{2\frac{y}{x}\left( \frac{y}{x}\right) ^{\prime}}{\left(1-\left(\frac{y}{x}\right)^2 \right)^2}.$$ As the slope of the geodesic line $f$ is $1,$ it suffices to show that for $t$ large enough $\left( \frac{y}{x}\right) ^{\prime \prime}>0.$ We have the following calculations
$$\begin{aligned}
\left( \frac{y}{x}\right) ^{\prime} =
\frac{y^{\prime}x-yx^{\prime}}{x^{2}}=
\frac{y^{\prime}x-\frac{dx}{dy}
y^{\prime}y}{x^{2}}\,\,\,=\,\,\,\frac{x-\frac{dx}{dy}y}{x^{2}} y^{\prime}
\nonumber \\[3mm]
\left( \frac{y}{x}\right) ^{\prime \prime}\! \!x^{2}\frac{dy}{dx}=
\left[-\frac{dy}{dx}\frac{d^{2}x}{dy^{2}}y-2+2\frac{y}{x}\frac{dx}{dy}\right]
\left(y^{\prime}\right)^{2}+\left( x\frac{dy}{dx}-y\right)y^{\prime \prime}
\label{first} \\[3mm]
\left( \frac{y}{x}\right) ^{\prime \prime}\! \!x^{2}\frac{dy}{dx}
\stackrel{by\ (\ref{fir})}{=}
\left[ \left(\frac{dy}{dx}\right)^{-2}\frac{d^{2}y}{dx^{2}}y-2+2\frac{y}{x}\frac{dx}{dy}\right]
\left(y^{\prime}\right)^{2}+\left( x\frac{dy}{dx}-y\right)y^{\prime \prime}
\label{second}\end{aligned}$$
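The identity (\[fir\]) and the displayed formula for $\phi^{\prime}$ can be double-checked symbolically. The sketch below (ours) tests (\[fir\]) on the concrete invertible function $y=e^{x}$ and verifies the expression for $\phi^{\prime}$ with $x(t),y(t)$ treated as abstract functions; the formula for $\phi^{\prime \prime}$ then follows by differentiating the verified identity once more.

```python
# Symbolic check (ours) of two identities used above.
import sympy as sp

# (i) d^2x/dy^2 = -(d^2y/dx^2)(dx/dy)^3, tested on y = exp(x), so that x = log(y)
x, Y = sp.symbols('x Y', positive=True)
d2x_dy2 = sp.diff(sp.log(Y), Y, 2).subs(Y, sp.exp(x))            # computed from the inverse
rhs = -sp.diff(sp.exp(x), x, 2) * (1 / sp.diff(sp.exp(x), x))**3
print(sp.simplify(d2x_dy2 - rhs))                                # expected output: 0

# (ii) phi' = 2 (y/x)' / (1 - (y/x)^2) for phi = log((x+y)/(x-y)), with x = x(t), y = y(t)
t = sp.symbols('t')
xf, yf = sp.Function('x')(t), sp.Function('y')(t)
u = yf / xf
phi = sp.log((xf + yf) / (xf - yf))
print(sp.simplify(sp.diff(phi, t) - 2 * sp.diff(u, t) / (1 - u**2)))   # expected output: 0
```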
**Case 1:** In this case the $x-$axis is the tangent line to $\partial \Omega$ at $\xi = (0,0)$ and using Lemma \[curvat\] and our curvature hypothesis we have that $$\left. \frac{d^{2}y}{dx^{2}}\right \vert _{x=0}\neq0 \text{\ \ and\ \ }
\left. \frac{dy}{dx}\right \vert _{x=0}=0.$$ Multiplying both sides of equation (\[second\]) by $e^{2t}$ we have $$\left( \frac{y}{x}\right) ^{\prime \prime}\! \!x^{2}\frac{dy}{dx}e^{2t}
=
\left[ \vphantom{\left(\frac{dy}{dx}\right)^{-2}\frac{d^2 x}{dy^2}} \right.
\underbrace{\left(\frac{dy}{dx}\right)^{-2}\frac{d^{2}y}{dx^{2}}y}_{\underline{\text{term1}}}
-2+2
\underbrace{\frac{y}{x}\frac{dx}{dy}}_{\underline{\text{term2}}}\left.
\vphantom{\left(\frac{dy}{dx}\right)^{-2} \frac{d^2 x}{dy^2}y}\right]
\left(y^{\prime}\right)^{2}e^{2t}+
\left( \vphantom{ x\frac{dy}{dx}e^t -ye^t}\right.
\underbrace{x\frac{dy}{dx}e^{t}}_{\underline{\text{term3}}}-ye^{t}\left.
\vphantom{ x\frac{dy}{dx}e^t -ye^t}\right) y^{\prime \prime}e^{t}.\label{basiceq}$$ We will show that $\underline{\text{term1}}$ and $\underline{\text{term2}}$ both converge to $1/2$ as $t\rightarrow+\infty$ and $\underline{\text{term3}}$ converges to $\sqrt{2}.$ Then using (\[e2t\]) the right hand side of (\[basiceq\]) converges to $\left( \frac{1}{2}-2+2\frac{1}{2}\right)\left(-\sqrt{2}/2\right)^{2} +
\left(\sqrt{2}-\sqrt{2}/2\right)\left(\sqrt{2}/2\right)=\frac{1}{4}>0.$ This shows that $\left( \frac{y}{x}\right)^{\prime \prime}>0$ which in turn implies that $\phi(t)$ is convex for large enough $t.$ In the following calculations limits are always taken as $t\rightarrow+\infty$ or, equivalently, $x,y\rightarrow0,$ and the symbol $\sim$ between two functions indicates that their limits as $t\rightarrow+\infty$ are equal. $${\underline{\text{term1}}}=
\frac{d^{2}y}{dx^{2}}\frac{y}{\left( \frac{dy}{dx}\right)^{2}}
\sim
\frac{d^{2}y}{dx^{2}}\frac{y^{\prime}}{2\frac{dy}{dx}\left( \frac{dy}{dx}\right)^{\prime}}
=
\frac{d^{2}y}{dx^{2}}\frac{y^{\prime}}{2\frac{dy}{dx}\frac{d^{2}y}{dx^{2}}\frac{dx}{dy}y^{\prime}}
\longrightarrow \frac{1}{2}.$$ The more tedious calculation for $\underline{\text{term2}}$ is as follows: $$\begin{split}
{\underline{\text{term2}}}=\frac{y}{x}\frac{dx}{dy}
\sim &
\frac{\left(\frac{y}{x}\right)^{\prime}}{\left(\frac{dy}{dx}\right)^{\prime}}
=
\frac{\left(x-y\frac{dx}{dy}\right)y^{\prime}x^{-2}}{\frac{d^{2}y}{dx^{2}}\frac{dx}{dy}y^{\prime}}
=
\left(\frac{d^{2}y}{dx^{2}}\right)^{-1}\frac{x\frac{dy}{dx}-y}{x^{2}}
\\ &
\begin{split}
\sim
\left(\frac{d^{2}y}{dx^{2}}\right)^{-1} & \frac{\left(x\frac{dy}{dx}-y\right) ^{\prime}}{\left( x^{2}\right) ^{\prime}}
=\left( \frac{d^{2}y}{dx^{2}}\right) ^{-1}\frac{
\frac{dx}{dy}y^{\prime}\frac{dy}{dx}+x\frac{d^{2}y}{dx^{2}}\frac{dx}{dy}y^{\prime}-y^{\prime}}{2x\frac{dx}{dy}y^{\prime}} \\ & =
\left( \frac{d^{2}y}{dx^{2}}\right) ^{-1}\frac{1}{2}\frac{d^{2}y}{dx^{2}}=\frac{1}{2}.\end{split}
\end{split}$$ For the calculation of $\underline{\text{term3}}$ first observe that $$\frac{d^{2}y}{dx^{2}}x\frac{dx}{dy}\sim \frac{d^{2}y}{dx^{2}}\frac{x^{\prime}}{\left( \frac{dy}{dx}\right) ^{\prime}}=\frac{d^{2}y}{dx^{2}}\frac
{\frac{dx}{dy}y^{\prime}}{\frac{d^{2}y}{dx^{2}}\frac{dx}{dy}y^{\prime}}=1.\label{help3}$$ Then we have $$\begin{split}
{\underline{\text{term3}}}=x\frac{dy}{dx}e^{t}\sim & \frac{\left( x\frac
{dy}{dx}\right) ^{\prime}}{\left( e^{-t}\right) ^{\prime}}=\frac{\frac
{dx}{dy}y^{\prime}\frac{dy}{dx}+x\frac{d^{2}y}{dx^{2}}\frac{dx}{dy}y^{\prime}}{-e^{-t}}\\
& =-y^{\prime}e^{t}\left( 1+x\frac{d^{2}y}{dx^{2}}\frac{dx}{dy}\right)
\overset{by\ (\ref{e2t}),(\ref{help3})}{\longrightarrow}
-\left(-\sqrt{2}/2\right)(1+1)=\sqrt{2}.
\end{split}$$ This completes the proof for the convexity of $\phi$ in Case 1.\
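The limit $1/4$ for the right hand side of (\[basiceq\]) can be illustrated numerically. In the sketch below (ours) we take the model boundary $y=x^{2}$ near $\xi$, which is tangent to the $x-$axis and has non-vanishing curvature; this particular boundary is an assumption made only for the experiment.

```python
# Numerical illustration (ours) of the Case 1 limit 1/4 in (basiceq), for the
# model boundary y = x^2 and the parametrization y(t) = (sqrt(2)/2)/(e^t + 1).
import sympy as sp

t = sp.symbols('t', positive=True)
y = (sp.sqrt(2) / 2) / (sp.exp(t) + 1)   # height of the chord through f(t), g(t)
x = sp.sqrt(y)                           # boundary point on y = x^2 at that height
dydx = sp.diff(y, t) / sp.diff(x, t)     # slope dy/dx along the boundary

lhs = sp.diff(y / x, t, 2) * x**2 * dydx * sp.exp(2 * t)   # left hand side of (basiceq)
for T in [5, 10, 20, 30]:
    print(T, sp.N(lhs.subs(t, T), 30))                     # approaches 1/4
```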
**Case 2:** In this case $x(t)\rightarrow \lambda>0$ as $t\rightarrow +\infty$ and we have two subcases depending on whether $$\left. \frac{dx}{dy}\right \vert _{y=0}=0 \text{\ equivalently\ }
\left. \frac{dy}{dx}\right \vert _{x=\lambda}=\infty\tag{Subcase 2a}
\label{s2a}$$ or$$\left. \frac{dx}{dy}\right \vert _{y=0}\neq 0 \neq
\left. \frac{dy}{dx}\right \vert _{x=\lambda}\tag{Subcase 2b}\label{s2b}$$ For \[s2a\], multiply both sides of equation (\[second\]) by $e^{t}\frac{dx}{dy}$ to get $$\left( \frac{y}{x}\right) ^{\prime \prime}\! \!x^{2}e^{t}
=
\left[
\left(\frac{dy}{dx}\right)^{-3}\frac{d^{2}y}{dx^{2}}y
-2 \frac{dx}{dy}+2\frac{y}{x}\left(\frac{dx}{dy} \right)^2
\right]
\left(y^{\prime}\right)^{2}e^{t}
+
\left( x-y\frac{dx}{dy} \right) y^{\prime \prime}e^{t}
.\label{basiceq2}$$ By (\[e2t\]), $y^{\prime \prime}e^{t} \rightarrow \sqrt{2}/2$ and by assumptions in this Subcase, $\left( x-y\frac{dx}{dy} \right) \rightarrow
\lambda .$ Moreover, the quantity inside the square bracket is easily seen to be bounded and, since by (\[e2t\]) $\left(y^{\prime}\right)^{2}e^{t}
\rightarrow 0 ,$ it follows that $$\left( \frac{y}{x}\right) ^{\prime \prime}\! \!x^{2}e^{t}
\longrightarrow \lambda\frac{\sqrt{2}}{2}$$ hence, $\left( \frac{y}{x}\right) ^{\prime \prime}
$ is positive for large enough $t.$\
For \[s2b\], observe that $ \left. \frac{dy}{dx}\right \vert _{x=\lambda}$ may be negative. For this reason, we multiply both sides of equation (\[second\]) by $e^{t}$ to get $$\left( \frac{y}{x}\right) ^{\prime \prime}\! \!x^{2}\frac{dy}{dx}e^t
=
\left[
\left(\frac{dy}{dx}\right)^{-2}\frac{d^{2}y}{dx^{2}}y
-2 +2\frac{y}{x}\frac{dx}{dy}
\right]
\left(y^{\prime}\right)^{2}e^{t}
+
\left( x\frac{dy}{dx}-y \right) y^{\prime \prime}e^{t}
.\label{basiceq3}$$ In a similar manner as in the previous subcase we obtain $$\left( \frac{y}{x}\right) ^{\prime \prime}\! \!x^{2}\frac{dy}{dx}e^{t}
\longrightarrow \lambda\left.\frac{dy}{dx}\right\vert_{x=\lambda}\frac{\sqrt{2}}{2}.$$ Dividing by $\frac{dy}{dx},$ whose limit $\left.\frac{dy}{dx}\right\vert_{x=\lambda}$ is nonzero, we obtain $\left( \frac{y}{x}\right) ^{\prime \prime}\! \!x^{2}e^{t}\longrightarrow \lambda\frac{\sqrt{2}}{2}>0,$ hence $\left( \frac{y}{x}\right) ^{\prime \prime}$ is positive for large enough $t.$ This completes the proof of the convexity of $\phi$ in Case 2.\
**Case 3:** In this case $x(t)\rightarrow 0$ as $t\rightarrow +\infty$ and $\left.\frac{dy}{dx}\right\vert_{x=0}\in (0,1)$ because the slope of the geodesic line $f$ is $1.$ We have the following preliminary calculations
![The thick segments have equal Euclidean length $E(t)=\frac{1}{e^{t}+1}.$[]{data-label="example1"}](conv3.eps)
$$\begin{split}
\frac{x\frac{dy}{dx}-y}{e^{-2t}}
\sim &
\frac{x\frac{d^2y}{dx^2}\frac{dx}{dy}y^{\prime}}{-2e^{-2t}}
=
-\frac{1}{2}\,\, \frac{d^2y}{dx^2}\,\,\frac{dx}{dy}\,\,
\left(e^t x\right) \left(e^t y^{\prime}\right) \\ &
\sim
-\frac{1}{2}\,\, \frac{d^2y}{dx^2}\,\,\frac{dx}{dy}\,\,
\left(\frac{dx}{dy}\frac{\sqrt{2}}{2}\right) \left(-\frac{\sqrt{2}}{2}\right)
=\frac{1}{4}\frac{d^2y}{dx^2}\left(\frac{dx}{dy}\right)^2 >0
\end{split}\label{mple}$$
where we used that $e^t x\sim \frac{dx}{dy}\frac{\sqrt{2}}{2}$ and $e^ty^{\prime}\rightarrow - \frac{\sqrt{2}}{2} .$ In a similar manner we obtain $$\frac{x-\frac{dx}{dy}y}{x^2} \sim
-\frac{1}{2} \left(\frac{dy}{dx}\right)^2\frac{d^2x}{dy^2}
\label{mauro}$$ We now multiply both sides of equation (\[first\]) by $e^{3t}$ to get $$\left( \frac{y}{x}\right) ^{\prime \prime}\! \!x^{2}\frac{dy}{dx}e^{3t}
=
\left[
-\frac{dy}{dx}\frac{d^{2}x}{dy^{2}}ye^t
+2\frac{-1+\frac{y}{x}\frac{dx}{dy}}{e^{-t}}
\right]
\left(y^{\prime}\right)^{2}e^{2t}
+
\frac{x\frac{dy}{dx}-y}{e^{-2t}} y^{\prime \prime}e^{t}
.\label{basiceq4}$$ By (\[mple\]) and the fact $\left(y^{\prime}\right)^{2}e^{2t}
\rightarrow \left(\sqrt{2}/2\right)^2,$ it suffices to show that the term in the square bracket converges to $0$ as $t\rightarrow \infty.$ For the first summand inside the square bracket we have $$-\frac{dy}{dx}\frac{d^{2}x}{dy^{2}}ye^t \rightarrow
-\left.\frac{dy}{dx}\right\vert_{0}\frac{d^{2}x}{dy^{2}} \frac{\sqrt{2}}{2}
\label{paralast}$$ For the second summand inside the square bracket we have $$\begin{split}
2\frac{-1+\frac{y}{x}\frac{dx}{dy}}{e^{-t}} &
\sim
2\frac{\left( \frac{y}{x}\right) ^{\prime } \frac{dx}{dy}
+\frac{y}{x}\frac{d^{2}x}{dy^{2}}y^{\prime}} {-e^{-t}}
=
-2y^{\prime}e^t \left[
\frac{y}{x}\frac{d^{2}x}{dy^{2}} +
\frac{x-\frac{dx}{dy}y}{x^2}\frac{dx}{dy}
\right]\\&
\stackrel{by\ (\ref{mauro})}{\sim}
-2y^{\prime}e^t
\left[
\frac{y}{x}\frac{d^{2}x}{dy^{2}} -\frac{1}{2} \left(\frac{dy}{dx}\right)^2\frac{d^2x}{dy^2} \frac{dx}{dy}
\right] \rightarrow \sqrt{2}
\left[ \frac{1}{2} \left.\frac{dy}{dx}\right\vert_{0} \frac{d^2x}{dy^2}\right]
\end{split}\label{last}$$ By (\[paralast\]) and (\[last\]) the term in the square bracket on the right hand side of (\[basiceq4\]) converges to $0$ as required. This completes the proof in Case 3 and the proof of Theorem 1(b).
------------------------------------------------------------------------
The convexity result posited in Theorem \[mainth\] also holds for bounded convex domains $\Omega$ with piecewise $C^2$ boundary consisting of either segments or $C^2$ curves with non-vanishing curvature. This follows by combining parts (a) and (b) of Theorem \[mainth\] and the fact that the distance function studied in the above proof has two summands, each of which was treated separately and shown to be convex.
Examples
========
We will construct an example which demonstrates the necessity of the curvature condition in Theorem \[mainth\](b). Apart from asymptotic geodesics, there are two more cases: intersecting and disjoint geodesics. In the case where $\Omega$ is a convex polytope, we provide below examples showing that convexity fails both for intersecting and for disjoint geodesics.
**Example 1 (disjoint geodesics):** let $\Omega$ be the interior of the trapezoid with vertices $\left( 2,0\right) ,\left( 3,1\right) ,\left(
-2,1\right) $ and $\left( -1,0\right) .$ Let $f$ be the geodesic whose image is the intersection of $\Omega$ with the line $x=1$ and $f\left(
0\right) =\left( 1,\frac{1}{2}\right) , $ having arc length parametrization. Similarly, let $g$ be the geodesic with image the intersection of $\Omega$ with the line $x=0$ and $g\left( 0\right) =\left(
0,\frac{1}{2}\right) ,$ see Figure \[example1\]. By Lemma \[xift\], the Euclidean distance $\left \vert g\left( t\right) -g\left( +\infty \right) \right \vert $ is $\frac{1}{e^{t}+1}\equiv E\left( t\right) ,$ in other words, the horizontal lines $y=\frac{1}{e^{t}+1},t\in \left( 0,\infty \right) $ intersect the images of the geodesics $f$ and $g$ at the points $f\left( t\right) $ and $g\left(
t\right) $ respectively. The distance function is given by $$h\left( f\left( t\right) ,g\left( t\right) \right) =\log \frac{2+E\left(
t\right) }{1+E\left( t\right) }+\log \frac{2+E\left( t\right) }{1+E\left(
t\right) }.$$ An elementary calculation shows that $$h^{\prime \prime}=2\frac{\Phi}{\left[ 2+E\left( t\right) \right]
^{2}\left[ 1+E\left( t\right) \right] ^{2}}$$ where the dominant summand of $\Phi$ is $$-2E^{\prime \prime}\left( t\right) =-2\frac{e^{2t}-e^{t}}{\left(
e^{t}+1\right) ^{3}}.$$ Thus, for large enough $t,$ the function $h\left( f\left( t\right)
,g\left( t\right) \right) $ is not convex.\
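The formula for $h\left( f(t),g(t)\right) $ in this example, and its failure to be convex, can be confirmed directly from the cross ratio. The sketch below (ours) recomputes the distance from the four collinear points on the chord at height $E(t)$ and estimates the second derivative by central differences.

```python
# Numerical check (ours) of Example 1: Hilbert distance from the cross ratio in the
# trapezoid with vertices (2,0), (3,1), (-2,1), (-1,0), and its second derivative in t.
import numpy as np

def E(t):
    return 1.0 / (np.exp(t) + 1.0)

def hilbert_distance(t):
    y = E(t)
    a, b = np.array([-1.0 - y, y]), np.array([2.0 + y, y])   # chord endpoints on the boundary
    g, f = np.array([0.0, y]), np.array([1.0, y])            # points on the two geodesics
    d = np.linalg.norm
    return np.log((d(a - f) / d(a - g)) * (d(b - g) / d(b - f)))

eps = 1e-2
for t in [1.0, 2.0, 4.0, 6.0]:
    closed_form = 2.0 * np.log((2.0 + E(t)) / (1.0 + E(t)))
    second_diff = (hilbert_distance(t + eps) - 2 * hilbert_distance(t)
                   + hilbert_distance(t - eps)) / eps**2
    print(t, hilbert_distance(t), closed_form, second_diff)  # second_diff < 0: not convex
```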
![The thick segments have equal Euclidean length $\frac{\sqrt{2}}{2}\left \vert f\left( t\right) - f\left( +\infty \right) \right \vert
=\frac{\sqrt{2}}{2}\frac{\sqrt{2}}{e^{t}+1}=E(t).$[]{data-label="example2"}](conv4.eps)
**Example 2 (intersecting geodesics):** let $\Omega$ be as above. Let $f,g$ be the geodesics with $\operatorname{Im}f=\Omega \cap \left \{
y=x\right \} $ and $\operatorname{Im}g=\Omega \cap \left \{ y=-x+1\right \} $ respectively and $f\left( 0\right) =g\left( 0\right) =$ $\left( \frac
{1}{2},\frac{1}{2}\right) ,$ see Figure \[example2\]. By the same procedure as in the previous example we obtain $$h\left( f\left( t\right) ,g\left( t\right) \right) =\log \frac
{2}{1+2E\left( t\right) }+\log \frac{2}{1+2E\left( t\right) }.$$ An analogous elementary calculation shows that $$h^{\prime \prime}=2\frac{\Phi}{\left[ 1+2E\left( t\right) \right] ^{2}}$$ where the dominant summand of $\Phi$ is a negative multiple of $E^{\prime \prime}\left( t\right)
=\frac{e^{2t}-e^{t}}{\left( e^{t}+1\right) ^{3}}.$ Hence, as in the previous example, $h\left( f\left( t\right) ,g\left( t\right) \right) $ is not convex for large enough $t.$\
![The segments $I_n$ for $n=0,1 .$[]{data-label="segments01"}](convexample.eps)
**Example 3:** We will construct a convex $C^2$ curve whose curvature is zero at exactly one point and positive at every other point, together with two geodesics asymptotic at the point of zero curvature whose distance function is not convex.
Consider the convex domain $\Omega$ bounded below by the function $$y=\left| x^3 \right| , x\in[-1,1] .$$ Let $f$ (resp. $g$) be the geodesic line whose image is the intersection of $\Omega$ with the line $y=x$ (resp. $y=-x$). We may assume that $\Omega$ is a bounded convex domain containing the above-mentioned geodesics. Although Theorem \[mainth\](b) does not apply because $\partial \Omega$ has curvature $0$ at the point $(0,0),$ it can be shown that the distance function $$D(t):=d\left( f(t),g(t) \right) = 2 \log \frac{x(t)+y(t)}{x(t)-y(t)}$$ is in fact convex for sufficiently large $t.$ We will alter $\Omega$ by replacing a subarc of its boundary by a segment so that the distance function will no longer be convex on the corresponding time interval. Then we will repeat the same process infinitely many times to ensure that convexity fails at arbitrarily large $t,$ and we will take appropriate care to preserve the $C^2$ property.\
By symmetry, we will restrict our attention to $x(t)>0 .$
Let $y(t) = \displaystyle \frac{c}{e^t +1}, c>0$ and $y(t) = A x(t) +B$ with $A\in (0,1)$ and $ B<0.$ Then the first and second derivative of $\displaystyle D (t) = 2\log \frac{x(t)+y(t)}{x(t)-y(t)}$ are as follows: $$D^{\prime} (t) = 4\,\, \frac{Ax-y}{x^2 -y^2}\,\,\frac{1}{A}\,\left( \frac{y^2}{c}-y \right)$$ $$D^{\prime\prime} (t)\,\, \mathcal{C} =
2\left( Ay-x\right) \left( -\frac{y^2}{c} +y \right) +
\left( x^2 -y^2 \right) \left( -\frac{2yA}{c} +A \right)$$ where $\mathcal{C}$ is the following positive real number $\displaystyle \frac{A^2\left( x^2 -y^2 \right)^2}
{-4B\left( -\frac{y^2}{c} +y \right)} .$
For $x_0 \in (0,1) $ consider the line determined by the points $\left( x_0 , x_0^3 \right)$ and $\left( \frac{x_0}{2} ,0 \right)$ whose equation is $y=\left(2x_0^2 \right) x - x_0^3 . $ This line determines a segment $I_0$ with endpoints $\left( x_0 , x_0^3\right) $ and $\left( b,b^3\right) $ for some $b<x_0$ which can be computed explicitly (see Figure \[segments01\]). Using the previous Lemma it is easy to see that for $t_0$ such that $x\left( t_0\right) = x_0$ we have $A=2x_0^2 ,$ $y(t_0)=x_0^3$ and, thus, $$D^{\prime\prime} (t_0)\, \mathcal{C} = -\frac{2}{c}x_0^7 +2x_0^8$$ which is negative for sufficiently small $x_0 .$ Clearly, if the subarc of $\partial \Omega$ determined by the points $\left( b , b^3 \right)$ and $\left( x_0 , x_0^3 \right)$ is replaced by the segment $I_0$ then $D(t)$ will not be convex near $t_0 .$ The same non-convexity property can be obtained by replacing the above mentioned subarc of $\partial \Omega$ by a $C^2$ arc $$\sigma_1 :[b,x_0] \longrightarrow \left[ b^3 ,x_0^3 \right]$$ of constant and sufficiently small curvature.\
Using $ x_n = \displaystyle \frac{x_0}{2^{2n}} , n\in \mathbb{N},$ as starting points we obtain the corresponding segments $I_n$ with endpoints on $\partial\Omega,$ and we perform the same replacement for all $n\in \mathbb{N} $ using $C^2$ arcs $ \sigma_n $ of constant and sufficiently small curvature. Moreover, we may arrange that the curvature of $ \sigma_n $ tends to $0$ as $n\rightarrow \infty .$\
This guarantees that the distance function $D(t)$ with respect to the new (altered) convex domain, denoted again by $\Omega ,$ cannot be convex for $t$ large enough.\
The endpoint $\left( b,b^3\right)$ of the interval $I_0$ can be computed explicitly but we will only need the fact that $$b\in \left( \frac{x_0}{2} , \frac{3x_0}{4}\right) . \label{betamiddle}$$ Denote by $a$ the point $x_1 = \displaystyle \frac{x_0}{4} .$\
Our final step is to replace the subarc of $\partial\Omega$ with endpoints $\left( a , a^3 \right)$ and $\left( b , b^3 \right)$ by a $C^2$ curve $$\sigma_{1,2} : [a,b]\longrightarrow \left[ a^3 ,b^3 \right]$$ so that the first and second derivatives of $\sigma_{1,2}$ match those of $\sigma_1$ and $\sigma_2 $ at the appropriate points.
\[lemmaintineq\] Let $[\alpha,\beta] $ be an interval. Let $\alpha^{(0)} , \alpha^{(1)} , \alpha^{(2)}$ and $\beta^{(0)} , \beta^{(1)} ,\beta^{(2)}$ be positive real numbers satisfying $ 0<\alpha^{(0)}<\beta^{(0)} , 0<\alpha^{(1)}<\beta^{(1)} $ and $$\alpha^{(1)}(\beta-\alpha )< \beta^{(0)} -\alpha^{(0)} < \beta^{(1)} (\beta-\alpha) \label{intineq}$$ Then there exists a $C^2$ function $\sigma : [\alpha,\beta]\longrightarrow\left[ \alpha^{(0)} ,\beta^{(0)} \right]$ satisfying
- $\sigma (\alpha)=\alpha^{(0)} ,\sigma^{\prime} (\alpha) =\alpha^{(1)} ,
\sigma^{\prime\prime} (\alpha)=\alpha^{(2)} $
- $\sigma (\beta)=\beta^{(0)} ,\sigma^{\prime} (\beta) =\beta^{(1)} ,
\sigma^{\prime\prime} (\beta) =\beta^{(2)}$
- $\sigma^{\prime\prime} (x)>0$ for all $x\in(\alpha,\beta ).$
First, we may find a strictly increasing differentiable function $$\sigma^{(1)} : \left[ \alpha , \beta\right] \longrightarrow
\left[ \alpha^{(1)} ,\beta^{(1)}\right]$$ satisfying $$\lim_{t\rightarrow \alpha} \frac{d}{dt} \sigma^{(1)}(t) = \alpha^{(2)}
\textrm{\ \ and\ \ }
\lim_{t\rightarrow \beta} \frac{d}{dt} \sigma^{(1)}(t) = \beta^{(2)}$$ Set $\sigma (t) = \alpha^{(0)} + \int_{\alpha}^{t}\sigma^{(1)}(s) ds $ and we need the following equality to hold $$\int_{\alpha}^{\beta} \sigma^{(1)} (t) dt = \sigma(\beta) - \sigma(\alpha) =
\beta^{(0)} - \alpha^{(0)} . \label{pat}$$ As $$\alpha^{(1)}(\beta -\alpha) < \int_{\alpha}^{\beta} \sigma^{(1)} (t) dt <
\beta^{(1)}(\beta -\alpha)$$ equation (\[pat\]) can be achieved provided that (\[intineq\]) holds.
In order to use the above Lemma to find $\sigma_{1,2}$ we need to check that the boundary values of $\sigma_1$ and $\sigma_2$ which correspond to the segments $I_0$ and $I_1 $ satisfy the assumptions of the above Lemma.
The slope of $I_0$ (resp. $I_1$) is $2x_0^2 $ (resp. $2(x_0/4)^2$) so the first two inequalities required by Lemma \[lemmaintineq\] clearly hold: $$0<\left( \frac{x_0}{4} \right)^3<b^3 \textrm{\ and\ } 0< 2 \left( \frac{x_0}{4} \right)^2 <
2x_0^2 .$$ For condition (\[intineq\]) of Lemma \[lemmaintineq\] we need to check that $$2 \left( \frac{x_0}{4} \right)^2 \left( b- \frac{x_0}{4} \right) <
b^3- \left( \frac{x_0}{4} \right)^3 < 2x_0^2 \left( b- \frac{x_0}{4} \right).$$ For the right hand side inequality and using (\[betamiddle\]) it suffices to check that $$\left( \frac{3x_0}{4} \right)^3 - \left( \frac{x_0}{4} \right)^3 < 2x_0^2
\left(\frac{x_0}{2} - \frac{x_0}{4} \right)$$ which is equivalent to $\frac{26}{64} x_0^3 < \frac{1}{2}x_0^3$ which holds. For the left hand side inequality and using again (\[betamiddle\]) it suffices to check that $$2\left( \frac{x_0}{4} \right)^2\left(\frac{3x_0}{4} - \frac{x_0}{4} \right)
< \left( \frac{x_0}{2} \right)^3 - \left( \frac{x_0}{4} \right)^3$$ which is equivalent to $\frac{x_0^3 }{16} < \frac{7}{64}x_0^3,$ which holds.
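These inequalities can also be checked numerically for a concrete value of $x_0 .$ In the sketch below (ours) the sample value $x_0 =0.3$ is an assumption, and the endpoint $b$ is obtained as the second positive root of $x^3 -2x_0^2 x+x_0^3 =0,$ i.e. the other intersection of $I_0$ with $y=x^3 .$

```python
# Numerical check (ours) of the hypotheses of the Lemma for the pair I_0, I_1, with x_0 = 0.3.
import numpy as np

x0 = 0.3
roots = np.roots([1.0, 0.0, -2.0 * x0**2, x0**3])
b = [r.real for r in roots if abs(r.imag) < 1e-12 and 1e-9 < r.real < 0.999 * x0][0]
print(x0 / 2 < b < 3 * x0 / 4)                   # the containment (betamiddle)

a = x0 / 4                                       # left endpoint a = x_1
a0, b0 = a**3, b**3                              # boundary values alpha^(0), beta^(0)
a1, b1 = 2 * (x0 / 4)**2, 2 * x0**2              # slopes of I_1 and I_0
print(0 < a0 < b0, 0 < a1 < b1)                  # the first two inequalities
print(a1 * (b - a) < b0 - a0 < b1 * (b - a))     # condition (intineq)
```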
As we have the liberty to choose the arcs $\sigma_n$ arbitrarily close to the segments $I_n ,$ it follows that the inequalities required in Lemma \[lemmaintineq\] hold for any pair of segments $I_n , I_{n+1} .$ Therefore, the above procedure can be followed in an identical way to construct the curves $\sigma_{n,n+1}$ joining the curves $\sigma_n$ and $\sigma_{n+1}$ for all $n.$ The final curve obtained in this way is clearly a $C^2$ curve with zero curvature only at the point $(0,0)$ and, by construction, the distance function between the geodesics $f,g$ which are asymptotic at $(0,0)$ is not convex.
|
---
abstract: 'Shape-constrained density estimation is an important topic in mathematical statistics. We focus on densities on $\mathbb{R}^d$ that are log-concave, and we study geometric properties of the maximum likelihood estimator (MLE) for weighted samples. Cule, Samworth, and Stewart showed that the logarithm of the optimal log-concave density is piecewise linear and supported on a regular subdivision of the samples. This defines a map from the space of weights to the set of regular subdivisions of the samples, i.e. the face poset of their secondary polytope. We prove that this map is surjective. In fact, every regular subdivision arises in the MLE for some set of weights with positive probability, but coarser subdivisions appear to be more likely to arise than finer ones. To quantify these results, we introduce a continuous version of the secondary polytope, whose dual we name the Samworth body. This article establishes a new link between geometric combinatorics and nonparametric statistics, and it suggests numerous open problems.'
author:
- 'Elina Robeva, Bernd Sturmfels, and Caroline Uhler'
title: '**Geometry of Log-Concave Density Estimation**'
---
Introduction
============
Let $X = (x_1,x_2,\ldots,x_n)$ be a configuration of $n$ distinct labeled points in ${\mathbb{R}}^d$, and let $w = (w_1,w_2,\ldots,w_n)$ be a vector of positive weights that satisfy $w_1 + w_2 + \cdots + w_n =1$. The pair $(X,w)$ is our dataset. Think of experiments whose outcomes are measurements in ${\mathbb{R}}^d$. We interpret $w_i$ as the fraction among our experiments that led to the sample point $x_i$ in ${\mathbb{R}}^d$.
From this dataset one can compute the sample mean $\,\hat \mu = \sum_{i=1}^n w_i x_i \,$ and the sample covariance matrix $\,\hat \Sigma = \sum_{i=1}^n w_i (x_i-\hat \mu) (x_i - \hat \mu)^T $. Suppose that $\hat \Sigma$ has full rank $d$ and we wish to approximate the sample distribution by a Gaussian with density $f_{\mu,\Sigma}$ on ${\mathbb{R}}^d$. Then $(\hat \mu, \hat \Sigma) $ is the best solution in the likelihood sense, i.e. this pair maximizes the log-likelihood function $$\label{ex:loglikelihood} (\mu,\Sigma) \,\,\, \mapsto \,\,\, \sum_{i=1}^n w_i \cdot
{\rm log}(f_{\mu,\Sigma} (x_i)) .$$
In [*nonparametric statistics*]{} one abandons the assumption that the desired probability density belongs to a model with finitely many parameters. Instead one seeks to maximize $$\label{eq:loglikelihood2}
f \,\,\,\mapsto \,\,\, \sum_{i=1}^n w_i \cdot {\rm log}(f (x_i))$$ over all density functions $f$. However, since $f$ can be chosen arbitrarily close to the finitely supported measure $\sum_{i=1}^n w_i \delta_{x_i}$, it is necessary to put constraints on $f$. One approach to a meaningful maximum likelihood problem is to impose [*shape constraints*]{} on the graph of $f$. This line of research started with Grenander [@Grenander], who analyzed the case when the density is monotonically decreasing. Another popular shape constraint is convexity of the density [@Groeneboom].
In this paper, we consider maximum likelihood estimation, under the assumption that $f$ is [*log-concave*]{}, i.e. that ${\rm log}(f)$ is a concave function from ${\mathbb{R}}^d$ to ${\mathbb{R}}\cup \{-\infty\}$. Density estimation under log-concavity has been studied in depth in recent years; see e.g. [@CSS; @Duembgen; @Walther]. Note that Gaussian distributions $f_{\mu,\Sigma}$ are log-concave. Hence, the following optimization problem naturally generalizes the familiar task of maximizing (\[ex:loglikelihood\]) over all pairs of parameters $(\mu,\Sigma)$: $$\label{eq:ourproblem}
\begin{matrix}
\hbox{Maximize the log-likelihood (\ref{eq:loglikelihood2}) of the given sample $(X,w)$ over all} \\
\hbox{integrable functions $f: {\mathbb{R}}^d \rightarrow {\mathbb{R}}_{\geq 0} $ such that
${\rm log}(f)$ is concave and
$\,\int_{{\mathbb{R}}^d} f(x) dx = 1$.}
\end{matrix}$$
A solution to this optimization problem was given by Cule, Samworth and Stewart in [@CSS]. They showed that the logarithm of the optimal density $\hat f$ is a piecewise linear concave function, whose regions of linearity are the cells of a regular polyhedral subdivision of the configuration $X$. This reduces the infinite-dimensional optimization problem (\[eq:ourproblem\]) to a convex optimization problem in $n$ dimensions, since $\hat f$ is uniquely defined once its values at $x_1,\dots, x_n$ are known. An efficient algorithm for solving this problem is described in [@CSS]. It is implemented in the [R]{} package [LogConcDEAD]{} due to Cule, Gramacy and Samworth [@CGS].
\[ex:octahedron\] Let $d=2$, $n=6$, $w = \frac{1}{6}(1,1,1,1,1,1)$, and fix the point configuration $$\label{eq:sixpoints}
X \,\,=\,\, \bigl( \,(0, 0)\,, \,(100, 0)\,,\, (0, 100)\,,\, (22, 37)\,, \,(43, 22)\,, \,(36, 41) \,\bigr).$$ The graphical output generated by [LogConcDEAD]{} is shown on the left in Figure \[fig:octahedron\]. This is the graph of the function ${\rm log}(\hat f)$ that solves (\[eq:ourproblem\]). This piecewise linear concave function has seven linear pieces, namely the triangles on the right in Figure \[fig:octahedron\], with vertices taken from $X$.
The purpose of this paper is to establish a link between nonparametric statistics and geometric combinatorics. We develop a generalization of the theory of regular triangulations arising in the context of maximum likelihood estimation for log-concave densities.
Our paper is organized as follows. In Section 2 we first review the relevant mathematical concepts, especially polyhedral subdivisions and secondary polytopes [@DRS; @GKZ]. We then generalize results in [@CSS] from the case of unit weights $w = \frac{1}{n}(1,1,\ldots,1)$ to arbitrary weights $w$. Theorem \[thm:samworth\] casts the problem (\[eq:ourproblem\]) as a linear optimization problem over a convex subset $\mathcal{S}(X)$ of ${\mathbb{R}}^n$, which we call the [*Samworth body*]{} of $X$. Theorem \[thm:IntegralFormula\] uses integrals as in [@Bar] to give an unconstrained formulation of this problem with an explicit objective function.
Cule, Samworth and Stewart [@CSS] discovered that log-concave density estimation leads to regular polyhedral subdivisions. In this paper we prove the following converse to their result:
\[thm:converse\] Let $\Delta$ be any regular polyhedral subdivision of the configuration $X$. There exists a non-empty open subset $\,\mathcal{U}_\Delta$ in ${\mathbb{R}}^n$ such that, for every $w \in \mathcal{U}_\Delta$, the optimal solution $\hat f$ to (\[eq:ourproblem\]) is a piecewise log-linear function whose regions of linearity are the cells of $\Delta$.
The proof of Theorem \[thm:converse\] appears in Section 3. We introduce a remarkable symmetric function $H$ that serves as a key technical tool. The theory behind $H$ seems interesting in its own right. In Theorem \[thm:normalcone\] we characterize the normal cone at any boundary point of the Samworth body. In other words, for a given concave piecewise log-linear function $f$, we determine the set of all weight vectors $w$ such that $f$ is the optimal solution in (\[eq:ourproblem\]).
In Section 4 we view (\[eq:ourproblem\]) as a parametric optimization problem, as either $w$ or $X$ vary. Variation of $w$ is explained by the geometry of the Samworth body. We explore empirically the probability that a given subdivision is optimal. We observe that triangulations are rare. Thus pictures like the triangulation in Figure \[fig:octahedron\] are exceptional and deserve special attention.
In Section 5 we focus our attention on the case of unit weights, and we examine the constraints this imposes on $\Delta$. Theorem \[thm:d+2points\] shows that triangulations never occur for $n=d+2$ points in ${\mathbb{R}}^d$ with unit weights. A converse to this result is established in Theorem \[thm:d+3points\].
Sections 4 and 5 conclude with several open problems. These suggest possible lines of inquiry for a future research theme that might be named [*Nonparametric Algebraic Statistics*]{}.
Geometric Combinatorics
=======================
We begin by reviewing concepts from geometric combinatorics, studied in detail in the books by De Loera, Rambau and Santos [@DRS] and Gel’fand, Kapranov and Zelevinsky [@GKZ]. See Thomas [@Tho §7-8] for a first introduction. Let $X = (x_1,\ldots,x_n)$ be a configuration as before and $P = {\rm conv}(X)$ its convex hull in ${\mathbb{R}}^d$. We assume that the polytope $P$ has dimension $d$.
Fix a real vector $y = (y_1,\ldots,y_n)$. We write $h_{X,y}$ for the smallest concave function $h$ on ${\mathbb{R}}^d$ such that $h(x_i) \geq y_i$ for $i=1,\ldots,n$. The graph of $h_{X,y}$ is the upper convex hull of $\{(x_1,y_1),\ldots,(x_n,y_n)\}$ in ${\mathbb{R}}^{d+1}$. Hence $h_{X,y}(t)$ is the largest real number $h^*$ such that $(t,h^*)$ is in the convex hull of $\{(x_1,y_1),\ldots,(x_n,y_n)\}$. In particular, $h_{X,y}(t) = - \infty$ for $t \not\in P$. Up to sign, the function $h_{X,y}$ is called the [*characteristic section*]{} in [@DRS Definition 5.2.12]. We also refer to $h_{X,y}$ as the [*tent function*]{}, with (some of) the points $(x_i,y_i)$ being the [*tent poles*]{}. The vector $y$ is called [*relevant*]{} if $h_{X,y}(x_i) = y_i$ for $i=1,\ldots,n$, i.e. if each $(x_i,y_i)$ is a tent pole. This fails, for example, if $x_i$ lies in the interior of $P$ and $y_i$ is small relative to the other $y_j$.
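The tent function can be evaluated directly from this description as a small linear program: $h_{X,y}(t)$ is the maximum of $\sum_i \lambda_i y_i$ over all $\lambda \geq 0$ with $\sum_i \lambda_i = 1$ and $\sum_i \lambda_i x_i = t$. The sketch below (ours, not the algorithm of [@CSS]) does this for the configuration in (\[eq:sixpoints\]) and an illustrative vector $y$ of our choosing; it exhibits a vector that is not relevant.

```python
# Sketch (ours): evaluate the tent function h_{X,y}(t) as a linear program.
import numpy as np
from scipy.optimize import linprog

X = np.array([[0, 0], [100, 0], [0, 100], [22, 37], [43, 22], [36, 41]], dtype=float)
y = np.array([0.0, 0.0, 0.0, -5.0, 0.0, 0.0])    # hypothetical values at the points x_i

def tent(t, X, y):
    n = len(X)
    A_eq = np.vstack([X.T, np.ones(n)])          # sum_i lambda_i x_i = t,  sum_i lambda_i = 1
    b_eq = np.append(np.asarray(t, dtype=float), 1.0)
    res = linprog(-y, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return -res.fun if res.success else -np.inf  # minus infinity outside P = conv(X)

for i, xi in enumerate(X):
    print(i + 1, y[i], tent(xi, X, y))   # h(x_4) = 0 > y_4 = -5, so this y is not relevant
```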
A [*regular subdivision*]{} $\Delta$ of $X$ is a collection of subsets of $X$ whose convex hulls are the regions of linearity of the function $h_{X, y}$ for some $y\in \mathbb R^n$. These regions are $d$-dimensional polytopes, and are called the [*cells*]{} of $\Delta$. A regular subdivision $\Delta$ is a [*regular triangulation*]{} of $X$ if each cell is a $d$-dimensional simplex. The [*secondary polytope*]{} $\Sigma(X)$ is a polytope of dimension $n-d-1$ in ${\mathbb{R}}^n$ whose faces are in bijection with the regular subdivisions of $X$. In particular, the vertices of $\Sigma(X)$ correspond to the regular triangulations of $X$; see [@DRS §5].
If $\Delta$ is a regular triangulation of $X$, then the $k$-th coordinate of the corresponding vertex $z^\Delta$ of $\Sigma(X) \subset {\mathbb{R}}^n$ is the sum of the volumes of all simplices in $\Delta$ that contain $x_k$. In symbols, $$\label{eq:GKZvector}
z^\Delta_k \,\,= \,\, \sum_{\sigma \in \Delta: \atop x_k \in \sigma} {\rm vol}(\sigma) .$$ We call $z^\Delta = (z^\Delta_1,\ldots,z^\Delta_n) $ the [*GKZ vector*]{} of the triangulation $\Delta$, in reference to [@GKZ].
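As a quick illustration (ours), the GKZ vector of one particular regular triangulation, namely the Delaunay triangulation, can be computed directly from (\[eq:GKZvector\]); volumes are normalized as absolute determinants, the convention used in (\[eq:areas\]) below, and the point configuration is the one from (\[eq:sixpoints\]).

```python
# Sketch (ours): the GKZ vector of the Delaunay triangulation of the configuration X.
import numpy as np
from scipy.spatial import Delaunay

X = np.array([[0, 0], [100, 0], [0, 100], [22, 37], [43, 22], [36, 41]], dtype=float)
tri = Delaunay(X)                  # the Delaunay triangulation is a regular triangulation

gkz = np.zeros(len(X))
for simplex in tri.simplices:
    p0, p1, p2 = X[simplex]
    vol = abs(np.linalg.det(np.column_stack([p1 - p0, p2 - p0])))   # normalized area
    gkz[simplex] += vol            # add vol(sigma) to the coordinate of every vertex of sigma

print(tri.simplices)
print(gkz, gkz.sum())              # each simplex is counted d+1 times, so the sum is 3*vol(P)
```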
The support function of the secondary polytope $\Sigma(X)$ is the piecewise linear function $${\mathbb{R}}^n \rightarrow {\mathbb{R}}, \quad y \,\mapsto \, \int_P h_{X,y}(t) dt.$$ This follows from the equation in [@DRS page 232]. The function is linear on each cone in the [*secondary fan*]{} of $X$. For every $y$ in the secondary cone of a given regular triangulation $\Delta$, $$\label{eq:intislinear}
\int_P h_{X,y}(t) dt \,\,= \,\, z^\Delta \cdot y \,\,=\,\, \sum_{i=1}^n z^\Delta_i y_i .$$ This means that the convex dual to the secondary polytope has the representation $$\Sigma(X)^* \quad = \quad \bigl\{
y \in {\mathbb{R}}^n \,:\, z^\Delta \cdot y \leq 1 \,\,\hbox{for all} \,\,\Delta \bigr\} \,\,\, = \,\,\,
\bigl\{ y \in {\mathbb{R}}^n \,:\,\int_P h_{X,y}(t) dt \leq 1 \,\bigr\}.$$ Note that $\Sigma(X)^*$ is an unbounded polyhedron in ${\mathbb{R}}^n$ since $\Sigma(X)$ has dimension $n-d-1$. Indeed, $\Sigma(X)^*$ is the product of an $(n-d-1)$-dimensional polytope and an orthant ${\mathbb{R}}_{\geq 0}^{d+1}$.
We now introduce an object that looks like a continuous analogue of $\Sigma(X)^*$. We define $$\label{eq:samdef}
\mathcal{S}(X) \quad = \quad \bigl\{\,
y \in {\mathbb{R}}^n \,:\,\int_P {\rm exp}(h_{X,y}(t)) dt \leq 1 \,\bigr\}.$$
Inspired by [@CGS; @CSS], we call $\mathcal{S}(X)$ the [*Samworth body*]{} of the point configuration $X$.
The Samworth body $\mathcal{S}(X)$ is a full-dimensional closed convex set in ${\mathbb{R}}^n$.
Let $y, y' \in \mathcal{S}(X)$ and consider a convex combination $y'' = \alpha y + (1-\alpha)y'$ where $0 \leq \alpha \leq 1$. For all $t \in P$, we have $h_{X,y''}(t) \leq \alpha h_{X,y}(t) + (1-\alpha) h_{X,y'}(t)$, and therefore $${\rm exp}(h_{X,y''}(t)) \,\leq \, {\rm exp} \bigl(\alpha h_{X,y}(t) + (1-\alpha) h_{X,y'}(t) \bigr) \,\leq \,
\alpha \cdot {\rm exp} ( h_{X,y}(t)) + (1-\alpha) \cdot {\rm exp}(h_{X,y'}(t) ) .$$ Now integrate both sides of this inequality over all $t \in P$. The right hand side is bounded above by $1$, and hence so is the left hand side. This means that $y'' \in \mathcal{S}(X)$. We conclude that $\mathcal{S}(X)$ is convex. It is closed because the defining function is continuous, and it is $n$-dimensional because all points $y$ whose $n$ coordinates are sufficiently negative lie in $\mathcal{S}(X)$.
Every boundary point $y$ of the Samworth body $\mathcal{S}(X)$ defines a log-concave probability density function $f_{X,y}$ on ${\mathbb{R}}^d$ that is supported on the polytope $P = {\rm conv}(X)$. This density is $$\label{eq:fydensity}
f_{X,y} \,\,: \,\,\,
t \,\,\mapsto\,\, \begin{cases} {\rm exp}(h_{X,y}(t)) & {\rm if} \,\,\, t \in P, \\
\qquad 0 & {\rm otherwise}. \end{cases}$$
We fix a positive real vector $w = (w_1,\ldots,w_n) \in {\mathbb{R}}^n_{\geq 0}$ that satisfies $\sum_{i=1}^n w_i =1$. The following result rephrases the key results of Cule, Samworth and Stewart [@CSS Theorems 2 and 3], who proved this, in a different language, for the unit weight case $w = \frac{1}{n} (1,1,\ldots,1)$.
\[thm:samworth\] The linear functional $ \,y \mapsto w \cdot y = \sum_{i=1}^n w_i y_i$ is bounded above on the Samworth body $\,\mathcal{S}(X)$. Its maximum over $\,\mathcal{S}(X)\,$ is attained at a unique point $y^*$. The corresponding log-concave density $f_{X,y^*}$ is the unique optimal solution to the estimation problem (\[eq:ourproblem\]).
We are claiming that $\mathcal{S}(X)$ is strictly convex and its recession cone is contained in the negative orthant ${\mathbb{R}}^n_{\leq 0}$. The point $y^*$ represents the solution to the optimization problem $$\label{eq:constrained}
\hbox{Maximize $\,w \cdot y\,$ subject to $\,y \in \mathcal{S}(X)$.}$$
The equivalence of (\[eq:ourproblem\]) and (\[eq:constrained\]) stems from the fact that the optimal solution $\hat f$ to the maximum likelihood problem (\[eq:ourproblem\]) has the form $ f = f_{X, y}$ for some choice of $ y \in {\mathbb{R}}^n$. This was proven in [@CSS] for unit weights $w = \frac{1}{n}(1,1,\ldots,1)$. The general case of positive rational weights $w_i$ can be reduced to the unit weight case by regarding $(X,w)$ as a multi-configuration. We extend this from rational weights to non-rational real weights by a continuity argument.
Let $N$ be the sample size, so that $N_i = Nw_i$ is a positive integer for $i=1,\ldots,n$. We think of $x_i$ as a sample point in ${\mathbb{R}}^d$ that has been observed $N_i$ times. If $f$ is any probability density function on ${\mathbb{R}}^d$, then the log-likelihood of the $N$ observations with respect to $f$ equals $$\label{eq:MLF}
N \cdot \sum_{i=1}^n w_i \cdot {\rm log}( f(x_i)).$$ Maximizing (\[eq:MLF\]) over log-concave densities is equivalent to maximizing (\[eq:loglikelihood2\]). We know from [@CSS Theorem 2] that the maximum is unique and is attained by $ f = f_{X,y^*}$ for some $y^* \in {\mathbb{R}}^n$. Here $y^*$ is the unique relevant point in $\, \mathcal{S}(X) = \bigl\{ y \in {\mathbb{R}}^n : \int_{{\mathbb{R}}^d} f_{X,y}(t)dt \leq 1 \bigr\}\,$ that maximizes the linear functional $w \cdot y$. Hence (\[eq:ourproblem\]) and (\[eq:constrained\]) are equivalent for all $w \in {\mathbb{R}}^n_{\geq 0}$.
The constrained optimization problem (\[eq:constrained\]) can be reformulated as an unconstrained optimization problem. For the unit weight case $w_1 = \cdots = w_n = 1/n$, this was done in [@CSS §3.1]. This result can easily be extended to general weights. In the language of convex analysis, Proposition \[prop:unconstrained\] says that the optimal value function of the convex optimization problem (\[eq:constrained\]) is the [*Legendre-Fenchel transform*]{} of the convex function $y \mapsto \int_P {\rm exp}(h_{X,y}(t)) dt$.
\[prop:unconstrained\] The constrained optimization problem [(\[eq:constrained\])]{} is equivalent to the unconstrained optimization problem $$\label{eq:unconstrained}
\hbox{{\rm Maximize} $\,\,w \cdot y - \int_{P} \exp(h_{X,y}(t)) dt\,\,$ over all $\,\,y \in {\mathbb{R}}^n$},$$ where, as before, $P$ denotes the convex hull of $x_1, \dots , x_n\in\mathbb{R}^d$ and $h_{X,y}$ is the tent function, i.e., $h_{X,y} : \mathbb{R}^d \to \mathbb{R}$ is the least concave function satisfying $h_{X,y}(x_i) \geq y_i$ for all $i = 1,\ldots , n$.
A proof for uniform weights is given in [@CSS]. We here present the proof for arbitrary weights $w_1,\ldots,w_n$. These are positive real numbers that sum to $1$. This ensures that the objective function in (\[eq:unconstrained\]) is bounded above, since the exponential term dominates when the coordinates of $y$ become large. Clearly, the optimum of (\[eq:constrained\]) is attained on the boundary $\partial \mathcal{S}(X)$ of the feasible set $\mathcal{S}(X)$, and we could equivalently optimize over that boundary.
Now suppose that $y^*$ is an optimal solution of (\[eq:unconstrained\]). This implies that $h_{X,y^*}(x_i) = y^*_i$, i.e. each tent pole touches the tent. Otherwise $\,w \cdot y$ in the objective function can be increased without changing $\int_{P} \exp(h_{X,y}(t)) dt$. Let $c:= \int_{P} \exp(h_{X,y^*}(t)) dt$. We claim that $c=1$.
Let $\hat{y}$ be a vector in ${\mathbb{R}}^n$, also satisfying $h_{X,\hat{y}}(x_i) = \hat{y}_i$ for all $i$, such that $\exp(h_{X,y^*}(t)) = c \exp(h_{X,\hat{y}}(t))$ and $\int_{P} \exp(h_{X,\hat{y}}(t)) dt = 1$. This means that $h_{X,y^*}(t) = \log(c) + h_{X,\hat{y}}(t)$ for all points $t$ in the polytope $ P$. In particular, we have $ y_i^* - \hat{y}_i = \log(c) $ for $i=1,2,\ldots,n$.
We now analyze the difference of the objective functions at the points $\hat{y}$ and $y^*$: $$w \cdot \hat{y} - \int_{P} \exp(h_{X,\hat{y}}(t)) dt - \left(w \cdot y^*
- \int_{P} \exp(h_{X,y^*}(t)) dt\right) = -\log(c) -1+c.$$
Note that the function $c \mapsto -\log(c) -1+c$ is nonnegative. Since $y^*$ maximizes $w \cdot y - \int_{P} \exp(h_{X,y}(t)) dt$, it follows that $-\log(c) -1+c = 0$, which implies that $c=1$. So, the claim holds. We have shown that the solution $y^*$ of (\[eq:unconstrained\]) also solves the following problem: $$\label{eq:constrained_2}
\hbox{Maximize $\,w \cdot y - \int_{P} \exp(h_{X,y}(t)) dt$ \; subject to\; $\int_{P} \exp(h_{X,y}(t)) dt = 1$.}$$ But this is equivalent to the constrained formulation (\[eq:constrained\]), and the proof is complete.
The objective function in (\[eq:unconstrained\]) looks complicated because of the integral and because $h_{X,y}(t)$ depends piecewise linearly on both $y$ and $t$. To solve our optimization problem, a more explicit form is needed. This was derived by Cule, Samworth and Stewart in [@CSS Section B.1]. The formula that follows writes the objective function locally as an exponential-rational function. This can also be derived from work on polyhedral residues due to Barvinok [@Bar].
\[lem:integral\] Fix a simplex $\sigma= {\rm conv}(x_0,x_1,\ldots,x_d)$ in ${\mathbb{R}}^d$ and an affine-linear function $\ell: {\mathbb{R}}^d \rightarrow {\mathbb{R}}$, and let $y_0=\ell(x_0), y_1 = \ell(x_1),\ldots,y_d = \ell(x_d)$ be its values at the vertices. Then $$\int_{\sigma} {\rm exp}\bigl( \ell (t)\bigr) dt \,\, \,= \,
\,\,{\rm vol}(\sigma) \cdot \sum_{i=0}^d {\rm exp}(y_i) \!
\!\prod_{j \in \{0,\ldots,d\}\backslash \{i\}} \!\! \!\!\! (y_i-y_j)^{-1}.$$
This follows directly from equation (B.1) in [@CSS Section B.1], and it can also easily be derived from Barvinok’s formula in [@Bar Theorem 2.6].
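The closed form can be tested numerically on a sample simplex. In the sketch below (ours), the triangle and the affine-linear function are arbitrary choices; the volume ${\rm vol}(\sigma)$ is the normalized volume, i.e. the absolute determinant as in (\[eq:areas\]), while the Monte Carlo estimate uses the Euclidean area.

```python
# Numerical check (ours) of the closed-form integral over a sample triangle.
import numpy as np

rng = np.random.default_rng(0)
V = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 2.0]])        # vertices x_0, x_1, x_2 (ours)
c, c0 = np.array([0.7, -0.4]), 0.2                        # ell(t) = c . t + c0 (ours)
yv = V @ c + c0                                           # values y_0, y_1, y_2 at the vertices

det = abs(np.linalg.det(np.column_stack([V[1] - V[0], V[2] - V[0]])))
closed_form = det * sum(
    np.exp(yv[i]) / np.prod([yv[i] - yv[j] for j in range(3) if j != i]) for i in range(3)
)

bary = rng.dirichlet([1.0, 1.0, 1.0], size=200_000)       # uniform barycentric samples
pts = bary @ V
monte_carlo = (det / 2.0) * np.mean(np.exp(pts @ c + c0)) # Euclidean area times mean of exp(ell)

print(closed_form, monte_carlo)                           # the two values agree
```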
This lemma implies the following formula for integrating exponentials of piecewise-affine functions on a convex polytope. This can be regarded as an exponential variant of (\[eq:intislinear\]).
\[thm:IntegralFormula\] Let $\Delta$ be a triangulation of the configuration $X=(x_1,\ldots,x_n)$ and $h : P \rightarrow {\mathbb{R}}$ the piecewise-affine function on $\Delta$ that takes values $h(x_i) = y_i$ for $i = 1,2,\ldots,n$. Then $$\int_{P} {\rm exp}\bigl( h (t)\bigr) dt \,\, \,= \,\,\,
\sum_{i=1}^n
{\rm exp}(y_i) \sum_{\sigma \in \Delta: \atop i \in \sigma} \frac{{\rm vol}(\sigma)}
{\prod_{j \in \sigma \backslash i} (y_i-y_j)}$$
We add the expressions in Lemma \[lem:integral\] over all maximal simplices $\sigma$ of the triangulation $\Delta$, and we collect the rational function multipliers for each of the $n$ exponentials ${\rm exp}(y_i)$.
This formula underlies the efficient solution to the estimation problem (\[eq:ourproblem\]) that is implemented in the [R]{} package [LogConcDEAD]{} [@CGS]. We record the following algebraic reformulation, which will be used in our study in the subsequent sections. This follows from Theorem \[thm:IntegralFormula\].
\[cor:optsecondary\] The equivalent optimization problems (\[eq:ourproblem\]), (\[eq:constrained\]), (\[eq:unconstrained\]) are also equivalent to $$\label{eq:optsecondary}
{\rm Maximize} \,\,\,w \cdot y \,- \,\sum_{\sigma \in \Delta}
\sum_{i \in \sigma}
\frac{{\rm vol}(\sigma) \cdot {\rm exp}(y_i)}
{\prod_{j \in \sigma \backslash i} (y_i-y_j)},$$ where $y $ runs over $ {\mathbb{R}}^n$ and $\Delta$ is a regular triangulation of $X$ whose secondary cone contains $y$.
We close this section with an example that illustrates the various concepts seen so far.
\[ex:hexagon\] Let $d=2$ and $n=6$. Take $X$ to be six points in convex position in the plane, labeled cyclically in counterclockwise order. The normalized area of the triangle formed by any three of the vertices of the hexagon $P = {\rm conv}(X)$ is computed as a $3 \times 3$-determinant $$\label{eq:areas}
\quad v_{ijk} \,\, := \,\,
{\rm vol}\bigl( {\rm conv}(x_i, x_j, x_k) \bigr) \,\, = \,\,
{\rm det}\begin{pmatrix} 1 & 1 & 1 \\ x_i & x_j & x_k \end{pmatrix}
\,\,\quad {\rm for}\, \,\, \,1 \leq i < j < k \leq 6. \quad$$ The configuration $X$ has $14$ regular triangulations. These come in three symmetry classes: six triangulations like $\Delta = \{123,134,145,156\}$, six triangulations like $\,\Delta' = \{123,134,146,456\}$, and two triangulations like $\,\Delta'' = \{123,135, 156,345\} $. The corresponding GKZ vectors are $$\begin{matrix}
z^{\Delta} &=& \bigl(\,v_{123}+v_{134}+v_{145}+v_{156}\,,\,
v_{123}\,,\,v_{123}+v_{134}\,,\,v_{134}+v_{145}\,,\,v_{145}+v_{156}\,,v_{156}\, \bigr) ,\\
z^{\Delta'} &=&
\bigl(\,v_{123}+v_{134}+v_{146}\,,\, v_{123}\,,\, v_{123}+v_{134}\,,\, v_{134}+v_{146}+v_{456}\,,
\, v_{456}\,, \,v_{146}+v_{456}\, \bigr), \\
z^{\Delta''} &=&
\bigl(\, v_{123}+v_{135}+v_{156} \,,\, v_{123}\,, \, v_{123}+v_{135}+v_{345}\,,\,
v_{345}\,, \,v_{135}+v_{156}+v_{345}\,, \,v_{156}\, \bigr) ,
\end{matrix}$$ as defined in (\[eq:GKZvector\]). The secondary polytope $\Sigma(X)$ is the convex hull of these $14$ points in ${\mathbb{R}}^6$. This is a simple $3$-polytope with $14$ vertices, $21$ edges and $9$ facets, shown in Figure \[fig:associahedron\]. This polytope is known as the [*associahedron*]{}. It has $45 = 14+21+9+1$ faces in total, one for each of the $45$ polyhedral subdivisions of $X$. These are the supports of the functions $h_{X,y}$.
For example, the edge of $\Sigma(X)$ that connects $z^{\Delta}$ and $z^{\Delta'}$ represents the subdivision $\{123,134,1456\}$, with two triangles and one quadrangle. The smallest face containing $\{z^{\Delta},z^{\Delta'},z^{\Delta''}\}$ is two-dimensional. It is a pentagon, encoding the subdivision $\{123, 13456\}$.
The Samworth body $\mathcal{S}(X)$ is full-dimensional in ${\mathbb{R}}^6$. Its boundary is stratified into $45$ pieces, one for each subdivision of $X$. For any given $w \in {\mathbb{R}}^6$, the optimal solution $y^*$ to (\[eq:constrained\]) lies in precisely one of these $45$ strata, depending on the shape of the optimal density $f_{X,y^*}$.
Algebraically, we can find $y^*$ by computing the maximum among $14$ expressions like $$\label{eq:maximumamong14}
\begin{matrix} w_1 y_1 + w_2 y_2 + \cdots + w_6 y_6
& - & v_{123}\cdot \bigl( \frac{{\rm exp}(y_1)}{(y_1-y_2)(y_1-y_3)}+
\frac{{\rm exp}(y_2)}{(y_2 - y_1)(y_2 - y_3)}+
\frac{{\rm exp}(y_3)}{(y_3 - y_1)(y_3 - y_2)}\bigr) \smallskip \\
& - & v_{134}\cdot \bigl( \frac{{\rm exp}(y_1)}{(y_1-y_3)(y_1-y_4)}+
\frac{{\rm exp}(y_3)}{(y_3 - y_1)(y_3 - y_4)}+
\frac{{\rm exp}(y_4)}{(y_4 - y_1)(y_4 - y_3)}\bigr) \smallskip \\
& - & v_{145}\cdot \bigl( \frac{{\rm exp}(y_1)}{(y_1-y_4)(y_1-y_5)}+
\frac{{\rm exp}(y_4)}{(y_4 - y_1)(y_4 - y_5)}+
\frac{{\rm exp}(y_5)}{(y_5 - y_1)(y_5 - y_4)}\bigr) \smallskip \\
& - & v_{156}\cdot \bigl( \frac{{\rm exp}(y_1)}{(y_1-y_5)(y_1-y_6)}+
\frac{{\rm exp}(y_5)}{(y_5 - y_1)(y_5 - y_6)}+
\frac{{\rm exp}(y_6)}{(y_6 - y_1)(y_6 - y_5)}\bigr) .
\end{matrix}$$ This formula is the objective function in (\[eq:optsecondary\]) for the triangulation $\Delta = \{123,134,145,156\}$. The mathematical properties of this optimization process will be studied in the next sections.
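For concreteness, the sketch below (ours) evaluates one of these expressions, namely the objective (\[eq:optsecondary\]) for the triangulation $\Delta = \{123,134,145,156\}$, at a sample vector $y$, taking $X$ to be a regular hexagon; the hexagon, the weights and the chosen $y$ are assumptions made only for illustration, and the expression represents the true objective only when $y$ lies in the secondary cone of $\Delta$.

```python
# Sketch (ours): evaluate the objective of (eq:optsecondary) for the fan triangulation
# Delta = {123,134,145,156} of a regular hexagon (0-based labels in the code).
import numpy as np

X = np.array([[np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)] for k in range(6)])
w = np.full(6, 1.0 / 6.0)
Delta = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5)]

def vol(i, j, k):                       # normalized area, as in (eq:areas)
    return abs(np.linalg.det(np.vstack([np.ones(3), np.column_stack([X[i], X[j], X[k]])])))

def objective(y):
    val = w @ y                         # the linear part w . y
    for sigma in Delta:
        v = vol(*sigma)
        for i in sigma:                 # subtract the integral term of (eq:optsecondary)
            val -= v * np.exp(y[i]) / np.prod([y[i] - y[j] for j in sigma if j != i])
    return val

y = np.array([-1.0, -1.5, -1.2, -1.1, -1.3, -1.4])   # a sample vector (ours)
print(objective(y))
```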
Every Regular Subdivision Arises
================================
Our goal in this section is to prove Theorem \[thm:converse\]. We begin by examining the function $$\label{eq:defH}
H\,:\, {\mathbb{R}}^d \rightarrow {\mathbb{R}}\,,\,\,\,
(u_1,\ldots, u_d) \,\, \mapsto \,\, (-1)^d\frac{1 + u_1^{-1} + \cdots + u_d^{-1}}{u_1 u_2 \cdots u_d }
\,+\, \sum_{j=1}^d \frac{e^{u_j}}{u_j^2 \prod_{k \not= j} (u_j - u_k)}.$$
\[prop:Hformula\] The function $H$ is well-defined on ${\mathbb{R}}^d$. It admits the series expansion $$\label{eq:Hidentity} H(u_1,\dots, u_d) \quad = \quad \sum_{r=0}^\infty\frac{h_r(u_1,\ldots,u_d)}{(r+d+1)!},$$ where $h_r$ is the complete homogeneous symmetric polynomial of degree $r$ in $d$ unknowns.
We substitute the Taylor expansion of the exponential function in the sum on the right hand side of (\[eq:defH\]). This sum then becomes $$\sum_{j=1}^d \frac{e^{u_j}}{u_j^2 \prod_{k \not= j} (u_j - u_k)} \quad= \quad\sum_{\ell=0}^\infty \frac{1}{\ell !} \sum_{j=1}^d \frac{u_j^{\ell-2}}{\prod_{k \not= j} (u_j - u_k)}
\quad$$ $$= \quad
\sum_{\ell=0}^\infty \frac{1}{\ell !} \sum_{j=1}^d \frac{u_j^{\ell-d-1}}{\prod_{k \not= j} (1-u_k/u_j )}
\quad = \quad
\sum_{r=-d-1}^\infty \frac{1}{( r+d+1)!} \sum_{j=1}^d
\frac{u_j^r}{\prod_{k \not= j} (1-u_k/u_j )}.$$ For nonnegative values of the summation index $r = \ell - d - 1$, the inner sum equals $h_r(u_1,\ldots,u_d)$, by Brion’s Theorem [@MS Theorem 12.13]. For negative values of $r$, we use Ehrhart Reciprocity, in the form of [@MS Lemma 12.15, eqn (12.7)], as seen in [@MS Example 12.14]. The two terms for $r \in \{-d-1,-d\}$ cancel against the first summand on the right hand side of (\[eq:defH\]). The terms for $r \in \{-d+1,\ldots,-2,-1\}$ vanish. This implies (\[eq:Hidentity\]).
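The identity (\[eq:Hidentity\]) is also easy to confirm numerically. Here is a minimal sketch for $d=2$, comparing the closed form (\[eq:defH\]) with a truncated series; the truncation order and the test point are arbitrary.

```python
import math
from itertools import combinations_with_replacement
import numpy as np

def H_closed(u):
    """Closed form (eq:defH); assumes nonzero, pairwise distinct arguments."""
    u = np.asarray(u, dtype=float)
    d = len(u)
    first = (-1) ** d * (1 + np.sum(1 / u)) / np.prod(u)
    second = sum(math.exp(u[j]) / (u[j] ** 2 * np.prod([u[j] - u[k] for k in range(d) if k != j]))
                 for j in range(d))
    return first + second

def h_complete(r, u):
    """Complete homogeneous symmetric polynomial h_r(u_1,...,u_d), by brute force."""
    return sum(math.prod(u[i] for i in idx)
               for idx in combinations_with_replacement(range(len(u)), r))

def H_series(u, order=40):
    d = len(u)
    return sum(h_complete(r, u) / math.factorial(r + d + 1) for r in range(order))

u = [0.7, -0.3]
print(H_closed(u), H_series(u))   # the two values agree up to the truncation error
```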
We shall derive a useful integral representation of our function $H$. What follows is a Lebesgue integral over the standard simplex $\,\Sigma_d = \{(y_1,\dots, y_d) \in {\mathbb{R}}^d: \,y_i \geq 0 ,\, \sum_{i}y_i \leq 1\}$.
\[prop:magic\] The function $H$ can be expressed as the following integral: $$\begin{aligned}
\label{eqn:HIntegralExpression}
H(u_1,\ldots, u_d) \,\,=\, \int_{\Sigma_d}
\left(1 - \sum_{i=1}^d t_i\right)\exp\left(\sum_{i=1}^d u_i t_i\right)\text{d} t_1\dots\text{d} t_d.\end{aligned}$$
The complete homogeneous symmetric polynomial $h_r$ equals the Schur polynomial $s_{(r)}$ corresponding to the partition $\lambda = (r)$. By formula (2.11) in [@GR] we have $s_{(r)} = Z_{(r)}$, where $Z_\lambda(u_1,\dots, u_d)$ is the [*zonal polynomial*]{}, or [*spherical function*]{} [@GR]. Therefore, we conclude $$H(u_1,\ldots, u_d) \,=\,\sum_{r=0}^{\infty}\frac{Z_{(r)}(u_1,\ldots, u_d)}{(r+d+1)!}
\,=\, \frac1{(d+1)!}\sum_{r=0}^{\infty}
\frac{Z_{(r)}(u_1,\ldots, u_d)\cdot [1]_{(r)}}{[d+2]_{(r)} \cdot r!},$$ where $[a]_\lambda = \prod_{j=1}^{m}(a-j+1)_{\lambda_j}$ for a partition $\lambda = (\lambda_1,\dots, \lambda_m)$, and $(a)_s = a(a+1)\cdots(a+s-1)$. In particular, $[1]_{(r)} = r!$, and $[1]_{\lambda} = 0$ if $\lambda$ has more than one nonzero part. Therefore, $$H(u_1,\ldots, u_d) \,\,= \,\, \frac1{(d+1)!}\sum_{\text{all partitions } \lambda}
\frac{Z_\lambda(u_1,\ldots, u_d) \cdot [1]_\lambda}{[d+2]_\lambda \cdot |\lambda|!}.$$ By [@GR (4.14)], this can be written in terms of the confluent hypergeometric function ${}_1F_1$: $$H(u_1,\ldots, u_d) \,\,= \,\,\frac1{(d+1)!} \cdot \, {}_1F_1(1;d+2;u_1,\ldots, u_d) .$$ The right hand side has the desired integral representation (\[eqn:HIntegralExpression\]), by [@GR equation (5.14)].
\[cor:positive\] The function $H$ is positive, increasing in each variable, and convex.
The integrand in (\[eqn:HIntegralExpression\]) is nonnegative, and it is strictly positive on the interior of the simplex. Hence, $H(u_1,\ldots, u_d) > 0$ for all $(u_1,\ldots, u_d)\in\mathbb R^d$. After differentiating under the integral sign with respect to $u_i$, the integrand remains nonnegative, and strictly positive on the interior. Therefore, $H$ is increasing in $u_i$. Finally, the integrand is a convex function of $(u_1,\ldots,u_d)$, and hence so is $H$.
We now embark towards the proof of Theorem \[thm:converse\]. Recall that a vector $y \in {\mathbb{R}}^n$ is relevant if $h_{X,y}(x_i) = y_i$ for all $i$, i.e. the regular subdivision of $X$ induced by $y$ uses each point $x_i$.
\[lem:weightsAnyDim\] Fix a configuration $X$ of $n$ points in $\mathbb R^d$. For any relevant $y^* \in {\mathbb{R}}^n$ that satisfies $\int_{{\mathbb{R}}^d} f_{X,y^*}(t)dt = 1$, there are weights $w \in {\mathbb{R}}^n_{> 0}$ such that $y^*$ is the optimal solution to (\[eq:ourproblem\]), (\[eq:constrained\]), (\[eq:unconstrained\]), (\[eq:optsecondary\]).
We use the formulation (\[eq:optsecondary\]) which is equivalent to (\[eq:ourproblem\]), (\[eq:constrained\]), and (\[eq:unconstrained\]). Let $\Delta$ be any regular triangulation that refines the regular subdivision given by $y^*$. In other words, we choose $\Delta$ so that (\[eq:intislinear\]) is maximized. The objective function in Corollary \[cor:optsecondary\] takes the form $$S(y_1,\dots, y_n) \,\,\,= \,\,\,w\cdot y - \sum_{i=1}^n\exp(y_i)\sum_{\sigma\in\Delta,\atop i\in\sigma}\frac{\text{vol}(\sigma)}{\prod_{j\in \sigma\setminus i }(y_i-y_j)}.$$
Consider the partial derivative of the objective function $S$ with respect to the unknown $y_k$: $$\begin{aligned}
\frac{\partial S}{\partial y_k} \,\,\,= \,\,\,w_k \,\, - \,
&\sum_{\sigma\in\Delta,\atop k\in\sigma}\text{vol}(\sigma)\exp(y_k)\frac1{\prod_{j\in \sigma\setminus k }(y_k-y_j)}\left(1 - \sum_{j\in \sigma\setminus k }\frac{1}{(y_k - y_j)}\right) \\
-&\, \sum_{\sigma\in\Delta,\atop k\in\sigma}\text{vol}(\sigma)\sum_{j\in \sigma\setminus k } \exp(y_j) \frac1{\prod_{i\in \sigma\setminus j } (y_j - y_i)} \frac1{(y_j - y_k)}.\end{aligned}$$ Using the formula (\[eq:defH\]) for the symmetric function $H(u_1,\ldots,u_d)$, this can be rewritten as $$\frac{\partial S}{\partial y_k} \,\,\,= \,\,\,w_k \,-\,
\sum_{\sigma\in\Delta,\atop k\in\sigma} \text{vol}(\sigma)\exp(y_k)H(\{ y_i - y_k :
i\in\sigma \backslash k\}).$$ We now consider the specific given vector $y^* \in {\mathbb{R}}^n$, and we use it to define $$\begin{aligned}
\label{weightsFormula}
w_k \,\,= \,\,\sum_{\sigma\in\Delta,\atop k\in\sigma} \text{vol}(\sigma)\exp(y^*_k)H(\{
y^*_i - y^*_k : i\in\sigma \backslash k \}).\end{aligned}$$ By Corollary \[cor:positive\], the vector $w=(w_1,\ldots,w_n)$ is well-defined and has positive coordinates. Consider now our estimation problem (\[eq:ourproblem\]) for that $w \in {\mathbb{R}}^n_{>0}$. By construction, the gradient vector of $S$ vanishes at $y^*$. Furthermore, recall that the choice of the triangulation $\Delta$ was arbitrary, provided $\Delta$ refines the subdivision of $y^*$. This ensures that all subgradients of the objective function in (\[eq:unconstrained\]) vanish. Since this function is strictly convex, as shown in [@CSS], we conclude that the given $y^*$ is the unique optimal solution for the choice of weights in (\[weightsFormula\]).
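The construction in this proof can be carried out explicitly. The following sketch computes the weights (\[weightsFormula\]) from a height vector and a triangulation refining its subdivision; all numerical inputs are hypothetical, and the output is an exact certificate for $y$ only when $f_{X,y}$ integrates to one, as required in Lemma \[lem:weightsAnyDim\].

```python
import math
import numpy as np

def H(u):
    """Closed form (eq:defH); assumes nonzero, pairwise distinct arguments."""
    u = np.asarray(u, dtype=float)
    d = len(u)
    return (-1) ** d * (1 + np.sum(1 / u)) / np.prod(u) + sum(
        math.exp(u[j]) / (u[j] ** 2 * np.prod([u[j] - u[k] for k in range(d) if k != j]))
        for j in range(d))

def weights(X, y, triangulation):
    """w_k as in (weightsFormula): sum over triangles sigma containing k of
    vol(sigma) * exp(y_k) * H(y_i - y_k : i in sigma \\ k)."""
    X = np.asarray(X, dtype=float)
    w = np.zeros(len(X))
    for sigma in triangulation:
        v = abs(np.linalg.det(np.vstack([np.ones(len(sigma)), X[list(sigma)].T])))
        for k in sigma:
            w[k] += v * math.exp(y[k]) * H([y[i] - y[k] for i in sigma if i != k])
    return w

# Hypothetical data: the unit square and a height vector; the sign test of (eq:tetrahedron)
# shows that these heights induce the triangulation {123,134}, so we use that Delta.
X = [(0, 0), (1, 0), (1, 1), (0, 1)]
y = np.array([0.2, -0.1, 0.05, -0.3])
Delta = [(0, 1, 2), (0, 2, 3)]
print(weights(X, y, Delta))   # positive coordinates, by Corollary [cor:positive]
```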
We note that the function $H$ and Lemma \[lem:weightsAnyDim\] are quite interesting even in dimension one.
\[ex:d=1\] Let $d=1$. Here we examine log-concave density estimation for $n$ samples $x_1 < x_2 < \cdots < x_n$ on the real line. The function we defined in (\[eq:defH\]) has the representations $$H(u) \,\,=\,\, \frac{e^u - u - 1}{u^2} \,\,=\,\,\int_{0}^1 (1-y) e^{uy} dy \,\,=\,\,
\frac{1}{2} + \frac{1}{6} u + \frac{1}{24} u^2 + \frac{1}{120} u^3 + \cdots .$$ A vector $y^* \in {\mathbb{R}}^n$ is relevant if and only if $$\label{eq:relevant}
{\rm det}
\begin{pmatrix}
1 & 1 & 1 \\
x_{i-1} & x_i & x_{i+1} \\
y^*_{i-1} & y^*_i & y^*_{i+1}
\end{pmatrix} \,\leq \, 0 \,
\,\quad \hbox{for} \quad i=2,3,\ldots,n-1.$$ The desired vector $w \in {\mathbb{R}}^n_{>0}$ is defined by the formula in (\[weightsFormula\]). The $k$-th coordinate of $w$ is $$w_k = \begin{cases} (x_2 - x_1)e^{y^*_1}H(y^*_2 - y^*_1)& {\rm if}\,\, k = 1,\\
(x_k - x_{k-1})e^{y^*_k}H(y^*_{k-1} - y^*_k) + (x_{k+1} - x_k)e^{y^*_k}H(y^*_{k+1} - y^*_k)&
{\rm if}\,\, 2\leq k \leq n-1,\\
(x_n-x_{n-1})e^{y^*_n}H(y^*_{n-1} - y^*_n)& {\rm if}\,\, k = n.
\end{cases}$$ If we now further assume that $f_{X,y^*} = {\rm exp}(h_{X,y^*})$ is a density, i.e. $\int_{-\infty}^\infty f_{X,y^*}(t)dt = 1$, then $f_{X,y^*}$ is the unique log-concave density that maximizes the likelihood function for $(X,w)$.
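A minimal sketch of this one-dimensional recipe follows; the sample points and heights are hypothetical (chosen so that (\[eq:relevant\]) holds), and $H$ is extended by its value $H(0)=\frac12$ at the removable singularity.

```python
import numpy as np

def H1(u):
    """H(u) = (exp(u) - u - 1)/u^2, extended by its limit H(0) = 1/2."""
    return 0.5 if abs(u) < 1e-12 else (np.exp(u) - u - 1) / u ** 2

def weights_1d(x, y):
    """The vector w from (weightsFormula) for sorted samples x_1 < ... < x_n and heights y."""
    n = len(x)
    w = np.zeros(n)
    for k in range(n):
        if k > 0:
            w[k] += (x[k] - x[k - 1]) * np.exp(y[k]) * H1(y[k - 1] - y[k])
        if k < n - 1:
            w[k] += (x[k + 1] - x[k]) * np.exp(y[k]) * H1(y[k + 1] - y[k])
    return w

x = np.array([0.0, 1.0, 2.5, 4.0])        # hypothetical sorted samples
y = np.array([-2.0, -1.2, -1.5, -2.6])    # hypothetical heights satisfying (eq:relevant)
print(weights_1d(x, y))                   # the exact certificate requires exp(h_{X,y}) to integrate to 1
```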
\[ex:H2\] For $d=2$, our symmetric convex function $H$ has the form $$H(u, v) \,=\, \frac{1}{uv} + \frac{1}{u^2 v} + \frac{1}{u v^2}
+ \frac{e^u}{u^2(u{-}v)} + \frac{e^v}{v^2(v{-}u)} \,= \,
\frac{1}{6} + \frac{1}{24}(u+v) + \frac{1}{120}(u^2 + uv + v^2) + \cdots.$$ For planar configurations $X$, we use this function to map each point $y^*$ in the boundary of the Samworth body $\mathcal{S}(X)$ to a hyperplane $w \in \partial \mathcal{S}(X)^*$ that is tangent to $\partial \mathcal{S}(X)$ at $y^*$.
The set of all vectors $w \in {\mathbb{R}}^n$ that lead to a desired optimal solution $y^* \in \partial \mathcal{S}(X)$ is a convex polyhedral cone in ${\mathbb{R}}^n$. The following theorem characterizes that convex cone.
\[thm:normalcone\] Fix a vector $y^* \in {\mathbb{R}}^n$ that is relevant for $X$. Let $\Delta_1,\Delta_2,\ldots, \Delta_m$ be all the regular triangulations of $X$ that refine the subdivision of $X$ given by $y^*$, and let $w^{\Delta_i} \in {\mathbb{R}}^n_{>0}$ be the vector defined by (\[weightsFormula\]) for $\Delta_i$. Then, a vector $w \in {\mathbb{R}}^n_{>0} $ lies in the convex cone that is spanned by $\,w^{\Delta_1},w^{\Delta_2}, \ldots, w^{\Delta_m}\,$ if and only if $\,y^*$ is the optimal solution for (\[eq:ourproblem\]),(\[eq:constrained\]),(\[eq:unconstrained\]),(\[eq:optsecondary\]).
This follows from the fact that the cone of subgradients at each $y^*$ is convex, and that the gradients for the triangulations on which $h_{X,y^*}$ is linear are also subgradients at $y^*$; cf. [@CSS]. We can take any convex combination of these subgradients to obtain another subgradient.
\[ex\_4points\_plane\] Fix four points $x_1,x_2, x_3, x_4$ in counterclockwise convex position in ${\mathbb{R}}^2$. These admit two regular triangulations, $\Delta_1 = \{124,234\}$ and $\Delta_2 = \{123,134\}$. Consider any $y \in {\mathbb{R}}^4$ with $\int_{{\mathbb{R}}^2} f_{X,y}(t) dt = 1$. The vector $w^{\Delta_1} \in {\mathbb{R}}^4$ has coordinates $$\begin{aligned}
w_1^{\Delta_1} &\,\,=\,\, v_{124}e^{y_1}H(y_2-y_1,y_4-y_1)\\
w_2^{\Delta_1} &\,\,=\,\, v_{124}e^{y_2}H(y_1-y_2, y_4-y_2)
+ v_{234}e^{y_2}H(y_3-y_2, y_4-y_2)\\
w_3^{\Delta_1} &\,\,=\,\, v_{234}e^{y_3}H(y_2-y_3, y_4-y_3)\\
w_4^{\Delta_1} &\,\,=\,\, v_{124}e^{y_4}H(y_1-y_4, y_2-y_4)
+ v_{234}e^{y_4}H(y_2-y_4, y_3-y_4).\end{aligned}$$ Here $v_{ijk}$ denotes the triangle area in (\[eq:areas\]). Similarly, the vector $w^{\Delta_2}$ has coordinates $$\begin{aligned}
w_1^{\Delta_2} &\,\,=\,\, v_{123}e^{y_1}H(y_2-y_1, y_3-y_1)
+ v_{134}e^{y_1}H(y_3-y_1, y_4-y_1)\\
w_2^{\Delta_2} &\,\,=\,\, v_{123}e^{y_2}H(y_1-y_2, y_3-y_2)\\
w_3^{\Delta_2} &\,\,=\,\, v_{123}e^{y_3}H(y_1-y_3, y_2-y_3)
+ v_{134}e^{y_3}H(y_1-y_3, y_4-y_3)\\
w_4^{\Delta_2} &\,\,=\,\, v_{134}e^{y_4}H(y_1-y_4, y_3-y_4).\end{aligned}$$ In these formulas, the bivariate function $H$ can be evaluated as in Example \[ex:H2\].
We now distinguish three cases for $y$, depending on the sign of the $4 \times 4$-determinant $$\label{eq:tetrahedron}
{\rm det} \begin{pmatrix}
1 & 1 & 1 & 1 \\
x_1 & x_2 & x_3 & x_4 \\
y_1 & y_2 & y_3 & y_4
\end{pmatrix}.$$ If (\[eq:tetrahedron\]) is positive then $y$ induces the triangulation $\Delta_1$. In that case, $y$ is the unique solution to our optimization problem whenever $w$ is any positive multiple of $w^{\Delta_1}$. If (\[eq:tetrahedron\]) is negative then $y$ induces $\Delta_2$ and it is the unique solution whenever $w$ is a positive multiple of $w^{\Delta_2}$. Finally, suppose (\[eq:tetrahedron\]) is zero, so $y$ induces the trivial subdivision $1234$. If $w$ is any vector in the cone spanned by $w^{\Delta_1}$ and $w^{\Delta_2}$ in ${\mathbb{R}}^4$ then $y$ is the optimal solution for (\[eq:ourproblem\]),(\[eq:constrained\]),(\[eq:unconstrained\]),(\[eq:optsecondary\]).
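This case distinction is easy to automate. A small sketch with hypothetical coordinates and heights:

```python
import numpy as np

x = np.array([(0, 0), (2, 0), (3, 2), (0, 2)], dtype=float)   # hypothetical, counterclockwise convex position
y = np.array([0.0, 0.1, -0.2, 0.3])                            # hypothetical heights

# The 4x4 determinant (eq:tetrahedron): a row of ones, the coordinates, the heights.
D = np.linalg.det(np.vstack([np.ones(4), x.T, y]))

if D > 0:
    print("y induces Delta_1 = {124, 234}")
elif D < 0:
    print("y induces Delta_2 = {123, 134}")
else:
    print("y induces the trivial subdivision 1234")
```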
We next observe what happens in Theorem \[thm:normalcone\] when all coordinates of $y^*$ are equal.
\[cor:cccc\] Fix the constant vector $y^* = (c,c,\ldots,c)$, where $c = - {\rm log}({\rm vol}(P))$, so as to ensure that $\int_{{\mathbb{R}}^d} f_{X,y^*}(t) dt = 1$. For any regular triangulation $\Delta_i$, the weight vector in (\[weightsFormula\]) is a constant multiple of the GKZ vector in (\[eq:GKZvector\]). More precisely, we have $w^{\Delta_i} = \frac{e^c}{(d+1)!} \cdot z^{\Delta_i}$. Hence $y^*$ is the optimal solution for any $w$ in the cone over the secondary polytope $\Sigma(X)$.
The constant term of the series expansion in Proposition \[prop:Hformula\] equals $$H(0,0,\ldots,0) \,\,= \,\, \frac{1}{(d+1)!} .$$ This implies that the sum in (\[weightsFormula\]) simplifies to $\frac{e^c}{(d+1)!}$ times the sum in (\[eq:GKZvector\]). The last statement follows from Theorem \[thm:normalcone\] because the cone over $\Sigma(X)$ is spanned by all GKZ vectors $z^\Delta$.
We shall now prove the result that was stated in the Introduction.
Let $\Delta_1, \ldots,\Delta_m$ be all regular triangulations that refine a given subdivision $\Delta$. To underscore the dependence on $y$, we write $w^{\Delta_i}_y$ for the vector defined in (\[weightsFormula\]). Let $\mathcal{C}_\Delta$ denote the secondary cone of $\Delta$. This is the normal cone to $\Sigma(X)$ at the face with vertices $z^{\Delta_1},\ldots, z^{\Delta_m}$. In particular, we have $\,\dim (\text{span} (z^{\Delta_1},\dots, z^{\Delta_m})) = n - \dim (\mathcal{C}_\Delta)$.
For $y \in {\mathbb{R}}^n$ we abbreviate $N(y) =
\dim (\text{span} (w^{\Delta_1}_{y}, \dots, w^{\Delta_m}_{y}))$. The closure of the cone $\mathcal{C}_\Delta$ contains the constant vector $y_0 = (c,c,\ldots,c)$, where $c = - {\rm log}({\rm vol}(P))$. Corollary \[cor:cccc\] implies that $N(y_0) = n - \dim (\mathcal{C}_\Delta)$. The matrix $(w^{\Delta_1}_{y}, \dots, w^{\Delta_m}_{y})$ depends analytically on the parameter $y$. Its rank is a lower semicontinuous function of $y$. Thus, there exists an open ball $\hat{\mathcal{B}}$ in ${\mathbb{R}}^n$ that contains $y_0$ and such that $N(y)\geq n - \dim (\mathcal{C}_\Delta)$ for every $y\in\hat{\mathcal B}$. Now, let $\mathcal B = \mathcal{C}_\Delta \cap \hat{\mathcal B}$. The set $\mathcal B$ is full-dimensional in $\mathcal{C}_\Delta$, and $N(y)\geq n - \dim(\mathcal{C}_\Delta)$ for all $y \in \mathcal{B}$.
For each $ y \in \mathcal{B}$ we consider the convex cone in Theorem \[thm:normalcone\], which consists of all weight vectors $w$ for which the optimum occurs at $y$. We denote it by $\,{\rm cone}(w^{\Delta_1}_{y}, \ldots, w^{\Delta_m}_{y})$. These convex cones are pairwise disjoint as $y$ runs over $\mathcal{B}$, and they depend analytically on $y$. Since the dimension of each cone is at least $n-{\rm dim}(\mathcal{B}) $, it follows that the semi-analytic set $$\label{eq:fulldimset}
\bigcup_{y \in \mathcal{B}} {\rm cone}(w^{\Delta_1}_{y}, \dots, w^{\Delta_m}_{y})$$ is full-dimensional in ${\mathbb{R}}^n$. By Theorem \[thm:normalcone\], for each $w$ in the set (\[eq:fulldimset\]), the optimal solution $\hat f$ to (\[eq:ourproblem\]) is a piecewise log-linear function whose regions of linearity are the cells of $\Delta$.
We believe that the rank of the matrix $(w^{\Delta_1}_{y}, \ldots, w^{\Delta_m}_{y})$ is the same for all vectors $y$ that induce the regular subdivision $\Delta$, namely $N(y) = n-{\rm dim}(\mathcal{C}_\Delta)$. At present we do not know how to prove this. For the proof of Theorem \[thm:converse\], it was sufficient to have this constant-dimension property for all $y$ in a relatively open subset $\mathcal{B}$ of the secondary cone $\mathcal{C}_\Delta$.
The Samworth Body
=================
The maximum likelihood problem studied in this paper is a linear optimization problem over a convex set. We named that convex set the Samworth body, in recognition of the contributions made by Richard Samworth and his collaborators [@CGS; @CSS]. In what follows we explore the geometry of the Samworth body. We begin with the following explicit formula:
The Samworth body of a given configuration $X$ of $\,n$ points in ${\mathbb{R}}^d$ equals $$\label{eq:samformula}
\! \mathcal{S}(X) \,\, = \,\,
\biggl\{ (y_1,\ldots,y_n) \in {\mathbb{R}}^n \,:\,
\sum_{\sigma \in \Delta}
\sum_{i \in \sigma}
\frac{{\rm vol}(\sigma) \cdot {\rm exp}(y_i)}
{\prod_{j \in \sigma \backslash i} (y_i-y_j)} \leq 1 \,\,\,
\hbox{for all $\Delta$ that refine $y\,$} \biggr\}.$$ This is a closed convex subset of $\,{\mathbb{R}}^n$. In the defining condition we mean that $\Delta$ runs over all regular triangulations that refine the regular polyhedral subdivision of $X$ specified by $y$.
This is a reformulation of the definition (\[eq:samdef\]) using the formulas in Theorem \[thm:IntegralFormula\] and Corollary \[cor:optsecondary\]. Closedness and strict convexity of $\mathcal{S}(X)$ were noted in Theorem \[thm:samworth\].
Maximization of a linear function $w$ over $\mathcal{S}(X)$ becomes an unconstrained problem via the Legendre-Fenchel transform as in (\[eq:optsecondary\]). By solving this problem for many instances of $w$, one can approximate the shape of $\mathcal{S}(X)$. Indeed, each regular subdivision of $X$ specifies a full-dimensional subset in the boundary of the dual body $\mathcal{S}(X)^*$, by Theorem \[thm:converse\]. If we choose a direction $w$ at random in ${\mathbb{R}}^n$, then a unique positive multiple $\lambda w$ lies in $\partial \mathcal{S}(X)^*$, in the stratum associated to the subdivision of $X$ specified by the optimal solution $y^* \in \partial \mathcal{S}(X)$. By evaluating the map $w \mapsto y^*$ many times, we thus obtain the empirical distribution on the subdivisions, indicating the proportion of volumes of the strata in $\partial \mathcal{S}(X)^*$. In the next example we compute this distribution when the double sum in (\[eq:samformula\]) looks like that in (\[eq:maximumamong14\]).
\[ex:associahedron2\] Let $d=2$, $n=6$, and take our configuration $X$ to be the six points $(0,0),(1,0),(2,1),(2,2),(1,2), (0,1)$. We sampled 100,000 vectors $w$ uniformly from the simplex $\{w \in {\mathbb{R}}^6_{\geq 0} : \sum_{i=1}^6 w_i = 1\}$. For each $w$, we computed the optimal $y^* \in {\mathbb{R}}^6$, and we recorded the subdivision of $X$ that is the support of $h_{X,y^*}$. We know from Example \[ex:hexagon\] that the secondary polytope $\Sigma(X)$ is an associahedron, which has $14+21+9+1 = 45$ faces. We here code each subdivision by a list of length $3,2,1$ or $0$ from among the diagonal segments $$13, \,14, \, 15, \,24, \,25,\,26, \,35, \,36,\,46.$$ For instance, the list $13 \,\,14 \,\,15$ encodes the triangulation $\Delta$ in Example \[ex:hexagon\]. The edge connecting the triangulations $\Delta$ and $\Delta'$ from Example \[ex:hexagon\] is denoted $13 \, 14 $. We write $\emptyset$ for the trivial flat subdivision. The following table of percentages shows the empirical distribution we observed for the $45$ outcomes of our experiment:
$$\begin{matrix}
\emptyset & 35 \,&\, 46 \,&\, 24 \,&\, 15 \,&\, 13 \,&\, 26 \,&\, 25 \,&\, 14 \,&\, 36 \\
30.5 \,\,&\,\, 5.95 \,\,&\,\, 5.85 \,\,&\,\, 5.84 \,\,&\,\, 5.83 \,\,&
\,\, 5.75 \,\, &\,\, 5.70 \,\,&\,\, 3.91 \,\,&\,\, 3.90 \,\,&\,\, 3.87 \\
\end{matrix}$$ $$\begin{matrix}
13 \,15 & 26 \, 46 & 15 \, 35 & 13 \, 35 & 24 \, 26 & 24 \, 46 & 13 \, 14 & 35\, 36 & 14 \, 24 & 26 \, 36 & 14 \, 46 & 25 \, 35 & 15 \, 25 \\
1.23 & 1.21 & 1.21 & 1.20 & 1.16 &
1.14 & 0.96 & 0.92 & 0.92 & 0.92 & 0.92 & 0.90 & 0.90
\end{matrix}$$ $$\begin{matrix}
25 \,26 &
14 \, 15 &
36 \, 46 &
24 \, 25 &
13 \, 36 &
13 \,46 &
26 \, 35 &
15\, 24 &
13 \, 14 \, 15 &
13 \,15 \,35 &
14 \,24 \,46 &
24 \,26 \, 46 \\
0.89 &
0.89 &
0.87 &
0.87 &
0.84 &
0.82 &
0.77 &
0.70 &
0.25 &
0.24 & 0.23 & 0.22
\end{matrix}$$ $$\begin{matrix}
15 \, 25 \, 35 &
26\, 36 \, 46 &
13 \, 35 \, 36 &
24 \, 25 \, 26 &
13 \, 36 \, 46 &
25 \, 26\, 35 &
15 \,24 \, 25 &
14 \, 15 \, 24 &
13 \,14 \, 46 &
26 \, 35\, 36 \\
0.22 &
0.21 &
0.20 &
0.18 &
0.18 &
0.16 &
0.15 &
0.15 & 0.15 & 0.14
\end{matrix}$$
The entry marked $\emptyset$ reveals that the trivial subdivision occurs with the highest frequency. This means that a large portion of the dual boundary $\partial \mathcal{S}(X)^*$ is flat. Equivalently, the Samworth body $\mathcal{S}(X)$ has a “very sharp edge” along the lineality space of the secondary fan.
To get a better understanding of the geometry of the Samworth body $\mathcal{S}(X)$, at least when $d$ or $n-d$ are small, we can also use the algebraic formula in (\[eq:samformula\]) for explicit computations.
\[ex:fancyteewurst\] Let $d=3$, $n=6$, and fix the configuration of vertices of a [*regular octahedron*]{}: $$X \, = \,(x_1,x_2,\ldots,x_6) \,=\, \bigl( \,+e_1\,,\,-e_1\,,\,\,+e_2,\,-e_2\,,\,\,+e_3\,,\,-e_3 \,\bigr).$$ Here $e_i$ denotes the $i$th unit vector in ${\mathbb{R}}^3$. The secondary polytope $\Sigma(X)$ is a triangle. Its edges correspond to the three subdivisions of the octahedron $X$ into two square-based pyramids, $\Delta_{1234} = \{12345, 12346\}$, $\Delta_{1256} = \{12356,12456\}$, and $\Delta_{3456} = \{13456, 23456\}$. Its vertices correspond to the three triangulations of $X$, namely $\Delta_{12} = \{1235, 1236, 1245, 1256\}$, $\Delta_{34} = \{ 1345, 1346, 2345, 2346 \}$, and $\Delta_{56} = \{1356, 1456, 2356, 2456\}$.
The normal fan of $\Sigma(X)$, which is the secondary fan of $X$, has three full-dimensional cones in ${\mathbb{R}}^6$. A vector $y$ in ${\mathbb{R}}^6$ selects the triangulation $\Delta_{ij}$ if $y_i+y_j$ is the uniquely attained minimum among $\{y_1+y_2,\,y_3+y_4,\,y_5+y_6\}$. It selects $\Delta_{1234}$ if $y_1+y_2 = y_3+y_4 < y_5+y_6$, and it leaves the octahedron unsubdivided when $y$ is in the lineality space $\,\{y \in {\mathbb{R}}^6: y_1+y_2 = y_3+y_4 = y_5+y_6\}$.
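This selection rule translates directly into code (a small sketch; the height vector is hypothetical):

```python
def octahedron_subdivision(y):
    """The regular subdivision of the octahedron X selected by the heights y in R^6."""
    sums = {"12": y[0] + y[1], "34": y[2] + y[3], "56": y[4] + y[5]}
    m = min(sums.values())
    argmin = sorted(key for key, s in sums.items() if s == m)
    if len(argmin) == 1:
        return "Delta_" + argmin[0]          # one of the three triangulations
    if len(argmin) == 2:
        return "Delta_" + "".join(argmin)    # one of the three pyramid splits
    return "trivial subdivision"             # y lies in the lineality space

print(octahedron_subdivision([0.0, 0.1, 0.3, 0.2, 0.4, 0.5]))   # prints Delta_12
```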
The Samworth body $\mathcal{S}(X)$ is defined in ${\mathbb{R}}^6$ by the following system of three inequalities. Use the $i$th inequality when the $i$th number in the list $(y_1{+}y_2, y_3 {+} y_4, y_5 {+} y_6)$ is the smallest: $$\begin{matrix}
\frac{ e^{y_1} (2 y_1-y_6-y_5) (2 y_1-y_4-y_3)}{(y_1{-}y_2)(y_1{-}y_3)(y_1{-}y_5)(y_1{-}y_6)(y_1{-}y_4) }
- \frac{ e^{y_2} (2 y_2-y_6-y_5) (2 y_2-y_4-y_3)}{(y_1{-}y_2)(y_2{-}y_3)(y_2{-}y_5)(y_2{-}y_6)(y_2{-}y_4) }
+ \frac{e^{y_3} (2 y_3-y_6-y_5) }{ (y_1{-}y_3)(y_2{-}y_3)(y_3{-}y_5)(y_3{-}y_6) } \smallskip \\ \,\,
+ \frac{e^{y_4} (2 y_4-y_6-y_5) }{ (y_1{-}y_4)(y_2{-}y_4)(y_4{-}y_5)(y_4{-}y_6) }
- \frac{e^{y_5} (y_4-2 y_5+y_3) }{ (y_1{-}y_5)(y_2{-}y_5)(y_3{-}y_5)(y_4{-}y_5) }
- \frac{e^{y_6} (y_4-2 y_6+y_3) }{ (y_1{-}y_6)(y_2{-}y_6)(y_3{-}y_6)(y_4{-}y_6) } \,\,\leq \,1 \smallskip
\end{matrix}$$ $$\begin{matrix}
\frac{ e^{y_1} (2 y_1-y_6-y_5) }{ (y_1-y_3)(y_1-y_4)(y_1-y_5)(y_1-y_6) }
+ \frac{ e^{y_2} (2 y_2-y_6-y_5) }{ (y_2-y_3)(y_2-y_4)(y_2-y_5)(y_2-y_6) }
- \frac{ e^{y_3} (2 y_3-y_6-y_5) (y_2-2 y_3+y_1)}{(y_1-y_3)(y_3-y_4)(y_3-y_5)(y_3-y_6)(y_2-y_3) }
\smallskip \\
+ \frac{ e^{y_4} (2 y_4-y_6-y_5) (-2 y_4+y_1+y_2)}{(y_1-y_4)(y_3-y_4)(y_4-y_5)(y_4-y_6)(y_2-y_4) }
- \frac{ e^{y_5} (y_2-2 y_5+y_1) }{ (y_1-y_5)(y_2-y_5)(y_3-y_5)(y_4-y_5) }
- \frac{ e^{y_6} (y_2-2 y_6+y_1) }{ (y_1-y_6)(y_2-y_6)(y_3-y_6)(y_4-y_6) }\,\,\leq \,1 \smallskip
\end{matrix}$$ $$\!
\begin{matrix}
\frac{ e^{y_1} (2 y_1-y_4-y_3) }{ (y_1-y_3)(y_1-y_4)(y_1-y_5)(y_1-y_6) }
+ \frac{ e^{y_2} (2 y_2-y_4-y_3) }{ (y_2-y_3)(y_2-y_4)(y_2-y_5)(y_2-y_6) }
- \frac{ e^{y_3} (y_2-2 y_3+y_1) }{ (y_1-y_3)(y_2-y_3)(y_3-y_5)(y_3-y_6) } - \smallskip \\
\frac{ e^{y_4} (-2 y_4+y_1+y_2) }{ (y_1-y_4)(y_2-y_4)(y_4-y_5)(y_4-y_6) }
{+} \frac{ e^{y_5} (y_4-2 y_5+y_3) (y_2-2 y_5+y_1)}{(y_1-y_5)(y_3-y_5)(y_5-y_6)(y_4-y_5)(y_2-y_5) }
{-} \frac{ e^{y_6} (y_4-2 y_6+y_3) (y_2-2 y_6+y_1)}{(y_1-y_6)(y_3-y_6)(y_5-y_6)(y_4-y_6)(y_2-y_6) }
\leq 1
\smallskip
\end{matrix}$$
The dual convex body $\mathcal{S}(X)^*$ has seven strata of faces in its boundary: a $3$-dimensional manifold of $2$-dimensional faces, corresponding to the trivial subdivision, three $4$-dimensional manifolds of edges corresponding to $\Delta_{1234}, \Delta_{1256}, \Delta_{3456}$, and three $5$-dimensional manifolds of extreme points, corresponding to $\Delta_{12},\Delta_{34},\Delta_{56}$. Each $2$-dimensional face of $\mathcal{S}(X)^*$ is a triangle, like the secondary polytope $\Sigma(X)$. The dual to this convex set is the Samworth body $\mathcal{S}(X)$, which is strictly convex. Its boundary is singular along three $4$-dimensional strata, which are formed when two of the three inequalities above are active. These meet in a highly singular $3$-dimensional stratum, which is formed when all three inequalities are active. These singularities of $\partial \mathcal{S}(X)$ exhibit the secondary fan of $X$. It is instructive to draw a cartoon, in dimension two or three, to visualize the boundary features of $\mathcal{S}(X)$ and $\mathcal{S}(X)^*$.
Up until this point, the premise of this paper has been that the configuration $X$ is fixed but the weights $w$ vary. Example \[ex:fancyteewurst\] was meant to give an impression of the corresponding geometry, by describing in an intuitive language how a Samworth body $\mathcal{S}(X)$ can look like.
However, our premise is at odds with the perspective of statistics. For a statistician, the natural setting is to fix unit weights, $w = \frac{1}{n}(1,1,\ldots,1)$, and to assume that $X$ consists of $n$ points that have been sampled from some underlying distribution. Here, one cares about one distinguished point in $\partial \mathcal{S}(X)$ and less about the global geometry of the Samworth body. Specifically, we wish to know which face of $\mathcal{S}(X)^*$ is pierced by the ray $\bigl\{ (\lambda,\ldots,\lambda) \,:\, \lambda \geq 0 \bigr\}$.
-------- -------- ---------- --------- ---------- -------------------- --------- --------- ---------
3-gons   4-gons   5-gons   6-gons   Convex hull   Gaussian $\mathcal{N}(0,1)$   Uniform $a=0.5$   Circular $a=0.3$   Circular $a=0.1$
1 0 0 0 3 948 533 257 34
0 1 0 0 4 8781 6719 4596 1507
0 0 1 0 5 8209 9743 10554 8504
0 0 0 1 6 1475 2805 4495 9887
2 0 0 0 4 8 3 6 7
1 1 0 0 5 1 2 1 2
3 0 0 0 3 6 2 2 1
2 1 0 0 4 39 16 4 7
2 0 1 0 5 1 1 0 1
1 2 0 0 5 1 0 1 6
4 0 0 0 4 1 0 0 0
3 1 0 0 3 114 38 10 1
3 0 1 0 4 39 20 9 2
2 2 0 0 4 59 19 16 9
5 0 0 0 3 3 0 0 0
4 1 0 0 4 1 0 0 0
4 0 1 0 3 90 27 8 1
3 2 0 0 3 120 32 11 0
5 1 0 0 3 50 11 3 0
7 0 0 0 3 2 1 0 0
-------- -------- ---------- --------- ---------- -------------------- --------- --------- ---------
: \[tab:caroline\] The optimal subdivisions for six random points in the plane
\[ex:Gaussian\] Let $d=2$ and $n=6$ as in Example \[ex:associahedron2\], but now with unit weights $w = \frac{1}{6}(1,1,1,1,1,1)$. We sample i.i.d. points $x_1,\ldots,x_6$ from various distributions $f$ on ${\mathbb{R}}^2$, some log-concave and others not, and we compare the resulting maximum likelihood densities $\hat f$.
In what follows, we analyze the case where $f$ is a standard Gaussian distribution or a uniform distribution on the unit disc, and we contrast this to distributions of the form $X=(U_1^a \cos(2\pi U_2), U_1^a \sin(2\pi U_2))$, where $U_1$ and $U_2$ are independent uniformly distributed on the interval $[0,1]$ and $a<0.5$. Such distributions have more mass towards the exterior of the unit disc and are hence not log-concave. For $a=0.5$ this is the uniform distribution on the unit disc. We drew 20,000 samples $X = (x_1,\ldots,x_6)$ from each of these four distributions.
For each experiment, we recorded the number of vertices of the convex hull of the sample, we computed the optimal subdivision using [LogConcDEAD]{}, and we recorded the shapes of its cells. Our results are reported in Table \[tab:caroline\]. Each of the four right-most columns shows the number of experiments out of 20,000 that resulted in a subdivision as described in the five left-most columns. These columns do not add up to 20,000, because we discarded all experiments for which the optimization procedure did not converge due to numerical instabilities.
In the vast majority of cases, reported in the first four rows, the optimal solution $\hat f$ is log-linear. Here the subdivision is trivial, with only one cell. For instance, the fourth row is the 30.5% case in Example \[ex:associahedron2\]. In the last row, ${\rm conv}(X)$ is a triangle and the subdivision is a triangulation that uses all three interior points. We saw such a triangulation in Example \[ex:octahedron\]. In fact, we constructed the data (\[eq:sixpoints\]) by modifying one of the examples with seven cells found by sampling from a Gaussian $\mathcal{N}(0,1)$ distribution. Note that the subdivisions resulting from Gaussian samples tend to have more cells than those from other distributions.
The examples in this section illustrate two different interpretations of the data set $(X,w)$: either the configuration $X$ is fixed and the weight vector $w$ varies, or $w$ is fixed and $X$ varies. These are two different parametric versions of our optimization problem (\[eq:ourproblem\]), (\[eq:constrained\]), (\[eq:unconstrained\]), (\[eq:optsecondary\]). This generalizes the interpretation of the secondary polytope $\Sigma(X)$ seen in [@DRS Section 1.2], namely as a geometric model for [*parametric linear programming*]{}. The vertices of $\Sigma(X)$ represent the various collections of optimal bases when the matrix $X$ is fixed and the cost function $w$ varies. See [@DRS Exercise 1.17] for the case $d=2, n=6$, as in Examples \[ex:hexagon\], \[ex:associahedron2\] and \[ex:Gaussian\]. Of course, it is very interesting to examine what happens when both $X$ and $w$ vary, and to study $\Sigma(X)$ as a function on the space of configurations $X$. This was done in [@universal]. The same problem is even more intriguing in the statistical setting introduced in this paper.
Study the Samworth body as a function $X \mapsto \mathcal{S}(X)$ on the space of configurations. Understand log-concave density estimation as a parametric optimization problem.
This problem has many angles, aspects and subproblems. Here is one of them:
For fixed $w$ and a fixed combinatorial type of subdivision $\Delta$, study the semi-analytic set of all configurations $X$ such that $\Delta$ is the optimal subdivision for the data $(X,w)$.
For instance, suppose we fix the triangulation $\Delta$ seen on the right of Figure \[fig:octahedron\]. How much can we perturb the configuration in (\[eq:sixpoints\]) and retain that $\Delta$ is optimal for unit weights? For $n{=}6,d{=}2$, give inequalities that characterize the space of all datasets $(X,w)$ that select $\Delta$.
An ultimate goal of our geometric approach is the design of new tools for nonparametric statistics. One aim is the development of test statistics for assessing whether a given sample comes from a log-concave distribution. Such tests are important, e.g. in economics [@An1; @An2].
Design a test statistic for log-concavity based on the optimal subdivision $\Delta$.
The idea is that $\Delta$ is likely to have more cells when $X$ is sampled from a log-concave distribution. Hence we might use the f-vector of $\Delta$ as a test statistic for log-concavity. The study of such tests seems related to the approximation theory of convex bodies developed by Adiprasito, Nevo and Samper [@ANS]. What does their “higher chordality” mean for statistics?
Unit Weights
============
In this section we offer a further analysis of the uniform weights case. Example \[ex:Gaussian\] suggests that the flat subdivision occurs with overwhelming probability when the sample size is small. Our main result in this section establishes this flatness for the small non-trivial case $n=d+2$:
\[thm:d+2points\] Let $X$ be a configuration of $n=d+2$ points that affinely span ${\mathbb{R}}^d$. For $w = \frac{1}{n}(1,\ldots,1)$, the optimal density $\hat f$ is log-linear, so the optimal subdivision of $X$ is trivial.
We shall use the following lemma, which can be derived by a direct computation.
\[lem:surprise\] The symmetric function $H$ in Section 4 satisfies the differential equation $$\frac{\partial H}{\partial x_1}
(x_1,\dots, x_d) \quad = \quad \frac{e^{x_1}H(-x_1, x_2-x_1,\dots, x_d-x_1) - H(x_1,\dots, x_d)}{x_1}.$$
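The stated identity can also be checked numerically, for instance as follows (a sketch for $d=2$, comparing a central finite difference with the right hand side of the lemma; the test point is arbitrary):

```python
import math
import numpy as np

def H(u):
    """Closed form (eq:defH); assumes nonzero, pairwise distinct arguments."""
    u = np.asarray(u, dtype=float)
    d = len(u)
    return (-1) ** d * (1 + np.sum(1 / u)) / np.prod(u) + sum(
        math.exp(u[j]) / (u[j] ** 2 * np.prod([u[j] - u[k] for k in range(d) if k != j]))
        for j in range(d))

x1, x2, eps = 0.6, -0.4, 1e-6
lhs = (H([x1 + eps, x2]) - H([x1 - eps, x2])) / (2 * eps)        # dH/dx_1 by central differences
rhs = (math.exp(x1) * H([-x1, x2 - x1]) - H([x1, x2])) / x1       # right hand side of the lemma
print(lhs, rhs)   # the two numbers agree up to the discretization error
```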
Our $d+2$ points in $\mathbb R^d$ can be partitioned uniquely into two affinely independent subsets whose convex hulls intersect. This gives rise to a unique identity $$\sum_{i=1}^k \alpha_ix_i \,\,\,= \,\, \sum_{j=k+1}^{d+2}\beta_j x_j,$$ where $1\leq k \leq d+1, \,\,\alpha_1,\dots, \alpha_k, \beta_{k+1},\dots, \beta_{d+2} \geq 0$, and $\sum \alpha_i = \sum \beta_j = 1$. We abbreviate $\mathcal{D} = \{1,2,\ldots,d+2\}$. There are precisely three regular subdivisions of the configuration $X$:
1. the triangulation $\,\bigl\{ \mathcal{D} \backslash \{1\},
\mathcal{D} \backslash \{2\}, \ldots, \mathcal{D} \backslash \{k\} \bigr\}$,
2. the triangulation $\,\bigl\{ \mathcal{D} \backslash \{k{+}1\},
\mathcal{D} \backslash \{k{+}2\}, \ldots, \mathcal{D} \backslash \{d{+}2\} \bigr\}$,
3. the flat subdivision $\bigl\{ \mathcal{D} \bigr\}$.
The simplex volumes $\,\sigma_{\mathcal{D} \setminus i}
= {\rm vol}\bigl({\rm conv}(\,x_\ell : \ell \in \mathcal{D} \backslash \{i\})\bigr) \,$ satisfy the identity $$\label{eq:simplexvolumes}
\qquad \quad \sum_{i=1}^k \sigma_{\mathcal{D}\setminus i} \,\,\,= \,\,\sum_{j=k+1}^{d+2}
\! \sigma_{\mathcal{D}\setminus j}
\quad = \quad {\rm vol}({\rm conv}(X)).$$
Now let $w \in {\mathbb{R}}^{d+2}$ be a positive weight vector, and suppose that the optimal heights $y_1,\dots, y_{d+2}$ do not induce the flat subdivision (iii). This means that the optimal subdivision is one of the triangulations (i) and (ii). We will show that in that case $w \not= (\lambda, \lambda,\ldots,\lambda)$.
After relabeling we may assume that (ii) is the optimal triangulation for the given weights $w$. This is equivalent to the inequality $$\sum_{i=1}^k y_i\sigma_{\mathcal{D}\setminus i}
\,\,\,> \,\, \sum_{j=k+1}^{d+2}y_j \sigma_{\mathcal{D}\setminus j}. \quad \qquad$$ In light of (\[eq:simplexvolumes\]), at least one of $y_1,\dots, y_{k}$ has to be larger than at least one of $y_{k+1},\dots, y_{d+2}$. After relabeling once more, we may assume that $\,y_1 > y_{k+1}$.
Theorem \[thm:normalcone\] states that the weight vector $w$ is uniquely determined (up to scaling) by the optimal height vector $y$. Namely, the coordinates of $w$ are given by the formula for the optimal triangulation (ii). That formula gives $$\label{eq:w_1}
w_1 \,\,\,= \,\, \sum_{j=k+1}^{d+2} \sigma_{\mathcal{D}\backslash j}e^{y_1}
H \bigl(y_\ell-y_1 : \ell\in\mathcal{D}\backslash \{1, j\} \bigr), \quad$$ and $$\label{eq:w_{k+1}}
w_{k+1} \,\,\,=\,\, \sum_{j=k+2}^{d+2}\sigma_{\mathcal{D}\backslash j}e^{y_{k+1}}
H \bigl(y_\ell-y_{k+1}: \ell\in \mathcal{D}\backslash \{k{+}1,j\} \bigr).$$ For any index $j\in \{k{+}2,\dots, d{+}2\}$ we consider the expression $$\begin{aligned}
\label{eq:difference}
e^{y_{1}}H(y_\ell-y_1 : \ell\in\mathcal{D}\backslash j )\,\,-\,\,
e^{y_{k+1}}H(y_\ell-y_{k+1}: \ell\in \mathcal{D}\backslash j)\, \, \\
= \,\,\, e^{y_{k+1}} \bigl(\,e^{y_1-y_{k+1}} H(y_{\ell} - y_{k+1} - (y_1-y_{k+1}) : \ell\in \mathcal{D}\backslash j)
\, - \,H(y_\ell-y_{k+1}: \ell\in \mathcal{D}\backslash j)
\,\bigr). \notag\end{aligned}$$ If we divide the parenthesized difference by $x_1 = y_1 - y_{k+1}$, then we obtain an expression as in the right hand side of Lemma \[lem:surprise\]. Then, by Lemma \[lem:surprise\], the expression in (\[eq:difference\]) becomes $$e^{y_{k+1}} \cdot (y_1-y_{k+1}) \cdot \frac{\partial H}{\partial x_1}\bigl( \,
y_\ell-y_{k+1}: \ell\in \mathcal{D}\backslash j \,\bigr) .$$ By Corollary \[cor:positive\], all partial derivatives of $H$ are positive. Also, recall that $y_1>y_{k+1}$. Therefore, the expression in (\[eq:difference\]) is positive. Hence, for any $j\in\{k{+}2,\dots, d{+}2\}$, we have $$e^{y_{1}}H(y_\ell-y_1 : \ell\in\mathcal{D}\backslash j)
\,\,\,> \,\,\, e^{y_{k+1}}H(y_\ell-y_{k+1}: \ell\in \mathcal{D}\backslash j).$$ In the left expression it suffices to take $ \,\ell\in\mathcal{D}\backslash \{1, j\} $, and in the right expression it suffices to take $\, \ell\in \mathcal{D}\backslash \{k{+}1,j\}$. Summing over all $j$, the identities (\[eq:simplexvolumes\]), (\[eq:w\_1\]) and (\[eq:w\_[k+1]{}\]) now imply $$w_1 \,\,> \,\, w_{k+1}.$$ This means that $w \not= (\lambda,\lambda,\ldots,\lambda)$ for all $\lambda > 0$. We conclude that it is impossible to get a nontrivial subdivision of $X$ as the optimal solution when all the weights are equal.
We now show that the result of Theorem \[thm:d+2points\] is the best possible in the following sense.
\[thm:d+3points\] For any integer $d \geq 2$, there exists a configuration of $n=d+3$ points in ${\mathbb{R}}^d$ for which the optimal subdivision with respect to unit weights is non-trivial.
The hypothesis $d \geq 2$ is essential in this theorem. Indeed, for $d=1$ it can be shown, using the formulas in Example \[ex:d=1\], that the flat subdivision is optimal for any configuration of $d+3=4$ points on the line $\mathbb R$ with unit weights. Here is an illustration of Theorem \[thm:d+3points\].
Fix unit weights on the following five points in the plane: $$\begin{aligned}
\label{eq:fivepoints}
X \,\, = \,\, \bigl(\, (0,0), \,(40, 0),\, (20, 40),\, (17, 10),\, (21, 15) \,\bigr).\end{aligned}$$ Using [LogConcDEAD]{} [@CGS], we find that the optimal subdivision equals $\{124, 245, 235, 1345\}$.
To derive Theorem \[thm:d+3points\], we first study the following configuration of $d+2$ points in ${\mathbb{R}}^d$:
$$\label{eq:specialconfig}
X\,\, =\,\, \biggl( e_1\,,\,e_2\,,\,\ldots\,,\, e_d\,,\, 0\,,\,\, \frac1{d+1} \sum_{i=1}^de_i \,\biggr).$$
Let $\alpha > 0$ and assign weights as follows to the configuration $X$ in (\[eq:specialconfig\]): $$\begin{aligned}
\label{eq:equalWeightsFormula}
w_1=w_2 =\cdots = w_{d+1} > 0,
\text{ and } \,\,w_{d+2} \,=\,
w_1\frac{(d+1)e^{\alpha} H(-\alpha, -\alpha,\dots, -\alpha)}{dH(\alpha, 0,\dots,0)}.
\end{aligned}$$ Then the optimal heights satisfy $\,y_1= y_2 = \cdots = y_{d+1}\,$ and $\,y_{d+2} = y_1+\alpha$.
Let $\mathcal D = \{1,\dots, d+2\}$ and fix $w$ as in (\[eq:equalWeightsFormula\]). The volumes $\text{vol}(\mathcal D\backslash\{i\})$ are equal for $i\in\{1,\dots, d+1\}$. Set $\sigma = \text{vol}(\mathcal D\backslash\{i\})$. We will show that the heights $y_1=\cdots=y_{d+1} = y$ and $y_{d+2} = y+\alpha$ solve the Lagrange multiplier equations for our optimization problem, assuming that $\Delta$ is the triangulation $\{\mathcal D\backslash \{1\},\ldots, \mathcal D\backslash \{d{+}1 \}\}$. Indeed, from (\[weightsFormula\]) we derive $$\begin{matrix}
& w_i &= &
d \cdot \sigma \cdot e^y \cdot H(\alpha, 0, \ldots, 0) \quad & \hbox{
for $i\leq d+1$} \\
\hbox{and} \qquad & w_{d+2}
& = & (d{+}1)\cdot \sigma \cdot e^{y+\alpha} \cdot H(-\alpha, \dots, -\alpha). &
\end{matrix}$$ By taking ratios, we now obtain (\[eq:equalWeightsFormula\]). Of course, the weights must be scaled so that they sum to one. Since $\alpha > 0$, the subdivision induced by $y$ is indeed $\{\mathcal D\backslash \{1\}, \ldots, \mathcal D\backslash \{d{+}1 \} \}$.
We now note that, by Lemma \[lem:surprise\], $$e^\alpha \cdot H(-\alpha, \dots, -\alpha) - H(\alpha, 0, \ldots, 0)
\,\,=\,\, \alpha \frac{\partial H}{\partial \alpha}(\alpha, 0, \ldots, 0).$$ This is positive for $\alpha > 0$, zero for $\alpha = 0$, and negative for $\alpha < 0$. The first case implies:
\[cor:setup\] Fix the configuration $X$ in (\[eq:specialconfig\]) and suppose that $w_1=\cdots=w_{d+1}$. Then $\frac{w_{d+2}}{w_1} > \frac{d+1}d$ if and only if the optimal subdivision is the triangulation $\{\mathcal D\backslash \{1\},
\ldots, \mathcal D \backslash \{ d{+}1 \}\}$.
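The threshold $\frac{d+1}{d}$ is visible numerically. The following sketch evaluates the ratio in (\[eq:equalWeightsFormula\]) for a few values of $\alpha$; it uses the series (\[eq:Hidentity\]) for $H$, since the closed form (\[eq:defH\]) degenerates at repeated or zero arguments, and the chosen values of $\alpha$ are arbitrary.

```python
import math
from itertools import combinations_with_replacement

def H_series(u, order=60):
    """H evaluated via the expansion (eq:Hidentity); valid for repeated or zero arguments."""
    d = len(u)
    return sum(
        sum(math.prod(u[i] for i in idx)
            for idx in combinations_with_replacement(range(d), r)) / math.factorial(r + d + 1)
        for r in range(order))

d = 2
for alpha in [0.0, 0.5, 1.0, 2.0]:
    ratio = ((d + 1) * math.exp(alpha) * H_series([-alpha] * d)
             / (d * H_series([alpha] + [0.0] * (d - 1))))
    print(alpha, ratio, (d + 1) / d)   # the ratio exceeds (d+1)/d exactly when alpha > 0
```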
We are now prepared to pass from $d+2$ to $d+3$ points, and to offer the missing proof.
We use Corollary \[cor:setup\] with $\frac{w_{d+2}}{w_1} = 2$. This is strictly bigger than $\frac{d+1}{d}$ whenever $d\geq 2$. We redefine $(X,w)$ by splitting the last point $x_{d+2}$ into two nearby points with equal weights. Then $n=d+3$ and the optimal subdivision is non-trivial. This holds because, for any fixed $w \in {\mathbb{R}}^n$, the set of $X$ whose optimal subdivision is trivial is described by the vanishing of continuous functions. It is hence closed in the space of configurations.
We conclude this paper with a pair of challenges for Nonparametric Algebraic Statistics.
What is the smallest size $n $ of a configuration $X$ in ${\mathbb{R}}^d$ such that the optimal subdivision of $\,X$ with unit weights has at least $c$ cells? This $n$ is a function of $c$ and $d$.\
We just saw that $n(2,d) = d+3$ for $d \geq 2$. Determine upper and lower bounds for $n(c,d)$.
We can also ask for a characterization of combinatorial types of triangulations that are realizable as in Figures \[fig:octahedron\] and \[fig:fivepoints\]. Such a triangulation in ${\mathbb{R}}^d$ is obtained by removing a facet from a $(d{+}1)$-dimensional simplicial polytope with $\leq n$ vertices. If we are allowed to vary $w \in {\mathbb{R}}^n$, then Theorem \[thm:converse\] tells us that all simplicial polytopes have such a realization. Hence, in the following question, we seek configurations $X$ in ${\mathbb{R}}^d$ with $w = \frac{1}{n}(1,\ldots,1)$.
Which simplicial polytopes can be realized by points in ${\mathbb{R}}^d$ with unit weights?
For example, the octahedron can be realized with unit weights, as was seen in Figure \[fig:octahedron\].
[**Acknowledgements.**]{} We thank Donald Richards for very helpful discussions regarding Proposition \[prop:magic\]. Bernd Sturmfels was partially supported by the Einstein Foundation Berlin and the NSF (DMS-1419018). Caroline Uhler was partially supported by DARPA (W911NF-16-1-0551), NSF (DMS-1651995) and ONR (N00014-17-1-2147).
[10]{}
K. Adiprasito, E. Nevo and J. Samper: [*A geometric lower bound theorem*]{}, Geom. Funct. Anal. [**26**]{} (2016) 359–378.
M.Y. An: [*Log-concave probability distributions: theory and statistical testing*]{}, Duke University, Department of Economics Working Paper No. 95-03.
M.Y. An: [*Log-concavity versus log-convexity: a complete characterization*]{}, Journal of Economic Theory [**80**]{} (1998) 350–369.
A. Barvinok: [*Computing the volume, counting integral points, and exponential sums*]{}, Discrete Comput. Geom. [**10**]{} (1993) 123–141.
M. Cule, R.B. Gramacy and R. Samworth: LogConcDEAD: an R package for maximum likelihood estimation of a multivariate log-concave density. J. Statist. Software [**29**]{} (2009) Issue 2.
M. Cule, R. Samworth and M. Stewart: [*Maximum likelihood estimation of a multi-dimensional log-concave density*]{}, J. R. Stat. Soc. Ser. B Stat. Methodol. [**72**]{} (2010) 545–607.
J. De Loera, S. Hoşten, F. Santos and B. Sturmfels: [*The polytope of all triangulations of a point configuration*]{}, Documenta Mathematica [**1**]{} (1996) 103–119.
J. De Loera, J. Rambau and F. Santos: [*Triangulations. Structures for Algorithms and Applications*]{}, Algorithms and Computation in Mathematics [**25**]{}, Springer-Verlag, Berlin, 2010.
L. Dümbgen and K. Rufibach: [*Maximum likelihood estimation of a log-concave density and its distribution function: Basic properties and uniform consistency*]{}, Bernoulli [**15**]{} (2009) 40–68.
I.M. Gel’fand, M.M. Kapranov and A.V. Zelevinsky: [*Discriminants, Resultants and Multidimensional Determinants*]{}, Birkhäuser, Boston, 1994.
U. Grenander: [*On the theory of mortality measurement II*]{}, Skandinavisk Aktuarietidskrift [**39**]{} (1956) 125–153.
P. Groeneboom, G. Jongbloed and J. A. Wellner: [*Estimation of a convex function: Characterizations and asymptotic theory*]{}, Annals of Statistics [**29**]{} (2001) 1653–1698.
K. Gross and D. Richards: [*Total positivity, spherical series, and hypergeometric functions of matrix argument*]{}, Journal of Approximation Theory [**59**]{} (1989) 224–246
E. Miller and B. Sturmfels: [*Combinatorial Commutative Algebra*]{}, Graduate Texts in Mathematics, Vol. 227, Springer Verlag, New York, 2004.
R. Thomas: [*Lectures in Geometric Combinatorics*]{}, Student Mathematical Library [**33**]{}, IAS/Park City Mathematical Subseries, American Mathematical Society, Providence, RI, 2006.
G. Walther: [*Inference and modeling with log-concave distributions*]{}, Statistical Science [**24**]{} (2009) 319–327.
Elina Robeva, Massachusetts Institute of Technology, Department of Mathematics, [[email protected]]{}
Bernd Sturmfels, MPI-MiS Leipzig, [[email protected]]{} and UC Berkeley, [[email protected]]{}
Caroline Uhler, Massachusetts Institute of Technology, IDSS and EECS Department, [[email protected]]{}.
---
abstract: 'Hexagonal circle patterns are introduced, and a subclass thereof is studied in detail. It is characterized by the following property: For every circle the multi-ratio of its six intersection points with neighboring circles is equal to $-1$. The relation of such patterns with an integrable system on the regular triangular lattice is established. A kind of Bäcklund transformation for circle patterns is studied. Further, a class of isomonodromic solutions of the aforementioned integrable system is introduced, including circle pattern analogs of the analytic functions $z^\alpha$ and $\log z$.'
author:
- 'A.I.Bobenko[^1]'
- 'T.Hoffmann[^2]'
- 'Yu.B.Suris[^3]'
date: 'Fachbereich Mathematik, Technische Universität Berlin, Str. 17 Juni 136, 10623 Berlin, Germany'
title: |
Hexagonal circle patterns and integrable systems:\
Patterns with the multi-ratio property\
and Lax equations on the regular triangular lattice
---
Introduction {#Sect introd}
============
The theory of circle packings and, more generally, of circle patterns has enjoyed in recent years a fast development and a growing interest of specialists in complex analysis. The origin of this interest was connected with Thurston’s idea of approximating the Riemann mapping by circle packings, see [@T1], [@RS]. Since then the theory has bifurcated into several subareas. One of them concentrates around the uniformization theorem of Koebe–Andreev–Thurston, and deals with circle packing realizations of cell complexes of a prescribed combinatorics, rigidity properties, the construction of hyperbolic 3-manifolds, etc. [@T2], [@MR], [@BS], [@H].
Another one deals mainly with approximation problems, and in this context it is advantageous to stick from the beginning to a fixed regular combinatorics. The most popular are hexagonal packings, for which the $C^{\infty}$ convergence to the Riemann mapping was established by He and Schramm [@HS]. Similar results are available also for circle patterns with the combinatorics of the square grid introduced by Schramm [@S]. It is also the context of regular patterns (more precisely, the two just mentioned classes thereof) where some progress was achieved in constructing discrete analogs of analytic functions (Doyle’s spiralling hexagon packings [@BDS] and their generalizations including the discrete analog of a quotient of Airy functions [@BH], discrete analogs of ${\rm exp}(z)$ and ${\rm erf}(z)$ for the square grid circle patterns [@S], discrete versions of $z^{\alpha}$ and $\log z$ for the same class of circle patterns [@BP], [@AB]). And it is again the context of regular patterns where the theory comes into interplay with the theory of integrable systems. Strictly speaking, only one instance of such an interplay is well–established up to now: namely, Schramm’s equation describing the square grid circle packings in terms of Möbius invariants turns out to coincide with the stationary Hirota equation, known to be integrable, see [@BP], [@Z]. It should be said that, generally, the subject of discrete integrable systems on lattices different from ${\Bbb Z}^n$ is underdeveloped at present. The list of relevant publications is almost exhausted by [@ND], [@NS], [@KN], [@A], [@OP].
The present paper contributes to several of the above mentioned issues: we introduce a new interesting class of circle patterns, and relate them to integrable systems. Besides, for this class we construct, in parallel to [@BP], [@AB], the analogs of the analytic functions $z^{\alpha}$, $\log z$.
This class is constituted by [*hexagonal circle patterns*]{}, or, in other words, by circle patterns with the combinatorics of the regular hexagonal lattice (the honeycomb lattice). This means that each elementary hexagon of the honeycomb lattice corresponds to a circle, and each common vertex of two hexagons corresponds to an intersection point of the corresponding circles. In particular, each circle carries six intersection points with six neighboring circles. Since at each vertex of the honeycomb lattice three elementary hexagons meet, it follows that at each intersection point three circles meet.
This class of hexagonal circle patterns is still too wide to be manageable, but it includes several very interesting subclasses, leading to integrable systems. For example, one can prescribe intersection angles of the circles. This situation will be considered in a subsequent publication. In the present one we consider the following requirement: the six intersection points on each circle have the multi-ratio equal to $-1$, where the multi–ratio is a natural generalization of the notion of a cross-ratio of four points on a plane.
We show that, adding to the intersection points of the circles their centers, one embeds hexagonal circle patterns with the multi-ratio property into an integrable system on the regular triangular lattice. Each solution of this latter system describes a peculiar geometrical construction: it consists of three triangulations of the plane, such that the corresponding elementary triangles in all three tilings are similar. Moreover, given one such tiling, one can reconstruct the other two almost uniquely (up to an affine transformation). If one of the tilings comes from the hexagonal circle pattern, so do the other two. These results are contained in Sect. \[Sect hex patterns\], \[Sect fgh system\]. In the intermediate Sect. \[Sect integrable systems on graphs\] we discuss a general notion of integrable systems on graphs as flat connections with the values in loop groups. It should be noticed that closely related integrable equations (albeit on the standard grid ${\Bbb Z}^2$) were previously introduced by Nijhoff [@N] in a totally different context (discrete Boussinesq equation), see also similar results in [@BK]. However, these results did not go beyond writing down the equations: geometrical structures behind the equations were not discussed in these papers.
Having included hexagonal circle patterns with the multi-ratio property into the framework of the theory of integrable systems, we get an opportunity of applying the immense machinery of the latter to studying the properties of the former. This is illustrated in Sect. \[Sect isomonodromic\], \[Sect isomonodromic patterns\], where we introduce and study some isomonodromic solutions of our integrable system on the triangular lattice, as well as the corresponding circle patterns. Finally, in Sect. \[Sect hexagonal z\^a\] we define a subclass of these “isomonodromic circle patterns” which are natural discrete versions of the analytic functions $z^{\alpha}$, $\log z$. The results of Sect. \[Sect isomonodromic\]–\[Sect hexagonal z\^a\] constitute an extension to the present, somewhat more intricate, situation of the similar constructions for Schramm’s circle patterns with the combinatorics of the square grid [@AB].
Hexagonal circle patterns {#Sect hex patterns}
=========================
![The regular triangular lattice with its hexagonal sublattices.[]{data-label="fig:regularLattice"}](HexGrid.eps){width="0.3\hsize"}
First of all we define the [**regular triangular lattice**]{} ${{\cal T}}{{\cal L}}$ as the cell complex whose vertices are $$V({{\cal T}}{{\cal L}})=\Big\{{{\frak z}}=k+\ell\omega+m\omega^2:\; k,\ell,m\in{\Bbb
Z}\Big\},\quad {\rm where}\quad \omega=\exp(2\pi i/3),$$ whose edges are all non–ordered pairs $$E({{\cal T}}{{\cal L}})=\Big\{[{{\frak z}}_1,{{\frak z}}_2]:\; {{\frak z}}_1,{{\frak z}}_2\in V({{\cal T}}{{\cal L}}),\;
|{{\frak z}}_1-{{\frak z}}_2|=1 \Big\},$$ and whose 2-cells are all regular triangles with the vertices in $V({{\cal T}}{{\cal L}})$ and the edges in $E({{\cal T}}{{\cal L}})$. We shall use triples $(k,\ell,m)\in {\Bbb Z}^3$ as coordinates of the vertices of the regular triangular lattice, identifying two such triples iff they differ by the vector $(n,n,n)$ with $n\in{\Bbb Z}$. We call two points ${{\frak z}}_1,{{\frak z}}_2$ [*neighbors in*]{} ${{\cal T}}{{\cal L}}$, iff $[{{\frak z}}_1,{{\frak z}}_2]\in E({{\cal T}}{{\cal L}})$.
To the complex ${{\cal T}}{{\cal L}}$ there correspond three [**regular hexagonal sublattices**]{} ${{\cal H}}{{\cal L}}_j$, $j=0,1,2$. Each ${{\cal H}}{{\cal L}}_j$ is the cell complex whose vertices are $$V({{\cal H}}{{\cal L}}_j)=\Big\{{{\frak z}}=k+\ell\omega+m\omega^2:\; k,\ell,m\in{\Bbb
Z},\; k+\ell+m\not\equiv j\!\!\pmod 3\Big\},$$ whose edges are $$E({{\cal H}}{{\cal L}}_j)=\Big\{[{{\frak z}}_1,{{\frak z}}_2]:\;{{\frak z}}_1,{{\frak z}}_2\in V({{\cal H}}{{\cal L}}_j),\;
|{{\frak z}}_1-{{\frak z}}_2|=1 \Big\},$$ and whose 2-cells are all regular hexagons with the vertices in $V({{\cal H}}{{\cal L}}_j)$ and the edges in $E({{\cal H}}{{\cal L}}_j)$. Again, we call two points ${{\frak z}}_1,{{\frak z}}_2$ [*neighbors in*]{} ${{\cal H}}{{\cal L}}_j$, iff $[{{\frak z}}_1,{{\frak z}}_2]\in E({{\cal H}}{{\cal L}}_j)$. Obviously, every point in $V({{\cal H}}{{\cal L}}_j)$ has three neighbors in ${{\cal H}}{{\cal L}}_j$, as well as three neighbors in ${{\cal T}}{{\cal L}}$ which do not belong to $V({{\cal H}}{{\cal L}}_j)$. The centers of 2-cells of ${{\cal H}}{{\cal L}}_j$ are exactly the points of $V({{\cal T}}{{\cal L}})\setminus V({{\cal H}}{{\cal L}}_j)$, i.e. the points ${{\frak z}}'=k+\ell\omega+m\omega^2$ with $k+\ell+m\equiv j\!\!\pmod 3$.
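In the coordinates $(k,\ell,m)$ these lattices are easy to enumerate. The following is a small sketch (the range of representatives is arbitrary):

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)

def vertex(k, l, m):
    return k + l * omega + m * omega ** 2

# Triples (k, l, m) are identified modulo (n, n, n); taking m = 0 gives one representative per class.
N = 3
TL = {(k, l, 0): vertex(k, l, 0) for k in range(-N, N + 1) for l in range(-N, N + 1)}

# HL_j keeps the classes with k + l + m not congruent to j (mod 3); the excluded residue class
# consists of the centers of its hexagonal 2-cells.
HL = {j: {key: z for key, z in TL.items() if sum(key) % 3 != j} for j in range(3)}
print(len(TL), [len(HL[j]) for j in range(3)])
```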
In the following definition we consider only ${{\cal H}}{{\cal L}}_0$, since, clearly, ${{\cal H}}{{\cal L}}_1$ and ${{\cal H}}{{\cal L}}_2$ are obtained from ${{\cal H}}{{\cal L}}_0$ via shifting all the corresponding objects by $\omega$, resp. by $\omega^2$.
\[def hex pattern\] We say that a map $w:V({{\cal H}}{{\cal L}}_0)\mapsto\hat{\Bbb C}$ defines a [**hexagonal circle pattern**]{}, if the following condition is satisfied:
- Let $${{\frak z}}_k={{\frak z}}'+\varepsilon^k\in V({{\cal H}}{{\cal L}}_0), \quad
k=1,2,\ldots,6,\quad where \quad\varepsilon=\exp(\pi i/3),$$ be the vertices of any elementary hexagon in ${{\cal H}}{{\cal L}}_0$ with the center ${{\frak z}}'\in V({{\cal T}}{{\cal L}})\setminus V({{\cal H}}{{\cal L}}_0)$. Then the points $w({{\frak z}}_1),w({{\frak z}}_2),\ldots,w({{\frak z}}_6)\in\hat{\Bbb C}$ lie on a circle, and their circular order is just the listed one. We denote the circle through the points $w({{\frak z}}_1),w({{\frak z}}_2),\ldots,w({{\frak z}}_6)$ by $C({{\frak z}}')$, thus putting it into a correspondence with the center ${{\frak z}}'$ of the elementary hexagon above.
As a consequence of this condition, we see that if two elementary hexagons of ${{\cal H}}{{\cal L}}_0$ with the centers in ${{\frak z}}',{{\frak z}}''\in
V({{\cal T}}{{\cal L}})\setminus V({{\cal H}}{{\cal L}}_0)$ have a common edge $[{{\frak z}}_1,{{\frak z}}_2]\in E({{\cal H}}{{\cal L}}_0)$, then the circles $C({{\frak z}}')$ and $C({{\frak z}}'')$ intersect in the points $w({{\frak z}}_1)$, $w({{\frak z}}_2)$. Similarly, if three elementary hexagons of ${{\cal H}}{{\cal L}}_0$ with the centers in ${{\frak z}}',{{\frak z}}'',{{\frak z}}''' \in V({{\cal T}}{{\cal L}})\setminus V({{\cal H}}{{\cal L}}_0)$ meet in one point ${{\frak z}}_0\in V({{\cal H}}{{\cal L}}_0)$, then the circles $C({{\frak z}}')$, $C({{\frak z}}'')$ and $C({{\frak z}}''')$ also have a common intersection point $w({{\frak z}}_0)$. (Note that in every point ${{\frak z}}_0\in
V({{\cal H}}{{\cal L}}_0)$ there meet three distinct elementary hexagons of ${{\cal H}}{{\cal L}}_0$).
[**Remark.**]{} Sometimes it will be convenient to consider circle patterns defined not on the whole of ${{\cal H}}{{\cal L}}_0$, but rather on some connected subgraph of the regular hexagonal lattice.
We shall study in this paper a subclass of hexagonal circle patterns satisfying an additional condition. We need the following generalization of the notion of cross-ratio.
Given a $(2p)$-tuple $(w_1,w_2,\ldots,w_{2p})\in{\Bbb C}^{2p}$ of complex numbers, their [**multi-ratio**]{} is the following number: $$M(w_1,w_2,\ldots,w_{2p})=\frac{\prod_{j=1}^p(w_{2j-1}-w_{2j})}
{\prod_{j=1}^p (w_{2j}-w_{2j+1})},$$ where it is agreed that $w_{2p+1}=w_1$.
In particular, $$M(w_1,w_2,w_3,w_4)=\frac{(w_1-w_2)(w_3-w_4)}{(w_2-w_3)(w_4-w_1)}$$ is the usual cross-ratio, while in the present paper we shall be mainly dealing with $$M(w_1,w_2,\ldots,w_6)=\frac{(w_1-w_2)(w_3-w_4)(w_5-w_6)}
{(w_2-w_3)(w_4-w_5)(w_6-w_1)}.$$ The following two obvious properties of the multi-ratio will be important for us:
- The multi-ratio $M(w_1,w_2,\ldots,w_{2p})$ is invariant with respect to the action of an arbitrary Möbius transformation $w\mapsto (aw+b)/(cw+d)$ on all of its arguments.
- The multi-ratio $M(w_1,w_2,\ldots,w_{2p})$ is a Möbius transformation with respect to each one of its arguments.
We shall need also the following, slightly less obvious, property:
- If the points $w_1,w_2,\ldots,w_{2p-1}$ lie on a circle $C\subset
\hat{\Bbb C}$, and the multi-ratio $M(w_1,w_2,\ldots,w_{2p})$ is real, then also $w_{2p}\in C$.
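For the reader who wishes to experiment, the multi-ratio and its Möbius invariance are easy to test numerically. The following minimal sketch (Python, purely illustrative; the function names and the sample points are our own choices, not part of the text) evaluates $M$ on six generic points and confirms that a Möbius transformation does not change its value.

```python
def multi_ratio(w):
    """M(w_1,...,w_{2p}) for an even number of complex points (with w_{2p+1} = w_1)."""
    n = len(w)
    num = den = 1 + 0j
    for j in range(0, n, 2):
        num *= w[j] - w[j + 1]            # factors (w_{2j-1} - w_{2j})
        den *= w[j + 1] - w[(j + 2) % n]  # factors (w_{2j} - w_{2j+1})
    return num / den

def moebius(z, a=2 + 1j, b=-1, c=0.5j, d=3):
    """An arbitrary Moebius transformation z -> (az+b)/(cz+d)."""
    return (a * z + b) / (c * z + d)

pts = [0, 1, 2 + 1j, 1 + 2j, -1 + 1j, -2]         # six generic points
print(multi_ratio(pts))                            # some complex number M
print(multi_ratio([moebius(z) for z in pts]))      # the same number, up to round-off
```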
We say that a map $w:V({{\cal H}}{{\cal L}}_0)\mapsto\hat{\Bbb C}$ defines a [**hexagonal circle pattern with**]{} $\boldsymbol{M}\boldsymbol{R}\boldsymbol{=}
\boldsymbol{-}\boldsymbol{1}$, if in addition to the condition of Definition \[def hex pattern\] the following one is satisfied:
- For any elementary hexagon in ${{\cal H}}{{\cal L}}_0$ with the vertices ${{\frak z}}_1,{{\frak z}}_2,\ldots,{{\frak z}}_6\in V({{\cal H}}{{\cal L}}_0)$ (listed counterclockwise), the multi-ratio $$\label{spec cond}
M(w_1,w_2,\ldots,w_6)=-1,$$ where $w_k=w({{\frak z}}_k)$.
Geometrically the condition (\[spec cond\]) means that, first, the lengths of the sides of the hexagon with the vertices $w_1w_2\ldots w_6$ satisfy the condition $$|w_1-w_2|\cdot|w_3-w_4|\cdot|w_5-w_6|=
|w_2-w_3|\cdot|w_4-w_5|\cdot|w_6-w_1|,$$ and, second, that the sum of the angles of the hexagon at the vertices $w_1$, $w_3$, and $w_5$ is equal to $2\pi\!\!\pmod
{2\pi}$, as well as the sum of the angles at the vertices $w_2$, $w_4$, and $w_6$. Notice that if a hexagon is inscribed in a circle and satisfies (\[spec cond\]), then it is [*conformally symmetric*]{}, i.e. there exists a Möbius transformation mapping it onto a centrally symmetric hexagon. Notice also that the regular hexagons satisfy this condition.
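In the same illustrative spirit, one can check numerically that the regular hexagon satisfies the condition $M=-1$ and hence, by the Möbius invariance of the multi-ratio, so does any Möbius image of it, i.e. any conformally symmetric hexagon inscribed in a circle (a minimal Python sketch; the particular Möbius map is an arbitrary choice of ours).

```python
import cmath, math

def multi_ratio(w):
    n = len(w)
    num = den = 1 + 0j
    for j in range(0, n, 2):
        num *= w[j] - w[j + 1]
        den *= w[j + 1] - w[(j + 2) % n]
    return num / den

hexagon = [cmath.exp(1j * math.pi * k / 3) for k in range(1, 7)]  # the regular hexagon
print(multi_ratio(hexagon))                                        # -1, up to round-off

image = [(2 * z + 1j) / (0.3 * z + 1) for z in hexagon]            # a Moebius image of it
print(multi_ratio(image))                                          # again -1
```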
To demonstrate quickly the [*existence*]{} of hexagonal circle patterns with $MR=-1$ we give their [*construction*]{} via solving a suitable Cauchy problem.
Consider a row of elementary hexagons of ${{\cal H}}{{\cal L}}_0$ running from the north–west to the south-east, with the centers in the points ${{\frak z}}'_k=k-k\omega$. Let the map $w$ be defined in five vertices of each hexagon – in all except ${{\frak z}}'_k+\varepsilon$. Suppose that the five points $w({{\frak z}}'_k+\varepsilon^j)$, $j=2,3,\ldots,6$, lie on the circles $C({{\frak z}}'_k)$. These data determine uniquely a map $w:V({{\cal H}}{{\cal L}}_0)\mapsto\hat{\Bbb C}$ yielding a hexagonal circle pattern with $MR=-1$ on the whole lattice.
[**Proof.**]{} Equation (\[spec cond\]) determines the points $w({{\frak z}}'_k+\varepsilon)$, which, according to the property above, lie also on $C({{\frak z}}'_k)$. Now for every hexagon of the parallel row next to north–east, with the centers in the points ${{\frak z}}''_k={{\frak z}}'_k+1+\varepsilon=(k+2)-(k-1)\omega$, we know the value of the map $w$ in three vertices, namely in $${{\frak z}}''_k+\varepsilon^4={{\frak z}}'_k+1={{\frak z}}'_{k+1}+\varepsilon^2,\quad
{{\frak z}}''_k+\varepsilon^3={{\frak z}}'_{k}+\varepsilon, \quad
{{\frak z}}''_k+\varepsilon^5={{\frak z}}'_{k+1}+\varepsilon.$$ This uniquely defines the circle $C({{\frak z}}''_k)$, as the only circle through three points $w({{\frak z}}''_k+\varepsilon^3)$, $w({{\frak z}}''_k+\varepsilon^4)$ and $w({{\frak z}}''_k+\varepsilon^5)$. The intersection points of these circles of the second row give us the values of the map $w$ in the points ${{\frak z}}''_k+\varepsilon^2$ and ${{\frak z}}''_k+\varepsilon^6$. Namely, $w({{\frak z}}''_k+\varepsilon^2)$ is the intersection point of $C({{\frak z}}''_k)$ with $C({{\frak z}}''_{k-1})$, different from $w({{\frak z}}''_k+\varepsilon^3)$, and $w({{\frak z}}''_k+\varepsilon^6)$ is the intersection point of $C({{\frak z}}''_k)$ with $C({{\frak z}}''_{k+1})$, different from $w({{\frak z}}''_k+\varepsilon^5)$. Therefore we get the values of the map $w$ in five vertices of each hexagon of the next parallel row – in all except ${{\frak z}}''_k+\varepsilon$. Induction allows us to continue the construction [*ad infinitum*]{}.
------------------------------------------------------------------------
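The elementary step of this construction, namely solving (\[spec cond\]) for the missing vertex, is completely explicit: the condition is linear-fractional in $w_1$. The following sketch (Python, purely illustrative; the helper name and the sample circle are ours) performs this step for five concyclic points and checks that the computed sixth point lies on the same circle, in agreement with the property of the multi-ratio stated above.

```python
import cmath

def sixth_vertex(w2, w3, w4, w5, w6):
    """Solve (w1-w2)(w3-w4)(w5-w6) = -(w2-w3)(w4-w5)(w6-w1) for w1."""
    A = (w3 - w4) * (w5 - w6)
    B = (w2 - w3) * (w4 - w5)
    return (A * w2 - B * w6) / (A - B)

center, radius = 1 + 2j, 1.5
angles = [0.3, 1.1, 2.0, 3.4, 4.6]                 # five generic points on a circle
w2, w3, w4, w5, w6 = [center + radius * cmath.exp(1j * t) for t in angles]
w1 = sixth_vertex(w2, w3, w4, w5, w6)

print(abs(w1 - center) - radius)                   # ~0: w_1 lies on the same circle
print((w1 - w2) * (w3 - w4) * (w5 - w6)
      / ((w2 - w3) * (w4 - w5) * (w6 - w1)))       # -1, i.e. the multi-ratio condition
```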
Now we show that, by adding the centers of the circles of a hexagonal pattern with $MR=-1$ to their intersection points, we arrive at a new and interesting notion.
\[central extension\] Let the map $w:V({{\cal H}}{{\cal L}}_0)\mapsto\hat{\Bbb C}$ define a hexagonal circle pattern with $MR=-1$. Extend $w$ to the points of $V({{\cal T}}{{\cal L}})\setminus V({{\cal H}}{{\cal L}}_0)$ by the following rule. Fix some point $P_{\infty}\in\hat{\Bbb C}$. Let ${{\frak z}}'$ be a center of an elementary hexagon of ${{\cal H}}{{\cal L}}_0$. Set $w({{\frak z}}')$ to be the reflection of the point $P_{\infty}$ in the circle $C({{\frak z}}')$. Then the condition (\[spec cond\]) holds also for $w_k=w({{\frak z}}_k)$ in the case when the points ${{\frak z}}_1,{{\frak z}}_2,\ldots,{{\frak z}}_6$ are the vertices of any elementary hexagon of the two complementary hexagonal sublattices ${{\cal H}}{{\cal L}}_1$ and ${{\cal H}}{{\cal L}}_2$.
[**Proof.**]{} Consider the situation corresponding to an elementary hexagon of the sublattice ${{\cal H}}{{\cal L}}_1$ or ${{\cal H}}{{\cal L}}_2$ (see Fig. \[fig:proof5\]). The point $w_0$ is the intersection point of the three circles $C({{\frak z}}_1)$, $C({{\frak z}}_3)$, and $C({{\frak z}}_5)$, the points $w_1$, $w_3$, and $w_5$ are obtained by reflection of $P_{\infty}$ in the corresponding circles, and the points $w_2$, $w_4$, and $w_6$ are the pairwise intersection points of these circles different from $w_0$. To simplify the geometry behind this situation, perform a Möbius transformation sending $w_0$ to infinity. Then the circles $C({{\frak z}}_1)$, $C({{\frak z}}_3)$, and $C({{\frak z}}_5)$ become straight lines, and the points $w_1$, $w_3$, $w_5$ are the reflections of $P_{\infty}$ in these lines (see Fig. \[fig:proof5\]; for definiteness we suppose here that the Möbius image of $P_{\infty}$ lies in the interior of the triangle formed by these straight lines). By construction, one gets: $$|w_2-w_1|=|w_2-w_3|, \quad |w_4-w_3|=|w_4-w_5|, \quad
|w_6-w_5|=|w_6-w_1|;$$ the angles by the vertices $w_2$, $w_4$, $w_6$ are equal to $2(\alpha_1+\alpha_2)$, $2(\beta_1+\beta_2)$, $2(\gamma_1+\gamma_2)$, respectively, so that their sum is equal to $$2(\alpha_1+\alpha_2+\beta_1+\beta_2+\gamma_1+\gamma_2)=2\pi;$$ the angles by the vertices $w_1$, $w_3$, $w_5$ are equal to $\pi-(\alpha_1+\gamma_2)$, $\pi-(\beta_1+\alpha_2)$, $\pi-(\gamma_1+\beta_2)$, respectively, so that their sum is equal to $$3\pi-(\alpha_1+\alpha_2+\beta_1+\beta_2+\gamma_1+\gamma_2)=2\pi.$$ This proves that the hexagon under consideration satisfies (\[spec cond\]).
------------------------------------------------------------------------
A particular case of the construction of Theorem \[central extension\] is when $P_{\infty}=\infty$, so that the map $w$ is extended by the [*centers*]{} of the corresponding circles. In any case, this theorem suggests considering the class of maps described in the following definition.
We say that the map $w:V({{\cal T}}{{\cal L}})\mapsto\hat{\Bbb C}$ defines [**a triangular lattice with $\boldsymbol{M}\boldsymbol{R}\boldsymbol{=}
\boldsymbol{-}\boldsymbol{1}$**]{}, if the equation (\[spec cond\]) holds for $w_k=w({{\frak z}}_k)$, whenever the points ${{\frak z}}_1,{{\frak z}}_2,\ldots,{{\frak z}}_6$ are the vertices (listed counterclockwise) of any elementary hexagon of any of the sublattices ${{\cal H}}{{\cal L}}_j$ $(j=0,1,2)$.
In the next section we shall discuss an integrable system on the regular triangular lattice, each solution of which delivers, in a single construction, [*three*]{} different triangular lattices with $MR=-1$. However, these three lattices are not independent: given such a lattice, the two associated ones can be constructed almost uniquely (up to an affine transformation $w\mapsto aw+b$). It will turn out that if the original lattice comes from a hexagonal circle pattern with $MR=-1$, then the two associated ones do likewise.
Discrete flat connections on graphs {#Sect integrable systems on graphs}
===================================
Let us describe a general construction of “integrable systems” on graphs which does not depend on the specific features of the regular triangular lattice. This notion includes the following ingredients:
- An [*oriented graph*]{} ${{\cal G}}$; the set of its vertices will be denoted $V({{\cal G}})$, the set of its edges will be denoted $E({{\cal G}})$.
- A loop group $G[\lambda]$, whose elements are functions from ${\Bbb C}$ into some group $G$. The complex argument $\lambda$ of these functions is known in the theory of integrable systems as the “spectral parameter”.
- A “wave function” $\Psi: V({{\cal G}})\mapsto G[\lambda]$, defined on the vertices of ${{\cal G}}$.
- A collection of “transition matrices” $L: E({{\cal G}})\mapsto G[\lambda]$ defined on the edges of ${{\cal G}}$.
It is supposed that for any oriented edge ${{\frak e}}=({{\frak z}}_1,{{\frak z}}_2)\in
E({{\cal G}})$ the values of the wave function at its ends are related via $$\label{wave function evol}
\Psi({{\frak z}}_2,\lambda)=L({{\frak e}},\lambda)\Psi({{\frak z}}_1,\lambda).$$ Therefore the following [*discrete zero curvature condition*]{} is supposed to be satisfied. Consider any closed contour consisting of a finite number of edges of ${{\cal G}}$: $${{\frak e}}_1=({{\frak z}}_1,{{\frak z}}_2),\quad {{\frak e}}_2=({{\frak z}}_2,{{\frak z}}_3),\quad \ldots,\quad
{{\frak e}}_p=({{\frak z}}_p,{{\frak z}}_1).$$ Then $$\label{zero curv cond}
L({{\frak e}}_p,\lambda)\cdots L({{\frak e}}_2,\lambda)L({{\frak e}}_1,\lambda)=I.$$ In particular, for any edge ${{\frak e}}=({{\frak z}}_1,{{\frak z}}_2)$, if ${{\frak e}}^{-1}=({{\frak z}}_2,{{\frak z}}_1)$, then $$\label{zero curv cond inv}
L({{\frak e}}^{-1},\lambda)=\Big(L({{\frak e}},\lambda)\Big)^{-1}.$$
Actually, in applications the matrices $L({{\frak e}},\lambda)$ depend also on a point of some set $X$ (the “phase space” of an integrable system), so that some elements $x({{\frak e}})\in X$ are attached to the edges ${{\frak e}}$ of ${{\cal G}}$. In this case the discrete zero curvature condition (\[zero curv cond\]) becomes equivalent to the collection of equations relating the fields $x({{\frak e}}_1)$, $\ldots$, $x({{\frak e}}_p)$ attached to the edges of each closed contour. We say that this collection of equations admits a [*zero curvature representation*]{}.
For an arbitrary graph, the analytical consequences of the zero curvature representation for a given collection of equations are not clear. However, in the case of regular lattices, like ${{\cal T}}{{\cal L}}$, such a representation may be used to determine conserved quantities for suitably defined Cauchy problems, as well as to apply powerful analytical methods for finding concrete solutions.
[**Remark.**]{} The above construction of integrable systems on graphs is not the only possible one. For example, in the construction by Adler [@A] the fields are defined on the vertices of a planar graph, and the equations relate the fields on [*stars*]{} consisting of the edges incident to each single vertex, rather than the fields on closed contours. Examples are given by discrete time systems of the relativistic Toda type. In the corresponding zero curvature representation the wave functions $\Psi$ naturally live on 2-cells rather than on vertices. The transition matrices live on edges: the matrix $L({{\frak e}},\lambda)$ corresponds to the transition [*across*]{} ${{\frak e}}$ and depends on the fields sitting on two ends of ${{\frak e}}$.
An integrable system on the regular triangular lattice {#Sect fgh system}
======================================================
We now introduce an [*orientation*]{} of the edges of the regular triangular lattice ${{\cal T}}{{\cal L}}$. Namely, we declare as positively oriented all edges of the types $$({{\frak z}},{{\frak z}}+1),\quad ({{\frak z}},{{\frak z}}+\omega), \quad ({{\frak z}},{{\frak z}}+\omega^2).$$ Correspondingly, all edges of the types $$({{\frak z}},{{\frak z}}-1),\quad ({{\frak z}},{{\frak z}}-\omega), \quad ({{\frak z}},{{\frak z}}-\omega^2)$$ are negatively oriented. Thus all elementary triangles become oriented. There are two types of elementary triangles: those “pointing upwards” $({{\frak z}},{{\frak z}}+\omega,{{\frak z}}-1)$ are oriented counterclockwise, while those “pointing downwards” $({{\frak z}},{{\frak z}}+\omega^2,{{\frak z}}-1)$ are oriented clockwise.
Lax representation
------------------
The group $G[\lambda]$ we use in our construction is the [*twisted loop group*]{} over ${\rm SL}(3,{\Bbb C})$: $$\label{loop group}
\Big\{L:{\Bbb C}\mapsto {\rm SL}(3,{\Bbb C})\Big|\;
L(\omega\lambda)=\Omega L(\lambda)\Omega^{-1}\Big\},$$ where $\Omega={\rm diag}(1,\omega,\omega^2)$. The elements of $G[\lambda]$ we attach to every [*positively oriented*]{} edge of ${{\cal T}}{{\cal L}}$ are of the form $$\label{L}
L(\lambda)=(1+\lambda^3)^{-1/3}\left(\begin{array}{ccc} 1 &
\lambda f & 0 \\ 0 & 1 & \lambda g \\\lambda h & 0 &
1\end{array}\right), \quad fgh=1.$$ Hence, to each positively oriented edge we assign a triple of complex numbers $(f,g,h)\in{\Bbb C}^3$ satisfying an additional condition $fgh=1$. In other words, choosing $(f,g)$ (say) as the basic variables, we can assume that the “phase space” $X$ mentioned in the previous section, is ${\Bbb C}_{*}\times {\Bbb
C}_{*}$. The scalar factor $(1+\lambda^3)^{-1/3}$ is not very essential and assures merely that $\det L(\lambda)=1$.
It is obvious that the zero curvature condition (\[zero curv cond\]) is fulfilled for every closed contour in ${{\cal T}}{{\cal L}}$, if and only if it holds for all elementary triangles.
\[Equations of motion\] Let ${{\frak e}}_1$, ${{\frak e}}_2$, ${{\frak e}}_3$ be the consecutive positively oriented edges of an elementary triangle of ${{\cal T}}{{\cal L}}$. Then the zero curvature condition $$L({{\frak e}}_3,\lambda)L({{\frak e}}_2,\lambda)L({{\frak e}}_1,\lambda)=I$$ is equivalent to the following set of equations: $$\label{fields fact}
f_1+f_2+f_3=0,\qquad g_1+g_2+g_3=0,$$ and $$\label{motion eq}
f_1g_1=f_3g_2\quad\Leftrightarrow\quad f_2g_2=f_1g_3
\quad\Leftrightarrow\quad f_3g_3=f_2g_1,$$ with the understanding that $h_k=(f_kg_k)^{-1}$, $k=1,2,3$.
[**Proof.**]{} An easy calculation shows that the matrix equation $L_3L_2L_1=I$ consists of the following nine scalar equations: $$\label{motion eq aux1}
f_1+f_2+f_3=0,\qquad g_1+g_2+g_3=0,\qquad h_1+h_2+h_3=0,$$ $$\label{motion eq aux2}
f_3g_2h_1=1,\qquad g_3h_2f_1=1, \qquad h_3f_2g_1=1,$$ $$\label{motion eq aux3}
f_3g_2+f_3g_1+f_2g_1=0,\qquad g_3h_2+g_3h_1+g_2h_1=0,\qquad
h_3f_2+ h_3f_1+h_2f_1=0.$$ It remains to isolate the independent ones among these nine equations. First of all, equations (\[motion eq aux3\]) are equivalent to (\[motion eq aux2\]), provided (\[motion eq aux1\]) and $f_kg_kh_k=1$ hold. For example: $$f_3(g_2+g_1)+f_2g_1=0\;\Leftrightarrow\;f_3g_3=f_2g_1\;\Leftrightarrow\;
h_3f_2g_1=1.$$ Next, the conditions $f_kg_kh_k=1$ allow us to rewrite (\[motion eq aux2\]) as $$\label{motion eq aux4}
f_1g_1=f_3g_2,\qquad f_2g_2=f_1g_3, \qquad f_3g_3=f_2g_1.$$ Further, all equations in (\[motion eq aux4\]) are equivalent provided (\[fields fact\]) holds. For example: $$f_1g_1=f_3g_2\;\Rightarrow\;
(f_2+f_3)g_1=f_3(g_1+g_3)\;\Rightarrow f_2g_1=f_3g_3.$$ Finally, $h_1+h_2+h_3=0$ follows from (\[fields fact\]), (\[motion eq\]). Indeed, $$\begin{aligned}
h_1+h_2 & = & (f_1g_1)^{-1}+(f_2g_2)^{-1}=
(f_3g_2)^{-1}+(f_2g_2)^{-1}\\
& = & (f_2g_2)^{-1}(f_2+f_3)f_3^{-1}=-(f_2g_2)^{-1}f_1f_3^{-1}\\
& = & -(f_1g_3)^{-1}f_1f_3^{-1}=-(f_3g_3)^{-1}=-h_3.\end{aligned}$$ The theorem is proved. For want of a better name we shall call the system of equations (\[fields fact\]), (\[motion eq\]) the [**fgh–system**]{}.
------------------------------------------------------------------------
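A quick numerical check of the theorem may be helpful (Python, purely illustrative; the particular values of $f_1$, $f_2$, the free parameter $t$ and the value of $\lambda$ are arbitrary choices of ours): we generate edge fields satisfying (\[fields fact\]), (\[motion eq\]) and verify that the product of the corresponding transition matrices is the identity.

```python
import numpy as np

def L(lam, f, g, h):
    """Transition matrix attached to a positively oriented edge."""
    M = np.array([[1, lam * f, 0],
                  [0, 1, lam * g],
                  [lam * h, 0, 1]], dtype=complex)
    return (1 + lam ** 3) ** (-1 / 3) * M

f1, f2 = 0.7 + 0.2j, -1.1 + 0.9j
f3 = -(f1 + f2)
t = 0.8 - 0.3j                                   # free parameter g_2 = t
g1, g2, g3 = f3 * t / f1, t, f2 * t / f1         # then g_1+g_2+g_3 = 0 and (motion eq) hold
h1, h2, h3 = 1 / (f1 * g1), 1 / (f2 * g2), 1 / (f3 * g3)

lam = 0.37                                       # a real value of the spectral parameter
P = L(lam, f3, g3, h3) @ L(lam, f2, g2, h2) @ L(lam, f1, g1, h1)
print(np.abs(P - np.eye(3)).max())               # ~1e-16
```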
The equations (\[motion eq aux1\]) may be interpreted in the following way: there exist functions $u,v,w:V({{\cal T}}{{\cal L}})\mapsto{\Bbb
C}$ such that for any positively oriented edge ${{\frak e}}=({{\frak z}}_1,{{\frak z}}_2)$ there holds: $$\label{fact}
f({{\frak e}})=u({{\frak z}}_2)-u({{\frak z}}_1),\quad g({{\frak e}})=v({{\frak z}}_2)-v({{\frak z}}_1),\quad
h({{\frak e}})=w({{\frak z}}_2)-w({{\frak z}}_1).$$ The function $u$ is determined by $f$ uniquely, up to an additive constant, and similarly for the functions $v$, $w$. Having introduced functions $u,v,w$ sitting in the vertices of ${{\cal T}}{{\cal L}}$, we may reformulate the remaining equations (\[motion eq\]) as follows: let ${{\frak z}}_1,{{\frak z}}_2,{{\frak z}}_3$ be the consecutive vertices of a positively oriented elementary triangle, then $$\label{motion eq zw}
\frac{u({{\frak z}}_2)-u({{\frak z}}_1)}{u({{\frak z}}_3)-u({{\frak z}}_2)}=
\frac{v({{\frak z}}_3)-v({{\frak z}}_2)}{v({{\frak z}}_1)-v({{\frak z}}_3)}.$$ The equations arising by cyclic permutations of indices $(1,2,3)\mapsto(2,3,1)$ are equivalent to this one due to (\[motion eq\]). So, we have one equation per elementary triangle ${{\frak z}}_1{{\frak z}}_2{{\frak z}}_3$. Its geometrical meaning is the following: the triangle $u({{\frak z}}_1)u({{\frak z}}_2)u({{\frak z}}_3)$ is similar to the triangle $v({{\frak z}}_2) v({{\frak z}}_3)v({{\frak z}}_1)$ (with the corresponding vertices listed in corresponding positions). Of course, these two triangles are also similar to the third one, $w({{\frak z}}_3)w({{\frak z}}_1)w({{\frak z}}_2)$.
Cauchy problem
--------------
We discuss now the Cauchy data which allow one to determine a solution of the $fgh$–system. The key observation is the following.
\[lemma 4th point\] Given the values of two fields, say $u$ and $v$, in three points ${{\frak z}}_0$, ${{\frak z}}_1={{\frak z}}_0+1$ and ${{\frak z}}_2={{\frak z}}_0+\omega$, the equations of the $fgh$–system determine uniquely the values of $u$ and $v$ in the point ${{\frak z}}_3={{\frak z}}_0+1+\omega$: $$\label{induct aux1}
u_3-u_0=(u_1-u_0)\frac{v_1-v_0}{v_1-v_2}+(u_2-u_0)\frac{v_2-v_0}{v_2-v_1},$$ $$\label{induct aux2}
v_3-v_1=(v_1-v_0)\frac{u_1-u_0}{u_0-u_3}\quad \Leftrightarrow\quad
v_3-v_2=(v_2-v_0)\frac{u_2-u_0}{u_0-u_3}.$$
[**Proof.**]{} The formula (\[induct aux1\]) follows by eliminating $v_3$ from $$\label{induct aux0}
\frac{u_0-u_3}{u_1-u_0}=\frac{v_1-v_0}{v_3-v_1},\qquad
\frac{u_0-u_3}{u_2-u_0}=\frac{v_2-v_0}{v_3-v_2}.$$ These equations then yield (\[induct aux2\]).
------------------------------------------------------------------------
This immediately yields the following statement.
\[Cauchy data for fgh system\]
- The values of the fields $u$ and $v$ in the vertices of the zig–zag line running from the north–west to the south–east, $$\Big\{{{\frak z}}=k+\ell\omega: k+\ell=0,1\Big\},$$ uniquely determine the functions $u,v:V({{\cal T}}{{\cal L}})\mapsto{\Bbb C}$ on the whole lattice.
- The values of the fields $u$ and $v$ on the two positive semi-axes, $$\Big\{{{\frak z}}=k: k\ge 0\Big\}\cup\Big\{{{\frak z}}=\ell\omega: \ell\ge
0\Big\},$$ uniquely determine the functions $u,v$ on the whole sector $$\Big\{{{\frak z}}=k+\ell\omega: k,\ell\ge 0\Big\}= \Big\{{{\frak z}}\in V({{\cal T}}{{\cal L}}):
0\le{\rm\arg}({{\frak z}})\le2\pi/3\Big\}.$$
[**Proof**]{} follows by induction with the help of the formulas (\[induct aux1\]), (\[induct aux2\]).
------------------------------------------------------------------------
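The formulas (\[induct aux1\]), (\[induct aux2\]) are straightforward to implement. The following sketch (Python, purely illustrative; the Cauchy data are arbitrary generic values of ours) propagates $u$, $v$ from the two semiaxes into a small part of the sector and then checks the similarity relation (\[motion eq zw\]) on one of the elementary triangles.

```python
def fourth_point(u0, u1, u2, v0, v1, v2):
    """Values at z0+1+omega from the values at z0, z0+1, z0+omega."""
    u3 = u0 + (u1 - u0) * (v1 - v0) / (v1 - v2) + (u2 - u0) * (v2 - v0) / (v2 - v1)
    v3 = v1 + (v1 - v0) * (u1 - u0) / (u0 - u3)
    return u3, v3

N = 6
omega = complex(-0.5, 3 ** 0.5 / 2)
u = {(k, 0): k + 0.13j * k * k for k in range(N)}        # arbitrary generic data on the k-axis
v = {(k, 0): 0.52 * k - 0.21j * k * k for k in range(N)}
for l in range(1, N):                                     # ... and on the l-axis
    u[0, l] = l * omega * (1 + 0.07j)
    v[0, l] = 0.7 * l * omega + 0.03 * l * l
for k in range(1, N):
    for l in range(1, N):
        u[k, l], v[k, l] = fourth_point(u[k-1, l-1], u[k, l-1], u[k-1, l],
                                        v[k-1, l-1], v[k, l-1], v[k-1, l])

# (motion eq zw) on the triangle z1 = (k-1)+(l-1)w, z2 = k+(l-1)w, z3 = k+lw
k, l = 2, 3
lhs = (u[k, l-1] - u[k-1, l-1]) / (u[k, l] - u[k, l-1])
rhs = (v[k, l] - v[k, l-1]) / (v[k-1, l-1] - v[k, l])
print(abs(lhs - rhs))                                     # ~0
```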
Sym formula and related results
-------------------------------
The following result holds, which has many analogs in the differential geometry described by integrable systems (“Sym formula”, see, e.g., [@BP]).
\[Sym formula\] Let $\Psi({{\frak z}},\lambda)$ be the solution of (\[wave function evol\]) with the initial condition $\Psi({{\frak z}}_0,\lambda)=I$ for some ${{\frak z}}_0\in V({{\cal T}}{{\cal L}})$. Then the fields $u,v,w$ may be found as $$\label{Sym}
\left.\frac{d\Psi}{d\lambda}\right|_{\lambda=0}=
\left(\begin{array}{ccc} 0 & u & 0 \\ 0 & 0 & v \\ w & 0 & 0
\end{array} \right).$$
[**Proof.**]{} Note, first of all, that from $\Psi({{\frak z}}_0,0)=I$ and $L({{\frak e}},0)=I$ it follows that $\Psi({{\frak z}},0)=I$ for all ${{\frak z}}\in
V({{\cal T}}{{\cal L}})$. Consider an arbitrary positively oriented edge ${{\frak e}}=({{\frak z}}_1,{{\frak z}}_2)$. From (\[wave function evol\]) it follows that $$\frac{d\Psi({{\frak z}}_2)}{d\lambda}-\frac{d\Psi({{\frak z}}_1)}{d\lambda} =
\left(\frac{dL({{\frak e}})}{d\lambda}\Psi({{\frak z}}_1)+
L({{\frak e}})\frac{d\Psi({{\frak z}}_1)}{d\lambda}\right)-\frac{d\Psi({{\frak z}}_1)}{d\lambda}$$ At $\lambda=0$ we find: $$\begin{aligned}
\lefteqn{\left.\frac{d\Psi({{\frak z}}_2)}{d\lambda}\right|_{\lambda=0}
-\left.\frac{d\Psi({{\frak z}}_1)}{d\lambda}\right|_{\lambda=0}=
\left.\frac{dL({{\frak e}})}{d\lambda}\right|_{\lambda=0}=
\left(\begin{array}{ccc} 0 & f({{\frak e}}) & 0 \\ 0 & 0 & g({{\frak e}}) \\
h({{\frak e}}) & 0 & 0
\end{array}\right)}\\
& = & \left(\begin{array}{ccc} 0 & u({{\frak z}}_2)-u({{\frak z}}_1) & 0 \\
0 & 0 & v({{\frak z}}_2)-v({{\frak z}}_1) \\ w({{\frak z}}_2)-w({{\frak z}}_1) & 0 & 0
\end{array}\right).\end{aligned}$$ This proves the Proposition.
------------------------------------------------------------------------
The next terms of the power series expansion of the wave function $\Psi({{\frak z}},\lambda)$ around $\lambda=0$ also deliver interesting and important results.
\[Closed forms\] Let $\Psi({{\frak z}},\lambda)$ be the solution of (\[wave function evol\]) with the initial condition $\Psi({{\frak z}}_0,\lambda)=I$ for some ${{\frak z}}_0\in V({{\cal T}}{{\cal L}})$. Then $$\label{Sym2}
\frac{1}{2}\,\left.\frac{d^2\Psi}{d\lambda^2}\right|_{\lambda=0}=
\left(\begin{array}{ccc} 0 & 0 & a \\ b & 0 & 0 \\ 0 & c & 0
\end{array} \right),$$ where the function $a:V({{\cal T}}{{\cal L}})\mapsto{\Bbb C}$ satisfies the difference equation $$\label{eq for a}
a({{\frak z}}_2)-a({{\frak z}}_1)=v({{\frak z}}_1)\Big(u({{\frak z}}_2)-u({{\frak z}}_1)\Big),$$ and similar equations hold for the functions $b,c:V({{\cal T}}{{\cal L}})\mapsto{\Bbb C}$ (with the cyclic permutation $(u,v,w)\mapsto(w,u,v)$).
[**Proof.**]{} Proceeding as in the proof of Proposition \[Sym formula\], we have: $$\frac{d^2\Psi({{\frak z}}_2)}{d\lambda^2}-\frac{d^2\Psi({{\frak z}}_1)}{d\lambda^2}
= \left(\frac{d^2L({{\frak e}})}{d\lambda^2}\Psi({{\frak z}}_1)+
2\frac{dL({{\frak e}})}{d\lambda}\frac{d\Psi({{\frak z}}_1)}{d\lambda}+
L({{\frak e}})\frac{d^2\Psi({{\frak z}}_1)}{d\lambda^2}\right)-
\frac{d^2\Psi({{\frak z}}_1)}{d\lambda^2}$$ Taking into account that $d^2L({{\frak e}})/d\lambda^2|_{\lambda=0}=0$, we find at $\lambda=0$: $$\begin{aligned}
\lefteqn{\left.\frac{d^2\Psi({{\frak z}}_2)}{d\lambda^2}\right|_{\lambda=0}
-\left.\frac{d^2\Psi({{\frak z}}_1)}{d\lambda^2}\right|_{\lambda=0}=
2\left.\frac{dL({{\frak e}})}{d\lambda}\right|_{\lambda=0}
\left.\frac{d\Psi({{\frak z}}_1)}{d\lambda}\right|_{\lambda=0}=}\\ \nonumber\\
& = & 2\left(\begin{array}{ccc} 0 & f({{\frak e}}) & 0 \\ 0 & 0 & g({{\frak e}})
\\ h({{\frak e}}) & 0 & 0
\end{array}\right)\left(\begin{array}{ccc} 0 & u({{\frak z}}_1) & 0 \\
0 & 0 & v({{\frak z}}_1) \\ w({{\frak z}}_1) & 0 & 0 \end{array}\right).\end{aligned}$$ This implies the statement of the proposition.
------------------------------------------------------------------------
Notice that it is [*a priori*]{} not obvious that the equation (\[eq for a\]) admits a well–defined solution on $V({{\cal T}}{{\cal L}})$, or, in other words, that its right–hand side defines a closed form on ${{\cal T}}{{\cal L}}$. This fact might be proved by a direct calculation, based upon the equations of the $fgh$–system, but the above argument gives a more conceptual and a much shorter proof.
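For completeness, here is a small numerical illustration of that direct calculation (Python, purely illustrative; the numerical values are ours): with edge fields satisfying the equations of the $fgh$–system, the right-hand side of (\[eq for a\]) sums to zero around a positively oriented elementary triangle.

```python
f1, f2 = 0.7 + 0.2j, -1.1 + 0.9j
f3 = -(f1 + f2)                               # (fields fact) for f
t = 0.8 - 0.3j
g1, g2, g3 = f3 * t / f1, t, f2 * t / f1      # then (fields fact) and (motion eq) hold

u1, v1 = 0.3 - 1j, 2.0 + 0.5j                 # arbitrary values at the first vertex
u2, v2 = u1 + f1, v1 + g1                     # f, g are differences of the potentials u, v
u3, v3 = u2 + f2, v2 + g2

# increments of a, summed over the three positively oriented edges of the triangle
s = v1 * (u2 - u1) + v2 * (u3 - u2) + v3 * (u1 - u3)
print(abs(s))                                  # ~0: the right-hand side of (eq for a) is closed
```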
Under the conditions of Propositions \[Sym formula\],\[Closed forms\], we have: $$\label{Sym12}
-\frac{1}{2}\,\left.\frac{d^2\Psi}{d\lambda^2}\right|_{\lambda=0}+
\left(\frac{d\Psi}{d\lambda}\right)^2_{\lambda=0}=
\left(\begin{array}{ccc} 0 & 0 & a' \\ b' & 0 & 0 \\ 0 & c' & 0
\end{array} \right),$$ where the function $a':V({{\cal T}}{{\cal L}})\mapsto{\Bbb C}$ satisfies the difference equation $$\label{eq for a'}
a'({{\frak z}}_2)-a'({{\frak z}}_1)=u({{\frak z}}_2)\Big(v({{\frak z}}_2)-v({{\frak z}}_1)\Big),$$ and similar equations hold for the functions $b',c':V({{\cal T}}{{\cal L}})\mapsto{\Bbb C}$ (with the cyclic permutation $(u,v,w)\mapsto(w,u,v)$).
Further examples of such exact forms may be obtained from the values of higher derivatives of the wave function $\Psi({{\frak z}},\lambda)$ at $\lambda=0$.
One–field equations
-------------------
We discuss now the equations satisfied by the field $u$ alone, as well as by the field $v$ alone. At this point we make contact with the geometric considerations of Sect. \[Sect hex patterns\].
\[z to w\]
1. Both maps $u,v:V({{\cal T}}{{\cal L}})\mapsto {\Bbb C}$ define triangular lattices with $MR=-1$. In other words, if ${{\frak z}}_1,{{\frak z}}_2,\ldots,{{\frak z}}_6$ are the vertices (listed counterclockwise) of any elementary hexagon of any of the hexagonal sublattices ${{\cal H}}{{\cal L}}_j$ $(j=0,1,2)$, and if $u_k=u({{\frak z}}_k)$ and $v_k=v({{\frak z}}_k)$, then there hold both the equations $$\label{hex eq z}
M(u_1,u_2,\ldots,u_6)=-1$$ and $$\label{hex eq w}
M(v_1,v_2,\ldots,v_6)=-1.$$
2. Given a triangular lattice $u:V({{\cal T}}{{\cal L}})\mapsto{\Bbb C}$ with $MR=-1$, there exists a unique, up to an affine transformation $v\mapsto
av+b$, function $v:V({{\cal T}}{{\cal L}})\mapsto{\Bbb C}$ such that (\[motion eq zw\]) are satisfied everywhere. This function also defines a triangular lattice with $MR=-1$.
3. Given a pair of complex–valued functions $(u,v)$ defined on $V({{\cal T}}{{\cal L}})$ and satisfying the equation (\[motion eq zw\]) everywhere, there exists a unique, up to an affine transformation, function $w:V({{\cal T}}{{\cal L}})\mapsto{\Bbb C}$ such that the pairs $(v,w)$ and $(w,u)$ satisfy the same equation. The function $w$ also defines a triangular lattice with $MR=-1$.
[**Proof.**]{} 1. To prove the first statement, we proceed as follows. Let ${{\frak z}}'\in V({{\cal T}}{{\cal L}})$, and let the vertices of an elementary hexagon with the center in ${{\frak z}}'$ be enumerated as ${{\frak z}}_k={{\frak z}}'+\varepsilon^k$, $ k=1,2,\ldots,6$. Then the following elementary triangles are positively oriented: $({{\frak z}}_{2k},{{\frak z}}_{2k-1}, {{\frak z}}')$ and $({{\frak z}}_{2k},{{\frak z}}_{2k+1},{{\frak z}}')$ for $k=1,2,3$ (with the agreement that ${{\frak z}}_7={{\frak z}}_1$). According to (\[motion eq zw\]), we have: $$\frac{u_{2k-1}-u_{2k}}{u'-u_{2k-1}}=\frac{v'-v_{2k-1}}{v_{2k}-v'},\qquad
\frac{u_{2k+1}-u_{2k}}{u'-u_{2k+1}}=\frac{v'-v_{2k+1}}{v_{2k}-v'},\qquad
k=1,2,3.$$ Dividing the first equation by the second one and taking the product over $k=1,2,3$, we find: $$\prod_{k=1}^3\frac{u_{2k-1}-u_{2k}}{u_{2k+1}-u_{2k}}=1,$$ which is nothing but (\[hex eq z\]). The proof of (\[hex eq w\]) is similar.
2\. As for the second statement, suppose we are given a function $u$ on the whole of $V({{\cal T}}{{\cal L}})$. For an arbitrary elementary triangle, if the values of $v$ in two vertices are known, the equation (\[motion eq\]) allows us to calculate the value of $v$ in the third vertex. Therefore, choosing arbitrarily the values of $v$ in two neighboring vertices, we can extend this function on the whole of $V({{\cal T}}{{\cal L}})$, provided this procedure is consistent. It is easy to understand that it is enough to verify the consistency in running once around a vertex. But this is assured exactly by the equation (\[hex eq z\]).
3\. To prove the third statement, notice that the proof of Theorem \[Equations of motion\] shows that the formula $$h({{\frak e}})=w({{\frak z}}_2)-w({{\frak z}}_1)=\frac{1}{f({{\frak e}})g({{\frak e}})}=
\frac{1}{(u({{\frak z}}_2)-u({{\frak z}}_1))(v({{\frak z}}_2)-v({{\frak z}}_1))},$$ valid for every edge ${{\frak e}}=({{\frak z}}_1,{{\frak z}}_2)$ of ${{\cal T}}{{\cal L}}$, correctly defines the third field $h$ of the $fgh$–system. All affine transformations of the field $w$ thus obtained, and only they, lead to pairs $(v,w)$ and $(w,u)$ satisfying (\[motion eq zw\]).
------------------------------------------------------------------------
[**Remark.**]{} Notice that the above results remain valid in the more general context, when the fields $f,g,h$ do not commute anymore, e.g. when they take values in ${\Bbb H}$, the field of quaternions. The formulation and the proof of Theorem \[Equations of motion\] hold in this case literally, while the formula (\[hex eq z\]) reads then as $$\label{hex eq z quat}
(u_1-u_2)(u_2-u_3)^{-1}(u_3-u_4)(u_4-u_5)^{-1}(u_5-u_6)(u_6-u_1)^{-1}=-1,$$ and similarly for $v,w$.
Circularity
-----------
Recall that hexagonal circle patterns with $MR=-1$ lead to a subclass of triangular lattices with $MR=-1$, namely those where the points of one of the three hexagonal sublattices lie on circles. We now prove a remarkable statement showing that this subclass is stable with respect to the transformation $u\mapsto v$ described in Theorem \[z to w\].
\[from circ to circ\] Let $u:V({{\cal H}}{{\cal L}}_j)\mapsto{\Bbb C}$ define a hexagonal circle pattern with $MR=-1$. Extend it with the centers of the circles to $u:V({{\cal T}}{{\cal L}})\mapsto{\Bbb C}$, a triangular lattice with $MR=-1$. Let $v:V({{\cal T}}{{\cal L}})\mapsto{\Bbb C}$ be the triangular lattice with $MR=-1$ related to $u$ via (\[motion eq zw\]). Then the restriction of the map $v$ to the sublattice ${{\cal H}}{{\cal L}}_{j+1}$ also defines a hexagonal circle pattern with $MR=-1$, while the points $v$ corresponding to ${{\cal T}}{{\cal L}}\setminus{{\cal H}}{{\cal L}}_{j+1}$ are the centers of the corresponding circles.
[**Proof**]{} starts as the proof of Theorem \[z to w\]. Let ${{\frak z}}'$ be a center of an arbitrary elementary hexagon of the sublattice ${{\cal H}}{{\cal L}}_{j+1}$, i.e. ${{\frak z}}'=k+\ell\omega+m\omega^2$ with $k+\ell+m\equiv j+1\!\!\pmod 3$. Denote by ${{\frak z}}_k={{\frak z}}'+\varepsilon^k$, $k=1,2,\ldots,6$ the vertices of the hexagon. As before, considering the positively oriented triangles $({{\frak z}}_{2k}, {{\frak z}}_{2k-1},{{\frak z}}')$ and $({{\frak z}}_{2k},{{\frak z}}_{2k+1},{{\frak z}}')$, $k=1,2,3$, surrounding the point ${{\frak z}}'$, we come to the relations $$\label{circularity aux1}
\frac{u_{2k-1}-u_{2k}}{u'-u_{2k-1}}=\frac{v'-v_{2k-1}}{v_{2k}-v'}\,,\qquad
\frac{u_{2k+1}-u_{2k}}{u'-u_{2k+1}}=\frac{v'-v_{2k+1}}{v_{2k}-v'}\,,\qquad
k=1,2,3.$$ But, obviously, ${{\frak z}}_{2k-1}$ $(k=1,2,3)$ are centers of elementary hexagons of the sublattice ${{\cal H}}{{\cal L}}_j$. By assumption, the points $u_{2k-2}$, $u_{2k}$ and $u'$ lie on a circle with the center in $u_{2k-1}$. Therefore, $$\label{circularity aux2}
|u_{2k}-u_{2k-1}|=|u_{2k-2}-u_{2k-1}|=|u'-u_{2k-1}|,\qquad
k=1,2,3.$$ So, the absolute values of the left–hand sides of all equations in (\[circularity aux1\]) are equal to 1. It follows that all six points $v_1,v_2,\ldots,v_6$ lie on a circle with the center in $v'$.
------------------------------------------------------------------------
Isomonodromic solutions {#Sect isomonodromic}
=======================
Recall that we use triples $(k,\ell,m)\in{\Bbb Z}^3$ as coordinates of the vertices ${{\frak z}}=k+\ell\omega+m\omega^2$, and that two such triples are identified iff they differ by the vector $(n,n,n)$ with $n\in{\Bbb Z}$. We call the straight line ${\Bbb R}\subset {\Bbb C}$ the $k$–axis, the straight line ${\Bbb R}\omega$ the $\ell$–axis, and the straight line ${\Bbb R}\omega^2$ the $m$–axis.
It will be sometimes convenient to use the symbols $\tilde{\cdot}$, $\hat{\cdot}$ and $\bar{\cdot}$ to denote the shifts of various objects in the positive direction of the axes $k$, $\ell$, $m$, respectively, and the symbols $\undertilde{\cdot}$, $\underhat{\cdot}$, $\underline{\cdot}$ to denote the shifts in the negative directions. This will apply to vertices, edges and elementary triangles of ${{\cal T}}{{\cal L}}$, as well as to various objects assigned to them. For example, if ${{\frak z}}\in
V({{\cal T}}{{\cal L}})$, then $$\widetilde{{{\frak z}}}={{\frak z}}+1, \quad \undertilde{{{\frak z}}}={{\frak z}}-1, \quad
\widehat{{{\frak z}}}={{\frak z}}+\omega, \quad \underhat{{{\frak z}}}={{\frak z}}-\omega, \quad
\bar{{{\frak z}}}={{\frak z}}+\omega^2,\quad \underline{{{\frak z}}}={{\frak z}}-\omega^2.$$ Similarly, if ${{\frak e}}=({{\frak z}}_1,{{\frak z}}_2)\in E({{\cal T}}{{\cal L}})$, then $$\widetilde{{{\frak e}}}=({{\frak z}}_1+1,{{\frak z}}_2+1), \quad
\widehat{{{\frak e}}}=({{\frak z}}_1+\omega,{{\frak z}}_2+\omega), \quad
\bar{{{\frak e}}}=({{\frak z}}_1+\omega^2,{{\frak z}}_2+\omega^2),\quad {\rm etc.}$$
A fundamental role in the subsequent presentation will be played by a [*non-autonomous constraint*]{} for the solutions of the $fgh$–system. This constraint consists of a pair of equations which are formulated for every vertex ${{\frak z}}\in V({{\cal T}}{{\cal L}})$ and include the values of the fields on the edges incident to ${{\frak z}}$, i.e. on the [*star*]{} of this vertex. It will be convenient to fix a numeration of these edges as follows: $$\begin{aligned}
{{\frak e}}_0=({{\frak z}},\widetilde{{{\frak z}}}),\quad {{\frak e}}_2=({{\frak z}},\widehat{{{\frak z}}}),\quad
{{\frak e}}_4=({{\frak z}},\bar{{{\frak z}}}), \\
{{\frak e}}_1=(\underline{{{\frak z}}},{{\frak z}}),\quad
{{\frak e}}_3=(\undertilde{{{\frak z}}},{{\frak z}}),\quad {{\frak e}}_5=(\underhat{{{\frak z}}},{{\frak z}}).\end{aligned}$$ The notations $f_0,\ldots,f_6$ will refer to the values of the field $f$ on these edges: $$\begin{aligned}
f_0=\widetilde{u}-u,\quad f_2=\widehat{u}-u,\quad f_4=\bar{u}-u, \\
f_1=u-\underline{u}, \quad f_3=u-\undertilde{u},\quad
f_5=u-\underhat{u},\end{aligned}$$ and similarly for the fields $g$, $h$, see Fig. \[fig:notationSec5\].
*(Fig. \[fig:notationSec5\]: the star of the vertex ${{\frak z}}$, with the values $u$, $\widetilde{u}$, $\widehat{u}$, $\bar{u}$, $\undertilde{u}$, $\underhat{u}$, $\underline{u}$ at its vertices and the fields $f_0,\ldots,f_5$ on its edges.)*
The constraint looks as follows: $$\label{constr 1}
\alpha u=k\frac{f_0g_0f_3}{f_0g_0+g_0f_3+f_3g_3}+
\ell\frac{f_2g_2f_5}{f_2g_2+g_2f_5+f_5g_5}+
m\frac{f_4g_4f_1}{f_4g_4+g_4f_1+f_1g_1},$$ $$\label{constr 2}
\beta v=k\frac{g_0f_3g_3}{f_0g_0+g_0f_3+f_3g_3}+
\ell\frac{g_2f_5g_5}{f_2g_2+g_2f_5+f_5g_5}+
m\frac{g_4f_1g_1}{f_4g_4+g_4f_1+f_1g_1}.$$ These are supposed to be the equations for the vertex ${{\frak z}}=k+\ell\omega+m\omega^2$, and we use the notations $u=u({{\frak z}})$, $v=v({{\frak z}})$. Since the fields $u$, $v$ are defined only up to an affine transformation, one should replace the left–hand sides of the above equations by $\alpha u+\phi$, $\beta v+\psi$, respectively, with arbitrary constants $\phi$, $\psi$. In the form we have chosen it is assumed that the fields $u$, $v$ are normalized to vanish at the origin.
\[constraint preliminary\] The equations (\[constr 1\]), (\[constr 2\]) are well defined equations for the point ${{\frak z}}\in V({{\cal T}}{{\cal L}})$, i.e. they are invariant under the shift $(k,\ell,m)\mapsto(k+n,\ell+n,m+n)$, provided the equations (\[motion eq zw\]) hold.
[**Proof**]{} is technical and is given in the Appendix \[Appendix\].
------------------------------------------------------------------------
We mention an important consequence of this proposition. At first sight, the constraint (\[constr 1\]), (\[constr 2\]) relates the values of the fields $u$, $v$ in [*seven*]{} points shown in Fig. \[fig:notationSec5\]. However, we are free to choose any representative $(k,\ell,m)$ for ${{\frak z}}$. In particular, we can make any one of the coordinates $k$, $\ell$, $m$ vanish. In the corresponding representation the constraint relates the values of the fields $u$, $v$ in [*five*]{} points, belonging to any one of the three possible four–leg crosses through ${{\frak z}}$.
An essential algebraic property of the constraint (\[constr 1\]), (\[constr 2\]) is given by the following statement.
\[third constraint\] If the equations (\[motion eq zw\]) hold, then the constraints (\[constr 1\]), (\[constr 2\]) imply a similar equation for the field $w$ (vanishing at ${{\frak z}}=0$): $$\label{constr 3}
\gamma w=k\frac{1}{f_0g_0+g_0f_3+f_3g_3}+
\ell\frac{1}{f_2g_2+g_2f_5+f_5g_5}+
m\frac{1}{f_4g_4+g_4f_1+f_1g_1},$$ where $\gamma=1-\alpha-\beta$.
[**Proof**]{} is again based on calculations and is relegated to the Appendix \[Appendix\].
------------------------------------------------------------------------
[**Remark.**]{} We notice that restoring the fields $h_k=1/(f_kg_k)$ allows us to rewrite the equations (\[constr 2\]), (\[constr 3\]) as $$\begin{aligned}
\beta v & = & k\frac{g_0h_0g_3}{g_0h_0+h_0g_3+g_3h_3}+
\ell\frac{g_2h_2g_5}{g_2h_2+h_2g_5+g_5h_5}+
m\frac{g_4h_4g_1}{g_4h_4+h_4g_1+g_1h_1}, \label{constr 2 alt}\\
\gamma w & = & k\frac{h_0f_0h_3}{h_0f_0+f_0h_3+h_3f_3}+
\ell\frac{h_2f_2h_5}{h_2f_2+f_2h_5+h_5f_5}+
m\frac{h_4f_4h_1}{h_4f_4+f_4h_1+h_1f_1}, \label{constr 3 alt}\end{aligned}$$ which coincides with (\[constr 1\]) via a cyclic permutation of fields $(f,g,h)\mapsto(g,h,f)$ performed once or twice, respectively, and accompanied by changing $\alpha$ to $\beta$, $\gamma$, respectively.
Another similar remark: as it follows from the formulas (\[constr 1 well aux2\]), (\[constr 1 well aux3\]) used in the proof of Proposition \[constraint preliminary\] (and their analogs for the fields $g$, $h$), the constraints (\[constr 1\]), (\[constr 2\]), (\[constr 3\]) may be rewritten as equations for the single field $u$, resp. $v$, $w$: $$\begin{aligned}
\alpha u & = & k\frac{f_0f_3(f_1+f_2)}{(f_0-f_2)(f_1-f_3)}+
\ell\frac{f_2f_5(f_3+f_4)}{(f_2-f_4)(f_3-f_5)}+
m\frac{f_4f_1(f_5+f_0)}{(f_4-f_0)(f_5-f_1)}, \label{constr 1 z}\\
\beta v & = & k\frac{g_0g_3(g_1+g_2)}{(g_0-g_2)(g_1-g_3)}+
\ell\frac{g_2g_5(g_3+g_4)}{(g_2-g_4)(g_3-g_5)}+
m\frac{g_4g_1(g_5+g_0)}{(g_4-g_0)(g_5-g_1)}, \label{constr 2 w}\\
\gamma w & = & k\frac{h_0h_3(h_1+h_2)}{(h_0-h_2)(h_1-h_3)}+
\ell\frac{h_2h_5(h_3+h_4)}{(h_2-h_4)(h_3-h_5)}+
m\frac{h_4h_1(h_5+h_0)}{(h_4-h_0)(h_5-h_1)}. \label{constr 3 v}\end{aligned}$$ However, in this form, unlike the previous one, the terms attached to the variable $k$ (say) contain not only the fields on the two edges ${{\frak e}}_0$, ${{\frak e}}_3$ parallel to the $k$–axis, but also fields on the other edges of the star. This form is therefore less suited for the solution of the Cauchy problem for the constrained $fgh$–system, which we discuss now.
\[compatibility\] For arbitrary $\alpha,\beta\in{\Bbb C}$ the constraint (\[constr 1\]), (\[constr 2\]) is compatible with the equations (\[motion eq zw\]).
[**Proof.**]{} To prove this statement, one has to demonstrate the solvability of a reasonably posed Cauchy problem for the $fgh$–system constrained by (\[constr 1\]), (\[constr 2\]). In this context, it is unnatural to assume that the fields $u$, $v$ vanish at the origin, so that we replace (only in this proof) the left–hand sides of (\[constr 1\]), (\[constr 2\]) by $\alpha u+\phi$, $\beta v+\psi$, with arbitrary $\phi,\psi\in{\Bbb
C}$. We show that reasonable Cauchy data are given by the values of two fields $u$, $v$ (say) in three points ${{\frak z}}_0$, ${{\frak z}}_1={{\frak z}}_0+1$, ${{\frak z}}_2={{\frak z}}_0+\omega$, where ${{\frak z}}_0$ is arbitrary. According to Lemma \[lemma 4th point\], these data yield via the equations of the $fgh$–system the values of $u$, $v$ in ${{\frak z}}_3={{\frak z}}_0+1+\omega$. Further, these data together with the constraint (\[constr 1\]), (\[constr 2\]) determine uniquely the values of $u$, $v$ in ${{\frak z}}_4={{\frak z}}_0+\omega^2$. Indeed, assign $u({{\frak z}}_4)=\xi$, $v({{\frak z}}_4)=\eta$, where $\xi$, $\eta$ are two arbitrary complex numbers. The constraint uniquely defines the values of $u$, $v$ in the point ${{\frak z}}_5={{\frak z}}_0-\omega$. The requirement that these values agree with the ones obtained via Lemma \[lemma 4th point\] from the points ${{\frak z}}_0$, ${{\frak z}}_1$, ${{\frak z}}_4$, gives us two equations for $\xi$, $\eta$. It is shown by a direct computation that these equations have a unique solution, which is expressed via rational functions of the data at ${{\frak z}}_0$, ${{\frak z}}_1$, ${{\frak z}}_2$. It is also shown that the same solution is obtained, if we work with ${{\frak z}}_6={{\frak z}}_0-1$ instead of ${{\frak z}}_5$. Having found the fields $u$, $v$ at ${{\frak z}}_4$, we determine simultaneously $u$, $v$ at ${{\frak z}}_5$, ${{\frak z}}_6$. Now a similar procedure allows us to determine $u$, $v$ at ${{\frak z}}_7={{\frak z}}_0+2$ and ${{\frak z}}_8={{\frak z}}_0+2\omega$, using the constraint at the points ${{\frak z}}_1$ and ${{\frak z}}_2$, respectively. Simultaneously the values of $u$, $v$ are found at ${{\frak z}}_9={{\frak z}}_0+2+\omega$ and ${{\frak z}}_{10}=
{{\frak z}}_0+1+2\omega$. A continuation of this procedure delivers the values of $u$, $v$ on both semiaxes $$\Big\{{{\frak z}}=k: k\ge 0\Big\}\cup\Big\{{{\frak z}}=\ell\omega: \ell\ge
0\Big\},$$ using the condition that the constraint (\[constr 1\]), (\[constr 2\]) is fulfilled on these semiaxes. As we know from Proposition \[Cauchy data for fgh system\], these data are enough to determine the solution of the $fgh$–system in the whole sector $$\Big\{{{\frak z}}=k+\ell\omega: k,\ell\ge 0\Big\}= \Big\{{{\frak z}}\in V({{\cal T}}{{\cal L}}):
0\le{\rm\arg}({{\frak z}})\le2\pi/3\Big\}.$$ It remains to prove that this solution also fulfills the constraint (\[constr 1\]), (\[constr 2\]) in the whole sector. This follows by induction from the following statement:
\[lemma for compatibility\] If the constraint (\[constr 1\]), (\[constr 2\]) is satisfied in ${{\frak z}}_0$, ${{\frak z}}_1$, ${{\frak z}}_2$, then it is satisfied also in ${{\frak z}}_3$.
The constraint at ${{\frak z}}_3$ includes the data at five points ${{\frak z}}_1$, ${{\frak z}}_2$, ${{\frak z}}_3$, ${{\frak z}}_9$, ${{\frak z}}_{10}$. As we have seen, the data at ${{\frak z}}_3$, ${{\frak z}}_9$, ${{\frak z}}_{10}$ are certain (complicated) functions of the data at ${{\frak z}}_0$, ${{\frak z}}_1$, ${{\frak z}}_2$. Therefore, to check the constraint at ${{\frak z}}_3$, one has to check that two (complicated) equations for the values of $u$, $v$ at ${{\frak z}}_0$, ${{\frak z}}_1$, ${{\frak z}}_2$ are satisfied identically. This has been done with the help of the Mathematica computer algebra system.
------------------------------------------------------------------------
Now we show how the constraint (\[constr 1\]), (\[constr 2\]) appears in the context of isomonodromic solutions of integrable systems. In this context, the results look better with a different gauge of the transition matrices for the $fgh$–system. Namely, we conjugate them with the matrix ${\rm diag}(1,\lambda,\lambda^2)$, and then multiply by $(1+\lambda^3)^{1/3}$ in order to get rid of the normalization of the determinant. Writing then $\mu$ for $\lambda^3$, we end up with the matrices $${{\cal L}}(\mu)=\left(\begin{array}{ccc} 1 & f & 0 \\ 0 & 1 & g \\ \mu h
& 0 & 1\end{array}\right), \quad fgh=1.$$ The zero curvature condition turns into $$\label{zero curv in mu}
{{\cal L}}({{\frak e}}_3,\mu){{\cal L}}({{\frak e}}_2,\mu){{\cal L}}({{\frak e}}_1,\mu)=(1+\mu)I,$$ ${{\frak e}}_1$, ${{\frak e}}_2$, ${{\frak e}}_3$ being the consecutive positively oriented edges of an elementary triangle of ${{\cal T}}{{\cal L}}$. This also implies some slight modifications of the notion of the wave function. Namely, the previous formula does not allow one to define the function $\Psi$ on $V({{\cal T}}{{\cal L}})$ such that $$\Psi({{\frak z}}_2,\mu)={{\cal L}}({{\frak e}},\mu)\Psi({{\frak z}}_1,\mu)$$ holds, whenever ${{\frak e}}=({{\frak z}}_1,{{\frak z}}_2)$. The way around this difficulty is the following. We define the wave function $\Psi$ on a covering of $V({{\cal T}}{{\cal L}})$. Namely, over each point ${{\frak z}}=k+\ell\omega+m\omega^2$ now sits a sequence $$\label{cover cond for psi}
\Psi_{k+n,\ell+n,m+n}(\mu)=(1+\mu)^n\Psi_{k,\ell,m}(\mu),\quad
n\in{\Bbb Z}.$$ The values of these functions in neighboring vertices are related by natural formulas $$\label{wave evolution in mu}
\left\{\begin{array}{l}
\Psi_{k+1,\ell,m}(\mu)={{\cal L}}({{\frak e}}_0,\mu)\Psi_{k,\ell,m}(\mu),\quad
{{\frak e}}_0=({{\frak z}},{{\frak z}}+1),\\
\Psi_{k,\ell+1,m}(\mu)={{\cal L}}({{\frak e}}_2,\mu)\Psi_{k,\ell,m}(\mu),\quad
{{\frak e}}_2=({{\frak z}},{{\frak z}}+\omega),\\
\Psi_{k,\ell,m+1}(\mu)={{\cal L}}({{\frak e}}_4,\mu)\Psi_{k,\ell,m}(\mu), \quad
{{\frak e}}_4=({{\frak z}},{{\frak z}}+\omega^2).
\end{array}\right.$$ We call a solution $(u,v):V({{\cal T}}{{\cal L}})\mapsto{\Bbb C}^2$ of the equations (\[motion eq zw\]) [**isomonodromic**]{} (cf. [@I]), if there exists a wave function $\Psi:{\Bbb
Z}^3\mapsto {\rm GL}(3,{\Bbb C})[\mu]$ satisfying (\[wave evolution in mu\]) and some linear differential equation in $\mu$: $$\label{eq in mu}
\frac{d}{d\mu}\Psi_{k,\ell,m}(\mu)={{\cal A}}_{k,\ell,m}(\mu)\Psi_{k,\ell,m}(\mu),$$ where ${{\cal A}}_{k,\ell,m}(\mu)$ are $3\times 3$ matrices, meromorphic in $\mu$, with poles whose positions and orders do not depend on $k,\ell,m$.
Obviously, due to (\[cover cond for psi\]), the matrix ${{\cal A}}$ has to fulfill the condition $$\label{cover cond for A}
{{\cal A}}_{k+n,\ell+n,m+n}(\mu)={{\cal A}}_{k,\ell,m}(\mu)+\frac{n}{1+\mu}I,\quad
n\in{\Bbb Z}.$$
\[monodromy\] Solutions of the equations (\[motion eq zw\]) satisfying the constraints (\[constr 1\]), (\[constr 2\]) are isomonodromic. The corresponding matrix ${{\cal A}}_{k,\ell,m}$ is given by the following formulas: $$\label{ans A}
{{\cal A}}_{k,\ell,m}=\frac{C_{k,\ell,m}}{1+\mu}+\frac{D({{\frak z}})}{\mu},$$ where $C_{k,\ell,m}$ and $D({{\frak z}})$ are $\mu$–independent matrices: $$\label{C}
C_{k,\ell,m}=kP_0({{\frak z}})+\ell P_2({{\frak z}})+mP_4({{\frak z}}),$$ $P_{0,2,4}$ are rank 1 matrices $$\label{P}
P_j({{\frak z}})=\frac{1}{f_jg_j+g_jf_{j+3}+f_{j+3}g_{j+3}}
\left(\begin{array}{ccc} f_jg_j & -f_jg_jf_{j+3} & f_jg_jf_{j+3}g_{j+3} \\
-g_j & g_jf_{j+3} & -g_jf_{j+3}g_{j+3} \\
1 & -f_{j+3} & f_{j+3}g_{j+3} \end{array}\right),\quad j=0,2,4,$$ and the matrix $D$ is well defined on $V({{\cal T}}{{\cal L}})$ and not only on its covering ${\Bbb Z}^3$: $$\label{D}
D({{\frak z}})=\left(\begin{array}{ccc}
-(2\alpha+\beta)/3 & \alpha u & \beta a-\alpha a' \\
0 & (\alpha-\beta)/3 & \beta v \\
0 & 0 & (2\beta+\alpha)/3 \end{array}\right),$$ where the functions $a,a':V({{\cal T}}{{\cal L}})\mapsto{\Bbb C}$ are solutions of the equations (\[eq for a\]), (\[eq for a’\]).
[**Proof**]{} can be found in the Appendix \[Appendix\].
------------------------------------------------------------------------
Isomonodromic solutions and circle patterns {#Sect isomonodromic patterns}
===========================================
We now consider isomonodromic solutions of the $fgh$–system satisfying the constraint (\[constr 1\]), (\[constr 2\]), which are special in two respects:
- First, the constants $\alpha$ and $\beta$ in the constraint equations are not arbitrary, but are [*equal*]{}: $\alpha=\beta$, so that $\gamma=1-2\alpha$.
- Second, the initial conditions will be chosen in a special way.
We will show that the resulting solutions lead to hexagonal circle patterns.
First of all, we discuss the Cauchy data which allow one to determine a solution of the $fgh$–system augmented by the constraints (\[constr 1\]), (\[constr 2\]). Of course, the fields $u$, $v$, $w$ have to vanish at the origin ${{\frak z}}=0$. Next, one sees easily that, given $u$ and $v$ in one of the points neighboring $0$, the constraint allows us to calculate, one after another, the values of $u$ and $v$ in all points of the corresponding axis. For instance, fixing some values of $u(1)$ and $v(1)$, we can calculate all $u(k)$ and $v(k)$ from the relations $$\label{contr u on axis}
\alpha
u(k)=k\frac{f(k)g(k)f(k-1)}{f(k)g(k)+g(k)f(k-1)+f(k-1)g(k-1)},$$ $$\label{contr v on axis}
\beta
v(k)=k\frac{g(k)f(k-1)g(k-1)}{f(k)g(k)+g(k)f(k-1)+f(k-1)g(k-1)},$$ where we have set $$\label{fg uv}
f(k)=u(k+1)-u(k),\qquad g(k)=v(k+1)-v(k).$$ Indeed, we start with $u(0)=0$, $v(0)=0$, $f(0)=u(1)$, $g(0)=v(1)$, and continue via the recurrent formulas, which are easily seen to be equivalent to (\[contr u on axis\]), (\[contr v on axis\]), (\[fg uv\]): $$\label{recur zw}
u(k)=u(k-1)+f(k-1),\qquad v(k)=v(k-1)+g(k-1),$$ $$\begin{aligned}
f(k) & = & \frac{\alpha u(k)}{\beta v(k)}\,g(k-1), \label{recur f} \\
g(k) & = & \frac{\beta v(k)} {k-\displaystyle\frac{\alpha
u(k)}{f(k-1)}-\displaystyle\frac{\beta v(k)} {g(k-1)}}.
\label{recur g}\end{aligned}$$ So, given the values of the fields $u$ and $v$ (and hence of $w$) in the points ${{\frak z}}=1$ and ${{\frak z}}=\omega$, we get their values in all points ${{\frak z}}=k$ and ${{\frak z}}=\ell\omega$ of the positive $k$- and $\ell$-semiaxes. It is easy to see that $u(k)/u(1)$ and $v(k)/v(1)$ do not depend on $u(1)$ and $v(1)$, respectively, so that all points $u(k)$ lie on a straight line, and so do all points $v(k)$. Similar statements hold also for all points $u(\ell\omega)$ and for all points $v(\ell\omega)$. And, of course, the third field $w$ behaves analogously.
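The recurrent formulas are easy to implement. The following sketch (Python, purely illustrative; the values of $\alpha$, $\beta$ and the initial data are arbitrary choices of ours) computes $u(k)$, $v(k)$ on the positive $k$-axis and verifies that the produced values satisfy (\[contr u on axis\]), (\[contr v on axis\]).

```python
alpha, beta = 0.3, 0.45
u, v = [0.0, 1.0], [0.0, 1.0]          # u(0) = v(0) = 0, initial values u(1) = v(1) = 1
f, g = [1.0], [1.0]                    # f(0) = u(1), g(0) = v(1)
for k in range(1, 12):
    fk = (alpha * u[k]) / (beta * v[k]) * g[k - 1]
    gk = beta * v[k] / (k - alpha * u[k] / f[k - 1] - beta * v[k] / g[k - 1])
    f.append(fk); g.append(gk)
    u.append(u[k] + fk); v.append(v[k] + gk)

k = 5                                  # check the constraint at an arbitrary k
den = f[k] * g[k] + g[k] * f[k - 1] + f[k - 1] * g[k - 1]
print(alpha * u[k] - k * f[k] * g[k] * f[k - 1] / den)      # ~0
print(beta * v[k] - k * g[k] * f[k - 1] * g[k - 1] / den)   # ~0
```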
So, we get the values of $u$ and $v$ in all points on the border of the sector $$\label{sector}
S=\Big\{{{\frak z}}\in V({{\cal T}}{{\cal L}}): 0\le{\rm\arg}({{\frak z}})\le 2\pi/3\Big\}
=\Big\{{{\frak z}}=k+\ell\omega: k,\ell\ge 0\Big\}.$$ Proposition \[Cauchy data for fgh system\] assures that these data determine the values of $u$ and $v$ in all points of $S$. By Theorem \[compatibility\] (more precisely, by Lemma \[lemma for compatibility\]) the solution thus obtained will satisfy the constraint (\[constr 1\]), (\[constr 2\]) in the whole sector $S$.
Now we are in a position to specify the above mentioned isomonodromic solutions.
\[circular\] Let $\beta=\alpha$. Let $u,v,w:S\mapsto{\Bbb C}$ be the solutions of the $fgh$–system with the constraint (\[constr 1\]), (\[constr 2\]), with the initial conditions $$\label{ini uv}
u(1)=v(1)=1, \quad u(\omega)=v(\omega)=\exp(i\theta),$$ where $0<\theta<\pi$. Then all three maps $u,v,w$ define hexagonal circle patterns with $MR=-1$ in the sector $S$. More precisely, if ${{\frak z}}_k={{\frak z}}'+\varepsilon^k$, $k=1,2,\ldots,6,$ are the vertices of an elementary hexagon in this sector, then:
- $u({{\frak z}}_1),u({{\frak z}}_2),\ldots,u({{\frak z}}_6)$ lie on a circle with the center in $u({{\frak z}}')$ whenever ${{\frak z}}'\in S\setminus V({{\cal H}}{{\cal L}}_1)$,
- $v({{\frak z}}_1),v({{\frak z}}_2),\ldots,v({{\frak z}}_6)$ lie on a circle with the center in $v({{\frak z}}')$ whenever ${{\frak z}}'\in S\setminus V({{\cal H}}{{\cal L}}_2)$,
- $w({{\frak z}}_1),w({{\frak z}}_2),\ldots,w({{\frak z}}_6)$ lie on a circle with the center in $w({{\frak z}}')$ whenever ${{\frak z}}'\in S\setminus V({{\cal H}}{{\cal L}}_0)$.
[**Proof**]{} follows from the above inductive construction with the help of two lemmas. The first one shows that if $\beta=\alpha$ then the constraint yields a very special property of the sequences of the values of the fields $u$, $v$, $w$ in the points of the $k$- and $\ell$-axes.
\[equidist\] If $\beta=\alpha$, then for $k,\ell\ge 1$: $$\begin{aligned}
|u(3k-1)-u(3k-2)| & = & |u(3k-2)-u(3k-3)|, \label{u=u on k}\\
|v(3k)-v(3k-1)| & = & |v(3k-1)-v(3k-2)|,\label{v=v on k}\\
|w(3k+1)-w(3k)| & = & |w(3k)-w(3k-1)|,\label{w=w on k}\\ \nonumber\\
|u((3\ell-1)\omega)-u((3\ell-2)\omega)| & = & |u((3\ell-2)\omega)-
u((3\ell-3)\omega)|, \label{u=u on l}\\
|v(3\ell\omega)-v((3\ell-1)\omega)| & = &
|v((3\ell-1)\omega)-v((3\ell-2)\omega)|, \label{v=v on l}\\
|w((3\ell+1)\omega)-w(3\ell\omega)| & = &
|w(3\ell\omega)-w((3\ell-1)\omega)|. \label{w=w on l}\end{aligned}$$
The second one allows us to extend these special properties inductively to the whole sector (\[sector\]).
\[geometry\] Consider two elementary triangles with the vertices ${{\frak z}}_0$, ${{\frak z}}_1={{\frak z}}_0+1$, ${{\frak z}}_2={{\frak z}}_0+\omega$, and ${{\frak z}}_3={{\frak z}}_0+1+\omega$. Suppose that
- $|u({{\frak z}}_1)-u({{\frak z}}_0)|=|u({{\frak z}}_2)-u({{\frak z}}_0)|$;
- $\measuredangle v({{\frak z}}_1)v({{\frak z}}_0)v({{\frak z}}_2)=\vartheta\;\;$ and $\;\;\measuredangle u({{\frak z}}_1)u({{\frak z}}_0)u({{\frak z}}_2)=2\pi-2\vartheta\;\;$ for some $\vartheta$.
Then $$|u({{\frak z}}_3)-u({{\frak z}}_0)|=|u({{\frak z}}_1)-u({{\frak z}}_0)|=|u({{\frak z}}_2)-u({{\frak z}}_0)|,$$ and hence $$|v({{\frak z}}_3)-v({{\frak z}}_1)|=|v({{\frak z}}_0)-v({{\frak z}}_1)|, \qquad
|v({{\frak z}}_3)-v({{\frak z}}_2)|=|v({{\frak z}}_0)-v({{\frak z}}_2)|$$ and $$|w({{\frak z}}_3)-w({{\frak z}}_1)|=|w({{\frak z}}_3)-w({{\frak z}}_2)|=|w({{\frak z}}_3)-w({{\frak z}}_0)|$$
The assertion of this lemma is illustrated in Fig. \[fig:statement17\].
*(Fig. \[fig:statement17\]: the configurations of the points $u_0,u_1,u_2,u_3$, $v_0,v_1,v_2,v_3$ and $w_0,w_1,w_2,w_3$; the angle at $u_0$ equals $2\pi-2\vartheta$, the angle at $v_0$ equals $\vartheta$.)*
First of all, we show how these lemmas work towards the proof of Theorem \[circular\]. The initial conditions (\[ini uv\]) imply: $$\label{ini w}
w(1)=1, \quad w(\omega)=\exp(-2i\theta)=\exp(i(2\pi-2\theta)).$$ Therefore, the conditions of Lemma \[geometry\] are fulfilled in the point ${{\frak z}}_0=0$ with the fields $(w,u,v)$ instead of $(u,v,w)$. From this Lemma it follows that
- The points $w(1)$, $w(\omega)$, $w(1+\omega)$ are equidistant from $w(0)$;
- The points $v(0)$, $v(1)$, $v(\omega)$ are equidistant from $v(1+\omega)$;
- The points $u(1+\omega)$, $u(0)$ are equidistant from $u(\omega)$;
- The points $u(1+\omega)$, $u(0)$ are equidistant from $u(1)$.
Since, by Lemma \[equidist\], we have $|u(0)-u(1)|=|u(2)-u(1)|$, it follows from $({\rm d}_0)$ that $|u(1+\omega)-u(1)|=|u(2)-u(1)|$. Finally, from Lemma \[geometry\] it follows that (see Fig. \[fig:proof15\]) $$\measuredangle v(2)v(1)v(1+\omega)=\pi-\psi_1,\qquad
\measuredangle
u(2)u(1)u(1+\omega)=\pi-\phi_1=2\psi_1=2\pi-2(\pi-\psi_1).$$ Therefore the conditions of Lemma \[geometry\] are fulfilled in the point ${{\frak z}}_0=1$ with the fields $(u,v,w)$. We deduce that
- The points $u(2)$, $u(1+\omega)$, $u(2+\omega)$ are equidistant from $u(1)$;
- The points $w(1)$, $w(2)$, $w(1+\omega)$ are equidistant from $w(2+\omega)$;
- The points $v(2+\omega)$, $v(1)$ are equidistant from $v(1+\omega)$, which adds the point $v(2+\omega)$ to the list of equidistant neighbors of $v(1+\omega)$ from the conclusion $({\rm b}_0)$ above; and
- The points $v(2+\omega)$, $v(1)$ are equidistant from $v(2)$.
By Lemma \[equidist\], we have $|v(1)-v(2)|=|v(3)-v(2)|$, and it follows from $({\rm d}_1)$ that $|v(2+\omega)-v(2)|=|v(3)-v(2)|$. Finally, from Lemma \[geometry\] it follows that (see Fig. \[fig:proof15\]) $$\measuredangle w(3)w(2)w(2+\omega)=\pi-\psi_3,\qquad
\measuredangle
v(3)v(2)v(2+\omega)=\pi-\phi_3=2\psi_3=2\pi-2(\pi-\psi_3).$$ Hence, the conditions of Lemma \[geometry\] are again fulfilled in the point ${{\frak z}}_0=2$ with the fields $(v,w,u)$.
These arguments may be continued by induction along the $k$-axis, and, by symmetry, along the $\ell$-axis. This delivers all the necessary relations which involve the points ${{\frak z}}=k+\ell\omega$ with $k\le 1$ or $\ell\le 1$. We call them the relations of the level 1.
The arguments of the level 2 start with the pair of fields $(v,w)$ at the point ${{\frak z}}=1+\omega$. We have the level 1 relation $$|v(2+\omega)-v(1+\omega)|=|v(1+2\omega)-v(1+\omega)|.$$ For the angles, we have from the level 1 (see Fig. \[fig:proof15\]): $$\begin{aligned}
\measuredangle
w(2+\omega)w(1+\omega)w(1+2\omega) & = & 2\pi-(\psi_1+\psi_2+\psi_4+\psi_5), \\
\measuredangle v(2+\omega)v(1+\omega)v(1+2\omega) & = &
2\pi-(\phi_1+\phi_2+\phi_4+\phi_5)=
2\pi-2(2\pi-\psi_1-\psi_2-\psi_4-\psi_5).\end{aligned}$$ So, the conditions of Lemma \[geometry\] are again satisfied in the point ${{\frak z}}_0=1+\omega$ for the fields $(v,w,u)$. Continuing this sort of argument, we prove all the necessary relations which involve the points ${{\frak z}}=k+\ell\omega$ with $k\le 2$ or $\ell\le
2$, and which will be called the relations of the level 2. The induction with respect to the level finishes the proof.
------------------------------------------------------------------------
It remains to prove Lemmas \[equidist\] and \[geometry\] above.
As for the key Lemma \[geometry\], it might be instructive to give two proofs of it, an analytic and a geometric one. The first is shorter, but the second seems to provide more insight into the geometry.
[**Analytic proof of Lemma \[geometry\].**]{} We rewrite the assumptions of the lemma as $$u_2-u_0=(u_1-u_0)e^{2i(\pi-\vartheta)}=(u_1-u_0)e^{-2i\vartheta}$$ and $$v_2-v_0=c(v_1-v_0)e^{i\vartheta},\quad c>0.$$
[**Geometric proof of Lemma \[geometry\].**]{} The equations of the $fgh$–system imply that the triangles $u_0u_1u_3$ and $v_1v_3v_0$ are similar, and the triangles $u_0u_2u_3$ and $v_2v_3v_0$ are similar. Therefore, $$\frac{|v_1-v_0|}{|u_0-u_3|}=\frac{|v_1-v_3|}{|u_0-u_1|},\qquad
\frac{|v_2-v_0|}{|u_0-u_3|}=\frac{|v_2-v_3|}{|u_0-u_2|}.$$ From $|u_0-u_1|=|u_0-u_2|$ there follows now $$\label{lemma aux1}
\frac{|v_1-v_0|}{|v_1-v_3|}=\frac{|v_2-v_0|}{|v_2-v_3|}.$$
[Figure \[fig:proof17\]: two panels showing the fields $u$ and $v$, with the angles $\chi_i$, $\phi_i$, $\psi_i$ marked.]
Denoting the angles as on Fig. \[fig:proof17\], we have: $$\chi_1+\chi_2=\vartheta,\qquad \phi_1+\phi_2=2\pi-2\vartheta,$$ hence $$\psi_1+\psi_2=2\pi-(\phi_1+\phi_2)-(\chi_1+\chi_2)=\vartheta=\chi_1+\chi_2.$$ In other words, $$\label{lemma aux2}
\measuredangle v_1v_3v_2=\measuredangle v_1v_0v_2.$$ The relations (\[lemma aux1\]), (\[lemma aux2\]) yield that the triangles $v_1v_3v_2$ and $v_1v_0v_2$ are similar. But they have a common edge $[v_1,v_2]$, therefore they are congruent (symmetric with respect to this edge). This implies that the triangles $v_0v_2v_3$ and $v_0v_1v_3$ are isosceles, so that $\chi_1=\psi_1$ and $\chi_2=\psi_2$, and $$|v_0-v_1|=|v_3-v_1|,\qquad |v_0-v_2|=|v_3-v_2|.$$ Therefore $$|u_3-u_0|=|u_1-u_0|=|u_2-u_0|.$$ Lemma is proved.
------------------------------------------------------------------------
As for Lemma \[equidist\], its statement is a small part of the following theorem and its corollary.
\[Th circular zalpha\] If $\beta=\alpha$, then the recurrence relations (\[recur zw\]), (\[recur f\]), (\[recur g\]) with $u(1)=v(1)=1$ can be solved for $u(k)$, $v(k)$, $f(k)$, $g(k)$ $(k\ge 0)$ in closed form: $$\label{u3k}
u(3k)=\frac{2k}{k+2\alpha}\,\Pi_1(k),\qquad
u(3k+1)=\frac{2k+2\alpha}{k+2\alpha}\,\Pi_1(k),\qquad
u(3k+2)=2\,\Pi_1(k),$$ $$\label{f for zalpha}
f(3k-1)=f(3k)=f(3k+1)=\frac{2\alpha}{k+2\alpha}\,\Pi_1(k),$$ and $$\label{v3k}
v(3k-1)=\frac{k-\alpha}{k+\alpha}\,\Pi_2(k),\qquad
v(3k)=\frac{k}{k+\alpha}\,\Pi_2(k),\qquad v(3k+1)=\Pi_2(k),$$ $$\label{g for zalpha}
g(3k-2)=g(3k-1)=g(3k)=\frac{\alpha}{k+\alpha}\,\Pi_2(k),$$ where $$\label{Pi}
\Pi_1(k)=\frac{(1+2\alpha)(2+2\alpha)\ldots(k+2\alpha)}
{(1-\alpha)(2-\alpha)\ldots(k-\alpha)},\qquad
\Pi_2(k)=\frac{(1+\alpha)(2+\alpha)\ldots(k+\alpha)}
{(1-2\alpha)(2-2\alpha)\ldots(k-2\alpha)}.$$
[**Proof.**]{} Elementary calculations show that the expressions above satisfy the recurrence relations (\[recur zw\]), (\[recur f\]), (\[recur g\]) with $\beta=\alpha$, as well as the initial conditions. The uniqueness of the solution yields the statement. We remark that similar formulas can also be found in the general case $\alpha\neq\beta$; however, the property formulated in Lemma \[equidist\] fails to hold in general.
------------------------------------------------------------------------
\[Cor circular zalpha\] If $\beta=\alpha$, and $u(1)=v(1)=1$, then for the third field $w(k)$, $h(k)$ $(k\ge 0)$ we have: $$\label{w3k}
w(3k-1)=\frac{k-1+2\alpha}{1-2\alpha}\,\Pi_3(k),\qquad
w(3k)=\frac{k}{1-2\alpha}\,\Pi_3(k),\qquad
w(3k+1)=\frac{k+1-2\alpha}{1-2\alpha}\,\Pi_3(k),$$ $$\label{h3k-1 for zalpha}
h(3k-1)=h(3k)=\Pi_3(k),\qquad
h(3k+1)=\frac{k+1-2\alpha}{k+\alpha}\,\Pi_3(k),$$ where $$\Pi_3(k)=\frac{(1-\alpha)(2-\alpha)\ldots(k-\alpha)}
{\alpha(1+\alpha)\ldots(k-1+\alpha)}\cdot
\frac{(1-2\alpha)(2-2\alpha)\ldots(k-2\alpha)}
{2\alpha(1+2\alpha)\ldots(k-1+2\alpha)}.$$
[**Proof.**]{} The formulas for $h(k)=(f(k)g(k))^{-1}$ follow from (\[f for zalpha\]), (\[g for zalpha\]). The formulas for $w(k)= w(k-1)+h(k-1)$ with $w(0)=0$ follow by induction.
------------------------------------------------------------------------
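The closed-form expressions above are easy to check by machine. The following Python sketch (a verification aid, not part of the argument) evaluates (\[u3k\])–(\[Pi\]) and (\[w3k\])–(\[h3k-1 for zalpha\]) in exact rational arithmetic and confirms that $h(k)=(f(k)g(k))^{-1}$ and that $w(k)=w(k-1)+h(k-1)$ with $w(0)=0$, as in the proof of the Corollary. It also checks that $f$ and $g$ are the increments of $u$ and $v$ along the $k$-axis; this reading of the edge variables, $f(k)=u(k+1)-u(k)$ and $g(k)=v(k+1)-v(k)$, is an assumption made here for the purpose of the check.

```python
from fractions import Fraction

alpha = Fraction(1, 5)              # any rational 0 < alpha < 1/2 avoids zero denominators

def Pi1(k):                         # (1+2a)(2+2a)...(k+2a) / ((1-a)(2-a)...(k-a)), cf. (Pi)
    p = Fraction(1)
    for j in range(1, k + 1):
        p *= (j + 2 * alpha) / (j - alpha)
    return p

def Pi2(k):                         # (1+a)...(k+a) / ((1-2a)...(k-2a)), cf. (Pi)
    p = Fraction(1)
    for j in range(1, k + 1):
        p *= (j + alpha) / (j - 2 * alpha)
    return p

def Pi3(k):                         # Pi_3(k) from the Corollary
    p = Fraction(1)
    for j in range(1, k + 1):
        p *= (j - alpha) * (j - 2 * alpha)
    for j in range(k):              # j = 0, ..., k-1 gives alpha(1+alpha)...(k-1+alpha), etc.
        p /= (j + alpha) * (j + 2 * alpha)
    return p

def u(n):                           # (u3k)
    k, r = divmod(n, 3)
    if r == 0: return 2 * k / (k + 2 * alpha) * Pi1(k)
    if r == 1: return (2 * k + 2 * alpha) / (k + 2 * alpha) * Pi1(k)
    return 2 * Pi1(k)

def f(n):                           # (f for zalpha): f(3k-1) = f(3k) = f(3k+1)
    k = (n + 1) // 3
    return 2 * alpha / (k + 2 * alpha) * Pi1(k)

def v(n):                           # (v3k)
    k, r = divmod(n + 1, 3)
    if r == 0: return (k - alpha) / (k + alpha) * Pi2(k)
    if r == 1: return k / (k + alpha) * Pi2(k)
    return Pi2(k)

def g(n):                           # (g for zalpha): g(3k-2) = g(3k-1) = g(3k)
    k = (n + 2) // 3
    return alpha / (k + alpha) * Pi2(k)

def w(n):                           # (w3k)
    k, r = divmod(n + 1, 3)
    if r == 0: return (k - 1 + 2 * alpha) / (1 - 2 * alpha) * Pi3(k)
    if r == 1: return k / (1 - 2 * alpha) * Pi3(k)
    return (k + 1 - 2 * alpha) / (1 - 2 * alpha) * Pi3(k)

def h(n):                           # (h3k-1 for zalpha)
    k, r = divmod(n + 1, 3)
    return Pi3(k) if r <= 1 else (k + 1 - 2 * alpha) / (k + alpha) * Pi3(k)

for n in range(30):
    assert u(n + 1) - u(n) == f(n)               # f = increment of u along the k-axis (assumed)
    assert v(n + 1) - v(n) == g(n)               # g = increment of v along the k-axis (assumed)
    assert h(n) * f(n) * g(n) == 1               # h = (f g)^{-1}, as in the Corollary's proof
    assert w(n) == sum(h(j) for j in range(n))   # w(k) = w(k-1) + h(k-1), w(0) = 0
print("closed forms consistent for alpha =", alpha)
```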
Discrete hexagonal $z^{\alpha}$ and $\log z$ {#Sect hexagonal z^a}
============================================
Although the construction of the previous section always delivers hexagonal circle patterns with $MR=-1$, these do not always behave regularly. As a rule, they are not embedded (i.e. some elementary triangles overlap), and not even immersed (i.e. some [*neighboring*]{} triangles overlap), cf. Fig. \[fig:nonReg\]. However, there exists a choice of the initial values (i.e. of $\theta$ in Theorem \[circular\]) which assures that this is not the case.
![A non–immersed pattern with $\theta\neq 2\pi\alpha$.[]{data-label="fig:nonReg"}](NonReg.eps)
Let $0<\alpha=\beta<\frac{1}{2}$, so that $0<\gamma=1-2\alpha<1$. Set $\theta=2\pi\alpha$. Then the hexagonal circle patterns of Theorem \[circular\] are called:
- the hexagonal $z^{3\alpha}$ with an intersection point at the origin;
- the hexagonal $z^{3\gamma}$ with a circle at the origin.
In other words, for the hexagonal $z^{3\alpha}$ the opening angle of the image of the sector (\[sector\]) is equal to $2\pi\alpha$, exactly as for the analytic function $z\mapsto
z^{3\alpha}$.
\[Conj z\^a\] For $0<\alpha<\frac{1}{2}$ the hexagonal circle patterns $z^{3\alpha}$ with an intersection point at the origin and $z^{3\gamma}$ with a circle at the origin are embedded.
For the proof of a similar statement for $z^{\alpha}$ circle patterns with the combinatorics of the square grid see [@AB], where it is proven that they are immersed.
[**Remark.**]{} Actually, the $u$ and $v$ versions of the hexagonal $z^{3\alpha}$ with an intersection point at the origin are not essentially different. Indeed, it is not difficult to see that the half–sector of the $u$ pattern, corresponding to $0\le {\rm
arg}({{\frak z}})\le \pi/3$, being rotated by $\pi\alpha$, coincides with the half–sector of the $v$ pattern, corresponding to $\pi/3\le
{\rm arg}({{\frak z}})\le 2\pi/3$, and vice versa. For the $w$ pattern, both sectors are identical (up to the rotation by $\pi\gamma$). So, for every $0<\alpha<\frac{1}{2}$ we have [*two*]{} essentially different hexagonal patterns $z^{3\alpha}$.
It is important to notice the peculiarity of the case when $\alpha=n/N$ with $n,N\in{\Bbb N}$. Then one can attach to the $u,v$–images of the sector $S$ its $N$ copies, rotated each time by the angle $2\pi\alpha=2\pi n/N$. The resulting object will satisfy the conditions for the hexagonal circle pattern everywhere except the origin ${{\frak z}}=0$, which will be an intersection point of $M=nN$ circles. Similarly, if $\gamma/2=n'/N'$, and we attach to the $w$–image of the sector $S$ its $N'$ copies, rotated each time by the angle $2\pi\gamma=4\pi n'/N'$, then the origin ${{\frak z}}=0$ will be the center of a circle intersecting with $M'=n'N'$ neighboring circles. See Fig. \[fig:gamma15and25\] for the examples of the $w$–pattern with $\gamma=1/5$ and the $u$–pattern with $\alpha=1/5$.
![The hexagonal patterns $z^{3/5}$ with a circle at the origin and with an intersection point at the origin.[]{data-label="fig:gamma15and25"}](Gamma1o5w.eps "fig:") ![The hexagonal patterns $z^{3/5}$ with a circle at the origin and with an intersection point at the origin.[]{data-label="fig:gamma15and25"}](Alpha1o5u.eps "fig:")
Now we turn our attention to the limiting cases $\alpha=1/2$ and $\alpha=0$.
Case $\alpha=\frac{1}{2}$, $\gamma=0$: hexagonal $z^{3/2}$ and $\log z$
------------------------------------------------------------------------
It is easy to see that the quantities $g(k)$, $k\ge
1$, and $v(k)$, $k\ge 2$, become singular as $\alpha\to\frac{1}{2}$ (see (\[g for zalpha\]) and (\[v3k\])). As a compensation, the quantities $h(k)$, $k\ge 1$, vanish with $\alpha\to\frac{1}{2}$, so that $w(k)\to w(1)=1$ for all $k\ge 2$. Similar effects hold for the $\ell$–axis, where $v(\ell\omega)$, $\ell\ge 2$, become singular, and $w(\ell\omega)\to 1$ for all $\ell\ge 1$. (Recall that for the $w$ pattern we have: $w(\omega)=e^{2\pi i\gamma}\to 1$). These observations suggest the following rescaling: $$\label{rescaling zalpha1}
u=\overset{\circ}{u},\qquad
v=\overset{\circ}{v}/(1-2\alpha),\qquad
w=1+(1-2\alpha)\overset{\circ}{w}.$$ In order to be able to go to the limit $\alpha\to\frac{1}{2}$, we have to calculate the values of our fields in several lattice points next to ${{\frak z}}=0$. Applying formulas (\[induct aux0\]), (\[induct aux1\]), we find: $$\begin{aligned}
u(0)=0, \quad u(1)=1, & & u(\omega)=e^{2\pi i\alpha}, \qquad
u(1+\omega)=1+e^{2\pi i\alpha}, \label{u init}\\
v(0)=0,\quad v(1)=1, & & v(\omega)=e^{2\pi i\alpha}, \qquad
v(1+\omega)=\frac{e^{2\pi i\alpha}}{1+e^{2\pi i\alpha}}, \label{v init}\\
w(0)=0,\quad w(1)=1, & & w(\omega)=e^{2\pi i(1-2\alpha)}, \quad
w(1+\omega)=e^{\pi i(1-2\alpha)}. \label{w init}\end{aligned}$$ For the rescaled variables $\overset{\circ}{u}$, $\overset{\circ}{v}$, $\overset{\circ}{w}$ in the limit $\alpha\to\frac{1}{2}$ we find: $$\begin{aligned}
\overset{\circ}{u}(0)=0, \quad \overset{\circ}{u}(1)=1, & &
\overset{\circ}{u}(\omega)=-1, \quad
\overset{\circ}{u}(1+\omega)=0,
\label{zalpha1 u init}\\
\overset{\circ}{v}(0)=0,\quad \overset{\circ}{v}(1)=0, & &
\overset{\circ}{v}(\omega)=0, \qquad
\overset{\circ}{v}(1+\omega)=\frac{i}{\pi},
\label{zalpha1 v init}\\
\overset{\circ}{w}(0)=\infty,\quad \overset{\circ}{w}(1)=0, & &
\overset{\circ}{w}(\omega)=2\pi i, \quad
\overset{\circ}{w}(1+\omega)=\pi i. \label{zalpha1 w init}\end{aligned}$$ These initial values have to be supplemented by the values in all further points of the $k$– and $\ell$–axes. From the formulas of Theorem \[Th circular zalpha\] there follows: $$\label{zalpha1 u}
\overset{\circ}{u}(3k)=\frac{2^k k!}{(2k-1)!!}\cdot (2k), \quad
\overset{\circ}{u}(3k+1)=\frac{2^k k!}{(2k-1)!!}\cdot (2k+1),
\quad \overset{\circ}{u}(3k+2)=\frac{2^k k!}{(2k-1)!!}\cdot
(2k+2),$$ $$\label{f for zalpha1}
\overset{\circ}{f}(3k-1)=\overset{\circ}{f}(3k)=\overset{\circ}{f}(3k+1)=
\frac{2^k k!}{(2k-1)!!},$$ and $$\label{zalpha1 v}
\overset{\circ}{v}(3k-1)=\frac{(2k-1)!!}{2^k (k-1)!}\cdot (2k-1),
\quad \overset{\circ}{v}(3k)=\frac{(2k-1)!!}{2^k (k-1)!}\cdot
(2k), \quad \overset{\circ}{v}(3k+1)=\frac{(2k-1)!!}{2^k
(k-1)!}\cdot (2k+1),$$ $$\label{g for zalpha1}
\overset{\circ}{g}(3k-2)=\overset{\circ}{g}(3k-1)=\overset{\circ}{g}(3k)=
\frac{(2k-1)!!}{2^k (k-1)!},$$ which have to be augmented by $\overset{\circ}{u}(k\omega)=
-\overset{\circ}{u}(k)$, $\overset{\circ}{v}(k\omega)=-\overset{\circ}{v}(k)$. From Corollary \[Cor circular zalpha\] there follow the formulas for the edges of the $\overset{\circ}{w}$ lattice: $$\begin{aligned}
\overset{\circ}{h}(3k-1)=\overset{\circ}{h}(3k) & = &
\overset{\circ}{h}((3k-1)\omega)=\overset{\circ}{h}(3k\omega)
\;\;=\;\; \frac{1}{k},\qquad k\ge 1,
\label{h3k-1 for zalpha1}\\
\overset{\circ}{h}(3k+1) & = & \overset{\circ}{h}((3k+1)\omega)\;\;=\;\;
\frac{1}{k+1/2}, \qquad k\ge 0.
\label{h3k+1 for zalpha1}\end{aligned}$$
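A quick numerical sanity check (not part of the original derivation): evaluating the closed forms of Theorem \[Th circular zalpha\] and Corollary \[Cor circular zalpha\] under the rescaling (\[rescaling zalpha1\]) at $\alpha$ close to $\frac12$ reproduces the limit values (\[zalpha1 u\]), (\[zalpha1 v\]), (\[h3k-1 for zalpha1\]), (\[h3k+1 for zalpha1\]). Here we read off $\overset{\circ}{h}(k)=h(k)/(1-2\alpha)$ from the rescaling of $w$; this interpretation is ours.

```python
from math import factorial, isclose

def double_factorial(n):            # (2k-1)!!, with (-1)!! = 1
    r = 1
    while n > 1:
        r *= n
        n -= 2
    return r

def Pi1(k, a):
    p = 1.0
    for j in range(1, k + 1):
        p *= (j + 2 * a) / (j - a)
    return p

def Pi2(k, a):
    p = 1.0
    for j in range(1, k + 1):
        p *= (j + a) / (j - 2 * a)
    return p

def Pi3(k, a):
    p = 1.0
    for j in range(1, k + 1):
        p *= (j - a) * (j - 2 * a)
    for j in range(k):
        p /= (j + a) * (j + 2 * a)
    return p

a = 0.5 - 1e-5                      # alpha close to 1/2
for k in range(1, 8):
    u3k  = 2 * k / (k + 2 * a) * Pi1(k, a)                      # u(3k), unchanged by the rescaling
    v3k  = (1 - 2 * a) * k / (k + a) * Pi2(k, a)                 # (1 - 2a) v(3k)
    h3k  = Pi3(k, a) / (1 - 2 * a)                               # h(3k) / (1 - 2a)
    h3k1 = (k + 1 - 2 * a) / (k + a) * Pi3(k, a) / (1 - 2 * a)   # h(3k+1) / (1 - 2a)
    C = 2 ** k * factorial(k) / double_factorial(2 * k - 1)      # 2^k k!/(2k-1)!!
    D = double_factorial(2 * k - 1) / (2 ** k * factorial(k - 1))
    assert isclose(u3k,  C * 2 * k,     rel_tol=1e-3)            # (zalpha1 u)
    assert isclose(v3k,  D * 2 * k,     rel_tol=1e-3)            # (zalpha1 v)
    assert isclose(h3k,  1 / k,         rel_tol=1e-3)            # (h3k-1 for zalpha1)
    assert isclose(h3k1, 1 / (k + 0.5), rel_tol=1e-3)            # (h3k+1 for zalpha1)
print("rescaled closed forms match the alpha -> 1/2 limits")
```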
The hexagonal circle patterns corresponding to the solutions of the $fgh$–system in the sector (\[sector\]) defined by the boundary values (\[zalpha1 u init\])–(\[h3k+1 for zalpha1\]) are called:
- the hexagonal $z^{3/2}$ with an intersection point at the origin;
- the symmetric hexagonal $\log z$.
Alternatively, one could define the lattices $\overset{\circ}{u}$, $\overset{\circ}{v}$, $\overset{\circ}{w}$ as the solutions of the $fgh$–system with the initial values (\[zalpha1 u init\])–(\[zalpha1 w init\]), satisfying the constraint (\[constr 1\]), (\[constr 2\]) with $\alpha=\beta=1/2$. In this approach the values (\[zalpha1 u\])–(\[h3k+1 for zalpha1\]) would be derived from the constraint. Notice also that the formulas (\[constr 3\]), (\[constr 3 alt\]) in this case turn into $$\begin{aligned}
1 & = & k\frac{1}{\overset{\circ}{f}_0\overset{\circ}{g}_0+
\overset{\circ}{g}_0\overset{\circ}{f}_3+
\overset{\circ}{f}_3\overset{\circ}{g}_3}+
\ell\frac{1}{\overset{\circ}{f}_2\overset{\circ}{g}_2+
\overset{\circ}{g}_2\overset{\circ}{f}_5+
\overset{\circ}{f}_5\overset{\circ}{g}_5}+
m\frac{1}{\overset{\circ}{f}_4\overset{\circ}{g}_4+
\overset{\circ}{g}_4\overset{\circ}{f}_1+
\overset{\circ}{f}_1\overset{\circ}{g}_1}\\
& = & k\frac{\overset{\circ}{h}_0\overset{\circ}{f}_0\overset{\circ}{h}_3}
{\overset{\circ}{h}_0\overset{\circ}{f}_0+
\overset{\circ}{f}_0\overset{\circ}{h}_3+
\overset{\circ}{h}_3\overset{\circ}{f}_3}+
\ell\frac{\overset{\circ}{h}_2\overset{\circ}{f}_2\overset{\circ}{h}_5}
{\overset{\circ}{h}_2\overset{\circ}{f}_2+
\overset{\circ}{f}_2\overset{\circ}{h}_5+
\overset{\circ}{h}_5\overset{\circ}{f}_5}+
m\frac{\overset{\circ}{h}_4\overset{\circ}{f}_4\overset{\circ}{h}_1}
{\overset{\circ}{h}_4\overset{\circ}{f}_4+
\overset{\circ}{f}_4\overset{\circ}{h}_1+
\overset{\circ}{h}_1\overset{\circ}{f}_1}.\end{aligned}$$
![The patterns $z^{3/2}$ with an intersection point at the origin, and the symmetric hexagonal $\log z$; the second pattern coincides with the first one upon the rotation by $\pi/2$[]{data-label="fig:gamma0"}](Gamma0u.eps "fig:") ![The patterns $z^{3/2}$ with an intersection point at the origin, and the symmetric hexagonal $\log z$; the second pattern coincides with the first one upon the rotation by $\pi/2$[]{data-label="fig:gamma0"}](Gamma0v.eps "fig:") ![The patterns $z^{3/2}$ with an intersection point at the origin, and the symmetric hexagonal $\log z$; the second pattern coincides with the first one upon the rotation by $\pi/2$[]{data-label="fig:gamma0"}](Gamma0w.eps "fig:")
Case $\alpha=0$, $\gamma=1$: hexagonal $\log z$ and $z^3$
----------------------------------------------------------
Considerations similar to those of the previous subsection show that, as $\alpha\to 0$, the quantities $h(k)$, $k\ge 1$, and $w(k)$, $k\ge 2$, become singular (see (\[h3k-1 for zalpha\]) and (\[w3k\])). As a compensation, the quantities $f(k)$, $k\ge
2$, and $g(k)$, $k\ge 1$, vanish with $\alpha\to 0$, so that $u(k)\to u(2)=2$ for all $k\ge 3$, and $v(k)\to v(1)=1$ for all $k\ge 2$. Similar effects hold for the $\ell$–axis. These observations suggest the following rescaling: $$\label{rescaling zalpha2}
u=2+2\alpha\overset{\circ}{u},\qquad
v=1+\alpha\overset{\circ}{v},\qquad
w=\overset{\circ}{w}/(2\alpha^2).$$ It turns out that in this case we need to calculate the values of these functions in a larger number of lattice points in the vicinity of ${{\frak z}}=0$. To this end, we add to (\[u init\])–(\[w init\]) the following values, which are obtained by a direct calculation: $$\begin{aligned}
u(2)=2, \qquad u(2\omega)=2e^{2\pi i\alpha}, & &
u(2+\omega)=\frac{1+e^{2\pi i\alpha}}{1+\alpha(e^{2\pi
i\alpha}-1)},
\label{u init begin}\\
u(1+2\omega)=\frac{1+e^{2\pi i\alpha}}{1+\alpha(e^{-2\pi
i\alpha}-1)}, & &
u(2+2\omega)=\frac{1-\alpha}{1-2\alpha}\,(1+e^{2\pi i\alpha}),\end{aligned}$$ $$\begin{aligned}
v(2)=\frac{1-\alpha}{1-2\alpha},\qquad
v(2\omega)=\frac{1-\alpha}{1-2\alpha}\,e^{2\pi i\alpha}, & &
v(2+\omega)=\frac{1}{1+\alpha(e^{-2\pi i\alpha}-1)}, \\
v(1+2\omega)=\frac{e^{2\pi i\alpha}}{1+\alpha(e^{2\pi
i\alpha}-1)}, & & v(2+2\omega)=\frac{2e^{2\pi i\alpha}}{1+e^{2\pi
i\alpha}}\end{aligned}$$ $$\begin{aligned}
w(2)=\frac{1-\alpha}{\alpha}, \qquad
w(2\omega)=\frac{1-\alpha}{\alpha}\,
e^{-2\pi i\alpha}, & & w(2+\omega)=-\frac{1}{\alpha(e^{2\pi i\alpha}-1)}, \\
w(1+2\omega)=\frac{e^{-2\pi i\alpha}}{\alpha(e^{2\pi i\alpha}-1)},
& & w(2+2\omega)=-\frac{1-\alpha}{\alpha}\,e^{-2\pi i\alpha} .
\label{w init end}\end{aligned}$$ From (\[u init\])–(\[w init\]) and (\[u init begin\])–(\[w init end\]) we obtain in the limit $\alpha\to 0$ under the rescaling (\[rescaling zalpha2\]) the following initial values: $$\begin{aligned}
\overset{\circ}{u}(0)=\infty, \quad \overset{\circ}{u}(1)=\infty, \quad
\overset{\circ}{u}(\omega)=\infty, \quad \overset{\circ}{u}(2)=0,
\quad \overset{\circ}{u}(2\omega)=2\pi i,
\label{zalpha2 u init1}\\
\overset{\circ}{u}(1+\omega)=\pi i, \quad
\overset{\circ}{u}(2+\omega)=\pi i,\quad \overset{\circ}{u}(1+2\omega)=\pi i,
\quad \overset{\circ}{u}(2+2\omega)=1+\pi i,
\label{zalpha2 u init2}\end{aligned}$$ $$\begin{aligned}
\overset{\circ}{v}(0)=\infty, \quad \overset{\circ}{v}(1)=0, \quad
\overset{\circ}{v}(\omega)=2\pi i, \quad \overset{\circ}{v}(2)=1,
\quad \overset{\circ}{v}(2\omega)=1+2\pi i,
\label{zalpha2 v init1}\\
\overset{\circ}{v}(1+\omega)=\infty, \quad
\overset{\circ}{v}(2+\omega)=0, \quad \overset{\circ}{v}(1+2\omega)=2\pi i,
\quad \overset{\circ}{v}(2+2\omega)=\pi i,
\label{zalpha2 v init2}\end{aligned}$$ $$\begin{aligned}
\overset{\circ}{w}(0)=0, \quad \overset{\circ}{w}(1)=0, \quad
\overset{\circ}{w}(\omega)=0, \quad \overset{\circ}{w}(2)=0,
\quad \overset{\circ}{w}(2\omega)=0,
\label{zalpha2 w init1}\\
\overset{\circ}{w}(1+\omega)=0, \quad
\overset{\circ}{w}(2+\omega)=\frac{i}{\pi},
\quad \overset{\circ}{w}(1+2\omega)=-\frac{i}{\pi},
\quad \overset{\circ}{w}(2+2\omega)=0.
\label{zalpha2 w init2}\end{aligned}$$ These initial values have to be supplemented by the values in all further points of the $k$– and $\ell$–axes. From the formulas of Theorem \[Th circular zalpha\] there follow the expressions for the edges of the lattices $\overset{\circ}{u}$, $\overset{\circ}{v}$: $$\begin{aligned}
\overset{\circ}{f}(3k-1)=\overset{\circ}{f}(3k)=\overset{\circ}{f}(3k+1)
\;\;=\;\; \overset{\circ}{f}((3k-1)\omega)=\overset{\circ}{f}(3k\omega)
=\overset{\circ}{f}((3k+1)\omega) & = &
\frac{1}{k}, \qquad k\ge 1,\nonumber\\\label{f for zalpha2}\\
\overset{\circ}{g}(3k-2)=\overset{\circ}{g}(3k-1)=\overset{\circ}{g}(3k)
\;\;=\;\; \overset{\circ}{g}((3k-2)\omega)=\overset{\circ}{g}((3k-1)\omega)=
\overset{\circ}{g}(3k\omega) & = &
\frac{1}{k}, \qquad k\ge 1.\nonumber\\ \label{g for zalpha2}\end{aligned}$$ The formulas of Corollary \[Cor circular zalpha\] yield the results for the lattice $\overset{\circ}{w}$: $$\label{zalpha2 w}
\overset{\circ}{w}(3k)=k^3, \quad
\overset{\circ}{w}(3k+1)=k^2(k+1), \quad
\overset{\circ}{w}(3k+2)=k(k+1)^2, \quad k\ge 1,$$ so that $$\label{h for zalpha2}
\overset{\circ}{h}(3k-1)=\overset{\circ}{h}(3k)=k^2,\quad
\overset{\circ}{h}(3k+1)=k(k+1), \quad k\ge 1.$$ Of course, one has also $\overset{\circ}{w}(k\omega)=\overset{\circ}{w}(k)$.
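The same kind of numerical check as before (again not part of the original text) confirms that the formulas of Corollary \[Cor circular zalpha\], rescaled according to (\[rescaling zalpha2\]), approach (\[zalpha2 w\]), (\[h for zalpha2\]) as $\alpha\to 0$:

```python
from math import isclose

def scaled_Pi3(k, a):               # 2*alpha^2 * Pi_3(k), cf. Corollary [Cor circular zalpha]
    p = 1.0
    for j in range(1, k + 1):
        p *= (j - a) * (j - 2 * a)
    for j in range(1, k):
        p /= (j + a) * (j + 2 * a)
    return p

a = 1e-5                            # alpha close to 0
for k in range(1, 8):
    w3k  = k / (1 - 2 * a) * scaled_Pi3(k, a)                    # 2 a^2 w(3k)
    w3k1 = (k + 1 - 2 * a) / (1 - 2 * a) * scaled_Pi3(k, a)      # 2 a^2 w(3k+1)
    w3k2 = (k + 2 * a) / (1 - 2 * a) * scaled_Pi3(k + 1, a)      # 2 a^2 w(3k+2) = 2 a^2 w(3(k+1)-1)
    h3k  = scaled_Pi3(k, a)                                      # 2 a^2 h(3k)
    h3k1 = (k + 1 - 2 * a) / (k + a) * scaled_Pi3(k, a)          # 2 a^2 h(3k+1)
    assert isclose(w3k,  k ** 3,           rel_tol=1e-3)         # (zalpha2 w)
    assert isclose(w3k1, k ** 2 * (k + 1), rel_tol=1e-3)
    assert isclose(w3k2, k * (k + 1) ** 2, rel_tol=1e-3)
    assert isclose(h3k,  k ** 2,           rel_tol=1e-3)         # (h for zalpha2)
    assert isclose(h3k1, k * (k + 1),      rel_tol=1e-3)
print("rescaled w and h match the alpha -> 0 formulas")
```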
The hexagonal circle patterns corresponding to the solutions of the $fgh$–system in the sector (\[sector\]) defined by the boundary values (\[zalpha2 u init1\])–(\[h for zalpha2\]) are called:
- the asymmetric hexagonal $\log z$;
- the hexagonal $z^3$ with a (degenerate) circle at the origin.
This means that the $u$–image of the half-sector $0\le {\rm arg}({{\frak z}})\le \pi/3$ is not symmetric with respect to the line $\Im(u)=\pi/2$ (the image of ${\rm arg}({{\frak z}})=\pi/6$), and the same holds for $v$. Instead, this symmetry interchanges the $u$ pattern and the $v$ pattern, see Fig. \[fig:gamma1\].
Alternatively, one can define these lattices as the solutions of the $fgh$–system with the initial values (\[zalpha2 u init1\])–(\[zalpha2 w init2\]), satisfying the constraint (\[constr 1\]), (\[constr 2\]), which in the present situation degenerates into $$\begin{aligned}
1 & = &
k\frac{\overset{\circ}{f}_0\overset{\circ}{g}_0\overset{\circ}{f}_3}
{\overset{\circ}{f}_0\overset{\circ}{g}_0+
\overset{\circ}{g}_0\overset{\circ}{f}_3+
\overset{\circ}{f}_3\overset{\circ}{g}_3}+
\ell\frac{\overset{\circ}{f}_2\overset{\circ}{g}_2\overset{\circ}{f}_5}
{\overset{\circ}{f}_2\overset{\circ}{g}_2+
\overset{\circ}{g}_2\overset{\circ}{f}_5+
\overset{\circ}{f}_5\overset{\circ}{g}_5}+
m\frac{\overset{\circ}{f}_4\overset{\circ}{g}_4\overset{\circ}{f}_1}
{\overset{\circ}{f}_4\overset{\circ}{g}_4+
\overset{\circ}{g}_4\overset{\circ}{f}_1+
\overset{\circ}{f}_1\overset{\circ}{g}_1},\\
1 & = &
k\frac{\overset{\circ}{g}_0\overset{\circ}{f}_3\overset{\circ}{g}_3}
{\overset{\circ}{f}_0\overset{\circ}{g}_0+
\overset{\circ}{g}_0\overset{\circ}{f}_3+
\overset{\circ}{f}_3\overset{\circ}{g}_3}+
\ell\frac{\overset{\circ}{g}_2\overset{\circ}{f}_5\overset{\circ}{g}_5}
{\overset{\circ}{f}_2\overset{\circ}{g}_2+
\overset{\circ}{g}_2\overset{\circ}{f}_5+
\overset{\circ}{f}_5\overset{\circ}{g}_5}+
m\frac{\overset{\circ}{g}_4\overset{\circ}{f}_1\overset{\circ}{g}_1}
{\overset{\circ}{f}_4\overset{\circ}{g}_4+
\overset{\circ}{g}_4\overset{\circ}{f}_1+
\overset{\circ}{f}_1\overset{\circ}{g}_1}.\end{aligned}$$ Just as in the non–degenerate case, these formulas allow one to calculate inductively the values of $\overset{\circ}{u}$, $\overset{\circ}{v}$ on the $k$– and $\ell$–axes. The formulas (\[constr 3\]), (\[constr 3 alt\]) hold literally with $\gamma=1$.
![The asymmetric patterns $\log z$ and the hexagonal pattern $z^3$ with a circle at the origin; the upper half of the first pattern coincides with the lower half of the second one, and vice versa[]{data-label="fig:gamma1"}](Gamma1u.eps "fig:"){width="0.45\hsize"} ![The asymmetric patterns $\log z$ and the hexagonal pattern $z^3$ with a circle at the origin; the upper half of the first pattern coincides with the lower half of the second one, and vice versa[]{data-label="fig:gamma1"}](Gamma1v.eps "fig:"){width="0.45\hsize"} ![The asymmetric patterns $\log z$ and the hexagonal pattern $z^3$ with a circle at the origin; the upper half of the first pattern coincides with the lower half of the second one, and vice versa[]{data-label="fig:gamma1"}](Gamma1w.eps "fig:")
Conclusions
===========
In this paper we introduced the notion of hexagonal circle patterns, and studied in some detail a subclass consisting of circle patterns with the property that six intersection points on each circle have the multi-ratio $-1$. We established the connection of this subclass with integrable systems on the regular triangular lattice, and used this connection to describe some Bäcklund–like transformations of hexagonal circle patterns (the transformation $u\mapsto v\mapsto w$, see Theorems \[z to w\], \[from circ to circ\]), and to find discrete analogs of the functions $z^{\alpha}$, $\log z$. Of course, this is only the beginning of the story of hexagonal circle patterns. In a subsequent publication we shall demonstrate that there exists another subclass related to integrable systems, namely the patterns with fixed intersection angles. The intersection of the two subclasses consists of conformally symmetric patterns, including analogs of Doyle’s spirals (cf. [@BH]).
A very interesting question is what part of the theory of integrable circle patterns can be applied to hexagonal circle packings. This will also be a subject of our investigation.
This research was financially supported by DFG (Sonderforschungsbereich 288 “Differential Geometry and Quantum Physics”).
Appendix: Square lattice version of the $fgh$–system {#square grid version}
====================================================
Dropping all edges of $E({{\cal T}}{{\cal L}})$ parallel to the $m$–axis, we end up with a cell complex isomorphic to the regular square lattice: its vertices ${{\frak z}}=k+\ell\omega$ may be identified with $(k,\ell)\in{\Bbb Z}^2$, its edges are then identified with those pairs $[(k_1,\ell_1),(k_2,\ell_2)]$ for which $|k_1-k_2|+|\ell_1-\ell_2|=1$, and its 2-cells (parallelograms) are identified with the elementary squares of the square lattice. Hence, flat connections on ${{\cal T}}{{\cal L}}$ form a subclass of flat connections on the square lattice. A natural question is whether this inclusion is strict, i.e. whether there exist flat connections on the square lattice which cannot be extended to flat connections on ${{\cal T}}{{\cal L}}$. At least for the $fgh$–system, the answer is negative: denote by ${{\cal M}}\subset{\rm SL}(3,{\Bbb C})[\lambda]$ the set of matrices (\[L\]); then flat connections on the regular square grid with values in ${{\cal M}}$ are essentially in a one-to-one correspondence with flat connections on ${{\cal T}}{{\cal L}}$ with values in ${{\cal M}}$, i.e. with solutions of the $fgh$–system. This is a consequence of the following statement dealing with an elementary square of the regular square lattice: a flat connection on such an elementary square with values in ${{\cal M}}$ can be extended by an element of ${{\cal M}}$ sitting on its diagonal without violating the flatness property. More precisely:
\[lemma 6to4\] Let $$L_1L_2=L_3L_4, \quad \text{where} \quad L_i\in{{\cal M}}\;\;(i=1,2,3,4),$$ and let the off–diagonal parts of $L_1$, $L_2$ be componentwise distinct from the off–diagonal parts of $L_3$, $L_4$, respectively. Then there exists $L_0\in{{\cal M}}$ such that $$L_0L_1L_2=L_0L_3L_4=I.$$
[Figure: an elementary square of the square lattice with $L_1$, $L_2$, $L_3$, $L_4$ on its edges and $L_0$ on the diagonal.]
[**Proof.**]{} We have to prove that $(L_1L_2)^{-1}=(L_3L_4)^{-1}\in{{\cal M}}$. It is easy to see that it is necessary and sufficient to prove that the entries 13, 21, 32 of this matrix vanish, i.e. that there holds $$\label{lemma square to prove}
f_1g_1+f_2g_1+f_2g_2=f_3g_3+f_4g_3+f_4g_4=0,$$ as well as two similar equations resulting by two successive permutations $(f,g,h)\mapsto (g,h,f)$. We are given the relations $f_ig_ih_i=1$ and $$\label{lemma square have 1}
f_1+f_2=f_3+f_4,\qquad g_1+g_2=g_3+g_4,\qquad h_1+h_2=h_3+h_4,$$ $$\label{lemma square have 2}
f_1g_2=f_3g_4,\qquad g_1h_2=g_3h_4,\qquad h_1f_2=h_3f_4.$$ In order to prove (\[lemma square to prove\]), we start with the third equation in (\[lemma square have 1\]): $$\label{lemma square aux1}
h_1\left(1-\frac{h_3}{h_1}\right)=h_2\left(\frac{h_4}{h_2}-1\right).$$ Using $f_ig_ih_i=1$ and (\[lemma square have 2\]), we find: $$\label{lemma square aux2}
\frac{h_3}{h_1}=\frac{f_1g_1}{f_3g_3}=\frac{g_4g_1}{g_2g_3},\qquad
\frac{h_4}{h_2}=\frac{g_1}{g_3}.$$ Plugging this into (\[lemma square aux1\]), we get: $$\label{lemma square aux3}
\frac{g_2g_3-g_1g_4}{f_1g_1g_2g_3}=\frac{g_1-g_3}{f_2g_2g_3}.$$ Now, due to the second equation in (\[lemma square have 1\]), we find: $$\label{lemma square aux4}
g_2g_3-g_1g_4=g_2(g_3-g_1)+g_1(g_2-g_4)=(g_1+g_2)(g_3-g_1).$$ Substituting this into (\[lemma square aux3\]), we come to the equation: $$(g_3-g_1)\left(\frac{g_1+g_2}{f_1g_1}+\frac{1}{f_2}\right)=0.$$ Since, by condition, $g_1\neq g_3$, we obtain $f_2(g_1+g_2)+f_1g_1=0$, which is the equation (\[lemma square to prove\]).
------------------------------------------------------------------------
This result shows that the $fgh$–system could alternatively be studied in the more common framework of integrable systems on the square lattice. However, such an approach would hide rich and interesting geometric structures inherently connected with the triangular lattice. It should be said at this point that the one–field equation (\[hex eq z\]) was first found, under the name of the “Schwarzian lattice Boussinesq equation”, by Nijhoff in [@N], using a (different) Lax representation on the square lattice. The same holds for the one–field form of the constraint (\[constr 1 z\]).
Appendix: Proofs of statements of Sect. \[Sect isomonodromic\] {#Appendix}
==============================================================
[**Proof of Proposition \[constraint preliminary\].**]{} The arguments are similar for both equations (\[constr 1\]), (\[constr 2\]). For instance, for the first one we have to demonstrate that $$\begin{aligned}
\label{constr 1 well aux1}
\frac{f_0g_0f_3}{f_0g_0+g_0f_3+f_3g_3}+
\frac{f_2g_2f_5}{f_2g_2+g_2f_5+f_5g_5}+
\frac{f_4g_4f_1}{f_4g_4+g_4f_1+f_1g_1} & = & \nonumber\\
\frac{f_0f_3}{f_0+f_3+f_3g_3/g_0}+
\frac{f_2f_5}{f_2+f_5+f_5g_5/g_2}+
\frac{f_4f_1}{f_4+f_1+f_1g_1/g_4} & = & 0.\end{aligned}$$ To eliminate the fields $g$ from this equation, consider six elementary triangles surrounding the vertex ${{\frak z}}$. The equations (\[motion eq\]) imply: $$\begin{aligned}
\frac{g_1}{g_0}=-\frac{f_0+f_1}{f_1},\quad
\frac{g_2}{g_1}=-\frac{f_1}{f_1+f_2},\quad
\frac{g_3}{g_2}=-\frac{f_2+f_3}{f_3}, \\
\frac{g_5}{g_0}=-\frac{f_5+f_0}{f_5},\quad
\frac{g_4}{g_5}=-\frac{f_5}{f_4+f_5},\quad
\frac{g_3}{g_4}=-\frac{f_3+f_4}{f_3}.\end{aligned}$$ Therefore, $$\begin{aligned}
f_0+f_3+f_3\frac{g_3}{g_0} & = &
f_0+f_3-\frac{(f_0+f_1)(f_2+f_3)}{f_1+f_2}=
\frac{(f_0-f_2)(f_1-f_3)}{f_1+f_2}\label{constr 1 well aux2}\\
& = & f_0+f_3-\frac{(f_5+f_0)(f_3+f_4)}{f_4+f_5}=
\frac{(f_4-f_0)(f_3-f_5)}{f_4+f_5}.\label{constr 1 well aux3}\end{aligned}$$ By the way, this again yields the property $MR=-1$ of the lattice $u$, which can be written now as $$\label{constr 1 well H}
(f_0+f_1)(f_2+f_3)(f_4+f_5)=(f_1+f_2)(f_3+f_4)(f_5+f_0).$$ Using (\[constr 1 well aux2\]), an analogous expression along the $\ell$–axis, and an expression analogous to (\[constr 1 well aux3\]) along the $m$–axis, we rewrite (\[constr 1 well aux1\]) as $$\frac{f_0f_3(f_1+f_2)}{(f_0-f_2)(f_1-f_3)}+
\frac{f_2f_5(f_3+f_4)}{(f_2-f_4)(f_3-f_5)}+
\frac{f_4f_1(f_2+f_3)}{(f_2-f_4)(f_1-f_3)}=0.$$ Clearing denominators, we put it in the equivalent form $$\begin{aligned}
f_0f_3(f_1+f_2)(f_2-f_4)(f_3-f_5)+
f_2f_5(f_3+f_4)(f_0-f_2)(f_1-f_3) & & \\
+f_4f_1(f_2+f_3)(f_0-f_2)(f_3-f_5) & = & 0.\end{aligned}$$ But the polynomial on the left–hand side of the last formula is equal to $$f_2f_3\Big((f_1+f_2)(f_3+f_4)(f_5+f_0)-(f_0+f_1)(f_2+f_3)(f_4+f_5)\Big),$$ and hence vanishes by virtue of (\[constr 1 well H\]).
------------------------------------------------------------------------
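The polynomial identity used in the last step of the preceding proof is elementary but tedious to expand by hand; the following sympy sketch (a verification aid only, not part of the proof) confirms that the two polynomials coincide identically, so that the left–hand side indeed vanishes together with (\[constr 1 well H\]).

```python
import sympy as sp

f0, f1, f2, f3, f4, f5 = sp.symbols('f0:6')

lhs = (f0 * f3 * (f1 + f2) * (f2 - f4) * (f3 - f5)
       + f2 * f5 * (f3 + f4) * (f0 - f2) * (f1 - f3)
       + f4 * f1 * (f2 + f3) * (f0 - f2) * (f3 - f5))
rhs = f2 * f3 * ((f1 + f2) * (f3 + f4) * (f5 + f0)
                 - (f0 + f1) * (f2 + f3) * (f4 + f5))

assert sp.expand(lhs - rhs) == 0    # the two polynomials coincide identically
print("identity verified")
```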
[**Proof of Proposition \[third constraint\].**]{} Denote the right–hand sides of (\[constr 1\]), (\[constr 2\]), (\[constr 3\]) by $U({{\frak z}})$, $V({{\frak z}})$, $W({{\frak z}})$, respectively. In order to prove (\[constr 3\]), i.e. $\gamma
w=W({{\frak z}})$, it is necessary and sufficient to demonstrate that $$\gamma h_0=W(\widetilde{{{\frak z}}})-W({{\frak z}}),\quad \gamma
h_2=W(\widehat{{{\frak z}}})-W({{\frak z}}), \quad \gamma h_4=W(\bar{{{\frak z}}})-W({{\frak z}}),$$ (or, actually, any two of these three equations). We perform the proof for the first one only, since for the other two everything is similar. In dealing with our constraints we are free to choose any representative $(k,\ell,m)$ for ${{\frak z}}$. In order to keep things shorter, we always assume in this proof that $m=0$. Writing the formula $$\gamma=\frac{1}{h_0}\Big(W(\widetilde{{{\frak z}}})-W({{\frak z}})\Big)$$ in long hand, we have to prove that $$\begin{aligned}
\label{constr 3 to prove}
\gamma=1-\alpha-\beta & = &
(k+1)\frac{1/h_0}{\widetilde{f}_0\widetilde{g}_0+
\widetilde{g}_0\widetilde{f}_3+\widetilde{f}_3\widetilde{g}_3}-
k\frac{1/h_0}{f_0g_0+g_0f_3+f_3g_3}\nonumber\\
& & +\ell \frac{1/h_0}{\widetilde{f}_2\widetilde{g}_2+
\widetilde{g}_2\widetilde{f}_5+\widetilde{f}_5\widetilde{g}_5}-
\ell\frac{1/h_0}{f_2g_2+g_2f_5+f_5g_5}.\end{aligned}$$ Assuming that (\[constr 1\]) and (\[constr 2\]) hold, we have: $$\alpha+\beta=\frac{1}{f_0}\Big(U(\widetilde{{{\frak z}}})-U({{\frak z}})\Big)+
\frac{1}{g_0}\Big(V(\widetilde{{{\frak z}}})-V({{\frak z}})\Big).$$ Taking into account that $\widetilde{f}_3=f_0$, $\widetilde{g}_3=g_0$, we find: $$\begin{aligned}
\alpha+\beta & = & (k+1)\frac{\widetilde{f}_0\widetilde{g}_0+
\widetilde{g}_0\widetilde{f}_3}{\widetilde{f}_0\widetilde{g}_0+
\widetilde{g}_0\widetilde{f}_3+\widetilde{f}_3\widetilde{g}_3}-
k\frac{g_0f_3+f_3g_3}{f_0g_0+g_0f_3+f_3g_3}\nonumber\\
& & +\ell\left(
\frac{\widetilde{f}_2\widetilde{g}_2\widetilde{f}_5/f_0+
\widetilde{g}_2\widetilde{f}_5\widetilde{g}_5/g_0}
{\widetilde{f}_2\widetilde{g}_2+
\widetilde{g}_2\widetilde{f}_5+\widetilde{f}_5\widetilde{g}_5}-
\frac{f_2g_2f_5/f_0+g_2f_5g_5/g_0}{f_2g_2+g_2f_5+f_5g_5}\right),\end{aligned}$$ or, equivalently, $$\begin{aligned}
\gamma=1-\alpha-\beta & = &
(k+1)\frac{\widetilde{f}_3\widetilde{g}_3}
{\widetilde{f}_0\widetilde{g}_0+
\widetilde{g}_0\widetilde{f}_3+\widetilde{f}_3\widetilde{g}_3}-
k\frac{f_0g_0}{f_0g_0+g_0f_3+f_3g_3}\nonumber\\
& & -\ell\left(
\frac{\widetilde{f}_2\widetilde{g}_2\widetilde{f}_5/f_0+
\widetilde{g}_2\widetilde{f}_5\widetilde{g}_5/g_0}
{\widetilde{f}_2\widetilde{g}_2+
\widetilde{g}_2\widetilde{f}_5+\widetilde{f}_5\widetilde{g}_5}-
\frac{f_2g_2f_5/f_0+g_2f_5g_5/g_0}{f_2g_2+g_2f_5+f_5g_5}\right).\end{aligned}$$ The first two terms on the right–hand side already have the required form, since $\widetilde{f}_3\widetilde{g}_3=f_0g_0=1/h_0$. So, it remains to prove that $$\label{third constraint aux0}
-\frac{\widetilde{f}_2\widetilde{g}_2\widetilde{f}_5/f_0+
\widetilde{g}_2\widetilde{f}_5\widetilde{g}_5/g_0}
{\widetilde{f}_2\widetilde{g}_2+
\widetilde{g}_2\widetilde{f}_5+\widetilde{f}_5\widetilde{g}_5}+
\frac{f_2g_2f_5/f_0+g_2f_5g_5/g_0}{f_2g_2+g_2f_5+f_5g_5}=
\frac{1/h_0}{\widetilde{f}_2\widetilde{g}_2+
\widetilde{g}_2\widetilde{f}_5+\widetilde{f}_5\widetilde{g}_5}-
\frac{1/h_0}{f_2g_2+g_2f_5+f_5g_5}.$$ The most direct and unambiguous way to do this is to notice that everything here may be expressed with the help of the $fgh$–equations in terms of a single field $h$. After straightforward calculations one obtains: $$\begin{aligned}
\widetilde{f}_2\widetilde{g}_2\frac{\widetilde{f}_5}{f_0}+
\widetilde{f}_5\widetilde{g}_5\frac{\widetilde{g}_2}{g_0} & = &
-\frac{1}{\widetilde{h}_5}+\frac{h_0(h_0-\widetilde{h}_5)}{\widetilde{h}_2
\widetilde{h}_5h_5}, \label{third constraint aux1}\\
f_2g_2\frac{f_5}{f_0}+f_5g_5\frac{g_2}{g_0} & = &
-\frac{1}{h_2}+\frac{h_0(h_0-h_2)}{\widetilde{h}_2h_2h_5},
\label{third constraint aux2}\\
\widetilde{f}_2\widetilde{g}_2+\widetilde{g}_2\widetilde{f}_5+
\widetilde{f}_5\widetilde{g}_5 & = &
\frac{(h_0-\widetilde{h}_5)(\widetilde{h}_4-\widetilde{h}_2)}
{\widetilde{h}_2\widetilde{h}_5h_5}, \label{third constraint aux3}\\
f_2g_2+g_2f_5+f_5g_5 & = &
\frac{(h_0-h_2)(h_1-h_5)}{\widetilde{h}_2h_2h_5}. \label{third
constraint aux4}\end{aligned}$$ Taking into account that $\widetilde{h}_4-\widetilde{h}_2=h_1-h_5$, we see that (\[third constraint aux0\]) and Proposition \[third constraint\] are proved.
------------------------------------------------------------------------
[**Proof of Theorem \[monodromy\].**]{} In order for the isomonodromy property to hold, the following compatibility conditions of (\[wave evolution in mu\]) with (\[eq in mu\]) are necessary and sufficient: (\[zero curv in mu\]) and
$$\left\{\begin{array}{l}
\displaystyle\frac{d}{d\mu}{{\cal L}}({{\frak e}}_0,\mu)={{\cal A}}_{k+1,\ell,m}{{\cal L}}({{\frak e}}_0,\mu)-
{{\cal L}}({{\frak e}}_0,\mu){{\cal A}}_{k,\ell,m},\\
\displaystyle\frac{d}{d\mu}{{\cal L}}({{\frak e}}_2,\mu)={{\cal A}}_{k,\ell+1,m}{{\cal L}}({{\frak e}}_2,\mu)-
{{\cal L}}({{\frak e}}_2,\mu){{\cal A}}_{k,\ell,m},\\
\displaystyle\frac{d}{d\mu}{{\cal L}}({{\frak e}}_4,\mu)={{\cal A}}_{k,\ell,m+1}{{\cal L}}({{\frak e}}_4,\mu)-
{{\cal L}}({{\frak e}}_4,\mu){{\cal A}}_{k,\ell,m}.
\end{array}\right.$$
Substituting the ansatz (\[ans A\]) and calculating the residues at $\mu=-1$, $\mu=0$ and $\mu=\infty$, we see that the above system is equivalent to the following nine matrix equations: $$\begin{aligned}
C_{k+1,\ell,m}{{\cal L}}({{\frak e}}_0,-1) & = & {{\cal L}}({{\frak e}}_0,-1)C_{k,\ell,m}, \label{eq C1}\\
C_{k,\ell+1,m}{{\cal L}}({{\frak e}}_2,-1) & = & {{\cal L}}({{\frak e}}_2,-1)C_{k,\ell,m}, \label{eq C2}\\
C_{k,\ell,m+1}{{\cal L}}({{\frak e}}_4,-1) & = & {{\cal L}}({{\frak e}}_4,-1)C_{k,\ell,m},
\label{eq C3}\end{aligned}$$ $$\begin{aligned}
D(\widetilde{{{\frak z}}}){{\cal L}}({{\frak e}}_0,0) & = & {{\cal L}}({{\frak e}}_0,0)D({{\frak z}}), \label{eq D1}\\
D(\widehat{{{\frak z}}}){{\cal L}}({{\frak e}}_2,0) & = & {{\cal L}}({{\frak e}}_2,0)D({{\frak z}}), \label{eq D2}\\
D(\bar{{{\frak z}}}){{\cal L}}({{\frak e}}_4,0) & = & {{\cal L}}({{\frak e}}_4,0)D({{\frak z}}), \label{eq D3}\end{aligned}$$ $$\begin{aligned}
\Big(C_{k+1,\ell,m}+D(\widetilde{{{\frak z}}})\Big)Q-Q\Big(C_{k,\ell,m}+D({{\frak z}})\Big)
& =& Q, \label{eq CD1}\\
\Big(C_{k,\ell+1,m}+D(\widehat{{{\frak z}}})\Big)Q-Q\Big(C_{k,\ell,m}+D({{\frak z}})\Big)
& = & Q, \label{eq CD2}\\
\Big(C_{k,\ell,m+1}+D(\bar{{{\frak z}}})\Big)Q-Q\Big(C_{k,\ell,m}+D({{\frak z}})\Big)
& = & Q, \label{eq CD3}\end{aligned}$$ where
$$\label{matr Q}
Q=\left(\begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0
\end{array} \right).$$
We do not aim at solving these equations completely, but rather at finding a [*certain*]{} solution leading to the constraint (\[constr 1\]), (\[constr 2\]). The subsequent reasoning will be divided into several steps.
[**Step 1. Consistency of the ansatz for**]{} $C_{k,\ell,m}$. First of all, we have to convince ourselves that the ansatz (\[C\]), (\[P\]) does not violate the necessary condition (\[cover cond for A\]), i.e. that $$\label{projector wonder}
P_2+P_4+P_6=I.$$ Notice that the entries 12 and 23 of this matrix equation are nothing but the content of Proposition \[constraint preliminary\]. Upon the cyclic permutation of the fields $(f,g,h)\mapsto(g,h,f)$ this gives also the entry 31. To check the entry 21, we proceed as in the proof of Proposition \[constraint preliminary\]. We have to prove that $$\begin{aligned}
\frac{g_0}{f_0g_0+g_0f_3+f_3g_3}+
\frac{g_2}{f_2g_2+g_2f_5+f_5g_5}+
\frac{g_4}{f_4g_4+g_4f_1+f_1g_1} & = & \nonumber\\
\frac{1}{f_0+f_3+f_3g_3/g_0}+ \frac{1}{f_2+f_5+f_5g_5/g_2}+
\frac{1}{f_4+f_1+f_1g_1/g_4} & = & \nonumber\\
\frac{f_1+f_2}{(f_0-f_2)(f_1-f_3)}+
\frac{f_3+f_4}{(f_2-f_4)(f_3-f_5)}+
\frac{f_2+f_3}{(f_2-f_4)(f_1-f_3)} & = & 0.\end{aligned}$$ Clearing denominators, we put it in the equivalent form $$(f_1+f_2)(f_2-f_4)(f_3-f_5)+ (f_3+f_4)(f_0-f_2)(f_1-f_3)+
(f_2+f_3)(f_0-f_2)(f_3-f_5)=0.$$ But the polynomial on the left–hand side is equal to $$(f_1+f_2)(f_3+f_4)(f_5+f_0)-(f_0+f_1)(f_2+f_3)(f_4+f_5),$$ and vanishes due to (\[constr 1 well H\]). Via the cyclic permutation of fields this proves also the entries 32 and 13 of the matrix identity (\[projector wonder\]). Finally, turning to the diagonal entries, we consider, for the sake of definiteness, the entry 22. We have to prove that $$\begin{aligned}
\frac{f_3g_0}{f_0g_0+g_0f_3+f_3g_3}+
\frac{f_5g_2}{f_2g_2+g_2f_5+f_5g_5}+
\frac{f_1g_4}{f_4g_4+g_4f_1+f_1g_1} & = & \nonumber\\
\frac{f_3}{f_0+f_3+f_3g_3/g_0}+ \frac{f_5}{f_2+f_5+f_5g_5/g_2}+
\frac{f_1}{f_4+f_1+f_1g_1/g_4} & = & \nonumber\\
\frac{f_3(f_1+f_2)}{(f_0-f_2)(f_1-f_3)}+
\frac{f_5(f_3+f_4)}{(f_2-f_4)(f_3-f_5)}+
\frac{f_1(f_2+f_3)}{(f_2-f_4)(f_1-f_3)} & = & 1,\end{aligned}$$ or $$\begin{aligned}
f_3(f_1+f_2)(f_2-f_4)(f_3-f_5)+ f_5(f_3+f_4)(f_0-f_2)(f_1-f_3)
& + & \nonumber\\
f_1(f_2+f_3)(f_0-f_2)(f_3-f_5)-(f_0-f_2)(f_1-f_3)(f_2-f_4)(f_3-f_5)
& = & 0.\end{aligned}$$ Again, the polynomial on the left–hand side is equal to $$f_3\Big((f_1+f_2)(f_3+f_4)(f_5+f_0)-(f_0+f_1)(f_2+f_3)(f_4+f_5)\Big),$$ and vanishes due to (\[constr 1 well H\]). The formula (\[projector wonder\]) is proved.
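The two polynomial identities behind the entries 21 and 22 of (\[projector wonder\]) can be confirmed symbolically in the same way as in the proof of Proposition \[constraint preliminary\]; the sketch below is again only a verification aid.

```python
import sympy as sp

f0, f1, f2, f3, f4, f5 = sp.symbols('f0:6')
H = (f1 + f2) * (f3 + f4) * (f5 + f0) - (f0 + f1) * (f2 + f3) * (f4 + f5)  # vanishes by (constr 1 well H)

# entry 21 of (projector wonder)
lhs21 = ((f1 + f2) * (f2 - f4) * (f3 - f5)
         + (f3 + f4) * (f0 - f2) * (f1 - f3)
         + (f2 + f3) * (f0 - f2) * (f3 - f5))
assert sp.expand(lhs21 - H) == 0

# entry 22 of (projector wonder)
lhs22 = (f3 * (f1 + f2) * (f2 - f4) * (f3 - f5)
         + f5 * (f3 + f4) * (f0 - f2) * (f1 - f3)
         + f1 * (f2 + f3) * (f0 - f2) * (f3 - f5)
         - (f0 - f2) * (f1 - f3) * (f2 - f4) * (f3 - f5))
assert sp.expand(lhs22 - f3 * H) == 0
print("both identities verified")
```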
[**Step 2. Checking the equations for the matrix**]{} $C_{k,\ell,m}$. Next, we have to show that the ansatz (\[C\]), (\[P\]) verifies (\[eq C1\])–(\[eq C3\]). Notice that the matrices $${{\cal L}}({{\frak e}},-1)=\left(\begin{array}{ccc} 1 & f & 0 \\ 0 & 1 & g \\ -h
& 0 & 1
\end{array}\right)$$ are degenerate, and that $$\xi=\left(\begin{array}{c} fg \\ -g \\ 1\end{array}\right) \quad
{\rm and} \quad \eta^{\rm T}=\Big(1,\;\; -f,\;\; fg\Big)$$ are the right null–vector and the left null–vector of ${{\cal L}}({{\frak e}},-1)$, respectively. In terms of these vectors one can write the projectors $P_{0,2,4}$ as $$P_j=\frac{1}{\langle \xi_j,\eta_{j+3}\rangle}\xi_j\eta_{j+3}^{\rm
T},\quad j=0,2,4.$$ Therefore we have: $$\begin{aligned}
P_0(\widetilde{{{\frak z}}}){{\cal L}}({{\frak e}}_0,-1)={{\cal L}}({{\frak e}}_0,-1)P_0({{\frak z}}) & = & 0,\\
P_2(\widehat{{{\frak z}}}){{\cal L}}({{\frak e}}_2,-1)={{\cal L}}({{\frak e}}_2,-1)P_2({{\frak z}}) & = & 0,\\
P_4(\bar{{{\frak z}}}){{\cal L}}({{\frak e}}_4,-1)={{\cal L}}({{\frak e}}_4,-1)P_4({{\frak z}}) & = & 0.\end{aligned}$$ In order to demonstrate (\[eq C1\])–(\[eq C3\]) it is sufficient to prove that $$\begin{aligned}
P_2(\widetilde{{{\frak z}}}){{\cal L}}({{\frak e}}_0,-1)={{\cal L}}({{\frak e}}_0,-1)P_2({{\frak z}}), & \quad &
P_4(\widetilde{{{\frak z}}}){{\cal L}}({{\frak e}}_0,-1)={{\cal L}}({{\frak e}}_0,-1)P_4({{\frak z}})=0,\\
P_4(\widehat{{{\frak z}}}){{\cal L}}({{\frak e}}_2,-1)={{\cal L}}({{\frak e}}_2,-1)P_4({{\frak z}}), & \quad &
P_0(\widehat{{{\frak z}}}){{\cal L}}({{\frak e}}_2,-1)={{\cal L}}({{\frak e}}_2,-1)P_0({{\frak z}})=0,\\
P_0(\bar{{{\frak z}}}){{\cal L}}({{\frak e}}_4,-1)={{\cal L}}({{\frak e}}_4,-1)P_0({{\frak z}}), & \quad &
P_2(\bar{{{\frak z}}}){{\cal L}}({{\frak e}}_4,-1)={{\cal L}}({{\frak e}}_4,-1)P_2({{\frak z}})=0.\end{aligned}$$ All these equations are verified in a similar manner, therefore we restrict ourselves to the first one. $$\frac{1}{\langle \widetilde{\xi}_2,\widetilde{\eta}_5\rangle}
\widetilde{\xi}_2\widetilde{\eta}_5^{\rm T}{{\cal L}}({{\frak e}}_0,-1)=
\frac{1}{\langle \xi_2,\eta_5\rangle}{{\cal L}}({{\frak e}}_0,-1)\xi_2\eta_5^{\rm
T},$$ or, in long hand, $$\label{C ansatz aux0}
\frac{1}{\widetilde{f}_2\widetilde{g}_2+\widetilde{g}_2\widetilde{f}_5+
\widetilde{f}_5\widetilde{g}_5}\left(\begin{array}{c}
\widetilde{f}_2\widetilde{g}_2 \\ -\widetilde{g}_2 \\
1\end{array}\right)\!
\left(\begin{array}{c} 1-h_0/\widetilde{h}_5 \\
f_0-\widetilde{f}_5 \\
\widetilde{f}_5(\widetilde{g}_5-g_0)\end{array} \right)^{\rm
T}\!\!= \frac{1}{f_2g_2+g_2f_5+f_5g_5} \left(\begin{array}{c}
(f_2-f_0)g_2 \\ g_0-g_2 \\ 1-h_0/h_2\end{array}\right)
\!\left(\begin{array}{c} 1 \\ -f_5 \\ f_5g_5\end{array}
\right)^{\rm T}\!\!.$$ To prove this we have, first, to check that these two rank one matrices are proportional, and then to check that their entries 31 (say) coincide. The second of these claims reads: $$\label{C ansatz aux1}
\frac{1-h_0/\widetilde{h}_5}
{\widetilde{f}_2\widetilde{g}_2+\widetilde{g}_2\widetilde{f}_5+
\widetilde{f}_5\widetilde{g}_5}=
\frac{1-h_0/h_2}{f_2g_2+g_2f_5+f_5g_5},$$ and follows from (\[third constraint aux3\]), (\[third constraint aux4\]). The first claim above is equivalent to: $$\left(\begin{array}{c} \widetilde{f}_2\widetilde{g}_2 \\
-\widetilde{g}_2 \\ 1\end{array}\right) \sim
\left(\begin{array}{c} (f_2-f_0)g_2 \\ g_0-g_2 \\
1-h_0/h_2\end{array}\right) \quad{\rm and}\quad
\left(\begin{array}{c} 1-h_0/\widetilde{h}_5 \\
f_0-\widetilde{f}_5 \\
\widetilde{f}_5(\widetilde{g}_5-g_0)\end{array} \right)\sim
\left(\begin{array}{c} 1 \\ -f_5 \\ f_5g_5\end{array} \right),$$ which, in turn, is equivalent to: $$\label{C ansatz aux2}
\widetilde{f}_2=g_2\frac{f_0-f_2}{g_0-g_2},\quad
\widetilde{g}_2=h_2\frac{g_0-g_2}{h_0-h_2},$$ and $$\label{C ansatz aux3}
\widetilde{h}_5=f_5\frac{h_0-\widetilde{h}_5}{f_0-\widetilde{f}_5},\quad
\widetilde{f}_5=g_5\frac{f_0-\widetilde{f}_5}{g_0-\widetilde{g}_5}.$$ All these relations easily follow from the equations of the $fgh$–system. For instance, to check the first equation in (\[C ansatz aux2\]), one has to consider the two elementary positively oriented triangles $({{\frak z}},{{\frak z}}+\omega,{{\frak z}}+\varepsilon)$ and $({{\frak z}},{{\frak z}}+1,{{\frak z}}+\varepsilon)$. Denoting the edge ${{\frak e}}_{12}=({{\frak z}}+\omega,{{\frak z}}+\varepsilon)$, we have: $$f_2+f_{12}=f_0+\widetilde{f}_2\;(=-f_1),\qquad
f_{12}g_2=\widetilde{f}_2g_0\;(=f_1g_1).$$ Eliminating $f_{12}$ from these two equations, we end up with the desired one. This finishes the proof of (\[eq C1\])–(\[eq C3\]).
[**Step 3. Checking the equations for the matrix**]{} $D({{\frak z}})$. Notice that the matrices $$L({{\frak e}},0)=\left(\begin{array}{ccc} 1 & f & 0 \\ 0 & 1 & g \\ 0 & 0
& 1
\end{array}\right)$$ are upper triangular. We require that the matrices $D({{\frak z}})$ are also upper triangular: $$D=\left(\begin{array}{ccc} d_{11} & d_{12} & d_{13} \\ 0 & d_{22} & d_{23} \\
0 & 0 & d_{33}\end{array}\right).$$ It is immediately seen that the diagonal entries are constants. By multiplying the wave function $\Psi_{k,\ell,m}(\mu)$ from the right by a constant ($\mu$–dependent) matrix one can arrange that the matrices $D({{\frak z}})$ are traceless. Hence the diagonal part of $D$ is parameterized by two arbitrary numbers. It will be convenient to choose this parametrization as $$(d_{11},d_{22},d_{33})=\Big(-(2\alpha+\beta)/3,\;
(\alpha-\beta)/3, \; (2\beta+\alpha)/3\Big).$$ Equating the entries 12 and 23 in (\[eq D1\])–(\[eq D3\]), we find for an arbitrary positively oriented edge ${{\frak e}}=({{\frak z}}_1,{{\frak z}}_2)\in E({{\cal T}}{{\cal L}})$: $$\begin{aligned}
d_{12}({{\frak z}}_2)-d_{12}({{\frak z}}_1) & = & (d_{22}-d_{11})f=
\alpha\Big(u({{\frak z}}_2)-u({{\frak z}}_1)\Big),\\
d_{23}({{\frak z}}_2)-d_{23}({{\frak z}}_1) & = & (d_{33}-d_{22})g=
\beta\Big(v({{\frak z}}_2)-v({{\frak z}}_1)\Big).\end{aligned}$$ Obviously, a solution (unique up to an additive constant) is given by $$d_{12}=\alpha u,\quad d_{23}=\beta v.$$ Finally, equating in (\[eq D1\])–(\[eq D3\]) the entries 13, we find: $$\begin{aligned}
d_{13}({{\frak z}}_2)-d_{13}({{\frak z}}_1) & = & d_{23}({{\frak z}}_1)f-d_{12}({{\frak z}}_2)g
\label{eq for d13}\\
& = & \beta v({{\frak z}}_1)\Big(u({{\frak z}}_2)-u({{\frak z}}_1)\Big)- \alpha
u({{\frak z}}_2)\Big(v({{\frak z}}_2)-v({{\frak z}}_1)\Big).\end{aligned}$$ Comparing this with (\[eq for a\]), (\[eq for a’\]), we see that (\[D\]) is proved.
[**Step 4. Equations relating the matrices**]{} $C_{k,\ell,m}$ [**and**]{} $D({{\frak z}})$. It remains to consider the equations (\[eq CD1\])–(\[eq CD3\]). Denoting entries of the matrix $C$ by $c_{ij}$, we see that these matrix equations are equivalent to the following scalar ones: $$\begin{aligned}
c_{12}+d_{12} & = & 0, \label{cd 12}\\
c_{23}+d_{23} & = & 0, \label{cd 23}\\
c_{13}+d_{13} & = & 0,\label{cd 13}\end{aligned}$$ $$\begin{aligned}
(c_{33})_{k+1,\ell,m}-(c_{11})_{k,\ell,m}+d_{33}-d_{11} & = & 1,
\label{cd 11 33 k} \\
(c_{33})_{k,\ell+1,m}-(c_{11})_{k,\ell,m}+d_{33}-d_{11} & = & 1,
\label{cd 11 33 l}\\
(c_{33})_{k,\ell,m+1}-(c_{11})_{k,\ell,m}+d_{33}-d_{11} & = & 1.
\label{cd 11 33 m}\end{aligned}$$ (In the last three equations we took into account that $d_{11}$, $d_{33}$ are constants.) It is easy to see that the equations (\[cd 12\]), (\[cd 23\]) are nothing but the constraint equations (\[constr 1\]), (\[constr 2\]), respectively. We show now that the remaining equations (\[cd 13\])–(\[cd 11 33 m\]) are not independent, but rather follow from the equations of the $fgh$–system and the constraints (\[cd 12\]), (\[cd 23\]). We start with the last three equations, and prove the claim for (\[cd 11 33 k\]), since for the other two everything is similar. As in the proof of Proposition \[third constraint\], we write the formulas here with $m=0$. Writing (\[cd 11 33 k\]) in long hand, using the ansätze (\[C\]), (\[P\]), (\[D\]), we see that it is equivalent to $$\begin{aligned}
\label{cd 11 33 to prove}
1-\alpha-\beta & = &
(k+1)\frac{1/h_0}{\widetilde{f}_0\widetilde{g}_0+
\widetilde{g}_0\widetilde{f}_3+\widetilde{f}_3\widetilde{g}_3}-
k\frac{1/h_0}{f_0g_0+g_0f_3+f_3g_3}\nonumber\\
& & +\ell \frac{1/\widetilde{h}_5}{\widetilde{f}_2\widetilde{g}_2+
\widetilde{g}_2\widetilde{f}_5+\widetilde{f}_5\widetilde{g}_5}-
\ell\frac{1/h_2}{f_2g_2+g_2f_5+f_5g_5}.\end{aligned}$$ But this follows immediately from (\[constr 3 to prove\]), (\[C ansatz aux1\]). Finally, we turn to (\[cd 13\]). Actually, since the entry 13 of the matrix $D$ is defined only up to an additive constant, this equation is equivalent to the system of the following three ones: $$\begin{aligned}
(c_{13})_{k+1,\ell,m}-(c_{13})_{k,\ell,m}+\widetilde{d}_{13}-d_{13}
& = & 0,
\label{cd 13 k}\\
(c_{13})_{k,\ell+1,m}-(c_{13})_{k,\ell,m}+\widehat{d}_{13}-d_{13}
& = & 0,
\label{cd 13 l}\\
(c_{13})_{k,\ell,m+1}-(c_{13})_{k,\ell,m}+\bar{d}_{13}-d_{13} & =
& 0. \label{cd 13 m}\end{aligned}$$ As usual, we restrict ourselves to the first one. Upon using the equation (\[eq for d13\]) and the constraints (\[cd 12\]), (\[cd 23\]), we see that it is equivalent to $$\label{cd 13 to prove}
(c_{13})_{k+1,\ell,m}-(c_{13})_{k,\ell,m}+g_0(c_{12})_{k+1,\ell,m}
-f_0(c_{23})_{k,\ell,m}=0.$$ Writing in long hand, in the representation with $m=0$, we see that the terms proportional to $k+1$ and $k$ vanish identically, while the vanishing of the terms proportional to $\ell$ is equivalent to: $$\frac{1}{\widetilde{h}_2\widetilde{h}_5}\cdot
\frac{1-g_0/\widetilde{g}_5}
{\widetilde{f}_2\widetilde{g}_2+\widetilde{g}_2\widetilde{f}_5+
\widetilde{f}_5\widetilde{g}_5}=
\frac{1}{h_2h_5}\cdot\frac{1-f_0/f_2}{f_2g_2+g_2f_5+f_5g_5}.$$ But this follows immediately from (\[C ansatz aux1\]) and the formulas $$\widetilde{g}_5=h_5\frac{g_0-\widetilde{g}_5}{h_0-\widetilde{h}_5},
\qquad \widetilde{g}_2=h_2\frac{g_0-g_2}{h_0-h_2},$$ which are similar to (and follow from) the equations (\[C ansatz aux3\]), (\[C ansatz aux2\]).
This finishes the proof of Theorem \[monodromy\].
------------------------------------------------------------------------
V.E.Adler. Legendre transforms on a triangular lattice. [*Funct. Anal. Appl.*]{}, 2000, [**34**]{}, No.1, p.1–9.
S.I.Agafonov, A.I.Bobenko. Discrete $Z^{\gamma}$ and Painlevé equations. [*Internat. Math. Res. Notes*]{}, 2000, N 4, p.165–193.
A.F.Beardon, K.Stephenson. The uniformization theorem for circle packings. [*Indiana Univ. Math. J.*]{}, 1990, [**39**]{}, p. 1383–1425.
A.F.Beardon, T.Dubejko, K.Stephenson. Spiral hexagonal circle packings in the plane. [*Geom. Dedicata*]{}, 1994, [**49**]{}, p.39–70.
A.I.Bobenko, T.Hoffmann. Conformally symmetric circle packings. A generalization of Doyle spirals. [*Experimental Math.*]{}, 2001 (to appear).
A.I.Bobenko, U.Pinkall. Discretization of surfaces and integrable systems. – In: [*Discrete integrable geometry and physics*]{}, Eds. A.I.Bobenko, R.Seiler, Oxford, Clarendon Press, 1999, p. 3–58.
L.V.Bogdanov, B.G.Konopelchenko. Möbius invariant integrable lattice equations associated with KP and 2DTL hierarchies. [*Phys. Lett. A*]{}, 1999, [**256**]{}, p.39–46.
Z.-X.He. Rigidity of infinite disk patterns. [*Ann. of Math.*]{}, 1999, [**149**]{}, p. 1–33.
Z.-X.He, O.Schramm. The $C^{\infty}$ convergence of hexagonal disc packings to Riemann map. [*Acta Math.*]{}, 1998, [**180**]{}, p. 219–245.
A.R.Its. “Isomonodromy” solutions of equations of zero curvature. [*Math. USSR Izv.*]{}, 1986, [**26**]{}, p. 497–529.
I.M.Krichever, S.P.Novikov. Trivalent graphs and solitons. [*Russ. Math. Surv.*]{}, 1999, [**54**]{}, p. 1248–1249.
A.Marden, B.Rodin. On Thurston’s formulation and proof of Andreev’s theorem. [*Lect. Notes Math.*]{}, 1990, [**1435**]{}, p. 103–115.
F.W.Nijhoff. Discrete Painlevé equations and symmetry reduction on the lattice. – In: [*Discrete integrable geometry and physics*]{}, Eds. A.I.Bobenko, R.Seiler, Oxford, Clarendon Press, 1999, p. 209–234.
S.P.Novikov, I.A.Dynnikov. Discrete spectral symmetries of low-dimensional differential operators and difference operators on regular lattices and two-dimensional manifolds. [*Russ. Math. Surv.*]{}, 1997, [**52**]{}, p.1057–1116.
S.P.Novikov, A.S.Shvarts. Discrete Lagrangian systems on graphs. Symplectic-topological properties. [*Russ. Math. Surv.*]{}, 1999, [**54**]{}, p.258–259.
A.A.Oblomkov, A.V.Penskoi. Two–dimensional algebro–geometric difference operators. [*J. Phys. A: Math. Gen.*]{}, 2000, [**33**]{}, p.9255–9264.
B.Rodin, D.Sullivan. The convergence of circle packings to Riemann mapping. [*J. Diff. Geom.*]{}, 1987, [**26**]{}, p.349–360.
O.Schramm. Circle patterns with the combinatorics of the square grid. [*Duke Math. J.*]{}, 1997, [**86**]{}, p. 347–389.
A.Zabrodin. A survey of Hirota’s difference equation. [*Teor. Math. Phys.*]{}, 1997, [**113**]{}, p.1347–1392.
W.P.Thurston. The finite Riemann mapping theorem. [*Invited talk at the international symposium on the occasion of the proof of the Bieberbach conjecture*]{}, Purdue University, 1985.
W.P.Thurston. The geometry and topology of 3-manifolds. [*Preprint*]{}, Princeton University, 1991.
[^1]: E–mail: [[email protected]]{}
[^2]: E–mail: [[email protected]]{}
[^3]: E–mail: [[email protected]]{}
---
abstract: 'In [@SV] the author and A. Vershik have shown that for ${\beta}=\frac12(1+\sqrt5)$ and the alphabet $\{0,1\}$ the infinite Bernoulli convolution ($=$ the Erdös measure) has a property similar to the Lebesgue measure. Namely, it is quasi-invariant of type $\mathrm{II}_1$ under the ${\beta}$-shift, and the natural extension of the ${\beta}$-shift provided with the measure equivalent to the Erdös measure, is Bernoulli. In this note we extend this result to all Pisot parameters ${\beta}$ (modulo some general arithmetic conjecture) and an arbitrary “sufficient" alphabet.'
address: 'Department of Mathematics, UMIST, P.O. Box 88, Manchester M60 1QD, United Kingdom. E-mail: [email protected]'
author:
- Nikita Sidorov
title: |
Ergodic-theoretic properties of\
certain Bernoulli convolutions
---
[^1]
**1. Introduction and the main theorem.** Let ${\beta}>1$; the *infinite Bernoulli convolution* (or the *infinitely convolved Bernoulli measure*) is defined as the infinite convolution of the independent discrete random variables $\theta_n({\beta})$ for $n$ from 1 to $\infty$, where $\theta_n({\beta})$ assumes the values $\pm{\beta}^{-n}$ with the probability $\frac12$. This measure is well studied from the probabilistic point of view – see, e.g., [@Ga; @AlZa]. In particular, if ${\beta}>2$, then the support of the corresponding infinite Bernoulli convolution is a Cantor set of zero Lebesgue measure, and for ${\beta}=2$ it coincides with the Lebesgue measure on $[-1,1]$. Besides, if ${\beta}$ is a *Pisot number* (i.e., an algebraic integer $>1$ whose Galois conjugates are all less than 1 in modulus), then the famous Erdös Theorem claims that it is singular with respect to the Lebesgue measure [@E]. Finally, it is also worth mentioning the fundamental result by B. Solomyak who has proved that it is absolutely continuous for a.e. ${\beta}\in(1,2)$ [@So]. The aim of this short note is to study some ergodic-theoretic properties of this important measure in the case of Pisot parameter ${\beta}$.
Actually, we will consider a slightly more general model. Namely, let $d\in{\mathbb{N}}\setminus\{1\}, {\mathcal{A}}_d=\{0,1,\dots,d-1\}$ and ${\beta}$ be an irrational Pisot number, $1<{\beta}<d$.
Let $\mu^+_{{\beta},d}$ denote the [*Erdös measure*]{}, i.e., the measure on ${\mathbb{R}}$ that corresponds to the distribution of the random variable $$\xi_{{\beta},d}=\pi_{{\beta},d}((x_n)_1^\infty):=
\frac{{\beta}-1}{d-1}\sum_{n=1}^\infty x_n{\beta}^{-n}, \label{pr}$$ where $x_n$’s are i.i.d. variables, each of which assumes the values $\{0,1,\linebreak[0]\dots,d-1\}$ with the probability $1/d$. Since ${\beta}<d$, it is obvious that $\textrm{supp}\,\mu_{{\beta},d}=[0,1]$. Let $\tau^+_{\beta}$ denote the ${\beta}$-shift in $[0,1)$, i.e., $$\tau^+_{\beta}(x)={\beta}x\bmod1.$$ The relationship between the Erdös measure and the infinite Bernoulli convolutions is straightforward: let $d=2$ and ${\beta}\in(1,2)$. Then the affine map $h_{\beta}(x)=\frac{{\beta}-1}2x+\frac12$ turns the corresponding infinite Bernoulli convolution into the Erdös measure with the same parameter ${\beta}$. Since an affine transform does not alter any essential ergodic properties, we may confine ourselves to the study of the measures $\mu^+_{{\beta},d}$.
Let $X_{\beta}^+$ denote the one-sided [*${\beta}$-compactum*]{}, i.e., the space of all possible (greedy) ${\beta}$-expansions of the numbers in $[0,1)$.
More precisely, let the sequence $(a_n)_1^\infty$ be defined as follows: let $1=\sum_{1}^{\infty}a_k' {\beta}^{-k}$ be the greedy expansion of 1, i.e., $a_n'=[{\beta}(\tau^+_{\beta})^{n-1}1],\ n\ge1$. If the tail of the sequence $(a_n')$ differs from $0^\infty$, then we put $a_n\equiv a_n'$. Otherwise let $k=\max\,\{j:a_j'>0\}$, and $(a_1,a_2,\dots):=
(a_1',\dots,a_{k-1}',\linebreak[0]a_k'-1)^\infty$. In the seminal paper [@Pa] it is shown that for each greedy expansion ${\varepsilon}$ in base ${\beta},\ ({\varepsilon}_n,{\varepsilon}_{n+1},\dots)$ is lexicographically less (notation: $\prec$) than $(a_1,a_2,\dots)$ for every $n\ge1$. Moreover, it was shown that, conversely, every sequence with this property is actually the greedy expansion in base ${\beta}$ for some $x\in [0,1)$.
Put $$X_{\beta}^+=\left\{{\varepsilon}\in\prod_1^\infty\{0,1,\dots,[{\beta}]\}\mid
({\varepsilon}_n,{\varepsilon}_{n+1},\dots)\prec(a_1,a_2,\dots),\ n\in{\mathbb{N}}\right\}$$ (the one-sided ${\beta}$-compactum), and $$X_{\beta}=\left\{{\varepsilon}\in\prod_1^\infty\{0,1,\dots,[{\beta}]\}\mid
({\varepsilon}_n,{\varepsilon}_{n+1},\dots)\prec(a_1,a_2,\dots),\ n\in{\mathbb{Z}}\right\}$$ (the two-sided ${\beta}$-compactum). The sequences from the ${\beta}$-compactum (one-sided or two-sided) will be called [*${\beta}$-expansions*]{}. It follows from the above formulas that both ${\beta}$-compacta are stationary ($=$ shift-invariant). As was shown in [@Pa], the map ${\varphi}_{\beta}:X_{\beta}^+\to[0,1)$ defined by the formula $${\varphi}_{\beta}({\varepsilon})=\sum_{n=1}^\infty{\varepsilon}_n{\beta}^{-n}\label{bexp},$$ is one-to-one except on a countable set of sequences.
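For illustration, here is a minimal Python sketch of the greedy digit map, computing $\varepsilon_n=[{\beta}(\tau^+_{\beta})^{n-1}x]$ in analogy with the definition of $a_n'$ above; the concrete choices (the golden ratio and the sample point $0.7$) are ours. For this ${\beta}$ one has $(a_n)=(1,0,1,0,\dots)$, so the condition $({\varepsilon}_n,{\varepsilon}_{n+1},\dots)\prec(a_1,a_2,\dots)$ simply forbids two consecutive digits $1$.

```python
from math import floor, sqrt

def greedy_digits(x, beta, N):
    """First N digits of the greedy beta-expansion of x in [0,1)."""
    eps = []
    for _ in range(N):
        d = floor(beta * x)
        eps.append(d)
        x = beta * x - d            # one step of the beta-shift tau_beta^+
    return eps

beta = (1 + sqrt(5)) / 2            # golden ratio, a Pisot number; here (a_n) = (1,0,1,0,...)
eps = greedy_digits(0.7, beta, 20)
# Parry's condition for this beta: no two consecutive digits equal to 1
assert all(eps[n] + eps[n + 1] < 2 for n in range(len(eps) - 1))
# phi_beta recovers the number from its digits, cf. (bexp)
assert abs(sum(e * beta ** (-n - 1) for n, e in enumerate(eps)) - 0.7) < 1e-3
print(eps)
```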
Let $Fin({\beta})$ denote the set of all numbers from $[0,1)$ that have a finite ${\beta}$-expansion (i.e., the tail is $0^\infty$). It is obvious that $Fin({\beta})\subset{\mathbb{Z}}[{\beta}]\cap[0,1)$; the reverse inclusion, however, does not necessarily hold even for a Pisot number; see, e.g., [@Ak; @S1].
\[WF\] We call a Pisot number ${\beta}$ [*weakly finitary*]{} (WF) if for any $y\in{\mathbb{Z}}[{\beta}]\cap[0,1)$ and any $\delta>0$ there exists $f\in Fin({\beta})\cap(0,\delta)$ such that $y+f\in Fin({\beta})$ as well.
The notion of a WF number has appeared in different settings in a number of recent works [@Ak; @S1; @S2] and, in a slightly different form, earlier in the thesis [@Hol]. It is conjectured (and this conjecture is shared by most experts in the area) that in fact [**every**]{} Pisot number is weakly finitary. Note that Sh. Akiyama [@Ak] has given an explicit algorithm for checking whether a [*given*]{} Pisot number is WF, and, as far as we are aware, no Pisot number has failed this test so far.
The main theorem of the present note is as follows.
\[main\] If ${\beta}$ is WF, then the Erdös measure $\mu^+_{{\beta},d}$ is quasi-invariant under the ${\beta}$-shift $\tau^+_{\beta}$. Moreover, there exists a unique probability measure $\nu^+_{{\beta},d}$ invariant under $\tau^+_{\beta}$ and equivalent to $\mu^+_{{\beta},d}$. The natural extension of the endomorphism $([0,1), \nu^+_{{\beta},d},\tau^+_{\beta})$ is Bernoulli.
We believe that knowing this fact could be important for the study of further ergodic-theoretic properties of this important measure, including its Gibbs structure and multifractal spectrum (see [@OST] for some results in this direction and references therein).
**2. Auxiliary results and definitions.** The rest of the paper is devoted to the proof of Theorem \[main\], which is based on the idea of [@SV §1] (where the special case $\beta=\frac{1+\sqrt5}2,\ d=2$ was considered) and also uses techniques of [@S2]. Note that the claim analogous to Theorem \[main\] is known to be true for the Lebesgue measure on $[0,1]$ – see [@Sm; @DKS]. The above theorem therefore immediately leads to an ergodic-theoretic proof of the famous Erdös Theorem which claims that the Erdös measure is singular [@E]. Indeed, it suffices to apply the corollary of the Birkhoff Ergodic Theorem claiming that two ergodic measures either coincide or are mutually singular; the fact that the two invariant measures in question do not coincide can be proved in the very same way as in the case ${\beta}=\frac{1+\sqrt5}2$, see [@SV Proposition 1.10].
Our first goal is to define the two-sided normalization in base $({\beta},d)$. Let ${\Sigma}_d:=\prod_{-\infty}^\infty\{0,1,\dots,d-1\},
{\Sigma}_d^+:=\prod_1^\infty\{0,1,\dots,d-1\}$. We will use the following convention: the sequences from $X_{\beta}$ ($X_{\beta}^+$) will be denoted with the letter “${\varepsilon}$" and sequences from the full compacta – with “$x$". Let $p_d$ denote the product measure on ${\Sigma}_d$ with the equal multipliers and $p_d^+$ – its one-sided analog. Recall that the [*one-sided normalization*]{} ${\mathfrak{n}}_{{\beta},d}^+$ is defined as the map from ${\Sigma}_d^+$ to $X_{\beta}^+$ acting by the formula $${\mathfrak{n}}_{{\beta},d}^+={\varphi}_{\beta}^{-1}\circ\pi_{{\beta},d}, \label{n+}$$ where $\pi_{{\beta},d}$ is given by (\[pr\]) and ${\varphi}_{\beta}$ is given by (\[bexp\]) – see [@Fr]. The following convention will be used hereinafter: the notation ${\mathfrak{n}}_{{\beta},d}^+(x_1\dots x_n)$ means ${\mathfrak{n}}_{{\beta},d}^+(x_1,\dots,x_n,0^\infty)$ and if ${\mathfrak{n}}_{{\beta},d}^+(x_1\dots x_n)=({\varepsilon}_1,\dots,{\varepsilon}_{n'},0^\infty)$, then by definition, ${\mathfrak{n}}_{{\beta},d}^+(x_1\dots x_n)=({\varepsilon}_1,\dots,{\varepsilon}_{n'})$, i.e., we ignore the tail $0^\infty$ whenever possible. By the above, the Erdös measure may be computed by the formula $$\mu^+_{{\beta},d}={\mathfrak{n}}^+_{{\beta},d}(p_d^+).$$ For more details see [@SV].
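A minimal sketch of the one-sided normalization of a finite word (ours; it simply composes the two maps in (\[n+\]): evaluate the word via (\[pr\]) with the tail $0^\infty$ and take the greedy $\beta$-expansion of the result). A small floating-point guard is included because normalized values often hit algebraic boundary points exactly.

```python
import math

def greedy_digits(x, beta, n, tol=1e-12):
    """Greedy beta-expansion of x in [0,1); a numerically zero remainder stops the loop."""
    digits = []
    for _ in range(n):
        x *= beta
        d = math.floor(x + tol)     # guard against values that hit integers up to rounding
        digits.append(d)
        x -= d
        if x < tol:
            break
    return digits

def normalize_word(word, beta, d, n_digits=40):
    """One-sided normalization n^+_{beta,d}(x_1 ... x_m): the greedy expansion of
    (beta-1)/(d-1) * sum_k x_k beta^{-k}.  The value lies in [0, 1) since beta < d."""
    t = (beta - 1.0) / (d - 1.0) * sum(x * beta ** -(k + 1) for k, x in enumerate(word))
    return greedy_digits(t, beta, n_digits)

beta = (1 + 5 ** 0.5) / 2
print(normalize_word([1, 1], beta, d=2))     # the word 11 normalizes to the finite word 1
print(normalize_word([1, 0, 1], beta, d=2))  # -> [0, 1, 0, 1]
```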
There exists a more direct way of defining normalization. Namely, in [@Fr] it was shown that one may find a finite automaton that carries out the operation of normalization in Pisot bases. The converse is also true: if the function of normalization is computable by a finite automaton, then ${\beta}$ must be a Pisot number – see [@BeFr].
We need one more technical lemma before we may proceed.
\[L\] If ${\beta}$ is WF, then there exists $L=L({\beta},d)\in{\mathbb{N}}$ such that for any word $x_1\dots x_n$ in the alphabet ${\mathcal{A}}_d$ there exists a word $x_{n+1}\dots x_{n+L}$ in the same alphabet such that ${\mathfrak{n}}^+_{{\beta},d}(x_1\dots x_{n+L})$ is finite.
By the well-known result of K. Schmidt [@Sch], the ${\beta}$-expansion of any $x\in{\mathbb{Q}}[{\beta}]\cap(0,1)$ is eventually periodic; moreover, for the elements of ${\mathbb{Z}}[{\beta}]\cap(0,1)$ the collection of such periods is known to be finite [@Ak; @S2]. Let $\mathcal{T}_{\beta}=({\mathrm{Per}}_1,\dots,{\mathrm{Per}}_r)$ denote this collection. In [@FrSo] it was shown that the normalization of a word $x_1\dots x_n$ in base $({\beta},d)$ has the following form: $${\mathfrak{n}}_{{\beta},d}^+(x_1\dots
x_n)=({\varepsilon}_1,\dots,{\varepsilon}_n,{\varepsilon}_{n+1},\dots,{\varepsilon}_{n+L_1},{\mathrm{Per}}_j),\quad 1\le
j\le r,$$ where ${\mathrm{Per}}_j\in\mathcal T_{\beta}$ and $L_1$ is a function of ${\beta}$ and $d$. Thus, it suffices to prove the claim for the words $x$ of the form ${\varepsilon}_1\dots{\varepsilon}_{n+L_1}x^{(j)}_1\dots x^{(j)}_{p_j}$, where $x^{(j)}_1\dots x^{(j)}_{p_j}$ is the word in the alphabet ${\mathcal{A}}_d$ whose normalization is ${\mathrm{Per}}_j$. Let $p=\max_{j=1}^r p_j,\
{\delta}=\frac{d-1}{{\beta}-1}\,{\beta}^{-n-L_1-p}$ and $y=\sum_{i=1}^{n+L_1}{\varepsilon}_i{\beta}^{-i}+
\sum_{i=1}^{p_j}x_i^{(j)}{\beta}^{-i-n-L_1}$. Then by Definition \[WF\], there exists $f\in Fin({\beta})\cap(0,{\delta})$ such that $y+f\in Fin({\beta})$. By our choice of ${\delta}$, the ${\beta}$-expansion of $f$ must be of the form $(f_{n+L_1+p+1},\dots,
f_{n+L_1+p+L_2})$ for some fixed $L_2$ (because we have only a finite number of $(x^{(j)}_1\dots x^{(j)}_{p_j})$). Setting $L:=L_1+L_2+p$ finishes the proof.
Thus, even if the normalization of a word is not finite, one can append a “period killer" of a fixed length so that it becomes finite. This is in fact the only property we will use. It looks weaker than WF, and we were even tempted to call it something like PWF (positively weakly finitary); the reason for not doing so is that there are no known examples of Pisot numbers that are PWF but not WF (indeed, as already mentioned above, there are no known examples of Pisot numbers that are not WF at all!).
For $p_d^+$-a.e. sequence $x\in{\Sigma}_d^+$ there exists $n$ such that ${\mathfrak{n}}_{{\beta},d}^+(x_1\dots x_n)$ is finite. \[blk\]
The proof is similar to the one of [@S2 Proposition 18]. Let $$\mathfrak{A}_n=\left\{x\in{\Sigma}_d^+ : {\mathfrak{n}}_{{\beta},d}^+(x_1\dots
x_n)\,\,\mathrm{is\ finite}\right\}\label{An}$$ and $\mathfrak{B}_n={\Sigma}_d^+\setminus\mathfrak{A}_n$. Our goal is to show that there exists a constant ${\gamma}={\gamma}({\beta},d)\in(0,1)$ such that $$p_d^+\left(\bigcap_{k=1}^n\mathfrak B_k\right)\le{\gamma}^n,\quad
n\ge1. \label{ineq}$$ Let $L=L({\beta},d)$ be as in Lemma \[L\]. We have $$\begin{aligned}
p_d^+\left(\bigcap_{k=1}^n\mathfrak B_k\right) &\le
&\prod_{k=2}^n p_{d}^+(\mathfrak B_k\mid \mathfrak B_{k-1}\cap
\dots \cap \mathfrak
B_1) \\
&\le &\prod_{k=2}^{[n/L]}p_d^+(\mathfrak B_{Lk}\mid \cap
_{j=1}^{Lk-L-1}\mathfrak B_j).\end{aligned}$$ By Lemma \[L\], $$p_d^+(\mathfrak A_{Lk}\mid\mathfrak E )\ge d^{-L},$$ for any $\mathfrak E$ in the sigma-algebra generated by $x_{Lk-L-1},\dots,x_1$. Hence $$p_d^+(\mathfrak B_{Lk}\mid\cap_{j=1}^{Lk-L-1}\mathfrak B_j)\le
1-d^{-L},$$ and $$p_d^+\left(\bigcap_{k=1}^n\mathfrak
B_k\right)\le(1-d^{-L})^{[n/L]}.$$ It suffices to put $\gamma:=(1-d^{-L})^{1/(2L)}$: since $[n/L]\ge n/(2L)$ for $n\ge 2L$, this proves (\[ineq\]) for all sufficiently large $n$, which is enough for the lemma.
\[maincor\] For $p_d^+$-a.e. sequence $x\in{\Sigma}_d^+$ its normalization is blockwise, i.e., there exists a sequence $(n_k)_{k=0}^\infty$ such that $n_0=0$ and $${\mathfrak{n}}^+_{{\beta},d}(x)={\mathfrak{n}}^+_{{\beta},d}(x_1\dots
x_{n_1}){\mathfrak{n}}^+_{{\beta},d}(x_{n_1+1}\dots x_{n_2})\dots$$ (a concatenation of finite words), and ${\mathfrak{n}}_{{\beta},d}(x_{n_k+1}\dots
x_{n_{k+1}})$ is finite of the length $n_{k+1}-n_k$ for any $k\ge0$.
Recall that by the well-known result from [@FrSo] quoted in Lemma \[L\], there exists a number $K=L_1\in{\mathbb{N}}$ such that if the normalization of a word of length $n$ in a Pisot base with a fixed alphabet is finite, then the length of its normalization is at most $n+K$ (here $K$ depends on ${\beta}$ and the alphabet only). This means that if $w_1, w_2$ are two words with finite normalizations, then the normalization of $w_10^{2K}w_2$ is the concatenation of the normalizations of $w_10^K$ and $0^Kw_2$.
Set $$\mathfrak D=\{x\in{\Sigma}_d^+\mid \exists (l_k)_1^\infty: x_j\equiv0,\
l_k\le j\le l_k+2K, \forall k\ge1 \}.$$ Obviously, $p_d^+(\mathfrak D)=1$. Put $$\mathfrak A^+=\mathfrak D\cap\bigcup_{n=1}^\infty \mathfrak A_n,$$ where $\mathfrak A_n$ is given by (\[An\]). We still have $p_d^+(\mathfrak A^+)=1$. Thus, the probability that there are $2K$ consecutive zeros at the end of the first block from Lemma \[blk\] is also 1. Therefore, by the above, the normalization of the first block and the normalization of all the rest are totally independent. Consider the second block, then the third one, etc., and then take the countable intersection of all the sets obtained. This is the sought set of full measure $p_d^+$.
**3. Two-sided Erdös measure and conclusion of the proof**. Now we are ready to define the two-sided normalization and – consequently – the two-sided Erdös measure.
Let $x=(x_n)_{-\infty}^\infty\in{\Sigma}_d$. Put $${\mathfrak{n}}_{{\beta},d}(x):=\lim_{N\to+\infty}{\mathfrak{n}}^+_{{\beta},d}(x_{-N},x_{-N+1},
\dots), \label{norm}$$ where the limit is taken in the natural (weak) topology of ${\Sigma}_d$. By the previous corollary and the fact that the measure $p_d$ is the weak limit of the measures $S_d^n(p_d^+)$ (where $S_d$ denotes the shift on ${\Sigma}_d$), we conclude that the map ${\mathfrak{n}}_{{\beta},d}:{\Sigma}_d\to X_{\beta}$ is well defined and blockwise (in the sense of the previous corollary) for $p_d$-a.e. sequence $x\in{\Sigma}_d$. We will call it the [*two-sided normalization*]{} (in base $({\beta},d)$).
It is worth noting that if ${\beta}$ is an algebraic unit (i.e., if ${\beta}^{-1}\in{\mathbb{Z}}[{\beta}]$), then there exists an alternative way of defining ${\mathfrak{n}}_{{\beta},d}$ via the torus. Namely, let $x^m=k_1x^{m-1}+\dots+k_m$ be the characteristic equation for ${\beta}$ ($k_m=\pm1$) and ${\mathbb{T}}^m={\mathbb{R}}^m/{\mathbb{Z}}^m$. Let $T_{\beta}$ be the automorphism of ${\mathbb{T}}^m$ determined by the companion matrix $M_{\beta}$ for ${\beta}$, i.e., $$M_{\beta}=\left(
\begin{array}
[c]{ccccc}k_{1} & k_{2} & \ldots & k_{m-1} & k_{m}\\
1 & 0 & \ldots & 0 & 0\\
0 & 1 & \ldots & 0 & 0\\
\ldots & \ldots & \ldots & \ldots & \ldots\\
0 & 0 & \ldots & 1 & 0
\end{array}
\right).$$ Let $\mathcal{H}(T_{\beta})$ denote the group of points homoclinic to zero, i.e., $\mathbf{t}\in\mathcal{H}(T_{\beta})$ iff $T_{\beta}^n(\mathbf{t})\to0$ as $n\to\pm\infty$. A homoclinic point $\mathbf{t}$ is called *fundamental* if the linear span of its $T_{\beta}$-orbit is the whole group $\mathcal{H}(T_{\beta})$. It is well known that such points always exist for $T_{\beta}$ (actually they exist for any automorphism of ${\mathbb{T}}^m$ which is $SL(m,{\mathbb{Z}})$-conjugate to $T_{\beta}$ – see [@Ver; @S2]). Now let the map from $X_{\beta}$ onto ${\mathbb{T}}^m$ be defined as follows: $$F_{\mathbf t}({\varepsilon})=\sum_{n\in{\mathbb{Z}}}{\varepsilon}_nT_{\beta}^{-n}(\mathbf t),
\label{F}$$ where $\mathbf t$ is fundamental. It is easy to show that the series does converge on the torus [@Sch2000; @S2] whenever the ${\varepsilon}_n$ are bounded. Let $\tau_{\beta}$ denote the shift on $X_{\beta}$, i.e., $\tau_{\beta}({\varepsilon})_n={\varepsilon}_{n+1}$. In [@S2] it is shown that if ${\beta}$ is WF, then $F_{\mathbf t}$ is one-to-one a.e. and conjugates the shift $\tau_{\beta}$ and $T_{\beta}$.
Now we state without proof that similarly to the one-sided normalization (see (\[n+\])), the two-sided normalization can be computed by the formula $${\mathfrak{n}}_{{\beta},d}=F_{\mathbf{t}}^{-1}{\widetilde}F_{\mathbf{t},d},$$ where the projection ${\widetilde}F_{\mathbf{t},d}:{\Sigma}_d\to{\mathbb{T}}^m$ is given by the same formula (\[F\]) as $F_{\mathbf{t}}$ with ${\varepsilon}_n$ replaced by $x_n$. In particular, it is well defined a.e. and does not depend on a choice of $\mathbf t$.
The projection $\nu_{{\beta},d}:={\mathfrak{n}}_{{\beta},d}(p_d)$ is called the [*two-sided Erdös measure*]{}.
We have the following diagram:
$$\begin{CD}
{\Sigma}_d @>{S_d}>> {\Sigma}_d \\
@V{{\mathfrak{n}}_{{\beta},d}}VV @VV{{\mathfrak{n}}_{{\beta},d}}V \\
X_{\beta}@>{\tau_{\beta}}>> X_{\beta}\end{CD}$$
Since the two-sided normalization obviously commutes with the shift (see (\[norm\])), the diagram commutes as well, whence the two-sided Erdös measure is also shift-invariant (unlike the one-sided Erdös measure!). Indeed, $$\tau_{\beta}(\nu_{{\beta},d})=\tau_{\beta}{\mathfrak{n}}_{{\beta},d}(p_d)={\mathfrak{n}}_{{\beta},d}S_d(p_d)=
{\mathfrak{n}}_{{\beta},d}(p_d)=\nu_{{\beta},d}.$$ Moreover, since the automorphism $({\Sigma}_d,p_d,S_d)$ is Bernoulli, so is the automorphism $(X_{\beta},\nu_{{\beta},d},\tau_{\beta})$ – by Ornstein’s Theorem, which states that every factor of a Bernoulli automorphism is Bernoulli [@Orn].
Let $\rho_d:{\Sigma}_d\to{\Sigma}_d^+$ and $\rho_{\beta}:X_{\beta}\to X_{\beta}^+$ denote the natural projections. Then $$\mu_{{\beta},d}^+ = ({\mathfrak{n}}_{{\beta},d}^+\rho_d)(p_d).$$ Let $$\nu_{{\beta},d}^+ :=
(\rho_{\beta}{\mathfrak{n}}_{{\beta},d})(p_d)=({\varphi}_{\beta}\rho_{\beta})(\nu_{{\beta},d}).$$ By the above, $\nu_{{\beta},d}^+$ is $\tau_{\beta}^+$-invariant and its natural extension is Bernoulli.
Note that one of the reasons why one may have difficulties with the one-sided Erdös measure is because the operations of normalization and projection do not commute (and therefore, $\mu_{{\beta},d}^+$ is not shift-invariant). However, in a sense these operations are “commuting up to a finite number of coordinates", which allows us to finish the proof of Theorem \[main\].
The measures $\mu_{{\beta},d}^+$ and $\nu_{{\beta},d}^+$ are equivalent.
Let $P_{{\beta},d}=\rho_\beta\circ{\mathfrak{n}}_{{\beta},d}$ and $Q_{{\beta},d}={\mathfrak{n}}_{{\beta},d}^+\circ\rho_d$. Since $p_d$ is preserved by the action of the group that changes a finite number of coordinates, it suffices to show that there exist two maps $C:{\Sigma}_d\to{\Sigma}_d$ and $C':{\Sigma}_d\to{\Sigma}_d$ defined $p_d$-almost everywhere with the following properties: each of them is a step function with a countable number of steps, it changes just a finite number of coordinates of $x$ and also $$P_{{\beta},d}(C(x))=Q_{{\beta},d}(x),\,\,
Q_{{\beta},d}(C'(x))=P_{{\beta},d}(x).\label{change}$$ If we construct such functions, this will prove Theorem \[main\], because then we will have $$({\mathfrak{n}}_{{\beta},d}^+\rho_d)(p_d)\prec(\rho_{\beta}{\mathfrak{n}}_{{\beta},d})(p_d),\,\,
(\rho_{\beta}{\mathfrak{n}}_{{\beta},d})(p_d)\prec({\mathfrak{n}}_{{\beta},d}^+\rho_d)(p_d),$$ i.e., $\nu_{{\beta},d}^+\approx\mu_{{\beta},d}^+$.
Let $\mathfrak A$ be the two-sided analog of $\mathfrak A^+$ defined in the proof of Corollary \[maincor\], namely, $\mathfrak A$ is the set of all sequences in the alphabet ${\mathcal{A}}_d$ whose normalization ${\mathfrak{n}}_{{\beta},d}$ is blockwise in the sense of Corollary \[maincor\]. This set has full measure $p_d$. Let $x\in\mathfrak A$; then $x$ can be represented in the block form $x=(\dots A_{-2}A_{-1}A_0A_1\dots)$ and $${\mathfrak{n}}_{{\beta},d}(x)=(\dots
{\mathfrak{n}}_{{\beta},d}(A_{-1}){\mathfrak{n}}_{{\beta},d}(A_0){\mathfrak{n}}_{{\beta},d}(A_1)\dots),$$ where each word ${\mathfrak{n}}_{{\beta},d}(A_n)$ is of the same length as $A_n$.
Let $A_0=(x_{-a}\dots x_{b})$ with $a>0, b>0$ (one can always achieve this by merging blocks). By the above, we have $P_{{\beta},d}(x)=({\varepsilon}_1,\dots,{\varepsilon}_b,*)$ and $Q_{{\beta},d}(x)=({\varepsilon}'_1,\dots,{\varepsilon}'_b,*)$ (where the star indicates one and the same tail), i.e., the difference is only at the first $b$ places. Thus, it is easy to guess what $C$ and $C'$ may look like. Namely, put $$(C(x))_j=
\begin{cases} x_j,&j<-a\,\,\mathrm{or}\,\, j>b\\
0,& -a\le j\le 0\\
{\varepsilon}_j',&1\le j\le b
\end{cases}$$ and $$(C'(x))_j=
\begin{cases} x_j,&j<-a\,\,\mathrm{or}\,\, j>b\\
0,& -a\le j\le 0\\
{\varepsilon}_j,&1\le j\le b.
\end{cases}$$ Both functions are obviously well defined for $p_d$-a.e. $x$, are step functions with a countable number of steps and change a finite number of coordinates. The equalities in (\[change\]) are satisfied as well, which proves Theorem \[main\].
**4. Acknowledgement.** The author wishes to thank E. Olivier and A. Thomas for helpful discussions and suggestions.
[99]{}
Sh. Akiyama, *On the boundary of self-affine tiling generated by Pisot numbers*, to appear in J. Math. Soc. Japan, http://mathalg.ge.niigata-u.ac.jp/akiyama
J. C. Alexander and D. Zagier, *The entropy of a certain infinitely convolved Bernoulli measure*, J. London Math. Soc. **44** (1991), 121–134.
D. Berend and Ch. Frougny, *Computability by finite automata and Pisot bases*, Math. Systems Theory **27** (1994), 275–282.
K. Dajani, C. Kraaikamp and B. Solomyak, *The natural extension of the $\beta$-transformation*, Acta Math. Hungar. **73** (1996), 97–109.
P. Erdös, [*On a family of symmetric Bernoulli convolutions*]{}, Amer. J. Math. [**61**]{} (1939), 974–975.
Ch. Frougny, *Representations of numbers and finite automata*, Math. Systems Theory **25** (1992), 37–60.
Ch. Frougny and B. Solomyak, *Finite beta-expansions*, Ergodic Theory Dynam. Systems **12** (1992), 713–723.
A. Garsia, *Arithmetic properties of Bernoulli convolutions*, Trans. Amer. Math. Soc. **102** (1962), 409–432.
M. Hollander, Linear Numeration Systems, Finite Beta Expansions, and Discrete Spectrum of Substitution Dynamical Systems, Ph.D. Thesis, University of Washington, 1996.
E. Olivier, N. Sidorov and A. Thomas, *On the Gibbs properties of Bernoulli convolutions, and related problems in fractal geometry*, preprint.
D. Ornstein, Ergodic Theory, Randomness and Dynamical Systems, New Haven and London, Yale Univ. Press, 1974.
W. Parry, *On the ${\beta}$-expansions of real numbers*, Acta Math. Acad. Sci. Hung. **11** (1960), 401–416.
K. Schmidt, *On periodic expansions of Pisot numbers and Salem numbers*, Bull. London Math. Soc. **12** (1980), 269–278.
K. Schmidt, *Algebraic codings of expansive group automorphisms and two-sided beta-shifts*, Monatsh. Math. **129** (2000), 37–61.
N. Sidorov, *Bijective and general arithmetic codings for Pisot toral automorphisms*, J. Dynam. Control Systems **7** (2001), 447–472.
N. Sidorov, *An arithmetic group associated with a Pisot unit, and its symbolic-dynamical representation*, Acta Arith. **101** (2002), 199–213.
N. Sidorov and A. Vershik, *Ergodic properties of Erdös measure, the entropy of the goldenshift, and related problems*, Monatsh. Math. **126** (1998), 215–261.
M. Smorodinsky, *$\beta$-automorphisms are Bernoulli shifts*, Acta Math. Acad. Sci. Hung. **24** (1973), 273–278.
B. Solomyak, [*On the random series $\sum\pm \lambda^i$ (an Erdös problem)*]{}, Annals of Math. [**142**]{} (1995), 611–625.
A. Vershik, *Arithmetic isomorphism of the toral hyperbolic automorphisms and sofic systems*, Functional. Anal. Appl. **26** (1992), 170–173.
[^1]: Supported by the EPSRC grant no GR/R61451/01.
|
---
abstract: 'A fast charged particle crossing the boundary between the chiral matter and vacuum radiates the transition radiation. Its most remarkable features — the resonant behavior at a certain emission angle and the circular polarization of the spectrum — depend on the parameters of the chiral anomaly in a particular material/matter. The chiral transition radiation can be used to investigate the chiral anomaly in such diverse media as the quark-gluon plasma, the Weyl semimetals, and the axionic dark matter.'
author:
- 'Xu-Guang Huang'
- Kirill Tuchin
title: Transition radiation as a probe of chiral anomaly
---
Introduction
============
The chiral matter — the matter containing chiral fermions — possesses a number of unique properties originating from the quantum phenomenon of the chiral anomaly. Those chiral materials that exist at room temperatures, such as the Weyl semimetals, can be studied with high precision. On the other hand, there are forms of chiral matter that exist only under extreme conditions, such as the quark-gluon plasma; their study requires novel approaches. In this letter we argue that an informative insight into properties of the chiral matter can be gained using the chiral analogue of the transition radiation.
The transition radiation is emitted when a fast charged particle, i.e. a particle moving with energy much greater than the medium ionization energy, crosses the boundary between two media having different dielectric constants. This is a classical effect predicted by Ginzburg and Frank in 1945 [@Ginzburg:1945zz] (reviewed in [@Ginzburg-Tsytovich]) that has a number of practical applications. The quantum corrections were calculated in [@Garibyan; @Baier:1998ej; @Schildknecht:2005sc]. The transition radiation originates from the difference of the photon wave function on the two sides of the boundary. At high energies this is manifested in the variation of the plasma frequency across the boundary.
In chiral matter the photon dispersion relation is modified due to the chiral anomaly [@Deser:1981wh]. As a result, when a fast charged particle crosses the boundary between chiral matter and vacuum it emits transition radiation, which we will refer to as the [*chiral transition radiation*]{}. Its spectrum was recently derived by one of us in [@Tuchin:2018sqe] employing a method developed in [@Schildknecht:2005sc]. It possesses distinctive features compared to other forms of radiation by fast particles in matter. Thus, the chiral transition radiation can be employed to investigate the chiral anomaly in various forms of matter and materials, as we explain in the forthcoming sections.
The spectrum
============
The dispersion relation of a photon in a chiral medium can be most readily computed using the Maxwell-Chern-Simons theory [@Wilczek:1987mv; @Carroll:1989vb; @Sikivie:1984yz; @Kalaydzhyan:2012ut], which is an effective low-energy approximation of QED in a chiral medium. The gauge part of this theory reads $$\mathcal{L}= -\frac{1}{4}F_{\mu\nu}^2-\frac{c_A}{4}\,\theta\, F_{\mu\nu}\tilde F^{\mu\nu}, \label{a1}$$ where the pseudo-scalar field $\theta$ encapsulates the effect of the chiral anomaly and $c_A$ is the anomaly coefficient. In practical applications one usually assumes that $\theta$ is either (i) spatially uniform and adiabatically time dependent, $\dot \theta\neq 0$, or (ii) time-independent and slightly anisotropic, ${{\bm \nabla}}\theta \neq 0$. The dispersion relation in each case takes the form [@Tuchin:2014iua; @Tuchin:2017vwb; @Yamamoto:2015maz; @Qiu:2016hzd] $$\omega^2= k^2+ \mu^2(k,\lambda), \label{a3}$$ where ${{\bm k}}$ is the photon momentum and $\lambda=\pm 1$ is its circular polarization. The parameter $\mu$ is a complex function of its arguments that is sensitive to the spatial or temporal variation of the $\theta$-field. In the case (i) it reads $$\mu^2(k,\lambda)=-\lambda\sigma_\chi k, \label{a6}$$ where $\sigma_\chi = c_A\dot \theta$ is the chiral conductivity [@Fukushima:2008xe; @Kharzeev:2009pj]. In the case (ii) it takes the form $$\mu^2(k,\lambda)=b^2-\lambda\,({\bm k}\cdot{\bm b}), \label{a5}$$ where ${{\bm b}}= c_A{{\bm \nabla}}\theta$ [@Qiu:2016hzd]. In Weyl semimetals ${{\bm \nabla}} \theta$ is the separation in momentum space between the Weyl nodes of right-handed and left-handed fermions. We observe that $\mu$ can be real or imaginary depending on the photon polarization, whereas in non-chiral matter $\mu$ is always real. This is the origin of the distinct transition radiation pattern from chiral matter that we discuss in the next few paragraphs.
We start with the case (i) representing spatially uniform matter. We assume that the boundary is located at $z=0$ and the particle moves in the $z$ direction, i.e. perpendicular to the boundary. At the boundary $\mu$ is discontinuous. In the ultrarelativistic limit, when $\mu$ can be treated as a small parameter, the photon wave function in the radiation gauge reads $$A= \frac{{\bm \epsilon}_\lambda}{\sqrt{2\omega V}}\, e^{i\omega z+i{\bm k}_\bot\cdot {\bm x}_\bot-i\omega t }\exp\left\{ -\frac{i}{2\omega}\int_0^z\big(k_\bot^2+\mu^2\big)\,dz'\right\}, \label{a8}$$ where ${{\bm \epsilon}}_\lambda$ is the polarization vector such that ${{\bm \epsilon}}_\lambda \cdot {{\bm k}}=0$ and $V$ is the normalization volume. By the same token the fermion wave function is $$\psi= \frac{u(p)}{\sqrt{2{\varepsilon}V}}\, e^{i{\varepsilon}z-i{\varepsilon}t}\exp\left\{ i{\bm p}_\bot\cdot{\bm x}_\bot-i\frac{p_\bot^2+m^2}{2{\varepsilon}}\,z\right\}, \label{a9}$$ where ${{\bm p}}$ and ${\varepsilon}$ are the fermion momentum and energy.
The scattering matrix element for the photon emission process is $$S= -ie Q\int\bar\psi'\gamma^\mu\psi\, A^*_\mu\, d^4x=i(2\pi)^3\delta(\omega+{\varepsilon}'-{\varepsilon})\,\delta({\bm p}_\bot-{\bm k}_\bot-{\bm p}'_\bot)\,{\mathcal M}, \label{a13}$$ where $Q$ is the fermion electric charge and the prime distinguishes the final fermion energy and momentum. The invariant amplitude reads $${\mathcal M}= -eQ\,\bar u(p')\,\hat\epsilon_\lambda^{\,*}\, u(p)\; 2x(1-x)\left\{ \frac{1}{q_\bot^2+\kappa_\lambda-i\gamma} - \frac{1}{q_\bot^2+x^2m^2} \right\}, \label{a15}$$ where $x=\omega/{\varepsilon}$ is the fraction of the incident fermion energy carried away by the radiated photon, ${{\bm q}}_\bot = x{{\bm p}}-{{\bm k}}_\bot$, $\kappa_\lambda = x^2m^2+(1-x)\mu^2$ and $\gamma$ is the resonance width that depends on the system geometry, electrical conductivity etc. The radiated photon spectrum can be computed as $$\frac{dN}{dx\, d^2k_\bot}= \frac{1}{2(2\pi)^3}\sum_{\lambda,\sigma,\sigma'}\vert{\mathcal M}\vert^2, \label{a19}$$ where $\sigma$, $\sigma'$ are the initial and final fermion spins. Substitution of [(\[a15\])]{} into [(\[a19\])]{} yields $$\frac{dN}{dx\, d^2k_\bot}= \frac{e^2Q^2}{(2\pi)^3\,x}\left\{ \left(\frac{x^2}{2}-x+1\right)q_\bot^{2}+\frac{x^4m^2}{2}\right\} \sum_\lambda\left\vert \frac{1}{q_\bot^2+\kappa_\lambda-i\gamma}-\frac{1}{q_\bot^2+x^2m^2}\right\vert^{2}. \label{a21}$$ For positive $\kappa_\lambda$, the photon spectrum [(\[a21\])]{} coincides with the standard formula for the transition radiation with $\mu$ being the plasma frequency [@Schildknecht:2005sc]. However, the main contribution to the photon spectrum arises from the pole at $q_\bot^2= -\kappa_\lambda>0$, i.e. when $\kappa_\lambda$ is negative. Keeping only the term that is most singular at $\gamma\to 0$, we find the chiral transition radiation spectrum of photons [@Tuchin:2018sqe] $$\frac{dN}{dx\, d^2k_\bot}= \frac{e^2Q^2}{(2\pi)^3\,x}\left\{ \left(\frac{x^2}{2}-x+1\right)q_\bot^{2}+\frac{x^4m^2}{2}\right\}\frac{1}{(q_\bot^2+\kappa_\lambda)^2+\gamma^2}. \label{a24}$$ It is remarkable that the spectrum is circularly polarized[^1]. Indeed, $\kappa_\lambda$ is negative only if $\lambda \sigma_\chi>0$ and $x<[1+m^2/(\lambda\sigma_\chi{\varepsilon})]^{-1}$. In other words, only one of the possible photon polarizations exhibits the resonant behavior, while the other one is suppressed. Whether the photon spectrum is right- or left-hand polarized depends on the sign of $\sigma_\chi$.
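As a quick numerical illustration of this polarization selection (ours, not part of the original analysis; the parameter values are arbitrary natural units chosen only to make the threshold visible), one can evaluate $\kappa_\lambda$ directly and compare the sign change with the analytic threshold quoted above:

```python
import numpy as np

# illustrative values in arbitrary natural units
m, eps, sigma_chi = 1.0, 10.0, 0.05      # fermion mass, fermion energy, chiral conductivity

x = np.linspace(1e-4, 0.999, 100_000)    # photon energy fraction x = omega / eps
for lam in (+1, -1):                     # photon circular polarization
    mu2 = -lam * sigma_chi * x * eps     # mu^2 = -lambda * sigma_chi * omega
    kappa = x**2 * m**2 + (1 - x) * mu2  # kappa_lambda = x^2 m^2 + (1 - x) mu^2
    if np.any(kappa < 0):
        print(lam, x[kappa < 0].max())   # resonance occurs only for lambda * sigma_chi > 0
    else:
        print(lam, "kappa_lambda > 0 for all x: no resonance for this polarization")

print(1.0 / (1.0 + m**2 / (sigma_chi * eps)))   # analytic threshold [1 + m^2/(sigma_chi eps)]^{-1}
```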
Since $\mu^2\approx -\lambda\sigma_\chi \omega$, the angular distribution of the photons peaks at the angle $\vartheta^2= q^2_\bot/ \omega^2 = -\kappa_\lambda/ x^2{\varepsilon}^2$ with respect to the fermion momentum. If the fermion mass is negligible and bearing in mind that most photons are soft ($x\ll 1$) we can estimate $\vartheta^2\approx\lambda\sigma_\chi/ \omega$.
Applications
============
[**1.**]{} As the first application, consider jet emission from the quark-gluon plasma (QGP) with a homogeneous chiral conductivity. QGP is isotropic at the scales of interest here, hence the corresponding case is (i). Jets in heavy-ion collisions are produced by highly energetic colored particles. If a jet originates from a quark (as opposed to a gluon), we expect radiation of circularly polarized photons in a cone with the opening angle $\vartheta\sim \sqrt{|\sigma_\chi|/ \omega}$ with respect to the jet momentum. The chiral conductivity is an unknown parameter. If we estimate it as $\sigma_\chi \sim 10$ MeV, then $\omega =1$ GeV photons are emitted at the angle $\vartheta\sim 0.1$, provided that the jet energy ${\varepsilon}$ is much larger than $\omega$. Thus the observation of circularly polarized photons at angle $\vartheta$ to the jet direction would be an indication of the chiral transition radiation.
[**2.**]{} We have seen that the main feature of the transition radiation from chiral matter is the emergence of the resonance factor in [(\[a24\])]{}. It arises entirely due to the energy and momentum conservation in a $1\to 2$ process involving a photon with complex $\mu$. Thus we expect to see the same resonant factor as in [(\[a24\])]{} arising in the case (ii) which deals with an anisotropic matter. The calculation of the pre-factor requires a more careful analysis that will be presented elsewhere. In the high energy limit Eq. [(\[a5\])]{} reduces to $\mu^2\approx -\lambda \omega b\cos\beta$, where $\beta$ is the angle between ${{\bm b}}$ and the photon momentum. The soft photon emission angle in the massless limit is $\vartheta^2 \approx \lambda b \cos\beta/\omega$. Similarly to the previous case (i), the photon spectrum is circularly polarized. One can verify that now $\kappa_\lambda$ is negative only if $\lambda\cos\beta>0$ and $x< [1+m^2/(\lambda{\varepsilon}b\cos\beta)]^{-1}$. Thus the polarization direction depends on whether ${{\bm b}}$ points towards or away from the boundary. Furthermore, since $\mu^2$ is proportional to $\cos\beta$, the radiation is maximal when $\beta =0$ or $\pi$ and vanishes in the perpendicular direction. To estimate the characteristic radiation angle discussed above, consider a Weyl semimetal with $b= (\alpha/\pi) 80$ eV [@Xu:2015cga; @Lv:2015pya]. An electron with energy about GeV moving parallel to ${{\bm b}}$ ($\beta=0$) would radiate, say, $\omega= 10$ MeV photons at $\vartheta=1.3\cdot 10^{-4}$. This can be tested by injecting a beam of energetic electrons normal to a Weyl semimetal film and measuring the polarization and angular distribution of the photons emitted in a cone with the opening angle $\vartheta$ around the beam direction.
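For convenience, the two back-of-the-envelope angles quoted above can be reproduced from $\vartheta\approx\sqrt{\sigma_\chi/\omega}$ and $\vartheta\approx\sqrt{b\cos\beta/\omega}$ (at $\beta=0$) with a few lines of code. This sketch is ours; only the numbers already quoted in the text enter it.

```python
import math

# (i) quark-gluon plasma estimate: sigma_chi ~ 10 MeV, omega = 1 GeV photons
sigma_chi_MeV, omega_MeV = 10.0, 1000.0
print(math.sqrt(sigma_chi_MeV / omega_MeV))        # ~ 0.1 rad, the quoted jet-cone angle

# (ii) Weyl semimetal estimate: b = (alpha/pi) * 80 eV, omega = 10 MeV photons, beam along b
alpha = 1.0 / 137.036                              # fine-structure constant
b_eV, omega_eV = (alpha / math.pi) * 80.0, 10.0e6
print(math.sqrt(b_eV / omega_eV))                  # ~ 1.3e-4 rad, the quoted electron-beam angle
```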
[**3.**]{} The chiral transition radiation emitted by protons traveling through dark matter lumps [@Chang:2015odg] can be used to search for axionic dark matter. In this case $\dot\theta$ is proportional to the axion mass $m_a$, which is unknown but expected to be very small. The emission angle of the chiral transition radiation with respect to the direction of a cosmic ray is of the order of $\sqrt{c_A\theta_0m_a/\omega}$ where $\theta_0$ is the average value of $\theta$. Taking $\theta_0\sim 10^{-19}$ [@Graham:2013gfa], $m_a\sim 10^{-6}$ eV, and $\omega\sim 1$ TeV we obtain $\vartheta\sim 10^{-15}$. Measurement of the photon spectrum emitted by a cosmic ray at such angles might be possible over astronomical distances.
Summary
=======
In summary, we computed the transition radiation spectrum at the boundary between chiral matter and vacuum, given by [(\[a24\])]{}, and argued that its unique features — the resonant enhancement at a characteristic angle $\vartheta$ and the circular polarization — can be used as a direct measurement of the chiral anomaly in chiral matter and materials.
We thank Dima Kharzeev for valuable discussions. XGH is supported by the Young 1000 Talents Program of China, NSFC through Grants No. 11535012 and No. 11675041. KT is supported in part by the U.S. Department of Energy under Grant No. DE-FG02-87ER40371.
[100]{}
V. L. Ginzburg and I. M. Frank, “Radiation of a uniformly moving electron due to its transition from one medium into another,” J. Phys. (USSR) [**9**]{} (1945) 353 \[Zh. Eksp. Teor. Fiz. [**16**]{} (1946) 15\]. V. L. Ginzburg and V. N. Tsytovich, “Several problems of the theory of transition radiation and transition scattering", Phys. Rep. [**49**]{} (1979) 1.
G.M. Garibyan, Zh. Eksp. Teor. Fiz. 39, 1630 (1960); JETP 12, 1138 (1961)
V. N. Baier and V. M. Katkov, “Quantum theory of transition radiation and transition pair creation,” Phys. Lett. A [**252**]{}, 263 (1999) D. Schildknecht and B. G. Zakharov, “Transition radiation in quantum regime as a diffractive phenomenon,” Phys. Lett. A [**355**]{}, 289 (2006)
S. Deser, R. Jackiw and S. Templeton, “Topologically Massive Gauge Theories,” Annals Phys. [**140**]{}, 372 (1982) \[Annals Phys. [**281**]{}, 409 (2000)\] Erratum: \[Annals Phys. [**185**]{}, 406 (1988)\].
F. Wilczek, “Two Applications of Axion Electrodynamics,” Phys. Rev. Lett. [**58**]{}, 1799 (1987). S. M. Carroll, G. B. Field and R. Jackiw, “Limits on a Lorentz and Parity Violating Modification of Electrodynamics,” Phys. Rev. D [**41**]{}, 1231 (1990). P. Sikivie, “On the Interaction of Magnetic Monopoles With Axionic Domain Walls,” Phys. Lett. B [**137**]{}, 353 (1984). T. Kalaydzhyan, “Chiral superfluidity of the quark-gluon plasma,” Nucl. Phys. A [**913**]{}, 243 (2013)
K. Tuchin, “Radiative instability of quantum electrodynamics in chiral matter,” arXiv:1806.07340 \[hep-ph\]. K. Tuchin, “Electromagnetic field and the chiral magnetic effect in the quark-gluon plasma,” Phys. Rev. C [**91**]{}, 064902 (2015) K. Tuchin, “Taming instability of magnetic field in chiral matter,” Nucl. Phys. A [**969**]{}, 1 (2018) N. Yamamoto, “Axion electrodynamics and nonrelativistic photons in nuclear and quark matter,” Phys. Rev. D [**93**]{}, 085036 (2016)
Z. Qiu, G. Cao and X. G. Huang, “On electrodynamics of chiral matter,” Phys. Rev. D [**95**]{}, 036002 (2017)
K. Fukushima, D. E. Kharzeev and H. J. Warringa, “The Chiral Magnetic Effect,” Phys. Rev. D [**78**]{} (2008) 074033
D. E. Kharzeev and H. J. Warringa, “Chiral Magnetic conductivity,” Phys. Rev. D [**80**]{}, 034028 (2009) S. Y. Xu [*et al.*]{}, “Discovery of a Weyl Fermion semimetal and topological Fermi arcs,” Science [**349**]{}, 613 (2015).
B. Q. Lv [*et al.*]{}, “Experimental discovery of Weyl semimetal TaAs,” Phys. Rev. X [**5**]{}, 031013 (2015). C. Chang [*et al.*]{} \[DES Collaboration\], “Wide-Field Lensing Mass Maps from Dark Energy Survey Science Verification Data,” Phys. Rev. Lett. [**115**]{}, no. 5, 051301 (2015). P. W. Graham and S. Rajendran, “New Observables for Direct Detection of Axion Dark Matter,” Phys. Rev. D [**88**]{}, 035023 (2013).
[^1]: In contrast, the ordinary transition radiation is linearly polarized [@Ginzburg:1945zz].
|
---
abstract: 'We study the projective logarithmic potential $\mathbb{G}_{\mu}$ of a Probability measure $\mu$ on the complex projective space $\mathbb{P}^{n}$. We prove that the Range of the operator $\mu\longrightarrow \mathbb{G}_{\mu}$ is contained in the (local) domain of definition of the complex Monge-Ampère operator acting on the class of quasi-plurisubharmonic functions on $\mathbb{P}^n$ with respect to the Fubini-Study metric. Moreover, when the measure $\mu $ has no atom, we show that the complex Monge-Ampère measure of its Logarithmic potential is an absolutely continuous measure with respect to the Fubini-Study volume form on $\mathbb{P}^{n}$'
address: |
Ibn Tofail university\
faculty of sciences\
PO 242 Kenitra Morocco
author:
- Fatima Zahra Assila
title: 'Logarithmic Potentials on $\mathbb P^n$'
---
Introduction and statement of the results
=========================================
Logarithmic potentials of Borel measures in the complex plane play a fundamental role in Logarithmic Potential Theory. This is due to the fact that this theory is associated with the Laplace operator, which is a linear elliptic partial differential operator of second order. It is well known that in higher dimensions plurisubharmonic functions are rather connected to the complex Monge-Ampère operator, which is a fully non-linear second order partial differential operator. Therefore pluripotential theory cannot be described by logarithmic potentials. However, the class of logarithmic potentials gives a nice class of plurisubharmonic functions which turns out to lie in the local domain of definition of the complex Monge-Ampère operator. This study was carried out by Carlehed [@5] in the case of compactly supported measures on $\mathbb{C}^{n}$ or on a bounded hyperconvex domain in $\mathbb{C}^{n}$.
Our main goal is to extend this study to the complex projective space, motivated by the fact that the complex Monge-Ampère operator plays an important role in Kähler geometry (see [@13]). A large class of singular potentials on which the complex Monge-Ampère operator is well defined was introduced (see [@12], [@8], [@4]). However, the global domain of definition of the complex Monge-Ampère operator on compact Kähler manifolds is not yet well understood. Using the characterization of the local domain of definition given by Cegrell and Blocki (see [@2], [@3], [@7]), we show that the range of the operator $\mu\longmapsto \mathbb{G}_{\mu}$ is contained in the local domain of definition of the complex Monge-Ampère operator on the complex projective space $(\mathbb{P}^{n},\omega)$ equipped with the Fubini-Study metric.
Let $\mu$ be a probability measure on $\mathbb{P}^{n}$. Then its projective logarithmic potential is defined on $\mathbb{P}^{n}$ as follows : $$\mathbb{G}_{\mu}(\zeta):=\int_{\mathbb{P}^{n}} G(\zeta,\eta)d\mu(\eta)\quad\hbox{where}\quad G(\zeta,\eta):=\log{\vert\zeta\wedge\eta\vert\over\vert\zeta\vert\vert\eta\vert}$$
\[thm1\] Let $\mu$ be a probability measure on $\mathbb{P}^{n}$. Then the following properties hold.\
1. The potential $\mathbb{G}_{\mu}$ is a negative $\omega$-plurisubharmonic function on $\mathbb{P}^{n}$ normalized by the following condition : $$\int_{\mathbb{P}^{n}} \mathbb{G}_{\mu} \, \omega_{FS}^n = -\alpha_{n},$$ where $\alpha_{n}$ is a numerical constant.\
2. $\mathbb{G}_{\mu}\in W^{1,p}(\mathbb{P}^{n})$ for any $0 < p < 2n$.\
3. $\mathbb{G}_{\mu}\in DMA_{loc}(\mathbb{P}^{n},\omega)$.
We also show a regularizing property of the operator $\mu\rightarrow\mathbb{G}_{\mu}$ acting on probability measures on $\mathbb{P}^{n}$.
\[thm2\] Let $\mu$ be a probability measure on $\mathbb{P}^{n}$ with no atoms. Then the Monge-Ampère measure $(\omega +dd^{c}\mathbb{G}_{\mu})^{n}$ is absolutely continuous with respect to the Fubini-Study volume form on $\mathbb{P}^{n}$.
The logarithmic potential, proof of Theorem 1.1
===============================================
The complex projective space can be covered by a finite number of charts given by $ \mathcal{U}_{k}:=\{[\zeta_{0},\zeta_{1},\cdots,\zeta_{n}]\in\mathbb{P}^{n}\ :\ \zeta_{k}\not=0\}\ ( 0\leq k\leq n)$ and the corresponding coordinate chart is defined on $\mathcal{U}_{k}$ by the formula
$$z^{k}(\zeta)=z^{k}:=(z_{j}^{k})_{0\leq j\leq n, j\not=k}\quad\hbox{where}\quad z_{j}^{k}:={\zeta_{j}\over\zeta_{k}}\quad\hbox{for}\quad j\not=k$$ The Fubini-Study metric $\omega = \omega_{FS}$ is given on $\mathcal U_k$ by $\omega|_{\mathcal{U}_{k}}={1\over 2}dd^{c}\log(1+\vert z^{k}\vert^{2})$. The projective logaritmic kernel on $\mathbb{P}^{n}\times\mathbb{P}^{n}$ is naturally defined by the following formula
$$G(\zeta,\eta):=\log{\vert\zeta\wedge\eta\vert\over\vert\zeta\vert\vert\eta\vert}=\log\sin{d(\zeta,\eta)\over\sqrt{2}}\,
\, \hbox{where} \,\, \vert\zeta\wedge\eta\vert^{2}=\sum_{0\leq i<j\leq n}\vert\zeta_{i}\eta_{j}-\zeta_{j}\eta_{i}\vert^{2}$$
and $d$ is the geodesic distance associated to the Fubini-Study metric (see [@15],[@6]).
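Behind the second equality is the Lagrange-type identity $\vert\zeta\wedge\eta\vert^{2}+\vert\langle\zeta,\eta\rangle\vert^{2}=\vert\zeta\vert^{2}\vert\eta\vert^{2}$, so that $\vert\zeta\wedge\eta\vert/(\vert\zeta\vert\vert\eta\vert)=\sin\theta$ where $\cos\theta=\vert\langle\zeta,\eta\rangle\vert/(\vert\zeta\vert\vert\eta\vert)$; with the normalization of $d$ implicit in the kernel formula, $d(\zeta,\eta)=\sqrt{2}\,\theta$. A small numerical check of this identity (ours, with randomly drawn homogeneous coordinates):

```python
import numpy as np

rng = np.random.default_rng(0)

def wedge_norm_sq(a, b):
    """|a wedge b|^2 = sum_{i<j} |a_i b_j - a_j b_i|^2 for complex vectors a, b."""
    return sum(abs(a[i] * b[j] - a[j] * b[i]) ** 2
               for i in range(len(a)) for j in range(i + 1, len(a)))

# random homogeneous coordinates on P^3, i.e. vectors in C^4
zeta = rng.normal(size=4) + 1j * rng.normal(size=4)
eta = rng.normal(size=4) + 1j * rng.normal(size=4)

lhs = wedge_norm_sq(zeta, eta) + abs(np.vdot(zeta, eta)) ** 2
rhs = np.vdot(zeta, zeta).real * np.vdot(eta, eta).real
print(abs(lhs - rhs))    # ~ 1e-13: |zeta ^ eta|^2 + |<zeta, eta>|^2 = |zeta|^2 |eta|^2
```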
We recall some definitions and give a useful characterization of the local domain of definition of the complex Monge-Ampère operator given by Z. Błocki (see [@2],[@3]).
Let $\Omega \subset {\mathbb C}^n$ be a domain. By definition the set $DMA_{loc}(\Omega)$ denotes the set of plurisubharmonic functions $\phi$ on $\Omega$ for which there exists a positive Borel measure $\sigma$ on $\Omega$ such that for every open $U\subset\subset \Omega$ and every sequence $(\phi_{j})\subset PSH(U)\cap C^{\infty}(U)$ decreasing to $\phi$ in $U$, the sequence of measures $(dd^{c}\phi_{j})^{n}$ converges weakly to $\sigma$ in $U$. In this case, we put $(dd^{c}\phi)^{n}=\sigma$.
The following result of Blocki gives a useful characterization of the local domain of definition of the complex Monge-Ampère operator.
(Z. Błocki [@2], [@3]). 1. If $\Omega\subset\mathbb{C}^{2}$ is an open set then $DMA_{loc}(\Omega)=PSH(\Omega)\cap W^{1,2}_{loc}(\Omega)$.
2\. If $n\geq 3$, a plurisubharmonic function $\phi$ on an open set $\Omega\subset\mathbb{C}^{n}$ belongs to $DMA_{loc}(\Omega)$ if and only if for any $z\in \Omega$ there exists a neighborhood $U_{z}$ of $z$ in $\Omega$ and a sequence $(\phi_{j})\subset PSH(U_{z})\cap C^{\infty}(U_{z})\searrow \phi$ in $U_{z}$ such that the sequences $$\vert \phi_{j}\vert^{n-p-2}d\phi_{j}\wedge d^{c}\phi_{j}\wedge(dd^{c}\phi_{j})^{p}\wedge(dd^{c}\vert z\vert^{2})^{n-p-1},\quad p=0,1,\cdots,n-2$$ are locally weakly bounded in $U_{z}$.
Observe that by Bedford and Taylor [@1], the class of locally bounded plurisubharmonic functions in $\Omega$ is contained in $DMA_{loc}(\Omega)$. By the work of J.-P. Demailly [@9], any plurisubharmonic function in $\Omega$ bounded near the boundary $\partial \Omega$ is contained in $DMA_{loc}(\Omega)$. Let $(X,\omega)$ be a Kähler manifold of dimension $n$. We denote by $PSH (X,\omega)$ the set of $\omega$-plurisubharmonic functions in $X$. Then it is possible to define in the same way the local domain of definition $DMA_{loc} (X,\omega)$ of the complex Monge-Ampère operator on $(X,\omega)$. A function $\varphi \in PSH (X,\omega)$ belongs to $DMA_{loc} (X,\omega)$ iff for any local chart $(U,z)$ the function $\phi := \varphi + \rho \in DMA_{loc} (U)$ where $\rho$ is a Kähler potential of $\omega$. Then the previous theorem extends trivially to this general case.
Let $(\chi_{j})_{0\leq j\leq n}$ be a fixed partition of unity subordinate to the covering $(\mathcal{U}_{j})_{0\leq j\leq n}$. We define $m_{j}=\int\chi_{j}d\mu$ and $J=\{j\in\{0,1,\cdots,n\}\ :\ m_{j}\not=0\}$. Then $J\not=\emptyset$ and, for $j\in J$, the measure $\mu_{j}:={1\over m_{j}}\chi_{j}\mu$ is a probability measure on $\mathbb{P}^{n}$ supported in $\mathcal{U}_{j}$, and we have the following convex decomposition of $\mu$: $$\mu=\sum_{j\in J}m_{j}\mu_{j}$$ Therefore the potential $\mathbb{G}_{\mu}$ can be written as a convex combination $$\mathbb{G}_{\mu}=\sum_{j\in J} m_j \mathbb{G}_{\mu_{j}}.$$
To show that $\mathbb{G}_{\mu}\in DMA_{loc}(\mathbb{P}^{n},\omega)$, it suffices to consider the case of a measure compactly supported in an affine chart. Without loss of generality, we may always assume that $\mu$ is compactly supported in $\mathcal{U}_{0}$ and we are reduced to the study of the potential $\mathbb{G}_{\mu}$ on the open set $\mathcal{U}_{0}$. The restriction of $G(\zeta,\eta)$ to $\mathcal{U}_{0}\times\mathcal{U}_{0}$ can be expressed in the affine coordinates as $$G(\zeta,\eta)=N(z,w)-{1\over 2}\log(1+\vert z\vert^{2})$$ where $$N(z,w):={1\over 2}\log{\vert z-w\vert^{2}+\vert z\wedge w\vert^{2}\over 1+\vert w\vert^{2}}$$ will be called the projective logarithmic kernel on $\mathbb{C}^{n}$.
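The following short numerical check (ours; the chart, dimension and sample points are arbitrary) confirms this affine expression of the kernel against its homogeneous definition:

```python
import numpy as np

rng = np.random.default_rng(1)

def wedge_norm_sq(a, b):
    """|a wedge b|^2 = sum_{i<j} |a_i b_j - a_j b_i|^2."""
    return sum(abs(a[i] * b[j] - a[j] * b[i]) ** 2
               for i in range(len(a)) for j in range(i + 1, len(a)))

n = 3
z = rng.normal(size=n) + 1j * rng.normal(size=n)          # affine coordinates on U_0
w = rng.normal(size=n) + 1j * rng.normal(size=n)
zeta = np.concatenate(([1.0 + 0j], z))                    # zeta = (1, z), eta = (1, w)
eta = np.concatenate(([1.0 + 0j], w))

G = 0.5 * np.log(wedge_norm_sq(zeta, eta)
                 / (np.vdot(zeta, zeta).real * np.vdot(eta, eta).real))
N = 0.5 * np.log((np.linalg.norm(z - w) ** 2 + wedge_norm_sq(z, w))
                 / (1 + np.linalg.norm(w) ** 2))
print(abs(G - (N - 0.5 * np.log(1 + np.linalg.norm(z) ** 2))))   # ~ 1e-16
```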
\[diag\] 1.The kernel $N$ is upper semicontinuous in $\mathbb{C}^{n}\times\mathbb{C}^{n}$ and smooth off the diagonal of $\mathbb{C}^{n}\times\mathbb{C}^{n}$.\
2. For any fixed $w\in\mathbb{C}^{n}$, the function $ N(.,w)\ :\ z\rightarrow N(z,w)$ is plurisubharmonic in $\mathbb{C}^{n}$ and satisfies the following inequality $${1\over 2}\log{\vert z-w\vert^{2}\over 1+\vert w\vert^{2}} \leq N(z,w)\leq{1\over 2}\log(1+\vert z\vert^{2}),\quad\forall\ (z,w)\in \mathbb{C}^{n}\times\mathbb{C}^{n}$$
From lemma \[diag\], we have the following properties of the projective logarithmic kernel $G$ on $\mathbb{P}^{n}\times\mathbb{P}^{n}$.
1\. The kernel $G$ is a non positive upper semi continuous function on $\mathbb{P}^{n}\times\mathbb{P}^{n}$ and smooth off the diagonal of $\mathbb{P}^{n}\times\mathbb{P}^{n}$.\
2. For any fixed $\eta\in\mathbb{P}^{n}$, the function $G(.,\eta)\ :\ \zeta \rightarrow G(\zeta,\eta)$ is a non positive $\omega$-plurisubharmonic function in $\mathbb{P}^{n}$ and smooth in $\mathbb{P}^{n}\setminus\{\eta\}$, hence $G(.,\eta)\in DMA_{loc}(\mathbb{P}^{n},\omega)$. Moreover $(\omega + dd^c G(\cdot,\eta))^n = \delta_\eta$.
For a probability measure $\nu$ on $\mathbb{C}^{n}$, we define the projective logarithmic potential of $\nu$ as follows $$\mathbb{V}_{\nu}(z):={1\over 2}\int_{\mathbb{C}^{n}}\log{\vert z-w\vert^{2}+\vert z\wedge w\vert^{2}\over 1+\vert w\vert^{2}}d\nu(w)$$
\[pluris\] Let $\nu$ be a probability measure on $\mathbb{C}^{n}$. Then the function $\mathbb{V}_{\nu}(z)$ is plurisubharmonic in $\mathbb{C}^{n}$ and for all $z\in\mathbb{C}^{n}$ $$\mathbb{V}_{\nu}(z)\leq{1\over 2}\log(1+\vert z\vert^{2}).$$
Also $\mathbb{V}_{\nu}\in DMA_{loc}(\mathbb{C}^{n})$ and $$(dd^{c}\mathbb{V}_{\nu})^{n}=\int_{\mathbb{C}^{n}\times\cdots\times\mathbb{C}^{n}}dd_{z}^{c}N(.,w_{1})\wedge\cdots\wedge dd_{z}^{c}N(.,w_{n})\,d\nu(w_{1})\cdots d\nu(w_{n})$$
[**Proof of theorem \[thm1\]**]{} As we have seen we have $${\mathbb G}_\mu = \sum_{j \in J} m_j {\mathbb G}_{\mu_j},$$ where $\mu_j$ is compactly supported in the affine chart $\mathcal U_j$.
Observe that for a fixed $k$ one can write on $\mathcal U_k$ $$\mathbb{G}_{\mu_k}(\zeta)+{1/2}\log(1+\vert z\vert^{2})= \mathbb{V}_{\mu_k}(z), \, \, \text{where} \, \, z := z^{k} (\zeta) \in {\mathbb C}^n,$$ which is plurisubharmonic in ${\mathbb C}^n$. Hence $\mathbb{G}_{\mu}$ is $\omega$-plurisubharmonic in $\mathbb{P}^{n}$.
2\. By the co-area formula ( see [@10] ) $$\begin{aligned}
\int_{\mathbb{P}^{n}}\mathbb{G}_{\mu}(\zeta)dV(\zeta)&=&\int_{0}^{\pi/\sqrt{2}}\log\sin{r\over\sqrt{2}}A(r)dr\\
&=&-{c_{n}\over\sqrt{2}n^{2}}\end{aligned}$$ where $A(r):=c_{n}\sin^{2n-2}(r/\sqrt{2})\sin(\sqrt{2}r)$ is the area of the sphere of radius $r$ centered at $\eta$ in $\mathbb{P}^{n}$ and $c_{n}$ is a numerical constant (see [@14], page 168, or [@11], Lemma 5.6).\
Let $p\geq 1$. Since $\vert\nabla d\vert_{\omega}=1$, also by the co-area formula $$\begin{aligned}
\int_{\mathbb{P}^{n}}\vert\nabla\mathbb{G}_{\mu}(\zeta)\vert^{p}dV(\zeta)&\leq&
\int_{\mathbb{P}^{n}}\cot^{p}\Bigl({d(\zeta,\eta)\over\sqrt{2}}\Bigr)d\mu(\eta)dV(\zeta)\\&\leq& 2\sqrt{2}c_{n}\int_{0}^{\pi/2}\sin^{2n-1-p}t dt\end{aligned}$$ which is finite if and only if $p < 2n$. Hence for all $p\in ]0,2n[\ :\ \mathbb{G}_{\mu}\in W^{1,p}(\mathbb{P}^{n})$ ( by concavity of $x^{p}$).
3\. When $n = 2$, we can apply the previous result to conclude that $ \mathbb{G}_{\mu}\in DMA_{loc}(\mathbb{P}^{2})$. When $n\geq 3$, we apply Blocki’s characterization stated above to show that $\mathbb{G}_{\mu_k} \in DMA_{loc}(\mathcal U_k)$. We consider the following approximating sequence $$\mathbb{V}^{\epsilon}_{\mu_k}(z)={1\over 2}\int_{\mathbb{C}^{n}}\log\Bigl({\vert z-w\vert^{2}+\vert z\wedge w\vert^{2}\over 1+\vert w\vert^{2}}+\epsilon^{2}\Bigr)d \mu_k(w)\searrow \mathbb{V}_{\mu_k}(z),$$ and use the next classical lemma on Riesz potentials to establish uniform estimates on the weighted gradients, as required in Blocki’s theorem.
Let $\mu$ be a probability measure on $\mathbb{C}^{n}$. For $0<\alpha<2n$, define the Riesz potential of $\mu$ by $$J_{\mu,\alpha}(z):=\int_{\mathbb{C}^{n}}{d\mu(w)\over\vert z-w\vert^{\alpha}}$$ If $0<p<2n/\alpha$ then $J_{\mu,\alpha}\in L^{p}_{loc}(\mathbb{C}^{n})$.
Regularizing property and proof of theorem \[thm2\]
===================================================
We prove a regularizing property of the operator $\mu\rightarrow\mathbb{G}_{\mu}$. By the localization process explained before, the proof of Theorem 1.2 follows from the following theorem, which generalizes and improves a result of Carlehed (see [@5]).
\[thm3\] Let $\mu$ be a probability measure on $\mathbb{C}^{n}$ with no atoms and let $\psi \in \mathcal L ({\mathbb C}^n)$. Assume that $\psi$ is smooth in some open subset $U \subset {\mathbb C}^n$. Then for any $0 \leq m \leq n $, the Monge-Ampère measure $(dd^{c}\mathbb{V}_{\mu})^{m} \wedge (dd^c \psi)^{n - m}$ is absolutely continuous with respect to the Lebesgue measure on $U$.
The proof is based on the following elementary lemma.
\[absolut\] Let $(w_1, \cdots , w_n) \in ({\mathbb C}^n)^n$ be fixed such that $w_{1}\not=w_{2}$. Let $\psi \in \mathcal L ({\mathbb C}^n)$. Assume that $\psi$ is smooth in some open subset $U \subset {\mathbb C}^n$. Then for any integer $ 0 \leq m \leq n$, the measure $$\bigwedge_{1 \leq j \leq m} dd^{c}\log(\vert \cdot -w_j\vert^{2}+\vert \cdot \wedge w_j\vert^{2}) \wedge(dd^c \psi) ^{n - m}$$ is absolutely continuous with respect to the Lebesgue measure on $U$.
[**Proof of theorem \[thm3\]:**]{} We first assume that $m = n$. Let $K\subset\mathbb{C}^{n}$ be a compact set such that $(dd^{c}\vert z\vert^{2})^{n}(K)=0$. Set $\Delta=\{(w,w,\cdots,w)\ :\ w\in\mathbb{C}^{n}\}$. Since $\mu$ puts no mass at any point, it follows by Fubini’s theorem that $\mu^{\otimes n} (\Delta)=0$. By Proposition \[pluris\], $$\int_K (dd^{c}\mathbb{V}_{\mu})^n =\int_{(\mathbb{C}^{n})^{n}\setminus\Delta}f(w_{1},\cdots,w_{n})d\mu^{\otimes n}(w_{1},\cdots,w_{n})$$ where $$f(w_{1},\cdots,w_{n})=\int_{K}dd^{c}\log(\vert z-w_1\vert^{2}+\vert z\wedge w_1\vert^{2})\wedge\cdots\wedge dd^{c}\log(\vert z-w_{n}\vert^{2}+\vert z\wedge w_{n}\vert^{2})$$ By Lemma \[absolut\], for any $(w_{1},\cdots,w_{n})\not\in\Delta$ (after relabelling the $w_{j}$ so that $w_{1}\not=w_{2}$), $f(w_1, \cdots, w_n) = 0$, hence $(dd^{c}\mathbb{V}_{\mu})^{n}(K)=0$. The case $1 \leq m \leq n$ follows from Lemma \[absolut\] in the same way. The proof is complete.
[**Proof of theorem \[thm2\]:**]{} As we have seen in the proof of Theorem \[thm1\], one can write on each coordinate chart $\mathcal U_k$, $${\mathbb G}_\mu (\zeta) = m_k \mathbb{G}_{\mu_k} + \psi_k (z),$$ where $\psi_k \in \mathcal L ({\mathbb C}^n)$ is a smooth function in ${\mathbb C}^n$. Using Theorem \[thm3\] again we conclude that $(\omega + dd^c {\mathbb G}_\mu)^n$ is absolutely continuous with respect to the Lebesgue measure on each chart $\mathcal U_k$. Therefore it is absolutely continuous with respect to the Fubini-Study volume form on $\mathbb P^n$.
Acknowledgements {#acknowledgements .unnumbered}
================
It is a pleasure to thank my supervisors Ahmed Zeriahi and Said Asserda for their support, suggestions and encouragement. I also would like to thank professor Vincent Guedj for very useful discussions and suggestions. A part of this work was done when the author was visiting l’Institut de Mathématiques de Toulouse in March 2016. She would like to thank this institution for the invitation.
[99]{} E. Bedford, B.A. Taylor [*The Dirichlet problem for the complex Monge-Ampère equation.*]{} Z.Blocki, [*On the definition of the Monge-Ampère oparator in $\mathbb{C}^{2}$*]{}. Math.Ann.328 (2004),no 3, 415-423. Z.Blocki, [*The domain of definition of the complex Monge-Ampère operator.*]{} Amer.J.math. 128 (2006), no 2, 519-530. S.Bouckson, P.Essidieux, V.Guedj, A.Zeriahi, [*Monge-Ampère equations in big cohomology classes.*]{} Acta math. 205 (2010), no 2, 199-262. M.Carlehed, [*Potentials in pluripotential theory.*]{} Annales de la Faculté des sciences de Toulouse, Mathématiques, Série 6, Tome 8 (1999) no. 3,439-469. E. Cartan, [*Leçons sur la géometrie projective complexe*]{}. Gauthier-Villars (1950). U.Cegrell, [*The general definition of the complex Monge-Ampère operator.*]{} Ann.Inst.Fourier Grenoble 54, I (2004), 159-179. D.Coman, V.Guedj, A.Zeriahi, [*Domains of definition of Monge-Ampère operator on compact Kähler manifolds.*]{} Marth.Zeit. 259 (2008),no 2, 393-418. J.-P. Demailly, [*Monge-Ampère operators, Lelong numbers and intersection theory*]{}. Complex Analysis and Geometry, Univ. Series in Math., edited by V. Ancona and A. Silva, Plenum Press, New-York (1993). \] H. Federer, [*Geometric Measure Theory.*]{} Springer Verlag, 1996
S.Helgason, [*The Radon transform on Euclidean spaces, two point homogeneous spaces and Grassmann manifolds.*]{} Acta Math. 113 (1965), 153-180. V.Guedj, A.Zeriahi, [*The weighted Monge-Ampère energy of quasiplurisubharmonic functions.*]{} J.Funct.Anal. 250 (2007), no 2, 442-482. V.Guedj, A.Zeriahi, [*Degenerate Complex Monge-Ampère Equations.*]{} Tracts in Mathematics, EMS, Vol 26, 2017. D.L.Ragozin, [*Constructive polynomial approximation on spheres and projective sapces.*]{} Trans.Amer.Math.Soc. 162 (1971), 157-170. E. Study, [*Kürzeste Wege im komplexen Gebiet*]{}. Math. Ann. , 60 (1905) pp. 321-378.
|
---
abstract: |
Understanding the acceleration of the universe and its cause is one of the key problems in physics and cosmology today, and is best studied using a variety of mutually complementary approaches. Daly and Djorgovski (2003, 2004) proposed a model independent approach to determine the expansion and acceleration history of the universe and a number of important physical parameters of the dark energy as functions of redshift directly from the data. Here, we apply the method to explicitly determine the first and second derivatives of the coordinate distance with respect to redshift, $y^{\prime}$ and $y^{\prime \prime}$, and combine them to solve for the kinetic and potential energy density of the dark energy as functions of redshift, $K(z)$ and $V(z)$.
A data set of 228 supernova and 20 radio galaxy measurements with redshifts from zero to 1.79 is used for this study. Values of $y^{\prime}$ and $y^{\prime \prime}$ are combined to study the dimensionless acceleration rate of the universe as a function of redshift, $q(z)$. The only assumptions underlying our determination of $q(z)$ are that the universe is described by a Robertson-Walker (RW) metric and is spatially flat. We find that the universe is accelerating today, and was decelerating in the recent past. The transition from acceleration to deceleration occurs at a redshift of about $z_T = 0.42 \pm {}^{0.08}_{0.06}$. Values of $y^{\prime}$ and $y^{\prime \prime}$ are combined to determine $K(z)$ and $V(z)$. These are shown to be consistent with the values expected in a standard Lambda Cold Dark Matter (LCDM) model.
address:
- 'Department of Physics, Pennsylvania State University, Berks Campus, Reading, PA 19610'
- 'Division of Physics, Mathematics, and Astronomy, California Institute of Technology, MS 105-24, Pasadena, CA 91125'
author:
- 'Ruth A. Daly [^1], and S. G. Djorgovski [^2]'
title: 'A Nearly Model-Independent Characterization of Dark Energy Properties as a Function of Redshift'
---
The acceleration of the universe at the present epoch has been studied in the context of specific models using coordinate distances to type Ia supernovae [@R98; @P99; @T03; @K03; @B04; @R04; @A05], and FRII radio galaxies [@DGW98; @GDW00; @DG02; @PDMR03], in addition to other techniques. These studies indicate that the universe is expanding at an accelerating rate at the present epoch. Generally, these studies are done in the context of a specific cosmological model, such as an open universe with non-relativistic matter, a cosmological constant, and space curvature (e.g. [@R98; @P99; @DGW98; @GDW00]), a spatially flat universe with non-relativistic matter and dark energy that has an energy density that can evolve with redshift but which maintains a constant equation of state (e.g. [@P99; @T03; @K03; @B04; @R04; @A05; @DG02]), or a spatially flat universe with non-relativistic matter and an evolving scalar field (e.g. [@PR00; @PDMR03]). In each of these studies it is implicitly assumed that the universe is described by a RW metric and that General Relativity is the correct theory of gravity; in addition, a particular functional form for the redshift evolution of the energy density of some new component is assumed. The data are then used to constrain the parameters that describe the assumed functional form for the redshift evolution of whatever was being considered as the driver of the acceleration of the universe.
![\[dydz248compare\] Results obtained with the mock data set of 248 sources described in the text. The results are in excellent agreement with the input cosmology, with no apparent bias.](Y248.eps){width="70mm"}

![\[dydz248compare\] Results obtained with the mock data set of 248 sources described in the text. The results are in excellent agreement with the input cosmology, with no apparent bias.](dydz248compare.eps){width="70mm"}
![\[dydz248snrg\] The first derivative of the coordinate distance with respect to redshift for the actual data set of 248 sources. The zero redshift value we measure is $y ^\prime _0 = 1.025 \pm 0.022$; the predicted value in all models is 1.000. The values for the standard LCDM model with $\Omega_{\Lambda} = 0.7$ and $\Omega_{0m}=0.3$ are shown as the solid line in this and all subsequent plots. Best fit Cardassian (dotted line) and Chaplygin Gas (dashed line) models are also shown, and are described in the text. ](d2ydz2compare248.eps){width="70mm"}
![\[dydz248snrg\] The first derivative of the coordinate distance with respect to redshift for the actual data set of 248 sources. The zero redshift value we measure is $y ^\prime _0 = 1.025 \pm 0.022$; the predicted value in all models is 1.000. The values for the standard LCDM model with $\Omega_{\Lambda} = 0.7$ and $\Omega_{0m}=0.3$ are shown as the solid line in this and all subsequent plots. Best fit Cardassian (dotted line) and Chaplygin Gas (dashed line) models are also shown, and are described in the text. ](dydz248snrg.eps){width="70mm"}
![\[Q248\] The deceleration parameter $q(z)$, where $q(z) = -[1+y^{\prime \prime}(1+z)/y^{\prime}]$ [@dd03]. The zero redshift value is $q_0 = -0.46 \pm 0.08$. The predicted value in the LCDM model shown is $-0.55$. Our fits are systematically higher than the LCDM model shown by about 1$\sigma$.](d2ydz2snrg248.eps){width="70mm"}
![\[Q248\] The deceleration parameter $q(z)$, where $q(z) = -[1+y^{\prime \prime}(1+z)/y^{\prime}]$ [@dd03]. The zero redshift value is $q_0 = -0.46 \pm 0.08$. The predicted value in the LCDM model shown is $-0.55$. Our fits are systematically higher than the LCDM model shown by about 1$\sigma$.](Q248.eps){width="70mm"}
A complementary approach was suggested by [@dd03; @dd04] who showed that the recent expansion and acceleration history of the universe, and some properties of the driver of the acceleration, can be determined directly from the data after specifying a minimal number of assumptions. Assuming only that the universe is described by a RW metric and is spatially flat, the data can be used to solve for the dimensionless expansion and acceleration rates of the universe as functions of redshift, $E(z)$ and $q(z)$, respectively. This can be done without specifying a theory of gravity, or anything else. The function $q(z)$ thus obtained is a direct measure of the acceleration/deceleration of the universe at different epochs. The key ingredients that go into the determination of $E(z)$ and $q(z)$ are the first and second derivatives of the coordinate distance with respect to redshift, $dy/dz$ (or $y^\prime$) and $d^2y/dz^2$ (or $y^{\prime \prime}$), which are obtained from the coordinate distances to supernovae and radio galaxies at known redshift, as described by [@dd03; @dd04]. Thus, rather than assuming a functional form for the redshift evolution of the “dark energy” and constraining the model parameters, it is possible to solve for quantities such as $q(z)$ directly.
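The derivatives $y^{\prime}(z)$ and $y^{\prime \prime}(z)$ have to be extracted from noisy, discretely sampled coordinate distances. The sketch below is only one simple realization of such a numerical differentiation (a quadratic fit in a sliding redshift window); it is not necessarily the exact smoothing scheme of [@dd03; @dd04], and the window width and function names are placeholders.

```python
import numpy as np

def derivatives_from_distances(z, y, width=0.4):
    """Estimate y'(z) and y''(z) from sampled dimensionless coordinate
    distances by fitting a quadratic in a sliding redshift window.
    z, y  : 1-D arrays of redshifts and coordinate distances y(z).
    width : full width of the redshift window (an illustrative choice).
    Returns arrays (z_c, yp, ypp) evaluated at each window centre."""
    z = np.asarray(z, dtype=float)
    y = np.asarray(y, dtype=float)
    order = np.argsort(z)
    z, y = z[order], y[order]
    z_c, yp, ypp = [], [], []
    for zc in z:
        mask = np.abs(z - zc) < width / 2.0
        if mask.sum() < 5:          # need enough points for a stable fit
            continue
        # y(z) ~ a + b (z - zc) + c (z - zc)^2  =>  y'(zc) = b, y''(zc) = 2 c
        a, b, c = np.polynomial.polynomial.polyfit(z[mask] - zc, y[mask], 2)
        z_c.append(zc)
        yp.append(b)
        ypp.append(2.0 * c)
    return np.array(z_c), np.array(yp), np.array(ypp)
```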
This direct approach indicates that the universe is accelerating today, and was decelerating in the recent past. The data used for the results shown here include the sample of 157 “Gold” supernovae [@R04], the sample of 71 supernovae from the Supernova Legacy Survey [@A05], and the 20 radio galaxies of [@GDW00], as described in detail in [@DD05]. The total sample of 248 sources is shown in Fig. \[Y248\]; there are no systematic differences seen among the three groups of measurements in the redshift ranges of their overlaps.
![\[V248\] The potential energy of the dark energy $V(z)$ in units of the critical density today. The zero redshift value is $V_0 = 0.62 \pm 0.05$; the expected value for the LCDM model shown is 0.7.](K248.eps){width="70mm"}
![\[V248\] The potential energy of the dark energy $V(z)$ in units of the critical density today. The zero redshift value is $V_0 = 0.62 \pm 0.05$; the expected value for the LCDM model shown is 0.7.](V248.eps){width="70mm"}
The first and second derivatives of the coordinate distance with respect to redshift are obtained using the numerical differentiation method described by [@dd03; @dd04]. To test whether the method introduces a bias in the results, a mock data set of 248 sources with the same redshift distribution and fractional uncertainty per point as the actual data was constructed assuming a LCDM model with $\Omega_{0m}$ = 0.3 and $\Omega_{\Lambda}=0.7$, and analyzed. The results are shown in Figs. \[dydz248compare\] and \[d2ydz2compare248\]. We see that no bias has been introduced by the numerical differentiation technique.
Values of $y^{\prime}$ and $y^{\prime \prime}$ are shown in Figs. \[dydz248snrg\] and \[d2ydz2snrg248\]. The ringing seen in these figures is most likely due to sparse sampling. In these plots, and in the ones that follow, we do not consider the fluctuations at higher redshifts to be statistically significant, as they are commensurate with our derived 1-$\sigma$ error bars. The results are consistent with the LCDM model, which is based on General Relativity with non-relativistic matter $\Omega_{0m}$ and a cosmological constant, and which provides an excellent description of the data. Curves showing predictions of two modified gravity models in a spatially flat universe are also shown in Fig. \[dydz248snrg\]. These curves are obtained using the best-fit model parameters obtained by [@BBSS05] for the Cardassian model of [@FL02] and the generalized Chaplygin gas model of [@BBS02] based on the model of [@KMP01]; this is consistent with the results obtained by [@LNP05]. Clearly, the LCDM model provides a better description of the data than either of the modified gravity models. Thus, this large-scale test of General Relativity shows that GR provides an excellent description of the data on very large length scales of about 10 billion light years.
The deceleration parameter $q(z)$ is shown in Fig. \[Q248\]. These results allow a determination of the redshift at which the universe transitions from an accelerating phase to a decelerating phase; we find this redshift to be $z_T = 0.42 \pm {}^{0.08}_{0.06}$, consistent with the values quoted by [@R04] and [@dd03; @dd04]. The upper bound on this transition redshift is uncertain because of the fluctuations in $q(z)$, which are due to sparse sampling at high redshift.
It is well known that $K = 0.5 (\rho+P)$ and $V = 0.5 (\rho-P)$, where $\rho$ and $P$ are the energy density and pressure of the dark energy. In [@dd04] we show that both $\rho$ and $P$ may be written in terms of the first and second derivatives of the coordinate distance. Combining these, we find that $({K/\rho_{oc}}) = -(1+z)\, y^{\prime \prime}(y^\prime)^{-3}/3 - 0.5\,\Omega_{0m}(1+z)^3$ and $({V/\rho_{oc}}) = (y^\prime)^{-2}\left[1+ (1+z)\,y^{\prime \prime}(y^\prime)^{-1}/3\right] - 0.5\,\Omega_{0m} (1+z)^3$, where $\rho_{oc}$ is the critical density at the current epoch. These are shown in Figs. \[K248\] and \[V248\]. In obtaining $K$ and $V$, the assumptions made to obtain $P$ and $\rho$ apply: the universe is spatially flat; the kinematics of the universe are accurately described by General Relativity; and two components, the dark energy and non-relativistic matter (with $\Omega_{0m} = 0.3$), are sufficient to account for the kinematics of the universe out to a redshift of about 2 (see the discussion in [@dd04]). Functional forms for $P(z)$ and $\rho(z)$ for the dark energy are [*not*]{} assumed, nor is any assumption made regarding the equation of state of the dark energy. The work presented here on the potential energy, $V(z)$, is complementary to the work of [@SRSS00; @CTC04; @SVJ05; @SLP05].
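Given $y^{\prime}(z)$ and $y^{\prime \prime}(z)$, the quantities discussed in this paper follow algebraically. The sketch below simply evaluates the expressions quoted above for $q(z)$, $K(z)/\rho_{oc}$ and $V(z)/\rho_{oc}$; the function name is a placeholder and the value $\Omega_{0m}=0.3$ is the assumption used in the text.

```python
import numpy as np

def kinematic_quantities(z, yp, ypp, omega_m=0.3):
    """Evaluate q(z), K(z)/rho_oc and V(z)/rho_oc from y'(z) and y''(z),
    using the spatially flat RW expressions quoted in the text."""
    z = np.asarray(z, dtype=float)
    yp = np.asarray(yp, dtype=float)
    ypp = np.asarray(ypp, dtype=float)
    q = -(1.0 + (1.0 + z) * ypp / yp)                            # deceleration parameter
    K = -(1.0 + z) * ypp / (3.0 * yp**3) - 0.5 * omega_m * (1.0 + z)**3
    V = (1.0 + (1.0 + z) * ypp / (3.0 * yp)) / yp**2 - 0.5 * omega_m * (1.0 + z)**3
    return q, K, V
```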
Thus, our (nearly) model-independent method provides results which are consistent with those from the more traditional approaches, in a largely complementary fashion; at the very least, it is a new way of looking at the data. As the quality and size of relevant data sets increase, we can expect even more useful constraints to emerge from this approach.
A. G. Riess, et al., AJ, **116**, 1009 (1998).
S. Perlmutter et al., ApJ, **517**, 565 (1999).
J. L. Tonry et al., ApJ, **594**, 1 (2003).
R. A. Knop et al., ApJ, **598**, 102 (2003).
B. J. Barris, et al., ApJ, **602**, 571 (2004).
A. G. Riess, et al., ApJ, **607**, 665 (2004).
P. Astier, et al., Astron. & Astrophys., **447**, 31 (2006).
R. A. Daly, E. J. Guerra, and L. Wan, in Proc. 33d Rencontre de Moriond [*Fundamental Parameters in Cosmology*]{}, eds. J. Tran Thanh Van, Y. Giraud-Heraud, F. Bouchet, T. Damour, and Y. Mellier (Paris: Editions Frontieres), 323 (1998); astro-ph/9803265
E. J. Guerra, R. A. Daly, and L. Wan, ApJ, **544**, 659 (2000).
R. A. Daly, and E. J. Guerra, AJ, **124**, 1831 (2002).
S. Podariu, R. A. Daly, M. P. Mory, and B. Ratra, B., ApJ, **584**, 577 (2003).
S. Podariu, and B. Ratra, ApJ, **532**, 109 (2000).
R. A. Daly and S. G. Djorgovski, ApJ, **597**, 9 (2003).
R. A. Daly and S. G. Djorgovski, ApJ, **612**, 652 (2004).
R. A. Daly and S. G. Djorgovski, astro-ph/0512578 (2005).
M. C. Bento, O. Bertolami, N. M. C. Santos, and A. A. Sen, astro-ph/0512076 (2005).
K. Freese and M. Lewis, Phys. Lett. B., **540**, 1 (2002).
M. C. Bento, O. Bertolami, and A. A. Sen, Phys. Rev. D., **66**, 043507 (2002).
A. Y. Kamenshchik, U. Moschella, and V. Pasquier, Phys. Lett. B., **511**, 265 (2001).
R. Lazkoz, S. Nesseris, and L. Perivolaropoulos, JCAP, **11**, 10 (2005).
T. Saini, S. Raychaudhury, V. Sahni, and A. A. Starobinsky, Phys. Rev. Lett., **85**, 1162 (2000).
V. F. Cardone, A. Troisi, and S. Capozziello, Phys. Rev. D **69**, 3517 (2004).
J. Simon, L. Verde, and R. Jimenez, Phys. Rev. D **71**, 123001 (2005).
M. Sahlen, A. R. Liddle, and D. Parkinson, Phys. Rev. D **72**, 083511 (2005).
[^1]: This work was supported in part by the U. S. National Science Foundation (NSF) under grant AST-0507465.
[^2]: This work was supported in part by the NSF under grant AST-0407448 and by the Ajax Foundation. We acknowledge the outstanding work and efforts of many observers who obtained the valuable data used in this study.
|
---
abstract: 'We present a search for $\beta^+$/EC double beta decay of [$\rm^{120}Te$ ]{}performed with the CUORICINO experiment, an array of $\rm{TeO}_2$ cryogenic bolometers. After collecting 0.0573 [kg$\cdot$y ]{}of $^{120}\rm{Te}$, we see no evidence of a signal and therefore set the following limits on the half-life: $\rm T^{0\nu}_{1/2}> 1.9 \cdot 10^{21}$ y at 90% C.L. for the $0\nu$ mode and $\rm T^{2\nu}_{1/2}> 7.6 \cdot 10^{19}$ y at 90% C.L. for the $2\nu$ mode. These results improve the existing limits by almost three orders of magnitude (four in the case of $0\nu$ mode).'
title: 'Search for $\beta^+$/EC double beta decay of [$\rm^{120}Te$ ]{}'
---
The discovery of neutrino oscillations [@oscillation] proved that neutrinos are massive, but there are fundamental issues that oscillation experiments cannot address: measuring the absolute neutrino mass and determining whether the neutrino is the antiparticle of itself, thus being of Majorana nature, or not. To answer these questions it is necessary to look for neutrinoless double beta, [$0\nu\beta\beta$]{}, decays, which would bring conclusive evidence of the Majorana nature of the neutrino and whose decay rates constrain the absolute neutrino mass [@dbeta].
Double beta decays can occur by either emitting two electrons or two positrons. In the latter case, either of the positron emissions can be replaced by an electron capture (EC). While $\beta^-\beta^-$ decays have the largest expected rates, $\beta^+/EC$ and $\beta^+\beta^+$ decays provide clear signatures from the 511-keV annihilation gamma rays. Energy and momentum conservation in the EC/EC decay requires an extra radiative process, reducing the rate by several orders of magnitude.
This paper reports on a search for $\beta^+$/EC decays ${\ensuremath{\rm^{120}Te} }\to{\ensuremath{\rm^{120}Sn} }+ e^+$ and ${\ensuremath{\rm^{120}Te} }\to{\ensuremath{\rm^{120}Sn} }+ e^+ + 2\nu$ with the CUORICINO experiment, an array of [TeO$_2$ ]{}cryogenic bolometers at the Gran Sasso National Laboratories. The [$\rm^{120}Te$ ]{}isotope has been only minimally investigated from a theoretical point of view. No calculations of the half-life of [$0\nu\beta^+$/EC ]{}decay of [$\rm^{120}Te$ ]{}are available for comparison with our result. The predictions for the same decay mechanism in other nuclei, assuming the effective Majorana mass $\langle m_\nu \rangle$ = 1 eV, range between $10^{26}$–$10^{27}$ y [@theo; @theo2]. The larger value is mostly due to the need to have simultaneously a decay and a capture and to the reduced phase space available. The only reference for the two neutrino mode [@Abad] yields a theoretical value for the half-life of $4.4 \cdot 10^{26}$ y.
In recent years, experimental limits on the [$\rm^{120}Te$ ]{}decays have been set using an array of CdZnTe detectors located at the Gran Sasso Underground Laboratory [@Cobra07; @Cobra09] and a HPGe detector at the Modane Underground Laboratory [@Bar07; @Bar08]. The present best limits in the literature are T$^{0\nu}_{1/2} > 4.1 \cdot 10^{17}$ y [@Cobra09] and T$^{(0\nu+2\nu)}_{1/2} > 1.9 \cdot 10^{17}$ y [@Bar07].
The search strategy is the following: the emitted positron carries a kinetic energy up to K$_{max}$ = Q - 2m$_e$c$^2$ - E$_b$, where $E_b$ is the binding energy of the captured electron within the atomic shell and Q$=(1714.8\pm1.3)$ keV [@Scielzo] is the difference in $^{120}$Te and $^{120}$Sn atomic masses. The electron capture is most likely to occur from the K shell, whose binding energy is 30.5 keV. The ratio of L-capture to K-capture for most elements is around 10% (12% for $^{120}$Sb $\rightarrow$ $^{120}$Sn EC decay) [@EC]. In the following we will always assume a capture from the K shell.
The bolometer where the decay occurs will see the deposition of both the binding energy and the kinetic energy of the positron (maximum energy: $E_{0}=K_{max}+E_b=Q-2m_ec^2=692.8$ keV, independent of $E_b$).[^1] Once at rest, the positron annihilates with an electron, and two photons with energy $E_\gamma=511.0$ keV are emitted. These photons can interact with the same bolometer or with a nearby one, or they can escape undetected.
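For concreteness, the numbers quoted above combine as follows: $$E_{0}=Q-2m_{e}c^{2}=1714.8~\textnormal{keV}-1022.0~\textnormal{keV}=692.8~\textnormal{keV},\qquad K_{max}=E_{0}-E_{b}=692.8-30.5=662.3~\textnormal{keV}.$$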
The analysis presented here searches for the signatures with the best signal-to-noise ratio. This is not the case for the events where the entire energy is deposited inside the detector, due to the low detection efficiency (see Table \[Tab1\]). For the $0\nu$ mode, where the positron is monochromatic, this means the coincidence between a bolometer with an energy deposition consistent with $E_\gamma$ and one with either $E_0$ or $E_0+E_\gamma$ and the coincidence of one bolometer with a signal of $E_0$ and two other bolometers with a signal of $E_\gamma$. For the $2\nu$ mode, where the positron is emitted with a continuum of kinetic energies between 0 and K$_{max}$, this means the triple coincidence of one bolometer with a signal between $E_b$ and $E_0$ and two others with a signal of $E_\gamma$.
The CUORICINO detector {#sec:qino}
======================
![A sketch of the CUORICINO assembly showing the tower hanging from the mixing chamber, the various heat shields and the external shielding.[]{data-label="Fig1"}](Fig1.ps){width="30mm"}
The CUORICINO experiment is detailed in Ref. [@Qino08]. Briefly, it is an array of [TeO$_2$ ]{}crystals acting as cryogenic bolometers at a working temperature of 8–10 mK and with heat capacity $\sim 2.3\cdot10^{-9}$ J/K. To measure temperature variations corresponding to a few keV ($\Delta T\sim 0.1~\mu$K/keV), heavily doped high-resistance germanium thermistors (NTD, Neutron Transmutation Doped) are glued to each crystal.
The CUORICINO detector consists of 62 [TeO$_2$ ]{}crystals arranged in 13 planes. Each of the upper 10 planes and the lowest one consists of four $5\times5\times5$ cm$^3$ [TeO$_2$ ]{}crystals, while the 11th and 12th planes each have nine $3\times3\times6$ cm$^3$ crystals. All crystals have natural isotopic abundances except four of the smaller crystals, two of which are enriched to 82.3% in $^{128}$Te and two to 75% in $^{130}$Te.
The natural abundance of [$\rm^{120}Te$ ]{}is 0.096% [@TOI], so the 39.4 kg of the CUORICINO experiment (enriched crystals are not included) contain $N_{\beta\beta}=1.43 \cdot 10^{23}$ nuclei of $^{120}$Te .
The experiment is shielded with two layers of lead of 10 cm minimum thickness each. The outer layer is made of common low radioactivity lead, while the inner layer is made of special lead with a low activity of $^{210}$Pb. The electrolytic copper of the refrigerator thermal shields provides an additional shield with a minimum thickness of 2 cm. An external 10 cm layer of borated polyethylene was installed to reduce the background due to environmental neutrons. The detector itself is shielded against the intrinsic radioactive contamination of the dilution unit materials by an internal layer of 10 cm of Roman lead [@Qino98], located inside the cryostat immediately above the tower. The background from the activity in the lateral thermal shields of the dilution refrigerator is reduced by a lateral internal 1.4 cm thick shield of Roman lead. Another 8 cm lead shield is located at the bottom of the tower. The refrigerator is surrounded by a Plexiglas anti-radon box flushed with clean $N_2$ from a liquid nitrogen evaporator and is also enclosed in a Faraday cage to eliminate electromagnetic interference. A sketch of the assembly is shown in Fig. \[Fig1\].
Signature \[energies in keV\] $\mu$ $\varepsilon$ \[%\]
------------------------------- ------- ---------------------
(30.5 – 692.8) 1 3.00 $\pm$ 0.02
(30.5 – 692.8) + 511 2 3.40 $\pm$ 0.02
(30.5 – 692.8) + 511 + 511 3 0.45 $\pm$ 0.01
(541.5 – 1203.8) 1 16.28 $\pm$ 0.04
(541.5 – 1203.8) + 511 2 6.23 $\pm$ 0.03
(1052.5 – 1714.8) 1 10.04 $\pm$ 0.03
: Signatures of [$\rm^{120}Te$ ]{}[$\beta^+$/EC ]{}decay in an array of [TeO$_2$ ]{}detectors and their corresponding multiplicity ($\mu$), that is the number of detectors with an energy deposition above threshold. The detection efficiency for the $0\nu$ mode in CUORICINO is reported in the last column ($\varepsilon$). We denote with the $+$ sign the coincidence of energies released in different detectors. For the $0\nu$ mode the energy released in the detector where the decay occurred corresponds to the upper bound of the interval. The errors are statistical only.[]{data-label="Tab1"}
{width="0.9\linewidth"}
{width="0.45\linewidth"} {width="0.45\linewidth"}
For the present analysis, the full CUORICINO statistics (data collected between May 2004 and May 2008) for a total exposure of 0.0573 [kg$\cdot$y ]{}of $^{120}\rm{Te}$ is used. The total energy spectrum of all detectors is shown in Fig. \[Fig2\]. Several peaks of radioactive isotopes are clearly visible, the most prominent are labeled.
Data acquisition and analysis
=============================
CUORICINO data are divided into datasets, each one being a collection of about one month of daily measurements. Routine calibrations are performed at the beginning and at the end of each dataset using two wires of thoriated tungsten inserted inside the external lead shield. The signals coming from each bolometer are amplified and filtered with a six-pole Bessel low-pass filter and fed to a 16-bit ADC. The signal is digitized with a sampling time of 8 ms. With each triggered pulse, a set of 512 samples is recorded to disk. The typical bandwidth is approximately 10 Hz, with signal rise and decay times of order 30 and 500 ms, respectively. More details of the design and features of the electronics system are found in Ref. [@Arn02].
Each bolometer has a different trigger threshold, optimized according to the bolometer’s typical noise and pulse shape. The trigger rate is time and channel dependent, with a mean value of about 1 mHz. The amplitude of the pulses is estimated by means of an Optimal Filter technique [@Qino04]. The gain of each bolometer is monitored by means of a Si resistor of 50–100 k$\Omega$ attached to it that acts as a heater. Heat pulses are periodically supplied by an ultra-stable pulser [@Arn03] that sends a calibrated voltage pulse to the Si resistors. Their Joule dissipation produces heat pulses in the crystal with a shape which is almost identical to that produced by the calibration $\gamma$-rays.
{width="0.45\linewidth"} {width="0.45\linewidth"}
![Energy spectra, in the regions of interest, of coincidences with a single 511 keV event \[two instances\] (top) and of coincidences with two 511 keV photons (bottom). The arrows indicate the energy of the expected signal. Fit results, as explained in the text, are overlaid.[]{data-label="Fig5"}](Fig5a.ps "fig:"){width=".8\linewidth"} ![Energy spectra, in the regions of interest, of coincidences with a single 511 keV event \[two instances\] (top) and of coincidences with two 511 keV photons (bottom). The arrows indicate the energy of the expected signal. Fit results, as explained in the text, are overlaid.[]{data-label="Fig5"}](Fig5b.ps "fig:"){width=".8\linewidth"} ![Energy spectra, in the regions of interest, of coincidences with a single 511 keV event \[two instances\] (top) and of coincidences with two 511 keV photons (bottom). The arrows indicate the energy of the expected signal. Fit results, as explained in the text, are overlaid.[]{data-label="Fig5"}](Fig5c.ps "fig:"){width=".8\linewidth"}
Search strategy
===============
As detailed in the introduction, a [$\beta^+$/EC ]{}decay of [$\rm^{120}Te$ ]{}in an array of [TeO$_2$ ]{}detectors releases an energy up to $E_0=692.8$ keV in the bolometer where the decay occurs and, if the annihilation gammas do not escape undetected, one or two additional energy deposits of $E_\gamma=511.0$ keV in either the same or a nearby bolometer. There are therefore several distinctive signatures as listed in Tab. \[Tab1\].
For the $0\nu$ mode the energy released in the detector where the decay occurred corresponds to the upper bound of the interval. The detection efficiencies for the $0\nu$ mode were estimated by means of a GEANT4 simulation of the CUORICINO setup (see last column of Tab. \[Tab1\]) where the decays are located uniformly within all non-enriched detectors and the decay products are emitted isotropically. We always assume that the binding energy of the captured electron is released within the detector where the decay occurs. The efficiency estimate includes the dead time evaluated separately for each detector. To account for the different energy resolution of CUORICINO detectors we assigned to each detector in the simulation a FWHM given by the weighted mean of the FWHMs calculated in all CUORICINO calibration runs. The highest detection efficiencies correspond to the cases where one or both of the electron-positron annihilation photons are fully absorbed in the detector where the [$\beta^+$/EC ]{}decay occurs. This is consistent with the relatively large size of the CUORICINO detectors and the mean free path of a 511 keV photon in [TeO$_2$ ]{}(1.9 cm). These signatures are calculated to account for nearly half of all decays. The remainder involve only partial energy deposition of the 511 keV gamma ray energy and therefore are more difficult to distinguish from background. Due to the extremely short range of positrons in CUORICINO crystals, the efficiencies estimated for the $0\nu$ mode are also valid for the $2\nu$ mode, apart from small corrections (see below). Considering also the expected background, we limited our analysis to the signatures that feature a coincidence of two or three events for the $0\nu$ mode and of three events for the $2\nu$ mode.
We consequently search only for double or triple coincidences where one or two of the energy depositions are in the interval $\pm2.5\sigma$ of $E_\gamma=511$ keV, where $\sigma=$ 1.7 keV, as estimated in the inclusive energy spectrum (see Fig. \[Fig3\]). The coincidence window is 100 ms. The probability of accidental coincidence is estimated from the measurement of the single crystal rate ($\sim$ 1.6 mHz) to be around 0.7% for double coincidences between detectors in the same plane or in adjacent planes. For triple coincidences it is even less.
The energy spectra of the events in coincidence with one or two 511 keV photons are shown in Fig. \[Fig4\]. The structures in both spectra have been identified and correspond to single or double escape lines from known radioactive lines present in the CUORICINO background spectrum. The continuum background is due both to accidental coincidences and true coincidences in which the energy deposition of the event that accompanies the 511 keV gamma(s) is not complete.
Results of the 0$\nu\beta^+$/EC decay search
============================================
In the spectra of Fig. \[Fig4\] we search for a signal with mean energy 692.8 keV (spectra of double and triple coincidences) or 1203.8 keV (spectrum of double coincidences only). The expected resolution at 1203.8 keV is estimated on the $^{214}$Bi line at 1238 keV and found to be $\sigma$ = 1.67 $\pm$ 0.05 keV (see Fig. \[Fig3\]), in agreement with the value of 1.7 keV found for the 511 keV line and consistent with being constant over the energy region of interest.
Fig. \[Fig5\] shows the energy regions of interest of the double and triple coincidences spectra. The observed lines, as explained above, correspond to single- and double-escape peaks. Specifically, the measured spectra show the single escape lines of the 1173.2 keV transition from $^{60}$Co at 662.2 keV and of the 1729.6 keV transition from $^{214}$Bi at 1218.6 keV plus the double escape lines of the 2204.06 keV transition from $^{214}$Bi at 1182.06 keV, of the 1764.5 keV transition from $^{214}$Bi at 742.5 keV and again of the 1729.6 keV transition from $^{214}$Bi at 707.6 keV. The observed SE and DE peaks produce a negligible contribution to the background in the energy regions of interest.
The search for the 0$\nu$$\beta^+$/EC decay of $^{120}$Te on the spectra in Fig. \[Fig5\] is performed with unbinned maximum likelihood fits. In the likelihood fit the background is parametrized with the sum of a flat component and a Gaussian for each of the expected escape lines, with mean values fixed to the corresponding known energies and the width fixed to $\sigma=1.7$ keV. The signal is also parametrized with a Gaussian with mean value fixed to the expectation, that is 692.8 keV or 1203.8 keV, depending on whether or not one of the annihilation photons is absorbed in the same crystal where it is emitted. The width of the signal line is fixed to 2.2 keV, which is obtained by summing in quadrature the resolution $\sigma$ (1.7 keV), the error on the Q-value (1.3 keV), and the uncertainty on the energy scale (0.4 keV) as estimated from the calibration fit[^2]. In summary, for each spectrum the fitted quantities are the number of events in the signal, the flat background and the escape lines. The curves resulting from the fits are overlaid in Fig. \[Fig5\] and the corresponding fitted quantities are reported in Tab. \[tab:results\].
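As an illustration of the structure of these fits (this is not the analysis code of the experiment; the function and variable names are placeholders, and the Gaussians are normalised over the full real line rather than truncated to the fit window), a minimal extended unbinned likelihood with fixed peak positions and widths and free event yields can be written as:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_coincidence_spectrum(energies, e_lo, e_hi, sig_mean,
                             esc_means, sigma=1.7, sig_width=2.2):
    """Extended unbinned maximum-likelihood fit of a coincidence spectrum:
    a flat background plus one Gaussian per expected escape line (width
    sigma) plus a Gaussian signal at sig_mean (width sig_width).
    All means and widths are fixed; only the event yields are fitted."""
    energies = np.asarray(energies, dtype=float)
    window = e_hi - e_lo

    # Shapes of each component, evaluated at every event energy.
    shapes = [np.full_like(energies, 1.0 / window)]               # flat background
    shapes += [norm.pdf(energies, m, sigma) for m in esc_means]   # escape lines
    shapes += [norm.pdf(energies, sig_mean, sig_width)]           # signal
    shapes = np.array(shapes)

    def neg_log_likelihood(yields):
        yields = np.clip(yields, 1e-9, None)
        density = yields @ shapes          # sum_j N_j p_j(E_i) for each event i
        return yields.sum() - np.log(density).sum()

    start = np.full(shapes.shape[0], len(energies) / shapes.shape[0])
    result = minimize(neg_log_likelihood, start, method="Nelder-Mead")
    return result.x                        # [N_flat, N_escape_1, ..., N_signal]
```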
The upper limit on the number of observed signal events is extracted by means of a Bayesian procedure. By integrating with a Monte Carlo technique the likelihood over the nuisance parameters, we calculate, for each signature, the posterior probability density function ([*pdf*]{}) for the parameter $N_{sig}$ and define our 90% C.L. upper limit as the value where its integral reaches the 90% of the total area (see Tab. \[tab:results\]).
To combine the results coming from the three signatures, we calculate with the same procedure the pdfs for the parameter $N_{sig}/\epsilon_{tot}$, the number of signal events divided by the corresponding efficiency. The efficiency is calculated as $\epsilon_{tot}=\epsilon\times(\epsilon_{noise}\epsilon_{heat})^{\mu}$, where $\epsilon$ is the efficiency tabulated in Tab. \[Tab1\], $\mu=2,3$ is the multiplicity of the given signature from the same table, $\epsilon_{noise}=99.1\%$ accounts for the loss of signal due to noise as estimated on heater pulses, and $\epsilon_{heat}=97.7\%$ accounts for the dead time induced by the presence of the heater. This procedure, together with the inclusion of the uncertainty on the energy scale in the fits of the spectra, accounts for the systematic errors, which have a negligible impact on the result.
From the combined pdf, we extract a 90% C.L. upper limit on the total number of double beta decays regardless of their detection, $n_B=100$. We can then set a limit on the half-life of 0$\nu\beta^+$/EC decay of $^{120}$Te $$T^{0\nu}_{1/2} > \ln2N_{\beta\beta} \frac{T}{n_{B}}=1.9 \cdot 10^{21}~y$$ where $N_{\beta\beta}$ is the number of $^{120}$Te nuclei and $T$ is the live time.
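As a rough consistency check of this number (the live time is not quoted explicitly in this section; $T \simeq 2.0$ y is inferred here from the exposure of 0.0573 [kg$\cdot$y ]{}and the $\approx 28.5$ g of $^{120}$Te contained in the non-enriched crystals): $$T^{0\nu}_{1/2} > \ln 2 \times 1.43\cdot 10^{23}\times\frac{2.0~y}{100}\simeq 2.0\cdot 10^{21}~y,$$ in agreement with the quoted limit.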
Results of the 2$\nu\beta^+$/EC decay search
============================================
The energy spectrum of the events in coincidence with two 511 keV photons in the region below 692.8 keV contains 15 events (see Fig. \[Fig6\]). We can exclude from our analysis the regions $\pm 3\sigma$ around the energies where we expect the DE peaks from known radioactive gamma lines in the CUORICINO inclusive spectrum. These energies are indicated with red arrows in Fig. \[Fig6\] and listed in the caption. We have listed only the gamma lines that are expected to contribute with at least one event. This expectation is based on the comparison with the $^{40}$K line that produces 5 events at 438.8 keV in the experimental spectrum of triple coincidences (see Fig. \[Fig6\], line labeled as 1). After the subtraction, only 8 events remain in the spectrum from threshold to end-point. These events can be due to accidental coincidences or to true coincidences (double escape of gammas that have already been Compton scattered). To estimate the first component we looked at the spectra of events in triple coincidence with the side-bands of the 511 keV line (left side-band: from 470 keV to 502.5 keV, right side-band: from 519.5 keV to 560 keV), correctly normalized to the experimental spectrum of Fig. \[Fig6\]. This accounts for $4.3\pm0.5$ events. The other component of the background cannot be reliably estimated due to limited statistics and we cannot discriminate against it. Therefore, we set an upper limit assuming conservatively that the remaining events may be signal.
![Energy spectrum of triple coincidences with two 511 keV photons in the energy region where we expect a signal from the 2$\nu$$\beta^+$/EC. The red arrows indicate the energies where we expect the DE peaks from known radioactive gamma lines in the CUORICINO inclusive spectrum. For each of these energies we subtract an energy region of $\pm 3\sigma$. The lines that can give a double-escape peak between 50 keV and 692.8 keV are 1120.3 keV and 1238.1 keV of $^{214}$Bi (3), 1173.2 keV and 1332.5 keV of $^{60}$Co (2) and 1460.8 keV of $^{40}$K (1).[]{data-label="Fig6"}](Fig6.ps){width="80mm"}
spectrum $N_{sig}$ $N_{flat}$ $N_{L1}$ $N_{L2}$ $N_{H}$ 90% U.L.
---------- ------------- ------------ -------------- ---------- ------------- ----------
I 1.7$\pm$3.5 214$\pm$16 17.5$\pm$5.3 14$\pm$5 34$\pm$6 8.5
II 0.3$\pm$3.1 78$\pm$10 26$\pm$6 N/A 9.5$\pm$4.3 7.25
III 0.0$\pm$0.5 1$\pm$1 N/A N/A 8$\pm$3 2.62

  : Results of the unbinned maximum likelihood fits to the three coincidence spectra of Fig. \[Fig5\] and 90% C.L. upper limits on the number of signal events.[]{data-label="tab:results"}
The upper limit on the number of observed signal events is extracted by means of a Bayesian procedure. From the likelihood function for Poisson distributed data with unknown mean $s$ and known background $b$, and using as prior a flat pdf which is non-zero only for positive values of $s$, we extract our 90% C.L. upper limit on the number of observed events as the value where the integral of the posterior pdf reaches 90% of the total area [@PDG], $n_B=9.04$. We can then set a limit on the half-life of 2$\nu\beta^+$/EC decay of $^{120}$Te $$T^{2\nu}_{1/2} > \ln2N_{\beta\beta} \frac{\epsilon_{tot}T}{n_{B}}=0.76 \cdot 10^{20}~y$$ where $N_{\beta\beta}$ is the number of $^{120}$Te nuclei and $T$ is the live time. The efficiency is calculated as $\epsilon_{tot}=\epsilon\times(\epsilon_{noise}\epsilon_{heat})^3\times(1-\epsilon_{corr})$, where $\epsilon$, $\epsilon_{noise}$ and $\epsilon_{heat}$ have been defined above and $\epsilon_{corr}=12.5\pm0.5\%$ is the correction to be applied to the detection efficiency of the $0\nu$ mode; it accounts for the fact that we are not sensitive to the portion of the positron spectrum between 30.5 and 50 keV (threshold) and that we subtract five 10 keV energy regions from the experimental spectrum, as explained above. Since we do not know the exact shape of the positron spectrum from the 2$\nu$$\beta^+$/EC decay of [$\rm^{120}Te$ ]{}, the estimate of $\epsilon_{corr}$ has been performed on a standard $\beta$ spectrum with end-point at 692.8 keV, and its error is estimated by comparing this value with the $\epsilon_{corr}$ obtained for a flat spectrum from 0 to the end point. Systematic errors on the efficiencies have a negligible impact on the result.
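The Poisson upper-limit prescription just described can be illustrated with a short numerical sketch. This is not the analysis code of the experiment, the function name is a placeholder, and the experiment's exact treatment (e.g. of the uncertainty on the background) may differ slightly.

```python
import numpy as np

def bayesian_poisson_upper_limit(n_obs, b, cl=0.90, s_max=50.0, n_grid=200001):
    """Bayesian upper limit on a Poisson signal mean s with known background b,
    using a flat prior which is non-zero only for s >= 0."""
    s = np.linspace(0.0, s_max, n_grid)
    # Unnormalised posterior: Poisson likelihood for n_obs observed counts.
    log_post = n_obs * np.log(s + b) - (s + b)
    post = np.exp(log_post - log_post.max())
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return s[np.searchsorted(cdf, cl)]

# For example, 8 observed events over an expected accidental background of
# b = 4.3 give a 90% C.L. upper limit of roughly 9 signal events.
```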
Conclusions
===========
We searched for double beta decays $\beta^+$/EC of [$\rm^{120}Te$ ]{}in the $\rm{TeO}_2$ cryogenic bolometers of CUORICINO, using an exposure of 0.0573 [kg$\cdot$y ]{}of $^{120}\rm{Te}$. We see no evidence of a signal and therefore set new limits on the half-life for $0\nu$ and $2\nu$ decay, $\rm T^{0\nu}_{1/2}> 1.9 \cdot 10^{21}$ y and $\rm T^{2\nu}_{1/2}> 0.76 \cdot 10^{20}$ y, extending the exclusion region by almost three orders of magnitude (four in the case of the $0\nu$ mode) with respect to the constraints from previous experiments. The limits obtained with CUORICINO could be improved in the future by an additional $\sim$2 orders of magnitude with the CUORE experiment [@CUORE] because of increased mass, higher coincident detection efficiency, and lower backgrounds.
[99]{} Y. Fukuda et al. \[Super-Kamiokande Collaboration\], Phys. Rev. Lett. 81 (1998) 1562, Y. Fukuda et al. \[Super-Kamiokande Collaboration\], Phys. Rev. Lett. 82 (1999) 2430, S. Fukuda et al. \[Super-Kamiokande Collaboration\], Phys. Rev. Lett. 86 (2001) 5651, Q. R. Ahmad et al. \[SNO Collaboration\], Phys. Rev. Lett. 87 (2001) 071301, Q. R. Ahmad et al. \[SNO Collaboration\], Phys. Rev. Lett. 89 (2002) 011301, K. Eguchi et al. \[KamLAND Collaboration\], Phys. Rev. Lett. 90 (2003) 021802, T. Araki et al. \[KamLAND Collaboration\], Phys. Rev. Lett. 94 (2005) 081801.
Examples of recent reviews are S. Elliott and P. Vogel, Ann. Rev. Nucl. Part. Sci. 52 (2002) 115, A. Morales and J. Morales, Nucl. Phys. B (Proc. Suppl.) 114 (2003) 141, F. T. Avignone III, S. R. Elliott and J. Engel (2006), Rev. Mod. Phys. 80 (2008) 481 and K. Zuber, Acta Phys. Polon. B 37 (2006) 1905.
V.I. Tretyak and Yu.G. Zdesenko, Atomic Data and Nuclear Data Tables 61, 43 (1995).
V.I. Tretyak and Yu.G. Zdesenko, Atomic Data and Nuclear Data Tables 80, 83 (2002).
J. Abad et al., J. de Physique 45 C3 (1984).
T. Bloxham et al., Phys. Rev. C 76 (2007) 025501.
J.V. Dawson et al., Phys. Rev. C 80 (2009) 025502.
A.S. Barabash et al., J. Phys. G 34 (2007) 1721-1728.
A.S. Barabash et al., J. Phys. Conf. Ser. 120 (2008) 052057.
N.D. Scielzo et al., Phys. Rev. C 80 (2009) 025501.
W. Bambynek et al., Rev. Mod. Phys. 49 (1977) 77.
C. Arnaboldi et al., Phys. Rev. C 78 (2008) 035502.
http://nucleardata.nuclear.lu.se/nucleardata/toi/ .
A. Alessandrello et al., Nucl. Instrum. and Meth. B 142 (1998) 163.
C. Arnaboldi et al., IEEE Trans. Nucl. Sci. 49 (2002) 2440.
C. Arnaboldi et al., Nucl. Instr. Meth. A 518 (2004) 775.
C. Arnaboldi et al., IEEE Trans. Nucl. Sci. 50 (2003) 979.
C. Amsler et al. (Particle Data Group), Physics Letters B667, 1 (2008).
R. Ardito et al. (CUORE Collaboration), arXiv:hep-ex/0501010.
[^1]: Here and in the following we assume that the X-rays following the EC do not escape from the crystal where the decay occurs.
[^2]: From a bayesian point of view (i.e. our approach), you should sum the likelihoods for different Q-values weighting them by the probability of that Q-value being correct. This is equivalent to convoluting the peak position with a gaussian, and therefore to a gaussian whose width is the sum in quadrature of the uncertainty on the Q-value and the resolution.
|
---
abstract: 'We propose a graphical user interface based groundtruth generation tool in this paper. Here, annotation of an input document image is done based on the foreground pixels. Foreground pixels are grouped together with user interaction to form labeling units. These units are then labeled by the user with the user defined labels. The output produced by the tool is an image with an $XML$ file containing its metadata information. This annotated data can be further used in different applications of document image analysis.'
author:
- Soumyadeep Dey
- Jayanta Mukherjee
- Shamik Sural
- Amit Vijay Nandedkar
title: 'Anveshak - A Groundtruth Generation Tool for Foreground Regions of Document Images'
---
Introduction {#sec:intro}
============
Document digitization has attracted attention for several years. Conversion of a document image into electronic format requires several types of document image analysis. Typical document image analysis includes different types of segmentation, optical character recognition ($OCR$), etc. Numerous algorithms have been proposed to achieve these objectives. The performance of these algorithms can be measured with the help of groundtruth. Groundtruthed data are of immense importance in document image analysis: they are required for training machine learning based algorithms, and they are also used for the evaluation of various algorithms. The generation of groundtruth is a manual and time consuming process. Hence, a groundtruth generation tool should be user friendly, reliable, effective, and capable of generating data in a convenient manner.
Several systems for groundtruth generation have been reported in the literature for producing benchmark datasets to evaluate competitive algorithms. Pink Panther [@pinkpanther] is one such groundtruth generator, and is mainly used for evaluation of layout analysis. PerfectDoc [@PerfectDoc] is a groundtruth generation system for document images, based on layout structures. Various layout based groundtruth generation tools are present in the literature [@trueviz], [@GT_ICDAR2009], [@Li_Das06]. These groundtruth generators [@Gford_gt], [@trueviz], [@PerfectDoc], only support rectangular regions for annotation. Hence, they fail to generate groundtruth for documents with complex layout.
A recent groundtruth generator, $GEDI$ [@GEDI], supports annotation by generating a polygonal region. However, it is observed that the tool is quite inefficient for images of larger size (e.g., $600$ $dpi$ scans). PixLabeler [@PixLabeler] is an example of a pixel level groundtruth generator. Similar tools are also reported in [@4669984], [@1699028], [@Dori_MVA]. Pixel level annotation gives a more general measure for annotation, but it requires more time to complete the annotation task.
In this paper, we propose a tool to annotate a document image at pixel level. The main objective of the tool is to efficiently annotate data using less amount of time. Towards this, we have provided a semi-automatic interactive platform to annotate document images efficiently. Since our main goal is to annotate foreground pixels, we segment foreground pixels from its background with user assistance. Next, we group foreground pixels such that neighboring pixels of similar types get connected. Finally, annotation of each such group of pixels is performed with a predefined set of labels.
The system is called Anveshak and its functionality is described in Section \[sec:Functions\]. Implementation details are discussed in Section \[sec:imple\]. Section \[sec:GTGen\] provides the details of groundtruth generation with Anveshak. Finally, we conclude in Section \[sec:conclu\].
Functionality {#sec:Functions}
=============
The work-flow of the Anveshak system is shown in Figure \[fig:workflow\]. Some semi-automated modules are implemented to speed up the annotation process.
Foreground Background Separation {#sub-sec:FBS}
--------------------------------
We are mainly concerned with the annotation of foreground pixels of a document image. A module is integrated with Anveshak to efficiently segment foreground pixels from its background. This task can be performed with three types of thresholding techniques, first, $GUI$ based thresholding, second, a $GUI$ based adaptive thresholding technique [@opencv], and third, the *Otsu’s* thresholding technique [@otsu]. Here, a user can segment foreground from its background efficiently, using either of these three thresholding techniques. An example of foreground background separation module using $GUI$ based thresholding is shown in Figure \[fig:binarizationmodule\].
Generation of Labeling Units {#sub-sec:GLU}
----------------------------
Anveshak has a unique technique to predefine labeling units. Labeling units are generated using $GUI$ based morphological operations. The morphological operations included in Anveshak are *erosion*, *dilation*, *closing*, *opening*, *gap-filling*, and *smoothing*.
A labeling unit is a collection of foreground pixels grouped together using a suitable morphological operator. Pixels are grouped together by choosing any of the operations *erosion*, *dilation*, *closing*, and *opening* [@gonzalez09]; the user can select a suitable element size and element type in order to group pixels. A user can also accumulate pixels to form a group by a smoothing operation [@Wahl1982], where the choice of the run-length parameter is interactive. Foreground pixels can also be grouped together using a gap filling operation [@sdey12], where the selection of the parameters, the gap sizes in the horizontal and vertical directions, is user driven. An instance of Anveshak for generating labeling units is shown in Figure \[fig:GLU\].
After grouping the pixels, the contours of each group are obtained using the method described in [@Suzuki85]. Each contour is then approximated by a polygon using the Douglas-Peucker algorithm [@douglas73]. The polygons thus computed are the basic units for annotation in Anveshak. An example of a collection of labeling units is shown in Figure \[fig:labelingunit\], where each labeling unit is represented using a unique color.
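Anveshak itself is implemented in $C++$ with OpenCV (see Section \[sec:imple\]). Purely to illustrate the unit-generation pipeline described above, a minimal Python/OpenCV sketch could look as follows; the thresholding and structuring-element parameters are placeholder values that, in the tool, are chosen interactively by the user, and OpenCV version 4 is assumed for the findContours return signature.

```python
import cv2
import numpy as np

def generate_labeling_units(gray, block_size=51, c=20, kernel_size=(15, 5)):
    """Group foreground pixels into labeling units:
    adaptive thresholding -> morphological closing -> contour extraction
    -> polygonal approximation (Douglas-Peucker)."""
    # Foreground/background separation (adaptive thresholding).
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, block_size, c)
    # Group neighbouring foreground pixels with a closing operation.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, kernel_size)
    grouped = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # Contours of each group (border following) and polygon approximation.
    contours, _ = cv2.findContours(grouped, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    units = [cv2.approxPolyDP(cnt, 2.0, True) for cnt in contours]
    return units
```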
Defining Labels {#sub-sec:labels}
---------------
There are some predefined labels in Anveshak. The tool provides an option to add and delete labels, as shown in Figure \[fig:setlabels\]. After defining all the labels, a user can annotate the labeling units of the input document with the defined labels. A unique index number and a color is assigned to each label, which are used in the later stages of annotation.
Annotation of Labeling Units {#sub-sec:alu}
----------------------------
The overall annotation process can be summarized using the flow chart given in Figure \[fig:workflow\_annotation\]. Annotation of labeling units is performed in two ways, as shown in Figure \[fig:annotate\]. A user can label unlabeled units one by one with the predefined labels. In this case, an unlabeled unit is displayed in a window and the user is prompted for a label for the displayed unit. This process continues until each of the units is labeled, or until the user chooses to label the units by selecting a region of interest ($ROI$).
Another method of labeling units is to select a region of interest. In this module, a user can select an $ROI$, which can be annotated with the defined labels. First, all units that are completely present within the selected $ROI$ are determined. After selection of an $ROI$, the units present within it can be labeled using three different modes (Figure \[fig:ROIoption\]). A user can annotate all units within the $ROI$ with one label, updating all of them with the selected label. Another way of annotation is to label all units belonging to the selected $ROI$ with a particular type. Lastly, a user can annotate each unit belonging to the selected $ROI$ individually with a label. Pixels belonging to a particular labeling unit are updated with the unique index corresponding to the label of the $ROI$, and the color of those pixels is updated with the color of that label. Belongingness of a pixel to a particular labeling unit is computed through a point-in-polygon test. At each stage of the annotation process, the updated color image is displayed, where labeled pixels are shown with the color of the corresponding label, and unlabeled pixels are shown with their original color value.
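A sketch of the $ROI$-based label assignment, again purely illustrative and not the tool's code, is given below. Bounding-box containment is used here as a simple proxy for complete containment of a unit in the rectangular $ROI$, and pixel membership in a unit is decided by a point-in-polygon test.

```python
import cv2
import numpy as np

def label_units_in_roi(label_image, units, roi, label_index):
    """Assign label_index to every pixel of each labeling unit that lies
    entirely inside the rectangular ROI (x, y, w, h)."""
    x, y, w, h = roi
    for poly in units:
        px, py, pw, ph = cv2.boundingRect(poly)
        if px < x or py < y or px + pw > x + w or py + ph > y + h:
            continue                      # unit not entirely inside the ROI
        for j in range(py, py + ph):
            for i in range(px, px + pw):
                if cv2.pointPolygonTest(poly, (float(i), float(j)), False) >= 0:
                    label_image[j, i] = label_index
    return label_image
```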
The process of annotation continues until all labeling units are marked. After completion of annotation, the user is asked, whether he/she wants to update any label, or finalize the labels. After finalizing the labels, output labeled image and its corresponding $XML$ file are generated. An example of different stages of labeling is shown in Figure \[fig:labeling\].
Implementation Details {#sec:imple}
======================
Anveshak is implemented in $C++$, using cross-platform application framework $Qt$ for graphical user interface and with customized modules developed using OpenCV [@opencv]. Annotation of an image is achieved through the user interface and after completion, a single image in $.png$ format is generated. Each pixel of the output image is represented with an index corresponding to a particular annotation.
The metadata of the concerned image is stored in an $XML$ file, which also includes the information of the source image along with the annotated image. In the $XML$ file, an index corresponds to the unique pixel value for a particular label in the annotated image. Examples of two different annotated images and their corresponding $XML$ files are shown in Figures \[fig:labeling\] (c) and (d) and Figures \[fig:XMLDATA\] (a) and (b), respectively. Anveshak has been tested by annotating $344$ images from the dataset reported in [@Micenkova:11ICDAR:stamp]. The annotators observed that the labeling can be performed in a much easier and faster way than with PixLabeler [@PixLabeler] or $GEDI$ [@GEDI]. In our present implementation of Anveshak, only one annotation per block is supported. In many scenarios, it is desirable to have multiple annotations per block, mainly in the case of overlapping regions. In the future, we plan to support more than one annotation per block. The present implementation of Anveshak has been made available online[^1].
Generation of Groundtruth using Anveshak {#sec:GTGen}
========================================
Anveshak is used to generate groundtruth for the dataset reported in [@Micenkova:11ICDAR:stamp]. The images in the dataset consist of various regions like logos, headers, text, signatures, headlines, bold text, etc. However, only the annotation of stamp regions is available with the original dataset. The dataset consists of $425$ scanned images in $600$, $300$, and $200$ $dpi$ resolutions. Out of these $425$ images, $344$ images contain non-overlapping regions. Anveshak is used to annotate these $344$ images at $300$ $dpi$ resolution, and the groundtruth data has been made available online[^2]. These $344$ images are annotated using Anveshak with the help of $6$ users. There are, on average, $5$ labels and $148$ segments per image in the given dataset. Users involved in the annotation are initially trained by annotating one random image. The average time taken by a user to annotate an image with Anveshak is about $3-4$ minutes. The annotated dataset has been used in the works reported in [@sdey_stamp_NCVPRIPG] and [@Dey2016_IJDAR].
Conclusion {#sec:conclu}
==========
The primary target of Anveshak is to annotate an input document image in an efficient manner. Our tool produces an $XML$ file containing the metadata information, along with an annotated image. We have developed a user friendly groundtruth generation tool, with some semi-automatic modules which make the annotation process faster. We hope that Anveshak will serve the document analysis community in an effective manner by simplifying groundtruth generation procedure.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work is partly funded by TCS research scholar program and partly by Ministry of Communications & Information Technology, Government of India; MCIT 11(19)/ 2010-HCC (TDIL) dt. 28-12-2010.
[10]{}
G. Bradski. . , 2000.
S. Dey, J. Mukherjee, and S. Sural. Stamp and logo detection from document images by finding outliers. In [*2015 Fifth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)*]{}, pages 1–4, Dec 2015.
S. Dey, J. Mukherjee, and S. Sural. Consensus-based clustering for document image segmentation. , pages 1–18, 2016.
S. Dey, J. Mukhopadhyay, S. Sural, and P. Bhowmick. Margin noise removal from printed document images. , pages 86–93, 2012.
D. Doermann, E. Zotkina, and H. Li. - [A]{} [G]{}roundtruthing [E]{}nvironment for [D]{}ocument [I]{}mages. , 2010.
D. H. Douglas and T. M. Peucker. Algorithm for the reduction of the number of points required to represent a digitized line or its caricature. , 10(2):112–122, 1973.
G. Ford and G. R. Thoma. . pages 9–11, 2003.
R. C. Gonzalez and R. E. Woods. . Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2009.
L. C. Ha and T. Kanungo. The architecture of trueviz: a groundtruth/metadata editing and {VIsualiZing} toolkit. , 36(3):811 – 825, 2003.
B. Micenkova and J. V. Beusekom. Stamp detection in color document images. , pages 1125–1129, 2011.
M. Moll, H. Baird, and C. An. Truthing for pixel-accurate segmentation. In [*Document Analysis Systems, 2008. DAS ’08. The Eighth IAPR International Workshop on*]{}, pages 379–385, Sept 2008.
N. Otsu. A threshold selection method from gray-level histograms. , 9(1):62–66, 1979.
E. Saund, J. Lin, and P. Sarkar. Pixlabeler: User interface for pixel-level labeling of elements in document images. , pages 646–650, 2009.
F. Shafait, D. Keysers, and T. Breuel. Pixel-accurate representation and evaluation of page segmentation in document images. In [*Pattern Recognition, 2006. ICPR 2006. 18th International Conference on*]{}, volume 1, pages 872–875, 2006.
T. Strecker, J. van Beusekom, S. Albayrak, and T. Breuel. Automated ground truth data generation for newspaper document images. , pages 1275–1279, July 2009.
S. Suzuki and K. Abe. Topological structural analysis of digitized binary images by border following. , 30(1):32–46, 1985.
F. M. Wahl, K. Y. Wong, and R. G. Casey. Block segmentation and text extraction in mixed text/image documents. , 20(4):375 – 390, 1982.
L. Wenyin and D. Dori. A protocol for performance evaluation of line detection algorithms. , 9(5-6):240–250, 1997.
S. Yacoub, V. Saxena, and S. Sami. Perfectdoc: a ground truthing environment for complex documents. , pages 452–456, 2005.
L. Yang, W. Huang, and C. Tan. Semi-automatic ground truth generation for chart image recognition. In [*Document Analysis Systems VII*]{}, volume 3872 of [*Lecture Notes in Computer Science*]{}, pages 324–335. Springer Berlin Heidelberg, 2006.
B. Yanikoglu and L. Vincent. Pink panther: A complete environment for ground-truthing and benchmarking document page segmentation. , 31(9):1191–1204, 1998.
[^1]: <http://www.facweb.iitkgp.ernet.in/~jay/anveshak/anveshak.html>
[^2]: <http://www.facweb.iitkgp.ernet.in/~jay/anveshak_gt/anveshak_gt.html>
|
---
abstract: 'The ever-increasing take-up of machine learning techniques requires ever-more application-specific training data. Manually collecting such training data is a tedious and time-consuming process. Data marketplaces represent a compelling alternative, providing an easy way for acquiring data from potential data providers. A key component of such marketplaces is the compensation mechanism for data providers. Classic payoff-allocation methods such as the Shapley value can be vulnerable to data-replication attacks, and are infeasible to compute in the absence of efficient approximation algorithms. To address these challenges, we present an extensive theoretical study on the vulnerabilities of game theoretic payoff-allocation schemes to replication attacks. Our insights apply to a wide range of payoff-allocation schemes, and enable the design of customised replication-robust payoff-allocations. Furthermore, we present a novel efficient sampling algorithm for approximating payoff-allocation schemes based on marginal contributions. In our experiments, we validate the replication-robustness of classic payoff-allocation schemes and new payoff-allocation schemes derived from our theoretical insights. We also demonstrate the efficiency of our proposed sampling algorithm on a wide range of machine learning tasks.'
author:
- |
Dongge Han[^1]\
University of Oxford\
Shruti Tople\
Microsoft Research\
Alex Rogers\
University of Oxford\
Michael Wooldridge\
University of Oxford\
Olga Ohrimenko\
The University of Melbourne\
Sebastian Tschiatschek\
University of Vienna\
bibliography:
- 'neurips.bib'
title: 'Replication-Robust Payoff-Allocation with Applications in Machine Learning Marketplaces'
---
[^1]: Work done in part while at Microsoft Research Cambridge.
|
---
abstract: 'A stochastic solution is constructed for a fractional generalization of the KPP (Kolmogorov, Petrovskii, Piskunov) equation. The solution uses a fractional generalization of the branching exponential process and propagation processes which are spectral integrals of Levy processes.'
author:
- 'F. Cipriano[^1], H. Ouerdiane[^2] and R. Vilela Mendes[^3] [^4]'
title: Stochastic solution of a nonlinear fractional differential equation
---
Introduction: The notion of stochastic solution
===============================================
The solutions of linear elliptic and parabolic equations, both with Cauchy and Dirichlet boundary conditions, have a probabilistic interpretation. These are classical results which may be traced back to the work of Courant, Friedrichs and Lewy [@Courant] in the 1920’s and became a standard tool in potential theory[@Getoor] [@Bass2]. For example, for the heat equation
$$\partial _{t}u(t,x)=\frac{1}{2}\frac{\partial ^{2}}{\partial x^{2}}%
u(t,x)\qquad \textnormal{with}\qquad u(0,x)=f(x) \label{1.1}$$
the solution may be written either as $$u\left( t,x\right) =\frac{1}{\sqrt{2\pi t}}\int \exp \left( -\frac{\left( x-y\right) ^{2}}{2t}\right) f\left( y\right) dy \label{1.2}$$ or as $$u(t,x)={\Bbb E}_{x}f(X_{t}) \label{1.3}$$ ${\Bbb E}_{x}$ meaning the expectation value, starting from $x$, of the process $$dX_{t}=dW_{t}$$ $W_{t}$ being the Wiener process.
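As a minimal numerical illustration of (\[1.3\]), the expectation can be estimated by direct sampling of the Wiener process; the sketch below assumes nothing beyond the formulas above, and the test function in the closing comment is an arbitrary choice for which the exact solution is elementary.

```python
import numpy as np

def heat_solution_mc(f, t, x, n_samples=200000, rng=None):
    """Monte Carlo evaluation of u(t, x) = E_x[f(W_t)] for the heat equation
    u_t = (1/2) u_xx, by sampling the Wiener process W_t ~ N(x, t)."""
    rng = np.random.default_rng() if rng is None else rng
    w_t = x + np.sqrt(t) * rng.standard_normal(n_samples)
    return f(w_t).mean()

# Quick check against the Gaussian-kernel solution: for f(y) = exp(-y^2)
# one has u(t, x) = exp(-x^2 / (1 + 2 t)) / sqrt(1 + 2 t).
```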
Eq.(\[1.1\]) is a [*specification*]{} of a problem whereas (\[1.2\]) and (\[1.3\]) are [*solutions*]{} in the sense that they both provide algorithmic means to construct a function satisfying the specification. An important condition for (\[1.2\]) and (\[1.3\]) to be considered as solutions is the fact that the algorithmic tools are independent of the particular solution, in the first case an integration procedure and in the second the simulation of a solution-independent process. This should be contrasted with stochastic processes constructed from a given particular solution, as has been done for example for the Boltzmann equation[@Graham].
In contrast with the linear problems, for nonlinear partial differential equations, explicit solutions in terms of elementary functions or integrals are only known in very particular cases. However, if a solution-independent stochastic process is found that (for arbitrary initial conditions) generates the solution in the sense of Eq.(\[1.3\]), a stochastic solution is obtained. In this way the set of equations for which exact solutions are known would be considerably extended.
The stochastic representations recently constructed for the Navier-Stokes[@Jan] [@Waymire] [@Bhatta1] [@Ossiander] and the Vlasov-Poisson equations[@Vilela1] [@Vilela2] define solution-independent processes for which the mean values of some functionals are solutions to these equations. Therefore, they are exact [**stochastic solutions**]{}.
In the stochastic solutions one deals with a process that starts from the point where the solution is to be found, a functional being then computed along the whole sample path or until it reaches a boundary. In all cases one needs to average over many independent sample paths to obtain an expectation value of the functional. The localized and parallelizable nature of the solution construction is clear. Provided some differentiability conditions are satisfied, the process also handles equally well simple or very complex boundary conditions.
Stochastic solutions also provide an intuitive characterization of the physical phenomena, relating nonlinear interactions with cascading processes. By the study of exit times from a domain they also sometimes provide access to quantities that cannot be obtained by perturbative methods[@VilelaZeit].
One way to construct stochastic solutions is based on a probabilistic interpretation of the Picard series. The differential equation is written as an integral equation which is rearranged in such a way that the coefficients of the successive terms in the Picard iteration obey a normalization condition. The Picard iteration is then interpreted as an evolution and branching process, the stochastic solution being equivalent to importance sampling of the normalized Picard series. This method is used in this paper to obtain a stochastic solution of a nonlinear partial differential equation, which is a fractional version of the Kolmogorov-Petrovskii-Piskunov (KPP) equation[@KPP].
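For orientation, in the classical case ($\alpha=1$, $\beta=2$, $\theta=0$ in the equation studied below) this construction reduces to McKean's representation $u(t,x)={\Bbb E}_{x}\prod_{i}f(X_{i}(t))$, where the $X_{i}(t)$ are the positions at time $t$ of a binary branching Brownian motion with unit branching rate started at $x$, and $f$ is the initial condition (taken with values in $[0,1]$). A purely illustrative sketch of this classical branching construction, with placeholder function names, is:

```python
import numpy as np

rng = np.random.default_rng(0)

def branching_positions(x, t):
    """Positions at time t of a binary branching Brownian motion started at x:
    each particle diffuses (dX = dW), lives an Exp(1) time, then splits in two."""
    lifetime = rng.exponential(1.0)
    if lifetime >= t:                      # particle survives to time t
        return [x + np.sqrt(t) * rng.standard_normal()]
    # Particle branches at time `lifetime`: diffuse there, then recurse.
    x_branch = x + np.sqrt(lifetime) * rng.standard_normal()
    remaining = t - lifetime
    return (branching_positions(x_branch, remaining)
            + branching_positions(x_branch, remaining))

def kpp_solution_mc(f, t, x, n_samples=20000):
    """Monte Carlo estimate of u(t, x) = E_x[prod_i f(X_i(t))] (McKean),
    which solves u_t = (1/2) u_xx + u^2 - u with u(0, .) = f.
    The expected particle number grows like exp(t), so t should be modest."""
    vals = np.empty(n_samples)
    for k in range(n_samples):
        vals[k] = np.prod([f(xi) for xi in branching_positions(x, t)])
    return vals.mean()
```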
A fractional nonlinear partial differential equation
====================================================
We consider the following equation
$$_{t}D_{*}^{\alpha }u\left( t,x\right) =\frac{1}{2}\,_{x}D_{\theta }^{\beta
}u\left( t,x\right) +u^{2}\left( t,x\right) -u\left( t,x\right) \label{2.1}$$
We use the same notations as in the study of the linear problem in [@Mainardi1]. $_{t}D_{*}^{\alpha }$ is a Caputo derivative of order $\alpha $$$_{t}D_{*}^{\alpha }f\left( t\right) =\left\{
\begin{array}{lll}
\frac{1}{\Gamma \left( m-\alpha \right) }\int_{0}^{t}\frac{f^{(m)}\left(
\tau \right) d\tau }{\left( t-\tau \right) ^{\alpha +1-m}} & & m-1<\alpha <m
\\
\frac{d^{m}}{dt^{m}}f\left( t\right) & & \alpha =m
\end{array}
\right. \label{2.2}$$ $m$ integer. $_{x}D_{\theta }^{\beta }$ is a Riesz-Feller derivative defined through its Fourier symbol by $${\cal F}\left\{ _{x}D_{\theta }^{\beta }f\left( x\right) \right\} \left(
k\right) =-\psi _{\beta }^{\theta }\left( k\right) {\cal F}\left\{ f\left(
x\right) \right\} \left( k\right) \label{2.3}$$ with $\psi _{\beta }^{\theta }\left( k\right) =\left| k\right| ^{\beta
}e^{i\left( \textnormal{sign}k\right) \theta \pi /2}$.
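As a concrete check of definition (\[2.2\]) in the case $0<\alpha <1$ ($m=1$), the following short Python sketch (ours, not part of the original text) evaluates the Caputo derivative of $f\left( t\right) =t^{2}$ by a simple midpoint quadrature and compares it with the closed form $_{t}D_{*}^{\alpha }\,t^{2}=\frac{2}{\Gamma \left( 3-\alpha \right) }t^{2-\alpha }$.

```python
import numpy as np
from math import gamma

def caputo_derivative(f_prime, t, alpha, n=200_000):
    """Caputo derivative for 0 < alpha < 1 (m = 1):
       D^alpha f(t) = 1/Gamma(1-alpha) * int_0^t f'(tau) (t - tau)^(-alpha) dtau,
       approximated by a midpoint rule (the midpoints avoid the endpoint singularity)."""
    tau = (np.arange(n) + 0.5) * t / n
    integrand = f_prime(tau) * (t - tau) ** (-alpha)
    return (t / n) * integrand.sum() / gamma(1.0 - alpha)

alpha, t = 0.5, 1.5
numeric = caputo_derivative(lambda x: 2.0 * x, t, alpha)     # f(t) = t^2, so f'(t) = 2t
exact = 2.0 / gamma(3.0 - alpha) * t ** (2.0 - alpha)
print(numeric, exact)  # close agreement; the quadrature converges slowly near the singularity
```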
Eq.(\[2.1\]) is a fractional version of the KPP equation, studied by probabilistic means by McKean[@McKean]. Physically it describes a nonlinear diffusion with growing mass; in our fractional generalization it represents the same phenomenon, taking into account memory effects in time and long-range correlations in space.
As outlined in the introduction, the first step towards a probabilistic formulation is the rewriting of Eq.(\[2.1\]) as an integral equation including the initial conditions. For this purpose we take the Fourier transform $\left( {\cal F}\right) $ in space and the Laplace transform $%
\left( {\cal L}\right) $ in time obtaining $$s^{\alpha }\widetilde{\widehat{u}}\left( s,k\right)
=s^{\alpha -1}\widehat{u}\left( 0^{+},k\right) -\frac{1}{2}\psi _{\beta
}^{\theta }\left( k\right) \widetilde{\widehat{u}}\left( s,k\right) -%
\widetilde{\widehat{u}}\left( s,k\right) +\int_{0}^{\infty }dte^{-st}{\cal F}%
\left( u^{2}\left( t,x\right) \right) \label{2.4}$$ where $$\widehat{u}\left( t,k\right) ={\cal F}\left( u\left( t,x\right) \right)
=\int_{-\infty }^{\infty }e^{ikx}u\left( t,x\right) dx$$ $$\widetilde{u}\left( s,x\right) ={\cal L}\left( u\left( t,x\right) \right)
=\int_{0}^{\infty }e^{-st}u\left( t,x\right) dt$$ This equation holds for $0<\alpha \leq 1$ or for $1<\alpha \leq 2$ with $\frac{\partial }{\partial t}u\left( 0^{+},x\right) =0$. Solving for $\widetilde{%
\widehat{u}}\left( s,k\right) $ one obtains an integral equation $$\widetilde{\widehat{u}}\left( s,k\right) \,=\frac{s^{\alpha -1}}{s^{\alpha
}+1+\frac{1}{2}\psi _{\beta }^{\theta }\left( k\right) }\widehat{u}\left(
0^{+},k\right) +\int_{0}^{\infty }dt\frac{e^{-st}}{s^{\alpha }+1+\frac{1}{2}%
\psi _{\beta }^{\theta }\left( k\right) }{\cal F}\left( u^{2}\left(
t,x\right) \right) \label{2.5}$$ Taking the inverse Fourier and Laplace[@Trujillo] transforms $$\begin{aligned}
u\left( t,x\right) &=&E_{\alpha ,1}\left( -t^{\alpha }\right) \int_{-\infty
}^{\infty }dy{\cal F}^{-1}\left( \frac{E_{\alpha ,1}\left( -\left( 1+\frac{1%
}{2}\psi _{\beta }^{\theta }\left( k\right) \right) t^{\alpha }\right) }{%
E_{\alpha ,1}\left( -t^{\alpha }\right) }\right) \left( x-y\right) u\left(
0^{+},y\right) \nonumber \\
&&+\int_{0}^{t}d\tau \left( t-\tau \right) ^{\alpha -1}E_{\alpha ,\alpha
}\left( -\left( t-\tau \right) ^{\alpha }\right) \nonumber \\
&&\int_{-\infty }^{\infty }dy{\cal F}^{-1}\left( \frac{E_{\alpha ,\alpha
}\left( -\left( 1+\frac{1}{2}\psi _{\beta }^{\theta }\left( k\right) \right)
\left( t-\tau \right) ^{\alpha }\right) }{E_{\alpha ,\alpha }\left( -\left(
t-\tau \right) ^{\alpha }\right) }\right) \left( x-y\right) u^{2}\left( \tau
,y\right) \label{2.6}\end{aligned}$$ $E_{\alpha ,\rho }$ is the generalized Mittag-Leffler function $$E_{\alpha ,\rho }\left( z\right) =\sum_{j=0}^{\infty }\frac{z^{j}}{\Gamma
\left( \alpha j+\rho \right) }$$ We define the following propagation kernel $$G_{\alpha ,\rho }^{\beta }\left( t,x\right) ={\cal F}^{-1}\left( \frac{%
E_{\alpha ,\rho }\left( -\left( 1+\frac{1}{2}\psi _{\beta }^{\theta }\left(
k\right) \right) t^{\alpha }\right) }{E_{\alpha ,\rho }\left( -t^{\alpha
}\right) }\right) \left( x\right) \label{2.7}$$ and, from the normalization relation, $$E_{\alpha ,1}\left( -t^{\alpha }\right) +\int_{0}^{t}d\tau \left( t-\tau
\right) ^{\alpha -1}E_{\alpha ,\alpha }\left( -\left( t-\tau \right)
^{\alpha }\right) =1$$ we may interpret $E_{\alpha ,1}\left( -t^{\alpha }\right) $ and $\left(
t-\tau \right) ^{\alpha -1}E_{\alpha ,\alpha }\left( -\left( t-\tau \right)
^{\alpha }\right) $, respectively as a survival probability up to time $t$ and as the probability density for the branching at time $\tau $ in a branching process $B_{\alpha }$. It is a fractional generalization of an exponential process. This provides a probabilistic sampling of the Picard series obtained by iteration of Eq.(\[2.6\]). The solution is therefore obtained by the expectation of the exit values of the following process:
Starting at time zero, a particle lives according to the process $B_{\alpha }
$. At the branching time $\tau $ the initial particle dies and two new particles are born at the dying point. The process continues in the same way with independent evolution of each one of the newborn particles. At time $t$ the solution is obtained as a functional of the $n$ existing particles at time $t$, namely as the product of the initial condition propagated from the point where each one of the $n$ particles is at time $t$ up to the initial position. $$u(t,x)={\Bbb E}_{x}\left( \varphi _{1}\varphi _{2}\cdots \varphi _{n}\right)
\label{2.8}$$ with $$\begin{aligned}
\varphi _{i} &=&\int dy_{1}^{\left( i\right) }dy_{2}^{\left( i\right)
}\cdots dy_{k-1}^{\left( i\right) }dy_{k}^{\left( i\right) }G_{\alpha
,\alpha }^{\beta }\left( \tau _{1},x-y_{1}^{(i)}\right) G_{\alpha ,\alpha
}^{\beta }\left( \tau _{2},y_{1}^{(i)}-y_{2}^{(i)}\right) \cdots \nonumber
\\
&&\cdots G_{\alpha ,\alpha }^{\beta }\left( \tau
_{k-1},y_{k-2}^{(i)}-y_{k-1}^{(i)}\right) G_{\alpha ,1}^{\beta }\left( \tau
_{k},y_{k-1}^{(i)}-y_{k}^{(i)}\right) u\left( 0^{+},y_{k}^{(i)}\right)
\label{2.9}\end{aligned}$$ with $\sum_{j=1}^{k}\tau _{j}=t$, $k-1$ being the number of branchings leading to particle $i$. Notice that the last propagator in (\[2.9\]) is different from the others.
Because of the normalization of the probabilities in the process $B_{\alpha
} $, the probability of each one of the products in (\[2.8\]) corresponds to the weight of the corresponding term in the Picard series. Therefore the expectation value exists whenever the Picard series converges.
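In a numerical implementation of the process $B_{\alpha }$ one needs branching times whose survival probability is $E_{\alpha ,1}\left( -t^{\alpha }\right) $. The sketch below (our illustration, not part of the original text) draws such times using the inversion formula commonly employed for Mittag-Leffler waiting times in continuous-time random walk simulations, and checks the empirical survival fraction at $t_{0}=1$ against a truncated-series evaluation of $E_{\alpha ,1}$.

```python
import numpy as np
from math import gamma, sin, cos, tan, pi, log

def ml_series(z, alpha, rho=1.0, terms=80):
    """Truncated Mittag-Leffler series; adequate only for moderate |z|."""
    return sum(z ** j / gamma(alpha * j + rho) for j in range(terms))

def branching_time(alpha, rng):
    """Waiting time with survival probability E_{alpha,1}(-t^alpha), drawn with the
    standard Mittag-Leffler inversion formula used in CTRW simulations (an assumption
    here, not taken from the paper); for alpha = 1 it reduces to a unit exponential."""
    u = 1.0 - rng.random()             # uniform in (0, 1]
    v = rng.random()                   # the degenerate draw v = 0 is ignored in this sketch
    return -log(u) * (sin(alpha * pi) / tan(alpha * pi * v) - cos(alpha * pi)) ** (1.0 / alpha)

rng = np.random.default_rng(1)
alpha, t0 = 0.7, 1.0
samples = np.array([branching_time(alpha, rng) for _ in range(200_000)])
print(np.mean(samples > t0), ml_series(-t0 ** alpha, alpha))   # both close to E_{0.7,1}(-1) ~ 0.40
```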
The solution (\[2.8\]) is not yet a purely stochastic solution because it involves both the expectation value over the process $B_{\alpha }$ and a multiple integration of the initial condition with the propagation kernels $%
G_{\alpha ,1}^{\beta }$ and $G_{\alpha ,\alpha }^{\beta }$. To obtain a purely stochastic solution we notice that, for $0<\alpha \leq 1$, the propagation kernels satisfy the conditions to be the Green’s functions of stochastic processes in ${\Bbb R}$ (see the Appendix).
We denote the processes associated to $G_{\alpha ,1}^{\beta }\left(
t,x\right) $ and $G_{\alpha ,\alpha }^{\beta }\left( t,x\right) $, respectively by $\Pi _{\alpha ,1}^{\beta }$ and $\Pi _{\alpha ,\alpha
}^{\beta }$. Therefore the process leading to the solution is as described before with all the particles until the last branching propagating according to the process $\Pi _{\alpha ,\alpha }^{\beta }$ and the last ones (that sample the initial condition) propagating by the process $\Pi _{\alpha
,1}^{\beta }$. When finally all the $n$ surviving particles reach time zero, their coordinates $x+\xi _{i}$ are recorded and the solution is given by $$u(t,x)={\Bbb E}_{x}\left( u(0^{+},x+\xi _{1})u(0^{+},x+\xi _{2})\cdots
u(0^{+},x+\xi _{n})\right) \label{2.11}$$ Eq.(\[2.11\]) is a stochastic solution of (\[2.1\]) and our main result is summarized as follows:
[**Theorem:**]{} [*The nonlinear fractional partial differential equation (\[2.1\]), with* ]{}$0<\alpha \leq 1$[*, has a stochastic solution given by (\[2.11\]), the coordinates* ]{}$x+\xi _{i}$[* in the arguments of the initial condition obtained from the exit values of a propagation and branching process, the branching being ruled by the process* ]{}$B_{\alpha }$[* and the propagation by* ]{}$\Pi _{\alpha ,1}^{\beta }$[* for the particles that reach time* ]{}$t$[* and by* ]{}$\Pi _{\alpha ,\alpha }^{\beta
} $[* for all the remaining ones.*]{}
[*A sufficient condition for the existence of the solution is*]{} $$\left| u(0^{+},x)\right| \leq 1 \label{2.12}$$
[**Remarks:**]{}
1\) The condition $\left| u(0^{+},x)\right| \leq 1$ imposes a finite value for all contributions to the multiplicative functional. However, the solution may exist under more general conditions, namely when the decreasing value of the probability of higher order products in (\[2.11\]) compensates the growth of the powers of the initial condition.
2\) The stochastic solution may also be constructed by a backwards-in-time stochastic process from time $t$ to time zero. This is obtained by rewriting Eq.(\[2.6\]) as $$\begin{aligned}
u\left( t,x\right) &=&E_{\alpha ,1}\left( -t^{\alpha }\right) \int_{-\infty
}^{\infty }dy{\cal F}^{-1}\left( \frac{E_{\alpha ,1}\left( -\left( 1+\frac{1%
}{2}\psi _{\beta }^{\theta }\left( k\right) \right) t^{\alpha }\right) }{%
E_{\alpha ,1}\left( -t^{\alpha }\right) }\right) \left( x-y\right) u\left(
0^{+},y\right) \nonumber \\
&&+\int_{0}^{t}d\tau \tau ^{\alpha -1}E_{\alpha ,\alpha }\left( -\tau
^{\alpha }\right) \nonumber \\
&&\int_{-\infty }^{\infty }dy{\cal F}^{-1}\left( \frac{E_{\alpha ,\alpha
}\left( -\left( 1+\frac{1}{2}\psi _{\beta }^{\theta }\left( k\right) \right)
\tau ^{\alpha }\right) }{E_{\alpha ,\alpha }\left( -\tau ^{\alpha }\right) }%
\right) \left( x-y\right) u^{2}\left( t-\tau ,y\right) \label{2.10}\end{aligned}$$ and noticing that also $$E_{\alpha ,1}\left( -t^{\alpha }\right) +\int_{0}^{t}d\tau \tau ^{\alpha
-1}E_{\alpha ,\alpha }\left( -\tau ^{\alpha }\right) =1$$ Then, we obtain the following stochastic construction of the solution:
Starting at time $t$ a particle propagates backwards in time according to the process $\Pi _{\alpha ,1}^{\beta }$ if it reaches time zero or according to $\Pi _{\alpha ,\alpha }^{\beta }$ if it branches at time $t-\tau $. The branching probability is controlled by the process $B_{\alpha }$ (that is, the branching probability density is $\tau ^{\alpha -1}E_{\alpha ,\alpha
}\left( -\tau ^{\alpha }\right) $). When it branches, two new particles are born which propagate independently and the process is repeated until all surviving particles reach time zero.
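As a sanity check of this construction (our own sketch, not part of the original text), in the classical limit $\alpha =1$, $\beta =2$, $\theta =0$ Eq.(\[2.1\]) reduces to the standard KPP equation, $B_{1}$ becomes a unit-rate exponential clock and both propagation kernels reduce to the heat kernel; for constant initial data $u\left( 0^{+},x\right) =c$ the expectation of the multiplicative functional can be compared with the exact solution $u\left( t\right) =ce^{-t}/\left( 1-c\left( 1-e^{-t}\right) \right) $ of $\partial _{t}u=u^{2}-u$.

```python
import numpy as np

rng = np.random.default_rng(2)

def kpp_sample(x, t_remaining, f):
    """One sample of the multiplicative functional in the classical limit
    alpha = 1, beta = 2, theta = 0 (McKean's branching Brownian motion):
    exponential branching clock and heat-kernel propagation for both kernels."""
    tau = rng.exponential(1.0)                          # branching time of B_1
    if tau >= t_remaining:                              # the particle reaches time zero
        return f(x + rng.normal(0.0, np.sqrt(t_remaining)))
    x_branch = x + rng.normal(0.0, np.sqrt(tau))        # propagate to the branching point
    return (kpp_sample(x_branch, t_remaining - tau, f)
            * kpp_sample(x_branch, t_remaining - tau, f))

t, x, c = 1.0, 0.0, 0.5
f = lambda y: c                                  # constant initial data: exact solution known
estimate = np.mean([kpp_sample(x, t, f) for _ in range(100_000)])
exact = c * np.exp(-t) / (1.0 - c * (1.0 - np.exp(-t)))
print(estimate, exact)                           # both close to 0.269
```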
[Appendix. The Green’s functions and the characterization of the processes]{}
[**The processes** ]{}$\Pi _{\alpha ,1}^{\beta }$ [**and**]{} $\Pi _{\alpha
,\alpha }^{\beta }$$${\cal F}\left\{ G_{\alpha ,1}^{\beta }\left( t,x\right) \right\} \left(
t,k\right) =\frac{E_{\alpha ,1}\left( -\left( 1+\frac{1}{2}\psi _{\beta
}^{\theta }\left( k\right) \right) t^{\alpha }\right) }{E_{\alpha ,1}\left(
-t^{\alpha }\right) } \label{A.1}$$ $${\cal F}\left\{ G_{\alpha ,\alpha }^{\beta }\left( t,x\right) \right\}
\left( t,k\right) =\frac{E_{\alpha ,\alpha }\left( -\left( 1+\frac{1}{2}\psi
_{\beta }^{\theta }\left( k\right) \right) t^{\alpha }\right) }{E_{\alpha
,\alpha }\left( -t^{\alpha }\right) } \label{A.2}$$
For a propagation kernel $G\left( t,x\right) $ to be the Green’s function of a stochastic process, the following conditions should be satisfied:
\(i) $G\left( 0,x-y\right) =\delta \left( x-y\right) $ or ${\cal F}\left\{
G\right\} \left( 0,k\right) =1$ $\forall k$
\(ii) $\int dxG\left( t,x\right) =1$ $\forall t$ or ${\cal F}\left\{
G\right\} \left( t,0\right) =1$
\(iii) $G\left( t,x\right) $ should be real and $\geq 0$
For the processes $\Pi _{\alpha ,1}^{\beta }$ and $\Pi _{\alpha ,\alpha
}^{\beta }$
\(i) ${\cal F}\left\{ G_{\alpha ,1}^{\beta }\right\} \left( 0,k\right) =\frac{%
E_{\alpha ,1}\left( 0\right) }{E_{\alpha ,1}\left( 0\right) }=1$ and ${\cal F%
}\left\{ G_{\alpha ,\alpha }^{\beta }\right\} \left( 0,k\right) =\frac{%
E_{\alpha ,\alpha }\left( 0\right) }{E_{\alpha ,\alpha }\left( 0\right) }=1$
\(ii) ${\cal F}\left\{ G_{\alpha ,1}^{\beta }\right\} \left( t,0\right) =%
\frac{E_{\alpha ,1}\left( -t^{\alpha }\right) }{E_{\alpha ,1}\left(
-t^{\alpha }\right) }=1$ and ${\cal F}\left\{ G_{\alpha ,\alpha }^{\beta
}\right\} \left( t,0\right) =\frac{E_{\alpha ,\alpha }\left( -t^{\alpha
}\right) }{E_{\alpha ,\alpha }\left( -t^{\alpha }\right) }=1$
\(iii) If ${\cal F}\left\{ G\right\} \left( t,-k\right) =\left( {\cal F}%
\left\{ G\right\} \left( t,k\right) \right) ^{*}$ then $G\left( t,x\right) $ is real.
Because $\psi _{\beta }^{\theta }\left( -k\right) =\left( \psi _{\beta
}^{\theta }\left( k\right) \right) ^{*}$ it follows $$E_{\alpha ,1}\left( -\left( 1+\frac{1}{2}\psi _{\beta }^{\theta }\left(
-k\right) \right) t^{\alpha }\right) =\left( E_{\alpha ,1}\left( -\left( 1+%
\frac{1}{2}\psi _{\beta }^{\theta }\left( k\right) \right) t^{\alpha
}\right) \right) ^{*}$$ and $$E_{\alpha ,\alpha }\left( -\left( 1+\frac{1}{2}\psi _{\beta }^{\theta
}\left( -k\right) \right) t^{\alpha }\right) =\left( E_{\alpha ,\alpha }\left(
-\left( 1+\frac{1}{2}\psi _{\beta }^{\theta }\left( k\right) \right)
t^{\alpha }\right) \right) ^{*}$$ implying that both $G_{\alpha ,1}^{\beta }\left( t,x\right) $ and $G_{\alpha
,\alpha }^{\beta }\left( t,x\right) $ are real.
Finally, for the positivity, one notices that for $0<\alpha \leq 1$ and $%
\rho \geq \alpha $, $E_{\alpha ,\rho }\left( -x\right) $ is a completely monotone function[@Schneider]. Therefore $$E_{\alpha ,\rho }\left( -x\right) =\int_{0}^{\infty }e^{-rx}dF\left(
r\right)$$ with $F$ nondecreasing and bounded.
For $G_{\alpha ,\rho }^{\beta }\left( t,x\right) $ ($\rho =1$ and $\rho
=\alpha $) one has $$\begin{aligned}
G_{\alpha ,\rho }^{\beta }\left( t,x\right) &=&\frac{1}{2\pi E_{\alpha ,\rho
}\left( -t^{\alpha }\right) }\int_{0}^{\infty }dF\left( r\right)
\int_{-\infty }^{\infty }dke^{-ikx}e^{-rt^{\alpha }\left( 1+\frac{1}{2}\psi
_{\beta }^{\theta }\left( -k\right) \right) } \\
&=&\frac{1}{2\pi E_{\alpha ,\rho }\left( -t^{\alpha }\right) }%
\int_{0}^{\infty }dF\left( r\right) e^{-rt^{\alpha }}\int_{-\infty }^{\infty
}dke^{-ikx}e^{-\frac{rt^{\alpha }}{2}\psi _{\beta }^{\theta }\left(
-k\right) }\end{aligned}$$ We recognize the last integral (in $k$) as the Green’s function of a Lévy process. Therefore one has an integral in $r$ of positive quantities, implying that $G_{\alpha ,1}^{\beta }\left( t,x\right) $ and $G_{\alpha
,\alpha }^{\beta }\left( t,x\right) $ are positive.
[**The process** ]{}$B_{\alpha }$
The probability density for the decay of this process at time $\tau $ is $$\tau ^{\alpha -1}E_{\alpha ,\alpha }\left( -\tau ^{\alpha }\right)$$ From $$\int_{0}^{t}\tau ^{\alpha -1}E_{\alpha ,\alpha }\left( -\tau ^{\alpha
}\right) d\tau =1-E_{\alpha ,1}\left( -t^{\alpha }\right)$$ it follows that $E_{\alpha ,1}\left( -t^{\alpha }\right) $ is the survival probability up to time $t$. The process $B_{\alpha }$ is a fractional generalization of the exponential process.
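This normalization can be checked numerically; the minimal sketch below (ours) verifies that the decay density integrates to $1-E_{\alpha ,1}\left( -t^{\alpha }\right) $ for a moderate value of $t$, where a truncated series evaluation of the Mittag-Leffler functions is adequate.

```python
import numpy as np
from math import gamma

def ml(z, alpha, rho, terms=80):
    """Truncated Mittag-Leffler series E_{alpha,rho}(z), adequate for moderate |z|."""
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    for j in range(terms):
        out += z ** j / gamma(alpha * j + rho)
    return out

alpha, t, n = 0.7, 2.0, 20_000
tau = (np.arange(n) + 0.5) * t / n        # midpoint grid handles the integrable singularity at 0
decay_mass = np.sum(tau ** (alpha - 1.0) * ml(-tau ** alpha, alpha, alpha)) * t / n
survival = float(ml(-t ** alpha, alpha, 1.0))
print(decay_mass + survival)              # approximately 1, confirming the normalization
```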
[99]{} R. Courant, K. Friedrichs and H. Lewy; Math. Ann. 100 (1928) 32-74.
R. M. Blumenthal and R. K. Getoor; [*Markov processes and potential theory*]{}, Academic Press, New York 1968.
R. F. Bass; [*Diffusions and elliptic operators*]{}, Springer, New York 1998.
C. Graham and S. Méléard; in ESAIM Proceedings vol. 10 (F. Coquel and S. Cordier, Eds.) pag. 77-126, Les Ulis 2001.
Y. LeJan and A. S. Sznitman ; Prob. Theory and Relat. Fields 109 (1997) 343-366.
E. C. Waymire; Prob. Surveys 2 (2005) 1-32.
R. N. Bhattacharya et al.; Trans. Amer. Math. Soc. 355 (2003) 5003-5040.
M. Ossiander ; Prob. Theory and Relat. Fields 133 (2005) 267-298.
R. Vilela Mendes and F. Cipriano; Commun. Nonlinear Science and Num. Simul. 13 (2008) 221-226.
E. Floriani, R. Lima and R. Vilela Mendes; arxiv:0707.1409, Eur. J. of Physics D (DOI: 10.1140/epjd/e2007-00302-7)
R. Vilela Mendes; Zeitsch. Phys. C54, (1992) 273-281.
A. Kolmogorov, I. Petrovskii and N. Piskunov; Moscow Univ. Bull. Math. 1 (1937) 1-25.
F. Mainardi, Y. Luchko and G. Pagnini; Fractional Calculus and Appl. Analysis 4 (2001) 153-192.
H. P. McKean; Comm. on Pure and Appl. Math. 28 (1975) 323-331, 29 (1976) 553-554.
A. Kilbas, H. Srivastava and J. Trujillo; [*Theory and Applications of Fractional Differential Equations*]{}, Elsevier B. V., Amsterdam 2006.
W. R. Schneider; Expositiones Mathematicae 14 (1996) 3-16.
[^1]: GFM and FCT-Universidade Nova de Lisboa, Complexo Interdisciplinar, Av. Gama Pinto, 2 - 1649-003 Lisboa (Portugal), e-mail: [email protected]
[^2]: Départment de Mathématiques, Faculté des Sciences, Université de Tunis El Manar, Campus Universitaire, 1060 Tunis. e-mail: [email protected]
[^3]: CMAF, Complexo Interdisciplinar, Universidade de Lisboa, Av. Gama Pinto, 2 - 1649-003 Lisboa (Portugal), e-mail: [email protected], http://label2.ist.utl.pt/vilela/
[^4]: Centro de Fusão Nuclear - EURATOM/IST Association, Instituto Superior Técnico, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal
---
abstract: |
We present the results of transverse-field muon-spin rotation measurements on an epitaxially grown 40 nm-thick film of MnSi on Si(111) in the region of the field-temperature phase diagram where a skyrmion phase has been observed in the bulk. We identify changes in the quasistatic magnetic field distribution sampled by the muon, along with evidence for magnetic transitions around $T\approx
40$ K and 30 K. Our results suggest that the cone phase is not the only magnetic texture realized in film samples for out-of-plane fields.
author:
- 'T. Lancaster'
- 'F. Xiao'
- 'Z. Salman'
- 'I. O. Thomas'
- 'S. J. Blundell'
- 'F. L. Pratt'
- 'S. J. Clark'
- 'T. Prokscha'
- 'A. Suter'
- 'S. L. Zhang'
- 'A. A. Baker'
- 'T. Hesjedal'
title: |
Transverse field muon-spin rotation measurement\
of the topological anomaly in a thin film of MnSi
---
There has been a flurry of recent interest in the physics of the skyrmion as an example of a topological excitation in condensed matter.[@skyrm] The simplest example of a skyrmion may be derived from a sphere studded with radially directed arrows. The skyrmion is formed via the stereographic projection of the arrows onto a plane while keeping their orientations fixed. The clearest evidence for the existence of the skyrmion is in the spin texture of magnetic systems and in recent years a number of advances have demonstrated the existence, not only of magnetic skyrmions, but also their ordering into a skyrmion lattice (SL). [@Muhlbauer-2009; @Munzer-2010; @Yu-2010; @Yi-2010a; @Seki-2012a; @Seki-2012b; @Adams-2012; @Seki-2012c; @Langner-2014; @omrani] In bulk samples the SL has been observed in only a restricted region of the applied field-temperature phase diagram of magnetic systems, known as the $A$-phase. However, it has been shown that the SL phase is stabilized over an extended region of the phase diagram in bulk samples that have been thinned.[@tonomura; @Seki-2012a] This motivated the search for skyrmions in epitaxially grown thin film systems. However, the unambiguous identification of the SL in such samples has been challenging.
When a magnetic field is applied along the \[111\] direction below the critical temperature $T^{\mathrm{bulk}}_{\mathrm{c}}=29.5$ K, bulk MnSi hosts four magnetically ordered phases, characterized by critical fields[@Muhlbauer-2009] $B_{\mathrm{c}1}\approx 0.1$ T and $B_{\mathrm{c}2}\approx 0.5$ T. Below a critical field $B_{\mathrm{c}1}$ the spins order helically with a $q$ vector parallel to the applied field; for $B_{\mathrm{c}1}<B<B_{\mathrm{c}2}$ the spins order conically; and when $B>B_{\mathrm{c}2}$ the spins order ferromagnetically. In addition, for $B_{\mathrm{c}1}<B<B_{\mathrm{c}2}$ there exists a wedge-shaped $A$-phase region close to $T^{\mathrm{bulk}}_{\mathrm{c}}$ (centered around $T=28$ K and an applied field $B_{\mathrm{app}}$=150 mT), which hosts the SL.[@Muhlbauer-2009; @tonomura] As with other B20 systems when compared to the bulk, the SL was reported to exist over an extended region in thinned samples of thickness[@tonomura] $\approx
50$ nm and also in nanowires.[@yu] Subsequently, a report of topological Hall effect (THE) measurements and Lorentz transmission electron microscopy (TEM) on epitaxially grown thin-film samples suggested that the skyrmion phase was significantly enlarged in field and temperature [@li] as might be presumed from comparison between the phase diagrams of bulk samples and those that have been thinned. However, subsequent microscopy work[@Monchesky-2014] challenged the notion that skyrmions are present in the thin film samples, leading to the claim that the extended phase region responsible for the THE response was, in fact, the magnetic cone phase.[@meynell] There have also been several experimental and theoretical investigations of thin-film MnSi [@growth; @karhu; @wilson; @wilson2; @karhu2] suggesting, in particular, that the ground state magnetic structure propagates along the \[111\] direction.[@karhu] Further, it was suggested that in out-of-plane fields no first-order magnetic transition is observed that would indicate the appearance of skyrmions and that the cone magnetic structure is the thermodynamically stable phase for out-of-plane applied magnetic fields with $B<B_{\mathrm{c2}}$.[@wilson2]
In view of the controversy, there is value in using alternative experimental techniques to probe the physics of the magnetic field configurations in thin film samples. To this end, we report the results of transverse-field muon-spin rotation (TF $\mu^{+}$SR) measurements [@steve] on a thin-film sample of MnSi. We probe the local magnetic field distribution in the region giving rise to the reported anomalous topological response in Hall measurements. Here we show that several discontinuous changes in the field distribution are observed in the $T$=20–40 K region suggesting that the magnetic structure changes significantly, and indicating that the cone phase is unlikely to be the only magnetically ordered phase stabilized in these films.
A MnSi thin film sample was prepared by molecular beam epitaxy (MBE) on Si(111) substrates as described in the supplemental information.[@SI]
To characterize the film, a series of static magnetization measurements was made by applying the magnetic field along the MnSi\[111\] direction. The Curie temperature $T^{\mathrm{film}}_{\mathrm{c}}$, defined as the knee point of the $M$-$T$ curve \[examples shown inset Fig. \[SQUID\](b)\] is found to be $T_{\mathrm{c}}^{\mathrm{film}}=42.3(2)$ K, which is consistent with the values previously reported for epitaxial MnSi films.[@growth; @karhu; @wilson; @karhu2; @wilson2; @li] The difference between this value and that of the bulk is largely attributable to tensile strain induced by the $-3\%$ lattice mismatch between the MnSi film and the Si substrate, as discussed previously.[@growth] The $M$-$B$ curves \[Fig.\[SQUID\](a)\] were measured at different temperatures after field-cooling from 300 K in an applied field of 2 T. The Si substrate provides a large diamagnetic background at low temperatures below $T_{\mathrm{c}}$ which, along with contributions from unsaturated MnSi moments at high field, [@unsat] results in a linear field dependence which we subtract from the data. The saturation magnetization $M_{\mathrm{s}}$ at 5 K is found to be 0.41(3)$\mu_{\mathrm{B}}/\text{Mn}$, consistent with bulk behavior[@unsat; @bulk] but different from the behavior found in films that are thinner than 10 nm.[@growth] The shape of the curves is consistent with previous studies on epitaxial MnSi films with applied field directed out-of-plane.[@li; @wilson2] This contrasts with field in-plane configuration (not shown), in which a sharp phase transition can be identified at lower fields in the susceptibility curves. (This effect appears to indicate that the system undergoes different evolutions of the magnetic structure in the two distinct geometries.) The kink in the hysteresis loops reported in Ref. appears very weak, but can be seen in data measured between 600-800 mT for 10 K, 20 K, and 30 K.
The transition from the conical phase to the field-polarized phase at an applied field $B_{\mathrm{c}2}$ can be extracted from the $M$–$B$ curves at the point where $\text{d}M/\text{d}B=0$. The values of $B_{\mathrm{c}2}$ are found to be slightly higher than the intrinsic values reported previously, and this may be caused by demagnetization effects[@growth] due to the irregular sample shape used in our magnetometry measurements. Figure \[SQUID\](b) shows the upper boundary of the phase diagram depicted by a series of $T_{\mathrm{c}}$ values from $M$-$T$ curves, and $B_{\mathrm{c}2}$ values from $M$-$B$ curves, inside which the non-trivial spin textures may appear.
TF $\mu^{+}$SR measurements were made on the MnSi sample using the Low Energy Muon (LEM) beamline at S$\mu$S.[@SI; @prokscha] Applied magnetic fields were directed perpendicular to the surface of the sample (i.e. along \[111\]). Our use of TF $\mu^{+}$SR to probe the SL is analogous to its use in probing the vortex lattice (VL) in a type II superconductor, where the technique provides a powerful means of measuring the internal magnetic field distribution caused by the presence of the magnetic field texture.[@sonier] We have previously used this technique to probe the SL region in bulk Cu$_{2}$OSeO$_{3}$.[@Lancaster-2015] In a TF $\mu^{+}$SR experiment, spin polarized muons are implanted in the bulk of a material in the presence of a magnetic field $B_{\mathrm{app}}$, which is applied perpendicular to the initial muon spin direction. Muons stop at random positions on the length scale of the field texture where they precess about the total local magnetic field $B$ at the muon site, with frequency $\omega = \gamma_{\mu}B$, where $\gamma_{\mu} = 2 \pi \times
135.5$ MHz T$^{-1}$. The observed property of the experiment is the time evolution of the muon spin polarization $P_{x}(t)$, which allows the determination of the distribution $p(B)$ of local magnetic fields across the sample volume via $P_{x}(t) = \int_{0}^{\infty}\mathrm{d}B\, p(B) \cos
(\gamma_{\mu}Bt + \phi)$ where the phase $\phi$ results from the detector geometry.
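As a simple illustration of this relation (our own sketch with hypothetical parameter values, not part of the experimental analysis), a Gaussian field distribution $p(B)$ of width $\Delta B$ centred on $B_{0}$ yields $P_{x}(t)=\exp (-\sigma ^{2}t^{2})\cos (\gamma _{\mu }B_{0}t+\phi )$ with $\sigma =\gamma _{\mu }\Delta B/\sqrt{2}$, which is the Gaussian-damped form fitted to the data below.

```python
import numpy as np

gamma_mu = 2 * np.pi * 135.5          # muon gyromagnetic ratio in rad us^-1 T^-1
B0, dB, phi = 0.149, 0.0005, 0.0      # assumed mean field (T), field spread (T), detector phase

B = np.linspace(B0 - 8 * dB, B0 + 8 * dB, 4001)
pB = np.exp(-0.5 * ((B - B0) / dB) ** 2) / (dB * np.sqrt(2 * np.pi))   # assumed Gaussian p(B)
t = np.linspace(0.0, 8.0, 400)        # time in microseconds

# numerical version of P_x(t) = int dB p(B) cos(gamma_mu B t + phi)
Px_numeric = np.sum(pB[None, :] * np.cos(gamma_mu * B[None, :] * t[:, None] + phi),
                    axis=1) * (B[1] - B[0])
sigma = gamma_mu * dB / np.sqrt(2.0)  # relaxation rate of the Gaussian-damped cosine
Px_closed = np.exp(-(sigma * t) ** 2) * np.cos(gamma_mu * B0 * t + phi)
print(np.max(np.abs(Px_numeric - Px_closed)))   # small: quadrature error only
```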
Our TRIM.SP calculations[@SI] predict that an incident energy of $E=5$ keV leads to a fairly symmetrical implantation profile of muons, with a maximum $\approx25$ nm below the surface of the 40 nm-thick MnSi layer and a FWHM of $\approx 20$ nm, such that $>$95% of muons are implanted in the MnSi film. TF $\mu^{+}$SR measurements were made in an applied magnetic field of $B_{\mathrm{app}}=149$ mT after pre-cooling in this field. This field was chosen as it is known to promote the SL phase in the bulk material. Example Fourier transform spectra, whose spectral weight is proportional to $p(B)$, measured as a function of temperature are shown in Fig. \[data3\]. At high temperatures we observe the response of muons precessing in the applied field $B_{\mathrm{app}}=149$ mT. On cooling below 60 K the observed lineshape broadens considerably, with its mean $B_{0}$ shifting to fields lower than $B_{\mathrm{app}}$. The lineshape also becomes slightly skewed, with additional spectral weight shifting to lower fields. The lineshape is seen to broaden further below the 30 K $A$-phase boundary observed in the bulk, and again below 20 K. It is notable that the spectral lines observed in our film sample seem somewhat less well resolved than those seen in a bulk sample.[@Amato-2014] This, along with the increased $T_{\mathrm{c}}$ observed in MnSi films, might be ascribable to strain effects in the films.
The spectra were fitted in the time domain to a polarization function $P_{x}(t) = a{\rm e}^{-\sigma^{2}t^{2}}\cos(\gamma_{\mu}B_{0}t+\phi),$ where $\phi$ is a phase resulting from the detector geometry, and $a$ is a fixed amplitude. The evolution of (a) the relaxation rate $\sigma = \gamma_{\mu}\sqrt{\langle \left( B- B_{0}\right)^{2} \rangle}/{\sqrt{2}}$, arising from those muons stopping in MnSi, and (b) the average magnetic field $B_{0}$ that they experience, is shown in Fig. \[fig1\]. The relaxation rate $\sigma$ \[Fig. \[fig1\](a)\] is seen to slowly increase with decreasing temperature below 100 K, with the rate of increase becoming larger below 60 K. There is a cusp in the curve at 40 K, with the width remaining approximately constant in the region between 40 K and 30 K. At 30 K, the relaxation rate $\sigma$ jumps in magnitude, and increases sharply once again below 20 K. Although a smooth increase in $\sigma$ below $T^{\mathrm{film}}_{\mathrm{c}}$ might be predicted on the grounds of an increase in the ordered moment of the system with decreasing temperature in the ordered phase, the observed discontinuities would not be expected in a typical ordered magnet. The average field $B_{0}$ \[Fig. \[fig1\](b)\] is close to the applied field in the high temperature regime around 100 K, but is seen to decrease with decreasing temperature above $T_{\mathrm{c}}^{\mathrm{film}}$, before reaching a sharp minimum at 40 K. It then increases in the ordered regime until $B_{0}\approx B_{\mathrm{app}}$ below $T\approx 30$ K.
A temperature scan carried out at a larger applied field of 220 mT (at the same 5 keV implantation energy) was found to show the same trends above 40 K. The larger relaxation rates measured in these data are more difficult to fit and consequently more scatter is seen at low temperature, but the data are suggestive of features similar to those seen at $149$ mT. Finally, a scan carried out for $B_{\mathrm{app}}=149$ mT, but at a muon implantation energy $E=1.3$ keV, allows us to probe the response of muons implanted near the surface of the film. In this case similar features are again seen down to 40 K, below which the broad signal lineshapes become difficult to fit.[@SI]
Each of the features seen in the TF $\mu^{+}$SR may be correlated with those observed previously using other techniques in bulk and thin film samples of MnSi. We note that the general trend in $B_{0}$ above $T_{\mathrm{c}}^{\mathrm{film}}$ is accounted for by the effect of the increase in magnitude of the hyperfine coupling with decreasing temperature in the paramagnetic regime near the transition, as has previously been observed in Knight shift measurements.[@hayano] In fact, correcting the shift for the Lorentz field and the demagnetizing field gives the Knight shift shown inset in Fig. \[fig1\], which when plotted against $\chi \approx \mu_{0}M/B$, measured at $B=150$ mT, allows us to estimate a contact hyperfine coupling of $A=-0.95(5)$ mol emu$^{-1}$, consistent with previous measurements.[@Amato-2014]
In the bulk, the magnetic transition from the paramagnetic to the skyrmion phase occurs in these applied fields at around[@bauer] $T^{\mathrm{bulk}}_{\mathrm{c}}=30$ K, where we see a jump in the $\mu^{+}$SR relaxation rate $\sigma$ and change in behavior of the peak field $B_{0}$. In addition, the film ordering transition $T_{\mathrm{c}}^{\mathrm{film}}$, identified from magnetometry and the onset of a sizeable response from the THE [@li], occurs at 40 K, where we see a knee in the evolution of $\sigma$ and the maximum shift in $B_{0}$. Finally, THE measurements also show a transition or crossover below the 15–20 K region, resulting in a small or negative THE signal, which coincides with a further jump in $\sigma$. We note that in the muon data the lineshape broadens further still at low temperatures, possibly reflecting the increased ordered moment size.
The original interpretation of the THE data[@li] was based on the assumption that a large response was the result of the scattering of carriers from a topologically non-trivial spin texture, identified with the SL phase. In view of the controversy surrounding the observation of skyrmions using Lorentz TEM [@Monchesky-2014], an alternative interpretation was suggested, [@meynell] which attributed the topological contribution to the Hall effect to additional scattering of charge carriers arising intrinsically in the magnetic cone phase. (It was also noted that the lack of any clear transition being observed in the Hall effect data prevented the signal from being attributed to a separate magnetic phase. [@meynell] However, it is also worth noting that although the signal is dominated in many films by a feature in the anomalous Hall signal that prevents the transition from being observed, the transition can be seen in thinner films.) Our results suggest that the magnetic properties of this system are unlikely to be accounted for simply via effects arising from a single magnetic cone phase, but rather suggest transitions in the nature of the field distribution occurring at $T_{\mathrm{c}}^{\mathrm{film}}\approx 40$ K, $T^{\mathrm{bulk}}_{\mathrm{c}}\approx 30$ K and possibly below 20 K. Since the $T=30$ and 20 K features do not have a counterpart in the magnetization measurements, they presumably do not involve a sizeable rearrangement of the net component of the magnetization along the field direction. However, the changes in muon lineshape, reflected in the linewidth $\sigma$ and static field shift in $B_{0}$, that we observe below $T_{\mathrm{c}}^{\mathrm{film}}$ are suggestive of changes in the distribution of magnetic fields at the muon sites. This could suggest sizeable changes to the contact hyperfine fields or to the nature of the dynamics. In the case of the latter, for example, a freezing of relaxation channels in the fast-fluctuation limit could lead to an increase in $\sigma$ on cooling. However, given the change in lineshape with temperature, the shift in $B_{0}$ in the $30 \leq T \leq 40$ K region and the fact that a transition around 30 K is known to occur in the bulk, we believe the most likely explanation is that the features observed imply changes in the ordered spin structure (which itself would likely also involve a change in hyperfine fields and dynamics).
To assess the evidence for the existence of skyrmions in this system, we have carried out simulations of the predicted muon lineshape in order to compare the expected signal for the cone, helical and skyrmion lattice phases. The local magnetic field distributions were generated following the procedure outlined in Ref. . We generate helical and conical spin textures with propagation vectors oriented along the \[111\] direction. The SL is generated with the skyrmion lattice plane perpendicular to \[111\]. Helical and SL distributions are generated as described previously[@Lancaster-2015], while cone-phase distributions are generated using [@wilson2] $\boldsymbol{m} (\boldsymbol{r})/\boldsymbol{m}_{0}=\hat{\boldsymbol{x}}\sin\theta\cos(\boldsymbol{q}\cdot\boldsymbol{r}) +
\hat{\boldsymbol{y}}\sin\theta\sin(\boldsymbol{q}\cdot\boldsymbol{r}) +
\hat{\boldsymbol{z}}\cos\theta,$ where $\cos\theta=|B|/B_{\mathrm{c}2}$ and $\boldsymbol{B}\parallel\hat{\boldsymbol{z}}\parallel [111]$. The magnetic moment is taken to be 0.4$\mu_B$ [@Motoya-1978], $B_{\mathrm{c}2}=0.5$ T (corresponding to $T_{\mathrm{c}}^{\mathrm{film}}\approx 40$ K [@wilson2]) and the length scale of the helical [@Amato-2014] and conical structures is taken to be 18 nm. For the skyrmion lattice we consider the lattice constants suggested in previous studies, both $L_{\mathrm{sk}}= 18$ nm[@tonomura] and 8.5 nm[@li], although there is little difference between the two. We evaluate dipole fields from the moment distributions along with the contribution from the contact hyperfine field, calculated using [@Amato-2014] $
B_{c} = \frac{4 V_{\mathrm{mol}}A_{\mathrm{HF}}}{NV_{\mathrm{cell}}}\sum_{j=1}^{N}\boldsymbol{m}(\boldsymbol{r}_{j}),
$ where $V_{\mathrm{mol}}$ is the molar volume of Mn ions, $V_{\mathrm{cell}}$ is the unit cell volume, $A_{\mathrm{HF}}=-0.9276$ mol/emu is the hyperfine coupling and $N$ is the number of Mn ions within one lattice constant of the muon site. The field distribution $p(B)$ sampled by the muon ensemble has been shown to arise from muons stopped in the $4a$ Wyckoff position. These give rise to two magnetically distinct classes of site, conventionally labelled sites 0 and 1.[@Amato-2014]
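The following minimal sketch (ours, not the authors' simulation code) shows how the cone-phase moment texture above can be generated on a discrete grid using the parameter values quoted in the text; the dipole-field summation, the contact hyperfine term and the muon-site geometry are omitted for brevity.

```python
import numpy as np

m0 = 0.4                        # Mn moment (mu_B), value quoted in the text
B_app, B_c2 = 0.150, 0.5        # applied field used in the simulations and critical field (T)
L = 18.0                        # cone/helix period along [111] (nm)
q = 2 * np.pi / L               # magnitude of the propagation vector (here along z = [111])

cos_t = B_app / B_c2            # cos(theta) = |B| / B_c2
sin_t = np.sqrt(1.0 - cos_t ** 2)

z = np.linspace(0.0, 2 * L, 200)                     # positions along the propagation direction
m = m0 * np.stack([sin_t * np.cos(q * z),
                   sin_t * np.sin(q * z),
                   cos_t * np.ones_like(z)], axis=-1)
print(np.allclose(np.linalg.norm(m, axis=-1), m0))   # True: the moment length is preserved
```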
The predicted field distributions $p(|B|)$ in an applied field of $B_{\mathrm{app}} =150$ mT, shown in Fig. \[theory\](a,b), are seen to be quite distinct in each phase. The cone magnetic structure \[Fig. \[theory\](a)\] involves a distribution of fields (resulting from muons at site 1) and a sharp peak on the low field side of the applied field (resulting from site 0). The signal from the skyrmion magnetic structure \[Fig. \[theory\](b)\] shows significant broadening over that resulting from the cone and helical magnetic structures. The shape remains asymmetric, but is less skewed than the distribution expected from the conical structure. In addition, the contribution from site 0 shows an asymmetric distribution, with a sharp peak on the low field side of the applied field. The features in the simulations are not apparent in the measured data, which also includes, for example, a background contribution from muons stopping in the sample holder. We note, however, that spectra measured in the region where the SL phase is observed in bulk MnSi ($26 \lesssim T
\lesssim 30$ K) do show a broadened field distribution compared to those measured in the $30 \lesssim T \lesssim 40$ K region, with some spectral weight shifted to fields below the diamagnetic peak. It is also noticeable that $p(B)$ for the cone structure involves a sizeable shift of the peak in spectral weight to low fields and that, in the data, the value of the field $B_{0}$ recovers towards the applied field below 40 K with the shift no longer apparent below $T=30$ K. Despite these observations, it is not possible from the comparison of these simulations to the measured lineshapes of Fig. \[data3\] to make firm conclusions regarding the nature of the spin structure in the films, nor to unambiguously conclude whether skyrmions are present. Whatever the case, it is unlikely that the changes that we see in the $T=20$–$40$ K region can be accounted for simply by a single magnetic cone phase. It is also unlikely that our measurements could be explained via a single magnetic phase with the formation of chiral domains of the form discussed, e.g., in Ref. . In that case, we would expect the muon response in the two domains to be different (based on the behaviour of the bulk material [@Amato-2014]), resulting in a broadening compared to the single domain case. However, this would not, in itself, explain the succession of changes in broadening we observe on cooling.

In conclusion, magnetization and TF $\mu^{+}$SR measurements in thin film MnSi identify an ordering temperature at $T_{\mathrm{c}}^{\mathrm{film}}\approx 40$ K. The TF data also reveal significant changes in the static field distribution that coincide with the topological contribution to the Hall effect identified previously, and also with the magnetic phase boundaries observed in both the bulk and thin films. We therefore suggest that there may be phase boundaries or crossovers in behaviour in this system occurring around $T\approx 20$ K, 30 K and 40 K. Although our data do not reveal a signature of the SL phase, it is unlikely that a single cone phase could account for our results.
This work was carried out at S$\mu$S, Paul Scherrer Institut, Switzerland and we are grateful for the provision of beam time. We thank the EPSRC (UK) and the John Templeton Foundation for financial support and to A. Amato, Ch. Pfleiderer and R.C. Williams for useful discussion. SLZ and TH gratefully acknowledge support by the Semiconductor Research Corporation (SRC). This work made use of the facilities of N8 HPC provided and funded by the N8 consortium and EPSRC (Grant No. EP/K000225/1). The Centre is co-ordinated by the Universities of Leeds and Manchester.
[xx]{}
G. E. Brown and M. Rho (Eds.) [*The multifaceted Skyrmion*]{} (World Scientific Singapore) (2010).
S. Mühlbauer, B. Binz, F. Jonietz, C. Pfleiderer, A. Rosch, A. Neubauer, R. Georgii, P. Böni, Science [**323**]{}, 915 (2009).
W. Münzer, A. Neubauer, T. Adams, S. Mühlbauer, C. Franz, F. Jonietz, R. Georgii, P. Böni, B. Pedersen, M. Schmidt, A. Rosch and C. Pfleiderer, Phys. Rev. B [**81**]{}, 041203(R) (2010).
X. Z. Yu, Y. Onose, N. Kanazawa, J. H. Park, J. H. Han, Y. Matsui, N. Nagaosa and Y. Tokura, Nature [**465**]{}, 901 (2010).
X. Z. Yu, N. Kanazawa, Y. Onose, K. Kimoto, W. Z. Zhang, S. Ishiwata, Y. Matsui and Y. Tokura, Nature Mat. [**10**]{}, 106 (2010).
S. Seki, X. Z. Yu, S. Ishiwata and Y. Tokura, Science [**336**]{}, 198 (2012).
S. Seki, J.-H. Kim, D. S. Inosov, R. Georgii, B. Keimer, S. Ishiwata and Y. Tokura, Phys. Rev. B [**85**]{}, 220406(R) (2012).
S. Seki, S. Ishiwata and Y. Tokura, Phys. Rev. B [**86**]{}, 060403 (2012).
M. C. Langner, S. Roy, S. K. Mishra, J. C. T. Lee, X. W. Shi, M. A. Hossain, Y.-D. Chuang, S. Seki, Y. Tokura, S. D. Kevan and R. W. Schoenlein, Phys. Rev. Lett. [**112**]{}, 167202 (2014).
T. Adams, A. Chacon, M. Wagner, A. Bauer, G. Brandl, B. Pedersen, H. Berger, P. Lemmens, and C. Pfleiderer, Phys. Rev. Lett. [**108**]{}, 237204 (2012).
A. A. Omrani, J. S. White, K. Prša, I. Živković, H. Berger, A. Magrez, Y.-H. Liu, J. H. Han and H. M. R[ø]{}nnow, Phys. Rev. B [**89**]{}, 064406 (2014).
A. Tonomura, X. Yu, K. Yanagisawa, T. Matsuda, Y. Onose, N. Kanazawa, H.-S. Park, Y. Tokura, Nano Letters [**12**]{}, 1673 (2012).
X. Yu, J.P. DeGrave, Y. Hara, T. Hara, S. Jin and Y. Tokura, Nano Letters [**13**]{}, 3755 (2013).
Y. Li, N. Kanazawa, X. Z. Yu, A. Tsukazaki, M. Kawasaki, M. Ichikawa, X. F. Jin, F. Kagawa and Y. Tokura, Phys. Rev. Lett. [**110**]{}, 117202 (2013).
T. L. Monchesky, J. C. Loudon, M. D. Robertson, A. N. Bogdanov, Phys. Rev. Lett. [**112**]{}, 059701 (2014).
S. A. Meynell, M. N. Wilson, J. C. Loudon, A. Spitzig, F. N. Rybakov, M. B. Johnson, and T. L. Monchesky, Phys. Rev. B [**90**]{}, 224419 (2014).
E. A. Karhu, S. Kahwaji and T.L. Monchesky, C. Parsons, M.D. Robertson and C. Maunders, Phys. Rev. B [**82**]{}, 184417 (2010).
T. Prokscha, E. Morenzoni, K. Deiters, F. Foroughi, D. George, R. Kobler, A. Suter and V. Vrankovic Nucl. Instr. Meth. A [**595**]{} 317 (2008).
E. A. Karhu, S. Kahwaji, M. D. Robertson, H. Fritzsche, B. J. Kirby, C. F. Majkrzak and T.L. Monchesky, Phys. Rev. B [**84**]{}, 060404 (2011).
E. A. Karhu, U. K. Rößler, A. N. Bogdanov, S. Kahwaji, B. J. Kirby, H. Fritzsche, M. D. Robertson, C. F. Majkrzak and T. L. Monchesky, Phys Rev. B [**85**]{}, 094429 (2012).
M. N. Wilson, E. A. Karhu, D. P. Lake, A. S. Quigley, S. Meynell, A. N. Bogdanov, H. Fritzsche, U. K. Rößler and T. L. Monchesky, Phys. Rev. B [**88**]{}, 214420 (2013).
M. N. Wilson, A. B. Butenko, A. N. Bogdanov and T. L. Monchesky, Phys. Rev. B [**89**]{}, 094411 (2014).
S. J. Blundell, Contemp. Phys. [**40**]{}, 175 (1999).
Supplemental information contains details of sample preparation, experimental methods and further data.
A. Amato, P. Dalmas de Réotier, D. Andreica, A. Yaouanc, A. Suter, G. Lapertot, I. M. Pop, E. Morenzoni, P. Bonfà, F. Bernardini and R. De Renzi, Phys. Rev. B [**89**]{}, 184425 (2014).
D. Bloch, V. Jaccarino, J. Voiron and J.H. Wernick, Phys. Lett. A [**51**]{}, 259 (1975).
M. Lee, Y. Onose, Y. Tokura and N.P. Ong, Phys. Rev. B [**75**]{}, 172403 (2007).
J. E. Sonier, J. H. Brewer, and R. F. Kiefl, Rev. Mod. Phys. [**72**]{}, 769 (2000).
T. Lancaster, R. C. Williams, I. O. Thomas, F. Xiao, F. L. Pratt, S. J. Blundell, J. C. Loudon, T. Hesjedal, S. J. Clark, P. D. Hatton, M. Ciomaga Hatnean, D. S. Keeble and G. Balakrishnan, Phys. Rev. B [**91**]{}, 224408 (2015).
R.S. Hayano, Y.J. Uemura, J. Imazato, N. Nishida, K. Nagamine, T. Yamazaki, Y. Ishikawa and H. Yasuoka, J. Phys. Soc. Japan [**49**]{}, 1773 (1980).
A. Bauer and Ch. Pfleiderer, Phys. Rev. B [**85**]{}, 214418 (2012).
K. Motoya, H. Yasuoka, Y. Nakamura, V. Jaccarino and J. H. Wernick, J. Phys. Soc. Jpn. [**44**]{}, 833 (1979).
N. A. Porter, P. Sinha, M. B. Ward, A. N. Dobrynin, R. M. D. Brydson, T. R. Charlton, C. J. Kinane, M. D. Robertson, S. Langridge and C. H. Marrows, arXiv:1312.1722 (2013).
---
abstract: 'Generative Adversarial Networks (GANs) are an elegant mechanism for data generation. However, a key challenge when using GANs is how to best measure their ability to generate realistic data. In this paper, we demonstrate that an intrinsic dimensional characterization of the data space learned by a GAN model leads to an effective evaluation metric for GAN quality. In particular, we propose a new evaluation measure, CrossLID, that assesses the local intrinsic dimensionality (LID) of real-world data with respect to neighborhoods found in GAN-generated samples. Intuitively, CrossLID measures the degree to which manifolds of two data distributions coincide with each other. In experiments on 4 benchmark image datasets, we compare our proposed measure to several state-of-the-art evaluation metrics. Our experiments show that CrossLID is strongly correlated with the progress of GAN training, is sensitive to mode collapse, is robust to small-scale noise and image transformations, and robust to sample size. Furthermore, we show how CrossLID can be used within the GAN training process to improve generation quality.'
author:
- Sukarna Barua
- Xingjun Ma
- Sarah Monazam Erfani
- 'Michael E. Houle'
- James Bailey
bibliography:
- 'crosslid.bib'
title: Quality Evaluation of GANs Using Cross Local Intrinsic Dimensionality
---
=1
Introduction
============
Generative Adversarial Networks (GANs) are powerful models for data generation, composed of two neural networks, known as the *generator* and the *discriminator*. The generator maps random noise vectors to locations in the data domain in an attempt to approximate the distribution of the real-world (or real) data. The discriminator accepts a data sample and returns a decision as to whether or not the sample is from the real data distribution or was artificially generated. While the discriminator is trained to distinguish real samples from generated ones, the generator’s objective is to deceive the discriminator by producing data that cannot be distinguished from real data. The two networks are jointly trained to optimize an objective function resembling a two-player minimax game.
GANs were first formulated by [@gan], and have been applied to tasks such as image generation [@largescalegan; @laplaciangan; @sngan; @dcgan] and image inpainting [@imageinpainting]. Despite their elegant theoretical formulation [@gan], training of GANs can be difficult in practice due to instability issues, such as vanishing gradients and mode collapse [@wgan; @gan]. The vanishing gradient problem occurs whenever gradients become too small to allow sufficient progress towards an optimization goal within the allotted number of training iterations. The latter occurs when the generator produces samples for only a limited number of data modes, without covering the full distribution of the real data.
Deployment of GANs is further complicated by the difficulty of evaluating the quality of their output. Researchers often rely on visual inspection of generated samples, which is both time-consuming and subjective. A quantitative quality metric is clearly desirable, and several such methods do exist [@modereggan; @gan; @fid; @proggan; @c2st; @areganequal; @acgan; @ganimproved; @howgoodismygan]. However, past research has identified various limitations of some existing metrics [@inceptionnote; @noteevalgenmodel], and effective evaluation of GAN models is still an open issue.
In this paper, we show how the data distribution learned by a GAN model can be evaluated in terms of the distributional characteristics within neighborhoods of data samples. With respect to a given location $q$ in the data domain, the [*Local Intrinsic Dimensionality*]{} (LID) model [@lidhoule] characterizes the order of magnitude of the growth of probability measure with respect to a neighborhood of increasing radius. LID can be regarded as a measure of the discriminability of the distribution of distances to $q$ induced by the global distribution; equivalently, it reveals the intrinsic dimensionality of the local data submanifold tangent to $q$.
Here, we further generalize LID to a new measure, CrossLID, that assesses the average LID estimate over data samples $q$ from one distribution, with respect to a set of samples from a second distribution. If the two distributions are in perfect alignment, the CrossLID measure would yield an estimate of the average LID value with respect to the common distribution. If the distributions were then progressively separated (such as would happen if their underlying manifolds were moved out of alignment), the CrossLID estimate would tend to increase, and become higher than the average LID estimates of samples $q$ with respect to the distribution from which they were drawn. We show that by applying CrossLID to samples from GAN generated data and real data, we can assess the degree to which the GAN generated data distribution conforms to the real distribution.
As an illustration of the possible relationships between two distributions, Fig. \[fig:lidstudy1\] shows four examples of how a GAN model could learn a bimodal Gaussian distribution. Decreasing CrossLID scores indicate an increasing conformity between the two-mode real data distribution and the generated data distribution.
[.23]{} ![Four 2D examples showing how GAN-generated data samples (triangles) could relate to a bimodal Gaussian-distributed data set (circles), together with CrossLID scores: (a) generated data distributed uniformly, spatially far from the real data; (b) generated data with two modes, spatially far from the real data; (c) generated data associated with only one mode of the real data; and (d) generated data associated with both modes of the real data (the desired situation).[]{data-label="fig:lidstudy1"}](img/lidstudy/fake_is_random-2.pdf "fig:"){width="\textwidth"}
[.23]{} ![Four 2D examples showing how GAN-generated data samples (triangles) could relate to a bimodal Gaussian-distributed data set (circles), together with CrossLID scores: (a) generated data distributed uniformly, spatially far from the real data; (b) generated data with two modes, spatially far from the real data; (c) generated data associated with only one mode of the real data; and (d) generated data associated with both modes of the real data (the desired situation).[]{data-label="fig:lidstudy1"}](img/lidstudy/fake_close-2.pdf "fig:"){width="\textwidth"}
[.23]{} ![Four 2D examples showing how GAN-generated data samples (triangles) could relate to a bimodal Gaussian-distributed data set (circles), together with CrossLID scores: (a) generated data distributed uniformly, spatially far from the real data; (b) generated data with two modes, spatially far from the real data; (c) generated data associated with only one mode of the real data; and (d) generated data associated with both modes of the real data (the desired situation).[]{data-label="fig:lidstudy1"}](img/lidstudy/fake_missing_one_mode-2.pdf "fig:"){width="\textwidth"}
[.23]{} ![Four 2D examples showing how GAN-generated data samples (triangles) could relate to a bimodal Gaussian-distributed data set (circles), together with CrossLID scores: (a) generated data distributed uniformly, spatially far from the real data; (b) generated data with two modes, spatially far from the real data; (c) generated data associated with only one mode of the real data; and (d) generated data associated with both modes of the real data (the desired situation).[]{data-label="fig:lidstudy1"}](img/lidstudy/fake_learns_both_mode-2.pdf "fig:"){width="\textwidth"}
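To make the idea concrete, the toy sketch below (ours) reproduces the qualitative behaviour of Fig. \[fig:lidstudy1\] on a bimodal 2D Gaussian, using the MLE estimator of LID introduced later in Eq. (\[eq:lidmle\]) and raw coordinates rather than any learned feature representation: a generator covering both modes yields a CrossLID score close to the intrinsic dimension, whereas a mode-collapsed generator yields a much larger score.

```python
import numpy as np

def lid_mle(x, X, k=20):
    """MLE (Hill) estimate of LID at x from its k nearest neighbours in X."""
    dists = np.sort(np.linalg.norm(X - x, axis=1))
    r = dists[dists > 0][:k]                 # k smallest positive distances
    return -1.0 / np.mean(np.log(r / r[-1]))

def cross_lid(X_ref, X_other, k=20):
    """Average LID of reference samples, estimated from neighbourhoods in the other set."""
    return float(np.mean([lid_mle(x, X_other, k) for x in X_ref]))

rng = np.random.default_rng(0)
real = rng.normal(size=(2000, 2)) + rng.choice([-5.0, 5.0], size=(2000, 1))  # two modes
good = rng.normal(size=(2000, 2)) + rng.choice([-5.0, 5.0], size=(2000, 1))  # covers both modes
bad  = rng.normal(size=(2000, 2)) + 5.0                                      # covers one mode only
print(cross_lid(real, good), cross_lid(real, bad))   # the mode-collapsed score is much larger
```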
The main contributions of the paper are as follows:
- We propose CrossLID, a cross estimation technique based on LID that is capable of assessing the alignment of the data embedding learned by the GAN generator with that of the real data distribution.
- We show how CrossLID can be used during GAN training to identify classes for which some modes are not well covered during learning, helping to avoid mode collapse. We also show how this knowledge can then be used to bias the GAN discriminator via an oversampling strategy to improve its performance on such classes.
- Experimentation showing that our CrossLID measure is well correlated with GAN training progress, and comparison to two state-of-the-art GAN evaluation measures.
Evaluation Metrics for GAN Models
=================================
GAN-based learning is an extensively researched area. Here, we briefly review the topic most relevant to our work, evaluation metrics for GAN models. Past research has employed several different metrics, including log-likelihood measures [@gan], the Inception score (IS) [@ganimproved], the MODE score [@modereggan], Kernel MMD [@kmmd], the MS-SSIM index [@acgan], the Fréchet Inception Distance (FID) [@fid], the sliced Wasserstein distance [@proggan], and Classifier Two-Sample Tests [@c2st]. In our study, we focus on the two most widely used metrics for image data, IS and FID, as well as a recently proposed measure, the Geometry Score (GS) [@gscore].
IS uses an associated Inception classifier [@inceptionv3] to extract output class probabilities for each image, and then computes the Kullback-Leibler (KL) divergence of these probabilities with respect to the marginal probabilities of all classes: $$\small
\text{IS} = \textrm{exp}\left( \mathbb{E}_{x \sim p_G} D_{KL}( p(y|x) || p(y) ) \right),
\label{eq:inceptionscore}$$ where $x \sim p_G$ denotes a sample $x$ drawn from the generator outputs, $p(y|x)$ is the probability distribution over different classes as assigned to sample $x$ by the Inception classifier, $p(y) = \int_{x}p(y|x)dx$ is the marginal class distribution, and $D_{\textrm{KL}}$ is the KL divergence. IS measures two aspects of a generative model: (1) the images generated should be both clear and highly distinguishable by the classifier, as indicated by low entropy of $p(y|x)$, and (2) all classes should have good representation over the set of generated images, as indicated by high entropy of the marginal distribution $p(y)$. A recent study has shown that IS is susceptible to variations in the Inception network weights when trained on different platforms [@inceptionnote]. FID passes both real and generated images to an Inception classifier, and extracts activations from an intermediate pooling layer. The activations are assumed to follow multidimensional Gaussians parameterized by their means and covariances. The FID is defined as: $$\small
\text{FID} = ||\mu_I - \mu_G||_2^2 + \textrm{Tr}\left(\Sigma_I + \Sigma_G - 2(\Sigma_I\Sigma_G)^{\frac{1}{2}}\right),
\label{eq:fidcore}$$ where $(\mu_I,\Sigma_I)$ and $(\mu_G,\Sigma_G)$ represent the mean and covariance of activations for real and generated data samples, respectively. Compared to IS, FID has been shown to be more sensitive to mode collapse and noise in the outputs; however, it also requires an external Inception classifier for its calculation [@fid].
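For reference, a minimal numerical sketch (ours) of Eq. (\[eq:fidcore\]): in practice the inputs are Inception pooling-layer activations of real and generated images, which we replace here with synthetic feature vectors.

```python
import numpy as np
from scipy import linalg

def fid(act_real, act_gen):
    """Frechet distance between Gaussians fitted to two sets of feature activations."""
    mu_r, mu_g = act_real.mean(axis=0), act_gen.mean(axis=0)
    cov_r = np.cov(act_real, rowvar=False)
    cov_g = np.cov(act_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)   # matrix square root of cov_r cov_g
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(cov_r + cov_g - 2.0 * covmean.real))

rng = np.random.default_rng(0)
act_real = rng.normal(0.0, 1.0, size=(5000, 64))       # stand-ins for Inception features
act_same = rng.normal(0.0, 1.0, size=(5000, 64))
act_shifted = rng.normal(0.5, 1.0, size=(5000, 64))
print(fid(act_real, act_same), fid(act_real, act_shifted))   # the shifted set scores much higher
```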
GS assesses the conformity between manifolds of real and generated data, in terms of the persistence of certain topological properties in a manifold approximation process. The topological relationships are extracted in terms of the counts of 1-dimensional loops in a graph structure built up from proximity relationships as a distance threshold is increased. Although it may be indirectly sensitive to variations in the dimensionality of the manifolds, GS explicitly rewards only matches in terms of the specific topology of these loop structures in approximations of the manifolds. However, due to its strictly topological nature, GS is insensitive to differences in relative embedding distances or orientations within the manifold. This issue is acknowledged by the authors, who advise that GS would be best suited for use in conjunction with other metrics [@gscore].
Local Intrinsic Dimensionality (LID)
====================================
LID is an expansion-based measure of intrinsic dimensionality within the vicinity of some reference point $q$ [@lidhoule]. Intuitively, in Euclidean space, the volume of a $D$-dimensional ball grows proportionally to $r^D$ when its size is scaled by a factor of $r$. From the above rate of volume growth with radius, the dimension $D$ can be deduced from two volume measurements as: $V_2/V_1 = (r_2/r_1)^D \Rightarrow D = \ln(V_2/V_1)/\ln(r_2/r_1)$. Transferring this concept to smooth functions leads to the formal definition of LID. **Definition of LID:** Let $F$ be a positive and continuously differentiable function over some open interval containing $r>0$. The LID of $F$ at $r$ is defined as: $$\small
\begin{split}
\text{LID}_F(r)
:=
r\frac{F'(r)}{F(r)} &
=
\lim_{\epsilon \to 0^{+}}
\frac{\ln\,(F((1+\epsilon)r)/F(r))}{\ln\,(1+\epsilon)}
=
\lim_{\epsilon \to 0^{+}}
\frac{F((1+\epsilon)r)-F(r)}{\epsilon\,F(r)}
,
\end{split}
\label{eq:lidhoule}$$ wherever the limits exist. The local intrinsic dimensionality of $F$ is then: $$\small
\text{LID}^{*}_F=\lim_{r \to 0^{+}} \text{LID}_F(r).$$ In our context, and as originally proposed in [@lidhoule], we are interested in functions that are the distributions of distances induced by some global distribution of data points: for each data sample generated with respect to the global distribution, its distance to a predetermined reference point $q$ determines a sample from the local distance distribution associated with $q$.
The LID model has the interesting property that its definition can be motivated in two different ways. The first limit stated in the definition follows from a modeling of the growth of probability measure in a small expanding neighborhood of the origin $q$: as the radius $r$ increases, the amount of data encountered can be expected to grow proportionally to $r$ raised to the power of the intrinsic dimension. Although the LID model is oblivious to the representational dimension of the data domain, in the setting of a uniform distribution supported on a local manifold of dimension $m$, if $F$ is the distribution of distances to a reference point in the relative interior of the manifold, then $\text{LID}^{*}_F = m$. For more information on the formal definition and properties of LID see [@lidhoule1; @lidhoule2].
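As a quick worked example (ours, not part of the original formulation), assume that the distance distribution near $q$ behaves locally as $F(r) = (r/w)^m$ for small $r$, which corresponds to uniform measure on an $m$-dimensional neighborhood with scale parameter $w$. Then $$\text{LID}_F(r) = r\,\frac{F'(r)}{F(r)} = r\cdot\frac{m\,r^{m-1}/w^m}{r^m/w^m} = m, \qquad\text{so}\qquad \text{LID}^{*}_F = m,$$ recovering the manifold dimension as stated above.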
The second limit expresses the (in)discriminability of $F$ when interpreted as a distance measure evaluated at distance $r$ (with low values of $\text{LID}_F(r)$ indicating higher discriminability). As implied by Eq. \[eq:lidhoule\], the LID framework is extremely convenient in that the local intrinsic dimensionality and the discriminability of distance measures are shown to be equivalent and interchangeable concepts.
**Estimating LID:** LID is a generalization of pre-existing expansion-based measures which implicitly use neighborhood set sizes as a proxy for probability measure. These earlier models include the expansion dimension [@ed] and its variants [@ged], and the minimum neighbor distance (MinD) [@mind], all of which have been shown to be crude estimators of LID [@lidestj]. Although the popular estimator due to [@lidestlevina] can be regarded as a smoothed version of LID, its derivation depends on the assumption that the observed data can be treated as a homogeneous Poisson process. However, the only assumptions made by the LID model is that the underlying (distribution) function be continuously differentiable. For this work, we use the Maximum Likelihood Estimator (MLE) of LID as proposed in [@lidestj], due to its ease of implementation and its superior convergence properties relative to the other estimators studied there.
Given a set of data points $X$, and a distinguished data sample $x$, the MLE estimator of LID is: $$\small
\begin{split}
\text{LID}\,(x; X) &
=
-\Big (\frac{1}{k} \sum_{i=1}^{k} \ln\frac{r_i(x; X)}{r_{max}(x; X)} \Big )^{-1}
=
\Big (\ln r_{max}(x; X) - \frac{1}{k} \sum_{i=1}^{k} \ln r_i(x; X) \Big )^{-1}
\,
,
\end{split}
\label{eq:lidmle}$$ where $k$ is the neighborhood size, $r_i(x; X)$ is the distance from $x$ to its $i$-th nearest neighbor in $X\setminus\{x\}$, and $r_{\textrm{max}}(x; X)$ denotes the maximum distance within the neighborhood (which by convention can be $r_k(x;X)$). Due to the deep equivalence between the LID model and the statistical theory of extreme values (EVT) shown in [@lidhoule1; @lidestj], the first of the two equivalent formulations in Eq. \[eq:lidmle\] coincides with the well-known Hill estimator of scale derived from EVT [@Hill75]. As can be seen from the second formulation, the reciprocal of the MLE estimator assesses the discriminability within the $k$-NN set of $x$ as the difference between the maximum and mean of log-distance values. Note that in these estimators, no explicit knowledge of the underlying function $F$ is needed - this information is implicit in the distribution of neighbor distances themselves.
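The estimator in Eq. \[eq:lidmle\] is straightforward to implement. The following NumPy sketch (function and variable names are ours) uses brute-force Euclidean neighbor search purely for illustration; any nearest-neighbor index could be substituted.

```python
import numpy as np

def lid_mle(x, X, k=20):
    """MLE (Hill-type) estimate of LID at point x with respect to data set X."""
    dists = np.linalg.norm(X - x, axis=1)     # Euclidean distances to all points
    dists = np.sort(dists[dists > 0])[:k]     # drop x itself, keep the k nearest
    r_max = dists[-1]                         # by convention, r_max = r_k
    # Reciprocal of the estimate: max log-distance minus mean log-distance.
    return 1.0 / (np.log(r_max) - np.mean(np.log(dists)))
```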
LID can characterize the intrinsic dimensionality of the data submanifold in the vicinity of a distinguished point $x$. The ${\text{LID}}(x;X)$ values of all data samples $x$ from a dataset $X$ can thus be averaged to characterize the overall intrinsic dimensionality of the manifold within which $X$ resides. In [@lidhoule2; @romano2016measuring], it was shown that this type of average is in fact an estimator of the correlation dimension over the sample domain (or manifold). Henceforth, whenever the context set $X$ is understood, we will use the simplified notation ${\text{LID}}(x)$ to refer to ${\text{LID}}(x;X)$, and to denote the average of these estimates over all $x\in X$ by ${\text{LID}}(X)$.
Evaluating GANs via Cross Local Intrinsic Dimensionality {#sec:evaluating_gans_crosslid}
========================================================
We propose a new measure, CrossLID, that evaluates the conformity between a real distribution $p_{R}$ and a GAN-generated distribution $p_{G}$, as derived from the profiles of distances from samples of one distribution to samples of the other distribution. As illustrated in Fig. \[fig:lidstudy1\], our intuition is that if two distributions are similar, then the distance profiles of a generated sample with respect to a neighborhood of real samples should conform with the profiles observed within the real samples, and vice versa.
CrossLID for GAN Model Evaluation {#sec:crosslid}
---------------------------------
We generalize the single data distribution based LID metric defined in Eq. \[eq:lidhoule\] to a new metric that measures the cross LID characteristics between two distributions. Given two sets of samples $A$ and $B$, the CrossLID of samples in $A$ with respect to $B$ is defined as: $$\small
\text{CrossLID}(A; B) = {\mathbb{E}}_{x \in A} {\text{LID}}(x; B).
\label{eq:crosslid}$$ Note that $\text{CrossLID}(A; B)$ does not necessarily equal $\text{CrossLID}(B; A)$.
Low $\textrm{CrossLID}(A; B)$ scores indicate low average spatial distance between the elements of $A$ and their neighbor sets in $B$. From the second formulation of the LID estimator in Eq. \[eq:lidmle\], we see that increasing the separation between $A$ and $B$ would result in a reduction in the discriminability of distances between them, as assessed by the difference between the maximum and mean of the log-distances from points of $A$ to their nearest neighbors in $B$ — thereby increasing the CrossLID score. As a simplified example, consider the case where a positive correction $d$ is added to each of the distances from some reference sample $x\in A$ with respect to its neighbors in $B$. This distance correction would cause the reciprocal of the LID estimate defined in Eq. \[eq:lidmle\] to become $\ln(r_{\textrm{max}}{+}d)-\frac{1}{k} \sum_{i=1}^{k} \ln(r_i{+}d)$, which leads to an increase in the estimate of $\text{LID}$ when $d>0$, and a decrease when $d<0$. Thus, a good alignment between $A$ and $B$ is revealed by good discriminability (low LID) of the distance distributions induced by one set ($B$) relative to the members of the other ($A$). In general, CrossLID differs from LID in its sensitivity to differences in spatial position and orientation of the respective manifolds within which $A$ and $B$ reside (see Suppl. Sec. \[sec:manifoldposorientationcrosslid\]).
Low values of $\textrm{CrossLID}(A; B)$ also indicate good coverage of the domain of $A$ by elements of $B$. To see why, consider what would happen if this were not the case: if the samples in $B$ did not provide good coverage of all modes of the underlying distribution of $A$, there would be a significant number of samples in $A$ whose distances to its nearest neighbors in $B$ would be excessively large in comparison to an alternative set $B'$ providing better coverage of $A$ (see Fig. \[fig:lidstudy1\]c and \[fig:lidstudy1\]d for an example). As discussed above, this increase in the distance profile would likely lead to an increase in many of the individual LID estimates that contribute to the CrossLID score.
Given a set of samples $X_R$ from a real data distribution, and a set of samples $X_G$ from the GAN-generated distribution, a low value of $\text{CrossLID}(X_{R}; X_{G})$ indicates a good alignment between the manifold associated with $X_G$ and the manifold associated with $X_R$, as well as an avoidance of mode collapse in the generation of $X_G$. It should be noted, however, that $\text{CrossLID}(X_{G}; X_{R})$ (in contrast to $\text{CrossLID}(X_{R}; X_{G})$) does not indicate good coverage, and thus is not sensitive to mode collapse. Since low values of $\text{CrossLID}(X_{R}; X_{G})$ encourage a good integration of the generated data into the submanifolds of the real data, and an avoidance of mode collapse in sample generation, $\text{CrossLID}(X_{R}; X_{G})$ is a good candidate measure for evaluating GAN learning processes. As CrossLID is a local rather than a global measure, it also allows targeted quality assessment of GANs for refined sample groups of interest (i.e., subsets of the real data). For example, for a specific mode ($X_{R}^{m}$) of the real samples, identified from either cluster or class information, $\text{CrossLID}(X_{R}^{m}; X_{G})$ can be used to assess how well the GAN model learns the submanifold of this particular mode. CrossLID can therefore be exploited to detect and mitigate underlearned modes in GAN training (explored further in Sec. \[sec:lidoptapproach\]).
Effective Estimation of CrossLID {#sec:cross_estimation}
--------------------------------
We next discuss two important aspects of CrossLID estimation: (1) the choice of feature space in which CrossLID is computed, and (2) the choice of appropriate sample and neighborhood sizes for accurate and efficient CrossLID estimation.
**Deep Feature Space for CrossLID Estimation:** The representations that define the underlying manifold of a data distribution are well learned in the deep representation space. Recent work in representation learning [@goodfellow2016deep], adversarial detection [@lidadversarial] and noisy label learning [@ddl] has shown that DNNs can effectively map high-dimensional inputs to low-dimensional submanifolds at different intermediate layers of the network. We denote the output of such a layer as a function $f(x)$, and estimate CrossLID in the deep feature space as: $$\small
\begin{split}
\text{CrossLID}(f(X_{R});f(X_{G}))
= \frac{1}{|X_{R}|}\sum_{x \in X_{R}}
\Big (\ln r_{\textrm{max}}(f(x),f(X_{G}))
- \frac{1}{k} \sum_{i=1}^{k} \ln r_i(f(x),f(X_{G})) \Big )^{-1}
\,
.
\end{split}
\label{eq:crosslidmlesetfs}$$
It should be noted that successful learning by the GAN discriminator would entail the learning of a mapping $f$ for which the intrinsic dimensionality of $f(X_R)$ is relatively low, and the local discriminability is relatively high. This encourages the GAN generator to produce samples for which $\text{CrossLID}(f(X_{R});f(X_{G}))$ is also low, further enhancing the value of CrossLID in GAN evaluation and training.
The transformation $f(x)$ can be computed by training an external network separately on the real data distribution, such as the Inception network used by IS and FID, and then extracting feature vectors from an intermediate layer of the network. In Sec. \[sec:experiments\_crosslid\] we will show how such feature extractors work well for the estimation of CrossLID. Note that CrossLID can be computed using a single forward pass of the feature extractor network; no backward pass is needed.

**Sample Size and Neighborhood for CrossLID Estimation:** Searching for the $k$-nearest neighbors of all samples of $X_{R}$ within the entire GAN-generated dataset $X_{G}$ can be prohibitively expensive. Recent works using the LID measure in adversarial detection [@lidadversarial] and noisy label learning [@ddl] have demonstrated that LID estimation at the deep feature level can be effectively performed within small batches of training samples, with neighborhood sizes as small as $k=20$ drawn from batches of 100 samples. For the estimation of $\text{CrossLID}(f(X_{R});f(X_{G}))$, we use $|X_R|=20000$ samples from the real training dataset, and $|X_{G}|=20000$ GAN-generated samples. To reduce computational complexity, we search for the $k=100$ nearest neighbors of each $f(x)$, where $x\in X_R$, within a batch of 1000 samples randomly chosen from $f(X_{G})$, and use the distances from $f(x)$ to these $k=100$ nearest neighbors to estimate $\text{CrossLID}(f(x);f(X_{G}))$. The mean of the CrossLID estimates over all 20000 real samples determines the final overall estimate. A larger $k$ tends to result in a higher value of CrossLID, an effect of the expansion of locality (more details in Suppl. Sec. \[sec:crosslidvaryk\]).
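A direct transcription of this batched procedure is sketched below (names are ours; `feats_real` and `feats_gen` stand for $f(X_R)$ and $f(X_G)$ already mapped through the chosen intermediate layer). It is intended only to make Eq. \[eq:crosslidmlesetfs\] concrete, not as an optimized implementation.

```python
import numpy as np

def crosslid(feats_real, feats_gen, k=100, batch=1000, seed=0):
    """Batched estimate of CrossLID(f(X_R); f(X_G)) as described above."""
    rng = np.random.default_rng(seed)
    estimates = []
    for f_x in feats_real:
        # Neighbors are searched within a random batch of generated features.
        idx = rng.choice(len(feats_gen), size=batch, replace=False)
        d = np.sort(np.linalg.norm(feats_gen[idx] - f_x, axis=1))[:k]
        estimates.append(1.0 / (np.log(d[-1]) - np.mean(np.log(d))))
    return float(np.mean(estimates))
```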
Oversampling in GAN training with Mode-wise CrossLID {#sec:lidoptapproach}
====================================================
A GAN distribution may not equally capture the distributions of all modes present in a real data distribution. Due to the inherent randomness in stochastic learning, the decision boundary of the discriminator may be closer to regions of some modes than others at different stages of the training process. The closer modes may develop stronger gradients, in which case the generator would learn these modes better than the others. If imbalances in learning can be detected and addressed during training, we could expect a better convergence to good solutions. To achieve this, we propose a GAN training strategy with oversampling based on mode-wise CrossLID scores (as defined in Sec. \[sec:crosslid\]).
We describe our training strategy in the context of labeled data, where we simply take the classes to be the modes (or clusters if data is unlabeled). We compute the average CrossLID score for real samples (w.r.t. generated samples) from each class, and use it to assess how well a class has been learned by the generator — the lower the CrossLID score, the more effective the learning. To generate good gradients for all classes during the training, we dynamically modify the input samples of the discriminator by oversampling the poorly learned classes (those with high class-wise CrossLID scores) from the real data distribution. The objective is to bias the discriminator’s decision boundary towards the regions of poorly learned classes and to produce stronger gradients for the generator in favor of underlearned classes.
The steps are described in Alg. \[algorithm:crosslid\]. From each class $c \in \{1, \cdots, C\}$, we select a subset of samples $X_{c}^{'}$, of size proportional to a deviation factor $\gamma_c = |\text{CrossLID}(X_{R}^{c}; X_{R}^{c}) - \text{CrossLID}(X_{R}^{c}; X_{G})|/\text{CrossLID}(X_{R}^{c}; X_{R}^{c})$, and augment the original real dataset with the members of $X_{c}^{'}$ for subsequent training. $\gamma_c$ measures the relative deviation of the $\text{CrossLID}(X_{R}^{c}; X_{G})$ score from the self-CrossLID score $\text{CrossLID}(X_{R}^{c}; X_{R}^{c})$, i.e, the LID of $X_{R}^{c}$. When the GAN has already fully learned the distribution of a given class (i.e., $\text{CrossLID}(X_{R}^{c}; X_{G}) = \text{CrossLID}(X_{R}^{c}; X_{R}^{c})$), $\gamma_c=0$, indicating that no oversampling will be applied to this class.
Generate $N_1$ GAN samples $X_{G}$. Sample $N_2$ real samples $X_{R}^{c}$ from class $c$ $\gamma_c = |\text{CrossLID}(X_{R}^{c}; X_{R}^{c}) - \text{CrossLID}(X_{R}^{c}; X_{G})|/\text{CrossLID}(X_{R}^{c}; X_{R}^{c})$ $\gamma_c=\gamma_c/\sum_{j=1}^{C}\gamma_j$, for $c \in \{1, \dots, C\}$. $X_{\textrm{aug}} = \{X_1^{'}, \cdots, X_C^{'}\} \cup X_{R}$ where $X_c^{'}$ is a random sample from $X_R^c$ of size $|X_c^{'}| = m \times \gamma_c$ where $m$ is a size parameter Continue GAN training with $X_{\textrm{aug}}$ for the next $T$ generator iterations.
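The class-wise bookkeeping of Alg. \[algorithm:crosslid\] can be sketched as follows (a simplified illustration; function and variable names are ours, and the CrossLID scores are assumed to have been computed with the estimator of Sec. \[sec:cross\_estimation\]).

```python
import numpy as np

def oversampling_sizes(crosslid_self, crosslid_cross, m):
    """Sizes |X'_c| of the per-class oversampled subsets.

    crosslid_self[c]  ~ CrossLID(X_R^c; X_R^c)  (self score, i.e. LID of class c)
    crosslid_cross[c] ~ CrossLID(X_R^c; X_G)
    m is the size parameter from Alg. [algorithm:crosslid].
    """
    s = np.asarray(crosslid_self, dtype=float)
    x = np.asarray(crosslid_cross, dtype=float)
    gamma = np.abs(s - x) / s                  # relative deviation per class
    total = gamma.sum()
    if total == 0:                             # all classes fully learned
        return np.zeros_like(gamma, dtype=int)
    gamma = gamma / total                      # normalize across classes
    return np.floor(m * gamma).astype(int)     # |X'_c| = m * gamma_c
```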
Our proposed strategy can effectively deal with the mode collapse issues encountered in GAN training. When the generator learns a class partially, or not at all, it receives a relatively high CrossLID score for that class. In subsequent iterations, the imbalance in learning will be addressed by our oversampling in favor of these classes (Step 8 in Alg. \[algorithm:crosslid\]).
Note that CrossLID-guided training can be used for unconditional GAN training that does not explicitly use label information in the generator. CrossLID does not require knowledge of the class or mode information of the GAN-generated samples; it only requires this information for the target (real) dataset. Unlike CrossLID, other metrics such as the Inception score and FID cannot be used for mode-wise performance estimation, as they are inherently global estimates. FID can be estimated class-wise (or mode-wise) only if the label (mode) information of the GAN-generated samples is known, which is typically available only in (class-)conditional GAN training. Thus, the proposed mode-wise training is more widely applicable using CrossLID than using other metrics such as FID across different training settings.
Experimental Results
====================
Evaluation of CrossLID as a GAN Quality Metric {#sec:experiments_crosslid}
----------------------------------------------
We first demonstrate that the CrossLID score is well correlated with the training progress of GAN models. We then discuss four characteristics of the CrossLID metric: (1) sensitivity to mode collapse, (2) robustness to small input noise, (3) robustness to small image transformations, and (4) robustness to sample size used for estimation. We also compare CrossLID score with Geometry score, IS and FID. For evaluation, we used 4 benchmark image datasets: MNIST [@lecun1990handwritten], CIFAR-10 [@krizhevsky2009learning], SVHN [@netzer2011reading], and ImageNet [@imagenet].
For the CrossLID score, we used external CNNs trained on the original training set of real images for feature extraction (more details in Suppl. Sec \[sec:cnnfeatureextractor\]). To compute the IS, we followed [@ganimproved] using the pretrained Inception network, except in the case of MNIST, for which we pretrained a different CNN model as described in [@mnistinception]. FID scores were computed as in [@fid]. Our code is available at <https://www.dropbox.com/s/bqadqzr5plc6xud/CrossLIDTestCode.zip>.
**Correlation of CrossLID and Training Progress of GANs:** We show that the CrossLID score is highly correlated with the training progress of GAN models. In the left three subfigures of Fig. \[fig:lidstudy2\], as GAN training proceeds, the CrossLID score decreases (supporting images are reported in Suppl. Sec. \[sec:correlationlidsamplequality\] for visual verification of training progress). CrossLID($X_{R};X_{G}$) was estimated over 20,000 generated samples using deep features extracted from the external CNN model. To show that the CrossLID metric remains effective for high dimensional datasets, we evaluate it on the $128 \times 128$ pixel ImageNet dataset consisting of 1000 classes, each class having approximately 1300 images. We trained a ResNet model using the WGAN-GP algorithm [@improvedwgan] on the full ImageNet dataset for $100K$ generator iterations and computed CrossLID scores after every 1000 generator iterations. The fourth subfigure from the left in Fig. \[fig:lidstudy2\] shows the computed CrossLID scores over different generator iterations, confirming that the CrossLID score improves (decreases) as GAN training proceeds.
IS is an established metric that was demonstrated to correlate well with human judgment of sample quality [@ganimproved]. The rightmost subfigure in Fig. \[fig:lidstudy2\] illustrates the strong negative correlation between CrossLID and IS over different training epochs. We also observed a strong positive correlation of CrossLID with FID (results are reported in Suppl. Sec. \[sec:crosslidfidcorr\]). We found that GS does not exhibit a clear correlation with sample quality, which is consistent with its reported insensitivity to differences in embedding distances or orientations [@gscore] (see Suppl. Sec. \[sec:geometryscore\]). Therefore, we omit GS from the remainder of the discussion.
**Sensitivity to Mode Collapse:** A challenge of GAN training is to overcome mode collapse, which occurs when the generated samples cover only a limited number of modes (not necessarily from the real distribution) instead of learning the entire real data distribution. An effective evaluation metric for GANs should be sensitive to such situations.
We simulate two types of mode collapse by downsampling the training data: (1) *intra-class mode dropping*, which occurs when the GAN generates samples covering all classes but with limited diversity within each class, and (2) *inter-class mode dropping*, which occurs when the GAN generates samples from a limited number of classes. For both types, we randomly select a subset of $n$ samples from $c$ classes from the original training set (of $N$ samples from $C$ classes), then randomly subsample with replacement from the subset to create a new dataset with the same number of samples $N$ as in the original training set. For the simulation of intra-class mode dropping, we let $c=C$, and vary $n \in [30, 40, 50, 70, 100]$, whereas for inter-class mode dropping we let $n=50$, and vary $c \in [2, 4, 6, 8, 10]$. Overall, for each of the original datasets, we created five new datasets for each type of mode collapse, and computed CrossLID, IS, and FID scores on the new datasets. Note that each of these new datasets has the same number of instances $N$ as the original training set from which it was derived.
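The downsampling protocol can be sketched as follows (our own reading of the procedure, in which $n$ is taken as a per-class count; function and variable names are ours).

```python
import numpy as np

def mode_dropped_dataset(images, labels, n, classes, seed=0):
    """Keep n random samples per class in `classes`, then resample back to size N."""
    rng = np.random.default_rng(seed)
    kept = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        kept.append(rng.choice(idx, size=n, replace=False))
    kept = np.concatenate(kept)
    resampled = rng.choice(kept, size=len(images), replace=True)  # back to N samples
    return images[resampled], labels[resampled]

# Intra-class dropping: classes = all C classes, n in [30, 40, 50, 70, 100].
# Inter-class dropping: n = 50, number of classes in [2, 4, 6, 8, 10].
```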
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![*(a–c)* Test results for intra-class mode dropping: The CrossLID, IS, and FID scores on varying numbers of unique samples in the datasets. *(d–f)* Test results for inter-class mode dropping. *(g–i)* Robustness to Gaussian noise for CrossLID, IS, and FID. Noise percent indicates the proportion of pixels of GAN images that have been modified with noise.[]{data-label="fig:lidinceptionfidvarydiversitynoise"}](img/lidvinception/lidinceptionfidvarynoisediversityclass_colored.pdf "fig:"){width="90mm"}
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
As shown in Fig. \[fig:lidinceptionfidvarydiversitynoise\](a), we found that CrossLID is sensitive to different degrees of intra-class mode dropping, but IS failed to identify intra-class mode dropping on MNIST and ImageNet, and responded inconsistently for different levels of intra-class mode dropping on CIFAR-10 and SVHN (Fig. \[fig:lidinceptionfidvarydiversitynoise\](b)). FID is also sensitive to intra-class mode dropping (Fig. \[fig:lidinceptionfidvarydiversitynoise\](c)). Similar results were seen for inter-class mode dropping (Fig. \[fig:lidinceptionfidvarydiversitynoise\](d–f)): again, CrossLID was found to be sensitive to increasing levels of inter-class mode dropping, and is more sensitive than FID. Although IS revealed inter-class mode dropping for MNIST and ImageNet, it failed to do so for CIFAR-10 and SVHN.

**Robustness to Small Input Noise:** We examine the robustness of the three metrics to small noise in the data which does not greatly alter visual quality. We add noise drawn from a Gaussian distribution with both mean and variance equal to 127.5 (255/2) to a small proportion of pixels in the original images. As shown in Fig. \[fig:lidinceptionfidvarydiversitynoise\](g), CrossLID exhibits small variations as the proportion of modified pixels increases from 0.2% to 2%. For example, on the CIFAR10 dataset, the CrossLID score changes by only 1.2% at 2% Gaussian noise. In contrast, both IS and FID demonstrate disproportionately large variations, particularly for CIFAR-10, SVHN, and ImageNet (Fig. \[fig:lidinceptionfidvarydiversitynoise\](h) and \[fig:lidinceptionfidvarydiversitynoise\](i)). For example, IS and FID change by 52% and 48%, respectively, at only a 2% noise level on CIFAR10. The behavior of the three metrics remains similar even if we normalize the scores with respect to their minimum and maximum values. (Details and further experiments with different noise types are reported in Suppl. Sec. \[sec:robustnessspnoise\] and \[normalizedscoresnoise\].)
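The noise model used here can be sketched as follows (a minimal illustration; function names and the uint8 image layout are our assumptions).

```python
import numpy as np

def add_gaussian_pixel_noise(images, fraction=0.02, seed=0):
    """Add Gaussian noise (mean 127.5, variance 127.5) to a fraction of pixels."""
    rng = np.random.default_rng(seed)
    out = images.astype(np.float32)                       # (n, h, w, c), uint8 input
    mask = rng.random(out.shape[:3]) < fraction           # choose pixels per image
    noise = rng.normal(loc=127.5, scale=np.sqrt(127.5), size=out.shape)
    out += noise * mask[..., None]                        # same mask across channels
    return np.clip(out, 0, 255).astype(np.uint8)
```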
A potential drawback of high sensitivity to low noise levels is that the metric may respond inconsistently for images with low noise as compared to images of extremely low quality. Consider the figures \[fig:lidinceptionfidvarytrans\](a) and \[fig:lidinceptionfidvarytrans\](b), wherein we report the three metrics for two specific types of noise: a black rectangle obscuring the center of the images, and 2% Gaussian noise, respectively. Although the images with Gaussian noise are visually superior to the other ones (with implanted rectangle), by virtue of its lower score, FID rates the obscured images to be of better quality — quite the opposite to human visual judgment. In contrast, for this particular scenario, the response of both CrossLID (for which a lower score indicates better quality) and IS (for which a higher score is better) is in line with human assessment. We believe that robustness to small input noise which does not greatly change visual quality of images is a desirable characteristic for a quality measure. Noting that there is as yet no consensus on the issue of whether GAN quality measures should be robust to noise, we pose it as an open problem for the GAN research community to explore.
\[fig:lidinceptionfidvarytrans\]
**Robustness to Small Input Transformation:** We further test the robustness of the metrics to small input transformations. As long as the transformations do not alter the visual appearance of GAN images, a robust metric should be able to give consistent evaluations. This is important in that GAN generated images often exhibit small distortions compared to natural images, and such small imperfections should not significantly detract from the perceived quality of GANs. As demonstrated in figures \[fig:lidinceptionfidvarytrans\](c) and \[fig:lidinceptionfidvarytrans\](d), CrossLID and IS conform with each other showing moderate sensitivity to small translations and rotations on CIFAR-10 images. This is reasonable considering that the convolution layers of the feature extractor are expected to learn features which are moderately invariant to small input transformations. However, we find that FID changes drastically with small input transformations (Fig. \[fig:lidinceptionfidvarytrans\](c–d)). FID calculation on different (non-Inception) feature spaces could possibly lead to different behavior; however, this investigation is beyond the scope of this paper.
**Robustness to Sample Size:** For the sake of efficiency, it is desirable that GAN quality measures perform well even when computed over relatively small sample sizes. We test the robustness of the three metrics versus sample size on a subset of CIFAR-10 training images. The results are shown in Fig. \[fig:lidinceptionfidvarytrans\](e). CrossLID and IS are moderately stable as the subset size decreases from 25K to 5K; in particular, CrossLID exhibits the least variation. However, the FID score turns out to be highly sensitive to the sample size. The lower variation of CrossLID with sample size allows it to be computed on smaller samples than are typically needed by the other two metrics. Previous research on FID [@areganequal] has noted that it exhibits high variance for low sample sizes, and has hence recommended it only for sufficiently large sample sizes ($>10K$). We have also compared the running times of the three metrics with respect to different sample sizes and found that CrossLID requires the lowest computation time while FID requires the highest (details in Suppl. Sec. \[sec:metricexectime\]).
**Summary:** Table \[tab:lidcompareinceptionfid\] summarizes our experimental comparisons of CrossLID, IS, and FID.
[m[6.0cm]{}|ccc]{} EVALUATION CRITERIA & CrossLID & IS & FID\
Sensitivity to mode collapse.&High&Low&High\
Robustness to small input noise.&High&Low&Low\
Robustness to small input transformations. &Moderate&Moderate&Low\
Robustness to sample size variation.&High&Moderate&Low\
Evaluation of the Proposed Oversampling {#sec:agumented_training}
---------------------------------------
We evaluate the effectiveness of the CrossLID-guided oversampling approach in GAN training. For the MNIST, CIFAR-10 and SVHN datasets, we compare standard versions of the popular DCGAN [@dcgan] and WGAN [@wgan] models to the same models trained with CrossLID-guided oversampling. (Further details of model architecture, experimental settings, and output images can be found in Suppl. Sec. \[sec:expsetup\].)
The performances in terms of CrossLID scores are reported in Table \[tab:dcganlidinceptionscores\], where DCGAN+ and WGAN+ refer to training with our proposed oversampling (IS and FID results for these experiments are reported in Suppl. Sec. \[sec:inceptionfidoversamplingexp\]). Our training approach achieved comparatively better results than the standard training in terms of CrossLID, IS, and FID, for both DCGANs and WGANs.
--------- ----------------- --------------------- ------------------ ---------------------
Dataset DCGAN DCGAN+ WGAN WGAN+
MNIST 5.11 $\pm$ 0.02 **4.96 $\pm$ 0.08** 5.91 $\pm$ 0.02 **5.26 $\pm$ 0.02**
CIFAR10 3.00 $\pm$ 0.04 **2.78 $\pm$ 0.04** 3.70 $\pm$ 0.04 **3.57 $\pm$ 0.04**
SVHN 7.40 $\pm$ 0.01 **7.14 $\pm$ 0.03** 10.14 $\pm$ 0.04 **9.95 $\pm$ 0.04**
--------- ----------------- --------------------- ------------------ ---------------------
: Performance of oversampling on DCGAN and WGAN.[]{data-label="tab:dcganlidinceptionscores"}
![Images generated at the end of the 30-th epoch by DCGAN and DCGAN+ on the MNIST dataset, when batch normalization is removed from the discriminator (a–b) and from both the generator and discriminator (c–d).[]{data-label="fig:mnistoutputsmodecollapse"}](img/stabilitystudy/bnnbn/stdgan/epoch30.jpg "fig:"){width="18mm"} ![](img/stabilitystudy/bnnbn/lidoptgan/epoch30.jpg "fig:"){width="17.5mm"} ![](img/stabilitystudy/nbnnbn/stdgan/epoch30.jpg "fig:"){width="18mm"} ![](img/stabilitystudy/nbnnbn/lidoptgan/epoch30.jpg "fig:"){width="18mm"}
**Effectiveness in Preventing Mode Collapse:** As explained in Sec. \[sec:lidoptapproach\], our approach can help avoid mode collapse. We next show that on MNIST, when the batch-normalization layers were removed from the discriminator, or from both the discriminator and the generator, standard DCGAN training suffered significant mode collapse and failed to learn the full real distribution, as shown in Fig. \[fig:mnistoutputsmodecollapse\](a) and \[fig:mnistoutputsmodecollapse\](c). Our approach, however, was still able to produce high quality images without any sign of mode collapse during training, as shown in Fig. \[fig:mnistoutputsmodecollapse\](b) and \[fig:mnistoutputsmodecollapse\](d). (The training process and visual inspections are reported in Suppl. Sec. \[sec:stability\].)
Conclusion
==========
We have proposed a new metric for quality evaluation of GANs, based on cross local intrinsic dimensionality (CrossLID). Our measure can effectively assess GAN generation quality and mode collapse in GAN outputs. It is reasonably robust to input noise, image transformations, and sample size. We also demonstrated a simple oversampling approach based on the mode-wise CrossLID that can improve GAN training and help avoid mode collapse.
We believe CrossLID is not only a promising new tool for assessing the quality of GANs, but can also help improve GAN training. We envisage that CrossLID can be used as an additional metric for the community to evaluate GAN quality. Unlike IS and FID, CrossLID takes a local rather than a global perspective when evaluating sample quality, in that a quality score for each individual GAN-generated sample can be computed based on its neighborhood. The advantage of mode-wise performance estimation by CrossLID may also be exploited in other GAN models, such as conditional and supervised GANs.
|
---
abstract: 'This paper is devoted to the study of the stability/instability of an expansion-free self-gravitating source in the framework of Einstein Gauss-Bonnet gravity. The source has been taken as the Tolman-Bondi model, which is inhomogeneous in nature. The field equations and the dynamical equations have been evaluated in Gauss-Bonnet gravity in five dimensions. The junction conditions, as well as the equations governing the cavity, have been explored in detail. A first order perturbation scheme has been applied to the dynamical as well as the Einstein Gauss-Bonnet field equations. The Newtonian and post-Newtonian approximations have been used to derive the general dynamical stability equation. In general, this equation represents the stability of the gravitating source. Some particular values of the system parameters have been chosen to demonstrate the stability graphically. For parameter values other than those chosen, the stability of the system is disturbed, which leads to instability.'
author:
- |
G. Abbas [^1], S. Sarwar [^2]\
Department of Mathematics, COMSATS Institute\
of Information Technology, Sahiwal-57000, Pakistan.
title: '**Dynamical Stability of Collapsing Stars in Einstein Gauss-Bonnet Gravity**'
---
[**Key Words:**]{} Einstein Gauss-Bonnet Gravity; Gravitational Collapse; Stability of Stars.\
04.70.Bw, 04.70.Dy, 95.35.+d\
Introduction
============
The dynamical instability of astrophysical objects is a subject of interest in classical physics as well as in the general theory of relativity (GR). The problem becomes important when one asks whether static stellar models are stable against the fluctuations produced by the self-gravitational attraction of massive stars. It is most relevant to structure formation during the different phases of gravitationally collapsing objects. In relativistic astrophysics, the dynamical stability of stars was studied by Chandrasekhar [@1] in 1964; since then, a renewed interest has grown in this research area. Herrera et al. [@2; @3] have extended the pioneering work to non-adiabatic, anisotropic and viscous fluids. All these investigations imply that the adiabatic index ${\Gamma}_1$ defines the range of instability; for example, for a Newtonian perfect fluid this range is ${\Gamma}_1<4/3$. Friedman [@4] discussed the dynamical instability of a neutral fluid sphere in Newtonian as well as relativistic physics and showed that anisotropy enhances the stability if the anisotropy is positive throughout the matter distribution. Herrera et al. [@5] have studied the dynamical instability of expansion-free fluids using a perturbation scheme.
Different physical properties of the fluid play an important role in the dynamical evolution of self-gravitating systems. According to Herrera et al. [@6; @7], dissipation terms in the fluid would increase the instability of collapsing objects. Chan and his collaborators [@8]-[@11] have shown that anisotropy and radiation affect the instability range at the Newtonian and post-Newtonian approximations. Sharif and Azam [@11a]-[@14] have studied the effects of the electromagnetic field on the dynamical stability of collapsing dissipative and non-dissipative fluids in spherical, cylindrical and plane symmetric geometries. This work has been further extended by Sharif and his collaborators [@15]-[@24] to higher order theories of gravity, such as $f(R)$, $f(T)$ and $f(R,T)$; in these papers the possible forms of the fluid with an electromagnetic field have been discussed in detail.
To date, many quantum theories of gravity have been proposed to investigate the natural phenomena occurring in astronomy and astrophysics. Among these theories, superstring theory is the strongest candidate and has been extensively investigated for spacetimes with more than four dimensions. In this theory, the effects of extra dimensions become more prominent when the curvature radius of the central high density regions during gravitational collapse becomes comparable with the curvature radius of the extra dimensions. From this point of view, high density regions can be modeled in a sophisticated way in a theory which deals with extra dimensions. The braneworld universe model, which is an attractive proposal for a new picture of the universe, is based on superstring theory [@25]-[@30]. The geometrical interpretation of the braneworld model reveals that we are living on a four dimensional timelike hypersurface which is embedded in a higher dimensional manifold. This suggests that the effects of superstrings on the formation of black holes during the relativistic gravitational collapse of a star should be investigated explicitly.
Current experiments performed to test the inverse square law do not exclude the possibility of extra dimensions even as large as a tenth of a millimeter. As the observed range of the gravitational force depends directly on the size of the objects involved, it is interesting to consider some physical phenomena in the extra dimensions. On the basis of these facts, it becomes important to study the general theory of relativity in more than four dimensions. In this regard, a class of exact solutions to the Einstein field equations has been determined in recent years [@31]-[@35]. These solutions play a significant role in studying gravitational collapse and the evolution of the universe. Recently [@36]-[@43], there has been growing interest in studying higher order gravity theories, which involve higher order derivatives of curvature terms. One of the most extensively studied higher order gravity theories is Gauss-Bonnet gravity. This theory is the simplest generalization of the general theory of relativity and a special case of Lovelock gravity. The Lagrangian of this theory contains just three terms, as compared to the Lagrangian of the full Lovelock gravity theory.
The Gauss-Bonnet gravity theory is used to discuss nontrivial dynamical systems in dimensions greater than or equal to 5. This theory naturally appears in the low energy effective action of heterotic string theory. Boulware and Deser [@44] formulated black hole (BH) solutions in an N dimensional gravitational theory with a four dimensional Gauss-Bonnet term. These are generalizations of the N dimensional solutions investigated by Tangherlini [@45] and Myers and Perry [@46]. The spherically symmetric BH solutions and their physical properties have been studied in detail by Wheeler [@47]. The structure of topologically nontrivial BHs has been presented by Cai [@48]. Kobayashi [@49] and Maeda [@50] have explored the effects of the Gauss-Bonnet term on the structure of the Vaidya BH. All these studies show that the appearance of the Gauss-Bonnet term in the field equations affects the occurrence of BHs and naked singularities during gravitational collapse. In a recent paper [@52], Jhingan and Ghosh have considered the $5D$ action with the Gauss-Bonnet terms in the Tolman-Bondi model and given an exact model of the gravitational collapse of an inhomogeneous dust. Motivated by these studies, we discuss the stability of gravitationally collapsing spheres in Einstein Gauss-Bonnet gravity. This paper is organized as follows: In section **[2]{}**, the Einstein Gauss-Bonnet field equations and the dynamical equations are presented. The first order perturbation scheme applied to the field equations as well as to the dynamical equations is presented in section **3**. Section **4** deals with the Newtonian and post-Newtonian approximations and the derivation of the stability equation, which is the main result of the paper. We summarize the results of the paper in the last section.
Interior Matter Distribution and Einstein Gauss-Bonnet Field Equations
=======================================================================
We begin with the following 5D action: $$\label{1}
S=\int d^{5}x\sqrt{-g}\left[ \frac{1}{2k_{5}^{2}}\left( R+\alpha
L_{GB}\right) \right] +S_{matter}$$where $R$ is the $5D$ Ricci scalar and $k^2_{5}={8\pi G_{5}}$ is the $5D$ gravitational constant. The Gauss-Bonnet Lagrangian is of the form $$\label{2}
L_{GB}=R^{2}-4R_{ab}R^{ab}+R_{abcd}R^{abcd}$$ where $\alpha$ is the coupling constant of the Gauss-Bonnet terms. This type of action is derived in the low-energy limit of heterotic superstring theory. In that case, $\alpha $ is regarded as the inverse string tension and positive definite and we consider only the case with $\alpha \geq 0$ in this paper. In the $4D$ space-time, the Gauss-Bonnet terms do not contribute to the Einstein field equations. The action (\[1\]) leads to the following set of field equations $$\label{3}
{G}_{ab}=G_{ab}+\alpha H_{ab}=T_{ab},$$ where $$\label{4}
G_{ab}=R_{ab}-\frac{1}{2}g_{ab}R$$ is the Einstein tensor and $$\label{5}
H_{ab}=2\left[ RR_{ab}-2R_{a\alpha }R_{b}^{\alpha }-2R^{\alpha \beta
}R_{a\alpha b\beta }+R_{a}^{\alpha \beta \gamma }R_{b\alpha \beta \gamma }%
\right] -\frac{1}{2}g_{ab}L_{GB},$$ is the Lanczos tensor.
A timelike 4D hypersurface $\Sigma^{(e)}$ is taken such that it divides the 5D spacetime into two 5D manifolds, $M^-$ and $M^+$. The 5D TB spacetime is taken as the interior manifold $M^-$, which represents the interior of a collapsing inhomogeneous and anisotropic sphere, and is given by [@52] $$\label{6}
ds_{-}^2=-dt^2+A^2dr^2+R^2(d\theta^2+\sin^2{\theta}d\phi^2
+\sin^2{\theta}\sin^2{\phi}d\psi^2),$$ where $A$ and $R$ are functions of $t$ and $r$. The energy-momentum tensor $T_{\alpha \beta }^{-}$ for anisotropic fluid has the form $$\label{7}
T_{\alpha \beta }^{-}=(\mu +P_{\perp })V_{\alpha }V_{\beta
}+P_{\perp }g_{\alpha \beta }+(P_{r}-P_{\perp })\chi _{\alpha }\chi
_{\beta },$$ where $\mu $ is the energy density, $P_{r}$ the radial pressure, $P_{\perp }$ the tangential pressure, $V^{\alpha }$ the four velocity of the fluid and $\chi _{\alpha }$ a unit four vector along the radial direction. These quantities satisfy $$V^{\alpha }V_{\alpha }=-1\ \ ,\ \ \ \ \ \chi ^{\alpha }\chi _{\alpha
}=1\ \ ,\ \ \ \ \ \chi ^{\alpha }V_{\alpha }=0 \label{N8}$$The expansion scalar $\Theta $ for the fluid is given by $$\label{8}
\Theta =V_{\ ;\ \alpha }^{\alpha }.$$Since the metric (6) is assumed to be comoving, then $$\label{9}
V^{\alpha }=\delta _{0}^{\alpha }\ ,\ \ \ \ \ \chi ^{\alpha
}=A^{-1}\delta _{1}^{\alpha }\$$and for the expansion scalar, we get $$\label{10}
\Theta =\frac{\dot{A}}{A}+\frac{3\dot{R}}{R}.$$ Hence, Einstein Gauss-Bonnet field equations take the form$$\begin{aligned}
\nonumber
k^2_{5}\mu &&=\frac{12\left( R^{\prime 2}-A^{2}\left(
1+\dot{R}^{2}\right) \right) }{R^{3}A^{5}}\left[ R^{\prime
}A^{\prime }+A^{2}\dot{R}\dot{A}-AR^{\prime \prime }\right] \alpha
\\\label{11} &&\ \ \ \ \ -\frac{3}{A^{3}R^{2}}\left[ A^{3}\left(
1+\dot{R}^{2}\right) +A^{2}R\dot{R}\dot{A}+RR^{\prime }A^{\prime
}-A(RR^{\prime \prime }+R^{\prime 2})\right]\\\label{12a}
k^2_{5}p_{r} &&=-12\alpha \left( \frac{1}{R^{3}}-\frac{R^{^{\prime }2}}{A^{2}R^{3}}%
+\frac{\dot{R}^{2}}{R^{3}}\right) \ddot{R}+3\frac{R^{^{\prime }2}}{A^{2}R^{2}%
} -3\Big(\frac{1+\dot{R}^{2}+R\ddot{R}}{R^{2}}\Big)\\\nonumber
k^2_{5}p_{\perp } &&=\frac{4\alpha }{A^{4}R^{2}}\Big[ -2A\left(
A^{^{\prime
}}R^{^{\prime }}+A^{2}\dot{A}\dot{R}-AR^{^{\prime \prime }}\right) \ddot{R}%
+A\left( R^{^{\prime
}2}-A^{2}\left( 1+\dot{R}^{2}\right) \right)
\ddot{A}\\\nonumber&&+2\Big( \dot{A}R^{^{\prime
}}-A\dot{R}^{^{\prime }}\Big] -\frac{1}{A^{3}R^{2}}\Big[ A^{3}\Big(
1+\dot{R}^{2}+2R\ddot{R}\Big) +A^{2}R\left(
2\dot{R}\dot{A}+R\ddot{A}\right)\\&&+2RR^{^{\prime }}A^{^{\prime
}}-2A\left( RR^{^{\prime \prime }}+R^{^{\prime }2}\right)\Big]
\label{13}\\
&&\frac{12\alpha }{A^{5}R^{3}}\left( \dot{A}R^{^{\prime
}}-A\dot{R}^{^{\prime
}}\right) \left( A^{2}\left( 1+\dot{R}^{2}\right) -R^{^{\prime }2}\right) -3%
\frac{A\dot{R}^{^{\prime }}-\dot{A}R^{^{\prime }}}{A^{3}R}=0
\label{14}\end{aligned}$$ The mass function $m(t,r)$, analogous to the Misner-Sharp mass in an $n$-dimensional manifold without ${\Lambda}$, is given by [@50] $$\label{15}
m(t,r)=\frac{(n-2)}{2k_{n}^{2}}{V^k}_{n-2}\left[ R^{n-3}\left(
k-g^{ab}R,_{a}R,_{b}\right) +(n-3)(n-4)\alpha \left(
k-g^{ab}R,_{a}R,_{b}\right) ^{2} \right],$$ where a comma denotes partial differentiation and ${V^k}_{n-2}$ is the surface of $(n-2)$ dimensional unit space. For $k=1$, ${V^1}_{n-2}=\frac{2{\pi}^{(n-1)/2}}{\Gamma((n-1)/2)}$, using this relation with $n=5$ and Eq.(\[6\]), the mass function (\[15\]) reduces to $$\label{16}
m(r,t)=\frac{3}{2}\left[ R^{2}\left( 1-\frac{R^{^{\prime }2}}{A^{2}%
}+\dot{R}^{2}\right) +2\alpha \left( 1-\frac{R^{^{\prime }2}}{A^{2}}+\dot{R}%
^{2}\right) ^{2}\right]$$ The nontrivial components of the Bianchi identities, $T_{;\beta }^{\
-\alpha \beta }=0$, from Eqs.(\[6\]) and (\[7\]), yield $$\left[ \dot{\mu}+\left( \mu +P_{r}\right) \frac{\dot{A}}{A}+3\left(
\mu +P_{\perp }\right) \frac{\dot{R}}{R}\right] =0 , \label{17}$$and$$T_{;\beta }^{\ -\alpha \beta }\chi _{\alpha }=\frac{1}{A}\left[
P_{r}^{^{\prime }}+3\left( P_{r}-P_{\perp }\right) \frac{R^{^{\prime }}}{R}%
\right] =0 \label{18}$$
Using field equations and Eq.(\[16\]), we may write
$$m^{\prime }=\frac{2}{3}k^2_{5}\mu R^{\prime }R^{3} \label{N16a}$$
In the exterior region to $\Sigma^{(e)}$, we consider the Einstein Gauss-Bonnet Schwarzschild solution, which is given by [@54]
$$\label{c1}
ds_{+}^2=-F(\rho)d{\nu}^2-2d\nu
d\rho+\rho^2(d\theta^2+\sin^2{\theta}d\phi^2
+\sin^2{\theta}\sin^2{\phi}d\psi^2),$$
where $F(\rho)=1+\frac{{\rho}^2}{4\alpha}-\frac{{\rho}^2}{4\alpha}\sqrt{1+\frac{16\alpha
M}{\pi {\rho}^4}}$.
The smooth matching of the $5D$ anisotropic fluid sphere (\[6\]) to GB Schwarzschild BH solution (\[c1\]), across the interface at $r = {r_{\Sigma}}^{(e)}$ = constant, demands the continuity of the line elements and extrinsic curvature components (i.e., Darmois matching conditions), implying $$\begin{aligned}
\label{c2}
dt \overset{\Sigma^{(e)}}{=}\sqrt{F(\rho)}d\nu,\\
R \overset{\Sigma^{(e)}}{=}\rho, \\\label{cm}
m(r,t)\overset{\Sigma^{(e)}}{=}M,\end{aligned}$$ $$\begin{aligned}
\nonumber
&&-12\alpha \left( \frac{1}{R^{3}}-\frac{R^{^{\prime }2}}{A^{2}R^{3}}%
+\frac{\dot{R}^{2}}{R^{3}}\right) \ddot{R}+3\frac{R^{^{\prime }2}}{A^{2}R^{2}%
} -3\Big(\frac{1+\dot{R}^{2}+R\ddot{R}}{R^{2}}\Big)\\
&&\overset{\Sigma^{(e)}}{=}\frac{12\alpha }{A^{5}R^{3}}\left( \dot{A}R^{^{\prime
}}-A\dot{R}^{^{\prime
}}\right) \left( A^{2}\left( 1+\dot{R}^{2}\right) -R^{^{\prime }2}\right) -3%
\frac{A\dot{R}^{^{\prime }}-\dot{A}R^{^{\prime }}}{A^{3}R}
\label{c3}\end{aligned}$$ Comparing Eq.(\[c3\]) with (\[12a\]) and (\[14\]) (for detail see [@12]), we get $$\label{c4}
p_r\overset{\Sigma^{(e)}}{=}0.$$ Hence, the matching of the interior inhomogeneous anisotropic fluid sphere (\[6\]) with the exterior vacuum Einstein Gauss-Bonnet spacetime (\[c1\]) produces Eqs.(\[6\]) and (\[cm\]). These are the necessary and sufficient conditions for the smooth matching of the interior and exterior regions of a star on the boundary surface ${\Sigma^{(e)}}$.
It is well known that expansion-free models present an internal vacuum cavity. The boundary surface between the internal cavity and the fluid is labeled by ${\Sigma^{(i)}}$; the smooth matching of the Minkowski spacetime within the cavity to the fluid distribution over ${\Sigma^{(i)}}$ yields
$$\begin{aligned}
m(r,t)\overset{\Sigma^{(i)}}{=}0.\\
p_r\overset{\Sigma^{(i)}}{=}0.\end{aligned}$$
The physical applications of expansion-free models are wide in astrophysics and astronomy. For example, they may help to explore the structure of voids on cosmological scales [@55]. By definition, voids are sponge-like structures occupying 40-50 percent of the entire universe. There are commonly two types of voids: mini-voids [@56] and macro-voids [@57]. On the basis of observational data analysis, voids are neither empty nor spherical. For the sake of further exploration, voids are considered as vacuum spherical cavities surrounded by a fluid distribution.
The Perturbation Scheme
========================
In this section, we introduce the perturbation scheme. For this purpose, it is assumed that the fluid is initially in static equilibrium, implying that the fluid is described by quantities that have only radial dependence. Such quantities are denoted by a subscript zero. We further assume, as usual, that the metric functions $A(t,r)$ and $R(t,r)$ have the same time dependence in their perturbations. Therefore, we consider the metric and material functions in the following form $$\begin{aligned}
A(t,r)&=&A_{0}(r)+\epsilon T(t)a(r), \label{N17}\\
R(t,r)&=&R_{0}(r)+\epsilon T(t)c(r), \label{N19}\\
\mu (t,r)&=&\mu
_{0}(r)+\epsilon \bar{\mu}(t,r), \label{N20}\\
P_{r}(t,r)&=&P_{r0}(r)+\epsilon \bar{P}_{r}(t,r), \label{N21}\\
P_{\perp }(t,r)&=&P_{\perp 0}(r)+\epsilon \bar{P}_{\perp }(t,r),
\label{N22}\\
m(t,r)&=&m_{0}(r)+\epsilon \bar{m}(t,r), \label{N23}\\
\Theta (t,r)&=&\epsilon \bar{\Theta}(t,r), \label{N24}\end{aligned}$$ where $0<\epsilon \ll 1$ and we choose the Schwarzschild coordinates with $R_{0}(r)=r$. Using Eqs.(\[N17\])-(\[N22\]), we have from Eqs.(\[11\])-(\[14\]) the following static configuration $$\begin{aligned}
k\mu _{0}&&=\frac{3}{r^{3}A_{0}^{4}}\left[ 4\alpha \left( \frac{%
A_{0}^{^{\prime }}}{A_{0}}-A_{0}\right) -rA_{0}^{2}\left( A_{0}+\frac{%
A_{0}^{^{\prime }}}{A_{0}}r-1\right) \right], \label{N25}\\
kP_{r0}&&=3\left[ \frac{1}{r^{2}A_{0}^{2}}-\frac{1}{r^{2}}-1\right],
\label{N26}\\
kP_{\perp 0}&&=\frac{-1}{A_{0}^{2}r^{2}}\left[ A_{0}^{2}+2r\frac{%
A_{0}^{^{\prime }}}{A_{0}}-2\right]. \label{N27}\end{aligned}$$ Also from Eqs.(\[11\])-(\[14\]), we obtain the following form of the perturbed field equations $$\begin{aligned}
\nonumber
k\bar{\mu} &&=\frac{3T}{r^{2}A_{0}^{3}}\Big[ 4\alpha \Big(
\frac{a^{\prime
}}{rA_{0}^{2}}+\frac{3A_{0}^{\prime }c^{\prime }}{rA_{0}^{2}}-\frac{%
c^{\prime \prime }}{rA_{0}}-\frac{a^{\prime
}}{A_{0}r}-\frac{c^{\prime }A_{0}^{\prime }}{rA_{0}} \\\nonumber
&&+ \frac{c^{\prime \prime }}{r}+\frac{3aA_{0}^{\prime }}{rA_{0}^{2}}-%
\frac{5aA_{0}^{\prime }}{rA_{0}^{2}}-\frac{cA_{0}^{\prime }}{r^{2}A_{0}^{2}}+%
\frac{cA_{0}^{\prime }}{r^{2}A_{0}}\Big) \\\nonumber &&-\Big(
-3aA_{0}+2c-3ar\frac{A_{0}^{^{\prime }}}{A_{0}}+A_{0}^{\prime
}c+ra^{\prime }\\\label{N28}
&& +A_{0}^{\prime }rc^{\prime
}-A_{0}rc^{\prime \prime }-A_{0}+2a-2A_{0}c^{\prime }\Big) \Big]
-\frac{2Tc}{r}k\mu _{0}\\\nonumber
k\bar{P}_{r}&&=\frac{3\ddot{T}c}{r}\left[ 1-4\alpha \left( 1-\frac{1}{%
r^{2}A_{0}^{2}}\right) \right] -\frac{6T}{r^{2}}\left( \frac{a}{A_{0}^{3}}-%
\frac{c^{\prime }}{A_{0}^{2}}+cr\right) -\frac{2Tc}{r}kP_{r0},\\
\label{N29}\\\nonumber
k^2_5\bar{P}_{\perp }
&=&\frac{\ddot{T}}{A_{0}^{3}r^{2}}\left[ 4\alpha \left( a\left(
1-A_{0}^{2}\right) -2A_{0}^{^{\prime }}c\right) -A_{0}rc\left(
A_{0}^{2}+r\right) \right] \\\nonumber &&+\frac{8\alpha
\dot{T}}{A_{0}^{3}r^{2}}\left( \frac{a}{A_{0}}-c^{^{\prime
}}\right) +\frac{T}{A_{0}^{2}r^{2}}\left[ 2rc^{^{\prime }}\left( \frac{%
A_{0}^{^{\prime }}}{A_{0}}\right) +2a\left( \frac{A_{0}^{^{\prime }}}{A_{0}}%
\right) -2rc^{^{\prime \prime }}\right. \\\label{N30} &&\left.
+2r\left( \frac{a}{A_{0}}\right) ^{^{\prime }}-4c^{^{\prime
}}r-5\left( \frac{a}{A_{0}}\right) -4ar\left( \frac{A_{0}^{^{\prime }}}{A_{0}%
}\right) \right] -\frac{2Tc}{r}kP_{\perp 0}, \\
&&\frac{12\alpha \dot{T}}{A_{0}^{5}r^{3}}\left(
A_{0}^{2}a-A_{0}^{3}c-a-A_{0}c\right)
-\frac{3\dot{T}}{A_{0}^{3}r}\left( A_{0}c^{\prime }-a\right) =0.
\label{N31}\end{aligned}$$ For the expansion given in Eq.(\[10\]), we have $$\bar{\Theta}=\dot{T}\left( \frac{a}{A_{0}}+\frac{3c}{R_{0}}\right).
\label{N32}$$The Bianchi identities Eqs.(\[17\]) and (\[18\]), with (\[N17\])-(\[N22\]), yield the static configuration $$P_{r0}^{\prime }+\frac{3}{r}\left( P_{r0}-P_{\perp 0}\right) =0
\label{N33}$$and for the perturbed configuration $$\begin{aligned}
\frac{1}{A_{0}}\left[ \bar{P}_{r}^{\prime }+\frac{3}{r}\left( \bar{P}_{r}-%
\bar{P}_{\perp }\right) +3\left( P_{r0}-P_{\perp 0}\right) T\left( \frac{c}{r%
}\right) ^{\prime }\right] =0 , \label{N34}\\
\bar{\mu}=-\left[ \left( \mu _{0}+P_{r0}\right) \frac{a}{A_{0}}+\frac{3c}{r}%
\left( \mu _{0}+P_{\perp 0}\right) \right] T . \label{N35}\end{aligned}$$The total energy inside $\Sigma^{(e)\text{ }}$ up to a radius $r$ given by Eq.([16]{}) with Eqs.(\[N17\]),(\[N19\]) and (\[N23\]) becomes $$\begin{aligned}
m_{0}&&=\frac{3}{2}\left[ \left( 1-\frac{1}{A_{0}^{2}}\right) \left(
r^{2}+2\alpha \left( A_{0}^{2}-1\right) \right) \right],
\label{N36}\\ \bar{m}&&=\frac{3T}{A_{0}^{2}}\left[ \left(
A_{0}^{2}cr-c-c^{^{\prime }}r^{2}+r^{2}\frac{a}{A_{0}}\right)
-\frac{\alpha }{A_{0}^{2}}\left( A_{0}^{2}-1\right) \left(
c^{^{\prime }}-\frac{a}{A_{0}}\right) \right]. \label{N37}\end{aligned}$$ From the matching condition Eq.(\[c4\]), we have $$P_{r0}\overset{\Sigma^{(e)}}{=}0,\ \ \ \
\bar{P}_{r}\overset{\Sigma^{(e)}}{=}0. \label{N38}$$For $c\neq 0$, which is the case we want to study, combining (\[N29\]), (\[N31\]) and (\[N38\]) we obtain $$\ddot{T}\ \beta -\gamma T=0, \label{N39}$$ where $$\beta =1-4\alpha \left( 1-\frac{1}{r^{2}A_{0}^{2}}\right) ,\ \ \ \ \
\ \
\gamma =\frac{2}{rc}\left( \frac{a}{A_{0}^{3}}-\frac{c^{\prime }}{A_{0}^{2}}%
+rc\right)$$
The general solution of Eq.(\[N39\]) is a linear combination of two solutions, one of them corresponding to a stable (oscillating) system while the other corresponds to an unstable (non-oscillating) one. Since in the present case we are interested in establishing the range of instability, we restrict our attention to the non-oscillating solutions, i.e., we assume that $a(r)$ and $c(r)$ attain such values on $r_{\Sigma ^{(e)}}$ that $\psi_{\Sigma ^{(e)}}
=\Big(\frac{\beta }{\gamma }\Big) _{\Sigma ^{(e)}}>0.$ Then $$T=\exp (-\sqrt{\psi _{\Sigma ^{(e)}}}t), \label{N40}$$ representing a collapsing sphere, as the areal radius becomes a decreasing function of time.
The dynamical instability of collapsing fluids can be conveniently discussed in terms of the adiabatic index $\Gamma _{1}$. We relate $\bar{P}_{r}$ and $\bar{\mu}$ for the static spherically symmetric configuration as follows: $$\bar{P}_{r}=\Gamma _{1}\frac{P_{r0}}{\mu _{0}+P_{r0}}\bar{\mu}.
\label{N41}$$ We consider $\Gamma _{1}$ to be constant throughout the fluid distribution or, at least, throughout the region that we want to study.
Newtonian and Post Newtonian Terms and Dynamical Stability
==========================================================
This section identifies the Newtonian (N), post Newtonian (pN) and post post Newtonian (ppN) regimes. For this purpose we convert the relativistic units into c.g.s. units and expand all the terms in the dynamical equations up to order $C^{-4}$ ($C$ being the speed of light). In this analysis the following approximations will be applied to the different regimes:
- N order: terms of order $C^0$;
- pN order: terms of order $C^{-2}$;
- ppN order: terms of order $C^{-4}$.
These terms are analyzed for the stability conditions appearing in the dynamical equation in the N approximation, while the pN and ppN terms are neglected. Thus, for the N approximation, we assume $$\mu _{0}\gg P_{r0},\ \ \ \ \ \mu _{0}\gg P_{\perp 0}. \label{N42}$$ For the metric coefficient expanded up to the pN approximation, we take $$A_{0}=1+\frac{Gm_{0}}{C^{2}r}, \label{N43}$$ where $G$ is the gravitational constant and $C$ is the speed of light. With the help of the equations obtained in the previous sections, we can formulate the dynamical equation under the expansion-free condition, which is the aim of our study. The key equation for this purpose is Eq.(\[N34\]).
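As an illustration of how the orders are separated, with (\[N43\]) the combinations of the metric coefficient appearing in the field equations can be expanded schematically, keeping terms up to order $C^{-2}$, as $$\frac{1}{A_{0}}\simeq 1-\frac{Gm_{0}}{C^{2}r},\ \ \ \ \ \frac{1}{A_{0}^{2}}\simeq 1-\frac{2Gm_{0}}{C^{2}r},\ \ \ \ \ \frac{A_{0}^{\prime }}{A_{0}}\simeq \frac{G}{C^{2}}\left( \frac{m_{0}^{\prime }}{r}-\frac{m_{0}}{r^{2}}\right),$$ so that the N order terms are those surviving as $C\rightarrow \infty$, while the pN corrections carry an explicit factor $C^{-2}$.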
The expansion-free condition $\Theta =0$ implies, from (\[N32\]), $$\frac{a}{A_{0}}=-3\frac{c}{r}. \label{N44}$$ With (\[N44\]), Eq.(\[N35\]) becomes
$$\bar{\mu}=3(P_{r0}-P_{\perp 0})T\frac{c}{r}. \label{N45}$$
This equation shows how the perturbed energy density of the system originates from the anisotropy of the static background.
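Indeed, substituting (\[N44\]) into (\[N35\]), the terms containing $\mu _{0}$ cancel, $$\bar{\mu}=-\left[ -\left( \mu _{0}+P_{r0}\right) \frac{3c}{r}+\frac{3c}{r}\left( \mu _{0}+P_{\perp 0}\right) \right] T=\frac{3c}{r}\left( P_{r0}-P_{\perp 0}\right) T,$$ which is precisely (\[N45\]).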
Also, with (\[N41\]) and (\[N45\]) we have
$$\bar{P}_{r}=3\Gamma _{1}\frac{P_{r0}}{\mu _{0}+P_{r0}}(P_{r0}-P_{\perp 0})T%
\frac{c}{r}. \label{N46}$$
From equations (\[N25\]) and (\[N36\]), we have
$$\frac{A_{0}^{^{\prime
}}}{A_{0}}=\frac{(r+m_{0})[(r+m_{0})^{3}k^2_5\mu _{0}+12\alpha
]}{12\alpha r-3r(r+m_{0})} \label{N47}$$
Next, we develop the dynamical equation by substituting Eq.(\[N30\]) along with Eqs.(\[N44\]), (\[N43\]), (\[N39\]) and (\[N47\]) in Eq.(\[N34\]) and using the radial functions $a(r)=a_{0}r,\ c(r)=c_{0}r$, where $a_0$ and $c_0$ are constants. After some tedious algebra (a detailed procedure can be found in [@6]), we obtain the dynamical equation at pN order (with $C=G=1$)
$$\begin{aligned}
\nonumber
&&\Big(12\alpha r-3r(r+m_{0})\Big)\Big[3\psi
(r+m_{0})^2\Big(12\alpha c_{0}m_{0}(m_{0}+2r)-96\alpha
^{2}c_{0}r^{3}(r+m_{0}) \\\nonumber
&&-c_{0}\Big((r+m_{0})^{2}+r^{3}\Big)\Big)+{8\alpha r^{8}\sqrt{%
\psi }c_{0}}+3r^{3}c_{0}{(r+m_{0})}\Big(4r-15\Big)
+6r^{3}(r+m_{0})^{3}c_{0}k^2_5P_{\perp 0} \\\nonumber
&&+3r^{3}(r+m_{0})^{3}k^2_5(P_{r0}-P_{\perp
0})\Big]+{(r+m_{0})}\Big[216\alpha
r^{3}c_{0}(1-2r)(r+m_{0})^{2}-72\alpha r^{4}c_{0}\Big] \\\nonumber
&&=\Big[
24r^3\alpha \psi c_{0}{(r+m_{0})}+{6r^4c_{0}}
+18c_{0}r^3(2r-1){(r+m_{0})}\Big]k^2_5\lambda\Big(r^{n+1}+\frac{2}{3}(\frac{r^{n+4}}{n+4}-\frac{{r_i}^{n+4}}{n+4})\Big)\\\label{N48}\end{aligned}$$
Here, we have used Eq.(\[N16a\]) and considered an energy density profile of the form $\mu _{0}=\lambda r^{n},$ where $\lambda $ is a positive constant and $n$ is a constant whose value ranges in the interval $-\infty <n<\infty .$ In order to establish the stability of expansion-free fluids, we have to show that both sides of Eq.(\[N48\]) are positive, which cannot be done analytically. We denote the left side of Eq.(\[N48\]) by $X(r)$ and the right side by $Y(r)$. We show graphically that for particular values of the parameters involved in Eq.(\[N48\]) both $X(r)$ and $Y(r)$ are positive. The positivity of $X(r)$ and $Y(r)$ is shown in figures (1-3) and (4-6), respectively. The values of the parameters for which $X(r)$ and $Y(r)$ remain positive (so that the system lies in the range of stability) are mentioned below each graph; for other values the system becomes unstable.
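As an illustration of the structure of $Y(r)$, take the representative value $n=4$ (inside the range used below); then the density-profile factor on the right-hand side of Eq.(\[N48\]) reduces to $$k^2_5\lambda \Big( r^{5}+\frac{r^{8}-{r_i}^{8}}{12}\Big),$$ which is manifestly positive for $r\geqslant r_{i}>0$ and $\lambda >0$, so that the sign of $Y(r)$ is governed by the prefactor in square brackets, which involves $\alpha$, $c_{0}$, $m_{0}$ and $\psi$.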
Summary
=======
This paper deals with the dynamical instability of an expansion-free anisotropic fluid at Newtonian and post Newtonian order in the framework of Einstein Gauss-Bonnet gravity, which is a vast playground for the higher dimensional analysis of general relativity. For a gravitating source with non-zero expansion scalar, the instability range can be characterised by the adiabatic index $\Gamma_1$, which measures the compressibility of the fluid under consideration. On the other hand, for the expansion-free case considered here, the instability at Newtonian approximation explicitly depends upon the energy density, the radial pressure, the local anisotropy of pressure and the Gauss-Bonnet coupling constant $\alpha$, but it appears to be independent of the adiabatic index $\Gamma_1$. In other words, the stiffness of the gravitating source at Newtonian and post Newtonian approximation does not play any role in the investigation of the stability of the system. We would like to mention that anisotropy in pressure, inhomogeneity in the energy density and the Gauss-Bonnet coupling constant $\alpha$ are the key factors for studying the structure formation as well as the evolution of shear-free anisotropic astrophysical objects.
We have formulated two dynamical equations that describe how gravitating objects evolve with time and what the final outcome of such evolution is. One of these dynamical equations is used to separate the terms of Newtonian and post Newtonian order by working with relativistic and c.g.s. units. The post post Newtonian regime is absent in the present analysis; this is not due to Gauss-Bonnet gravity, but appears to be a consequence of the geodesic properties of the spacetime used, in which $g_{00}=1$. This condition is in fact the Newtonian limit of general relativity. The second dynamical equation is used to discuss the instability range of the expansion-free fluid up to pN order.
A first order perturbation scheme has been applied to the metric functions and matter variables appearing in the Gauss-Bonnet field equations and the dynamical equations. The analysis of the resulting dynamical equations shows that, because the fluid is expansion-free, the stability is independent of the adiabatic index $\Gamma_1$. The instability depends on the density profile, the local anisotropy, the Gauss-Bonnet coupling constant and some other parameters. The requirement is that the resultant of all terms on the left side of equation (\[N48\]) should be positive and equal to the resultant of all terms on the right side of that equation. This cannot be shown analytically from Eq.(\[N48\]), so we have verified it for particular values of the parameters appearing in Eq.(\[N48\]). The domain of the parameters is chosen conveniently so as to make both sides positive, see Figs. (1-6). The system satisfies the stability conditions for the following values of the parameters: $1\leqslant\alpha\leqslant2.5$, $-4\leqslant c_0\leqslant-1$, $9.5\leqslant m_0\leqslant12$, $2\leqslant(P_{r0}-P_{\perp 0})\leqslant6$, $10\leqslant P_{\perp 0}\leqslant13$, $0.5\leqslant r_i\leqslant0.9$, $2\leqslant\lambda\leqslant15$, $4\leqslant n\leqslant 8.$ Starting from these values of the parameters, one can carry out actual calculations by introducing some restrictions on the system under consideration. The extension of this work to the case of an electromagnetic field and heat flux in a non-geodesic model, i.e., $g_{00}\neq1$, is in progress [@58].
[40]{}
Chandrasekhar, S.: Astrophys. J. **140**(1964)417.
Herrera, L., Santos, N.O. and Le Denmat, G.: Mon. Not. R. Astron. Soc. **237**(1989)257.
Herrera, L. and Santos, N.O.: Phys. Rep. **286**(1997)53.
Friedman, J.L.: J. Astrophys. Astron. **17**(1996)199.
Chan, R., Kichenassamy, S., Le Denmat, G. and Santos, N.O.: Mon. Not. R. Astron. Soc. 239(1989)91.
Herrera, L., Santos, N.O. and Le Denmat, G.: Gen. Relativ. Gravit. **44**(2012)1143.
Herrera, L., Le Denmat, G. and Santos, N.O.: Phys. Rev. D79(2009)087505. Herrera, L., Le Denmat, G. and Santos, N.O.: Class. Quantum Grav. 27(2010)135017.
Chan, R., Herrera, L. and Santos, N.O.: Mon. Not. R. Astron. Soc. 265(1993)533.
Chan, R., Herrera, L. and Santos, N.O.: Mon. Not. R. Astron. Soc. 267(1994)637.
Chan, R.: Mon. Not. R. Astron. Soc. **316**(2000)588.
Sharif, M. and Azam, M.: Chinese Phys. B**22**(2013)050401. Sharif, M. and Azam, M.: Gen. Relativ. Gravit. **44**(2012)1181.
Sharif, M. and Azam, M.: Mon. Not. R. Astron. Soc. **430**(2013)3048. Sharif, M. and Azam, M.: J. Cosmol. Astropart. Phys. **02**(2012)043;
Sharif, M. and Yousaf, Z.: Eur. Phys. J. C **73**(2013)2633.
Sharif, M. and Yousaf, Z.: Mon. Not. R. Astron. Soc. **440**(2014)3479.
Sharif, M. and Yousaf, Z.: J. Cosmol. Astropart. Phys. **06**(2014)019;
Sharif, M. and Yousaf, Z.: Astrophys. Space Sci. **352**(2014)943.
Sharif, M. and Yousaf, Z.: Phys. Rev. D **88**(2013)024020.
Sharif, M. and Yousaf, Z.: Mon. Not. R. Astron. Soc. **432**(2013)264.
Sharif, M. and Bhatti, M.Z.: J. Cosmol. Astropart. Phys. **11**(2013)014. Sharif, M. and Bhatti, M.Z.: Astropart. Phys. **56**(2014)35.
Sharif, M. and Bhatti, M.Z.: J. Cosmol. Astropart. Phys. **10**(2013)056.
Sharif, M. and Kausar, H.R.: J. Cosmol. Astropart. Phys. **07**(2011)022.
Arkani-Hamed, N., Dimopoulos, S. and Dvali, G.: Phys. Lett. B **429**(1998)263. Antoniadis, I., Arkani-Hamed, N., Dimopoulos, S. and Dvali, G.: Phys. Lett. B **436**(1998)257. Randall, L. and Sundrum, R.: Phys. Rev. Lett. **83**(1999)3370. Dvali, G., Gabadadze, G. and Porrati, M.: Phys. Lett. B **485**(2000)208. Dvali, G. and Gabadadze, G.: Phys. Rev. D **63**(2001)065007. Dvali, G., Gabadadze, G. and Shifman, M.: Phys. Rev. D **67**(2003)044020.
Dimopoulos, S. and Landsberg, G.: Phys. Rev. Lett. **87**(2001)161602. Chamblin, A. and Nayak, G.C.: Phys. Rev. D**66**(2002)091901. Giddings, S.B. and Thomas, S.: Phys. Rev. D**65**(2002)056010. Gross, D.J. and Sloan, J.H.: Nucl. Phys. B**291**(1987)41. Bento, M.C. and Bertolami, O.: Phys. Lett. B**368**(1996)198. Banerjee, A., Debnath, U. and Chakraborty, S.: Int. J. Mod. Phys. D**12**(2003)1255. Patil, K.D.: Phys. Rev. D**67**(2003)024017. Goswami, R. and Joshi, P.S.: Phys. Rev. D**69**(2004)044002. Banerjee, A., Sil, A. and Chatterjee, S.: Astrophys. J. **422**(1994)681. Sil, A. and Chatterjee, S.: Gen. Relativ. Gravit. **26**(1994)124005. Ghosh, S.G. and Beesham, A.: Phys. Rev. D **64**(2001)124005. Ghosh, S.G. and Banerjee, A.: Int. J. Mod. Phys. D **12**(2003)693. Ghosh, S.G., Deshkar, S.D. and Saste, N.N.: Int. J. Mod. Phys. D**16**(2007)53. Ghosh, S.G. and Deshkar, D.W.: Astrophys. Space Sci. **310**(2007)111. Boulware, D.G. and Deser, S.: Phys. Rev. Lett. **55**(1985)2656. Tangherlini, F.: Nuovo Cimento **27**(1963)636. Myers, C.R. and Perry, M.J.: Annals of Phys. **172**(1986)304.
Wheeler, J.T.: Nucl. Phys. B **268**(1986)737. Cai, R.G.: Phys. Rev. D**65**(2002)084014. Kobayashi, T.: Gen. Rel. Grav. **37**(2005)1879. Maeda, H.: Class. Quantum Grav. **23**(2006)2155. Jhingan, S. and Ghosh, S.S.: Phys. Rev. D**81**(2010)024010. Wiltshire, D.L.: Phys. Lett. B **169**(1986)36. Liddle, A.R. and Wands, D.: Mon. Not. Roy. Astron. Soc. **253**(1991)637. Tikhonov, A.V. and Karachentsev, I.D.: Astrophys. J. **653**(2006)969. Rudnick, L., Brown, S. and Williams, L.R.: Astrophys. J. **671**(2007)40. Abbas, G. and Sarwar, S.: *Stability of gravitational collapse in Gauss-Bonnet gravity with electromagnetic field and heat flux*, work in progress. Abbas, G. and Shahzad, S.: Work in progress.
[^1]: [email protected]
[^2]: [email protected]
|
---
abstract: 'We prove that all maximal subgroups of the free idempotent generated semigroup over a band $B$ are free for all $B$ belonging to a band variety $\V$ if and only if $\V$ consists either of left seminormal bands, or of right seminormal bands. [^1] [^2] [^3]'
author:
- |
[<span style="font-variant:small-caps;">Igor Dolinka</span>]{}\
Department of Mathematics and Informatics, University of Novi Sad,\
Trg Dositeja Obradovića 4, 21101 Novi Sad, Serbia\
E-mail: [email protected]
title: '****'
---
Let $S$ be a semigroup, and let $E=E(S)$ be the set of its idempotents; in fact, $E$, along with the multiplication inherited from $S$, is a partial algebra. It turns out to be fruitful to restrict further the domain of the partial multiplication defined on $E$ by considering only the pairs $e,f\in E$ for which either $ef\in\{e,f\}$ or $fe\in\{e,f\}$ (i.e. $\{ef,fe\}\cap\{e,f\}\neq\es$). Note that if $ef\in\{e,f\}$ then $fe$ is an idempotent, and the same is true if we interchange the roles of $e$ and $f$. Such unordered pairs $\{e,f\}$ are called *basic pairs* and their products $ef$ and $fe$ are *basic products*.
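For example, in the two-element left zero band $\{e,f\}$ (where $xy=x$ for all elements $x,y$) one has $$ef=e\in\{e,f\}\quad\text{and}\quad fe=f\in\{e,f\},$$ so $\{e,f\}$ is a basic pair and both products are basic. By contrast, if $e,f$ are incomparable elements of a semilattice, then $ef=fe\notin\{e,f\}$ and the pair $\{e,f\}$ is not basic.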
The *free idempotent generated semigroup over $E$* is defined by the following presentation: $$\ig{E} = \langle E\pre e\cdot f=ef\text{ such that }\{e,f\}\text{ is a basic pair}\,\rangle .$$ Here $ef$ denotes the product of $e$ and $f$ in $S$ (which is again an idempotent of $S$), while $\cdot$ stands for the concatenation operation in the free semigroup $E^+$ (also to be interpreted as the multiplication in its quotient $\ig{E}$). An important feature of $\ig{E}$ is that there is a natural homomorphism $\phi$ from $\ig{E}$ onto the subsemigroup of $S$ generated by $E$, and the restriction of $\phi$ to the set of idempotents of $\ig{E}$ is a basic-product-preserving bijection onto $E$, see e.g. [@E4; @Nam; @P2].
An important background to these definitions is the notion of the *biordered set* [@Hi] of idempotents of a semigroup and its abstract counterpart. The biordered set of idempotents of $S$ is just a partial algebra on $E(S)$ obtained by restricting the multiplication from $S$ to basic pairs of idempotents. In this way we have that if $B$ is a band (an idempotent semigroup), then, even though there is an everywhere defined multiplication on $E(B)=B$, its biordered set [@E2] is in general still a partial algebra. Another way of treating biordered sets is to consider them as relational structures $(E(S),{\leqslant}^{(l)},{\leqslant}^{(r)})$, where the set of idempotents $E(S)$ is equipped by two quasi-order relations defined by $$\begin{aligned}
e{\leqslant}^{(l)}f & \text{ if and only if }ef=e,\\
e{\leqslant}^{(r)}f & \text{ if and only if }fe=e.\end{aligned}$$ One of the main achievements of [@E3; @E4; @Nam] is the result that the class of biordered sets considered as relational structures is *axiomatisable*: there is in fact a finite system of formulæ satisfied by biordered sets such that any set endowed with two quasi-orders satisfying the axioms in question is a biordered set of idempotents of some semigroup. In this sense we can speak about the free idempotent generated semigroup over a biordered set $E$. A fundamental fact which justifies the term ‘free’ is that $\ig{E}$ is the free object in the category of all semigroups $S$ whose biordered set of idempotents is isomorphic to $E$: if $\psi:E\to E(S)$ is any isomorphism of biordered sets, then it uniquely extends (via the canonical injection of $E$ into $\ig{E}$) to a homomorphism $\psi':\ig{E}\to S$ whose image is the subsemigroup of $S$ generated by $E(S)$. This is also true if $\psi$ is a (surjective) homomorphism of biordered sets (taken as relational structures), so that the freeness property of $\ig{E}$ carries over to even wider categories of semigroups.
In this short note we consider $\ig{B}$, the free idempotent generated semigroup over (the biordered set of) a band $B$; more precisely, we are interested in the question whether the maximal subgroups of these semigroups are free. It was conjectured in [@McE] that each maximal subgroup of any semigroup of the form $\ig{E}$ is a free group. Recently, this was disproved [@BMM1] (see also [@BMM2]), where a certain 72-element semigroup was found whose biordered set $E$ of idempotents yields a maximal subgroup in $\ig{E}$ isomorphic to $\mathbb{Z}\oplus\mathbb{Z}$, the rank 2 free abelian group. Here we will see that a particular 20-element regular band suffices for the same purpose. In fact, as proved by Gray and Ruškuc in [@GR], *every* group can be isomorphic to a maximal subgroup of some $\ig{E}$, while the assumption that the semigroup $S$ with $E=E(S)$ is finite yields a sole restriction that the groups in question are finitely presented. This puts forward many new questions, one of which is the characterisation of bands $B$ for which all subgroups of $\ig{B}$ are free.
More specifically, as a first approximation to the latter question, we may ask for a description of all varieties $\V$ of bands with the property that for each $B\in\V$ the maximal subgroups of $\ig{B}$ are free. To facilitate the discussion, we depict in Fig. \[lb\] the bottom part of the lattice $\mathcal{L}(\mathbf{B})$ of all band varieties, along with their standard labels (see also [@Pe-LinS Diagram II.3.1]).
[Fig. \[lb\]: the bottom part of the lattice $\mathcal{L}(\mathbf{B})$ of band varieties, showing (from bottom to top) the trivial variety; $\mathbf{LZ}$, $\mathbf{SL}$, $\mathbf{RZ}$; $\mathbf{LNB}$, $\mathbf{ReB}$, $\mathbf{RNB}$; $\mathbf{LRB}$, $\mathbf{NB}$, $\mathbf{RRB}$; $\mathbf{LQNB}$, $\mathbf{RQNB}$; and $\mathbf{LSNB}$, $\mathbf{RB}$, $\mathbf{RSNB}$.]
The main result of this note is the following.
\[t-main\] Let $\V$ be a variety of bands. Then $\ig{B}$ has all its maximal subgroups free for all $B\in\V$ if and only if $\V$ is contained either in $\mathbf{LSNB}$ or in $\mathbf{RSNB}$.
This theorem is a direct consequence of the following two propositions.
\[p1\] For any left (right) seminormal band $B$, all maximal subgroups of $\ig{B}$ are free.
\[p2\] There exists a regular band $B$ such that $\ig{B}$ has a maximal subgroup isomorphic to $\mathbb{Z}\oplus\mathbb{Z}$.
The first of these propositions is a generalisation of the well-known result of Pastijn [@P2 Theorem 6.5] (cf. also [@NP; @P1]) that all maximal subgroups of $\ig{B}$ are free for any normal band $B$. The other one supplies a simpler example with the same non-free maximal subgroup as the one considered in [@BMM1 Section 5]. The method used is the one from [@GR], which is based on the Reidemeister-Schreier type rewriting process for obtaining presentations of maximal subgroups of semigroups developed in [@R-JA]. So, before turning to the proofs of the above two propositions, we briefly present this general method yielding presentations for maximal subgroups of $\ig{E}$, $E=E(S)$, for an arbitrary semigroup $S$, and then we explain its particular case when $S$ is a band. Along the way, we assume some familiarity with the most basic notions of semigroup theory, such as Green’s relations and the structure of bands, see, for example, [@Hi; @Pe-LinS].
Let $S$ be a semigroup and let $D$ be a $\mathcal{D}$-class of $S$ containing an idempotent $e_0\in E(S)$. We are going to label the $\mathcal{R}$-classes contained in $D$ by $R_i$, $i\in I$, while $L_j$, $j\in J$, is the list of all $\mathcal{L}$-classes of $D$. The $\mathcal{H}$-class $R_i\cap L_j$ will be denoted by $H_{ij}$. Define $\mathcal{K}=\{(i,j):\ H_{ij}\text{ is a group}\}$; as is well known, $(i,j)\in\mathcal{K}$ if and only if $H_{ij}$ contains an idempotent, which we denote by $e_{ij}$. There is no loss of generality if we assume that both $I$ and $J$ contain an index 1, so that $e_0=e_{11}$.
For a word $\ww\in E\st$, let $\ol\ww$ denote the image of $\ww$ under the canonical monoid homomorphism of $E\st$ into $S^1$: in other words, when $\ww$ is non-empty, $\ol\ww$ is just the element of $S$ obtained by multiplying in $S$ the idempotents the concatenation of which is $\ww$. We say that a system of words $\rr_j,\rr'_j\in E\st$, $j\in J$, is a *Schreier system of representatives* for $D$ if for each $j\in J$:
- the right multiplications by $\ol{\rr_j}$ and $\ol{\rr'_j}$ are mutually inverse $\mathcal{R}$-class preserving bijections $L_1\to L_j$ and $L_j\to L_1$, respectively (so, in particular, right multiplication by $\rr_1$ is the identity mapping on $L_1$);
- each prefix of $\rr_j$ coincides with $\rr_{j'}$ for some $j'\in J$ (in particular, the empty word is just $\rr_1$).
It is well-known that such a Schreier system always exists. In the following, we assume that one particular Schreier system has been fixed.
In addition, we will assume that a mapping $i\mapsto j(i)$ has been specified such that $(i,j(i))\in\mathcal{K}$: such $j(i)$ must exist for each $i\in I$, since $D$ is a regular $\mathcal{D}$-class (as it contains an idempotent), and so each $\mathcal{R}$-class $R_i$ must contain an idempotent. The index $j(i)\in J$ is called the *anchor* of $R_i$.
Finally, call a *square* a quadruple of idempotents $(e,f,g,h)$ in $D$ such that $$\begin{array}{ccc}
e & \mathcal{R} & f\\[1mm]
\mathcal{L} && \mathcal{L}\\[1mm]
g & \mathcal{R} & h.
\end{array}$$ Then there are $i,k\in I$ and $j,\ell\in J$ such that $e\in H_{ij}$, $f\in H_{i\ell}$, $g\in H_{kj}$ and $h\in
H_{k\ell}$. For an idempotent $\varepsilon\in S$ we say that it *singularises* the square $(e,f,g,h)$ if any of the following two cases takes place:
- $\varepsilon e=e$ and $\varepsilon g=g$, while $e=f\varepsilon$; or
- $e=\varepsilon g$, along with $e\varepsilon=e$ and $f\varepsilon=f$.
Note that case (a) implies $\varepsilon f=f$, $\varepsilon h=h$, $e\varepsilon=e$ and $g=g\varepsilon=h\varepsilon$, while conditions $\varepsilon e=e$, $f=\varepsilon f=\varepsilon h$, $g\varepsilon=g$ and $h\varepsilon=h$ follow from (b). The square $(e,f,g,h)$ is *singular* if it is singularised by some idempotent of $S$. Let $\Sigma$ be the set of all quadruples $(i,k;j,\ell)\in I\times I\times J\times J$ (to be called *singular rectangles*) such that $(e_{ij},e_{i\ell},e_{kj},e_{k\ell})$ is a singular square in $D$.
The required general result of [@GR] can be now paraphrased as follows.
\[Bob-Nik\] Let $S$ be a semigroup with a non-empty set of idempotents $E=E(S)$. With the notation as above, the maximal subgroup of the free idempotent generated semigroup $\ig{E}$ containing $e_{11}\in E$ is presented by $\langle\Gamma\pre\mathfrak{R}\rangle$, where $\Gamma=\{f_{ij}:\ (i,j)\in\mathcal{K}\}$, while $\mathfrak{R}$ consists of three types of relations:
- $f_{i,j(i)}=1$ for all $i\in I$;
- $f_{ij}=f_{i\ell}$ for all $i\in I$ and $j,\ell\in J$ such that $\rr_j\cdot e_{i\ell}=\rr_\ell$;
- $f_{ij}^{-1}f_{i\ell}=f_{kj}^{-1}f_{k\ell}$ for all $(i,k;j,\ell)\in \Sigma$.
For our purpose, we would like to focus on the particular case when $S$ is a band. Then, clearly, $\mathcal{K}=I\times
J$ and $D=\{e_{ij}:\ i\in I,\ j\in J\}$. Since $\mathcal{D}=\mathcal{J}$ in any band, the set of all $\mathcal{D}$-classes of $B$ is partially ordered; it instantly turns out that, by definition, if $\varepsilon$ singularises a square $(e,f,g,h)$ in $D$, then $D_\varepsilon{\geqslant}D$. Now any such $\varepsilon\in B$ induces a pair of transformations on $I$ and $J$, respectively, in the following sense. For each $i\in I$ and $j\in J$ there are $i',k\in
I$ and $j',\ell\in J$ such that $\varepsilon e_{ij}=e_{i'\ell}$ and $e_{ij}\varepsilon =e_{kj'}$. One immediately sees that it must be $\ell=j$ and $k=i$, so that $B$ acts on the left on $I$ and on the right on $J$. Thus it is convenient to write the transformation $\sigma=\sigma_\varepsilon^{(l)}$ induced by $\varepsilon$ on $I$ to the left of its argument (so that $ee_{ij}=e_{\sigma(i)j}$), while the analogous transformation $\sigma'=\sigma_\varepsilon^{(r)}$ on $J$ is written to the right (resulting in the rule $e_{ij}e=e_{i(j)\sigma'}$).
Let $B$ be a band, let $D$ be a $\mathcal{D}$-class of $B$, and let $e_{11}\in D$. Then the maximal subgroup $G_{e_{11}}$ of $\ig{B}$ containing $e_{11}$ is presented by $\langle\Gamma\pre\mathfrak{R}\rangle$, where $\Gamma=\{f_{ij}:\ i\in I, j\in J\}$ and $\mathfrak{R}$ consists of relations $$f_{i1}=f_{1j}=f_{11}=1\label{rel1}$$ for all $i\in I$ and $j\in J$, and $$f_{ij}^{-1}f_{i\ell} = f_{kj}^{-1}f_{k\ell},\label{rel2}$$ where for some $\varepsilon\in B$ such that $D_\varepsilon{\geqslant}D$ the indices $i,k\in I$, $j,\ell\in J$ satisfy one of the following two conditions:
- $\sigma_\varepsilon^{(l)}(i)=i$, $\sigma_\varepsilon^{(l)}(k)=k$, and $(j)\sigma_\varepsilon^{(r)}=
(\ell)\sigma_\varepsilon^{(r)}=\ell$,
- $\sigma_\varepsilon^{(l)}(i)=\sigma_\varepsilon^{(l)}(k)=k$, $(j)\sigma_\varepsilon^{(r)}=j$ and $(\ell)\sigma_\varepsilon^{(r)}=\ell$.
Since $\mathcal{K}=I\times J$, we have a generator $f_{ij}$ for each $i\in I$ and $j\in J$. Furthermore, the same reason allows us to choose $j(i)=1$ as the anchor for each $i\in I$. Such a choice will imply that the relations of type (i) from Theorem \[Bob-Nik\] take the form $f_{i1}=1$, $i\in I$. In particular, we have $f_{11}=1$. As for the Schreier system, we can choose $\rr_1$ to be the empty word, $\rr_{j}=e_{1j}$ for all $j\in J\setminus\{1\}$ and $\rr'_j=e_{11}$ for all $j\in J$. The system $\rr_j$, $j\in J$, of words over $E$ is obviously prefix-closed. Since $e_{i1}e_{1j}=e_{ij}$ and $e_{ij}e_{11}=e_{i1}$ hold for all $i\in I$, $j\in J$, the right multiplications by $e_{1j}$ and $e_{11}$ are indeed mutually inverse bijections between $L_1$ and $L_j$ and between $L_j$ and $L_1$, respectively. Hence, the relations of type (ii) reduce to $f_{11}=f_{1j}$, that is, $f_{1j}=1$, for all $j\in J$. Thus we have all the relations (\[rel1\]). Finally, the conditions (a) and (b) express precisely the singularisation of a square $(e_{ij},e_{i\ell},e_{kj},e_{k\ell})$ in $D$ by an element $\varepsilon\in B$; therefore, the relations (\[rel2\]) correspond to relations of type (iii).
Rectangles $(i,k;j,\ell)\in I\times J$ of type (a) will be said to be *left-right* singular, while those of type (b) are *up-down* singular (with respect to $\varepsilon$). Another, more compact way of expressing condition (a) is $i,k\in\operatorname{Im}\sigma_\varepsilon^{(l)}$, $\ell\in\operatorname{Im}\sigma_\varepsilon^{(r)}$ and $(j,\ell)\in\operatorname{Ker}\sigma_\varepsilon^{(r)}$, while (b) is equivalent to $k\in\operatorname{Im}\sigma_\varepsilon^{(l)}$, $(i,k)\in\operatorname{Ker}\sigma_\varepsilon^{(l)}$ and $j,\ell\in\operatorname{Im}\sigma_\varepsilon^{(r)}$.
We can now turn to proving our aforementioned result.
Without any loss of generality, assume that $B\in\mathbf{RSNB}$ (the case when $B$ belongs to $\mathbf{LSNB}$ is dual). Recall (e.g. from [@Pe-LinS Proposition II.3.8]) that the variety $\mathbf{RSNB}$ satisfies (and is indeed defined by) the identity $tuv=tvtuv$. Therefore, if $B=\bigcup_{\alpha\in Y}B_\alpha$ is the greatest semilattice decomposition of $B$, $a\in B$ and $x,y\in D=B_\alpha$ for some $\alpha\in Y$, then $x=xyx$ and $y=yxy$. Hence, we have $ax=ax(yx)=ayxaxyx$ and $ay=ay(xy)=axyayxy$, implying $ax\,\mathcal{R}\, ay$. In particular, for any $\varepsilon\in B$ such that $D_\varepsilon{\geqslant}D$, $\varepsilon e_{ij}\,\mathcal{R}\, \varepsilon e_{k\ell}$ holds in $D$ for all $i,k\in
I$, $j,\ell\in J$, so the transformation $\sigma_\varepsilon^{(l)}$ is a constant function on $I$.
We conclude that there are no proper (non-degenerate) rectangles $(i,k;j,\ell)$ that are left-right singular with respect to some $\varepsilon\in B$. In other words, all proper singular rectangles in $I\times J$—and thus all nontrivial relations of $G_{e_{11}}$—are of the up-down kind: $$f_{ij}^{-1}f_{i\ell}=f_{k_0j}^{-1}f_{k_0\ell},$$ where $j,\ell$ are two fixed points of $\sigma_\varepsilon^{(r)}$, $i\in I$ is arbitrary, and (since in this context $\sigma_\varepsilon^{(l)}$ is constant) $\operatorname{Im}\sigma_\varepsilon^{(l)}=\{k_0\}$, for some $\varepsilon\in B$. However, now it is straightforward to deduce the relation for *all* $i,k\in I$ and fixed points $j,\ell$ of $\sigma_\varepsilon^{(r)}$. Thus we are led to define an equivalence $\theta_B$ of $\bigcup_{\varepsilon\in B,
D_\varepsilon{\geqslant}D}\operatorname{Im}\sigma_\varepsilon^{(r)}=J$ which is the transitive closure of the relation $\rho_B$ defined by $(j_1,j_2)\in\rho_B$ if and only if $j_1,j_2\in\operatorname{Im}\sigma_\varepsilon^{(r)}$ for some $\varepsilon\in B$. Now it is almost immediate to see that for all $i,k\in I$ and $j,\ell\in J$ such that $(j,\ell)\in\theta_B$ we have that $$f_{ij}^{-1}f_{i\ell}=f_{kj}^{-1}f_{k\ell}$$ holds in $G_{e_{11}}$. This immediately implies $f_{k\ell}=1$ for all $k\in I$ and $\ell\in 1/\theta_B$, as well as $$f_{kj}=f_{k\ell}$$ for all $k\in I$, whenever $(j,\ell)\in\theta_B$. So, let $j_1=1,j_2\dots,j_m\in J$ be a cross-section of $J/\theta_B$. Then it is straightforward to eliminate all the relations from the presentation of $G_{e_{11}}$ while reducing its generating set to $$\{f_{ij_r}:\ i\in I\setminus\{1\},\;2{\leqslant}r{\leqslant}m\}.$$ In other words, $G_{e_{11}}$ is a free group of rank $(|I|-1)(m-1)$.
Let $B$ be the subband of the free regular band on four generators $a,b,c,d$ consisting of two $\mathcal{D}$-classes: a $2\times 2$ class $D_1$ consisting of elements $ab,aba,ba,bab$ and a $4\times 4$ class $D_0$ consisting of elements of the form $\uu_1\vv\uu_2$, where $\uu_1,\uu_2\in\{ab,ba\}$ and $\vv\in\{cd,cdc,dc,dcd\}$. So, we can take $I=\{abcd,abdc,bacd,badc\}$, the set of all initial parts of words from $D_0$, and $J=\{cdba,dcba,cdab,dcab\}$, the set of all final parts of those words. A direct computation shows that $$\begin{aligned}
&\sigma_{ab}^{(l)}=\sigma_{aba}^{(l)}=\left(
\begin{array}{llll}
abcd & abdc& badc & bacd\\
abcd & abdc& abdc & abcd
\end{array}\right),\\
&\sigma_{ba}^{(l)}=\sigma_{bab}^{(l)}=\left(
\begin{array}{llll}
abcd & abdc& badc & bacd\\
bacd & badc& badc & bacd
\end{array}\right),\\
&\sigma_{ab}^{(r)}=\sigma_{bab}^{(r)}=\left(
\begin{array}{llll}
cdba & cdab & dcab & dcba\\
cdab & cdab & dcab & dcab
\end{array}\right),\\
&\sigma_{ba}^{(r)}=\sigma_{aba}^{(r)}=\left(
\begin{array}{llll}
cdba & cdab & dcab & dcba\\
cdba & cdba & dcba & dcba
\end{array}\right).\end{aligned}$$ If we enumerate (for brevity of further calculations) $abcd\to 1,abdc\to 2,badc\to 3,bacd\to 4$ and $cdba\to 1,cdab\to
2,dcab\to 3,dcba\to 4$, we get $$\begin{aligned}
&\sigma_{ab}^{(l)}=\sigma_{aba}^{(l)}=\left(
\begin{array}{llll}
1 & 2 & 3 & 4\\
1 & 2 & 2 & 1
\end{array}\right),
&\sigma_{ba}^{(l)}=\sigma_{bab}^{(l)}=\left(
\begin{array}{llll}
1 & 2 & 3 & 4\\
4 & 3 & 3 & 4
\end{array}\right),\\
&\sigma_{ab}^{(r)}=\sigma_{bab}^{(r)}=\left(
\begin{array}{llll}
1 & 2 & 3 & 4\\
2 & 2 & 3 & 3
\end{array}\right),
&\sigma_{ba}^{(r)}=\sigma_{aba}^{(r)}=\left(
\begin{array}{llll}
1 & 2 & 3 & 4\\
1 & 1 & 4 & 4
\end{array}\right).\end{aligned}$$ Hence, the list of singular rectangles is exhausted by: $$\begin{aligned}
&(1,2;1,2),(1,2;3,4),(3,4;1,2),(3,4;3,4),\\
&(1,4;2,3),(1,4;1,4),(2,3;2,3),(2,3;1,4).\end{aligned}$$ This results in $f_{11}=f_{12}=f_{13}=f_{14}=f_{21}=f_{31}=f_{41}=f_{22}=f_{44}=1$ and $$\begin{array}{lll}
f_{23}=f_{24}, &f_{24}=f_{34}, &f_{43}^{-1}=f_{33}^{-1}f_{34}\\[1mm]
f_{32}=f_{42}, &f_{42}=f_{43}, &f_{23}=f_{32}^{-1}f_{33}.
\end{array}$$
[Figure: the $4\times 4$ array of $\mathcal{H}$-classes of $D_0$, with rows and columns labelled $1,\dots,4$, in which the singular rectangles listed above are outlined.]
If we denote $x=f_{23}$ and $y=f_{32}$ we obviously remain with these two generators for $G_{abcdba}$ and a single relation $$yx=f_{33}=xy,$$ so $G_{abcdba}\cong \mathbb{Z}\oplus\mathbb{Z}$.
This completes the proof of Theorem \[t-main\].
The band $B$ from the previous proof can also be realised as a regular subband of the free band $FB_3$ on three generators $a,b,c$ whose elements are from $D'_1=\{ab,aba,ba,bab\}$ and $D'_0=\{\uu c\vv:\ \uu,\vv\in D'_1\}$.
We finish the note by several problems that might be subjects of future research in this direction.
Characterise all bands $B$ with the property that $\ig{B}$ has a non-free maximal subgroup.
Characterise all groups that arise as maximal subgroups of $\ig{B}$ for some band $B$. The same problem stands for regular bands $B$, and in fact for $B\in\mathbf{V}$ for any particular band variety $\mathbf{V}{\geqslant}\mathbf{RB}$.
Given a band variety $\mathbf{V}$ and an integer $n{\geqslant}1$, describe the maximal subgroups of $\ig{\mathfrak{F}_n\mathbf{V}}$, where $\mathfrak{F}_n\mathbf{V}$ denotes the $\mathbf{V}$-free band on a set of $n$ free generators [@PeSi].
The author is grateful to the anonymous referee, whose careful reading, comments and suggestions significantly improved the presentation of the results.
[99]{} =-2.5pt
, [S. W. Margolis]{} and [J. Meakin]{}, Subgroups of free idempotent generated semigroups need not be free, *J. Algebra*, **321** (2009), 3026–3042.
, [S. W. Margolis]{} and [J. Meakin]{}, Subgroups of free idempotent generated semigroups: full linear monoids, manuscript, 17 pp. [arXiv: 1009.5683](arXiv: 1009.5683)
, Biordered sets of bands, *Semigroup Forum*, **29** (1984), 241–246.
, Biordered sets are biordered subsets of idempotents of semigroups, *J. Austral. Math. Soc. Ser. A*, **37** (1984), 258–268.
, Biordered sets come from semigroups, *J. Algebra*, **96** (1985), 581–591.
and [N. Ruškuc]{}, On maximal subgroups of free idempotent generated semigroups, *Israel J. Math.*, to appear.
, *Techniques of Semigroup Theory*, Oxford University Press, Oxford, 1992.
, Subgroups of the free semigroup on a biordered set in which principal ideals are singletons, *Comm. Algebra*, **30** (2002), 5513–5519.
, Structure of regular semigroups. I, *Mem. Amer. Math. Soc.*, **22** (1979), no. 224, vii+119 pp.
and [F. Pastijn]{}, Subgroups of free idempotent generated regular semigroups, *Semigroup Forum*, **21** (1980), 1–7.
, Presentations for subgroups of monoids, *J. Algebra*, **220** (1999), 365–380.
, Idempotent generated completely 0-simple semigroups, *Semigroup Forum*, **15** (1977), 41–50.
, The biorder on the partial groupoid of idempotents of a semigroup, *J. Algebra* **65** (1980), 147–187.
, The translational hull in semigroups and rings, *Semigroup Forum*, [**1**]{} (1970), 283–360.
, *Lectures in Semigroups*, Wiley, New York, 1977.
and [P. V. Silva]{}, Structure of relatively free bands, *Comm. Algebra*, [**30**]{} (2002), 4165–4187.
[^1]: *Mathematics subject classification numbers:* 20M05, 20M10, 20F05
[^2]: *Key words and phrases:* free idempotent generated semigroup, band, maximal subgroup
[^3]: The support of the Ministry of Education and Science of the Republic of Serbia, through Grant No. 174019, is gratefully acknowledged.
|
[**$L^\infty$-estimates for the Neumann problem on general domains**]{}\
A.F.M. ter Elst, H. Meinlschmidt and J. Rehberg
Abstract. Let $\Omega \subset {\mathds{R}}^d$ be bounded open and connected. Suppose that $W^{1,2}(\Omega) \subset L^r(\Omega)$ for some $r > 2$. Let $A$ be a pure second-order elliptic differential operator with bounded real measurable coefficients on $\Omega$. Let $q > d$ with $\frac{1}{2}-\frac{1}{q} > \frac{1}{r}$. If $p$ is the dual exponent of $q$, then we show that the pre-image of the space $(W^{1,p}(\Omega))^*$ under the map $A$ is contained in the space of bounded functions on $\Omega$. The considerations are complemented by results on optimal Sobolev regularity for $A$.
Introduction {#boundedS1}
============
If $A$ is a pure second-order elliptic operator with Dirichlet boundary conditions and real measurable coefficients on a bounded connected open set $\Omega \subset {\mathds{R}}^d$, then it is not too hard to show that the resolvent operator $A^{-1}$ maps $L^q(\Omega)$ into $L^\infty(\Omega)$ if $q > d$. It is a famous result of Stampacchia ([@Stam2] Theorem 4.4) that $A^{-1}$ extends to a continuous operator from $W^{-1,q}(\Omega)$ into $L^\infty(\Omega)$ if $q > d$. Inspecting the proof, it becomes clear that this result extends without difficulties from the pure Dirichlet case to the case of mixed boundary conditions, as long as the Dirichlet part of the boundary is large enough to imply a Poincaré inequality. It is also possible to extend the result to merely Neumann boundary conditions if a positive scalar is added to the operator, so that the resulting operator is coercive. The idea how to do this can be found in the book of Tröltzsch ([@Troltzsch] Section 7.2.2). What remains open is the pure divergence form operator with pure Neumann boundary condition. It is clear that a ‘naive’ generalisation cannot work, since one can add to any solution of such a Neumann problem an arbitrary constant and again obtain a solution.
The main theorem of this paper is as follows.
\[tbounded101\] Let $\Omega \subset {\mathds{R}}^d$ be a bounded connected open set. Let $r \in (2,\infty)$ and suppose that $W^{1,2}(\Omega) \subset L^r(\Omega)$. Let $\mu \colon \Omega \to {\mathds{R}}^{d \times d}$ be a bounded measurable function. Suppose there exists a $\nu > 0$ such that $${\mathop{\rm Re}}\sum_{k,\ell=1}^d \mu(x) \, \xi_k \, \overline{\xi_\ell}
\geq \nu \, |\xi|^2$$ for all $\xi \in {\mathds{C}}^d$ and almost all $x \in \Omega$. Define ${{\cal A}}\colon W^{1,2}(\Omega) \to (W^{1,2}(\Omega))^*$ by $${{\cal A}}(u,v)
= \int_\Omega \mu \nabla u \cdot \overline{\nabla v}
.$$ Let $q \in (d,\infty)$ and suppose that $\frac{1}{2}-\frac{1}{q} > \frac{1}{r}$. If $u \in W^{1,2}(\Omega)$ with ${{\cal A}}u \in (W^{1,p}(\Omega))^*$, where $p$ is the dual exponent of $q$, then $u \in L^\infty(\Omega)$.
More precisely, for all $T \in (W^{1,p}(\Omega))^*$ with $T({\mathds{1}}) = 0$ there is a unique $u \in W^{1,2}(\Omega)$ with $\int_\Omega u = 0$ satisfying ${{\cal A}}u = T$. Moreover, there exists a $c > 0$ independent of $T$ such that $\|u\|_{L^\infty(\Omega)} \leq c \, \|T\|_{(W^{1,p}(\Omega))^*}$.
We emphasise that the assumed Sobolev embedding $W^{1,2}(\Omega) \subset L^r(\Omega)$ is a very weak hypothesis. If $2^*$ is the first Sobolev exponent, that is $\frac{1}{2^*} = \frac{1}{2} - \frac{1}{d}$, then it follows from scaling that $r \leq 2^*$. The assumption $\frac{1}{2}-\frac{1}{q} > \frac{1}{r}$ implies that $q > d$. It is well known that there is a connection between Sobolev embeddings and the solvability of Neumann problems. We refer, for example, to Maz’ya and Poborchi[ĭ]{} [@MazyaPoborchii1; @MazyaPoborchii2] and [@MazED2], Section 6.10.
If $d \geq 3$, then the optimal case in our assumption is $r = 2^*$, the first Sobolev exponent. Then the condition $\frac{1}{2}-\frac{1}{q} > \frac{1}{r}$ is merely the condition $q > d$, as in the Stampacchia theorem for the Dirichlet boundary condition. This optimal assumption is satisfied for example by any open bounded set which is the finite union of connected $W^{1,2}$-extension domains, such as for example Lipschitz domains. Another example is that of a connected John domain ([@Bojarski], Section 6). If the domain has cusps, then the full Sobolev embedding is usually not available, but the embedding $W^{1,2}(\Omega) \subset L^r(\Omega)$ still holds for some $r \in (2,2^*)$ if the cusps are of polynomial type by [@AF], Theorem 4.51. We also refer to Maz’ya [@MazED2], Section 6.9 for more geometric conditions. It is also known that the embedding cannot hold true for any $r > 2$ if the boundary of $\Omega$ has cusps of exponential sharpness, see [@AF] Theorem 4.48. Note that in the case of Dirichlet boundary conditions, one always has the optimal embeddings $W^{1,2}_0(\Omega) \subset L^{2^*}(\Omega)$ if $d \geq 3$, and $W^{1,2}_0(\Omega) \subset L^r(\Omega)$ for all $r \in (2,\infty)$ if $d=2$.
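For orientation, suppose $d=3$ and the optimal embedding holds, so that $r=2^*=6$. Then the hypothesis of Theorem \[tbounded101\] reads $$\frac{1}{2}-\frac{1}{q} > \frac{1}{6}, \quad \mbox{that is,} \quad q > 3 = d.$$ If, say, cusps of polynomial type only allow the embedding with $r=4$, then the condition becomes $\frac{1}{2}-\frac{1}{q} > \frac{1}{4}$, that is, $q > 4$.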
The proof of Theorem \[tbounded101\] follows the ideas of Stampacchia and uses truncations of Sobolev functions. It relies on the Stampacchia lemma ([@KinS] Chapter II, Appendix B, Lemma 2.1) and at its heart lies a uniform estimation of the Poincaré constants of the truncations of mean value free Sobolev functions, Lemma \[lbounded207\] below.
We also prove that the pure Neumann operator ${{\cal A}}$ admits optimal Sobolev regularity in the setting of Theorem \[tbounded101\] for $q$ sufficiently close to $2$. This means that the domain of the part of the operator ${{\cal A}}$ in $(W^{1,p}(\Omega))^*$ coincides with $W^{1,q}_\perp(\Omega)$, the mean value free functions in $W^{1,q}(\Omega)$, where again $p$ is the dual exponent to $q$. The result relies on interpolation and the [Š]{}ne[ĭ]{}berg stability theorem. We refer to Theorem \[tbounded402\] below.
The outline of this paper is as follows. In Section \[boundedS2\] we show that a Sobolev embedding implies a Poincaré inequality on any $L^p$-space. We use this in Section \[boundedS3\] to adapt the argument of Stampacchia to deduce the boundedness as stated in Theorem \[tbounded101\]. In Section \[boundedS4\] we derive optimal Sobolev regularity results for ${{\cal A}}$ and some consequences of these based on the results in Section \[boundedS2\].
We conclude with an example. We formally attach the following boundary value problem to the equation ${{\cal A}}u = T$ with $T \in (W^{1,p}(\Omega))^*$ as in Theorem \[tbounded101\]: $$\begin{aligned}
- {\mathop{\rm div}}(\mu \nabla u) & = & f \quad \text{in}~\Omega, \\*
- n \cdot \mu\nabla u & = & g \quad \text{on}~\partial\Omega,\end{aligned}$$ where $f \in L^s(\Omega)$ and $g \in L^t(\partial\Omega;\mathcal{H}_{d-1})$ for appropriate values of $s$ and $t$; here $n$ denotes the normal. Since $T$ is only supposed to be a functional on $W^{1,p}(\Omega)$, inhomogeneous boundary data is allowed. For the foregoing boundary value problem, $T$ takes the form $$T(v) = \int_\Omega f\,\overline{v}
+ \int_{\partial\Omega} g\,\overline{\tau v} \, \mathrm{d}\mathcal{H}_{d-1},$$ where $\tau$ is the trace operator onto $\partial\Omega$. If the domain $\Omega$ is sufficiently regular to allow the application of the divergence theorem and to admit a suitable trace operator, this formulation and its connection to ${{\cal A}}u = T$ can be made rigorous, see Ciarlet ([@Cia], Chapter 1.2) or [@GGZ], Chapter 2.2. A particular case would be that of a Lipschitz graph domain $\Omega$.
Sobolev and Poincaré {#boundedS2}
====================
We first show that a Sobolev type embedding extrapolates to compactness of the inclusion map $W^{1,p}(\Omega) \subset L^p(\Omega)$.
\[lbounded201\] Let $\Omega \subset {\mathds{R}}^d$ be open and bounded. Let $q \in (1,\infty)$ and suppose there exists a $\delta > 0$ such that $W^{1,q}(\Omega) \subset L^{q+\delta}(\Omega)$. Let $p \in (1,\infty)$. Then the inclusion $W^{1,p}(\Omega) \subset L^p(\Omega)$ is compact. Moreover, there exists a $\delta' > 0$ such that $W^{1,p}(\Omega) \subset L^{p+\delta'}(\Omega)$.
We show that there exists an $s > p$ such that $W^{1,p}(\Omega) \subset L^s(\Omega)$. Then the compactness of the inclusion $W^{1,p}(\Omega) \subset L^p(\Omega)$ follows as in [@Daners7] Lemma 7.1. Suppose that $p \in (1,q)$ (the case $p \in (q,\infty)$ is similar). Fix $r \in (1,p)$. It follows from Liu–Tai [@LiuTai] Theorem 9 that the real interpolation space $(W^{1,1}(\Omega),W^{1,\infty}(\Omega))_{1 - \frac{1}{t},t} = W^{1,t}(\Omega)$ for all $t \in (1,\infty)$. Here $W^{1,\infty}(\Omega)$ is the Sobolev space of all $L^\infty(\Omega)$ functions whose weak partial derivatives are also $L^\infty(\Omega)$ functions. Let $\theta \in (0,1)$ be such that $\frac{1}{p} = \frac{1-\theta}{r} + \frac{\theta}{q}$. Then by complex interpolation $$\begin{aligned}
\bigl[W^{1,r}(\Omega),W^{1,q}(\Omega)\bigr]_\theta
& = & \Bigl[\bigl(W^{1,1}(\Omega),W^{1,\infty}(\Omega)\bigr)_{1 - \frac{1}{r},r},
\bigl(W^{1,1}(\Omega),W^{1,\infty}(\Omega)\bigr)_{1 - \frac{1}{q},q}\Bigr]_\theta \label{ebounded1} \\
& = & \bigl(W^{1,1}(\Omega),W^{1,\infty}(\Omega)\bigr)_{1 - \frac{1}{p},p}
= W^{1,p}(\Omega) \notag
,\end{aligned}$$ where we used the reiteration theorem [@BL] Theorem 4.7.2 in the second step. The inclusions $W^{1,r}(\Omega) \to L^r(\Omega)$ and $W^{1,q}(\Omega) \to L^{q + \delta}(\Omega)$ are continuous. Hence by complex interpolation one deduces that $W^{1,p}(\Omega) \subset L^s(\Omega)$, where $\frac{1}{s}
= \frac{1-\theta}{r} + \frac{\theta}{q+\delta}
< \frac{1-\theta}{r} + \frac{\theta}{q}
= \frac{1}{p}$. Note that $s > p$ as required.
Arguing as in Ziemer [@Zie2] Theorem 4.4.2 one obtains a Poincaré inequality from the compact inclusion $W^{1,p}(\Omega) \subset L^p(\Omega)$.
\[pbounded202\] Let $\Omega \subset {\mathds{R}}^d$ be open, bounded and connected. Let $p \in (1,\infty)$ and suppose that the inclusion $W^{1,p}(\Omega) \subset L^p(\Omega)$ is compact. Let $\Omega_0 \subset \Omega$ be measurable and suppose that the Lebesgue measure $|\Omega_0| > 0$. Then there exists a $c > 0$ such that $$\|u\|_p \leq c \, \|\nabla u\|_{p}$$ for all $u \in W^{1,p}(\Omega)$ with $\int_{\Omega_0} u = 0$.
Suppose not. Then for all $n \in {\mathds{N}}$ there exists a $u_n \in W^{1,p}(\Omega)$ such that $\|u_n\|_p > n \, \|\nabla u_n\|_p$ and $\int_{\Omega_0} u_n = 0$. Without loss of generality $\|u_n\|_p = 1$ for all $n \in {\mathds{N}}$. Then $\|\nabla u_n\|_p \leq \frac{1}{n}$. So the sequence $(u_n)_{n \in {\mathds{N}}}$ is bounded in $W^{1,p}(\Omega)$. Passing to a subsequence if necessary there exists a $u \in W^{1,p}(\Omega)$ such that $\lim u_n = u$ weakly in $W^{1,p}(\Omega)$. Then $\lim u_n = u$ strongly in $L^p(\Omega)$ and $\int_{\Omega_0} u = 0$. Moreover $\|u\|_p = 1$ and $u \neq 0$. Next $\|\nabla u\|_p \leq \liminf_{n \to \infty} \|\nabla u_n\|_p = 0$. Since $\Omega$ is connected it follows that $u$ is constant by [@Zie2] Corollary 2.1.9. Because $\int_{\Omega_0} u = 0$ and $|\Omega_0| > 0$ one deduces that $u = 0$. This is a contradiction.
If $\Omega \subset {\mathds{R}}^d$ is a bounded open set and $p \in (1,\infty)$, then we define $$W^{1,p}_\perp(\Omega)
= \Bigl\{ u \in W^{1,p}(\Omega) \colon \int_\Omega u = 0 \Bigr\}
.$$ It follows from Proposition \[pbounded202\] that $W^{1,p}_\perp(\Omega)$ equipped with the norm $u \mapsto \|\nabla u\|_p$ is a Banach space.
\[cbounded203\] Let $\Omega \subset {\mathds{R}}^d$ be open, bounded and connected. Let $p \in (1,\infty)$ and suppose that the inclusion $W^{1,p}(\Omega) \subset L^p(\Omega)$ is compact. Define ${{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {}\cdot{}
\right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \colon W^{1,p}(\Omega) \to [0,\infty)$ by ${{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert u
\right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} = \|\nabla u\|_p + \big| \int_\Omega u \big|$. Then one has the following.
\[cbounded203-1\] The function ${{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {}\cdot{}
\right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}}$ is a norm on $W^{1,p}(\Omega)$ which is equivalent to $\|\cdot\|_{W^{1,p}(\Omega)}$.
\[cbounded203-2\] The map $$P \colon u \mapsto u - \tfrac{1}{|\Omega|} \int_\Omega u$$ is a projection from $W^{1,p}(\Omega)$ onto $W^{1,p}_\perp(\Omega)$. In particular, $$u \mapsto \Big(\tfrac{1}{|\Omega|} \int_\Omega u, u - \tfrac{1}{|\Omega|} \int_\Omega u
\Big)$$ is a topological isomorphism from $W^{1,p}(\Omega)$ onto ${\mathds{C}}\oplus W^{1,p}_\perp(\Omega)$.
By Proposition \[pbounded202\] there exists a $c > 0$ such that $\|u\|_p \leq c \, \|\nabla u\|_p$ for all $u \in W^{1,p}_\perp(\Omega)$. If $u \in W^{1,p}(\Omega)$, then $$\begin{aligned}
\|u\|_p
& \leq & \bigl\|u - \tfrac{1}{|\Omega|} \int_\Omega u\bigr\|_p
+ \bigl\|\tfrac{1}{|\Omega|} \int_\Omega u\bigr\|_p \\
& \leq & c \, \bigl\|\nabla \Big( u - \tfrac{1}{|\Omega|} \int_\Omega u \Big) \bigr\|_p
+ |\Omega|^{-1+\frac{1}{p}} \Big| \int_\Omega u \Big| \\
& = & c \, \|\nabla u\|_p + |\Omega|^{-1+\frac{1}{p}} \Big| \int_\Omega u \Big| \end{aligned}$$ and the corollary follows easily.
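For completeness we verify that the map $P$ in Corollary \[cbounded203\]\[cbounded203-2\] is indeed a projection. If $u \in W^{1,p}(\Omega)$, then $\int_\Omega P u = \int_\Omega u - \int_\Omega u = 0$, so $Pu \in W^{1,p}_\perp(\Omega)$ and $$P(Pu) = Pu - \tfrac{1}{|\Omega|} \int_\Omega Pu = Pu .$$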
\[pbounded204\] Let $\Omega \subset {\mathds{R}}^d$ be open, bounded and connected. Let $p \in (1,\infty)$ and suppose that the inclusion $W^{1,p}(\Omega) \subset L^p(\Omega)$ is compact. Then for all $T \in (W^{1,p}(\Omega))^*$ there exist $\kappa \in {\mathds{C}}$ and $f_1,\ldots,f_d \in L^q(\Omega)$ such that $$\langle T,u \rangle_{(W^{1,p}(\Omega))^* \times W^{1,p}(\Omega)}
= \kappa \int_\Omega \overline u + \sum_{j=1}^d \int_\Omega f_j \, \overline{\partial_j u}$$ for all $u \in W^{1,p}(\Omega)$, where $q$ is the dual exponent of $p$.
Using Corollary \[cbounded203\]\[cbounded203-2\] it suffices to show that for all $S \in (W^{1,p}_\perp(\Omega))^*$ there exist $f_1,\ldots,f_d \in L^q(\Omega)$ such that $$\langle S,u \rangle_{(W^{1,p}_\perp(\Omega))^* \times W^{1,p}_\perp(\Omega)}
= \sum_{j=1}^d \int_\Omega f_j \, \overline{\partial_j u}$$ for all $u \in W^{1,p}_\perp(\Omega)$, where $u \mapsto \|\nabla u\|_p$ is the norm on $W^{1,p}_\perp(\Omega)$. Consider the subspace $M = \{ \nabla u \colon u \in W^{1,p}_\perp(\Omega) \} $ in $L^p(\Omega)^d$. Define $F \colon M \to {\mathds{C}}$ by $F(\nabla u) = S u$. Then $F$ is well-defined and continuous. Therefore by Hahn–Banach there exists an extension $\widetilde F \in (L^p(\Omega)^d)^*$ of $F$. The rest of the proof is straightforward.
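Concretely, since $\widetilde F$ is a bounded functional on $L^p(\Omega)^d$, the usual duality between $L^p(\Omega)^d$ and $L^q(\Omega)^d$ (with the same conjugate-linear pairing as above) provides $f_1,\ldots,f_d \in L^q(\Omega)$ such that $\widetilde F(g) = \sum_{j=1}^d \int_\Omega f_j \, \overline{g_j}$ for all $g \in L^p(\Omega)^d$. Hence $$\langle S,u \rangle_{(W^{1,p}_\perp(\Omega))^* \times W^{1,p}_\perp(\Omega)}
= F(\nabla u)
= \widetilde F(\nabla u)
= \sum_{j=1}^d \int_\Omega f_j \, \overline{\partial_j u}$$ for all $u \in W^{1,p}_\perp(\Omega)$, as required.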
Proof of Theorem \[tbounded101\] {#boundedS3}
================================
In this section we prove Theorem \[tbounded101\]. Let $\Omega \subset {\mathds{R}}^d$ be a bounded connected open set. Let $\mu \colon \Omega \to {\mathds{R}}^{d \times d}$ be a bounded measurable function. We suppose that $\mu$ is [**elliptic**]{}, that is there exists a $\nu > 0$ such that $${\mathop{\rm Re}}\sum_{k,\ell=1}^d \mu(x) \, \xi_k \, \overline{\xi_\ell}
\geq \nu \, |\xi|^2$$ for all $\xi \in {\mathds{C}}^d$ and almost all $x \in \Omega$. Let $r > 2$ and suppose that $W^{1,2}(\Omega) \subset L^r(\Omega)$.
Define ${{\cal A}}\colon W^{1,2}(\Omega) \to (W^{1,2}(\Omega))^*$ by $${{\cal A}}(u,v)
= \int_\Omega \mu \nabla u \cdot \overline{\nabla v}
.$$ Recall that $W^{1,p}_\perp(\Omega) = \bigl\{ u \in W^{1,p}(\Omega) \colon \int_\Omega u = 0 \bigr\} $ for all $p \in (1,\infty)$. If $q \in (1,\infty)$ then we define $$W^{-1,q}_\emptyset(\Omega)
= \bigl(W^{1,p}(\Omega)\bigr)^*
,$$ where $p$ is the dual exponent of $q$. Moreover, we define $$W^{-1,q}_\perp(\Omega) = \bigl\{ T \in W^{-1,q}_\emptyset(\Omega) \colon T({\mathds{1}}) = 0 \bigr\}
.$$ Clearly ${{\cal A}}u \in W^{-1,2}_\perp(\Omega)$ for all $u \in W^{1,2}(\Omega)$ and $\ker {{\cal A}}= {\mathds{C}}{\mathds{1}}$ since $\Omega$ is connected. Define ${{\cal A}}_\perp \colon W^{1,2}_\perp(\Omega) \to W^{-1,2}_\perp(\Omega)$ by ${{\cal A}}_\perp u = {{\cal A}}u$. Then ${{\cal A}}_\perp$ is injective. We next show that it is also surjective and $W^{-1,2}_\perp(\Omega) = (W^{1,2}_\perp(\Omega))^*$, up to isomorphy.
\[pbounded205\] The map ${{\cal A}}_\perp$ is a topological isomorphism.
Define the form ${\gothic{b}}\colon W^{1,2}_\perp(\Omega) \times W^{1,2}_\perp(\Omega) \to {\mathds{C}}$ by $${\gothic{b}}(u,v) = \int_\Omega \mu \nabla u \cdot \overline{\nabla v}
.$$ Then ${\gothic{b}}$ is a continuous coercive sesquilinear form by Lemma \[lbounded201\] and Proposition \[pbounded202\]. Let ${{\cal B}}\colon W^{1,2}_\perp(\Omega) \to (W^{1,2}_\perp(\Omega))^*$ be such that ${\gothic{b}}(u,v) = \langle {{\cal B}}u,v \rangle_{(W^{1,2}_\perp(\Omega))^* \times W^{1,2}_\perp(\Omega)}$ for all $u,v \in W^{1,2}_\perp(\Omega)$. Then ${{\cal B}}$ is surjective by the Lax–Milgram theorem. Let $T \in W^{-1,2}_\perp(\Omega)$. Then $T \in W^{-1,2}_\emptyset(\Omega) = (W^{1,2}(\Omega))^*$. Let $\widetilde T = T|_{W^{1,2}_\perp(\Omega)}$. Then $\widetilde T \in (W^{1,2}_\perp(\Omega))^*$. Hence there is a $u \in W^{1,2}_\perp(\Omega)$ such that ${{\cal B}}u = \widetilde T$. If $v \in W^{1,2}_\perp(\Omega)$, then $$\langle {{\cal A}}u,v \rangle_{W^{-1,2}_\emptyset(\Omega) \times W^{1,2}(\Omega)}
= {\gothic{b}}(u,v)
= \langle {{\cal B}}u,v \rangle_{(W^{1,2}_\perp(\Omega))^* \times W^{1,2}_\perp(\Omega)}
= \widetilde T(v)
= T(v)
.$$ Since $\langle {{\cal A}}u,{\mathds{1}}\rangle_{W^{-1,2}_\emptyset(\Omega) \times W^{1,2}(\Omega)}
= 0 = T({\mathds{1}})$ it follows by linearity and Corollary [\[cbounded203\]\[cbounded203-2\]]{} that ${{\cal A}}_\perp u = {{\cal A}}u = T$.
As a main tool for the proof of Theorem \[tbounded101\] we need truncations of Sobolev functions, which we consider next.
For all $u \in W^{1,2}(\Omega,{\mathds{R}})$ and $k \in [0,\infty)$ define $\zeta_{u,k} = ({\mathop{\rm sgn}}u) \, (|u| - k)^+$. If no confusion is possible then we write $\zeta_k = \zeta_{u,k}$. Moreover, define $A_k = \{ x \in \Omega \colon |u(x)| > k \} = [|u| > k]$.
\[lbounded206\] Let $u \in W^{1,2}(\Omega,{\mathds{R}})$. Then one has the following.
\[lbounded206-1\] $\zeta_k \in W^{1,2}(\Omega)$ for all $k \in [0,\infty)$.
\[lbounded206-2\] ${\mathds{1}}_{A_k} \, D_j u = {\mathds{1}}_{A_k} \, D_j \zeta_k$ for all $j \in \{ 1,\ldots,d \} $ and $k \in [0,\infty)$.
\[lbounded206-3\] The map $k \mapsto \zeta_k$ is continuous from $[0,\infty)$ into $W^{1,2}(\Omega)$.
\[lbounded206-4\] If $k \in [0,\infty)$, then the map $v \mapsto \zeta_{v,k}$ is continuous from $W^{1,2}(\Omega,{\mathds{R}})$ into $W^{1,2}(\Omega)$.
‘\[lbounded206-1\]’ and ‘\[lbounded206-2\]’. Note that $\zeta_k = (u^+ - k)^+ - (u^- - k)^+$. Then the statements follow from [@GT] Lemma 7.6.
‘\[lbounded206-3\]’. This follows from the Lebesgue dominated convergence theorem.
‘\[lbounded206-4\]’. This follows from \[lbounded206-1\] and [@MaM] Theorem 1.
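
As a quick numerical sanity check, the pointwise identity $\zeta_k = (u^+ - k)^+ - (u^- - k)^+$ used in the proof of \[lbounded206-1\] and \[lbounded206-2\] can be verified on sample values; the following sketch is ours and is not part of the argument.

```python
# A pointwise numerical check (our own sketch) of the identity used in the
# proof above: sgn(u) (|u| - k)^+ = (u^+ - k)^+ - (u^- - k)^+ for k >= 0,
# together with the fact that zeta_k vanishes off A_k = [|u| > k].
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=1000)                    # sample values of a real function
k = 0.5

zeta = np.sign(u) * np.maximum(np.abs(u) - k, 0.0)
alt = (np.maximum(np.maximum(u, 0.0) - k, 0.0)
       - np.maximum(np.maximum(-u, 0.0) - k, 0.0))
A_k = np.abs(u) > k

assert np.allclose(zeta, alt)
assert np.all(zeta[~A_k] == 0.0)
print("identity holds on the sample")
```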
A key estimate for the proof of Theorem \[tbounded101\] is the next lemma.
\[lbounded207\] Let $u \in W^{1,2}_\perp(\Omega,{\mathds{R}})$. Then there exists a $\gamma \geq 0$ such that $\|\zeta_k\|_2 \leq \gamma \, \|\nabla \zeta_k\|_2$ for all $k \in [0,\infty)$.
We split the proof into two cases depending on whether $u$ is bounded or not.
[**Case 1. **]{} Suppose $u$ is unbounded. If $k \in [0,\infty)$ and $\|\nabla \zeta_k\|_2= 0$, then $\zeta_k$ is constant and consequently $u$ is bounded, which is a contradiction. Hence $\|\nabla \zeta_k\|_2\neq 0$ for all $k \in [0,\infty)$. Since both $k \mapsto \|\zeta_k\|_2$ and $k \mapsto \|\nabla \zeta_k\|_2$ are continuous on $[0,\infty)$ by Lemma \[lbounded206\]\[lbounded206-3\], it suffices to show that $$\limsup_{k \to \infty}
\frac{ \|\zeta_k\|_2 }
{ \|\nabla \zeta_k\|_2 }
\leq 1 .
\label{elbounded207;1}$$ Suppose that (\[elbounded207;1\]) is false. Then there exists a sequence $(k_n)_{n \in {\mathds{N}}}$ in ${\mathds{R}}$ such that $k_n \geq n$ for all $n \in {\mathds{N}}$ and $\|\zeta_{k_n}\|_2 > \|\nabla \zeta_{k_n}\|_2$ for all $n \in {\mathds{N}}$. Define $v_n = \|\zeta_{k_n}\|_2^{-1} \, \zeta_{k_n}$ for all $n \in {\mathds{N}}$. Then $v_n \in W^{1,2}(\Omega)$, $\|v_n\|_2 = 1$ and $\|\nabla v_n\|_2 \leq 1$ for all $n \in {\mathds{N}}$. So the sequence $(v_n)_{n \in {\mathds{N}}}$ is bounded in $W^{1,2}(\Omega)$. Passing to a subsequence if necessary we may assume that there is a $v \in W^{1,2}(\Omega)$ such that $\lim v_n = v$ weakly in $W^{1,2}(\Omega)$. Then $\lim v_n = v$ in $L^2(\Omega)$. So $\|v\|_2 = 1$ and in particular $v \neq 0$. But $v(x) = \lim_{n \to \infty} v_n(x) = 0$ for almost every $x \in \Omega$. This is a contradiction.
[**Case 2. **]{} Suppose $u$ is bounded. Without loss of generality we may assume that $u \neq 0$. Let $k \in [0,\|u\|_\infty)$ and suppose that $\|\nabla \zeta_k\|_2 = 0$. Then $\zeta_k$ is constant, say $\delta$. If $\delta = 0$, then $|u| \leq k$ a.e., which is not possible since $k < \|u\|_\infty$. Suppose $\delta > 0$. Note that $\zeta_k(x) \leq 0 < \delta$ for all $x \in \Omega$ with $u(x) \leq k$. So $u(x) = k + \delta$ for all $x \in \Omega$. But then $\int_\Omega u \neq 0$. Similarly $\delta < 0$ gives a contradiction. Hence $\|\nabla \zeta_k\|_2 \neq 0$ for all $k \in [0,\|u\|_\infty)$.
Arguing as in Case 1 and using Lemma \[lbounded206\]\[lbounded206-3\] it follows that for all $k_1 \in (0,\|u\|_\infty)$ there exists a $c_1 > 0$ such that $\|\zeta_k\|_2 \leq c_1 \, \|\nabla \zeta_k\|_2$ for all $k \in [0,k_1]$.
Finally we show that there exist $k_0 \in (0,\|u\|_\infty)$ and $c_0 > 0$ such that $\|\zeta_k\|_2 \leq c_0 \, \|\nabla \zeta_k\|_2$ for all $k \in (k_0,\infty)$. If $|u| = \|u\|_\infty$ a.e., then $| [u = \|u\|_\infty ]|
= \frac{1}{2} \, |\Omega| > 0$, where we use that $\int_\Omega u = 0$. Then $w = {\mathds{1}}_{[u = \|u\|_\infty]}\,u = u \vee 0 \in W^{1,2}(\Omega)$. Using [@GT] Lemma 7.7 we deduce that $\nabla w = 0$ a.e. and this implies that $|[u = \|u\|_\infty]| \in \{ 0,|\Omega| \} $, which is a contradiction. Hence there is a $k_0 \in (0,\|u\|_\infty)$ such that $|[ |u| \leq k_0 ]| > 0$. Write $\Omega_0 = [ |u| \leq k_0 ]$. By Lemma \[lbounded201\] and Proposition \[pbounded202\] there exists a $c_0 > 0$ such that $\|v\|_2 \leq c_0 \, \|\nabla v\|_2$ for all $v \in W^{1,2}(\Omega)$ with $\int_{\Omega_0} v = 0$. If $k \in (k_0,\infty)$, then $\zeta_k(x) = 0$ for all $x \in \Omega_0$, so $\int_{\Omega_0} \zeta_k = 0$. Hence $\|\zeta_k\|_2 \leq c_0 \, \|\nabla \zeta_k\|_2$.
For all $u \in W^{1,2}_\perp(\Omega,{\mathds{R}})$ define $\gamma_u \in [0,\infty)$ to be the minimum of all $\gamma \geq 0$ such that $\|\zeta_k\|_2 \leq \gamma \, \|\nabla \zeta_k\|_2$ for all $k \in [0,\infty)$. Recall that $r > 2$ is such that $W^{1,2}(\Omega) \subset L^r(\Omega)$.
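
The constant $\gamma_u$ admits a rough numerical illustration. The sketch below is ours; a one-dimensional grid with finite differences stands in for the weak gradient, so it is only an approximation of $\sup_k \|\zeta_k\|_2/\|\nabla\zeta_k\|_2$ for a mean-zero sample function on $(0,1)$.

```python
# A rough 1-D illustration (our own sketch, not from the paper) of gamma_u:
# for a mean-zero u on Omega = (0,1) we approximate the ratio
# ||zeta_k||_2 / ||grad zeta_k||_2 on a grid over a range of levels k and take
# the maximum; finite differences stand in for the weak gradient.
import numpy as np

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
u = np.sin(2*np.pi*x) + 0.3*np.sin(6*np.pi*x)    # integral over (0,1) is 0

def ratio(k):
    zeta = np.sign(u) * np.maximum(np.abs(u) - k, 0.0)
    grad = np.gradient(zeta, dx)
    num = np.sqrt(np.sum(zeta**2) * dx)
    den = np.sqrt(np.sum(grad**2) * dx)
    return num / den if den > 0 else 0.0

levels = np.linspace(0.0, 0.999*np.max(np.abs(u)), 200)
print(max(ratio(k) for k in levels))             # numerical stand-in for gamma_u
```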
\[pbounded301\] Let $u \in W^{1,2}_\perp(\Omega,{\mathds{R}})$ and $q > d$ with $\frac{1}{2}-\frac{1}{q} > \frac{1}{r}$. Further let $f_1,\ldots,f_d \in L^q(\Omega)$ and suppose that $\langle {{\cal A}}u,v\rangle_{W^{-1,2}_\emptyset(\Omega) \times W^{1,2}(\Omega)}
= \sum_{j=1}^d (f_j, \partial_j v)_2$ for all $v \in W^{1,2}(\Omega)$. Then $u \in L^\infty(\Omega)$. Moreover $$\|u\|_\infty
\leq 2^{(\frac{1}{2}-\frac{1}{q})/\delta} \, \frac{E}{\nu} \, \sqrt{\bigl(1 +
\gamma_u^2\bigr)|\Omega|^{\delta}} \, \left(\sum_{j=1}^d \bigl\|f_j\bigr\|_q^2\right)^{\frac12}
,$$ where $\delta = \frac{1}{2} - \frac{1}{q} - \frac{1}{r} > 0$ and $\nu$ is the ellipticity constant of $\mu$. Finally, $E > 0$ is such that $\|v\|_r \leq E \, \|v\|_{W^{1,2}(\Omega)}$ for all $v \in W^{1,2}(\Omega)$.
For all $k \in [0,\infty)$ define $\zeta_k = ({\mathop{\rm sgn}}u) \, (|u| - k)^+ \in W^{1,2}(\Omega)$ and $A_k = [|u| > k] $ as before. Let $k \in [0,\infty)$. Then $$\begin{aligned}
\nu \, \bigl\|\nabla \zeta_k\bigr\|_2^2
& \leq & \int_\Omega \mu \nabla \zeta_k \cdot \nabla \zeta_k = \int_\Omega \mu \nabla u \cdot \nabla \zeta_k = \sum_{j=1}^d \int_{A_k} f_j \, \partial_j \zeta_k \\
& \leq & \Bigl( \sum_{j=1}^d \int_{A_k} |f_j|^2 \Bigr)^{1/2}
\bigl\|\nabla \zeta_k\bigr\|_2 \\
& \leq & \frac{\nu}{2} \, \bigl\|\nabla \zeta_k\bigr\|_2^2
+ \frac{1}{2\nu} \, \sum_{j=1}^d \int_{A_k} |f_j|^2
.\end{aligned}$$ Hence $$\bigl\|\nabla \zeta_k\bigr\|_2^2
\leq \frac{1}{\nu^2} \sum_{j=1}^d \int_{A_k} |f_j|^2
\leq \frac{|A_k|^{1 - \frac{2}{q}}}{\nu^2} \, \sum_{j=1}^d \bigl\|f_j\bigr\|_q^2
.$$ By assumption $W^{1,2}(\Omega) \subset L^r(\Omega)$. Then $$\begin{aligned}
\left( \int_{A_k} \bigl( |u| - k \bigr)^{r} \right)^{\frac{2}{r}}
& = & \bigl\|\zeta_k\bigr\|_{L^{r}(\Omega)}^2 \\
& \leq & E^2 \, \bigl\|\zeta_k\bigr\|_{W^{1,2}(\Omega)}^2
= E^2 \, \Bigl(\bigl\|\zeta_k\bigr\|_2^2 + \bigl\|\nabla \zeta_k\bigr\|_2^2 \Bigr) \\
& \leq & E^2 \, \bigl(1 + \gamma_u^2\bigr) \,
\frac{|A_k|^{1 - \frac{2}{q}}}{\nu^2} \, \sum_{j=1}^d \bigl\|f_j\bigr\|_q^2
.\end{aligned}$$ Next let $h,k \in [0,\infty)$ with $h > k$. Then $A_h \subset A_k$ and $$(h-k)^2 \, |A_h|^{\frac{2}{r}}
\leq \left( \int_{A_h} \bigl| |u| - k \bigr|^{r} \right)^{\frac{2}{r}}
\leq \left( \int_{A_k} \bigl| |u| - k \bigr|^{r} \right)^{\frac{2}{r}}
\leq E^2 \, \bigl(1 + \gamma_u^2\bigr) \,
\frac{|A_k|^{1 - \frac{2}{q}}}{\nu^2} \, \sum_{j=1}^d \bigl\|f_j\bigr\|_{q}^2
.$$ Equivalently $$|A_h|
\leq \frac{1}{(h-k)^r} \, \Big( \frac{E}{\nu} \Big)^{r}
\bigl(1 + \gamma_u^2\bigr)^{\frac{r}{2}} \,
\biggl( \sum_{j=1}^d \bigl\|f_j\bigr\|_{q}^2 \biggr)^{\frac{r}{2}} \,
|A_k|^{(1-\frac{2}{q})\frac{r}{2}}
.$$ Due to $(1-\frac{2}{q})\frac{r}{2} = (\frac{1}{2} - \frac{1}{q}) r > 1$ by assumption, it now follows from the Stampacchia lemma ([@KinS] Chapter II, Appendix B, Lemma 2.1) that $u \in L^\infty(\Omega)$ and $$\|u\|_\infty
\leq 2^{(\frac{1}{2}-\frac{1}{q})/\delta} \, \frac{E}{\nu} \,
\sqrt{\bigl(1 +
\gamma_u^2\bigr)|\Omega|^{\delta}} \,
\left(\sum_{j=1}^d \bigl\|f_j\bigr\|_q^2\right)^{\frac12}
.$$ This completes the proof of the proposition.
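
The level-set iteration behind the Stampacchia lemma invoked above can also be traced numerically. The sketch below is ours, with illustrative numbers; $\phi$ plays the role of $k \mapsto |A_k|$ and $\beta = (1-\frac{2}{q})\frac{r}{2} > 1$ as in the proof.

```python
# A numerical sketch (ours, with illustrative numbers) of the level-set
# iteration behind the Stampacchia lemma used above: if phi is nonincreasing
# and phi(h) <= C * phi(k)**beta / (h - k)**r for all h > k >= 0 with beta > 1,
# then phi vanishes beyond
# k_star = C**(1/r) * phi(0)**((beta-1)/r) * 2**(beta/(beta-1)).

def k_star(C, phi0, beta, r):
    """Level beyond which the iteration forces phi to vanish."""
    return C ** (1.0 / r) * phi0 ** ((beta - 1.0) / r) * 2.0 ** (beta / (beta - 1.0))

def iterate(C, phi0, beta, r, n=60):
    """Follow the dyadic levels k_j = k_star*(1 - 2**-j); the recursion gives
    upper bounds on phi(k_j) that decay geometrically to zero."""
    ks = k_star(C, phi0, beta, r)
    phi = phi0
    for j in range(1, n + 1):
        step = ks * 2.0 ** (-j)            # k_j - k_{j-1}
        phi = C * phi ** beta / step ** r  # bound from the hypothesis
    return ks, phi

if __name__ == "__main__":
    ks, tail = iterate(C=1.0, phi0=1.0, beta=1.5, r=4.0)
    print(ks, tail)   # 8.0 and a vanishingly small tail (about 1e-144)
```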
Let $u \in W^{1,2}(\Omega)$ be such that ${{\cal A}}u \in (W^{1,p}(\Omega))^*$, where $p$ is the dual exponent of $q$. By Lemma \[lbounded201\] the inclusion $W^{1,p}(\Omega) \subset L^p(\Omega)$ is compact. Hence by Proposition \[pbounded204\] there exist $\kappa \in {\mathds{C}}$ and $f_1,\ldots,f_d \in L^q(\Omega)$ such that $$\langle {{\cal A}}u,v \rangle_{(W^{1,p}(\Omega))^* \times W^{1,p}(\Omega)}
= \kappa \int_\Omega \overline v + \sum_{j=1}^d \int_\Omega f_j \, \overline{\partial_j v}$$ for all $v \in W^{1,p}(\Omega)$. Choosing $v = {\mathds{1}}$ one deduces that $\kappa = 0$ and ${{\cal A}}u \in W^{-1,2}_\perp(\Omega)$. Without loss of generality we may assume that $u \in W^{1,2}_\perp(\Omega)$. Moreover, we may also assume that $u$ is real valued. Now apply Proposition \[pbounded301\] to obtain $u \in L^\infty(\Omega)$.
If we start with $T \in W^{-1,q}_\perp(\Omega)$, then there exists a unique $u \in W^{1,2}_\perp(\Omega)$ such that ${{\cal A}}u = T$ by Proposition \[pbounded205\]. Then ${{\cal A}}u \in W^{-1,q}_\perp(\Omega) \subset (W^{1,p}(\Omega))^*$, so $u \in L^\infty(\Omega)$ by the above.
For the estimate it suffices to show that the map $T \mapsto u$ has closed graph in the space $W^{-1,q}_\perp(\Omega) \times L^\infty(\Omega)$. Let $T,T_1,T_2,\ldots \in W^{-1,q}_\perp(\Omega)$ and $u \in L^\infty(\Omega)$. Suppose that $\lim T_n = T$ in $W^{-1,q}_\perp(\Omega)$ and $\lim ({{\cal A}}_\perp)^{-1} T_n = u$ in $L^\infty(\Omega)$. Then $\lim T_n = T$ in $W^{-1,2}_\perp(\Omega)$, so $\lim ({{\cal A}}_\perp)^{-1} T_n = ({{\cal A}}_\perp)^{-1} T$ in $W^{1,2}_\perp(\Omega)$ and hence also in $L^2(\Omega)$. But $\lim ({{\cal A}}_\perp)^{-1} T_n = u$ in $L^\infty(\Omega)$ and therefore also in $L^2(\Omega)$. Consequently $({{\cal A}}_\perp)^{-1} T = u$ as required.
Interpolation and maximal Sobolev regularity {#boundedS4}
============================================
In this section, we use the structure of $W^{1,p}_\perp(\Omega)$ as a complemented subspace of $W^{1,p}(\Omega)$ to establish interpolation results. Optimal Sobolev regularity for the pure Neumann operator ${{\cal A}}_\perp$ for $p$ close to $2$ also follows. This is particularly interesting for space dimension $d=2$. The first step is to show that $W^{1,p}_\perp(\Omega)$ and $W^{-1,p}_\perp(\Omega)$ form an interpolation scale with respect to $p$.
\[pbounded401\] Let $\Omega \subset {\mathds{R}}^d$ be open and bounded. Let $p_0,p_1 \in (1,\infty)$, $\theta \in (0,1)$ and set $\frac1p = \frac{1-\theta}{p_0} + \frac\theta{p_1}$. Then $$\bigl[W^{1,p_0}_\perp(\Omega),W^{1,p_1}_\perp(\Omega)\bigr]_\theta =
\bigl(W^{1,p_0}_\perp(\Omega),W^{1,p_1}_\perp(\Omega)\bigr)_{\theta,p} = W^{1,p}_\perp(\Omega)$$ and $$\bigl[W^{-1,p_0}_\perp(\Omega),W^{-1,p_1}_\perp(\Omega)\bigr]_\theta =
\bigl(W^{-1,p_0}_\perp(\Omega),W^{-1,p_1}_\perp(\Omega)\bigr)_{\theta,p} = W^{-1,p}_\perp(\Omega).$$
It follows from that $$\bigl[W^{1,p_0}(\Omega),W^{1,p_1}(\Omega)\bigr]_\theta = W^{1,p}(\Omega)
.$$ Arguing as in , but using the reiteration theorem for real interpolation [@BL], Theorem 3.5.3, one deduces similarly $$\bigl(W^{1,p_0}(\Omega),W^{1,p_1}(\Omega)\bigr)_{\theta,p} = W^{1,p}(\Omega).$$ Note that for all $r \in (1,\infty)$ the projection $P$ in Corollary \[cbounded203\]\[cbounded203-2\] maps $W^{1,r}(\Omega)$ onto $W^{1,r}_\perp(\Omega)$, so $W^{1,r}_\perp(\Omega)$ is a complemented subspace of $W^{1,r}(\Omega)$. We further observe that $W^{1,p_i}_\perp(\Omega) = W^{1,p_i}(\Omega) \cap W^{1,\min(p_0,p_1)}_\perp(\Omega)$ for $i=0,1$. Thus, interpolation theory for complemented subspaces ([@Tri] Theorem 1.17.1.1) shows that $$\bigl[W^{1,p_0}_\perp(\Omega),W^{1,p_1}_\perp(\Omega)\bigr]_\theta =
\bigl(W^{1,p_0}_\perp(\Omega),W^{1,p_1}_\perp(\Omega)\bigr)_{\theta,p} =
W^{1,p}(\Omega) \cap W^{1,\min(p_0,p_1)}_\perp(\Omega) = W^{1,p}_\perp(\Omega).$$ Concerning the dual spaces, it is easy to see that for all $q \in (1,\infty)$ the operator $T \mapsto T - \frac{1}{|\Omega|} \, \langle T, {\mathds{1}}\rangle {\mathds{1}}$ is a projection from $W^{-1,q}(\Omega)$ onto $W^{-1,q}_\perp(\Omega)$. Hence the assertion follows with the same argument and the duality properties of the real and complex interpolation functors, see [@Tri] Subsections 1.11.2 and 1.11.3.
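
The projections used in this proof have an elementary finite-dimensional analogue, recorded in the following sketch of ours (a discrete model only): removing the mean is an idempotent map onto the 'integral zero' subspace.

```python
# A finite-dimensional analogue (our own sketch) of the projections used in the
# proof: on R^N, with "integration" modelled by the dot product with the
# all-ones vector, P t = t - (t.1/N) 1 is a bounded projection onto the
# subspace { t : t.1 = 0 }, mirroring u -> u - |Omega|^{-1} int u and
# T -> T - |Omega|^{-1} <T,1> 1 above.
import numpy as np

N = 7
one = np.ones(N)
P = np.eye(N) - np.outer(one, one) / N

t = np.random.default_rng(1).normal(size=N)
assert np.allclose(P @ P, P)                          # idempotent
assert np.isclose(P @ t @ one, 0.0)                   # image has zero "mean"
assert np.allclose(P @ (t - t.mean()), t - t.mean())  # fixes that subspace
print("projection checks passed")
```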
The first result derived from Proposition \[pbounded401\] together with Theorem \[tbounded101\] is the following mapping property for ${{\cal A}}_\perp^{-1}$ on the $W^{-1,p}_\perp(\Omega)$ spaces for all $p > 2$. Note that we do not require that $p > d$.
\[cbounded402\] Let $\Omega \subset {\mathds{R}}^d$ be a bounded connected open set. Let $r \in (2,\infty)$ and suppose that $W^{1,2}(\Omega) \subset L^r(\Omega)$. Let further $q \in (d,\infty)$ and suppose that $\frac{1}{2}-\frac{1}{q} > \frac{1}{r}$. Let $p \in (2,q)$. Let $\mu \colon \Omega \to {\mathds{R}}^{d \times d}$ be a bounded measurable elliptic function and let ${{\cal A}}\colon W^{1,2}(\Omega) \to (W^{1,2}(\Omega))^*$ be the associated operator. Then ${{\cal A}}_\perp^{-1}$ maps $W^{-1,p}_\perp(\Omega)$ into $L^s(\Omega)$, where $\frac{1}{s} = \frac{1-\theta}{r}$ and $\theta \in (0,1)$ is such that $\frac{1}{p} = \frac{1-\theta}{2} + \frac{\theta}{q}$.
The operator ${{\cal A}}_\perp^{-1}$ maps $W^{-1,2}_\perp(\Omega)$ continuously into $W^{1,2}_\perp(\Omega) \subset L^r(\Omega)$ by Proposition \[pbounded205\]. Moreover, ${{\cal A}}_\perp^{-1}$ maps $W^{-1,q}_\perp(\Omega)$ continuously into $L^\infty(\Omega)$ by Theorem \[tbounded101\]. Now use complex interpolation and Proposition \[pbounded401\].
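
For concreteness, the exponents in Corollary \[cbounded402\] can be computed explicitly; the sketch below uses example values of our own choosing ($d=3$, $r=6$, $q=4$, $p=3$), which satisfy the hypotheses.

```python
# A small arithmetic illustration (example values of our own) of the exponents
# in the corollary: theta solves 1/p = (1-theta)/2 + theta/q and then
# 1/s = (1-theta)/r.  Here d = 3, r = 6, q = 4 (so q > d and 1/2 - 1/q > 1/r)
# and p = 3 lies in (2, q).
d, r, q, p = 3, 6.0, 4.0, 3.0
theta = (1.0/p - 1.0/2.0) / (1.0/q - 1.0/2.0)
s = r / (1.0 - theta)
print(theta, s)   # theta = 2/3, so A_perp^{-1} maps W^{-1,3}_perp into L^18
```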
Due to Proposition \[pbounded401\] and the work from the previous sections, a maximal Sobolev regularity result for $p$ close to $2$ follows by an application of the [Š]{}ne[ĭ]{}berg stability theorem.
\[tbounded402\] Let $\Omega \subset {\mathds{R}}^d$ be a bounded connected open set. Let $r \in (2,\infty)$ and suppose that $W^{1,2}(\Omega) \subset L^r(\Omega)$. Let $\mu \colon \Omega \to {\mathds{R}}^{d \times d}$ be a bounded measurable elliptic function and let ${{\cal A}}\colon W^{1,2}(\Omega) \to (W^{1,2}(\Omega))^*$ be the associated operator. Then there exists a $\delta > 0$ such that ${{\cal A}}_\perp$ is a topological isomorphism between $W^{1,p}_\perp(\Omega)$ and $W^{-1,p}_\perp(\Omega)$ for all $p \in (2-\delta,2+\delta)$.
Under the assumptions, ${{\cal A}}_\perp$ is a topological isomorphism between $W^{1,2}_\perp(\Omega)$ and $W^{-1,2}_\perp(\Omega)$ by Proposition \[pbounded205\]. Proposition \[pbounded401\] shows that these spaces are simultaneous interpolation spaces in the $W^{1,p}_\perp(\Omega)$ and $W^{-1,p}_\perp(\Omega)$ scale. The [Š]{}ne[ĭ]{}berg stability theorem [@Sneiberg] implies that there is a $\delta > 0$ such that ${{\cal A}}_\perp$ remains an isomorphism between $W^{1,p}_\perp(\Omega)$ and $W^{-1,p}_\perp(\Omega)$ for all $p \in (2-\delta,2+\delta)$.
There exist quantitative results on the size of $\delta$ derived from the [Š]{}ne[ĭ]{}berg result in Theorem \[tbounded402\]. We refer to [@ABES], Appendix A. The most crucial information is that one can choose $\delta$ to depend only on the ellipticity constant and the upper bound $\|\mu\|_\infty$ of the coefficient function $\mu$ of ${{\cal A}}$. Moreover, for all $p \in (2-\delta,2+\delta)$, the operator norm $\|{{\cal A}}_\perp^{-1}\|_{W^{-1,p}_\perp(\Omega)\to W^{1,p}_\perp(\Omega)}$ can be estimated by a multiple of $\|{{\cal A}}_\perp^{-1}\|_{W^{-1,2}_\perp(\Omega)\to W^{1,2}_\perp(\Omega)}$. By Lax-Milgram, the latter can be estimated by $1/\nu$, where $\nu$ is the ellipticity constant of $\mu$.
Theorem \[tbounded402\] yields further corollaries for $d=2$.
\[cor:cbounded403\] Adopt the notation and assumptions of Theorem \[tbounded402\]. Let $d=2$. Let $q \in (2,2+\delta)$ and suppose that $\frac12 - \frac1q > \frac1r$. Then $W^{1,s}(\Omega) \subset L^\infty(\Omega)$ for all $s \geq q$.
It follows from Theorem \[tbounded101\] that ${{\cal A}}_\perp^{-1} W^{-1,q}_\perp(\Omega) \subset L^\infty(\Omega)$. But ${{\cal A}}_\perp^{-1} W^{-1,q}_\perp(\Omega) = W^{1,q}_\perp(\Omega)$ by Theorem \[tbounded402\]. Since $W^{1,q}(\Omega) = W^{1,q}_\perp(\Omega) + {\mathds{C}}\, {\mathds{1}}$ the corollary follows.
The parameter $\delta$ in the previous corollary depends on the coefficient function $\mu$ via the [Š]{}ne[ĭ]{}berg theorem. If $\Omega$ is smooth enough so that the full Sobolev embedding for $W^{1,2}(\Omega)$ is available, then no coefficient function is needed (at least in the formulation of the corollary).
\[cbounded405\] Let $\Omega \subset {\mathds{R}}^2$ be a bounded connected open set. Suppose that $W^{1,2}(\Omega) \subset L^r(\Omega)$ for all $r \in (2,\infty)$. Then $W^{1,s}(\Omega) \subset L^\infty(\Omega)$ for all $s \in (2,\infty)$.
Choose $\mu = I$. Let $\delta > 0$ be as in Theorem \[tbounded402\]. Let $s \in (2,\infty)$. Then there exists a $q \in (2,2+\delta) \cap (2,s]$. Now apply Corollary \[cor:cbounded403\].
The third corollary concerns Hölder regularity of solutions $u$ of ${{\cal A}}_\perp u = T$ with $T \in W^{-1,q}_\perp(\Omega)$ for $q>2$ and a uniform estimate. We do not pass through Theorem \[tbounded101\] for this result. The price to pay is a Sobolev embedding assumption for the Hölder space similar to the one in Theorem \[tbounded101\].
\[cbounded406\] Let $\Omega \subset {\mathds{R}}^2$ be a bounded connected open set. Suppose that for all $q \in (2,\infty)$ there exists an $\alpha \in (0,1)$ such that $W^{1,q}(\Omega) \subset C^\alpha(\overline\Omega)$. Let $\mu \colon \Omega \to {\mathds{R}}^{d \times d}$ be a bounded measurable elliptic function and let ${{\cal A}}\colon W^{1,2}(\Omega) \to (W^{1,2}(\Omega))^*$ be the associated operator. Then one has the following.
\[cbounded406-1\] For all $q \in (2,\infty)$ there exists an $\alpha \in (0,1)$ such that ${{\cal A}}_\perp^{-1} W^{-1,q}_\perp(\Omega) \subset C^\alpha(\overline\Omega)$.
\[cbounded406-2\] For all $q \in (2,\infty)$ and $R > 0$ the set $$\bigl\{ {{\cal A}}_\perp^{-1}(T) : T \in W^{-1,q}_\perp(\Omega) \mbox{ and }
\|T\|_{W^{-1,q}_{\emptyset}(\Omega)} \leq R \bigr\}$$ is compact in $C(\overline \Omega)$.
‘\[cbounded406-1\]’. Since $C^\alpha(\overline\Omega) \subset L^\infty(\Omega) \subset L^r(\Omega)$ for all $\alpha \in (0,1)$ and $r \in (1,\infty)$, it follows from Lemma \[lbounded201\] that there exists an $r \in (2,\infty)$ such that $W^{1,2}(\Omega) \subset L^r(\Omega)$. Let $\delta > 0$ be as in Theorem \[tbounded402\]. Let $s \in (2,2+\delta) \cap (2,q]$. By assumption there exists an $\alpha \in (0,1)$ such that $W^{1,s}(\Omega) \subset C^\alpha(\overline\Omega)$. Then ${{\cal A}}_\perp^{-1} W^{-1,q}_\perp(\Omega)
\subset {{\cal A}}_\perp^{-1} W^{-1,s}_\perp(\Omega)
= W^{1,s}_\perp(\Omega)
\subset C^\alpha(\overline\Omega)$.
‘\[cbounded406-2\]’. This follows from statement \[cbounded406-1\] and the Arzelà–Ascoli theorem.
The situation for the Hölder-Sobolev embedding assumption in Corollary \[cbounded406\] is similar to the assumption on the Sobolev embedding in Theorem \[tbounded101\]. It is satisfied for example when for all $q \in (2,\infty)$ the domain $\Omega$ is a connected $W^{1,q}$-extension domain and then one can choose $\alpha = 1-2/q$, but there are also examples of (non-extension) domains with sufficiently regular cusps where the assumption is satisfied in the weaker form, see [@AF] Theorem 4.53. Note however that the optimal embedding for $W^{1,q}(\Omega)$ into the Hölder space of order $1-2/q$ implies the $W^{1,r}$-extension property for all $r > q$, see [@Koskela] Theorem A.
Acknowledgements {#acknowledgements .unnumbered}
----------------
The first and third named authors are grateful for a most stimulating stay at the RICAM. Part of this work is supported by the Marsden Fund Council from Government funding, administered by the Royal Society of New Zealand.
[ABES19]{}
, [*Sobolev spaces*]{}. Second edition, Pure and Applied Mathematics 140. Elsevier/Academic Press, Amsterdam, 2003.
, Nonlocal self-improving properties: a functional analytic approach. (2019), 151–183.
, [*Interpolation spaces. An introduction*]{}. Grundlehren der mathematischen Wissenschaften 223. Springer-Verlag, Berlin etc., 1976.
, Remarks on Sobolev imbedding inequalities. In [Laine, L., Rickman, S. [and]{} Sorvali, T.]{}, eds., [ *Complex analysis, Joensuu 1987*]{}, Lecture Notes in Math. 1351, 52–68. Springer, Berlin, 1988.
, [*The finite element method for elliptic problems*]{}. Studies in Mathematics and its Applications 4. North-Holland, Amsterdam, 1978.
, A priori estimates for solutions to elliptic equations on non-smooth domains. (2002), 793–813.
, [*Nichtlineare Operatorgleichungen und Operatordifferentialgleichungen*]{}. Mathematische Lehrb[ü]{}cher und Monographien, II. Abteilung Mathematische Monographien 38. Akademie-Verlag, Berlin, 1974.
, [*Elliptic partial differential equations of second order*]{}. Second edition, Grundlehren der mathematischen Wissenschaften 224. Springer-Verlag, Berlin etc., 1983.
, [*An introduction to variational inequalities and their applications*]{}. Pure and Applied Mathematics 88. Academic Press, New York, 1980.
, Extensions and imbeddings. (1998), 369–383.
, Lusin properties and interpolation of Sobolev spaces. (1997), 163–177.
, Every superposition operator mapping one Sobolev space into another is continuous. (1979), 217–229.
, [*Sobolev spaces with applications to elliptic partial differential equations*]{}. Second edition, Grundlehren der mathematischen Wissenschaften 342. Springer-Verlag, Berlin etc., 2011.
, Imbedding theorems for Sobolev spaces on domains with peak and on H[ö]{}lder domains. (2007), 583–605.
, On the solvability of the Neumann problem in domains with peak. (2009), 757–790.
, Spectral properties of linear operators in interpolation families of Banach spaces. , No. 2 (1974), 214–229.
, Le problème de Dirichlet pour les équations elliptiques du second ordre à coefficients discontinus. (1965), 189–258.
, [*Interpolation theory, function spaces, differential operators*]{}. North-Holland, Amsterdam, 1978.
, [*Optimal control of partial differential equations. Theory, methods and applications*]{}. Graduate Studies in Mathematics 112. American Mathematical Society, Providence, RI, 2010.
, [*Weakly differentiable functions*]{}. Graduate Texts in Mathematics 120. Springer-Verlag, New York, 1989.
[A.F.M. ter Elst, Department of Mathematics, University of Auckland, Private bag 92019, Auckland 1142, New Zealand]{}\
[*E-mail address*]{}: [**[email protected]**]{}
[H. Meinlschmidt, Johann Radon Institute for Computational and Applied Mathematics (RICAM), Altenberger Straße 69, 4040 Linz, Austria]{}\
[*E-mail address*]{}: [**[email protected]**]{}
[J. Rehberg, Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstr. 39, 10117 Berlin, Germany]{}\
[*E-mail address*]{}: [**[email protected]**]{}
|
---
abstract: 'We develop a new method for describing the dynamics of $3$-dimensional thermal plasmas. Using a piecewise constant $1$-particle distribution, we reduce the Vlasov equation to a generalized Lorentz force equation for a family of vector fields encoding the discontinuity. By applying this equation to longitudinal electrostatic plasma oscillations, and coupling it to Maxwell’s equations, we obtain a limit on the magnitude of the electric field in relativistic thermal plasma oscillations. We derive an upper bound on the limit and discuss its applicability in a background magnetic field.'
author:
- 'D.A. Burton, A. Noble, H. Wen'
title: Discontinuous distributions in thermal plasmas
---
Introduction
============
High-power lasers and plasmas may be used to accelerate electrons by electric fields that are orders of magnitude greater than those achievable using conventional methods [@tajima:1979]. An intense laser pulse is used to drive a wave in a plasma and, for sufficiently large fields, non-linearities lead to collapse of the wave structure (‘wave-breaking’) due to sufficiently large numbers of electrons becoming trapped in the wave.
Hydrodynamic investigations of wave-breaking were first undertaken for cold plasmas [@akhiezer:1956; @dawson:1959] and thermal effects were later included in non-relativistic [@coffey:1971] and relativistic contexts [@katsouleas:1988; @rosenzweig:1988; @schroeder:2005] (see [@trines:2006] for a discussion of the numerous approaches). However, it is clear that the value of the electric field at which the wave breaks (the electric field’s ‘wave-breaking limit’) is highly sensitive to the details of the hydrodynamic model.
Plasmas dominated by collisions are described by a pressure tensor that does not deviate far from isotropy, whereas an intense and ultrashort laser pulse propagating through an underdense plasma will drive the plasma anisotropically over typical acceleration timescales. Thus, it is important to accommodate 3-dimensionality and allow for anisotropy when investigating wave-breaking limits.
Our aim is to uncover the relationship between wave-breaking limits and the shape of the $1$-particle distribution $f$. In general, the detailed structure of $f$ cannot be reconstructed from a few low-order moments so we adopt a different approach based on a particular class of piecewise constant $1$-particle distributions. Our approach may be considered as a multi-dimensional generalization of the 1-dimensional relativistic ‘waterbag’ model employed in [@katsouleas:1988] (for a discussion of the relationship between our approach and [@katsouleas:1988] see [@us:2008]).
We employ the Einstein summation convention throughout and units are used in which the speed of light $c=1$ and the permittivity of the vacuum $\varepsilon_0=1$. Lowercase Latin indices $a,b,c$ run over $0,1,2,3$.
Vlasov-Maxwell system
=====================
Our attention is focussed on plasmas evolving over timescales during which the ‘discrete’ nature (collisions) of the plasma electrons can be neglected and the plasma ions can be prescribed as a background. Such configurations are well described by the covariant Vlasov-Maxwell system [@degroot:1980; @ehlers:1971] which, for the purposes of this paper, is most usefully expressed in the language of exterior calculus (see, for example, [@burton:2003; @benn:1987]). We will now briefly summarize the particular formulation of the Vlasov-Maxwell system employed here.
Let $({{\cal {M}}},g)$ be a spacetime with signature $(-,+,+,+)$ for the metric tensor $g$. Each point $p\in{{\cal {M}}}$ is associated with a space ${{\cal {E}}}_p\subset T_p{{\cal {M}}}$ of future-directed unit normalized vectors on ${{\cal {M}}}$, $${{\cal {E}}}_p = \{ (x(p),{\dot{x}}) \in T_p{{\cal {M}}} : g_{ab}(x(p)){\dot{x}}^a{\dot{x}}^b =-1 \text{ and } {\dot{x}}^0>0 \},$$ where $g_{ab}$ are the components of the metric $g$ in a coordinate system $(x^a)$ whose patch contains $p$ and $(x^a,{\dot{x}}^b)$ are induced coordinates on $T{{\cal {M}}}$. The total space ${{\cal {E}}}$ of the bundle $({{\cal {E}}},\Pi,{{\cal {M}}})$ is the union of ${{\cal {E}}}_p$ over $p\in{{\cal {M}}}$ and $\Pi$ is the restriction to ${{\cal {E}}}$ of the canonical projection on $T{{\cal {M}}}$.
Naturally induced tensors on $T{{\cal {M}}}$ include the dilation vector field $X$ $$X = {\dot{x}}^a \partial_a^{\bm{V}},$$ the vertical lift $\star 1^{\bm{V}}$ of the volume $4$-form $\star
1$ from ${{\cal {M}}}$ to $T{{\cal {M}}}$ and the horizontal $4$-form $\# 1$ $$\# 1 = \sqrt{|\text{det}\mathfrak{g}|}^{\bm{V}} dx^{0\bm{H}}\wedge
dx^{1\bm{H}} \wedge dx^{2\bm{H}} \wedge dx^{3\bm{H}}$$ where $dx^{a\bm{H}}$ is the horizontal lift of $dx^a$ from ${{\cal {M}}}$ to $T{{\cal {M}}}$ (see Appendix \[appendix:vertical\_and\_horizontal\] for further details) and $\mathfrak{g}=(g_{ab})$ is the matrix of components of $g$.
The Vlasov-Maxwell system for $f$ (a scalar field on $T{{\cal {M}}}$ whose restriction to ${{\cal {E}}}$ is the plasma electron $1$-particle distribution) and the electromagnetic field $F$ may be written $$\begin{aligned}
\label{LVlas}
& Lf \simeq 0,\\
\label{Max}
& dF=0, \qquad d\star F= q\star (\widetilde{N}_\text{ion} -
\widetilde{N})\end{aligned}$$ where $\simeq$ indicates equality on restriction by pullback from $T{{\cal {M}}}$ to ${{\cal {E}}}$ and $$\begin{aligned}
\label{Liou}
&L = {\dot{x}}^a (\partial_a^{\bm{H}} + {\bf f}^{\bm{V}}_a) \in
TT{{\cal {M}}},\\
\label{force}
&{\bf f}_a= -{\frac{q}{m}}F^b{}_a \partial_b \in T{{\cal {M}}}\end{aligned}$$ with $m$ the mass and $q$ the charge of the electron ($q<0$) and $F^a{}_b= g^{ac} F_{cb}$ the components of the electromagnetic 2-form $F=\frac{1}{2} F_{ab} dx^a \wedge dx^b$ on ${{\cal {M}}}$. The metric dual of a vector $V$ is defined by $\widetilde{V}(Y)=g(Y,V)$ for all vectors $Y$ on ${{\cal {M}}}$ and $\star$ is the Hodge map induced from the volume $4$-form $\star
1$ $$\star 1 = \sqrt{|\text{det}\mathfrak{g}|}\, dx^0\wedge
dx^1 \wedge dx^2 \wedge dx^3$$ on ${{\cal {M}}}$. The components of the electron number $4$-current $N=N^a(x(p))\partial_a$ at $p\in{{\cal {M}}}$ are given as an integral over the fibre ${{\cal {E}}}_p=\Pi^{-1}(p)$ $$\begin{aligned}
\notag
N^a(x(p)) &= \int_{\Pi^{-1}(p)} {\dot{x}}^a f\iota_X\# 1\\
\label{electron_current}
&= -\int_{\Pi^{-1}(p)} {\dot{x}}^a f(x(p),{\dot{x}})
\frac{\sqrt{|\text{det}{\mathfrak{g}(x(p))}|}}{g_{0c}(x(p)){\dot{x}}^c}
d{\dot{x}}^1\wedge d{\dot{x}}^2 \wedge d{\dot{x}}^3, \end{aligned}$$ and the ion number $4$-current $N_{\text{ion}}$ is prescribed as data.
The measure on ${{\cal {E}}}_p$ in (\[electron\_current\]) is induced from the $3$-form $\iota_X \# 1$, $$\begin{aligned}
\notag
\iota_X \# 1 &= \sqrt{|\text{det}\mathfrak{g}|}^{\bm{V}} \frac{1}{3!}
{\dot{x}}^a \epsilon_{abcd} dx^{b \bm{H}} \wedge dx^{c \bm{H}} \wedge dx^{d
\bm{H}}\\
\label{i_X_hash_1_on_TM}
&\simeq
-\frac{\sqrt{|\text{det}{\mathfrak{g}}|}^{\bm{V}}}{g_{0c}^{\bm{V}}{\dot{x}}^c}
dx^{1\bm{H}}\wedge dx^{2\bm{H}} \wedge dx^{3\bm{H}} \end{aligned}$$ where $\epsilon_{abcd}$ is the alternating symbol with $\epsilon_{0123} = 1$.
The Vlasov-Maxwell equations constitute a non-linear integro-differential system. Direct calculation of its solutions for general plasma configurations is difficult and, to proceed analytically, it is common to approximate the above as a finite number of moments of $f$ in ${\dot{x}}^a$ satisfying a non-linear field system on ${{\cal {M}}}$ (a so-called ‘fluid’ model). However, there are difficult issues associated with closing the resulting field system (see, for example, [@amendt:1986]) so we opt for a different approach. Our strategy is to reduce the system by employing a discontinuous $f$, and to proceed we need to cast (\[LVlas\]) as an integral.
One may rewrite (\[LVlas\]) as $$\begin{aligned}
\label{dVlas}
& d(f\omega) \simeq 0,\\
\label{omega}
&\omega = \iota_L(\star 1^{\bm{V}} \wedge \iota_X\# 1) \in \Lambda_6
T{{\cal {M}}}.\end{aligned}$$ Integrating (\[dVlas\]) over a 7-chain ${{\cal {A}}} \subset {{\cal {E}}}$ and applying Stokes’s theorem yields $$\int_{\partial {{\cal {A}}}} f\omega=0, \label{intVlas}$$ with $\partial {{\cal {A}}}$ the boundary of ${{\cal {A}}}$. For differentiable distributions, this equation is equivalent to (\[dVlas\]); however, since it makes no reference to the differentiability of $f$, it may be regarded as a generalisation of (\[dVlas\]) applicable to discontinuous $f$.
Evolution of discontinuities
============================
Equation (\[intVlas\]) may be used to develop an equation of motion for a discontinuity, which we choose as a local hypersurface ${{\cal {H}}}$. Suppose that ${{\cal {A}}}$ in (\[intVlas\]) is a $7$-dimensional ‘pill-box’ straddling ${{\cal {H}}}$. We may write $\partial{{\cal {A}}} =
\sigma_+ + \sigma_- + \sigma_0$ where $\sigma_+$ and $\sigma_-$ are the ‘top’ and ‘bottom’ of the pill-box and $\sigma_0$ is the ‘sides’ of the pill-box. Thus, in the limit as the volume of ${{\cal {A}}}$ tends to zero with $\sigma_+$ tending to $\sigma$ and $\sigma_-$ tending to $-\sigma$, we recover the condition $$[f] \sigma^\ast \omega = 0,$$ where the image of $\sigma$ is in ${{\cal {H}}}$ and $[f] = \sigma^*_+f +
\sigma^*_-f$. Thus it follows that a finite discontinuity in $f$ can occur only across the image of a chain $\Sigma$ satisfying $$\Sigma^\ast \omega=0. \label{Sigma_discon}$$
Suppose that $\Sigma$ may be written locally $$\begin{aligned}
\nonumber
\Sigma : \,\,{{\cal {V}}} \times {{\cal {D}}} &\rightarrow {{\cal {E}}}\subset T{{\cal {M}}}\\
(x^a,\xi^1,\xi^2) &\mapsto (x^a,{\dot{x}}^b=\dot{\Sigma}^b(x,\xi))\end{aligned}$$ for ${{\cal {V}}}\subset{{\cal {M}}}$, where $\dot{\Sigma}^b$ denotes the ${\dot{x}}^b$ component of $\Sigma$, and $(\xi^1,\xi^2) \in {{\cal {D}}}\subset\mathbb{R}^2$. It is then possible to translate (\[Sigma\_discon\]) into a field equation for a family of vector fields $V_\xi$ on ${{\cal {V}}}$ given as $$V_\xi (p) = V^a_\xi (x(p)) \partial_a = \dot{\Sigma}^a(x(p),\xi) \partial_a$$ where, since $g_{ab}(x(p)){\dot{x}}^a{\dot{x}}^b=-1$ at $p\in{{\cal {E}}}$, it follows $$\label{norm}
g(V_\xi, V_\xi)=-1.$$
Using (\[omega\], \[Sigma\_discon\]) it follows $$\Sigma^\ast(\underbrace{\iota_L \star 1^{\bm{V}} \wedge \iota_X \#
1}_{(a)} + \underbrace{\star 1^{\bm{V}} \wedge \iota_L \iota_X \#
1}_{(b)})=0. \label{split}$$
Consider first the term $(a)$ in equation (\[split\]): $$\label{term_a}
\Sigma^\ast (\iota_L \star 1^{\bm{V}} \wedge \iota_X \# 1)= \Sigma^\ast
({\dot{x}}^{a} \iota_{\partial^{\bm{H}}_a} \star 1^{\bm{V}} \wedge \iota_X\# 1)$$ where (\[Liou\]) and $\iota_{{\bf f}^{\bm{V}}_a} \star 1^{\bm{V}}=0$ have been used (see (\[V\_on\_V\]) in Appendix \[appendix:vertical\_and\_horizontal\]). Thus $$\Sigma^\ast ({\dot{x}}^{a} \iota_{\partial^{\bm{H}}_a} \star
1^{\bm{V}} \wedge \iota_X\# 1)
= \star \widetilde{V}_\xi \wedge \Sigma^\ast \iota_X \# 1,$$ since $\iota_{\partial^{\bm{H}}_a} \star 1^{\bm{V}} = (g_{ab} \star
dx^b)^{\bm{V}}$ (see (\[V\_on\_H\_and\_H\_on\_V\]) in Appendix \[appendix:vertical\_and\_horizontal\]). Furthermore, using (\[H\_lift\_of\_dx\]) in Appendix \[appendix:vertical\_and\_horizontal\], it follows $$\label{pullback_dx_H}
\Sigma^\ast (dx^{a \bm{H}}) =
DV^a_\xi + \underline{d} \dot{\Sigma}^a$$ where $D$ is the exterior covariant derivative on ${{\cal {M}}}$ and $\underline{d}$ is the exterior derivative on ${{\cal {D}}}$. Using (\[i\_X\_hash\_1\_on\_TM\], \[pullback\_dx\_H\]) it follows $$\label{Sigma_pullback_i_X_hash 1}
\Sigma^\ast \iota_X \# 1 = \sqrt{|\text{det}\mathfrak{g}|}
\frac{1}{3!} \epsilon_{abcd} V^a_\xi (DV^b_\xi +\underline{d}
\dot{\Sigma}^b) \wedge (DV^c_\xi +\underline{d} \dot{\Sigma}^c)
\wedge (DV^d_\xi +\underline{d} \dot{\Sigma}^d)$$ and (\[term\_a\]) is $$\label{simplified_term_a}
\Sigma^\ast (\iota_L \star 1^{\bm{V}} \wedge \iota_X \# 1)= \star
\widetilde{V}_\xi \wedge \frac{1}{2!} \sqrt{|\text{det}\mathfrak{g}|}
\epsilon_{abcd} V^a_\xi DV^b_\xi \wedge \underline{d} \dot{\Sigma}^c
\wedge \underline{d} \dot{\Sigma}^d$$ since $\star \widetilde{V}_\xi \wedge DV^a_\xi \wedge DV^b_\xi=0$ ($\text{dim}({{\cal {M}}})=4$) and $\underline{d} \dot{\Sigma}^a \wedge
\underline{d} \dot{\Sigma}^b \wedge \underline{d} \dot{\Sigma}^c=0$ (dim(${{\cal {D}}}$)=2).
Since $\star \widetilde{V}_\xi \wedge DV^b_\xi= -(\nabla_{V_\xi}
V_\xi)^b \star 1$ and $\sqrt{|\text{det}\mathfrak{g}|}\epsilon_{abcd}=
\iota_{\partial_d}\iota_{\partial_c}\iota_{\partial_b}\iota_{\partial_a}\star
1$, it follows (\[simplified\_term\_a\]) may be written $$\label{term_a_on_M}
\Sigma^\ast (\iota_L \star 1^{\bm{V}} \wedge \iota_X \# 1)=
\widetilde{V}_\xi \wedge \nabla_{V_\xi} \widetilde{V}_\xi \wedge
\Omega_\xi \wedge d\xi^1 \wedge d\xi^2,$$ where the family of $2$-forms $\Omega_\xi$ on ${{\cal {V}}}$ is $$\label{surface_form}
\Omega_\xi = \frac{\partial
\dot{\Sigma}^a}{\partial \xi^1} \frac{\partial
\dot{\Sigma}^b}{\partial \xi^2} g_{ac}\,g_{bd}\, dx^c \wedge dx^d.$$
The second term $(b)$ in (\[split\]) can be rewritten using a similar procedure : $$\Sigma^\ast (\star 1^{\bm{V}} \wedge \iota_L\iota_X \# 1) = \star 1
\wedge \Sigma^\ast (\iota_L \iota_X \# 1)$$ and from (\[Liou\], \[force\], \[i\_X\_hash\_1\_on\_TM\]) it follows $$\iota_L \iota_X \# 1 = -\frac{1}{2!}{\frac{q}{m}}\sqrt{|\det\mathfrak{g}|}^{\bm{V}} F^b{}^{\bm{V}}_e {\dot{x}}^a {\dot{x}}^e \epsilon_{abcd} dx^{c \bm{H}} \wedge dx^{d \bm{H}}.$$ where (\[H\_on\_H\]) and (\[V\_on\_H\_and\_H\_on\_V\]) have been used. Then $$\Sigma^\ast (\iota_L \iota_X \# 1)=
-\frac{1}{2!}{\frac{q}{m}}\sqrt{|\text{det}\mathfrak{g}|} F^b{}_e V^a_\xi
V^e_\xi \epsilon_{abcd} (DV^c_\xi +\underline{d} \dot{\Sigma}^c)
\wedge (DV^d_\xi +\underline{d} \dot{\Sigma}^d)$$ and $$\label{term_b_on_M}
\Sigma^\ast ( \star 1^{\bm{V}} \wedge \iota_L \iota_X \# 1)= -{\frac{q}{m}}\widetilde{V}_\xi \wedge \iota_{V_\xi} F \wedge \Omega_\xi \wedge d\xi^1
\wedge d\xi^2.$$
Combining (\[term\_a\_on\_M\], \[term\_b\_on\_M\]) and (\[split\]) yields $$\label{discont_on_M}
\Sigma^\ast \omega = \widetilde{V}_\xi \wedge ( \nabla_{V_\xi} \widetilde{V}_\xi - {\frac{q}{m}}\iota_{V_\xi} F) \wedge \Omega_\xi \wedge d\xi^1 \wedge d\xi^2 =0.$$ Acting on (\[discont\_on\_M\]) successively with $\iota_{V_\xi}$, $\iota_{\partial/ \partial \xi^1}$ and $\iota_{\partial/ \partial
\xi^2}$, and noting that $\iota_{V_\xi} \Omega_\xi=0$ and $g(V_\xi,V_\xi)=-1$, yields $$(\nabla_{V_\xi} \widetilde{V}_\xi - {\frac{q}{m}}\iota_{V_\xi} F) \wedge \Omega_\xi=0. \label{discon}$$ Thus, solutions to (\[discon\]) may be obtained by demanding that $V_\xi$ is driven by the Lorentz force $$\label{lorentz_force_law}
\nabla_{V_\xi} \widetilde{V}_\xi = {\frac{q}{m}}\iota_{V_\xi} F.$$ However, although (\[lorentz\_force\_law\]) is simpler than (\[discon\]), there are simple solutions to (\[discon\]) that do not satisfy (\[lorentz\_force\_law\]); we will return to this point shortly.
Non-linear electrostatic oscillations
=====================================
A laser pulse travelling through a plasma can excite plasma oscillations, which induce very high longitudinal electric fields. Due to nonlinear effects, there is a maximum amplitude of electric field (the ‘wave-breaking limit’) that can be sustained in the plasma and, as mentioned in the introduction, our aim is to investigate the relationship between the shape of the distribution (i.e. $\Sigma$) and the wave-breaking limit.
For simplicity, we choose to describe the plasma using a distribution $f$ where $f=\alpha$ is a positive constant inside a $7$-dimensional region $\mathcal{U}\subset\mathcal{E}$ and $f=0$ outside. In particular, we consider $\mathcal{U}$ to be the union over each point $p\in{{\cal {M}}}$ of a domain $\mathcal{W}_p$ whose boundary $\partial{{\cal {W}}}_p$ in $\mathcal{E}_p$ is topologically equivalent to the $2$-sphere. Such distributions are sometimes called ‘waterbags’ in the literature and can be completely characterized by $V_\xi$ and the constant $\alpha$. Our approach may be considered as a multi-dimensional generalization of the purely 1-dimensional relativistic waterbag model used in [@katsouleas:1988] to examine wave-breaking.
We work in Minkowski spacetime $({{\cal {M}}},g)$ and assume that the ions constitute a homogeneous immobile background. We employ an inertial coordinate system $(x^a)$ adapted to the ions: $$N_\text{ion}= n_\text{ion} \partial_0,$$ where $$g=-dx^0 \otimes dx^0 + dx^1 \otimes dx^1 + dx^2 \otimes dx^2 + dx^3 \otimes dx^3$$ and the ion proper number density $n_\text{ion}$ is constant.
To proceed further we seek a form for $\Sigma$ axisymmetric about $\dot{x}^3$ whose pointwise dependence in ${{\cal {M}}}$ is on the wave’s phase $\zeta = x^3 - v x^0$ only, where $v$ is constant and $0<v<1$. We suppose that all electrons described by $f$ are travelling slower than the wave, and the wave ‘breaks’ if the longitudinal velocity of any plasma electron equals $v$ (i.e. an electron ‘catches up’ with the wave).
Introduce $$\label{coframe}
\bm{e}^1 = v dx^3 - dx^0,\qquad \bm{e}^2 = dx^3 - v dx^0,$$ and decompose $\widetilde{V}_\xi$ as $$\widetilde{V}_\xi = [\mu(\zeta) + A(\xi^1)]\, \bm{e}^1 + \psi(\xi^1,\zeta)\, \bm{e}^2
\,\,+ R\sin(\xi^1)\cos(\xi^2)dx^1 + R\sin(\xi^1)\sin(\xi^2)dx^2 \label{V_ansatz}$$ for $0 < \xi^1 < \pi$, $0 \le \xi^2 < 2\pi$ where $R>0$ is constant.
Here, $(\gamma \bm{e}^1, \gamma \bm{e}^2, dx^1, dx^2)$, with $\gamma = 1/\sqrt{1-v^2}$, is an orthonormal coframe on ${{\cal {M}}}$ adapted to $\zeta$. Since $V_\xi$ is future-directed and timelike, and $\bm{e}^1$ is timelike, it follows $\bm{e}^1(V_\xi) < 0$ and $\mu+A(\xi^1) > 0$.
The component $\psi$ is determined using (\[norm\]), $$\label{psi}
\psi = -\sqrt{[\mu + A(\xi^1)]^2 - \gamma^2[1 + R^2 \sin^2(\xi^1)]},$$ where the negative square root is chosen because no electron is moving faster along $x^3$ than the wave.
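
As a consistency check of (\[psi\]), one may verify symbolically that the ansatz (\[V\_ansatz\]) satisfies the normalization (\[norm\]); the abbreviations in the sketch below are ours: `mu_A` stands for $\mu + A(\xi^1)$ and `s`, `c2`, `s2` for $\sin(\xi^1)$, $\cos(\xi^2)$, $\sin(\xi^2)$.

```python
# A symbolic consistency check (our own sketch) that the ansatz (V_ansatz) with
# psi given by (psi) satisfies g(V_xi, V_xi) = -1.
import sympy as sp

v, R, s, c2, s2, mu_A = sp.symbols('v R s c2 s2 mu_A', positive=True)
gamma2 = 1/(1 - v**2)                               # gamma^2
psi = -sp.sqrt(mu_A**2 - gamma2*(1 + R**2*s**2))

# Covariant components V_a in inertial coordinates, read off from
# e^1 = v dx^3 - dx^0 and e^2 = dx^3 - v dx^0:
V0 = -mu_A - v*psi
V1 = R*s*c2
V2 = R*s*s2
V3 = v*mu_A + psi

norm = -V0**2 + V1**2 + V2**2 + V3**2               # eta^{ab} V_a V_b
norm = sp.expand(norm).subs(c2**2, 1 - s2**2)       # cos^2 + sin^2 = 1
print(sp.simplify(norm))                            # prints -1
```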
Substituting the [*ansatz*]{} (\[V\_ansatz\]) together with a purely longitudinal electric field depending only on $\zeta$, $$F=E(\zeta)\, dx^0 \wedge dx^3,$$ into (\[lorentz\_force\_law\]) yields $$\label{F_dmu}
E= \frac{1}{\gamma^2} \frac{m}{q} \frac{d \mu}{d\zeta}.$$ Equation (\[F\_dmu\]) is used to eliminate $E$ from Maxwell equations (\[Max\]) and obtain a differential equation for $\mu$.
The electron number current is calculated using (\[electron\_current\]): $$N(p) = \alpha \bigg(\int_{{{\cal {W}}}_p} {\dot{x}}^a \iota_X\# 1 \bigg) \partial_a$$ and (\[Max\], \[V\_ansatz\], \[psi\]) yield $$\begin{aligned}
\notag
\frac{1}{\gamma^2}\frac{d^2\mu}{d\zeta^2} = &-
\frac{q^2}{m}n_{\text{ion}}\gamma^2\\
\label{ODE_mu}
&- \frac{q^2}{m}2\pi R^2 \alpha \int\limits^\pi_0 \bigg([\mu +
A(\xi^1)]^2
\,\,- \gamma^2[1 + R^2
\sin^2(\xi^1)]\bigg)^{1/2}\sin(\xi^1)\,\cos(\xi^1)\, d\xi^1\end{aligned}$$ and $$\label{norm_A}
2\pi R^2 \int\limits^\pi_0
A(\xi^1)\,\sin(\xi^1)\,\cos(\xi^1)\,d\xi^1 = - \frac{n_\text{ion}\gamma^2\,v}{\alpha}$$ where $\alpha$ is the value of $f$ inside $\mathcal{W}_p$.
The form of the 2nd order autonomous non-linear differential equation (\[ODE\_mu\]) for $\mu$ is fixed by specifying the generator $A(\xi^1)$ of $\partial\mathcal{W}_p$ subject to the normalization condition (\[norm\_A\]).
Electrostatic wave-breaking
---------------------------
The form of the integrand in (\[ODE\_mu\]) ensures that the magnitude of oscillatory solutions to (\[ODE\_mu\]) cannot be arbitrarily large. For our model, the wave-breaking value $\mu_{\text{wb}}$ is the largest $\mu$ for which the argument of the square root in (\[ODE\_mu\]) vanishes, $$\mu_{\text{wb}} =
\text{max}\bigg\{-A(\xi^1) + \gamma\sqrt{1+R^2\sin^2(\xi^1)}
\,\bigg|\,0\le\xi^1\le\pi\bigg\}, \label{mu_wave-breaking}$$ because $\mu<\mu_{\text{wb}}$ yields an imaginary integrand in (\[ODE\_mu\]) for some $\xi^1$. The positive square root in (\[mu\_wave-breaking\]) is chosen because, as discussed above, $\mu + A(\xi^1) > 0$ and in particular $\mu_\text{wb} + A(\xi^1)>0$.
The wave-breaking limit $E_{\text{max}}$ is obtained by evaluating the first integral of (\[ODE\_mu\]) between $\mu_{\text{wb}}$ where $E$ vanishes and the equilibrium[^1] value $\mu_{\text{eq}}$ of $\mu$ where $E$ is at a maximum. Using (\[norm\_A\]) to eliminate $\alpha$ it follows that $\mu_{\text{eq}}$ satisfies $$\begin{aligned}
\notag
\frac{1}{v}\int\limits^\pi_0 A(\xi^1)&\sin(\xi^1)\cos(\xi^1)\,d\xi^1\\
\label{mu_equilibrium}
&= \int\limits^\pi_0 \bigg([\mu_{\text{eq}} + A(\xi^1)]^2
- \gamma^2[1 + R^2
\sin^2(\xi^1)]\bigg)^{1/2}
\sin(\xi^1)\cos(\xi^1)
d\xi^1\end{aligned}$$ with $$\label{A_negativity}
\int\limits^\pi_0 A(\xi^1)\sin(\xi^1)\cos(\xi^1)\,d\xi^1\, <\, 0$$ since $\alpha, v >0$. Equation (\[ODE\_mu\]) yields the maximum value $E_\text{max}$ of $E$, $$\begin{aligned}
\notag E_{\text{max}}^2 = \,\,&2 m n_\text{ion}\Bigg[
-\mu_{\text{eq}} + \mu_{\text{wb}}
+ \, \frac{v}{\int\limits^\pi_0
A(\xi^{1\prime})\sin(\xi^{1\prime})\cos(\xi^{1\prime})d\xi^{1\prime}} \times \\
& \int\limits^{\mu_{\text{eq}}}_{\mu_{\text{wb}}}\int\limits^\pi_0
\bigg([\mu + A(\xi^1)]^2
- \gamma^2 [1 + R^2
\sin^2(\xi^1)]\bigg)^{1/2}
\sin(\xi^1)\cos(\xi^1)
d\xi^1\,d\mu\Bigg]. \label{E_max}\end{aligned}$$
The above is a general expression for $E_\text{max}$ given $A(\xi^1)$ as data. In the following, we determine a simple expression for an upper bound on $E_\text{max}$ when $A(\xi^1) = -a\cos(\xi^1)$ where $a$ is a positive constant ($a>0$ ensures (\[A\_negativity\]) is satisfied). Using (\[E\_max\]) it follows $$\begin{aligned}
\label{E_max_simple}
E_{\text{max}}^2 = 2 m n_\text{ion}\Bigg\{
-\mu_{\text{eq}} + \mu_{\text{wb}}
+ \frac{3}{2}\frac{v}{a}
\int\limits^{\mu_{\text{eq}}}_{\mu_{\text{wb}}}
[{{\cal {I}}}_+(\mu) + {{\cal {I}}}_-(\mu)]\,d\mu
\Bigg\},\end{aligned}$$ where $$\label{I_plus_minus}
{{\cal {I}}}_\pm(\mu) = \pm\int\limits^1_0 \bigg([\mu \pm a\chi]^2
- \gamma^2[1 + R^2 (1-\chi^2)]\bigg)^{1/2}\chi\,d\chi$$ Furthermore, (\[mu\_equilibrium\]) may be written $$\label{mu_equilibrium_simple}
\frac{3}{2}\frac{v}{a}[{{\cal {I}}}_+(\mu_{\text{eq}}) + {{\cal {I}}}_-(\mu_{\text{eq}})] = 1$$ and since ${{\cal {I}}}_+(\mu_{\text{eq}})\ge {{\cal {I}}}_+(\mu)$ and ${{\cal {I}}}_-(\mu_\text{wb})\ge {{\cal {I}}}_-(\mu)$ for $\mu_{\text{wb}}\le\mu\le\mu_{\text{eq}}$, using (\[E\_max\_simple\], \[mu\_equilibrium\_simple\]) $$E^2_\text{max} \le \frac{3v}{a} m n_{\text{ion}}
(\mu_\text{eq}-\mu_{\text{wb}})[{{\cal {I}}}_-(\mu_{\text{wb}})-{{\cal {I}}}_-(\mu_{\text{eq}})]$$ Furthermore $-{{\cal {I}}}_-(\mu_{\text{eq}}) \le
\frac{1}{2}\sqrt{\mu_\text{eq}^2 - \gamma^2}$ and ${{\cal {I}}}_-(\mu_{\text{wb}}) \le 0$ so $$E^2_\text{max} \le \frac{3v}{2a} \frac{m^2\omega_p^2}{q^2} (\mu_\text{eq}- \mu_\text{wb})
\sqrt{\mu_\text{eq}^2 - \gamma^2}$$ where $\omega_p=\sqrt{n_\text{ion}q^2/m}$ is the plasma angular frequency (in units where $\varepsilon_0=1$ and $c=1$).
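
The bound can be probed numerically. The sketch below is ours: it evaluates ${{\cal {I}}}_\pm$ by quadrature for illustrative parameters $v$, $a$, $R$ of our own choosing, solves (\[mu\_equilibrium\_simple\]) for $\mu_\text{eq}$, and compares the exact first integral (\[E\_max\_simple\]) with the upper bound above, in units where $m = n_\text{ion} = |q| = 1$ (so $\omega_p=1$).

```python
# A numerical sketch (ours, illustrative parameters, units m = n_ion = |q| = 1)
# of the wave-breaking bound for A(xi) = -a*cos(xi).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

v, a, R = 0.9, 0.3, 0.2
gamma = 1.0 / np.sqrt(1.0 - v**2)

def I(mu, sign):
    # I_+/-(mu) from (I_plus_minus); the square-root argument is clipped at 0
    # to guard against rounding (it is non-negative for mu >= mu_wb).
    f = lambda chi: sign * np.sqrt(max((mu + sign*a*chi)**2
                                       - gamma**2*(1 + R**2*(1 - chi**2)), 0.0)) * chi
    return quad(f, 0.0, 1.0)[0]

# mu_wb from (mu_wave-breaking) with A(xi) = -a*cos(xi):
xi = np.linspace(0.0, np.pi, 2001)
mu_wb = np.max(a*np.cos(xi) + gamma*np.sqrt(1 + R**2*np.sin(xi)**2))

# mu_eq solves (3v/2a)[I_+(mu) + I_-(mu)] = 1, cf. (mu_equilibrium_simple):
g = lambda mu: 1.5*(v/a)*(I(mu, +1) + I(mu, -1)) - 1.0
mu_eq = brentq(g, mu_wb, 10.0*mu_wb)

# Exact E_max^2 from (E_max_simple) and the closed-form upper bound above:
integral = quad(lambda mu: I(mu, +1) + I(mu, -1), mu_wb, mu_eq)[0]
E2_exact = 2.0*(mu_wb - mu_eq + 1.5*(v/a)*integral)
E2_bound = 1.5*(v/a)*(mu_eq - mu_wb)*np.sqrt(mu_eq**2 - gamma**2)
print(E2_exact, E2_bound)   # E2_exact <= E2_bound
```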
Wave-breaking in an external magnetic field
-------------------------------------------
In tackling (\[discon\]), one may opt to seek only those $V_\xi$ satisfying (\[lorentz\_force\_law\]); this approach was followed in the preceding sections. Although, at first sight, this method appears to be simpler than attempting to solve (\[discon\]), it is not always the simplest option. There are potential advantages in considering (\[discon\]) in its generality, as we will now argue.
The component of the magnetic field parallel to the velocity of a point charge does not contribute to the Lorentz force on that point charge. A similar observation may also be applied to certain $V_\xi$ in (\[discon\]) even though the $(\partial_1,\partial_2)$ components of $V_\xi$ are non-zero. Furthermore, the results of the previous section are unaffected by a constant magnetic field aligned along $x^3$.
The axially symmetric $V_\xi$ introduced above is of the general form $$V_\xi= (1+Y^2+ Z^2)^{1/2} \partial_0+ Y \cos(\xi^2) \partial_1+ Y
\sin(\xi^2) \partial_2+ Z \partial_3,$$ where $Y=\hat{Y}(x,\xi^1)$ and $Z=\hat{Z}(x,\xi^1)$. Suppose $F$ is of the form $$\begin{aligned}
&F = F_{\bm{I}} + F_{\bm{II}},\\
&F_{\bm{I}} = E(\zeta) dx^0\wedge dx^3,\\
&F_{\bm{II}} = B dx^1\wedge dx^2\end{aligned}$$
We have $$i_{V_\xi} F_{\bm{II}}= B\, Y \big(\cos (\xi^2) dx^2 - \sin (\xi^2) dx^1 \big)$$ and furthermore $$\begin{aligned}
&\frac{\partial \dot{\Sigma}^1}{\partial \xi^2} = -Y\sin(\xi^2),\\
&\frac{\partial \dot{\Sigma}^2}{\partial \xi^2} = Y\cos(\xi^2)\end{aligned}$$ so $$g_{ab} \frac{\partial\dot{\Sigma}^a}{\partial \xi^2} dx^b =
-Y\sin(\xi^2)dx^1 + Y\cos(\xi^2)dx^2.$$ Hence, using (\[surface\_form\]) it follows $${\frac{q}{m}}\iota_{V_\xi} F_{\bm{II}} \wedge \Omega_\xi=0$$ and from (\[discon\]) $$(\nabla_{V_\xi}\widetilde{V}_\xi - {\frac{q}{m}}\iota_{V_\xi} F_{\bm{I}})\wedge
\Omega_\xi = 0.$$ Therefore, if $V_\xi$ satisfies (\[discon\]) with $F=F_{\bm{I}}$, the same velocity field also satisfies (\[discon\]) with $F=F_{\bm{I}} + F_{\bm{II}}$. It follows that a longitudinal magnetic field does not influence an axially symmetric discontinuity in the electron distribution and the results of the previous section hold for non-zero constant $B$.
Beyond the Lorentz force
========================
The previous discussion clearly shows that there is merit in considering (\[discon\]) in its generality. We argue that for future extension of this work to fields with more complicated spacetime dependence, it is prudent to eschew (\[lorentz\_force\_law\]) in favour of (\[discon\]). We now illustrate this point further using a very simple example.
Let the chain $\Sigma$ be such that $$\label{constant_energy_V}
V_\xi= (1+R^2)^{1/2} \partial_0+ R \sin(\xi^1)
\cos(\xi^2) \partial_1 + R \sin(\xi^1) \sin(\xi^2)
\partial_2 + R\cos(\xi^1) \partial_3$$ where $R$ is a function on ${{\cal {M}}}$ and, using (\[surface\_form\]), $$\Omega_\xi= R^2 \sin(\xi^1) \cos(\xi^1) dx^1 \wedge dx^2+ R^2
\sin^2(\xi^1) dx^3 \wedge \Big( \sin(\xi^2) dx^1- \cos(\xi^2) dx^2 \Big).$$
The $4$-acceleration of $V_\xi$ is $$\begin{aligned}
\notag
\nabla_{V_\xi} \widetilde{V}_\xi = -&\frac{R\,
V_\xi R}{\sqrt{1+R^2}}\,dx^0\\
\label{acceleration}
&+ V_\xi R \Big( \sin (\xi^1) \cos (\xi^2) dx^1+ \sin (\xi^1) \sin
(\xi^2) dx^2+ \cos (\xi^1) dx^3 \Big)\end{aligned}$$ and, for simplicity, we assume that (\[acceleration\]) is in response to a longitudinal electric field, $$F= E dx^0 \wedge dx^3,$$ which contributes to (\[discon\]) as $$\iota_{V_\xi} F= E \bigg( (1+R^2)^{1/2}dx^3 - R\cos (\xi^1) dx^0 \bigg).$$
Clearly the Lorentz force equation (\[lorentz\_force\_law\]) cannot be satisfied for general $R$, since $\nabla_{V_\xi} \widetilde{V}_\xi$ contains terms in $dx^1$, $dx^2$ which cannot cancel against terms in $\iota_{V_\xi} F$. However, the only nonzero contribution to the left-hand side of (\[discon\]) can be made to vanish by requiring $$\label{VR}
V_\xi R = {\frac{q}{m}}E \sqrt{1+R^2} \cos (\xi^1).$$ Inspection of the $\xi$ dependences in (\[constant\_energy\_V\]) and (\[VR\]) reveals that $R$ can depend only on $x^3$ and $$\frac{d}{dx^3} \sqrt{1+R^2}= {\frac{q}{m}}E$$ so $E$ also depends only on $x^3$.
To compare the above with solutions to (\[lorentz\_force\_law\]), we seek a reparameterisation of $\Sigma$ whose corresponding family of $4$-velocities satisfies (\[lorentz\_force\_law\]). In particular, we consider a map $\rho$ $$\begin{aligned}
\nonumber \rho : {{\cal {V}}} \times {{\cal {D}}}^\prime &\rightarrow& {{\cal {V}}} \times {{\cal {D}}} \\
(x, \xi^{\prime 1}, \xi^{\prime 2}) &\mapsto& (x,
\xi^1=\psi^1(x,\xi^\prime),\xi^2=\psi^2(x,\xi^\prime))
\label{diff}\end{aligned}$$ where ${{\cal {V}}}\subset{{\cal {M}}}$ and $(\xi^{\prime 1},\xi^{\prime 2})\in{{\cal {D}}}^\prime\subset\mathbb{R}^2$. Then given $\Sigma$ satisfying $\Sigma^\ast \omega=0$, $$\label{reparam_discont}
(\Sigma \circ \rho)^\ast \omega= \rho^\ast(\Sigma^\ast \omega)=0.$$ The chains $\Sigma$ and $(\Sigma \circ \rho)$ locally represent the same discontinuity, and are physically equivalent. However, the families $V_\xi= V^a_\xi
\partial_a$ and $W_{\xi^\prime}=W^a_{\xi^\prime} \partial_a$ of vector fields, where $$V^a_\xi = \Sigma^\ast {\dot{x}}^a, \qquad W^a_{\xi^\prime} = (\Sigma \circ \rho)^\ast {\dot{x}}^a,$$ are different. We demand $$\nabla_{W_\xi^\prime} \widetilde{W}_{\xi^\prime} = {\frac{q}{m}}\iota_{W_{\xi^\prime}} F$$ and using (\[reparam\_discont\]) it follows $$W_{\xi^\prime}\psi^2 =0, \qquad W_{\xi^\prime}\psi^1 = -\frac{q E}{mR}
\sqrt{1+R^2}\sin (\psi^1). \label{thetaeq}$$ One may solve (\[thetaeq\]) to determine $(\Sigma\circ\rho)$, but it is clear that (for general $E$) solving for the discontinuity in terms of $\Sigma$ is a simpler task. Furthermore, we expect this state of affairs to hold for more complicated configurations.
Conclusion
==========
We have developed a covariant formalism for tackling discontinuities in $1$-particle distributions. We have used it to develop wave-breaking limits for models of thermal plasmas whose distributions have effectively $1$-dimensional spacetime dependence but are $3$-dimensional in velocity.
Vertical and horizontal lifts {#appendix:vertical_and_horizontal}
=============================
Given tensors on a manifold ${{\cal {M}}}$, there are a number of ways to lift them onto the tangent manifold $T{{\cal {M}}}$. The simplest and best known of these are the vertical lift and (given a connection on ${{\cal {M}}}$) the horizontal lift. The following is a summary of some important properties of these lifts; for more details see, for example, [@yano:1973].
The vertical lift is a tensor homomorphism. Acting on forms, it is equivalent to the pull-back with respect to the canonical projection map $\pi : T{{\cal {M}}} \rightarrow {{\cal {M}}}$: $$\beta^{\bm{V}}= \pi^\ast \beta, \qquad \forall\,\, \beta \in \Lambda {{\cal {M}}}.$$ Acting on a vector $Y$, the vertical lift is $$Y^{\bm{V}}= Y^a \frac{\partial}{\partial {\dot{x}}^a} \qquad
\forall\,\,Y=Y^a \frac{\partial}{\partial x^a} \in T{{\cal {M}}}.$$
Note from these definitions, contraction of the vertical lift of a vector with the vertical lift of a form vanishes: $$\label{V_on_V}
\iota_{(Y^{\bm{V}})} \beta^{\bm{V}}=0.$$ The horizontal lift makes use of a connection $\nabla$, with coefficients $\Gamma^a{}_{bc}$ in a coordinate basis $\partial_a = \partial/\partial x^a$: $$\nabla_{\partial_c} \partial_b = \Gamma^a{}_{bc} \partial_a.$$ The horizontal lift $dx^{a \bm{H}}$ of the basis form $dx^a$ is $$\label{H_lift_of_dx}
dx^{a \bm{H}}= d{\dot{x}}^a + {\dot{x}}^c \Gamma^a{}^{\bm{V}}_{bc} dx^b,$$ while that of the basis vector $\partial/\partial x^a$ is $$\bigg( \frac{\partial}{\partial x^a}\bigg)^{\bm{H}}= \frac{\partial}{\partial x^a} - {\dot{x}}^c \Gamma^b{}^{\bm{V}}_{ac} \frac{\partial}{\partial {\dot{x}}^b}.$$ The horizontal lifts of more general 1-forms and vectors may be determined from the relations $$\begin{aligned}
& (f\beta)^{\bm{H}}= f^{\bm{V}} \beta^{\bm{H}} \qquad \beta \in \Lambda^1 {{\cal {M}}},\\
& (fY)^{\bm{H}}= f^{\bm{V}} Y^{\bm{H}} \qquad \forall\,\, Y\in T{{\cal {M}}},\end{aligned}$$ for any function $f$ on ${{\cal {M}}}$.
Similarly to the vertical lift, the contraction of the horizontal lift of a vector with the horizontal lift of a form vanishes: $$\label{H_on_H}
\iota_{(Y^{\bm{H}})} \beta^{\bm{H}}=0.$$
Two other useful identities relate to the contractions of vertical and horizontal lifts of forms and vectors: $$\label{V_on_H_and_H_on_V}
\iota_{(Y^{\bm{V}})} \beta^{\bm{H}}=\iota_{(Y^{\bm{H}})}
\beta^{\bm{V}}=(\iota_Y \beta)^{\bm{V}}.$$ We thank RA Cairns, B Ersfeld, J Gratus, AJW Reitsma and RMGM Trines for useful discussions. This work is supported by EPSRC grant EP/E022995/1 and the Cockcroft Institute.
[0]{} (North-Holland, Amsterdam), 1980 in , (Academic Press, New York and London), 1971, 1 (Adam Hilger, Bristol and New York), 1987 (Marcel Dekker, New York, 1973)
[^1]: \[footnote1\]Note that the equilibrium of $\mu$ need not coincide with the plasma’s thermodynamic equilibrium.
|
---
abstract: 'We provide a metric-like formulation of the spin-3 gravity in three dimensions. It is shown that the Chern-Simons formulation of the spin-3 gravity can be reformulated as an Einstein-Cartan-Sciama-Kibble theory coupled to higher-spin matter fields. A duality-like transformation is also identified from this metric-like formulation.'
author:
- 'Zhi-Qiang Guo'
bibliography:
- 'spinRef.bib'
title: 'Metric-Like Formulation Of the Spin-Three Gravity In Three Dimensions'
---
[*Introduction*]{}.In three dimensions (3D), the pure Einstein gravity does not have local degrees of freedom [@Deser:1983tn]. The Einstein-Hilbert action with the cosmological constant term can be recast as a $SL(2,R){\times}SL(2,R)$ Chern-Simons (CS) theory [@Achucarro:1987vz; @Witten:1988hc], which is a manifestly topological theory. Recently, it was suggested that the higher-spin gravity [@Vasiliev:1990en; @Blencowe:1988gj] in 3D could also be expressed as a CS theory [@Henneaux:2010xg; @Campoleoni:2010zq] but with the larger gauge group $SL(N,R){\times}SL(N,R)$. In contrast with its concise formulation in terms of the frame-like fields, which facilitates the analysis of asymptotic symmetries [@Brown:1986nw; @Henneaux:2010xg; @Campoleoni:2010zq] and higher-spin black hole solutions [@Gutperle:2011kf], a metric-like formulation of the higher-spin gravity is helpful to illuminate its geometrical structure and make its higher-spin degrees of freedom transparent. From the perspective of the anti-de Sitter/conformal field theory correspondence, the metric-like formulation in 3D is also useful to understand the thermodynamical properties (such as entropy [@Campoleoni:2012hp] and shear viscosity [@Policastro:2001yc]) of its dual theory in two dimensions [@Gaberdiel:2012uj]. However, a metric-like formulation cannot be derived straightforwardly. A perturbative study of the metric-like formulation of the spin-3 gravity has been pursued in [@Campoleoni:2012hp]. Geometrical analysis based on the metric compatibility method [@Fujisawa:2012dk] shows that a complete metric-like formulation not only depends on the spin-2 field and the spin-3 field, but also requires higher-spin fields with more space-time indices. In this paper, we propose that if we assume the connection of the conventional spin-2 gravity has a torsion, then the CS formulation of the spin-3 gravity can be recast as an Einstein-Cartan-Sciama-Kibble (ECSK) theory [@Hehl:1976kj], in which the spin-3 field can be regarded as the higher-spin matter acting as the source of the torsion.
[*Metric-Like Formulation*]{}.Similar to its spin-2 cousin in three dimensions, the spin-3 gravity in 3D could be described by the $SL(3,R){\times}SL(3,R)$ Chern-Simons theory $$\begin{aligned}
\label{sec2-cs-lag}
S&=&S_{\mathrm{CS}}[A]-S_{\mathrm{CS}}[\bar{A}],\\
S_{\mathrm{CS}}[A]&=&\frac{k}{4\pi}\int \mathrm{tr}\bigr(A\wedge {dA}+\frac{2}{3}A\wedge {A}\wedge {A}\bigr),\nonumber\end{aligned}$$ where $k=\frac{l}{16G}$. $A$ and $\bar{A}$ can further be decomposed into the frame-like fields $$\begin{aligned}
\label{sec2-cs-frame}
A=\omega+\frac{1}{l}e,~~\bar{A}=\omega-\frac{1}{l}e.\end{aligned}$$ Then the CS action (\[sec2-cs-lag\]) has the Palatini formulation $$\begin{aligned}
\label{sec2-cs-lag-pal}
S=\frac{k}{\pi}\int \mathrm{tr}\bigr(e\wedge (d\omega+\omega\wedge\omega)+\frac{1}{3l^2}e\wedge {e}\wedge {e}\bigr),\end{aligned}$$ where the second term is the generalized cosmological term. The variation of $\omega$ yields the torsion constraints $$\begin{aligned}
\label{sec2-cs-lag-pal-tor}
\mathscr{T}=de+\omega\wedge{e}+e\wedge\omega=0,\end{aligned}$$ and the variation of $e$ yields the equations of motion $$\begin{aligned}
\label{sec2-cs-lag-pal-eom}
\mathscr{R}=d\omega+\omega\wedge\omega+\frac{1}{l^2}e\wedge{e}=0.\end{aligned}$$ If we can solve $\omega$ in terms of $e$ and $de$ through the torsion equation (\[sec2-cs-lag-pal-tor\]), then a second-order formulation of the Palatini action can be obtained. For the Einstein gravity, $\omega$ and $e$ take values in the Lie algebra of $SL(2,R)$; Eq. (\[sec2-cs-lag-pal-tor\]) can be solved straightforwardly, and a pure metric-like formulation can be achieved. For the spin-3 gravity, the Lie algebra of $\omega$ and $e$ is $SL(3,R)$. A perturbative solution of Eq. (\[sec2-cs-lag-pal-tor\]) has been given in [@Campoleoni:2012hp]. A non-perturbative attempt has been made in [@Fujisawa:2012dk], which shows that it is difficult to achieve a pure metric-like formulation of the CS action (\[sec2-cs-lag\]). The work of [@Fujisawa:2012dk] is based on the $SL(3,R)$ invariant metric variables $\varphi_{\alpha\beta}=\mathrm{tr}(e_{\alpha}e_{\beta})$ and $\varphi_{\alpha\beta\gamma}=\mathrm{tr}(e_{\alpha}e_{\beta}e_{\gamma})$. Alternatively, in this paper, we use the $SL(2,R)$ decomposition of the $SL(3,R)$ Lie algebra $$\begin{aligned}
\label{sec2-lie}
[J_a,J_b]&=&\epsilon_{abc}J^{c},~~[J_a,Q_{bc}]=\epsilon_{\hspace{1mm}ab}^{d}Q_{dc}+\epsilon_{\hspace{1mm}ac}^{d}Q_{db},\nonumber\\
~[Q_{ab},Q_{cd}]&=&\lambda^2
(\eta_{ac}\epsilon_{bdm}+\eta_{bc}\epsilon_{adm})J^{m}+(c\leftrightarrow{d}),\end{aligned}$$ where the small Latin letters take the values $0,1,2$, and the definitions of $\eta_{ab}$ and $\epsilon_{abc}$ follow the conventions in [@Campoleoni:2010zq]. $\lambda$ is a dimensionless constant. The commutators of $J_a$ furnish the Lie algebra of the $SL(2,R)$ group. $Q_{ab}$ is symmetric in its indices and satisfies the traceless condition $Q_{ab}\eta^{ab}=0$; the $Q_{ab}$ transform in the five-dimensional symmetric representation of $SL(2,R)$. Using this realization of the $SL(3,R)$ Lie algebra, $\omega$ and $e$ are expressed as $$\begin{aligned}
\label{sec2-lie-we}
\omega=\omega^{a}J_a+\omega^{bc}Q_{bc},~~e=e^{a}J_a+e^{bc}Q_{bc}.\end{aligned}$$ Here $\omega^{bc}$ and $e^{bc}$ are also symmetrical and traceless, that is, $\omega^{bc}\eta_{bc}=0$ and $e^{bc}\eta_{bc}=0$. We define the metric-like fields from the frame-like fields as $$\begin{aligned}
\label{sec2-lie-we-metric}
g_{\alpha\beta}=e_{\alpha}^{a}e_{\beta}^{b}\eta_{ab},~~
h_{\alpha\beta\gamma}=e_{\alpha}^{ab}e_{\beta}^{c}e_{\gamma}^{d}\eta_{ac}\eta_{bd}.\end{aligned}$$ $g_{\alpha\beta}$ is the conventional $SL(2,R)$ invariant metric. $h_{\mu\alpha\beta}$ is symmetric only in $\alpha$ and $\beta$, and thus belongs to the class of mixed-symmetry fields discussed in [@Campoleoni:2012th]. Using $g^{\alpha\beta}$ as the inverse of $g_{\alpha\beta}$, $h_{\mu\alpha\beta}$ satisfies the traceless condition $h_{\mu\alpha\beta}g^{\alpha\beta}=0$. We also have $$\begin{aligned}
\label{sec2-lie-we-metric-anti}
e_{\alpha}^{a}e_{\beta}^{b}e_{\gamma}^{c}\epsilon_{abc}=\varepsilon_{\alpha\beta\gamma},~~
E^{\alpha}_{a}E^{\beta}_{b}E^{\gamma}_{c}\epsilon^{abc}=\varepsilon^{\alpha\beta\gamma},\end{aligned}$$ where $g$ is the determinant of $g_{\alpha\beta}$, and $E^{\alpha}_{a}$ is the inverse of $e_{\alpha}^{a}$, which satisfies $E^{\alpha}_{a}e_{\alpha}^{b}=\delta^{a}_{b}$ and $E^{\alpha}_{a}e_{\beta}^{a}=\delta^{\alpha}_{\beta}$. We have defined $$\begin{aligned}
\label{sec2-lie-we-metric-anti-def}
\varepsilon_{\alpha\beta\gamma}=\sqrt{-g}\epsilon_{\alpha\beta\gamma},~~
\varepsilon^{\alpha\beta\gamma}=\frac{1}{\sqrt{-g}}\epsilon^{\alpha\beta\gamma},\end{aligned}$$ which are covariant antisymmetrical tensors under the general 3D coordinate transformation. By means of the $SL(2,R)$ variables, the torsion equation can be rewritten as
$$\begin{aligned}
\label{sec2-cs-lag-re-tor-1}
(\partial_{\mu}e_{\nu}^{a}
&+&\omega_{\mu}^{b}e_{\nu}^{c}\epsilon_{\hspace{1mm}bc}^{a})-(\partial_{\nu}e_{\mu}^{a}
+\omega_{\nu}^{b}e_{\mu}^{c}\epsilon_{\hspace{1mm}bc}^{a})\\
&=&
4\lambda^2(\omega_{\nu{d}}^{b}e_{\mu}^{dc}\epsilon_{\hspace{1mm}bc}^{a}-
\omega_{\mu{d}}^{b}e_{\nu}^{dc}\epsilon_{\hspace{1mm}bc}^{a}),\nonumber\\
\label{sec2-cs-lag-re-tor-2}
(\partial_{\mu}e_{\nu}^{bc}&+&\omega_{\mu}^{a}e_{\nu}^{dc}\epsilon_{\hspace{1mm}ad}^{b}
+\omega_{\mu}^{a}e_{\nu}^{db}\epsilon_{\hspace{1mm}ad}^{c})-(\mu\leftrightarrow\nu)\\
&=&(\omega_{\nu}^{ab}e_{\mu}^{d}\epsilon_{\hspace{1mm}ad}^{c}
+\omega_{\nu}^{ac}e_{\mu}^{d}\epsilon_{\hspace{1mm}ad}^{b})-(\mu\leftrightarrow\nu).\nonumber\end{aligned}$$
Eqs. (\[sec2-cs-lag-re-tor-1\]) and (\[sec2-cs-lag-re-tor-2\]) have clear interpretations in terms of the $SL(2,R)$ variables. The left side of Eq. (\[sec2-cs-lag-re-tor-1\]) can be interpreted as the torsion of the $SL(2,R)$ frame-like fields $e_{\nu}^{a}$. The left side of Eq. (\[sec2-cs-lag-re-tor-2\]) transforms as a symmetric representation of the $SL(2,R)$ group. These observations suggest that Eqs. (\[sec2-cs-lag-re-tor-1\]) and (\[sec2-cs-lag-re-tor-2\]) can be reformulated as equations of metric-like fields through the assumptions $$\begin{aligned}
\label{sec2-cs-tor-assm-1}
\partial_{\mu}e_{\nu}^{a}
+\omega_{\mu}^{b}e_{\nu}^{c}\epsilon_{\hspace{1mm}bc}^{a}=\Gamma^{\rho}_{\mu\nu}e_{\rho}^{a}\end{aligned}$$ and $$\begin{aligned}
\label{sec2-cs-tor-assm-2}
\omega_{\mu}^{bc}=\Omega_{\mu}^{\rho\sigma}e_{\rho}^{b}e_{\sigma}^{c},\end{aligned}$$ where $\Omega_{\mu}^{\rho\sigma}$ is symmetrical about $\rho$ and $\sigma$, and it also satisfies the traceless condition $\Omega_{\mu}^{\rho\sigma}g_{\rho\sigma}=0$. From Eq. (\[sec2-cs-tor-assm-1\]), we can obtain the $SL(2,R)$ connection $\omega_{\mu}^{a}$ $$\begin{aligned}
\label{sec2-cs-tor-assm-1-sol}
\omega_{\mu}^{a}=\frac{1}{2}\epsilon_{\hspace{2mm}c}^{ab}
E^{\sigma}_{b}(\partial_{\mu}e_{\sigma}^{c}-\Gamma^{\rho}_{\mu\sigma}e_{\rho}^{c}),\end{aligned}$$ and Eq. (\[sec2-cs-tor-assm-1\]) also yields the metric compatibility condition $$\begin{aligned}
\label{sec2-cs-tor-assm-mcom}
\partial_{\mu}g_{\alpha\beta}=\Gamma^{\rho}_{\mu\alpha}g_{\rho\beta}+\Gamma^{\rho}_{\mu\beta}g_{\rho\alpha},\end{aligned}$$ which requires the connection to be $$\begin{aligned}
\label{sec2-cs-tor-assm-mcom-con}
\Gamma^{\rho}_{\alpha\beta}&=&\bar{\Gamma}^{\rho}_{\alpha\beta}-g^{\rho\sigma}(T_{\alpha\sigma}^{\tau}g_{\tau\beta}
+T_{\beta\sigma}^{\tau}g_{\tau\alpha})+T_{\alpha\beta}^{\rho},\\
\label{sec2-cs-tor-assm-mcom-con-1}
\bar{\Gamma}^{\rho}_{\alpha\beta}&=&\frac{1}{2}g^{\rho\sigma}
(\partial_{\alpha}g_{\sigma\beta}+\partial_{\beta}g_{\sigma\alpha}-\partial_{\sigma}g_{\alpha\beta}),\end{aligned}$$ where $T_{\alpha\beta}^{\rho}$ is the torsion tensor, which is antisymmetric about $\alpha$ and $\beta$. In terms of the variables in Eqs. (\[sec2-lie-we-metric\])-(\[sec2-lie-we-metric-anti\]), (\[sec2-cs-tor-assm-2\]) and (\[sec2-cs-tor-assm-1-sol\]), the action (\[sec2-cs-lag-pal\]) can be rewritten as $$\begin{aligned}
\label{sec2-cs-lag-re-0}
S&=&\frac{1}{16\pi{G}}\int{d^3x}\sqrt{-g}\mathscr{L},\\
\mathscr{L}&=&\mathscr{L}_{1}+4\lambda^2(\mathscr{L}_{2}+\mathscr{L}_{3}+\mathscr{L}_{4}).\nonumber\end{aligned}$$ In Eq. (\[sec2-cs-lag-re-0\]), $\mathscr{L}_{1}$ is $$\begin{aligned}
\label{sec2-cs-lag-re-0a}
\mathscr{L}_{1}&=&R-\frac{2}{l^2},\end{aligned}$$ where $$\begin{aligned}
\label{sec2-cs-lag-re-1}
R^{\sigma}_{\hspace{1mm}\rho\mu\nu}=\partial_{\mu}\Gamma^{\sigma}_{\nu\rho}-\partial_{\nu}\Gamma^{\sigma}_{\mu\rho}
+\Gamma^{\sigma}_{\mu\tau}\Gamma^{\tau}_{\nu\rho}-\Gamma^{\sigma}_{\nu\tau}\Gamma^{\tau}_{\mu\rho}\end{aligned}$$ is the Riemann curvature, and $R=g^{\alpha\beta}R^{\sigma}_{\hspace{1mm}\alpha\beta\sigma}$ is the Ricci scalar. $\mathscr{L}_{2}$, $\mathscr{L}_{3}$ and $\mathscr{L}_{4}$ are given by
$$\begin{aligned}
\label{sec2-cs-lag-re-0b}
\mathscr{L}_{2}&=&g_{\alpha\beta}(\Omega^{\alpha\sigma}_{\rho}\Omega^{\beta\rho}_{\sigma}
-\Omega^{\alpha\sigma}_{\sigma}\Omega^{\beta\rho}_{\rho})\\
\label{sec2-cs-lag-re-0c}
\mathscr{L}_{3}&=&\frac{1}{l^2}
g_{\alpha\beta}(h^{\alpha\sigma}_{\rho}h^{\beta\rho}_{\sigma}
-h^{\alpha\sigma}_{\sigma}h^{\beta\rho}_{\rho}),\\
\label{sec2-cs-lag-re-0d}
\mathscr{L}_{4}&=&\varepsilon^{\mu\nu\alpha}(\nabla_{\mu}\Omega^{\rho\sigma}_{\nu}
+T^{\tau}_{\mu\nu}\Omega^{\rho\sigma}_{\tau})h_{\alpha\rho\sigma},\end{aligned}$$
where $$\begin{aligned}
\label{sec2-cs-lag-re-0d-def}
\nabla_{\mu}\Omega^{\alpha\beta}_{\nu}=\partial_{\mu}\Omega^{\alpha\beta}_{\nu}
-\Gamma^{\sigma}_{\mu\nu}\Omega^{\alpha\beta}_{\sigma}+\Gamma^{\alpha}_{\mu\sigma}\Omega^{\sigma\beta}_{\nu}
+\Gamma^{\beta}_{\mu\sigma}\Omega^{\alpha\sigma}_{\nu}\end{aligned}$$ is the covariant derivative associated with the connection $\Gamma^{\sigma}_{\mu\nu}$. $h^{\alpha\beta}_{\mu}=g^{\alpha\rho}g^{\beta\sigma}h_{\mu\rho\sigma}$, that is, we always lower and raise the indices through $g_{\alpha\beta}$ and its inverse $g^{\alpha\beta}$. From the above, we see that $\mathscr{L}_{1}$ is the action of the conventional spin-2 gravity with the cosmological constant. $\mathscr{L}_{4}$ is a topological-like coupling term. The meaning of $\mathscr{L}_{2}$ becomes clear once we know the expression for $\Omega_{\mu}^{\rho\sigma}$. Now the action (\[sec2-cs-lag-re-0\]) has a metric-like formulation, but it is a first-order action in $\Omega_{\mu}^{\alpha\beta}$ and $h_{\mu\alpha\beta}$. In order to obtain a second-order formulation, we need to solve the torsion constraints (\[sec2-cs-lag-re-tor-1\]) and (\[sec2-cs-lag-re-tor-2\]). The torsion constraint (\[sec2-cs-lag-re-tor-1\]) can be reformulated as $$\begin{aligned}
\label{sec2-cs-tor-assm-re-3}
-T^{\gamma}_{\alpha\beta}=2\lambda^2
g_{\tau\mu}(\Omega_{\alpha}^{\sigma\tau}h_{\beta\sigma\rho}-
\Omega_{\beta}^{\sigma\tau}h_{\alpha\sigma\rho})\varepsilon^{\mu\rho\gamma},\end{aligned}$$ and the torsion constraint (\[sec2-cs-lag-re-tor-2\]) can be reformulated as
$$\begin{aligned}
\label{sec2-cs-tor-assm-re-1}
-K^{\gamma}_{\alpha\beta}&=&(\Omega_{\alpha}^{\gamma\tau}g_{\tau\beta}
-\Omega_{\rho}^{\rho\tau}g_{\tau\alpha}\delta^{\gamma}_{\beta})+(\alpha\leftrightarrow\beta),\\
\label{sec2-cs-tor-assm-re-2}
K^{\gamma}_{\alpha\beta}&=&\varepsilon^{\rho\sigma\gamma}(\nabla_{\rho}h_{\sigma\alpha\beta}
+T^{\tau}_{\rho\sigma}h_{\tau\alpha\beta}).\end{aligned}$$
Eqs. (\[sec2-cs-tor-assm-re-3\]) and (\[sec2-cs-tor-assm-re-1\]) are derived from Eqs. (\[sec2-cs-lag-re-tor-1\]) and (\[sec2-cs-lag-re-tor-2\]) by multiplying the frame-like fields $E^{\alpha}_{a}$ or $e_{\beta}^{b}$. Alternatively, they can also be derived through variations of the action (\[sec2-cs-lag-re-0\]) with respect to $T^{\gamma}_{\alpha\beta}$ and $\Omega_{\mu}^{\rho\sigma}$, respectively. Eqs. (\[sec2-cs-tor-assm-re-3\]) and (\[sec2-cs-tor-assm-re-1\]) are coupled equations for $T^{\gamma}_{\alpha\beta}$ and $\Omega_{\mu}^{\rho\sigma}$. Eq. (\[sec2-cs-tor-assm-re-3\]) demonstrates that the torsion is determined by the higher-spin fields, which provides the action (\[sec2-cs-lag-re-0\]) with the interpretation as an Einstein-Cartan-Sciama-Kibble theory [@Hehl:1976kj]. The solution of Eq. (\[sec2-cs-tor-assm-re-1\]) expresses the connection $\Omega_{\mu}^{\rho\sigma}$ in terms of $h_{\mu\alpha\beta}$ and its derivatives. A solution of Eq. (\[sec2-cs-tor-assm-re-1\]) is $$\begin{aligned}
\label{sec2-cs-tor-assm-re-1-sol}
\Omega_{\mu}^{\alpha\beta}&=&\frac{1}{2}\bigr(g^{\alpha\sigma}K^{\beta}_{\mu\sigma}+g^{\beta\sigma}K^{\alpha}_{\mu\sigma}
-\frac{2}{3}K^{\sigma}_{\mu\sigma}g^{\alpha\beta}\bigr)\\
&-&\frac{1}{2}g^{\alpha\rho}g^{\beta\sigma}g_{\mu\tau}K^{\tau}_{\rho\sigma}.\nonumber\end{aligned}$$ Through this expression, $\Omega_{\mu}^{\alpha\beta}$ can be eliminated from Eqs. (\[sec2-cs-lag-re-0b\]) and (\[sec2-cs-lag-re-0d\]), and Eqs. (\[sec2-cs-lag-re-0b\]) and (\[sec2-cs-lag-re-0d\]) can be regarded as the kinetic terms of the spin-3 field $h_{\mu\alpha\beta}$. Eq. (\[sec2-cs-lag-re-0d\]) looks like a Fierz-Pauli type massive term of $h_{\alpha\mu\nu}$. However, because the background solution of the action (\[sec2-cs-lag-re-0\]) is the anti-de Sitter space-time, Eq. (\[sec2-cs-lag-re-0d\]) only plays the role of ensuring the 3D diffeomorphism invariance of the action and does not mean that the spin-3 field $h_{\mu\alpha\beta}$ is massive [@Campoleoni:2012th; @Deser:2001pe]. We can further attempt to solve the torsion constraint (\[sec2-cs-tor-assm-re-3\]). In 3D, $T^{\gamma}_{\alpha\beta}$ is equivalent to a rank (2,0) tensor through the definition $$\begin{aligned}
\label{sec2-cs-tor-re-def}
T^{\alpha\beta}=-\varepsilon^{\beta\rho\sigma}T^{\alpha}_{\rho\sigma},~~
T^{\gamma}_{\alpha\beta}=\frac{1}{2}T^{\gamma\rho}\varepsilon_{\rho\alpha\beta}.\end{aligned}$$ Substituting $\Omega_{\mu}^{\alpha\beta}$ into Eq. (\[sec2-cs-tor-assm-re-3\]), we can obtain an equation of $T^{\alpha\beta}$ $$\begin{aligned}
\label{sec2-cs-tor-re-def-1}
T^{\alpha\beta}+4\lambda^2T^{\rho\sigma}M^{\alpha\beta}_{\rho\sigma}=4\lambda^2
\bar{\Omega}_{\theta}^{\sigma\tau}g_{\tau\mu}h_{\nu\sigma\rho}\varepsilon^{\mu\rho\alpha}\varepsilon^{\theta\nu\beta},\end{aligned}$$ where $\bar{\Omega}_{\mu}^{\alpha\beta}$ is defined as $\Omega_{\mu}^{\alpha\beta}$ in (\[sec2-cs-tor-assm-re-1-sol\]) but with the connection $\Gamma^{\tau}_{\alpha\beta}$ replaced by the Levi-Civita connection $\bar{\Gamma}^{\tau}_{\alpha\beta}$. $M^{\alpha\beta}_{\rho\sigma}$ is a complicated algebraic function of $g_{\alpha\beta}$ and $h_{\rho\alpha\beta}$, which does not have a compact expression. To solve for $T^{\alpha\beta}$, we need to know the inverse of $(\delta^{\alpha}_{\rho}\delta^{\beta}_{\sigma}+4\lambda^2M^{\alpha\beta}_{\rho\sigma})$, which is obtainable perturbatively, or non-perturbatively in an algebraic way through the Cayley-Hamilton method. The first-order approximation of $T^{\alpha\beta}$ is given by the right side of Eq. (\[sec2-cs-tor-re-def-1\]). In this paper, we keep the torsion constraint (\[sec2-cs-tor-assm-re-3\]) intact so that the action (\[sec2-cs-lag-re-0\]) retains a concise formulation; the action (\[sec2-cs-lag-re-0\]) is then an ECSK theory coupled with the higher-spin fields $h_{\rho\alpha\beta}$.
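As a quick consistency check (ours, not spelled out in the derivation above), the solution (\[sec2-cs-tor-assm-re-1-sol\]) respects the traceless condition $\Omega_{\mu}^{\alpha\beta}g_{\alpha\beta}=0$: contracting with $g_{\alpha\beta}$, using $g^{\alpha\beta}g_{\alpha\beta}=3$ and $K^{\gamma}_{\alpha\beta}g^{\alpha\beta}=0$ (which follows from $h_{\sigma\alpha\beta}g^{\alpha\beta}=0$ and the metric compatibility (\[sec2-cs-tor-assm-mcom\])), one finds $$\begin{aligned}
\Omega_{\mu}^{\alpha\beta}g_{\alpha\beta}=\frac{1}{2}\big(K^{\sigma}_{\mu\sigma}+K^{\sigma}_{\mu\sigma}
-\frac{2}{3}\cdot 3\,K^{\sigma}_{\mu\sigma}\big)-\frac{1}{2}g_{\mu\tau}\,g^{\rho\sigma}K^{\tau}_{\rho\sigma}=0.\end{aligned}$$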
*Equations of motion*.In order to obtain a transparent Lagrangian for $h_{\rho\alpha\beta}$, firstly we rewrite the Lagrangian $\mathscr{L}_{4}$ as $$\begin{aligned}
\label{sec2-cs-lag-re-0d-1}
\mathscr{L}_{4}&=&\varepsilon^{\mu\nu\alpha}(\nabla_{\mu}h_{\nu\rho\sigma}
+T^{\tau}_{\mu\nu}h_{\tau\rho\sigma})\Omega_{\alpha}^{\rho\sigma}\\
&+&\frac{1}{\sqrt{-g}}\partial_{\mu}(\sqrt{-g}\varepsilon^{\mu\nu\alpha}\Omega_{\nu}^{\rho\sigma}h_{\alpha\rho\sigma}).\nonumber\end{aligned}$$ The second line of this equation is a total divergence term. Substituting the solution (\[sec2-cs-tor-assm-re-1-sol\]) of $\Omega_{\alpha}^{\rho\sigma}$ into Eqs. (\[sec2-cs-lag-re-0b\]) and (\[sec2-cs-lag-re-0d-1\]), we obtain a new Lagrangian $$\begin{aligned}
\label{sec2-cs-lag-re-0d-1a}
\mathscr{L}_{2}+\mathscr{L}_{4}&=&-\frac{1}{4}(g^{\mu\alpha}g^{\nu\beta}-g^{\mu\beta}g^{\nu\alpha})
\hat{\nabla}_{\mu}h_{\nu}^{\rho\sigma}\hat{\nabla}_{\alpha}h_{\beta\rho\sigma}\nonumber\\
&-&\frac{1}{2}g^{\tau\theta}\varepsilon^{\mu\nu\rho}\varepsilon^{\alpha\beta\sigma}
\hat{\nabla}_{\mu}h_{\nu\sigma\tau}\hat{\nabla}_{\alpha}h_{\beta\rho\theta},\end{aligned}$$ where we have used $\hat{\nabla}_{\mu}h_{\nu\rho\sigma}=\nabla_{\mu}h_{\nu\rho\sigma}+T^{\tau}_{\mu\nu}h_{\tau\rho\sigma}$ to achieve a compact expression, and the divergence term in Eq. (\[sec2-cs-lag-re-0d-1\]) was omitted. This Lagrangian has a Maxwell-like form, quadratic in the field strength. The identity $$\begin{aligned}
\label{sec2-cs-lag-re-0d-1a-id}
\varepsilon^{\mu\nu\rho}\varepsilon^{\alpha\beta\sigma}&=&g^{\mu\alpha}g^{\nu\sigma}g^{\rho\beta}
+(g^{\nu\alpha}g^{\mu\beta}-g^{\mu\alpha}g^{\nu\beta})g^{\rho\sigma}\\
&-&g^{\nu\alpha}g^{\mu\sigma}g^{\rho\beta}+(g^{\mu\sigma}g^{\nu\beta}-g^{\nu\sigma}g^{\mu\beta})g^{\rho\alpha}\nonumber\end{aligned}$$ can be further used to rewrite Eq. (\[sec2-cs-lag-re-0d-1a\]) into a conventional form. Now we discuss the equations of motion for $g_{\alpha\beta}$ and $h_{\rho\alpha\beta}$. Their equations of motion are given by the zero curvature condition (\[sec2-cs-lag-pal-eom\]), which can be decomposed into two equations in the same way as the torsion constraints (\[sec2-cs-lag-re-tor-1\]) and (\[sec2-cs-lag-re-tor-2\]). Firstly, from Eq. (\[sec2-cs-lag-pal-eom\]), we can obtain
$$\begin{aligned}
\label{sec2-cs-lag-pal-eom-re-1a}
2\lambda^2\mathcal{T}_{\mu\nu}&=&R_{\mu\nu}-\frac{1}{2}{R}g_{\mu\nu}+\frac{1}{l^2}g_{\mu\nu},\\
\label{sec2-cs-lag-pal-eom-re-1b}
\mathcal{T}_{\mu\nu}&=&\mathscr{L}_{2}g_{\mu\nu}
+2(\Omega_{\tau}^{\sigma\tau}\Omega_{\nu\sigma\mu}-\Omega_{\nu}^{\sigma\tau}\Omega_{\tau\sigma\mu})\\
&+&\mathscr{L}_{3}g_{\mu\nu}+\frac{2}{l^2}(h_{\tau}^{\sigma\tau}h_{\nu\sigma\mu}-h_{\nu}^{\sigma\tau}h_{\tau\sigma\mu}).\nonumber\end{aligned}$$
Here $R_{\mu\nu}=R^{\sigma}_{\hspace{1mm}\mu\nu\sigma}$ is the Ricci tensor. Eq. (\[sec2-cs-lag-pal-eom-re-1a\]) is the equation of motion of the spin-2 field $g_{\mu\nu}$, which has the same form as the Einstein equation. Because the connection has a torsion, $R_{\mu\nu}$ is not symmetric in its indices [@Hehl:1976kj]. $\mathcal{T}_{\mu\nu}$ is the energy-momentum tensor contributed by the higher-spin fields, and it is also not symmetric. From Eq. (\[sec2-cs-lag-pal-eom\]), we can also obtain
$$\begin{aligned}
\label{sec2-cs-lag-pal-eom-re-2a}
-H^{\gamma}_{\hspace{1mm}\alpha\beta}&=&\frac{1}{l^2}(h_{\alpha}^{\gamma\tau}g_{\tau\beta}
-h_{\rho}^{\rho\tau}g_{\tau\alpha}\delta^{\gamma}_{\beta})
+(\alpha\leftrightarrow\beta),\\
\label{sec2-cs-lag-pal-eom-re-2b}
H^{\gamma}_{\alpha\beta}&=&\varepsilon^{\rho\sigma\gamma}(\nabla_{\rho}\Omega_{\sigma\alpha\beta}
+T^{\tau}_{\rho\sigma}\Omega_{\tau\alpha\beta}).\end{aligned}$$
Substituting the solution (\[sec2-cs-tor-assm-re-1-sol\]) of $\Omega_{\mu}^{\alpha\beta}$ into (\[sec2-cs-lag-pal-eom-re-2b\]), we can obtain the equations of motion for $h_{\mu\alpha\beta}$, though they do not have as compact an expression as the action (\[sec2-cs-lag-re-0d-1a\]). We see that Eqs. (\[sec2-cs-lag-pal-eom-re-2a\]) and (\[sec2-cs-lag-pal-eom-re-2b\]) have the same structure as Eqs. (\[sec2-cs-tor-assm-re-1\]) and (\[sec2-cs-tor-assm-re-2\]). From Eq. (\[sec2-cs-lag-pal-eom-re-2a\]), we have $$\begin{aligned}
\label{sec2-cs-tor-assm-re-2-sol-h}
\frac{1}{l^2}h_{\mu\alpha\beta}&=&\frac{1}{2}\bigr(g_{\alpha\sigma}H^{\sigma}_{\mu\beta}+g_{\beta\sigma}H^{\sigma}_{\mu\alpha}
-\frac{2}{3}g_{\alpha\beta}H^{\sigma}_{\mu\sigma}\bigr)\\
&-&\frac{1}{2}g_{\mu\tau}H^{\tau}_{\alpha\beta},\nonumber\end{aligned}$$ which is an equivalent formulation of Eq. (\[sec2-cs-lag-pal-eom-re-2a\]), and it is similar to Eq. (\[sec2-cs-tor-assm-re-1-sol\]).
*Duality-like Transformation*.We have noticed the similarity between Eq. (\[sec2-cs-tor-assm-re-2-sol-h\]) and Eq. (\[sec2-cs-tor-assm-re-1-sol\]), which indicates a duality-like transformation between $\Omega_{\mu}^{\alpha\beta}$ and $h_{\mu}^{\alpha\beta}$. To make this transformation transparent, we rewrite the Lagrangian $\mathscr{L}_{4}$ as $$\begin{aligned}
\label{sec2-cs-lag-re-0d-re-2}
\mathscr{L}_{4}&=&\frac{1}{2}\varepsilon^{\mu\nu\alpha}(\nabla_{\mu}\Omega^{\rho\sigma}_{\nu}
+T^{\tau}_{\mu\nu}\Omega^{\rho\sigma}_{\tau})h_{\alpha\rho\sigma}\\
&+&\frac{1}{2}\varepsilon^{\mu\nu\alpha}(\nabla_{\mu}h_{\nu\rho\sigma}
+T^{\tau}_{\mu\nu}h_{\tau\rho\sigma})\Omega^{\rho\sigma}_{\alpha}\nonumber\\
&+&\frac{1}{2}\frac{1}{\sqrt{-g}}\partial_{\mu}(\sqrt{-g}\varepsilon^{\mu\nu\alpha}\Omega_{\nu}^{\rho\sigma}h_{\alpha\rho\sigma}).\nonumber\end{aligned}$$ If we do not consider the divergence term in Eq. (\[sec2-cs-lag-re-0d-re-2\]), then the action (\[sec2-cs-lag-re-0\]) is invariant under the duality-like transformation $$\begin{aligned}
\label{sec2-cs-lag-re-0d-re-a}
\tilde{\Omega}^{\rho\sigma}_{\mu}=\frac{1}{l}h^{\rho\sigma}_{\mu},~~
\frac{1}{l}\tilde{h}^{\rho\sigma}_{\mu}=\Omega^{\rho\sigma}_{\mu}.\end{aligned}$$ We can further define
$$\begin{aligned}
\label{sec2-cs-lag-re-0d-de-a}
\Omega^{\hspace{1mm}\rho\sigma}_{\mu}&=&\frac{1}{\sqrt{2}}(U^{\rho\sigma}_{\mu}-V^{\rho\sigma}_{\mu}),\\
\label{sec2-cs-lag-re-0d-de-b}
\frac{1}{l}{h}^{\rho\sigma}_{\mu}&=&
\frac{1}{\sqrt{2}}(U^{\rho\sigma}_{\mu}+V^{\rho\sigma}_{\mu}),\end{aligned}$$
then $\mathscr{L}_{2}$ and $\mathscr{L}_{3}$ can be rewritten as $$\begin{aligned}
\label{sec3-cs-lag-re-0d-re-a}
\mathscr{L}_{2}&+&\mathscr{L}_{3}=g_{\alpha\beta}(U^{\alpha\sigma}_{\rho}U^{\beta\rho}_{\sigma}
-U^{\alpha\sigma}_{\sigma}U^{\beta\rho}_{\rho})\\
&+&g_{\alpha\beta}(V^{\alpha\sigma}_{\rho}V^{\beta\rho}_{\sigma}
-V^{\alpha\sigma}_{\sigma}V^{\beta\rho}_{\rho}),\nonumber\end{aligned}$$ and $\mathscr{L}_{4}$ can be rewritten as $$\begin{aligned}
\label{sec3-cs-lag-re-0d-re-b}
\mathscr{L}_{4}&=&\frac{l}{2}\varepsilon^{\mu\nu\alpha}(\nabla_{\mu}U^{\rho\sigma}_{\nu}
+T^{\tau}_{\mu\nu}U^{\rho\sigma}_{\tau})U_{\alpha\rho\sigma}\\
&-&\frac{l}{2}\varepsilon^{\mu\nu\alpha}(\nabla_{\mu}V_{\nu\rho\sigma}
+T^{\tau}_{\mu\nu}V_{\tau\rho\sigma})V^{\rho\sigma}_{\alpha}\nonumber\end{aligned}$$ up to the divergence term in Eq. (\[sec2-cs-lag-re-0d-re-2\]). The torsion constraint (\[sec2-cs-tor-assm-re-3\]) can be rewritten as $$\begin{aligned}
\label{sec2-cs-tor-assm-re-1-uv}
-T^{\gamma}_{\alpha\beta}=2\lambda^2
g_{\tau\mu}(U_{\alpha}^{\sigma\tau}U_{\beta\sigma\rho}-
V_{\beta}^{\sigma\tau}V_{\alpha\sigma\rho})\varepsilon^{\mu\rho\gamma}.\end{aligned}$$ From the above, we see that the cross terms of $U^{\alpha\beta}_{\mu}$ and $V^{\alpha\beta}_{\mu}$ are eliminated from Eqs. (\[sec3-cs-lag-re-0d-re-a\]) and (\[sec3-cs-lag-re-0d-re-b\]). So the action (\[sec2-cs-lag-re-0\]) can be interpreted as the spin-2 gravity interacting with two rank (2,1) tensor fields. However, $U^{\alpha\beta}_{\mu}$ and $V^{\alpha\beta}_{\mu}$ are not free fields, and their interaction is provided by the torsion constraint (\[sec2-cs-tor-assm-re-1-uv\]) through the covariant derivative. From Eqs. (\[sec2-cs-tor-assm-re-1\]) and (\[sec2-cs-lag-pal-eom-re-2a\]), we have
$$\begin{aligned}
\label{sec2-cs-lag-pal-eom-re-2a-u}
-\tilde{H}^{\gamma}_{\hspace{1mm}\alpha\beta}&=&\frac{1}{l}(U_{\alpha}^{\gamma\tau}g_{\tau\beta}
-U_{\rho}^{\rho\tau}g_{\tau\alpha}\delta^{\gamma}_{\beta})
+(\alpha\leftrightarrow\beta),\\
\label{sec2-cs-lag-pal-eom-re-2b-u}
\tilde{H}^{\gamma}_{\alpha\beta}&=&\varepsilon^{\rho\sigma\gamma}(\nabla_{\rho}U_{\sigma\alpha\beta}
+T^{\tau}_{\rho\sigma}U_{\tau\alpha\beta}).\end{aligned}$$
If we omit the torsion constraint (\[sec2-cs-tor-assm-re-1-uv\]), then Eq. (\[sec2-cs-lag-pal-eom-re-2a-u\]) is linear in $U_{\alpha}^{\beta\gamma}$. $\tilde{H}^{\gamma}_{\alpha\beta}$ can be interpreted as the dual of the Maxwell-like field strength $$\begin{aligned}
\label{sec2-cs-lag-pal-eom-re-2a-u-1}
\mathcal{F}_{\mu\nu\alpha\beta}=\nabla_{\mu}U_{\nu\alpha\beta}-\nabla_{\nu}U_{\mu\alpha\beta}.\end{aligned}$$ The right side of Eq. (\[sec2-cs-lag-pal-eom-re-2a-u\]) is the part of $U_{\alpha\beta\gamma}$ that is symmetric and traceless in its first two indices. So Eq. (\[sec2-cs-lag-pal-eom-re-2a-u\]) means that the dual of the field strength of $U_{\alpha\beta\gamma}$ equals minus its symmetric, traceless part in its first two indices. Similarly, from Eqs. (\[sec2-cs-tor-assm-re-1\]) and (\[sec2-cs-lag-pal-eom-re-2a\]), we also have
$$\begin{aligned}
\label{sec2-cs-lag-pal-eom-re-2a-v}
\tilde{K}^{\gamma}_{\hspace{1mm}\alpha\beta}&=&\frac{1}{l}(V_{\alpha}^{\gamma\tau}g_{\tau\beta}
-V_{\rho}^{\rho\tau}g_{\tau\alpha}\delta^{\gamma}_{\beta})
+(\alpha\leftrightarrow\beta),\\
\label{sec2-cs-lag-pal-eom-re-2b-v}
\tilde{K}^{\gamma}_{\alpha\beta}&=&\varepsilon^{\rho\sigma\gamma}(\nabla_{\rho}V_{\sigma\alpha\beta}
+T^{\tau}_{\rho\sigma}V_{\tau\alpha\beta}),\end{aligned}$$
which have the interpretations similar to Eqs. (\[sec2-cs-lag-pal-eom-re-2a-u\]) and (\[sec2-cs-lag-pal-eom-re-2b-u\]).
*Generalized diffeomorphism*.Now we discuss the potential symmetries of the action (\[sec2-cs-lag-re-0\]). These symmetries can be induced from the symmetries of the Chern-Simons action (\[sec2-cs-lag\]). If the boundary terms are negligible, then the CS action (\[sec2-cs-lag\]) is invariant under the infinitesimal $SL(3,R){\times}SL(3,R)$ gauge transformations
$$\begin{aligned}
\label{sec2-cs-gtr-a}
\delta{A}&=&d\zeta+[A,\zeta],\\
\label{sec2-cs-gtr-b}
\delta{\bar{A}}&=&d\bar{\zeta}+[\bar{A},\bar{\zeta}].\end{aligned}$$
In terms of the decomposition (\[sec2-cs-frame\]), we have $$\begin{aligned}
\label{sec2-cs-gtr-lor-a}
\delta{\omega}&=&d\Lambda+[\omega,\Lambda]+\frac{1}{l}[e,\xi],\\
\label{sec2-cs-gtr-diff-b}
\frac{1}{l}\delta{e}&=&d\xi+[\omega,\xi]+\frac{1}{l}[e,\Lambda],\end{aligned}$$ where we have defined $$\begin{aligned}
\label{sec2-cs-gtr-diff-a-de}
\xi=\frac{1}{2}(\zeta-\bar{\zeta}),~~\Lambda=\frac{1}{2}(\zeta+\bar{\zeta}).\end{aligned}$$ Now we focus on the transformation (\[sec2-cs-gtr-diff-b\]). $\Lambda$ and $\xi$ have the $SL(2,R)$ decomposition $$\begin{aligned}
\label{sec2-cs-gtr-diff-a-de-lx}
\Lambda=\Lambda^{a}J_a+\Lambda^{bc}Q_{bc},~~\xi=\xi^{a}J_a+\xi^{bc}Q_{bc},\end{aligned}$$ where $\Lambda^{ab}$ and $\xi^{ab}$ are symmetrical and traceless. If $\xi=0$, then Eq. (\[sec2-cs-gtr-diff-b\]) yields the local Lorentz transformations
$$\begin{aligned}
\label{sec2-cs-gtr-lor-a-re}
\delta_{\Lambda}{g_{\mu\nu}}&=&
4\lambda^2\left(h_{\mu}^{\rho\sigma}\Lambda_{\rho\tau}g^{\tau\theta}\varepsilon_{\sigma\theta\nu}+(\mu\leftrightarrow\nu)\right) ,\\
\label{sec2-cs-gtr-lor-b-re}
\delta_{\Lambda}{h_{\alpha\mu\nu}}&=&\left(\Lambda_{\mu\rho}g^{\rho\theta}\varepsilon_{\alpha\theta\nu}+(\mu\leftrightarrow\nu)\right)\\
&+&4\lambda^2\left({h}_{\alpha}^{\rho\sigma}{h}_{\mu}^{\tau\theta}
g_{\rho\nu}\Lambda_{\tau\beta}g^{\beta\gamma}\varepsilon_{\theta\sigma\gamma}+(\mu\leftrightarrow\nu)\right),\nonumber\end{aligned}$$
where $\Lambda_{\mu\nu}=\Lambda_{ab}e^{a}_{\mu}e^{b}_{\nu}$ is symmetric and satisfies the traceless condition $g^{\mu\nu}\Lambda_{\mu\nu}=0$. We see that the local Lorentz transformations depend only on the parameter $\Lambda_{\rho\sigma}$ and are independent of $\Lambda_{\mu}=\Lambda_{a}e^{a}_{\mu}$. This is consistent with the fact that $g_{\mu\nu}$ and $h_{\alpha\mu\nu}$ are $SL(2,R)$ invariant variables, but they are not $SL(3,R)$ invariant ones. On the other hand, if $\Lambda=0$, from Eq. (\[sec2-cs-gtr-diff-b\]), we can obtain the generalized diffeomorphism
$$\begin{aligned}
\label{sec2-cs-gtr-diff-a-re}
\frac{1}{l}\delta_{\xi}{g_{\mu\nu}}&=&\nabla_{\mu}\xi_{\nu}+\nabla_{\nu}\xi_{\mu}\\
&+&4\lambda^2\left(\Omega_{\mu}^{\tau\beta}\xi_{\tau\rho}g^{\rho\sigma}\varepsilon_{\beta\sigma\nu}
+(\mu\leftrightarrow\nu)\right) ,\nonumber\\
\label{sec2-cs-gtr-diff-b-re}
\frac{1}{l}\delta_{\xi}{h_{\alpha\mu\nu}}&=&\nabla_{\alpha}\xi_{\mu\nu}+\left(h_{\alpha\rho\mu}g^{\rho\gamma}
\nabla_{\nu}\xi_{\gamma}+(\mu\leftrightarrow\nu)\right)\\
&+&4\lambda^2\left(h_{\alpha\rho\mu}g^{\rho\gamma}
\Omega_{\nu}^{\tau\beta}\xi_{\tau\theta}g^{\theta\sigma}\varepsilon_{\beta\sigma\gamma}
+(\mu\leftrightarrow\nu)\right),\nonumber\end{aligned}$$
where $\nabla_{\mu}\xi_{\nu}=\partial_{\mu}\xi_{\nu}-\Gamma^{\rho}_{\mu\nu}\xi_{\rho}$ is the covariant derivative with the connection (\[sec2-cs-tor-assm-mcom-con\]). $\xi_{\mu}=\xi_{a}e^{a}_{\mu}$, and $\xi_{\mu\nu}=\xi_{ab}e^{a}_{\mu}e^{b}_{\nu}$ is symmetrical and traceless. $\Omega_{\nu}^{\tau\beta}$ is defined by Eq. (\[sec2-cs-tor-assm-re-1-sol\]). If $h_{\alpha\mu\nu}$ is small, then the second term of the right side of Eq. (\[sec2-cs-gtr-diff-a-re\]) is negligible. Eq. (\[sec2-cs-gtr-diff-a-re\]) yields the conventional diffeomorphism for the spin-2 gravity.
*Conclusions*.We have provided a metric-like formulation for the $SL(3,R){\times}SL(3,R)$ Chern-Simons theory using the $SL(2,R)$ invariant variables. This metric-like formulation can be interpreted as an Einstein-Cartan-Sciama-Kibble theory [@Hehl:1976kj], in which the torsion is determined by the higher-spin fields. The local Lorentz transformation and the generalized diffeomorphism can be expressed manifestly in terms of these metric-like fields. We also identify a duality-like transformation in this metric-like formulation. Because the Lie algebra of $SL(N,R)$ has a decomposition under its subalgebra $SL(2,R)$ similar to that of $SL(3,R)$, the $SL(2,R)$ variables used here could also be useful for finding a metric-like formulation for the $SL(N,R){\times}SL(N,R)$ Chern-Simons theory [@Campoleoni:2010zq], and the duality-like transformation discussed here could also be found in those theories.
[*Acknowledgments*]{}.This work was supported in part by Fondecyt (Chile) grant 1100287 and by Project Basal under Contract No. FB0821.
|
---
abstract: 'Current CNN-based solutions to salient object detection (SOD) mainly rely on the optimization of cross-entropy loss (CELoss). Then the quality of detected saliency maps is often evaluated in terms of F-measure. In this paper, we investigate an interesting issue: can we consistently use the F-measure formulation in both training and evaluation for SOD? By reformulating the standard F-measure, we propose the *relaxed F-measure* which is differentiable w.r.t the posterior and can be easily appended to the back of CNNs as the loss function. Compared to the conventional cross-entropy loss of which the gradients decrease dramatically in the saturated area, our loss function, named FLoss, holds considerable gradients even when the activation approaches the target. Consequently, the FLoss can continuously force the network to produce polarized activations. Comprehensive benchmarks on several popular datasets show that FLoss outperforms the state-of-the-art with a considerable margin. More specifically, due to the polarized predictions, our method is able to obtain high-quality saliency maps without carefully tuning the optimal threshold, showing significant advantages in real-world applications. Code and pretrained models are available at <http://kaizhao.net/fmeasure>.'
author:
- |
Kai Zhao^1^, Shanghua Gao^1^, Wenguan Wang^2^, Ming-Ming Cheng^1^[^1]\
^1^TKLNDST, CS, Nankai University ^2^Inception Institute of Artificial Intelligence\
[{kaiz.xyz,shanghuagao,wenguanwang.ai}@gmail.com,[email protected]]{}
bibliography:
- 'fmeasure.bib'
title: |
[\
]{}Optimizing the F-measure for Threshold-free Salient Object Detection
---
Introduction
============
We consider the task of salient object detection (SOD), where each pixel of a given image has to be classified as salient (outstanding) or not. The human visual system is able to perceive and process visual signals distinctively: interested regions are conceived and analyzed with high priority while other regions draw less attention. This capacity has been long studied in the computer vision community in the name of ‘salient object detection’, since it can ease the procedure of scene understanding [@borji2015salient]. The performance of modern salient object detection methods is often evaluated in terms of F-measure. Rooted from information retrieval [@van1974foundation], the F-measure is widely used as an evaluation metric in tasks where elements of a specified class have to be retrieved, especially when the relevant class is rare. Given the per-pixel prediction $\hat{Y} (\hat{y}_i \!\in\! [0, 1], i\!=\!1,...,|Y|)$ and the ground-truth saliency map $Y (y_i \!\in\! \{0, 1\}, i\!=\!1,...,|Y|)$, a threshold $t$ is applied to obtain the binarized prediction $\dot{Y}^t (\dot{y}^t_i \!\in\! \{0, 1\}, i\!=\!1,...,|Y|)$. The F-measure is then defined as the harmonic mean of precision and recall: $$\small
\!\!F(Y, \dot{Y}^t) \!=\!
(1\!+\!\beta^2)\frac{\text{precision}(Y, \dot{Y}^t) \cdot \text{recall}(Y, \dot{Y}^t)}
{\beta^2 \text{precision}(Y, \dot{Y}^t) \!+\! \text{recall}(Y, \dot{Y}^t)},
\label{eq:def-f}$$ where $\beta^2\!>\!0$ is a balance factor between precision and recall. When $\beta^2\!>\!1$, the F-measure is biased in favour of recall and otherwise in favour of precision. Most CNN-based solutions for SOD [@hou2017deeply; @li2016deep; @wang2016saliency; @fan2019shifting; @wang2019iterative; @zhao2019EGNet; @wang2019salient] mainly rely on the optimization of *cross-entropy loss* (CELoss) in an FCN [@long2015fully] architecture, and the quality of saliency maps is often assessed by the F-measure. Optimizing the pixel-independent CELoss can be regarded as minimizing the mean absolute error (MAE=$\frac{1}{N}\sum_i^N |\hat{y}_i - y_i|$), because in both circumstances each prediction/ground-truth pair works independently and contributes to the final score equally. If the data labels have biased distribution, models trained with CELoss would make biased predictions towards the majority class. Therefore, SOD models trained with CELoss hold biased prior and tend to predict unknown pixels as the background, consequently leading to low-recall detections. The F-measure [@van1974foundation] is a more sophisticated and comprehensive evaluation metric which combines precision and recall into a single score and automatically offsets the unbalance between positive/negative samples.
In this paper, we provide a uniform formulation in both training and evaluation for SOD. By directly taking the evaluation metric, *i.e.* the F-measure, as the optimization target, we perform F-measure maximization in an end-to-end manner. To perform end-to-end learning, we propose the *relaxed F-measure* to overcome the non-differentiability of the standard F-measure formulation. The proposed loss function, named FLoss, is decomposable w.r.t the posterior $\hat{Y}$ and thus can be appended to the back of a CNN as supervision without effort. We test the FLoss on several state-of-the-art SOD architectures and witness a visible performance gain. Furthermore, the proposed FLoss holds considerable gradients even in the saturated area, resulting in polarized predictions that are stable against the threshold. Our proposed FLoss enjoys three favorable properties:
- Threshold-free salient object detection. Models trained with FLoss produce contrastive saliency maps in which the foreground and background are clearly separated. Therefore, FLoss can achieve high performance under a wide range of thresholds.
- Being able to deal with unbalanced data. Defined as the harmonic mean of precision and recall, the F-measure is able to establish a balance between samples of different classes. We experimentally evidence that our method can find a better compromise between precision and recall.
- Fast convergence. Our method quickly learns to focus on salient object areas after only hundreds of iterations, showing fast convergence speed.
Related Work
============
We review several CNN-based architectures for SOD and the literature related to F-measure optimization.
#### Salient Object Detection (SOD).
The convolutional neural network (CNN) is proven to be dominant in many sub-areas of computer vision. Significant progress has been achieved since the introduction of CNNs to SOD. The DHS net [@liu2016dhsnet] is one of the pioneers of using CNNs for SOD. DHS first produces a coarse saliency map with global cues, including contrast and objectness. Then the coarse map is progressively refined with a hierarchical recurrent CNN. The emergence of the fully convolutional network (FCN) [@long2015fully] provides an elegant way to perform end-to-end pixel-wise inference. DCL [@li2016deep] uses a two-stream architecture to process contrast information at both pixel and patch levels. The FCN-based sub-stream produces a saliency map with pixel-wise accuracy, and the other network stream performs inference on each object segment. Finally, a fully connected CRF [@krahenbuhl2011efficient] is used to combine the pixel-level and segment-level semantics.
Rooted in HED [@xie2015holistically] for edge detection, aggregating multi-scale side-outputs has proven effective in refining dense predictions, especially when detailed local structures need to be preserved. In HED-like architectures, deeper side-outputs capture rich semantics and shallower side-outputs contain high-resolution details. Combining these representations of different levels leads to significant performance improvements. DSS [@hou2017deeply] introduces deep-to-shallow short connections across different side-outputs to refine the shallow side-outputs with deep semantic features. The deep-to-shallow short connections enable the shallow side-outputs to distinguish real salient objects from the background and meanwhile retain the high resolution. Liu *et al.* [@Liu2019PoolSal] design a pooling-based module to efficiently fuse convolutional features from a top-down pathway. The idea of imposing top-down refinement has also been adopted in Amulet [@zhang2017amulet], and enhanced by Zhao *et al.* [@zhao2018hifi] with bi-directional refinement. Later, Wang *et al.* [@wang2018salient] propose a visual attention-driven model that bridges the gap between SOD and eye fixation prediction. The methods mentioned above try to refine SOD by introducing more powerful network architectures, from recurrent refinement networks to multi-scale side-output fusion. We refer the readers to a recent survey [@BorjiCVM2019] for more details.
#### F-measure Optimization.
Despite having been utilized as a common performance metric in many application domains, the F-measure did not draw much attention as an optimization target until very recently. The works aiming at optimizing the F-measure can be divided into two subcategories [@dembczynski2013optimizing]: (a) structured loss minimization methods such as [@petterson2010reverse; @petterson2011submodular] which optimize the F-measure as the target during training; and (b) plug-in rule approaches which optimize the F-measure during the inference phase [@jansche2007maximum; @dembczynski2011exact; @quevedo2012multilabel; @nan2012optimizing].
Much of the attention has been drawn to the study of the latter subcategory: finding an optimal threshold value which leads to a maximal F-measure given the predicted posterior $\hat{Y}$. There are few articles about optimizing the F-measure during the training phase. The authors of [@petterson2010reverse] optimize the F-measure indirectly by maximizing a loss function associated with the F-measure. Then, in their subsequent work [@petterson2011submodular], they construct an upper bound of the discrete F-measure and maximize the F-measure by optimizing its upper bound. These previous studies either work as post-processing or are non-differentiable w.r.t the posteriors, making them hard to apply in a deep learning framework.
Optimizing the F-measure for SOD
================================
The Relaxed F-measure
---------------------
In the standard F-measure, the true positive, false positive and false negative are defined as the number of corresponding samples: $$\begin{split}
TP(\dot{Y}^t, Y) &= \sum\nolimits_i 1(y_i==1 \ \text{and} \ \dot{y}^t_i==1), \\
FP(\dot{Y}^t, Y) &= \sum\nolimits_i 1(y_i==0 \ \text{and} \ \dot{y}^t_i==1), \\
FN(\dot{Y}^t, Y) &= \sum\nolimits_i 1(y_i==1 \ \text{and} \ \dot{y}^t_i==0), \\
\end{split}
\label{eq:tpfp0}$$ where $\dot{Y}^t$ is the prediction binarized by threshold $t$ and $Y$ is the ground-truth saliency map. $1(\cdot)$ is an indicator function that evaluates to $1$ if its argument is true and $0$ otherwise.
To incorporate the F-measure into CNN and optimize it in an end-to-end manner, we define a decomposable F-measure that is differentiable over posterior $\hat{Y}$. Based on this motivation, we reformulate the true positive, false positive and false negative based on the continuous posterior $\hat{Y}$: $$\begin{split}
TP(\hat{Y}, Y) &= \sum\nolimits_i \hat{y}_i \cdot y_i, \\
FP(\hat{Y}, Y) &= \sum\nolimits_i \hat{y}_i \cdot (1 - y_i), \\
FN(\hat{Y}, Y) &= \sum\nolimits_i (1-\hat{y}_i) \cdot y_i \ . \\
\end{split}
\label{eq:tpfp}$$ Given the definitions in Eq. \[eq:tpfp\], precision $p$ and recall $r$ are: $$p(\hat{Y}, Y) = \frac{TP}{TP + FP},\quad r(\hat{Y}, Y) = \frac{TP}{TP + FN}.
\label{pr}$$ Finally, our *relaxed F-measure* can be written as: $$\begin{split}
F(\hat{Y}, Y) &= \frac{(1+\beta^2) p \cdot r}{\beta^2 p + r} ,\\
&= \frac{(1 + \beta^2)TP}{\beta^2(TP + FN) + (TP + FP)} ,\\
&= \frac{(1 + \beta^2)TP}{H},
\end{split}
\label{f}$$ where $H\! =\! \beta^2(TP + FN) + (TP + FP)$. Due to the relaxation in Eq. \[eq:tpfp\], Eq. \[f\] is decomposable w.r.t the posterior $\hat{Y}$, therefore can be integrated in CNN architecture trained with back-prop.
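As a reference implementation of the quantities above, the following is a minimal NumPy sketch (ours); the arrays are assumed to be flattened saliency maps.

```python
import numpy as np

def relaxed_f_measure(y_hat, y, beta2=0.3):
    """Relaxed F-measure (Eq. f) computed directly from the continuous posterior."""
    tp = np.sum(y_hat * y)               # relaxed true positives (Eq. eq:tpfp)
    fp = np.sum(y_hat * (1 - y))         # relaxed false positives
    fn = np.sum((1 - y_hat) * y)         # relaxed false negatives
    h = beta2 * (tp + fn) + (tp + fp)    # the denominator H
    return (1 + beta2) * tp / h
```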
Maximizing F-measure in CNNs
----------------------------
In order to maximize the *relaxed F-measure* in CNNs in an end-to-end manner, we define our proposed F-measure based loss (FLoss) function $\mathcal{L}_{F}$ as: $$\mathcal{L}_{F}(\hat{Y}, Y) = 1 - F = 1 - \frac{(1 + \beta^2)TP}{H}\label{eq:floss}.$$ Minimizing $\mathcal{L}_{F}(\hat{Y}, Y)$ is equivalent to maximizing the *relaxed F-measure*. Note again that $\mathcal{L}_{F}$ is calculated directly from the raw prediction $\hat{Y}$ without thresholding. Therefore, $\mathcal{L}_{F}$ is differentiable over the prediction $\hat{Y}$ and can be plugged into CNNs. The partial derivative of loss $\mathcal{L}_{F}$ over network activation $\hat{Y}$ at location $i$ is: $$\begin{split}
\frac{\partial \mathcal{L}_{F}}{\partial \hat{y}_i}
&= -\frac{\partial F}{\partial \hat{y}_i} \\
&= -\Big(\frac{\partial F}{\partial TP}\cdot \frac{\partial TP}{\partial \hat{y}_i} +
\frac{\partial F}{\partial H }\cdot \frac{\partial H }{\partial \hat{y}_i}\Big) \\
&= -\Big(\frac{(1+\beta^2)y_i}{H} - \frac{(1+\beta^2)TP}{H^2}\Big) \\
&= \frac{(1+\beta^2)TP}{H^2} - \frac{(1+\beta^2)y_i}{H} .\\
\end{split}\label{eq:grad-floss}$$
There is another alternative to Eq. \[eq:floss\] which maximize the log-likelihood of F-measure: $$\mathcal{L}_{\log F}(\hat{Y}, Y) = -\log(F)\label{eq:logfloss},$$ and the corresponding gradient is $$\frac{\partial \mathcal{L}_{\log F}}{\partial \hat{y}_i} =
\frac{1}{F}\left[\frac{(1+\beta^2)TP}{H^2} - \frac{(1+\beta^2)y_i}{H}\right]. \\
\label{eq:grad-logfloss}$$ We will theoretically and experimentally analyze the advantage of FLoss against Log-FLoss and CELoss in terms of producing polarized and high-contrast saliency maps.
FLoss vs Cross-entropy Loss {#sec:cel-vs-floss}
---------------------------
To demonstrate the superiority of our FLoss over the alternative Log-FLoss and the *cross-entropy loss* (CELoss), we compare the definition, gradient and surface plots of these three loss functions. The definition of CELoss is: $$\mathcal{L}_{CE}(\hat{Y}, Y) \!=\! -\sum\nolimits_i^{|Y|}
\left(y_i \log{\hat{y}_i} + (1\!-\!y_i) \log{(1\!-\!\hat{y}_i)}\right),
\label{eq:celoss}$$ where $i$ is the spatial location of the input image and $|Y|$ is the number of pixels of the input image. The gradient of $\mathcal{L}_{CE}$ w.r.t prediction $\hat{y}_i$ is: $$\frac{\partial \mathcal{L}_{CE}}{\partial \hat{y}_i} = \frac{1 - y_i}{1 - \hat{y}_i} - \frac{y_i}{\hat{y}_i}.
\label{eq:grad-celoss}$$
[Figure \[fig:loss-surface\] (figures/loss-surface-crop): surface plots comparing FLoss (Eq. \[eq:floss\]), Log-FLoss (Eq. \[eq:logfloss\]) and CELoss (Eq. \[eq:celoss\]).]
As revealed in Eq. \[eq:grad-floss\] and Eq. \[eq:grad-celoss\], the gradient of CELoss $\frac{\partial \mathcal{L}_{CE}}{\partial \hat{y}_i}$ relies only on the prediction/ground-truth of a single pixel $i$; whereas in FLoss $\frac{\partial \mathcal{L}_{F}}{\partial \hat{y}_i}$ is globally determined by the prediction and ground-truth of ALL pixels in the image. We further compare the surface plots of FLoss, Log-FLoss and CELoss in a two-point binary classification problem. The results are in Fig. \[fig:loss-surface\]. The two spatial axes represent the predictions $\hat{y}_0$ and $\hat{y}_1$, and the $z$ axis indicates the loss value.
As shown in Fig. \[fig:loss-surface\], the gradient of FLoss is different from that of CELoss and Log-FLoss in two aspects: (1) Limited gradient: the FLoss holds limited gradient values even when the predictions are far away from the ground-truth. This is crucial for CNN training because it prevents the notorious gradient explosion problem. Consequently, FLoss allows larger learning rates in the training phase, as evidenced by our experiments. (2) Considerable gradients in the saturated area: in CELoss, the gradient decays when the prediction gets closer to the ground-truth, while FLoss holds considerable gradients even in the saturated area. This will force the network to have polarized predictions. Saliency detection examples in Fig. \[fig:examples\] illustrate the ‘high-contrast’ and polarized predictions.
Experiments and Analysis
========================
Experimental Configurations
---------------------------
**Dataset and data augmentation.** We uniformly train our model and competitors on the MSRA-B [@liu2011learning] training set for a fair comparison. The MSRA-B dataset with 5000 images in total is equally split into training/testing subsets. We test the trained models on 5 other SOD datasets: ECSSD [@yan2013hierarchical], HKU-IS [@li2015visual], PASCALS [@li2014secrets], SOD [@movahedi2010design], and DUT-OMRON [@movahedi2010design]. More statistics of these datasets are shown in Table \[tab:dset-stats\]. It’s worth mentioning that the difficulty of a dataset is determined by many factors, such as the number of images, the number of objects in one image, the contrast of salient objects w.r.t the background, the complexity of salient object structures, the center bias of salient objects and the size variance of images, *etc*. Analyzing these details is beyond the scope of this paper; we refer the readers to [@dpfan2018soc] for more analysis of datasets.
Data augmentation is critical to generating sufficient data for training deep CNNs. We fairly perform data augmentation for the original implementations and their FLoss variants. For the DSS [@hou2017deeply] and DHS [@liu2016dhsnet] architectures we perform only horizontal flip on both training images and saliency maps just as DSS did. Amulet [@zhang2017amulet] only allows $256\!\times\!256$ inputs. We randomly crop/pad the original data to get square images, then resize them to meet the shape requirement.
**Network architecture and hyper-parameters.** We test our proposed FLoss on 3 baseline methods: Amulet [@zhang2017amulet], DHS [@liu2011learning] and DSS [@hou2017deeply]. To verify the effectiveness of FLoss (Eq. \[eq:floss\]), we replace the loss functions of the original implementations with FLoss and keep all other configurations unchanged. As explained in Sec. \[sec:cel-vs-floss\], the FLoss allows a larger base learning rate due to limited gradients. We use a base learning rate $10^4$ times larger than the original settings. For example, in DSS the base learning rate is $10^{-8}$, while in our F-DSS, the base learning rate is $10^{-4}$. All other hyper-parameters are consistent with the original implementations for a fair comparison.
**Evaluation metrics.** We evaluate the performance of saliency maps in terms of maximal F-measure (MaxF), mean F-measure (MeanF) and mean absolute error (MAE = $\frac{1}{N}\sum_i^N |\hat{y}_i - y_i|$). The factor $\beta^2$ in Eq. \[eq:def-f\] is set to 0.3 as suggested by [@achanta2009frequency; @hou2017deeply; @li2016deep; @liu2016dhsnet; @wang2016saliency]. By applying a series of thresholds $t\in \mathcal{T}$ to the saliency map $\hat{Y}$, we obtain binarized saliency maps $\dot{Y}^t$ with different precisions, recalls and F-measures.
Then the optimal threshold $t_o$ is obtained by exhaustively searching the testing set: $$t_o = {\mathop{\mathrm{argmax}}\limits}_{t\in \mathcal{T}} F(Y, \dot{Y}^t).
\label{eq:optimal-t}$$
Finally, we binarize the predictions with $t_o$ and evaluate the best F-measure: $$\text{MaxF} = F(Y, \dot{Y}^{t_o}),
\label{eq:maxf}$$ where $\dot{Y}^{t_o}$ is a binary saliency map binarized with $t_o$. The MeanF is the average F-measure under different thresholds: $$\text{MeanF} = \frac{1}{|\mathcal{T}|}\sum_{t\in \mathcal{T}} F(Y, \dot{Y}^t),
\label{eq:meanf}$$ where $\mathcal{T}$ is the collection of possible thresholds.
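A sketch of this evaluation protocol (ours) for a single prediction/ground-truth pair is given below; note that the paper searches the optimal threshold over the whole testing set, whereas this toy version sweeps thresholds per map for brevity.

```python
import numpy as np

def evaluate_map(y_hat, y, beta2=0.3, eps=1e-12):
    """MaxF (Eq. eq:maxf), MeanF (Eq. eq:meanf) and MAE for one saliency map."""
    thresholds = np.linspace(0.0, 1.0, 256)          # the threshold set \mathcal{T}
    f_scores = []
    for t in thresholds:
        y_bin = (y_hat >= t).astype(np.float64)
        tp = np.sum(y_bin * y)
        p = tp / (np.sum(y_bin) + eps)
        r = tp / (np.sum(y) + eps)
        f_scores.append((1 + beta2) * p * r / (beta2 * p + r + eps))
    f_scores = np.asarray(f_scores)
    mae = np.mean(np.abs(y_hat - y))
    return f_scores.max(), f_scores.mean(), mae      # MaxF, MeanF, MAE
```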
Log-FLoss vs FLoss
------------------
First, we compare FLoss with its alternative, namely Log-FLoss defined in Eq. \[eq:logfloss\], to justify our choice. As analyzed in Sec. \[sec:cel-vs-floss\], FLoss enjoys the advantage of having large gradients in the saturated area, which cross-entropy loss and Log-FLoss do not have.
[Figure \[fig:examples-floss-vs-logfloss\] (figures/floss-vs-logfloss3): example saliency maps produced by models trained with Log-FLoss and FLoss.]
[Figure \[fig:examples\] (figures/detection-examples): example detection results; columns include Image, GT, DHS [@liu2016dhsnet], Amulet [@zhang2017amulet], F-Amulet, DSS [@hou2017deeply], and the corresponding FLoss counterparts.]
To experimentally verify our assumption that FLoss will produce high-contrast predictions, we train the DSS [@hou2017deeply] model with FLoss and Log-FLoss, respectively. The training data is MSRA-B [@liu2011learning] and the hyper-parameters are kept unchanged from the original implementation, except for the base learning rate. We adjust the base learning rate to $10^{-4}$ since our method accepts a larger learning rate, as explained in Sec. \[sec:cel-vs-floss\]. Quantitative results are in Table \[tab:floss-vs-logfloss\] and some example detected saliency maps are shown in Fig. \[fig:examples-floss-vs-logfloss\].
Although both Log-FLoss and FLoss use the F-measure as the maximization target, FLoss derives polarized predictions with high foreground-background contrast, as shown in Fig. \[fig:examples-floss-vs-logfloss\]. The same conclusion can be drawn from Table \[tab:floss-vs-logfloss\], where FLoss achieves a higher mean F-measure, which reveals that FLoss achieves a higher F-measure score under a wide range of thresholds.
Evaluation results on open Benchmarks
-------------------------------------
We compare the proposed method with several baselines on 5 popular datasets. Some example detection results are shown in Fig. \[fig:examples\] and comprehensive quantitative comparisons are in Table \[tab:quantitative\]. In general, FLoss-based methods can obtain considerable improvements compared with their cross-entropy loss (CELoss) based counterparts especially in terms of mean F-measure and MAE. This is mainly because our method is stable against the threshold, leading to high-performance saliency maps under a wide threshold range. In our detected saliency maps, the foreground (salient objects) and background are well separated, as shown in Fig. \[fig:examples\] and explained in Sec. \[sec:cel-vs-floss\].
Threshold Free Salient Object Detection {#sec:thres-free}
---------------------------------------
State-of-the-art SOD methods [@hou2017deeply; @li2016deep; @liu2016dhsnet; @zhang2017amulet] often evaluate maximal F-measure as follows: (a) Obtain the saliency maps $\hat{Y}_i$ with pretrained model; (b) Tune the best threshold $t_o$ by exhaustive search on the testing set (Eq. \[eq:optimal-t\]) and binarize the predictions with $t_o$; (c) Evaluate the maximal F-measure according to Eq. \[eq:maxf\].
There is an obvious flaw in the above procedure: the optimal threshold is obtained via an exhaustive search on the testing set. Such a procedure is impractical for real-world applications as we would not have annotated testing data. Moreover, even if we tuned the optimal threshold on one dataset, it cannot be widely applied to other datasets.
[Figure \[fig:thres-free\]: (a) F-measure under different thresholds (figures/f-thres); (b) mean and variance of the optimal threshold $t_o$ across different datasets (figures/thres-variation).]
We further analyze the sensitivity of methods against thresholds in two aspects: (1) model performance under different thresholds, which reflects the stability of a method against threshold change, (2) the mean and variance of optimal threshold $t_o$ on different datasets, which represent the generalization ability of $t_o$ tuned on one dataset to others.
Fig. \[fig:thres-free\] (a) illustrates the F-measure w.r.t different thresholds. For most methods without FLoss, the F-measure changes sharply with the threshold, and the maximal F-measure (MaxF) presents only in a narrow threshold span, while FLoss-based methods are almost immune to changes of the threshold. Fig. \[fig:thres-free\] (b) reflects the mean and variance of $t_o$ across different datasets. Conventional methods (DHS, DSS, Amulet) present unstable $t_o$ on different datasets, as evidenced by their large variances, while the $t_o$ of FLoss-based methods (F-DHS, F-Amulet, F-DSS) stays nearly unchanged across different datasets and different backbone network architectures. In conclusion, the proposed FLoss is stable against the threshold $t$ in three aspects: (1) it achieves high performance under a wide range of thresholds; (2) the optimal threshold $t_o$ tuned on one dataset can be transferred to others, because $t_o$ varies only slightly across different datasets; and (3) $t_o$ obtained from one backbone architecture can be applied to other architectures.
The Label-unbalancing Problem in SOD
------------------------------------
![Precision, recall and maximal F-measure (MaxF) of DSS (**- - -**) and F-DSS (**—**) under different thresholds. DSS tends to predict unknown pixels as the majority class–the background, resulting in high precision but low recall. FLoss is able to find a better compromise between precision and recall. []{data-label="fig:prf-thres"}](figures/prf-thres2){width="0.75\linewidth"}
The foreground and background are unbalanced in SOD, where most pixels belong to the non-salient regions. The unbalanced training data will lead the model to a local minimum that tends to predict unknown pixels as the background. Consequently, the recall will become a bottleneck to the performance during evaluations, as illustrated in Fig. \[fig:prf-thres\].
Although assigning loss weights to the positive/negative samples is a simple way to offset the unbalancing problem, an additional experiment in Table \[tab:balance\] reveals that our method performs better than simply assigning loss weights. We define the *balanced cross-entropy loss* with weighting factors for positive/negative samples: $$\begin{split}
\mathcal{L}_{balance} = -\sum\nolimits_i^{|Y|} \big(&w_1 \cdot y_i\log{\hat{y_i}} + \\
&w_0 \cdot (1-y_i)\log{(1-\hat{y_i})}\big).
\end{split}
\label{eq:balance-cross-entropy}$$ The loss weights for positive/negative samples are determined by the positive/negative proportion in a mini-batch: $w_1 = \frac{1}{|Y|}\sum_i^{|Y|} 1(y_i\!==\!0)$ and $w_0 = \frac{1}{|Y|}\sum_i^{|Y|} 1(y_i\!==\!1)$, as suggested in [@xie2015holistically] and [@shen2015deepcontour].
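For reference, a minimal sketch (ours) of this weighting scheme; the clipping constant is an added assumption for numerical stability and is not part of Eq. \[eq:balance-cross-entropy\].

```python
import numpy as np

def balanced_cross_entropy(y_hat, y, eps=1e-7):
    """Balanced cross-entropy (Eq. eq:balance-cross-entropy) with per-batch class weights."""
    w1 = np.mean(y == 0)                  # weight on positives = fraction of negatives
    w0 = np.mean(y == 1)                  # weight on negatives = fraction of positives
    y_hat = np.clip(y_hat, eps, 1 - eps)  # avoid log(0)
    return -np.sum(w1 * y * np.log(y_hat) + w0 * (1 - y) * np.log(1 - y_hat))
```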
The Compromise Between Precision and Recall
-------------------------------------------
[Table \[tab:balance\]: comparison between the weighted cross-entropy (Eq. \[eq:balance-cross-entropy\]) and the proposed FLoss.]
Recall and precision are two conflicting metrics. In some applications, we care about recall more than precision, while in other tasks precision may be more important than recall. The $\beta^2$ in Eq. \[eq:def-f\] balances the bias between precision and recall when evaluating the performance of specific tasks. For example, recent studies on edge detection [@bsds500; @xie2015holistically; @shen2015deepcontour] use $\beta^2=1$, indicating equal consideration of precision and recall, while saliency detection [@achanta2009frequency; @hou2017deeply; @li2016deep; @liu2016dhsnet; @wang2016saliency] usually uses $\beta^2=0.3$ to emphasize the precision over the recall.
As an optimization target, the FLoss should also be able to balance the favor between precision and recall. We train models with different $\beta^2$ and comprehensively evaluate their performances in terms of precision, recall and F-measure. Results in Fig. \[fig:pr-beta\] reveal that $\beta^2$ is a bias adjuster between precision and recall: larger $\beta^2$ leads to higher recall while lower $\beta^2$ results in higher precision.
![Precision and Recall of models trained under different $\beta^2$ (Eq. \[eq:def-f\]). The precision decreases as $\beta^2$ grows, whereas the recall increases. This characteristic gives us much flexibility to adjust the balance between recall and precision: use a larger $\beta^2$ in a recall-first application and a lower $\beta^2$ otherwise. []{data-label="fig:pr-beta"}](figures/pr-beta){width="0.75\linewidth"}
Faster Convergence and Better Performance
-----------------------------------------
In this experiment, we train three state-of-the-art saliency detectors (Amulet [@zhang2017amulet], DHS [@liu2016dhsnet] and DSS [@hou2017deeply]) and their FLoss counterparts. We then plot the performance of all the methods at each checkpoint to determine the convergence speed and the converged performance of the respective models. All the models are trained on the MB [@liu2011learning] dataset and tested on the ECSSD [@yan2013hierarchical] dataset. The results are shown in Fig.\[fig:f-iter\].
[figures/f-iter-multiple]{}
We observe that our FLoss offers a per-iteration performance improvement for all three saliency models. We also find that the FLoss-based methods quickly learn to focus on the salient object area and achieve a high F-measure score after hundreds of iterations, while cross-entropy based methods produce blurry outputs and cannot localize salient areas very precisely. As shown in Fig. \[fig:f-iter\], FLoss-based methods converge faster than their cross-entropy competitors and reach a higher converged performance.
Conclusion
==========
In this paper, we propose to directly maximize the F-measure for salient object detection. We introduce the FLoss, which is differentiable w.r.t. the predicted posteriors, as the optimization objective of CNNs. The proposed method achieves better performance by better handling biased data distributions. Moreover, our method is stable against the threshold and able to produce high-quality saliency maps over a wide threshold range, showing great potential in real-world applications. By adjusting the $\beta^2$ factor, one can easily tune the compromise between precision and recall, which provides the flexibility to deal with various applications. Comprehensive benchmarks on several popular datasets illustrate the advantages of the proposed method.
#### Future work.
We plan to improve the performance and efficiency of the proposed method by using recent backbone models, e.g., [@gao2019res2net; @MobileNetV2]. Besides, the FLoss is potentially helpful for other binary dense prediction tasks such as edge detection [@RcfEdgePami2019], shadow detection [@Hu_2018_CVPR] and skeleton detection [@zhao2018hifi].
#### Acknowledgment.
This research was supported by NSFC (61572264, 61620106008), the national youth talent support program, and Tianjin Natural Science Foundation (17JCJQJC43700, 18ZXZNGX00110).
[^1]: M.M. Cheng is the corresponding author.
|
---
abstract: 'By studying the Hawking radiation of the most general static spherically symmetric black hole arising from scalar and Dirac particles tunnelling, we find the Hawking temperature is invariant in the general coordinate representation (\[arbitrary1\]), which satisfies two conditions: a) its radial coordinate transformation is regular at the event horizon; and b) there is a time-like Killing vector.'
author:
- Chikun Ding and Jiliang Jing
title: 'What kinds of coordinate can keep the Hawking temperature invariant for the static spherically symmetric black hole?'
---
[^1]
=0.65 cm
Introduction
============
In recent years, a semi-classical method of modeling Hawking radiation as a tunnelling effect has been developed and has excited a lot of interest [@man; @man1; @wil; @ag; @sh1; @sh2; @sh3; @mn; @vagenas; @par; @par1; @par2; @pkra; @pkra1; @pkra2; @arz; @qqj; @zz; @sqw; @ajm; @pm1; @pm]. Tunnelling provides not only a useful verification of the thermodynamic properties of black holes but also an alternate conceptual means for understanding the underlying physical process of black hole radiation. In the tunnelling approach, the particles are allowed to follow classically forbidden trajectories, starting just behind the horizon and travelling onward to infinity. The particles must then necessarily travel back in time, since the horizon is locally to the future of the static or stationary external region. The classical one-particle action becomes complex, signaling the classical impossibility of the motion, and gives the amplitude an imaginary part, which provides a semi-classical approximation to the free field propagator. In general, the tunnelling methods involve calculating the imaginary part of the action $I$ for the (classically forbidden) process of s-wave emission across the horizon, which in turn is related to the Boltzmann factor for emission at the Hawking temperature, i.e. $$\begin{aligned}
\Gamma\propto e^{-2\text{Im}I}=
e^{-E/T_H},\end{aligned}$$where $T_H$ is the Hawking temperature of the black hole, $E$ is the energy of the tunnelling particles.
There are two different approaches that are used to calculate the imaginary part of the action for the emitted particle. The first method developed was the Null Geodesic Method used by Parikh and Wilczek [@wil]; the other approach is the Hamilton-Jacobi Ansatz used by Angheben [*et al*]{} [@ag], which is an extension of the complex path analysis of Padmanabhan [*et al*]{} [@sh1; @sh2; @sh3]. In the Hamilton-Jacobi ansatz it is assumed that the action of the emitted scalar particle satisfies the relativistic Hamilton-Jacobi equation. From the symmetries of the metric one picks an appropriate ansatz for the form of the action, inserts it into the relativistic Hamilton-Jacobi equation, and solves for the action.
Since a black hole has a well defined temperature it should radiate all types of particles like a black body at that temperature. The emission spectrum therefore contains particles of all spins such as Dirac particles. In this paper, we will use the Hamilton-Jacobi ansatz method to calculate the Hawking temperature.
Can the Hawking temperature remain invariant under any coordinate transformation? At first glance, one would expect so. However, this invariance is lost in the following isotropic coordinates [@ag; @mn] for the Schwarzschild black hole $$\begin{aligned}
\label{isotropic} t\rightarrow t,~~r\rightarrow
\rho,~~\ln \rho=\int \frac{dr}{r\sqrt{1-\frac{2M}{r}}}.
\end{aligned}$$And so the line element of the Schwarzschild black hole becomes$$\begin{aligned}
ds^2=-\left(\frac{2\rho-M}{2\rho+M}\right)^2dt^2+\left(\frac{2\rho+M}{2\rho}\right)^4d\rho^2
+\frac{(2\rho+M)^4}{16\rho^2}d\Omega^2,
\end{aligned}$$ and the horizon $\rho_H=M/2.$ Substituting it and $\phi=e^{i[-Et+W(\rho)+J(\theta,\varphi)]/\hbar}$ into Klein-Gordon equation$$\begin{aligned}
\label{kg}
\frac{1}{\sqrt{-g}}\partial_\mu(\sqrt{-g}g^{\mu\nu}
\partial\nu\phi)
-\frac{m^2}{\hbar^2}\phi=0,
\end{aligned}$$we can obtain$$\begin{aligned}
\text{Im}W_{\pm}(\rho)&=&\pm\text{Im}\left[ \int
\frac{(2\rho+M)^3d\rho}{4\rho^2(2\rho-M)}\sqrt{E^2-(\frac{2\rho-M}
{2\rho+M})^2(m^2+g^{ij}J_iJ_j)}\right]\nonumber\\
&=&\pm4\pi ME.
\end{aligned}$$The probability is[@sh1; @sh2; @sh3]$$\begin{aligned}
\Gamma=\frac{\Gamma_{out}}{\Gamma_{in}}\propto
\exp\Big[-4\text{Im}W_+\Big] =\exp\Big[-16\pi
ME\Big]=\exp\Big[-\frac{E}{T_H}\Big],
\end{aligned}$$ since $W_-=-W_+$. Then the black hole’s temperature is [@mn] $$\begin{aligned}
T_H=\frac{1}{16\pi M},\end{aligned}$$ which is one-half of the standard Hawking temperature $T_H=1/(8\pi M)$. This example tells us that the invariance is lost in the isotropic coordinates! The reason for this phenomenon lies in the coordinate transformation (\[isotropic\]) itself. In the radial coordinate transformation $$\begin{aligned}
\ln \rho=\int \frac{dr}{r\sqrt{1-\frac{2M}{r}}}=\int
F(r)dr,\end{aligned}$$ the function $F(r)=\frac{1}{r\sqrt{1-2M/r}}$ has a singularity at the horizon $r=2M$. It is therefore necessary to discuss in which coordinates the Hawking temperature remains invariant.
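For the reader's convenience, the value $\text{Im}W_{\pm}=\pm4\pi ME$ quoted above can be checked by expanding the integrand near the horizon $\rho_H=M/2$. Since the factor multiplying $m^2+g^{ij}J_iJ_j$ vanishes there, the integrand reduces to $$\frac{(2\rho+M)^3}{4\rho^2(2\rho-M)}\sqrt{E^2-\Big(\frac{2\rho-M}{2\rho+M}\Big)^2(m^2+g^{ij}J_iJ_j)}\;\simeq\;\frac{(2M)^3E}{M^2\cdot 2(\rho-M/2)}=\frac{4ME}{\rho-M/2},$$ and deforming the contour around the simple pole at $\rho=M/2$ by a small semicircle gives $\text{Im}W_{\pm}=\pm\pi\cdot 4ME=\pm4\pi ME$.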
The purpose of this manuscript is to investigate the invariance of the Hawking temperature of the most general static spherically symmetric black hole from scalar and Dirac particles tunnelling in a general coordinate representation. In order to do that, we introduce the metrics of the static spherically symmetric black hole in two coordinate systems: the Schwarzschild-like coordinates and a general coordinate system. This general coordinate system should satisfy two conditions: a) its radial coordinate transformation is regular at the event horizon; b) there exists a time-like Killing vector.
The paper is organized as follows. In Sec. 2 the different coordinate representations for the general static spherically symmetric black hole are presented. In Sec. 3 the Hawking temperature of the general static spherically symmetric black hole for scalar particles tunnelling is investigated. In Sec. 4 the Hawking temperature of the general static spherically symmetric black hole from Dirac particles tunnelling is studied. The last section is devoted to a summary.
Coordinate representations for general static spherically symmetric black hole
==============================================================================
In this section we introduce two kinds of coordinate representations for the general static spherically symmetric black hole, i.e., the Schwarzschild-like coordinates and a general coordinate system.
Schwarzschild-like coordinate representation
--------------------------------------------
In Schwarzschild-like coordinate the line element for the most general static spherically symmetric black hole in four dimensional spacetime is described by $$\begin{aligned}
\label{ghs}
ds^2=-f(r)dt_s^2+\frac{1}{g(r)}dr^2+R(r)(d\theta^2+\sin^2\theta
d\varphi^2),
\end{aligned}$$ where $f(r)$, $g(r)$ and $R(r)$ are functions of $r$, and $t_s$ is the Schwarzschild-like time coordinate.
Because the spacetime (\[ghs\]) is a static and spherically symmetric one, a time-like Killing vector field $\xi^\mu=(1,0,0,0)$ exists. An interesting feature of the black hole worthy of note is that the norm of the Killing field $\xi^\mu$ is zero on the event horizon $r_H$ since the horizon is a null surface and the vector $\xi^\mu$ is normal to the horizon. Then, for the non-extreme case we have $f(r)=f_1(r)(r-r_H)$ and $g(r)=g_1(r)(r-r_H)$, where $f_1(r)$ and $g_1(r)$ are regular functions in the region $r_H<r<\infty$ and their values are nonzero on the outermost event horizon.
General coordinate representation
---------------------------------
In order to ensure that there is a time-like Killing vector in the spacetime, the most general coordinate system $(v,~u,~\theta,~\varphi)$ obtained by a transformation from the Schwarzschild-like coordinate (\[ghs\]) is $$\begin{aligned}
\label{arbitrary}
v=\lambda t_s+\int dr G(r),~~~u=\int dr F(r),\end{aligned}$$ where $v$ is the time coordinate, $u$ is the radial one, and the angular coordinates remain unchanged; $\lambda$ is an arbitrary nonzero constant which re-scales the time; $G$ is an arbitrary function of $r$ and $F$ is a regular function of $r$. The line element (\[ghs\]) in the new coordinates becomes $$\begin{aligned}
\label{arbitrary1}
&&ds^2=-\frac{f(r(u))}{\lambda^2}dv^2+2\frac{f(r(u))G(r(u))}
{\lambda^2F(r(u))}dudv\nonumber\\&&\qquad
+\frac{\lambda^2-f(r(u))g(r(u))G^2(r(u))}{\lambda^2g(r(u))F^2(r(u))}du^2
+R(r(u))(d\theta^2+\sin^2\theta
d\varphi^2).\end{aligned}$$ We now show that two well-known coordinates, the Painlevé and Lemaitre coordinates, are special cases of the metric (\[arbitrary1\]).
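For the reader's convenience, we record the short computation behind (\[arbitrary1\]). Inverting (\[arbitrary\]) gives $dt_s=(dv-G\,du/F)/\lambda$ and $dr=du/F$, so that $$-f\,dt_s^2+\frac{dr^2}{g}=-\frac{f}{\lambda^2}dv^2+\frac{2fG}{\lambda^2F}\,du\,dv+\Big(\frac{1}{gF^2}-\frac{fG^2}{\lambda^2F^2}\Big)du^2,$$ and the coefficient of $du^2$ is exactly $\frac{\lambda^2-fgG^2}{\lambda^2gF^2}$, as stated in (\[arbitrary1\]).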
### Painlevé coordinate representation
If, in the transformation (\[arbitrary\]), one sets $\lambda=1$, $G(r)=\sqrt{\frac{1-g(r)}{f(r)g(r)}}$ and $F(r)=1$, the line element (\[arbitrary1\]) becomes the Painlevé coordinate representation [@man1; @ding] $$\begin{aligned}
\label{pan}
\label{ds2}ds^2=-f(r)dt^2+2\sqrt{\frac{f(r)(1-g(r))}{g(r)}}dtdr+dr^2
+R(r)(d\theta^2+\sin^2\theta
d\varphi^2),\end{aligned}$$where $t$ is the Painlevé time. The metric (\[pan\]) has no singularity at $g(r) = 0$, so it is regular at the horizon of the black hole. That is to say, the coordinate system complies with the perspective of a free-falling observer, who is expected to experience nothing out of the ordinary upon passing through the event horizon.
### Lemaitre coordinate representation
If, in the transformation (\[arbitrary\]), one sets $\lambda=1$, $G(r)=\frac{1}{2}\sqrt{\frac{g(r)}{f(r)(1-g(r))}}
+\sqrt{\frac{1-g(r)}{f(r)g(r)}}$ and $F(r)=\frac{1}{2}\sqrt{\frac{g(r)}{f(r)(1-g(r))}}$, the line element (\[arbitrary1\]) becomes the Lemaitre coordinate representation [@ding; @ding2] $$\begin{aligned}
\label{Lm}
ds^2=-f(r)\big[dV^2+dU^2\big]+2\frac{f(r) (2-g(r))}{g(r)}dVdU
+R(r)(d\theta^2+\sin^2\theta
d\varphi^2),\end{aligned}$$where $U$ is the Lemaitre radial coordinate and $V$ is the Lemaitre time coordinate. We can see that the Lemaitre coordinate system is time-dependent, which suggests that there could be genuine particle production.
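As a consistency check (not carried out explicitly above), writing $G=F+\sqrt{\frac{1-g}{fg}}$ one finds $$fgG^2=\frac{g^2}{4(1-g)}+g+(1-g)=\frac{g^2}{4(1-g)}+1,\qquad \frac{\lambda^2-fgG^2}{\lambda^2gF^2}=\frac{-\frac{g^2}{4(1-g)}}{\frac{g^2}{4f(1-g)}}=-f,$$ which reproduces the $-f\,dU^2$ term in (\[Lm\]), while $G/F=(2-g)/g$ gives the cross term $\frac{2f(2-g)}{g}dVdU$.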
Temperature of general static spherically symmetric black hole from scalar particles tunnelling
===============================================================================================
We now investigate scalar particles tunnelling of general static spherically symmetric black hole.
Scalar particles tunnelling in Schwarzschild-like coordinate
------------------------------------------------------------
Applying the WKB approximation $$\begin{aligned}
\label{ans}
\phi(t,r,\theta,\varphi)=\exp\Big[\frac{i}{\hbar}I(t,r,\theta,\varphi)+I_1(t,r,\theta,\varphi)
+\mathcal{O}(\hbar)\Big],
\end{aligned}$$ to the Klein-Gordon equation (\[kg\]), then, to leading order in $\hbar$ we get the following relativistic Hamilton-Jacobi equation $$\begin{aligned}
\label{hj}
g^{\mu\nu}(\partial_\mu I\partial_\nu I)+m^2=0.
\end{aligned}$$ As usual, due to the symmetries of the metric (\[ghs\]) and neglecting the effects of the self-gravitation of the particles, there exists a solution in the form $$\begin{aligned}
\label{ansatz}
I=-Et_s+W(r)+J(\theta,\varphi).
\end{aligned}$$ Inserting Eq. (\[ansatz\]) and the metric (\[ghs\]) into the Hamilton-Jacobi equation (\[hj\]), we find $$\begin{aligned}
\label{ww}
W_{\pm}(r)&=&\pm \int
\frac{dr}{\sqrt{f(r)g(r)}}\sqrt{E^2-f(r)(m^2+g^{ij}J_iJ_j)},
\end{aligned}$$ where $J_i=\partial_i I$, $i=\theta,\varphi$. One solution of Eq. (\[ww\]) corresponds to the scalar particles moving away from the black hole (i.e. “+" outgoing) and the other solution corresponds to particles moving toward the black hole (i.e. “-" incoming). Imaginary parts of the action can only arise from the pole at the horizon. The probability of a particle tunnelling from inside to outside the horizon is [@sh1; @sh2; @sh3] $$\begin{aligned}
\label{gamma}
\Gamma=\frac{\Gamma_{out}}{\Gamma_{in}}\propto
\exp\Big[-4\text{Im}W_+\Big] =\exp\Big[-\frac{E}{T_H}\Big],
\end{aligned}$$ since $W_-=-W_+$. Integrating around the pole at the horizon leads to $$\begin{aligned}
\label{radial}
\text{Im} W_+=\frac{\pi E}{\sqrt{f'(r_H)g'(r_H)}}.
\end{aligned}$$ Substituting (\[radial\]) into (\[gamma\]), we obtain the Hawking temperature$$\begin{aligned}
\label{HT}
T_H=\frac{\sqrt{f'(r_H)g'(r_H)}}{4\pi },
\end{aligned}$$ which shows that the temperature of the general static spherically symmetric black hole coincides with that obtained in previous works [@man; @man1].
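For completeness, we record the standard evaluation of the integral in (\[ww\]) that leads to (\[radial\]). Near the horizon one may write $f(r)g(r)=f_1(r)g_1(r)(r-r_H)^2$, with $f_1(r_H)=f'(r_H)$ and $g_1(r_H)=g'(r_H)$, so that $$W_+(r)\simeq\int\frac{E\,dr}{\sqrt{f_1(r_H)g_1(r_H)}\,(r-r_H)},\qquad \text{Im}\,W_+=\frac{\pi E}{\sqrt{f'(r_H)g'(r_H)}},$$ the imaginary part coming from a small semicircle around the simple pole at $r=r_H$.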
Scalar particles tunnelling in general coordinate {#General}
-------------------------------------------------
Here we study the scalar tunnelling in a general coordinate (\[arbitrary1\]). Employing the ansatz $$\begin{aligned}
\label{action}
I=-Ev+W(u)+J(\theta,\varphi)\end{aligned}$$ and substituting the metric (\[arbitrary1\]) into the Hamilton-Jacobi equation (\[hj\]), we obtain $$\begin{aligned}
&&\left[g(r(u))G^2(r(u)) -\frac{\lambda^2}{f(r(u))}\right]E^2
- 2g(r(u))G(r(u))F(r(u))E W'(u)\nonumber\\&&
+g(r(u))F^2(r(u))\big[W'(u)\big]^2+g^{ij}J_iJ_j+m^2=0.\end{aligned}$$ Then $W'(u)$ is $$\begin{aligned}
\label{w(u)}
W'_\pm(u)=\frac{G(r(u))}{F(r(u))}E\pm\frac{\sqrt{\lambda^2E^2-f(r(u))[g^{ij}J_iJ_j+m^2]}}
{F(r(u))\sqrt{f(r(u))g(r(u))}}.\end{aligned}$$ We will study the temperature in two cases: $G(r(u))$ is a regular function at the horizon, and $G(r(u))$ has a pole at the horizon.
\[nopole\]
### $G(r(u))$ is a regular function at the horizon
When $G(r(u))$ is regular at the horizon, the component $g_{uu}$ of the metric (\[arbitrary1\]) shows that there is still a coordinate singularity at the horizon $r_H$. From equation (\[w(u)\]) we get $$\begin{aligned}
\text{Im}W_\pm(u)&=&\text{Im}\int du\left\{\frac{G(r(u))}{F(r(u))}E
\pm\frac{\sqrt{\lambda^2E^2-f(r(u))[g^{ij}J_iJ_j+m^2]}}
{F(r(u))\sqrt{f(r(u))g(r(u))}}\right\}\nonumber\\&=&\pm \frac{\lambda E\pi}
{\sqrt{f'(r_H)g'(r_H)}}.\end{aligned}$$We can see that Im$W_\pm(u)$ take the same form as in the Schwarzschild-like coordinate. Using $$\begin{aligned}
\Gamma\propto\exp[-4\text{Im}W_+]=\exp\left[-\frac{\lambda
E}{T_H}\right],
\end{aligned}$$ we can recover Hawking temperature (\[HT\]).
\[pole\]
### $G(r(u))$ has a pole at the horizon
When $G(r(u))$ has a pole at the horizon, it can, without loss of generality, be expressed as $G(r(u))=\frac{C(r(u))}
{\sqrt{f(r(u))g(r(u))}}+D(r(u))$, where $C(r(u))$ and $D(r(u))$ are the regular functions at horizon. From Eq. (\[w(u)\]), we obtain$$\begin{aligned}
\label{imwu}
\text{Im}W_\pm(u)&=&\text{Im}\int du\left\{
\frac{D(r(u))}{F(r(u))}E+\frac{C(r(u))E\pm\sqrt{\lambda^2E^2-f(r(u))[g^{ij}J_iJ_j+m^2]}}
{F(r(u))\sqrt{f(r(u))g(r(u))}}\right\}\nonumber\\&=& \frac{(\frac{C(r_H)}{\lambda}\pm1)\lambda E\pi}
{\sqrt{f'(r_H)g'(r_H)}}.\end{aligned}$$
In the following, we will consider two cases, i.e., $C(r_H)\neq \lambda$ and $C(r_H)=\lambda$:
i\) If $C(r_H)\neq \lambda$, after substituting $G(r(u))=\frac{C(r(u))}{\sqrt{f(r(u))g(r(u))}}+D(r(u))$ into $g_{uu}$ of the metric (\[arbitrary1\]), it is easy to see that there is still a coordinate singularity at the horizon $r_H$, and the probabilities are $$\begin{aligned}
\Gamma_{out}\propto\exp\left[-2
\frac{(\frac{C(r_H)}{\lambda}+1)\pi}
{\sqrt{f'(r_H)g'(r_H)}}\lambda E\right],~~~\Gamma_{in}\propto\exp\left[-2 \frac{(\frac{C(r_H)}{\lambda}-1)\pi}
{\sqrt{f'(r_H)g'(r_H)}}\lambda E\right].
\end{aligned}$$ It is interesting to note that $\Gamma_{out},\Gamma_{in}$ are different from those in the Schwarzschild-like coordinate, but the total probability is $$\begin{aligned}
\Gamma=\frac{\Gamma_{out}}{\Gamma_{in}}\propto\exp\left[-
\frac{4\pi}
{\sqrt{f'(r_H)g'(r_H)}}\lambda E\right],
\end{aligned}$$ and the Hawking temperature (\[HT\]) is also recovered.
ii\) If $C(r_H)=\lambda$, we can write $C(r(u))=\lambda+H(r(u))\sqrt{f(r(u))g(r(u))}$, where $H(r(u))$ is a regular function at the horizon. Then we have $G(r(u))=\frac{\lambda}{\sqrt{f(r(u))g(r(u))}}+H(r(u))+D(r(u))$. Substituting it into $g_{uu}$ of the metric (\[arbitrary1\]), we find that there is now no coordinate singularity at the horizon $r_H$. From Eq. (\[imwu\]), we obtain $$\begin{aligned}
\text{Im}W_{+}(u)=\frac{2\pi }{\sqrt{f'(r_H)g'(r_H)}}\lambda
E,~~~~\text{Im}W_{-}(u)=0,
\end{aligned}$$this implies that $\Gamma_{in}=1$. So the overall tunnelling probability is $$\begin{aligned}
\Gamma=\Gamma_{out}\propto\exp[-2\text{Im} W_+]=\exp\left[-\frac{4\pi E}
{\sqrt{f'(r_H)g'(r_H)}}\right].\end{aligned}$$ It is obvious that the Hawking temperature (\[HT\]) is recovered.
From the above discussion we conclude that the Hawking temperature of the general static spherically symmetric black hole obtained from scalar particle tunnelling is invariant in the general coordinates (\[arbitrary1\]).
Temperature of general static spherically symmetric black hole from Dirac particles tunnelling
===============================================================================================
In this section, we study the Dirac particles tunnelling of the black hole in the coordinates (\[ghs\]) and (\[arbitrary1\]).
Dirac particles tunnelling in Schwarzschild-like coordinate
------------------------------------------------------------
For a general background spacetime, the Dirac equation is [@rmp] $$\begin{aligned}
\label{dirac}
&&\left[\gamma^\alpha
e^\mu_\alpha(\partial_\mu+\Gamma_\mu)+\frac{m}{\hbar}\right]
\psi=0,
\end{aligned}$$ with $$\begin{aligned}
&&\Gamma_\mu=\frac{1}{8}[\gamma^a,\gamma^b]e^\nu_ae_{b\nu;\mu},
\nonumber \end{aligned}$$ where $\gamma^a$ are the Dirac matrices and $e^\mu_a$ is the inverse tetrad defined by $\{e_a^\mu\gamma^a,~~~e_b^\nu\gamma^b\}=2g^{\mu\nu}
\times1$. For the general static spherically symmetric black hole in the Schwarzschild-like metric (\[ghs\]) the tetrad can be taken as $$\begin{aligned}
\label{tetrad}
e_a^\mu=diag\left(\frac{1}{\sqrt{f(r)}},\sqrt{g(r)},\frac{1}
{\sqrt{R(r)}},\frac{1}{\sqrt{R(r)}\sin\theta}\right).
\end{aligned}$$ We employ the following ansatz for the Dirac field $$\begin{aligned}
\label{psi}
&&\psi_\uparrow=\bigg(\begin{array}{ccc}A(t_s,r,\theta,\varphi)\xi_\uparrow\nonumber\\
B(t_s,r,\theta,\varphi)\xi_\uparrow\end{array}\bigg)
\exp\big(\frac{i}{\hbar}I_\uparrow(t_s,r,\theta,\varphi)\big)
=\left(\begin{array}{ccc}A(t_s,r,\theta,\varphi)\nonumber\\ 0\nonumber\\
B(t_s,r,\theta,\varphi)\nonumber\\0\end{array}\right)
\exp\big(\frac{i}{\hbar}I_\uparrow(t_s,r,\theta,\varphi)\big),\nonumber\\
&&\psi_\downarrow=\bigg(\begin{array}{ccc}C(t_s,r,\theta,\varphi)\xi_\downarrow\nonumber\\
D(t_s,r,\theta,\varphi)\xi_\downarrow\end{array}\bigg)
\exp\big(\frac{i}{\hbar}I_\downarrow(t_s,r,\theta,\varphi)\big)
=\left(\begin{array}{ccc}0\nonumber\\ C(t_s,r,\theta,\varphi)\nonumber\\
0\\D(t_s,r,\theta,\varphi)\nonumber\end{array}\right)
\exp\big(\frac{i}{\hbar}I_\downarrow(t_s,r,\theta,\varphi)\big),\nonumber\\
\end{aligned}$$ where “$\uparrow$" and “$\downarrow$" represent the spin up and spin down cases, and $\xi_{\uparrow}$ and $\xi_{\downarrow}$ are the eigenvectors of $\sigma^3$. Inserting Eqs. (\[tetrad\]), (\[psi\]) into the Dirac equation (\[dirac\]) and employing $$\begin{aligned}
\label{ans3}
I_\uparrow=-Et_s+W(r)+J(\theta,\varphi),
\end{aligned}$$ to the lowest order in $\hbar$ we obtain $$\begin{aligned}
\label{aa}
&& -\frac{A}{\sqrt{f(r)}}E+\sqrt{g(r)}B W'(r)
+mA=0,
\\ \label{bb}
&& \frac{B}{\sqrt{R(r)}}(J_\theta+\frac{i}{\sin\theta}J_\varphi
)=0,
\\ \label{cc}
&& \frac{B}{\sqrt{f(r)}}E-\sqrt{g(r)}A W'(r)
+mB=0,
\\ \label{dd}
&& -\frac{A}{\sqrt{R(r)}}(J_\theta +\frac{i}{\sin\theta}J_\varphi
)=0,
\end{aligned}$$ where we consider only the positive frequency contributions without loss of generality. Eqs. (\[bb\]) and (\[dd\]) both yield ($J_\theta +\frac{i}{\sin\theta}J_\varphi$) = 0 regardless of $A$ or $B$, implying that $J(\theta,\varphi)$ must be a complex function. We therefore can ignore $J$ from this point (or else pick the trivial $J = 0$ solution).
Consider first the massless case $m=0$, Eqs. (\[aa\]) and (\[cc\]) give $$\begin{aligned}
W_{\pm}(r)=\pm\int\frac{Edr}{\sqrt{f(r)g(r)}}.
\end{aligned}$$ We therefore recover the expected Hawking temperature (\[HT\]) in the massless case.
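For completeness, we note the elementary step behind this conclusion: setting $m=0$ in (\[aa\]) and (\[cc\]) gives $\sqrt{g}\,B\,W'=\frac{E}{\sqrt{f}}A$ and $\sqrt{g}\,A\,W'=\frac{E}{\sqrt{f}}B$; multiplying the two relations yields $$g\,\big(W'(r)\big)^2=\frac{E^2}{f(r)}\qquad\Longrightarrow\qquad W_{\pm}(r)=\pm\int\frac{E\,dr}{\sqrt{f(r)g(r)}},$$ which has the same simple pole at the horizon as in the scalar case.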
In the massive case $m\neq0$, Eqs. (\[aa\]) and (\[cc\]) show $$\begin{aligned}
(\frac{A}{B})^2=\frac{\frac{E}{\sqrt{f(r)}}+m}
{\frac{E}{\sqrt{f(r)}}-m}
\end{aligned}$$ and $$\begin{aligned}
W_\pm(r)=\int\frac{E}{\sqrt{f(r)g(r)}}\frac{2(\frac{A}{B})}{1+(\frac{A}{B})^2}dr.
\end{aligned}$$ Noting $\lim _{r\rightarrow r_H}(\frac{A}{B})^2=1$, we find that the result of integrating around the pole for $W$ in the massive case is the same as the massless case and we recover the Hawking temperature (\[HT\]).
For the spin-down case the calculation is very similar to the spin-up case discussed above. Other than some changes of sign, the equations are of the same form as the spin up case. For both the massive and massless spin down cases the Hawking temperature (\[HT\]) is obtained, implying that both spin up and spin down particles are emitted at the same temperature.
Dirac particles tunnelling in general coordinate
-------------------------------------------------
We take $$\begin{aligned}
\label{psi4}
&&\psi_\uparrow=\bigg(\begin{array}{ccc}A(v,u,\theta,\varphi)\xi_\uparrow\nonumber\\
B(v,u,\theta,\varphi)\xi_\uparrow\end{array}\bigg)
\exp\big(\frac{i}{\hbar}I_\uparrow(v,u,\theta,\varphi)\big)
=\left(\begin{array}{ccc}A(v,u,\theta,\varphi)\nonumber\\ 0\nonumber\\
B(v,u,\theta,\varphi)\nonumber\\0\end{array}\right)
\exp\big(\frac{i}{\hbar}I_\uparrow(v,u,\theta,\varphi)\big),\nonumber\\
&&\psi_\downarrow=\bigg(\begin{array}{ccc}C(v,u,\theta,\varphi)\xi_\downarrow\nonumber\\
D(v,r,\theta,\varphi)\xi_\downarrow\end{array}\bigg)
\exp\big(\frac{i}{\hbar}I_\downarrow(v,u,\theta,\varphi)\big)
=\left(\begin{array}{ccc}0\nonumber\\ C(v,u,\theta,\varphi)\nonumber\\
0\\D(v,u,\theta,\varphi)\nonumber\end{array}\right)
\exp\big(\frac{i}{\hbar}I_\downarrow(v,u,\theta,\varphi)\big)\nonumber\\
\end{aligned}$$ where $ I_\uparrow=-Ev +W(u)+J(\theta,\varphi).$ For the line element (\[arbitrary1\]), we chose the tetrad$$\!\!\!\!\!\!\!\! e_a^\mu=\left(
\begin{array}{cccc}
\frac{\lambda}{\sqrt{f(r(u))}} & \sqrt{g(r(u))}G(r(u)) & 0& 0 \\
0 & \sqrt{g(r(u))}F(r(u)) &0 &0 \\
0 & 0 & \frac{1}{\sqrt{R(r(u))}} &0 \\
0 & 0 & 0 & \frac{1}{\sqrt{R(r(u))}\sin\theta}
\end{array}
\right).$$ Then the Dirac equation (\[dirac\]) can be expressed as $$\begin{aligned}
\Big[-\frac{\lambda}{\sqrt{f(r(u))}}A-\sqrt{g(r(u))}G(r(u))B\Big]E
+\sqrt{g(r(u))}F(r(u))B W'(u)
+mA=0,
\\
\Big[\frac{\lambda}{\sqrt{f(r(u))}}B+\sqrt{g(r(u))}G(r(u))A\Big]E
-\sqrt{g(r(u))}F(r(u))A W'(u)
+mB=0.
\end{aligned}$$
For the case $m= 0$, we find $$\begin{aligned}
W'(u)=\left[\frac{G(r(u))}{F(r(u))}E\pm\frac{\lambda
E}{\sqrt{f(r(u))g(r(u))}F(r(u))}\right],
\end{aligned}$$ which is similar to Eq. (\[w(u)\]). Using the same method as in Section \[General\], it is easy to obtain the Hawking temperature (\[HT\]).
For the case $m\neq 0$, we find $$\begin{aligned}
(\frac{A}{B})^2=\frac{\frac{E}{\sqrt{f(r(u))}}+m}
{\frac{E}{\sqrt{f(r(u))}}-m}
\end{aligned}$$ and $\lim _{u\rightarrow u_H}\frac{A}{B}=\pm1$. We have $$\begin{aligned}
W'(u)=\left[\frac{G(r(u))}{F(r(u))}E\pm\frac{2\lambda|\frac{A}{B}|
E}{\sqrt{f(r(u))g(r(u))}F(r(u))(\frac{A^2}{B^2}+1)}\right],
\end{aligned}$$ which is similar to Eq. (\[w(u)\]). Using the same method as in Section \[General\], we also find the same Hawking temperature (\[HT\]).
From the above discussion we conclude that the Hawking temperature of the general static spherically symmetric black hole obtained from Dirac particle tunnelling is also invariant in the general coordinates (\[arbitrary1\]).
Summary
=======
The Hawking temperature of the Schwarzschild black hole computed in the isotropic coordinates shows that the temperature is not invariant under arbitrary coordinate transformations. What kinds of coordinates can keep the Hawking temperature invariant for the general static spherically symmetric black hole? By studying the Hawking radiation of the most general static spherically symmetric black hole arising from scalar and Dirac particles tunnelling, we find that it is invariant in the general coordinate representation (\[arbitrary1\]), which satisfies two conditions: a) its radial coordinate transformation is regular at the event horizon; and b) there is a time-like Killing vector.
We also find some other interesting results: 1) For the coordinate representations in which no coordinate singularity exists, such as the general coordinates (\[arbitrary1\]) with $C(r_H)=\lambda$ (including the Painlevé (\[pan\]) and Lemaitre (\[Lm\]) coordinates), $W_+$ has a pole at the event horizon but $W_-$ has a well-defined limit there. The imaginary part of $W_-$ is then zero, since the imaginary part of the action can only come from the pole, and the probability of a particle tunnelling from inside to outside the horizon is described by $\Gamma= \Gamma_{out}$. 2) The mass of the particles and the angular quantum number do not affect the Hawking temperature for either scalar or Dirac particles. 3) When the time coordinate transforms from $t_s$ to $\lambda t_s$, i.e., when we re-scale the time, the corresponding energy $E$ of the tunnelling particles is increased by a factor of $\lambda$, so re-scaling the time does not affect the Hawking temperature.
This work was supported by the National Natural Science Foundation of China under Grant No. 10675045; the FANEDD under Grant No. 200317; the Hunan Provincial Natural Science Foundation of China under Grant No. 08JJ0001; and the construct program of the key discipline in hunan province.
[99]{} R. Kerner and R. B. Mann, Phys. Rev. D[**73**]{} 104010 (2006).
R. Kerner and R. B. Mann, arXiv: 0710.0612.
M. K. Parikh and F. Wilczek, Phys. Rev. Lett. [**85**]{} 5042 (2000).
M. Angheben, M. Nadalini, L. Vanzo, and S. Zerbini, J. High Energy Phys. 0505 (2005) 014.
S. Shankaranarayanan and K. Srinivasan and T. Padmanabhan, Mod. Phys. Lett. A [**16**]{} 571 (2001).
S. Shankaranarayanan and T. Padmanabhan and K. Srinivasan, Class. Quantum Grav. [**19**]{} 2671 (2002).
K. Srinivasan and T. Padmanabhan, Phys. Rev. D [**60**]{} 024007 (1999). M. Nadalini, L. Vanzo and S. Zerbini, J. Phys. A: Math. Gen. [**39**]{} 6601 (2006). E. C. Vagenas, Nuovo Cim. [**117B**]{}, 899 (2002).
M. K. Parikh, Phys. Lett. B [**546**]{} 189 (2002). M. K. Parikh, Int. J. Mod. Phys. D[**13**]{} 2351 (2004). M. K. Parikh, arXiv: hep-th/0402166.
P. Kraus and F. Wilczek, gr-qc/9406042. P. Kraus and F. Wilczek, Nucl. Phys. B[**433**]{} 403 (1995). P. Kraus and F. Wilczek, Nucl. Phys. B[**437**]{} 231 (1995).
M. Arzano, A. Medved and E. Vagenas, J. High Energy Phys. 0509 (2005) 037.
Qing-Quan Jiang, Shuang-Qing Wu, and Xu Cai, Phys.Rev. D[**73**]{} 064003 (2006).
Jingyi Zhang, and Zheng Zhao, Phys. Lett. B[**638**]{} 110 (2006).
Shuang-Qing Wu, and Qing-Quan Jiang, J. High Energy Phys. 0603 (2006) 079.
A. J. M. Medved and E. Vagenas, Mod. Phys. Lett. A[**20**]{} 2449 (2005).
Bhramar Chatterjee, Amit Ghosh, P. Mitra, arXiv: 0704. 1746. P. Mitra, Phys. Lett. B[**648**]{} 240 (2007).
C. K. Ding and J. L. Jing, J. High Energy Phys. 09 (2007) 067.
C. K. Ding and J. L. Jing, Chin. Phys. Lett. [**24**]{} 2189 (2007).
D. R. Brill and J. A. Wheeler, Rev. Mod. Phys. [**29**]{} 465 (1957).
[^1]: Corresponding author, Electronic address: [email protected]
|
---
abstract: |
This note contains a representation formula for positive solutions of linear degenerate second-order equations of the form $$\partial_t u (x,t) = \sum_{j=1}^m X_j^2 u(x,t) + X_0 u(x,t) \qquad (x,t) \in \mathbb{R}^N \times\, ]- \infty ,T[,$$ proved by a functional analytic approach based on Choquet theory. As a consequence, we obtain Liouville-type theorems and uniqueness results for the positive Cauchy problem.\
[*2000 Mathematics Subject Classification.*]{} Primary 35K70; Secondary 35B09, 35B53, 35K15, 35K65.\
[*Keywords.*]{} Harnack inequality, hypoelliptic operators, positive Cauchy problem, Liouville-type theorems, ultraparabolic operators.
author:
-
date:
title: 'On Liouville-type theorems and the uniqueness of the positive Cauchy problem for a class of hypoelliptic operators'
---
Introduction
============
In this article we consider second-order partial differential operators of the form $$\label{e1}
\L u : = \p_t u -\sum_{j=1}^m X_j^2 u - X_0 u \qquad \mbox{ in } \R^{N+1}.$$ Points $z \in \R^{N+1}$ are denoted by $z=(x,t)$, where $x\in \R^{N}, t\in\R$. For $j=0,\ldots, m$, the $X_{j}$ are vector fields which are given by first-order linear partial differential operators in $\R^{N}$ with smooth coefficients $$X_{j}(x):=\sum_{k=1}^{N} b_{jk}(x)\p_{x_{k}} \qquad j=0,\ldots, m.$$ We denote by $Y$ the [*drift*]{} $$\label{eY}
Y := X_{0}-\p_{t}.$$ We recall that the class of operators of the form (\[e1\]) has been studied by many authors. In particular, we refer to the monographs [@LibroBLU; @Bramantibook; @Calin], and to the references therein.
The aim of the article is to prove a representation formula for nonnegative solutions of $\L u= 0$ in the set $$\label{eOT}
\OT := \R^N \times\, ]\!-\infty,T[,$$ where $0 < T \le + \infty$. In the sequel we use the following notation $$\begin{aligned}
& \H := \Big\{u \in C^\infty (\OT) \mid \L u = 0 \quad \mbox{in } \OT \Big\}, \label{e-def-H} \\
& \H_+ := \Big\{u \in \H \mid u \ge 0 \Big\}. \label{e-def-H+}\end{aligned}$$ We use a functional analytic approach based on Choquet theory that allows us to represent all functions belonging to the convex cone $\H_+$ in terms of its extremal rays. Moreover, we prove a *separation principle* for the extremal rays. The separation principle, in the nondegenerate case, says that (under certain conditions) nonnegative extremal solutions of the heat equation have the form $u(x,t) = e^{\beta t} u_\beta(x)$, with $\beta \in \R$. In our degenerate setting the separation principle has a different form that depends on $\L$. However, we prove in Theorem \[H\*-lambda-repr-par\] that, under some additional assumptions, any nonnegative extremal solution of $\partial_t u = \sum_{j=1}^m X_j^2 u$ in $\OT$ does not depend on the ‘degenerate’ variables. From the representation theorem it plainly follows that, under the same additional assumptions, any function in $\H_+$ does not depend on the ‘degenerate’ variables. A similar result is proved in Theorem \[H\*-lambda-repr\] for degenerate stationary operators $\sum_{j=1}^m X_j^2 u = 0$, and in Corollary \[c\_Kolmo\] for Kolmogorov equations. We refer to this kind of result as *Liouville-type theorems* because of the very specific form of any point in $\H_+$.
Let us informally explain this remarkable phenomenon. We assume in Theorems \[H\*-lambda-repr-par\] and \[H\*-lambda-repr\] that $\L$ is invariant with respect to the *left translations* of a nilpotent stratified Lie group. On the other hand, the proof of our separation principle relies on Harnack inequalities that are invariant with respect to the *right translations* of the group. Both properties are satisfied in the particular case of the last layer of the nilpotent Lie group. In this case, we can prove our separation principle, which yields our claim. Let us also note that this fact is not completely unexpected. Indeed, Danielli, Garofalo and Petrosyan consider in [@DGP2007] the subelliptic obstacle problem in Carnot groups of step two, and prove that the non-horizontal derivatives of any solution vanish continuously on the free boundary.
We also give a simple proof of a known uniqueness result for the positive Cauchy problem. We note that this integral representation theory approach was previously used to prove the uniqueness of the positive Cauchy problem and Liouville-type theorems for locally [*uniformly*]{} parabolic and elliptic operators [@KoranyiTaylor85; @LinPinchover94; @Murata93; @Murata95; @Pinchover88; @Pinchover1996 and references therein].
We next focus on Mumford and degenerate Kolmogorov operators. Their drift term $X_0$ is nontrivial, and plays a crucial role in the regularity properties of the solutions. In Section \[sec\_mumford\] we prove a uniqueness result for the positive Cauchy problem for Mumford operators. In Section \[sec\_Kolmogorov\] we consider a family of degenerate Kolmogorov operators, and prove in Corollary \[c\_Kolmo\] that any nonnegative solution of this partial differential equation in $\OT$ does not depend on the ‘degenerate’ variables, and hence, the uniqueness of the positive Cauchy problem holds true for such operators.
We list below our assumptions on $\L$ that will be used to accomplish this project. We assume that $\L$ satisfies the celebrated Hörmander’s condition:
[(H0)]{}
: $ \qquad \text{rank Lie}\{X_{1},\dots,X_{m},Y\}(z) = N+1 \quad \text{for every} \, z \in \R^{N+1}.$
Under this condition Hörmander proved in [@Hormander] that $\L$ is hypoelliptic, that is, any distributional solution $u$ of the equation $\L u= f$ is a smooth classical solution, whenever $f$ is smooth. In particular, $\H$ contains [*all*]{} distributional solutions of the equation $\L u=0$ in $\OT$.
Our second hypothesis is as follows:
[(H1)]{}
: there exists a Lie group $\mathbb{G} = \left(\R^{N+1},\circ \right)$ such that the vector fields $X_{1}, \dots, X_{m}, Y$ are invariant with respect to the left translation of $\mathbb{G}$. That is, for every $z,
\zeta \in \R^{N+1}$ we have $$\begin{split}
\left( X_j u \right) (\zeta \circ z) & = X_j \left( u(\zeta \circ z)\right) \qquad j = 1, \dots, m, \text{ and } \\
\left( Y u \right) (\zeta \circ z) & = Y\left( u(\zeta \circ z)\right).
\end{split}$$
In particular, it follows from (H1) that $$\label{e-Lie}
\big(\L u\big) (z) = f(z)\quad \Leftrightarrow \quad \L \big( u(\zeta \circ z) \big) = f(\zeta \circ z) \qquad
\forall \zeta \in \R^{N+1}.$$
We will use the following notation in our further assumptions. As usual, we identify the first order linear partial differential operator $X_j$ with the vector-valued function $$X_{j}(x)= (b_{j1}(x), \ldots, b_{jN}(x)) \qquad j = 1, \dots, m.$$ For any $z_0 \in \R^{N+1}$ and any piecewise constant function $\omega: \left[0,T_0\right] \to \R^m$, let $\gamma$ be a solution of the following initial value problem $$\label{e-gdot}
\g'(s)=\sum_{j=1}^{m} \omega_j(s) X_j(\g(s))+ Y(\g(s)), \qquad \gamma(0)= z_0.$$ We say that the solution $\g$ to is an [*$\L$-admissible path*]{}.
Let $\O \subseteq\R^{N+1}$ be an open set and let $z_0 \in \O$. The [*attainable set*]{} $$\label{e-Anew}
\A_{z_0} (\Omega):= \overline {A_{z_0}(\Omega)}$$ is defined as the closure in $\O$ of $$A_{z_0} (\Omega) := \big\{z\in\O \mid \;\exists \;\L\text{-admissible path } \g: [0,\tau] \to \O \text{ s.t. } \g(0)=
z_0, \gamma(\tau)=z \big\}.$$ When $\Omega=\OT$ (see (\[eOT\])), we use the simplified notation $\A_{z_0} : = \A_{z_0} (\OT)$.
Our last requirement is concerned with a $\L$-admissible path with a constant $\omega\in \R^m$. As we will see in the sequel, it yields a *restricted uniform Harnack inequality* suitably modeled on the Lie group structure of $\mathbb{G}$ (cf. [@Murata93]). For $X=(X_1,\ldots, X_m)$, and $\omega = (\omega_{1},\dots,\omega_{m})\in \R^m$, we denote $$\begin{split}
&\omega \cdot X := \omega_1 X_1 + \dots + \omega_m X_m, \\
& \exp\left( s \left(\omega \cdot X + Y \right) \right) z_0:=\gamma(s), \qquad \mbox{where $\gamma$ is defined in
\eqref{e-gdot}} \\
& \exp\left( s \left(\omega \cdot X + Y \right) \right) := \exp\left( s \left(\omega \cdot X + Y \right)
\right)(0,0).
\end{split}$$ Note that, by the invariance of the vector fields with respect to $\mathbb{G}$, we have $$\label{e-gcirc}
\exp\left( s \left(\omega \cdot X + Y \right) \right) z_0 =
z_0 \circ\exp\left( s \left(\omega \cdot X + Y \right) \right).$$ Moreover, from (\[e-gdot\]) we see that the *time* component of $\exp\left( s \left(\omega \cdot X + Y \right)
\right) (x_0,t_0)$ is always $t_0-s$. With these notations, our last hypothesis reads as follows
[(H2)]{}
: There exists a bounded open set $\Omega$ containing the origin, a vector $\omega
\in \R^m$ and a positive $s_0$ such that $$\label{eq-harnack}
\exp\left( s \left(\omega \cdot X + Y \right) \right) \in \mathrm{Int} \left(\A_{(0,0)} (\Omega) \right)
\quad \text{for
any} \quad s \in\, ]0, s_0].$$
\[rem-assumptions\] Some comments on our assumptions (H0), (H1) and (H2) are worth noting.
1\. The heat operator $\L = \partial_t - \varDelta$ is of the form . Moreover, it is invariant with respect to the Euclidean translations $(x,t) \circ (\xi, \tau) = (x + \xi, t + \tau)$ and (H2) is satisfied by any $\omega \in \R^N$. In this particular case, if we choose $\omega = 0$, and we recall that $X_0=0$, we see that $\exp\left( s \left(\omega \cdot X + Y \right) \right) = (0,- s)$. Note that a restricted uniform Harnack inequality $u(x,t-\e) \le C_\e u(x,t)$ follows from the classical parabolic Harnack inequality first proved by Hadamard [@Hadamard] and Pini [@Pini].
2\. More generally, hypothesis (H2) is satisfied in the case of an operator of the form $\partial_t + \L_0$ in $\R^{N} \times\,]\!-\infty, T[$, where $\L_0$ is a time-independent locally uniformly elliptic operator with bounded coefficients, and also in the case of a manifold $M$ with a cocompact group action $G$ and an operator of the form $\partial_t + \L_0$ on $M\times\, ]\!-\infty, T[$, where $\L_0$ is a (time-independent) $G$-invariant elliptic operator on $M$ (see [@KoranyiTaylor85; @LinPinchover94; @Murata93; @Pinchover88; @Pinchover1996]).
3\. We further note that there are operators $\L$ of the form (\[e1\]) that satisfy (H0) and (H1), for which (H2) is not satisfied for all $\omega$. We refer to the Mumford operator discussed in Section \[sec\_mumford\], and to Example \[ex4\].
Our assumptions (H0), (H1) and (H2) provide us with some compactness properties that are needed for proving that all points in the convex closed cone $\H_+$ can be represented in terms of its extremal rays. These compactness properties hinge on the following local Harnack inequality which holds true under our assumptions (see the main result of [@KogojPolidoro15]).
[(H\*)]{}
: Let $\O \subseteq\R^{N+1}$ be a bounded open set and let $z_0 \in \O$. For any compact set $K \subset
\mathrm{Int}\left(\A_{z_0}(\O) \right)$ there exists a positive constant $C_K$, only depending on $\O, K, z_0$ and $\L$, such that $$\label{lHarnack}
\sup_K u \le C_K \, u(z_0),$$ for any nonnegative solution $u$ of $\L u = 0$ in $\Omega$.
Note that, from (H\*) and from the hypoellipticity of $\L$ we have that $\H$ is a Fréchet space with respect to the topology of uniform convergence on compact sets. Moreover, in this topology, $\H_+$ is clearly a closed convex cone in $\H$. We denote by $\exr \H_+$ the set of all extreme rays of $\H_+$.
We next discuss the validity of (H\*). Recall that Krener’s Theorem states that for any open set $\O \subseteq\R^{N+1}$ and $z_0 \in \O$, the interior of $\A_{z_0}(\Omega)$ is not empty whenever (H0) is satisfied (see [@Krener] or [@AgrachevSachkov Theorem 8.1, p. 107]). We note here, that for this reason, it is not clear to us whether there exists an operator $\L$ satisfying (H0) and (H1), but not satisfying (H2).
Properties (H1), (H2) and (H\*) yield the following *restricted uniform Harnack inequality* (cf. [@Murata93]).
\[p-restr-harnack\] Let $\L$ be an operator of the form (\[e1\]), satisfying [(H0)]{}, [(H1)]{}, and [(H2)]{}. Let $\omega$, $\Omega$ and $s_0$ be as in (H2). For any $s>0$ there exists a positive constant $C_{s} >0$ depending only on $\omega$, $s$ and $\L$, such that for any nonnegative solution $u$ of $\L u = 0$ in $\OT$ we have $$\label{e-harnack}
u\left( \exp\left( s \left(\omega \cdot X + Y \right) \right) z \right) \le C_{s} u(z)
\qquad \forall z \in \OT.$$ Moreover, if for $j= 1, \dots, k$, $\omega_j$ are as in (H2), and $s_j$ are any positive constants, then there exists a positive constant $C_\mathbf{s} >0$ (where $\mathbf{s} = (s_1, \dots, s_k)$) depending only on $\omega_1, \dots, \omega_k, \mathbf{s}$ and $\L$, such that for any nonnegative solution $u$ of $\L u = 0$ in $\OT$ we have $$\label{e-harnack-2}
u\left( \exp\left( s_k \left(\omega_k \cdot X + Y \right) \right) \dots \exp\left( s_1 \left(\omega_1 \cdot X +
Y \right) \right) z \right) \le C_\mathbf{s} u(z) \qquad \forall z \in \OT.$$
Let $u$ be a nonnegative solution $u$ of $\L u = 0$ in $\OT$. For any $z\in \mathbb{G}$ the function $u^z(y):=u(z\circ
y)$ is a nonnegative solution of the equation $\L u = 0$. Therefore, for every $s \in ]0, s_0]$, by the local Harnack inequality (H\*) and , we have $$\label{e-gcirc1}
\begin{split}
u\left( \exp\left( s \left(\omega \cdot X + Y \right) \right) z\right) = u\left(z \circ\exp\left( s \left(\omega
\cdot X + Y \right) \right)\right)= \\
u^z\left(\exp\left( s \left(\omega \cdot X + Y \right) \right)\right)\leq C_su^z(0)=C_{s} u(z).
\end{split}$$ This proves (\[e-harnack\]) if $s \in ]0, s_0]$. If $s > s_0$ we choose $\tilde s \in ]0, s_0]$ and $k \in
\N$ such that $s = k \tilde s$. By (\[e-gcirc\]) and (\[e-gcirc1\]) we find $$\begin{split}
u\left( \exp\left( k \tilde s \left(\omega \cdot X + Y \right) \right) z\right) & \leq
C_{\tilde s} u\left( \exp\left( (k-1) \tilde s \left(\omega \cdot X + Y \right) \right) z\right) \leq \\
\dots & \leq C_{\tilde s}^{k-1} u\left( \exp\left( \tilde s \left(\omega \cdot X + Y \right) \right) z\right) \leq
C_{\tilde s}^k u\left(z\right).
\end{split}$$ This concludes the proof of (\[e-harnack\]), with $C_s = C_{\tilde s}^k$.
The proof of (\[e-harnack-2\]) follows by the same argument.
\[rem-s-s\_0\] In the proof of Proposition \[p-restr-harnack\] we have constructed a *Harnack chain* based on the *local* Harnack inequality (H\*). For this reason, (\[e-harnack\]) and (\[e-harnack-2\]) don’t require the boundedness assumption on the open set $\Omega$ and on the interval $]0,s_0]$ in Condition (H2). Hence, when we apply Proposition \[p-restr-harnack\] in the sequel, we don’t refer to $\Omega$ and $s_0$.
The following theorem is a version of the *separation principle* (see [@Murata93] and [@Pinchover1996 Definition 2.2]). We note that the restricted uniform Harnack inequality (Proposition \[p-restr-harnack\]) is used in the proof of our separation principle to construct *Harnack chains* along the path $\g(s) = \exp\left( s \left(\omega \cdot X + Y \right) \right)(x_0,t_0)$.
\[th-main\] Let $\L$ be an operator of the form (\[e1\]), satisfying [(H0)]{}, [(H1)]{}, and [(H2)]{}. Let $\omega$ be as in [(H2)]{}, and suppose that for every $u\in \H_+$, and every positive $s$ $$\label{eq_right_inv}
(x,t) \mapsto u\left(\exp(s (\omega \cdot X +Y))(x,t)\right) \qquad \mbox{is a solution of }\; \L u = 0 \mbox{ in
}\OT.$$ Then, for every $u \in \exr \H_+$, $u\neq 0$, there exists $\beta \in \R$ such that $$\label{eq_funct_eq}
u\left(\exp(s (\omega \cdot X +Y))(x,t)\right) = \mathrm{e}^{- \beta s} u(x,t)$$ for every $(x,t) \in \OT$ and for every $s > 0$. In particular, for every $u \in \exr \H_+$ and $z_0 = (x_0,t_0)$ in $\OT$, if $u(z_0) >0$, then $u>0$ in a neighborhood of the integral curve $$\label{eq_int_curv}
\gamma:=\big\{ \exp\left( s \left(\omega \cdot X + Y \right) \right) z_0 \mid s \in \;] t_0 - T, + \infty[ \big\}.$$
We also have the following result, useful in the study of stratified Lie groups and the Mumford operator. It is weaker than Theorem \[th-main\] in that the [*right-invariance*]{} of solutions is not assumed to hold for every positive $s$.
\[prop-separation\] Let $\L$ be an operator of the form (\[e1\]), satisfying [(H0)]{}, [(H1)]{}, and [(H2)]{}. Let $\omega_j$ be as in (H2) for $j= 1, \dots, k$, and suppose that there exists $\mathbf{s} = (s_1, \dots, s_k) \in (\R^+)^k$ such that $$\label{eq_right_inv-2}
(x,t) \mapsto u\left(\exp\left( s_k \left(\omega_k \cdot X + Y \right) \right) \dots \exp\left( s_1 \left(\omega_1
\cdot X + Y \right) \right)(x,t)\right)$$ is a solution of $\L u = 0$ in $\OT$ whenever $u\in \H_+$. Then, for every $u \in \exr \H_+$, $u\neq 0$, there exists a positive constant $C = C(\mathbf{s}, \omega_1, \dots, \omega_k)$ such that $$\label{eq_funct_eq-2}
u\left(\exp\left( s_k \left(\omega_k \cdot X + Y \right) \right) \dots \exp\left( s_1 \left(\omega_1 \cdot
X + Y \right) \right)(x,t)\right) = C u(x,t)$$ for every $(x,t) \in \OT$.
We prove Theorem \[th-main\] and Proposition \[prop-separation\] in the next subsection devoted to our functional setting.
\[rem-leftrightinvariance\] Assumption (\[eq\_right\_inv\]) of Theorem \[th-main\] appears to be quite strong. Indeed, since $\L$ is *left-invariant* with respect to the operation “$\circ$”, it follows that $(x,t) \mapsto u((x_0, t_0) \circ (x,t))$ is a solution of $\L u = 0$ for every fixed $(x_0,t_0) \in \R^{N+1}$ and $u\in \H$. On the other hand, (\[e-gcirc\]) says that $u\left(\exp(s (\omega \cdot X +Y))(x,t)\right) = u\left((x,t) \circ
\exp(s (\omega \cdot X +Y))\right)$, and therefore, we also assume, in fact, a *right-invariance* condition, with respect to the point $\exp(s (\omega \cdot X +Y))$.
However, both conditions are satisfied by the class of linear degenerate operators such that $X_0 = 0$. In this case we have $$\label{eq-heatkernel}
\L u(x,t) = \partial_t u(x,t) - \sum_{j=1}^m X_j^2 u(x,t),$$ and (H2) is satisfied for every $\omega \in \R^m$. In particular, for $\omega = 0$ and $s>0$, $$ \exp(s (\omega \cdot X +Y))(x,t) = (x,t-s)$$ and $\L u(x,t-s) = 0$ in $\OT$ if $\L u(x,t) = 0$ in $\OT$.
In Section \[sec\_Parabolic\] we discuss some classes of operators of the form (\[e1\]) satisfying [(H0)]{}, [(H1)]{}, and [(H2)]{}. In this case, Theorem \[th-main\] says that for any nonnegative extremal solution $u$ of $\L u = 0$ in $\OT$ there exists $\beta\in \R$ such that for any $s>0$ $$\label{eq_funct_eq-nodrift}
u(x,t-s) = \mathrm{e}^{- \beta s} u(x,t) \qquad \forall (x,t) \in \OT.$$ Note that a separation principle also holds when the drift term has the form $X_0 = \sum_{j=1}^N b_j \partial_{x_j}$, where $b = (b_1, \dots , b_N)$ is any constant vector. Indeed, if $u$ is a positive solution of $$\partial_t u =
\sum_{j=1}^m X_j^2 u + \sum_{j=1}^N b_j \partial_{x_j}u$$ then $v(x,t) := u(x - t b, t)$ is a solution of the analogous equation $$\partial_t v = \sum_{j=1}^m X_j^2 v.$$ Then we can apply Theorem \[th-main\] to $v$ with $\omega = b$, and finally we obtain $$u(x + s b,t-s) = \mathrm{e}^{- \beta s} u(x,t) \qquad \forall (x,t) \in \OT.$$
In Section \[sec\_mumford\], we present a remarkable example of an operator satisfying assumption (\[eq\_right\_inv\]) of Theorem \[th-main\], namely, the well-known Mumford operator: $$ \mathscr{M} u := \p_{t} u - \cos(x) \p_y u - \sin(x) \p_w u - \p_x^2 u \qquad (x,y,w,t) \in \R^4,$$ an operator that is discussed in detail in Section \[sec\_mumford\]. Clearly its drift $X_0 = \cos(x) \p_y + \sin(x) \p_w$ is nontrivial. It is also worth noting that $\mathscr{M}$ satisfies the assumptions of Proposition \[prop-separation\], with $s = 2 \pi$, but it doesn’t satisfy the hypotheses of Theorem \[th-main\]. We also note that Section \[s-remark\] contains some remarks on the validity of (\[eq\_right\_inv\]) for operators with nontrivial drift.
The outline of the paper is as follows. In Section \[sec\_funct\] we introduce representation formulas that play a crucial role in our study, and we give the proof of Theorem \[th-main\]. In sections \[sec\_Parabolic\]–\[sec\_Cauchy\] we study operators $\L$ such that the drift term $X_0$ vanishes identically. In particular, in Section \[sec\_Elliptic\] we study stationary solutions, Section \[sec\_Parabolic\] deals with solutions of the evolution equation, while Section \[ssec\_Liouville\] discusses parabolic Liouville-type theorems, and Section \[sec\_Cauchy\] contains a uniqueness result for the positive Cauchy problem. In Section \[sec\_mumford\] we prove a new uniqueness result for Mumford’s operator. In Section \[sec\_Kolmogorov\] we compute the Martin boundary of Kolmogorov-Fokker-Planck operators in $\OT$. Finally, Section \[sec\_further\] is devoted to some concluding remarks concerning the results of the present paper and to a discussion of some open problems.
Functional setting
==================
\[sec\_funct\] In the present section we introduce some notations, and recall some known facts about convex cones in vector spaces. The following definition plays a crucial role in our study. It leads to some compactness results that enable us to apply Choquet’s theory. We first introduce the following notation. If $z \in \R^{N+1}$ and $\Omega$ is a bounded open subset of $\R^{N+1}$, we set $$\label{Omega-z}
\Omega_z = z \circ \Omega =\Big\{ z \circ \zeta \mid \zeta \in \Omega \Big\}.$$
\[def\_ref\_path\][*Let $\L$ be an operator satisfying (H2). A sequence $\RR := \left( z_k \right)_{k \in \N}\subset \OT$ is said to be a [*reference set*]{} for $\L$ in $\OT$, if*]{} $$\bigcup_{k=1}^{\infty} \mathrm{Int} \left(\A_{z_k} \left(\Omega_{z_k} \right)\right) = \OT,$$ [*where $\Omega$ is the bounded open set satisfying (H2).*]{}
We next prove that, in our setting, a reference set always exists.
\[prop-covering\] If $\L$ satisfies assumptions [(H0), (H1)]{} and [(H2)]{}, then a *reference set* $\RR$ exists.
Let $\left( K_j \right)_{j \in \N}$ be a sequence of compact sets such that $$\bigcup_{j=1}^{\infty} K_j = \OT.$$
We claim that for every $j \in \N$ there exist $k_j \in \N$ and $z_{j_1}, \dots, z_{j_{k_j}} \in \OT$ such that $$\label{eq-covering}
K_j \subset \bigcup_{i=1}^{{k_j}} \mathrm{Int} \left(\A_{z_{j_i}} \left(\Omega_{z_{j_i}} \right)\right).$$ In order to prove (\[eq-covering\]) we consider $\omega \in \R^m, s_0>0$ and $\Omega$ satisfying (H2). For every $(\xi, \tau) \in K_j$ we choose $s \in]0, s_0]$ such that $s + \tau <T$, and we set $$(x,t) = \exp\left( - s \left(\omega \cdot X + Y \right) \right) (\xi, \tau).$$ Since $t = s+\tau<T$, it follows that $(x,t) \in \OT$. Moreover $$\exp\left( s \left(\omega \cdot X + Y \right) \right) (x,t) = (\xi, \tau),$$ then, by (H1) and (H2) we have that $(\xi, \tau) \in \mathrm{Int}\A_{(x,t)}\left( \Omega_{(x,t)} \right)$. Hence, (\[eq-covering\]) follows from the compactness of $K_j$. Therefore, a reference set for $\L$ in $\OT$ is given by $$\RR := \bigcup_{j=1}^{\infty} \Big\{ z_{j_1}, \dots, z_{j_{k_j}} \Big\}.$$
We equip $\H$ with the compact open topology, that is, the topology of uniform convergence on compact sets.
Let $\RR := \left( z_k \right)_{k \in \N}$ be a reference set for $\L$ in $\OT$, and let $a = \left( a_k \right)_{k \in
\N}$ be a strictly positive sequence. We set $$\begin{aligned}
&\H_{a} := \bigg\{u \in \H_+ \mid \sum_{k=1}^{\infty} a_k u(z_k) \le 1 \bigg\}, \label{e-def-Ha} \\
&\H_{a}^1 := \bigg\{u \in \H_+ \mid \sum_{k=1}^{\infty} a_k u(z_k) = 1 \bigg\}. \label{e-def-Ha1}\end{aligned}$$
For any positive sequence $a=\left( a_k \right)_{k \in
\N}$, the convex set $\H_{a}$ is compact in $\H_+$.
By the hypoellipticity of $\L$, it is sufficient to show that $\H_{a}$ is locally bounded on $\OT$. With this aim, we consider any compact set $K \subset \OT$. By (H2), and Proposition \[prop-covering\], there exist $w_1, \dots, w_k$ in $\OT$ such that $$K \subset \mathrm{Int} \left(\A_{w_1} \left(\Omega_{w_1} \right) \right) \cup \dots \cup
\mathrm{Int} \left(\A_{w_k} \left(\Omega_{w_k} \right)\right).$$ We claim that there exist $z_{n_1}, \dots, z_{n_k}$ in $\RR$ and $k$ compact sets $K_1, \dots, K_k$ such that $$\label{eq-covering-K}
K = K_1 \cup \dots \cup K_k, \quad \text{and} \quad K_j \subset \mathrm{Int} \left(\A_{z_{n_j}} \left(\tilde \Omega_{j}
\right) \right),$$ for $j=1, \dots, k$, where every $\tilde \Omega_{j}$ is a bounded open set containing $z_{n_j}$.
Indeed, let $K_j := K \cap \left(\overline{\mathrm{Int}\left(\A_{w_j} \left(\Omega_{w_j} \right)\right)}\right)$ for $j=1, \ldots, k$. As in the proof of Proposition \[prop-covering\], we take $z_{n_j} \in \RR$ such that $w_j
\in \mathrm{Int}\left(\A_{z_{n_j}} \left(\Omega_{z_{n_j}} \right)\right)$. We then choose a bounded open set $\tilde
\Omega_j$ containing $\Omega_{z_{n_j}} \cup K_j$, and we have that $K_j \subset \mathrm{Int}\left(\A_{z_{n_j}} \left(\tilde
\Omega_{j} \right)\right)$ for $j=1,\ldots , k$. This proves (\[eq-covering-K\]).
As a consequence of (\[eq-covering-K\]), the restricted uniform Harnack inequality yields $$\sup_K u \le C_K \, \max_{j=1, \dots, k} u \left( z_{n_j} \right),$$ for some positive constant $C_K$ depending only on $\L$ and $K$. On the other hand, from the definition of $\H_a$ it follows that for any $u\in \H_{a}$, we clearly have that $u \left( z_j \right) \le \frac{1}{a_j}$. Consequently, $$\sup_K u \le C_K \, \max_{j=1, \dots, k} \left\{\dfrac{1}{a_{n_j}}\right\}.$$
Note that $\H_+$ is the union of the caps $\H_{a}$. Indeed, for every $u \in \H_+$, we easily see that $u \in \H_{a}$, where the sequence $a = \left( a_k \right)_{k \in \N}$ is defined by $a_k := \frac{b_k}{u(z_k) + 1}$ and $\left( b_k \right)_{k \in \N}$ is any positive sequence such that $\sum b_k \le 1$.
Thus, $\H_{a}$ is a metrizable *cap* in $\H_+$ (*i.e.* $\H_{a}$ is a compact convex set and $\H_+
\backslash \H_{a}$ is convex) and $\H$ is [*well-capped*]{} (*i.e.* $\H_+$ is the union of the caps $\H_{a}$). Furthermore, since $\H_+$ is a harmonic space in the sense of Bauer, it follows that $\H_a$ is a *simplex* (see [@Becker; @Choquet]).
Let $\CC$ be a convex cone, we denote by $\exr \CC$ the set of all extreme rays of $\CC$. Analogously, if $K$ is a convex set, we denote by $\ex K$ the set of the extreme points of $K$.
Since $\H_+$ is a proper cone (*i.e.* it contains no one-dimensional subspaces), we have $$\label{e-exH}
\ex \H_{a} = \big\{ 0 \big\} \cup \big\{ \exr \H_+ \cap \H_{a}^1 \big\}.$$
We next prove Theorem \[th-main\] and Proposition \[prop-separation\]. The argument of the proof is standard; we give the details here for the reader’s convenience.
Clearly, $\H_+\neq \{0\}$ since $\mathbf{1}\in \H_+$. By the Krein-Milman theorem and , it follows that $\exr \H_+$ contains a nontrivial ray. Consider any function $u \in \exr \H_+$ such that $u \not = 0$, and let $\omega \in \R^m$ be as in Proposition \[p-restr-harnack\]. We claim that, for every positive $s$, there exists a positive constant $\alpha_s$ such that $$\label{e-cone}
u\left(\exp\left( s \left(\omega \cdot X + Y \right)(x,t) \right) \right) = \alpha_s u(x,t).$$ Indeed, let $$v_s(x,t) := C^{-1}_{s} u\left(\exp\left( s \left(\omega \cdot X + Y \right)(x,t) \right)\right),$$ and recall that by our hypothesis (\[eq\_right\_inv\]), $v_s$ is a nonnegative solution of the equation $\L v_s = 0$ in $\OT$. Moreover, the restricted uniform Harnack inequality (Proposition \[p-restr-harnack\]) implies that $ v_s \leq u$. Since $u \in \exr \H_+$, it follows that $v_s(z)= \nu_su(z)$ for all $z\in \OT$, where $\nu_s\geq 0$. If $\nu_s>0$, then we obviously have (\[e-cone\]). Suppose that $\nu_s=0$, then by applying the exponential map forward, it follows that $u(x,t)=0$ for all $(x,t)\in \Omega_{T-s}$. This completes the proof if $T=\infty$. If $T<\infty$ we repeat the argument for a vanishing sequence $(s_j)_{j \in \N}$ of positive numbers. This contradicts our assumption that $u\neq 0$. Hence (\[e-cone\]) is proved.
In order to conclude the proof of , we note that for every $\omega \in \R^m$ satisfying the assumption of Proposition \[p-restr-harnack\], $z \in \OT, s>0$, and any $k \in \N$ we have $$\exp\left( k s \left(\omega \cdot X + Y \right) \right)z = \underbrace{\exp\left( s(\omega \cdot X + Y) \right)
\circ \ldots \circ \exp\left( s( \omega \cdot X + Y) \right)}_{k \ \text{times}} z.$$ By iterating , we then find $$\alpha_{k s} u(x,t) = u\left(\exp\left( k s \left(\omega \cdot X + Y \right) \right)(x,t) \right) = \alpha_s^k
u(x,t).$$ Hence, $\alpha_{k} = \alpha_1^k$, and $\alpha_{1/k} = \alpha_{1}^{1/k}$, for every $k \in \N$. Therefore, $\alpha_{r} = \alpha_1^r$ for every $r \in \Q$. The conclusion of the proof thus follows from the continuity of $u$, by setting $\beta := \log (\alpha_1)$.
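To spell out the last step (this is the standard argument for the multiplicative Cauchy functional equation; nothing beyond the identity $\alpha_{ks}=\alpha_s^k$, the positivity of the $\alpha_s$, and the continuity of $u$ is used): for $r = p/q$ with $p, q \in \N$ we have $$\alpha_{p/q}^{\,q} = \alpha_{q \cdot (p/q)} = \alpha_{p} = \alpha_1^{p}, \qquad \text{hence} \qquad \alpha_{p/q} = \alpha_1^{p/q},$$ and, since $s \mapsto \alpha_s = u\left(\exp\left( s \left(\omega \cdot X + Y \right) \right)z\right)/u(z)$ is continuous at any point $z$ with $u(z)>0$, we get $\alpha_s = \alpha_1^{s} = e^{\beta s}$ for every $s > 0$.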
For the proof of the last assertion of the theorem, take $z_0$ such that $u(z_0)>0$. Then by $u>0$ on the integral curve $\gamma$ given by .
It is analogous to the proof of , which is based only on the Harnack inequality and on the assumption concerning the (restricted) right-invariance of the solutions in $\H_+$. We omit the details.
\[r-separation\] When considering the classical heat equation in $\OT$, or more generally when $X_0=0$, the separation principle reads as follows (see [@KoranyiTaylor85; @Murata93; @Pinchover88] for the corresponding result in the nondegenerate case):
*For any $u \in \exr \H_+$ there exists $\lambda\leq \lambda_0$ such that* $$\label{ex-sep}
u\left( x, t \right) = \mathrm{e}^{- \lambda t} u(x,0) \qquad \forall (x,t) \in \OT,$$ where $\lambda _0$ is the [*generalized principal eigenvalue*]{} of the operator $\L_0:=- \sum_{j=1}^m X_j^2$ defined by $$\label{lambda0}
\lambda_0:=\sup \Big\{\lambda \in \R \mid \exists u_\lambda\gneqq 0\;
\mbox{ s.t. }\Big(- \sum\nolimits_{j=1}^m X_j^2 - \lambda \Big)u_\lambda =0 \mbox{ in } \R^N \Big\}.$$ Moreover, using Choquet’s theorem and the argument in the proof of [@Pinchover88 Theorem 2.1], implies that $u$ is a nontrivial extremal solution of the equation $\L w = 0$ in $\OT$ if and only if it is of the form $u(x,t) := e^{-\lambda t} u_\lambda(x)$, where $\lambda\leq \lambda_0$ and $u_\lambda$ is a nonzero extremal solution of the equation $\L_\lambda \phi = (- \sum_{j=1}^m X_j^2 -\lambda )\phi = 0$ in $\R^{N}$. In particular, it follows that any nontrivial solution in $\H_+$ is strictly positive.
In fact, for the heat equation it is known (see for example [@Doob]) that any nonnegative extremal caloric function $u\neq 0$ in $\R^{N+1}$ or in $\R^{N}\times \R_-$ is of the form $$ u(x,t) = \exp\left(\langle x, v \rangle + t \|v\|^2 \right),$$ where $v\in\R^N$ is a fixed vector.
When a drift term $X_0$ appears in the operator $\L$, does not necessarily hold, even for nondegenerate parabolic equations. Consider, for instance, the nondegenerate Ornstein-Uhlenbeck operator $$\label{e-OU}
\L u : = \p_t u - \Delta u - \langle x, \nabla u \rangle \qquad \mbox{ in }{ \R}^{N} \times \R_T.$$ Clearly, $\L$ is of the form with $X_j = \partial_{x_j}, j=1, \dots, N$, and $X_0 = \langle x, \nabla \rangle
\simeq x$. Moreover, $\L$ is invariant with respect to the following change of variable. Fix any $(y,s) \in
\R^{N+1}$, and set $v(x,t) := u(x + e^{-t} y, t+s)$. We have that $\L v = 0$ in $\R^{N+1}$, if and only if $\L u = 0$ in $\R^{N+1}$. Thus, the Ornstein-Uhlenbeck operator satisfies (H0), (H1) and (H2). Note that in this case, the restricted Harnack inequality reads as $$ u \left(e^s x, t-s \right) \le C_s u (x,t)\qquad \forall (x,t) \in \R^{N+1},$$ and that does not hold for $y\neq 0$. On the other hand, the expression of a *minimal* solution of the equation in one space variable, given in [@CranstonOreyRosler; @Pinchover1996], is $$u_\lambda(x,t) = \exp \left( \lambda^2 e^{2t} - \sqrt{2} \lambda x e^t \right),$$ where $\lambda\in\R$. Clearly, does not hold for $u_\lambda$.
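For completeness, the invariance of the Ornstein-Uhlenbeck operator under the change of variable used above can be checked by an elementary chain-rule computation: writing $v(x,t) := u(x + e^{-t} y, t+s)$, the time derivative of the argument $x + e^{-t} y$ produces the extra term $- e^{-t} \langle y, \nabla u \rangle$, so that $$\partial_t v(x,t) - \Delta v(x,t) - \langle x, \nabla v(x,t) \rangle = \big( \partial_\tau u - \Delta u - \langle \xi, \nabla u \rangle \big)\Big|_{(\xi,\tau) = (x + e^{-t} y,\; t+s)},$$ and hence $\L v = 0$ in $\R^{N+1}$ whenever $\L u = 0$ in $\R^{N+1}$.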
Degenerate equations without drift
==================================
\[sec\_Parabolic\]
We first derive from (H\*) a Harnack inequality for the operator $\L - \lambda$, where $\L$ is of the form and $\lambda$ is a real constant. After that, we focus on operators $\L$ such that the drift term $X_0$ does not appear. In particular, we prove a representation theorem for the extremal nonnegative solutions of $\L u = 0$ in $\OT$, when $X_0
= 0$ and the Lie group on $\R^N$ is nilpotent and stratified.
\[H\*-lambda\] Let $\L$ be an operator of the form that satisfies [(H0)]{}, [(H1)]{}, and [(H2)]{}. Let $\O \subseteq\R^{N+1}$ be an open set and let $z_0 = (x_0,t_0) \in \O$. For any compact set $K \subset
\mathrm{Int} \left(
\A_{z_0}(\O) \right)$ and for every $\lambda \in \R$ there exists a positive constant $C_{K, \lambda}$, only depending on $\O, K, z_0, \lambda$ and $\L$, such that $$ \sup_K u \le C_{K, \lambda} \, u(z_0),$$ for any nonnegative solution $u$ of the equation $\L w-\lambda w = 0$ in $\Omega$.
If $u$ is a nonnegative solution of $\L w - \lambda w = 0$ in $\Omega$, then $u_\lambda(x,t) := e^{-\lambda t}
u(x,t)$ is a nonnegative solution of $\L w = 0$ in $\Omega$. The claim then follows from (H\*) with $C_{K, \lambda} := C_{K} \max_{(x,t) \in K} e^{\lambda (t-t_0)}$.
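For the reader’s convenience, the two computations behind the proof are $$\L\big(e^{-\lambda t} u\big) = e^{-\lambda t}\big(\L u - \lambda u\big) = 0,$$ and, for every $(x,t) \in K$, $$u(x,t) = e^{\lambda t}\, u_\lambda(x,t) \le C_{K}\, e^{\lambda t}\, u_\lambda(z_0) = C_{K}\, e^{\lambda (t - t_0)}\, u(z_0).$$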
We next consider operators $\L$ such that the drift term $X_0$ does not appear. We will use the following notation $$\label{e110}
\L_0 := - \sum_{j=1}^m X_j^2 \, , \qquad \L_\lambda := \L_0 - \lambda.$$ We consider the degenerate *elliptic* equation $\L_\lambda u = 0$ in $\R^N$ and its parabolic counterpart $\partial_t u + \L_\lambda u = 0$ in $\R^N \times\, ]0,T[$. In this case Hörmander’s condition (H0) is equivalent to:
[(H0’)]{}
: $ \qquad \text{rank Lie}\{X_{1},\dots,X_{m}\}(x) = N \quad \text{for every} \, x \in \R^{N}.$
Moreover, (H1) is equivalent to:
[(H1’)]{}
: there exists a Lie group $\mathbb{G}_0 = \left(\R^{N},\cdot \right)$ such that the vector fields $X_{1}, \dots, X_{m}$ are invariant with respect to the left translation of $\mathbb{G}_0$.
Indeed, if (H1’) is satisfied, then a group $\mathbb{G} = \left(\R^{N+1},\circ \right)$ satisfying (H1) is defined by $\mathbb{G}:=\mathbb{G}_0\times\R$, with the operation $$(x,t) \circ (y,s) := (x \cdot y, t+s) \qquad (x,t), (y, s) \in \R^{N+1}.$$ Finally, the Chow-Rashevskii theorem (see for example [@Montgomery]) implies that for any open cylinder $\Omega = O \times I$, with $O \subseteq\R^{N}$ an open connected set, and an interval $I \subset \R$, we have for every $(x_0, t_0) \in \Omega$ that $$\label{eq-Prop}
\A_{(x_0, t_0)} (\Omega) = \Omega \, \cap\, \{(x,t)\mid t\leq t_0\},$$ whenever (H0’) holds. Thus condition (H2) is satisfied with any $\omega \in \R^m$. In the sequel of the present section we will always consider $\omega = 0$.
Based on Proposition \[H\*-lambda\], we next prove a Harnack inequality for the operators $\L_\lambda$. We refer to the monograph [@LibroBLU] and to the references therein for an exhaustive bibliography on Harnack inequalities for operators of the form $\L_0$.
\[H\*-lambda-stationary\] Let $\L_0$ be an operator of the form , satisfying [(H0’)]{}, and [(H1’)]{}, and let $\lambda$ be a given constant. Let $O \subseteq\R^{N}$ be an open connected set and let $x_0 \in O$. For any compact set $H \subset O$ there exists a positive constant $C_{H, \lambda}$, only depending on $O, H, x_0, \lambda$ and $\L_0$, such that $$ \sup_H u \le C_{H, \lambda} \, u(x_0),$$ for any nonnegative solution $u$ of $\L_\lambda u = 0$ in $O$.
If $u$ is a nonnegative solution of $\L_\lambda w = 0$ in $O$, then the function $v(x,t) := u(x)$ is a nonnegative solution of $\partial_t w + \L_\lambda w = 0$ in $\Omega := O \times I$, where $I$ is any open interval of $\R$. We choose $I =
]-2,1[$, $z_0 := (x_0, 0)$ and $K := H \times \{ -1 \}$. Then the Chow-Rashevskii theorem implies $\A_{z_0}(\O) = \O
\cap \{ t \le 0 \}$, thus $K \subset \mathrm{Int} \left( \A_{z_0}(\O) \right)$. We then apply Proposition \[H\*-lambda\] to $v$, and we obtain the Harnack estimate for $u$.
We consider now operators of the form $\L_0$, satisfying [(H0’)]{}, and [(H1’)]{} with the further property that they are invariant with respect to a family of dilations. Specifically, we suppose that $\R^N$ can be split as follows $$\R^N=\R^{m}\times \R^{m_2}\times \cdots \times \R^{m_n}, \quad \mbox{ and denote }
x= (x^{(m)}, x^{(m_2)}, \ldots, x^{(m_n)})\in \R^N,$$ where $n\geq 2$. We assume that there exists a group of dilations $D_r:\R^N \to \R^N$, defined for every $r>0$ as follows $$\begin{aligned}
D_r (x)= D_r \left(x^{(m)}, x^{(m_2)}, \ldots, x^{(m_n)}\right):= \left(r x^{(m)}, r^2 x^{(m_2)}, \ldots, r^n
x^{(m_n)}\right),\end{aligned}$$ which are automorphisms of $(\R^N,\cdot)$. In this case we say that $\ci = \left(\R^N, \cdot, (D_r)_{r>0}\right)$ is a [*homogeneous Lie group*]{}. It is well-known that $\ci$ is nilpotent (see for example [@LibroBLU Proposition 1.3.12]) and compactly generating. Moreover, the following two properties follow from the homogeneous structure of Carnot groups (see for example [@LibroBLU Theorem 1.3.15]). $$\label{eq-firstlayer}
(x \cdot y)^{(m)} = x^{(m)} + y^{(m)} \qquad \text{for every} \quad x, y \in \R^N.$$ $$\label{eq-lastlayer}
x \cdot y = y \cdot x = x + y \qquad \text{whenever} \quad x= (0^{(m)}, 0^{(m_2)}, \ldots, 0^{(m_{n-1})}, x^{(m_n)}).$$ We point out that means, in particular, that right and left multiplications by a point $x$ belonging to the last layer of the group agree.
When the vector fields $X_1, \dots, X_m$ are homogeneous of degree $1$ with respect to the dilation $(D_r)_{r>0}$, we say that $\ci:=\left(\R^N,\cdot,(D_r)_{r>0}\right)$ is a [*Carnot group*]{} and $X_1,\ldots, X_{m}$ are called [*generators*]{} of $\ci$. In this case, it is always possible to choose the $X_j$’s such that $X_j = \partial_{x_j} +
\sum_{k=m+1}^{N} b_{jk}(x)\p_{x_{k}}$, for $j=1,\ldots, m$, where the coefficients $b_{jk}(x)$ are polynomials. Moreover, all commutators $[X_j, X_k]$ only act on $(x^{(m_2)}, \ldots, x^{(m_n)})$, third order commutators $[X_i, [X_j, X_k]]$ only act on $(x^{(m_3)}, \ldots, x^{(m_n)})$, and $n$-th order commutators only act on $x^{(m_n)}$.
The corresponding [*sub-Laplacian*]{} $\varDelta_{\mathbb{G}} = \sum_{j=1}^m X_j^2$ agrees with $- \L_0$, and is always self-adjoint, that is $\varDelta_{\mathbb{G}}^* = \varDelta_{\mathbb{G}}$. For an extensive treatment on sub-Laplacians on Carnot groups we refer to the book [@LibroBLU] by Bonfiglioli, Lanconelli and Uguzzoni.
\[ex0-ell\] [Heisenberg group.]{} $\mathbb{H} := \left(\R^3, \cdot \right)$ is defined by the multiplication $$(\xi,\eta, \zeta) \cdot (x,y,z) := \left( \xi + x, \eta + y, \zeta + z + \tfrac12 (\xi y - \eta x) \right) \quad
(\xi,\eta, \zeta), (x,y,z) \in \R^3.$$ The vector fields $X_1$ and $X_2$, defined by $$X_1 := \partial_{x} - \tfrac12 y \partial_{z}, \quad X_2 := \partial_{y} + \tfrac12 x \partial_{z},$$ are invariant with respect to the left translation of $\mathbb{H} = \left(\R^3, \cdot \right)$, and with respect to the following dilation in $\R^3$ $$D_r (x,y,z) := \left( r x, r y, r^2 z \right)\qquad (x,y,z) \in \R^3, r > 0.$$ Note that we have $[X_1, X_2] = \partial_{z}$, and any other commutator is zero.
The sub-Laplacian on the Heisenberg group acts on a function $u = u(x,y,z)$ as follows $$\label{e-HG}
\varDelta_{\mathbb{H}} u:= \left(\partial_{x} - \tfrac12 y \partial_{z}\right)^2 u(x,y,z) + \left( \partial_{y} +
\tfrac12 x \partial_{z} \right)^2 u(x,y,z).$$
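Expanding the squares (we record this elementary computation because it makes the degeneracy of the operator explicit) we obtain $$\varDelta_{\mathbb{H}} = \partial_{x}^2 + \partial_{y}^2 + \tfrac{x^2+y^2}{4}\, \partial_{z}^2 + x\, \partial_{y}\partial_{z} - y\, \partial_{x}\partial_{z},$$ whose characteristic form has rank two at every point of $\R^3$.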
If $- \L_0$ is a sub-Laplacian in a Carnot group $\ci:=\left(\R^N,\cdot,(D_r)_{r>0}\right)$, we define a homogeneous group $\mathbb{G} =\left(\R^{N+1},\circ,(\delta_r)_{r>0}\right)$ by $$(x,t) \circ (\xi, \t) := ( x \cdot \xi, t + \t),\qquad \d_r (x,t) := \big( D_r x, r^2 t \big)$$ for every $(x,t), (\xi, \tau) \in \R^{N+1}$ and for any $r>0$. For $\omega = 0$ we have $\exp\left( \t (\omega \cdot X
+ Y) \right) = (0, -\t)$. Thanks to the invariance with respect to translations and dilations, the restricted uniform Harnack inequality of Proposition \[p-restr-harnack\] for such an operator $\L$ reads as $$\label{e-harnack-Heisenberg}
u\left(x, t-\tau \right) \le C_{\tau} u(x,t) \qquad \text{for every} \ (x,t) \in \OT, \tau >0,$$ and for any nonnegative solution of $\L u = 0$ in $\OT$.
The main result of this Section is the following version of the separation principle.
\[H\*-lambda-repr-par\] Let $\ci = \left(\R^N, \cdot, (D_r)_{r>0}\right)$ be a Carnot group, let $\varDelta_{\mathbb{G}}$ be its sub-Laplacian, and assume that $\L_0 = - \sum_{j=1}^m X_j^2$ agrees with $- \varDelta_{\mathbb{G}}$. If $u$ is an extremal nonnegative solution of $\L u = \partial_t u - \varDelta_{\mathbb{G}} u = 0$ in $\OT$, then $$u(x,t) = \exp \left( \langle x, \alpha \rangle + |\alpha|^2 t \right)$$ for some vector $\alpha =( \alpha_1, \dots, \alpha_m, 0, \dots, 0)$. Moreover, any nontrivial solution $v\in \H_+$ does not depend on the ‘degenerate’ variables $x_{m+1},\ldots,x_N$, and $v$ is strictly positive.
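Before giving the proof, we note that the functions appearing in the statement are indeed solutions; only their extremality requires an argument. Since $\alpha$ has nonzero components only in the first layer, $u$ depends only on $x^{(m)}$, so the terms $b_{jk}(x) \p_{x_k}$ with $k > m$ annihilate $u$ and, for $j = 1, \dots, m$, $$X_j u = \partial_{x_j} u = \alpha_j\, u, \qquad X_j^2 u = \alpha_j^2\, u, \qquad \varDelta_{\mathbb{G}}\, u = |\alpha|^2 u = \partial_t u.$$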
We first give the proof in the simplest (nontrivial) case of the Heisenberg group $\mathbb{H}$, in order to show the main idea of the proof. Let $c$ be any real constant, and let $(x,y,z,t)$ be a given point of $\R^4$. A direct computation shows that $$\begin{gathered}
\label{eq_funct-H}
\exp\!\big( s \!\left(\!- c X_2 \!-\! \partial_t \right)\! \big)\! \exp\!\big(s \!\left(\!- c X_1 \!-\! \partial_t \!\right)\! \big)\!
\exp\!\big(s \!\left(\!c X_2 \!-\! \partial_t \!\right)\! \big)\! \exp\!\big(s \!\left(\! c X_1 \!-\! \partial_t \!\right) \! \big)\!(x,y,z,t)=\\[2mm]
(x,y,z + c^2 s^ 2, t - 4 s),\end{gathered}$$ for every positive $s$. Note that for any $u\in\H_+$, we have that $$v(x,y,z,t):=u(x,y,z + c^2 s^ 2, t - 4 s) \in \H_+.$$ Since hypothesis (H2) holds true, Proposition \[prop-separation\] implies that for any extremal solution $u\in\H_+$ there exists a positive constant $C_s$, that may depend on $c$, such that $$u(x,y,z + c^2 s^ 2, t - 4 s) = C_s u(x,y,z, t) \qquad \forall (x,y,z, t) \in \OT,$$ and for every positive $s$. The standard argument used in the last part of the proof of Theorem \[th-main\] implies that $$\label{eq_funct_H}
u(x,y,z + c^2 s^ 2, t - 4 s) = e^{\beta_c s} u(x,y,z, t) \qquad \forall (x,y,z, t) \in \OT,$$ and for every positive $s$. Note that for $c=0$, the above identity restores $$u(x,y,z, t -s) = e^{\tilde\beta_0 s} u(x,y,z, t).$$ Combining it with we find $$u(x,y,z + c^2 s^ 2, t) = e^{\tilde\beta_c s} u(x,y,z, t) \qquad \forall (x,y,z, t) \in \OT,$$ for some real constant $\tilde\beta_c$. The above identity can be written equivalently as $$\label{eq_funct_H-bis}
u(x,y,z', t) = e^{\tilde\beta_c \sqrt{|z'-z|}} u(x,y,z, t) \qquad \forall (x,y,z, t), (x,y,z', t) \in \OT.$$ We finally note that contradicts the regularity of $u$ unless $\tilde\beta_c=0$. Since $u$ is smooth by Hörmander’s condition (H0), we have necessarily $\tilde\beta_c=0$. Hence $u = u(x,y,t)$ is a nonnegative extremal solution of $\partial_t u = \Delta u$, and the conclusion of the proof, in the case of the Heisenberg group $\mathbb{H}$, follows from the classical representation theorem for the heat equation [@Pinchover88].
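For the reader’s convenience, here is the flow computation behind the four-fold composition used above; the list below records the point reached after each factor, reading the composition from right to left, and is introduced only for this check. Starting from $(x,y,z,t)$ and integrating each field over a time interval of length $s$ we get $$\begin{aligned}
\exp\big(s \left( c X_1 - \partial_t \right) \big)&: \quad \big(x+cs,\ y,\ z - \tfrac{c}{2} y s,\ t-s\big),\\
\exp\big(s \left( c X_2 - \partial_t \right) \big)&: \quad \big(x+cs,\ y+cs,\ z - \tfrac{c}{2} y s + \tfrac{c}{2}(x+cs) s,\ t-2s\big),\\
\exp\big(s \left( -c X_1 - \partial_t \right) \big)&: \quad \big(x,\ y+cs,\ z + \tfrac{c}{2} x s + c^2 s^2,\ t-3s\big),\\
\exp\big(s \left( -c X_2 - \partial_t \right) \big)&: \quad \big(x,\ y,\ z + c^2 s^2,\ t-4s\big).\end{aligned}$$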
Before considering any Carnot group $\ci$, we point out that the above proof only relies on the fact that $\partial_z$ is the highest order commutator of a nilpotent Lie group. In particular, the operator $\L$ is translation invariant with respect to $z$, and $\partial_z$ has been obtained by . Then Proposition \[prop-separation\] gives , which in turn contradicts the smoothness of $u$.
Let $\L = \partial_t - \varDelta_{\mathbb{G}}$, where $\varDelta_{\mathbb{G}}$ is a sub-Laplacian on a Carnot group $\ci$. We recall the Baker-Campbell-Hausdorff formula. If $X_j, X_k$ are the vector fields belonging to the first layer of $\ci$, then $$\begin{split}
\exp\big(s \left(X_j - \partial_t \right) \big) & \exp\big(s \left(X_k - \partial_t \right) \big)(x,t) = \\
& \exp\left(s \left( \big( X_j + X_k \big) - 2 \partial_t\right) + \tfrac{s^2}{2}\big[ X_k, X_j \big] + R_{jk}(s)
\right) (x,t)
\end{split}$$ for any $s \in \R$, where $R_{jk}$ denotes a polynomial function of the form $$ R_{jk}(s) = \sum_{i=3}^n c_{i,jk} s^i$$ whose coefficients $c_{i,jk}$’s are sums of commutators of $X_1, \dots, X_m$ of order $i$. In particular, we have $$\begin{gathered}
\exp\!\big(s \!\left(\!- X_j \!-\! \partial_t \!\right)\! \big)\! \exp\!\big(s \!\left(\!-X_k \!-\! \partial_t \!\right)\! \big)\! \exp\!\big(\!s\! \left(\!X_j\! - \!\partial_t \!\right) \!\big)\! \exp\!\big(\!s \!\left(\!X_k \!-\! \partial_t \!\right)\! \big)\!(x,t)= \\[2mm]
\exp\left(-4 s \partial_t + \tfrac{s^2}{2}\big[ X_k, X_j \big] + R_{jk}(s) \right) (x,t).\end{gathered}$$ We can express the variable $x^{(m_n)}$ of the last layer of $\ci$ in terms of commutators of order $n$ with zero remainder. In particular, by repeating the use of the Baker-Campbell-Hausdorff formula, we can express every vector $x_j^{(m_n)}$ of a basis of the last layer of $\ci$ as $$ x_j^{(m_n)} = \exp\big( - X_{j_k} \big) \dots \exp\big( - X_{j_1}),$$ for a suitable choice of $X_{j_1}, \dots, X_{j_k}$ in the first layer of $\ci$. In particular, we have that $$ u\big(x + s^n x_j^{(m_n)},t-k s\big) = u \left(\exp\big( - s( X_{j_k} - \partial_t) \big) \dots \exp\big( - s (
X_{j_1}- \partial_t)\big) (x,t) \right),$$ for every $(x,t) \in \OT$ and every positive $s$. On the other hand, by $x + s
x_j^{(m_n)}$ is at once a *right* and *left* translation on the group $\ci$. Then, in particular, $(x,t)
\mapsto u \big(x + s x_j^{(m_n)},t\big)$ is a solution of $\L u = 0$ for every $s \in \R$. Thus, if $u$ is an extremal solution of $\L u = 0$, Proposition \[prop-separation\], combined with , yields $$ u\big(x + c s^n x_j^{(m_n)},t\big) = e^{\beta s} u(x,t),$$ for every $x \in \R^N$ and $s \ge 0$. Here $c$ is a real constant that may depend on $x_j^{(m_n)}$. As in the case of the Heisenberg group, this identity contradicts the smoothness of $u$, unless $u$ doesn’t depend on $x^{(m_n)}$. Thus, $u = u\left(x^{(m)}, \dots, x^{(m_{n-1})} \right)$ is an extremal solution of $\L' u = 0$, where $\L' = \partial_t
- \varDelta_{\mathbb{G}'}$, and $\varDelta_{\mathbb{G}'}$ is a sub-Laplacian on a Carnot group $\mathbb{G}'$ on $\R^{N -
m_n}$ defined as the restriction of $\mathbb{G}$ to the first $N-m_n$ variables of $\R^N$. The conclusion of the proof follows by a backward iteration of the above argument.
Stationary equations
====================
\[sec\_Elliptic\] In the present section we consider stationary equations, and we prove a result analogous to Theorem \[H\*-lambda-repr-par\]. We first introduce some notation. Fix any $\lambda \in \R$, and consider an operator $\L_\lambda$ of the form on $\R^N$, satisfying [(H0’)]{} and [(H1’)]{}. We set $$\begin{aligned}
& \H_\lambda := \Big\{u \in C^\infty (\R^N) \mid \L_\lambda u = 0 \quad \mbox{in } \R^N \Big\}, \label{e-def-Hl} \\
& \H_\lambda^+ := \Big\{u \in \H_\lambda \mid u \ge 0, u(0)=1 \Big\}. \label{e-def-Hl+}\end{aligned}$$ Note that in light of Proposition \[H\*-lambda-stationary\], the generalized principal eigenvalue $\lambda_0$ defined in can be characterized as $$ \lambda_0:=\sup \Big\{\lambda \in \R \mid \H_\lambda^+ \neq \emptyset \Big\}.$$ Moreover, by the strong minimum principle (or Proposition \[H\*-lambda-stationary\]), any function $u \in
\H_\lambda^+$ never vanishes. The results proved in Section \[sec\_funct\] for $\H$, $\H_+$, and $\H_a$ plainly extend to $\H_\lambda$ and $\H_\lambda^+$. In particular, it follows that $\H_\lambda^+$ is a convex compact set (for a reference set for $\L_\lambda$ in $\R^N$ one can choose any singleton). Hence any function in $\H_\lambda^+$ can be represented by a measure supported on the set of all extreme points of $\H_\lambda^+$.
\[H\*-lambda-repr\] Let $\ci = \left(\R^N, \cdot, (D_r)_{r>0}\right)$ be a Carnot group, let $\varDelta_{\mathbb{G}}$ be its sub-Laplacian, and assume that $\L_0 = - \sum_{j=1}^m X_j^2$ agrees with $- \varDelta_{\mathbb{G}}$. Then $\lambda_0 = 0$, and for any $\lambda \leq 0$, $u\in \H_\lambda^+$ is an extremal solution if and only if $$u(x) = u_\alpha(x):=\exp \left( \langle x, \alpha \rangle \right)$$ for some vector $\alpha =( \alpha_1, \dots, \alpha_m, 0, \dots, 0)$ such that $\|\alpha\|^2 = - \lambda$. Moreover, $u\in \H_\lambda^+$ if and only if there exists a unique probability measure $\mu$ on $\mathbb{S}^{m-1}$ such that $$u(x)=\int_{\xi\in \mathbb{S}^{m-1}} \exp \left( \sqrt{-\lambda}\langle x, \xi \rangle \right) \,\mathrm{d}\mu(\xi).$$
It is a direct consequence of Theorem \[H\*-lambda-repr-par\] and Choquet’s theorem. Recall that as in [@Pinchover88 Theorem 2.1], if the separation principle of the form holds true, then $u_\lambda$ is an extremal solution of $\L_\lambda v = 0$ in $\R^N$ if and only if the function $u(x,t) := e^{-\lambda t} u_\lambda(x)$ is a nonzero extremal solution of $\L w = 0$ in $\R^{N+1}$ (see also Remark ). The conclusion immediately follows from Theorem \[H\*-lambda-repr-par\].
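To illustrate the statement in the simplest nontrivial case, consider the Heisenberg group of Example \[ex0-ell\] (so $m=2$, $N=3$): for $\lambda \le 0$ the functions singled out by the theorem are $$u_\alpha(x,y,z) = \exp\left( \alpha_1 x + \alpha_2 y \right), \qquad \alpha_1^2 + \alpha_2^2 = -\lambda,$$ where we write $u_\alpha$ only for brevity. One checks at once that $u_\alpha \in \H_\lambda^+$: since $\partial_z u_\alpha = 0$, we have $X_1 u_\alpha = \alpha_1 u_\alpha$ and $X_2 u_\alpha = \alpha_2 u_\alpha$, whence $\varDelta_{\mathbb{H}} u_\alpha = (\alpha_1^2+\alpha_2^2)\, u_\alpha = -\lambda\, u_\alpha$, that is, $\L_\lambda u_\alpha = 0$; moreover $u_\alpha \ge 0$ and $u_\alpha(0)=1$.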
As a result we obtain the following nonnegative Liouville theorem.
\[cor-Liouville\] If $u \in \H_0^+$ and $ - \L_0$ is a sub-Laplacian $\varDelta_{\mathbb{G}}$ on a Carnot group $\mathbb{G}$, then $u =\mathbf{1}$, where $\mathbf{1}$ is the constant function taking the value $1$ in $\R^N$.
If $\L_0 = - \sum_{j,k=1}^m a_{jk} X_j X_k$ for some symmetric positive definite constant matrix $A = \left( a_{jk}
\right)_{j,k=1,\dots, m}$, then the result of Theorem \[H\*-lambda-repr\] clearly applies with $$u(x) = u_{A;\alpha}(x)=\exp \left( \langle A^{-1} x, \alpha \rangle \right),$$ with $\alpha=(\alpha_1, \dots, \alpha_m, 0, \dots, 0)\in \R^N$ such that $\langle A^{-1} \alpha, \alpha \rangle = -
\lambda$.
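As a sanity check (using only the symmetry of $A$, and understanding, as in the formula above, that $A^{-1}$ acts on the first $m$ coordinates of $x$): for $j = 1, \dots, m$ we have $X_j u_{A;\alpha} = (A^{-1}\alpha)_j\, u_{A;\alpha}$, because $u_{A;\alpha}$ depends only on the first-layer variables, and therefore $$\sum_{j,k=1}^m a_{jk}\, X_j X_k\, u_{A;\alpha} = \big\langle A A^{-1}\alpha,\, A^{-1}\alpha \big\rangle\, u_{A;\alpha} = \langle \alpha, A^{-1}\alpha \rangle\, u_{A;\alpha} = -\lambda\, u_{A;\alpha},$$ that is, $\L_\lambda u_{A;\alpha} = 0$.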
Parabolic Liouville theorems
============================
\[ssec\_Liouville\] In the present section we assume that $\L$ is a hypoelliptic operator of the form $$\label{e11n}
\L := \partial_t + \L_0, \qquad \L_0 := - \sum_{j=1}^m X_j^2$$ satisfying [(H0’)]{} and [(H1’)]{}. In particular, $\L$ is of the form with $X_0=0$.
We say that $ \L_0$ satisfies the *nonnegative Liouville property* if any nonnegative solution of $\L_0 u=0$ in $\R^N$ is equal to a constant. Recall that $$ \lambda_0:=\sup \Big\{\lambda \in \R \mid \exists u_\lambda\gneqq 0\;
\mbox{ s.t. }\left(\L_0 - \lambda \right)u_\lambda =0 \mbox{ in } \R^N \Big\}$$ denotes the generalized principal eigenvalue of the operator $\L_0$.
We assume that
[*a)*]{}
: $\L_0$ satisfies the nonnegative Liouville property,
[*b)*]{}
: $\lambda_0=0$.
We note that the nonnegative Liouville property clearly implies the Liouville property for [*bounded*]{} solutions: any bounded solution of $\L_0 u=0$ in $\R^N$ is equal to a constant.
Properties [*(a)-(b)*]{} hold whenever $\mathbb{G}$ is nilpotent and $\L_0=\L_0^*$ (see [@LinPinchover94] for a similar statement), and in particular, under the assumptions of Theorem \[H\*-lambda-repr\] (see the aforementioned theorem and Corollary \[cor-Liouville\], see also [@KogojLanconelli07]). Property [*(a)*]{} also holds when all the $X_j$’s are homogeneous of degree 1 with respect to a dilation group. It is also true for a wide class of operators including Grushin-type operators $$\L_{0} = - \partial_x^2 - x^{2 \alpha} \partial_y^2\,,$$ where $\alpha$ is any positive constant (see [@KogojLanconelli09]). Property [*(b)*]{} is well studied in the nondegenerate case, and our Theorem \[H\*-lambda-repr\] is a first result for degenerate operators. We aim to study this property under more general assumptions in a forthcoming work.
Since $X_0=0$, Theorem \[th-main\] implies that a nonzero $u$ belongs to $\exr \H_+$ if and only if it satisfies the separation principle, namely, $$u(x,t)=e^{-\lambda t} \varphi_\lambda( x),$$ where $\varphi_\lambda$ is an extreme positive solution of the equation $\big(\sum_{j=1}^m X_j^2 +\lambda\big) u =0$ in $\R^N$, and $\lambda\le \lambda_0=0$. Consequently, the following nonnegative Liouville theorem holds for $\L$ in $\R^{N}\times \R$.
\[thm\_end\] Assume that $\L_0$ satisfies the nonnegative Liouville property and that $\lambda_0=0$. Let $u\geq 0$ be a solution of the equation $$\left(\partial_t + \L_0 \right) u = \partial_t u - \sum_{j=1}^m X_j^2 u = 0
\qquad \text{in} \quad \R^{N}\times \R$$ such that $$u(0,t)=O(\mathrm{e}^{\varepsilon t})\qquad \mbox{ as } t\to\infty,$$ for any $\varepsilon>0$. Then $u=\mathrm{constant}$.
This result should be compared with the Liouville theorems proved by Kogoj and Lanconelli in [@KogojLanconelli05; @KogojLanconelli06; @KogojLanconelli07], where it was assumed that the operator $\L$ is of the form , $\L$ is not necessarily translation invariant, but it is invariant with respect to a dilation group $\left( \d_r \right)_{r > 0}$, and satisfies an *oriented connectivity condition* that is, (using our notation) $$\A_{(x_0,t_0)} = \R^N \times ]-\infty, t_0[, \qquad \text{for every} \quad (x_0,t_0) \in \OT.$$ In this case, a (stronger) sufficient growth condition for the validity of the above Liouville theorem is $$u(0,t)=O(t^m)\qquad \mbox{ as } t\to\infty, \mbox{ for some } m>0.$$ In particular, in this case, the nonnegative Liouville theorem holds true for the stationary equation (without any growth condition, see [@KogojLanconelli05 Corollary 1.2]).
Positive Cauchy Problem
=======================
\[sec\_Cauchy\]
In the present section we consider the positive Cauchy problem for $\L$ in $S_T := \R^N \times\, ]0,T[$ with $0 < T \le
+
\infty$, where $\L$ is of the form . Our aim is to prove the following uniqueness result for the positive Cauchy problem under the assumption that $X_0=0$.
\[th-main-2\] [Let $\L$ be an operator of the form , satisfying [(H0’)]{} and [(H1’)]{}, and let $u_0 \geq 0$ be a continuous function in $\R^N$. Then the positive Cauchy problem $$\label{p-Cauchy-X0}
\begin{cases}
\p_t u = \sum_{j=1}^m X_j^2 u & \quad (x,t) \in S_T, \\
u ( x, 0) = u_0(x)\geq 0 & \quad x \in \R^N,\\
u (x,t)\geq 0 & \quad (x,t) \in S_T,
\end{cases}$$ admits at most one solution.]{}
We note that the first uniqueness result for the positive Cauchy problem was established by Widder for the classical heat equation in the Euclidean space [@Widder].
The proof of Theorem \[th-main-2\] relies on Theorem \[th-main\] which, under the additional assumption $X_0=0$, asserts that every nonnegative extremal solution $u$ of $\L u = 0$ in $\OT$ satisfies $$ u(x,t) = \mathrm{e}^{-\lambda t} u(x,0) \qquad \forall (x,t) \in \OT,$$ where $\lambda \leq \lambda_0$, and $\lambda_0$ is the generalized principal eigenvalue (see Remark \[r-separation\]).
Before giving the proof of Theorem \[th-main-2\], we should compare it with a result of Chiara Cinti [@Cinti09] who considered a class of left translation invariant hypoelliptic operators with nontrivial drift $X_0$ under the additional hypothesis that the operator is [*homogeneous*]{} with respect to a group of dilations on the underlying Lie group. The method used in [@Cinti09] relies on some accurate upper and lower bounds of the fundamental solution of $\L$. We note that the lower bounds for the fundamental solution are usually obtained by constructing suitable *Harnack chains*, as the ones used in the proof of Theorem \[th-main\]. On the other hand, in order to apply the method used in [@Cinti09], the upper and lower bounds need to agree asymptotically. Hence, the Harnack chains need to be chosen in some optimal way. An advantage of our method is that it does not require such an optimization step. Actually, a priori bounds of the fundamental solution, and even its existence are not needed. We also note that the bibliography of [@Cinti09] contains an extensive discussion of known results on the uniqueness of the Cauchy problem. We also recall a recent result by Bumsik Kim [@Kim] for the heat equation associated with subelliptic diffusion operators. In his work, Kim proves uniqueness results for the heat equation under curvature bounds through the generalized curvature-dimension criterion developed by Baudoin and Garofalo and thus without the Lie group assumption.
We start the proof of Theorem \[th-main-2\] with some preliminary results that do not require the assumption $X_0=0$.
Consider the positive Cauchy problem $$\label{p-Cauchy}
\begin{cases}
\L u(x,t) = 0 & \quad (x,t) \in S_T, \\
u ( x, 0) = u_0(x) & \quad x \in \R^N,\\
u (x,t)\geq 0 & \quad (x,t) \in S_T,
\end{cases}$$ with $u_0\geq 0$ a continuous function in $\R^N$.
We first recall some basic results on hypoelliptic operators of the form . Usually, hypoelliptic operators have been studied under the further assumption that the operator is *non-totally degenerate*, namely, that there exist a vector $\nu \in \R^N$ and $j \in \big\{1, \dots, m\big\}$ such that $$\label{e-Bony}
\langle X_j(x), \nu \rangle \ne 0\qquad \mbox{ for all } x \in \R^N.$$ This condition was introduced by Bony in [@Bony] and is not very restrictive. We also refer to [@BonfiglioliLanconelli-2012] for a weaker version of this condition.
We observe that can be always satisfied by a simple *lifting* procedure. Indeed, let $\L$ be of the form , and consider the operator $\widetilde \L$ acting on $(x_0, x, t) \in \R^{N+2}$ and defined by $$\begin{aligned}
\widetilde \L u : = - \partial_{x_0}^2 u + \L u = \p_t u - \partial_{x_0}^2 u - \sum_{j=1}^m X_j^2 u + X_0 u.\end{aligned}$$ Clearly, $\widetilde \L$ is non-totally degenerate with respect to $\nu=(1,0,\ldots,0)\in\R^{N+1}$. Moreover, $\widetilde \L$ is hypoelliptic and satisfies (H1) and (H2) if $\L$ is hypoelliptic and satisfies (H1) and (H2). Our uniqueness result for $\L$ readily follows from the uniqueness for $\widetilde \L$. Therefore, in the sequel we assume that $\L$ satisfies .
We recall Bony’s [*strong maximum principle*]{} [@Bony Théorème 3.2] for hypoelliptic operators $\L$ of the form that satisfy . With our notation, it reads as follows. *Let $\Omega$ be any open subset of $\R^{N+1}$ and let $u \in C^{2}(\Omega)$ be such that $\L u
\le 0$ in $\Omega$. Let $z_0 \in \Omega$ be such that $u(z_0) = \max_{\O}u$. If $\gamma: [0,T_0] \to \Omega$ is an $\L$–admissible path such that $\g(0) = z_0$, then $u(\g(s)) = u(z_0)$ for every $s \in [0,T_0]$.*
The following [*weak maximum principle*]{} can be obtained as a consequence of Bony’s strong maximum principle. *Let $\Omega$ be any bounded open set of $\R^{N+1}$ and let $u \in C^{2}(\Omega)$ be such that $\L u
\leq 0$ in $\Omega$. If $\limsup_{\underset{z\in \Omega}{z \to w}} u (z) \le 0$ for every $w \in \partial \Omega$, then $u \le 0$ in $\Omega$.*
Let $\Omega$ be any bounded open set of $\R^{N+1}$, and let $\varphi \in C(\partial \O)$. The axiomatic potential theory provides us with the Perron solution $u_\varphi$ of the boundary value problem $\L u = 0$ in $\O$, $u = \varphi$ in $\partial \O$. It is known that $u_\varphi$ might attain the prescribed boundary data only in a subset of $\partial
\O$. We say that $w \in \partial \O$ is *regular* for $\L$ if $\lim_{\underset{z\in \Omega}{z \to w}} u_\varphi (z)
= \varphi(w)$ for every $\varphi \in C(\partial \O)$. We denote by $\partial_r(\Omega)$ the set of the regular points of $\partial \O$ $$\partial_r(\Omega) := \big\{ w \in \partial \O \mid \lim_{\underset{z\in \Omega}{z \to w}} u_\varphi (z) =
\varphi(w) \
\text{ for every } \varphi \in C(\partial \O) \big\}.$$
Under assumption it is possible to construct a family of *regular cylinders* of $\R^{N+1}$, that is, cylinders whose regular boundary agrees with their *parabolic boundary* [@LanconelliPascucci]. Specifically, we denote by $B(x,r)$ the Euclidean ball centered at $x \in \R^N$ with radius $r$. Let $\nu$ be a vector satisfying , and assume, as it is not restrictive, that $|\nu|=1$. For every $x \in \R^N$ and $k \in \N$ we set $$B_k(x) := B(x + k \nu, 2k) \cap B(x - k \nu, 2k).$$ It turns out that for every $x\in \R^N$, $k\in \N$, and $0<T_0<\infty$, the cylinder $Q_{k,T_0} (x) := B_k(x) \times
\,]0,T_0[$ is regular, see [@LanconelliPascucci] for a detailed proof of this statement.
We note that the sequence of regular cylinders $\left( Q_{k,T_0} (0)\right)_{\underset{k \in \N}{0<T_0<T}}$ exhausts the set $S_T$. This property will be used in the sequel.
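The exhaustion property follows from the elementary inclusions $$B(x,k) \subset B_k(x) \subset B(x,3k), \qquad x \in \R^N, \ k \in \N,$$ which are immediate consequences of the triangle inequality and of $|\nu|=1$: if $|p-x| < k$ then $|p - x \mp k \nu| \le |p-x| + k < 2k$, while $p \in B_k(x)$ gives $|p - x| \le |p - x - k \nu| + k < 3k$.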
Consider a regular cylinder $Q := B \times\, ]0,T_0[$ and a function $f \in C(\overline Q)$. In [@LanconelliPascucci Theorem 2.5] it is proved that for a hypoelliptic operator $\L$ of the form satisfying there exists a unique solution $u \in C^{\infty}(Q) \cap C(Q\cup \partial_r(Q))$ to the following initial-boundary value problem $$\label{p-Dirichlet}
\begin{cases}
\L u= f & \quad \text{in} \ Q, \\[2mm]
u = 0 & \quad \text{in} \ \partial_r Q.
\end{cases}$$ We next show that the same result holds when a continuous compactly supported initial condition is prescribed on the bottom of $Q$.
\[lem-bvp\] Let $\L$ be a hypoelliptic operator of the form satisfying , and let $Q := B \,\times\,
]0,T_0[$ be a regular cylinder. Let $\varphi \in C(B)$ be such that *supp*$(\varphi) \subset B$. Then there exists a unique solution $u \in C^{\infty}(Q) \cap C(Q\cup \partial_r Q)$ of the following initial-boundary value problem $$\label{p-Dirichlet-c}
\begin{cases}
\L u= 0 & \quad \text{in} \ Q, \\
u (x,t) = 0 & \quad \text{in} \ \partial B \times [0,T_0], \\
u (x,0) = \varphi(x) & \quad \text{in} \ B \times \big\{ 0 \big\}.
\end{cases}$$
We use a standard argument. Consider, for any positive $\e$, a function $w_\e \in C^\infty \left(\overline{Q}\right)$ such that $w_\e(\cdot, 0) \to \varphi$ uniformly as $\e \to 0$, and such that $w_\e$ vanishes on the lateral boundary of $Q$. Set $f_\e := \L w_\e$, and note that $f_\e$ is continuous on $\overline Q$. We recall that we can solve uniquely the initial-boundary value problem of the form . So, let $v_\e$ be the unique solution of the following problem $$\begin{cases}
\L v_\e= f_\e & \quad \text{in} \ Q, \\
v_\e = 0 & \quad \text{in} \ \partial_r Q.
\end{cases}$$ The function $u_\e : = w_\e - v_\e$ is clearly the unique solution of $$\begin{cases}
\L u_\e= 0 & \quad \text{in} \ Q, \\
u_\e (x,t) = 0 & \quad \text{in} \ \partial B \times [0,T_0], \\
u_\e (x,0) = w_\e (x,0) & \quad \text{in} \ B \times \big\{ 0 \big\}.
\end{cases}$$ By the maximum principle, $u_\e$ uniformly converges to a continuous function $u$ that is a classical solution of . The uniqueness follows from the weak maximum principle.
Next, we apply the well-known argument (introduced by Donnelly for nondegenerate parabolic equations [@Donnelly]) to show that the uniqueness for the positive Cauchy problem is equivalent to the uniqueness of the positive Cauchy problem with the [*zero*]{} initial condition. To this end, we prove the following proposition, which clearly implies the above equivalence.
\[prop\_Donn\] Let $\L$ be a hypoelliptic operator of the form , satisfying . If $u \in C(\overline {S_T}) \cap C^\infty(S_T)$ is a solution of the positive Cauchy problem , then there exists a minimal nonnegative solution $\widetilde u$ of . Namely, $0 \le
\widetilde u \le v$ in $S_T$ for any solution $v$ of .
We use a standard exhaustion argument. Consider a sequence of continuous functions $\psi_k: \R^N \to \R$ such that $0 \le \psi_k(x) \le 1$, for any $k \in \N$, and that $\psi_k(x) = 1$ whenever $|x| \le k$, and $\psi_k(x) =
0$ if $|x| \ge k+1$. Consider a sequence $Q_k := \Omega_k \times\, ]0,T_k[$ of regular cylinders, such that supp$\left(
\psi_k \right) \subset \Omega_k$, and $T_k\nearrow T$. Let $\widetilde u_k$ be the solution to $$\begin{cases}
\L \widetilde u_k= 0 & \quad \text{in} \ Q_k, \\
\widetilde u_k (x,t) = 0 & \quad \text{in} \ \partial \Omega_k \times [0,T_k], \\
\widetilde u_k (x,0) = \psi_k(x)u_0(x) & \quad \text{in} \ \Omega_k \times \big\{ 0 \big\},
\end{cases}$$ whose existence is given by Lemma \[lem-bvp\]. By the comparison principle, $\left( \widetilde u_k \right)_{k
\in \N}$ is a nondecreasing sequence of nonnegative solutions of the equation $\L \widetilde u_k = 0$, such that $\widetilde u_k (x,t) \le u(x,t)$. Then, the function $$\widetilde u (x,t) := \lim_{k \to \infty} \widetilde u_k (x,t)$$ is a distributional solution of $\L \widetilde u = 0$ in $S_T$ such that $0 \le \widetilde u \le u$ in $S_T$. By the hypoellipticity of $\L$, $\widetilde u$ is a smooth classical solution of the equation $\L \widetilde u = 0$ in $S_T$. In order to prove that $\widetilde u$ takes the initial condition, we fix any $x_0 \in \R^N$, and we choose $k_0 > |x_0|$. We have $$\widetilde u_{k_0} (x,t) \le \widetilde u (x,t) \le u (x,t),$$ for every $(x,t) \in Q_{k_0}$. Since $\widetilde u_{k_0}(x,t) \to \psi_{k_0}(x_0) u_0(x_0) = u_0(x_0)$ and $u(x,t) \to u_0(x_0)$ as $(x,t) \to (x_0,0)$, it follows that $\widetilde u(x,t) \to u_0(x_0)$ as $(x,t) \to (x_0,0)$, and this concludes the proof.
\[cor\_uniquenesspositive\] The positive Cauchy problem has a unique solution if and only if any nonnegative solution of the positive Cauchy problem with $u_0 = 0$ is the trivial solution $u=0$.
In the following proof of Theorem \[th-main-2\], which relies on Choquet’s integral representation theorem and the separation principle , we resume the assumption $X_0=0$.
By Corollary \[cor\_uniquenesspositive\], we may assume that $u_0 = 0$.
So, let $S_T = \R^N \times\, ]0,T[$ with $0 < T \le + \infty$, and let $u: S_T \to \R$ be a solution of the positive Cauchy problem $$\label{e-Cauchy}
\begin{cases}
\L u (x,t)= 0 & \quad (x,t) \in S_T, \\
u ( x, 0) = 0 & \quad x \in \R^N,\\
u (x,t)\geq 0 & \quad (x,t) \in S_T.
\end{cases}$$ We need to prove that $u = 0$.
As in [@KoranyiTaylor85], we extend the solution $u$ of the Cauchy problem to the whole domain $\OT$ by setting $$\tilde{u}(x,t) := \left\{
\begin{array}{ll}
u(x,t) & \quad t \in [0,T[, \\
0 & \quad t<0.
\end{array}
\right.$$ It is easy to see that $\tilde{u}$ is a distributional solution of $\L u = 0$ in $\OT$. Hence, the hypoellipticity of $\L$ yields that $\tilde{u}$ is a nonnegative smooth classical solution of the equation $$\label{e-Cauchy-2}
\L w= 0 \qquad \mbox{in }\OT,$$ and $\tilde{u} = 0$ in $\R^N \times \R^-$. We need to prove that $\tilde{u}=0$ in $S_T$.
Suppose that $\tilde{u}\neq 0$, and let $a = \left( a_k \right)_{k \in \N}$ be a nonnegative sequence such that $\tilde{u}\in \H_{a}^1$. By Choquet’s integral representation theorem and , it follows that $\tilde{u}$ can be represented as $$\label{int_rep}
\tilde{u}(x,t) = \int_{\H_+}v(x,t) d \mu (v)$$ for some probability measure $\mu$ supported on $\big\{ 0 \big\} \cup \big\{ \exr \H_+ \cap \H_{a}^1 \big\}$. Recall that $\tilde{u}(x,t)=0$ for $t\leq 0$. On the other hand, by , any nonnegative solution $v\in \big\{
\exr \H_+ \cap \H_{a}^1 \big\}$ is strictly positive in a neighborhood of an integral curve of the form $$\gamma:=\big\{ \exp\left( s \left(\omega \cdot X + Y \right) \right) z_0 \mid s \in \;] t_0- T, + \infty[\big\},$$ where $z_0 = (x_0,t_0)$ might depend on $v$. In particular, all such $v$ are strictly positive in $\R^N \times \R^-$. Therefore, implies that $$\mu \big\{ \exr \H_+ \cap \H_{a}^1 \big\} = 0.$$ Hence, $\tilde{u} = 0$.
Mumford operator
================
\[sec\_mumford\] The Mumford operator $\mathscr{M}$ is defined as $$\label{ex-Mum}
\mathscr{M} u := \p_{t} u - \cos(x) \p_y u - \sin(x) \p_w u - \p_x^2 u \qquad (x,y,w,t) \in \R^4.$$ It models the relative likelihood that edges disappearing in a scene are matched up by hidden edges, and it explains the role of *elastica* in computer vision [@Mumford]. In the present section we prove the uniqueness of the positive Cauchy problem for $\M$, and we establish some properties of the minimal positive solutions of $\M u = 0$. The following proposition allows us to apply our results to $\M$.
\[prop-mum\] The Mumford operator $\M$ satisfies conditions [(H0)]{} and [(H1)]{}, with the group operation $$\label{mumgl}
\begin{split}
(x_0,y_0,w_0,t_0) \, \circ & \, (x,y,w,t) := \\
& \big(x_0 + x, y_0 + y \cos (x_0) - w \sin (x_0), \\
& \qquad w_0 + y \sin (x_0) + w \cos (x_0), t_0 + t\big)
\end{split}$$ for every $(x_0,y_0,w_0,t_0), (x,y,w,t) \in \R^4$. Moreover, $\M$ satisfies [(H2)]{} with $\omega \ne 0$.
Condition (H0) is verified by a direct computation. Moreover, it is known that $\M$ is invariant with respect to the left translations of the group $\mathbb{G}:= (\R^3\times \R, \circ)$ on $\R^4$ whose operation is defined by , (see [@BonfiglioliLanconelli-2012 Formula (61)]). $\mathbb{G}$ is called in the literature the *roto-translation group*.
In order to check (H2), we note that $$\label{eq-gamma-0-Mum}
\exp\left(s Y \right) (x,y,w,t) = \left(x ,y + s \cos(x), w + s \sin(x), t-s\right),$$ where $Y = \cos(x) \p_y + \sin(x) \p_w -\p_{t}$ (see ), while $$\label{eq-gamma-Mum}
\begin{split}
& \exp\left(s (\omega X + Y)\right) (x,y,w,t) = \\
& \qquad \qquad \left(x + s \omega,y +
\frac{\sin(x + s \omega) - \sin(x)}{\omega},w - \frac{\cos(x + s \omega) - \cos(x)}{\omega},t-s\right),
\end{split}$$ for every $(x,y,w,t) \in \R^4$, and $s,\omega \in \R$, with $\omega \ne 0$.
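These two formulas follow from an explicit integration: by definition, the curve $s \mapsto \exp\left(s (\omega X + Y)\right) (x,y,w,t)$ solves the system of ODEs $$\dot x(s) = \omega, \qquad \dot y(s) = \cos\big(x(s)\big), \qquad \dot w(s) = \sin\big(x(s)\big), \qquad \dot t(s) = -1,$$ with initial datum $(x,y,w,t)$; integrating first $\dot x$ and $\dot t$, and then $\dot y$ and $\dot w$, one obtains the expressions displayed above (for $\omega = 0$ and $\omega \ne 0$, respectively).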
We first show that $$\label{eq-prop-Mum}
\A_{z_0} = \big\{ (x,y,w,t) \in \R^4 \mid \sqrt{(y-y_0)^2 + (w-w_0)^2}\le t_0-t \big\},$$ for every $z_0 = (x_0,y_0,w_0,t_0) \in \R^4$. The inclusion of $\A_{z_0}$ in the right-hand side of follows directly from the definition of the attainable set, and from the fact that the norm of the drift term $X_0 = \cos(x)
\p_y + \sin(x) \p_w \simeq (0, \cos(x), \sin(x), 0)$ equals $1$.
We next prove the inclusion of the right-hand side of in $\A_{z_0}$. We first note that, by the invariance with respect to the Lie operation , it is not restrictive to assume that $(x,y,w,t) = 0$. We also assume that $(y_0, w_0) \ne (0,0)$ since $\A_{z_0}$ is the closure of the set of the reachable points. We introduce polar coordinates: we set $\widetilde t := \sqrt{y_0^2 + w_0^2}$ and choose $\widetilde x$ such that $$\label{eq-tilde}
(y_0, w_0) = - \widetilde t (\cos(\widetilde x), \sin(\widetilde x)) \qquad 0 < \widetilde t \le t_0.$$ We define the sequence of paths $\left( \g_k \right)_{k \in \N}$ in the interval $[0, \widetilde t]$ by choosing $$ x_k(0) = x_0, \qquad x_k(\widetilde t) = 0, \qquad x_k(s) = \widetilde x, \quad \text{for} \quad \frac{\widetilde t}{4
k} \le s \le
\left( 1 - \frac{1}{4 k}\right)\widetilde t,$$ and $x_k$ linear in $\left[ 0, \frac{\widetilde t}{4 k} \right]$ and in $\left[ \left( 1 - \frac{1}{4
k}\right)\widetilde t, \widetilde t \right]$. If $\widetilde t < t_0$, we set $x_k(s) = 2 \pi (s - t_0 + \widetilde t)/\widetilde t$, for every $s \in [t_0 -
\widetilde t, t_0]$. Moreover, $$ y_k(s) = y_0 + \int_0^s \cos(x_k(\tau)) d \tau, \quad w_k(s) = w_0 + \int_0^s \sin(x_k(\tau)) d \tau, \quad t_k(s)
= t_0 -s.$$ We clearly have that $x_k(t_0) = 0, t_k(t_0)=0$. Moreover, a simple computation based on gives $|y_k(t_0)| = |y_k(\widetilde t)| \le \frac{1}{2k}(|y_0| + \widetilde t) \le \frac{1}{2k}(|y_0| + t_0)$ and, analogously, $|w_k(t_0)| \le \frac{1}{2k}(|w_0| + t_0)$. This proves that $\g_k(t_0) \to 0$ as $k \to + \infty$. In particular $0 \in \A_{z_0}$, and the proof of is completed.
The above argument also applies to any bounded open box $\Omega$ which is sufficiently wide in the $x$-direction. More precisely, if $\Omega = ]x_0-R_x,x_0+R_x[ \times ]y_0-R_y,y_0+R_y[ \times ]w_0-R_w,w_0+R_w[ \times
]t_0-R_t,t_0+R_t[$ with $R_x > \pi$, then $$ \A_{z_0}(\Omega) = \big\{ (x,y,w,t) \in \Omega \mid \sqrt{(y-y_0)^2 + (w-w_0)^2}\le t_0-t \big\}.$$
Note that, by and , we have that $\exp\left(s (\omega X + Y)\right)
(z_0)$ belongs to the interior of $\A_{z_0}(\Omega)$ if, and only if, $\omega \ne 0$. This proves (H2).
We next prove a separation principle for the extremal solutions of the equation $\M u = 0$. We have
\[prop-sep-mum\] For every $u \in \exr \H_+$ there exist two constants $\beta \in \R$ and $C_0>0$ such that $$u(x + 2 k \pi,y,w,t) = C_0^k e^{\beta t} u(x,y,w,0) \qquad \text{for every} \quad (x,y,w,t) \in \OT, \ k \in \Z.$$ In particular for $k=0$, we have $$u(x,y,w,t) = e^{\beta t} u(x,y,w,0) \qquad \text{ for every} \quad (x,y,w,t) \in \OT.$$
We first prove that $$\label{eq-sep-mum}
u(x,y,w,t - s) = e^{-\beta s} u(x,y,w,t) \quad \text{for every} \quad (x,y,w,t) \in \OT, \ s > 0.$$ Fix any positive $s$, and choose $\omega = 2 \pi / s$. Recall , and note that $$\exp \left(s (- \omega X + Y) \right) \left( \exp\left(s (\omega X + Y)\right) (x,y,w,t) \right) = (x , y, w,t-2 s),$$ and that the change of variable $(x,y,w,t) \mapsto (x , y, w,t-2 s)$ preserves the equation $\M u = 0$. Then the hypotheses of Proposition \[prop-separation\] are satisfied with $\omega_1 = - \omega_2 := \omega$ and $s_1 = s_2 :=
s$. Hence we have $$u (x , y, w,t-2 s) = C u (x , y, w,t)$$ for some positive constant $C=C(s)$. Then follows as in the last part of the proof of Theorem \[th-main\].
In order to conclude the proof, we consider again a positive $s$, we set $\omega = 2 \pi / s$, and we note that $$\exp\left(s (\omega X + Y)\right) (x,y,w,t) = (x + 2 \pi, y, w,t-s).$$ Also in this case the assumptions of Proposition \[prop-separation\] are satisfied with $\omega_1 := \omega$ and $s_1 := s$, thus there exists a positive constant $C$ such that $$u (x + 2 \pi, y, w,t - s) = C u (x , y, w,t) \quad \text{for every} \quad (x,y,w,t) \in \OT.$$ The conclusion of the proof then follows by combining the above identity with .
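For completeness, both compositions used in the above proof can be read off directly from the explicit expression of the exponential map recalled above: when $\omega s = 2\pi$, the increments of the $y$ and $w$ components vanish because $\sin$ and $\cos$ are $2\pi$-periodic, so that $$\exp\left(s (\omega X + Y)\right) (x,y,w,t) = (x + 2\pi,\, y,\, w,\, t-s), \qquad \exp\left(s (-\omega X + Y)\right) (x + 2\pi,y,w,t-s) = (x,\, y,\, w,\, t-2s).$$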
The following result is a corollary of Proposition \[prop-sep-mum\].
\[th-mum-PCP\] Let $\M$ be the Mumford operator , and let $u_0 \geq 0$ be a continuous function in $\R^3$. Then the positive Cauchy problem $$\begin{cases}
\M u (x,y,w,t) = 0 & \quad (x,y,w,t) \in S_T, \\
u (x,y,w,0) = u_0(x,y,w) & \quad (x,y,w) \in \R^3,\\
u (x,y,w,t) \geq 0 & \quad (x,y,w,t) \in S_T,
\end{cases}$$ admits at most one solution.
The proof is exactly as in the proof of Theorem \[th-main-2\], once the separation principle has been established. We omit the details.
Kolmogorov-Fokker-Planck operators
==================================
\[sec\_Kolmogorov\] Consider the Kolmogorov operator $$\label{eq-Kolmogorov-md}
\L u (x,y,t) := \partial_t u(x,y,t) - \sum_{j=1}^m \partial^2_{x_j} u(x,y,t) - \sum_{j=1}^m x_j \partial_{y_{j}}
u(x,y,t),$$ with $(x,y,t) \in \R^m \times \R^m \times \R$. As usual, we denote $\OT=\R^{2m} \times ]-\infty, T[$. The operator $\L$ can be written in the form by setting $X_j: = \partial_{x_j}$ for $j= 1, \dots, m$, and $X_0 :=
\sum_{j=1}^m x_j \partial_{y_{j}}$. It follows that $\L$ satisfies Hörmander’s condition (H0). The vector fields $X_j$’s and $Y := X_0-\partial_t$ are invariant with respect to the left translations and the dilation defined by $$\label{kolmogl}
(\xi, \eta, \tau) \circ (x,y,t) := (x + \xi, y + \eta - t \xi ,t + \tau),
\qquad \delta_r (x,y,t) := (r x, r^3 y, r^2 t),$$ respectively. An invariant Harnack inequality for Kolmogorov equations was first proved by Garofalo and Lanconelli in [@GarofaloLanconelli]. It can be written in its restricted form as in Proposition \[p-restr-harnack\] with $\omega = 0$. It reads as $$\label{e-harnack-kolmogorov}
u\left( x, y+\t x, t-\t \right) \le C_{\t} \, u(x,y,t) \qquad \text{for every} \ (x,y,t) \in \R^{2m+1} \text{ and }
\t >0.$$ We stress that due to the drift term $X_0 - \partial_t$, the Harnack inequality for Kolmogorov equations is different from . The above discussion also applies to the more general class of operators of this type first studied by Lanconelli and Polidoro in [@LanconelliPolidoro94]. We also refer to the book by Lorenzi and Bertoldi [@LorenziBertoldi] and to the bibliography therein for results on Kolmogorov equations obtained by semigroup theory.
We summarize the properties of $\mathscr{L}$ that are needed for its study in our functional setting. Condition (H0) can be verified by a direct computation, while the group operation required to satisfy (H1) is defined in . Condition (H2) holds for every $\omega \in \R^m$. In the sequel we choose $\omega = 0$.
We use the explicit expression of the fundamental solution $\Gamma$ of $\L$ to compute the Martin functions of $\R^{2m} \times
]-\infty, T[$. We recall that this method has been used in [@CranstonOreyRosler] (see (1.2) therein) to compute the complete parabolic and elliptic Martin boundary for nondegenerate Ornstein-Uhlenbeck processes in dimension two (see also [@Doob] for other explicit examples of computing parabolic Martin boundaries).
We recall the definition of Martin functions for our case. Assume for simplicity that $T<\infty$. We say that a sequence $\{(\xi_k, \eta_k, \tau_k)\}_{k\in \N}$ is a *fundamental sequence* if $\| ( \xi_k, \eta_k, \tau_k )\| \to + \infty$ as $k \to \infty$ and the corresponding sequence of [*Martin quotients*]{} $\{u_k\}$ given by $$\label{eq-MBlim}
u_k(x,y,t) := \frac{\Gamma(x,y,t, \xi_k, \eta_k, \tau_k)}{\Gamma(0,0,T, \xi_k, \eta_k, \tau_k)}$$ converges to a nonnegative solution $u(x,y,t) := \lim_{k \to \infty} u_k(x,y,t)$ in $\H_+$. Such a $u$ is called a [*Martin function*]{} of $\L$ in $\OT$; it is a nonnegative solution of $\L u = 0$ in $\OT$ which is defined by some fundamental sequence $(\xi_k, \eta_k, \tau_k)_{k\in \N}$. Note that $\Gamma(0,0,T, \xi_k, \eta_k, \tau_k) = 0$ whenever $T\leq \tau_k$, hence we need to assume $T> \tau_k $ for every $k \in \N$.
The explicit form of the fundamental solution $\Gamma$ of Kolmogorov operator is known and is given by $$\label{e-KolmogorovFS}
\Gamma(x,y,t, \xi, \eta, \tau) = \left( \tfrac{3}{2 \pi}\right)^{m/2} \!\! \dfrac{1}{(t- \tau)^{2m}} \exp\! \left(
\!\! - \frac{\|x- \xi\|^2}{4(t- \tau)} - 3 \frac{\|y - \eta + \tfrac{t- \tau}{2} (x +\xi)\|^2}{(t- \tau)^3} \right)$$ if $t > \tau$, while $\Gamma(x, y, t, \xi, \eta, \tau) = 0$ if $t \le \tau$.
We have
\[p\_Kolmo\] Let $\L$ be the Kolmogorov operator , and let $u$ be a Martin function for $\L u =
0$ in $\OT$. Then either $u = 0$, or there exists $v \in \R^m$ such that $$\label{eq-exp-Kolmo}
u(x,y,t) = \exp \left( \langle x, v \rangle + t \|v\|^2 \right) \quad \text{for all} \ (x,y,t) \in \OT.$$
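We note, before the proof, that the functions appearing in the statement are indeed solutions: if $u(x,y,t) = \exp\left( \langle x, v \rangle + t \|v\|^2 \right)$, then $u$ does not depend on $y$, and $$\partial_t u = \|v\|^2 u, \qquad \sum_{j=1}^m \partial^2_{x_j} u = \|v\|^2 u, \qquad \sum_{j=1}^m x_j \partial_{y_{j}} u = 0,$$ so that $\L u = 0$ in $\OT$. The content of the proposition is that no other nontrivial Martin functions occur.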
Since in any Bauer harmonic space all the extremal solutions are Martin kernels (see Proposition 4.1 and Theorem 5.1 in [@Maeda91]), we have
\[c\_Kolmo\] Any nonnegative solution $u=u(x,y,t)$ of the Kolmogorov equation $\L u = 0$ in $\OT$ does not depend on the variable $y$, and $u$ is a nonnegative solution of the heat equation $\partial_t w(x,t) =
\varDelta w(x,t)$ in $\R^m\times \R_T$.
In particular, any nonzero nonnegative solution of the equation $\L u = 0$ in $\OT$ is strictly positive, and the uniqueness of the positive Cauchy problem in $S_T$ holds true.
The uniqueness of the positive Cauchy problem in $S_T$ for the Kolmogorov equation was first proved in [@Polidoro1995] by a different method.
Assume, as it is not restrictive, that $T=0$, let $u$ be a Martin function of $\L$ in $\OT$, and let $(x,y,t) \in \OT$. In order to prove our claim, we preliminarily note that $$\label{eq-firstterm-Martin}
- \frac{1}{4}\left( \frac{\|x-\xi_k\|^2}{t - \tau_k} - \frac{\|\xi_k\|^2}{- \tau_k} \right) =
- \frac{1}{4}\left( \frac{\|x \|^2}{t - \tau_k} - 2 \frac{\langle x, \xi_k \rangle }{t - \tau_k}
- t \frac{\|\xi_k\|^2}{(t - \tau_k)(- \tau_k) } \right),$$ and that $$\begin{gathered}
\label{eq-secondterm-Martin}
- 3 \left( \frac{ \| y - \eta_k + \tfrac{t- \tau_k}{2} (x +\xi_k)\|^2 }{(t - \tau_k)^3} - \frac{ \| \eta_k +
\tfrac{\tau_k}{2} \xi_k\|^2 }{(- \tau_k)^3} \right) = \\
\qquad - 3 \frac{ \| y + \tfrac{t}{2} x\|^2 }{(t - \tau_k)^3}
- \frac{3}{4} \frac{\| t \xi_k - \tau_k x\|^2}{(t - \tau_k)^3}
+ 6 \frac{\langle y + \tfrac{t}{2} x, \eta_k + \tfrac{\tau_k}{2} \xi_k \rangle}{(t - \tau_k)^3} \\
\qquad - 3 \frac{\langle y + \tfrac{t}{2} x, t \xi_k - \tau_k x \rangle}{(t - \tau_k)^3}
+ 3 \frac{\langle \eta_k + \tfrac{\tau_k}{2} \xi_k, t \xi_k - \tau_k x \rangle}{(t - \tau_k)^3} \\
\qquad + \left( 9 t \tau_k^2 - 9 t^2 \tau_k + 3 t^3 \right) \frac{ \| \eta_k + \tfrac{\tau_k}{2}
\xi_k\|^2 }
{(t -\tau_k)^3(- \tau_k)^3}\,.\end{gathered}$$
We next choose a fundamental sequence $\big((\xi_k, \eta_k, \tau_k)\big)_{k \in \N}$ such that $u(x,y,t) = 0$ for every $(x,y,t) \in \OT$. We fix any vector $w \in \R^m$ such that $w \ne 0$, and we set $(\xi_k, \eta_k, \tau_k) = (k w, 0,
-1)$. Since $\Gamma(x, y, t, \xi, \eta, \tau) = 0$ if $t \le \tau$, we have $u_k(x,y,t)=0$ whenever $t < -1$. A direct computation based on and shows that $u_k(x,y,t) \to 0$ also if $-1 < t < 0$. We then conclude that $u = 0$ in $\OT$.
Note that we find the trivial solution whenever a bounded subsequence of $\big(\tau_{k}\big)_{k\in \N}$ exists. Indeed, let $\big(\tau_{k_j}\big)_{j\in \N}$ be a convergent subsequence of $\big(\tau_{k}\big)_{k\in \N}$, and denote by $\widetilde \tau \in ]- \infty, T]$ its limit. Let $(x,y,t) \in \R^{2m+1}$ be fixed, with $t < \widetilde \tau$. Then there exists a $J \in \N$ such that $\tau_{k_j} > t$ for every $j > J$, so that $u_{k_j}(x,y,t)= 0$ for every $j > J$. Thus $u(x,y,t) = 0$ for every $(x,y,t)$ such that $t < \tilde \tau$. This proves the claim if $\tilde \tau = T$. If $\tilde \tau < T$, the uniqueness of the positive Cauchy problem for Kolmogorov equations (see Theorem 3.2 in [@Polidoro1995]) implies that $u(x,y,t) = 0$ also when $\tilde \tau < t <T$. For this reason, in the sequel we will always assume that $\tau_k \to - \infty$ as $k \to + \infty$.
We next show that *nontrivial* Martin functions of $\L$ have the form . We fix $w_1, w_2 \in
\R^m$ and we set $(\xi_k, \eta_k, \tau_k) = (2 k w_1, k^2 w_2, -k)$. A direct computation based on shows that $$\label{eq-martin-xk}
- \frac{1}{4}\left( \frac{\|x-\xi_k\|^2}{t - \tau_k} - \frac{\|\xi_k\|^2}{- \tau_k} \right) \to
\langle x, w_1 \rangle + t \|w_1\|^2 \qquad \text{as} \quad k \to \infty.$$ A similar argument, based on , applies to the last term in the exponent of . We have $$\eta_k + \tfrac{\tau_k}{2} \xi_k = k^2 \left( w_2- w_1 \right), \qquad
t \xi_k- \tau_k x = k \left(2 t w_1 + x \right),$$ then $$y - \eta_k + \tfrac{t- \tau_k}{2} (x +\xi_k) =
- k^2 \left( w_2- w_1 \right) + k \left( t w_1 + \tfrac12 x\right) + y +\tfrac{t}{2} x.$$ Consequently, we find that $$\begin{split}
& - 3 \left( \frac{\|y - \eta_k + \tfrac{t- \tau_k}{2} (x +\xi_k)\|^2}{(t - \tau_k)^3} -
\frac{\| \eta_k + \tfrac{\tau_k}{2} \xi_k\|^2}{(- \tau_k)^3} \right) = \\
& - 3 \frac{-k^6 \langle w_2 - w_1 , 2 t w_1 + x \rangle - 3 t k^6 \|w_1 - w_2\|^2}{k^3(t + k)^3} + \omega(k),
\end{split}$$ for some function $\omega$ such that $\omega(k) \to 0$ as $k \to \infty$. Hence, $$\label{eq-martin-yk}
- 3 \left( \frac{\|y - \eta_k + \tfrac{t- \tau_k}{2} (x +\xi_k)\|^2}{(t - \tau_k)^3} -
\frac{\| \eta_k + \tfrac{\tau_k}{2} \xi_k\|^2}{(- \tau_k)^3} \right) \to
3 \langle w_2 - w_1 , 2 t w_1 + x \rangle + 9 t \|w_1 - w_2\|^2,$$ as $k \to \infty$. Note that the variable $y$ doesn’t appear in the last limit. Thus, also using the obvious fact $\left(\frac{-\tau_k}{t - \tau_k}\right)^{2m} \to 1$ as $k \to \infty$, we find $$u(x,y,t) = \exp \left( \langle x, 3 w_2 -2 w_1 \rangle + t \|3 w_2 -2 w_1\|^2 \right),$$ and we conclude that $u$ has the form if we choose $v = 3 w_2 -2 w_1$.
We next show that either $u$ is zero, or has the form , for every fundamental sequence. With this aim, we consider any sequence $(\xi_k, \eta_k, \tau_k)_{k\in \N}$, with $\tau_k <0$ for every $k \in \N$, and such that $\tau_k \to - \infty$ as $k \to + \infty$, since we know that, otherwise, $u$ is the trivial solution. We also assume that the function $u$ in is well defined.
We set $$\label{eq-ratio}
\widetilde \xi_k := \tfrac{1}{-\tau_k} \xi_k , \qquad \widetilde \eta_k := \tfrac{1}{(-\tau_k)^2} \eta_k,
\quad k \in \N,$$ and, after some elementary but lengthy computations, we find that $$\label{eq-xi-eta-k}
u_{k}(x,y,t)=\exp\left( \left(\langle x,3 \tilde{\eta}_{k}-\tilde{\xi}_{k}\rangle
+t\| 3 \tilde{\eta}_{k}-\tilde{\xi}_{k}\|^{2} \right)(1 +R_{k})\right),$$ where $R_{k}\to 0$ denotes a vanishing sequence. Thus, either $\big\| 3 \tilde{\eta}_{k}-\tilde{\xi}_{k} \big\|
\to + \infty$ as $k \to + \infty$, or the sequence $\big( 3 \tilde{\eta}_{k}-\tilde{\xi}_{k} \big)_{k \in \N}$ has a bounded subsequence.
In the first case we plainly find $u(x,y,t) = 0$ for every $(x,y,t) \in \R^{2m+1}$ with $t<0$.
In the second case there exists a subsequence $\big( 3 \tilde{\eta}_{k_j}-\tilde{\xi}_{k_j} \big)_{j \in \N}$ converging to some point $w \in \R^{m}$. From we have that $$u(x,y,t) = \exp \left( \langle x, w \rangle + t \| w \|^2 \right),$$ and hence, $u$ has the form . This concludes the proof.
Concluding remarks and further developments
===========================================
\[sec\_further\] As was stressed in Remark \[r-separation\], our separation principle (Theorem \[th-main\]) gives valuable information concerning nonnegative solutions for operators $\L$ of the form $$\L u = \partial_t u - \sum_{j=1}^m X_j^2 u,$$ and for Mumford’s operator $\M$ $$\mathscr{M} u := \p_{t} u - \cos(x) \p_y u - \sin(x) \p_w u - \p_x^2 u.$$ On the other hand, in recent years, operators of the form with $X_0\neq 0$ that satisfy (H0), (H1) and (H2) have received considerable attention. It would be interesting to study their positivity properties using our functional analytic approach. We give here two examples of such operators.
\[ex3\] [Linked operators.]{} Let $(\p_x + y \p_s)^2 + (\p_y - x \p_s)^2$ be the sub-Laplacian on the Heisenberg group given by , and let $ x \p_w -\p_{t}$ be the first order term of the simplest Kolmogorov operator , that is $$\L := \p_{t} - x \p_w - \p_x ^2 \qquad (x,w,t) \in \R^3.$$ Define $$\label{exlink}
\L := \p_{t} - x \p_w - (\p_x + y \p_s)^2 - (\p_y - x \p_s)^2 \qquad (x,y,s,w, t) \in \R^5.$$ Note that the operator $\L$ acts on the variables $(x,y,s,t)$ as the heat equation on the Heisenberg group, and on the variables $(x,y,w,t)$ as a Kolmogorov operator in $\R^3 \times \R$. It is easy to see that $\L$ satisfies the Hörmander condition. Moreover, it can be shown that there exists a homogeneous Lie group on $\R^5$ that *links* the Heisenberg group on $\R^4$ and the Kolmogorov group in $\R^3$, and such that $\L$ is invariant with respect to this new Lie group.
The notion of a [*link of homogeneous groups*]{} has been introduced by Kogoj and Lanconelli in [@KogojLanconelli2; @KogojLanconelli4]. It gives a general procedure for the construction of sequences of homogeneous groups of arbitrarily large dimension and step.
\[ex4\] Consider the following operator studied by Cinti, Menozzi and Polidoro [@CintiMenozziPolidoro] $$\label{ex-CMP}
\L u = \p_{t} u - x \p_w u - x^2 \p_y u - \p_x^2 u \qquad (x,y,w,t) \in \R^4.$$ It is invariant with respect to the following Lie group operations $$\label{eq-group-CMP}
(x,y,w,t) \circ (\x,\y,\omega,\t): = (x + \x, y +\eta +2 x \omega - \t x^2, w +\omega-\t x, t+\t),$$ and verifies Hörmander hypoellipticity condition, so, (H0) and (H1) are satisfied. Note that, in this case, the drift term $X_0 := x^2 \p_y + x \p_w$ is essential for the validity of (H0). $\L$ is also invariant with respect to the following dilation $$\label{eq-dil-CMP}
\d_r (x,y,w,t): = \big(r x, r^{4} y, r^3 w, r^2 t \big).$$ We next show that the attainable set of the point $z_0 =(x_0, y_0, w_0, t_0)$ in $\R^4$ is $$\label{eq-propset}
\A_{z_0} = \big\{ (x,y,w,t) \in \R^4 \mid t\le t_0, y_0\le y, (w-w_0)^2\le (y - y_0 ) (t_0-t) \big\}.$$ To prove , we recall that in [@CintiMenozziPolidoro Lemma 5.11] it has been shown that, if $z_0
= 0 \in \R^4$, and $\O = \,\big( ]-1,1[ \big)^4$ is the open unit cube in $\R^4$, then $$\A_{0} (\Omega) = \big\{ (x,y,w,t) \in \Omega \mid 0 \le y \le -t, w^2 \le - t y \big\}.$$ In accordance with , we consider the $r$ dilation of $\Omega$ $$\d_r \Omega = \, \, ]\!-r, r[ \, \, \times \, \, ]\!-r^4, r^4[ \,\,
\times \,\, ]\!-r^3, r^3[ \,\, \times \, \, ]\!-r^2, r^2[ \,.$$ By the dilation invariance of $\L$, we then have $$\A_{0} = \bigcup_{r > 0} \A_{0} (\d_r \Omega) = \bigcup_{r > 0} \big\{ (x,y,w,t) \in \d_r \Omega
\mid 0 \le y \le - r^2 t, w^2 \le - t y \big\},$$ and we get for $z_0 = 0$. Finally, for any $z_0 \in \R^4$ follows from the invariance of $\L$ with respect to the translations defined in .
Note that the point $\exp\left( s Y \right) z_0 \not \in$ Int$(\A_{z_0})$, where $Y=x^2 \p_y + x \p_w-\p_t$ is defined by . Since $\A_{z_0}(\Omega) \subset \A_{z_0}$, for every bounded set $\Omega \subset \R^4$, we conclude that (H2) is not satisfied if we choose $\omega = 0$. Nevertheless, $\L$ defined in satisfies assumption (H2), for any $\omega \ne 0$ provided that we choose $\Omega$ big enough.
We note that the operator $\L$ in is an approximation of the Mumford operator . Indeed, the Taylor expansion at $x=0$ of the drift term $X_0 = \cos(x) \p_y + \sin(x) \p_w$ leads us to approximate $\mathscr{M}$ with $$\widetilde {\mathscr{M}} = \p_{t} - \left( 1 - \tfrac{x^2}{2}\right) \p_y - x \p_w - \p_x^2 .$$ Moreover, it can be easily checked that $u$ is a solution of the equation $\widetilde {\mathscr{M}} u = 0$ if and only if the function $v(x,y,w,t) := u\left(x,-\frac{y}{2}- t,w,t\right)$ is a solution of the equation $\L v = 0$ (where $\L$ is the operator defined by ), and the claim is verified.
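Indeed, writing $\Phi(x,y,w,t) := \left(x, -\tfrac{y}{2} - t, w, t\right)$ (a shorthand introduced only for this verification), so that $v = u \circ \Phi$, the chain rule gives $$\p_t v = \left( \p_t u - \p_y u \right) \circ \Phi, \qquad \p_y v = - \tfrac{1}{2} \left( \p_y u \right) \circ \Phi, \qquad \p_w v = \left( \p_w u \right) \circ \Phi, \qquad \p_x^2 v = \left( \p_x^2 u \right) \circ \Phi,$$ whence $$\L v = \left( \p_t u - \p_y u + \tfrac{x^2}{2} \p_y u - x \p_w u - \p_x^2 u \right) \circ \Phi = \big( \widetilde {\mathscr{M}} u \big) \circ \Phi,$$ so that $\L v$ vanishes identically if and only if $\widetilde {\mathscr{M}} u$ does.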
On the separation principle {#s-remark}
---------------------------
We discuss here the main assumption of Theorem \[th-main\]. We recall that it is satisfied whenever $X_0 = 0$, and therefore, it is natural to study operators with $X_0\neq 0$ and a non-abelian $\mathbb{G}$ that still satisfy . In order to discuss this question, we focus on the consequence of , that is $$\label{eq_funct_eq-reminder}
u\left(\exp(s (\omega \cdot X +Y))(x,t)\right) = \mathrm{e}^{- \beta s} u(x,t) \quad \forall (x,t) \in \OT \mbox{ and } \forall s > 0,$$ where $u$ is a nonnegative extremal solution. The following result answers this question.
\[th-comm\] Let $\L$ be an operator of the form , satisfying [(H0)]{}, [(H1)]{} and [(H2)]{}. Let $u: \OT \to \R$ be a nonnegative smooth function satisfying . Then $$\label{eq_comm-jk0}
[X_j, X_k ] u(x,t) = 0, \qquad \forall j,k = 0, 1, \dots, m, \mbox{ and } \forall (x,t)\in \OT.$$ The same result holds for all higher-order commutators.
Moreover, if any nonnegative extremal solution in $\H_+$ satisfies , then the conclusion holds for any $u\in\H_+$.
As an application, we consider the degenerate Kolmogorov equation in two space variables $\K := \partial_t - x \partial_y
- \partial_x^2$, and let $\H_+$ be the corresponding cone of nonnegative solutions in $\R^2 \times ]- \infty, T[$. In this case $X_1 = \partial_x, X_0 = x \partial_y$, and Proposition \[th-comm\] says that, if $u$ is a nontrivial nonnegative extremal solution in $\H_+$ that satisfies , then $$[X_1, X_0]u(x,y,t) = [\partial_x, x \partial_y]u(x,y,t) = \partial_y u(x,y,t) =0.$$ Hence, $u$ does not depend on $y$. Therefore, $u$ is a nontrivial nonnegative solution of the heat equation $\partial_t
u = \partial_x^2 u$ in $\R\times ]-\infty,T[$, and in particular $u$ is strictly positive.
In conclusion, all nontrivial nonnegative extremal solutions in $\H_+$ satisfying are independent of the ‘degenerate’ variable $y$. Recall that, in fact, by Corollary \[c\_Kolmo\], every solution in $\H_+$ is independent of $y$.
Next, we present the proof of Proposition \[th-comm\]. It relies on the following Lemma, whose proof is analogous to that of Theorem \[H\*-lambda-repr-par\].
\[lem-comm\] Let $u: \OT \to \R$ be a nonnegative smooth function, and $\omega_1, \omega_2$ be two vectors of $\R^m$ such that holds. Then, for every $(x,t) \in \OT$, we have $$ \big[\omega_1 \cdot X + Y,\omega_2 \cdot X + Y \big] u(x,t) = 0.$$
Let $(x,t) \in \OT$, and consider the function $v := \log(u)$. Using with $s > t-T$, we obtain $$\begin{gathered}
v \!\Big( \!\! \exp(-s (\omega_2 \cdot X\! +\!Y))\! \exp(-s (\omega_1 \cdot X \!+\!Y))
\exp(s (\omega_2 \cdot X \!+\!Y)) \exp(s (\omega_1 \cdot X \!+\!Y))(x,t)\!\!\Big)\!\! =\\
s \beta_{\omega_1} + s \beta_{\omega_2} - s \beta_{\omega_1} - s \beta_{\omega_2} +
v(x,t) = v(x,t).\end{gathered}$$ We recall the Baker-Campbell-Hausdorff formula $$\begin{split} \exp\big(s\tilde Y\big) \exp\big(s\tilde X \big)(x,t) = \exp\left(s
\big(\tilde Y + \tilde X \big) + \tfrac{s^2}{2}\big[ \tilde X, \tilde Y \big] + o(s^2) \right)
\end{split}$$ where $o(s^2)$ denotes a function such that $o(s^2)/s^2 \to 0$ as $s \to 0$, and we apply it twice. The first time we choose $\tilde X = \omega_1 \cdot X +Y$ and $\tilde Y = \omega_2 \cdot X +Y$, the second time we set $\tilde X =
- \big(\omega_1 \cdot X +Y\big)$ and $\tilde Y = - \big(\omega_2 \cdot X +Y\big)$, and we find $$ \frac{v \Big( \exp\big(s^2 \big[\omega_1 \cdot X +Y,\omega_2 \cdot X +Y \big] + o(s^2) \big) (x,t)\Big) -
v(x,t)}{s^2}= 0,$$ for every $s > t-T$. Then from the differentiability of the functions $v$ and $\exp$, by letting $s \to 0$ we obtain $$ \big[\omega_1 \cdot X +Y,\omega_2 \cdot X +Y \big] v(x,t) = 0.$$ The proof of the claim then follows from the fact that $u(x,t) = \exp \big(v(x,t)\big)$.
Let $u: \OT \to \R$ be a nonnegative smooth function, and let $\omega \in \R^m$ be any vector satisfying (H2). We claim that $$\label{eq_comm-tilde}
\big[X_k , \omega \cdot X + Y \big] u(x,t) = 0 \qquad k=1, \dots, m,$$ for every $(x,t) \in \OT$.
In order to prove we note that, since $\exp\big( s (\omega \cdot X + Y) \big)(x,t) \in
\mathrm{Int} \left(\A_{(x,t)}(\Omega) \right)$ for any $s \in ]0,s_0[$, there exists $r > 0$ such that $\exp\big( s \big(\omega \cdot X + r X_k + Y \big) \big)(x,t) \in \mathrm{Int} \left(\A_{(x,t)}(\Omega) \right)$ for $k = 1, \dots, m$. We denote by $e_k$ the $k$-th vector of the canonical basis of $\R^m$, and we apply Lemma \[lem-comm\] with $\omega_1 := \omega + r e_k$ and $\omega_2 := \omega$. We find $$ r \big[X_k , \omega \cdot X + Y \big] u(x,t) = \big[\omega \cdot X + r X_k + Y, \omega \cdot X + Y \big] u(x,t) = 0,$$ for every $(x,t) \in \OT$. This proves .
We apply again Lemma \[lem-comm\] with $\omega_1 := \omega + r e_k$ and $\omega_2 := \omega + r e_j$, for $j,k = 1,
\dots, m$, and to obtain $$\begin{split}
r^2 \big[X_j , X_k \big]u(x,t) & = \big[r X_j + \omega \cdot X + Y, r X_k + \omega \cdot X + Y \big] u(x,t) \\
& - r \big[X_j, \omega \cdot X + Y \big] u(x,t) + r \big[ X_k, \omega \cdot X + Y \big] u(x,t) = 0.
\end{split}$$ This proves $$\label{eq_comm-jk}
\big[X_j , X_k \big] u(x,t) = 0 \qquad j,k=1, \dots, m.$$ From and from the fact that $\big[X_k, \partial_t \big]=0$, we eventually obtain $$ \big[X_k, X_0 \big] u(x,t) = \big[X_k, \omega \cdot X + Y \big] u(x,t) - \sum_{j=1}^m \omega_j \big[X_k, X_j
\big] u(x,t) = 0,$$ for $k= 1, \dots, m$. This concludes the proof of . A plain application of the Baker-Campbell-Hausdorff formula gives the result for all higher-order commutators.
The result for any nonnegative solution then clearly follows from the representation formula .
Liftable operators {#s-ex6}
------------------
Our approach applies also to operators that are not invariant with respect to any Lie group structure, but that can be *lifted* to a suitable operator $\widetilde \L$ that satisfies assumption (H1). Consider, for instance, the following [Grushin-type evolution operator]{} $$\label{ex-grushin}
\L u = \p_{t} u - \p_x^2 u - x^2 \p_y^2 u \qquad (x,y,t) \in \R^3.$$ Since it is degenerate on the set $\big\{ x = 0 \big\}$ and nondegenerate on $\big\{ x \ne 0 \big\}$, no group of translations preserves the operator, so it is not invariant with respect to any Lie group structure. If we lift the operator by adding a new variable $w$ and introducing the vector fields $\widetilde X_1 := X_1$ and $\widetilde X_2 := X_2 + \partial_w$, then we get the lifted operator $$\label{ex-grushin-lift}
\widetilde \L u: = \p_{t} u - \p_x^2 u - (\p_w + x \p_y)^2 u \qquad (x,y,w,t) \in \R^4,$$ that belongs to the class considered in Section \[sec\_Parabolic\]. The uniqueness result proved for directly extends to .
Analogously, the operator $$\label{ex-CMP-NL}
\L u = \p_{t} u - \p_x^2 u - x^2 \p_y u \qquad (x,y,t) \in \R^3$$ studied in [@CintiMenozziPolidoro] is not invariant with respect to any Lie group structure. However, it can be lifted to the operator defined in and, also in this case, the uniqueness result for extends to . We note that appears in stochastic theory (see the references in [@CintiMenozziPolidoro] for a bibliography on this subject).
Clearly, the lifting method can be applied to a wide class of operators.
Open problems
-------------
In this subsection we list several open problems related to the results of the present paper.
1. Our first problem concerns the strict positivity of nonzero nonnegative solutions of the equation $\L
u=0$ in $\OT$ (cf. Theorem \[th-main\]).
2. We would like to extend our main results to operators with nontrivial zero-order term, namely to operators of the form $$\L_cu := \p_t u - \sum_{j=1}^m X_j^2 u - X_0 u - c(x) u.$$
3. We would like to weaken the left-invariance condition, as well.
4. We aim to study property [*(b)*]{} of Section \[ssec\_Liouville\] for degenerate operators. More precisely, we would like to find conditions under which the generalized principal eigenvalue $\lambda_0$ of $\L_0$ is equal to $0$. Moreover, we would like to understand whether $\L_0$ is critical in $\R^N$.
5. In another direction, we would like to extend the nonnegative Liouville-type theorem in $\R^{N+1}$ (Theorem \[thm\_end\]) to the case of operators with a nontrivial drift term.
6. Finally, it is natural to extend our work to the case where $\L$ of the form is defined on a noncompact Lie group, and even to the more general setting of a noncompact manifold $M$ with a cocompact group action (cf. [@LinPinchover94]). We expect that the acting group should be nilpotent.
[**Acknowledgments**]{}
The authors started to work on the present paper during their visit to BCAM - The Basque Center for Applied Mathematics. The authors wish to thank Professor Enrique Zuazua for the hospitality. The authors thank Professor Nicola Garofalo for drawing their attention to the Mumford operator. They also wish to thank Alano Ancona for a useful discussion concerning the Martin representation theorem on Bauer harmonic spaces, and Caterina Manzini for her help in the study of the Martin functions for Kolmogorov equations.
A.E. K. and S. P. are grateful to the Department of Mathematics at the Technion for the hospitality during their visits. Y. P. acknowledges the support of the Israel Science Foundation (grant No. 963/11) founded by the Israel Academy of Sciences and Humanities. The authors also thank Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM) for supporting the visit of Y. P. to Modena and Reggio Emilia University in February 2014.
[10]{}
, [*Control theory from the geometric viewpoint*]{}, vol. 87 of Encyclopaedia of Mathematical Sciences, Control Theory and Optimization II, Springer-Verlag, Berlin, 2004.
, [*[Sub-Laplacians with drift on Lie groups of polynomial volume growth.]{}*]{}, Mem. Am. Math. Soc. [**739**]{} (2002), 101 p.
, [*Some results on partial differential equations and [A]{}sian options*]{}, Math. Models Methods Appl. Sci. [**11**]{} (2001), 475–497.
, [*Convex cones in analysis*]{}, vol. 67 of Travaux en Cours \[Works in Progress\], Hermann Éditeurs des Sciences et des Arts, Paris, 2006. With a postface by G. Choquet, Translation of the 1999 French version.
, *Lie groups related to [H]{}örmander operators and [K]{}olmogorov-[F]{}okker-[P]{}lanck equations*, Commun. Pure Appl. Anal. **11** (2012), 1587–1614.
, [*Stratified [L]{}ie groups and potential theory for their sub-[L]{}aplacians*]{}, Springer Monographs in Mathematics, Springer, Berlin, 2007.
, [*Principe du maximum, inégalité de [H]{}arnack et unicité du problème de [C]{}auchy pour les opérateurs elliptiques dégénérés*]{}, Ann. Inst. Fourier [**19**]{} (1969), 277–304.
, [*An invitation to hypoelliptic operators and Hörmander’s vector fields*]{}, Springer Briefs in Mathematics, Springer, Berlin, 2014.
, [*Heat kernels for elliptic and sub-elliptic operators, methods and techniques*]{}, Applied and Numerical Harmonic Analysis, Birkhäuser/Springer, New York, 2011.
, [*The [B]{}oltzmann equation and its applications*]{}, Springer-Verlag, New York, 1988.
, [*Lectures on analysis. [V]{}ol. [I]{}–[III]{}*]{}, Edited by J. Marsden, T. Lance and S. Gelbart, W. A. Benjamin, Inc., New York-Amsterdam, 1969.
, [*Partial differential equations—uniqueness in the [C]{}auchy problem for a class of hypoelliptic ultraparabolic operators*]{}, Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Natur. Rend. Lincei (9) Mat. Appl. [**20**]{} (2009), 145–158.
, [*Two-sided bounds for degenerate processes with densities supported in subsets of $\mathbb{R}^n$*]{}, Potential Anal. [**42**]{} (2015), 39–98.
, [*The [M]{}artin boundary of two-dimensional [O]{}rnstein-[U]{}hlenbeck processes*]{}, in Probability, statistics and analysis, vol. 79 of London Math. Soc. Lecture Note Ser., Cambridge Univ. Press, Cambridge, 1983, pp. 63–78.
, *The sub-elliptic obstacle problem: [$C^{1,\alpha}$]{} regularity of the free boundary in [C]{}arnot groups of step two*, Adv. Math. **211** (2007), 485–516.
, [*Uniqueness of positive solutions of the heat equation*]{}, Proc. Amer. Math. Soc. [**99**]{} (1987), 353–356.
, [*Classical potential theory and its probabilistic counterpart*]{}, Reprint of the 1984 edition, Classics in Mathematics, Springer-Verlag, Berlin, 2001.
, [*Level sets of the fundamental solution and [H]{}arnack inequality for degenerate equations of [K]{}olmogorov type*]{}, Trans. Amer. Math. Soc. [**321**]{} (1990), 775–792.
, [*Extension à l’équation de la chaleur d’un théorème de [A]{}. [H]{}arnack*]{}, Rend. Circ. Mat. Palermo (2) [**3**]{} (1954), 337–346.
, [*Hypoelliptic second order differential equations*]{}, Acta Math. [**119**]{} (1967), 147–171.
, [*Poincaré inequality and the uniqueness of solutions for the heat equation associated with subelliptic diffusion operators*]{}, (preprint, 2013), arXiv:1305.0508
, [*An invariant [H]{}arnack inequality for a class of hypoelliptic ultraparabolic equations*]{}, Mediterr. J. Math. [**1**]{} (2004), 51–80.
height 2pt depth -1.6pt width 23pt, [*One-side [L]{}iouville theorems for a class of hypoelliptic ultraparabolic equations*]{}, in Geometric analysis of [PDE]{} and several complex variables, vol. 368 of Contemp. Math., Amer. Math. Soc., Providence, RI, 2005, pp. 305–312.
height 2pt depth -1.6pt width 23pt, [*Liouville theorems in halfspaces for parabolic hypoelliptic equations*]{}, Ric. Mat. [**55**]{} (2006), 267–282.
height 2pt depth -1.6pt width 23pt, [*Link of groups and homogeneous [H]{}örmander operators*]{}, Proc. Amer. Math. Soc. [**135**]{} (2007), 2019–2030.
height 2pt depth -1.6pt width 23pt, [*Liouville theorems for a class of linear second-order operators with nonnegative characteristic form*]{}, Bound. Value Probl., (2007), Art. ID 48232, pp. 16.
height 2pt depth -1.6pt width 23pt, [*Liouville theorem for [$X$]{}-elliptic operators*]{}, Nonlinear Anal. [**70**]{} (2009), 2974–2985.
, [*[H]{}arnack inequality for hypoelliptic second order partial differential operators*]{}, (preprint, 2015), arXiv:1509.05245
, [*Minimal solutions of the heat equation and uniqueness of the positive [C]{}auchy problem on homogeneous spaces*]{}, Proc. Amer. Math. Soc. [**94**]{} (1985), 273–278.
, [*A generalization of [C]{}how’s theorem and the bang-bang theorem to non-linear control problems*]{}, SIAM J. Control [**12**]{} (1974), 43–52.
, [*On the fundamental solution for hypoelliptic second order partial differential equations with nonnegative characteristic form*]{}, Ricerche Mat. [**48**]{} (1999), 81–106.
, [*On a class of hypoelliptic evolution operators*]{}, Partial differential equations, II (Turin, 1993), Rend. Sem. Mat. Univ. Politec. Torino [**52**]{} (1994), 29–63.
, [*Manifolds with group actions and elliptic operators*]{}, Mem. Amer. Math. Soc., [**112**]{} (1994), pp. vi+78.
, [*Analytical methods for [M]{}arkov semigroups*]{}, vol. 283 of Pure and Applied Mathematics (Boca Raton), Chapman & Hall/CRC, Boca Raton, FL, 2007.
, [*Martin boundary of a harmonic space with adjoint structure and its applications*]{}, [Hiroshima Math. J.]{} [**21**]{} (1991), 163–186.
, [*A tour of subriemannian geometries, their geodesics and applications*]{}, Mathematical Surveys and Monographs, 91, American Mathematical Society, Providence, RI, 2002.
, [*Elastica and computer vision, in: Algebraic geometry and its applications*]{} (eds. Bajaj, Chandrajit) Springer-Verlag, New-York, (1994), pp. 491–506.
, [*Uniform restricted parabolic [H]{}arnack inequality, separation principle, and ultracontractivity for parabolic equations*]{}, in Functional analysis and related topics, 1991 ([K]{}yoto), vol. 1540 of Lecture Notes in Math., Springer, Berlin, 1993, pp. 277–288.
height 2pt depth -1.6pt width 23pt, [*Uniqueness and nonuniqueness of the positive [C]{}auchy problem for the heat equation on [R]{}iemannian manifolds*]{}, Proc. Amer. Math. Soc. [**123**]{} (1995), 1923–1932.
, [*Representation theorems for positive solutions of parabolic equations*]{}, Proc. Amer. Math. Soc. [**104**]{} (1988), 507–515.
height 2pt depth -1.6pt width 23pt, [*On uniqueness and nonuniqueness of the positive [C]{}auchy problem for parabolic equations with unbounded coefficients*]{}, Math. Z. [**223**]{} (1996), 569–586.
, [*Sulla soluzione generalizzata di [W]{}iener per il primo problema di valori al contorno nel caso parabolico*]{}, Rend. Sem. Mat. Univ. Padova [**23**]{} (1954), 422–434.
, [*Uniqueness and representation theorems for solutions of [K]{}olmogorov-[F]{}okker-[P]{}lanck equations*]{}, Rendiconti di Matematica, Serie VII, [**15**]{}, (1995), 535–560.
, [*The [F]{}okker-[P]{}lanck equation: Methods of solution and applications*]{}, Springer-Verlag, Berlin, second ed., 1989.
, [*Positive temperatures on an infinite rod*]{}, Trans. Amer. Math. Soc. [**55**]{} (1944), 85–95.
|
---
abstract: 'We use Markov chains and numerical linear algebra — and several CPU hours — to determine the expected number of coins in a person’s possession under certain conditions. We identify the spending strategy that results in the minimum possible expected number of coins, and we consider two other strategies which are more realistic.'
address:
- |
Department of Mathematics and Statistics\
Valparaiso University\
Valparaiso, Indiana 46383, USA
- |
LaCIM\
University of Québec at Montréal\
Montréal, QC H2X 3Y7, Canada
author:
- Lara Pudwell
- Eric Rowland
date: 'March 21, 2015'
title: 'What’s in *YOUR* wallet?'
---
Introduction {#Introduction}
============
While you probably associate the title of this paper with credit card commercials, we suggest it is actually an invitation to some pretty interesting mathematics. Every day, when customers spend cash for purchases, they exchange coins. There are a variety of ways a spender may determine which coins from their wallet to give a cashier in a transaction, and of course a given spender may not use the same algorithm every time. In this paper, however, we make some simplifying assumptions so that we can provide an answer to the question ‘What is the expected number of coins in your wallet?’.
Of course, the answer depends on where you live! A *currency* is a set of denominations. We’ll focus on the currency consisting of the common coins in the United States, which are the quarter (25 cents), dime (10 cents), nickel (5 cents), and penny (1 cent). However, we invite you to grab your passport and carry out the computations for other currencies. Since we are interested in distributions of coins, we will consider prices modulo $100$ cents, in the range $0$ to $99$.
The contents of your wallet largely depend on how you choose which coins to use in a transaction. We’ll address this shortly, but let’s start with a simpler question. How does a cashier determine which coins to give you as change when you overpay? If you are due $30$ cents, a courteous cashier will not give you $30$ pennies. Generally the cashier minimizes the number of coins to give you, which for $30$ cents is achieved by a quarter and nickel. Therefore let’s make the following assumptions.
1. \[1\] The fractional parts of prices are distributed uniformly between $0$ and $99$ cents.
2. \[2\] Cashiers return change using the fewest possible coins.
Is there always a unique way to make change with the fewest possible coins? It turns out that for every integer $n \geq 0$ (not just $0 \leq n \leq 99$) there is a unique integer partition of $n$ into parts $25$, $10$, $5$, and $1$ that minimizes the number of parts. And this is what the cashier gives you, assuming there are enough coins of the correct denominations in the cash register to cover it, which is a reasonable assumption since a cashier with only $3$ quarters, $2$ dimes, $1$ nickel, and $4$ pennies can give change for any price that might arise.
The cashier can quickly compute the minimal partition of an integer $n$ into parts $d_1, d_2, \dots, d_k$ using the *greedy algorithm* as follows. To construct a partition of $n = 0$, use the empty partition $\{\}$. To construct a partition of $n \geq 1$, determine the largest $d_i$ that is less than or equal to $n$, and add $d_i$ to the partition; then recursively construct a partition of $n - d_i$ into parts $d_1, d_2, \dots, d_k$. For example, if $37$ cents is due, the cashier first takes a quarter from the register; then it remains to make change for $37 - 25 = 12$ cents, which can most closely be approximated (without going over) by a dime, and so on. The greedy algorithm partitions $37$ into $\{25,10,1,1\}$.[^1]
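For concreteness, here is a short Python sketch of the greedy algorithm; it is our own illustration (the function name is ours), not code taken from the papers cited below.

```python
def greedy_change(n, denominations=(25, 10, 5, 1)):
    """Greedy partition of n cents into coins, largest denominations first."""
    coins = []
    for d in denominations:
        count, n = divmod(n, d)   # take as many d-cent coins as possible
        coins.extend([d] * count)
    return coins

print(greedy_change(37))   # [25, 10, 1, 1]
```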
We remark that for other currencies the greedy algorithm does not necessarily produce partitions of integers into fewest parts. For example, if the only coins in circulation were a $4$-cent piece, a $3$-cent piece, and a $1$-cent piece, the greedy algorithm makes change for $6$ cents as $\{4, 1, 1\}$, whereas $\{3, 3\}$ uses fewer coins. In general it is not straightforward to tell whether a given currency lends itself to minimal partitions under the greedy algorithm. Indeed, there is substantial literature on the subject [@Adamaszek--Adamaszek; @Cai; @Chang--Korsh; @Chang--Gill; @Kozen--Zaks; @Magazine--Nemhauser--Trotter] and at least one published false “theorem” [@Jones; @Maurer]. Pearson [@Pearson] gave the first polynomial-time algorithm for determining whether a given currency has this property.
As for spending coins, the simplest strategy is not to spend them at all. A *coin keeper* is a spender who never spends coins. Sometimes when you’re traveling internationally it’s easier to hand the cashier a big bill than to try to make change with foreign coins. Or maybe you don’t like making change even with domestic coins, and at the end of each day you throw all your coins into a jar. In either case, you will collect a large number of coins. What is the distribution?
It is easy to compute the change you receive if you spend no coins in each of the $100$ possible transactions corresponding to prices from $0$ to $99$ cents. Since we assume these prices appear with equal likelihood, to figure out the long-term distribution of coins in a coin keeper’s collection, we need only tally the coins of each denomination. A quick computer calculation shows that the coins received from these 100 transactions total 150 quarters, 80 dimes, 40 nickels, and 200 pennies. In other words, a coin keeper’s stash contains 31.9% quarters, 17.0% dimes, 8.5% nickels, and 42.6% pennies.
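The tally is a short computation; the following self-contained Python sketch (ours, with the greedy helper repeated so that it runs on its own) reproduces the counts and percentages above.

```python
from collections import Counter

def greedy_change(n, denominations=(25, 10, 5, 1)):
    coins = []
    for d in denominations:
        count, n = divmod(n, d)
        coins.extend([d] * count)
    return coins

# The change handed to a coin keeper for a price of c cents is the greedy
# partition of (100 - c) mod 100.
tally = Counter()
for price in range(100):
    tally.update(greedy_change((100 - price) % 100))

print(tally)   # Counter({1: 200, 25: 150, 10: 80, 5: 40})
total = sum(tally.values())
print({d: round(100 * count / total, 1)
       for d, count in sorted(tally.items(), reverse=True)})
# {25: 31.9, 10: 17.0, 5: 8.5, 1: 42.6}
```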
What’s in the country’s wallet? The coin keeper’s distribution looks quite different from that of coins actually manufactured by the U.S. mint. In 2014, the U.S. government minted $1580$ million quarters, $2302$ million dimes, $1206$ million nickels, and $8146$ million pennies [@coin; @production] — that’s 11.9% quarters, 17.4% dimes, 9.1% nickels, and 61.6% pennies.
Fortunately, most of us do not behave as coin keepers. So let us move on to spenders who are not quite so lazy.
Markov chains {#Markov chains}
=============
When you pay for your weekly groceries, the state of your wallet as you leave the store depends only on
- the state of your wallet when you entered the store,
- the price of the groceries, and
- the algorithm you use to determine how to pay for a given purchase with a given wallet state.
So what we have is a *Markov chain*.
A Markov chain is a system in which for all $t \geq 0$ the probability of being in a given state at time $t$ depends only on the state of the system at time $t - 1$. Here time is discrete, and at every time step a random event occurs to determine the new state of the system. The main defining feature of a Markov chain is that the probability of the system being in a given state does not depend on the system’s history before time $t - 1$. For us, the system is the spender’s wallet, and the random event is the purchase price.
Let $S = \{s_1, s_2, \dots\}$ be the set of possible states of the system. A Markov chain with finitely many states has a $|S| \times |S|$ *transition matrix* $M$ whose entry $m_{ij}$ is defined as follows. Let $m_{ij}$ be the probability of transitioning to $s_j$ if the current state of the system is $s_i$. By assumption, $m_{ij}$ is independent of the time at which $s_i$ occurs. The transition matrix contains all the information about the Markov chain. Note that the sum of each row is $1$.
As a small example, consider a currency with only $50$-cent coins and $25$-cent coins, and suppose all prices end in 0, 25, 50, or 75 cents. Suppose also that if the spender has sufficient change to pay for their purchase, then they do so using the greedy algorithm. If the spender does not have sufficient change, they pay with bills and receive change. The sets of coins obtained as change from transactions in this model are $\{\}$, $\{25\}$, $\{50\}$, and $\{50, 25\}$. Therefore the possible wallet states are $s_1=\{\}$, $s_2=\{25\}$, $s_3=\{50\}$, $s_4=\{25,25\}$, $s_5=\{50,25\}$, and $s_6=\{25,25,25\}$. Further, if all 4 prices are equally likely, the transition matrix $M$ is a $6 \times 6$ matrix where all entries are either $\tfrac{1}{4}$ or $0$. For example, row $2$ of $M$ is $\begin{bmatrix}\frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{4} & 0\end{bmatrix}$ because there is a $\tfrac{1}{4}$ chance of moving from $s_2$ to each of $s_1$, $s_2$, $s_4$, and $s_5$, and no chance of moving directly from $s_2$ to $s_3$ or $s_6$. The entire transition matrix is $$M = \frac{1}{4} \begin{bmatrix}
1 & 1 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 1 & 1 & 0 \\
1 & 1 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 1 & 0 & 1 \\
1 & 1 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 1 & 0 & 1 \\
\end{bmatrix}.$$
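This matrix can also be generated mechanically. The Python sketch below is our own; it encodes the toy spending rule as “pay the subset of coins that overpays as little as possible and receive greedy change,” which reproduces the matrix displayed above.

```python
from itertools import combinations

def greedy_change(n, denoms=(50, 25)):
    coins = []
    for d in denoms:
        q, n = divmod(n, d)
        coins += [d] * q
    return coins

def next_state(wallet, price):
    """Wallet contents after one purchase in the toy 50/25-cent model."""
    if sum(wallet) >= price:
        # pay the subset of coins that overpays as little as possible
        paid = min((c for r in range(len(wallet) + 1)
                    for c in combinations(wallet, r) if sum(c) >= price),
                   key=sum)
        rest = list(wallet)
        for coin in paid:
            rest.remove(coin)
        change = greedy_change(sum(paid) - price)
    else:
        # not enough coins: pay with a bill and receive change
        rest, change = list(wallet), greedy_change((100 - price) % 100)
    return tuple(sorted(rest + change, reverse=True))

states = [(), (25,), (50,), (25, 25), (50, 25), (25, 25, 25)]
prices = [0, 25, 50, 75]
M = [[sum(next_state(s, c) == t for c in prices) / len(prices) for t in states]
     for s in states]
for row in M:
    print(row)   # each row agrees with the corresponding row of M above
```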
The reason for putting the transition probabilities $m_{ij}$ in a matrix is that the multiplication of a vector by $M$ carries meaning. Suppose you don’t know the state of your friend’s wallet, but (for some strange reason) you do know the probability $v_i$ of the wallet being in state $s_i$ for each $i$. Let $v = \begin{bmatrix} v_1 & v_2 & \cdots & v_{|S|} \end{bmatrix}$ be a vector whose entries are the probabilities $v_i$. In particular, the entries of $v$ are nonnegative and sum to $1$. Then $v M$ is a vector whose $i$th entry is the probability of the wallet being in state $s_i$ after your friend makes her next cash purchase.
After one step, we can think of $v M$ as the *new* probability distribution of the wallet states and ask what happens after a *second* transaction. Since the probability distribution after one step is $v M$, the probability distribution after *two* steps is $(v M) M$, or in other words $v M^2$.
The long-term behavior of your friend’s wallet is therefore given by $v M^n$ for large $n$. If the limit $p = \lim_{n \to \infty} v M^n$ exists, then there is a clean answer to a question such as ‘What is the expected number of coins in your friend’s wallet?’, since the $i$th entry $p_i$ of $p$ is the long-term probability that the wallet is in state $s_i$. Moreover, if the limit is actually independent of the initial distribution $v$, then $p$ is not just the long-term distribution for your friend’s wallet; it’s the long-term distribution for anyone’s wallet.
Supposing for the moment that $p$ exists, how can we compute it? The limiting probability distribution does not change under multiplication by $M$ (because otherwise it’s not the limiting probability distribution), so $p M = p$. In other words, $p$ is a left eigenvector of $M$ with eigenvalue $1$. There may be many such eigenvectors, but we know additionally that $p_1 + p_2 + \cdots + p_{|S|} = 1$, which may be enough information to uniquely determine the entries of $p$.
In our toy example, it turns out that there is a unique $p$, and Gaussian elimination gives $p=\begin{bmatrix}\frac{1}{4}&\frac{1}{4}&\frac{5}{32}&\frac{3}{32}&\frac{7}{32}&\frac{1}{32}\end{bmatrix}$, which indicates there is a 25% chance of having an empty wallet and a 25% chance of having a wallet with just one quarter. The least likely wallet state is $s_6 = \{25, 25, 25\}$, and this state occurs with probability 3.125%. From $p$ we can compute all sorts of other statistics. For example, the expected number of coins in the wallet is $$\sum_{i=1}^{|S|} p_i |s_i| = \frac{9}{8} = 1.125.$$ The expected total value of the wallet, in cents, is $$\sum_{i=1}^{|S|} p_i \sigma(s_i) = \frac{75}{2} = 37.5,$$ where $\sigma(s_i)$ is the sum of the elements in $s_i$. The expected number of $25$-cent pieces is $\frac{3}{4}$, and the expected number of $50$-cent pieces is $\frac{3}{8}$.
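If you would rather let the computer do the Gaussian elimination, a short `numpy` sketch (ours) finds the same vector by solving $pM = p$ together with the normalization $p_1 + \cdots + p_6 = 1$ as a least-squares problem.

```python
import numpy as np

M = np.array([[1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 1, 0],
              [1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 1],
              [1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 1]]) / 4

# p M = p together with sum(p) = 1, written as an overdetermined linear system
A = np.vstack([M.T - np.eye(6), np.ones(6)])
b = np.zeros(7); b[-1] = 1
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)   # approximately [0.25, 0.25, 0.15625, 0.09375, 0.21875, 0.03125]

states = [(), (25,), (50,), (25, 25), (50, 25), (25, 25, 25)]
print(sum(pi * len(s) for pi, s in zip(p, states)))   # expected coins: 1.125
print(sum(pi * sum(s) for pi, s in zip(p, states)))   # expected value: 37.5 cents
```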
It turns out that, under reasonable spending assumptions, the Perron–Frobenius theorem guarantees the existence and uniqueness of $p$. We just need two conditions on the Markov chain — irreducibility and aperiodicity. A Markov chain is *irreducible* if for any two states $s_i$ and $s_j$ there is some integer $n$ such that the probability of transitioning from $s_i$ to $s_j$ in $n$ steps is nonzero. That is, each state is reachable from each other state, so the state space can’t be broken up into two nonempty sets that don’t interact with each other in the long term. For each Markov chain we consider, irreducibility follows from assumptions (1)–(2) above and details of the particular spending algorithm (for example, assumptions (3)–(4) in Section \[big spender\] below).
The other condition is aperiodicity. A Markov chain is *periodic* (i.e., not aperiodic) if there is some state $s_i$ and some integer $k>1$ such that every transition from $s_i$ back to itself occurs in a multiple of $k$ steps. If a wallet is in state $s_i$, then the transaction with price $0$ causes the wallet to transition to $s_i$, so our Markov chains are aperiodic. Therefore the Perron–Frobenius theorem implies that $p$ exists and that $p$ is the dominant eigenvector of the matrix $M$, corresponding to the eigenvalue $1$.
Spending algorithms {#Spending algorithms}
===================
Now that we understand the mechanics of Markov chains, we just need to determine a suitable Markov chain model for a spender’s behavior. Unlike the cashier, the spender has a limited supply of coins. When the supply is limited, the greedy algorithm does not always make exact change. For example, if you’re trying to come up with $30$ cents and your wallet state is $\{25, 10, 10, 10\}$ then the greedy algorithm fails to identify $\{10, 10, 10\}$.
Moreover, the spender will not always be able to make exact change. Since our spender does not want to accumulate arbitrarily many coins (unlike the coin keeper), let’s first consider the *minimalist spender*, who spends coins so as to minimize the number of coins in their wallet after each transaction.
The minimalist spender
----------------------
Of course, one way to be a minimalist spender is to curtly throw all your coins at the cashier and have them give you change (greedily). Sometimes this can result in clever spending; for example if you have $\{10\}$ and are charged $85$ cents, then you’ll end up with $\{25\}$. However, in other cases this is socially uncouth; if you have $\{1, 1, 1, 1\}$ and are charged $95$ cents, then the cashier will hand you back $\{5, 1, 1, 1, 1\}$, which contains the four pennies you already had. With some thought, you can avoid altercations by not handing the cashier any coins they will hand right back to you.
In any case, if a minimalist spender’s wallet has value $n$ cents and the price is $c$ cents, then the state of the wallet after the transaction will be a minimal partition of $n - c \bmod 100$. Since there is only one such minimal partition, this determines the minimalist spender’s wallet state. There are $100$ possible wallet states, one for each integer $0 \leq n \leq 99$. By assumption , the probability of transitioning from one state to any other state is $1/100$, so no computation is necessary to determine that each state is equally likely in the long term. The expected number of coins in the minimalist spender’s wallet is therefore $\frac{1}{100} \sum_{i=1}^{100} |s_i| = 4.7$, and the expected total value of the wallet is $\frac{1}{100} \sum_{n=0}^{99} n = 49.5$ cents. Counting occurrences of each denomination in the $100$ minimal partitions of $0 \leq n \leq 99$ as we did in Section \[Introduction\] shows that the expected number of quarters is $1.5$; the expected numbers of dimes, nickels, and pennies are $0.8$, $0.4$, and $2$.
Intuitively, one would expect the minimalist’s strategy to result in the lowest possible expected number of coins. Indeed this is the case; let $g(n)$ be the number of coins in the greedy partition of $n$. Fix a spending strategy that yields an irreducible, aperiodic Markov chain. Let $e(n)$ be the long-term conditional expected number of coins in the spender’s wallet, given that the total value of the wallet is $n$ cents. Since $g(n)$ is the minimum number of coins required to have exactly $n$ cents, $e(n) \geq g(n)$ for all $0 \leq n \leq 99$. Since the price $c$ is uniformly distributed, the total value $n$ is uniformly distributed, and therefore the long-term expected number of coins is $$\frac{1}{100} \sum_{n=0}^{99} e(n) \geq \frac{1}{100} \sum_{n=0}^{99} g(n) = \frac{47}{10}.$$
Similarly, in the toy currency from Section \[Markov chains\], the expected number of coins for the minimalist spender is $1$, which is less than the expected number $\frac{9}{8}$ for the spending strategy we considered.
However, the minimalist spender’s behavior is not very realistic. Suppose the wallet state is $\{5\}$ and the price is $79$. Few people would hand the cashier the nickel in this situation, even though doing so would reduce the number of coins in their wallet after the transaction by $2$. So let us consider a more realistic strategy.
The big spender {#big spender}
---------------
If a spender does not have enough coins to cover the cost of their purchase and does not need to achieve the absolute minimum number of coins after the transaction, then the easiest course of action is to spend no coins and receive change. If the spender does have enough coins to cover the cost, it is reasonable to assume that they overpay as little as possible. For example, if the wallet state is $\{25, 10, 5, 1, 1\}$ and the price is $13$ cents, then the spender spends $\{10, 5\}$.
How does a spender identify a subset of coins whose total is the smallest total that is greater than or equal to the purchase price? Well, one way is to examine *all* subsets of coins in the wallet and compute the total of each. This naive algorithm may not be fast enough for the express lane, but it turns out to be fast enough to compute the transition matrix in a reasonable amount of time.
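A direct transcription of this naive search into Python might look as follows (a sketch, with our own function name; ties between subsets with the same total are broken in favor of larger coins, anticipating the convention adopted below).

```python
from itertools import combinations

def big_spender_payment(wallet, price):
    """Subset of `wallet` with the smallest total that is at least `price`,
    preferring larger coins when two subsets have the same total.
    Returns None when the wallet cannot cover the price."""
    candidates = [c for r in range(len(wallet) + 1)
                  for c in combinations(sorted(wallet, reverse=True), r)
                  if sum(c) >= price]
    if not candidates:
        return None   # pay with bills instead
    return min(candidates, key=lambda c: (sum(c), [-coin for coin in c]))

print(big_spender_payment((25, 10, 5, 1, 1), 13))   # (10, 5)
print(big_spender_payment((10, 5, 5, 5), 15))       # (10, 5), not (5, 5, 5)
```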
Now, there may be multiple subsets of coins in the wallet with the same minimal total. For example, if the wallet state is $\{10, 5, 5, 5\}$ and the price is $15$ cents, there are two ways to make change. Using the greedy algorithm as inspiration, let us assume the spender breaks ties by favoring bigger coins and spends $\{10, 5\}$ rather than $\{5, 5, 5\}$. In addition to the two assumptions in Section \[Introduction\], our assumptions are therefore the following.
3. \[3\] If the spender does not have sufficient change to pay for the purchase, he spends no coins and receives change from the cashier.
4. \[4\] If the spender has sufficient change, he makes the purchase by overpaying as little as possible and receives change if necessary.
5. \[5\] If there are multiple ways to overpay as little as possible, the spender favors $\{a_1, a_2, \dots, a_m\}$ over $\{b_1, b_2, \dots, b_n\}$, where $a_1 \geq a_2 \geq \dots \geq a_m$ and $b_1 \geq b_2 \geq \dots \geq b_n$, if $a_1 = b_1, a_2 = b_2, \dots, a_i = b_i$ and $a_{i+1} > b_{i+1}$ for some $i$.
We refer to a spender who follows these rules as a *big spender*. Let’s check that there are only finitely many states for a big spender’s wallet.
\[bounded\] Suppose a spender adheres to assumptions and . If the spender’s wallet has at most $99$ cents before a transaction, then it has at most $99$ cents after the transaction.
Let $0 \leq c \leq 99$ be (the fractional part of) the price, and let $n$ be the total value of coins in the spender’s wallet.
If $c\leq n$, by , the spender pays at least $c$ cents, receiving change if necessary, and ends up with $n - c$ cents after the transaction. Since $n \leq 99$ and $c \geq 0$, we know that $n - c \leq 99$ as well.
If $c>n$, since $n$ is not enough to pay $c$ cents, by the spender only pays with bills, and receives $100-c$ in change, for a total of $n + 100 - c = 100 - (c - n)$ after the transaction. Since $c > n$, we know that $c - n \geq 1$, so $100 - (c - n) \leq 99$.
If a big spender begins with more than 99 cents in his wallet (because he did well at a slot machine), then he will spend coins until he has at most 99 cents, and then the lemma applies. Thus any wallet state with more than 99 cents is only transient and has a long-term probability of 0. Since there are finitely many ways to carry around at most 99 cents, the state space of the big spender’s wallet is finite.
We are now ready to set up a Markov chain for the big spender. The possible wallet states are the states totaling at most $99$ cents. Each such state contains at most $3$ quarters, $9$ dimes, $19$ nickels, and $99$ pennies, and a quick computer filter shows that of these $4 \times 10 \times 20 \times 100 = 80000$ potential states only $6720$ contain at most $99$ cents.
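This count is easy to reproduce with a few lines of Python (ours):

```python
from itertools import product

wallets = [(q, d, n, p)
           for q, d, n, p in product(range(4), range(10), range(20), range(100))
           if 25 * q + 10 * d + 5 * n + p <= 99]
print(len(wallets))   # 6720
```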
To construct the $6720 \times 6720$ transition matrix for the big spender Markov chain, we simulate all $6720 \times 100 = 672000$ possible transactions. Since we are using the naive algorithm, this is somewhat time-consuming. The authors’ implementation took $8$ hours on a 2.6 GHz laptop. The list of wallet states and the explicit transition matrix can be downloaded from the authors’ web sites, along with a *Mathematica* notebook containing the computations.
We know that the limiting distribution $p$ exists, and it is the dominant eigenvector of the transition matrix. However, computing it is another matter. For matrices of this size, Gaussian elimination is slow. If we don’t care about the entries of $p$ as exact rational numbers but are content with approximations, it’s much faster to use numerical methods. *Arnoldi iteration* is an efficient method for approximating the largest eigenvalues and associated eigenvectors of a matrix, without computing them all. For details, the interested reader should take a look at any textbook on numerical linear algebra.
Fortunately, an implementation of Arnoldi iteration due to Lehoucq and Sorensen [@Lehoucq--Sorensen] is available in the package ARPACK [@ARPACK], which is free for anyone to download and use. This package is also included in *Mathematica* [@implementation], so to compute the dominant eigenvector of a matrix one can simply evaluate $$\texttt{Eigenvectors[N[Transpose[}matrix\texttt{]], 1]}$$ in the Wolfram Language. The symbol `N` converts rational entries in the matrix into floating-point numbers, and `Transpose` ensures that we get a left (not right) eigenvector.
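The same ARPACK routines are also exposed in Python through `scipy.sparse.linalg.eigs`. The sketch below (ours) is shown on the toy $6 \times 6$ matrix so that it runs as written; the identical call applies to the $6720 \times 6720$ matrix once that matrix is stored as a (preferably sparse) array.

```python
import numpy as np
from scipy.sparse.linalg import eigs

M = np.array([[1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 1, 0],
              [1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 1],
              [1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 1]]) / 4   # stand-in for the big transition matrix

# Dominant *left* eigenvector: feed ARPACK the transpose, ask for one eigenpair.
vals, vecs = eigs(M.T, k=1, which='LM')
p = np.real(vecs[:, 0])
p /= p.sum()              # normalize so the entries are probabilities summing to 1
print(np.real(vals[0]))   # approximately 1.0
print(p)                  # approximately [0.25, 0.25, 0.15625, 0.09375, 0.21875, 0.03125]
```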
ARPACK is quite fast. Computing the dominant eigenvector for the big spender takes less than a second. And one finds that there are five most likely states, each with a probability of $0.01000$; they are the empty wallet $\{\}$ and the states consisting of $1$, $2$, $3$, or $4$ pennies. Therefore $5\%$ of the time the big spender’s wallet is in one of these states.
The expected number of coins in the big spender’s wallet is $10.05$. This is more than twice the expected number of coins for the minimalist spender. The expected numbers of quarters, dimes, nickels, and pennies are $1.06$, $1.15$, $0.91$, and $6.92$. Assuming that all coin holders are big spenders (which of course is not actually the case, since cash registers dispense coins greedily), this implies that the distribution of coins in circulation is 10.6% quarters, 11.5% dimes, 9.1% nickels, and 68.9% pennies. Compare this to the distribution of U.S. minted coins in 2014 — 11.9% quarters, 17.4% dimes, 9.1% nickels, and 61.6% pennies. Relative to the coin keeper model, the big spender distribution comes several times closer (as points in ${\mathbb{R}}^4$) to the U.S. mint distribution.
The expected total value of the big spender’s wallet is $49.5$ cents, just as it is for the minimalist spender. This may be surprising, since the two spending algorithms are so different. However, it is a consequence of assumption , which specifies that prices are distributed uniformly. If we ignore all information about the big spender’s wallet state except its value, then we get a Markov chain with $100$ states, all equally likely. The expected total wallet value in this new Markov chain is $49.5$ cents. Since the expected wallet value is preserved under the function which forgets about the particular partition of $n$, the big spender has the same expected wallet value. In fact, any spending scenario in which the possible wallet values are all equally likely has an expected wallet value equal to the average of the possible wallet values.
The pennies-first big spender
-----------------------------
We have seen that while the minimalist spender carries $4.7$ coins on average, the big spender carries significantly more. We can narrow the gap by spending pennies in an intelligent way. For example, if your wallet state is $\{1, 1, 1, 1\}$ and the price is $99$ cents, then it is easy to see that spending the four pennies will result in fewer coins than not.
To determine which coins to pay with, the *pennies-first big spender* first computes the price modulo $5$. If he has enough pennies to cover this price, he hands those pennies to the cashier and subtracts them from the price. Then he behaves as a big spender, paying for the modified price.
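In code, the pennies-first rule is a thin wrapper around the big spender’s routine. The sketch below is ours; `big_spender_payment` is the hypothetical helper from the earlier sketch, repeated here so that the block is self-contained, and the fallback branch (too few pennies) simply behaves as a big spender with the whole wallet.

```python
from itertools import combinations

def big_spender_payment(wallet, price):
    candidates = [c for r in range(len(wallet) + 1)
                  for c in combinations(sorted(wallet, reverse=True), r)
                  if sum(c) >= price]
    if not candidates:
        return None   # pay with bills instead
    return min(candidates, key=lambda c: (sum(c), [-coin for coin in c]))

def pennies_first_payment(wallet, price):
    """Hand over pennies covering price mod 5 first (when possible), then
    pay for the reduced price exactly as a big spender would."""
    r = price % 5
    pennies = [c for c in wallet if c == 1]
    others = [c for c in wallet if c != 1]
    if len(pennies) >= r:
        leftover = pennies[r:]   # pennies kept for the big-spender step
        return [1] * r, big_spender_payment(tuple(others + leftover), price - r)
    return [], big_spender_payment(tuple(wallet), price)

print(pennies_first_payment((25, 10, 1, 1, 1), 13))   # ([1, 1, 1], (10,))
```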
If the pennies-first big spender has fewer than $5$ pennies before a transaction, he has fewer than $5$ pennies after the transaction. Therefore the pennies-first big spender never carries more than $4$ pennies, and the state space is reduced to only $1065$ states. Computing the dominant eigenvector of the transition matrix shows that the expected number of coins is $5.74$. This is only $1$ coin more than the minimum possible value, $4.7$. So spending pennies first actually gets you quite close to the fewest coins on average.
One computes the expected numbers of quarters, dimes, nickels, and pennies for the pennies-first big spender to be $1.12$, $1.27$, $1.35$, and $2.00$. This raises a question. Is the expected number of pennies not just approximately $2$ but exactly $2$? Imagine that the pennies-first big spender is actually two people, one who holds the pennies, and the other who holds the quarters, dimes, and nickels. When presented with a price $c$ to pay, these two people can behave collectively as a pennies-first big spender without the penny holder receiving information from his partner. If the person with the pennies can pay for $c \bmod 5$, then he does; if not, he receives $5 - (c \bmod 5)$ pennies from the cashier. Since the penny holder doesn’t need any information from his partner, all five possible states are equally likely, and the expected number of pennies is exactly $2$.
Additional currencies {#Additional currencies}
=====================
The framework we have outlined is certainly applicable to other currencies. We mention a few of interest, retaining assumptions –.
A *penniless purchaser* is a spender who has no money. Their long-term wallet behavior is not difficult to analyze. On the other hand, a *pennyless purchaser* is a big spender who never carries pennies but does carry other coins. Pennyless purchasers arise in at least two different ways. Some governments prefer not to deal with pennies. Canada, for example, stopped minting pennies as of 2012, so most transactions in Canada no longer involve pennies. On the other hand, some people prefer not to deal with pennies and drop any they receive into the give-a-penny/take-a-penny tray. Therefore prices for the pennyless purchaser are effectively rounded to a multiple of $5$ cents, and it suffices to consider $20$ prices rather than $100$. Moreover, these $20$ prices occur with equal frequency as a consequence of assumption . There are $213$ wallet states composed of quarters, dimes, and nickels that have value at most $99$ cents. The expected number of coins for the pennyless purchaser is $3.74$. The expected numbers of quarters, dimes, and nickels are $1.12$, $1.27$, and $1.35$.
If these numbers look familiar, it is because they are the same numbers we computed for the pennies-first big spender! Since we established that pennies can be modeled independently of the other coins for the pennies-first big spender, one might suspect that the pennies-first big spender can be decomposed into two independent components — a pennyless purchaser (with $213$ states) and a pennies modulo $5$ purchaser (with $5$ states, all equally likely). When presented with a price $c$ to pay, the pennyless component pays for $c - (c \bmod 5)$ as a big spender, receiving change in quarters, dimes, and nickels if necessary. As before, if the penny component can pay for $c \bmod 5$, then he does; if not, he receives $5 - (c \bmod 5)$ pennies in change. Let us call the product of these independent components a *pennies-separate big spender*.
However, this decomposition doesn’t actually work. For the pennies-*first* big spender, if the price is $c = 1$ cent then the two wallet states $\{5\}$ and $\{5, 1\}$ result in different numbers of nickels after a transaction, so the pennyless component does in fact depend on the penny component. Even worse, if the price is $c = 1$ cent and the wallet is $\{5\}$ then the pennies-*separate* big spender’s wallet becomes $\{5, 1, 1, 1, 1\}$, which is too much change! Nonetheless, these two Markov chains are closely related; their transition matrices are equal, which explains the numerical coincidence we observed. Suppose $s_i$ and $s_j$ are two states such that some price $c$ causes $s_i$ to transition to $s_j$ for the pennies-first big spender. If $s_i$ contains fewer than $c \bmod 5$ pennies, then the price $(c + 5) \bmod 100$ causes $s_i$ to transition to $s_j$ for the pennies-separate big spender; otherwise the price $c$ causes this transition. Since the transition matrices are equal, the long-term probability of each state is the same in both models. Therefore the expected numbers of quarters, dimes, and nickels for the pennyless purchaser agree with the pennies-first big spender.
Another spending strategy is the *quarter hoarder*, used by college students and apartment dwellers who save their quarters for laundry. All quarters they receive as change are immediately thrown into their laundry funds. Of the $10 \times 20 \times 100 = 20000$ potential wallet states containing up to $9$ dimes, $19$ nickels, and $99$ pennies, there are $4125$ states for which the total is at most $99$ cents. The expected number of coins for a big spender quarter hoarder is $13.74$, distributed as $1.60$ dimes, $1.21$ nickels, and $10.93$ pennies.
Finally, let’s consider a currency that no one actually uses. Under assumptions and , Shallit [@Shallit] asked how to choose a currency so that cashiers return the fewest coins per transaction on average. For a currency $d_1 > d_2 > d_3 > d_4$ with four denominations, he computed that the minimum possible value for the average number of coins per transaction is $389/100$, and one way to attain this minimum is with a $25$-cent piece, $18$-cent piece, $5$-cent piece, and $1$-cent piece. So as our final model, we consider a fictional country that has adopted Shallit’s suggestion of replacing the $10$-cent piece with an $18$-cent piece. There are two properties of this currency that the U.S. currency does not have. The first is that the greedy algorithm doesn’t always make change using the fewest possible coins. For example, to make $28$ cents the greedy algorithm gives $\{25, 1, 1, 1\}$, but you can do better with $\{18, 5, 5\}$.
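Examples like this are easy to check with a short dynamic program for the minimum number of coins (a sketch, ours; the function names are our own):

```python
def greedy_count(n, denoms=(25, 18, 5, 1)):
    count = 0
    for d in denoms:
        q, n = divmod(n, d)
        count += q
    return count

def min_count(n, denoms=(25, 18, 5, 1)):
    dp = [0]   # dp[v] = fewest coins summing to exactly v cents
    for v in range(1, n + 1):
        dp.append(1 + min(dp[v - d] for d in denoms if d <= v))
    return dp[n]

print(greedy_count(28), min_count(28))               # 4 3
print(sum(min_count(v) for v in range(100)) / 100)   # 3.89
```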
The second property is that there is not always a unique way to make change using the fewest possible coins. For example, $77$ cents can be given in five coins as $\{25, 25, 25, 1, 1\}$ or $\{18, 18, 18, 18, 5\}$. The prices $82$ and $95$ also have multiple minimal representations. (Bryant, Hamblin, and Jones [@Bryant--Hamblin--Jones] give a characterization of currencies $d_1 > d_2 > d_3$ that avoid this property, but for more than three denominations no simple characterization is known.) According to assumption , the big spender breaks ties between minimal representations of $77$, $82$, and $95$ by favoring bigger coins. For example, the big spender spends $\{25, 25, 25, 1, 1\}$ rather than $\{18, 18, 18, 18, 5\}$ if both are possible.
The cashier doesn’t care about getting rid of big coins, however. So to make things interesting, let’s refine assumption as follows.
1. Cashiers return change using the fewest possible coins; when there are two ways to make change with fewest coins, the cashier uses each half the time.
For example, a cashier makes change for $77$ cents as $\{25, 25, 25, 1, 1\}$ with probability $1/2$ and as $\{18, 18, 18, 18, 5\}$ with probability $1/2$. Consequently, the transition matrix has some entries that are $1/200$.
For the minimalist spender in this currency, there are $100$ possible wallet states, and the expected number of coins is $\frac{1}{100} \sum_{i=1}^{100} |s_i| = 3.89$. Note that this is the same computation used to determine the average number of coins per transaction. In general these two quantities are the same, so reducing the number of coins per transaction is equivalent to reducing the number of coins in the minimalist spender’s wallet. Relative to the U.S. currency, the minimalist spender carries $0.81$ fewer coins in the Shallit currency.
The number of wallet states in the Shallit currency totaling at most $99$ cents is $4238$. The pennies-first big spender algorithm is not such a sensible way to spend coins, since if your wallet state is $\{18, 1, 1, 1\}$ and the price is $18$ cents then you don’t want to spend pennies first. For the big spender, however, the expected number of coins is $8.63$, so this currency also reduces the number of coins in the wallet of a big spender. The expected numbers of quarters, $18$-cent pieces, nickels, and pennies are $0.66$, $0.98$, $2.10$, and $4.89$.
Conclusion
==========
In this paper, we have taken the question ‘What’s in your wallet?’ quite literally and considered four spending strategies — the coin keeper, the minimalist spender, the big spender, and the pennies-first big spender. In each strategy we were able to compute the long-term behavior of wallets in various currencies. In two of the strategies, computing the limiting probability vector required techniques from numerical linear algebra.
We also looked at a few alternate currencies, but there are many others we could consider. In fact, it would be interesting to know in which country of the world a big spender (or a smallest-denomination-first big spender) is expected to carry the fewest coins. Or, which currency $d_1 > d_2 > d_3 > d_4$ of four denominations minimizes the expected number of coins in your wallet?
One embarrassing feature is that while Arnoldi iteration allows us to quickly compute the dominant eigenvector of a transition matrix, the computation of the matrix itself is quite time-consuming. We have used the naive algorithm, which looks at all subsets of a wallet to determine which subset to spend. What is a faster algorithm for computing the big spender’s behavior?
Our framework is also applicable to other spending strategies. And indeed there are good reasons to vary some of the assumptions. For example, assumption isn’t universally true. Given the choice between spending a quarter or five nickels, the big spender spends the quarter. While the big spender minimizes the number of coins they spend, it is also reasonable to suppose that a spender would break ties by spending *more* coins. We could consider a *heavy spender* who maximizes the number of coins spent from his wallet in a given transaction according to the following modification of assumption .
1. If there are multiple ways to overpay as little as possible, the spender favors $\{a_1, a_2, \dots, a_m\}$ over $\{b_1, b_2, \dots, b_n\}$ if $m > n$.
When given the choice, a heavy spender favors $\{5, 5, 5\}$ over $\{10, 5\}$. Do assumptions , , and ($5'$) completely determine the behavior of a heavy spender? If so, does the heavy spender have a lighter wallet on average than the big spender?
Of course, the million-dollar question is whether real people actually use any of these spending strategies. To what extent is the pennies-first big spender more realistic than the big spender? How many coins is an actual person expected to have? Then again, maybe nowadays everyone uses a credit card.
[99]{}
Anna Adamaszek and Michal Adamaszek, Combinatorics of the change-making problem, *European Journal of Combinatorics* **31** (2010) 47–63.
Lance Bryant, James Hamblin, and Lenny Jones, A variation on the money-changing problem, *The American Mathematical Monthly* **119** (2012) 406–414.
Xuan Cai, Canonical coin systems for change-making problems, *Proceedings of the Ninth International Conference on Hybrid Intelligent Systems* **1** (2009) 499–504.

Lena Chang and James F. Korsh, Canonical coin changing and greedy solutions, *Journal of the Association for Computing Machinery* **23** (1976) 418–422.
S. K. Chang and A. Gill, Algorithmic solution of the change-making problem, *Journal of the Association for Computing Machinery* **17** (1970) 113–122.
John Dewey Jones, Orderly currencies, *The American Mathematical Monthly* **101** (1994) 36–38.
Dexter Kozen and Shmuel Zaks, Optimal bounds for the change-making problem, *Theoretical Computer Science* **123** (1994) 377–388.
Richard B. Lehoucq and Danny C. Sorensen, Deflation techniques for an implicitly restarted Arnoldi iteration, *SIAM Journal on Matrix Analysis and Applications* **17** (1996) 789–821.
M. J. Magazine, G. L. Nemhauser, and L. E. Trotter, Jr., When the greedy solution solves a class of knapsack problems, *Operations Research* **23** (1975) 207–217.
Stephen B. Maurer, Disorderly currencies, *The American Mathematical Monthly* **101** (1994) 419.
David Pearson, A polynomial-time algorithm for the change-making problem, *Operations Research Letters* **33** (2005) 231–234.
Rice University, ARPACK, <http://www.caam.rice.edu/software/ARPACK/>.
Jeffrey Shallit, What this country needs is an 18 piece, *The Mathematical Intelligencer* **25** (2003) 20–23.
United States Mint, Production and sales figures, <http://www.usmint.gov/about_the_mint/?action=coin_production>.
Wolfram Research, Some notes on internal implementation, <http://reference.wolfram.com/language/tutorial/SomeNotesOnInternalImplementation.html#15781>.
[^1]: Maurer [@Maurer] interestingly observes that before the existence of electronic cash registers, cashiers typically did not use the greedy algorithm but instead counted *up* from the purchase price to the amount tendered — yet still usually gave change using the fewest coins.
|
---
abstract: 'The interaction of two counter-propagating electromagnetic waves in a vacuum is analyzed within the framework of the Heisenberg-Euler formalism in quantum electrodynamics. The nonlinear electromagnetic wave in the quantum vacuum is characterized by wave steepening, subsequent generation of high order harmonics and electromagnetic shock wave formation with electron–positron pair generation at the shock wave front.'
author:
- Hedvika Kadlecová
- Georg Korn
- 'Sergei V. Bulanov'
title: Electromagnetic Shocks in the Quantum Vacuum
---
Introduction
============
In contrast to classical electrodynamics, where electromagnetic waves do not interact in a vacuum, in quantum electrodynamics (QED) photon–photon scattering in a vacuum occurs via the generation of virtual electron–positron pairs, resulting in vacuum polarization, the Lamb shift, vacuum birefringence, Coulomb field modification, etc. [@BLP-QED]. Off-shell photon–photon scattering was indirectly observed in collisions of heavy ions accelerated by standard charged particle accelerators (see the review article [@Baur]) and in the results of experiments obtained with the ATLAS detector at the Large Hadron Collider [@ATLASScattering]. Further study of the process will allow extensions of the Standard Model to be tested, in which new particles can participate in loop diagrams [@Zempf; @Inada].
The increasing availability of high power lasers raises interest in experimental observation and motivates theoretical studies of such processes in laser-laser scattering [@Mourou; @Marklund; @DTomma; @DiPizzaReview; @MonKod; @King; @Koga; @KarbsteinShai], scattering of the XFEL emitted photons [@Inada], and the interaction of relatively long wavelength high intensity laser light with short wavelength X-ray photons [@Schlenvoigt2016; @BaifeiShen2018; @Heinzl2006; @Shanghai100PW].
In the relatively low photon energy limit, for photon energy below the electron rest-mass energy, ${\cal E}_{\gamma}=\hbar \omega<m_e c^2$, the total photon–photon scattering cross section for non-polarized photons is proportional to the sixth power of the photon energy, $$\sigma_{\gamma-\gamma}=\left(\frac{973}{10125\pi}\right)\alpha^2 r_e^2 \left(\frac{\hbar \omega}{m_e c^2}\right)^6,
\label{estimate}$$ reaching its maximum at $\hbar \omega\approx 1.5 m_e c^2$ and decreasing proportionally to the inverse of the second power of the photon energy for $\hbar \omega>m_e c^2$ (see Ref. [@BLP-QED]), where $\alpha=e^2/\hbar c\approx 1/137$ is the fine structure constant, $r_e=e^2/m_ec^2=2.82\times 10^{-13}\,{\rm cm}$ is the classical electron radius, $e$ and $m_e$ are the electron electric charge and mass, $c$ is the speed of light in a vacuum, and $\hbar$ is the reduced Planck constant.
From Eq. (\[estimate\]), it seems that by using the maximal frequency of the electromagnetic wave we can reach a higher number of scattering events. This would be so if we assumed the same number of photons in the colliding beams and the same transverse beam size for beams with different frequencies. However, if we aim at the highest field amplitude and the highest luminosity of the colliding photon beams, we must consider the smallest transverse size of the beams, i.e., they should be focused onto a spot of one-wavelength size, which is different for beams of different frequencies. In general, this approach corresponds to Gerard Mourou’s lambda-cube concept [@Mourou2002].
To find the number of photon–photon scattering events in the low frequency limit, we estimate the number of photons in the electromagnetic pulse with the amplitude $E$ in the $\lambda^3$ volume, where $\lambda=2\pi c/\omega$ is the electromagnetic wave wavelength. The number of photons in a laser pulse is then equal to $$N_{\gamma}=\frac{E^2 \lambda^3}{4 \pi \hbar \omega}.$$
Using these relationships it is easy to find the number of scattering events per 4-volume $2\pi \lambda^3/\omega$. It is proportional to the scattering cross section given by Eq. (\[estimate\]) and to the product of the photon numbers in the colliding photon bunches, and it is inversely proportional to the square of the wavelength. Assuming equal photon numbers in the colliding bunches and equal photon frequencies, we obtain $$N_{\gamma-\gamma}=\sigma_{\gamma-\gamma}\frac{N_{\gamma}^2}{\lambda^2}
=\frac{973}{10125\pi}
\alpha^2\left(\frac{E}{E_S}\right)^4, \label{eq:phot-phot}$$ where $E_S=m_e^2c^3/e \hbar$ is the critical field of quantum electrodynamics, also known as the Sauter-Schwinger electric field. The electromagnetic radiation intensity corresponding to this field is $I_S=c E_S^2/4\pi\approx 10^{29}$ W/cm$^2$. Finally and importantly, we observe that the number of scatterings does not depend on the electromagnetic wave frequency. It is determined by the radiation intensity $I=cE^2/4\pi$ as $$N_{\gamma-\gamma}\propto \alpha^2 (I/I_S)^2,$$ i.e., the frequency-independent dimensionless parameter characterizing photon-photon scattering is $\alpha (I/I_S)$, as in the case when photon–photon scattering is described within the framework of the approach based on the Heisenberg-Euler Lagrangian used below. We note that a similar (but not the same) analysis can be found in the papers [@Schlenvoigt2016; @BaifeiShen2018].
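For orientation, (\[eq:phot-phot\]) can be evaluated directly; the snippet below (with $I_S\approx 10^{29}$ W/cm$^2$ as quoted above and illustrative intensities) shows how small the expected number of scatterings per collision is at currently available intensities.

```python
# Sketch: evaluating N_{gamma-gamma} of Eq. (eq:phot-phot) for a few illustrative
# intensities, taking I_S ~ 1e29 W/cm^2 as quoted above.
import math

alpha = 1 / 137.0
I_S = 1e29                                   # W/cm^2

def n_scatterings(I):
    return 973 / (10125 * math.pi) * alpha**2 * (I / I_S) ** 2

for I in (1e23, 1e25, 1e27):
    print(f"I = {I:.0e} W/cm^2 -> N ~ {n_scatterings(I):.1e}")
```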
At the limit of extremely high amplitude of an electromagnetic field with a strength approaching the QED critical field $E_S$, nonlinear modification of the vacuum refraction index via the polarization of virtual electron–positron pairs supports electromagnetic wave self–interaction.
The nonlinear properties of the QED vacuum have been extensively addressed in a number of publications. The theoretical problem of nonlinear effects of light propagation is considered in Ref. [@Bialynicka], which studies photon splitting in an external field in the full Heisenberg–Euler theory. Other extensive studies can be found in Refs. [@Adler; @Brezin; @Ritus]. Further results on nontrivial vacua and on curved spacetimes can be found in Refs. [@Latorre; @DrumondHathrell; @Shore]. Photon splitting in crossed electric and magnetic fields is considered, for example, in [@PapanyanRitus]. Nonlinear wave mixing in cavities is analyzed in [@BrodinErikssonMarklund]. The nonlinear interaction between an electromagnetic pulse and a radiation background is investigated in [@MarklundBrodinStenflo]. In the monograph [@DittrichGies], the vacuum birefringence phenomenon is described within the framework of the geometrical optics approximation by using a unified formalism. In the work [@Rosanov1993], weak dispersion is incorporated into the Heisenberg–Euler theory, and in [@Lorenci] the approach used in Ref. [@DittrichGies] is generalized, allowing one to obtain the dispersion equation for the electromagnetic wave frequency and wavenumber. This process, in particular, results in a decrease of the velocity of counter-propagating electromagnetic waves. As is well known, co-propagating waves do not change their propagation velocity because co-propagating photons do not interact, e.g., see Ref. [@Zee].
Finite amplitude wave interaction in the QED vacuum results in high order harmonics generation [@Rosanov1993; @DiP; @FN; @NF; @Boehl]. High frequency harmonics generation can be a powerful tool to explore the physics of the nonlinear QED vacuum. The highest harmonics can be used to probe the high energy region because they are naturally co-propagating and allow one to measure QED effects in the coherent harmonic focus. High-order harmonics generation in vacuum is studied in detail in [@DiP; @FN].
Nonlinear properties of the QED vacuum in the long wavelength and low frequency limit are described by the Heisenberg-Euler Lagrangian [@HeisenbergEuler], which describes electromagnetic fields in a dispersionless medium whose refraction index depends on the electromagnetic field. In media where the dependence of the refraction index on the field amplitude leads to a nonlinear response, the electromagnetic wave can evolve into a configuration with singularities [@LL-EDCM].
The appearance of singularities in the Heisenberg-Euler electrodynamics is noticed in Ref. [@LutzkyToll] where a singular particular solution of equations derived from the Heisenberg–Euler Lagrangian is obtained. In Ref. [@Boehl], the wave steepening is demonstrated by numerical integration of nonlinear QED vacuum electrodynamics equations.
In this paper we address the problem of nonlinear wave evolution in the quantum vacuum using the low frequency and long wavelength approximation, aiming at a theoretical description of electromagnetic shock wave formation in the nonlinear QED vacuum. We present and analyze an analytical solution of the Heisenberg–Euler electrodynamics equations for a finite amplitude electromagnetic wave counter-propagating to a crossed electromagnetic field. This configuration may correspond to the collision of a low-frequency very high intensity laser pulse with a high frequency X-ray pulse generated by an XFEL. The first, long-wavelength, electromagnetic wave is approximated by a constant crossed field. We derive the corresponding nonlinear field equations containing expressions for the relatively short wavelength pulse. The solution of the nonlinear field equations is found in the form of a simple wave, or Riemann wave. This solution describes high order harmonic generation, wave steepening and the formation of an electromagnetic shock wave in a vacuum. We investigate these characteristics in more detail, together with a discussion of the shock wave front formation process.
On The Heisenberg–Euler Lagrangian
==================================
The Heisenberg–Euler Lagrangian is given by $$\mathcal{L}=\mathcal{L}_{0}+\mathcal{L}', \label{eq:Lagrangian}$$ where $$\mathcal{L}_{0}=-\frac{1}{16\pi}F_{\mu \nu}F^{\mu \nu}
\label{eq:claLagrangian}$$ is the Lagrangian in classical electrodynamics, $F_{\mu \nu}$ is the electromagnetic field tensor ($F_{\mu \nu}=\partial_{\mu} A_{\nu}-\partial_{\nu} A_{\mu}$), with $A_{\mu}$ being the 4-vector of the electromagnetic field and $\mu=0,1,2,3$. In the Heisenberg–Euler theory, the radiation corrections are described by $\mathcal{L}'$ on the right hand side of Eq. (\[eq:Lagrangian\]), which in the weak field approximation is given by [@HeHe], $$\begin{aligned}
\mathcal{L}'&=\frac{\kappa}{4}\left\{\left(F_{\mu \nu}F^{\mu \nu}\right)^2
+ \frac{7}{4} \left(F_{\mu \nu}\tilde F^{\mu \nu}\right)^2 \right. + \nonumber \\
&\frac{90}{315}\left. \left(F_{\mu \nu}F^{\mu \nu}\right) \left[ \left(F_{\mu \nu}F^{\mu \nu}\right)^2 +\frac{13}{16}\left(F_{\mu \nu}\tilde F^{\mu \nu}\right)^2 \right] \right\}
\label{eq:mathcalL}\end{aligned}$$ where $\kappa=e^4/360 \pi^2 {m}^4$ and $\tilde F^{\mu \nu} = \epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}$ is the tensor dual to the electromagnetic field tensor $F_{\mu \nu}$, with $\varepsilon^{\mu \nu \rho \sigma}$ being the Levi-Civita symbol in four dimensions.
In the following text, we use the units $c=\hbar=1$, and the electromagnetic field is normalized on the QED critical field $E_{S}$.
To describe the singular solutions, we should keep the terms of the weak field approximation up to the sixth order in the field amplitude, because the fourth-order contributions cancel each other in the calculation of the dispersive properties of the QED vacuum. The remaining contribution is of the same order as that coming from the expansion of the Heisenberg–Euler Lagrangian to the sixth order in the fields. We note that in Ref. [@LutzkyToll], the Heisenberg–Euler Lagrangian expansion to the fourth order was used.
In the Lagrangian (\[eq:mathcalL\]) the first two terms on the right hand side describe four interacting photons and the last two terms correspond to six photon interaction.
Counter-propagating electromagnetic waves
==========================================
For the sake of brevity, we consider counter-propagating electromagnetic waves of the same polarization. They are given by the vector potential having one component, ${\bf A}=A {\bf e}_z$, with ${\bf e}_z$ being a unit vector along the $z$ axis.
We assume that the electromagnetic field 4-potential written in the light cone coordinates, $$x_+=(x+t)/\sqrt{2},\quad
x_-=(x-t)/\sqrt{2},$$ equals $$A=W x_+ + a(x_+,x_-).$$ The term $W x_+$ describes the crossed electric and magnetic fields ($E_0=B_0=-W/\sqrt{2}$), whose Poynting vector is antiparallel to the $x$ axis: $${\bf P}=\frac{1}{4 \pi}{\bf E}\times {\bf B}=- \frac{W^2}{8\pi}{\bf e}_x.$$ Here ${\bf e}_x$ is a unit vector along the $x$-axis. In this case, the Lagrangian (\[eq:Lagrangian\]) with $\mathcal{L}_0$ and $\mathcal{L}^{\prime}$ given by Eqs. (\[eq:claLagrangian\]) and (\[eq:mathcalL\]) takes the form $$\mathcal{L}=-\frac{1}{4\pi}\left[(W+w)u-\epsilon_2 (W+w)^2u^2-\epsilon_3 (W+w)^3u^3 \right] \label{eq:lightcone-Lagrangian}$$ It depends on the functions $u=\partial_{x_-}a$ and $w=\partial_{x_+}a$. The dimensionless parameters $\epsilon_2$ and $\epsilon_3$ in Eq. (\[eq:lightcone-Lagrangian\]) are equal to $$\begin{aligned}
\epsilon_2&=2 e^2/45\pi =(2/45 \pi) \alpha, \label{eq:eps2}\\
\epsilon_3&=32 e^2/315\pi =(32/315 \pi) \alpha,\end{aligned}$$ i.e., $\epsilon_2\approx 10^{-4}$ and $\epsilon_3\approx 2\times 10^{-4}$.
The field equations can be found by varying the Lagrangian. It yields $$\partial_{x_-}(\partial \mathcal{L}/\partial u)+\partial_{x_+}(\partial \mathcal{L}/\partial w)=0.$$ As a result, we obtain the system of equations $$\begin{aligned}
\partial_{x_-} w-\partial_{x_+} u=&0, \label{eq:lightcone-1} \\
[1-4\epsilon_2 (W+w)u-9\epsilon_3u^2(W+w)^2&]\partial_{x_+} u \nonumber \\
-[\epsilon_2 (W+w)^2+3\epsilon_3 u (W+w)^3&]\partial_{x_-} u \label{eq:lightcone-2}\\
-[\epsilon_2 u^2+3\epsilon_3 u^3(W+w)&]\partial_{x_+} w=0, \nonumber\end{aligned}$$ where the first equation comes from the equality of mixed partial derivatives: $\partial_{x_-x_+}a=\partial_{x_+x_-}a$.
The equations (\[eq:lightcone-1\], \[eq:lightcone-2\]) have a solution for which $u=0$ and $\partial_{x_-} w=0$, i.e., $w$ is an arbitrary function depending on the variable $x_+$. This is a finite amplitude electromagnetic wave propagating from the right to the left with a propagation velocity equal to speed of light in a vacuum. Its form does not change in time. The electric and magnetic field components are equal to each other ($E=B=-w/\sqrt{2}$), i.e., its superposition with the crossed electromagnetic field gives $E=B=-1/\sqrt{2}(W+w)$.
Linearizing Eqs. (\[eq:lightcone-1\], \[eq:lightcone-2\]), it is easy to find expressions describing the small amplitude wave for which we have $$u(x_+,x_- )=u_0\left(x_- +\epsilon_2 W^2 x_+\right)
\label{eq:lin-wave-u}$$ and $$\quad w(x_+,x_- )=\epsilon_2 W^2 u_0\left(x_- +\epsilon_2 W^2 x_+\right) +w_0(x_+).
\label{eq:lin-wave-w}$$ In Eqs. (\[eq:lin-wave-u\], \[eq:lin-wave-w\]) the functions $u_0$ and $w_0$ are determined by the initial conditions. The function $u(x_+,x_-)$ depends on the light-cone coordinates $(x_+,x_-)$ in combination $$\psi (x_+,x_-)=x_- +\epsilon_2 W^2 x_+. \label{eq:lin-wave-psi-pm}$$ The wave phase $\psi$ can be rewritten as $$\psi (x,t)=\frac{1}{\sqrt{2}}\left[x\left(1+\epsilon_2 W^2\right) -t\left(1-\epsilon_2 W^2\right)\right].
\label{eq:lin-wave-psi-xt}$$ The constant phase condition shows that the wave propagates from the left to the right with the speed $$v_W=\frac{1-\epsilon_2 W^2}{1+\epsilon_2 W^2}\approx 1-2\epsilon_2 W^2+2 \epsilon_2^2 W^4.
\label{eq:v0}$$ It is less than unity, i.e., the wave phase (group) velocity is below the speed of light in a vacuum (see also Refs. [@Marklund; @Bialynicka; @DittrichGies] and the literature cited therein).
The phase difference between the electromagnetic pulse colliding with the counter-propagating wave and a pulse which does not interact with the high intensity wave equals $$\delta \psi=4 \pi\frac{d}{\lambda}\epsilon_2 W^2,
\label{eq:deltapsi}$$ where $\lambda$ is the wavelength of the high frequency pulse and $d$ is the interaction length. This phase difference plays a central role in the discussion of experimental verification of the QED vacuum birefringence [@King; @Heinzl2006]. For a 10 petawatt laser the radiation intensity can reach $10^{24}$ W/cm$^2$, for which $W^2\approx 10^{-5}$. Taking the ratio $d/\lambda\approx 10^4$, i.e. equal to the ratio between the optical and x-ray radiation wavelengths, and using for $\epsilon_2$ the expression (\[eq:eps2\]), we find that $\delta\psi\approx 10^{-4}$.
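This estimate is easy to reproduce numerically; a minimal sketch with the quoted values ($\epsilon_2=2\alpha/45\pi$, $W^2\approx 10^{-5}$, $d/\lambda\approx 10^4$) is given below.

```python
# Sketch: order-of-magnitude check of delta_psi = 4*pi*(d/lambda)*eps2*W^2
# with the values quoted above.
import math

alpha = 1 / 137.0
eps2 = 2 * alpha / (45 * math.pi)      # Eq. (eq:eps2), ~1e-4
W2 = 1e-5                              # normalized crossed-field amplitude squared
d_over_lambda = 1e4                    # interaction length over X-ray wavelength

delta_psi = 4 * math.pi * d_over_lambda * eps2 * W2
print(f"{delta_psi:.1e}")              # ~1e-4, in agreement with the text
```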
Nonlinear wave evolution
=========================
To analyze the nonlinear wave evolution, we seek a simple-wave solution to Eqs. (\[eq:lightcone-1\], \[eq:lightcone-2\]) (e.g., see [@Whitham; @LL-HD; @Kadomtsev]), in which $w$ is considered as a function of $u$: $w(u)$. The simple wave (or Riemann wave) represents an exact self-similar solution of the nonlinear wave equations describing a finite amplitude wave propagating in a continuous medium. With this assumption, we obtain from Eqs. (\[eq:lightcone-1\], \[eq:lightcone-2\]) the system of equations $$\begin{aligned}
&\partial_{x_+} u=J\partial_{x_-} u, \label{eq:lightconeK-1}\\
&\partial_{x_+} u=\nonumber\\
&\frac{(W+w)^2(\epsilon_2+3\epsilon_3(W+w)u)\partial_{x_-} u}
{1-(W+w)u[4\epsilon_2 +9\epsilon_3u(W+w)+3\epsilon_3 u^2J]-\epsilon_2 u^2J} ,
\label{eq:lightconeK-2}\end{aligned}$$ where we express $$\partial_{x_+}w= J \partial_{x_+}u$$ and $$\partial_{x_-}w= J \partial_{x_-}u$$ with the Jacobian $J=dw/du$. Equations (\[eq:lightconeK-1\]) and (\[eq:lightconeK-2\]) are consistent provided that the coefficients in front of $\partial_{x_-} u$ on the right hand sides are equal to each other. This condition yields an equation for $J$: $$\begin{aligned}
&[\epsilon_2 u^2+3\epsilon_3 (W+w) u^3]J^2-\left[1-4\epsilon_2(W+w)u \right. \nonumber \\
&\left.-9\epsilon_3u^2(W+w)^2\right]J+(W + w)^2[\epsilon_2 + 3\epsilon_3 u (W+w)]=0.
\label{eq:dwduJ}\end{aligned}$$ Using smallness of the parameters $\epsilon_2$ and $\epsilon_3$ and the relationship $w\approx \epsilon_2 W^2$, which follows from Eq. (\[eq:lin-wave-w\]), we obtain an expression for the function $J(u)$ in the form of the power series: $$J(u)=\epsilon_2 W^2+4 \epsilon_2^2 W^3 u+3\epsilon_3W^3u+...
\label{eq:dwdu2}$$ Taking into account that $J=dw/du$ and integrating the r.h.s. of Eq. (\[eq:dwdu2\]) with respect to the variable $u$, we obtain for the function $w(u)$ the following expression $$w(u)=\epsilon_2 W^2 u+2 \epsilon_2^2 W^3 u^2+\frac{3}{2}\epsilon_3 W^3u^2+...
\label{eq:dwdu-w}$$ As a result, we find the electric and magnetic field components in the electromagnetic wave propagating from the left to the right $$E=(w-u)/\sqrt{2}\approx -\sqrt{2}u(1-\epsilon_2 W^2) \label{eq:dwdu-E}$$ and $$B=(u+w)/\sqrt{2}\approx \sqrt{2}u(1+\epsilon_2 W^2) \label{eq:dwdu-B}$$ respectively.
Substitution of this expression to the right hand side of Eq. (\[eq:lightconeK-1\]) results in $$\partial_{x_+} u-\left[\epsilon_2 W^2+(4 \epsilon_2^2 +3\epsilon_3)W^3 u\right]\partial_{x_-} u=0.
\label{eq:simple-u}$$ For the variables $x,t$ the equation for the function $$\bar u=-2(4\epsilon_2^2+3\epsilon_3)W^3 u
\label{eq:simple-baru}$$ can be written as $$\partial_{t} \bar u+\left(v_W+\bar u\right)\partial_{x} \bar u=0,
\label{eq:simple-u-xt}$$ with the velocity of the linear wave, $v_W$, given by Eq. (\[eq:v0\]).
A solution to this equation can be obtained in a standard manner (see Refs. [@Kadomtsev; @Whitham]). According to this solution, the function $\bar u(x,t)$ is transported along the characteristic $x_0$ without distortion: $$\bar u=\bar u_0(x_0).$$ The characteristic equation for Eq. (\[eq:simple-u-xt\]) is $$\frac{dx}{d t } =v_W+\bar u \label{eq:charact1}$$ with the solution $$x=x_0+(v_W+\bar u_0(x_0))t.
\label{eq:charact2}$$ Combining these relationships, we obtain the solution to Eq. (\[eq:simple-u-xt\]) in the implicit form, where the function $u(x,t)$ should be found from equation $$\bar u=\bar u_0\left(x-(v_W+\bar u)t\right).
\label{eq:implicite}$$ In particular, this expression describes high order harmonics generation and wave steepening in a vacuum.
Various mechanisms for generating high order harmonics in the QED vacuum are analyzed in Refs. [@Rosanov1993; @DiP; @FN; @Boehl]. In particular, the parametric wave interaction process was considered in [@Rosanov1993] and the “relativistic oscillating mirror" concept (for details of this concept, see [@ROM; @ROMR; @RFM2]) was applied in [@Boehl]. Here, we formulate perhaps one of the simplest mechanisms. To obtain the scaling of high order harmonics generation within the framework of this mechanism, we choose the initial electromagnetic wave as $$\bar u_0=\bar a_1\cos(k(x-v_W t)),
\label{eq:HoH1}$$ where $\bar a_1$ and $k=\omega/c$ are the wave amplitude and wave number ($\omega$ is the wave frequency), respectively. Using the weakness of nonlinearity ($\bar a_1 \ll 1$), we find from Eqs. (\[eq:implicite\], \[eq:HoH1\]) that $$\bar u(x,t)=\bar a_1\cos(k(x-v_W t))-\frac{\bar a_1^2}{2} k v_W t\sin(2k(x-v_W t))...
\label{eq:HoH2}$$ Taking into account the normalization of the wave amplitude given by Eq. (\[eq:simple-baru\]), we find that the ratio of the second harmonic amplitude to the amplitude of the wave at the fundamental frequency scales as $(2 \epsilon_2^2 W^3+3\epsilon_3W^3/2)\bar a_1 k v_W t$. It is proportional to the duration of the electromagnetic wave interaction. Assuming $kv_W t=2\pi d/\lambda$, as in the case corresponding to Eq. (\[eq:deltapsi\]), and an x-ray pulse intensity of $10^{21}$ W/cm$^2$, we obtain that the ratio is approximately equal to $10^{-11}$.
From expression (\[eq:implicite\]), it follows that the electromagnetic field gradient increases with time, i.e., wave steepening occurs. Differentiating $u(x,t)$ with respect to the coordinate $x$, we find $$\partial_x u=\frac{\partial_{x_0}u_0(x_0)}{1-2(4\epsilon_2^2+3\epsilon_3)W^3 \partial_{x_0}u_0(x_0) t}, \label{eq:grad1}$$ where the dependence of the Lagrange coordinate $x_0$ on time and the Euler coordinate $x$ is given by Eq. (\[eq:charact2\]). As shown, the gradient $\partial_x u$ becomes infinite at time $$t_{br}=\frac{1}{2(4\epsilon_2^2+3\epsilon_3)W^3 | \partial_{x_0}u_0(x_0) |}
\label{eq:tbr1}$$ and at the coordinate $x_0$ where the derivative $\partial_{x_0}u_0(x_0)$ has its maximum. This singularity is called the “gradient catastrophe” or “wave breaking”.
The formation of singularity during the evolution of a finite amplitude electromagnetic wave in the quantum vacuum is illustrated in Figures \[Fig1\] and \[Fig2\]. The electromagnetic pulse at $t=0$ takes the form $$u_0(x_0)=a_0 \exp (-x_0^2/2 L^2)\cos (k x_0),\label{eq:u0x0}$$ where $L=4\pi$ and $k=2$. The parameter $4 \epsilon_{2}^2 W^3+3\epsilon_3W^3$ is assumed to be equal to 0.125 and $a_0=1$. As clearly shown in Fig. \[Fig1\], wave steepening evolves with time. Wave breaking occurs due to the characteristic intersection as shown in Fig. \[Fig2\].
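The behavior shown in Figs. \[Fig1\] and \[Fig2\] can be reproduced by tracing characteristics; a minimal sketch with the parameters quoted above is given below ($v_W$ is set to unity here, which only shifts the profiles and does not affect the breaking time).

```python
# Sketch: wave steepening and breaking via the characteristics (eq:charact2) for
# the initial profile (eq:u0x0); parameter values follow the text, v_W = 1 assumed.
import numpy as np

L, k, a0, v_W = 4 * np.pi, 2.0, 1.0, 1.0
param = 0.125                     # the quoted value of 4*eps2^2*W^3 + 3*eps3*W^3

def u0(x0):
    return a0 * np.exp(-x0**2 / (2 * L**2)) * np.cos(k * x0)

x0 = np.linspace(-40.0, 40.0, 8000)
ubar0 = -2.0 * param * u0(x0)     # normalized amplitude, Eq. (eq:simple-baru)

# The wave breaks when neighbouring characteristics first cross, i.e. when
# x(x0, t) = x0 + (v_W + ubar0(x0)) * t stops being monotone in x0.
for t in (0.0, 1.0, 2.5, 4.0):
    x = x0 + (v_W + ubar0) * t
    print(f"t = {t}: characteristics crossed: {bool(np.any(np.diff(x) < 0))}")
# Eq. (eq:tbr1) predicts t_br = 1/(2*param*max|u0'|) ~ 2 for these parameters.
```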
![\[Fig1\] The function $u(x,t)$ obtained from Eq. (\[eq:implicite\]) for $u_{0}(x_{0})$ given by the expression (\[eq:u0x0\]). ](Fig1.pdf){width="50.00000%"}
![\[Fig2\] Characteristics of Eq. (\[eq:implicite\]) plotted in the $x, t$ plane for the same parameters as in Fig. (\[Fig1\]).](Fig2.pdf){width="43.00000%"}
We note that the singularity formed at the electromagnetic wave breaking corresponds to the rarefaction shock wave formation (the wave steepens and breaks in the backwards direction, as shown in Fig. 1), because the wave crest propagates with a speed lower than the propagation speed of the part of the pulse with lower amplitude.
When a wave approaches the wave breaking point, wave steepening is equivalent to the generation of harmonics of ever higher order. Because of this, the long-wavelength approximation used above becomes inapplicable, and the Heisenberg-Euler Lagrangian cannot be used to describe the wave evolution in the vicinity of the gradient catastrophe, i.e., at the shock wave front. According to the shock wave paradigm [@LL-HD], the basic properties of the shock wave (the relationships between the shock wave velocity and the average parameters of the medium before and after the shock wave front) can be found within the framework of the long-wavelength approximation if we consider the shock wave front region as a discontinuity.
Electromagnetic shock wave in vacuum
=====================================
The long-wavelength approximation breaks down when the frequencies of the interacting waves, $\omega_{\gamma}$ and $\Omega$, become high enough, i.e., when their product becomes of the order of or higher than $$\omega_{\gamma} \Omega>m_e^2 c^4/\hbar^2.
\label{Threshold}$$
At this photon energy level the photon–photon interaction can result in the creation of real electron–positron pairs during the Breit-Wheeler process [@BW], in the saturation of wave steepening, and in electromagnetic shock wave formation. Here, $\omega_{\gamma} $ and $\Omega$ are the frequencies of high energy photons and low frequency counter-propagating electromagnetic waves, respectively.
The electromagnetic shock wave separates two regions, I and II, where the function $u(x,t)$ takes the values $u_I$ and $u_{II}$, respectively. The shock wave front (it is an interface between regions I and II) moves with a velocity equal to $v_{sw}$, in other words, the shock wave front is localized at the position $x_{sw}=v_{sw}t$. Integrating Eq. (\[eq:simple-u-xt\]) over an infinitely small interval $(-\delta+x_{sw}, x_{sw}+\delta)$, where $\delta \to 0$, we obtain $$\{-v_{sw}u+v_W u -(4\epsilon_2^2+3\epsilon_3)W^3u^2\}_{x=x_{sw}}=0.
\label{eq:discont}$$ Here, $\{f\}_x=f(x+\delta)-f(x-\delta)$ at $\delta\to 0$ denotes the discontinuity of the function $f(x)$ at the point $x$. From Eq. (\[eq:discont\]), it follows that $$v_{sw}=v_W -(4\epsilon_2^2+3\epsilon_3)W^3 (u_I+u_{II}).
\label{eq:velshock}$$ Near the threshold $\omega \Omega\geq m_e^2 c^4/\hbar^2$, the electron–positron creation cross section equals [@BLP-QED; @PinG-G] $$\sigma_{e-p}=\pi r_e^2 \sqrt{\frac{\hbar^2 \omega \Omega}{m_e^2 c^4}-1}.
\label{eq:sigma-e-p}$$ The width of the shock wave front can be estimated to be of the order of the length $l_{sw} = 1/(n_{\gamma} \sigma_{e-p})$ over which a photon with energy of the order of $m_ec^2$ creates an electron–positron pair. The photon density $n_{\gamma}$ is related to the electromagnetic pulse energy ${\cal E}_{em}$ as $n_{\gamma}\approx{\cal E}_{em}/\hbar \omega \lambda^3 N_{em}$, where $N_{em}=l_{em}/\lambda$ is the electromagnetic pulse length divided by the wavelength. This yields $l_{sw} \approx \hbar \omega \lambda^3 N_{em}/\pi r_e^2 {\cal E}_{em}$. Since it is assumed that the photon energy is at the threshold of electron-positron pair creation, the shock wave front width should be of the order of the Compton wavelength, $\lambdabar_C=\hbar/m_ec$. This condition imposes a constraint from below on the electromagnetic pulse energy of ${\cal E}_{em}\geq m_ec^2(\lambda/r_e)^2$. For a 1 $\mu$m wavelength laser with $N_{em} = 10$, it requires ${\cal E}_{em}\geq 100$ kJ. If $\lambda=10^{-8}$ cm, which corresponds to an X-ray pulse of $10$ keV, we have ${\cal E}_{em}\geq 10^{-4}$ J.
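An order-of-magnitude check of the constraint ${\cal E}_{em}\geq m_ec^2(\lambda/r_e)^2$ for the two wavelengths quoted above is sketched below; the $10^{-8}$ cm case reproduces the quoted $10^{-4}$ J, and the 1 $\mu$m case gives about $10$ kJ, i.e. the quoted $\geq 100$ kJ once the pulse-length factor $N_{em}=10$ is included.

```python
# Sketch: evaluating E_em >= m_e c^2 (lambda / r_e)^2 for the two quoted wavelengths.
mec2_J = 8.187e-14        # electron rest energy in joules
r_e = 2.82e-13            # classical electron radius in cm

for lam_cm, label in [(1e-4, "1 um optical"), (1e-8, "10 keV X-ray")]:
    E_min = mec2_J * (lam_cm / r_e) ** 2
    print(f"{label}: E_em >= {E_min:.1e} J")
# -> ~1e4 J and ~1e-4 J, matching the estimates in the text up to the N_em factor.
```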
The electron-positron pairs created at the electromagnetic shock wave front, being accelerated by the electromagnetic wave, emit gamma-ray photons which lead to an electron–positron avalanche via the multi-photon Breit-Wheeler mechanism [@AIN-VIR], as discussed in Refs. [@BellKirk; @FAM] (see also the review article [@DiPizzaReview] and the literature cited therein). This process requires the dimensionless parameters $\chi_e\approx(E/E_S)\gamma_e$ and $\chi_{\gamma}\approx(E/E_S)(\hbar \omega_{\gamma}/m_ec^2)$ to be greater than one. At $\chi_{\gamma}>1$, the QED vacuum becomes a dispersive and dissipative medium [@NBN1969; @Erber1966]. The effects of electromagnetic wave dispersion originate formally from the higher-order derivatives in the corrections to the Heisenberg-Euler Lagrangian found in Refs. [@Mamayev1981; @gusyninI; @gusyninII], which describe the QED vacuum beyond the Heisenberg-Euler Lagrangian model. As noted in Refs. [@SS-2000; @RNN1998], including these higher derivatives in the description of nonlinear wave interaction in the QED vacuum can result in soliton formation. In [@RNN1998] it is noted that the interplay of nonlinearity and dispersion in the nonlinear QED vacuum can lead to dark soliton formation, which can be interpreted as a shock wave of the electromagnetic pulse envelope. Details on the dark soliton properties can be found in Refs. [@DS1; @DS2; @DS3] and the literature cited therein.
In our case, the dispersion can result in modulations of the electromagnetic field in the vicinity of the shock wave front. Whether the dispersive properties come into play well below the pair-production threshold (\[Threshold\]) depends not only on the wave amplitudes but also on the frequency of the interacting waves. For example, in the case of $10$ KeV X-ray radiation, the dispersive effects prevail at an intensity above $10^{26}$ $W/cm^2$. In any case, we do not expect this would change the main result of our paper because dissipation/dispersion determines the shock front structure.
With additional derivative terms in the Heisenberg–Euler Lagrangian (\[eq:Lagrangian\]), the theory also predicts the existence of bright spatial solitons [@SS-2000]. In the presence of dispersion, wave breaking is prevented and instead a first soliton is formed. This process continues until the initial pulse is completely split into a chain of solitons (for more details see Refs. [@GP; @SPN]). Such a scenario needs a huge peak intensity of $10^{33}$ W/cm$^2$, which can be decreased for experimental observation by making the size of the soliton large compared to the carrier wavelength. For $\lambda=10\,{\rm nm}$ the required peak intensity is $10^{25}$ W/cm$^2$ [@SS-2000].
Conclusion
===========
In conclusion, we presented and analyzed an analytical solution of the Heisenberg–Euler electrodynamics equations describing a finite amplitude electromagnetic wave counter-propagating to a crossed electromagnetic field. The solution belongs to the family of self-similar solutions corresponding to the Riemann wave. It describes wave steepening, the formation of an electromagnetic shock wave in the vacuum, and high order harmonic generation.
The singularity formed at the electromagnetic wave breaking has rarefaction shock wave character (the wave steepens and breaks in the backwards direction, as illustrated in Fig. 1), because the wave crest propagates with a speed lower than the propagation speed of the part of the pulse with lower amplitude.
In general, photon–photon scattering in a vacuum is governed by the dimensionless parameter $\alpha (I_{em}/I_S)$, as far as shock-like configuration formation, high order harmonics generation, and the electron-positron and gamma ray flash at the electromagnetic shock wave front are concerned. Observation of these phenomena in a high power laser or x-ray interaction with matter implies either high precision measurements, as in the experiments [@Baur; @ATLASScattering], or achieving an electromagnetic field amplitude approaching the critical QED field $E_S$. One of the ways of reaching these regimes is to increase the laser power. For example, observation of one scattered photon per day with a 1 Hz laser requires an intensity of the order of $8\times 10^{27}$ W/cm$^2$, i.e., several hundred kJ of laser energy. Another way of approaching the critical QED field limit is associated with the relativistic flying mirror concept [@Koga] (for relativistic flying mirror theory and experiments see Refs. [@RFM1; @RFM2; @RFM3; @RFM4]), where the light intensity can be increased during nonlinear laser-plasma interaction.
We thank Drs. T. Heinzl, R. Sauerbrey, N. N. Rosanov, J. Nejdl, T. Esirkepov, and J. Koga for productive discussions. The work is supported by the project: High Field Initiative (CZ$.02.1.01/0.0/0.0/15\_003/0000449$) under the European Regional Development Fund.
[99]{}
V. B. Berestetskii, E. M. Lifshitz, and L. P. Pitaevskii, [*Quantum Electrodynamics*]{} (Pergamon, New York, 1982).
G. Baur, K. Hencken, D. Trautmann, S. Sadovsky, and Y. Kharlov, [*Phys. Rep.*]{} [**364**]{}, 359 (2002).
ATLAS Collaboration, [*Nature Physics*]{} [**13**]{}, 852 (2017).
F. Karbstein, H. Gies, M. Reuter, M. Zepf, [*Phys. Rev. D*]{} [**92**]{}, 071301(R) (2015).
T. Inada, T. Yamazaki, T. Yamaji, Y. Seino, X. Fan, S. Kamioka, T. Namba, and S. Asai, [*Appl. Sci.*]{} [**7**]{}, 671 (2017).
G. A. Mourou, T. Tajima, and S. V. Bulanov, [*Rev. Mod. Phys.*]{} [**78**]{}, 309 (2006).
M. Marklund and P. K. Shukla, [*Rev. Mod. Phys.*]{} [**78**]{}, 591 (2006).
A. Di Piazza, C. M. [Müller]{}, K. Z. Hatsagortsyan, and C. H. Keitel, [*Rev. Mod. Phys.*]{} [**84**]{}, 1177 (2012)
D. Tommasini, A. Ferrando, and M. Seco, [*Phys. Rev. A*]{} [**77**]{}, 042101 (2008); A. Paredes, D. Novoa, and D. Tommasini, [*Phys. Rev. A*]{} [**90**]{}, 063803 (2014).
B. King and T. Heinzl, [*High Power Laser Science and Engineering*]{} [**4**]{}, 1 (2016).
Y. Monden and R. Kodama, [*Phys. Rev. Lett.*]{} [**107**]{}, 073602 (2011).
J. K. Koga, S. V. Bulanov, T. Zh. Esirkepov, A. S. Pirozkhov, M. Kando, and N. N. Rosanov, [*Phys. Rev. A*]{} [**86**]{}, 053823 (2012).
F. Karbstein and R. Shaisultanov, [*Phys. Rev. D*]{} [**91**]{}, 113002 (2015); H. Gies, F. Karbstein, C. Kohlfuerst, and N. Seegert, [*Phys. Rev. D*]{} [**97**]{}, 076002 (2018).
H.-P. Schlenvoigt, T. Heinzl, U. Schramm, T. E. Cowan, and R. Sauerbrey, [*Phys. Scr.*]{} [**91**]{}, 023010 (2016).
B. Shen, Z. Bu, J. Xu, T. Xu, L. Ji, R. Li, and Z. Xu, [*Plasma Phys. Control. Fusion*]{} [**60**]{}, 044002 (2018).
T. Heinzl, B. Liesfeld, K.-U. Amthor, H. Schwoerer, R. Sauerbrey, and A. Wipf, [*Optics Communications*]{} [**267**]{}, 318 (2006).
B. Shen, Z. Bu, J. Xu, T. Xu, L. Ji, R. Li, and Z. Xu, [*Plasma Phys. and Contr. Fusion*]{} [**4**]{}, 044002 (2018).
G. Mourou, Z. Chang, A. Maksimchuk, J. Nees, S. V. Bulanov, V. Yu. Bychenkov, T. Zh. Esirkepov, N. M. Naumova, F. Pegoraro, and H. Ruhl, [*Plasma Phys. Rep.*]{} [**28**]{}, 12 (2002).
Z. Bialynicka-Birula and I. Bialynicki-Birula, [*Phys. Rev. D*]{} [**2**]{}, 2341 (1970).
S. L. Adler, [*Ann. Phys.*]{} [**67**]{}, 599–647 (1971)
E. Brezin and C. Itzykson, [*Phys. Rev. D*]{} [**3**]{}, 618 (1971).
V. I. Ritus, [*Sov. Phys. JETP*]{} [**42**]{}, 774 (1975).
[J. I. Latorre]{}, [P. Pascual]{}, and [R. Tarrach]{}, [*Nucl. Phys. B*]{} [**437**]{}, 60–82 (1995).
[I. T. Drummond]{} and [S. J. Hathrell]{}, [*Phys. Rev. D*]{} [**22**]{}, 343 (1980).
[G. M. Shore]{}, [*Nucl. Phys. B*]{} [**460**]{}, 379–394 (1996).
[V. O. Papanyan]{} and [V. I. Ritus]{}, [*Sov. Phys. JETP*]{} [**34**]{}, 1195 (1972).
[G. Brodin]{}, [D. Eriksson]{}, and [M. Marklund]{}, [*Phys. Scr.*]{} [**209**]{} (2004).
[M. Marklund]{}, [G. Brodin]{}, and [L. Stenflo]{}, [*Phys. Rev. Lett.*]{} [**91**]{} (2003).
W. Dittrich and H. Gies, [*Probing the quantum vacuum. Perturbative effective action approach in quantum electrodynamics and its application*]{}, Springer Tracts Mod. Phys. [**166**]{}, 1 (2000).
A. Zee, [*Quantum Field Theory in a Nutshell*]{}, (Princeton University Press, 2010).
N. N. Rosanov, [*JETP*]{} [**76**]{}, 991 (1993).
[V. A. De Lorenci]{}, [R. Klippert]{}, [M. Novello]{}, and [J. M. Salim]{}, [*Phys. Lett. B*]{} [**482**]{}, 137–140 (2000).
A. Di Piazza, K. Z. Hatsagortsyan, and C. H. Keitel, [*Phys. Rev. D*]{} [**72**]{}, 085005 (2005).
A. M. Fedotov and N. B. Narozhny, [*Phys. Lett. A*]{} [**362**]{}, 1 (2007).
N. B. Narozhny and A. M. Fedotov, [*Laser Physics*]{} [**17**]{}, 350 (2007).
P. [Böhl]{}, B. King, and H. Ruhl, [*Phys. Rev. A*]{} [**92**]{}, 032115 (2015).
W. Heisenberg and H. Euler, [*Zeit. für Phys.*]{} [**98**]{}, 714 (1936).
L. D. Landau and E. M. Lifshitz, [*Electrodynamics of Continuous Media*]{} (Pergamon, Oxford, 1984).
M. Lutzky and J. S. Toll, [*Phys. Rev.*]{} [**113**]{}, 1649 (1959).
J. S. Heyl and L. Hernquist, [*Phys. Rev. D*]{} [**55**]{}, 2449 (1997).
G. B. Whitham, [*Linear and Nonlinear Waves*]{} ( Wiley, 1974).
L. D. Landau and E. M. Lifshitz, [*Fluid Mechanics*]{} (Pergamon, Oxford, 1997).
B. B. Kadomtsev, [*Cooperative Effects in Plasmas*]{}, in [*Reviews of Plasma Physics*]{}, Vol. [**22**]{}, edited by V. D. Shafranov (Springer, Boston, 2001).
S. V. Bulanov, N. M. Naumova, and F. Pegoraro, [*Phys. Plasmas*]{} [**1**]{}, 745 (1994).
U. Teubner and P. Gibbon, [*Rev. Mod. Phys.*]{} [**81**]{}, 445 (2009).
S. V. Bulanov, T. Zh. Esirkepov, M. Kando, A. S. Pirozhkov, and N. N. Rosanov, [*Physics Uspekhi*]{} [**56**]{}, 429 (2013).
G. Breit and J. A. Wheeler, [*Phys. Rev.*]{} [**46**]{}, 1087 (1934).
R. J. Gould and G. P. Schreder, [*Phys. Rev.*]{} [**155**]{}, 1404 (1967).
A. I. Nikishov and V. I. Ritus, [*Sov. Phys. Usp.*]{} [**13**]{}, 303 (1970).
A. R. Bell and J. G. Kirk, [*Phys. Rev. Lett.*]{} [**101**]{}, 200403 (2008).
A. M. Fedotov, N. B. Narozhnyi, G. Mourou, and G. Korn, [*Phys. Rev. Lett.*]{} [**105**]{}, 080402 (2010).
S. V. Bulanov, T. Esirkepov, and T. Tajima, [*Phys. Rev. Lett.*]{} [**91**]{}, 085001 (2003).
J. K. Koga, S. V. Bulanov, T. Zh. Esirkepov, M. Kando, S. S. Bulanov, and A. S. Pirozhkov, [*Plasma Phys. Control. Fusion*]{} [**60**]{}, 074007 (2018).
M. Kando, T. Esirkepov, J. K. Koga, A. S. Pirozhkov, and S. V. Bulanov, [*Quantum Beam Sci.*]{} [**2**]{}, 9 (2018).
N. B. Narozhnyi, [*Sov. Phys. JETP*]{} [**28**]{}, 2 (1969).
T. Erber, [*Rev. Mod. Phys.*]{} [**38**]{}, 4 (1966).
S. G. Mamaev, V. M. Mostepanenko, and M. I. Eides, [*Sov. J. Nucl. Phys.*]{} [**33**]{}, 569 (1981).
V. P. Gusynin and I. A. Shovkovy, [*Can. J. Phys.*]{} [**74**]{}, 282 (1996)
V. P. Gusynin and I. A. Shovkovy, [*J. Math. Phys.*]{} [**40**]{}, 5406 (1999)
A. V. Gurevich and L. P. Pitaevskii, [*JETP Lett.*]{} [**17**]{}, 193 (1973).
S. Novikov, S. V. Manakov, L. P. Pitaevskii, and V. E. Zakharov, [*Theory of solitons: the inverse scattering method*]{} (Springer Science, Business Media. 1984).
M. Soljaci[ć]{} and M. Segev, [*Phys. Rev. A*]{} [**62**]{}, 043817 (2000).
N. N. Rosanov, [*JETP*]{} [**86**]{}, 284 (1998).
Yu. S. Kivshar and B. Luther-Davies, [*Phys. Reports*]{} [**298**]{}, 81 (1998).
D. Farina and S. V. Bulanov, [*Phys. Rev. E*]{} [**64**]{}, 066401 (2001).
Yu. S. Kivshar and G. P. Agrawal, [*Optical Solitons: From Fibers to Photonic Crystals*]{} (Academic Press, 2003).
|
---
abstract: 'In this paper, quantizer design for weak-signal detection under arbitrary binary channel in generalized Gaussian noise is studied. Since the performances of the generalized likelihood ratio test (GLRT) and Rao test are asymptotically characterized by the noncentral chi-squared probability density function (PDF), the threshold design problem can be formulated as a noncentrality parameter maximization problem. The theoretical property of the noncentrality parameter with respect to the threshold is investigated, and the optimal threshold is shown to be found in polynomial time with appropriate numerical algorithm and proper initializations. In certain cases, the optimal threshold is proved to be zero. Finally, numerical experiments are conducted to substantiate the theoretical analysis.'
author:
- 'Guanyu Wang, Jiang Zhu and Zhiwei Xu'
title: 'Asymptotically Optimal One-Bit Quantizer Design for Weak-signal Detection in Generalized Gaussian Noise and Lossy Binary Communication Channel'
---
[**Keywords:**]{} Threshold optimization, weak-signal detection, quantization, generalized Gaussian noise
Introduction
============
Signal estimation and detection from quantized data has continued to attract attention over the past years [@Poor1; @Li1; @Ribeiro3; @Pan1; @Ciuonzo2; @Jiang1; @Jiang2; @Sani; @Jiang3; @Jiang4; @Farias1; @Farias2; @Ciuonzo1; @ZJF1; @ZJF2; @ZJF3]. In [@Poor1], a general result is developed and applied to obtain specific asymptotic expressions for the performance loss under uniform data quantization in several signal detection and estimation problems including minimum mean-squared error (MMSE) estimation, non-random point estimation, and binary signal detection. In [@Li1], a distributed adaptive quantization scheme is proposed for signal estimation, where individual sensor nodes dynamically adjust their quantizer thresholds based on earlier transmissions from other sensor nodes. In [@Ribeiro3], distributed parameter estimators based on binary observations, along with their error-variance performance, are derived in the case of an unknown noise probability density function (PDF). For the robust estimation of a location parameter, the noise benefits to maximum likelihood type estimators are investigated [@Pan1]. As a result, the analysis of stochastic resonance effects is extended for noise-enhanced signal and information processing. In [@Ciuonzo2], distributed detection of a non-cooperative target is tackled, and fusion rules are developed based on the locally-optimum detection framework. Recently, some variants of the classical signal estimation and detection model from quantized data have been studied. One is that the unquantized observations are corrupted by combined multiplicative and additive Gaussian noise [@Jiang1; @Jiang2; @Sani]. Another is the so-called unlabeled sensing problem, where the unknown order of the quantized measurements causes the entanglement of the desired parameter and a nuisance permutation matrix [@Jiang3; @Jiang4]. In [@Farias1; @Farias2], the authors investigate the estimation problem under generalized Gaussian noise (GGN) and reveal the properties of the Fisher information (FI). In addition, a systematic framework for composite hypothesis testing from independent Bernoulli samples is studied, and the comparison of detectors is made under one-sided and two-sided assumptions [@Ciuonzo1].
The threshold of the quantizer can be designed to improve the performance of estimation and detection [@Kassam1; @Warren1; @Willett3; @Chen1; @Venkitasubramaniam1; @Junfang1; @Rousseau1; @Ciuonzo3; @Ciuonzo4; @Ciuonzo5]. In the early paper [@Kassam1], two useful detection criteria are proposed, leading to the MMSE between the quantized output and the locally optimum nonlinear transform for each data sample. Later in [@Warren1], the optimal quantized detection problem is considered for the Neyman-Pearson, Bayes, Ali-Silvey distance, and mutual (Shannon) information criteria, and it is shown that the optimal sensor decision rules quantize the likelihood ratio of the observations. In the design of quantized detection systems, the optimal test is shown to employ a nonrandomized rule under certain conditions, which considerably simplifies the design [@Willett3]. In [@Chen1], it is shown that given a particular constraint on the fusion rule, the optimal local decisions which minimize the error probability amount to a likelihood-ratio test (LRT). In addition, a design example with a binary symmetric channel (BSC) is given to illustrate the usefulness of the result in obtaining optimal threshold for local sensor observations. In [@Venkitasubramaniam1], the maximin asymptotic relative efficiency (ARE) criterion is proposed to optimize the thresholds, and the improvement of estimation performance is demonstrated in distributed systems. Utilizing the asymptotic performance of the one-bit generalized likelihood ratio test (GLRT) detector, the optimal threshold is proven to be zero under Gaussian noise and a BSC [@Junfang1]. The quantizer design is also analyzed under the GGN [@Rousseau1; @Ciuonzo3]. In [@Rousseau1], the problem is considered under the error-free channel, and the optimal threshold is only plotted without theoretical justification. The BSC is also included in the successional studies [@Ciuonzo3; @Ciuonzo4; @Ciuonzo5]. In [@Ciuonzo3], zero is shown to be the optimal threshold when the shape coefficient is less than or equal to two and a good (sub-optimal) choice when the shape coefficient is larger than two. Analogously, zero is employed as a good choice in the generalized Rao test [@Ciuonzo4]. For generalized locally optimum detectors, the threshold optimization is re-formulated as a maximization problem in terms of the local false-alarm probability, which can be easily evaluated via one-dimension numerical search [@Ciuonzo5].
Related Work and Main Contributions
-----------------------------------
The most related work to ours is [@Junfang1; @Rousseau1; @Ciuonzo3]. Compared to [@Junfang1], which focuses on Gaussian noise only in the BSC setting, we study the threshold optimization problem under GGN and an arbitrary binary channel. In [@Rousseau1], the authors present the optimal threshold without theoretical proof, and they do not take the binary channel into account. In [@Ciuonzo3], it is stated that choosing a zero threshold is a suboptimal choice (not too bad). In this paper, we extend their work to more general settings. It should be noticed that, compared to the widely used Gaussian noise assumption, the GGN assumption is usually made for infrequent but high level events, e.g., extremely low frequency electromagnetic noise due to thunderstorms or under-ice acoustic noise due to iceberg breakup. In these events, the GGN assumption models the noise spikes more accurately than the Gaussian one and thus leads to better detection performance [@Kay2 p.381].
The main contribution of this paper is to address the threshold design problem under GGN and an arbitrary binary channel. Under the weak-signal assumption, the thresholds can be optimized via maximizing the noncentrality parameter. Unfortunately, it is difficult to prove the theoretical properties of the noncentrality parameter function with respect to the threshold directly. We propose a simplified function whose sign is the same as that of the first derivative of the noncentrality parameter function. Consequently, we rigorously prove the theoretical properties of the noncentrality parameter function with respect to the threshold indirectly. Then we prove that, for an arbitrary binary channel, the optimal threshold can be found in polynomial time via an appropriate numerical algorithm with proper initializations. In certain cases, we prove the optimal threshold to be zero.
Organization
------------
The paper is organized as follows. In section \[section\_problem\], the weak-signal detection problem is described, and preliminary materials including both maximum likelihood (ML) estimation and parameter tests are introduced. Section \[Q\_design\] states the main results of the quantizer design. In addition, an algorithm to calculate the optimal threshold is proposed. In section \[simulation\], numerical experiments are conducted to substantiate the theoretical analysis. The conclusions are presented in section \[con\]. Finally, the related functions and the proof of propositions are presented in \[appendix\].
Problem Setup {#section_problem}
=============
In this section, the weak-signal detection problem from binary samples is described. In addition, the ML estimation, GLRT and Rao test are presented.
Consider a binary hypothesis testing problem, in which $N$ distributed sensors in a wireless sensor network (WSN) are utilized $K$ times to generate noisy observations. Those observations are quantized with different thresholds, and then used to detect the presence of an unknown deterministic weak signal with amplitude $\theta$. The quantized samples under both hypotheses are $$\label{hypothesis_testing}
\begin{aligned}
\begin{cases}
&{\mathcal H}_0:b_{ij}={\rm 1}\{w_{ij}\geq \tau_{ij}\},\\
&{\mathcal H}_1:b_{ij}={\rm 1}\{h_{ij}{\theta}+w_{ij}\geq \tau_{ij}\},
\end{cases}
\end{aligned}$$ where $i=1,\cdots,N$ denotes the sensor number, $j=1,\cdots,K$ denotes the observation time, $h_{ij}$ is a spatial-temporal signal, $w_{ij}$ is the independent and identically distributed (i.i.d) noise, $\tau_{ij}$ is the threshold of the $i$ th sensor at the $j$ th observation time, and ${\rm 1}\{\cdot\}$ is an indicator function which produces 1 if the argument is true and 0 otherwise. We assume that $\theta\in [-\Delta,\Delta]$ for technical reasons [@Papadopoulos1], where $\Delta$ is a known constant.
During transmission from the sensors to the fusion center (FC), the quantized bits $b_{ij}$ may be flipped, and $u_{ij}$ denotes the bit received at the FC [@Ozdemir1]. Let $(q_0,q_1)$ denote the flipping probabilities such that $$\label{binary_channel}
\begin{aligned}
&{\rm Pr}(u_{ij}=1|b_{ij}=0)=q_0,\\
&{\rm Pr}(u_{ij}=0|b_{ij}=1)=q_1,
\end{aligned}$$ which will be used to calculate the probability mass function of $u_{ij}$ later.
In this paper, we focus on the asymptotically optimal quantizer design in the case of arbitrary binary channel and the GGN $w$, whose cumulative distribution function (CDF) is $F(w)$ and PDF is $$\begin{aligned}
\label{noise_pdf}
f(w)=\frac{\alpha\beta}{2\Gamma(1/\beta)}e^{-(\alpha|w|)^\beta},\end{aligned}$$ where $\Gamma(\cdot)$, $\alpha^{-1}>0$ and $\beta>0$ denote the gamma function, the scale parameter and the shape parameter, respectively. Note that the GGN can describe several common PDFs such as the Laplace distribution ($\beta=1$), the Gaussian distribution ($\beta=2$) and the uniform distribution ($\beta\to\infty$).
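In numerical work, the GGN family (\[noise\_pdf\]) coincides with scipy's `gennorm` distribution once its scale is set to $1/\alpha$; a minimal sketch with illustrative values of $\alpha$ and $\beta$ is given below.

```python
# Sketch: the generalized Gaussian noise of Eq. (noise_pdf) via scipy's gennorm,
# whose pdf is beta/(2*Gamma(1/beta)) * exp(-|x|^beta) up to the scale 1/alpha.
import numpy as np
from scipy.stats import gennorm

alpha, beta = 1.0, 1.5                      # illustrative scale/shape parameters
noise = gennorm(beta, loc=0.0, scale=1.0 / alpha)

w = noise.rvs(size=(5, 100), random_state=0)   # i.i.d. noise samples w_ij
print(noise.pdf(0.0), noise.cdf(0.0))          # f(0) = alpha*beta/(2*Gamma(1/beta)), F(0) = 1/2
```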
Maximum Likelihood Estimation {#mle_pre}
-----------------------------
Under hypothesis $\mathcal H_1$, the PMF of $b_{ij}$ derived from (\[hypothesis\_testing\]) is $$\label{y_probability}
\begin{aligned}
&{\rm Pr}(b_{ij}=1|{\mathcal H_1})=F(h_{ij}\theta-\tau_{ij}),\\
&{\rm Pr}(b_{ij}=0|{\mathcal H_1})=1-F(h_{ij}\theta-\tau_{ij}),
\end{aligned}$$ where $F(\cdot)$ denotes the CDF of the GGN $w$. The binary data $b_{ij}$ are transmitted to the FC through the binary channel (\[binary\_channel\]). As a consequence, the PMF of $u_{ij}$ under hypothesis $\mathcal H_1$ can be formulated as $$\label{y_probability}
\begin{aligned}
{\rm Pr}(u_{ij}=1|{\mathcal H_1})&=q_0+(1-q_0-q_1)F\left({h_{ij}{\theta}-\tau_{ij}}\right)\triangleq p_{ij},\\
{\rm Pr}(u_{ij}=0|{\mathcal H_1})&=1-p_{ij}.\\
\end{aligned}$$ Let $\mathbf U$ be the matrix satisfying $[\mathbf U]_{ij}=u_{ij}$. The PMF of $\mathbf U$ under hypothesis $\mathcal H_1$ is $$\begin{aligned}
\label{LK}
p(\mathbf U;{\theta}|{\mathcal H_1})=\prod_{i=1}^{N} \prod_{j=1}^{K} {\rm Pr}(u_{ij}=1|{\mathcal H_1})^{u_{ij}} {\rm Pr}(u_{ij}=0|{\mathcal H_1})^{(1-u_{ij})},\end{aligned}$$ and the corresponding log-likelihood function $l(\mathbf U;{\theta})\triangleq l(\mathbf U;{\theta}|{\mathcal H_1})$ is $$\begin{aligned}
\label{log_likelihood_function}
l(\mathbf U;{\theta})=\sum_{i=1}^{N}\sum_{j=1}^{K} (u_{ij} \log p_{ij}+(1-u_{ij})\log(1-p_{ij})).\end{aligned}$$ Similarly, the log-likelihood under hypothesis $\mathcal H_0$ is $l(\mathbf U|{\mathcal H_0})=l(\mathbf U;0)$.
Parameter Tests
---------------
### GLRT
In the case of known $\theta$, the optimal detector according to the Neyman-Pearson (NP) criterion is the log-likelihood ratio test [@Kay1 p. 65, Theorem 3.1]. For unknown $\theta$, the GLRT is usually adopted. Although there is no optimality associated with the GLRT, it appears to work well in many scenarios of practical interest [@Kay2 p. 200]. The GLRT replaces the unknown parameter by its ML estimate and decides ${\mathcal H}_1$ if $$\label{GLRT_Detector}
\begin{aligned}
T_G(\mathbf U)=\underset{\theta\in[-\Delta,\Delta]} {\operatorname{max}}~ l({\mathbf U};{\theta})-{ l({\mathbf U};0)}>\gamma,
\end{aligned}$$ where $\gamma$ is a threshold determined by the given false alarm probability $P_{FA}$.
### Rao Test
Since the Rao test does not require an ML estimate evaluation, it is easier to compute in practice [@Kay2 p. 187]. The Rao test decides ${\mathcal H}_1$ if $$\label{GLRT_Detector}
\begin{aligned}
T_R(\mathbf U)=\left(\frac{\partial l({\mathbf U};\theta)}{\partial\theta}\bigg|_{\theta=0}\right)^2 I^{-1}(0)>\gamma,
\end{aligned}$$ where $I(0)$ is the FI $I(\theta)$ evaluated at $\theta=0$, and the concrete expression of $I(\theta)$ is presented later in equation (\[FI\]).
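As a concrete illustration of the two statistics, a minimal sketch is given below; the GGN parameters, the channel probabilities $(q_0,q_1)$, the bound $\Delta$ and the inputs $\mathbf U$, $h_{ij}$, $\tau_{ij}$ are all placeholders, and $I(0)$ is evaluated from the expression (\[FI0\]) given later.

```python
# Sketch: GLRT and Rao statistics for the quantized-observation model above.
# q0, q1, alpha, beta, Delta and the inputs U, h, tau are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gennorm

alpha, beta, q0, q1, Delta = 1.0, 1.5, 0.05, 0.10, 1.0
noise = gennorm(beta, scale=1.0 / alpha)        # GGN with PDF f and CDF F

def loglik(theta, U, h, tau):
    """Log-likelihood l(U; theta) of the flipped one-bit observations."""
    p = q0 + (1 - q0 - q1) * noise.cdf(h * theta - tau)   # Pr(u_ij = 1)
    return np.sum(U * np.log(p) + (1 - U) * np.log1p(-p))

def glrt_statistic(U, h, tau):
    """T_G(U): maximize l over theta in [-Delta, Delta] and subtract l(U; 0)."""
    res = minimize_scalar(lambda t: -loglik(t, U, h, tau),
                          bounds=(-Delta, Delta), method="bounded")
    return -res.fun - loglik(0.0, U, h, tau)

def rao_statistic(U, h, tau):
    """T_R(U): squared score at theta = 0 divided by the Fisher information I(0)."""
    p0 = q0 + (1 - q0 - q1) * noise.cdf(-tau)
    score = np.sum((U / p0 - (1 - U) / (1 - p0)) * (1 - q0 - q1) * h * noise.pdf(-tau))
    I0 = (1 - q0 - q1) ** 2 * np.sum(h**2 * noise.pdf(-tau) ** 2 / (p0 * (1 - p0)))
    return score**2 / I0
```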
Quantizer Design {#Q_design}
=================
In this section, the threshold optimization problem of maximizing the noncentrality parameter is formulated, and the theoretical property of the noncentrality parameter function is revealed. In addition, a gradient descent algorithm is proposed to find the optimal thresholds.
The detection performance of the GLRT $T_G(\mathbf U)$ or the Rao test $T_R(\mathbf U)$ is difficult to analyze exactly. Fortunately, an asymptotic approximation can be utilized: as $NK\to\infty$, the asymptotic distributions of $2T_G(\mathbf U)$ and $2T_R(\mathbf U)$ are [@Kay2 pp. 188-189] $$\begin{aligned}
\label{Chi_Square}
2T_G(\mathbf U),2T_R(\mathbf U) \sim
\begin{cases}
&{\mathcal H}_0: \quad \chi_1^2 \\
&{\mathcal H}_1:\quad \chi_1'^{2}(\lambda_Q),
\end{cases}\end{aligned}$$ where $\chi_n^2$ denotes a central chi-squared PDF with $n$ degrees of freedom, and $\chi_n'^{2}(\lambda_Q)$ denotes a noncentral chi-squared PDF with $n$ degrees of freedom and noncentrality parameter $\lambda_Q$. In our problem, $\lambda_Q$ is $$\label{lambda_Q}
\begin{aligned}
\lambda_Q = {\theta}^2 I(\theta),
\end{aligned}$$ where $I(\theta)$ denotes the FI [@Junfang1]. The FI $I(\theta)$ is the expectation of the second derivative of the negative log-likelihood function $l(\mathbf U;{\theta})$ (\[log\_likelihood\_function\]) taken w.r.t. $\theta$, i.e., $$\begin{aligned}
&I({\theta})=(q_0+q_1-1)\sum_{i=1}^{N}\sum_{j=1}^{K}h_{ij}\Bigg[{\rm E}_{\mathbf U}\left(\frac{\partial}{\partial \theta} \left(\frac{u_{ij}}{p_{ij}}-\frac{1-u_{ij}}{1-p_{ij}}\right)\right)\notag\\
&\times f\left(h_{ij}\theta-\tau_{ij}\right)+\frac{\partial}{\partial \theta}f\left({h_{ij}\theta-\tau_{ij}}\right) {\rm E}_{\mathbf U}\left(\frac{u_{ij}}{p_{ij}}-\frac{1-u_{ij}}{1-p_{ij}}\right)\Bigg]\notag\\
&={(1-q_0-q_1)^2}\sum_{i=1}^{N}\sum_{j=1}^{K}\frac{ h_{ij}^2 f^2\left({h_{ij}{\theta}-\tau_{ij}}\right)} {p_{ij}(1-p_{ij})},\label{FI}\end{aligned}$$ where (\[FI\]) follows due to ${\rm E}_{\mathbf U}[u_{ij}/p_{ij}-(1-u_{ij})/(1-p_{ij})]=0$ and the PMF of $\mathbf U$ (\[y\_probability\]). Under the weak-signal assumption, the unknown scaling $\theta$ takes values near $0$ (actually, we assume that $|\theta|=c/\sqrt{NK}$ for some constant $c>0$), and we have $$\begin{aligned}
\label{lambda_app}
\lambda_Q \approx{\theta}^2 I(0)\end{aligned}$$ as $NK\to\infty$ [@Kay2 p. 232]. From (\[FI\]), we have $$\begin{aligned}
I(0)=(1-q_0-q_1)^2\sum_{i=1}^{N} \sum_{j=1}^{K}h_{ij}^2 G(-\tau_{ij}),\label{FI0}\end{aligned}$$ where $$\label{Gx}
\begin{aligned}
G(x)\triangleq G(x,q_0,q_1)=\frac{f^2(x)}{\frac{1}{4}-\left[(1-q_0-q_1)F(x)-\frac{1}{2}+q_0\right]^2}.
\end{aligned}$$ Asymptotically, the noncentrality parameter $\lambda_Q$ determines the detection performance [@Junfang1]. Therefore, maximizing the noncentrality parameter $\lambda_Q$ (\[lambda\_app\]) with respect to $\tau_{ij}$ can be decomposed into a set of independent quantization threshold design problems $$\begin{aligned}
\label{optmal_tau}
\tau_{ij}^*=\underset{\tau_{ij}}{\operatorname{argmax}}~ h_{ij}^2G(-\tau_{ij})=\underset{\tau}{\operatorname{argmax}}~ G(-\tau)\triangleq\tau^*.\end{aligned}$$ Equation (\[optmal\_tau\]) demonstrates that the asymptotically optimal weak-signal detection performance can be achieved by utilizing the identical optimal thresholds $\tau^*$, irrespective of the shape of the spatial-temporal signal, which is also shown in [@Junfang1]. The optimal threshold $\tau^*$ can be found via solving the problem $$\begin{aligned}
\label{optmal_x}
x^*=\underset{x}{\operatorname{argmax}}~ G(x),\end{aligned}$$ and $\tau^*=-x^*$.
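To make these quantities concrete, the short Python sketch below evaluates $I(0)$ via (\[FI0\]), the noncentrality parameter (\[lambda\_app\]) and the corresponding asymptotic detection probability implied by (\[Chi\_Square\]) for a prescribed false alarm rate. It is only an illustration under stated assumptions, not the authors' code: the GGN pdf is taken in the standard form $f(x)=\frac{\alpha\beta}{2\Gamma(1/\beta)}\exp[-(\alpha|x|)^{\beta}]$ with CDF expressed through the regularized incomplete gamma function, and all function names and the chosen $P_{FA}$ value are illustrative.

```python
import numpy as np
from scipy.special import gamma, gammainc
from scipy.stats import chi2, ncx2

def ggn_pdf(x, alpha, beta):
    # generalized Gaussian pdf: f(x) = alpha*beta/(2*Gamma(1/beta)) * exp(-(alpha|x|)^beta)
    return alpha * beta / (2.0 * gamma(1.0 / beta)) * np.exp(-(alpha * np.abs(x)) ** beta)

def ggn_cdf(x, alpha, beta):
    # CDF of the GGN via the regularized lower incomplete gamma function
    return 0.5 + 0.5 * np.sign(x) * gammainc(1.0 / beta, (alpha * np.abs(x)) ** beta)

def G(x, q0, q1, alpha, beta):
    # G(x) of (Gx)
    F = ggn_cdf(x, alpha, beta)
    return ggn_pdf(x, alpha, beta) ** 2 / (0.25 - ((1 - q0 - q1) * F - 0.5 + q0) ** 2)

def asymptotic_pd(theta, h, tau, q0, q1, alpha, beta, pfa):
    # Fisher information at theta = 0, (FI0), and noncentrality parameter, (lambda_app)
    I0 = (1 - q0 - q1) ** 2 * np.sum(h ** 2 * G(-tau, q0, q1, alpha, beta))
    lam = theta ** 2 * I0
    thr = chi2.isf(pfa, df=1)            # threshold on 2*T setting the desired P_FA under H0
    return ncx2.sf(thr, df=1, nc=lam)    # asymptotic P_D under H1, per (Chi_Square)

# example with the parameters of the second experiment: N=2000, K=1, h_ij=1, zero thresholds
h = np.ones(2000)
tau = np.zeros(2000)
print(asymptotic_pd(0.0661, h, tau, q0=0.7, q1=0.0, alpha=1.0, beta=2.0, pfa=0.1))
```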
However, $G(x)$ can also be regarded as a function of the parameters $q_0$, $q_1$ and $\beta$. Varying these parameters may result in a different optimal value $x^*$ and, more intuitively, a different shape of $G(x)$ [@Ciuonzo1 Fig.1]. To investigate the theoretical properties of $G(x)$, we partition the parameter values and discuss the cases separately. First, the binary asymmetric channel case, which corresponds to $q_0\not=q_1$, is considered. In this setup, the monotonicity or quasiconcavity of $G(x)$ is studied under $0<\beta\leq 1$, $1<\beta\leq 2$ and $\beta>2$, respectively. Second, the deduction in the binary asymmetric channel case is extended to the simple BSC case corresponding to $q_0=q_1$. Finally, combining both cases, we propose a numerical algorithm to efficiently calculate the optimal value $x^*$ (\[Gx\]) for an arbitrary binary channel.
Binary Asymmetric Channel {#BAC}
-------------------------
In this subsection, we focus on the binary asymmetric channel case, i.e., $q_0\not=q_1$. By inspecting the formula (\[Gx\]), we realize that swapping the values of $q_0$ and $q_1$ while simultaneously flipping the sign of $x$ does not change the value of $G(x)$, namely, $q_0$ and $q_1$ play symmetric roles in $G$. Heuristically, $G(x)$ therefore possesses certain “symmetry” properties, from which Proposition \[prop\_1\] is derived.
\[prop\_1\] The maximum of $G(x)$ for arbitrary $q_0\not=q_1$ can be found by solving problem (\[optmal\_x\]) under the restrictions $q_0>q_1$ and $1-q_0-q_1>0$, in which case the optimal point satisfies $x^*\geq 0$.
The proof is postponed to \[prop1\].
According to Proposition \[prop\_1\], the problem (\[optmal\_x\]) can be reduced without loss of generality. Hereinafter, we only have to investigate the properties of $G(x)$ under $q_0>q_1$, $1-q_0-q_1>0$ and $x>0$. Since the monotonicity of a function is governed by its derivative, we first provide the derivative of $G(x)$ as $$\begin{aligned}
\label{G_derivative}
G'(x)=\frac{f^3(x)\left[F(x)+\frac{2q_0-1}{2(1-q_0-q_1)}+m_1(x)+m_2(x)\right]M(x)}{(1-q_0-q_1)^2m_1(x)\left[\frac{1}{4(1-q_0-q_1)^2}-\left(F(x)+\frac{2q_0-1}{2(1-q_0-q_1)}\right)^2\right]^2},\end{aligned}$$ where
$$\begin{aligned}
&M(x)=F(x)+\frac{2q_0-1}{2(1-q_0-q_1)}+m_1(x)-m_2(x),\label{M_def}\\
&m_1(x)=\frac{f(x)}{2\alpha^{\beta}\beta x^{\beta-1}},\label{rdef}\\
&m_2(x)=\sqrt{\frac{1}{4(1-q_0-q_1)^2}+m_1^2(x)}.\label{sdef}\end{aligned}$$
Please notice that $m_1(x)$ and $m_2(x)$ are introduced for compact representation and will be used repeatedly in the following deduction. It is obvious that $m_2(x)>m_1(x)>0$ and $F(x)+\frac{2q_0-1}{2(1-q_0-q_1)}>\frac{q_0-q_1}{2(1-q_0-q_1)}>0$ (due to $x > 0$ and $F(x)>\frac{1}{2}$). Then all the components on the right side of (\[G\_derivative\]) are positive except for $M(x)$. In other words, the sign of $G'(x)$ is the same as that of $M(x)$. In the following, we deduce the properties of $G(x)$, more explicitly its monotonicity or quasiconcavity, from $M(x)$ and the derivatives of $M(x)$.
### $0<\beta\leq1$ {#beta_01}
In this setup, we prove that $G(x)$ is monotonically decreasing in $(0,+\infty)$ which is equivalent to $M(x)<0$ for $x>0$. Here we present the derivative of $M(x)$ as $$\begin{aligned}
\label{M_derivative}
M'(x)&=f(x)\left[1+\left(1-\frac{m_1(x)}{m_2(x)}\right)m_3(x)\right],\end{aligned}$$ where $$\begin{aligned}
\label{cdef}
m_3(x)&=\frac{1-\beta}{2\alpha^{\beta}\beta}x^{-\beta}-\frac{1}{2}.\end{aligned}$$ From (\[rdef\]), (\[sdef\]) and (\[cdef\]), one obtains that $0<\frac{m_1(x)}{m_2(x)}<1$ and $m_3(x)>-\frac{1}{2}$. Consequently, $M'(x)$ (\[M\_derivative\]) satisfies $$\begin{aligned}
\label{Mder}
M'(x)\geq f(x)\left[1-\frac{1}{2}\left(1-\frac{m_1(x)}{m_2(x)}\right)\right]>\frac{1}{2}f(x)>0.\end{aligned}$$ Combined with $\underset{x\to+\infty}{\lim} M(x)=\frac{-q_1}{(1-q_0-q_1)}\leq 0$, one concludes that $M(x)<\underset{x\to+\infty}{\lim} M(x)\leq 0$, and $G(x)$ is monotonically decreasing in $(0,+\infty)$. As a result, the maximum of $G(x)$ is obtained at $x^*=0$.
### $1<\beta\leq 2$ {#beta_12}
In this setup, we prove that $G(x)$ is quasiconcave and has only one stationary point in $(0,+\infty)$. First we introduce a point $x_1$ which is useful for the following analysis.
\[prop\_4\] For $\beta>1$, let $x_1=\frac{1}{\alpha}\left(\frac{\beta-1}{\beta}\right)^{\frac{1}{\beta}}$; then $M'(x_1)>0$.
The proof is postponed to \[prop4\].
Due to $\underset{x\to 0^+}{\lim}~m_1(x){x^{\beta-1}}=\frac{f(0)}{2\alpha^{\beta}\beta}$ and $$\begin{aligned}
\notag
1-\frac{m_1(x)}{m_2(x)}=\frac{1/(1-q_0-q_1)^2}{4 m_1(x)m_2^2(x)},\end{aligned}$$ one has $\underset{x\to 0^+}{\lim} M'(x)=-\infty$. Combined with $M'(x_1)>0$ from Proposition \[prop\_4\], one concludes that $M(x)$ has at least a stationary point $x_0\in(0,x_1)$ satisfying $M'(x_0)=0$. Utilizing $M'(x_0)=0$, from (\[M\_derivative\]) one has $$\begin{aligned}
\label{r_r1}
&\frac{m_1(x_0)}{m_2(x_0)}=1+\frac{1}{m_3(x_0)}>0.\end{aligned}$$
Then we calculate the second derivative of $M(x)$ as $$\begin{aligned}
\label{M_2}
M''(x)&=\frac{f'(x)M'(x)}{f(x)}+f(x)\left(1-\frac{m_1(x)}{m_2(x)}\right)\times\notag\\
&\left[-\frac{\beta}{x}\left(m_3(x)+\frac{1}{2}\right)-\frac{f(x)m_3^2(x)}{m_2(x)}\left(1+\frac{m_1(x)}{m_2(x)}\right)\right],\end{aligned}$$ where $f'(x)=-\alpha^{\beta}\beta x^{\beta-1}f(x)<0$ is the derivative of $f(x)$ $(x>0)$, and (\[M\_2\]) can be derived from (\[M\_derivative\]) via the product rule $(uv)' = u'v +uv'$ and $m_1'(x)=f(x)m_3(x)$. Substituting (\[rdef\]), (\[r\_r1\]) and $M'(x_0)=0$ into (\[M\_2\]) yields $$\begin{aligned}
\label{M_2_x0}
&M''(x_0)=\frac{f(x_0)}{x_0}\left[\left(1+\frac{1}{m_3(x_0)}\right)(2-\beta)-\frac{\beta}{2m_3(x_0)}\right].\end{aligned}$$ Given $1<\beta\leq2$, from (\[cdef\]) one obtains $m_3(x)<-1/2$ for arbitrary $x\in(0,+\infty)$. Therefore, $M''(x_0)>0$ due to $m_3(x_0)<0$ and $1+{1}/{m_3(x_0)}>0$ (\[r\_r1\]). Here we state the uniqueness of the stationary point $x_0$ in the following proposition.
\[prop\_5\] Let $f(x)$ $(a<x<b)$ be a twice-differentiable univariate function such that every stationary point $x_0$ with $f'(x_0)=0$ satisfies $f''(x_0)>0$ ($f''(x_0)<0$). Then $f(x)$ is a quasiconvex (quasiconcave) function and the stationary point $x_0$ is unique.
The proof is postponed to \[prop5\].
According to Proposition \[prop\_5\], $M(x)$ is quasiconvex and $x_0$ is unique. Then $M(x)$ is increasing in $(x_0,+\infty)$ and $M(x)<\underset{x\to +\infty}\lim M(x)$ for $x>x_0$. Combined with $\underset{x\to +\infty}\lim M(x)=\frac{-q_1}{1-q_0-q_1}$, one has $M(x)<\frac{-q_1}{1-q_0-q_1}\leq 0$ for $x>x_0$. Due to $\underset{x\to 0^+}\lim M(x)=\frac{q_0-q_1}{2(1-q_0-q_1)}>0$, $M(x_0)<0$ and $M(x)$ is decreasing in $(0,x_0)$, one concludes that there exists a point $x^*$ such that $$\begin{aligned}
\label{xstar}
M(x^*)=0,\quad 0<x^*<x_0,\end{aligned}$$ and $M(x)>0$ in $(0,x^*)$ and $M(x)<0$ in $(x^*,x_0)$. Consequently, $G(x)$ is monotonically increasing in $(0,x^*)$ and monotonically decreasing in $(x^*,+\infty)$. $G(x)(x>0)$ is quasiconcave and achieves its maximum at the unique stationary point $x^*$.
### $\beta>2$ {#beta_2}
In this setup, we prove that $G(x)$ is quasiconcave and has only one stationary point in $(0,+\infty)$. For $\beta > 2$, revealing the quasiconcavity of $G(x)$ $(x > 0)$ is a little more difficult than in the case $1<\beta\leq 2$. Similar to Proposition \[prop\_4\], we introduce a point $x_2$ as below.
\[prop\_6\] Let $x_2=\frac{1}{\alpha}\left(\frac{\beta-2}{2\beta}\right)^{\frac{1}{\beta}}$ and $\beta>2$, one has $M'(x_2)<0$.
The proof is postponed to \[prop6\].
Due to $\underset{x\to 0^+}{\lim} M'(x)>0$, $M'(x_2)<0$, $M'(x_1)>0$ and $0<x_2<x_1$, we conclude that there exist at least two stationary points $x_0$ and $x'_0$ such that
\[x0\_x0\] $$\begin{aligned}
&M'(x_0)=0,\quad 0<x_0<x_2,\\
&M'(x'_0)=0,\quad x_2<x'_0<x_1.\end{aligned}$$
Substituting $m_3(x_0)<m_3(x_2)=\frac{2-\frac{3}{2}\beta}{\beta-2}<m_3(x'_0)<-\frac{1}{2}$ into (\[M\_2\_x0\]) yields
$$\begin{aligned}
&M''(x_0)=\frac{f(x_0)}{x_0}\left[2-\beta+(2-\frac{3}{2}\beta)\frac{1}{m_3(x_0)}\right]\notag\\
&<\frac{f(x_0)}{x_0}\left[2-\beta+(2-\frac{3}{2}\beta)\frac{\beta-2}{2-\frac{3}{2}\beta}\right]=0,\\
&M''(x'_0)=\frac{f(x'_0)}{x'_0}\left[2-\beta+(2-\frac{3}{2}\beta)\frac{1}{m_3(x'_0)}\right]\notag\\
&>\frac{f(x'_0)}{x'_0}\left[2-\beta+(2-\frac{3}{2}\beta)\frac{\beta-2}{2-\frac{3}{2}\beta}\right]=0\end{aligned}$$
From Proposition \[prop\_5\], in $(0,x_2)$ $M(x)$ is quasiconcave and the stationary point $x_0$ is unique; in $(x_2,+\infty)$ $M(x)$ is quasiconvex and the stationary point $x'_0$ is unique. In addition, one can conclude that $M'(x)<0$ in $(x_0,x'_0)$, and $M'(x)>0$ in the remaining intervals. From $M'(x)>0$ in $(0,x_0)$, one has $M(x_0)>\underset{x\to 0^+}{\lim}~M(x)=\frac{q_0-q_1}{2(1-q_0-q_1)}>0$. From $M'(x)>0$ in $(x'_0,+\infty)$, one has $M(x'_0)<\underset{x\to +\infty}{\lim}~M(x)=\frac{-q_1}{1-q_0-q_1}\leq 0$ and $M(x)<0$ for $x>x'_0$. Combined with $M'(x)<0$ in $(x_0,x'_0)$, one concludes that there exists a unique point $x^*$ such that $$\begin{aligned}
\label{xstar_2}
M(x^*)=0,\quad x_0<x^*<x'_0.\end{aligned}$$ Therefore, $M(x)>0$ in $(0,x^*)$ and $M(x)<0$ in $(x^*,+\infty)$, and $G(x)$ is monotonically increasing in $(0,x^*)$ and monotonically decreasing in $(x^*,+\infty)$. $G(x)(x>0)$ is quasiconcave and achieves its maximum at the unique stationary point $x^*$.
BSC {#BSC}
---
In this subsection, the optimal threshold in the BSC is studied, which follows the results derived in the binary asymmetric channel case. Provided that $q_0=q_1=q$, from (\[x\_-x\]) it can be derived that $G(x)=G(-x)$ for arbitrary $x$. Therefore, the optimal threshold must be either zero or a pair of opposite numbers. Now we prove that the optimal threshold in the BSC is zero for $0<\beta\leq2$ and a pair of opposite numbers for $\beta>2$.
### $0<\beta\leq2$ {#bsc_02}
For $0<\beta\leq 1$, following the similar derivation in section \[beta\_01\], one concludes that $M'(x)>0$ (\[Mder\]), and $M(x)<0$ due to $\underset{x\to+\infty}{\lim} M(x)=\frac{-q}{1-2q}\leq 0$. For $1<\beta\leq 2$, following the similar derivation in section \[beta\_12\], one concludes that $M(x)$ has at least a stationary point $x_0\in(0,+\infty)$ satisfying $M'(x_0)=0$ and $M''(x_0)>0$. From Proposition \[prop\_5\], $M(x)$ is quasiconvex and $x_0$ is unique. Then $M(x)$ is decreasing in $(0,x_0)$ and increasing in $(x_0,+\infty)$. Due to $\underset{x\to +\infty}\lim M(x)=\frac{-q}{1-2q}\leq 0$ and $\underset{x\to 0^+}\lim M(x)=0$ (while in the setting $q_0>q_1$, $\underset{x\to 0^+}\lim M(x)=\frac{q_0-q_1}{2(1-q_0-q_1)}>0$), one has $M(x)<0$ for arbitrary $x\in(0,+\infty)$. Therefore, for $0<\beta\leq 2$, one concludes that $G(x)$ is decreasing in $x\in(0,+\infty)$ due to the same signs of $G'(x)$ and $M(x)$. Because of $G(x)=G(-x)$, $G(x)$ attains its maximum at zero.
### $\beta>2$ {#bsc_2}
For $\beta>2$, $G(x)(x>0)$ is quasiconcave and achieves its maximum at the unique stationary point $x^*$. The proof is similar to that in section \[beta\_2\] except that $\underset{x\to 0^+}{\lim}~M(x)=0$ and $\underset{x\to +\infty}{\lim}~M(x)=\frac{-q}{1-2q}\leq 0$.
Optimal Threshold Calculation {#optimal_beta_1}
-----------------------------
In this subsection, first an upper bound for the optimal value is given. Then, combining both the asymmetric and symmetric cases, we propose a numerical algorithm to efficiently calculate the optimal value $x^*$ (\[Gx\]) for an arbitrary binary channel.
### Upper Bound {#upper}
In the arbitrary binary channel setup, we prove that $1/\alpha$ is an upper bound for the optimal point $x^*$.
Under the binary asymmetric channel, for $0<\beta\leq 1$, the optimal threshold is zero as proved in section \[beta\_01\], which trivially satisfies the bound. For $1<\beta\leq 2$, it is proved in section \[beta\_12\] that $M(x)$ is quasiconvex and $x_0$ is the unique stationary point of $M(x)$. Hence we have $x_1>x_0$ from $M'(x_1)>M'(x_0)=0$. From (\[xstar\]) we know that $x^*<x_0$. Therefore, we have $$\begin{aligned}
x^*<x_0<x_1.\end{aligned}$$ For $\beta>2$, from (\[x0\_x0\]) and (\[xstar\_2\]) in section \[beta\_2\], we have $$\begin{aligned}
\label{x01alpha}
x^*<x'_0<x_1.\end{aligned}$$ Therefore, $x^*$ is upper bounded by $x_1$ for $\beta>1$. In addition, $x_1$ is an increasing function with respect to $\beta$ and attains its maximum at $x_1|_{\beta=+\infty}=1/\alpha$, which results in $$\begin{aligned}
\label{bound2}
x^*<1/\alpha.\end{aligned}$$
Under the BSC, for $0<\beta\leq 2$, the optimal threshold is zero as proved in section \[bsc\_02\], which trivially satisfies the bound. For $\beta>2$, similarly to the binary asymmetric channel case (\[x01alpha\]), one has $x^*<x'_0<x_1<1/\alpha$. As a result, $1/\alpha$ is an upper bound for the optimal threshold for an arbitrary binary channel.
### Numerical Algorithm
From sections \[BAC\] and \[BSC\], the optimal threshold is zero for $0<\beta\leq 1$ under the binary asymmetric channel and for $0<\beta\leq 2$ under the BSC. In other settings, the optimal threshold is non-zero. Utilizing the upper bound $1/\alpha$, we provide a numerical algorithm for efficient calculation of the non-zero optimal threshold, as shown in Algorithm \[algorithm\_only\] (a Python sketch of the procedure is given after the algorithm). Because the inequality constrained minimization problem $\underset{0<x<1/\alpha} {\operatorname{min}}~-G(x)$ has a unique stationary point, and first-order descent methods converge to a stationary point, a gradient descent algorithm is guaranteed to find the global optimum.
1. Initialize $k=0$ and $x_{k}\in(0,1/\alpha)$.
2. Set $\triangle x_k=G'(x_{k})$ (\[G\_derivative\]), i.e., the descent direction for $-G(x)$.
3. Choose a step size $t$ via backtracking line search, satisfying $G(x_k+t\triangle x_k)\geq G(x_k)+0.4\, t\, G'(x_k)\triangle x_k$ and $x_k+t\triangle x_k\in(0,1/\alpha)$.
4. Update $x_{k+1}=x_k+t\triangle x_k$.
5. Set $k=k+1$ and return to step 2 until the stopping criterion $|G'(x_k)|<10^{-5}\alpha^3$ is satisfied.
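A compact Python version of Algorithm \[algorithm\_only\] is sketched below. It is a hedged illustration rather than a reference implementation: it maximizes $G(x)$ over $(0,1/\alpha)$ by gradient ascent with backtracking (Armijo parameter $0.4$), uses a central finite difference in place of the closed-form derivative (\[G\_derivative\]), and assumes the same GGN normalization as in the earlier sketch; the helper functions are repeated so that the snippet runs on its own, and all names are illustrative.

```python
import numpy as np
from scipy.special import gamma, gammainc

def ggn_pdf(x, alpha, beta):
    return alpha * beta / (2.0 * gamma(1.0 / beta)) * np.exp(-(alpha * np.abs(x)) ** beta)

def ggn_cdf(x, alpha, beta):
    return 0.5 + 0.5 * np.sign(x) * gammainc(1.0 / beta, (alpha * np.abs(x)) ** beta)

def G(x, q0, q1, alpha, beta):
    F = ggn_cdf(x, alpha, beta)
    return ggn_pdf(x, alpha, beta) ** 2 / (0.25 - ((1 - q0 - q1) * F - 0.5 + q0) ** 2)

def optimal_x(q0, q1, alpha, beta, tol=1e-5, h=1e-6, max_iter=10000):
    """Gradient ascent on G over (0, 1/alpha) with backtracking line search."""
    x = 0.5 / alpha                                  # any starting point inside (0, 1/alpha)
    for _ in range(max_iter):
        # central finite-difference approximation of G'(x)
        g = (G(x + h, q0, q1, alpha, beta) - G(x - h, q0, q1, alpha, beta)) / (2.0 * h)
        if abs(g) < tol * alpha ** 3:                # stopping rule of step 5
            break
        t = 1.0
        while (not 0.0 < x + t * g < 1.0 / alpha or
               G(x + t * g, q0, q1, alpha, beta) < G(x, q0, q1, alpha, beta) + 0.4 * t * g * g):
            t *= 0.5                                 # backtracking with Armijo parameter 0.4
        x += t * g
    return x

# should land close to the tabulated x* = 0.7727 for beta = 4 and (q0, q1) = (0.7, 0)
print(optimal_x(0.7, 0.0, alpha=1.0, beta=4.0))
```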
Numerical Simulations {#simulation}
=====================
In section \[Q\_design\], it is proven that $x^*=0$ for $0<\beta\leq 1$, and that $G(x)(x>0)$ is quasiconcave for $\beta >1$ in the binary asymmetric channel case with $q_0>q_1$ and $1-q_0-q_1>0$. Utilizing the quasiconcavity, the numerical algorithm is applied to obtain the maximum of $G(x)$, and the effectiveness of the corresponding optimal threshold is verified via numerical simulations.
For the first experiment, we use gradient descent algorithm to find the optimal threshold normalized by the scale parameter $\alpha^{-1}$ of the GGN. The results are presented in Fig. \[optimum\_beta\].
![The relationship between the normalized $\alpha x^*$ and $\beta$ under different flipping probabilities $(q_0,q_1)$.[]{data-label="optimum_beta"}](optimum_beta.pdf){width="80mm"}
It shows that for $0<\beta\leq 1$, the optimal threshold is zero; for $1<\beta\leq 2$, the optimal threshold is zero under $q_0=q_1$, and non-zero under $q_0\not =q_1$; for $\beta> 2$, the optimal threshold is non-zero. In addition, for arbitrary flipping probabilities, the optimal threshold increases with $\beta$ and is upper bounded by $x_1=\frac{1}{\alpha}(1-\frac{1}{\beta})^{\frac{1}{\beta}}$.
For the second experiment, the effectiveness of quantizer thresholds design is verified. In TABLE \[x\_star\], $x^*$ under different $\beta$ is calculated by Algorithm \[algorithm\_only\]. The corresponding optimal threshold $\tau^*$ is $-x^*$. Parameters are set as follows: $\alpha=1$, $\theta = 0.0661$, $q_0=0.7$, $q_1=0$, $N=2000$, $K = 1$, $h_{ij}=1,~\forall~ i,j$, the number of Monte Carlo trials is $2000$. The receiver operating characteristic (ROC) curves, i.e., the detection probability $P_{\rm D}$ versus the false alarm probability $P_{\rm FA}$, are presented in Fig. \[Pd\_Pfa\]. We have noticed that the ROCs of the Rao test are similar to those of the GLRT. To present the results clearly, we do not plot the ROCs of the Rao test in this experiment.
  [$\beta$ ]{}   $1.5$      $2$        $4$        $8$
  -------------- ---------- ---------- ---------- ----------
  $x^*$          $0.1200$   $0.3682$   $0.7727$   $0.9130$

  : The values of $x^*$ under different $\beta$ with flipping probabilities $(q_0,q_1)=(0.7,0)$[]{data-label="x_star"}
![The ROC curve under $\beta>1$. The flipping probabilities are $q_0=0.7$ and $q_1=0$.[]{data-label="Pd_Pfa"}](Pd_Pfa.pdf){width="140mm"}
From TABLE \[x\_star\] and Fig. \[Pd\_Pfa\], one obtains that under certain flipping probabilities $q_0\not=q_1$ and $\beta>1$, the performance of GLRT is improved by using the optimal threshold. When $\beta$ is small, the gain of the quantizer design with respect to the zero-threshold is negligible because the optimal threshold is still close to zero. As $\beta$ increases, the detection performance of the designed quantizer improves significantly compared to that utilizing the zero-threshold.
For the third experiment, we detect a one-dimensional acoustic field under ship transit noise [@Hodgkiss]. Let $h_{ij}=\sin(kx_i-\omega t_j)$ denote the unit response of the acoustic field at position $x_i$ and time instant $t_j$, where $k$ is the wave number and $\omega$ is the angular frequency. In $25^{\circ}$C seawater (in which the sound speed is about $1500$ m/s), $50$ sensors are equispaced over $100$ m to test for the presence of a weak sound wave whose amplitude is $0.1$ Pa and frequency is $200$ Hz. For the sensors, the sampling frequency is $5000$ Hz, and the sampling time is $0.1$ s. Accordingly, parameters are set as follows: $\theta=0.05$, $\alpha=1$, $q_0=0.3$, $q_1=0$, $N=50$, $K=50$, $x_i=2i$, $t_j=j/500$, $k=400\pi /1500\approx0.8378$, $\omega=400\pi\approx1257$, and $h_{ij}=\sin(1.676i-2.514j)$. In addition, the GGN with $\beta=2.779$ represents the ship transit noise [@Banerjee]. The number of Monte Carlo trials is $10^3$, and the ROC curves are presented in Fig. \[wave\_vector\]. It can be seen that the ROCs of the Rao test are almost the same as those of the GLRT. Compared to using the suboptimal zero-threshold, utilizing the optimal threshold improves the performance of the GLRT and Rao detectors.
![The ROC curve for detecting the acoustic wave field under the ship transit noise environments.[]{data-label="wave_vector"}](wave_vector.pdf){width="80mm"}
Conclusion {#con}
==========
Provided that the noise obeys the generalized Gaussian distribution, it is shown that the optimal threshold depends critically on the value of the shape parameter $\beta$. For $0<\beta\leq 1$, the optimal threshold is zero in both binary symmetric and asymmetric channels. For $1<\beta\leq 2$, the optimal threshold is zero in the BSC, while it is non-zero and unique in the binary asymmetric channel. For $\beta>2$, in the BSC there exist two non-zero optimal thresholds which are opposite numbers, while in the binary asymmetric channel the optimal threshold is non-zero and unique. Next, for the cases of non-zero optimal thresholds, we prove that the problem of maximizing the noncentrality parameter can be solved efficiently via a numerical algorithm. Finally, the effectiveness of the optimal threshold is verified in numerical experiments, and the gain of using the designed threshold becomes larger as the shape parameter $\beta$ increases.
Acknowledgement {#ack}
===============
This work is supported by the Zhejiang Provincial Natural Science Foundation of China under grant No. LQ18F010001 and the Fundamental Research Funds for the Central Universities under Grant No. 2017QNA4042.
Proof of Propositions {#appendix}
======================
Proposition 1 {#prop1}
-------------
$\forall$ $\alpha>0$, $\beta>0$, $0\leq q_0 \leq1$ and $0\leq q_1 \leq1$, the equalities $$\begin{aligned}
\label{lemma_xab}
&G(x,q_0,q_1)=G(-x,q_1,q_0)=G(x,1-q_0,1-q_1)\notag\\
&=G(-x,1-q_1,1-q_0).\end{aligned}$$ hold, due to $f(x)=f(-x)$ and $F(x)+F(-x)=1$. Let $(q_0=q_a, q_1=q_b)$ satisfy $q_0>q_1$ and $1-q_0-q_1>0$, and $x^*$ denote the value of $x$ which attains the maximum of $G(x,q_0,q_1)$. According to (\[lemma\_xab\]), $G(x^*,q_a,q_b)=G(x,q_b,q_a)|_{x=-x^*}=G(x^*,1-q_a,1-q_b)=G(x,1-q_b,1-q_a)|_{x=-x^*}\geq G(x,q_a,q_b)=G(-x,q_b,q_a)=G(x,1-q_a,1-q_b)=G(-x,1-q_b,1-q_a)$. The maximums of $G(x,q_b,q_a)$, $G(x,1-q_a,1-q_b)$ and $G(x,1-q_b,1-q_a)$ are obtained at $-x^*$, $x^*$ and $-x^*$, corresponding to the cases that $q_0<q_1$ & $q_0+q_1<1$, $q_0<q_1$ & $1-q_0-q_1<0$ and $q_0>q_1$ & $1-q_0-q_1<0$, respectively. As a consequence, we conclude that the maximum of $G(x)$ in the case that $q_0<q_1$ or $1-q_0-q_1<0$ can be transformed into the case that $q_0>q_1$ and $1-q_0-q_1>0$. Given that $q_0>q_1$ and $1-q_0-q_1>0$, for $x>0$, we have $$\begin{aligned}
\label{x_-x}
&G(x)-G(-x)=\frac{1}{\frac{1}{4}-\left[(1-q_0-q_1)F(x)-\frac{1}{2}+q_1\right]^2}\notag\\
&\times\frac{2f^2(x)[F(x)-\frac{1}{2}](1-q_0-q_1)(q_0-q_1)}{\frac{1}{4}-\left[(1-q_0-q_1)F(x)-\frac{1}{2}+q_0\right]^2}.\end{aligned}$$ Utilizing $\frac{1}{2}<F(x)<1$, we have
$$\begin{aligned}
\frac{q_1-q_0}{2}<(1-q_0-q_1)F(x)-\frac{1}{2}+q_1<\frac{1-2q_0}{2},\notag\\
\frac{q_0-q_1}{2}<(1-q_0-q_1)F(x)-\frac{1}{2}+q_0<\frac{1-2q_1}{2},\notag\end{aligned}$$
which guarantee the inequalities
$$\begin{aligned}
\left|(1-q_0-q_1)F(x)-\frac{1}{2}+q_1\right|<\frac{1}{2},\notag\\
\left|(1-q_0-q_1)F(x)-\frac{1}{2}+q_0\right|<\frac{1}{2}.\notag\end{aligned}$$
Therefore, the denominators of both terms in (\[x\_-x\]) are positive and $G(x)-G(-x)>0$. Because $x = 0$ is also a feasible point of $G(x)$, the optimal point $x^*$ is either equal to zero or in the interval $(0,+\infty)$.
Proposition \[prop\_4\] {#prop4}
-----------------------
For $\beta>1$, from (\[cdef\]) we have $$\begin{aligned}
\label{cx1_1}
m_3(x_1)=-1.\end{aligned}$$ Substituting (\[cx1\_1\]) into (\[M\_derivative\]) yields $$\begin{aligned}
M'(x_1)=\frac{f(x_1)m_1(x_1)}{m_2(x_1)}>0.\end{aligned}$$
Proposition \[prop\_5\] {#prop5}
-----------------------
The proof refers to [@boyd p. 101]. From $f''(x_0)>0$ ($f''(x_0)<0$), we know that whenever the function $f'(x)$ crosses the value $0$, it is strictly increasing (decreasing). Therefore $f'(x)$ can cross the value $0$ at most once. It follows that $f'(x)<0$ for $a<x<x_0$ and $f'(x)>0$ for $x_0<x<b$ ($f'(x)>0$ for $a<x<x_0$ and $f'(x)<0$ for $x_0<x<b$). This shows that $f(x)$ is quasiconvex (quasiconcave) and the stationary point $f'(x_0)=0$ is unique.
Proposition \[prop\_6\] {#prop6}
-----------------------
For $\beta>2$, from (\[M\_derivative\]) and $m_3(x_2)=-(3/2 +1/(\beta-2))<-3/2$, we have $$\begin{aligned}
\frac{M'(x_2)}{f(x_2)}\leq1+\left(1-\frac{m_1(x_2)}{\sqrt{\frac{1}{4}+m_1^2(x_2)}}\right)m_3(x_2),\end{aligned}$$ in which the condition for equality is $q_0=q_1=0$ or $1$. To prove that $M'(x_2)<0$ for arbitrary $(q_0,q_1)$ is equivalent to prove that $$\begin{aligned}
1+\left(1-\frac{m_1(x_2)}{\sqrt{\frac{1}{4}+m_1^2(x_2)}}\right)m_3(x_2)<0,\end{aligned}$$ which can be simplified as $$\begin{aligned}
\label{complex1}
\frac{1}{m_1^2(x_2)}>4\left[\left(\frac{1}{1+\frac{1}{m_3(x_2)}}\right)^2-1\right].\end{aligned}$$ Substituting (\[cdef\]) and (\[rdef\]) into (\[complex1\]) yields $$\begin{aligned}
\label{complex2}
\Gamma^2(1/\beta)>\left[e^{\frac{2-\beta}{2\beta}}\left(\frac{2\beta}{\beta-2}\right)^{\frac{\beta-1}{\beta}}\right]^2\frac{2(\beta-1)(\beta-2)}{\beta^2},\end{aligned}$$ whose logarithm is $$\begin{aligned}
\label{complex3}
2\ln\Gamma(1/\beta)&>\frac{2}{\beta}-1+\left(3-\frac{2}{\beta}\right)\ln2-\frac{2}{\beta}\ln\beta\notag\\
&+\left(\frac{2}{\beta}-1\right)\ln(\beta-2)+\ln(\beta-1).\end{aligned}$$ Let $t=1/\beta$, then $0<t<\frac{1}{2}$ due to $\beta>2$. Utilizing $\Gamma(x+1)=x\Gamma(x) (x>0)$, (\[complex3\]) can be transformed as $$\begin{aligned}
2\ln\Gamma(t+1)&>2\ln t+2t-1+\left(3-2t\right)\ln2+\notag\\
&(2t-1)\ln(1-2t)+\ln(1-t)\triangleq Q(t).\end{aligned}$$ According to [@nature], the minimum of $\Gamma(t+1)(0<t<1/2)$ is obtained at $t=0.461$. Now, we prove that $Q(t)<2\ln\Gamma(1.461)= -0.2430$ for $0<t<1/2$. The first and second order derivatives of $Q(t)$ are
\[complex5\] $$\begin{aligned}
Q'(t)&=4-2\ln2+\frac{2}{t}+\frac{1}{t-1}+2\ln(1-2t),\\
Q''(t)&=-\frac{2}{t^2}-\frac{4}{1-2t}-\frac{1}{(1-t)^2}.\end{aligned}$$
Given $0<t<1/2$, $Q''(t)<0$ and $Q(t)$ is concave. We use the *MATLAB* *fminunc* function and obtain the maximum of $Q(t)$, achieved at $t=0.4609$ (very near the optimal point $t=0.461$ of $\Gamma(t+1)(0<t<1/2)$). Since $Q(t)\leq Q(0.4609)=-0.60542<-0.2430=2\ln\Gamma(1.461)\leq 2\ln\Gamma(t+1)$, the proposition is proved.
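The same maximization can be reproduced without MATLAB. The snippet below is an independent numerical check (not part of the original derivation) that evaluates $Q(t)$ on $(0,1/2)$ with SciPy's bounded scalar minimizer in place of *fminunc* and compares the maximum against $2\ln\Gamma(1.461)$.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def Q(t):
    # Q(t) as obtained from (complex3) after the substitution t = 1/beta
    return (2*np.log(t) + 2*t - 1 + (3 - 2*t)*np.log(2)
            + (2*t - 1)*np.log(1 - 2*t) + np.log(1 - t))

res = minimize_scalar(lambda t: -Q(t), bounds=(1e-9, 0.5 - 1e-9), method='bounded')
print(res.x, Q(res.x))            # maximizer ~0.4609 with Q ~ -0.605
print(2 * gammaln(1.0 + 0.461))   # ~ -0.243, the lower bound on 2*ln(Gamma(t+1)) used above
```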
[1]{} H.V. Poor, Fine quantization in signal detection and estimation, [IEEE Trans. Inf. Theory]{} 34 (5) (1988) 960-972. H. Li and J. Fang, Distributed adaptive quantization and estimation for wireless sensor networks, [IEEE Signal Process. Lett.]{}, 14 (10) (2007) 669-672. A. Ribeiro and G.B. Giannakis, Non-parametric distributed quantization-estimation using wireless sensor networks, in [Proc. Int. Conf. Acoust., Speech, Signal Process.]{} 4 (2005) 61-64. Y. Pan, F. Duan, F. Chapeau-Blondeau and D. Abbott, Noise enhancement in robust estimation of location, [IEEE Trans. Signal Process.]{} 66 (8) (2018) 1953-1966. D. Ciuonzo and P. Salvo Rossi, Distributed detection of a non-cooperative target via generalized locally-optimum approaches, [Information Fusion]{} 36 (2017) 261-274. J. Zhu, X. Lin, R.S. Blum and Y. Gu, Parameter estimation from quantized observations in multiplicative noise environments," [IEEE Trans. Signal Process.]{} 63 (15) (2015) 4037-4050. J. Zhu, X. Wang, X. Lin and Y. Gu, Maximum likelihood estimation from sign measurements with sensing matrix perturbation, [IEEE Trans. Signal Process.]{} 62 (15) (2014) 3741-3753. A. Sani and A. Vosoughi, On distributed linear estimation with observation model uncertainties, to appear in [IEEE Trans. Signal Process.]{}, also avaliable at https://arxiv.org/abs/1709.02040. J. Zhu, H. Cao, C. Song and Z. Xu, Parameter estimation via unlabeled sensing using distributed sensors, [IEEE Commun. Lett.]{} 21 (10) 2017 2130-2133. G. Wang, J. Zhu, R. S. Blum, P. Willett, S. Marano, V. Matta and P. Braca, Signal amplitude estimation and detection from unlabeled binary quantized samples, avaliable at https://arxiv.org/pdf/1706.01174.pdf. R.C. Farias, E. Moisan and J.M. Brossier, Optimal asymmetric binary quantization for estimation under symmetrically distributed noise, [IEEE Signal Process. Lett.]{} 21 (5) (2014) 523-526. R.C. Farias and J.M. Brossier, Scalar quantization for estimation: from an asymptotic design to a practical solution, [IEEE Trans. Signal Process.]{} 62 (11) (2013) 2860-2870. D. Ciuonzo, A.D. Maio and P. Salvo Rossi, A systematic framework for composite hypothesis testing of independent Bernoulli trials, [IEEE Signal Process. Lett.]{} 22 (9) (2015) 1249-1253. J. Zhang, R.S. Blum, X. Lu and D. Conus, Asymptotically optimum distributed estimation in the presence of attacks, [IEEE Trans. Signal Process.]{} 63 (5) (2015) 1086-1101. J. Zhang, R.S. Blum, L. Kaplan and X. Lu, Functional forms of optimum spoofing attacks for vector parameter estimation in quantized sensor networks, [IEEE Trans. Signal Process.]{} 65 (3) (2015) 705-720. J. Zhang, R.S. Blum, L. Kaplan and X. Lu, A fundamental limitation on maximum parameter dimension for accurate estimation using quantized data, available at http://arxiv.org/pdf/1605.07679.pdf. S. Kassam, Optimum quantization for signal detection, [IEEE Trans. Commun.]{} 25 (5) (1977) 479-484. D. Warren and P. Willett, Optimum quantization for detector fusion: some proofs, examples, and pathology, [J. Franklin Inst.]{} 336 (2) (1999) 323-359. P. Willett and D. Warren, The suboptimality of randomized tests in distributed and quantized detection systems, IEEE Trans. Inf. Theory 38 (2) (1992) 355-361. B. Chen and P. Willett, On the optimality of the likelihood-ratio test for local sensor decision rules in the presence of nonideal channels, [IEEE Trans. Inf. Theory]{} 51 (2) (2005) 693-699. P. Venkitasubramaniam, L. Tong and A. Swami, Quantization for maximin ARE in distributed estimation, [IEEE Trans. 
Signal Process.]{} 55 (7) (2007) 3596-3605. J. Fang, Y. Liu and H. Li, One-bit quantizer design for multisensor GLRT fusion, [IEEE Signal Process. Lett.]{} 20 (3) (2013) 257-260. D. Rousseau, G.V. Anand, and F. Chapeau-Blondeau, Nonlinear estimation from quantized signals: Quantizer optimization and stochastic resonance, [Proc. 3rd Int. Symp. Physics in Signal and Image Processing]{}, 2003, pp. 89-92. D. Ciuonzo, G. Papa, G. Romano, P. Salvo Rossi and P. Willett, One-Bit decentralized detection with a Rao test for multisensor fusion, [IEEE Signal Process. Lett.]{} 20 (9) (2013) 861-864. D. Ciuonzo, P. Salvo Rossi and P. Willett, Generalized Rao Test for Decentralized Detection of an Uncooperative Target, [IEEE Signal Process. Lett.]{} 24 (5) 2017 678-682. D. Ciuonzo and P. Salvo Rossi, Quantizer design for generalized locally-optimum detectors in wireless sensor networks, [IEEE Wireless Commun. Lett.]{} 7 (2) (2017) 162-165. H.C. Papadopoulos, G.W. Wornell and A.V. Oppenheim, Sequential signal encoding from noisy measurements using quantizers with dynamic bias control, [IEEE Trans. Inf. Theory,]{} 47 (3) (2001) 978-1002. S.M. Kay, [Fundamentals of Statistical Signal Processing, Volume II: Detection Theory]{}, Englewood Cliffs, NJ: Prentice Hall, 1993. O. Ozdemir and P.K. Varshney, Channel aware target location with quantized data in wireless sensor networks, [IEEE Trans. Signal Process.]{} 57 (2009) 1190-1202. S.M. Kay, [Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory]{}, Englewood Cliffs, NJ: Prentice Hall, 1993. W.S. Hodgkiss and V.C. Anderson, Detection of sinusoids in ocean acoustic background noise, [J. Acoust. Soc. Am.]{} 67 (1) (1980) 214-219. S. Banerjee and M. Agrawal, Underwater acoustic noise with generalized Gaussian statistics: Effects on error performance, in [Proc. of the IEEE Oceans]{}, 2013, pp. 1-8. S. Boyd and L. Vandenberghe, [Convex Optimization]{}, Cambridge University Press, 2004. W.E. Deming and C.G. Colcord, The minimum in the gamma function, [Nature]{} 135 (1935) 917.
|
---
abstract: 'Localized cavity resonances due to nanostructures at material surfaces can greatly enhance radiative heat transfer (RHT) between two closely placed bodies owing to stretching of cavity states in momentum space beyond light line. Based on such understanding, we numerically demonstrate the possibility of ultra-broadband super-Planckian RHT between two plates patterned with trapezoidal-shaped hyperbolic metamaterial (HMM) arrays. The phenomenon is rooted not only in HMM’s high effective index for creating sub-wavelength resonators, but also its extremely anisotropic isofrequency contour. The two properties enable one to create photonic bands with a high spectral density to populate a desired thermal radiation window. At sub-micron gap sizes between such two plates, the artificial continuum states extend outside light cone, tremendously increasing overall RHT. Our study reveals that structured HMM offers unprecedented potential in achieving a controllable super-Planckian radiative heat transfer for thermal management at nanoscale.'
author:
- Jin Dai
- Fei Ding
- 'Sergey I. Bozhevolnyi'
- Min Yan
title: 'Ultra-Broadband Super-Planckian Radiative Heat Transfer with Artificial Continuum Cavity States in Patterned Hyperbolic Metamaterial'
---
Near-field-mediated radiative heat transfer (RHT) exceeding the far-field blackbody limit [@polder] has attracted increased attention in recent years [@RevModPhys], not only because it unfolds an un-explored fundamental scientific field, but also because it holds technological importance towards nano-gap thermophotovoltaics, scanning near-field thermal microscopy, thermal logics etc. Commonly, materials supporting surface-guided waves at infrared frequency were investigated for achieving such phenomenon, since surface modes especially at frequency close to their resonances offer extra channels of energy transfer due to evanescent-wave coupling at small gaps. Surface-wave-bearing materials at infrared include polar dielectrics supporting surface phonon polaritons [@RHTplate1; @RHTplate2], doped silicon supporting surface plasmon polaritons [@dopedSi], and more recently grooved metal surfaces supporting the so-called spoof surface plasmon polaritons [@PRBrgrating; @jin1; @jin2; @jin3]. The presence of surface modes leads to spectrally enhanced quasi-monochromatic heat flux around their resonance frequencies. Contrary to the extensive attentions paid to surface modes, *localized resonant modes* were rarely mentioned for achieving near-field-enhanced RHT. Localized or cavity resonances at infrared or even optical frequencies can be readily created using today’s nanofabrication technologies. Spatial localization of such modes usually corresponds to enormous extension of the modes in momentum (wave vector $\mathbf{k}$) dimension. Such flat bands can potentially lie outside light cone (bounded by $k=\omega/c$) in frequency-momentum representation. Similar to surface-mode-based RHT scenario, presence of photonic states outside light cone implies near-field energy transfer between two bodies, which can potentially amount to super-Planckian RHT. A direct advantage of cavity-resonance-based RHT is that one can arrange nano-cavities with different resonant frequencies within unit cells of two plates to achieve enhanced RHT at multiple frequencies. With an exotic resonator design, as the current work will reveal, an ultra-broadband super-Planckian RHT can even be achieved.
Referring to Fig. \[fig1\], we consider RHT between two plates each consisting a gold substrate and an array of trapezoidal-shaped hyperbolic metamaterial (HMM). The HMM is formed by a multi-layer metal-dielectric stack. The two plates are separated by a vacuum gap $g$. The thicknesses of dielectric and metal are 95 and $20$ nm, respectively. Each HMM cavity contains 20 dielectric-metal pairs. The cross-section of a single cavity resembles a trapezoid with short base of $w_t=0.4~\mu$m, long base of $w_b=1.9~\mu$m, and height of $h=2.3~\mu$m. The period of the HMM arrays is fixed at $a=2.0~\mu$m. The relative permittivities of dielectric (Si) and metal (Au) are $\epsilon_{\mathrm{Si}}=11.7$ and $\epsilon_{\mathrm{Au}}(\omega)=1-\frac{\omega_p^2}{\omega(\omega+i\gamma)}$, in which $\omega_p=9$ eV, and $\gamma=35$ meV, respectively. Such structure can be fabricated with focused ion beam milling of deposited metal-dielectric multilayers [@Fei], or with shadow deposition of dielectric and metal layers [@Yang2012; @jay].
![(Color online). Schematic of the trapezoidal-shaped HMM plates. The cyan and orange layers denote dielectric and gold layers, respectively.[]{data-label="fig1"}](Fig1.eps){width="0.8\columnwidth"}
The HMM, when truncated, helps to create *subwavelength* electromagnetic cavities, while the trapezoidal geometry is responsible for producing such resonances over a broad wavelength range. To simplify our argument in this paragraph, we neglect the gold substrates[^1] and consider only $k_x$ wave-vector direction. A single HMM plate, when un-patterned, has an *indefinite* diagonal effective permittivity tensor, with negative $x$ and $y$ components and positive $z$ component. Such an anisotropic slab, for the given geometry, guides a set of $x$-propagating $p$-polarized modes [@Yan]. For frequency up to $300\times 10^{12}~\mathrm{rad/s}$ and even higher, the modes have *almost similar* linear dispersion curves, based on which one calculates the effective index ($n_\mathrm{eff}$) of the HMM as $\sim 3.77$. When laterally truncated, $p$-polarized wave bounces between two truncation facets; each HMM patch therefore is a 2D *high-index* resonator. High-index material is essential for creating small-dimension resonators, especially at the upper wavelength limit for broadband RHT, that fit into a grating period $a$. Note also that, for achieving super-Planckian RHT at a wavelength $\lambda_r$, $a$ needs to satisfy $a<\lambda_r/2$ in order for the resonance to cross light line in the first Brillouin zone of the plate’s mode spectrum. The lower wavelength limit $\lambda_\mathrm{min}$ of desired broadband RHT would set $a<\lambda_\mathrm{min}/2$. The most intriguing property of such a HMM resonator, as will be further clarified in Fig. \[fig3\], is the weak dependence of its resonant frequencies on mode orders (with nodal breaking along $z$), or equivalently the resonator’s thickness. This sets the fundamental difference between using HMM and using an isotropic dielectric material. A trapezoidal profile-patterned HMM can therefore be treated as a series of vertically-stacked thin HMM resonators of varying widths, and in turn varying resonant frequencies, for achieving broadband operation.
We mention that a single array of such HMM resonators was previously found to exhibit broadband absorption of far-field radiation [@cui]. Near-field properties with implications to RHT were insofar left unexplored. There were also studies of RHT based on un-patterned HMM plates [@MLhyper1; @MLhyper2; @MLhyper3; @PRLHMM; @Whyper; @Liu:2015:metasurfaces]; the results, as we will show later, can be radically different from RHT between patterned HMMs, principally due to lack of localized resonances. Here, using a rigorous full-wave scattering-matrix method [@PRBrgrating; @jin3], we calculate the RHT flux between two patterned HMM plates and numerically confirm ultra-broadband super-Planckian RHT at small gap sizes. In addition, we utilize a finite-element based eigen-mode solver [@cwes1; @cwes2] to reveal the modal properties of the double-plate structure and identify that cavity modes play critical roles in enhancing RHT.
The radiative heat flux between two 1D periodic arrays can be expressed by $$\begin{aligned}
\label{eq1}
q(T_1, T_2)=\frac{1}{2\pi}\int_0^\infty[\Theta(\omega,
T_1)-\Theta(\omega, T_2)] \Phi(\omega)\mathrm{d}\omega,\end{aligned}$$ where $\Theta(\omega, T)=\hbar\omega/\{\mathrm{exp}[\hbar\omega/(k_B T)] -1\}$ is the mean energy of a Planck oscillator at temperature $T$ and angular frequency $\omega$. $\Phi(\omega)$ is the integrated transmission factor $$\begin{aligned}
\label{eq2}
\Phi(\omega)=\frac{1}{4\pi^2}\sum\limits_{j=s,p}\int_{-\infty}^{+\infty}\int_{-\frac{\pi}{a}}^{+\frac{\pi}{a}}\mathcal{T}_j(\omega,k_x,k_y)\mathrm{d}
k_x\mathrm{d} k_y.\end{aligned}$$ $\mathcal{T}_j(\omega,k_x,k_y)$ is the transmission factor that describes the probability of a thermally excited photon transferring from one plate to the other, given polarization $s$ or $p$, and surface-parallel wavevector $\mathbf{k}_\parallel\equiv (k_x,k_y)$ at $\omega$.
![(Color online). Transmission-factor ($\mathcal{T}$) maps over frequency and $k_x$ ($k_y=0$) for the three plate configurations, together with selected eigen-mode dispersion curves and field distributions.[]{data-label="fig2"}](Fig2.eps){width="0.8\columnwidth"}
Figure \[fig2\] plots the transmission-factor distributions ($\mathcal{T}$ maps) over frequency and $k_x$, while $k_y$ is kept zero, for three types of plate configurations: unpatterned HMM plates \[panels \[fig2\](a) and (b)\], rectangular-profiled HMM plates \[\[fig2\](d) and (e)\], and trapezoidal-profiled HMM plates \[\[fig2\](g) and (h)\]. All configurations are mirror-symmetric. Note the $\mathcal{T}$ maps are shown only for $k_x$ direction, along which the mode spectra of truncated HMM structures exhibit marked difference against un-truncated scenario. The calculation is repeated for two gap sizes: 1000 and 50 nm. For comparison purpose, in some $\mathcal{T}$ maps we selectively superimpose dispersion curves obtained from eigen-mode calculations. In Fig. \[fig2\](a), when HMM is un-truncated ($g=1000$ nm), the $\mathcal{T}$ map shows a thin line of states just below light line. The states originate from a gap plasmon mode (GPM) confined mostly in the vacuum gap between the two HMM plates, as confirmed by the mode distribution \[mode I in panel (c), or c-I\] from eigen-mode analysis. Further below the GPM states, there are a set of modes (c-II, c-III, c-IV) guided mostly in the HMMs; the number of modes is decided by the number of metal-dielectric layer pairs constructing the HMMs [@Yan]. Only the bonding-type HMM-guided modes are shown. Owing to their strong confinements and the relatively large gap size, the fields in two HMM plates are hardly coupled, therefore having almost no contribution to the RHT process. Even when separation between the two plates is reduced to $g=50$nm, the contribution of the HMM-guided modes to RHT is trivial, as shown by the $\mathcal{T}$ map in Fig. \[fig2\](b), as well as its inset with a down-limited color scale. When HMM is truncated, mode structure in the two-plate system changes drastically. The GPM remains (f-I), but now with its field confined between the two gold substrates; an anti-bonding GPM (f-II) also emerge due to relatively large separation between two gold plates. Then, importantly, each HMM patch becomes a cavity; localized resonances happen (modes f-III, f-IV). The cavity mode fields are tightly confined laterally, which leads to almost flat dispersion curves of the modes, as shown in Fig. \[fig2\](d). The contribution of these cavity modes to RHT is evident in Fig. \[fig2\](d) with $g=1000~\mathrm{nm}$, and even more so in Fig. \[fig2\](e) with $g=50~\mathrm{nm}$.
Unlike resonators made of isotropic dielectric materials, the resonant frequencies of the HMM modes with different order numbers due to nodal breaking along $z$ are quite close to each other. This can be understood by examining the iso-frequency contours of the HMM in bulk, shown in $k_x$ and $k_z$ axes ($k_y=0$) in Fig. \[fig3\]. Three hyperbolic contours correspond to relatively close frequencies at 235, 245, and 255$\times 10^{12}~\mathrm{rad/s}$. Given a 2D rectangular HMM cavity (of width $w$) in Fig. \[fig2\](f), resonant frequencies of modes are decided by the modes’ corresponding $k_x$ and $k_z$ values. The fundamental mode has approximately $k_x=\pi/w$ and $k_z=1.5\pi/h$ [^2]. For next higher-order mode with a nodal breaking along $z$, $k_x$ remains the same, and it has $k_z=2.5\pi/h$, and so on. The first three cavity modes are indicated by three yellow dots in Fig. \[fig3\], vertically aligned. They are positioned quite near to 245$~\times
10^{12}~\mathrm{rad/s}$, which is in reasonably good agreement with Figs. \[fig2\](d) and (e). The fact that the modes with nodal breaking along $z$ stay close in frequency is fundamentally decided by the extremely anisotropic hyperbolic iso-frequency curves of the HMM. This is confirmed by a re-plot of Fig. \[fig3\] in its inset using equal axis scales. The hyperbolic curves are almost vertical lines.
![(Color online). Iso-frequency contours of the bulk HMM in the ($k_x$,$k_z$) plane ($k_y=0$) at 235, 245, and 255$\times 10^{12}~\mathrm{rad/s}$; the yellow dots mark the first three cavity resonances. Inset shows the same plot with equal axis scales.[]{data-label="fig3"}](Fig3.eps){width="0.8\columnwidth"}
The iso-frequency plot in Fig. \[fig3\] also suggests that it is the width of a 2D HMM cavity which determines the resonant frequencies of its modes. The cavity height (even very thin) does not influence much the resonant frequencies. Knowing this, one can potentially create a cavity that supports resonances at multiple frequencies. Trapezoidal-profiled HMM structure as illustrated in Fig. \[fig1\] is a straightforward solution. Indeed, the computed $\mathcal{T}$ map for $g=1000~\mathrm{nm}$ exhibits ultra-broadband transmission factors \[Fig. \[fig2\](g)\]. The states contributing to RHT are so densely packed such that they form almost a continuum. The lower cut-off frequency of the continuum is determined by the bottom width of the trapezoids. High transmission factors extend beyond light line, and at certain frequencies reach the first Brillouin zone edge. When the gap becomes smaller, at $g=50~\mathrm{nm}$, the continuum states extend to larger $k_x$ values; more evanescent states contribute to RHT. This lends the possibility of achieving an ultra-broadband super-Planckian RHT between two plates using trapezoidal-profiled HMMs. From eigen-mode calculations, we obtain modes responsible for the RHT process. Besides the GPM pair (modes i-I, i-II), we show modes i-III and i-IV, which offer a glimpse of cavity modes in the continuum. As characterized by their hot spots, the cavity modes now have very tight $z$ confinement, or correspondingly large $k_z$. The positions of hot spots reveal the mechanisms of their resonances. Mode i-III has hot spots at the middle section of the trapezoidal HMM cavity; it well corresponds to a resonant frequency just below $200\times 10^{12}$ rad/s. Slight complication arises at higher frequencies. Mode i-IV, for example, has its hot spots located at both narrow- and wide-width sections of the HMM cavity; a wide HMM section can support a high-frequency resonance through nodal breaking along $x$ direction.
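As a rough cross-check of this picture, the following sketch estimates the lowest cavity resonance of a width-$w$ HMM patch from the effective-medium description, i.e., it solves $k_x^2/\epsilon_z+k_z^2/\epsilon_x=(\omega/c)^2$ with $k_x=\pi/w$ and $k_z=1.5\pi/h$. It is an order-of-magnitude illustration only: the standard layered-medium mixing formulas are assumed, gold loss ($\gamma$) and the substrate are neglected, and the frequency bracket and variable names are chosen by hand.

```python
import numpy as np
from scipy.constants import c, e, hbar
from scipy.optimize import brentq

t_d, t_m, h = 95e-9, 20e-9, 2.3e-6          # layer thicknesses and cavity height from the text
f_m = t_m / (t_d + t_m)                     # metal filling fraction
eps_d = 11.7
omega_p = 9.0 * e / hbar                    # 9 eV plasma frequency in rad/s

def eps_au(omega):
    return 1.0 - omega_p ** 2 / omega ** 2  # lossless Drude gold (gamma neglected)

def eps_x(omega):                           # in-plane component, negative in the infrared
    return f_m * eps_au(omega) + (1.0 - f_m) * eps_d

def eps_z(omega):                           # out-of-plane component, positive (~14, so n_eff ~ 3.8)
    return 1.0 / (f_m / eps_au(omega) + (1.0 - f_m) / eps_d)

def resonance(w, kz=1.5 * np.pi / h):
    # lowest resonance: kx = pi/w placed on the hyperbolic isofrequency surface
    kx = np.pi / w
    D = lambda om: kx ** 2 / eps_z(om) + kz ** 2 / eps_x(om) - (om / c) ** 2
    return brentq(D, 5e13, 1e15)

# short base, a mid-section width, and the long base of the trapezoid; the ~1 um width
# should come out near 2.5e14 rad/s, in line with the resonance discussed above
for w in (0.4e-6, 1.0e-6, 1.9e-6):
    print(w, resonance(w) / 1e12, "x10^12 rad/s")
```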
![(Color online). (a) Integrated transmission-factor spectra $\Phi(\omega)$ for three types of two-plate structures: trapezoidal-profiled (trap) HMM plates, rectangular-profiled (rect) HMM plates, and homogeneous (homo) HMM plates. Results for two gap sizes 1000 and 50 nm are presented. Black line represents integrated $\Phi$ spectrum between two blackbodies. (b) Spectral heat flux $q(\omega)$ for the same configurations as in (a) for plate temperatures at $301$ and $300$ K. The thin gray lines with shading in (a) and (b) indicate Planck’s oscillator term $\Theta(\omega,
\textrm{301~K})-\Theta(\omega, \textrm{300~K})$ and spectral heat flux between two blackbodies, respectively.[]{data-label="fig4"}](Fig4.eps){width="0.8\columnwidth"}
A full characterization of RHT between two plates requires a calculation of transmission factors for all $\mathbf{k}_\parallel$ over the concerned frequency range. In Supplemental Material, we selectively plot $\mathcal{T}_{s+p}(k_x,k_y)$ maps at $\omega=173\times 10^{12}$ rad/s for various gap sizes. There we also explain the modal origin of the RHT states with the help of eigen-mode analysis especially when $k_x=0$ and $k_y\neq 0$. Generally speaking, as $\mathbf{k}_\parallel$ deviates from $x$ direction, the near-field contribution to RHT becomes less significant, but the far-field contribution persists. The decreasing near-field contribution is expected since each HMM patch no longer sustains cavity resonances when $\mathbf{k}_\parallel$ deviates from $x$ direction. The volumetric transmission-factor data were integrated with respect to $\mathbf{k}_x$ and $\mathbf{k}_y$. In Fig. \[fig4\](a) we plot the integrated transmission-factor spectra, $\Phi(\omega)$, for the three types of HMM plate configurations as mentioned in Fig. \[fig2\] at gap sizes of 1000 and 50 nm. The trapezoidal-profiled HMM plates clearly exhibit the highest $\Phi$ over almost the whole frequency range when $g=1000$ nm; at $g=50$ nm, they have the highest $\Phi$ among the three structures at high frequencies (above $\sim 110\times 10^{12}~\mathrm{rad/s}$). At low frequencies, the homogeneous HMM plates have better RHT performance at small gap sizes. That is because the effect of eddy current generation through magnetic-field coupling ($s$ polarization) between metal plates becomes prominent [@RHTplateAu; @jin3]; and the surface area at the nearest proximity between two plates decides the degree of such coupling (*i.e.* Derjaguin proximity approximation starts to apply). The broadband high $\Phi$ for the trapezoidal-profiled structure is dominantly due to the cavity mode continuum as shown in Figs. \[fig2\](g) and (h). The rectangular-profiled HMM plates show high $\Phi$ bands at certain frequencies (mainly at $\sim 235\times 10^{12}$ rad/s) linked to cavity resonances in the HMM patches, as indicated in Fig. \[fig2\](d) and (e). It is worth noticing that the peak $\Phi$ value of the rectangular-profiled HMM structure is smaller than that of the trapezoidal-profiled HMM structure at the same frequency. This is due to the fact that the trapezoidal one can have extra resonances due to nodal breaking in $x$ direction, while similar high-order resonances do not exist for the rectangular counterpart in the considered frequency range. The homogeneous HMM structure shows featureless $\Phi$ spectra, because it does not sustain any cavity resonances, only linearly dispersive guided modes \[Fig. \[fig2\](a) and (b)\]. As the gap size decreases down to 50 nm, the $\Phi$ spectrum of the trapezoidal-profiled HMM structure surpasses that of the blackbody structure over the whole frequency range; ultra-broadband super-Planckian RHT occurs. We further calculate the spectral heat flux $q(\omega)$ between two plates with temperatures at 301 and 300 K for the mentioned HMM structures \[Fig. \[fig4\](b)\]. At $g=50$ nm, the trapezoidal structure performs significantly better than blackbody plates, with a 155% increase in $q$ at the highest-$q$ frequency of the blackbody-plate scenario (*i.e.*, 150$\times 10^{12}$ rad/s). Note that the structure presented in this work is for demonstrating the flexibility of HMM-based plates for tailoring enhanced near-field RHT \[$\Phi$ spectra in Fig. \[fig4\](a)\] without considering specific plate temperatures in the first place. The actual heat transfer in measurable values \[as $q$ spectra in Fig. \[fig4\](b)\] depends further on the exact temperature settings. In reality, towards a particular application, one should design the profile patterning (period, resonator width, trapezoid shape, etc.) such that one maximizes near-field RHT in measurable quantities either over a desired frequency range or over the whole spectrum.
The profile-patterning of HMM layers is currently one dimensional, which results in inferior RHT when $\mathbf{k}_\parallel$ has inclination towards $y$ direction. We envisage that a 2D periodic structuring of the HMM layer (pyramid array) [@Fei] can give rise to isotropic enhancement in RHT in all $\mathbf{k}_\parallel$ directions owing to true localization of the cavity modes. In addition, in this work we have not optimized the metal and dielectric layer thicknesses (as well as the dielectric material type) in the HMM. It is likely one can have an even higher effective index of the HMM by choosing appropriate geometrical and material parameters, so that one can further adjust the frequency range of the cavity mode continuum as well as the Brillouin zone size.
In conclusion, we numerically demonstrated that one can achieve an ultra-broadband super-Planckian RHT with two closely spaced trapezoidal-profiled HMM plates. The design rests on two key properties of HMMs: high effective index for creating sub-wavelength resonators, and extremely anisotropic iso-frequency curves responsible for cavity-width-dependent resonance frequencies. The superiority of the trapezoidal-profiled patterning in achieving ultra-broadband enhanced RHT is explained through its capability of forming a cavity mode continuum. The transmission-factor maps derived from the scattering-matrix method were confirmed by dispersion curves calculated using an eigen-mode solver. The contributing modes were further elucidated with the obtained mode field distributions. Our study reveals that highly localized cavity resonances, besides surface waves, do enhance RHT at small separation of two bodies. In this respect, structured hyperbolic media offers unprecedented control in creating cavity modes for achieving a controllable super-Planckian RHT.
J. D. and M. Y. acknowledge support by the Swedish Research Council (VR) via project 2011-4526, and VR’s Linnaeus center in Advanced Optics and Photonics. F. D. and S. I. B. acknowledge support from the Danish Council for Independent Research via project 1335-00104. The simulations were performed on the Swedish National Infrastructure for Computing (SNIC).
[25]{} [****, ()](\doibase 10.1103/PhysRevB.4.3303) [****, ()](\doibase 10.1103/RevModPhys.79.1291) [****, ()](http://stacks.iop.org/0022-3727/43/i=7/a=075501) [****, ()](\doibase 10.1103/PhysRevB.90.045414) [**** ()](http://scitation.aip.org/content/aip/journal/apl/95/23/10.1063/1.3271681) [****, ()](\doibase 10.1103/PhysRevB.85.180301) [****, ()](\doibase 10.1103/PhysRevB.92.035419) [****, ()](\doibase 10.1103/PhysRevB.93.155403) [****, ()](\doibase 10.1103/PhysRevB.94.125431) [****, ()](\doibase 10.1002/lpor.201400157) [****, ()](\doibase 10.1038/nphoton.2012.124) [****, ()](\doibase 10.1021/ph5001007) [****, ()](\doibase 10.1364/OE.19.003818) [****, ()](\doibase 10.1021/nl204118h) [****, ()](\doibase http://dx.doi.org/10.1063/1.4800233) [****, ()](\doibase 10.1364/OE.21.015014) [****, ()](\doibase http://dx.doi.org/10.1063/1.4754616) [****, ()](\doibase 10.1103/PhysRevLett.112.157402) [****, ()](\doibase 10.1103/PhysRevLett.109.104301) @noop [****, ()]{} [****, ()](\doibase 10.1364/OE.20.016690) [****, ()](\doibase 10.1364/OE.19.019027) [****, ()](\doibase 10.1103/PhysRevB.77.035431)
**Supplemental Material**
![(Color online). Upper panels show transmission factor maps $\mathcal{T}_{s+p}(k_x,k_y)$ between the two trapezoidal-profiled HMM plates at four gap sizes at $\omega=173\times 10^{12}$ rad/s. Both polarizations are included. Circles in dashed white lines are light cones. Lower panels present four modes in their respective major electric- and magnetic-field components at various $k_y$ values ($k_x=0$) for the structure with $g=50$ nm.[]{data-label="fig_s1"}](sup1.eps){width="0.6\columnwidth"}
Figure \[fig\_s1\] shows transmission-factor ($\mathcal{T}$) maps as a function of surface-parallel wavevectors ($k_x,k_y$) at a fixed frequency $\omega=173\times 10^{12}$ rad/s for the trapezoidal-profiled HMM plate structure discussed in the main text. Four scenarios corresponding to gap sizes of 5000, 1000, 100, and 50 nm are shown. Integration of each spectra over $k_x$ and $k_y$ gives rise to the integrated transmission-factor $\Phi$ at this frequency.
From the upper panels in Fig. \[fig\_s1\], one sees that at larger gap sizes the contribution to radiative heat transfer (RHT) mainly comes from electromagnetic states inside light cone. These states correspond to thermally excited photons radiating away from one plate to the other. It is interesting to notice that the combined $s+p$ states are almost isotropic in the $(k_x,k_y)$ plane; they form nearly unitary transmission factors filling the whole light cone. The amplitudes of the transmission factors are still less than those between two ideal blackbody plates, which would have a uniform amplitude of two filling the whole light cone. As the gap size decreases, states outside the light cone come into play, suggesting more and more near-field contributions to RHT. At $g=1000$ nm, the integrated transmission factor at this frequency is already beyond the far-field blackbody limit \[see Fig. 4(a) in the main article\].
Besides the main block of RHT states connected to light cone, there are discrete thin lines of states emerging and their contributions become more significant as gap size decreases. To better understand these RHT channels, in the lower panels in Fig. \[fig\_s1\] we plot four representative mode fields as marked on the $\mathcal{T}$ spectrum for the $g=50$ nm configuration. The modes were calculated using a finite-element based eigen-mode solver. We examine particularly modes with $(k_x=0, k_y\neq 0)$. Mode patterns for $(k_x\neq 0, k_y=0)$ were presented in Fig. 2 in the main text. The modes in Fig. \[fig\_s1\] are nothing but modes guided by the trapezoidal-profiled HMM plates along $y$ direction. Each trapezoidal HMM stack, being structurally invariant in $y$ direction, functions like an electromagnetic waveguide; a single grating with periodic arrangement of such HMM stacks is a waveguide array. Due to high contrast in permittivity values between HMM and vacuum, the guided modes are hybrid in polarization. The mode inside the light cone (marked by the green square) is a radiation mode, which is manifested by its rapid variation in mode profile plotted in Fig. \[fig\_s1\]. The modes outside light cone are in principle similar to those presented for the un-patterned HMM plate structure in Fig. 2(c) in the main article. The mode marked by the diamond is a gap plasmon mode mainly confined inside the air gap between the two HMM stacks. The modes marked by the triangle and the circle are a bonding- and anti-bonding mode pair whose fields are mainly confined in HMMs. Note that these two HMM-guided modes are the first two (the fundamental mode pair) among a set of such HMM-guided modes supported by the system.
[^1]: The gold substrates mainly serve to prevent transmission leakage; their presence also induces a surface mode between the HMM and the gold. However, the main contribution to near-field RHT comes from the cavity modes, whose existence persists even without the gold substrates.
[^2]: The extra $0.5\pi$ phase change is due to the nearly perfect magnetic-conductor condition at the boundary in contact with the gold.
|
---
abstract: |
We investigate the origin of “quantum superarrivals” in the reflection and transmission probabilities of a Gaussian wave packet incident on a rectangular potential barrier that is perturbed by either reducing or increasing its height. There exists a finite time interval during which the probability of reflection while the barrier is being [*lowered*]{} is [*larger*]{} (superarrivals) than in the unperturbed case. Similarly, during a certain interval of time, the probability of transmission while the barrier is being [*raised*]{} [*exceeds*]{} that for free propagation. We compute [*particle trajectories*]{} using the Bohmian model of quantum mechanics in order to understand [*how*]{} this phenomenon of superarrivals occurs.
PACS number(s): 03.65.Bz
address:
- '$^1$S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Calcutta 700098, India'
- '$^2$Department of Physics, Bose Institute, Calcutta 700009, India'
author:
- 'Md. Manirul Ali[^1]$^1$, A. S. Majumdar[^2]$^1$, and Dipankar Home[^3]$^2$'
title: Understanding Quantum Superarrivals using the Bohmian model
---
A number of interesting investigations have been reported on wave-packet dynamics[@greenberger], including, in particular, recent studies on issues such as the observation of revivals of wave packets[@venu]. Recently, we pointed out a hitherto unexplored effect[@bandyo] concerning the time-dependent reflection probability of a Gaussian wave packet reflected from a perturbed potential barrier. By reducing the height of the barrier to zero over a short span of time during which the barrier has a significant overlap with the wave packet, we observed that the reflection probability is [*larger*]{}, for a small but finite interval of time, than in the case of reflection from a static barrier. This phenomenon is what we have called “Quantum Superarrivals”. The speed with which the effect of reducing the barrier height propagates across the wave function was found to depend on the rate at which the barrier height is reduced. We also found the magnitude of superarrivals to be proportional to the rate of reduction of the potential barrier. We argued that superarrivals occur because of the “objective reality” of a wave function acting as a “field” which mediates the propagation of a physical disturbance across it, [*viz*]{}. the perturbation of the potential barrier.
The aim of this paper is to further [*generalize*]{} the phenomenon of superarrivals and also to understand [*how*]{} superarrivals occur. We begin by showing that superarrivals indeed also occur in the transmission probability when the barrier height is raised from zero to some value (this is [*complementary*]{} to the superarrival phenomenon occurring for the reflected wave packet). We then compute particle trajectories using the Bohm model. We derive a [*quantitative*]{} [*estimate*]{} of the magnitude of superarrivals using the Bohmian trajectories. We show that it is possible to obtain a deeper insight into the nature of superarrivals using such computed trajectories of individual particles. We illustrate this by considering the case of a wave packet which is reflected from the perturbed barrier. A similar analysis can be done for the transmitted wave packet.
Let us first briefly recapitulate the essential features of quantum superarrivals. Consider a Gaussian wave packet peaked at $x_0$ with half width $\sigma$. It moves to the right and strikes a potential barrier of width $w$ centred at a point $x_c$. A detector placed at a point $x'$ far to the left of $x_0$ measures the time-dependent reflection probability by counting the reflected particles arriving there up to various instants, both for the case of a static barrier and when the barrier is perturbed by reducing its height to zero linearly in time. At any instant [*before*]{} the asymptotic value of the reflection probability is attained, the time-evolving reflection probability in the region $-\infty <x\leq x'$ is given by $$\label{3}
\left| R(t)\right|^{2}=\int ^{x'}_{-\infty }\left| \psi \left( x,t\right) \right|^{2}dx$$ We denote the reflection probabilities for the static and the perturbed cases by $R_s(t)$ and $R_p(t)$, respectively. In [@bandyo] we computed these probabilities versus time for various values of $\epsilon$, the time span over which the barrier height goes to zero (different values implying different rates of reduction of the potential barrier). Here $t_{p}$ is the instant at which the perturbation starts, $t_{d}$ is the time from which the curve corresponding to the perturbed case starts deviating from the unperturbed one, and $t_{c}$ is the instant when the static and perturbed curves cross each other. We observed that $R_p(t) > R_s(t)$ during the time interval $t_d<t<t_c$, and that $t_{c}>t_{d}>t_{p}$.
Let us now consider the case when initially there is no barrier, and the wave packet is allowed to propagate freely towards the right. A second detector placed far away at $x''$ records the time-dependent transmission probability $T_s(t)$ (counting the transmitted particles up to various instants of time). If a barrier is raised in the path of the wave packet, a portion of it will be reflected back. We denote by $T_p(t)$ the transmission probability in this case. At any instant [*before*]{} the asymptotic value of the transmission probability ($=1$ since there is no absorption) is attained, the time-evolving transmission probability in the region $x'' \leq x < \infty$ is given by $$\label{4}
\left| T(t)\right|^{2}=\int _{x''}^{\infty }\left| \psi \left( x,t\right) \right|^{2}dx$$ We compute the values of $T_s(t)$ and $T_p(t)$ using the same method of numerically integrating the time-dependent Schrödinger equation as used in [@bandyo], which was first developed in [@goldberg]. The following values of the parameters are chosen for our computations (in units of $\hbar=1$ and $m=1/2$): $x_0=-0.3$, $\sigma=0.05/\sqrt{2}$, $x_c=0$, $w=0.016$, $x'=-0.5$, $x''=0.5$ and $t_p = 8\times 10^{-4}$. It should be emphasized that the observation of the phenomenon of superarrivals does [*not*]{} hinge upon the choice of these particular values of the parameters. Indeed, the quantitative dependence of superarrivals on the parameter values has been studied in [@bandyo], where it was shown that superarrivals in reflection persist for a sufficiently wide range of values of these parameters. We choose one particular set of values for the computations used in this paper since our aim here is primarily to investigate the origin of superarrivals.
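The sketch below illustrates this kind of computation: a Crank–Nicolson propagation of the packet through the time-dependent barrier, accumulating the reflection probability $|R(t)|^2$ to the left of the detector. The packet and barrier parameters follow the values quoted above; the spatial grid, the mean momentum `k0` and the ramp time `eps` are illustrative assumptions, and the scheme is a stand-in for the implicit method of [@goldberg] used for the published figures.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

# Units hbar = 1, m = 1/2, so  i dpsi/dt = -d^2 psi/dx^2 + V(x,t) psi.
x0, sigma, xc, w, xdet = -0.3, 0.05/np.sqrt(2), 0.0, 0.016, -0.5
k0, tp, eps = 50.0*np.pi, 8.0e-4, 4.0e-4      # k0 and eps are assumed values
Nx, Nt, dt = 3000, 600, 2.0e-6                # 400 steps of dt correspond to t = 8e-4

x = np.linspace(-1.0, 1.0, Nx)
dx = x[1] - x[0]
psi0 = np.exp(-(x - x0)**2/(4*sigma**2) + 1j*k0*x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2)*dx)   # normalized Gaussian packet

E = k0**2                                     # mean kinetic energy for m = 1/2
inside = (np.abs(x - xc) < w/2).astype(float) # barrier region of width w at xc
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(Nx, Nx))/dx**2

def height(t, perturbed):
    """Barrier height: static at 2E, or ramped linearly to zero over eps after tp."""
    if not perturbed or t <= tp:
        return 2.0*E
    return 2.0*E*max(0.0, 1.0 - (t - tp)/eps)

def reflection_history(perturbed):
    """Crank-Nicolson propagation; returns |R(t)|^2 accumulated left of the detector."""
    psi, R = psi0.copy(), []
    for n in range(Nt):
        H = -lap + diags(height(n*dt, perturbed)*inside)
        A = (identity(Nx) + 0.5j*dt*H).tocsc()
        b = (identity(Nx) - 0.5j*dt*H) @ psi
        psi = spsolve(A, b)
        R.append(np.sum(np.abs(psi[x <= xdet])**2)*dx)
    return np.array(R)

R_s, R_p = reflection_history(False), reflection_history(True)
# Superarrivals appear wherever R_p exceeds R_s before both curves saturate.
```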
The potential barrier is raised from $V=0$ to $V=2E$ (where $E$ is the energy of the incident wave packet) linearly in time $\epsilon$. In Figure 1 we plot the computed values of $T_s(t)$ and $T_p(t)$ for different values of $\epsilon$. The numbers denoting various instants of time in this as well as the subsequent figures are in units of the time steps used in the numerical algorithm. For example, $t = 8\times 10^{-4}$ corresponds to $400$ time steps. It is seen that superarrivals are also exhibited in the transmitted wave packet.
Superarrivals can be quantitatively defined by a parameter $\eta$ given by $$\label{8}
\eta =\frac{I_{p}-I_{s}}{I_{s}}$$ where the quantities $I_{p}$ and $I_{s}$ are defined over the interval $\Delta t = t_c-t_d$ during which superarrivals occur. For the case of superarrivals in the reflection probability, $$\label{9a}
I_{p}=\int _{\Delta t}\left| R_{p}(t)\right|^{2}dt$$ $$\label{9b}
I_{s}=\int _{\Delta t}\left| R_{s}(t)\right|^{2}dt$$ Replacing the static and perturbed reflection probabilities $R_s(t)$ and $R_p(t)$ by $T_s(t)$ and $T_p(t)$, respectively, one obtains the corresponding expression of $\eta$ for the case of the transmitted wave packet.
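As a sketch of how $\eta$ could be read off from curves such as those returned by the propagation sketch above (the deviation tolerance `tol` and the simple Riemann sums are arbitrary illustrative choices):

```python
import numpy as np

def superarrival_eta(R_s, R_p, dt, tol=1e-6):
    """eta = (I_p - I_s)/I_s over the window [t_d, t_c] where R_p exceeds R_s.

    R_s, R_p hold |R_s(t)|^2 and |R_p(t)|^2 sampled every dt.
    """
    diff = R_p - R_s
    above = np.where(diff > tol)[0]          # perturbed curve above the static one
    if above.size == 0:
        return 0.0
    t_d = above[0]                           # index where the deviation starts
    later = np.where(diff[t_d:] <= 0.0)[0]   # first re-crossing gives t_c
    t_c = t_d + (later[0] if later.size else diff.size - t_d)
    I_p = np.sum(R_p[t_d:t_c]) * dt
    I_s = np.sum(R_s[t_d:t_c]) * dt
    return (I_p - I_s) / I_s
```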
It has been observed[@bandyo] that both $\Delta t$ and the superarrivals quantified by $\eta$ depend on the instant $t_{p}$ around which the barrier is perturbed. The magnitude of superarrivals is appreciable only in cases where the wave packet has a significant overlap with the barrier while it is being perturbed. The magnitude of superarrivals falls off with increasing $\epsilon$, for the reflected as well as the transmitted wave packets. Another interesting observation concerns information transfer from the perturbing barrier to the detector. We defined a signal velocity $$v_{e}=\frac{D}{t_{d}-(t_{p}-\frac{\epsilon }{2})}$$ measuring how fast the influence of the barrier perturbation travels across the wave packet. We found that $v_e$, like $\eta$, is proportional to the rate of barrier reduction, and hence falls off with increasing $\epsilon$. These features lead one to argue that the wave packet acts as a carrier (objective field-like behaviour) through which information about the barrier perturbation propagates with a velocity that is proportional to the “disturbance” (measured in terms of the rate of barrier reduction) imparted to the packet by the barrier.
Now, in order to understand [*how*]{} superarrivals originate, we use the concept of [*particle trajectories*]{} in terms of the Bohm model (BM). We recall that BM provides an ontological and self-consistent interpretation of the formalism of quantum mechanics[@holland; @squires]. Predictions of BM are in agreement with those of standard quantum mechanics. In BM a wave function $\psi$ is taken to be an incomplete specification of the state of an individual particle. An objectively real “position” coordinate (a “position” existing irrespective of any external observation) is ascribed to a particle apart from the wave function. This “position” evolves with time obeying an equation that can be justified in the following way from the Schrödinger equation (considering the one-dimensional case)[@squires] $$\begin{aligned}
i\hbar {\partial\psi \over \partial t} = H\psi \equiv - {\hbar^2
\over 2m} {\partial^2 \psi \over \partial x^2} + V(x)\psi\end{aligned}$$ by writing $$\begin{aligned}
\psi = Re^{iS/\hbar}\end{aligned}$$ and using the continuity equation $$\begin{aligned}
{\partial \over \partial x} (\rho v) + {\partial\rho \over \partial
t} = 0\end{aligned}$$ with the probability distribution $\rho(x,t)$ being given by $$\begin{aligned}
\rho = \vert \psi \vert^2.\end{aligned}$$ It is important to note that $\rho$ in BM is ascribed an [*ontological*]{} significance by regarding it as representing the probability density of “particles” occupying [*actual*]{} positions and the velocity $v$ is interpreted as an ontological (premeasurement) velocity. On the other hand, in the standard interpretation, $\rho$ is interpreted as the probability density of [*finding*]{} particles around specific positions and there is no concept of an ontological velocity. Integrating Eq.(9) by using Eqs.(7), (8) and (10) and requiring that $v$ should vanish when $\rho$ vanishes leads to the Bohmian equation of motion where the particle velocity $v(x,t)$ is given by $$\begin{aligned}
v \equiv {dx \over dt} = {1\over m}{\partial S \over \partial x}\end{aligned}$$ The particle trajectory is thus deterministic and is obtained by integrating the velocity equation for a given initial position.
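A minimal sketch of this integration, assuming the Crank–Nicolson loop of the earlier sketch is modified to store the wave-function snapshots at every time step; in the units used here ($\hbar=1$, $m=1/2$) the velocity field is $v=2\,{\rm Im}(\partial_x\psi/\psi)$.

```python
import numpy as np

def bohm_velocity(psi, x):
    """v(x,t) = (1/m) dS/dx = 2*Im(dpsi/dx / psi) for hbar = 1, m = 1/2."""
    dpsi = np.gradient(psi, x)
    # The small constant only guards against division by an exact zero;
    # trajectories passing near wave-function nodes need more care.
    return 2.0*np.imag(dpsi/(psi + 1e-30))

def trajectory(x_init, psi_t, x, dt):
    """Euler integration of dx/dt = v(x,t) for one initial position.

    psi_t is a sequence of wave-function snapshots, one per time step.
    """
    traj = [x_init]
    for psi in psi_t:
        v = np.interp(traj[-1], x, bohm_velocity(psi, x))
        traj.append(traj[-1] + v*dt)
    return np.array(traj)
```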
Another perspective on the notion of particle trajectories in BM is obtained by decomposing the Schrödinger equation into two real equations for the modulus $R$ and the phase $S$ of the wave function $\psi$[@holland] $$\begin{aligned}
{\partial S \over \partial t} + {(\vec{\nabla} S)^2 \over 2m} - {\hbar^2 \over 2m}
{\nabla^2 R \over R} + V = 0 \\
{\partial R^2 \over \partial t} + \vec{\nabla}\cdot\biggl({R^2 \vec{\nabla} S \over m}\biggr) = 0\end{aligned}$$ and by identifying $$Q(x,t) = -{\hbar ^2 \over 2m}{\nabla^2 R \over R}$$ as the “quantum potential”[@holland]. The equation of motion of a particle along its trajectory can now be written in a form analogous to Newton’s second law $${d \over dt}(m\dot{\vec{X}}) = - \vec{\nabla}(V + Q)|_{X}$$ (with $d/dt = \partial/\partial t + \dot{\vec{X}}\cdot\vec{\nabla}$), where the particle is subject to a quantum force $-\vec{\nabla} Q$ in addition to the classical force $-\vec{\nabla}V$. The effective potential acting on the particle is $(Q+V)$. We plot the profile of $Q$ versus $x$ at various instants of time near the potential barrier (when its height is reduced) in Figure 2. It is then transparent how the perturbation of the classical potential $V$ affects $Q$ away from the vicinity of the boundary of $V$. This in turn accounts for the sharp turn experienced by those particles which contribute towards [*superarrivals*]{} (as we shall see explicitly later).
We compute the Bohmian trajectories for a given set of initial positions with a Gaussian distribution corresponding to the initial wave packet. This procedure is carried out for both the cases of lowering and raising the barrier. Since our purpose is to obtain conceptual clarity about the phenomenon of superarrivals, it suffices to illustrate our scheme through the example of superarrivals in the reflection probability when the barrier is reduced. All the qualitative as well as quantitative features of superarrivals are similar in the case where one observes the transmission probability from a rising barrier. Thus, henceforth we consider only the former case in the following discussion.
The following approach is used to study [*superarrivals*]{} in terms of the Bohmian trajectories. First, a particular value of the barrier reduction rate, i.e., of $\epsilon$, is chosen. We then choose a range of initial positions for which the trajectory arrival times at the detector lie between $t_d$ and $t_c$ (i.e., we select only those trajectories which [*contribute*]{} to superarrivals). We consider $N$ such trajectories whose initial positions form a Gaussian distribution. Let us denote [*one*]{} such trajectory by $S_{ip}$, having initial position $x_{i}$ and arrival time $t_{ip}$. For the static case, the trajectory $S_i$ for the same initial position $x_i$ is computed; let the corresponding arrival time be $t_i$. A superarrival parameter $\beta_i$ for the $i$-th Bohmian trajectory is then defined as $$\beta_i = {t_i - t_{ip} \over t_i}$$ which provides a measure of superarrivals for a [*particular value*]{} of the initial position. Next we define the [*average value*]{} $$\tilde{\beta} = {\sum_i \beta_i \over N}$$ which provides a [*quantitative estimate*]{} of superarrivals obtained through the Bohmian trajectories.
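A sketch of this bookkeeping, built on the `trajectory()` helper above; the Gaussian sampling of initial positions, the detector position and the simple crossing criterion used for the arrival time are illustrative stand-ins for the procedure described in the text.

```python
import numpy as np

def arrival_time(x_init, psi_t, x, dt, xdet=-0.5):
    """First instant at which a Bohmian trajectory crosses the detector at xdet."""
    traj = trajectory(x_init, psi_t, x, dt)        # trajectory() from the sketch above
    hits = np.where(traj <= xdet)[0]
    return hits[0]*dt if hits.size else np.inf

def beta_tilde(x_inits, psi_static, psi_perturbed, x, dt):
    """Average superarrival parameter over the trajectories that contribute."""
    beta = []
    for xi in x_inits:
        t_i = arrival_time(xi, psi_static, x, dt)      # static-barrier snapshots
        t_ip = arrival_time(xi, psi_perturbed, x, dt)  # perturbed-barrier snapshots
        if np.isfinite(t_i) and np.isfinite(t_ip) and t_ip < t_i:
            beta.append((t_i - t_ip)/t_i)              # beta_i for one initial position
    return np.mean(beta) if beta else 0.0              # tilde{beta}
```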
Our results show that the arrival time[^4] $t_{ip}$ for the perturbed case is sensitive to the value of the initial position $x_{i}$. We have checked that, for a given initial position, $t_i$ [*exceeds*]{} $t_{ip}$ for [*only*]{} those trajectories which [*contribute*]{} to superarrivals. This is a distinct feature associated with superarrivals that can be identified in terms of the Bohmian trajectories. We plot a set of Bohmian trajectories in Figure 3. Note that the trajectories of the particles corresponding to the perturbed case take a sharp turn and arrive at the detector [*earlier*]{} than they would have for a static barrier. Any abrupt perturbation of the potential barrier thus has a [*global effect*]{} on the wave function and affects the values of the quantum potential $Q(x,t)$ at various points. Then, through the Bohmian equation of motion, the velocities of the incident particles get correspondingly affected [*much before*]{} they reach the vicinity of the potential barrier. Superarrivals originate from [*those*]{} particles in the perturbed case which reach the detector [*earlier*]{} than the particles with the [*same*]{} initial positions in the static case. This accounts for why a detector records more counts in the perturbed case during a particular time interval than in the static situation. The origin of superarrivals can thus be understood in this way by using the Bohmian trajectories.
The effect of altering the barrier perturbation time $\epsilon$ on the magnitude of superarrivals $\tilde{\beta}$ can be studied by computing $\tilde{\beta}$ for various values of $\epsilon$. We display the results of this study in Figure 4. Note that the magnitude of superarrivals decreases monotonically with increasing $\epsilon$, i.e., with decreasing rate of perturbation. This effect was also observed in [@bandyo], where we obtained a similar behaviour for the superarrival parameter $\eta$. The similarity of these two results, obtained through entirely different techniques, reinforces our contention about the dynamical nature of superarrivals originating from a “disturbance” provided by the lowering of the potential barrier, which propagates across the wave function with a definite speed.
To conclude, in this paper we have explored further the nature and origin of [*quantum superarrivals*]{} manifested in terms of enhanced reflection and transmission probabilities of wave packets from a perturbed potential barrier which is respectively lowered or raised. We have shown how the concept of particle trajectories obtained from the Bohm model enables one to have an insight into the phenomenon of quantum superarrivals. This analysis substantiates our earlier contention[@bandyo] that superarrivals arise from a dynamical disturbance provided by the perturbed barrier which propagates across the wave packet (which acts like a “physically real field”) with a definite speed and affects the “particles”. Such time dependent quantum phenomena could be useful in furnishing examples of the [*conceptual utility*]{} of the Bohm model from a perspective different from other examples studied recently [@pla] for this purpose.
D. Home thanks John Corbett for useful suggestions. Md. Manirul Ali acknowledges the grant of a research fellowship from the Council of Scientific and Industrial Research, India.
[99]{}
D. M. Greenberger, Physica B [**151**]{}, 374 (1988); M. V. Berry, J. Phys. A [**29**]{}, 6617 (1996); M. V. Berry and S. Klein, J. Mod. Opt. [**43**]{}, 2139 (1996); D. L. Aronstein and C. R. Stroud, Phys. Rev. A [**55**]{}, 4526 (1997); F. Lillo and R. N. Mantegna, Phys. Rev. Lett. [**84**]{}, 1061 (2000); G. Kalbermann, J. Phys. A [**34**]{}, 3841 (2001); G. Kalbermann, quant-ph/0203036.
See, for instance, A. Venugopalan and G. S. Agarwal, Phys. Rev. A [**59**]{}, 1413 (1999); F. B. J. Buchkremer, R. Dumke, H. Levsen, G. Birkl and W. Ertmer, Phys. Rev. Lett. [**85**]{}, 3121 (2000); H. Mack, M. Bienert, F. Haug, F. S. Straub, M. Freyberger and W. P. Schleich, quant-ph/0204040.
S. Bandyopadhyay, A. S. Majumdar and D. Home, Phys. Rev. A [**65**]{}, 052718 (2002).
A. Goldberg, H. M. Schey and J. L. Schwartz, Am. J. Phys. [**35**]{}, 177 (1967).
D. Bohm, Phys. Rev. [**85**]{}, 166 (1952); D. Bohm and B. J. Hiley, “The Undivided Universe”, (Routledge, London, 1993).
P. R. Holland, “The Quantum Theory of Motion”, (Cambridge University Press, London, 1993).
E. J. Squires, in “Bohmian Mechanics and Quantum Theory: An Appraisal”, Eds. J. T. Cushing, A. Fine and S. Goldstein (Kluwer, Dordrecht, 1996), pp. 131-140.
J. G. Muga and C. R. Leavens, Phys. Rep. [**338**]{}, 353 (2000), and references therein; G. Gruebl and K. Rheinberger, quant-ph/0202084.
P. Ghose, A. S. Majumdar, S. Guha and J. Sau, Phys. Lett. A [**290**]{}, 205 (2001); A. S. Majumdar and D. Home, Phys. Lett. A [**296**]{}, 176 (2002).
[^1]: [email protected]
[^2]: [email protected]
[^3]: [email protected]
[^4]: For conceptual subtleties concerning [*arrival time*]{} in the Bohm model, and, in general, its definition in quantum theory, see [@arrtim].
|
---
bibliography:
- 'bessel\_functionals.bib'
---
[**Some results on Bessel functionals for ${{\rm GSp}}(4)$**]{}
Brooks Roberts and Ralf Schmidt[^1]
We prove that every irreducible, admissible representation $\pi$ of ${{\rm GSp}}(4,F)$, where $F$ is a non-archimedean local field of characteristic zero, admits a Bessel functional, provided $\pi$ is not one-dimensional. Given $\pi$, we explicitly determine the set of all split Bessel functionals admitted by $\pi$, and prove that these functionals are unique. If $\pi$ is not supercuspidal, or in an $L$-packet with a non-supercuspidal representation, we explicitly determine the set of all Bessel functionals admitted by $\pi$, and prove that these functionals are unique.
Introduction {#introduction .unnumbered}
============
\[introsec\]
Let $F$ be a non-archimedean local field of characteristic zero, and let $\psi$ be a non-trivial character of $F$. Let ${{\rm GSp}}(4,F)$ be the subgroup of $g$ in ${{\rm GL}}(4,F)$ satisfying $^tgJg=\lambda(g)J$ for some scalar $\lambda(g)$ in $F^\times$, where $$J=\begin{bmatrix}&&&1\\&&1\\&-1\\-1\end{bmatrix}.$$ The Siegel parabolic subgroup $P$ of ${{\rm GSp}}(4,F)$ is the subgroup consisting of all matrices whose lower left $2\times2$ block is zero. Let $N$ be the unipotent radical of $P$. The characters $\theta$ of $N$ are in one-to-one correspondence with symmetric $2\times2$ matrices $S$ over $F$ via the formula $$\theta({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&X\\&1\end{array}\right]}})=\psi({\rm tr}(S{{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}}X)).$$ We say that $\theta$ is *non-degenerate* if the matrix $S$ is invertible, and we say that $\theta$ is *split* if ${{\rm disc}}(S)=1$; here ${{\rm disc}}(S)$ is the class of $-\det(S)$ in $F^\times/F^{\times2}$. For a fixed $S$, we define $$\label{Tdefeq2}
T={{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}}\{g\in {{\rm GL}}(2,F):\:^tgSg=\det(g)S\}{{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}}.$$ We embed $T$ into ${{\rm GSp}}(4,F)$ via the map $$t\mapsto{{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}t&\\&\det(t)t'\end{array}\right]}},$$ where for a $2\times2$-matrix $g$ we write $g'=\left[\begin{smallmatrix}&1\\1\end{smallmatrix}\right]\,^tg^{-1}\left[\begin{smallmatrix}&1\\1\end{smallmatrix}\right]$. The group $T$ normalizes $N$, so that we can define the semidirect product $D=TN$. This will be referred to as the *Bessel subgroup* corresponding to $S$. For $t$ in $T$ and $n$ in $N$, we have $\theta(tnt^{-1})=\theta(n)$. Thus, if $\Lambda$ is a character of $T$, we can define a character $\Lambda\otimes\theta$ of $D$ by $(\Lambda\otimes\theta)(tn)=\Lambda(t)\theta(n)$. Whenever we regard ${{\mathbb C}}$ as a one-dimensional representation of $D$ via this character, we denote it by ${{\mathbb C}}_{\Lambda\otimes\theta}$. Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$. A non-zero element of the space ${{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda\otimes\theta})$ is called a $(\Lambda,\theta)$-Bessel functional for $\pi$. We say that $\pi$ admits a $(\Lambda,\theta)$-Bessel functional if ${{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda\otimes\theta})$ is non-zero, and that $\pi$ admits a unique $(\Lambda,\theta)$-Bessel functional if ${{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda\otimes\theta})$ is one-dimensional.
In this paper we investigate the existence and uniqueness of Bessel functionals. We prove three main results about irreducible, admissible representations $\pi$ of ${{\rm GSp}}(4,F)$.
- If $\pi$ is not one-dimensional, we prove that $\pi$ admits some $(\Lambda,\theta)$-Bessel functional; see Theorem \[existencetheorem\].
- If $\theta$ is split, we determine the set of $\Lambda$ for which $\pi$ admits a $(\Lambda,\theta)$-Bessel functional, and prove that such functionals are unique; see Proposition \[GSp4genericprop\], Theorem \[existencetheorem\], Theorem \[mainnonsupercuspidaltheorem\] and Theorem \[splituniquenesstheorem\].
- If $\pi$ is non-supercuspidal, or is in an $L$-packet with a non-supercuspidal representation, we determine the set of $(\Lambda,\theta)$ for which $\pi$ admits a $(\Lambda,\theta)$-Bessel functional, and prove that such functionals are unique; see Theorem \[mainnonsupercuspidaltheorem\] and Theorem \[splituniquenesstheorem\].
We point out that all our results hold independently of the residual characteristic of $F$.
To investigate $(\Lambda,\theta)$-Bessel functionals for $(\pi,V)$ we use the $P_3$-module $V_{Z^J}$, the $G^J$-module $V_{Z^J,\psi}$, and the twisted Jacquet module $V_{N,\theta}$. Here, $$P_3={{\rm GL}}(3,F)\cap\begin{bmatrix} *&*&*\\ *&*&*\\&&1\end{bmatrix},\;\:
Z^J={{\rm GSp}}(4,F)\cap\begin{bmatrix}1&&&*\\&1\\&&1\\&&&1\end{bmatrix},\;\:
G^J={{\rm GSp}}(4,F)\cap\begin{bmatrix}1&*&*&*\\&*&*&*\\&*&*&*\\&&&1\end{bmatrix}.$$ The $P_3$-module $V_{Z^J}$ was computed for all $\pi$ with trivial central character in [@NF]; in this paper, we note that these results extend to the general case. The $G^J$-module $V_{Z^J,\psi}$ is closely related to representations of the metaplectic group ${\widetilde{\rm SL}}(2,F)$. The twisted Jacquet module $V_{N,\theta}$ is especially relevant for non-supercuspidal representations. Indeed, we completely calculate twisted Jacquet modules of representations parabolically induced from the Klingen or Siegel parabolic subgroups. These methods suffice to treat most representations; for the few remaining families of representations we use theta lifts. As a by-product of our investigations we obtain a characterization of non-generic representations. Namely, the following conditions are equivalent: $\pi$ is non-generic; the twisted Jacquet module $V_{N,\theta}$ is finite-dimensional for all non-degenerate $\theta$; the twisted Jacquet module $V_{N,\theta}$ is finite-dimensional for all split $\theta$; the $G^J$-module $V_{Z^J,\psi}$ is of finite length. See Theorem \[nongenchartheorem\].
Bessel functionals have important applications, and have been investigated in a number of works. For example, they can be used to define and study $L$-functions, in the case where the representation in question has no Whittaker model; e.g., [@Piatetski1997], [@Sugano1984], [@Furusawa1993], [@PitaleSahaSchmidt2011]. As far as we know, the first works investigating Bessel functionals for irreducible, admissible representations of ${{\rm GSp}}(4,F)$ are [@NovoPia1973] and [@Novodvorski1973]. Both of these papers consider only representations with trivial central character. The main result of the first paper is the uniqueness of $(\Lambda,\theta)$-Bessel functionals for trivial $\Lambda$. In the second paper, this is generalized to arbitrary $\Lambda$.
If an irreducible, admissible representation $\pi$ admits a $(\Lambda,\theta)$-Bessel functional, then $\pi$ has an associated Bessel model. For unramified $\pi$ admitting a $(\Lambda,\theta)$-Bessel functional, the works [@Sugano1984] and [@BumpFriedbergFurusawa1997] contain explicit formulas for the spherical vector in such a Bessel model. Other explicit formulas in certain cases of Iwahori-spherical representations appear in [@Saha2009], [@Pitale2011] and [@PitaleSchmidt2012]. We note that these works show that all the values of a certain vector in the given Bessel model can be expressed in terms of data depending only on the representation and $\Lambda$ and $\theta$; in this situation it follows that the Bessel functional is unique. As far as we know, a detailed proof of uniqueness of Bessel functionals in all cases has not yet appeared in the literature.
In the case of odd residual characteristic, and when $\pi$ appears in a generic $L$-packet, the main local theorem of [@PrTa2011] gives an $\varepsilon$-factor criterion for the existence of a $(\Lambda,\theta)$-Bessel functional. There is some overlap between the methods of [@PrTa2011] and the present work. However, the goal of this work is to give a complete and ready account of Bessel functionals for all non-supercuspidal representations. We hope these results will be useful for applications where such specific knowledge is needed.
Some definitions {#defsec}
================
Throughout this work let $F$ be a non-archimedean local field of characteristic zero. Let $\bar F$ be a fixed algebraic closure of $F$. We fix a non-trivial character $\psi:\:F\rightarrow{{\mathbb C}}^\times$. The symbol ${{\mathfrak o}}$ denotes the ring of integers of $F$, and ${\mathfrak p}$ is the maximal ideal of ${{\mathfrak o}}$. We let $\varpi$ be a fixed generator of ${\mathfrak p}$. We denote by $|\cdot|$ the normalized absolute value on $F$, and by $\nu$ its restriction to $F^\times$. The Hilbert symbol of $F$ will be denoted by $(\cdot,\cdot)_F$. If $\Lambda$ is a character of a group, we denote by ${{\mathbb C}}_\Lambda$ the space of the one-dimensional representation whose action is given by $\Lambda$. If $x=\left[\begin{smallmatrix} a&b \\ c&d \end{smallmatrix} \right]$ is a $2 \times 2$ matrix, then we set $x^* = \left[\begin{smallmatrix} d&-b \\ -c&a \end{smallmatrix} \right]$. If $X$ is an $l$-space, as in 1.1 of [@BeZe1976], and $V$ is a complex vector space, then $\mathcal{S}(X,V)$ is the space of locally constant functions $X\to V$ with compact support. Let $G$ be an $l$-group, as in [@BeZe1976], and let $H$ be a closed subgroup. If $\rho$ is a smooth representation of $H$, we define the compactly induced representation (unnormalized) ${\mathrm{c}\text{-}\mathrm{Ind}}_H^G(\rho)$ as in 2.22 of [@BeZe1976]. If $(\pi,V)$ is a smooth representation of $G$, and if $\theta$ is a character of $H$, we define the twisted Jacquet module $V_{H,\theta}$ as the quotient $V/V(H,\theta)$, where $V(H,\theta)$ is the span of all vectors $\pi(h)v-\theta(h)v$ for all $h$ in $H$ and $v$ in $V$.
Groups
------
Let $${{\rm GSp}}(4,F)=\{g\in{{\rm GL}}(4,F):\:^tgJg=\lambda(g)J,\:\lambda(g)\in F^\times\},\qquad J=\begin{bmatrix}&&&1\\&&1\\&-1\\-1\end{bmatrix}.$$ The scalar $\lambda(g)$ is called the *multiplier* or *similitude factor* of the matrix $g$. The *Siegel parabolic subgroup* $P$ of ${{\rm GSp}}(4,F)$ consists of all matrices whose lower left $2\times2$ block is zero. For a matrix $A\in{{\rm GL}}(2,F)$ set $$A'={{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}}\,^t\!A^{-1}{{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}}.$$ Then the Levi decomposition of $P$ is $P=MN$, where $$\label{Mdefeq}
M=\{{{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}A&\\&\lambda A'\end{array}\right]}}:\:A\in{{\rm GL}}(2,F),\:\lambda\in F^\times\},$$ and $$\label{Ndefeq}
N=\{\begin{bmatrix}
1&&y&z\\&1&x&y\\&&1\\&&&1
\end{bmatrix}:\;x,y,z\in F\}.$$ Let $Q$ be the *Klingen parabolic subgroup*, i.e., $$\label{Qdefeq}
Q={{\rm GSp}}(4,F)\cap\begin{bmatrix} *&*&*&*\\&*&*&*\\&*&*&*\\&&&*\end{bmatrix}.$$ The Levi decomposition for $Q$ is $Q=M_QN_Q$, where $$\label{MQdefeq}
M_Q=\{\begin{bmatrix}t\\&A\\&&t^{-1}\det(A)\end{bmatrix}:\:A\in{{\rm GL}}(2,F),\:t\in F^\times\},$$ and $N_Q$ is the *Heisenberg group* $$\label{NQdefeq}
N_Q=\{\begin{bmatrix}
1&x&y&z\\&1&&y\\&&1&-x\\&&&1
\end{bmatrix}:\;x,y,z\in F\}.$$ The subgroup of $Q$ consisting of all elements with $t=1$ and $\det(A)=1$ is called the *Jacobi group* and is denoted by $G^J$. The standard Borel subgroup of ${{\rm GSp}}(4,F)$ consists of all upper triangular matrices in ${{\rm GSp}}(4,F)$. We let $$U={{\rm GSp}}(4,F)\cap\begin{bmatrix}1&*&*&*\\&1&*&*\\&&1&*\\&&&1\end{bmatrix}$$ be its unipotent radical.
The following elements of ${{\rm GSp}}(4,F)$ represent generators for the eight-element Weyl group, $$\label{s1s2defeq}
s_1=\begin{bmatrix}&1\\1\\&&&1\\&&1\end{bmatrix}\qquad\text{and}\qquad s_2=\begin{bmatrix}1\\&&1\\&-1\\&&&1\end{bmatrix}.$$
Representations {#representationssec}
---------------
For a smooth representation $\pi$ of ${{\rm GSp}}(4,F)$ or ${{\rm GL}}(2,F)$, we denote by $\pi^\vee$ its smooth contragredient.
For $c_1,c_2$ in $F^\times$, let $\psi_{c_1,c_2}$ be the character of $U$ defined by $$\label{psic1c2eq}
\psi_{c_1,c_2}(\begin{bmatrix}1&x&*&*\\&1&y&*\\&&1&-x\\&&&1\end{bmatrix})=\psi(c_1x+c_2y).$$ An irreducible, admissible representation $(\pi,V)$ of ${{\rm GSp}}(4,F)$ is called *generic* if the space ${{\rm Hom}}_{U}(V,\psi_{c_1,c_2})$ is non-zero. This definition is independent of the choice of $c_1,c_2$. It is known by [@Rod1973] that, if non-zero, the space ${{\rm Hom}}_{U}(V,\psi_{c_1,c_2})$ is one-dimensional. Hence, $\pi$ can be realized in a unique way as a space of functions $W:\:{{\rm GSp}}(4,F)\rightarrow{{\mathbb C}}$ with the transformation property $$W(ug)=\psi_{c_1,c_2}(u)W(g),\qquad u\in U,\;g\in {{\rm GSp}}(4,F),$$ on which $\pi$ acts by right translations. We denote this model of $\pi$ by $\mathcal{W}(\pi,\psi_{c_1,c_2})$, and call it the *Whittaker model* of $\pi$ with respect to $c_1,c_2$.
We will employ the notation of [@SaTa1993] for parabolically induced representations of ${{\rm GSp}}(4,F)$ (all parabolic induction is normalized). For details we refer to the summary given in Sect. 2.2 of [@NF]. Let $\chi_1$, $\chi_2$ and $\sigma$ be characters of $F^\times$. Then $\chi_1\times\chi_2\rtimes\sigma$ denotes the representation of ${{\rm GSp}}(4,F)$ parabolically induced from the character of the Borel subgroup which is trivial on $U$ and is given by $${\rm diag}(a,b,cb^{-1},ca^{-1})\longmapsto\chi_1(a)\chi_2(b)\sigma(c),\qquad a,b,c\in F^\times,$$ on diagonal elements. Let $\sigma$ be a character of $F^\times$ and $\pi$ be an admissible representation of ${{\rm GL}}(2,F)$. Then $\pi\rtimes\sigma$ denotes the representation of ${{\rm GSp}}(4,F)$ parabolically induced from the representation $$\label{Prepeq}
{{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}A&*\\&cA'\end{array}\right]}}\longmapsto\sigma(c)\pi(A),\qquad A\in{{\rm GL}}(2,F),\:c\in F^\times,$$ of the Siegel parabolic subgroup $P$. Let $\chi$ be a character of $F^\times$ and $\pi$ an admissible representation of ${{\rm GSp}}(2,F)\cong{{\rm GL}}(2,F)$. Then $\chi\rtimes\pi$ denotes the representation of ${{\rm GSp}}(4,F)$ parabolically induced from the representation $$\label{Qrepeq}
\begin{bmatrix}t&*&*\\&g&*\\&&\det(g)t^{-1}\end{bmatrix}
\longmapsto\chi(t)\pi(g),\qquad t\in F^\times,\:g\in{{\rm GL}}(2,F),$$ of the Klingen parabolic subgroup $Q$.
For a character $\xi$ of $F^\times$ and a representation $(\pi,V)$ of ${{\rm GSp}}(4,F)$, the *twist* $\xi\pi$ is the representation of ${{\rm GSp}}(4,F)$ on the same space $V$ given by $(\xi\pi)(g)=\xi(\lambda(g))\pi(g)$ for $g$ in ${{\rm GSp}}(4,F)$, where $\lambda$ is the multiplier homomorphism defined above. A similar definition applies to representations $\pi$ of ${{\rm GL}}(2,F)$; in this case, the multiplier is replaced by the determinant. The behavior of parabolically induced representations under twisting is as follows, $$\begin{aligned}
\xi(\chi_1\times\chi_2\rtimes\sigma)&=\chi_1\times\chi_2\rtimes\xi\sigma,\\
\xi(\pi\rtimes\sigma)&=\pi\rtimes\xi\sigma,\\
\xi(\chi\rtimes\pi)&=\chi\rtimes\xi\pi.\end{aligned}$$
The irreducible constituents of *all* parabolically induced representations of ${{\rm GSp}}(4,F)$ have been determined in [@SaTa1993]. The following table, which is essentially a reproduction of Table A.1 of [@NF], provides a summary of these irreducible constituents. In the table, $\chi,\chi_1,\chi_2,\xi$ and $\sigma$ stand for characters of $F^\times$; the symbol $\nu$ denotes the normalized absolute value; $\pi$ stands for an irreducible, admissible, supercuspidal representation of ${{\rm GL}}(2,F)$, and $\omega_\pi$ denotes the central character of $\pi$. The trivial character of $F^\times$ is denoted by $1_{F^\times}$, the trivial representation of ${{\rm GL}}(2,F)$ by ${1}_{{{\rm GL}}(2)}$ or ${1}_{{{\rm GSp}}(2)}$, depending on the context, the trivial representation of ${{\rm GSp}}(4,F)$ by ${1}_{{{\rm GSp}}(4)}$, the Steinberg representation of ${{\rm GL}}(2,F)$ by ${{\rm St}}_{{{\rm GL}}(2)}$ or ${{\rm St}}_{{{\rm GSp}}(2)}$, depending on the context, and the Steinberg representation of ${{\rm GSp}}(4,F)$ by ${{\rm St}}_{{{\rm GSp}}(4)}$. The names of the representations given in the “representation” column are taken from [@SaTa1993]. The “tempered” column indicates the condition on the inducing data under which a representation is tempered. The “$L^2$” column indicates which representations are square integrable after an appropriate twist. Finally, the “g” column indicates which representations are generic.
In addition to all irreducible, admissible, non-supercuspidal representations, the table also includes two classes of supercuspidal representations denoted by Va$^*$ and XIa$^*$. The reason that these supercuspidal representations are included in the table is that they are in $L$-packets with some non-supercuspidal representations. Namely, the Va representation $\delta([\xi,\nu\xi],\nu^{-1/2}\sigma)$ and the Va$^*$ representation $\delta^*([\xi,\nu\xi],\nu^{-1/2}\sigma)$ form an $L$-packet, and the XIa representation $\delta(\nu^{1/2}\pi,\nu^{-1/2}\sigma)$ and the XIa$^*$ representation $\delta^*(\nu^{1/2}\pi,\nu^{-1/2}\sigma)$ form an $L$-packet; see the paper [@GaTa2011]. Incidentally, the other non-singleton $L$-packets involving non-supercuspidal representations are the two-element packets $\{\tau(S,\nu^{-1/2}\sigma),\tau(T,\nu^{-1/2}\sigma)\}$ (type VIa and VIb), as well as $\{\tau(S,\pi),\tau(T,\pi)\}$ (type VIIIa and VIIIb).
$$\renewcommand{\arraystretch}{1.19}
\setlength{\arraycolsep}{0.3cm}
\begin{array}{ccccccccc}
\toprule
&\mbox{constituents of}&&\mbox{representation}
&{\rm tempered}&L^2\!&\,{\rm g}\\
\toprule
{\rm I}&\multicolumn{3}{c}{\chi_1\times\chi_2\rtimes\sigma\quad
\mbox{(irreducible)}}&\mbox{$\chi_i,\sigma$ unitary}
&&\bullet\\
\midrule
{\rm II}&\nu^{1/2}\chi\times\nu^{-1/2}\chi\rtimes\sigma&\mbox{a}
&\chi{{\rm St}}_{{{\rm GL}}(2)}\rtimes\sigma&\mbox{$\chi,\sigma$ unitary}&&\bullet\\
\cmidrule{3-7}
&(\chi^2\neq\nu^{\pm1},\chi\neq\nu^{\pm 3/2})&\mbox{b}
&\chi{1}_{{{\rm GL}}(2)}\rtimes\sigma
&&&\\
\midrule
{\rm III}&\chi\times\nu\rtimes\nu^{-1/2}\sigma&\mbox{a}
&\chi\rtimes\sigma{{\rm St}}_{{{\rm GSp}}(2)}&\mbox{$\chi,\sigma$ unitary}&
&\bullet\\
\cmidrule{3-7}
&(\chi\notin\{1,\nu^{\pm2}\})&\mbox{b}
&\chi\rtimes\sigma{1}_{{{\rm GSp}}(2)}
&&&\\\midrule
{\rm IV}&\nu^2\times\nu\rtimes\nu^{-3/2}\sigma&\mbox{a}&\sigma{{\rm St}}_{{{\rm GSp}}(4)}&\mbox{$\sigma$ unitary}&\bullet&\bullet\\
\cmidrule{3-7}
&&\mbox{b}&L(\nu^2,\nu^{-1}\sigma{{\rm St}}_{{{\rm GSp}}(2)})&&&\\
\cmidrule{3-7}
&&\mbox{c}&L(\nu^{3/2}{{\rm St}}_{{{\rm GL}}(2)},\nu^{-3/2}\sigma)&&&\\
\cmidrule{3-7}
&&\mbox{d}&\sigma{1}_{{{\rm GSp}}(4)}&&&\\
\midrule
{\rm V}&\nu\xi\times\xi\rtimes\nu^{-1/2}\sigma&\mbox{a}
&\delta([\xi,\nu\xi],\nu^{-1/2}\sigma)&\mbox{$\sigma$ unitary}&\bullet&\bullet\\
\cmidrule{3-7}
&(\xi^2=1,\:\xi\neq1)&\mbox{b}&L(\nu^{1/2}\xi{{\rm St}}_{{{\rm GL}}(2)},\nu^{-1/2}\sigma)&&&\\
\cmidrule{3-7}
&&\mbox{c}&L(\nu^{1/2}\xi{{\rm St}}_{{{\rm GL}}(2)},\xi\nu^{-1/2}\sigma)&&&\\
\cmidrule{3-7}
&&\mbox{d}&L(\nu\xi,\xi\rtimes\nu^{-1/2}\sigma)&&&\\
\midrule
{\rm VI}&\nu\times1_{F^\times}\rtimes\nu^{-1/2}\sigma&\mbox{a}
&\tau(S,\nu^{-1/2}\sigma)&\mbox{$\sigma$ unitary}&&\bullet\\
\cmidrule{3-7}
&&\mbox{b}&\tau(T,\nu^{-1/2}\sigma)&\mbox{$\sigma$ unitary}&&\\
\cmidrule{3-7}
&&\mbox{c}&L(\nu^{1/2}{{\rm St}}_{{{\rm GL}}(2)},\nu^{-1/2}\sigma)&&&\\
\cmidrule{3-7}
&&\mbox{d}&L(\nu,1_{F^\times}\rtimes\nu^{-1/2}\sigma)&&&\\
\toprule
{\rm VII}&\multicolumn{3}{c}{\chi\rtimes\pi\quad
\mbox{(irreducible)}}&\mbox{$\chi,\pi$ unitary}&&\bullet\\
\midrule
{\rm VIII}&1_{F^\times}\rtimes\pi&\mbox{a}&\tau(S,\pi)&\mbox{$\pi$ unitary}&&\bullet\\
\cmidrule{3-7}
&&\mbox{b}&\tau(T,\pi)&\mbox{$\pi$ unitary}&&\\
\midrule
{\rm IX}&\nu\xi\rtimes\nu^{-1/2}\pi&\mbox{a}
&\delta(\nu\xi,\nu^{-1/2}\pi)&\mbox{$\pi$ unitary}&\bullet&\bullet\\
\cmidrule{3-7}
&(\xi\neq1,\:\xi\pi=\pi)&\mbox{b}&L(\nu\xi,\nu^{-1/2}\pi)&&&\\
\toprule
{\rm X}&\multicolumn{3}{c}{\pi\rtimes\sigma\quad
\mbox{(irreducible)}}&\mbox{$\pi,\sigma$ unitary}
&&\bullet\\
\midrule
{\rm XI}&\nu^{1/2}\pi\rtimes\nu^{-1/2}\sigma&\mbox{a}
&\delta(\nu^{1/2}\pi,\nu^{-1/2}\sigma)
&\mbox{$\pi,\sigma$ unitary}&\bullet&\bullet\\
\cmidrule{3-7}
&(\omega_\pi=1)&\mbox{b}&L(\nu^{1/2}\pi,\nu^{-1/2}\sigma)&&&\\
\toprule
{\rm Va^*}&\mbox{(supercuspidal)}&&\delta^*([\xi,\nu\xi],\nu^{-1/2}\sigma)
&\mbox{$\sigma$ unitary}&\bullet&\\
\midrule
{\rm XIa^*}&\mbox{(supercuspidal)}&
&\delta^*(\nu^{1/2}\pi,\nu^{-1/2}\sigma)
&\mbox{$\pi,\sigma$ unitary}&\bullet&\\
\toprule
\end{array}$$
Generalities on Bessel functionals
==================================
In this section we gather some definitions, notation, and basic results about Bessel functionals.
Quadratic extensions {#quadextsubsec}
--------------------
Let $D \in F^\times$. If $D \notin F^{\times 2}$, then let ${{\Delta}}= \sqrt{D}$ be a square root of $D$ in $\bar F$, and $L=F({{\Delta}})$. If $D \in F^{\times 2}$, then let $\sqrt{D}$ be a square root of $D$ in $F^\times$, $L = F\times F$, and ${{\Delta}}= (-\sqrt{D},\sqrt{D}) \in L$. In both cases $L$ is a two-dimensional $F$-algebra containing $F$, $L=F+F {{\Delta}}$, and ${{\Delta}}^2=D$. We will abuse terminology slightly, and refer to $L$ as the *quadratic extension associated to $D$*. We define a map $\gamma :L \to L$ called *Galois conjugation* by $\gamma(x+y{{\Delta}}) = x-y{{\Delta}}$. Then $\gamma(xy)=\gamma(x)\gamma(y)$ and $\gamma(x+y)=\gamma(x)+\gamma(y)$ for $x,y \in L$, and the fixed points of $\gamma$ are the elements of $F$. The group ${{\rm Gal}}(L/F)$ of $F$-automorphisms $\alpha:L \to L$ is $\{1,\gamma\}$. We define norm and trace functions ${{\rm N}}_{L/F}:L \to F$ and ${{\rm T}}_{L/F}:L \to F$ by ${{\rm N}}_{L/F}(x) = x \gamma (x)$ and ${{\rm T}}_{L/F}(x) = x + \gamma (x)$ for $x \in L$. We let $\chi_{L/F}$ be the quadratic character associated to $L/F$, so that $\chi_{L/F}(x)=(x,D)_F$ for $x \in F^\times$.
$2\times 2$ symmetric matrices {#twobytwosubsec}
-------------------
Let $a,b,c \in F$ and set $$\label{Sdefeq}
S = {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&b/2\\b/2&c\end{array}\right]}}.$$ Let $D=b^2/4-ac= -\det(S)$. Assume that $D \neq 0$. The *discriminant* ${{\rm disc}}(S)$ of $S$ is the class in $F^\times/F^{\times2}$ determined by $D$. It is known that there exists $g \in {{\rm GL}}(2,F)$ such that ${}^t g S g $ is of the form $\left[\begin{smallmatrix} a_1 & \\ & a_2 \end{smallmatrix} \right]$ and that $(a_1,a_2)_F$ is independent of the choice of $g$ such that ${}^t g S g $ is diagonal; we define the *Hasse invariant* $\varepsilon(S) \in \{ \pm 1\}$ by $\varepsilon(S) = (a_1,a_2)_F$. In fact, one has: $$\renewcommand{\arraystretch}{1.2}
\begin{array}{ccccc}
\toprule
S & g & {}^tgSg & {{\rm disc}}(S) & \varepsilon(S) \\
\midrule
a \neq 0, c \neq 0 & \left[ \begin{smallmatrix} 1 & \frac{-b}{2a} \\ & 1 \end{smallmatrix} \right] & \left[ \begin{smallmatrix} a & \\ & c-\frac{b^2}{4a} \end{smallmatrix} \right] & (b^2/4-ac)F^{\times 2} & (a, b^2/4-ac)_F =(c,b^2/4-ac)_F\\
\cmidrule{1-5}
a \neq 0, c = 0 & \left[ \begin{smallmatrix} 1 & \frac{-b}{2a} \\ & 1 \end{smallmatrix} \right] & \left[ \begin{smallmatrix} a & \\ & -\frac{b^2}{4a} \end{smallmatrix} \right] & F^{\times 2}& 1 \\
\cmidrule{1-5}
a=0, c \neq 0 & \left[ \begin{smallmatrix} &1\\ 1& -\frac{b}{2c} \end{smallmatrix} \right] & \left[\begin{smallmatrix} c & \\ &-\frac{b^2}{4c}\end{smallmatrix}\right]&F^{\times 2} & 1 \\
\cmidrule{1-5}
a=0, c=0 & \left[ \begin{smallmatrix} 1& 1 \\ 1&-1 \end{smallmatrix} \right] & \left[ \begin{smallmatrix} b & \\ & -b \end{smallmatrix} \right] &F^{\times 2}& 1 \\
\bottomrule
\end{array}$$ If ${{\rm disc}}(S)=F^{\times 2}$, then we say that $S$ is *split*. If $S$ is split, then for any $\lambda \in F^\times$ there exists $g \in {{\rm GL}}(2,F)$ such that ${}^tgSg=\left[\begin{smallmatrix} & \lambda \\ \lambda & \end{smallmatrix} \right]$.
Another $F$-algebra {#anotheralgebrasubsec}
----------------
Let $S$ be as in with ${{\rm disc}}(S) \neq 0$. Set $D=b^2/4-ac$. We define $$\label{AFdefeq}
A= A_S= \{ {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}x-yb/2&-ya\\yc&x+yb/2\end{array}\right]}} : x,y \in F \}.$$ Then, with respect to matrix addition and multiplication, $A$ is a two-dimensional $F$-algebra naturally containing $F$. One can verify that $$\label{AFdefeq2}
A={{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}}\{g\in {{\rm M}}_2(F):\:^tgSg=\det(g)S\}{{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}}.$$ We define $T=T_S = A^\times$. Let $L$ be the quadratic extension associated to $D$; we also say that $L$ is the *quadratic extension associated to $S$*. We define an isomorphism of $F$-algebras, $$\label{ALisoeq}
A \stackrel{\sim}{\longrightarrow} L, \qquad {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}x-yb/2&-ya\\yc&x+yb/2\end{array}\right]}} \longmapsto x + y {{\Delta}}.$$ The restriction of this isomorphism to $T$ is an isomorphism $T \stackrel{\sim}{\longrightarrow} L^\times$, and we identify characters of $T$ and characters of $L^\times$ via this isomorphism. The automorphism of $A$ corresponding to the automorphism $\gamma$ of $L$ will also be denoted by $\gamma$. It has the effect of replacing $y$ by $-y$ in the matrix . We have $\det (t) = {{\rm N}}_{L/F}(t)$ for $t \in A$, where we identify elements of $A$ and $L$ via .
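Indeed, for the element of $A$ corresponding to $x+y{{\Delta}}$, a direct computation gives $$\det\begin{bmatrix}x-yb/2&-ya\\yc&x+yb/2\end{bmatrix}=x^2-y^2\Bigl(\frac{b^2}{4}-ac\Bigr)=x^2-y^2D={{\rm N}}_{L/F}(x+y{{\Delta}}).$$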
\[TFGL2lemma\] Let $T$ be as above, and assume that $L$ is a field. Let $B_2$ be the group of upper triangular matrices in ${{\rm GL}}(2,F)$. Then $TB_2={{\rm GL}}(2,F)$.
This can easily be verified using the explicit form of the matrices in $T$ and the assumption $D \notin F^{\times 2}$.
Bessel functionals {#besselsec}
------------------
Let $a,b$ and $c$ be in $F$. Define $S$ as in , and define a character $\theta=\theta_{a,b,c}=\theta_S$ of $N$ by $$\label{thetaSsetupeq}
\theta(\begin{bmatrix} 1 &&y&z \\ &1&x&y \\ &&1& \\ &&&1 \end{bmatrix}) = \psi(ax+by+cz) = \psi ({{\rm tr}}(S{{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}}{{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}y&z\\x&y\end{array}\right]}}))$$ for $x,y,z \in F$. Every character of $N$ is of this form for uniquely determined $a,b,c$ in $F$, or, alternatively, for a uniquely determined symmetric $2\times2$ matrix $S$. We say that $\theta$ is *non-degenerate* if $\det(S)\neq0$. Given $S$ with $\det(S)\neq0$, let $A$ be as in , and let $T=A^\times$. We embed $T$ into ${{\rm GSp}}(4,F)$ via the map defined by $$\label{Tembeddingeq}
t\longmapsto {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}t&\\&\det(t)t'\end{array}\right]}},\qquad t\in T.$$ The image of $T$ in ${{\rm GSp}}(4,F)$ will also be denoted by $T$; the usage should be clear from the context. For $t\in T$ we have $\lambda(t)=\det(t)={{\rm N}}_{L/F}(t)$. It is easily verified that $$\theta(tnt^{-1})=\theta(n)\qquad\text{for $n\in N$ and $t\in T$}.$$ We refer to the semidirect product $$\label{Ddefeq}
D=TN$$ as the *Bessel subgroup* defined by character $\theta$ (or, the matrix $S$). Given a character $\Lambda$ of $T$ (identified with a character of $L^\times$ as explained above), we can define a character $\Lambda\otimes\theta$ of $D$ by $$(\Lambda\otimes\theta)(tn)=\Lambda(t)\theta(n)\qquad\text{for $n\in N$ and $t\in T$}.$$ Every character of $D$ whose restriction to $N$ coincides with $\theta$ is of this form for an appropriate $\Lambda$.
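As a consistency check on the formula defining $\theta$, note that $$S\begin{bmatrix}&1\\1&\end{bmatrix}\begin{bmatrix}y&z\\x&y\end{bmatrix}=\begin{bmatrix}ax+\tfrac{b}{2}y&ay+\tfrac{b}{2}z\\ \tfrac{b}{2}x+cy&\tfrac{b}{2}y+cz\end{bmatrix},$$ whose trace is $ax+by+cz$, so that the two expressions for $\theta$ above agree.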
Now let $(\pi,V)$ be an admissible representation of ${{\rm GSp}}(4,F)$. Let $\theta$ be a non-degenerate character of $N$, and let $\Lambda$ be a character of the associated group $T$. We say that $\pi$ admits a *$(\Lambda,\theta)$-Bessel functional* if ${{\rm Hom}}_{D}(V,{{\mathbb C}}_{\Lambda\otimes\theta})\neq0$. A non-zero element $\beta$ of ${{\rm Hom}}_{D}(V,{{\mathbb C}}_{\Lambda\otimes\theta})$ is called a *$(\Lambda,\theta)$-Bessel functional* for $\pi$. If such a $\beta$ exists, then $\pi$ admits a model consisting of functions $B:\:{{\rm GSp}}(4,F)\rightarrow{{\mathbb C}}$ with the Bessel transformation property $$B(tng)=\Lambda(t)\theta(n)B(g)\qquad\text{for $t\in T$, $n\in N$ and $g\in{{\rm GSp}}(4,F)$},$$ by associating to each $v$ in $V$ the function $B_v$ that is defined by $B_v(g)=\beta(\pi(g)v)$ for $g \in {{\rm GSp}}(4,F)$. We note that if $\pi$ admits a central character $\omega_\pi$ and a $(\Lambda,\theta)$-Bessel functional, then $\Lambda|_{F^\times}=\omega_\pi$. For a character $\sigma$ of $F^\times$, it is easy to verify that $$\label{besseltwistformula}
{{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda\otimes\theta})={{\rm Hom}}_D(\sigma\pi,{{\mathbb C}}_{(\sigma\circ{{\rm N}}_{L/F})\Lambda\otimes\theta}).$$ If $\pi$ is irreducible, then, using that $\pi^\vee\cong\omega_\pi^{-1}\pi$ (Proposition 2.3 of [@Takloo-Bighash2000]), one can also verify that $$\label{besselcontragredientformula}
{{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda\otimes\theta})\cong{{\rm Hom}}_D(\pi^\vee,{{\mathbb C}}_{(\Lambda\circ\gamma)^{-1}\otimes\theta}).$$
The twisted Jacquet module of $V$ with respect to $N$ and $\theta$ is the quotient $V_{N,\theta}=V/V(N,\theta)$, where $V(N,\theta)$ is the subspace spanned by all vectors $\pi(n)v-\theta(n)v$ for $v$ in $V$ and $n$ in $N$. This Jacquet module carries an action of $T$ induced by the representation $\pi$. Evidently, there is a natural isomorphism $$\label{DTJacqueteq}
{{\rm Hom}}_{D}(V,{{\mathbb C}}_{\Lambda\otimes\theta})\cong{{\rm Hom}}_{T}(V_{N,\theta},{{\mathbb C}}_\Lambda).$$ Hence, when calculating the possible Bessel functionals on a representation $(\pi,V)$, a first step often consists in calculating the Jacquet modules $V_{N,\theta}$. We will use this method to calculate the possible Bessel functionals for most of the non-supercuspidal, irreducible, admissible representations of ${{\rm GSp}}(4,F)$. The few representations that are inaccessible with this method will be treated using the theta correspondence.
In this paper we do not assume that $(\Lambda,\theta)$-Bessel functionals are unique up to scalars. See Sect. \[uniquenesssec\] for some remarks on uniqueness.
Action on Bessel functionals {#actionsubsec}
----------------------------
There is an action of $M$, defined in , on the set of Bessel functionals. Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$, and let $\beta: V \to {{\mathbb C}}$ be a $(\Lambda, \theta)$-Bessel functional for $\pi$. Let $a,b,c\in F$ be such that holds. Let $m \in M$, with $$m = \begin{bmatrix} g & \\ & \lambda g' \end{bmatrix},$$ where $\lambda \in F^\times$ and $g \in {{\rm GL}}(2,F)$. Define $m \cdot \beta : V \to {{\mathbb C}}$ by $(m\cdot \beta)(v) = \beta (\pi (m^{-1})v)$ for $v\in V$. Calculations show that $m\cdot \beta$ is a $(\Lambda',\theta')$-Bessel functional with $\theta'$ defined by $$\theta'(\begin{bmatrix} 1 &&y&z \\ &1&x&y \\ &&1& \\ &&&1 \end{bmatrix}) = \psi(a'x+b'y+c'z) = \psi ({{\rm tr}}(S'{{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}}{{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}y&z\\x&y\end{array}\right]}})), \quad x,y,z \in F,$$ where $$S' = {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a'&b'/2\\b'/2&c'\end{array}\right]}} = \lambda\,^t h S h\quad \text{with}\quad h={{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}} g^{-1} {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}}.$$ Since ${{\rm disc}}(S') = {{\rm disc}}(S)$, the quadratic extension $L'$ associated to $S'$ is the same as the quadratic extension $L$ associated to $S$. There is an isomorphism of $F$-algebras $$A'=A_{S'} \stackrel{\sim}{\longrightarrow} A= A_S, \qquad a \mapsto g^{-1} a g.$$ Let $T' = A'{}^\times$. Finally, $\Lambda':T' \to {{\mathbb C}}^\times$ is given by $\Lambda'(t') = \Lambda(g^{-1}t'g)$ for $t' \in T'$.
For example, assume that $\beta'$ is a *split* Bessel functional, i.e., a Bessel functional for which the discriminant of the associated symmetric matrix $S'$ is the class $F^{\times2}$. By Sect. \[twobytwosubsec\] there exists $m$ as above such that $\beta'=m\cdot \beta$, where the symmetric matrix $S$ associated to the $(\Lambda,\theta)$-Bessel functional $\beta$ is $$\label{splitSeq}
S = {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1/2\\1/2&\end{array}\right]}},$$ and $$\label{splitthetaeq}
\theta(\begin{bmatrix} 1 &&y&z \\ &1&x&y \\ &&1& \\ &&&1 \end{bmatrix}) = \psi(y).$$ In this case $$\label{splitthetaTeq}
T=T_S=\{\begin{bmatrix}a\\&b\\&&a\\&&&b\end{bmatrix}:\:a,b\in F^\times\}.$$ Sometimes when working with split Bessel functionals it is more convenient to work with the conjugate group $$\label{splitthetaconjNeq}
N_{\mathrm{alt}}=s_2^{-1}Ns_2=\begin{bmatrix}1&*&&*\\&1\\&*&1&*\\&&&1\end{bmatrix}$$ and the conjugate character $$\label{splitthetaconjeq}
\theta_{\mathrm{alt}}(\begin{bmatrix}1&-y&&z\\&1&&\\&x&1&y\\&&&1\end{bmatrix})=\psi(y).$$ In this case the stabilizer of $\theta_{\mathrm{alt}}$ is $$\label{splitthetaconjTeq}
T_{\mathrm{alt}}=\{\begin{bmatrix}a\\&a\\&&b\\&&&b\end{bmatrix}:\:a,b\in F^\times\}.$$
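For the split matrix $S$ above, the algebra $A_S$ consists of the diagonal matrices ${\rm diag}(x-y/2,\,x+y/2)$ with $x,y\in F$, so that $T_S$ is the diagonal torus of ${{\rm GL}}(2,F)$. Since $\det(t)t'=t$ for diagonal $t$, the embedding takes the form $$\begin{bmatrix}a&\\&b\end{bmatrix}\longmapsto\begin{bmatrix}a\\&b\\&&a\\&&&b\end{bmatrix},$$ which is exactly the description of $T$ given above.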
Galois conjugation of Bessel functionals {#galoissubsec}
----------------------------------------
The action of $M$ can be used to define the Galois conjugate of a Bessel functional. Let $S$ be as in , and let $A=A_S$ and $T=T_S$. Define $$\label{hgammaeq}
h_\gamma=
\left\{
\begin{array}{ll}
\left[\begin{matrix} 1& b/a\\ & -1 \end{matrix} \right] & \text{if $a \neq 0$}, \\
\left[\begin{matrix} 1& \\ -b/c& -1 \end{matrix} \right] & \text{if $a = 0$ and $c \neq 0$}, \\
\left[\begin{matrix} &1 \\ 1 & \end{matrix} \right] & \text{if $a=c=0$}.
\end{array}
\right.$$ Then $h_\gamma \in {{\rm GL}}(2,F)$, $h_\gamma^2=1$, $S={}^t h_\gamma S h_\gamma $ and $\det (h_\gamma) = -1$. Set $$g_\gamma = {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\#3&\end{array}\right]}} h_\gamma^{-1} {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}} h_\gamma^{-1} {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}}= {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}} h_\gamma {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}&1\\1&\end{array}\right]}} \in {{\rm GL}}(2,F), \quad m_\gamma = \begin{bmatrix} g_\gamma & \\ & g_\gamma' \end{bmatrix} \in M.$$ We have $g_\gamma Tg_\gamma^{-1} = T$, and the diagrams $$\begin{CD}
@V\text{conjugation by $g_\gamma$}VV @VV\gamma V\\
A @>\sim>> L
\end{CD}
\qquad\qquad\qquad\qquad\qquad\qquad
\begin{CD}
T @>\sim>> L^\times \\
@V\text{conjugation by $g_\gamma$}VV @VV\gamma V\\
T @>\sim>> L^\times
\end{CD}$$ commute. Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$, and let $\beta$ be a $(\Lambda,\theta)$-Bessel functional for $\pi$. We refer to $m_\gamma \cdot\beta$ as the *Galois conjugate* of $\beta$. We note that $m_\gamma\cdot \beta$ is a $(\Lambda\circ \gamma, \theta)$-Bessel functional for $\pi$. Hence, $$\label{besselgaloiseq}
{{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda\otimes\theta})\cong{{\rm Hom}}_D(\pi,{{\mathbb C}}_{(\Lambda\circ\gamma)\otimes\theta}).$$ In combination with , we get $$\label{besselgaloiseq2}
{{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda\otimes\theta})\cong{{\rm Hom}}_D(\pi^\vee,{{\mathbb C}}_{\Lambda^{-1}\otimes\theta}).$$
Waldspurger functionals {#waldfuncsubsec}
-----------------------
Our analysis of Bessel functionals will often involve a similar type of functional on representations of ${{\rm GL}}(2,F)$. Let $\theta$ and $S$ be as in , and let $T\cong L^\times$ be the associated subgroup of ${{\rm GL}}(2,F)$. Let $\Lambda$ be a character of $T$. Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GL}}(2,F)$. A *$(\Lambda,\theta)$-Waldspurger functional* on $\pi$ is a non-zero linear map $\beta:\:V\rightarrow{{\mathbb C}}$ such that $$\beta(\pi(g)v)=\Lambda(g)\beta(v)\qquad\text{for all }v\in V\text{ and }g\in T.$$ For trivial $\Lambda$, such functionals were the subject of Proposition 9 of [@Wald1980] and Proposition 8 of [@Wald1985]. For general $\Lambda$ see [@Tu1983], [@Saito1993] and Lemme 8 of [@Wald1985]. The $(\Lambda,\theta)$-Waldspurger functionals are the non-zero elements of the space ${\rm Hom}_T(\pi,{{\mathbb C}}_\Lambda)$, and it is known that this space is at most one-dimensional. An obvious necessary condition for ${\rm Hom}_T(\pi,{{\mathbb C}}_\Lambda)\neq0$ is that $\Lambda\big|_{F^\times}$ equals $\omega_\pi$, the central character of $\pi$. By Sect. \[galoissubsec\], Galois conjugation on $T$ is given by conjugation by an element of ${{\rm GL}}(2,F)$. Hence, $$\label{waldspurgerpropeq1}
{{\rm Hom}}_T(\pi,{{\mathbb C}}_\Lambda)\cong{{\rm Hom}}_T(\pi,{{\mathbb C}}_{\Lambda\circ\gamma}).$$ Using $\pi^\vee\cong\omega_\pi^{-1}\pi$, one verifies that $$\label{waldspurgerpropeq3}
{{\rm Hom}}_T(\pi,{{\mathbb C}}_\Lambda)\cong{{\rm Hom}}_T(\pi^\vee,{{\mathbb C}}_{(\Lambda\circ\gamma)^{-1}}).$$ In combination with , we also have $$\label{waldspurgerpropeq4}
{{\rm Hom}}_T(\pi,{{\mathbb C}}_\Lambda)\cong{{\rm Hom}}_T(\pi^\vee,{{\mathbb C}}_{\Lambda^{-1}}).$$ Let $\pi^{\mathrm{JL}}$ denote the Jacquet-Langlands lifting of $\pi$ in the case that $\pi$ is a discrete series representation, and $0$ otherwise. Then, by the discussion on p. 1297 of [@Tu1983], $$\label{waldspurgerpropeq2}
\dim{{\rm Hom}}_T(\pi,{{\mathbb C}}_\Lambda)+\dim{{\rm Hom}}_T(\pi^{\mathrm{JL}},{{\mathbb C}}_\Lambda)=1.$$ It is easy to see that, for any character $\sigma$ of $F^\times$, $$\label{Waldspurgertwistingeq}
{\rm Hom}_T(\pi,{{\mathbb C}}_\Lambda)={\rm Hom}_T(\sigma\pi,{{\mathbb C}}_{(\sigma\circ {{\rm N}}_{L/F})\Lambda}).$$ For $\Lambda$ such that $\Lambda\big|_{F^\times}=\sigma^2$, it is known that $$\label{StGL2Waldspurgereq2}
\dim({\rm Hom}_T(\sigma{{\rm St}}_{{{\rm GL}}(2)},{{\mathbb C}}_\Lambda))=\left\{\begin{array}{l@{\qquad}l}
0&\text{ if $L$ is a field and $\Lambda=\sigma\circ {{\rm N}}_{L/F}$},\\
1&\text{ otherwise};\end{array}\right.$$ see Proposition 1.7 and Theorem 2.4 of [@Tu1983]. As in the case of Bessel functionals, we call a Waldspurger functional *split* if the discriminant of the associated matrix $S$ lies in $F^{\times2}$. By Lemme 8 of [@Wald1985], an irreducible, admissible, infinite-dimensional representation of ${{\rm GL}}(2,F)$ admits a split $(\Lambda,\theta)$-Waldspurger functional with respect to any character $\Lambda$ of $T$ that satisfies $\Lambda\big|_{F^\times}=\omega_\pi$ (this can also be proved in a way analogous to the proof of Proposition \[GSp4genericprop\] below, utilizing the standard zeta integrals for ${{\rm GL}}(2)$).
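The twisting identity ${\rm Hom}_T(\pi,{{\mathbb C}}_\Lambda)={\rm Hom}_T(\sigma\pi,{{\mathbb C}}_{(\sigma\circ {{\rm N}}_{L/F})\Lambda})$ above can be seen directly. Here $\sigma\pi$ denotes the twist $(\sigma\circ\det)\otimes\pi$, and under the isomorphism $T\cong L^\times$ the determinant restricts to the norm ${{\rm N}}_{L/F}$. Hence, for $\beta\in{\rm Hom}_T(\pi,{{\mathbb C}}_\Lambda)$, $t\in T$ and $v\in V$, $$\beta\big((\sigma\pi)(t)v\big)=\sigma(\det(t))\,\beta(\pi(t)v)=\sigma({{\rm N}}_{L/F}(t))\,\Lambda(t)\,\beta(v),$$ so that the same linear functional realizes both spaces.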
Split Bessel functionals
========================
Irreducible, admissible, generic representations of ${{\rm GSp}}(4,F)$ admit a theory of zeta integrals, and every zeta integral gives rise to a split Bessel functional. As a consequence, generic representations admit *all* possible split Bessel functionals; see Proposition \[GSp4genericprop\] below for a precise formulation.
To put the theory of zeta integrals on a solid foundation, we will use $P_3$-theory. The group $P_3$, defined below, plays a role in the representation theory of ${{\rm GSp}}(4)$ similar to that of the “mirabolic” subgroup in the theory for ${{\rm GL}}(n)$. Some of what follows is a generalization of Sects. 2.5 and 2.6 of [@NF], where $P_3$-theory was developed under the assumption of trivial central character. The general case requires only minimal modifications.
While every generic representation admits split Bessel functionals, we will see that the converse is not true. $P_3$-theory can also be used to identify the non-generic representations that admit a split Bessel functional. This is explained in Sect. \[splitbesselnongenericsec\] below.
The group and its representations
---------------------------------
Let $P_3$ be the subgroup of ${{\rm GL}}(3,F)$ defined as the intersection $$P_3={{\rm GL}}(3,F)\cap\begin{bmatrix} *&*&*\\ *&*&*\\&&1\end{bmatrix}.$$ We recall some facts about this group, following [@BeZe1976]. Let $$U_3=P_3\cap\begin{bmatrix}1&*&*\\&1&*\\&&1\end{bmatrix},\qquad N_3=P_3\cap\begin{bmatrix}1&&*\\&1&*\\&&1\end{bmatrix}.$$ We define characters $\Theta$ and $\Theta'$ of $U_3$ by $$\Theta(\begin{bmatrix}1&u_{12}&*\\&1&u_{23}\\&&1\end{bmatrix})=\psi(u_{12}+u_{23}),\qquad
\Theta'(\begin{bmatrix}1&u_{12}&*\\&1&u_{23}\\&&1\end{bmatrix})=\psi(u_{23}).$$ If $(\pi,V)$ is a smooth representation of $P_3$, we may consider the twisted Jacquet modules $$V_{U_3,\Theta}=V/V(U_3,\Theta),\qquad V_{U_3,\Theta'}=V/V(U_3,\Theta')$$ where $V(U_3,\Theta)$ (resp. $V(U_3,\Theta')$) is spanned by all elements of the form $\pi(u)v-\Theta(u)v$ (resp. $\pi(u)v-\Theta'(u)v$) for $v$ in $V$ and $u$ in $U_3$. Note that $V_{U_3,\Theta'}$ carries an action of the subgroup $$\begin{bmatrix} *&&\\&1\\&&1\end{bmatrix}\cong F^\times$$ of $P_3$. We may also consider the Jacquet module $V_{N_3}=V/V(N_3)$, where $V(N_3)$ is the space spanned by all vectors of the form $\pi(u)v-v$ for $v$ in $V$ and $u$ in $N_3$. Note that $V_{N_3}$ carries an action of the subgroup $$\begin{bmatrix} *&*&\\ *&*\\&&1\end{bmatrix}\cong{{\rm GL}}(2,F)$$ of $P_3$.
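As a quick check that $\Theta$ and $\Theta'$ are indeed characters of $U_3$: in a product $uu'$ of elements of $U_3$ the $(1,2)$ and $(2,3)$ entries simply add, while only the $(1,3)$ entry picks up the cross term $u_{12}u_{23}'$, on which neither character depends. Hence $$\Theta(uu')=\psi\big((u_{12}+u_{12}')+(u_{23}+u_{23}')\big)=\Theta(u)\Theta(u'),$$ and similarly for $\Theta'$.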
Next we define three classes of smooth representations of $P_3$, associated with the groups ${{\rm GL}}(0)$, ${{\rm GL}}(1)$ and ${{\rm GL}}(2)$. Let $$\label{tauP3GL0eq}
\tau^{P_3}_{{{\rm GL}}(0)}(1):={\mathrm{c}\text{-}\mathrm{Ind}}^{P_3}_{U_3}(\Theta),$$ where ${\mathrm{c}\text{-}\mathrm{Ind}}$ denotes compact induction. Then $\tau^{P_3}_{{{\rm GL}}(0)}(1)$ is a smooth, irreducible representation of $P_3$. Next, let $\chi$ be a smooth representation of ${{\rm GL}}(1,F)\cong F^\times$. Define a representation $\chi\otimes\Theta'$ of the subgroup $$\begin{bmatrix}*&*&*\\&1&*\\&&1\end{bmatrix}$$ of $P_3$ by $$(\chi\otimes\Theta')(\begin{bmatrix}a&*&*\\&1&y\\&&1\end{bmatrix})=\chi(a)\psi(y).$$ Then $$\tau^{P_3}_{{{\rm GL}}(1)}(\chi):={\mathrm{c}\text{-}\mathrm{Ind}}^{P_3}_{\left[\begin{smallmatrix} *&*&*\\&1&*\\&&1\end{smallmatrix}\right]}(\chi\otimes\Theta')$$ is a smooth representation of $P_3$. It is irreducible if and only if $\chi$ is one-dimensional. Finally, let $\rho$ be a smooth representation of ${{\rm GL}}(2,F)$. We define the representation $\tau_{{{\rm GL}}(2)}^{P_3}(\rho)$ of $P_3$ to have the same space as $\rho$, and action given by $$\label{tauP3GL2eq}
\tau_{{{\rm GL}}(2)}^{P_3}(\rho)(\begin{bmatrix}a&b&*\\c&d&*\\&&1\end{bmatrix})=\rho(\begin{bmatrix}a&b\\c&d\end{bmatrix}).$$ Evidently, $\tau_{{{\rm GL}}(2)}^{P_3}(\rho)$ is irreducible if and only if $\rho$ is irreducible.
\[P3representationsprop\] Let notations be as above.
1. Every irreducible, smooth representation of $P_3$ is isomorphic to exactly one of $$\tau^{P_3}_{{{\rm GL}}(0)}(1),\qquad
\tau^{P_3}_{{{\rm GL}}(1)}(\chi),\qquad
\tau_{{{\rm GL}}(2)}^{P_3}(\rho),$$ where $\chi$ is a character of $F^\times$ and $\rho$ is an irreducible, admissible representation of ${{\rm GL}}(2,F)$. Moreover, the equivalence classes of $\chi$ and $\rho$ are uniquely determined.
2. Let $(\pi,V)$ be a smooth representation of $P_3$ of finite length. Then there exists a chain of $P_3$ subspaces $$0\subset V_2\subset V_1\subset V_0=V$$ with the following properties: $$\begin{aligned}
V_2&\cong \dim(V_{U_3,\Theta})\cdot\tau^{P_3}_{{{\rm GL}}(0)}(1),\\
V_1/V_2&\cong\tau^{P_3}_{{{\rm GL}}(1)}(V_{U_3,\Theta'}),\\
V_0/V_1&\cong\tau^{P_3}_{{{\rm GL}}(2)}(V_{N_3}).
\end{aligned}$$
See 5.1 – 5.15 of [@BeZe1976].
$P_3$-theory for arbitrary central character
---------------------------------------
It is easy to verify that any element of the Klingen parabolic subgroup $Q$ can be written in a unique way as $$\label{Qelementeq}
\begin{bmatrix}ad-bc\\&a&b\\&c&d\\&&&1\end{bmatrix}\begin{bmatrix}1&\,-y&x&z\\&1&&x\\&&1&y\\&&&1\end{bmatrix}\begin{bmatrix}u\\&u\\&&u\\&&&u\end{bmatrix}$$ with $\begin{bmatrix}a&b\\c&d\end{bmatrix}\in{{\rm GL}}(2,F)$, $x,y,z\in F$, and $u\in F^\times$. Let $Z^J$ be the center of the Jacobi group, consisting of all elements of ${{\rm GSp}}(4)$ of the form $$\label{ZJdefeq}
\begin{bmatrix}1&&&*\\&1\\&&1\\&&&1\end{bmatrix}.$$ Evidently, $Z^J$ is a normal subgroup of $Q$ with $Z^J\cong F$. Let $(\pi,V)$ be a smooth representation of ${{\rm GSp}}(4,F)$. Let $V(Z^J)$ be the span of all vectors $v-\pi(z)v$, where $v$ runs through $V$ and $z$ runs through $Z^J$. Then $V(Z^J)$ is preserved by the action of $Q$. Hence $Q$ acts on the quotient $$V_{Z^J}:=V/V(Z^J).$$ Let $\bar Q$ be the subgroup of $Q$ consisting of all elements of the form with $u=1$, i.e., $$\bar Q={{\rm GSp}}(4)\cap\begin{bmatrix} *&*&*&*\\&*&*&*\\&*&*&*\\&&&1\end{bmatrix}.$$ The map $$\label{QP3mapeq}
i(\begin{bmatrix}ad-bc\\&a&b\\&c&d\\&&&1\end{bmatrix}\begin{bmatrix}1&\,-y&x&z\\&1&&x\\&&1&y\\&&&1\end{bmatrix})=\begin{bmatrix}a&b\\c&d\\&&1\end{bmatrix}\begin{bmatrix}1&&x\\&1&y\\&&1\end{bmatrix}$$ establishes an isomorphism $\bar Q/Z^J\cong P_3$.
Recall the character $\psi_{c_1,c_2}$ of $U$ defined in . Note that $U$ maps onto $U_3$ under the map , and that the diagrams $$\xymatrix{U\ar[r]^i\ar[dr]_{\psi_{-1,1}}&U_3\ar[d]^\Theta\\
&{{\mathbb C}}^\times}
\qquad\qquad\qquad
\xymatrix{U\ar[r]^i\ar[dr]_{\psi_{-1,0}}&U_3\ar[d]^{\Theta'}\\
&{{\mathbb C}}^\times}$$ are commutative. The radical $N_Q$ (see ) maps onto $N_3$ under the map . The following theorem is exactly like Theorem 2.5.3 of [@NF], except that the hypothesis of trivial central character is removed.
\[finitelength\] Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$. The quotient $V_{Z^J} =V/V(Z^J)$ is a smooth representation of $\bar Q/Z^J$, and hence, via the map , defines a smooth representation of $P_3$. As a representation of $P_3$, $V_{Z^J}$ has finite length. Hence, $V_{Z^J}$ has a finite filtration by $P_3$ subspaces such that the successive quotients are irreducible and of the form $\tau_{{{\rm GL}}(0)}^{P_3} (1)$, $\tau_{{{\rm GL}}(1)}^{P_3} (\chi)$ or $\tau_{{{\rm GL}}(2)}^{P_3} ( \rho)$ for some character $\chi$ of $F^\times$, or some irreducible, admissible representation $\rho$ of ${{\rm GL}}(2,F)$. Moreover, the following statements hold:
1. There exists a chain of $P_3$ subspaces $$\label{V0V1V2def}
0 \subset V_2 \subset V_1 \subset V_0 = V_{Z^J}$$ such that $$\begin{aligned}
V_2 & \cong \dim {{\rm Hom}}_{U} (V, \psi_{-1,1} ) \cdot \tau_{{{\rm GL}}(0)}^{P_3} (1), \\
V_1 / V_2 & \cong \tau_{{{\rm GL}}(1)}^{P_3} ( V_{U, \psi_{-1,0}} ), \\
V_0/V_1 & \cong \tau_{{{\rm GL}}(2)}^{P_3} (V_{N_Q}).
\end{aligned}$$ Here, the vector space $V_{U,\psi_{-1,0}}$ admits a smooth action of ${{\rm GL}}(1,F) \cong F^\times$ induced by the operators $$\pi ( \begin{bmatrix} a &&& \\ &a&& \\ &&1& \\ &&& 1 \end{bmatrix} ),\quad a\in F^\times,$$ and $V_{N_Q}$ admits a smooth action of ${{\rm GL}}(2,F)$ induced by the operators $$\pi ( \begin{bmatrix} \det g && \\ & g& \\ && 1 \end{bmatrix} ), \quad g \in {{\rm GL}}(2,F).$$
2. The representation $\pi$ is generic if and only if $V_2 \neq 0$, and if $\pi$ is generic, then $V_2\cong \tau_{{{\rm GL}}(0)}^{P_3}(1)$.
3. We have $V_2= V_{Z^J}$ if and only if $\pi$ is supercuspidal. If $\pi$ is supercuspidal and generic, then $V_{Z^J}= V_2 \cong \tau_{{{\rm GL}}(0)}^{P_3}(1)$ is non-zero and irreducible. If $\pi$ is supercuspidal and non-generic, then $V_{Z^J}=V_2 =0$.
This is an application of Proposition \[P3representationsprop\]. See Theorem 2.5.3 of [@NF] for the details of the proof.
Given an irreducible, admissible representation $(\pi,V)$ of ${{\rm GSp}}(4,F)$, one can calculate the semisimplifications of the quotients $V_0/V_1$ and $V_1/V_2$ in the $P_3$-filtration from the Jacquet modules of $\pi$ with respect to the Siegel and Klingen parabolic subgroups. The results are exactly the same as in Appendix A.4 of [@NF] (where it was assumed that $\pi$ has trivial central character).
Note that there is a typo in Table A.5 of [@NF]: The entry for Vd in the “$\text{s.s.}(V_0/V_1)$” column should be $\tau_{{{\rm GL}}(2)}^{P_3}(\nu(\nu^{-1/2}\sigma \times \nu^{-1/2}\xi\sigma))$.
Generic representations and zeta integrals
------------------------------------------
Let $\pi$ be an irreducible, admissible, generic representation of ${{\rm GSp}}(4,F)$. Recall from Sect. \[representationssec\] that $\mathcal{W}(\pi,\psi_{c_1,c_2})$ denotes the Whittaker model of $\pi$ with respect to the character $\psi_{c_1,c_2}$ of $U$. For $W$ in $\mathcal{W}(\pi,\psi_{c_1,c_2})$ and $s \in {{\mathbb C}}$, we define the *zeta integral* $Z(s,W)$ by $$\label{localzetaintdefeq}
Z(s,W)=\int\limits_{F^\times}\int\limits_FW(\left[\begin{matrix}a\\&a\\
&x&1\\&&&1\end{matrix}\right])|a|^{s-3/2}\,dx\,d^\times a.$$ It was proved in Proposition 2.6.3 of [@NF] that there exists a real number $s_0$, independent of $W$, such that $Z(s,W)$ converges for $\Re(s)>s_0$ to an element of $\mathbb C(q^{-s})$. In particular, all zeta integrals have meromorphic continuation to all of ${{\mathbb C}}$. Let $I(\pi)$ be the ${{\mathbb C}}$-vector subspace of ${{\mathbb C}}(q^{-s})$ spanned by all $Z(s,W)$ for $W$ in $\mathcal{W}(\pi,\psi_{c_1,c_2})$. It is easy to see that $I(\pi)$ is independent of the choice of $\psi$ and $c_1,c_2$ in $F^\times$.
\[basicpropertieszetaintegrals\] Let $\pi$ be a generic, irreducible, admissible representation of ${{\rm GSp}}(4,F)$. Then $I(\pi)$ is a non-zero $\mathbb C[q^{-s},q^s]$-module containing $\mathbb C$, and there exists $R(X) \in \mathbb C[X]$ such that $R(q^{-s}) I(\pi) \subset \mathbb C[q^{-s},q^s]$, so that $I(\pi)$ is a fractional ideal of the principal ideal domain $\mathbb C[q^{-s},q^s]$ whose quotient field is $\mathbb C(q^{-s})$. The fractional ideal $I(\pi)$ admits a generator of the form $1/Q(q^{-s})$ with $Q(0)=1$, where $Q(X) \in \mathbb C[X]$.
The proof is almost word for word the same as that of Proposition 2.6.4 of [@NF]. The only difference is that, in the calculation starting at the bottom of p. 79 of [@NF], the element $q$ is taken from $\bar Q$ instead of $Q$.
The quotient $1/Q(q^{-s})$ in this proposition is called the *$L$-factor* of $\pi$, and denoted by $L(s,\pi)$. If $\pi$ is supercuspidal, then $L(s,\pi)=1$. The $L$-factors for all irreducible, admissible, generic, non-supercuspidal representations are listed in Table A.8 of [@NF]. By definition, $$\label{zetaLquotienteq}
\frac{Z(s,W)}{L(s,\pi)}\in{{\mathbb C}}[q^s,q^{-s}]$$ for all $W$ in $\mathcal{W}(\pi,\psi_{c_1,c_2})$.
Generic representations admit split Bessel functionals
------------------------------------------------------
In this section we will prove that an irreducible, admissible, generic representation of ${{\rm GSp}}(4,F)$ admits split Bessel functionals with respect to *all* characters $\Lambda$ of $T$. This is a characteristic feature of generic representations, as will follow from Proposition \[nongenericsplitproposition\] in the next section.
\[GSp4genericlemma\] Let $(\pi,V)$ be an irreducible, admissible, generic representation of ${{\rm GSp}}(4,F)$. Let $\sigma$ be a unitary character of $F^\times$, and let $s\in{{\mathbb C}}$ be arbitrary. Then there exists a non-zero functional $f_{s,\sigma}:\:V\rightarrow{{\mathbb C}}$ with the following properties.
1. For all $x,y,z\in F$ and $v\in V$, $$\label{GSp4genericlemmaeq1}
f_{s,\sigma}(\pi(\begin{bmatrix}1&&y&z\\&1&x&y\\&&1\\&&&1\end{bmatrix})v)
=\psi(y)f_{s,\sigma}(v).$$
2. For all $a\in F^\times$ and $v\in V$, $$\label{GSp4genericlemmaeq2}
f_{s,\sigma}(\pi(\begin{bmatrix}a\\&1\\&&a\\&&&1\end{bmatrix})v)
=\sigma(a)^{-1}|a|^{-s+1/2}f_{s,\sigma}(v).$$
We may assume that $V=\mathcal{W}(\pi,\psi_{c_1,c_2})$ with $c_1=1$. Let $s_0\in{{\mathbb R}}$ be such that $Z(s,W)$ is absolutely convergent for $\Re (s)>s_0$. Then the integral $$\label{ZsWsigmadefeq}
Z_\sigma(s,W)=\int\limits_{F^\times}\int\limits_F
W(\begin{bmatrix}a\\&a\\&x&1\\&&&1\end{bmatrix})|a|^{s-3/2}\sigma(a)\,dx\,d^\times a$$ is also absolutely convergent for $\Re(s)>s_0$, since $\sigma$ is unitary. Note that these are the zeta integrals for the twisted representation $\sigma\pi$. Therefore, by , the quotient $Z_\sigma(s,W)/L(s,\sigma\pi)$ is in ${{\mathbb C}}[q^{-s},q^s]$ for all $W\in\mathcal{W}(\pi,\psi_{c_1,c_2})$. We may therefore define, for any complex $s$, $$\label{GSp4genericlemmaeq3}
f_{s,\sigma}(W)=\frac{Z_\sigma(s,\pi(s_2)W)}{L(s,\sigma\pi)},$$ where $s_2$ is as in (\[s1s2defeq\]). Straightforward calculations using the definition show that and are satisfied for $\Re (s)>s_0$. Since both sides depend holomorphically on $s$, these identities hold on all of ${{\mathbb C}}$.
\[GSp4genericprop\] Let $(\pi,V)$ be an irreducible, admissible and generic representation of ${{\rm GSp}}(4,F)$. Let $\omega_\pi$ be the central character of $\pi$. Then $\pi$ admits a split $(\Lambda,\theta)$-Bessel functional with respect to any character $\Lambda$ of $T$ that satisfies $\Lambda\big|_{F^\times}=\omega_\pi$.
Let $\theta$ be as in with $T$ as in . Let $s\in{{\mathbb C}}$ and $\sigma$ be a unitary character of $F^\times$ such that $$\Lambda(\begin{bmatrix}a\\&1\\&&a\\&&&1\end{bmatrix})=\sigma(a)^{-1}|a|^{-s+1/2}
\qquad\text{for all }a\in F^\times.$$ Let $f_{s,\sigma}$ be as in Lemma \[GSp4genericlemma\]. By , $$\label{GSp4genericpropeq1}
f_{s,\sigma}(\pi(\begin{bmatrix}a\\&1\\&&a\\&&&1\end{bmatrix})v)
=\Lambda(a)f_{s,\sigma}(v)\qquad\text{for all }a\in F^\times.$$ Since $\Lambda\big|_{F^\times}=\omega_\pi$ we have in fact $f_{s,\sigma}(\pi(t)v)=\Lambda(t)f_{s,\sigma}(v)$ for all $t\in T$. Hence $f_{s,\sigma}$ is a Bessel functional as desired.
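For completeness, we indicate why a pair $(s,\sigma)$ as in the preceding proof exists. The map $\chi:a\mapsto\Lambda(\begin{bmatrix}a\\&1\\&&a\\&&&1\end{bmatrix})$ is a smooth character of $F^\times$; since $\chi(\mathfrak o^\times)$ is a compact subgroup of ${{\mathbb C}}^\times$, it lies on the unit circle, so that (with $\varpi$ a uniformizer, $v$ the normalized valuation, and $q$ the cardinality of the residue field) $$|\chi(a)|=|\chi(\varpi)|^{v(a)}=|a|^{r},\qquad r=-\log_q|\chi(\varpi)|.$$ Then $\chi_0:=\chi\,|\cdot|^{-r}$ is unitary, and one may take $\sigma=\chi_0^{-1}$ and $s=\tfrac12-r$.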
Split Bessel functionals for non-generic representations {#splitbesselnongenericsec}
--------------------------------------------------------
The converse of Proposition \[GSp4genericprop\] is not true: There exist irreducible, admissible, non-generic representations of ${{\rm GSp}}(4,F)$ which admit split Bessel functionals. This is a consequence of the following proposition. In fact, using this result and the $P_3$-filtrations listed in Table A.6 of [@NF], one can precisely identify which non-generic representations admit split Bessel functionals. Unlike in the generic case, the possible characters $\Lambda$ of $T$ are restricted to a finite set.
\[nongenericsplitproposition\] Let $(\pi,V)$ be an irreducible, admissible and non-generic representation of ${{\rm GSp}}(4,F)$. Since $\pi$ is non-generic, $V_2=0$ in the $P_3$-filtration of $\pi$, so that $V_1=V_1/V_2$; let the semisimplification of $V_1$ be given by $\sum_{i=1}^n\tau^{P_3}_{{{\rm GL}}(1)}(\chi_i)$ with characters $\chi_i$ of $F^\times$.
1. $\pi$ admits a split Bessel functional if and only if the quotient $V_1$ in the $P_3$-filtration of $\pi$ is non-zero.
2. Let $\beta$ be a non-zero $(\Lambda,\theta)$-Bessel functional, with $\theta$ as in , and a character $\Lambda$ of the group $T$ explicitly given in . Then there exists an $i$ for which $$\label{nongenericsplitpropositioneq1}
\Lambda(\begin{bmatrix}a\\&1\\&&a\\&&&1\end{bmatrix})=|a|^{-1}\chi_i(a)\qquad\text{for all}\quad a\in F^\times.$$
3. If $V_1$ is non-zero, then there exists an $i$ such that $\pi$ admits a split $(\Lambda,\theta)$-Bessel functional with respect to a character $\Lambda$ of $T$ satisfying .
4. The space of split $(\Lambda,\theta)$-Bessel functionals is zero or one-dimensional.
5. The representation $\pi$ does not admit any split Bessel functionals if and only if $\pi$ is of type IVd, Vd, VIb, VIIIb, IXb, or is supercuspidal.
Let $N_{\mathrm{alt}}$ be as in and $\theta_{\mathrm{alt}}$ be as in . We use the fact that any $(\Lambda,\theta_{\mathrm{alt}})$-Bessel functional factors through the twisted Jacquet module $V_{ N_{\mathrm{alt}},\theta_{\mathrm{alt}} }$. To calculate this Jacquet module, we use the $P_3$-filtration of Theorem \[finitelength\]. Since $\pi$ is non-generic, the $P_3$-filtration simplifies to $$0\subset V_1\subset V_0=V_{Z^J},$$ with $V_1$ of type $\tau^{P_3}_{{{\rm GL}}(1)}$ and $V_0/V_1$ of type $\tau^{P_3}_{{{\rm GL}}(2)}$. Taking further twisted Jacquet modules and observing Lemma 2.5.6 of [@NF], it follows that $$V_{N_{\mathrm{alt}},\theta_{\mathrm{alt}}}=(V_1)_{\left[\begin{smallmatrix}1\\ *&1&*\\&&1\end{smallmatrix}\right],\psi},\qquad\text{where}\quad\psi(\begin{bmatrix}1\\x&1&y\\&&1\end{bmatrix})=\psi(y).$$ By Lemma 2.5.5 of [@NF], after suitable renaming, $$0=J_n\subset\ldots\subset J_1\subset J_0=(V_1)_{\left[\begin{smallmatrix}1\\ *&1&*\\&&1\end{smallmatrix}\right],\psi},$$ where $J_i/J_{i+1}$ is one-dimensional, and ${\rm diag}(a,1,1)$ acts on $J_i/J_{i+1}$ by $|a|^{-1}\chi_i(a)$. Table A.6 of [@NF] shows that all the $\chi_i$ are pairwise distinct. This proves i), ii), iii) and iv).
v\) If $\pi$ is one of the representations mentioned in v), then $V_1/V_2=0$ by Theorem \[finitelength\] (in the supercuspidal case), or by Table A.6 in [@NF] (in the non-supercuspidal case). By part i), $\pi$ does not admit a split Bessel functional. For any representation not mentioned in v), the quotient $V_1/V_2$ is non-zero, so that a split Bessel functional exists by iii).
Theta correspondences
=====================
Let $S$ be as in , and let $\theta=\theta_S$ be as in . Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$, and let $(\sigma,W)$ be an irreducible, admissible representation of ${{\rm GO}}(X)$, where $X$ is an even-dimensional, symmetric, bilinear space. Let $\omega$ be the Weil representation of the group $R$, consisting of the pairs $(g,h)\in{{\rm GSp}}(4,F)\times{{\rm GO}}(X)$ with the same similitude factors, on the Schwartz space $\mathcal{S}(X^2)$. Assume that the pair $(\pi,\sigma)$ occurs in the theta correspondence defined by $\omega$, i.e., ${{\rm Hom}}_R(\omega,\pi\otimes\sigma)\neq0$. It is a theme in the theory of the theta correspondence to relate the twisted Jacquet module $V_{N,\theta}$ of $\pi$ to invariant functionals on $\sigma$; a necessary condition for the non-vanishing of $V_{N,\theta}$ is that $X$ represents $S$. See for example the remarks in Sect. 6 of [@Roberts1999].
Applications to $(\Lambda,\theta_S)$-Bessel functionals also require the involvement of $T$. The idea is roughly as follows. The group $T$ is contained in $M$. Moreover, $\omega(m,h)$ for $(m,h)$ in $R\cap(M\times{{\rm GO}}(X))$ is given by an action of such pairs on $X^2$. The study of this action leads to the definition of certain compatible embeddings of $T$ into ${{\rm GO}}(X)$. Using these embeddings allows us to show that if $\pi$ has a $(\Lambda,\theta)$-Bessel functional, then $\sigma$ admits a non-zero functional transforming according to $\Lambda^{-1}$.
After setting up notations and studying the embeddings of $T$ mentioned above, we obtain the main result of this section, Theorem \[fourdimthetatheorem\]. Section \[thetaapplicationssec\] contains the applications to Bessel functionals.
The spaces {#thespacessubsec}
----------
In this section we will consider non-degenerate symmetric bilinear spaces $(X,\langle\cdot, \cdot \rangle)$ over $F$ such that $$\label{Xtypeseq}
\text{$\dim X =2$, or $\dim X =4$ and ${{\rm disc}}(X)=1$}.$$ We begin by recalling the constructions of the isomorphism classes of these spaces, and the characterization of their similitude groups. Let $m \in F^\times$, $A=A_{\left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix} \right]}$ and $T=A^\times$ be as in Sect. \[anotheralgebrasubsec\], so that $$\label{specialATeq}
A =\{\begin{bmatrix}x&-y\\-ym&x\end{bmatrix}:x,y\in F\}, \qquad T=A^\times =\{\begin{bmatrix}x&-y\\-ym&x\end{bmatrix}:x,y\in F,\:x^2 -y^2m \neq 0\}.$$ Let $\lambda \in F^\times$. Define a non-degenerate two-dimensional symmetric bilinear space $(X_{m,\lambda},\langle\cdot,\cdot\rangle_{m,\lambda})$ by $$\label{twodimexeq}
X_{m,\lambda}=A_{\left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix} \right]}, \qquad \langle x_1,x_2 \rangle_{m,\lambda} = \lambda\,\mathrm{tr}(x_1x_2^*)/2, \quad x_1,x_2 \in X_{m,\lambda}.$$ Here, $*$ is the canonical involution of $2 \times 2$ matrices, given by $\left[\begin{smallmatrix} a&b \\ c& d \end{smallmatrix} \right]^* =
\left[\begin{smallmatrix} d&-b \\ -c& a \end{smallmatrix} \right]$. Define a homomorphism $\rho: T \to{{\rm GSO}}(X_{m,\lambda})$ by $\rho(t)x= tx$ for $x \in X_{m,\lambda}$. We also recall the Galois conjugation map $\gamma:A \to A$ from Sect. \[anotheralgebrasubsec\]; it is given by $\gamma(x) =x^*$ for $x \in A$. The map $\gamma$ can be regarded as an $F$ linear endomorphism $$\label{gammaendoeq}
\gamma: X_{m,\lambda} \longrightarrow X_{m,\lambda},$$ and as such is contained in ${{\rm O}}(X_{m,\lambda})$ but not in ${{\rm SO}}(X_{m,\lambda})$.
\[twodimclasslemma\] If $(X_{m,\lambda},\langle\cdot,\cdot\rangle_{m,\lambda})$ is as in , then ${{\rm disc}}(X_{m,\lambda}) = mF^{\times 2}$, $\varepsilon(X_{m,\lambda}) = (\lambda, m)$, and the homomorphism $\rho$ is an isomorphism, so that $$\label{rhoiso2eq}
\rho: T \stackrel{\sim}{\longrightarrow} {{\rm GSO}}(X_{m,\lambda}).$$ The image $\rho(T)$ and the map $\gamma$ generate ${{\rm GO}}(X_{m,\lambda})$. If $(X_{m,\lambda},\langle\cdot,\cdot\rangle_{m,\lambda})$ and $(X_{m',\lambda'},\langle\cdot,\cdot\rangle_{m',\lambda'})$ are as in , then $(X_{m,\lambda},\langle\cdot,\cdot\rangle_{m,\lambda}) \cong (X_{m',\lambda'},\langle\cdot,\cdot\rangle_{m',\lambda'})$ if and only if $mF^{\times 2} = m'F^{\times 2}$ and $(\lambda ,m) = (\lambda', m')$. Every two-dimensional, non-degenerate symmetric bilinear space over $F$ is isomorphic to $(X_{m,\lambda},\langle\cdot,\cdot\rangle_{m,\lambda})$ for some $m$ and $\lambda$.
Let $m, \lambda \in F^\times$. In $X_{m,\lambda}$ let $x_1 =\left[ \begin{smallmatrix} 1 & \\ & 1 \end{smallmatrix}\right]$ and $x_2 =\left[ \begin{smallmatrix} &1 \\ m & \end{smallmatrix}\right]$. Then $x_1,x_2$ is a basis for $X_{m,\lambda}$, and in this basis the matrix for $X_{m,\lambda}$ is $\lambda \left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix} \right]$. Calculations using this matrix show that ${{\rm disc}}(X_{m,\lambda}) = mF^{\times 2}$ and $\varepsilon(X_{m,\lambda}) = (\lambda, m)$. The map $\rho$ is clearly injective. To see that $\rho$ is surjective, let $h \in {{\rm GSO}}(X_{m,\lambda})$. Write $h$ in the ordered basis $x_1,x_2$ so that $h = \left[ \begin{smallmatrix} h_1 & h_2 \\ h_3 & h_4 \end{smallmatrix} \right]$. By the definition of ${{\rm GSO}}(X_{m,\lambda})$, we have ${}^t h \lambda \left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix} \right] h = \det (h) \lambda \left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix} \right]$. By the definition of $T$, this implies that $t=\left[ \begin{smallmatrix} & 1 \\ 1 & \end{smallmatrix} \right] h \left[ \begin{smallmatrix} & 1 \\ 1 & \end{smallmatrix} \right] \in T$. Hence, $h = \left[ \begin{smallmatrix} h_1 & h_3m \\ h_3 & h_1 \end{smallmatrix} \right]$ for some $h_1,h_3 \in F$. Calculations now show that $\rho(t) x_1 = h (x_1)$ and $\rho(t) x_2 = h (x_2)$, so that $\rho(t) = h$. This proves the first assertion. The second assertion follows from the fact that two non-degenerate symmetric bilinear spaces over $F$ with the same finite dimension are isomorphic if and only if they have the same discriminant and Hasse invariant. For the final assertion, let $(X, \langle\cdot,\cdot\rangle)$ be a two-dimensional, non-degenerate symmetric bilinear space over $F$. There exists a basis for $X$ with respect to which the matrix for $X$ is of the form $\left[\begin{smallmatrix} \alpha_1 & \\ & \alpha_2 \end{smallmatrix}\right]$ for some $\alpha_1, \alpha_2 \in F^\times$. Then ${{\rm disc}}(X) =- \alpha_1 \alpha_2F^{\times 2}$ and $\varepsilon(X) = (\alpha_1,\alpha_2)_F$. If ${{\rm disc}}(X)$ is trivial, then $\alpha_2 \in -\alpha_1 F^{\times 2}$, so that $\varepsilon(X) = (\alpha_1,-\alpha_1)_F = 1$ and any $\lambda \in F^\times$ satisfies $(\lambda, {{\rm disc}}(X))_F = \varepsilon(X)$; if ${{\rm disc}}(X)$ is non-trivial, then $(\,\cdot\,, {{\rm disc}}(X))_F$ takes both values $\pm 1$, so again there exists $\lambda \in F^\times$ such that $(\lambda, {{\rm disc}}(X))_F = \varepsilon(X)$. We now have $(X,\langle\cdot,\cdot\rangle) \cong (X_{m,\lambda},\langle\cdot,\cdot\rangle_{m,\lambda})$ with $m={{\rm disc}}(X)$ because both spaces have the same discriminant and Hasse invariant.
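The Gram matrix computation at the beginning of the preceding proof can be spelled out as follows. Since $x_2x_2^*=\det(x_2)\left[\begin{smallmatrix} 1&\\&1 \end{smallmatrix}\right]=-m\left[\begin{smallmatrix} 1&\\&1 \end{smallmatrix}\right]$ and $\mathrm{tr}(x_2^*)=\mathrm{tr}(x_2)=0$, we get $$\langle x_1,x_1 \rangle_{m,\lambda}=\lambda\,\mathrm{tr}(x_1x_1^*)/2=\lambda,\qquad \langle x_2,x_2 \rangle_{m,\lambda}=\lambda\,\mathrm{tr}(x_2x_2^*)/2=-\lambda m,\qquad \langle x_1,x_2 \rangle_{m,\lambda}=\lambda\,\mathrm{tr}(x_2^*)/2=0,$$ so that ${{\rm disc}}(X_{m,\lambda})=-\lambda(-\lambda m)F^{\times 2}=mF^{\times 2}$ and, using $(\lambda,-\lambda)_F=1$, $\varepsilon(X_{m,\lambda})=(\lambda,-\lambda m)_F=(\lambda,m)_F$.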
Next, define a four-dimensional non-degenerate symmetric bilinear space over $F$ by setting $$\label{Xmateq}
X_{{{\rm M}}_2}={{\rm M}}_2(F), \qquad \langle x_1,x_2 \rangle_{{{\rm M}}_2} = \mathrm{tr} (x_1x_2^*)/2, \quad x_1,x_2 \in X_{{{\rm M}}_2}.$$ Here, $*$ is the canonical involution of $2 \times 2$ matrices, given by $\left[\begin{smallmatrix} a&b \\ c& d \end{smallmatrix} \right]^* =
\left[\begin{smallmatrix} d&-b \\ -c& a \end{smallmatrix} \right]$. Define $\rho: {{\rm GL}}(2,F) \times {{\rm GL}}(2,F) \to {{\rm GSO}}(X_{{{\rm M}}_2})$ by $\rho(g_1,g_2)x = g_1 x g_2^*$ for $g_1,g_2 \in {{\rm GL}}(2,F)$ and $x \in X_{{{\rm M}}_2}$. The map $*:X_{{{\rm M}}_2} \to X_{{{\rm M}}_2}$ is contained in ${{\rm O}}(X_{{{\rm M}}_2})$ but not in ${{\rm SO}}(X_{{{\rm M}}_2})$.
Finally, let $H$ be the division quaternion algebra over $F$. Let $1,i,j,k$ be a quaternion algebra basis for $H$, i.e., $$\label{Hdefeq}
H = F + F i + F j + Fk, \quad i^2 \in F^\times,\ j^2 \in F^\times,\ k =ij,\ ij = -ji.$$ Let $*$ be the canonical involution on $H$ so that $(a+b\cdot i + c\cdot j +d \cdot k)^* = a-b\cdot i -c \cdot j -d \cdot k$, and define the norm and trace functions ${{\rm N}},{{\rm T}}: H \to F$ by ${{\rm N}}(x) =xx^*$ and ${{\rm T}}(x) = x+x^*$ for $x \in H$. Define another four-dimensional non-degenerate symmetric bilinear space over $F$ by setting $$\label{XHeq}
X_{H}=H, \qquad \langle x_1,x_2 \rangle_H = {{\rm T}}(x_1x_2^*)/2, \quad x_1,x_2 \in X_H.$$ Define $\rho: H^\times \times H^\times \to {{\rm GSO}}(X_H)$ by $\rho(h_1,h_2)x = h_1 x h_2^*$ for $h_1, h_2 \in H^\times$ and $x \in X_{H}$. The map $*:X_{H} \to X_{H}$ is contained in ${{\rm O}}(X_{H})$ but not in ${{\rm SO}}(X_{H})$.
\[fourdisconelemma\] The symmetric bilinear space $(X_{{{\rm M}}_2},\langle\cdot,\cdot\rangle_{{{\rm M}}_2})$ is non-degenerate, has dimension four, discriminant ${{\rm disc}}(X_{{{\rm M}}_2})=1$, and Hasse invariant $\varepsilon(X_{{{\rm M}}_2}) = (-1,-1)$. The symmetric bilinear space $(X_H,\langle\cdot,\cdot\rangle_H)$ is non-degenerate, has dimension four, discriminant ${{\rm disc}}(X_H)=1$, and Hasse invariant $\varepsilon(X_H) = -(-1,-1)$. The sequences $$\begin{gathered}
1 \longrightarrow F^\times \longrightarrow {{\rm GL}}(2,F) \times {{\rm GL}}(2,F) \stackrel{\rho}{\longrightarrow} {{\rm GSO}}(X_{{{\rm M}}_2}) \longrightarrow 1, \label{GSOexacteq1}\\
1 \longrightarrow F^\times \longrightarrow H^\times \times H^\times \stackrel{\rho}{\longrightarrow} {{\rm GSO}}(X_H) \longrightarrow 1 \label{GSOexacteq2}\end{gathered}$$ are exact; here, in each sequence the second map sends $a$ to $(a,a^{-1})$ for $a \in F^\times$. The image $\rho({{\rm GL}}(2,F) \times {{\rm GL}}(2,F))$ and the map $*$ generate ${{\rm GO}}(X_{{{\rm M}}_2})$, and the image $\rho(H^\times \times H^\times)$ and the map $*$ generate ${{\rm GO}}(X_{H})$. Every four-dimensional, non-degenerate symmetric bilinear space over $F$ of discriminant $1$ is isomorphic to $(X_{{{\rm M}}_2},\langle\cdot,\cdot\rangle_{{{\rm M}}_2})$ or $(X_{H},\langle\cdot,\cdot\rangle_{H})$.
See, for example, Sect. 2 of [@Roberts2001].
Embeddings
----------
Suppose that $(X,\langle\cdot, \cdot \rangle)$ satisfies . We define an action of the group ${{\rm GL}}(2,F) \times {{\rm GO}}(X)$ on the set $X^2$ by $$\label{gl2goeq}
(g,h)\cdot (x_1,x_2) = (hx_1,hx_2)g^{-1} = (g'_1hx_1+g'_3hx_2,g'_2hx_1+g'_4hx_2)$$ for $(x_1,x_2) \in X^2$, $h \in {{\rm GO}}(X)$ and $g \in {{\rm GL}}(2,F)$ with $g^{-1}=\left[ \begin{smallmatrix} g'_1& g'_2 \\ g'_3 & g'_4 \end{smallmatrix} \right]$. For $S$ as in with $\det(S) \neq 0$, we define $$\label{omegaSdefeq}
\Omega=\Omega_S =\Omega_{S,(X,\langle\cdot, \cdot\rangle)}= \{(x_1,x_2) \in X^2: \begin{bmatrix} \langle x_1,x_1 \rangle & \langle x_1, x_2\rangle \\ \langle x_1 , x_2 \rangle & \langle x_2 , x_2\rangle \end{bmatrix} = S \}.$$ We say that $(X,\langle\cdot, \cdot\rangle)$ *represents* $S$ if the set $\Omega$ is non-empty.
\[ghlemma\] Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. The subgroup $$\label{Bdefeq}
B=B_S= \{ (g,h) \in {{\rm GL}}(2,F) \times {{\rm GO}}(X): \text{${}^t g S g = \det(g) S$ and $\det(g) = \lambda (h)$} \}$$ maps $\Omega=\Omega_S$ to itself under the action of ${{\rm GL}}(2,F) \times {{\rm GO}}(X)$ on $X^2$.
Let $(g,h) \in B$, and let $g =\left[ \begin{smallmatrix} g_1&g_2\\g_3&g_4 \end{smallmatrix} \right]$. To start, we note that the assumption ${}^tgSg=\det(g)S$ is equivalent to ${}^tg^{-1}Sg^{-1}=\det(g)^{-1}S$, which is in turn equivalent to $$\begin{aligned}
ag_4^2-bg_3g_4+cg_3^2 & = \det (g) a,\\
-ag_4g_2+b(g_1g_4+g_2g_3)/2-cg_3g_1 &= \det(g) b/2,\\
ag_2^2 - b g_2g_1 +cg_1^2 & = \det (g) c. \end{aligned}$$ Let $(x_1,x_2) \in \Omega$ and set $(y_1,y_2) = (g,h_1(t))\cdot (x_1,x_2)$. By the definition of the action and $\Omega$, and using $\det(g) = \lambda (h)$, we have $$\begin{aligned}
\langle y_1, y_1 \rangle
&= \det(g)^{-1} \big( g_4^2 \langle x_1,x_1 \rangle -2 g_3 g_4 \langle x_1,x_2 \rangle + g_3^2 \langle x_2,x_2 \rangle\big) \\
& = \det(g)^{-1} \big( g_4^2 a- g_3 g_4 b + g_3^2 c\big) \\
&=a. \end{aligned}$$ Similarly, $\langle y_1,y_2 \rangle = b/2$ and $\langle y_2,y_2 \rangle =c$. It follows that $(y_1,y_2) = (g,h)\cdot(x_1,x_2) \in \Omega$.
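For the reader's convenience, the omitted computation of $\langle y_1,y_2 \rangle$ runs the same way: $$\begin{aligned} \langle y_1,y_2 \rangle &= \det(g)^{-1}\big(-g_2g_4\langle x_1,x_1 \rangle+(g_1g_4+g_2g_3)\langle x_1,x_2 \rangle-g_1g_3\langle x_2,x_2 \rangle\big)\\ &=\det(g)^{-1}\big(-ag_2g_4+b(g_1g_4+g_2g_3)/2-cg_1g_3\big)\\ &=b/2, \end{aligned}$$ by the second of the three identities displayed at the beginning of the proof; the computation of $\langle y_2,y_2 \rangle=c$ uses the third.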
Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. Assume that $\Omega$ is non-empty, and let $T=T_S$, as in Sect. \[anotheralgebrasubsec\]. The goal of this section is to define, for each $z \in \Omega$, a set $$\label{Ecaleq}
\mathcal{E}(z) = \mathcal{E}_{(X,\langle\cdot,\cdot\rangle), S}(z)$$ of embeddings $\tau:T \to {{\rm GSO}}(X)$ such that: $$\begin{aligned}
&\text{$\tau(t)=t$ for $t \in F^\times \subset T$;}\label{tauoneeq} \\
&\text{$\lambda(\tau(t))=\det(t)$ for $t \in T$, so that $(t,\tau(t)) \in B$ for $t \in T$;}\label{tautwoeq} \\
&\text{$(\begin{bmatrix}&1\\1&\end{bmatrix}t\begin{bmatrix}&1\\1&\end{bmatrix},\tau(t))\cdot z = z$ for $t\in T$.}\label{tauthreeeq}\end{aligned}$$
We begin by noting some properties of $\Omega$. The set $\Omega$ is closed in $X^2$. The subgroup ${{\rm O}}(X) \cong 1\times {{\rm O}}(X) \subset B \subset {{\rm GL}}(2,F) \times {{\rm GO}}(X)$ preserves $\Omega$, i.e., if $ h \in {{\rm O}}(X)$ and $(x_1,x_2) \in \Omega$, then $(hx_1,hx_2) \in \Omega$. Since $\det(S) \neq 0$, the group ${{\rm O}}(X)$ acts transitively on $\Omega$ by Witt's extension theorem. If $\dim X =4$, then ${{\rm SO}}(X)$ acts transitively on $\Omega$. If $\dim X =2$, then the action of ${{\rm SO}}(X)$ on $\Omega$ has two orbits.
\[simlemma\] Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. Assume that $\dim X =2$ and $\Omega$ is non-empty. Let $z=(z_1,z_2) \in \Omega$. For $$t =\begin{bmatrix}&1\\1&\end{bmatrix} g \begin{bmatrix}&1\\1&\end{bmatrix}= \begin{bmatrix}&1\\1&\end{bmatrix} \begin{bmatrix}g_1&g_2\\g_3&g_4\end{bmatrix} \begin{bmatrix}&1\\1&\end{bmatrix} \in T$$ let $\tau_z(t):X \to X$ be the linear map that has $g$ as matrix in the ordered basis $z_1,z_2$ for $X$, so that $$\begin{aligned}
\tau_z(t) (z_1) &= g_1 z_1+g_3z_2,\\
\tau_z(t) (z_2) &= g_2z_1+g_4 z_2.\end{aligned}$$
1. For $t \in T$, the map $\tau_z(t)$ is contained in ${{\rm GSO}}(X)$ and $\lambda(\tau_z(t)) =\det(t)$.
2. If $z'$ lies in the ${{\rm SO}}(X)$ orbit of $z$, and $t \in T$, then $\tau_z(t) = \tau_{z'}(t)$.
3. The map sending $t$ to $\tau_z(t)$ defines an isomorphism $
\tau_z:T\stackrel{\sim}{\longrightarrow} {{\rm GSO}}(X).
$
4. Let $h_0 \in {{\rm O}}(X)$ with $\det(h_0)=-1$. Let $z' \in \Omega$ not be in the ${{\rm SO}}(X)$ orbit of $z$. Then $\tau_{z'}(t) = h_0 \tau_z(t) h_0^{-1}$ for $t \in T$.
5. Let $t \in T$. The element $(\left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right] t \left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right],\tau_z(t)) \in B$ acts by the identity on the ${{\rm SO}}(X)$ orbit of $z$, and maps the other ${{\rm SO}}(X)$ orbit of $\Omega$ to itself.
i\) A computation verifies that $\tau_z(t) \in {{\rm GO}}(X)$, with similitude factor $\lambda(\tau_z(t))=\det(g)=\det(t)$, and the equality $\det(\tau_z(t))=\lambda(\tau_z(t))$ implies that $\tau_z(t) \in {{\rm GSO}}(X)$ by the definition of ${{\rm GSO}}(X)$.
ii\) Suppose that $z'=(z_1',z_2')$ lies in the ${{\rm SO}}(X)$ orbit of $z$, and let $c \in {{\rm SO}}(X)$ be such that $c(z_1) = z_1'$ and $c(z_2) = z_2'$. Then $\tau_{z'}(t)=c\tau_z(t)c^{-1}$. But the group ${{\rm GSO}}(X)$ is abelian, so that $\tau_{z'}(t)=c\tau_z(t)c^{-1} = \tau_z(t)$.
iii\) Calculations prove that $\tau_z:T \to {{\rm GSO}}(X)$ is an isomorphism.
iv\) Let $z''=h_0(z)$. A calculation shows that $\tau_{z''}(t) = h_0 \tau_z(t) h_0^{-1}$ for $t \in T$. Since $z''$ and $z'$ lie in the same ${{\rm SO}}(X)$ orbit of $\Omega$, ii) gives $\tau_{z''}(t)=\tau_{z'}(t)$ for $t \in T$.
v\) Write $g =\left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right] t \left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right]$, so that ${}^tgSg=\det(g)S$. Let $g =\left[ \begin{smallmatrix} g_1&g_2\\g_3&g_4 \end{smallmatrix} \right]$. By the definition of $\tau_z(t)$, we have $$\begin{aligned}
(g,\tau_z(t))\cdot z
&=(\det(g)^{-1} g_4(g_1z_1+g_3z_2)-\det(g)^{-1} g_3(g_2z_1+g_4z_2), \\
&\qquad \det(g)^{-1} (-g_2)(g_1z_1+g_3z_2) + \det(g)^{-1} g_1(g_2z_1+g_4z_2))\\
&=z.\end{aligned}$$ By ii), it follows that $(g,\tau_z(t))$ acts by the identity on all of the ${{\rm SO}}(X)$ orbit of $z$. Next, let $z' \in \Omega$ with $z' \notin {{\rm SO}}(X) z$. Assume that $(g,\tau_z(t))\cdot z' \in {{\rm SO}}(X) z$; we will obtain a contradiction. Since $(g,\tau_z(t))\cdot z' \in {{\rm SO}}(X)z$ and since we have already proved that $(g,\tau_z(t))$ acts by the identity on ${{\rm SO}}(X)z$, we have: $$\begin{aligned}
(g,\tau_z(t)) \cdot \big((g,\tau_z(t)) \cdot z'\big) &= (g,\tau_z(t)) \cdot z' \\
(g,\tau_z(t))\cdot z' & = z'.\end{aligned}$$ This is a contradiction since $z' \notin {{\rm SO}}(X)z$ and $(g,\tau_z(t))\cdot z' \in {{\rm SO}}(X) z$.
Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. Assume that $\dim X =2$ and $\Omega$ is non-empty. For $z \in \Omega$, we define $$\label{twodimcalEeq}
\mathcal{E}(z) = \mathcal{E}_{(X,\langle\cdot,\cdot\rangle), S}(z) = \{ \tau_z \},$$ with $\tau_z$ as defined in Lemma \[simlemma\]. It is evident from Lemma \[simlemma\] that the element of $\mathcal{E}(z)$ has the properties , , and .
\[twoscalarslemma\] Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. Assume that $\dim X =2$. Let $\lambda, \lambda'\in F^\times$, and set $\Omega=\Omega_{\lambda S}$ and $\Omega' = \Omega_{\lambda' S}$. Assume that $\Omega$ and $\Omega'$ are non-empty. Then $$\label{Eunioneq}
\bigcup\limits_{z \in \Omega} \mathcal{E}(z) = \bigcup\limits_{z' \in \Omega'} \mathcal{E}(z').$$
Let $\Omega_1$ and $\Omega_2$ be the two ${{\rm SO}}(X)$ orbits of the action of ${{\rm SO}}(X)$ on $\Omega$ so that $\Omega=\Omega_1 \sqcup \Omega_2$, and analogously define and write $\Omega' = \Omega_1' \sqcup \Omega_2'$. Let $z=(z_1,z_2) \in \Omega_1$ and $z'=(z_1',z_2') \in \Omega'_1$. Define a linear map $h:X \to X$ by setting $h(z_1) = z_1'$ and $h(z_2)=z_2'$. We have $\langle h(x), h(y) \rangle = (\lambda' /\lambda) \langle x,y \rangle $ for $x,y\in X$, so that $h \in {{\rm GO}}(X)$. Assume that $h \notin {{\rm GSO}}(X)$. Let $z''=(z_1'',z_2'') \in \Omega_2'$, and let $h':X\to X$ be the linear map defined by $h'(z_1')=z_1''$ and $h'(z_2')=z_2''$. Then $h' \in {{\rm O}}(X)$ with $\det (h') =-1$, so that $h'h \in {{\rm GSO}}(X)$ and $(h'h)(z_1)=z_1''$ and $(h'h)(z_2)=z_2''$. Therefore, by renumbering if necessary, we may assume that $h \in {{\rm GSO}}(X)$. Next, a calculation shows that $h\tau_z(t)h^{-1} = \tau_{z'}(t)$ for $t \in T$. Since ${{\rm GSO}}(X)$ is abelian, this means that $\tau_z =\tau_{z'}$. The claim follows now from ii) and iv) of Lemma \[simlemma\].
\[Zdecomplemma\] Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. Assume that $\dim X =4$ and $\Omega$ is non-empty. Let $z=(z_1,z_2) \in \Omega$, and set $U = Fz_1+Fz_2$, so that $X = U \oplus U^\perp$ with $\dim U =\dim U^\perp =2$. There exists $\lambda \in F^\times$ such that $(U^\perp, \langle \cdot, \cdot \rangle)$ represents $\lambda S$.
Let ${{\rm M}}_{4,1}(F)$ be the $F$ vector space of $4 \times 1$ matrices with entries from $F$. Let $D=-\det(S)$. Let $\lambda \in F^\times$, and define a four-dimensional symmetric bilinear space $X_\lambda$ by letting $X_\lambda={{\rm M}}_{4,1}(F)$ with symmetric bilinear form $b$ given by $b(x,y)={}^txMy$, where $$M = \begin{bmatrix} S & \\ & \lambda S \end{bmatrix}.$$ Evidently, ${{\rm disc}}(X_\lambda) =1$, and the Hasse invariant of $X_\lambda$ is $\varepsilon(X_\lambda) = (-1,-1)_F(-\lambda, D)_F$. Now assume that $X$ is isotropic. Then the Hasse invariant of $X$ is $(-1,-1)_F$. It follows that if $\lambda=-1$, then $\varepsilon(X_\lambda) = \varepsilon(X)$, so that $X_\lambda \cong X$. By the Witt cancellation theorem, $(U^\perp, \langle \cdot, \cdot \rangle)$ represents $\lambda S$. Next, assume that $X$ is anisotropic, so that $\varepsilon(X) = -(-1,-1)_F$. By hypothesis, $(X,\langle \cdot,\cdot \rangle)$ represents $S$; since $X$ is anisotropic, this implies that $D \notin F^{\times 2}$. Since $D \notin F^{\times 2}$, there exists $\lambda \in F^\times$ such that $-1 = (-\lambda,D)_F$. It follows that $\varepsilon(X_\lambda) = \varepsilon(X)$, so that $X_\lambda \cong X$; again the Witt cancellation theorem implies that $(U^\perp, \langle \cdot, \cdot \rangle)$ represents $\lambda S$.
Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. Assume that $\dim X =4$, $\Omega=\Omega_S$ is non-empty, and let $T=T_S$, as in Sect. \[anotheralgebrasubsec\]. Let $z=(z_1,z_2) \in \Omega$, and as in Lemma \[Zdecomplemma\], let $U = Fz_1+Fz_2$, so that $X = U \oplus U^\perp$ with $\dim U =\dim U^\perp =2$. By Lemma \[Zdecomplemma\] there exists $\lambda \in F^\times$ such that $(U^\perp, \langle \cdot, \cdot \rangle)$ represents $\lambda S$. Let $\tau_z: T \to {{\rm GSO}}(U)$ be the isomorphism from Lemma \[simlemma\] that is associated to $z$. Also, let $\tau_{z'}, \tau_{z''}: T \to {{\rm GSO}}(U^\perp)$ be the isomorphisms from Lemma \[simlemma\], where $z'$ and $z''$ are representatives for the two ${{\rm SO}}(U^\perp)$ orbits of ${{\rm SO}}(U^\perp)$ acting on $\Omega_{\lambda S, (U^\perp,\langle\cdot,\cdot\rangle)}$; by Lemma \[twoscalarslemma\], $\{\tau_{z'}, \tau_{z''}\}$ does not depend on the choice of $\lambda$. We now define $$\label{fourEzeq}
\mathcal{E}(z) = \mathcal{E}_{(X,\langle\cdot,\cdot\rangle), S}(z) = \{ \tau_1, \tau_2 \},$$ where $\tau_1,\tau_2:T \to {{\rm GSO}}(X)$ are defined by $$\tau_1(t) = \begin{bmatrix} \tau_z(t) & \\ & \tau_{z'}(t) \end{bmatrix},\qquad
\tau_2(t) = \begin{bmatrix} \tau_z(t) & \\ & \tau_{z''}(t) \end{bmatrix}$$ with respect to the decomposition $X = U \oplus U^\perp$, for $t \in T$. For $t \in T$, the similitude factor of $\tau_i(t)$ is $\det(t)$. It is evident that the elements of $\mathcal{E}(z)$ satisfy , , and .
\[zstablemma\] Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. Assume that $\Omega=\Omega_S$ is non-empty, and let $A=A_S$ and $T=T_S$, as in Sect. \[anotheralgebrasubsec\]. If $\dim X =4$, assume that $A$ is a field. Let $z \in \Omega$ and $\tau \in \mathcal{E}(z)$. Let $C$ be a compact, open subset of $\Omega$ containing $z$. There exists a compact, open subset $C_0$ of $\Omega$ such that $z \in C_0 \subset C$ and $(\left[ \begin{smallmatrix} &1\\1& \end{smallmatrix} \right] t \left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right],\tau(t)) \cdot C_0 = C_0$ for $t \in T$.
Assume $\dim X =2$. Let $C_0$ be the intersection of $C$ with the ${{\rm SO}}(X)$ orbit of $z$ in $\Omega$. Then $C_0$ is a compact, open subset of $\Omega$ because the ${{\rm SO}}(X)$ orbit of $z$ in $\Omega$ is closed and open in $\Omega$, and $C$ is compact and open. We have $(\left[ \begin{smallmatrix} &1\\1& \end{smallmatrix} \right] t \left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right],\tau(t)) \cdot C_0 = C_0$ for $t \in T$ by v) of Lemma \[simlemma\]. Assume $\dim X =4$. The group of pairs $(\left[ \begin{smallmatrix} &1\\1& \end{smallmatrix} \right] t \left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right],\tau(t))$ for $t \in T$ acts on $X^2$ and can be regarded as a subgroup of ${{\rm GL}}(X^2)$. The group $T$ contains $F^\times$, and the pairs with $t \in F^\times$ act by the identity on $X^2$. The assumption that $A$ is a field implies that $T/F^\times$ is compact, and hence the image $\mathcal{K}$ in ${{\rm GL}}(X^2)$ of this group of pairs is compact. There exists a lattice $\mathcal{L}$ of $X^2$ such that $k\cdot\mathcal{L} = \mathcal{L}$ for $k \in \mathcal{K}$. Also, by we have that $k\cdot z=z$ for $k \in \mathcal{K}$. Let $n$ be sufficiently large so that $(z+\varpi^n \mathcal{L})\cap \Omega \subset C$. Then $C_0 = (z+\varpi^n\mathcal{L})\cap \Omega$ is the desired set.
Example embeddings {#exampleembeddingssubsec}
------------------
In this section we provide explicit formulas for the embeddings of the previous section.
\[lambdaSlemma\] Let $S$ be as in . Let $m, \lambda \in F^\times$, and define $(X_{m,\lambda}, \langle \cdot, \cdot \rangle_{m,\lambda})$ as in . The set $\Omega=\Omega_S$ is non-empty if and only if ${{\rm disc}}(S)=mF^{\times 2}$ and $\varepsilon(S) = (\lambda,m)_F$. Assume that the set $\Omega$ is non-empty. Set $D=b^2/4-ac$ so that ${{\rm disc}}(S) = DF^{\times 2}$, and define $\Delta$ and the quadratic extension $L=F+F\Delta$ of $F$ (which need not be a field) associated to $D$ as in Sect. \[quadextsubsec\]. Similarly, define $\Delta_m$ with respect to $m$; the quadratic extension associated to $m$ is also $L$ and $L=F+F\Delta_m$. The set of compositions $$L^\times \stackrel{\sim}{\longrightarrow} T_S \stackrel{\tau}{\longrightarrow} {{\rm GSO}}(X_{m,\lambda})$$ for $z \in \Omega$ and $\tau \in \mathcal{E}(z)$ is the same as the set consisting of the two compositions $$L^\times \stackrel{\sim}{\longrightarrow} T_{\left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix}\right] } \stackrel{\sim}{\longrightarrow} {{\rm GSO}}(X_{m,\lambda}), \quad L^\times \stackrel{\gamma}{\longrightarrow} L^\times \stackrel{\sim}{\longrightarrow} T_{\left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix}\right] } \stackrel{\sim}{\longrightarrow} {{\rm GSO}}(X_{m,\lambda}).$$ Here, the maps $L^\times \to T_{\left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix}\right] }$ are as in , and the isomorphism $\rho$ of $T_{\left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix}\right] }$ with ${{\rm GSO}}(X_{m,\lambda})$ is as in .
By definition, $\Omega$ is non-empty if and only if there exist $x_1,x_2 \in X_{m,\lambda}$ such that $S = \left[\begin{smallmatrix} \langle x_1,x_1 \rangle & \langle x_1,x_2 \rangle \\ \langle x_1,x_2 \rangle & \langle x_2, x_2 \rangle \end{smallmatrix} \right]$. Since $X_{m,\lambda}$ is two-dimensional, this means that $\Omega$ is non-empty if and only if $(X_{m,\lambda}, \langle \cdot, \cdot \rangle_{m,\lambda})$ is equivalent to the symmetric bilinear space over $F$ defined by $S$. From Lemma \[twodimclasslemma\], we have ${{\rm disc}}(X_{m,\lambda}) = m F^{\times 2}$ and $\varepsilon(X_{m,\lambda}) = (\lambda,m)_F$. Since a finite-dimensional non-degenerate symmetric bilinear space over $F$ is determined by its dimension, discriminant and Hasse invariant, it follows that $\Omega$ is non-empty if and only if ${{\rm disc}}(S) =m F^{\times 2}$ and $\varepsilon(S)=(\lambda,m)_F$.
Assume that $\Omega$ is non-empty, so that ${{\rm disc}}(S)=m F^{\times 2}$ and $\varepsilon(S) = (\lambda,m)_F$. Let $e \in F^\times$ be such that $\Delta=e \Delta_m$; then $b^2/4-ac=D = e^2 m$. Assume first that $a \neq 0$. By Sect. \[twobytwosubsec\], $\varepsilon(S) =(a,m)_F$. Therefore, $(a,m)_F=(\lambda,m)_F$. It follows that there exist $g,h \in F^\times$ such that $g^2 - m h^2 = \lambda^{-1} a$. Set $$z_1=\begin{bmatrix} g & h \\ -h(-m) & g \end{bmatrix},
\quad
z_2=a^{-1} \begin{bmatrix} ehm +gb/2 & eg+hb/2 \\ -(eg+hb/2)(-m) & ehm+gb/2 \end{bmatrix}.$$ Then $z_1,z_2 \in X_{m,\lambda}$, and a calculation shows that $$\begin{bmatrix}
\langle z_1,z_1 \rangle_{m,\lambda} & \langle z_1,z_2 \rangle_{m,\lambda} \\ \langle z_1,z_2 \rangle_{m,\lambda} & \langle z_2,z_2 \rangle_{m,\lambda}
\end{bmatrix}
= S.$$ It follows that $z=(z_1,z_2) \in \Omega$. Let $u \in L^\times$. Write $u =x + y\Delta$ with $x,y \in F$. By , $u$ corresponds to $t = \left[\begin{smallmatrix} x - yb/2 & -ya \\ yc & x+yb/2 \end{smallmatrix} \right] \in T_S$. Using the definition of $\tau_z(t)$, we find that $$\begin{aligned}
\tau_z(t) (z_1) & = \begin{bmatrix} gx -ehym & hx-egy \\ -(hx-egy)(-m) & gx-ehym \end{bmatrix},\label{z1teq}\\
\tau_z(t)(z_2)&= \frac{1}{2a}\begin{bmatrix}(2ehm +bg)x- (behm + 2e^2mg) y &(2 e g+ b h ) x- ( b e g+ 2e^2m h) y \\ -((2 e g+ b h ) x- ( b e g+ 2e^2m h) y )(-m) & (2ehm +bg)x-(behm + 2e^2mg) y \end{bmatrix}.\label{z2teq}\end{aligned}$$ On the other hand, we also have that $u = x + y e \Delta_m$, and $u$ corresponds to the element $
t'=\left[\begin{smallmatrix} x & -ye \\ (ye) (-m) & x \end{smallmatrix}\right]
$ in $T_{\left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix}\right] } $. Moreover, calculations show that $\rho(t')(z_1)=t'\cdot z_1$ and $\rho(t')(z_2)=t' \cdot z_2$ are as in and , respectively, proving that the two compositions $$L^\times \stackrel{\sim}{\longrightarrow} T_S \stackrel{\tau_z}{\longrightarrow} {{\rm GSO}}(X_{m,\lambda}),\qquad
L^\times \stackrel{\sim}{\longrightarrow} T_{\left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix}\right] } \stackrel{\rho}{\longrightarrow} {{\rm GSO}}(X_{m,\lambda})$$ are the same map. Next, let $z'=(\gamma(z_1),\gamma(z_2))$. Then $z' \in \Omega$, and calculations as above show that the two compositions $$L^\times \stackrel{\sim}{\longrightarrow} T_S \stackrel{\tau_{z'}}{\longrightarrow} {{\rm GSO}}(X_{m,\lambda}),\qquad
L^\times \stackrel{\gamma}{\longrightarrow} L^\times \stackrel{\sim}{\longrightarrow} T_{\left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix}\right] } \stackrel{\rho}{\longrightarrow} {{\rm GSO}}(X_{m,\lambda})$$ are the same. This completes the proof in this case since $z$ and $z'$ are representatives for the two ${{\rm SO}}(X_{m,\lambda})$ orbits of $\Omega$, and by ii) of Lemma \[simlemma\], $\cup_{w \in \Omega}\mathcal{E}(w) = \{\tau_z,\tau_{z'}\}$. Now assume that $a=0$. Set $$z_1=\lambda^{-1} \begin{bmatrix} b/2&-e \\ e(-m) & b/2 \end{bmatrix}, \qquad
z_2=\begin{bmatrix} (c\lambda^{-1}+1)/2 & -eb^{-1}(c\lambda^{-1}-1) \\ eb^{-1} (c\lambda^{-1}-1) (-m) & (c\lambda^{-1}+1)/2 \end{bmatrix}.$$ Again, a calculation shows that $z=(z_1,z_2) \in \Omega$. Let $u \in L^\times$, and write $u =x + y\Delta$ with $x,y \in F$. Then $u$ corresponds to $ t=\left[\begin{smallmatrix} x - yb/2 & \\ yc & x+yb/2 \end{smallmatrix} \right] \in T_S$, and $u$ corresponds to $t'=\left[\begin{smallmatrix} x &-y e \\ ye(-m) & x\end{smallmatrix} \right] \in T_{\left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix} \right]}$. Computations show that $\tau_z(t)(z_1)=\rho(t')(z_1)$ and $\tau_z(t)(z_2)=\rho(t')(z_2)$, proving that the compositions $$L^\times \stackrel{\sim}{\longrightarrow} T_S \stackrel{\tau_{z}}{\longrightarrow} {{\rm GSO}}(X_{m,\lambda}),\qquad
L^\times \stackrel{\sim}{\longrightarrow} T_{\left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix}\right] } \stackrel{\rho}{\longrightarrow} {{\rm GSO}}(X_{m,\lambda})$$ are the same. As in the previous case, if $z'=(\gamma(z_1),\gamma(z_2))$, then $z' \in \Omega$, and the two compositions $$L^\times \stackrel{\sim}{\longrightarrow} T_S \stackrel{\tau_{z'}}{\longrightarrow} {{\rm GSO}}(X_{m,\lambda}),\qquad
L^\times \stackrel{\gamma}{\longrightarrow} L^\times \stackrel{\sim}{\longrightarrow} T_{\left[\begin{smallmatrix} 1 & \\ & -m \end{smallmatrix}\right] } \stackrel{\rho}{\longrightarrow} {{\rm GSO}}(X_{m,\lambda})$$ are the same. As above, this completes the proof.
Let $c \in F^\times$, and set $$\label{Sceq}
S = \begin{bmatrix} 1 & \\ & c \end{bmatrix}.$$ Let $(X_{{{\rm M}}_2}, \langle \cdot,\cdot \rangle_{{{\rm M}}_2})$ be as in . Let $A=A_S$ and $T=T_S$ be as in Sect. \[anotheralgebrasubsec\]. We embed $A$ in ${{\rm M}}_2(F)$ via the inclusion map. Set $$z_1 = \begin{bmatrix} 1 & \\ & 1 \end{bmatrix}, \quad z_2 = \begin{bmatrix} & 1 \\ -c & \end{bmatrix}, \quad
z_1' = \begin{bmatrix} 1 & \\ & -1 \end{bmatrix}, \quad z_2' = \begin{bmatrix} &1 \\ c & \end{bmatrix}.$$ The vectors $z_1,z_2,z_1',z_2'$ form an orthogonal ordered basis for $X_{{{\rm M}}_2}$, and in this basis the matrix for $X_{{{\rm M}}_2}$ is $$\begin{bmatrix} S & \\ & -S \end{bmatrix}.$$ As in Lemma \[Zdecomplemma\], set $U =Fz_1 +Fz_2$. Then $U^\perp=Fz_1'+Fz_2'$, and the $\lambda$ of Lemma \[Zdecomplemma\] is $-1$. Calculations show that the set $\mathcal{E}(z)=\mathcal{E}_{X_{{{\rm M}}_2}}(z)$ of is $$\label{Ezmateq}
\mathcal{E}_{X_{{{\rm M}}_2}}(z) = \{ \tau_1,\tau_2 \}, \qquad \tau_1(t) = \rho(t,1), \quad \tau_2(t) =\rho(1,\gamma(t)), \quad t \in T.$$
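As a consistency check of the similitude condition for these embeddings, note that for $g_1,g_2\in{{\rm GL}}(2,F)$ and $x,y\in X_{{{\rm M}}_2}$, $$\langle \rho(g_1,g_2)x,\rho(g_1,g_2)y \rangle_{{{\rm M}}_2}=\mathrm{tr}\big(g_1xg_2^*\,g_2y^*g_1^*\big)/2=\det(g_1)\det(g_2)\,\mathrm{tr}(xy^*)/2=\det(g_1)\det(g_2)\,\langle x,y \rangle_{{{\rm M}}_2},$$ using $g^*g=\det(g)\cdot1$ and the cyclicity of the trace. Hence $\lambda(\rho(g_1,g_2))=\det(g_1)\det(g_2)$, and in particular $\lambda(\tau_1(t))=\det(t)$ and $\lambda(\tau_2(t))=\det(\gamma(t))=\det(t)$ for $t\in T$, as required.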
Finally, let $S$ be as in with $-c \notin F^{\times 2}$, and let $(X_{H}, \langle \cdot,\cdot \rangle_{H})$ be as in . Let $A=A_S$ and $T=T_S$ be as in Sect. \[anotheralgebrasubsec\]. Let $L$ be the quadratic extension associated to $-c$ as in Sect. \[quadextsubsec\]; $L$ is a field. Let $e$ be a representative for the non-trivial coset of $F^\times / {{\rm N}}_{L/F}(L^\times)$, so that $(e,-c)_F=-1$. We realize the division quaternion algebra $H$ over $F$ as $$\label{SpecialHeq}
H = F + F i + F j + Fk, \quad i^2 = -c,\ j^2 = e,\ k =ij,\ ij = -ji.$$ We embed $A$ into $H$ via the map defined by $$\begin{bmatrix} x & -y \\ cy & x \end{bmatrix} \mapsto x -yi$$ for $x,y \in F$. Let $$\label{specialijkeq}
z_1 =1, \quad z_2 = i, \quad z_1' = j, \quad z_2' =k.$$ The vectors $z_1,z_2,z_1',z_2'$ form an orthogonal ordered basis for $X_{H}$, and in this basis the matrix for $X_{H}$ is $$\begin{bmatrix} S & \\ & -e S \end{bmatrix}.$$ As in Lemma \[Zdecomplemma\], set $U =Fz_1 +Fz_2$. Then $U^\perp=Fz_1'+Fz_2'$, and the $\lambda$ of Lemma \[Zdecomplemma\] is $-e$. Calculations again show that the set $\mathcal{E}(z)=\mathcal{E}_{X_H}(z)$ of is $$\label{EzHeq}
\mathcal{E}_{X_H}(z) = \{ \tau_1,\tau_2 \}, \qquad \tau_1(t) = \rho(t,1), \quad \tau_2(t) =\rho(1,\gamma(t)), \quad t \in T.$$
To close this subsection, we note that $(X_H,\langle\cdot,\cdot\rangle_H)$ does not represent $S$ if $S$ is as in but $-c \in F^{\times 2}$. To see this, assume that $-c \in F^{\times 2}$ and $(X_H,\langle\cdot,\cdot\rangle_H)$ represents $S$; we will obtain a contradiction. Write $-c=t^2$ for some $t \in F^\times$. Since $X_H$ represents $S$, there exist $x_1,x_2 \in H$ such that $\langle x_1,x_1 \rangle_H = {{\rm N}}(x_1)=1$, $\langle x_2,x_2 \rangle_H = {{\rm N}}(x_2)=c=-t^2$ and $\langle x_1,x_2 \rangle_H = {{\rm T}}(x_1x_2^*)/2 =0$. Then ${{\rm N}}(tx_1+x_2)=t^2{{\rm N}}(x_1)+t\,{{\rm T}}(x_1x_2^*)+{{\rm N}}(x_2)=t^2+0-t^2=0$. Since $H$ is a division algebra, this means that $tx_1=-x_2$. Hence, $t^2 = {{\rm N}}(tx_1) = \langle tx_1,tx_1 \rangle_H = \langle tx_1 ,-x_2 \rangle_H = -t \langle x_1,x_2 \rangle_H =0$, a contradiction.
Theta correspondences and Bessel functionals {#thetabesselsubsec}
--------------------------------------------
In this section we make the connection between Bessel functionals for ${{\rm GSp}}(4,F)$ and equivariant functionals on representations of ${{\rm GO}}(X)$. The main result is Theorem \[fourdimthetatheorem\] below.
Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying . We define the subgroup ${{\rm GSp}}(4,F)^+$ of ${{\rm GSp}}(4,F)$ by $$\label{gsp4plusdef}
{{\rm GSp}}(4,F)^+ = \{ g \in {{\rm GSp}}(4,F): \lambda (g) \in \lambda ({{\rm GO}}(X)) \}.$$ The following lemma follows from and the exact sequences and , which facilitate the computation of $\lambda({{\rm GSO}}(X))$. Note that ${{\rm N}}(H^\times) = F^\times$.
\[gsp4calclemma\] Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying . Then $$\label{gsp4pluscomp}
[{{\rm GSp}}(4,F): {{\rm GSp}}(4,F)^+] =
\begin{cases}
1 & \text{if $\dim X =4$, or $\dim X=2$ and ${{\rm disc}}(X) =1$},\\
2 & \text{if $\dim X =2$ and ${{\rm disc}}(X) \neq 1$}.
\end{cases}$$
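We sketch the case $\dim X=2$ and ${{\rm disc}}(X)\neq1$. Writing $X\cong X_{m,\lambda}$ with $m\notin F^{\times 2}$, Lemma \[twodimclasslemma\] shows that ${{\rm GO}}(X_{m,\lambda})$ is generated by $\rho(T)$ and $\gamma$. Since $\gamma\in{{\rm O}}(X_{m,\lambda})$ has $\lambda(\gamma)=1$, and since $$\langle tx,ty \rangle_{m,\lambda}=\lambda\,\mathrm{tr}(txy^*t^*)/2=\det(t)\,\langle x,y \rangle_{m,\lambda}\qquad\text{for }t\in T,$$ we get $\lambda({{\rm GO}}(X_{m,\lambda}))=\det(T)={{\rm N}}_{L/F}(L^\times)$, which has index two in $F^\times$ because $L$ is a field. In the remaining cases $\lambda({{\rm GO}}(X))=F^\times$, using ${{\rm N}}_{L/F}(L^\times)=F^\times$ for $L\cong F\times F$, $\det({{\rm GL}}(2,F))=F^\times$, and ${{\rm N}}(H^\times)=F^\times$.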
\[tingsp4lemma\] Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. Let $\Omega=\Omega_S$ be as in , and assume that $\Omega$ is non-empty. Let $T=T_S$ be as in Sect. \[anotheralgebrasubsec\]. Embed $T$ as a subgroup of ${{\rm GSp}}(4,F)$, as in . Then $T$ is contained in ${{\rm GSp}}(4,F)^+$.
By we may assume that $\dim X=2$ and ${{\rm disc}}(X) \neq 1$. Since $\Omega$ is non-empty and $\dim X=2$, we may take $S$ to be the matrix of the symmetric bilinear form $\langle\cdot,\cdot\rangle$ on $X$. By definition, ${{\rm GO}}(X)$ is then the set of $h \in {{\rm GL}}(2,F)$ such that ${}^t h S h = \lambda (h) S$ for some $\lambda (h) \in F^\times$. From , we have that ${}^t h S h = \det(h) S$ for $h = \left[ \begin{smallmatrix} &1 \\ 1& \end{smallmatrix} \right] t\left[ \begin{smallmatrix} &1 \\ 1& \end{smallmatrix} \right]$ with $t \in T$. It follows that $\det(T)$ is contained in $\lambda ({{\rm GO}}(X))$. This implies that $T$ is contained in ${{\rm GSp}}(4,F)^+$.
Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying . Define $$R=\{ (g,h) \in {{\rm GSp}}(4,F) \times {{\rm GO}}(X):\lambda (g) = \lambda (h) \}.$$ We consider the Weil representation $\omega$ of $R$ on the space $\mathcal{S}(X^2)$ defined with respect to $\psi^2$, where $\psi^2(x) = \psi(2x)$ for $x \in F$. If $\varphi \in \mathcal{S}(X^2)$, $g \in {{\rm GL}}(2,F)$ and $h \in {{\rm GO}}(X)$ with $\det (g) = \lambda (h)$, and $x_1,x_2 \in X$, then $$\begin{gathered}
\big( \omega(\begin{bmatrix} 1&& y & z \\ &1&x&y \\ &&1& \\ &&& 1 \end{bmatrix},1) \varphi \big)(x_1, x_2) =
\psi( \langle x_1, x_1 \rangle x + 2\langle x_1,x_2 \rangle y +\langle x_2, x_2 \rangle z)\varphi(x_1,x_2), \label{Znformulaeq}\\
\big( \omega(\begin{bmatrix} g & \\ & \det(g) g' \end{bmatrix},h) \varphi \big) (x_1, x_2)
= ( \det(g), {{\rm disc}}(X) )_F \varphi (( \left[\begin{smallmatrix} & 1 \\ 1& \end{smallmatrix}\right] g \left[\begin{smallmatrix} & 1 \\ 1& \end{smallmatrix}\right],h)^{-1}\cdot (x_1,x_2) ). \label{Zaformulaeq}\end{gathered}$$ For these formulas, see Sect. 1 of [@Roberts2001]; note that the additive character we are using is $\psi^2$. Also, in we are using the action of ${{\rm GL}}(2,F) \times {{\rm GO}}(X)$ defined in .
We will also use the Weil representation $\omega_1$ of $$R_1=\{ (g,h) \in {{\rm GL}}(2,F) \times {{\rm GO}}(X):\det(g) = \lambda (h) \}$$ on $\mathcal{S}(X)$ defined with respect to $\psi^2$. For formulas, again see Sect. 1 of [@Roberts2001]. The two Weil representations $\omega$ and $\omega_1$ are related as follows.
\[restrictionlemma\] The map $$\label{gsp4gl2eq}
T:\mathcal{S}(X) \otimes \mathcal{S}(X) \longrightarrow \mathcal{S}(X^2),$$ determined by the formula $$\label{gsp4gl2eq2}
T(\varphi_1 \otimes \varphi_2) (x_1,x_2) = \varphi_1(x_1)\varphi_2(x_2)$$ for $\varphi_1$ and $\varphi_2$ in $\mathcal{S}(X)$ and $x_1$ and $x_2$ in $X$, is a well-defined complex linear isomorphism such that $$\begin{aligned}
&T\circ (\omega_1\big(\begin{bmatrix} a_2 &b_2 \\ c_2 & d_2 \end{bmatrix},h) \otimes \omega_1(\begin{bmatrix} a_1 &b_1 \\ c_1 & d_1 \end{bmatrix},h)\big)\nonumber\\
&\qquad =\omega(\begin{bmatrix} a_1&&&b_1\\&a_2&b_2&\\&c_2&d_2&\\c_1&&&d_1\end{bmatrix},h)\circ T \label{gsp4gl2eq3}
\end{aligned}$$ for $g_1=\left[\begin{smallmatrix} a_1&b_1\\c_1&d_1\end{smallmatrix} \right]$ and $g_2=\left[\begin{smallmatrix}a_2&b_2\\c_2&d_2\end{smallmatrix}\right]$ in ${{\rm GL}}(2,F)$ and $h$ in ${{\rm GO}}(X)$ such that $$\det(g_1)=\det(g_2)=\lambda(h).$$
This lemma can be verified by a direct calculation using standard generators for ${{\rm SL}}(2,F)$.
Let $\theta=\theta_S$ be the character of $N$ defined in with respect to a matrix $S$ as in . Let $\mathcal{S}(X^2)(N,\theta)$ be the subspace of $\mathcal{S}(X^2)$ spanned by all vectors $\omega(n)\varphi-\theta(n)\varphi$, where $n$ runs through $N$ and $\varphi$ runs through $\mathcal{S}(X^2)$, and set $\mathcal{S}(X^2)_{N,\theta}=\mathcal{S}(X^2)/\mathcal{S}(X^2)(N,\theta)$.
\[Nthetaomegalemma\] (Rallis) Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. If $(X,\langle \cdot, \cdot \rangle)$ does not represent $S$, then the twisted Jacquet module $\mathcal{S}(X^2)_{N,\theta}$ is zero. Assume that $(X,\langle \cdot, \cdot \rangle)$ represents $S$. The map $\mathcal{S}(X^2) \to \mathcal{S}(\Omega)$ defined by $\varphi\mapsto \varphi|_{\Omega}$ induces an isomorphism $$\mathcal{S}(X^2)_{N,\theta} \stackrel{\sim}{\longrightarrow} \mathcal{S}(\Omega).$$ Equivalently, $\mathcal{S}(X^2)(N,\theta)$ is the space of $\varphi \in \mathcal{S}(X^2)$ such that $\varphi|_{\Omega} =0$.
See Lemma 2.3 of [@KuRa1994].
Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. Let $\Omega=\Omega_S$ be as in , and assume that $\Omega$ is non-empty. In Lemma \[ghlemma\] we noted that the subgroup $B$ of ${{\rm GL}}(2,F) \times {{\rm GO}}(X)$ acts on $\Omega$. By identifying ${{\rm O}}(X)$ with $1\times {{\rm O}}(X) \subset {{\rm GL}}(2,F) \times {{\rm GO}}(X)$, we obtain an action of ${{\rm O}}(X)$ on $\Omega$: this is given by $h\cdot (x_1,x_2) = (hx_1,hx_2)$, where $h \in {{\rm O}}(X)$ and $(x_1,x_2) \in
\Omega$. This action is transitive. We obtain an action of ${{\rm O}}(X)$ on $\mathcal{S}(\Omega)$ by defining $(h \cdot \varphi) (x) = \varphi(h^{-1} \cdot x)$ for $h \in {{\rm O}}(X)$, $\varphi \in \mathcal{S}(\Omega)$ and $x \in \Omega$. This action is used in the next lemma.
\[Mnonvanishlemma\] Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. Let $\Omega=\Omega_S$ be as in , and assume that $\Omega$ is non-empty. Let $(\sigma_0,W_0)$ be an admissible representation of ${{\rm O}}(X)$, and let $M':\mathcal{S}(\Omega) \to W_0$ be a non-zero ${{\rm O}}(X)$ map. Let $z \in \Omega$. There exists a compact, open subset $C$ of $\Omega$ containing $z$ such that if $C_0$ is a compact, open subset of $\Omega$ such that $z \in C_0 \subset C$, then $M'(f_{C_0}) \neq 0$. Here, $f_{C_0}$ is the characteristic function of $C_0$.
Let $H$ be the subgroup of $h \in {{\rm O}}(X)$ such that $hz=z$. By 1.6 of [@BeZe1976], the map $H \backslash {{\rm O}}(X) \stackrel{\sim}{\longrightarrow} \Omega$ defined by $Hh \mapsto h^{-1} z$ is a homeomorphism, so that the map $\mathcal{S}(\Omega) \stackrel{\sim}{\longrightarrow} {\mathrm{c}\text{-}\mathrm{Ind}}_H^{{{\rm O}}(X)} {1}_H$ that sends $\varphi$ to the function $f$ such that $f(h) = \varphi(h^{-1}z)$ for $h \in {{\rm O}}(X)$ is an ${{\rm O}}(X)$ isomorphism. Via this isomorphism, we may regard $M'$ as defined on ${\mathrm{c}\text{-}\mathrm{Ind}}_H^{{{\rm O}}(X)} {1}_H$, and it will suffice to prove that there exists a compact, open neighborhood $C$ of the identity in ${{\rm O}}(X)$ such that if $C_0$ is a compact, open neighborhood of the identity in ${{\rm O}}(X)$ with $C_0 \subset C$, then $M'(f_{HC_0}) \neq 0$, where $f_{HC_0}$ is the characteristic function of $HC_0$. Since $\sigma_0$ is admissible, by 2.15 of [@BeZe1976] we have $(\sigma_1)^\vee \cong \sigma_0$ where $\sigma_1=\sigma_0^\vee$. Let $W_1$ be the space of $\sigma_1$. We may regard $M'$ as a non-zero element of ${{\rm Hom}}_{{{\rm O}}(X)}({\mathrm{c}\text{-}\mathrm{Ind}}_H^{{{\rm O}}(X)} {1}_H, \sigma_1^\vee)$. Now $H$ and ${{\rm O}}(X)$ are unimodular since both are orthogonal groups ($H$ is isomorphic to ${{\rm O}}(U^\perp)$, where $U=Fz_1+Fz_2$). By 2.29 of [@BeZe1976], there exists an element $\lambda$ of ${{\rm Hom}}_H(\sigma_1, {1}_H)$ such that $M'$ is given by $$M'(f)(v) = \int\limits_{H \backslash {{\rm O}}(X)} f(h) \lambda (\sigma_1(h) v)\, dh$$ for $f \in {\mathrm{c}\text{-}\mathrm{Ind}}_H^{{{\rm O}}(X)} {1}_H$ and $v \in W_1$. Since $M'$ is non-zero, there exists $v \in W_1$ such that $\lambda(v) \neq 0$. Let $C$ be a compact, open neighborhood of $1$ in ${{\rm O}}(X)$ such that $\sigma_1(h) v =v$ for $h \in C$. Let $C_0$ be a compact, open neighborhood of $1$ in ${{\rm O}}(X)$ such that $C_0 \subset C$. Then $$\begin{aligned}
M'(f_{HC_0})(v)&= \int\limits_{H \backslash {{\rm O}}(X)} f_{HC_0}(h) \lambda(\sigma_1(h) v)\, dh\\
&= \int\limits_{H \backslash H C_0} \lambda(\sigma_1(h) v)\, dh\\
&=\mathrm{vol}(H \backslash H C_0) \lambda (v),\end{aligned}$$ which is non-zero.
In the following theorem we mention the set $\mathcal{E}(z)$ of embeddings of $T$ into ${{\rm GSO}}(X)$; see , and .
\[fourdimthetatheorem\] Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. Let $A=A_S$, $T=T_S$, and $L=L_S$ be as in Sect. \[anotheralgebrasubsec\]. If $\dim X =4$, assume that $A$ is a field. Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)^+$, and let $(\sigma,W)$ be an irreducible, admissible representation of ${{\rm GO}}(X)$. Assume that there is a non-zero $R$ map $M:\mathcal{S}(X^2) \to \pi \otimes \sigma$. Let $\theta=\theta_S$ and let $\Lambda$ be a character of $T$.
1. If ${{\rm Hom}}_N(\pi, {{\mathbb C}}_\theta) \neq 0$, then $\Omega = \Omega_S$ is non-empty and $D=TN$ is contained in ${{\rm GSp}}(4,F)^+$.
2. Assume that ${{\rm Hom}}_N(\pi,{{\mathbb C}}_\theta) \neq 0$ so that $\Omega=\Omega_S$ is non-empty, and $D=TN \subset {{\rm GSp}}(4,F)^+$ by i). Assume further that ${{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda \otimes \theta})\neq 0$. Let $z \in \Omega$, and $\tau \in \mathcal{E}(z)$. There exists a non-zero vector $w \in W$ such that $$\sigma(\tau(t)) w = \Lambda^{-1}(t) w$$ for $t \in T$.
i\) The assumptions ${{\rm Hom}}_R(\mathcal{S}(X^2), V \otimes W )\neq 0$ and ${{\rm Hom}}_N(V,{{\mathbb C}}_\theta)\neq 0$ imply that ${{\rm Hom}}_N(\mathcal{S}(X^2), {{\mathbb C}}_\theta) \neq 0$. This means that $\mathcal{S}(X^2)_{N,\theta} \neq 0$; by Lemma \[Nthetaomegalemma\], we obtain $\Omega \neq {\varnothing}$. Lemma \[tingsp4lemma\] now also yields that $D \subset {{\rm GSp}}(4,F)^+$.
ii\) Let $\beta$ be a non-zero element of ${{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda \otimes \theta})$. We first claim that the composition $M'$ $$\mathcal{S}(X^2) \stackrel{M}{\longrightarrow} V \otimes W \stackrel{\beta \otimes \mathrm{id}}{\longrightarrow} {{\mathbb C}}_{\Lambda \otimes \theta} \otimes W$$ is non-zero. Let $\varphi \in \mathcal{S}(X^2)$ be such that $M(\varphi) \neq 0$, and write $$M(\varphi) = \sum_{\ell =1}^t v_\ell \otimes w_\ell$$ where $v_1,\dots,v_t \in V$ and $w_1,\dots, w_t \in W$. We may assume that the vectors $w_1,\dots, w_t$ are linearly independent and that $v_1 \neq 0$. Since $\beta$ is non-zero and $V$ is an irreducible representation of ${{\rm GSp}}(4,F)^+$, it follows that there exists $g \in {{\rm GSp}}(4,F)^+$ such that $\beta(\pi(g) v_1)\neq 0$. Let $h \in {{\rm GO}}(X)$ be such that $\lambda(h)=\lambda(g)$. Then $(g,h) \in R$. Since $M$ is an $R$-map, we have $$M(\omega(g,h) \varphi) = \sum_{\ell =1}^t \pi(g) v_\ell \otimes \sigma(h) w_\ell.$$ Applying $\beta \otimes \mathrm{id}$ to this equation, we get $$M'(\omega(g,h)\varphi) = \sum_{\ell=1}^t \beta (\pi(g) v_\ell) \otimes \sigma(h) w_\ell$$ in ${{\mathbb C}}_{\Lambda \otimes \theta} \otimes W$. Since the vectors $\sigma(h) w_1,\dots, \sigma(h) w_t$ are also linearly independent, and since $\beta (\pi(g) v_1)$ is non-zero, it follows that the vector $M'(\omega(g,h)\varphi)$ is non-zero; this proves $M' \neq 0$.
Next, the map $M'$ induces a non-zero map $\mathcal{S}(X^2)_{N,\theta} \to {{\mathbb C}}_{\Lambda \otimes \theta} \otimes W$, which we also denote by $M'$. Lemma \[Nthetaomegalemma\] implies that the restriction map yields an isomorphism $\mathcal{S}(X^2)_{N,\theta} \stackrel{\sim}{\longrightarrow}
\mathcal{S}(\Omega)$. Composing, we thus obtain a non-zero map $\mathcal{S}(\Omega) \to {{\mathbb C}}_{\Lambda \otimes \theta} \otimes W$, which we again denote by $M'$. Let $z \in \Omega$ and $\tau \in \mathcal{E}(z)$. By Lemma \[ghlemma\], the elements $(\left[ \begin{smallmatrix} &1\\1& \end{smallmatrix} \right] t \left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right],\tau(t))$ for $t \in T$ act on $\Omega$. We can regard these elements as acting on $\mathcal{S}(\Omega)$ via the definition $\big( (\left[ \begin{smallmatrix} &1\\1& \end{smallmatrix} \right] t \left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right],\tau(t)) \cdot \varphi \big)(x) = \varphi ( (\left[ \begin{smallmatrix} &1\\1& \end{smallmatrix} \right] t \left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right],\tau(t))^{-1} \cdot x)$ for $\varphi \in \mathcal{S}(\Omega)$ and $x \in \Omega$. Moreover, by the definition of $M'$ and , we have $$\label{M1transeq}
M'( (\left[ \begin{smallmatrix} &1\\1& \end{smallmatrix} \right] t \left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right],\tau(t))\cdot \varphi ) = (\det(t), {{\rm disc}}(X))_F \Lambda (t) \sigma(\tau(t)) M'(\varphi)$$ for $t \in T$ and $\varphi \in \mathcal{S}(\Omega)$. Let $C$ be the compact, open subset from Lemma \[Mnonvanishlemma\] with respect to $M'$ and $z$; note that the restriction of $\sigma$ to ${{\rm O}}(X)$ is admissible. By Lemma \[zstablemma\] there exists a compact, open subset $C_0$ of $C$ containing $z$ such that $(\left[ \begin{smallmatrix} &1\\1& \end{smallmatrix} \right] t \left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right],\tau(t)) \cdot C_0 = C_0$ for $t \in T$. Let $\varphi = f_{C_0}$. Then $(\left[ \begin{smallmatrix} &1\\1& \end{smallmatrix} \right] t \left[\begin{smallmatrix}&1\\1&\end{smallmatrix}\right],\tau(t)) \cdot \varphi = \varphi$ for $t \in T$, and by Lemma \[Mnonvanishlemma\], we have $M'(\varphi) \neq 0$. From we have $\sigma (\tau(t))M'(\varphi) = (\det(t), {{\rm disc}}(X))_F \Lambda(t)^{-1} M'(\varphi)=\chi_{L/F}({{\rm N}}_{L/F}(t)) \Lambda(t)^{-1} M'(\varphi)=\Lambda(t)^{-1} M'(\varphi)$ for $t \in T$. Since $M'(\varphi) \neq 0$, this proves ii).
Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and let $S$ be as in with $\det(S) \neq 0$. If $\Omega_S$ is non-empty and $z=(z_1,z_2) \in \Omega_S$, then we let ${{\rm O}}(X)_z$ be the subgroup of $h \in {{\rm O}}(X)$ such that $h(z_1)=z_1$ and $h(z_2)=z_2$.
\[scdimprop\] Let $(X,\langle\cdot,\cdot\rangle)$ be a non-degenerate symmetric bilinear space over $F$ satisfying , and assume that $\dim X=4$. Let $S$ be as in with $\det(S) \neq 0$. Assume that $\Omega_S$ is non-empty, and let $z$ be in $\Omega_S$. Let $\varPi$ and $\sigma$ be irreducible, admissible, supercuspidal representations of ${{\rm GSp}}(4,F)$ and ${{\rm GO}}(X)$, respectively. If ${{\rm Hom}}_R(\omega,\varPi \otimes \sigma) \neq 0$, then $$\label{Nthetadimeq}
\dim \varPi_{N,\theta_S} = \dim {{\rm Hom}}_{{{\rm O}}(X)_z} (\sigma, {{\mathbb C}}_1).$$
Assume that ${{\rm Hom}}_R(\omega,\varPi \otimes \sigma) \neq 0$. By Proposition 3.3 of [@Roberts2001] the restriction of $\sigma$ to ${{\rm O}}(X)$ is multiplicity-free. By Lemma 4.2 of [@Roberts1996] we have $\varPi|_{{{\rm Sp}}(4,F)} = \varPi_1 \oplus \dots \oplus \varPi_t$, where $\varPi_1,\dots,\varPi_t$ are mutually non-isomorphic, irreducible, admissible representations of ${{\rm Sp}}(4,F)$, $\sigma|_{{{\rm O}}(X)} = \sigma_1\oplus \dots \oplus \sigma_t$, where $\sigma_1,\dots, \sigma_t$ are mutually non-isomorphic, irreducible, admissible representations of ${{\rm O}}(X)$, with ${{\rm Hom}}_{{{\rm Sp}}(4,F) \times {{\rm O}}(X)} (\omega, \varPi_i \otimes \sigma_i) \neq 0$ for $i \in \{1,\dots,t\}$. Let $i \in \{1,\dots,t\}$; to prove the proposition, it will suffice to prove that $ (\varPi_i)_{N,\theta_S} \cong {{\rm Hom}}_{{{\rm O}}(X)_z}(\sigma_i,{{\mathbb C}}_1)$ as complex vector spaces. By Lemma 6.1 of [@Roberts1999], we have $\Theta(\sigma_i)_{N,\theta_S} \cong {{\rm Hom}}_{{{\rm O}}(X)_z}(\sigma_i^\vee,{{\mathbb C}}_1)$ as complex vector spaces. By 1) a) of the theorem on p. 69 of [@MoeglinVignerasWaldspurger1987], the representation $\Theta(\sigma_i)$ of ${{\rm Sp}}(4,F)$ is irreducible. By Theorem 2.1 of [@Kudla1986] we have $\varPi_i \cong \Theta(\sigma_i)$. Therefore, $(\varPi_i)_{N,\theta_S} \cong {{\rm Hom}}_{{{\rm O}}(X)_z}(\sigma_i^\vee,{{\mathbb C}}_1)$. By the first theorem on p. 91 of [@MoeglinVignerasWaldspurger1987], $\sigma_i^\vee \cong \sigma_i$. The proposition follows.
Representations of ${{\rm GO}}(X)$ {#goxsubsec}
-------------------
Let $m,\lambda \in F^\times$. By Lemma \[twodimclasslemma\], the group ${{\rm GSO}}(X_{m,\lambda})$ is abelian. It follows that the irreducible, admissible representations of ${{\rm GSO}}(X_{m,\lambda})$ are characters. To describe the representations of ${{\rm GO}}(X_{m,\lambda})$, let $\mu: {{\rm GSO}}(X_{m,\lambda}) \to {{\mathbb C}}^\times$ be a character. We recall that the map $\gamma$ from is a representative for the non-trivial coset of ${{\rm GSO}}(X_{m,\lambda})$ in ${{\rm GO}}(X_{m,\lambda})$. Define $\mu^\gamma: {{\rm GSO}}(X_{m,\lambda}) \to {{\mathbb C}}^\times$ by $\mu^\gamma(x) = \mu (\gamma x \gamma^{-1})$. If $\mu^\gamma \neq \mu$, then the representation ${{\rm ind}}^{{{\rm GO}}(X_{m,\lambda})}_{{{\rm GSO}}(X_{m,\lambda})} \mu$ is irreducible, and we define $$\mu^+ = {{\rm ind}}^{{{\rm GO}}(X_{m,\lambda})}_{{{\rm GSO}}(X_{m,\lambda})} \mu.$$ Assume that $\mu= \mu^\gamma$. Then the induced representation ${{\rm ind}}^{{{\rm GO}}(X_{m,\lambda})}_{{{\rm GSO}}(X_{m,\lambda})} \mu$ is reducible, and is the direct sum of the two extensions of $\mu$ to ${{\rm GO}}(X_{m,\lambda})$. We let $\mu^+$ be the extension of $\mu$ to ${{\rm GO}}(X_{m,\lambda})$ such that $\mu^+(\gamma)=1$ and let $\mu^-$ be the extension of $\mu$ to ${{\rm GO}}(X_{m,\lambda})$ such that $\mu^-(\gamma)=-1$. Every irreducible, admissible representation of ${{\rm GO}}(X_{m,\lambda})$ is of the form $\mu^+$ or $\mu^-$ for some character $\mu$ of ${{\rm GSO}}(X_{m,\lambda})$. We will sometimes identify characters of ${{\rm GSO}}(X_{m,\lambda})$ with characters of $T_{\left[ \begin{smallmatrix} 1 & \\ & -m \end{smallmatrix} \right]}$, via , and in turn identify characters of $T_{\left[ \begin{smallmatrix} 1 & \\ & -m \end{smallmatrix} \right]}$ with characters of $L^\times$, via . Here $L$ is associated to $m$, as in Sect. \[quadextsubsec\], so that $L=F(\sqrt{m})$ if $m \notin F^{\times 2}$, and $L = F\times F$ if $m \in F^{\times 2}$.
Next, let $(X,\langle\cdot,\cdot\rangle)$ be either $(X_{{{\rm M}}_2},\langle\cdot,\cdot\rangle_{{{\rm M}}_2})$ or $(X_H,\langle\cdot,\cdot\rangle_H)$, as in or . If $X=X_{{{\rm M}}_2}$, set $G={{\rm GL}}(2,F)$, and if $X=X_{H}$, set $G=H^\times$. Let $h_0$ be the element of ${{\rm GO}}(X)$ that maps $x$ to $x^*$; then $h_0$ represents the non-trivial coset of ${{\rm GSO}}(X)$ in ${{\rm GO}}(X)$. Let $\pi_1$ and $\pi_2$ be irreducible, admissible representations of $G$ with the same central character. Via the exact sequences and , the representations $(\pi_1,V_1)$ and $(\pi_2,V_2)$ define an irreducible, admissible representation $\pi_1\otimes\pi_2$ of ${{\rm GSO}}(X)$ which has space $V_1 \otimes V_2$ and action given by the formula $(\pi_1\otimes\pi_2)(\rho(g_1,g_2)) =\pi_1(g_1) \otimes \pi_2(g_2)$ for $g_1,g_2 \in G$. If $\pi_1$ and $\pi_2$ are not isomorphic, then $\pi_1\otimes\pi_2$ induces irreducibly to ${{\rm GO}}(X)$; we denote this induced representation by $(\pi_1\otimes\pi_2)^+$. Assume that $\pi_1$ and $\pi_2$ are isomorphic. In this case the representation $\pi_1\otimes\pi_2$ does not induce irreducibly to ${{\rm GO}}(X)$, but instead has two extensions $\sigma_1$ and $\sigma_2$ to representations of ${{\rm GO}}(X)$. Moreover, the space of linear forms on $\pi_1\otimes\pi_2$ that are invariant under the subgroup of ${{\rm GSO}}(X)$ of elements $\rho(g,g^{*-1})$ for $g \in G$ is one-dimensional. Let $\lambda$ be a non-zero functional in this space. Then $\lambda\circ \sigma_i(h_0)$ is another such functional, so that $\lambda\circ \sigma_i(h_0)=\varepsilon_i\lambda$ with $\{\varepsilon_1,\varepsilon_2\}=\{1,-1\}$. The representation $\sigma_i$ for which $\varepsilon_i=1$ is denoted by $(\pi_1\otimes\pi_2)^+$, and the representation $\sigma_j$ for which $\varepsilon_j=-1$ is denoted by $(\pi_1\otimes\pi_2)^-$. See [@Roberts1999] for details.
\[Ozdimeq\] Let $H$ be as in and let $X_H$ be as in . Let $S$ be as in with $-c \notin F^{\times 2}$; we may assume that $i^2 = -c$, as in . Let $z=(z_1,z_2)$ be as in , so that $z \in \Omega_S$. Set $L=F(\sqrt{-c})$. We have $$\dim {{\rm Hom}}_{{{\rm O}}(X_H)_z} (\sigma_0,{{\mathbb C}}_1) = 1$$ for the following families of irreducible, admissible representations $\sigma_0$ of ${{\rm GO}}(X_H)$:
1. $\sigma_0 = (\sigma 1_{H^\times} \otimes \sigma \chi_{L/F})^+$;
2. $\sigma_0 = (\sigma1_{H^\times} \otimes \sigma \pi^{\mathrm{JL}})^+$.
Here, $\sigma$ is a character of $F^\times$, and $\pi$ is a supercuspidal, irreducible, admissible representation of ${{\rm GL}}(2,F)$ with trivial central character such that ${{\rm Hom}}_{L^\times}(\pi^{\mathrm{JL}},{{\mathbb C}}_1) \neq 0$.
We begin by describing ${{\rm O}}(X_H)_z$. Define $g_1:X_H \to X_H$ by $$g_1(1) =1, \quad g_1(i) = i, \quad g_1(j) = j, \quad g_1(k) = -k.$$ Evidently, $g_1 \in {{\rm O}}(X_H)_z$; moreover, $\det(g_1)=-1$. It follows that ${{\rm O}}(X_H)_z = ({{\rm SO}}(X_H) \cap {{\rm O}}(X_H)_z) \sqcup ({{\rm SO}}(X_H) \cap {{\rm O}}(X_H)_z) g_1$. Using that $z_1=1$, $z_2=i$, and the fact that every element of ${{\rm SO}}(X_H)$ is of the form $\rho(h_1,h_2)$ for some $h_1,h_2 \in H^\times$, a calculation shows that ${{\rm SO}}(X_H) \cap {{\rm O}}(X_H)_z$ is $\{\rho(h^*{}^{-1},h): h \in (F+Fi)^\times = L^\times \}$.
i\) Since $\sigma_0|_{{{\rm O}}(X_H)} = (1_{H^\times} \otimes \chi_{L/F})^+$, we may assume that $\sigma=1$. A model for $\sigma_0$ is ${{\mathbb C}}\oplus {{\mathbb C}}$, with action defined by $$\begin{aligned}
\sigma_0(\rho(h_1,h_2))(w_1 \oplus w_2) & = \chi_{L/F}({{\rm N}}(h_2))w_1 \oplus \chi_{L/F}({{\rm N}}(h_1)) w_2, \\
\sigma_0(*)(w_1 \oplus w_2 ) & = w_2 \oplus w_1\end{aligned}$$ for $w_1,w_2 \in {{\mathbb C}}$ and $h_1,h_2 \in H^\times$; here, $*$ is the canonical involution of $H$, regarded as an element of ${{\rm O}}(X_H)$ with determinant $-1$. Using that $g_1 = * \circ \rho(k^{*-1},k)$, we find that the restriction of $\sigma_0$ to ${{\rm O}}(X_H)_z$ is given by $$\begin{aligned}
\sigma_0(\rho(h^{*-1},h)) (w_1 \oplus w_2)& = w_1 \oplus w_2, \\
\sigma_0(g_1)(w_1 \oplus w_2) & = \chi_{L/F}({{\rm N}}(k))(w_2 \oplus w_1)\end{aligned}$$ for $w_1,w_2 \in {{\mathbb C}}$ and $h \in (F+Fi)^\times = L^\times$. Therefore, $\sigma_0|_{{{\rm O}}(X_H)_z}$ is the direct sum of the trivial character of ${{\rm O}}(X_H)_z$ and the non-trivial character of ${{\rm O}}(X_H)_z$ that is trivial on ${{\rm SO}}(X_H) \cap {{\rm O}}(X_H)_z$ and sends $g_1$ to $-1$. This implies that ${{\rm Hom}}_{{{\rm O}}(X_H)_z}(\sigma_0,{{\mathbb C}}_1)$ is one-dimensional.
ii\) Again, we may assume that $\sigma=1$. Let $V$ be the space of $\pi^{\mathrm{JL}}$. As a model for $\sigma_0$ we take $V\oplus V$ with action of ${{\rm GO}}(X_H)$ defined by $$\begin{aligned}
\sigma_0(\rho(h_1,h_2))(v_1 \oplus v_2)& = \pi^{\mathrm{JL}}(h_2)v_1 \oplus \pi^{\mathrm{JL}}(h_1)v_2, \\
\sigma_0(*)(v_1 \oplus v_2) & = v_2 \oplus v_1\end{aligned}$$ for $h_1,h_2 \in H^\times$ and $v_1,v_2 \in V$. By hypothesis, ${{\rm Hom}}_{L^\times}(\pi^{\mathrm{JL}},{{\mathbb C}}_1)\neq 0$. This space is one-dimensional; see Sect. \[waldfuncsubsec\]. We have $kLk^{-1}=L$; in fact, conjugation by $k$ on $L$ is the non-trivial element of ${{\rm Gal}}(L/F)$. Since ${{\rm Hom}}_{L^\times}(\pi^{\mathrm{JL}},{{\mathbb C}}_1)$ is one-dimensional, there exists $\varepsilon \in \{ \pm 1\}$ such that $\lambda \circ \pi^{\mathrm{JL}}(k) = \varepsilon \lambda$ for $\lambda \in {{\rm Hom}}_{L^\times}(\pi^{\mathrm{JL}},{{\mathbb C}}_1)$. Define a map $${{\rm Hom}}_{L^\times}(\pi^{\mathrm{JL}},{{\mathbb C}}_1) \longrightarrow {{\rm Hom}}_{{{\rm O}}(X_H)_z}(\sigma_0,{{\mathbb C}}_1)$$ by sending $\lambda$ to $\Lambda$, where $\Lambda$ is defined by $\Lambda(v_1\oplus v_2)=\lambda(v_1)+\varepsilon\lambda(v_2)$ for $v_1,v_2 \in V$. A computation using the fact that $g_1 = * \circ \rho(k^{*-1},k)$ shows that this map is well defined. It is straightforward to verify that this map is injective and surjective, so that $${{\rm Hom}}_{L^\times}(\pi^{\mathrm{JL}},{{\mathbb C}}_1)\cong {{\rm Hom}}_{{{\rm O}}(X_H)_z}(\sigma_0,{{\mathbb C}}_1).$$ Hence, ${{\rm Hom}}_{{{\rm O}}(X_H)_z}(\sigma_0,{{\mathbb C}}_1)$ is one-dimensional.
${{\rm GO}}(X)$ and ${{\rm GSp}}(4)$ {#thetasubsec}
-----
In this section we will gather together some information about the theta correspondence between ${{\rm GO}}(X)$ and ${{\rm GSp}}(4)$ when $X$ is as in . When $\dim(X)=4$, we recall in Theorem \[Ganthetatheorem\] some results from [@GaAt2011] and [@GaTa2011]. When $\dim(X)=2$, we calculate two theta lifts, producing representations of type Vd and IXb, in Proposition \[thetaliftprop\]. This calculation uses $P_3$-theory. We include this material because, to the best of our knowledge, such a computation is absent from the literature.
We let $R_Q$ be the group of elements of $R$ of the form $$(\begin{bmatrix} *&*&*&*\\&*&*&*\\&*&*&*\\&&&*\end{bmatrix},*).$$ Let $Z^J$ be the group defined in .
\[jacanisosteplemma\] Let $(X,\langle\cdot,\cdot\rangle)$ be an even-dimensional symmetric bilinear space satisfying ; assume additionally that $X$ is anisotropic. There is an isomorphism of complex vector spaces $$\label{firstisobackeq}
T_1: \mathcal{S}(X^2)_{Z^J }\;\stackrel{\sim}{\longrightarrow}\;\mathcal{S}(X)$$ that is given by $$T_1 \big( \varphi + \mathcal{S}(X^2) (Z^J) \big) (x) = \varphi(x,0)$$ for $\varphi$ in $\mathcal{S}(X^2)$ and $x$ in $X$. The subgroup $R_Q$ of $R$ acts on the quotient $\mathcal{S}(X^2)_{Z^J } $. Transferring this action to $\mathcal{S}(X)$ via $T_1$, the formulas for the resulting action are $$\begin{aligned}
( \begin{bmatrix} t &&& \\ &a&b& \\ &c&d& \\ &&& \lambda(h) t^{-1} \end{bmatrix}, h) \cdot \varphi & = |\lambda(h)|^{-\dim(X)/4}\,(t,{{\rm disc}}(X))_F\,|t|^{\dim(X)/2}\,\omega_1(\begin{bmatrix} a&b \\ c&d \end{bmatrix},h) \varphi, \label{transeq1} \\
(\begin{bmatrix} 1 & x & y & z \\ &1&&y \\ &&1&-x \\ &&&1 \end{bmatrix} ,1) \cdot \varphi & = \varphi \label{transeq2}
\end{aligned}$$ for $\varphi$ in $\mathcal{S}(X)$, $x,y$ and $z$ in $F$, $t$ in $F^\times$, and $g=\left[\begin{smallmatrix} a&b \\ c&d \end{smallmatrix} \right]$ in ${{\rm GL}}(2,F)$ and $h$ in ${{\rm GO}}(X)$ with $\lambda (h) = \det(g)$.
We first claim that $$\label{jacanisoeq1}
\mathcal{S}(X^2)(Z^J)
=
\{ \varphi \in \mathcal{S}(X^2): \varphi(X \times 0) = 0 \}.$$ Let $\varphi$ be in $\mathcal{S}(X^2)(Z^J)$. By the lemma in 2.33 of [@BeZe1976] there exists a positive integer $n$ so that $$\label{jacanisoeq3}
\int\limits_{{\mathfrak p}^{-n}} \omega(\begin{bmatrix} 1&&&b \\ &1&& \\ &&1& \\ &&&1 \end{bmatrix},1)\varphi\, db =0.$$ Evaluating at $(x,0)$ and using shows that $\varphi(X\times0)=0$. Conversely, assume that $\varphi$ is contained in the right hand side of . For any integer $k$ let $$\label{Lkdefeq}
L_k=\{x \in X: \langle x, x \rangle \in {\mathfrak p}^k \}.$$ It is known that $L_k$ is a lattice, i.e., it is a compact and open ${{\mathfrak o}}$ submodule of $X$; see the proof of Theorem 91:1 of [@OMeara1973]. Any lattice is free of rank $\dim X$ as a ${{\mathfrak o}}$ module. Since $\varphi(X\times0)=0$, there exists a positive integer $n$ such that $\varphi(X\times L_n)=0$. We claim that holds. Let $x_1$ and $x_2$ be in $X$. Evaluating at $(x_1,x_2)$ gives $$\big( \int\limits_{{\mathfrak p}^{-n}} \psi(b \langle x_2,x_2 \rangle)\, db \big) \varphi(x_1,x_2).$$ This is zero if $x_2$ is in $L_n$ because $\varphi(X \times L_n)=0$. Assume that $x_2$ is not in $L_n$. By the definition of $L_n$, we have $\langle x_2, x_2 \rangle \notin {\mathfrak p}^n$. This implies that $$\int\limits_{{\mathfrak p}^{-n}} \psi(b \langle x_2,x_2 \rangle)\, db =0,$$ proving our claim. This completes the proof of .
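The vanishing of this last integral is the standard orthogonality statement for characters of compact groups: assuming the usual normalization that $\psi$ has conductor ${{\mathfrak o}}$, the condition $\langle x_2,x_2\rangle\notin{\mathfrak p}^n$ makes $b\mapsto\psi(b\langle x_2,x_2\rangle)$ a non-trivial character of ${\mathfrak p}^{-n}$, and $$\int\limits_{{\mathfrak p}^{-n}}\chi(b)\,db=0$$ for any non-trivial character $\chi$ of the compact group ${\mathfrak p}^{-n}$.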
Using , it is easy to verify that the map $T_1$ is an isomorphism of vector spaces. Equation follows from Lemma \[restrictionlemma\], and equation follows from and .
\[thetaliftprop\] Let $m \in F^\times$, and let $(X_{m,1},\langle\cdot,\cdot\rangle_{m,1})$ be as in . Assume that $m\notin F^{\times2}$, so that $X_{m,1}$ is anisotropic. Let $E=F(\sqrt{m})$, and identify characters of ${{\rm GSO}}(X_{m,1})$ and characters of $E^\times$ via and . Let $\chi_{E/F}$ be the quadratic character associated to $E$. Let $\varPi$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$, and let $\sigma$ be an irreducible, admissible representation of ${{\rm GO}}(X_{m,1})$.
1. Assume that $\sigma=\mu^+$ with $\mu=\mu\circ\gamma$, so that $\mu=\alpha\circ{{\rm N}}_{E/F}$ for a character $\alpha$ of $F^\times$. Then ${{\rm Hom}}_R(\omega, \varPi^\vee \otimes \sigma) \neq 0$ if and only if $\varPi=L(\nu \chi_{E/F}, \chi_{E/F} \rtimes \nu^{-1/2} \alpha)$ (type Vd).
2. Assume that $\sigma=\mu^+= {{\rm ind}}^{{{\rm GO}}(X_{m,1})}_{{{\rm GSO}}(X_{m,1})}(\mu)$ with $\mu \neq \mu \circ \gamma$. Then ${{\rm Hom}}_R(\omega, \varPi^\vee \otimes \sigma) \neq 0$ if and only if $\varPi=L(\nu \chi_{E/F}, \nu^{-1/2} \pi(\mu))$ (type IXb). Here, $\pi(\mu)$ is the supercuspidal, irreducible, admissible representation of ${{\rm GL}}(2,F)$ associated to $\mu$.
Let $(\sigma,W)$ be as in i) or ii). In the case of i), set $\pi(\mu)=\alpha\times\alpha\chi_{E/F}$. Then ${{\rm Hom}}_{R_1}(\omega_1,\pi(\mu)^\vee\otimes\sigma)\neq0$, and $\pi(\mu)$ is the unique irreducible, admissible representation of ${{\rm GL}}(2,F)$ with this property, by Theorem 4.6 of [@JacquetLanglands1970].
Let $(\varPi',V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$ such that ${{\rm Hom}}_R(\omega, \varPi' \otimes \sigma) \neq 0$. Let $T$ be a non-zero element of this space. The non-vanishing of $T$ implies that the central characters of $\varPi'$ and $\sigma$ satisfy $$\label{thetaliftpropeq2}
\omega_{\varPi'}=\omega_\sigma^{-1}=(\mu|_{F^\times})^{-1}.$$ We first claim that $V$ is non-supercuspidal. By reasoning as in [@GK], there exist $\lambda_1,\dots, \lambda_t$ in $F^\times$ and an irreducible ${{\rm Sp}}(4,F)$ subspace $V_0$ of $V$ such that $$V=V_1\oplus \dots\oplus V_t,$$ where $$\label{thetaliftpropeq1}
V_1 = \varPi'(\begin{bmatrix} 1&&& \\ &1&& \\ &&\lambda_1& \\ &&&\lambda_1 \end{bmatrix}) V_0, \quad \dots, \quad
V_t = \varPi'(\begin{bmatrix} 1&&& \\ &1&& \\ &&\lambda_t& \\ &&&\lambda_t \end{bmatrix}) V_0.$$ Similarly, there exist irreducible ${{\rm O}}(X)$ subspaces $W_1,\dots, W_r$ of $W$ such that $$W=W_1 \oplus \dots \oplus W_r.$$ There exists an $i$ and a $j$ such that ${{\rm Hom}}_{{{\rm Sp}}(4,F)\times{{\rm O}}(X)}(\omega,V_i\otimes W_j)\neq0$. As in the proof of Lemma 4.2 of [@Roberts1996], there is an irreducible constituent $U_1$ of $\pi(\mu)^\vee$ such that ${{\rm Hom}}_{{{\rm O}}(X)}(\omega_1,U_1\otimes W_j)\neq0$. By Theorem 4.4 of [@Roberts1998], the representation $V_i$ is non-supercuspidal, so that $V$ is non-supercuspidal.
Since $V$ is non-supercuspidal, we have $V_{Z^J}\neq0$ by Tables A.5 and A.6 of [@NF] (see the comment after Theorem \[finitelength\]). We claim next that ${{\rm Hom}}_{R_Q}(\mathcal{S}(X^2)_{Z^J},V_{Z^J}\otimes W)\neq0$. It follows from that $(V_i)_{Z^J}\neq0$. Let $p_i:\:V\to V_i$ and $q_j:\:W\to W_j$ be the projections. These maps are ${{\rm Sp}}(4,F)$ and ${{\rm O}}(X)$ maps, respectively. The composition $$\mathcal{S}(X^2) \stackrel{T}{\longrightarrow} V \otimes W \stackrel{p_i \otimes q_j}{\longrightarrow} V_i \otimes W_j \longrightarrow (V_i)_{Z^J} \otimes W_j$$ is non-zero and surjective; note that $V_i\otimes W_j$ is irreducible. The commutativity of the diagram $$\begin{CD}
\mathcal{S}(X^2) @>T>> V\otimes W @>p_i \otimes q_j >> V_i \otimes W_j \\
@. @VVV @VVV\\
@. V_{Z^J } \otimes W @>p_i \otimes q_j>> (V_i)_{Z^J } \otimes W_j
\end{CD}$$ implies our claim that ${{\rm Hom}}_{R_Q}(\mathcal{S}(X^2)_{Z^J},V_{Z^J}\otimes W)\neq0$.
Let $R_{\bar Q}$ be the subgroup of $R_Q$ consisting of the elements of the form $$(\begin{bmatrix} *&*&*&* \\ &*&*&* \\ &*&*&* \\ &&&1 \end{bmatrix}, *).$$ Let $R_{P_3}$ be the subgroup of $P_3 \times {{\rm GO}}(X)$ consisting of the elements of the form $$(\begin{bmatrix} a&b&x \\ c&d&y \\ &&1 \end{bmatrix},h),\qquad ad-bc=\lambda(h).$$ There is a homomorphism from $R_{\bar Q}$ to $R_{P_3}$ given by $$(\begin{bmatrix} *&*&*&* \\ &a&b&x \\ &c&d&y \\ &&&1 \end{bmatrix},h) \mapsto (\begin{bmatrix} a&b&x \\ c&d&y \\ &&1 \end{bmatrix},h)$$ for $\begin{bmatrix} a&b \\ c&d \end{bmatrix}$ in ${{\rm GL}}(2,F)$, $x$ and $y$ in $F$, and $h$ in ${{\rm GO}}(X)$ with $ad-bc=\lambda(h)$. We consider $Z^J$ a subgroup of $R_{\bar Q}$ via $z\mapsto(z,1)$. The above homomorphism then induces an isomorphism $R_{\bar Q}/Z^J\cong R_{P_3}$.
We restrict the $R_Q$ modules $\mathcal{S}(X^2)_{Z^J}$ and $V_{Z^J}\otimes W$ to $R_{\bar Q}$. The subgroup $Z^J$ of $R_{\bar Q}$ acts trivially, so that these spaces may be viewed as $R_{P_3}$ modules.
Let $\chi$ be a character of $F^\times$. We assert that $$\begin{aligned}
&\mathrm{Hom}_{R_{P_3}}(\mathcal{S}(X^2)_{Z^J}, \tau_{{{\rm GL}}(0)}^{P_3}(1) \otimes \sigma)=0\label{gl0gl1noeq1},\\
&\mathrm{Hom}_{R_{P_3}}(\mathcal{S}(X^2)_{Z^J}, \tau_{{{\rm GL}}(1)}^{P_3}(\chi) \otimes \sigma) = 0. \label{gl0gl1noeq2}\end{aligned}$$ Let $\tau$ be $\tau_{{{\rm GL}}(0)}^{P_3}(1)$ or $\tau_{{{\rm GL}}(1)}^{P_3}(\chi)$. Assume that or is non-zero; we will obtain a contradiction. Let $S$ be a non-zero element of or . Since $S$ is non-zero, there exists $\varphi$ in $\mathcal{S}(X^2)_{Z^J}$ such that $S(\varphi)$ is non-zero. Write $S(\varphi)=\sum_{i=1}^t f_i \otimes w_i$ for some $f_1,\dots,f_t$ in the standard space of $\tau$ and $w_1,\dots,w_t$ in $W$. The elements $f_1,\dots,f_t$ are functions from $P_3$ to ${{\mathbb C}}$ such that $$f_i(\begin{bmatrix} 1&&x\\&1&y\\ &&1 \end{bmatrix} p) = \psi(y) f_i(p)$$ for $x$ and $y$ in $F$, $p$ in $P_3$, and $i=1,\dots,t$. We may assume that the vectors $w_1,\dots, w_t$ are linearly independent, and that there exists $p$ in $P_3$ such that $f_1(p)$ is non-zero. Using the transformation properties of $S$ and $f_1$, we may assume that $$p=\begin{bmatrix}a\\&1\\ &&1 \end{bmatrix}.$$ Let $\lambda: \sigma \to {{\mathbb C}}$ be a linear functional such that $\lambda (w_1)=1$ and $\lambda(w_2) = \dots = \lambda(w_t) =0$, and let $e:\tau \to {{\mathbb C}}$ be the linear functional that sends $f$ to $f(p)$. The composition $(e \otimes \lambda) \circ S$ is non-zero on $\varphi$. On the other hand, using , for $y$ in $F$ we have $$\begin{aligned}
\big( (e \otimes \lambda) \circ S \big) ( \varphi )
&=\big( (e \otimes \lambda) \circ S \big) \big( ( \begin{bmatrix}1&& \\ &1&y \\ &&1 \end{bmatrix},1) \varphi \big)\\
&= (e \otimes \lambda) \big( (\begin{bmatrix} 1&& \\ &1&y \\ &&1 \end{bmatrix},1) \cdot S (\varphi)\big)\\
&= (e \otimes \lambda) \big( (\begin{bmatrix} 1&& \\ &1&y \\ &&1 \end{bmatrix},1) \cdot \sum_{i=1}^t f_i \otimes w_i \big)\\
&= \sum_{i=1}^t f_i(p\begin{bmatrix} 1&& \\ &1&y \\ &&1 \end{bmatrix}) \lambda(w_i) \\
&=\psi(y) f_1(p)\\
&=\psi(y) \big( (e \otimes \lambda) \circ S \big) ( \varphi ).\end{aligned}$$ This is a contradiction since $\big( (e \otimes \lambda) \circ S \big) ( \varphi )$ is non-zero, and there exists $y$ in $F$ such that $\psi(y) \neq 1$. This concludes the proof of and .
It follows from and and the non-vanishing of ${{\rm Hom}}_{R_{P_3}}(\mathcal{S}(X^2)_{Z^J},V_{Z^J}\otimes W)$ that there exists an irreducible, admissible representation $\rho$ of ${{\rm GL}}(2,F)$ that occurs in the $P_3$ filtration of $V_{Z^J}$ (Theorem \[finitelength\]) such that ${{\rm Hom}}_{R_{P_3}}(\mathcal{S}(X^2)_{Z^J},\tau_{{{\rm GL}}(2)}^{P_3}(\rho)\otimes W)\neq0$. It follows from that ${{\rm Hom}}_{R_1}(\omega_1,\nu^{-1/2}\chi_{E/F}\rho\otimes\sigma)\neq0$. By the uniqueness stated in the first paragraph of this proof, it follows that $$\label{thetaliftpropeq3}
\rho=\nu^{1/2}\chi_{E/F}\,\pi(\mu)^\vee.$$ As a consequence, $\omega_\rho=\nu(\mu|_{F^\times})^{-1}\chi_{E/F}$. Together with , it follows that $$\label{thetaliftpropeq4}
\omega_{\varPi'}=\chi_{E/F}\nu^{-1}\omega_\rho.$$ Going through Table A.5 of [@NF], we see that only the $\varPi'=\varPi^\vee$ with $\varPi$ as asserted in i) and ii) satisfy both and . (Observe the remark made after Theorem \[finitelength\].)
Conversely, assume that $\varPi$ is as in i) or ii). Since ${{\rm Hom}}_{R_1}(\omega_1,\pi(\mu)^\vee\otimes\sigma)\neq0$, we have ${{\rm Hom}}_{{{\rm O}}(X)}(\mathcal{S}(X^2),\sigma)\neq0$ by, for example, Remarque b) on p. 67 of [@MoeglinVignerasWaldspurger1987]. Arguing as in Theorem 4.4 of [@Roberts1996], there exists some irreducible, admissible representation $\varPi'$ of ${{\rm GSp}}(4,F)$ such that ${{\rm Hom}}_R(\omega, \varPi' \otimes \sigma) \neq 0$. By what we proved above, $\varPi'=\varPi^\vee$. This concludes the proof.
\[Ganthetatheorem\] Let $(X,\langle\cdot,\cdot\rangle)$ be either $(X_{{{\rm M}}_2},\langle\cdot,\cdot\rangle_{{{\rm M}}_2})$ or $(X_H,\langle\cdot,\cdot\rangle_H)$, as in or . If $X=X_{{{\rm M}}_2}$, set $G={{\rm GL}}(2,F)$, and if $X=X_{H}$, set $G=H^\times$. Let $\varPi$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$, and let $\pi_1$ and $\pi_2$ be irreducible, admissible representations of $G$ with the same central character. We have $${{\rm Hom}}_R(\omega, \varPi^\vee \otimes (\pi_1 \otimes \pi_2)^+) \neq 0$$ for $\varPi$, $\pi_1$ and $\pi_2$ as in the following table: $$\renewcommand{\arraystretch}{1.1}
\begin{array}{clccc}
\toprule
\multicolumn{2}{c}{\text{type of $\varPi$}}&\varPi&\pi_1 & \pi_2\\
\toprule
\mathrm{I}&& \chi_1 \times \chi_2 \rtimes \sigma & \sigma \chi_1 \chi_2 \times \sigma & \sigma \chi_1 \times \sigma \chi_2 \\
\midrule
\mathrm{II}&\mathrm{a}&\chi{{\rm St}}_{{{\rm GL}}(2)} \rtimes \sigma & \sigma \chi^2 \times \sigma & \sigma \chi {{\rm St}}_{{{\rm GL}}(2)}\\
\cmidrule{2-5}
&\mathrm{b}&\chi 1_{{{\rm GL}}(2)} \rtimes \sigma & \sigma \chi^2 \times \sigma & \sigma \chi 1_{{{\rm GL}}(2)} \\
\midrule
\mathrm{III}& \mathrm{b} & \chi \rtimes \sigma 1_{{{\rm GSp}}(2)} & \sigma \chi \nu^{1/2} \times \sigma \nu^{-1/2} & \sigma \chi \nu^{-1/2} \times \sigma \nu^{1/2}\\
\midrule
\mathrm{IV} & \mathrm{c}& L(\nu^{3/2}{{\rm St}}_{{{\rm GL}}(2)},\nu^{-3/2}\sigma) & \sigma\nu^{3/2} \times \sigma \nu^{-3/2} & \sigma {{\rm St}}_{{{\rm GL}}(2)} \\
\cmidrule{2-5}
&\mathrm{d}& \sigma 1_{{{\rm GSp}}(4)} & \sigma\nu^{3/2} \times \sigma \nu^{-3/2} & \sigma 1_{{{\rm GL}}(2)} \\
\midrule
\mathrm{V}&\mathrm{a} & \delta([\xi, \nu \xi],\nu^{-1/2}\sigma) & \sigma {{\rm St}}_{{{\rm GL}}(2)} & \sigma \xi {{\rm St}}_{{{\rm GL}}(2)} \\
\cmidrule{2-5}
&\mathrm{a^*} & \delta^*([\xi, \nu \xi],\nu^{-1/2}\sigma) & \sigma 1_{H^\times} & \sigma \xi 1_{H^\times} \\
\cmidrule{2-5}
&\mathrm{b} & L(\nu^{1/2} \xi {{\rm St}}_{{{\rm GL}}(2)}, \nu^{-1/2} \sigma) & \sigma 1_{{{\rm GL}}(2)} & \sigma \xi {{\rm St}}_{{{\rm GL}}(2)} \\
\cmidrule{2-5}
&\mathrm{d} & L(\nu \xi \rtimes \nu^{-1/2} \sigma) & \sigma 1_{{{\rm GL}}(2)} & \sigma \xi 1_{{{\rm GL}}(2)} \\
\midrule
\mathrm{VI} &\mathrm{a} & \tau(S,\nu^{-1/2}\sigma) & \sigma {{\rm St}}_{{{\rm GL}}(2)} & \sigma {{\rm St}}_{{{\rm GL}}(2)} \\
\cmidrule{2-5}
&\mathrm{b} & \tau(T,\nu^{-1/2}\sigma) & \sigma 1_{H^\times} & \sigma 1_{H^\times} \\
\cmidrule{2-5}
&\mathrm{c} & L(\nu^{1/2}{{\rm St}}_{{{\rm GL}}(2)},\nu^{-1/2}\sigma) & \sigma 1_{{{\rm GL}}(2)} & \sigma {{\rm St}}_{{{\rm GL}}(2)} \\
\cmidrule{2-5}
&\mathrm{d} & L(\nu,1_{F^\times} \rtimes \nu^{-1/2} \sigma ) & \sigma 1_{{{\rm GL}}(2)} & \sigma 1_{{{\rm GL}}(2)} \\
\midrule
\mathrm{VIII} & \mathrm{a} & \tau(S,\pi) & \pi & \pi \\
\cmidrule{2-5}
& \mathrm{b} & \tau(T,\pi) & \pi^{\mathrm{JL}} & \pi^{\mathrm{JL}} \\
\midrule
\mathrm{X} && \pi \rtimes \sigma &\sigma \omega_\pi \times \sigma &\pi \\
\midrule
\mathrm{XI} & \mathrm{a} & \delta(\nu^{1/2} \pi , \nu^{-1/2} \sigma) & \sigma {{\rm St}}_{{{\rm GL}}(2)}& \sigma \pi \\
\cmidrule{2-5}
& \mathrm{a^*} & \delta^*(\nu^{1/2} \pi , \nu^{-1/2} \sigma) & \sigma 1_{H^\times} & \sigma \pi^{\mathrm{JL}} \\
\cmidrule{2-5}
& \mathrm{b} & L(\nu^{1/2} \pi, \nu^{-1/2} \sigma) & \sigma 1_{{{\rm GL}}(2)} & \sigma \pi\\
\bottomrule
\end{array}$$ The notation $\pi^{\mathrm{JL}}$ in the table denotes the Jacquet-Langlands lifting of the supercuspidal representation $\pi$ of ${{\rm GL}}(2,F)$ to a representation of $H^\times$. See Sect. \[goxsubsec\] for the definition of the $+$ representations.
Applications {#thetaapplicationssec}
------------
We now apply Theorem \[fourdimthetatheorem\] along with knowledge of the theta correspondences of the previous section to obtain results about Bessel functionals.
\[fourdimthetatheoremcor1\] Let $(X,\langle\cdot,\cdot\rangle)$ be either $(X_{{{\rm M}}_2},\langle\cdot,\cdot\rangle_{{{\rm M}}_2})$ or $(X_H,\langle\cdot,\cdot\rangle_H)$, as in or . If $X=X_{{{\rm M}}_2}$, set $G={{\rm GL}}(2,F)$, and if $X=X_{H}$, set $G=H^\times$. Let $\varPi$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$, and let $\pi_1$ and $\pi_2$ be irreducible, admissible representations of $G$ with the same central character. Assume that $${{\rm Hom}}_R(\omega, \varPi^\vee \otimes (\pi_1 \otimes \pi_2)^+) \neq 0$$ and that $\varPi$ has a non-split $(\Lambda,\theta)$-Bessel functional with $\theta=\theta_S$. Then $${{\rm Hom}}_{T}(\pi_1,{{\mathbb C}}_\Lambda)\neq0 \quad \text{and}\quad {{\rm Hom}}_{T}(\pi_2,{{\mathbb C}}_\Lambda)\neq0$$ where $T=T_S$.
The assumption that the Bessel functional is non-split means that $A=A_S$ is a field. By Sect. \[twobytwosubsec\] and Sect. \[actionsubsec\] we may assume that $S$ has the diagonal form . By , the contragredient $\varPi^\vee$ has a $((\Lambda\circ\gamma)^{-1},\theta)$-Bessel functional. The assertion now follows from Theorem \[fourdimthetatheorem\], the explicit embeddings in and , and the relation .
\[bsummary\] Let $(\varPi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$. If $\varPi$ is one of the representations in the following table, then $\varPi$ admits a non-zero $(\Lambda,\theta)$-Bessel functional $\beta$ if and only if the quadratic extension $L$ associated to $\beta$, and $\Lambda$, regarded as a character of $L^\times$, are as specified in the table. $$\renewcommand{\arraystretch}{1.2}
\setlength{\arraycolsep}{0.3cm}
\begin{array}{cccc}
\toprule
\text{type of $\varPi$} & \varPi & L & \Lambda\\
\toprule
{\text{\rm Va$^*$}}&\delta^*([\chi_{E/F},\nu \chi_{E/F}],\nu^{-1/2}\alpha) &E & \alpha \circ {{\rm N}}_{E/F}\\
\cmidrule{1-4}
{\rm Vd}& L(\nu \chi_{E/F}, \chi_{E/F} \rtimes \nu^{-1/2} \alpha) & E & \alpha \circ {{\rm N}}_{E/F} \\
\cmidrule{1-4}
{\rm IXb} & L(\nu \chi_{E/F}, \nu^{-1/2} \pi(\mu)) & E & \text{$\mu$ and the Galois conjugate of $\mu$}\\
\bottomrule
\end{array}$$
First we consider the Va$^*$ case. Let $\varPi=\delta^*([\chi_{E/F},\nu \chi_{E/F}],\nu^{-1/2}\alpha)$. By Theorem \[Ganthetatheorem\], $${{\rm Hom}}_R(\omega,\varPi^\vee\otimes(\alpha{1}_{H^\times}\otimes\alpha\chi_{E/F}{1}_{H^\times})^+)\neq0.$$ First, assume that $\varPi$ admits a non-zero $(\Lambda,\theta)$-Bessel functional, and let $L$ be the quadratic extension associated to this Bessel functional; we will prove that $E=L$ and that $\Lambda=\alpha\circ{{\rm N}}_{E/F}$. By v) of Proposition \[nongenericsplitproposition\], this Bessel functional is non-split. It follows from Corollary \[fourdimthetatheoremcor1\] that $$\alpha({{\rm N}}_{L/F}(t))=\Lambda(t)\qquad\text{and}\qquad(\chi_{E/F}\alpha)({{\rm N}}_{L/F}(t))=\Lambda(t)$$ for $t$ in $T=L^\times$. It follows that $E=L$, and that $\Lambda=\alpha\circ{{\rm N}}_{E/F}$.
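Indeed, dividing the two displayed identities gives $$\chi_{E/F}({{\rm N}}_{L/F}(t))=1\qquad\text{for all $t\in L^\times$},$$ so that ${{\rm N}}_{L/F}(L^\times)\subset{{\rm N}}_{E/F}(E^\times)$; since both norm subgroups have index two in $F^\times$, local class field theory forces $E=L$, and the first identity then reads $\Lambda=\alpha\circ{{\rm N}}_{E/F}$.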
Finally, we prove that Va$^*$ admits a Bessel functional as specified in the statement of the corollary. By Theorem \[existencetheorem\] below, Va$^*$ admits some non-zero Bessel functional. The previous paragraph proves that this Bessel functional must be as described in the statement of the corollary.
The arguments for the cases Vd and IXb are similar; we will only consider the case of type IXb. Let $\varPi=L(\nu\chi_{E/F},\nu^{-1/2}\pi(\mu))$, where $E$ is a quadratic extension of $F$, $\chi_{E/F}$ is the quadratic character associated to $E/F$, $\mu$ is a character of $E^\times$ that is not Galois invariant, and $\pi(\mu)$ is the supercuspidal, irreducible, admissible representation of ${{\rm GL}}(2,F)$ associated to $\mu$.
First, assume that $\varPi$ admits a non-zero $(\Lambda,\theta)$-Bessel functional, and let $L$ be the quadratic extension associated to this Bessel functional; we will prove that $E=L$, and that $\Lambda$ is $\mu$ or the Galois conjugate of $\mu$. Let $S$ define $\theta$, as in . By , $\varPi^\vee$ admits a non-zero $((\Lambda \circ \gamma)^{-1},\theta)$ Bessel functional $\beta$. Write $E=F(\sqrt{m})$ for some $m \in F^\times$. By Proposition \[thetaliftprop\] we have ${{\rm Hom}}_R(\omega, \varPi^\vee \otimes \mu^+) \neq 0$ with $\mu^+$ as in this proposition. The involved symmetric bilinear space is $(X_{m,1},\langle\cdot,\cdot \rangle_{m,1})$. Let ${{\rm GSp}}(4,F)^+$ be defined with respect to $(X_{m,1},\langle\cdot,\cdot \rangle_{m,1})$ as in . By Lemma \[gsp4calclemma\] the index of ${{\rm GSp}}(4,F)^+$ in ${{\rm GSp}}(4,F)$ is two. By Lemma 2.1 of [@GK], the restriction of $\varPi^\vee$ to ${{\rm GSp}}(4,F)^+$ is irreducible or the direct sum of two non-isomorphic irreducible, admissible representations of ${{\rm GSp}}(4,F)^+$; the non-vanishing of ${{\rm Hom}}_R(\omega, \varPi^\vee \otimes \mu^+)$ and Lemma 4.1 of [@Roberts2001] (with $m=2$ and $n=2$) imply that $V^\vee = V_1 \oplus V_2$ with $V_1$ and $V_2$ irreducible ${{\rm GSp}}(4,F)^+$ subspaces of $V^\vee$. Moreover, for each $i \in \{1,2\}$, there exists $\lambda_i \in F^\times$ such that $\varPi^\vee(\left[\begin{smallmatrix} 1 & \\ & \lambda_i \end{smallmatrix} \right]) V_1 =V_i$. Since ${{\rm Hom}}_R(\omega, V^\vee \otimes \mu^+)$ is non-zero, we may assume, after possibly renumbering, that ${{\rm Hom}}_R(\omega, V_1 \otimes \mu^+)$ is non-zero. There exists $i \in \{1,2\}$ such that the restriction of $\beta$ to $V_i$ is non-zero. Let $\beta' = \left[\begin{smallmatrix} 1 & \\ & \lambda_i^{-1} \end{smallmatrix} \right]\cdot \beta$. From Sect. \[actionsubsec\] it follows that $\beta'$ is a $((\Lambda \circ \gamma)^{-1}, \theta')$ Bessel functional on $\varPi^\vee$ with $\theta'$ defined by $S'= \lambda_i^{-1} S$; also, the restriction of $\beta'$ to $V_1$ is non-zero. We will now apply Theorem \[fourdimthetatheorem\], with $S'$ and $V_1$ playing the roles of $S$ and $\pi$, respectively. By i) of this theorem we have that $\Omega_{S'}$ is non-empty; since $S$ and $S'$ have the same discriminant, Lemma \[lambdaSlemma\] implies that $L = E$. Let $z \in \Omega_{S'}$ and $\tau \in \mathcal{E}(z)$. By ii) of Theorem \[fourdimthetatheorem\], there exists a non-zero vector $w$ in the space of $\mu^+$ such that $\mu^+(\tau(t)) w = (\Lambda \circ \gamma)(t)w$ for $t \in T_{S'}$. By Lemma \[lambdaSlemma\] again, this implies that $\mu^+(\rho(x)) w = \Lambda(x) w $ for $x \in L^\times$, or $\mu^+(\rho(\gamma(x))) w = \Lambda(x) w $ for $x \in L^\times$. Since $w \neq 0$, the definition of $\mu^+$ now implies that $\Lambda = \mu$ or $\mu \circ \gamma$, as desired.
Finally, we prove that $\varPi$ admits Bessel functionals as specified in the statement of the corollary. By Theorem \[existencetheorem\] below, $\varPi$ admits some non-zero Bessel functional. The previous paragraph proves that this Bessel functional must be as described in the statement of the corollary, and implies that $\varPi$ admits both of the asserted Bessel functionals.
The following result will imply uniqueness of Bessel functionals for representations of type Va$^*$ and XIa$^*$.
\[Vastarlemma\] Let $\sigma$ be a character of $F^\times$. Let $c \in F^\times$ with $-c \notin F^{\times 2}$. Let $S$ be as in and set $L=F(\sqrt{-c})$.
1. If $\xi=\chi_{L/F}$, then $\dim \delta^*([\xi,\nu\xi],\nu^{-1/2}\sigma)_{N,\theta_S}=1$.
2. If $\pi$ is an irreducible, admissible, supercuspidal representation of ${{\rm GL}}(2,F)$ with trivial central character such that ${{\rm Hom}}_{L^\times} (\pi^{\mathrm{JL}},{{\mathbb C}}_1)\neq0$, then $\dim \delta^*(\nu^{1/2}\pi,\nu^{-1/2}\sigma)_{N,\theta_S}=1$.
This follows from Proposition \[scdimprop\], Theorem \[Ganthetatheorem\], and Proposition \[Ozdimeq\]; note that $\varPi^\vee|_{N} \cong \varPi|_{N}$ for irreducible, admissible representations $\varPi$ of ${{\rm GSp}}(4,F)$ because $\varPi^\vee \cong \omega_{\varPi}^{-1} \varPi$.
Twisted Jacquet modules of induced representations
==================================================
Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$. In view of the isomorphism , understanding the possible Bessel functionals of $\pi$ is equivalent to understanding the twisted Jacquet modules $V_{N,\theta}$ as $T$-modules. In this section, we will calculate the twisted Jacquet modules for representations induced from the Siegel and Klingen parabolic subgroups. This information will be used to determine the possible Bessel functionals for many of the non-supercuspidal representations of ${{\rm GSp}}(4,F)$; see Sect. \[maintheoremproofsec\].
The results of this section are similar to Propositions 2.1 and 2.3 of [@PrTa2011]. However, we prefer to redo the arguments, as those in [@PrTa2011] contain some inaccuracies.
Two useful lemmas
-----------------
For a positive integer $n$ let $\mathcal{S}(F^n)$ be the Schwartz space of $F^n$, meaning the space of locally constant, compactly supported functions $F^n\rightarrow{{\mathbb C}}$. As before, $\psi$ is our fixed non-trivial character of $F$.
Let $V$ be a complex vector space. Let $\mathcal{S}(F,V)$ be the space of compactly supported, locally constant functions from $F$ to $V$. There is a canonical isomorphism $\mathcal{S}(F,V)\cong\mathcal{S}(F)\otimes V$. The functional on $\mathcal{S}(F)$ given by $f\mapsto\int_F f(x)\,dx$ gives rise to a linear map $\mathcal{S}(F)\otimes V\rightarrow V$, and hence to a linear map $\mathcal{S}(F,V)\rightarrow V$. We write this map as an integral $$f\longmapsto \int\limits_Ff(x)\,dx.$$ The following lemma will be frequently used when we calculate Jacquet modules in the subsequent sections.
\[basicFjacquetlemma\] Let $\rho$ denote the action of $F$ on $\mathcal{S}(F,V)$ by translation, i.e., $(\rho(x)f)(y)=f(x+y)$. Let $\rho'$ be the action of $F$ on $\mathcal{S}(F,V)$ given by $(\rho'(x)f)(y)=\psi(xy)f(y)$.
1. The map $f\mapsto\int_Ff(x)\,dx$ induces an isomorphism $$\mathcal{S}(F,V)/\langle f-\rho(x)f\::\:x\in F\rangle\cong V.$$
2. The map $f\mapsto\int_F\psi(-x)f(x)\,dx$ induces an isomorphism $$\mathcal{S}(F,V)/\langle\psi(x)f-\rho(x)f\::\:x\in F\rangle\cong V.$$
3. The map $f\mapsto f(0)$ induces an isomorphism $$\mathcal{S}(F,V)/\langle f-\rho'(x)f\::\:x\in F\rangle\cong V.$$
By the Proposition in 1.18 of [@BeZe1976], every translation-invariant functional on $\mathcal{S}(F)$ is a multiple of the Haar measure $f\mapsto\int_Ff(x)\,dx$. This proves i) in the case where $V={{\mathbb C}}$. The general case follows from this case by tensoring the exact sequence $$0\longrightarrow\langle f-\rho(x)f\::\:x\in F,\:f\in\mathcal{S}(F)\rangle\longrightarrow \mathcal{S}(F)\longrightarrow{{\mathbb C}}\longrightarrow0$$ by $V$. Under the isomorphism $\mathcal{S}(F)\otimes V\cong\mathcal{S}(F,V)$, the space $\langle f-\rho(x)f\::\:x\in F,\:f\in\mathcal{S}(F)\rangle\otimes V$ maps onto $\langle f-\rho(x)f\::\:x\in F,\:f\in\mathcal{S}(F,V)\rangle$.
To prove ii), observe that there is an isomorphism $$\mathcal{S}(F,V)/\langle f-\rho(x)f\::\:x\in F,\:f\in\mathcal{S}(F,V)\rangle\longrightarrow\mathcal{S}(F,V)/\langle\psi(x)f-\rho(x)f\::\:x\in F,\:f\in\mathcal{S}(F,V)\rangle$$ induced by the map $f\mapsto f'$, where $f'(x)=\psi(x)f(x)$. Hence ii) follows from i). Finally, iii) also follows from i), since the Fourier transform $f\mapsto\hat f$, where $$\hat f(y)=\int\limits_F \psi(-uy)f(u)\,du,$$ intertwines the actions $\rho$ and $\rho'$ of $F$ on $\mathcal{S}(F,V)$.
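Indeed, for $f\in\mathcal{S}(F,V)$ and $x,y\in F$, the substitution $v=u+x$ gives $$\widehat{\rho(x)f}(y)=\int\limits_F\psi(-uy)f(u+x)\,du=\int\limits_F\psi(-(v-x)y)f(v)\,dv=\psi(xy)\hat f(y)=(\rho'(x)\hat f)(y),$$ which is the asserted intertwining property.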
\[inducedreslemma2\] Let $G$ be an $l$-group, as in [@BeZe1976], and let $H_1$ and $H_2$ be closed subgroups of $G$. Assume that $G=H_1H_2$, and that for every compact subset $K$ of $G$, there exists a compact subset $K_2$ of $H_2$ such that $K\subset H_1K_2$. Let $(\rho,V)$ be a smooth representation of $H_1$. The map $r:{\mathrm{c}\text{-}\mathrm{Ind}}^G_{H_1} \rho \to {\mathrm{c}\text{-}\mathrm{Ind}}^{H_2}_{H_1\cap H_2}(\rho|_{H_1\cap H_2})$ defined by restriction of functions is a well-defined isomorphism of representations of $H_2$.
This follows from straightforward verifications.
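One can check, for instance, that the inverse of $r$ is given by $$(r^{-1}f')(h_1h_2)=\rho(h_1)f'(h_2)\qquad\text{for $h_1\in H_1$ and $h_2\in H_2$};$$ this is well defined because $h_1h_2=h_1'h_2'$ forces $h_1'^{-1}h_1=h_2'h_2^{-1}\in H_1\cap H_2$, and the resulting function is compactly supported modulo $H_1$ because $f'$ is compactly supported modulo $H_1\cap H_2$. The hypothesis on compact subsets of $G$ is what guarantees that $r$ itself takes values in ${\mathrm{c}\text{-}\mathrm{Ind}}^{H_2}_{H_1\cap H_2}(\rho|_{H_1\cap H_2})$.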
Siegel induced representations {#siegelindsec}
------------------------------
Let $\pi$ be an admissible representation of ${{\rm GL}}(2,F)$, let $\sigma$ be a character of $F^\times$, and let $\pi\rtimes\sigma$ be as defined in Sect. \[representationssec\]; see . In this section we will calculate the twisted Jacquet modules $(\pi\rtimes\sigma)_{N,\theta}$, as $T$-modules, for any non-degenerate character $\theta$ of $N$. Lemma \[siegelinducedbesselwaldspurgerlemma\] below corrects an inaccuracy in Proposition 2.1 of [@PrTa2011]. Namely, Proposition 2.1 of [@PrTa2011] does not include ii) of our lemma.
\[casselmanfiltrationlemma2\] Let $\sigma$ be a character of $F^\times$, and $\pi$ an admissible representation of ${{\rm GL}}(2,F)$. Let $I$ be the standard space of the Siegel induced representation $\pi\rtimes\sigma$. There is a filtration of $P$-modules $$I^3=0\subset I^2 \subset I^1 \subset I^0=I,$$ with the quotients given as follows.
1. $ I^0/I^1 = \sigma_0$, where $$\sigma_0(\begin{bmatrix}A&*\\&cA'\end{bmatrix})=\sigma(c)\,|c^{-1}\det(A)|^{3/2}\,\pi(A)$$ for $A$ in ${{\rm GL}}(2,F)$ and $c$ in $F^\times$.
2. $I^1/I^2 = {\mathrm{c}\text{-}\mathrm{Ind}}_{ \left[ \begin{smallmatrix} *&*&*&*\\ &*&&*\\ &&*&*\\ &&&* \end{smallmatrix} \right] }^P \sigma_1$, where $$\sigma_1 (\begin{bmatrix} t&*&y&* \\&a&&*\\&&d&*\\&&& ad t^{-1} \end{bmatrix}) = \sigma(ad)\,|a^{-1}t|^{3/2}\,\pi(\begin{bmatrix}t&y\\&d\end{bmatrix})$$ for $y$ in $F$ and $a,d,t$ in $F^\times$.
3. $I^2/I^3 = {\mathrm{c}\text{-}\mathrm{Ind}}_{ \left[ \begin{smallmatrix} *&*\\ *&*\\ &&*&*\\ &&*&* \end{smallmatrix} \right] }^P \sigma_2$, where $$\sigma_2(\begin{bmatrix}A&\\&cA'\end{bmatrix})=\sigma(c)\,|c\det(A)^{-1}|^{3/2}\,\pi(cA')$$ for $A$ in ${{\rm GL}}(2,F)$ and $c$ in $F^\times$.
This follows by going through the procedure of Sections 6.2 and 6.3 of [@C].
\[siegelinducedbesselwaldspurgerlemma\] Let $\sigma$ be a character of $F^\times$, and let $(\pi,V)$ be an admissible representation of ${{\rm GL}}(2,F)$. We assume that $\pi$ admits a central character $\omega_\pi$. Let $I$ be the standard space of the Siegel induced representation $\pi\rtimes\sigma$. Let $\theta$ be the character of $N$ defined in . Assume that $\theta$ is non-degenerate. Let $L$ be the quadratic extension associated to $S$ as in Sect. \[anotheralgebrasubsec\].
1. Assume that $L$ is a field. Then $I_{N,\theta}\cong V$ with the action of $T$ given by $\sigma\omega_\pi\pi'$. Here, $\pi'$ is the representation of ${{\rm GL}}(2,F)$ on $V$ given by $\pi'(g)=\pi(g')$. In particular, if $\pi$ is irreducible, then the action of $T$ is given by $\sigma\pi$.
2. Assume that $L$ is not a field; we may arrange that $S=\left[\begin{smallmatrix} &1/2 \\ 1/2&\end{smallmatrix}\right]$. Then there is a filtration $$0\subset J_2\subset J_1=I_{N,\theta},$$ with vector space isomorphisms:
- $J_1/J_2\cong V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi} \oplus V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi},$
- $J_2\cong V$.
The action of $T=T_S$ is given as follows: $$\begin{aligned}
{\rm diag}(a,b,a,b)(v_1 \oplus v_2) &= \Big|\dfrac{a}{b}\Big|^{1/2}\sigma(ab)\omega_\pi(a)v_1
\oplus \Big|\dfrac{a}{b}\Big|^{-1/2}\sigma(ab)\omega_\pi(b)v_2, \\
{\rm diag}(a,b,a,b)v&=\sigma(ab)\pi(\begin{bmatrix}a&\\&b\end{bmatrix})v,
\end{aligned}$$ for $a, b \in F^\times$, $v_1 \oplus v_2 \in J_1/J_2\cong V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi} \oplus V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi}$, and $v \in J_2$. In particular, if $\pi$ is one-dimensional, then $I_{N,\theta}\cong V$, with the action of $T$ given by ${\rm diag}(a,b,a,b)v=\sigma(ab)\pi(\left[\begin{smallmatrix}a&\\&b\end{smallmatrix}\right])v$.
We may assume that $b=0$. Since $\det (S ) \neq 0$ we have $a \neq 0$ and $c \neq 0$. We use the notation of Lemma \[casselmanfiltrationlemma2\]. We calculate the twisted Jacquet modules $(I^i/I^{i+1})_{N,\theta}$ for $i \in \{0,1,2\}$. Since the action of $N$ on $I^0/I^1$ is trivial and $\theta$ is non-trivial, we have $(I^0/I^1)_{N,\theta}=0$.
We consider the quotient $I^1/I^2 = {\mathrm{c}\text{-}\mathrm{Ind}}_{H}^P \sigma_1$, where $$H=\begin{bmatrix} *&*&*&*\\ &*&&*\\ &&*&*\\ &&&* \end{bmatrix},$$ and with $\sigma_1$ as in ii) of Lemma \[casselmanfiltrationlemma2\]. We first show that for each function $f$ in the standard model of this representation, the function $f^\circ:\:F\to V$, given by $$f^\circ(w)=f(\begin{bmatrix}1&\\&1&w\\&&1&\\&&&1\end{bmatrix}),$$ has compact support. Let $K$ be a compact subset of $P$ such that the support of $f$ is contained in $HK$. If $$\begin{bmatrix}1&\\&1&w\\&&1&\\&&&1\end{bmatrix}=\begin{bmatrix} t&*&y&* \\&a&&*\\&&d&*\\&&& ad t^{-1} \end{bmatrix}\begin{bmatrix}k_1&k_2&x_1&x_2\\k_3&k_4&x_3&x_4\\&&k_5&k_6\\&&k_7&k_8\end{bmatrix},$$ with the rightmost matrix being in $K$, then calculations show that $k_3=k_7=0$ and $w=k_4^{-1}x_3$. Since $k_4^{-1}$ and $x_3$ vary in bounded subsets, $w$ is confined to a compact subset of $F$. This proves our assertion that $f^\circ$ has compact support.
Next, for each function $f$ in the standard model of ${\mathrm{c}\text{-}\mathrm{Ind}}_{H}^P \sigma_1$, consider the function $\tilde f:\:F^2\to V$ given by $$\tilde f(u,w)=f(\begin{bmatrix}1&\\u&1&w\\&&1&\\&&-u&1\end{bmatrix})$$ for $u,w$ in $F$. Let $W$ be the space of all such functions $\tilde f$. Since the map $f\mapsto\tilde f$ is injective, we get a vector space isomorphism ${\mathrm{c}\text{-}\mathrm{Ind}}_{H}^P \sigma_1\cong W$. In this new model, the action of $N$ is given by $$\label{siegelinducedbesselwaldspurgerlemma20}
(\begin{bmatrix}1&&y&z\\&1&x&y\\&&1\\&&&1\end{bmatrix}\tilde f)(u,w)=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&y+uz\\&1\end{array}\right]}})\tilde f(u,w+x+2uy+u^2z)$$ for $x,y,z,u,w$ in $F$.
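The displayed formula can be obtained from the following matrix identity, which is verified by direct multiplication: $$\begin{bmatrix}1&\\u&1&w\\&&1&\\&&-u&1\end{bmatrix}\begin{bmatrix}1&&y&z\\&1&x&y\\&&1\\&&&1\end{bmatrix}=\begin{bmatrix}1&&y+uz&z\\&1&&y+uz\\&&1\\&&&1\end{bmatrix}\begin{bmatrix}1&\\u&1&w+x+2uy+u^2z\\&&1&\\&&-u&1\end{bmatrix}.$$ The first factor on the right-hand side lies in $H$, and evaluating $\sigma_1$ of Lemma \[casselmanfiltrationlemma2\] on it produces the factor $\pi(\left[\begin{smallmatrix}1&y+uz\\&1\end{smallmatrix}\right])$ appearing above.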
We claim that $W$ contains $\mathcal{S}(F^2,V)$. Since $W$ is translation invariant, it is enough to prove that $W$ contains the function $$f_{N,v}(u,w)=
\begin{cases}
v & \text{if $u,w \in {\mathfrak p}^N$,}\\
0 & \text{if $u \notin {\mathfrak p}^N$ or $w \notin {\mathfrak p}^N$,}
\end{cases}$$ for any $v$ in $V$ and any positive integer $N$. Again by translation invariance, we may assume that $N$ is large enough so that $$\label{siegelinducedbesselwaldspurgerlemma21}
\sigma_1(h)v=v\qquad\text{for }h\in H\cap\Gamma_N,$$ where $$\label{GammaNdefeq}
\Gamma_{N}=
\begin{bmatrix}
1+{\mathfrak p}^N&{\mathfrak p}^N&{\mathfrak p}^N&{\mathfrak p}^N \\
{\mathfrak p}^N&1+{\mathfrak p}^N&{\mathfrak p}^N&{\mathfrak p}^N\\
&&1+{\mathfrak p}^N&{\mathfrak p}^N\\&&{\mathfrak p}^N&1+{\mathfrak p}^N
\end{bmatrix} \cap P.$$ Define $f:\:P\to V$ by $$f(g)=
\begin{cases}
\sigma_1(h)v&\text{if $g=hk$ with }h\in H,\;k\in\Gamma_N,\\
0&g\notin H\Gamma_N.
\end{cases}$$ Then, by , $f$ is a well-defined element of ${\mathrm{c}\text{-}\mathrm{Ind}}_{H}^P \sigma_1$. It is easy to verify that $\tilde f=f_{N,v}$. This proves our claim that $W$ contains $\mathcal{S}(F^2,V)$.
Now consider the map $$\label{siegelinducedbesselwaldspurgerlemma22}
W\longrightarrow\mathcal{S}(F,V),\qquad \tilde f\longmapsto\Big(w\mapsto f(\begin{bmatrix}1\\&1&w\\&&1\\&&&1\end{bmatrix}s_1)\Big),$$ where $s_1$ is defined in . This map is well-defined, since the function on the right is $(s_1f)^\circ$, which we showed above has compact support. Similar considerations as above show that the map is surjective.
We claim that the kernel of is $\mathcal{S}(F^2,V)$. First suppose that $\tilde f$ lies in the kernel; we have to show that $\tilde f$ has compact support. Choose $N$ large enough so that $f$ is right invariant under $\Gamma_N$. Then, for $u$ not in ${\mathfrak p}^{-N}$ and $w$ in $F$, $$\begin{aligned}
\tilde f(u,w)&=f(\begin{bmatrix}1\\u&1&w\\&&1\\&&-u&1\end{bmatrix})\\
&=f(\begin{bmatrix}1&\\&1&w\\&&1\\&&&1\end{bmatrix}\!\begin{bmatrix}1&u^{-1}\\&1\\&&1&-u^{-1}\\&&&1\end{bmatrix}\!\begin{bmatrix}-u^{-1}\\&u\\&&u^{-1}\\&&&\!\!\!-u\end{bmatrix}\!s_1\!\begin{bmatrix}1&u^{-1}\\&1\\&&1&-u^{-1}\\&&&1\end{bmatrix})\\
&=f(\begin{bmatrix}1&u^{-1}\\&1\\&&1&-u^{-1}\\&&&1\end{bmatrix}\begin{bmatrix}1&&-u^{-1}w&u^{-2}w\\&1&w&-u^{-1}w\\&&1\\&&&1\end{bmatrix}\begin{bmatrix}-u^{-1}\\&u\\&&u^{-1}\\&&&\!\!\!-u\end{bmatrix}s_1)\\
&=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&-u^{-1}w\\&1\end{array}\right]}})f(\begin{bmatrix}1&&\\&1&w&\\&&1\\&&&1\end{bmatrix}\begin{bmatrix}-u^{-1}\\&u\\&&u^{-1}\\&&&\!\!\!-u\end{bmatrix}s_1)\\ &=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&w^2u^{-1}\\&1\end{array}\right]}})f(\begin{bmatrix}1&-w\\&1\\&&1&w\\&&&1\end{bmatrix}\begin{bmatrix}1\\&-u^{-1}\\&&-u\\&&&1\end{bmatrix}s_2)\\
&=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&-u^{-1}w\\&1\end{array}\right]}})f(\begin{bmatrix}-u^{-1}\\&u\\&&u^{-1}\\&&&\!\!\!-u\end{bmatrix}\begin{bmatrix}1&&\\&1&u^{-2}w&\\&&1\\&&&1\end{bmatrix}s_1)\\
&=|u|^{-3}\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}-u^{-1}&-u^{-2}w\\#3&u^{-1}\end{array}\right]}})f(\begin{bmatrix}1&&\\&1&u^{-2}w&\\&&1\\&&&1\end{bmatrix}s_1).\end{aligned}$$ This last expression is zero by assumption. For fixed $u$ in ${\mathfrak p}^{-N}$, the function $\tilde f(u,\cdot)$ has compact support; this follows because each $f^\circ$ has compact support. Combining these facts shows that $\tilde f$ has compact support. Conversely, assume $\tilde f$ is in $\mathcal{S}(F^2,V)$. Then we can find a large enough $N$ such that if $u$ has valuation $-N$, the function $\tilde f(u,\cdot)$ is zero. Looking at the above calculation, we see that, for fixed such $u$, $$f(\begin{bmatrix}1&&\\&1&u^{-2}w&\\&&1\\&&&1\end{bmatrix}s_1)=0$$ for all $w$ in $F$. This shows that $\tilde f$ is in the kernel of the map , completing the proof of our claim about this kernel. We now have an exact sequence $$\label{siegelinducedbesselwaldspurgerlemma23}
0\longrightarrow\mathcal{S}(F^2,V)\longrightarrow W\longrightarrow\mathcal{S}(F,V)\longrightarrow0.$$ Note that the space $\mathcal{S}(F^2,V)$ is invariant under the action of $N$. A calculation shows that the action of $N$ on $\mathcal{S}(F,V)$ is given by $$\label{siegelinducedbesselwaldspurgerlemma24}
(\begin{bmatrix}1&&y&z\\&1&x&y\\&&1\\&&&1\end{bmatrix}f)(w)=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&y\\&1\end{array}\right]}})f(w+z)$$ for $x,y,z,w$ in $F$ and $f$ in $\mathcal{S}(F,V)$. Since the action of $x$ is trivial and $a\neq0$, it follows that $\mathcal{S}(F,V)_{N,\theta}=0$. Hence, by , we have $W_{N,\theta}\cong\mathcal{S}(F^2,V)_{N,\theta}$.
We will compute the Jacquet module $\mathcal{S}(F^2,V)_{N,\theta}$ in stages. The action of $N$ on $\mathcal{S}(F^2,V)$ is given by . By ii) of Lemma \[basicFjacquetlemma\], the map $\tilde f\mapsto f'$, where $f':\:F\to V$ is given by $$f'(u)=\int\limits_F\psi^a(-u)\tilde f(u,w)\,dw,$$ defines a vector space isomorphism $$W_{\left[ \begin{smallmatrix} 1&&& \\ &1&*& \\ &&1& \\ &&&1 \end{smallmatrix} \right], \psi^a} \stackrel{\sim}{\longrightarrow} \mathcal{S}(F,V).$$ The transfer of the action of the remaining group $\left[\begin{smallmatrix} 1&&*&* \\ &1&&* \\ &&1& \\ &&&1 \end{smallmatrix} \right]$ to $\mathcal{S}(F,V)$ is given by $$(\begin{bmatrix} 1&&y&z \\ &1&& y \\ &&1& \\ &&&1 \end{bmatrix} f)(u) = \psi (a (2uy+u^2 z)) \pi ({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&y+uz\\&1\end{array}\right]}}) f(u)$$ for $u,y,z \in F$ and $f \in \mathcal{S}(F,V)$. The subspace $\mathcal{S}(F^\times,V)$ of elements $f$ of $\mathcal{S}(F,V)$ such that $f(0)=0$ is preserved under this action, so that we have an exact sequence $$\label{siegelinducedbesselwaldspurgerlemmaeqq2}
0 \longrightarrow \mathcal{S}(F^\times,V) \longrightarrow \mathcal{S}(F,V) \longrightarrow \mathcal{S}(F,V)/ \mathcal{S}(F^\times,V) \longrightarrow 0$$ of representations of the group $\left[ \begin{smallmatrix} 1&&*&* \\ &1&&* \\ &&1& \\ &&&1 \end{smallmatrix} \right]$. There is an isomorphism $$\mathcal{S}(F,V)/ \mathcal{S}(F^\times,V) \stackrel{\sim}{\longrightarrow} V$$ of vector spaces that sends $f$ to $f(0)$. The transfer of the action of the group $\left[ \begin{smallmatrix} 1&&*&* \\ &1&&* \\ &&1& \\ &&&1 \end{smallmatrix} \right]$ to $V$ is given by $$\label{siegelinducedbesselwaldspurgerlemmaeqq1}
\begin{bmatrix} 1&&y&z \\ &1&& y \\ &&1& \\ &&&1 \end{bmatrix} v= \pi ({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&y\\&1\end{array}\right]}}) v$$ for $y, z \in F$ and $v \in V$. The non-vanishing of $c$ and (\[siegelinducedbesselwaldspurgerlemmaeqq1\]) imply that $$(\mathcal{S}(F,V)/ \mathcal{S}(F^\times,V))_{\left[ \begin{smallmatrix} 1&&&* \\ &1&& \\ &&1& \\ &&&1 \end{smallmatrix} \right],\psi^c} =0.$$ Therefore, $$\big( \mathcal{S}(F,V)/ \mathcal{S}(F^\times,V) \big)_{\left[ \begin{smallmatrix} 1&&*&* \\ &1&&* \\ &&1& \\ &&&1 \end{smallmatrix} \right], \theta} =0.$$ Next, we define a vector space isomorphism of $\mathcal{S}(F^\times ,V)$ with itself and then transfer the action. For $f$ in $\mathcal{S}(F^\times ,V)$, set $$\tilde f(u) = \pi ({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}u&\\&1\end{array}\right]}})f(u)$$ for $u$ in $F^\times$. The map defined by $f \mapsto \tilde f$ is an automorphism of vector spaces. The transfer of the action of $\left[ \begin{smallmatrix} 1&&*&* \\ &1&&* \\ &&1& \\ &&&1 \end{smallmatrix} \right]$ is given by $$(\begin{bmatrix} 1&&y&z \\ &1&&y \\ &&1& \\ &&&1 \end{bmatrix} f )(u)
=\psi(a (2uy+u^2z)) \pi ({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&uy+u^2 z\\&1\end{array}\right]}}) f(u)$$ for $f \in \mathcal{S}(F^\times, V)$, $y,z \in F$, and $u \in F^\times$. Now define a linear map $$p:\:\mathcal{S}(F^\times, V) \longrightarrow \mathcal{S}(F^\times, V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}})$$ by composing the elements of $\mathcal{S}(F^\times, V)$ with the natural projection from $V$ to $V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}}=V/V(\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a})$. The map $p$ is surjective. Let $f$ be in $\mathcal{S}(F^\times, V)$. Since $f$ has compact support and is locally constant, we see that $f$ is in the kernel of $p$ if and only if $$\label{siegelinducedbesselwaldspurgerlemmaeqq4}
\text{there exists $l>0$ such that } \int\limits_{{\mathfrak p}^{-l}} \psi (2ay) \pi( {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&y\\&1\end{array}\right]}})f(u)\, dy =0 \qquad\text{for all }u \in F^\times.$$ Also, $f$ is in $\mathcal{S}(F^\times, V)(\left[ \begin{smallmatrix} 1&&*& \\ &1&&* \\ &&1& \\ &&&1 \end{smallmatrix} \right])$ if and only if $$\label{siegelinducedbesselwaldspurgerlemmaeqq5}
\text{there exists $k>0$ such that } \int\limits_{{\mathfrak p}^{-k}} \psi (2auy) \pi( {{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&uy\\&1\end{array}\right]}})f(u)\, dy =0 \qquad\text{for all }u \in F^\times.$$ Since $f$ is locally constant and compactly supported, the conditions (\[siegelinducedbesselwaldspurgerlemmaeqq4\]) and (\[siegelinducedbesselwaldspurgerlemmaeqq5\]) are equivalent. It follows that $p$ induces an isomorphism of vector spaces: $$\mathcal{S}(F^\times, V)_{\left[ \begin{smallmatrix} 1&&*& \\ &1&&* \\ &&1& \\ &&&1 \end{smallmatrix} \right],\psi^b} =
\mathcal{S}(F^\times, V)_{\left[ \begin{smallmatrix} 1&&*& \\ &1&&* \\ &&1& \\ &&&1 \end{smallmatrix} \right]}
\stackrel{\sim}{ \longrightarrow }\mathcal{S}(F^\times, V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}}).$$ Transferring the action of $\left[ \begin{smallmatrix} 1&&&* \\ &1&& \\ &&1& \\ &&&1 \end{smallmatrix} \right]$ on the first space to the last space results in the formula $$\begin{bmatrix} 1&&&z \\ &1&& \\ &&1& \\ &&&1 \end{bmatrix} f (u) = \psi(a u^2 z) \pi ({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&u^2z\\&1\end{array}\right]}}) f(u)=\psi (-au^2 z) f(u), \qquad z \in F,\ u \in F^\times,$$ for $f$ in $\mathcal{S}(F^\times, V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}})$.
Assume that $L$ is a field; we will prove that $$\label{siegelinducedbesselwaldspurgerlemmaeqq6}
\mathcal{S}(F^\times, V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}})_{\left[ \begin{smallmatrix} 1&&&* \\ &1&& \\ &&1& \\ &&&1 \end{smallmatrix} \right], \psi^c} =0.$$ Let $f$ be in $\mathcal{S}(F^\times, V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}})$. Since the support of $f$ is compact, and since there exists no $u$ in $F^\times$ such that $c+au^2 =0$ as $D=b^2/4-ac=-ac$ is not in $F^{\times2}$, there exists a positive integer $l$ such that $$\label{addnosuppeq}
\int\limits_{{\mathfrak p}^{-l}} \psi(-(c+au^2) z)\,dz = 0$$ for $u$ in the support of $f$. Hence, for $u$ in $F^\times$, $$\label{JLcondeq}
(\int\limits_{{\mathfrak p}^{-l}} \psi(-cz) \begin{bmatrix} 1&&&z \\ &1&& \\ &&1& \\ &&&1 \end{bmatrix} f\, dz)(u)
= \big( \int\limits_{{\mathfrak p}^{-l}} \psi(-(c+au^2) z)\,dz\big) f(u)=0.$$ This proves , and completes the argument that $(I^1/I^2)_{N,\theta}=0$ in the case $L$ is a field.
Now assume that $L$ is not a field. We may further assume that $a=1$ and $c=-1$ while retaining $b=0$. The group $T=T_{\left[\begin{smallmatrix} a&b/2\\b/2&c \end{smallmatrix} \right]}=T_{\left[\begin{smallmatrix} 1&\\&-1 \end{smallmatrix} \right]}$ consists of the elements $$\label{talteq}
t=\begin{bmatrix} x&y && \\ y&x && \\ && x&-y \\ && -y &x \end{bmatrix}$$ with $x,y \in F$ such that $x^2 \neq y^2$. Define $$\label{lastisoeq}
\mathcal{S}(F^\times, V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}}) \longrightarrow
V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}} \oplus V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}}$$ by $f \mapsto f(1) \oplus f(-1)$. We assert that the kernel of this linear map is $$\mathcal{S}(F^\times, V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}})(\left[ \begin{smallmatrix} 1&&&* \\ &1&& \\ &&1& \\ &&&1 \end{smallmatrix} \right], \psi^c).$$ Evidently, this subspace is contained in the kernel. Conversely, let $f \in \mathcal{S}(F^\times, V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}}) $ be such that $f(1)=f(-1)=0$. Then there exists a positive integer $l$ such that holds for $u$ in the support of $f$, implying that holds. This proves our assertion. The map is clearly surjective, so that we obtain an isomorphism $$\mathcal{S}(F^\times, V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}})_{\left[ \begin{smallmatrix} 1&&&* \\ &1&& \\ &&1& \\ &&&1 \end{smallmatrix} \right], \psi^c} \stackrel{\sim}{\longrightarrow} V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}} \oplus V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}}.$$ We now have an isomorphism $(I^1/I^2)_{N,\theta} \stackrel{\sim}{\longrightarrow} V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}} \oplus V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}}$. A calculation shows that the transfer of the action of $T$ to $V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}} \oplus V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}}$ is given by $$t(v_1 \oplus v_2) = \Big|\dfrac{x-y}{x+y}\Big|^{1/2}\sigma\big((x-y)(x+y)\big)\omega_\pi(x-y)v_1
\oplus \Big|\dfrac{x-y}{x+y}\Big|^{-1/2}\sigma\big((x-y)(x+y)\big)\omega_\pi(x+y)v_2$$ for $t$ as in and $v_1,v_2 \in V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}}$. Finally, the result stated in ii) is written with respect to $S=\left[\begin{smallmatrix} &1/2\\1/2&\end{smallmatrix}\right]$. To change to this choice note that the map $$C:(I^1/I^2)_{N,\theta_{\left[\begin{smallmatrix} a&b/2\\b/2&c\end{smallmatrix}\right]}} \longrightarrow
(I^1/I^2)_{N,\theta_{\left[\begin{smallmatrix} &1/2\\1/2&\end{smallmatrix}\right]}}$$ defined by $v \mapsto \left[\begin{smallmatrix} g& \\ &g'\end{smallmatrix} \right] v$, where $g=\left[\begin{smallmatrix} -1&1\\1&1\end{smallmatrix}\right]$, is a well-defined isomorphism; recall that $a=1,b=0,c=-1$. Moreover, $C(tv)=t'C(v)$ for $t$ as in and $$t'
=
\begin{bmatrix} x-y & \\ & x+y \\ && x-y \\ &&&x+y \end{bmatrix} \in T_{\left[\begin{smallmatrix} &1/2\\1/2 & \end{smallmatrix}\right]}.$$ It follows that the group $T_{\left[\begin{smallmatrix} &1/2\\1/2 & \end{smallmatrix}\right]}$ acts on the isomorphic vector spaces $$(I^1/I^2)_{N,\theta_{\left[\begin{smallmatrix} &1/2\\1/2&\end{smallmatrix}\right]}}
\cong
V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}}
\oplus
V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi^{-2a}}
\cong
V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi}
\oplus
V_{\left[\begin{smallmatrix} 1&*\\&1 \end{smallmatrix} \right],\psi}$$ via the formula in ii).
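The intertwining property $C(tv)=t'C(v)$ asserted above reduces to a conjugation of $2\times2$ blocks; for the upper left block one checks directly that $$\begin{bmatrix}-1&1\\1&1\end{bmatrix}\begin{bmatrix}x&y\\y&x\end{bmatrix}\begin{bmatrix}-1&1\\1&1\end{bmatrix}^{-1}=\begin{bmatrix}x-y&\\&x+y\end{bmatrix},$$ and the lower right block is treated in the same way using $g'$.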
Next, we consider the quotient $I^2/I^3={\mathrm{c}\text{-}\mathrm{Ind}}_M^P(\sigma_2)$ from iii) of Lemma \[casselmanfiltrationlemma2\]. By Lemma \[inducedreslemma2\], restriction of functions in the standard model of this representation to $N$ gives an $N$-isomorphism $${\mathrm{c}\text{-}\mathrm{Ind}}_M^P(\sigma_2)\cong\mathcal{S}(N,V).$$ An application of i) and ii) of Lemma \[basicFjacquetlemma\] shows that $
\mathcal{S}(N,V)_{N,\theta}\cong V
$ via the map defined by $$f\longmapsto\int\limits_N\theta(n)^{-1}f(n)\,dn.$$ Transferring the action of $T$, we find that $t\in T$ acts by $\sigma_2(t)$ on $V$. If $t={{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}g&\\&\det(g)g'\end{array}\right]}}$ as in , then $$\sigma_2(t)=\sigma(\det(g))\omega_\pi(\det(g))\pi(g').$$ This concludes the proof.
In case of a one-dimensional representation of $M$, it follows from this lemma that $$\label{siegelinducedonedimjacqueteq}
(\chi{1}_{{{\rm GL}}(2)}\rtimes\sigma)_{N,\theta}={{\mathbb C}}_{(\sigma\chi)\circ {{\rm N}}_{L/F}}$$ as $T$-modules. In case $L$ is a field and $\pi$ is irreducible, it follows from Lemma \[siegelinducedbesselwaldspurgerlemma\] that $$\label{siegelinducedfieldeq}
{{\rm Hom}}_T((\pi\rtimes\sigma)_{N,\theta},{{\mathbb C}}_\Lambda)={{\rm Hom}}_T(\sigma\pi,{{\mathbb C}}_\Lambda).$$ Hence, in view of , the space of $(\Lambda,\theta)$-Bessel functionals on $\pi\rtimes\sigma$ is isomorphic to the space of $(\Lambda,\theta)$-Waldspurger functionals on $\sigma\pi$.
Klingen induced representations {#Klingendegsec}
-------------------------------
Let $\pi$ be an admissible representation of ${{\rm GL}}(2,F)$, let $\chi$ be a character of $F^\times$, and let $\chi\rtimes\pi$ be as defined in Sect. \[representationssec\]; see . In this section we will calculate the twisted Jacquet modules $(\chi\rtimes\pi)_{N,\theta}$, for any non-degenerate character $\theta$ of $N$, as $T$-modules. In the split case our results make several corrections to Proposition 2.3 and Proposition 2.4 of [@PrTa2011].
\[casselmanfiltrationlemma1\] Let $\chi$ be a character of $F^\times$ and $\pi$ an admissible representation of ${{\rm GL}}(2,F)$. Let $I$ be the space of the Klingen induced representation $\chi\rtimes\pi$. There is a filtration of $P$-modules $$I^2=0\subset I^1 \subset I^0=I$$ with the quotients given as follows.
1. $ I^0/I^1 = {\mathrm{c}\text{-}\mathrm{Ind}}_B^P\sigma_0$, where $$\sigma_0(\begin{bmatrix}t&*&*&*\\&a&b&*\\&&d&*\\&&&adt^{-1}\end{bmatrix})=\chi(t)\,|t|^2\,|ad|^{-1}\,\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&b\\&d\end{array}\right]}})$$ for $b$ in $F$ and $a,d,t$ in $F^\times$.
2. $I^1/I^2 = {\mathrm{c}\text{-}\mathrm{Ind}}_{ \left[ \begin{smallmatrix} *&*&&*\\ &*\\ &&*&*\\ &&&* \end{smallmatrix} \right] }^P \sigma_1$, where $$\sigma_1 (\begin{bmatrix} t&*&&x \\&a\\&&d&*\\&&& ad t^{-1} \end{bmatrix}) = \chi(d)\,|a^{-1}d|\,\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}t&x\\&ad t^{-1}\end{array}\right]}})$$ for $x$ in $F$ and $a,d,t$ in $F^\times$.
This follows by going through the procedure of Sections 6.2 and 6.3 of [@C].
\[Klingendegjacquetlemma1\] Let $\chi$ be a character of $F^\times$, and let $(\pi,V)$ be an admissible representation of ${{\rm GL}}(2,F)$. We assume that $\pi$ has a central character $\omega_\pi$. Let $I$ be the standard space of the Klingen induced representation $\chi\rtimes\pi$. Let $N$ be the unipotent radical of the Siegel parabolic subgroup, and let $\theta$ be the character of $N$ defined in (\[thetaSsetupeq\]). We assume that the associated quadratic extension $L$ is a field. Then, as $T$-modules, $$I_{N,\theta}\cong\bigoplus_{\Lambda|_{F^\times}=\chi\omega_\pi}d\cdot\Lambda,\qquad\text{where }d=\dim{{\rm Hom}}_{ \left[ \begin{smallmatrix} 1&*\\ &1\end{smallmatrix} \right] }(\pi,\psi).$$ In particular, $I_{N,\theta}=0$ if $\pi$ is one-dimensional.
We will first prove that $(I^0/I^1)_{N,\theta}=0$, where the notations are as in Lemma \[casselmanfiltrationlemma1\]. We may assume that the element $b$ appearing in the matrix $S$ in is zero. For $f$ in the standard space of the induced representation $I^0/I^1={\mathrm{c}\text{-}\mathrm{Ind}}_B^P\sigma_0$, let $$\tilde f(u)=f(\begin{bmatrix}1\\u&1\\&&1\\&&-u&1\end{bmatrix}),\qquad u\in F.$$ Let $W$ be the space of all functions $F\to V$ of the form $\tilde f$, where $f$ runs through ${\mathrm{c}\text{-}\mathrm{Ind}}_B^P\sigma_0$. Since the map $f\mapsto\tilde f$ is injective, we obtain a vector space isomorphism ${\mathrm{c}\text{-}\mathrm{Ind}}_B^P\sigma_0\cong W$. The identity $$\begin{bmatrix}1\\u&1\\&&1\\&&-u&1\end{bmatrix}=\begin{bmatrix}1&u^{-1}\\&1\\&&1&-u^{-1}\\&&&1\end{bmatrix}\begin{bmatrix}-u^{-1}\\&u\\&&u^{-1}\\&&&-u\end{bmatrix}s_1\begin{bmatrix}1&u^{-1}\\&1\\&&1&-u^{-1}\\&&&1\end{bmatrix},$$ where $s_1$ is as in , shows that $\tilde f$ satisfies $$\label{Klingendegjacquetlemma1eq10}
\tilde f(u)=\chi(-u^{-1})|u|^{-2}\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}u&\\&u^{-1}\end{array}\right]}})f(s_1)\qquad\text{for }|u|\gg0.$$ The space $W$ consists of locally constant functions. Furthermore, $W$ is invariant under translations, i.e., if $f'\in W$, then the function $u\mapsto f'(u+x)$ is also in $W$, for any $x$ in $F$.
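For convenience we indicate the verification of the last displayed formula; write $b_u$ for the product of the first two factors on the right-hand side of the identity above (this notation is used only here). Then $$b_u=\begin{bmatrix}-u^{-1}&1&&\\&u&&\\&&u^{-1}&1\\&&&-u\end{bmatrix}\in B,\qquad \sigma_0(b_u)=\chi(-u^{-1})\,|u|^{-2}\,\pi(\left[\begin{smallmatrix}u&\\&u^{-1}\end{smallmatrix}\right]),$$ while the last factor of the identity tends to $1$ as $|u|\to\infty$; since $f$ is smooth, $f(s_1\cdot(\text{last factor}))=f(s_1)$ for $|u|\gg0$.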
We claim that $W$ contains $\mathcal{S}(F,V)$. Since $W$ is translation invariant, it is enough to prove that $W$ contains the function $$f_{N,v}(u)=
\begin{cases}
v & \text{if $u \in {\mathfrak p}^N$,}\\
0 & \text{if $u \notin {\mathfrak p}^N$,}
\end{cases}$$ for any $v$ in $V$ and any positive integer $N$. Again by translation invariance, we may assume that $N$ is large enough so that $$\label{Klingendegjacquetlemma1eq11}
\sigma_0(b)v=v\qquad\text{for }b\in B\cap\Gamma_N,$$ where $\Gamma_N$ is as in . Define $f:\:P\to V$ by $$f(g)=
\begin{cases}
\sigma_0(b)v&\text{if $g=bk$ with }b\in B,\;k\in\Gamma_N,\\
0&g\notin B\Gamma_N.
\end{cases}$$ Then, by , $f$ is a well-defined element of ${\mathrm{c}\text{-}\mathrm{Ind}}_B^P\sigma_0$. It is easy to verify that $\tilde f=f_{N,v}$. This proves our claim that $W$ contains $\mathcal{S}(F,V)$.
We define a linear map $W\to V$ by sending $\tilde f$ to the vector $f(s_1)$ in . Then the kernel of this map is $\mathcal{S}(F,V)$. We claim that the map is surjective. To see this, let $v$ be in $V$. Again choose $N$ large enough so that holds. Then the function $f:\:P\to V$ given by $$f(g)=
\begin{cases}
\sigma_0(b)v&\text{if $g=bs_1k$ with }b\in B,\;k\in\Gamma_N,\\
0&g\notin Bs_1\Gamma_N.
\end{cases}$$ is a well-defined element of ${\mathrm{c}\text{-}\mathrm{Ind}}_B^P\sigma_0$ with $f(s_1)=v$. This proves our claim that the map $W\to V$ is surjective. We therefore get an exact sequence $$\label{Klingendegjacquetlemma1eq12}
0\longrightarrow\mathcal{S}(F,V)\longrightarrow W\longrightarrow V\longrightarrow0.$$ The transfer of the action of $N$ to $W$ is given by $$(\begin{bmatrix}1&&y&z\\&1&x&y\\&&1\\&&&1\end{bmatrix}\tilde f)(u)=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&x+2uy+u^2z\\&1\end{array}\right]}})\tilde f(u)$$ for all $x,y,z,u$ in $F$. Evidently, the subspace $\mathcal{S}(F,V)$ is invariant under $N$. Moreover, the action of $N$ on $V$ is given by $$\label{Klingendegjacquetlemma1eq13}
\begin{bmatrix}1&&y&z\\&1&x&y\\&&1\\&&&1\end{bmatrix}v=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&z\\&1\end{array}\right]}})v$$ for all $x,y,z$ in $F$ and $v$ in $V$.
To prove that $(I^0/I^1)_{N,\theta}=0$, it suffices to show that $\mathcal{S}(F,V)_{N,\theta}=0$ and $V_{N,\theta}=0$. Since the element $a$ in the matrix $S$ is non-zero, it follows from (\[Klingendegjacquetlemma1eq13\]) that $V_{N,\theta}=0$.
To prove that $\mathcal{S}(F,V)_{N,\theta}=0$, we define a map $p$ from $\mathcal{S}(F,V)$ to $$\mathcal{S}(F,V_{\left[\begin{smallmatrix}1&*\\&1\end{smallmatrix}\right],\psi^a})=\mathcal{S}(F,V/V({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&*\\&1\end{array}\right]}},\psi^a))$$ by sending $f$ to $f$ composed with the natural projection from $V$ to $V/V({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&*\\&1\end{array}\right]}},\psi^a)$. This map is surjective. It is easy to see that $p$ induces an isomorphism $$\mathcal{S}(F,V)_{\left[ \begin{smallmatrix}1\\ &1&* \\ &&1& \\ &&&1 \end{smallmatrix} \right],\psi^a }\cong\mathcal{S}(F,V_{\left[\begin{smallmatrix}1&*\\&1\end{smallmatrix}\right],\psi^a}).$$ For the space on the right we have the action $$(\begin{bmatrix}1&&y&z\\&1&&y\\&&1\\&&&1\end{bmatrix}f)(u)=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&\:2uy+u^2z\\&1\end{array}\right]}})f(u),\qquad u\in F.$$ By iii) of Lemma \[basicFjacquetlemma\], the map $f\mapsto f(0)$ induces an isomorphism $$\mathcal{S}(F,V_{\left[\begin{smallmatrix}1&*\\&1\end{smallmatrix}\right],\psi^a})_{\left[\begin{smallmatrix}1&&*\\&1&&*\\&&1\\&&&1\end{smallmatrix}\right]}\cong V_{\left[\begin{smallmatrix}1&*\\&1\end{smallmatrix}\right],\psi^a}.$$ For the space on the right we have the action $$\begin{bmatrix}1&&&z\\&1\\&&1\\&&&1\end{bmatrix}v=v.$$ Taking a twisted Jacquet module with respect to the character $\psi^c$ gives zero, since $c\neq0$. This concludes our proof that $(I^0/I^1)_{N,\theta}=0$.
Next let $\sigma_1$ be as in ii) of Lemma \[casselmanfiltrationlemma1\]. Let $$H_1=\begin{bmatrix} *&*&&*\\ &*\\ &&*&*\\ &&&* \end{bmatrix}$$ and $H_2=TN$. By Lemma \[TFGL2lemma\], we have $P=H_1H_2$. To verify the hypotheses of Lemma \[inducedreslemma2\], let $K$ be a compact subset of $P$. Write $P=MN$ and let $p:P\to N$ be the resulting projection map. Since $p$ is continuous, the set $p(K)$ is compact. There exists a compact subset $K_T$ of $T$ such that $T=F^\times K_T$. Then $M\subset H_1K_T$ by Lemma \[TFGL2lemma\]. Therefore $K\subset H_1K_2$ with $K_2=K_Tp(K)$.
By Lemma \[inducedreslemma2\], restriction of functions gives a $TN$ isomorphism $${\mathrm{c}\text{-}\mathrm{Ind}}_{ \left[ \begin{smallmatrix} *&*&&*\\ &*\\ &&*&*\\ &&&* \end{smallmatrix} \right] }^P \sigma_1\cong{\mathrm{c}\text{-}\mathrm{Ind}}_{F^\times Z^J}^{TN}(\sigma_1\big|_{F^\times Z^J}).$$ Note that $F^\times$ acts via the character $\chi\omega_\pi$ on this module. Since $T$ is compact modulo $F^\times$, the Jacquet module $({\mathrm{c}\text{-}\mathrm{Ind}}_{F^\times Z^J}^{TN}(\sigma_1\big|_{F^\times Z^J}))_{N,\theta}$ is a direct sum over characters of $T$. Let $\Lambda$ be a character of $T$. It is easy to verify that $${{\rm Hom}}_{T}\big(({\mathrm{c}\text{-}\mathrm{Ind}}_{F^\times Z^J}^{TN}(\sigma_1\big|_{F^\times Z^J}))_{N,\theta},\Lambda\big)={{\rm Hom}}_{TN}\big({\mathrm{c}\text{-}\mathrm{Ind}}_{F^\times Z^J}^{TN}(\sigma_1\big|_{F^\times Z^J}),\Lambda\otimes\theta\big).$$ By Frobenius reciprocity, the space on the right is isomorphic to $$\label{Klingendegjacquetlemma1eq1}
{{\rm Hom}}_{F^\times Z^J}\big(\sigma_1\big|_{F^\times Z^J},(\Lambda\otimes\theta)\big|_{F^\times Z^J}\big).$$ This space is zero unless the restriction of $\Lambda$ to $F^\times$ equals $\chi\omega_\pi$. Assume this is the case. Then (\[Klingendegjacquetlemma1eq1\]) is equal to $${{\rm Hom}}_{ \left[ \begin{smallmatrix} 1&*\\ &1\end{smallmatrix} \right] }(\pi,\psi^c)\cong{{\rm Hom}}_{ \left[ \begin{smallmatrix} 1&*\\ &1\end{smallmatrix} \right] }(\pi,\psi).$$ This concludes the proof.
\[casselmanfiltrationlemma\] Let $\chi$ be a character of $F^\times$ and $\pi$ an admissible representation of ${{\rm GL}}(2,F)$. Let $I$ be the space of the Klingen induced representation $\chi\rtimes\pi$. There is a filtration of $Q$-modules $$I^3=0\subset I^2 \subset I^1 \subset I^0=I$$ with the quotients given as follows.
1. $ I^0/I^1 = \sigma_0$, where $$\sigma_0 (\begin{bmatrix} t & * & *& * \\ & a&b & * \\ &c & d & * \\ & & & (ad -bc )t^{-1} \end{bmatrix}) = \chi (t)\,|t^2 (ad-bc)^{-1}|\,\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&b\\c&d\end{array}\right]}})$$ for $\left[\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right]$ in ${{\rm GL}}(2,F)$ and $t$ in $F^\times$.
2. $I^1/I^2 = {\mathrm{c}\text{-}\mathrm{Ind}}_{ \left[ \begin{smallmatrix} *&&*&*\\ &*&*&*\\ &&*&\\ &&&* \end{smallmatrix} \right] }^Q \sigma_1$, where $$\sigma_1 (\begin{bmatrix} t&&*&x \\&a&b&*\\&&d&\\&&& ad t^{-1} \end{bmatrix}) = \chi (a)\,|a d^{-1} |\,\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}t&x\\&ad t^{-1}\end{array}\right]}})$$ for $b,x$ in $F$ and $a,d,t$ in $F^\times$.
3. $I^2/I^3=I^2= {\mathrm{c}\text{-}\mathrm{Ind}}_{\left[ \begin{smallmatrix} *&&& \\ &*&*& \\ &*&*& \\ &&& * \end{smallmatrix} \right] } ^Q\sigma_2$, where $$\sigma_2 (\begin{bmatrix}t&&&\\&a&b&\\&c&d&\\&&& (ad-bc) t^{-1}\end{bmatrix}) = \chi(t^{-1}(ad-bc))\,|t^{-2}(ad-bc)|\,\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&b\\c&d\end{array}\right]}})$$ for $\left[\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right]$ in ${{\rm GL}}(2,F)$ and $t$ in $F^\times$.
This follows by going through the procedure of Sections 6.2 and 6.3 of [@C].
\[Klingendegjacquetlemma\] Let $\chi$ be a character of $F^\times$, and let $(\pi,V)$ be an admissible representation of ${{\rm GL}}(2,F)$. Let $I$ be the standard space of the Klingen induced representation $\chi\rtimes\pi$. Let $N$ be the unipotent radical of the Siegel parabolic subgroup, and let $\theta$ be the character of $N$ defined in (\[splitthetaeq\]) (i.e., we consider the split case). Then there is a filtration $$0\subset J_3\subset J_2\subset J_1=I_{N,\theta},$$ with the quotients given as follows.
- $J_1/J_2\cong V$
- $J_2/J_3\cong V_{ \left[ \begin{smallmatrix} 1\\ *&1\end{smallmatrix} \right]}$.
- $J_3\cong \mathcal{S}(F^\times,V_{\left[\begin{smallmatrix}1\\ *&1\end{smallmatrix}\right],\psi})$.
The action of the stabilizer of $\theta$ is given as follows, $$\begin{aligned}
{\rm diag}(a,b,a,b)v&=\chi(a)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&\\&b\end{array}\right]}})v\qquad\text{for }v\in J_1/J_2,\\
{\rm diag}(a,b,a,b)v&=\chi(b)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&\\&b\end{array}\right]}})v\qquad\text{for }v\in J_2/J_3,\\
({\rm diag}(a,b,a,b)f)(u)&=\chi(b)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&\\&a\end{array}\right]}})f(a^{-1}bu)\qquad\text{for }f\in J_3,\;u\in F^\times,
\end{aligned}$$ for all $a$ and $b$ in $F^\times$. In particular, we have the following special cases.
1. Assume that $\pi=\sigma1_{{{\rm GL}}(2)}$. Then the twisted Jacquet module $I_{N,\theta}=I/\langle\theta(n)v-\rho(n)v:n\in N,\,v\in I\rangle$ is two-dimensional. More precisely, there is a filtration $$0\subset J_2\subset J_1=I_{N,\theta},$$ where $J_2$ and $J_1/J_2$ are both one-dimensional, and the action of the stabilizer of $\theta$ is given as follows, $$\begin{aligned}
{\rm diag}(a,b,a,b)v&=\chi(a)\sigma(ab)v\qquad\text{for }v\in J_1/J_2,\\
{\rm diag}(a,b,a,b)v&=\chi(b)\sigma(ab)v\qquad\text{for }v\in J_2,
\end{aligned}$$ for all $a$ and $b$ in $F^\times$.
2. Assume that $\pi$ is infinite-dimensional and irreducible. Then there is a filtration $$0\subset J_3\subset J_2\subset J_1=I_{N,\theta},$$ with the quotients given as follows.
- $J_1/J_2\cong V$
- $J_2/J_3\cong V_{ \left[ \begin{smallmatrix} 1\\ *&1\end{smallmatrix} \right]}$. Hence, $J_2/J_3$ is $2$-dimensional if $\pi$ is a principal series representation, $1$-dimensional if $\pi$ is a twist of the Steinberg representation, and $0$ if $\pi$ is supercuspidal.
- $J_3\cong \mathcal{S}(F^\times)$.
The action of the stabilizer of $\theta$ is given as follows, $$\begin{aligned}
{\rm diag}(a,b,a,b)v&=\chi(a)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&\\&b\end{array}\right]}})v\qquad\text{for }v\in J_1/J_2,\\
{\rm diag}(a,b,a,b)v&=\chi(b)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&\\&b\end{array}\right]}})v\qquad\text{for }v\in J_2/J_3,\\
({\rm diag}(a,b,a,b)f)(u)&=\chi(b)\omega_\pi(a)f(a^{-1}bu)\qquad\text{for }f\in J_3,\;u\in F^\times,
\end{aligned}$$ for all $a$ and $b$ in $F^\times$.
It will be easier to work with the conjugate subgroup $N_{\mathrm{alt}}$ and the character $\theta_{\mathrm{alt}}$ of $N_{\mathrm{alt}}$ defined in (\[splitthetaconjeq\]). For the top quotient from i) of Lemma \[casselmanfiltrationlemma\] we have $$(I^0/I^1)_{N_{\mathrm{alt}},\theta_{\mathrm{alt}}}=0,$$ since the subgroup $$\begin{bmatrix}1&*\\&1\\&&1&*\\&&&1\end{bmatrix}$$ acts trivially on $I^0/I^1$, but $\theta_{\mathrm{alt}}$ is not trivial on this subgroup. We consider the quotient $I^1/I^2 = {\mathrm{c}\text{-}\mathrm{Ind}}_{H}^Q \sigma_1$, where $$H=\begin{bmatrix} *&&*&*\\ &*&*&*\\ &&*&\\ &&&* \end{bmatrix},$$ and with $\sigma_1$ as in ii) of Lemma \[casselmanfiltrationlemma\]. We first show that for each function $f$ in the standard model of this representation, the function $f^\circ:\:F\to V$, given by $$f^\circ(w)=f(\begin{bmatrix}1&-w\\&1\\&&1&w\\&&&1\end{bmatrix}),$$ has compact support. Let $K$ be a compact subset of $Q$ such that the support of $f$ is contained in $HK$. If $$\begin{bmatrix}1&-w\\&1\\&&1&w\\&&&1\end{bmatrix}=\begin{bmatrix} t&&*&x \\&a&b&*\\&&d&\\&&& ad t^{-1} \end{bmatrix}\begin{bmatrix}k_0&x_1&x_2&x_3\\&k_1&k_2&x_4\\&k_3&k_4&x_5\\&&&k_5\end{bmatrix},$$ with the rightmost matrix being in $K$, then calculations show that $k_3=0$ and $w=k_4^{-1}x_5$. Since $k_4^{-1}$ and $x_5$ vary in bounded subsets, $w$ is confined to a compact subset of $F$. This proves our assertion that $f^\circ$ has compact support.
Next, for each function $f$ in the standard model of ${\mathrm{c}\text{-}\mathrm{Ind}}_{H}^Q \sigma_1$, consider the function $\tilde f:\:F^2\to V$ given by $$\tilde f(u,w)=f(\begin{bmatrix}1&-w\\&1\\&u&1&w\\&&&1\end{bmatrix}).$$ Let $W$ be the space of all such functions $\tilde f$. Since the map $f\mapsto\tilde f$ is injective, we get a vector space isomorphism ${\mathrm{c}\text{-}\mathrm{Ind}}_{H}^Q \sigma_1\cong W$. Evidently, in this new model, the action of $N_{\mathrm{alt}}$ is given by $$\label{Klingendegjacquetlemmaeq10}
(\begin{bmatrix}1&-y&&z\\&1&&\\&x&1&y\\&&&1\end{bmatrix}\tilde f)(u,w)=\tilde f(u+x,w+y).$$
We claim that $W$ contains $\mathcal{S}(F^2,V)$. Since $W$ is translation invariant, it is enough to prove that $W$ contains the function $$f_{N,v}(u,w)=
\begin{cases}
v & \text{if $u,w \in {\mathfrak p}^N$,}\\
0 & \text{if $u \notin {\mathfrak p}^N$ or $w \notin {\mathfrak p}^N$,}
\end{cases}$$ for any $v$ in $V$ and any positive integer $N$. Again by translation invariance, we may assume that $N$ is large enough so that $$\label{Klingendegjacquetlemmaeq11}
\sigma_1(h)v=v\qquad\text{for }h\in H\cap\Gamma_N,$$ where $$\label{GammaNdefeq2}
\Gamma_{N}=
\begin{bmatrix}
1+{\mathfrak p}^N&{\mathfrak p}^N&{\mathfrak p}^N&{\mathfrak p}^N \\
&1+{\mathfrak p}^N&{\mathfrak p}^N&{\mathfrak p}^N\\
&{\mathfrak p}^N&1+{\mathfrak p}^N&{\mathfrak p}^N\\
&&&1+{\mathfrak p}^N
\end{bmatrix} \cap Q.$$ Define $f:\:Q\to V$ by $$f(g)=
\begin{cases}
\sigma_1(h)v&\text{if $g=hk$ with }h\in H,\;k\in\Gamma_N,\\
0&g\notin H\Gamma_N.
\end{cases}$$ Then, by , $f$ is a well-defined element of ${\mathrm{c}\text{-}\mathrm{Ind}}_{H}^Q \sigma_1$. It is easy to verify that $\tilde f=f_{N,v}$. This proves our claim that $W$ contains $\mathcal{S}(F^2,V)$.
Now consider the map $$\label{Klingendegjacquetlemmaeq12}
W\longrightarrow\mathcal{S}(F,V),\qquad \tilde f\longmapsto\Big(w\mapsto f(\begin{bmatrix}1&-w\\&1\\&&1&w\\&&&1\end{bmatrix}s_2)\Big),$$ where $s_2$ is defined in . This map is well-defined, since the function on the right is $(s_2f)^\circ$, which we showed above has compact support. Similar considerations as above show that the map is surjective.
We claim that the kernel of is $\mathcal{S}(F^2,V)$. First suppose that $\tilde f$ lies in the kernel; we have to show that $\tilde f$ has compact support. Choose $N$ large enough so that $f$ is right invariant under $\Gamma_N$. Then, for $u$ not in ${\mathfrak p}^{-N}$ and $w$ in $F$, $$\begin{aligned}
\tilde f(u,w)&=f(\begin{bmatrix}1&-w\\&1\\&u&1&w\\&&&1\end{bmatrix})\\
&=f(\begin{bmatrix}1&-w\\&1\\&&1&w\\&&&1\end{bmatrix}\begin{bmatrix}1\\&1&u^{-1}\\&&1\\&&&1\end{bmatrix}\begin{bmatrix}1\\&-u^{-1}\\&&-u\\&&&1\end{bmatrix}s_2\begin{bmatrix}1\\&1&u^{-1}\\&&1\\&&&1\end{bmatrix})\\
&=f(\begin{bmatrix}1&&-wu^{-1}&w^2u^{-1}\\&1&u^{-1}&-wu^{-1}\\&&1\\&&&1\end{bmatrix}\begin{bmatrix}1&-w\\&1\\&&1&w\\&&&1\end{bmatrix}\begin{bmatrix}1\\&-u^{-1}\\&&-u\\&&&1\end{bmatrix}s_2)\\
&=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&w^2u^{-1}\\&1\end{array}\right]}})f(\begin{bmatrix}1&-w\\&1\\&&1&w\\&&&1\end{bmatrix}\begin{bmatrix}1\\&-u^{-1}\\&&-u\\&&&1\end{bmatrix}s_2)\\
&=\chi(-u^{-1})|u|^{-2}\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&w^2u^{-1}\\#3&1\end{array}\right]}})f(\begin{bmatrix}1&wu^{-1}\\&1\\&&1&-wu^{-1}\\&&&1\end{bmatrix}s_2).\end{aligned}$$ This last expression is zero by assumption. For fixed $u$ in ${\mathfrak p}^{-N}$, the function $\tilde f(u,\cdot)$ has compact support; this follows because each $f^\circ$ has compact support. Combining these facts shows that $\tilde f$ has compact support. Conversely, assume $\tilde f$ is in $\mathcal{S}(F^2,V)$. Then we can find a large enough $N$ such that if $u$ has valuation $-N$, the function $\tilde f(u,\cdot)$ is zero. Looking at the above calculation, we see that, for fixed such $u$, $$f(\begin{bmatrix}1&wu^{-1}\\&1\\&&1&-wu^{-1}\\&&&1\end{bmatrix}s_2)=0$$ for all $w$ in $F$. This shows that $\tilde f$ is in the kernel of the map , completing the proof of our claim about this kernel. We now have an exact sequence $$\label{Klingendegjacquetlemmaeq13}
0\longrightarrow\mathcal{S}(F^2,V)\longrightarrow W\longrightarrow\mathcal{S}(F,V)\longrightarrow0.$$ Note that the space $\mathcal{S}(F^2,V)$ is invariant under the action of $N_{\mathrm{alt}}$. A calculation shows that the action of $N_{\mathrm{alt}}$ on $\mathcal{S}(F,V)$ is given by $$\label{Klingendegjacquetlemmaeq14}
(\begin{bmatrix}1&-y&&z\\&1&&\\&x&1&y\\&&&1\end{bmatrix}f)(w)=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1\;&z-2wy-w^2x\\&1\end{array}\right]}})f(w)$$ for $x,y,z,w$ in $F$ and $f$ in $\mathcal{S}(F,V)$.
We claim that $\mathcal{S}(F,V)_{N_{\mathrm{alt}},\theta_{\mathrm{alt}}}=0$. To prove this, we calculate this Jacquet module in stages. We define a map $p$ from $\mathcal{S}(F,V)$ to $$\mathcal{S}(F,V_{\left[\begin{smallmatrix}1&*\\&1\end{smallmatrix}\right]})=\mathcal{S}(F,V/V({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&*\\&1\end{array}\right]}}))$$ by sending $f$ to $f$ composed with the natural projection from $V$ to $V/V({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&*\\&1\end{array}\right]}})$. This map is surjective and has kernel $\mathcal{S}(F,V)(\left[\begin{smallmatrix}1&&&*\\&1\\&&1\\&&&1\end{smallmatrix}\right])$. Hence, we obtain an isomorphism $$\mathcal{S}(F,V)_{\left[\begin{smallmatrix}1&&&*\\&1\\&&1\\&&&1\end{smallmatrix}\right]}\cong\mathcal{S}(F,V_{\left[\begin{smallmatrix}1&*\\&1\end{smallmatrix}\right]}).$$ The action of the group $\left[\begin{smallmatrix}1&*&&\\&1\\&*&1&*\\&&&1\end{smallmatrix}\right]$ on these spaces is trivial. Since $\theta_{\mathrm{alt}}$ is not trivial on this group, this proves our claim that $\mathcal{S}(F,V)_{N_{\mathrm{alt}},\theta_{\mathrm{alt}}}=0$.
By , we now have $W_{N_{\mathrm{alt}},\theta_{\mathrm{alt}}}\cong\mathcal{S}(F^2,V)_{N_{\mathrm{alt}},\theta_{\mathrm{alt}}}$. The action of $N_{\mathrm{alt}}$ on $\mathcal{S}(F^2,V)$ is given by . Since $\mathcal{S}(F^2,V)=\mathcal{S}(F)\otimes\mathcal{S}(F)\otimes V$, Lemma \[basicFjacquetlemma\] implies that the map $$f\longmapsto\int\limits_F\int\limits_F f(u,w)\psi(-w)\,du\,dw$$ induces an isomorphism $\mathcal{S}(F^2,V)_{N_{\mathrm{alt}},\theta_{\mathrm{alt}}}\cong V$. Moreover, a calculation shows that ${\rm diag}(a,a,b,b)$ acts on $\mathcal{S}(F^2,V)_{N_{\mathrm{alt}},\theta_{\mathrm{alt}}}\cong V$ by $\chi(a)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&\\&b\end{array}\right]}})$.
Finally, we consider the bottom quotient $I^2/I^3={\mathrm{c}\text{-}\mathrm{Ind}}_{\left[ \begin{smallmatrix} *&&& \\ &*&*& \\ &*&*& \\ &&& * \end{smallmatrix} \right] } ^Q\sigma_2$ with $\sigma_2$ as in iii) of Lemma \[casselmanfiltrationlemma\]. If we associate with a function $f$ in the standard model of this induced representation the function $$\tilde f(u,v,w)=f(\begin{bmatrix}1&-v&u&w\\&1&&u\\&&1&v\\&&&1\end{bmatrix}),$$ then, by Lemma \[inducedreslemma2\], we obtain an isomorphism $I^2/I^3\cong\mathcal{S}(F^3,V)$. A calculation shows that the action of $N_{\mathrm{alt}}$ on $\mathcal{S}(F^3,V)$ is given by $$\label{Klingendegjacquetlemmaeq15}
(\begin{bmatrix}1&-y&&z\\&1&&\\&x&1&y\\&&&1\end{bmatrix}f)(u,v,w)=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&\\x&1\end{array}\right]}})f(u,v+y-ux,w+z+uy)$$ for $x,y,z,u,v,w$ in $F$ and $f$ in $\mathcal{S}(F^3,V)$. This time we take Jacquet modules step by step, starting with the $z$-variable. Lemma \[basicFjacquetlemma\] shows that the map $$f\longmapsto \bigg((u,v)\mapsto\int\limits_Ff(u,v,w)\,dw\bigg)$$ induces an isomorphism $\mathcal{S}(F^3,V)_{\left[\begin{smallmatrix}1&&&*\\&1\\&&1\\&&&1\end{smallmatrix}\right]}\cong\mathcal{S}(F^2,V)$. On $\mathcal{S}(F^2,V)$ we have the action $$(\begin{bmatrix}1&-y\\&1\\&x&1&y\\&&&1\end{bmatrix}f)(u,v)=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&\\x&1\end{array}\right]}})f(u,v+y-ux)$$ for $x,y,u,v$ in $F$ and $f$ in $\mathcal{S}(F^2,V)$. Part ii) of Lemma \[basicFjacquetlemma\] shows that the map $$f\longmapsto \bigg(u\mapsto\int\limits_Ff(u,v)\psi(-v)\,dv\bigg)$$ induces an isomorphism $\mathcal{S}(F^2,V)_{\left[\begin{smallmatrix}1&*\\&1\\&&1&*\\&&&1\end{smallmatrix}\right],\psi}\cong\mathcal{S}(F,V)$. A calculation shows that on $\mathcal{S}(F,V)$ we have the actions $$\label{Klingendegjacquetlemmaeq1}
(\begin{bmatrix}1\\&1\\&x&1\\&&&1\end{bmatrix}f)(u)=\psi(-ux)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&\\x&1\end{array}\right]}})f(u)$$ for $x,u$ in $F$, and $$\label{Klingendegjacquetlemmaeq1b}
(\begin{bmatrix}a\\&a\\&&b\\&&&b\end{bmatrix}f)(u)=\chi(b)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&\\&b\end{array}\right]}})f(a^{-1}bu)$$ for $u$ in $F$ and $a,b$ in $F^\times$. The subspace $\mathcal{S}(F^\times,V)$ consisting of functions that vanish at zero is invariant under these actions. We consider the exact sequence $$0\longrightarrow\mathcal{S}(F^\times,V)\longrightarrow\mathcal{S}(F,V)\longrightarrow\mathcal{S}(F,V)/\mathcal{S}(F^\times,V)\longrightarrow 0.$$ The quotient $\mathcal{S}(F,V)/\mathcal{S}(F^\times,V)$ is isomorphic to $V$ via the map $f\mapsto f(0)$. The actions of the above subgroups on $V$ are given by $$\label{Klingendegjacquetlemmaeq1c}
\begin{bmatrix}1\\&1\\&x&1\\&&&1\end{bmatrix}v=\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&\\x&1\end{array}\right]}})v$$ and $$\label{Klingendegjacquetlemmaeq1d}
\begin{bmatrix}a\\&a\\&&b\\&&&b\end{bmatrix}v=\chi(b)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&\\&b\end{array}\right]}})v.$$ Taking Jacquet modules on the above sequence gives $$0\longrightarrow\mathcal{S}(F^\times,V)_{ \left[ \begin{smallmatrix} 1\\ &1\\ &*&1\\ &&&1 \end{smallmatrix} \right]}\longrightarrow\mathcal{S}(F,V)_{ \left[ \begin{smallmatrix} 1\\ &1\\ &*&1\\ &&&1 \end{smallmatrix} \right]}\longrightarrow\big(\mathcal{S}(F,V)/\mathcal{S}(F^\times,V)\big)_{ \left[ \begin{smallmatrix} 1\\ &1\\ &*&1\\ &&&1 \end{smallmatrix} \right]}\longrightarrow 0.$$ In view of , the Jacquet module on the right is isomorphic to $V_{ \left[ \begin{smallmatrix} 1\\ *&1\end{smallmatrix} \right]}$. The action of the diagonal subgroup on $V_{ \left[ \begin{smallmatrix} 1\\ *&1\end{smallmatrix} \right]}$ is given by the same formula as in .
We consider the map from $\mathcal{S}(F^\times,V)$ to itself given by $$f\longmapsto\Big(u\mapsto\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&\\&u\end{array}\right]}})f(u)\Big).$$ This map is an isomorphism of vector spaces. The actions (\[Klingendegjacquetlemmaeq1\]) and (\[Klingendegjacquetlemmaeq1b\]) turn into $$\label{Klingendegjacquetlemmaeq2}
(\begin{bmatrix}1\\&1\\&x&1\\&&&1\end{bmatrix}f)(u)=\psi(-ux)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&\\ux&1\end{array}\right]}})f(u)$$ and $$\label{Klingendegjacquetlemmaeq2b}
(\begin{bmatrix}a\\&a\\&&b\\&&&b\end{bmatrix}f)(u)=\chi(b)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&\\&a\end{array}\right]}})f(a^{-1}bu).$$ We define a map $p$ from $\mathcal{S}(F^\times,V)$ to $$\mathcal{S}(F^\times,V_{\left[\begin{smallmatrix}1\\ *&1\end{smallmatrix}\right],\psi})=\mathcal{S}(F^\times,V/V({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&\\{*}&1\end{array}\right]}},\psi))$$ by sending $f$ to $f$ composed with the natural projection from $V$ to $V/V({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&\\{*}&1\end{array}\right]}},\psi)$. This map is surjective. The kernel of $p$ consists of all $f$ in $\mathcal{S}(F^\times,V)$ for which there exists a positive integer $l$ such that $$\label{Klingendegjacquetlemmaeq3}
\int\limits_{{\mathfrak p}^{-l}}\psi(-x)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&\\x&1\end{array}\right]}})f(u)\,dx=0\qquad\text{for all }u\in F^\times.$$ Let $W$ be the space of $f$ in $\mathcal{S}(F^\times,V)$ for which there exists a positive integer $k$ such that $$\label{Klingendegjacquetlemmaeq4}
\int\limits_{{\mathfrak p}^{-k}}\begin{bmatrix}1\\&1\\&x&1\\&&&1\end{bmatrix}f\,dx=0,$$ so that $\mathcal{S}(F^\times,V)/W=\mathcal{S}(F^\times,V)_{ \left[ \begin{smallmatrix} 1\\ &1\\ &*&1\\ &&&1 \end{smallmatrix} \right]}$. Let $f$ be in $W$. The condition means that $$\label{Klingendegjacquetlemmaeq5}
\int\limits_{{\mathfrak p}^{-k}}\psi(-ux)\pi({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&\\ux&1\end{array}\right]}})f(u)\,dx=0\qquad\text{for all }u\in F^\times.$$ Since $f$ has compact support in $F^\times$, the conditions (\[Klingendegjacquetlemmaeq3\]) and (\[Klingendegjacquetlemmaeq5\]) are equivalent. It follows that $$\mathcal{S}(F^\times,V)_{ \left[ \begin{smallmatrix} 1\\ &1\\ &*&1\\ &&&1 \end{smallmatrix} \right]}\cong\mathcal{S}(F^\times,V_{\left[\begin{smallmatrix}1\\ *&1\end{smallmatrix}\right],\psi}).$$ The diagonal subgroup acts on $\mathcal{S}(F^\times,V_{\left[\begin{smallmatrix}1\\ *&1\end{smallmatrix}\right],\psi})$ by the same formula as in (\[Klingendegjacquetlemmaeq2b\]).
The main results {#besseltablesec}
================
Having assembled all the required tools, we are now ready to prove the three main results of this paper mentioned in the introduction.
Existence of Bessel functionals
-------------------------------
In this section we prove that every irreducible, admissible representation $(\pi,V)$ of ${{\rm GSp}}(4,F)$ which is not a twist of the trivial representation admits a Bessel functional. The proof uses the $P_3$-module $V_{Z^J}$ and the $G^J$-module $V_{Z^J,\psi}$. The first module is closely related to the theory of zeta integrals. The second module, on the other hand, is related to the theory of representations of the metaplectic group ${\widetilde{\rm SL}}(2,F)$.
\[Nthetaexistslemma\] Let $(\pi,V)$ be a smooth representation of $N$. Then there exists a character $\theta$ of $N$ such that $V_{N,\theta}\neq0$.
This follows immediately from Lemma 1.6 of [@RoSc2011].
Let ${\widetilde{\rm SL}}(2,F)$ be the metaplectic group, defined as in Sect. 1 of [@RoSc2011]. Let $m$ be in $F^\times$. We will use the *Weil representation* ${\pi_{\scriptscriptstyle W}}^m$ of ${\widetilde{\rm SL}}(2,F)$ on $\mathcal{S}(F)$ associated to the quadratic form $q(x)=x^2$ and $\psi^m$. This is as defined on pp. 3-4 of [@Wald1980] and p. 223 of [@Wald1991]. The only explicit property of ${\pi_{\scriptscriptstyle W}}^m$ we will use is $$\begin{aligned}
({\pi_{\scriptscriptstyle W}}^m ({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}1&b\\&1\end{array}\right]}}, 1)f)(x)&=\psi(mbx^2)f(x), \label{upperweileq}\end{aligned}$$
({\pi_{\scriptscriptstyle S}}^m(\begin{bmatrix}1&\lambda&\mu&\kappa\\&1&&\mu\\&&1&-\lambda\\&&&1\end{bmatrix})f)(x)=\psi^m(\kappa+(2x+\lambda)\mu)f(x+\lambda)$$ for $f$ in $\mathcal{S}(F)$. This representation of $N_Q$ is called the *Schrödinger representation*.
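As a routine consistency check, one can verify directly that ${\pi_{\scriptscriptstyle S}}^m$ respects the group law of $N_Q$. Multiplying two elements of $N_Q$ gives $$\begin{bmatrix}1&\lambda&\mu&\kappa\\&1&&\mu\\&&1&-\lambda\\&&&1\end{bmatrix}\begin{bmatrix}1&\lambda'&\mu'&\kappa'\\&1&&\mu'\\&&1&-\lambda'\\&&&1\end{bmatrix}=\begin{bmatrix}1&\lambda+\lambda'&\mu+\mu'&\kappa+\kappa'+\lambda\mu'-\mu\lambda'\\&1&&\mu+\mu'\\&&1&-\lambda-\lambda'\\&&&1\end{bmatrix},$$ and the elementary identity $$\kappa+(2x+\lambda)\mu+\kappa'+\big(2(x+\lambda)+\lambda'\big)\mu'=\kappa+\kappa'+\lambda\mu'-\mu\lambda'+(2x+\lambda+\lambda')(\mu+\mu')$$ shows that ${\pi_{\scriptscriptstyle S}}^m(n)\,{\pi_{\scriptscriptstyle S}}^m(n')={\pi_{\scriptscriptstyle S}}^m(nn')$ for all $n,n'$ in $N_Q$.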
Given a smooth, genuine representation $(\tau,W)$ of ${\widetilde{\rm SL}}(2,F)$, we define a representation $\tau^J$ of $G^J$ on the space $W\otimes\mathcal{S}(F)$ by the formulas $$\begin{aligned}
\label{GJrepeq1} \tau^J(\begin{bmatrix}1\\&a&b\\&c&d\\&&&1\end{bmatrix})(v\otimes f)&=\tau({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&b\\c&d\end{array}\right]}},1)v\otimes{\pi_{\scriptscriptstyle W}}^m({{\setlength{\arraycolsep}{0.5mm}\left[
\begin{array}{cc}a&b\\c&d\end{array}\right]}},1)f,\\
\label{GJrepeq2} \tau^J(\begin{bmatrix}1&\lambda&\mu&\kappa\\&1&&\mu\\&&1&-\lambda\\&&&1\end{bmatrix})(v\otimes f)&=v\otimes{\pi_{\scriptscriptstyle S}}^m(\begin{bmatrix}1&\lambda&\mu&\kappa\\&1&&\mu\\&&1&-\lambda\\&&&1\end{bmatrix})f.\end{aligned}$$ Computations show that $\tau^J$ is a smooth representation of $G^J$. Moreover, the map that sends $\tau$ to $\tau^J$ is a bijection between the set of equivalence classes of smooth, genuine representations of ${\widetilde{\rm SL}}(2,F)$, and smooth representations of $G^J$ with central character $\psi^m$. The proof of this fact is based on the Stone-von Neumann Theorem; see Theorem 2.6.2 of [@BS1998]. Under this bijection, irreducible $\tau$ correspond to irreducible $\tau^J$.
\[GJWhitlemma\] Let $m$ be in $F^\times$. Let $(\tau^J,W^J)$ be a non-zero, irreducible, smooth representation of $G^J$ with central character $\psi^m$. Then $\dim W^J_{N,\theta_{a,0,m}} \leq1$ for all $a$ in $F^\times$ and $\dim W^J_{N,\theta_{a,0,m}} =1$ for some $a$ in $F^\times$. This dimension depends only on the class of $a$ in $F^\times/F^{\times2}$.
By the above discussion, there exists an irreducible, genuine, admissible representation $\tau$ of ${\widetilde{\rm SL}}(2,F)$ such that $\tau^J\cong \tau \otimes {\pi_{\scriptscriptstyle SW}}^m$. Using , and iii) of Lemma \[basicFjacquetlemma\], an easy calculation shows that $$W^J_{\left[\begin{smallmatrix}1&&*&*\\&1&*&*\\&&1\\&&&1\end{smallmatrix}\right],\theta_{a,0,m}}\cong W_{\left[\begin{smallmatrix}1&*\\&1\end{smallmatrix}\right],\psi^a}.$$ By Lemme 2 on p. 226 of [@Wald1991], the space on the right is at most one-dimensional, and is one-dimensional for some $a$ in $F^\times$. Moreover, the dimension depends only on the class of $a$ in $F^\times/F^{\times2}$.
\[Nthetaprop\] Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$. Then the following statements are equivalent.
1. $\pi$ is not a twist of the trivial representation.
2. There exists a non-trivial character $\theta$ of $N$ such that $V_{N,\theta}\neq0$.
3. There exists a non-degenerate character $\theta$ of $N$ such that $V_{N,\theta}\neq0$.
i\) $\Rightarrow$ ii) Assume that $V_{N,\theta}=0$ for all non-trivial $\theta$. By Lemma \[Nthetaexistslemma\], it follows that $V_{N,1}\neq0$. In particular, the $P_3$-module $V_{Z^J}$ is non-zero. By using Theorem \[finitelength\] and inspecting tables A.5 and A.6 in [@NF], one can see that $V_{Z^J}$ contains an irreducible subquotient $\tau$ of the form $\tau^{P_3}_{{{\rm GL}}(0)}(1)$, or $\tau^{P_3}_{{{\rm GL}}(1)}(\chi)$ for a character $\chi$ of $F^\times$, or $\tau^{P_3}_{{{\rm GL}}(2)}(\rho)$ for an irreducible, admissible, infinite-dimensional representation $\rho$ of ${{\rm GL}}(2,F)$; it is here that we use the hypothesis that $\pi$ is not one-dimensional. For $a,b$ in $F$ we define a character of the subgroup $\left[\begin{smallmatrix}1&*&*\\&1\\&&1\end{smallmatrix}\right]$ of $P_3$ by $$\label{thetaabdefeq}
\theta_{a,b}(\begin{bmatrix}1&x&y\\&1\\&&1\end{bmatrix})=\psi(ax+by).$$ By Lemma 2.5.4 or Lemma 2.5.5 of [@NF], or the infinite-dimensionality of $\rho$ if $\tau=\tau^{P_3}_{{{\rm GL}}(2)}(\rho)$, $$\tau_{\left[\begin{smallmatrix}1&*&*\\&1\\&&1\end{smallmatrix}\right],\theta_{a,b}}\neq0$$ for some $(a,b)\neq(0,0)$. This implies that $V_{N,\theta_{a,b,0}}\neq0$, contradicting our assumption.
ii\) $\Rightarrow$ iii) The hypothesis implies that $V_{Z^J,\psi^m}$ is non-zero for some $m$ in $F^\times$. We observe that $V_{Z^J,\psi^m}$ is a smooth $G^J$ representation. By Lemma 2.6 of [@BeZe1976], there exists an irreducible subquotient $(\tau^J,W^J)$ of this $G^J$ module. By Lemma \[GJWhitlemma\], we have $\dim W^J_{N,\theta_{a,0,m}}=1$ for some $a$ in $F^\times$. This implies that $V_{N,\theta_{a,0,m}}\neq0$.
iii\) $\Rightarrow$ i) is obvious.
\[existencetheorem\] Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$. Assume that $\pi$ is not one-dimensional. Then $\pi$ admits a $(\Lambda,\theta)$-Bessel functional for some non-degenerate character $\theta$ of $N$ and some character $\Lambda$ of $T$. If $\pi$ is non-generic and supercuspidal, then every Bessel functional for $\pi$ is non-split.
By Proposition \[Nthetaprop\], there exists a non-degenerate $\theta$ such that $V_{N,\theta}\neq0$. Assume that $\theta$ is non-split. Then, since the center $F^\times$ of ${{\rm GSp}}(4,F)$ acts by a character on $V_{N,\theta}$ and $T/F^\times$ is compact, $V_{N,\theta}$ decomposes as a direct sum over characters of $T$. It follows that a $(\Lambda,\theta)$-Bessel functional exists for some character $\Lambda$ of $T$.
Now assume that $\theta$ is split. We may assume that $S$ is the matrix in . Let $V_0,V_1,V_2$ be the modules appearing in the $P_3$-filtration, as in Theorem \[finitelength\]. Since $V_{N,\theta}\neq0$, we must have $$(V_0/V_1)_{\left[\begin{smallmatrix}1&*&*\\&1\\&&1\end{smallmatrix}\right],\theta_{0,1}}\neq0,\qquad
(V_1/V_2)_{\left[\begin{smallmatrix}1&*&*\\&1\\&&1\end{smallmatrix}\right],\theta_{0,1}}\neq0,\qquad\text{or}\qquad
(V_2)_{\left[\begin{smallmatrix}1&*&*\\&1\\&&1\end{smallmatrix}\right],\theta_{0,1}}\neq0,$$ where we use the notation . It is immediate from that the first space is zero. If the second space is non-zero, then $\pi$ admits a split Bessel functional by iii) of Proposition \[nongenericsplitproposition\]. If the third space is non-zero, then $\pi$ is generic by Theorem \[finitelength\], and hence, by Proposition \[GSp4genericprop\], admits a split Bessel functional.
For the last statement, assume that $\pi$ is non-generic and supercuspidal. Then $V_{Z^J}=0$ by Theorem \[finitelength\]. Hence, $V_{N,\theta}=0$ for any split $\theta$. It follows that all Bessel functionals for $\pi$ are non-split.
The table of Bessel functionals {#maintheoremproofsec}
-------------------------------
In this section, given a non-supercuspidal representation $\pi$, or a $\pi$ that is in an $L$-packet with a non-supercuspidal representation, we determine the set of $(\Lambda,\theta)$ for which $\pi$ admits a $(\Lambda,\theta)$-Bessel functional.
\[Kcompactexactlemma\] Let $\theta$ be as in , and let $T$ be the corresponding torus. Assume that the associated quadratic extension $L$ is a field. Let $V_1$, $V_2$, $V_3$ and $W$ be smooth representations of $T$. Assume that these four representations all have the same central character. Assume further that there is an exact sequence of $T$-modules $$0\longrightarrow V_1\longrightarrow V_2\longrightarrow V_3\longrightarrow0.$$ Then the sequence of $T$-modules $$0\longrightarrow{{\rm Hom}}_{T}(V_3,W)\longrightarrow{{\rm Hom}}_{T}(V_2,W)\longrightarrow{{\rm Hom}}_{T}(V_1,W)\longrightarrow0$$ is exact.
It is easy to see that the sequence $$0\longrightarrow{{\rm Hom}}_{T}(V_3,W)\longrightarrow{{\rm Hom}}_{T}(V_2,W)\longrightarrow{{\rm Hom}}_{T}(V_1,W)$$ is exact. We will prove the surjectivity of the last map. Let $f$ be in ${{\rm Hom}}_{T}(V_1,W)$. We extend $f$ to a linear map $f_1$ from $V_2$ to $W$. We define another linear map $f_2$ from $V_2$ to $W$ by $$f_2(v)=\int\limits_{T/F^\times}t^{-1}\cdot f_1(t\cdot v)\,dt.$$ This is well-defined by the condition on the central characters, the compactness of $T/F^\times$, and the smoothness hypothesis. Evidently, $f_2$ is in ${{\rm Hom}}_{T}(V_2,W)$ and maps to a multiple of $f$.
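Explicitly, for $v$ in $V_1$ we have $$f_2(v)=\int\limits_{T/F^\times}t^{-1}\cdot f_1(t\cdot v)\,dt=\int\limits_{T/F^\times}t^{-1}\cdot t\cdot f(v)\,dt={\rm vol}(T/F^\times)\,f(v),$$ so that, after normalizing the Haar measure on $T/F^\times$, the restriction of $f_2$ to $V_1$ is $f$ itself.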
\[mainnonsupercuspidaltheorem\] The following table shows the Bessel functionals admitted by the irreducible, admissible, non-supercuspidal representations of ${{\rm GSp}}(4,F)$. The column “$L\leftrightarrow\xi$” indicates that the field $L$ is the quadratic extension of $F$ corresponding to the non-trivial, quadratic character $\xi$ of $F^\times$; this is only relevant for representations in groups V and IX. The pairs of characters $(\chi_1,\chi_2)$ in the “$L=F\times F$” column for types IIIb and IVc refer to the characters of $T=\{{\rm diag}(a,b,a,b):\:a,b\in F^\times\}$ given by ${\rm diag}(a,b,a,b)\mapsto\chi_1(a)\chi_2(b)$. In representations of group IX, the symbol $\mu$ denotes a non-Galois-invariant character of $L^\times$, where $L$ is the quadratic extension corresponding to $\xi$. The Galois conjugate of $\mu$ is denoted by $\mu'$. The irreducible, admissible, supercuspidal representation of ${{\rm GL}}(2,F)$ corresponding to $\mu$ is denoted by $\pi(\mu)$. Finally, the symbol ${{\rm N}}$ in the table stands for the norm map ${{\rm N}}_{L/F}$. In the split case, the character $\sigma\circ{{\rm N}}$ is the same as $(\sigma,\sigma)$. In the table, the phrase “all $\Lambda$” means all characters $\Lambda$ of $T$ whose restriction to $F^\times$ is the central character of the representation of ${{\rm GSp}}(4,F)$.
$$\renewcommand{\arraystretch}{1.09}\renewcommand{\arraycolsep}{0.07cm}
\begin{array}{cccccc}
\toprule
&&\text{representation}&
\multicolumn{3}{c}{(\Lambda,\theta)\text{-Bessel functional exists exactly for \ldots}}
\\
\cmidrule{4-6}
&&&L=F\times F&\multicolumn{2}{c}{L/F\text{ a field extension}}\\
\cmidrule{5-6}
&&&&L\leftrightarrow\xi&L\not\leftrightarrow\xi\\
\toprule
{\rm I}&& \chi_1 \times \chi_2 \rtimes \sigma\
\mathrm{(irreducible)}&\text{all }\Lambda&\multicolumn{2}{c}{\text{all }\Lambda}\\
\midrule
\mbox{II}&\mbox{a}&\chi {{\rm St}}_{{{\rm GL}}(2)} \rtimes \sigma&
\text{all }\Lambda&\multicolumn{2}{c}{\Lambda\neq(\chi\sigma)\circ{{\rm N}}}\\
\cmidrule{2-6}
&\mbox{b}&\chi {1}_{{{\rm GL}}(2)} \rtimes \sigma
&\Lambda=(\chi\sigma)\circ{{\rm N}}&\multicolumn{2}{c}{\Lambda=(\chi\sigma)\circ{{\rm N}}}\\
\midrule
\mbox{III}&\mbox{a}&\chi \rtimes \sigma {{\rm St}}_{{{\rm GSp}}(2)}&\text{all }\Lambda
&\multicolumn{2}{c}{\text{all }\Lambda}\\\cmidrule{2-6}
&\mbox{b}&\chi \rtimes \sigma {1}_{{{\rm GSp}}(2)}
&\Lambda\in\{(\chi\sigma,\sigma),(\sigma,\chi\sigma)\}&\multicolumn{2}{c}{\text{---}}\\
\midrule
\mbox{IV}&\mbox{a}&\sigma{{\rm St}}_{{{\rm GSp}}(4)}&\text{all }\Lambda&
\multicolumn{2}{c}{\Lambda\neq\sigma\circ{{\rm N}}}\\\cmidrule{2-6}
&\mbox{b}&L(\nu^2,\nu^{-1}\sigma{{\rm St}}_{{{\rm GSp}}(2)})&\Lambda=\sigma\circ{{\rm N}}&\multicolumn{2}{c}{\Lambda=\sigma\circ{{\rm N}}}\\\cmidrule{2-6}
&\mbox{c}&L(\nu^{3/2}{{\rm St}}_{{{\rm GL}}(2)},\nu^{-3/2}\sigma)
&\Lambda=(\nu^{\pm1}\sigma,\nu^{\mp1}\sigma)&\multicolumn{2}{c}{\text{---}}\\\cmidrule{2-6}
&\mbox{d}&\sigma{1}_{{{\rm GSp}}(4)}&\text{---}&\multicolumn{2}{c}{\text{---}}\\
\midrule
\mbox{V}&\mbox{a}&\delta([\xi,\nu \xi], \nu^{-1/2} \sigma)&\text{all }\Lambda
&\Lambda\neq\sigma\circ{{\rm N}}&\sigma\circ{{\rm N}}\neq\Lambda\neq(\xi\sigma)\circ{{\rm N}}\\\cmidrule{2-6}
&\mbox{b}&L(\nu^{1/2}\xi{{\rm St}}_{{{\rm GL}}(2)},\nu^{-1/2} \sigma)&\Lambda=\sigma\circ{{\rm N}}&\text{---}&\Lambda=\sigma\circ{{\rm N}}\\\cmidrule{2-6}
&\mbox{c}&L(\nu^{1/2}\xi{{\rm St}}_{{{\rm GL}}(2)},\xi\nu^{-1/2} \sigma)
&\Lambda=(\xi\sigma)\circ{{\rm N}}&\text{---}&\Lambda=(\xi\sigma)\circ{{\rm N}}\\\cmidrule{2-6}
&\mbox{d}&L(\nu\xi,\xi\rtimes\nu^{-1/2}\sigma)&\text{---}
&\Lambda=\sigma\circ{{\rm N}}&\text{---}\\
\midrule
\mbox{VI}&\mbox{a}&\tau(S, \nu^{-1/2}\sigma)&\text{all }\Lambda
&\multicolumn{2}{c}{\Lambda\neq\sigma\circ{{\rm N}}}\\\cmidrule{2-6}
&\mbox{b}&\tau(T, \nu^{-1/2}\sigma)&\text{---}&\multicolumn{2}{c}{\Lambda=\sigma\circ{{\rm N}}}\\\cmidrule{2-6}
&\mbox{c}&L(\nu^{1/2}{{\rm St}}_{{{\rm GL}}(2)},\nu^{-1/2}\sigma)
&\Lambda=\sigma\circ{{\rm N}}&\multicolumn{2}{c}{\text{---}}\\\cmidrule{2-6}
&\mbox{d}&L(\nu,1_{F^\times}\rtimes\nu^{-1/2}\sigma)&
\Lambda=\sigma\circ{{\rm N}}&\multicolumn{2}{c}{\text{---}}\\
\toprule
\mbox{VII}&&\chi \rtimes \pi&\text{all }\Lambda
&\multicolumn{2}{c}{\text{all }\Lambda}\\\midrule
\mbox{VIII}&\mbox{a}&\tau(S, \pi)&\text{all }\Lambda&\multicolumn{2}{c}{{\rm Hom}_T(\pi,{{\mathbb C}}_\Lambda)\neq0}\\\cmidrule{2-6}
&\mbox{b}&\tau(T, \pi)
&\text{---}&\multicolumn{2}{c}{{\rm Hom}_T(\pi,{{\mathbb C}}_\Lambda)=0}\\
\midrule
\mbox{IX}&\mbox{a}&\delta(\nu\xi,\nu^{-1/2}\pi(\mu))&\text{all }\Lambda
&\mu\neq\Lambda\neq\mu'&\text{all }\Lambda\\\cmidrule{2-6}
&\mbox{b}&L(\nu\xi,\nu^{-1/2}\pi(\mu))
&\text{---}&\Lambda=\mu\text{ or }\Lambda=\mu'&\text{---}\\
\toprule
\mbox{X}&&\pi \rtimes \sigma&\text{all }\Lambda
&\multicolumn{2}{c}{{\rm Hom}_T(\sigma\pi,{{\mathbb C}}_\Lambda)\neq0}\\
\midrule
\mbox{XI}&\mbox{a}&\delta (\nu^{1/2}\pi,\nu^{-1/2}\sigma)&\text{all }\Lambda&\multicolumn{2}{c}{\Lambda\neq\sigma\circ{{\rm N}}\:\text{ and }{\rm Hom}_T(\sigma\pi,{{\mathbb C}}_{\Lambda})\neq0}\\
\cmidrule{2-6}
&\mbox{b}&L(\nu^{1/2}\pi,\nu^{-1/2}\sigma)&\Lambda=\sigma\circ{{\rm N}}&\multicolumn{2}{c}{\Lambda=\sigma\circ{{\rm N}}\:\text{ and } {\rm Hom}_T(\pi,{{\mathbb C}}_1)\neq0}\\
\toprule
\mbox{Va$^*$}&&\delta^*([\xi,\nu\xi],\nu^{-1/2}\sigma)&\text{---}&\Lambda=\sigma\circ{{\rm N}}&\text{---}\\
\midrule
\mbox{XIa$^*$}&&\delta^*(\nu^{1/2}\pi,\nu^{-1/2}\sigma)&\text{---}&\multicolumn{2}{c}{\Lambda=\sigma\circ{{\rm N}}\:\text{ and }{\rm Hom}_T(\pi^{\mathrm{JL}},{{\mathbb C}}_1)\neq0}\\
\toprule
\end{array}$$
We will go through all representations in the table and explain how the statements follow from our preparatory sections.
: This follows from Proposition \[GSp4genericprop\] and Lemma \[Klingendegjacquetlemma1\].
: In the split case this follows from Proposition \[GSp4genericprop\]. In the non-split case it follows from Lemma \[siegelinducedbesselwaldspurgerlemma\] together with (\[StGL2Waldspurgereq2\]).
: This follows from Lemma \[siegelinducedbesselwaldspurgerlemma\]; see .
: This follows from Proposition \[GSp4genericprop\] and Lemma \[Klingendegjacquetlemma1\].
: It follows from Lemma \[Klingendegjacquetlemma1\] that IIIb type representations have no non-split Bessel functionals. The split case follows from either Proposition \[nongenericsplitproposition\] or i) of Lemma \[Klingendegjacquetlemma\]. Note that the characters $(\chi\sigma,\sigma)$ and $(\sigma,\chi\sigma)$ are Galois conjugates of each other.
: It is easy to see that the twisted Jacquet modules of the trivial representation are zero.
: By (2.9) of [@NF] there is a short exact sequence $$0\longrightarrow{\rm IVb}\longrightarrow\nu^{3/2}1_{{{\rm GL}}(2)}\rtimes\nu^{-3/2}\sigma
\longrightarrow\sigma1_{{{\rm GSp}}(4)}\longrightarrow0.$$ Taking twisted Jacquet modules and observing , we get $$({\rm IVb})_{N,\theta}\cong(\nu^{3/2}1_{{{\rm GL}}(2)}\rtimes\nu^{-3/2}\sigma)_{N,\theta}={{\mathbb C}}_{\sigma\circ {{\rm N}}_{L/F}}$$ as $T$-modules.
: By (2.9) of [@NF] there is a short exact sequence $$0\longrightarrow{\rm IVc}\longrightarrow\nu^2\rtimes\nu^{-1}\sigma1_{{{\rm GSp}}(2)}
\longrightarrow\sigma1_{{{\rm GSp}}(4)}\longrightarrow0.$$ Taking twisted Jacquet modules gives $$({\rm IVc})_{N,\theta}\cong(\nu^2\rtimes\nu^{-1}\sigma1_{{{\rm GSp}}(2)})_{N,\theta}.$$ Hence IVc admits the same Bessel functionals as the full induced representation $\nu^2\rtimes\nu^{-1}\sigma1_{{{\rm GSp}}(2)}$. By Lemma \[Klingendegjacquetlemma1\], any such Bessel functional is necessarily split. Assume that $\theta$ is as in (\[splitthetaeq\]). Then, using Lemma \[Klingendegjacquetlemma\], it follows that IVc admits the $(\Lambda,\theta)$-Bessel functional for $$\label{IVcpossibleLambdaeq}
\Lambda(\begin{bmatrix}a\\&b\\&&a\\&&&b\end{bmatrix})=\nu(ab^{-1})\sigma(ab),$$ which we write as $(\nu\sigma,\nu^{-1}\sigma)$. By , IVc also admits a $(\Lambda,\theta)$-Bessel functional for $\Lambda=(\nu^{-1}\sigma,\nu\sigma)$. Again by Lemma \[Klingendegjacquetlemma\], IVc does not admit a $(\Lambda,\theta)$-Bessel functional for any other $\Lambda$.
: In the split case this follows from Proposition \[GSp4genericprop\]. Assume $\theta$ is non-split. By (2.9) of [@NF], there is an exact sequence $$0\longrightarrow\sigma{{\rm St}}_{{{\rm GSp}}(4)}\longrightarrow\nu^2\rtimes\nu^{-1}\sigma{{\rm St}}_{{{\rm GSp}}(2)}\longrightarrow {\rm IVb}\longrightarrow0.$$ Taking Jacquet modules, we get $$0\longrightarrow(\sigma{{\rm St}}_{{{\rm GSp}}(4)})_{N,\theta}\longrightarrow(\nu^2\rtimes\nu^{-1}\sigma{{\rm St}}_{{{\rm GSp}}(2)})_{N,\theta}\longrightarrow ({\rm IVb})_{N,\theta}\longrightarrow0.$$ Keeping in mind Lemma \[Kcompactexactlemma\], the result now follows from Lemma \[Klingendegjacquetlemma1\] and the result for IVb.
: This was proved in Corollary \[bsummary\].
: Let $\xi$ be a non-trivial quadratic character of $F^\times$. By (2.10) of [@NF], there are exact sequences $$0\longrightarrow{\rm Vb}\longrightarrow\nu^{1/2}\xi{1}_{{{\rm GL}}(2)}\rtimes\xi\nu^{-1/2}\sigma
\longrightarrow{\rm Vd}\longrightarrow0$$ and $$0\longrightarrow{\rm Vc}\longrightarrow\nu^{1/2}\xi{1}_{{{\rm GL}}(2)}\rtimes\nu^{-1/2}\sigma
\longrightarrow{\rm Vd}\longrightarrow0.$$ Taking Jacquet modules and observing (\[siegelinducedonedimjacqueteq\]), we get $$\label{VdBesseleq1}
0\longrightarrow({\rm Vb})_{N,\theta}\longrightarrow{{\mathbb C}}_{\sigma\circ {{\rm N}}_{L/F}}
\longrightarrow({\rm Vd})_{N,\theta}\longrightarrow0$$ and $$\label{VdBesseleq2}
0\longrightarrow({\rm Vc})_{N,\theta}\longrightarrow{{\mathbb C}}_{(\xi\sigma)\circ {{\rm N}}_{L/F}}
\longrightarrow({\rm Vd})_{N,\theta}\longrightarrow0.$$ Hence the results for Vb and Vc follow from the result for Vd.
: In the split case this follows from Proposition \[GSp4genericprop\]. Assume $\theta$ is non-split. Assume first that $\xi$ corresponds to the quadratic extension $L/F$. As we just saw, $({\rm Vb})_{N,\theta}=0$ in this case. By (2.10) of [@NF], there is an exact sequence $$0\longrightarrow{\rm Va}\longrightarrow\nu^{1/2}\xi{{\rm St}}_{{{\rm GL}}(2)}\rtimes\nu^{-1/2}\sigma\longrightarrow{\rm Vb}\longrightarrow0.$$ Taking Jacquet modules, it follows that $$({\rm Va})_{N,\theta}=(\nu^{1/2}\xi{{\rm St}}_{{{\rm GL}}(2)}\rtimes\nu^{-1/2}\sigma)_{N,\theta}.$$ By Lemma \[siegelinducedbesselwaldspurgerlemma\], the space of $(\Lambda,\theta)$-Bessel functionals on the representation Va is isomorphic to ${\rm Hom}_T(\sigma\xi{{\rm St}}_{{{\rm GL}}(2)},{{\mathbb C}}_\Lambda)$. Using (\[StGL2Waldspurgereq2\]), it follows that Va admits a $(\Lambda,\theta)$-Bessel functional if and only if $\Lambda\neq(\sigma\xi)\circ {{\rm N}}_{L/F}=\sigma\circ {{\rm N}}_{L/F}$.
Now assume that $\xi$ does not correspond to the quadratic extension $L/F$. Then, by what we already proved for Vb, we have an exact sequence $$\label{VdBesseleq5}
0\longrightarrow({\rm Va})_{N,\theta}\longrightarrow
(\nu^{1/2}\xi{{\rm St}}_{{{\rm GL}}(2)}\rtimes\nu^{-1/2}\sigma)_{N,\theta}
\longrightarrow{{\mathbb C}}_{\sigma\circ {{\rm N}}_{L/F}}\longrightarrow0.$$ Using Lemma \[Kcompactexactlemma\], it follows that the possible characters $\Lambda$ for Va are those of $(\nu^{1/2}\xi{{\rm St}}_{{{\rm GL}}(2)}\rtimes\nu^{-1/2}\sigma)_{N,\theta}$ with the exception of $\sigma\circ {{\rm N}}_{L/F}$. By Lemma \[siegelinducedbesselwaldspurgerlemma\] and (\[StGL2Waldspurgereq2\]), these are all characters other than $\sigma\circ {{\rm N}}_{L/F}$ and $(\xi\sigma)\circ {{\rm N}}_{L/F}$.
: By (2.11) of [@NF], there is an exact sequence $$0\longrightarrow{\rm VIc}\longrightarrow{1}_{F^\times}\rtimes\sigma{1}_{{{\rm GSp}}(2)}
\longrightarrow{\rm VId}\longrightarrow0.$$ It follows from Lemma \[Klingendegjacquetlemma1\] that VIc and VId have no non-split Bessel functionals. The split case follows from Proposition \[nongenericsplitproposition\].
: In the split case this follows from Proposition \[GSp4genericprop\]. Assume that $\theta$ is non-split. By (2.11) of [@NF], there is an exact sequence $$\label{VIcexacteq1}
0\longrightarrow{\rm VIa}\longrightarrow
\nu^{1/2}{{\rm St}}_{{{\rm GL}}(2)}\rtimes\nu^{-1/2}\sigma
\longrightarrow{\rm VIc}\longrightarrow0.$$ Taking Jacquet modules and observing the result for VIc, we get $({\rm VIa})_{N,\theta}=(\nu^{1/2}{{\rm St}}_{{{\rm GL}}(2)}\rtimes\nu^{-1/2}\sigma)_{N,\theta}$. Hence the result follows from Lemma \[siegelinducedbesselwaldspurgerlemma\] and (\[StGL2Waldspurgereq2\]).
: By (2.11) of [@NF], there is an exact sequence $$\label{VIdBesseleq3}
0\longrightarrow({\rm VIb})_{N,\theta}\longrightarrow
(\nu^{1/2}{1}_{{{\rm GL}}(2)}\rtimes\nu^{-1/2}\sigma)_{N,\theta}
\longrightarrow({\rm VId})_{N,\theta}\longrightarrow0.$$ By (\[siegelinducedonedimjacqueteq\]), the middle term equals ${{\mathbb C}}_{\sigma\circ {{\rm N}}_{L/F}}$. One-dimensionality implies that the sequence splits, so that $$\label{VIdBesseleq4}
{\rm Hom}_T({{\mathbb C}}_{\sigma\circ {{\rm N}}_{L/F}},{{\mathbb C}}_\Lambda)=
{\rm Hom}_D({\rm VIb},{{\mathbb C}}_{\Lambda\otimes\theta})\oplus{\rm Hom}_D({\rm VId},{{\mathbb C}}_{\Lambda\otimes\theta})$$ ($D$ is the Bessel subgroup defined in ). Hence the VIb case follows from the known result for VId.
: This follows from Proposition \[GSp4genericprop\] and Lemma \[Klingendegjacquetlemma1\].
: In the split case this follows from Proposition \[GSp4genericprop\] and v) of Proposition \[nongenericsplitproposition\]. Assume that $\theta$ is non-split. Since we are in a unitarizable situation, the sequence $$0\longrightarrow{\rm VIIIa}\longrightarrow
1_{F^\times}\rtimes\pi\longrightarrow
{\rm VIIIb}\longrightarrow0$$ splits. It follows that $$\label{VIIIbBesseleq1}
{{\rm Hom}}_D(1_{F^\times}\rtimes\pi,{{\mathbb C}}_{\Lambda\otimes\theta})
={{\rm Hom}}_D({\rm VIIIa},{{\mathbb C}}_{\Lambda\otimes\theta})\oplus
{{\rm Hom}}_D({\rm VIIIb},{{\mathbb C}}_{\Lambda\otimes\theta}).$$ By Lemma \[Klingendegjacquetlemma1\], the space on the left is one-dimensional for any $\Lambda$. Therefore the Bessel functionals of VIIIb are complementary to those of VIIIa.
Assume that VIIIa admits a $(\Lambda,\theta)$-Bessel functional. Then, by Corollary \[fourdimthetatheoremcor1\] and Theorem \[Ganthetatheorem\], we have ${{\rm Hom}}_T(\pi,{{\mathbb C}}_\Lambda)\neq0$. Conversely, assume that ${{\rm Hom}}_T(\pi,{{\mathbb C}}_\Lambda)\neq0$ and assume that VIIIa does not admit a $(\Lambda,\theta)$-Bessel functional; we will obtain a contradiction. By , we have ${{\rm Hom}}_D({\rm VIIIb},{{\mathbb C}}_{\Lambda\otimes\theta})\neq0$. By Corollary \[fourdimthetatheoremcor1\] and Theorem \[Ganthetatheorem\], we have ${{\rm Hom}}_T(\pi^{\mathrm{JL}},{{\mathbb C}}_\Lambda)\neq0$. This contradicts .
The result for VIIIb now follows from .
: This was proved in Corollary \[bsummary\].
: In the split case this follows from Proposition \[GSp4genericprop\]. Assume that $\theta$ is non-split. We have an exact sequence $$0\longrightarrow{\rm IXa}\longrightarrow\nu\xi\rtimes\nu^{-1/2}\pi\longrightarrow{\rm IXb}\longrightarrow0.$$ By Lemma \[Klingendegjacquetlemma1\], the space ${\rm Hom}_D(\nu\xi\rtimes\nu^{-1/2}\pi,{{\mathbb C}}_{\Lambda\otimes\theta})$ is one-dimensional, for any character $\Lambda$ of $L^\times$ satisfying the central character condition. It follows that the possible Bessel functionals of IXa are complementary to those of IXb.
: In the split case this follows from Proposition \[GSp4genericprop\]. In the non-split case it follows from Lemma \[siegelinducedbesselwaldspurgerlemma\].
: In the split case this follows from Proposition \[GSp4genericprop\] and Proposition \[nongenericsplitproposition\]; note that the $V_1/V_2$ quotient of XIb equals $\tau_{{{\rm GL}}(1)}^{P_3}(\nu\sigma)$ by Table A.6 of [@NF]. Assume that $L/F$ is not split, and consider the exact sequence $$\label{XIaBesseleq1}
0\longrightarrow({\rm XIa})_{N,\theta}\longrightarrow
(\nu^{1/2}\pi\rtimes\nu^{-1/2}\sigma)_{N,\theta}\longrightarrow
({\rm XIb})_{N,\theta}\longrightarrow0.$$ It follows from Lemma \[Kcompactexactlemma\] that $$\label{XIaBesseleq2}
{{\rm Hom}}_D(\nu^{1/2}\pi\rtimes\nu^{-1/2}\sigma,{{\mathbb C}}_{\Lambda\otimes\theta})
={{\rm Hom}}_D({\rm XIa},{{\mathbb C}}_{\Lambda\otimes\theta})\oplus
{{\rm Hom}}_D({\rm XIb},{{\mathbb C}}_{\Lambda\otimes\theta}).$$ Observe here that, by Lemma \[siegelinducedbesselwaldspurgerlemma\], the left side equals ${\rm Hom}_T(\sigma\pi,{{\mathbb C}}_\Lambda)$, which is at most one-dimensional.
Assume that the representation XIa admits a $(\Lambda,\theta)$-Bessel functional. Then $\Lambda\neq\sigma\circ{{\rm N}}_{L/F}$ and ${{\rm Hom}}_T(\sigma\pi,{{\mathbb C}}_{\Lambda})\neq0$ by Corollary \[fourdimthetatheoremcor1\] and Theorem \[Ganthetatheorem\]. Conversely, assume that $\Lambda\neq\sigma\circ{{\rm N}}_{L/F}$ and ${{\rm Hom}}_T(\sigma\pi,{{\mathbb C}}_{\Lambda})\neq0$. Assume also that XIa does not admit a $(\Lambda,\theta)$-Bessel functional; we will obtain a contradiction. By the one-dimensionality of the space on the left hand side of , we have ${{\rm Hom}}_D({\rm XIb},{{\mathbb C}}_{\Lambda\otimes\theta})\neq0$. By Corollary \[fourdimthetatheoremcor1\] and Theorem \[Ganthetatheorem\], we conclude $\Lambda=\sigma\circ{{\rm N}}_{L/F}$, contradicting our assumption.
Assume that the representation XIb admits a $(\Lambda,\theta)$-Bessel functional. Then $\Lambda=\sigma\circ{{\rm N}}_{L/F}$ and ${{\rm Hom}}_T(\pi,{{\mathbb C}}_1)\neq0$ by Corollary \[fourdimthetatheoremcor1\] and Theorem \[Ganthetatheorem\]. Conversely, assume that $\Lambda=\sigma\circ{{\rm N}}_{L/F}$ and ${{\rm Hom}}_T(\pi,{{\mathbb C}}_1)\neq0$. Assume also that XIb does not admit a $(\Lambda,\theta)$-Bessel functional; we will obtain a contradiction. By our assumption, the space on the left hand side of is one-dimensional. Hence ${{\rm Hom}}_D({\rm XIa},{{\mathbb C}}_{\Lambda\otimes\theta})\neq0$. By what we have already proven, this implies $\Lambda\neq\sigma\circ{{\rm N}}_{L/F}$, a contradiction.
: This was proved in Corollary \[bsummary\].
: By Proposition \[nongenericsplitproposition\], the representation XIa$^*$ has no split Bessel functionals. Assume that $\theta$ is non-split. By Corollary \[fourdimthetatheoremcor1\] and Theorem \[Ganthetatheorem\], if XIa$^*$ admits a $(\Lambda,\theta)$-Bessel functional, then $\Lambda=\sigma\circ{{\rm N}}$ and ${{\rm Hom}}_T(\pi^{\mathrm{JL}},{{\mathbb C}}_1)\neq0$. Conversely, assume that $\Lambda=\sigma\circ{{\rm N}}$ and ${{\rm Hom}}_T(\pi^{\mathrm{JL}},{{\mathbb C}}_1)\neq0$. By Corollary \[Vastarlemma\], the twisted Jacquet module $\delta^*(\nu^{1/2}\pi,\nu^{-1/2}\sigma)_{N,\theta}$ is one-dimensional. Therefore, XIa$^*$ does admit a $(\Lambda',\theta)$-Bessel functional for some $\Lambda'$. By what we already proved, $\Lambda'=\Lambda$.
This concludes the proof.
Some cases of uniqueness {#uniquenesssec}
------------------------
Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$. Using the notations from Sect. \[besselsec\], consider $(\Lambda,\theta)$-Bessel functionals for $\pi$. We say that such functionals are *unique* if the dimension of the space ${{\rm Hom}}_D(V,{{\mathbb C}}_{\Lambda\otimes\theta})$ is at most $1$. In this section we will prove the uniqueness of split Bessel functionals for all representations, and the uniqueness of non-split Bessel functionals for all non-supercuspidal representations.
As far as we know, a complete proof that Bessel functionals are unique for all $(\Lambda,\theta)$ and all representations $\pi$ has not yet appeared in the literature. In [@NovoPia1973] it is proved that $(1,\theta)$-Bessel functionals are unique if $\pi$ has trivial central character. The main ingredient for this proof is Theorem 1’ of [@GelfandKazhdan1975]. In [@Novodvorski1973] it is proved that $(\Lambda,\theta)$-Bessel functionals are unique if $\pi$ has trivial central character. The proof is based on a generalization of Theorem 1’ of [@GelfandKazhdan1975]. In [@Rodier1976] it is stated, without proof, that $(\Lambda,\theta)$-Bessel functionals are unique if $\pi$ is supercuspidal and has trivial central character.
\[splituniquenesslemma\] Let $\sigma_1$ be a character of $F^\times$, and let $(\pi_1,V_1)$ be an irreducible, admissible representation of ${{\rm GL}}(2,F)$. Let the matrix $S$ be as in , and $\theta$ be as in . The resulting group $T$ is then given by . Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$. Assume there is an exact sequence $$\label{splituniquenesslemmaeq}
\pi_1\rtimes\sigma_1\longrightarrow\pi\longrightarrow0.$$ Let $\Lambda$ be a character of $T$. If $\Lambda$ is not equal to one of the characters $\Lambda_1$ or $\Lambda_2$, given by $$\begin{aligned}
\label{splituniquenesslemmaeq2}\Lambda_1({\rm diag}(a,b,a,b))&=\nu^{1/2}(a)\nu^{-1/2}(b)\sigma_1(ab)\omega_{\pi_1}(a),\\
\label{splituniquenesslemmaeq3}\Lambda_2({\rm diag}(a,b,a,b))&=\nu^{-1/2}(a)\nu^{1/2}(b)\sigma_1(ab)\omega_{\pi_1}(b),
\end{aligned}$$ then $(\Lambda,\theta)$-Bessel functionals are unique.
Since $\pi$ is a quotient of $\pi_1\rtimes\sigma_1$, it suffices to prove that ${{\rm Hom}}_D(\pi_1\rtimes\sigma_1,{{\mathbb C}}_{\Lambda\otimes\theta})$ is at most one-dimensional. Any element $\beta$ of this space factors through the Jacquet module $(\pi_1\rtimes\sigma_1)_{N,\theta}$. These Jacquet modules were calculated in Lemma \[siegelinducedbesselwaldspurgerlemma\] ii). Using the notation of this lemma, the assumption about $\Lambda$ implies that restriction of $\beta$ to $J_2$ establishes an injection $${{\rm Hom}}_D(\pi_1\rtimes\sigma_1,{{\mathbb C}}_{\Lambda\otimes\theta})\longrightarrow{{\rm Hom}}_{\left[\begin{smallmatrix} *\\&*\end{smallmatrix}\right]}(\sigma_1\pi_1,{{\mathbb C}}_\Lambda).$$ The space on the right is at most one-dimensional; see Sect. \[siegelindsec\]. This proves our statement.
\[splituniquenesstheorem\] Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$.
1. Split Bessel functionals for $\pi$ are unique.
2. Non-split Bessel functionals for $\pi$ are unique, if $\pi$ is not supercuspidal, or if $\pi$ is of type $\mathrm{Va^*}$ or $\mathrm{XIa^*}$.
i\) By Proposition \[nongenericsplitproposition\], we may assume that $\pi$ is generic. Let the matrix $S$ be as in , and $\theta$ be as in . The resulting group $T$ is then given by . Let $\Lambda$ be a character of $T$. We use the fact that any $(\Lambda,\theta)$-Bessel functional $\beta$ on $V$ factors through the $P_3$-module $V_{Z^J}$.
Assume that $\pi$ is supercuspidal. Then, by Theorem \[finitelength\], the associated $P_3$-module $V_{Z^J}$ equals $\tau_{{{\rm GL}}(0)}^{P_3}(1)$. Therefore, the space of $(\Lambda,\theta)$-Bessel functionals on $V$ equals the space of linear functionals considered in Lemma 2.5.4 of [@NF]. By this lemma, this space is one-dimensional.
Now assume that $\pi$ is non-supercuspidal. As in the proof of Proposition \[nongenericsplitproposition\], we write the semisimplification of the quotient $V_1/V_2$ in the $P_3$-filtration as $\sum_{i=1}^n\tau^{P_3}_{{{\rm GL}}(1)}(\chi_i)$ with characters $\chi_i$ of $F^\times$. Let $C(\pi)$ be the set of characters $\chi_i$. Proposition 2.5.7 of [@NF] states that if the character $a\mapsto\Lambda({\rm diag}(a,1,a,1))$ is not contained in the set $\nu^{-1}C(\pi)$, then the space of $(\Lambda,\theta)$-Bessel functionals is at most one-dimensional (note that the arguments in the proof of this proposition do not require the hypothesis of trivial central character). The table below lists the sets $\nu^{-1}C(\pi)$ for all generic non-supercuspidal representations. This table implies that $(\Lambda,\theta)$-Bessel functionals for types VII, VIIIa and IXa are unique.
Assume that $\pi$ is not one of these types. Then there exists a sequence as in for some irreducible, admissible representation $\pi_1$ of ${{\rm GL}}(2,F)$ and some character $\sigma_1$ of $F^\times$. These $\pi_1$ and $\sigma_1$ are listed in the table below. Let $\Lambda_1$, $\Lambda_2$ be the characters defined in and . Note that, since $\Lambda_1$ and $\Lambda_2$ are Galois conjugate, we have $$\label{splituniquenesstheoremeq1}
\dim{{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda_1\otimes\theta})=\dim{{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda_2\otimes\theta})$$ by . By Lemma \[splituniquenesslemma\], it suffices to prove that these spaces are one-dimensional. Define characters $\lambda_1,\lambda_2$ of $F^\times$ by $$\begin{aligned}
\lambda_1(a)&=\Lambda_1({\rm diag}(a,1,a,1))=\nu^{1/2}(a)\sigma_1(a)\omega_{\pi_1}(a),\\
\lambda_2(a)&=\Lambda_2({\rm diag}(a,1,a,1))=\nu^{-1/2}(a)\sigma_1(a).\end{aligned}$$ The set $\{\lambda_1,\lambda_2\}$ is listed in the table below for each representation. By the previous paragraph, the spaces are one-dimensional if $\{\lambda_1,\lambda_2\}$ is not a subset of $\nu^{-1}C(\pi)$. This can easily be verified using the table below.
$$\begin{array}{ccccc}
\toprule
\pi&\pi_1&\sigma_1&\{\lambda_1,\lambda_2\}&\nu^{-1}C(\pi)\\
\toprule
\text{I}&\chi_1\times\chi_2&\sigma&\{\nu^{1/2}\chi_1\chi_2\sigma,\nu^{-1/2}\sigma\}&\{\nu^{1/2}\chi_1\chi_2\sigma,\nu^{1/2}\chi_1\sigma,\nu^{1/2}\chi_2\sigma,\nu^{1/2}\sigma\}\\
\midrule
\text{IIa}&\chi{{\rm St}}_{{{\rm GL}}(2)}&\sigma&\{\nu^{1/2}\chi^2\sigma,\nu^{-1/2}\sigma\}&\{\nu^{1/2}\chi^2\sigma,\nu^{1/2}\sigma,\nu\chi\sigma\}\\
\midrule
\text{IIIa}&\chi^{-1}\times\nu^{-1}&\nu^{1/2}\chi\sigma&\{\sigma,\chi\sigma\}&\{\nu\chi\sigma,\nu\sigma\}\\
\midrule
\text{IVa}&\nu^{-3/2}{{\rm St}}_{{{\rm GL}}(2)}&\nu^{3/2}\sigma&\{\nu^{-1}\sigma,\nu\sigma\}&\{\nu^2\sigma\}\\
\midrule
\text{Va}&\nu^{-1/2}\xi{{\rm St}}_{{{\rm GL}}(2)}&\nu^{1/2}\xi\sigma&\{\xi\sigma\}&\{\nu\sigma,\nu\xi\sigma\}\\
\midrule
\text{VIa}&\nu^{-1/2}{{\rm St}}_{{{\rm GL}}(2)}&\nu^{1/2}\sigma&\{\sigma\}&\{\nu\sigma\}\\
\midrule
\text{VII}&\text{---}&\text{---}&\text{---}&{\varnothing}\\
\midrule
\text{VIIIa}&\text{---}&\text{---}&\text{---}&{\varnothing}\\
\midrule
\text{IXa}&\text{---}&\text{---}&\text{---}&{\varnothing}\\
\midrule
\text{X}&\pi&\sigma&\{\nu^{1/2}\omega_\pi\sigma,\nu^{-1/2}\sigma\}&\{\nu^{1/2}\omega_\pi\sigma,\nu^{1/2}\sigma\}\\
\midrule
\text{XIa}&\nu^{-1/2}\pi&\nu^{1/2}\sigma&\{\sigma\}&\{\nu\sigma\}\\
\bottomrule
\end{array}$$
ii\) Assume first that $\pi$ is not supercuspidal. Then there exist an irreducible, admissible representation $\pi_1$ of ${{\rm GL}}(2,F)$ and a character $\sigma$ of $F^\times$ such that $\pi$ is either a quotient of $\pi_1\rtimes\sigma$, or a quotient of $\sigma\rtimes\pi_1$. The assertion of ii) now follows from i) of Lemma \[siegelinducedbesselwaldspurgerlemma\] and Lemma \[Klingendegjacquetlemma1\].
Now assume that $\pi=\delta^*([\xi,\nu\xi],\nu^{-1/2}\sigma)$ is of type Va$^*$. Suppose that ${{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda\otimes\theta})$ is non-zero for some $\theta$ and $\Lambda$, with $L$ being a field. By our main result Theorem \[mainnonsupercuspidaltheorem\], the quadratic extension $L$ is the field corresponding to $\xi$ and $\Lambda=\sigma\circ{{\rm N}}_{L/F}$. By Corollary \[Vastarlemma\], the Jacquet module $\pi_{N,\theta}$ is one-dimensional. This implies that ${{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda\otimes\theta})$ is one-dimensional.
Finally, assume that $\pi=\delta^*(\nu^{1/2}\pi,\nu^{-1/2}\sigma)$ is of type XIa$^*$. Suppose that ${{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda\otimes\theta})$ is non-zero for some $\theta$ and $\Lambda$, with $L$ being a field. By our main result Theorem \[mainnonsupercuspidaltheorem\], we have $\Lambda=\sigma\circ{{\rm N}}_{L/F}$ and ${{\rm Hom}}_T(\pi^{\mathrm{JL}},{{\mathbb C}}_1)\neq0$. By Corollary \[Vastarlemma\], the Jacquet module $\pi_{N,\theta}$ is one-dimensional. This implies that ${{\rm Hom}}_D(\pi,{{\mathbb C}}_{\Lambda\otimes\theta})$ is one-dimensional.
Some applications
=================
We present two applications that result from the methods used in this paper. The first application is a characterization of irreducible, admissible, non-generic representations of ${{\rm GSp}}(4,F)$ in terms of their twisted Jacquet modules and their Fourier-Jacobi quotient. The second application concerns the existence of certain vectors with good invariance properties.
Characterizations of non-generic representations
------------------------------------------------
As before, we fix a non-trivial character $\psi$ of $F$.
\[ZJfinitelengthlemma\] Let $(\pi,V)$ be a non-generic, supercuspidal, irreducible, admissible representation of ${{\rm GSp}}(4,F)$. Then $\dim V_{N,\theta}<\infty$ for all non-degenerate $\theta$.
If $\theta$ is split, then $V_{N,\theta}=0$ by Theorem \[finitelength\]. Assume that $\theta$ is not split. Let $\theta=\theta_S$ with $S$ as in . We may assume that $\dim V_{N,\theta}\neq0$. Let $X$ be as in . By Theorem 5.6 of [@GaTa2011], there exists an irreducible, admissible representation $\sigma$ of ${{\rm GO}}(X)$ such that ${{\rm Hom}}_R(\omega,\pi\otimes\sigma)\neq0$; here, $\omega$ is the Weil representation defined in Sect. \[thetabesselsubsec\]. By i) of Theorem \[fourdimthetatheorem\], the set $\Omega_S$ is non-empty. By Proposition \[scdimprop\], the dimension of $V_{N,\theta}$ is finite.
Let $W$ be a smooth representation of $N$. We will consider the dimensions of the complex vector spaces $W_{N, \theta_{a,b,c}}$. Fix representatives $a_1,\dots, a_t$ for $F^\times / F^{\times 2}$. We define $$d(W)=\sum_{i=1}^t \dim W_{N, \theta_{a_i,0,1}}.$$ If $0=W_0 \subset W_1 \subset W_2 \subset \dots \subset W_k=W$ is a chain of $N$-subspaces, then $$\label{dWsumeq}
d(W) = \sum_{j=1}^k d(W_j/W_{j-1}).$$ If one of the spaces $W_{N, \theta_{a_i,0,1}}$ is infinite-dimensional, then this equality still holds in the sense that both sides are infinite.
\[detectlemma\] Let $W^J$ be a non-zero, irreducible, smooth representation of $G^J$ admitting $\psi$ as a central character. Then $1\leq d(W^J) \leq\#F^\times/F^{\times2}$.
This follows immediately from Lemma \[GJWhitlemma\].
\[kdboundlemma\] Let $(\tau^J,W^J)$ be a smooth representation of $G^J$. Then $W^J$ has finite length if and only if $d(W^J)$ is finite. If it has finite length, then $${\rm length}(W^J)\leq d(W^J)\leq{\rm length}(W^J)\cdot\#F^\times/F^{\times2}.$$
Assume that $W^J$ has finite length. Let $$0=W_0 \subset W_1 \subset W_2 \subset \dots \subset W_k=W^J$$ be a chain of $G^J$-subspaces such that each quotient $W_j/W_{j-1}$ is non-zero and irreducible. By , we have $$d(W^J)=\sum_{j=1}^k d(W_j/W_{j-1}).$$ By Lemma \[detectlemma\], $1\leq d(W_j/W_{j-1}) \leq\#F^\times/F^{\times2}$ for $j=1,\dots,k$. It follows that $d(W^J)$ is finite, and that the asserted inequalities hold.
If $W^J$ has infinite length, a similar argument shows that $d(W^J)$ is infinite.
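For instance, if the residue characteristic of $F$ is odd (an assumption made here purely for illustration), then $\#F^\times/F^{\times2}=4$, and the bounds read $${\rm length}(W^J)\leq d(W^J)\leq 4\cdot{\rm length}(W^J).$$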
\[nongenchartheorem\] Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$. The following statements are equivalent.
1. $\pi$ is not generic.
2. $\dim V_{N,\theta}<\infty$ for all split $\theta$.
3. $\dim V_{N,\theta}<\infty$ for all non-degenerate $\theta$.
4. The $G^J$-representation $V_{Z^J,\psi}$ has finite length.
i\) $\Rightarrow$ iii) Assume that $\pi$ is not generic. Let $\theta$ be a non-degenerate character of $N$. Assume first that $\theta$ is split. Then $V_{N,\theta}$ can be calculated from the $P_3$-filtration of $\pi$. As in the proof of Proposition \[nongenericsplitproposition\] we see that $V_{N,\theta}$ is finite-dimensional.
Now assume that $\theta$ is not split. If $\pi$ is supercuspidal, then $\dim V_{N,\theta}<\infty$ by Lemma \[ZJfinitelengthlemma\]. Assume that $\pi$ is not supercuspidal. Then the table of Bessel functionals shows that $\pi$ admits $(\Lambda,\theta)$-Bessel functionals only for finitely many $\Lambda$. Since every $\Lambda$ can occur in $V_{N,\theta}$ at most once by the uniqueness of Bessel functionals (Theorem \[splituniquenesstheorem\]), this implies that $V_{N,\theta}$ is finite-dimensional.
iii\) $\Rightarrow$ ii) is trivial.
ii\) $\Rightarrow$ i) Assume that $\pi$ is generic. Then the subspace $V_2$ of the $P_3$-module $V_{Z^J}$ from Theorem \[finitelength\] is non-zero. In fact, this subspace is isomorphic to the representation $\tau^{P_3}_{{{\rm GL}}(0)}(1)$ defined in . By Lemma 2.5.4 of [@NF], the space $$(V_2)_{\left[\begin{smallmatrix}1&*&*\\&1\\&&1\end{smallmatrix}\right],\theta_{0,1}},$$ where $\theta_{a,b}$ is defined in , is infinite-dimensional. This implies that $V_{N,\theta_{0,1,0}}$ is infinite-dimensional, contradicting the hypothesis in ii).
iii\) $\Leftrightarrow$ iv) Let $W^J=V_{Z^J,\psi}$. Then $W^J_{N,\theta_{a,0,1}}=V_{N,\theta_{a,0,1}}$ for any $a$ in $F^\times$, so that $d(W^J)=d(V)$. Lemma \[kdboundlemma\] therefore implies that iii) and iv) are equivalent.
For more thoughts on $V_{Z^J,\psi}$, see [@AdlerPrasad2006]. Theorem \[nongenchartheorem\] answers one of the questions mentioned at the end of that paper.
Invariant vectors
-----------------
Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$. In this section we will prove the existence of a vector $v$ in $V$ such that ${\rm diag}(1,1,c,c)v=v$ for all units $c$ in the ring of integers ${{\mathfrak o}}$ of $F$. This result was motivated by a question of Abhishek Saha; see [@Sa2013].
Our main tool will be the $G^J$-module $V_{Z^J,\psi}$ for a smooth representation $(\pi,V)$ of ${{\rm GSp}}(4,F)$. Throughout this section we will make a convenient assumption about the character $\psi$ of $F$, namely that $\psi$ has conductor ${{\mathfrak o}}$. By definition, this means that $\psi$ is trivial on ${{\mathfrak o}}$, but not on ${\mathfrak p}^{-1}$, where ${\mathfrak p}$ is the maximal ideal of ${{\mathfrak o}}$. We normalize the Haar measure on $F$ such that ${{\mathfrak o}}$ has volume $1$. Let $q$ be the cardinality of the residue class field ${{\mathfrak o}}/{\mathfrak p}$.
In this section, we will abbreviate $$d(c)=\begin{bmatrix}1\\&1\\&&c\\&&&c\end{bmatrix},\qquad z(x)=\begin{bmatrix}1&&&x\\&1\\&&1\\&&&1\end{bmatrix}$$ for $c$ in $F^\times$ and $x$ in $F$.
\[VZJlemma1\] Let $(\pi,V)$ be a smooth representation of ${{\rm GSp}}(4,F)$. Let $p:\:V\rightarrow V_{Z^J,\psi}$ be the projection map, and let $w$ in $V_{Z^J,\psi}$ be non-zero. Then there exists a positive integer $m$ and a non-zero vector $v$ in $V$ with the following properties.
1. $p(v)=w$.
2. $\pi(z(x))v=\psi(x)v$ for all $x\in{\mathfrak p}^{-m}$.
3. $\pi(d(c))v=v$ for all $c\in1+{\mathfrak p}^m$.
Let $v_0$ in $V$ be such that $p(v_0)=w$. Let $m$ be a positive integer such that $\pi(d(c))v_0=v_0$ for all $c\in1+{\mathfrak p}^m$. Set $v=q^{-m}\int\limits_{{\mathfrak p}^{-m}}\psi(-x)\pi(z(x))v_0\,dx$. Then $p(v)=w$. In particular, $v$ is not zero. Evidently, $v$ has property ii). Moreover, for $c$ in $1+{\mathfrak p}^m$, $$\begin{aligned}
\pi(d(c))(v)&=q^{-m}\int\limits_{{\mathfrak p}^{-m}}\psi(-x)\pi(z(xc^{-1})d(c))v_0\,dx\\
&=q^{-m}\int\limits_{{\mathfrak p}^{-m}}\psi(-xc)\pi(z(x))v_0\,dx\\
&=q^{-m}\int\limits_{{\mathfrak p}^{-m}}\psi(-x)\pi(z(x))v_0\,dx\\
&=v.\end{aligned}$$ This concludes the proof.
\[VZJlemma2\] Let $(\pi,V)$ be a smooth representation of ${{\rm GSp}}(4,F)$. Let $p:\:V\rightarrow V_{Z^J,\psi}$ be the projection map. Let $m$ be a positive integer. Assume that $v$ in $V$ is such that $\pi(z(x))v=\psi(x)v$ for all $x\in{\mathfrak p}^{-m}$. If $c$ is in ${{\mathfrak o}}^\times$ but not in $1+{\mathfrak p}^m$, then $p(\pi(d(c))v)=0$.
Let $w=\pi(d(c))v$. To show that $p(w)=0$, it is enough to show that $$\int\limits_{{\mathfrak p}^{-m}}\psi(-x)\pi(z(x))w\,dx=0.$$ Indeed, $$\begin{aligned}
\int\limits_{{\mathfrak p}^{-m}}\psi(-x)\pi(z(x))w\,dx&=\int\limits_{{\mathfrak p}^{-m}}\psi(-x)\pi(z(x)d(c))v\,dx\\
&=\pi(d(c))\int\limits_{{\mathfrak p}^{-m}}\psi(-x)\pi(z(xc))v\,dx\\
&=\pi(d(c))\int\limits_{{\mathfrak p}^{-m}}\psi(-x)\psi(xc)v\,dx\\
&=\Big(\int\limits_{{\mathfrak p}^{-m}}\psi(x(c-1))\,dx\Big)\pi(d(c))v\\
&=0,\end{aligned}$$ since $c\notin1+{\mathfrak p}^m$ and $\psi$ has conductor ${{\mathfrak o}}$.
\[VZJprop\] Let $(\pi,V)$ be a smooth representation of ${{\rm GSp}}(4,F)$. Let $p:\:V\rightarrow V_{Z^J,\psi}$ be the projection map. Let $w$ be in $V_{Z^J,\psi}$. Then there exists a unique vector $v$ in $V$ with the following properties.
1. $p(v)=w$.
2. $\pi(z(x))v=v$ for all $x\in{{\mathfrak o}}$.
3. $\int\limits_{{\mathfrak p}^{-1}}\pi(z(x))v\,dx=0$.
4. $\pi(d(c))v=v$ for all $c\in{{\mathfrak o}}^\times$.
For the existence part we may assume that $w$ is non-zero. Let the positive integer $m$ and $v$ in $V$ be as in Lemma \[VZJlemma1\]. Define $v_1=q^m\int\limits_{{{\mathfrak o}}^\times}\pi(d(c))v\,dc$. Then, by Lemma \[VZJlemma2\], $$\begin{aligned}
p(v_1)&=q^m\int\limits_{{{\mathfrak o}}^\times}p(\pi(d(c))v)\,dc\\
&=q^m\int\limits_{1+{\mathfrak p}^m}p(\pi(d(c))v)\,dc\\
&=q^m\int\limits_{1+{\mathfrak p}^m}p(v)\,dc\\
&=w.\end{aligned}$$ Evidently, $v_1$ has property iv). To see properties ii) and iii), let $x$ be in ${\mathfrak p}^{-1}$. By ii) of Lemma \[VZJlemma1\], $$\begin{aligned}
\pi(z(x))v_1&=q^m\int\limits_{{{\mathfrak o}}^\times}\pi(d(c)z(xc))v\,dc=q^m\int\limits_{{{\mathfrak o}}^\times}\psi(xc)\pi(d(c))v\,dc.\end{aligned}$$ It follows that $v_1$ has property ii). Integrating over $x$ in ${\mathfrak p}^{-1}$ shows that $v_1$ has property iii) as well.
To prove that $v_1$ is unique, let $V_1$ be the subspace of $V$ consisting of vectors $v$ satisfying properties ii), iii) and iv). We will prove that the restriction of $p$ to $V_1$ is injective (so that $p$ induces an isomorphism $V_1\cong V_{Z^J,\psi}$). Let $v$ be in $V_1$ and assume that $p(v)=0$. Then there exists a positive integer $m$ such that $$\int\limits_{{\mathfrak p}^{-m}}\psi(-x)\pi(z(x))v\,dx=0.$$ Applying $d(c)$ to this equation, where $c$ is in ${{\mathfrak o}}^\times$, leads to $$\int\limits_{{\mathfrak p}^{-m}}\psi(-cx)\pi(z(x))v\,dx=0.$$ Integrating over $c$ in ${{\mathfrak o}}^\times$, we obtain $$q^{-1}\int\limits_{{\mathfrak p}^{-1}}\pi(z(x))v\,dx=\int\limits_{{\mathfrak o}}\pi(z(x))v\,dx.$$ Using properties ii) and iii) it follows that $v=0$. This concludes the proof.
\[VZJpropcor\] Let $(\pi,V)$ be an irreducible, admissible representation of ${{\rm GSp}}(4,F)$ that is not a twist of the trivial representation. Then there exists a vector $v$ in $V$ that is invariant under all elements $d(c)$ with $c$ in ${{\mathfrak o}}^\times$.
By Proposition \[VZJprop\], it is enough to show that $V_{Z^J,\psi}$ is non-zero. By Proposition \[Nthetaprop\], there exists a non-trivial character $\theta$ of $N$ such that $V_{N,\theta}\neq0$. We may assume that $\theta$ is of the form with $c=1$. The assertion follows.
[^1]: Supported by NSF grant DMS-1100541.\
2010 *Mathematics Subject Classification*. Primary 11F70 and 22E50.
|
---
abstract: 'This is a short note that formally presents the matching model for the theoretical study of self-adjusting networks as initially proposed in [@avin2019toward].'
author:
- Chen Avin
- Chen Griner
- Iosif Salem
- Stefan Schmid
bibliography:
- 'bibs/literature.bib'
- 'bibs/cerberus.bib'
- 'bibs/bibly.bib'
title: |
An Online Matching Model for\
Self-Adjusting ToR-to-ToR Networks
---
Background and Motivation {#sec:motivation}
=========================
This note is motivated by the observation that existing datacenter network designs sometimes provide a *mismatch* between some common traffic patterns and the switching technology used in the network topology to serve them. In contrast, we make the case for a systematic approach that assigns a specific type of traffic or flow to the topology component which best matches its characteristics and requirements. For instance, static topology components can provide very low latency; however, static topologies inherently require multi-hop forwarding: the more hops a flow has to traverse, the more network capacity is consumed, which can be seen as a “bandwidth tax,” as noticed in prior work [@rotornet]. This makes these networks less suitable at high loads: the more traffic they carry, the more bandwidth tax is paid. Inspired by the notion of bandwidth tax, we introduce a second dimension, called the “latency tax,” to capture the delay incurred by the reconfiguration time of optical switches. For instance, rotor switches reduce the bandwidth tax by providing periodic *direct* connectivity. While this architecture performs well for all-to-all traffic patterns, it is less suited for elephant flows created by the ring-reduce traffic pattern of machine learning training with Horovod. We note that static and rotor topology components both form demand-oblivious topologies, and hence, they cannot account for specific elephant flows. While Valiant routing [@valiant1982scheme] can be used in combination with rotor switches to carry large flows, this again results in bandwidth tax. This is the advantage of demand-aware topologies, based on 3D MEMS optical circuit switches, which can provide shortcuts specifically to such elephant flows. However, state-of-the-art demand-aware optical switches have a reconfiguration latency of several milliseconds and hence incur a higher latency tax to establish a circuit. Moreover, demand-aware topologies might require a control logic that adds to the latency tax. Thus, this latency can only be amortized for large flows, which benefit from the demand-aware topology components in the longer term.
![Overview of TMT model.[]{data-label="fig:system"}](figures/switch_evolution.pdf){width="\columnwidth"}
ToR-Matching-ToR Architecture
=============================
Given the above motivation for a unified network design, combining the advantages of static, rotor, and demand-aware switches, we propose a two-layer leaf-spine network architecture in which spine switches can be of different types: static, rotor, and demand-aware. Since this network architecture generalizes existing architectures such as RotorNet [@rotornet], in that it supports different types of switches *matching* ToRs to each other, we will refer to it as the ToR-Matching-ToR (TMT) network model.
More specifically, the network interconnects a set of $n$ ToRs, $\{1,2, \dots, n\}$, and its two-layer leaf-spine architecture is composed of leaf switches and spine switches, similar to [@opera; @rotornet]. The $n$ ToR packet switches are connected using $k$ spine switches, $SW = \{sw_1, sw_2, \dots, sw_k\}$, and each switch internally connects its in-out ports via a matching. Figure \[fig:system\] illustrates a schematic view of our design. We assume that each ToR $i: 1 \le i \le n$ has $k$ uplinks, where uplink $j: 1 \le j \le k$ connects to port $i$ in $sw_j$. The directed outgoing (leaf) uplink is connected to the incoming port of the (spine) switch, and the directed incoming (leaf) uplink is connected to the outgoing port of the (spine) switch. Each switch has $n$ input ports and $n$ output ports, and the connections are directed, from input to output ports.
At any point in time, each switch $sw \in SW$ provides a *matching* between its input and output ports. Depending on the switch type, this matching may be *reconfigured* at runtime: the set of matchings $\M_j$ of a switch $j$ may contain more than one matching, i.e., $m_j=\card{\mathcal{M}_j}>1$. Changing from a matching $M' \in \mathcal{M}_j$ to a matching $M''\in \mathcal{M}_j$ takes time, which we model with a parameter $R_j$: the *reconfiguration time* of switch $j$. During reconfiguration, the links in $M' \cap M''$, i.e., the links which are not being reconfigured, can still be used for forwarding; the remaining links are blocked during the reconfiguration. Depending on the technology, different switches in $SW$ support different sets of matchings and reconfiguration times.
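To make this switching model concrete, the following minimal sketch (our own illustration; the class and function names are not part of the model of [@avin2019toward]) records, for each spine switch, its set of allowed matchings and its reconfiguration time, and returns the links that remain usable while the switch moves from one matching to another.

```python
from dataclasses import dataclass

# A matching is a set of directed (input_port, output_port) pairs, i.e. ToR-to-ToR links.
Matching = frozenset

@dataclass
class SpineSwitch:
    allowed: set            # the set of allowed matchings M_j (size 1 for a static switch)
    reconf_time: float      # the reconfiguration time R_j (0 for a static switch)
    current: Matching = None

    def reconfigure(self, new_matching):
        """Move to new_matching and return the links usable during the reconfiguration."""
        assert new_matching in self.allowed
        usable = self.current & new_matching if self.current else frozenset()
        # Links outside the intersection M' ∩ M'' are blocked for reconf_time time units.
        self.current = new_matching
        return usable

# Example: a 4-port rotor-like switch cycling between two fixed matchings.
m1 = Matching({(1, 2), (2, 1), (3, 4), (4, 3)})
m2 = Matching({(1, 3), (3, 1), (2, 4), (4, 2)})
sw = SpineSwitch(allowed={m1, m2}, reconf_time=1.0, current=m1)
print(sw.reconfigure(m2))   # frozenset(): the two matchings are disjoint, so no link stays up
```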
We note that the TMT network can be used to model existing systems, e.g., Eclipse [@venkatakrishnan2018costly] or ProjecToR [@projector], which rely on demand-aware switches, RotorNet [@rotornet] and Opera [@opera], which rely on rotor-based switches, or an optical variant of Xpander [@xpander], which can be built from a collection of static matchings.
The Matching Model
==================
This section presents a general algorithmic model for Self-Adjusting Networks (SAN) constructed using a set of matchings. We mostly follow [@avin2019toward]. We consider a set of $n$ nodes $V=\{1,\ldots,n\}$ (e.g., the top-of-rack switches). The communication *demand* among these nodes is a sequence $\sigma =
(\sigma_1, \sigma_2, \ldots)$ of *communication requests*, where $\sigma_t = (u,v) \in V \times V$ is a source-destination pair. The communication demand can be either finite or infinite.
In order to serve this demand, the nodes $V$ must be inter-connected by a network $\netw$, defined over the same set of nodes. In case of a demand-aware network, $\netw$ can be optimized towards $\sigma$, either statically or dynamically: a self-adjusting network $\netw$ can change over time, and we denote by $\netw_t$ the network at time $t$, i.e., the network evolves: $\netw_0,$ $\netw_1,$ $\netw_2,$ $\ldots$
Matching
--------
The $n$ nodes are connected using $k$ switches, $SW = \{sw_1, sw_2, \dots, sw_k\}$, and each switch internally connects its $n$ in-out ports via a *matching*. These matchings can be dynamic, and change over time. To denote the matching on a switch $i$ at time $t$ we use $M(i,t)$. At each time $t$ our network is the union of these matchings, $\netw_t=\bigcup_{i=1}^{k} M(i,t)$.
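Continuing the illustrative sketch above (again with our own helper names), the network at time $t$ is obtained by taking the union of the per-switch matchings:

```python
def network_at(switches):
    """N_t: the union, over all spine switches, of their current matchings (an edge set)."""
    edges = set()
    for sw in switches:
        edges |= sw.current   # sw.current is the matching M(i, t) of this switch
    return edges
```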
In general, not all switches are necessarily reconfigurable. Since reconfigurable switches tend to be more costly than static ones, a network could gain from using a hybrid mix of switches.
Cost
----
The crux of designing smart self-adjusting networks is to find an optimal *tradeoff* between the benefits and the costs of reconfiguration: while by reconfiguring the network, we may be able to serve requests more efficiently in the future, reconfiguration itself comes at a cost.
The inputs to the *matching-based* self-adjusting network design problem are the number of nodes $n$, the number of switches (i.e., matchings) $k$, a set of allowed network topologies $\netws$ (i.e., all networks that can be built from $k$ matchings), the request sequence $\sigma=(\sigma_1,\sigma_2,\ldots,\sigma_{m})$, and two types of costs:
- An **adjustment cost** $\RecCost: \netws \times \netws \rightarrow \mathbb{R}$ which defines the cost of reconfiguring a network $\netw \in \netws$ to a network $\netw' \in \netws$. Adjustment costs may include mechanical costs (e.g., energy required to move lasers or abrasion) as well as performance costs (e.g., reconfiguring a network may entail control plane overheads or packet reorderings, which can harm throughput). For example, the cost could be given by the number of links which need to be changed in order to transform the network.
- A **service cost** $\RouCost: \sigma \times
\netws \rightarrow \mathbb{R}$ which defines, for each request $\sigma_i$ and for each network $\netw\in \netws$, the price of serving $\sigma_i$ in network $\netw$. For example, the cost could correspond to the route length: shorter routes require fewer resources and hence reduce not only the load (e.g., bandwidth consumed along fewer links), but also energy consumption, delay, and flow completion times.
Serving request $\sigma_i$ under the current network configuration $\netw_{i}$ will hence cost $\RouCost(\sigma_i,\netw_i)$, after which the network reconfiguration algorithm may decide to reconfigure the network at cost $\RecCost(\netw_{i},\netw_{i+1})$. The total processing cost of a demand sequence $\sigma$ for an algorithm $\A$ is then $$\begin{aligned}
\Cost(\A, \netw_0, \sigma) = \sum_{t=1}^{m} \RouCost(\sigma_t,\netw_{t-1})
+ \RecCost(\netw_{t-1},\netw_{t}) \end{aligned}$$ where $\netw_t \in \mathcal{N}$ denotes the network at time $t$.
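The total cost can be transcribed into code directly; in the sketch below, `route_cost`, `adjust_cost`, and the algorithm interface `algo` are placeholder names we introduce for illustration, not part of the formal model.

```python
def total_cost(algo, N0, sigma, route_cost, adjust_cost):
    """Sum of service and adjustment costs over the request sequence sigma.

    algo(N, request) returns the (possibly reconfigured) network to be used after
    serving `request` in network N; route_cost and adjust_cost implement the two
    cost functions of the model.
    """
    cost, N = 0, N0
    for request in sigma:
        cost += route_cost(request, N)    # serve sigma_t in N_{t-1}
        N_next = algo(N, request)         # the algorithm may reconfigure
        cost += adjust_cost(N, N_next)    # pay for the reconfiguration
        N = N_next
    return cost
```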
Specific Metrics
----------------
### Service Cost
In order to give a more useful description of the performance of a self-adjusting network, we model the service cost for each $\sigma_t=(u,v)$ as the shortest distance between $u$ and $v$ on the graph $\netw_t$, that is, $$\RouCost(\sigma_t,\netw_{t})=d_{\netw_t}(u,v)=d(\sigma_t,\netw_t)$$ where $d_{G}(u,v)$ denotes the *shortest path* distance between $u$ and $v$ on the graph $G$.
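With the network represented as an edge set as in the sketches above, this service cost is a plain breadth-first search (our own helper names; the graph is treated as undirected here):

```python
from collections import deque

def service_cost(request, edges):
    """Shortest-path distance d_N(u, v) in the graph given by the edge set `edges`."""
    u, v = request
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return dist[x]
        for y in adj.get(x, ()):
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return float("inf")   # u and v are disconnected in N_t
```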
### Adjustment Cost
The adjustment cost can depend on the particular network being modeled. We will discuss three particular cases of adjustment costs, and recall that our network graph at time $t$, $N_t$, is a union of the different matchings on each of our $k$ switches. At any time $t$, a switch can adjust its matching, causing a change in the overall network’s topology.
- **Edge Distance:** The basic case where we define the adjustment cost as proportional to the number of replaced edges between consecutive matchings of the same switch. Recall that we denote the matching of switch $i$ at time $t$ as $M(i, t)$, which denotes the set of edges in the matching. Let the cost of a single edge be $\alpha$; then the adjustment cost for a single switch is $\alpha \cdot \card{M(i, t+1) \setminus M(i, t)}$, where $S \setminus T$ denotes the *set difference* between $S$ and $T$. For the entire network, this turns out to be $$\RecCost(\netw_{t-1},\netw_{t})=\alpha \sum_{i=1}^k \card{M(i,t)\setminus M(i,t-1)}.$$
- **Switch Cost:** In this case, if a matching (switch) is changed, it costs $\alpha$ regardless of the number of edge changes in the matching. Let $\mathbb{I}_{S\neq T}$ be an indicator function that equals $1$ if the set $S$ differs from the set $T$, and $0$ otherwise. Then the adjustment cost for the network is: $$\RecCost(\netw_{t-1},\netw_{t})=\alpha \sum_{i=1}^k \mathbb{I}_{M(i,t) \neq M(i,t-1)}.$$ (Both this and the edge-distance variant are sketched in code after this list.)
- **No Direct Cost:** In this case the adjustment cost is zero, $$\RecCost(\netw_{t-1},\netw_{t})=0,$$ however, the cost of reconfiguring the network is still incurred through the inactivity of some of the edges during the adjustment itself. When some switch $i$ changes its matching from $M(i,t)$ to $M(i,t+1)$, its edges will be unavailable, and requests cannot be served using these edges until the adjustment process is completed after some $\beta$ units of time. Here, we also consider two cases: (i) the entire switch (matching) is unavailable for $\beta$ time units, namely all its edges are inactive; (ii) only the edges that are changing are inactive for $\beta$ time units. Let $M^*(i,t)$ denote the set of *active* edges in matching $M(i,t)$ (or in $sw_i$). Then for each time $t$ we have: $$\netw_t=\bigcup_{i=1}^{k} M^*(i, t)$$
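The first two adjustment-cost variants can be written down directly; the sketch below (our own function names) takes the per-switch matchings before and after the adjustment, indexed by switch.

```python
def edge_distance_cost(old, new, alpha=1.0):
    """alpha * sum over switches i of |M(i,t) \\ M(i,t-1)|, with old[i] = M(i,t-1), new[i] = M(i,t)."""
    return alpha * sum(len(new[i] - old[i]) for i in range(len(old)))

def switch_cost(old, new, alpha=1.0):
    """alpha * number of switches whose matching changed between t-1 and t."""
    return alpha * sum(1 for i in range(len(old)) if new[i] != old[i])
```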
|
---
abstract: 'The complete transposition graph is defined to be the graph whose vertices are the elements of the symmetric group $S_n$, and two vertices $\alpha$ and $\beta$ are adjacent in this graph iff there is some transposition $(i,j)$ such that $\alpha=(i,j) \beta$. Thus, the complete transposition graph is the Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ of the symmetric group generated by the set $S$ of all transpositions. An open problem in the literature is to determine which Cayley graphs are normal. It was shown recently that the Cayley graph generated by 4 cyclically adjacent transpositions is not normal. In the present paper, it is proved that the complete transposition graph is not a normal Cayley graph, for all $n \ge 3$. Furthermore, the automorphism group of the complete transposition graph is shown to equal $${\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(S_n,S)) = (R(S_n) \rtimes {\mathop{\mathrm{Inn}}\nolimits}(S_n)) \rtimes \mathbb{Z}_2,$$ where $R(S_n)$ is the right regular representation of $S_n$, ${\mathop{\mathrm{Inn}}\nolimits}(S_n)$ is the group of inner automorphisms of $S_n$, and $\mathbb{Z}_2 = \langle h \rangle$, where $h$ is the map $\alpha \mapsto \alpha^{-1}$.'
author:
- 'Ashwin Ganesan [^1]'
bibliography:
- 'refsaut.bib'
title: Automorphism group of the complete transposition graph
---
**Index terms** — complete transposition graph; automorphisms of graphs; normal Cayley graphs.
Introduction
============
Let $X=(V,E)$ be a simple, undirected graph. An automorphism of $X$ is a permutation of its vertex set that preserves adjacency (cf. Tutte [@Tutte:1966], Biggs [@Biggs:1993]). The set $\{g \in {\mathop{\mathrm{Sym}}\nolimits}(V): E^g=E\}$ of all automorphisms of $X$ is called the automorphism group of $X$, and is denoted by ${\mathop{\mathrm{Aut}}\nolimits}(X)$. Given a group $H$ and a subset $S \subseteq H$ such that $1 \notin S$ and $S=S^{-1}$, the Cayley graph of $H$ with respect to $S$, denoted by ${\mathop{\mathrm{Cay}}\nolimits}(H,S)$, is defined to be the graph with vertex set $H$ and edge set $\{(h,sh): h \in H, s \in S\}$. The right regular representation $R(H)$ acts as a group of automorphisms of the Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(H,S)$, and hence a Cayley graph is always vertex-transitive. The set of automorphisms of the group $H$ that fix $S$ setwise, denoted by ${\mathop{\mathrm{Aut}}\nolimits}(H,S)$, is a subgroup of the stabilizer ${\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(H,S))_e$ of the vertex $e$ (cf. [@Biggs:1993]). A Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(H,S)$ is said to be normal if $R(H)$ is a normal subgroup of ${\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(H,S))$, or equivalently, if ${\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(H,S)) = R(H) \rtimes {\mathop{\mathrm{Aut}}\nolimits}(H,S)$ (cf. [@Godsil:1981], [@Xu:1998]).
An open problem in the literature is to determine which Cayley graphs are normal. Let $S$ be a set of transpositions generating $S_n$. The transposition graph of $S$ is defined to be the graph with vertex set $\{1,\ldots,n\}$, and with two vertices $i$ and $j$ being adjacent in this graph iff $(i,j) \in S$. A set of transpositions $S$ generates $S_n$ iff the transposition graph of $S$ is connected. Godsil and Royle [@Godsil:Royle:2001] showed that if the transposition graph of $S$ is an asymmetric tree, then ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ has automorphism group isomorphic to $S_n$. Feng [@Feng:2006] showed that if the transposition graph of $S$ is an arbitrary tree, then ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ has automorphism group $R(S_n) \rtimes {\mathop{\mathrm{Aut}}\nolimits}(S_n,S)$. Ganesan [@Ganesan:DM:2013] showed that if the girth of the transposition graph of $S$ is at least 5, then ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ has automorphism group $R(S_n) \rtimes {\mathop{\mathrm{Aut}}\nolimits}(S_n,S)$. In all these cases, the Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is normal. Ganesan [@Ganesan:DM:2013] showed that if the transposition graph of $S$ is a 4-cycle graph, then ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is not normal.
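As a small illustration of these definitions (the code and helper names are ours; permutations are written on $\{0,\ldots,n-1\}$), one can build ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ for the complete set of transpositions and small $n$, and check that it is an $|S|$-regular graph on $n!$ vertices.

```python
from itertools import permutations

def compose(a, b):
    """Permutations as tuples: (a*b)(i) = a(b(i))."""
    return tuple(a[b[i]] for i in range(len(a)))

def transposition(i, j, n):
    p = list(range(n))
    p[i], p[j] = p[j], p[i]
    return tuple(p)

n = 4
S = [transposition(i, j, n) for i in range(n) for j in range(i + 1, n)]   # all transpositions
V = list(permutations(range(n)))                                          # vertices of Cay(S_n, S)
E = {frozenset({v, compose(s, v)}) for v in V for s in S}                 # edges {h, sh}

print(len(V), len(E))          # 24 vertices and 24*6/2 = 72 edges for n = 4
degree = {v: 0 for v in V}
for e in E:
    for v in e:
        degree[v] += 1
print(set(degree.values()))    # {6}: the graph is |S|-regular
```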
While one can often obtain some automorphisms of a graph, it is often difficult to prove that one has obtained the (full) automorphism group. In the present paper, we obtain the full automorphism group of the complete transposition graph. The complete transposition graph has also been studied for consideration as the topology of interconnection networks [@Stacho:Vrto:1998]. Many authors have investigated the automorphism group of other graphs that arise as topologies of interconnection networks; for example, see [@Deng:Zhang:2011], [@Deng:Zhang:2012], [@Zhou:2011], [@Zhang:Huang:2005].
**Notation.** Throughout this paper, $S$ represents a set of transpositions generating $S_n$, $X:={\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ and $G:={\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(S_n,S))$. $X_r(e)$ denotes the set of vertices in $X$ whose distance to the identity vertex $e$ is exactly $r$. Thus, $X_0(e) = \{e\}$ and $X_1(e)=S$. Greek letters $\alpha, \beta,\ldots \in S_n$ usually represent the vertices of $X$ and lowercase Latin letters $g, h,\ldots \in {\mathop{\mathrm{Sym}}\nolimits}(S_n)$ often represent automorphisms of $X$. The support of a permutation $\alpha$ is the set of points moved by $\alpha$. For a graph $X$, $L_e:=L_e(X)$ denotes the set of automorphisms of $X$ that fixes the vertex $e$ and each of its neighbors in $X$.
The main result of this paper is the following:
Let $S$ be the set of all transpositions in $S_n$ ($n \ge 3$). Then the automorphism group of the complete transposition graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is $${\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(S_n,S)) = (R(S_n) \rtimes {\mathop{\mathrm{Inn}}\nolimits}(S_n)) \rtimes \mathbb{Z}_2,$$ where $R(S_n)$ is the right regular representation of $S_n$, ${\mathop{\mathrm{Inn}}\nolimits}(S_n)$ is the inner automorphism group of $S_n$, and $\mathbb{Z}_2 = \langle h \rangle$, where $h$ is the map $\alpha \mapsto \alpha^{-1}$. The complete transposition graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is not normal.
Preliminaries
=============
Whitney [@Whitney:1932] investigated whether a graph $T$ is uniquely determined by its line graph $L(T)$ and showed that the answer is in the affirmative for all connected graphs $T$ on 5 or more vertices (this is because the only exceptions occur when $T$ is $K_3$ or $ K_{1,3}$, which have fewer than 5 vertices). More specifically, two connected graphs on 5 or more vertices are isomorphic iff their line graphs are isomorphic. And if $T$ is a connected graph that has 5 or more vertices, then every automorphism of the line graph $L(T)$ is induced by a unique automorphism of $T$, and the automorphism groups of $T$ and of $L(T)$ are isomorphic:
\[thm:Whitney:graph:linegraph:sameautgroup\] (Whitney [@Whitney:1932]) Let $T$ be a connected graph containing at least 5 vertices. Then the automorphism group of $T$ and of its line graph $L(T)$ are isomorphic.
\[thm:Feng:Aut:Sn:S:equals:AutTS\] (Feng [@Feng:2006]) Let $S$ be a set of transpositions in $S_n$, and let $T=T(S)$ denote the transposition graph of $S$. Then, ${\mathop{\mathrm{Aut}}\nolimits}(S_n,S) \cong {\mathop{\mathrm{Aut}}\nolimits}(T)$.
Feng’s result (Theorem \[thm:Feng:Aut:Sn:S:equals:AutTS\]) does not require that $S$ generate $S_n$, i.e. it holds even if the transposition graph of $S$ is not connected.
\[thm:Aut:Sn:S:equals:Inn:Sn\] (Suzuki [@Suzuki:1982 Chapter 3, Section 2]) If $n \ge 2$ and $n \ne 6$, then ${\mathop{\mathrm{Aut}}\nolimits}(S_n)={\mathop{\mathrm{Inn}}\nolimits}(S_n)$. If $n=6$, then $|{\mathop{\mathrm{Aut}}\nolimits}(S_n):{\mathop{\mathrm{Inn}}\nolimits}(S_n)|=2$, and every element in ${\mathop{\mathrm{Aut}}\nolimits}(S_n) - {\mathop{\mathrm{Inn}}\nolimits}(S_n)$ maps a transposition to a product of three disjoint transpositions.
An equivalent condition for normality
=====================================
Let $S$ be a set of transpositions generating $S_n$ ($n \ge 5$). Let $X:={\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ and let $L_e=L_e(X)$ denote the set of automorphisms of $X$ that fixes the identity vertex $e$ and each of its neighbors. In this section an equivalent condition for normality of ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is obtained: the Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is normal iff $L_e=1$. It is not assumed in this section that $S$ is the complete set of transpositions in $S_n$.
\[lemma:uniqueC4\] Let $S$ be a set of transpositions generating $S_n$. Let $\tau, \kappa \in S, \tau \ne \kappa$. Then, $\tau \kappa = \kappa \tau$ if and only if there is a unique 4-cycle in ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ containing $e,\tau$ and $\kappa$.
*Proof*: Suppose $\tau \kappa=\kappa\tau$. Then $\tau$ and $\kappa$ have disjoint support. Let $\omega$ be a common neighbor of the vertices $\tau$ and $\kappa$ in the Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$. By definition of the adjacency relation in the Cayley graph, there exist $x,y\in S$ such that $x\tau=y\kappa=\omega$. Observe that $x\tau=y\kappa$ iff $\tau\kappa=xy$. But since $\kappa$ and $\tau$ have disjoint support, $\tau\kappa=xy$ iff $\tau=x$ and $\kappa=y$ or $\tau=y$ and $\kappa=x$. Thus, $\omega$ is either the vertex $e$ or the vertex $\tau\kappa$. Hence, there exists a unique 4-cycle in ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ containing $e,\tau$ and $\kappa$, namely the cycle $(e,\tau,\tau\kappa=\kappa\tau,\kappa,e)$.
To prove the converse, suppose $\tau\kappa \ne \kappa\tau$. Then $\tau$ and $\kappa$ have overlapping support; without loss of generality, take $\tau=(1,2)$ and $\kappa=(2,3)$. We consider two cases, depending on whether $(1,3) \in S$. First suppose $(1,3) \notin S$. Let $\omega$ be a common neighbor of $\tau$ and $\kappa$. So $\omega=x\tau=y\kappa$ for some $x,y \in S$. As before, $x\tau=y\kappa$ iff $xy=\tau\kappa=(1,2)(2,3)=(1,3,2)$. The only ways to decompose $(1,3,2)$ as a product of two transpositions are $(1,3,2)=(1,2)(2,3)=(3,2)(1,3)=(1,3)(1,2)$. Since $(1,3) \notin S$, we must have $x=(1,2)$ and $y=(2,3)$, whence $\omega=e$. Thus, $\tau$ and $\kappa$ have only one common neighbor, namely $e$. Therefore, there does not exist any 4-cycle in ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ containing $e,\tau$ and $\kappa$.
Now suppose $\rho:=(1,3) \in S$. Then $S$ contains the three transpositions $\tau=(1,2),\kappa=(2,3)$ and $\rho=(1,3)$. The Cayley graph of the permutation group generated by these transpositions is the complete bipartite graph $K_{3,3}$. Hence ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ contains as a subgraph the complete bipartite graph $K_{3,3}$ with bipartition $\{e,\kappa\tau,\tau\kappa \}$ and $\{\tau,\kappa,\rho \}$. There are exactly two 4-cycles in ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ containing $e,\tau$ and $\kappa$, namely the 4-cycle through the vertex $\kappa\tau$ and the 4-cycle through the vertex $\tau\kappa$. Thus, while there exists a 4-cycle in this case, it is not unique.
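As a sanity check, the dichotomy in Lemma \[lemma:uniqueC4\] can be verified by brute force on a small case. The following Python sketch is our own illustration (the helper names are ours, not from the paper); it builds the complete transposition graph on $S_4$ and counts the common neighbors of two generators: two common neighbors correspond to a unique 4-cycle through $e$, three to two 4-cycles.

```python
# Brute-force check of the lemma on S_4 (our own sketch).
from itertools import combinations

n = 4
def compose(p, q):                  # apply p first, then q (left-to-right, as in the products above)
    return tuple(q[p[i]] for i in range(n))

def transposition(i, j):
    t = list(range(n)); t[i], t[j] = j, i
    return tuple(t)

e = tuple(range(n))
S = [transposition(i, j) for i, j in combinations(range(n), 2)]   # all transpositions

def neighbors(v):                   # the vertices s*v, s in S
    return {compose(s, v) for s in S}

def common_neighbors(u, v):
    return neighbors(u) & neighbors(v)

tau, kappa, rho = transposition(0, 1), transposition(2, 3), transposition(1, 2)
# disjoint supports: exactly two common neighbors (e and tau*kappa), hence a unique 4-cycle
print(len(common_neighbors(tau, kappa)))   # 2
# overlapping supports; since S is complete, the third transposition on {0,1,2} is also in S:
# three common neighbors, hence two 4-cycles
print(len(common_neighbors(tau, rho)))     # 3
```

Since the complete transposition set is closed under conjugation, the left- and right-multiplication conventions define the same graph, so the convention chosen in the sketch does not affect the counts.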
\[prop:autHS:in:autLT\] Let $S$ be a set of transpositions generating $S_n$. Then, every automorphism of $S_n$ that fixes $S$ setwise, when restricted to $S$, is an automorphism of the line graph of the transposition graph of $S$.
*Proof*: Let $g \in {\mathop{\mathrm{Aut}}\nolimits}(S_n,S)$. Let $\tau,\kappa \in S, \tau \ne \kappa$. Since $g$ is an automorphism of $S_n$, it takes $\tau \kappa$ to $(\tau \kappa)^g = \tau^g \kappa^g$. An automorphism of a group preserves the order of the elements, whence $\tau$ and $\kappa$ have disjoint support if and only if $\tau^g$ and $\kappa^g$ have disjoint support. Since $g$ fixes $S$, $\tau^g, \kappa^g \in S$. Thus, in the transposition graph of $S$, the edges $\tau$ and $\kappa$ are incident to a common vertex if and only if the edges $\tau^g$ and $\kappa^g$ are incident to a common vertex. In other words, $g$ restricted to $S$ is an automorphism of the line graph of the transposition graph of $S$.
Since ${\mathop{\mathrm{Aut}}\nolimits}(S_n,S) \subseteq G_e$, a stronger result than Proposition \[prop:autHS:in:autLT\] is the following:
\[prop:Ge:restrictedtoS:is:in:AutLT\] Let $S$ be a set of transpositions generating $S_n$, and let $G:={\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(S_n,S))$. If $g \in G_e$, then $g$ restricted to $S$ is an automorphism of the line graph of the transposition graph of $S$.
*Proof*: Let $\tau, \kappa \in S$ and $g \in G_e$. Let $L(T)$ denote the line graph of the transposition graph of $S$. Two transpositions commute iff they have disjoint support. It needs to be shown that the restriction of $g$ to $S$ is an automorphism of $L(T)$, i.e. that $\tau,\kappa$ have disjoint support iff $\tau^g,\kappa^g$ have disjoint support. Thus, it suffices to show that $\tau \kappa=\kappa\tau$ iff $\tau^g \kappa^g=\kappa^g \tau^g$. By Lemma \[lemma:uniqueC4\], $\tau \kappa=\kappa\tau$ iff there is a unique 4-cycle in ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ containing $e, \tau$ and $\kappa$, which is the case iff there is a unique 4-cycle in ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ containing $e, \tau^g$ and $\kappa^g$, which is the case iff $\tau^g \kappa^g = \kappa^g \tau^g$.
\[prop:restriction:map:is:surjective\] Let $S$ be a set of transpositions generating $S_n$, and let $T=T(S)$ denote the transposition graph of $S$ and $L(T)$ denote its line graph. Let $G:={\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(S_n,S))$. Then the restriction map from $G_e$ to ${\mathop{\mathrm{Aut}}\nolimits}(L(T))$ defined by $g \mapsto g |_S$ is surjective.
*Proof*: Let $h \in {\mathop{\mathrm{Aut}}\nolimits}(L(T))$. Then $h \in {\mathop{\mathrm{Sym}}\nolimits}(S)$ since the vertices of $L(T)$ correspond to the transpositions in $S$. We show that there exists an element $g \in G_e$ whose action on $S$ is identical to that of $h$. By Whitney’s Theorem \[thm:Whitney:graph:linegraph:sameautgroup\], there is an automorphism $h' \in {\mathop{\mathrm{Aut}}\nolimits}(T)$ that induces $h$. Now $h'$ is a permutation in $S_n$. Let $g$ denote conjugation by $h'$. Thus, $g \in {\mathop{\mathrm{Aut}}\nolimits}(S_n)$. Since $h'$ is an automorphism of $T$, it fixes the edge set $S$ of $T$. Hence conjugation by $h'$ also fixes $S$, i.e., $g \in {\mathop{\mathrm{Aut}}\nolimits}(S_n,S)$. Since ${\mathop{\mathrm{Aut}}\nolimits}(S_n,S) \subseteq G_e, g \in G_e$. It is clear that $g|_S$ equals $h$. For example, if $h$ takes the edge $\{i,j\}$ to $\{m,\ell\}$, then $h'$ takes $\{i,j\}$ to $\{i^{h'},j^{h'} \} = \{m,\ell\}$, and hence $g$ takes $(i,j) \in S$ to $(m,\ell) \in S$. Thus, $g|_S$ and $h$ induce the same permutation of $S$, which implies the given restriction map is surjective.
\[thm:normal:iff:Le:equals:1\] Let $S$ be a set of transpositions generating $S_n$ ($n \ge 5$). Let $L_e$ denote the set of automorphisms of the Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ that fixes the vertex $e$ and each of its neighbors. Then, ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is normal if and only if $L_e=1$.
*Proof*: $\Leftarrow$: Consider the map $f$ from the domain $G_e$, defined to be the restriction map $g \mapsto g|_S$. By Proposition \[prop:Ge:restrictedtoS:is:in:AutLT\], $f$ is into ${\mathop{\mathrm{Aut}}\nolimits}(L(T))$. The kernel of the map $f: G_e \rightarrow {\mathop{\mathrm{Aut}}\nolimits}(L(T))$ is the set of elements in $G_e$ that fixes each element in $S$ and hence equals $L_e$. Since $L_e=1$, $f$ is injective. By Proposition \[prop:restriction:map:is:surjective\], $f$ is surjective. The restriction map is also a homomorphism. Hence $f$ is an isomorphism.
Thus, $|G_e| = |{\mathop{\mathrm{Aut}}\nolimits}(L(T))|$. By Whitney’s Theorem \[thm:Whitney:graph:linegraph:sameautgroup\], the transposition graph $T$ and its line graph $L(T)$ have isomorphic automorphism groups. Thus, $|G_e| = |{\mathop{\mathrm{Aut}}\nolimits}(T)|$. By Theorem \[thm:Feng:Aut:Sn:S:equals:AutTS\], ${\mathop{\mathrm{Aut}}\nolimits}(T) \cong {\mathop{\mathrm{Aut}}\nolimits}(S_n,S)$. Thus, $G_e = {\mathop{\mathrm{Aut}}\nolimits}(S_n,S)$, which implies ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is normal.
$\Rightarrow$: If ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is normal, then $G_e = {\mathop{\mathrm{Aut}}\nolimits}(S_n,S)$ (cf. [@Xu:1998]). Once again, by Theorem \[thm:Whitney:graph:linegraph:sameautgroup\] and Theorem \[thm:Feng:Aut:Sn:S:equals:AutTS\], $|G_e| = |{\mathop{\mathrm{Aut}}\nolimits}(L(T))|$. Also, the map $f: G_e \rightarrow {\mathop{\mathrm{Aut}}\nolimits}(L(T)), g \mapsto g|_S$ is surjective by Proposition \[prop:restriction:map:is:surjective\]. Thus, $f$ is also injective and therefore its kernel $L_e=1$.
Non-normality of the complete transposition graph
=================================================
\[prop:inverse:map:is:aut\] Let $S$ be the set of all transpositions in $S_n~ (n \ge 3)$. Then, the map $\alpha \mapsto \alpha^{-1}$ is an automorphism of the complete transposition graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$.
*Proof*: Let $G$ denote the automorphism group of the Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ and let $e$ denote the identity element in $S_n$. The Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is normal if and only if the stabilizer $G_e \subseteq {\mathop{\mathrm{Aut}}\nolimits}(S_n)$ (cf. Xu [@Xu:1998 Proposition 1.5]). Thus, to prove that ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is not normal, it suffices to show that $G_e$ contains an element which is not a homomorphism from $S_n$ to itself. Consider the map $\alpha \mapsto \alpha^{-1}$ from $S_n$ to itself. Since $n \ge 3$, $S_n$ is nonabelian, whence the map $\alpha \mapsto \alpha^{-1}$ is not a homomorphism. It suffices to show that the map $\alpha \mapsto \alpha^{-1}$ is an automorphism of the Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$.
Let $\alpha$ and $\beta$ be two adjacent vertices in the graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$. Then $\alpha$ and $\beta$ differ by a transposition, i.e. there is some $i \ne j$ such that $\beta=(i,j)\alpha$. We shall prove that $\alpha^{-1}$ and $\beta^{-1}$ also differ by a transposition; since the set $S$ contains all transpositions in $S_n$, it follows that $\alpha^{-1}$ and $\beta^{-1}$ are also adjacent vertices in ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$.
Two cases arise, depending on whether $i$ and $j$ are in the same cycle of $\alpha$ or in different cycles of $\alpha$. Suppose $i$ and $j$ are in the same cycle of $\alpha$, say $\alpha=(\alpha_1,\ldots,\alpha_r,i,\beta_1,\ldots,\beta_s,j) \cdots$. Then $\beta=(i,j)\alpha=(\alpha_1,\ldots,\alpha_r,i)(\beta_1,\ldots,\beta_s,j) \cdots$. A quick calculation shows that $\alpha^{-1}$ and $\beta^{-1}$ differ by the transposition $\tau=(\alpha_1,\beta_1)$ if $r,s \ge 1$, by $\tau=(i,\beta_1)$ if $r=0, s \ge 1$, by $\tau=(j,\alpha_1)$ if $s=0, r \ge 1$, and by $\tau=(i,j)$ if $r,s=0$. Hence $\alpha^{-1}=\tau \beta^{-1}$ for some transposition $\tau$. Thus, $\alpha^{-1}$ and $\beta^{-1}$ are also adjacent vertices in the Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$.
Suppose $i$ and $j$ are in different cycles of $\alpha$, say $\alpha=(\alpha_1,\ldots,\alpha_r,i)(\beta_1,\ldots,\beta_s,j) \cdots$ and $\beta=(i,j) \alpha$. Then $i$ and $j$ are in the same cycle of $\beta$ and $(i,j) \beta= \alpha$. By the argument in the previous paragraph applied to $\beta$ instead of $\alpha$, it follows that $\beta^{-1}$ and $\alpha^{-1}$ are adjacent vertices in ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$.
We have shown that if $\alpha$ and $\beta$ are adjacent vertices, then so are $\alpha^{-1}$ and $\beta^{-1}$. It follows that if $\alpha^{-1}$ and $\beta^{-1}$ are adjacent vertices, then so are $(\alpha^{-1})^{-1}=\alpha$ and $(\beta^{-1})^{-1}=\beta$. Hence, $\alpha \mapsto \alpha^{-1}$ is an automorphism of the Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$.
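The case analysis above can also be confirmed computationally for a small value of $n$. The short Python sketch below is our own illustration (not part of the proof; helper names are ours); it checks that $\alpha \mapsto \alpha^{-1}$ preserves adjacency on ${\mathop{\mathrm{Cay}}\nolimits}(S_4,S)$ with $S$ the complete set of transpositions.

```python
# Computational confirmation on S_4 (our own sketch, not part of the proof).
from itertools import combinations, permutations

n = 4
def compose(p, q):                 # apply p first, then q
    return tuple(q[p[i]] for i in range(n))

def inverse(p):
    q = [0] * n
    for i, image in enumerate(p):
        q[image] = i
    return tuple(q)

S = set()
for i, j in combinations(range(n), 2):
    t = list(range(n)); t[i], t[j] = j, i
    S.add(tuple(t))

V = list(permutations(range(n)))
def adjacent(u, v):                # u ~ v iff they differ by a transposition
    return compose(u, inverse(v)) in S

assert all(adjacent(u, v) == adjacent(inverse(u), inverse(v))
           for u in V for v in V if u != v)
print("alpha -> alpha^{-1} preserves adjacency on Cay(S_4, S)")
```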
\[thm:aut:completetransp:subgroup\] Let $S$ be the set of all transpositions in $S_n$ ($n \ge 3$). Then $${\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(S_n,S)) \supseteq (R(S_n) \rtimes {\mathop{\mathrm{Inn}}\nolimits}(S_n)) \rtimes \mathbb{Z}_2,$$ where $R(S_n)$ is the right regular representation of $S_n$, ${\mathop{\mathrm{Inn}}\nolimits}(S_n)$ is the inner automorphism group of $S_n$, and $\mathbb{Z}_2 = \langle h \rangle$, where $h$ is the map $\alpha \mapsto \alpha^{-1}$.
*Proof*: Let $G:={\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(S_n,S))$ denote the automorphism group of the complete transposition graph. Since the elements of $R(S_n)$ and of ${\mathop{\mathrm{Aut}}\nolimits}(S_n,S)$ are automorphisms of the Cayley graph (cf. [@Biggs:1993]), we have $G \supseteq R(S_n) \rtimes {\mathop{\mathrm{Aut}}\nolimits}(S_n,S)$. Also, $S$ is a nonempty set of transpositions, so by Theorem \[thm:Aut:Sn:S:equals:Inn:Sn\] every element in ${\mathop{\mathrm{Aut}}\nolimits}(S_n,S)$ is an inner automorphism of $S_n$. In fact, the elements in ${\mathop{\mathrm{Aut}}\nolimits}(S_n,S)$ are exactly conjugations by the automorphisms of the transposition graph of $S$ (cf. Theorem \[thm:Feng:Aut:Sn:S:equals:AutTS\] and [@Feng:2006]). The transposition graph of $S$ is complete, hence ${\mathop{\mathrm{Aut}}\nolimits}(S_n,S) = {\mathop{\mathrm{Inn}}\nolimits}(S_n) \cong S_n$.
By Proposition \[prop:inverse:map:is:aut\] the map $(h: \alpha \mapsto \alpha^{-1})$ is in $G$. We show that $h \notin R(S_n) \rtimes {\mathop{\mathrm{Inn}}\nolimits}(S_n)$. By way of contradiction, suppose $h \in R(S_n) \rtimes {\mathop{\mathrm{Inn}}\nolimits}(S_n)$. Then $h=ab$ for some $a \in R(S_n), b \in {\mathop{\mathrm{Inn}}\nolimits}(S_n)$. Hence $e^h=e^{-1}=e$, and so $e^{ab}=e$. Since $b \in {\mathop{\mathrm{Inn}}\nolimits}(S_n)$, $b$ fixes $e$ and is injective, so $e^a=e$, whence $a=1$. Thus, $h=ab=b \in {\mathop{\mathrm{Inn}}\nolimits}(S_n)$, which is a contradiction since the map $h: \alpha \mapsto \alpha^{-1}$ is not a homomorphism.
Thus $G$ contains $H:= (R(S_n) \rtimes {\mathop{\mathrm{Inn}}\nolimits}(S_n)) \rtimes \mathbb{Z}_2$, where $\mathbb{Z}_2 := \langle h \rangle$ and $R(S_n) \rtimes {\mathop{\mathrm{Inn}}\nolimits}(S_n)$ has index 2 in $H$ and hence is a normal subgroup in $H$.
This implies that the complete transposition graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ has at least $2(n!)^2$ automorphisms, for all $n \ge 3$.
Let $S$ be the set of all transpositions in $S_n$ ($n \ge 3$). Then the complete transposition graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is not normal.
*First proof*: By Proposition \[prop:inverse:map:is:aut\], the inverse map $h: \alpha \mapsto \alpha^{-1}$ is an automorphism of the Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$. The map $h$ fixes the vertex $e$ and also fixes each transposition $(i,j) \in S$. Thus, $h \in L_e$. Since $n \ge 3$, $\exists \alpha \in S_n$ such that $\alpha \ne \alpha^{-1}$. Thus $h$ is not the trivial map and $L_e > 1$. If $n=3$ or $n=4$, it can be confirmed through computer simulations that $R(S_n)$ is not a normal subgroup of the automorphism group of ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$; hence ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is not normal in these cases. If $n \ge 5$, then Theorem \[thm:normal:iff:Le:equals:1\] applies and again ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is not normal.
*Second proof*: Alternatively, Theorem \[thm:aut:completetransp:subgroup\] provides a second proof that the complete transposition graph is not normal: a normal Cayley graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ has the smallest possible full automorphism group $R(S_n) \rtimes {\mathop{\mathrm{Aut}}\nolimits}(S_n,S)$, whereas by Theorem \[thm:aut:completetransp:subgroup\] the complete transposition graph has an automorphism group that is strictly larger. Hence the complete transposition graph is not normal.
Let $S$ be a set of transpositions generating $S_n$ ($n \ge 3$). The only Cayley graphs ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ known so far to be non-normal are those arising from the 4-cycle transposition graph and from the transposition graphs that are complete.
Automorphism group of the complete transposition graph
======================================================
Let $S$ be the set of all transpositions in $S_n$. In the previous section a set of $2(n!)^2$ automorphisms was exhibited for the complete transposition graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$. In this section, it is proved that the complete transposition graph has no other automorphisms, which implies that the subgroup given in Theorem \[thm:aut:completetransp:subgroup\] is in fact the full automorphism group.
\[thm:Le:equals:C2\] Let $S$ be the set of all transpositions in $S_n$ and let $X$ be the complete transposition graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$. Let $L_e(X)$ denote the set of automorphisms of $X$ that fixes the vertex $e$ and each of its neighbors. Then $L_e(X) = \{1,h \}$, where $h: V(X) \rightarrow V(X)$ is the map $\alpha \rightarrow \alpha^{-1}$.
*Proof*: By Proposition \[prop:inverse:map:is:aut\], $L_e \supseteq \{1,h\}$. We need to show that $L_e$ has no other elements.
The vertex $e$ of $X$ corresponding to the identity element in $S_n$ has as its neighbors the set $S$ of all transpositions in $S_n$. Suppose $g$ is an automorphism of $X$ that fixes the vertex $e$ and each vertex in $S$; so $g \in L_e(X)$. Then the set of common neighbors of the three vertices $(1,2), (2,3)$ and $(1,3)$ in $S$, namely the set $\Delta:=\{(1,2,3),(1,3,2)\}$, is a fixed block of $g$. We show that the action of $L_e:=L_e(X)$ on $\Delta$ uniquely determines its action on all the remaining vertices, i.e. that if $g \in L_e$ fixes $\Delta$ pointwise, then $g=1$, and if $g$ interchanges $(1,2,3)$ and $(1,3,2)$, then $g$ extends uniquely to the automorphism $\alpha \mapsto \alpha^{-1}$ of $X$.
Suppose $g \in L_e$ and $g$ fixes $\Delta = \{\alpha,\alpha^{-1} \}$ pointwise, where $\alpha=(1,2,3)$. Let $\beta=(2,3,4)$. We show $g$ fixes $\{\beta,\beta^{-1} \}$ also pointwise. Given any vertex $\gamma \in V(X)$ that is a 3-cycle permutation (so the distance in $X$ between $\gamma$ and $e$ is 2), let $W_\gamma$ be the set of neighbors of $\gamma$ that have distance 3 to $e$ in $X$ (see Figure \[fig:distance:partition\]).
(Figure \[fig:distance:partition\]: schematic of the distance partition of $X$ around $e$, showing $X_1(e)=\{(12),(23),(13)\}$, the vertices $\alpha=(123)$, $\alpha^{-1}$, $\beta$, $\beta^{-1}$ in $X_2(e)$, and the set $W_\alpha$ of neighbors of $\alpha$ lying in $X_3(e)$.)
We claim that $|W_\alpha \cap W_\beta|=|W_{\alpha^{-1}} \cap W_{\beta^{-1}}|=2$ and $|W_\alpha \cap W_{\beta^{-1}} | = |W_{\alpha^{-1}} \cap W_\beta|=1$. Suppose some neighbor of $\alpha=(1,2,3)$ is also a neighbor of $\beta=(2,3,4)$. Then $\exists x, y \in S$ such that $x \alpha = y \beta$. Hence, $\alpha \beta^{-1} = (1,2,3)(2,4,3)=(1,4,3)=x^{-1}y=xy$. Now $(1,4,3)=(1,4)(1,3)=(1,3)(3,4)=(3,4)(1,4)$. So $x \in \{(1,4),(1,3),(3,4)\}$. But if $x=(1,3)$, then $x \alpha = (1,3)(1,2,3)=(2,3)$, so $x \alpha$ has distance 1 to $e$, and $x \alpha \notin W_\alpha$. Thus, there are two solutions $(1,4)$ and $(3,4)$ for $x$ in $x \alpha=y \beta \in W_\alpha \cap W_\beta$. Hence $|W_\alpha \cap W_\beta|=2$. Similarly, $|W_{\alpha^{-1}} \cap W_{\beta^{-1}}|=2$. Now consider $|W_\alpha \cap W_{\beta^{-1}} |$. If $x,y \in S$ are such that $x \alpha=y \beta^{-1}$, then $\alpha \beta=(1,2,3)(2,3,4)=(1,3)(2,4)=xy$. But if $x=(1,3)$, then $x \alpha=(1,3)(1,2,3)=(2,3) \notin W_\alpha$. Thus, $x=(2,4)$, $y=(1,3)$, $x \alpha = (2,4)(1,2,3)=(1,2,4,3)$, and $|W_\alpha \cap W_{\beta^{-1}} | = |\{(1,2,4,3)\}|=1$.
Since $g$ is an automorphism of $X$, it preserves the number of common neighbors of any two vertices. Thus, if $g$ fixes $\alpha$ and $\alpha^{-1}$, by the result in the previous paragraph, $g$ also fixes $\beta$ and $\beta^{-1}$. More generally, if $g$ fixes vertex $(j,k,i)$, then $g$ also fixes $(j,k,\ell)$ for each $\ell \neq j,k,i$. Repeating this process, we see that $g$ fixes all vertices that are 3-cycles in $S_n$. The only other vertices having distance 2 to $e$ in $X$ are those permutations that are a product of two disjoint transpositions, and each of these vertices is also fixed by $g$ by Lemma \[lemma:uniqueC4\].
Thus, if $g \in L_e(X)$ fixes vertex $(1,2,3)$, then $g$ fixes each vertex that has distance 2 to $e$. Let $X_r(e)$ denote the set of vertices that have distance $r$ to $e$. We have that $g$ fixes $X_0(e)$ and $X_1(e)$ pointwise since $g \in L_e$, and it was just shown that if $g$ fixes $(1,2,3) \in X_2(e)$, then $g$ also fixes $X_2(e)$ pointwise. Since $g$ is an automorphism, it maps the neighbors of a vertex $\alpha$ to the neighbors of $\alpha^g$. But by the next proposition (Proposition \[prop:distinct:neighbors\]), any two distinct vertices in $X_k(e)$ $(k \ge 3)$ have a different set of neighbors in $X_{k-1}(e)$. Thus, if $g$ fixes $X_{k-1}(e)$ pointwise, then $g$ also fixes $X_k(e)$ pointwise. By induction on $k$, $g$ is the trivial automorphism.
If $g \in L_e$ interchanges $(1,2,3)$ and $(1,3,2)$, and $h$ is the map $\alpha \mapsto \alpha^{-1}$, then $gh=1$ by the previous paragraph, whence $g=h^{-1}=h$. Thus, $L_e = \{1,h\} \cong C_2$.
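The counting claim $|W_\alpha \cap W_\beta|=|W_{\alpha^{-1}} \cap W_{\beta^{-1}}|=2$ and $|W_\alpha \cap W_{\beta^{-1}}|=|W_{\alpha^{-1}} \cap W_\beta|=1$ used in the proof above can also be checked directly for $n=5$. The Python sketch below is our own illustration (helper names are ours); it uses the standard fact that, for the complete transposition set, the distance from $e$ equals $n$ minus the number of cycles (fixed points included).

```python
# Direct check of the counting claim for n = 5 (our own sketch).
from itertools import combinations

n = 5
def compose(p, q):                      # apply p first, then q (left-to-right, as in the text)
    return tuple(q[p[i]] for i in range(n))

def transposition(i, j):
    t = list(range(n)); t[i], t[j] = j, i
    return tuple(t)

def inverse(p):
    q = [0] * n
    for i, image in enumerate(p):
        q[image] = i
    return tuple(q)

S = [transposition(i, j) for i, j in combinations(range(n), 2)]

def num_cycles(p):                      # number of cycles, fixed points included
    seen, c = set(), 0
    for i in range(n):
        if i not in seen:
            c += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return c

def dist_to_e(p):                       # Cayley distance to e for the complete transposition set
    return n - num_cycles(p)

def W(v):                               # neighbors of v lying in X_3(e)
    return {compose(s, v) for s in S if dist_to_e(compose(s, v)) == 3}

def three_cycle(a, b, c):               # the 3-cycle a -> b -> c -> a (0-indexed)
    p = list(range(n)); p[a], p[b], p[c] = b, c, a
    return tuple(p)

alpha, beta = three_cycle(0, 1, 2), three_cycle(1, 2, 3)    # (1,2,3) and (2,3,4), 0-indexed
print(len(W(alpha) & W(beta)),                      # 2
      len(W(inverse(alpha)) & W(inverse(beta))),    # 2
      len(W(alpha) & W(inverse(beta))),             # 1
      len(W(inverse(alpha)) & W(beta)))             # 1
```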
In the proof above, we used the following result:
\[prop:distinct:neighbors\] Let $n \ge 5$ and let $X={\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ be the complete transposition graph. Let $\alpha$ and $\beta$ be distinct vertices in $X_k(e)$ $(k \ge 3)$. Then the set of neighbors of $\alpha$ in $X_{k-1}(e)$ and of $\beta$ in $X_{k-1}(e)$ are not equal.
*Proof*: Each permutation in $X_k(e)$ can be written as a product of $k$ transpositions, and since the length of this product is minimal, the edges of the transposition graph of $S$ corresponding to these $k$ transpositions form a forest.
Let $\alpha, \beta \in X_k(e)$. If the support of $\alpha$ and of $\beta$ are not equal, then they clearly have different sets of neighbors in $X_{k-1}(e)$ because some transposition in a forest that yielded $\alpha$ is incident to a vertex that does not belong to any forest that yields $\beta$. (For example, if $\alpha=(1,2,3)(4,5)$ and $\beta=(1,2,3,4)$ are two vertices in $X_3(e)$, then $\alpha$ does have a neighbor $(1,2)(4,5)$ in $X_2(e)$ whose support contains 5, but $\beta$ does not have such a neighbor.)
Now suppose $\alpha$ and $\beta$ are distinct vertices in $X_k(e)$ that have the same support. Since $\alpha \ne \beta$, there is a point in their common support, 1 say, such that $1^\alpha \ne 1^\beta$. So suppose $\alpha=(1,2,x_1,\ldots,x_r) \alpha'$ and $\beta=(1,3,y_1,\ldots,y_t) \beta'$. We consider three cases:
Case 1: Suppose $\alpha' = \beta'=1$. Then $\alpha$ and $\beta$ are cyclic permutations of the same length $r$, where $r \ge 4$ since $k \ge 3$. If $\alpha \ne \beta^{-1}$, then we can find two consecutive points in the cycle of $\alpha$ that are not consecutive in the cycle of $\beta$. Suppose $i,j$ are these two points; so $\alpha=(i,j,k,\ldots,m)$ and $\beta=(i,\ell,\ldots,j,p,\ldots)$. Then $\gamma=(i,j)(k,\ldots,m)$ is a neighbor of $\alpha$ in $X_{k-1}(e)$ but not of $\beta$. For if $s \gamma=\beta$ for some transposition $s$, then $s=\beta \gamma^{-1} = (i,\ell,\ldots,j,p,\ldots)(i,j)(k,m,\ldots)$. Now $s$ moves $i$ since $i^s = i^{\beta \gamma^{-1}}=\ell^{\gamma^{-1}} \ne i$. Also, $s$ moves $j$ since $j^s=p^{\gamma^{-1}} \ne j$. If $s=(i,j)$, then $\beta=(i,j)\gamma=(k,\ldots,m)$, which is a contradiction since $\beta$ moves $i$ whereas $(k,\ldots,m)$ fixes $i$. Thus $s$ moves at least 3 points. But then $s$ is not a transposition, a contradiction. Hence $\beta$ does not have $\gamma$ as a neighbor.
If $\alpha=\beta^{-1} = (\alpha_1,\ldots,\alpha_r)$, then $(\alpha_1,\ldots,\alpha_{r-1})$ is a neighbor of $\alpha$ in $X_{k-1}(e)$ but not of $\beta$.
Case 2: Suppose $\alpha'=\beta' \ne 1$. So $\alpha=(1,2,x_1,\ldots,x_r) (\alpha_1,\ldots,\alpha_s) \alpha''$,\
$\beta=(1,3,y_1,\ldots,y_t)(\alpha_1,\ldots,\alpha_s) \alpha''$ for some $s \ge 2$ and some (possibly trivial) permutation $\alpha''$. Let $\gamma=(1,2,x_1,\ldots,x_r)(\alpha_1,\ldots,\alpha_{s-1}) \alpha''$. Then $\gamma$ is a neighbor of $\alpha$ but not of $\beta$.
Case 3: Suppose $\alpha' \ne \beta'$. So $\alpha=(1,2,x_1,\ldots,x_r) \alpha'$ and $\beta = (1,3,y_1,\ldots,y_t) \beta'$ are in $X_k(e)$. If the support of $\alpha'$ and of $\beta'$ are equal, then take $\gamma:=(1,2,x_1,\ldots,x_r) \gamma'$, where $\gamma'$ is any vertex that is adjacent in $X$ to $\alpha'$ and that lies on a shortest $e-\alpha'$ path in $X$. Then $\gamma$ is adjacent to $\alpha$ but not to $\beta$.
On the other hand, if the support of $\alpha'$ and of $\beta'$ are not equal, we consider three subcases:\
(i) Suppose $r=t=0$. Then $\alpha=(1,2)\alpha',\beta=(1,3)\beta'$. Take $\gamma=(1,2)\gamma'$ where $\gamma'$ is any vertex in $X$ adjacent to $\alpha'$ and such that $\gamma'$ lies on a shortest $e-\alpha'$ path in $X$. Then $\gamma$ is a neighbor of $\alpha$ but not of $\beta$ because if $s$ is a transposition, then $s \gamma = s (1,2) \gamma'$ will either split a cycle in $\gamma$ or merge two cycles in $\gamma$, neither of which can produce $(1,3)\beta'$.\
(ii) Suppose $r \ge 1$ and $t=0$. Then $\beta=(1,3)\beta'$. Take $\gamma$ to be $(1,2)(x_1,\ldots,x_r) \alpha'$. As in subcase (i), there does not exist any transposition $s$ such that $s \gamma = \beta$.\
(iii) Suppose $r,t \ge 1$. Let $\alpha = (1,2,x_1,\ldots,x_r) \alpha' = \alpha^0 \alpha'$ and $\beta = (1,3,y_1,\ldots,y_t) \beta' = \beta^0 \beta'$. Let ${\mathop{\mathrm{supp}}\nolimits}(\alpha)$ denote the support of the permutation $\alpha$.
If $3 \notin {\mathop{\mathrm{supp}}\nolimits}(\alpha^0)$, take $\gamma = (1,3)(y_1,\ldots,y_t) \beta'$. Then $\gamma$ is a neighbor of $\beta$. But if $\alpha=s \gamma$ for some transposition $s$, then $s$ must modify the cycle $(1,3)$ of $\gamma$, hence must merge this cycle with another one. The merged cycle will contain both 1 and 3, whence $s \gamma \ne \alpha$ because $3 \notin {\mathop{\mathrm{supp}}\nolimits}(\alpha^0)$. Similarly, if $2 \notin {\mathop{\mathrm{supp}}\nolimits}(\beta^0)$, then take $\gamma=(1,2)(x_1,\ldots,x_r) \alpha'$, and $\gamma$ is a neighbor of $\alpha$ but not of $\beta$.
Finally, suppose $3 \in {\mathop{\mathrm{supp}}\nolimits}(\alpha^0)$ and $2 \in {\mathop{\mathrm{supp}}\nolimits}(\beta^0)$. Split $\alpha^0=(1,2,\ldots,3,\ldots)$ before the 3 to get $\gamma^0=(1,2,\ldots)(3,\ldots)$. Let $\gamma:=\gamma^0 \alpha'$. Then $\gamma$ is a neighbor of $\alpha$. If $\gamma$ is also a neighbor of $\beta=(1,3,y_1,\ldots,y_t) \beta'$, then $s \gamma=\beta$ for some $s$ that merges the two cycles in $\gamma^0$. But such a merge will produce a single cycle that has the same support as $\alpha^0$, whereas ${\mathop{\mathrm{supp}}\nolimits}(\alpha^0) \ne {\mathop{\mathrm{supp}}\nolimits}(\beta^0)$ by hypothesis. Hence $\gamma$ is not a neighbor of $\beta$.
\[cor:ubound:numauts:completetranspgraph\] Let $S$ be the set of all transpositions in $S_n$ ($n \ge 3$). Then $$|{\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(S_n,S))| \le 2(n!)^2.$$
*Proof*: Let $G:={\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(S_n,S))$. The upper bound is verified to be exact if $n=3,4$ by computer simulations. If $n \ge 5$, by Proposition \[prop:Ge:restrictedtoS:is:in:AutLT\], every element in $G_e$, when restricted to $S$, is an automorphism of the line graph of the transposition graph of $S$. The transposition graph is complete, and hence its line graph has automorphism group isomorphic to $S_n$ (cf. Theorem \[thm:Whitney:graph:linegraph:sameautgroup\]). Hence $|G_e| \le |S_n|~ |L_e|$. Also, $|L_e|=2$, hence $|G_e| \le 2(n!)$. Thus $|G| = |V(X)|~ |G_e| \le n! \cdot 2(n!) = 2(n!)^2$.
By Corollary \[cor:ubound:numauts:completetranspgraph\], the subgroup given in Theorem \[thm:aut:completetransp:subgroup\] is in fact the full automorphism group:
Let $S$ be the set of all transpositions in $S_n$ ($n \ge 3$). Then, the automorphism group of the complete transposition graph ${\mathop{\mathrm{Cay}}\nolimits}(S_n,S)$ is $${\mathop{\mathrm{Aut}}\nolimits}({\mathop{\mathrm{Cay}}\nolimits}(S_n,S)) = (R(S_n) \rtimes {\mathop{\mathrm{Inn}}\nolimits}(S_n)) \rtimes \mathbb{Z}_2,$$ where $R(S_n)$ is the right regular representation of $S_n$, ${\mathop{\mathrm{Inn}}\nolimits}(S_n)$ is the inner automorphism group of $S_n$, and $\mathbb{Z}_2 = \langle h \rangle$, where $h$ is the map $\alpha \mapsto \alpha^{-1}$.
[^1]: 53 Deonar House, Deonar Village Road, Mumbai - 88, India. Correspondence address: `[email protected]`.
|
---
author:
-
bibliography:
- 'IEEEabrv.bib'
- 'ictai-2015.bib'
title: 'Exploiting n-gram location for intrusion detection'
---
Intrusion detection systems; Semi-supervised learning; N-grams; Anomaly detection; FTP traffic;
|
---
abstract: 'We apply a new threshold detection method based on the extreme value theory to the von Kármán sodium (VKS) experiment data. The VKS experiment is a successful attempt to get a dynamo magnetic field in a laboratory liquid-metal experiment. We first show that the dynamo threshold is associated to a change of the probability density function of the extreme values of the magnetic field. This method does not require the measurement of response functions from applied external perturbations, and thus provides a simple threshold estimate. We apply our method to different configurations in the VKS experiment showing that it yields a robust indication of the dynamo threshold as well as evidence of hysteretic behaviors. Moreover, for the experimental configurations in which a dynamo transition is not observed, the method provides a way to extrapolate an interval of possible threshold values.'
author:
- 'Davide Faranda,'
- Mickael Bourgoin
- Sophie Miralles
- Philippe Odier
- 'Jean-Francois Pinton'
- Nicolas Plihon
- Francois Daviaud
- Bérengère Dubrulle
bibliography:
- 'dynamo.bib'
title: Robust estimate of dynamo thresholds in the von Kármán sodium experiment using the Extreme Value Theory
---
It is generally accepted that the planetary magnetic field is generated by dynamo action, an instability mechanism inside the liquid conducting fluid of the planetary core. There is, however, presently no general theory providing an estimate for the corresponding dynamo threshold, except in some particular cases [@stieglitz2001experimental; @radler1998karlsruhe; @gailitis2001magnetic]. The main difficulties in computing the threshold derive from the turbulent nature of the flow, which makes the dynamo action akin to a problem of instability in the presence of multiplicative noise [@leprovost2005turbulent]. As more and more data from experiments become available [@berhanu2007magnetic; @spence2007turbulent; @kelley2007inertial], the possibility of devising precise, almost automated methods for dynamo threshold detection would be welcome. The statistical approach to this question traditionally involves so-called indicators of criticality [@scheffer2009early]. Some of these indicators are based on modifications of the auto-correlation properties of specific observables when parameters controlling the system approach some critical value, others on the fact that an increase of the variance and the skewness is observed when moving towards tipping points [@kuehn2011mathematical]. Other approaches are based on the definition of *ad hoc* susceptibility functions or critical exponents [@monchaux2009Karman; @berhanu2009bistability; @miralles]. In [@lahjomri1993cylinder; @miralles], the decay of externally applied magnetic field pulses is studied and the transition is detected through the divergence of the decay times near the dynamo threshold. Although interesting for controlled laboratory applications, this approach cannot be extended to problems involving planetary scales. In the present paper, we suggest that the statistical approach based on the Extreme Value Theory proposed in [@farandamanneville] could provide a robust determination of the threshold even in the presence of turbulence. The main advantage of the present method is that it yields a precise and unique determination of the threshold as the location of the zero crossing of a statistical parameter $\kappa$. It therefore works even in the case of imperfect bifurcation that usually occurs in experimental dynamos due to the ambient magnetic field (Earth field, residual magnetization of the disks and other magnetic perturbations of the setup). To illustrate the possibilities of the method, we analyse data from the VKS experiment, consisting of a von Kármán swirling flow of liquid sodium. In this experiment, turbulent effects are roughly of the same order as the mean flow. The control parameter of the system is the magnetic Reynolds number $Rm$, which is proportional to the driving impellers' rotation frequency $F$. Several dynamo and non-dynamo configurations have been obtained by changing the material of the impellers and of the cylinder [@miralles; @boisson2012symmetry] and by varying the impellers' rotation frequency. This versatility allows for reproducing a spectrum of magnetic field dynamics which can be observed for the planetary magnetic fields, such as reversals [@berhanu2007magnetic], bistability [@berhanu2009bistability; @miralles2] or localization [@gallet2012experimental]. Applying our method to several different configurations, we show in the present article that it provides a robust indication of the dynamo threshold as well as evidence of hysteretic behaviors.\
#### Method {#method .unnumbered}
We use the statistical approach based on the Extreme Value Theory proposed in [@farandamanneville] as a criterion allowing the determination of the dynamo threshold. We briefly recall the basic intuition behind the method, referring to [@farandamanneville] for further discussions. Classical Extreme Value Theory (EVT) states that, under general assumptions, the statistics of maxima $M_m=\max\{ X_0,X_1, ..., X_{m-1}\}$ of independent and identically distributed (i.i.d.) variables $X_0, X_1,\dots, X_{m-1}$, described by the cumulative distribution function (cdf) $$F(x)=P\{a_m(M_m-b_m) \leq x\},$$ where $a_m$ and $b_m$ are normalizing sequences, asymptotically obeys a Generalized Extreme Value (GEV) distribution with cumulative distribution function: $$F_{G}(x; \mu, \sigma, \kappa)=\exp\left\{-\left[1+{\kappa}\left(\frac{x-\mu}{\sigma}\right)\right]^{-1/{\kappa}}\right\}
\label{cumul}$$ with $1+{\kappa}(x-\mu)/\sigma>0 $. The [*location parameter*]{} $\mu \in \mathbb{R}$ and the [*scale parameter*]{} $\sigma>0$ in Equation \[cumul\] account for the normalization of the data, avoiding the recourse to scaling constants $a_m$ and $b_m$ [@LLR83].
The sign of $\kappa$ discriminates the kind of tail decay of the parent distribution: when ${\kappa} = 0$, the distribution is of Gumbel type (type 1). This is the asymptotic Extreme Value Law (EVL) to be expected when the parent distribution shows an exponentially decaying tail. The Fréchet distribution (type 2), with $\kappa>0$, is instead observed when the parent distribution possesses a fat tail decaying as a power law. Finally, the Weibull distribution (type 3), with $\kappa<0$, corresponds to a parent distribution having a finite upper endpoint. When properties of maxima and minima are of interest, respectively corresponding to the exploration of the right or left tails of the parent distribution, they can be treated on an equal footing by considering the minima as maxima of the variables after sign reversal [@coles2001introduction]. Physical observables have generally bounded fluctuations and their extremes follow Weibull distributions [@holland2012extreme; @lucarini2012extreme]. Gaussian fluctuations (featuring Brownian motion of microscopic degrees of freedom) would yield the formal possibility of infinite extremes and thus Gumbel distributions, but the convergence towards this law is logarithmically slow [@hall1979rate] so that a Weibull law is observed in these cases as well. The interest of the EVL statistics in bifurcation detection relies on the change of the nature of the fluctuations of a given system, when going from a situation with one stable attractor to a situation with two competing attractors, with jumps between the two allowed either under the effect of external noise or due to internal chaotic fluctuations. In such a case, two time scales are present, a short one related to transitive dynamics within an attracting component and a long one corresponding to intermittent jumps from one to the other component. The fluctuations and their extremes are then of a different nature over the two time scales: over the long time scale, some extremes correspond to noisy excursions directed toward the saddle-state and gain a [*global*]{} status as they can trigger jumps from one to the other component. The probability that the observable visits the “anomalous” values associated with these global extremes during a time series of length $s$ increases, and the tail of the parent distribution becomes large. Through the bifurcation, we are thus in a situation where the parent distribution goes from bounded fluctuations (with extremes converging to a Weibull law) to fluctuations with fat tails (with extremes converging to a Fréchet distribution). The shape parameter $\kappa$ then changes through the bifurcation from $\kappa<0$ to $\kappa>0$, which enables a precise definition of the threshold as the value at which the zero crossing of $\kappa$ happens. Physical observables will display deviations of greater amplitude in the direction of the state into which the system is doomed to tumble than in the opposite direction; therefore one expects to observe this switching either in the maxima or in the minima.\
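As a minimal numerical illustration of this sign criterion (our own sketch, not part of the VKS analysis; it assumes NumPy and SciPy are available), one can fit the GEV to block maxima drawn from a bounded and from a fat-tailed parent distribution and read off the sign of $\kappa$. Note that SciPy parametrizes the GEV with a shape parameter $c=-\kappa$.

```python
# Minimal sketch: block maxima of a bounded parent give kappa < 0 (Weibull type),
# those of a power-law parent give kappa > 0 (Frechet type).  SciPy's shape is c = -kappa.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
m, n_blocks = 4000, 100                      # block length and number of blocks

def block_maxima(x, m):
    return x[: (len(x) // m) * m].reshape(-1, m).max(axis=1)

samples = {
    "bounded (beta)":      rng.beta(2.0, 3.0, size=m * n_blocks),   # finite upper endpoint
    "fat-tailed (pareto)": rng.pareto(2.0, size=m * n_blocks),      # power-law tail
}
for name, x in samples.items():
    c, loc, scale = genextreme.fit(block_maxima(x, m))
    print(f"{name:22s} kappa = {-c:+.2f}")   # expected sign: negative, then positive
```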
#### Experimental set-up {#experimental-set-up .unnumbered}
Here we focus on the VKS experiment, consisting of a von Kármán swirling flow of liquid sodium. The dynamo is generated in a cylinder of radius $R_0=289$ mm by the motion of two coaxial discs of radius $R_{imp}=154.5$ mm, counter-rotating at a frequency $F$. We define the magnetic Reynolds number as $Rm=2\pi\mu_0 \sigma R_{imp} R_0 F$, where $\sigma=9.6\times10^6$ $\Omega^{-1}\cdot{\rm m}^{-1}$ is the sodium electrical conductivity and $\mu_0$ the permeability of vacuum. In the sequel, we use data from the 8 configurations obtained by changing the material of the impellers and of the cylinder as shown in Fig. \[configurations\] and described in [@miralles]. Magnetic fields are recorded using four arrays of ten 3-axis Hall effect sensors inserted in radial shafts, as shown in Fig. \[manip\]. Two arrays are inserted in the mid-plane of the vessel, within long probe shafts (labeled b and d in Fig. \[manip\]); the other two are inserted closer to the impellers, within shorter probe shafts (labeled a and c in Fig. \[manip\]). The magnetic fields at the sensors are recorded at a rate of 2000 Hz, with accuracy $\pm 0.1$ G. Overall, the probes provide measurements of the 3 components of the magnetic field $\vec{B}(t)$ as a function of time $t$.
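For orientation, the definition of $Rm$ can be evaluated directly from the parameters quoted above; the short sketch below is our own back-of-the-envelope check (the frequency value is only illustrative, not a quoted experimental setting) and shows that, according to this formula, $Rm \simeq 3.4\,F$ with $F$ in Hz.

```python
# Back-of-the-envelope evaluation of Rm from the parameters quoted above (our own check).
import math

mu0 = 4 * math.pi * 1e-7          # vacuum permeability [H/m]
sigma = 9.6e6                     # sodium electrical conductivity [1/(Ohm m)]
R_imp, R_0 = 0.1545, 0.289        # impeller and cylinder radii [m]

def Rm(F):                        # F: impeller rotation frequency [Hz]
    return 2 * math.pi * mu0 * sigma * R_imp * R_0 * F

print(round(Rm(1.0), 2))          # ~3.38, i.e. Rm grows as about 3.4 x F
print(round(Rm(13.0), 1))         # ~44, the threshold region discussed in the Results
```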
![Experimental setup, showing the location of the Hall probes. $x$ is the axial coordinate directed from impeller 1 to impeller 2 []{data-label="manip"}](SetupVKS2R_P13.pdf){width="80mm"}
![Schematic representation of the studied VKS configurations. Gray colors stands for stainless steel, yellow color for copper and red for soft iron.[]{data-label="configurations"}](config_vks_P13.pdf){width="140mm"}
#### Application to VKS data. {#application-to-vks-data. .unnumbered}
We present the results for the detection of the dynamo threshold $Rm^*$ by using as observable the modulus of the magnetic field $|\vec{B}(t)|$ measured by the 40 different detectors (Hall probes).
The method can be described as follows. First of all, the extremes of the magnetic field are extracted by using the so-called [*block maxima approach*]{} which consists in dividing the series $|\vec{B}(t)|, \ t=1,2,...,s$ into $n$ bins each containing $m$ observations ($s=nm$) and then selecting the maximum (minimum) $M_j$ in each bin. The series of $M_j, j=1,...,n$ is then fitted to the GEV distribution via the L-moment procedure described in [@faranda2012generalized]. In order to sample proper extreme values one has to consider a bin length longer than the correlation time $ \tau $. For each of the sensors we have computed $\tau$ as the first zero of the autocorrelation function, finding that $0.42$ s $< \tau < 1$ s depending on the case considered. This value is similar to the magnetic diffusion time found in [@bourgoin2002magnetohydrodynamics]. By choosing a bin duration longer than $1$ s (or, equivalently, a number of samples $m$ in each bin larger than 2000) and repeating the fit until the shape parameter $\kappa$ no longer changes appreciably, one can establish the convergence to the GEV model [@LLR83]. In our experiments we found that reliable estimates can be generally obtained for $m>4000$. Since the length of each series is $ 10^5<s<3\cdot10^5$, for any choice of $m>4000$ no more than $n=100$ maxima can be extracted. Such a value of $n$ is one order of magnitude smaller than the one prescribed in [@eckmann1992fundamental; @faranda2011numerical] for avoiding biased fits to the GEV model. In order to overcome this problem we have grouped sensors located at the same radial position. The sensors of the four arrays are not installed at the same radial distance (see Fig. \[manip\] for a visual explanation). However, an effective radial grouping can be obtained by adding to the $n$ extremes of the sensor $a_l$ the ones of $b_{l+2},c_l$ and $d_{l+2},\ l=1,...,8$, thus obtaining 8 different series with a sufficient number of maxima to perform the fit. The choice of grouping the sensors by their radial location is justified by checking that the shape of the distribution, which enters the computation of the shape parameter $\kappa$, does not change substantially for sensors located at the same radial position. In order to do so, we have computed the skewness and the kurtosis for the time series of the magnetic field, finding small variations for sensors located at the same radial position. We also checked that the maxima extracted by combining the series are independent by analyzing the cross-correlation function of different sensors. For example, for sensors $a$ and $b$, the cross-correlation function is defined as: $$\tilde{\tau}_{M(a),M(b)}(h)=\frac{1}{n}\sum_{j=1}^{n-h} (M_j(a)- \langle M_j(a)\rangle_j)(M_{j+h}(b)- \langle M_j(b)\rangle_j).$$ Here, the notation $\langle \cdot\rangle_j$ indicates the expectation value taken over the $j$ index. The results of this analysis are shown in Fig. \[crosscorr\], for maxima in the case $Rm\simeq33.8$, sensor index 5, in the R configuration. The plots on the left refer to $m=1000$, the ones on the right to $m=4000$. From top to bottom we represent $\tilde{\tau}(h)$ respectively for $h=0$, $h=+5$ and $h=-5$. The case $h=0$ corresponds to sensors located at the same radial position, which we grouped in our study.
One can observe that although the correlation is non-zero, it is relatively small (about 0.5 for neighboring probes and smaller than 0.2 for non-neighboring probes), which validates our grouping of the sensors to increase our statistics. In addition, for sensors located at different radial positions the decorrelation is total (see the example at $h=\pm 5$ in Fig. \[crosscorr\]; the decorrelation already starts at $h=\pm 1$). This indicates that the different series we show are totally independent.\
![ Cross correlation $\tilde{\tau}_{M(\lambda),M(\mu)}(h)$ for $m=1000$ (left panels) and $m=4000$ (right panels) $\lambda=\{a,b,c,d\}$, $\mu=\{a,b,c,d\}$. From top to bottom panels: $h=0$, $h=5$, $h=-5$. R configuration, $Rm\simeq 33.8$ []{data-label="crosscorr"}](cross1000.pdf "fig:"){width="80mm"} ![ Cross correlation $\tilde{\tau}_{M(\lambda),M(\mu)}(h)$ for $m=1000$ (left panels) and $m=4000$ (right panels) $\lambda=\{a,b,c,d\}$, $\mu=\{a,b,c,d\}$. From top to bottom panels: $h=0$, $h=5$, $h=-5$. R configuration, $Rm\simeq 33.8$ []{data-label="crosscorr"}](cross4000.pdf "fig:"){width="80mm"}
Due to the different size of the fluctuations, extremes have been renormalized using the following, rather standard, definition: $$\tilde{M}_j(a_l) =\frac{M_j(a_l) - \langle M(a_l)\rangle_j}{ \sqrt{\big\langle \big(M_j(a_l) - \langle M(a_l)\rangle_j\big)^2 \big\rangle_j}}$$
The same normalization applies to the sensors $b,c,d$. There are less trivial ways of normalizing the extremes, e.g. by choosing location indicators other than the expected value, such as the median or the mode (the most probable value). We therefore repeated the analysis replacing the mean with such indicators and checked that the results do not change appreciably.
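Schematically, the pipeline described above (block maxima per sensor, pooling of the sensors sharing a radial position, normalization, GEV fit) can be condensed into a few lines. The sketch below is our own illustration on a synthetic stand-in for the data; the array shape, the function names and the value of $m$ are assumptions, and SciPy's shape parameter $c$ equals $-\kappa$.

```python
# A schematic, self-contained version of the pipeline above (our own sketch).
# `B` is a hypothetical stand-in for the field modulus of the four sensors that
# share one radial position; shapes, names and m are assumptions, not VKS code.
import numpy as np
from scipy.stats import genextreme

def block_maxima(x, m):
    return x[: (len(x) // m) * m].reshape(-1, m).max(axis=1)

def normalize(M):
    # standardize the maxima of one sensor, as in the formula above
    return (M - M.mean()) / M.std()

def kappa_at_radius(B, m=4000):
    """Pool the normalized block maxima of the grouped sensors and fit the GEV."""
    pooled = np.concatenate([normalize(block_maxima(b, m)) for b in B])
    c, loc, scale = genextreme.fit(pooled)
    return -c                                # SciPy's shape parameter is -kappa

# usage on synthetic data standing in for one radial group of four sensors
B = np.abs(np.random.default_rng(1).normal(size=(4, 200_000)))
print(kappa_at_radius(B))
```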
![Upper panel: bifurcation in terms of the magnetic field averaged over all the sensors. Central panel: Shape parameters vs Reynolds magnetic number for the 8 group of sensors (each in a different color), $<k>_l$ , (thick black line) and Gumbel law $\kappa=0$ (dashed line) in the R configuration. Lower panel: same as the central panel but for the minima.[]{data-label="mm"}](MaxMin.pdf){width="130mm"}
![Two histograms for the normalized maxima $\tilde{M}$ of the sensors 6 (black markers) and correspondent fits to the GEV distribution (red lines). Left: $Rm\simeq27$ the maxima are bounded: $\kappa=-0.21$. Right: $Rm\simeq48$, some maxima are detached with respect to the bulk statistics. These events trigger the transition of $\kappa$ towards positive values: $\kappa=+0.01$.[]{data-label="histo"}](fig5a.pdf "fig:"){width="80mm"} ![Two histograms for the normalized maxima $\tilde{M}$ of the sensors 6 (black markers) and correspondent fits to the GEV distribution (red lines). Left: $Rm\simeq27$ the maxima are bounded: $\kappa=-0.21$. Right: $Rm\simeq48$, some maxima are detached with respect to the bulk statistics. These events trigger the transition of $\kappa$ towards positive values: $\kappa=+0.01$.[]{data-label="histo"}](fig5b.pdf "fig:"){width="80mm"}
![$\langle \kappa \rangle_l$, (blue line) and Gumbel law $\kappa=0$ (dashed black line) vs magnetic Reynolds number in the R configuration. The arrows indicate the direction of variation of $Rm$ in the experiment. []{data-label="hyster"}](hysteresis.pdf){width="100mm"}
#### Results. {#results. .unnumbered}
We begin the analysis by computing the dynamo threshold $Rm^*$ in the experiments performed with the configuration R featuring soft iron impellers. This configuration produces a well-documented stationary dynamo at $Rm\approx 44$, thereby providing a fair test of our method [@berhanu2010dynamo]. In the run we analyze, the magnetic Reynolds number is increased monotonically from $Rm\simeq26$ up to $Rm\simeq54$. By monitoring the value of $|\vec{B}|$ as a function of $Rm$, represented in the upper panel of Fig. \[mm\], one observes a sudden increase of the magnetic field amplitude around the value $Rm\approx 44$, leading to the previous definition of the threshold parameter as $Rm^*= 44$. The observation of variations of $|\vec{B}|$ provides interesting information about the detection of the threshold through the EVL method. Indeed, since beyond the dynamo threshold $Rm^*$ the values of $|\vec{B}|$ are significantly higher, we expect to detect the transition by the change of sign of the shape parameter of the maxima distribution, whereas the minima shape parameter should remain negative even across the transition. Results are shown in Fig. \[mm\] for the shape parameter of the maxima (central panel) and of the minima (lower panel). Each color represents the curve of $\kappa$ obtained by grouping the sensors located at the same radial position, whereas the thick lines respectively represent an average over $l$ (solid black line) and the Gumbel law (dashed black line). As $Rm$ approaches 47, the average shape parameter for the distribution of maxima first decreases, then increases and changes sign at $Rm=47$, whereas for the minima it remains negative. We therefore set the threshold value $Rm^*\simeq47$. The decrease before the change of sign might be a signature of earth-field expulsion before dynamo onset. The change of sign, characteristic of dynamo onset, is associated with a change in the nature of the distribution of maxima of the magnetic field, as expected from EVL theory. Indeed, we have plotted in Fig. \[histo\] two histograms for the maxima distribution, one for a value of $Rm$ far from the transition (left plot) and one for $Rm$ close to the bifurcation (right plot). Whereas in the first case the distribution of maxima is bounded above, in the second case the largest values of $\tilde{M}$ will eventually trigger the transition and are responsible for the change of sign of $\kappa$. One may note in Fig. \[mm\] that, contrary to the transition presented in [@farandamanneville], here there is an evident effect also on the minima shape parameter, which tends to more negative values for $Rm>Rm^*$. This effect, definitely due to the complex geometry of the two attracting basins involved in the transition, is difficult to quantify and will be addressed specifically in future publications.
At low rotation frequencies, the fit for each group of sensors returns a shape parameter statistically dispersed around the average, with no radial dependence. On the contrary, for $Rm>Rm^*$ the shape parameter crosses zero for increasing values of $Rm$ as the radial location of the sensors increases. This effect is even more pronounced for sensors outside the flow (i.e. sensors 9 and 10 of probes a and c, not shown in Fig. \[mm\] since at these radial locations only two sensors were available instead of 4). This means that a threshold detection based only on external sensors is likely to overestimate the threshold. This has, of course, great implications for the detection of thresholds of planetary magnetic fields from observations, as we are likely to observe only an equivalent of the outer sensors. This analysis nevertheless confirms a posteriori the reasonableness of grouping the sensors by radial position.\
Hysteresis has been previously reported in the VKS experiment [@monchaux2009Karman; @berhanu2010dynamo] and was also observed in the R configuration under scrutiny here: in order to shut down the dynamo one has to decrease the magnetic Reynolds number to values smaller than $Rm^*$. This is presumably an effect of the residual magnetization of the iron impellers. This hysteresis is a good test for further validation of the results obtained via the extreme-value-based technique, since the curve of the shape parameter should be able to detect some hysteretic behavior. If we redefine the dynamo activation threshold found in the previous analysis as $Rm^*_f=47$, $f$ indicating the first passage in the forward direction of the experiment, we expect to find a dynamo deactivation threshold $Rm^*_b< Rm^*_f $, $b$ indicating the backward experiment obtained by decreasing $Rm$ from $Rm\simeq 55$ to $Rm=30$. We have then analyzed a run in which the magnetic Reynolds number is first increased monotonically from $Rm\simeq26$ up to $Rm\simeq54$, then decreased monotonically from $Rm\simeq54$ down to $Rm\simeq26$. The results shown in Fig. \[hyster\] for the maxima average shape parameter $\langle \kappa \rangle_l,\ l=1,\ldots,8$, clearly indicate the presence of a hysteresis cycle in agreement with expectation. We have already commented on the forward part of the experiment, repeated in Fig. \[hyster\] for clarity and represented by the rightward arrows. When the frequency is instead decreased, a Fréchet extreme value law is observed until $Rm^*_b\simeq 37 <Rm^*_f$. At this value, the shape parameter crosses the Gumbel law and approaches again the Weibull distribution of the maxima. Note also that the shape parameter for the minima (not shown here) always remains negative even in the backward transition, as expected from the theory described so far.
The same analysis has been carried out for all the configurations shown in Fig. \[configurations\]. The corresponding $Rm_f^*$ and $Rm^*_b$ are reported in the table below. For comparison, we have included in the table values estimated via three other techniques: from the increase of the magnetic field amplitude $|\vec{B}|$ (denoted $Rm_{|\vec{B}|}$) [@monchaux2009Karman; @berhanu2009bistability], from decay time divergence $Rm^{d}$ [@miralles] and via induction $Rm^i$ [@miralles]. The value of the shape parameter remains negative for both the maxima and the minima in the configurations P, Q, Q’, S, T, where dynamos have not been observed, whereas the method is able to detect the dynamo and the hysteretic behavior for the U and V setups. These results are in agreement with [@miralles]. For the configurations in which a dynamo is not observed within the range of accessible $Rm$ (runs P, Q, Q’, S, T), it is interesting to follow [@miralles] and try to estimate a possible dynamo threshold by extrapolation techniques. Indeed, in the Q’, S and T configurations, we observed that the values of $\kappa$ increase monotonically for at least the 3 highest consecutive values of $Rm$. An example is shown for the Q’ configuration in Fig. \[extra\]. An extrapolated threshold value $Rm^e$ can then be found by applying a polynomial fit of the $\langle \kappa \rangle_l$ curve and detecting the location of the zero crossing. Of course, as seen in Fig. \[extra\], the value of $Rm^e$ depends on the order of the polynomial fit: for example, the value of $Rm^e$ obtained by a linear or a quadratic fit is larger than what is obtained through higher-order polynomial fits. We then turned back to configurations R, U, V, and found that a cubic fit of the $\langle \kappa \rangle_l$ values with $Rm< Rm^*$ provides an extrapolated threshold value $Rm^e$ that is close to the $Rm^*$ determined via the real data. We thus applied this cubic extrapolation technique to Q’, S and T, and obtained the values of $Rm^e$ reported in the table. The extrapolated values found here are generally smaller than the ones found by Miralles et al. [@miralles], but in both cases the extrapolation carries great uncertainty.\
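A minimal version of the cubic-extrapolation step reads as follows; the $\langle \kappa \rangle_l$ values in the sketch are synthetic (generated from a cubic whose first zero is at $Rm=50$), not VKS measurements, and only illustrate how the zero crossing is located.

```python
# Sketch of the cubic-extrapolation step (illustrative values, not VKS data).
import numpy as np

Rm = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
kappa = 1e-6 * (Rm - 50) * (Rm - 100) * (Rm - 150)   # synthetic <kappa>_l, still negative

coeffs = np.polyfit(Rm, kappa, deg=3)                # cubic fit of <kappa>_l versus Rm
roots = np.roots(coeffs)
real = np.sort(roots[np.abs(roots.imag) < 1e-6].real)
Rm_e = real[real > Rm.max()][0]                      # first zero crossing beyond the data
print(round(float(Rm_e), 1))                         # ~50.0, the extrapolated Rm^e
```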
![$\langle \kappa \rangle_l$, (blue solid error-bar) and Gumbel law $\kappa=0$ (dashed black line) in the Q’ configuration vs Reynolds magnetic number. The red dashed-dotted line, the green solid line and the magenta dashed line represent respectively a linear, quadratic and cubic fits of the data. The linear fit is obtained by considering only the 3 values of $\langle \kappa \rangle_l$ at higher $Rm$. $m=4000$ []{data-label="extra"}](extrap2.pdf){width="100mm"}
Run   $Rm_{|\vec{B}|}$    $Rm^*_f$   $Rm^*_b$   $Rm^e$        $Rm^d$   $Rm^i$
----- ------------------- ---------- ---------- ------------- -------- --------
P - - - - - -
Q - - - - - 200
Q’ - - - 85$\pm$ 10 350 125
R 44 46 37 - 51 56
S - - - 150$\pm$ 25 - -
T - - - 100$\pm$ 25 250 205
U 70 75 66 - 58 100
V 66 67 45 - 71 93
: Dynamo thresholds for various configurations in the VKS experiment, obtained through various techniques: $Rm_{|\vec{B}|}$: from the increase of the magnetic field amplitude $|\vec{B}|$ [@monchaux2009Karman; @berhanu2009bistability]; $Rm^*_f$ and $Rm^*_b$: forward and backward thresholds obtained from the extreme value technique, with zero crossing detection (this paper); $Rm^e$: from the extreme value technique, with cubic extrapolation to detect the zero crossing (this paper); $Rm^{d}$: from decay time divergence extrapolation [@miralles]; $Rm^i$: from induction increase extrapolation [@miralles].
In this article, we have tested a methodology for the detection of the dynamo threshold based on EVT, using datasets produced in the VKS experiment. This technique, applied here for the first time to an experimental dataset, confirms the theoretical expectations of [@farandamanneville] and allows for detecting hysteretic behaviors. The main advantage of the technique is to provide a precise and unambiguous estimate of the thresholds on a probabilistic basis, together with the direction of the shift (towards the maxima or the minima). The analysis can be carried out on any home PC, and many software packages contain the routines necessary for fitting the GEV distribution. In light of the possibility of extracting the magnetic field data from exoplanetary radio emissions, one could exploit the technique described in this article for studying the properties of exoplanetary magnetospheres, thus defining a criterion for the classification of planetary dynamos based on the detected threshold values. Moreover, since hysteretic behaviours are encountered in many other scientific fields, e.g. the reversibility of the thermohaline circulation [@rahmstorf2005thermohaline] in climate sciences or the behavior of economic crises [@martin2012regional], we consider the method to be applicable to a more general class of problems featuring critical transitions.
Acknowledgments
===============
We thank the other members of the VKS collaboration, with whom the experimental runs have been performed. We thank M. Moulin, C. Gasquet, A. Skiara, N. Bonnefoy, D. Courtiade, J.-F. Point, P. Metz, V. Padilla, and M. Tanase for their technical assistance. This work is supported by ANR 08-0039-02, Direction des Sciences de la Matière, and Direction de l’Energie Nucléaire of CEA, Ministère de la Recherche, and CNRS. The experiment is operated at CEA/Cadarache DEN/DTN.
|
---
abstract: |
In the literature there are several methods for comparing two convergent iterative processes for the same problem. In this note we have in view mostly the one introduced by Berinde in \[Picard iteration converges faster than Mann iteration for a class of quasi-contractive operators, Fixed Point Theory and Applications 2, 97–105 (2004)\] because it seems to be very successful. In fact, if IP1 and IP2 are two iterative processes converging to the same element, then IP1 is faster than IP2 in the sense of Berinde. The aim of this note is to prove this almost obvious assertion and to discuss briefly several papers that cite the mentioned Berinde’s paper and use his method for comparing iterative processes.
**MSC** 41A99
**Keywords**: faster convergence; better convergence; Berinde’s method for comparing iterative processes
author:
- 'Constantin Zalinescu[^1]'
title: 'On Berinde’s method for comparing iterative processes'
---
Introduction
============
In the literature there are several methods for comparing two convergent iterative processes for the same problem. In this note we have in view mostly the one introduced by Berinde in [@Ber04 Definition 2.7] because it seems to be very successful. This was pointed out by Berinde himself in [@Ber16]: “This concept turned out to be a very useful and versatile tool in studying the fixed point iterative schemes and hence various authors have used it". However, it was pointed out by Popescu, using [@Pop07 Example 3.4], that Berinde’s method is not consistent. The inconsistency of Berinde’s method is mentioned also by Qing & Rhoades in [@QinRho08 page 2]. Moreover, referring to Berinde’s method, Phuengrattana & Suantai say in [@PhuSua13 page 218] : “It seem not to be clear if we use above definition for comparing the rate of convergence". In fact, if IP$_{1}$ and IP$_{2}$ are two (arbitrary) iterative processes converging to the same element, then IP$_{1}$ is faster than IP$_{2}$ (and vice-versa) in the sense of Berinde ([@Ber04 Definition 2.7]).
The aim of this note is to prove this almost obvious assertion and to discuss briefly several papers that cite [@Ber04] and use Berinde’s method for comparing iterative processes.
Definitions and the main assertion\[sec1\]
==========================================
First, we quote from [@Ber04 pages 99, 100] the text containing the definitions which we have in view; these are reproduced in many papers from our bibliography.
“Definition 2.5. Let $\{a_{n}\}_{n=0}^{\infty}$, $\{b_{n}\}_{n=0}^{\infty}$ be two sequences of real numbers that converge to $a$ and $b$, respectively, and assume that there exists $l=\lim_{n\rightarrow
\infty}\big\vert
\frac{a_{n}-a}{b_{n}-b}\big\vert $.
\(a) If $l=0$, then it can be said that $\{a_{n}\}_{n=0}^{\infty}$ converges *faster* to $a$ than $\{b_{n}\}_{n=0}^{\infty}$ to $b$.
\(b) If $0<l<\infty$, then it can be said that $\{a_{n}\}_{n=0}^{\infty}$, and $\{b_{n}\}_{n=0}^{\infty}$ *have the same rate of convergence*."
“Suppose that for two fixed point iteration procedures $\{u_{n}\}_{n=0}^{\infty}$ and $\{v_{n}\}_{n=0}^{\infty}$, both converging to the same fixed point $p$, the error estimates
$\left\Vert u_{n}-p\right\Vert \leq a_{n}$, $n=0,1,2,..$. (2.7)
$\left\Vert v_{n}-p\right\Vert \leq b_{n}$, $n=0,1,2,..$. (2.8)
are available, where $\{a_{n}\}_{n=0}^{\infty}$ and $\{b_{n}
\}_{n=0}^{\infty}$ are two sequences of positive numbers (converging to zero).
Then, in view of Definition 2.5, we will adopt the following concept.
Definition 2.7. Let $\{u_{n}\}_{n=0}^{\infty}$ and $\{v_{n}\}_{n=0}^{\infty}$ be two fixed point iteration procedures that converge to the same fixed point $p$ and satisfy (2.7) and (2.8), respectively. If $\{a_{n}\}_{n=0}^{\infty}$ converges faster than $\{b_{n}\}_{n=0}^{\infty}$, then it can be said that $\{u_{n}\}_{n=0}^{\infty}$ *converges faster* than $\{v_{n}\}_{n=0}^{\infty}$ to $p$."
Practically, the text above is reproduced in [@Ber16 pages 30, 31], thus becoming Definitions 1.1 and 1.2 there. The only differences are: “(2.7)" and “(2.8) are available, where" are replaced by “(1.7)" and “(1.8) are available *(and these estimates are the best ones available)*, where", respectively.
Immediately after [@Ber16 Definition 1.2] it is said:
“This concept turned out to be a very useful and versatile tool in studying the fixed point iterative schemes and hence various authors have used it, see \[1\]-\[5\], \[18\], \[22\], \[23\], \[28\], \[32\]-\[34\], \[37\]-\[41\], \[40\], \[43\]-\[46\], \[55\]-\[57\], \[66\], \[68\]-\[72\], \[74\], \[78\]-\[81\], to cite just an incomplete list."[^2]
Note that Definition 9.1 from [@Ber07] is equivalent to Definition 2.5 from [@Ber04]; replacing $u_{n}$, $v_{n}$, $p$, $\left\Vert u_{n}
-p\right\Vert $ and $\left\Vert v_{n}-p\right\Vert $ with $x_{n}$, $y_{n}$, $x^{\ast}$, $d(x_{n},x^{\ast})$ and $d(y_{n},x^{\ast})$ in (2.7), (2.8) and Definition 2.7 from [@Ber04], one obtains relations (5), (6) from [@Ber07 page 201] and an equivalent formulation of [@Ber07 Definition 9.2], respectively. Note that these definitions from Berinde’s book [@Ber07] are presented in the lecture [@Ber07b].
Because of the parentheses in “(converging to zero)" in the preamble of [@Ber04 Definition 2.7] (and [@Ber07 Definition 9.2], [@Ber16 Definition 1.2]), the convergence to $0$ of $(a_{n})$ and $(b_{n})$ seems to be optional. This is probably the reason for the absence of this condition in [@FaGhPoRe15 page 3]; note that $(a_{n})$ is a constant sequence in [@VerJaiShu16].
In the next result we use the version for metric spaces of [@Ber04 Definition 2.7] (see [@Ber07 Definition 9.2]).
\[p-fast\]Let $(X,d)$ be a metric space and $(x_{n})_{n\geq1}$, $(y_{n})_{n\geq1}$ be two sequences from $X$ converging to $x^{\ast}\in X$. Then $(x_{n})$ converges faster than $(y_{n})$ to $x^{\ast}$.
Proof. For each $n\geq1$ let us consider $$0<a_{n}:=d(x_{n},x^{\ast})+d(y_{n},x^{\ast})+\frac{1}{n},\quad0<b_{n}
:=\left\{
\begin{array}
[c]{ll}\sqrt{a_{n}} & \text{if }a_{n}\leq1,\\
d(y_{n},x^{\ast}) & \text{otherwise.}\end{array}
\right.$$ It follows that $a_{n}\rightarrow0$, $b_{n}\rightarrow0$, $$d(x_{n},x^{\ast})\leq a_{n},\quad d(y_{n},x^{\ast})\leq b_{n},\quad\forall
n\geq1,$$ and $a_{n}/b_{n}=\sqrt{a_{n}}$ for sufficiently large $n;$ it follows that $\lim_{n\rightarrow\infty}a_{n}/b_{n}=\lim_{n\rightarrow\infty}\sqrt{a_{n}}
=0$. Therefore, $(x_{n})$ converges faster to $x^{\ast}$ than $(y_{n})$ does. $\square$
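To make the content of this proposition concrete, the following numerical sketch (not part of the original argument) builds the estimates $(a_{n})$ and $(b_{n})$ from the proof for two explicit real sequences with the same limit, and checks that the ratio $a_{n}/b_{n}$ tends to $0$, so that the slowly converging sequence is declared “faster" in the sense of [@Ber04 Definition 2.7].

```python
import math

x_star = 0.0

def x(n):            # converges slowly to 0
    return 1.0 / n

def y(n):            # converges much faster to 0
    return 2.0 ** (-n)

def bounds(n):
    """The estimates a_n, b_n constructed in the proof above."""
    a_n = abs(x(n) - x_star) + abs(y(n) - x_star) + 1.0 / n
    b_n = math.sqrt(a_n) if a_n <= 1.0 else abs(y(n) - x_star)
    return a_n, b_n

for n in (10, 100, 1000, 10_000):
    a_n, b_n = bounds(n)
    print(n, a_n, b_n, a_n / b_n)   # the ratio a_n/b_n tends to 0
```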
From our point of view, the preceding result shows that Berinde’s notion of rapidity for fixed point iterative schemes, recalled above, is not useful, even if Berinde in [@Ber16 page 35] maintains that “Of all concepts of rapidity of convergence presented above for numerical sequences, the one introduced by us in Definition 1.2 \[14\] appears to be the most suitable in the study of fixed point iterative methods". Berinde (see [@Ber16 page 36]) mentions that he “tacitly admitted in Definition 1.2 that *the estimates (1.7) and (1.8)* taken into consideration *are the best possible*". Clearly, “the estimates are the best ones available" and “the estimates ... are the best possible" are very different in meaning.[^3]
Of course, *the best possible estimates in relations (1.7) and (1.8)* from [@Ber16] (that is in relations (2.7) and (2.8) from [@Ber04] recalled above) *are* $$a_{n}:=\left\Vert u_{n}-p\right\Vert ,\quad b_{n}:=\left\Vert v_{n}-p\right\Vert \quad(n\geq0). \label{r-1}$$
Assuming that $d(x_{n},x^{\ast})\rightarrow0$, getting (better) upper estimates for $d(x_{n},x^{\ast})$ depends on the proof, including the author’s ability to majorize certain expressions. Surely, *the best available estimates are* exactly *those obtained by the authors in their proofs*.
The use of Berinde’s method for comparing the speeds of convergence is very subjective. It is analogous to deciding that $a/b\leq c/d$ knowing only that $0<a\leq c$ and $0<b\leq d$!
Taking $a_{n}$ and $b_{n}$ defined by (\[r-1\]) in [@Ber04 Definition 2.7] one obtains Definition 3.5 of Popescu from [@Pop07]. Popescu’s definition is used explicitly by Rhoades & Xue (see [@RhoXue10 page 3]), but they attribute it to [@Ber04]; this attribution is wrong because [@Pop07 Definition 3.5] reduces to [@Ber04 Definition 2.5] only in the case in which the involved normed vector space is $\mathbb{R}$. Note that Rhoades knew about Popescu’s definition because [@Pop07] is cited in [@QinRho08 page 2].
Notice that Popescu’s definition is extended to metric spaces by Berinde, Khan & Păcurar in [@BerKhaPac15 page 8], as well as by Fukhar-ud-din & Berinde in [@FukBer16 page 228]; also observe that Popescu’s paper [@Pop07] is not cited in [@BerKhaPac15] and [@FukBer16].
Even if in [@Ber04] it is not defined when two iteration schemes have the same rate of convergence, Dogan & Karakaya obtain that “the iteration schemes $\{k_{n}\}_{n=0}^{\infty}$ and $\{l_{n}\}_{n=0}^{\infty}$ have the same rate of convergence to $p$ of $\wp$" in [@DogKar18 Theorem 2.4]. The proof of [@DogKar18 Theorem 2.4] is based on the fact that one obtained two sequences $(a_{n})$ and $(b_{n})$ converging to $0$ such that $\left\Vert k_{n+1}-p\right\Vert \leq a_{n}$, $\left\Vert l_{n+1}
-p\right\Vert \leq b_{n}$ for $n\geq0$ and $\lim_{n\rightarrow\infty} a_{n}/b_{n}=1$.
Accepting such an argument, and taking $a_{n}:=b_{n}:=d(x_{n},x^{\ast
})+d(y_{n},x^{\ast})+\frac{1}{n}$ in the proof of Proposition \[p-fast\], one should obtain that any pair of sequences $(x_{n})_{n\geq1}$, $(y_{n})_{n\geq1}\subset(X,d)$ with the same limit $x^{\ast}\in X$ have the same rate of convergence.
Recall that Rhoades in [@Rho76 pages 742, 743] says that having “$\{x_{n}\}$, $\{z_{n}\}$ two iteration schemes which converge to the same fixed point $p$, we shall say that $\{x_{n}\}$ is better than $\{z_{n}\}$ if $\left\vert x_{n}-p\right\vert \leq\left\vert
z_{n} -p\right\vert $ for all $n$". It seems that this definition is too restrictive (see for instance [@Ber04 Example 2.8]). In this context we propose the following definition.
\[d-better\]Let $(X,d)$ be a metric space, and let $(x_{n})_{n\geq1}$, $(y_{n})_{n\geq1}\subset(X,d)$ and $x,y\in X$ be such that $x_{n}\rightarrow
x$, $y_{n}\rightarrow y$. One says that $(x_{n})$ converges better to $x$ than $(y_{n})$ to $y$ if there exists some $\alpha>0$ such that $d(x_{n}
,x)\leq\alpha d(y_{n},y)$ for sufficiently large $n;$ one says that $(x_{n})$ and $(y_{n})$ have the same rate of convergence if $(x_{n})$ converges better to $x$ than $(y_{n})$ to $y$, and $(y_{n})$ converges better to $y$ than $(x_{n})$ to $x$.
Using the conventions $\frac{0}{0}:=1$ and $\frac{\alpha}{0}:=\infty$ for $\alpha>0$, \[$(x_{n})$ converges better to $x$ than $(y_{n})$ to $y$\] if and only if $\limsup_{n\rightarrow\infty}\frac{d(x_{n},x)}{d(y_{n},y)}<\infty$; consequently, \[$(x_{n})$ and $(y_{n})$ have the same rate of convergence\] (in the sense of Definition \[d-better\]) if and only if $0<\liminf
_{n\rightarrow\infty}\frac{d(x_{n},x)}{d(y_{n},y)}\leq\limsup_{n\rightarrow
\infty}\frac{d(x_{n},x)}{d(y_{n},y)}<\infty$.
\[ex1\]Consider the sequences $(x_{n})_{n\geq1}$, $(y_{n})_{n\geq1}
\subset\mathbb{R}$ defined by $$x_{n}:=\left\{
\begin{array}
[c]{ll}n^{-1} & \text{if }n\text{ is odd,}\\
(2n)^{-1} & \text{if }n\text{ is even,}\end{array}
\right. \quad y_{n}:=\left\{
\begin{array}
[c]{ll}(2n)^{-1} & \text{if }n\text{ is odd,}\\
n^{-1} & \text{if }n\text{ is even.}\end{array}
\right.$$ Clearly $\lim_{n\rightarrow\infty}x_{n}=\lim_{n\rightarrow\infty}y_{n}=0$, and it is very natural to consider that they have the same rate of convergence; this is confirmed using Definition \[d-better\]. It is obvious that neither $(x_{n})$ is better (faster) than $(y_{n})$, nor $(y_{n})$ is better (faster) than $(x_{n})$ in the senses of Rhoades ([@Rho76]), or Berinde [@Ber04], or Popescu [@Pop07], or Berinde, Khan & Păcurar ([@BerKhaPac15]), or Fukhar-ud-din & Berinde ([@FukBer16]).
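As a quick numerical check of this example (a sketch added here, not part of the original text), one can tabulate the ratio $x_{n}/y_{n}$: it oscillates between $2$ and $1/2$, so the limit required by [@Ber04 Definition 2.5] and by [@Pop07 Definition 3.5] does not exist, while the $\limsup$ and $\liminf$ are finite and positive, as required by Definition \[d-better\].

```python
def x(n):
    return 1.0 / n if n % 2 == 1 else 1.0 / (2 * n)

def y(n):
    return 1.0 / (2 * n) if n % 2 == 1 else 1.0 / n

ratios = [x(n) / y(n) for n in range(1, 11)]
print(ratios)                    # alternates between 2.0 and 0.5
print(max(ratios), min(ratios))  # limsup = 2, liminf = 1/2
```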
Remarks on the use of Berinde and Popescu’s notions in papers citing [@Ber04]
=============================================================================
Practically, all the papers mentioned in the sequel were found on internet when searching, with Google Scholar, the works citing Berinde’s article [@Ber04].
First we give the list of articles, mentioning their authors and results, in which Berinde’s Definition 2.7 from [@Ber04] is used (even if not said explicitly):
Berinde & Berinde – [@BerBer05 Theorem 3.3]; Babu & Prasad – [@BabPra06 Theorem 2.1], [@BabPra06b Theorems 3.1, 3.3]; Olaleru – [@Ola07 Theorem 1], [@Ola09 Theorems 1, 2]; Sahu – [@Sah11 Theorem 3.6]; Akbulut & Özdemir – [@AkbOzd12 Theorem 2.3]; Hussain et al. – [@HusKumKut13 Theorems 18, 19]; Karahan & Ozdemir – [@KarOzd13 Theorem 1]; Abbas & Nazir – [@AbbNaz14 Theorem 3]; Gürsoy & Karakaya – [@GurKar14 Theorem 3]; Kadioglu & Yildirim – [@KadYil14 Theorem 5]; Karakaya et al. – [@KarGurErt14 Theorem 3], [@KarGurErt16 Theorem 2.2]; Kumar – [@Kum14 Theorem 3.1]; Öztürk Çeliker – [@Ozt14 Theorem 8]; Thakur et al. – [@ThaThaPos14 Theorem 2.3],[^4] [@ThaThaPos16 Theorem 3.1]; Chugh et al. – [@ChuMalKum15 Theorem 3.1], [@ChuMalKum15b Theorem 13]; Fathollahi et al. – [@FaGhPoRe15 Propositions 3.1, 3.2, Theorem 3.1, Lemmas 3.2–3.4, Theorems 4.1–4.4]; Gursoy – [@Gur15 Theorem 3]; Jamil & Abed – [@JamAbe15 Theorems 3.1–3.4]; Yadav – [@Yad15 Example 2]; Abed & Abbas, [@AbeAbb16 Theorem (3.8)]; Asaduzzaman et al. – [@AsaKhaAli16 Theorem 3.3]; Mogbademu – [Mog16]{}; Sintunavarat & Pitea – [@SinPit16 Theorem 2.1]; Verma et al. – [@VerJaiShu16];[^5] Alecsa – [@Ale17 Theorems 3.1, 3.3–3.12]; Okeke & Abbas – [@OkeAbb17 Proposition 2.1]; Sharma & Imdad –[@ShaImd17 Remark 4.8]; Yildirim & Abbas – [@YilAbb17 Theorem 2]; Akhtar & Khan – [@AkhKha18 Theorem 3.1–3.3]; Alagoz et al. – [@AlaGunAkb18 Theorem 2.1]; Ertürk & Gürsoy – [@ErtGur18 Theorem 2.3]; Fathollahi & Rezapour – [@FatRez:18 Propositions 2.1–2.3, 3.1, Theorem 3.2]; Garodia & Uddin – [@GarUdd18 Theorem 3.1]; Gürsoy et al. – [@GuEkKhKa18 Theorem 6][^6]; Kosol – [@Kos18 Theorem 2.2]; Kumar & Chauhan – [@KumCha18 Theorems 1, 2]; Piri et al. [@PiDaRaGh18 Lemmas 3.1, 3.2, Theorem 3.3]; Yildirim – [@Yil18 Theorem 2].
As mentioned in Section \[sec1\], Dogan & Karakaya obtain that “the iteration schemes $\{k_{n}\}_{n=0}^{\infty}$ and $\{l_{n}\}_{n=0}^{\infty}$ have the same rate of convergence to $p$ of $\wp$" in [@DogKar18 Theorem 2.4] because $\lim_{n\rightarrow\infty}a_{n} /b_{n}=1$, where the sequences $(a_{n})$ and $(b_{n})$ are such that $\left\Vert
k_{n+1}-p\right\Vert \leq a_{n}$, $\left\Vert l_{n+1}-p\right\Vert
\leq b_{n}$ for $n\geq0$.
It is worth repeating that Popescu (in [@Pop07]) recalls [@Ber04 Definition 2.7], mentions its inconsistency, introduces his direct comparison of iterative processes in [@Pop07 Definition 3.5], and uses this definition in [@Pop07 Theorem 3.7].
Other papers in which [@Pop07 Definition 3.5] is used, without citing it (but possibly recalling [@Ber04 Definition 2.5 and/or Definition 2.7]), are: Xue [@Xue08], Rhoades & Xue [@RhoXue10], Chugh et al. [@ChuKumKum12], Thong [@Tho12], Alotaibi et al. [@AloKumHus13], Hussain et al. [@HusKumKut13][^7], Phuengrattana & Suantai [@PhuSua13], Doğan & Karakaya [@DogKar14], Khan et al. [@KhaKumHus14 Theorem 3.1], Fukhar-ud-din & Berinde [@FukBer16], Gürsoy [@Gur16], Khan et al. [@KhaGurKar16 Theorem 3], Gürsoy et al. [@GurKhaFuk17 Theorem 2.3, Corollary 2.4], and Ertürk & Gürsoy [@ErtGur18 Theorem 2.3].
It is also worth noticing that by taking simple examples in $\mathbb{R}$, Hussain et al. [@HuRaDaLa11 Example 9], Chugh et al. [@ChuKumKum12 Example 4.1], Hussain et al. [@HuChKuRa12 Example 3.1], Kang et al. [@KanCRAK13 Example 11], Karakaya et al. [@KaDoGuEr13 Example 4], Kumar et al.[@KuLaRaHu13 Example 9], Chugh et al. [@ChuMalKum15 Example 14] (see also P. Veeramani’s review MR3352138 from Mathematical Reviews), Chauhan et al. [@ChUtImAh17], Sintunavarat [@Sin17], Wahab & Rauf [@WahRau16 Example 11, Remarks 12–17] and Akewe & Eke [@AkeEke18], “prove" that certain iteration processes are faster than other ones.
[99]{}
Abbas, M, Nazir, T: A new faster iteration process applied to constrained minimization and feasibility problems. Mat. Vesnik, **66**(2), 223–234 (2014)
Abed, SS, Abbas RF: S-iteration for general quasi multi valued contraction mappings. Int. J. Appl. Math. Stat. Sci. **5**(4), 9–22 (2016)
Akbulut, S, Özdemir, M: Picard iteration converges faster than Noor iteration for a class of quasi-contractive operators. Chiang Mai J. Sci. **39**(4), 688–692 (2012)
Akewe, H, Eke, KS: Convergence speed of some random implicit-Kirk-type iterations for contractive-type random operators. Austr. J. Math. Anal. Appl. **15**(2), Article 15, 1–14, (2018)
Akhtar, Z, Khan, MAA: Rates of convergence for a class of generalized quasi contractive mappings in Kohlenbach hyperbolic spaces, arXiv:1802.09773v1 \[math.FA\]
Alagoz, O, Gunduz, B, Akbulut, S: Numerical Reckoning Fixed Points for Berinde Mappings via a Faster Iteration Process. Facta Universitatis, Ser. Math. Inform. **33**(2), 295–305 (2018)
Alecsa, CD: On new faster fixed point iterative schemes for contraction operators and comparison of their rate of convergence in convex metric spaces. Int. J. Nonlinear Anal. Appl. **8**(1), 353–388 (2017)
Alotaibi, A, Kumar, V, Hussain, N: Convergence comparison and stability of Jungck-Kirk-type algorithms for common fixed point problems. Fixed Point Theory Appl. **2013**:173 (2013)
Asaduzzaman, M, Khatun, MS, Ali, MZ: On new three-step iterative scheme for approximating the fixed points of non-expansive mappings. JP J. Fixed Point Theory Appl. **11**(1), 23–53 (2016)
Babu, GVR, Prasad, KV: Mann iteration converges faster than Ishikawa iteration for the class of Zamfirescu operators. Fixed Point Theory Appl. **2006**, Article ID 49615 (2006); erratum ibid. **2007**, Article ID 97986 (2007)
Babu, GVR, Prasad, KV: Comparison of fastness of the convergence among Krasnoselskij, Mann, and Ishikawa iterations in arbitrary real Banach spaces. Fixed Point Theory Appl. **2006**, Article ID 35704 (2006)
Berinde, V: Picard iteration converges faster than Mann iteration for a class of quasi-contractive operators. Fixed Point Theory Appl. **2**, 97–105 (2004)
Berinde, V: Iterative Approximation of Fixed Points. Springer, Berlin (2007)
Berinde, V: Iterative approximation of fixed points (Approximation itérative des points fixes). CNRS, GT Méthodes Numériques, 18 Juin 2007
Berinde, V: On a notion of rapidity of convergence used in the study of fixed point iterative methods. Creat. Math. Inform. **25**(1), 29–40 (2016)
Berinde, V, Berinde, M: The fastest Krasnoselskij iteration for approximating fixed points of strictly pseudo-contractive mappings. Carpathian J. Math. **21**(1-2) (2005), 13–20
Berinde, V, Khan, AR, Păcurar, M: Analytic and empirical study of the rate of convergence of some iterative methods. J. Numer. Anal. Approx. Theory **44**(1), 25–37 (2015)
Chauhan, SS, Utreja, K, Imdad, M, Ahmadullah, M: Strong convergence theorems for a quasi contractive type mapping employing a new iterative scheme with an application. Honam Math. J. **39**(1), 1–25 (2017)
Chugh, R, Kumar, V, Kumar, S: Strong convergence of a new three step iterative scheme in Banach spaces. Amer. J. Comput. Math. **2**, 345–357 (2012)
Chugh, R, Malik, P, Kumar, V: On analytical and numerical study of implicit fixed point iterations. Cogent Math. **2**, Article ID 1021623 (2015)
Chugh, R, Malik, P, Kumar, V: On a new faster implicit fixed point iterative scheme in convex metric spaces. J. Funct. Spaces **2015**, Article ID 905834 (2015)
Doğan, K, Karakaya, V: On the convergence and stability results for a new general iterative process. The Scientific World J. **2014**, Article ID 852475 (2014)
Doğan, K, Karakaya, V: A study in the fixed point theory for a new iterative scheme and a class of generalized mappings. Creat. Math. Inform. **27**(2), 151–160 (2018)
Ertürk, M, Gürsoy, F: Some convergence, stability and data dependency results for a Picard-S iteration method of quasi-strictly contractive operators. Math. Bohemica, (2018) DOI: 10.21136/MB.2018.0085-17
Fathollahi, S, Ghiura, A, Postolache, M, Rezapour, S: A comparative study on the convergence rate of some iteration methods involving contractive mappings. Fixed Point Theory Appl. **2015**:234 (2015)
Fathollahi, S, Rezapour, S: Efficacy of coefficients on rate of convergence of some iteration methods for quasi-contractions, Iran. J. Sci. Tech. Trans. Sci. **42**(3), 1517–1523 (2018)
Fukhar-ud-din, H, Berinde, V: Iterative methods for the class of quasi-contractive type operators and comparsion of their rate of convergence in convex metric spaces. Filomat **30**(1), 223–230 (2016)
Garodia, C, Uddin, I: Solution of a nonlinear integral equation via new fixed point iteration process. arXiv:1809.03771v1 \[math.FA\]
Gürsoy, F: On Huang and Noor’s open problem. arXiv:1501.03318v1 \[math.FA\]
Gürsoy, F: A Picard-S iterative method for approximating fixed point of weak-contraction mappings. Filomat **30**(10) (2016), 2829–2845
Gürsoy, F, Eksteen, JJA, Khan, AR, Karakaya, V: An iterative method and its application to stable inversion, Soft Comput (2018). https://doi.org/10.1007/s00500-018-3384-6
Gürsoy, F, Khan, AR, Fukhar-ud-din, H: Convergence and data dependence results for quasi-contractive type operators in hyperbolic spaces. Hacettepe Journal of Mathematics and Statistics **46**(3), 373–388 (2017)
Gürsoy, F, Karakaya, V: A Picard-S hybrid type iteration method for solving a differential equation with retarded argument. arXiv:1403.2546v2 \[math.FA\]
Hussain, N, Chugh, R, Kumar, V, Rafiq, A: On the rate of convergence of Kirk-type iterative schemes. J. Appl. Math. **2012**, Art. ID 526503 (2012)
Hussain, N, Kumar, V, Kutbi, MA: On rate of convergence of Jungck-type iterative schemes. Abstr. Appl. Anal. **2013**, Article ID 132626 (2013)
Hussain, N, Rafiq, A, Damjanović, B, Lazović, R: On rate of convergence of various iterative schemes. Fixed Point Theory Appl. **2011**:45 (2011)
Jamil, ZZ, Abed, MB: On a modified SP-iterative scheme for approximating fixed point of a contraction mapping. Iraqi J. Science, **56**(4B), 3230–3239 (2015)
Kadioglu, N, Yildirim, I: Approximating fixed points of nonexpansive mappings by a faster iteration process. arXiv:1402.6530v1 \[math.FA\]
Karakaya, V, Doğan, K, Gürsoy, F, Ertürk, M: Fixed point of a new three-step iteration algorithm under contractive-like operators over normed spaces. Abstr. Appl. Anal. **2013**, Article ID 560258 (2013)
Karakaya, V, Gürsoy, F, Ertürk, M: Comparison of the speed of convergence among various iterative schemes. arXiv:1402.6080v1 \[math.FA\]
Karakaya, V, Gürsoy, F, Ertürk, M: Some convergence and data dependence results for various fixed point iterative methods. Kuwait J. Sci. **43**(1), 112–128 (2016)
Karahan, I, Ozdemir, M: A general iterative method for approximation of fixed points and their applications. Adv. Fixed Point Theory **3**(3), 510–526 (2013)
Kang, SM, Ćirić, LB, Rafiq, A, Ali, F, Kwun, YC: Faster multistep iterations for the approximation of fixed points applied to Zamfirescu operators. Abstr. Appl. Anal. **2013**, Article ID 464593 (2013)
Khan, AR, Gürsoy, F, Karakaya, V: Jungck-Khan iterative scheme and higher convergence rate. Int. J. Comput. Math. **93**(12), 2092–2105 (2016)
Khan, AR, Kumar, V, Hussain, N: Analytical and numerical treatment of Jungck-type iterative schemes. Appl. Math. Comp. **231**, 521–535 (2014)
Kosol, S: Strong convergence theorem of a new iterative method for weak contractions and comparison of the rate of convergence in Banach space, Adv. Fixed Point Theory, **8**(3), 303-312 (2018)
Kumar, L: On the fastness of the convergence between Mann and Noor iteration for the class of Zamfirescu operators. IOSR J. Math. **10**(5), 48–52 (2014)
Kumar, N, Chauhan, SS: Analysis of Jungck-Mann and Jungck-Ishikawa iteration schemes for their speed of convergence, AIP Conference Proceedings **2050**, 020011 (2018); doi: 10.1063/1.5083598
Kumar, V, Latif, A, Rafiq, A, Hussain, N: S-iteration process for quasi-contractive mappings. J. Inequal. Appl. **2013**:206 (2013)
Mogbademu, AA: New iteration process for a general class of contractive mappings, Acta Comment. Univ. Tartu. Math. **20**(2), 117–122 (2016)
Okeke, GA, Abbas, M: A solution of delay differential equations via Picard–Krasnoselskii hybrid iterative process. Arab. J. Math. (Springer) **6**(1), 21–29 (2017)
Olaleru, JO: A comparison of Picard and Mann iterations for quasi-contraction maps. Fixed Point Theory **8**(1), 87–95 (2007)
Olaleru, JO: On the convergence rates of Picard, Mann and Ishikawa iterations of generalized contractive operators. Stud. Univ. Babeş-Bolyai Math. **54**(4), 103–114 (2009)
Öztürk Çeliker, F: Convergence analysis for a modified SP iterative method. The Scientific World J. **2014**, Article ID 840504 (2014)
Phuengrattana, W, Suantai, S: Comparison of the rate of convergence of various iterative methods for the class of weak contractions in Banach spaces. Thai J. Math. **11**(1), 217–226 (2013)
Piri, H, Daraby, B, Rahrovi, S, Ghasemi, M: Approximating fixed points of generalized $\alpha$-nonexpansive mappings in Banach spaces by new faster iteration process. Numerical Algorithms (2018)
Popescu, O: Picard iteration converges faster than Mann iteration for a class of quasi-contractive operators. Math. Commun. **12**(2), 195–202 (2007)
Qing, Y, Rhoades, BE: Letter to the editor: Comments on the rate of convergence between Mann and Ishikawa iterations applied to Zamfirescu operators. Fixed Point Theory Appl. **2008**, Article ID 387504 (2008)
Rhoades, BE: Comments on two fixed point iteration methods. J. Math. Anal. Appl. **56**(3), 741–750 (1976)
Rhoades, BE, Xue, Z: Comparison of the rate of convergence among Picard, Mann, Ishikawa, and Noor iterations applied to quasicontractive maps. Fixed Point Theory Appl. **2010**, Article ID 169062 (2010)
Sahu, DR: Applications of the S-iteration process to constrained minimization problems and split feasibility problems. Fixed Point Theory **12**(1), 187–204 (2011)
Sharma, A, Imdad, M: Fixed point approximation of generalized nonexpansive multi-valued mappings in Banach spaces via new iterative algorithms. Dynamic Syst. Appl. **26**, 395–410 (2017)
Sintunavarat, W: An Iterative Process for Solving Fixed Point Problems for Weak Contraction Mappings, Proceedings of the International MultiConference of Engineers and Computer Scientists 2017 Vol II, IMECS 2017, March 15–17, 2017, Hong Kong, 1019–1023
Sintunavarat, W, Pitea, A: On a new iteration scheme for numerical reckoning fixed points of Berinde mappings with convergence analysis. J. Nonlinear Sci. Appl. **9**, 2553–2562 (2016)
Thakur, D, Thakur, BS, Postolache, M: New iteration scheme for numerical reckoning fixed points of nonexpansive mappings. J. Inequal. Appl. **2014**:328 (2014)
Thakur, BS, Thakur, D, Postolache, M: A new iteration scheme for approximating fixed points of nonexpansive mappings. Filomat **30**(10), 2711–2720 (2016)
Thong, DV: The comparison of the convergence speed between Picard, Mann, Ishikawa and two-step iterations in Banach spaces. Acta Math. Vietnam. **37**(2), 243–249 (2012)
Verma, M, Jain, P, Shukla, KK: A new faster first order iterative scheme for sparsity-based multitask learning. 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (2016)
Wahab, OT, Rauf, K: On faster implicit hybrid Kirk-multistep schemes for contractive-type operators. Intern. J. Anal. **2016**, Article ID 3791506 (2016)
Xue, Z: The comparison of the convergence speed between Picard, Mann, Krasnoselskij and Ishikawa iterations in Banach spaces. Fixed Point Theory Appl. **2008**, Article ID 387056 (2008)
Yadav, MR: Two-step iteration scheme for nonexpansive mappings in Banach space. Math. Morav. **19**(1), 95–105 (2015)
Yildirim, I: On the rate of convergence of different implicit iterations in convex metric spaces. Konuralp J. Math. **6**(1), 110-116 (2018)
Yildirim, I, Abbas, M: Convergence rate of implicit iteration process and a data dependence result. arXiv:1703.10357v1 \[math.FA\]
[^1]: Octav Mayer Institute of Mathematics of Romanian Academy, Iasi, Romania, email: [email protected].
[^2]: Throughout this paper the references mentioned in the quoted texts are those in the works from where the texts are taken.
[^3]: Among the 19 papers from our bibliography published in 2017 and 2018, our reference [@Ber16] is mentioned only in [@ErtGur18] and [@GuEkKhKa18].
[^4]: One appreciates here that “In recent years, Definition 2.2 has been used as a standard tool to compare the fastness of two fixed point iterations", Definition 2.2 being [@Ber04 Definition 2.7].
[^5]: See the estimates (23) and (24), as well as the very strange arguments to get the conclusion on page SMC\_ 2016 001606.
[^6]: In [@ErtGur18] and [@GuEkKhKa18] one refers to [@Ber16] when adding “and these estimates are the best possible" , but without any mention to these “best estimates" in the proofs.
[^7]: Note the strange quantity $\big\Vert\frac{\text{JN}_{n+1}-p}{\text{JI}_{n+1}-p}\big\Vert$, the numerator and denominator being in $(X,\left\Vert
\cdot\right\Vert )$ “an arbitrary Banach space”.
|
---
abstract: 'We present the results of a Monte Carlo study of the three-dimensional $XY$ model and the three-dimensional antiferromagnetic three-state Potts model. In both cases we compute the difference in the free energies of a system with periodic and a system with antiperiodic boundary conditions in the neighbourhood of the critical coupling. From the finite-size scaling behaviour of this quantity we extract values for the critical temperature and the critical exponent $\nu$ that are compatible with recent high statistics Monte Carlo studies of the models. The results for the free energy difference at the critical temperature and for the exponent $\nu$ confirm that both models belong to the same universality class.'
---
KL-TH-94/8
CERN-TH.7290/94
The XY Model and the
Three-State Antiferromagnetic Potts Model
in Three Dimensions:
Critical Properties from
Fluctuating Boundary Conditions
June 1994
Introduction
============
Ueno et al. [@ueno] pointed out that the differences in the free energy $\Delta F$ of systems with different boundary conditions, such as periodic and antiperiodic boundary conditions, might be a powerful alternative to the fourth-order cumulant [@Binder] in the study of critical phenomena. For the Ising model, antiperiodic boundary conditions force an interface into the system, and $\Delta F$ can be interpreted as an interface free energy. In the case of $O(N)$-invariant vector models with $N \ge 2$, such as the $XY$ model ($N=2$) and the classical Heisenberg model ($N=3$), however, the continuous symmetry of the model prevents the creation of a sharp interface and $\Delta F$ becomes instead a measure of the helicity modulus.
Ueno et al. [@ueno] give, based on previous results [@reviews], the scaling relation $$\Delta F = f(t L^{1/\nu}) ,$$ where $t=(T-T_c)/T_c$ is the reduced temperature, $L$ the linear extension of the lattice, and the reduced free energy $F$ is given by $F=-\ln Z$, where $Z$ is the partition function of the system. It is important to note that the above relation requires that all directions of the lattice scale with $L$. It follows that the crossings of $\Delta F$, plotted as a function of the temperature for different $L$, provide estimates for the critical temperature. Furthermore the energy difference $\Delta E$, which is the derivative of $\Delta F$ with respect to the inverse temperature, scales as $$\Delta E \propto L^{1/\nu} , \label{enerskal}$$ where $\nu$ is the critical exponent of the correlation length $\xi$.
The drawback of the method outlined above is that, in general, it is hard to obtain free energies from Monte Carlo simulations. The standard approach is to measure $\Delta E$ at a large number of temperatures and perform a numerical integration starting from $T=0$ or $ T=\infty $, where the free energy is known, up to the temperature in question. In [@Habu1; @Habu2] however, one of the authors presented a version of the cluster algorithm [@Wang1; @Wolff] that gives direct access to the interface free energy of Ising systems ($N=1$). It was demonstrated that the crossings of $\Delta F$ converge even faster than the crossings of the fourth-order cumulant in the case of the 3D Ising model on a simple cubic lattice.
In the present paper we show how the algorithm of refs. [@Habu1; @Habu2] can be generalized to $O(N)$-invariant vector models with $N > 1$ and apply it to the 3D $XY$ model on a simple cubic lattice.
The $\lambda$-transition of helium from the fluid He-I phase to the superfluid He-II phase at low temperature is believed to belong to the 3D $XY$ universality class. It is the experimentally best-studied second-order phase transition. The superfluid density corresponds to the helicity modulus of the $XY$ model [@fisher]. The quoted error bars of the measured value $\nu=0.6705(6)$ [@ahlers] are smaller than those of the theoretical predictions for the 3D $XY$ universality class.
Banavar et al. [@banavar] conjectured that the 3D antiferromagnetic (AF) three-state Potts model belongs to the same universality class as the 3D $XY$ model. Ueno et al. [@ueno] implemented “favourable" and “unfavourable" boundary conditions for the 3D AF 3-state Potts model. They found that the corresponding $\Delta F$ is incompatible with that found for the 3D $XY$ model. They also obtained an estimate for the critical exponent of the correlation length $\nu =0.58(1)$ [@ueno], which is not consistent with the exponent $\nu=0.669(2)$ [@guillou] of the 3D 2-component $(\phi^2)^2$-theory. This result has to be compared with recent high-precision studies of the 3D AF 3-state Potts model [@Wang2; @WePott], where the $XY$ exponents and critical amplitudes were recovered to high accuracy. To clarify this point we discuss how antiperiodic boundary conditions can be implemented for the 3D AF 3-state Potts model. Our numerical findings are then compared with the 3D $XY$ results.
$O(N)$ models with fluctuating boundary conditions
==================================================
We consider a simple cubic lattice with extension $L$ in all directions. The uppermost layer of the lattice is regarded as the lower neighbour plane of the lower-most plane. An analogous identification is done for the other two lattice directions. The $O(N)$ model is defined by the classical Hamiltonian $$H(\vec{s},bc) = - \sum_{<ij>} J_{<ij>} \vec{s}_i\cdot\vec{s}_j\;,$$ where $\vec{s}_i$ are unit-vectors with $N$ components. When periodic $(p)$ boundary conditions $(bc)$ are employed, then $J_{<ij>}=1$ for all nearest-neighbour pairs. When antiperiodic $(ap)$ boundary conditions are employed, then $J_{<ij>}=-1$ for bonds $<ij>$ connecting the lower-most and uppermost plane of the lattice, while all other nearest-neighbour pairs keep $J_{<ij>} = 1$. The free energy difference is now given by $$\Delta F = F_{ap}-F_{p} = - \ln \frac{Z_{ap}}{Z_p},$$ where $Z_{ap}$ and $Z_p$ are the partition functions with antiperiodic and periodic boundary conditions respectively.
In order to obtain the ratio of partition functions $ Z_{ap} / Z_{p} $ we consider a system that allows both periodic and antiperiodic boundary conditions. The partition function of this system is given by $$Z = \sum_{bc} \prod_{i\in\Lambda}\int_{S_{N-1}}\!\!ds_i
\exp(-K H(\vec{s} , bc)) \, ,$$ where $K$ is the inverse temperature. The fraction of configurations with antiperiodic boundary conditions is given by the ratio $Z_{ap} / Z$ , $$\begin{aligned}
\frac{Z_{ap}}{Z} &=& \frac{\prod_{i\in\Lambda}\int_{S_{N-1}}\!\!ds_i
\exp(-K H(\vec{s},ap))} {Z}\, ,
\nonumber \\
&=& \frac{\sum_{bc} \prod_{i\in\Lambda}\int_{S_{N-1}}\!\!ds_i
\exp(-K H(\vec{s} , bc))
\delta_{bc,ap}} {Z}\, , \nonumber\\ &=& \langle\delta_{bc,ap}\rangle \, ,\end{aligned}$$ where $\delta_{bc,ap}=1$ for antiperiodic boundary conditions and $\delta_{bc,ap}=0$ for periodic boundary conditions. An analogous result can be found for periodic boundary conditions. Now we can express the ratio $ Z_{ap} / Z_{p} $ as a ratio of observables in this system, $$\frac{Z_{ap}}{Z_{p}} = \frac{ Z_{ap}/Z}
{ Z_{p}/Z}
=\frac{\langle\delta_{bc,ap}\rangle}
{\langle\delta_{bc,p}\rangle} \,$$ which is hence accessible in a single Monte Carlo simulation.
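In a simulation that records, for each measurement, which boundary condition the system currently has, this ratio is simply a ratio of counts. A minimal sketch (the label array below is synthetic, not simulation output):

```python
import numpy as np

def free_energy_difference(bc_labels):
    """bc_labels: one entry per measurement, 1 = antiperiodic, 0 = periodic.
    Returns (Z_ap/Z_p, Delta F) estimated from the indicator averages."""
    bc = np.asarray(bc_labels)
    ratio = np.mean(bc == 1) / np.mean(bc == 0)
    return ratio, -np.log(ratio)

# Toy example with labels drawn such that Z_ap/Z ~ 0.244, i.e. Z_ap/Z_p ~ 0.32:
rng = np.random.default_rng(1)
labels = (rng.random(100_000) < 0.244).astype(int)
print(free_energy_difference(labels))
```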
Boundary flip algorithm for $O(N)$ models
=========================================
We shall now describe an efficient algorithm to update the system explained above, where the type of boundary condition is a random variable [@Habu1; @Habu2].
The algorithm is based on a standard cluster algorithm [@Wang1; @Wolff]. For the Ising model it can be explained as follows. First the bonds are deleted with the standard probability $$p_d = \exp(- K (|s_i s_j| + J_{<ij>} s_i s_j))$$ or else frozen. After deleting or freezing the bonds of the system, one searches for an interface of deleted bonds that completely cuts the lattice in the $z$-direction. If there is such an interface, the spins between the bottom of the system and this interface, together with the sign of the coupling $J_{<ij>}$ connecting top and bottom, are flipped simultaneously. This is a valid update, since the bonds in the interface are deleted and the value of $J_{<ij>} s_i s_j$, for $i$ in the lowermost and $j$ in the uppermost plane, is not changed when we alter the sign of $J_{<ij>}$ and $s_i$.
In order to apply this algorithm to $O(N)$ models, each component of the spin must be considered as an embedded Ising variable. In the delete probability, we just have to replace the Ising spins by a given component of the $O(N)$ spin.
Note that these embedded Ising models do not couple with each other. The above boundary flips can be done independently for any component.
The simplest approach would be to simulate an ensemble that also contains configurations with different boundary conditions for the different components. However, we avoided these configurations with mixed boundary conditions. We only allowed a flip of the boundary condition when it could be done for all components simultaneously.
In our simulations we alternate this boundary flip update with standard single-cluster updates [@Wolff].
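For concreteness, the freeze/delete decision for a single bond and a single embedded component can be sketched as follows (an illustration only, assuming unit-vector spins stored as arrays; the search for a cutting interface and the simultaneous flip of spins and boundary coupling are not shown):

```python
import numpy as np

def delete_bond(s_i, s_j, J, K, component, rng):
    """Embedded-Ising freeze/delete step for the bond <ij>:
    returns True if the bond is deleted, False if it is frozen."""
    a = s_i[component]
    b = s_j[component]
    p_delete = np.exp(-K * (abs(a * b) + J * a * b))  # equals 1 for an unsatisfied bond
    return rng.random() < p_delete

# Tiny usage example with two random O(2) unit spins:
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, size=2)
s1 = np.array([np.cos(phi[0]), np.sin(phi[0])])
s2 = np.array([np.cos(phi[1]), np.sin(phi[1])])
print(delete_bond(s1, s2, J=1, K=0.4542, component=0, rng=rng))
```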
The antiferromagnetic three-state Potts model and antiperiodic boundary conditions
==================================================================================
The three-state AF Potts model in three dimensions is defined by the partition function $$Z = \prod_{l\in\Lambda}\sum_{\sigma_l=1}^{3}
\exp\left(-\coup \sum_{\langle i,j\rangle}
\delta_{\sigma_i, \sigma_j}\right)\; ,
\label{a}$$ where the summation is taken over all nearest-neighbour pairs of sites $i$ and $j$ on a simple cubic lattice $\Lambda$, and $\coup=|J|/k_BT$ is the reduced inverse temperature.
One has to note that a change of the boundary interaction to a negative sign is incompatible with the symmetries of the classical Hamiltonian. The change of the sign of $J$ from minus to plus would mean that there is only one favourable value of the neighbouring spin instead of two. Hence changes in the free energy would also arise from a local distortion of the system. However, when one adds or removes one layer from the lattice, so that the extension in one direction, measured in units of lattice spacings, becomes an odd number, one obtains the global frustration we are aiming at. Hence we define $\Delta E$ of an $L^3$ system by $$\Delta E(L,L,L) = \frac{1}{2}\left[E(L,L,L+1)+E(L,L,L-1)\right] - E(L,L,L),$$ where the energy $E$ of the model is given by $$E = \sum_{\langle i,j \rangle} \delta_{\sigma_i, \sigma_j} \; .$$
We were not able to find an efficient algorithm that adds or removes a layer of sites from the lattice. Hence we had to rely on the standard integration method to obtain the corresponding $\Delta F$ for the Potts model, as opposed to the $XY$ model.
Numerical results
=================
The 3D $XY$ model
-----------------
On lattices of size $L = 4,8,16,32 $ and 64, we performed simulations at $K_0=0.45420$, which is the estimate for the critical coupling obtained in ref. [@WeXY]. As explained above, we performed single cluster updates [@Wolff] in addition to the boundary updates. We have chosen the number $N_0$ of the single cluster updates per boundary update such that $N_0$ times the average cluster volume is approximately equal to the lattice volume. We performed a measurement after each boundary update. The number of measurements was $100\;000$ for all lattice sizes.
First we determined the critical coupling $K_c$ using the crossings of $Z_{ap}/Z_p$. For the extrapolation of $\langle\delta_{bc,ap}\rangle$ and $\langle\delta_{bc,p}\rangle$ to couplings $K$ other than the simulation coupling $K_0$, we used the reweighting formula [@swendferr] $$\langle \delta_{bc,x} \rangle (K) =
\frac{\sum_i \delta_{bc(i),x} \exp((-K+K_0) H_i)}
{\sum_i \exp((-K+K_0) H_i)} ,
\label{reweight}$$ where $i$ labels the configurations generated according to the Boltzmann weight at $K_0$, $bc(i)$ denotes the boundary condition of the $i^{th}$ configuration, and $x$ must be replaced by either $p$ or $ap$. We computed the statistical errors from Jackknife binning [@siam] applied to the ratio $\langle\delta_{bc,ap}\rangle/\langle\delta_{bc,p}\rangle$. The extrapolation gives good results only within a small neighbourhood of the simulation coupling $K_0$. This range shrinks with increasing volume of the lattice. However, fig. 1 shows that in a sufficiently large neighbourhood of the crossings of $Z_{ap}/Z_p$ the extrapolation performs well. The results for the crossings are $K =$ 0.45439(22), 0.45412(10), 0.454138(31), and 0.454147(14), for $L =$ 4 and 8, 8 and 16, 16 and 32, and 32 and 64, respectively.
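The reweighting of eq. (\[reweight\]) is a weighted average over the stored time series, and the ratio $Z_{ap}/Z_p$ follows directly because the normalisation cancels. A minimal sketch (the array names are placeholders for the recorded energies $H_i$ and boundary-condition labels):

```python
import numpy as np

def reweighted_ratio(H, bc_labels, K0, K):
    """Single-histogram reweighting of <delta_bc,ap>/<delta_bc,p> from K0 to K,
    following eq. (reweight); bc_labels are 1 (antiperiodic) or 0 (periodic)."""
    H = np.asarray(H, dtype=float)
    bc = np.asarray(bc_labels)
    logw = (K0 - K) * H
    logw -= logw.max()               # overall factor cancels in the ratio
    w = np.exp(logw)
    return np.sum(w * (bc == 1)) / np.sum(w * (bc == 0))
```

The crossing of $Z_{ap}/Z_p$ for two lattice sizes can then be located by a simple root search in $K$, and a jackknife over bins of the time series gives the statistical error.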
The convergence of the crossings of $Z_{ap}/Z_p$ towards $K_c$ is excellent. Even with the high statistical accuracy that we reached, all crossings starting from $L=4$ and $L=8$ are compatible within error bars. The convergence of the crossings is governed by $$K_{\rm cross}(L) = K_c \left( 1 + {\rm const.}\; L^{-(\omega+1/\nu)}+\dots\right), \label{kcross}$$ where $\omega$ is the correction to scaling exponent [@Binder; @wegner]. We performed a two-parameter fit with fixed $\nu=0.669$ and $\omega=0.780$ [@guillou]. Taking all crossings we obtain $K_c = 0.454142(13) $ and when discarding the $L=4$ and 8 crossing, we get $K_c = 0.454148(15) $, where both times the correction term is compatible with zero. Note that we obtained $K_c =0.45420(2)$ [@WeXY] (or reanalysed $K_c = 0.45419(2)$ [@WePott]) from the crossing of the fourth-order cumulant. From the scaling behaviour of the magnetic susceptibility in the high-temperature phase we obtained $K_c = 0.45417(1)$ [@WeXY]. All these estimates are consistent within two standard deviations.
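The two-parameter fit of eq. (\[kcross\]) with $\nu$ and $\omega$ held fixed can be reproduced with a standard weighted least-squares routine. In the sketch below (an illustration only), each crossing is labelled by the smaller lattice size of the pair, which is a choice made here and not necessarily that of the original analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Crossings of Z_ap/Z_p for the pairs (L, 2L), labelled by the smaller L.
L = np.array([4.0, 8.0, 16.0, 32.0])
K_cross = np.array([0.45439, 0.45412, 0.454138, 0.454147])
K_err = np.array([0.00022, 0.00010, 0.000031, 0.000014])

nu, omega_cor = 0.669, 0.780        # fixed input values from ref. [guillou]

def model(L, Kc, c):
    return Kc * (1.0 + c * L ** (-(omega_cor + 1.0 / nu)))

popt, pcov = curve_fit(model, L, K_cross, p0=[0.4541, 0.0],
                       sigma=K_err, absolute_sigma=True)
print(popt, np.sqrt(np.diag(pcov)))
```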
At the critical coupling, $Z_{ap}/Z_p$ converges with increasing $L$ like $$\frac{Z_{ap}}{Z_p}(L) = \left(\frac{Z_{ap}}{Z_p}\right)_{\infty} \left( 1 + {\rm const.}\; L^{-\omega} + \dots\right) . \label{correction}$$ In table \[tab1\] we give the value of $Z_{ap}/Z_p$ at our estimate of the critical coupling. The result is stable with increasing $L$. Hence we take the result for $L=64$, $Z_{ap}/Z_p=0.322(8)$, as our final estimate for the infinite volume limit. Taking the logarithm we obtain $\Delta F = 1.13(2)$.
We extracted the critical exponent $\nu$ of the correlation length from the $L$ dependence of the energy difference $\Delta E$. The values for $\Delta E$ at the critical coupling are given in table \[tab1\]. We performed fits according to eq. (\[enerskal\]) for the $\Delta E$ at the new estimate of the critical coupling and at the edges of the error bars. The results, which are summarized in table \[tab2\], are stable within the error bars, when we discard data with small $L$ from the fit. We take as our final result the fit including the lattice sizes $L=16,32$, and $64$, i.e. $\nu = 0.679(7)$, where the error due to the uncertainty in the critical coupling is taken into account. Performing a similar analysis at our old estimate for the critical coupling $K_c = 0.45419(2)$ leads to $\nu = 0.670(7)$, which is more consistent with the accurate value $\nu = 0.669(2)$ obtained [@guillou] from resummed perturbation theory.
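The exponent fit of eq. (\[enerskal\]) reduces to a straight line in a log–log plot. The sketch below (an illustration only) uses the central $\Delta E$ values of table \[tab1\] and ignores both the error weighting and the variation of $K_c$ within its error bar, so it will not reproduce the quoted uncertainties.

```python
import numpy as np

# Delta E at the central value of the critical coupling (table tab1).
L = np.array([4.0, 8.0, 16.0, 32.0, 64.0])
dE = np.array([17.64, 52.18, 144.7, 407.0, 1113.0])

# ln(Delta E) = const + (1/nu) ln(L): unweighted straight-line fit.
slope, intercept = np.polyfit(np.log(L), np.log(dE), 1)
print("1/nu =", slope, " nu =", 1.0 / slope)
```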
The 3D AF three-state Potts model
---------------------------------
For the 3D AF three-state Potts model we computed $\Delta F$ by the integration method. At $K=0$ the free energy is given by $$F = V \ln 3\, ,$$ where $V$ is the number of lattice sites. Hence $$\Delta F = \frac{1}{2} [F(L,L,L-1) + F(L,L,L+1)]- F(L,L,L) = 0$$ at $K=0$. For $L=4$ we measured $\Delta E$ at 83 different values of $K$, starting at $K=0.01$ and going up in steps of $\Delta K=0.01$ until we reached $K=0.83$. In the large-$L$ limit, $\Delta F$ stays $0$ up to the critical point. Therefore we started the integration at a $K$ such that we observed $\Delta E > 0$ within our statistical accuracy for the larger lattices. For $L=8$ we measured $\Delta E$ at 67 different values of $K$, starting at $K=0.50$ and going up in steps of $\Delta K=0.005$ until we reached $K=0.83$. For $L=16$ we measured $\Delta E$ at 52 different values of $K$, starting at $K=0.70$ and going up in steps of $\Delta K=0.0025$ until we reached $K=0.83$.
All runs consisted of $10\;000$ measurements. Per measurement we performed such a number of single cluster updates that the lattice volume was approximately covered by the average cluster volume. Then we performed the integration using the trapeze rule. The result is given in fig. 2. The curves for $L=4$ and $L=8$ cross at $K=0.8155(18)$ and the curves for $L=8$ and $L=16$ at $K=0.8166(8)$, which is in good agreement with $K_c = 0.81563(3)$ [@WePott]. The values of $ \Delta F$ at $K_c = 0.81563$ are summarized in table 3. Our statistical accuracy degrades with increasing lattice size. Hence we skipped the simulations of larger lattice sizes. However, already for the small lattices the results at $K_c = 0.81563$ are rather stable; systematic errors due to corrections to scaling should be small. The result $\Delta F = 1.13(2)$ for $L=16$ at the critical point nicely agrees with our final result $\Delta F = 1.13(2)$ for the 3D $XY$ model. One should note that for the 3D Ising model one obtains $\Delta F = 0.605(6)$ [@Habu2], which is only a little more than half the $XY$ value.
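Since $\Delta E$ is the derivative of $\Delta F$ with respect to the inverse temperature, the integration amounts to a cumulative trapezoidal rule over the measured $\Delta E(K)$, starting from a coupling where $\Delta F$ is known. A minimal sketch (the grid and values below are placeholders, not the measured data):

```python
import numpy as np

def delta_F_curve(K, dE, dF_start=0.0):
    """Cumulative trapezoidal integration of Delta E(K), using
    d(Delta F)/dK = Delta E and a known starting value dF_start."""
    K = np.asarray(K, dtype=float)
    dE = np.asarray(dE, dtype=float)
    increments = 0.5 * (dE[1:] + dE[:-1]) * np.diff(K)
    return dF_start + np.concatenate(([0.0], np.cumsum(increments)))

# Placeholder usage: replace dE_measured by the measured Delta E values.
K_grid = np.linspace(0.70, 0.83, 53)
dE_measured = np.zeros_like(K_grid)
print(delta_F_curve(K_grid, dE_measured)[-1])
```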
At $K_c = 0.81563$ we simulated the $L \times L \times L-1$ and $L \times L \times L+1$ lattices for sizes up to $L=64$ with a statistics of $100\;000$ measurements. For the cubic lattices we used the results of our previous study [@WePott], where $200\;000$ measurements had been performed. The resulting $\Delta E$ are summarized in table \[tab3\]. We performed fits according to eq. (\[enerskal\]) for the $\Delta E$ at the critical coupling and at the edges of the error bars. The results are summarized in table \[tab4\]. The fit including all lattice sizes gives an unacceptably large $\chi^2/degrees\; of \; freedom$, which in the following will be denoted as $\G$. Discarding the $L=4$ data the value of $\G$ becomes acceptable, and when discarding also the $L=8$ data the result for $\nu $ remains stable within the error bars. Hence we conclude that systematic errors due to corrections to scaling are smaller than our statistical errors. We take $\nu=0.663(4)$, from the fit including the lattice sizes $L=16,32$ and $64$, as our final result, where the error due to the uncertainty in the critical coupling is taken into account.
Conclusions
===========
In the present work we have shown how the boundary algorithm of refs. [@Habu1; @Habu2] can be applied to $O(N)$ models with $N > 1$. We demonstrated, in the case of the 3D $XY$ model, that its critical properties can be nicely extracted from the ratio of the partition functions $Z_{ap}/Z_p$. The accuracy of the results for the critical coupling and the critical exponent of the correlation length $\nu$ are compatible with that obtained from the fourth-order cumulant.
We showed how antiperiodic boundary conditions can be implemented for the 3D AF Potts model. The value of the free energy difference $\Delta F = F_{ap}-F_{p}$ at the critical coupling is in good agreement with that found for the 3D $XY$ model. The value $\nu = 0.663(4)$ obtained from the scaling behaviour of the energy difference $\Delta E$ at the critical coupling is as accurate as our previous estimate, which we obtained from the slope of the fourth-order cumulant. We conclude that this strongly supports the fact that the 3D AF 3-state Potts model and the 3D $XY$ model belong to the same universality class. The confirmation of the conjecture by Banavar et al. also has practical implications. The 3D AF 3-state Potts model is simpler to simulate than the 3D $XY$ model. The application of multispin-coding techniques, which have been used to speed up simulations of the Ising model [@Ito], might also allow further improvements of the 3D AF 3-state Potts results. A first attempt in this direction can be found in ref. [@okabe].
For a detailed comparison with previous results, see [@WePott].
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank D. Stauffer for many helpful suggestions.
The major part of the numerical simulations was performed on an IBM RISC 6000 cluster of the Regionales Hochschulrechenzentrum Kaiserslautern (RHRK). The simulations took about one CPU-month on an IBM RISC 6000-590 workstation.
[99]{}
Y. Ueno, G. Sun and I. Ono, J. Phys. Soc. Japn. [**58**]{}, 1162 (1989).
K. Binder, Phys. Rev. Lett. [**47**]{}, 693 (1981) ; K. Binder, Z. Phys. B [**43**]{}, 119 (1981).
M.E. Fisher, Critical Phenomena, Proc. 51st Enrico Fermi Summer School, ed.\
M.S. Green (Academic, New York, 1971), p. 1;\
M.N. Barber, Phase Transitions and Critical Phenomena, eds. C. Domb and\
J.L. Lebowitz (Academic, New York, 1984), Vol. 9, p. 145;\
D. Jasnow, Rep. Prog. Phys. [**47**]{} 1059 (1984).
M. Hasenbusch, J. Phys. (Paris) I [**3**]{}, 753 (1993).
M. Hasenbusch, Physica A [**197**]{}, 423 (1993).
R.H. Swendsen and J.-Sh. Wang, Phys. Rev. Lett. [**58**]{}, 86 (1987).
U. Wolff, Phys. Rev. Lett. [**62**]{}, 361 (1989) and U. Wolff, Nucl. Phys. B[**322**]{}, 759 (1989).
M.E. Fisher, M.N. Barber and D. Jasnow, Phys. Rev. A [**8**]{}, 1111 (1973).
L.S. Goldner and G. Ahlers, Phys. Rev. B [**45**]{}, 13129 (1992)
J.R. Banavar, G.S. Grest, and D. Jasnow, Phys. Rev. Lett. [**45**]{} 1424 (1980); Phys. Rev. B [**25**]{}, 4639 (1982).
J.C. Le Guillou and J. Zinn-Justin, Phys. Rev. B [**21**]{}, 3976 (1980) and J. Phys. Lett. (Paris) [**46**]{}, L137 (1985).
J.-S. Wang , R.H. Swendsen and R. Kotecký, Phys. Rev. Lett. [**63**]{}, 109 (1989) and Phys.Rev.B [**42**]{} , 2465 (1990).
A.P. Gottlob and M. Hasenbusch, preprint KL-Th-94/5, CERN-TH.7183/94 Kaiserslautern/Genève. To be published in Physica A.
A.P. Gottlob and M. Hasenbusch, Physica A [**201**]{}, 593 (1993).
A.M. Ferrenberg and R.H. Swendsen, Phys. Rev. Lett. [**61**]{}, 2635 (1988).
R.G. Miller, Biometrica [**61**]{}, 1 (1974); B. Efron, [*The Jackknife, the Bootstrap and Other Resampling Plans*]{} (SIAM, Philadelphia, PA, 1982).
F.J. Wegner, Phys. Rev. B [**5**]{}, 4529 (1972).
N. Ito and G.A. Kohring, Int. J. Mod. Phys. C [**5**]{}, 1 (1994), and references therein.
Y. Okabe and M. Kikuchi, in [*Computational Approaches in Condensed-Matter Physics*]{}, S. Miyashita, M. Imada and H. Takayama eds. (Springer, Heidelberg, 1992); Y. Okabe, M. Kikuchi and K.Niizeki, in [*Computer Simulation Studies in Condensed-Matter Physics V*]{}, D.P. Landau, K.K. Mon and H.-B. Schuettler eds. (Springer, Heidelberg, 1993)
$L$ $ Z_{ap}/Z_p$ $ \Delta E $
----- ---------------- -----------------
4 0.3245(19)(1) 17.64(13)(1)
8 0.3242(19)(3) 52.18(41)(4)
16 0.3234(21)(9) 144.7(1.3)(3)
32 0.3224(20)(27) 407.0(4.6)(2.2)
64 0.3216(25)(72) 1113.(13.)(18.)
: Results of the ratio $Z_{ap}/Z_p$ and $\Delta E$ at the critical coupling $K_c=0.45415(2)$. The number in the second bracket gives the uncertainty due to the error bar of the critical coupling.[]{data-label="tab1"}
---- ------------ ------ ------------ ------ ------------ ------
\# $\nu$ $\G$ $\nu$ $\G$ $\nu$ $\G$
0 0.6771(45) 1.80 0.6756(44) 0.78 0.6741(44) 0.77
1 0.6813(30) 0.96 0.6783(29) 0.53 0.6753(29) 0.32
2 0.6833(50) 1.67 0.6787(49) 1.05 0.6743(48) 0.57
---- ------------ ------ ------------ ------ ------------ ------
: Estimates of the critical exponent $\nu$ obtained from the fit of the surface energy density following eq. (\[enerskal\]) at $K_c=0.45415(2)$. \# gives the number of discarded data points with small $L$ and $\G$ denotes $\chi^2 / degrees\; of \; freedom$. []{data-label="tab2"}
$L$ $\Delta F$ $\Delta E$
----- ------------ -----------------
4 1.165(8) 6.68(2)
8 1.157(13) 19.10(9)(1)
16 1.130(20) 53.36(30)(6)
32 152.92(92)(48)
64 431.8(2.6)(3.7)
: $\Delta F$ and $\Delta E$ for the 3D AF 3-state Potts model at $K=0.81563(3)$. The number in the second bracket of $\Delta E$ gives the uncertainty due to the error bar of the critical coupling. []{data-label="tab3"}
------ ------------ ------ ------------ ------ ------------ ------
$\#$ $\nu$ $\G$ $\nu$ $\G$ $\nu$ $\G$
0 0.8988(49) 2528 0.9130(43) 3392 0.8598(42) 1903
1 0.6681(19) 1.21 0.6664(16) 1.92 0.6650(18) 1.73
2 0.6651(32) 1.10 0.6629(26) 1.04 0.6605(31) 0.45
------ ------------ ------ ------------ ------ ------------ ------
: Results for the critical exponent $\nu$ obtained from the fit following eq. (\[enerskal\]). \# denotes the number of discarded data points with small $L$ and $\G$ denotes $\chi^2 / degrees$ $of \; freedom$. []{data-label="tab4"}
|
---
author:
- 'Victor E. Ambruş and Elizabeth Winstanley'
title: Fermions on adS
---
Introduction {#sec:intro}
============
Quantum field theory (QFT) on curved spaces (CS) is a semi-classical theory for the investigation of quantum effects in gravity. Due to its simplicity, the scalar field has been the main focus of QFT on CS. However, due to the fundamental difference between the quantum behaviour of fermions and bosons, it is important to also study fermionic fields. In this paper, we consider the propagation of Dirac fermions on the anti-de Sitter (adS) background space-time, where the maximal symmetry can be used to obtain analytic results.
We start this paper by presenting in Sec. \[sec:ads\] an expression for the spinor parallel propagator [@art:muck]. Using results from geodesic theory [@art:allen_jacobson; @art:muck], an exact expression for the Feynman propagator is obtained in Sec. \[sec:sf\]. Section \[sec:had\] is devoted to Hadamard’s regularisation method [@art:najmi_ottewill], while, in Sec. \[sec:tvac\], the result for the renormalised vacuum expectation value (v.e.v.) of the stress-energy tensor (SET) is presented using two methods: the Schwinger-de Witt method [@art:christensen] and the Hadamard method [@art:hack]. The exact form of the bi-spinor of parallel transport is then used in Sec. \[sec:tbeta\] to calculate the thermal expectation value (t.e.v.) of the SET for massless spinors. More details on the current work, as well as an extension to massive spinors, can be found in [@art:ambrus_winstanley_ads].
Geometric structure of adS {#sec:ads}
==========================
Anti-de Sitter space-time (adS) is a vacuum solution of the Einstein equation with a negative cosmological constant, having the following line element: $$\label{eq:ds2}
ds^2 = \frac{1}{\cos^2\omega r}
\left[-dt^2 + dr^2 + \frac{\sin^2\omega r}{\omega^2} \left(d\theta^2 + \sin^2\theta d\varphi^2\right)\right].$$ The time coordinate $t$ runs from $-\infty$ to $\infty$, thereby giving the covering space of adS. The radial coordinate $r$ runs from $0$ to the space-like boundary at $\pi/2\omega$, while $\theta$ and $\varphi$ are the usual elevation and azimuthal angular coordinates. In the Cartesian gauge, the line element (\[eq:ds2\]) admits the following natural frame [@art:cota]: $$\label{eq:tetrad}
\omega^{\hat{t}} = \frac{\D t}{\cos\omega r}, \qquad
\omega^{\hat{i}} = \frac{\D x^j}{\cos\omega r}\left[\frac{\sin\omega r}{\omega r}\left(\delta_{ij} - \frac{x^ix^j}{r^2}\right) +
\frac{x^ix^j}{r^2}\right],$$ such that $\eta_{\hat{\alpha}\hat{\beta}} \omega^{\hat{\alpha}}_\mu \omega^{\hat{\beta}}_\nu = g_{\mu\nu}$, where $\eta_{\hat{\alpha}\hat{\beta}} = \rm{diag}(-1,1,1,1)$ is the Minkowski metric.
A key role in the construction of the propagator of the Dirac field is played by the bi-spinor of parallel transport $\Lambda(x,x')$, which satisfies the parallel transport equation $n^\mu D_\mu \Lambda(x,x') = 0$ [@art:muck]. On adS, the explicit form of $\Lambda(x,x')$ is [@art:ambrus_winstanley_ads]: $$\begin{aligned}
\Lambda(x,x') &=& \frac{\cos(\omega\Delta t/2)}{\cos(\omega s/2)\sqrt{\cos\omega r \cos \omega r'}}
\Big\{
\cos\frac{\omega r}{2} \cos\frac{\omega r'}{2} +
\frac{\mathbf{x}\cdot \hat{\mathbf{\gamma}}}{r} \frac{\mathbf{x'}\cdot \hat{\mathbf{\gamma}}}{r'}
\sin\frac{\omega r}{2} \sin\frac{\omega r'}{2}\nonumber\\
&& - \gamma^{\hat{t}} \tan\frac{\omega\Delta t}{2} \left(
\frac{\mathbf{x}\cdot \hat{\mathbf{\gamma}}}{r} \sin\frac{\omega r}{2} \cos\frac{\omega r'}{2} -
\frac{\mathbf{x'}\cdot \hat{\mathbf{\gamma}}}{r'} \cos\frac{\omega r}{2} \sin\frac{\omega r'}{2}\right)\Big\},
\label{eq:lambda}\end{aligned}$$ where $\gamma^{\hat{\alpha}} = (\gamma^{\hat{t}}, \hat{\mathbf{\gamma}})$ are the gamma matrices in the Dirac representation and $s$ is the geodesic distance between $x$ and $x'$.
Feynman propagator on adS {#sec:sf}
=========================
The Feynman propagator $S_F(x,x')$ for a Dirac field of mass $m$ can be defined as the solution of the inhomogeneous Dirac equation, with appropriate boundary conditions: $$\label{eq:sf_def}
(\I\slashed{D} - m) S_F(x,x') = (-g)^{-1/2} \delta^4(x-x'),$$ where $D_\mu$ denotes the spinor covariant derivative and $g$ is the determinant of the background space-time metric. Due to the maximal symmetry of adS, the Feynman propagator can be written in the following form [@art:muck]: $$\label{eq:sf_muck}
S_F(x,x') = \left[\alpha_F(s) + \slashed{n}\, \beta_F(s)\right]\, \Lambda(x,x').$$ The functions $\alpha_F$ and $\beta_F$ can be determined using (\[eq:sf\_def\]): $$\begin{aligned}
\alpha_F &=& \frac{\omega^3 k}{16\pi^2} \cos\frac{\omega s}{2}\left\{
-\frac{1}{\sin^2\frac{\omega s}{2}}
+ 2 (k^2 - 1) \ln \left|\sin\frac{\omega s}{2}\right| {}_2F_1\left(2+k,2-k;2;\sin^2\frac{\omega s}{2}\right)
\right.\nonumber\\
& &\left. + (k^2 - 1) \sum_{n=0}^\infty \frac{(2+k)_n (2-k)_n}{(2)_n n!} \left(\sin^2\frac{\omega s}{2}\right)^n
\Psi_n\right\},\label{eq:alpha}\label{eq:af}\\
\beta_F &=& \frac{\I \omega^3}{16\pi^2} \sin\frac{\omega s}{2}
\Bigg\{\nonumber\\
& & \frac{1 + k^2 \sin^2(\omega s/2)}{[\sin(\omega s/2)]^4}
- k^2 (k^2 - 1) \ln \left|\sin\frac{\omega s}{2}\right|
{}_2F_1\left(2+k,2-k;3;\sin^2\frac{\omega s}{2}\right) \nonumber\\
& &\left. - \frac{k^2 (k^2 - 1)}{2} \sum_{n=0}^\infty \frac{(2+k)_n (2-k)_n}{(3)_n n!} \left(\sin^2\frac{\omega s}{2}\right)^n
\left(\Psi_n - \frac{1}{2+n}\right)\right\}\label{eq:beta},\label{eq:bf}\end{aligned}$$ where $(a)_n = \Gamma(a+n)/\Gamma(a)$ is the Pochhammer symbol, $\Gamma(z) = \int_0^\infty x^{z-1} \E^{-x} dx$ is the gamma function, $k = m/\omega$, $$\Psi_n = \psi(k + n + 2) + \psi(k - n - 1) - \psi(n + 2) - \psi(n + 1)$$ and $\psi(z) = \D \ln \Gamma(z) / \D z$ is the digamma function.
Hadamard renormalisation {#sec:had}
========================
To regularise $S_F$, it is convenient to use the auxiliary propagator $\mathcal{G}_F$, defined by analogy to flat space-time [@art:najmi_ottewill]: $$S_F(x,x') = (\I \slashed{D} + m) \mathcal{G}_F.$$ On adS, $\mathcal{G}_F$ can be written using the bi-spinor of parallel transport: $$\mathcal{G}_F(x,x') = \frac{\alpha_F}{m} \Lambda(x,x'),$$ where $\alpha_F$ is given in (\[eq:af\]).
According to Hadamard’s theorem, the divergent part $\mathcal{G}_H$ of $\mathcal{G}_F$ is state-independent, having the form [@art:najmi_ottewill]: $$\mathcal{G}_H(x,x') = \frac{1}{8\pi^2} \left[\frac{u(x,x')}{\sigma} + v(x,x') \ln \mu^2 \sigma\right],$$ where $u(x,x')$ and $v(x,x')$ are finite when $x'$ approaches $x$, $\sigma = -s^2/2$ is Synge’s world function and $\mu$ is an arbitrary mass scale. The functions $u$ and $v$ can be found by solving the inhomogeneous Dirac equation (\[eq:sf\_def\]), requiring that the regularised auxiliary propagator $\mathcal{G}_F^{\rm{reg}} \equiv \mathcal{G}_F - \mathcal{G}_H$ is finite in the coincidence limit: $$\begin{aligned}
u(x,x') &=& \sqrt{\Delta(x,x')} \Lambda(x,x'),\label{eq:uhad}\\
v(x,x') &=& \frac{\omega^2}{2} (k^2 - 1) \cos\frac{\omega s}{2} {}_2F_1\left(2-k,2+k;2;\sin^2\frac{\omega s}{2}\right) \Lambda(x,x'),
\label{eq:vhad}\end{aligned}$$ where the Van Vleck-Morette determinant $\Delta(x,x') = (\omega s / \sin \omega s)^3$ on adS.
Renormalised vacuum stress-energy tensor {#sec:tvac}
========================================
To remove the traditional divergences of quantum field theory, we employ two regularisation methods: the Schwinger–de Witt method in Sec. \[sec:tvac:sdw\] and the Hadamard method in Sec. \[sec:tvac:had\]. Due to the symmetries of adS, the regularised v.e.v. of the SET takes the form $\braket{\tens{T}_{\mu\nu}}_{\rm{vac}}^{\rm{reg}} = \frac{1}{4} \tens{T} g_{\mu\nu}$, where $\tens{T} = \tens{T}\indices{^\mu_\mu}$ is its trace. The renormalisation process has the profound consequence of shifting $\tens{T}$ for the massless (hence, conformal) Dirac field from its classically vanishing value to a finite, non-zero one, referred to as the conformal anomaly.
Schwinger–de Witt regularisation {#sec:tvac:sdw}
--------------------------------
By using the Schwinger–de Witt approach to investigate the singularity structure of the propagator of the Dirac field in the coincidence limit, Christensen [@art:christensen] calculates a set of subtraction terms which only depend on the geometry of the background space-time, using the following formula: $$\label{eq:tmunu_sf_can}
\braket{\tens{T}_{\mu\nu}} = \lim_{x'\rightarrow x} {\rm{tr}}
\left\{\frac{\I}{2} \left[\gamma_{(\mu} D_{\nu)} - \gamma_{(\mu'}D_{\nu')}\right] S_F(x,x')\right\}.$$ After subtracting Christensen’s terms, we exactly recover the result obtained by Camporesi and Higuchi [@art:camporesi_higuchi] using the Pauli-Villars regularisation method: $$\label{eq:tvac:sdw}
\braket{\tens{T}}_{\rm{vac}}^{\rm{SdW}} = -\frac{\omega^4}{4\pi^2} \left\{
\frac{11}{60} + k - \frac{k^2}{6} - k^3 +
2 k^2(k^2 -1) \left[\ln\frac{\mu}{\omega} - \psi(k)\right]\right\},$$ where $\mu$ is an arbitrary mass scale.
Hadamard regularisation {#sec:tvac:had}
-----------------------
The Hadamard theorem presented in Sec. \[sec:had\] allows the renormalisation to be performed at the level of the propagator. To preserve the conservation of the SET, the following definition for the SET must be used [@art:hack]: $$\label{eq:tmunu_sf_hack}
\braket{\tens{T}_{\mu\nu}} = \lim_{x'\rightarrow x} {\rm{tr}}
\left\{\frac{\I}{2} \left[\gamma_{(\mu} D_{\nu)} - \gamma_{(\mu'}D_{\nu')}\right] + \frac{1}{6} g_{\mu\nu}
\left[\frac{\I}{2} (\slashed{D} - \slashed{D}') - m\right] \right\} S^{\rm{reg}}_F(x,x'),$$ where $S_F^{\rm{reg}}(x,x') = (\I \slashed{D} + m) (\mathcal{G}_F - \mathcal{G}_H)$ is the regularised propagator. The coefficient of $g_{\mu\nu}$ is proportional to the Lagrangian of the Dirac field and evaluates to zero when applied to a solution of (\[eq:sf\_def\]). However, $S_F^{\rm{reg}}(x,x')$ is not a solution of (\[eq:sf\_def\]). The v.e.v. obtained from (\[eq:tmunu\_sf\_hack\]) matches perfectly the result obtained by Camporesi and Higuchi [@art:camporesi_higuchi] using the zeta-function regularisation method ($\gamma$ is Euler’s constant): $$\label{eq:tvac:had}
\braket{\tens{T}}_{\rm{vac}}^{\rm{Had}} = -\frac{\omega^4}{4\pi^2}\left\{
\frac{11}{60} + k - \frac{7k^2}{6} - k^3 + \frac{3k^4}{2} +
2k^2(k^2 - 1)\left[\ln \frac{\mu \E^{-\gamma} \sqrt{2}}{\omega} - \psi(k)\right]\right\}.$$ Even though the results (\[eq:tvac:sdw\]) and (\[eq:tvac:had\]) are different for general values of the mass parameter $k$, they yield the same conformal anomaly. We would like to stress that the omission of the term proportional to $g_{\mu\nu}$ in (\[eq:tmunu\_sf\_hack\]) would increase the value of the conformal anomaly by a factor of $3$.
Thermal stress-energy tensor {#sec:tbeta}
============================
The renormalised thermal expectation value (t.e.v.) of the SET can be written as: $$\braket{\tens{T}_{\mu\nu}}_\beta^{\rm{reg}} = \braket{:\tens{T}_{\mu\nu}:}_\beta + \braket{\tens{T}_{\mu\nu}}_{\rm{vac}}^{\rm{ren}},$$ where $\beta = T^{-1}$ is the inverse temperature and the colons $::$ indicate that the operator enclosed is in normal order, i.e. with its v.e.v. subtracted. The bi-spinor of parallel transport can be used to show that $$\braket{:\tens{T}^\mu_{\phantom{\mu}\nu}:}_\beta = \rm{diag}(-\varrho, p, p, p),$$ where $\varrho$ is the energy density and $p$ is the pressure. If $m = 0$, we have $p = \varrho / 3$ and: $$\varrho\rfloor_{m = 0} = -\frac{3 \omega^4}{4\pi^2} (\cos\omega r)^4 \sum_{j = 1}^\infty (-1)^j
\frac{\cosh(j \omega \beta/2)}{[\sinh (j \omega \beta / 2)]^4},$$ with the coordinate dependence fully contained in the $(\cos \omega r)^4$ prefactor. The first term in the sum over $j$ is within $6\%$ of the sum, while the first two terms together are less than $1\%$ away, for all values of $\omega \beta$. The small and large $\omega \beta$ limits can be extracted: $$\begin{aligned}
\varrho\rfloor_{m = 0} &=& (\cos \omega r)^4 \left[\frac{7\pi^2}{60\beta^4} - \frac{\omega^2}{24\beta^2} + O(\omega^4)\right],\label{eq:rho-small}\\
\varrho\rfloor_{m = 0} &=& \frac{6\omega^4}{\pi^2} \frac{(\cos\omega r)^4}{1 + \E^{3 \beta \omega / 2}}
\left[1 + 5 \E^{-\omega \beta} \frac{1 + \E^{-3 \omega \beta / 2}}{1 + \E^{-5 \omega \beta / 2}} +
O(\E^{-2\omega \beta})\right].\label{eq:rho-large}\end{aligned}$$ Figure \[fig\] shows a graphical representation of the above results.
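Since the sum over $j$ converges exponentially fast, it is easy to evaluate numerically; the short Python sketch below (the value of $\beta\omega$ is an arbitrary choice for illustration) computes the normal-ordered energy density at the origin and the relative accuracy of its one- and two-term truncations.

```python
import math

def rho_massless(beta_omega, r_omega=0.0, n_terms=200):
    """Normal-ordered massless energy density (in units of omega^4) from the
    series above, truncated after n_terms; the series converges very quickly."""
    pref = -3.0 / (4.0 * math.pi**2) * math.cos(r_omega)**4
    s = sum((-1)**j * math.cosh(j * beta_omega / 2.0)
            / math.sinh(j * beta_omega / 2.0)**4 for j in range(1, n_terms + 1))
    return pref * s

beta_omega = 1.0                        # arbitrary inverse temperature (times omega)
full = rho_massless(beta_omega)
for n in (1, 2):
    trunc = rho_massless(beta_omega, n_terms=n)
    print(n, abs(trunc - full) / abs(full))   # relative truncation error
```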
![[**a**]{} $\varrho$ between the origin ($r=0$) and the boundary ($r\omega = \pi/2$) for $\beta \omega = 0.8$, $1.0$, $1.2$ and $1.4$; [**b**]{} Log-log plot of $\varrho$ in terms of $\beta \omega$; comparison with the asymptotic results in (\[eq:rho-small\]) and (\[eq:rho-large\])[]{data-label="fig"}](rho-betas-k0.ps "fig:"){width="0.45\linewidth"} ![](rho-b-k0.ps "fig:"){width="0.45\linewidth"}
This work is supported by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC grant ST/J000418/1, the School of Mathematics and Statistics at the University of Sheffield and European Cooperation in Science and Technology (COST) action MP0905 “Black Holes in a Violent Universe”.
|
---
abstract: 'Certain facial parts are salient (unique) in appearance, which substantially contribute to the holistic recognition of a subject. Occlusion of these salient parts deteriorates the performance of face recognition algorithms. In this paper, we propose a generative model to reconstruct the missing parts of the face which are under occlusion. The proposed generative model (SD-GAN) reconstructs a face preserving the illumination variation and identity of the face. A novel adversarial training algorithm has been designed for a bimodal mutually exclusive Generative Adversarial Network (GAN) model, for faster convergence. A novel adversarial “structural” loss function is also proposed, comprising two components: a holistic and a local loss, characterized by SSIM and patch-wise MSE. Ablation studies on real and synthetically occluded face datasets reveal that our proposed technique outperforms the competing methods by a considerable margin, and also boosts the performance of Face Recognition.'
address: 'Dept. of CS&E, IIT Madras, Chennai, India'
author:
- Samik Banerjee
- Sukhendu Das
bibliography:
- 'egbib.bib'
title: 'SD-GAN: Structural and Denoising GAN reveals facial parts under occlusion'
---
GAN ,structural loss ,Nash equilibrium ,occlusion ,PMSE ,Face Verification
Introduction {#sec:intro}
============
Occlusion of faces is a major hindrance to accurate Face Recognition (FR), and the problem is far from being solved. With the advent of generative adversarial models [@goodfellow2014generative] in the field of deep learning (DL), there has been a surge of techniques to predict missing values or pixels in an image. Revealing missing parts of an image is a common image editing operation, which aims to fill the missing or masked regions of an image with appropriate content that appears visually realistic. The generated content can either be as accurate as the original, or simply fit well within the context such that the restored image looks perceptually plausible and complete. Recent image completion techniques [@barnes2009patchmatch; @huang2014image] rely on low- and mid-level cues for the generation of the missing patches in the image.
In contrast to these techniques, our proposed method reconstructs a full face even though certain salient and unique features of the face are occluded. Methods that generate missing patches on faces assume that similar patterns do not exist everywhere in the image. In line with this assumption, Generative Adversarial Networks (GANs) are well suited to generating the facial parts behind the mask, owing to their capability of generating the unseen. Wright *et al.* [@wright2009robust] used a method for sparse recovery of signals for image completion, which is further used in face completion. Recently, Ren *et al.* [@ren2015shepard] used Convolutional Neural Networks (CNN) for inpainting of images. Li *et al.* [@li2017generative] used a generative model to restore face parts occluded by patches on the CelebA dataset, but they did not provide any result on real-world occluded face datasets, like the AR face database [@martinez1998ar]. They also relied on post-processing of the images to produce semantically correct images. The Generative Face Completion (GFC) [@li2017generative] process requires a significantly large amount of training time to reach the equilibrium point.
With the aim of designing an end-to-end framework for generating face images from masked ones, the primary contribution of this paper lies in the design of a novel bimodal training algorithm for GAN. Mode-I of the training process produces faces with ambient illumination, while Mode-II denoises the faces generated by Mode-I. A unique training algorithm is proposed with faster convergence. An adversarial “structural” loss is also proposed in this paper in order to maintain the holistic quality of the face images. This “structural” loss consists of two components: the “Structural Similarity (SSIM) loss” and the “Patch-wise Mean Squared Error (PMSE)”. The SSIM [@wang2004image] takes care of the holistic features of the face, while the PMSE takes care of the pixel-wise differences in the faces. Further, our model converges to an equilibrium in Mode-II faster than other generative models [@krizhevsky2012imagenet], since the generator is based on a denoising auto-encoder [@vincent2008extracting] model. The generated faces boost the performance of FR on occluded faces, when compared with recently published works in the literature.
Sections \[sec:GAN\] and \[sec:DAE\] give brief overviews of GAN and Denoising Auto-encoder, respectively, while section \[sec:loss\] discusses the loss functions used in this paper. Section \[sec:arch\] gives the details of the proposed architecture of SD-GAN, followed by the description of the proposed training algorithm in section \[sec:train\]. In section \[sec:res\], the quantitative and qualitative results of our experiments, showing the effectiveness of our proposed method are reported, along with the different benchmark datasets used for experimentations. Finally, the paper concludes in section \[sec:conc\].
Generative Adversarial Networks (GAN) {#sec:GAN}
=====================================
A Generative Adversarial Network (GAN) [@goodfellow2014generative] consists of two models: the generator ($G$) and the discriminator ($D$). The CNN-based deep network in $G$ aims to capture the true data distribution $p_{data}$ by generating images from samples drawn from a distribution $p_{z}$, the distribution of the latent input provided to $G$. $D$, the counterpart of $G$ (also CNN-based), discriminates between the original images, sampled from $p_{data}$, and the images generated by $G$. Typically, $G$ learns to map from a latent space ($p_z$) to a particular data distribution ($p_{data}$) of interest, while $D$ discriminates between instances from $p_{data}$ and candidates produced by the generator. The objective of training $G$ is to increase the error rate of $D$ (*i.e.*, to “fool” $D$ by producing novel synthesized instances that appear to have come from $p_{data}$). This adversarial training adopted for GAN is derived from that in Schmidhuber [@urgen1992learning]. In other words, an alternating training procedure is performed on the GAN, where $D$ and $G$ play a two-player minimax strategy of a zero-sum game with value function $V(G,D)$. The overall objective function optimized by GANs [@goodfellow2014generative] is given as: $$\begin{split}
\min_G \max_D V(G,D) & = \mathbb{E}_{x\sim p_{data}}[\log D(x)]\\
& + \mathbb{E}_{z\sim p_z}[\log(1-D(G(z)))]
\end{split}$$ To learn a generator distribution over the data $x$ from the prior $p_z$, a mapping to data space is represented as $G(z; \theta_g)$, where $G$ is a differentiable function representing a CNN with parameters $\theta_g$. Another CNN-based deep network, represented by $D(x; \theta_d)$, outputs a single scalar in $[0,1]$: $D(x)$ represents the probability that $x$ came from the true data rather than from the generator.
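As a concrete illustration, the empirical estimate of the value function $V(G,D)$ over a mini-batch is simply the sum of two averaged log terms; a minimal numpy sketch is given below (the clipping constant is an assumption added for numerical safety, not part of the original formulation).

```python
import numpy as np

def gan_value(d_real, d_fake, eps=1e-7):
    """Empirical estimate of V(G, D) over a mini-batch: d_real are the
    discriminator outputs on samples from p_data, d_fake the outputs on
    generated samples G(z); eps only guards the logarithms."""
    d_real = np.clip(np.asarray(d_real), eps, 1.0 - eps)
    d_fake = np.clip(np.asarray(d_fake), eps, 1.0 - eps)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# D is updated to increase this value, G to decrease it (the minimax game).
print(gan_value([0.9, 0.8, 0.7], [0.2, 0.1, 0.3]))
```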
Two major drawbacks of an adversarial system are:
1. GANs generate all the pixels in one shot, rather than predicting the value of one pixel conditioned on another. This is the main reason for the noise in the output images whenever missing pixels are generated.
2. Reaching the Nash equilibrium [@nash1950equilibrium] of a game requires a large number of iterations/epochs due to the instability inherent in GANs [@goodfellow2014generative].
Overcoming the above two drawbacks forms the basic motivation of the work presented in this paper. To deal with noise, a denoising auto-encoder based generator model has been introduced in conjunction with the standard GAN framework. Further, Mode-II reaches the Nash equilibrium faster than Mode-I. A trade-off is made at Mode-I between the structural loss and the training time, where the generator loss is thresholded and the qualifying generated images are passed to Mode-II for denoising.
Denoising Auto-encoder {#sec:DAE}
======================
The general deep auto-encoder, as proposed by Bengio *et al.* [@bengio2007greedy], maps an input vector $\vec{x} \in [0,1]^d$ to a latent representation $\vec{y} \in [0,1]^{d'}$ through a deterministic mapping $\vec{y} = f_\theta(x) = s(\textbf{W}\vec{x}+\vec{b})$ with $\theta = \{\textbf{W},\vec{b}\}$, and then maps back to the reconstructed vector, $\vec{z} = g_{\theta'}(y) = s(\textbf{W}'\vec{y}+\vec{b}'), \vec{z} \in [0,1]^d$ in the input space with $\theta' = \{\textbf{W}',\vec{b}'\}$, where $s(\cdot)$ denotes the activation function. The optimization of the parameters is based on the mean reconstruction error [@bengio2007greedy]: $$\begin{split}
\theta^*,\theta'^* & = arg\min_{\theta,\theta'}\frac{1}{n}\sum_{i=1}^{n}L\big(\vec{x}^{(i)}, \vec{z}^{(i)}\big)\\
&=arg\min_{\theta,\theta'}\frac{1}{n}\sum_{i=1}^{n}L\big(\vec{x}^{(i)}, g_{\theta'}(f_\theta(\vec{x}^{(i)}))\big)
\end{split}
\label{eq:ae}$$ where, $\vec{x}^{(i)}$ represents the $i^{th}$ training sample and $L$ is the squared error $L(\vec{x},\vec{z}) = \|\vec{x}-\vec{z}\|^2$.
Vincent *et al.* [@vincent2008extracting] designed a denoising autoencoder by modifying the formulation in equation \[eq:ae\]. The authors assumed $\vec{\tilde{x}}$ to be a noisy approximation of $\vec{x}$, characterized by a stochastic mapping $\vec{\tilde{x}}\sim q_D(\vec{\tilde{x}}|\vec{x})$. The joint distribution is given as $q^0(\vec{x}, \vec{\tilde{x}}, \vec{y}) = q^0(\vec{x})q_D(\vec{\tilde{x}}|\vec{x}) \delta_{f_\theta(\vec{\tilde{x}})}(\vec{y})$, where $\delta_u(v)=0$ when $u\ne v$, and the joint distribution is parameterized by $\theta$. Thus, $\vec{y}$ becomes the deterministic function of $\vec{\tilde{x}}$. The objective function in equation \[eq:ae\] thus transforms into: $$arg\min_{\theta,\theta'} \mathbb{E}_{q^0(\vec{x}, \vec{\tilde{x}})}\big[L\big(\vec{x}^{(i)}, g_{\theta'}(f_\theta(\vec{\tilde{x}}^{(i)}))\big)\big]$$ Patch-wise minimization of mean-squared error (discussed later in section \[sec:psme\]) further helps in image denoising [@lee2012mmse]. Thus patch-wise mean squared error loss has been used in this paper as a component of the loss function in both the generators ($G_1$ & $G_2$) of our SD-GAN framework.
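A minimal numpy sketch of this denoising reconstruction objective is given below; the additive Gaussian corruption used for $q_D$ and the toy single-layer encoder/decoder are illustrative assumptions, not the architecture of $G_2$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy single-layer encoder/decoder standing in for f_theta and g_theta'.
d, d_hidden = 16, 8
W,  b  = 0.1 * rng.standard_normal((d_hidden, d)), np.zeros(d_hidden)
W2, b2 = 0.1 * rng.standard_normal((d, d_hidden)), np.zeros(d)

def denoising_loss(x, noise_std=0.1):
    """Mean squared reconstruction error of x from a corrupted version
    x_tilde ~ q_D(.|x); additive Gaussian corruption is assumed here."""
    x_tilde = x + noise_std * rng.standard_normal(x.shape)
    y = sigmoid(x_tilde @ W.T + b)         # y = f_theta(x_tilde)
    z = sigmoid(y @ W2.T + b2)             # z = g_theta'(y)
    return np.mean(np.sum((x - z) ** 2, axis=1))

x = rng.uniform(size=(32, d))              # a mini-batch of inputs in [0, 1]^d
print(denoising_loss(x))
```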
Loss Functions {#sec:loss}
==============
The process of training the SD-GAN consists of two modes, and optimizes four adversarial loss functions described (later) in equations \[eq:d1\_loss\]-\[eq:g2\_loss\]. The corresponding criteria are described in the following sub-sections.
Binary Cross Entropy Loss {#sec:bce}
-------------------------
Binary cross-entropy is a loss function used effectively in the field of deep learning for binary classification problems with sigmoid output units. The binary class labels used at the discriminators are $1$ & $0$, representing the real and fake (generated) images, respectively. The loss function is given as: $$\begin{split}
\mathcal{L}_{bce}(\vec{\tilde{y}},\vec{y}) & = -\frac{1}{n}\sum_{i=1}^n \left[y_i \log(\tilde{y}_i) + (1-y_i) \log(1-\tilde{y}_i)\right]\\
&= -\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^m y_{ij} \log(\tilde{y}_{ij})
\end{split}$$ where $i$ indexes the $n$ samples/observations and $j$ indexes the $m$ classes; $y_i$ is the sample label (a binary scalar in the first form, a one-hot vector in the second), and the predictions satisfy $\tilde{y}_{ij}\in(0,1)$ with $\sum_{j} \tilde{y}_{ij} = 1$ for all $i$.
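A direct numpy transcription of the binary form of this loss (with a small clipping constant added as a numerical safeguard) could read:

```python
import numpy as np

def bce_loss(y_pred, y_true, eps=1e-7):
    """Binary cross-entropy between predicted probabilities y_pred in (0,1)
    and binary labels y_true (1 = real, 0 = fake/generated)."""
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    y_true = np.asarray(y_true, dtype=float)
    return -np.mean(y_true * np.log(y_pred)
                    + (1.0 - y_true) * np.log(1.0 - y_pred))

print(bce_loss([0.9, 0.2, 0.8], [1, 0, 1]))
```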
SSIM Loss {#sec:ssim_loss}
---------
SSIM [@wang2004image] gives the structural similarity index between two images ($x_1$ and $x_2$). We first define SSIM index [@wang2004image], estimated using multiple patches (windows) of an image. This measure between two windows $p$ and $q$ of common size $N \times N$ is: $$SSIM(p,q) = \frac{(2\mu_p\mu_q + c_1)(2\sigma_{pq} + c_2)}{(\mu_p^2 + \mu_q^2 + c_1)(\sigma_p^2 + \sigma_q^2 + c_2)}
\label{eq:ssim}$$ where $\mu_p, \mu_q$ are the pixel-wise averages of image patches $p$ and $q$ respectively, $\sigma_p^2, \sigma_q^2$ their respective variances, and $\sigma_{pq}$ the covariance of $p$ and $q$; $c_1 = (k_1L)^2$ and $c_2 = (k_2L)^2$ are two constants used to stabilize the division when the denominator is weak, $L$ is the dynamic range of the pixel values (typically $2^{\#bits/pixel}-1$), and $k_1 = 0.01$ and $k_2 = 0.03$ are set by default. The SSIM index estimated between two single-channel (gray-scale) images attains a maximum value of $1$ for two identical images and decreases as the similarity between the images decreases. Hence, the SSIM loss ($\mathcal{L}_{ssim}$) is calculated as: $$\mathcal{L}_{ssim} = 1-SSIM(x_1, x_2)$$ where SSIM is given in equation \[eq:ssim\]. Minimization of this loss makes $x_2$ a better estimate of $x_1$.
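For illustration, a numpy transcription of the SSIM index and the resulting loss is sketched below; treating the whole patch as a single window and taking the default $L=255$ for 8-bit images are simplifying assumptions (the full measure is averaged over multiple windows).

```python
import numpy as np

def ssim_index(p, q, L=255.0, k1=0.01, k2=0.03):
    """SSIM index of eq. (ssim) between two equally sized gray-scale
    windows p and q (here the whole patch is treated as a single window)."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_p, mu_q = p.mean(), q.mean()
    var_p, var_q = p.var(), q.var()
    cov_pq = ((p - mu_p) * (q - mu_q)).mean()
    return (((2 * mu_p * mu_q + c1) * (2 * cov_pq + c2))
            / ((mu_p**2 + mu_q**2 + c1) * (var_p + var_q + c2)))

def ssim_loss(x1, x2):
    """L_ssim = 1 - SSIM(x1, x2): zero for identical images."""
    return 1.0 - ssim_index(x1, x2)

rng = np.random.default_rng(0)
x1 = rng.uniform(0, 255, size=(64, 64))
print(ssim_loss(x1, x1), ssim_loss(x1, 255 - x1))
```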
Patch-wise MSE Loss {#sec:psme}
-------------------
Patch-wise MSE (PMSE) loss is derived from the mean-squared error between two images. Let $h_1$ and $h_2$ be two corresponding patches extracted from $x_1$ and $x_2$, respectively. The PMSE between $x_1$ and $x_2$ is calculated as: $$\mathcal{L}_{pmse}(x_1,x_2) = \sum_{i=1}^{|C|} \frac{\lambda_i}{|h|}\sum_{j=1}^{|h|} \|h_1^{(i,j)}-h_2^{(i,j)}\|^2$$ where $|C|$ & $|h|$ are the number of channels and patches in an image, $h_k$ is a patch extracted from $x_k$, and the $\lambda_i$’s are the channel-wise weights of the image ($\lambda = \{0.2989,0.5870,0.1141\}$ as given in [@johnson2006stephen]). A weighted linear combination (using $\lambda$) of the per-channel MSE’s is used to estimate the MSE of each patch, and the PMSE is the average MSE over all pairs of spatially corresponding patches in the images.
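A numpy sketch of the PMSE with the channel weights quoted above is given below; the $8\times 8$ patch size and the RGB channel ordering are illustrative assumptions.

```python
import numpy as np

def pmse_loss(x1, x2, patch=8, weights=(0.2989, 0.5870, 0.1141)):
    """Patch-wise MSE between two H x W x 3 images: per-channel MSE of each
    spatially aligned patch, weighted by the channel weights lambda_i and
    averaged over all patches."""
    H, W, _ = x1.shape
    losses = []
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            diff = x1[i:i + patch, j:j + patch] - x2[i:i + patch, j:j + patch]
            mse_per_channel = (diff ** 2).mean(axis=(0, 1))
            losses.append(float(np.dot(weights, mse_per_channel)))
    return float(np.mean(losses))

rng = np.random.default_rng(0)
a, b = rng.uniform(size=(64, 64, 3)), rng.uniform(size=(64, 64, 3))
print(pmse_loss(a, a), pmse_loss(a, b))
```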
Structural Loss {#sec:struct}
---------------
This paper also proposes a novel structural loss ($\mathcal{L}_{st}$) in addition to the binary cross-entropy loss as in DCGAN [@goodfellow2014generative]. The primary aim of proposing this novel loss is to constrain the structure of the generated image. The SSIM loss (see section \[sec:ssim\_loss\]) accounts for the facial structure, while a mean-squared error (MSE) based loss applied patch-wise (refer to section \[sec:psme\]) helps to replicate the illumination variation in $G_1$ and aids denoising in the auto-encoder based $G_2$. The structural loss is given as: $$\mathcal{L}_{st} = \frac{\mathcal{L}_{ssim} + \mathcal{L}_{pmse}}{2}
\label{eq:st}$$
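Given the two component loss values, combining them into the structural loss of equation \[eq:st\] is a one-line helper:

```python
def structural_loss(l_ssim, l_pmse):
    """Structural loss of eq. (st): the average of the two components."""
    return 0.5 * (l_ssim + l_pmse)
```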
The proposed architecture: SD-GAN {#sec:arch}
=================================
The proposed Structural and Denoising Generative Adversarial Network (SD-GAN) works in two modes. Figures \[fig:sd\_gan\] & \[fig:net\] show the proposed architecture with structural details of SD-GAN, and each of the modes of operation is described in the following sub-sections.
Mode-I {#sec:p1_sdg}
------
The Mode-I of SD-GAN is derived from DC-GAN [@radford2015unsupervised], with a few variations in the input as well as in the training procedure (see section \[sec:train\] for further details). The generator, $G_1$, is a deep network (see figure \[fig:net\](a)) which takes the occluded faces as input, instead of the noise vector (as in DC-GAN), and generates (synthetic) faces $x_{gen}$ to be fed to the discriminator $D_1$. $D_1$, similar to the discriminator network in DC-GAN (see figure \[fig:net\](b)), takes both the full real-world facial images as well as $x_{gen}$ as inputs and attempts to discriminate between the real and generated (fake) images.
A “nice generation” module acts as an interface for selective data transfer between the two modes of training. It takes fake images ($x_{gen}$) as input in mini-batches of size $20$, and computes a loss function (see line 5 of algorithm \[algo:SDGAN\]) to filter out nice images ($x_{nice}$) whenever the loss is significantly low ($<0.01$). The corresponding full face images are also filtered as $x_{real}$ and given to Mode-II of training. This is done under the assumption that $G_1$ has successfully fooled $D_1$ for that batch of images when the loss is low. Since $x_{nice}$ is often corrupted by noise, a denoising operation is necessary, which is performed in Mode-II.
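A schematic of the selective transfer performed by this module is sketched below (the array-based interface is an assumption; the $0.01$ threshold follows the text).

```python
import numpy as np

def nice_generation(x_gen, x_full, batch_loss, threshold=0.01):
    """Keep only the generated faces whose generator loss is below the
    threshold, together with their corresponding full-face images; the
    selected pairs are handed over to Mode-II."""
    keep = np.asarray(batch_loss) < threshold
    return x_gen[keep], x_full[keep]
```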
Mode-II {#sec:p2_sdg}
-------
Mode-II is the denoising unit of our proposed architecture, whereas Mode-I preserves the structural identity of the face. To perform the task of denoising, a denoising auto-encoder (see section \[sec:DAE\]) is used as the generator ($G_2$) in this mode of operation. For the CNN-based denoising auto-encoder (refer to figure \[fig:net\](c)) proposed in this paper, the generated “nice” images ($x_{nice}$) obtained from Mode-I are taken as inputs. The discriminator ($D_2$), identical to $D_1$, takes the $x_{real}$ images as input and performs adversarial training independently and exclusively. Though the input to Mode-II is the output of Mode-I, the training and weight update of the model in Mode-II is independent of the training of Mode-I, *i.e.* the gradients do not backpropagate into the model of Mode-I.
Training SD-GAN {#sec:train}
===============
The bimodal SD-GAN model is trained using the proposed algorithm \[algo:SDGAN\]. The procedure involves an end-to-end training of both the modes simultaneously. Each mode is trained using a procedure adopted from DC-GAN [@radford2015unsupervised], with a structural loss induced for each mode, exclusively. The model is trained in *Keras* with *Tensorflow* backend [@abadi2016tensorflow]. A uniform mini-batch size of $20$ samples has been used throughout the training process, with gradient based optimization for weight update in the network. The following sub-sections detail the mode-wise training procedure, with the loss functions involved for weight update in the network (for all notations used hereafter, refer algorithm \[algo:SDGAN\]).
$B$ := mini-batch from $F_m$ & $F_f$; $x_{nice}$ $\leftarrow$ \[\]; $x_{real}$ $\leftarrow$ \[\] (initialization of algorithm \[algo:SDGAN\]; the remaining steps are described in the following sub-sections).
Training for Mode-I {#tr_p1}
-------------------
The training process used for Mode-I is outlined in lines $4-12$ of algorithm \[algo:SDGAN\]. The occluded images are given as inputs to $G_1$, to generate fake images matching the underlying true distribution of the full-facial images. The semi-supervised training procedure of SD-GAN involves a discriminator $D_1$ to distinguish between the real-world and generated images. The full faces corresponding to each of the occluded faces in a batch, $B$, are fed to the discriminator as real images. The training of $D_1$ is based on the minimization of the binary cross-entropy loss ($\mathcal{L}_{bce}$) (see section \[sec:bce\] for details), using the ADAM [@kingma2014adam] optimizer. Let $x_{real}$ represent the set of full real-world face images and $x_{occ}$ the occluded faces in a particular batch, while $D_1(x,y)$ represents the discriminator function with an input $x$ and a target label $y$ (set as $1$ for $x_{real}$ and $0$ for the generated images $G_1(x_{occ})$), and $G_1(x)$ denotes the generating function with input $x$. The adversarial loss corresponding to $D_1$ can be written as: $$\begin{split}
\mathcal{L}_{D_1}^{adv} (x_{real}, x_{occ}) = & \\
\mathcal{L}_{bce}(D_1(x_{real},y),\vec{1})+ &\mathcal{L}_{bce}(D_1(G_1(x_{occ}),y),\vec{0})\\
\end{split}
\label{eq:d1_loss}$$
Training the generator $G_1$ is essentially an optimization process executed using Stochastic Gradient Descent (SGD) [@amari1993backpropagation], while freezing the weight update of $D_1$. The proposed structural loss (auxiliary) is induced at this stage of training. The adversarial loss for $G_1$ is: $$\begin{split}
\mathcal{L}_{G_1}^{adv} (x_{occ}, x_{real}) & \\ =\mathcal{L}_{bce}(D_1(G_1(x_{occ}),y),\vec{1})+ & \mathcal{L}_{st}(x_{real}, G_1(x_{occ})) \\
\end{split}
\label{eq:g1_loss}$$ where, $\mathcal{L}_{st}$ is defined in equation \[eq:st\].
Minimization of these two criteria, given by equations (\[eq:d1\_loss\]) and (\[eq:g1\_loss\]), makes $G_1$ outsmart (by cheating) $D_1$ upon reaching the Nash equilibrium [@gibbons1992primer], where $D_1$ believes that the images generated by $G_1$ are sampled from the true distribution.
Training for Mode-II {#sec:tr_p2}
--------------------
The output images obtained from Mode-I are used for training Mode-II of SD-GAN. Hence, the batches of “nice” images ($x_{nice}$) generated by $G_1$ are provided as inputs to Mode-II along with their corresponding (subject-wise) full-face images ($x_{real}$). Though these images have their structural content partly preserved, they suffer from some degradation due to noise. To denoise these images, a denoising auto-encoder based generator model has been proposed in this paper. Lines $14-20$ in algorithm \[algo:SDGAN\] outline Mode-II of training. The discriminator $D_2$ uses an adversarial loss similar to that of $D_1$, given as: $$\begin{split}
\mathcal{L}_{D_2}^{adv} (x_{real}, x_{nice}) = &\\
\mathcal{L}_{bce}(D_2(x_{real},y),\vec{1})+ &\mathcal{L}_{bce}(D_2(G_2(x_{nice}),y),\vec{0})\\
\end{split}
\label{eq:d2_loss}$$
The denoising auto-encoder training of $G_2$ is incremental, in the sense that the number of training samples increases as $G_1$ becomes stronger. The instability issues [@goodfellow2014generative] prevalent in training are taken care of by over-training the weaker of the two networks so as to reach the equilibrium point. The adversarial loss incurred at this phase mainly deals with closing the gap between the distributions of the real and the generated (fake) samples. The adversarial loss at $G_2$ is given by: $$\begin{split}
\mathcal{L}_{G_2}^{adv} (x_{nice}, x_{real}) = &\\
\mathcal{L}_{bce}(D_2(G_2(x_{nice}),y),\vec{1})+ &\mathcal{L}_{aux}(x_{real}, G_1(x_{occ})) \\
\end{split}
\label{eq:g2_loss}$$ where,\
$\mathcal{L}_{aux} = \triangle \big(\mathcal{L}_{st}(x_{real}, G_1(x_{occ})), \mathcal{L}_{st}(x_{real}, G_2(x_{nice}))\big)$, and $\triangle$ being the difference operator.
Minimization of $\mathcal{L}_{G_2}^{adv}$ reduces the gap in structural and pixel-values between the generated (fake) and true samples, which also reduces the noise in the generated samples.
The use of Mode-II of training along with Mode-I (done independently) reduces the overall training time (by a factor of $\sim10^2$ in terms of the number of epochs) compared to a recent state-of-the-art technique [@li2017generative] used for the task at hand.
Results and Performance Analysis {#sec:res}
================================
This section first describes the datasets used, then gives the quantitative measures used to show the effectiveness of our proposed model for face completion and FR, compared with a few state-of-the-art techniques.
Datasets {#sec:ds}
--------
Experimentations are carried on three datasets: (a) AR dataset [@martinez1998ar], (b) Celeb-A dataset [@liu2015faceattributes], and (c) multi-PIE [@gross2010multi]; each is briefly described below.
### AR Database {#sec:ar}
The AR database [@martinez1998ar] consists of face images which contain real-world occlusions. The database consists of $136$ subjects with varying illumination conditions and expressions. For our study, we consider those images which are near-frontal and have minimal expression variations (see figure \[fig:ar\] for samples). Two variations of occlusion are available in the database, *viz.* sunglasses and a scarf on the face, which prevent the faces from being reconstructed using symmetric transformations from the other half of the face. For our experiments, the dataset has been divided into 2 subsets: **AR1**, the images with sunglasses, and **AR2**, those with scarfs. A data partition in a $60:20:20$ ratio is maintained uniformly for training, validation, and testing throughout the set of experiments. The subjects used for training and validation are never used for testing.
\
### Celeb-A Database {#sec:celeba}
The CelebA [@liu2015faceattributes] dataset consists of 202,599 face images. Each face image is cropped, roughly aligned by the position of two eyes, and rescaled to $100\times100\times3$ pixels. The standard benchmark split with 162,770 images for training, 19,867 for validation and 19,962 for testing, has been followed for experimentation. A mask of size $50 \times 50$ pixels covers the face (see figure \[fig:ca\] for samples) at random locations, as described in [@li2017generative].
### Multi-PIE dataset {#sec:mpa}
The CMU Multi-PIE database [@gross2010multi] consists of 755,370 images shot in 4 different sessions from 337 subjects. The images in the dataset are split up into training, validation and test set. The training set is composed of all individuals in non-frontal pose (except those used for validation and testing) at the generator, while the size of the validation (64 identities at a pose of $90^\circ$) and test sets (65 identities at a pose of $90^\circ$) are almost identical. We consider the images taken in session 1, with the probe images taken at $90^\circ$ pose.
Evaluation metrics {#sec:metric}
------------------
Along with the visual results shown in section \[sec:pa\_gen\] we perform quantitative evaluation of the proposed model for the two datasets under test. Firstly, we use the peak-signal-to-noise-ratio (PSNR) value, which captures the difference in the pixel values of the two images. PSNR (higher the better) is defined as:
$$\begin{split}
MSE(x_{fin},x_{real}) &= \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big[x_{fin}(i,j)-x_{real}(i,j)\big]^2 \\
PSNR &= 10 \cdot \log_{10} \bigg(\frac{MAX_{x_{fin}}^2}{MSE}\bigg)
\end{split}$$
where, $x_{fin}$ is the output (generated) image and $x_{real}$ is the reference (ground-truth, GT) image.
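For reference, a numpy transcription of this metric (assuming 8-bit images, so $MAX_{x_{fin}} = 255$) could read:

```python
import numpy as np

def psnr(x_fin, x_real, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a generated image x_fin and
    its ground truth x_real; higher is better."""
    mse = np.mean((np.asarray(x_fin, dtype=float)
                   - np.asarray(x_real, dtype=float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```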
Secondly, SSIM index (refer equation \[eq:ssim\]) is used for quantifying the generated results, which estimates the holistic similarity between two images. Finally, we also use the identity distances measured by the OpenFace toolbox [@amos2016openface] to determine the high-level semantic similarity of two faces.
Performance Analysis for generation of full facial images {#sec:pa_gen}
---------------------------------------------------------
A few examples of the generation of full facial images from occluded faces are shown in figure \[fig:res\_ar\] under two different scenarios of the proposed method. Column (b) depicts the output of DC-GAN [@radford2015unsupervised], while the results progressively become better as we move towards the right, showing the effectiveness of the auxiliary losses proposed in this paper. The significant improvement in the image quality measure shown by our model in (e) as compared to (d) (see table \[tab:quant\] for quantitative measures showing similar trends) strengthens our claim for the introduction of Mode-II for denoising the output of Mode-I.
![Results for image generation from two different sets of occlusions, *viz.*, **AR2** and **AR1** (arranged row-wise) present in AR, by SD-GAN: (a) the input occluded image, (b) output of $G_1$ using $\mathcal{L}_{bce}$, (c) output of $G_1$ using $\mathcal{L}_{bce} + \mathcal{L}_{ssim}$, (d) output of $G_1$ at Phase-I, (e) output of $G_2$ at Phase-II, (f) Ground-truth (GT). The values below each image from (b)-(e) give the (PSNR/SSIM) values of the images compared to the expected output (GT).[]{data-label="fig:res_ar"}](res){width="\textwidth"}
![Results for image generation from two different methods: (a) occluded images (one each from **AR2** (*Top-row*) and **AR1** (*Bottom-Row*)), (b) Images generated by GFC [@li2017generative] without post-processing, (c) Images generated by SD-GAN, (d) expected output. The values below each image gives the (PSNR/SSIM) values of the images compared to the expected output.[]{data-label="fig:comp"}](comp){width="70.00000%"}
![Results for image generation from two different methods: (a) occluded images (from Celeb-A dataset [@liu2015faceattributes]), (b) Images generated by GFC [@li2017generative] without post-processing, (c) Images generated by SD-GAN, (d) expected output. The values below each image gives the (PSNR/SSIM) values of the images compared to the expected output.[]{data-label="fig:comp1"}](comp1){width="75.00000%"}
Both the quantitative as well as the qualitative measures are compared with a recent state-of-the-art technique. GFC [@li2017generative] uses face parsing as well as Poisson blending [@perez2003poisson] as post-processing techniques to generate facial parts under occlusion. Graph Laplacian (GL) based methods [@deng2011graph] also attempt to solve the problem. The quantitative results evaluating the quality of the images are given in table \[tab:quant\]. Our proposed SD-GAN (referred to as ’SDG’ in the tables) outperforms all other techniques based on PSNR values, whereas in the case of the holistic measure (SSIM), the nearest competing method GFC, also a GAN based deep model with post-processing techniques, matches our performance in a few cases and marginally outperforms our proposed technique in only one case. Qualitative experiments also reveal that, without the post-processing technique, GFC fails to match the performance of our proposed technique on both datasets, for which our method is a clear winner, as shown in figures \[fig:comp\] & \[fig:comp1\]. The values at the bottom of the images in columns (b) & (c) of figures \[fig:comp\] & \[fig:comp1\] reveal the superiority of our proposed SD-GAN, based on the PSNR/SSIM values of the four exemplar images.
  **PSNR**    **(b)**   **(c)**   **(d)**   **GL**   **GFC**    **SDG**
  ----------- --------- --------- --------- -------- ---------- -----------
  *AR1*       12.15     13.27     15.62     13.48    15.83      **18.43**
  *AR2*       11.92     12.58     14.87     11.78    13.84      **17.68**
  *CelebA*    12.31     12.86     16.82     9.43     18.30      **18.61**

  **SSIM**    **(b)**   **(c)**   **(d)**   **GL**   **GFC**    **SDG**
  ----------- --------- --------- --------- -------- ---------- -----------
  *AR1*       0.67      0.70      0.70      0.65     **0.77**   **0.77**
  *AR2*       0.59      0.65      0.70      0.54     0.73       **0.76**
  *CelebA*    0.68      0.71      0.73      0.67     **0.76**   **0.76**

  **Identity dist.**   **(b)**   **(c)**   **(d)**   **GL**   **GFC**    **SDG**
  -------------------- --------- --------- --------- -------- ---------- -----------
  *AR1*                0.64      0.61      0.52      0.52     0.48       **0.47**
  *AR2*                0.75      0.72      0.59      0.67     **0.56**   **0.56**
  *CelebA*             0.68      0.62      0.59      0.61     **0.55**   0.57

  : Quantitative evaluation of the generated images: PSNR (top, higher is better), SSIM (middle, higher is better) and OpenFace identity distance (bottom, lower is better); columns (b)–(d) correspond to the variants of figure \[fig:res\_ar\]. Best values are in bold.

\[tab:quant\]
Performance boost in Face Recognition {#sec:perf_ver}
-------------------------------------
Face Recognition (FR) systems underperform when the faces are occluded. Our proposed SD-GAN reconstructs a full-face when presented with a occluded face, which facilitates efficient performance for FR. Performances of several recent shallow learning techniques, *viz.* LSM [@hwang2003reconstruction], RPCA [@wright2009robustpca], GL [@deng2011graph] have been compared with our proposed and GFC [@li2017generative] methods for generation of the faces, evaluated using state-of-the-art benchmark FR systems, like PCA [@turk1991face], Gabor [@lei2008gabor], LPP [@he2004locality], Sparse Representation (SR) [@wright2009robust] and VGG [@parkhi2015deep]. The results in table \[tab:ar\_rec\] show the rank-1 accuracies for AR1 and AR2 datasets, where our proposed model (SD-GAN) outperforms all other methods, indicating that it must be capable of generating discriminative parts of the face better than the other competing methods. Interpret the values in the table \[tab:ar\_rec\] as performances for FR, for images generated by the methods mentioned at the top of each column, while the FR methods appear at the left of each row. Observe the huge jump in performance from the statistical methods to the GAN based methods, indicating the power of the GAN based techniques for overcoming occluded faces, specifically when applied for FR applications.
  **AR1**   **Occ.**   **LSM**   **RPCA**   **GL**   **GFC**   **SDG**
  --------- ---------- --------- ---------- -------- --------- ----------
  *PCA*     52.6       61.4      64.2       70.0     82.9      **89.7**
  *GPCA*    67.5       73.3      71.6       76.6     88.4      **93.3**
  *LPP*     53.4       45.7      61.4       59.0     83.5      **90.1**
  *SR*      58.4       59.2      57.3       60.6     85.7      **91.6**
  *VGG*     84.2       85.4      84.5       87.9     91.7      **96.8**

  **AR2**   **Occ.**   **LSM**   **RPCA**   **GL**   **GFC**   **SDG**
  --------- ---------- --------- ---------- -------- --------- ----------
  *PCA*     15.7       37.5      32.2       40.8     72.6      **79.4**
  *GPCA*    55.1       56.2      54.0       60.9     80.3      **88.6**
  *LPP*     34.4       43.0      38.3       47.1     75.9      **81.2**
  *SR*      45.2       51.8      47.7       56.7     79.8      **86.8**
  *VGG*     72.3       75.9      79.6       83.5     89.9      **92.6**

  : Rank-1 recognition accuracies (%) on AR1 (top) and AR2 (bottom): each row is an FR method applied to the faces generated by the method named in the column header. Higher values are better.

\[tab:ar\_rec\]
An extension of Linear Discriminant Analysis (LDA) [@gunther2016face] to the two color channels I-chrominance and the Red channel (LDA-IR) is described in [@gunther2016face]. Inter-Session Variability (ISV) [@gunther2016face] modeling is a technique that has been successfully employed for face verification, which does not have occluded images during training. The rank-1 recognition rates of the VGG+SD-GAN (VGG is used as a classifier with SDG as the generator), when compared with these two state-of-the-art techniques, LDA-IR and ISV, are much higher for the AR database, as reported in table \[tab:gun\].
**Dataset** **ISV [@gunther2016face]** **LDA-IR [@gunther2016face]** **VGG+SDG**
------------- ---------------------------- ------------------------------- -------------
AR1 45.13 62.59 **96.82**
AR2 39.81 57.44 **92.64**
: Rank-1 Recognition rates for end-to-end system for occluded face recognition. Higher values are better.
\[tab:gun\]
Analysis of training time of SD-GAN, compared to GFC [@li2017generative] {#sec:time}
------------------------------------------------------------------------
All experiments are performed on a dual-GPU machine with two Nvidia TITAN X cards, 64 GB RAM and an Intel Core i7 4790K processor. The training of both models is performed using *Keras* with the *Tensorflow* backend. The training times are tabulated in table \[tab:time\], which shows that SD-GAN is faster than GFC, since it converges near a Nash equilibrium (see the arrow on the graph in figure \[fig:eq\_graph\] for details) in a smaller number of epochs compared to GFC.
----- -------------- ---------------- -- -------------- ----------------
**\#epochs** **mins/epoch** **\#epochs** **mins/epoch**
GFC 30K 8 20K 25
SDG **550** **3** **500** **12**
----- -------------- ---------------- -- -------------- ----------------
: Comparison of training times of SD-GAN and GFC. Lower value is better.
\[tab:time\]
![Graphs showing the discriminator and generator loss functions during training.[]{data-label="fig:eq_graph"}](p1){width="0.7\linewidth"}
Results on the Multi-PIE dataset
--------------------------------
In order to evaluate our proposed algorithm on pose variations of face images producing self-occlusions, we performed experiments on the Multi-PIE dataset, which has 750,000+ images at different poses. Self-occlusion of faces occurs due to off-frontal and out-of-plane rotation variations in pose. For evaluating performance using rank-1 recognition rates, we follow the protocol from \[33\], and only images from session one are used. Results are given in table \[tab:pose\]. All images used for testing and validation have $90^\circ$ pose. A few results shown in figure \[fig:pose\] display the superiority of our method over TP-GAN (TPG) [@huang2017beyond], both qualitatively and with quantitative measures in terms of the SSIM/PSNR values. Observe the sample in the last row of figure \[fig:pose\], which shows a non-frontal (but not side-profile) query face. In this case, our result in (c) has reproduced the illumination variation of the GT (d), whereas the method of [@huang2017beyond] in (b) produces exactly the opposite (a mirror-like image) with an unnecessarily sharper contrast than that of the GT. Also observe the intriguing presence of ear-rings (apparently non-identical ones) in the output of [@huang2017beyond], which are present neither in the GT nor in our output in (c). The proposed system intrinsically exploits the symmetric nature of the face, helping to generate images with appropriate illumination variations at high resolution, with the desired quality of the GT.
![Results for image generation from two different methods: (a) Images at different poses (obtained from Multi-Pie dataset [@gross2010multi], with left- (*top-row*) & right-looking (*Middle-row*) profiles at $90^\circ$; and a face image at $60^\circ$ pose (*Bottom-row*)) used for testing, (b) Image generated by TPG [@huang2017beyond], (c) Image generated by SDG, (d) expected output (ground-truth).The values below each image gives the (PSNR/SSIM) values of the image compared to the expected (target) output.[]{data-label="fig:pose"}](pose1){width="80.00000%"}
**Criteria** **TPG [@huang2017beyond]** **SDG**
----------------------------- ---------------------------- -----------
Rank-1 Recognition Rate (%) 64.03 **65.19**
PSNR 12.26 **19.84**
SSIM 0.59 **0.66**
: Comparison of Rank-1 recognition rate for Multi-PIE dataset, with faces at $90^\circ$ pose (best values are in bold).
\[tab:pose\]
Conclusion {#sec:conc}
==========
The proposed SD-GAN model uses end-to-end training for the reconstruction of occluded parts of the face. The proposed technique does not rely on any post-processing technique for semantic correction of the faces. Thus, this module may be used as a pre-processing step for any FR system in cases where faces are occluded. A faster training time is ensured in this model owing to its quicker convergence towards the Nash equilibrium. The qualitative and quantitative results discussed above confirm the superiority of our proposed model. Misalignment of faces may lead to distortions, as happens in all reconstruction techniques. In order to generate better quality photo-realistic images for the AR and LFW datasets, the dual-pathway technique proposed in [@huang2017beyond] can be used as a post-processing stage following our SD-GAN.
References {#references .unnumbered}
==========
|
---
abstract: 'We study the estimation error of constrained $M$-estimators, and derive explicit upper bounds on the expected estimation error determined by the Gaussian width of the constraint set. Both the case where the true parameter is on the boundary of the constraint set (matched constraint) and the case where the true parameter is strictly inside the constraint set (mismatched constraint) are considered. For both cases, we derive novel universal estimation error bounds for regression in a generalized linear model with the canonical link function. Our error bound for the mismatched constraint case is minimax optimal in terms of its dependence on the sample size, for Gaussian linear regression by the Lasso.'
author:
- 'Yen-Huan Li, Ya-Ping Hsieh, Nissim Zerbib and Volkan Cevher'
bibliography:
- 'list.bib'
title: 'A Geometric View on Constrained $M$-Estimators'
---
Introduction {#sec_formulation}
============
Consider a general statistical estimation problem. Let $( y_1, \ldots, y_n )$ be a sample following a probability distribution $\mathbb{P}_{{\theta^\natural}}$ in a given class $\mathcal{P} := {\left\{ \mathbb{P}_{\theta} : \theta \in \mathbb{R}^p \right\}}$. We are interested in estimating the parameter ${\theta^\natural}$, given $( y_1, \ldots, y_n )$ and $\mathcal{P}$, under the high-dimensional setting where $n < p$.
If ${\theta^\natural}$ is known to satisfy $g ( {\theta^\natural}) \leq c$ for some continuous convex function $g$ and positive constant $c$, we can consider a constrained $M$-estimator of the form $${\hat{\theta}}\in \arg \min_{ \theta } {\left\{ f_n ( \theta ) : \theta \in \mathcal{G} \right\}}, \quad \mathcal{G} := {\left\{ \theta \in \mathbb{R}^p : g ( \theta ) \leq c \right\}}. \label{eq_that}$$ We assume that $f_n$ is a continuously differentiable convex function, and the constraint set $\mathcal{G}$ is non-empty. For example, the Lasso [@Tibshirani1996] corresponds to $$f_n ( \theta ) := \frac{1}{2 n} \sum_{i = 1}^n \left( y_i - \left\langle a_i, \theta \right\rangle \right)^2, \quad \mathcal{G} := {\left\{ {\left\Vert \theta \right\Vert}_1 \leq c \right\}}, \label{eq_LS}$$ for some $a_1, \ldots, a_n \in \mathbb{R}^p$ and positive constant $c$ . A matrix $\Theta \in \mathbb{R}^{d \times d}$ can be vectorized as a corresponding vector $\theta \in \mathbb{R}^p$, $d^2 = p$. In the low-rank matrix recovery problem [@Candes2011b; @Gunasekar2014], a popular estimator corresponds to $$f_n ( \Theta ) := \frac{1}{2 n} \sum_{i = 1}^n \left( y_i - {\mathrm{Tr} \left( A_i^T \Theta \right)} \right)^2, \quad \mathcal{G} := {\left\{ {\left\Vert \Theta \right\Vert}_* \leq c \right\}}, \label{eq_matrix_lasso}$$ for some $A_1, \ldots, A_n \in \mathbb{R}^{d \times d}$ and positive constant $c$, where ${\left\Vert \cdot \right\Vert}_*$ denotes the nuclear norm. In general, $f_n$ can be the normalized negative log-likelihood function, or any properly defined function, and $g$ depends on the *a priori* information on the structure of the parameter ${\theta^\natural}$ [@Bach2013a; @Chandrasekaran2012; @ElHalabi2015].
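For instance, the $\ell_1$-constrained least-squares estimator in (\[eq\_LS\]) can be computed by projected gradient descent; the numpy sketch below uses the standard Euclidean projection onto the $\ell_1$-ball, and the step size and iteration count are arbitrary illustrative choices rather than part of the analysis in this paper.

```python
import numpy as np

def project_l1_ball(v, c):
    """Euclidean projection of v onto {theta : ||theta||_1 <= c}."""
    if np.abs(v).sum() <= c:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - c)[0][-1]
    tau = (css[rho] - c) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def constrained_lasso(A, y, c, n_iter=500):
    """Projected gradient descent for min (1/2n)||y - A theta||_2^2
    subject to ||theta||_1 <= c."""
    n, p = A.shape
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 / n)  # 1 / Lipschitz constant of the gradient
    theta = np.zeros(p)
    for _ in range(n_iter):
        grad = -A.T @ (y - A @ theta) / n
        theta = project_l1_ball(theta - step * grad, c)
    return theta
```

Replacing `project_l1_ball` by the projection onto the nuclear-norm ball (which amounts to projecting the vector of singular values onto the $\ell_1$-ball) gives the analogous sketch for the matrix estimator in (\[eq\_matrix\_lasso\]).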
One can also consider a penalized $M$-estimator, given by $${\hat{\theta}}_{\text{penalized}} \in \arg \min_{ \theta \in \mathbb{R}^p } {\left\{ f_n ( \theta ) + \rho_n g ( \theta ) \right\}}, \label{eq_penalized_est}$$ for some positive constant $\rho_n$. The penalized $M$-estimator can be computed by fast proximal methods, provided that the proximal mapping of $g$ is easy to compute [@Beck2009; @Nesterov2013]. This condition, however, is not always satisfied. For example, if $g$ is the nuclear norm, computing the corresponding proximal mapping requires a full singular value decomposition (SVD) in the first few iterations, and hence is not scalable with the parameter dimension. In contrast, if we consider a constrained $M$-estimator and compute it by the Frank-Wolfe algorithm, each iteration of the algorithm requires a linear minimization oracle (LMO), which can be approximated efficiently by Lanczos’ algorithm [@Jaggi2013]. The paper [@Zhang2013] also shows that when $g$ is a structured sparsity regularizer, the LMO can be much easier to compute than the proximal mapping. If we consider a constrained $M$-estimator, setting the value of the constant $c$ in (\[eq\_that\]) becomes a practical issue. For the case $c < g ( {\theta^\natural})$, the estimation error is obviously bounded below by the distance between ${\theta^\natural}$ and the constraint set $\mathcal{G}$, and hence estimation consistency is impossible. Ideally we would like to set $c = g ( {\theta^\natural})$, while in practice $g ( {\theta^\natural})$ is seldom known. The last case is when we have some estimate on $g( {\theta^\natural})$, and choose $c$ such that $c > g ( {\theta^\natural})$. Some natural questions arise: Is estimation consistency possible? How fast will the estimation error decay with the sample size $n$? Does setting $c > g ( {\theta^\natural})$ result in larger estimation error than setting $c = g ( {\theta^\natural})$? We review related works in Section \[sec\_related\_work\], which shows that answers existed only for specific cases even when $c = g ( {\theta^\natural})$.
In this paper, we provide a unified analysis for constrained $M$-estimators. Specifically,
- We propose an elementary framework for analyzing any $M$-estimator applied to any statistical model in Section \[sec\_framework\].
- We obtain universal error bounds in terms of the Gaussian width, valid *for all* canonical GLMs. We consider the matched constraint case ($c = g ( {\theta^\natural})$) in Section \[sec\_matched\], and the mismatched constraint case ($c > g ( {\theta^\natural})$) in Section \[sec\_mismatch\].
- To illustrate the universal error bounds, we specialize the universal error bound to Gaussian linear regression with arbitrary convex constraint, and regression in canonical GLMs with the $\ell_1$-constraint in Section \[sec\_appl\], and obtain explicit results.
- Our error bound for the Lasso applied to the Gaussian linear model is optimal in the minimax sense (cf. Section \[sec\_mismatch\_further\]).
Existing results for penalized $M$-estimators [@Banerjee2015; @Bickel2009; @Buhlmann2011; @Honorio2014; @Kakade2010; @Negahban2012; @Geer2013], which are for deterministic $\rho_n$’s, cannot directly recover our results, and vice versa. Indeed, by Lagrange duality, there exists some $\rho_n > 0$ such that the constrained $M$-estimator in (\[eq\_that\]) is equivalent to the penalized $M$-estimator in (\[eq\_penalized\_est\]). This correspondence, however, holds *only for a given realization of the sample $( y_1, \ldots, y_n )$*, and hence $\rho_n$ is a random variable depending on the sample. Conversely, for any penalized $M$-estimator $\hat{\theta}_{\text{penalized}}$ for some $\rho_n > 0$, there exists a constant $c = g ( \hat{\theta}_{\text{penalized}} )$ such that the corresponding constrained $M$-estimator (\[eq\_that\]) is equivalent to $\hat{\theta}_{\text{penalized}}$. Note that $c = g ( \hat{\theta}_{\text{penalized}} )$ is again a random variable and dependent on the sample. We are not aware of any existing work on characterizing the correspondence between the two formulations.
Related Works {#sec_related_work}
=============
In [@Oymak2013a; @Oymak2013], the authors derived sharp estimation error bounds for regression in the linear model by constrained least squares (LS) estimators. The analysis in [@Vershynin2014] provides a minimax estimation error bound for the same setting. There are some related works on learning a function in a function class [@Koltchinskii2013a; @Mendelson2014]. When the function class is linearly parametrized by vectors in $\mathbb{R}^p$, and the function corresponding to $\theta^\natural$ is in the function class, the $L_2$-estimation error in the function class may be translated into the $\ell_2$-estimation error with respect to $\theta^\natural$. A common limitation of [@Koltchinskii2013a; @Mendelson2014; @Oymak2013; @Oymak2013a; @Vershynin2014] is that the results are not extendable to general non-linear statistical models.
Another research direction considers constrained estimation in possibly non-linear statistical models [@Plan2013a; @Plan2015; @Plan2014a]. A constrained $M$-estimator for logistic regression was proposed and analyzed in [@Plan2013a]. In [@Plan2014a], the authors proposed and analyzed a universal projection-based estimator for regression in generalized linear models (GLMs). In [@Plan2015], the authors analyzed the performance of the constrained LS estimator in GLMs. A common limitation of [@Plan2013a; @Plan2015; @Plan2014a] is that the results are valid only for the specific proposed estimators, and they do not even apply to the constrained maximum-likelihood (ML) estimator, which is the most popular approach in practice. Moreover, the proposed estimators in [@Plan2013a; @Plan2015; @Plan2014a] can only recover the true parameter up to a scale ambiguity.
We say that the constraint is *matched* if $\theta^\natural$ lies on the boundary of $\mathcal{G}$ in (\[eq\_that\]) (or $c = g ( \theta^\natural )$), and *mismatched* if ${\theta^\natural}$ lies in the interior of $\mathcal{G}$ (or $c > g ( {\theta^\natural})$). The analyses in [@Oymak2013a; @Oymak2013] require the constraint to be matched, while in practice the exact value of $g( \theta^\natural )$ is seldom known. The constraint in [@Koltchinskii2013a] is always matched due to the special structure of quantum density operators. The error bounds in [@Plan2013a; @Vershynin2014] can be overly pessimistic, because they hold for all $\theta^\natural \in \mathcal{G}$. The results in [@Mendelson2014; @Plan2015; @Plan2014a] do not require a matched constraint and depend on $\theta^\natural$; our result is of this kind. Recall, however, that [@Mendelson2014] is limited to specific statistical models, and [@Plan2015; @Plan2014a] are limited to specific $M$-estimators.
A Geometric Framework {#sec_framework}
=====================
Basic Idea {#sec_basic_idea}
----------
To illustrate the basic idea of our framework, let us start with a simple setting, where $f_n$ is strongly convex with parameter $\mu > 0$, i.e., $$\left\langle \nabla f_n ( y ) - \nabla f_n ( x ), y - x \right\rangle \geq \mu {\left\Vert y - x \right\Vert}_2^2, \notag$$ for any $x, y \in \mathrm{dom}\, f_n$. Note that ${\hat{\theta}}$ is then uniquely defined.
Define $\iota_{\mathcal{G}}: \mathbb{R}^p \to \mathbb{R} \cup {\left\{ + \infty \right\}}$ as the indicator function of the constraint set $\mathcal{G}$; that is, $\iota_{\mathcal{G}} ( \theta ) = 0$ if $\theta \in \mathcal{G}$, and $\iota_{ \mathcal{G} } ( \theta ) = + \infty$ otherwise. By the strong convexity of $f_n$, we have $$\left\langle \nabla f_n ( {\hat{\theta}}) - \nabla f_n ( {\theta^\natural}), {\hat{\theta}}- {\theta^\natural}\right\rangle \geq \mu {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2^2. \label{eq_strong_convexity}$$ By the convexity of $\iota_{\mathcal{G}}$, or the monotonicity of the subdifferential mapping, we have $$\left\langle {\hat{z}}- {z^\natural}, {\hat{\theta}}- {\theta^\natural}\right\rangle \geq 0, \label{eq_monotonicity}$$ for any ${\hat{z}}\in \partial \iota_{\mathcal{G}} ( {\hat{\theta}})$, and any ${z^\natural}\in \partial \iota_{\mathcal{G}} ( {\theta^\natural})$. Summing up (\[eq\_strong\_convexity\]) and (\[eq\_monotonicity\]), we obtain $$\left\langle \nabla f_n ( {\hat{\theta}}) + {\hat{z}}- \nabla f_n ( {\theta^\natural}) - {z^\natural}, {\hat{\theta}}- {\theta^\natural}\right\rangle \geq \mu {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2^2, \notag$$ for any ${\hat{z}}\in \partial \iota_{\mathcal{G}} ( {\hat{\theta}})$. By the optimality condition of ${\hat{\theta}}$, there exists some $\hat{z} \in \partial \iota_{\mathcal{G}} ( \hat{\theta} )$ such that $$0 = \nabla f_n ( {\hat{\theta}}) + \hat{z}, \label{eq_optimality}$$ and hence we have $$\left\langle - \nabla f_n ( {\theta^\natural}) - {z^\natural}, {\hat{\theta}}- {\theta^\natural}\right\rangle \geq \mu {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2^2, \notag$$ for any ${z^\natural}\in \partial \iota_{\mathcal{G}} ( {\theta^\natural})$. Since $\partial \iota_{\mathcal{G}} ( {\theta^\natural})$ is always a closed convex cone, we can choose ${z^\natural}= 0$ and obtain $$\left\langle - \nabla f_n ( {\theta^\natural}) , {\hat{\theta}}- {\theta^\natural}\right\rangle \geq \mu {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2^2. \label{eq_bad_lower}$$ Applying the Cauchy-Schwarz inequality to the left-hand side, we obtain $$\begin{aligned}
{\left\Vert \nabla f_n ( {\theta^\natural}) \right\Vert}_2 {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2 \geq \mu {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2^2, \notag\end{aligned}$$ or $${\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2 \leq \frac{1}{\mu} {\left\Vert \nabla f_n ( {\theta^\natural}) \right\Vert}_2.$$ Taking expectations on both sides, we immediately obtain the following estimation error bound: $$\mathbb{E}\, {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2 \leq \frac{1}{\mu} \mathbb{E}\, {\left\Vert \nabla f_n ( {\theta^\natural}) \right\Vert}_2. \label{eq_bad_bound}$$ The gradient at the true parameter, $\nabla f_n ( {\theta^\natural})$, usually concentrates around $0$ with high probability.
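As a quick numerical illustration (ours, not part of the analysis), the following Monte Carlo sketch estimates $\mathbb{E}\, \|\nabla f_n ( {\theta^\natural}) \|_2$ for the least-squares loss $f_n(\theta) = \|y - A\theta\|_2^2/(2n)$ with a standard Gaussian design; the norm concentrates around $\sigma \sqrt{p/n}$, and the chosen dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 500, 50, 1.0
theta_true = rng.standard_normal(p)

norms = []
for _ in range(200):
    A = rng.standard_normal((n, p))
    y = A @ theta_true + sigma * rng.standard_normal(n)
    grad = A.T @ (A @ theta_true - y) / n      # = -(1/n) A^T eps for this loss
    norms.append(np.linalg.norm(grad))

print(np.mean(norms), sigma * np.sqrt(p / n))  # both close to sigma * sqrt(p/n)
```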
The simple error bound (\[eq\_bad\_bound\]) is not desirable for two reasons:
1. In the high-dimensional setting where $n < p$, $f_n$ cannot be strongly convex even for the basic LS estimator.
2. It does not depend on the choice of $g$.
We address the first issue in Section \[sec\_RSC\], and the second issue in Section \[sec\_refined\_bound\].
Restricted Strong Convexity {#sec_RSC}
---------------------------
Note that in order to facilitate the arguments in the previous sub-section, we only require (\[eq\_strong\_convexity\]) to hold for ${\hat{\theta}}$ and ${\theta^\natural}$, instead of any two vectors in $\mathbb{R}^p$. Therefore, we only need $f_n$ to satisfy some *restricted* notion of strong convexity. Similar (but not exactly the same) ideas have appeared in [@Chandrasekaran2012; @Negahban2012], and can be traced back to [@Bickel2009; @Geer2007].
The *feasible set* of $g$ at ${\theta^\natural}$, denoted by $\mathcal{F}_g ( {\theta^\natural})$, is given by $$\mathcal{F}_g ( {\theta^\natural}) := \mathcal{G} - {\theta^\natural}= {\left\{ \theta - {\theta^\natural}: \theta \in \mathcal{G} \right\}}. \notag$$ The *feasible cone* of $g$ at ${\theta^\natural}$, denoted by $\overline{\mathcal{F}_g ( {\theta^\natural})}$, is defined as the conic hull of $\mathcal{F}_g ( {\theta^\natural})$.
By the definition of ${\hat{\theta}}$, the estimation error must satisfy ${\hat{\theta}}- {\theta^\natural}\in \mathcal{F}_g ( {\theta^\natural})$.
\[def\_RSC\] The function $f_n$ satisfies the restricted strong convexity (RSC) condition with parameter $\mu > 0$ if $$\left\langle \nabla f_n ( {\theta^\natural}+ e ) - \nabla f_n ( {\theta^\natural}), e \right\rangle \geq \mu {\left\Vert e \right\Vert}_2^2, \label{eq_RSC}$$ for any $e \in \mathcal{F}_g ( {\theta^\natural})$.
If $f_n$ is twice continuously differentiable, we have a sufficient condition.
\[prop\_RSC\_hessian\] The function $f_n$ satisfies the RSC condition with parameter $\mu > 0$ if $$\left\langle e, \nabla^2 f_n ( {\theta^\natural}+ \lambda e ) e \right\rangle \geq \mu {\left\Vert e \right\Vert}_2^2, \notag$$ for all $\lambda \in [0,1]$ and all $e \in {\mathcal{F}_g ( {\theta^\natural})}$.
The uniqueness of ${\hat{\theta}}$ and the derivation of the error bound in Section \[sec\_basic\_idea\] are still valid even when $n < p$, as long as $f_n$ satisfies the RSC condition with some parameter $\mu > 0$.
Refined Error Bound {#sec_refined_bound}
-------------------
We address the dependence of the estimation error on the choice of $g$, and derive a refined error bound in this sub-section.
We note that $$\left\langle - \nabla f_n ( {\theta^\natural}), {\hat{\theta}}- {\theta^\natural}\right\rangle = \left\Vert \Pi_{ \overline{ {\hat{\theta}}- {\theta^\natural}} } \left( - \nabla f_n ( {\theta^\natural}) \right) \right\Vert_2 {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2, \notag$$ where $\Pi_{ \overline{ {\hat{\theta}}- {\theta^\natural}} } ( \cdot )$ denotes the projection onto the conic hull of ${\left\{ {\hat{\theta}}- {\theta^\natural}\right\}}$ (which is a half-line or $\{ 0 \}$). This implies, by (\[eq\_bad\_lower\]), $${\left\Vert \Pi_{ \overline{ {\hat{\theta}}- {\theta^\natural}} } \left( - \nabla f_n ( {\theta^\natural}) \right) \right\Vert}_2 \geq \mu {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2. \notag$$ The left-hand side, however, is not tractable due to its dependence on ${\hat{\theta}}$. As ${\hat{\theta}}- {\theta^\natural}\in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}$ by definition, we consider a looser bound: $${\left\Vert \Pi_{\overline{\mathcal{F}_g ( {\theta^\natural})}} ( - \nabla f_n ( {\theta^\natural}) ) \right\Vert}_2 \geq \mu {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2, \label{eq_concentration}$$ where $\Pi_{{\overline{{\mathcal{F}_g ( {\theta^\natural})}}}} ( \cdot )$ denotes projection onto the feasible cone ${\overline{{\mathcal{F}_g ( {\theta^\natural})}}}$.
Taking expectations on both sides, we obtain the following lemma.
\[lem\_fundamental\] Assume that $f_n$ satisfies the RSC condition with parameter $\mu > 0$. Then ${\hat{\theta}}$ is uniquely defined, and satisfies $$\mathbb{E}\, {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2 \leq \frac{1}{\mu} \mathbb{E}\, {\left\Vert \Pi_{\overline{\mathcal{F}_g ( {\theta^\natural})}} ( - \nabla f_n ( {\theta^\natural}) ) \right\Vert}_2. \notag$$
Since $- \nabla f_n ( {\theta^\natural})$ is a descent direction of $f_n$, if its direction is coherent with the feasible cone ${\overline{{\mathcal{F}_g ( {\theta^\natural})}}}$, we may find some point ${\hat{\theta}}'$ far away from ${\theta^\natural}$ in the feasible set ${\mathcal{F}_g ( {\theta^\natural})}$ such that $f_n ( {\hat{\theta}}' )$ is much smaller than $f_n ( {\theta^\natural})$, and hence the estimation error can be large. This provides an intuitive interpretation of the lemma.
Since projection onto a closed convex set is a non-expansive mapping, we have $${\left\Vert \Pi_{\overline{\mathcal{F}_g ( {\theta^\natural})}} ( - \nabla f_n ( {\theta^\natural}) ) \right\Vert}_2 \leq {\left\Vert \nabla f_n ( {\theta^\natural}) \right\Vert}_2, \notag$$ so the error bound is always no larger than the one in Section \[sec\_basic\_idea\].
Lemma \[lem\_fundamental\] is the theoretical foundation of the rest of this paper.
Estimation Error Bound in Terms of the Gaussian Width {#sec_matched}
=====================================================
We apply Lemma \[lem\_fundamental\] to constrained ML estimators in a GLM with the canonical link function. Examples of a canonical GLM include the Gaussian linear, logistic, gamma, and Poisson regression models.
Let ${\theta^\natural}\in \mathbb{R}^p$ be the parameter to be estimated, or the unknown vector of regression coefficients. In a canonical GLM, the negative log-likelihood of the $i$-th observation $y_i$, given ${\theta^\natural}$, is of the form (up to scaling and shifting by some constants) $$L ( y_i; {\theta^\natural}) = b ( \left\langle a_i, {\theta^\natural}\right\rangle ) - y_i \left\langle a_i, {\theta^\natural}\right\rangle, \notag$$ where $a_1, \ldots, a_n \in \mathbb{R}^p$ are given, and we assume that $b$ is some given convex function (the log-partition function in a canonical GLM). Let $( y_1, \ldots, y_n ) \in \mathbb{R}^n$ be the sample. The constrained ML estimator is given by (\[eq\_that\]) with $$f_n ( \theta ) := \frac{1}{n} \sum_{i = 1}^n L ( y_i; \theta ), \label{eq_fn_glm}$$ and $g$ being some continuous convex function. For simplicity, we consider the case where $c = g ( {\theta^\natural})$ in this section; we address the case where $c > g ( {\theta^\natural})$ in Section \[sec\_mismatch\].
We specialize Lemma \[lem\_fundamental\] to the canonical GLM and obtain the following theorem.
\[def\_gwidth\] Let $\mathcal{C} \subseteq \mathbb{R}^p$. The *Gaussian width* of $\mathcal{C}$ is given by $$\omega_t ( \mathcal{C} ) := \mathbb{E}\, \sup_{ v \in \mathcal{C} \cap t \mathcal{S}^{p - 1} } {\left\{ \left\langle h, v \right\rangle \right\}}, \notag$$ where $h := ( h_1, \ldots, h_p )$ is a vector of i.i.d. standard Gaussian random variables, and $\mathcal{S}^{p-1}$ denotes the unit $\ell_2$-sphere in $\mathbb{R}^p$.
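As an illustration (ours), the Gaussian width of a cone can be estimated by Monte Carlo whenever the supremum over $\mathcal{C} \cap \mathcal{S}^{p-1}$ is easy to evaluate; for the nonnegative orthant the maximizer is the normalized positive part of $h$, and $\omega_1$ is close to $\sqrt{p/2}$.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_mc = 100, 20000

H = rng.standard_normal((n_mc, p))
Hp = np.maximum(H, 0.0)
# sup over (nonnegative orthant) intersected with S^{p-1} of <h, v> is attained
# at h_+ / ||h_+||_2, or at a single coordinate when h has no positive entry.
sup_vals = np.where(Hp.any(axis=1), np.linalg.norm(Hp, axis=1), H.max(axis=1))
print(sup_vals.mean(), np.sqrt(p / 2.0))   # Monte Carlo omega_1 vs sqrt(p/2)
```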
\[thm\_no\_mismatch\] Consider the canonical GLM and the corresponding ML estimator described above for $c = g ( {\theta^\natural})$. Assume that the entries of $a_1, \ldots, a_n$ are either all i.i.d. standard Gaussian or all i.i.d. Rademacher random variables (random variables taking values in ${\left\{ +1, -1 \right\}}$ with equal probability), and $f_n$ satisfies the RSC condition for $\mu > 0$ with probability at least $1/2$. Then $$\mathbb{E}\, {\left\Vert \hat{\theta} - {\theta^\natural}\right\Vert}_2 \leq 2 \sqrt{ 2 \pi } \, \sigma_{\max} \frac{\omega_1 ( {\overline{{\mathcal{F}_g ( {\theta^\natural})}}})}{ \mu \sqrt{n}}, \notag$$ where $\sigma_{\max} := \max_i \sqrt{ \mathrm{var}\, y_i }$.
Note that the expectation is with respect to $A := ( a_1, \ldots, a_n )^T \in \mathbb{R}^{n \times p}$, the matrix whose rows are $a_1^T, \ldots, a_n^T$, and $\varepsilon := ( y_i - \mathbb{E}\, y_i )_{i = 1, \ldots, n}$, conditioned on the event that the RSC condition holds.
The feasible cone $\overline{ \mathcal{F}_g ( {\theta^\natural}) }$ coincides with the tangent cone of $g$ at ${\theta^\natural}$ defined in [@Chandrasekaran2012]. Therefore, to evaluate the estimation error bound, we only need to evaluate the Gaussian width of the corresponding tangent cone. We note that there are already many results for a variety of commonly used regularization functions, such as the $\ell_1$-norm, nuclear norm, total variation semi-norm, and general atomic norms [@Cai2013; @Chandrasekaran2012; @Foygel2014; @Plan2013a; @Rao2012; @Vershynin2014]. Therefore, for most of the applications, we only need to *plug in* an existing bound on the Gaussian width.
Finally, we would like to emphasize that the Gaussian width in Theorem \[thm\_no\_mismatch\] comes from bounding the random process induced by the random gradient $\nabla f_n ( \theta^\natural )$ (cf. the proof of Theorem \[thm\_no\_mismatch\]), instead of being a consequence of applying Gordon’s Lemma. That is, our result is essentially different from those in [@Chandrasekaran2012; @Oymak2013a; @Oymak2013].
Effect of a Mismatched Constraint {#sec_mismatch}
=================================
In this section, we discuss the effect of a mismatched constraint for ML regression in a canonical GLM. Recall that the constraint set $\mathcal{G}$ is called *mismatched* if $c > g ( {\theta^\natural})$ in (\[eq\_that\]).
The notion of the RSC in Definition \[def\_RSC\] is no longer meaningful when the constraint set is mismatched. Take, for example, ML regression in the Gaussian linear model, for which the corresponding $f_n$ is given in (\[eq\_LS\]). Let $A \in \mathbb{R}^{n \times p}$ be defined as in Theorem \[thm\_no\_mismatch\]. The RSC condition requires $$\left\langle \nabla f_n ( {\theta^\natural}+ e ) - \nabla f_n ( {\theta^\natural}), e \right\rangle = \frac{1}{n} {\left\Vert A e \right\Vert}_2^2 \geq \mu {\left\Vert e \right\Vert}_2^2, \notag$$ for some $\mu > 0$ and all $e \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}$, where we say $e \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}$ instead of $e \in {\mathcal{F}_g ( {\theta^\natural})}$ because $A$ is a linear operator. Since ${\overline{{\mathcal{F}_g ( {\theta^\natural})}}}$ is the whole space $\mathbb{R}^p$ when the constraint is mismatched, the RSC condition requires $A$ to have full column rank. This cannot be true in the high-dimensional setting, where $A \in \mathbb{R}^{n \times p}$ and $n < p$.
**Our Approach:** Let $t > 0$ and denote by $\mathcal{B}$ the unit $\ell_2$-ball in $\mathbb{R}^p$. We partition the feasible set ${\mathcal{F}_g ( {\theta^\natural})}$ as $${\mathcal{F}_g ( {\theta^\natural})}= ( {\mathcal{F}_g ( {\theta^\natural})}\cap t \mathcal{B} ) \cup ( {\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B} ). \notag$$ When $t$ is large enough, the conic hull of $( {\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B} )$ will not be the whole space $\mathbb{R}^p$, so it is possible to have restricted strong convexity on $( {\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B} )$ when $n < p$. If the error vector ${\hat{\theta}}- {\theta^\natural}$ lies in $( {\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B} )$, we can obtain an error bound, say, $\tilde{t}$, as in Section \[sec\_matched\]; otherwise, if the error vector lies in $t \mathcal{B}$, a naïve error bound is the radius of the ball, i.e., $t$. Finally, we can bound the estimation error from above by the maximum of $\tilde{t}$ and $t$. Note that $\tilde{t}$ is implicitly dependent on $t$.
The arguments in the previous paragraph can be made precise as in Lemma \[lem\_mismatched\], which is an analogue of Lemma \[lem\_fundamental\] in the mismatched case. Lemma \[lem\_mismatched\] holds for arbitrary constrained $M$-estimators of the form (\[eq\_that\]) and statistical models.
\[lem\_mismatched\] Suppose that for some $t > 0$, we have $$\begin{gathered}
\left\langle \nabla f_n ( {\theta^\natural}+ e ) - \nabla f_n ( {\theta^\natural}), e \right\rangle \geq \mu {\left\Vert e \right\Vert}_2^2, \label{eq_RRSC}\end{gathered}$$ for some $\mu > 0$ and all $e \in {\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}$. Then $$\mathbb{E}\, {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2 \leq t + \frac{1}{\mu}\, \mathbb{E}\, {\left\Vert \Pi_{ \overline{ {\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B} } } \left( - \nabla f_n ( {\theta^\natural}) \right) \right\Vert}_2. \notag$$
We can also prove an analogue of Theorem \[thm\_no\_mismatch\] for constrained ML regression in a canonical GLM.
\[cor\_mismatched\] Consider the canonical GLM and the corresponding ML estimator described in Section \[sec\_matched\], for $c > g ( {\theta^\natural})$. Let $A$ be defined as in Theorem \[thm\_no\_mismatch\] and let $t > 0$. Suppose that (\[eq\_RRSC\]) holds for some $\mu > 0$ with probability at least $1/2$. Then we have $$\mathbb{E}\, {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2 \leq t + 2 \sqrt{ 2 \pi } \, \sigma_{\max} \frac{\omega_1 ( \overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}} )}{ \mu \sqrt{n}}, \notag$$ where $\sigma_{\max}$ is defined as in Theorem \[thm\_no\_mismatch\].
The proofs of Lemma \[lem\_mismatched\] and Corollary \[cor\_mismatched\] are similar to the proofs of Lemma \[lem\_fundamental\] and Theorem \[thm\_no\_mismatch\], respectively.
Applications {#sec_appl}
============
Once the conditions (\[eq\_RSC\]) and (\[eq\_RRSC\]) are verified, our results Theorem \[thm\_no\_mismatch\] and Corollary \[cor\_mismatched\] immediately follow. We explicitly verify the conditions for two applications and obtain the corresponding estimation error bounds.
The first application is regression by the constrained LS estimator in a Gaussian linear model. Let $\theta^\natural \in \mathbb{R}^p$ and $a_1, \ldots, a_n$ be vectors in $\mathbb{R}^p$. The sample is given by $$y_i = \langle a_i, \theta^\natural \rangle + \sigma w_i, \quad i = 1, \ldots, n, \notag$$ for some $\sigma > 0$, where $w_1, \ldots, w_n$ are i.i.d. standard Gaussian random variables. We consider the constrained LS estimator, for which $f_n$ is given by (\[eq\_LS\]), and $\mathcal{G} := {\left\{ \theta: g ( \theta ) \leq c \right\}}$ for some $c \geq g ( \theta^\natural )$, where $g$ can be any convex continuous function.
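For concreteness, the following is a minimal sketch (ours) of this constrained LS estimator in the special case $g(\theta) = \|\theta\|_1$, computed by projected gradient descent; the sorting-based $\ell_1$-ball projection and the step-size choice are standard but illustrative.

```python
import numpy as np

def project_l1_ball(v, c):
    """Euclidean projection onto {x : ||x||_1 <= c} (sorting-based)."""
    if np.sum(np.abs(v)) <= c:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (cssv - c))[0][-1]
    tau = (cssv[rho] - c) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def constrained_ls_l1(A, y, c, n_iters=500):
    """Projected gradient for min ||y - A theta||^2/(2n) s.t. ||theta||_1 <= c."""
    n, p = A.shape
    step = n / (np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    theta = np.zeros(p)
    for _ in range(n_iters):
        grad = A.T @ (A @ theta - y) / n
        theta = project_l1_ball(theta - step * grad, c)
    return theta
```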
\[cor\_lasso\_error\] Consider the Gaussian linear model and the constrained LS estimator described above. Assume that the entries of $a_1, \ldots, a_n$ are either all i.i.d. standard Gaussian or all i.i.d. Rademacher random variables. Let $\epsilon \in ( 0, 1 )$. For any $t \geq 0$, there exist positive constants $c_1$ and $c_2$ such that if $$\sqrt{n} \geq \frac{c_1 \alpha^2 \omega_1(\overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}})}{\epsilon}, \label{eq_lasso_sample_complexity}$$ then we have $$\mathbb{E}\, {\left\Vert \hat{\theta} - \theta^\natural \right\Vert}_2 \leq t + 2 \sqrt{ 2 \pi } \sigma\, \frac{\omega_1 ( \overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}} )}{ ( 1 - \epsilon ) \sqrt{n}} , \label{eq_lasso_error}$$ with probability at least $1 - \exp(-c_2 \epsilon^2 n) > 1/2$ when $n$ is large enough.
When the constraint is matched, we can simply set $t = 0$. Recall that $t$ cannot be zero for the mismatched constraint case when $n < p$ (cf. Section \[sec\_mismatch\]). This remark also applies to Corollary \[cor\_glm\_error\] below.
For the mismatched constraint case, the error bound in Corollary \[cor\_lasso\_error\] is minimax optimal for the Lasso in the Gaussian linear model. We address this in Section \[sec\_mismatch\_further\].
Corollary \[cor\_lasso\_error\] is consistent with [@Oymak2013]. The result in [@Oymak2013] is sharper, while Corollary \[cor\_lasso\_error\] is more general as it also covers the mismatched constraint case.
The second application is $\ell_1$-constrained ML regression in a canonical GLM.
\[cor\_glm\_error\] Consider the canonical GLM and the constrained ML estimator described in Section \[sec\_matched\], for $g ( \theta ) := {\left\Vert \theta \right\Vert}_1$ and $c \geq {\left\Vert {\theta^\natural}\right\Vert}_1$. Assume that $f_n$ in (\[eq\_fn\_glm\]) is twice continuously differentiable, and the entries of $a_1, \ldots, a_n$ are i.i.d. Rademacher random variables. Let $\epsilon \in ( 0, 1 )$. For any $t \geq 0$, there exist positive constants $c_1$ and $c_2$ such that if (\[eq\_lasso\_sample\_complexity\]) is satisfied, then we have $$\mathbb{E}\, {\left\Vert \hat{\theta} - \theta^\natural \right\Vert}_2 \leq t + 2 \sqrt{ 2 \pi } \, \sigma_{\max} \frac{\omega_1 ( \overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}} )}{ ( 1 - \epsilon ) \sqrt{n}}, \label{eq_glm_error}$$ with probability at least $1 - \exp(-c_2 \epsilon^2 n) > 1/2$ when $n$ is large enough, where $\sigma_{\max} := \max_i \sqrt{ \mathrm{var}\, y_i }$ is bounded above by a constant independent of $n$.
To the best of our knowledge, there are no existing results for $\ell_1$-constrained ML regression in GLMs. Here we compare Corollary \[cor\_glm\_error\] with [@Negahban2010], which provides an error bound for $\ell_1$-penalized ML estimators in GLMs. Recall, however, that the correspondence between the constrained and penalized estimators is currently unclear. When the constraint is matched and $\theta^\natural$ is $s$-sparse, Corollary \[cor\_glm\_error\] states that when $n = \Omega ( s \log ( p / s ) )$, $$\mathbb{E}\, {\left\Vert \hat{\theta} - \theta^\natural \right\Vert}_2 = O \left( \sqrt{\frac{s}{n} \log \left( \frac{p}{s} \right) } \right) \notag$$ by Proposition 3.10 in [@Chandrasekaran2012], which essentially coincides with Corollary 5 in [@Negahban2010][^1]. We note that [@Negahban2010] only provides an error bound for the $\ell_1$-penalization case.
Sharpness of Our Error Bound {#sec_mismatch_further}
============================
It has been shown that in a Gaussian linear model with $\mathcal{G}$ being an $\ell_1$-ball, *any* estimator $\hat{\theta}_{\text{arbitrary}}$ must satisfy, with probability larger than $1/2$, $$\max_{\theta^\natural \in \mathcal{G}} {\left\Vert \hat{\theta}_{\text{arbitrary}} - \theta^\natural \right\Vert}_2 = \Omega ( n^{-1/4} ), \notag$$ under some technical conditions [@Raskutti2011]. We now show that our error bound for the Lasso in Corollary \[cor\_lasso\_error\] achieves the error decay rate $O ( n^{-1/4} )$ in the mismatched constraint case, and hence cannot be essentially improved.
By the definition of the Gaussian width, we have, for any $t > 0$, $$\omega_1 \left( \overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}} \right) = \frac{ \omega_t \left( \overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}} \right) }{t} = \frac{ \omega_t \left( {\mathcal{F}_g ( {\theta^\natural})}\right) }{t}, \notag$$ and hence the estimation error bound (\[eq\_lasso\_error\]) in Corollary \[cor\_lasso\_error\] can be written as $$\mathbb{E}\, {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2 \leq t + \frac{C}{t} \frac{ \omega_t ( {\mathcal{F}_g ( {\theta^\natural})}) }{\sqrt{n}}, \label{compare}$$ for some $C > 0$, when $n$ is large enough such that (\[eq\_lasso\_sample\_complexity\]) is satisfied.
Define the *global Gaussian width*: $$\omega ( {\mathcal{F}_g ( {\theta^\natural})}) := \mathbb{E}\, \sup_{ v \in {\mathcal{F}_g ( {\theta^\natural})}} {\left\{ \left\langle h, v \right\rangle \right\}}, \notag$$ where $h \in \mathbb{R}^p$ is a vector of i.i.d. standard Gaussian random variables. By definition, $\omega_t ( {\mathcal{F}_g ( {\theta^\natural})})$ is bounded above by $\omega ( {\mathcal{F}_g ( {\theta^\natural})})$, independent of $n$. Replacing $\omega_t ( {\mathcal{F}_g ( {\theta^\natural})})$ by $\omega ( {\mathcal{F}_g ( {\theta^\natural})})$ in (\[compare\]), we have a looser error upper bound: $$\mathbb{E}\, {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2 \leq t + \frac{C}{t} \frac{\omega( {\mathcal{F}_g ( {\theta^\natural})})}{\sqrt{n}}. \notag$$ Minimizing this bound over all $t > 0$, we obtain the $O ( n^{-1/4} )$ error decay rate. A similar discussion can be found in [@Plan2014a].
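For completeness, the optimization over $t$ behind the $O(n^{-1/4})$ rate is elementary: minimizing $t + a/t$ over $t > 0$ with $a := C \omega ( {\mathcal{F}_g ( {\theta^\natural})}) / \sqrt{n}$ gives $$\min_{t > 0} {\left\{ t + \frac{C}{t} \frac{\omega( {\mathcal{F}_g ( {\theta^\natural})})}{\sqrt{n}} \right\}} = 2 \sqrt{ \frac{C \omega( {\mathcal{F}_g ( {\theta^\natural})}) }{\sqrt{n}} }, \qquad \text{attained at } t^\star = \sqrt{ \frac{C \omega( {\mathcal{F}_g ( {\theta^\natural})}) }{\sqrt{n}} }, \notag$$ which is $O ( n^{-1/4} )$ since $\omega ( {\mathcal{F}_g ( {\theta^\natural})})$ does not depend on $n$, provided $n$ is large enough that (\[eq\_lasso\_sample\_complexity\]) holds at $t = t^\star$.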
Discussion
==========
Note that by the elementary argument in Section \[sec\_framework\], we arrive at an estimation error bound (\[eq\_concentration\]) that holds *surely*. It is possible to derive a concentration-type error guarantee based on this sure error bound, which we are working on.
Our framework is not restricted to constraint sets of the form (\[eq\_that\]); it applies to any non-empty closed convex set $\mathcal{G}$, as we only require $\iota_{\mathcal{G}} ( \cdot )$ to be proper closed convex in the proof. This observation is crucial to applying our framework to analyze constrained estimators for quantum tomography [@Flammia2012; @Gross2010] and photon-limited imaging systems [@Raginsky2010], which we are studying.
In this paper, we consider a random matrix $A$, and discuss the expected estimation error with respect to both $A$ and the sample $( y_1, \ldots, y_n )$. The extension to the case where $A$ is deterministic is technically non-trivial, and we have not obtained a satisfactory result. We address this in the remark following the proof of Theorem \[thm\_no\_mismatch\] in the appendix.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported in part by the European Commission under Grant MIRG-268398, ERC Future Proof, SNF 200021-132548, SNF 200021-146750 and SNF CRSII2-147633.
Proof of Proposition \[prop\_RSC\_hessian\]
=====================
We have $$\begin{aligned}
\left\langle \nabla f_n ( {\theta^\natural}+ e ) - \nabla f_n ( {\theta^\natural}), e \right\rangle = \int_0^1 \left\langle e, \nabla^2 f_n ( {\theta^\natural}+ \lambda e ) e \right\rangle \, d \lambda. \notag\end{aligned}$$ The right-hand side is bounded below by $\mu {\left\Vert e \right\Vert}_2^2$ by assumption, and hence (\[eq\_RSC\]) holds.
Proof of Theorem \[thm\_no\_mismatch\]
=================
The main goal of the proof is to evaluate $\mathbb{E}\, {\left\Vert \Pi_{ {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}} \left( - \nabla f_n ( {\theta^\natural}) \right) \right\Vert}_2$. Here the expectation is with respect to both $A$ and the sample $( y_i )_{i = 1, \ldots, n}$.
We start with an equivalent formulation: $$\mathbb{E}\, {\left\Vert \Pi_{{\overline{{\mathcal{F}_g ( {\theta^\natural})}}}} \left( - \nabla f_n ( {\theta^\natural}) \right) \right\Vert}_2 = \mathbb{E}\, \sup_{ v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1} } {\left\{ \left\langle - \nabla f_n ( {\theta^\natural}), v \right\rangle \right\}}, \label{eq_proj_sup}$$ where $\mathcal{S}^{p - 1}$ denotes the unit $\ell_2$-sphere in $\mathbb{R}^p$. It is well known that in a canonical GLM, we have $$\nabla f_n ( {\theta^\natural}) = - \frac{1}{n} A^T \varepsilon, \label{eq_nablaF}$$ where $\varepsilon := ( y_i - \mathbb{E}\, y_i )_{i = 1, \ldots, n}$, and hence $$\mathbb{E}\, {\left\Vert \Pi_{{\overline{{\mathcal{F}_g ( {\theta^\natural})}}}} \left( - \nabla f_n ( {\theta^\natural}) \right) \right\Vert}_2 = \frac{1}{n} \, \mathbb{E}\, \sup_{ v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle A^T \varepsilon, v \right\rangle \right\}}. \notag$$
To proceed, we need the following symmetrization inequality. It is different from the well-known symmetrization inequality by a Rademacher process, so we state it here for completeness.
\[lem\_symmetrization\] Let $\xi_1, \ldots, \xi_n$ be independent real-valued random variables, and let $\mathcal{F}$ be a class of real functions. We have $$\mathbb{E}\, \sup_{f \in \mathcal{F}} {\left\{ \sum_{i = 1}^n \left[ f ( \xi_i ) - \mathbb{E}\, f ( \xi_i ) \right] \right\}} \leq \sqrt{2 \pi} \,\mathbb{E}\, \sup_{ f \in \mathcal{F} } {\left\{ \sum_{i = 1}^n h_i f ( \xi_i ) \right\}}, \notag$$ where $h_1, \ldots, h_n$ are i.i.d. standard Gaussian random variables.
In [@Handel2014], the lemma is stated for the case when $\xi_1, \ldots, \xi_n$ are i.i.d. The case when $\xi_1, \ldots, \xi_n$ are not necessarily identically distributed can be proved in a similar way, as noted in [@Pollard1984].
By Lemma \[lem\_symmetrization\], we have $$\begin{aligned}
\mathbb{E}\, \sup_{ v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle A^T \varepsilon, v \right\rangle \right\}}
& \quad = \mathbb{E}\, \sup_{ v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle \varepsilon, A v \right\rangle \right\}} \notag \\
& \quad \leq \sqrt{2 \pi} \, \mathbb{E}\, \sup_{ v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1} } {\left\{ \left\langle h \cdot \varepsilon, A v \right\rangle \right\}}, \notag\end{aligned}$$ where $h \cdot \varepsilon := ( h_i \varepsilon_i )_{i = 1, \ldots, n}$, and $h_1, \ldots, h_n$ are i.i.d. standard Gaussian random variables. Note that $h \cdot \varepsilon$ is a random Gaussian vector with zero mean and covariance matrix $\Sigma \in \mathbb{R}^{n \times n}$ which is dependent on $A$ in general; moreover, since the entries in $\varepsilon$ are independent, $\Sigma$ is a diagonal matrix with diagonal entries given by $\Sigma_{i,i} := \mathrm{var}\, y_i$. Define $\tilde{h} := ( \tilde{h}_i )_{i = 1, \ldots, n}$, where $\tilde{h}_i := \Sigma_{i,i}^{-1/2} h_i \varepsilon_i$. Then $\tilde{h}$ is a vector of i.i.d. standard Gaussian random variables; furthermore, it is still a vector of i.i.d. standard Gaussian random variables condition on $A$, and hence it is statistically independent of $A$.
Since $h \cdot \varepsilon$ and $\sqrt{\Sigma} \tilde{h}$ have the same probability distribution, we can write $$\mathbb{E}\, \sup_{ v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1} } {\left\{ \left\langle h \cdot \varepsilon, A v \right\rangle \right\}} = \mathbb{E}\, \sup_{v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle \sqrt{\Sigma} \tilde{h}, A v \right\rangle \right\}}. \notag$$ Let $\mathcal{T} := {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}$. Conditioned on any given $A$ (and hence $\Sigma$), we consider two mean-zero Gaussian processes ${\left\{ X_t \right\}}_{t \in \mathcal{T}}$ and ${\left\{ Y_t \right\}}_{t \in \mathcal{T}}$ defined as $$X_t := \left\langle \sqrt{\Sigma} \tilde{h}, A t \right\rangle, \quad Y_t := \sigma_{\max} \left\langle \tilde{h}, A t \right\rangle, \notag$$ where $\sigma_{\max} := \max_i \sqrt{\Sigma_{i,i}} = \max_i \sqrt{\mathrm{var}\, y_i}$. We have, for any $t_1, t_2 \in \mathcal{T}$, $$\mathbb{E}\, {\left\vert X_{t_1} - X_{t_2} \right\vert}^2 = {\left\Vert \sqrt{\Sigma} A ( t_1 - t_2 ) \right\Vert}_2^2 \leq \sigma_{\max}^2 {\left\Vert A ( t_1 - t_2 ) \right\Vert}_2^2 = \mathbb{E}\, {\left\vert Y_{t_1} - Y_{t_2} \right\vert}^2. \notag$$ By Slepian’s lemma, this implies $$\mathbb{E}\, \sup_{t \in \mathcal{T}} X_t \leq \mathbb{E}\, \sup_{t \in \mathcal{T}} Y_t. \notag$$ Since the inequality holds given any realization of $A$, we have $$\begin{aligned}
\mathbb{E}\, \sup_{ v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle A^T \varepsilon, v \right\rangle \right\}}
&\leq \sqrt{2 \pi} \, \sigma_{\max} \, \mathbb{E}\, \sup_{v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle \tilde{h}, A v \right\rangle \right\}} \notag \\
&= \sqrt{2 \pi} \, \sigma_{\max} \, \mathbb{E}\, \sup_{v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle A^T \tilde{h}, v \right\rangle \right\}}. \notag\end{aligned}$$
It remains to prove $$\mathbb{E}\, \sup_{v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle A^T \tilde{h}, v \right\rangle \right\}} \leq \sqrt{n} \, \omega_1 ( {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}) := \sqrt{n} \, \mathbb{E}\, \sup_{ v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1} }{\left\{ \left\langle \tilde{h}, v \right\rangle \right\}}. \label{eq_key}$$ We consider two cases:
#### Case 1:
If $A$ has i.i.d. standard Gaussian entries, then conditioned on $\tilde{h}$, $A^T \tilde{h}$ is a vector of mean-zero Gaussian random variables with covariance matrix ${\left\Vert \tilde{h} \right\Vert}_2^2 I$, and hence has the same probability distribution as ${\left\Vert \tilde{h} \right\Vert}_2 \bar{h}$, where $\bar{h}$ is a vector of i.i.d. standard Gaussian random variables independent of $\tilde{h}$. Therefore, $$\begin{aligned}
\mathbb{E}\, \sup_{v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle A^T \tilde{h}, v \right\rangle \right\}} &= \mathbb{E} \, \sup_{v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle {\left\Vert \tilde{h} \right\Vert} \bar{h} , v \right\rangle \right\}} \notag \\
&= \left( \mathbb{E}_{\tilde{h}}\, {\left\Vert \tilde{h} \right\Vert}_2 \right) \, \mathbb{E}_{\bar{h}} \sup_{v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle \bar{h}, v \right\rangle \right\}} \notag \\
&\leq \sqrt{n}\, \omega_1 ( {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}). \notag\end{aligned}$$
#### Case 2:
If $A$ has i.i.d. Rademacher entries, then conditioned on $A$, $A^T \tilde{h}$ is a vector of mean-zero Gaussian random variables with covariance matrix $n I$, and hence has the same probability distribution as $\sqrt{n} \bar{h}$, where $\bar{h}$ is a vector of i.i.d. standard Gaussian random variables. Therefore, $$\begin{aligned}
\mathbb{E}\, \sup_{v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle A^T \tilde{h}, v \right\rangle \right\}} &= \mathbb{E} \, \sup_{v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}} {\left\{ \left\langle \sqrt{n} \bar{h} , v \right\rangle \right\}} \notag \\
&= \sqrt{n}\, \omega_1 ( {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}). \notag\end{aligned}$$
In summary, we obtain $$\mathbb{E}\, {\left\Vert \Pi_{{\overline{{\mathcal{F}_g ( {\theta^\natural})}}}} ( - \nabla f_n ( {\theta^\natural}) ) \right\Vert}_2 \leq \sqrt{ 2 \pi }\, \sigma_{\max}\, \frac{ \omega_1 ( {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}) }{ \sqrt{n} }, \notag$$ if the entries of $A$ are i.i.d. standard Gaussian or Rademacher random variables, for a canonical GLM, where the expectation is with respect to both $A$ and the sample $( y_i )_{i = 1, \ldots, n}$.
Let $\mathcal{E}$ denote the event that the RSC condition holds. Then we have $$\begin{aligned}
\mathbb{E}\, {\left\Vert \Pi_{{\overline{{\mathcal{F}_g ( {\theta^\natural})}}}} ( - \nabla f_n ( {\theta^\natural}) ) \right\Vert}_2 = & \, \mathbb{P} ( \mathcal{E} ) \, \mathbb{E}_{A, ( y_i ) \vert \mathcal{E}}\, {\left\Vert \Pi_{{\overline{{\mathcal{F}_g ( {\theta^\natural})}}}} ( - \nabla f_n ( {\theta^\natural}) ) \right\Vert}_2 \notag \\
& \quad + \mathbb{P} ( \mathcal{E}^C ) \, \mathbb{E}_{A, ( y_i ) \vert \mathcal{E}^C}\, {\left\Vert \Pi_{{\overline{{\mathcal{F}_g ( {\theta^\natural})}}}} ( - \nabla f_n ( {\theta^\natural}) ) \right\Vert}_2, \notag\end{aligned}$$ and hence $$\begin{aligned}
\mathbb{E}_{A, ( y_i ) \vert \mathcal{E}}\, {\left\Vert \Pi_{{\overline{{\mathcal{F}_g ( {\theta^\natural})}}}} ( - \nabla f_n ( {\theta^\natural}) ) \right\Vert}_2 &\leq \frac{ \mathbb{E}\, {\left\Vert \Pi_{{\overline{{\mathcal{F}_g ( {\theta^\natural})}}}} ( - \nabla f_n ( {\theta^\natural}) ) \right\Vert}_2 }{ \mathbb{P} ( \mathcal{E} ) } \notag \\
&\leq 2 \mathbb{E}\, {\left\Vert \Pi_{{\overline{{\mathcal{F}_g ( {\theta^\natural})}}}} ( - \nabla f_n ( {\theta^\natural}) ) \right\Vert}_2, \notag\end{aligned}$$ where we applied the assumption that $\mathbb{P} ( \mathcal{E} ) \geq 1/2$. By Lemma \[lem\_fundamental\], this implies $$\begin{aligned}
\mathbb{E}_{A, \varepsilon \vert \mathcal{E}}\, {\left\Vert {\hat{\theta}}- {\theta^\natural}\right\Vert}_2
&\leq \frac{1}{\mu} \mathbb{E}_{A, ( y_i ) \vert \mathcal{E}}\, {\left\Vert \Pi_{{\overline{{\mathcal{F}_g ( {\theta^\natural})}}}} ( - \nabla f_n ( {\theta^\natural}) ) \right\Vert}_2 \notag \\
&\leq 2 \sqrt{2 \pi} \, \sigma_{\max} \, \frac{ \omega_1 ( {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}) }{\mu \sqrt{n}}. \notag\end{aligned}$$ This completes the proof.
If we want to adapt this proof to the deterministic $A$ case, a technical issue arises when bounding the left-hand side of (\[eq\_key\]). As the random process ${\left\{ \tilde{X}_v := \left\langle A^T \tilde{h}, v \right\rangle \right\}}_{v \in \mathcal{V}}$, where $\mathcal{V} := {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}$, is a mean-zero Gaussian process, a standard approach is to bound $\sup_{v \in \mathcal{V}} \tilde{X}_v$ by Slepian’s lemma. Note that, for any $v_1, v_2 \in \mathcal{V}$, $$\mathbb{E}\, {\left\vert \tilde{X}_{v_1} - \tilde{X}_{v_2} \right\vert}^2 = {\left\Vert A ( v_1 - v_2 ) \right\Vert}_2^2, \notag$$ and hence an upper bound on $\mathbb{E}\, {\left\vert \tilde{X}_{v_1} - \tilde{X}_{v_2} \right\vert}^2$ would depend on the largest singular value of $A$. The largest singular value of $A$, however, cannot be bounded above by a constant independent of $n$ under the high-dimensional setting. We can weaken the requirement on $A$ to a restricted smoothness condition such as $${\left\Vert A v \right\Vert}_2 \leq \sqrt{ 1 + \epsilon } {\left\Vert v \right\Vert}_2, \quad \text{for all } v \in {\overline{{\mathcal{F}_g ( {\theta^\natural})}}}\cap \mathcal{S}^{p - 1}, \notag$$ which, by Theorem \[thm\_mendelson\], holds with high probability. This condition, however, does not imply $${\left\Vert A ( v_1 - v_2 ) \right\Vert}_2^2 \leq C {\left\Vert v_1 - v_2 \right\Vert}_2^2, \notag$$ for some dimension-independent constant $C > 0$, for all $v_1, v_2 \in \mathcal{V}$.
Proof of Lemma \[lem\_mismatched\] {#sec_proof_lem_mismatch}
===============
Let $e := {\hat{\theta}}- {\theta^\natural}$. If $e \in {\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}$, following the arguments in Section \[sec\_refined\_bound\], we obtain $${\left\Vert e \right\Vert}_2 \leq \frac{1}{\mu} {\left\Vert \Pi_{\overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}}} \left( - \nabla f_n ( {\theta^\natural}) \right) \right\Vert}_2, \notag$$ where $\overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}}$ denotes the conic hull of ${\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}$. If $e \in t \mathcal{B}$, we have the naïve bound: ${\left\Vert e \right\Vert}_2 \leq t$. Therefore, $$\begin{aligned}
{\left\Vert e \right\Vert}_2 &\leq \max {\left\{ t, \frac{1}{\mu} {\left\Vert \Pi_{\overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}}} \left( - \nabla f_n ( {\theta^\natural}) \right) \right\Vert}_2 \right\}} \notag \\
&\leq t + \frac{1}{\mu} {\left\Vert \Pi_{\overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}}} \left( - \nabla f_n ( {\theta^\natural}) \right) \right\Vert}_2. \notag\end{aligned}$$ The lemma follows by taking expectations on both sides.
Proof of Corollary \[cor\_mismatched\] {#sec_proof_cor_mismatch}
===================
Let $e := {\hat{\theta}}- {\theta^\natural}$. If $e \in {\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}$, following the proof of Theorem \[thm\_no\_mismatch\], we can obtain $$\mathbb{E}\, {\left\Vert e \right\Vert}_2 \leq 2 \sqrt{ 2 \pi } \, \sigma_{\max} \frac{\omega_1 ( \overline{ {\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B} } )}{ \mu \sqrt{n}}; \notag$$ otherwise, we can bound the expected estimation error from above by $t$. Therefore, $$\begin{aligned}
\mathbb{E}\, {\left\Vert e \right\Vert}_2 &\leq \max {\left\{ t, 2 \sqrt{ 2 \pi } \, \sigma_{\max} \frac{\omega_1 ( \overline{ {\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B} } )}{ \mu \sqrt{n}} \right\}} \notag \\
& \leq t + 2 \sqrt{ 2 \pi } \, \sigma_{\max} \frac{\omega_1 ( \overline{ {\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B} } )}{ \mu \sqrt{n}}. \notag\end{aligned}$$
Proofs of Corollary \[cor\_lasso\_error\] and Corollary \[cor\_glm\_error\]
==================================
The proofs in this section rely on the following theorem [@Mendelson2007].
\[thm\_mendelson\] Let $\mathcal{T} \subseteq \mathbb{R}^p$ be star-shaped. Let $A \in \mathbb{R}^{n \times p}$, $n < p$, be a matrix whose rows are i.i.d. isotropic sub-Gaussian random vectors with sub-Gaussian norm $\alpha \geq 1$, and let $\epsilon \in ( 0, 1 )$. Then there exist constants $c_1$ and $c_2$ such that for all $x \in \mathcal{T}$ satisfying $${\left\Vert x \right\Vert}_2 \geq \gamma_n^* \left( \frac{\epsilon}{c_1 \alpha^2}, \mathcal{T} \right) := \inf {\left\{ t > 0: t \geq \frac{c_1 \alpha^2 \omega_t ( \mathcal{T} )}{\epsilon \sqrt{n}} \right\}}, \label{eq_rkstar}$$ we have $$( 1 - \epsilon ) {\left\Vert x \right\Vert}_2^2 \leq \frac{{\left\Vert A x \right\Vert}_2^2}{n} \leq ( 1 + \epsilon ) {\left\Vert x \right\Vert}_2^2 \notag$$ with probability at least $1 - \exp \left( - c_2 \epsilon^2 n / \alpha^4 \right)$.
We note that the sub-Gaussian norm of a vector of i.i.d. standard Gaussian entries or i.i.d. Rademacher entries is bounded above by a constant [@Vershynin2012].
Proof of Corollary \[cor\_lasso\_error\] {#sec_Lasso}
-------------------
We prove the corollary by applying Corollary \[cor\_mismatched\].
Let $A$ be defined as in Theorem \[thm\_no\_mismatch\]. We verify the condition (\[eq\_RRSC\]) by Theorem \[thm\_mendelson\]. Since $\omega_t ( \overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}} ) = t \omega_1 ( \overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}} )$, the condition (\[eq\_rkstar\]) is equivalent to requiring $$\sqrt{n} \geq \frac{c_1 \alpha^2 \omega_1 ( \overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}} )}{\epsilon}. \notag$$ Once this inequality is satisfied, we can set $\mu = 1 - \epsilon$, and the condition (\[eq\_RRSC\]) holds with probability at least $1 - \exp \left( - c_2 \epsilon^2 n / \alpha^4 \right)$. Note that $\sigma_{\max} = \sigma \sqrt{ \mathbb{E}\, w_i^2 } = \sigma$. This completes the proof.
Proof of Corollary \[cor\_glm\_error\] {#proof-of-corollary}
-------------------
We prove the corollary by applying Corollary \[cor\_mismatched\].
It is known that $$\nabla^2 f_n ( \theta ) = \frac{1}{n} A^T D(\theta) A \notag$$ for the ML estimator in a canonical GLM, where $A$ is defined as in Theorem \[thm\_no\_mismatch\], and $D( \theta )$ is a diagonal matrix; furthermore, there exists a continuous strictly positive function $\phi$ such that the $( i, i )$-th entry of $D ( \theta )$ is given by $\phi ( \langle a_i, \theta \rangle )$. Since the entries of $A$ are i.i.d. Rademacher random variables, for any $\theta \in \mathcal{G}$, $${\left\vert \left\langle a_i, \theta \right\rangle \right\vert} \leq {\left\Vert a_i \right\Vert}_{\infty} {\left\Vert \theta \right\Vert}_1 \leq c. \notag$$ By the extreme value theorem, the diagonal entries of $D(\theta)$ are bounded below by a constant $\nu > 0$ for all $\theta \in \mathcal{G}$, which is independent of $n$. Similarly, $\sigma_{\max}$ is bounded above by a constant independent of $n$.
The rest of the proof is similar to the last paragraph in the previous sub-section. By Theorem \[thm\_mendelson\], if we choose $n$ such that $$\sqrt{n} \geq \frac{c_1 \alpha^2 \omega_1 ( \overline{{\mathcal{F}_g ( {\theta^\natural})}\setminus t \mathcal{B}} )}{\epsilon}, \notag$$ then the condition (\[eq\_RRSC\]) holds with probability at least $1 - \exp \left( - c_2 \epsilon^2 n / \alpha^4 \right)$ with $\mu = \nu ( 1 - \epsilon )$.
[^1]: We cite [@Negahban2010] instead of the published version [@Negahban2012], because the estimation error bound only appears in [@Negahban2010].
---
abstract: 'Consider the problem of estimating a multivariate normal mean with a known variance matrix, which is not necessarily proportional to the identity matrix. The coordinates are shrunk directly in proportion to their variances in Efron and Morris’ (*J. Amer. Statist. Assoc.* **68** (1973) 117–130) empirical Bayes approach, whereas inversely in proportion to their variances in Berger’s (*Ann. Statist.* **4** (1976) 223–226) minimax estimators. We propose a new minimax estimator, by approximately minimizing the Bayes risk with a normal prior among a class of minimax estimators where the shrinkage direction is open to specification and the shrinkage magnitude is determined to achieve minimaxity. The proposed estimator has an interesting simple form such that one group of coordinates are shrunk in the direction of Berger’s estimator and the remaining coordinates are shrunk in the direction of the Bayes rule. Moreover, the proposed estimator is scale adaptive: it can achieve close to the minimum Bayes risk simultaneously over a scale class of normal priors (including the specified prior) and achieve close to the minimax linear risk over a corresponding scale class of hyper-rectangles. For various scenarios in our numerical study, the proposed estimators with extreme priors yield more substantial risk reduction than existing minimax estimators.'
address: 'Department of Statistics, Rutgers University, 110 Frelinghuysen Road, Piscataway, NJ 08854, USA. '
author:
-
title: Improved minimax estimation of a multivariate normal mean under heteroscedasticity
---
Introduction
============
A fundamental statistical problem is shrinkage estimation of a multivariate normal mean. See, for example, the February 2012 issue of *Statistical Science* for a broad range of theory, methods, and applications. Let $X=(X_1,\ldots, X_p)^\T$ be multivariate normal with *unknown* mean vector $\theta=(\theta_1,\ldots,\theta_p)^\T$ and *known* variance matrix $\Sigma$. Consider the problem of estimating $\theta$ by an estimator $\delta=\delta(X)$ under the loss $L(\delta, \theta) = (\delta-\theta)^\T Q (\delta-\theta)$, where $Q$ is a *known* positive definite, symmetric matrix. The risk of $\delta$ is $R(\delta,\theta)=E_\theta\{ L(\delta,\theta
)\}$. The general problem can be transformed into a canonical form such that $\Sigma$ is diagonal and $Q=I$, the identity matrix (e.g., Lehmann and Casella [@LehCas98], Problem 5.5.11). For simplicity, assume except in Section \[sec3.2\] that $\Sigma$ is $D=\diag(d_1, \ldots, d_p) $ and $L(\delta,\theta)=\|\delta- \theta\|^2$, where $\|x \|^2 = x^{\T} x$ for a column vector $x$. The letter $D$ is substituted for $\Sigma$ to emphasize that it is diagonal.
For this problem, we aim to develop shrinkage estimators that are both minimax and capable of effective risk reduction over the usual estimator $\delta_0=X$ even in the heteroscedastic case (i.e., $d_1,\ldots,d_p$ are not equal). An estimator of $\theta$ is minimax if and only if, *regardless of* $\theta\in\mathbb R^p$, its risk is always no greater than $\sum_{j=1}^p d_j$, the risk of $\delta
_0$. For $p\ge3$, minimax estimators different from and hence dominating $\delta_0$ are first discovered in the homoscedastic case where $D=\sigma^2 I$ (i.e., $d_1=\cdots=d_p=\sigma^2$). James and Stein [@JamSte61] showed that $\delta_c^{\mathrm{JS}} = (1-c \sigma^2 /\|X
\|^2 ) X$ is minimax provided $0 \le c \le2(p-2)$. Stein [@Ste62] suggested the positive-part estimator $\delta_c^{\mathrm{JS}+} =
(1-c \sigma^2/\|X \|^2)_+ X$, which dominates $\delta_c^{\mathrm{JS}}$. Throughout, $a_+ =\max(0, a)$. Shrinkage estimation has since been developed into a general methodology with various approaches, including empirical Bayes (Efron and Morris [@EfrMor73]; Morris [@Mor83]) and hierarchical Bayes (Strawderman [@Str71]; Berger and Robert [@BerRob90]). While these approaches are prescriptive for constructing shrinkage estimators, minimaxity is not automatically achieved but needs to be checked separately.
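As a small illustration (ours, with an arbitrary choice of $\theta$ and $\sigma^2$), the following Monte Carlo sketch computes the positive-part James-Stein estimator $\delta_{p-2}^{\mathrm{JS}+}$ in the homoscedastic case and compares its estimated risk with the risk $p\sigma^2$ of $\delta_0 = X$.

```python
import numpy as np

def js_positive_part(x, sigma2, c=None):
    """Positive-part James-Stein estimator with D = sigma2 * I."""
    p = len(x)
    if c is None:
        c = p - 2                          # the usual choice
    shrink = max(0.0, 1.0 - c * sigma2 / np.sum(x ** 2))
    return shrink * x

rng = np.random.default_rng(0)
p, sigma2 = 10, 1.0
theta = np.full(p, 0.5)                    # arbitrary illustrative mean
losses = []
for _ in range(20000):
    x = theta + np.sqrt(sigma2) * rng.standard_normal(p)
    losses.append(np.sum((js_positive_part(x, sigma2) - theta) ** 2))
print(np.mean(losses), p * sigma2)         # risk of JS+ vs risk of delta_0 = X
```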
For the heteroscedastic case, there remain challenging issues on how much observations with different variances should be shrunk relatively to each other (e.g., Casella [@Cas85], Morris [@Mor83]). For the empirical Bayes approach (Efron and Morris [@EfrMor73]), the coordinates of $X$ are shrunk directly in proportion to their variances. But the existing estimators are, in general, non-minimax (i.e., may have a greater risk than the usual estimator $\delta_0$). On the other hand, Berger [@Ber76] proposed minimax estimators, including admissible minimax estimators, such that the coordinates of $X$ are shrunk inversely in proportion to their variances. But the risk reduction achieved over $\delta_0$ is insubstantial unless all the observations have similar variances.
To address the foregoing issues, we develop novel minimax estimators for multivariate normal means under heteroscedasticity. There are two central ideas in our approach. The first is to develop a class of minimax estimators by generalizing a geometric argument essentially in Stein [@Ste56] (see also Brandwein and Strawderman [@BraStr90]). For the homoscedastic case, the argument shows that $\delta_c^{\mathrm{JS}}$ can be derived as an approximation to the best linear estimator of the form $(1-\lambda) X$, where $\lambda$ is a scalar. In fact, the optimal choice of $\lambda$ in minimizing the risk is $p\sigma^2 /E_\theta(\|
X\|^2)$. Replacing $E_\theta(\|X\|^2)$ by $\|X\|^2$ leads to $\delta
_c^{\mathrm{JS}}$ with $c=p$. This derivation is highly informative, even though it does not yield the optimal value $c=p-2$.
Our class of minimax estimators are of the linear form $(I - \lambda
A)X$, where $A$ is a nonnegative definite, diagonal matrix indicating the direction of shrinkage and $\lambda$ is a scalar indicating the magnitude of shrinkage. The matrix $A$ is open to specification, depending on the variance matrix $D$ but *not* on the data $X$. For a fixed $A$, the scalar $\lambda$ is determined to achieve minimaxity, depending on both $D$ and $X$. Berger’s [@Ber76] minimax estimator corresponds to the special choice $A=D^{-1}$, thereby leading to the unusual pattern of shrinkage discussed above.
The second idea of our approach is to choose $A$ by approximately minimizing the Bayes risk with a normal prior in our class of minimax estimators. The Bayes risk is used to measure average risk reduction for $\theta$ in an elliptical region as in Berger [@Ber80; @Ber82]. It turns out that the solution of $A$ obtained by our approximation strategy has an interesting simple form. In fact, the coordinates of $X$ are automatically segmented into two groups, based on their Bayes “importance” (Berger [@Ber82]), which is of the same order as the coordinate variances when the specified prior is homoscedastic. The coordinates of high Bayes “importance” are shrunk inversely in proportion to their variances, whereas the remaining coordinates are shrunk in the direction of the Bayes rule. This shrinkage pattern may appear paradoxical: it may be expected that the coordinates of high Bayes “importance” are to be shrunk in the direction of the Bayes rule. But that scheme is inherently aimed at reducing the Bayes risk under the specified prior and, in general, fails to achieve minimaxity (i.e., it may lead to even a greater risk than the usual estimator $\delta_0$). In addition to simplicity and minimaxity, we further show that the proposed estimator is scale adaptive in reducing the Bayes risk: it achieves close to the minimum Bayes risk, with the difference no greater than the sum of the 4 highest Bayes “importance” of the coordinates of $X$, simultaneously over a scale class of normal priors (including the specified prior). To our knowledge, the proposed estimator seems to be the first one with such a property in the general heteroscedastic case. Previously, in the homoscedastic case, $\delta
_{p-2}^{\mathrm{JS}}$ is known to achieve the minimum Bayes risk up to the sum of 2 (equal-valued) Bayes “importance” of the coordinates over the scale class of homoscedastic normal priors (Efron and Morris [@EfrMor73]).
The rest of this article is organized as follows. Section \[sec2\] gives a review of existing estimators. Section \[sec3\] develops the new approach and studies risk properties of the proposed estimator. Section \[sec4\] presents a simulation study. Section \[sec5\] provides concluding remarks. All proofs are collected in the Appendix.
Existing estimators {#sec2}
===================
We describe a number of existing shrinkage estimators. See Lehmann and Casella [@LehCas98] for a textbook account and Strawderman [@autokey29] and Morris and Lysy [@MorLys12] for recent reviews. Throughout, $\tr(\cdot)$ denotes the trace and $\lambda_{\max
}(\cdot)$ denotes the largest eigenvalue. Then $\tr(D) = \sum
_{j=1}^p d_j$ and $\lambda_{\max}(D) = \max(d_1,\ldots,d_p)$.
For a Bayes approach, assume the prior distribution: $\theta\sim\N
(0, \gamma I)$, where $\gamma$ is the prior variance. The Bayes rule is given componentwise by $\delta^{\mathrm{Bayes}}_j=\{1- d_j/(d_j + \gamma)\} X_j $. Then the greater $d_j$ is, the more $X_j$ is shrunk whether $\gamma$ is fixed or estimated from the data. For the empirical Bayes approach of Efron and Morris [@EfrMor73], $\gamma$ is estimated by the maximum likelihood estimator $\hat\gamma$ such that $$\begin{aligned}
\label{EB-iter}
\hat\gamma= \sum_{j=1}^p
\frac{X_j^2- d_j}{(d_j+\hat\gamma)^2} \biggl/ \sum_{j=1}^p
\frac{1}{(d_j+\hat\gamma)^2} .\end{aligned}$$ Morris [@Mor83] suggested the modified estimator $$\begin{aligned}
\label{EB}
\delta^{\mathrm{EB}}_j= \biggl( 1- \frac{p-2}{p}
\frac{d_j}{d_j+\hat\gamma
_+} \biggr) X_j .\end{aligned}$$ In our implementation, the right-hand side of (\[EB-iter\]) is computed to update $\hat\gamma$ from the initial guess, $p^{-1} \{
\sum_{j=1}^p (X_j^2-d_j)\}_+$, for up to 100 iterations, until the successive absolute difference in $\hat\gamma$ is $\le 10^{-4}$; otherwise $\hat\gamma$ is set to $\infty$ so that $\delta^{\mathrm{EB}}=X$.
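The following sketch (ours) implements the iteration (\[EB-iter\]) and the modified estimator (\[EB\]) as described above; degenerate cases (e.g., $d_j + \hat\gamma \le 0$ during the iteration) are not handled carefully here.

```python
import numpy as np

def efron_morris_eb(x, d, max_iter=100, tol=1e-4):
    """Sketch of delta^{EB}: fixed-point iteration (EB-iter) for gamma-hat,
    then componentwise shrinkage as in (EB). If the iteration does not
    converge, gamma-hat is treated as infinite and X is returned."""
    p = len(x)
    gamma = max(0.0, np.mean(x ** 2 - d))      # initial guess
    converged = False
    for _ in range(max_iter):
        w = 1.0 / (d + gamma) ** 2
        new_gamma = np.sum(w * (x ** 2 - d)) / np.sum(w)
        if abs(new_gamma - gamma) <= tol:
            gamma, converged = new_gamma, True
            break
        gamma = new_gamma
    if not converged:
        return x.copy()                        # gamma-hat = infinity
    return (1.0 - (p - 2) / p * d / (d + max(gamma, 0.0))) * x
```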
Alternatively, Xie *et al.* [@XieKouBro12] proposed empirical Bayes-type estimators based on minimizing Stein’s [@Ste81] unbiased risk estimate (SURE) under heteroscedasticity. Their basic estimator is defined componentwise by $$\begin{aligned}
\delta^{\mathrm{XKB}}_j = \biggl( 1- \frac{d_j}{d_j +\tilde\gamma
} \biggr)
X_j, \label{XKB}\end{aligned}$$ where $\tilde\gamma$ is obtained by minimizing the SURE of $\delta
^{\mathrm{Bayes}}$, that is, $\operatorname{SURE}(\gamma) =X^\T D^2 \{D+\gamma I\}^{-2} X + 2 \gamma\tr\{D(D+\gamma I)^{-1}\} - \tr(D)$. In general, the two types of empirical Bayes estimators, $\delta^{\mathrm{EB}}$ and $\delta^{\mathrm{XKB}}$, are non-minimax, as shown in Section \[sec4\].
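A sketch (ours) of how $\delta^{\mathrm{XKB}}$ can be computed: here $\tilde\gamma$ is found by a simple grid search over $\gamma \ge 0$, a simplification relative to [@XieKouBro12], and the grid itself is an arbitrary choice.

```python
import numpy as np

def xkb_estimator(x, d, n_grid=2001):
    """Sketch of delta^{XKB}: minimize SURE(gamma) over a grid of gamma >= 0."""
    grid = np.linspace(0.0, 100.0 * np.max(d), n_grid)   # arbitrary grid
    def sure(g):
        return (np.sum(d ** 2 * x ** 2 / (d + g) ** 2)
                + 2.0 * g * np.sum(d / (d + g)) - np.sum(d))
    gamma = min(grid, key=sure)
    return (1.0 - d / (d + gamma)) * x
```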
For a direct extension of $\delta_c^{\mathrm{JS}}$, consider the estimator $\delta_c^{\mathrm{S}} = (1-c/\|X \|^2) X$ and, more generally, $\delta_r^{\mathrm{S}} =\{1-r( \|X\|^2 )/\|X \|^2 \} X$, where $c$ is a scalar constant and $r(\cdot)$ a scalar function. See Lehmann and Casella [@LehCas98], Theorem 5.7, although there are some typos. Both $\delta_c^{\mathrm{S}}$ and $\delta_r^{\mathrm{S}}$ are spherically symmetric. The estimator $\delta_c^{\mathrm{S}}$ is minimax provided $$\begin{aligned}
\label{S-cond}
0 \le c \le2 \bigl\{\tr(D) - 2 \lambda_{\max} (D) \bigr\},\end{aligned}$$ and $\delta_r^{\mathrm{S}}$ is minimax provided $0 \le r(\cdot)\le2 \{\tr(D) - 2 \lambda_{\max} (D) \} \mbox{ and
} r(\cdot) \mbox{ is nondecreasing}$. No such $c\neq0$ exists unless $ \tr(D) >2 \lambda_{\max} (D) $, which restricts how much $(d_1, \ldots, d_p)$ can differ from each other. For example, condition (\[S-cond\]) fails when $p=10$ and $$\begin{aligned}
\label{example}
d_1=40,\qquad d_2=20,\qquad d_3=10, \qquad d_4=
\cdots=d_{10} =1,\end{aligned}$$ because $\tr(D) = 77$ and $\lambda_{\max}(D)=40$.
Berger [@Ber76] proposed estimators of the form $\delta_c^{\mathrm{B}} =\{ I- c D^{-1}/(X^\T D^{-2} X) \} X$ and $\delta_r^{\mathrm{B}} = \{I- r( X^\T D^{-2} X )/(X^\T D^{-2} X) D^{-1} \} X$, where $c$ is a scalar constant and $r(\cdot)$ a scalar function. Then $\delta_c^{\mathrm{B}}$ is minimax provided $0 \le c \le2(p-2)$, and $\delta_r^{\mathrm{B}}$ is minimax provided $0 \le r(\cdot) \le2(p-2) \mbox{ and } r(\cdot) \mbox{ is nondecreasing}$, regardless of differences between $(d_1, \ldots, d_p)$. However, a striking feature of $\delta_c^{\mathrm{B}}$ and $\delta_r^{\mathrm{B}}$, compared with $\delta^{\mathrm{EB}}$ and $\delta^{\mathrm{XKB}}$, is that the smaller $d_j$ is, the more $X_j$ is shrunk. For example (\[example\]), under $\delta_c^{\mathrm{B}}$, the coordinates $(X_1, X_2, X_3)$ are shrunk only slightly, whereas $(X_4, \ldots, X_{10})$ are shrunk as if they were shrunk as a 7-dimensional vector under $\delta_c^{\mathrm{JS}}$. The associated risk reduction is insubstantial, because the risk of estimating $(\theta_4,\ldots,\theta_{10})$ is a small fraction of the overall risk of estimating $\theta$.
Define the positive-part version of $\delta_c^{\mathrm{B}}$ componentwise as $$\begin{aligned}
\label{B+}
\bigl(\delta_c^{\mathrm{B}+}\bigr)_j = \biggl( 1-
\frac{c d_j^{-1}}{X^\T D^{-2} X} \biggr)_+ X_j.\end{aligned}$$ The estimator $\delta_c^{\mathrm{B}+}$ dominates $\delta_c^{\mathrm{B}}$ by Baranchik [@Bar64], Section 2.5. Berger [@Ber85], Equation (5.32), stated a different positive-part estimator, $\delta_r^{\mathrm{B}}$ with $r(t)=\min(p-2, t)$, but the $j$th component may not be of the same sign as $X_j$.
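For readers who wish to experiment numerically, a minimal Python sketch of (\[B+\]) for diagonal $D$ is as follows; the function name and the use of NumPy are ours and not part of any referenced implementation.

```python
import numpy as np

def delta_B_plus(x, d, c=None):
    # Positive-part version (B+) of Berger's estimator for diagonal D = diag(d):
    # the j-th component is (1 - c d_j^{-1} / (X^T D^{-2} X))_+ X_j, with c = p - 2 by default.
    p = len(x)
    c = p - 2 if c is None else c
    t = np.sum(x**2 / d**2)            # X^T D^{-2} X
    return np.maximum(1.0 - c / (d * t), 0.0) * x
```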
Given a prior $\theta\sim\N(0,\Gamma)$, Berger [@Ber82] suggested an approximation of Berger’s [@Ber80] robust generalized Bayes estimator as $$\begin{aligned}
\label{RB}
\delta^{\mathrm{RB}} = \biggl[I- \min\biggl\{ 1, \frac{p-2}{X^\T(D+\Gamma
)^{-1} X} \biggr\}
D(D+\Gamma)^{-1} \biggr] X.\end{aligned}$$ The estimator is expected to provide significant risk reduction over $\delta_0=X$ if the prior is correct and to be robust to misspecification of the prior, but it is, in general, non-minimax. In the case of $\Gamma=0$, $\delta^{\mathrm{RB}}$ becomes $\{1 - (p-2)/(X^\T
D^{-1} X)\}_+ X$, which is of the form of the spherically symmetric estimators $\delta^{\mathrm{SS}}_r = \{1 - r(X^\T D^{-1} X)/(X^\T D^{-1} X)\}X$, where $r(\cdot)$ is a scalar function (Bock [@Boc75], Brown [@Bro75]). The estimator $\delta_r^{\mathrm{SS}}$ is minimax provided $0 \le r(\cdot) \le2 \{\tr
(D)/\lambda_{\max}(D)-2\}$ and $r(\cdot)$ is nondecreasing. Moreover, if $\tr(D) \le 2 \lambda_{\max}(D)$, then $\delta_r^{\mathrm{SS}}$ is non-minimax unless $r(\cdot) =0$.
To overcome the non-minimaxity of $\delta^{\mathrm{RB}}$, Berger [@Ber82] developed a minimax estimator $\delta^{\mathrm{MB}}$ by combining $\delta
_r^{\mathrm{B}}$, $\delta^{\mathrm{RB}}$, and a minimax estimator of Bhattacharya [@Bha66]. Suppose that $\Gamma= \diag(\gamma_1, \ldots, \gamma_p)$ and the indices are sorted such that $d_1^* \ge\cdots\ge d_p^*$, where $d_j^*
= d_j^2/(d_j+\gamma_j)$. Define $\delta^{\mathrm{MB}}$ componentwise as $$\begin{aligned}
\label{MB}
\delta^{\mathrm{MB}}_j =
X_j - \Biggl[ \frac{1}{d_j^*} \sum_{k=j}^p
\bigl(d_k^*-d_{k+1}^*\bigr) \min\biggl\{1,\frac{ (k-2)_+}{ \sum
_{\ell=1}^k X_\ell^2/(d_\ell+\gamma
_\ell)}
\biggr\} \Biggr]\frac{d_j}{d_j+\gamma_j} X_j,\end{aligned}$$ where $d_{p+1}^*=0$. In the case of $\Gamma=0$, $\delta^{\mathrm{MB}}$ reduces to the original estimator of Bhattacharya [@Bha66]. The factor $(k-2)_+$ is replaced by $2(k-2)_+$ in Berger’s [@Ber82] original definition of $\delta^{\mathrm{MB}}$, corresponding to replacing $p-2$ by $2(p-2)$ in $\delta^{\mathrm{RB}}$. In our simulations, the two versions of $\delta^{\mathrm{MB}}$ turn out to yield rather different risk curves, and so do the corresponding versions of other estimators. But there has been limited theory supporting one version over the other. Therefore, we focus on comparisons of only the corresponding versions of $\delta^{\mathrm{MB}}$ and other estimators.
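To make the componentwise formula (\[MB\]) concrete, the following Python sketch computes $\delta^{\mathrm{MB}}$ for diagonal $D$ and $\Gamma$. It sorts the coordinates internally by $d_j^*$, and the optional factor 2 corresponds to Berger's original definition with $2(k-2)_+$. The function name is ours, and this is an illustrative sketch rather than the implementation used for the simulations.

```python
import numpy as np

def delta_MB(x, d, gamma, factor=1.0):
    # Berger's (1982) combined minimax estimator (MB), as displayed above;
    # set factor=2.0 for the alternative version with 2(k-2)_+.
    p = len(x)
    d = np.asarray(d, float); gamma = np.asarray(gamma, float)
    dstar = d**2 / (d + gamma)
    order = np.argsort(-dstar)                       # sort so d_j^* is nonincreasing
    xs, ds, gs, dss = x[order], d[order], gamma[order], dstar[order]
    diffs = dss - np.append(dss[1:], 0.0)            # d_k^* - d_{k+1}^*, with d_{p+1}^* = 0
    s = np.cumsum(xs**2 / (ds + gs))                 # sum_{l<=k} X_l^2/(d_l+gamma_l)
    k = np.arange(1, p + 1)
    ratio = np.where(k > 2, factor * (k - 2) / np.maximum(s, 1e-300), 0.0)
    w = diffs * np.minimum(1.0, ratio)
    tail = np.cumsum(w[::-1])[::-1]                  # sum over k >= j of the bracketed terms
    shrink = (tail / dss) * ds / (ds + gs)
    out = np.empty(p)
    out[order] = xs - shrink * xs
    return out
```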
Proposed approach {#sec3}
=================
We develop a useful approach for shrinkage estimation under heteroscedasticity, by making explicit how different coordinates are shrunk differently. The approach not only sheds new light on existing results, but also leads to new minimax estimators.
A sketch {#sec3.1}
--------
Assume that $\Sigma=D$ (diagonal) and $Q=I$. Consider estimators of the linear form $$\begin{aligned}
\label{delta-form}
\delta= (I- \lambda A) X = X - \lambda A X,\end{aligned}$$ where $A$ is a nonnegative definite, diagonal matrix indicating the *direction* of shrinkage and $\lambda$ is a scalar indicating the *magnitude* of shrinkage. Both $A$ and $\lambda$ are to be determined. A sketch of our approach is as follows.
(i) For a fixed $A$, the optimal choice of $\lambda$ in minimizing the risk is $$\lambda_{\mathrm{opt}} = \frac{\tr(DA)}{E_\theta(X^\T A^\T A X)}.$$
(ii) For a fixed $A$ and a scalar constant $c \ge0$, consider the estimator $$\delta_{A,c} = X - \frac{c}{X^\T A^\T A X} A X .$$ By Theorem \[th1\], an upper bound on the risk function of $\delta_{A,c}$ is $$\begin{aligned}
\label{upper-bound}
R(\delta_{A,c}, \theta) \le\tr(D) + E_\theta\biggl[
\frac{c\{
c-2c^*(D,A)\}}{X^\T A^\T A X} \biggr],\end{aligned}$$ where $c^*(D,A)=\tr(DA)-2\lambda_{\max}(DA)$. Requiring the second term to be no greater than 0 shows that if $c^*(D,A) \ge0$, then $\delta_{A,c}$ is minimax provided $$\begin{aligned}
\label{Tan-cond}
0 \le c \le2 c^*(D,A).\end{aligned}$$ If $c^*(D,A) \ge0$, then the upper bound (\[upper-bound\]) has a minimum at $c=c^*(D,A)$.
(iii) By taking $c=c^*(D,A)$ in $\delta_{A,c}$, consider the estimator $$\delta_A = X - \frac{c^*(D,A)}{X^\T A^\T A X} A X$$ subject to $c^*(D,A)\ge0$, so that $\delta_A$ is minimax by step (ii). A positive-part estimator dominating $\delta_A$ is defined componentwise by $$\begin{aligned}
\label{A+}
\bigl(\delta_A^+\bigr)_j = \biggl\{1 -
\frac{c^*(D,A) a_j}{X^\T A^\T A X} \biggr\}_+ X_j,\end{aligned}$$ where $(a_1,\ldots,a_p)$ are the diagonal elements of $A$. The upper bound (\[upper-bound\]) on the risk functions of $\delta
_A$ and $\delta^+_{A}$, subject to $c^*(D,A)\ge0$, gives $$\begin{aligned}
\label{point-bound}
R(\delta_A, \theta) \le\tr(D) - E_\theta\biggl\{
\frac
{{c^*}^2(D,A)}{X^\T A^\T A X} \biggr\}.\end{aligned}$$ We propose to choose $A$ based on some optimality criterion, such as minimizing the Bayes risk with a normal prior centered at 0 (Berger [@Ber82]).
Further discussions of steps (i)–(iii) are provided in Sections \[sec3.2\]–\[sec3.3\].
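As an illustration of steps (ii)–(iii) in the canonical case, a minimal Python sketch of $c^*(D,A)$, the estimator $\delta_A$, and its positive-part version (\[A+\]) for diagonal $A$ is given below. The function names are ours; the code is a sketch under the canonical assumptions $\Sigma=D$ and $Q=I$, not a prescribed implementation.

```python
import numpy as np

def c_star(d, a):
    # c*(D, A) = tr(DA) - 2 lambda_max(DA) for diagonal D = diag(d), A = diag(a)
    da = d * a
    return np.sum(da) - 2.0 * np.max(da)

def delta_A(x, d, a, positive_part=True):
    # Estimator delta_A of step (iii) and its positive-part version (A+);
    # minimaxity requires c*(D, A) >= 0.
    c = c_star(d, a)
    t = np.sum(a**2 * x**2)            # X^T A^T A X
    factor = 1.0 - c * a / t           # componentwise shrinkage factors
    if positive_part:
        factor = np.maximum(factor, 0.0)
    return factor * x
```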
Constructing estimators: Steps (i)–(ii) {#sec3.2}
---------------------------------------
We first develop steps (i)–(ii) for the general problem where neither $\Sigma$ nor $Q$ may be diagonal. The results can be as concisely stated as those just presented for the canonical problem where $\Sigma
$ is diagonal and $Q=I$. Such a unification adds to the attractiveness of the proposed approach.
Consider estimators of the form (\[delta-form\]), where $A$ is not necessarily diagonal, but $$\begin{aligned}
\label{A-cond}
A \Sigma\mbox{ is nonnegative definite.}\end{aligned}$$ Condition (\[A-cond\]) is invariant under a linear transformation. To see this, let $B$ be a nonsingular matrix and $\Sigma^*=B \Sigma
B^\T$ and $A^*=B A B^{-1}$. For the transformed problem of estimating $\theta^*=B \theta$ based on $X^*=B X$ with variance matrix $\Sigma
^*$, the transformed estimator from (\[delta-form\]) is $\delta^* =
X^* - \lambda A^* X^*$. The application of condition (\[A-cond\]) to $\delta^*$ says that $A^* \Sigma^* = B A \Sigma B^\T$ is nonnegative definite and therefore is equivalent to (\[A-cond\]) itself. For the canonical problem where $\Sigma=D$ (diagonal), condition (\[A-cond\]) only requires that $AD$ is nonnegative definite, allowing $A$ to be non-diagonal. On the other hand, it seems intuitively appropriate to restrict $A$ to be diagonal. Then condition (\[A-cond\]) is equivalent to saying that $A$ is nonnegative definite (and diagonal), which is the condition introduced on $A$ in the sketch in Section \[sec3.1\].
The risk of an estimator of the form (\[delta-form\]) is $$\begin{aligned}
&& E_\theta\bigl\{ (X-\theta-\lambda A X)^\T Q (X-\theta-
\lambda A X) \bigr\}
\\
&&\quad = E_\theta\bigl\{ (X-\theta)^\T Q (X-\theta) \bigr\} +
\lambda^2 E_\theta\bigl(X^\T A^\T Q A
X \bigr) - 2 \lambda E_\theta\bigl\{ (X-\theta)^\T Q A X
\bigr\} .\end{aligned}$$ For a fixed $A$, the optimal $\lambda$ in minimizing the risk is $$\begin{aligned}
\lambda_{\mathrm{opt}} = \frac{E_\theta\{ (X-\theta)^\T Q A X \}
}{E_\theta(X^\T A^\T Q A X )} = \frac{\tr(\Sigma Q A)}{E_\theta
(X^\T A^\T Q A X )}.\end{aligned}$$ Replacing $E_\theta(X^\T A^\T Q A X )$ by $X^\T A^\T Q A X$ and $\tr
(\Sigma QA)$ by a scalar constant $c\ge0$ leads to the estimator $$\delta_{A,c} = X - \frac{c}{X^\T A^\T Q A X} A X .$$ For a generalization, replacing $c$ by $r(X^\T A^\T Q A X)$ with a scalar function $r(\cdot)\ge0$ leads to the estimator $$\delta_{A,r} = X - \frac{r(X^\T A^\T Q A X)}{X^\T A^\T Q A X} A X .$$ We provide in Theorem \[th1\] an upper bound on the risk function of $\delta_{A,r}$.
\[th1\] Assume that $r(\cdot)$ is almost differentiable (Stein [@Ste81]). If (\[A-cond\]) holds and $r(\cdot) \ge0$ is nondecreasing, then for each $\theta$, $$\begin{aligned}
\label{r-upper-bound}
R(\delta_{A,r}, \theta) \le\tr(\Sigma Q) + E_\theta\biggl[
\frac
{r\{r-2c^*(\Sigma,Q,A)\}}{X^\T A^\T Q A X} \biggr],\end{aligned}$$ where $r=r(X^\T A^\T Q A X)$ and $c^*(\Sigma,Q,A)=\tr(A \Sigma
Q)-\lambda_{\max}(A \Sigma Q + \Sigma A^\T Q )$. Taking $r(\cdot
)\equiv c \ge0$ in (\[r-upper-bound\]) gives an upper bound on $R(\delta_{A,c}, \theta)$.
Requiring the second term in the risk upper bound (\[r-upper-bound\]) to be no greater than 0 leads to a sufficient condition for $\delta_{A,r}$ to be minimax.
\[cor1\] If (\[A-cond\]) holds and $c^*(\Sigma
,Q,A)\ge0$, then $\delta_{A,r}$ is minimax provided $$\begin{aligned}
\label{Tan-cond2}
0 \le r(\cdot) \le2c^*(\Sigma,Q,A)\quad \mbox{and} \quad r(\cdot) \mbox{ is
nondecreasing}.\end{aligned}$$ Particularly, $\delta_{A,c}$ is minimax provided $0 \le c \le
2c^*(\Sigma,Q,A)$.
For the canonical problem, inequality (\[r-upper-bound\]) and condition (\[Tan-cond2\]) for $\delta_{A,c}$ give respectively (\[upper-bound\]) and (\[Tan-cond\]). These results generalize the corresponding ones for $\delta_c^{\mathrm{S}}$ and $\delta_c^{\mathrm{B}}$ in Section \[sec2\], by the specific choices $A=I$ or $D^{-1}$. The generalization also holds if $c$ is replaced by a scalar function $r(\cdot)>0$. In fact, condition (\[Tan-cond2\]) reduces to Baranchik’s [@Bar70] condition in the homoscedastic case.
If $c^*(\Sigma,Q,A)\ge0$, then the risk upper bound (\[r-upper-bound\]) has a minimum at $r(\cdot) \equiv c = c^*(\Sigma,
Q,A)$. As a result, consider the estimator $$\begin{aligned}
\delta_A = X - \frac{c^*(\Sigma,Q,A)}{X^\T A^\T Q A X} A X,\end{aligned}$$ which is minimax provided $c^*(\Sigma,Q,A)\ge0$. If $A=Q^{-1}\Sigma
^{-1}$ (Berger [@Ber76]), then $c^*(\Sigma,Q,A)=p-2$ and, by the proof of Theorem \[th1\] in the Appendix, the risk upper bound (\[r-upper-bound\]) becomes exact for $\delta_{A,c}$. Therefore, for $A=Q^{-1}\Sigma
^{-1}$, the estimator $\delta_A=\delta_{A,p-2}$ is uniformly best in the class $\delta_{A,c}$, in agreement with the result that $\delta
_{p-2}^{\mathrm{JS}}$ is uniformly best among $\delta_c^{\mathrm{JS}}$ in the homoscedastic case.
The estimator $\delta_A$ has desirable properties of invariance. First, $\delta_A$ is easily shown to be invariant under a multiplicative transformation $A \mapsto aA$ for a scalar $a >0$. Second, $\delta_A$ is invariant under a linear transformation of the inference problem. Similarly as discussed below (\[A-cond\]), let $B$ be a nonsingular matrix and $\Sigma^*=B \Sigma B^\T$, $Q^*={B^\T
}^{-1} Q B^{-1}$, and $A^*=B A B^{-1}$. For the transformed problem of estimating $\theta^*=B \theta$ based on $X^*=B X$, the transformed estimator from $\delta_A$ is $ X^* - \{
c^*(\Sigma,Q,A)/({X^*}^\T{A^*}^\T Q^* A^* X^*)\} A^* X^*$, whereas the application of $\delta_A$ is $ X^* - \{c^*(\Sigma^*,Q^*,A^*)/({X^*}^\T{A^*}^\T Q^* A^* X^*)\} A^*
X^*$. The two estimators are identical because $A^*\Sigma^*Q^* = B A
\Sigma Q B^{-1}$, $\Sigma^* {A^*}^\T Q^* =B \Sigma A^\T Q B^{-1}$, and hence $c^*(\Sigma^*,Q^*,A^*)=c^*(\Sigma,Q,A)$.
Finally, we present a positive-part estimator dominating $\delta_A$ in the case where both $A\Sigma$ and $QA$ are symmetric, that is, $$\begin{aligned}
\label{A-cond2}
A\Sigma=\Sigma A^\T\quad \mbox{and}\quad QA =A^\T Q .\end{aligned}$$ Similarly to (\[A-cond\]), it is easy to see that this condition is invariant under a linear transformation. Condition (\[A-cond2\]) is trivially true if $\Sigma$, $Q$, and $A$ are diagonal. In the , we show that (\[A-cond2\]) holds if and only if there exists a nonsingular matrix $B$ such that $Q=B^\T B$, $\Sigma
=B^{-1}D {B^\T}^{-1}$, and $A=B^{-1} A^* B$, where $D$ and $A^*$ are diagonal and the diagonal elements of $D$ or $A^*$ are, respectively, the eigenvalues of $\Sigma Q$ or $A$. In the foregoing notation, $\Sigma^*=D$ and $Q^*=I$. For the problem of estimating $\theta^*=B
\theta$ based on $X^*=B X$, consider the estimator $\eta= X^*- \{
c^*(D,A^*)/({X^*}^\T{A^*}^\T A^* X^*) \} A^* X^*$ and the positive-part estimator $\eta^+$ with the $j$th component, $$\begin{aligned}
\biggl\{ 1- \frac{c^*(D,A^*)}{{X^*}^\T{A^*}^\T A^* X^*} a_j^* \biggr
\}_+ X^*_j
,\end{aligned}$$ where $(a_1^*,\ldots,a_p^*)$ are the diagonal elements of $A^*$. The estimator $\eta^+$ dominates $\eta$ by a simple extension of Baranchik [@Bar64], Section 2.5. By a transformation back to the original problem, $\eta$ yields $\delta_A$, whereas $\eta^+ $ yields $$\begin{aligned}
\delta_A^+ = B^{-1} \diag\biggl[ \biggl\{1-
\frac{c^*(\Sigma
,Q,A)}{X^\T A^\T Q AX}a^*_1 \biggr\}_+, \ldots, \biggl\{1-
\frac
{c^*(\Sigma,Q,A)}{X^\T A^\T Q AX} a^*_p \biggr\}_+ \biggr] B X.\end{aligned}$$ Then $\delta_A^+$ dominates $\delta_A$. Therefore, (\[r-upper-bound\]) also gives an upper bound on the risk of $\delta
_A^+$, with $r(\cdot) \equiv c^*(\Sigma, Q,A)$, even though $\delta
_A^+$ is not of the form $\delta_{A,r}$.
In practice, a matrix $A$ satisfying (\[A-cond2\]) can be specified in two steps. First, find a nonsingular matrix $B$ such that $Q=B^\T B$ and $\Sigma=B^{-1}D {B^\T}^{-1}$, where $D$ is diagonal. Second, pick a diagonal matrix $A^*$ and define $A= B^{-1} A^* B$. The first step is always feasible by taking $B=OC$, where $C$ is a nonsingular matrix such that $Q= C^\T C$ and $O$ is an orthogonal matrix such that $O (C\Sigma C^\T) O^\T$ is diagonal. Given $(\Sigma,Q)$ and $D$, it can be shown that $A$ and $\delta_A^+$ depend on the choice of $A^*$, but not on that of $B$, provided that $a^*_j=a^*_k$ if $d_j=d_k$ for any $j, k=1,\ldots,p$. In the canonical case where $\Sigma=D$ and $Q=I$, this condition amounts to saying that any coordinates of $X$ with the same variances should be shrunk in the same way.
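A minimal computational sketch of these two steps, using a Cholesky factor for $Q$ and an eigendecomposition of $C\Sigma C^\T$, is as follows (function names are ours).

```python
import numpy as np

def canonical_transform(Sigma, Q):
    # Step 1: find a nonsingular B with Q = B^T B and Sigma = B^{-1} D B^{-T},
    # D diagonal, via B = O C with Q = C^T C and O orthogonal diagonalizing C Sigma C^T.
    C = np.linalg.cholesky(Q).T                  # upper-triangular C with C^T C = Q
    evals, evecs = np.linalg.eigh(C @ Sigma @ C.T)
    O = evecs.T                                  # O (C Sigma C^T) O^T = diag(evals)
    B = O @ C
    return B, evals                              # diag(evals) = D, the eigenvalues of Sigma Q

def make_A(B, a_star):
    # Step 2: pick a diagonal A* (vector a_star) and set A = B^{-1} A* B,
    # which then satisfies condition (A-cond2).
    return np.linalg.solve(B, np.diag(a_star) @ B)
```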
Constructing estimators: Step (iii) {#sec3.3}
-----------------------------------
Different choices of $A$ lead to different estimators $\delta_A$ and $\delta_A^+$. We study how to choose $A$, depending on $(\Sigma, Q)$ but *not* on $X$, to approximately optimize risk reduction while preserving minimaxity for $\delta_A$. The estimator $\delta
_A^+$ provides even greater risk reduction than $\delta_A$. We focus on the canonical problem where $\Sigma=D$ (diagonal) and $Q=I$. Further, we restrict $A$ to be diagonal and nonnegative definite.
As discussed in Berger [@Ber80], any estimator can have significantly smaller risk than $\delta_0=X$ only for $\theta$ in a specific region. Berger [@Ber80; @Ber82] considered the situation where significant risk reduction is desired for an elliptical region $$\begin{aligned}
\label{region}
\bigl\{\theta\dvt (\theta-\mu)^\T\Gamma^{-1} (\theta-\mu) \le
p\bigr\},\end{aligned}$$ with $\mu$ and $\Gamma$ the prior mean and prior variance matrix. See $\delta^{\mathrm{RB}}$ and $\delta^{\mathrm{MB}}$ reviewed in Section \[sec2\]. To measure average risk reduction for $\theta$ in region (\[region\]), Berger [@Ber82] used the Bayes risk with the normal prior $\theta\sim\N(\mu
,\Gamma)$. For simplicity, assume throughout that $\mu=0$ and $\Gamma
=\diag(\gamma_1,\ldots,\gamma_p)$ is diagonal.
We adopt Berger’s [@Ber82] ideas of specifying an elliptical region and using the Bayes risk to quantify average risk reduction in this region. We aim to find $A$, subject to $c^*(D,A)\ge0$, minimizing the Bayes risk of $\delta_A$ with the prior $\pi_\Gamma$, $\theta\sim\N
(0,\Gamma)$, $$\begin{aligned}
R(\delta_A, \pi_\Gamma) = E^{\pi_\Gamma} E_\theta
\bigl( \|\delta_A-\theta\|^2 \bigr),\end{aligned}$$ where $E^{\pi_\Gamma}$ denotes the expectation with respect to the prior $\pi_\Gamma$. Given $A$, the risk $R(\delta_A, \pi_\Gamma)$ can be numerically evaluated. A simple Monte Carlo method is to repeatedly draw $\theta\sim\N(0,\Gamma)$ and $X |\theta\sim\N
(\theta, D)$ and then take the average of $\| \delta_A(X) - \theta\|
^2$. But it seems difficult to literally implement the foregoing optimization. Alternatively, we develop a simple method for choosing $A$ by two approximations.
First, if $c^*(D,A)\ge0$, then taking the expectation of both sides of (\[point-bound\]) with respect to the prior $\pi_\Gamma$ gives an upper bound on the Bayes risk of $\delta_A$: $$\begin{aligned}
\label{bayes-bound}
R(\delta_A, \pi_\Gamma) \le\tr(D) - E^m \biggl
\{ \frac
{{c^*}^2(D,A)}{X^\T A^\T A X} \biggr\},\end{aligned}$$ where $E^m$ denotes the expectation with respect to the marginal distribution of $X$ in the Bayes model, that is, $X \sim\N(0,D+\Gamma
)$. An approximation strategy for choosing $A$ is to minimize the upper bound (\[bayes-bound\]) on the Bayes risk or to maximize the second term. The expectation $E^m\{(X^\T A^\T A X)^{-1}\}$ can be evaluated as a 1-dimensional integral by results on inverse moments of quadratic forms in normal variables (e.g., Jones [@Jon86]). But the required optimization problem remains difficult.
Second, approximations can be made to the distribution of the quadratic form $X^\T A^\T A X$. Suppose that $X^\T A^\T A X$ is approximated with the same mean by $\{\sum_{j=1}^p (d_j+\gamma_j)a_j^2 \} \chi^2_p/p$, where $\chi^2_p$ is a chi-squared variable with $p$ degrees of freedom. Then $E^m\{(X^\T A^\T A X)^{-1}\}$ is approximated by $\{
p/(p-2)\} \{\sum_{j=1}^p (d_j+\gamma_j)a_j^2\}^{-1}$. We show in the Appendix that this approximation gives a valid lower bound: $$\begin{aligned}
\label{bayes-bound2}
E^m \biggl( \frac{1}{X^\T A^\T A X} \biggr) \ge\frac{p}{p-2} \cdot
\frac{1}{ \sum_{j=1}^p (d_j+\gamma_j)a_j^2} .\end{aligned}$$ A direct application of Jensen’s inequality shows that $E^m\{(X^\T A^\T
A X)^{-1}\} \ge\{\sum_{j=1}^p (d_j+\gamma_j)a_j^2\}^{-1}$. But the lower bound (\[bayes-bound2\]) is strictly tighter and becomes exact when $(d_1+\gamma_1)a_1^2 = \cdots=(d_p+\gamma
_p)a_p^2$. No simple bounds such as (\[bayes-bound2\]) seem to hold if more complicated approximations (e.g., Satterthwaite [@SAT46]) are used.
Combining (\[bayes-bound\]) and (\[bayes-bound2\]) shows that if $c^*(D,A)\ge0$, then $$\begin{aligned}
\label{bayes-bound3}
R(\delta_A, \pi_\Gamma) \le\tr(D) - \frac{p}{p-2}
\cdot\frac
{{c^*}^2(D,A)}{ \sum_{j=1}^p (d_j+\gamma_j)a_j^2} .\end{aligned}$$ Notice that $\delta_A$ is invariant under a multiplicative transformation $A \mapsto a A$ for a scalar $a >0$, and so is the upper bound (\[bayes-bound3\]). Our strategy for choosing $A$ is to minimize the upper bound (\[bayes-bound3\]) subject to $c^*(D,A)\ge0$ or, equivalently, to solve the constrained optimization problem: $$\begin{aligned}
\label{opt}
&&\max_A \quad c^*(D,A)=\sum_{j=1}^p d_j a_j - 2 \max_{j=1,\ldots,p} d_j a_j
\\
&&\quad \mbox{subject to} \quad \sum_{j=1}^p (d_j+ \gamma_j) a_j^2 = \mbox{fixed}.
\nonumber\end{aligned}$$ The condition $c^*(D,A)\ge0$ is dropped, because for $p\ge3$, the achieved maximum is at least $c^*(D,
aD^{-1})=a(p-2)>0$ for some scalar $a>0$. In spite of the approximations used in our approach, Theorem \[th2\] shows that not only does problem (\[opt\]) admit a non-iterative solution, but the solution also has a very interesting interpretation. For convenience, assume hereafter that the indices are sorted such that $d_1^2/(d_1+\gamma_1) \ge d_2^2/(d_2+\gamma_2) \ge\cdots\ge
d_p^2/(d_p+\gamma_p)$.
\[th2\] Assume that $p\ge3$, $D=\diag(d_1, \ldots,
d_p)$ with $d_j >0$ and $\Gamma= \diag(\gamma_1, \ldots, \gamma
_p)$ with $\gamma_j\ge0$ ($j=1,\ldots,p$). For problem (\[opt\]), assume that $A=\diag(a_1, \ldots, a_p)$ with $a_j \ge0$ ($j=1,\ldots
,p$) and $\sum_{j=1}^p (d_j+\gamma_j) a_j^2= \sum_{j=1}^p
d_j^2/(d_j+\gamma_j)$, satisfied by $a_j=d_j/(d_j+\gamma_j)$. Then the following results hold.
(i) There exists a *unique* solution, $A^\dag= \diag
(a_1^\dag, \ldots, a_p^\dag)$, to problem (\[opt\]).
(ii) Let $\nu$ be the largest index such that $d_\nu a^\dag
_\nu= \max(d_1a_1^\dag,\ldots,d_pa_p^\dag)$. Then $\nu\ge3$, $d_1a_1^\dag= \cdots= d_\nu a_\nu^\dag> d_ja_j^\dag$ for $j \ge
\nu+1$, and $$\begin{aligned}
a_j^\dag& =& K_\nu\Biggl(\sum
_{k=1}^\nu\frac{d_k+\gamma
_k}{d_k^2} \Biggr)^{-1}
\frac{\nu-2}{d_j}\qquad (j=1,\ldots, \nu),
\\
a_j^\dag& =& K_\nu\frac{d_j}{d_j+\gamma_j}\qquad (j=\nu+1,
\ldots, p),\end{aligned}$$ where $K_\nu= \{\sum_{j=1}^p d_j^2/(d_j+\gamma_j)\}^{1/2} M_\nu
^{-1/2}$ and $$\begin{aligned}
M_\nu= \frac{(\nu-2)^2}{ \sum_{j=1}^\nu\frac{d_j+\gamma_j}{d_j^2}} + \sum_{j=\nu+1}^p
\frac{d_j^2}{d_j+\gamma_j} .\end{aligned}$$ The achieved maximum value, $c^*(D,A^\dag)$, is $K_\nu M_\nu\ (>0)$.
(iii) The resulting estimator $\delta_{A^\dag}$ is minimax.
We emphasize that, although $A$ can be considered a tuning parameter, the solution $A^\dag$ is *data independent*, so that $\delta
_{A^\dag}$ is automatically minimax. If a data-dependent choice of $A$ were used, minimaxity would not necessarily hold. This result is achieved both because each estimator $\delta_A$ with $c^*(D,A)\ge0$ is minimax and because a global criterion (such as the Bayes risk) is used, instead of a pointwise criterion (such as the frequentist risk at the unknown $\theta$), to select $A$. By these considerations, our approach differs from the usual exercise of selecting a tuning parameter in a data-dependent manner for a class of candidate estimators.
There is a remarkable property of monotonicity for the sequence $(M_3,
M_4, \ldots, M_p)$, which underlies the uniqueness of $\nu$ and $A^\dag$.
\[cor2\] The sequence $(M_3, M_4, \ldots, M_p)$ is nonincreasing: for $3 \le k \le p-1$, $M_k \ge M_{k+1}$, where the equality holds if and only if $$\frac{k-2}{ \sum_{j=1}^k \frac{d_j+\gamma_j}{d_j^2} } = \frac
{d_{k+1}^2}{d_{k+1}+\gamma_{k+1}} .$$ The condition $d_\nu a_\nu^\dag> d_{\nu+1} a_{\nu+1}^\dag$ is equivalent to saying that the left-hand side is greater than the right-hand side in the above expression for $k=\nu$. Therefore, $\nu$ is the smallest index $3 \le k \le p-1$ with this property, and $M_\nu>
M_{\nu+1}$.
The estimator $\delta_{A^\dag}$ is invariant under scale transformations of $A^\dag$. Therefore, the constant $K_\nu$ can be dropped from the expression of $A^\dag$ in Theorem \[th2\].
\[cor3\] The solution $A^\dag=\diag(a_1^\dag, \ldots,
a_p^\dag)$ can be rescaled such that $$\begin{aligned}
\label{sol1}a_j^\dag& =& \Biggl(\sum_{k=1}^\nu
\frac{d_k+\gamma_k}{d_k^2} \Biggr)^{-1} \frac{\nu-2}{d_j}\qquad
(j=1,\ldots,\nu),
\\
\label{sol2}a_j^\dag& =& \frac{d_j}{d_j + \gamma_j} \qquad (j=\nu+1, \ldots, p).\end{aligned}$$ Then $c^*(D, A^\dag) = \sum_{j=1}^p {a_j^{\dag}}^2 (d_j+\gamma_j) =
M_\nu$. Moreover, it holds that $$\begin{aligned}
\label{sol-ineq}
a_j^\dag\le\frac{d_j}{d_j + \gamma_j} \qquad (j=1,\ldots,\nu).\end{aligned}$$ The estimator $\delta_{A^\dag}$ can be expressed as $$\begin{aligned}
\label{delta-A}
\delta_{A^\dag} = X -
\frac{ \sum_{j=1}^p {a_j^{\dag}}^2 (d_j+\gamma_j) }{ \sum_{j=1}^p
{a_j^{\dag}}^2 X_j^2} A^\dag X.\end{aligned}$$
The foregoing results lead to a simple algorithm for solving problem (\[opt\]):
(i) Sort the indices such that $d_1^2/(d_1+\gamma_1) \ge
\cdots\ge d_p^2/(d_p+\gamma_p)$.
(ii) Take $\nu$ to be the smallest index $k$ (corresponding to the largest $M_k$) such that $3 \le k \le p-1$ and $$\frac{k-2}{ \sum_{j=1}^k \frac{d_j+\gamma_j}{d_j^2} } > \frac
{d_{k+1}^2}{d_{k+1}+\gamma_{k+1}} ,$$ or take $\nu=p$ if there exists no such $k$.
(iii) Compute $(a_1^\dag, \ldots, a_p^\dag)$ by (\[sol1\])–(\[sol2\]).
This algorithm is guaranteed to find the (unique) solution to problem (\[opt\]) by a fixed number of numerical operations. No iteration or convergence diagnosis is required. Therefore, the algorithm is exact and non-iterative, in contrast with usual iterative algorithms for nonlinear, constrained optimization.
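The algorithm can be coded in a few lines. The following Python sketch returns the rescaled solution of Corollary \[cor3\] and the resulting estimator in the form (\[delta-A\]); the function names are ours, and the positive-part option follows (\[A+\]). For instance, calling the first function with $\gamma_j\equiv 0$ recovers, up to scaling, the direction $A^\dag_0$ discussed below.

```python
import numpy as np

def A_dagger(d, gamma):
    # Non-iterative solution of problem (opt), rescaled as in Corollary [cor3];
    # returns the diagonal of A-dagger in the original coordinate order.
    d = np.asarray(d, float); gamma = np.asarray(gamma, float)
    p = len(d)
    dstar = d**2 / (d + gamma)
    order = np.argsort(-dstar)                   # step (i): sort so d_j^* is nonincreasing
    ds, gs = d[order], gamma[order]
    w = np.cumsum((ds + gs) / ds**2)             # partial sums of (d_j+gamma_j)/d_j^2
    nu = p                                       # step (ii): smallest k with the strict inequality
    for k in range(3, p):                        # k = 3, ..., p-1
        if (k - 2) / w[k - 1] > ds[k]**2 / (ds[k] + gs[k]):
            nu = k
            break
    a = ds / (ds + gs)                           # step (iii), (sol2): Bayes-rule direction for j > nu
    a[:nu] = (nu - 2) / (w[nu - 1] * ds[:nu])    # step (iii), (sol1): inverse-variance direction for j <= nu
    out = np.empty(p)
    out[order] = a
    return out

def delta_A_dagger(x, d, gamma, positive_part=False):
    # The estimator delta_{A-dagger} in the form (delta-A); the positive-part
    # variant applies (.)_+ componentwise as in (A+).
    a = A_dagger(d, gamma)
    c = np.sum(a**2 * (d + gamma))               # equals c*(D, A-dagger) = M_nu (Corollary [cor3])
    t = np.sum(a**2 * x**2)
    factor = 1.0 - c * a / t
    if positive_part:
        factor = np.maximum(factor, 0.0)
    return factor * x
```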
The estimator $\delta_{A^\dag}$ has an interesting interpretation. By (\[sol1\])–(\[sol2\]), there is a dichotomous segmentation in the shrinkage direction of the coordinates of $X$ based on $d_j^*=d_j^2/(d_j+\gamma_j)$. This quantity $d_j^*$ is said to reflect the Bayes “importance” of $\theta_j$, that is, the amount of reduction in Bayes risk obtainable in estimating $\theta_j$ in Berger [@Ber82]. The coordinates with high $d_j^*$ are shrunk inversely in proportion to their variances $d_j$ as in Berger’s [@Ber76] estimator $\delta_c^{\mathrm{B}}$, whereas the coordinates with low $d_j^*$ are shrunk in the direction of the Bayes rule. Therefore, $\delta_{A^\dag}$ mimics the Bayes rule to reduce the Bayes risk, except that $\delta_{A^\dag}$ mimics $\delta_c^{\mathrm{B}}$ for some coordinates of highest Bayes “importance” in order to achieve minimaxity. In fact, by inequality (\[sol-ineq\]), the relative shrinkage, $a_j^\dag/\{d_j/(d_j+\gamma_j)\}$, of each $X_j$ ($j=1,\ldots,\nu$) in $\delta_{A^\dag}$ versus the Bayes rule is always no greater than that of $X_k$ ($k=\nu+1,\ldots,p$).
The expression (\[delta-A\]) suggests that there is a close relationship, beyond the shrinkage direction, between $\delta_{A^\dag
}$ and the Bayes rule under the Bayes model, $X \sim\N(0, D+\Gamma
)$. In this case, $E^m( \sum_{j=1}^p {a_j^{\dag}}^2 X_j^2 ) = \sum
_{j=1}^p {a_j^{\dag}}^2 (d_j +\gamma_j)$, and hence $\delta_{A^\dag
}$ behaves similarly to $X - A^\dag X$. Therefore, *on average* under the Bayes model, the coordinates of $X$ are shrunk in $\delta
_{A^\dag}$ the same as in the Bayes rule, except that some coordinates of highest Bayes “importance” are shrunk no greater than in the Bayes rule. While this discussion seems heuristic, we provide in Section \[sec3.4\] a rigorous analysis of the Bayes risk of $\delta_{A^\dag}$, compared with that of the Bayes rule.
We now examine $\delta_{A^\dag}$ for two types of priors: $\gamma
_1=\cdots=\gamma_p = \gamma$ and $\gamma_j = \gamma d_j $ ($j=1,
\ldots, p$), referred to as the homoscedastic and heteroscedastic priors. For both types, $(d_1^*,\ldots,d_p^*)$ are of the same order as the variances $(d_1,\ldots,d_p)$. Recall that $\delta_A$ is invariant under a multiplicative transformation of $A$. For both the homoscedastic prior with $\gamma=0$ and the heteroscedastic prior *regardless* of $\gamma\ge0$, the solution $A^\dag=\diag(a_1^\dag, \ldots, a_p^\dag)$ can be rescaled such that $$\begin{aligned}
a_j^\dag&=& \Biggl(\sum_{k=1}^\nu
d_k^{-1} \Biggr)^{-1} \frac{\nu
-2}{d_j} \qquad (j=1,
\ldots, \nu),
\\
a_j^\dag&=& 1 \qquad (j=\nu+1,\ldots, p).\end{aligned}$$ Denote by $A^\dag_0$ this rescaled matrix $A^\dag$, corresponding to $\Gamma=0$. Then coordinates with high variances are shrunk inversely in proportion to their variances, whereas coordinates with low variances are shrunk symmetrically. For $\Gamma=0$, the proposed method has a purely frequentist interpretation: it seeks to minimize the upper bound (\[bayes-bound3\]) on the pointwise risk of $\delta_A$ at $\theta=0$.
For the homoscedastic prior with $\gamma\to\infty$, the proposed method is then to minimize the upper bound (\[bayes-bound3\]) on the Bayes risk of $\delta_A$ with an extremely flat, homoscedastic prior. As $\gamma\to\infty$, the solution $A^\dag$ can be rescaled such that $$\begin{aligned}
a_j^\dag&=& \Biggl(\sum_{k=1}^\nu
d_k^{-2} \Biggr)^{-1} \frac{\nu
-2}{d_j} \qquad (j=1,
\ldots, \nu),
\\
a_j^\dag&=& d_j\qquad (j=\nu+1,\ldots, p).\end{aligned}$$ Denote by $A^\dag_\infty$ this rescaled matrix $A^\dag$. Then coordinates with low (or high) variances are shrunk directly (or inversely) in proportion to their variances. The direction $A^\dag_\infty$ can also be obtained by using a fixed prior in the form $\gamma_j = \gamma d_1 - d_j$ ($j=1,\ldots,p$) for arbitrary $\gamma\ge1$, where $d_1 = \max_{j=1,\ldots,p} d_j$.
Finally, in the homoscedastic case ($d_1=\cdots=d_p=\sigma^2$), if the prior is also homoscedastic ($\gamma_1=\cdots=\gamma_p=\gamma
$), then $\nu=p$, $a_1^\dag= \cdots= a_p^\dag$, and $\delta
_{A^\dag}$ reduces to the James–Stein estimator $\delta_{p-2}^{\mathrm{JS}}$, *regardless* of $\sigma^2$ and $\gamma$.
Evaluating estimators {#sec3.4}
---------------------
The estimator $\delta_{A^\dag}$ is constructed by minimizing the upper bound (\[bayes-bound3\]) on the Bayes risk subject to minimaxity. In addition to simplicity, interpretability, and minimaxity demonstrated for $\delta_{A^\dag}$, it remains important to further study risk properties of $\delta_{A^\dag}$ and show that $\delta
_{A^\dag}$ can provide effective risk reduction over $\delta_0=X$. Write $\delta_{A^\dag} = \delta_{A^\dag(\Gamma)}$ whenever needed to make explicit the dependency of $A^\dag$ on $\Gamma$.
First, we study how close the Bayes risk of $\delta_{A^\dag(\Gamma
)}$ can be to that of the Bayes rule, which is the smallest possible among *all* estimators including non-minimax ones, under the prior $\pi_\Gamma$, $\theta\sim\N(0, \Gamma)$. The Bayes rule $\delta_\Gamma^{\mathrm{Bayes}}$ is given componentwise by $(\delta
^{\mathrm{Bayes}}_\Gamma)_j = \{1- d_j/(d_j+\gamma_j)\} X_j$, with the Bayes risk $$\begin{aligned}
R\bigl(\delta^{\mathrm{Bayes}}_\Gamma, \pi_\Gamma\bigr) = \tr(D) -
\sum_{j=1}^p d^*_j ,\end{aligned}$$ where $d^*_j =d_j^2/(d_j+\gamma_j)$, indicating the Bayes “importance” of $\theta_j$ (Berger [@Ber82]). The upper bound (\[bayes-bound3\]) on the Bayes risk of $\delta
_{A^\dag(\Gamma)}$ gives $$\begin{aligned}
\label{bayes-bound4}
R\{\delta_{A^\dag(\Gamma)}, \pi_\Gamma\} \le\tr(D) - \frac
{p}{p-2}
M_\nu= \tr(D) - \frac{p}{p-2} \Biggl\{ \frac{(\nu
-2)^2}{ \sum_{j=1}^\nu{d_j^*}^{-1}} + \sum
_{j=\nu+1}^p d^*_j \Biggr\} ,\end{aligned}$$ because $c^*(D,A^\dag) = \sum_{j=1}^p (d_j+\gamma_j) {a_j^\dag}^2 =
M_\nu$ and hence ${c^*}^2(D, A^\dag) / \{\sum
_{j=1}^p (d_j+\gamma_j) {a_j^\dag}^2\} = M_\nu$ by Corollary \[cor3\]. It appears that the difference between $R\{\delta_{A^\dag(\Gamma)}, \pi_\Gamma\}$ and $R(\delta^{\mathrm{Bayes}}_\Gamma,\allowbreak \pi_\Gamma)$ tends to be large if $\nu$ is large. But $d_1^* \ge\cdots\ge d_\nu^*$ cannot differ too much from each other because by Corollary \[cor2\], $$k-2 \le\sum_{j=1}^k \frac{d_{k+1}^*}{d_j^*}
\le k \qquad (k=3,\ldots,\nu-1).$$ Then the difference between $R\{\delta_{A^\dag(\Gamma)}, \pi_\Gamma
\}$ and $R(\delta^{\mathrm{Bayes}}_\Gamma, \pi_\Gamma)$ should be limited even if $\nu$ is large. A careful analysis using these ideas leads to the following result.
\[th3\] Suppose that the prior is $\theta\sim\N
(0,\Gamma)$. If $\nu=3$, then $$\begin{aligned}
\label{tight-bound1}R\{\delta_{A^\dag(\Gamma)}, \pi_\Gamma\} & \le&\tr(D) - \sum
_{j=3}^p d_j^* + \Biggl(
d_3^* - \frac{2}{p-2} \sum_{j=4}^p
d_j^* -\frac{ p}{p-2} \frac{d_3^*}{3} \Biggr)
\\
\label{loose-bound1}& \le&\tr(D) - \sum_{j=3}^p
d_j^* + \frac{2}{3} d_3^*.\end{aligned}$$ If $\nu\ge4$, then $$\begin{aligned}
\label
{tight-bound2}R\{\delta_{A^\dag(\Gamma)}, \pi_\Gamma\} & \le&\tr(D) - \sum
_{j=3}^p d_j^* + \Biggl(
d_3^* + d_4^* - \frac{2}{p-2} \sum
_{j=5}^p d_j^* -\frac{ 4 p}{p-2}
\frac{d_\nu^*}{\nu} \Biggr)
\\
\label{loose-bound2}& \le&\tr(D) - \sum_{j=3}^p
d_j^* + \bigl( d_3^* + d_4^* \bigr).\end{aligned}$$ Throughout, an empty summation is 0.
There are interesting implications of Theorem 3. By (\[loose-bound1\]) and (\[loose-bound2\]), $$\begin{aligned}
\label
{bayes-close}
R\{\delta_{A^\dag(\Gamma)}, \pi_\Gamma\} \le R\bigl(\delta
^{\mathrm{Bayes}}_\Gamma, \pi_\Gamma\bigr) + \bigl(d_1^*+d_2^*+d_3^*+d_4^*
\bigr).\end{aligned}$$ Then $\delta_{A^\dag(\Gamma)}$ achieves almost the minimum Bayes risk if $d_1^* / \{\tr(D)-\sum_{j=1}^p d_j^*\} \approx0$. In terms of Bayes risk reduction, the bound (\[bayes-close\]) shows that $$\begin{aligned}
\tr(D) - R\{\delta_{A^\dag(\Gamma)}, \pi_\Gamma\} \ge\biggl( 1-
\frac{d_1^*+d_2^*+d_3^*+d_4^*}{ \sum_{j=1}^p d_j^*} \biggr) \bigl\{
\tr(D) - R\bigl(\delta^{\mathrm{Bayes}}_\Gamma,
\pi_\Gamma\bigr) \bigr\}.\end{aligned}$$ Therefore, $\delta_{A^\dag(\Gamma)}$ achieves Bayes risk reduction within a negligible factor of that achieved by the Bayes rule if $d_1^*
/ \sum_{j=1}^p d_j^* \approx0$.
In the homoscedastic case where both $D=\sigma^2 I$ and $\Gamma
=\gamma I$, $\delta_{A^\dag}$ reduces to $\delta_{p-2}^{\mathrm{JS}}$, regardless of $\gamma\ge0$ (Section \[sec3.3\]). Then the bounds (\[tight-bound1\]) and (\[tight-bound2\]) become exact and give Efron and Morris’s [@EfrMor73] result that $R(\delta_{p-2}^{\mathrm{JS}} , \pi_{\gamma I})
= \tr(D) - (p-2) \{\sigma^4/(\sigma^2+\gamma)\}$ or equivalently $\tr(D) -
R(\delta_{p-2}^{\mathrm{JS}}, \pi_{\gamma I}) = (1-2/p) \{\tr(D) - R(\delta
^{\mathrm{Bayes}}_{\gamma I}, \pi_{\gamma I})\}$.
It is interesting to compare the Bayes risk bound of $\delta_{A^\dag
(\Gamma)}$ with that of the following simpler version of Berger’s [@Ber82] estimator $\delta^{\mathrm{MB}}$: $$\begin{aligned}
\delta^{\mathrm{MB}2}_j =
X_j - \Biggl\{ \frac{1}{d_j^*} \sum_{k=j}^p
\bigl(d_k^*-d_{k+1}^*\bigr) \frac{(k-2)_+}{\sum_{\ell=1}^k X_\ell
^2/(d_\ell
+\gamma_\ell)} \Biggr\}
\frac{d_j}{d_j+\gamma_j} X_j.\end{aligned}$$ By Berger [@Ber82], $\delta^{\mathrm{MB}2}$ is minimax and $$\begin{aligned}
\label{MB-bayes1}R\bigl(\delta^{\mathrm{MB}2}, \pi_\Gamma\bigr) &
=& \tr(D) - \sum_{j=3}^p d_j^*
- 2 \sum_{j=3}^p \frac{d_j^*}{j}
\Biggl( 1 - \frac{d_j^*}{j-1} \sum_{k=1}^{j-1}
\frac{1}{d_k^*} \Biggr)
\\
\label{MB-bayes2}& \le&\tr(D) - \sum_{j=3}^p
d_j^*.\end{aligned}$$ There seems to be no definite comparison between the bounds (\[tight-bound1\]) and (\[tight-bound2\]) on $R\{\delta_{A^\dag
(\Gamma)}, \pi_\Gamma\}$ and the exact expression (\[MB-bayes1\]) for $R(\delta^{\mathrm{MB2}}, \pi_\Gamma)$, although the simple bounds (\[loose-bound1\]) and (\[loose-bound2\]) are slightly higher, by at most $d_3^*+d_4^*$, than the bound (\[MB-bayes2\]). Of course, each risk upper bound gives a conservative estimate of the actual performance, and comparison of two upper bounds should be interpreted with caution. In fact, the positive-part estimator $\delta_{A^\dag}^+$ yields lower risks than those of the non-simplified estimator $\delta^{\mathrm{MB}}$ in our simulation study (Section \[sec4\]).
The simplicity of $\delta_{A^\dag}$ and $\delta_{A^\dag}^+$ makes it easy to further study them in other ways than using the Bayes (or average) risk. No similar result to the following Theorem \[th4\] has been established for $\delta^{\mathrm{MB}}$ or $\delta^{\mathrm{MB2}}$. Corresponding to the prior $\N(0,\Gamma)$, consider the worst-case (or maximum) risk $$R(\delta, \mathcal H_\Gamma) = \sup_{\theta\in\mathcal H_\Gamma}
R(\delta,
\theta)$$ over the hyper-rectangle $\mathcal H_\Gamma= \{\theta\dvt \theta_j^2
\le\gamma_j, j=1,\ldots, p\}$ (e.g., Donoho *et al.* [@DonLiuMac90]). Applying Jensen’s inequality to (\[point-bound\]) shows that if $c^*(D,A)>0$, then $$\begin{aligned}
R(\delta_A, \theta) \le\tr(D) - \frac{{c^*}^2(D,A)}{ \sum
_{j=1}^p (d_j+\theta_j^2) a_j^2 } ,\end{aligned}$$ which immediately leads to $$\begin{aligned}
\label{minimax-bound}
R(\delta_A, \mathcal H_\Gamma) \le\tr(D) -
\frac{{c^*}^2(D,A)}{
\sum_{j=1}^p (d_j+\gamma_j) a_j^2 } .\end{aligned}$$ By the discussion after (\[bayes-bound2\]), a direct application of Jensen’s inequality to (\[bayes-bound\]) shows that the Bayes risk $R(\delta_A, \pi_\Gamma)$ is also no greater than the right-hand side of (\[minimax-bound\]), whereas inequality (\[bayes-bound2\]) leads to a strictly tighter bound (\[bayes-bound3\]). Nevertheless, the upper bound (\[minimax-bound\]) on the worst-case risk of $\delta_{A^\dag(\Gamma)}$ gives $$\begin{aligned}
R \{ \delta_{A^\dag(\Gamma)}, \mathcal H_\Gamma\} \le\tr(D) -
M_\nu= \tr(D) - \Biggl\{ \frac{(\nu-2)^2}{ \sum_{j=1}^\nu
{d_j^*}^{-1}} + \sum
_{j=\nu+1}^p d^*_j \Biggr\},\end{aligned}$$ just as (\[bayes-bound3\]) leads to (\[bayes-bound4\]) on the Bayes risk of $\delta_{A^\dag(\Gamma)}$. Therefore, the following result holds by the same proof as that of Theorem \[th3\].
\[th4\] Suppose that $\mathcal H_\Gamma= \{\theta\dvt
\theta_j^2 \le\gamma_j, j=1,\ldots, p\}$. If $\nu=3$, then $$\begin{aligned}
R\{\delta_{A^\dag(\Gamma)}, \mathcal H_\Gamma\} \le\tr(D) -
\sum
_{j=3}^p d_j^* +
\frac{2}{3} d_3^*.\end{aligned}$$ If $\nu\ge4$, then $$\begin{aligned}
R\{\delta_{A^\dag(\Gamma)}, \mathcal H_\Gamma\} & \le&\tr(D) -
\sum
_{j=3}^p d_j^* + \biggl(
d_3^* + d_4^* - 4\frac{d_\nu^*}{\nu} \biggr)
\\
& \le&\tr(D) - \sum_{j=3}^p
d_j^* + \bigl( d_3^* + d_4^* \bigr).\end{aligned}$$
Theorem \[th4\] has implications similar to those of Theorem \[th3\]. By Donoho *et al.* [@DonLiuMac90], the minimax linear risk over $\mathcal
H_\Gamma$, $R^L(\mathcal H_\Gamma) = \inf_{\delta\,\mathrm{linear}}
R(\delta, \mathcal H_\Gamma)$, coincides with the minimum Bayes risk $R(\delta^{\mathrm{Bayes}}_\Gamma, \pi_\Gamma)$, and is no greater than $1.25$ times the minimax risk over $\mathcal H_\Gamma$, $R^N(\mathcal
H_\Gamma)= \inf_{\delta} R(\delta, \mathcal H_\Gamma)$. These results were originally obtained in the homoscedastic case ($d_1=\cdots
=d_p$), but they remain valid in the heteroscedastic case by the independence of the observations $X_j$ and the separate constraints on $\theta_j$. Therefore, a similar result to (\[bayes-close\]) holds: $$\begin{aligned}
R\{ \delta_{A^\dag(\Gamma)}, \mathcal H_\Gamma\} &\le& R^L(
\mathcal H_\Gamma) + \bigl(d_1^*+d_2^*+d_3^*+d_4^*
\bigr)
\\
&\le&1.25 R^N(\mathcal H_\Gamma) + \bigl(d_1^*+d_2^*+d_3^*+d_4^*
\bigr).\end{aligned}$$ If $d_1^* / \{\tr(D)-\sum_{j=1}^p d_j^*\} \approx0$, then $\delta
_{A^\dag}$ achieves almost the minimax linear risk (or the minimax risk up to a factor of $1.25$) over the hyper-rectangle $\mathcal
H_\Gamma$, in addition to being globally minimax with $\theta$ unrestricted.
The foregoing results might be considered non-adaptive in that $\delta
_{A^\dag(\Gamma)}$ is evaluated with respect to the prior $\N
(0,\Gamma)$ or the parameter set $\mathcal H_\Gamma$ with the same $\Gamma$ used to construct $\delta_{A^\dag(\Gamma)}$. But, by the invariance of $\delta_A$ under scale transformations of $A$, $\delta
_{A^\dag(\Gamma)}$ is identical to the estimator, $\delta_{A^\dag
(\Gamma_\alpha)}$, that would be obtained if $\Gamma$ is replaced by $\Gamma_\alpha= \alpha(D+\Gamma)-D$ for any scalar $\alpha$ such that the diagonal matrix $\Gamma_\alpha$ is nonnegative definite. By Theorems 3–4, this observation leads directly to the following adaptive result. In contrast, no adaptive result seems possible for $\delta^{\mathrm{MB}}$.
\[cor4\] Let $\Gamma_\alpha= \alpha(D+\Gamma)-D$ and $\alpha_0 = \max
_{j=1,\ldots,p} \{ d_j/(d_j+\gamma_j)\}\ ( \le1)$. Then for each $\alpha\ge\alpha_0$, $$\begin{aligned}
\max\bigl[ R\{ \delta_{A^\dag(\Gamma)}, \pi_{\Gamma_\alpha} \},
R\{
\delta_{A^\dag(\Gamma)}, \mathcal H_{\Gamma_\alpha}\} \bigr] &\le&
R\bigl(
\delta^{\mathrm{Bayes}}_{\Gamma_\alpha}, \pi_{\Gamma_\alpha}\bigr) +
\alpha^{-1} \bigl(d_1^*+d_2^*+d_3^*+d_4^*
\bigr)
\\
&=& R^L(\mathcal H_{\Gamma_\alpha})+ \alpha^{-1} \bigl(
d_1^*+d_2^*+d_3^*+d_4^*\bigr),\end{aligned}$$ where $R( \delta^{\mathrm{Bayes}}_{\Gamma_\alpha}, \pi_{\Gamma_\alpha}) =
\tr(D) - \alpha^{-1} \sum_{j=1}^p d_j^*$.
For fixed $\Gamma$, $\delta_{A^\dag(\Gamma)}$ can achieve close to the minimum Bayes risk or the minimax linear risk with respect to each prior in the class $\{\N(0,\Gamma_\alpha)\dvt \alpha\ge\alpha_0\}$ or each parameter set in the class $\{\mathcal H_{\Gamma_\alpha}\dvt
\alpha\ge\alpha_0\}$ under mild conditions. For illustration, consider the case of a heteroscedastic prior with $\Gamma\propto D$. Then $\{\Gamma_\alpha\dvt \alpha\ge\alpha_0\}$ can be reparameterized as $\{\gamma D\dvt \gamma\ge0\}$. By Corollary \[cor4\], for each $\gamma\ge0$, $$\begin{aligned}
\max\bigl\{R( \delta_{A^\dag_0}, \pi_{\gamma D} ), R(
\delta_{A^\dag
_0}, \mathcal H_{\gamma D})\bigr\} \le R\bigl(
\delta^{\mathrm{Bayes}}_{\gamma D}, \pi_{\gamma D}\bigr) +
\frac{d_1+d_2+d_3+d_4}{1+\gamma} ,\end{aligned}$$ where $R( \delta^{\mathrm{Bayes}}_{\gamma D}, \pi_{\gamma D}) = \{\gamma
/(1+\gamma)\} \tr(D)$ and $d_1 \ge d_2 \ge\cdots\ge d_p$. Therefore, if $d_1/\tr(D) \approx0$, then $\delta_{A^\dag_0}$ achieves the minimum Bayes risk, within a negligible factor, under the prior $\N(0, \gamma D)$ for each $\gamma>0$. This can be seen as an extension of the result that in the homoscedastic case, $\delta
_{p-2}^{\mathrm{JS}}$ asymptotically achieves the minimum Bayes risk under the prior $\N(0, \gamma I)$ for each $\gamma>0$ as $p\to\infty$.
Finally, we compare the estimator $\delta_{A^\dag}$ with a block shrinkage estimator, suggested by the differentiation in the shrinkage of low- and high-variance coordinates by $\delta_{A^\dag}$. Consider the estimator $$\begin{aligned}
\delta^{\mathrm{block}} = \left\{ \begin{array}{c}
\delta_{\tau-2}^{\mathrm{B}}(X_1,\ldots,X_\tau)
\\
\delta_{p-\tau-2}^{\mathrm{B}} (X_{\tau+1},\ldots,X_p)
\end{array}
\right\},\end{aligned}$$ where $\tau$ is a cutoff index, and $\delta_c^{\mathrm{B}}(Y)=Y$ if $Y$ is of dimension 1 or 2. The index $\tau$ can be selected such that the coordinate variances are relatively homogeneous in each block. Alternatively, a specific strategy for selecting $\tau$ is to minimize an upper bound on the Bayes risk of $\delta^{\mathrm{block}}$, similarly as in the development of $\delta_{A^\dag}$. Applying (\[bayes-bound3\]) with $A=D^{-1}$ to $\delta_{p-2}^{\mathrm{B}}$ in the two blocks shows that $R(\delta^{\mathrm{block}}, \pi_\Gamma) \le\tr(D) - L_\tau$, where $$\begin{aligned}
L_k = \frac{k-2}{(1/k) \sum_{j=1}^k \frac{d_j+\gamma_j}{d_j^2}} + \frac{p-k-2}{\{1/(p-k)\} \sum_{j=k+1}^p \frac{d_j+\gamma_j}{d_j^2}} .\end{aligned}$$ The first (or second) term in $L_k$ is set to 0 if $k\le2$ (or $k\ge
p-2$). Then $\tau$ can be defined as the smallest index such that $L_\tau=\max(L_1,L_2,\ldots,L_p)$. But the upper bound (\[bayes-bound4\]) on $R(\delta_{A^\dag}, \pi_\Gamma)$ is likely to be smaller than the corresponding bound on $R(\delta^{\mathrm{block}}, \pi_\Gamma
)$, because $\{k/(k-2)\} M_k \ge L_k$ for each $k\ge3$ by the Cauchy–Schwarz inequality $ \{\sum_{j=k+1}^p d_j^2/(d_j+\gamma_j)\} \{
\sum_{j=k+1}^p (d_j+\gamma_j)/d_j^2 \}\ge(p-k)^2$. Therefore, $\delta_{A^\dag}$ tends to yield greater risk reduction than $\delta^{\mathrm{block}}$. This analysis also indicates that $\delta_{A^\dag}$ can be advantageous over $\delta^{\mathrm{block}}$ extended to multiple blocks.
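As a sketch of this selection rule, the following Python function computes $L_1,\ldots,L_p$ and returns $\tau$; the function name is ours, and the coordinates are assumed to be sorted as elsewhere in this section.

```python
import numpy as np

def block_cutoff(d, gamma):
    # Cutoff tau for delta^block: the smallest index attaining max_k L_k,
    # with L_k as displayed above (first/second term set to 0 for k<=2 / k>=p-2).
    p = len(d)
    r = (d + gamma) / d**2
    L = np.zeros(p)
    for k in range(1, p + 1):
        first = (k - 2) / np.mean(r[:k]) if k > 2 else 0.0
        second = (p - k - 2) / np.mean(r[k:]) if k < p - 2 else 0.0
        L[k - 1] = first + second
    return int(np.argmax(L)) + 1          # 1-based tau; argmax returns the smallest maximizer
```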
The rationale of forming blocks in $\delta_{A^\dag}$ and $\delta
^{\mathrm{block}}$ differs from that in existing block shrinkage estimators (e.g., Brown and Zhao [@BroZha09]). As discussed in Cai [@Cai12], block shrinkage has been developed mainly in the homoscedastic case as a technique for pooling information: the coordinate means are likely to be similar to each other within a block. Nevertheless, it is possible to both deal with heterogeneity among coordinate variances and exploit homogeneity among coordinate means within individual blocks in our approach using a block-homoscedastic prior (i.e., the prior variances are equal within each block). This topic can be pursued in future work.
Simulation study {#sec4}
================
Setup {#sec4.1}
-----
We conduct a simulation study to compare the following 8 estimators:
(i) Non-minimax estimators: $\delta^{\mathrm{EB}}$ by (\[EB\]), $\delta^{\mathrm{XKB}}$ by (\[XKB\]), $\delta^{\mathrm{RB}}$ by (\[RB\]) with $\Gamma=0$;
(ii) Minimax estimators: $\delta_{p-2}^{\mathrm{B}+}$ by (\[B+\]), $\delta^{\mathrm{MB}}$ by (\[MB\]) with $\Gamma=0$ or $\gamma I$ for some large $\gamma$, $\delta^+_A$ by (\[A+\]) with $A=A^\dag_0$ and $A^\dag_\infty$.
Recall that $A_0^\dag$ corresponds to $\Gamma=0$ or $\Gamma\propto
D$ and $A_\infty^\dag$ corresponds to $\Gamma= \gamma I$ with $\gamma\to\infty$. In contrast, letting the diagonal elements of $\Gamma$ tend to $\infty$ in any direction in $\delta^{\mathrm{RB}}$ and $\delta^{\mathrm{MB}}$ leads to $\delta_0=X$. Setting $\Gamma$ to 0 or $\infty$ is used here to specify the relevant estimators, rather than to restrict the prior on $\theta$.
For completeness, we also study the following estimators: $\delta
^{\mathrm{B}+}_{2(p-2)}$ by (\[B+\]), $\delta^{\mathrm{RB}}$ with $p-2$ replaced by $2(p-2)$ in (\[RB\]), $\delta^{\mathrm{MB}}$ with $(k-2)_+$ replaced by $2(k-2)_+$ in (\[MB\]), and $\delta^+_A$ with $c^*(D,A)$ replaced by $2 c^*(D,A)$ in (\[A+\]), referred to as the alternative versions of $\delta^{\mathrm{B}+}_{p-2}$, $\delta^{\mathrm{RB}}$, $\delta^{\mathrm{MB}}$, and $\delta
^+_A$ respectively. The usual choices of the factors, $p-2$, $(k-2)_+$, and $c^*(D,A)$, are motivated to minimize the risks of the non-positive-part estimators, but may not be the most desirable for the positive-part estimators. As seen below, the alternative choices $2(p-2)$, $2(k-2)_+$, and $2c^*(D,A)$ can lead to risk curves for the positive-part estimators rather different from those based on the usual choices $(p-2)$, $(k-2)_+$, and $c^*(D,A)$. Therefore, we compare the estimators $\delta
^{\mathrm{B}+}_{p-2}$, $\delta^{\mathrm{RB}}$, $\delta^{\mathrm{MB}}$, and $\delta^+_A$ and, separately, their alternative versions.
Each estimator $\delta$ is evaluated by the pointwise risk function $R(\delta,\theta)$ as $\theta$ moves in a certain direction or the Bayes risk function $R(\delta,\pi)$ as $\pi$ varies in a set of priors on $\theta$. Consider the homoscedastic prior $\N(0,\eta^2
I/p)$ or the heteroscedastic prior $\N\{0, \eta^2 D/\tr(D)\}$ for $\eta\ge0$. As discussed in Section \[sec3.3\], the Bayes risk with the first or second prior is meant to measure average risk reduction over the region $\{\theta\dvt \|\theta\|^2 \le\eta^2\}$ or $\{\theta\dvt
\theta^\T D^{-1} \theta\le p \eta^2/\tr(D)\}$. Corresponding to the two priors, consider the direction along $(\eta/\sqrt{p},\ldots,\eta
/\sqrt{p})$ or $(\eta\sqrt{d_1}, \ldots,\eta\sqrt{d_p})/\sqrt{\tr(D)}$, where $\eta$ gives the Euclidean distance from 0 to the point indexed by $\eta$. The two directions are referred to as the homoscedastic and heteroscedastic directions.
We investigate several configurations for $D$, including (\[example\]) and $$\begin{aligned}
\label
{example-group3}(d_1,d_2,\ldots,d_{10}) &=&
(40,20,10,5,5,5,1,1,1,1) \quad \mbox{or}
\\
\label{example-group22}& =& (40,20,10,7,6,5,4,3,2,1) \quad \mbox{or}
\\
& =& 5\%, 15\%, \ldots, 95\% \mbox{ quantiles of }
8/\chi^2_3 \mbox{ or } 24/\chi^2_5,
\nonumber\end{aligned}$$ where $\chi^2_k$ is a chi-squared variable with $k$ degrees of freedom. In the last case, $(d_1,\ldots,d_{10})$ can be considered a typical sample from a scaled inverse chi-squared distribution, which is the conjugate distribution for normal variances. In the case (\[example-group3\]), the coordinates may be segmented intuitively into three groups with relatively homogeneous variances. In the case (\[example-group22\]), there is no clear intuition about how the coordinates should be segmented into groups.
For fixed $D$, the pointwise risk $R(\delta,\theta)$ is computed by repeatedly drawing $X\sim\N(\theta,D)$ and then taking the average of $\|\delta-\theta\|^2$. The Bayes risk is computed by repeatedly drawing $\theta\sim\N(0,\Gamma)$ and $X|\theta\sim\N(\theta,D)$ and then taking the average of $\|\delta-\theta\|^2$. Each Monte Carlo sample size is set to $10^5$.
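A minimal Python sketch of these Monte Carlo computations is given below; the function names are ours, and `delta_fn` stands for any estimator implemented as a function of $X$.

```python
import numpy as np

def pointwise_risk(delta_fn, theta, d, nrep=10**5, seed=0):
    # Monte Carlo estimate of R(delta, theta): draw X ~ N(theta, D) repeatedly
    # and average the squared-error loss.
    rng = np.random.default_rng(seed)
    sd = np.sqrt(d)
    total = 0.0
    for _ in range(nrep):
        x = theta + rng.normal(size=len(d)) * sd
        total += np.sum((delta_fn(x) - theta)**2)
    return total / nrep

def bayes_risk(delta_fn, gamma, d, nrep=10**5, seed=0):
    # Monte Carlo estimate of R(delta, pi_Gamma): draw theta ~ N(0, Gamma) and
    # X | theta ~ N(theta, D), then average the squared-error loss.
    rng = np.random.default_rng(seed)
    sd, sg = np.sqrt(d), np.sqrt(gamma)
    total = 0.0
    for _ in range(nrep):
        theta = rng.normal(size=len(d)) * sg
        x = theta + rng.normal(size=len(d)) * sd
        total += np.sum((delta_fn(x) - theta)**2)
    return total / nrep
```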
![Pointwise risks along the homoscedastic (first row) and heteroscedastic (second row) directions and $\theta_1$ axis (third row) in the case (\[example-group3\]). Left: non-minimax estimators $\delta^{\mathrm{EB}}$ ($\triangledown$), $\delta^{\mathrm{RB}}$ ($\blacktriangledown
$), $\delta^{\mathrm{XKB}}$ ($\vartriangle$). Right: minimax estimators $\delta_{p-2}^{\mathrm{B}+}$ ($\blacktriangle$), $\delta^{\mathrm{MB}}$ with $\Gamma=0$ ($\bullet$) and $\Gamma=(16^2/p)I$ ($\circ$), $\delta^+_A$ with $A=A^\dag_0$ ($\blacksquare$) and $A^\dag_\infty
$ ($\square$).[]{data-label="fig1"}](580f01.eps)
![Pointwise risks along the homoscedastic (first row) and heteroscedastic (second row) directions and $\theta_1$ axis (third row) in the case (\[example-group3\]), with the same legend as in Figure \[fig1\]. The alternative versions of $\delta^{\mathrm{B}+}_{p-2}$, $\delta
^{\mathrm{RB}}$, $\delta^{\mathrm{MB}}$, and $\delta^+_A$ are used.[]{data-label="fig2"}](580f02.eps)
Results
-------
The relative performances of the estimators are found to be consistent across different configurations of $D$ studied. Moreover, the Bayes risk curves under the homoscedastic prior are similar to the pointwise risk curves along the homoscedastic direction. The Bayes risk curves under the heteroscedastic prior are similar to the pointwise risk curves along the heteroscedastic direction. Figure \[fig1\] shows the pointwise risks of the estimators with the usual versions of $\delta^{\mathrm{B}+}_{p-2}$, $\delta^{\mathrm{RB}}$, $\delta^{\mathrm{MB}}$, and $\delta^+_A$ and Figure \[fig2\] shows those of the estimators with the alternative versions of $\delta^{\mathrm{B}+}_{p-2}$, $\delta^{\mathrm{RB}}$, $\delta^{\mathrm{MB}}$, and $\delta^+_A$ for the case (\[example-group3\]), with roughly three groups of coordinate variances, which might be considered unfavorable to our approach. For both $A^\dag_0$ and $A^\dag_\infty$, the cutoff index $\nu$ is found to be 3. See the supplementary material (Tan [@Tan]) for the Bayes risk curves of all these estimators for the case (\[example-group3\]) and the results for other configurations of $D$.
A number of observations can be drawn from Figures \[fig1\]–\[fig2\]. First, $\delta
^{\mathrm{EB}}$, $\delta^{\mathrm{XKB}}$, and $\delta^{\mathrm{RB}}$ have among the lowest risk curves along the homoscedastic direction. But along the heteroscedastic direction, the risk curves of $\delta^{\mathrm{EB}}$ and $\delta^{\mathrm{XKB}}$ rise quickly above the constant risk of $X$ as $\eta$ increases. Moreover, all the risk curves of $\delta^{\mathrm{EB}}$, $\delta^{\mathrm{XKB}}$, and $\delta^{\mathrm{RB}}$ along the $\theta_1$ axis exceed the constant risk of $X$ as $|\theta_1|$ increases. Therefore, $\delta^{\mathrm{EB}}$, $\delta^{\mathrm{XKB}}$, and $\delta
^{\mathrm{RB}}$ fail to be minimax, as mentioned in Section \[sec2\].
Second, $\delta_{p-2}^{\mathrm{B}+}$ or $\delta_{2(p-2)}^{\mathrm{B}+}$ has among the highest risk curves, except where the risk curves of $\delta^{\mathrm{EB}}$ and $\delta^{\mathrm{XKB}}$ exceed the constant risk of $X$ along the heteroscedastic direction. The poor performance is expected for $\delta
_{p-2}^{\mathrm{B}+}$ or $\delta_{2(p-2)}^{\mathrm{B}+}$, because there are considerable differences between the coordinate variances in (\[example-group3\]).
Third, among the minimax estimators, $\delta^+_A$ with $A=A^\dag_0$ or $A^\dag_\infty$ has the lowest risk curve along various directions, whether the usual versions of $\delta^{\mathrm{B}+}_{p-2}$, $\delta
^{\mathrm{MB}}$, and $\delta^+_A$ are compared (Figure \[fig1\]) or the alternative versions are compared (Figure \[fig2\]).
Fourth, the risk curve of $\delta^+_A$ with $A=A^\dag_0$ is similar to that of $\delta^+_A$ with $A=A^\dag_\infty$ along the heteroscedastic direction. But the former is noticeably higher than the latter along the homoscedastic direction as $\eta$ increases, whereas it is noticeably lower than the latter along the $\theta_1$ axis as $|\theta_1|$ increases. These results agree with the construction of $A^\dag_0$ using a heteroscedastic prior and $A^\dag_\infty$ using a flat, homoscedastic prior. Their relative performances depend on the direction in which the risks are evaluated.
Fifth, $\delta^{\mathrm{MB}}$ with $\Gamma=0$ has risk curves below that of $\delta_{p-2}^{\mathrm{B}+}$ or $\delta_{2(p-2)}^{\mathrm{B}+}$, but either above or crossing those of $\delta^+_A$ with $A=A^\dag_0$ and $A^\dag_\infty
$. Moreover, $\delta^{\mathrm{MB}}$ with $\Gamma=(16^2/p)I$ has elevated, almost flat risk curves for $\eta$ from 0 to 16. This seems to indicate an undesirable consequence of using a non-degenerate prior for $\delta^{\mathrm{MB}}$ in that the risk tends to increase for $\theta$ near 0, and remains high for $\theta$ far away from 0.
The foregoing discussion involves the comparison of the risk curves as $\theta$ moves away from 0 between $\delta^{\mathrm{MB}}$ and $\delta_{A^\dag
}^+$ specified with fixed priors. Alternatively, we compare the pointwise risks at $\theta=(\eta/\sqrt{p},\ldots,\eta/\sqrt{p})$ or $(\eta\sqrt{d_1}, \ldots,\eta\sqrt{d_p})/\sqrt{\tr(D)}$ and the Bayes risks under the prior $\N(0,\eta^2 I/p)$ or $\N\{0,
\eta^2 D/\tr(D)\}$ between $\delta^{\mathrm{MB}}$ and $\delta_{A^\dag}^+$ specified with the prior $\N(0,\eta^2 I/p)$ for a range of $\eta$. The homoscedastic prior used in the specification of $\delta^{\mathrm{MB}}$ and $\delta_{A^\dag}^+$ can be considered correctly specified or misspecified, when the Bayes risks are evaluated under, respectively, the homoscedastic or heteroscedastic prior or when the pointwise risks are evaluated along the homoscedastic or heteroscedastic direction. For each situation, $\delta_{A^\dag}^+$ has lower pointwise or Bayes risks than $\delta^{\mathrm{MB}}$. See Figure A2 in the supplementary material (Tan [@Tan]).
Conclusion {#sec5}
==========
The estimator $\delta_{A^\dag}$ and its positive-part version $\delta
^+_{A^\dag}$ are not only minimax but also have desirable properties including simplicity, interpretability, and effectiveness in risk reduction. In fact, $\delta_{A^\dag}$ is defined by taking $A=A^\dag$ in a class of minimax estimators $\delta_A$. The simplicity of $\delta_{A^\dag}$ holds because $\delta_A$ is of the linear form $(I-\lambda A)X$, with $A$ and $\lambda$ indicating the direction and magnitude of shrinkage. The interpretability of $\delta
_{A^\dag}$ holds because the form of $A^\dag$ indicates that one group of coordinates are shrunk in the direction of Berger’s [@Ber76] minimax estimator whereas the remaining coordinates are shrunk in the direction of the Bayes rule. The effectiveness of $\delta_{A^\dag}$ in risk reduction is supported, in theory, by showing that $\delta
_{A^\dag}$ can achieve close to the minimum Bayes risk simultaneously over a scale class of normal priors (Corollary \[cor4\]). For various scenarios in our numerical study, the estimators $\delta
_{A^\dag}^+$ with extreme priors yield more substantial risk reduction than existing minimax estimators.
It is interesting to discuss a special feature of $\delta_{A,r}$ and hence of $\delta_{A,c}$ and $\delta_A$ among linear, shrinkage estimators of the form $$\begin{aligned}
\label{general}
\delta= X - h\bigl(X^\T B X\bigr) A X,\end{aligned}$$ where $A$ and $B$ are nonnegative definite matrices and $h(\cdot)$ is a scalar function. The estimator $\delta_{A,r}$ corresponds to the choice $B \propto A^\T
Q A$, which is motivated by the form of the optimal $\lambda$ in minimizing the risk of $(I-\lambda A) X$ for fixed $A$. On the other hand, Berger and Srinivasan [@BerSri78] showed that under certain regularity conditions on $h(\cdot)$, an estimator (\[general\]) can be generalized Bayes or admissible only if $B \propto\Sigma^{-1} A$. This condition is incompatible with $B \propto A^\T Q A$, unless $A
\propto Q^{-1} \Sigma^{-1}$ as in Berger’s [@Ber76] estimator. Therefore, $\delta_A$ including $\delta_{A^\dag}$ is, in general, not generalized Bayes or admissible. This conclusion, however, does not apply directly to the positive-part estimator $\delta_A^+$, which is no longer of the linear form $(I-\lambda A)X$.
There are various topics that can be further studied. First, the prior on $\theta$ is fixed, independently of data in the current paper. A useful extension is to allow the prior to be estimated within a certain class, for example, homoscedastic priors $N(0,\gamma I)$, from the data, in the spirit of empirical Bayes estimation (e.g., Efron and Morris [@EfrMor73]). Second, the Bayes risk with a normal prior is used to measure average risk reduction in an elliptical region (Section \[sec3.3\]). It is interesting to study how our approach can be extended when using a non-normal prior on $\theta$, corresponding to a non-elliptical region in which risk reduction is desired.
Appendix {#app .unnumbered}
========
The following extends Stein’s [@Ste81] lemma for computing the expectation of the inner product of $X-\theta$ and a vector of functions of $X$.
\[lem1\] Let $X=(X_1,\ldots,X_p)^\mathrm{T}$ be multivariate normal with mean $\theta$ and variance matrix $\Sigma$. Assume that $g=(g_1,\ldots,g_p)^\T\dvtx \mathcal R^p \to\mathcal R^p$ is almost differentiable (Stein [@Ste81]) with $E_\theta
\{ |\nabla_j g_i(X)|
\}<\infty$ for $i,j=1,\ldots,p$, where $\nabla_j=\partial/\partial
x_j$. Then $$E_\theta\bigl\{ (X-\theta)^\T g(X) \bigr\} = \tr\bigl[
\Sigma E_\theta\bigl\{\nabla g(X)\bigr\} \bigr],$$ where $\nabla g(x)$ is the matrix with $(i,j)$th element $\nabla_j g_i(x)$.
A direct generalization of Lemma 2 in Stein [@Ste81] to a normal random vector with non-identity variance matrix gives $$E_\theta\bigl\{ (X-\theta) g_i(X) \bigr\} = \Sigma
E_\theta\bigl\{ \nabla g_i(X) \bigr\}^\T,$$ where $\nabla g_i(x)$ is the row vector with $j$th element $\nabla_j
g_i(x)$. Taking the $i$th element of both sides of the equation gives $$E_\theta\bigl\{ (X_i-\theta_i)
g_i(X) \bigr\} = \sum_{j=1}^p
\sigma_{ij} E_\theta\bigl\{\nabla_j
g_i(X)\bigr\},$$ where $\sigma_{ij}$ is the $(i,j)$th element of $\Sigma$. Summing both sides of the preceding equation over $i$ gives the desired result.
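The identity of Lemma \[lem1\] can also be checked by simulation. The following minimal Python sketch is not part of the original argument; the choice of $g$, $\theta$ and $\Sigma$ is arbitrary and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3
theta = np.array([1.0, -0.5, 2.0])
G = rng.normal(size=(p, p))
Sigma = G @ G.T + np.eye(p)                # an arbitrary positive definite variance matrix

def g(x):                                  # a smooth vector field: g(x) = (sin x1, x2^2, x1*x3)
    return np.array([np.sin(x[0]), x[1] ** 2, x[0] * x[2]])

def grad_g(x):                             # matrix with (i,j)th element d g_i / d x_j
    return np.array([[np.cos(x[0]), 0.0, 0.0],
                     [0.0, 2.0 * x[1], 0.0],
                     [x[2], 0.0, x[0]]])

n = 100_000
X = rng.multivariate_normal(theta, Sigma, size=n)
lhs = np.mean([(x - theta) @ g(x) for x in X])                      # E{(X-theta)^T g(X)}
rhs = np.trace(Sigma @ np.mean([grad_g(x) for x in X], axis=0))     # tr[Sigma E{grad g(X)}]
print(lhs, rhs)   # the two Monte Carlo estimates agree up to simulation error
```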
[Proof of Theorem \[th1\]]{} By direct calculation, the risk of $\delta_{A,r}$ is $$\begin{aligned}
R(\delta_{A,r},\theta) = \tr(\Sigma Q) + E_\theta\biggl(
\frac
{r^2}{X^\T A^\T Q A X} \biggr) - 2 E_\theta\biggl\{ (X-\theta)^\T
\frac{r Q A X }{X^\T A^\T Q A X} \biggr\}.\end{aligned}$$ By Lemma \[lem1\] and the fact that $\tr(\Sigma QAX X^\T A^\T QA)= X^\T A^\T
QA\Sigma QAX $, the third term after the minus sign in $R(\delta
_{A,r},\theta)$ is $$\begin{aligned}
2E_\theta\biggl\{ r \frac{\tr(\Sigma Q A)}{X^\T A^\T Q A X} \biggr
\} - 4 E_\theta
\biggl\{ r\frac{ X^\T A^\T QA \Sigma QAX }{(X^\T A^\T
Q A X)^2} \biggr\} + 4 E_\theta\biggl(r'
\frac{ X^\T A^\T QA \Sigma
QAX }{X^\T A^\T Q A X} \biggr).\end{aligned}$$ By condition (\[A-cond\]), $A^\T QA \Sigma QA $ is nonnegative definite. By Section 21.14 and Exercise 21.32 in Harville [@Har08], $(x^\T A^\T QA \Sigma QA x)/(x^\T A^\T Q A x) \le\lambda_{\max
}(A\Sigma Q + \Sigma A^\T Q)/2$ for $x \neq0$. Then the preceding expression is bounded from below by $$\begin{aligned}
2E_\theta\biggl\{ r \frac{\tr(\Sigma Q A)-\lambda_{\max}(A\Sigma Q
+ \Sigma A^\T Q)}{X^\T A^\T Q A X} \biggr\},\end{aligned}$$ which leads immediately to the upper bound on $R(\delta_{A,r},\theta
)$.
[Proof for condition (\[A-cond2\])]{} We show that if condition (\[A-cond2\]) holds, then there exists a nonsingular matrix $B$ with the claimed properties. The converse is trivially true. Let $R$ be the unique symmetric, positive definite matrix such that $R^2=Q$. Then $R A R^{-1}$ is symmetric, that is, $R A R^{-1} = R^{-1} A^\T R$, because $QA=A^\T Q$. Moreover, $R \Sigma R$ and $R A R^{-1}$ commute, that is, $RAR^{-1} (R \Sigma R) =R \Sigma R (RAR^{-1})^\T= R \Sigma R
(RAR^{-1})$, because $A \Sigma=\Sigma A^\T$ and $R A R^{-1}$ is symmetric. Therefore, $R\Sigma R$ and $R A R^{-1}$ are simultaneously diagonalizable (Harville [@Har08], Section 21.13). There exists an orthogonal matrix $O$ such that $O(R\Sigma R)O^\T=D$ and $O(R A
R^{-1})O^\T= A^*$ for some diagonal matrices $D$ and $A^*$. Then $B=OR$ satisfies the claimed properties.
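The construction $B=OR$ can be illustrated numerically. The sketch below is illustrative only; it assumes, as in the proof above, that condition (\[A-cond2\]) supplies the two symmetries $QA=A^\T Q$ and $A\Sigma=\Sigma A^\T$. It builds a triple $(Q,\Sigma,A)$ with these symmetries, forms $R=Q^{1/2}$ and an orthogonal $O$ diagonalising $R\Sigma R$, and checks that $B=OR$ makes both $B\Sigma B^\T$ and $BAB^{-1}$ diagonal.

```python
import numpy as np
from scipy.linalg import sqrtm, eigh

rng = np.random.default_rng(1)
p = 4
# build (Q, Sigma, A) satisfying QA = A^T Q and A Sigma = Sigma A^T
G = rng.normal(size=(p, p))
R0 = np.real(sqrtm(G @ G.T + p * np.eye(p)))
O0, _ = np.linalg.qr(rng.normal(size=(p, p)))
D0, Astar0 = np.diag(rng.uniform(1, 2, p)), np.diag(rng.uniform(0, 1, p))
Q = R0 @ R0
Sigma = np.linalg.inv(R0) @ O0.T @ D0 @ O0 @ np.linalg.inv(R0)
A = np.linalg.inv(R0) @ O0.T @ Astar0 @ O0 @ R0
print(np.allclose(Q @ A, A.T @ Q), np.allclose(A @ Sigma, Sigma @ A.T))  # both True

# the construction from the proof: R = Q^{1/2}, O diagonalises R Sigma R, B = O R
R = np.real(sqrtm(Q))
_, evec = eigh(R @ Sigma @ R)              # columns are orthonormal eigenvectors
O = evec.T
B = O @ R
print(np.round(B @ Sigma @ B.T, 6))            # diagonal
print(np.round(B @ A @ np.linalg.inv(B), 6))   # diagonal
```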
[Proof of inequality (\[bayes-bound2\])]{} We show that if $(Z_1,\ldots,Z_p)$ are independent standard normal variables, then $E\{ (\sum_{j=1}^p a_j^2 Z_j^2)^{-1}\} \ge\{p/(p-2)\} (\sum_{j=1}^p
a_j^2)^{-1}$. Let $S= \sum_{j=1}^p Z_j^2$. Then $S$ and $(Z_1^2/S, \ldots, Z_p^2/S)$ are independent, $S\sim\chi^2_p$, and $(Z_1^2/S, \ldots, Z_p^2/S) \sim\operatorname{Dirichlet}(1/2,\ldots, 1/2)$, so that $E(Z_j^2/S)=1/p$ for each $j$. The claimed inequality follows because $E\{ (\sum_{j=1}^p a_j^2 Z_j^2)^{-1}\} = E\{ (\sum_{j=1}^p a_j^2 Z_j^2/S)^{-1}\} E(S^{-1})$, $E(S^{-1})=1/(p-2)$, and $E\{ (\sum_{j=1}^p a_j^2 Z_j^2/S)^{-1}\} \ge (\sum_{j=1}^p a_j^2/p)^{-1}$ by Jensen’s inequality.
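A quick simulation confirms the bound; the following Python snippet is illustrative only, with arbitrarily chosen coefficients $a_j^2$, and compares a Monte Carlo estimate of the left-hand side with $\{p/(p-2)\}(\sum_{j=1}^p a_j^2)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 6, 400_000
a2 = rng.uniform(0.5, 3.0, p)                  # arbitrary coefficients a_j^2
Z2 = rng.standard_normal((n, p)) ** 2          # Z_j^2 for independent standard normals
lhs = np.mean(1.0 / (Z2 @ a2))                 # E{(sum_j a_j^2 Z_j^2)^{-1}}
rhs = (p / (p - 2)) / a2.sum()                 # {p/(p-2)} (sum_j a_j^2)^{-1}
print(lhs, rhs, lhs >= rhs)                    # the inequality holds up to Monte Carlo error
```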
[Proofs of Theorem \[th2\] and Corollary \[cor2\]]{} Consider the transformation $\delta_j = d_j^2 /(d_j+\gamma_j)$ and $\alpha_j = \{
(d_j+\gamma_j)/d_j\} a_j$, so that $\delta_j \alpha_j = d_j a_j$ and $\delta_j \alpha_j^2 = (d_j+\gamma_j) a_j^2 $. Problem (\[opt\]) is then transformed to $
\max_{\alpha_1,\ldots,\alpha_p} \{ \sum_{j=1}^p \delta_j \alpha
_j - 2 \max(\delta_1 \alpha_1, \ldots, \delta_p \alpha_p)\}$, subject to $\alpha_j\ge0$ ($j=1,\ldots, p$) and $\sum_{j=1}^p
\delta_j \alpha_j^2 = \sum_{j=1}^p \delta_j$, which is of the form of the special case of (\[opt\]) with $\gamma_j=0$ ($j=1,\ldots,p$). But it is easy to verify that if the claimed results hold for the transformed problem, then the results hold for the original problem (\[opt\]). Therefore, assume in the rest of the proof that $\gamma_j=0$ ($j=1,\ldots,p$).
There exists at least one solution, $A^\dag$, to problem (\[opt\]), by compactness of the constraint set. Let $\mathcal K=\{k\dvt d_k a^\dag_k =
d_\nu a^\dag_\nu, k=1, \ldots, p\}$ and $\mathcal K^c=\{j\dvt d_j
a^\dag_j < d_\nu a^\dag_\nu, j=1, \ldots, p\}$. A key step of the proof is to exploit the fact that, by the setup of problem (\[opt\]), $(a_1^\dag, \ldots, a_p^\dag)$ is automatically a solution to the problem $$\begin{aligned}
\label{a1}
&&\max_{a_1,\ldots,a_p} \quad \sum_{j=1}^p
d_j a_j - 2 d_\nu a_\nu,\nonumber
\\
&&\quad \mbox{subject to} \quad a_j\ge0,\qquad d_ja_j \le
d_\nu a_\nu\qquad (j=1,\ldots, p), \quad \mbox{and}\\
&&\hphantom{\quad \mbox{subject to} \quad} \sum
_{j=1}^p d_j a_j^2
= \sum_{j=1}^p d_j.
\nonumber\end{aligned}$$ The Karush–Kuhn–Tucker condition for this problem gives $$\begin{aligned}
\label{a2}-1 + 2\lambda a_j^\dag-d_j^{-1}
\rho_j &=& 0 \qquad \mbox{for } j \in\mathcal K^c,
\\
\label{a3}-1 + 2\lambda a_k^\dag+ \mu_k-d_k^{-1}
\rho_k &=& 0\qquad \mbox{for } k\ (\neq\nu) \in\mathcal K,
\\
\label{a4}-1 + 2\lambda a_\nu^\dag+ \biggl(2-\sum
_{k\in\mathcal K \setminus\{
\nu\}} \mu_k \biggr)-d_\nu^{-1}
\rho_\nu& =& 0,\end{aligned}$$ where $\lambda$, $\mu_k \ge0$ ($k\in\mathcal K\setminus\{\nu\}$), and $\rho_j\ge0$ satisfying $\rho_j a_j^\dag=0$ ($j=1,\ldots,p$) are Lagrange multipliers.
First, we show that $a_j^\dag>0$ and hence $\rho_j=0$ for $j=1,\ldots
,p$. If $\mathcal K^c = {\varnothing}$, then either $a_j^\dag>0$ for $j=1,\ldots,p$, or $a_1^\dag= \cdots= a_p^\dag=0$. The latter case is infeasible by the constraint $\sum_{j=1}^p d_j a_j^2 = \sum
_{j=1}^p d_j$. Suppose $\mathcal K^c \neq{\varnothing}$. By (\[a2\]), $a_j^\dag>0$ for each $j\in\mathcal K^c$. Then $a_k^\dag> 0$ for each $k \in\mathcal K$ because $d_k a_k^\dag> d_j a_j^\dag$.
Second, we show that $\nu\ge3$. If $\mathcal K^c = {\varnothing}$, then $\nu=p\ge3$. Suppose $\mathcal K^c \neq{\varnothing}$. Then $\lambda
>0$ by (\[a2\]). Summing (\[a3\]) over $k\ (\neq\nu) \in\mathcal K $ and (\[a4\]) shows that $-|\mathcal K|+ 2\lambda\sum_{k\in\mathcal K} a_k^\dag+ 2=0$. Therefore, $|\mathcal K|>2$, and hence $\nu\ge|\mathcal K|\ge3$.
Third, we show that $\mathcal K=\{1,2,\ldots,\nu\}$ and $\mathcal
K^c=\{\nu+1,\ldots, p\}$. For each $k\ (\neq\nu)\in\mathcal K$ and $j \in\mathcal K^c$, $a_k^\dag\le a_j^\dag$ by (\[a2\])–(\[a3\]) and then $d_k > d_j$ because $d_k a_k^\dag> d_j a_j^\dag$. The inequalities also hold for $k=\nu$, by application of the argument to problem (\[a1\]) with $\nu$ replaced by some $k\ (\neq\nu)\in\mathcal K$. Then $\mathcal K^c=\{\nu+1,\ldots, p\}$ because $d_\nu>d_j$ for each $j\in\mathcal K^c$, $d_1\ge d_2 \ge\cdots\ge d_p$, and $\nu$ is the largest element in $\mathcal K$.
Fourth, we show the expressions for $(a_1^\dag,\ldots,a_p^\dag)$ and the achieved maximum value. By the definition of $\mathcal K$, $a_k^\dag\propto d_k^{-1}$ for $k=1,\ldots,\nu$. By (\[a2\]), $a_j^\dag\propto1$ for $j=\nu+1,\ldots,p$. Let $y^\dag=d_\nu a^\dag_\nu$ and $z^\dag= a^\dag_{\nu+1}$. Then $(y^\dag,z^\dag)$ is a solution to the problem $$\begin{aligned}
&&\max_{y,z} \quad (\nu-2) y + \Biggl( \sum
_{j=\nu+1}^p d_j \Biggr) z ,
\\
&&\quad \mbox{subject to}\quad y\ge0,\qquad z\ge0,\qquad y \ge d_{\nu+1}z, \quad \mbox{and}\\
&&\hphantom{\quad \mbox{subject to}\quad}\Biggl(
\sum_{k=1}^\nu d_k^{-1}
\Biggr) y^2 + \Biggl( \sum_{j=\nu
+1}^p
d_j \Biggr) z^2 = \sum_{j=1}^p
d_j.\end{aligned}$$ By the definition of $\mathcal K$, $y^\dag>d_{\nu+1}z^\dag$ and hence $(y^\dag, z^\dag)$ lies off the boundary in the constraint set. Then $(y^\dag,z^\dag)$ is a solution to the foregoing problem with the constraint $y \ge d_{\nu+1}z$ removed. The problem is of the form of maximizing a linear function of $(y,z)$ subject to an elliptical constraint. Straightforward calculation shows that $$\begin{aligned}
y^\dag = \biggl( \frac{ \sum_{j=1}^p d_j}{ M_\nu} \biggr)^{1/2}
\frac{\nu-2}{ \sum_{j=1}^\nu d_j^{-1}}, \qquad z^\dag= \biggl( \frac{ \sum
_{j=1}^p d_j}{ M_\nu}
\biggr)^{1/2},\end{aligned}$$ and the achieved maximum value is $(\sum_{j=1}^p d_j)^{1/2} M_\nu
^{1/2}$, where $M_\nu= (\nu-2)^2 /(\sum_{j=1}^\nu d_j^{-1}) $ $+ \sum_{j=\nu+1}^p
d_j $ .
Finally, we show that the sequence $(M_3, M_4, \ldots, M_p)$ is nonincreasing: $M_k \ge M_{k+1}$, where the equality holds if and only if $k-2 = \sum_{j=1}^k d_{k+1}/d_j$. Because $y^\dag>d_{\nu+1}z^\dag
$ or $\nu-2 > \sum_{j=1}^\nu d_{\nu+1}/d_j$, this result implies that $M_\nu> M_{\nu+1}$ and hence $A^\dag$ is a unique solution to (\[opt\]). Let $L_k = \{(\sum_{j=1}^k d_j)( \sum_{j=1}^k d_j^{-1}) -(k-2)^2 \}/ \sum
_{j=1}^k d_j^{-1}$ so that $M_k = \sum_{j=1}^p d_j - L_k$. By the identity $(b+\beta)/(a+\alpha) - b/a = (\beta/ \alpha-
b/a)\{ \alpha/(a+\alpha)\}$ and simple calculation, $$\begin{aligned}
\label{a5}
L_{k+1} - L_k & =& \Biggl[ \frac{ \sum_{j=1}^k (\sfrac{d_j}{d_{k+1}} +
\sfrac{d_{k+1}}{d_j}) - 2k +4}{ d_{k+1}^{-1}} - \Biggl\{
\sum_{j=1}^k d_j -
\frac{(k-2)^2 }{ \sum_{j=1}^k d_j^{-1}} \Biggr\} \Biggr] \frac
{d_{k+1}^{-1}}{\sum_{j=1}^{k+1} d_j^{-1}}
\nonumber
\\[-6pt]
\\[-10pt]
& =& d_{k+1} \frac{ \{ r_k - (k-2) \}^2}{r_k (r_k+1)},\nonumber\end{aligned}$$ where $r_k = \sum_{j=1}^k d_{k+1}/d_j$. Therefore, $L_k \le L_{k+1}$. Moreover, $L_k=L_{k+1}$ if and only if $r_k=k-2$, that is, $\sum
_{j=1}^k d_{k+1}/d_j=k-2$.
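Both the closed form (\[a5\]) for $L_{k+1}-L_k$ and the resulting monotonicity of $(M_3,\ldots,M_p)$ are easy to verify numerically. The sketch below is illustrative only, with randomly generated values $d_1\ge\cdots\ge d_p>0$.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 8
d = np.sort(rng.uniform(0.2, 5.0, p))[::-1]          # d_1 >= d_2 >= ... >= d_p > 0

def L(k):                                            # L_k = {(sum d_j)(sum 1/d_j) - (k-2)^2} / sum 1/d_j
    s, s_inv = d[:k].sum(), (1.0 / d[:k]).sum()
    return (s * s_inv - (k - 2) ** 2) / s_inv

for k in range(3, p):                                # 0-based: d[k] is d_{k+1}
    r_k = (d[k] / d[:k]).sum()                       # r_k = sum_{j<=k} d_{k+1}/d_j
    closed_form = d[k] * (r_k - (k - 2)) ** 2 / (r_k * (r_k + 1))
    print(k, np.isclose(L(k + 1) - L(k), closed_form), L(k) <= L(k + 1))
# since M_k = sum_j d_j - L_k, a nondecreasing L_k means a nonincreasing M_k
```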
[Proof of Corollary \[cor3\]]{} It suffices to show (\[sol-ineq\]). By Corollary \[cor2\], $\sum_{k=1}^{\nu-1} d_\nu^*/ d_k^* \ge\nu-3$ and hence $\sum_{k=1}^\nu d_\nu^*/ d_k^* \ge\nu-2$. Then, for $j=1,\ldots,\nu$, $$\begin{aligned}
a_j^\dag= \frac{(\nu-2) {d_j^*}^{-1} }{ \sum_{k=1}^\nu{d_k^*}^{-1}
} \frac{d_j}{d_j+\gamma_j} \le
\frac{(\nu-2) {d_\nu^*}^{-1} }{
\sum_{k=1}^\nu{d_k^*}^{-1} } \frac{d_j}{d_j+\gamma_j} \le\frac
{d_j}{d_j+\gamma_j},\end{aligned}$$ because $d_j^* \ge d_\nu^*$ for $j \le\nu$.
[Proof of Theorem \[th3\]]{} Let $L_k = \sum_{j=1}^k d_j^* -(k-2)^2/ \sum_{j=1}^k {d_j^*}^{-1}$ so that $M_k = \sum_{j=1}^p d_j^* - L_k$, similarly as in the proof of Theorem \[th2\]. By equation (\[a5\]) with $r_k = \sum_{j=1}^k d_{k+1}^*/d_j^*$ and $d_{k+1}$ replaced by $d_{k+1}^*$, $$\begin{aligned}
L_\nu= L_3 + \sum_{k=3}^{\nu-1}
(L_{k+1} - L_k)= L_3 + \sum
_{k=3}^{\nu-1} d_{k+1}^* \frac{ \{ r_k - (k-2) \}^2}{r_k (r_k+1)} .\end{aligned}$$ By the relationship $r_k = (d_{k+1}^* / d_k^*) (1+r_{k-1})$ and simple calculation, $$\begin{aligned}
L_3 & =& d_1^* + d_2^* + d_3^*
- \frac{1}{{d_1^*}^{-1} + {d_2^*}^{-1} +
{d_3^*}^{-1}}
\\
& =& d_1^* + d_2^* + d_3^* - \sum
_{k=3}^{\nu-1} d_{k+1}^* \biggl(
\frac{1}{r_k} - \frac{1}{r_k+1} \biggr) - \frac{d_\nu^*}{r_{\nu-1}+1}.\end{aligned}$$ If $\nu\ge4$, combining the two preceding equations gives $$\begin{aligned}
L_\nu&=& d_1^* + d_2^* + d_3^* +
\sum_{k=3}^{\nu-1} d_{k+1}^*
\frac{
\{ r_k - (k-2) \}^2-1}{r_k (r_k+1)} - \frac{d_\nu^*}{r_{\nu-1}+1}
\\
& \le &d_1^* + d_2^* + d_3^* + \sum
_{k=3}^{\nu-1} d_{k+1}^*
\frac
{3}{k(k+1)} - \frac{d_\nu^*}{\nu}
\\
& =& d_1^* + d_2^* + d_3^* +
d_4^* - 3 \sum_{k=3}^{\nu-2}
\frac
{d_{k+1}^* - d_{k+2}^*}{k+1} - 4 \frac{d_\nu^*}{\nu}
\\
& \le& d_1^* + d_2^* + d_3^* +
d_4^* -4 \frac{d_\nu^*}{\nu}.\end{aligned}$$ The first inequality follows because $k-2 \le r_k \le k$ for $k=3,\ldots,\nu-1$ and $\{t - (k-2)\}^2 /\{t(t+1)\}$ is increasing for $k-2 \le t \le k$ with a maximum at $t=k$. The second inequality follows because $d_1^* \ge d_2^* \ge\cdots\ge d_p^*$. Therefore, if $\nu\ge4$ then $$\begin{aligned}
\frac{p}{p-2} M_\nu& \ge&\frac{p}{p-2} \Biggl\{ \sum
_{j=1}^p d_j^* - \biggl(
d_1^* + d_2^* + d_3^* + d_4^*
-4 \frac{d_\nu^*}{\nu} \biggr) \Biggr\}
\\
& =& \sum_{j=3}^p
d_j^* - \Biggl( d_3^* + d_4^* -
\frac{2}{p-2} \sum_{j=5}^p
d_j^* -\frac{ 4 p}{p-2} \frac{d_\nu^*}{\nu} \Biggr) .\end{aligned}$$ If $\nu=3$, then $L_\nu\le d_1^* + d_2^* + d_3^* -d_3^*/3$ and hence $$\begin{aligned}
\frac{p}{p-2} M_\nu& \ge&\frac{p}{p-2} \Biggl\{ \sum
_{j=1}^p d_j^* -
\bigl(d_1^* + d_2^* + d_3^*
-d_3^*/3 \bigr) \Biggr\}
\\
& =&\sum_{j=3}^p d_j^* -
\Biggl( d_3^* - \frac{2}{p-2} \sum_{j=4}^p
d_j^* -\frac{ p}{p-2} \frac{d_3^*}{3} \Biggr).\end{aligned}$$ This completes the proof.
Acknowledgements {#acknowledgements .unnumbered}
================
The author thanks Bill Strawderman and Cunhui Zhang for helpful discussions.
|
---
abstract: 'Perfect colourings of the rings of cyclotomic integers with class number one are studied. It is shown that all colourings induced by ideals $(q)$ are chirally perfect, and vice versa. A necessary and sufficient condition for a colouring to be perfect is obtained, depending on the factorisation of $q$. This result yields the colour symmetry group $H$ in general. Furthermore, the colour preserving group $K$ is determined in all but finitely many cases. An application to colourings of quasicrystals is given.'
address:
- 'Mathematics department, Ateneo de Manila University, Loyola Heights, 1108 Quezon City, Philippines'
- 'Mathematics department, Ateneo de Manila University, Loyola Heights, 1108 Quezon City, Philippines'
- 'Fakultät für Mathematik, Universität Bielefeld, 33501 Bielefeld, Germany'
author:
- 'E.P. Bugarin'
- 'M.L.A.N. de las Peñas'
- 'D. Frettlöh'
title: Perfect colourings of cyclotomic integers
---
**Introduction** {#intro}
================
The study of colour symmetries of periodic patterns or point lattices in two or three dimensions is a classical topic, see [@gs] or [@schw]. A colour symmetry is a symmetry of a coloured pattern up to permutation of colours; and the study of colour symmetry groups is dedicated to the relation of the colour symmetries of a coloured pattern to the symmetries of the uncoloured pattern. During the last century, the classification of colour symmetry groups of periodic patterns has been carried out to a great extent. The discovery of quasiperiodic patterns [@sen2] like the Penrose tiling raised the question about colour symmetries of these patterns. Quasiperiodic patterns are not periodic, that is, the only translation fixing the pattern is the trivial translation by 0. Nevertheless, quasiperiodic patterns show a high degree of short and long range order. One early approach to generalise the concept of colour symmetry to quasiperiodic patterns was given in [@lip]. It used the notion of indistinguishability of coloured patterns and the fact that, for quasiperiodic patterns, it can be described in Fourier space rather than in real space [@dm]. A more algebraic approach was used in [@mp], making use of quadratic number fields. In this work, the problem of colour symmetries of both periodic and non-periodic patterns, including the quasiperiodic cases, is addressed by studying the sets of cyclotomic integers following the setting introduced in [@b1; @bg; @bgs]. Cyclotomic integers turned out to be very useful in describing symmetries of quasiperiodic patterns.
This article can be seen as a complement to [@bg], which concentrates on the combinatorial aspects of perfect or chirally perfect colourings (there called ‘Bravais colourings’) of ${\ensuremath{{\mathcal M}}}_n$, where ${\ensuremath{{\mathcal M}}}_n = {\ensuremath{\mathbb{Z}}}[e^{2 \pi i / n}]$ denotes a ${\ensuremath{\mathbb{Z}}}$-module of cyclotomic integers. In particular, the results in [@bg] yield the numbers $\ell$ for which a (chirally) perfect colouring of ${\ensuremath{{\mathcal M}}}_n$ with $\ell$ colours exists, given that ${\ensuremath{{\mathcal M}}}_n$ has class number one. In contrast, this paper studies the algebraic properties of the colour symmetry groups of perfect colourings of ${\ensuremath{{\mathcal M}}}_n$, again for the case that ${\ensuremath{{\mathcal M}}}_n$ has class number one.
**Preliminaries**
=================
Let ${\ensuremath{{\mathcal M}}}_n$ denote the cyclotomic integers. That is, ${\ensuremath{{\mathcal M}}}_n = {\ensuremath{\mathbb{Z}}}[\xi_n]$ is the ring of polynomials in $\xi_n$, where $\xi_n=e^{2\pi i/n}$ is a primitive $n$-th complex root of unity. If it is clear from the context, we may write just $\xi$ instead of $\xi_n$. Since ${\ensuremath{{\mathcal M}}}_{2n} = {\ensuremath{{\mathcal M}}}_n$ for $n$ odd, we omit the case $n \equiv 2 \mod
4$, for the sake of uniqueness. As mentioned above, our approach requires that ${\ensuremath{{\mathcal M}}}_n$ has class number one. Then we can use the fact that ${\ensuremath{{\mathcal M}}}_n$ is a principal ideal domain, and therefore also a unique factorisation domain. This is only true for the following values of $n$. $$\label{eq:cls1}
n=3,4,5,7,8,9,11,12,13,15,16,17,19,20,21,24,25,27,28,32,33,
35,36,40,44,45,48,60,84.$$ Let us emphasise that ${\ensuremath{{\mathcal M}}}_n$ always denotes the ring of cyclotomic integers for the values in Equation (\[eq:cls1\]) only.
[**Notation:**]{} Throughout the text, $D_n$ (resp. $C_n$) denotes the dihedral (resp. cyclic) group of order $2n$ (resp. $n$). The symmetric group of order $n!$ is denoted by ${\ensuremath{{\mathcal S}}}_n$. Let $\xi_n =
e^{2 \pi i / n}$, a primitive $n$-th root of unity. The set of cyclotomic integers ${\ensuremath{\mathbb{Z}}}[\xi_n]$ is denoted by ${\ensuremath{{\mathcal M}}}_n$. The point group of ${\ensuremath{{\mathcal M}}}_n$ (the set of linear isometries fixing ${\ensuremath{{\mathcal M}}}_n$) is the dihedral group $D_N$, where $N=n$ if $n$ is even, and $N=2n$ if $n$ is odd. The entire symmetry group $G({\ensuremath{{\mathcal M}}}_n)$ of ${\ensuremath{{\mathcal M}}}_n$ is [*symmorphic*]{}, that is, it equals the semidirect product of its translation subgroup with its point group: $G({\ensuremath{{\mathcal M}}}_n) = {\ensuremath{{\mathcal M}}}_n \rtimes D_N$, where $N=n$ if $n$ is even, and $N=2n$ if $n$ is odd. If $H$ is a subgroup of some group $G$, the index of $H$ in $G$ is denoted by $[G:H]$. Throughout the text we will identify the Euclidean plane with the complex plane. The complex norm of $z \in {\ensuremath{\mathbb{C}}}$ is always denoted by $|z|$, while the algebraic norm of $z \in {\ensuremath{{\mathcal M}}}_n$ is denoted by $N_n(z)$.
The symmetry group of some set $X \subset {\ensuremath{\mathbb{R}}}^2$ is always denoted by $G$ in the sequel. The following definitions are mainly taken from [@gs]. A [*colouring*]{} of $X$ is a surjective map $c: X \to \{1, \ldots,
\ell\}$. Whenever we want to emphasise that a colouring uses $\ell$ colours, we will also call it an $\ell$-colouring. The objects of interest are colourings where an element of $G$ acts as a global permutation of the colours. Thus, for given $X
\subset {\ensuremath{\mathbb{R}}}^2$ and a colouring $c$ of $X$, we consider the following group. $$H = \{ h \in G \, | \, \exists \, \pi \in {\ensuremath{{\mathcal S}}}_{\ell} \; \forall x \in
X: \; c(h(x)) = \pi (c(x))\}.$$ The elements of $H$ are called [*colour symmetries*]{} of $X$. $H$ is the [*colour symmetry group*]{} of the coloured pattern $(X,c)$.
\[def:perfect\] A colouring $c$ of a point set $X$ is called [*perfect*]{}, if $H=G$. It is called [*chirally perfect*]{}, if $H=G'$, where $G'$ is the index 2 subgroup of $G$ containing the orientation preserving isometries in $G$.
See Figure \[fig:bspcol\] for some examples. By the requirement $\pi c = c h$, each $h$ determines a unique permutation $\pi = \pi_h$. This also defines a map $$\label {eq:p}
P: H \to {\ensuremath{{\mathcal S}}}_{\ell}, \quad P(h):=\pi_h.$$ Let $g,h\in H$. Because of $c (hg(x)) = ch(g(x)) = \pi_h c(g(x)) =
\pi_h (\pi_g (c(x))) = \pi_h \pi_g (c(x))$, we obtain the following result.
$P$ is a group homomorphism. $\square$
A further object of interest is the subgroup $K$ of $H$ which fixes the colours. For a given $X \subset {\ensuremath{\mathbb{R}}}^2$ and a colouring $c$ of $X$, we consider the [*colour preserving group*]{} $K$ (in [@pbeff] called colour [*fixing*]{} group): $$K := \{ k \in H \, | \, c(k(x)) = c(x), \, x \in X \}.$$ In other words, $K$ is the kernel of $P$. The aim of this paper is to deduce the nature of the groups $H$ and $K$ for (chirally) perfect colourings of ${\ensuremath{{\mathcal M}}}_n$.
![Four examples of colourings of ${\ensuremath{\mathbb{Z}}}^2$. For clarity, each element of ${\ensuremath{\mathbb{Z}}}^2$ is replaced by a unit square. From left to right: An arbitrary 2-colouring with two colours, neither ideal nor perfect; a 2-colouring induced by a coset colouring, but neither ideal nor perfect; a perfect 4-colouring induced by the ideal $(2)$; a chirally perfect 5-colouring induced by the ideal $(2+i)$. \[fig:bspcol\] ](bspcol.eps){width="120mm"}
**Coset colourings and ideal colourings of planar modules**
===========================================================
A colouring of a point set $X$ with a group structure (like a lattice or a ${\ensuremath{\mathbb{Z}}}$-module) can be constructed by choosing a subgroup of $X$ and assigning to each coset a different colour ([@vdw], see also [@mlp]). Thus we will generate colourings of ${\ensuremath{{\mathcal M}}}_n$ by suitable subgroups of ${\ensuremath{{\mathcal M}}}_n$. Since ${\ensuremath{{\mathcal M}}}_n$ is in fact a principal ideal domain, we will choose principal ideals $(q)$ as these subgroups. Each element $q \in {\ensuremath{{\mathcal M}}}_n$ thus generates a colouring in the following way.
An [*ideal colouring*]{} of ${\ensuremath{{\mathcal M}}}_n$ with $\ell$ colours is defined as follows: For each $z \in (q) = q {\ensuremath{{\mathcal M}}}_n$, let $c(z)=1$. Let the other cosets of $(q)$ be $(q)+t_2, \ldots, (q) + t_{\ell}$. For each $z \in (q)+t_i$, let $c(z)=i$.
If $(q)$ is given explicitly, we will also call such an ideal colouring a [*colouring induced by*]{} $(q)$. We will see that all chirally perfect colourings of ${\ensuremath{{\mathcal M}}}_n$, where $n$ is of class number one, arise from principal ideals $(q) = q
{\ensuremath{{\mathcal M}}}_n$, where $q \in {\ensuremath{{\mathcal M}}}_n$. Consequently, there exists a chirally perfect colouring of ${\ensuremath{{\mathcal M}}}_n$ with $\ell$ colours, if and only if there is $q$ such that $N_n(q) = [ {\ensuremath{{\mathcal M}}}_n : (q)] = \ell$. (Note that the index of $(q)$ in ${\ensuremath{{\mathcal M}}}_n$ is just the algebraic norm of $q$.)
In [@bg], the number of Bravais colourings of ${\ensuremath{{\mathcal M}}}_n$ was obtained for all $n$ as in (\[eq:cls1\]). Let us briefly explain why, in this context, Bravais colourings are chirally perfect colourings, and vice versa. A [*Bravais colouring*]{} of ${\ensuremath{{\mathcal M}}}_n$ is a colouring where each one-coloured subset is in the same Bravais class as ${\ensuremath{{\mathcal M}}}_n$. In plain words, this means that each one-coloured subset is similar to ${\ensuremath{{\mathcal M}}}_n$. More precisely: there is $q \in {\ensuremath{\mathbb{C}}}$ such that for each $i$, $c^{-1}(i)$ is a translate of $q {\ensuremath{{\mathcal M}}}_n$. For a general definition of Bravais class, see for instance [@m].
Let ${\ensuremath{{\mathcal M}}}_n = {\ensuremath{\mathbb{Z}}}[\xi_n]$ be a principal ideal domain. A colouring of ${\ensuremath{{\mathcal M}}}_n$ is a Bravais colouring, if and only if it is a chirally perfect colouring, if and only if it is an ideal colouring.
Let $c$ be an ideal colouring induced by $(q)$. Trivially, $(q) = q {\ensuremath{{\mathcal M}}}_n$ is similar to ${\ensuremath{{\mathcal M}}}_n$, and the cosets are translates of $q {\ensuremath{{\mathcal M}}}_n$. Thus $c$ is a Bravais colouring.
Let $c$ be a Bravais colouring of ${\ensuremath{{\mathcal M}}}_n$. Without loss of generality, let $0 \in c^{-1}(1)$. The set $c^{-1}(1)$ of points of colour 1 is similar to ${\ensuremath{{\mathcal M}}}_n$, that is, it equals $q {\ensuremath{{\mathcal M}}}_n$ for some $q
\in {\ensuremath{\mathbb{C}}}$. Since $c^{-1}(1) \subset {\ensuremath{{\mathcal M}}}_n$, we have $q {\ensuremath{{\mathcal M}}}_n \subset
{\ensuremath{{\mathcal M}}}_n$, which implies $q \in {\ensuremath{{\mathcal M}}}_n$. Thus $c^{-1}(1) = (q)$. All other preimages $c^{-1}(i)$ are translates of $(q)$, thus cosets of $(q)$ in ${\ensuremath{{\mathcal M}}}_n$. Therefore $c$ is an ideal colouring.
For the equivalence of chirally perfect colouring and ideal colouring, see Theorem \[thm:bal\] below.
**The structure of $H$**
========================
Recall that $P: H \to {\ensuremath{{\mathcal S}}}_{\ell}$ maps a colour symmetry to the permutation it induces on the colours, see (\[eq:p\]).
\[lem:h/k\] $H$ acts transitively on the coloured subsets of any perfect colouring of ${\ensuremath{{\mathcal M}}}_n$, and $H/K \cong P(H)$.
The proof of the first statement follows from the proof of Theorem \[thm:bal\] below, see the remark there. Since $K = \mbox{ker}(P)$, the second claim is clear.
This yields the short exact sequence $$\label{eq:exseq}
0 \longrightarrow K \longrightarrow H \longrightarrow
H/K \longrightarrow 0.$$ Therefore, $H$ is always a group extension of $K$. In general, $H$ is neither a direct nor a semidirect product of $K$ and $H/K$, see Theorem \[thm:prod\] below.
We proceed by examining how the factorisation of $q$ in ${\ensuremath{{\mathcal M}}}_n$ affects the structure of the colour symmetry group $H$ of the colouring induced by $(q)$. The unique factorisation of $q$ over ${\ensuremath{{\mathcal M}}}_n$ reads $$\label{eq:qfac}
q = \varepsilon \prod_{p_i \in {\ensuremath{{\mathcal P}}}} p_i^{\alpha_i} \prod_{p_j \in {\ensuremath{{\mathcal C}}}}
\omega_{p_j}^{\beta_j} \overline{\omega_{p_j}}^{\gamma_j}
\prod_{p_k \in {\ensuremath{{\mathcal R}}}} p_k^{\delta_k},$$ where $\varepsilon$ is a unit in ${\ensuremath{{\mathcal M}}}_n$. Here, ${\ensuremath{{\mathcal P}}}$ (resp. ${\ensuremath{{\mathcal C}}}$, resp. ${\ensuremath{{\mathcal R}}}$) denotes the set of inert (resp.complex splitting, resp. ramified) primes over ${\ensuremath{{\mathcal M}}}_n$. The generator $q$ is called [*balanced*]{} if $\beta_j=\gamma_j$ for all $j$. In other words: $q$ is balanced if it is of the form $$q= {\varepsilon}x p,$$ where ${\varepsilon}$ is a unit in ${\ensuremath{{\mathcal M}}}_n$, $x$ is a real number in ${\ensuremath{{\mathcal M}}}_n$ (i.e., $x \in {\ensuremath{\mathbb{Z}}}[\xi+\overline{\xi}]$), and $p$ is a product of ramified primes. By the definition of a ramified prime $p$ (see [@wash]), $\overline{p} \in (p)$ holds in ${\ensuremath{{\mathcal M}}}_n$. (Equivalently, $p / \overline{p}$ is a unit in ${\ensuremath{{\mathcal M}}}_n$.) The following lemma is well-known, it is stated here for the convenience of the reader.
\[lem:unit\] All units ${\varepsilon}$ in ${\ensuremath{\mathbb{Z}}}[\xi_n]$ are of the form ${\varepsilon}=\pm \lambda
\xi_n^k$, where $\lambda \in {\ensuremath{\mathbb{Z}}}[\xi+\overline{\xi}]$.
(Essentially [@wash], Prop. 1.5:) Let ${\varepsilon}$ be a unit, and let $\alpha={\varepsilon}/
\overline{{\varepsilon}}$. Since ${\varepsilon}, \overline{{\varepsilon}},
1/\overline{{\varepsilon}} \in {\ensuremath{{\mathcal M}}}_n$, $\alpha$ is an algebraic integer. Since complex conjugation commutes with any element of the Galois group, for all algebraic conjugates $\alpha_i$ of $\alpha$ holds $|\alpha_i|=1$.
Lemma 1.6 of [@wash] then yields: if $\alpha$ is an algebraic integer and $|\alpha_i|=1$ for all algebraic conjugates $\alpha_i$ of $\alpha$, then $\alpha$ is some root of unity, say $\xi_r^q$. Since $\alpha \in {\ensuremath{{\mathcal M}}}_n$, it is either an $n$-th root of unity, or a $2n$-th root of unity, if $n$ is odd. In each case, $\alpha=\pm \xi_n^j$ for some $j$. Then ${\varepsilon}^2={\varepsilon}\overline{{\varepsilon}} \alpha = |
{\varepsilon}| \xi_n^j$, where $| {\varepsilon}|$ is a real number, thus $|
{\varepsilon}| \in {\ensuremath{\mathbb{Z}}}[\xi_n + \overline{\xi}_n]$. It follows $ {\varepsilon}= \sqrt{|{\varepsilon}|} \xi^{j/2}_n = \pm \sqrt{|{\varepsilon}|}
\xi^{j}_{2n} = \pm \lambda \xi^{j}_{2n}$, where $\lambda \in {\ensuremath{\mathbb{Z}}}[\xi_n
+ \overline{\xi}_n]$. However, since ${\varepsilon}\in {\ensuremath{\mathbb{Z}}}[\xi_n]$, it can’t be a proper $2n$-th complex root of unity. Thus $\pm \lambda
\xi^{j}_{2n} = \pm \lambda \xi^{k}_n$ for some $k$.
In particular, if ${\varepsilon}$ is a unit in ${\ensuremath{\mathbb{Z}}}[\xi_n]$ with $|{\varepsilon}|
= 1$, then it is (up to sign) an $n$-th complex root of unity: ${\varepsilon}= \pm \xi_n^k$.
\[lem:bal\] $\overline{q} \in (q)$ if and only if $q$ is balanced.
Let $q$ be as in (\[eq:qfac\]). Consider $q/\overline{q}$. The inert primes in numerator and denominator cancel each other. The unit $\varepsilon$, as well as the factors of the ramified primes, contribute a unit $\varepsilon' \in {\ensuremath{{\mathcal M}}}_n$. Thus $$r:= q/\overline{q} = \varepsilon' \prod_{p_j \in {\ensuremath{{\mathcal C}}}}
\omega_{p_j}^{\beta_j-\gamma_j}
\overline{\omega_{p_j}}^{\gamma_j-\beta_j}.$$ If $q$ is balanced, then $\beta_j=\gamma_j$, thus $\overline{q} =
(\varepsilon')^{-1} q \in (q)$, since $\varepsilon'$ is a unit. If $q$ is not balanced, then $\beta_j-\gamma_j \ne 0$ for some $j$. Then, by Lemma \[lem:unit\], the right hand side $r$ is not a unit, thus $r^{-1} \notin {\ensuremath{{\mathcal M}}}_n$, and consequently $\overline{q} = r^{-1} q \notin
(q)$.
\[thm:bal\] Let ${\ensuremath{{\mathcal M}}}_n={\ensuremath{\mathbb{Z}}}[\xi_n]$ be a principal ideal domain.
1. Each chirally perfect colouring of ${\ensuremath{{\mathcal M}}}_n$ is an ideal colouring.
2. Each ideal colouring of ${\ensuremath{{\mathcal M}}}_n$ is chirally perfect.
3. The colouring $c$ induced by $(q)$ is perfect, if and only if $q$ is balanced.
Consequently, $H= {\ensuremath{{\mathcal M}}}_n \rtimes D_N$ if $q$ is balanced, and $H= {\ensuremath{{\mathcal M}}}_n
\rtimes C_N$ otherwise. ($N=2n$ if $n$ is odd, $N=n$ else.)
Let $(q)$ be the ideal inducing the colouring of ${\ensuremath{{\mathcal M}}}_n$. Let $\ell=[{\ensuremath{{\mathcal M}}}_n:(q)]$, and denote the cosets of $(q)$ by $(q)+t_1, \ldots,
(q)+t_{\ell}$, where $t_1=0$ for convenience. (The notation $(q)+g$ rather than $g(q)$ or $(q) g$ is justified as follows: all rotations and reflections in $G$ fix $(q)$. Only maps with some translational part map $(q)$ to a coset different from $(q)$.)
We proceed by studying whether an element $g \in G$ maps an entire coset $(q)+t_i$ to an entire coset $(q)+t_j$ or not. If yes, then $g$ induces a global permutation of the colours, and $g \in H$. Three cases have to be considered.
1\. Let $g \in G$ be a translation. Then $g$ is of the form $g(x)=x+t$ for some $t$. Hence $g((q)+t_i)=(q)+t_i+t$ trivially is a coset of $(q)$.
2\. Let $g \in G$ be a rotation. Then $g((q))=(q)$, thus $g((q)+t_j)=(q) +
g(t_j)$, which is again a coset of $(q)$.
So, the first two cases are not critical, whether $q$ is balanced or not. In particular, all orientation preserving isometries map entire cosets to entire cosets, which proves part (2) of Theorem \[thm:bal\].
3\. Let $g \in G$ be the reflection $x \mapsto \overline{x}$. If $q$ is balanced, then, by Lemma \[lem:bal\], $g(q) = \overline{q} \in
(q)$, hence $g((q))=(q)$. Thus $g((q)+t_j) = (q) + g(t_j)$, which is a coset of $(q)$. Any element of $G$ is a composition of the three symmetries above, hence the ‘if’-part of Theorem \[thm:bal\] (3) follows.
If $q$ is not balanced, then $g(0)=0 \in (q)$, but $g(q) \notin (q)$ by Lemma \[lem:bal\]. Thus $g$ does not map entire cosets to entire cosets. Consequently, no reflection in $G$ maps entire cosets to entire cosets. The reflections in $G$ are the only elements which fail to do so. This (again) shows Theorem \[thm:bal\] (2), and the ‘only-if’ part of Theorem \[thm:bal\] (3).
Regarding Theorem \[thm:bal\] (1): Let $c$ be a chirally perfect colouring. Let $0 \in c^{-1}(1)$. Then $c^{-1}(1)$ is invariant and closed under rotations in $C_N$, and under translations by $t \in
c^{-1}(1)$. It follows that $c^{-1}(1)$ is invariant under multiplication by elements of ${\ensuremath{{\mathcal M}}}_n$, and under translations by elements of $c^{-1}(1)$. Thus $c^{-1}(1)$ is an ideal in ${\ensuremath{{\mathcal M}}}_n$.
This proves Lemma \[lem:h/k\] as well: $H$ contains all translations $z \mapsto z+t, \; t \in {\ensuremath{{\mathcal M}}}_n$. Clearly, these translations act transitively on the cosets.
\[cor:one\] If there exists only one ideal colouring of ${\ensuremath{{\mathcal M}}}_n$ with $\ell$ colours, the colouring is perfect.
If an ideal colouring induced by $(q)$ is not perfect, then $q$ is not balanced by Theorem \[thm:bal\]. Thus, $\overline{q} \notin (q)$ by Lemma \[lem:bal\], hence $(q) \ne (\overline{q})$. So $(q)$ and $(\overline{q})$ define two different colourings with $\ell$ colours.
Now we get immediately a result on colourings of ${\ensuremath{\mathbb{Z}}}^2$. This is Theorem 8.7.1 in [@gs], see also [@sen]. Note that the number of colours $\ell$ is the norm $N_4(q)$ of $q$, which is just $q\overline{q}$.
\[cor:z2\] Let $c$ be an $\ell$–colouring of the square lattice ${\ensuremath{\mathbb{Z}}}[i]$ generated by $(q)$, $q \in {\ensuremath{\mathbb{Z}}}[i]$.
1. If the factorisation of $\ell$ over ${\ensuremath{\mathbb{Z}}}$ contains no primes $p \equiv 1 \mod 4$, then the colouring is perfect.
2. If $q=m$, or $q=im$, or $q=(1 \pm i)^k m$ for some $m, k \in {\ensuremath{\mathbb{Z}}}\setminus \{0\}$, then the colouring is perfect.
3. Otherwise the colouring is not perfect but chirally perfect, and so $H = {\ensuremath{{\mathcal M}}}_n \rtimes C_N$.
The inert primes in ${\ensuremath{{\mathcal M}}}_4={\ensuremath{\mathbb{Z}}}[i]$ are exactly the ones of the form $p
\equiv 3 \mod 4$; and the splitting primes are exactly those of the form $p \equiv 1 \mod 4$. The only ramified prime in ${\ensuremath{{\mathcal M}}}_4$ is $2=(1+i)(1-i)$. So (1) and (2) of Corollary \[cor:z2\] cover exactly the cases where $q$ is balanced, and the claim follows from Theorem \[thm:bal\].
Corollary \[cor:z2\] tells us that all ideal colourings of the square lattice with $1,2,4,8,9,16$ or $18$ colours are perfect, and all those with $5,10,13,17,20$ colours are not. (These are all possible values for $\ell<25$, see [@bg]). The first ambiguity occurs at the value $\ell=25$: the three possible generators are $q=5, \; q=3+4i,\; q=3-4i$. The first one induces a perfect colouring, whereas the other two induce non-perfect but chirally perfect colourings.
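For the square lattice the balancedness test reduces to a divisibility check: $q$ is balanced precisely when $\overline{q}\in(q)$, that is, when $q$ divides $\overline{q}$ in ${\ensuremath{\mathbb{Z}}}[i]$, which happens if and only if $N_4(q)=q\overline{q}$ divides both the real and the imaginary part of $\overline{q}^{\,2}$. The following small Python sketch (illustrative only) reproduces the cases discussed above.

```python
def square_lattice_colouring(q):
    """Ideal colouring of Z[i] induced by (q) = (a + b i): returns (number of colours, perfect?)."""
    a, b = q
    ell = a * a + b * b                        # ell = N_4(q) = q * conj(q)
    # conj(q)/q = conj(q)^2 / N_4(q); it lies in Z[i] iff N_4(q) divides both parts of conj(q)^2
    re, im = a * a - b * b, -2 * a * b
    return ell, (re % ell == 0) and (im % ell == 0)

for q in [(1, 1), (2, 0), (2, 1), (5, 0), (3, 4), (3, -4)]:
    ell, perfect = square_lattice_colouring(q)
    print(f"q = {q[0]}{q[1]:+d}i : {ell} colours, perfect = {perfect}")
```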
\[thm:prod\] If $\ell = 2$, then $H$ is a semidirect product: $H = K \rtimes H/K$. If $\ell > N$, then $H \ne K \rtimes H/K$. ($N=2n$, if $n$ odd, $N=n$ else.)
Consider Equation (\[eq:exseq\]). By the splitting lemma, $H = K
\rtimes H/K$, if and only if there is a homomorphism $Q: H/K \to H$ such that $PQ = \operatorname{id}$ on $H/K$. If $\ell=2$, then $H/K \cong C_2 =
\{ \operatorname{id}, x \}$. Let $Q(\operatorname{id})=\operatorname{id}$ and $Q(x) = \varphi$, where $\varphi$ is the reflection in the vertical line through $\frac{1}{2}$. Certainly, $\varphi \in G$ holds: $\varphi: a + bi \mapsto 1-a +bi$ is a composition of $z \mapsto \overline{z}, \; z \mapsto iz, \; z \mapsto
z+1$. By Lemma \[lgleich2\] (see below), the colouring is perfect, thus $\varphi \in H$, and $c(0) \ne c(1)$, thus $\varphi$ interchanges the two colours. This makes $Q$ a homomorphism.
In general, there is no such homomorphism $Q$: All elements $\pi \in H/K$ are of finite order. If $Q(\pi)$ contains a translational part, it is of infinite order in $H$. Thus there is $k = \mbox{ord}(\pi)$ such that $\operatorname{id}= Q(\operatorname{id}) = Q(\pi^k) \ne Q(\pi)^k$, hence $Q$ is not a homomorphism.
The elements $z \in {\ensuremath{{\mathcal M}}}_n$ with $|z|=1$ are exactly the $N$ elements of the form $\pm \xi^i_n$ (see Lemma \[lem:unit\]). Thus they can carry at most $N$ colours. They can be mapped to each other by rotations about 0, or by reflection $z \mapsto \overline{z}$. $H$ acts transitively on the colours. Thus in any colouring with more than $N$ colours there has to be a map $g \in H$ which is neither a rotation about 0, nor a reflection $z \mapsto \overline{z}$. Thus the colouring requires a map $h$ with some translational part, which is of infinite order. Consequently, $Q(h)$ is of infinite order.
The results in this section yield $H$ in general — that is, whether a colouring of ${\ensuremath{{\mathcal M}}}_n$ is perfect or not — depending on the factorisation of the generator of the underlying ideal. In the next section we obtain results yielding $K$, depending only on the number $\ell$ of colours. As a byproduct, we also obtain partial results on $H$, depending on $\ell$ only.
**The structure of $K$** {#sec:k}
========================
It follows a series of lemmas which determine the structure of $K$ in all but finitely many cases. Recall that $\ell$ denotes the number of colours and $c(x)$ denotes the colour of $x$. We denote the group of translations by elements in $(q)$ by $T_{(q)}$. Note that $T_{(q)}$ is always contained in $K$.
\[0ungleichxi\] $\ell \ge 2$ if and only if $c(0) \ne c(\pm\xi^i)$ for all $i \le n$.
$\ell = 1 {\Leftrightarrow}(q)=(1) {\Leftrightarrow}(q)=(\pm\xi^i)$ for some $i {\Leftrightarrow}0, \xi^i \in
(q) {\Leftrightarrow}c(0)=c(\xi^i)$.
Let $\phi$ denote Euler’s totient function.
\[lgleich2\] Each 2-colouring of ${\ensuremath{{\mathcal M}}}_n$ induced by $(q)$ is perfect. Moreover, $\ell = 2$, if and only if $c(\pm\xi^i) = c(\xi^j)\;
\mbox{for all}\; i,j \le n$, if and only if $K=T_{(q)} \rtimes D_N$.
Since $\ell=2$, we consider two cosets of $(q)$, namely, $(q)$ and $(q)+1$. Note that $2 \in (q)$ and $\pm\xi^i\in (q)+1$ for any $i \in
{\ensuremath{\mathbb{Z}}}$. Consequently $\pm\xi^i \pm\xi^j \in (q)$, while $\pm\xi^i \pm\xi^j
\pm\xi^k \in (q)+1$ for any $i,j,k\in{\ensuremath{\mathbb{Z}}}$, and in general: If $z=\sum_{i=0}^{\phi(n)-1} \alpha_i\xi^i \in {\ensuremath{{\mathcal M}}}_n$, with $\alpha_i
\in {\ensuremath{\mathbb{Z}}}$, then $z \in (q)$ if and only if $\sum_{i=0}^{\phi(n)-1} \alpha_i \equiv 0 \mod 2$, otherwise $z \in (q)+1$. Now, $\overline{z} = \sum_{i=1}^{\phi(n)-1} \alpha_i\xi^{n-i}$, and by conjugation of $z$, the sum $\sum_{i=0}^{\phi(n)-1} \alpha_i$ does not change. This implies that ${z} \in (q)$ if and only if $\overline{z} \in
(q)$. Similarly, $z \in (q) + 1$ if and only if $\overline{z} \in (q)
+ 1$. Thus the reflection $z \mapsto \overline{z}$ maps $(q)$ to itself and $(q) + 1$ to $(q)+1$. Hence the reflection is in $H$, and so the colouring is perfect. Furthermore it fixes the coloured pattern, and so is also in $K$.
From Lemma \[0ungleichxi\] it follows that $c(\pm\xi^i) \ne c(0) \ne
c(\xi^j)$ for all $i,j$. Since $\ell = 2$, that is, there are two colours only, it follows $c(\pm\xi^i) = c(\xi^j)$.
Vice versa, if $c(\pm\xi^i)
= c(\xi^j)$ for all $i,j$, then $\pm\xi^i-\xi^j, 2 \in (q)$. This means, analogous to the reasoning above, $(q) = \big\{\sum_{i=0}^{\phi(n)-1} \alpha_i \xi^i \, | \,
\sum_{i=0}^{\phi(n)-1}\alpha_i\equiv 0\mod2, \; \alpha_i \in {\ensuremath{\mathbb{Z}}}\big\}$, and thus $(q)$ has only one other coset, say $(q)+1$. This settles the first equivalence. Since $\pm \xi^i (q) =(q)$, $\pm \xi^i ((q)+1) = (q) \pm \xi^i = (q)+1$, it follows that both the cosets are invariant under $N$-fold rotations, and so $K=T_{(q)} \rtimes D_N$, since the reflection is also in $K$ as noted above. Vice versa, if $K=T_{(q)} \rtimes D_N$, then $c(\pm\xi^i) = c(\xi^j)$.
\[elllarge\] The following holds for all $\ell$-colourings of ${\ensuremath{{\mathcal M}}}_n$: if $\ell > 2^{\phi(n)}$, then $K=T_{(q)}$.
Recall that $K$ is a subgroup of $T_{(q)} \rtimes D_N$. If some $\operatorname{id}\ne g \in D_N$ is an element of $K$, then $g$ maps some $\pm \xi^i$ to some $\xi^j \ne \pm\xi^i$, with $c(\xi^i) = c(\pm\xi^j)$. Thus it suffices to show $c(\xi^i) \ne c(\pm \xi^j)$ for all $i \ne
j$.
Assume $c(\xi^i) = c(\pm \xi^j)$. Then $\xi^i \pm \xi^j \in (q)$. Hence $$N_n(\xi^i \pm \xi^j) = \bigg| \prod_{k=1}^{\phi(n)} \sigma_k(\xi^i \pm
\xi^j) \bigg| = \prod_{k=1}^{\phi(n)} | \xi^{i_k} \pm \xi^{j_k} | \le
2^{\phi(n)}, \; \mbox{where} \; \sigma_k \in Gal({\ensuremath{\mathbb{Q}}}(\xi_n),{\ensuremath{\mathbb{Q}}}).$$ It follows $\ell = [{\ensuremath{{\mathcal M}}}_n : (q)] \le N_n(\xi^i \pm \xi^j) \le
2^{\phi(n)}$, which contradicts $\ell > 2^{\phi(n)}$.
\[lem:q=2\] If $(q)=(2)$, then $H=G$. Furthermore, for all $n \ne 4$ in (\[eq:cls1\]): $K=T_{(q)} \rtimes C_2$; and for $n=4$: $K=T_{(q)}
\rtimes D_2$.
$q=2$ is balanced. Thus, $H=G$.
Now for $K$, compare the proof of the previous lemma: note that $N_n(2)=2^{\phi(n)}$, and that $c(\xi^i)=c(-\xi^i)$ for all $i$, since $2\xi^i \in (2)$; in particular $c(1)=c(-1)$. No further coincidence $c(\xi^i)=c(\pm\xi^j)$ with $\xi^j \ne \pm\xi^i$ can occur, because it would force a factor of modulus strictly less than 2 into the norm product displayed in that proof, turning the $\le$ there into $<$, which is impossible since $\ell = 2^{\phi(n)}$. Consequently, the only nontrivial rotation in $K$ is the rotation by $\pi$ about 0.
The reflection $z \mapsto \overline{z}$ maps $\xi$ to $\xi^{n-1}$. For all $n \ne 4$ in (\[eq:cls1\]), $N_n(\xi-\xi^{n-1}) < 2^{\phi(n)}$ and so $c(\xi)\ne c(\xi^{n-1})$. Thus the reflection is not contained in $K$, and so $K=T_{(q)} \rtimes C_2$. Only in the case $n=4$ do we get $N_n(\xi-\xi^{n-1}) = 2^{\phi(n)}$, so that the reflection is in $K$ and thus $K=T_{(q)} \rtimes D_2$.
Why is the case $n=4$, $\ell = 2^{\phi(4)} = 4$ different? By inspection of this case (see Figure \[fig:bspcol\]) we find that $K = T_{(2)}
\rtimes D_2$. This is because $c(1) = c(-1)$ and $c(i) = c(-i)$; and only in this case does the reflection $z \mapsto \overline{z}$ also belong to $K$.
\[lem:normq=2n\] If $\ell = 2^{\phi(n)}$ but $(q)\neq(2)$, then $K=T_{(q)}$.
By the proof of the previous lemma, the only nontrivial element of $D_N$ that could fix the colours is the rotation by $\pi$. If this rotation indeed fixes the colours, then in particular $c(1)=c(-1)$. Thus $2 \in (q)$, and so $(2) \subseteq (q)$. But since $(q)$ and $(2)$ have equal algebraic norms, it follows that $(q)=(2)$, which is a contradiction. Therefore, $K=T_{(q)}$.
The case $\ell = 2^{\phi(n)}$ but $(q)\neq(2)$ first occurs when $n=7$, see Table \[tab\].
\[lem:lnprim\] If $2 < \ell = n$, where $n$ is prime in ${\ensuremath{\mathbb{Z}}}$, then $H=G$ and $K=T_{(q)}
\rtimes D_n$.
Note that in the case when $n$ is an odd prime, the symmetry group of $(q)$ contains $D_N = D_{2n}$.
Let $2 < \ell = n$ and $n$ prime in ${\ensuremath{\mathbb{Z}}}$. Then the unique factorisation of $\ell = n$ in ${\ensuremath{{\mathcal M}}}_n$ is $\ell = \prod_{i=1}^{n-1}
(1-\xi^i)$ [@wash]. Thus $\ell$ ramifies, and the possible generators of the ideal $(q)$ are exactly the $1-\xi^i$. Therefore, by Theorem \[thm:bal\], each corresponding colouring is perfect. In fact, there is only one such colouring, since $1-\xi^j \in (1-\xi)$ for all $1 \le j \le n$. (This follows from $\xi^k(1-\xi) \in (q)$, thus $\sum_{k=0}^{j-1} \xi^k(1-\xi) = 1-\xi^j
\in (q)$.) Moreover, it follows that $c(1)=c(\xi^j)$ for all $j$.
Since $\ell$ is prime in ${\ensuremath{\mathbb{Z}}}$, we have ${\ensuremath{{\mathcal M}}}_n / (q)\cong C_\ell$, and so the $\ell$ distinct cosets can be expressed as $(q), (q) + 1, (q) +
2, \ldots, (q) + \ell - 1$. Each coset is invariant under multiplication by $\xi^j$, but not under multiplication by $-\xi$. Thus, $K=T_{(q)} \rtimes D_n$.
\[krefl\] If $H=G$ and $\ell$ is prime in ${\ensuremath{\mathbb{Z}}}$, then $K$ contains a reflection. Thus, $T_{(q)} \rtimes C_2$ is a subgroup of $K$.
Because $\ell$ is prime, the $\ell$ distinct cosets are $(q), (q) + 1, \ldots, (q) + \ell -
1$. Clearly, these cosets are invariant under conjugation, hence the reflection $z \mapsto \overline{z}$ is contained in $K$. Consequently, $K$ contains $T_{(q)} \rtimes C_2$ as a subgroup.
The previous lemma together with Lemma \[elllarge\] yields the following result immediately.
If $\ell$ is prime in ${\ensuremath{\mathbb{Z}}}$ and $K = T_{(q)}$, then $H=G'$. In particular, if $\ell>2^{\phi(n)}$ is prime in ${\ensuremath{\mathbb{Z}}}$, then $H=G'$. $\square$
If $\ell > n$ and $\ell$ is prime in ${\ensuremath{\mathbb{Z}}}$, then $H=G'$.
If $H=G$, then by Lemma \[krefl\] the cosets must be fixed by taking conjugates. This would mean that $c(\xi^i)=c(\xi^{n-i})$ and so $\xi^i(1-\xi^{n-2i})\in(q)$. Thus $\ell =
N_n(q)\mid N_n(1-\xi^{n-2i})=: \alpha$. Now, $\alpha$ must be either $2^{\phi(n)}$ (when $\xi^{n-2i}=-1$) or a factor of $n^{\phi(n)}$. For the latter case, recall that $\prod_{j=1}^{n-1}(1-\xi^j)=n$. Taking the algebraic norm of both sides, and noting that this norm is completely multiplicative, gives us $$\label{eqn:pp}
\prod_{j=1}^{n-1}N_n(1-\xi^j)=n^{\phi(n)}.$$ This shows that each factor $N_n(1-\xi^j)$ on the left hand side of Equation (\[eqn:pp\]) divides $n^{\phi(n)}$. But $\ell$ is a prime greater than $n \ge 3$, so $\ell$ divides neither $2^{\phi(n)}$ nor $n^{\phi(n)}$, and thus $\ell$ cannot divide $\alpha$. Hence $H=G'$.
\[lem:lteiltnicht\] If $\ell \nmid 2^{\phi(n)}$ and $\ell \nmid n^{\phi(n)}$, then $K = T_{(q)}$.
Suppose there is $\operatorname{id}\ne g\in D_N$ which fixes the cosets, so in particular $g((q)+1)=(q)+1$. This implies that $(q) \pm \xi^{i} =
(q) + 1$ for some integer $i$, and hence $1 \pm \xi^i \in (q)$. As in the proof of the previous lemma, it follows then that $\ell \mid
N_n(1\pm\xi^i) = \beta$, where $\beta$ is either $2^{\phi(n)}$ or a factor of $n^{\phi(n)}$. This is a contradiction, thus $K = T_{(q)}$.
$n$ $\ell$ $j$ $H$ $K$ $q$
----- -------- ----- -------- ------------------------- ---------------------------------- --
3 3 1 $G$ $T_{(q)} \rtimes D_3$ $1-\xi_3$
4 1 $G$ $T_{(q)} \rtimes C_2$ $2$
$> 4$ \* \* $T_{(q)}$ \*
4 2 1 $G$ $T_{(q)} \rtimes D_4$ $1-\xi_4$
4 1 $G$ $T_{(q)} \rtimes D_2$ 2
$> 4$ \* \* $T_{(q)}$ \*
7 7 1 $G$ $T_{(q)} \rtimes D_7$ $1- \xi_7$
8 2 {$G'$} {$T_{(q)} \rtimes C_2$} $1- \xi_7-\xi_7^3$
29 6 $G'$ $T_{(q)}$ $1- \xi_7 - \xi_7^2$
43 6 $G'$ $T_{(q)}$ $1- \xi_7 - \xi_7^2 - \xi_7^3$
49 1 $G$ {$T_{(q)}$} $(1- \xi_7)^2$
56 2 {$G'$} $T_{(q)}$ $(1- \xi_7)(1- \xi_7- \xi_7^3)$
64 1 $G$ $T_{(q)} \rtimes C_2$ 2
2 {$G'$} $T_{(q)}$ $(1- \xi_7-\xi_7^3)^2$
$> 64$ \* \* $T_{(q)}$ \*
9 3 1 $G$ {$T_{(q)} \rtimes D_9$} $1- \xi_9$
9 1 $G$ {$T_{(q)}$} $(1- \xi_9)^2$
19 6 $G'$ $T_{(q)}$ $1- \xi_9 - \xi_9^2$
27 1 $G$ {$T_{(q)}$} $1- \xi_9^3$
37 6 $G'$ $T_{(q)}$ $1- \xi_9 - \xi_9^3$
57 6 {$G'$} $T_{(q)}$ $(1- \xi_9)(1- \xi_9 - \xi_9^2)$
64 1 $G$ $T_{(q)} \rtimes C_2$ 2
$> 64$ \* \* $T_{(q)}$ \*
: \[tab\] The cases $n=3,4,7,9$. Here, $j$ denotes the number of colourings with $\ell$ colours. Non-bracketed entries in the columns labelled $H$ and $K$ follow directly from results in this paper. Entries in brackets are computed by methods from [@pbeff].
Table \[tab\] illustrates applications of the results of the last two sections for the cases $n=3,4,7,9$. (These are exactly the values of $n$ where $\phi(n) \in \{ 2,6 \}$. The cases $n=6, 14, 18$ are covered implicitly.) This table can be seen as a complement to Table 4 in [@pbeff], where values of $n$ for which $\phi(n)=4$ are considered. The entries in the second and third column follow from [@bg]. Many entries in the fourth and fifth column follow immediately from the results in this paper. Entries in brackets require further computations (compare [@pbeff]), entries without brackets are immediate. Entries with an asterisk mean that there are multiple different possibilities. The last column lists one (out of possibly more than one) generator $q$ of a corresponding colouring. Note that for $n=7$, there are three colourings with 64 colours. The colouring induced by $(2)$ is perfect, while the other two colourings are not.
**Application to quasiperiodic structures** {#sec:qc}
===========================================
Consider a colouring $c$ of ${\ensuremath{{\mathcal M}}}_8$ with eight colours. There exists exactly one such colouring [@bg]. By Corollary \[cor:one\], this colouring is perfect. Therefore $H= {\ensuremath{{\mathcal M}}}_8 \rtimes D_8$. Because of $N_8(1+\xi+\xi^2+\xi^3) = 8$, this colouring is defined by the ideal $(q)=(1+\xi+\xi^2+\xi^3)$. Because of Theorem \[thm:bal\] and Lemma \[lem:bal\], $\overline{q}=1+\xi^7+\xi^6+\xi^5 \in
(q)$. Thus, $q+\overline{q}=2 \in (q)$. Hence the rotation by $\pi$ about $0$, which maps $1$ to $-1$, is contained in $K$. The rotation by $\pi/2$ about 0 maps $1$ to $i$. The norm of $1-i$ in ${\ensuremath{{\mathcal M}}}_8$ is $N_8(1-i)=4<8$, thus $1-i \notin (q)$. Therefore $1$ and $i$ have different colours, and the rotation by $\pi/2$ and thus the rotation by $\pi/4$ are not contained in $K$. Finally, the reflection maps $\xi_8$ to $-\xi_8^3$. But $N_8(\xi_8+\xi_8^3)=4$ implying that $\xi_8$ and $-\xi_8^3$ have different colours. This yields $K=T_{(q)} \rtimes{C_2}$ (where $C_2$ represents the rotation by $\pi$).
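The norm computations used above can be reproduced numerically by evaluating the Galois conjugates. The short Python sketch below is a numerical illustration only (it rounds the product of conjugates to the nearest integer) and recovers $N_8(1+\xi+\xi^2+\xi^3)=8$, $N_8(1-i)=4$ and $N_8(\xi_8+\xi_8^3)=4$.

```python
import numpy as np
from math import gcd

def alg_norm(coeffs, n):
    """|N_n(z)| for z = sum_j coeffs[j] * xi_n^j, computed as the product over
    the Galois conjugates xi_n -> xi_n^k with gcd(k, n) = 1 (numerical approximation)."""
    val = 1.0
    for k in range(1, n):
        if gcd(k, n) == 1:
            xi = np.exp(2j * np.pi * k / n)
            val *= sum(c * xi ** j for j, c in enumerate(coeffs))
    return round(abs(val))

print(alg_norm([1, 1, 1, 1], 8))      # N_8(1 + xi + xi^2 + xi^3) = 8
print(alg_norm([1, 0, -1], 8))        # N_8(1 - i) = N_8(1 - xi^2) = 4
print(alg_norm([0, 1, 0, 1], 8))      # N_8(xi + xi^3) = 4
```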
Let us now describe how to illustrate this colouring $c$ and its symmetries. Since ${\ensuremath{{\mathcal M}}}_8$ is dense in the plane, we want a discrete subset of ${\ensuremath{{\mathcal M}}}_8$, which exhibits the colour symmetries of $({\ensuremath{{\mathcal M}}}_8,c)$. A colouring of such a subset is shown in Figure \[fig:ab8col\]. This set is well known from the theory of aperiodic order: It is the vertex set of an Ammann Beenker tiling, see [@gs] or [@sen2]. The symmetries discussed above are visible in the image.
Further examples of perfect and chirally perfect colourings of quasiperiodic structures can be found in [@lip] (a 5-colouring of the vertex set of the famous Penrose tiling, based on ${\ensuremath{{\mathcal M}}}_5$), in [@bgs] (an 8-colouring of a quasiperiodic pattern based on ${\ensuremath{{\mathcal M}}}_7$), in [@lueck] (several colourings based on ${\ensuremath{{\mathcal M}}}_n$ for $n=4,6,8,10,12$), and in [@pbeff] (a 4-colouring of the Ammann Beenker tiling, based on ${\ensuremath{{\mathcal M}}}_8$). All these colourings arise from perfect or chirally perfect colourings of ${\ensuremath{{\mathcal M}}}_n$.
![\[fig:ab8col\] An 8-colouring of the vertices of the Ammann-Beenker tiling, arising from the 8-colouring of the underlying set ${\ensuremath{{\mathcal M}}}_8$.](ab-8col-bw.eps){width="120mm"}
**Conclusion**
==============
Two classical special cases of colour symmetries are covered by our approach, namely, the square lattice (${\ensuremath{{\mathcal M}}}_4$) and the hexagonal lattice (${\ensuremath{{\mathcal M}}}_3$, resp. ${\ensuremath{{\mathcal M}}}_6$). These are discrete point sets. In particular, we obtain Theorem 8.7.1 in [@gs] as a corollary, see Corollary \[cor:z2\]. All other cases ($n=5$, $n \ge 7$) yield point sets ${\ensuremath{{\mathcal M}}}_n$ which are dense in the plane. In the case where ${\ensuremath{{\mathcal M}}}_n$ has class number one, we obtained our main results. These are a necessary and sufficient condition for a colouring to be perfect (Theorem \[thm:bal\]). It allows the determination of the colour symmetry group $H$ of ${\ensuremath{{\mathcal M}}}_n$ in general. In particular, it yields all perfect colourings of ${\ensuremath{{\mathcal M}}}_n$. Moreover, for all but finitely many cases, we determine the subgroup $K$ of $H$ of symmetries which fix the coloured pattern: the lemmas in Section \[sec:k\] aid the derivation of $K$. A systematic way to determine $K$ for a given ideal $\ell$-colouring would be to check the conditions of Lemma \[lgleich2\] ($\ell = 2$), Lemma \[elllarge\] ($\ell > 2^{\phi(n)}$), Lemma \[lem:q=2\] and Lemma \[lem:normq=2n\] ($\ell = 2^{\phi(n)}$), Lemma \[lem:lnprim\] ($\ell = n$ prime), Lemma \[lem:lteiltnicht\] ($\ell \nmid
2^{\phi(n)}$ and $\ell \nmid n^{\phi(n)}$). The remaining cases have to be handled individually. This allows — in principle — to obtain all colour preserving groups of (chirally) perfect colourings of ${\ensuremath{{\mathcal M}}}_n$.
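The case distinction just described can be organised mechanically. The following Python sketch is illustrative only: the returned strings merely name the groups, and the flag `q_is_two` must be supplied by the caller; it runs through the lemmas of Section \[sec:k\] and returns `None` whenever the colouring has to be treated individually.

```python
from sympy import totient, isprime

def colour_preserving_group(n, ell, q_is_two=False):
    """Decide K for an ideal ell-colouring of M_n, following the lemmas of Section [sec:k].
    Returns a description of K, or None when the case must be handled individually."""
    phi = int(totient(n))
    N = 2 * n if n % 2 else n
    if ell == 2:
        return f"T_(q) ⋊ D_{N}"                     # Lemma [lgleich2]
    if ell > 2 ** phi:
        return "T_(q)"                              # Lemma [elllarge]
    if ell == 2 ** phi:                             # Lemmas [lem:q=2] and [lem:normq=2n]
        if q_is_two:
            return "T_(q) ⋊ D_2" if n == 4 else "T_(q) ⋊ C_2"
        return "T_(q)"
    if ell == n and isprime(n):
        return f"T_(q) ⋊ D_{n}"                     # Lemma [lem:lnprim]
    if (2 ** phi) % ell and (n ** phi) % ell:
        return "T_(q)"                              # Lemma [lem:lteiltnicht]
    return None                                     # remaining cases: check individually

print(colour_preserving_group(7, 29))                   # T_(q), as in Table [tab]
print(colour_preserving_group(7, 64, q_is_two=True))    # T_(q) ⋊ C_2
print(colour_preserving_group(8, 8))                    # None: e.g. the 8-colouring in Section [sec:qc]
```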
For large $n$, the value $2^{\phi(n)}$ tends to be large, and it might be tedious to handle the remaining cases individually. Nevertheless, Lemma \[lem:lteiltnicht\] seems to cover many of the remaining cases of $\ell$, compare Table 5 in [@bg]. For instance, for $n=15$, there are 11 cases for which $\ell \leq 2^{\phi(15)}=256$. Our results cover 8 out of 11 cases, only three cases require further effort in order to derive the colour preserving group $K$. To give another example, for $n=16$, there are 23 cases for $\ell \leq
2^{\phi(16)}=256$, but 17 of them are covered by our results, and only six cases have to be checked individually in order to determine the group $K$.
Acknowledgement {#acknowledgement .unnumbered}
===============
The authors are grateful to Michael Baake and Christian Huck for helpful discussions. Also, they wish to express their thanks to the CRC 701 of the German Research Council (DFG).
[ZZZ9]{}
M. Baake: Combinatorial aspects of colour symmetries, [*J. Phys. A: Math. Gen.*]{} 30 (1997) 2687-98, [mp\_arc/02-323]{}.
M. Baake, U. Grimm: Bravais colourings of planar modules with $N$-fold symmetry, [*Z. Krist.*]{} 219 (2004) 72-80, [math.CO/0301021]{}.
M. Baake, U. Grimm, M. Scheffer: Colourings of planar quasicrystals [*J. Alloys and Compounds*]{} 342 (2002) 195-197, [cond-mat/0110654]{}.
E.P. Bugarin, M.L.A.N. de las Peñas, I. Evidente, R.P. Felix, D. Frettlöh: On color groups of Bravais colorings of planar modules with quasicrystallographic symmetry, [*Z. Krist.*]{} 223 (2008), 785-790.
M.L.A.N. de las Peñas, R.P. Felix, G.R. Laigo: Colorings of hyperbolic plane crystallographic patterns, [*Z. Krist.*]{} 221 (2006) 665-672.
J. Dräger, N.D. Mermin: Superspace Groups without the Embedding: The Link between Superspace and Fourier-Space Crystallography, [*Phys. Rev. Lett.*]{} 76 (1996) 1489-1492.
B. Grünbaum, G.C. Shephard: [*Tilings and patterns*]{}, Freeman, New York, 1987.
R. Lifshitz: Theory of color symmetry for periodic and quasiperiodic crystals, [*Rev. Mod. Phys.*]{} [**69**]{} (1997) 1181-1218.
R. Lück: Colour symmetry of 25 colours in quasiperiodic patterns, [*Phil. Mag.*]{} 88 (2008) 2049-2058.
N.D. Mermin: Copernican crystallography, [*Phys. Rev. Lett.*]{} 68 (1992) 1172-1175.
R.V. Moody, J. Patera: Colourings of quasicrystals, [*Can. J. Phys.*]{} 72 (1994) 442-452.
R.L.E. Schwarzenberger: Colour symmetry, [*Bull. London Math. Soc.*]{} 16 (1984) 209-240.
M. Senechal: Color groups, [*Discrete Appl. Math.*]{} 1 (1979) 51-73.
M. Senechal: [*Quasicrystals and Geometry*]{}, Cambridge University Press (1995).
B.L. van der Waerden, J.J. Burckhardt: Farbgruppen, [*Z. Krist.*]{} 115 (1961) 231-234.
L.C. Washington: [*Introduction to cyclotomic fields*]{}, Springer, New York (1996).
|
---
abstract: 'We present some applications of ideas from partial differential equations and differential geometry to the study of difference equations on infinite graphs. All operators that we consider are examples of “elliptic operators” as defined by Y. Colin de Verdiere [@CdV2]. For such operators, we discuss analogs of inequalities of Cheeger and Harnack and of the maximum principle (in both elliptic and parabolic versions), and apply them to study spectral theory, the ground state and the heat semigroup associated to these operators.'
address: |
Ph.D. Program in Mathematics\
Graduate Center (CUNY)\
New York, NY 10016\
Email: [email protected]
author:
- 'J. Dodziuk'
title: Elliptic operators on infinite graphs
---
[^1]
Preliminaries
=============
We consider graphs (without loops or multiple connections) $G=(V,E)$ where $V$ is a set whose elements are called vertices and $E$, the set of edges, is a subset of the set of two-element subsets of $V$. For an edge $e=\{x,y\}\in E$, we will denote by $[x,y]$ the *oriented* edge from $x$ to $y$ and write $\overline{E}$ for the set of all oriented edges. We also write $x \sim y$ if $\{x,y\}$ is an edge. All graphs considered will be connected.
By a function on a graph we will mean a mapping $f:V\longrightarrow {{\mathbb C}}$. By an operator on a graph, we shall always mean an operator acting on functions and follow [@CdV2] in defining the notion of “self-adjoint, positive, elliptic operator.” Observe first that every operator $L$ is given by a matrix $(b_{x,y})$. We require our operators to be local, i.e. $$b_{x,y} =0 \qquad \text{if $\{x,y\}$ is not an edge and $x\neq y$
.}$$ Thus $$Lf(x)=b_{x,x}f(x)+\sum_{x\sim y}b_{x,y} f(y).$$ The constant functions are annihilated by $L$ if and only if $\sum_{y\sim x}b_{x,y} = - b_{x,x}$ for every $x\in V$. Every local operator $L$ can be rewritten in the form $$\label{local}
Lf(x)=W(x)f(x)+\sum_{y\sim x}a_{x,y}(f(x)-f(y))$$ where $W(x)=b_{x,x} + \sum_{y\sim x}b_{x,y}$ and $a_{x,y}=-b_{x,y}$. We will often write $L=A+W$, where $A$, given by the sum in the formula above, annihilates constant functions and $W$ denotes the operator of multiplication by the function $W(x)$.
Let $\ell^2(V)$ be the space of complex-valued functions $f$ satisfying $$\sum_{x\in V} |f(x)|^2 < \infty$$ equipped with the standard hermitian inner product $$(f,g)=\sum_{x\in V} f(x){\overline{g(x)}}.$$ We denote by $C_0(V)$ the space of all functions on $V$ with finite support. In order that the operator $L$ be symmetric on $C_0(V)$, i.e. $(Lf,g) = (f,Lg)$ it is necessary and sufficient that $a_{x,y} = \overline{a_{y,x}}$ and $W(x)+\sum_{y\sim x}a_{x,y}\in {\mathbb{R}}$. We want to think of the operator in (\[local\]) as a “Laplacian” plus a potential. Thus, we impose an additional condition on $A$ that will make it positive on $C_0(V)$. Namely, we require that $a_{x,y}$ be real and positive for every edge $\{x,y\}$. We will refer to such operators as *elliptic*, *positive* and *symmetric*. A very important example is the combinatorial Laplacian $A=\Delta$ given by choosing $a_{x,y}=1$ for every edge, $$\Delta f(x) = \sum_{x\sim y}(f(x)-f(y))=
m(x)f(x) -\sum_{x\sim y}f(y),$$ where $m(x)$ is the valence of the vertex $x\in V$ i.e. the number of edges emanating from $x$.
The following lemma sheds some light on the structure of a positive, symmetric operator. First, we need a definition. Let $C(\overline{E})$ denote the space of functions $\phi$ on *oriented* edges satisfying $\phi([x,y])=-\phi([y,x])$ for every edge $\{x,y\}$ and let $$\ell^2(\overline{E}) = \{ \phi \in C({\overline{E}}) \mid
\sum_{\{x,y\}\in E} |\phi([x,y])|^2 < \infty\}.$$ We equip $\ell^2(\overline{E})$ with the natural inner product $$<\phi,\psi> = \sum_{\{ x,y\}\in E}\phi([x,y]){\overline{\psi([x,y])}}.$$ In addition, given a positive, symmetric operator $A$ as above, define the (possibly unbounded) operator $d_A$ from $\ell^2(V)$ to $\ell^2({\overline{E}})$ by $$d_A f([x,y]) = \sqrt{a_{x,y}} (f(x) - f(y)).$$
[\[divergence\]]{} Suppose $f$ and $g$ are two functions on the graph and one of them has finite support. Then $$(Af,g) = <d_A f, d_A g>.$$ In particular, if $f$ has finite support, $(Af,f) \geq 0$ with equality if and only if $f\equiv 0$.
The proof is a simple calculation. $$\begin{aligned}
(Af,g)&=&\sum_x \left ( \sum_{y\sim
x} a_{x,y} (f(x) - f(y)) \right ){\overline{g(x)}}\\
& =&
\sum_{\{x,y\}\in E} a_{x,y} (f(x)-f(y)){\overline{(g(x)-g(y))}} = <d_A f,d_A g>\end{aligned}$$ To justify it note that an edge $\{z,w\}$ contributes to the first sum twice. The contribution is $$\begin{aligned}
a_{z,w} (f(z)-f(w)){\overline{g(z)}} &+& a_{w,z} (f(w)-f(z)){\overline{g(w)}} = \\[5pt] &&
a_{z,w} (f(z)-f(w)){\overline{(g(z)-g(w))}}\end{aligned}$$ since $a_{z,w}$ is symmetric. This proves that the two sums are equal. The statement about strict positivity of $(Af,f)$ follows trivially.
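The identity $(Af,g) = <d_A f, d_A g>$ of Lemma \[divergence\] can also be checked numerically on a small graph; a sketch (the edges and weights below are arbitrary, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A small weighted graph with symmetric positive edge weights a_{x,y}.
n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4), (0, 3)]
a = {e: rng.uniform(0.5, 2.0) for e in edges}

def A_apply(f):
    """Af(x) = sum_{y ~ x} a_{x,y} (f(x) - f(y))."""
    g = np.zeros(n)
    for (x, y), w in a.items():
        g[x] += w * (f[x] - f[y])
        g[y] += w * (f[y] - f[x])
    return g

def dirichlet(f, g):
    """<d_A f, d_A g> = sum over edges of a_{x,y} (f(x)-f(y)) conj(g(x)-g(y))."""
    return sum(w * (f[x] - f[y]) * np.conj(g[x] - g[y]) for (x, y), w in a.items())

f, g = rng.normal(size=n), rng.normal(size=n)
print(np.dot(A_apply(f), np.conj(g)))   # (Af, g)
print(dirichlet(f, g))                  # the same number: <d_A f, d_A g>
```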
We wish to consider $L=A+W$ as an unbounded operator on $\ell^2(V)$ and to study its spectrum. In order to obtain a reasonable setup we will require that the potential $W$ be bounded from below by a constant, $W(x) \geq c$ for all $x \in V$. By the lemma above, $L$ is semi-bounded, i.e. $(Lf,f) \geq c (f,f)$ for every $f\in C_0(V)$. By Theorem X.23 of [@simon-reed2], $L$ then has a distinguished self-adjoint extension, the Friedrichs extension, $\hat{L}$ such that $\lambda_0(\hat{L})$, the bottom of the spectrum of $\hat{L}$, has a variational characterization $$\label{var-princ}
\lambda_0(\hat{L}) = \inf_{f\in C_0(V)\setminus \{0\}}
\frac{(Lf,f)}{(f,f)}.$$ We will abuse the notation and write $\lambda_0(L)$ for $\lambda_0(\hat{L})$.
In general, without any further restrictions, the operator $L$ with domain $C_0(V)$ may have many self-adjoint extensions. The theorem below gives conditions under which $L$ is essentially self-adjoint, i.e. has a unique self-adjoint extension, cf. [@simon-reed2], Theorem X.28.
\[a+w-sa\] Suppose that $A$ is positive, symmetric and bounded as an operator on $\ell^2(V)$. Let $W$ be bounded from below by a constant. Then $L=A+W$ is essentially self-adjoint on $C_0(V)$.
Choose a positive constant $\kappa$ so that $W+\kappa\geq 1$. By Theorem X.26 of [@simon-reed2], it suffices to show that $$(A+W+\kappa)^*f=0 \label{adj-eq}$$ implies that $f=0$. Taking the inner product of the equation above with the function $\delta_x$ ($\delta_x(y) = 1$ if $x=y$ and $0$ otherwise), using the definition of the adjoint and Lemma \[divergence\], we see that (\[adj-eq\]) is equivalent to $$(A+W+\kappa)f=0, \qquad\qquad f\in \ell^2(V)$$ where $(A+W+\kappa)f$ is computed pointwise as in (\[local\]) with $W$ replaced by $W+\kappa$. Since $A$ is bounded and $C_0(V)$ is dense in $\ell^2(V)$, $(Af,f)\geq 0$ by Lemma \[divergence\]. Therefore, $0=(Af,f)+((W+\kappa)f,f) \geq (f,f)$. It follows that $f=0$ which proves the theorem.
\[bounded\] Observe that the condition that $A$ be bounded holds if $a=\sup{a_{x,y}}<\infty$ and $M=\sup m(x) < \infty$. In fact, in this case $\parallel A \parallel \leq 2 aM$.
We view Theorem \[a+w-sa\] as an analog of Theorem X.28 of [@simon-reed2] which applies to a differential operator $-\Delta + V$ on ${\mathbb{R}}^n$. Clearly, $\Delta$ is unbounded, but the unboundedness is an infinitesimal effect that does not occur for difference operators on graphs. We view the boundedness of $A$, or the condition $a<\infty$, as a partial replacement of uniform ellipticity (see Corollary \[growth-gr-state\] below for a proper analog of uniform ellipticity). Similarly, $M< \infty$ is a bounded geometry condition.
We now state two local results. Their continuous analogs - the maximum principle and Harnack’s inequality - are discussed at great length in [@protter-weinberger]. Let $V_1 \subset V$ be a set of vertices and let $G_1$ be the full subgraph of $G$ generated by $V_1$ (i.e. the set of edges of $G_1$ consists of all edges $\{x,y\}$ of G such that $x,y\in V_1$). Let ${\overset{o}{V_1}} = \{x\in V_1\mid y \sim x \quad \text{implies}\quad y\in V_1\}$ and $\partial V_1 = V_1\setminus
{\overset{o}{V_1}}$. We say that ${\overset{o}{V_1}}$ is connected if every two of its vertices $x$, $y$ can be connected by a path of edges $[x_0,x_1],[x_1,x_2],\: \ldots\: ,[x_{n-1},x_n]$, $x_0=x, x_n=y$ with $x_i \in {\overset{o}{V_1}}$ for $i=0,1,\ldots ,n$.
\[max-pr\] Let $L=A+W$ where $A$ is positive, symmetric and $W$ is nonnegative. Suppose $V_1 \subset V$ is a subset with ${\overset{o}{V_1}}$ connected. Let $f$ be a function on $V_1$ such that $$Lf(x)=Af(x) +W(x)f(x) \geq 0\quad \text{for}\quad x\in {\overset{o}{V_1}}.$$ If $f$ has a minimum at $x_0\in {\overset{o}{V_1}}$ and $f(x_0)\leq 0$ then $f$ is constant on $V_1$.
Suppose $x_0\in {\overset{o}{V_1}}$ is a minimum and $f(x_0)\leq 0$. Then $$0\leq \sum_{y\sim x_0} a_{x_0,y}(f(x_0) -f(y)) +W(x_0)f(x_0) \leq 0$$ since $A$ is positive, $x_0$ is a minimum, and $W(x_0)f(x_0) \leq 0$. It follows that all terms in the sum above are equal to zero, i.e. $f(y) = f(x_0)$ for every $y\sim x_0$. By connectedness, $f$ is constant.
\[harnack\] Suppose $A$ and $W$ satisfy the assumptions of Lemma \[max-pr\]. Let $V_1\subset V$, $x\sim y$, $x,y\in{\overset{o}{V_1}}$. If $$Lf=Af + Wf \geq 0\quad \text{and}\quad f>0 \quad \text{on}\quad V_1$$ then $$\frac{a_{x,y}}{\left ( W(x) + \sum_{z\sim x} a_{x,z} \right )} \leq \frac{f(x)}{f(y)}\leq
\frac{\left ( W(y) + \sum_{z\sim y} a_{y,z} \right )}{a_{x,y}}.$$
By symmetry, it suffices to prove one of the two inequalities above. We have $$(A+W)f(x) = \sum_{z\sim x} a_{x,z} (f (x) -f (z)) +W(x)f (x) \geq 0.$$ Therefore, $$\left (\sum_{z\sim x} a_{x,z} \right )f(x) + W(x)f(x) \geq \sum_{z\sim x} a_{x,z} f(z)
\geq a_{x,y} f(y).$$ This, of course, is equivalent to the lower bound on $f(x)/f(y)$ in the statement of the lemma.
We refer to Lemma \[max-pr\] as the maximum principle and to Lemma \[harnack\] as the Harnack inequality. The significance of the Harnack inequality is that it gives a bound of the ratio $f(x)/f(y)$ in terms of the coefficients of the operator but *independent* of the function $f$.
Existence of ground state
=========================
In this section we prove, for an operator $L=A+W$ with positive, symmetric A and the potential W bounded from below by a constant, the existence of a ground state, i.e. a positive solution of the equation $$L\phi =\lambda_0(L) \phi,$$ cf. [@pinsky] for an extensive discussion in the continuous setting. We assume that the underlying graph $G$ is connected and fix a vertex $x_0$ as an “origin”. Consider the exhaustion $\{G_n\}_{n=1}^\infty$ of $G$ where, for every $n$, $G_n$ is the full subgraph with the vertex set $V_n=\{x\in V \mid d(x_0,x) \leq n\}$. Here, $d(x,y)$ denotes the combinatorial distance between $x,y\in V$, i.e. the length of the shortest path of edges connecting $x$ with $y$. Clearly, ${\overset{o}{V_n}}$ is connected for every $n\geq 1$. We will construct a ground state $\phi$ by solving certain “boundary value problems” on $G_n$ and taking a limit of the solutions. In order to get started we need to review these boundary value problems. Thus, let $U$ be a finite subset of $V$ such that the full subgraph generated by $U$ has connected interior. Let $C_0(U)$ be the space of functions on $U$ that vanish on $\partial U$. Extending functions in $C_0(U)$ by zero embeds $C_0(U)$ isometrically in $C_0(V)$. We define, for $f\in C_0(U)$, $L_Uf\in C_0(U)$ by $$L_Uf(x)=\begin{cases}
W(x)f(x)+\sum_{x\sim y}a_{x,y}(f(x)-f(y)) & \text{if $x\in {\overset{o}{U}}$},\\
0 & \text{if $x\in \partial U$}.
\end{cases}$$ We can define $A_Uf\in C_0(U)$ for $f\in C_0(U)$ analogously. The calculation in the proof of Lemma \[divergence\] shows that $A_U$ and $L_U$ are symmetric operators on $C_0(V)$ and that $A_U$ is strictly positive. It follows that $\lambda_0(L_U)$, the smallest eigenvalue of $L_U$ on $C_0(U)$, has variational characterization $$\label{var}
\lambda_0(L_U) = \inf_{f\in C_0(U)\setminus \{0\}}
\frac{(L_Uf,f)}{(f,f)} =
\inf_{f\in C_0(U)\setminus \{0\}} \frac{(Lf,f)}{(f,f)}$$ where in the last expression above we identify $f$ with its extension by zero outside $U$.
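On a finite set $U$ the operator $L_U$ is simply a symmetric matrix acting on functions supported in ${\overset{o}{U}}$, so $\lambda_0(L_U)$ can be computed by a dense eigensolver. A small sketch (a path graph with unit weights and a constant potential, chosen only for illustration):

```python
import numpy as np

# Path graph 0-1-2-3-4-5 with unit weights a_{x,y} = 1 and a constant potential.
# Take U = all six vertices, so the interior of U is {1, 2, 3, 4} and the boundary
# is {0, 5}; functions in C_0(U) vanish on the boundary.
interior = [1, 2, 3, 4]
W = 0.2

idx = {x: i for i, x in enumerate(interior)}
LU = np.zeros((len(interior), len(interior)))
for x in interior:
    LU[idx[x], idx[x]] = W + 2.0          # W(x) + sum_{y ~ x} a_{x,y}
    for y in (x - 1, x + 1):
        if y in idx:                      # boundary neighbours contribute nothing
            LU[idx[x], idx[y]] -= 1.0

evals, evecs = np.linalg.eigh(LU)
lam0, psi = evals[0], evecs[:, 0]
print("lambda_0(L_U) =", lam0)
print("lowest eigenvector has constant sign:", bool(np.all(psi > 0) or np.all(psi < 0)))
```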
The eigenspace of $\lambda_0(L_U)$ is one-dimensional and every eigenfunction $\psi$ belonging to $\lambda_0(L_U)$ has constant sign in the interior of $U$.
It is enough to consider real-valued functions. Replacing $W$ by $W+c$ with a suitably large $c$, we can assume that $W$ is nonnegative. Since $$(L_Uf,f) = \sum_{x\sim y, \: x\in U,\:y\in{\overset{o}{U}}} a_{x,y} (f(x)-f(y))^2 + \sum_{x \in {\overset{o}{U}}}W(x)f(x)^2$$ replacing $f$ by $|f|$ does not increase the Rayleigh-Ritz quotient in (\[var\]). It follows that if $\psi$ is an eigenfunction belonging to $\lambda_0(L_U)$ then $|\psi|$ is one as well. Thus we can assume that there exists a nonnegative eigenfunction $\psi$. Since the Rayleigh-Ritz quotient is nonnegative, $\lambda_0(L_U) \geq 0$. The maximum principle in Lemma \[max-pr\] implies that $\psi$ is strictly positive in ${\overset{o}{U}}$. Finally, if the eigenspace of $\lambda_0(L_U)$ had two or more dimensions, there would exist another eigenfunction $\phi$ orthogonal to $\psi$. Therefore $\phi$ would have to change sign and be negative at an interior point, but this is impossible by the maximum principle.
We are now ready to prove
\[gr-state\] Consider an operator $L=A+W$ on a connected graph $G$ with positive, symmetric A and the potential $W$ bounded below by a constant. There exists a ground state $\phi$ for $L$ i.e. a function $\phi > 0$ on $V$ such that $$L\phi = \lambda_0\phi$$ where $\lambda_0=\lambda_0(L)$ is the bottom of the spectrum of (the Friedrichs extension of) $L$ on G.
The proof for the case of the combinatorial Laplacian was given in [@dod-mat3]. We follow the same line of argument here but remark that an exhaustion argument of this kind is applied very often in studying partial differential equations on noncompact domains or domains with non-smooth boundaries as, for example, in [@pinsky], Chapter 4. Note first that by adding a suitable constant to the potential $W$ we can assume without any loss of generality that $W>0$. We use the exhaustion of $G$ by finite subgraphs $G_n$ described above. Let $\lambda_n=\lambda_0(L_{G_n})$ and let $\phi_n$ be the corresponding positive eigenfunction of $L$ on $C_0(V_n)$ normalized so that $\phi_n(x_0)=1$. By the variational characterization of eigenvalues and of the bottom of the spectrum (\[var-princ\]), (\[var\]) we have $\lambda_n\searrow \lambda_0$. Fix a point $y\in V$. Then, there exists $k=k(y)$ such that $y\in {\overset{o}{V_n}}$ for all $n>k$. Choose a path of length $d(x_0,y)$ that connects $x_0$ and $y$. Using the normalization $\phi_n(x_0)=1$ and applying the local Harnack inequality in Lemma \[harnack\] to successive edges of the path, we see that the sequence $\phi_n(y)$ is bounded above and below by positive constants that are independent of $n$. Using the diagonal process, we choose a subsequence $(n_k)_{k=1}^\infty$ such that the sequence $(\phi_{n_k}(y))_{k=1}^\infty$ converges to a limit $\phi(y)$ for every vertex $y\in V$, with $\phi(y)> 0$. Since $L\phi$ is given by the formula (\[local\]) and $\lambda_n\searrow \lambda_0$ we see that $\phi$ is a positive solution of $L\phi=\lambda_0 \phi$ as required.
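The exhaustion argument of the proof can be imitated numerically. The sketch below (for the line graph $\mathbb{Z}$ with the combinatorial Laplacian and $W=0$, so that $\lambda_0=0$ and the constant function is a ground state) computes the Dirichlet ground states $\phi_n$ on balls $B(0,n)$, normalized at the origin, and exhibits $\lambda_n\searrow\lambda_0$ and the pointwise convergence of $\phi_n$:

```python
import numpy as np

def dirichlet_ground_state(n):
    """Lowest Dirichlet eigenpair on the ball B(0, n) of the line graph Z,
    for the combinatorial Laplacian (a = 1, W = 0); interior = {-n+1,...,n-1}."""
    verts = list(range(-n + 1, n))
    idx = {v: i for i, v in enumerate(verts)}
    m = len(verts)
    L = np.zeros((m, m))
    for v in verts:
        L[idx[v], idx[v]] = 2.0
        for w in (v - 1, v + 1):
            if w in idx:
                L[idx[v], idx[w]] = -1.0
    evals, evecs = np.linalg.eigh(L)
    phi = evecs[:, 0] / evecs[idx[0], 0]        # normalize phi_n(0) = 1
    return evals[0], phi[idx[1]]

for n in (5, 10, 20, 40):
    lam_n, phi_at_1 = dirichlet_ground_state(n)
    print(f"n = {n:3d}   lambda_n = {lam_n:.5f}   phi_n(1) = {phi_at_1:.5f}")
# lambda_n decreases to lambda_0 = 0 and phi_n(1) -> 1: the limit is the constant
# ground state of Z, as guaranteed by the theorem.
```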
We now need the following lemma to control the behavior at infinity of a ground state under certain additional assumptions.
\[growth-gr-state\]Assume that $A$ is symmetric and positive, that the graph $G$ has bounded valence $\sup_{x\in V} m(x) = M <\infty$ and that the operator $A$ is uniformly elliptic in the sense that there exist constants $\gamma, \Gamma > 0$ so that $\gamma \leq a_{x,y} \leq \Gamma$ for every edge $\{x,y\}$. Suppose a function $f$ on $V$ satisfies $Af\geq 0$, $f>0$. Then, for every $x,y\in V$, $$\left (\frac{M\Gamma}{\gamma}\right )^{-d(x,y)} \leq \frac{f(x)}{f(y)} \leq \left (\frac{M\Gamma}{\gamma}\right )^{d(x,y)}.$$
By Lemma \[harnack\], $\gamma/M\Gamma \leq f(z)/f(w) \leq M\Gamma/\gamma$ if $z\sim w$. We connect $x$ with $y$ by a path of edges of length $d(x,y)$ and apply these inequalities for every edge along the path. The corollary follows.
Observe that this is entirely analogous to Theorem 21 in [@protter-weinberger].
Cheeger’s inequality
====================
In this section, we assume that $L=A$ and give a lower bound for the bottom of the spectrum of $A$ on $G$. This bound originated in Riemannian geometry [@cheeger] and has been studied a great deal for the combinatorial Laplacian on graphs [[@lubotzky]]{}, [[@d2]]{}, [[@dod-ken]]{}.
As before, let $A$ be a positive, symmetric elliptic operator on an infinite graph $G$ and let $U\subset V$ be a finite subset. We define $$\label{iso-U}
h_A(U) = \frac{\sum_{x\in{\overset{o}{U}},\,y\in \partial U,\,x\sim y} \sqrt{a_{x,y}}}{\# (U)},$$ and $$\label{cheeger}
\beta (G,A) = \inf_U h_A(U)$$ where $\# U$ denotes the number of vertices of $U$.
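The quantity $h_A(U)$ is straightforward to evaluate for concrete sets. The following sketch compares balls in the line graph $\mathbb{Z}$ (where the ratio tends to $0$, consistent with $\lambda_0(\Delta)=0$) with balls around the root of the infinite binary tree; both computations use the combinatorial Laplacian, i.e. $a_{x,y}=1$, and the formulas below are worked out by hand for these two families of sets:

```python
def h_ball_line(n):
    # B(0, n) in Z: #U = 2n + 1, and exactly two edges join the interior
    # {-n+1,...,n-1} to the boundary {-n, n}; sqrt(a_{x,y}) = 1 on every edge.
    return 2.0 / (2 * n + 1)

def h_ball_binary_tree(n):
    # Ball of radius n around the root of the infinite rooted binary tree:
    # #U = 2^(n+1) - 1, and the interior-to-boundary edges are the 2^n edges
    # from level n-1 to level n.
    return (2.0 ** n) / (2 ** (n + 1) - 1)

for n in (2, 4, 8, 16):
    print(f"n = {n:2d}   line: h = {h_ball_line(n):.4f}   tree: h = {h_ball_binary_tree(n):.4f}")
# On Z the ratio tends to 0 (consistent with lambda_0(Delta) = 0), while on the
# tree it stays close to 1/2 for balls; for the tree the infimum over all finite
# sets is in fact positive, so the theorem below yields lambda_0 > 0.
```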
\[cheeger-inq\] Suppose $\sup_{x\in V} m(x) = M < \infty$. The lower bound of the spectrum of $A$ on $G$ satisfies $$\lambda_0(A) \geq \frac{\beta(G,A)^2}{2M}.$$
We follow the proof of Theorem 2.3 of [[@d2]]{}. Let $(G_n)_{n=1}^{\infty}$ be the exhaustion of $G$ used in the proof of Theorem \[gr-state\]. Since $\lambda_n\searrow \lambda_0$ it will suffice to show that $\lambda_n \geq \beta(G,A)^2/{2M}$ independently of $n$. We will fix $n$, set $U=V_n$ and let $\phi$ be a positive eigenfunction of $A_U$. Observe that by Lemma \[divergence\] and (\[var\]) $$\label{lambda}
\lambda_n = \lambda_0(A_U) = \frac{<d_A \phi, d_A\phi>}{(\phi,\phi)}$$ if we extend $\phi$ by zero outside $U$. Consider the expression $${\mathcal{A}}= \sum_{\{x,y\}\in E} \sqrt{a_{x,y}}|\phi^2(x)-\phi^2(y)|.$$ By the Cauchy-Schwarz inequality we have $$\begin{aligned}
{\mathcal{A}}&=& \sum_{\{x,y\}}\sqrt{a_{x,y}} |\phi(x)-\phi(y)|\,|\phi(x) + \phi (y)|\\
&\leq & \left ( \sum_{\{x,y\}} |\phi(x)+\phi(y)|^2\right )^{1/2} \,\left (\sum_{\{x,y\}} a_{x,y} |\phi(x)-\phi(y)|^2\right )^{1/2}\\
&\leq& \sqrt{2}\left ( \sum_{\{x,y\}} (\phi^2(x)+\phi^2(y))\right )^{1/2} \, (d_A\phi ,d_A\phi )^{1/2}.\end{aligned}$$ In $\sum_{\{x,y\}}(\phi^2(x) + \phi^2(y))$, every vertex contributes as many times as the number of edges emanating from it. Hence we get the following upper bound on ${\mathcal{A}}$. $$\label{a-upper}
{\mathcal{A}}\leq \sqrt{2M}\,(\phi ,\phi)^{1/2}\, (d_A\phi, d_A\phi )^{1/2}.$$ On the other hand we can estimate ${\mathcal{A}}$ from below in terms of $(\phi,\phi)$ as follows. Let $0=\nu_0<\nu_1<\nu_2< \ldots<\nu_N$ be the sequence of all values of $\phi^2$. Note that, since $A\phi (x) = \lambda_0(A_U) \phi (x)$ at every interior vertex $x$ and since $\lambda_0(A_U) > 0$ by (\[lambda\]), every interior vertex $x$ will have a neighbor $y$ such that $\phi(x) > \phi (y)$. Define a set of vertices $U_i$, $i=1,2,\ldots ,N$ as follows. A vertex $x\in U$ belongs to $U_i$ if and only if $\phi^2(x) \geq \nu_i$, and let $F_i$ be the full graph generated by the set $U_i$. Now $${\mathcal{A}}= \sum_{i=1}^N \sum_{\phi^2(x)=\nu_i} \sum_{y\sim x,\,\phi^2(y)<\nu_i} \sqrt{a_{x,y}}(\phi^2(x)-\phi^2(y)).$$ If $\phi^2(x)=\nu_i$ and $\phi^2(y)=\nu_{i-k}$ for some $k\in \{1,2,\ldots,i\}$, then on the one hand, $\phi^2(x)-\phi^2(y)=(\nu_i-\nu_{i-1}) + (\nu_{i-1} - \nu_{i-2}) + \ldots + (\nu_{i-k+1}-\nu_{i-k})$ and, on the other hand, $x \in \partial U_i \cap \partial U_{i-1} \cap \ldots \cap \partial U_{i-k+1}$. It follows that $${\mathcal{A}}\geq \sum_{i=1}^N (\nu_i-\nu_{i-1})\sum_{y\sim x,\,y\in\partial U_i} \sqrt{a_{x,y}}.$$ Applying (\[iso-U\]) and (\[cheeger\]) to each $U_i$ we obtain $${\mathcal{A}}\geq \sum_{i=1}^N h_A(U_i)\,\#U_i\,(\nu_i - \nu_{i-1})\geq \beta \sum_{i=1}^N \#U_i(\nu_i - \nu_{i-1})$$ with $\beta=\beta(G,A)$. “Summation by parts” now yields $${\mathcal{A}}\geq \beta\left ( \nu_N \#U_N + \sum_{i=1}^{N-1} \nu_i(\#U_i - \# U_{i+1})\right ) .$$ Observe that $\#U_N$ is the cardinality of the set where $\phi^2=\nu_N$ while $\#U_i-\#U_{i+1}$ is the number of points where $\phi^2=\nu_i$. It follows that $${\mathcal{A}}\geq \beta (\phi,\phi).$$ This inequality combined with (\[lambda\]) and (\[a-upper\]) gives the desired lower bound.
We remark that one can also bound $\lambda_0(A)$ from above by a related isoperimetric constant. Namely, let $\chi_U$ be the characteristic function of a finite set of vertices $U\subset V$. Then $$\lambda_0(A) \leq \frac{<d_A\chi_U,d_A\chi_U>}{(\chi_U,\chi_U)} = \frac{\sum_{x\sim y,x\in U, y\not\in U}a_{x,y}}{\#U}$$ It follows that $$\lambda_0(A) \leq \beta_1(G,A) = \inf \frac{\sum_{x\sim y,x\in U, y\not\in U}a_{x,y}}{\#U}$$ where the infimum is taken over all finite subsets $U$ of $V$.
Note that for the combinatorial Laplacian $\Delta$, $a_{x,y}\equiv 1$. Thus $\beta(G,A)=\beta_1(G,A)$. In particular, for graphs of bounded valence $\lambda_0(\Delta) =0$ if and only if $\beta(G,\Delta) =0$ which is analogous to a result of Buser [@buser-uppr] in the Riemannian setting and is very useful in connection with various characterizations of amenability of discrete, finitely generated groups [@brooks2].
The heat equation
=================
In this section we make several standing assumptions. Namely, we assume that the graph G has bounded valence $\sup_{x\in V} m(x) =M < \infty$; that the potential $W\equiv 0$ i.e. $L=A$; and that $a=\sup_{\{x,y\}\in E} a_{x,y} <\infty$. We shall study the parabolic initial value problem $$\label{initial}
\begin{split}
Au + \frac{\partial u}{\partial t} &= 0\\
u(x,0) &= u_0(x)
\end{split}$$ and the associated heat semigroup using the method of [@d3] applied previously to the combinatorial Laplacian in [@dod-mat3]. Here $u(x,t)$ is a function of $x\in V$ and $t>0$, while $u_0$ is a given function on $G$. The first equation above will be referred to as the heat equation.
We are going to use the following version of the maximum principle, see [@protter-weinberger], Chapter 4 for an analog in the continuous setting.
\[max-par\] Suppose $u(x,t)$ satisfies the inequality $Au + \frac{\partial
u}{\partial t} < 0$ on $\overset{o}U \times [0,T]$ for a finite subset $U$ of $V$. Then the maximum of $u$ on $U\times [0,T]$ is attained on the set $U\times \{0\} \cup \partial U\times [0,T]$.
Suppose $(x_0,t_0)\in \overset{o}{U} \times (0,T]$ is a maximum. It follows that $\frac{\partial u}{\partial t}(x_0,t_0)$ is nonnegative so that $Au(x_0,t_0) < 0$. On the other hand, (\[local\]) and positivity of $A$ imply that $Au(x_0,t_0) \geq 0$. The contradiction proves the lemma.
We use the lemma above to prove the uniqueness of bounded solutions of (\[initial\]).
\[unique\] Let $u(x,t)$ be a bounded solution of (\[initial\]) with the initial condition $|u_0(x)|\leq N_0$. Then $u$ is determined uniquely by $u_0$ and $$|u(x,t)| \leq N_0$$ for all $(x,t)$. Moreover, if a bounded initial condition $u_0$ is given, then a bounded solution $u(x,t)$ of (\[initial\]) exists.
Suppose that $u(x,t)$ is a bounded solution. Let $N_1= \sup
|u(x,t)|$. Fix $x_0 \in V$ and define $r(x) = d(x,x_0)$. By our assumption on the valence and (\[local\]) $$\label{Ar}
|Ar| \leq aM .$$ Consider an auxiliary function $$v(x,t) = u(x,t) - N_0 -
\frac{N_1}{R} \left(r(x) + a(M+1)t\right ),$$ where $R$ is a large parameter. Let $U = B(x_0,R)$ be the set of vertices of $V$ at distance at most $R$ from $x_0$. The function $v(x,t)$ is nonpositive on the set $U\times \{0\} \cup \partial U\times [0,T]$ and satisfies $(A
+ \frac{\partial}{\partial t}) v < 0$ on $\overset{o}U \times[0,T]$ because of (\[Ar\]). Lemma \[max-par\] implies therefore that $v(x,t) \leq 0$ so that $$u(x,t) \leq N_0 + \frac{N_1}{R}\left (r(x) + a(M+1)t\right )$$ on $B(x_0,R)\times
[0,T]$. Keeping $(x,t)$ fixed and letting $R$ increase without bounds, we see that $ u(x,t)\leq N_0$. Applying the same argument to $-u$ yields $|u(x,t)| \leq N_0$. Since $T > 0$ and $x$ were arbitrary, this last inequality holds for all $x\in V $ and $t\geq
0$. Uniqueness follows by considering the difference of two solutions. We postpone the proof of existence of the solution.
Recall that under our assumption $A$ is a bounded operator on $\ell^2(V)$. Therefore, we can define for $t\geq 0$ $$\label{semi}
P_t = e^{-tA} = \sum_{k=0}^\infty (-1)^k \frac{t^kA^k}{k!}.$$ Obviously, $u(x,t)=\left ( P_t u_0\right )(x)$ is a solution of (\[initial\]) whenever $u_0$ is in $\ell^2(V)$. Since $\parallel P_t\parallel \leq 1$ we see that for every $x \in V$ and $t\geq 0$ $$|u(x,t)| \leq \parallel u(\cdot , t) \parallel \leq \parallel u_0 \parallel$$ so that $u(x,t)$ is a bounded solution and we get uniqueness. We would like to extend the semigroup $P_t$ to a larger class of functions.
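Since $A$ is bounded, the series (\[semi\]) can be evaluated directly (or, equivalently, via a matrix exponential routine) on any finite graph. A small sketch, assuming NumPy and SciPy and using the combinatorial Laplacian of a cycle purely as an example:

```python
import numpy as np
from scipy.linalg import expm

# Combinatorial Laplacian of a cycle on n vertices (valence M = 2, weights a = 1),
# so A is bounded and P_t = exp(-tA) is given by the convergent series (semi).
n = 20
A = np.zeros((n, n))
for x in range(n):
    A[x, x] = 2.0
    A[x, (x + 1) % n] = -1.0
    A[x, (x - 1) % n] = -1.0

u0 = np.zeros(n)
u0[0] = 1.0                                   # initial condition delta_0

for t in (0.0, 0.5, 2.0):
    u = expm(-t * A) @ u0
    print(f"t = {t}:  max u = {u.max():.4f}   sum u = {u.sum():.4f}")
# The total mass is conserved here because W = 0 and A annihilates constants.
```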
We define $p_t(x,y)$ to be matrix coefficients of the operator $P_t$, i.e. $$p_t(x,y)=(P_t\delta_x,\delta_y)$$ where $\delta_x$ is the characteristic function of the set $\{x\}$. Similarly, let $A(x,y)=(A\delta_x,\delta_y)$. Since $A$ is self-adjoint both of these matrices are symmetric. Writing $u_0 = \sum_y u_0(y)\delta_y$ and using the symmetry, we see that $$\label{series-heat}
P_tu_0(x) = (P_tu_0,\delta_x)=\sum_y p_t(x,y)u_0(y)$$ for $u_0\in\ell^2(V)$. Substituting $u_0=\delta_y$ we see that $p_t(x,y)$ satisfies the heat equation in variables $x,t$. We try to extend $P_t$ to functions that are not necessarily in $\ell^2(V)$ by using this formula and verifying the convergence of the series. To do this we shall need an estimate in the lemma below of $p_t(x,y)$ for $t\in[0,T]$ and $d(x,y)$ large.
\[heat-decay\] For every $T>0$ there exists a constant $C=C(a,M,T)>0$ such that $$p_t(x,y) \leq \frac{C}{d(x,y)!}$$ for all $t\in [0,T]$.
Write $A^n(x,y)$ for the matrix coefficient of the $n$-th power of $A$. Then $A(x,y)=0$ if $d(x,y)>1$ by the locality of $A$. It follows, that $A^n(x,y)=0$ if $d(x,y) > n$. Now suppose that $d(x,y)=k$. It follows from (\[semi\]) that $$\label{series}
p_t(x,y) = \sum_{n=k}^\infty\frac{(-t)^nA^n(x,y)}{n!}.$$ Since the operator $A$ is bounded with $\parallel A \parallel \leq 2aM$, $$|A^n(x,y)| =
|(A^n\delta_x,\delta_y)| \leq 2^na^nM^n.$$ Therefore the series obtained by factoring out $1/k!$ from (\[series\]) is easily seen to be uniformly bounded for $t\leq T$. This proves the lemma.
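The factorial decay can be observed numerically; the sketch below (a finite path graph standing in for an infinite one, with parameters chosen only for illustration) compares $p_t(0,y)$ with $1/d(0,y)!$:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

# Finite path 0..N-1 with the combinatorial Laplacian, standing in for an
# infinite graph; for t = 1 the kernel p_t(0, y) already decays factorially.
N = 30
A = np.zeros((N, N))
for x in range(N):
    A[x, x] = (x > 0) + (x < N - 1)            # valence of x
    if x > 0:
        A[x, x - 1] = -1.0
    if x < N - 1:
        A[x, x + 1] = -1.0

t = 1.0
P = expm(-t * A)
for y in (1, 3, 6, 10, 15):
    print(f"d = {y:2d}   p_t(0,y) = {P[0, y]:.3e}   1/d! = {1.0 / factorial(y):.3e}")
# For these parameters p_t(0, y) stays below 1/d(0,y)!, illustrating the
# super-exponential decay in the distance asserted by the lemma.
```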
The lemma says that for $t$ bounded, the heat kernel $p_t(x,y)$ decays very rapidly as the distance $d(x,y)$ goes to infinity. This is a familiar behavior of the heat kernel of a Riemannian manifold of bounded geometry. Thus we can substitute for $u_0$ in (\[series-heat\]) functions of moderate growth so that the series defining $u(x,t)$ converges and produces a solution of (\[initial\]). In particular, this yields existence of bounded solutions of (\[initial\]) asserted in Theorem \[unique\]. More precisely, for bounded initial data $|u_0| \leq c$, we define the solution of (\[initial\]) by (\[series-heat\]) and group the terms as follows $$\sum_y p_t(x,y)u_0(y) = \sum_{k=0}^{\infty} \left ( \sum_{d(x,y)=k } p_t(x,y)u_0(y) \right ).$$ By our assumption on the valence, the number of terms in the inner sum is at most $M^k$. Thus, for bounded $t$, the absolute value of the $k$-th term together with its $t$ derivative is dominated by $(C/k!)M^k c$ because of Lemma \[heat-decay\]. This shows that the series converges very rapidly and can be differentiated term by term, proving the existence assertion of Theorem \[unique\]. For future reference we make the following
\[allow-growth\] In the argument above we could have allowed $u_0$ to grow at a certain rate. For example, the argument goes through if $|u_0(y)| \leq c_1e^{c_2 d(x,y)}$.
Our next result gives a relation between a ground state and the heat semigroup. It illustrates a technique used frequently in the study of diffusions [@sullivan-lambda], [@pinsky], [@dod-mat3]. Let ${\mathcal{H}}= \{ u:V \longrightarrow {\mathbb{C}}\mid u\cdot \phi \in \ell^2 (V) \}$. It is a Hilbert space with the inner product $<u,v>=\sum_{x\in V} u(x){\overline{v}}(x)\phi^2(x)$. We use the ground state $\phi$ to transplant the semigroup $P_t$ to ${\mathcal{H}}$. Namely, define $\tilde{P}_t$ as a bounded self-adjoint operator on ${\mathcal{H}}$ by $$\label{renorm}
\tilde{P}_t = e^{\lambda _0 t} [\phi^{-1}]P_t[\phi] = e^{\lambda _0 t} [\phi^{-1}]e^{-tA}[\phi],$$ where $\lambda_0 = \lambda_0(A)$ and $[f]$ denotes the operator of multiplication by a function $f$. Observe that for $u_0\in {\mathcal{H}}$ $$\label{renorm-kernel}
\tilde{P}_t u_0(x)= e^{\lambda_0 t}\sum_y \frac{1}{\phi(x)}p_t(x,y)\phi(y)u_0(y)$$ by (\[series-heat\]). Clearly, $\tilde{P}_t$, $t\geq 0$ is a semigroup with infinitesimal generator $$-\tilde{A} = -[\phi^{-1}] (A-\lambda_0)[\phi].$$ The following calculation gives a local formula for $\tilde{A}$. $$\begin{aligned}
\tilde{A}u(x) & = & \phi^{-1}(x) A(\phi u)(x) -\lambda_0 u(x) \nonumber \\
& = & \phi^{-1}(x) \sum_{y\sim x} a_{x,y} \left (\phi(x)u(x) -\phi(y)u(x)\right ) \nonumber \\
&& + \phi^{-1}(x) \sum_{y\sim x} a_{x,y} \left ( \phi(y)u(x) -\phi(y)u(y)\right )-\lambda_0 u(x) \nonumber\\
& = & \lambda_0 u(x) + \sum_{y\sim x} a_{x,y} \frac{\phi(y)}{\phi(x)}\left (u(x) - u(y)\right ) - \lambda_0 u(x)\nonumber\\
& = & \sum_{y\sim x} a_{x,y} \frac{\phi(y)}{\phi(x)} \left (u(x) - u(y)\right ). \label{generator}\end{aligned}$$ Note that $\tilde{A}$ is different from the local operators considered until now, as its coefficients are not symmetric in $x,y$. We will, however, consider the initial value problem analogous to (\[initial\]) for the operator $\tilde{A}$.
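The ground-state transform is easy to illustrate on a finite graph: take the lowest eigenvector $\phi$ of $L=A+W$ and form the coefficients $a_{x,y}\phi(y)/\phi(x)$ of (\[generator\]). A sketch (a path graph and an arbitrary potential, for illustration only):

```python
import numpy as np

# Ground-state transform on a finite path: phi is the lowest eigenvector of
# L = A + W (strictly positive), and Atilde has coefficients a_{x,y} phi(y)/phi(x).
n = 8
A = np.zeros((n, n))
for x in range(n):
    A[x, x] = (x > 0) + (x < n - 1)
    if x > 0:
        A[x, x - 1] = -1.0
    if x < n - 1:
        A[x, x + 1] = -1.0
W = np.linspace(0.0, 1.0, n)                   # an arbitrary potential, bounded below
L = A + np.diag(W)

evals, evecs = np.linalg.eigh(L)
lam0 = evals[0]
phi = np.abs(evecs[:, 0])                      # ground state, taken positive

def Atilde(u):
    """Formula (generator): sum_{y ~ x} a_{x,y} (phi(y)/phi(x)) (u(x) - u(y))."""
    out = np.zeros(n)
    for x in range(n):
        for y in [v for v in (x - 1, x + 1) if 0 <= v < n]:
            out[x] += (phi[y] / phi[x]) * (u[x] - u[y])
    return out

print(np.allclose(Atilde(np.ones(n)), 0.0))    # constants are annihilated by Atilde
print(np.allclose(L @ phi, lam0 * phi))        # phi satisfies L phi = lambda_0 phi
```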
Under the assumptions stated in the beginning of this section, the initial value problem $$\begin{split}
\tilde{A}u + \frac{\partial u}{\partial t} &= 0\\
u(x,0) &= u_0(x)
\end{split}$$ has a unique bounded solution $u(x,t)$ for every bounded function $u_0$.
The proof is completely analogous to the proof of Theorem \[unique\]. The uniqueness used only the maximum principle in Lemma \[max-par\] which in turn depended only on positivity and *not on symmetry* of the coefficients of the operator $A$. The proof thus applies equally well to the operator $\tilde{A}$ whose coefficients are positive by (\[generator\]) since the ground state $\phi$ is positive. Similarly, one proves existence for bounded initial data using the formula (\[renorm-kernel\]) and applying Remark \[allow-growth\] together with the estimate of Corollary \[growth-gr-state\].\
The following corollary is of independent interest. Its special case was used to derive certain estimates of the heat kernel for the combinatorial Laplacian in [@dod-mat3].
Under the assumption of this section, the ground state $\phi$ of $A$ is complete i.e. satisfies$$P_t\phi=e^{-\lambda_0t}\phi.$$
By the theorem above, $\tilde{P}_t$ applied to the function $u_0\equiv 1$ is a solution of the equation $\tilde{A}u+\frac{\partial u}{\partial t} =0$ with the initial data $u_0$. The function identically equal to one is also a solution. By uniqueness, the two solutions are equal i.e. $$e^{\lambda_0 t}\sum_y \frac{1}{\phi(x)}p_t(x,y)\phi(y)=1$$ for all $t>0,x\in V$. This proves the corollary.
[**Acknowledgement:**]{} I am very grateful to Radek Wojciechowski for a careful reading of the paper, correcting errors and making suggestions that led to improvements in the exposition.
[10]{}
R. Brooks. The fundamental group and the spectrum of the [L]{}aplacian. [*Comment. Math. Helv.*]{} [**56**]{} \#4 (1981), 581–598.
P. Buser. A note on the isoperimetric constant. [*Ann. Sci. École Norm. Sup.*]{} (4) [**15**]{} (1982), 213–230.
J. Cheeger. A lower bound for the smallest eigenvalue of the [L]{}aplacian. In [*Problems in Analysis*]{}, 195–199, Princeton, New Jersey, 1970. Princeton University Press.
Y. Colin de Verdière. [*Spectres de graphes*]{}, Cours Spécialisés \[Specialized Courses\] [**4**]{}. Paris: Société Mathématique de France, 1998.
J. Dodziuk. Maximum principle for parabolic inequalities and the heat flow on open manifolds. [*Indiana Univ. Math. J.*]{} [**32**]{} \#5 (1983), 703–716.
J. Dodziuk. Difference equations, isoperimetric inequality and transience of certain random walks. [*Trans. Amer. Math. Soc.*]{} [**284**]{} (1984), 787–794.
J. Dodziuk & [W. Kendall]{}. Combinatorial [L]{}aplacian and isoperimetric inequality. In [*From local times to global geometry, control and physics*]{}, [K. D. Ellworthy]{}, ed., Pitman Research Notes in Mathematics Series [**150**]{}, 68–74. Longman Scientific [&]{} Technical, 1986.
J. Dodziuk & [V. Mathai]{}. Kato’s inequality and asymptotic spectral properties for discrete magnetic [L]{}aplacians. In [*The Ubiquitous Heat Kernel*]{}, [J. Jorgenson]{} & [L. Walling]{}, eds., Contemporary Mathematics, to appear.
A. Lubotzky. [*Discrete Groups, Expanding Graphs and Invariant Measures*]{}, Progress in Mathematics [**125**]{}. Basel: Birkhäuser Verlag, 1994. With an appendix by Jonathan D. Rogawski.
R. G. Pinsky. [*Positive Harmonic Functions and Diffusion*]{}, Cambridge Studies in Advanced Mathematics [**45**]{}. Cambridge: Cambridge University Press, 1995.
M. H. Protter & [H. F. Weinberger]{}. [*Maximum Principles in Differential Equations*]{}. New York: Springer-Verlag, 1984. Corrected reprint of the 1967 original.
M. Reed & [B. Simon]{}. [*Methods of Modern Mathematical Physics II: Fourier Analysis, Self-Adjointness*]{}. New York: Academic Press \[Harcourt Brace Jovanovich Publishers\], 1975.
D. Sullivan. Related aspects of positivity in [R]{}iemannian geometry. [*J. Differential Geom.*]{} [**25**]{} \#3 (1987), 327–351.
[^1]: This work is supported in part by a PSC-CUNY Research Grant.
|
---
abstract: 'Nozi$\grave{e}$res’ exhaustion theory argues that the temperature for coherent screening of all local moments in a Kondo lattice can be much lower than the single-moment screening temperature when the number of conduction electrons is insufficient. A recent experiment \[Luo et al, PNAS, 112, 13520 (2015)\] indicates that the cerium based nickel pnictide $CeNi_{2-\delta}As_2 (\delta\approx0.28)$ with low carrier density is an ideal material to examine such protracted Kondo screening. Using the density functional theory and dynamical mean-field theory, we calculated the respective electronic structures of paramagnetic $CeNi_2As_2$/$CeNi_2P_2$. In contrast to the structurally analogous layered iron pnictides, the electronic structures of the present systems show strong three-dimensionality with substantially small contributions of Ni-3d electrons to the carrier density. Moreover, we find significant Kondo resonance peaks in the compressed $CeNi_2As_2$ and in $CeNi_2P_2$ at low temperatures, accompanied by topological changes of the Fermi surfaces. We also find a similar quantum phase transition in $CeNi_2As_2$ driven by chemical pressure via the isovalence As$\rightarrow$P substitution.'
author:
- Peng Zhang
- Bo Liu
- Shengli Zhang
- 'K. Haule'
- Jianhui Dai
bibliography:
- 'refs.bib'
title: Protracted Kondo coherence with dilute carrier density in Cerium based nickel pnictides
---
Recently, extensive interest has focused on heavy fermion materials with dilute carrier density due to their exotic behaviors, for example the topologically nontrivial electronic states [@PhysRevLett.110.096401; @PhysRevX.7.011027; @PhysRevLett.118.246601], the Kondo semimetals [@Rai2019; @Lv2019; @FengXY] and the quantum phase transitions due to protracted Kondo screening [@Luo2012; @Luo2014; @Luo2015]. The properties of heavy fermion materials depend on two key factors: one is the electronic correlation between the localized f-electrons and the other is the hybridization between the f-electrons and the conduction electrons [@Steglich1991; @Hess1993; @Hewson1993]. When hybridization is weak, the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction mediated by the hybridization leads to magnetic ordering of the localized f-electrons [@Ruderman1954; @Kasuya1956; @Yosida1957]. In the strong hybridization limit, the Kondo coupling forces the screening of local f-moments by conduction electrons, producing a Fermi liquid at low temperature. However, the Kondo screening of localized f-electrons in the dilute conduction-electron limit is less understood. According to Nozi$\grave{e}$res’ argument [@Nozieres1985; @Nozieres1998], only the conduction electrons within the Kondo energy scale around the Fermi level can participate in the Kondo screening. When the charge density of conduction electrons is low, the number of available conduction electrons for Kondo screening can be smaller than the total number of localized f-electrons in the lattice. The full Kondo screening of all localized f-electrons, if it happens, must be coherent. The corresponding energy scale for the coherent Kondo screening with insufficient conduction electrons is much lower than that of the single-impurity Kondo screening, and is named the protracted Kondo screening temperature [@Sarrao1999; @Lawrence2001].
Previous theoretical investigations on Nozi$\grave{e}$res’ argument are largely limited to simple models like the single-impurity Anderson model, the periodic Anderson lattice model, and the Kondo lattice model [@Tahvildar-Zadeh1997; @Tahvildar-Zadeh1998; @PhysRevB.60.10782; @PhysRevB.61.12799; @Vidhyadhiraja2000; @Burdin2000]. Two energy scales are introduced in these calculations: the single-impurity Kondo temperature, $T_K$, that indicates the screening of a localized f-electron, and the coherent temperature, $T_{coh}$, below which all f-electrons are coherently screened to form a Fermi liquid. A recent experiment by Luo et al. [@Luo2015] suggests that protracted Kondo screening may be realized in the heavy fermion compound $CeNi_{2-\delta}As_2$, where the carrier density is very low. They also found that by applying physical pressure there is a quantum phase transition from the antiferromagnetic (AFM) phase to the coherent Kondo screening state. The two phases are separated by a possible unconventional quantum critical point. Therefore, $CeNi_{2-\delta}As_2$ presents an ideal and rare platform to investigate how Nozi$\grave{e}$res’ exhaustion affects the quantum phase transition. This experiment also poses further questions from the perspective of the electronic structure: the origin of the dilute carrier density, the topological difference of the electronic structures on the two sides of the quantum phase transition, and whether chemical pressure via the isovalence substitution of As by P leads to a similar quantum phase transition and protracted Kondo screening.
To answer these questions, we investigate the protracted Kondo screening by calculating the electronic structures of the stoichiometric $CeNi_2As_2$ under compression and that of the stoichiometric $CeNi_2P_2$ at the ambient pressure. We employ first-principles density functional theory plus dynamical mean-field theory (DFT+eDMFT) [@Kotliar2006; @Haule2010] with a continuous-time quantum Monte Carlo impurity solver [@Werner2006; @Haule2007]. Since the $Ni$ vacancies remove conduction electrons from the crystal, our results about Nozi$\grave{e}$res’ exhaustion in the stoichiometric $CeNi_2As_2$ remain valid for $CeNi_{2-\delta}As_2$. More details about the calculations can be found in the Supplementary Information (S.I.).
The DFT+eDMFT calculated total density of states (DOS) and all important partial DOS (Ce-4f, Ce-5d, Ni-3d, As-4p/P-3p) of $CeNi_2As_2$ and $CeNi_2P_2$ at 300 K (upper panel) and 38 K (lower panel) are presented in Fig.1. In Fig.1(a-d) the DOS of $CeNi_2As_2$ around the Fermi level is mainly from the Ce-4f orbitals and the Ni-3d orbitals, although the Ce-5d orbitals and the As-4p orbitals contribute at around $\pm$4 eV. This indicates that in $CeNi_2As_2$ the conduction electrons come from the Ni-3d orbitals. In Fig.1(a-b), from 300 K to 38 K at the ambient pressure, there is no sign of enhanced hybridization between the Ce-4f states and the conduction electrons. This shows that the Ce-4f electrons of $CeNi_2As_2$ remain localized at low temperatures and that there is no Kondo resonance in $CeNi_2As_2$ at the ambient pressure (555.4 $Bohr^3$/f.u.). However, in Fig.1(c-d), when $CeNi_2As_2$ is under 4.8 GPa compression (544.2 $Bohr^3$/f.u.), even at 300 K there is a small quasi-particle peak at the Fermi level. Upon further decreasing the temperature to 38 K, the quasi-particle peak becomes pronounced. The sharp quasi-particle peak comes from the hybridization between the Ni-3d states and the Ce-4f states, which is enhanced by the decreased $CeNi_2As_2$ lattice volume, in spite of the fact that the DOS of the Ni-3d orbitals around $E_F$ is still fairly small. This is a clear manifestation of the Kondo screening. The quasi-particle peak of the Ce-4f orbital at $E_F$ corresponds to the total angular momentum $J=5/2$. The second peak of the Ce-4f orbital at 0.3 eV above $E_F$ comes from the orbital with $J=7/2$. Our results support the experimental observation of Luo et al. [@Luo2015] that under compression there is a local moment to Kondo resonance phase transition in $CeNi_2As_2$.
Another interesting issue is whether chemical pressure on $CeNi_2As_2$ will lead to a similar local moment to Kondo resonance phase transition. The chemical pressure on $CeNi_2As_2$ can be induced via the isovalence substitution of As by P. As shown in Fig.1(e-f), there are large Kondo resonance peaks in $CeNi_2P_2$ at both 300 K and 38 K, while the contribution of the Ni 3d-orbitals to the DOS around the Fermi energy is still very small. This fact indicates the development of the Kondo screening state in $CeNi_2P_2$ at the ambient pressure already at room temperature. Given the local moment state in $CeNi_2As_2$ and the Kondo screening state in $CeNi_2P_2$, we expect a local moment to Kondo screening phase transition driven by the isovalence $As\rightarrow P$ substitution in this system. Our discovery is consistent with recent experimental observations [@ChenJian2017].
It should be noticed that in Fig.1(a-b) the calculated Ni-3d DOS of $CeNi_2As_2$ is about 0.6-0.7 $eV^{-1}$ at $E_F$, which is much smaller than the DOS at $E_F$ of conduction electrons in some other cerium based nickel pnictides like $CeNiAsO$ [@Luo2014], where the Ni-3d DOS at $E_F$ is about 1.5 $eV^{-1}$. Given the Ni vacancies in the realistic $CeNi_{2-\delta}As_2(\delta\approx0.28)$ crystal, the charge carrier density in this system is even lower. According to Nozi$\grave{e}$res’ argument, the effective number of conduction electrons that participate in coherent Kondo screening is estimated by $n_{eff}=\rho_c(E_F)T_K$, where $\rho_c(E_F)$ is the DOS of conduction electrons at $E_F$, and $T_K$ is the corresponding single-impurity Kondo temperature, typically much smaller than 1 eV. Because in $CeNi_2As_2$ there is a local moment on each lattice site due to the occupied Ce-4f states, the number of available conduction electrons is indeed small relative to the number of local moments. Therefore, the observed coherent Kondo screening in $CeNi_2As_2$ under physical or chemical pressure is protracted.
| Compound     | $P$ (GPa) | $\epsilon_f$ (eV) | $Im\,\Delta(E_F)$ (eV) | $\rho(E_F)$ (eV$^{-1}$) | $T_K$ (K)           | $T_{coh}$ (K)       |
|--------------|-----------|-------------------|------------------------|-------------------------|---------------------|---------------------|
| $CeNi_2As_2$ | 0.0       | $-2.40$           | $-0.051$               | 0.72                    | $3.0\times10^{-2}$  | $9.2\times10^{-9}$  |
| $CeNi_2As_2$ | 4.8       | $-2.10$           | $-0.115$               | 0.90                    | 86.5                | 0.1                 |
| $CeNi_2P_2$  | 21.1      | $-2.10$           | $-0.128$               | 1.17                    | 147.5               | 0.366               |
To observe the low-energy excitations around the Fermi level in detail, we show the momentum-resolved spectral function $A(k,\omega)$ of $CeNi_2As_2$ and $CeNi_2P_2$ at 38 K in Fig.2(a-c). In Fig.2(a), the spectral function of $CeNi_2As_2$ at the ambient pressure shows no sign of hybridization between the conduction bands (mainly Ni-3d) with large dispersion and the dim flat localized Ce-4f bands at around $E_F$ and $E_F$+0.3 eV. But in Fig.2(b), $CeNi_2As_2$ at 4.8 GPa shows strong hybridization between the conduction bands and the Ce-4f bands. Consequently the spectral function of the Ce-4f bands gains tremendous spectral weight at $E_F$ and $E_F$+0.3 eV relative to that in Fig.2(a). The two enhanced Ce-4f bands in Fig.2(b) correspond to the $J=5/2$ and $J=7/2$ peaks in Fig.1(d) respectively. In Fig.2(c), the spectral function of $CeNi_2P_2$ shows similarly enhanced hybridization between the conduction bands and the localized Ce-4f bands because of the chemical pressure via the $As\rightarrow P$ substitution.
The Fermi surfaces of $CeNi_2As_2$ and $CeNi_2P_2$ at 38 K are also presented in Fig.2(d-f). Unlike some other layered materials with typical two-dimensional band structures in the a-b plane (e.g. the structurally analogous iron pnictides $BaFe_2As_2$ [@PhysRevLett.101.107006; @PhysRevLett.101.257003] and the cerium based pnictides $CeNiAsO$ [@Luo2014] exhibiting a similar quantum phase transition), the Fermi surfaces of both $CeNi_2As_2$ and $CeNi_2P_2$ show prominent dispersion in all three directions. The larger electron dispersion along the c-axis in $CeNi_2As_2$ and $CeNi_2P_2$ comes from the relatively shorter distance between the Ce layer and the transition metal-pnictide layer. The average distance at the ambient pressure is 4.6697 Bohr in $CeNi_2As_2$ and 4.4234 Bohr in $CeNi_2P_2$. In contrast, the average distance between the Ba layer and the Fe-As layer is 6.1495 Bohr in $BaFe_2As_2$ [@PhysRevLett.101.257003], and the average distance between the Ce layer and the Ni-As layer is 5.4119 Bohr in $CeNiAsO$ [@Luo2014]. In Fig.1 we found that the partial DOS at the Fermi level is mainly from Ce-4f and Ni-3d states. Since the Ce-4f bands show weak dispersion, this indicates that in $CeNi_2As_2$ and $CeNi_2P_2$ the inter-layer hopping of Ni-3d electrons across the Ce-Ni-As/P layers is strong. The Fermi surfaces of $CeNi_2As_2$ at the ambient pressure (Fig.2(d)) have three sheets: a large sheet (sheet 1, in green and purple) at the top and the bottom that forms a hole pocket around the $N$ point, another large sheet (sheet 2, in blue and gold) surrounding the $Z-\Gamma$ line, and a tiny sheet (sheet 3, in red and cyan) barely touching the $P$ point. Under compression to 4.8 GPa (Fig.2(e)), sheet 1 shrinks into two plates cutting the $Z-\Gamma$ line and the hole pocket at $N$ disappears, sheet 2 changes its topology to cut the $Z-\Gamma$ line as well, and sheet 3 develops into an electron pocket surrounding the $X-P$ line. For $CeNi_2P_2$ in Fig.2(f), sheet 1 totally disappears and sheet 2 cuts the $Z-\Gamma$ line as under the physical pressure, but sheet 3 develops into two electron pockets surrounding not only the $X-P$ line but also the $\Gamma$ point. Because of the enhanced hybridization between the conduction electrons and the Ce-4f electrons under increased physical/chemical pressure, more Ce-4f electrons become itinerant and the total number of electrons on the Fermi surfaces increases.
Next we estimate the two relevant energy scales, the single-impurity Kondo temperature $T_K$ and the coherent Kondo temperature $T_{coh}$ in $CeNi_2As_2$ and $CeNi_2P_2$.
When the number of available conduction electrons $N_c$ is smaller than the number of magnetic local moments $N_f$, $T_{coh}$ will be suppressed relative to $T_K$ according to the Nozi$\grave{e}$res’ argument [@Nozieres1998], $$T_{coh}=\frac{N_c}{N_f}T_K=\frac{\rho(E_F)}{N_f}{T_K}^2.$$
$T_K$ can be estimated using [@Gunnarsson1983; @Haule2001; @Pourovskii2008], $$T_K=\sqrt{W |Im\Delta(E_F)|}\,e^{-\frac{\pi |\epsilon_f|}{2 N_f |Im\Delta(E_F)|}},$$ where $W$ is the width of the conduction band below the Fermi level, $\epsilon_f$ is the average energy level of the f-electrons, $N_f$ is the band degeneracy of the f-electrons, and $\Delta(E_F)$ is the hybridization function at the Fermi level. We choose $N_f=6$ since the Kondo peak at the Fermi level belongs to the $J=5/2$ Ce-4f bands and the $J=7/2$ peak is 0.3 eV above. The estimated Kondo temperature $T_K$ of $CeNi_2As_2$ as a function of pressure is presented in Table 1. At the ambient pressure the estimated $T_K$ of $CeNi_2As_2$ is roughly 0 K. At 4.8 GPa, $T_K$ of $CeNi_2As_2$ increases to 86.5 K, which explains the enhanced Kondo peak in Fig.1(c, d). $CeNi_2P_2$ has a much higher $T_K$ of 147.5 K, which produces the well-developed Kondo peak in Fig.1(e, f). The large $T_K$ of $CeNi_2P_2$ originates from its much smaller volume of 499.4 $Bohr^3/f.u.$. If $CeNi_2As_2$ were compressed to this volume, the corresponding pressure would be 21.1 GPa. The derived coherent temperature $T_{coh}$ is significantly lower than the Kondo temperature $T_K$ due to the small number of effective conduction electrons $N_c=\rho(E_F)T_K$ in the Kondo screening. At the ambient pressure $T_{coh}$ of $CeNi_2As_2$ is zero since there is no Kondo screening. At 4.8 GPa $T_{coh}$ of $CeNi_2As_2$ is 0.1 K, which is much lower than the experimental result of Luo [@Luo2015] that the Fermi liquid temperature $T_{FL}$ is 1.0 K at 4.0 GPa. The discrepancy might come from the fact that Eq.(2) could underestimate $T_K$, since at 300 K there are already obvious signs of Kondo screening in $CeNi_2As_2$ at 4.8 GPa and in $CeNi_2P_2$. Another possibility is that Nozi$\grave{e}$res’ formula, Eq.(1), needs a renormalization factor that depends on the material [@Vidhyadhiraja2000]. A more precise determination of $T_K$ and $T_{coh}$ is left for future work. For $CeNi_2P_2$, the calculated $T_{coh}$ is 0.366 K due to its larger $T_K$.
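For readers who wish to reproduce these estimates, the sketch below evaluates Eq. (2) and Eq. (1) with the values of Table 1. The conduction bandwidth $W$ below the Fermi level is not listed in the table, so a value of a few eV is assumed here purely for illustration and should be replaced by the calculated bandwidth:

```python
import numpy as np

kB = 8.617e-5             # Boltzmann constant in eV/K
N_f = 6                   # J = 5/2 degeneracy used in the text
W_band = 4.0              # assumed conduction bandwidth below E_F in eV (illustrative)

# (epsilon_f [eV], |Im Delta(E_F)| [eV], rho(E_F) [1/eV]) from Table 1
cases = {
    "CeNi2As2,  0.0 GPa": (2.40, 0.051, 0.72),
    "CeNi2As2,  4.8 GPa": (2.10, 0.115, 0.90),
    "CeNi2P2,  21.1 GPa": (2.10, 0.128, 1.17),
}

for label, (eps_f, imD, rho) in cases.items():
    TK_eV = np.sqrt(W_band * imD) * np.exp(-np.pi * eps_f / (2.0 * N_f * imD))  # Eq. (2)
    TK = TK_eV / kB                                      # convert eV to K
    Tcoh = rho * kB * TK**2 / N_f                        # Eq. (1), with rho in 1/eV
    print(f"{label}:  T_K ~ {TK:9.3f} K   T_coh ~ {Tcoh:12.3e} K")
```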
By calculating the electronic structures of the cerium based nickel pnictides $CeNi_2As_2$ and $CeNi_2P_2$, we find a novel quantum phase transition from the local moment phase to the coherent Kondo screening phase in $CeNi_2As_2$ under compression. We show that the coherent Kondo temperature $T_{coh}$ is much smaller than the single-impurity Kondo temperature $T_K$, and that the protracted Kondo screening is due to the dilute conduction electrons on the Ni-3d orbitals. We further find that this transition is accompanied by topological changes of the Fermi surface. Unlike the structurally analogous layered iron pnictides, the relatively strong transfer of Ni-3d electrons across the Ce-Ni-As/P layers in the z-direction makes $CeNi_2As_2$/$CeNi_2P_2$ three-dimensional materials. Our calculations also point to a similar quantum phase transition and protracted Kondo screening driven by chemical pressure, in agreement with the recent isovalence $As\rightarrow P$ substitution experiment. Our discoveries provide important insights into the nature of the quantum phase transition and the protraction of coherent Kondo screening in heavy fermion systems with dilute carrier density.
P. Z. is supported by National Science Foundation of China (NSFC) Grant No. 11604255. J. D. is supported by NSFC Grant No. 11474082. B. L. is supported by the National Key Research and Development Program of China (2018YFA0307600) and NSFC Grant No. 11774282. K. H. is supported by National Science Foundation (NSF) grant DMR-1709229. P. Z. and J. D. want to thank J. Chen, Y. Luo, Q. Si and Z.A. Xu for extensive and valuable discussions. This work is supported by the HPC platform of Xi’an Jiaotong university.
|
---
abstract: 'This essay demonstrates the key role of Astronomy in Botticelli’s [*Venus and Mars-NG915*]{} painting, to date only very partially understood. Remarkable coincidences among the principles of Ficinian philosophy, the historical characters involved and the compositional elements of the painting show how the astronomical knowledge of that time strongly influenced this masterpiece. First, Astronomy provides its precise dating, since the artist used the astronomical ephemerides of his time while preserving a mythological meaning, and a clue for Botticelli’s signature. Second, it allows the correlation among Botticelli’s creative intention, the historical facts and astronomical phenomena such as the heliacal rising of the planet Venus in conjunction with the Aquarius constellation, dating back to the earliest representations of Venus in Mesopotamian culture. This work not only bears a significant value for the history of science and art, but, in the current era of the three-dimensional mapping of billions of stars about to be delivered by Gaia, affirms the role of astronomical heritage in Western culture. Finally, following the same method, a precise astronomical dating for the famous [*Primavera*]{} painting is suggested.'
author:
- Mariateresa Crosta
title: |
The astronomical garden of *Venus and Mars - NG915*:\
the pivotal role of Astronomy in dating and deciphering Botticelli’s masterpiece
---
[**Keywords**]{}: History of Astronomy, Science and Philosophy, Renaissance Art, Education.
Introduction {#introduction .unnumbered}
============
Since its acquisition by London’s National Gallery on June 1874, the painting [*Venus and Mars*]{} by Botticelli, cataloged as [*NG915*]{}, has remained a mystery to be interpreted [@paoli][^1]. In the present study the association of an astronomical configuration has been determinant for reading this masterpiece, and for its accurate dating. A search on the astronomical charts of the late ’400 has widely supported the initial intuition.
Figure \[Fig1\] presents Botticelli’s painting associated with a picture showing a relationship between the representation of [*Venus and Mars*]{} and some possible asterisms. The link arose in the context of the space mission Gaia (European Space Agency, ESA [@gaia]), currently in orbit at 1.5 million km from Earth, whose goal is a high-precision three-dimensional map of the Galaxy. Indeed, Gaia will generate the most important and most extensive (about 2 billion objects) astronomical cartography ever realized by humanity.
Given that one can trace as many asterisms as the number of visible stars, it was necessary to find a clue for the proper interpretation of NG915. Thanks to the similarity of the dress worn by Venus in [*NG915*]{} with that one in the famous [*Primavera*]{}, the first step was checking the astral situation of the Spring Equinox at the latitude of Florence around the presumable years dating the painting.
Among the possible constellations, Aquarius and Capricorn turned out to be the best candidates. In the northern hemisphere they are visible in the summer and fall night sky, but the heliacal rising of both happens toward the equinox of March. Let us recall that heliacal rising means the rising of a star just before dawn, and consequently its visibility in the morning until the sunlight diffuses completely. Moreover, in this circumstance Aquarius and Capricorn appear in some cases in conjunction with the planets Venus and Mars respectively.
Premise: the astronomical atlases at Botticelli’s time {#premise-the-astronomical-atlases-at-botticellis-time .unnumbered}
======================================================
The need to position the stars, and the resulting study, constitutes the most ancient branch of Astronomy, named Astrometry. Comparing the positions of stellar objects at different epochs allows one to determine their distance and their proper motion. The measurements to position the stars were carried out over the centuries with increasingly sophisticated instrumentation and today also from space, as in the case of the Gaia mission. Only in 1930 did the International Astronomical Union adopt criteria to compare the various celestial charts and establish the current 88 constellations. Until then the shape and boundaries of the constellations, as well as their names, although passed on, were still susceptible to free interpretation by astronomers. The definitive IAU constellations include all those described by Ptolemy in the famous [*Almagesto*]{}, with the addition of those of [*Uranometria*]{} by Johann Bayer, the first atlas that covered the entire celestial sky, i.e. including also the southern hemisphere, published in 1603. Prior to Bayer’s [*Uranometria*]{}, the fundamental text was Ptolemy’s Almagest, representing the culmination of the scientific production of Greek astronomers and philosophers such as Eratosthenes, Hipparchus and Ptolemy himself. The [*Almagesto*]{} constituted the scientific context throughout the Middle Ages and the Renaissance and was only updated in the [*Liber locis stellarum fixarum*]{} by Abd-al-Rahman al-Sufi in 964 [^2].
For the purpose of this study, it is worth mentioning that before [*Uranometria*]{}, [*De le stelle fisse*]{}, published in 1543 by Alessandro Piccolomini, was the first modern celestial atlas and the first to assign Latin letters to the stars according to their luminosity. The maps contained in that work include all the Ptolemaic constellations (except one) and show the stars without the corresponding mythological figures; so we infer that, in any case, before that date, mythology was part of the interpretation of the sky.
As matter of fact until the 15th century the didactic poem in hexameters [*Phaenomena*]{} by Arato had been in circulation for a long time. Such a work included the millennial celestial knowledge received and elaborated by Greeks, and transmitted to Lucrezio, Virgilio, Cicero, Ovid, Germanicus, and Avienio.[^3] Arato’s translations includes also two short works of the Early Middle Ages, the [*De signis caeli*]{} and [*De ordine ac positione stellarum in signis*]{}, and Igino’s [*De Astronomia*]{} (C. Julius Hyginus, 64 BC-17 AD) manuscripts. Igino resumes materials dating back to Eratosthenes of Cyrene, who in the III sec. B.C. wrote the astronomical treatise, [*Catasterismi*]{}, which means “transformation into stars”. The second book by Igino describes the mythological stories for each constellation that constitute the basis on which the constellation itself have been formed, and its process of collocation in the sky (i.e. [*catasterismo*]{}).
Igino’s [*De Astronomia*]{} has been passed on through numerous independent medieval manuscripts and print works published between the fifteenth and seventeenth centuries, containing graphic representations of mythological characters not always philologically consistent with the text. In such works the aesthetic and astrological needs prevail, the philological and literary interpretation is preferred to such a degree that the positioning of the stars is conditioned to coincide with the drawing of an anatomical detail of the mythological figure to which they belong.
Al-Ma’mun [^4] explains the reason for naming constellations rather than single stars:\
<< There are many stars in everywhere and many of them are identical in size and brightness in their travels. For this reason it seemed reasonable to group the stars together, so that, arranged with one another, they represented figures, so the stars became nominated >> [^5]. But even when the stars are singly named, the meaning of the translation of their Greek, Latin, and Arabic traditional name almost always identifies the anatomical position or a quality which the star occupies in the figure. And this tradition will last throughout the Middle Ages and the Renaissance, and will be interrupted right in Bayer’s atlas by Dürer.
The numerous editions of Igino’s tales constitued the [*Poeticon Astronomicon*]{}. The first print publication was edited in Venice in 1482, on behalf of Erhard Ratdolt.
As briefly outlined[^6] this was essentially the corpus of the astronomical knowledge to which Sandro Botticelli could have been exposed at the time he lived and worked in Florence under the influence of the Medici family. The interest towards the classical culture was renewed thanks to the translations of Plotin’s treatises and Plato’s dialogues by Marsilio Ficino and the creation of the Neoplatonic Academy, whose principles were fully expressed through multiple readings in Botticelli’s works. Marsilio Ficino wrote:<<According to the most ancient followers of Plato, the Soul of the World has built beside the stars figures and portions of figures that are themselves figures of a certain type, also conferring certain properties on each of them. In the stars - in their figures, parts and properties - are contained all sorts of things that are in the lower world and their properties>>[^7].
Dating NG915 according to the motion of Mars and Venus from 1480 to 1488 {#dating-ng915-according-to-the-motion-of-mars-and-venus-from-1480-to-1488 .unnumbered}
========================================================================
![\[Fig2\]Aquarius and Capricorn around dawn near the Spring Equinox at the latitudes of Florence in 1482.](figure2){width="15.0cm"}
Around the spring equinox of 1482, at the latitudes of Florence, Venus is eastward of Aquarius, in heliacal rising, 68 degrees away from the planet Mars, which lies west of Capricorn in the constellation of Sagittarius (figure \[Fig2\])[^8]. This coincidence offers a unique dating of the painting against the uncertainties that place it in an interval ranging from 1480 to 1488. In fact, at the spring equinox of 1480 Mars is in the constellation of Ophiuchus, in 1485 it does not appear, and in 1488 it is in conjunction with Jupiter to the east of Aquarius, with Venus to the west (figure \[Fig3\]). In the remaining dates nothing relevant for the purpose of this study is observed. Apart from this time interval, the exact simultaneous presence of Venus in conjunction with Aquarius and of Mars in conjunction with Capricorn occurs only in 1469 and 1501 (figure \[Fig4\]), dates not consistent with Botticelli's biography.
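The configurations described above were derived with planetarium software; readers who wish to verify them can do so with open-source ephemeris tools. The following is a minimal, hedged sketch using the Skyfield Python library: the kernel name (de422.bsp, needed because the common de421.bsp does not reach back to the 15th century), the chosen time, and the calendar handling are assumptions for illustration, not the exact procedure used for the figures.

```python
from skyfield.api import load, wgs84, GREGORIAN_START

ts = load.timescale()
ts.julian_calendar_cutoff = GREGORIAN_START   # interpret pre-1582 dates as Julian-calendar dates
eph = load('de422.bsp')                       # long-span JPL kernel (large download); de421 covers only 1900-2050
earth = eph['earth']
sun, venus, mars = eph['sun'], eph['venus'], eph['mars barycenter']

florence = earth + wgs84.latlon(43.77, 11.25) # approximate coordinates of Florence
t = ts.utc(1482, 3, 11, 5)                    # early morning around the Julian-calendar spring equinox

# Angular separation between Venus and Mars as seen from Earth.
separation = earth.at(t).observe(venus).separation_from(earth.at(t).observe(mars))

# Heliacal-rising geometry: the planet should be barely above the horizon
# while the Sun is still a few degrees below it.
alt_venus, _, _ = florence.at(t).observe(venus).apparent().altaz()
alt_sun, _, _ = florence.at(t).observe(sun).apparent().altaz()
print(separation, alt_venus.degrees, alt_sun.degrees)
```

Scanning the mornings around the equinox with such a script allows one to check, date by date, the separation quoted in the text.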
![\[Fig4\]The ascending Aquarius and Capricorn at dawn around the spring equinox at the latitudes of Florence in 1469 and 1501; note Venus and Mars in conjunction with the constellations.](figure3a){width="\textwidth"}
![\[Fig4\]The ascending Aquarius and Capricorn at dawn around the spring equinox at the latitudes of Florence in 1469 and 1501; note Venus and Mars in conjunction with the constellations.](figure3b){width="\textwidth"}
Mars generally looks orange or reddish, with a brightness that varies considerably along its orbit. Its apparent visual magnitude ranges from about +1.8 near conjunction, when the planet is close to the Sun and difficult to observe, to the much brighter -2.9 at a perihelic opposition; oppositions recur roughly every two years. The ancients were already aware of its retrograde motion.
Venus, on the other hand, is yellowish-whitish and makes a revolution along an almost circular orbit in 224.7 Earth days. Known since ancient times as the brightest natural object in the night sky, after the Moon, it reaches an apparent maximum visual magnitude of -4.4.
Being an inner planet, it never strays far from the Sun and is visible only for a few hours, shortly before dawn or shortly after sunset - hence the respective names [*Lucifero*]{} and [*Vespero*]{} - moving alternately east and west of the Sun. The favorable periods for observing the planet are those in which the elongation - the angular distance between a planet and the Sun - reaches its maximum values of about 47 degrees east or 47 degrees west: in the first case the planet appears immediately after sunset, in the second just before dawn.
Let us also recall that, apart from the Sun, the Moon and, rarely, Jupiter, Venus is the only celestial body visible to the naked eye in daylight, provided the sky is clear and its elongation from the Sun is not at a minimum.
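The visibility rule just described (east of the Sun: evening star; west of the Sun: morning star; best visibility near the 47-degree maximum elongation) translates directly into a short computation. The sketch below reuses the assumptions of the previous listing and is illustrative only.

```python
from skyfield.api import load, GREGORIAN_START
from skyfield.framelib import ecliptic_frame

ts = load.timescale()
ts.julian_calendar_cutoff = GREGORIAN_START
eph = load('de422.bsp')                       # assumed long-span kernel, as above
earth, sun, venus = eph['earth'], eph['sun'], eph['venus']

t = ts.utc(1482, 3, 11)
e = earth.at(t)
s, v = e.observe(sun), e.observe(venus)

elongation = v.separation_from(s)             # angular distance Venus-Sun
_, lon_sun, _ = s.frame_latlon(ecliptic_frame)
_, lon_venus, _ = v.frame_latlon(ecliptic_frame)
east_of_sun = (lon_venus.degrees - lon_sun.degrees) % 360.0 < 180.0
print(elongation, 'evening star (Vespero)' if east_of_sun else 'morning star (Lucifero)')
```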
Aquarius constellation in NG915, the connection with the planet Venus and Botticelli’s signature {#aquarius-constellation-in-ng915-the-connection-with-the-planet-venus-and-botticellis-signature .unnumbered}
=================================================================================================
Aquarius is a winter constellation, considered in ancient times a bringer of rain. Its connection with water goes back to Babylonian astronomy; nearby in the sky we find the Fishes and other aquatic constellations, such as Cetus and Capricorn, located in the so-called "Celestial Waters" region, or "The Sea" [^9].
In the [*Catasterismi*]{} this constellation is often represented by Ganimede, the young Trojan whom Zeus, struck by his beauty, abducts and carries to the heavens to serve the gods as a cupbearer, as Ovid relates. Zeus then grants Ganimede immortality by transforming him into the constellation of Aquarius. Neoplatonism gave a mystical reading of Ganymede's abduction as the rapture of the soul towards God, whereas in ancient Greece and Rome the myth became very popular and was regarded as a divine endorsement of homosexuality.
An alternative association is with Deucalion, who drifted in his ark on the flood waters for nine days and nine nights; Igino, citing Eubulo, offers yet another identification of the constellation, with Cecrope, the first mythical king of Athens: since he reigned in times when wine had not yet been invented, he is depicted offering a sacrifice of water to the gods.
In fact, the Aquarius constellation is generally represented by the figure of a man with one arm stretched towards Capricorn, holding the flap of a mantle or a crossbar, while the other arm, whose hand almost touches Pegasus, holds an amphora from which the [*Fluvius Aquarii*]{} flows down to the Southern Fish. Dante described its appearance in the night sky as marking the period when winter slowly turns towards spring.
For the present study it is also interesting to consider what Aquarius meant to the Babylonians. The Sumerians called it the <<Great Man>> (GU.LA), and it was identified with the god Ea, Lord of the springs, depicted holding a jar from which two streams of water pour. Ea, however, was also the symbol of Capricorn.
![\[Fig6\]The Adda seal, British Museum, Museum number 89115](figure6){width="14.5cm"}
![\[Fig7\] From upper left: the Aquarius constellation according to Seneca, Tragedies (comm. Nicolas Trevet)[@lippncott]; [*Hyginus Aquarius*]{}, Florence, Bibl. Laurenziana, Ms. Plut 29.30[@lippncott]; [*Astronomical Compendium*]{} in Hebrew, Catalonia, c. 1361[@lippncott]; [*Germanicus Aratea, Hyginus, Astronomica*]{}, Biblioteca Laurenziana, Plut. 89 sup. 43[@lippncott]; Aquarius fully dressed, perhaps female, 'Ptolemaic' stellar tables, Vatican BAV Pal lat 1368[@lippncott]; depiction of Aquarius in the publication of the [*Poeticon*]{}, 1482.](figure7a){width="\textwidth"}
![\[Fig7\] From upper left: the Aquarius constellation according to Seneca, Tragedies (comm. Nicolas Trevet)[@lippncott]; [*Hyginus Aquarius*]{}, Florence, Bibl. Laurenziana, Ms. Plut 29.30[@lippncott]; [*Astronomical Compendium*]{} in Hebrew, Catalonia, c. 1361[@lippncott]; [*Germanicus Aratea, Hyginus, Astronomica*]{}, Biblioteca Laurenziana, Plut. 89 sup. 43[@lippncott]; Aquarius fully dressed, perhaps female, 'Ptolemaic' stellar tables, Vatican BAV Pal lat 1368[@lippncott]; depiction of Aquarius in the publication of the [*Poeticon*]{}, 1482.](figure7b){width="\textwidth"}
![\[Fig7\] From upper left: the Aquarius constellation according to Seneca, Tragedies (comm. Nicolas Trevet)[@lippncott]; [*Hyginus Aquarius*]{}, Florence, Bibl. Laurenziana, Ms. Plut 29.30[@lippncott]; [*Astronomical Compendium*]{} in Hebrew, Catalonia, c. 1361[@lippncott]; [*Germanicus Aratea, Hyginus, Astronomica*]{}, Biblioteca Laurenziana, Plut. 89 sup. 43[@lippncott]; Aquarius fully dressed, perhaps female, 'Ptolemaic' stellar tables, Vatican BAV Pal lat 1368[@lippncott]; depiction of Aquarius in the publication of the [*Poeticon*]{}, 1482.](figure7c){width="\textwidth"}
![\[Fig7\] From upper left: the Aquarius constellation according to Seneca, Tragedies (comm. Nicolas Trevet)[@lippncott]; [*Hyginus Aquarius*]{}, Florence, Bibl. Laurenziana, Ms. Plut 29.30[@lippncott]; [*Astronomical Compendium*]{} in Hebrew, Catalonia, c. 1361[@lippncott]; [*Germanicus Aratea, Hyginus, Astronomica*]{}, Biblioteca Laurenziana, Plut. 89 sup. 43[@lippncott]; Aquarius fully dressed, perhaps female, 'Ptolemaic' stellar tables, Vatican BAV Pal lat 1368[@lippncott]; depiction of Aquarius in the publication of the [*Poeticon*]{}, 1482.](figure7d){width="\textwidth"}
![\[Fig7\] From upper left: the Aquarius constellation according to Seneca, Tragedies (comm. Nicolas Trevet)[@lippncott]; [*Hyginus Aquarius*]{}, Florence, Bibl. Laurenziana, Ms. Plut 29.30[@lippncott]; [*Astronomical Compendium*]{} in Hebrew, Catalonia, c. 1361[@lippncott]; [*Germanicus Aratea, Hyginus, Astronomica*]{}, Biblioteca Laurenziana, Plut. 89 sup. 43[@lippncott]; Aquarius fully dressed, perhaps female, 'Ptolemaic' stellar tables, Vatican BAV Pal lat 1368[@lippncott]; depiction of Aquarius in the publication of the [*Poeticon*]{}, 1482.](figure7e){width="\textwidth"}
![\[Fig7\] From upper left: the Aquarius constellation according to Seneca, Tragedies (comm. Nicolas Trevet)[@lippncott]; [*Hyginus Aquarius*]{}, Florence, Bibl. Laurenziana, Ms. Plut 29.30[@lippncott]; [*Astronomical Compendium*]{} in Hebrew, Catalonia, c. 1361[@lippncott]; [*Germanicus Aratea, Hyginus, Astronomica*]{}, Biblioteca Laurenziana, Plut. 89 sup. 43[@lippncott]; Aquarius fully dressed, perhaps female, 'Ptolemaic' stellar tables, Vatican BAV Pal lat 1368[@lippncott]; depiction of Aquarius in the publication of the [*Poeticon*]{}, 1482.](figure7f){width="\textwidth"}
In addition, in the Adda seal (figure \[Fig6\]) Ea is the divinity of Earth and Life, who dwells in the abyssal waters and is represented with two streams of water flowing to the ground [@cesta]. The same seal also depicts Ishtar, the goddess of love and war. Note the co-presence of Ishtar and Aquarius-Ea, as if there were a connection with the well-known Greek myth of Venus born from the sea foam. Indeed, the original representation of the Babylonian divinity should be sought in the most ancient Sumerian culture - in the form of Enki - although the primordial reference could lead us to Vedic India, where the man's figure, Trita Aptya (a pre-Vedic god), was the officiant holding the vase in the <<ayoma>> ritual. Trita became Triton in the Mediterranean cultures, a god whose lower limbs were shaped as a double fish tail and who held a twisted shell whose sound could raise or calm storms. Returning to the possible representations of Aquarius: in 1482, at the time of the publication of the [*Poeticon Astronomicon*]{}, the constellation is depicted clothed, with long loose hair and with a visibly bent right leg. Figure \[Fig7\] collects a selection of depictions of Aquarius in the old atlases. For a complete list of images, refer to "The Saxl Project" [@lippncott] and Certissima Signa [@certissimasigna]. In particular, the reader should pay attention to the position of the right leg of the Aquarius figure, to the reclining figures, and especially to the image in figure \[Fig7\] where Capricorn faces Aquarius, as in NG915, instead of turning the opposite way as in the traditional representation. Moreover, in some of these images Aquarius is a female character with long hair, although in most it is male.
#### Simonetta Vespucci as Venus and Aquarius.
The woman in the foreground, Venus, resembles - as many critics have hypothesized - Simonetta Vespucci, and wears a white wedding dress with golden edges similar to that of Venus in the famous [*Primavera*]{}; the first clue was therefore to look for a reference to the spring equinox. As anticipated, the constellations of Aquarius and Capricorn have their heliacal rising at the spring equinox at the latitudes of Florence. Historical sources do not establish Simonetta Vespucci's month and place of birth with certainty. She was born in 1453, perhaps in Portovenere - according to Poliziano "in grembo a Venere" - and perhaps on January 28 [@govetti; @allan], to the nobles Gaspare Cattaneo della Volta and Cattochia Spinola de Candia. If the day is correct, Simonetta would have been born under the sign of Aquarius, i.e. with the Sun in Aquarius at the time of her birth. When she was sixteen, Simonetta was married to the young Florentine Marco Vespucci. The date of their marriage is presumed to be around August 1469 [@biotreccani]; other texts indicate the beginning of 1469 or April 1469 [@govetti]. After the marriage the couple settled in Florence; their arrival coincided with the rise of Lorenzo dei Medici.
Simonetta died very young of phthisis on April 26, 1476; Lorenzo even sent his personal doctor in a last desperate attempt to save her life. Dressed in white as a bride, Simonetta was carried across Florence in an open coffin, her face and body uncovered, and escorted to her burial place in the church of Ognissanti. On that occasion Pulci and Poliziano depicted her respectively as <<Ma forse che ancor viva al mondo è quella poiché vista da noi fu dopo il fine in sul feretro posta assai più bella>> and <<Bellezza Immortale>>.
Lorenzo il Magnifico wrote a sonnet for her[^10], apparently inspired by a brilliant star in the clear night - perhaps the planet Venus, or a fireball? - so bright that it could only be Simonetta's luminous soul joining an object of the firmament.
![\[Fig5\]The sky at the spring equinox of 1453 (Venus in conjunction with Aquarius) and on April 26, 1476 (Jupiter in Aquarius and Venus appearing at sunset).](figure5a){width="\textwidth"}
![\[Fig5\]The sky at the spring equinox of 1453 (Venus in conjunction with Aquarius) and on April 26, 1476 (Jupiter in Aquarius and Venus appearing at sunset).](figure5b){width="\textwidth"}
The astronomical correspondences at the various dates mentioned in this section support the celestial and astrological themes that Botticelli might have pursued. Moreover, the reference to <<contending with Febo>> in Lorenzo's verses suggests a heliacal rising of the star, if one hypothesizes that Simonetta is the planet Venus, ready to climb onto its moving chariot. As can be deduced by consulting the astronomical software, Venus is a constant theme in the dates related to Simonetta Vespucci: the planet is present at dawn on her supposed birthday, and together with the planet Mars it moves along the ecliptic at the dates of her marriage, as well as from January (the month of Lorenzo's Carousel) until February 1469 (in conjunction with Mars in Sagittarius); towards the end of February Venus passes into Capricorn, then it swaps its position with respect to Mars and has a heliacal rising at the spring equinox in the constellation of Aquarius, next to Capricorn which hosts Mars (figure \[Fig5\]). This configuration lasts until the beginning of April, while at the end of that month Venus disappears just after sunrise and Mars reaches Aquarius. Towards August 1469 Venus is almost always in conjunction with the Sun. On April 26, 1476, the date of her death, we find Jupiter to the east of Aquarius, close to the Sun, and Venus at sunset.
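A scan of the candidate dates mentioned in this and the previous section can be automated along the same lines. In the hedged sketch below (same assumed kernel and calendar setting as in the earlier listings), load_constellation_map returns IAU abbreviations such as 'Aqr', 'Cap', 'Sgr' and 'Oph'; the fixed date near the Julian-calendar equinox is, again, only an illustrative choice.

```python
from skyfield.api import load, load_constellation_map, GREGORIAN_START

ts = load.timescale()
ts.julian_calendar_cutoff = GREGORIAN_START
eph = load('de422.bsp')
earth, venus, mars = eph['earth'], eph['venus'], eph['mars barycenter']
which_constellation = load_constellation_map()   # maps a sky position to its IAU constellation

for year in (1453, 1469, 1476, 1480, 1482, 1485, 1488, 1501):
    t = ts.utc(year, 3, 11)                      # near the Julian-calendar spring equinox
    v = earth.at(t).observe(venus)
    m = earth.at(t).observe(mars)
    print(year, 'Venus in', which_constellation(v), '- Mars in', which_constellation(m))
```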
#### The Aquarius Stars as Satyrs, Botticelli’s signature, Venus’ right leg and jewel.
The most visible part of Aquarius - not a very bright constellation - is formed by a group of four stars in a Y shape representing the amphora from which the water pours out. A good number of stars in Aquarius have names beginning with <<Sad>>, Arabic for <<fortune>>. The main ones are the following: Sadalmelik ($\alpha$), the lucky star of the king (from sa'd al-malik); Sadalsuud ($\beta$), the luckiest of the lucky stars (from sa'd al-su'ud); Sadalbachia ($\gamma$), the lucky star of the tents or of hidden things, the hideouts (from sa'd al-akhbiya); Skat ($\delta$), leg or tibia (from as-saq); Albali ($\epsilon$), the devourer, or the lucky one of the swallower; Ancha ($\theta$), hip (from Latin); Situla ($\kappa$), bucket (from Latin); Zeta Aquarii ($\zeta$), at the center of the letter Y, delimited by $\pi$, $\eta$ and $\gamma$ [*Aquarii*]{}.
As mentioned above, before Bayer's catalogue the anatomical parts of the human figure were associated with the stars of the constellation, and their arrangement was freely adapted to the interpretation one intended to give. Traces of this association are still found two centuries later [@cirella]. The stars Skat and Ancha therefore look like a hint at an explicit reference to the lower limbs: <<So you have to remember that..\[omissis\]...Capricorn rules knees; Aquarius legs and shins>> ([*De vita coelitus comparanda*]{}, M. Ficino). An unsolved enigma in [*Venus and Mars*]{} is the disappearance of the lower half of Venus' right leg among the folds, perhaps accentuated to cover an anatomical error. Nevertheless, if we associate Venus with the constellation of Aquarius, the representation of the constellation in Igino's plates - which Botticelli could have consulted or been aware of - with its right leg visibly bent, also explains the absence of the leg in NG915 as something other than a mistake by the painter. Indeed, a technical inspection conducted by the National Gallery failed to reveal any such error [@davis]. Moreover, almost all images of Aquarius from the Middle Ages until 1482 show the figure clothed. This could explain why the goddess does not appear naked, as the mythological encounter between Mars and Venus would require: she actually represents a 'celestial' Venus.
While, on the one hand, we can assert with certainty that the theta and delta stars refer to anatomical details of Venus' figure, on the other hand, if we consider the two constellations of Aquarius and Capricorn as a whole, the spear would run along the poured water to the joining point with Capricorn. Then Sadalmelik could represent the satyr with the helmet, Sadalsuud the satyr in the middle - in fact he looks at Venus, <<the luckiest of the lucky stars>> - and Albali, <<the devourer>>, the last satyr with his tongue out, especially if the fruit he holds in his hands corresponds to the [*Ecballium elaterium*]{}, much used as a purgative medicine.
Recalling that Lorenzo il Magnifico in the [*Simposio*]{} wrote jokingly about Botticelli's greed: <<Botticel, whose fame is not dim, Botticel I say; Botticello is greedier than a fly. Oh, how many of his babblings I remember: when he is about to be invited to dinner, one barely has time to open one's mouth before he rushes off dreaming of the food. Botticello goes and comes back as full as a barrel>> [^11], we can argue that the satyr who has slipped into the armor is a mocking echo of Lorenzo's verses, and thus a signature of Botticelli, placed on the right side of the painting as in the [*Adorazione dei Magi*]{}. Curiously, the only damaged part of the painting is precisely the little satyr's face inside the armor [@davis].
Finally, Sadalbachia, <<the lucky star of hidden things>>, could be compared with the jewel worn by Venus: eight pearls (a symbolism of the bride and of Venus?) surrounding a red stone (a symbolism of love [@zoeller], or a reflection of the light from Mars?). The eight pearls could refer to Venus' visibility periods, although in Botticelli's time the phases of Venus were not yet known (it was Galileo who first observed them with the telescope in 1610, as an effect of the planet's revolution around the Sun). In fact, for about eight months the planet is visible in the west, then it disappears for seven days when it reaches its minimum distance from Earth (inferior conjunction). It then reappears in the east and remains visible for another eight months. After this period Venus disappears for three months, fully illuminated by the Sun on the far side of its orbit, at superior conjunction (maximum distance from Earth). Then the cycle resumes. In addition, the path of Venus along its orbit traces against the Zodiac - as seen from Earth - a five-pointed star (a Pythagorean [*pentalfa*]{}?) every eight years.
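The eight-year rhythm and the 'pentagram' follow from simple arithmetic on the mean synodic period of Venus (about 583.92 days): five synodic periods add up to almost exactly eight Earth years, so successive apparitions step around the zodiac in five nearly equal jumps and the figure almost closes after eight years.

```python
synodic = 583.92                                  # mean synodic period of Venus, in days
cycle_years = 5 * synodic / 365.25                # ~7.99 years for five apparitions
step_deg = 360.0 * (synodic % 365.25) / 365.25    # ~215.5 deg of ecliptic longitude per apparition
slip_deg = (5 * step_deg) % 360.0 - 360.0         # ~-2.4 deg: how far the five-step figure misses closing
print(round(cycle_years, 2), round(step_deg, 1), round(slip_deg, 1))
```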
It is also worth remembering that three meteor showers radiate from Aquarius: the [*Eta Aquariids*]{} (up to 40 meteors per hour around May 5), the [*Delta Aquariids*]{} (about twenty per hour around July 28) and the [*Iota Aquariids*]{} (up to six per hour around August 6). In the light of the interpretation presented here, and in the absence of precise historical records, one may also wonder whether it was one of these transient objects that Lorenzo il Magnifico saw during his night walk and described in the sonnet for Simonetta.
Capricorn constellation in NG915 and the celestial gate {#capricorn-constellation-in-ng915-and-the-celestial-gate .unnumbered}
=======================================================
Known in ancient times and in Mesopotamia as the goat-fish (Suhur-Mash-Ha), Capricorn represented the god Ea, later to become one of the three components of the Triad of creation together with Anu and Enlil. When the sign hosted the winter solstice, Capricorn was called, in the Sumerian cuneiform inscriptions, <<Father of Light>> and was revered as a protector of men, since he was thought to have saved them from the Universal Deluge by communicating to one of them, the Great Sage, the secrets of building an ark. Also called [*Oannes*]{} (the Greek version of [*Eaganna*]{}, namely <<Ea the Fish>>), Ea was the god who had taught human beings the original doctrine. The Greeks later called him <<Egocero>> (horned goat) and identified him with Pan, the god of the countryside with horns and goat's legs. This would explain the presence of the satyrs with horns and hooves in the encounter between Venus and Mars, instead of the [*Amorini*]{} expected from the mythological story. In support of this reading, the planet Mars at the vernal equinox of 1482 was in Sagittarius, and Eratosthenes described Sagittarius not as the traditional Centaur but as a two-legged creature with the tail of a satyr, called [*Crotus*]{}.
According to Igino, Crotus' father was Pan and, according to the myth, Pan was a playful creature of uncertain birth, who spent most of his time chasing women or sleeping. His gift was to frighten people with a particularly powerful shout, from which the word <<panic>> originated. Pan rescued the gods twice: first, during their fight against the Titans, by blowing into a shell and scaring off the enemies (according to Eratosthenes the connection with the shell explains the partly fish-like representation attributed to Capricorn); second, when by screaming he warned the gods of the approach of Typhon. In gratitude for his services, Zeus raised Pan into the sky as the constellation of Capricorn.
The astronomical meaning of this story is well known: Zeus represents the Sun, which at the winter solstice remains at the mercy of the darkness and can escape only thanks to the 'goat-fish' solstitial point, a place in the sky of great symbolic value where the path of the Sun turns back towards spring. It is a kind of heavenly gate, a symbol that reappears, for example, in the Gospel according to St. John and in the [*Sol Invictus*]{} celebration [^12].
![\[Fig9\] On the left, Capricorn as a goat in the 'Ptolemaic' stellar tables, Vatican BAV Pal lat 1368 [@lippncott]; in the middle, Capricorn in [*Germanicus Aratea, Hyginus, Astronomica*]{}, Biblioteca Laurenziana, Plut. 89 sup. 43 [@lippncott]; on the right, depiction of Capricorn in the publication of the [*Poeticon*]{}, 1482.](figure9a){width="\textwidth"}
![\[Fig9\] On the left, Capricorn as a goat in the 'Ptolemaic' stellar tables, Vatican BAV Pal lat 1368 [@lippncott]; in the middle, Capricorn in [*Germanicus Aratea, Hyginus, Astronomica*]{}, Biblioteca Laurenziana, Plut. 89 sup. 43 [@lippncott]; on the right, depiction of Capricorn in the publication of the [*Poeticon*]{}, 1482.](figure9b){width="\textwidth"}
![\[Fig9\] On the left, Capricorn as a goat in the 'Ptolemaic' stellar tables, Vatican BAV Pal lat 1368 [@lippncott]; in the middle, Capricorn in [*Germanicus Aratea, Hyginus, Astronomica*]{}, Biblioteca Laurenziana, Plut. 89 sup. 43 [@lippncott]; on the right, depiction of Capricorn in the publication of the [*Poeticon*]{}, 1482.](figure9c){width="\textwidth"}
In figure \[Fig9\] one can analyze the images of Capricorn that Botticelli might have seen. Note the position of the right knee in the [*Poeticon*]{} edition of 1482, very similar to that of Mars in NG915.
Bode [@bode] and Schmarsov [@schas] had already suggested that NG915 refers to the Medici Carousels. Lorenzo il Magnifico was born on January 1, 1449, under the sign of Capricorn. The Medici family adopted this constellation as an emblem of regality and power. Even the helmet used during the Carousel of February 1469 (<<elmetto fornito d'ariento con un Marte per cimiero>>) referred to the birth-chart theme of that day: on the front Mars was painted (considered the dominant planet of the Scorpio Ascendant), resting his feet on an eagle (<<L'aquila rossa in sull'elmetto un Marte sopra sua stella fe' d'argento e d'oro>>), while on the back there was a goat's head, probably depicting the zodiacal sign of Capricorn. This would explain why the first satyr, the one holding the spear - Aquarius' alpha, <<the lucky star of the king>> - wears a helmet whose front faces Mars and whose back, Capricorn, faces Venus (like the alignment in the sky). A clear reference to the rise of Lorenzo, who miraculously survived the day of the Pazzi Conspiracy, while his brother Giuliano died (the Sunday before Ascension).
#### Giuliano dei Medici as the planet Mars.
Angiolo Poliziano described Giuliano dei Medici as follows: <<He was of tall stature, with a well-proportioned body and a broad, prominent chest. He had shapely, muscular arms, a flat belly, lively eyes, an indomitable face, and long black hair>> [^13]. According to the chronicles, Giuliano dei Medici fell in love with Simonetta Vespucci. Like Simonetta he was born in 1453, on October 25th, with a heliacal rising of Mars - barely visible - in conjunction with Saturn. The association with the planet Mars makes the red hair of the figure representing him in NG915 more plausible.
The exaltation of their story took place in the Carousel held on January 29, 1475 in Piazza Santa Croce, celebrating an important peace agreement between Florence, Milan and Venice achieved by il Magnifico. Giuliano dedicated the victory to Simonetta, who was present among the people crowding Piazza Santa Croce. A banner - conceived by Poliziano - was commissioned from Sandro Botticelli for Giuliano, on which Simonetta was depicted in the allegorical guise of Venus-Minerva with a Cupid bound at her feet. The motto <<La sans par>> on the banner was personally chosen by Lorenzo. Like Luigi Pulci, who years before had dedicated a poem to Lorenzo's Carousel, Poliziano wrote the [*Stanze*]{} for Giuliano, emphasizing that Giuliano's fight was devoted to his love, the beautiful Simonetta Cattaneo Vespucci. The poem was left unfinished because of Giuliano's death.
Giuliano was in fact killed in the Pazzi Conspiracy two years later, on April 26, 1478, thus dying on the same date as Simonetta, while Lorenzo was injured but not severely. Like Simonetta, Giuliano was buried in an open coffin: coincidences that could hardly have gone unnoticed in the Neoplatonic milieu. In this regard, the critics Erica Tietze-Conrat (1925) [@conrat] and Carlo Gamba (1936) [@gamba] had already identified in the layered, contiguous figures of NG915 a similarity with Etruscan funerary representations[^14].
![\[Fig8\]Astral situation in Florence on October 25, 1453 (heliacal rising of Mars) and on April 26, 1478 (Jupiter in conjunction with the Sun; Venus appears at sunset).](figure8a){width="\textwidth"}
![\[Fig8\]Astral situation in Florence on October 25, 1453 (heliacal rising of Mars) and on April 26, 1478 (Jupiter in conjunction with the Sun; Venus appears at sunset).](figure8b){width="\textwidth"}
#### The NG915 marine shell and the sacrifice stars of Capricorn.
The brightest stars of Capricorn lie at the border with Aquarius and form a reasonably recognizable triangle, whose vertices are [*Algedi*]{} to the north-west, [*Deneb Algedi*]{} to the north-east and $\psi$ Capricorni to the south. Let us list them: Algedi ($\alpha$) or Giedi, the kid (from the Arabic al-jady), actually two stars, $\alpha^1$ and $\alpha^2$, called [*Prima Giedi*]{} and [*Secunda Giedi*]{}; Dabih ($\beta$), the luck of the sacrificer or the butcher (from the Arabic al-sa'd al-dhabih); Nashira ($\gamma$), the lucky one or <<the bearer of good news>> (from Arabic); Deneb Algedi ($\delta$), the tail of the kid (from the Arabic al-dhanab al-jady); Alshat ($\nu$), the slaughterer's lamb, meaning the sheep that was to be slaughtered by the adjacent Dabih. The names are related to the Id al-Adha feast, celebrated among Muslims in the period of the winter solstice, exactly on the day when the Moon stood between Dhabih and the alpha-beta pair of Capricorn, during which goats were sacrificed to promote healing from diseases. It would appear, therefore, that the third satyr, the one blowing into the marine shell - an important element of the myth of Capricorn and also present in that of Aquarius with reference to Triton - is associated with Nashira, [*gamma Capricorni*]{}, located at the border with Aquarius.
The link in NG915 could be to the failed Pazzi Conspiracy, considering that, thanks to the "astral" sound of the seashell, Florence was saved from the danger of enemy occupation. The association with the stars of Capricorn is even more relevant if one considers that the meanings of their names relate to the concept of sacrifice, and that Giuliano was, as a matter of fact, the only Medici to be "sacrificed", the first to die, struck by many blows.
Poliziano relates that Lorenzo il Magnifico was first grabbed by the shoulder and then struck at the throat, but he managed to shield himself from the following blows with the arm around which he had wrapped his cloak and then, once free, defended himself with his sword. Lorenzo's good fortune was due to the fact that his attackers were two inexperienced priests, chosen as assassins only at the last moment. Initially the designated killer was the condottiere Giovanni Battista da Montesecco who, after reconsidering, refused to commit murder in a sacred place (the attack took place in the church of Santa Maria del Fiore). Lorenzo, moreover, was surrounded by friends and escorts, and his friend Francesco Nori died trying to defend him from the killers. Giuliano's body, instead, was savaged by Francesco dei Pazzi (and also by Baldini)[^15].
Like Pan, Giuliano (Iulius in Poliziano's [*Stanze*]{}) ascends to the heavens. The hilt indicated by the middle finger of Mars' left hand confirms that the "I" stands for "Iulius", as already recognized by Bellingham [@Bellingham] and Guidoni [@guidoni].
#### Lighting from Mars, light in the sky, and the plants in the astral winter of NG915.
Assuming that the picture represents the rising Sun, the light of dawn spreads across the background of NG915. Mars-Giuliano is therefore directly illuminated by the Sun (unlike Venus, who turns her back to it; the elongation of the planet is about twenty degrees). Perhaps the satyr with the shell is trying to wake him, warning him that a new astral life is coming. Associated with the nature of winter, Capricorn as an earth sign symbolizes the seed that begins its slow, progressive maturation until it is reborn at the surface in spring. Bellingham was the first to suggest that the absence of flowers is due to the winter setting of the Carousel [@Bellingham]. In a recent publication Paoli notes that the vegetative stage of the plants in the painting is typical of the months preceding spring [@paoli]. In addition to the [*Ecballium elaterium*]{}, in NG915 we find: at the bottom right the [*Plantago lanceolata*]{}, also called Mars grass (<<[*Herba quarta Martis armoglossa dicitur*]{}>>, [*Albertus Magnus*]{}), used as a remedy for women and called lamb's tongue by the Greeks [@mirella]; close to the hilt the [*Poterium sanguisorba L.*]{}, used in ancient times against hemorrhoids and in salads; and finally, close to Venus, [*Tragopogon*]{}, whose Greek name means goat's beard[^16]. This emphasizes their connection with the stars of Capricorn and the interpretation of the fourth satyr as Albali in Aquarius. We may also assume that the background contains myrtle and [*Laurus nobilis*]{} (not in flower), the first sacred to Venus and the second a symbol of conjugal love [@mirella], but at the same time linked to Lorenzo il Magnifico and Lorenzo di Pierfrancesco (see below) because of the assonance with their names and the ancient symbolism of power: <<E 'l laur che tanto fa bramar sue fronde>>, <<E tu ben nato Laur; sotto el cui velo Fiorenza lieta in pace si riposa>> ([*Stanze*]{}, 82.4 and 4.1) [^17].
Marsilio Ficino and the stellar harmony {#marsilio-ficino-and-the-stellar-harmony .unnumbered}
=======================================
The role of Marsilio Ficino in Florence is known and widely studied. However it is worth highlighting some aspects of his philosophy in support of the astronomical interpretation given to NG915.
Marsilio Ficino disseminated the ancient mythologies and the works of the ancient poets - in particular Homer, Orazio and Ovid - and together with Poliziano he was the teacher of Lorenzo il Magnifico, later Botticelli's patron. In 1459 Marsilio Ficino created the Platonic Academy, which sought to reconcile Platonic ideas with Christianity. According to many critics [@gombrich; @kps; @boskovitz; @ferruolo; @robb], NG915 should be read in harmony with the philosophical themes expressed by the Neoplatonic Academy, in particular those concerning marriage. The painting represents the encounter between Venus (embodying "catastematic", i.e. static, pleasures) and Mars (representing dynamic pleasures), a concept found in the proem of [*De rerum natura*]{} by Lucrezio. In addition, Olson [@olson] likens the figures of Eve and Adam in the upper frame of the gate of Ghiberti's Paradise to those of Venus and Mars in Botticelli's painting, both for the contrast between the dressed woman and the naked man and for the postures, which recall the ancient river divinities, as also discussed in the present work.
In the [*Symposium*]{} Marsilio Ficino writes about the harmony of opposites symbolized by the Mars-Venus duality and about the superiority of the goddess Venus over the god Mars[^18]. Venus symbolizes [*Humanitas*]{}, love and concord, while Mars symbolizes hatred and discord, so much so that the ancients recognized in him the god of war. Living in Beauty helps to overcome the earthly dimension, and Venus, linked to the concepts of beauty and contemplation, is a symbol of spiritual elevation through the arts, nature, knowledge and love. Creation is possible only through love: Love is therefore at the foundation of the cosmos, and only through love are the laws of the universe understood and can one approach God.
Inspired by Plotin [@canaglia; @faggin], Ficino states that pure philosophy (that devoted to God) has to deal only with those sciences that allow one to admire the predetermined paths of the stars, subject to numerical laws. Universal harmony (<<universal sympathy>>), by its very essence, reverberates on all levels of being, investing them with its own law and order, like a magical weave that envelops the whole of reality by establishing a dense causal network of invisible bonds. Harmony is born from <<opposites>>, but also from the <<similar>>, since all things are related. Knowledge of the qualities and virtues of the planets, constellations and zodiac signs, and also of the timing and progression of celestial configurations, helps to recognize the bond of concordance that links to them the acts and inclinations of men [@faracovi-1; @faracovi-2].
In addition, NG915 can also be read as a symbol of Peace overcoming the War of Arms, linked to the new Florentine climate created by the diplomatic skill of Lorenzo il Magnifico after the Pazzi Conspiracy, as already suggested by Cheney [@cheney]. It is worth noting that Ficino also studied the works of Ya'qub Ibn Ishaq al-Kindi, philosopher, scientist and theoretician of the magic arts, who lived in the 9th century and wrote the [*De Radiis*]{}, one of the most widespread manuals of magic in Western Europe [@corbin]. Chapter IX of the [*De Radiis*]{} is devoted to animal sacrifices: a dying animal fits naturally into the ordinary concatenation of events; an intentionally killed animal, on the other hand, temporarily disrupts a balance and sets itself against the course of nature which, once altered, opens a gap in the order of the real and creates zones of interference that halt the ordinary flow of things. Al-Kindi speaks of the sacrificial act as the creation of a <<natal theme>> [@melis; @katins].
But just as there is a macrocosmic order among the forces that govern the world, whose cycles are qualitatively similar to human ones, there is likewise a microcosmic astronomy: imagination is an <<astro>> which creates a causal <<astral>> impulse, like a seminal power that can be activated only by virtue of the faith and intentional strength that the operator, the [*alter deus*]{}, is able to set in motion, that is, through the psychic life, acting at the same time on the qualities and <<signaturae>> of nature.
Marsilio Ficino not only translated [*Hermes Trismegisto*]{}, but also elaborated his own magical practices in accordance with Christian Neoplatonism. Like Hermes Trismegisto, Ficino places man at the physical center of the world, conceived as a unity and a totality. The symbolism of the magic arts was vast, and it is found in the ancient astronomical charts which contained, besides the zodiac, a myriad of strange figures, each associated with a planet, a star or a constellation [@garin; @treccanimagia]. The quality of a "magician" was considered to be in direct relation to his ability to grasp the symbols and relationships between the things of Earth and those of the Heavens [^19].
#### The marriage in NG915.
The size and shape of the painting suggest that the work was commissioned for a wedding, more precisely to be placed on the lid of a hope chest. On July 19, 1482 Lorenzo di Pierfrancesco dei Medici (called il Popolano) and Semiramide Appiani (niece of Simonetta) were married [^20]. This union was arranged by Lorenzo il Magnifico, who had earlier thought of marrying his younger brother Giuliano to Semiramide because he coveted the iron mines on the island of Elba owned by the Appiani family (note that iron is also the symbol of Mars). The marriage of an Appiani girl to a Medici man revived the myth of Simonetta, whom Semiramide apparently resembled.
Lorenzo di Pierfrancesco dei Medici was born in Florence on August 4, 1463 and, at the death of his father on July 19, 1476, was placed under the guardianship of Lorenzo il Magnifico and Giuliano. His teachers were Giorgio Antonio Vespucci, Marsilio Ficino, Naldo Naldi and Angelo Poliziano [@biotreccani]. Taking advantage of his role, in 1478 Lorenzo il Magnifico took more than 53,000 florins in cash from Pierfrancesco's inheritance to cope with the crisis that had hit the Roman branch of the Medici bank following the Pazzi Conspiracy. Moreover, Pierfrancesco was acquainted with the Vespucci family, especially with Amerigo Vespucci, who dedicated to him his treatise [*Mundus novus*]{} (1504)[^21]. With Botticelli, Pierfrancesco had a lasting relationship, commissioning various works from him. It seems that Pierfrancesco ordered the [*Primavera*]{} and the [*Nascita di Venere*]{} from Botticelli to adorn his bedroom in the palace in Via Larga. Historians, however, do not all agree on the commission; some claim that the paintings were given to Lorenzo il Popolano by Lorenzo il Magnifico, who had actually ordered the works, after the 'Lodo Scala'. On the other hand, Lorenzo il Magnifico was at that time also the patron of the [*Banchetto di Nastagio*]{}, painted by Botticelli around 1483.
The reference to the Vespucci is also confirmed by the motif of the wasps in the upper right, highlighted for the first time by Gombrich [@gombrich] and quite unusual given the winter setting of the painting. Here we emphasize that such insects are particularly active between July and August, the presumed period of the marriages referred to above and the month of Lorenzo di Pierfrancesco's birth. On the other hand, the quoted sonnets by Lorenzo il Magnifico represent an exegetical key for NG915, being a clear allusion to the real patron of the work. Along with the transfiguration of tragic, political, historical and personal events according to the Ficinian 'astronomy/philosophy', in the author's opinion the facetious aspects are to be read as an invitation to enjoy the happy moments of life - such as a marriage - because: <<Chi vuol essere lieto, sia: di doman non c'è certezza>> (Lorenzo il Magnifico, "The Triumph of Bacco and Arianna").
The iconography could therefore have been chosen as a wish from Lorenzo il Magnifico - who is widely represented in the compositional scheme - to the bride and groom who, given the correspondence and affinity with Giuliano dei Medici and Simonetta Vespucci, might be Lorenzo di Pierfrancesco dei Medici and Semiramide Appiani. A Medici commission for NG915 had also been suggested by Salvini [@salvini].
Astronomical <<signaturae>> as disambiguating keys for *Venus and Mars* and Venus’ mythological representation {#astronomical-signaturae-as-disambiguating-keys-for-venus-and-mars-and-venus-mythological-representation .unnumbered}
==========================================================================================================================
As is often the case with Botticelli's works, we are confronted with a symbolic truth expressed on many levels, with a dense weave of allegories and philosophical concepts and a play between the ambivalence of meanings and reality itself.
In a recent book [@paoli], Paoli speculates that NG915 depicts the parody of a myth whose targets were Simonetta and Giuliano: the painting would allude to the missed coupling of Venus and Mars, to Ganymede, to the supposed masturbation of Mars typical of the god Pan, to the mockery of the satyrs, to the resentment of the Vespucci family towards the Medici, to a negative judgment on the adulterous Simonetta by Savonarola's followers, and so on.
The present study aims instead at providing an astronomical reading of the pictorial composition, while leaving open the possibility that different levels of communication and interpretation coexist. A heavenly connection, though of a mythological rather than astronomical nature, had been mentioned by Langton Douglas [@douglas], but was never afterwards fully and seriously investigated.
This preliminary analysis already highlights remarkable, non-negligible coincidences between, for example, the motion of the planets (reflected even in Venus' jewel) and the meanings of the star names on the one hand, and the characters and stories that the NG915 painting invites us to consider on the other. In particular $\epsilon$ [*Aquarii*]{}, Albali, can be read as a signature of Botticelli, according to the verses written by il Magnifico about the artist's greed.
The heliacal rising, in particular, is clearly the thread by which it is possible to decipher Ficino's message and the historical events; in addition, the conjunction of Venus and Mars with Aquarius and Capricorn provides the date, i.e. 1482, the same year as the first printed edition of the [*Poeticon Astronomicon*]{} by Ratdolt (in which Aquarius and Capricorn show similarities with the figures of Venus and Mars in NG915).
Furthermore, specific astronomical phenomena such as the heliacal rising of stars or planets in conjunction could be the key to interpreting some mythological figures, as in the case of Venus, which quite often appeared in heliacal rising in the so-called "Celestial Waters" region of the sky, i.e. where Aquarius is located, during the spring equinox at Mesopotamian latitudes. Indeed Venus belongs to the old pantheon: according to Esiodo she was the daughter of the Heavens and the Sea, or of Uranus and Gaia, and Ishtar was often depicted in conjunction with the Sun in the [*Lucifero*]{} aspect [^22]. Clear evidence of the Venus/Aquarius association is, for example, the megalographic wall fresco from the peristylium of the [*Casa di Venere in Conchiglia*]{} at Pompeii.
![\[venere\_pompei\] Megalographic wall fresco from the peristylium of the [*Casa di Venere in Conchiglia*]{} in Pompeii.](venere_pompei){width="12.0cm"}
The fresco shows one of the most scenographic examples of the motif, widely diffused in classical antiquity, of Venus lying on a shell. Venus holds a fan in her right hand, while her left holds the veil, which swells in the wind like the stream of water of Aquarius. The hairstyle shows the typical Flavian curls. The goddess wears a diadem, a necklace, and gold bracelets on her wrists and ankles. The garden was originally embellished with myrtle plants and Gallic roses. The legs are crossed in a typical pattern; the right leg and the toes of the left foot show, at first glance, evident similarities with those of Venus in NG915. The residential [*domus*]{}, located in the amphitheater quarter, was brought to light only in 1952. The strong coincidence with the Venus of NG915 is nonetheless impressive, and suggests that Botticelli, during his stay in Rome just before 1482, saw this iconography of Venus in some Roman collection.
Conclusions {#conclusions .unnumbered}
===========
The analysis presented in this essay can be considered a basis for further insights from specialists in the various disciplines (historical, philosophical, artistic, exegetical, literary) necessarily called into play by this astronomical study, in the hope of opening new, targeted readings.
![\[Fig10\]Stellar configuration, at Florence, around the vernal equinox of 1477.](figure10a){width="\textwidth"}
![\[Fig10\]Stellar configuration, at Florence, around the vernal equinox of 1477.](figure10b){width="\textwidth"}
For example, it would be desirable to have a more in-depth study of the astrological (classical) aspects concerning the actual positions of the planets, as well as of the celestial correspondences of the plants as set forth in Marsilio Ficino's treatise, in order to explore further astronomical configurations and allegories [^23].
If we accept the astronomical interpretation, we also confirm the reading according to which [*Pallade ed il Centauro*]{}, [*Primavera*]{} [^24], and [*Venus and Mars*]{} may have been conceived to represent, first, the absolute victory of Love over brutality and the lower instincts (the Centaur), then Love as a regenerating force which, through fertility, revives the Earth in April (Spring), and finally the contemplation of Love, namely the ascension of the soul to the Heavens in the harmony of opposites (Venus and Mars). Better still, if we also include the [*Nascita di Venere*]{}, the works as a whole celebrate the cyclical rhythm of being through the vital principle: birth, life and death.
Moreover, according to Ficino in the [*De Vita Coelitus comparanda*]{}, Venus, Jupiter and the Sun are the three "stars" propitious to man, associated with the Three Graces of Spring [^25]. These three stars always come bearing the gifts of Joy, Splendor and Freshness, which are precisely the translations of the Greek names of the Three Graces: Euphrosyne, Aglaia and Thalia respectively. In the wake of the analysis presented here, we report the astronomical ephemerides of the spring equinox of 1477 in figure \[Fig10\]. Note the Mercury-Mars and Sun-Jupiter conjunctions, with Venus in Aquarius, and the lunar conjunctions with these planets in the approach to the vernal equinox. Several years ago Gombrich pointed out [@gombrich2] that there exists a letter of 1477 sent by Ficino to the teenage Lorenzo di Pierfrancesco, containing a moral exhortation in the form of an allegorical horoscope to fix his eyes on Venus, the [*Humanitas*]{}, following Cicerone's pedagogical advice to use visual teaching tools. The letter was accompanied by a note addressed to his educators, Giorgio Antonio Vespucci and Naldo Naldi, urging them to make him memorize its contents, since the young Popolano, an irascible character, did not possess the virtue of [*humanitas*]{}.
As a conclusive provocation, I would like to point out that the planetary configuration reported on the spring equinox of 1477 did not occur in the years that have been hypothesized for dating Botticelli’s [*Primavera*]{}.
#### Acknowledgments
[40]{}
M. Paoli, [*Venere Marte, Parodia di un adulterio nella Firenze di Lorenzo il Magnifico*]{}, Edizioni ETS (2017)
ESA's website for the Gaia Scientific Community, https://www.cosmos.esa.int/web/gaia/home
F. Stoppa, [*Breve Storia della cartografia Celeste*]{}, http://www.eanweb.com/2011/breve-storia-della-cartografia-celeste-occidentale
F. Stoppa, [*Atlas Coelestis*]{}, http://www.atlascoelestis.com
[*Certissima Signa*]{}, Scuola Normale Superiore di Pisa
A. Cesta, [*Sulle Origini delle Costellazioni*]{}, Tesi di laurea, Scuola di Scienze, Dipartimento di Fisica e Astronomia, Corso di Laurea magistrale in Astrofisica e Cosmologia (2015-2016)
K. Lippincott, [*The Saxl Project: Manuscripts, Illustrations, Bibliography*]{}
P. Giovetti, [*La Modella del Botticelli*]{}, Edizioni Studio Tesi (2015)
J. R. Allan, [*Simonetta Cattaneo Vespucci: beauty, politics, literature and art in early Renaissance Florence*]{}, PhD Thesis, University of Birmingham (2014)
Dizionario Biografico, Enciclopedia Treccani
G. Zappella, "Una frontiera ambigua: ibridazioni iconografiche nei libri di soggetto astronomico del sec. XVII", in [*Le seicentine dell'Osservatorio astronomico di Capodimonte*]{}, edited by E. Olostro Cirella, Napoli: Giannini Editore (2017), p. 48
M. Davies, [*National Gallery Catalogue. The Earlier Italian Schools*]{}, London, The Trustees (1961), pp. 99-101
F. Zöller, [*Sandro Botticelli*]{}, Munich-Berlin-London-New York, Prestel (2009), pp. 124-130
W. von Bode, [*Sandro Botticelli*]{}, Berlin, Im Propyläen Verlag (1921)
A. Schmarsov, [*Sandro del Botticello*]{}, Dresden, Reinesser (1923)
E. Tietze-Conrat, [*Botticelli and the Antique*]{}, The Burlington Magazine, 47 (1925)
C. Gamba, [*Botticelli*]{}, Milano, Hoepli (1936)
D. Bellingham, [*Aphrodite Deconstructed: Botticelli's Venus and Mars in the National Gallery, London*]{}, Leiden-Boston, Brill (2010)
E. Guidoni, [*Venere e Marte di Sandro Botticelli, una nuova interpretazione*]{}, in Studi Giorgeneschi, VII, Roma, Palombi Editori (2003), pp. 7-14
M. L. D'Ancona, [*The garden of the Renaissance, botanical symbolism in Italian painting*]{}, Leo S. Olschki Editore (1977)
E. Gombrich, [*Botticelli's Mythologies. A study in the Neoplatonic Symbolism of his circle*]{}, Journal of the Warburg and Courtauld Institutes, 8 (1945)
R. Klibansky, E. Panofsky, F. Saxl, [*Saturno e la melanconia*]{}, Einaudi, Torino (1983)
M. Boskovitz, [*Botticelli*]{}, Budapest, Corvina (1964), pp. 42-92
A. B. Ferruolo, [*Botticelli's Mythologies, Ficino's De Amore, Poliziano's Stanze per la Giostra: Their Circle of Love*]{}, The Art Bulletin, XXXVII (1995), 1, pp. 17-25
N. A. Robb, [*Neoplatonism of the Italian Renaissance*]{}, London, George Allen & Unwin, LTD (1935)
R. J. M. Olson, [*Studies in the later works of Sandro Botticelli*]{}, Princeton University (1975), pp. 202-204
M. Canaglia, C. Guidelli, A. Linguiti, F. Moriani, [*Plotino, Enneadi*]{}, vol. II, Utet, Torino (1997)
G. Faggin, [*Plotino, Enneadi*]{}, VI 4, 40-41, trad. it. a cura di Rusconi, Milano (1992), pp. 687-689
O. Pompeo Faracovi, [*M. Ficino, Scritti sull'astrologia*]{}, Milano, Rizzoli (1999)
O. Pompeo Faracovi, [*Scritto negli astri. L'astrologia nella cultura dell'Occidente*]{}, Venezia, Marsilio (1996), pp. 199-218
L. Cheney, [*Quattrocento Neoplatonism and Medici Humanism in Botticelli's Mythological Paintings*]{}, New York, London University Press (1985), pp. 66-70
H. Corbin, [*Storia della Filosofia Islamica*]{}, Adelphi, Milano (1989), pp. 164-167
A. Melis, [*Armonie cosmiche e consonanze magiche*]{}, http://users.unimi.it/gpiana/dm6/dm6armam.htm
T. Katinis, [*Studi ed Edizioni delle Opere di Marsilio Ficino dal 1986 al 2000*]{}, http://www.ficino.it/it/bibliografia-ficiniana
E. Garin, [*Lo zodiaco della vita*]{}, Laterza, Bari (1982)
[*Il Rinascimento: magia e astrologia*]{}, Enciclopedia Treccani, http://www.treccani.it/enciclopedia/il-rinascimento-magia-e-astrologia\_28Storia-della-Scienza\_29
R. Salvini, [*Enciclopedia Universale dell'Arte*]{}, II, Novara, De Agostini (1958), coll. 750-760
R. Langton Douglas, [*Piero di Cosimo*]{}, Chicago, University Press (1946), p. 52
E. Gombrich, [*Symbolic Images. Studies in the art of the Renaissance*]{}, Phaidon (1972), pp. 31-81
[^1]: For an exhaustive list of all the interpretations of NG915, please refer to the recent book by Marco Paoli [*Venere Marte, Parodia di un adulterio nella Firenze di Lorenzo il Magnifico*]{}.
[^2]: Apart from a very limited number of Arabic globes, a representation of the sky of Hipparchus and Ptolemy survives thanks to the celestial globe on the shoulder of the [*Atlante Farnese*]{} in the National Archaeological Museum of Naples.
[^3]: One can also add the translations of Germanicus between the 1st century BC and the 1st century AD, of Avienio in the 4th century, and that of an anonymous monk of the French monastery of Corbie in the eighth century, the [*Aratus Latinus Primitivo*]{}. Some of these translations (those of Cicero, Germanicus and the Aratus Latinus) have been handed down to us in illustrated manuscripts, and in some cases they preserve traces of their precursor models.
[^4]: Son of Harun al-Rashid, who had set up a personal library called Bayt al-Hikma, "The House of Wisdom", which Al-Ma'mun enlarged to create the richest library of the whole Islamic world.
[^5]: Author's translation from Italian.
[^6]: For details, see references [@stoppa; @stoppaatlanti].
[^7]: Author’s translation from: <<Secondo infatti i più antichi platonici dalle sue ragioni l’Anima del Mondo ha costruito accanto alle stelle figure e parti di figure tali che sono esse stesse figure di un certo tipo conferendo anche determinate proprietà a ciascuna di esse. Inoltre nelle stelle - nelle loro figure, parti e proprietà - sono contenute tutte le specie di cose che si trovano nel mondo inferiore e le loro proprietà>>.
[^8]: It is worth mentioning that at the time of Botticelli the Gregorian calendar had not yet entered into force, so in principle the date of the equinox should be shifted back by about ten days; the astronomical software, however, handles the conversion through its Julian-date converter.
[^9]: Here and in what follows, for the astronomical details refer to [@certissimasigna].
[^10]: << O chiara stella che coi raggi tuoi / togli alle vicine stelle il lume, / perchè splendi assai più del tuo costume?/ Perchè con Febo ancor contender vuoi?/ Forse i belli occhi, quali ha tolti a noi/ morte crudel, che ormai troppo presume,/ accolti hai in te: adorna del loro nume,/ il suo bel carro a Phebo chieder puoi./ O questo o nuova stella che tu sia,/ che di splendor novello adorni il cielo,/ chiamata essa udì, nume, i voti nostri:/ leva dello splendor tuo tanto via,/ che agli occhi, che han d’eterno pianto zelo,/ sanza altra offension lieta ti mostri>>.
[^11]: Author’s translation from:<<Botticel la cui fama non è fosca, / Botticel dico; Botticello ingordo / Ch’e più impronto e più ghiotto ch’una mosca. / Oh di quante sue ciancie hor mi ricordo, / Se gli è invitato à desinar, ò cena, / Quel che l’invita non lo dice a sordo. / Non s’apre a l’invitar la bocca a pena, / Ch’ e’ se ne viene, e al pappar non sogna, / Va Botticello, e torna botte piena>>
[^12]: At the time of Eratosthenes and Hipparchus the winter solstice was still in this constellation, but because of the precession of the equinoxes, today the winter solstice falls in Sagittarius.
[^13]: Author’s translation from <<Fu di alta statura un corpo ben proporzionato con pettorali ampi e sporgenti. Aveva le braccia tornite e muscolose, il ventre piatto, occhi vivaci, il volto indomito, capelli lunghi e neri>>.
[^14]: Tietze-Conrat referred to a detail of the lid of a third-century sarcophagus (in the Vatican Museums) depicting two female figures reclining opposite one another against a Dionysian scene.
[^15]: << Con tanto studio lo percosse, che obcecato da quel furor che lo portava, se medesimo in una gamba gravemente offese>> (N. Machiavelli)
[^16]: For more details the reader can refer to the book by Mirella Levi D'Ancona [@mirella].
[^17]: Note that the spring equinox in the calendar of that epoch fell around March 10; the absence of blossoms may be justified since NG915 symbolically represents an astral ascent.
[^18]: <<If you fear Mars, set Venus against him, that is, Venus in harmonious aspect to Mars>>, M. Ficino, author's translation from <<Se temete Marte opponetegli Venere, ovvero Venere è in aspetto armonico a Marte>>.
[^19]: <<Nature is not an inanimate house, but is entirely traversed by attractions and repulsions. The magician is the one who, knowing the sympathies and the contrasts, and in general the quality of the bonds, is able to act on them and to connect similar things, as the farmer marries the elms to the vines>>, author's translation from <<la Natura non è una casa inanimata, ma è tutta percorsa da attrazioni e repulsioni. Il mago è colui che, conoscendo le simpatie e i contrasti, e in genere la qualità dei vincoli, è in grado di agire su di loro e di collegare cose simili, come l'agricoltore sposa gli olmi alle viti.>> ([*Disputatio contra iudicium astrologorum*]{}, 1477).
[^20]: Simonetta’s marriage is also assumed to have been celebrated towards August.
[^21]: Amerigo Vespucci was sent as an agent to the branch in Seville in the service of Lorenzo di Pierfrancesco; during his stay in Seville he developed the idea of sailing to the New World.
[^22]: A quick check with the astronomical software confirms this fact, which therefore deserves specific attention in a future study.
[^23]: In Book IV of the Enneads (chapter 32) Ficino outlines the image of the Universe as <<a unitary living being, embracing all the living beings within it and endowed with a unitary soul spread over all its parts>>, author’s translation from <<un vivente unitario, che abbraccia i viventi tutti che son nel suo interno ed è dotato di un’anima unitaria diffusa su tutte le sue parti>>.
[^24]: Both were displayed above the entrance door of the antechamber in the Palace in Via Larga where Lorenzo il Popolano lived. In the [*Primavera*]{} painting Semiramide Appiani would have been recognized in the central figure of the Three Graces, representing spiritual love, i.e. the [*Humanus*]{} Love of neoplatonic philosophy; Pierfrancesco, on the other hand, would have been portrayed in Mercurio’s clothes (and those of Mars, since he also wears a sword).
[^25]: <<The Three Graces are Jupiter, Sun and Venus. Jupiter is the Grace which is between the other two and is particularly appropriate to us>>, fifth chapter of [*De Vita Coelitus comparanda*]{}, while the sixth chapter reads <<Where it deals with our virtues, the natural, the vital and the animal ones, and which planets help them and how they do it through the aspect of the Moon to the Sun, to Venus, and especially to Jupiter>>. Author’s translation from <<Ove si tratta delle nostre virtù, quella naturale, vitale e animale, e quali pianeti le aiutano e in che modo lo fanno tramite l’aspetto della Luna al Sole a Venere e specialmente a Giove>>.
|
---
abstract: 'We review recent results obtained for charge asymmetric systems at Fermi and intermediate energies, ranging from 30 MeV/u to 1 GeV/u. Observables sensitive to the isospin dependent part of nuclear interaction are discussed, providing information on the symmetry energy behavior from sub- to supra-saturation densities.'
address:
- 'INFN-LNS, via Santa Sofia 62, I-95123 Catania, Italy'
- 'NIPNE-HH, Bucharest and Bucharest University, Romania'
- 'Physics and Astronomy Dept., University of Catania, Italy'
author:
- 'M. Colonna, V. Baran, M. Di Toro and V. Giordano'
title: Dynamical Phase Trajectories in Baryon and Isospin Density Spaces
---
Introduction
============
Nuclear reactions give us the opportunity to create transient states of nuclear matter, following several dynamical paths in temperature, baryon and isospin density spaces. By looking at appropriate mechanisms, and related observables, along these trajectories, one can try to map the behavior of the nuclear interaction away from normal conditions. In particular, we want to investigate the energy functional of asymmetric nuclear matter and constrain the term depending on the asymmetry parameter $I = (N-Z)/A$, the so-called symmetry energy, $E_{sym}$, which is still largely debated nowadays [@baranPR; @baoPR08]. Suitable parametrizations of the symmetry potential can be inserted into existing transport codes (here we will follow the SMF approach [@chomazPR]), providing predictions for isospin-sensitive observables that can be compared with experimental data. We stress that the knowledge of the Equation of State of asymmetric matter (Iso-EOS) has important implications in the context of structure studies and astrophysical problems. At Fermi energies, where one essentially explores the low-density zone of the nuclear matter phase diagram, isospin effects can be investigated in reaction mechanisms typical of this energy domain, such as deep-inelastic collisions and multifragmentation. The high density behavior of the symmetry term can be probed via isospin effects appearing in heavy ion reactions at relativistic energies (few GeV/u range). Rather isospin-sensitive observables have been proposed, based on nucleon/cluster emission, collective flows and meson production. A large symmetry repulsion at high baryon density will also lead to an “earlier” hadron-deconfinement transition in n-rich matter.
In the following, we will test an $Asysoft$ parametrization of $E_{sym}$, with an almost flat behavior below $\rho_0$ that even decreases at supra-saturation densities, against an $Asystiff$ parametrization, which decreases faster at lower densities and is much stiffer above saturation.
![\[imb\_eloss\] Imbalance ratios as a function of relative energy loss. Upper panel: separately for stiff (solid) and soft (dashed) Iso-EOS, and for two parametrizations of the isoscalar part of the interaction: MD (circles and squares) and MI (diamonds and triangles), in the projectile region (full symbols) and the target region (open symbols). Lower panel: quadratic fit to all points for the stiff (solid), resp. soft (dashed) Iso-EOS.](fig1_nn09.eps){width="20pc"}
Low density behavior of $E_{sym}$ : Isospin equilibration
=========================================================
In this section we focus on the mechanisms connected to isospin transport in binary events at Fermi energies. This process involves nucleon exchange through the low density neck region and hence it is sensitive to the low density behavior of $E_{sym}$ [@tsang92; @isotr07; @sherry].
Within a first order approximation of the transport dynamics, the relaxation of a given observable $x$ towards its equilibrium value can be expressed as: $x_{P,T}(t) - x^{eq} = (x^{P,T} - x^{eq})~e^{-t/\tau}$, where $x^{P,T}$ is the initial $x$ value for the projectile (P) or the target (T), $x^{eq} = (x^P + x^T)/2$ is the full equilibrium value, $t$ is the elapsed time and $\tau$ is the relaxation time, which depends on the mechanism under study. The degree of isospin equilibration reached in the collision can be inferred by looking at isospin dependent observables in the exit channel, such as the N/Z of PLF and TLF. Using the dissipated kinetic energy as a measure of the contact time $t$, one can finally extract the information on the relaxation time $\tau$, which is related to the symmetry energy. It is rather convenient to construct the so-called imbalance ratio, $R^x_{P,T} = {(x_{P,T}-x^{eq})} / {|x^{P,T}-x^{eq}|}~$ [@tsang92]. Within our approximation, it simply reads: $R_{P,T} = \pm e^{-t/\tau}$. The simple arguments developed above are confirmed by full simulations of (Sn,Sn) collisions at 35 and 50 MeV/u [@isotr07]. In figure \[imb\_eloss\] we report the correlation between $R_{P,T}$ and the total kinetic energy loss, which is used as a selector of the reaction centrality and, hence, of the contact time $t$. In the bottom part of the figure, where all results are collected together, one can see that all the points essentially follow a given line, depending only on the symmetry energy parametrization adopted. A larger equilibration (smaller $R$) is observed in the $Asysoft$ case, corresponding to the larger value of $E_{sym}$. An experimental study of isospin diffusion as a function of the dissipated kinetic energy has been performed recently, by looking at the isotopic content of the light charged particle emission as an indicator of the N/Z of the PLF [@Indra]. This analysis points to a symmetry energy behavior in between the two adopted parametrizations, in agreement with other recent estimates [@bettynew].
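As a purely illustrative numerical sketch of the first-order relaxation model above (the relaxation times used here are hypothetical values, not extracted from the SMF simulations), one can write:

```python
import numpy as np

# First-order relaxation of an isospin observable x (e.g. the N/Z of the PLF):
#   x_{P,T}(t) - x^{eq} = (x^{P,T} - x^{eq}) * exp(-t/tau)
# which gives the imbalance ratio R_{P,T}(t) = +/- exp(-t/tau).

def imbalance_ratio(t, tau, projectile=True):
    """Imbalance ratio R_{P,T} at contact time t for a relaxation time tau."""
    sign = 1.0 if projectile else -1.0
    return sign * np.exp(-t / tau)

# Hypothetical relaxation times (fm/c): a larger E_sym at low density (Asysoft)
# drives faster isospin equilibration, i.e. a shorter tau.
tau_asysoft, tau_asystiff = 60.0, 100.0

for t in (40.0, 80.0, 120.0):          # contact times in fm/c
    print(f"t = {t:5.1f} fm/c   R_P(soft) = {imbalance_ratio(t, tau_asysoft):.2f}"
          f"   R_P(stiff) = {imbalance_ratio(t, tau_asystiff):.2f}")
# A smaller |R| (more equilibration) is obtained in the Asysoft case, in line
# with the trend shown in the figure above.
```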
{width="20pc"} *(Figure 2: correlation between the fragment N/Z and the kinetic energy for the two symmetry energy parametrizations; see text.)*

\[iso\_kin\]
Isospin distillation in central collisions
==========================================
In central collisions at 30-50 MeV/u, where the full disassembly of the system into many fragments is observed, one can specifically study properties of liquid-gas phase transitions occurring in asymmetric matter [@baranPR; @baoPR08; @chomazPR]. For instance, in neutron-rich matter, phase co-existence leads to a different asymmetry in the liquid and gaseous phases: fragments (liquid) appear more symmetric with respect to the initial matter, while light particles (gas) are more neutron-rich. This sharing of the neutron excess optimizes the energy balance and is ruled by the derivative of the symmetry energy with respect to density. Recently we have proposed to investigate the correlations between the distillation mechanism and the underlying expansion dynamics of the fragmenting system. In fact, in neutron-rich (neutron-poor) systems, neutrons (protons) are more strongly repelled than protons (neutrons), building interesting correlations between the fragment $N/Z$ and kinetic energy, which are sensitive to the symmetry energy parametrization adopted in the calculations, see figure 2. As one can see in the figure, larger (negative) slopes are obtained in the $Asystiff$ case, corresponding to the lower value of the symmetry energy at low density. This appears as a promising experimental observable to be investigated, though fragment secondary effects are expected to reduce the sensitivity to the Iso-EOS [@col07].
Isospin effects at high baryon density
======================================
{width="8.5cm"} 0.3cm
.
\[fastratios\]
The problem of the Momentum Dependence in the Isovector channel ($Iso-MD$) of the nuclear interaction (leading to a neutron/proton effective mass splitting) is still very controversial and it would be extremely important to get more definite experimental information [@BaoNPA735; @rizzoPRC72]. Exotic beams at intermediate energies are of interest in order to have high momentum particles and to test regions of high baryon (isoscalar) and isospin (isovector) density during the reaction dynamics. We present here some results for reactions induced by $^{132}Sn$ beams on $^{124}Sn$ targets at 400 MeV/u [@vale08]. For central collisions we can reach, in the interacting zone, baryon densities of about $1.7-1.8~ \rho_0$ in a transient time of the order of 15-20 fm/c. In figure 3 we show the $(n/p)$ and $^3H/^3He$ yield ratios at freeze-out, for two choices of the mass splitting, vs. transverse momentum $p_T$ (upper curves) and kinetic energy (lower curves). In this way we can separate particle emission from sources at different densities. We note a clear decreasing trend only in the case $m^*_p < m^*_n$, corresponding to a larger proton repulsion. Similar results are obtained for the $Asysoft$ and $Asystiff$ parametrizations. Hence these data seem suitable to disentangle $Iso-MD$ effects, rather than the stiffness of the symmetry energy. An interesting dependence on the effective mass splitting is observed also for other observables, such as collective flows, which are also sensitive to the $Asy$-stiffness at high density [@vale08].
Perspectives
============
We have reviewed some aspects of the phenomenology associated with nuclear reactions, from which new hints are emerging to constrain the EOS of asymmetric matter. The greatest theoretical uncertainties concern the high density domain, which has the largest impact on the understanding of the properties of neutron stars. In the near future, thanks to the availability of both stable and rare isotope beams, more selective analyses, also based on new exclusive observables, are expected to provide further stringent constraints.
[9]{}

V. Baran, M. Colonna, V. Greco, M. Di Toro, Phys. Rep. 410 (2005) 335.

B.A. Li, L.W. Chen, C.M. Ko, Phys. Rep. 465 (2008) 113.

P. Chomaz, M. Colonna, J. Randrup, Phys. Rep. 389 (2004) 263.

M.B. Tsang et al., Phys. Rev. Lett. 92 (2004) 062701.

J. Rizzo et al., Nucl. Phys. A806 (2008) 79.

S. Wuenschel et al., Phys. Rev. C79 (2009) 061602.

E. Galichet et al., Phys. Rev. C79 (2009) 064615.

M.B. Tsang et al., Phys. Rev. Lett. 102 (2009) 122701.

M. Colonna et al., Phys. Rev. C78 (2008) 064618.

B.-A. Li, B. Das Champak, S. Das Gupta, C. Gale, Nucl. Phys. A735 (2004) 563.

J. Rizzo, M. Colonna, M. Di Toro, Phys. Rev. C72 (2005) 064609.

V. Giordano, Master Thesis, Univ. of Catania (2008).
|
---
abstract: 'Boosting algorithms have been widely used to tackle a plethora of problems. In the last few years, a lot of approaches have been proposed to provide standard AdaBoost with cost-sensitive capabilities, each with a different focus. However, for the researcher, these algorithms shape a tangled set with diffuse differences and properties, lacking a unifying analysis to jointly compare, classify, evaluate and discuss those approaches on a common basis. In this series of two papers we aim to revisit the various proposals, both from theoretical (Part I) and practical (Part II) perspectives, in order to analyze their specific properties and behavior, with the final goal of identifying the algorithm providing the best and soundest results.'
address: 'Signal Theory and Communications Department, University of Vigo, Maxwell Street, 36310, Vigo, Spain'
author:
- 'Iago Landesa-Vázquez, José Luis Alba-Castro'
---
AdaBoost ,Classification ,Cost ,Asymmetry ,Boosting
Introduction {#sec:Intro1}
============
The classical approach to solve a classification problem is based on the use of a single expert that must be able to build a solution classifier. However, in the last few decades, a new classification paradigm, based on the combination of several experts in a distributed decision process, has arisen and attracted the attention of the Machine Learning community. The success of this paradigm relies on several theoretical, practical and even biological reasons (such as generalization properties, complexity, data handling, data source fusion, etc.) making these *Ensemble Classifiers* [@Polikar06] preferable to classical ones in many scenarios.
One of the milestones on the history of ensemble methods was the work published by Robert E. Schapire in 1990 [@Schapire90], in which the author proves the equivalence between *weak* learners, algorithms able to generate classifiers performing only slightly better than random guessing, and *strong* learners, those generating classifiers which are correct in all but an arbitrarily small fraction of the instances. This new model of learnability, in which weak learners can be *boosted* to achieve strong performance when they are properly combined, paved the way to one of the most prominent families of algorithms within the ensemble classifiers paradigm: *boosting*.
In 1997, Yoav Freund and Robert E. Schapire [@FreundSchapire97] proposed a more general boosting algorithm called AdaBoost (from Adaptive Boosting). Unlike previous approaches, AdaBoost does not require any prior knowledge of the weak hypothesis space, and it iteratively adjusts to the weak hypotheses that become part of the ensemble. Apart from theoretical guarantees and practical advantages over its predecessors, early experiments on AdaBoost also showed a surprising resistance to overfitting. As a consequence of all these qualities, AdaBoost has received attention “rarely matched in computational intelligence” [@Polikar06], being an active research topic in the fields of machine learning, pattern recognition and computer vision [@Schapire98; @SchapireSinger99; @Opitz99; @Friedman00; @MeaseWyner08a; @ViolaJones04; @MasnadiVasconcelos11; @LandesaAlba12] to the present day.
Throughout this time, several studies have been conducted to analyze AdaBoost from different points of view, relating the algorithm with different theories: margin theory [@Schapire98], entropy [@Kivinen99], game theory [@FreundSchapire96], statistics [@Friedman00], etc. In the same way, numerous AdaBoost and boosting variants have been proposed for the two-class and multiclass problems: Real AdaBoost [@SchapireSinger99; @Friedman00], LogitBoost [@Friedman00], Gentle AdaBoost [@Friedman00], AsymBoost [@ViolaJones02], AdaCost [@Fan99], AdaBoost.M1 [@FreundSchapire96b], AdaBoost.M2 [@FreundSchapire96b], AdaBoost.MH [@SchapireSinger99], AdaBoost.MO [@SchapireSinger99], AdaBoost.MR [@SchapireSinger99], JointBoosting [@Torralba04], AdaBoost.ECC [@GuruswamiSahai99] etc.
Among the different kinds of classification problems, one common subset is that of tasks with clearly different costs depending on each possible decision, or scenarios with very unbalanced class priors in which one class is extremely more frequent or easier to sample than the other one. In such *cost-sensitive* or *asymmetric* conditions (disaster prediction, fraud detection, medical diagnosis, object detection, etc.) classifiers must be able to focus their attention on the rare/most valuable class. Many works in the literature have been devoted to cost-sensitive learning [@Elkan01; @Provost97; @Weiss03], including a significant set of proposals on how to provide AdaBoost with asymmetric properties (e.g. [@Fan99; @Ting00; @ViolaJones04; @ViolaJones02; @Sun07; @MasnadiVasconcelos07; @MasnadiVasconcelos11; @LandesaAlba12; @LandesaAlba13]). The link between AdaBoost and cost-sensitive learning is of special interest since AdaBoost is the learning algorithm inside the widespread Viola-Jones object detector framework [@ViolaJones04], a seminal work in computer vision dealing with a markedly asymmetric problem and an enormous number of weak classifiers (on the order of hundreds of thousands).
The different AdaBoost asymmetric variants proposed in the literature are very heterogeneous, and their related works are focused on emphasizing the possible advantages of each respective method, rather than building a common framework to jointly classify, analyze and discuss the different approaches. The final result is that, for the researcher, these algorithms shape a confusing set with no clear theoretical properties to rule their application in practical problems.
In this series of two papers we try to classify, analyze, compare and discuss the different proposals on Cost-Sensitive AdaBoost algorithms, in order to gain a unifying perspective. Our final goal is finding a definitive scheme to directly translate any cost-sensitive learning problem to the AdaBoost framework and shedding light on which algorithm can ensure the best performance.
The current article is focused on the theoretical part of our work and it is organized as follows: the next section focuses on standard AdaBoost and its related theoretical framework, Section \[sec:CSvar\] is devoted to clustering and explaining, in a homogeneous notational framework, the different cost-sensitive AdaBoost variants proposed in the literature, and in Section \[sec:Discuss\] we analyze in depth those algorithms with a fully theoretical derivation scheme. Finally, we present the preliminary conclusions (Section \[sec:Conclusions1\]) that will be culminated in the accompanying paper covering the experimental part of our work.
AdaBoost {#sec:AdaBoost}
========
Let us define $\mathbf{X}$ as the random process from which our observations $\mathbf{x}=\left(x_{1},\ldots,x_{N}\right)^{T}$ are sampled, and $Y$ the random variable governing the related labels $y \in \{-1,1\}$. In this scenario, a *detector* $H(\mathbf{x})$ (we will also refer to it as *classifier* or *hypothesis*) is a function trying to guess the label $y$ of a given sample $\mathbf{x}$, and it can be defined in terms of a more generic function $f\left(\mathbf{x}\right) \in \mathbb{R}$ which we will call *predictor*.
$$\label{pred_eqn}
H(\mathbf{x})=\mathrm{sign}\left[f\left(\mathbf{x}\right)\right]$$
Suppose we have a training set of $n$ examples $\mathbf{x}_i$ with their respective labels $y_i$, a weight distribution $D(i)$ over them and a *weak learner* able to select, according to labels and weights, the best detector $h(\mathbf{x})$ from a predefined collection of weak classifiers. In this scenario, the role of AdaBoost is to compute a goodness measure $\alpha$ depending on the performance obtained by the selected weak classifier, and to update, accordingly, the weight distribution to emphasize misclassified training examples. Then, with a different weight distribution, the weak learner can make a new hypothesis selection and the process restarts. By iteratively repeating this scheme ((\[alphat\_eqn\]), (\[rt\_eqn\]), (\[weight\_rule\_eqn\]), (\[zt\_eqn\])) with $t$ indexing the number of learning rounds, AdaBoost obtains an ensemble of weak classifiers with respective goodness parameters $\alpha_{t}$.
$$\label{alphat_eqn}
\alpha_{t}=
\frac{1}{2} \log \left(\frac{1+r_{t}}{1-r_{t}}\right)$$
$$\label{rt_eqn}
r_{t}=
\sum_{i=1}^{n}D_{t}(i) y_{i} h_{t} (\mathbf{x}_{i})$$
$$\label{weight_rule_eqn}
D_{t+1}(i)=
\frac{D_{t}(i)\exp\left(-\alpha_{t} y_{i} h_{t}(\mathbf{x}_{i})\right)}{Z_{t}}$$
$$\label{zt_eqn}
Z_{t}=
\sum_{i=1}^{n} D_{t}(i) \exp\left(-\alpha_{t} y_{i} h_{t}(\mathbf{x}_{i})\right)$$
The weak hypothesis search in AdaBoost is guided by the maximization of the goodness $\alpha_{t}$ of each selected classifier, which is equivalent to maximizing, at each iteration, the weighted correlation $r_{t}$ (\[rt\_eqn\]) between labels $y_{i}$ and predictions $h_{t}$. This iterative search process can continue until a predefined number $T$ of training rounds has been completed or some performance goal is reached. The final AdaBoost *strong detector* $H(\mathbf{x})$ is defined (\[stradb\_eqn\]) in terms of a boosted *predictor* $f(\mathbf{x})$ built as an ensemble of the selected weak classifiers weighted by their respective goodness parameters $\alpha_{t}$.
$$\label{stradb_eqn}
H(\mathbf{x})=\mathrm{sign}\left(f(\mathbf{x})\right)=\mathrm{sign}\left(\sum_{t=1}^{T}\alpha_{t}h_{t}(\mathbf{x})\right)$$
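To make this iterative scheme concrete, the following is a minimal illustrative sketch of Discrete AdaBoost with exhaustive decision stumps as the weak learner (a toy implementation written for this discussion, not the code of any of the referenced works; the stump search strategy is an assumption of the sketch):

```python
import numpy as np

def adaboost_train(X, y, T=50):
    """Minimal Discrete AdaBoost with decision stumps as weak classifiers.
    Implements the weighted correlation r_t, the goodness alpha_t and the
    exponential weight update described above. Labels y take values in {-1,+1}."""
    n, d = X.shape
    D = np.full(n, 1.0 / n)                          # initial weights D_1(i)
    ensemble = []
    for _ in range(T):
        best = None
        for j in range(d):                           # weak learner: exhaustive stump search
            for thr in np.unique(X[:, j]):
                for s in (+1, -1):
                    h = s * np.where(X[:, j] > thr, 1, -1)
                    r = float(np.sum(D * y * h))     # weighted correlation r_t
                    if best is None or r > best[0]:
                        best = (r, j, thr, s, h)
        r, j, thr, s, h = best
        r = np.clip(r, -1 + 1e-12, 1 - 1e-12)        # numerical safeguard
        alpha = 0.5 * np.log((1 + r) / (1 - r))      # goodness alpha_t
        D = D * np.exp(-alpha * y * h)               # exponential re-weighting
        D = D / D.sum()                              # normalization by Z_t
        ensemble.append((alpha, j, thr, s))
    return ensemble

def adaboost_predict(ensemble, X):
    """Strong detector H(x) = sign(sum_t alpha_t * h_t(x))."""
    f = np.zeros(X.shape[0])
    for alpha, j, thr, s in ensemble:
        f += alpha * s * np.where(X[:, j] > thr, 1, -1)
    return np.sign(f)
```

For binary weak classifiers, selecting the stump with maximum $r_t$ is equivalent to selecting the one with the least weighted error, as discussed in the next subsection.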
Error Bound Minimization {#subsec:GeneralizedVersion}
------------------------
Robert E. Schapire and Yoram Singer proposed [@SchapireSinger99], from the original derivation of AdaBoost, a generalized and simplified analysis that models the algorithm as an additive (round-by-round) minimization process of an exponential bound on the strong classifier training error ($E_{T}$). This bounding process is explained in equation[^1] (\[bound\_ineq\_eqn\]), from which all the AdaBoost equations we have presented, weight update rule included, can be derived [@LandesaAlba12].
$$\label{bound_ineq_eqn}
{\begin{array}{c}
\underbrace{\strut H(\mathbf{x}_{i})\neq y_{i} \:\Rightarrow\: y_{i} f(\mathbf{x}_{i}) \leq 0 \:\Rightarrow\: \exp\left(-y_{i} f(\mathbf{x}_{i})\right) \geq 1}\\
\Downarrow\\
E_{T}= \sum_{i=1}^{n} D_{1}(i) \llbracket H(\mathbf{x}_{i}) \neq y_{i}\rrbracket \leq \sum_{i=1}^{n} D_{1}(i) \exp \left( -y_{i} f(\mathbf{x}_{i}) \right)
\end{array}
}$$
After (\[bound\_ineq\_eqn\]), the final bound of the training error obtained by AdaBoost can be expressed as (\[et\_bound\_eqn\]), and the additive minimization of the exponential bound $\tilde{E}_{T}$ can be seen as finding, in each round, the weak hypothesis $h_{t}$ that maximizes $r_{t}$, the weighted correlation between labels $(y_{i})$ and predictions $(h_{t})$.
$$\label{et_bound_eqn}
E_{T}\leq
\prod_{t=1}^{T}Z_{t}\leq
\prod_{t=1}^{T}\sqrt{1-{r_{t}}^2}=\tilde{E}_{T}$$
When the weak hypotheses are binary, $h_{t}(\mathbf{x})\in\{-1,+1\}$, the last inequality in (\[et\_bound\_eqn\]) becomes an equality, and the parameter $\alpha_{t}$ can be directly rewritten (\[alphat2\_eqn\]) in terms of the weighted error $\epsilon_{t}$ of the current weak classifier. As can be seen, the minimization process turns out to be equivalent to simply selecting the weak classifier with the least weighted error.
$$\label{round_err_eqn}
\epsilon_{t}=
\sum_{i=1}^{n} D_{t}(i) \llbracket h_{t}(\mathbf{x}_{i}) \neq y_{i}\rrbracket=
\sum_{\textrm{err}}D_{t}(i)$$
$$\label{alphat2_eqn}
\alpha_{t}=
\frac{1}{2} \log \left( \frac{1-\epsilon_{t}}{\epsilon_{t}}\right)$$
In line with other works, for the sake of simplicity and clarity, we will focus our analysis on this *Discrete* version of AdaBoost using binary weak classifiers, which does not prevent our conclusions from being extended to other variations of the algorithm. Also, trying to define a homogeneous notational framework for our work, we have unified the different notations found in the literature into that used by Schapire and Singer [@SchapireSinger99]. A summary of AdaBoost can be found in Algorithm \[adb\_algorithm\] (all the algorithms discussed in this paper are detailed, with homogeneous notation, in Appendix \[app:algorithms\]).
Statistical View of Boosting {#subsec:Statistical View}
----------------------------
One of the milestones in boosting research and the foundation of many variations of AdaBoost is the highly-cited contribution by Jerome H. Friedman et al. [@Friedman00] in which a statistical reinterpretation of boosting is given. Following the exponential criterion seen in the last subsection, Friedman et al. showed that AdaBoost can be motivated as an iterative algorithm building an additive logistic regression model $f(\mathbf{x})$ that minimizes the expectation of the exponential loss, $J(f(\mathbf{x}))$:
$$\label{loss_eqn}
J(f(\mathbf{x}))=\E\left[\exp(-yf(\mathbf{x}))\right]$$
This defined loss is effectively minimized at $$\label{regress_eqn}
f(\mathbf{x})=\frac{1}{2}\log\left(\frac{\Prob(y=1|\mathbf{x})}{\Prob(y=-1|\mathbf{x})}\right)$$ so a direct connection between boosting and additive logistic regression models is drawn. According to this statistical perspective, AdaBoost predictions can be seen as estimations of the posterior class probabilities, which has served as basis to develop many extensions and variants of the algorithm (among them, the Cost-Sensitive Boosting scheme [@MasnadiVasconcelos07; @MasnadiVasconcelos11]).
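As a quick numerical sanity check of this minimizer (an illustrative check, not part of the original derivation in [@Friedman00]), one can evaluate the conditional expected loss $p\,e^{-f}+(1-p)\,e^{f}$ on a grid of predictor values:

```python
import numpy as np

p = 0.8                                           # an illustrative value of P(y=1|x)
f_grid = np.linspace(-3.0, 3.0, 100001)
loss = p * np.exp(-f_grid) + (1 - p) * np.exp(f_grid)   # conditional expected loss

f_numeric = f_grid[np.argmin(loss)]
f_theory = 0.5 * np.log(p / (1 - p))              # the logistic-transform minimizer
print(round(f_numeric, 4), round(f_theory, 4))    # both ~ 0.6931
```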
It is important to mention that, despite the huge and unquestionable value of the statistical view, some enriching controversy has arisen about inconsistencies of this interpretation, revealed by empirical evidence [@MeaseWyner08a; @Bennet08; @MeaseWyner08b].
Cost-Sensitive Variants of AdaBoost {#sec:CSvar}
===================================
Cost-sensitive classification problems can be fully portrayed by a cost matrix [@Elkan01] whose components map the loss of each possible result. For two-class problems there are four kinds of results: true positives, true negatives, false positives and false negatives; so the cost matrix $\mathbf{C}$ can be defined as follows:
$$\label{cost_matrix}
\begin{tabular} {c c c c c c l l l}
& & \multicolumn{2}{c}{Actual} & & & \\
& & Negative & Positive & & &\\
\multirow{2}{*}{$\mathbf{C}=$} & \multirow{2}{*}{{\Huge(}} & $c_{nn}$ & $c_{np}$ & \multirow{2}{*}{{\Huge)}} & Negative & \multirow{2}{*}{Classified}\\
& & $c_{pn}$ & $c_{pp}$ & & Positive & &
\end{tabular}$$
The optimal decision for a given cost matrix will not change if all its coefficients are added a constant, or if they are multiplied by a constant positive factor. As a result, a cost matrix for two-class classification problems only has two degrees of freedom and can be parametrized by only two coefficients: false negatives normalized cost ($\overline{c}_{np}$) and true positives normalized cost ($\overline{c}_{pp}$):
$$\label{simple_C}
\mathbf{C}=\left(
\begin{array} {c c}
0 & \overline{c}_{np}\\
1 & \overline{c}_{pp}\\
\end{array}
\right)$$
In the most common case, correct decisions have null associated costs ($c_{nn}=c_{pp}=0$), so $\mathbf{C}$ is eventually left with only one degree of freedom: the ratio between the cost of errors on positives ($c_{np}$) and the cost of errors on negatives ($c_{pn}$). In the literature and in most practical problems, cost requirements are usually specified by these two error parameters, which, for simplicity, we will denote as $C_P$ and $C_N$ respectively.
$$\label{simple_C2}
\mathbf{C}=\left(
\begin{array} {c c}
0 & C_P/C_N\\
1 & 0\\
\end{array}
\right)
\rightarrow \left(
\begin{array} {c c}
0 & C_P\\
C_N & 0\\
\end{array}
\right)$$
The coefficients of a cost matrix may not be constant in general. While constant coefficients model a scenario where all the examples of each class have the same cost (class-level asymmetry), variable coefficients mean that examples belonging to the same class can have different costs (example-level asymmetry). Whatever the scenario, it is also important to notice that, for “reasonableness” [@Elkan01], correct predictions in a cost matrix should have lower associated costs than mistaken ones ($c_{nn}<c_{pn}$ and $c_{pp}<c_{np}$).
Bearing in mind that class-level asymmetry is the most common one in detection problems, and that example-level asymmetry can be modeled by a class-level asymmetry scheme with a resampled training dataset, for our analysis we have homogenized the different asymmetric AdaBoost approaches to the class-level scheme. Thus, we will follow a prototypical cost-sensitive detection setting specified by two constant coefficients $C_P$ and $C_N$, which can alternatively be described by the “normalized cost asymmetry” of the problem $\gamma \in (0,1)$:
$$\gamma=\frac{C_{P}}{C_{P}+C_{N}}$$
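For illustration, a minimal helper (our own notation, simply mirroring the class-level setting just described) that maps the two error costs to the parameters used in the rest of the paper could be:

```python
def cost_parameters(c_np, c_pn):
    """Class-level cost setting with null costs for correct decisions:
    c_np is the cost of an error on a positive (false negative, i.e. C_P) and
    c_pn the cost of an error on a negative (false positive, i.e. C_N)."""
    C_P, C_N = float(c_np), float(c_pn)
    gamma = C_P / (C_P + C_N)
    return C_P, C_N, gamma

# Errors on positives four times costlier than errors on negatives:
print(cost_parameters(4.0, 1.0))      # -> (4.0, 1.0, 0.8)
# Scaling both costs by the same positive factor leaves gamma (and hence the
# optimal decisions) unchanged:
print(cost_parameters(8.0, 2.0))      # -> (8.0, 2.0, 0.8)
```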
Despite the widespread use of these particularizations, in Appendix \[app:cost\_scen\] we will extend our conclusions to example-level asymmetry and also cases in which correct classification costs are nonzero.
It is also important to emphasize that this work is focused on AdaBoost and its cost-sensitive variants, a family of methods in the literature that are based on an exponential loss minimization criterion, analogous to the one giving rise to the original algorithm (as we have seen in Sections \[subsec:GeneralizedVersion\] and \[subsec:Statistical View\] from different points of view). Other boosting algorithms based on other kinds of losses beyond the exponential paradigm, like the binomial log-likelihood [@Friedman00] or the p-norm loss [@LozanoAbe08], are outside the scope of the current study.
Classification {#subsec:Classification}
--------------
In order to give a clear overview of the cost-sensitive variants of AdaBoost proposed in the literature, we suggest an analytical classification scheme to cluster them into three categories according to the way asymmetry is reached: *A posteriori*, *Heuristic* and *Theoretical*.
### A Posteriori {#subsec:APosteriori}
The seminal face/object detector framework by Paul Viola and Michael J. Jones [@ViolaJones04] uses a validation set to modify, after training, the threshold of the original (cost-insensitive) AdaBoost strong classifier. The goal is to adjust the balance between false positive and detection rates, building, that way, a cost-sensitive boosted classifier:
$$\label{vjmod_eqn}
\tilde{H}(\mathbf{x})=\mathrm{sign}\left(f(\mathbf{x})-\phi\right)=\mathrm{sign}\left({\sum_{t=1}^{T}\alpha_{t}h_{t}(\mathbf{x})}-\phi\right)$$
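In practice $\phi$ is tuned on a validation set to reach a prescribed operating point. A minimal sketch of one plausible selection rule (an assumption made for illustration, not the exact procedure of the Viola-Jones cascade) is:

```python
import numpy as np

def select_threshold(f_val, y_val, target_detection=0.99):
    """Choose phi on a validation set so that the thresholded detector
    sign(f(x) - phi) misses at most a fraction (1 - target_detection)
    of the positive validation examples."""
    pos_scores = np.sort(f_val[y_val == 1])
    n_miss = int(np.floor((1.0 - target_detection) * len(pos_scores)))
    # place phi just below the (n_miss + 1)-th smallest positive score
    return pos_scores[n_miss] - 1e-12

# usage (f_val: boosted predictor scores on validation data, y_val in {-1, +1}):
# phi = select_threshold(f_val, y_val, target_detection=0.995)
# H_tilde = np.sign(f_val - phi)
```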
Besides the great success of the detection framework, the authors themselves acknowledge that neither does this a posteriori cost-sensitive tuning ensure that the selected weak classifiers are optimal for the asymmetric goal [@ViolaJones02], nor do their modifications preserve the original AdaBoost training and generalization guarantees [@ViolaJones04].
A useful insight on this can be drawn from the analysis by Masnadi-Shirazi and Vasconcelos [@MasnadiVasconcelos11]. According to the Bayes Decision Rule, the optimal predictor $f^{*}(\mathbf{x})$ can be expressed in terms of the optimal predictor for a cost-insensitive scenario $f^{*}_{0}(\mathbf{x})$ and a threshold $\phi$ depending on the costs.
$$\label{pred_thresh}
f^{*}\left(\mathbf{x}\right)=\log\left(\frac{\Prob_{Y|\mathbf{X}}\left(1|\mathbf{x}\right)C_{P}}{\Prob_{Y|\mathbf{X}}\left(-1|\mathbf{x}\right)C_{N}}\right)=\log\left(\frac{\Prob_{Y|\mathbf{X}}\left(1|\mathbf{x}\right)}{\Prob_{Y|\mathbf{X}}\left(-1|\mathbf{x}\right)}\right)-\log\left(\frac{C_N}{C_P}\right)=f^*_0(\mathbf{x})-\phi$$
As a consequence, for any cost requirements, the optimal cost-sensitive *detector* $H^{*}(\mathbf{x})$ can also be expressed as a threshold on the cost-insensitive optimal *predictor* $f^{*}_{0}(\mathbf{x})$.
$$\label{optimal_detector}
H^*\left(\mathbf{x}\right)=\mathrm{sign}\left[f^*(\mathbf{x})\right]=\mathrm{sign}\left[f^*_0(\mathbf{x})-\phi\right]$$
In practical terms, however, learning algorithms do not have access to the exact probability distributions and they must approximate this optimal detector rule. Thus, AdaBoost can be seen as an algorithm obtaining an approximation ($\hat{H}_0(\mathbf{x})$) to the optimal cost-insensitive *detector*, built by means of an estimation ($\hat{f}_0(\mathbf{x})$) of the cost-insensitive *predictor* ().
$$\label{adaboost_estimation}
\hat{H}_0(\mathbf{x})=\mathrm{sign}\left[\hat{f}_0(\mathbf{x})\right]=\mathrm{sign}\left({\sum_{t=1}^{T}\alpha_{t}h_{t}(\mathbf{x})}\right) \approx H^*_0(\mathbf{x})$$
By definition, the purpose of AdaBoost is to obtain a *detector* as close as possible to the optimal one, and this optimality is ensured if the learned *predictor* satisfies two necessary and sufficient conditions:
$$\label{costinsens_cond}
{\begin{array}{c}
\hat{H}_0(\mathbf{x})=H_0^*(\mathbf{x}) \\
\Updownarrow \\
\begin{cases}
\hat{f}_0\left(\mathbf{x}\right)=f_0^*(\mathbf{x})=0 & \text{if } \Prob_{Y|\mathbf{X}}(1|\mathbf{x})=\Prob_{Y|\mathbf{X}}(-1|\mathbf{x})\\
\mathrm{sign}\left[\hat{f}_0\left(\mathbf{x}\right)\right]=\mathrm{sign}\left[f_0^*(\mathbf{x})\right] & \text{if } \Prob_{Y|\mathbf{X}}(1|\mathbf{x}) \neq \Prob_{Y|\mathbf{X}}(-1|\mathbf{x})
\end{cases}
\end{array}
}$$
As can be seen, in order to reach optimal *detection* the predictor learned by AdaBoost should match the optimal predictor in the boundary region, but only its sign elsewhere. Analogously, optimal detection in the cost-sensitive case would be ensured by two equivalent conditions:
$$\label{costsens_cond}
{\begin{array}{c}
\hat{H}(\mathbf{x})=H^*(\mathbf{x}) \\
\Updownarrow \\
\begin{cases}
\hat{f}\left(\mathbf{x}\right)=f^*(\mathbf{x})=0 & \text{if } \Prob_{Y|\mathbf{X}}(1|\mathbf{x})C_P=\Prob_{Y|\mathbf{X}}(-1|\mathbf{x})C_N\\
\mathrm{sign}\left[\hat{f}\left(\mathbf{x}\right)\right]=\mathrm{sign}\left[f^*(\mathbf{x})\right] & \text{if } \Prob_{Y|\mathbf{X}}(1|\mathbf{x})C_P \neq \Prob_{Y|\mathbf{X}}(-1|\mathbf{x})C_N
\end{cases}
\end{array}
}$$
Thus, optimality conditions required by the *a posteriori* modification of the AdaBoost threshold would be as follows:
$$\label{threshmod_cond}
{\begin{array}{c}
\hat{H}(\mathbf{x})=H^*(\mathbf{x}) \\
\Updownarrow \\
\begin{cases}
\hat{f}_0\left(\mathbf{x}\right)=f_0^*(\mathbf{x})=\phi & \text{if } \Prob_{Y|\mathbf{X}}(1|\mathbf{x})C_P=\Prob_{Y|\mathbf{X}}(-1|\mathbf{x})C_N\\
\mathrm{sign}\left[\hat{f}_0\left(\mathbf{x}\right)-\phi\right]=\mathrm{sign}\left[f_0^*(\mathbf{x})-\phi\right] & \text{if } \Prob_{Y|\mathbf{X}}(1|\mathbf{x})C_P \neq \Prob_{Y|\mathbf{X}}(-1|\mathbf{x})C_N
\end{cases}
\end{array}
}$$
Bearing in mind that the AdaBoost predictor $\hat{f}_0(\mathbf{x})$ is geared to satisfy (\[costinsens\_cond\]), the optimality conditions for *threshold modification* are not necessarily fulfilled. The only way to meet these requirements for any cost would be that the predictor obtained by AdaBoost matched the optimal one over the whole space, which is an obviously stronger condition than actually required. Moreover, recalling the exponential bounding equation on which AdaBoost is based (\[bound\_ineq\_eqn\]), we can see that, once the sign of the obtained predictor matches the right label, the error bound is further minimized for increasing absolute values of the estimated predictor, no matter how close they are (or not) to the optimal predictor value.
As a consequence, there are no guarantees that a threshold change on the classical AdaBoost predictor will give us a cost-sensitive detector oriented to be optimal. Nonetheless, this non-optimality has not prevented the asymmetric detectors obtained with the Viola-Jones framework from being used very successfully for object detection.
### Heuristic {#subsec:Heuristic}
Most of the proposed cost-sensitive variations of AdaBoost [@Fan99; @Ting00; @ViolaJones02; @Sun07] try to deal with asymmetry through direct manipulations of the weight update rule (\[weight\_rule\_eqn\]), but they are not full reformulations of AdaBoost for cost-sensitive scenarios. Masnadi-Shirazi and Vasconcelos pointed out that this kind of manipulations “provide no guarantees of asymptotic convergence to a good cost-sensitive decision rule” [@MasnadiVasconcelos11], considering those algorithms as “heuristic” modifications of AdaBoost [@MasnadiVasconcelos07; @MasnadiVasconcelos11].
Although these proposals have, in greater or lesser extent, some theoretical basis, for the sake of clarity and distinctiveness in our analysis, we will maintain the term *heuristic*, as used in [@MasnadiVasconcelos07; @MasnadiVasconcelos11], to label this group of approaches based on the arbitrary modification of the weight update rule, as opposed to the full *theoretical* derivations we will delve into in the next subsection.
### AsymBoost {#subsubsec:AsymBoost .unnumbered}
Assuming the non-optimality of the strong classifier threshold adjustment procedure in their object detector framework (Section \[subsec:APosteriori\]), Paul Viola and Michael J. Jones proposed a different scheme, coined as AsymBoost [@ViolaJones02], trying to optimize AdaBoost for cost-sensitive classification problems.
Dismissing asymmetric weight initialization as “naive” and only “somewhat effective” due to “AdaBoost’s balanced reweighting scheme” (we will discuss this point in Section \[subsec:weight\]), AsymBoost proposes to distribute the asymmetric emphasis across rounds by modulating the weights before each one. In practical terms, the only change is multiplying the weights $D(i)$ by a constant factor $(C_P/C_N)^{y_i/2T}$ before every learning step of a $T$-round process. As a consequence, the overall asymmetric factor seen by positive examples across the whole process is $C_P/C_N$ times the factor seen by negatives.
$$\label{asb_equation}
D(i)_{t+1} = \frac{D_{t}(i)\exp\left(-\alpha_{t}y_{i}h_{t}\left(\mathbf{x}_i\right)\right)\left(\frac{C_P}{C_N}\right)^{\frac{y_{i}}{2T}}}{\sum_{i=1}^{n}D_{t}(i)\exp\left(-\alpha_{t}y_{i}h_{t}\left(\mathbf{x}_i\right)\right)\left(\frac{C_P}{C_N}\right)^{\frac{y_{i}}{2T}}}$$
AsymBoost, which reduces to AdaBoost when costs are uniform, is detailed in Algorithm \[asb\_algorithm\] (Appendix \[app:algorithms\]).
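A minimal sketch of the resulting per-round update (only the re-weighting step of the algorithm; variable names are ours) is:

```python
import numpy as np

def asymboost_reweight(D, y, alpha, h, C_P, C_N, T):
    """One AsymBoost weight update: the usual exponential re-weighting followed
    by the per-round asymmetric modulation (C_P/C_N)**(y_i/(2T)) and the
    normalization. Labels y and predictions h take values in {-1, +1}."""
    modulation = (C_P / C_N) ** (y / (2.0 * T))
    D_new = D * np.exp(-alpha * y * h) * modulation
    return D_new / D_new.sum()

# Across the whole T-round process positives accumulate a factor (C_P/C_N)**(1/2)
# and negatives (C_P/C_N)**(-1/2), i.e. an overall ratio of C_P/C_N between classes.
```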
Though the global AsymBoost procedure seems theoretically sound, the *equitable* asymmetry sharing among a *fixed* number of rounds entails significant problems: Why should such a rigid equitable sharing procedure be optimal when inserted in an adaptive framework such as AdaBoost? Why should we have to know the number of training rounds in advance, when standard AdaBoost does not require that? Note that standard AdaBoost allows flexible performance tests to decide when to stop training, since any change in the total number of rounds is directly performed by training new additional rounds or trimming the current ensemble. However, a change in the size of the final ensemble (number of rounds) would strictly require AsymBoost to re-train the whole classifier with a new asymmetry distribution.
### AdaCost {#adacost .unnumbered}
Wei Fan et al. proposed [@Fan99] a cost-sensitive variation of AdaBoost called AdaCost. The idea behind AdaCost is to modify the weight update rule so that examples with higher costs have sharper weight increases after misclassification but lighter decreases when they are successfully classified. This scheme is essentially addressed by introducing a misclassification adjustment function $\beta(i)$ into the weight update rule (\[weight\_rule\_eqn\]).
$$\label{adc_weights}
D_{t+1}(i) = \frac{D_{t}(i)\exp\left(-\alpha_{t}y_{i}h_{t}\left(\mathbf{x}_i\right)\beta(i)\right)}{\sum_{i=1}^{n}D_{t}(i)\exp\left(-\alpha_{t}y_{i}h_{t}\left(\mathbf{x}_i\right)\beta(i)\right)}$$
The misclassification adjustment function must depend on the cost ($C(i)$) of each example/class and on the success or failure of its classification. As a result, $\beta(i)$ is required to be non-decreasing with respect to $C(i)$ when classification fails, and non-increasing when classification succeeds. This opens the door to a huge number of functions satisfying such requirements, from which the authors chose the following:
$$\beta(i)=
\left\{
\begin{array}{ll}
0.5 \left(1-C(i)\right) & \mbox{$\text{if } h_{t}(\mathbf{x}_{i}) = y_{i}$},\\
0.5 \left(1+C(i)\right) & \mbox{$\text{if } h_{t}(\mathbf{x}_{i}) \neq y_{i}$}.
\end{array} \right.$$
As can be seen, AdaCost does not reduce to AdaBoost for uniform costs and also applies a cost-dependent weight pre-emphasis (see Algorithm \[adc\_algorithm\]).
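A minimal sketch of the AdaCost re-weighting step (only the update rule; the computation of $\alpha_t$ and the weight pre-emphasis are detailed in Algorithm \[adc\_algorithm\]) is:

```python
import numpy as np

def adacost_reweight(D, y, alpha, h, cost):
    """AdaCost weight update with the adjustment function beta(i) chosen by
    Fan et al.: 0.5*(1 - C(i)) when the example is correctly classified and
    0.5*(1 + C(i)) when it is misclassified. `cost` holds C(i) per example."""
    beta = np.where(h == y, 0.5 * (1.0 - cost), 0.5 * (1.0 + cost))
    D_new = D * np.exp(-alpha * y * h * beta)
    return D_new / D_new.sum()
```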
### CSB0, CSB1 and CSB2 {#csb0-csb1-and-csb2 .unnumbered}
Following the same idea of modifying the weight update rule, the CSB (an acronym for Cost-Sensitive Boosting) family of algorithms [@Ting98; @Ting00] proposes three different updating schemes depending on which parameters are involved, resulting in the CSB0, CSB1 and CSB2 algorithms (see the respective Algorithms \[csb0\_algorithm\], \[csb1\_algorithm\] and \[csb2\_algorithm\]). These rules are complemented, for all three alternatives, by an asymmetric weight initialization and a minimum expected cost criterion for strong classification replacing the usual weighted voting scheme:
$$H(\mathbf{x})=\mathrm{sign}\left(\sum_{t=1}^{T}\alpha_{t}h_{t}(\mathbf{x}) \left( C_P \llbracket h_{t}(\mathbf{x})=+1 \rrbracket + C_N \llbracket h_{t}(\mathbf{x})=-1 \rrbracket \right)\right)$$
This new voting rule gives emphasis, at run time, to the weak hypotheses deciding in favor of the costly class. Of the three alternatives, only the last one, CSB2, reduces to standard AdaBoost when costs are equal.
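A minimal sketch of this minimum expected cost voting rule (illustrative code, with array shapes chosen for convenience) is:

```python
import numpy as np

def csb_strong_classifier(weak_outputs, alphas, C_P, C_N):
    """CSB minimum expected cost voting: each weak vote is additionally weighted
    by the cost of the class it predicts (C_P when it votes +1, C_N when it
    votes -1). `weak_outputs` is a (T, n) array with values in {-1, +1}."""
    weak_outputs = np.asarray(weak_outputs, dtype=float)
    alphas = np.asarray(alphas, dtype=float)
    class_cost = np.where(weak_outputs == 1, C_P, C_N)
    f = np.sum(alphas[:, None] * weak_outputs * class_cost, axis=0)
    return np.sign(f)
```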
### AdaC1, AdaC2 and AdaC3 {#adac1-adac2-and-adac3 .unnumbered}
Defining new ways to modify the weight update rule, Yanmin Sun et al. [@Sun05; @Sun07] proposed another family of asymmetric AdaBoost alternatives called AdaC1, AdaC2 and AdaC3. These variants couple the cost factor in different parts of the update equation: inside the exponent (AdaC1), outside the exponent (AdaC2) and both (AdaC3):
$$\label{ac1_update}
D_{t+1}(i) = \frac{D_{t}(i)\exp\left(-\alpha_{t}c_{i}y_{i}h_{t}\left(\mathbf{x}_i\right)\right)}{\sum_{i=1}^{n}D_{t}(i)\exp\left(-\alpha_{t}c_{i}y_{i}h_{t}\left(\mathbf{x}_i\right)\right)}$$
$$\label{ac2_update}
D_{t+1}(i) = \frac{c_{i}D_{t}(i)\exp\left(-\alpha_{t}y_{i}h_{t}\left(\mathbf{x}_i\right)\right)}{\sum_{i=1}^{n}c_{i}D_{t}(i)\exp\left(-\alpha_{t}y_{i}h_{t}\left(\mathbf{x}_i\right)\right)}$$
$$\label{ac3_update}
D_{t+1}(i) = \frac{c_{i}D_{t}(i)\exp\left(-\alpha_{t}c_{i}y_{i}h_{t}\left(\mathbf{x}_i\right)\right)}{\sum_{i=1}^{n}c_{i}D_{t}(i)\exp\left(-\alpha_{t}c_{i}y_{i}h_{t}\left(\mathbf{x}_i\right)\right)}$$
Unlike previous approaches, these changes in the weight update are also propagated to the way the goodness parameter $\alpha_t$ is defined and, as a consequence, influence how the weak classifier error is computed (see Algorithms \[ac1\_algorithm\], \[ac2\_algorithm\], \[ac3\_algorithm\]). All these variants reduce to AdaBoost when the cost function $C(i)$ is 1 for all examples.
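The three update rules can be sketched jointly as follows (only the re-weighting step; each variant computes $\alpha_t$ in its own way, as detailed in the corresponding Algorithms):

```python
import numpy as np

def adac_reweight(D, y, alpha, h, c, variant="C2"):
    """Weight updates of AdaC1, AdaC2 and AdaC3: the cost c(i) enters the
    exponent (C1), multiplies the weight outside the exponent (C2), or both (C3)."""
    if variant == "C1":
        D_new = D * np.exp(-alpha * c * y * h)
    elif variant == "C2":
        D_new = c * D * np.exp(-alpha * y * h)
    elif variant == "C3":
        D_new = c * D * np.exp(-alpha * c * y * h)
    else:
        raise ValueError("variant must be 'C1', 'C2' or 'C3'")
    return D_new / D_new.sum()
```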
### Theoretical {#subsec:Theoretical}
The methods in the previous subsection have one key point in common: the starting point of their derivations is an arbitrary modification of the weight update rule. However, as can be easily shown following the work by Schapire and Singer [@SchapireSinger99], the weight update in standard AdaBoost is actually a *consequence* of the error minimization procedure (\[bound\_ineq\_eqn\]) and not an arbitrary starting point of it. Thus, the way to reach theoretically sound cost-sensitive boosting algorithms should be to walk the path in the opposite direction: designing a new asymmetric derivation scheme to obtain a new full formulation (that may include a new weight update rule), instead of partially adapting previous equations.
There are three alternatives in the literature that follow different theoretically sound derivation schemes reaching cost-sensitive variants of AdaBoost: Cost-Sensitive AdaBoost [@MasnadiVasconcelos07; @MasnadiVasconcelos11], AdaBoostDB [@LandesaAlba13] and Cost-Generalized AdaBoost [@LandesaAlba12].
### Cost-Sensitive AdaBoost {#cost-sensitive-adaboost .unnumbered}
The Cost-Sensitive Boosting framework proposed by Hamed Masnadi-Shirazi and Nuno Vasconcelos [@MasnadiVasconcelos07; @MasnadiVasconcelos11] has its roots in the Statistical View of Boosting [@Friedman00], adapting the standard loss in equation (\[loss\_eqn\]) with asymmetric exponential arguments for each class component.
$$\label{csloss_eqn}
J(f(\mathbf{x}))=\E\left[\llbracket y=1 \rrbracket \exp\left(-C_{P}f(\mathbf{x})\right) + \llbracket y=-1 \rrbracket \exp\left(C_{N}f(\mathbf{x})\right)\right]$$
This asymmetric loss is theoretically minimized by the asymmetric logistic transform of $\Prob\left(y=1|\mathbf{x}\right)$ (see Section \[subsec:Statistical View\]), which should ensure cost-sensitive optimality.
$$\label{stat_sol_asym}
\begin{split}
f(x)=\frac{1}{C_{P}+C_{N}}\log\frac{C_{P}\Prob\left(y=1|\mathbf{x}\right)}{C_{N}\Prob\left(y=-1|\mathbf{x}\right)}
\end{split}$$
The empirical minimization of the asymmetric loss proposed by Masnadi-Shirazi and Vasconcelos follows a gradient descent scheme on the space of boosted (combined and modulated) binary weak classifiers, resulting in the Cost-Sensitive AdaBoost algorithm shown in Algorithm \[csa\_algorithm\]. As can be seen, the final solution involves hyperbolic functions and scalar search procedures, being far more complex and computationally demanding than the original AdaBoost.
### AdaBoostDB {#adaboostdb .unnumbered}
Following the generalized analysis of AdaBoost [@SchapireSinger99] instead of the Statistical View of Boosting, a different approach to provide AdaBoost with cost-sensitive properties through a fully theoretical derivation procedure is presented in [@LandesaAlba13]. This algorithm, coined AdaBoostDB (from Double Base), is based on the use of different exponential bases $\beta_P$ and $\beta_N$ for each class error component, thus defining a class-dependent error bound to minimize.
$$\label{exp_bound_eqn_asym}
E_{T} \leq \tilde{E}_T = \sum_{i=1}^{m} D_{1}(i){\beta_P}^{ -y_{i} f(\mathbf{x}_{i})} + \sum_{i=m+1}^{n} D_{1}(i){\beta_N}^{ -y_{i} f(\mathbf{x}_{i})}$$
On the one hand, the derivation scheme followed and the polynomial model used to address the problem enable a different and extremely efficient formulation, able to achieve over 99% training time savings with respect to Cost-Sensitive AdaBoost (see Algorithm \[abdb\_algorithm\]). On the other hand, this class-dependent error is fully equivalent to the cost-sensitive loss (\[csloss\_eqn\]) defined for Cost-Sensitive Boosting, so both minimizations converge to the same solution and ensure the same formal guarantees.
As a result, AdaBoostDB is a much more efficient framework to reach the same solution as Cost-Sensitive Boosting (except for numerical errors related to the different models adopted, hyperbolic vs. polynomial). However, despite its large improvement in training complexity and performance, AdaBoostDB is still much more complex than standard AdaBoost.
### Cost-Generalized AdaBoost {#cost-generalized-adaboost .unnumbered}
The asymmetric AdaBoost problem is addressed in [@LandesaAlba12] from a different theoretical perspective, realizing that one kind of modification has systematically been either overlooked or undervalued in the related literature: weight initialization.
Even though some preliminary studies by Freund and Schapire [@FreundSchapire97], creators of AdaBoost, left the initial weight distribution free to be controlled by the learner, AdaBoost is “de facto” defined, almost everywhere in the literature (e.g. [@Schapire98; @SchapireSinger99; @Fan99; @FreundSchapire99; @Ting00; @Friedman00; @Polikar06; @Sun07; @Polikar07; @MasnadiVasconcelos11]), with a fixed initial uniform weight distribution. From there, some asymmetric boosting algorithms (like AdaCost or CSB) use cost-sensitive initialization as a secondary strategy with respect to their proposed weight update rules, while others (like AsymBoost or Cost-Sensitive Boosting) immediately dismiss asymmetric weight initialization as “naive” and ineffective, arguing that the first boosting round would absorb the full introduced asymmetry and the rest of the process would remain entirely symmetric.
In [@LandesaAlba12], following a different insight to analyze AdaBoost and obtaining a novel error bound interpretation, asymmetric weight initialization is shown to be an effective way to reach cost-sensitiveness, and, as occurs with everything related to boosting, it is achieved in an additive round-by-round (asymptotic) way. All this comes with the added advantage that weight initialization is the only change needed to gain asymmetry with respect to standard AdaBoost (even the weight update rule is unchanged). Hence, for whatever desired asymmetry, both the complexity and the formal guarantees of the original AdaBoost remain intact.
In this work, we will refer to the algorithm underlying this perspective as Cost-Generalized AdaBoost (see Algorithm \[adbg\_algorithm\]).
Theoretical Algorithms: Analysis and Discussion {#sec:Discuss}
===============================================
Though in the experimental part of our work (see the accompanying paper [@LandesaAlba??b]) we will show comparative results of all the alternatives presented in the previous section, at this point we will focus our attention on the three proposals with a fully theoretical derivation scheme: Cost-Sensitive AdaBoost [@MasnadiVasconcelos07; @MasnadiVasconcelos11], AdaBoostDB [@LandesaAlba13] and Cost-Generalized AdaBoost [@LandesaAlba12]. The first important aspect we should notice is that these three proposals can be effectively analyzed as if they were only two, since Cost-Sensitive AdaBoost and AdaBoostDB, despite following different perspectives and obtaining markedly different algorithms, share an equivalent theoretical root and drive to the same solution [@LandesaAlba13]. As a consequence, if not otherwise specified, in this section we will refer to one or another interchangeably, giving priority to the name Cost-Sensitive AdaBoost due to its chronological precedence.
The Question of Weight Initialization {#subsec:weight}
-------------------------------------
As commented in Section \[subsec:Theoretical\], despite some initial studies pointing to free initial weight distributions [@FreundSchapire97] or works proposing cost-proportional weighting as an effective way to transform generic cost-insensitive learning algorithms into cost-sensitive ones [@Zadrozny03], subsequent works on boosting have insisted on two recurrent ideas: On the one hand, uniform distribution has been assumed as the “de facto” standard for weight initialization when defining AdaBoost (e.g. [@Schapire98; @SchapireSinger99; @Fan99; @FreundSchapire99; @Ting00; @Friedman00; @Polikar06; @Sun07; @Polikar07; @MasnadiVasconcelos11]); on the other hand, asymmetric weight initialization has been systematically rejected as a valid method to achieve cost-sensitive boosted classifiers, arguing that it is insufficient [@Fan99; @Ting00] or ineffective [@ViolaJones02; @MasnadiVasconcelos07; @MasnadiVasconcelos11].
However, in [@LandesaAlba12], AdaBoost is demonstrated to have inherent and sound cost-sensitive properties embedded in the way the weight distribution is initialized. In fact, the method we are referring to as Cost-Generalized AdaBoost, was not even originally proposed as a new algorithm: “it is just AdaBoost” [@LandesaAlba12] with appropriate initial weights. Such an analysis, supported by a novel class-conditional interpretation of AdaBoost, is, thus, in clear contradiction to the supposed ineffectiveness of cost-sensitive weight initialization underlying previous works.
In order to definitely clarify this contradiction, we will connect both perspectives by demonstrating the validity of asymmetric weight initialization in the same scenarios and lines of reasoning that have previously been used in the literature to dismiss its use.
### The Supposed Symmetry {#subsubsec:supposed_symmetry}
Masnadi-Shirazi and Vasconcelos [@MasnadiVasconcelos11], when explaining their Cost-Sensitive Boosting framework, immediately discard unbalanced weight initialization (calling it a “naive implementation”) with the argument that the iterative weight update in AdaBoost “quickly destroys the initial asymmetry”, obtaining a “predictor” which “is usually not different from that produced with symmetric initial conditions”. Though their statement is not explicitly supported by any further test or bibliographic reference, it seems to be extracted from the work by Viola and Jones [@ViolaJones02] in which AsymBoost is presented. In that work, the initial weight modification technique is rejected arguing that “the first classifier selected absorbs the entire effect of the initial asymmetric weights”, and assuming the rest of the process to be “entirely symmetric”. It is because of this seeming problem that AsymBoost was designed to distribute an equitable asymmetry among a fixed number of rounds.
The cost-sensitive analysis by Viola and Jones [@ViolaJones02] is illustrated by a four-round boosted classifier graphic representation that supports their conclusions against asymmetric weight initialization. However, this example can be misleading: what would happen if boosting were run for more than those four rounds? An answer can be found in Figure \[counter\_example\_1\_fig\], where we have reproduced and extended that illustrative experiment.
![Synthetic counterexample to the example by Viola and Jones [@ViolaJones02], with costs $C_P=4$ and $C_N=1$, and the same polarity as the original:(a) training set with the first four weak classifiers superimposed; (b) weak classifiers after 50 training rounds; (c) Global error evolution through 50 training rounds. Weak classifiers are stumps in the linear 2D space. Positive examples are marked as ‘$+$’, ‘$\circ$’ are the negative ones, and ‘1’ denotes the first selected weak classifier. Positives are the costly class.[]{data-label="counter_example_1_fig"}](Fig01.pdf){width="0.8\columnwidth"}
Strictly following Viola and Jones [@ViolaJones02], after Figure \[counter\_example\_1\_fig\]a we could reach the apparent conclusion that, once an initial asymmetric weak classifier has been selected, the selection of the remaining weak classifiers is not guided by an asymmetric goal. However, as shown by Schapire and Singer [@SchapireSinger99], AdaBoost is an additive minimization process and, as such, it has an *asymptotic* behavior, a kind of behavior that cannot be properly judged by stopping after only a few training rounds. Running the algorithm for many more rounds on the same example (see Figure \[counter\_example\_1\_fig\]b), we appreciate that many other subsequently selected classifiers are, at least, as asymmetric as the first one.
The class-conditional interpretation of AdaBoost in [@LandesaAlba12] shows that the asymmetry encoded by the initial weight distribution is actually translated to a cost-sensitive global error (a weighted error), and what AdaBoost is actually minimizing is a bound on that global error. Thus, instead of inspecting the individual asymmetry of each single hypothesis, the cost-sensitive behavior of AdaBoost should be evaluated, for correctness, in terms of the *cumulative contribution* of *all* the selected weak classifiers giving rise to the strong one. Figure \[counter\_example\_1\_fig\]c shows how, even in a scenario like the one proposed by Viola and Jones [@ViolaJones02], the classifier obtained by AdaBoost after an asymmetric weight initialization follows a real cost-sensitive iterative profile.
Moreover, postulates by Viola and Jones [@ViolaJones02] and Masnadi-Shirazi and Vasconcelos [@MasnadiVasconcelos11] can also be refuted by simply inverting labels on the same set (see Figure \[counter\_example\_2\_fig\]). As can be seen, no weak classifier is able to satisfy, by itself, the requirements of that “supposed” initial round absorbing the full asymmetry of the problem. However, even in such an unfavorable scenario, the desired asymmetry is effectively achieved, from cost-proportionate initial weights, after a (boosted) round-by-round cumulative process.
![Synthetic counterexample to the example by Viola and Jones [@ViolaJones02], with costs $C_P=4$ and $C_N=1$, and with opposite polarity to the original:(a) training set with the first four weak classifiers superimposed;(b) weak classifiers after 50 training rounds; (c) Global error evolution through 50 training rounds. Weak classifiers are stumps in the linear 2D space. Positive examples are marked as ‘$+$’, ‘$\circ$’ are the negative ones, and ‘1’ denotes the first selected weak classifier. Positives are the costly class.[]{data-label="counter_example_2_fig"}](Fig02.pdf){width="0.8\columnwidth"}
Further comments on these experiments can be found in Appendix \[app:comments\_figures\].
### Weight Initialization inside the Cost-Sensitive Boosting Framework {#subsubsec:weight_cost_sensitive}
Cost-Sensitive AdaBoost [@MasnadiVasconcelos11] is an algorithm that, despite having a rigorous theoretical derivation, is built upon the belief that cost-sensitive initial weighting is not a valid method to achieve asymmetric boosted classifiers. However, as we have already mentioned, the theoretical analysis in [@LandesaAlba12] refutes that supposed invalidity. A clarifying experiment at this point is to introduce asymmetric weight initialization inside the Cost-Sensitive AdaBoost theoretical framework, to assess the theoretical validity of the former with the tools used by the latter.
Based on the Statistical View of Boosting [@Friedman00], the cost-sensitive expected loss (i.e. the risk function) proposed by Masnadi-Shirazi and Vasconcelos to derive Cost-Sensitive AdaBoost consists of two class-dependent exponential components with the asymmetry embedded in their exponents:
$$\label{csloss_eqn1}
J_{CSA}(f(\mathbf{x}))=\E\left[\llbracket y=1 \rrbracket \exp(-C_{P}f(\mathbf{x})) + \llbracket y=-1 \rrbracket \exp(C_{N}f(\mathbf{x}))\right]$$
Following the derivation scheme in [@MasnadiVasconcelos11], setting the derivatives of this loss to zero yields the function of minimum expected loss (minimum risk) conditioned on $\mathbf{x}$ for Cost-Sensitive AdaBoost, which, as can be seen, is based on the asymmetric logistic transform of $\Prob(y=1|\mathbf{x})$.
$$\label{statistical_deriv_cda}
{\begin{array}{c}
J_{CSA}(f(\mathbf{x}))=\Prob\left(y=1|\mathbf{x}\right)\exp(-C_{P}f(\mathbf{x}))+\Prob\left(y=-1|\mathbf{x}\right)\exp(C_{N}f(\mathbf{x}))\\
\Downarrow\\
\dfrac{\partial J_{CSA}(f(\mathbf{x}))}{\partial f(\mathbf{x})}= -C_{P}\Prob\left(y=1|\mathbf{x}\right)\exp(-C_{P}f(\mathbf{x}))+C_{N}\Prob\left(y=-1|\mathbf{x}\right)\exp(C_{N}f(\mathbf{x}))=0\\
\Downarrow\\
\dfrac{C_{P}\Prob\left(y=1|\mathbf{x}\right)}{C_{N}\Prob\left(y=-1|\mathbf{x}\right)}=\exp((C_{P}+C_{N})f(\mathbf{x}))\\
\Downarrow\\
f_{CSA}(\mathbf{x})=\dfrac{1}{C_{P}+C_{N}}\log\left(\dfrac{C_{P}\Prob\left(y=1|\mathbf{x}\right)}{C_{N}\Prob\left(y=-1|\mathbf{x}\right)}\right)
\end{array}
}$$
Now, let us suppose that the two cost parameters $C_P$ and $C_N$, rather than appearing in the exponents, are incorporated as direct modulators of the exponentials (\[csloss\_exp\]). This procedure is equivalent to modeling the initial weight distribution by means of two uniform *class-conditional* distributions, respectively modulated by $C_{P}/\left(C_{P}+C_{N}\right)$ and $C_{N}/\left(C_{P}+C_{N}\right)$, i.e. an asymmetric weight initialization like the one proposed to give rise to Cost-Generalized AdaBoost.
$$\label{csloss_exp}
J_{CGA}(f(\mathbf{x}))=\E\left[\llbracket y=1 \rrbracket C_{P} \exp(-f(\mathbf{x})) + \llbracket y=-1 \rrbracket C_{N} \exp(f(\mathbf{x}))\right]$$
If we repeat the above derivation scheme on this new loss, we will find the function of minimum expected loss (minimum risk) conditioned on $\mathbf{x}$ for Cost-Generalized AdaBoost:
$$\label{statistical_deriv_cga}
{\begin{array}{c}
J_{CGA}(f(\mathbf{x}))=\Prob\left(y=1|\mathbf{x}\right)C_{P}\exp(-f(\mathbf{x}))+\Prob\left(y=-1|\mathbf{x}\right)C_{N}\exp(f(\mathbf{x}))\\
\Downarrow\\
\dfrac{\partial J_{CGA}(f(\mathbf{x}))}{\partial f(\mathbf{x})}= -\Prob\left(y=1|\mathbf{x}\right)C_{P}\exp(-f(\mathbf{x}))+\Prob\left(y=-1|\mathbf{x}\right)C_{N}\exp(f(\mathbf{x}))=0\\
\Downarrow\\
\dfrac{C_{P}\Prob\left(y=1|\mathbf{x}\right)}{C_{N}\Prob\left(y=-1|\mathbf{x}\right)}=\exp(2f(\mathbf{x}))\\
\Downarrow\\
f_{CGA}(\mathbf{x})=\dfrac{1}{2}\log\left(\dfrac{C_{P}\Prob\left(y=1|\mathbf{x}\right)}{C_{N}\Prob\left(y=-1|\mathbf{x}\right)}\right)
\end{array}
}$$
As can be seen, the obtained minimizer is also based on the asymmetric logistic transform of $\Prob(y=1|\mathbf{x})$, showing us that, even from the Cost-Sensitive AdaBoost derivation perspective, there is no reason to discard asymmetric weight initialization as a valid approach to build cost-sensitive boosted classifiers[^2].
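The two minimizers are easy to compare numerically. The short sketch below (with illustrative cost values) evaluates both asymmetric logistic transforms over a grid of likelihoods and confirms that they always share the same sign, i.e. the same cost-sensitive decision boundary at $\Prob(y=1|\mathbf{x}) = C_N/(C_P+C_N)$, differing only in how the predicted confidence scales with the likelihood and the costs.

```python
import numpy as np

def f_cga(p, C_P, C_N):
    # Minimizer of the Cost-Generalized AdaBoost risk (asymmetric logistic transform).
    return 0.5 * np.log(C_P * p / (C_N * (1.0 - p)))

def f_csa(p, C_P, C_N):
    # Minimizer of the Cost-Sensitive AdaBoost risk.
    return np.log(C_P * p / (C_N * (1.0 - p))) / (C_P + C_N)

C_P, C_N = 4.0, 1.0
p = np.linspace(0.01, 0.99, 99)                 # likelihood P(y=1|x)

same_sign = np.all(np.sign(f_cga(p, C_P, C_N)) == np.sign(f_csa(p, C_P, C_N)))
print("same decision boundary for every likelihood:", same_sign)
print("both predictors change sign at p =", C_N / (C_P + C_N))
```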
Comparative Analysis of the Theoretical Approaches {#subsec:algorithms_cmp}
--------------------------------------------------
As we have seen, among the three asymmetric AdaBoost algorithms with a full theoretical derivation, two of them (Cost-Sensitive AdaBoost and AdaBoostDB) lead to the same solution, while the other one (Cost-Generalized AdaBoost) has been shown to guarantee the same theoretical validity as its counterparts. At this point, we may wonder whether Cost-Generalized AdaBoost also obtains the same solution as Cost-Sensitive AdaBoost/AdaBoostDB. As we will see in the experimental part of our work (in the second part of this series of two papers [@LandesaAlba??b]), the answer to this question is “no”: classifiers obtained by Cost-Sensitive AdaBoost and Cost-Generalized AdaBoost in the same scenarios are markedly different. In this section, from a theoretical perspective, we will analyze the differences between the two algorithms, with the aim of identifying the intrinsic distinctive features of their respective classifiers.
### Error Bound Minimization {#subsubsec:error_bound_cmp}
As commented in Section \[sec:CSvar\], the most common detection problem can be parametrized by the following cost matrix:
$$\mathbf{C}
=\left(
\begin{array} {c c}
c_{nn} & c_{np}\\
c_{pn} & c_{pp}\\
\end{array}
\right)
=\left(
\begin{array} {c c}
0 & C_P\\
C_N & 0\\
\end{array}
\right)$$
We will start our comparative analysis by following the error bound minimization perspective originally proposed by Schapire and Singer [@SchapireSinger99], also used in the derivation of Cost-Generalized AdaBoost and AdaBoostDB. From that point of view, classical AdaBoost, with its initial uniform weight distribution, is an algorithm driven to minimize an exponential bound ($\tilde{E}_T$) on the training error ($E_T$) (\[std\_adb\_bound\]), as illustrated in Figure \[std\_adb\_bound\_fig\]. In that figure, the horizontal axis ($y_if(\mathbf{x}_i)$) represents the *performance score* of a classification, whose sign indicates the success (if $y_if(\mathbf{x}_i)>0$) or failure (if $y_if(\mathbf{x}_i)<0$) of the decision, and whose magnitude indicates the confidence of the classifier in its decision. The exponential bound is decreasing for increasing performance scores, so the classical AdaBoost minimization process is aimed at maximizing correct classifications and their margin (distance to the boundary), in a scenario where all the training examples follow a common cost scheme.
$$\label{std_adb_bound}
E_{T}= \sum_{i=1}^{n} \frac{1}{n} \llbracket H(\mathbf{x}_{i}) \neq y_{i}\rrbracket \leq \sum_{i=1}^{n} \frac{1}{n} \exp \left( -y_{i} f(\mathbf{x}_{i}) \right) = \tilde{E}_{T}$$
![Training error bound of AdaBoost. The loss (y-axis) associated to each decision has an exponential dependency on the performance score of the strong classifier (x-axis).[]{data-label="std_adb_bound_fig"}](Fig03.pdf){width="7.5cm"}
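A quick numeric sanity check of the bound in (\[std\_adb\_bound\]) can be done in a few lines; the synthetic performance scores below are purely illustrative.

```python
import numpy as np

# Check the exponential bound of Eq. (std_adb_bound) for arbitrary performance scores.
rng = np.random.default_rng(1)
margin = rng.normal(loc=0.3, scale=1.0, size=1000)   # toy values of y_i * f(x_i)

E_T = np.mean(margin <= 0)              # training error (uniform weights 1/n)
E_T_tilde = np.mean(np.exp(-margin))    # exponential bound
assert E_T <= E_T_tilde
print(f"E_T = {E_T:.3f} <= E_T_tilde = {E_T_tilde:.3f}")
```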
Cost-Sensitive AdaBoost and AdaBoostDB, assuming that the training set is divided into two significant subsets (positives and negatives), define two different exponential bounds ($\tilde{E}_{TP}$ and $\tilde{E}_{TN}$) with different associated costs ($C_P$ and $C_N$) over each subset. These costs are inserted as exponent modulators into each class-dependent exponential bound (\[csb\_bound\]), reaching a cost-sensitive behavior that can be graphically interpreted as shown in Figure \[csb\_bound\_fig\]. The goal is, again, to maximize correct classifications and their margin, but this time in a scenario where positives and negatives have different associated losses.
$$\label{csb_bound}
\begin{split}
E_{T} &= \sum_{i=1}^{n} \frac{1}{n} \llbracket H(\mathbf{x}_{i}) \neq y_{i}\rrbracket\\
& \leq \sum_{i=1}^{m} \frac{1}{n} \exp \left( -C_{P}y_{i} f(\mathbf{x}_{i}) \right) + \sum_{i=m+1}^{n} \frac{1}{n} \exp \left( -C_{N}y_{i} f(\mathbf{x}_{i}) \right)\\
& =\tilde{E}_{TP} + \tilde{E}_{TN} = \tilde{E}_{T}
\end{split}$$
![Training error bound of Cost-Sensitive AdaBoost and AdaBoostDB for $C_P=2$ and $C_N=1$. Loss has a class-dependent definition and is composed of two different exponential functions.[]{data-label="csb_bound_fig"}](Fig04.pdf){width="7.5cm"}
As can be seen, asymmetric modifications in Cost-Sensitive AdaBoost (and AdaBoostDB) are based on new bounds for the training error, while the error definition itself remains unchanged from original (cost-insensitive) AdaBoost.
Cost-Generalized AdaBoost, on the other hand, is based on redefining the training error and then applying the standard exponential bounding process. To achieve this, the training errors on positives (${E}_{TP}$) and on negatives (${E}_{TN}$) are computed separately and then modulated by their respective normalized costs. The resulting class-dependent weighted error components (${E}_{TP}'$ and ${E}_{TN}'$) jointly define the *cost-sensitive global training error* ($E_{T}'$). In the same way as in standard AdaBoost, each of these weighted error components can be exponentially bounded ($\tilde{E}_{TP}$ and $\tilde{E}_{TN}$), and the combination of the two resulting *class-dependent* bounds defines a cost-sensitive global bound ($\tilde{E}_{T}$) (\[cgb\_bound\]), which is the function being minimized by Cost-Generalized AdaBoost. The scenario is graphically depicted in Figure \[cgb\_bound\_fig\].
$$\label{cgb_bound}
\begin{split}
E_{T}' &= E_{TP}' + E_{TN}' = \frac{C_{P}}{C_{P}+C_{N}} E_{TP} + \frac{C_{N}}{C_{P}+C_{N}} E_{TN} \\
& = \frac{C_{P}}{C_{P}+C_{N}} \sum_{i=1}^{m} \frac{1}{m} \llbracket H(\mathbf{x}_{i}) \neq y_{i}\rrbracket + \frac{C_{N}}{C_{P}+C_{N}} \sum_{i=m+1}^{n} \frac{1}{n-m} \llbracket H(\mathbf{x}_{i}) \neq y_{i}\rrbracket \\
& \leq \frac{C_{P}}{C_{P}+C_{N}} \sum_{i=1}^{m} \frac{1}{m} \exp \left( -y_{i} f(\mathbf{x}_{i}) \right) + \frac{C_{N}}{C_{P}+C_{N}} \sum_{i=m+1}^{n} \frac{1}{n-m} \exp \left( -y_{i} f(\mathbf{x}_{i}) \right)\\
& =\tilde{E}_{TP} + \tilde{E}_{TN} = \tilde{E}_{T}
\end{split}$$
![Training error bound of Cost-Generalized AdaBoost for $C_P=2$ and $C_N=1$. Loss keeps again an exponential dependency, but now modulated by a class-dependent behavior.[]{data-label="cgb_bound_fig"}](Fig05.pdf){width="7.5cm"}
It is important to notice that, by definition, all these algorithms have the goal of obtaining the best possible classifier able to deal with the problem in a cost-sensitive sense, and that the bounding loss functions $\tilde{E}_{T}$ are a mere mathematical tool to make the minimization problem tractable. Thus, from a formal point of view, the direct definition of a cost-sensitive error to be subsequently bounded, as proposed by Cost-Generalized AdaBoost, seems to be more suitable than using the standard cost-insensitive error and manipulating its bound to be asymmetric, as suggested by Cost-Sensitive AdaBoost or AdaBoostDB.
Figure \[preval\_fig\] illustrates the prevalence of the class-dependent error bounds of the two algorithms, assuming, without loss of generality, that positives have a greater cost than negatives, $C_P>C_N$ (the opposite case can be modeled by a simple label swap). As can be seen, in Cost-Generalized AdaBoost (Figure \[preval\_fig\]a) the loss associated to positives is always greater than the loss associated to negatives, and the ratio between the two class-dependent losses remains constant along the performance scores. However, in Cost-Sensitive AdaBoost (Figure \[preval\_fig\]b), the ratio between losses varies according to the score, to the extent that class prevalence is inverted depending on which side of the success boundary ($y_if(\mathbf{x}_i)=0$) we are on.
![Class prevalence of error bounds for Cost-Generalized AdaBoost (a) and Cost-Sensitive AdaBoost (b) ($C_P=2$, $C_N=1$).[]{data-label="preval_fig"}](Fig06.pdf){width="0.9\columnwidth"}
The iterative learning process behind AdaBoost builds a predictor function $f(\mathbf{x}_i)$ aimed at progressively (round by round) minimizing the respective loss function over the training dataset. In terms of classification, this means that AdaBoost classifiers are trained not only to maximize the accuracy of the classifier over the training set, but also to maximize the margin of its decisions. So, once a training example is correctly classified, the tendency of the learner will be to keep increasing the confidence of its prediction ($\mathrm{abs}(f(\mathbf{x}_i))$) to move it away from the decision boundary ($f(\mathbf{x}_i)=0$). For Cost-Generalized AdaBoost, this means that any positive training example will always be more costly (and in the same ratio) than any negative example with the same performance score, whatever this score is. However, in the case of Cost-Sensitive AdaBoost, the prevalence ratio varies exponentially with the performance score, so, when scores are positive, negative training examples become the prevalent ones.
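This change of prevalence can be quantified directly as the ratio between the loss assigned to a positive and to a negative example at the same performance score. A minimal sketch is given below (costs are illustrative, and the class-size normalization factors are ignored).

```python
import numpy as np

# Ratio of the per-example loss of a positive to that of a negative at the same
# performance score (class-size normalization omitted); costs are illustrative.
C_P, C_N = 2.0, 1.0
scores = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])       # y_i f(x_i)

ratio_cga = (C_P * np.exp(-scores)) / (C_N * np.exp(-scores))   # Cost-Generalized AdaBoost
ratio_csa = np.exp(-C_P * scores) / np.exp(-C_N * scores)       # Cost-Sensitive AdaBoost

for s, r1, r2 in zip(scores, ratio_cga, ratio_csa):
    print(f"score {s:+.1f}:  CGA ratio = {r1:.2f}   CSA ratio = {r2:.2f}")
# CGA keeps the ratio fixed at C_P/C_N = 2, whereas the CSA ratio exp((C_N-C_P)s)
# falls below 1 for positive scores: negatives become the prevalent class.
```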
Bearing in mind that the performance score of any training example, at any iteration of the learning process, is determined by the evaluation over the example of the boosted predictor learned so far, and that the weight of this example for the next learning round will depend on the value of the related bounding loss for that particular score, we can draw the two following consequences:
- In Cost-Generalized AdaBoost positives will always be the costly class, and the same cost asymmetry is preserved throughout the whole learning process.
- In Cost-Sensitive AdaBoost cost asymmetry changes. While the classifier is wrong, positives are the costly class (learning is positive-driven), but when classification is correct, negatives are prevalent (learning is negative-driven). The more accurate the classifier obtained is, the more costly will be negatives over positives in subsequent training rounds.
In terms of training error, these differences may seem anecdotal, since the change of class prevalence occurs once the classifier succeeds for each example. However, what is really relevant is the effect in terms of generalization error: when the classifier works on unseen instances it will make mistakes, and it is essential, from a cost-sensitive perspective, to characterize which class is the most prone to errors and to what extent.
As the iterative training process progresses, the performance scores associated with the training examples tend to increase and their respective losses tend to decrease, so the more rounds we train, the further to the right of Figures \[csb\_bound\_fig\] and \[cgb\_bound\_fig\] we will be. In the case of Cost-Sensitive AdaBoost this trend will increasingly emphasize negatives at the expense of positives, while Cost-Generalized AdaBoost keeps the ratio between classes intact throughout the whole learning process. Thus, due to its changing emphasis, Cost-Sensitive AdaBoost runs the risk of obtaining classifiers in which the supposedly costly class is the most prone to errors: just the opposite of what was originally intended!
In the companion paper of the series [@LandesaAlba??b] we will see empirical evidence confirming this *asymmetry swapping* behavior, which, by definition, is expected to be more noticeable the closer the system is to overfitting, but which may have an implicit detrimental effect on the performance reached by all classifiers trained by Cost-Sensitive AdaBoost.
### Statistical View of Boosting {#subsubsec:statistical_cmp}
Instead of the exponential error bound minimization perspective that originally gave rise to AdaBoost (and that is also the derivation core of Cost-Generalized AdaBoost and AdaBoostDB), we will now adopt a different point of view: the Statistical View of Boosting [@Friedman00], the other major analytical framework for interpreting and deriving AdaBoost, which, in addition, is the foundation of Cost-Sensitive AdaBoost.
As we have seen in Section \[subsec:Statistical View\], from the Statistical View of Boosting perspective, AdaBoost can be interpreted as an algorithm that iteratively builds an additive regression model based on the following loss function:
$$\label{loss_ab_eqn}
l_{AB}(f(\mathbf{x}),y)=\exp\left(-yf(\mathbf{x})\right)$$
From that loss, an associated risk function $J_{AB}(f(\mathbf{x}))$ (the expected loss) is defined:
$$\label{risk_ab_eqn}
\begin{split}
J_{AB}(f(\mathbf{x}))&=\E\left[l_{AB}(f(\mathbf{x}),y)\right]\\
&=\Prob\left(y=1|\mathbf{x}\right)\exp(-f(\mathbf{x}))+\Prob\left(y=-1|\mathbf{x}\right)\exp(f(\mathbf{x}))
\end{split}$$
If we minimize that risk we will obtain the optimal predictor $f_{AB}(\mathbf{x})$, which turns out to be the symmetric logistic transform of $\Prob\left(y=1|\mathbf{x}\right)$.
$$\label{min_ab_eqn}
f_{AB}(\mathbf{x})=\dfrac{1}{2}\log\dfrac{\Prob\left(y=1|\mathbf{x}\right)}{\Prob\left(y=-1|\mathbf{x}\right)}$$
AdaBoost is geared to approximate, in an additive way, that optimal predictor without embedded costs. Thus, the obtained model will be cost-insensitive, only depending on the likelihood of each class (see Figure \[stat\_cmp\_ab\_fig\]).
![Risk minimizing function (optimal predictor) for AdaBoost ($f_{AB}(\mathbf{x})$). It only depends on the likelihood of each class.[]{data-label="stat_cmp_ab_fig"}](Fig095a.pdf){width="7.5cm"}
In the case of Cost-Generalized AdaBoost, from this same perspective, we will have a loss function in which costs are included as modulators of the exponentials.
$$\label{loss_cga_eqn}
l_{CGA}(f(\mathbf{x}),y)=\llbracket y=1 \rrbracket C_{P} \exp(-f(\mathbf{x})) + \llbracket y=-1 \rrbracket C_{N} \exp(f(\mathbf{x}))$$
Thus, as explained in Section \[subsubsec:weight\_cost\_sensitive\], the respective risk function $J_{CGA}(f(\mathbf{x}))$ and its minimizer $f_{CGA}(\mathbf{x})$ will be the following ones:
$$\label{risk_cga_eqn}
\begin{split}
J_{CGA}(f(\mathbf{x}))&=\E\left[l_{CGA}(f(\mathbf{x}),y)\right]\\
&=\Prob\left(y=1|\mathbf{x}\right)C_{P}\exp(-f(\mathbf{x}))+\Prob\left(y=-1|\mathbf{x}\right)C_{N}\exp(f(\mathbf{x}))
\end{split}$$
$$\label{min_cga_eqn}
f_{CGA}(\mathbf{x})=\dfrac{1}{2}\log\left(\dfrac{C_{P}\Prob\left(y=1|\mathbf{x}\right)}{C_{N}\Prob\left(y=-1|\mathbf{x}\right)}\right)$$
As can be seen, now we have a cost-sensitive risk function with a cost-sensitive minimizer, leading to an optimal predictor $f_{CGA}(\mathbf{x})$ based on the asymmetric logistic transform of $\Prob\left(y=1|\mathbf{x}\right)$. Thus, in contrast to AdaBoost, the model pursued by Cost-Generalized AdaBoost does not exclusively depend on the likelihood of each class, but also on the related costs.
![Risk minimizing function (optimal predictor) for Cost-Generalized AdaBoost ($f_{CGA}(\mathbf{x})$). It depends on the likelihood of each class and on the related costs, having a homogeneous and continuous cost-sensitive behavior for whatever likelihood.[]{data-label="stat_cmp_cga_fig"}](Fig095b.pdf){width="7.5cm"}
On the other hand, the loss function of Cost-Sensitive AdaBoost embeds the costs inside the exponents $$\label{loss_csa_eqn}
l_{CSA}(f(\mathbf{x}),y)=\llbracket y=1 \rrbracket \exp(-C_{P} f(\mathbf{x})) + \llbracket y=-1 \rrbracket \exp(C_{N}f(\mathbf{x}))$$ so the risk function and its associated minimizer will be as follows (see Section \[subsubsec:weight\_cost\_sensitive\]): $$\label{risk_csa_eqn}
\begin{split}
J_{CSA}(f(\mathbf{x}))&=\E\left[l_{CSA}(f(\mathbf{x}),y)\right]\\
&=\Prob\left(y=1|\mathbf{x}\right)\exp(-C_{P}f(\mathbf{x}))+\Prob\left(y=-1|\mathbf{x}\right)\exp(C_{N}f(\mathbf{x}))
\end{split}$$ $$\label{min_csa_eqn}
f_{CSA}(\mathbf{x})=\dfrac{1}{C_{P}+C_{N}}\log\left(\dfrac{C_{P}\Prob\left(y=1|\mathbf{x}\right)}{C_{N}\Prob\left(y=-1|\mathbf{x}\right)}\right)$$
Thus, Cost-Sensitive AdaBoost is also aimed at fitting a model based on the asymmetric logistic transform of $\Prob\left(y=1|\mathbf{x}\right)$, depending both on the likelihood of each class and on the related costs (see Figure \[stat\_cmp\_csa\_fig\]).
![Risk minimizing function (optimal predictor) for Cost-Sensitive AdaBoost ($f_{CSA}(\mathbf{x})$). It depends on the likelihood of each class and on the related costs, but in this case the cost-sensitive behavior is not homogeneous with respect to likelihood (solutions for different costs cross each other depending on $\Prob\left(y=1|\mathbf{x}\right)$).[]{data-label="stat_cmp_csa_fig"}](Fig095c.pdf){width="7.5cm"}
Nevertheless, the optimal predictors guiding Cost-Sensitive AdaBoost and Cost-Generalized AdaBoost, despite both being cost-sensitive, have different expressions. Such differences become apparent in their graphical representations (see Figures \[stat\_cmp\_cga\_fig\] and \[stat\_cmp\_csa\_fig\]).
To delve into the consequences of these differences, we will analyze the optimal predictors of Cost-Generalized AdaBoost and Cost-Sensitive AdaBoost as functions depending on two magnitudes: likelihood and cost asymmetry [^3]. In Figure \[stat\_cmp\_cgacsamap\_fig\] we have represented the outputs of the optimal predictors as colormaps (we have used isolines for the sake of clarity) on the plane defined by the likelihood and the cost asymmetry. As can be seen, the optimal predictor of Cost-Generalized AdaBoost (Figure \[stat\_cmp\_cgacsamap\_fig\]a) obtains higher predictor values for increasing $\Prob\left(y=1|\mathbf{x}\right)$ and increasing $C_P$ (vice versa for negatives). However, that is not the case for Cost-Sensitive AdaBoost (Figure \[stat\_cmp\_cgacsamap\_fig\]b) where, for a given likelihood, we can find lower predictor outputs for increasing positive costs (and vice versa for negatives). This inhomogeneous behavior can explain the *asymmetry swapping* effect we commented on in Section \[subsubsec:error\_bound\_cmp\], and to which we will come back in the companion paper of the series [@LandesaAlba??b] when analyzing the experimental behavior of Cost-Sensitive AdaBoost.
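The inhomogeneity is easy to reproduce numerically: fixing the likelihood and increasing the cost of positives, the Cost-Generalized predictor grows monotonically while the Cost-Sensitive one can decrease. The sketch below shows one such case (illustrative values, with $C_N=1$ as in the cost convention of the accompanying footnote).

```python
import numpy as np

# Behavior of the optimal predictors at a fixed likelihood when the cost of positives
# grows (C_N = 1, following the cost convention of the accompanying footnote).
def f_cga(p, C_P, C_N=1.0):
    return 0.5 * np.log(C_P * p / (C_N * (1.0 - p)))

def f_csa(p, C_P, C_N=1.0):
    return np.log(C_P * p / (C_N * (1.0 - p))) / (C_P + C_N)

p = 0.9                                     # fixed likelihood P(y=1|x)
for C_P in (1.0, 2.0, 4.0, 8.0):
    print(f"C_P = {C_P:3.0f}:  f_CGA = {f_cga(p, C_P):+.3f}   f_CSA = {f_csa(p, C_P):+.3f}")
# f_CGA increases monotonically with the cost of positives, while f_CSA decreases
# for this likelihood, illustrating the inhomogeneous behavior discussed in the text.
```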
[ \[training\_set\_nonover\_fig\] ![Isolines of the optimal predictors for Cost-Generalized AdaBoost (a), and Cost-Sensitive AdaBoost (b), with respect to the likelihood ($\Prob\left(y=1|\mathbf{x}\right)$) and the normalized cost asymmetry ($\gamma=C_P/(C_P+C_N)$).[]{data-label="stat_cmp_cgacsamap_fig"}](Fig095d1.pdf "fig:"){width="5.5cm"} ]{} [ \[training\_classifiers\_nonover\_fig\] ![Isolines of the optimal predictors for Cost-Generalized AdaBoost (a), and Cost-Sensitive AdaBoost (b), with respect to the likelihood ($\Prob\left(y=1|\mathbf{x}\right)$) and the normalized cost asymmetry ($\gamma=C_P/(C_P+C_N)$).[]{data-label="stat_cmp_cgacsamap_fig"}](Fig095d2.pdf "fig:"){width="5.5cm"} ]{}
Summary and Conclusions {#sec:Conclusions1}
=======================
In this first paper of the series we have introduced our working scenario, presenting the algorithms under study (AdaBoost with threshold modification [@ViolaJones04]; AsymBoost [@ViolaJones02]; AdaCost [@Fan99]; CSB0, CSB1 and CSB2 [@Ting98; @Ting00]; AdaC1, AdaC2 and AdaC3 [@Sun05; @Sun07]; Cost-Sensitive AdaBoost [@MasnadiVasconcelos07; @MasnadiVasconcelos11]; AdaBoostDB [@LandesaAlba13]; and Cost-Generalized AdaBoost [@LandesaAlba12]) in a homogeneous notational framework and proposing a clustering scheme for them based on the way asymmetry is inserted in the learning process: *theoretically*, *heuristically* or *a posteriori*. Then, for those algorithms with a fully theoretical derivation, we performed a thorough theoretical analysis and discussion, adopting the different perspectives that have been used to explain and derive the related approaches in the literature (Error Bound Minimization perspective [@SchapireSinger99] and Statistical View of Boosting [@Friedman00]).
The presented analysis clearly shows that the asymmetric weight initialization mechanism used by Cost-Generalized AdaBoost, from whatever point of view, is definitely a valid mechanism to build theoretically sound cost-sensitive boosted classifiers, despite having been recurrently overlooked or rejected in many previous works (e.g. [@Fan99; @Ting00; @ViolaJones02; @MasnadiVasconcelos07; @MasnadiVasconcelos11]). In addition, and besides being the simplest algorithm, Cost-Generalized AdaBoost exhibits the most consistent error bound definition and is able to preserve the class-dependent loss ratio regardless of the training round, whereas Cost-Sensitive AdaBoost and AdaBoostDB, the other theoretical alternatives, may end up emphasizing the least costly class.
After this purely theoretical study, an empirical analysis of the different approaches, also including the non-fully-theoretical methods (a posteriori and heuristic), is needed to reach global conclusions and complete the analysis we have started in this paper. That experimental part can be found in the next article of the series [@LandesaAlba??b].
[^1]: Notation: $\llbracket a \rrbracket$ is $1$ when $a$ is true and $0$ otherwise.
[^2]: As analyzed in Appendix \[app:weight\], the way asymmetry is applied across the different boosting variants covered by the Cost-Sensitive Boosting framework [@MasnadiVasconcelos11] is not homogeneous either. In fact, despite having discarded cost-proportionate weight initialization as a valid method, one of the algorithms (Cost-Sensitive LogitBoost) proposed in the same work is actually based on that strategy.
[^3]: In the case of Cost-Sensitive AdaBoost (and AdaBoostDB) we can actually distinguish three different involved magnitudes (likelihood, cost of positives and cost of negatives), since the optimal predictor changes when costs are multiplied by a positive factor. This behavior (that does not happen for Cost-Generalized AdaBoost) violates the rules of the cost matrix [@Elkan01] explained at the beginning of Section \[sec:CSvar\]. In order to tackle this problem for our analysis, we have restricted the possible costs to combinations $(C_P, C_N)$ in which one of the coefficients is always 1, and the other one is $\geq1$. This decision allows us to homogeneously interpret the scenarios in which negatives are the costliest class as label inversions.
|
[**Effect of Nonmagnetic Impurities on the Magnetic Resonance Peak in $\bf YBa_2 Cu_3 O_7$**]{}
H.F. Fong$^{(1)}$, P. Bourges$^{(2)}$, Y. Sidis$^{(2)}$, L.P. Regnault$^{(3)}$, J. Bossy$^{(4)}$,\
A. Ivanov$^{(5)}$, D.L. Milius$^{(6)}$, I.A. Aksay$^{(6)}$, and B. Keimer$^{(1,7)}$
--- --------------------------------------------------------------------------------------------------------------
1 Department of Physics, Princeton University, Princeton, NJ 08544 USA
2 Laboratoire Léon Brillouin, CEA-CNRS, CE Saclay, 91191 Gif sur Yvette, France
3 CEA Grenoble, Département de Recherche Fondamentale sur la matière Condensée, 38054 Grenoble cedex 9, France
4 CNRS-CRTBT, BP 156, 38042 Grenoble Cedex 9, France
5 Institut Laue-Langevin, 156X, 38042 Grenoble Cedex 9, France
6 Department of Chemical Engineering, Princeton University, Princeton, NJ 08544 USA
7 Max-Planck-Institut für Festkörperforschung, D-70569 Stuttgart, Germany
--- --------------------------------------------------------------------------------------------------------------
ABSTRACT
The magnetic excitation spectrum of a $\rm YBa_2 Cu_3 O_7$ crystal containing 0.5% of nonmagnetic (Zn) impurities has been determined by inelastic neutron scattering. Whereas in the pure system a sharp resonance peak at $E \simeq 40$ meV is observed exclusively below the superconducting transition temperature $\rm T_c$, the magnetic response in the Zn-substituted system is broadened significantly and vanishes at a temperature much [*higher*]{} than $\rm T_c$. The energy-integrated spectral weight observed near ${\bf q}= (\pi,\pi)$ increases with Zn substitution, and only about half of the spectral weight is removed at $\rm T_c$.
The magnetic resonance peak is a sharp collective excitation at an energy of 40 meV and wavevector ${\bf q}_0 = (\pi,\pi)$ in the superconducting state of $\rm YBa_2 Cu_3 O_7$ [@rossat91]-[@regnault98]. The peak is also observed at lower energies in underdoped $\rm YBa_2 Cu_3 O_{6+x}$ [@dai96; @fong97; @bourges97]. As the existence of this peak requires d-wave superconductivity, it demonstrates that magnetic neutron scattering is a phase sensitive probe of superconductivity [@fong95]. The peak also provides important clues to the microscopic mechanism of high temperature superconductivity: It does not appear in the Lindhard susceptibility of a noninteracting band metal [@mazin95], and the interactions responsible for the enhancement of the band susceptibility are presumably the same as the ones that drive superconductivity. Several enhancement mechanisms have thus been suggested: band structure singularities [@band], antiferromagnetic interactions [@antiferro], and interlayer tunneling [@chakravarty97]. Other models of the resonance peak [@zhang95; @pines96; @assaad98] appeal directly to the parent antiferromagnetic insulator $\rm YBa_2 Cu_3 O_6$, where low energy spin waves of spectral weight comparable to the resonance peak are observed near ${\bf q}_0 = (\pi,\pi)$.
Clearly, more experimental data are needed to differentiate between these theories. Here we present neutron scattering measurements of the spin dynamics of a $\rm YBa_2 Cu_3 O_7$ single crystal in which a small number of nonmagnetic zinc ions replace copper ions. Zn substitution introduces minimal structural disorder, substitutes for copper in the $\rm CuO_2$ planes [@maeda89], and does not modify the hole concentration substantially [@alloul91]. The Zn ions are known to induce local magnetic moments on neighboring Cu sites [@mahajan] which are associated with low energy magnetic excitations [@sidis96; @kakurai93; @matsuda93]. It has further been shown that Zn impurities scatter conduction electrons near the unitary limit and rapidly suppress the superconducting transition temperature, $\rm T_c$ [@transport]. The new data reported here demonstrate a broadening of the spin excitation spectrum in the presence of a minute amount of Zn impurities. Instead of disappearing in the normal state as in zinc-free $\rm YBa_2 Cu_3 O_7$, the broadened intensity now persists well above $\rm T_c$. These results are surprising and were not anticipated by any of the theoretical models of the resonance peak.
A single crystal of composition $\rm YBa_2 (Cu_{0.995} Zn_{0.005})_3 O_7$ and volume $\rm 1.7 cm^3$ was prepared by a method described previously [@fong96]. The crystal was annealed under oxygen flow at $\rm 600^\circ C$ for 14 days, a procedure that resulted in a $\rm T_c = 93K$ in Zn-free crystals synthesized by the same method [@fong96]. After the heat treatment, the crystal showed $\rm T_c = 87 K$ with a width of about 5 K, consistent with earlier reports on Zn-substituted, fully oxygenated $\rm YBa_2 Cu_3 O_7$ [@transport].
The measurements were taken at the IN8 triple axis spectrometer at the Institut Laue Langevin, Grenoble, France. Preliminary data were also taken at the BT2 spectrometer at the NIST research reactor. The IN8 beam optics included a vertically focusing Cu (111) monochromator, and a horizontally focusing pyrolytic graphite (002) analyser which selected a fixed final energy of 35 meV. A pyrolytic graphite filter was inserted into the scattered beam in order to eliminate higher-order contamination. The sample was attached to the cold finger of a closed cycle helium refrigerator mounted on a two-circle goniometer. Data were taken with the crystal in two different orientations where wave vectors of the forms ${\bf Q} = (H,H,L)$ and $(3H,H,L)$ were accessible. \[Throughout this article, the wave vector ${\bf Q} = (H, K, L)$ is indexed in units of the reciprocal lattice vectors $2 \pi/a
\sim 2\pi/b \sim 1.63 {\rm \AA}^{-1}$ and $2\pi/c \sim 0.53 {\rm
\AA}^{-1}$. In this notation, the $(\pi,\pi)$ point corresponds to $(\frac{h}{2},\frac{k}{2})$ with $h$ and $k$ integers.\]
As described in detail elsewhere [@bourges96; @fong96], the imaginary part of the dynamical magnetic susceptibility, $\chi''({\bf Q},\omega)$, can be separated from phonon scattering with the aid of lattice vibrational calculations, and by studying the momentum, temperature and doping dependence of the neutron scattering cross section. In pure (Zn-free) $\rm YBa_2 Cu_3 O_{6+x}$, this procedure was verified by measurements with polarized neutron beams for some scattering configurations in the energy range covered by the present study, 10 meV through 50 meV [@fong96; @fong97]. Since the changes in the phonon spectrum induced by 0.5% Zn-substitution are insignificant, the data analysis procedures developed for pure $\rm YBa_2 Cu_3 O_{6+x}$ carry over directly to the sample investigated here.
Fig. 1 shows representative constant-energy scans taken in the $(H, H, L)$ zone. As in pure $\rm YBa_2 Cu_3 O_7$, the magnetic scattering is confined to a window around 40 meV and ${\bf q}_0=(\pi,\pi)$, with no magnetic scattering observed above background at low energies. The magnetic scattering around 40 meV also exhibits the sinusoidal modulation in $L$ discussed in detail previously [@rossat91]-[@fong96],[@reznik96]. An upper limit of 1/3 of the maximum intensity can be placed on the intensity at the minimum of the modulation, as in the pure system. These and similar scans taken in the $(3H, H, L)$ zone were put on an absolute unit scale by normalizing to the phonon spectrum following Ref. [@fong96], using the same definition of the spin susceptibility as Ref. [@ybco6.5] (Fig. 2a). The constant-energy scans were fitted to Gaussian profiles whose amplitude is plotted in Fig. 2b as a function of energy.
The overall shape of the magnetic spectrum of $\rm YBa_2 (Cu_{0.995} Zn_{0.005})_3 O_7$, being concentrated around a characteristic energy 40 meV and wave vector ${\bf q}_0=(\pi,\pi)$, is clearly reminiscent of the resonance peak in the pure system. The spectral weight increases at low temperatures, as does the resonance peak in pure $\rm YBa_2 Cu_3 O_7$. However, there are also substantial differences between the pure and Zn-substituted systems. First, while the magnetic resonance peak in the pure system is very sharp in energy, the magnetic response in the Zn-substituted sample is substantially broadened. This is apparent in the temperature difference spectrum (Fig. 2c), which gives a more detailed picture of the energy range near 40 meV. (In order to obtain better counting statistics, the high temperature cross section at constant wave vector was subtracted from the low temperature cross section at the same wave vector, without a full [**Q**]{}-scan at each energy. Since the phonon scattering is temperature independent in this energy and temperature range, only magnetic scattering contributes to the difference spectrum.) The full width at half maximum of the difference spectrum is $\Delta E \sim 10$ meV, much broader than the instrumental resolution ($\sim 5$ meV), yielding an intrinsic energy width of $\sim 8.5$ meV. The width of the unsubtracted spectrum in Fig. 2b is also consistent with this value. Within the errors, the full width at half maximum in [**Q**]{}-space, $\Delta Q \sim 0.25 {\rm \AA}^{-1}$ at $E=39$ meV, is identical to the resonance width in the pure system (Fig. 1).
A series of constant-energy scans at energies 39 meV and 35 meV were carried out at temperatures up to $\sim 300$K and fitted to Gaussian profiles. The fitted amplitudes for 39 meV are plotted in Fig. 3; the data for 35 meV track those of Fig. 3 to within the errors. This figure fully reveals an even more dramatic difference of the spectra in the pure and Zn-substituted systems, already indicated in Figs. 2a and b. Whereas the magnetic response in the pure system (restricted to a single resonance peak) disappears in the normal state [@fong95; @bourges96; @fong96], Fig. 3 shows that in $\rm YBa_2 (Cu_{0.995} Zn_{0.005})_3 O_7$ the magnetic spectral weight actually persists up to $\sim 250$K. Furthermore, in both underdoped and optimally doped systems, the magnetic resonance peak follows a sharp, order parameter-like curve below $\rm T_c$ (Refs. [@bourges96]-[@bourges97]). By contrast, there is at most a weak inflection point near $\rm T_c$ in the Zn-substituted system. The influence of superconductivity on the spin excitations, which is so clearly apparent in pure $\rm YBa_2 Cu_3 O_{6+x}$, is thus almost completely obliterated by 0.5% Zn substitution.
A further important comparison between the pure and Zn-substituted materials is made possible by the absolute unit calibration. Since the resonance peak in the pure system is very sharp and comparable to the instrumental energy resolution, the appropriate quantity to compare is the energy-integrated magnetic spectral weight, $\int d \omega \; \chi'' ({\bf q}_0,\omega)$, in the energy range probed by the neutron experiment. This quantity is 2.2 $\pm$ 0.5 $\mu_B^2$ at low temperatures in $\rm YBa_2 (Cu_{0.995} Zn_{0.005})_3 O_7$, as compared to 1.6 $\pm$ 0.5 $\mu_B^2$ in $\rm YBa_2 Cu_3 O_7$ [@footnote]. (Note, however, that in $\rm YBa_2 (Cu_{0.995} Zn_{0.005})_3 O_7$ only half of this intensity is removed upon heating to $\rm T_c$, whereas in the pure system no magnetic intensity is observable above $\rm T_c$.) As nonmagnetic impurities are added, the total energy-integrated spectral weight around $(\pi,\pi)$ therefore [*increases*]{} in the energy range probed by the neutron experiment, implying that zinc restores antiferromagnetic correlations. In this respect, the Zn-substituted system resembles the underdoped pure system (x $< 0.95$) where a normal state antiferromagnetic contribution exists [@rossat91; @regnault; @regnault98]. Surprisingly, in the Zn-doped system this additional intensity appears in the same energy and wave vector range as the resonance peak in the pure system.
It is also interesting to compare the present data to previous neutron scattering work on more heavily Zn substituted cuprate superconductors [@sidis96]-[@matsuda93]. The enhanced low energy excitations near ${\bf q}_0=(\pi,\pi)$ reported for these materials were not observed in our very lightly Zn-substituted sample (bottom panel in Fig. 1). However, at higher energies a 2% Zn-substituted, fully oxygenated $\rm YBa_2 Cu_3 O_7$ sample investigated by Sidis [*et al.*]{} [@sidis96] exhibits a spectral distribution closely similar to the one shown in Fig. 2. The temperature evolution of the magnetic intensity [@regnault98] is also consistent with the one reported here (Fig. 3).
In summary, the effect of substituting one out of 200 copper atoms by nonmagnetic impurities is dramatic. The total spectral weight near ${\bf q}_0=(\pi,\pi)$ actually increases and persists to higher temperatures while remaining centered around 40 meV. On the other hand, the characteristic features of the resonance peak ([*i.e.*]{}, its sharpness in energy and its coupling to superconductivity) are obliterated. It is worth noting that in the underdoped regime, where the normal-state susceptibility is also enhanced with respect to $\rm YBa_2 Cu_3 O_7$, the resonance peak remains sharp and coupled to superconductivity [@dai96]-[@bourges97]. This aspect thus seems to be a manifestation of a delicate coherence that is very easily disrupted by disorder. While none of the theories of the resonance peak [@mazin95]-[@pines96] has anticipated this behavior, it is reminiscent of the extreme susceptibility of collective-singlet ground states in quasi-one dimensional systems (realized, for instance, in spin-Peierls and spin ladder materials) to nonmagnetic impurities. A microscopic analogy between both systems was pointed out by Fukuyama and coworkers [@fukuyama96], but its consequences for the spin excitations have not yet been evaluated. Viewed from a different angle, a gradual buildup of spectral weight below $\rm T \sim 250$K (as shown in Fig. 3) is also observed in underdoped $\rm YBa_2 Cu_3 O_{6+x}$, where it is centered around a somewhat lower energy (20-30 meV) and goes along with the opening of the “spin pseudo-gap” [@regnault; @ybco6.5]. The strong temperature evolution in the normal states of both underdoped and disordered $\rm YBa_2 Cu_3 O_{6+x}$ is obviously closely related to the resonance peak and should be part of a comprehensive theoretical description of the spin dynamics of the cuprates.
[**Acknowledgments**]{}\
We are grateful for technical assistance provided by D. Puschner. The work at Princeton University was supported by the National Science Foundation under Grant No. DMR-9400362, and by the Packard and Sloan Foundations.
[99]{}
J. Rossat-Mignod [*et al.*]{}, Physica C [**185-189**]{}, 86 (1991).
H.A. Mook [*et al.*]{}, Phys. Rev. Lett. [**70**]{}, 3490 (1993).
L.P. Regnault [*et al.*]{}, Physica C, [**235-240**]{}, 59, (1994); Physica B, [**213&214**]{}, 48, (1995).
H.F. Fong [*et al.*]{}, Phys. Rev. Lett. [**75**]{}, 316 (1995).
P. Bourges, L.P. Regnault, Y. Sidis and C. Vettier, Phys. Rev. B [**53**]{}, 876 (1996).
H.F. Fong [*et al.*]{}, Phys. Rev. B. [**54**]{}, 6708 (1996).
L.P. Regnault [*et al.*]{} in [*Neutron Scattering in Layered Copper-Oxide Superconductors* ]{}, Edited by A. Furrer, (Kluwer, Amsterdam, 1998), p. 85.
P. Dai [*et al.*]{}, Phys. Rev. Lett [**77**]{}, 5425 (1996).
H.F. Fong, B. Keimer, D.L. Milius and I.A. Aksay, Phys. Rev. Lett. [**78**]{}, 713 (1997).
P. Bourges [*et al.*]{}, Europhys. Lett. [**38**]{}, 313 (1997).
I.I. Mazin and V.M. Yakovenko, Phys. Rev. Lett. [**75**]{}, 4134 (1995).
N. Bulut and D.J. Scalapino, Phys. Rev. B [**53**]{}, 5149 (1996); G. Blumberg, B.P. Stojkovic and M.V. Klein, [*ibid.*]{} [**52**]{}, 15741 (1995); A.A. Abrikosov, [*ibid.*]{} [**57**]{}, 8656 (1998).
D.Z. Liu, Y. Zha and K. Levin, Phys. Rev. Lett. [**75**]{}, 4130 (1995); F. Onufrieva, Physica C [**251**]{}, 348 (1995); A.J. Millis and H. Monien, Phys. Rev. B [**54**]{}, 16172 (1996).
L. Yin, S. Chakravarty and P.W. Anderson, Phys. Rev. Lett. [**78**]{}, 3559 (1997).
E. Demler and S.C. Zhang, Phys. Rev. Lett. [**75**]{}, 4126 (1995); S.C. Zhang, Science [**275**]{}, 1089 (1997).
Y. Zha, V. Barzykin and D. Pines, Phys. Rev. B [**54**]{}, 7561 (1996); D.K. Morr and D. Pines, Report No. cond-mat/9805107.
F.F. Assaad and M. Imada, Report No. cond-mat/9711172.
G. Xiao [*et al.*]{}, Nature [**332**]{}, 238 (1988); H. Maeda [*et al.*]{}, Physica C [**157**]{}, 483 (1989).
H. Alloul [*et al.*]{}, Phys. Rev. Lett. [**67**]{}, 3140 (1991).
V. A. Mahajan [*et al.*]{}, Phys. Rev. Lett. [**72**]{}, 3100 (1994).
Y. Sidis [*et al.*]{}, Phys. Rev. B [**53**]{}, 6811 (1996).
K. Kakurai [*et al.*]{}, Phys. Rev. B [**48**]{}, 3485 (1993).
M. Matsuda [*et al.*]{}, J. Phys. Soc. Jpn. [**62**]{}, 443 (1993).
T.R. Chien, Z.Z. Wang and N.P. Ong, Phys. Rev. Lett. [**67**]{}, 2088 (1991); D.A. Bonn [*et al.*]{}, Phys. Rev. B [**50**]{}, 4051 (1994); Fukuzumi [*et al.*]{}, Phys. Rev. Lett.[**76**]{}, 684 (1996).
D. Reznik [*et al.*]{}, Phys. Rev. B [**53**]{}, R14741 (1996); S.M. Hayden [*et al.*]{}, [*ibid.*]{} [**54**]{}, R6905 (1996).
P. Bourges [*et al.*]{}, Phys. Rev. B [**56**]{}, R11439 (1997).
Note that if the susceptibility is energy integrated in the energy range probed by the experiment [*and*]{} averaged over the two-dimensional Brillouin zone, the resulting numbers, $\int d \omega \int d^2 q \; \chi'' ({\bf q}_0,\omega) \; / \;
\int d^2 q = 0.058 \mu_B^2$ in $\rm YBa_2 (Cu_{0.995} Zn_{0.005})_3 O_7$ and $0.043 \mu_B^2$ in $\rm YBa_2 Cu_3 O_7$ come out much smaller than the corresponding number $\frac{\pi}{3} s(s+1) g^2 \mu_B^2$ required by the total moment sum rule for an insulating $s=1/2$ antiferromagnet. In deriving these numbers, we used the full width at half maximum in momentum space, which is 0.25 ${\rm \AA}^{-1}$ in both materials. Note also that in underdoped materials where the spin excitation spectrum is much broader in energy, it is often convenient to quote the Brillouin zone averaged (local) susceptibility $\int d^2 q \; \chi'' ({\bf q}_0,\omega) \; / \; \int d^2 q$ [*without*]{} integrating over energy [@ybco6.5].
N. Nagaosa [*et al.*]{}, J. Phys. Soc. Jpn. [**65**]{}, 3724 (1996); H. Fukuyama, T. Tanimoto, and M. Saito, [*ibid.*]{} [**65**]{}, 1182 (1996).
Figure Captions {#figure-captions .unnumbered}
---------------
1. Constant-energy scans at 39 meV and 10 meV through ${\bf Q}=(\frac{1}{2},\frac{1}{2},L)$ for $\rm YBa_2 (Cu_{0.995} Zn_{0.005})_3 O_7$. The line in the upper panel is the result of a Gaussian fit. The bar gives the instrumental [**Q**]{}-resolution. Because of the good energy resolution ($\sim 5$ meV), the 42.5 meV phonon [@fong95] makes only a weak contribution ($\leq 10$%) to the upper scan.
2. a\) Constant-energy scans through (1.5, 0.5, -1.7), background corrected and converted to absolute units. The bar gives the instrumental [**Q**]{}-resolution. b) Peak dynamical susceptibility at ${\bf q}_0=(\pi,\pi)$ extracted from fits to constant-energy profiles (panel a and Fig. 1). The line is a Gaussian with the same width as the difference spectrum in panel c. c) More detailed spectrum around 40 meV. The data around 100K ($\rm > T_c$) were subtracted from the low temperature data. The bar gives the instrumental energy resolution, and the line is the result of a fit to a Gaussian. All data are given in absolute units. A $\sim 30$% overall systematic error in the absolute unit calibration is not included in the error bars.
3. Temperature dependence of the dynamical susceptibility at ${\bf q}_0=(\pi,\pi)$ and at the peak energy of the spectrum ($\sim$ 39 meV), in absolute units. The closed circles are the fitted amplitudes of constant-energy scans, the open circles are the peak count rates.
|
---
abstract: 'We discuss utility based pricing and hedging of jump diffusion processes with emphasis on the practical applicability of the framework. We point out two difficulties that seem to limit this applicability, namely drift dependence and essential risk aversion independence. We suggest to solve these by a re-interpretation of the framework. This leads to the notion of an implied drift. We also present a heuristic derivation of the marginal indifference price and the marginal optimal hedge that might be useful in numerical computations.'
author:
- |
Jochen Zahn\
Courant Research Centre “Higher Order Structures”\
University of Göttingen\
Bunsenstra[ß]{}e 3-5, D-37073 Göttingen, Germany\
[email protected]
title: Utility based pricing and hedging of jump diffusion processes with a view to applications
---
Introduction
============
The applicability of the Black–Scholes framework for the pricing and hedging of derivative claims crucially depends on the assumption of market completeness, i.e., the possibility to replicate claims and thus eliminate risk. This assumption is not fulfilled if the asset process is driven by more than one source of risk or when market imperfections such as transaction costs are not negligible. One then speaks of an incomplete market in which investors may attribute different prices to derivatives, according to their risk preferences.
As an example, let us consider a jump diffusion process, i.e., the asset $S$ evolves according to $$\label{eq:AssetProcess}
{\mathrm{d}}S_t = \mu S_{t_-} {\mathrm{d}}t + \sigma S_{t_-} {\mathrm{d}}W_t + (e^{J_t} - 1) S_{t_-} {\mathrm{d}}N_t.$$ Here $W_t$ is a Wiener and $N_t$ a Poisson process with frequency $\lambda$. The random variable $J_t$ determines the relative size $e^{J_t}-1$ of the jump. The oldest and probably most popular approach for the pricing and hedging of a claim on such an asset is Merton’s [@Merton]. There, the investor sets up a portfolio $\Pi$ consisting of the claim with value $V$ and a quantity $- \Delta$ of assets, so that the evolution of the portfolio is given by $$\begin{aligned}
{\mathrm{d}}\Pi_t = & \ {\mathrm{d}}V_t - \Delta_t {\mathrm{d}}S_t \nonumber \\
\label{eq:Portfolio}
= & \left( {\partial}_t V_t(S_t) + \mu S_{t} {\partial}_S V_t(S_{t}) + \tfrac{\sigma^2}{2} S^2_{t} {\partial}_S^2 V_t(S_{t}) - \Delta_t \mu S_{t} \right) {\mathrm{d}}t \\
& + \sigma S_{t} \left\{ {\partial}_S V_t(S_{t}) - \Delta_t \right\} {\mathrm{d}}W_t + \left\{ V_t(e^{J_t} S_{t}) - V_t(S_{t}) - \Delta_t (e^{J_t}-1) S_{t} \right\} {\mathrm{d}}N_t \nonumber\end{aligned}$$ It is in general not possible to eliminate jump and diffusion risk at the same time, so some “optimal” choice is necessary. Merton’s proposal is to hedge only the diffusion risk and to diversify the jump risk, i.e., to set $\Delta_t = {\partial}_S V_t$. The above then yields $$\begin{aligned}
{\mathrm{d}}\Pi_t = & \left( {\partial}_t V_t(S_t) + \tfrac{\sigma^2}{2} S^2_{t} {\partial}_S^2 V_t(S_{t}) \right) {\mathrm{d}}t \\
& + \left\{ V_t(e^{J_t} S_{t}) - V_t(S_{t}) - (e^{J_t}-1) S_{t} {\partial}_S V_t(S_t)\right\} {\mathrm{d}}N_t.\end{aligned}$$ If jump risk is diversified, the investor does not need any risk premium for taking this risk, i.e., the expected value of ${\mathrm{d}}\Pi_t$ should vanish. Thus, we obtain the partial integro-differential equation (PIDE) $$\begin{gathered}
\label{eq:MertonPIDE}
0 = {\partial}_t V_t(S) + \tfrac{\sigma^2}{2} S^2 {\partial}_S^2 V_t(S) - \left\{ \int (e^z-1) {\mathrm{d}}\nu(z) \right\} S {\partial}_S V_t(S)
\\ + \int \left\{ V_t(e^z S) - V_t(S) \right\} {\mathrm{d}}\nu(z). \end{gathered}$$ Here $\nu$ is the cumulative jump frequency distribution, i.e., for an interval $I$ with characteristic function $\chi_I$, $\nu(\chi_I)$ gives the frequency of jumps of size in $I$. In particular $\nu({\mathbb{R}}) = \lambda$.
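For reference, (\[eq:MertonPIDE\]) can be evaluated in practice by Monte Carlo through its Feynman-Kac representation: simulate the asset with drift $-\int (e^z-1) {\mathrm{d}}\nu(z)$ and jumps drawn from the real-world jump distribution, and average the payoff. The sketch below prices a European call in this way; the lognormal jump law and all parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Monte Carlo solution of Merton's PIDE via Feynman-Kac: simulate S_T with drift
# -int(e^z - 1) dnu(z) and real-world jumps, then average the payoff (discounted
# units, zero interest). Lognormal jumps and all parameter values are illustrative.
rng = np.random.default_rng(0)

S0, K, T, sigma = 100.0, 100.0, 1.0, 0.2
lam, mJ, sJ = 0.5, -0.1, 0.15                 # jump frequency; J ~ N(mJ, sJ^2)
kappa = np.exp(mJ + 0.5 * sJ**2) - 1.0        # E[e^J - 1]

n_paths = 200_000
N = rng.poisson(lam * T, size=n_paths)                      # number of jumps per path
jump_sum = rng.normal(mJ * N, sJ * np.sqrt(N))              # sum of the N jump sizes
W_T = np.sqrt(T) * rng.normal(size=n_paths)
S_T = S0 * np.exp((-0.5 * sigma**2 - lam * kappa) * T + sigma * W_T + jump_sum)

print("martingale check E[S_T]/S_0:", S_T.mean() / S0)      # should be close to 1
print("European call value V_0    :", np.mean(np.maximum(S_T - K, 0.0)))
```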
Two remarks are in order here:
1. The diversification of jump risk is problematic not only in our model (as there is only one asset), but also in practice: In a typical market crash, jumps occur in the whole market, so diversification may well turn out to be accumulation of risk.
2. Merton’s proposal coincides with a naive interpretation of the Black–Scholes framework which states that for risk-neutral pricing one simply has to adjust the drift term such that the expected drift vanishes (in discounted units), and that the appropriate hedging strategy is given by the derivative of the price. In particular, the real-world drift does not enter the price, which is a benefit, as it is notoriously hard to estimate.
Note that the assumption that diversification is possible is crucial here, since otherwise one could not invoke no-arbitrage arguments to set the expected return of the portfolio to zero. If one drops this assumption, then the investor should (i) try to find an optimal balance between diffusion and jump risk and (ii) value the remaining risk in order to obtain a risk premium. A popular framework that achieves (i) is minimal variance pricing and hedging, cf. [@Schweizer94; @GourierouxEtAl98] and references therein. There, the investor tries to minimize the variance of the expected returns. It has the advantage that no new concepts have to be introduced. However, the choice of a quadratic criterion is somewhat arbitrary and penalizes profits as well as losses. Furthermore, in the case of a jump diffusion, the framework in general yields a *signed* risk-neutral measure, i.e., there would be positive claims which have a negative value in the framework. Finally, the framework only tackles (i), but does not yield a price for the remaining risk.
A framework that achieves (i) and (ii) at one stroke is utility based pricing and hedging. There, the investor is equipped with a concave von Neumann utility function $U(X_T)$ that assigns an economic value to the wealth $X_T$ at the investment horizon $T$. Risk aversion is encoded in the concavity of $U$ which entails that the investor prefers a secure income to a random income with the same expectation. This preference is encoded in the risk aversion $A(x) = - U''(x)/U'(x)$.
In this framework, the appropriate price $v$ for a claim with payoff $C(S_T)$ is the indifference price, i.e., the amount the investor should receive such that her maximal expected utility $E[U(X_T - C(S_T))]$ for initial capital $x + v$ is the same as the maximal expected utility $E[U(X_T)]$ for initial capital $x$. This means that one has to consider investment and hedging at the same time and then try to disentangle them. In general, this is a very complicated optimization problem. However, in the limit where the number of traded claims is infinitesimally small, the problem becomes much simpler. One then speaks of the *marginal indifference price* and the corresponding *marginal optimal hedge*. This field has matured considerably in recent years. Milestones were the papers of Kramkov and S[î]{}rbu, who gave sufficient criteria for the marginal indifference price to be well-behaved [@KS06], and defined the concept of the marginal optimal hedge, together with convenient characterizations of it [@KS07]. The framework was applied to, e.g., basis risk [@KS07], transaction costs [@Wilmott; @Monoyios], and Lévy processes and stochastic volatility models [@KMV09].
In spite of its conceptual elegance, the practical applicability of the framework seems to be limited by two problems:
1. The marginal indifference price and the marginal optimal hedge depend strongly on the real-world drift, which is notoriously hard to estimate.
2. The marginal indifference price (and also the marginal optimal hedge) is essentially independent of the risk aversion of the investor [@Nutz10].
The second fact is quite inconvenient for a framework whose purpose is to incorporate risk aversion. It turns out that the two problems can be solved, at one stroke, by a change of perspective. The marginal indifference price takes into account how well the option trade matches to the optimal investment strategy of the investor. The assumption is of course that the investor is invested in this optimal strategy. However, the investment strategy a bank chooses is typically not derived from the model that is used to price options. It is thus tempting to interpret the *actual* investment strategy as the optimal one and adjust the drift such that they match. One may thus speak of an *implied drift*. To the best of our knowledge, this concept is new.
In the following section, we introduce utility based pricing and hedging in a heuristic fashion. In particular, we do not (explicitly) use semi-martingale decompositions, on which the rigorous mathematical treatment [@KS06; @KS07] heavily relies. Instead, we use the concept of functional differentiation and derive a formula for the marginal optimal hedge that is similar to the well-known $\Delta$ hedging formula. It has a straightforward economic interpretation and is, to the best of our knowledge, new. We suspect that, when made mathematically precise, this approach is equivalent to the setting of Kramkov and S[î]{}rbu in the common domain of applicability. At least for the case of a jump diffusion under power utility, this is shown to be true, as we re-derive results implicitly contained in [@KMV09]. For the case of exponential utility we obtain results that are, to the best of our knowledge, new. Even though the approach presented here is not (yet) mathematically rigorous, and, if made so, presumably equivalent to that of Kramkov and S[î]{}rbu, it might still be interesting, as it sheds new light on the framework and is also straightforwardly applicable in a discrete time setting, which might be useful in practical applications.
In Section \[sec:Interpretation\], we discuss our results, in particular the two problems mentioned above. The concept of implied drift that solves these is also introduced. As a nontrivial toy model that exemplifies the discussion, we use a jump diffusion with fixed jump size. We also compare the marginal utility price and hedge with those obtained in Merton’s and the minimal variance approach. We conclude with a summary and an outlook.
A heuristic derivation {#sec:Derivation}
======================
The basic idea of marginal utility based pricing and hedging is the following: Consider an investor with a concave utility function $U$, i.e., the expected utility for investment with time horizon $T$ is given by $$u_t(x; \pi) = E[U(X^\pi_T)| X^\pi_t = x],$$ where $X^\pi_t$ is the wealth process depending on some trading strategy $\pi$. This is maximized by the optimal investment strategy $\pi^*$: $$ u_t(x) = \sup_{\pi} u_t(x; \pi) = u_t(x; \pi^*).$$ Throughout this paper, we will consider trading strategies given by a space of functions $(t, x, s) \mapsto \pi_t(x,s)$ of time, wealth, and the asset price, that is equipped with some locally convex topology. Furthermore, we assume that the set of admissible trading strategies is an open subset, and that it contains a unique optimal investment strategy $\pi^*$. This allows us to consider infinitesimal perturbations of $\pi^*$. Furthermore, $\pi^*$ should be such that the maximal expected utility is finite and such that the asset process is a local martingale under the measure ${\mathcal{Q}}$ defined by $$\label{eq:Q}
\frac{{\mathrm{d}}{\mathcal{Q}}}{{\mathrm{d}}P} = \frac{U'_T(X_T^{\pi^*})}{E[U'_T(X_T^{\pi^*})]},$$ where $P$ is the real-world measure. For conditions under which the last requirements are fulfilled, we refer to [@Sch01].
If we now want to value a European claim with maturity $T$ and bounded payoff $C(S_T)$, where $S_t$ is the asset process, we force the investor to short an infinitesimal number $\varepsilon$ of them. For this she may charge a price $v^\varepsilon_t(x,s)$ per claim. It is the indifference price if $$\label{eq:DefIndifference}
u_t(x) = \sup_\pi E[U(X^\pi_T - \varepsilon C(S_T)) | X^\pi_t = x + \varepsilon v^\varepsilon_t(x,s), S_t = s].$$ It means that the investor is willing to sell the options at a price $v^\varepsilon$, as this does not decrease her expected utility. The limit $$v = \lim_{\varepsilon \to 0} v^\varepsilon$$ is the *marginal indifference price*. The marginal optimal hedge can be similarly characterized as the linear change of $\pi^*$ that is needed to achieve the maximum on the [r.h.s. ]{}of (\[eq:DefIndifference\]). Before we formalize this notion, we discuss the wealth process $X_t^\pi$ and the optimal investment strategy $\pi^*$ for the case of a jump diffusion and introduce functional differentiation, a technical tool we later employ.
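To see the definition at work in the simplest possible setting, the following one-period sketch computes $v^\varepsilon$ numerically for a small $\varepsilon$, using a binomial asset, exponential utility $U(x) = -e^{-ax}$ and a call payoff; the inner supremum is taken numerically and the indifference price is found by root search. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

# One-period illustration of the indifference price: binomial asset, exponential
# utility U(x) = -exp(-a x). All numbers are illustrative.
a = 1.0                                         # absolute risk aversion
S0, up, down, prob_up = 1.0, 1.3, 0.8, 0.6      # S_T is up*S0 or down*S0
probs = np.array([prob_up, 1.0 - prob_up])
returns = np.array([up - 1.0, down - 1.0])
payoff = np.maximum(np.array([up * S0, down * S0]) - 1.0, 0.0)   # call, strike 1

def max_utility(x0, eps=0.0, v=0.0):
    # sup over the asset position pi of E[U(x0 + eps*v + pi*(S_T/S0 - 1) - eps*C(S_T))]
    def neg_expected_u(pi):
        wealth = x0 + eps * v + pi * returns - eps * payoff
        return np.sum(probs * np.exp(-a * wealth))
    return -minimize_scalar(neg_expected_u).fun

x0, eps = 1.0, 1e-4
u0 = max_utility(x0)                            # expected utility without the claim
v_eps = brentq(lambda v: max_utility(x0, eps, v) - u0, -10.0, 10.0)
print("indifference price for small eps (approximates the marginal price):", v_eps)
```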
The wealth process and optimal investment {#sec:OptimalInvestment}
-----------------------------------------
The wealth process $X^\pi_t$ corresponding to the asset process , given a trading strategy $\pi_t$, is given by $$\begin{aligned}
{\mathrm{d}}X^\pi_t & = \pi_t(X^\pi_{t_-}, S_{t_-}) {\mathrm{d}}\log S_t \\
& = \pi_t(X^\pi_{t_-}, S_{t_-}) \mu {\mathrm{d}}t + \pi_t(X^\pi_{t_-}, S_{t_-}) \sigma {\mathrm{d}}W_t + \pi_t(X^\pi_{t_-}, S_{t_-}) (e^{J_t} - 1) {\mathrm{d}}N_t\end{aligned}$$ Here $\pi_t(x, s)$ denotes the wealth invested in the asset at time $t$, given that the total wealth is $x$ and the asset price is $s$. Note that no interest rate is present, so we are working in discounted units.
For a quantity $F_t(x, s)$, depending on time $t$, the wealth $x$, and the asset price $s$, that fulfills $$\label{eq:piConsistency}
F_t(x, s; \pi) = E[F_\tau (X^\pi_\tau, S_\tau; \pi) | X^\pi_t = x, S_t = s] \quad \forall t \leq \tau,$$ one obtains the partial integro-differential equation (PIDE) $$\label{eq:Fevolution}
{\partial}_t F_t(x, s; \pi) + L^\pi F_t(x, s; \pi) = 0,$$ where $L^\pi$ is the integro-differential operator defined by $$\begin{aligned}
\label{eq:Lpi}
L^\pi f_t(x, s) & = \mu \left\{ \pi_t(x, s) {\partial}_x + s {\partial}_s \right\} f_t(x, s) \\
& \quad + \tfrac{\sigma^2}{2} \left\{ {\pi_t(x, s)}^2 {\partial}_x^2 + 2 \pi_t(x, s) s {\partial}_x {\partial}_s + s^2 {\partial}_s^2 \right\} f_t(x, s) \nonumber\\
& \quad + \int \left\{ f_t(x + \pi_t(x, s)(e^z-1), e^z s) - f_t(x, s) \right\} {\mathrm{d}}\nu(z). \nonumber\end{aligned}$$ Note that in we included the dependence on the trading strategy $\pi$. As $\pi$ is a function (of $t$, $x$ and $s$), $F$ is, apart from being a function of $t$, $x$ and $s$, also a *functional*, i.e., a map from a space of functions to the real numbers. A useful tool for discussing extrema of such functionals is the functional derivative, which we briefly discuss in Section \[sec:FunctionalDerivatives\].
In the absence of consumption, the expected utility $u_t(x; \pi)$ fulfills (\[eq:piConsistency\]). For the maximal expected utility $u_t(x)$, the HJB equation $$\sup_\pi \left[ {\partial}_t u_t(x) + L^\pi u_t(x) \right] = 0$$ holds, where the supremum is achieved by the optimal investment strategy $\pi^*$. Thus, the optimal investment strategy $\pi_t^*$ fulfills $${\partial}_{\pi_t(x)} |_{\pi^*} L^\pi u_t(x) = 0.$$ Using the explicit form (\[eq:Lpi\]) of $L^\pi$, we obtain $$\label{eq:PiStar}
\mu u'_t(x) + \pi^*_t(x) \sigma^2 u''_t(x) + \int u'_t(x^z) (e^z-1) {\mathrm{d}}\nu(z) = 0,$$ where we used the notation $$\label{eq:Xz}
x^z = x + \pi^*_t(x)(e^z-1)$$ for the wealth after a jump. Having solved for $\pi^*$, we know that $u_t$ fulfills the PIDE $$\label{eq:uPIDE}
{\partial}_t u_t(x) + L^{\pi^*} u_t(x) = 0.$$ For investment with a time horizon $T$, the boundary condition is given by the utility $U$ at time $T$, i.e., $u_T(x) = U(x)$.
We now discuss the form of $\pi^*$ and $u_t$ in the two cases that will be of most interest to us, namely the case of constant relative or absolute risk aversion. Constant relative risk aversion is specified by a utility function $$\label{eq:Upower}
U(x) = x^{1-\beta} / (1-\beta), \quad \beta > 1.$$ The limit $\beta \to 1$ corresponds to logarithmic utility, and the results below are also valid in that case. It can be shown that, up to an unimportant multiplicative constant, the expected utility $u_t$ is of the same form, i.e., $$ u_t(x) = B_t x^{1-\beta} / (1-\beta).$$ Introducing the notation $\tilde \pi_t^*(x) = \pi_t^*(x) / x$, becomes $$\label{eq:TildePi}
\mu - \tilde \pi^*_t(x) \beta \sigma^2 + \int (e^z-1) (1 + \tilde \pi^*_t(x) (e^z-1))^{-\beta} {\mathrm{d}}\nu(z) = 0.$$ We see that $\tilde \pi^*_t(x)$ is independent of $x$ and $t$. The optimal strategy is to invest a fixed fraction of the wealth in the asset. For precise conditions under which a solution to exists, we refer to [@Nutz10b].
We now discuss a special case which we will study explicitly in Section \[sec:Interpretation\] and in which can be solved analytically:
If only jumps of a certain size $J$ can happen, i.e., $\nu(z) = \lambda \delta(z-J)$, then becomes $$\label{eq:TildePiPower}
\mu - \tilde \pi^* \beta \sigma^2 + \lambda \tilde J (1 + \tilde \pi^* \tilde J)^{-\beta} = 0.$$ Here we introduce the notation $\tilde J = e^J-1$ for the relative jump size. For logarithmic utility, i.e., for $\beta = 1$ this has an analytic solution: $$\label{eq:TildePiLog}
\tilde \pi^* = - \frac{1}{2 \tilde J} \left\{ 1 - \frac{\mu}{\sigma^2} \tilde J - \sqrt{\left( 1 - \frac{\mu}{\sigma^2} \tilde J \right)^2 + 4 \frac{\mu + \lambda \tilde J}{\sigma^2} \tilde J } \right\}.$$ This has the expected behavior: $\tilde \pi^*$ always has the same sign as the average drift $\mu + \lambda \tilde J$. Also note that the expression under the square root is strictly positive, so that an optimal investment strategy always exists for a fixed jump size. For $\beta > 1$, can easily be solved numerically.
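The following is a minimal numerical sketch, in Python, of how one might solve the above optimal-fraction equation for a fixed jump size; the function names and the parameter values are ours and purely illustrative, not taken from the paper. It solves the power-utility equation by bracketed root finding on the admissible domain $1 + \tilde \pi \tilde J > 0$ and checks the result against the analytic logarithmic-utility solution quoted above.

```python
# Illustrative sketch: solve  mu - p*beta*sigma^2 + lam*Jt*(1 + p*Jt)^(-beta) = 0
# for the optimal invested fraction p = pi-tilde*, with a single jump of size J
# (Jt = e^J - 1).  All parameter values are hypothetical.
import numpy as np
from scipy.optimize import brentq

def optimal_fraction_power(mu, sigma, lam, Jt, beta):
    """Optimal fraction of wealth under power utility, fixed jump size."""
    f = lambda p: mu - p * beta * sigma**2 + lam * Jt * (1.0 + p * Jt)**(-beta)
    # f is strictly decreasing on the admissible domain {p : 1 + p*Jt > 0},
    # so a simple bracketing root search is enough.
    eps = 1e-9
    lo, hi = (-1e6, -1.0 / Jt - eps) if Jt < 0 else (-1.0 / Jt + eps, 1e6)
    return brentq(f, lo, hi)

def optimal_fraction_log(mu, sigma, lam, Jt):
    """Closed-form solution for logarithmic utility (beta = 1)."""
    a = 1.0 - mu / sigma**2 * Jt
    return -(a - np.sqrt(a**2 + 4.0 * (mu + lam * Jt) / sigma**2 * Jt)) / (2.0 * Jt)

if __name__ == "__main__":
    mu, sigma, lam, Jt = 0.05, 0.2, 0.25, -0.25        # hypothetical market parameters
    print(optimal_fraction_log(mu, sigma, lam, Jt))              # analytic, beta = 1
    print(optimal_fraction_power(mu, sigma, lam, Jt, beta=1.0))  # numerical check
    print(optimal_fraction_power(mu, sigma, lam, Jt, beta=5.0))  # beta > 1
```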
We briefly consider the case of constant absolute risk aversion, i.e., exponential utility $$\label{eq:Uexp}
U(x)=-C e^{-\alpha x}, \quad \alpha > 0.$$ As above, the expected utility is of the same form, where $C$ (but not $\alpha$) is time dependent. The optimal investment strategy $\pi^*_t(x)$ fulfills $$\label{eq:PiExp}
\mu - \pi^*_t(x) \sigma^2 \alpha + \int (e^z-1) e^{- \alpha \pi^*_t(x) (e^z-1)} {\mathrm{d}}\nu(z) = 0.$$ A solution $\pi^*_t(x)$ of this equation will be independent of $x$ and $t$, so that the optimal strategy is to invest a fixed amount of wealth in the asset. Furthermore, it is inversely proportional to the risk aversion $\alpha$, i.e., we can write $$\label{eq:barPi}
\pi^* = \bar \pi^* / \alpha$$ for some constant $\bar \pi^*$. Noting that, for large $\beta$, $\tilde \pi^*(\beta)$ behaves as $\beta^{-1}$, one obtains, by inspection of and that $$\label{eq:betaLimit}
\lim_{\beta \to \infty} \beta \tilde \pi^*(\beta) = \bar \pi^*.$$
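As a quick numerical illustration of this limit, the sketch below (again with hypothetical parameters and our own notation) specialises the two first-order conditions to a single jump size and compares $\beta\,\tilde\pi^*(\beta)$ with $\bar\pi^*$ for increasing $\beta$; the exponent clip is only there to keep the root bracketing numerically safe near the boundary of the admissible domain.

```python
# Illustrative check of  beta * pi_tilde*(beta) -> pi_bar*  for a single jump size.
import numpy as np
from scipy.optimize import brentq

mu, sigma, lam, Jt = 0.05, 0.2, 0.25, -0.25   # hypothetical market parameters

def pi_tilde_power(beta):
    # power utility: mu - p*beta*sigma^2 + lam*Jt*(1 + p*Jt)^(-beta) = 0
    def f(p):
        expo = np.clip(-beta * np.log1p(p * Jt), None, 700.0)   # guard against overflow
        return mu - p * beta * sigma**2 + lam * Jt * np.exp(expo)
    lo, hi = (-1e6, -1.0 / Jt - 1e-9) if Jt < 0 else (-1.0 / Jt + 1e-9, 1e6)
    return brentq(f, lo, hi)

# exponential utility: mu - pbar*sigma^2 + lam*Jt*exp(-pbar*Jt) = 0
# (the bracket below is chosen generously for these parameters)
pi_bar = brentq(lambda p: mu - p * sigma**2 + lam * Jt * np.exp(-p * Jt), -100.0, 100.0)

for beta in (2.0, 10.0, 100.0, 1000.0):
    print(beta, beta * pi_tilde_power(beta), pi_bar)
```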
Finally, we remark that by solving the PIDE backwards from the terminal condition $w_T(x)=U'(x)$, one can show that for $\pi^*$ fulfilling , one has $w_t = u'_t$, i.e., $$\label{eq:del_XU}
u'_t(x, s) = E[U'(X^{\pi^*}_T) | X^{\pi^*}_t = x, S_t = s].$$
Functional derivatives {#sec:FunctionalDerivatives}
----------------------
We briefly introduce the concept of functional derivatives as a special case of directional derivatives, cf. [@Hamilton; @lcs]. Let $F$ be a functional, i.e., a continuous map $U \to {\mathbb{R}}$, where $U$ is an open subset of a space $X$ of functions[^1]. Then $F$ is called *differentiable* at $f \in U$ in the direction $h \in X$ if the limit $${\langle \delta F(f) , h \rangle} := \lim_{t \to 0} \frac{F(f+th) - F(f)}{t}$$ exists. It is called *continuously differentiable* (or $C^1$) on $U$ if the limit exists for all $f \in U$, $h \in X$ and if $\delta F: U \times X \to {\mathbb{R}}$ is continuous. If $F$ is $C^1$, then $\delta F(f): X \to {\mathbb{R}}$ is linear. Many of the usual theorems of differential calculus hold, in particular the fundamental theorem. It follows that a necessary condition for a $C^1$ functional to have a local maximum in $f$ is the vanishing of $\delta F(f)$.
The second derivative can be defined as the derivative of the first derivative, [w.r.t. ]{}$f$, i.e., $${\langle \delta^2 F(f) , h \otimes k \rangle} := \lim_{t \to 0} \frac{{\langle \delta F(f+tk) , h \rangle} - {\langle \delta F(f) , h \rangle}}{t}.$$ We say that $F$ is $C^2$ on $U$ if the limit exists for all $f \in U$ and $h,k \in X$ and is a continuous map $\delta^2 F: U \times X \times X \to {\mathbb{R}}$. In that case $\delta^2 F(f)$ is bilinear and symmetric. This generalizes to derivatives of arbitrary order. There is a Taylor formula from which it follows that a sufficient condition for a $C^2$ function to have a local maximum in $f \in U$ is that $\delta F(f) = 0$ and ${\langle \delta^2 F(f) , h \otimes h \rangle} < 0$ for all $h \in X$.
If $F$ is $C^1$, then $\delta F(f)$ is a continuous linear functional on $X$, so $\delta F(f)$ is a distribution. Similarly, if $F$ is $C^2$, then $\delta^2 F(f)$ is a symmetric bi-distribution. Employing the familiar abuse of notation to express a distribution in terms of an integral kernel, we sometimes write $${\langle \delta F(f) , h \rangle} = \int \delta_{f(x)} F(f) h(x) {\mathrm{d}}x,$$ and analogously for the higher order derivatives.
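As a concrete toy illustration of these definitions (ours, not the paper's), the snippet below approximates the directional derivative of the functional $F(f)=\int f(x)^2\,{\mathrm{d}}x$, discretized on a grid, and compares it with the exact value $2\int f h\,{\mathrm{d}}x$.

```python
# Toy numerical illustration of the directional (Gateaux) derivative defined above,
# for the functional F(f) = \int f(x)^2 dx on a grid.  Everything here is illustrative.
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
F = lambda f: np.sum(f**2) * dx            # the functional, via a simple Riemann sum

f = np.sin(2 * np.pi * x)                  # point at which we differentiate
h = np.exp(-(x - 0.5)**2 / 0.01)           # direction of the perturbation

t = 1e-6
numerical = (F(f + t * h) - F(f)) / t      # finite-difference directional derivative
exact = 2.0 * np.sum(f * h) * dx           # exact <dF(f), h> for this functional
print(numerical, exact)                    # agree up to terms of order t
```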
The functionals we consider below are solutions to a PIDE of the form , which we want to differentiate [w.r.t. ]{}$\pi$. More precisely, let $w$ be a solution to the PIDE $${\partial}_t w_t(x, s; \pi) + M^\pi w_t(x, s; \pi) = 0,$$ where $M^\pi_t$ is an integro-differential operator that depends on $\pi$. We will want to compute $$\delta_{\pi_t(x,s)} w_t(x, s; \pi),$$ i.e., compute the change in $w_t(x,s; \pi)$ if $\pi$ is perturbed at the same point, namely at time $t$, wealth $x$ and asset price $s$. We assume that the PIDE is solved backwards in time from some terminal condition. Formally, we thus have $$w_t(x, s; \pi) = w_{t+{\mathrm{d}}t}(x, s; \pi) + M^\pi w_t(x, s; \pi) {\mathrm{d}}t.$$ The effect of turning on a perturbation $\pi'$ of $\pi$ that is localized around $x, s$ and in the time interval $[t, t+{\mathrm{d}}t)$, can thus be computed by differentiating $M^\pi$ [w.r.t. ]{}$\pi$. One thus obtains $${\langle \delta_\pi w_t(x, s; \pi) , \pi' \rangle} = {\partial}_\pi M^\pi w_t(x, s; \pi) \pi'(x, s) {\mathrm{d}}t.$$ Here ${\partial}_\pi$ only acts on the operator $M^\pi$. The limit where $\pi'$ tends to a Dirac $\delta$ in time corresponds to ${\mathrm{d}}t \to 0$, $\pi' \sim {\mathrm{d}}t^{-1}$. Hence, we obtain $$\label{eq:FunctionalDiff}
\delta_{\pi_t(x, s)} w_t(x, s; \pi) = {\partial}_{\pi_t(x,s)} M^\pi w_t(x, s; \pi).$$
In the following, we assume that the expected utility is $C^2$ in a neighborhood $U$ of $\pi^*$. This is the case if $$\int u''_t(x+\pi_t(e^z-1)) (e^z-1)^2 {\mathrm{d}}\nu(z) < \infty$$ for $\pi \in U$. For $\pi = 0$, this means that the jump distribution must have a finite second moment.
The marginal indifference price
-------------------------------
We want to determine the marginal indifference price $v$ from . As the perturbation is infinitesimally small, we can assume that the trading strategy $\pi^\varepsilon$ that achieves the maximum on the [r.h.s. ]{}of fulfills $\pi^\varepsilon = \pi^* + \varepsilon \bar \pi + {\mathcal{O}}(\varepsilon^2)$. We also have $v^\varepsilon = v + {\mathcal{O}}(\varepsilon)$. Thus, expanding in $\varepsilon$, we obtain $$\begin{gathered}
u_t(x) = u_t(x) + \varepsilon v_t(x,s) {\partial}_x u_t(x,s) - \varepsilon E[ U'(X^{\pi^*}_T) C(S_T) | X^{\pi^*}_t = x, S_t = s] \\
+ \varepsilon {\langle \delta_\pi u_t(x; \pi^*) , \bar \pi \rangle} + {\mathcal{O}}(\varepsilon^2).\end{gathered}$$ To obtain the fourth term on the r.h.s., we used functional differentiation [w.r.t. ]{}$\pi$. This term vanishes, since $\pi^*$ is optimal. Equating the remaining terms of first order in $\varepsilon$, one obtains [@Davis] $$\label{eq:MarginalPrice}
v_t(x, s) = \frac{E[ U'(X^{\pi^*}_T) C(S_T) | X^{\pi^*}_t = x, S_t = s]}{u'_t(x) }.$$ It follows that in order to determine the marginal indifference price $v$ it suffices to know $\pi^*$, i.e., one does not have to solve the full optimization problem. Using the tower property, can be expressed as an expected value for quantities at times $\tau$ with $t < \tau \leq T$: $$\label{eq:MarginalPriceTau}
v_t(x, s) = \frac{E[ u'_\tau(X^{\pi^*}_\tau) v_\tau(X^{\pi^*}_\tau, S_\tau) | X^{\pi^*}_t = x, S_t = s]}{u'_t(x) }.$$ In this form, the pricing problem can be solved backwards in time with the payoff as the terminal condition. In continuous time, the limit $\tau = t + {\mathrm{d}}t$ will yield a partial (integro-) differential equation.
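For a European claim this representation also lends itself to a simple Monte Carlo estimate, since only the terminal values of $S$ and $X^{\pi^*}$ are needed and, by the identity for $u'_t$ given above, the denominator can be estimated from the same samples. The sketch below is our own, for power utility and a single jump size, with purely illustrative parameter values; in particular the value used for the optimal fraction $\tilde\pi^*$ is a stand-in for the solution of the first-order condition.

```python
# Minimal Monte Carlo sketch (hypothetical parameters, fixed jump size J, power
# utility with U'(x) = x^{-beta}) of the marginal indifference price
#     v = E[U'(X_T) C(S_T)] / E[U'(X_T)],
# where S and X share the same Brownian motion and the same jump times, so only
# terminal values are needed for a European payoff.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, lam, J = 0.05, 0.2, 0.25, np.log(0.75)    # illustrative; e^J - 1 = -0.25
beta, pi_tilde = 2.0, -0.10                          # pi_tilde: stand-in for the optimal fraction
T, s0, x0, K = 1.0, 100.0, 1.0, 100.0
n_paths = 400_000
Jt = np.exp(J) - 1.0

W = rng.normal(0.0, np.sqrt(T), n_paths)             # terminal Brownian increment
N = rng.poisson(lam * T, n_paths)                    # number of jumps per path

S_T = s0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W + J * N)
X_T = x0 * np.exp((pi_tilde * mu - 0.5 * pi_tilde**2 * sigma**2) * T
                  + pi_tilde * sigma * W) * (1.0 + pi_tilde * Jt)**N

payoff = np.maximum(K - S_T, 0.0)                    # European put as an example
weight = X_T**(-beta)                                # U'(X_T) for power utility
print("marginal indifference price:", np.mean(weight * payoff) / np.mean(weight))
```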
In the case of jump diffusion, one obtains from the PIDE $$\label{eq:PricingPIDE}
{\partial}_t v_t(x, s) + L^{\mathcal{Q}}v_t(x, s) = 0,$$ where $L^{\mathcal{Q}}_t$ is the integro-differential operator defined by $$\begin{aligned}
\label{eq:LQ}
L^{\mathcal{Q}}f_t(x, s) & = \mu^{\mathcal{Q}}\left\{ \pi^*_t(x, s) {\partial}_x + s {\partial}_s \right\} f_t(x, s) \\
& \quad + \tfrac{\sigma^2}{2} \left\{ {\pi^*_t(x, s)}^2 {\partial}_x^2 + 2 \pi^*_t(x, s) s {\partial}_x {\partial}_s + s^2 {\partial}_s^2 \right\} f_t(x, s) \nonumber\\
& \quad + \int \left\{ f_t(x^z, e^z s) - f_t(x, s) \right\} {\mathrm{d}}\nu^{\mathcal{Q}}(x; z). \nonumber\end{aligned}$$ where we used the notation and changed the drift and the jump distribution [w.r.t. ]{}$L^\pi$, cf. , as $$\begin{aligned}
\label{eq:nuQ}
{\mathrm{d}}\nu^{\mathcal{Q}}(x; z) & = \frac{u'_t(x^z)}{u'_t(x)} {\mathrm{d}}\nu(z), \\
\label{eq:muQ}
\mu^{\mathcal{Q}}(x) & = - \int (e^z-1) {\mathrm{d}}\nu^{\mathcal{Q}}(x; z).\end{aligned}$$ Here the adjusted drift and jump distribution define the risk-neutral measure ${\mathcal{Q}}$, cf. .
Let us briefly discuss the intuition behind . Recall that $U$, and thus also $u$, is concave, i.e., $u'(y) < u'(x)$ for $y > x$. Suppose the asset has, on average, positive returns. Then the optimal investment strategy will be to invest in the asset, i.e., $\pi^* > 0$. Then, for a downward jump, $z<0$, we have $x^z < x$. It follows that the fraction on the [r.h.s. ]{}of is greater than one, so that downward jumps become more (and upward jumps less) frequent. The opposite happens for $\pi^* < 0$. The economic rationale behind this is the following: If the investor is invested in the asset, she is exposed to the risk of downward jumps. She will thus seek remuneration for taking even more downward jump risk. On the other hand, she is also exposed to the risk of no upward jumps happening. She is thus willing to sell a claim that is exposed to upward jump risk with a discount. Finally serves to set the average drift to zero.
If the process is a pure diffusion, i.e., $\lambda = 0$, the terms in the first and the third line in vanish. All other terms that have some $x$-dependence involve ${\partial}_x$. Thus, if the terminal condition is independent of $x$, as for a payoff, the marginal indifference price is also independent of $x$ and one recovers the Black–Scholes PDE, in discounted units.
From our discussion in Section \[sec:OptimalInvestment\], we know that in the case of constant relative or absolute risk aversion $u'_t(x^z) / u'_t(x)$ is independent of $x$. Thus, for these types of utility, the only terms in $L^{\mathcal{Q}}$ that depend on $x$ are those that involve at least one ${\partial}_x$. It follows that if the terminal condition is independent of $x$, the solution to will also be independent of $x$. Since by definition the payoff only depends on $s$, the marginal indifference price is independent of $x$ for constant relative or absolute risk aversion [@KS06]. We thus obtain
In the case of power utility, , the marginal indifference price is a solution to the PIDE $$\begin{gathered}
\label{eq:PricingPIDEPower}
0 = {\partial}_t v_t(s) + \left\{ \int \{ e^z - 1\} \left(1+\tilde \pi^*(e^z-1) \right)^{-\beta} {\mathrm{d}}\nu(z) \right\} s {\partial}_s v_t(s) + \tfrac{\sigma^2}{2} s^2 {\partial}_s^2 v_t(s) \\
+ \int \left\{ v_t(e^z s) - v_t(s) \right\} \left(1+\tilde \pi^*(e^z-1) \right)^{-\beta} {\mathrm{d}}\nu(z),\end{gathered}$$ where $\tilde \pi^*$ is a solution to . In the case of exponential utility, , the marginal indifference price is a solution to the PIDE $$\begin{gathered}
\label{eq:PricingPIDEExp}
0 = {\partial}_t v_t(s) + \left\{ \int \{ e^z - 1\} e^{- \bar \pi^* (e^z-1)} {\mathrm{d}}\nu(z) \right\} s {\partial}_s v_t(s) + \tfrac{\sigma^2}{2} s^2 {\partial}_s^2 v_t(s) \\
+ \int \left\{ v_t(e^z s) - v_t(s) \right\} e^{- \bar \pi^* (e^z-1)} {\mathrm{d}}\nu(z),\end{gathered}$$ where $\bar \pi^*$ is given by .
A special case of was found in [@Lewis]. There, it is assumed that all market participants have power utility, and so the market-clearing utility must also be of this form. Furthermore, the market is fully invested in the asset, as the positions in cash and options cancel each other[^2]. This corresponds to $\tilde \pi^*(x) = 1$ in the present setting, which, inserted in , gives the PIDE of [@Lewis].
The marginal optimal hedge
--------------------------
We now want to study the marginal optimal hedge corresponding to the marginal indifference price. In the previous section, we expressed the trading strategy that maximizes the [r.h.s. ]{}of as $\pi^\varepsilon = \pi^* + \varepsilon \bar \pi + {\mathcal{O}}(\varepsilon^2)$. We define the marginal optimal hedge $\hat \pi$ as $$\hat \pi = \bar \pi + v {\partial}_x \pi^*.$$ The idea behind this definition is the following: We want to determine the change in the optimal trading strategy that is caused by the option trade. Thus, we wish the investor to invest optimally, as she would do without the trade, plus some correction which we wish to determine. The purpose of the second term on the [r.h.s. ]{}of the above equation is to cancel the shift in the optimal investment strategy that is caused by the payment of the option price $v$.
As $\pi^\varepsilon$ is the optimizer on the [r.h.s. ]{}of , the functional derivative at this point should vanish: $$ \delta_{\pi_{t'}(x',s')} E[U(X^{\pi^\varepsilon}_T - \varepsilon C(S_T)) | X^{\pi^\varepsilon}_t = x + \varepsilon v^\varepsilon_t(x,s), S_t = s] = 0 \quad \forall (t', x', s').$$ Expanding this in $\varepsilon$, one obtains $$\begin{aligned}
\label{eq:expansion}
0 & = - \delta_{\pi_{t'}(x',s')} E[U'(X^{\pi^*}_T) C(S_T) | X^{\pi^*}_t = x, S_t = s] \\
& \quad + v_t(x,s) \delta_{\pi_{t'}(x',s')} {\partial}_x u_t(x; \pi^*) \nonumber \\
& \quad + \int \bar \pi_{t''}(x'', s'') \delta_{\pi_{t'}(x',s')} \delta_{\pi_{t''}(x'',s'')} u_t(x; \pi^*) {\mathrm{d}}t'' {\mathrm{d}}x'' {\mathrm{d}}s''. \nonumber\end{aligned}$$ The two derivatives in the second line commute, so by the optimality of $\pi^*$, this term vanishes.
We now want to argue that the second order functional derivative in the third term vanishes unless $t' = t''$. Here the optimality (and the implicitly assumed Markov property of $S$) is crucial. Assume that $t'' > t'$. Then we may find $\tau$ such that $t' < \tau < t''$. We have, by the tower property, $$u_t(x; \pi) = E[U(X^{\pi}_T) | X^{\pi}_t = x] = E[ E[U(\tilde X^{\pi}_T) | \tilde X^{\pi}_\tau = X^\pi_\tau] | X^{\pi}_t = x].$$ Here we introduced the notation $\tilde X$ for the process from $\tau$ to $T$ in order to distinguish it from the process $X$ on which it is conditioned at $\tau$. Changing $\pi$ at $t'$ only affects the process $X$, while changing $\pi$ at $t''$ only affects $\tilde X$. In particular, the derivative [w.r.t. ]{}$\pi_{t''}(x'', s'')$ can be pulled inside the outer expected value and the derivative [w.r.t. ]{}$\pi_{t'}(x', s')$ does not act on the inner expected value. But the derivative [w.r.t. ]{}$\pi_{t''}(x'', s'')$ of the inner expected value, evaluated at $\pi^*$, vanishes, by optimality. Thus, the second order functional derivative in the third term on the [r.h.s. ]{}of vanishes unless $t' = t''$.
For well behaved processes (in particular there should be no predetermined jump times), the second order functional derivative at equal times $t' = t''$ vanishes unless $(x', s') = (x'', s'')$. The intuitive reason is that a change of the trading strategy at $(t', x', s')$ can only affect paths that are at $(x', s')$ at time $t'$. But a path can not be at $(x',s')$ and $(x'', s'')$ at the same time unless $(x', s') = (x'', s'')$. We may thus write $$\begin{gathered}
\delta_{\pi_{t'}(x',s')} \delta_{\pi_{t''}(x'',s'')} u_t(x; \pi^*) \\ = \delta_D(t' - t'') \delta_D(x' - x'') \delta_D(s' - s'') \delta_{\pi_{t'}(x',s')}^2 u_t(x; \pi^*).\end{gathered}$$ Here $\delta_D$ denotes the Dirac $\delta$ distribution and the second order functional derivative on the [r.h.s. ]{}is implicitly defined by this equation.
Applying this to , we obtain $$\begin{gathered}
\delta_{\pi_{t'}(x',s')} E[U'(X^{\pi^*}_T) C(S_T) | X^{\pi^*}_t = x, S_t = s] \\
= \left( \hat \pi_{t'}(x', s') - v_{t'}(x', s') {\partial}_x \pi^*_{t'}(x') \right) \delta_{\pi_{t'}(x',s')}^2 u_t(x; \pi^*).\end{gathered}$$ For a constant payoff $P$, $v = P$ and $\hat \pi$ vanishes, so that we have $${\partial}_x \pi^*_{t'}(x') \delta_{\pi_{t'}(x',s')}^2 u_t(x; \pi^*) = - \delta_{\pi_{t'}(x',s')} E[U'(X^{\pi^*}_T) | X^{\pi^*}_t = x, S_t = s].$$ This is valid for all $(t', x', s')$. In particular, we may choose $(t', x', s') = (t, x, s)$, and with and we obtain $$\label{eq:Proposition_2}
\hat \pi_t(x, s) = \frac{u'_t(x, s) \delta_{\pi_{t}(x,s)} \frac{E[U'(X^{\pi^*}_T) C(S_T) | X^{\pi^*}_t = x, S_t = s]}{E[U'(X^{\pi^*}_T) | X^{\pi^*}_t = x, S_t = s]} }{\delta_{\pi_{t}(x,s)}^2 u_t(x; \pi^*)}.$$ Note that the expression that is functionally differentiated in the numerator is the marginal indifference price. Thus, this equation has a straightforward economic interpretation: The functional derivative in the numerator gives the marginal gain in the price $v$ one can generate by shifting $\pi$ from the optimal trading strategy $\pi^*$. The factor in front of the functional derivative converts this into a marginal gain in utility. This gain in utility stemming from $v$ is to be balanced by the loss in utility that is incurred to the wealth process (without the claim) by deviating from $\pi^*$, which one finds in the denominator[^3].
We also note the similarity of this formula with the sensitivities that are used in $\Delta$-hedging. However, in the present case, one does not differentiate the option value [w.r.t. ]{}the asset price but [w.r.t. ]{}the trading strategy and weights the result with derivatives of expected utility. This representation of the marginal optimal hedge might be useful in numerical calculations, where it could be used to compute $\hat \pi$, and simultaneously $v$, backwards in time in a discrete-time setting.
We may now evaluate for the case of a jump diffusion. We note that the two quantities that are functionally differentiated are solutions to PIDEs that depend on $\pi$. We may thus proceed as discussed in Section \[sec:FunctionalDerivatives\], i.e., we apply . As the derivation of from did not make use of the optimality of $\pi^*$, the functional derivative in the numerator may be computed by differentiating $L^{\mathcal{Q}}$ [w.r.t. ]{}$\pi$. Similarly, for the computation of the functional derivative in the denominator, we twice differentiate $L^\pi$ [w.r.t. ]{}$\pi$. Restricting again to the case of constant relative or absolute risk aversion, one obtains[^4]
In the case of power utility, , the marginal optimal hedging strategy is given by $$\label{eq:OptimalMarginalHedgePower}
\hat \pi_t(s) = s \frac{ \sigma^2 {\partial}_s v_t(s) + \int \frac{ v_t(e^z s) - v_t(s)}{(e^z-1)s} \left( e^z - 1 \right)^2 \left( 1 + \tilde \pi^*(e^z-1) \right)^{-\beta-1} {\mathrm{d}}\nu(z)}{\sigma^2 + \int \left( e^z - 1 \right)^2 \left( 1 + \tilde \pi^*(e^z-1) \right)^{-\beta-1} {\mathrm{d}}\nu(z)},$$ where $\tilde \pi^*$ is a solution to . In the case of exponential utility, , the marginal optimal hedge is $$\label{eq:OptimalMarginalHedgeExp}
\hat \pi_t(s) = s \frac{ \sigma^2 {\partial}_s v_t(s) + \int \frac{ v_t(e^z s) - v_t(s)}{(e^z-1)s} \left( e^z - 1 \right)^2 e^{-\bar \pi^*(e^z-1)} {\mathrm{d}}\nu(z)}{\sigma^2 + \int \left( e^z - 1 \right)^2 e^{-\bar \pi^*(e^z-1)} {\mathrm{d}}\nu(z)},$$ where $\bar \pi^*$ is given by .
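For a single jump size the integrals above collapse to one term, so the hedge can be evaluated pointwise from any numerical representation of the marginal price. The helper below is our own sketch for the power-utility case; $v$ is expected to be a callable returning the marginal indifference price (e.g. obtained by solving the pricing PIDE), and the toy $v$ in the usage line is only there to make the snippet runnable.

```python
# Illustrative evaluation of the power-utility marginal optimal hedge for a
# single jump size J, given the marginal price as a callable v(s).
import numpy as np

def marginal_hedge_power(v, s, sigma, lam, J, pi_tilde, beta, rel_ds=1e-4):
    Jt = np.exp(J) - 1.0
    w = lam * (1.0 + pi_tilde * Jt)**(-beta - 1.0)    # weight of the single jump term
    # central finite difference for the diffusion (Delta-like) contribution
    dv_ds = (v(s * (1.0 + rel_ds)) - v(s * (1.0 - rel_ds))) / (2.0 * s * rel_ds)
    jump_slope = (v(np.exp(J) * s) - v(s)) / (Jt * s)
    num = sigma**2 * dv_ds + jump_slope * Jt**2 * w
    den = sigma**2 + Jt**2 * w
    return s * num / den       # reduces to s * dv_ds when lam = 0 (pure diffusion)

if __name__ == "__main__":
    toy_v = lambda s: 100.0 * np.exp(-s / 100.0)      # placeholder price function only
    print(marginal_hedge_power(toy_v, s=100.0, sigma=0.2, lam=0.25,
                               J=np.log(0.75), pi_tilde=-0.1, beta=2.0))
```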
As the expected utility is $C^2$ by assumption, cf. Section \[sec:FunctionalDerivatives\], the integral in the denominator in and is finite. It follows that also the integrals in the numerator are finite, by the boundedness of $v$.
In the pure diffusion case $\lambda = 0$ one recovers Black–Scholes $\Delta$ hedging, $\hat \pi = s {\partial}_s v$. But if a jump component is present, the marginal optimal hedge is not given by $s {\partial}_s v$. Instead, it optimally balances diffusion and jump risk, given the specified utility function.
\[rem:RiskPremium\] Taking the marginal optimal hedge , as starting point and following the derivation of from , one does in general not recover the PIDE , for the marginal indifference price. This is not surprising, since in the present framework the investor wants to be compensated for taking risk.
Minimal variance pricing and hedging
------------------------------------
The basic idea of minimal variance pricing and hedging was briefly discussed in the introduction. Here, we content ourselves with giving the corresponding price and hedge for our jump diffusion process. For the minimal variance price, one finds the following PIDE [@ColwellElliott93]: $$\begin{gathered}
\label{eq:PricingPIDE_MV}
{\partial}_t v_t(s) - \left\{ \int \left\{ e^z - 1 \right\} \left\{ 1-\alpha(e^z-1) \right\} {\mathrm{d}}\nu(z) \right\} s {\partial}_s v_t(s) + \tfrac{\sigma^2}{2} s^2 {\partial}_s^2 v_t(s) \\ + \int \left\{ v_t(e^z s) - v_t(s) \right\} \left\{ 1-\alpha(e^z-1) \right\} {\mathrm{d}}\nu(z) = 0.\end{gathered}$$ Here $\alpha$ is a generalization of the market price of risk and is given by $$\alpha = \frac{\mu + \int (e^z -1) {\mathrm{d}}\nu(z)}{\sigma^2 + \int (e^z -1)^2 {\mathrm{d}}\nu(z)}.$$ Note that the new jump measure in gives negative frequencies for jumps with $\alpha (e^z-1) > 1$. For $\alpha > 0$, this condition will always be fulfilled for unbounded upward jump distributions, which includes Merton’s log-normal jump distribution [@Merton].
For the corresponding hedge one obtains $$\label{eq:mvHedge}
\theta_t(s) = \frac{\sigma^2 {\partial}_s v_t(s) + \int \frac{v_t(e^z s)-v_t(s)}{(e^z-1)s} (e^z-1)^2 {\mathrm{d}}\nu(z)}{\sigma^2 + \int (e^z-1)^2 {\mathrm{d}}\nu(z)}.$$
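For the fixed-jump-size toy model used later, both quantities reduce to elementary expressions. The short sketch below (our own, with hypothetical parameters) computes the generalised market price of risk $\alpha$, flags the negative-frequency issue mentioned above, and evaluates the minimal variance hedge on the same pattern as the utility-based hedge, with the minimal variance price supplied as a callable $v$.

```python
# Illustrative minimal-variance quantities for a single jump of size J
# (hypothetical parameters; alpha is the generalised market price of risk).
import numpy as np

mu, sigma, lam, J = 0.05, 0.2, 0.25, np.log(0.75)
Jt = np.exp(J) - 1.0

alpha = (mu + lam * Jt) / (sigma**2 + lam * Jt**2)
if alpha * Jt > 1.0:
    print("warning: the minimal-variance jump measure becomes negative for this jump")

def minimal_variance_hedge(v, s, rel_ds=1e-4):
    """Hedge theta(s), given the minimal variance price as a callable v(s)."""
    dv_ds = (v(s * (1.0 + rel_ds)) - v(s * (1.0 - rel_ds))) / (2.0 * s * rel_ds)
    jump_slope = (v(np.exp(J) * s) - v(s)) / (Jt * s)
    return (sigma**2 * dv_ds + jump_slope * Jt**2 * lam) / (sigma**2 + Jt**2 * lam)
```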
\[rem:RiskCompensation\] Taking $\theta$ as hedging strategy and following the derivation of Merton’s formula from , one recovers the pricing PIDE . This shows that with minimal variance hedging one tries to minimize risk (as measured by the variance), but one is not compensated for it. This is in contrast to the setting of utility maximization, cf. Remark \[rem:RiskPremium\]. Possible modifications of the framework to include also a risk premium are discussed, e.g., in [@WilmottAhn].
The approach of Kramkov and S[î]{}rbu
-------------------------------------
We want to briefly compare the (as yet heuristic) framework of functional differentiation presented here with the rigorous approach of Kramkov and S[î]{}rbu [@KS06; @KS07]. There, one restricts to utility functions with bounded relative risk aversion, of which power utility is a special case. Optimal hedging strategies are defined by their wealth process. The optimal wealth process for an initial capital $x$ and a quantity $q$ of claims is denoted by $X(x,q)$. The utility-based wealth process $G(x,q)$ is defined as $$G(x,q) = X(c(x,q), 0) - X(x,q),$$ where $c$ is the indifference price. The marginal optimal hedge $H$ is defined as the derivative of $G$ [w.r.t. ]{}$q$ at $q=0$. Hence, the definition of the marginal optimal hedge presented here is based on the same idea as the definition of Kramkov and S[î]{}rbu.
Let us see whether the two notions coincide in the case of power utility. By [@KS07 Thm. 1], the process $H$ is given by (for initial capital $x=1$) $$H_t = X^{\pi^*}_t \left( V_0 + M \right),$$ where $V_0$ is the marginal indifference price and the process $M$ is the minimizer of the optimization problem $$c = \inf_{M} E_R \left[ \frac{U''(X_T^{\pi^*})}{X_T^{\pi^*} U'(X_T^{\pi^*})} \left( X_T^{\pi^*} (V_0 + M_T) - C(S_T) \right)^2 \right].$$ Here $R$ is the measure given by $$\frac{{\mathrm{d}}R}{{\mathrm{d}}P} = X_T^{\pi^*} \frac{U'(X_T^{\pi^*})}{u'}.$$ Following [@KMV09], one may write this as a minimal variance hedging problem under the measure $\tilde R$ defined by $$\frac{{\mathrm{d}}\tilde R}{{\mathrm{d}}R} = \frac{U''(X_T^{\pi^*})}{X_T^{\pi^*} U'(X_T^{\pi^*})} E_R\left[ \frac{U''(X_T^{\pi^*})}{X_T^{\pi^*} U'(X_T^{\pi^*})} \right]^{-1},$$ provided that the [r.h.s. ]{}is a uniformly integrable martingale. As shown in [@KMV09], this is the case for power utility if the jump distribution has a finite second moment, a condition that we had to impose, too, cf. the discussion at the end of Section \[sec:FunctionalDerivatives\]. Then one can use the minimal variance hedging formula with this measure. One indeed obtains .
Discussion {#sec:Interpretation}
==========
We will now discuss the results obtained so far. In order to exemplify the findings, we use the toy model of a jump diffusion with a fixed jump size $J$. This model is analytically tractable [@Merton]. Re-introducing the risk-free rate $r$, we have to solve a PIDE of the form $$ {\partial}_t v_t(s) + \tfrac{\sigma^2}{2} s^2 {\partial}_s^2 v_t(s) + \{ r - \bar \lambda \tilde J \} s {\partial}_s v_t(s) - r v_t(s)
+ \bar \lambda \left\{ v_t(e^J s) - v_t(s) \right\} = 0,$$ where we used $\tilde J = e^J -1$. The only difference between the different methods and utilities lies in the value of $\bar \lambda$ that is employed. The above is solved by $$ v_t(s) = \sum_{k = 0}^\infty \frac{( \bar \lambda (T-t))^k e^{- \bar \lambda (T-t)}}{k!} v_t(e^{k J} s, r, \bar \lambda \tilde J)$$ where $v_t(s, r, q)$ is the Black–Scholes price for the claim, given a risk-free rate $r$ and a dividend yield $q$.
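A direct implementation of this series is straightforward; the sketch below (our own function names, illustrative parameters) prices a European put by truncating the Poisson-weighted sum of Black–Scholes prices with dividend yield $q = \bar \lambda \tilde J$, where $\bar \lambda$ is whichever jump intensity the chosen method prescribes.

```python
# Illustrative implementation of the Poisson-weighted Black-Scholes series for a
# European put under a fixed jump size J; lam_bar is the jump intensity under
# the chosen pricing measure, and all parameter values are hypothetical.
import numpy as np
from scipy.stats import norm, poisson

def bs_put(s, K, r, q, sigma, tau):
    """Black-Scholes put with continuous dividend yield q."""
    if tau <= 0.0:
        return max(K - s, 0.0)
    d1 = (np.log(s / K) + (r - q + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return K * np.exp(-r * tau) * norm.cdf(-d2) - s * np.exp(-q * tau) * norm.cdf(-d1)

def put_fixed_jump(s, K, r, sigma, lam_bar, J, tau, kmax=60):
    Jt = np.exp(J) - 1.0
    k = np.arange(kmax + 1)
    weights = poisson.pmf(k, lam_bar * tau)          # truncated Poisson weights
    prices = np.array([bs_put(s * np.exp(kk * J), K, r, lam_bar * Jt, sigma, tau)
                       for kk in k])
    return float(weights @ prices)

if __name__ == "__main__":
    print(put_fixed_jump(s=100.0, K=100.0, r=0.02, sigma=0.2,
                         lam_bar=0.25, J=np.log(0.75), tau=1.0))
```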
The price
---------
In order to get a feeling for the magnitude of the effect, we compare the marginal indifference price for logarithmic utility with Merton’s and the minimal variance price. We use a process with $\lambda = 0.25$, $\tilde J = - 0.25$, i.e., on average there is a jump of $-25 \%$ every four years. For the marginal indifference price, the relevant value for $\bar \lambda$ is obtained from and . The price for a put (converted to implied volatilities) is shown in Figure \[fig:Price\_PosRet\]. As a reference, the square root of the annualized variance is indicated. We see that the marginal indifference price and the minimal variance price are quite close together, but the difference to Merton’s price is notable. For a moneyness of 0.5, it corresponds to a price difference of 40%.
That the marginal indifference price and the minimal variance price are above Merton’s price is not a generic feature, but depends on the average drift of the asset. This is illustrated in Figure \[fig:Price\_NegRet\], which shows the same plot as before, but with an expected drift $\tilde \mu = - 0.05$. Now the marginal indifference price and the minimal variance price are below Merton’s price. This can be understood as follows: If the expected drift is positive, the investor will be invested in the asset. Since jumps are always downwards in our model, she is exposed to jump risk. Writing a put on the asset in this situation enlarges this exposure. She will thus ask for a risk premium. On the other hand, if the expected drift is negative, the investor is short the asset and is then exposed to the risk of no jumps happening. Writing a put in this situation diminishes the exposure to this risk. Thus, she can sell the put with a discount. This strong dependence on the drift seems to limit the practical applicability of the framework, as the drift is very hard to estimate.
Another disturbing feature of the marginal indifference price is that it is essentially independent of the risk aversion. That it is completely independent of the risk aversion $\alpha$ in the case of exponential utility is obvious from . But also for power utility, it is independent of $\beta$ in the limit $\beta \to \infty$. Using and comparing and , one easily sees that the marginal indifference price for power utility converges to the one for exponential utility in the limit $\beta \to \infty$. This property was proven in a general setting in [@Nutz10]. In our example, this is shown in Figure \[fig:beta1\]: The price changes very little with the risk aversion and approaches the price for exponential utility in the limit $\beta \to \infty$.
The implied drift
-----------------
The two features just discussed, the drift dependence and the essential risk aversion independence of the marginal indifference price, seem to limit the practical applicability of the framework. We also note that the essence of the indifference price is that it takes into account how well the option trade matches the optimal investment strategy. But typically the investment strategy a bank chooses is not derived from the model that is used to price options. A possible way out is a change of perspective: One takes the actual investment strategy as given and tries to take it into account for the valuation and hedging of options. This is possible straightforwardly, as , , and do not contain the original drift directly, but only via $\tilde \pi^*$ or $\bar \pi^*$. In the case of power utility one would thus set $\tilde \pi^*$ to the fraction of the wealth that is actually invested in the asset and use and . In the case of exponential utility, one uses the actual amount invested in the asset and the risk aversion $\alpha$ to compute $\bar \pi^*$ via . It is easily seen that this amounts to a change of the drift in the original problem. One may thus speak of an implied drift. Note, however, that this implied drift need not be computed for pricing and hedging. It suffices to know the actual investment strategy.
This change of perspective solves the problems discussed above: One does not need to know the drift, and the price and hedge will in general depend on the risk preference. This is exemplified in Figure \[fig:beta2\]. We see that the marginal indifference price increases considerably with the risk aversion. We note, however, that the opposite effect is also possible: For a negative actual, i.e., optimal, investment strategy, the marginal indifference price decreases with risk aversion. Again, this is due to the fact that by selling a put the investor can hedge the risk of no jumps happening, to which she is exposed by her investment strategy. Nevertheless, the marginal indifference price is always greater than the Black–Scholes price, in which the jump component is neglected. Finally, we note that for $\pi^* = 0$, one recovers Merton’s price. This, however, is not true for the marginal optimal hedge, which coincides with the minimal variance hedge for $\alpha = 0$ in that case.
The hedge
---------
We now discuss the hedges corresponding to the prices considered before. Figure \[fig:Delta\_PosRet\] shows the hedges for the prices shown in Figure \[fig:Price\_PosRet\]. While the minimal variance and the marginal optimal hedge are relatively close together, the deviation from Merton’s hedge is noticeable. Heavily out of the money ($S=200$), the relative difference is over 150%. Note that this strong deviation stems mainly from the new hedging formula and not so much from using a different price. This can be seen from Figure \[fig:Delta\_PosRet\_2\] where, for the same parameters as above, Merton’s hedge and the optimal marginal hedge are compared to the derivative [w.r.t. ]{}$s$ of the marginal indifference price. This derivative is quite close to Merton’s hedge, so for hedging purposes it seems to be more important to use the appropriate hedging formula than to use the correct price.
Finally, we compare the marginal optimal hedges corresponding to the prices shown in Figure \[fig:beta2\]. These are shown in Figure \[fig:beta3\]. We see the expected behavior: for out-of-the-money puts, the higher the risk aversion, the shorter the investor is in the asset in order to hedge against downward jumps.
Summary & Outlook
=================
We discussed marginal utility based pricing and hedging for the case of a jump diffusion process. We pointed out two problems that seem to limit the practical applicability of the framework: the drift dependence and the essential risk aversion independence of the marginal indifference price and the corresponding hedge. We proposed to circumvent these by a change of perspective, interpreting the actual investment strategy as the optimal one. We also compared the marginal utility based framework, both conceptually and concretely in a toy model, with the minimal variance and Merton’s frameworks.
It would be desirable to apply the framework to more realistic models like a log-normal jump distribution or variance-gamma processes. While this is no problem in principle, we note that by the inclusion of a risk preference, the jump distribution is changed. Thus, computational methods that rely on a particular form of the jump distribution may no longer be applicable.
This work is based on a dissertation for the part-time MSc in Mathematical Finance at Oxford University, which was written under the supervision of Jan Obloj. It is a pleasure to thank him for his support and encouragement. I am also grateful to d-fine GmbH, Frankfurt a. M., Germany, for making my studies in Oxford possible.
[19]{}
H. Ahn and P. Wilmott, *Jump Diffusion, Mean and Variance: How to Dynamically Hedge, Statically Hedge and to Price*, Wilmott magazine, May 2007, 96–109.
D. B. Colwell and R. J. Elliott, [*Discontinuous asset prices and non-attainable contingent claims*]{}, Mathematical Finance, [**3**]{} (1993) 295–308.
M. H. A. Davis, *Option Pricing in Incomplete Markets*, In: Mathematics Of Derivative Securities (eds: M. A. H. Dempster, S. R. Pliska), Cambridge University Press 1997.
H. Glöckner, [*Infinite-dimensional Lie groups without completeness condition*]{}, in “Geometry and Analysis on finite and infinite-dimensional Lie groups,” Eds. A. Strassburger, W. Wojtynski, J. Hilgert and K.-H. Neeb, Banach Center Publications [**55**]{} (2002), 43–59.
C. Gourieroux, J. P. Laurent and H. Pham, [*Mean-variance hedging and numéraire*]{}, Mathematical Finance, [**8**]{} No. 3 (1998) 179–200.

R. S. Hamilton, [*The inverse function theorem of Nash and Moser*]{}, Bull. AMS [**7**]{} No. 1 (1982) 65–222.
J. Kallsen, J. Muhle-Karbe and R. Vierthauer, [*Asymptotic power utility-based pricing and hedging*]{}, arXiv:0912.3362v2.
D. Kramkov and M. S[î]{}rbu, *Sensitivity analysis of utility-based prices and risk-tolerance wealth processes*, Annals of Applied Probabilty, [**16**]{} No. 4 (2006) 2140–2194.
D. Kramkov and M. S[î]{}rbu, *Asymptotic analysis of utility-based hedging strategies for small number of contingent claims*, Stochastic Processes and Their Applications, [**117**]{} No. 11 (2007) 1606–1620.
A. Lewis, *Fear of jumps*, Wilmott magazine, [**1**]{} (2002) 60.
R. C. Merton, *Option Pricing When Underlying Stock Returns are Discontinuous*, Journal of Financial Economics, [**3**]{} (1976) 125–144.
M. Monoyios, *Option pricing with transaction costs using a Markov chain approximation*, Journal of Economic Dynamics & Control, [**28**]{} (2004) 889.
M. Nutz, [*Risk aversion asymptotics for power utility maximization*]{}, Probability Theory and Related Fields, [**152**]{} (2012) 703–749.

M. Nutz, [*Power utility maximization in constrained exponential Lévy models*]{}, arXiv:0912.1885v2.
W. Schachermayer, *Optimal investment in incomplete markets when wealth may become negative*, Ann. Appl. Probab., [**11**]{} (2001) 694–734.
M. Schweizer, *Approximating random variables by stochastic integrals*, Annals of Probability [**22**]{} No. 3 (1994) 1536–1575.
A. E. Whalley, P. Wilmott, *Optimal Hedging of Options with Small but Arbitrary Transaction Cost Structure*, European Journal of Applied Mathematics, [**10**]{} (1999) 177.
[^1]: In order to define these notions, $X$ has to be equipped with a topology, which we assume to be locally convex.
[^2]: This implies that the model is only applicable to an “index” that comprises the whole market, i.e., in principle equities, commodities, real estate, etc.
[^3]: One might think of the following analogy: Let $f$ be a function with a local maximum at $x^*$ and $f''(x^*) < 0$. Perturbing $f(x)$ by subtracting $\epsilon g(x)$, the new maximum is found at $x_\epsilon^* = x^* + \epsilon g'(x^*)/f''(x^*) + {\mathcal{O}}(\epsilon^2)$.
[^4]: The hedge (and also the price) for power utility is implicitly contained in [@KMV09].
---
abstract: 'The next generation of galaxy surveys will observe millions of galaxies over large volumes of the universe. These surveys are expensive in both time and money, raising questions regarding the optimal investment of these resources. In this work we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. By making use of the principles of Bayesian Experimental Design, we investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45%. Conversely, by investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28%.'
author:
- |
P. Paykari$^{1}$[^1] and A. H. Jaffe$^{2}$\
$^{1}$Laboratoire AIM, UMR CEA-CNRS-Paris 7, Irfu, SAp/SEDI, Service d’Astrophysique, CEA Saclay, F-91191 GIF- SUR-YVETTE CEDEX, France.\
$^{2}$Department of Physics, Blackett Laboratory, Imperial College, London SW7 2AZ, United Kingdom
bibliography:
- 'biblio.bib'
title: 'Sparsely Sampling the Sky: A Bayesian Experimental Design Approach'
---
\[firstpage\]
cosmology
Introduction
============
The measurements of the cosmological parameters rely heavily on accurate measurements of power spectra. Power spectra describe the spatial distribution of an isotropic random field and are defined as the Fourier transform of the spatial correlation function. The perturbations in the universe can be described statistically using the correlation function $\xi(r)$ between two points, which depends only on their separation $r$ (when isotropy is assumed)[^2]; $$\xi(r)\equiv\left\langle \delta(\underbar{x})\delta(\underbar{x}+\underline{r})\right\rangle \;,$$ where $\delta(\underbar{x})=\left(\rho(\underbar{x})-\bar{\rho}\right)/\bar{\rho}$ measures the continuous over-density, where $\rho(\underline{x})$ is the density at position $\underline{x}$ and $\bar{\rho}$ is the average density. The power spectrum $P(k)$, which is the Fourier transform of the correlation function, is enough to define the perturbations completely when the perturbations are assumed to be uncorrelated Gaussian random fields in Fourier space. Power spectra (or correlation functions) are what the surveys actually measure, from which cosmological parameters are inferred. These spectra are normally a product of the primordial power spectrum (which measures the statistical distribution of perturbations in the early universe) and a transfer function which depends on the cosmological parameters. Hence accurate measurements of the power spectra from surveys are very important for accurate measurements of the cosmological parameters.
The most important observed spatial power spectrum for cosmology is the galaxy power spectrum: the Fourier transform of the galaxy correlation function, which was first formulated by @Peebles1973. A galaxy survey lists the measured positions of the observed galaxies. As proposed by Peebles, these positions are modelled as a random Poissonian point process, where the galaxy density is modulated by the fluctuations in the underlying matter distribution and the selection effects. The selection function of the survey is described by $\bar{n}(\underbar{x})$, which is the expected galaxy density at position $\underbar{x}$ in the absence of clustering. The fluctuations in the underlying matter density are given by $\delta(\underbar{x})$, as described previously. The galaxy number over-density $n(\underbar{x})$, which is the observed quantity, is related to the matter over-density via the bias $b$ [@kaiser1984_bias] — galaxies trace dark matter up to this $b$ factor. We define the galaxy power spectrum $P_{g}(k)$ as $$P_{g}(k)=2\pi^{2}\cdot b^{2}(k)\cdot k\cdot T^{2}(k)\cdot P_{p}(k)\;,$$ where $P_{p}(k)$ is the primordial power spectrum $P_{p}(k)=A_{s}k^{n_{s}-1}$. The transfer function $T(k)$ further depends upon the cosmological parameters (e.g., the matter density $\Omega_m$, the scalar spectral index $n_s$, etc.) responsible for the evolution of the universe. The bias $b$ relates the galaxy power spectrum to the matter power spectrum, as explained above.
This power spectrum is very rich in terms of constraining a large range of cosmological parameters. On large scales this spectrum probes structure which is less affected by clustering and evolution. Hence these scales are still in the linear regime and have a “memory” of the initial state. The information from these regimes is, therefore, the cleanest since the Big Bang, and any knowledge on these large scales would shed light on the physics of the early universe and hence the primordial power spectrum. On intermediate scales the spectrum provides us with information about the evolution of the universe since the Big Bang; for example, the matter-radiation equality which is responsible for the peak of the galaxy spectrum. The matter-radiation equality is a unique point in the history of the evolution, giving information about the amount of matter and radiation in the universe. On relatively small scales there is a great deal of information about galaxy clustering via the Baryonic Acoustic Oscillations (BAO), which encode a characteristic scale: the sound horizon at the time of recombination. Therefore, measuring the galaxy power spectrum on a large range of scales can help us constrain the cosmological parameters responsible for the evolution of the universe as well as the ones of its initial state.
Accurate measurements of the galaxy power spectrum depend on two main factors: the Poisson noise and the cosmic variance. To overcome the Poisson noise, surveys aim to maximise the number of galaxies observed. The impressive constraints on cosmological parameters from previous and current surveys, such as the 2dF [@2dF] and SDSS [@sdss], have motivated even more ambitious future surveys such as DES [@DES] and Euclid [@Euclid], aiming to observe millions of galaxies over large volumes of the universe. Considering the large investments in time and money for these surveys, one wants to ask what the optimal survey strategy really is. In this work we want to investigate exactly this question and find the optimal strategy for galaxy surveys such as DES and Euclid.
In this era of cosmology, where the statistical errors have decreased greatly and are now comparable with systematics, observing, for example, a greater number of galaxies may not necessarily improve our results. We need to devise more strategic ways to make our observations and take control of our systematics. For example, to investigate larger scales, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this case we would gather a larger density of states in Fourier space, but at the expense of an increased correlation between different scales — aliasing. This would smooth out features on these scales and decrease the significance of any observed features. Here, by making use of Bayesian Experimental Design we will investigate the advantages and disadvantages of sparse sampling and verify whether a complete contiguous survey is indeed the most efficient way of observing the sky for our purposes. The parameters of interest here are the galaxy power spectrum itself and a set of cosmological parameters that depend on this spectrum.
Some previous work on sparse sampling includes @Kaiser-Sparse and @Blakeetal06; @Kaiser-Sparse shows that measuring the large scale correlation function from a complete magnitude-limited redshift survey is actually not the most efficient approach. Instead, sampling a fraction of galaxies randomly, but to a fainter magnitude limit, will improve the constraints on the correlation function measurements significantly, for the same amount of observing time. @Blakeetal06 have shown that sparse sampling (achieved by non-contiguous telescope pointings or, for a wide-field multi-object spectrograph, by having the fibres distributed randomly across the field-of-view) is preferred when the angular size of the sparsely observed patches is much smaller than the angular scale of the features in the power spectrum (the acoustic features).
Bayesian Experimental Design and Figure-of-Merit
================================================
Bayesian methods have recently been used in cosmology for model comparison and for deriving posterior probability distributions for parameters of different models. However, Bayesian statistics can do even more by handling questions about the performance of future experiments, based on our current knowledge [@Liddleetal06; @Trotta07; @Trotta07-BayesFactor]. For example, @PBKBNG-BayesExp use a Bayesian approach to constrain the dark energy parameters by optimising the Baryon Acoustic Oscillations (BAO) surveys. By searching through a survey parameter space (which includes parameters such as redshift range, number of redshift bins, survey area, observing time, etc.) they find the optimal survey with respect to the dark energy equation-of-state parameters. Here we will use this strength of Bayesian statistics for optimising the strategy to observe the sky for galaxy surveys. There are three requirements for such an optimisation: 1. specify the parameters that define the experiment, which need to be optimised for an optimal survey; 2. specify the parameters to constrain, with respect to which the survey is optimised; 3. specify a quantity of interest, generally called the figure of merit (FoM), associated with the proposed experiment. The choice of the FoM depends on the questions being asked, as will be explained later in the text. We then want to extremise the FoM subject to constraints imposed by the experiment or by our knowledge about the nature of the universe. Below, we will explain the procedure.
Assume $e$ denotes the different experimental designs that we can implement and $M^{i}$ are the different models under consideration with their parameters $\theta^{i}$. Assume that experiment $o$ has been performed, so that this experiment’s posterior $P(\theta|o)$ forms our prior probability function for the new experiment. The FoM will depend on the set of parameters under investigation, the performed experiment (data) and the characteristics of the future experiment; $U(\theta,e,o)$. From the utility we can build the expected utility $E\left[U\right]$ as $$E[U|e,o]=\sum_{i}P(M^{i}|o)\int d\hat{\theta}^{i}\; U(\hat{\theta}^{i},e,o)P(\hat{\theta}^{i}|o,M^{i})\:,$$ where $\hat{\theta}^{i}$ represent the fiducial parameters for model $M^{i}$. This says: If a set of fiducial parameters, $\hat{\theta}$, correctly describe the universe and we perform an experiment $e$, then we can compute the utility function for that experiment, $U(\hat{\theta},e,o)$. However, our knowledge of the universe is described by the current posterior distribution $P(\hat{\theta}|o)$. Averaging the utility over the posterior accounts for the present uncertainty in the parameters and summing over all the available models would account for the uncertainty in the underlying true model. The aim is to select an experiment that extremises the utility function (or its expectation). The utility function takes into account the current models and the uncertainties in their parameters and, therefore, extremising it takes into account the lack of knowledge of the true model of the universe.
One of the common choices for the FoM is some function of the Fisher matrix, which is the expectation of the inverse covariance of the parameters in the Gaussian limit (we will explain in more detail in the next section how a Fisher matrix is obtained). One can refer to the Dark Energy Task Force (DETF) FoM, which uses Fisher-matrix techniques to investigate how well each proposed experiment would be able to constrain the dark energy parameters $w_0$, $w_a$, $\Omega_{DE}$. Three common FoMs, which we will be using as well, are
- A-optimality $=\log(\textrm{trace}(\mathbf{F}))$\
trace of the Fisher matrix (or its $\log$), which is proportional to the sum of the variances. This prefers a spherical error region, but may not necessarily select the smallest volume.
- D-optimality $=\log\left(\left|\mathbf{F}\right|\right)$\
determinant of the Fisher matrix (or its $\log$), which measures the inverse of the square of the parameter volume enclosed by the posterior. This is a good indicator of the overall size of the error over all parameter space, but is not sensitive to any degeneracies amongst the parameters.
- Entropy (also called the Kullback-Leibler divergence) $$\begin{aligned}
E & = & \int d\theta\; P(\theta|\hat{\theta},e,o)\log\frac{P(\theta|\hat{\theta},e,o)}{P(\theta|o)}\nonumber \\
& = & \frac{1}{2}\left[\log\left|\mathbf{F}\right|-\log|\mathbf{\Pi}|-\textrm{trace}(\mathbb{I}-\mathbf{\Pi}\mathbf{F}^{-1})\right]\,,\end{aligned}$$ where $P(\theta|\hat{\theta},e,o)$ is the posterior distribution with Fisher matrix $\mathbf{F}$ and $P(\theta|o)$ is the prior distribution with Fisher matrix $\mathbf{\Pi}$. The entropy forms a nice compromise between the A-optimality and D-optimality. Note that these are the utility functions, not the ‘expected’ utility functions. In our current models of the universe, we do not expect a significant difference between the parameters of the same model. However, this will be investigated in a future work, where we will explicitly use expected utility functions. In the next section we will explain how a Fisher matrix is formulated.
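As a small numerical illustration (our own, with toy matrices) of the three utility functions above, the snippet below evaluates the A-optimality, the D-optimality and the entropy for a given posterior Fisher matrix $\mathbf{F}$ and prior Fisher matrix $\mathbf{\Pi}$.

```python
# Sketch of the three figures of merit for a posterior Fisher matrix F and a
# prior Fisher matrix Pi (both assumed symmetric positive definite); the
# matrices used in the example are purely illustrative.
import numpy as np

def a_optimality(F):
    return np.log(np.trace(F))

def d_optimality(F):
    return np.linalg.slogdet(F)[1]                 # log|F|, numerically stable

def entropy(F, Pi):
    # Kullback-Leibler divergence between posterior (Fisher F) and prior (Fisher Pi)
    n = F.shape[0]
    return 0.5 * (np.linalg.slogdet(F)[1] - np.linalg.slogdet(Pi)[1]
                  - np.trace(np.eye(n) - Pi @ np.linalg.inv(F)))

if __name__ == "__main__":
    F = np.array([[40.0, 5.0], [5.0, 10.0]])       # toy posterior Fisher matrix
    Pi = np.array([[4.0, 0.0], [0.0, 1.0]])        # toy prior Fisher matrix
    print(a_optimality(F), d_optimality(F), entropy(F, Pi))
```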
Fisher Matrix Analysis
======================
The Fisher matrix is generally used to determine the sensitivity of a particular survey to a set of parameters and has been widely used for optimisation (and forecasting). Consider the likelihood function for a future experiment with experimental parameters $e$, $\mathcal{L}(\theta|e)\equiv P(D_{\hat{\theta}}|\theta,e)$, where $D_{\hat{\theta}}$ are simulated data from the future experiment assuming that $\hat{\theta}$ are the true parameters in the given model. We Taylor expand the log-likelihood around its maximum value: $$\ln\mathcal{L}(\theta|e)=\ln\mathcal{L}(\theta^{ML})+\frac{1}{2}\sum_{ij}(\theta_{i}-\theta_{i}^{ML})\frac{\partial^{2}\ln\mathcal{L}}{\partial\theta_{i}\partial\theta_{j}}(\theta_{j}-\theta_{j}^{ML})\:,$$ where the first term is a constant and only affects the height of the function, while the second term describes how fast the likelihood function falls off around the maximum. The Fisher matrix is defined as the ensemble average of the *curvature* of the log-likelihood $\ln\mathcal{L}$ (i.e., it is the average of the curvature over many realisations of signal and noise); $$\begin{aligned}
F_{ij} & = & \left\langle \mathcal{F}\right\rangle =\left\langle -\frac{\partial^{2}\ln\mathcal{L}}{\partial\theta_{i}\partial\theta_{j}}\right\rangle \label{eq:General_FM}\\
& = & \frac{1}{2}\textrm{trace}[C_{,i}C^{-1}C_{,j}C^{-1}]\:,\end{aligned}$$ where the second line is appropriate for a Gaussian distribution with correlation matrix $C$ determined by the parameters $\theta_{i}$, and $\mathcal{L}$ is the likelihood function. The inverse of the Fisher matrix is an approximation of the covariance matrix of the parameters, by analogy with a Gaussian distribution in the $\theta_{i}$, for which this would be exact. The Cramer-Rao inequality[^3] states that the smallest error measured, for $\theta_{i}$, by any unbiased estimator (such as the maximum likelihood) is $1/\sqrt{F_{ii}}$ and $\sqrt{(F^{-1})_{ii}}$, for non-marginalised and marginalised[^4] one-sigma errors respectively. The derivatives in Equation \[eq:General\_FM\] generally depend on where in the parameter space they are calculated and hence it is clear that the Fisher matrix is a function of the fiducial parameters.
The Fisher matrix allows us to estimate the errors on parameters without having to cover the whole parameter space (but of course will only be appropriate so long as the derivatives are roughly constant throughout the space). So, a Fisher matrix analysis is equivalent to the assumption of a Gaussian distribution about the peak of the likelihood [e.g. @bjk]. It also makes the calculations easier. For example, if we are only interested in a subset of parameters, then marginalising over unwanted parameters is just the same as inverting the Fisher matrix, taking only the rows and columns of the wanted parameters and inverting the smaller matrix back. It is also very straightforward to combine constraints from different independent experiments: we just sum over the Fisher matrices of the experiments (remember that the Fisher matrix is built from the $\log$ of the likelihood function, which is additive for independent data).
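The following schematic sketch (a toy two-parameter covariance model of our own, not the galaxy-survey covariance derived below) illustrates the Gaussian Fisher matrix formula, the marginalised and non-marginalised one-sigma errors, and the marginalisation-by-inversion shortcut described above.

```python
# Schematic sketch of F_ij = (1/2) tr[C_,i C^{-1} C_,j C^{-1}], of the errors
# sqrt((F^{-1})_ii) and 1/sqrt(F_ii), and of marginalising over a parameter
# subset; the covariance model below is a toy example.
import numpy as np

def fisher_gaussian(C, dC_list):
    """C: covariance matrix; dC_list: list of dC/dtheta_i matrices."""
    Cinv = np.linalg.inv(C)
    n = len(dC_list)
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            F[i, j] = 0.5 * np.trace(Cinv @ dC_list[i] @ Cinv @ dC_list[j])
    return F

def marginalise(F, keep):
    """Marginalise over all parameters not listed in `keep`."""
    Finv = np.linalg.inv(F)
    return np.linalg.inv(Finv[np.ix_(keep, keep)])

if __name__ == "__main__":
    # toy model: C(theta) = theta_0 * A + theta_1 * B at fiducial values (1.0, 0.5)
    A = np.diag([1.0, 2.0, 3.0])
    B = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.2], [0.0, 0.2, 1.5]])
    C = 1.0 * A + 0.5 * B
    F = fisher_gaussian(C, [A, B])                  # derivatives are simply A and B
    print("marginalised 1-sigma errors:", np.sqrt(np.diag(np.linalg.inv(F))))
    print("conditional  1-sigma errors:", 1.0 / np.sqrt(np.diag(F)))
    print("F marginalised to theta_0  :", marginalise(F, [0]))
    # combining two independent experiments: simply add their Fisher matrices
```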
We further note, as in all uses of the Fisher matrix, that any results thus obtained must be taken with the caveat that these relations only map onto realistic error bars in the case of a Gaussian distribution, usually most appropriate in the limit of high signal-to-noise ratio and/or relatively small scales, so that the conditions of the central limit theorem obtain. As long as we do not find extremely degenerate parameter directions, we expect our results to be indicative of a full analysis, using simulations and techniques such as Bayesian Experimental Design [@T-BayesExp].
Fisher Matrix for Galaxy Surveys
--------------------------------
We follow the approach of @tegmark1997 to define the pixelisation for galaxy surveys. First we define the data in pixel $i$ as $$\Delta_{i}\equiv\int d^{3}x\psi_{i}\left(\underbar{x}\right)\left[\frac{n\left(\underbar{x}\right)-\bar{n}}{\bar{n}}\right]\,,$$ where $n(\underline{x})$ is the galaxy density at position $\underline{x}$ and $\bar{n}$ is the mean galaxy density. The weighting function, $\psi_{i}(\underline{x})$, which determines the pixelisation (and is sensitive to the shape of the survey, as we will see later), is defined as a set of Fourier pixels $$\begin{aligned}
\psi_{i}(\underline{x})=\frac{e^{\iota\underline{k}_{i}.\underline{x}}}{V}\times\begin{cases}
1 & \,\underline{x}\,\,\textnormal{ inside\,\,\ survey\,\,\ volume}\\
0 & \,\textnormal{otherwise}
\end{cases}\,,\label{eq:weighting_fn}\end{aligned}$$ where $V$ is the volume of the survey. Here we have divided the volume into sub-volumes, each being much smaller than the total volume of the survey, but being large enough to contain many galaxies. This means $\Delta_{i}$ is the fractional over-density in pixel $i$. Using this pixelisation we can define a covariance matrix as $$\left\langle \Delta_{i}\Delta_{j}^{*}\right\rangle =C=(C_{S})_{ij}+(C_{N})_{ij}\,,$$ where $C_{S}$ and $C_{N}$ are the signal and noise covariance matrices respectively and are assumed independent of each other. The signal covariance matrix can be defined as $$\begin{aligned}
(C_{S})_{ij} & = & \left\langle \Delta_{i}\Delta_{j}^{*}\right\rangle \nonumber \\
& = & \int d^{3}xd^{3}x^{\prime}\;\psi_{i}(\underbar{x})\psi_{j}^{*}(\underbar{x}^{\prime}) \nonumber \\
& \; & \left\langle \frac{n(\underbar{x})-\bar{n}}{\bar{n}}\cdot\frac{n(\underbar{x}^{\prime})-\bar{n}}{\bar{n}}\right\rangle \,.\label{eq:C_S_ij1}\end{aligned}$$ By equating the number over-density $\left(n(\underbar{x})-\bar{n}\right)/\bar{n}$ to the continuous over-density $\delta(\underbar{x})=\left(\rho(\underbar{x})-\bar{\rho}\right)/\bar{\rho}$ we obtain $$\begin{aligned}
(C_{S})_{ij} & = & \int\frac{d^{3}k}{(2\pi)^{3}}P(k)\tilde{\psi}_{i}(\underline{k})\tilde{\psi}_{j}^{*}(\underline{k})\nonumber \\
& = & \int\frac{dk}{(2\pi)^{3}}k^{2}P(k)\int d\Omega_{k}\;\tilde{\psi}_{i}(\underline{k})\tilde{\psi}_{j}^{*}(\underline{k})\nonumber \\
& = & \int\frac{dk}{(2\pi)^{3}}k^{2}P(k)W_{ij}(k)\,,\label{eq:C_S_ij2}\end{aligned}$$ where $\tilde{\psi}_{i}(\underline{k})$ is the Fourier transform of $\psi_{i}(\underline{x})$ and the window function $W_{ij}(k)$ is defined as the angular average of the square of the Fourier transform of the weighting function. With the same approach, the noise covariance matrix — which is due to Poisson shot noise — is given by $$\begin{aligned}
(C_{N})_{ij} & = & \left\langle N_{i}N_{j}^{*}\right\rangle _{\textnormal{Noise}}\nonumber \\
& = & \int d^{3}xd^{3}x^{\prime}\psi_{i}\left(\underbar{x}\right)\psi_{j}^{*}\left(\underbar{x}^{\prime}\right)\frac{1}{\overline{n}}\delta_{D}\left(\underbar{x}-\underbar{x}^{\prime}\right)\nonumber \\
& = & \int\frac{d^{3}k}{(2\pi)^{3}}\frac{1}{\overline{n}}\tilde{\psi}_{i}(\underline{k})\tilde{\psi}_{j}^{*}(\underline{k})\nonumber \\
& = & \int\frac{dk}{(2\pi)^{3}}k^{2}\frac{1}{\overline{n}}\int d\Omega_{k}\tilde{\psi}_{i}(\underline{k})\tilde{\psi}_{j}^{*}(\underline{k})\nonumber \\
& = & \frac{1}{\overline{n}}\int\frac{dk}{(2\pi)^{3}}k^{2}W_{ij}(k)\ .\end{aligned}$$ The design of the survey will shape the form of the weighting function in Equation \[eq:weighting\_fn\], which will be discussed in the next section.
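To illustrate how the pieces above fit together, the following sketch assembles the pixel–pixel covariance from Equation \[eq:C\_S\_ij2\] and the shot-noise expression by direct radial integration; `power_spectrum(k)`, `window(i, j, k)` and the numerical settings are placeholder inputs, not code used to produce the results of this paper.

```python
import numpy as np
from scipy import integrate

def pixel_covariance(window, power_spectrum, nbar, n_pix, k_max):
    """C_ij = (C_S)_ij + (C_N)_ij: radial k-integrals of k^2 W_ij(k),
    weighted by P(k) for the signal and by 1/nbar for the Poisson noise.

    window(i, j, k) -> W_ij(k); power_spectrum(k) -> P(k); nbar = mean density.
    """
    C = np.zeros((n_pix, n_pix))
    for i in range(n_pix):
        for j in range(i, n_pix):
            signal, _ = integrate.quad(
                lambda k: k**2 * power_spectrum(k) * window(i, j, k) / (2*np.pi)**3,
                0.0, k_max, limit=200)
            noise, _ = integrate.quad(
                lambda k: k**2 * window(i, j, k) / (2*np.pi)**3,
                0.0, k_max, limit=200)
            C[i, j] = C[j, i] = signal + noise / nbar
    return C
```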
This prescription gives us a data covariance matrix for a galaxy survey. What we actually need is a Fisher matrix for the parameters we are interested in. For this we use Equation \[eq:General\_FM\] above, which defines the Fisher matrix of the parameters in terms of the inverse of the data covariance matrix and its derivatives with respect to the parameters of interest. We are interested in the galaxy power spectrum and hence the differentiation of the covariance matrix in Equation \[eq:General\_FM\] is taken with respect to the bins of this power spectrum. As the noise covariance matrix does not depend on the power spectrum, we only need to differentiate the signal covariance matrix in Equation \[eq:C\_S\_ij2\]. Taking the galaxy power spectrum as a series of top-hat bins $$P(k)=\sum_{B}w_{B}(k)P_{B}\,,\quad w_{B}(k)=\begin{cases}
1 & \,\, k\in B\\
0 & \,\textnormal{otherwise}
\end{cases}\,,$$ where $P_{B}$ is the power in each bin, the differentiation takes the form $$\frac{\partial(C_{S})_{ij}}{\partial P_{B}}=\int_{k_{B}^{min}}^{k_{B}^{max}}\frac{dk}{(2\pi)^{3}}k^{2}W_{ij}(k)\,.$$ We insert this and the inverse of the data covariance matrix into Equation \[eq:General\_FM\] to get a Fisher matrix for the galaxy power spectrum bins. To get a Fisher matrix for the cosmological parameters one can use the parameter Jacobian $$F_{\alpha\beta}=\sum_{ab}F_{ab}\frac{\partial P_{a}}{\partial\lambda_{\alpha}}\frac{\partial P_{b}}{\partial\lambda_{\beta}}\;,$$ where $F_{ab}$ is the galaxy spectrum Fisher matrix and $F_{\alpha\beta}$ is the Fisher matrix for the cosmological parameters $\lambda_{\alpha}$ and $\lambda_{\beta}$.
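The projection onto cosmological parameters is simply a matrix sandwich with the Jacobian; a short sketch follows (the Jacobian itself, e.g. from finite differences of a Boltzmann code, is assumed to be supplied and is not part of this work).

```python
import numpy as np

def project_to_parameters(F_bins, jacobian):
    """F_params[alpha, beta] = sum_ab F_bins[a, b] * J[a, alpha] * J[b, beta],
    where J[a, alpha] = dP_a / dlambda_alpha."""
    J = np.asarray(jacobian)
    return J.T @ np.asarray(F_bins) @ J
```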
Survey Design
=============
We will compare the FoM of a sparse design to that of a contiguous survey, which we have chosen to be the Dark Energy Survey (DES).
Dark Energy Survey (DES)
-------------------------
The Dark Energy Survey (DES)[^5] [@DES] is designed to probe the origin of the accelerating universe and help uncover the nature of dark energy. Its digital camera, DECam, is mounted on the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory in the Chilean Andes. Starting in December 2012 and continuing for five years, DES will catalogue 300 million galaxies in the southern sky over an area of 5000 square degrees and a redshift range of $0.2<z<1.3$. In the next section we will explain how we ‘sparsify’ the DES survey for our purposes.
Here, we use a flat-sky approximation. Euclid, with a survey area of $20,000$ square degrees, should be treated on the full sky and is not investigated here. Nonetheless we expect results qualitatively similar to those for DES.
Sparse Design
--------------
![Design of the mask on the sky to sparsely sample the sky. A regular grid with $n$ patches of size $M\times M$ (note that we are observing through these patches — white squares in the Figure), placed at constant distances from one another at $x_{i}$ and $y_{j}$. The total *observed* area is the sum of the areas of all the patches, $n \times M^2$, and the total *sampled* area, $A_{\textrm{tot}}$, is the total area which bounds both the masked and the unmasked regions. Hence the fraction of sky observed is $f=(n\times M^2)/A_{\textrm{tot}}$. Also, note that we are assuming a flat-sky approximation.\[fig:Design\]](Fig/Mask){width="\columnwidth"}
For simplicity, we will design the sparsely sampled area of the sky as a regular grid of $n_{p}\times n_{p}$ square patches of size $M\times M$ — Figure \[fig:Design\]. We therefore define the structure on the sky as a top-hat in both $x$ and $y$ directions $$\begin{aligned}
\sum_{n}\Pi(x-x_{n}) & = & \begin{cases}
1 & \,|x-x_{n}|<M/2\\
0 & \,\textnormal{otherwise}
\end{cases}\,,\\
\sum_{m}\Pi(y-y_{m}) & = & \begin{cases}
1 & \,|y-y_{m}|<M/2\\
0 & \,\textnormal{otherwise}
\end{cases}\,,\end{aligned}$$ where $x_{i}$ and $y_{j}$ mark the centres of the patches in our coordinate system. In the $z$ direction we use the step function, which is defined as: $$\begin{aligned}
\Theta(z) & = & \begin{cases}
1 & z > 0 \\
0 & \,\textnormal{otherwise}
\end{cases}\,.\end{aligned}$$ With this design the weight function in equation \[eq:weighting\_fn\] takes the form: $$\begin{aligned}
\tilde{\psi}_{i}(\underline{k}) & = & \int d^{3}x\; e^{\imath(\underline{k}_{i}-\underline{k}).\underline{x}} \;\;\;\times\; \nonumber \\
& \; & \;\;\;\;\sum_{n}\Pi(x-x_{n})\sum_{m}\Pi(y-y_{m}) \;\;\;\times\; \nonumber \\
& \; & \;\;\;\;\Theta\left(z+\frac{L}{2}\right)\Theta\left(\frac{L}{2}-z\right) \;\times\frac{1}{V} \nonumber \\
& = & \int dxe^{\imath q_{x}x}\sum_{n}\Pi(x-x_{n}) \;\;\;\times\; \nonumber \\
& \; & \int dye^{\imath q_{y}y}\sum_{m}\Pi(y-y_{m}) \;\;\;\times\; \nonumber \\
& \; & \int dze^{\imath q_{z}z}\Theta\left(z+\frac{L}{2}\right)\Theta\left(\frac{L}{2}-z\right) \;\times\frac{1}{V} \nonumber \\
& = & \text{sinc}\left(q_{x}\frac{M}{2}\right)\sum_{n}2\cos\left(q_{x}x_{n}\right) \;\;\;\times\; \nonumber \\
& \; & \text{sinc}\left(q_{y}\frac{M}{2}\right)\sum_{m}2\cos\left(q_{y}y_{m}\right) \;\;\;\times\; \nonumber \\
& \; & \text{sinc}\left(q_{z}\frac{L}{2}\right) \;\times\frac{M^{2}L}{V} \ ,
\label{eq:weighting_fn2}\end{aligned}$$ where [$\underline{q}=\underline{k}_{i}-\underline{k}$]{}, $q_{x}=q\sin\theta\cos\phi$, $q_{y}=q\sin\theta\sin\phi$, $q_{z}=q\cos\theta$ and $d\mu=d\cos\theta$. The volume $V$ is the *total* sparsely sampled volume, $M$ is the size of the observed patch on the surface of the sky and $L$ is the observed depth. The last equality in the above equation uses the *Dirichlet Kernel* $$D_{n}(x)=\sum_{k=-n}^{n}e^{ikx}=1+2\sum_{k=1}^{n}\cos(kx)\,,$$ which can be used due to the symmetry of the design. The window function, defined in Equation \[eq:C\_S\_ij2\], now takes the form
$$\begin{aligned}
W_{ij}(k) & = & \int_{-1}^{1}\frac{d\mu}{2}\int_{0}^{2\pi}\frac{d\phi}{2\pi}\tilde{\psi}(\underline{k}_{i}-\underline{k})\tilde{\psi^{*}}(\underline{k}_{j}-\underline{k})\nonumber \\
& = & \int_{-1}^{1}\frac{d\mu}{2}\int_{0}^{2\pi}\frac{d\phi}{2\pi}\times\left(\frac{M^{2}L}{V}\right)^{2}\times\nonumber \\
& & \;\;\;\;\;\;\;\text{sinc}\left(q_{x}\frac{M}{2}\right)\sum_{n}2\cos(q_{x}x_{n}) \times\nonumber \\
& & \;\;\;\;\;\;\;\text{sinc}\left(q_{x}^{\prime}\frac{M}{2}\right)\sum_{n^{\prime}}2\cos(q_{x}^{\prime}x_{n^{\prime}})\times\nonumber \\
& & \;\;\;\;\;\;\;\text{sinc}\left(q_{y}\frac{M}{2}\right)\sum_{m}2\cos(q_{y}y_{m})\times\nonumber \\
& & \;\;\;\;\;\;\;\text{sinc}\left(q_{y}^{\prime}\frac{M}{2}\right)\sum_{m^{\prime}}2\cos(q_{y}^{\prime}y_{m^{\prime}})\times\nonumber \\
& & \;\;\;\;\;\;\;\text{sinc}\left(q_{z}\frac{L}{2}\right)\text{sinc}\left(q_{z}^{\prime}\frac{L}{2}\right)\;.\label{eq:Window Function}\end{aligned}$$
Note that there are two scales that control the behaviour of the window function; one is the size of the patches, $M$, and the other is their separation, set by the patch centres $x_{i}$. We will investigate the influence of both of these scales on the FoM by trying two different configurations, discussed in the next section. In the case of contiguous sampling of the sky, where we observe through a single contiguous square, the window function reduces to that of one single big patch, as shown below $$\begin{aligned}
W_{ij}(k) & = & \int_{-1}^{1}\frac{d\mu}{2}\int_{0}^{2\pi}\frac{d\phi}{2\pi}\times\nonumber \\
& & \;\;\;\;\;\;\;\text{sinc}\left(q_{x}\frac{M}{2}\right)\text{sinc}\left(q_{x}^{\prime}\frac{M}{2}\right)\times\nonumber \\
& & \;\;\;\;\;\;\;\text{sinc}\left(q_{y}\frac{M}{2}\right)\text{sinc}\left(q_{y}^{\prime}\frac{M}{2}\right)\times\nonumber \\
& & \;\;\;\;\;\;\;\text{sinc}\left(q_{z}\frac{L}{2}\right)\text{sinc}\left(q_{z}^{\prime}\frac{L}{2}\right)\;,\end{aligned}$$ which describes a single square cylinder.
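For completeness, here is a sketch of how the weighting and window functions above can be evaluated numerically. The sum over patch centres below runs over *all* centres of the symmetric grid (equivalent to the Dirichlet-kernel form quoted above), and the simple $\mu$–$\phi$ quadrature is an illustrative choice, not the scheme used for the results in this paper.

```python
import numpy as np

def sinc(x):
    """sin(x)/x (note that numpy's np.sinc(x) is sin(pi x)/(pi x))."""
    return np.sinc(x / np.pi)

def psi_tilde(q, centres_x, centres_y, M, L, V):
    """Fourier-space weighting function of Eq. (weighting_fn2) for a grid of
    square patches of side M and depth L; q = k_i - k is a 3-vector. A single
    patch centred at the origin recovers the contiguous (square cylinder) case."""
    qx, qy, qz = q
    grid_x = np.sum(np.cos(qx * np.asarray(centres_x)))  # Dirichlet-kernel sum
    grid_y = np.sum(np.cos(qy * np.asarray(centres_y)))
    return (sinc(qx * M / 2.0) * grid_x
            * sinc(qy * M / 2.0) * grid_y
            * sinc(qz * L / 2.0) * M**2 * L / V)

def window_ij(k, k_i, k_j, centres_x, centres_y, M, L, V, n_mu=64, n_phi=128):
    """Angular average W_ij(k) of psi(k_i - k) psi(k_j - k) over the direction
    of k at fixed |k| (midpoint rule in mu and phi), cf. Eq. (Window Function).
    k_i and k_j are the fixed pixel wavevectors (length-3 arrays)."""
    mus = np.linspace(-1.0 + 1.0 / n_mu, 1.0 - 1.0 / n_mu, n_mu)
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    total = 0.0
    for mu in mus:
        s = np.sqrt(1.0 - mu**2)
        for phi in phis:
            kvec = k * np.array([s * np.cos(phi), s * np.sin(phi), mu])
            total += (psi_tilde(k_i - kvec, centres_x, centres_y, M, L, V)
                      * psi_tilde(k_j - kvec, centres_x, centres_y, M, L, V))
    return total / (n_mu * n_phi)
```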
Sparsifying DES
---------------
We divide the total area of DES into small square patches, following the mask design described above. There are two ways to sparsify this area:
- Constant Total Area (full sampled area stays constant)\
In this setting we keep the patches at a constant position and gradually decrease their size. Therefore, the total *sampled* [^6] area is kept constant, while the total *observed* area decreases as the patch sizes decrease. The patches are placed $60\textnormal{Mpc}$ from one another; this is about half of the BAO scale of $\sim120\textnormal{Mpc}$, and is chosen so as to capture the BAO features as well as possible. This restricts the maximum size of the patches to $60\textnormal{Mpc}$ for $f=1$. We then shrink them from $60\textnormal{Mpc}$ to $10\textnormal{Mpc}$; the minimum size of $10\textnormal{Mpc}$ was chosen to avoid entering the non-linear regime at scales $<10\textnormal{Mpc}$. This configuration is shown in Figure \[fig:Survey-Geometry—CTV\]. In this case, as we make our observations more sparse, the total observing time decreases as well; we could instead choose to observe more deeply in the same amount of time and gain volume in the redshift direction.
   
- Constant Observed Area (footprint of the survey stays constant)\
In this setting the size of the patches are kept fixed at $60\textnormal{Mpc}$, and the area is sparsified by placing the patches further and further from one another. Here the total observed area is constant, while the total sampled area increases as the patches are put further and further. This configuration is shown in Figure \[fig:Survey-Geometry—COV\]. Now, the length of time for the survey remains the same, but is spread out over a larger area of sky.\
  
Note that the areas we consider here are small enough that the flat-sky approximation is valid. Also note that in all the above settings we keep the number of bins of the galaxy power spectrum constant at $n_{\textrm{bin}}=60$. In reality we should let the total volume of the survey set the binning of the power spectrum via $k_{min}=(2\pi/V)^{1/3}=dk$, and hence the number of bins $n_{\textrm{bin}}$. However, if $n_{\textrm{bin}}$ changes from case to case, D-optimality and the Entropy cannot be compared fairly, as their units depend on $n_{\textrm{bin}}$. To have a fair comparison between the cases we keep $n_{\textrm{bin}}$ constant.
Results
=======
We have chosen a geometrically flat $\Lambda$CDM model with adiabatic perturbations. We have a five-parameter model with the following values for the parameters: $\Omega_{c}=0.214$, $\Omega_{b}=0.044$, $\Omega_{\Lambda}=0.742$, $\tau=0.087$ and $h=0.719$, where $H_{0}=100h\;\textnormal{km}\,\textnormal{s}^{-1}\,\textnormal{Mpc}^{-1}$. The FoM used are $$\begin{aligned}
\textnormal{Entropy} & = & \frac{1}{2}\left[\ln\left|\mathbf{F}\right|-\ln|\mathbf{\Pi}|-\textrm{trace}(\mathbb{I}-\mathbf{\Pi}\mathbf{F}^{-1})\right]\,,\\
\textnormal{A-optimality} & = & \ln(\textrm{trace}(\mathbf{F}))\,,\\
\textnormal{D-optimality} & = & \ln(\left|\mathbf{F}\right|)\,,\end{aligned}$$ where $\mathbf{\Pi}$ is the prior Fisher matrix, which we have chosen to be that of an SDSS-LRG-like survey. The posterior Fisher matrix is $\mathbf{F}=\mathbf{L}+\mathbf{\Pi}$, where $\mathbf{L}$ is the likelihood Fisher matrix of the sparse survey under consideration. The utility functions above are defined so that they need to be maximised for an optimal design.
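The three FoM are simple scalar functions of the prior and posterior Fisher matrices; a minimal sketch, with the posterior built as $\mathbf{F}=\mathbf{L}+\mathbf{\Pi}$ as described above, might read:

```python
import numpy as np

def a_optimality(F):
    """A-optimality: log of the trace of the posterior Fisher matrix."""
    return np.log(np.trace(F))

def d_optimality(F):
    """D-optimality: log of the determinant of the posterior Fisher matrix."""
    return np.linalg.slogdet(F)[1]

def entropy(F, Pi):
    """Information gain of the posterior over the prior (Entropy FoM above)."""
    n = F.shape[0]
    return 0.5 * (np.linalg.slogdet(F)[1] - np.linalg.slogdet(Pi)[1]
                  - np.trace(np.eye(n) - Pi @ np.linalg.inv(F)))

# Posterior Fisher matrix from a likelihood Fisher matrix L and prior Pi:
# F = L + Pi; each FoM is then maximised over survey configurations.
```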
Constant Total Area {#sec:CTV}
-------------------
Figure \[fig:CTV\_FoM\] shows the FoM for both the galaxy power spectrum bins on the left and the cosmological parameters on the right. In both cases, the Entropy, A-optimality and D-optimality all increase with $f$. This is as expected, since a contiguous sampling of the sky captures all the information and should constrain cosmology best. The top panels in the Figure show A-optimality for the bins on the left and the cosmological parameters on the right. In both cases, $A$ increases with $f$ and reaches its maximum at $f=1$ for DES. Note that A-optimality is a measure of the errors of the parameters only — it is a measure of the trace of the Fisher matrix. Therefore, it does not account for the correlations between parameters. Although $A$ increases with $f$ for both the bins and the parameters, note that this increase is very small. To see the amount of change in each of the elements of the power spectrum Fisher matrix as $f$ increases, look at the top panel of Figure \[fig:CTV\_FM\]. This shows the diagonal elements of the Fisher matrix $\mathbf{F}$ for galaxy power spectrum bins for the different $f$. The elements are all on top of each other and indeed the gain obtained by increasing $f$ is very small.
The middle panels of Figure \[fig:CTV\_FoM\] show D-optimality, which again increases with $f$ for both the bins and the parameters. Note that D-optimality is a measure of the determinant of the Fisher matrix and therefore takes the correlation between the parameters into account. The correlation between the parameters is indeed very important; one disadvantage of the sparse sampling is the correlation it induces between the parameters due to aliasing. To see this effect, look at the bottom panel of Figure \[fig:CTV\_FM\], where the row of the Fisher matrix that corresponds to the middle bin of the power spectrum is shown. Going away from the peak in both directions, the elements show the correlation between the different bins and the middle one. As $f$ decreases and we get more and more sparse, the power in the off-diagonal elements of the Fisher matrix increases, meaning there is more aliasing. The DES survey, as a full contiguous survey, has the least aliasing, while the sparsest survey has the most. The rise towards small $k$ (large scales) is due to sample variance.
Looking at the correlations and the errors in the Fisher matrix of the spectrum, one notes that the decrease in D-optimality for sparser surveys is mostly due to the increased correlation between the bins rather than to increased errors; as we saw in the top panel of that Figure, the change in the errors is negligible. In general we conclude that the total aliasing induced by sparsity is small and the loss in the constraining power of the survey due to this aliasing is negligible. Hence, overall, little is gained by observing the sky more contiguously.
The bottom panels in Figure \[fig:CTV\_FoM\] show the Entropy for the bins and the parameters. Again, $E$ increases with $f$ and reaches its maximum for DES. The Entropy measures the total size of the errors of the parameters in the Fisher matrix as well as their correlation. Hence it is a good compromise between A- and D-optimality. It measures the total information gain of the survey relative to a prior survey. Having an SDSS-like survey as our prior, and taking into account both the errors and the correlation between the parameters, the contiguous DES survey has the largest gain compared to the sparse surveys. However, note that this gain is again very small.
Figure \[fig:CTV\_FMcp\] shows the relative loss in the marginalised errors of each of the cosmological parameters with respect to DES. The largest loss for a sparse observation of the sky is on $\Omega_{\Lambda}$ with $\delta \Omega_\Lambda/\Omega_\Lambda\sim0.45\%$ and the smallest is for $\Omega_{c}$ with a loss of $\delta\Omega_c/\Omega_c\sim0.15\%$. The non-marginalised errors show a qualitatively different behaviour, where $n_s$ has the largest and $\Omega_{\Lambda}$ has the smallest loss.
![‘Constant total area’ — Relative change in the errors of the cosmological parameters. The largest loss is about $4.5\%$ due to sparsifying the survey. \[fig:CTV\_FMcp\]](Fig/CTV/FMcp_Marg){width="\columnwidth"}
Constant Observed Area {#sec:COV}
----------------------
Figure \[fig:COV\_FoM\] shows the FoM for the power spectrum bins and the cosmological parameters. In this case the Entropy, A-optimality and D-optimality all decrease with $f$. Moreover, the overall changes in all the FoM are much larger than those seen in the previous scenario, for both the bins and the parameters.
The top panel of Figure \[fig:COV\_FM\] shows the diagonal elements of the Fisher matrix of the bins. As we sparsify the survey these elements increase, and hence the spectrum is better constrained. The bottom panel in the Figure shows the row of the Fisher matrix that corresponds to the middle bin of the spectrum. Going away from the peak, the elements show the correlation between the different bins and the middle one. For DES the middle bin has a correlation with the close neighbouring bins. However, the correlation decreases as we go away from the peak. Towards small $k$ (large scales) it starts to increase again due to sample variance. As $f$ decreases and we get more and more sparse, the middle bin has a sharper drop (due to the larger total size of the survey), i.e. less correlation with neighbouring bins. However, there is more aliasing between distant bins. Also, there are peaks (i.e., larger correlations) at certain scales, related to the distances between the patches, which change from case to case. The DES survey, as a full contiguous survey, has indeed the least aliasing, while the sparsest survey has the most.
Note that in this case the sparsity is obtained by placing the observed patches further and further away from each other. As the patches are placed further apart and the sparsity increases, the total size of the survey grows greatly, which seems to make up for the aliasing that the sparse design induces. Overall we gain a great deal by spending the same amount of time on a larger but sparsely sampled area.
Figure \[fig:COV\_FMcp\] shows the relative gain in the marginalised errors of each of the cosmological parameters with respect to DES. The largest gain for a sparse observation of the sky is on $\Omega_{\Lambda}$ with $\delta\Omega_{\Lambda}/\Omega_{\Lambda}\sim27\%$ and the smallest is for $\Omega_c$ with a gain of $\delta\Omega_{c}/\Omega_{c}\sim7\%$. Again, a qualitatively different scenario is seen for the non-marginalised errors; $\Omega_{b}$ has the largest gain due to sparsity, and $h$ has the smallest.
![‘Constant observed area’ — Relative change in the errors of the cosmological parameters. The largest gain is about $27\%$ by sparsifying the survey. Note that the $f=0.07$ case is only for illustration purposes as it covers an area larger than the area of the sky. \[fig:COV\_FMcp\]](Fig/COV/FMcp_Marg){width="\columnwidth"}
Conclusion
==========
In this work we have investigated the advantages and disadvantages of sparsely sampling the sky as opposed to a contiguous observation. By making use of Bayesian Experimental Design, we have defined our Figures of Merit as different functions of the Fisher matrix. These FoM capture different aspects of the parameters of interest, such as their overall variance, the correlations between them, or a measure of both, as in the Entropy. By optimising these functions we investigate an optimal survey design for estimating the galaxy power spectrum and a set of cosmological parameters. We have compared a series of sparse designs to a contiguous design of DES. We split the area of the DES survey into small square patches and sparsify the survey in two ways:
1. by shrinking the size of the patches while they are kept at a constant position. In this case the total sampled area of the survey is constant while the observed area (and the survey observing time) shrinks. This means the total information gained from the survey reduces in each case. In this scenario all three FoM (A-optimality, D-optimality and Entropy) increase with $f$, both for the power spectrum bins and the cosmological parameters. This is expected, as a contiguous sampling should capture all the information and constrain cosmology best. However, we note that this increase with decreasing sparsity is very small for both the bins and the cosmological parameters. Looking at the variance and the covariance of the parameters, we note that the slight degradation of the surveys due to sparsity is mostly because of the increased correlation between the bins — aliasing — rather than because of increased errors. In general we conclude that the total aliasing induced by sparsity is small and the loss in the constraining power of the survey because of it is negligible. Hence, overall, little is gained by observing the sky more contiguously. Indeed the largest loss in terms of the errors of the cosmological parameters is of the order of $\sim4.5\%$ in the sparsest case.
2. by keeping the size of the patches constant, but placing them further and further from one another. In this scenario the observed area (and observing time) is kept constant, while sparsifying means a larger and larger total sampled area. This means the total information gained from the survey in each case is the same. Therefore, there are two competing factors: one is the increase in the total sampled area as the survey is sparsified, and the other is the aliasing induced by the increasingly sparse mask on the sky.
In this case all FoM decrease with $f$, and the changes in the FoM are much larger than those seen in the previous scenario. As we sparsify the survey, the decrease in the errors makes up for the increased aliasing and hence causes a general improvement in the constraining power of the survey. Overall we gain a great deal by spending the same amount of time on a larger but sparsely sampled area. Indeed we gain as much as $\sim27\%$ for the sparsest survey, which is a significant improvement.
We conclude that sparse sampling could be a good substitute for contiguous observations and indeed the way forward for future surveys. At least for small areas of the sky, such as that of DES, sparse sampling can reduce the cost and the observing time while delivering essentially the same constraints on the cosmological parameters. Alternatively, we can spend the same amount of time but sparsely observe a larger area of the sky, which greatly improves the constraining power of the survey.
In this work we have chosen square observation patches, which may be the worst shape in terms of the correlations they induce. Another constraint in this design is the fixed, regular positions of the patches, which cause a loss of information at certain scales. The advantage of this approach has been its analytical formalism, which has made it possible to understand the important factors in sparse sampling. In future work we will investigate an optimal shape for the patches and adopt a numerical approach in which the patches are randomly distributed on the sky. This spreads the loss of information evenly over all scales and is expected to improve the results greatly.
\[lastpage\]
[^1]: E-mail: [email protected]; [email protected]
[^2]: Note that we use underlined symbols to denote vectors and bold symbols for matrices.
[^3]: It should be noted that the Cramer-Rao inequality is a statement about the so-called “Frequentist” confidence intervals and is not strictly applicable to “Bayesian” errors.
[^4]: Integration of the joint probability over other parameters.
[^5]: <http://www.darkenergysurvey.org/>
[^6]: This is the total area including both the masked and unmasked areas.
---
abstract: 'In this article we establish $C^{3,\alpha}$-regularity of the reduced boundary of stationary points of a nonlocal isoperimetric problem in a domain $\Omega \subset \mathbb{R}^n$. In particular, stationary points satisfy the corresponding Euler-Lagrange equation classically on the reduced boundary. Moreover, we show that the singular set has zero $(n-1)$-dimensional Hausdorff measure. This complements the results in [@Choksi-Sternberg:2007] in which the Euler-Lagrange equation was derived under the assumption of $C^2$-regularity of the topological boundary and the results in [@Sternberg-Topaloglu:2011] in which the authors assume local minimality. In case $\Omega$ has non-empty boundary, we show that stationary points meet the boundary of $\Omega$ orthogonally in a weak sense, unless they have positive distance to it.'
author:
- 'Dorian Goldman [^1]'
- 'Alexander Volkmann [^2]'
bibliography:
- 'references.bib'
nocite: '[@Volkmann:2010]'
title: On the regularity of stationary points of a nonlocal isoperimetric problem
---
Introduction
============
The main goal of this work is to establish $C^{3,\alpha}$-regularity of the reduced boundary of stationary points of a nonlocal isoperimetric problem, and estimate the size of its singular set. More precisely, we consider the following functional $$\label{nonloceqn1}
\mathcal E_\gamma(E): = P(E,\Omega) + \gamma \int_E\int_E G(x,y)\,dy\,dx+\int_E f(x)\,dx,$$ where $\Omega\subset \mathbb R^n$ is a domain (open, connected) of class $C^2$, $E\subset \Omega$ is a bounded set of finite perimeter $P(E,\Omega)$ in $\Omega$, $\gamma \geq 0$, $f\in C_{loc}^{2}(\overline \Omega)$, and $G$ denotes a symmetric “kernel” (see below for precise assumptions on $G$). The reader should think of $G$ as the Green’s function of the Laplace operator with Neumann boundary condition in $\Omega$ or the Newtonian potential in case $\Omega= \mathbb R^n$.
Physically, the first term in $\mathcal E_\gamma$ models surface tension and thus its minimization favors clustering, whereas the second term can be used to model a competing repulsive interaction. The third term can be used to model additional external forces, cf. [@Giusti:1981]. The functional $\mathcal E_\gamma$ is often referred to as the *sharp-interface Ohta-Kawasaki energy* [@Ohta-Kawasaki:1986] in connection with di-block copolymer melts. Minimizers of $\mathcal E_\gamma$ under a volume constraint describe a number of polymer systems [@deGennes:1979; @Nagaev:1995; @Ren-Wei:2000] as well as many other physical systems [@Chen-Khachaturyan:1993; @Emery:1993; @Glotzer-DiMarzio-Muthukumar:1995; @Lundqvist-March:1983; @Nagaev:1995] due to the fundamental nature of the Coulombic term. Despite the abundance of physical systems for which $\mathcal E_\gamma$ is applicable, rigorous mathematical analysis for the case $\gamma \neq 0$ is fairly recent. We refer to the introduction of [@Cicalese-Spadaro:2013] for more details and an account of the results about this functional.
Regularity for (local) minimizers of $\mathcal E_\gamma$ under a volume constraint was established by Sternberg and Topaloglu [@Sternberg-Topaloglu:2011]. They showed that any local minimizer $E$ of $\mathcal E_\gamma$ in a ball $B_\rho(x)$ is a so called *$(K,\varepsilon)$-minimizer* of perimeter in the sense that $$P(E,B_\rho(x))\leq P(F,B_\rho(x)) +K\rho^{n-1+\varepsilon} \quad\text{for all $F$ such that $F\Delta E \subset\subset B_\rho(x)$,}$$ for some $K<\infty$ and some $\varepsilon \in (0,1]$. Standard results (see for example [@Giusti:1984; @Massari-Miranda:1984; @Maggi:2012]) imply that the reduced boundary $\partial^*E\cap B_\rho(x)$ is of class $C^{1,\frac{\varepsilon}{2}}$ and that the singular set $(\partial E\setminus\partial^*E)\cap B_\rho(x)$ has Hausdorff dimension at most $n-8$. Standard elliptic regularity theory then implies higher regularity.
For *stationary* points of $\mathcal E_\gamma$, which are not a priori minimizing in any sense, these methods are no longer available. To this end Röger and Tonegawa [@Roeger-Tonegawa:2008 Section 7.2] proved $C^{3,\alpha}$-regularity of the reduced boundary of stationary points of $\mathcal E_\gamma$ that arise as the limit of stationary points of the (diffuse) Ohta-Kawasaki energy with parameter $\varepsilon$ going to zero. They also showed that in this case the singular set has Hausdorff dimension at most $n-1$.
Our main result (Theorem \[thm:regularity\]) removes this special assumption. In particular, we do not require any minimality assumptions. As part of our proof we establish a weak measure theoretic form of the Euler-Lagrange equation for arbitrary stationary points of $\mathcal E_\gamma$ under very weak regularity assumptions (we only require the set to have finite perimeter). The Euler-Lagrange equation for stationary points of $\mathcal E_\gamma$ has previously been derived by Choksi and Sternberg [@Choksi-Sternberg:2007], however assuming $C^2$-regularity of the topological boundary. Our main result will be applied in [@Goldman:2014], which studies the asymptotics of stationary points of the Ohta-Kawasaki energy and its diffuse interface version.\
\
In order to state our main result we need to introduce some notation and specify our hypotheses:
For a given domain $\Omega \subset \mathbb R^n$ with $C^2$-boundary we consider two classes of sets. $$\mathcal A:= \{ E \subset \Omega: E\;\text{is bounded and } P(E,\Omega) < +\infty \} \quad\text{and}\quad
\mathcal A_m:= \{ E \in \mathcal A: | E | = m\},$$ where $m \in (0,\vert \Omega \vert)$. A stationary point of $\mathcal E_\gamma$ in $\mathcal{A}$ or $\mathcal{A}_m$ is then defined as follows.
\[defcp\] A set $E \in \mathcal{A}$ is said to be a *stationary point of $\mathcal E_\gamma$ in $\mathcal{A}$* if for every vector field $X \in C_c^1(\mathbb{R}^n;\mathbb{R}^n)$ with $X\cdot \nu_\Omega =0$ on $\partial \Omega$ we have that $$\begin{aligned}
\label{dervan} \frac{d}{dt}\Big|_{t=0}\mathcal E_{\gamma}(\phi_t(E)) = 0,\end{aligned}$$ where $\{\phi_t\}$ is the flow of $X$, i.e. $\partial_t\phi_t =X\circ \phi_t $, $\phi_0 ={\rm id}$. If this condition holds only for all $X$ such that $\phi_t(E) \in \mathcal{A}_m$ for all $t \in (-\varepsilon,\varepsilon)$ and some small $\varepsilon > 0$, then we call $E$ a *stationary point of $\mathcal E_\gamma$ in $\mathcal{A}_m$*.
We now specify the assumptions that we impose on the function $G$ appearing in .
Firstly, we let $\Gamma$ be the fundamental solution of the Laplace operator given by $$\Gamma(x,y):=
\begin{cases}
\frac{1}{\omega_{n}(n-2)}\frac{1}{|x-y|^{n-2}}&,n\geq 3\\
-\frac{1}{2\pi}\log|x-y| &,n=2.
\end{cases}$$ Here $\omega_{n} = \mathcal H^{n-1}(\mathbb S^{n-1})$ denotes the surface measure of the unit sphere in $\mathbb R^n$. We assume that $$G(x,y)=
\Gamma(x,y) +R(x,y),$$ where $R$ is a symmetric corrector function. I.e. $$\begin{cases}
\Delta R(\cdot,y) = \frac{1}{|\Omega|}&\text{in $\Omega$}\\
\frac{\partial R(\cdot,y)}{\partial \nu_\Omega} = - \frac{\partial \Gamma(\cdot,y)}{\partial \nu_\Omega} &\text{on $\partial \Omega$}
\end{cases}$$ for all $y\in \Omega$. Here we interpret $|\Omega|^{-1}$ to be zero for unbounded domains $\Omega$. In case $\Omega$ is bounded $G$ is a Neumann Green’s function of the Laplace operator. In case $\Omega = \mathbb R^2$ we also allow that $G(x,y)=\Gamma_\beta(x,y)$ for $\beta \in (0,1)$, where $\Gamma_\beta(x,y):= |x-y|^{-\beta}$.
For a bounded Borel set $E \subset \Omega$ we define $$\label{vdef1}
\phi_E(x) := \int_{E} G(x,y) \,dy,$$ to be the *potential* of $E$ associated to the kernel $G$. By standard elliptic theory we have $\phi_E\in C_{loc}^{1,\alpha}( \overline\Omega)$.\
\
Our main result reads as follows.
\[thm:regularity\] Let $E$ be a stationary point of the functional $\mathcal E_\gamma$ in $\mathcal{A}$ or $\mathcal{A}_m$ with $f$ and $G$ as above. Then the reduced boundary $\partial_\Omega^*E=\partial^* E\cap \Omega$ is of class $C^{3,\alpha}$ for all $\alpha \in (0,1)$. In particular, the equation $$\label{EL1} H +2 \gamma \phi_E + f = \lambda,$$ holds classically on $\partial_\Omega^*E$, where $H$ is the mean curvature[^3] of $\partial_\Omega^*E$, $\lambda$ is a Lagrange multiplier, and $\phi_E$ is the potential of $E$ defined above. (When $E$ is a stationary point in the class $\mathcal{A}$, then $\lambda = 0$.) The measure $\mu_E =\mathcal H^{n-1}\llcorner \partial_\Omega^*E$ is weakly orthogonal to $\partial \Omega$ in the sense that $$\int_{\partial_\Omega^*E}{\rm div}_EX\,d\mathcal H^{n-1} = -\int_{\partial_\Omega^*E}\vec H \cdot X\,d\mathcal H^{n-1} ,$$ for all $X \in C_c^1(\mathbb R^n;\mathbb R^n)$ with $X\cdot \nu_\Omega = 0$ on $\partial \Omega$.
Moreover, the singular set $(\partial E \setminus \partial^*E)\cap \Omega$ is a relatively closed subset of $\partial E \cap \Omega$ which satisfies $\mathcal H^{n-1}((\partial E \setminus \partial^*E)\cap \Omega)=0$.
The estimate on the singular set in Theorem \[thm:regularity\] is optimal. This can already be seen in the case $\gamma =0$. E.g. let $\Omega = B_1(0) \subset \mathbb R^n$ and set $E:=\{x=(x_1,...,x_n)\in \Omega: x_1\cdot x_2>0\}$. Then $E$ is a stationary point of the perimeter functional with singular set $(\partial E \setminus \partial^*E)\cap \Omega = \{ x \in \Omega: x_1= x_2 =0\}$.
Our paper is organized as follows. In Section \[sec:notation\] we introduce our notation and review the basic theory of rectifiable varifolds and sets of finite perimeter, and present Allard’s regularity theorem and De Giorgi’s structure theorem for the reader’s convenience. In Section \[sec:prelem\] we prove some preliminary results that are needed in order to prove Theorem \[thm:regularity\]. In Section \[sec:proofofmain\] we prove Theorem \[thm:regularity\]. In Section \[sec:bdryregularity\] we include, for convenience, the regularity for local minimizers of $\mathcal E_\gamma$ near boundary points $x\in \partial E \cap \partial \Omega$. This has already been proven independently by Julin and Pisante [@Julin-Pisante:2013 Theorem 3.2].
Notation and preliminaries {#sec:notation}
==========================
Throughout this work we assume that $\Omega\subset \mathbb R^n$, $n\geq 2$, is a domain (open, connected) of class $C^2$ (although the regularity assumption on the boundary is only needed when we consider vector fields that do not have compact support inside $\Omega$). In this section we introduce our notation and summarize basic results from geometric measure theory that are needed in the sequel. For more details on the subject we refer the reader to [@Evans-Gariepy:1992; @Giusti:1984; @Maggi:2012; @Simon:1983].
Varifolds and Allard’s regularity theorem
-----------------------------------------
Here we collect basic definitions for varifolds and state Allard’s regularity theorem. An $\mathcal H^{k}$-measurable set $M \subset \mathbb R^n$ is called *countably $k$-rectifiable* if $$M = \bigcup_{j=0}^\infty N_j,$$ where the $N_j \subset\mathbb R^n$, $j\geq 1$, are $k$-dimensional submanifolds of class $C^1$ and $\mathcal{H}^{k}(N_0)=0$. For a vector field $X\in C_c^1(\mathbb R^n;\mathbb R^n)$ we can define the tangential divergence $\div_MX$ of $X$ by setting $$\div_MX(x):= \div_{N_j}X(x)$$ for $x\in N_j$, which is well-defined $\mathcal H^k$-a.e. on $M$. Here $\div_{N_j}X(x) = \sum_{i=1}^k\tau_i\cdot DX(x)\tau_i$, where $\{\tau_i\}_{i=1,...,k}$ is an orthonormal basis of the tangent plane $T_xN_j$ of $N_j$ at the point $x$.
For the purpose of this article we use the following pragmatic definition of rectifiable $k$-varifolds, which usually has to be deduced from the definition (we refer to [@Simon:1983] for details):\
A *rectifiable $k$-varifold* $\mu$ in $\Omega$ is a Radon measure on $\Omega$ such that $$\mu = \theta \mathcal H^k \llcorner M,$$ where $M$ is a countably $k$-rectifiable set and where the *multiplicity function* $\theta \in L_{loc}^1(\mathcal H^k \llcorner M)$ is such that $\theta >0$ $\mathcal H^k$-a.e. on $M$.
The *first variation* $\delta \mu$ of $\mu$ with respect to $X\in C^1_c(\Omega,\mathbb R^{n})$ is given by $$\delta \mu(X) := \int_M \div_MX\,d\mu,$$ which by [@Simon:1983 §16] is equal to $ \frac{d}{dt}(\phi_t{}_\sharp\mu)(\Omega) |_{t=0} $. Here $\phi_t{}_\sharp\mu$ denotes the *image varifold* given by $\phi_t{}_\sharp\mu: = (\theta\circ\phi_t^{-1}) \mathcal H^k \llcorner \phi_t(M)$, and where $\{\phi_t\}$ denotes the flow of $X$.
We say that $\mu$ has *generalized mean curvature* $\vec H$ in $\Omega$ if $$\label{genH}
\delta \mu(X)=\int_M\div_MX\,d\mu = - \int_M \vec H \cdot X\,d\mu\quad\text{for all $X\in C^1_c(\Omega;\mathbb R^n)$},$$ where $\vec H$ is a locally $\mu$-integrable function on $M\cap \Omega$ with values in $\mathbb R^{n}$. We remark that using the Riesz representation theorem such an $\vec H$ exists if the total variation $\|\delta \mu \|$ is a Radon measure in $\Omega$ and moreover $\|\delta \mu\|$ is absolutely continuous with respect to $\mu$ (see [@Simon:1983] for details).
We make the trivial but important remark that a rectifiable $k$-varifold $\mu$ in $\Omega$ that has finite total mass $\mu(\Omega)$ naturally defines a rectifiable $k$-varifold in $\mathbb R^n$.
A fundamental result in the theory of varifolds is the following regularity theorem due to Allard [@Allard:1972] (see also [@Simon:1983 Chapter 5] for a more accessible approach) that holds for rectifiable $k$-varifolds $\mu$ in $\Omega \subset \mathbb R^n$. We use the following hypotheses. $$\label{hyp}
\left.\begin{split} 1\leq \theta\,\,\, \mu\text{-a.e. , }0\in\operatorname{spt}(\mu)\,\,,&\,B_\rho(0)\subset \Omega\\
\alpha_k^{-1}\rho^{-k}\mu(B_\rho(0))\leq & 1+\delta\\
\left(\int_{B_\rho(0)}|\vec H|^p\, d\mu\right)^\frac{1}{p}\rho^{1-\frac{k}{p}} &\leq \delta .
\end{split}\,\,\,\,\right\}\tag{h}$$
\[thm:Allard\] For $p>k$, there exist $\delta=\delta(n,k,p)$ and $\gamma=\gamma(n,k,p)$ $\in (0,1)$ such that if $\mu$ is a rectifiable $k$-varifold in $\Omega$ that has generalized mean curvature $\vec H$ in $\Omega$ (see ) and satisfies hypotheses , then ${\rm spt}(\mu)\cap B_{\gamma\rho}(0)$ is a graph of a $C^{1, 1-\frac{k}{p}}$ function with scaling invariant $C^{1, 1-\frac{k}{p}}$ estimates depending only on $n,k,p,\delta$.
More precisely, there is a linear isometry $q$ of $\mathbb{R}^{n}$ and a function $u \in C^{1,1-\frac{k}{p} }(B_{\gamma \rho}^k(0);\mathbb R^{n-k})$ with $u(0) = 0$, $\textrm{spt}(\mu) \cap B_{\gamma \rho}(0) = q(\textrm{graph}(u)) \cap B_{\gamma \rho} (0)$, and $$\label{estimates}
\rho^{-1} \sup_{B_{\gamma \rho}^k(0)}|u| + \sup_{B_{\gamma \rho}^k(0)} |Du| + \rho^{1-\frac{k}{p} } \sup_{\substack{x,y \in B_{\gamma \rho}^k (0)\\ x \neq y}} |x-y|^{-(1- \frac{k}{p})} |Du(x)-Du(y)| \leq c(n,k,p)\delta^{1/4k}.$$
Sets of finite perimeter
------------------------
Let $E \subset \Omega$ be a Borel set. We say that $E$ has finite perimeter $P(E,\Omega)$ in $\Omega$ if $$P(E,\Omega) :=\sup_{\substack{X \in C_c^1(\Omega;\mathbb{R}^n)\\ |X | \leq 1}} \int_{E} {\rm div} X\,dx < \infty.$$ The Riesz representation theorem implies the existence of a Radon measure $\mu_E$ on $\Omega$ and a $\mu_E$-measurable vector field $\eta_E:\Omega \to \mathbb R^n$ with $|\eta_E|=1$ $\mu_E$-a.e. such that $$\int_E \div X\,dx = \int_{\Omega} X\cdot \eta_E \,d\mu_E \quad\textrm{for all } X \in C_c^1(\Omega;\mathbb{R}^n).$$ The vector valued measure $\vec \mu_E := \eta_E\,\mu_E$ is sometimes referred to as the Gauss-Green measure of $E$ (with respect to $\Omega$). For the total perimeter of the set $E$ in $\Omega$ we have $$P(E,\Omega) = \mu_E(\Omega).$$
In the case that $\partial E\cap \Omega$ is of class $C^1$, we have $$\vec \mu_E = \nu_E \mathcal{H}^{n-1} \llcorner (\partial E\cap\Omega)\quad\text{and}\quad P(E,\Omega) = \mathcal{H}^{n-1}(\partial E\cap \Omega).$$ In particular, we have for every point $x\in \partial E\cap \Omega$ $$\label{LimitNormal}
\nu_E(x) = \lim_{r \to 0} \dashint_{ \partial E \cap B_r(x)} \nu_E \,d\mathcal{H}^{n-1} = \lim_{r \to 0} \frac{\vec \mu_E(B_r(x))}{\mu_E(B_r(x))}.$$ For a generic set $E$ of finite perimeter, the *reduced boundary* $\partial_\Omega^*E $ of $E$ in $\Omega$ is defined as those $x \in \partial E \cap \Omega$ such that the above limit on the right hand side exists and has norm $1$. The Lebesgue-Besicovitch differentiation theorem implies that $\mu_E(\Omega\setminus \partial_\Omega^*E)=0$. The vector field $\nu_E \in L^1(\mu_E; \mathbb R^{n})$ *defined* by the equation on $\partial_\Omega^*E$ (and set to $0$ elsewhere), is called the *measure theoretic outer unit normal* of $E$. For more details on sets of finite perimeter we refer to [@Giusti:1984; @Simon:1983; @Evans-Gariepy:1992].
\[thm:DeGiorgi\] Suppose $E$ has finite perimeter in $\Omega$. Then $\partial_\Omega^*E$ is countably $(n-1)$-rectifiable. In addition for all $x \in \partial_\Omega^*E$ $$\Theta(\mu_E,x) := \lim_{r \to 0} \frac{\mu_E(B_r(x))}{\alpha_{n-1} r^{n-1}} = 1,$$ where $\alpha_{n-1}$ is the volume of the unit ball in $\mathbb{R}^{n-1}$. (i.e. the limit exists and is equal to $1$.) Moreover, $\mu_E = \mathcal{H}^{n-1}\llcorner \partial_\Omega^*E$.
\[rmk:DeGiorgi\] De Giorgi’s structure theorem in particular shows that every set $E$ of finite perimeter defines - through its generalized surface measure $\mu_E$ - a rectifiable $(n-1)$-varifold of multiplicity $\theta \equiv 1$ on $\partial_\Omega^*E$.
Let $E\subset \Omega$ be of finite perimeter in $\Omega$. If $\Omega$ is Lipschitz regular, one can define the (inner) *trace* $\chi_E^+ \in L^1(\mathcal H^{n-1}\llcorner \partial \Omega)$ of $\chi_E$ on $\partial \Omega$. For details we refer to [@Evans-Gariepy:1992 Chapter 5.3]. For every vector field $X \in C_c^1(\mathbb R^n;\mathbb R^n)$ we have $$\int_E \div X\,dx = \int_{\partial_\Omega^*E}X\cdot\nu_E\,d\mathcal H^{n-1} + \int_{\partial\Omega} X\cdot\nu_\Omega\,\chi_E^+\,d\mathcal H^{n-1}.$$ This implies that $E$ is also a set of finite perimeter as a subset of $\mathbb R^n$ with $$P(E,\mathbb R^n) = P(E,\Omega)+ \int_{\partial\Omega} |\chi_E^+|\,d\mathcal H^{n-1}.$$ As a finite perimeter set in $\mathbb R^n$, $E$ also has a Gauss-Green measure which we shall denote by $ \vec \mu_{E}^*$. Obviously $ \vec \mu_{E}^* \llcorner \Omega = \vec \mu_E$ and $\partial^*E \cap \Omega = \partial_\Omega^*E$.\
\
Since sets of finite perimeter are equivalence classes of sets, one needs to choose a good representative in order to talk about their regularity properties. W.l.o.g. (see [@Giusti:1984 Proposition 3.1] for details) we will always assume that any finite perimeter set $E$ at hand satisfies the following properties: $$\begin{aligned}
\label{representative}
(a)\quad& E \text{ is Borel} \nonumber \\
(b)\quad& 0 < \vert E \cap B_\rho(x) \vert < \vert B_\rho(x) \vert\;\;\text{for all}\;\;x \in \partial E\;\;\text{and all}\;\;\rho > 0 \\
(c)\quad& \overline{\partial_\Omega^*E} = \partial E\cap \overline \Omega \text{ which implies that ${\rm spt}(\mu_E)=\partial E \cap \overline \Omega.$} \nonumber \end{aligned}$$
The first variation of perimeter
--------------------------------
Let $X \in C_c^{1}(\Omega; \mathbb{R}^n)$ with corresponding flow $\{\phi_t\}$. The *first variation of perimeter* is then easily computed as (see [@Giusti:1984; @Maggi:2012]) $$\label{pervar}
\frac{d}{dt}\Big|_{t=0}P(\phi_t(E),\Omega) = \int_{\partial_\Omega^*E} \textrm{div}_{E} X \,d\mathcal{H}^{n-1},$$ where $\textrm{div}_E X$ is the tangential divergence of the vector field $X$ with respect to $E$: $$\textrm{div}_E X = \textrm{div} X - \nu_E\cdot DX\nu_E,$$ which obviously agrees with the definition of tangential divergence with respect to $\partial_\Omega^*E$. Hence, the expression equals the first variation $\delta \mu_E(X)$ of the varifold $\mu_E$ with respect to $X$.
In order to investigate the behavior of stationary points $E$ of $\mathcal E_\gamma$ at the boundary $\partial \Omega$ of $\Omega$ we need to allow for more general variations (as already appearing in Definition \[defcp\]). By the regularity assumption on $\partial \Omega$ it follows (cf. [@Grueter:1987]) that $\phi_t(\Omega)\equiv \Omega$ (and $\phi_t(\partial \Omega)\equiv \partial \Omega$) for the flow $\{\phi_t\}$ of any vector field $X\in C_c^1(\mathbb R^n;\mathbb R^n)$ such that $X\cdot \nu_\Omega =0$ on $\partial \Omega$. Since $P(\phi_t(E),\Omega)\equiv (\phi_t{}_\sharp\mu_E)(\mathbb R^n)$ we see that the formula still holds for such vector fields $X$.
Preliminary results {#sec:prelem}
===================
\[1st variation of perimeter\] Let $E \in \mathcal A$ and let $X\in C_c^1(\mathbb R^n;\mathbb R^n)$ with $X\cdot \nu_\Omega =0$ on $\partial \Omega$ be a vector field with corresponding flow $\{\phi_t\}$. Then $$\begin{aligned}
\frac{d}{dt}\mathcal E_\gamma(\phi_t(E)) \Big|_{t=0} =\int_{ \partial_\Omega^*E}\div_E X\,d\mathcal{H}^{n-1} +2\gamma\int_{\partial_\Omega^*E} \phi_E X \cdot \nu_E \,d\mathcal{H}^{n-1} + \int_{\partial_\Omega^*E} f X \cdot \nu_E \,d\mathcal{H}^{n-1}.\end{aligned}$$
The first variation of perimeter is given by the formula recalled in the previous section. It remains to compute the first variation of the nonlocal term; the computation of the first variation of the third term is similar but easier. By the change of variables formula it holds that $$\begin{aligned}
\label{G1}
\int_{\phi_t(E)}\int_{\phi_t(E)}G(x,y)\,dx\,dy = \int_{E}\int_{E}G(\phi_t(x),\phi_t(y))\,\vert\det D\phi_t(x)\vert\vert\det D\phi_t(y)\vert\,dx\,dy .\end{aligned}$$ Hence, using this identity and the assumptions on $G$, which allow us to differentiate under the integral, we compute $$\begin{aligned}
\label{1stvarcomp}
\frac{d}{dt}\Big|_{t=0} &\int_{\phi_t(E)}\int_{\phi_t(E)}G(x,y)\,dx\,dy\nonumber\\
& =2\int_{E}\int_{E} (\nabla_xG)(x,y) \cdot X(x)dx\,dy + 2\int_{E}\int_{E} G(x,y)\,\text{div}X(x) \,dx\,dy \nonumber \\
& =2\int_{E}\int_{E} \div ( G(\cdot,y)X)(x) \,dx\,dy \nonumber\\
& =2\int_{E}\int_{E} \div ( \Gamma(\cdot,y)X)(x) \,dx\,dy +2\int_{E}\int_{\partial_\Omega^*E} R(\cdot,y)\,X\cdot\nu_E \,d\mu_E\,dy .\end{aligned}$$ We cannot directly apply the divergence theorem to the first term of the last line, since $\partial_\Omega^*E$ is only $(n-1)$-rectifiable and $G(\cdot,y)$ is not of class $C^1$ near $x=y$. We can get around this technical obstacle by applying the results of [@Chen-Torres-Ziemer:2009], but we present a simple argument which suffices in our case. Since $\Omega$ is of class $C^2$ (for this argument Lipschitz is enough) we have (see Section \[sec:prelem\]) that $E$ is a set of finite perimeter in $\mathbb R^n$. By [@Giusti:1984 Theorem 1.24] we can approximate $E$ in the support of $X$ by smooth sets $E^{i}$ such that $$\begin{aligned}
\chi_{E^{i}} &\to \chi_E \textrm{ in } L^1_{loc}(\mathbb R^n)\textrm{ and } \\
\vec\mu_{E^{i}}^* &\to \vec\mu_E^* \textrm{ weakly as Radon measures on $\mathbb R^n$}.\end{aligned}$$ We may apply the Lebesgue dominated convergence theorem to conclude that $$\begin{aligned}
\label{dominated}
\int_{E}\int_{E} \div ( \Gamma(\cdot,y)X)(x) \,dx\,dy= \lim_{i \to \infty} \int_{E}\int_{E^i} \div ( \Gamma(\cdot,y)X)(x) \,dx\,dy.\end{aligned}$$ Moreover, we have $$\begin{aligned}
\label{dominated1}
\int_{E^{i}} \div ( \Gamma(\cdot,y)X)(x) \,dx =\lim_{\rho \to 0} \int_{E^{i} \setminus B_{\rho}(y)} \div ( \Gamma(\cdot,y)X)(x) \,dx.\end{aligned}$$ We may now apply the divergence theorem and we have for a.e. $0< \rho < 1$ $$\begin{aligned}
\label{dominated2}
\int_{E^{i} \backslash B_{\rho}(y)} \div ( \Gamma(\cdot,y)X)(x) \,dx
&= \int_{\partial E^i \setminus B_{\rho}(y)} \Gamma(\cdot,y) \,X \cdot \nu_{E^{i}}\,d\mathcal H^{n-1} \nonumber\\
&\quad- \int_{\partial B_{\rho}(y) \cap E^{i}} \Gamma(\cdot,y) \,X \cdot \nu_{B_{\rho}(y)} \,d\mathcal H^{n-1}.\end{aligned}$$ The second term on the right hand side can be estimated by $c(n) \sup |X|\,\rho^{1-\varepsilon}$ for some $\varepsilon \in [0,1)$, and hence goes to zero as $\rho \to 0$. On the other hand, we have $$\begin{aligned}
&\left| \int_{\partial E^i \setminus B_\varrho(y)} \Gamma(\cdot,y)X \cdot\nu_{E^i}\,d\mathcal H^{n-1} - \int_{\partial E^i } \Gamma(\cdot,y)X \cdot\nu_{E^i}\,d\mathcal H^{n-1} \right| \\
& \quad \leq \mathcal H^{n-1}(\partial E^i \cap B_\varrho(y) )^{1-\frac{1}{p} } \left( \int_{\partial E^i \cap {\rm spt}(X) } |\Gamma(\cdot,y)|^p\,d\mathcal H^{n-1} \right)^\frac{1}{p} \sup |X| ,\end{aligned}$$ where $p\in (1, \frac{n-1}{n-2})$, in case $n\geq 3$, and $p \in (1,\beta^{-1})$ in case $G=\Gamma_\beta$. Whence, upon combining the two previous identities and letting $\rho \to 0$, $$\int_{E^{i}} \div ( \Gamma(\cdot,y)X)(x) \,dx=\int_{\partial E^i } \Gamma(\cdot,y) X \cdot \nu_{E^{i}} \,d\mathcal H^{n-1}.$$ Using this and applying Fubini’s theorem we arrive at $$\begin{aligned}
\label{dominated4}
\int_E \int_{E^{i}} \div ( G(\cdot,y)X)(x) \,dx \,dy = \int_{\partial E^i} \phi_E X \cdot \nu_{E^{i}} \,d\mathcal H^{n-1}.\end{aligned}$$ Now letting $i \to \infty$, using the fact that $\phi_E$ is continuous and that $X$ has compact support, and combining the above identities, we obtain $$\begin{aligned}
\int_{E}\int_{E} \div ( G(\cdot,y)X)(x) \,dx\,dy= \int_{\partial^*E} \phi_E X \cdot \nu_E \,d\mathcal{H}^{n-1}.\end{aligned}$$ The claim now follows from the fact that $X$ is tangential to $\partial \Omega$.
\[lagrange1\] There exists a vector field $Y \in C_c^1(\Omega;\mathbb R^n)$ such that $\int_{E}\div Y dx =1$.
Assume by contradiction that for every vector field $X \in C_c^1(\Omega;\mathbb R^n)$: $\int_{E}\text{div}X dx =0$. Then by Du Bois-Reymond’s lemma [@Giaquinta-Hildebrandt:1996-I] we conclude that $$\chi_E=0 \;\;\;\text{or}\;\;\; \chi_E=1 \;\;\;\text{$\mathcal L^n$-a.e. on $ \Omega$},$$ where we used that $\Omega$ is connected. Hence, $$E = \Omega \;\;\;\text{or}\;\;\; E =\emptyset \;\;\;\text{in the measure theoretic sense}.$$ This contradicts the assumption that $0<\vert E \vert < \vert \Omega \vert$, proving the claim.
\[firstvariation\] Let $E$ be a stationary point of $\mathcal E_\gamma$ in $\mathcal{A}_m$ or $\mathcal{A}$. Then there exists a real number $\lambda$ such that $\mu_E$ has a generalized mean curvature vector $$\vec H = -(\lambda - 2 \gamma \phi_E - f)\nu_E$$ and such that $\mu_E$ is weakly orthogonal to $\partial \Omega$. That is, for every vector field $X \in C_c^1(\mathbb{R}^n;\mathbb{R}^n)$ with $X\cdot \nu_\Omega =0$ on $\partial\Omega$ the following variational equation is true: $$\label{guy} \int_{\partial_\Omega^*E} \textrm{\em div}_E X \,d\mathcal{H}^{n-1} = - \int_{\partial_\Omega^*E} \vec H \cdot X \,d\mathcal{H}^{n-1}.$$ For stationary points in $\mathcal{A}$, we have $\lambda = 0$.
[*Step 1: Construction of the local variation*.]{}\
The case of variations in $\mathcal{A}$ is an immediate consequence of Proposition \[1st variation of perimeter\]. For the case of $\mathcal{A}_m$ let $Y \in C_c^1(\Omega;\mathbb R^n)$ be a vector field such that $\int_{E}\div Y dx =1$. The existence of such a vector field is guaranteed by Lemma \[lagrange1\]. Let $\{\phi_t\}$ be the flow of $X$ and $\{\psi_s\}$ the flow of $Y$. For $(t,s) \in \mathbb R^2$ set $$A(t,s):=P(\psi_s(\phi_t(E)),\Omega)$$ and $$\mathcal V(t,s):=\vert\psi_s(\phi_t(E))\vert -\vert E \vert .$$ Then $\mathcal V \in C^1(\mathbb R^2)$, $\mathcal V(0,0)=0$ and $\partial_s\mathcal V(0,0)=\int_{E}\div Y dx =1$. The implicit function theorem ensures the existence of an open interval $I$ containing $0$ and a function $\sigma\in C^1(I)$ such that $$\mathcal V(t,\sigma(t))=0\;\;\;\text{for all $t \in I$ and}\;\;\; \sigma'(0)=-\frac{\partial_t\mathcal V(0,0)}{\partial_s\mathcal V(0,0)}.$$ Hence, $$t \mapsto \psi_{\sigma(t)}\circ\phi_t$$ is a 1-parameter family of $C^1$-diffeomorphisms of $\overline\Omega$ and thus defines a volume preserving variation of $E$ in $\Omega$.\
[*Step 2: Computing the first variation*.]{}\
The fact that $E$ is a stationary point in the class $\mathcal{A}_m$ together with Proposition \[1st variation of perimeter\] then implies $$\frac{d}{dt}\Big|_{t=0} A(t,\sigma(t)) = \int_{\partial_\Omega^*E} \textrm{div}_E X \,d\mathcal{H}^{n-1}= - 2\gamma\int_{\partial_\Omega^*E} \phi_E X \cdot \nu_E \,d\mathcal{H}^{n-1} - \int_{\partial_\Omega^*E} f X \cdot \nu_E \,d\mathcal{H}^{n-1}.$$ On the other hand, we have $$\begin{aligned}
\frac{d}{dt}\Big|_{t=0} A(t,\sigma(t)) & = \partial_t A(0,0)+\sigma'(0)\partial_s A(0,0) \\
& =\int_{\partial_\Omega^*E}\textrm{div}_E X \,d\mathcal{H}^{n-1} + \sigma'(0)\int_{\partial_\Omega^*E}\textrm{div}_E Y \,d\mathcal{H}^{n-1} \\
& =\int_{\partial_\Omega^*E}\textrm{div}_E X \,d\mathcal{H}^{n-1} - \frac{\int_{E}\div X dx}{\int_{E}\div Y dx}\int_{\partial_\Omega^*E}\textrm{div}_E Y \,d\mathcal{H}^{n-1} \\
& =\int_{\partial_\Omega^*E}\textrm{div}_E X \,d\mathcal{H}^{n-1} - \lambda \int_{\partial_\Omega^*E} X \cdot \nu_E \,d\mathcal{H}^{n-1},\end{aligned}$$ where $\lambda:=\int_{\partial_\Omega^*E}\textrm{div}_E Y \,d\mathcal{H}^{n-1}$, and where we used the divergence theorem on the last line. Therefore, setting $\vec H: = (2\gamma \phi_E +f -\lambda )\nu_E$, we have $$\int_{\partial_\Omega^*E}\div_E X \,d\mathcal{H}^{n-1}= -\int_{\partial_\Omega^*E} \vec H \cdot X \,d\mathcal{H}^{n-1}$$ for every vector field $X \in C_c^1(\mathbb R^n;\mathbb R^n)$ with $X \cdot \nu_\Omega =0$ on $\partial \Omega$.
Proof of the Theorem \[thm:regularity\] {#sec:proofofmain}
=======================================
Firstly, notice that the weak orthogonality of $\mu_E$ to $\partial \Omega$ is included in Proposition \[firstvariation\].\
\
We want to apply Allard’s regularity theorem (here Theorem \[thm:Allard\]) to establish the regularity of the reduced boundary $\partial_\Omega^*E$ . We verify the necessary hypotheses:
By De Giorgi’s structure theorem (here Theorem \[thm:DeGiorgi\]) and Remark \[rmk:DeGiorgi\] we have that $\mu_E$ is a multiplicity-$1$ rectifiable $(n-1)$-varifold. Moreover, for each point $x \in \partial_\Omega^*E$ we have that $\Theta(\mu_E,x) = 1$. Now, we choose any point $x_0 \in \partial_\Omega^*E$. W.l.o.g., after possibly translating and rotating the set $E$, we may assume that $x_0 = 0$ and $\nu_E(0)= - e_n$. We fix any $p>n-1$ and pick $\delta \in (0,1)$ to be as in the statement of Theorem \[thm:Allard\]. Since $\Theta(\mu_E,0) = 1$ we can find a small radius $\rho>0$ such that $$\label{verifyingassumptions}
B_\rho(0) \subset \subset \Omega\quad\text{and}\quad \alpha_{n-1}^{-1} \rho^{-(n-1)} \mu_E(B_\rho(0)) \leq 1 + \delta.$$ Proposition \[firstvariation\] implies that $\mu_E$ has generalized mean curvature $\vec H$ in $\Omega$, given by $$\vec H= -(\lambda - 2 \gamma \phi_E - f)\nu_E$$ for some constant $\lambda \in \mathbb R$. We have $$\|\vec H \|_{L^\infty(\mu_E\llcorner B_\rho(0))} \leq |\lambda|+2\gamma \sup_{B_\rho(0)}|\phi_E|+\sup_{B_\rho(0)}|f| =: c_0.$$ With Hölder’s inequality and the density bound above we get $$\begin{aligned}
\left(\int_{B_\rho(0)} |\vec H|^p\,d\mu_E \right)^\frac{1}{p} \rho^{1-\frac{n-1}{p} }
& \leq c_0\, (1 + \delta)^\frac{1}{p} \alpha_{n-1}^\frac{1}{p} \,\rho,\end{aligned}$$ which is less than $\delta$ provided $\rho \leq \delta c_0^{-1}\, 2^{-\frac{1}{p}} \alpha_{n-1}^{-\frac{1}{p}}$. Thus the hypotheses are satisfied and Theorem \[thm:Allard\] implies the existence of a function $u:B' := B_{\gamma\rho}^{n-1}(0) \to \mathbb R$ of class $C^{1,\alpha}$, $\alpha= 1- (n-1)/p$, such that $u(0) = 0$, $Du(0)=0$, and ${\rm spt}(\mu_E) \cap B_{\gamma \rho}(0) = {\rm graph}(u) \cap B_{\gamma \rho} (0)$. Moreover, our orientation assumption on $E$ implies that $\overline E \cap (B' \times I) = \overline{\textrm{epigraph}(u) }\cap (B' \times I)$ for some open interval $I$ containing $0$.
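In the Hölder step above only the $L^\infty$ bound on $\vec H$ and the density bound were used; written out, $$\left(\int_{B_\rho(0)} |\vec H|^p\,d\mu_E \right)^\frac{1}{p} \rho^{1-\frac{n-1}{p}} \leq c_0\, \mu_E(B_\rho(0))^\frac{1}{p}\, \rho^{1-\frac{n-1}{p}} \leq c_0 \big((1+\delta)\,\alpha_{n-1}\,\rho^{n-1}\big)^\frac{1}{p}\, \rho^{1-\frac{n-1}{p}} = c_0\, (1+\delta)^\frac{1}{p}\alpha_{n-1}^\frac{1}{p}\,\rho.$$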
Now let $X(x',z) = \zeta(z) \eta(x') e_n$, where $\eta \in C_c^1(B')$, $x' \in B'$, $e_n=(0,...,0,1)$ is the $n$-th standard basis vector, and where $\zeta \in C_c^\infty(\mathbb R)$ is a cut-off function such that $(\zeta\circ u)(x)=1$ for every $x \in B'$.
Then recalling that $\textrm{div}_{E} X = \div X - \nu_E \cdot D X \nu_E$, we have $\textrm{div}_E X = - (\nabla' \eta,0) \cdot \nu_E \nu_E^n$ where $\nu_E^n$ is the $n$-th component of the normal vector, and where $\nabla'$ is the gradient in $\mathbb{R}^{n-1}$. Since $\partial^* E \cap (B' \times I)= \partial E \cap (B' \times I)$ is the graph of $u$, and by our orientation assumption, we have that $\nu_E^n = \frac{-1}{\sqrt{1 + |\nabla' u|^2}}$. Using the area formula, the first variation identity above becomes $$\label{weakform}
-\int_{B'} \frac{\nabla'\eta \cdot \nabla' u }{ \sqrt{1+|\nabla' u|^2}} \,dx' =\int_{B'} ( \lambda -2\gamma \,\phi_E(x',u) - f(x',u) ) \,\eta\,dx'.$$ This is the weak form of the prescribed mean curvature equation. Since by Theorem \[thm:Allard\] the gradient of $u$ is locally uniformly bounded in $C^{0,\alpha}$ and since the right hand side of this equation is of class $C^{1,\alpha}$, interior Schauder estimates (see [@Gilbarg-Trudinger:2001]) and bootstrapping imply local $C^{3,\alpha}$ regularity of the function $u$. Thus the equation holds pointwise, and since $x_0 \in \partial_\Omega^*E$ was arbitrary we have $$H+ 2\gamma \phi_E +f=\lambda \textrm{ on } \partial_\Omega^*E,$$ where $H$ is the classical mean curvature of the surface $\partial_\Omega^*E$.
On the size of the singular set
-------------------------------
As a direct consequence of the monotonicity formula, see [@Simon:1983 Corollary 17.8], we have that $\Theta(\mu_E,x)$ exists and that $\Theta(\mu_E,x) \geq 1$ for *every* point $x \in {\rm spt}(\mu_E)=\overline{\partial E\cap \Omega}$. This allows us to estimate the size of the singular set $(\partial E\setminus\partial^*E)\cap\Omega$.
\[estimateondimofsingset1\] We have the following estimate $$\mathcal H^{n-1}((\partial E\setminus\partial^* E)\cap \Omega)=0.$$
W.l.o.g. we may assume that $\Omega$ is bounded. Otherwise, we may exhaust $\Omega$ with bounded sets. We know that $\mu_{E}=\mu_{E} \llcorner \partial_\Omega^*E$. Hence, $$\mu_{E}((\partial E\setminus\partial^* E)\cap \Omega)=0.$$ Since $\mu_{E}$ is a Radon measure, given an $\varepsilon>0$, there exists an open set $U_{\varepsilon} \subset \Omega$ containing $(\partial E\setminus\partial^* E)\cap \Omega$ such that $$\mu_{E}(U_{\varepsilon})\leq\varepsilon.$$ Now, by our w.l.o.g.-assumptions: $\overline{\partial_\Omega^*E}=\partial E\cap \overline \Omega$, and so for any fixed $\delta>0$ $$(\partial E\setminus\partial^* E)\cap \Omega\subset \bigcup\mathcal F,$$ where $\mathcal F:=\{ B_{\rho}(x)\subset\subset U_{\varepsilon}: x\in\partial_\Omega^*E\;\;\text{and}\;\;\rho\leq \delta\}$. By Vitali’s covering theorem there exists a countable family $\mathcal G \equiv \{B_{\rho_{j}}(x_j):j\in \mathbb N\}$ of disjoint balls in $\mathcal F$ such that $$\bigcup \mathcal F \subset \bigcup_{j=1}^\infty\overline B_{5\rho_j}(x_j).$$ Hence, $$\sum_{j=1}^\infty\mu_{E}( B_{\rho_j}(x_j))\leq \mu_{E}(U_{\varepsilon})\leq \varepsilon.$$ On the other hand, by the monotonicity formula [@Simon:1983 Theorem 17.7], $$\begin{aligned}
\mu_{E}( B_{\rho_j}(x_j))\geq \frac{\alpha_{n-1}}{2}\rho_j^{n-1},\end{aligned}$$ if $\delta$ is small enough to guarantee that $$\alpha_{n-1}^{\frac{n-1}{p}}\left(1- 2^{-\frac{1}{p}} \right) \geq \frac{\| \vec H\|_{L^p}}{p-(n-1)}\delta^{1-\frac{n-1}{p}} ,$$ for some $p>n-1$. Therefore, $$\alpha_{n-1}\sum_{j=1}^\infty \rho_j^{n-1}\leq 2\varepsilon,$$ which yields $$\begin{aligned}
\mathcal H_{5\delta}^{n-1}((\partial E\setminus\partial^* E)\cap \Omega) & \leq \alpha_{n-1}\sum_{j=1}^\infty(5\rho_j)^{n-1} \leq 5^{n} \varepsilon,\end{aligned}$$ for every $\varepsilon>0$ and every sufficiently small $\delta>0$. Letting $\varepsilon,\delta\searrow 0$ we conclude $$\mathcal H^{n-1}((\partial E\setminus\partial^* E)\cap \Omega) =0.$$
It is an interesting question whether the estimate on the Hausdorff dimension of the singular set can be improved under the additional assumption of stability. Even without the nonlocal term this is an open problem in the class $\mathcal A_m$. For stable minimal hypersurfaces, Wickramasekera [@Wickramasekera:2014] recently showed that the singular set has Hausdorff dimension at most $n-8$.
Boundary regularity of local minimizers {#sec:bdryregularity}
=======================================
In this section we outline how Theorem \[thm:regularity\] can be used to prove boundary regularity, that is, regularity near points $x\in \partial \Omega\cap \partial E$, for local minimizers $E$ of $\mathcal E_\gamma$ in $\mathcal A$ or $\mathcal A_m$. This has already been established in [@Julin-Pisante:2013] but we include it for the convenience of the reader.
As mentioned earlier, the interior regularity for local minimizers of $\mathcal E_\gamma$ was proved by Sternberg and Topaloglu [@Sternberg-Topaloglu:2011 Proposition 2.1]. The authors prove that local minimizers of $\mathcal E_\gamma$ are $(K,\varepsilon)$-minimal and can thus appeal to the standard methods (cf. [@Massari:1974]). We include a slightly different proof.
\[def:locmin\] We say that $E \in\mathcal A$ or $\mathcal A_m$ is a *local minimizer* of $\mathcal E_\gamma$ in $\mathcal A$ or $\mathcal A_m$ (at scale $R$) if for all balls $B_R(x)\subset\mathbb R^n$ we have that $$\label{locmin}
\mathcal E_\gamma(E) \leq \mathcal E_\gamma(F)\quad\text{for all $F\in \mathcal A$ or $\mathcal A_m$, respectively, with $E\Delta F\subset\subset B_R(x)$}.$$
\[intextpoints\] Theorem \[thm:regularity\] implies that for any ball $B_\rho(x)\subset \mathbb R^n$ with $0<|E\cap B_\rho(x)|<|\Omega \cap B_\rho(x)|$ we can find exterior and interior points, i.e. there exist two balls $B_r(a),B_r(b)\subset\subset \Omega \cap B_\rho(x)$ with $\bigcup_{t\in [0,1]} B_r(ta+(1-t)b)\subset\subset \Omega$ such that $$|B_r(a)\setminus E|=|E \cap B_r(b)|=0.$$
We are now ready to prove the following
\[prop:quasimin\] Let $E \in\mathcal A$ or $\mathcal A_m$ be a local minimizer of $\mathcal E_\gamma$ in $\mathcal A$ or $\mathcal A_m$ at scale $2R_0 >0$, and let $0<|E\cap B_{R_0}(x_0)|<|\Omega \cap B_{R_0}(x_0)|$ for some ball $B_{R_0}(x_0) \subset \mathbb R^n$. Then $E$ is $(K,\varepsilon)$-minimal in $B_R(x_0)$ for some $R\leq R_0$, that is for every $B_\rho \subset\subset B_R(x_0)$ $$P(E,\Omega )\leq P(F,\Omega) +K\rho^{n} \quad\text{for all $F$ such that $F\Delta E \subset\subset B_\rho$.}$$
Let $B_\rho \subset\subset B_R(x_0)$ and let $F$ be such that $F\Delta E \subset\subset B_\rho$. We only give a proof for local minimizers in $\mathcal A$. (For the case with a volume constraint one may use Remark \[intextpoints\] to adjust the volume of the competitor $F$, which gives us the additional term $\frac{c(n)}{r}\rho^n$ on the right hand side of the equation below. We refer to [@Gonzalez-Massari-Tamanini:1983] for details. Alternatively, one can proceed as in [@Sternberg-Topaloglu:2011] and use a result of Giusti [@Giusti:1981 Lemma 2.1] to balance out the volume constraint.)
By the local minimality of $E$ we have that $$\begin{aligned}
\label{minprop}
P(E,\Omega )
&\leq P(F,\Omega) +\gamma \int_F\int_FG(x,y)\,dx\,dy -\gamma \int_E\int_E G(x,y)\,dx\,dy \\
&\quad+ \int_{ F}f\,dx- \int_{E}f\,dx.\nonumber\end{aligned}$$ The last two terms can be estimated by $ \int_{\Omega \cap B_\rho }f\,dx \leq c(n) \| f\|_{L^\infty}\rho^n$, cf. [@Massari:1974]. It remains to estimate the difference of the nonlocal terms. Setting $A:=\Omega \cap B_R(x_0)$, we estimate for any $p>n$ $$\begin{aligned}
& \int_F\int_FG(x,y)\,dx\,dy - \int_E\int_E G(x,y)\,dx\,dy \\
&\quad \leq \int_F\left(\int_{F\cap B_\rho} G(x,y)\,dx - \int_{E\cap B_\rho}G(x,y)\,dx \right)\,dy+\int_{E\Delta F}|\phi_E|\,dx \\
&\quad \leq \int_{E\cup (\Omega \cap B_\rho)}\left(\int_{\Omega \cap B_\rho} |\chi_{F\cap B_\rho}(x)-\chi_{E\cap B_\rho}(x)||G(x,y)|\,dx \right)\,dy+\int_{B_\rho}|\phi_E|\,dx \\
&\quad \leq \|G\|_{L^1(A \times A) } |B_\rho| +\left( \int_{A}|\phi_E|^p\,dx\right)^\frac{1}{p} |B_\rho|^{1-\frac{1}{p}}\\
&\quad \leq c(n,p,G,E) \rho^{n-1+(1-\frac{n}{p})}.\end{aligned}$$ The claim follows with $\varepsilon= 1-\frac{n}{p}$ for any $p>n$ and $K=c(n,p,G,E)$ (or $K=c(n,p,G,E,r)$ in case of a volume constraint with $r$ as in Remark \[intextpoints\]).
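For completeness, the power counting behind the last display is the following (assuming $\rho\leq 1$ and writing $\alpha_n$ for the volume of the unit ball in $\mathbb R^n$): $$|B_\rho| = \alpha_n\rho^{n}\leq \alpha_n\,\rho^{\,n-\frac{n}{p}}, \qquad |B_\rho|^{1-\frac{1}{p}} = \alpha_n^{1-\frac{1}{p}}\rho^{\,n-\frac{n}{p}}, \qquad n-\tfrac{n}{p} = (n-1)+\Big(1-\tfrac{n}{p}\Big),$$ so both terms are bounded by a constant depending on $n,p,G,E$ times $\rho^{\,n-1+(1-\frac{n}{p})}$.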
Theorem \[thm:regularity\] and Proposition \[prop:quasimin\], in conjunction with the results of Grüter [@Grueter:1987] on boundary regularity of $(K,\varepsilon)$-minimizers with weakly orthogonal surface measure, immediately imply the following
Let $E\in \mathcal A$ or $\mathcal A_m$ be a local minimizer of $\mathcal E_\gamma$ in $\mathcal A$ or $\mathcal A_m$. Then
1. ${\rm reg}(\mu_E)$ is of class $C^{1,\alpha}$ for all $\alpha \in (0,1)$, ${\rm reg}(\mu_E)\cap\Omega$ is of class $C^{3,\alpha}$ for all $\alpha \in (0,1)$ and has mean curvature $ H = \lambda- 2\gamma \phi_E -f$ for some constant $\lambda \in \mathbb R$. If $x\in {\rm reg}(\mu_E)\cap \partial \Omega$ then ${\rm reg}(\mu_E)$ and $\partial \Omega$ intersect orthogonally in a neighborhood of $x$.
2. $\mathcal H^s({\rm sing}(\mu_E) ) = 0\quad\text{for all $s>n-8$}.$
Here ${\rm reg}(\mu_E)$ is defined as the set of all points in $\partial E \cap \overline \Omega ={\rm spt}(\mu_E)$ such that one of the following alternatives holds.
1. If $x\in {\rm reg}(\mu_E) \cap \Omega$ there exists an oriented $C^1$-hypersurface $M_x$ such that $\mu_E = \mathcal H^{n-1}\llcorner M_x$ and $\nu_E= \nu_{M_x}$ in a neighborhood of $x$.
2. If $x\in {\rm reg}(\mu_E) \cap \partial\Omega$ there exists an oriented $C^1$-hypersurface $M_x'$ with boundary inside $\partial \Omega$ such that $\mu_E = \mathcal H^{n-1}\llcorner M_x'$ and $\nu_E= \nu_{M_x'}$ in a neighborhood of $x$.
Finally, ${\rm sing}(\mu_E):= \partial E \cap \overline \Omega\setminus {\rm reg}(\mu_E)$.
In case $\Omega$ is of class $C^{k,\alpha}$ for $k=2,3$ we get that ${\rm reg}(\mu_E)$ is of class $C^{k,\alpha}$ (up to the boundary).
**Acknowledgments** The research of the first-named author was supported by the Herchel Smith Research Fellowship at the University of Cambridge and NSF grant DMS-0807347. The first-named author would like to thank Theodora Bourni and Robert Haslhofer for helpful discussions throughout the course of this work.
[^1]: [email protected], DPMMS University of Cambridge, Cambridge (UK)
[^2]: [email protected], Albert Einstein Institute, Potsdam-Golm
[^3]: by our convention the mean curvature is chosen such that the boundary of the unit ball in $\mathbb R^n$ has positive mean curvature equal to $n-1$
|
---
abstract: 'Introduced by Albertson et al. [@albertson], the distinguishing number $D(G)$ of a graph $G$ is the least integer $r$ such that there is an $r$-labeling of the vertices of $G$ that is not preserved by any nontrivial automorphism of $G$. Most graphs studied in the literature have distinguishing number 2; the exceptions, such as complete graphs, complete multipartite graphs or Cartesian products of complete graphs, have distinguishing numbers depending on $n$. In this paper, we study circulant graphs of order $n$ where the adjacency is defined using a symmetric subset $A$ of $\mathbb{Z}_n$, called the generator. We give a construction of a family of circulant graphs of order $n$ and we show that this family contains graphs with distinct distinguishing numbers, and that these values do not depend on $n$.'
author:
- 'Sylvain GRAVIER[^1], Kahina Meslem [^2], Souad SLIMANI'
title: Distinguishing Number for Some Circulant Graphs
---
Introduction {#sec:in}
============
In 1979, F. Rubin [@rudin] proposed a problem in the Journal of Recreational Mathematics which introduced the concept of symmetry breaking in graphs. Albertson et al. [@albertson] studied the distinguishing number of a graph, defined as the minimum number of labels needed for a labeling of the vertex set that is preserved by no nontrivial automorphism of the graph. The distinguishing number has received wide attention in recent years: many articles deal with this invariant for particular classes of graphs, for instance trees [@tree], hypercubes [@Bogstad] and product graphs [@klav_power] [@Imrich_cartes_power] [@klav_cliques] [@Fisher_1], and interesting algebraic properties of the distinguishing number were given in [@Potanka] [@tym] and [@Z]. Most non-rigid graphs (i.e. graphs having at least one nontrivial automorphism) need just two labels to destroy every nontrivial automorphism. In fact, paths $P_n$ $(n>1)$, cycles $C_n$ $(n>5)$, hypercubes $Q_n$ $(n>3)$, the $r$-fold $(r>3)$ Cartesian power $G^r$ of a graph $G$ of order $n>3$, and circulant graphs of order $n$ generated by $\{\pm 1,\pm 2,\dots \pm k\}$ [@gravier] ($n\geq2k+3$) all have distinguishing number 2. However, complete graphs, complete multipartite graphs [@chrom] and Cartesian products of complete graphs (see [@klav_cliques] [@Fisher_1] [@Fisher_2]) are among the few classes with a large distinguishing number; for these, the invariant increases with the order of the graph. In order to control the distinguishing number of graphs of a given order $n$, we build regular graphs $C(m,p)$ of order $mp$ whose adjacency is described by a generator $A$ $(A \subset \mathbb{Z}_{m.p})$. These graphs are generated by $A=\{(p-1)+ r.p, (p+1)+ r.p$ : $0\leq r \leq m-1\}$ for all $n=m.p \geq 3$. In fact, the motivation of this paper is to answer the following question, denoted ${\mathcal{(Q)}}$:\
“Given an ordered sequence of distinct integers $d_1,d_2,\dots,d_r$ in $\mathbb{N}^* \setminus \{1\}$, do there exist an integer $n$ and $r$ graphs $G_i$ $(1\leq i\leq r)$ such that $D(G_i)=d_i$ for all $i=1,\dots,r$ and $n$ is the common order of the $r$ graphs?"\
In the following proposition, we give the answer to this question:
\[disconnected\] Given an ordered sequence of $r$ distinct integers $d_1,d_2,\dots,d_r$ with $r\geq2$ and $d_i\geq 2$ for $i=1,\dots,r$, there exist $r$ graphs $G_1,G_2,\dots,G_r$ of order $n$ such that $G_i$ contains a clique $K_{d_i}$ and $D(G_i)=d_i$ for all $1\leq i\leq r$.
Set $n=d_r$. For the integer $d_r$, we take $G_r \simeq K_{d_r}$, so that $D(G_r)=d_r$.\
For the other integers, we consider the $(r-1)$ disconnected graphs $G_i$ having two connected components $C$ and $C'$, where $C\simeq K_{d_i}$ and $C'$ is a path $P_{n-d_i}$, for all $i=1,\dots,(r-1)$.\
Observe that, when $d_1\neq 2$ or $n= d_r\neq4$, the connected components $C$ and $C'$ cannot be isomorphic. Consequently, every automorphism $\delta$ of a graph $G_i$ maps each connected component to itself, for all $1\leq i\leq r-1$. Moreover, $D(G_i)=\max (D(C),D(C'))=D(C)=d_i$ for all $1\leq i\leq r-1$.\
If $d_1= 2$ and $n= d_r=4$, we consider the same graphs except for $G_1$, which we take to be $G_1\simeq P_4$. Then $D(G_1)=2=d_1$.
The graphs of Proposition \[disconnected\] are not completely satisfying since they are not connected. Furthermore, they give no additional information about graphs having high distinguishing number, since the construction only uses cliques. Our purpose is therefore to construct connected graphs with structural properties that answer question ${\mathcal{(Q)}}$.
\[main\] Given an ordered sequence of $r$ distinct integers $d_1,d_2,\dots,d_r$ with $r\geq2$ and $d_i\geq 2$ for $i=1,\dots,r$, there exist $r$ connected circulant graphs $G_1,G_2,\dots,G_r$ of order $n$ such that $D(G_i)=d_i$.
In Section 1, basic definitions and preliminary results used in this paper are given. In Section 2, we define the circulant graphs $C(m,p)$, $n=m.p\geq 3$, and provide structural properties of this class of graphs. These properties are then used to determine the associated distinguishing number in Section 3, where we also give the proof of Theorem \[main\]. Finally, in Section 4, we conclude with some remarks and a possible improvement of the answer to question ${\mathcal{(Q)}}$.
Definitions and Preliminary Results {#sec:1}
=====================================
We only consider finite, simple, loopless, undirected graphs $G=(V ,E)$ where $V$ is the vertex set and $E$ is the edge set. The *complement* of $G$ is the simple graph $\overline{G}=(V,\overline{E})$ with the same vertex set $V$ as $G$, in which two vertices $u$ and $v$ are adjacent if and only if they are not adjacent in $G$. The *neighborhood* of a vertex $u$, denoted by $N(u)$, consists of all the vertices $v$ which are adjacent to $u$. A *complete graph* of order $n$, denoted $K_n$, is a graph on $n$ vertices in which any two distinct vertices are adjacent. A *path* on $n$ vertices, denoted $P_n$, consists of a sequence of $n$ distinct vertices $v_1,v_2,\dots,v_n$ and the $n-1$ edges $v_iv_{i+1}$, $1 \leq i \leq n - 1$. A path joining two distinct vertices $u$ and $v$ in $G$ is called a $uv$-path. A *cycle* on $n$ vertices, denoted $C_n$, is obtained from a path on $n$ distinct vertices $v_1, v_2, \dots, v_n$ by adding the edge $v_1v_n$. For a graph $G$, the *distance* $d_G(u, v)$ between vertices $u$ and $v$ is defined as the number of edges on a shortest $uv$-path.\
Given a subset $A \subset \mathbb{Z}_n$ with $0 \not \in A$ and such that $-a\in A$ for all $a\in A$, the associated *circulant graph* is the graph on the $n$ vertices $0,1,\dots,n-1$ in which two vertices $i$ and $j$ are adjacent if $j-i$ modulo $n$ is in $A$.
An *automorphism* (or *symmetry*) of a graph $G=(V,E)$ is a permutation $\sigma$ of the vertices of $G$ preserving adjacency, i.e. if $xy \in E$, then $\sigma(x)\sigma(y) \in E$. The set of all automorphisms of $G$, denoted $Aut(G)$, forms a group. A labeling of the vertices of a graph $G$, $c: V(G) \rightarrow \{1,2,\dots, r\}$, is said to be $r$-*distinguishing* if $\forall \sigma \in Aut (G)\setminus \{Id_G\}$: $c \neq c \circ \sigma$. That means that for each automorphism $\sigma \neq id $ there exists a vertex $v\in V$ such that $c(v)\neq c(\sigma(v))$. The *distinguishing number* of a graph $G$, denoted by $D(G)$, is the smallest integer $r$ such that $G$ has an $r$-distinguishing labeling. Since $Aut(G)=Aut(\overline{G})$, we have $D(G)=D(\overline{G})$. The distinguishing number of a complete graph of order $n$ is equal to $n$. The distinguishing number of complete multipartite graphs is given in the following theorem:
[@chrom]\[multipartite\] Let $K_{a_1^{j_1} ,a_2^{j_2},\dots,a_r^{j_r}}$ denote the complete multipartite graph that has $j_i$ partite sets of size $a_i$ for $i = 1, 2,\dots,r$ and $a_1 > a_2 > \dots > a_r$. Then $D(K_{a_1^{j_1} ,a_2^{j_2},\dots,a_r^{j_r}})= \min \{p :\binom{p}{a_i} \geqslant j_i$ for all $i \}$
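For instance, for the balanced complete bipartite and tripartite graphs that will appear below (one part size $a_1=m$ repeated $j_1=2$ or $j_1=3$ times), the formula gives $$D(K_{m,m})=\min\Big\{p:\binom{p}{m}\geq 2\Big\}=m+1, \qquad D(K_{m,m,m})=\min\Big\{p:\binom{p}{m}\geq 3\Big\}=m+1\quad(m\geq 2),$$ since $\binom{m}{m}=1$ and $\binom{m+1}{m}=m+1$.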
Let us introduce the concept of modules, which is useful for investigating the distinguishing number of graphs. A *module* in the graph $G$ is a subset $M$ of vertices which share the same neighborhood outside $M$, i.e. for all $y \in V \setminus M$: $M \subseteq N(y)$ or $xy \not \in E$ for all $x\in M$. A trivial module in a graph $G$ is either the set $V$ or any singleton vertex. A module $M$ of $G$ is said to be *maximal* in $G$ if each non trivial module $M'$ in $G$ containing $M$ is reduced to $M$. The following lemma shows how modules can help us to estimate the value of the distinguishing number of a graph:
\[module\] Let $G$ be a graph and $M$ a module of $G$. Then, $D(G)\geq D(M)$
Let $c$ be an $r$-labeling such that $r<D(M)$. Since $r<D(M)$, there exists a nontrivial automorphism $\delta\mid_{M}$ of $M$ such that $c(x)=c(\delta\mid_{M}(x))$ for all $x \in M$, i.e. the restriction of $c$ to $M$ is not distinguishing. Now, let $\delta$ be the extension of $\delta \mid_{M}$ to $G$ with $\delta(x)=x$ $\forall x \not \in M$ and $\delta(x)=\delta\mid_M(x)$ otherwise. We get $c(x)=c(\delta(x))$ for all $x \in G$. Moreover, $\delta \neq id$ since $\delta\mid_{M} \neq id\mid_{M}$.
Circulant Graphs $C(m,p)$ {#sec:2}
=========================
In this section, we study the distinguishing number of the circulant graphs $C(m,p)$ of order $n=m.p\geq3$ with $m\geqslant 1$ and $p\geqslant 2$. A vertex $i$ is adjacent to $j$ in $C(m,p)$ iff $j-i$ modulo $n$ belongs to $A=\{p-1+r.p, p+1+ r.p$, $0\leq r \leq m-1\}$ (See Fig. \[weakly\]). When $p>1$, these graphs are circulant since for all $0 \leq r\leq m-1$ the symmetric of $p-1+r.p$ is $1+p+(m-r-2)p$ which belongs to $A$, and $p>1$ implies that $0\notin A$. By construction, $C(m,1)$ is the clique $K_m$. Let us specify some other particular values of $p$ and $m$: $C(1,p)$ is the cycle $C_p$. Also we have: $C(m,2)=K_{m,m}$ and $C(m,3)=K_{m,m,m}$. By Theorem \[multipartite\], $D(C(m,2))=D(C(m,3))=m+1$. Moreover, $D(C(1,p))=2$ for $p\geq6$.
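As a quick illustration (this is our own brute-force sketch, not part of the paper, with function names of our choosing; it is feasible only for very small $n=mp$), the following code computes $D(C(m,p))$ directly from the definitions and reproduces, e.g., $D(C(2,3))=3$:

```python
# Minimal brute-force check of D(C(m,p)) for tiny cases (illustrative sketch only).
from itertools import permutations, product

def circulant_Cmp(m, p):
    """Adjacency sets of C(m,p): i ~ j iff (j-i) mod n lies in the generator A."""
    n = m * p
    A = {(p - 1 + r * p) % n for r in range(m)} | {(p + 1 + r * p) % n for r in range(m)}
    return n, [{j for j in range(n) if (j - i) % n in A} for i in range(n)]

def automorphisms(n, adj):
    """All adjacency-preserving permutations (brute force, n! checks)."""
    auts = []
    for perm in permutations(range(n)):
        if all((perm[j] in adj[perm[i]]) == (j in adj[i])
               for i in range(n) for j in range(i + 1, n)):
            auts.append(perm)
    return auts

def distinguishing_number(n, adj):
    auts = [a for a in automorphisms(n, adj) if a != tuple(range(n))]
    r = 1
    while True:
        for c in product(range(r), repeat=n):
            # c is r-distinguishing iff every nontrivial automorphism moves some label
            if all(any(c[a[v]] != c[v] for v in range(n)) for a in auts):
                return r
        r += 1

n, adj = circulant_Cmp(2, 3)           # C(2,3) = K_{2,2,2}
print(distinguishing_number(n, adj))   # expected: m+1 = 3
```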
\[proper\] The vertex set of $C(m,p)$ ($m\geqslant 2$ and $p\geqslant 2$) can be partitioned into $p$ stable modules $M_i=\{i+r.p:$ $ 0\leq r \leq m-1 \}$ of size $m$ for $i=0,\dots,p-1$.
Given two distinct vertices $a, b \in M_i$ for $i=0,\dots,p-1$, we have $a-b\equiv rp[n]$ for some $0<r \leqslant m-1$, hence $a-b \notin A$, which proves that each $M_i$ induces a stable set.
Moreover, it is clear that $\{M_i \}_{i=0,\dots, p-1}$ forms a partition of vertex set of $C(m,p)$.\
Let us prove that $M_i$ defines a module. For this, let $a=i+r_{a}\cdot p$ and $b=i+r_{b}\cdot p$ be two distinct vertices of a given stable set $M_i$.\
Let $c \in V\setminus M_i$ be such that $ac$ is an edge, and write $c=j+r_{c}\cdot p$. Let
$ r_{bc}=\left \{ \begin{array}{ll}
r_b-r_c & \mbox{if } r_b> r_c \\
m+(r_b - r_c) & \mbox{else }
\end{array}
\right.
$ $r_{ac}= \left \{
\begin{array}{ll}
r_a-r_c & \mbox{if } r_a> r_c \\
m+(r_a - r_c) &\mbox{else}
\end{array}
\right.
$
be the two integers such that $b-c\equiv (i-j)+r_{bc}\cdot p[n]$ and $a-c\equiv (i-j)+r_{ac}\cdot p[n]$ (with $0 \leqslant r_{ac} \leqslant m-1$ and $0 \leqslant r_{bc} \leqslant m-1$).\
Since $a-c$ is in $A$, there is some integer $k$ with $0\leqslant k\leqslant r_{ac}$ such that $i-j+kp=p-1$ (or $=p+1$).\
If $k\leqslant r_{bc}$, we obtain $b-c\equiv i-j+kp+(r_{bc}-k)\cdot p[n]$.\
Then $b-c \equiv p-1+(r_{bc}-k)\cdot p[n]$ (or $\equiv p+1+(r_{bc}-k)\cdot p[n]$). We deduce that $b-c \in A$ since $0\leqslant k \leqslant m-1$.\
Otherwise, we have $r_{bc} < k \leqslant m+r_{bc}$ and $b-c \equiv i-j+r_{bc}\cdot p[n]$. Then $b-c \equiv i-j+(m+r_{bc})\cdot p[n]$. We get $b-c \equiv i-j +kp+(m+r_{bc}-k)\cdot p[n]$ which belongs to $A$ since $0\leqslant m+r_{bc}-k \leqslant m-1$.\
(Figure \[weakly\]: the circulant graphs $C(2,3)$ and $C(3,5)$.)
Since each $M_i$ (for all $0\leqslant i\leqslant p-1$) is a stable set then, by definition of a module, we have:
\[permutation\] Any permutation of elements of $M_i$ is an automorphism of $G$ for all $0\leqslant i \leqslant p-1$.
By Lemma \[module\] and Property \[proper\], we have $D(C(m,p))\geqslant m$. We will improve this bound:
\[principal\] For all $p \geq 2$ and for all $m \geq2$, $D(C(m,p)) = m+1$ if $p\neq 4$.
Proof of Theorem \[main\] and Theorem \[principal\] {#sec:3}
===================================================
In this section, we give the proof of Theorem \[principal\] in the first step, while the second step is spent to give the proof of the Theorem \[main\]
\[borne\] For all $p \geq 2$ and for all $m \geq2$, $D(C(m,p)) > m$.
If $p=2$ (resp. $p=3)$ then $C(m,2)\cong K_{m,m}$ (resp. $C(m,3)\cong K_{m,m,m}$). According to Theorem \[multipartite\], we have $D(C(m,p))>m$. Let $C(m,p)$ be the circulant graph generated by $A=\{p-1+rp, p+1+rp: 0\leqslant r \leqslant m-1\}$.
Let us suppose that $p>3$. Since the modules $M_i$ $(i=0,\dots, p-1)$ are stable sets of size $m$, by Lemma \[module\] we have $D(C(m,p))\geq m$.\
Let $c:V(C(m,p))\rightarrow \{1,2,\dots,m\}$ be an $m$-labeling of $C(m,p)$ $(m \geq 2)$; we prove that $c$ is not $m$-distinguishing.\
By way of contradiction, assume that $c$ is $m$-distinguishing.
For all distinct vertices $v$, $w$ in a given module $M_{i_0}$ with $i_0\in \{0,1,\dots,p-1\}$ we have $c(v)\neq c(w)$: otherwise the transposition $\tau$ of $v$ and $w$ satisfies $c=c \circ \tau$, a contradiction. This means that every label appears in each module $M_i$.
Let $P_j$ ($1\leqslant j \leqslant m$) be the index set $\{(j-1)p+i : i \in \{0, \dots, p-1\} \}$.
Let $v\in M_i$ ($0\leqslant i\leqslant p-1$); then $v=i+rp$ where $0\leqslant r \leqslant m-1$. Consider now the mapping $\delta_i$ with $i=0,\dots,p-1$ defined as follows: $\delta_i: V \rightarrow V$ such that $\delta_i(v)=(c(v)-1)p+i$ if $v \in M_i$ and $\delta_i(v)=v$ otherwise. By Property \[permutation\], $\delta_i$ defines an automorphism of $G$.\
Let $\delta = \delta_0 \circ \dots \circ \delta_{p-1}$ be an automorphism of $G$.
Let $\psi$ be the mapping defined as follows: $\psi: V \rightarrow V$ such that $\psi(i+rp)= p-(i+1) + rp$. Let us prove that $\psi$ is an automorphism of $G$.
Let $a=i+rp$ and $b=j+r'p$ be two adjacent vertices; then $b-a=j-i+(r'-r)p \in A$. We have $\psi(b)- \psi(a) = i-j +(r'-r)p$, which belongs to $A$. Thus $\psi$ is an automorphism of $G$.
We now check that $\delta ^{-1} \circ \psi \circ \delta$ is a nontrivial automorphism of $G$ preserving the labeling $c$. See Fig. \[composition\].
First, $\delta ^{-1} \circ \psi \circ \delta$ is clearly an automorphism because it is a composition of automorphisms.
Since $\delta ^{-1} \circ \psi \circ \delta(0) = \delta ^{-1} \circ \psi ( (c(0)-1)p +0)
= \delta ^{-1} ( (c(0)-1)p+(p-1)) = u$ with $u \in M_{p-1}$ and $c(u)=c(0)$, we have $u\neq 0$, since $0 \in M_0$, $M_0 \neq M_{p-1}$ and $p> 1$. Thus $\delta ^{-1} \circ \psi \circ \delta$ is not the trivial automorphism.
(Figure \[composition\]: panels (a)–(d) illustrating the successive actions of $\delta$, $\psi$ and $\delta^{-1}$ on the labeled vertices.)
To complete the proof, it is enough to show that $c(u)=c(\delta ^{-1} \circ \psi \circ \delta(u))$ for every vertex $u$.\
Let $u=i+rp$ then we have $\delta ^{-1} \circ \psi \circ \delta (u) =\delta ^{-1} \circ \psi ((c(u)-1)p +i)=
\delta ^{-1} ((c(u)-1)p+ p-(i+1)) =v$ such that $v\in M_{p-(i+1)}$ and $c(v)=c(u)$.
Then $\delta ^{-1} \circ \psi \circ \delta$ preserves the labeling.
The following result gives the exact value of $D(C(m,p))$
\[D(G)\] For all $p\geq 2$ and $p\neq4$ and for all $m \geq 2$ : $D(C(m,p)) \leqslant m+1$
If $p\in \{2,3\}$, the statement holds by Theorem \[multipartite\]. Consider the $(m+1)$-labeling $c$ defined as follows (see Fig. \[m+1color\]):
(Figure \[m+1color\]: the $(m+1)$-labeling $c$ of $C(m,p)$, with each vertex shown together with its label.)
$$c(v)= \left\{
\begin{array}{ll}
1 & \hspace{7mm} 0 \leqslant v \leqslant \lfloor \frac{p}{2}\rfloor \hspace{2mm} \text{and}\hspace{2mm} v=2p-1 \\
2 & \hspace{7mm} \lfloor \frac{p}{2}\rfloor < v \leqslant p-1 \\
j+1 & \hspace{7mm} v\in P_j \hspace{2mm} \text{and} \hspace{2mm}2\leqslant j\leqslant m \hspace{2mm}\text{and} \hspace{2mm} v\neq 2p-1
\end{array}
\right.$$
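For instance, for $m=2$ and $p=5$ (so $n=10$, $P_1=\{0,\dots,4\}$ and $P_2=\{5,\dots,9\}$) this labeling reads $$c(0)=c(1)=c(2)=1,\qquad c(3)=c(4)=2,\qquad c(5)=c(6)=c(7)=c(8)=3,\qquad c(9)=1,$$ so exactly $m+1=3$ labels are used.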
Suppose that there exists an automorphism $\delta$ preserving this labeling; we prove that $\delta$ is trivial.
Since $p>4$, $0$ is the unique vertex labeled $1$ whose neighborhood carries the sequence of labels $(1,1,2,3,4,4,\dots,m+1,m+1)$. Thus $\delta(0)=0$.
We will use the following claim:
\[distance\] For each vertex $i$ in $C(m,p)$ where $0\leq i\leq p-1$, we have:
$$d(0,i)= \left\{
\begin{array}{ll}
i & \hspace{5mm} 1 \leqslant i \leqslant \lfloor \frac{p}{2}\rfloor \\
p-i & \hspace{5mm} \lfloor \frac{p}{2} \rfloor < i \leqslant p-1
\end{array}
\right.$$
First observe that for every pair of vertices $u$ and $v$ in the same module $M$ and every $z\in V\setminus M$, we have $d(u,z)=d(v,z)$ and $d(u,v)=2$.
Now, if we contract each module $M_i$ of $C(m,p)$, then we get a cycle on $p$ vertices which implies the claim.
Let us prove that each vertex labeled $1$ is fixed by the automorphism $\delta$:\
Consider the following table, which describes the sequence of labels in the neighborhood of a vertex $u$:
  $u$                                        $c(u)$   $c(N(u))$
  ------------------------------------------ -------- --------------------------------------
  $0$                                        $1$      $1,1,2,3,4,4, \dots, m+1,m+1$
  $0 < i < \lfloor \frac{p}{2}\rfloor$       $1$      $1,1,3,3,4,4, \dots, m+1,m+1$
  $\lfloor \frac{p}{2}\rfloor$               $1$      $1,2,3,3,4,4, \dots, m+1,m+1$
  $\lfloor \frac{p}{2}\rfloor < j < p-1$     $2$      $2,2,3,3,4,4, \dots, m+1,m+1$
  $p-1$                                      $2$      $1,2,3,3,4,4, \dots, m+1,m+1$
  $2p-1$                                     $1$      $1,2,3,3,4,4, \dots, m+1,m+1$

  : The sequence of labels occurring in the neighborhood of vertices.
For each $i$ with $0< i< \lfloor \frac{p}{2} \rfloor$, the sequence of labels occurring in the neighborhood of the vertex $i$ is $(1,1,3,3, \dots, m+1, m+1)$. Moreover, for any two distinct vertices $u$ and $v$ with $0< u,v< \lfloor \frac{p}{2} \rfloor$ we have $d(u,0)\neq d(v,0)$. Since $\delta(0)=0$, we get $\delta (u)=u$ and $\delta (v)=v$; in general, $\delta(i)=i$ for every vertex $i$ with $0< i< \lfloor \frac{p}{2} \rfloor$.\
Moreover, the sequence of labels in the neighborhood of both $2p-1$ and $\lfloor \frac{p}{2}\rfloor$ is $\{1, 2, 3, 3, 4, 4, \dots, m+1, m+1 \}$. Since $d(\lfloor \frac{p}{2} \rfloor,0) > d(2p-1,0)=1$, we get $\delta(2p-1)=2p-1$ and $\delta(\lfloor \frac{p}{2} \rfloor)= \lfloor \frac{p}{2} \rfloor$.
Now observe that, by the previous claim, for any two distinct vertices $u$ and $v$ labeled $2$ we have $d(u,0)\neq d(v,0)$. Hence for any vertex $u$ such that $c(u)=2$, we have $\delta(u)=u$.
Finally, let us prove that each vertex $v$ in $C(m,p)\setminus (P_1\cup \{2p-1\})$ is fixed by the automorphism $\delta$. For that, it is enough to show that for every pair of distinct vertices $u$ and $v$ such that $c(u)=c(v)$, we have $N(u)\cap \{0,1,2,\dots,p-1\} \neq N(v)\cap \{0,1,2,\dots,p-1\}$. This implies that each vertex $v$ with label $c(v)\geq2$ is fixed by $\delta$, which concludes the proof of the lemma.
Let $u$ and $v$ be two distinct vertices such that $c(u)=c(v)$ with $u,v \in C(m,p)\setminus (P_1\cup \{2p-1\})$.\
Since $c(u)=c(v)$, we have $u \in M_i$ and $v\in M_j$ with $i\neq j$. Then $i-1, i+1 \in N(u)$ and $j-1, j+1 \in N(v)$.
If $i=0$ then $p-1\in N(u)$ since $p\in M_i$. Similarly, if $i=p-1$, then $0\in N(u)$ since $mp-1\in M_i$.
Therefore, modulo $p$, we have that $i-1, i+1 \in N(u)\cap \{0,1,\dots,p-1 \}$ and $j-1, j+1 \in N(v)\cap \{0,1,\dots,p-1 \}$.
Additionally, observe that any vertex $u$ has exactly two neighbors among any $p$ consecutive vertices of $G$. Thus $N(u)\cap \{0,1, \dots,p-1\} =\{i-1, i+1 \; \; \bmod{p} \}$ and $N(v)\cap \{0,1, \dots,p-1\} =\{j-1, j+1\; \; \bmod{p}\}$.
Now, if $N(u)\cap \{0,1,\dots,p-1 \}= N(v)\cap \{0,1,\dots,p-1 \}$ and $i\neq j$, then $i+1=j-1$ and $i-1=j+1$. Thus $j=i-2$, $j=i+2$ and $p=4$.
Since $p>4$, we get that $N(u)\cap \{0,1,\dots,p-1 \}\neq N(v)\cap \{0,1,\dots,p-1 \}$.
Lemma \[borne\] and Lemma \[D(G)\] give the proof of Theorem \[principal\]. The following result gives the value of distinguishing number for $p=4$:
\[p4\] For each $m\geq 2$, $C(m,4)$ is isomorphic to $C(2m,2)$ $($or $K_{2m,2m})$ and $D(C(m,4))=$ $2m+1$.
The graph $C(m,4)$ is partitioned into four modules $M_0$, $M_1$, $M_2$, $M_3$. We have: $N(M_0)=N(M_2)=M_1\cup M_3$ and $N(M_1)=N(M_3)=M_0\cup M_2$. Thus, the module $M_i$ is not maximal for $i \in \{0,1,2,3\}$. Furthermore, $M_0 \cup M_2$ and $M_1\cup M_3$ are stable sets of size $2m$. Hence the graph $C(m,4)$ is the complete bipartite graph $K_{2m,2m}$ and $D(C(m,4))=D(K_{2m,2m})=D(C(2m,2))=2m+1$.
**PROOF OF THEOREM \[main\]**
Let $d_1,d_2,\dots,d_r$ be an ordered sequence of distinct integers. Let $m_i=d_i -1$ for all $i=1,\dots,r$ and $p_i=\displaystyle\prod_{j\neq i} m_j$.
By definition, $m_i p_i=m_j p_j$ for all $i,j=1,\dots,r$.\
If $p_i\neq 4$ for all $i$, let $n=m_i p_i$; otherwise replace each $p_i$ by $3p_i$ and let $n=3m_i p_i$, which is again independent of $i$.\
Now, by Theorem \[principal\], $D(C(m_i,p_i))=m_i+1=d_i$ for all $i=1,\dots,r$.\
So, $(G_i)_i ={(C(m_{i},p_{i}))}_i$ with $i=1,\dots,r$, is a family of connected circulant graphs of order $n$ such that $D(G_i)=d_i$.
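To make the construction concrete, the following small sketch (ours, for illustration only; the function name is our own) computes, for a given sequence $d_1,\dots,d_r$, the common order $n$ and the parameters $(m_i,p_i)$ used above:

```python
from math import prod

def circulant_family(ds):
    """Sketch of the construction in the proof above (illustrative only)."""
    ms = [d - 1 for d in ds]                  # m_i = d_i - 1
    ps = [prod(ms) // m for m in ms]          # p_i = prod_{j != i} m_j
    if any(p == 4 for p in ps):               # avoid the excluded case p = 4
        ps = [3 * p for p in ps]
    n = ms[0] * ps[0]                         # common order n = m_i * p_i
    return n, list(zip(ms, ps))               # D(C(m_i, p_i)) = m_i + 1 = d_i

print(circulant_family([3, 5, 6]))            # (40, [(2, 20), (4, 10), (5, 8)])
print(circulant_family([3, 5]))               # p_1 would be 4, so the order becomes 24
```

For example, for the sequence $(3,5,6)$ one gets $n=40$ and the graphs $C(2,20)$, $C(4,10)$, $C(5,8)$, whose distinguishing numbers are $3$, $5$ and $6$ respectively by Theorem \[principal\].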
Remarks and conclusion {#sec:4}
======================
We have studied the structure of the circulant graphs $C(m,p)$ and determined their distinguishing number for all $n=m.p\geq 3$ with $m\geqslant 1$ and $p\geqslant 2$. The results can be summarized as follows:
$D(C(m,p))=$ $\begin{cases}
m & (m\geqslant 3 \; \; \text{and} \; \; p=1)\\
m+1 & (m=1 \; \; \text{and} \; \; p\geq 6) \; \; or \; \; (m\geq2 \; \; p\geq2 \; \; p\neq4) \\
2m+1 & (m=1 \; \; \text{and} \; \; p\in\{3,4,5\}) \; \; or \; \; (m\geq2 \; \; p=4)
\end{cases}$
We deduce that, for a given integer $n=\displaystyle\prod_{i=1}^{r} m_i$ with $r\geq 2$ and $m_i\geq 1$, we can build a family of graphs of the same order $n$ whose distinguishing numbers depend on the divisors of $n$. The main idea of the construction consists of partitioning the vertex set into modules of the same size, and circulant graphs are a particularly well-suited structure for this purpose. One may ask whether such a family of circulant graphs can be constructed with a smaller order.
For instance, we can improve in Theorem \[main\] the order $n$ of $(C(m_i,p_i))_i$ for $i=1,\dots r$, by taking $n=\frac{\displaystyle\prod_{i=1}^{r} m_i}{gcd(m_i, \displaystyle\prod_{j<i} m_j)}$.
M. O. Albertson and K. L. Collins. Symmetry breaking in graphs. [*Electronic J. of Combinatorics*]{}. [**3**]{}(1996),\# R18.
B. Bogstad and L. Cowen. The distinguishing number of hypercubes. [*Discrete Mathematics*]{}. [**383**]{}(2004),29–35, .
C. T. Cheng. On computing the distinguishing numbers of trees and forests. [*Electronic J. of Combinatorics*]{}. [**13**]{}(2011),\# R11.
K. L. Collins and A. N. Trenk. The Distinguishing Chromatic Number. [*Electronic J. of Combinatorics.*]{} [**13**]{}(2006),\# R16.
M. J. Fisher and G. Isaak. Distinguishing colorings of Cartesian products of complete graphs. [*Discrete Mathematics*]{}. [**308**]{}(2008),2240–2246.
M. J. Fisher and G. Isaak. Distinguishing numbers of Cartesian products of multiple complete graphs. [*PARS Mathematica Comptemporanea.*]{} [**5**]{}(2012),159–170.
S. Gravier, J. Jerebic and M. Mollard. Distinguishing number of some circulant graphs. [*Manuscript*]{}. 2010.
W. Imrich, J. Jerebic and S. Klavžar. The distinguishing number of Cartesian products of complete graphs. [*European. J. Combin.*]{} [**45**]{}(2009), 175–188.
W. Imrich and S. Klavžar. Distinguishing Cartesian powers of graphs. [*J. Graph Theory*]{}. [**53**]{}(2006),250–260.
S. Klavžar and X. Zhu. Cartesian powers of graphs can be distinguished by two labels. [*European J. Combinatorics*]{}. [**28**]{} (2007) 303–310.
K. S. Potanka. Groups, Graphs and Symmetry Breaking. Masters Thesis, Virginia Polytechnic Institute and State University, 1998.
F. Rubin. Problem 729 in [*J. Recreational Math. [**Vol. 11**]{} (Solution in Vol.12, 1980)*]{}(1979),128.
J. Tymoczko. Distinguishing number for graphs and groups. [*Electronic J. Combinatorics*]{}, [**11(1)**]{}(2004),\# R63.(Also available at arXiv:math.CO/0406542.).
X. Zhu and T. L. Wong. Distinguishing labeling of group actions. [*Discrete Mathematics, Vol 309.*]{} [**6**]{} (2009),1760–1765.
[^1]: Institut Fourier - SFR Maths à Modeler.UMR 5582 CNRS/Université Joseph Fourier 100 rue des maths, BP 74, 38402 St Martin d’Hères, France
[^2]: Laboratoire LaROMaD, SFR Maths à Modeler. Faculté des Mathématiques, U.S.T.H.B. El Alia Bab-Ezzouar 16111, Algiers, Algeria
|
---
abstract: 'We present several strategies for searching for supersymmetry in dijet channels that do not explicitly invoke missing energy. Preliminary investigations suggest that signal-to-background ratios of at least 4–5 should be achievable at the LHC, with discovery possible for squarks as heavy as $\sim$ 1.7 TeV.'
author:
- Lisa Randall
- 'David Tucker-Smith'
title: Dijet Searches for Supersymmetry at the LHC
---
Introduction
============
The LHC is set to explore the physics of the weak scale, whatever it should turn out to be. Supersymmetry is one of the leading candidates and enormous effort has been dedicated to studying missing energy signals that characterize almost any weak-scale supersymmetric model. However, supersymmetry searches will be challenging and disentangling the supersymmetry parameters will be more difficult still.
In light of the above, it is imperative to study every possible channel in order to optimize our chances of discovering [new physics]{} and understanding the underlying theory. In this regard, events with the lowest multiplicity [may]{} be the simplest [ones]{} with which to make headway on the inverse problem.
Although two-jet events with missing energy have been studied at the Tevatron [@:2007ww], they have been less prominent in LHC studies. ATLAS has shown that two jet events can be useful for certain SUSY models, both for discovery and for constraining superpartner masses [@atlas], but recent ATLAS and CMS studies have focused more heavily on the more challenging cascade decays. In this paper we study one novel and two existing kinematic variables that can be used to capture dijet missing-energy events without explicit reference to missing transverse energy. We find that pairs of these variables can be used to give signal-to-background of at least 4–5, indicating that these variables are worth exploring with a full detector simulation[^1].
Dijet events are worthy of attention as a potentially clear window into parameter space. They are not results of complicated cascade decays but arise simply from two squarks decaying to two quarks and two neutralinos. Because we know the identity of the particles involved and because there are so few, the signal is relatively straightforward to interpret. For example, with sufficient integrated luminosity, these events alone can be used to constrain the squark and neutralino masses. Dijet studies along the lines explored here may usefully supplement recent analyses dedicated to distinguishing SUSY from other models using events with at least three jets [@Hubisz:2008gg].
The kinematic variables we consider are constructed from the two jets’ momenta alone. These variables should have different systematic uncertainties than missing transverse energy since they pick out slightly different events and are based on different measurements. At the very least, then, the searches we suggest should be worthwhile as cross-checks of standard searches. The variables we use may also be useful for optimization when signal-to-background is relatively low.
The searches we describe will be most effective when squarks are pair-produced in abundance and have large branching ratios to decay directly to the lightest neutralino, which requires that squarks are lighter than the gluino so that cascade decays through gluinos are absent. Because $t$-channel gluino exchange is an important source of squark pair production, the lighter the gluino the more prominent the signal. For the parameter points considered below, we find the signal is cut by a factor of $\sim 6-7$ when the gluino decouples. Fortunately, comparable gluino and squark masses are a feature of a large class of models – most notably high-scale models where the heavier gluino mass feeds into the squark mass. We focus on such models in this study.
Analysis Details
================
Before getting to the dijet properties that will be the focus of our study, we consider the effectiveness of ${{E}_T \!\!\!\!\!\!\! /\;\;}\!\!$ and ${{H}_T \!\!\!\!\!\!\!\! /\;\;}$, the missing transverse energy obtained from the dijet system alone. After requiring the sum of the two jets’ $p_T$’s to be greater than 500 GeV, event rates and signal-to-background ratios for one particular SUSY point are presented in Table \[table:met\] (details regarding event generation and cuts are given below). Neither variable suffices for a clean search, but we observe that the $S/B$ values obtained using ${{H}_T \!\!\!\!\!\!\!\! /\;\;}$ are essentially identical to those obtained using ${{E}_T \!\!\!\!\!\!\! /\;\;}$. This analysis suggests that, in the two-jet channel at high $p_T$, nothing is to be gained by using full ${{E}_T \!\!\!\!\!\!\! /\;\;}$ rather than kinematic variables associated with the two jets alone.
  ${{E}_T \!\!\!\!\!\!\! /\;\;}$ or ${{H}_T \!\!\!\!\!\!\!\! /\;\;}$ cut      300    350    400    450    500    550    600    650    700
  --------------------------------------------------------------------- ------ ------ ------ ------ ------ ------ ------ ------ ------
  $\sigma_{susy}$(fb), ${{E}_T \!\!\!\!\!\!\! /\;\;}$ cut                 864.   759.   645.   526.   397.   257.   143.   81.9   51.1
  $S/B$, ${{E}_T \!\!\!\!\!\!\! /\;\;}$ cut                               0.7    1.0    1.3    1.7    1.8    2.0    1.8    1.5    1.4
  $\sigma_{susy}$(fb), ${{H}_T \!\!\!\!\!\!\!\! /\;\;}$ cut               862.   757.   639.   521.   379.   229.   128.   74.5   47.4
  $S/B$, ${{H}_T \!\!\!\!\!\!\!\! /\;\;}$ cut                             0.7    1.0    1.3    1.7    1.9    1.8    1.7    1.5    1.3
: For dijet events passing the cuts described in the text, the dependence of the signal cross section and signal-to-background ($S/B$) on a variable ${{E}_T \!\!\!\!\!\!\! /\;\;}$ cut (top), and on a variable ${{H}_T \!\!\!\!\!\!\!\! /\;\;}$ cut (bottom). All energies are in GeV.[]{data-label="table:met"}
We now present three dijet variables that can be used to separate signal and background, with $\sim 1$% of signal events passing all cuts.
$\alpha$:
which we define as the ratio of the $p_T$ of the second hardest jet and the invariant mass formed from the two hardest jets, $$\alpha \equiv \frac{{p_T}_2}{m_{jj}}.$$ As far as we know, this variable has not been considered previously. Background events generally trail off at $\alpha=0.5$, whereas supersymmetry events with invisible decay products can easily have larger $\alpha$. Large $\alpha$ tends to arise in events in which the jets are not back-to-back. As one extreme example, if the two jets are nearly aligned, their invariant mass can be quite small, leading to very large $\alpha$.
Because of the background’s sharp drop-off around $\alpha=0.5$, this variable is potentially useful as a diagnostic tool for analyzing two jet events and cleanly separating signal events from QCD.
$\Delta \phi $:
the azimuthal angle between the two hardest jets. Azimuthal angle is often used in conjunction with missing transverse energy, and $\Delta \phi $ was among the variables used in the dijet SUSY search at D0 [@:2007ww].
$M_{T2}$ [@Lester:1999tx]:
which is defined for events in which two particles of the same mass undergo identical semi-invisible decays, as $$M_{T2}(\chi) = \min_{{{q} \!\!\! /}_1+{{q} \!\!\! /}_2={{p} \!\!\! /}_T} \{ \max[m_T(p_1, q_1\!\!\!\!\! / \;,\chi), m_T(p_2,q_2\!\!\!\!\! / \;,\chi)] \},$$ where $p_1$ and $p_2$ are the momenta of the visible particles, ${{p}_T \!\!\!\!\!\! /\;}$ is the missing transverse momentum of the event, and $m_T$ is the transverse mass function, which depends on an assumed value $\chi$ of the invisible particle’s mass. In calculating $M_{T2}(\chi)$ we use the missing transverse momentum as determined by the dijet system alone.
If $\chi$ is taken to be equal to the mass of the invisible particle, the $M_{T2}$ distribution will have an endpoint at the mass of the decaying particle. Not knowing this mass, $M_{T2}$ endpoints still constrain the masses of the decaying and invisible particles, as emphasized in [@Lester:1999tx] and used below.
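To fix our conventions, the following sketch (ours; the function names and the toy event are purely illustrative, and the $M_{T2}(0)$ evaluation simply uses a generic numerical minimizer rather than any dedicated algorithm) computes $\alpha$, $\Delta\phi$ and $M_{T2}(0)$ from the two jet four-momenta, with the missing transverse momentum taken from the dijet system alone:

```python
# Illustrative sketch: dijet variables from two pT-ordered jet four-momenta (E, px, py, pz).
import numpy as np
from scipy.optimize import minimize

def dijet_alpha_dphi(j1, j2):
    E, px, py, pz = j1 + j2
    mjj = np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))   # dijet invariant mass
    alpha = np.hypot(j2[1], j2[2]) / mjj                     # alpha = pT2 / m_jj
    dphi = abs(np.arctan2(j1[2], j1[1]) - np.arctan2(j2[2], j2[1]))
    return alpha, min(dphi, 2 * np.pi - dphi)                # fold dphi into [0, pi]

def mt2_zero(j1, j2):
    """MT2 with trial invisible mass chi = 0; jets treated as massless."""
    p1, p2 = j1[1:3], j2[1:3]
    ptmiss = -(p1 + p2)                                      # dijet-only missing pT
    def mT(p, q):                                            # transverse mass, massless case
        return np.sqrt(max(2.0 * (np.linalg.norm(p) * np.linalg.norm(q) - p @ q), 0.0))
    def worst(q1):                                           # split ptmiss as q1 + (ptmiss - q1)
        return max(mT(p1, q1), mT(p2, ptmiss - q1))
    return minimize(worst, x0=0.5 * ptmiss, method="Nelder-Mead").fun

# toy event (GeV)
j1 = np.array([400.0, 400.0, 0.0, 0.0])
j2 = np.array([300.0, -250.0, 150.0, 50.0])
print(dijet_alpha_dphi(j1, j2), mt2_zero(j1, j2))
```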
{width="2in"} {width="2in"} {width="2in"}
We consider these variables singly and in tandem. We find the first two variables are useful in that one can choose parameter-independent cuts that give sizable $S/B$, whereas the last variable, though more parameter-dependent in its optimization, might ultimately maximize $S/B$. Since the advantage is not overwhelming, we expect all the variables could prove useful, either at the trigger or analysis level. Because they are dimensionless, the first two variables might have the further advantage of being less sensitive to absolute energy scale, and might therefore have lower systematic errors.
For all our analyses, we select events in which exactly two jets have $p_T>50$ GeV, with no isolated leptons, photons, or $\tau$ jets. One could attempt to achieve better background rejection by an additional veto on extra jets with lower $p_T$. In general, we have chosen felicitous cuts but have not pursued a careful optimization, which will be more appropriate at the full-detector-simulation level.
A gluino that is only slightly heavier than the squarks arises naturally in models with supersymmetry broken at a high scale, as renormalization-group effects prevent the squarks from being hierarchically lighter than the gluino. For our analyses we specify parameters at the high scale and use the SUSY-HIT package [@Djouadi:2006bz] to calculate superpartner masses and decay branching ratios. In the relevant parameter regions, the signal depends strongly on $M_{1/2}$, the unified gaugino mass at the high scale, and is less sensitive to $M_0$, the unified scalar mass, because the squark mass is dominated by gauge-loop contributions. We set the other SUSY parameters to be $\tan \beta = 10$, $A_0=0$, and $\mu>0$.
The backgrounds included in our analyses are QCD, $(W\rightarrow l {\overline \nu})/(Z\rightarrow \nu {\overline \nu})$+jets, and ${t \overline t}$. We have checked that diboson+jets production does not significantly modify our results. The QCD and $t {\overline t}$ samples were generated with Pythia 6.4 [@Sjostrand:2006za], and $Z/W$+jets with Alpgen 2.12 [@Mangano:2002ea]. Fully showered and hadronized events were then passed to the PGS 4.0 detector simulator [@PGS], with the energy smearing in the hadronic calorimeter given by $\Delta E/E = 0.8/\sqrt{E/{\rm GeV}}$ and the calorimeter granularity set to $(\Delta \phi\times \Delta \eta)= (0.1\times 0.1)$. Jets were defined using a cone algorithm with $\Delta R=0.4$.
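As a toy illustration of the quoted hadronic-calorimeter resolution (our own sketch, not the PGS implementation), a deposit of true energy $E$ is smeared with a Gaussian of width $0.8\sqrt{E/{\rm GeV}}$ GeV:

```python
# Toy Gaussian smearing with Delta E / E = 0.8 / sqrt(E/GeV), i.e. sigma = 0.8*sqrt(E) GeV.
import numpy as np
rng = np.random.default_rng(0)

def smear_energy(E_true_gev):
    sigma = 0.8 * np.sqrt(E_true_gev)
    return max(rng.normal(E_true_gev, sigma), 0.0)   # clip unphysical negative energies

print([round(smear_energy(500.0), 1) for _ in range(3)])
```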
A $K$-factor of 2 is applied to the QCD sample, but no $K$-factor is used for $W/Z$ production, because the most important contributions come from $W/Z$+2 jets, which are not enhanced at NLO [@Campbell:2003hd]. (After cuts, $W/Z$ production ends up being the dominant background to SUSY dijet events, so to include a $K$-factor one can simply divide our signal-to-background ratios by $K$.) For $t{\overline t}$ we use $\sigma=830$ pb as the NLO production cross section [@Bonciani:1998vc]. Including the $K$ factors, our sample sizes are $\sim 0.8$ fb$^{-1}$ for QCD, $\sim$ 20 fb$^{-1}$ for $t{\overline t}$, and $\sim 100$ fb$^{-1}$ for $W/Z$. Appropriate generator-level kinematic cuts were imposed to obtain the QCD and $W/Z$ samples.
SUSY samples were also generated with Pythia. For each parameter point we use Prospino 2.0 [@prospino] to calculate an appropriate $K$-factor from the NLO cross section for squark pair production [@Beenakker:1996ch].
Results
=======
The plots in Figure \[fig:phialphascans\] suggest that appropriate cuts on $\alpha$, $\Delta \phi$, and/or $M_{T2}$ can suppress both the QCD background and the dominant background after cuts, $(Z \rightarrow \nu {\overline \nu})$+jets. The SUSY parameter point used here is $(M_{1/2},M_0) = (300,100)$ GeV, and we impose a hard cut on the sum of the two hard jets’ transverse momenta, $${p_T}_1+{p_T}_2 > 500 \;{\rm GeV}.$$ To streamline the analysis, events were required to have ${{E}_T \!\!\!\!\!\!\! /\;\;}> 100$ GeV for Figure \[fig:phialphascans\], and at least one of $\alpha > 0.5$, $\Delta \phi <2\pi/3$, and ${{E}_T \!\!\!\!\!\!\! /\;\;}>100$ GeV for Figure \[fig:nomet\]. Removing these requirements does not affect the results once optimal cuts on $\alpha$, $\Delta \phi$, and/or $M_{T2}$ are made.
Evidently signal dominates over background for $\alpha{ \mathop{}_{\textstyle \sim}^{\textstyle >} }0.5$, $\Delta \phi { \mathop{}_{\textstyle \sim}^{\textstyle <} }2\pi/3$, and $M_{T2} { \mathop{}_{\textstyle \sim}^{\textstyle >} }300$ GeV. We will soon see that $\alpha $, $\Delta \phi$, and $M_{T2}$ can be used to discriminate signal from background by themselves, but first we point out that cuts on these variables can improve an analysis based on ${{E}_T \!\!\!\!\!\!\! /\;\;}\!$ or ${{H}_T \!\!\!\!\!\!\!\! /\;\;}\!$. For example, the combination $(\alpha>0.45,\; {{H}_T \!\!\!\!\!\!\!\! /\;\;}\!>300 \:{\rm GeV})$ selects 315 signal events per fb$^{-1}$, with $S/B=4.3$. The combination $(\Delta \phi < 2\pi/3,\; {{H}_T \!\!\!\!\!\!\!\! /\;\;}\!>450 \:{\rm GeV})$ gives a somewhat lower $S/B$ (3.1), but with more events (429). An $M_{T2}$ cut of 450 GeV gives the largest $S/B$ of all (5.0, with 304 events), and in fact there appears to be no benefit in supplementing the $M_{T2}$ cut with the ${{H}_T \!\!\!\!\!\!\!\! /\;\;}$ cut.
Figure \[fig:nomet\] suggests
![For events preselected as described in the text, the dependence of the signal cross section and $S/B$ on a variable $\alpha$ cut (top), a variable $\Delta \phi$ cut (middle), and a variable $M_{T2}$ cut (bottom). []{data-label="fig:nomet"}](alphasplitplot.eps "fig:"){width="3in"}\
![For events preselected as described in the text, the dependence of the signal cross section and $S/B$ on a variable $\alpha$ cut (top), a variable $\Delta \phi$ cut (middle), and a variable $M_{T2}$ cut (bottom). []{data-label="fig:nomet"}](phisplitplot.eps "fig:"){width="3in"}\
![For events preselected as described in the text, the dependence of the signal cross section and $S/B$ on a variable $\alpha$ cut (top), a variable $\Delta \phi$ cut (middle), and a variable $M_{T2}$ cut (bottom). []{data-label="fig:nomet"}](mt2splitplot.eps){width="3in"}
that each of $\alpha$, $\Delta \phi$, and $M_{T2}$ can be used independently to observe a clear signal, without employing ${{H}_T \!\!\!\!\!\!\!\! /\;\;}$ at all. Well-chosen cuts give $\sim$ a few $\times 10^2$ signal events after 1 fb$^{-1}$, with $S/B\sim 3-5$.
Figure \[fig:nomet\] also shows how the three variables can be used in pairs to improve $S/B$ in conjunction with the signal event-rate. We again find that $M_{T2}$ seems to dominate a little, but since we do not know if this is the cleanest variable to use in practice, which can be determined only after a full detector simulation, we present all combinations. Any two on their own can potentially give a robust signal.
As an example, we consider the combination $\Delta \phi < 2\pi/3$ and $\alpha > 0.45$, which gives a good $S/B$ and a decent event rate. As stated earlier, we do not optimize cuts, but this combination works rather well.
With those cuts in place, Figure \[fig:300ptbins\] shows signal and background events binned in the sum of the two hardest jets’ transverse momenta.
![Signal and background rates after the cuts $\Delta \phi < 2\pi/3$ and $\alpha >0.45$. The QCD background is not included for ${p_1}_T +{p_2}_T<500$ GeV. We take $(M_{1/2}, M_0)=(300,100)$ GeV. []{data-label="fig:300ptbins"}](plot3_300.eps){width="3in"}
We see that $Z$+jets is the dominant background, followed by $W$+jets. A total of four QCD events with ${p_1}_T +{p_2}_T<500$ GeV passed the cuts, out of a sample corresponding to over 1.5 fb$^{-1}$ of integrated luminosity, divided by the $K$ factor. A higher luminosity sample would be needed to get a better estimate of the QCD background, but it seems safe to say that the $W$ and $Z$ backgrounds are more important.
In Figure \[fig:300ptbins\] we see that $S/B$ is cleanest at high $p_T$. Of course the optimal $p_T$ cut depends on underlying parameters that are not known [*a priori*]{}, but a scan at high $p_T$ should help maximize $S/B$. For the chosen parameter point, cutting above ${p_1}_T +{p_2}_T=550$ GeV gives $S/B = 4.9$, with an average of 205 signal events after 1 fb$^{-1}$. Table \[table:efficiencies\] shows the efficiencies with which the SUSY events pass the successive jet multiplicity, ${p_T}_1+{p_T}_2$, $\Delta \phi$, and $\alpha$ cuts.
$N_{jets}=2$ ${p_T}_1+{p_T}_2 > 550$ $\Delta \phi < 2\pi/3$ $\alpha>0.45$
---------------------- ----------------------- ------------------------- ------------------------ -----------------------
$\epsilon$ $1.08 \times 10^{-1}$ $5.04 \times 10^{-2}$ $2.05 \times 10^{-2}$ $9.48 \times 10^{-3}$
$\sigma_{susy}$ (fb) $2.33 \times 10^{3}$ $1.09 \times 10^{3}$ 443. 205.
: The efficiencies $\epsilon$ for signal events to pass the successive cuts, taking $(M_{1/2}, M_0)=(300,100)$ GeV. []{data-label="table:efficiencies"}
\[default\]
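The two rows of Table \[table:efficiencies\] are consistent with a single total SUSY cross section; the sketch below back-solves that total from the first column (the value of roughly 21.6 pb is our inference, not a number quoted in the text) and checks the remaining entries.

```python
eff   = [1.08e-1, 5.04e-2, 2.05e-2, 9.48e-3]   # cumulative efficiencies, Table [table:efficiencies]
sigma = [2.33e3, 1.09e3, 443.0, 205.0]         # quoted sigma_susy in fb after each successive cut

sigma_total = sigma[0] / eff[0]                # implied total SUSY cross section in fb (~21.6 pb)
print(f"implied total cross section ~ {sigma_total / 1e3:.1f} pb")

for e, s in zip(eff, sigma):
    print(f"eff = {e:.3g}: predicted {e * sigma_total:7.0f} fb vs quoted {s:7.0f} fb")
```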
The final efficiency is lower than that for SUSY searches with additional jets, and so, despite the different systematics, SUSY might well be discovered in other channels first. Moreover, the dijet channel is relevant only for certain models. On the other hand, this analysis picks out particularly simple events: two squarks decay to produce two jets and two neutralinos. If these events do occur, it would certainly be worthwhile to study them in isolation.
For example, with enough luminosity these events alone can be used to obtain a simple constraint on the squark and neutralino masses, using the $M_{T2}$ event function [@Lester:1999tx] introduced above. If one can ignore all visible particles in the event except those in the two jets, one expects
![The $m_{T2}$ distribution for signal and background, after the cuts described in the text. We take $(M_{1/2}, M_0)=(300,100)$ GeV. []{data-label="fig:mt2"}](plot5.eps){width="3in"}
the endpoint $$M_{T2}(0)_{max}=\frac{m_{\tilde q}^2-m_{\tilde \chi_1^0}^2}{m_{\tilde q}}.$$ For the parameter point under study, the predicted endpoint turns out to be 619 GeV if we use the mass of the right-handed squarks, which are the ones that decay predominantly to ${\tilde \chi_1^0} q$. Figure \[fig:mt2\] shows the $M_{T2}(0)$ distribution for $\sim 10$ fb$^{-1}$ of data, with the cuts of Table \[table:efficiencies\] imposed. A sharp drop-off leading up to $\sim 620$ GeV is evident, consistent with expectations. The spill-over to larger values is mostly due to the effects of extra jets not included in the calculation of the missing transverse energy (in calculating $M_{T2}(0)$ we use the missing transverse momentum as determined by the dijet system alone).
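The quoted endpoint of 619 GeV can be checked against the formula above using the 640 GeV right-handed squark mass of Table \[table:summary\]; the neutralino mass below is inferred by inverting the formula, not taken from the text.

```python
import math

m_squark = 640.0   # m_{q_R} in GeV for (M_1/2, M_0) = (300, 100), from Table [table:summary]
endpoint = 619.0   # quoted M_T2(0) endpoint in GeV

# Invert M_T2(0)_max = (m_q^2 - m_chi^2) / m_q to get the implied neutralino mass.
m_chi = math.sqrt(m_squark**2 - endpoint * m_squark)
print(f"implied m_chi ~ {m_chi:.0f} GeV")                                # ~116 GeV
print(f"endpoint check: {(m_squark**2 - m_chi**2) / m_squark:.0f} GeV")  # reproduces 619 GeV
```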
The ($\alpha$, $\Delta \phi$) analysis we have described can be effective for higher-mass searches as well, with the cut on the sum of the two jets’ transverse momenta increased appropriately. Table \[table:summary\] gives results for other parameter points, with the cuts on ${p_1}_T +{p_2}_T$ again chosen to give robust values of $S/B$. The $M_0$ values are chosen to be near the lower bounds below which a stau LSP results. Provided that the squarks remain lighter than the gluino, increasing $M_0$ lowers the event rate somewhat but not dramatically. For $(M_{1/2}, M_0)=(300,300)$ GeV, for example, the same cuts used for the $(M_{1/2}, M_0)=(300,100)$ GeV point give 195 events after 1 fb$^{-1}$, with $S/B=4.7$.
Taking $S/\sqrt{B}>5$ as the relevant criterion, our results suggest that discovery through the dijet channel should be possible for squark masses up to about 1700 GeV after 100 fb$^{-1}$ of integrated luminosity. By the same measure, discovery for lighter squark masses, $\sim 600$ GeV, should be possible after a few $\times 10^2$ pb$^{-1}$ or less. It may be optimistic to focus on $S/\sqrt{B}$ as a discovery criterion, as doing so assumes that the background is fully understood. However, it is worth pointing out that (1) events with leptonic $Z$ decays will provide some experimental handle on the dominant background, $Z+$jets, and (2) the shapes of the ${p_T}_1+{p_T}_2$ distributions for signal and background events passing the $\alpha$ and $\Delta \phi$ cuts are quite different (see Figure \[fig:300ptbins\]). The excesses obtained in our analysis would lead to a prominent bump in the measured distribution, which would not be accommodated simply by rescaling the background.
$(M_{1/2},\; M_0)$ $(m_{\tilde g}, m_{\tilde q_R})$ $\sum p_T$ cut $\epsilon$ $\sigma_{susy}$ (fb) $S/B$
-------------------- ---------------------------------- ---------------- ----------------------- ---------------------- -------
(300, 100) (716, 640) 550 $9.5\times 10^{-3}$ 205. 4.9
(450, 100) (1040, 918) 800 $7.9 \times 10^{-3}$ 21.3 4.7
(600, 150) (1358, 1195) 1050 $8.1\times 10^{-3}$ 4.07 5.0
(750, 200) (1669, 1465) 1250 $9.6 \times 10^{-3}$ 1.17 4.8
(900, 200) (1965, 1726) 1450 $1.0 \times 10^{-2}$ 0.37 3.5
: Efficiencies, event rates, and signal-to-background ratios for various SUSY parameters, using the cuts described in the text. All masses are in GeV.[]{data-label="table:summary"}
\[default\]
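As a rough, statistics-only cross-check of the quoted reach, the sketch below converts the rates and $S/B$ values of Table \[table:summary\] into $S/\sqrt{B}$ at a few integrated luminosities; the luminosity choices are illustrative.

```python
import math

# (sigma_susy in fb, S/B) after cuts, from Table [table:summary].
points = {
    "(300,100), m_qR ~  640 GeV": (205.0, 4.9),
    "(600,150), m_qR ~ 1195 GeV": (4.07, 5.0),
    "(900,200), m_qR ~ 1726 GeV": (0.37, 3.5),
}

for name, (sigma_s, s_over_b) in points.items():
    sigma_b = sigma_s / s_over_b                  # implied background rate in fb
    for lumi in (0.3, 1.0, 100.0):                # integrated luminosity in fb^-1
        s, b = sigma_s * lumi, sigma_b * lumi
        print(f"{name}, {lumi:6.1f} fb^-1: S/sqrt(B) ~ {s / math.sqrt(b):5.1f}")
```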
Conclusions
===========
We have studied several kinematic variables that can be used for dijet SUSY searches, and found that they give reasonable signal-to-background ratios. Dijet events can be used to constrain SUSY mass parameters, should the type of supersymmetry model we have considered be correct. Studies of $Z+$jet events with leptonic $Z$ decays will give a better understanding of the background and a more reliable extraction of signal from background. For the future, it would be useful to see how well the lessons here can be applied to develop multijet searches that do not rely on the full missing energy.
[**Acknowledgements**]{} We wish to thank Patrick Meade and Liantao Wang for useful discussions in the early stages of this work. We also wish to thank Maria Spiropulu, Patrick Janot, Oliver Buchmuller, Henning Flacher, and the CMS SUSY analysis group for useful feedback and suggestions. Finally, we thank Ian Hinchliffe for bringing to our attention existing ATLAS dijet studies and ones in progress. LR is supported by NSF grants PHY-0201124 and PHY-055611. DTS is supported by NSF grant 0555421.
[99]{}
V. M. Abazov [*et al.*]{} \[D0 Collaboration\], Phys. Lett. B [**660**]{}, 449 (2008) \[arXiv:0712.3805 \[hep-ex\]\]. ATLAS collaboration, ATLAS Physics TDR CERN-LHCC 99-15; E. Richter-Was, D. Froidevaux, and J. Soderqvist, ATLAS Internal Note ATL-PHYS-97-108 (1997); M. Biglietti et. al., ATL-PHYS-2004-011; I. Hinchliffe and F. E. Paige, Phys. Rev. D [**61**]{}, 095011 (2000) \[arXiv:hep-ph/9907519\]; F. Gianotti [*et al.*]{}, Eur. Phys. J. C [**39**]{}, 293 (2005) \[arXiv:hep-ph/0204087\].
We thank O. Buchmuller, H. Flacher, J. Jones, T. Rommerskirchen and M. Stoye, as part of the CMS SUSY Analysis Group, for useful discussions and feedback.
We thank Ian Hinchliffe for sharing this information.
J. Hubisz, J. Lykken, M. Pierini and M. Spiropulu, arXiv:0805.2398 \[hep-ph\].
J. Campbell, R. K. Ellis and D. L. Rainwater, Phys. Rev. D [**68**]{}, 094021 (2003) \[arXiv:hep-ph/0308195\].
C. G. Lester and D. J. Summers, Phys. Lett. B [**463**]{}, 99 (1999) \[arXiv:hep-ph/9906349\]; A. Barr, C. Lester and P. Stephens, J. Phys. G [**29**]{}, 2343 (2003) \[arXiv:hep-ph/0304226\].
[^1]: Such a study has been started by CMS [@cms]. In addition, ATLAS is currently engaged in an updated dijet study [@ian].
|
---
abstract: 'Let $\R(\cdot)$ stand for the bounded-error randomized query complexity. We show that for any relation $f \subseteq \{0,1\}^n \times \mathcal{S}$ and partial Boolean function $g \subseteq \{0,1\}^n \times \{0,1\}$, $\R_{1/3}(f \circ g^n) = \Omega(\R_{4/9}(f) \cdot \sqrt{\R_{1/3}(g)})$. Independently of us, Gavinsky, Lee and Santha [@newcomp] proved this result. By an example demonstrated in their work, this bound is optimal. We prove our result by introducing a novel complexity measure called the *conflict complexity* of a partial Boolean function $g$, denoted by $\chi(g)$, which may be of independent interest. We show that $\chi(g) = \Omega(\sqrt{\R(g)})$ and $\R(f \circ g^n) = \Omega(\R(f) \cdot \chi(g))$.'
author:
- 'Swagato Sanyal [^1]'
bibliography:
- 'ref.bib'
title: A Composition Theorem via Conflict Complexity
---
Introduction {#intro}
============
Let $f \subseteq\{0,1\}^n \times\mathcal{S}$ be a relation and $g \subseteq \{0,1\}^m \times \{0,1\}$ be a partial Boolean function. In this work, we bound the bounded-error randomized query complexity of the composed relation $f \circ g^n$ from below in terms of the bounded-error query complexities of $f$ and $g$. Our main theorem is as follows.
[thm]{}[main]{} \[main\] For any relation $f \subseteq \{0,1\}^n \times \mathcal{S}$ and partial Boolean function $g \subseteq \{0,1\}^n \times \{0,1\}$, $$\R_{1/3}(f \circ g^n) = \Omega\left(\R_{4/9}(f) \cdot \sqrt{\R_{1/3}(g)}\right).$$
Prior to this work, Anshu et al. [@fstcomp] proved that $\R_{1/3}(f\circ g^n) = \Omega(\R_{4/9}(f)\cdot\R_{1/2-1/n^4}(g))$. Although their result is stated for total Boolean functions $g$, it holds even when $g$ is a partial Boolean function.
In the special case of $g$ being a total Boolean function, Ben-David and Kothari [@DBLP:conf/icalp/Ben-DavidK16] showed that $\R(f\circ g^n) = \Omega\left(\R(f)\cdot \sqrt{\frac{\R(g)}{\log \R(g)}}\right)$.
Gavinsky, Lee and Santha [@newcomp] independently proved Theorem \[main\] (possibly with different values for the error parameters). They also prove this bound to be tight by exhibiting a matching example. We believe that our proof is sufficiently different and significantly shorter and simpler than theirs. We draw on and refine the ideas developed in the works of Anshu et al. and Ben-David and Kothari to prove our result.
We define a novel measure of complexity of a partial Boolean function $g$ that we refer to as the *conflict complexity* of $g$, denoted by $\chi(g)$ (see Section \[cc\] for a definition). This quantity is inspired by the *Sabotage complexity* introduced by Ben-David and Kothari. However, the two measures also have important differences. In particular, we are able to show that for any partial function $g$, $\chi(g)$ and $\R(g)$ are related as follows.
[thm]{}[maina]{} \[maina\] For any partial Boolean function $g \subseteq \{0,1\}^n \times \{0,1\}$, $$\chi(g)=\Omega\left(\sqrt{\R_{1/3}(g)}\right).$$
See Section \[cc\] for a proof of Theorem \[maina\]. Sabotage complexity is known to be similarly related to the bounded-error randomized query complexity (up to a logarithmic factor) when $g$ is a total Boolean function. For partial Boolean functions, unbounded separation is possible between sabotage complexity and $\R(\cdot)$.
We next prove the following composition theorem.
[thm]{}[mainb]{} \[mainb\] Let $\mathcal{S}$ be an arbitrary set, $f \subseteq \{0,1\}^n \times \mathcal{S}$ be a relation and $g \subseteq \{0,1\}^m \times \{0,1\}$ be a partial Boolean function. Then, $$\R_{1/3}(f \circ g^n)=\Omega(\R_{4/9}(f) \cdot \chi(g)).$$
To prove Theorem \[mainb\] we draw on the techniques developed by Anshu et al. and Ben-David and Kothari. See Section \[comp\] for a proof of Theorem \[mainb\]. Theorem \[main\] follows from Theorems \[maina\] and \[mainb\].
Preliminaries {#prelims}
=============
A partial Boolean function $g$ is a relation in $\{0,1\}^m \times \{0,1\}$. For $b \in \{0,1\}$, $g^{-1}(b)$ is defined to be the set of strings $x$ in $\{0,1\}^m$ for which $(x,b) \in g$ and $(x,\overline{b}) \notin g$. $g^{-1}(0) \cup g^{-1}(1)$ is referred to as the set of valid inputs to $g$. We assume that for all strings $y \notin g^{-1}(0) \cup g^{-1}(1)$, both $(y,0)$ and $(y,1)$ are in $g$. For a string $x \in g^{-1}(0) \cup g^{-1}(1)$, $g(x)$ refers to the unique bit $b$ such that $(x,b) \in g$. All the probability distributions $\mu$ over the domain of a partial Boolean function $g$ in this paper are assumed to be supported entirely on $g^{-1}(0) \cup g^{-1}(1)$. Thus $g(x)$ is well-defined for any $x$ in the support of $\mu$.
Let $\mathcal{S}$ be any set. Let $h \subseteq \{0,1\}^k \times \mathcal{S}$ be any relation and $\epsilon \in [0,1/2)$. The 2-sided error randomized query complexity $\R_\epsilon(h)$ is the minimum number of queries made in the worst case by a randomized query algorithm $\mathcal A$ (the worst case is over inputs and the internal randomness of $\mathcal{A}$) that on each input $x \in \{0,1\}^k$ satisfies $\Pr[(x,\mathcal A(x)) \in h] \geq 1 - \epsilon$ (where the probability is over the internal randomness of $\mathcal{A}$).
Let $h \subseteq \{0,1\}^k \times \mathcal{S}$ be any relation, $\mu$ a distribution on the input space $\{0,1\}^k$ of $h$, and $\epsilon \in [0,1/2)$. The distributional query complexity $\D^\mu_\epsilon(h)$ is the minimum number of queries made in the worst case (over inputs) by a deterministic query algorithm $\mathcal A$ for which $\Pr_{x \sim \mu}[(x,\mathcal A(x)) \in h] \geq 1 - \epsilon$.
In particular, if $h$ is a function and $\mathcal{A}$ is a randomized or distributional query algorithm computing $h$ with error $\epsilon$, then $\Pr [h(x)=\mathcal{A}(x)] \geq 1-\epsilon$, where the probability is over the respective sources of randomness.
The following theorem is von Neumann’s minimax principle stated for decision trees.
\[minmax\] For any integer $k$, set $\mathcal{S}$, and relation $h \subseteq \{0,1\}^k \times \mathcal{S}$, $$\R_\epsilon(h)=\max_{\mu}\D_\epsilon^\mu(h).$$
Let $\mu$ be a probability distribution over $\{0,1\}^k$. $x \sim \mu$ means that $x$ is a random string drawn from $\mu$. Let $C \subseteq \{0,1\}^k$ be arbitrary. Then $\mu \mid C$ is defined to be the probability distribution obtained by conditioning $\mu$ on the event that the sampled string belongs to $C$, i.e., $$(\mu \mid C)(x)=\left\{ \begin{array}{ll} 0 & \mbox{if $x \notin C$,} \\
\frac{\mu(x)}{\sum_{y \in C} \mu(y)} & \mbox{if $x \in C$.}\end{array} \right.$$
For a partial Boolean function $g:\{0,1\}^m \rightarrow \{0,1\}$, probability distribution $\mu$ and bit $b$, $$\mu_b:=\mu \mid g^{-1}(b).$$ Notice that $\mu_0$ and $\mu_1$ are defined with respect to some Boolean function $g$, which will always be clear from the context.
A subset ${\mathcal{C}}$ of $\{0,1\}^m$ is called a subcube if there exists a set $S \subseteq \{1, \ldots, m\}$ of indices and an *assignment function* $A:S \rightarrow \{0,1\}$ such that ${\mathcal{C}}=\{x \in \{0,1\}^m:\forall i \in S, x_i=A(i)\}$. The co-dimension $\codim({\mathcal{C}})$ of ${\mathcal{C}}$ is defined to be $|S|$.
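A minimal sketch of the last two definitions (conditioning a distribution on a set, and a subcube given by an assignment function); the strings, probabilities and the $0$-based coordinate indexing are placeholders.

```python
from itertools import product

def subcube(m, assignment):
    """All x in {0,1}^m agreeing with the partial assignment {coordinate: bit}; codim = len(assignment)."""
    return {x for x in product((0, 1), repeat=m)
            if all(x[i] == b for i, b in assignment.items())}

def condition(mu, C):
    """The distribution mu | C: zero outside C, renormalized inside C."""
    mass = sum(p for x, p in mu.items() if x in C)
    return {x: (p / mass if x in C else 0.0) for x, p in mu.items()}

# Placeholder example with m = 3 (coordinates indexed from 0 here, 1..m in the text).
m = 3
mu = {x: 1 / 8 for x in subcube(m, {})}   # uniform distribution on {0,1}^3
C = subcube(m, {0: 1})                    # the subcube x_1 = 1, of co-dimension 1
print(condition(mu, C))                   # uniform on the 4 strings in C, zero elsewhere
```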
Now we define composition of two relations.
\[def:comp\] Let $f \subseteq \{0,1\}^n \times \mathcal{S}$ and $g \subseteq \{0,1\}^m \times \{0,1\}$ be two relations. The composed relation $f \circ g^n \subseteq \left(\{0,1\}^m\right)^n \times \mathcal{S}$ is defined as follows: For $x=(x^{(1)}, \ldots, x^{(n)}) \in \left(\{0,1\}^m\right)^n$ and $s \in \mathcal{S}$, $(x,s) \in f \circ g^n$ if and only if there exists $b=(b^{(1)}, \ldots, b^{(n)}) \in \{0,1\}^n$ such that for each $i=1, \ldots, n$, $(x^{(i)},b^{(i)}) \in g$ and $(b,s) \in f$.
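For intuition, the sketch below evaluates $f \circ g^n$ in the simplest special case where $f$ and $g$ are total Boolean functions, so that the relation contains exactly one valid output per input; the particular choices of $f$ and $g$ are illustrative only.

```python
def compose(f, g, n, m):
    """Return the total function (f o g^n) on ({0,1}^m)^n, for total f and g."""
    def h(x):                      # x is a tuple of n blocks, each a tuple of m bits
        assert len(x) == n and all(len(block) == m for block in x)
        return f(tuple(g(block) for block in x))
    return h

# Illustrative choices: f = parity on n bits, g = AND on m bits.
n, m = 3, 2
f = lambda b: sum(b) % 2
g = lambda y: int(all(y))

h = compose(f, g, n, m)
print(h(((1, 1), (0, 1), (1, 1))))   # g-values are (1, 0, 1), so the parity is 0
```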
We will often view a deterministic query algorithm as a binary decision tree. In each vertex $v$ of the tree, an input variable is queried. Depending on the outcome of the query, the computation goes to a child of $v$. The child of $v$ corresponding to outcome $b$ to the query made is denoted by $v_b$.
It is well known that the set of inputs that lead the computation of a decision tree to a certain vertex forms a subcube. We will use the same symbol (e.g. $v$) to refer to a vertex as well as the subcube associated with it.
The depth of a vertex $v$ in a tree is the number of vertices on the unique path from the root of the tree to $v$ in the tree. Thus, the depth of the root is $1$.
Let ${\mathcal{A}}$ be a decision tree on $m$ bits. Let $\eta_0$ and $\eta_1$ be two probability distributions with disjoint supports. Let $v$ be a vertex in ${\mathcal{A}}$. Let variable $x_i$ be queried at $v$. Then, $$\Delta^{(v)}:=\left\{\begin{array}{ll}|\Pr_{x \sim \eta_0} [x_i=0]-\Pr_{x \sim \eta_1} [x_i=0]| & \mbox{if $v \neq \bot$.} \\ 1 & \mbox{if $v=\bot$.} \end{array}\right.$$
Note that $\Delta^{(v)}$ is defined with respect to distributions $\eta_0$ and $\eta_1$. In our application, we will often consider a decision tree ${\mathcal{A}}$, a partial Boolean function $g$ and a probability distribution $\mu$ over the inputs. $\Delta^{(v)}$, for a vertex $v$ of ${\mathcal{A}}$, will then be assumed to be with respect to the distributions $(\mu_b \mid v)_{b \in \{0,1\}}$.
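A small sketch of how $\Delta^{(v)}$ is computed at a vertex querying coordinate $x_i$, given two distributions with disjoint supports (in our application, $\mu_0 \mid v$ and $\mu_1 \mid v$); the distributions below are placeholders.

```python
def delta_at_vertex(eta0, eta1, i):
    """|Pr_{eta0}[x_i = 0] - Pr_{eta1}[x_i = 0]| for distributions given as {bit-tuple: prob}."""
    p0 = sum(p for x, p in eta0.items() if x[i] == 0)
    p1 = sum(p for x, p in eta1.items() if x[i] == 0)
    return abs(p0 - p1)

# Placeholder distributions on {0,1}^2 with disjoint supports.
eta0 = {(0, 0): 0.5, (0, 1): 0.5}       # e.g. mu_0 restricted to the current subcube
eta1 = {(1, 0): 0.75, (1, 1): 0.25}     # e.g. mu_1 restricted to the current subcube
print(delta_at_vertex(eta0, eta1, 0))   # querying x_0 separates them completely: 1.0
print(delta_at_vertex(eta0, eta1, 1))   # querying x_1 gives |0.5 - 0.75| = 0.25
```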
\[mutin\] Let ${\mathcal{A}}$ be a decision tree on $m$ bits. Let $g$ be a partial Boolean function. Let $x \in \{0,1\}^m$ be sampled from a distribution $\mu$. Let $v$ be a vertex in ${\mathcal{A}}$. Let variable $x_i$ be queried at $v$. Then, $$\I_\mu(g(x) : x_i \mid x \in v) = \I_{\mu \mid v}(g(x) : x_i)\geq 32 \left(\Pr_{x \sim \mu \mid v}[g(x)=0] \cdot \Pr_{x \sim \mu \mid v}[g(x)=1] \cdot \Delta^{(v)}\right)^2,$$ where $\Delta^{(v)}$ is with respect to the distributions $(\mu_b \mid v)_{b \in \{0,1\}}$.
Define $b:=g(x)$. Condition on the event $x \in v$. Let $(b \otimes x_i)$ be the distribution over pairs of bits, where the bits are distributed independently according to the distributions of $b$ and $x_i$ respectively. We use the equivalence: $\I(b : x_i)=\Div((b,x_i) || (b \otimes x_i))$. Now, an application of *Pinsker’s inequality* implies that
$$\begin{aligned}
\label{fst}\Div((b,x_i) || (b \otimes x_i)) \geq 2 ||(b,x_i)-(b \otimes x_i)||^2_1.\end{aligned}$$
Next, we bound $||(b,x_i)-(b \otimes x_i)||_1$. To this end, we fix bits $z_1, z_2 \in \{0,1\}$, and bound $|\Pr[(b,x_i)=(z_1,z_2)]-\Pr[(b \otimes x_i)=(z_1,z_2)]|$. We have that, $$\begin{aligned}
\label{t1} \Pr[(b,x_i)=(z_1,z_2)]&=\Pr[b=z_1]\Pr[x_i=z_2 \mid b = z_1].\end{aligned}$$ Now, $$\begin{aligned}
\label{t2} \Pr[(b \otimes x_i)=(z_1,z_2)]&=\Pr[b=z_1]\Pr[x_i=z_2] \nonumber \\
&=\Pr[b=z_1](\Pr[b=z_1]\Pr[x_i=z_2 \mid b=z_1]+&\nonumber \\
& \qquad \qquad \qquad \qquad \qquad \Pr[b=\overline{z_1}]\Pr[x_i=z_2 \mid b=\overline{z_1}]).\end{aligned}$$ Taking the absolute difference of (\[t2\]) and (\[t1\]) we have that, $$\begin{aligned}
&|\Pr[(b,x_i)=(z_1,z_2)]-\Pr[(b \otimes x_i)=(z_1,z_2)]| \nonumber \\
&=\Pr[b=z_1] \cdot \Pr[b=\overline{z_1}] \cdot \Delta^{(v)}=\Pr[b=0] \cdot \Pr[b=1] \cdot \Delta^{(v)}\label{fin}\end{aligned}$$ The Claim follows by adding (\[fin\]) over $z_1, z_2$ and using (\[fst\]).
Conflict Complexity {#cc}
===================
In this section, we introduce a randomized process ${\mathcal{P}}$ (formally given in Algorithm \[P\]). This process is going to play a central role in the proof of our composition theorem (Theorem \[mainb\]). Later in the section, we use ${\mathcal{P}}$ to define the *conflict complexity* of a partial Boolean function $g$.
Let $n>0$ be any integer and ${\mathcal{B}}$ be any deterministic query algorithm that runs on inputs in $(\{0,1\}^m)^n$. ${\mathcal{B}}$ can be thought of as just a query procedure that queries various input variables, and then terminates without producing any output. Let $x=(x_i^{(j)})_{{i=1, \ldots, n} \atop {j=1, \ldots, m}}$ be a generic input to ${\mathcal{B}}$, and $x_i$ stand for $(x_i^{(j)})_{j=1, \ldots, m}$. For a vertex $v$ of ${\mathcal{B}}, v^{(i)}$ denotes the subcube in $v$ corresponding to $x_i$, i.e., $v=\times_{i=1}^n v^{(i)}$. Recall from Section \[prelims\] that for $b \in \{0,1\}$, $v_b$ stands for the child of $v$ corresponding to the query outcome being $b$. Let $\mu_0$ and $\mu_1$ be any two probability distributions supported on $g^{-1}(0)$ and $g^{-1}(1)$ respectively. Let $z=(z_1, \ldots, z_n) \in \{0,1\}^n$ be arbitrary. Now consider the probabilistic process ${\mathcal{P}}$ given by Algorithm \[P\]. Note that ${\mathcal{P}}$ can be thought of as a randomized query algorithm on input $z \in \{0,1\}^n$, where a query to $z_i$ corresponds to an assignment of $0$ to $\mathsf{NOQUERY}_i$ in line \[query\]. This view of ${\mathcal{P}}$ will be adopted in Section \[comp\].
\[P\]
$v \gets $Root of ${\mathcal{B}}$ // Corresponds to $\{0,1\}^m$
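The following sketch reconstructs the process ${\mathcal{P}}$ of Algorithm \[P\] from the surrounding discussion and the case analysis in the proof of Claim \[samedistn\]: at a vertex querying a bit of $x_i$, the outcome is distributed as under $\mu_{z_i}$ restricted to the current subcube, and $\mathsf{NOQUERY}_i$ is set to $0$ (i.e., $z_i$ is queried) exactly when a uniform sample falls in the conflict window of width $\Delta^{(v)}$. The variable names and the exact bookkeeping of the counters $\mathsf{N}_i$ are our assumptions, not the original pseudocode.

```python
import random

def run_P(tree, z, mu0, mu1):
    """Hedged reconstruction of process P; a sketch, not the original pseudocode.

    tree : decision tree over ({0,1}^m)^n; a node is (i, j, child0, child1), a leaf is None,
           meaning that coordinate j of block x_i is queried at this node.
    z    : tuple in {0,1}^n;  mu0, mu1 : {m-bit tuple: prob} supported on g^{-1}(0), g^{-1}(1).
    Returns (noquery, N): noquery[i] == 0 iff z_i was 'queried'; N[i] counts queries made
    into block i while NOQUERY_i was still 1 (our guess at the bookkeeping of N_i).
    """
    n = len(z)
    same  = [dict(mu1 if z[i] else mu0) for i in range(n)]   # current mu_{z_i} | v^{(i)}
    other = [dict(mu0 if z[i] else mu1) for i in range(n)]   # current mu_{bar z_i} | v^{(i)}
    noquery, N = [1] * n, [0] * n

    def prob_zero(d, j):
        return sum(p for x, p in d.items() if x[j] == 0)

    def restrict(d, j, b):
        mass = sum(p for x, p in d.items() if x[j] == b)
        return {x: p / mass for x, p in d.items() if x[j] == b} if mass > 0 else {}

    node = tree
    while node is not None:
        i, j, child0, child1 = node
        p, q = prob_zero(same[i], j), prob_zero(other[i], j)
        r = random.random()
        if noquery[i] == 1:
            N[i] += 1
            if min(p, q) < r <= max(p, q):   # conflict window of width Delta^{(v)}
                noquery[i] = 0               # this is where z_i is 'queried'
        bit = 0 if r <= p else 1             # outcome distributed as under mu_{z_i} | v^{(i)}
        same[i], other[i] = restrict(same[i], j, bit), restrict(other[i], j, bit)
        node = child0 if bit == 0 else child1
    return noquery, N
```

For $n=1$ and a tree ${\mathcal{B}}$ that computes $g$, the count for the single block plays the role of the random variable ${\mathcal{N}}$ used below to define $\chi(\mu_0, \mu_1, {\mathcal{B}})$.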
We now prove an important structural result about ${\mathcal{P}}$ which will be used many times in our proofs. Consider the following distribution $\gamma_z$ on $(\{0,1\}^m)^n$: For each $i$, sample $x_i$ independently from $\mu_{z_i}$.
Let $v$ be a vertex of ${\mathcal{B}}$. Let $A_{\mathcal{B}}(v)$ be the event that process ${\mathcal{P}}$ reaches node $v$, and $B_{\mathcal{B}}(v)$ be the event that for a random input $x$ sampled from $\gamma_z$, the computation of ${\mathcal{B}}$ reaches node $v$.
\[samedistn\] For each vertex $v$ of ${\mathcal{B}}$, $$\Pr[A_{\mathcal{B}}(v)]=\Pr[B_{\mathcal{B}}(v)].$$
We prove the claim by induction on the depth $t$ of $v$, i.e., the number of vertices on the unique path from the root to $v$ in ${\mathcal{B}}$.
Base case:
: $t=1$. $v$ is the root of ${\mathcal{B}}$. Thus $\Pr[A_{\mathcal{B}}(v)]=\Pr[B_{\mathcal{B}}(v)]=1$.
Inductive step:
:   Assume that $t \geq 2$, and that the statement is true for all vertices at depth at most $t-1$. Since $t \geq 2$, $v$ is not the root of ${\mathcal{B}}$. Let $u$ be the parent of $v$, and variable $x_i^{(j)}$ be queried at $u$. Without loss of generality, assume that $v$ is the child of $u$ corresponding to $x_i^{(j)}=0$. We split the proof into the following two cases.
- [**Case 1:**]{} $\Pr_{x_i \sim \mu_{z_i}}[x_i^{(j)}=0 \mid x_i \in u_i] \leq \Pr_{x_i \sim \mu_{\overline{z_i}}}[x_i^{(j)}=0 \mid x_i \in u_i]$.
Condition on $A_{\mathcal{B}}(u)$ and $\nq_i=0$. The probability that ${\mathcal{P}}$ reaches $v$ is $\Pr_{x_i \sim \mu_{z_i}}[x_i^{(j)}=0 \mid x_i \in u_i]$. Now, condition on $A_{\mathcal{B}}(u)$ and $\nq_i=1$. The probability that ${\mathcal{P}}$ reaches $v$ is exactly equal to the probability that the real number $r$ sampled at $v$ lies in $[0, \Pr_{x_i \sim \mu_{z_i}}[x_i^{(j)}=0 \mid x_i \in u_i] ]$, which is equal to $\Pr_{x_i \sim \mu_{z_i}}[x_i^{(j)}=0 \mid x_i \in u_i]$. Thus, $$\begin{aligned}
\Pr[A_{\mathcal{B}}(v)]&=\Pr[A_{\mathcal{B}}(u)] \cdot \Pr[A_{\mathcal{B}}(v) \mid A_{\mathcal{B}}(u)] \nonumber \\
&=\Pr[A_{{\mathcal{B}}}(u)] \cdot \Pr_{x_i \sim \mu_{z_i}}[x_i^{(j)}=0 \mid x_i \in u_i]. \label{c1:one}\end{aligned}$$ Now condition on $B_{\mathcal{B}}(u)$. The probability that ${\mathcal{B}}$ reaches $v$ is exactly equal to the probability that $x_i^{(j)}=0$ when $x$ is sampled according to the distribution $\gamma_z$ conditioned on the event that $x \in u$. Note that in the distribution $\gamma_z$, the $x_k$’s are independently distributed. Thus, $$\begin{aligned}
\Pr[B_{\mathcal{B}}(v)]&=\Pr[B_{\mathcal{B}}(u)] \cdot \Pr[B_{\mathcal{B}}(v) \mid B_{\mathcal{B}}(u)] \nonumber \\
&=\Pr[B_{{\mathcal{B}}}(u)] \cdot \Pr_{x_i \sim \mu_{z_i}}[x_i^{(j)}=0 \mid x_i \in u_i]. \label{c1:two}\end{aligned}$$ By the inductive hypothesis, $\Pr[A_{{\mathcal{B}}}(u)]=\Pr[B_{{\mathcal{B}}}(u)]$. The claim follows from (\[c1:one\]) and (\[c1:two\]).
- [**Case 2:**]{} $\Pr_{x_i \sim \mu_{z_i}}[x_i^{(j)}=0 \mid x_i \in u_i] > \Pr_{x_i \sim \mu_{\overline{z_i}}}[x_i^{(j)}=0 \mid x_i \in u_i]$. Let $v'$ be the child of $u$ corresponding to $x_i^{(j)}=1$. By an argument similar to Case 1, we have that $$\begin{aligned}
\Pr[A_{\mathcal{B}}(v')]=\Pr[B_{\mathcal{B}}(v')]. \label{c2}\end{aligned}$$ Now, $$\begin{aligned}
\Pr[A_{\mathcal{B}}(v)] &=\Pr[A_{\mathcal{B}}(u)] - \Pr[A_{\mathcal{B}}(v')] \nonumber \\
&= \Pr[B_{{\mathcal{B}}}(u)] - \Pr[A_{\mathcal{B}}(v')] \mbox{\ \ \ \ \ (By inductive hypothesis)} \nonumber \\
&= \Pr[B_{\mathcal{B}}(u)] - \Pr[B_{\mathcal{B}}(v')] \mbox{\ \ \ \ \ \ (By (\ref{c2}))} \nonumber \\
&= \Pr[B_{\mathcal{B}}(v)]. \nonumber\end{aligned}$$
Let $n=1, z \in \{0,1\}$, and ${\mathcal{B}}$ be a decision tree that computes $g$. Consider process ${\mathcal{P}}$ on ${\mathcal{B}}, \mu_0, \mu_1, z$. Note that $\nq_1$ is set to $0$ with probability $1$. To see this, observe that as long as $\nq_1=1$, the current subcube $v$ contains strings from the supports of both $\mu_0$ and $\mu_1$, and hence from both $g^{-1}(0)$ and $g^{-1}(1)$. If $\nq_1$ is not set to $0$ for the entire run of ${\mathcal{P}}$, then there exist inputs $x \in g^{-1}(0), x' \in g^{-1}(1)$ which belong to the same leaf of ${\mathcal{B}}$, contradicting the hypothesis that ${\mathcal{B}}$ computes $g$. Let the random variable ${\mathcal{N}}$ stand for the value of the variable ${\mathsf{N}}_1$ after the termination of ${\mathcal{P}}$. Note that ${\mathcal{N}}$ is equal to the index of the iteration of the while loop in which $\nq_1$ is set to $0$. The distribution of ${\mathcal{N}}$ depends on $\mu_0, \mu_1$ and ${\mathcal{B}}$, which in our applications will either be clear from the context, or clearly specified. Note that the distribution of ${\mathcal{N}}$ is independent of the value of $z$.
The *conflict complexity* of a partial Boolean function $g$ with respect to distributions $\mu_0$ and $\mu_1$ supported on $g^{-1}(0)$ and $g^{-1}(1)$ respectively, and decision tree ${\mathcal{B}}$ computing $g$, is defined as: $$\chi (\mu_0, \mu_1, {\mathcal{B}})=\E[{\mathcal{N}}].\footnote{As observed before, the choices of $\mu_0, \mu_1$ and ${\mathcal{B}}$ are built into the definition of ${\mathcal{N}}$.}$$ The conflict complexity of $g$ is defined as: $$\chi(g)=\max_{\mu_0, \mu_1} \min_{\mathcal{B}}\chi(\mu_0, \mu_1, {\mathcal{B}}),$$ where the maximum is over distributions $\mu_0$ and $\mu_1$ supported on $g^{-1}(0)$ and $g^{-1}(1)$ respectively, and the minimum is over decision trees ${\mathcal{B}}$ computing $g$.
For a pair $(\mu_0, \mu_1)$ of distributions, let ${\mathcal{B}}$ be the decision tree computing $g$ such that $\E[{\mathcal{N}}]$ is minimized. We call such a decision tree an *optimal* decision tree for $\mu_0, \mu_1$. We conclude this section by making an important observation about the structure of optimal decision trees. Let $v$ be any node of ${\mathcal{B}}$. Let $\mu_0':=\mu_0 \mid v$ and $\mu_1':=\mu_1 \mid v$. Let ${\mathcal{B}}_v$ denote the subtree of ${\mathcal{B}}$ rooted at $v$. We observe that ${\mathcal{B}}_v$ is an optimal tree for $\mu_0'$ and $\mu_1'$; if it is not then we could replace it by an optimal tree for $\mu_0'$ and $\mu_1'$, and for the resultant tree, the expected value of ${\mathcal{N}}$ with respect to $\mu_0$ and $\mu_1$ will be smaller than that in ${\mathcal{B}}$. This will contradict the optimality of ${\mathcal{B}}$. This recursive sub-structure property of optimal trees will be helpful to us.
Conflict Complexity and Randomized Query Complexity
===================================================
In this section, we will prove Theorem \[maina\] (restated below).
\[hibias\] We will bound the distributional query complexity of $g$ for each input distribution $\mu$ with respect to error $47/95<1/2$, $\D_{47/95}^\mu (g)$, from above by $O(\chi(g)^2)$. Theorem \[maina\] will follow from the *minimax principle* (Fact \[minmax\]), and the observation that the error can be brought down to $1/3$ by a constant number of independent repetitions followed by taking the majority of the answers. It is enough to consider distributions $\mu$ supported on valid inputs of $g$. To this end, fix a distribution $\mu$ supported only on $g^{-1}(0) \cup g^{-1}(1)$.
Let $\chi(g)=d$. Let $\mu_b$ be the distribution obtained by conditioning $\mu$ on the event $g(x)=b$. Let ${\mathcal{B}}$ be an optimal decision tree for distributions $\mu_0$ and $\mu_1$. Clearly $\E[{\mathcal{N}}] \leq \chi(g) = d$.
We first prove some structural results about ${\mathcal{B}}$. Let ${\mathcal{B}}$ be run on a random input $x$ sampled according to $\mu$. Let $v_t$ be the random vertex at which the $t$-th query is made; if ${\mathcal{B}}$ terminates before making $t$ queries, define $v_t:=\bot$. Let ${\mathcal{E}}$ be any event which is a collection of possible transcripts of ${\mathcal{B}}$, such that $\Pr[{\mathcal{E}}] \geq \frac{3}{4}$. Recall from Section \[prelims\] that for any vertex $v$ of ${\mathcal{B}}$, $\Delta^{(v)}$ is assumed to be with respect to the distributions $(\mu_b \mid v)_{b \in \{0,1\}}$.
\[sumofdelta\] $$\sum_{t=1}^{10d} \E[\Delta^{(v_t)} \mid {\mathcal{E}}] \geq \frac{13}{20}.$$
Let us sample vertices $u_t$ of ${\mathcal{B}}$ as follows:
1. Set $z=\left\{\begin{array}{ll}
1 & \mbox{with probability $\Pr_{x \sim \mu}[g(x)=1]$}, \\
0 & \mbox{with probability $\Pr_{x \sim \mu}[g(x)=0]$}
\end{array}\right.$
2. Run process ${\mathcal{P}}$ for ${\mathcal{B}}, \mu_0, \mu_1, z$.
3. Let $u_t$ be the vertex $v$ in the beginning of the $t$-th iteration of the *while* loop of Algorithm \[P\]. Return $(u_t)_{t=1,\ldots}$. If the simulation stops after $i$ iterations, set $u_t:=\bot$ for all $t > i$.
By Claim \[samedistn\], and since $z$ has the same distribution as that of $g(x)$ where $x$ is sampled from $\mu$, the vertices $u_t$ and $v_t$ have the same distribution. In the above sampling process, for each $t=1, \ldots, 10d$, let $E_t$ be the event that $\nq_1=1$ at the beginning of the $t$-th iteration of the *while* loop of Algorithm \[P\]. Conditioned on ${\mathcal{E}}$, the probability that $\nq_1$ is set to $0$ in the $t$-th iteration is $\Pr[E_t \mid {\mathcal{E}}] \cdot \E[\Delta^{(u_t)} \mid E_t, {\mathcal{E}}]$[^2]. By the union bound we have that, $$\begin{aligned}
\sum_{t=1}^{10d} \E[\Delta^{(v_t)} \mid {\mathcal{E}}]&=\sum_{t=1}^{10d} \E[\Delta^{(u_t)} \mid {\mathcal{E}}] \nonumber \\
&\geq \sum_{t=1}^{10d}\Pr[E_t \mid {\mathcal{E}}] \cdot \E[\Delta^{(u_t)} \mid E_t, {\mathcal{E}}] \nonumber \\
&\geq \Pr\left[\overline{\bigcap_{t=1}^{10d} E_t} \mid {\mathcal{E}}\right] \nonumber \\
&\geq \Pr\left[\overline{\bigcap_{t=1}^{10d} E_t}\right] - \Pr[\overline{{\mathcal{E}}}]. \label{delbound1}\end{aligned}$$ Now, since $\E[{\mathcal{N}}] \leq \chi(g) = d$, we have by *Markov’s inequality* that the probability that the process ${\mathcal{P}}$, when run for ${\mathcal{B}}, \mu_0, \mu_1$ and the random bit $z$ generated as above[^3], sets $\nq_1$ to $0$ within the first $10d$ iterations of the *while* loop, is at least $9/10$. Thus we have that, $$\begin{aligned}
\Pr\left[\overline{\bigcap_{t=1}^{10d} E_t}\right] \geq \frac{9}{10}. \label{delbound2}\end{aligned}$$ The claim follows from (\[delbound1\]), (\[delbound2\]) and the hypothesis $\Pr[{\mathcal{E}}] \geq \frac{3}{4}$.
The next claim follows from Claim \[sumofdelta\] and the recursive sub-structure property of optimal trees discussed in the last paragraph of Section \[cc\].
\[sumofdelta2\] Let $i$ be any positive integer. Then, $$\sum_{t=1}^{10di} \E[\Delta^{(v_t)} \mid {\mathcal{E}}] \geq \frac{13i}{20}.$$
Notice that if ${\mathcal{B}}$ terminates before making $t$ queries, $v_t=\bot$ and $\Delta^{(v_t)}=1$.
For $j=0,\ldots, i-1$, let $w$ be any vertex at depth $10jd+1$. Consider the subtree $\mathsf{T}$ of ${\mathcal{B}}$ rooted at $w$. By the recursive sub-structure property of ${\mathcal{B}}$, $\mathsf{T}$ is an optimal tree for distributions $\mu_0':=\mu_0 \mid w, \mu_1':=\mu_1 \mid w$. Let $w_t$ be the random vertex at depth $t$ of $\mathsf{T}$, when $\mathsf{T}$ is run on a random input from $\mu \mid w$. By Claim \[sumofdelta\], we have that, $$\begin{aligned}
\sum_{t=1}^{10d} \E[\Delta^{(w_t)} \mid {\mathcal{E}}] \geq \frac{13}{20}. \label{onetree}\end{aligned}$$ In (\[onetree\]), $\Delta^{(w_t)}$ is with respect to distributions $\mu'_0 \mid w_t=\mu_0 \mid w_t, \mu'_1 \mid w_t=\mu_1 \mid w_t$. Now, when $w$ is the random vertex $v_{10jd+1}$, $w_t$ is the random vertex $v_{10jd+t}$. Thus from (\[onetree\]) we have that, $$\begin{aligned}
\sum _{t=10jd+1}^{10(j+1)d} \E[\Delta^{(v_t)} \mid {\mathcal{E}}] \geq \frac{13}{20}. \label{oneslab}\end{aligned}$$ The claim follows by adding (\[oneslab\]) over $j=0, \ldots, i-1$.
We now finish the proof of Theorem \[maina\] by showing that $\D_{47/95}^\mu(g) = O(d^2)$. Let $x$ be distributed according to $\mu$, and ${\mathcal{B}}$ be run on $x$. Let $\mathsf{BIASED}$ denote the event that in at most $10d^2$ queries, the computation of ${\mathcal{B}}$ reaches a vertex $v$ for which $\Pr_{x \sim \mu}[g(x)=0 \mid x \in v]\cdot\Pr_{x \sim \mu}[g(x)=1 \mid x \in v] \leq \frac{1}{9}$. Let $\mathsf{STOP}$ denote the event that ${\mathcal{B}}$ terminates after making at most $10d^2$ queries. Let ${\mathcal{E}}:=\overline{\mathsf{BIASED} \vee\mathsf{STOP}}$.
Consider the following decision tree ${\mathcal{B}}'$: Start simulating ${\mathcal{B}}$. Terminate the simulation if one of the following events occurs. The output in each case is specified below.
1. (*Event $\mathsf{STOP}$*) If ${\mathcal{B}}$ terminates, terminate and output what ${\mathcal{B}}$ outputs. \[e1\]
2. If $10d^2$ queries have been made and the computation is at a vertex $v$, terminate and output $\arg \max_b \Pr[g(x)=b \mid x \in v]$. \[e2\]
By construction, ${\mathcal{B}}'$ makes at most $10d^2$ queries in the worst case. We shall show that $\Pr_{x \sim \mu}[{\mathcal{B}}'(x)\neq g(x)] \leq \frac{47}{95} < \frac{1}{2}$. This will prove Theorem \[maina\].
We split the proof into the following two cases.
Case $1$:
: $\Pr[\overline{{\mathcal{E}}}] \geq \frac{1}{4}$.
First, condition on the event that the computation reaches a vertex $v$ for which $\Pr_{x \sim \mu}[g(x)=0 \mid x \in v]\cdot\Pr_{x \sim \mu}[g(x)=1 \mid x \in v] \leq \frac{1}{9}$ holds. Thus one of $\Pr_{x \sim \mu}[g(x)=0 \mid x \in v]$ and $\Pr_{x \sim \mu}[g(x)=1 \mid x \in v]$ is at most $1/3$. Hence, $|\Pr_{x \sim \mu}[g(x)=0 \mid x \in v]-\Pr_{x \sim \mu}[g(x)=1 \mid x \in v]| \geq 2/3$. Let $m$ be the random leaf of the subtree of ${\mathcal{B}}'$ rooted at $v$ at which the computation ends. The probability that ${\mathcal{B}}'$ errs is at most $$\begin{aligned}
&\E_{x \sim \mu \mid v}\left[\frac{1}{2}-\frac{1}{2}\left|\Pr_{x \sim \mu}[g(x)=0 \mid x \in m]-\Pr_{x \sim \mu}[g(x)=1 \mid x \in m]\right|\right]. \\
& \leq \frac{1}{2}-\frac{1}{2} \left|\E_{x \sim \mu \mid v}\Pr_{x \sim \mu}[g(x)=0 \mid x \in m]-\E_{x \sim \mu \mid v}\Pr_{x \sim \mu}[g(x)=1 \mid x \in m]\right| \\
& \qquad \qquad \qquad \qquad \mbox{\ \ \ \ (By Jensen's inequality)} \\
&=\frac{1}{2}-\frac{1}{2}\left|\Pr_{x \sim \mu}[g(x)=0 \mid x \in v]-\Pr_{x \sim \mu}[g(x)=1 \mid x \in v]\right| \leq \frac{1}{3}.\end{aligned}$$
Then, condition on the event $\mathsf{STOP}$. In this case ${\mathcal{B}}'$ outputs what ${\mathcal{B}}$ outputs, and since ${\mathcal{B}}$ computes $g$, the probability that ${\mathcal{B}}'$ errs is $0 \leq \frac{1}{3}$.
Thus we have shown that conditioned on $\overline{{\mathcal{E}}}$ the probability that ${\mathcal{B}}'$ errs is at most $\frac{1}{3}$. Thus the probability that ${\mathcal{B}}'$ errs is at most $\frac{1}{4}\cdot \frac{1}{3}+\frac{3}{4}\cdot\frac{1}{2} = \frac{11}{24}<\frac{47}{95}$.
Case $2$:
: $\Pr[\overline{{\mathcal{E}}}] < \frac{1}{4}$.
By Claim \[sumofdelta2\] we have that $$\begin{aligned}
\sum_{t=1}^{10d^2} \E[\Delta^{v^{(t)}} \mid {\mathcal{E}}] \geq \frac{13d}{20}. \label{deltabound}\end{aligned}$$ Let $a_i:=(x_i, b_i)$ be the tuple formed by the random input variable $x_i$ queried at the $i$-th step by ${\mathcal{B}}'$, and the outcome $b_i$ of the query; if ${\mathcal{B}}'$ terminates before $i$-th step, $a_i:=\bot$. Notice that the vertex $v_i$ at which the $i$-th query is made is determined by $(a_1, \ldots, a_{i-1})$ and vice versa. We have, $$\begin{aligned}
&\I(a_1, \ldots, a_{10d^2}:g(x)) \nonumber \\
&= \sum_{i=1}^{10d^2} \I(a_i:g(x) \mid a_1, \ldots, a_{i-1}) \mbox{\ \ \ \ (Chain rule of mutual information)}\nonumber \\
&= \sum_{i=1}^{10d^2} \I(b_i:g(x) \mid v_i) \nonumber \\
& \geq 32 \sum_{i=1}^{10d^2} \E \left[\mathbf{1}_{v_i \neq \bot}\cdot\left[\Pr[g(x)=0 \mid x \in v_i] \cdot \Pr[g(x)=1 \mid x \in v_i] \cdot \Delta^{(v_i)}\right]^2\right] \nonumber \\
&\qquad \qquad \qquad \qquad \mbox{\ \ \ (From Claim~\ref{mutin})} \nonumber \\
&\geq 32 \sum_{i=1}^{10d^2} \Pr[{\mathcal{E}}] \cdot \E\left[\left[\Pr[g(x)=0 \mid x \in v_{i-1}] \cdot \Pr[g(x)=1 \mid x \in v_{i-1}] \cdot \Delta^{(v_i)}\right]^2 \mid {\mathcal{E}}\right] \nonumber \\
&\qquad \qquad \qquad \qquad \mbox{\ \ \ \ (Conditioned on ${\mathcal{E}}, v_i \neq \bot$)} \nonumber \\
&\geq 32 \sum_{i=1}^{10d^2} \frac{3}{4} \cdot \frac{1}{9} \cdot \E[{\Delta^{(v_i)}}^2 \mid {\mathcal{E}}] \nonumber \\
&= \frac{8}{3}\sum_{i=1}^{10d^2} \E[{\Delta^{(v_i)}}^2 \mid {\mathcal{E}}] \mbox{\ \ \ \ \ \ (By the assumption $\Pr[\overline{{\mathcal{E}}}] \leq \frac{1}{4}$ )} \nonumber \\
&\geq \frac{8}{3} \cdot \frac{1}{10d^2} \left(\sum_{i=1}^{10d^2} \E[\Delta^{(v_i)} \mid {\mathcal{E}}]\right)^2 \mbox{(By Cauchy-Schwarz inequality)} \nonumber \\
&\geq \frac{1}{10}.\mbox{\ \ \ \ (From~(\ref{deltabound}))}\label{infbound}\end{aligned}$$ Hence, from (\[infbound\]) we have $$\begin{aligned}
\Hen(g(x) \mid a_1, \ldots, a_{10d^2}) \leq 1-\frac{1}{10}=\frac{9}{10}. \label{enbound}\end{aligned}$$ Let ${\mathcal{L}}$ be the set of leaves $\ell$ of ${\mathcal{B}}'$ such that $\Hen(g(x) \mid \ell) \leq \frac{19}{20}$. For each $\ell \in {\mathcal{L}}$, $\min_b \Pr_{x \sim \mu}[g(x)=b \mid x \in \ell] \leq \frac{2}{5}$. Conditioned on $(a_1, \ldots, a_{10d^2}) \in {\mathcal{L}}$, the probability that ${\mathcal{B}}'$ errs is at most $\frac{2}{5}$. By *Markov’s inequality* and (\[enbound\]), it follows that $\Pr[(a_1, \ldots, a_{10d^2}) \in {\mathcal{L}}] \geq \frac{1}{19}$. Thus ${\mathcal{B}}'$ errs with probability at most $\frac{1}{19}\cdot \frac{2}{5}+\frac{18}{19}\cdot \frac{1}{2}=\frac{47}{95}$.
The Composition Theorem {#comp}
=======================
In this section we prove Theorem \[mainb\] (restated below).
We shall prove that for each distribution $\eta$ on the inputs to $f$, there is a query algorithm ${\mathcal{A}}$ making $O(\R(f \circ g^n) /\chi(g))$ queries in the worst case, for which $\Pr_{z \sim \eta}[(z,{\mathcal{A}}(z)) \in f] \geq \frac{5}{9}$ holds. This will imply the theorem by *Yao’s minimax principle*. To this end, let us fix a distribution $\eta$ over $\{0,1\}^n$.
Let $\chi(g)=d$. Thus, there is a *hard* pair of distributions $\mu_0, \mu_1$, supported on $g^{-1}(0)$ and $g^{-1}(1)$ respectively, such that for every decision tree ${\mathcal{B}}$ that computes $g$, $\chi(\mu_0, \mu_1, {\mathcal{B}}) \geq d$. We will use distributions $\eta, \mu_0$ and $\mu_1$ to set up a distribution $\gamma_\eta$ over the input space of $f \circ g^n$. For a fixed $z=(z_1, \ldots, z_n) \in \{0,1\}^n$, we recall the distribution $\gamma_z$ over $\left(\{0,1\}^m\right)^n$ from Section \[cc\]. $\gamma_z$ is given by the following sampling procedure:
1. For $i=1, \ldots, n$, sample $x_i=(x_i^{(j)})_{j=1, \ldots, m}$ from $\mu_{z_i}$ independently for each $i$.
2. return $x=(x_i)_{i=1, \ldots, n}$.
Now, let $\gamma_\eta$ be the distribution over $\left(\{0,1\}^m\right)^n$ that is given by the following sampling procedure:
1. Sample $z =(z_1, \ldots, z_n)$ from $\eta$.
2. Sample $x=(x_i)_{i=1, \ldots, n}$ from $\gamma_z$. Return $x$.
Observe that for each $z, x$ sampled as above, for each $s \in \mathcal{S}$, $(z,s) \in f$ *if and only if* $(x,s) \in f \circ g^n$.
Assume that $\R_{1/3}(f \circ g^n)=t$. Yao’s minimax principle implies that there is a deterministic query algorithm ${\mathcal{A}}'$ for inputs from $\left(\{0,1\}^m\right)^n$, that makes at most $t$ queries in the worst case, such that $\Pr_{x \sim \gamma_\eta}[(x,{\mathcal{A}}'(x)) \in f \circ g^n] \geq \frac{2}{3}$. We will first use ${\mathcal{A}}'$ to construct a randomized algorithm $T$ for $f$, whose accuracy is as desired, and for which the expected number of queries made is small.
\[T\]
$v \gets $Root of ${\mathcal{A}}'$ // Corresponds to $\{0,1\}^m$
$T$, described formally in Algorithm \[T\], is essentially viewing the process ${\mathcal{P}}$ for $z, \mu_0, \mu_1, {\mathcal{A}}'$ as a query algorithm running on input $z$; an assignment of $0$ to $\nq_i$ corresponds to a query to $z_i$. By Claim \[samedistn\], we have that for each $z \in \{0,1\}^n$, $\Pr[(z, T(z)) \in f]=\Pr_{x \sim \gamma_z}[(x,{\mathcal{A}}'(x)) \in f \circ g^n]$. Thus, $\Pr_{z \sim \eta}[(z, T(z)) \in f]=\Pr_{x \sim \gamma_\eta}[(x,{\mathcal{A}}'(x)) \in f \circ g^n] \geq \frac{2}{3}$.
We now bound the expected number of queries made by $T$ on each $z$. For doing that we consider the following randomized process $Q$ that acts on $z$. Let ${\mathcal{B}}$ be an optimal tree for distributions $\mu_0, \mu_1$. $Q$ is described formally in Algorithm \[Q\].
\[Q\]
Run $T$ on $z$. \[runT\]
Since ${\mathcal{B}}$ computes $g$, process $Q$ is guaranteed to set $\nq_i$ to $0$ for each $i$. In steps \[runT\] and \[runP\], the process ${\mathcal{P}}$ is run with trees ${\mathcal{A}}'$ and ${\mathcal{B}}$, and the trees make queries inside the for loop of ${\mathcal{P}}$. These queries can be thought of as being made to an $mn$-bit string $(x_i^{(j)})_{{i=1, \ldots, n}\atop{j=1, \ldots, m}}$. Let the random variable $X_i$ stand for the total number of queries made by these trees in $x_i$. $X=\sum_{i=1}^n X_i$ is the total number of queries in $Q$, i.e., the total number of iterations of the for loop of ${\mathcal{P}}$ in all the runs of ${\mathcal{P}}$ in $Q$. The next claim bounds $\E X$ from below.
\[dp\] $$\E X \geq nd.$$
Towards a contradiction, assume that $\E X < nd$. Thus there exists an $i$ such that $\E X_i < d$. Notice that this expectation is over the random real numbers sampled in the for loop of ${\mathcal{P}}$. Thus, there exists a fixing of those real numbers $r$ that are sampled in those iterations of the for loop of ${\mathcal{P}}$ that correspond to queries into $x_j$ for $j \neq i$, such that conditioned on that fixing, $\E X_i < d$. However, under that fixing, process $Q$ is equivalent to process ${\mathcal{P}}$ for some deterministic decision tree $T'$ that computes $g(x_i)$ (since $\nq_i$ is set to $0$ with probability $1$), $\mu_0, \mu_1$ and $z_i$. Thus $\E X_i < d$ conditioned on the above-mentioned fixing of randomness contradicts the assumption that $\min_{\mathcal{B}}\chi(\mu_0, \mu_1, {\mathcal{B}})=\chi(g)=d$, where the minimum is taken over all deterministic decision trees ${\mathcal{B}}$ that compute $g$.
Now, let $Y$ denote the size of the random set $\{i \mid \nq_i \mbox{ is set to $0$ in step~\ref{runT} in $Q$}\}$. Now, conditioned on the event $Y=b$, the expected number of queries made in step \[runP\] of $Q$ is $(n-b)d=nd-bd$. So under this conditioning the total number of queries $X$ made by $Q$ is at most $t+nd-bd$. Taking expectation over $b$, and using Claim \[dp\], we have that $$\begin{aligned}
t + nd - d \cdot \E Y \geq nd \Longrightarrow \E Y \leq \frac{t}{d}. \nonumber\end{aligned}$$ Observing that for each $z$, $Y$ has the same distribution as the number of queries made by $T$ when run on $z$, we conclude that for each $z$, $T$ makes at most $t/d$ queries in expectation. By Markov’s inequality, the probability that $T$ makes more than $9t/d$ queries is at most $1/9$. Thus the probabilistic algorithm ${\mathcal{A}}''$ obtained by terminating $T$ after $10t/d$ queries computes $f$ with probability at least $2/3 - 1/9=5/9 > 1/2$ on a random input from $\eta$. By fixing the randomness of ${\mathcal{A}}''$ appropriately, we get a deterministic algorithm ${\mathcal{A}}$ of complexity $O(t/d)=O(\R(f \circ g^n)/\chi(g))$ such that $\Pr_{z \sim \eta}[(z,{\mathcal{A}}(z)) \in f] \geq \frac{5}{9}$.
#### Acknowledgements.
I thank Rahul Jain for helpful discussions.
This material is based on research supported by the Singapore National Research Foundation under NRF RF Award No. NRF-NRFF2013-13.
[^1]: Division of Mathematical Sciences, Nanyang Technological University, Singapore and Centre for Quantum Technologies, National University of Singapore, Singapore. [<[email protected]>]{}
[^2]: Note that conditioned on $E_t$, $u_t \neq \bot$.
[^3]: Recall that the distribution of ${\mathcal{N}}$ is independent of $z$.
|
---
abstract: |
A mediator implements a correlated equilibrium when it proposes a strategy to each player confidentially such that it is in every player's best interest to follow the mediator's proposal. In this paper, we present a mediator that implements the best correlated equilibrium for an extended [El Farol game ]{}with symmetric players. The extended El Farol game we consider incorporates both negative and positive network effects.
We study the degree to which this type of mediator can decrease the overall social cost. In particular, we give an exact characterization of [*Mediation Value*]{} ([*MV*]{}) and [*Enforcement Value*]{} ([*EV*]{}) for this game. [*MV*]{} is the ratio of the minimum social cost over all Nash equilibria to the minimum social cost over all mediators of this type, and [*EV*]{} is the ratio of the minimum social cost over all mediators of this type to the optimal social cost. This sort of exact characterization is uncommon for games with both kinds of network effects. An interesting outcome of our results is that both the ${\emph{MV}}$ and ${\emph{EV}}$ values can be unbounded for our game.
Equilibria, Correlated Equilibria, Mediators and Network Effects.
author:
- Dieter Mitsche
- George Saad
- Jared Saia
bibliography:
- 'elfarol.bib'
title: '[The Power of Mediation in an Extended El Farol Game]{}'
---
Introduction
============
When players act selfishly to minimize their own costs, the outcome with respect to the total social cost may be poor. The Price of Anarchy [@Koutsoupias1999] measures the impact of selfishness on the social cost and is defined as the ratio of the worst social cost over all Nash equilibria to the optimal social cost. In a game with a high Price of Anarchy, one way to reduce the social cost is to find a mediator whose expected social cost is less than the social cost of any [Nash equilibrium]{}.
In the literature, there are several types of mediators [@Ashlagi:2007; @Diaz2009; @Forgo2010; @Forgo2010-2; @Monderer:2009; @Peleg:2007; @RT; @RT3; @RT2; @Tennenholtz:2008]. In this paper, we consider only the type of mediator that implements a correlated equilibrium (CE) [@Aumann].
A mediator is a trusted external party that suggests a strategy to every player separately and privately so that each player has no gain to choose another strategy assuming that the other players conform to the mediator’s suggestion.
The algorithm that the mediator uses is known to all players. However, the mediator’s random bits are unknown. We assume that the players are symmetric in the sense that they have the same utility function and the probability the mediator suggests a strategy to some player is independent of the identity of that player.
Ashlagi et al. [@AshlagiMT08] define two metrics to measure the quality of a mediator: the mediation value ([*MV*]{}) and the enforcement value ([*EV*]{}). In our paper, we compute these values, adapted for games where players seek to minimize the social cost. The [*Mediation Value*]{} is defined as the ratio of the minimum social cost over all Nash equilibria to the minimum social cost over all mediators. The [*Enforcement Value*]{} is the ratio of the minimum social cost over all mediators to the optimal social cost.
A mediator is optimal when its expected social cost is minimum over all mediators. Thus, the [*Mediation Value*]{} measures the quality of the optimal mediator with respect to the best [Nash equilibrium]{}; and the [*Enforcement Value*]{} measures the quality of the optimal mediator with respect to the optimal social cost.
El Farol Game {#sec:elfarol}
-------------
First we describe the traditional [El Farol game ]{}[@Arthur; @CPG; @CMO; @LAKUA]. El Farol is a tapas bar in Santa Fe. Every Friday night, a population of people decide whether or not to go to the bar. If too many people go, they will all have a worse time than if they stayed home, since the bar will be too crowded. That is a negative network effect [@David:2010].
Now we provide an extension of the traditional [El Farol game]{}, where both negative and positive network effects [@David:2010] are considered. The positive network effect is that if too few people go, those that go will also have a worse time than if they stayed home.
### Motivation.
Our motivation for studying this problem comes from the following discussion in [@David:2010].
*“It’s important to keep in mind, of course, that many real situations in fact display both kinds of \[positive and negative\] externalities - some level of participation by others is good, but too much is bad. For example, the El Farol Bar might be most enjoyable if a reasonable crowd shows up, provided it does not exceed 60. Similarly, an on-line social media site with limited infrastructure might be most enjoyable if it has a reasonably large audience, but not so large that connecting to the Web site becomes very slow due to the congestion."*
We note that our El Farol extension is one of the simplest, non-trivial problems for which a mediator can improve the social cost. Thus, it is useful for studying the power of a mediation.
### Formal Definition of the Extended El Farol Game.
![The individual cost to go $f(x)$.[]{data-label="fig:fgx"}](Figures/fgx1.pdf "fig:"){width="40.00000%"} ![The individual cost to go $f(x)$.[]{data-label="fig:fgx"}](Figures/fgx2.pdf "fig:"){width="40.00000%"}
We now formally define our game, which is non-atomic [@aumann1974values; @Schmeidler1973], in the sense that no individual player has significant influence on the outcome; moreover, the number of players is very large, tending to infinity. The [$(c,s_1,s_2)$-El Farol game ]{}has three parameters $c, s_1$ and $s_2$, where $0 < c < s_1$ and $s_2 > 0$. If $x$ is the fraction of players that go, then the cost $f(x)$ for any player to go is as follows: $$\label{eq:fgx}
f(x) = \left\{
\begin{array}{l l}
c- s_1 x & \quad \mbox{$0 \leq x \leq \frac{c}{s_1}$,}\\
s_2 (x - \frac{c}{s_1}) & \quad \mbox{$\frac{c}{s_1} \leq x \leq 1$.}\\
\end{array} \right.$$ and the cost to stay is 1. The function $f(x)$ is illustrated in the two plots of Figure \[fig:fgx\].
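A minimal sketch of the individual cost to go and of the resulting per-player social cost $x f(x) + (1-x)$; the parameter values are placeholders satisfying $0 < c < s_1$ and $s_2 > 0$.

```python
def f(x, c, s1, s2):
    """Individual cost to go when a fraction x of the players goes."""
    return c - s1 * x if x <= c / s1 else s2 * (x - c / s1)

def social_cost_per_player(x, c, s1, s2):
    """x f(x) + (1 - x): a fraction x pays f(x) to go, the rest pay 1 to stay."""
    return x * f(x, c, s1, s2) + (1 - x)

# Placeholder parameters with c < s1 and s2 > 0.
c, s1, s2 = 2.0, 4.0, 3.0
for x in (0.0, c / s1, 0.75, 1.0):
    print(f"x = {x:.2f}: f = {f(x, c, s1, s2):.3f}, cost/player = {social_cost_per_player(x, c, s1, s2):.3f}")
```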
### Our Contributions.
The main contributions of our paper are threefold:
- We design an optimal mediator, which implements the best correlated equilibrium for an extension of the [El Farol game ]{}with symmetric players. Notably, this extension incorporates both negative and positive network effects.
- We give an exact characterization of the [*Mediation Value*]{} ([*MV*]{}) and the [*Enforcement Value*]{} ([*EV*]{}) for our game.
- We show that both the ${\emph{MV}}$ and ${\emph{EV}}$ values can be unbounded for our game.
### Paper Organization.
In Section \[relatedwork\], we discuss the related work. Section \[sec:definitions\] states the definitions and notations that we use in the [El Farol game]{}. Our results are given in Section \[sec:our\_results\], where we show our main theorem that characterizes the best correlated equilibrium, and we compute accordingly the [*Mediation Value*]{} and the [*Enforcement Value*]{}. Finally, Section \[sec:conclusion\] concludes the paper and discusses some open problems.
Related Work {#relatedwork}
============
Mediation Metrics
-----------------
Christodoulou and Koutsoupias [@Christodoulou:2005] analyze the price of anarchy and the price of stability for Nash and correlated equilibria in linear congestion games. A consequence of their results is that the [*EV*]{} for these games is at least $1.577$ and at most $1.6$, and the [*MV*]{} is at most $1.015$.
Brandt et al. [@Brandt:2007] compute the mediation value and the enforcement value in ranking games. In a ranking game, every outcome is a ranking of the players, and each player strictly prefers high ranks over lower ones [@Brandt:2006]. They show that for the ranking games with $n>2$ players, ${\emph{EV}}= n-1$. They also show that ${\emph{MV}}= n-1$ for $n>3$ players, and for $n=3$ players where at least one player has more than two actions.
The authors of [@Diaz2009] design a mediator that implements a correlated equilibrium for a virus inoculation game [@ACY; @MSW]. In this game, there are $n$ players, each corresponding to a node in a square grid. Every player has either to inoculate itself (at a cost of $1$) or to do nothing and risk infection, which costs $L>1$. After each node decides to inoculate or not, one node in the grid selected uniformly at random is infected with a virus. Any node, $v$, that chooses not to inoculate becomes infected if there is a path from the randomly selected node to $v$ that traverses only uninoculated nodes. A consequence of their result is that [*EV*]{} is $\Theta(1)$ and [*MV*]{} is $\Theta((n/L)^{1/3})$ for this game.
Jiang et al. [@Xin13a] analyze the price of miscoordination (PoM) and the price of sequential commitment (PoSC) in security games, which are defined to be a certain subclass of Stackelberg games. A consequence of their results is that [*MV*]{} is unbounded in general security games and it is at least $4/3$ and at most $\frac{e}{e-1} \thickapprox 1.582$ in a certain subclass of security games.
We note that a poorly designed mediator can make the social cost worse than what is obtained from the Nash equilibria. Bradonjic et al. [@Bradonjic:2009] describe the *Price of Mediation* ($PoM$) which is the ratio of the social cost of the worst correlated equilibrium to the social cost of the worst [Nash equilibrium]{}. They show that for a simple game with two players and two possible strategies, $PoM$ can be as large as $2$. Also, they show for games with more players or more strategies per player that $PoM$ can be unbounded.
Finding and Simulating a Mediator
---------------------------------
Papadimitriou and Roughgarden [@Papadimitriou:2008] develop polynomial time algorithms for finding correlated equilibria in a broad class of succinctly representable multiplayer games. Unfortunately, their results do not extend to non-atomic games; moreover, they do not allow for direct computation of [*MV*]{} and [*EV*]{}, even when they can find the best correlated equilibrium.
Abraham et al. [@ADGH; @ADH] describe a distributed algorithm that enables a group of players to simulate a mediator. This algorithm works robustly with up to linear size coalitions, and up to a constant fraction of adversarial players. The result suggests that the concept of mediation can be useful even in the absence of a trusted external party.
Other Types of Mediators
------------------------
In all equilibria above, the mediator does not act on behalf of the players. However, a more powerful type of mediator is described in [@Ashlagi:2007; @Forgo2010; @Forgo2010-2; @Monderer:2009; @Peleg:2007; @RT; @RT3; @RT2; @Tennenholtz:2008], where a mediator can act on behalf of the players that give that right to it.
For multistage games, the notion of the correlated equilibrium is generalized to the communication equilibrium in [@Forges1986; @Myerson1986]. In a communication equilibrium, the mediator implements a multistage correlated equilibrium; in addition, it communicates with the players privately to receive their reports at every stage and selects the recommended strategy to each player accordingly.
Definitions and Notations {#sec:definitions}
=========================
Now we state the definitions and notations that we use in the [El Farol game]{}.
*A configuration* ${\ensuremath{C(x)}}$ specifies that a fraction $x$ of the players is advised to go and the remaining fraction $(1-x)$ is advised to stay.
*A configuration distribution $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_k)}},p_k)\}$* is a probability distribution over $k\geq 2$ configurations, where $({\ensuremath{C(x_i)}},p_i)$ represents that configuration ${\ensuremath{C(x_i)}}$ is selected with probability $p_i$, for $1\leq i \leq k$. Note that $0\leq x_i\leq 1$, $0<p_i<1$, $\sum^k_{i=1} p_i = 1$ and if $x_i=x_j$ then $i=j$ for $1\leq i,j\leq k$.
For any player $i$, let ${\ensuremath{\mathcal E}}^i_G$ be the event that player $i$ is advised to go, and $C^i_G$ be the cost for player $i$ to go (when all other players conform to the advice). Also let ${\ensuremath{\mathcal E}}^i_S$ be the event that player $i$ is advised to stay, and $C^i_S$ be the cost for player $i$ to stay. Since the players are symmetric, we will omit the index $i$.
A configuration distribution, $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_k)}},p_k)\}$, is a correlated equilibrium iff $$\begin{aligned}
{\ensuremath{{\mathbf{E}\left[C_S |{\ensuremath{\mathcal E}}_G\right]}}} \geq {\ensuremath{{\mathbf{E}\left[C_G |{\ensuremath{\mathcal E}}_G\right]}}}, \\
{\ensuremath{{\mathbf{E}\left[C_G |{\ensuremath{\mathcal E}}_S\right]}}} \geq {\ensuremath{{\mathbf{E}\left[C_S |{\ensuremath{\mathcal E}}_S\right]}}}.\end{aligned}$$
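In the symmetric, non-atomic setting, a player advised to go under configuration ${\ensuremath{C(x_i)}}$ pays $f(x_i)$ if it conforms, and by symmetry the probability of being advised to go under that configuration is $x_i$; the sketch below uses this reading of the definitions (our derivation, not a formula stated in the text) to check the two correlated-equilibrium inequalities for a candidate configuration distribution.

```python
def is_correlated_equilibrium(dist, c, s1, s2, tol=1e-9):
    """dist: list of (x_i, p_i) pairs, one per configuration C(x_i).

    Checks E[C_S | advised go] >= E[C_G | advised go] and
           E[C_G | advised stay] >= E[C_S | advised stay], with cost-to-stay = 1.
    """
    f = lambda x: c - s1 * x if x <= c / s1 else s2 * (x - c / s1)
    go_mass   = sum(p * x for x, p in dist)          # Pr[a player is advised to go]
    stay_mass = sum(p * (1 - x) for x, p in dist)    # Pr[a player is advised to stay]
    e_go_given_go   = sum(p * x * f(x) for x, p in dist) / go_mass
    e_go_given_stay = sum(p * (1 - x) * f(x) for x, p in dist) / stay_mass
    return e_go_given_go <= 1 + tol and e_go_given_stay >= 1 - tol

# Placeholder parameters (c < s1, s2 > 0) and two candidate two-configuration distributions.
c, s1, s2 = 2.0, 4.0, 3.0
print(is_correlated_equilibrium([(0.5, 2/3), (0.0, 1/3)], c, s1, s2))   # True
print(is_correlated_equilibrium([(0.5, 0.5), (0.9, 0.5)], c, s1, s2))   # False: stayers would rather go
```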
*A mediator* is a trusted external party that uses a configuration distribution to advise the players such that this configuration distribution is a correlated equilibrium. The set of configurations and the probability distribution are known to all players. The mediator selects a configuration according to the probability distribution. The advice the mediator sends to a particular player, based on the selected configuration, is known only to that player.
Throughout the paper, we let $n$ be the number of players.
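To make these definitions concrete, the following sketch (in Python; it is an illustration rather than part of the formal development, and the function names are ours) checks whether a given configuration distribution is a correlated equilibrium. It assumes the two-case cost to go $f(x)$ of Equation [(\[eq:fgx\])]{}, in the two-slope form used in the proofs below, with the cost to stay normalized to $1$; the conditional expected costs are computed as probability-weighted averages over the configurations.

```python
# Sketch (illustrative, not from the paper): correlated-equilibrium check for a
# configuration distribution D = {(C(x_1), p_1), ..., (C(x_k), p_k)}.
# Assumed cost to go, reconstructed from the two-slope form used in the proofs:
#   f(x) = c - s1*x        for 0 <= x <= c/s1,
#   f(x) = s2*(x - c/s1)   for c/s1 <= x <= 1;
# the cost to stay is normalized to 1.

def f(x, c, s1, s2):
    threshold = c / s1
    return c - s1 * x if x <= threshold else s2 * (x - threshold)

def is_correlated_equilibrium(dist, c, s1, s2, tol=1e-12):
    """dist: list of (x_i, p_i) with distinct x_i and probabilities summing to 1."""
    go_mass = sum(p * x for x, p in dist)          # Pr[a player is advised to go]
    stay_mass = sum(p * (1 - x) for x, p in dist)  # Pr[a player is advised to stay]
    if go_mass > tol:
        # E[C_G | advised to go] must not exceed E[C_S | advised to go] = 1
        e_go = sum(p * f(x, c, s1, s2) * x for x, p in dist) / go_mass
        if e_go > 1 + tol:
            return False
    if stay_mass > tol:
        # E[C_G | advised to stay] must be at least E[C_S | advised to stay] = 1
        e_stay = sum(p * f(x, c, s1, s2) * (1 - x) for x, p in dist) / stay_mass
        if e_stay < 1 - tol:
            return False
    return True

# Arbitrary example: a two-configuration distribution in a (2, 4, 10)-El Farol game.
print(is_correlated_equilibrium([(0.0, 0.3), (0.6, 0.7)], c=2, s1=4, s2=10))  # True
```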
Our Results {#sec:our_results}
===========
In our results, we assume that *the cost to stay* is $1$; we justify this assumption at the end of this section. Our first results in Lemmas \[lem:socialoptimum\] and \[lem:bestnash\] are descriptions of the optimal social cost and the minimum social cost over all Nash equilibria for our extended [El Farol game]{}. We next state our main theorem which characterizes the best correlated equilibrium and determines the [*Mediation Value*]{} and [*Enforcement Value*]{}.
\[lem:socialoptimum\] For any [$(c,s_1,s_2)$-El Farol game]{}, the optimal social cost is $({\ensuremath{y^*}}f({\ensuremath{y^*}})+(1-{\ensuremath{y^*}}))n$, where $${\ensuremath{y^*}}= \left\{
\begin{array}{l l}
\frac{1}{2}(\frac{c}{s_1}+\frac{1}{s_2}) & \quad \mbox{if $\frac{c}{s_1} \leq \frac{1}{2}(\frac{c}{s_1}+\frac{1}{s_2}) \leq 1$,}\\
\frac{c}{s_1} & \quad \mbox{if $ \frac{1}{s_2} < \frac{c}{s_1}$,}\\
1 & \quad \mbox{$otherwise$.}\\
\end{array} \right.$$
By Equation [(\[eq:fgx\])]{}, $f(x)$ has two cases. Let $f_1(x)$ be $f(x)$ for $x \in [0,\frac{c}{s_1}]$, and let $f_2(x)$ be $f(x)$ for $x \in [\frac{c}{s_1}, 1]$. Also let $h_1(x)$ be the social cost when $0 \leq x \leq \frac{c}{s_1}$, and let $h_2(x)$ be the social cost when $\frac{c}{s_1} \leq x \leq 1$. Thus, $h_1(x) = (xf_1(x)+(1-x))n$ and $h_2(x) = (xf_2(x)+(1-x))n$.
Since $h_1(x)$ is a concave quadratic in $x$, its minimum over $[0, \frac{c}{s_1}]$ is attained at an endpoint, and since $h_1(\frac{c}{s_1}) = (1-\frac{c}{s_1})n \leq n = h_1(0)$, we know that $h_1(x)$ is minimized at $x=\frac{c}{s_1}$. In addition, $h_2(x)$ is a convex quadratic function with respect to $x$, and thus it has one minimum over $x \in [\frac{c}{s_1}, 1]$ at $x={\ensuremath{y^*}}$, where: $${\ensuremath{y^*}}= \left\{
\begin{array}{l l}
\frac{1}{2}(\frac{c}{s_1}+\frac{1}{s_2}) & \quad \mbox{if $\frac{c}{s_1} \leq \frac{1}{2}(\frac{c}{s_1}+\frac{1}{s_2}) \leq 1$,}\\
\frac{c}{s_1} & \quad \mbox{if $ \frac{1}{2}(\frac{c}{s_1}+\frac{1}{s_2}) < \frac{c}{s_1}$,}\\
1 & \quad \mbox{$ otherwise$.}\\
\end{array} \right.$$
Let $h^*$ be the optimal social cost. Then $h^* = \min (h_1(\frac{c}{s_1}), h_2({\ensuremath{y^*}}))$. Since $f_1(\frac{c}{s_1}) = f_2(\frac{c}{s_1})$, we have $h_1(\frac{c}{s_1}) = h_2(\frac{c}{s_1})$. Hence, $h^* = \min (h_2(\frac{c}{s_1}), h_2({\ensuremath{y^*}}))$. Since ${\ensuremath{y^*}}$ minimizes $h_2$ over $[\frac{c}{s_1}, 1]$, this implies that $h^* = h_2({\ensuremath{y^*}})$.
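The case analysis above amounts to clamping the unconstrained minimizer $\frac{1}{2}(\frac{c}{s_1}+\frac{1}{s_2})$ of $h_2$ to the interval $[\frac{c}{s_1},1]$. A minimal sketch of the resulting per-player optimal social cost, reusing the `f` helper from the earlier sketch (again, the names are illustrative and not part of the paper's notation):

```python
# Sketch: per-player optimal social cost of Lemma [lem:socialoptimum].
def y_star(c, s1, s2):
    vertex = 0.5 * (c / s1 + 1.0 / s2)   # unconstrained minimizer of h2
    if vertex < c / s1:
        return c / s1
    return min(vertex, 1.0)

def optimal_cost_per_player(c, s1, s2):
    y = y_star(c, s1, s2)
    return y * f(y, c, s1, s2) + (1 - y)

print(optimal_cost_per_player(c=2, s1=4, s2=10))  # 0.5 for this arbitrary example
```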
\[lem:bestnash\] For any [$(c,s_1,s_2)$-El Farol game]{}, if $f(1)\geq 1$, then the best Nash equilibrium is one at which the expected cost to go is equal to the cost to stay; otherwise, the best Nash equilibrium is one at which all players would rather go. The social cost of the best [Nash equilibrium ]{}is $\min(n, f(1) \cdot n)$.
There are two cases for $f(1)$ to determine the best [Nash equilibrium]{}.\
**Case 1:** $f(1)\geq1$. Let $N_y$ be a Nash equilibrium with the minimum social cost over all Nash equilibria and with a $y$-fraction of players that go in expectation. If $f(y)>1$, then at least one player of the $y$-fraction of players would rather stay. Also if $f(y)<1$, then at least one player of the $(1-y)$-fraction of players would rather go. Thus, we must have $f(y) = 1$. Assume that each player has a mixed strategy, where player $i$ goes with probability $y_i$. Recall that $N_y$ has a $y$-fraction of players that go in expectation. Thus, $y=\frac{1}{n}\sum^{n}_{i=1} y_i$. Then the social cost is $\sum^{n}_{i=1}(y_if(y)+(1-y_i))$, or equivalently, $n$.\
**Case 2:** $f(1)<1$. In this case, the best [Nash equilibrium ]{}is at which all players would rather go, with a social cost of $f(1) \cdot n$.
Therefore, the social cost of the best [Nash equilibrium ]{}is $\min(n, f(1) \cdot n)$.
\[thm:optimal\] For any [$(c,s_1,s_2)$-El Farol game ]{}, if $c \leq 1$, then the best correlated equilibrium is the best [Nash equilibrium]{}; otherwise, the best correlated equilibrium is $\D\{({\ensuremath{C(0)}},p),({\ensuremath{C({\ensuremath{x^*}})}},1-p)\}$, where $
{\ensuremath{\lambda}}(c,s_1,s_2) = c(\frac{1}{s_1} + \frac{1}{s_2}) - \sqrt{\frac{c(\frac{1}{s_1} + \frac{1}{s_2})(c-1)}{s_2}},
$ $${\ensuremath{x^*}}= \left\{
\begin{array}{l l}
{\ensuremath{\lambda}}(c,s_1,s_2) & \quad \mbox{if $\frac{c}{s_1} \leq {\ensuremath{\lambda}}(c,s_1,s_2) < 1$,}\\
\frac{c}{s_1} & \quad \mbox{if $ {\ensuremath{\lambda}}(c,s_1,s_2) < \frac{c}{s_1}$,}\\
1 & \quad \mbox{$ otherwise$.}\\
\end{array} \right.$$ and $p = \frac{(1-{\ensuremath{x^*}})(1-f({\ensuremath{x^*}}))}{(1-{\ensuremath{x^*}})(1-f({\ensuremath{x^*}}))+c-1}$. Moreover,
1. the expected social cost is $
(p+(1-p)({\ensuremath{x^*}}f({\ensuremath{x^*}})+(1-{\ensuremath{x^*}})))n
$,
2. the Mediation Value $({\emph{MV}})$ is $
\frac{\min(f(1), 1)}{p+(1-p)({\ensuremath{x^*}}f({\ensuremath{x^*}})+(1-{\ensuremath{x^*}}))}
$ and
3. the Enforcement Value $({\emph{EV}})$ is $
\frac{p+(1-p)({\ensuremath{x^*}}f({\ensuremath{x^*}})+(1-{\ensuremath{x^*}}))}{{\ensuremath{y^*}}f({\ensuremath{y^*}})+(1-{\ensuremath{y^*}})},
$ where $${\ensuremath{y^*}}= \left\{
\begin{array}{l l}
\frac{1}{2}(\frac{c}{s_1}+\frac{1}{s_2}) & \quad \mbox{if $\frac{c}{s_1} \leq \frac{1}{2}(\frac{c}{s_1}+\frac{1}{s_2}) \leq 1$,}\\
\frac{c}{s_1} & \quad \mbox{if $ \frac{1}{s_2} < \frac{c}{s_1}$,}\\
1 & \quad \mbox{$ otherwise$.}\\
\end{array} \right. .$$
The proof of this theorem is given in the appendix.
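For $c > 1$, the quantities in the theorem can be evaluated directly. The sketch below (reusing the helpers from the earlier sketches; it is an illustration only, with names of our choosing) computes $\lambda$, $x^*$, $p$, the per-player expected social cost of the mediator, and from them the [*Mediation Value*]{} and the [*Enforcement Value*]{}:

```python
# Sketch: the quantities of Theorem [thm:optimal] for c > 1 (cost to stay = 1,
# all costs per player). Reuses f, y_star and optimal_cost_per_player from above.
from math import sqrt

def lam(c, s1, s2):
    a = c * (1.0 / s1 + 1.0 / s2)
    return a - sqrt(a * (c - 1) / s2)

def x_star(c, s1, s2):
    l = lam(c, s1, s2)
    if l < c / s1:
        return c / s1
    return min(l, 1.0)

def mediator_cost_per_player(c, s1, s2):
    x = x_star(c, s1, s2)
    fx = f(x, c, s1, s2)
    p = (1 - x) * (1 - fx) / ((1 - x) * (1 - fx) + c - 1)
    return p + (1 - p) * (x * fx + (1 - x))

def mediation_value(c, s1, s2):
    best_nash = min(f(1.0, c, s1, s2), 1.0)      # Lemma [lem:bestnash]
    return best_nash / mediator_cost_per_player(c, s1, s2)

def enforcement_value(c, s1, s2):
    return mediator_cost_per_player(c, s1, s2) / optimal_cost_per_player(c, s1, s2)

# Arbitrary example: in the (2, 4, 10)-El Farol game, MV = 1.5 and EV = 4/3.
print(mediation_value(2, 4, 10), enforcement_value(2, 4, 10))
```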
The following corollary shows that for $c > 1$, if ${\ensuremath{\lambda}}(c,s_1,s_2) \geq 1$, then the best correlated equilibrium is the best [Nash equilibrium]{}, where all players would rather go.
\[corollary:no.mediation\] For any [$(c,s_1,s_2)$-El Farol game]{}, if $c > 1$ and ${\ensuremath{\lambda}}(c,s_1,s_2) \geq 1$ then ${\emph{MV}}= 1$.
By Theorem \[thm:optimal\], when ${\ensuremath{\lambda}}(c,s_1,s_2) \geq 1$, ${\ensuremath{x^*}}=1$ and $p=0$. Now we prove that if ${\ensuremath{\lambda}}(c,s_1,s_2) \geq 1$, then the best correlated equilibrium is the best [Nash equilibrium ]{}of the case $f(1) < 1$ in Lemma \[lem:bestnash\]. To do so, we prove that ${\ensuremath{\lambda}}(c,s_1,s_2) \geq 1 \Rightarrow f(1) < 1$.
Now assume by way of contradiction that ${\ensuremath{\lambda}}(c,s_1,s_2) \geq 1$ and $f(1) \geq 1$. Recall that $f(1) = s_2(1-\frac{c}{s_1})$, so $f(1) \geq 1$ gives $\frac{c}{s_1}+\frac{1}{s_2} \leq 1$, and hence $\frac{c}{s_1}+\frac{1}{s_2} \leq {\ensuremath{\lambda}}(c,s_1,s_2)$. Also recall that ${\ensuremath{\lambda}}(c,s_1,s_2) = c(\frac{1}{s_1}+\frac{1}{s_2})- \sqrt{\frac{c(\frac{1}{s_1}+\frac{1}{s_2})(c-1)}{s_2}}$. Thus, we have:
$$\begin{aligned}
\frac{c}{s_1}+\frac{1}{s_2} \leq {\ensuremath{\lambda}}(c,s_1,s_2) && \Rightarrow \frac{c}{s_1}+\frac{1}{s_2} \leq c(\frac{1}{s_1}+\frac{1}{s_2})- \sqrt{\frac{c(\frac{1}{s_1}+\frac{1}{s_2})(c-1)}{s_2}}\\
&& \Rightarrow s_2 \cdot \frac{c}{s_1} \leq -1,\end{aligned}$$
which is a contradiction since $s_1,s_2$ and $c$ are all positive. Therefore, for $c > 1$ and ${\ensuremath{\lambda}}(c,s_1,s_2) \geq 1$, ${\emph{MV}}$ must be equal to $1$.
Now we show that ${\emph{MV}}$ and ${\emph{EV}}$ can be unbounded in the following corollaries.
\[corollary:unboundedmv\] For any $(2+\epsilon,\frac{2+\epsilon}{1-\epsilon}, \frac{1}{\epsilon})$-El Farol game, as $\epsilon \to 0$, ${\emph{MV}}\to \infty$.
For any $(2+\epsilon,\frac{2+\epsilon}{1-\epsilon}, \frac{1}{\epsilon})$-El Farol game, we have $f(1) = 1$. By Theorem \[thm:optimal\], we obtain ${\ensuremath{x^*}}=1-\epsilon$, $f({\ensuremath{x^*}}) = 0$ and $p=\frac{\epsilon}{1+2\epsilon}$ for $\epsilon \leq \frac{1}{2}(\sqrt{3}-1)$. Thus we have $$\lim_{\epsilon \to 0} {\emph{MV}}= \lim_{\epsilon \to 0}\frac{\min{(f(1),1)}}{\frac{\epsilon}{1+2\epsilon}+\epsilon(\frac{1+\epsilon}{1+2\epsilon})} = \infty.$$
\[corollary:unboundedev\] For any $(1+\epsilon,\frac{1+\epsilon}{1-\epsilon}, \frac{1}{\epsilon})$-El Farol game, as $\epsilon \to 0$, ${\emph{EV}}\to \infty$.
For any $(1+\epsilon,\frac{1+\epsilon}{1-\epsilon}, \frac{1}{\epsilon})$-El Farol game, by Theorem \[thm:optimal\], we obtain ${\ensuremath{x^*}}=1+\epsilon^2-\epsilon \sqrt{1+\epsilon^2}$ and $f({\ensuremath{x^*}}) = 1+\epsilon -\sqrt{1+\epsilon^2}$. Then we have $$p=\frac{(1-(1+\epsilon^2-\epsilon \sqrt{1+\epsilon^2}))(1-(1+\epsilon -\sqrt{1+\epsilon^2}))}{(1-(1+\epsilon^2-\epsilon \sqrt{1+\epsilon^2}))(1-(1+\epsilon -\sqrt{1+\epsilon^2}))+\epsilon}.$$ Also we have ${\ensuremath{y^*}}= 1-\epsilon$ and $f({\ensuremath{y^*}}) = 0$ for $\epsilon \leq \frac{1}{2}$. Thus we have $$\lim_{\epsilon \to 0} {\emph{EV}}= \lim_{\epsilon \to 0}
\frac{p+(1-p)({\ensuremath{x^*}}f({\ensuremath{x^*}})+(1-{\ensuremath{x^*}}))}{{\ensuremath{y^*}}f({\ensuremath{y^*}})+(1-{\ensuremath{y^*}})}
= \infty.$$
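Both corollaries are easy to illustrate numerically with the helpers above (an illustration only; the $\epsilon$ values below are arbitrary). As $\epsilon$ decreases, the [*MV*]{} of the first family and the [*EV*]{} of the second family grow without bound, roughly like $\frac{1}{2\epsilon}$:

```python
# Sketch: numerical illustration of Corollaries [corollary:unboundedmv] and
# [corollary:unboundedev], reusing mediation_value and enforcement_value above.
for eps in (0.1, 0.01, 0.001):
    mv_game = (2 + eps, (2 + eps) / (1 - eps), 1 / eps)   # MV -> infinity
    ev_game = (1 + eps, (1 + eps) / (1 - eps), 1 / eps)   # EV -> infinity
    print(eps, mediation_value(*mv_game), enforcement_value(*ev_game))
```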
![NE, MED, OPT, MV and EV with respect to $s_1$ and $s_2$.[]{data-label="fig:s1_s2"}](Figures/s1.pdf "fig:"){width="45.00000%"} ![NE, MED, OPT, MV and EV with respect to $s_1$ and $s_2$.[]{data-label="fig:s1_s2"}](Figures/s2.pdf "fig:"){width="45.00000%"}
![NE, MED, OPT, MV and EV with respect to $c/s_1$.[]{data-label="fig:c_s1"}](Figures/inf_mv.pdf "fig:"){width="45.00000%"} ![NE, MED, OPT, MV and EV with respect to $c/s_1$.[]{data-label="fig:c_s1"}](Figures/inf_ev.pdf "fig:"){width="45.00000%"}
Based on these results, we show in Figures \[fig:s1\_s2\] and \[fig:c\_s1\] the social cost of the best Nash equilibrium (NE), the expected social cost of our optimal mediator (MED) and the optimal social cost (OPT), normalized by $n$, with respect to $s_1$, $s_2$ and $c/s_1$. Also we show the corresponding [*Mediation Value*]{} ([*MV*]{}) and [*Enforcement Value*]{} ([*EV*]{}).
In Figure \[fig:s1\_s2\], the left plot shows that for $c=2$ and $s_2=10$, the values of NE, MED, OPT increase, each up to a certain point, when $s_1$ increases; however, the values of [*MV*]{} and [*EV*]{} decrease when $s_1$ increases. Moreover, [*MV*]{} reaches its peak at the point where the best Nash equilibrium starts to remain constant with respect to $s_1$. In the right plot, we set $c = 2$ and $s_1 = 2.25$; it shows that the values of NE, MED, OPT, [*MV*]{} and [*EV*]{} increase, each up to a certain point, when $s_2$ increases.
Figure \[fig:c\_s1\] illustrates Corollaries \[corollary:unboundedmv\] and \[corollary:unboundedev\], and it shows how fast [*MV*]{} and [*EV*]{} go to infinity with respect to $c/s_1$, where $c/s_1 = 1 - \epsilon$. The left plot shows that for any $(2+\epsilon,\frac{2+\epsilon}{1-\epsilon}, \frac{1}{\epsilon})$-El Farol game, as $c/s_1 \to 1$ ($\epsilon \to 0$), $MV \to \infty$ and $EV \to 2$. In the right plot, for any $(1+\epsilon,\frac{1+\epsilon}{1-\epsilon}, \frac{1}{\epsilon})$-El Farol game, as $c/s_1 \to 1$ ($\epsilon \to 0$), $EV \to \infty$ and $MV \to 2$.
Note that for any [$(c,s_1,s_2)$-El Farol game]{}, if $c/s_1 = 1$, then the best correlated equilibrium is one at which all players would rather go, with a social cost of $0$; this is the best [Nash equilibrium ]{}as well. Therefore, once $c/s_1$ is equal to $1$, ${\emph{MV}}$ drops to $1$.
The cost to stay assumption {#section_thecosttostayassumption .unnumbered}
---------------------------
Now we justify our assumption that the cost to stay is unity. Let the $(c',s'_1,s'_2,t')$-El Farol game be a variant of the [$(c,s_1,s_2)$-El Farol game]{}, where $0 < c' < s'_1$, $s'_2 > 0$ and the cost to stay is $t'>0$. If $x$ is the fraction of players to go, then the cost $f'(x)$ for any player to go is as follows: $$f'(x) = \left\{
\begin{array}{l l}
c'- s'_1 x & \quad \mbox{$0 \leq x \leq \frac{c'}{s'_1}$,}\\
s'_2 (x - \frac{c'}{s'_1}) & \quad \mbox{$\frac{c'}{s'_1} \leq x \leq 1$.}\\
\end{array} \right.$$ The following lemma shows that any $(c',s'_1,s'_2,t')$-El Farol game can be reduced to a [$(c,s_1,s_2)$-El Farol game]{}.
\[lem:t\_normalization\] Any $(c',s'_1,s'_2,t')$-El Farol game can be reduced to a [$(c,s_1,s_2)$-El Farol game ]{}that has the same [*Mediation Value*]{} and [*Enforcement Value*]{}, where $c = \frac{c'}{t'}, s_1 = \frac{s'_1}{t'}$ and $s_2 = \frac{s'_2}{t'}$.
In a manner similar to Theorem \[thm:optimal\], for any $(c',s'_1,s'_2,t')$-El Farol game, if $c' > t'$, then the best correlated equilibrium is $\D\{({\ensuremath{C(0)}},p'),({\ensuremath{C(x')}},1-p')\}$, where $
{\ensuremath{\lambda}}'(c',s'_1,s'_2,t') = c'(\frac{1}{s'_1}+\frac{1}{s'_2})- \sqrt{\frac{c'(\frac{1}{s'_1}+\frac{1}{s'_2})(c'-t')}{s'_2}};
$ $$x' = \left\{
\begin{array}{l l}
{\ensuremath{\lambda}}'(c',s'_1,s'_2,t') & \quad \mbox{if $\frac{c'}{s'_1} \leq {\ensuremath{\lambda}}'(c',s'_1,s'_2,t') < 1$,}\\
\frac{c'}{s'_1} & \quad \mbox{if $ {\ensuremath{\lambda}}'(c',s'_1,s'_2,t') < \frac{c'}{s'_1}$,}\\
1 & \quad \mbox{otherwise} .\\
\end{array} \right.$$ and $
p'=\frac{(1-x')(t'-f'(x'))}{(1-x')(t'-f'(x'))+c'-t'}.
$ Moreover,
1. the Mediation Value ($MV'$) is $
\frac{\min{(f'(1), t')}}{p't'+(1-p')(x'f'(x')+(1-x')t')}
$ and
2. the Enforcement Value ($EV'$) is $
\frac{p't'+(1-p')(x'f'(x')+(1-x')t')}{y'f'(y')+(1-y')t'},
$ where $$y' = \left\{
\begin{array}{l l}
\frac{1}{2}(\frac{c'}{s'_1}+\frac{t'}{s'_2}) & \quad \mbox{if $\frac{c'}{s'_1} \leq \frac{1}{2}(\frac{c'}{s'_1}+\frac{t'}{s'_2}) \leq 1$,}\\
\frac{c'}{s'_1} & \quad \mbox{if $ \frac{t'}{s'_2} < \frac{c'}{s'_1}$,}\\
1 & \quad \mbox{$ otherwise$.}\\
\end{array} \right. .$$
Similarly, for $c' \leq t'$, we have $MV' = 1$ and $EV' = \frac{\min{(f'(1), t')}}{y'f'(y')+(1-y')t'}$.
For both cases, by Theorem \[thm:optimal\], if we set $c=c'/t'$, $s_1=s'_1/t'$ and $s_2=s'_2/t'$, then we have $f'(1) = f(1) \cdot t'$; also we get $y' = {\ensuremath{y^*}}$ and $\lambda'(c',s'_1,s'_2,t') = \lambda(c,s_1,s_2)$. This implies that $f'(y') = f({\ensuremath{y^*}}) \cdot t'$ and $x' = {\ensuremath{x^*}}$, which in turn gives $f'(x') = f({\ensuremath{x^*}}) \cdot t'$ and $p' = p$. Thus, we obtain $MV'=MV$ and $EV'=EV$.
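A quick numerical check of this reduction using the helpers from the earlier sketches (the primed parameters below are arbitrary): rescaling $(c',s'_1,s'_2,t')$ by $t'$ and evaluating [*MV*]{} and [*EV*]{} on the resulting unit-stay-cost game gives the values of the original game.

```python
# Sketch: Lemma [lem:t_normalization] in code. An arbitrary primed game with
# stay-cost t' is rescaled to a unit-stay-cost (c, s1, s2)-El Farol game.
c_p, s1_p, s2_p, t_p = 3.0, 6.0, 15.0, 1.5            # hypothetical primed game
c, s1, s2 = c_p / t_p, s1_p / t_p, s2_p / t_p         # reduces to (2, 4, 10)
print(mediation_value(c, s1, s2), enforcement_value(c, s1, s2))
```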
Conclusion {#sec:conclusion}
==========
We have extended the traditional [El Farol game ]{}to have both negative and positive network effects. We have described an optimal mediator, and we have measured the [*Mediation Value*]{} and the [*Enforcement Value*]{} to completely characterize the benefit of our mediator with respect to the best Nash equilibrium and the optimal social cost.
Several open questions remain including the following: can we generalize our results for our game where the players choose among $k>2$ actions? How many configurations are required to design an optimal mediator when there are $k>2$ actions? Another problem is characterizing the [*MV*]{} and [*EV*]{} values for our game with the more powerful mediators in [@Ashlagi:2007; @Forgo2010; @Forgo2010-2; @Monderer:2009; @Peleg:2007; @RT; @RT3; @RT2; @Tennenholtz:2008]. How much would these more powerful mediators reduce the social cost over our type of weaker mediator?
Appendix - Proof of Theorem \[thm:optimal\] {#sec:proofoftheorem}
===========================================
First of all, we say that a mediator is over $k$ configurations when the configuration distribution this mediator uses has $k$ configurations.
The proof has four main parts. The first part is *The Reduction of Mediators for $c > 1$*, where we prove that if $c > 1$, then for any optimal mediator over $k > 2$ configurations, there is a mediator over two configurations that has the same social cost. The second part is *The Reduction of Mediators for $c \leq 1$*, where we prove that if $c \leq 1$, then the best correlated equilibrium is the best Nash equilibrium. The third part is *An Optimal Mediator*, where we describe an optimal mediator for any arbitrary constants $c, s_1$ and $s_2$. Finally, the fourth part is *The Mediation Metrics*, where we measure the [*Mediation Value*]{} and the [*Enforcement Value*]{}.
Recall that $x_i$ is the fraction of players that are advised to go in configuration ${\ensuremath{C(x_i)}}$ which is selected with probability $p_i$ in a configuration distribution, $\D\{({\ensuremath{C(x_1)}},p_1),.., ({\ensuremath{C(x_k)}},p_k)\}$, for $1\leq i \leq k$. We define ${\ensuremath{\Delta(x_i)}} = 1-f(x_i)$, where $f(x_i)$ is defined in Equation [(\[eq:fgx\])]{}.
The Reduction of Mediators for $c > 1$
--------------------------------------
In this section, we consider the case that $c >1$.
\[fact:deltas\] For any mediator over $k$ configurations, and for $1\leq i\leq k$, ${\ensuremath{\Delta(x_i)}}>0$ iff $(\frac{c-1}{s_1} < x_i< \frac{1}{s_2}+\frac{c}{s_1} \ and \ f(1) \geq 1)$ or $(\frac{c-1}{s_1} < x_i \leq 1 \ and \ f(1) < 1)$; and ${\ensuremath{\Delta(x_i)}}<0$ iff $0\leq x_i <\frac{c-1}{s_1}$ or $(\frac{1}{s_2}+\frac{c}{s_1}<x_i \leq 1 \ and \ f(1) > 1)$.
Recall that ${\ensuremath{\Delta(x_i)}} = 1-f(x_i)$. Then by Equation (\[eq:fgx\]), we have
$${\ensuremath{\Delta(x_i)}} = \left\{
\begin{array}{l l}
\Delta_1(x_i) & \quad \mbox{$0 \leq x_i \leq \frac{c}{s_1}$,}\\
\Delta_2(x_i) & \quad \mbox{$\frac{c}{s_1} \leq x_i \leq 1$.}\\
\end{array} \right.$$ where $\Delta_1(x_i) =1-(c- s_1x_i)$ and $\Delta_2(x_i)=1-s_2 (x_i - \frac{c}{s_1})$. Now we make a case analysis:\
**Case 1:** $0\leq x_i\leq \frac{c}{s_1}$: $\Delta_1(x_i)<0 \Longleftrightarrow 0\leq x_i < \frac{c-1}{s_1}$; and $\Delta_1(x_i)>0 \Longleftrightarrow \frac{c-1}{s_1}<x_i\leq \frac{c}{s_1}$.\
**Case 2:** $\frac{c}{s_1} \leq x_i \leq 1$: $\Delta_2(x_i)>0 \Longleftrightarrow
(\frac{c}{s_1} \leq x_i< \frac{1}{s_2}+\frac{c}{s_1} \ and \ f(1) \geq 1)$ or $(\frac{c}{s_1} \leq x_i \leq 1 \ and \ f(1) < 1)$; and $\Delta_2(x_i)<0 \Longleftrightarrow (\frac{1}{s_2}+\frac{c}{s_1}<x_i \leq 1 \ and \ f(1) > 1)$.
\[fact:const\] $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_k)}},p_k)\}$ is a correlated equilibrium iff $$\label{D_kconstraints>0}
\sum_{i=1}^{k} p_ix_i{\ensuremath{\Delta(x_i)}} \geq 0$$ and $$\label{D_kconstraints<0}
\sum_{i=1}^{k} p_i(1-x_i){\ensuremath{\Delta(x_i)}} \leq 0.$$
Recall that ${\ensuremath{\mathcal E}}^i_G$ is the event that the mediator advises player $i$ to go, $C^i_G$ is the cost for player $i$ to go, ${\ensuremath{\mathcal E}}^i_S$ is the event that the mediator advises player $i$ to stay, and $C^i_S$ is the cost for player $i$ to stay. Also we will omit the index $i$ since the players are symmetric.
By definition, $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_k)}},p_k)\}$ is a correlated equilibrium iff $$\begin{aligned}
{\ensuremath{{\mathbf{E}\left[C_S |{\ensuremath{\mathcal E}}_G\right]}}} \geq {\ensuremath{{\mathbf{E}\left[C_G |{\ensuremath{\mathcal E}}_G\right]}}}, \\
{\ensuremath{{\mathbf{E}\left[C_G |{\ensuremath{\mathcal E}}_S\right]}}} \geq {\ensuremath{{\mathbf{E}\left[C_S |{\ensuremath{\mathcal E}}_S\right]}}}.\end{aligned}$$ Note that: $$\begin{aligned}
\label{eq:stay_go}
{\ensuremath{{\mathbf{E}\left[C_S |{\ensuremath{\mathcal E}}_G\right]}}} = 1,\end{aligned}$$ $$\begin{aligned}
\label{eq:go_go}
{\ensuremath{{\mathbf{E}\left[C_G |{\ensuremath{\mathcal E}}_G\right]}}} = \frac{\sum_{i=1}^{k} p_i f(x_i) x_i}{\sum_{i=1}^{k} p_i x_i},\end{aligned}$$ $$\begin{aligned}
\label{eq:go_stay}
{\ensuremath{{\mathbf{E}\left[C_G |{\ensuremath{\mathcal E}}_S\right]}}} = \frac{\sum_{i=1}^{k} p_i f(x_i) (1-x_i)}{\sum_{i=1}^{k} p_i (1-x_i)}\end{aligned}$$ and $$\begin{aligned}
\label{eq:stay_stay}
{\ensuremath{{\mathbf{E}\left[C_S |{\ensuremath{\mathcal E}}_S\right]}}} = 1.\end{aligned}$$
Therefore, $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_k)}},p_k)\}$ is a correlated equilibrium iff $$\begin{aligned}
\label{eqn:const1}
\frac{\sum_{i=1}^{k} p_i f(x_i) x_i}{\sum_{i=1}^{k} p_i x_i}\leq 1\end{aligned}$$ and $$\begin{aligned}
\label{eqn:const2}
\frac{\sum_{i=1}^{k} p_i f(x_i) (1-x_i)}{\sum_{i=1}^{k} p_i (1-x_i)} \geq 1.\end{aligned}$$ By rearranging Inequalities [(\[eqn:const1\])]{} and [(\[eqn:const2\])]{}, we have $$\begin{aligned}
\sum_{i=1}^{k} p_i x_i (1-f(x_i))\geq 0\end{aligned}$$ and $$\begin{aligned}
\sum_{i=1}^{k} p_i (1-x_i) (1-f(x_i)) \leq 0.\end{aligned}$$
\[fact:socialcost\] The expected social cost of $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_k)}},p_k)\}$ is $$(1 - \sum_{i=1}^{k} p_i x_i {\ensuremath{\Delta(x_i)}})n.$$
Let ${\ensuremath{Cost({\ensuremath{C(x_i)}})}}$ be the cost of configuration ${\ensuremath{C(x_i)}}$ in $\D\{({\ensuremath{C(x_1)}},p_1),..,\\({\ensuremath{C(x_k)}},p_k)\}$, for $1\leq i \leq k$. We know that the expected social cost of $\D\{({\ensuremath{C(x_1)}},p_1)\\,..,({\ensuremath{C(x_k)}},p_k)\}$ is $$\sum_{i=1}^{k} p_i {\ensuremath{Cost({\ensuremath{C(x_i)}})}}.$$ We have ${\ensuremath{Cost({\ensuremath{C(x_i)}})}}=(x_if(x_i)+(1-x_i))n$, and since ${\ensuremath{\Delta(x_i)}}=1-f(x_i)$, it follows that ${\ensuremath{Cost({\ensuremath{C(x_i)}})}}=(1-x_i{\ensuremath{\Delta(x_i)}})n$. Therefore, the expected social cost is $$\sum_{i=1}^{k} p_i (1-x_i{\ensuremath{\Delta(x_i)}})n,$$ or equivalently, $$(\sum_{i=1}^{k} p_i - \sum_{i=1}^{k}p_ix_i{\ensuremath{\Delta(x_i)}})n.$$ Finally, we note that $\sum_{i=1}^{k} p_i=1$.
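In code, the constraints of [Lemma ]{}\[fact:const\] and the cost expression of this lemma are short sums over the configuration distribution. The sketch below (reusing the `f` helper from the sketch in the definitions section; the names are ours, and costs are per player, i.e. the factor $n$ is dropped) evaluates both:

```python
# Sketch: Delta(x) = 1 - f(x), the equilibrium constraints of Lemma [fact:const],
# and the per-player expected social cost of Lemma [fact:socialcost].
def delta(x, c, s1, s2):
    return 1.0 - f(x, c, s1, s2)

def satisfies_constraints(dist, c, s1, s2, tol=1e-12):
    go = sum(p * x * delta(x, c, s1, s2) for x, p in dist)          # must be >= 0
    stay = sum(p * (1 - x) * delta(x, c, s1, s2) for x, p in dist)  # must be <= 0
    return go >= -tol and stay <= tol

def expected_cost_per_player(dist, c, s1, s2):
    return 1.0 - sum(p * x * delta(x, c, s1, s2) for x, p in dist)

dist = [(0.0, 0.3), (0.6, 0.7)]   # the arbitrary example used earlier
print(satisfies_constraints(dist, 2, 4, 10), expected_cost_per_player(dist, 2, 4, 10))
```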
\[fact:signedDelta\_i\] For any optimal mediator over $k\geq 2$ configurations, ${\ensuremath{\Delta(x_i)}}\neq 0$ for all $1\leq i\leq k$, and ${\ensuremath{\Delta(x_u)}}>0$ and ${\ensuremath{\Delta(x_v)}}<0$ for some $1\leq u,v \leq k$.
First we show that for any optimal mediator over $k\geq2$ configurations, ${\ensuremath{\Delta(x_i)}}$ is non-zero for all $1\leq i\leq k$. Assume by way of contradiction that there is an optimal mediator $M_k$ that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_k)}},p_k)\}$, and there is some $1\leq j\leq k$ such that ${\ensuremath{\Delta(x_j)}} = 0$. Recall that $0 < p_j < 1$. Now let $\D\{({\ensuremath{C(x_1)}},\frac{p_1}{1-p_j}),..,({\ensuremath{C(x_{j-1})}},\frac{p_{j-1}}{1-p_j}),({\ensuremath{C(x_{j+1})}},\frac{p_{j+1}}{1-p_j}),..,({\ensuremath{C(x_k)}},\frac{p_{k}}{1-p_j})\}$ be a configuration distribution over $k-1$ configurations.
Since $M_k$ is a mediator and ${\ensuremath{\Delta(x_j)}} = 0$, Constraints [(\[D\_kconstraints>0\])]{} and [(\[D\_kconstraints<0\])]{} of [Lemma ]{}\[fact:const\] imply that $$\sum_{1\leq i \leq k, i\neq j} p_ix_i{\ensuremath{\Delta(x_i)}} \geq 0$$ and $$\sum_{1\leq i \leq k, i\neq j} p_i(1-x_i){\ensuremath{\Delta(x_i)}} \leq 0.$$ Now if we multiply both sides of these two constraints by $\frac{1}{1-p_j}$, we have $$\sum_{1\leq i \leq k, i\neq j} \frac{p_i}{1-p_j} x_i{\ensuremath{\Delta(x_i)}}\geq 0$$ and $$\sum_{1\leq i \leq k, i\neq j} \frac{p_i}{1-p_j}(1-x_i){\ensuremath{\Delta(x_i)}}\leq 0.$$ By [Lemma ]{}\[fact:const\], $\D\{({\ensuremath{C(x_1)}},\frac{p_1}{1-p_j}),..,({\ensuremath{C(x_{j-1})}},\frac{p_{j-1}}{1-p_j}),({\ensuremath{C(x_{j+1})}},\frac{p_{j+1}}{1-p_j}),..,({\ensuremath{C(x_k)}},\frac{p_{k}}{1-p_j})\}$ is a correlated equilibrium. Let $M_{k-1}$ be a mediator that uses this correlated equilibrium. By [Lemma ]{}\[fact:socialcost\], the expected social cost of $M_{k-1}$ is $$(1-\frac{1}{1-p_j}\sum_{1\leq i \leq k, i\neq j} p_ix_i{\ensuremath{\Delta(x_i)}})n,$$ and since ${\ensuremath{\Delta(x_j)}} = 0$, the expected social cost of $M_k$ is $$(1-\sum_{1\leq i \leq k, i\neq j} p_ix_i{\ensuremath{\Delta(x_i)}})n.$$ We know that $0<p_j<1$ implies $\frac{1}{1-p_j}>1$. Therefore, the expected social cost $M_{k-1}$ is less than the expected social cost of $M_k$. This contradicts the fact that $M_k$ is optimal.\
Recall that $0<p_i<1$ and $0\leq x_i\leq 1$ for all $1\leq i \leq k$. By [Lemma ]{}\[fact:const\], Constraint [(\[D\_kconstraints>0\])]{} implies that there exists $u$ such that ${\ensuremath{\Delta(x_u)}}>0$ for $1\leq u \leq k$. Similarly, Constraint [(\[D\_kconstraints<0\])]{} implies that there exists $v$ such that ${\ensuremath{\Delta(x_v)}}<0$ for $1\leq v \leq k$.
\[fact:implication\] Any optimal mediator that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_j)}},p_j),..,\\({\ensuremath{C(x_k)}},p_k)\}$, where $k\geq 2$, has $$\sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}(x_i-x_j) \geq 0, 1\leq j\leq k.$$
Let $M_k$ be an optimal mediator that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_j)}},p_j),..,\\({\ensuremath{C(x_k)}},p_k)\}$. We know by [Lemma ]{}\[fact:signedDelta\_i\] that any configuration, ${\ensuremath{C(x_j)}}$, has either ${\ensuremath{\Delta(x_j)}}<0$ or ${\ensuremath{\Delta(x_j)}}>0$, for $1\leq j \leq k$. Now fix any $1\leq j \leq k$, and do a case analysis for ${\ensuremath{\Delta(x_j)}}$.\
**Case 1:** If ${\ensuremath{\Delta(x_j)}}<0$, then by repeated application of [Lemma ]{}\[fact:const\] we have $$\label{tmp:compatible11}
\frac{\sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}(1-x_i)}{(1-x_j)|{\ensuremath{\Delta(x_j)}}|}
\leq
p_j
\leq
\frac{\sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}x_i}{x_j|{\ensuremath{\Delta(x_j)}}|}$$ Removing $p_j$ from Inequality [(\[tmp:compatible11\])]{} and rearranging, we get $$x_j|{\ensuremath{\Delta(x_j)}}|\sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}(1-x_i)
\leq
(1-x_j)|{\ensuremath{\Delta(x_j)}}|\sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}x_i.$$ By canceling the common terms, we have $$\sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}x_j
\leq
\sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}x_i.$$ **Case 2:** If ${\ensuremath{\Delta(x_j)}}>0$, then similarly by repeated application of [Lemma ]{}\[fact:const\] we have $$\label{tmp:compatible21}
\frac{-\sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}x_i}{x_j{\ensuremath{\Delta(x_j)}}}
\leq
p_j
\leq
\frac{-\sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}(1-x_i)}{(1-x_j){\ensuremath{\Delta(x_j)}}}$$ Removing $p_j$ from Inequality [(\[tmp:compatible21\])]{} and rearranging, we get $$x_j{\ensuremath{\Delta(x_j)}}\sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}(1-x_i) \leq (1-x_j){\ensuremath{\Delta(x_j)}}\sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}x_i.$$ By canceling the common terms, we have $$\sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}x_j \leq \sum_{1\leq i \leq k, i\neq j} p_i{\ensuremath{\Delta(x_i)}}x_i.$$ Since $j$ is any value between $1$ and $k$, this implies the statement of the lemma for every such $j$.
\[fact:delta<0-x=0\] Consider any mediator $M_k$ that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_j)}},p_j),..,\\({\ensuremath{C(x_k)}},p_k)\}$, where $0<x_j< \frac{c-1}{s_1}$. Then there exists a mediator $M'_k$ of less expected social cost, which uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x'_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$, where $x'_j=0$.
Let $M_k$ be a mediator that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$, where $0<x_j< \frac{c-1}{s_1}$. By [Lemma ]{}\[fact:const\], we have
$$\label{eq:fact:delta<0-x=0-const1}
p_jx_j{\ensuremath{\Delta(x_j)}} + \sum_{1\leq i\leq k, i\neq j} p_ix_i{\ensuremath{\Delta(x_i)}} \geq 0$$
and
$$\label{eq:fact:delta<0-x=0-const2}
p_j(1-x_j){\ensuremath{\Delta(x_j)}} + \sum_{1\leq i\leq k, i\neq j} p_i(1-x_i){\ensuremath{\Delta(x_i)}} \leq 0 .$$
Since $0<x_j< \frac{c-1}{s_1}$, by [Lemma ]{}\[fact:deltas\], ${\ensuremath{\Delta(x_j)}}<0$. Now let $\D\{({\ensuremath{C(x_1)}},p_1),..,\\({\ensuremath{C(x'_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$ be a configuration distribution that has $x'_j=0$. Thus, we have $p_jx'_j{\ensuremath{\Delta(x'_j)}}=0$ and $p_jx_j{\ensuremath{\Delta(x_j)}}<0$. By Inequality [(\[eq:fact:delta<0-x=0-const1\])]{}, we have $$\label{eq:med1}
p_jx'_j{\ensuremath{\Delta(x'_j)}} + \sum_{1\leq i\leq k, i\neq j} p_ix_i{\ensuremath{\Delta(x_i)}} > 0.$$ We know that ${\ensuremath{\Delta(x'_j)}} <{\ensuremath{\Delta(x_j)}} < 0$ and $(1-x'_j) > (1-x_j) > 0$, so we have $(1-x'_j){\ensuremath{\Delta(x'_j)}}<(1-x_j){\ensuremath{\Delta(x_j)}}$. By Inequality [(\[eq:fact:delta<0-x=0-const2\])]{}, we get $$\label{eq:med2}
p_j{\ensuremath{\Delta(x'_j)}} + \sum_{1\leq i\leq k, i\neq j} p_i(1-x_i){\ensuremath{\Delta(x_i)}} < 0.$$
Now by [Lemma ]{}\[fact:const\] and Inequalities [(\[eq:med1\])]{} and [(\[eq:med2\])]{}, $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x'_j)}},p_j)\\,..,({\ensuremath{C(x_k)}},p_k)\}$ is a correlated equilibrium. Let $M'_k$ be a mediator that uses this correlated equilibrium. By [Lemma ]{}\[fact:socialcost\], and since $x'_j=0$, the expected social cost of $M'_k$ is $$(1 - \sum_{1\leq i\leq k, i\neq j} p_i x_i {\ensuremath{\Delta(x_i)}})n.$$ Moreover, by [Lemma ]{}\[fact:socialcost\], the expected social cost of $M_k$ is $$((1 - \sum_{1\leq i\leq k, i\neq j} p_i x_i {\ensuremath{\Delta(x_i)}}) - p_jx_j{\ensuremath{\Delta(x_j)}})n.$$ Since ${\ensuremath{\Delta(x_j)}}<0$ and $x_j>0$, the expected social cost of $M'_k$ is less than the expected social cost of $M_k$.
\[fact:delta>0-x=c/s\_1\] For $f(1) \geq 1$, consider any mediator $M_k$ that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,\\({\ensuremath{C(x_j)}},p_j) , .. , ({\ensuremath{C(x_k)}},p_k)\}$, where $\frac{c-1}{s_1}<x_j < \frac{c}{s_1}$. Then there exists a mediator $M'_k$ of less expected social cost, which uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x'_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$, where $\frac{c}{s_1} < x'_j< \frac{c}{s_1}+\frac{1}{s_2}$ and $f(x'_j)=f(x_j)$.
Let $M_k$ be a mediator that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$, where $\frac{c-1}{s_1}<x_j < \frac{c}{s_1}$. By [Lemma ]{}\[fact:const\], we have
$$\label{fact:delta>0-x=c/s_1-const1}
p_jx_j{\ensuremath{\Delta(x_j)}} + \sum_{1\leq i\leq k, i\neq j} p_ix_i{\ensuremath{\Delta(x_i)}} \geq 0$$
and
$$\label{fact:delta>0-x=c/s_1-const2}
p_j(1-x_j){\ensuremath{\Delta(x_j)}} + \sum_{1\leq i\leq k, i\neq j} p_i(1-x_i){\ensuremath{\Delta(x_i)}} \leq 0 .$$
Recall that $\frac{c-1}{s_1}<x_j < \frac{c}{s_1}$ and $f(1)\geq 1$. Then $\exists x'_j:$ $\frac{c}{s_1} < x'_j< \frac{c}{s_1}+\frac{1}{s_2}$ and $f(x'_j)=f(x_j)$. Now let $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x'_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$ be a configuration distribution. Since $f(x'_j)=f(x_j)$, $\Delta(x'_j) = \Delta(x_j)$. We know that $x'_j>x_j$ and ${\ensuremath{\Delta(x_j)}}>0$, so $x'_j{\ensuremath{\Delta(x'_j)}}>x_j{\ensuremath{\Delta(x_j)}}$. By Inequality [(\[fact:delta>0-x=c/s\_1-const1\])]{}, we obtain $$p_jx'_j{\ensuremath{\Delta(x'_j)}} + \sum_{1\leq i\leq k, i\neq j} p_ix_i{\ensuremath{\Delta(x_i)}} > 0.$$ Since $(1-x'_j)<(1-x_j)$, we have $(1-x'_j){\ensuremath{\Delta(x'_j)}}<(1-x_j){\ensuremath{\Delta(x_j)}}$. By Inequality [(\[fact:delta>0-x=c/s\_1-const2\])]{}, we get $$p_j(1-x'_j){\ensuremath{\Delta(x'_j)}} + \sum_{1\leq i\leq k, i\neq j} p_i(1-x_i){\ensuremath{\Delta(x_i)}} < 0 .$$ Now by [Lemma ]{}\[fact:const\], $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x'_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$ is a correlated equilibrium. Let $M'_k$ be a mediator that uses this correlated equilibrium. By [Lemma ]{}\[fact:socialcost\], the expected social cost of $M'_k$ is $$((1 - \sum_{1\leq i\leq k, i\neq j} p_i x_i {\ensuremath{\Delta(x_i)}}) - p_jx'_j{\ensuremath{\Delta(x'_j)}})n,$$ and the expected social cost of $M_k$ is $$((1 - \sum_{1\leq i\leq k, i\neq j} p_i x_i {\ensuremath{\Delta(x_i)}}) - p_jx_j{\ensuremath{\Delta(x_j)}})n.$$ Since $p_jx'_j{\ensuremath{\Delta(x'_j)}}>p_jx_j{\ensuremath{\Delta(x_j)}}$, the expected social cost of $M'_k$ is less than the expected social cost of $M_k$.
\[fact:f(1)<1,f(x)<=f(1),(c-1)/s\_1<x<c/s\_1\] For $f(1) < 1$, consider any mediator $M_k$ that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,\\({\ensuremath{C(x_j)}},p_j) , .. , ({\ensuremath{C(x_k)}},p_k)\}$, where $\frac{c-1}{s_1} < x_j < \frac{c}{s_1}$ and $f(x_j) \leq f(1)$. Then there exists a mediator $M'_k$ of less expected social cost, which uses $\D\{({\ensuremath{C(x_1)}},p_1),..,\\({\ensuremath{C(x'_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$, where $\frac{c}{s_1} < x'_j \leq 1$ and $f(x'_j)=f(x_j)$.
We know that for $\frac{c-1}{s_1}<x_j < \frac{c}{s_1}$ and $f(x_j) \leq f(1) < 1$, $\exists x'_j:$ $\frac{c}{s_1} < x'_j \leq 1$ and $f(x'_j)=f(x_j)$. In a manner similar to the proof of Lemma \[fact:delta>0-x=c/s\_1\], we prove this Lemma.
\[fact:f(1)<1,f(x)>f(1),(c-1)/s\_1<x<c/s\_1\] For $f(1) < 1$, consider any mediator $M_k$ that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,\\({\ensuremath{C(x_j)}},p_j) , .. , ({\ensuremath{C(x_k)}},p_k)\}$, where $\frac{c-1}{s_1} < x_j < \frac{c}{s_1}$ and $f(x_j) > f(1)$. Then there exists a mediator $M'_k$ of less expected social cost, which uses $\D\{({\ensuremath{C(x_1)}},p_1),..,\\({\ensuremath{C(x'_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$, where $x'_j = 1$.
Let $M_k$ be a mediator that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$, where $\frac{c-1}{s_1}<x_j < \frac{c}{s_1}$. By [Lemma ]{}\[fact:const\], we have
$$\label{fact:f(1)<1,f(x)>f(1),(c-1)/s_1<x<c/s_1-const1}
p_jx_j{\ensuremath{\Delta(x_j)}} + \sum_{1\leq i\leq k, i\neq j} p_ix_i{\ensuremath{\Delta(x_i)}} \geq 0$$
and
$$\label{fact:f(1)<1,f(x)>f(1),(c-1)/s_1<x<c/s_1-const2}
p_j(1-x_j){\ensuremath{\Delta(x_j)}} + \sum_{1\leq i\leq k, i\neq j} p_i(1-x_i){\ensuremath{\Delta(x_i)}} \leq 0 .$$
Now let $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x'_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$ be a configuration distribution, where $x'_j = 1$. Since $f(x_j) > f(x'_j)$ and $f(x'_j) < 1$, $\Delta(x'_j) > \Delta(x_j)$. We know that $x'_j>x_j$ and ${\ensuremath{\Delta(x_j)}}>0$, so $x'_j{\ensuremath{\Delta(x'_j)}}>x_j{\ensuremath{\Delta(x_j)}}$. By Inequality [(\[fact:f(1)<1,f(x)>f(1),(c-1)/s\_1<x<c/s\_1-const1\])]{}, we obtain $$p_jx'_j{\ensuremath{\Delta(x'_j)}} + \sum_{1\leq i\leq k, i\neq j} p_ix_i{\ensuremath{\Delta(x_i)}} > 0.$$ Since $(1-x'_j) = 0$, $(1-x_j) > 0$ and $\Delta(x_j) > 0$, $(1-x'_j){\ensuremath{\Delta(x'_j)}}<(1-x_j){\ensuremath{\Delta(x_j)}}$. By Inequality [(\[fact:f(1)<1,f(x)>f(1),(c-1)/s\_1<x<c/s\_1-const2\])]{}, we get $$p_j(1-x'_j){\ensuremath{\Delta(x'_j)}} + \sum_{1\leq i\leq k, i\neq j} p_i(1-x_i){\ensuremath{\Delta(x_i)}} < 0 .$$ Now by [Lemma ]{}\[fact:const\], $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x'_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$ is a correlated equilibrium. Let $M'_k$ be a mediator that uses this correlated equilibrium. By [Lemma ]{}\[fact:socialcost\], the expected social cost of $M'_k$ is $$((1 - \sum_{1\leq i\leq k, i\neq j} p_i x_i {\ensuremath{\Delta(x_i)}}) - p_jx'_j{\ensuremath{\Delta(x'_j)}})n,$$ and the expected social cost of $M_k$ is $$((1 - \sum_{1\leq i\leq k, i\neq j} p_i x_i {\ensuremath{\Delta(x_i)}}) - p_jx_j{\ensuremath{\Delta(x_j)}})n.$$ Since $p_jx'_j{\ensuremath{\Delta(x'_j)}}>p_jx_j{\ensuremath{\Delta(x_j)}}$, the expected social cost of $M'_k$ is less than the expected social cost of $M_k$.
\[lem:onedelta<0-on-slope-s1\] For any [$(c,s_1,s_2)$-El Farol game]{}, any optimal mediator that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_k)}},p_k)\}$ has exactly one configuration that has no players advised to go, and any other configuration has at least a $\frac{c}{s_1}$-fraction of players advised to go.
Let $M_k$ be an optimal mediator over $k\geq 2$ configurations that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$. First we prove that $M_k$ must have a configuration where no players are advised to go. We know by [Lemma ]{}\[fact:signedDelta\_i\] that there exists some $1\leq j\leq k$ where ${\ensuremath{\Delta(x_j)}} < 0$. By [Lemma ]{}\[fact:deltas\], ${\ensuremath{\Delta(x_j)}}<0$ iff $(x_j \in (1/s_2 + c/s_1,1] \ and \ f(1) > 1)$ or $x_j \in [0,\frac{c-1}{s_1})$. Now we do a case analysis for $x_j$.\
**Case 1:** $x_j \in (1/s_2 + c/s_1,1] \ and \ f(1) > 1$. Assume by way of contradiction that $M_k$ has no configuration that has less than a $\frac{c-1}{s_1}$-fraction of players advised to go. Let $x_q$ be the smallest fraction that is $x_q>1/s_2 + c/s_1$, where $1\leq q\leq k$. By [Lemmas ]{}\[fact:deltas\] and \[fact:signedDelta\_i\], for $1\leq r\leq k$, $${\ensuremath{\Delta(x_r)}} = \left\{
\begin{array}{l l}
>0 & \quad \mbox{if $x_r<x_q$,}\\
<0 & \quad \mbox{otherwise}.\\
\end{array} \right.$$ Note that by the definition of the configuration distribution, if $x_r = x_q$ then $r=q$. Therefore, we have $$\label{eq:not-a-mediator}
\sum_{1\leq r \leq k, r\neq q} p_r{\ensuremath{\Delta(x_r)}}(x_r-x_q) < 0.$$ By [Lemma ]{}\[fact:implication\], Inequality [(\[eq:not-a-mediator\])]{} contradicts that $M_k$ is an optimal mediator. Thus, $M_k$ must have a configuration, ${\ensuremath{C(x)}}$, where $x<\frac{c-1}{s_1}$, and the rest of the argument is as in Case 2.\
**Case 2:** $x_j \in [0,\frac{c-1}{s_1})$. By [Lemma ]{}\[fact:delta<0-x=0\], and since $M_k$ is an optimal mediator, $x_j=0$.
By the definition of the configuration distribution, $M_k$ has no two configurations that have the same fraction of players that are advised to go. So $M_k$ has exactly one configuration, over all the $k$ configurations, that has no players advised to go. We know that $\Delta(\frac{c-1}{s_1}) = 0$. By [Lemma ]{}\[fact:signedDelta\_i\], there is no optimal mediator that has a configuration ${\ensuremath{C(\frac{c-1}{s_1})}}$. Now since $M_k$ is an optimal mediator, by [Lemmas ]{}\[fact:delta<0-x=0\], \[fact:delta>0-x=c/s\_1\], \[fact:f(1)<1,f(x)<=f(1),(c-1)/s\_1<x<c/s\_1\] and \[fact:f(1)<1,f(x)>f(1),(c-1)/s\_1<x<c/s\_1\], $M_k$ has no configuration in which an $x$-fraction of players is advised to go, where $x \in (0,\frac{c}{s_1})$.
\[fact:x=py+(1-p)z\] For any $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_i)}},p_i),..,({\ensuremath{C(x_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$, and for any arbitrary $x_i$ and $x_j$ such that $x_j>x_i\geq \frac{c}{s_1}$, there exists $\D\{({\ensuremath{C(x_1)}},p_1),\\..,({\ensuremath{C(x_{i-1})}},p_{i-1}),({\ensuremath{C(x_{i+1})}},p_{i+1}),..,
({\ensuremath{C(x'_j)}},p_i+p_j),..,
({\ensuremath{C(x_k)}},p_k)\}$, where $x'_j=\frac{p_i}{p_i+p_j}x_i+\frac{p_j}{p_i+p_j}x_j$. Moreover,
$$1) \ (p_i+p_j)x'_j{\ensuremath{\Delta(x'_j)}}>p_ix_i{\ensuremath{\Delta(x_i)}}+p_jx_j{\ensuremath{\Delta(x_j)}}.$$ $$2) \ (p_i+p_j)(1-x'_j){\ensuremath{\Delta(x'_j)}} < p_i(1-x_i){\ensuremath{\Delta(x_i)}}+p_j(1-x_j){\ensuremath{\Delta(x_j)}}.$$
Let $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_i)}},p_i),..,({\ensuremath{C(x_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$ be a configuration distribution that has $x_j>x_i\geq \frac{c}{s_1}$.
Also let $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_{i-1})}},p_{i-1}),({\ensuremath{C(x_{i+1})}},p_{i+1}),..,({\ensuremath{C(x'_j)}},p_i+p_j),..,\\({\ensuremath{C(x_k)}},p_k)\}$ be a configuration distribution that has $x'_j=\frac{p_i}{p_i+p_j}x_i+\frac{p_j}{p_i+p_j}x_j$. We know that $0<p_i,p_j<1$ and $x_j>x_i$. Thus $x_i<x'_j<x_j$. Assume by way of contradiction that $$(p_i+p_j)x'_j{\ensuremath{\Delta(x'_j)}}\leq p_ix_i{\ensuremath{\Delta(x_i)}}+p_jx_j{\ensuremath{\Delta(x_j)}},$$ or equivalently, $$x'_j{\ensuremath{\Delta(x'_j)}}\leq \frac{p_i}{p_i+p_j}x_i{\ensuremath{\Delta(x_i)}}+\frac{p_j}{p_i+p_j}x_j{\ensuremath{\Delta(x_j)}}.$$ Let $p=\frac{p_i}{p_i+p_j}$, so $1-p=\frac{p_j}{p_i+p_j}$. Then we have $$x'_j{\ensuremath{\Delta(x'_j)}}\leq px_i{\ensuremath{\Delta(x_i)}}+(1-p)x_j{\ensuremath{\Delta(x_j)}}.$$ Recall that for $\frac{c}{s_1}\leq x\leq 1$, ${\ensuremath{\Delta(x)}}=1-s_2(x-\frac{c}{s_1})$. Since $\frac{c}{s_1}\leq x_i, x_j, x'_j\leq 1$, we get $$x'_j(1-s_2(x'_j-\frac{c}{s_1})) \leq
px_i(1-s_2(x_i-\frac{c}{s_1}))+
(1-p)x_j(1-s_2(x_j-\frac{c}{s_1})).$$ Since $x'_j=px_i+(1-p)x_j$, we have $$x'_j(-s_2(x'_j-\frac{c}{s_1})) \leq
px_i(-s_2(x_i-\frac{c}{s_1}))+
(1-p)x_j(-s_2(x_j-\frac{c}{s_1})).$$ We know that $s_2>0$, and hence dividing by $-s_2$, we get $$x'_j(x'_j-\frac{c}{s_1}) \geq px_i(x_i-\frac{c}{s_1})+
(1-p)x_j(x_j-\frac{c}{s_1}).$$ Since $-\frac{c}{s_1}x'_j=-\frac{c}{s_1}(px_i+(1-p)x_j)$, we have $$x'^2_j \geq px_i^2+
(1-p)x_j^2.$$ Substituting $x'_j$ by $px_i+(1-p)x_j$, we get $$p^2x_i^2+2p(1-p)x_ix_j+(1-p)^2x_j^2 \geq px_i^2+
(1-p)x_j^2.$$ By rearranging, we have $$p(1-p)(x_j^2-2x_ix_j+x_i^2) \leq 0.$$ Now since $0<p<1$, we can divide by $p(1-p)$, and we get $$(x_j-x_i)^2 \leq 0,$$ which is a contradiction since $x_j\neq x_i$. This proves that $$\label{eq:cond1}
(p_i+p_j)x'_j{\ensuremath{\Delta(x'_j)}} > p_ix_i{\ensuremath{\Delta(x_i)}}+p_jx_j{\ensuremath{\Delta(x_j)}}.$$ Now we prove that $(p_i+p_j)(1-x'_j){\ensuremath{\Delta(x'_j)}} < p_i(1-x_i){\ensuremath{\Delta(x_i)}}+p_j(1-x_j){\ensuremath{\Delta(x_j)}}$. To do so, we first show that $(p_i+p_j){\ensuremath{\Delta(x'_j)}} = p_i{\ensuremath{\Delta(x_i)}}+
p_j{\ensuremath{\Delta(x_j)}}$.
We know that $$\begin{aligned}
&& x'_j=\frac{p_i}{p_i+p_j}x_i+\frac{p_j}{p_i+p_j}x_j \\
\Longleftrightarrow && (p_i+p_j)x'_j = p_ix_i+p_jx_j \\
\Longleftrightarrow && (p_i+p_j)(x'_j-\frac{c}{s_1}) = p_i(x_i-\frac{c}{s_1})+p_j(x_j-\frac{c}{s_1}) \\
\Longleftrightarrow && (p_i+p_j)s_2(x'_j-\frac{c}{s_1}) = p_is_2(x_i-\frac{c}{s_1})+p_js_2(x_j-\frac{c}{s_1}) \\
\Longleftrightarrow && (p_i+p_j)(1-s_2(x'_j-\frac{c}{s_1})) = p_i(1-s_2(x_i-\frac{c}{s_1}))+p_j(1-s_2(x_j-\frac{c}{s_1}))\end{aligned}$$ Recall that ${\ensuremath{\Delta(x)}}=1-s_2(x-\frac{c}{s_1})$ when $\frac{c}{s_1}\leq x\leq 1$. Since $\frac{c}{s_1}\leq x_i, x_j, x'_j\leq 1$, we get $$\label{eq:cond2}
(p_i+p_j){\ensuremath{\Delta(x'_j)}} = p_i{\ensuremath{\Delta(x_i)}}+p_j{\ensuremath{\Delta(x_j)}}.$$ By subtracting [(\[eq:cond1\])]{} from [(\[eq:cond2\])]{}, we obtain $$(p_i+p_j)(1-x'_j){\ensuremath{\Delta(x'_j)}} < p_i(1-x_i){\ensuremath{\Delta(x_i)}}+p_j(1-x_j){\ensuremath{\Delta(x_j)}}.$$
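The two inequalities rest on the strict concavity of $x{\ensuremath{\Delta(x)}}$ and the linearity of ${\ensuremath{\Delta(x)}}$ on $[\frac{c}{s_1},1]$. The following sketch (a random spot-check, not a proof; it reuses the `delta` helper from the earlier sketch) verifies on random instances that merging two such configurations into their probability-weighted mean increases the "go" term and decreases the "stay" term:

```python
# Sketch: random spot-check of Lemma [fact:x=py+(1-p)z] for x_j > x_i >= c/s1.
import random

def check_merge(c, s1, s2, trials=1000):
    lo = c / s1
    for _ in range(trials):
        xi, xj = sorted(random.uniform(lo, 1.0) for _ in range(2))
        if xj - xi < 1e-3:        # skip nearly equal points (numerical noise)
            continue
        pi, pj = random.uniform(0.01, 0.5), random.uniform(0.01, 0.5)
        xm = (pi * xi + pj * xj) / (pi + pj)
        go_old = pi * xi * delta(xi, c, s1, s2) + pj * xj * delta(xj, c, s1, s2)
        stay_old = (pi * (1 - xi) * delta(xi, c, s1, s2)
                    + pj * (1 - xj) * delta(xj, c, s1, s2))
        go_new = (pi + pj) * xm * delta(xm, c, s1, s2)
        stay_new = (pi + pj) * (1 - xm) * delta(xm, c, s1, s2)
        assert go_new > go_old and stay_new < stay_old
    return True

print(check_merge(c=2, s1=4, s2=10))
```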
\[lem:oneDelta\_i>0-on-s2-slope\] For any [$(c,s_1,s_2)$-El Farol game]{}, any optimal mediator that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_k)}},p_k)\}$ has at most one configuration that has at least a $\frac{c}{s_1}$-fraction of players advised to go, and any other configuration has less than a $\frac{c}{s_1}$-fraction of players advised to go.
Assume by way of contradiction that there is an optimal mediator $M_k$ that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_i)}},p_i),..,({\ensuremath{C(x_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$, where $x_j>x_i\geq \frac{c}{s_1}$. Let $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_{i-1})}},p_{i-1}),({\ensuremath{C(x_{i+1})}},p_{i+1}),..,({\ensuremath{C(x'_j)}},p_i+p_j),\\..,({\ensuremath{C(x_k)}},p_k)\}$ be a configuration distribution that has $x'_j=\frac{p_i}{p_i+p_j}x_i+\frac{p_j}{p_i+p_j}x_j$. Since $M_{k}$ is a mediator, by [Lemma ]{}\[fact:const\], we have $$\label{eq:lem:oneDelta_i>0-on-s2-slope-const1}
p_ix_i{\ensuremath{\Delta(x_i)}}+ p_jx_j{\ensuremath{\Delta(x_j)}} + \sum_{1\leq r\leq k, r\neq i, r\neq j} p_rx_r{\ensuremath{\Delta(x_r)}} \geq 0$$ and $$\label{eq:lem:oneDelta_i>0-on-s2-slope-const2}
p_i(1-x_i){\ensuremath{\Delta(x_i)}}+ p_j(1-x_j){\ensuremath{\Delta(x_j)}} + \sum_{1\leq r\leq k, r\neq i, r\neq j} p_r(1-x_r){\ensuremath{\Delta(x_r)}}\leq 0.$$ By [Lemma ]{}\[fact:x=py+(1-p)z\], we have $$\label{eq:lem:oneDelta_i>0-on-s2-slope-cond1}
(p_i+p_j)x'_j{\ensuremath{\Delta(x'_j)}} > p_ix_i{\ensuremath{\Delta(x_i)}}+p_jx_j{\ensuremath{\Delta(x_j)}}$$ and $$\label{eq:lem:oneDelta_i>0-on-s2-slope-cond2}
(p_i+p_j)(1-x'_j){\ensuremath{\Delta(x'_j)}} < p_i(1-x_i){\ensuremath{\Delta(x_i)}}+p_j(1-x_j){\ensuremath{\Delta(x_j)}}.$$ By Inequalities [(\[eq:lem:oneDelta\_i>0-on-s2-slope-const1\])]{} and [(\[eq:lem:oneDelta\_i>0-on-s2-slope-cond1\])]{}, we get $$\label{eq:lem4.11:cond1}
(p_i+p_j)x'_j{\ensuremath{\Delta(x'_j)}} + \sum_{1\leq r\leq k, r\neq i, r\neq j} p_rx_r{\ensuremath{\Delta(x_r)}}> 0.$$ Similarly, by Inequalities [(\[eq:lem:oneDelta\_i>0-on-s2-slope-const2\])]{} and [(\[eq:lem:oneDelta\_i>0-on-s2-slope-cond2\])]{}, we obtain $$\label{eq:lem4.11:cond2}
(p_i+p_j)(1-x'_j){\ensuremath{\Delta(x'_j)}} + \sum_{1\leq r\leq k, r\neq i, r\neq j} p_r(1-x_r){\ensuremath{\Delta(x_r)}}< 0.$$ By [Lemma ]{}\[fact:const\] and Inequalities [(\[eq:lem4.11:cond1\])]{} and [(\[eq:lem4.11:cond2\])]{}, $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_{i-1})}},p_{i-1}),\\({\ensuremath{C(x_{i+1})}},p_{i+1}),..,({\ensuremath{C(x'_j)}},p_i+p_j),..,({\ensuremath{C(x_k)}},p_k)\}$ is a correlated equilibrium. Let $M_{k-1}$ be a mediator that uses this correlated equilibrium. By [Lemma ]{}\[fact:socialcost\], the expected social cost of $M_{k}$ is $$((1 - \sum_{1\leq r\leq k, r\neq i, r\neq j} p_r x_r {\ensuremath{\Delta(x_r)}}) - p_ix_i{\ensuremath{\Delta(x_i)}}-p_jx_j{\ensuremath{\Delta(x_j)}})n,$$ and the expected social cost of $M_{k-1}$ is $$((1 - \sum_{1\leq r\leq k, r\neq i, r\neq j} p_r x_r {\ensuremath{\Delta(x_r)}}) -(p_i+p_j)x'_j{\ensuremath{\Delta(x'_j)}})n.$$ Since $(p_i+p_j)x'_j{\ensuremath{\Delta(x'_j)}}>p_ix_i{\ensuremath{\Delta(x_i)}}+p_jx_j{\ensuremath{\Delta(x_j)}}$, the expected social cost of $M_{k-1}$ is less than the expected social cost of $M_{k}$. This contradicts that $M_{k}$ is an optimal mediator.
\[lem:two-configurations\] For any [$(c,s_1,s_2)$-El Farol game]{}, there exists an optimal mediator that uses $\D\{({\ensuremath{C(0)}},p),({\ensuremath{C(x)}},1-p)\}$, where $0<p<1$; and $\frac{c}{s_1} \leq x < \frac{1}{s_2}+\frac{c}{s_1}$ if $f(1) \geq 1$, otherwise $\frac{c}{s_1} \leq x \leq 1$.
By the definition of the configuration distribution, a mediator has at least two configurations. By Lemmas \[lem:onedelta<0-on-slope-s1\] and \[lem:oneDelta\_i>0-on-s2-slope\], there exists an optimal mediator that has exactly two configurations. The first configuration has no players advised to go, and the second configuration has an $x$-fraction of players advised to go, where $x\geq \frac{c}{s_1}$. Since the first configuration has zero players advised to go, by [Lemma ]{}\[fact:deltas\], ${\ensuremath{\Delta(0)}}<0$. By [Lemma ]{}\[fact:signedDelta\_i\], we must have ${\ensuremath{\Delta(x)}}>0$. We know that $x\geq \frac{c}{s_1}$. By [Lemma ]{}\[fact:deltas\], if $f(1) \geq 1$, then $\frac{c}{s_1} \leq x< \frac{1}{s_2}+\frac{c}{s_1}$; otherwise, $\frac{c}{s_1} \leq x \leq 1$.
The Reduction of Mediators for $c \leq 1$
-----------------------------------------
Now we consider the case that $c \leq 1$ in the following lemma.
\[lem:c<1,mv=1\] For any [$(c,s_1,s_2)$-El Farol game ]{}, if $c \leq 1$, then ${\emph{MV}}= 1$.
In a manner similar to Lemma \[fact:signedDelta\_i\], any optimal mediator over $k \geq 2$ configurations does not have a configuration ${\ensuremath{C(x)}}$ with $\Delta(x) = 0$.
Also in a manner similar to Lemmas \[fact:delta>0-x=c/s\_1\], \[fact:f(1)<1,f(x)<=f(1),(c-1)/s\_1<x<c/s\_1\] and \[fact:f(1)<1,f(x)>f(1),(c-1)/s\_1<x<c/s\_1\], for any mediator $M_k$ that uses $\D\{({\ensuremath{C(x_1)}},p_1),..,({\ensuremath{C(x_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$, where $0 \leq x_j < \frac{c}{s_1}$, there exists a mediator $M'_k$ of less expected social cost, which uses $\D\{({\ensuremath{C(x_1)}},p_1),..,\\({\ensuremath{C(x'_j)}},p_j),..,({\ensuremath{C(x_k)}},p_k)\}$, where $\frac{c}{s_1} \leq x'_j< \frac{1}{s_2}+\frac{c}{s_1}$ if $ f(1) \geq 1$; otherwise, $\frac{c}{s_1} \leq x'_j \leq 1$.
Finally, in a manner similar to Lemma \[lem:oneDelta\_i>0-on-s2-slope\], any optimal mediator has at most one configuration, ${\ensuremath{C(x)}}$, where $x \geq \frac{c}{s_1}$.
Therefore, for $c \leq 1$, the best correlated equilibrium is a configuration distribution over just one configuration, which is trivially the best Nash equilibrium.
An Optimal Mediator
-------------------
We have proved that for any [$(c,s_1,s_2)$-El Farol game]{}, if $c \leq 1$ then the best correlated equilibrium is the best Nash equilibrium; otherwise, there exists an optimal mediator that is over two configurations. Now we describe this mediator in detail.
\[lem:socialcost(x\_2)\] For any [$(c,s_1,s_2)$-El Farol game]{}, if $c > 1$, then $\D\{({\ensuremath{C(0)}},p),({\ensuremath{C(x)}},1-p)\}$ is the best correlated equilibrium, where ${\ensuremath{\lambda}}(c,s_1,s_2) = c(\frac{1}{s_1}+\frac{1}{s_2})- \sqrt{\frac{c(\frac{1}{s_1}+\frac{1}{s_2})(c-1)}{s_2}}$, $$x = \left\{
\begin{array}{l l}
{\ensuremath{\lambda}}(c,s_1,s_2) & \quad \mbox{if $\frac{c}{s_1} \leq {\ensuremath{\lambda}}(c,s_1,s_2) < 1$,}\\
\frac{c}{s_1} & \quad \mbox{if $ {\ensuremath{\lambda}}(c,s_1,s_2) < \frac{c}{s_1}$,}\\
1 & \quad \mbox{$ otherwise$.}\\
\end{array} \right.$$ and $p=\frac{(1-x)(1-f(x))}{(1-x)(1-f(x))+c-1}$. Moreover, the expected social cost is $$(p+(1-p)(x f(x)+(1-x)))n.$$
By Lemma \[lem:two-configurations\], there exists an optimal mediator $M_2$ that uses $\D\{({\ensuremath{C(0)}},p),\\({\ensuremath{C(x)}},1-p)\}$, where $\frac{c}{s_1} \leq x < \frac{1}{s_2}+\frac{c}{s_1}$ if $f(1) \geq 1$; otherwise, $\frac{c}{s_1} \leq x \leq 1$.
Now we determine $p$ and $x$ so that $M_2$ is an optimal mediator.
First, we determine $p$. By Constraint [(\[D\_kconstraints<0\])]{} of [Lemma ]{}\[fact:const\], we have $$\label{lem:socialcost(x)-temp1}
p{\ensuremath{\Delta(0)}} + (1-p)(1-x){\ensuremath{\Delta(x)}} \leq 0.$$ We know that $c>1$, ${\ensuremath{\Delta(0)}}=1-c$ and ${\ensuremath{\Delta(x)}}>0$. By rearranging Inequality [(\[lem:socialcost(x)-temp1\])]{}, we obtain $$\label{lem:socialcost(x)-p}
p \geq \frac{(1-x){\ensuremath{\Delta(x)}}}{(c-1) + (1-x){\ensuremath{\Delta(x)}}}.$$ Recall that the cost of any configuration, ${\ensuremath{C(x_i)}}$, is ${\ensuremath{Cost({\ensuremath{C(x_i)}})}}=(1-x_i{\ensuremath{\Delta(x_i)}})n$. Thus ${\ensuremath{Cost({\ensuremath{C(0)}})}}=n$, and ${\ensuremath{Cost({\ensuremath{C(x)}})}}=(1-x{\ensuremath{\Delta(x)}})n$. Since ${\ensuremath{\Delta(x)}}>0$, ${\ensuremath{Cost({\ensuremath{C(x)}})}}<n$. Thus, ${\ensuremath{Cost({\ensuremath{C(x)}})}}<{\ensuremath{Cost({\ensuremath{C(0)}})}}$. We know that the social cost of $M_2$ is $$\label{lem:socialcost(x)-socialcost}
p{\ensuremath{Cost({\ensuremath{C(0)}})}}+(1-p){\ensuremath{Cost({\ensuremath{C(x)}})}}.$$ Since ${\ensuremath{Cost({\ensuremath{C(x)}})}}<{\ensuremath{Cost({\ensuremath{C(0)}})}}$, the minimum expected social cost is when $p$ is the smallest possible value in Inequality [(\[lem:socialcost(x)-p\])]{} which is $\frac{(1-x){\ensuremath{\Delta(x)}}}{(c-1) + (1-x){\ensuremath{\Delta(x)}}}$.
Now we determine $x$. By [Lemma ]{}\[fact:socialcost\], the expected social cost of $M_2$ is $$(1 - (1-p) x {\ensuremath{\Delta(x)}})n.$$ Since $p=\frac{(1-x){\ensuremath{\Delta(x)}}}{(c-1) + (1-x){\ensuremath{\Delta(x)}}}$, the expected social cost is then $$(1 - \frac{(c-1)x {\ensuremath{\Delta(x)}}}{(c-1) + (1-x){\ensuremath{\Delta(x)}}})n.$$
As $M_2$ is an optimal mediator, we minimize its expected social cost with respect to $x$. Thus $g(x)$ is maximized with respect to $x$, where $$g(x) = \frac{(c-1)x {\ensuremath{\Delta(x)}}}{(c-1) + (1-x){\ensuremath{\Delta(x)}}}.$$ Hence, we have $$\frac{dg(x)}{dx} = \frac{(c-1)[(c-1 + (1-x){\ensuremath{\Delta(x)}})({\ensuremath{\Delta(x)}}-s_2x) + x {\ensuremath{\Delta(x)}}((1-x)s_2+{\ensuremath{\Delta(x)}})]}{((c-1) + (1-x){\ensuremath{\Delta(x)}})^2}.$$ By rearranging and canceling common terms, we obtain $$\frac{dg(x)}{dx} = \frac{(c-1)[({\ensuremath{\Delta(x)}})^2+(c-1){\ensuremath{\Delta(x)}}-(c-1)s_2x]}{((c-1) + (1-x){\ensuremath{\Delta(x)}})^2}.$$
We know that ${\ensuremath{\Delta(x)}}>0$, $\frac{c}{s_1}\leq x < \frac{c}{s_1}+\frac{1}{s_2}$, $x \leq 1$ and $c>1$, so the denominator is always positive. By setting the numerator to zero and dividing by $c-1$, we get $$\label{maxsw:df1}
({\ensuremath{\Delta(x)}})^2+(c-1){\ensuremath{\Delta(x)}}-(c-1)s_2x=0$$
By solving Equation [(\[maxsw:df1\])]{}, we have $x = c(\frac{1}{s_1}+\frac{1}{s_2})\pm \sqrt{\frac{c(\frac{1}{s_1}+\frac{1}{s_2})(c-1)}{s_2}}$. Now let ${\ensuremath{\lambda}}(c,s_1,s_2) = c(\frac{1}{s_1}+\frac{1}{s_2})- \sqrt{\frac{c(\frac{1}{s_1}+\frac{1}{s_2})(c-1)}{s_2}}$ and $\bar{{\ensuremath{\lambda}}}(c,s_1,s_2) = (c(\frac{1}{s_1}+\frac{1}{s_2})+ \sqrt{\frac{c(\frac{1}{s_1}+\frac{1}{s_2})(c-1)}{s_2}})$.
Since $ \bar{{\ensuremath{\lambda}}}(c,s_1,s_2) > (\frac{1}{s_2}+\frac{c}{s_1})$, by Lemma \[lem:two-configurations\], it is out of range. Therefore, we have exactly one root $x = {\ensuremath{\lambda}}(c,s_1,s_2)$.
We know $\frac{dg(x)}{dx}\mid_{(x = \frac{c}{s_1})}<0$ iff ${\ensuremath{\lambda}}(c,s_1,s_2)<\frac{c}{s_1}$, and $\frac{dg(x)}{dx}\mid_{(x = 1)}>0$ iff ${\ensuremath{\lambda}}(c,s_1,s_2) > 1$. Also we know that ${\ensuremath{\lambda}}(c,s_1,s_2) < \frac{c}{s_1}+\frac{1}{s_2}$; and $\frac{c}{s_1} \leq x < \frac{1}{s_2}+\frac{c}{s_1}$ if $f(1) \geq 1$, otherwise, $\frac{c}{s_1} \leq x \leq 1$. Therefore, for $\frac{c}{s_1} \leq {\ensuremath{\lambda}}(c,s_1,s_2) \leq 1$, the maximum of $g(x)$ is at $x={\ensuremath{\lambda}}(c,s_1,s_2)$. Moreover, if ${\ensuremath{\lambda}}(c,s_1,s_2) <\frac{c}{s_1}$, then the maximum of $g(x)$ is at $x = \frac{c}{s_1}$; and for ${\ensuremath{\lambda}}(c,s_1,s_2) > 1$, the maximum is at $x = 1$.
Recall that the expected social cost of $\D\{({\ensuremath{C(0)}},p),({\ensuremath{C(x)}},1-p)\}$ is $$p{\ensuremath{Cost({\ensuremath{C(0)}})}}+(1-p){\ensuremath{Cost({\ensuremath{C(x)}})}},$$ or equivalently, $$(p+(1-p)(xf(x)+(1-x)))n.$$
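As a cross-check of this derivation (an illustration only, reusing `f`, `delta` and `x_star` from the earlier sketches), the following sketch minimizes the two-configuration per-player cost by brute force over the admissible range of $x$ and compares the minimizer with the closed-form value:

```python
# Sketch: numerical cross-check of Lemma [lem:socialcost(x_2)] for c > 1.
def two_config_cost(x, c, s1, s2):
    dx = delta(x, c, s1, s2)
    p = (1 - x) * dx / ((1 - x) * dx + c - 1)
    return p + (1 - p) * (x * f(x, c, s1, s2) + (1 - x))

def brute_force_x(c, s1, s2, steps=100000):
    lo, hi = c / s1, min(1.0, c / s1 + 1.0 / s2)   # range from Lemma [lem:two-configurations]
    xs = (lo + (hi - lo) * i / steps for i in range(steps + 1))
    return min(xs, key=lambda x: two_config_cost(x, c, s1, s2))

c, s1, s2 = 2, 4, 10                                # arbitrary example with c > 1
print(brute_force_x(c, s1, s2), x_star(c, s1, s2))  # both should be ~0.5
```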
The Mediation Metrics
---------------------
Now we compute the [*Mediation Value*]{} and the [*Enforcement Value*]{}. Recall that the [*Mediation Value*]{} ([*MV*]{}) is the ratio of the minimum social cost over all Nash equilibria to the minimum social cost over all mediators, and the [*Enforcement Value*]{} ([*EV*]{}) is the ratio of the minimum social cost over all mediators to the optimal social cost.
For $c \leq 1$, by Lemma \[lem:c<1,mv=1\], ${\emph{MV}}= 1$; and by Lemmas \[lem:socialoptimum\] and \[lem:bestnash\], ${\emph{EV}}= \frac{\min(f(1),1)}{{\ensuremath{y^*}}f({\ensuremath{y^*}})+(1-{\ensuremath{y^*}})}$.
For $c > 1$, by Lemmas \[lem:bestnash\] and \[lem:socialcost(x\_2)\], the [*Mediation Value*]{} is: $$\frac{\min(f(1),1)}{p+(1-p)(xf(x)+(1-x))};$$ and by Lemmas \[lem:socialoptimum\] and \[lem:socialcost(x\_2)\], the [*Enforcement Value*]{} is: $$\frac{p+(1-p)(xf(x)+(1-x))}{{\ensuremath{y^*}}f({\ensuremath{y^*}})+(1-{\ensuremath{y^*}})}.$$
|
---
author:
- Fuminori Hasegawa
- Masahiro Kawasaki
bibliography:
- 'ADPBHf.bib'
title: 'Primordial Black Holes from Affleck-Dine Mechanism'
---
IPMU18-0115
Introduction
============
The recent observations of gravitational waves from mergers of binary black holes (GW150914 [@TheLIGOScientificCollaboration2016b], GW151226 [@TheLIGOScientificCollaboration2016a], GW170104 [@TheLIGOScientificCollaboration2017a], GW170814 [@Abbott2017]) have revealed the existence of heavy BHs with a mass $\sim 30M_\odot$. It is still debated how such heavy BH binaries can be formed through stellar evolution, and many researchers are exploring their origin. A primordial black hole (PBH) is one of the candidates that can account for these heavier BHs [@Bird2016; @Clesse2016; @Kashlinsky:2016sdv; @Carr2016; @Eroshenko2016; @Sasaki2016]. PBHs are formed by the gravitational collapse of over-dense regions in the early Universe. Therefore, in contrast to stellar BHs, PBHs can have a very wide range of masses, set by the scale of the over-dense regions. To generate such over-densities in the early universe, inflation is well motivated and has been studied extensively [@Yokoyama:1995ex; @GarciaBellido:1996qt; @Kawasaki:1997ju; @Kawasaki2016; @Inomata2016; @Inomata2017aa; @Inomata2017]. Since the density perturbations generated by conventional inflation are nearly scale invariant and too small to collapse into PBHs, much effort has been made to amplify the curvature perturbations only at the small scales corresponding to $\sim 30M_\odot$.
Meanwhile, such amplified small-scale density perturbations, which can be the seeds of the PBHs, are severely constrained by cosmological observations. First, they cause a distortion of the Cosmic Microwave Background (CMB) spectrum because these perturbations dissipate into the background thermal plasma. In fact, the observation of the $\mu$-distortion excludes the density perturbations which correspond to PBHs with mass $4\times10^2M_\odot\lesssim M_{\rm PBH}\lesssim4\times10^{13}M_\odot$ [@Kohri2014]. Moreover, large curvature perturbations could source tensor perturbations through second-order effects [@Saito2008; @Saito2009]. It is known that such secondary GWs can be significantly larger than the first-order ones and are severely constrained by observations of pulsar timing. According to the latest results of pulsar timing array (PTA) experiments [@Arzoumanian2015; @Lentati2015; @Shannon2015], inflationary PBHs with mass $0.1M_\odot{\protect\raisebox{-0.5ex}{$\:\stackrel{\textstyle <}{\sim}\:$}}M_{\rm PBH}{\protect\raisebox{-0.5ex}{$\:\stackrel{\textstyle <}{\sim}\:$}}10M_\odot$ are already excluded. Consequently, PBHs can explain the massive BHs only in a limited mass range as long as curvature perturbations are used as their seeds. Fortunately, there still exist some successful models which consistently explain the LIGO events with inflationary PBHs while evading those constraints [@Inomata2016; @Inomata2017].
As an alternative mechanism to create PBHs, the one which utilizes the Affleck-Dine (AD) baryogenesis [@Affleck1985; @Dine1996] was studied in refs. [@Dolgov1993; @Dolgov2008; @Blinnikov2016]. In this mechanism, baryon asymmetry is inhomogeneously generated and localized high-baryon regions called “high-baryon bubbles (HBBs)" are created. Since the HBBs become over-dense in the subsequent cosmological evolution, they can gravitationally collapse into PBHs. In their setup, however, ad-hoc, non-SUSY interactions between the inflaton and the AD field are required, although the AD baryogenesis is most naturally realized in the framework of SUSY.
Recently, we found that this inhomogeneous AD baryogenesis can be naturally embedded into a SUSY setup where the dynamics is described by the MSSM flat directions [@Hasegawa:2017jtk]. The point is that the thermal potential for the AD field, which originates from the thermal plasma produced by the inflaton decay, can trigger a multi-vacuum structure of the scalar potential, so that the baryon asymmetry is inhomogeneously produced. Since this mechanism does not require large Gaussian curvature perturbations, the stringent constraints from the $\mu$-distortion and PTA experiments are completely absent. Furthermore, it was shown that the PBHs which explain the LIGO events and the dark matter can be cogenerated if Q-balls [@Coleman1985; @Enqvist1998; @Enqvist1999; @Kasuya2000g; @Kasuya2000; @Kasuya2001] are formed after the AD baryogenesis.
In this paper, we discuss the PBH formation from the AD baryogenesis in more detail and consider its possible extension. We consider both gravity- and gauge-mediated SUSY breaking scenarios, in which the properties of the Q-balls formed after the AD baryogenesis are different. In the gravity-mediated SUSY breaking scenario, since Q-balls are unstable against decay into nucleons, the HBBs become dense due to the QCD phase transition, which transfers the relativistic energy of the massless quarks into the non-relativistic energy of the baryons. On the other hand, Q-balls in the gauge-mediated SUSY breaking scenario are stable. Therefore, their non-relativistic energy density eventually dominates the HBBs, making the HBBs denser than the regions outside them. Since the HBBs should have a high baryon density such as $n_b/s\sim1$ in both cases, we also examine whether AD baryogenesis can really realize this situation for natural cosmological parameters. The HBB scenario in a single inflation model generally predicts a significant number of residual HBBs, which could cause cosmological problems. Therefore, we consider HBB formation in a double inflation model which enables us to suppress the excessive residual HBBs.
This paper is organized as follows. In Sec. \[sec:HBB\], we outline the scenario and discuss how the HBBs are produced from the AD baryogenesis. In Sec. \[sec:HBB\_ditribution\], the distribution of the HBBs over their scales is evaluated in detail. We explain how the HBBs evolve and gravitationally collapse into PBHs in Sec. \[sec:gravitatinal\_collapse\]. In Sec. \[sec:LIGO\_event\], we evaluate the abundance of the PBHs, compare it with the current observational bounds, and discuss its consistency with the current observations of the GWs. The extension to the double inflation scenario is discussed in Sec. \[sec:double\_inflation\]. Sec. \[sec:conclusion\] is devoted to conclusions and discussions.
HBBs from Affleck-Dine field {#sec:HBB}
============================
In this section, we consider the generation of the inhomogeneous baryon asymmetry, that is, of the HBBs. In the previous work [@Hasegawa:2017jtk] it was shown that HBBs are naturally produced in a modified version of the AD baryogenesis. Before we discuss this mechanism in detail, let us briefly review the conventional framework of the AD baryogenesis.
Conventional Affleck-Dine baryogenesis: a review
------------------------------------------------
In the MSSM there exist flat directions with non-vanishing $B-L$ charge, called Affleck-Dine fields. Although the potential of the AD fields is flat in the exact-SUSY limit, it is lifted up by SUSY breaking and non-renormalizable terms with a cutoff at the Planck scale $M_\text{Pl}$. The non-renormalizable term is written as $$\begin{aligned}
W_{\rm AD}=\lambda\frac{\Phi^n}{n{M_{\rm Pl}}^{n-3}},\end{aligned}$$ where $\Phi$ is an Affleck-Dine (super)field, $\lambda$ is a coupling constant and $n\,(\geq4)$ is a certain integer determined by specifying a flat direction. In combination with the SUSY-breaking effect, the scalar potential for the AD field $\phi=\varphi e^{i\theta}$ is given by $$\begin{aligned}
V(\phi)&= (m_\phi^2-cH^2)|\phi|^2+V_{\rm NR},\\
V_{\rm NR}&=\left(\lambda a_M\frac{m_{3/2}\phi^n}{n{M_{\rm Pl}}^{n-3}}+\hc\right)
+\lambda^2\frac{|\phi|^{2(n-1)}}{{M_{\rm Pl}}^{2(n-3)}},\end{aligned}$$ where $m_\phi$ is the soft SUSY breaking mass and $-cH^2$ is the Hubble induced mass with $H$ being the Hubble parameter. The first part of $V_\text{NR}$ represents the so-called A-term, which violates baryon number. In the conventional scenario, the sign of the Hubble induced mass term, which is determined by the coupling to the inflation sector, is assumed to be negative. Then, in the high energy regime $H\gtrsim m_\phi$, the AD field develops a non-vanishing VEV such that $$\begin{aligned}
\label{ADV}
\phi(t)\simeq
\left(\sqrt{\frac{c}{\lambda^2(n-1)}}H(t){M_{\rm Pl}}^{n-3}
\right)^{\frac{1}{n-2}}.\end{aligned}$$ As the Hubble parameter $H(t)$ decreases due to the cosmic expansion, the field value of the AD field $\phi(t)$ also decreases in time. However, when $H(t)$ becomes smaller than $m_{\phi}$, the effective mass of the AD field flips to positive and the AD field starts to oscillate around the origin. At the same time, the phase direction of the AD field $\theta$ receives a “kick" from the A-term potential. Then the AD field $\phi$ starts to rotate in the complex plane, which yields the baryon asymmetry of the universe since the baryon number density is defined by $$\begin{aligned}
n_b=-2q_b\varphi^2\dot\theta.\end{aligned}$$ After some calculation, we can obtain the expression of the baryon abundance $\eta_b~(=n_b/s)$ in terms of the model parameters as $$\begin{aligned}
\label{CAD}
\eta_b&\simeq\epsilon \frac{T_Rm_{3/2}}{H_{\rm osc}^2}\left(\frac{\varphi_{\rm osc}}{{M_{\rm Pl}}}\right)^2,\\
\epsilon&=\sqrt{\frac{c}{n-1}}\frac{q_b|a_M|\sin{(n\theta_0+\arg(a_M))}}{3\left(\frac{n-4}{n-2}+1\right)},\end{aligned}$$ where the subscript “osc" denotes the value evaluated at the time the AD field starts to oscillate.
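As a concreteness check of Eq. (\[CAD\]), the following minimal Python sketch evaluates $\eta_b$ for a given set of model parameters; the numerical inputs ($T_R$, $m_{3/2}$, $H_{\rm osc}$, $\varphi_{\rm osc}$, $\theta_0$, etc.) are purely illustrative assumptions and not benchmark values of this paper.

```python
import numpy as np

M_PL = 2.4e18  # reduced Planck mass in GeV

def eta_b(T_R, m32, H_osc, phi_osc, n=6, c=1.0, q_b=1.0/3.0, a_M=1.0, theta0=0.3):
    """Baryon asymmetry of Eq. (CAD); all dimensionful inputs in GeV."""
    eps = (np.sqrt(c / (n - 1)) * q_b * abs(a_M)
           * np.sin(n * theta0 + np.angle(a_M))
           / (3.0 * ((n - 4) / (n - 2) + 1.0)))
    return eps * T_R * m32 / H_osc**2 * (phi_osc / M_PL)**2

# illustrative numbers only (assumed, not a benchmark point of this paper)
print(eta_b(T_R=1e8, m32=1e3, H_osc=1e3, phi_osc=1e16))
```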
Inhomogeneous Affleck-Dine baryogenesis
---------------------------------------
The mechanism of the HBB production was first proposed in ref. [@Dolgov1993] and developed in refs. [@Dolgov2008; @Blinnikov2016], where AD baryogenesis is employed to produce the baryon asymmetry inside the HBBs. The inhomogeneity of the baryon asymmetry is explained by the perturbation of the initial value of the AD field owing to the quantum fluctuations during inflation. However, their setup is non-SUSY and requires ad-hoc couplings between the inflaton and the AD field. Furthermore, the distribution of the HBBs is hard to estimate due to the complexity of the setup. Recently, in ref. [@Hasegawa:2017jtk], we have succeeded in embedding this mechanism into SUSY, where the dynamics is naturally described by the MSSM flat directions. In this model, the distribution of the HBBs is easily evaluated in terms of the model parameters. Let us see how the HBBs are produced from the MSSM flat directions.
### Model setting
Although the mechanism is based on the conventional AD baryogenesis discussed above, we put two unconventional assumptions:
- During inflation the AD field has a positive Hubble induced mass, while it has negative one after inflation.
- After inflation, the temperature $T$ of the thermal bath due to the decay of the inflaton overcomes the Hubble parameter $H$.
These assumptions are easily satisfied by appropriately choosing the model parameters. Under these assumptions, the scalar potential for the AD field is altered and given by $$\begin{aligned}
\nonumber
V(\phi)=
\begin{cases}
(m_\phi^2+c_IH^2)|\phi|^2+V_{\rm NR},
&({\rm during~inflation}) \\
(m_\phi^2-c_MH^2)|\phi|^2+V_{\rm NR}+V_{\rm T}(\phi),
& ({\rm after~inflation})
\end{cases}\end{aligned}$$ where $c_I,~c_M$ are dimensionless positive constants,[^1] and $V_{\rm T}$ is the thermal potential for the AD field induced by the decay products of the inflaton, written as $$\begin{aligned}
\label{TP}
V_{\rm T}(\phi)&=
\begin{cases}
c_1T^2|\phi|^2, & f_k|\phi|{\protect\raisebox{-0.5ex}{$\:\stackrel{\textstyle <}{\sim}\:$}}T, \\
c_2 T^4\ln\left(\frac{|\phi|^2}{T^2}\right),& |\phi|{\protect\raisebox{-0.5ex}{$\:\stackrel{\textstyle >}{\sim}\:$}}T,
\end{cases}\end{aligned}$$ where $c_1,~c_2$ are $\mathcal{O}(1)$ parameters which are related to the couplings of the AD field to the thermal bath.
We can see that this setting has a significant feature: *multiple vacua appear after inflation*. At first, during inflation, the scalar potential has a minimum at the origin ($\varphi=0$) due to the positive Hubble induced mass of assumption (i). After inflation, as usual, the AD field has a vacuum with a non-vanishing VEV \[Eq. (\[ADV\])\] due to the negative Hubble induced mass. We name this vacuum “B". In addition, around the origin $\varphi\lesssim T/c_1$, the mass of the AD field is [*positive*]{} because the positive thermal mass overcomes the negative Hubble induced mass (= assumption (ii)), which results in the appearance of a second vacuum at $\phi=0$. We name this new vacuum “A”. The shape of the potential after inflation looks like a “dented" Mexican hat, as shown in the lower side of Fig. \[fig:hbb\]. The critical point between the two vacua lies at $\varphi_c(t)\simeq T(t)^2/H(t)$, which is determined by the conditions $V'(\varphi)=0$ and $V''(\varphi)<0$.\
![The schematic view of the dynamics of the AD field before and after inflation. [*Upper side*]{}: During inflation, the IR modes of the AD field diffuse in the complex plane and take different values in different Hubble patches. The probability distribution function $P(N,~\phi)$ is given by the Gaussian form. [*Lower side*]{}: Just after inflation, the multi-vacuum structure appears due to the thermal potential and the negative Hubble induced mass. The patches where $|\phi|>\phi_c$ is satisfied roll down to the vacuum B classically and are identified with HBBs. On the other hand, the others, where $|\phi|<\phi_c$, roll down to the vacuum A.[]{data-label="fig:hbb"}](hbb.pdf){width="85mm"}
### Dynamics of the AD field
Now, let us discuss the dynamics of the AD field in this scenario. We show the schematic view of the dynamics of the AD field in Fig. \[fig:hbb\]. During inflation, the AD field has a positive Hubble induced mass and is located at the origin classically. However, the AD field acquires quantum fluctuations during inflation. Therefore, the IR modes of the AD field ($=$ the AD field coarse-grained over local Hubble patches) stochastically diffuse in the complex plane and $\phi$ takes different values in different Hubble patches. As we will discuss later, this stage is described by a Fokker-Planck equation, and the probability distribution function $P(N,\phi)$, with $N=\ln(a/a_{\rm ini})$ being the $e$-folding number, exhibits a Gaussian distribution [@Vilenkin1982; @Starobinsky1982; @Linde1982].
After inflation, however, highly non-trivial dynamics occurs. As we mentioned, the shape of the potential is deformed into the “dented" Mexican hat and the multi-vacuum structure appears. Then, the AD field must roll down to either of the two vacua A and B classically. If the AD field takes a value $\varphi(t_e)<\varphi_c(t_e)$ in some patch at the end of inflation, $\varphi$ rolls down to the vacuum A. On the other hand, if $\varphi(t_e)>\varphi_c(t_e)$ is satisfied, $\varphi$ rolls down to the vacuum B. As a result, the universe is separated into the two phases A and B after inflation. As we will see, although the vacuum B vanishes later, the patches which go through the phase B acquire the baryon asymmetry and form HBBs.
First, we illustrate the evolution of the phase A. After inflation, in the patches with $\varphi(t_e)<\varphi_c(t_e)$, the AD field rolls down and oscillates around the vacuum A. In this case, the AD field can hardly produce the baryon asymmetry due to the small oscillation amplitude: $$\begin{aligned}
\eta_b^{({\rm A})}\simeq0.\end{aligned}$$ On the other hand, in the patches with $\varphi(t_e)>\varphi_c(t_e)$, the AD field rolls down to the vacuum B, that is, the AD field forms a condensate as in the conventional AD baryogenesis. Therefore, at the time $H(t)\simeq H_{\rm osc}$, the AD field starts to oscillate toward the vacuum A, producing the baryon asymmetry given by $$\begin{aligned}
\label{etab}
\eta_b^{({\rm B})}&\simeq
\epsilon \frac{T_Rm_{3/2}}{H_{\rm osc}^2}
\left(\frac{\varphi_{\rm osc}}{{M_{\rm Pl}}}\right)^2.\end{aligned}$$ Consequently, both phases eventually converge to the vacuum A ($\phi=0$), and the difference in their paths in field space is imprinted as a difference in the baryon asymmetry. This is how the inhomogeneous AD baryogenesis works.
In order to apply this mechanism to the creation of PBHs, we assume that the baryon asymmetry produced in the phase B is very large, $\eta_b^{({\rm B})}\sim1$, and that the volume fraction of the phase B is very small. Since no substantial baryon asymmetry is generated outside the HBBs, we need another AD field or baryogenesis mechanism which realizes the observed homogeneous baryon asymmetry $\eta_b^{\rm ob}\sim10^{-10}$.
Finally we comment on the temperature of the thermal bath produced by the inflaton decay. For our analysis, we want to relate the temperature just after inflation, $T(t_e)$, to the parameters of the inflation model such as $T_R$ and $H(t_e)$. The simplest way is to assume “instantaneous thermalization", where the evolution of the temperature is given by $$\begin{aligned}
T^{\rm inst}(t)\simeq(T_R^2H(t){M_{\rm Pl}})^{1/4}.\end{aligned}$$ Then, since the temperature $T(t)$ decreases slower than $H(t)$, the assumption (ii) is translated as $$\begin{aligned}
\label{Deltai}
\left(\frac{c_1}{c_M}\right)^2\frac{T_R^2{M_{\rm Pl}}}{H(t_{e})^3}>1.\end{aligned}$$ However, the actual time scale of the thermalization is not so short [@Mukaida:2015ria]. As a consequence, the value of $T(t_e)$ could be much lower than the estimation by “instantaneous thermalization", $T^{\rm inst}(t_e)$. In our case, however, the required reheating temperature is so high (Eq. (\[Deltai\])) that the deviation from the $T^{\rm inst}(t_e)$ is not so significant.[^2] Thus assuming $T(t_e)\sim T^{\rm inst}(t_e)$, we rewrite the assumption (ii) as $$\begin{aligned}
\label{Delta}
\Delta\equiv \frac{T_R^2{M_{\rm Pl}}}{H(t_{e})^3}\gtrsim1,\end{aligned}$$ where we set $c_M,~c_1\sim1$ for simplicity. The critical point just after inflation $\varphi_c(t_e)$ is also rewritten as $$\begin{aligned}
\varphi_c(t_e)\equiv\varphi_c=\Delta^{1/2}H(t_e),\end{aligned}$$ where we also set the $\mathcal{O}(1)$ parameters in the model to unity.
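For orientation, a short Python sketch of the quantities defined above ($\Delta$, $\varphi_c$ and the instantaneous-thermalization temperature) is given below; the numerical inputs are illustrative assumptions, with all $\mathcal{O}(1)$ coefficients set to unity as in the text.

```python
import numpy as np

M_PL = 2.4e18  # reduced Planck mass in GeV

def Delta(T_R, H_e):
    """Delta of Eq. (Delta): T_R^2 M_Pl / H(t_e)^3, with O(1) factors set to unity."""
    return T_R**2 * M_PL / H_e**3

def phi_c(T_R, H_e):
    """Critical field value just after inflation, phi_c = Delta^{1/2} H(t_e)."""
    return np.sqrt(Delta(T_R, H_e)) * H_e

def T_inst(T_R, H):
    """Instantaneous-thermalization temperature, (T_R^2 H M_Pl)^{1/4}."""
    return (T_R**2 * H * M_PL)**0.25

# illustrative inputs in GeV (assumed): this choice gives Delta ~ 20 > 1,
# i.e. assumption (ii) is satisfied
T_R, H_e = 3e9, 1e12
print(Delta(T_R, H_e), phi_c(T_R, H_e), T_inst(T_R, H_e))
```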
Distribution of HBBs {#sec:HBB_ditribution}
====================
In this section, we evaluate the distribution of the HBBs in terms of the model parameters. Because the probability distribution function of the AD field during inflation is given by the simple Gaussian form, we can evaluate their distribution analytically.
Volume fraction of HBBs
-----------------------
The evolution of the IR modes of the AD field during inflation is described by a stochastic classical theory with a Langevin equation including Gaussian noise [@Vilenkin1982; @Starobinsky1982; @Linde1982]. Then, the evolution of the probability distribution function of the AD field with respect to the $e$-folding number $N$, $P(N,\phi)$, is described by the Fokker-Planck equation as $$\begin{aligned}
\frac{\partial P(N,\phi)}{\partial N}
=\sum_{i=1,2} \frac{\partial}{\partial \phi_i}
\left[
\frac{V_{\phi_i} P(N,\phi)}{3H^2}
+\frac{H^2}{8\pi^2}\frac{\partial P(N,\phi)}{\partial \phi_i}
\right]\end{aligned}$$ where $(\phi_1,~\phi_2)=(\Re[\phi],~\Im[\phi])$. The first term of the RHS represents the classical force induced by the scalar potential and the second term represents the Gaussian noise due to the quantum fluctuations. Under the initial condition $P(0,~\phi)\propto \delta(\phi)$ and assuming the Hubble parameter during inflation is constant $H(t\leq t_e)=H_I$, we obtain the analytical expression $$\begin{aligned}
\label{qf}
P(N,\phi)
=\frac{e^{-\frac{\varphi^2}{2\sigma^2(N)}}}{2\pi\sigma^2(N)},
~\ \
\sigma^2(N)
=\left(\frac{H_I}{2\pi}\right)^2\frac{1-e^{-c'_IN}}{c'_I}.\end{aligned}$$ Here we have used $V(\phi)\simeq c_IH_I^2\phi^2$ and defined $c'_I\equiv(2/3)c_I$. The distribution of the phase direction $\theta$ is random unless large CP-violating terms such as Hubble induced A-terms are introduced. In the massless limit $c'_I\rightarrow0$, the variance of the AD field grows linearly with respect to $N$ and the scale invariance of the quantum fluctuations is manifest. For non-vanishing $c'_I$, however, the diffusion of the AD field stops for $N\sim1/c'_I$ due to the classical force induced by the mass term. Thus, $c'_I$ determines when the diffusion saturates.
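A minimal numerical sketch of Eq. (\[qf\]) is shown below; the value of $H_I$ is an illustrative assumption, and the last line simply checks the massless limit $\sigma^2\to(H_I/2\pi)^2N$ discussed above.

```python
import numpy as np

def sigma2(N, H_I, cIp):
    """Variance sigma^2(N) of the coarse-grained AD field, Eq. (qf)."""
    return (H_I / (2.0 * np.pi))**2 * (1.0 - np.exp(-cIp * N)) / cIp

def P(phi_abs, N, H_I, cIp):
    """Gaussian probability distribution P(N, phi) as a function of |phi|."""
    s2 = sigma2(N, H_I, cIp)
    return np.exp(-phi_abs**2 / (2.0 * s2)) / (2.0 * np.pi * s2)

H_I = 1e12  # Hubble scale during inflation in GeV (assumed)
# massless-limit check: the ratio below approaches 1 as c'_I -> 0
print(sigma2(10.0, H_I, 1e-6) / ((H_I / (2.0 * np.pi))**2 * 10.0))
```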
As we discussed in the previous section, the HBBs are the bubbles which pass through the vacuum B where AD baryogenesis occurs. Therefore, the physical volume of the HBBs at a certain $N$ is evaluated as $$\begin{aligned}
\label{frac}
V_{\rm B}(N)=V(N)\int_{\varphi>\varphi_c}P(N,\phi)d\phi
\equiv V(N)f_B(N).\end{aligned}$$ Here we represent the physical volume of the Universe at $N$ as $V(N)\sim r_H^3e^{3N}$, where $r_H$ is the Hubble radius during inflation ($\sim H_I^{-1}$). $f_B(N)$ represents the volume fraction of the HBBs at $N$. Since the integration of $f_B$ is straightforward, we can obtain the analytic expression for $f_B$ such that $$\begin{aligned}
f_B(N)&=
\int_0^{2\pi}d\theta\int^\infty_{\varphi_c}\varphi
\frac{e^{-\frac{\varphi^2}{2\sigma^2(N)}}}{2\pi\sigma^2(N)}d\varphi
=e^{-\frac{2\pi^2\Delta}{\tilde{\sigma}^2(N)}},\end{aligned}$$ where we define $\tilde{\sigma}^2(N)\equiv(1-e^{-c'_IN})/c'_I $. It is seen that $\Delta$ is responsible for the overall magnitude of the HBB fraction. We show the evolution of $f_B(N)$ in Fig. \[fig:d\] (left panel). In the figure we take a different $\Delta$ for each $c'_I$, fixing the final fraction of the HBBs $f_B(N_e)$. For larger values of $c'_I$, the growth of $f_B$ saturates earlier because the larger mass ($\propto c'_I$) suppresses the quantum diffusion of the AD field.
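The volume fraction $f_B(N)$ can be evaluated directly from the expression above; the following sketch does so for assumed, illustrative values of $c'_I$ and $\Delta$.

```python
import numpy as np

def f_B(N, cIp, Delta):
    """HBB volume fraction, f_B(N) = exp(-2 pi^2 Delta / sigma_tilde^2(N))."""
    sigma_tilde2 = (1.0 - np.exp(-cIp * N)) / cIp
    return np.exp(-2.0 * np.pi**2 * Delta / sigma_tilde2)

# illustrative parameters (assumed); larger Delta or larger c'_I suppress f_B
for cIp, Delta in [(0.02, 20.0), (0.05, 20.0), (0.05, 25.0)]:
    print(cIp, Delta, f_B(60.0, cIp, Delta))
```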
![Evolution of the volume fraction of HBBs (left) and production rate of HBBs at $N$ (right) for various $c'_I$.[]{data-label="fig:d"}](d.pdf){width="170mm"}
Size distribution of HBBs {#3B}
-------------------------
The creation rate of the HBBs at $N$ is understood by differentiating $V_B(N)$ with respect to $N$: $$\begin{aligned}
\frac{dV_{\rm B}(N)}{dN}
=3V_{\rm B}(N)
+V(N)\int_{\varphi>\varphi_c}\frac{dP(N,\phi)}{dN}d\phi.\end{aligned}$$ We can see that the first term in the RHS represents the growth of the existing HBBs due to the cosmic expansion. The second term represents nothing but the creation of the HBBs at $N$. Since the created HBBs also grow by the cosmic expansion, the fraction of the HBBs formed at $N$ evaluated at the inflation end $N_e$ is $$\begin{aligned}
\label{beta}
\beta_B(N;N_e)
=\frac{1}{V(N_e)}\cdot e^{3(N-N_e)}\cdot
\left[V(N)\int_{\varphi>\varphi_c}\frac{dP(N,\phi)}{dN}d\phi\right].\end{aligned}$$ The third factor on the RHS represents the physical volume of the HBBs produced at $N$ and the second factor represents their growth due to the cosmic expansion. Since the cosmic expansion alone cannot change the volume fraction of the HBBs, the expression in Eq.(\[beta\]) is actually independent of $N_e$ and reduces to $$\begin{aligned}
\beta_B(N)=\frac{d}{dN}f_B(N)~
\left(=\frac{(\pi c'_I)^2\Delta}{2\sinh^2(c'_IN/2)}f_B(N)\right),\end{aligned}$$ as expected. This quantity is nothing but the volume-fraction distribution of the HBBs created at $N$. We show the shape of $\beta_B(N)$ in Fig.\[fig:d\] (right panel). Indeed, the peak of the HBB distribution coincides with the time when the growth of $f_B(N)$ saturates. We can see that for much smaller $c'_I$ the AD field does not feel the “viscosity" of the mass term and the production of the HBBs does not saturate before the end of inflation. In conclusion, the distribution of the HBBs is determined only by $c_I$ and the parameter $\Delta$ defined in Eq.(\[Delta\]).
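Since the analytic form of $\beta_B(N)$ quoted above follows from differentiating $f_B(N)$, it can be cross-checked numerically; the sketch below compares it with a finite-difference derivative for assumed parameter values.

```python
import numpy as np

def f_B(N, cIp, Delta):
    """HBB volume fraction, f_B(N) = exp(-2 pi^2 Delta c'_I / (1 - e^{-c'_I N}))."""
    return np.exp(-2.0 * np.pi**2 * Delta * cIp / (1.0 - np.exp(-cIp * N)))

def beta_B(N, cIp, Delta):
    """Analytic beta_B(N) = (pi c'_I)^2 Delta / (2 sinh^2(c'_I N/2)) * f_B(N)."""
    return (np.pi * cIp)**2 * Delta / (2.0 * np.sinh(cIp * N / 2.0)**2) * f_B(N, cIp, Delta)

cIp, Delta, N, h = 0.03, 20.0, 40.0, 1e-4   # illustrative values (assumed)
numeric = (f_B(N + h, cIp, Delta) - f_B(N - h, cIp, Delta)) / (2.0 * h)
print(beta_B(N, cIp, Delta), numeric)       # the two values should agree
```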
For later convenience, we relate the size of the HBBs to the horizon mass $M_{\rm H}$ evaluated at the time when they reenter the horizon. Since the scale of the HBBs is the same as the horizon scale when they are created, we can write the scale $k$ of the HBBs created at $N$ as $$\begin{aligned}
k(N)=k_*e^{N-N_{\rm CMB}},\end{aligned}$$ where $k_*$ is the CMB pivot scale and $N_{\rm CMB}$ is the number of $e$-foldings when the pivot scale exits the horizon. Since the horizon mass when the scale $k$ reenters the horizon is evaluated as $$\begin{aligned}
M_{\rm H}
\simeq19.3M_\odot\left(\frac{g_*}{10.75}\right)^{-1/6}
\left(\frac{k}{{10^6\rm Mpc}^{-1}}\right)^{-2},\end{aligned}$$ the number of $e$-foldings $N$ is represented in terms $M_{\rm H}$ as $$\begin{aligned}
N(M_{\rm H})
\simeq-\frac{1}{2}\ln\frac{M_{\rm H}}{M_\odot}+21.5+N_{\rm CMB},\end{aligned}$$ where $M_\odot$ is the solar mass and we used $g_*=10.75$ and the CMB pivot scale $k_*=0.002{\rm Mpc}^{-1}$. It is known that typical inflation models suggest $$\begin{aligned}
N_e-N_{\rm CMB}\sim50-60.\end{aligned}$$ On the other hand, to solve the horizon and flatness problems of the Big Bang cosmology, the total number of $e$-foldings of the inflation era should be $$\begin{aligned}
N_e-N_{\rm ini}=N_e\gtrsim60.\end{aligned}$$ Although these values depend on the post-inflationary thermal history, in the rest of the paper, except for Sec.\[sec:double\_inflation\], we fix $N_e=60$ and parametrize $N_{\rm CMB}$ as $$\begin{aligned}
N_{\rm CMB}\sim0-10.\end{aligned}$$ It is also convenient to relate the temperature $T$ to the horizon mass at the horizon reentry as $$\begin{aligned}
T(M_{\rm H})
&=434{\rm MeV}\left(\frac{M_{\rm H}}{M_\odot}\right)^{-1/2},
\\\label{TM}
M_{\rm H}(T)
&=18.8M_\odot\left(\frac{T}{200{\rm MeV}}\right)^{-2}.\end{aligned}$$
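The relations between $N$, the horizon mass $M_{\rm H}$ and the temperature $T$ given in this section can be collected into a few helper functions; the example input (a $30M_\odot$ HBB with an assumed $N_{\rm CMB}=10$) is purely illustrative.

```python
import numpy as np

def N_of_MH(M_H_sun, N_CMB):
    """e-folding number at which the scale with horizon mass M_H (in M_sun) is created."""
    return -0.5 * np.log(M_H_sun) + 21.5 + N_CMB

def T_of_MH(M_H_sun):
    """Temperature (in MeV) at horizon reentry of the scale with horizon mass M_H."""
    return 434.0 * M_H_sun**(-0.5)

def MH_of_T(T_MeV):
    """Horizon mass (in M_sun) at temperature T, Eq. (TM)."""
    return 18.8 * (T_MeV / 200.0)**(-2)

# illustrative: a LIGO-scale HBB of 30 M_sun with an assumed N_CMB = 10
print(N_of_MH(30.0, N_CMB=10.0), T_of_MH(30.0), MH_of_T(200.0))
```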
Gravitational collapse of HBBs {#sec:gravitatinal_collapse}
==============================
The purpose of this section is to clarify the thermal history inside the HBBs and discuss their gravitational collapse into PBHs. The energy densities inside and outside the HBBs are almost the same just after inflation because of the energy conservation of the AD field and the domination of the oscillation energy of the inflaton. However, after AD baryogenesis takes place, the density contrast between the inside and outside of the HBBs becomes significant, resulting in the formation of the PBHs.
Density contrast of HBBs
------------------------
As we discussed in the previous sections, the HBBs are naturally produced as a consequence of the AD baryogenesis. Inside the HBBs, we assume that there is a large baryon asymmetry such as $\eta_b^{\rm in}\simeq\eta_b^{\rm (B)}\sim 1$, while outside the HBBs the baryon asymmetry is $\eta_b^{\rm out}=\eta_b^{\rm ob}\sim10^{-10}$, which must be realized by an additional baryogenesis. This fluctuation of the baryon number density is regarded as a top-hat type isocurvature perturbation. Such a large but small-scale isocurvature perturbation is hardly constrained by the observations, in contrast to adiabatic perturbations. That is why this model does not suffer from the stringent constraints from the CMB $\mu$-distortion and the PTA experiments. In the following we see that the isocurvature perturbations due to HBBs induce substantial density perturbations.
### QCD phase transition
After the AD field decays, the produced baryon number is carried by the quarks. As long as the quarks remain relativistic, density fluctuations are not produced. However, the nearly massless quarks are confined within massive baryons ($=$ protons and neutrons) by the QCD phase transition. Thus, after the QCD phase transition the baryons behave as matter and their energy density is $\rho\simeq n_b m_b$, where $m_b$ is the nucleon mass. The point is that inside the HBBs the contribution of the baryons to the total energy density is large, while it is negligible outside the HBBs. The density contrast between the inside and outside of the HBBs is represented as $$\begin{aligned}
\label{T}
\delta\equiv\frac{\rho^{\rm in}-\rho^{\rm out}}{\rho^{\rm out}}
\simeq\frac{n_b^{\rm in} m_b}{(\pi^2/30)g_*T^4}
\simeq0.3\eta^{\rm in}_b
\left(\frac{T}{200{\rm MeV}}\right)^{-1}
\theta(T_{\rm QCD}-T),\end{aligned}$$ where we have used $m_b\simeq938{\rm MeV}$ and $\theta(x)$ is the Heaviside theta function. This value can be large enough to form PBHs for $\eta_b^{\rm in}\sim1$. Strictly speaking, the temperature of the plasma inside the HBBs is different from that outside the HBBs due to the large chemical potential. However, even in our case $\eta_b^{\rm in}\sim1$, the deviation is at most $\mathcal{O}(1)$ and hence we neglect the difference.
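Eq. (\[T\]) is simple enough to evaluate directly; the following sketch returns the density contrast of an HBB that crosses the horizon at a given temperature, with the input values being illustrative assumptions.

```python
def delta_qcd(T_MeV, eta_b_in, T_QCD_MeV=200.0):
    """Density contrast of an HBB after the QCD transition, Eq. (T)."""
    if T_MeV > T_QCD_MeV:
        return 0.0   # quarks are still relativistic, no contrast yet
    return 0.3 * eta_b_in * (T_MeV / 200.0)**(-1)

# an HBB with eta_b^in ~ 1 crossing the horizon at T = 150 MeV (assumed values)
print(delta_qcd(150.0, 1.0))   # ~0.4, above the threshold delta_c ~ 0.26 derived below
```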
Here we make a comment on the fluctuations of the baryon asymmetry $\eta^{\rm in}_b$ among HBBs. The baryon asymmetry produced by the AD mechanism depends on the initial value of the phase direction of the AD field, $\theta_0$ \[Eq.(\[CAD\])\]. Since the phase direction $\theta$ is almost flat during inflation, that is, there are no CP-violating terms other than the tiny soft A-term, the values of $\theta_0$ in different HBBs are generally different. Thus the baryon asymmetries inside the HBBs differ by the factor $\sin{(n\theta_0+\arg(a_M))}$, which clearly affects the evaluation of the density contrast in Eq.(\[T\]). Although this effect does not bring a substantial change to the following discussion, we can realize a uniform baryon asymmetry by introducing a Hubble induced A-term.
### Q-ball formation
We have so far assumed that the coherent oscillation of the AD field decays into quarks for $t>t_{\rm osc}$. However, it is known that the coherent oscillation of the AD field is usually spatially unstable and fragments into localized lumps, called Q-balls. A Q-ball is a configuration of the complex scalar which minimizes the energy under a fixed global $U(1)$ charge (which, in this case, is identified with the baryon number). The properties of the Q-balls formed after AD baryogenesis depend on the scenario of the mediation of the SUSY breaking effect. In the gravity-mediated SUSY breaking scenario, the produced Q-balls are unstable against the decay into quarks. Therefore, the baryon number is produced from their decay and hence the Q-balls are only the “transients" of the baryogenesis. On the other hand, in the gauge-mediated SUSY breaking scenario, Q-balls are stable and behave as dark matter [@Kasuya2001; @Kasuya2000g]. We represent the abundance of the stable Q-balls formed after the AD baryogenesis inside the HBBs as $Y^{\rm in}_Q\equiv\rho_Q^{\rm in}/s$. Since Q-balls are not produced outside the HBBs, the density contrast becomes $$\begin{aligned}
\label{delta}
\delta
=\frac{\rho_Q^{\rm in}}{(\pi^2/30)g_*T^4}
=\frac{4}{3}\left(\frac{T}{Y^{\rm in}_Q}\right)^{-1},\end{aligned}$$ which can clearly reach $\mathcal{O}(1)$ at $T\sim Y^{\rm in}_Q$. It is known that all the baryon number produced by the AD baryogenesis is enclosed in the Q-balls, and their abundance $Y^{\rm in}_Q$ is evaluated from the Q-ball number density and the Q-ball mass.
PBH formation
-------------
As we have seen, the HBBs become over-dense by the QCD phase transition or the formation of stable Q-balls. If the density contrast is large enough, the self-gravity of the over-dense regions overcomes the pressure and the regions gravitationally collapse into PBHs just after the horizon reentry. In the radiation-dominated era, the threshold value of the density contrast for the PBH formation is estimated as $\delta_c\simeq w$ [@Carr:1974nx], where $w\equiv p/\rho$ is the equation-of-state parameter. Although recent studies perform more precise estimations of $\delta_c$ by analytic/numerical methods, we simply adopt $\delta_c\simeq w$ because our model is hardly sensitive to the choice of $\delta_c$, unlike the case of Gaussian density perturbations.
Then let us consider the PBH formation in this model according to the threshold value $\delta_c\simeq w$. The characteristic point of this model is that the density perturbations are not conserved even on super-horizon scales due to the redshift of the radiation. The condition for the PBH formation is represented by $$\begin{aligned}
\delta(T)> \delta_c(T)\simeq w(T)\end{aligned}$$ and it depends on the temperature at the horizon crossing of the HBB. Since the density contrast of the HBB is originated from the non-relativistic (pressure-less) baryons/Q-balls, the equation of the state parameter $w$ inside the HBB can be expressed in terms of $\delta(T)$ as $$\begin{aligned}
w(T)
=\frac{p^{\rm in}}{\rho^{\rm in}}
\simeq\frac{p^{\rm out}}{\rho^{\rm in}}
=\frac{1}{3}\frac{1}{1+\delta(T)}.\end{aligned}$$ Therefore, the condition of the PBH formation is written as $$\begin{aligned}
\delta(T) \gtrsim \frac{1}{3}\frac{1}{1+\delta(T)}
~\Longleftrightarrow~
\delta(T) \gtrsim 0.26,\end{aligned}$$ and this gives an upper bound on the temperature at the horizon crossing of the HBB. This critical temperature for the PBH formation $T_c$ is obtained from Eq.(\[T\]) and (\[delta\]) such that $$\begin{aligned}
T_c\simeq
\begin{cases}
{{\rm Min}}[231\eta_b^{\rm in}{\rm MeV},~T_{\rm QCD}],
&(\text{QCD phase transition}) \\
5.1Y_Q^{\rm in}.
& (\text{stable Q-ball~formation})
\end{cases}\end{aligned}$$ According to Eq.(\[TM\]), these conditions are translated to the lower bound on the horizon mass at the horizon reentry. This critical value of the horizon mass for PBH formation is then given by $$\begin{aligned}
M_c\simeq
\begin{cases}
{{\rm Max}}\left[14.1(\eta_b^{\rm in})^{-2}M_\odot,
~18.8M_\odot\left(\frac{T_{\rm QCD}}{200{\rm MeV}}\right)^{-2}\right],
&(\text{QCD phase transition}) \\
18.1M_\odot\left(\frac{Y_Q^{\rm in}}{40{\rm MeV}}\right)^{-2}.
& (\text{stable Q-ball~formation})
\end{cases}
\label{eq:critical_mass}\end{aligned}$$ Thus, only the HBBs larger than $M_c$ can gravitationally collapse into PBHs. On the other hand, smaller HBBs cannot collapse due to the pressure inside the HBBs, but would form self-gravitating objects made of baryons/Q-balls. The interesting point is that the mass distribution of the PBHs is determined not only by the distribution of the HBBs, but also by this cutoff $M_c$. Assuming the formed PBH has a mass comparable to the horizon mass at reentry, $M_{\rm PBH}\sim M_{\rm H}$, we can evaluate the distribution of the PBHs in the model as $$\begin{aligned}
\label{fr}
\beta_{\rm PBH}(M_{\rm PBH})=\beta_B(M_{\rm PBH})\theta(M_{\rm PBH}-M_c).\end{aligned}$$ In the case of stable Q-ball formation, the cutoff scale of the PBH formation is determined by the Q-ball abundance generated by the AD baryogenesis. However, in the case without stable Q-ball formation, the cutoff scale must be $\mathcal{O}(10)M_\odot$; that is, the mass range of the BHs detected by LIGO is naturally explained. This is the most fascinating feature of the model because no fine-tuning of the parameters is required once $\eta^{\rm in}_b\sim1$ is realized.
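The critical masses of Eq. (\[eq:critical\_mass\]) and the cutoff structure of Eq. (\[fr\]) can be coded up directly; in the sketch below, `beta_B_of_M` stands for any HBB spectrum expressed as a function of the horizon mass, and the printed inputs are assumed, illustrative values.

```python
def M_c_qcd(eta_b_in, T_QCD_MeV=200.0):
    """Critical horizon mass (M_sun) for PBH formation via the QCD transition."""
    return max(14.1 * eta_b_in**(-2), 18.8 * (T_QCD_MeV / 200.0)**(-2))

def M_c_qball(Y_Q_in_MeV):
    """Critical horizon mass (M_sun) for PBH formation via stable Q-ball domination."""
    return 18.1 * (Y_Q_in_MeV / 40.0)**(-2)

def beta_PBH(M_PBH_sun, beta_B_of_M, M_c_sun):
    """PBH mass function of Eq. (fr): the HBB spectrum with a low-mass cutoff."""
    return beta_B_of_M(M_PBH_sun) if M_PBH_sun > M_c_sun else 0.0

# both cutoffs are O(10) M_sun for the assumed inputs eta_b^in ~ 1, Y_Q^in ~ 40 MeV
print(M_c_qcd(1.0), M_c_qball(40.0))
```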
Here we make a comment on the PBH formation at $T\ll T_c$. Since the density contrast $\delta$ keeps growing in time due to the redshift of the radiation, we naively expect that $\delta$ becomes larger than unity at a certain time. It had been pointed out that such a large density contrast indicates a separate universe and hence a PBH is not formed. However, the recent study [@Carr:2014pga] suggests that the separate universe is not created in realistic cosmological evolution and does not constrain the PBH formation. Furthermore, even in this case, it shows that the mass of the created PBH generally cannot be much larger than the horizon mass. Thus, we consider that PBHs continue to be formed even when $T\ll T_c$ and their mass is roughly given by the horizon mass $M_{\rm H}$. The PBH formation from the over-density of Q-balls has been studied in a different cosmological context [@Cotner:2016cvr; @Cotner:2017tir].
LIGO events from Affleck-Dine baryogenesis {#sec:LIGO_event}
==========================================
Let us discuss the PBH formation from the AD baryogenesis in a concrete model and its consistency with the current observational constraints. In this section, we consider the gravity-mediated and gauge-mediated SUSY breaking scenarios. As we mentioned, the difference in the mediator of the SUSY breaking effect is related to the stability of the Q-ball.
Gravity-mediated SUSY breaking scenario
---------------------------------------
We first consider the case where the SUSY breaking effect is mediated to the visible sector only through gravity. In this case, taking the one-loop correction into account, the potential of the AD field after the oscillation is written as $$\begin{aligned}
V(\phi)\simeq m_{3/2}^2|\phi|^2
\left[1+K\ln\left(\frac{|\phi|^2}{M_*^2}\right)\right],\end{aligned}$$ where $m_{3/2}$ is the gravitino mass, $M_*$ is a renormalization scale and $K$ is a constant determined by specifying the MSSM flat-direction, with typically $-K=0.1\sim0.01$.[^3] Since this one-loop correction makes the potential flatter than the quadratic one, the AD field feels a negative pressure and forms localized solitons called Q-balls. The Q-balls formed by this gravity-mediation potential are called “gravity-mediation type" and their properties are as follows: $$\begin{aligned}
\label{grp}
M_Q&\simeq m_{3/2}Q,\\
R_Q&\simeq |K|^{-1/2}m_{3/2}^{-1},\\
\omega_Q&\simeq m_{3/2},\end{aligned}$$ where $M_Q$ and $R_Q$ are the mass and the size of the Q-ball and $\omega_Q$ is the energy of the Q-ball per unit baryon number. The gravity-mediation type Q-ball is unstable with respect to the decay into nucleons. This is simply because $\omega_Q\simeq m_{3/2}$, which can be regarded as the effective mass of the AD field, is larger than the nucleon mass $\simeq 1{\rm GeV}$ in the gravity-mediation scenario, where typically $m_{3/2}\gg {\rm GeV}$. Thus, baryons confined in the Q-balls are released inside the HBBs through the Q-ball decay. Then, the density contrast of the HBBs is induced by the QCD phase transition as in Eq.(\[T\]) and the critical mass scale of the PBHs is $\mathcal{O}(10)M_\odot$ as long as $\eta_b^{\rm in}\sim 1$.
Before discussing the abundance of the PBHs, we consider how $\eta_b^{\rm in}\sim 1$ is realized in our model. In fact, AD baryogenesis can naturally produce such a huge baryon asymmetry, especially in the case of $n=6$. The baryon asymmetry inside the HBB is given by Eq.(\[etab\]). Since we are considering the situation where the temperature of the thermal plasma is relatively high due to assumption (ii), the thermal potential Eq.(\[TP\]) can also trigger the oscillation of the AD field in addition to the soft mass. Therefore, the Hubble parameter when the AD field starts to oscillate is evaluated as $$\begin{aligned}
H_{\rm osc}\simeq {{\rm Max}}\left[
m_{\phi},
~{M_{\rm Pl}}\lambda^{2/n}\left(\frac{T_R}{{M_{\rm Pl}}}\right)^{\frac{n-2}{n/2}}
\right].\end{aligned}$$ Taking this “early oscillation" into account, we can calculate the resulting baryon asymmetry produced in the HBBs. In Fig. [\[fig:gr1\]]{}, we plot the contours of $\eta_b^{\rm in}=(10^{-1},~1,~10)$ in the $(\lambda,~T_R)$-plane. In the case of $n=4$, the production of a large baryon asymmetry $\eta_b^{\rm in}\sim1$ requires a smaller $\lambda$ such as $10^{-10}$. On the other hand, in the case of $n=6$, the amplitude of the oscillation of the AD field is relatively large and $\lambda\lesssim10^{-5}$ is sufficient. In addition, we have to take into account the thermal production of the gravitino in both cases. In the gravity-mediated SUSY breaking scenario, the gravitino tends to be heavy ($m_{3/2}\sim 10^{2-3}{\rm GeV}$) and is unstable against radiative/hadronic decay. It is shown that such decay products could spoil the success of big bang nucleosynthesis, so their abundance must be small. This fact sets an upper bound on the reheating temperature (blue shaded region in Fig. [\[fig:gr1\]]{}).[^4] In any case, the large baryon asymmetry inside the HBBs is consistently produced in this scenario. In the following, we assume $\eta_b^{\rm in}$ is so large that $M_c=18.8M_\odot(T_{\rm QCD}/200\text{MeV})^{-2}$ is realized.
![We search the parameter space where the $\mathcal{O}(1)$ baryon asymmetry is produced. The upper panels are the case with the $n=4$ AD field and the lower panels are the case with $n=6$. The left and right panels correspond to $m_{3/2}=10^3{\rm GeV},10^4{\rm GeV}$ respectively. Three black lines represent the contours of $\eta_b^{\rm in}=(10^{-1},1,10)$. On the upper side of the red line, the dynamics is dominated by the finite temperature effect. The blue shaded region is excluded by the overproduction of the gravitino.[]{data-label="fig:gr1"}](gr1.pdf){width="180mm"}
![We show the PBH abundance and the observational constraints. The shaded regions are excluded by extragalactic gamma rays from Hawking radiation (EG$\gamma$) [@Carr2009], femtolensing of known gamma ray bursts (Femto) [@Barnacka2012], white dwarfs existing in our local galaxy (WD) [@Graham2015], microlensing search with Subaru Hyper Suprime-Cam (HSC) [@Niikura2017], Kepler micro/millilensing (Kepler) [@Griest2013], EROS/MACHO microlensing (EROS/MACHO) [@Tisserand2007], dynamical heating of ultra faint dwarf galaxies (UFD) [@Brandt2016], and accretion constraints from CMB (CMB) [@Ali-Haimoud20172]. []{data-label="fig:qcd"}](qcd.pdf){width="128mm"}
Then, let us discuss the abundance of the PBHs. The present abundance of the PBHs with mass $M_{\rm PBH}$ over the logarithmic mass interval $d(\ln M_{\rm PBH})$ is estimated as $$\begin{aligned}
\frac{\Omega_{\rm PBH}(M_{\rm PBH})}{\Omega_c}
&\simeq\left.\frac{\rho_{\rm PBH}}{\rho_m}\right|_{\rm eq}\frac{\Omega_m}{\Omega_c}
=\frac{\Omega_m}{\Omega_c}\frac{T(M_{\rm PBH})}{T_{\rm eq}}
\beta_{\rm PBH}(M_{\rm PBH})\\\label{pa}
&\simeq\left(\frac{\beta_{\rm PBH}(M_{\rm PBH})}{1.6\times10^{-9}}\right)
\left(\frac{\Omega_ch^2}{0.12}\right)^{-1}
\left(\frac{M_{\rm PBH}}{M_\odot}\right)^{-1/2},\end{aligned}$$ where $\Omega_c$ and $\Omega_m$ are the present density parameters of the dark matter and matter, respectively. Here we use the latest Planck result $\Omega_ch^2\simeq0.12$ [@PlanckCollaboration2015a] \[$h$: the present Hubble parameter in units of $100\,\text{km/sec/Mpc}$\]. $T(M_{\rm PBH})$ and $T_{\rm eq}$ are the temperatures at the formation of the PBHs with mass $M_{\rm PBH}$ and the matter-radiation equality, respectively.
The other parameters which determine $\beta_{\rm PBH}$ are $c_I'$, $\Delta$ and $N_{\rm CMB}$. In order to evaluate not only the created PBHs but also the effect of the residual HBBs, we introduce the quantity $$\begin{aligned}
\eta_b^{\rm B}\equiv f_B(N_e)\eta_b^{\rm in},\end{aligned}$$ which represents the contribution of the HBBs to the baryon asymmetry of the entire universe. We show the prediction of the PBH abundance for $\eta_b^{\rm B}/\eta_b^{\rm ob}=(1,10^{-1},10^{-2})$ in Fig. \[fig:qcd\]. We can see that, due to the cut-off $M_c$, there exists a peak-like “edge" at a mass $\sim\mathcal{O}(10)M_\odot$. The figure confirms that a larger $\eta_b^{\rm B}$ realizes a higher peak. The LIGO events can be explained by $\mathcal{O}(10)M_\odot$ PBHs whose abundance is $\Omega_\text{PBH}/\Omega_c \sim \mathcal{O}(10^{-3})$–$\mathcal{O}(10^{-2})$ [@Sasaki2016], which requires $\eta_b^{\text{B}}\sim \eta_b^{\text{ob}}$. However, $\eta_b^{\rm B}$ cannot exceed the observed baryon asymmetry $\eta_b^{\rm ob}\sim10^{-10}$. Furthermore, the baryon number density inside the HBBs, $\eta_b^{\rm in}$, is so huge that the produced abundances of the light elements differ significantly from the prediction of ordinary BBN. Thus, in order not to spoil the success of BBN, $\eta_b^{\rm B}/\eta_b^{\rm ob}\ll1$ is required. Although the model still gives a sizable contribution to the LIGO event rate, the abundance may be too small to account for all LIGO events. As we will see later, a double inflation scenario enables us to obtain a higher and sharper peak even if $\eta_b^{\rm B}/\eta_b^{\rm ob}\ll1$ is satisfied.
We comment on the possible form of the residual HBBs at the present time. Since the density contrast of the HBBs reaches the order of unity after the QCD phase transition, we naively expect that they form self-gravitating systems of non-relativistic baryons where a significant amount of heavy elements may be synthesized by BBN. There is a possibility that they contaminate the surrounding universe and explain the metallicity of the population II stars. On the other hand, it is known that the QCD phase transition may give birth to a hypothetical bound state of up, down and strange quarks, called quark nuggets (or strange matter) [@Witten:1984rs; @Alcock:1985vc; @Iso:1985iw]. Although there has been some research on them as a possible candidate for baryonic dark matter, it was shown that almost all of the quark nuggets evaporate and cannot survive until now. For the quark nuggets to survive against the evaporation, they have to carry a large baryon number such as $N_B\gtrsim10^{51}$, which is much larger than the baryon number in the horizon at the QCD epoch. In our scenario, however, the formation of stable quark nuggets may be possible. This is simply because HBBs can carry an anomalously large baryon number greater than $10^{51}$. In fact, the HBBs with mass greater than $10^{-5}M_{\odot}$ would form stable quark nuggets and contribute to the current dark matter. While their abundance is negligible with respect to the entire abundance of the HBBs, they may reduce the contribution of the HBBs to the net baryon asymmetry, $\eta_b^{\rm B}$, and relax the constraint on the PBH abundance.
Gauge-mediated SUSY breaking scenario
-------------------------------------
![We search the parameter space where the $Y_{Q}^{\rm in}\sim 40{\rm MeV}$ is realized. The upper panels are the case with the $n=4$ AD field and the lower panels are the case with $n=6$. The left and right panels correspond to $m_{3/2}=10{\rm MeV},10^2{\rm MeV}$ respectively. Three black lines represent the contours of $Y_{Q}^{\rm in}/40{\rm MeV}=(10^{-1},1,10)$. On the upper side of the red line, the dynamics is dominated by the finite temperature effect. The blue shaded region is excluded by the overproduction of the gravitino which exceeds the current dark matter abundance.[]{data-label="fig:Q1"}](Q1.pdf){width="180mm"}
Next, we consider the case of the gauge-mediated SUSY breaking. In this case, the potential of the AD field is lifted up as $$\begin{aligned}
V(\phi)\simeq
M_{\rm F}^4\left(\log\frac{|\phi|^2}{M_{\rm mess}^2}\right)^2
+m_{3/2}^2|\phi|^2\left[1+K\ln\left(\frac{|\phi|^2}{M_*^2}\right)\right].\end{aligned}$$ Here we include the contribution from the gravity-mediation (2nd term) because it is not forbidden generically. We can see that the contribution from the gauge-mediation (1st term) is also shallower than the quadratic potential, and the AD field fragments into Q-balls. Since there are two contributions to the potential, two types of Q-balls exist in the gauge-mediation scenario, the “gauge-mediation type" and the “new type". The former is realized when the potential is dominated by the gauge-mediation potential, and the Q-ball properties are $$\begin{aligned}
M_Q&\simeq \frac{4\sqrt{2}\pi}{3}\zeta M_{\rm F}Q^{3/4}\\
R_Q&\simeq \frac{1}{\sqrt{2}}\zeta^{-1} M_{\rm F}^{-1}Q^{1/4},\\
\omega_Q&\simeq \sqrt{2}\pi\zeta M_{\rm F}Q^{-1/4},\end{aligned}$$ where $\zeta$ is an $\mathcal{O}(1)$ numerical constant. We can see that this Q-ball configuration is stable against the decay into nucleons for a sufficiently large charge. On the other hand, the latter is realized when the potential is dominated by the gravity-mediation potential, and the Q-ball properties are given by $$\begin{aligned}
M_Q&\simeq m_{3/2}Q,\\
R_Q&\simeq |K|^{-1/2}m_{3/2}^{-1},\\
\omega_Q&\simeq m_{3/2},\end{aligned}$$ which are the same as those of the “gravity-mediation type" (Eq.(\[grp\])) because the scalar potential is identical. However, in the gauge-mediation scenario, the gravitino mass $m_{3/2}$ is typically lighter than $\sim {\rm GeV}$ and the decay into nucleons is forbidden by kinematics. Thus, the gauge-mediation scenario ensures the existence of stable Q-balls, which are a good candidate for the dark matter. Since the gauge-mediation potential also triggers the oscillation of the AD field, the Hubble parameter at the start of the oscillation is given by $$\begin{aligned}
H_{\rm osc}\simeq {{\rm Max}}\left[
m_{3/2},
~\left(\lambda\frac{M_{\rm F}^{2n-4}}{{M_{\rm Pl}}^{n-3}}\right)^{\frac{1}{n-1}},
~{M_{\rm Pl}}\lambda^{2/n}\left(\frac{T_R}{{M_{\rm Pl}}}\right)^{\frac{n-2}{n/2}}
\right],\end{aligned}$$ which determines the total baryon asymmetry enclosed in the Q-balls. The type of the Q-ball is specified by comparing $\phi_{\rm osc}$ with the critical point $\phi_{\rm eq}\simeq M_{\rm F}^2/m_{3/2}$, where the gravity- and gauge-mediation potentials become comparable. The new (gauge-mediation) type Q-ball is obtained for $\phi_{\rm osc}>\phi_{\rm eq}~(\phi_{\rm osc}<\phi_{\rm eq})$. For the creation of PBHs, however, the required baryon number density is so high that the contribution from the gauge-mediation potential is negligible. Thus the produced Q-balls are of the new type and their abundance is estimated as $$\begin{aligned}
\label{qad}
Y_{Q}^{\rm in}
=M_Q\frac{n_Q^{\rm in}}{s}
=\frac{M_Q}{Q}\eta_b^{\rm in}
\simeq m_{3/2}\eta_b^{\rm in}.\end{aligned}$$ From now on, restricting our interest to PBHs with mass $\mathcal{O}(10)M_\odot$, let us consider the case where the Q-ball abundance satisfies $Y_{Q}^{\rm in}\sim 40{\rm MeV}$ \[see Eq. (\[eq:critical\_mass\])\]. We calculate Eq. (\[qad\]) and plot the contours of $Y_{Q}^{\rm in}/40{\rm MeV}=(10^{-1},~1,~10)$ in the $(\lambda,~T_R)$-plane in Fig. \[fig:Q1\]. As in the gravity-mediated SUSY breaking scenario, there exist parameters which realize a sufficient amount of Q-balls. Here we also make a remark about the thermally produced gravitino. In the gauge-mediated SUSY breaking scenario, the gravitino is the lightest SUSY particle (LSP) and contributes to the dark matter abundance. Therefore, the reheating temperature must have an upper bound (blue shaded region in Fig. \[fig:Q1\]) for the thermally produced gravitinos not to over-close the universe [@Moroi:1993mb], which is stronger than in the case of the gravity-mediation.
![We show the PBH abundance in the case of the gauge-mediated SUSY breaking scenario, where PBHs are formed by the over-density of the Q-balls. Here we make the parameter choice $(c_I,\Delta,N_{\rm CMB})=(0.046,19,10)$. The observational constraint are represented by the shaded region by the same manner with that in Fig. \[fig:qcd\].[]{data-label="fig:q"}](Q.pdf){width="110mm"}
![The green shaded region is where the peak of the PBH abundance is consistent with the event rate inferred from the LIGO events, $\Omega_{\rm PBH}/\Omega_c\sim\mathcal{O}(10^{-2})-\mathcal{O}(10^{-3})$. We also show by black lines the region where the energy density of the Q-balls could give a sizable contribution to the current dark matter density. It is found that the two regions are nearly degenerate in almost all of the $(c_I,\Delta)$-plane.[]{data-label="fig:cg"}](cg.pdf){width="100mm"}
Next, let us discuss the abundance of the PBHs in this case. Using Eq.(\[pa\]) again, we show in Fig. \[fig:q\] one example which can explain the LIGO events while evading the observational constraints. In contrast to the case without Q-balls, the peak at $M_c$ is so high that the event rate $\Omega_{\rm PBH}/\Omega_c\sim\mathcal{O}(10^{-2})-\mathcal{O}(10^{-3})$ [@Sasaki2016] is easily realized. This is simply because the residual HBBs, which are too small to collapse into PBHs, do not contribute to the baryon asymmetry, and a larger HBB abundance such as $f_B\sim10^{-8}$ is permitted. On the other hand, the Q-balls inside the residual HBBs would survive until now and contribute to the current dark matter abundance. This contribution is estimated as $$\begin{aligned}
\nonumber
\frac{\rho_Q}{s}&\simeq f_BY_Q^{\rm in}
=4.4\times10^{-10}{\rm GeV}
\left(\frac{Y_Q^{\rm in}}{40{\rm MeV}}\right)
\left(\frac{f_B}{1.1\times10^{-8}}\right).\end{aligned}$$ Interestingly, the value of $f_B$ required to realize the LIGO event rate is very similar to the one which makes the residual Q-balls constitute all of the dark matter. We plot in Fig. \[fig:cg\] the region in the $(c_I,\Delta)$-plane where the event rate of the BH mergers, $\Omega_{\rm PBH}/\Omega_c\sim\mathcal{O}(10^{-2})-\mathcal{O}(10^{-3})$, is explained, together with the contribution of the residual Q-balls to the current dark matter abundance. From this figure, we can conclude that the residual Q-balls inevitably make a sizable contribution to the dark matter abundance when the LIGO event rate is explained. In other words, the LIGO PBHs and the dark matter are simultaneously generated, namely, [*cogenerated*]{}, in our scenario. Actually, the parameter choice $(c_I,\Delta,N_{\rm CMB})=(0.046,19,10)$ made in Fig. \[fig:q\] explains all of the dark matter by the residual Q-balls.
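The residual Q-ball contribution to the dark matter quoted above reduces to a one-line estimate; the sketch below reproduces the normalization $\rho_Q/s\simeq4.4\times10^{-10}\,{\rm GeV}$ for $Y_Q^{\rm in}=40\,{\rm MeV}$ and $f_B=1.1\times10^{-8}$, which are the reference values used in the text.

```python
def rho_Q_over_s(f_B, Y_Q_in_MeV):
    """Present Q-ball energy density per entropy, rho_Q/s = f_B * Y_Q^in, in GeV."""
    return f_B * Y_Q_in_MeV * 1e-3   # convert MeV -> GeV

# reference point of the text: reproduces ~4.4e-10 GeV, i.e. roughly the observed
# dark-matter-to-entropy ratio
print(rho_Q_over_s(1.1e-8, 40.0))
```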
Before closing this section, we make a comment on the scale of inflation in the gauge-mediated SUSY breaking scenario. We have seen that the reheating temperature has an upper bound of $10^{6-7}{\rm GeV}$ for the gravitino not to over-close the universe, that is, $Y_{3/2}\leq Y_{\rm DM}$. However, this requirement is not sufficient because the residual Q-balls must have a significant contribution to the dark matter. Therefore, the reheating temperature must be much lower than $10^{6-7}{\rm GeV}$. In this case, the condition for domination of the thermal effect over the Hubble induced mass \[Eq. (\[Delta\])\] is satisfied only if the Hubble parameter during inflation is small ($H_I \lesssim 10^{10}$ GeV). As a result, this scenario works only in low-scale inflation scenarios such as new inflation ($H_I \sim 10^{6}$ GeV), $\alpha$-attractor inflation with small $\alpha$ ($H_I \sim \sqrt{\alpha}10^{13}$ GeV), and some string-motivated models with a small tensor-to-scalar ratio $r$.
Extension to the double inflation scenario {#sec:double_inflation}
==========================================
In previous sections, we discussed the production of the PBHs from the AD baryogenesis assuming that a single inflation with $N_e \gtrsim 60$ is responsible for all scales of our universe. Although LIGO PBHs are sufficiently produced, many residual HBBs, which are too small to collapse into PBHs, are predicted at the same time. Interestingly, they properly contribute to the dark matter abundance in the case of the gauge-mediation. However, in the case of the gravity-mediation their abundance is constrained so as not to change the abundances of the light elements produced by BBN. Such excessive residual HBBs arise from the (approximate) scale invariance of the HBB spectrum. As discussed in Sec. \[3B\], the HBBs are produced from the beginning of inflation $(N=0)$. In order to produce sufficient HBBs at the LIGO scale ($N\sim30$), the coefficient of the Hubble induced mass term $c_I$ should be much smaller than unity, that is, the HBB spectrum is nearly scale invariant. This is because for larger $c_I$ the growth of the fluctuation saturates immediately and smaller HBBs are not produced. Consequently, due to the scale invariance of the spectrum, the residual HBBs are produced as well as the LIGO-scale HBBs.
Considering multi-stage inflation models where inflation occurs more than once, we can relax such difficulties. For example, let us consider the following double inflation scenario with two stages of inflation. The first inflation with $N_e \ll 60$ produces density perturbations at the CMB scale, while the second one is responsible for small-scale perturbations. During the first inflation, we assume that the AD field has a large Hubble induced mass ($c_{1I}\gg1$) so that its quantum fluctuations do not grow. On the other hand, the Hubble induced mass of the AD field is assumed to be $c_{2I}\lesssim 1$ during the second inflation. Here $c_{iI}$ is the coefficient of the Hubble induced mass term during the $i$-th inflation. Then, the HBBs start to be created at the beginning of the second inflation, which can be much later than the time the CMB scale exits the horizon. After a while, the production of the HBBs saturates due to the Hubble induced mass. Therefore, the HBBs are produced only around the scale corresponding to the beginning of the second inflation. If we assume the second inflation starts at $N\sim 30$, when the LIGO PBH scale exits the horizon, we can suppress the excessive residual HBBs.
We can easily apply the calculation in the previous section to the double inflation scenario. We only have to redefine the parameters as $$\begin{aligned}
N\equiv \ln(a/a_{i_2}),~~~~c_I\equiv c_{2I},\end{aligned}$$ where $a_{i_2}=a(t_{i_2})$ is the scale factor at the beginning of the second inflation. The difference is that $N_{\rm CMB}$ takes a negative value because the CMB scale exits the horizon during the first inflation. Here we note that the relation $N_e-N_{\rm CMB}\sim50-60$ should still be satisfied. We show the PBH abundance in this double inflation scenario in Fig. \[fig:img\]. Owing to the suppression of the residual HBBs, we can realize a higher peak with the same $f_B$.
[c]{}
![We plot the PBH abundance in the double inflation scenario. The left panel shows the case of the gravity-mediated SUSY breaking scenario, with the parameters $(c_I,\Delta,N_{\rm CMB})=(1.22,1.05,-17.5)$ and $\eta_b^{B}/\eta_b^{\rm ob}\simeq10^{-1}$. The right panel shows the case of the gauge-mediated SUSY breaking scenario, with the parameters $(c_I,\Delta,N_{\rm CMB})=(0.92,1.01,-18.5)$ and the dark matter abundance explained by the residual HBBs. In both models, the peak values are higher than in the single inflation scenario.[]{data-label="fig:img"}](dqcd.pdf){width="70mm"}
![We plot the PBH abundance in the double inflation scenario. The left panel shows the case of the gravity-mediated SUSY breaking scenario, with the parameters $(c_I,\Delta,N_{\rm CMB})=(1.22,1.05,-17.5)$ and $\eta_b^{B}/\eta_b^{\rm ob}\simeq10^{-1}$. The right panel shows the case of the gauge-mediated SUSY breaking scenario, with the parameters $(c_I,\Delta,N_{\rm CMB})=(0.92,1.01,-18.5)$ and the dark matter abundance explained by the residual HBBs. In both models, the peak values are higher than in the single inflation scenario.[]{data-label="fig:img"}](dq.pdf){width="70mm"}
Finally, we comment on the supermassive BHs. Although the single inflation scenario successfully produces the LIGO PBHs, one may worry about the over-production of supermassive BHs. Due to the scale invariance of the HBB spectrum, heavier HBBs are abundantly generated as well as the HBBs which are responsible for the LIGO PBHs. As a result, a large amount of supermassive BHs is generated (see Figs. \[fig:qcd\] and \[fig:q\]). While they do not conflict with the observational constraints, their abundance is much greater than one per comoving volume of 1 Gpc$^3$, which is somewhat unconventional. On the other hand, in the double inflation scenario, such a concern is obviously absent because the HBBs start to be generated only after the second inflation begins (see Fig. \[fig:img\]).
Conclusions and Discussions {#sec:conclusion}
===========================
In this paper, we have discussed the formation of PBHs from the AD mechanism proposed in ref.[@Hasegawa:2017jtk] in more detail. By taking into account that the sign of the Hubble induced mass can change before and after inflation, the inhomogeneous AD baryogenesis can take place, which produces HBBs. The produced HBBs develop large density contrasts through the QCD phase transition or Q-ball formation and form PBHs when they reenter the horizon. This mechanism can explain the LIGO gravitational wave events while evading the stringent constraints from the $\mu$-distortion and PTA experiments. We have considered the gravity- and gauge-mediated SUSY breaking scenarios, where the SUSY breaking effect is mediated by gravity and gauge interactions, respectively. The SUSY breaking scenario affects not only the baryon asymmetry inside the HBBs but also the properties of the Q-balls, which determine the evolution of the density contrast of the HBBs.
In the case of the gravity-mediated SUSY breaking scenario, the produced Q-balls are unstable against decay into baryons, so the baryon number is not confined inside the Q-balls. The baryon asymmetry in the HBBs is then carried by non-relativistic nucleons after the QCD phase transition. As the universe expands, the density contrast of the HBBs increases and they gravitationally collapse into PBHs. The remarkable feature is that the mass spectrum of the PBHs has a lower cut-off because PBH formation occurs only after the QCD phase transition. Interestingly, this cutoff $M_{\rm QCD}$ coincides with the mass of the LIGO BHs, $\sim 30M_{\odot}$. We have shown that this mechanism consistently explains the BHs inferred from the LIGO events while evading the observational constraints. We note that smaller HBBs which reenter the horizon before the QCD phase transition do not collapse and instead contribute to the current baryon asymmetry. Although PBH formation requires the huge baryon asymmetry $\eta^{\rm (B)}_b\sim1$, it is naturally realized by the AD mechanism with both $n=4$ and $6$ flat directions.
In the case of the gauge-mediated SUSY breaking scenario, the Q-balls are stable and contribute to the current dark matter. Thus, the HBBs are eventually dominated by the Q-balls and collapse into PBHs. The cut-off of the mass spectrum is determined by the horizon size at the time of Q-ball domination inside the HBBs, which is related to the Q-ball abundance inside the HBBs. We have shown that, if we assume the residual HBBs make a sizable contribution to the dark matter, a sufficient amount of PBHs is produced so that the event rate of the LIGO events is consistently reproduced. We call this coincidence the cogenesis of the LIGO PBHs and the dark matter. Such a large Q-ball abundance is also naturally realized by the AD mechanism with both $n=4$ and $6$ flat directions.
We would like to thank Kenta Ando, Jeong-Pyong Hong, Masahiro Ibe, Keisuke Inomata and Eisuke Sonomoto for helpful comments. This work is supported by JSPS KAKENHI Grant Number 17H01131 (M. K.) and 17K05434 (M. K.), MEXT KAKENHI Grant Number 15H05889 (M. K.), JSPS Research Fellowship for Young Scientists Grant Number 17J07391 (F. H.) and also by the World Premier International Research Center Initiative (WPI), MEXT, Japan.
[^1]: Such a change of the constant $c$ naturally takes place in supergravity-based inflation models where multiple superfields are employed.
[^2]: See the ref. [@Mukaida:2015ria].
[^3]: The gauginos give negative contributions to the one-loop potential, while the Yukawa couplings give positive contributions. In general, the gaugino contribution is dominant and $K$ is negative. However, $K$ can be positive for flat directions which contain the stop.
[^4]: Strictly speaking, this upper bound on the reheating temperature has a mild dependence on the MSSM parameters, such as the gaugino and scalar masses. In this paper, we assume the bound is almost independent of the MSSM parameters and use a typical value. See ref. [@Kawasaki:2008qe] for details.
|
---
abstract: 'Silicon Carbide (SiC) displays a unique combination of optical and spin-related properties that make it interesting for photonics and quantum technologies. However, guiding light by total internal reflection can be difficult to achieve, especially when SiC is grown as thin films on higher index substrates, like Silicon. Fabricating suspended, subwavelength waveguides requires a single lithography step and offers a solution to the confinement problem, while preserving the design flexibility required for a scalable and complete photonic platform. Here we present a design for such platform, that can be used for both classical and quantum optics operation. We simulate the key optical components and analyze how to exploit the high nonlinearities of SiC and its defects.'
author:
- Francesco Garrisi
- Ioannis Chatzopoulos
- Robert Cernansky
- Alberto Politi
bibliography:
- 'citations.bib'
title: A Silicon Carbide photonic platform based on suspended subwavelength waveguides
---
Introduction {#sec:introduction}
============
Silicon Carbide (SiC) is establishing itself as an important material in the field of quantum photonics. Among its many polytypes, 3C and 4H-SiC are hosts of a large variety of point defects emitting in the visible and in the near infrared (NIR) [@koehl2011room; @falk2013polytype; @castelletto2014silicon; @widmann2015coherent]; these defects can be used as single photon sources and their spin state can be addressed through radio frequency and optical electromagnetic fields, while the coherence time has been shown to exceed milliseconds [@christle2015isolated; @christle2017isolated; @simin2017locking].
Photonic structures can enhance the interaction between these colour centres and light, providing a path for the development of a scalable approach for quantum technologies. Moreover, SiC provides interesting optical properties. Being non-centrosymmetric crystals, both polytypes of SiC possess a strong static second-order nonlinearity ($\chi^{(2)}_\text{xyz} \simeq 60\ \text{pm/V}$ for 3C-SiC [@tang1991linear] and $\chi^{(2)}_\text{zzz} \simeq 32.8\ \text{pm/V}$ for 4H-SiC [@wu2008second]); they do not suffer from two-photon absorption at telecommunication wavelengths due to their large electronic bandgap (around 2.4 eV for 3C and 2.9 eV for 4H [@madelung1982physics]); like diamond, SiC is one of the hardest known materials [@jackson2005mechanical], providing the mechanical stability required to support complex nanostructures at small scale, along with excellent thermal conductivity. Finally, SiC is an established platform for high power microelectronics, which makes the integration of photonic and electronic devices on the same platform a promising prospect.
The fabrication of SiC for photonic applications, however, can be problematic. For example, a few hundred nanometers of 3C-SiC can be grown heteroepitaxially on silicon (Si), but this poses two issues: i) having a higher index of refraction, the substrate prevents the use of total internal reflection (TIR) to obtain light confinement in the vertical direction; ii) due to crystalline mismatch, the interface between 3C-SiC and Si grows with very low quality, increasing losses of light travelling in that region. These two problems can be addressed at once by adopting wafer-bonding techniques [@fan2018high]. On the other hand, the homoepitaxial growth of 4H-SiC provides high quality films, but obtaining thin membranes is not straightforward. The smart-cut process [@di1996silicon] can be applied to obtain SiC on insulator, but the ion implantation step increases optical losses and produces lattice damage that is detrimental to color center properties. Wafer bonding followed by thinning down has demonstrated excellent material properties and low losses in photonic crystal cavities [@Song2019ultrahigh] and ring resonators [@lukin20194h]. However, the uniformity of the thickness of the SiC layer over appreciable chip sizes is a limiting factor for the scalability of SiC photonics.
An alternative approach is to suspend membranes in air, either by removing part of the Si substrate or by electrochemical etching of doped SiC. This approach has been used to produce photonic crystal cavities [@bracher2015fabrication; @calusine2014silicon] as well as optical waveguides using a two-step lithography technique [@martini2018four]. In this case, the first etch defines the lateral confinement of the waveguides, while the second one opens holes to access the substrate that has to be removed. Here we propose subwavelength geometries that allow the use of a single etch step to access the substrate, simplifying the fabrication, as has already been demonstrated for other platforms [@penades2014suspended; @Penades16; @osman2018suspended; @penades2018suspended]. Subwavelength structures can be defined as periodic dielectric structures whose periodicity is much smaller than the wavelength of light; more rigorously, they are periodic structures where the energy of the photonic bandgap lies above the energy of the photons propagating in the medium. As such, they behave as an effective homogeneous medium (EHM) and they prevent the scattering of light [@cheben2018subwavelength]. Subwavelength structures can easily be used to obtain a complete photonic platform in SiC, capable not only of guiding light, but also of realizing ring resonators, grating couplers and slow-light waveguides.\
In Section \[sec:waveguide\] the design of the most basic component of the platform, a straight subwavelength suspended waveguide, is presented, followed by a discussion on the results of the numerical simulations which led to the choice of the dimensions of the waveguide. Then, we consider the amount of losses expected from the waveguide design, and we give an estimate of the nonlinear waveguide parameter. In Section \[sec:tolerance\] we assess the tolerance of the design to fabrication imperfections, in terms of the variation of the modal refractive index resulting from variation in the geometry of the waveguide. Section \[sec:devices\] briefly introduces the analysis of additional photonic components and presents detailed results of numerical simulations used to design a uniform grating coupler. Section \[sec:slowlight\] describes how the platform can be adapted easily to reach a slow-light regime by changing the periodicity of the lateral suspending structures. Section \[sec:modulators\] discusses the performances of a proposed design for an electro-optical modulator integrated alongside the suspended waveguides. Finally, in Section \[sec:conclusions\] we give the conclusions and perspectives.
Subwavelength Waveguide {#sec:waveguide}
=======================
{width="80.00000%"}
When a dielectric medium is periodic in one direction, the light travelling inside it can be described in terms of the photonic band structure [@bookjoannopoulos]. The periodicity produces a photonic bandgap, a range of frequencies at which light cannot propagate in the medium. If the energy of the light is lower than the photonic bandgap, radiation can propagate, ideally without scattering, and the periodic medium acts as an EHM [@cheben2018subwavelength]. In Figure \[fig:WGscheme\]-a) we show the design of a SiC waveguide that exploits this principle to guide light at 1550 nm wavelength. The design is based on previous works realized in silicon on insulator (SOI) [@Penades16; @penades2018suspended] and germanium [@osman2018suspended]. Light is confined in the vertical direction by TIR. The lateral arms serve two functions: to mechanically suspend the waveguide and to introduce the periodic perturbation. The perturbation has a periodicity that is much smaller than the wavelength of light, hence, similarly to a multilayer, the arms act as a homogeneous medium with index of refraction $n_e$ intermediate between that of SiC and that of air. Thus, for the case of the straight waveguide presented here, the structure is akin to the one shown in Figure \[fig:WGscheme\]-b), where the yellow region highlights the EHM; in practice, this confines light by TIR in the horizontal direction as well.
The bulk effective index $n_e$ of the subwavelength region can be tuned by changing the filling factor (FF) of the arms $f_\text{wg}$ in the periodic cell, and can be estimated by calculating the effective index of the light travelling normal to a SiC-air multilayer with the same periodicity and FF of the lateral arms [@yariv2006photonics]. The minimum feature size given by the fabrication process sets the constraints for $f_\text{wg}$ and hence for $n_e$. For our SiC structure we believe the upper limit on $f_\text{wg}$ will be set by the resolution of the lithographic process, while the lower limit will be determined by the mechanical strength of the material. For instance, other structures in SOI [@Penades16; @penades2018suspended] were fabricated with a minimum dimension of the arms equal to 100 nm. Since SiC is a very hard material [@jackson2005mechanical], it is reasonable to assume that the minimum dimension of the arms could be smaller than this value, but a more detailed analysis is required that takes into account not only the mechanical stability of the material but also the internal stress.
  --------------------------- ------
  Thickness $(h)$, nm         300
  Width $(w)$, nm             650
  Periodicity $(a_0)$, nm     300
  Arm length $(u)$, nm        150
  Arm width $(v)$, nm         2000
  --------------------------- ------
: Proposed dimensions for a single TE-TM subwavelength waveguide. We assume a value of 2.6 for the index of refraction of SiC, suspension in air and vertical walls.[]{data-label="tab:dimensions"}
Aiming at the use of the waveguide’s fundamental TE mode, the dimensions of the proposed structure are listed in Table \[tab:dimensions\]. We assumed a value of 2.6 for the refractive index of the SiC layer since it is close to the refractive indexes of both 3C and 4H-SiC. Then, the layer thickness ($h$) of 300 nm was chosen in order to have the fundamental slab mode close to the cut-off condition. The periodicity of the structure ($a_0$) is set by the subwavelength condition: the continuous lines of Figure \[fig:Bands300\] are the band structure of our subwavelength waveguides calculated using MPB - MIT Photonic Bands [@johnson2001block]; choosing a periodicity of 300 nm puts the TE bandgap well above the energy of 1550 nm radiation, ensuring the suppression of scattered light and the validity of the EHM approximation.
![Simulated band structure of the proposed suspended waveguide along the propagation direction. Assuming a = 300 nm, the horizontal black line corresponds to 1550 nm. Orange and light blue lines correspond to guided TE and TM modes respectively, calculated with the MIT MPB simulation suite [@johnson2001block]; red and blue dots are the same modes calculated with a numerical eigensolver; dashed red and blue lines are effective horizontal TE and TM light-lines calculated from an effective index approach.[]{data-label="fig:Bands300"}](figures/Bands300nm.pdf){width="45.00000%"}
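As an illustration of how such a band-structure calculation can be set up, the following is a minimal sketch using the MPB Python interface (pymeep); it is not the authors' script, and the supercell size, resolution, number of bands and parity handling are assumptions that would require convergence checks.

```python
import meep as mp
from meep import mpb

# Lengths are in units of the period a0 = 300 nm; MPB returns frequencies in c/a0.
a0 = 300.0
sic = mp.Medium(index=2.6)

ms = mpb.ModeSolver(
    geometry_lattice=mp.Lattice(size=mp.Vector3(1, 8, 6)),  # 1 period along x, air padding in y and z
    geometry=[
        # central core: w = 650 nm wide, h = 300 nm thick, continuous along the propagation direction x
        mp.Block(size=mp.Vector3(mp.inf, 650 / a0, 300 / a0), material=sic),
        # lateral arms: u = 150 nm long (filling factor 0.5), spanning the supercell in y
        mp.Block(size=mp.Vector3(150 / a0, mp.inf, 300 / a0), material=sic),
    ],
    k_points=mp.interpolate(19, [mp.Vector3(0, 0, 0), mp.Vector3(0.5, 0, 0)]),
    resolution=32,
    num_bands=4,
)
ms.run()              # a parity-restricted run (e.g. ms.run_zeven()) helps separate TE- and TM-like modes
freqs = ms.all_freqs  # band frequencies in units of c/a0; 1550 nm corresponds to a0/lambda ~ 0.194
```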
We have chosen $f_\text{wg}$ equal to 0.5, which provides $n_e$ equal to 2.144, a good compromise between a higher lateral confinement and a high mechanical strength; the lateral arms are thus 150 nm long ($u$) in the propagation direction. The proposed waveguide’s width ($w$) of 650 nm is the one that maximizes the confinement of the fundamental mode while maintaining the structure single-TE-moded. In fact, the structure sustains a single TM mode ($\text{TM}_{00}$) and two TE modes ($\text{TE}_{00}$ and $\text{TE}_{01}$); the $\text{TE}_{01}$ mode is very loosely bound and is likely to experience very high losses compared to the $\text{TE}_{00}$ mode, since it would be easily coupled to radiative modes.
The mode profiles have been simulated both with the MPB simulation suite and with a numerical eigensolver (Lumerical MODE). In Figures \[fig:WGscheme\]-c), \[fig:WGscheme\]-d) and \[fig:WGscheme\]-e) we show the profiles of the TE and TM modes calculated with the eigensolver, under the EHM approximation for the lateral arms, which agree very well with the ones obtained from MPB. Although not perfectly, the dispersion of the three modes calculated from the eigensolver (the dots in Figure \[fig:Bands300\]) is in agreement with the band structure calculated by MPB, apart from a relative shift of the effective refractive index; at 1550 nm the effective index of the fundamental TE mode given by the eigensolver ($n_\text{TE}$ = 1.967) is slightly higher than the one obtained from the MPB band structure (1.907).
In order to estimate the lateral confinement we calculated the effective light-lines in the horizontal direction for the TE and TM modes, which are reported as the dashed lines of Figure \[fig:Bands300\]. The two lines were calculated from an effective index approach: the lateral confinement of the waveguide-arms system has been modeled by an infinite symmetric slab parallel to the $y$-$z$ plane, surrounded by a cladding material, and whose thickness equals the waveguide’s width (650 nm). The effective light-lines of the original system are then equal to the light-lines of this new slab. The refractive index of the cladding $n_\text{TE}(\omega)$ ($n_\text{TM}(\omega)$) is the only one of importance to determine the light-line, and it is set equal to the effective index of the fundamental TE (TM) slab mode of the original 300 nm thick layer made of the EHM. Then, the effective light-lines are described in term of the cladding index by the equations $$k_\text{TE}(\omega) = \frac{\omega}{c} n_\text{TE}(\omega),\quad k_\text{TM}(\omega) = \frac{\omega}{c} n_\text{TM}(\omega)$$
As seen in Figure \[fig:Bands300\], the dispersion of the $\text{TE}_\text{01}$ mode obtained from Lumerical lies very close to the effective TE light-line, thus further confirming that the mode is only loosely bound. Finally, tridimensional FDTD simulations confirmed that indeed the eigensolver modes are guided without scattering by the full subwavelength waveguide.
We now consider sources of losses in the subwavelength waveguide other than the intrinsic material losses. While the fundamental TE mode is found well below the light-line, one expected source of losses is given by the coupling between the fundamental waveguide mode and the modes confined in the remaining SiC layer, past the suspending arms. Indeed, these losses vanish completely only if the width of the lateral arms $v$ is infinitely large. However, we find that a 2 ${\mu}$m width of the arms is sufficient to ensure the necessary mechanical strength to suspend the waveguide [@penades2014suspended] and to suppress the outcoupling of the guided TE mode to negligible levels. The latter was verified using the same effective-index approach used to calculate the lateral effective light-lines, by adding two additional layers beyond the cladding, which is now 2 ${\mu}$m thick in the $x$ direction on both sides of the central slab. The losses resulting from eigenmode simulations with perfectly matched layer boundary conditions are found to be lower than $10^{-4}$ dB/cm for $v = 2$ ${\mu}$m. Additionally, bending losses were obtained from 2-D eigensolver simulations under the EHM approximation, finding a value of about 0.2 dB/cm for a 20 ${\mu}$m bending radius.
Given the above results, assuming a lossless material, we expect that the main limitations of this kind of structure are given by surface roughness and disorder. Surface roughness couples light from the guided modes to radiative modes [@payne1994theoretical; @grillot2004size]. With respect to traditional ridge waveguides, we expect this effect to be slightly higher due to the presence of the additional material interfaces corresponding to the arms. Still, if needed, the effect of roughness can be counteracted by increasing the width of the central branch ($w$), which increases confinement, at the expense of the introduction of additional guided modes. Disorder in the periodicity or in the position of the lateral arms also increases losses and has to be kept to low enough values. As shown in ref. [@ortega2017disorder], where these effects are studied on a similar structure to the one considered here, the jitter in the position and dimension of periodic structures should not exceed 5 nm to keep losses to a reasonable level. Choosing a working point far below the bandgap can decrease the effect of disorder.
We now consider the nonlinear optical properties of the system. In particular, we estimate the nonlinear waveguide parameter $\gamma$ for the nominal waveguide design. For uniform waveguides, $\gamma$ can be defined in terms of the nonlinear Kerr index $n_2$ and of the Poynting vector ${\bm{P}}$ [@foster2004optimal], according to $$\gamma = \frac{k_0 \int_\Sigma n_2 P_z(x,y)^2\,dx\,dy}{\abs{\int P_z(x,y)\,dx\,dy}^2},
\label{eqn:uniformgamma}$$ where $k_0$ is the vacuum wavevector, $P_z$ is the component of ${\bm{P}}$ normal to the integration surface and where the top integral is performed on the cross-section of the waveguide $\Sigma$, that is, where $n_2$ is non-vanishing. Since in our case the field changes along the propagation, we average the nonlinear waveguide parameter along a single periodic cell, following the approach described in ref. [@sato2015rigorous]: $$\begin{gathered}
\gamma = \aver{\gamma(z)} = \frac{1}{a_0} \int_z^{z+a_0} \gamma(z)\,dz =\\ =\frac{k_0}{a_0}\int_z^{z+a_0}\frac{\int n_2(x,y,z) P_z(x,y,z)^2 \,dx\,dy}{\abs{\int P_z(x,y,z) \,dx\,dy}^2}\,dz
\label{eqn:gamma}\end{gathered}$$ where $n_2(x,y,z)$ is assumed equal to $5.31 \cdot 10^{-19}\ \text{m}^2/\text{W}$ [@martini2018four] when $(x,y,z)$ lies within the SiC structure and zero otherwise. Using the Poynting field calculated from MPB, we find $\gamma = 7.346\ \text{W}^{-1} \text{m}^{-1}$. The nonlinear waveguide parameter is also calculated using Lumerical and the EHM approximation, obtaining $\gamma = 6.182\ \text{W}^{-1} \text{m}^{-1}$; in this case, the presence of the lateral arms is taken into account by assuming that the nonlinear index of the homogeneous medium is the average of those of SiC and air (i.e. equal to $n_2/2 = 2.655 \cdot 10^{-19}\ \text{m}^2/\text{W}$). By comparison, in ref. [@martini2018four] the nonlinear waveguide parameter of slightly less confining SiC waveguides was measured to be $\gamma = 3.86 \pm 0.03\ \text{W}^{-1} \text{m}^{-1}$, while the one of typical Silicon Nitride waveguides is close to 2 $\text{W}^{-1} \text{m}^{-1}$ [@tan2010group].
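As a rough illustration of how Eq. \[eqn:gamma\] can be evaluated on exported field data, the following is a minimal sketch (not the authors' code); the Poynting-vector array and the SiC mask are assumed inputs, e.g. sampled from MPB or an FDTD solver over one periodic cell.

```python
import numpy as np

def averaged_gamma(Pz, sic_mask, dx, dy, n2_sic=5.31e-19, lam0=1550e-9):
    """Cell-averaged nonlinear parameter (W^-1 m^-1).

    Pz       : 3-D array, z-component of the Poynting vector on an (x, y, z) grid
               covering one periodic cell (z along the propagation direction).
    sic_mask : boolean array of the same shape, True where the SiC structure is.
    """
    k0 = 2 * np.pi / lam0
    n2 = np.where(sic_mask, n2_sic, 0.0)      # n2(x,y,z): nonzero only inside SiC
    gamma_z = []
    for iz in range(Pz.shape[2]):             # gamma(z), slice by slice
        num = np.sum(n2[:, :, iz] * Pz[:, :, iz] ** 2) * dx * dy
        den = np.abs(np.sum(Pz[:, :, iz]) * dx * dy) ** 2
        gamma_z.append(k0 * num / den)
    return np.mean(gamma_z)                   # average over the periodic cell
```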
Tolerance {#sec:tolerance}
=========
![Simulations of the effective index $n_\text{TE}$ of the waveguide’s fundamental $\text{TE}_{00}$ mode as a function of the cross-section of the central waveguide. a) Variation of $n_\text{TE}$ in terms of the waveguide width ($w$). b) Variation of $n_\text{TE}$ in terms of the waveguide height ($h$). Continuous lines: MPB - Dashed lines: Lumerical.[]{data-label="fig:tolerance"}](figures/TolerancePlot.pdf){width="48.00000%"}
In order to determine the tolerance of the design to fabrication, we simulated the subwavelength waveguide varying its geometry; in particular, we considered variations in the cross-section and in the filling factor of the lateral arms, and we monitored the change in the effective index of the fundamental TE mode, $\Delta n = n - n_0$, where $n_0$ is the effective index of the nominal waveguide. The simulations were performed using both the eigensolver and the MPB software suite, and the results for the cross-section variation are reported in Figure \[fig:tolerance\]. We find that the results obtained under the EHM approximation (dashed lines) are in good agreement with the ones obtained from MPB (continuous lines). From the linear fit of the data obtained from MPB, we find that the sensitivities of the waveguide effective index on the width and height are respectively $\sigma_w = \Delta n/ \Delta w = 5.00 \cdot 10^{-4}\ \text{nm}^{-1}$ and $\sigma_h = \Delta n/ \Delta h = 1.88 \cdot 10^{-3}\ \text{nm}^{-1}$, while from Lumerical we find $\sigma_w = 3.27 \cdot 10^{-4}\ \text{nm}^{-1}$ and $\sigma_h = 1.91 \cdot 10^{-3}\ \text{nm}^{-1}$. Similarly, we performed simulations varying the filling factor of the lateral arms, obtaining $\sigma_u = \Delta n/\Delta u = 9.69 \cdot 10^{-4}\ \text{nm}^{-1}$ with MPB and $\sigma_u = 1.25 \cdot 10^{-3}\ \text{nm}^{-1}$ with Lumerical.
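The sensitivities above are simply the slopes of linear fits to the simulated data; a minimal sketch of the procedure is given below, where the (width, effective index) pairs are illustrative values consistent with the reported $\sigma_w$ from MPB rather than actual simulation output.

```python
import numpy as np

widths = np.array([610.0, 630.0, 650.0, 670.0, 690.0])   # nm, sweep around the nominal w = 650 nm
n_eff  = np.array([1.887, 1.897, 1.907, 1.917, 1.927])   # illustrative effective indices

sigma_w, _ = np.polyfit(widths, n_eff, 1)                 # slope = sensitivity in nm^-1
print(f"sigma_w ~ {sigma_w:.2e} nm^-1")                   # ~5.00e-04 nm^-1
print(f"Delta n for a 10 nm width error: {10 * sigma_w:.3f}")  # ~0.005
```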
The small overall variation of the waveguide’s effective index in both cases demonstrates that the subwavelength waveguide design is very tolerant to variations of its geometrical parameters. Moreover, these results justify the use of the homogeneous medium approximation to simplify the design procedure of other photonic devices in this platform, once the difference in the effective index given by the two simulation methods is taken into account.
Photonic components {#sec:devices}
===================
A whole range of additional structures can be easily realized under the EHM approximation for the lateral arms region. We can apply standard photonic design and simulation tools to obtain, for example, tapers and bends. In order to avoid losses introduced by periodicity mismatch, the arms in bent sections have to maintain their mutual spacing as close as possible to the nominal value of the straight waveguide. Again, simulations of the structures with FDTD methods confirmed the expected behaviour of the devices.
![Schematic representation of the grating coupler geometry (top view, not to scale): blue regions represent holes to be etched in the SiC film.[]{data-label="fig:Structures"}](figures/TaperScheme.pdf){width="48.00000%"}
Efficient coupling of light into the suspended waveguide can be achieved using grating couplers. On this matter, different designs that exploit subwavelength structures have been proposed [@halir2009waveguide; @cheng2012broadband; @halir2010continuously]. Briefly, the subwavelength arms, in this case, are used to define the effective index for the subwavelength grooves of the grating coupler and thus are oriented along the propagation direction, as shown schematically in Figure \[fig:Structures\].
Table \[tab:gratingcoupler\] lists the dimensions for a uniform grating coupler with simulated -3.8 dB maximum coupling efficiency, designed for TE radiation incoming at an $8\degree$ angle to normal incidence. For the subwavelength grooves we chose a (transversal) periodicity $b_T$ and filling factor $f_\text{grat,T}$ of 300 nm and 0.5, respectively, which give an effective index $n_e'$ for the grooves equal to 1.110. At variance with the bulk effective index of the lateral arms, these values were obtained numerically, simulating the effective index of the mode travelling in the slab and applying periodic boundary conditions in the lateral direction. Thus the value of $n_e'$ for $f_\text{grat,T} = 1$ would correspond to the effective index of the fundamental mode of the SiC slab suspended in air (2.134).
  ------------------------------------------------- -----------------
  SiC layer thickness                                300 nm
  Longitudinal period $(b_L)$                        1230 nm \*
  Longitudinal filling factor $(f_\text{grat,L})$    32.4% \*
  Number of grating periods                          13
  Transversal period $(b_T)$                         300 nm
  Transversal filling factor $(f_\text{grat,T})$     50%
  Grating width                                      12 $\mu$m
  Maximum coupling efficiency                        41.8% (-3.8 dB)
  1 dB bandwidth                                     75 nm
  ------------------------------------------------- -----------------
: Proposed dimensions and properties for a TE SiC subwavelength grating coupler operating around 1550 nm. The index of SiC is assumed to be 2.6. The values marked with \* are obtained by numerical optimization.[]{data-label="tab:gratingcoupler"}
Following the design method described in ref. [@halir2009waveguide], the values marked with \* in Table \[tab:gratingcoupler\] are obtained by numerical 3D FDTD optimization (having maximum transmission at 1550 nm as target and exploiting periodic boundary conditions in the transverse direction); they are very close to the values obtained from the simplest analytic descriptions of the uniform grating coupler: $$l_1 = \frac{\lambda_0}{2 (n_1 - n_c \sin{\alpha})},\quad
l_2 = \frac{\lambda_0}{2 (n_2 - n_c \sin{\alpha})},$$ where $l_1$ and $l_2$ are the longitudinal dimensions of the low- and high-index sections of the grating (so that $b_L = l_1+l_2$ is the grating period and $f_\text{grat,L} = l_2/(l_1+l_2)$ is the grating *longitudinal* filling factor), $\lambda_0$ is the vacuum wavelength of light, $\alpha$ is the angle to normal incidence, $n_1$ and $n_2$ are the low and high effective indexes of the light travelling in the grating, and $n_c$ is the index of the surrounding material. In our case $n_1$ and $n_2$ are equal to 1.110 and 2.134, while $n_c$ is the index of air (1.0). This gives $b_L = 1187$ nm and $l_2/(l_1+l_2) = f_\text{grat,L} = 32.7$%.
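For reference, a minimal numerical check of these analytic starting values (using only the numbers quoted above, not the FDTD-optimized design) could look as follows.

```python
import numpy as np

lam0  = 1550e-9                    # vacuum wavelength (m)
alpha = np.deg2rad(8)              # angle to normal incidence
n1, n2, nc = 1.110, 2.134, 1.0     # groove index, tooth index, air cladding

l1 = lam0 / (2 * (n1 - nc * np.sin(alpha)))   # low-index section length
l2 = lam0 / (2 * (n2 - nc * np.sin(alpha)))   # high-index section length

b_L = l1 + l2                      # longitudinal grating period
f_L = l2 / b_L                     # longitudinal filling factor

print(f"b_L = {b_L * 1e9:.0f} nm")   # ~1187 nm (FDTD optimum: 1230 nm)
print(f"f_L = {f_L * 100:.1f} %")    # ~32.7 %  (FDTD optimum: 32.4 %)
```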
Figure \[fig:gratingtransmission\] shows the transmission of the grating as a function of the wavelength, obtained from 3D FDTD simulations; the 1 dB bandwidth is 75 nm wide, ranging from 1511 nm to 1586 nm. At the expense of the bandwidth, apodised designs can be employed to increase the maximum coupling efficiency.\
![Simulated transmission of the proposed subwavelength grating coupler.[]{data-label="fig:gratingtransmission"}](figures/GratingTransmission.pdf){width="48.00000%"}
Slow-light {#sec:slowlight}
==========
In one-dimensional periodic structures, the modal dispersion near the photonic bandgap becomes flatter, corresponding to a lower group velocity of light (so called “slow-light" regime [@krauss2007slow]). Exploiting slow-light increases the interaction between radiation and matter. For instance, the fraction of the photons emitted to a guided mode by a dipole localized in a waveguide (also known as $\beta$ factor) is inversely proportional to the group index and can reach values very close to unity [@rao2007single; @arcari2014near]. In our platform, the slow-light regime can be reached naturally by increasing the periodicity of the subwavelength waveguide to move down the bandgap close to the working frequency. As it is shown in Fig. \[fig:GroupIndex\], the group index is more than doubled when the waveguide periodicity approaches 390 nm ($f_\text{wg}$ still equal to 0.5).
![Group index and Purcell Factor enhancement as a function of the periodicity of the waveguide, obtained from MPB simulations. The dashed gray line is the enhancement of the group index alone, highlighting the main contribution to the PF enhancement.[]{data-label="fig:GroupIndex"}](figures/PFandng.pdf){width="48.00000%"}
Following the approach of [@rao2007single], we defined an effective mode volume $V_\text{eff}$ and Purcell Factor (PF) associated with the light travelling in the waveguide. $$\text{PF} = \frac{3 \pi c^3 a}{V_\text{eff} \omega_0 \epsilon^{3/2} v_g}$$ where $\omega_0 = 2\pi c/\lambda_0$ is the frequency of light at the working point, $\epsilon^{1/2} = n_\text{SiC}$ is the refractive index of SiC and $v_g$ is the group velocity of light; the effective volume $V_\text{eff}$ is given by $$V_\text{eff}=\frac{1}{\max(\epsilon({\bm{r}})\lvert{\bm{e}}({\bm{r}})\rvert^2)}$$ where ${\bm{e}}$ is the modal electric field traveling in the waveguide, $\epsilon({\bm{r}})$ is the dielectric function that defines the periodic structure and where ${\bm{r}}$ is allowed to vary on the periodic cell. Figure \[fig:GroupIndex\] also reports the enhancement of PF (i.e. $\text{PF}(a)/\text{PF}(a_0)$, where $a$ is the increased periodicity compared to the nominal periodicity $a_0$), showing that it is mainly induced by the increase of the group index.
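Since all constant prefactors cancel when the working frequency is fixed, the enhancement plotted in Figure \[fig:GroupIndex\] reduces to a simple ratio; a minimal sketch is shown below, where the group velocities and effective volumes are assumed to be extracted from the MPB simulations (the numbers in the example are placeholders, not simulation output).

```python
def pf_enhancement(a, vg, Veff, a0, vg0, Veff0):
    """Purcell-factor enhancement PF(a)/PF(a0) at a fixed working frequency.

    PF = 3*pi*c^3*a / (Veff * w0 * eps^(3/2) * vg); the constants cancel in the ratio.
    """
    return (a / a0) * (vg0 / vg) * (Veff0 / Veff)

# Example: doubling the group index (vg -> vg/2) at a = 390 nm with a nearly
# unchanged mode volume gives an enhancement of roughly 2.6.
print(pf_enhancement(a=390e-9, vg=0.5, Veff=1.0, a0=300e-9, vg0=1.0, Veff0=1.0))
```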
A transition region between nominal and slow-light regimes can be realized easily by adiabatically changing the periodicity of the waveguide. Since the field profiles of the different regions are very similar, there is no need to modulate the waveguide width to spatially match the two modes, which has been shown to be a key aspect for obtaining low insertion losses [@krauss2007slow]. Yet, as discussed previously, the closeness of the photon energy to the photonic bandgap would make the system more sensitive to disorder and the fabrication more challenging, and will likely set the limit of slow-light operation.
The achievement of modest PF can benefit the field of quantum technologies based on color centres embedded in SiC. For example, the emission rate of the silicon vacancy (SiV) center is limited by the non-radiative decay from the excited state to a metastable state [@nagy2019high]. Moreover, the collection efficiency in confocal microscopy setups is hampered by the high refractive index due to TIR. A moderate PF would increase the radiative rate to values sufficient to accomplish quantum non-demolition readout of the spin state.
Electro-optic Modulators {#sec:modulators}
========================
Active modulation of light travelling inside SiC can be performed with electro-optic modulators that exploit the high $\chi^{(2)}$ nonlinearity of the material. Assuming that the bottom surface of the structure is not accessible, the modulator could be realized by patterning two metallic pads to the sides of the suspending arms’ region. In order to give an estimate of the performance of the device, we model the two pads as a parallel plate capacitor with 6 $\mu$m spacing and centered on the SiC waveguide [@alferness1982waveguide], so that the overlap between the driving and optical fields is equal to unity. Since the index of refraction of the material $n = \sqrt{1+\chi^{(1)}}$ is modified by an applied electric field $E$ according to $$n(E) = \sqrt{1+\chi^{(1)}+2\chi^{(2)}E} \simeq n + \chi^{(2)}E/n,$$ the standard voltage-length figure of merit for a $\pi$ phase shifter is given by $$L_\pi V_\pi \simeq \frac{\lambda l}{r n^3},$$ where $l$ is the distance between the capacitor plates and $r = 2 \chi^{(2)}/n^4$ is the electro-optic coefficient of the waveguide material. Assuming $n = 2.6$, $\lambda_0 = 1550$ nm, $l = 6\ \mu$m, $\chi^{(2)} = 32.8$ pm/V, then $r = 1.43$ pm/V and $L_\pi V_\pi \simeq 36.9\ \text{V}\cdot\text{cm}$; this performance can be improved by a factor of 2 by implementing an amplitude modulator based on a Mach-Zehnder interferometer driven by pads in the ground-signal-ground configuration, reducing $V_\pi L_\pi$ down to $18.4\ \text{V}\cdot\text{cm}$. This value is about one order of magnitude higher than that of state-of-the-art electro-optic modulators based on Lithium Niobate [@wang2018nanophotonic] ($r_{33} \simeq 30.8$ pm/V [@yariv2006photonics]) which use similar spacing between the pads [@janner2009micro]. As for other platforms, it is conceivable to improve the performance of electro-optic modulators using resonant structures like microring resonators [@xu2005micrometre]. We also confirmed that the pads induce negligible losses: a Lumerical simulation of the optical mode propagating alongside gold pads spaced by 6 ${\mu}$m and placed outside the arms’ region results in about 2 dB/m losses.
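A minimal numerical check of this figure of merit, using only the values quoted above, is sketched below.

```python
n    = 2.6          # SiC refractive index
lam0 = 1550e-9      # vacuum wavelength (m)
l    = 6e-6         # electrode spacing (m)
chi2 = 32.8e-12     # second-order nonlinearity of 4H-SiC (m/V)

r = 2 * chi2 / n**4                 # effective electro-optic coefficient
VpiLpi = lam0 * l / (r * n**3)      # pi phase shifter figure of merit

print(f"r       = {r * 1e12:.2f} pm/V")          # ~1.4 pm/V
print(f"Vpi*Lpi = {VpiLpi * 1e2:.1f} V*cm")      # ~36.9 V*cm (phase shifter)
print(f"MZM/GSG = {VpiLpi * 1e2 / 2:.1f} V*cm")  # ~18.4 V*cm (push-pull Mach-Zehnder)
```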
Conclusions {#sec:conclusions}
===========
In this work we proposed a scalable photonic platform based on SiC that allows coupling of electromagnetic radiation into and out of a SiC thin film, and that allows the manipulation of the electromagnetic field in the material. This platform, based on suspended subwavelength waveguides, is flexible enough to allow the realization of all the basic photonic components such as waveguides, bends, directional couplers, grating couplers and tapers. The proposed design requires a single etch step to access the substrate and to define the geometry of the devices, simplifying the fabrication process with respect to previous suspended SiC platforms; despite this, the platform retains a powerful design flexibility, because the duty cycle of subwavelength sections can be different in different parts of the sample. As explained, an increase in the periodicity makes it possible to reach a slow-light regime, which can be used to enhance the interaction of light with the SiC nonlinearities or with color centers therein. For instance, this effect can be used to shorten the length of superconducting nanowires or electro-optical modulators integrated alongside the suspended waveguides.
Since SiC is very hard compared to materials such as Si and germanium used in other subwavelength platforms, we believe that the lateral suspending structures can be made very thin, hence allowing subwavelength operation at shorter wavelengths than previously demonstrated. Reaching a periodicity shorter than 280 nm would allow the propagation of 1100 nm light, which in turn enables the interaction with NIR defects in SiC. Quantum optics applications would then become feasible. This, together with the increase of the $\beta$ factor given by slow-light, could make this platform appealing for both 3C- and 4H-SiC. An even shorter periodicity of 200 nm would allow the guided propagation of 785 nm radiation, the second harmonic of 1550 nm; provided that a suitable way to obtain phase-matching between these two frequencies can be found, the strong $\chi^{(2)}$ nonlinearity of SiC would allow the efficient exploitation of second harmonic generation and stimulated/spontaneous parametric down-conversion.
Provided that the overall losses of the platform, given not only by the material but also by roughness and disorder, can be kept low enough, squeezing on an integrated, scalable platform would become a concrete and promising application.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) (EP/P003710/1). Useful discussions with Marco Liscidini are acknowledged.
|
---
abstract: 'This is an opinion paper about the strengths and weaknesses of Deep Nets for vision. They are at the center of recent progress on artificial intelligence and are of growing importance in cognitive science and neuroscience. They have enormous successes but also clear limitations. There is also only partial understanding of their inner workings. It seems unlikely that Deep Nets in their current form will be the best long-term solution either for building general purpose intelligent machines or for understanding the mind/brain, but it is likely that many aspects of them will remain. At present Deep Nets do very well on specific types of visual tasks and on specific benchmarked datasets. But Deep Nets are much less general purpose, flexible, and adaptive than the human visual system. Moreover, methods like Deep Nets may run into fundamental difficulties when faced with the enormous complexity of natural images which can lead to a combinatorial explosion. To illustrate our main points, while keeping the references small, this paper is slightly biased towards work from our group.'
author:
- 'Alan L. Yuille'
- Chenxi Liu
bibliography:
- 'refs.bib'
date: 'Received: date / Accepted: date'
title: 'Deep Nets: What have they ever done for Vision?[^1]'
---
Introduction
============
In the last few years Deep Nets have enabled enormous advances in computer vision and the study of biological visual systems. But as researchers in these areas, we find ourselves having mixed feelings about them. On the one hand, we marvel at their successes and how they have led to amazing results on some real world tasks; in academic settings, their performance on benchmarked datasets almost always outperforms alternative approaches. But, on the other hand, we are aware of their limitations and are concerned about the hype that surrounds them. Several recent papers [@DBLP:journals/cacm/Darwiche18; @DBLP:journals/corr/abs-1801-00631] have critiqued Deep Nets from the perspectives of machine reasoning and cognitive science, arguing that though Deep Nets are useful as a tool they will need to be combined with alternative approaches in order to achieve human level intelligence. The nature of our research means that we interact with research faculty in many disciplines (cognitive science, computer science, applied mathematics, engineering, neuroscience, physics, and radiology) and Deep Nets are a frequent topic of conversation. We find ourselves spending half the time criticizing Deep Nets for their limitations and the other half praising them and defending them against their critics (not infrequently we are confidently told that “Deep Nets can never do xxx” when we already know that they can). This opinion paper attempts to provide a balanced viewpoint on the strengths and weaknesses of Deep Nets for studying vision.
The organization of this article is as follows. In Section \[sec:history\] we discuss the history of neural networks and its tendency to boom and bust. Section \[sec:success\] describes a few of the successes of Deep Nets while also mentioning the caveats and fine print. In Section \[sec:understanding\] we discuss the limited understanding of the internal workings of Deep Nets. Section \[sec:cogsci\] surveys their potential for helping to construct theories of biological visual systems, but also their limited relationships to real neurons and neural circuits. In Section \[sec:challenges\] we discuss the challenges that Deep Nets are now grappling with. Section \[sec:explosion\] is more speculative and argues that as vision researchers attempt to model increasingly complex visual tasks they will face a combinatorial explosion which Deep Nets may be unable to handle.
Some History \[sec:history\]
============================
We are in the third wave of neural network approaches. The first two waves — 1950s–1960s and 1980s–1990s — generated considerable excitement but slowly ran out of steam. Despite a few exceptions, the overall performance of neural networks was disappointing for machines (artificial intelligence/machine learning) and for understanding biological vision systems (neuroscience, cognitive science, psychology). But the third wave — 2000s–present — is distinguished because of the dramatic success of Deep Nets on many large benchmarked problems and their industrial application to real world tasks. It should be acknowledged that almost all the basic ideas of many of the currently successful neural networks were developed during the second wave. But their strengths were not appreciated until the availability of big datasets and the ubiquity of powerful computers (e.g., GPUs) which only became available after 2000 and which fueled the third wave.
The rises and falls of these neural network waves reflect changes in intellectual fashion and the varying popularity of other approaches. The second wave of neural networks was partly driven by the perceived limitations of classic artificial intelligence where disappointing results and accusations of over-promising led to an AI winter in the mid-1980s. In turn, the decline of the second wave corresponded to the rise of support vector machines, kernel methods, and related approaches. Credit is due to those neural network researchers who carried on despite discouragement through the troughs of the waves when it was sometimes hard to publish neural network papers. The pendulum has now swung again and it sometimes seems hard to publish anything that is not neural network related. We suspect that progress would be faster if researchers resisted the attraction of fashions and instead pursued a diversity of approaches and techniques. It is also worrying that the courses for students often tend to follow the latest fashions and ignore the older techniques (until they are rediscovered).
The current successes of neural networks are mainly for artificial intelligence tasks where they have made big advances in tasks like face recognition (now working on datasets of tens of millions of people) and on medical image analysis. Neural networks are increasingly being used to model the mind and brain but their relations to real neurons and neural circuits should be treated with caution. Although artificial neural networks were inspired by biology it must be acknowledged that real neurons are much more complex and understanding real neural circuits remains one of the most fundamental challenges of neuroscience.
The Successes, with the Fine Print {#sec:success}
==================================
[Figure \[fig:ubernet\] panel labels, left to right and top to bottom: Input, Boundaries; Surface Normals, Saliency; Semantic Segmentation, Semantic Boundaries; Human Parts, Detection.]{}
The computer vision community was fairly skeptical about Deep Nets until the impressive performance of AlexNet [@DBLP:conf/nips/KrizhevskySH12] for classifying objects in ImageNet [@DBLP:conf/cvpr/DengDSLL009]. This classification task assumes there is a foreground object which is surrounded by a limited background region, so the input is similar to one of the red boxes of the bottom right image in Figure \[fig:ubernet\]. AlexNet’s success stimulated the vision community, leading to a variety of Deep Net architectures with increasingly better performance on object classification, e.g., [@DBLP:journals/corr/SimonyanZ14a; @DBLP:conf/cvpr/HeZRS16; @DBLP:conf/eccv/LiuZNSHLFYHM18].
Deep Nets were also rapidly adapted to other visual tasks such as object detection, where the image contains one or more objects and the background is much larger, e.g., the PASCAL challenge [@DBLP:journals/ijcv/EveringhamGWWZ10]. For this task, Deep Nets were augmented by an initial stage which made proposals for possible positions and sizes of the objects and then applied Deep Nets to classify the proposals (current methods train the proposals and objects together in what is called “end-to-end” training). These methods outperformed the previous best methods, the Deformable Part Models [@DBLP:journals/pami/FelzenszwalbGMR10], for the PASCAL object detection challenge (PASCAL was the main object detection and classification challenge before ImageNet). Other Deep Net architectures also gave enormous performance jumps in other classic tasks like edge detection, semantic segmentation, occlusion detection (edge detection with border-ownership), and symmetry axis detection. Major improvements also occurred for human joint detection, human segmentation, binocular stereo, 3D depth estimation from single images, and scene classification. Several of these tasks are illustrated in Figure \[fig:ubernet\].
But although Deep Nets are very effective, almost always outperforming alternative techniques, they are not general purpose and their successes come with the following three restrictions.
Firstly, Deep Nets are designed for specific visual tasks. Most Deep Nets are designed for single tasks and a Deep Net designed for one task will not be well-suited for another. For example, a Deep Net designed for object classification on ImageNet cannot perform human parsing (i.e. the detection of human joints) on the Leeds Sports Dataset (LSD). There are, however, some exceptions and “transfer learning” sometimes makes it possible to adapt Deep Nets trained on one task to a closely related task provided annotated data is available for that task (see Section \[sec:transfer\]). Intuitively this happens because the features learned by the Deep Net capture image structures that are useful for both tasks. In addition, researchers have recently developed Deep Nets, e.g., UberNet [@DBLP:conf/cvpr/Kokkinos17], which can perform up to four tasks with the same network. But, in general, there is a growing zoo of different Deep Net architectures designed for specific tasks which include cascades of networks and supervision at several different levels of the network.
![Figure taken from @DBLP:conf/eccv/QiuY16. UnrealCV allows vision researchers to easily manipulate synthetic scenes, e.g. by changing the viewpoint of the sofa. We found that the Average Precision (AP) of Faster-RCNN [@DBLP:conf/nips/RenHGS15] detection of the sofa varies from 0.1 to 1.0, showing extreme sensitivity to viewpoint. This is perhaps because the biases in the training cause Faster-RCNN to favor specific viewpoints.[]{data-label="fig:unrealcv"}](figs/unrealcv1 "fig:"){width="\linewidth"} ![Figure taken from @DBLP:conf/eccv/QiuY16. UnrealCV allows vision researchers to easily manipulate synthetic scenes, e.g. by changing the viewpoint of the sofa. We found that the Average Precision (AP) of Faster-RCNN [@DBLP:conf/nips/RenHGS15] detection of the sofa varies from 0.1 to 1.0, showing extreme sensitivity to viewpoint. This is perhaps because the biases in the training cause Faster-RCNN to favor specific viewpoints.[]{data-label="fig:unrealcv"}](figs/unrealcv2 "fig:"){width="\linewidth"}
Secondly, Deep Nets which perform well on benchmarked datasets may fail badly on real world images outside the dataset. This is because the set of real world images is infinitely large and so it is hard for any dataset, no matter how big, to be representative of the complexity of the real world. This is an important issue which we will return to in Section \[sec:explosion\]. For now, we simply remark that all datasets have biases. These biases were particularly blatant in the early vision datasets and researchers rapidly learned to exploit them, for example by exploiting the background context (e.g., detecting fish in Caltech101 was easy because they were the only objects whose backgrounds were water). Comparative studies showed that methods which performed well on some datasets often failed to generalize to others [@DBLP:conf/cvpr/TorralbaE11]. These problems are reduced, but still remain, despite the use of big datasets and Deep Nets. For example, background context remains problematic even for ImageNet [@DBLP:conf/ijcai/ZhuXY17]. Biases also occur if the dataset contains objects from limited viewing conditions; e.g., as shown in Figure \[fig:unrealcv\], a Deep Net trained to detect sofas on ImageNet can fail to detect them if shown from viewpoints which were underrepresented in the training dataset. In particular, Deep Nets are biased against “rare events” which occur infrequently in the datasets. But in real world applications, these biases are particularly problematic since they may correspond to situations where failures of a vision system can lead to terrible consequences, e.g., datasets used to train autonomous vehicles almost never contain babies sitting in the road. Similarly, datasets often tend to under-represent the hazardous factors which are known to cause algorithms to fail, such as specularity for binocular stereo. We will return to this example in Section \[sec:sensitivity\].

Thirdly, almost all Deep Nets require annotated data for training and testing. This has the effect of biasing vision researchers to work on those visual tasks for which annotation is easy. For example, annotation for object detection merely requires specifying a tight bounding box around an object. But for other vision tasks, such as detecting the joints of a human, annotation is much harder, and for some tasks it is almost impossible. There are methods which reduce the need for supervision, as discussed in Section \[sec:transfer\], and there is also the possibility of using synthetic stimuli (generated by computer graphics engines), which makes groundtruth available for all visual tasks. But realistic synthetic stimuli are limited and the vision community is reluctant to rely on them until they become sufficiently realistic.
In summary, Deep Nets are a set of tools which are constantly being refined and developed according to the needs of specific visual tasks. They almost all rely on fully supervised data, with caveats we will discuss later, and their performance can fail to generalize to images outside the dataset they have been trained on. Dataset biases are particularly problematic for vision due to the infinite complexity of real world images, as we will discuss in Section \[sec:explosion\].
Towards Understanding Deep Nets {#sec:understanding}
===============================
{width="\linewidth"}
It is difficult to characterize what Deep Nets can do and to understanding their inner workings. Theoretical results show that multi-layer perceptrons, and hence Deep Nets, can represent any input output function provided there are a sufficient number of hidden units [@DBLP:journals/nn/HornikSW89]. But, as anybody who has proven theorems of this type is well aware [@DBLP:journals/nn/XuKY94], theoretical results which hold in the asymptotic limit are of limited utility. Much more valuable would be results which hold for limited numbers of hidden units and limited training data, but it is hard to see what meaningful theoretical results could be obtained for systems as complicated as Deep Nets.
At a more intuitive level it seems possible to get some rough understanding of Deep Nets at least when applied to visual tasks. The hierarchical structure of Deep Nets is similar to classical models of the visual cortex such as the Neocognitron [@fukushima1982neocognitron] and HMax [@riesenhuber1999hierarchical] and captures many of the intuitions which motivated these models. Deep Nets contain feature representations whose lower levels have receptive fields of limited size and are sensitive to the precise positions of patterns. But as we ascend the hierarchy the receptive fields become larger and more sensitive to specific patterns, while being less concerned about their exact locations.
This can be partially understood by studying the activities of the internal filters/features of the convolutional levels of Deep Nets [@DBLP:conf/eccv/ZeilerF14; @DBLP:journals/corr/YosinskiCNFL15]. In particular, if Deep Nets are trained for scene classification then some convolutional layer filters roughly correspond to objects which appear frequently in the scene, while if the Deep Nets are trained for object detection, then some features roughly correspond to parts of the objects [@DBLP:journals/corr/ZhouKLOT14]. In detailed studies of a restricted subset of objects (e.g., vehicles), researchers [@DBLP:journals/corr/WangZPY15] discovered regular patterns of activity of the feature vectors, called visual concepts, which corresponded approximately to the semantic parts of objects (with sensitivity to viewpoint), see Figure \[fig:vc\]. But we acknowledge that while these studies are encouraging, they remain fairly impressionistic and lack the precision of true understanding (e.g., these studies have not yet enabled researchers to learn models of objects and object-parts in an unsupervised manner).
This suggests the following rough conceptual picture of Deep Nets. The convolutional levels represent the manifold of intensity patterns at different levels of abstraction. The lowest levels represent local image patterns while the high levels represent larger patterns which are invariant to the details of the intensity patterns. From a related perspective, the weight vectors represent a dictionary of templates of image patterns. The final “decision layers” of the Deep Net are usually harder to interpret but it is plausible that they make decisions based on the templates represented by the lower layers. This “dictionary of templates” interpretation of Deep Nets suggests that they can learn and represent an enormous variety of image patterns very efficiently, and interpolate between them, but cannot extrapolate much beyond the patterns they have seen in their training dataset. Other studies suggest that Deep Nets are less effective at modeling visual properties which are specified purely by geometry, particularly if the input consists of binary valued patterns corresponding to the presence or absence of boundary edges. It is an open issue whether Deep Nets can learn features that “factorize” different visual properties which, as we will argue later in Section \[sec:explosion\], will ultimately be necessary for dealing with the full complexity of real images.
Deep Nets and Biological Vision {#sec:cogsci}
===============================
Deep Nets have a lot to offer for studying biological vision systems and, in particular, disciplines like cognitive science, neuroscience and psychology which aim at understanding the mind and the brain. They can help develop and test computational theories by exploiting the availability of big data while raising the possibility of understanding the brain by relating the artificial neurons in Deep Nets to real neurons in the brain. But they also have significant limitations for both modeling real neural circuits and human cognitive abilities.
Exploiting Big Data
-------------------
The use of Deep Nets, and other machine learning techniques, can help develop theories of mind and brain which exploit big data. This can be done in roughly three ways. Firstly, Deep Nets can help develop theories that deal with the enormous complexity of real world images. Secondly, they can be used to partially learn the knowledge about the visual world that humans and other animals obtain through development and experience. Thirdly, they enable theories to be tested on complex stimuli and compared to alternative theories. We will now address these issues in turn.
Historically, studies of biological visual systems have largely relied on simple synthesized stimuli. These studies have led to many important findings and were historically necessary because the complexity of natural image stimuli means that it is extremely hard to perform controlled scientific experiments by systematically varying the experimental parameters. This also follows the well-established scientific strategy of divide and conquer which aims at understanding by breaking down complex phenomena into more easily understandable chunks. But studying vision on simplified stimuli has limitations which Deep Nets and big data can help address. As researchers in computer vision discovered in the 1980s, findings on simplified synthetic stimuli, though sometimes providing motivations and good starting points, typically required enormous modifications before they could be extended to realistic stimuli, if they could be extended at all. Computer vision researchers had to leave their comfort zone of synthetic stimuli and address the fundamental challenge of vision: namely how visual systems deal with the complexity and ambiguity of real world images and achieve the miracle of converting the light rays that enter the eye, or a camera, into an interpretation of the three-dimensional physical world. Driven by the need to address these issues, computer vision researchers developed a large set of mathematical and computational techniques and increasingly realized the importance of learning theories from data using tools like Deep Nets, which required large annotated datasets. The same techniques can be directly applied to studying biological vision by predicting experimental responses to visual stimuli, e.g., human performance in behavioral experiments, the responses of neurons, or fMRI activity.
Big data, and learning methods for mining the data, are particularly important for vision because, as leading vision scientists like Gregory and Marr have argued, visual systems require knowledge of the world in the form of natural and ecological constraints. In Gregory’s words, “perception is not just a passive acceptance of stimuli, but an active process involving memory and other internal processes”. In other words, the visual systems of humans, and other animals, exploit a large amount of knowledge which has been acquired through development and experience. Big data methods, like Deep Nets, give vision scientists a surrogate way to partially learn this knowledge by studying properties of real world images.
Finally, the use of big datasets is also very important for testing visual theories because it enables detailed comparisons with alternative theories. Big datasets make it easy to reject “toy theories” that exploit the biases inherent in small datasets and simplified stimuli. In summary, the use of Deep Nets and big data enables biological vision researchers to develop and test theories that can work in realistic visual domains and address the fundamental challenge of vision.
Real Neurons and Neural Circuits
--------------------------------
From the neuroscience perspective, Deep Nets have been used to predict brain activity, such as fMRI and other non-invasive measurements, and there are a growing number of examples [@cichy2016comparison; @wen2017neural]. They have also been applied to predicting neural responses as measured by electrophysiology and, in particular, for predicting the response of neurons in the ventral stream [@yamins2014performance]. These are examples where Deep Nets’ ability to learn from data and to deal with the complexity of real stimuli really pays off. But in terms of understanding neuroscience, this is best thought of as a starting point. The ventral stream of primates is very complex and there is evidence that it estimates the three-dimensional structure of objects and parts [@yamane2008neural], and relates to the classic theory of object recognition by components [@biederman1987recognition], which differs in many respects from standard Deep Nets. More generally, primate visual systems must perform all the visual tasks listed in Section \[sec:success\], namely edge detection, binocular stereo, semantic segmentation, object classification, scene classification, and 3D-depth estimation. The vision community has developed a range of different Deep Nets for these tasks, so it is extremely unlikely, for example, that a Deep Net trained for object classification on ImageNet will be able to account for the richness of primate visual systems.
It should also be emphasized that while Deep Nets perform computations bottom-up in a feedforward manner there is considerable evidence of top-down processing in the brain [@lee2003hierarchical], particularly driven by top-down attention [@gregoriou2014lesions]. Researchers have also identified cortical circuits [@mcmanus2011adaptive] which implement spatial interactions (though possibly in a bottom-up and top-down manner). These types of phenomena require other families of mathematical models, perhaps the compositional models described in Section \[sec:explosion\].
But, more fundamentally, it must be acknowledged that there are big differences between the artificial neurons used in Deep Nets and real neurons in the brain. Artificial models of neurons are, at best, great simplifications of realistic neurons as shown by studies of real neurons in vitro [@poirazi2001impact]. Neuroscientists have found that there are over one hundred different types of neurons, and there are enormous morphological differences which may be exploited to enable computation [@seung2012connectome]. There is also a lack of detailed understanding of neural circuits. For example, the wiring diagram of C. elegans has been known for over thirty years but there is still only limited understanding of how it functions as a neural circuit (as stated by O. Hobert, the wiring diagram “is like a road map that tells you where cars can drive, but does not tell you when or where cars are actually driving”). Understanding neural circuits will also require understanding their dynamics and how this can change based on a host of possible mechanisms such as rapidly changing synapses [@von1994correlation]. Understanding real neurons and real neural circuits is a fascinating scientific challenge, and exciting engineering advances [@boyden2005millisecond], together with the availability of huge datasets and the tools to analyze them, mean that progress will surely be made. But these are highly challenging scientific tasks. In summary, the jump between real neural circuits and the artificial circuits in Deep Nets remains huge and it is likely that real neural circuits will ultimately be found to be much more complicated.
Cognitive Abilities: Deep Nets and Scientific Understanding
-----------------------------------------------------------
It is clear that Deep Nets, and other machine learning techniques, are very helpful for vision scientists, but it is doubtful that they are sufficient to capture the complexity of biological visual systems. The human visual system performs much better than Deep Nets, or other AI visual systems, on almost all visual tasks. The few exceptions are situations where evolution and experience put humans at a disadvantage. For example, AI systems can outperform humans by recognizing hundreds of millions of faces, provided they are seen from front-on under reasonable lighting conditions and with limited occlusion, but until recently most humans never saw more than a few thousand people in their whole lifetime. It is also possible that AI systems could perform better than the average radiologist when reading computed tomography (CT) images, but even the most expert radiologists have only seen a fairly small number of CT scans (and AI systems can directly access the three-dimensional data in CT scans, while radiologists can only view two-dimensional slices). In each of these cases, humans are at a disadvantage because they do not have access to, and hence cannot exploit, the enormous amounts of annotated big data which enable Deep Nets to do so well on these tasks. But true examples of Deep Nets outperforming humans are very rare (and often due to Deep Nets overfitting the datasets on which the studies are performed). Moreover, humans can perform a large variety of visual tasks while current AI systems are usually specialized on single tasks.
Moreover, studies of cognitive science show that human visual systems can work at levels of abstraction which current Deep Nets cannot match. This can be illustrated by the human ability at visual analogies, some of which depend only on visual similarity, while others depend on the notion of parts and subparts, and others include the idea of function. As we will argue in Section \[sec:explosion\], this reflects limitations of current machine learning methods and suggests that current techniques, like Deep Nets, will reach a wall. From another perspective, it can also be argued that the goal of vision science is to discover underlying principles. From this perspective, a model that explains phenomena in terms of an uninterpretable Deep Net would not be very satisfying. This is a debatable issue on which reasonable people can disagree. But we suspect that progress in AI will also require interpretable models, partly for the pragmatic engineering reason that interpretability is necessary for debugging and for performance and safety guarantees. In summary, Deep Nets, and other techniques which exploit big data, are a tool that mind and brain researchers should know how to use and not misuse. But it is equally clear that current Deep Nets fail to capture some of the most interesting phenomena, such as humans’ ability to perform abstraction and analogical reasoning (although Deep Nets might be useful as building blocks to construct such a theory). Nevertheless, a closer relationship between biological and artificial models of vision would be beneficial to both disciplines. Researchers in AI have developed a large set of technical tools, like Deep Nets, which can allow their models to be applied to the complexity of natural images and tested under rigorous realistic conditions. Vision scientists can challenge computer vision researchers to develop theories which can perform as well as, or better than, humans in challenging situations while using orders of magnitude less power than current computers.
Some Challenges {#sec:challenges}
===============
This section describes some of the current challenges of Deep Nets and the attempts to address them. Some of these challenges are gradually being overcome, while others, such as the sensitivity to non-local attacks, may require more fundamental changes, as we will discuss in Section \[sec:explosion\].
Relaxing the Need for Full Supervision {#sec:transfer}
--------------------------------------
A disadvantage of Deep Nets is that they typically need a very large amount of annotated training data, which restricts their use to situations where big data is available. But this is not always the case. In particular, “transfer learning” shows that the features of Deep Nets learned on annotated datasets for certain visual tasks can sometimes be transferred to novel datasets and related tasks, thereby enabling learning with much less data and sometimes with less supervision. For example, as mentioned earlier, Deep Nets were first successful for object classification on ImageNet but failed on object detection on the smaller PASCAL dataset. This was presumably because PASCAL was not big enough to train a Deep Net but ImageNet was (ImageNet is almost two orders of magnitude larger than PASCAL). But researchers quickly realized that it was possible to train a Deep Net for object detection and semantic segmentation on PASCAL by initializing the weights of the Deep Net by the weights of a Deep Net trained on ImageNet [@DBLP:conf/cvpr/GirshickDDM14; @DBLP:conf/cvpr/LongSD15; @DBLP:journals/pami/ChenPKMY18]. This also introduced a mechanism for generating proposals, see Figure \[fig:ubernet\] (bottom right).
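To make this recipe concrete, the following is a minimal sketch (in PyTorch/torchvision, used here purely for illustration) of the initialize-from-ImageNet-then-fine-tune strategy described above; the number of classes, the choice of which layers to freeze, and the training loop are placeholders rather than the settings used in the cited papers.

```python
# Minimal sketch of fine-tuning an ImageNet-pretrained backbone on a new task.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 20                              # placeholder, e.g. PASCAL VOC classes
model = models.resnet18(pretrained=True)      # weights learned on ImageNet

# Replace the 1000-way ImageNet head with a task-specific classifier head.
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optionally freeze the early convolutional "dictionary of templates" and
# fine-tune only the last block and the new head.
for name, param in model.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# train_loader is assumed to yield (images, labels) batches for the new task:
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```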
This ability to transfer Deep Net knowledge learned on another domain relates intuitively to the way children learn. A child initially learns rather slowly compared to other young animals but at critical periods the child’s learning accelerates very rapidly [@smith2005development]. From the “dictionary of templates” perspective, this could happen because after a child has learned to recognize enough objects he/she may have enough building blocks (i.e. deep network filters) to be able to represent new objects in terms of a dictionary of existing templates. If so, only a few examples of the new object may be needed in order to do few-shot learning.
Few-shot learning of novel object categories has been shown for Deep Nets provided they have first been trained on a large set of object categories [@DBLP:conf/iccv/MaoWYWHY15; @DBLP:conf/nips/VinyalsBLKW16; @DBLP:conf/cvpr/QiaoLSY18]. Another strategy is to train a Deep Net to learn similarity (technically a [*Siamese network*]{}) on the set of object categories, hence obtaining a similarity measure for the new objects. For example, @DBLP:journals/corr/LinWLZYL17 trained a Siamese network to learn similarity for objects in ShapeNet [@DBLP:journals/corr/ChangFGHHLSSSSX15] and then this similarity measure was used to cluster objects in the Tufa dataset [@DBLP:journals/jmlr/SalakhutdinovTT12]. Other few-shot learning tasks can also be done by using features from Deep Nets trained for some other tasks as ways to model the visual patterns of objects.
More recently, there has been work on unsupervised learning which shows that optical flow and structure from motion can be learned without requiring detailed supervision but only an energy function model [@DBLP:conf/aaai/RenYNLYZ17; @DBLP:conf/cvpr/ZhouBSL17]. As with many neural nets in the third wave, some of the basic ideas can be found in obscure papers from the second wave [@smirnakis1995neural]. In some cases, this can even be bootstrapped to learning depth from single images. Other forms of unsupervised learning show that Deep Net features can be learned by distinguishing between scrambled and unscrambled images [@DBLP:conf/iccv/DoerschGE15], or by tracking an object over time [@DBLP:conf/iccv/WangG15].
Other studies show that Deep Nets can exploit large amounts of unsupervised, or weakly supervised, data provided they have sufficient annotated data to start with. For example, object detectors can be trained using images where only the names of the objects in the image are known but their locations and sizes are unknown. This is known as weakly supervised learning and it can be treated as a missing/hidden data problem which can be addressed by methods such as Multiple Instance Learning (MIL) or Expectation-Maximization (EM). Performance of these types of methods is often improved by using a small amount of fully supervised training data which helps the EM or MIL algorithms converge to good solutions, e.g., see @DBLP:conf/iccv/PapandreouCMY15.
Defending Against Adversarial Examples {#sec:adversarial}
--------------------------------------
![Figure taken from @DBLP:journals/corr/abs-1711-01991. A deep network can correctly classify the left image as *king penguin*. The middle image is the adversarial noise magnified by 10 and shifted by 128, and on the right is the adversarial example misclassified as *chihuahua*.[]{data-label="fig:adv-cls"}](figs/adversarial){width="\linewidth"}
![Figure taken from @DBLP:conf/iccv/XieWZZXY17. The top row is the input (adversarial perturbation already added) to the segmentation network, and the bottom row is the output. The red, blue and black regions are predicted as *airplane*, *bus* and *background*, respectively.[]{data-label="fig:adv-seg"}](figs/adversary-segmentation){width="\linewidth"}
Another limitation of Deep Nets comes from studies showing that they can be successfully attacked by imperceptible modifications of the images which nevertheless cause the Deep Nets to make major mistakes for object classification [@DBLP:journals/corr/SzegedyZSBEGF13], object detection, and semantic segmentation [@DBLP:conf/iccv/XieWZZXY17] (see Figure \[fig:adv-cls\] and Figure \[fig:adv-seg\]). This problem partly arises because the datasets are finite and contain only an infinitesimal fraction of all possible images. Hence there are infinitely many images arbitrarily close to the training images and so there is a reasonable chance that the Deep Net will misclassify some of them. Researchers have shown that they can find such images either by [*white box*]{} attacks, where the details of the Deep Net are known, or by [*black box*]{} attacks, when they are not. But there are now strategies which defend against these attacks. One strategy is to treat these “attack images” as extra training data, known as “adversarial training” [@DBLP:journals/corr/GoodfellowSS14; @DBLP:journals/corr/MadryMSTV17]. A second recent alternative [@DBLP:journals/corr/abs-1711-01991] is to introduce small random perturbations into the images, exploiting the assumption that the “attack images” are very unstable, so that small random perturbations will defend against them (admittedly @DBLP:conf/icml/AthalyeC018 has successfully circumvented this defense). It should be acknowledged that adversarial attacks can be mounted against any vision algorithm, and most other vision algorithms would be much easier to attack successfully.
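As a concrete illustration of how easily such “attack images” can be constructed in the white box setting, the following is a minimal sketch of the fast gradient sign method of @DBLP:journals/corr/GoodfellowSS14; the `model`, `image` and `label` objects are assumed to be supplied by the user, the perturbation budget `epsilon` is arbitrary, and real attacks (and defenses) are typically iterative and more sophisticated.

```python
# Minimal sketch of a white-box FGSM adversarial attack (illustrative only).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return an adversarial copy of `image` within an L-infinity ball of radius epsilon."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Take one step in the direction that increases the loss, then clip to [0, 1].
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```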
Addressing Over-Sensitivity to Context {#sec:sensitivity}
--------------------------------------
{width="\linewidth"}
{width="\linewidth"}
A more serious challenge to Deep Nets is their over-sensitivity to context. Figure \[fig:monkey\] shows the effect of photoshopping a guitar into a picture of a monkey in the jungle. This causes the Deep Net to misidentify the monkey as a human and also misinterpret the guitar as a bird, presumably because monkeys are less likely than humans to carry a guitar and birds are more likely than guitars to be in a jungle near a monkey [@wang2018visual]. Recent work gives many examples of the over-sensitivity of Deep Nets to context, such as putting an elephant in a room [@DBLP:journals/corr/abs-1808-03305].
This over-sensitivity to context can also be traced back to the limited size of datasets. For any object only a limited number of contexts will occur in the dataset and so the Deep Net will be biased towards them. For example, in early image captioning datasets it was observed that giraffes only occurred with trees and so the generated captions failed to mention giraffes in images without trees even if they were the most dominant object.
Observe that the limited size of datasets is a common theme when we consider the current limitations of Deep Nets. Recall that we already mentioned how synthetic data could be used, see Figure \[fig:unrealcv\], to show that Deep Nets trained on ImageNet could not recognize objects from some viewpoints. An advantage of synthetic data is that it enables us to generate, in principle, an infinite number of images and hence to systematically explore the effect of varying factors like viewpoint and material properties, e.g., see @DBLP:conf/eccv/QiuY16 [@DBLP:journals/corr/abs-1811-11553]. Similarly, synthetic data can be used to systematically vary hazardous factors for stereo vision (those factors, like specularity, which are known to cause stereo algorithms to fail; see Figure \[fig:hazardous\]), enabling researchers to characterize the sensitivity of stereo algorithms to these factors [@DBLP:conf/3dim/ZhangQCHY18]. Hence synthetic datasets offer the possibility of generating as much data as is required to systematically study the sensitivity of Deep Nets to the nuisance factors, like viewpoint and radiosity, which arise in reality (provided the synthetic datasets are realistic enough to accurately represent real world images).
The difficulty of capturing the enormous varieties of context, as well as the need to explore the large range of nuisance factors, is highly problematic for data driven methods like Deep Nets. It seems that ensuring that the networks can deal with all these issues will require datasets that are arbitrarily big, which raises enormous challenges for both training and testing datasets. We will discuss these issues next.
The Combinatorial Explosion: When Big Datasets Are Not Enough {#sec:explosion}
=============================================================
This section argues that vision researchers face a combinatorial explosion as they grapple with the complexity of real world data in order to develop algorithms that will work robustly on complex visual tasks in the real world. In such situations big datasets will not be big enough and novel methods will be required for developing algorithms and for testing them.
The Combinatorial Explosion
---------------------------
{width="0.85\linewidth"}
Deep Nets are trained and evaluated on large datasets which are intended to be representative of the real world. But, as discussed earlier, Deep Nets can fail to generalize to images outside the datasets they were trained on, can make mistakes on events that occur rarely within the datasets (but which may have disastrous consequences, such as running over a baby or failing to detect a cancerous tumor), and are also sensitive to adversarial attacks and changes in context. None of these problems are necessarily deal-breakers for the success of Deep Nets and they can certainly be overcome for certain visual domains and tasks. But we argue that these are early warning signs of a problem that will arise as vision researchers attempt to use Deep Nets to address increasingly complex visual tasks in unconstrained domains: namely, that in order to deal with the combinatorial complexity of real world images the datasets would have to become exponentially large, which is clearly impractical.
To understand this combinatorial complexity, consider the following thought experiment. Imagine constructing a visual scene by selecting objects from an object dictionary and placing them in different configurations. This can clearly be done in an exponential number of ways. We can obtain similar complexity even for images of a single object since it can be partially occluded in an exponential number of ways. We can also change the context of an object in an infinite number of ways. Although humans are good at adapting to changes in visual context, Deep Nets are much more sensitive, as illustrated in Figure \[fig:monkey\]. We note that this combinatorial explosion may not arise for some visual tasks, and Deep Nets are likely to be extremely successful for medical image applications because there is comparatively little variability in context (e.g., the pancreas is always very close to the duodenum). But for many real world applications, particularly those involving humans interacting with the world in video sequences, it seems that the complexity of the real world cannot be captured without having an exponentially large dataset.
This causes big challenges for current methods of training and testing visual algorithms. These methods were developed by machine learning researchers to ensure that algorithms are capturing the underlying structure of the data instead of merely memorizing the training data. They assume that the training and testing data are randomly drawn samples from some unknown probability distributions. But critically, the datasets need to be large enough to be representative of the underlying distribution of the data. Interestingly, to the best of our knowledge, researchers on the foundations of machine learning have never directly addressed this issue. Instead they have concentrated on theoretical results, called Probably Approximately Correct (PAC) theorems, which give bounds on the probability that a machine learning algorithm has learned the structure of the underlying data, whose key insight is that the amount of training data must be much larger than the set of hypotheses that the learning algorithm can consider before seeing the data [@DBLP:journals/cacm/Valiant84; @DBLP:books/daglib/0097035; @poggio2003mathematics]. But, in any case, the standard paradigm of training and testing models on a finite number of randomly drawn samples becomes impractical if the set of images is combinatorially large. This forces us to address two new problems: (I) How can we train algorithms on finite sized datasets so that they can perform well on the truly enormous datasets required to capture the combinatorial complexity of the real world? (II) How can we efficiently test these algorithms to ensure that they work in these enormous datasets if we can only test them on a finite subset?
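For concreteness, one textbook form of these PAC guarantees, for a finite hypothesis class $H$ in the realizable setting, states that with probability at least $1-\delta$ any hypothesis consistent with the training data has true error at most $\epsilon$, provided the number of i.i.d. training samples satisfies $$m \;\ge\; \frac{1}{\epsilon}\Big(\ln |H| + \ln \frac{1}{\delta}\Big).$$ We quote this elementary bound only to make explicit the sense in which the data must dwarf the space of hypotheses; the works cited above give far more general statements (e.g., in terms of the VC dimension), but the same message applies when the space of images, and hence of candidate hypotheses, is combinatorially large.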
{width="\linewidth"}
It helps to consider these issues from the perspective of computer graphics. It is straightforward (see Figure \[fig:qi\]) to specify a computer program with $13$ parameters that can render images of a single object from different viewpoints, under different illuminations, and in a limited number of background scenes. If we allow 1,000 different values for each parameter we obtain a total of $1{,}000^{13}=10^{39}$ different images, some thirty orders of magnitude larger than any existing dataset. The program can be extended to include multiple objects in an enormous range of visual scenes and, in principle, we can specify a model with a finite, but very large, number of parameters that can generate a combinatorially large number of real images which can approximate the real world. But while this gives a way to potentially generate all real world images, it does not solve the issue of how to train and test models on these datasets.
{width="0.9\linewidth"}
Models for Overcoming Combinatorial Complexity
----------------------------------------------
It seems highly unlikely that methods like Deep Nets, in their current forms, can deal with the combinatorial explosion. The datasets may never be large enough to either train or test them. Here we sketch the types of ideas we think will be relevant. We can get some guidance from the human visual system which faces and overcomes these challenges. Humans see roughly $10^9$ images every year (assuming 30 images per second) which is big, but not combinatorial. But humans, above a critical age, can learn from small numbers of examples, perceive three-dimensional structure, deal with abstraction, can exploit context when it is helpful but ignore it when it is not. Recent experiments [@ullman2016atoms] suggest that humans can interpret images unambiguously provided they are above a critical size (which depends on the image content) and additional context is unnecessary.
Compositionality will probably be one part of the solution. This is a general principle which can be described poetically as “an embodiment of faith that the world is knowable, that one can tease things apart, comprehend them, and mentally recompose them at will”. The key assumption is that structures are composed hierarchically from more elementary substructures following a set of grammatical rules. This suggests that the substructures and the grammars can be learned from finite amounts of data but will generalize to combinatorial situations. Unlike Deep Nets, compositional models require structured representations which make explicit their structures and substructures which enables them to do multiple tasks (e.g., detecting objects, object parts, and object boundaries) with the same underlying representation [@DBLP:conf/nips/ChenZLYZ07] (it is argued that Deep Nets are compositional, but this is in a very different sense). Compositional models offer the ability to extrapolate beyond data they have seen, to reason about the system, intervene, do diagnostics, and to answer many different questions based on the same underlying knowledge structure [@pearl2009causality]. To quote Stuart Geman “the world is compositional or God exists”, since otherwise it would seem necessary for God to handwire human intelligence [@geman2007compositionality].
Compositionality relates closely to pattern theory and analysis by synthesis [@grenander1993general; @mumford1994pattern; @DBLP:conf/iccv/TuCYZ03; @DBLP:journals/ftcgv/ZhuM06; @mumford2010pattern]. It can be illustrated by a toy-world example, shown in Figure \[fig:letters\], where images are created in terms of basic vocabularies of elementary components. The three panels show microworlds of increasing complexity from left to right. For each microworld there is a grammar which specifies the possible images as constructed by compositions of the elementary components. In the left panel the elementary components are letters which do not overlap, and so interpreting the image is easy. The center and right panels are generated by more complicated grammars – letters of different fonts, bars, and fragments which can heavily occlude each other. Interpreting these images is much harder and seems to require the notion that letters are composed of elementary parts, that they can occur in a variety of fonts, and the notion of “explaining away” (to explain that parts of a letter are missing because they have been occluded by another letter).
The third microworld in Figure \[fig:letters\] is an example of a combinatorially large dataset since images are constructed by selecting objects from a dictionary and placing them at random while allowing for occlusion. This microworld is essentially the same as CAPTCHAs, which can be used to distinguish between humans and robots. Interestingly, work on CAPTCHAs [@george2017generative] shows that compositional models which represent objects in terms of compositions of elementary tokens and factorize geometry and appearances can perform well on these types of datasets. Their inference algorithm involves bottom-up and top-down processing [@DBLP:conf/iccv/TuCYZ03] which enables the algorithm to “explain away” missing parts of the letters and to impose “global consistency” of the interpretation to remove ambiguities. Intuitively, part detectors make bottom-up proposals for letters which can be validated or rejected in the top-down stage. By contrast, Deep Nets performed much worse on these datasets, presumably because, unlike compositional models, they cannot capture the underlying generative structure of the domain and extrapolate outside their training dataset. Since the microworld is combinatorially large, it will not be possible to train Deep Nets on enough data to guarantee good performance on the entire dataset. Other theoretical studies, e.g., @DBLP:journals/jmlr/YuilleM16, suggest that compositional models are well suited for dealing with complexity by sharing parts and using hierarchical abstraction.
Other non-visual examples illustrate the same points. A recent example is when researchers [@DBLP:conf/icml/SantoroHBML18] tried to train standard Deep Nets to do IQ tests. The task requires finding compositions of meaningful rules/patterns (distractors may be present) within 8 given images in a $3 \times 3$ grid, and the goal is to fill in the last missing image. Not surprisingly, Deep Nets do not generalize well. For natural language applications, Neural Module Networks [@DBLP:conf/cvpr/AndreasRDK16] are more promising than static, fixed-structure Deep Nets, in that the dynamic architectural layout may be flexible enough to capture some meaningful compositions. In fact, we recently verified that the individual modules indeed perform their intended functionalities (e.g., `AND`, `OR`, `Filter(red)`, etc.) after joint training [@DBLP:journals/corr/abs-1901-00850].
Compositional models have many desirable theoretical properties, such as being [*interpretable*]{} and [*generative*]{}, so that they can be sampled from. This means that, in principle, they know everything about the object (or whatever entity is being modeled), which makes them easier to diagnose, and hence harder to fool, than black box methods like Deep Nets. But learning compositional models is hard because it requires learning the building blocks and the grammars (and even the nature of the grammars is debatable). There has, however, been some limited success in learning hierarchical dictionaries starting from basic elementary tokens like edges [@DBLP:conf/cvpr/ZhuCTFY10]: see Figure \[fig: composition\].
A current limitation of compositional models is that in order to perform analysis by synthesis they need to have generative models of objects and scene structures. Putting distributions on images is challenging, with a few exceptions like faces, letters, and regular textures [@DBLP:conf/iccv/TuCYZ03]. But there is promising progress from two directions. Firstly, computer graphics models are becoming increasingly realistic and visual appearance can be roughly factored into geometry, texture, and illumination. Recall that the $10^{39}$ images (Figure \[fig:qi\]) were generated from only $13$ parameters. Secondly, Deep Nets have also been applied to generating images using Generative Adversarial Networks (GANs). From the perspective of analysis by synthesis, the results of GANs are disappointing, though recent work on conditional GANs shows promise. More fundamentally, dealing with the combinatorial explosion requires learning causal models of the 3D world and how these generate images. Studies of human infants suggest that they learn by making causal models that predict the structure of their environment, including naive physics. This causal understanding enables learning from limited amounts of data and performing true generalization to novel situations. This is analogous to contrasting Newton’s Laws, which gave causal understanding with a minimal number of free parameters, with the Ptolemaic model of the solar system, which gave very accurate predictions but required a large amount of data to determine its details (i.e., the epicycles).
Testing Models When Data Is Combinatorial
-----------------------------------------
How can we test whether vision algorithms can deal with the complexity of the real world if we can only test them on finite amounts of data? If we have well structured models, e.g., compositional models as described above, then we can exploit the structure of the models to determine their failure modes. This, of course, is similar to how complex engineering structures (e.g., airplanes) or software systems are tested by systematically identifying their weak points. This is more reminiscent of game theory than of decision theory (which focuses on the average loss and which underlies machine learning theory) because it suggests paying attention to the worst cases instead of the average cases. This makes sense if the goal is to develop visual algorithms for self-driving cars, or diagnosing cancer in medical images, where failures of the algorithms can have major consequences.
This can be done already if the failure modes of the visual tasks can be identified and are low-dimensional. For example, as mentioned earlier in Section \[sec:sensitivity\], researchers have isolated the hazardous factors which cause stereo algorithms to fail which include specularities and texture-less regions. In such cases it is possible to exploit computer graphics to systematically vary these hazardous factors to determine which algorithms are resistant to them [@DBLP:conf/3dim/ZhangQCHY18]. In short, we can stress-test these algorithms along these specific dimensions.
But for most visual tasks it is very hard to identify a small number of hazard factors which can be isolated and tested further. Instead, we should generalize the notion of adversarial attacks to include non-local structure. A simple possibility is to allow other more complex operations which cause reasonable changes to the image or scene, e.g., by occlusion, or changing the physical properties of the objects being viewed [@DBLP:journals/corr/abs-1711-07183], but without significantly impacting human perception.
Conclusion \[sec:conclusion\]
=============================
This opinion piece has been motivated by discussions about Deep Nets with researchers in many different disciplines. We have tried to strike a balance which acknowledges the immense success of Deep Nets but which does not get carried away by the popular excitement surrounding them. We have often used work from our own group to illustrate some of our main points and apologize to other authors whose work we would have cited in a more scholarly review of the field. Several of our concerns parallel those mentioned in recent critiques of Deep Nets [@DBLP:journals/cacm/Darwiche18; @DBLP:journals/corr/abs-1801-00631].
A few years ago Aude Oliva and the first author co-organized an NSF-sponsored workshop on the Frontiers of Computer Vision (MIT CSAIL, August 21-24 2011). The meeting encouraged frank exchanges of opinion and, in particular, there was enormous disagreement about the potential of Deep Nets for computer vision. But a few years later, as Yann LeCun predicted, everybody is using Deep Nets. Their successes have been extraordinary and have helped vision become much more widely known, dramatically increased the interaction between academia and industry, led to the application of vision techniques to a large range of disciplines, and had many other important consequences. But despite their successes, there remain enormous challenges which must be overcome before we reach the goal of general purpose artificial intelligence and understanding of biological vision systems. In particular, researchers must deal with the combinatorial explosion as they address increasingly complex visual tasks in real world conditions. While Deep Nets, and other big data methods, will surely be part of the solution, we believe that we will also need complementary approaches which can build on their successes and insights.
This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216 and ONR N00014-15-1-2356. We thank Kyle Rawlins and Tal Linzen for providing feedback.
[^1]: For those readers unfamiliar with Monty Python see: <https://youtu.be/Qc7HmhrgTuQ>
---
abstract: 'Fidelity is a figure of merit widely employed in quantum technology in order to quantify similarity between quantum states and, in turn, to assess quantum resources or reconstruction techniques. Fidelities higher than, say, 0.9 or 0.99, are usually considered as a piece of evidence to say that two states are very close in the Hilbert space. On the other hand, on the basis of several examples for qubits and continuous variable systems, we show that such high fidelities may be achieved by pairs of states with considerably different physical properties, including separable and entangled states or classical and nonclassical ones. We conclude that fidelity as a tool to assess quantum resources should be employed with caution, possibly combined with additional constraints restricting the pool of achievable states, or only as a mere summary of a full tomographic reconstruction.'
author:
- Matteo Bina
- Antonio Mandarino
- Stefano Olivares
- 'Matteo G. A. Paris'
title: About the use of fidelity to assess quantum resources
---
Introduction
============
In the last two decades several quantum-enhanced communication protocols and measurement schemes have been suggested and demonstrated. The effective implementation of these schemes crucially relies on the generation and characterization of nonclassical states and operations (including measurements), which represent the two pillars of quantum technology. The assessment of quantum resources amounts to making quantitative statements about the similarity of a quantum state to a target one, or to measuring the effectiveness of a reconstruction technique. For these purposes one needs a figure of merit to compare quantum states. Among the possible distance-like quantities that can be defined in the Hilbert space, a widely adopted measure of closeness of two quantum states is the *Uhlmann Fidelity* [@Uhl], defined as $$\label{fidelity}
F(\rho_1,\rho_2)= \left( {\hbox{Tr}}\sqrt{\sqrt{\rho_1}
\rho_2 \sqrt{\rho_1} } \right)^2$$ which is linked to the Bures distance $D_B(\rho_1,\rho_2)=\sqrt{2[1-\sqrt{F}]}$ between the two states $\rho_1$ and $\rho_2$, and provides bounds to the trace distance [@fuc99] $$1-\sqrt{F(\rho_1,\rho_2)}\leq \frac12 || \rho_1-\rho_2||_1\leq
\sqrt{1-F(\rho_1,\rho_2)}\,.$$ Fidelity takes values in the interval $[0, 1]$, and values above a threshold close to unity, say 0.9 or 0.99, are usually considered very high. Indeed, this implies that the two states are very close in the Hilbert space, as it follows from the above relations between the fidelity and the Bures and trace distances. On the other hand, neighboring states may not share nearly identical physical properties [@edvd; @dodonov], as one may be tempted to conclude. The main purpose of this paper is to show, on the basis of several examples for qubits and continuous variable (CV) systems, that very high values of fidelity may be achieved by pairs of states with considerably different physical properties, including separable and entangled states or classical and nonclassical ones. Furthermore, we provide a quantitative analysis of this discrepancy.
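As a minimal numerical illustration of Eq. (\[fidelity\]) and of the bounds above, the following Python sketch evaluates the fidelity, the Bures distance and the trace distance for a pair of arbitrarily chosen qubit density matrices; it is meant only as a reference implementation of the definitions, not of the analytical formulas used later in the paper.

```python
# Minimal sketch: Uhlmann fidelity, Bures distance and trace-distance bounds.
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho1, rho2):
    s = sqrtm(rho1)
    return np.real(np.trace(sqrtm(s @ rho2 @ s))) ** 2

def bures_distance(rho1, rho2):
    return np.sqrt(2.0 * (1.0 - np.sqrt(fidelity(rho1, rho2))))

def trace_distance(rho1, rho2):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho1 - rho2)))

# Example: |0><0| versus a slightly mixed, slightly rotated qubit state.
rho1 = np.array([[1.0, 0.0], [0.0, 0.0]])
rho2 = np.array([[0.95, 0.1], [0.1, 0.05]])
F = fidelity(rho1, rho2)
# The trace distance indeed lies between 1 - sqrt(F) and sqrt(1 - F).
assert 1 - np.sqrt(F) <= trace_distance(rho1, rho2) <= np.sqrt(1 - F) + 1e-9
```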
In order to illustrate the point, let us start with a very simple example. Suppose you are given a qubit, intended to be prepared in the basis state $|0\rangle$, and guaranteed to have either a fidelity to the target state larger than a threshold, say $F>0.9$, or a given fidelity within a confidence interval, say $F=0.925\pm 0.025$. The situation is depicted in Fig. \[f:1q\], where we show the corresponding regions on the Bloch sphere. As it is apparent from the plots, neighboring states in terms of fidelity are compatible with a relatively large portion of the sphere, which includes states with quite different physical properties, e.g. different values of the spin component in the $z$ direction.
![(Color online) The green volumes represent single qubit states having fidelity larger than the threshold $F>0.9$ (left) or a fidelity $F=0.925\pm 0.025$ (right) to the target state $|0\rangle$.[]{data-label="f:1q"}](f1a_sb1.pdf "fig:"){width="0.49\columnwidth"} ![(Color online) The green volumes represent single qubit states having fidelity larger than the threshold $F>0.9$ (left) or a fidelity $F=0.925\pm 0.025$ (right) to the target state $|0\rangle$.[]{data-label="f:1q"}](f1b_sb2.pdf "fig:"){width="0.49\columnwidth"}
The rest of the paper is devoted to illustrating a few relevant, and “more dramatic”, examples for two-qubit states and for continuous variable ones, where fidelity should be employed with caution to assess quantum resources. Indeed, our examples show that high values of fidelity may be achieved by pairs of states with considerably different physical properties, e.g. states containing quantum resources and states of no value for quantum technology. Our examples are thus especially relevant for certification of quantumness in the presence of noise.
The paper is structured as follows. In the next Section we address two-qubit systems, focusing on both entanglement and discord of nearby Pauli diagonal states. The subsequent Sections are devoted to continuous variable systems: Section \[s:cv1\] addresses certification of quantumness for single-mode squeezed thermal states and their displaced versions, whereas in Section \[s:cv2\] we focus on entanglement and discord of two-mode squeezed thermal states. Section \[s:out\] closes the paper with some concluding remarks.
Two-qubit systems
=================
Let us consider the subset of [*Pauli diagonal*]{} (PD) two-qubit states $$\label{PDstate}
\rho=\frac{1}{4} \left ( {{\mathbbm I}}\otimes {{\mathbbm I}}+\sum_{j=1}^3c_j\sigma_j
\otimes\sigma_j \right )$$ where $c_j$ are real constants, ${{\mathbbm I}}$ is the identity operator and $\sigma_j$ are Pauli matrices. The corresponding eigenvalues are $$\label{eigPD}\begin{split}
\lambda_0=\frac{1}{4} \left ( 1-c_1-c_2-c_3 \right )\\
\lambda_1=\frac{1}{4} \left ( 1-c_1+c_2+c_3 \right )\\
\lambda_2=\frac{1}{4} \left ( 1+c_1-c_2+c_3 \right )\\
\lambda_3=\frac{1}{4} \left ( 1+c_1+c_2-c_3 \right )
\end{split}$$ whose positivity implies constraints on coefficients $c_j$ for $\rho$ to describe a physical state. PD states in Eq. (\[PDstate\]) have maximally mixed marginals (partial traces) $\rho^A=\rho^B={{\mathbbm I}}/2$, $A$ and $B$ denoting the two subsystems. The choice of this subset stems from the fact that an analytic expression of the quantum discord is available [@Luo], so we can compare quantum discord and entanglement of states within the PD class for fixed values of fidelity. The fidelity between two PD states may be expressed in terms of the eigenvalues in Eq. (\[eigPD\]) as follows $${F} \left ( \rho_1,\rho_2 \right )=
\Big ( \sum_{k=0} ^3 \sqrt{\lambda_{k,1} \lambda_{k,2}} \Big )^2,$$ whereas entanglement, quantified by negativity, is given by $${N}(\rho)=-2\sum_i\eta_i(\rho^{\tau_A}),$$ where $\eta_i(\rho^{\tau_A})$ are the negative eigenvalues of the partial transpose $\rho^{\tau_A}$ with respect to the subsystem $A$ [@negativity]. The quantum discord for PD states has been evaluated in [@Luo], and it is given by $${D}(\rho)={I}(\rho)-\frac12 (1-c)\log_2(1-c)-\frac12 (1+c)\log_2(1+c)$$ where ${I}(\rho)=2+\sum_{i=0}^3\lambda_i \log_2\lambda_i$ is the mutual information and the other terms are the result of the maximization of the classical information. The quantity $c$ denotes the maximum $c\equiv\text{max}\{|c_1|,|c_2|,|c_3|\}$.
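These formulas are easy to evaluate numerically. The following sketch builds a PD state from $(c_1,c_2,c_3)$, computes the fidelity from the eigenvalues, and computes the negativity from the partial transpose; the particular values of $c_j$ below are our own illustrative choices, not those used in the figures.

```python
# Minimal sketch for Pauli-diagonal (PD) two-qubit states.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pd_state(c):
    rho = np.eye(4, dtype=complex)
    for cj, sj in zip(c, (sx, sy, sz)):
        rho += cj * np.kron(sj, sj)
    return rho / 4.0

def pd_eigs(c):
    c1, c2, c3 = c
    return 0.25 * np.array([1 - c1 - c2 - c3, 1 - c1 + c2 + c3,
                            1 + c1 - c2 + c3, 1 + c1 + c2 - c3])

def pd_fidelity(ca, cb):
    return np.sum(np.sqrt(pd_eigs(ca) * pd_eigs(cb))) ** 2

def negativity(rho):
    # partial transpose on subsystem A (2x2 blocks), then sum of negative eigenvalues
    pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
    eigs = np.linalg.eigvalsh(pt)
    return -2.0 * np.sum(eigs[eigs < 0])

target = (-0.45, -0.45, -0.45)   # Werner state with c = 0.45 (entangled)
near = (-0.30, -0.30, -0.35)     # a separable PD state chosen by hand
print(pd_fidelity(target, near))         # about 0.99
print(negativity(pd_state(target)))      # about 0.175 (entangled)
print(negativity(pd_state(near)))        # 0 (separable)
```

For this particular pair the fidelity comes out close to $0.99$ even though only the first state is entangled, which is exactly the kind of mismatch discussed below for the Werner target.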
![\[f:f2\] (Color online) (Left panel): The tetrahedron represents the region of all physical PD states, whereas the inner octahedron contains the separable ones. The balloons centered in $c_1=c_2=c_3=-0.45$ (on the right of the panel) contain PD states having fidelity ${F}>0.95$ and $F>0.99$ to the target Werner (entangled) state. The balloons on the left of the panel describe states having fidelity $F>0.95$ and $F>0.99$ to the separable PD state with $c_1=0.3$, $c_2=-0.3$, and $c_3=0.1$. (Right panel): the plot describes PD states with fixed $c_3=-0.45$ and varying $\{c_1,c_2\}$. We show the ovoidal slice containing states having fidelity ${F}>0.95$ to the target Werner state with $c_1=c_2=c_3=-0.45$ and the corresponding rectangular region of entangled states. Contour lines refer to entanglement negativity (gray) and quantum discord (red).](f2a_ent2q.pdf "fig:"){width="0.5\columnwidth"} ![\[f:f2\] (Color online) (Left panel): The tetrahedron represents the region of all physical PD states, whereas the inner octahedron contains the separable ones. The balloons centered in $c_1=c_2=c_3=-0.45$ (on the right of the panel) contain PD states having fidelity ${F}>0.95$ and $F>0.99$ to the target Werner (entangled) state. The balloons on the left of the panel describe states having fidelity $F>0.95$ and $F>0.99$ to the separable PD state with $c_1=0.3$, $c_2=-0.3$, and $c_3=0.1$. (Right panel): the plot describes PD states with fixed $c_3=-0.45$ and varying $\{c_1,c_2\}$. We show the ovoidal slice containing states having fidelity ${F}>0.95$ to the target Werner state with $c_1=c_2=c_3=-0.45$ and the corresponding rectangular region of entangled states. Contour lines refer to entanglement negativity (gray) and quantum discord (red).](f2b_d2q.pdf "fig:"){width="0.47\columnwidth"}
Let us now consider a situation where the target state of, say, a preparation scheme, is a Werner state $$\rho_W=\frac{1-c}{4} {{\mathbbm I}}\otimes {{\mathbbm I}}+c{\vert \Psi^- \rangle \langle \Psi^- \vert}\,,$$ i.e. a PD state with $c_1=c_2=c_3=-c$ and $c\in[0,1]$ and where ${\vert \Psi^- \rangle}=({\vert 01 \rangle}-{\vert 10 \rangle})/\sqrt{2}$ is one of the Bell states. The Werner state $\rho_W$ is entangled for $c>\frac13$ and separable otherwise. In particular, let us choose a target state with $c=0.45$ and address the properties of PD states having fidelity larger than a threshold, say $F>0.95$ or $F>0.99$ to this target. Results are reported in the left panel of Fig. \[f:f2\], where the tetrahedral region is the region of physical two-qubit PD states and the separable states are confined to the inner octahedron. The ovoidal regions (from now on the *balloons*) contain the PD states with fidelity ${F}>0.95$ and $F>0.99$ to our target Werner state. As it is apparent from the plot, both the balloons cross the separability border, thus showing that a “high” value of fidelity to the target should not be used as a benchmark for creation of entanglement, even assuming that the generated state belongs to the class of PD states. The same phenomenon may lead one to waste entanglement, i.e. to erroneously recognize an entangled state as separable on the basis of a high fidelity to a separable state, as it may happen to an initially maximally entangled state driven towards the separability threshold by the environmental noise. As an example, we show in the left panel of Fig. \[f:f2\] the balloons of states with fidelity $F>0.95$ and $F>0.99$ to a separable PD state with $c_1=0.3$, $c_2=-0.3$, and $c_3=0.1$.
In the right panel of Fig. \[f:f2\] we show the “slice” of PD states with $c_3=-0.45$ and fidelity ${F}>0.95$ to the Werner target, together with the corresponding region of entangled states, and the contour lines of entanglement negativity and quantum discord. This plot clearly shows that high values of fidelity are compatible with a large range of variation for both entanglement and discord.
The fact that neighboring states may have quite different physical properties has been recently investigated for quantum optical polarization qubits [@edvd]. In particular, the discord of several two-qubit states has been experimentally determined using partial and full polarization tomography. Although the reconstructed states had high fidelity to depolarized or phase-damped states, their discord has been found to be largely different from the values predicted for these classes of states, so that no reliable estimation procedure other than full tomography may be effectively implemented, thus questioning the use of fidelity as a figure of merit to assess quantum correlations. Indeed, when full tomography is performed, fidelity is used only to summarize the overall quality of the reconstruction [@anto1; @anto2; @nat1; @nat2] and thus also correctly conveys the information obtained about quantum resources.
Single-mode Gaussian States {#s:cv1}
===========================
Here we address the use of fidelity to assess quantumness of single-mode CV states. In particular, in Section \[s:cv1a\] we address nonclassicality of squeezed thermal states, whereas Section \[s:cv1b\] is devoted to the subPoissonian character of their displaced versions.
Squeezed thermal states {#s:cv1a}
-----------------------
Let us now consider single-mode CV systems and start with Gaussian state preparations of the form $$\rho_{s\mu}= S(r)\nu (N)S^\dag(r)$$ i.e. single-mode squeezed thermal states (${{\hbox{STS}_1}}$) with real squeezing, $S(r)=\exp \{ \frac12 r (a^{\dag 2} -
a^2) \} $ and $N$ thermal photons, $\nu (N)= N^{a^\dag a}/(1+N)^{a^\dag
a+1}$. This class of states has zero mean and covariance matrix (CM) given by $$\sigma=\frac{1}{2\mu}
\begin{pmatrix}
1/s& 0\\
0 & s
\end{pmatrix},$$ where $\mu=(2
\sqrt{\det{\sigma}})^{-1} = (2N+1)^{-1}$ is the purity of $\rho_{s\mu}$ and $s=e^{-2r}$ is the squeezing factor. ${{\hbox{STS}_1}}$ are nonclassical, i.e. they show a singular Glauber P-function, when $s<\mu$ or $s>1/\mu$ [@CTLee]. Fidelity between two ${{\hbox{STS}_1}}$ is given by [@twa96; @scu98] $$F_{s\mu}=\frac{1}{\sqrt{\Delta + \delta} - \sqrt{\delta}}$$ where $$\Delta=\det[\sigma_1 + \sigma_2] \qquad
\delta=4\prod_{k=1}^2 \left[\det[\sigma_k]-\frac{1}{4}\right]\,,$$ $\sigma_1$ and $\sigma_2$ being the CM of the two states. In Fig. \[f:f3\] we report the region of classicality together with the balloons of ${{\hbox{STS}_1}}$ having fidelity larger than $F_{s\mu}>0.99$ to three ${{\hbox{STS}_1}}$ chosen as targets (one classical thermal state and two nonclassical thermal squeezed states).
As it is apparent from the plot, the balloons have large overlaps with both the classical and the nonclassical region, such that fidelity cannot be used, for this class of states, to certify the creation of quantum resources. This feature is only partially cured by imposing additional constraints to the set of states under examination [@dodonov]. For example, in the left panel of Fig. \[f:f3\] we show the “stripes” of states that have both a fidelity $F_{s\mu}>0.99$ [*and*]{} a mean photon number $\langle n\rangle$ (i.e. the mean energy of the state) which differs by at most $10\%$ from that of the target. In the right panel we show the regions of states satisfying also the additional constraint of having photon number fluctuations $\langle \Delta n^2\rangle$ within a $10\%$ interval from that of the targets. Overall, we have strong evidence that fidelity should not be used to certify the presence of quantumness, and that this behavior persists even when we add quite stringent constraints to delimit the class of states under investigation. In fact, only by performing the full tomographic reconstruction of the state does one impose a suitable set of constraints to make fidelity a fully meaningful figure of merit [@Jar09]. In this case, as already mentioned for qubits, fidelity represents a summary of the precision achieved by the full tomographic reconstruction.
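These expressions are simple to evaluate numerically. The sketch below implements the CM, the nonclassicality condition and the fidelity formula, and applies them to a nonclassical/classical pair of our own choosing (not one of the targets of Fig. \[f:f3\]); the approximate value quoted in the comment refers to this particular pair only.

```python
# Minimal sketch for single-mode squeezed thermal states (STS_1).
import numpy as np

def cm_sts1(s, mu):
    """Covariance matrix of an STS_1 with squeezing factor s and purity mu."""
    return np.diag([1.0 / s, s]) / (2.0 * mu)

def is_nonclassical(s, mu):
    return s < mu or s > 1.0 / mu

def fidelity_sts1(s1, mu1, s2, mu2):
    sig1, sig2 = cm_sts1(s1, mu1), cm_sts1(s2, mu2)
    Delta = np.linalg.det(sig1 + sig2)
    delta = 4.0 * (np.linalg.det(sig1) - 0.25) * (np.linalg.det(sig2) - 0.25)
    return 1.0 / (np.sqrt(Delta + delta) - np.sqrt(delta))

# A nonclassical state (s=0.6, mu=0.7) and a classical one (s=0.68, mu=0.66):
print(is_nonclassical(0.6, 0.7), is_nonclassical(0.68, 0.66))  # True, False
print(fidelity_sts1(0.6, 0.7, 0.68, 0.66))                     # about 0.996
```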
![\[f:f3\] (Color online) The plots show the region of classicality (the triangular-like green regions) together with the balloons of ${{\hbox{STS}_1}}$ having fidelity larger than $F_{s\mu}>0.99$ to three ${{\hbox{STS}_1}}$ chosen as targets: a classical thermal state with $s=1$ and $\mu=0.9$ and two nonclassical ${{\hbox{STS}_1}}$ with $\mu=0.7$ and $s=0.6$ and $s=1.6$ respectively. In the left panel the stripes of states close to the targets contain states having $F_{s\mu}>0.99$ and mean photon numbers which differ at most $10\%$ from that of the target. In the right panel the states close to the targets satisfy the additional constraints of having number fluctuations within a $10\%$ interval from that of the targets.](f3a_ncl1c.pdf "fig:"){width="0.49\columnwidth"} ![\[f:f3\] (Color online) The plots show the region of classicality (the triangular-like green regions) together with the balloons of ${{\hbox{STS}_1}}$ having fidelity larger than $F_{s\mu}>0.99$ to three ${{\hbox{STS}_1}}$ chosen as targets: a classical thermal state with $s=1$ and $\mu=0.9$ and two nonclassical ${{\hbox{STS}_1}}$ with $\mu=0.7$ and $s=0.6$ and $s=1.6$ respectively. In the left panel the stripes of states close to the targets contain states having $F_{s\mu}>0.99$ and mean photon numbers which differ at most $10\%$ from that of the target. In the right panel the states close to the targets satisfy the additional constraints of having number fluctuations within a $10\%$ interval from that of the targets.](f3b_ncl2c.pdf "fig:"){width="0.49\columnwidth"}
Displaced squeezed thermal states {#s:cv1b}
---------------------------------
When only intensity measurements may be performed, nonclassicality of a single-mode state may be assessed by the Fano Factor [@HP82], which is defined as the ratio of the photon number fluctuations over the mean photon number $R= \langle \Delta n^2 \rangle/\langle n
\rangle$. One has $R=1$ for coherent states, while a smaller value is a signature of nonclassicality since sub-Poissonian statistics cannot be described in classical terms. In order to illustrate the possible drawbacks of fidelity in certifying this form of quantumness, let us consider displaced versions of ${{\hbox{STS}_1}}$ $$\rho_G=D(x)\rho_{s\mu}D^\dag(x),$$ where $D(\alpha)=\exp\{\alpha a^\dag
- \bar\alpha a\}$ is the displacement operator and we choose a real displacement $\alpha=x\in {\mathbbm R}$. The CM is determined by $\rho_{s\mu}$ whereas the displacement changes only the mean values of the canonical operators. The fidelity between two Gaussian states of the form $\rho_G$ is given by [@scu98] $$F_G =
\exp\{-(\mathbf X_1 - \mathbf X_2)^T(\sigma_1+\sigma_2)^{-1}
(\mathbf X_1 - \mathbf X_2)\} F_{s\mu}$$ where $\mathbf X=(x,0)$. In the left panel of Fig. \[f:f4\] we show the region of sub-Poissonianity as a function of the purity, the squeezing factor, and the displacement of states $\rho_G$. We also show the balloons of states with fidelity larger than $F_G>0.97$ to two $\rho_G$ target states: a subPoissonian state corresponding to $\mu=0.9$, $s=1.4$, and $x=0.5$ and a superPoissonian one with $\mu=0.7$, $s=1.2$, and $x=1.5$. Despite the high value of fidelity (notice that fidelity decreases exponentially with the displacement amplitude) both balloons cross the Poissonian border, and the parameters of the states may differ considerably from the targeted ones. In the right panel of Fig. \[f:f4\] we show the subPoissonian region for a fixed value of purity $\mu=0.8$ as a function of squeezing and displacement, together with the balloons of states having fidelity larger than $F_G>0.97$ to a pair of target states: a subPoissonian state with parameters $x=1.5$ and $s=1.5$ and a superPoissonian one with $x=0.8$ and $s=1.0$. We also show the subregions of states having mean photon number and number fluctuations which differ by at most $10\%$ from those of the target. We notice that even restricting attention to states with comparable energy and fluctuations, fidelity is not able to discriminate between states that do and do not possess quantum resources.
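For completeness, the displaced case modifies the previous snippet only through the Gaussian prefactor of $F_G$; a minimal sketch, using the mean-vector convention $\mathbf X=(x,0)$ and the CM parametrization stated above:

```python
# Minimal sketch: fidelity between two displaced squeezed thermal states.
import numpy as np

def fidelity_displaced(s1, mu1, x1, s2, mu2, x2):
    sig1 = np.diag([1.0 / s1, s1]) / (2.0 * mu1)
    sig2 = np.diag([1.0 / s2, s2]) / (2.0 * mu2)
    Delta = np.linalg.det(sig1 + sig2)
    delta = 4.0 * (np.linalg.det(sig1) - 0.25) * (np.linalg.det(sig2) - 0.25)
    f_cm = 1.0 / (np.sqrt(Delta + delta) - np.sqrt(delta))   # F_{s mu}
    dX = np.array([x1 - x2, 0.0])
    return np.exp(-dX @ np.linalg.solve(sig1 + sig2, dX)) * f_cm
```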
 (Left): subPoissonian region for $\rho_G$ states as a function of the purity $\mu$, the squeezing $s$, and the displacement $x$, together with the balloons of states having fidelity larger than $F_G>0.97$ to a nonclassical target with $\mu=0.9$, $s=1.4$, and $x=0.5$ and a classical one with $\mu=0.7$, $s=1.2$, and $x=1.5$. (Right): The subPoissonian region for a fixed value of purity $\mu=0.8$ as a function of squeezing and displacement, together with the balloons of states having fidelity larger than $F_G>0.97$ to the target states having $x=1.5$ and $s=1.5$ (subPoissonian) or $x=0.8$ and $s=1.0$ (superPoissonian). We also show the subregions of states having mean photon number and number fluctuations which differ at most $10\%$ from those of the target.](f4a_fano3d.pdf "fig:"){width=".49\columnwidth"}  (Left): subPoissonian region for $\rho_G$ states as a function of the purity $\mu$, the squeezing $s$, and the displacement $x$, together with the balloons of states having fidelity larger than $F_G>0.97$ to a nonclassical target with $\mu=0.9$, $s=1.4$, and $x=0.5$ and a classical one with $\mu=0.7$, $s=1.2$, and $x=1.5$. (Right): The subPoissonian region for a fixed value of purity $\mu=0.8$ as a function of squeezing and displacement, together with the balloons of states having fidelity larger than $F_G>0.97$ to the target states having $x=1.5$ and $s=1.5$ (subPoissonian) or $x=0.8$ and $s=1.0$ (superPoissonian). We also show the subregions of states having mean photon number and number fluctuations which differ at most $10\%$ from those of the target.](f4b_fano2d.pdf "fig:"){width=".49\columnwidth"}
Two-mode Gaussian States {#s:cv2}
========================
Here we focus on a relevant subclass of two-mode Gaussian states: the so-called two-mode squeezed thermal states (${{\hbox{STS}_2}}$), described by density operators of the form $$\rho_{N\beta\gamma} = S_2(r)\nu(n_1)\otimes\nu(n_2)S_2^\dag(r)$$ where $S_2(r)=\exp\{r(a^\dag b^\dag-a b)\}$ is the two-mode squeezing operator with [[real parameter $r$, and $\nu(n_k)$, $k=1,2$, are thermal states with $n_k$ photons on average]{}]{}. The class of states $\rho_{N\beta\gamma}$ is fully described by three parameters: the total mean photon number $N$, the two-mode squeezing fraction $\beta$ and the single-mode fraction of thermal photons $\gamma$: $$\label{parameter}\begin{split}
&N=\langle a^\dag a + b^\dag b\rangle\\
&\beta=\frac{2 \sinh^2 r}{N}\\
&\gamma=\frac{n_1}{n_1+n_2}.
\end{split}$$ The CM of ${{\hbox{STS}_2}}$ may be written in the block form $$\sigma=\frac{1}{2}\begin{pmatrix}
A\, \mathbb{I}& C\, \sigma_z\\
C\, \sigma_z & B\, \mathbb{I}
\end{pmatrix}$$ with the coefficients parametrized according to (\[parameter\]): $$\begin{split}
A&=1+\frac{2\gamma(1-\beta)N+\beta N (1+N)}{1+\beta N}\\
B&=1+\frac{2(1-\gamma)(1-\beta)N+\beta N (1+N)}{1+\beta N}\\
C&=\frac{(1+N)\sqrt{\beta N(2+\beta N)}}{1+\beta N}.
\end{split}$$ A squeezed thermal state is separable iff $\tilde d_- \geq \frac12$, where $2\sqrt{2}\,\tilde d_\pm
= \sqrt{A^2+B^2+2C^2\pm (A+B)\sqrt{(A-B)^2+4C^2}}$ are the symplectic eigenvalues of the partially transposed CM. Gaussian B-discord, i.e. the difference between the mutual information and the maximum amount of classical information obtainable by [*local Gaussian*]{} measurements on system B, may be analytically evaluated for ${{\hbox{STS}_2}}$ [@gd], leading to $$D(\rho_{N\beta\gamma})=h(B)
- h(d_-)-h(d_+) + h\left ( \frac{A-C^2}{B+\frac12} \right )$$ where $d_\pm$ are the symplectic eigenvalues of $\sigma$ itself and $h(x)=
(x+\frac12) \ln (x+\frac12)
-(x-\frac12) \ln(x-\frac12)$. Finally, the fidelity between two ${{\hbox{STS}_2}}$ is given by [@sorin; @Marian; @Oli12] $${F}_{N\beta\gamma} =
\frac{(\sqrt{{X}}+\sqrt{{X}-1})^2}{\sqrt{\det[\sigma_1+\sigma_2]}}$$ where $$\begin{aligned}
X&=2\sqrt{{E_1}}+2\sqrt{{E_2}}+\frac{1}{2}\,,\notag\\
E_1&=\frac{\det[\Omega\,\sigma_1\,\Omega\,\sigma_2-\tfrac14\,\mathbb{I}]}{\det[\sigma_1+\sigma_2]}\,,\notag\\
E_2&=\frac{\det[\sigma_1+\frac{{{\rm i }}}{2}\Omega]\det[\sigma_2+\frac{{{\rm i }}}{2}\Omega]}{
\det[\sigma_1+\sigma_2]}\,,\notag\end{aligned}$$ $\Omega$ being the $2$-mode symplectic form [@Oli12] $$\Omega=\omega \oplus \omega \qquad \omega=
\left(\begin{array}{cc}0&1\\ -1&0
\end{array}\right)\,. \notag$$
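To make the expressions above concrete, the following is a minimal numerical sketch (illustrative code, not from the paper): it builds the ${{\hbox{STS}_2}}$ CM from $(N,\beta,\gamma)$, performs the PPT separability check via the symplectic spectrum of the partially transposed CM, and evaluates the fidelity formula above for zero first moments. The target parameters of Fig. \[f:f5\] are used as a usage example.

```python
# Illustrative sketch of the STS_2 covariance matrix, the PPT separability
# check and the two-mode Gaussian fidelity defined above (zero first moments).
import numpy as np

OMEGA = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))  # Omega = w (+) w
SZ = np.diag([1.0, -1.0])

def sts2_cm(N, beta, gamma):
    """Covariance matrix of an STS_2 parametrized by (N, beta, gamma)."""
    A = 1 + (2 * gamma * (1 - beta) * N + beta * N * (1 + N)) / (1 + beta * N)
    B = 1 + (2 * (1 - gamma) * (1 - beta) * N + beta * N * (1 + N)) / (1 + beta * N)
    C = (1 + N) * np.sqrt(beta * N * (2 + beta * N)) / (1 + beta * N)
    return 0.5 * np.block([[A * np.eye(2), C * SZ], [C * SZ, B * np.eye(2)]])

def min_ptranspose_symplectic_eig(sigma):
    """Smallest symplectic eigenvalue of the partially transposed CM."""
    pt = np.diag([1.0, 1.0, 1.0, -1.0])     # momentum sign flip on mode B
    sigma_pt = pt @ sigma @ pt
    return np.abs(np.linalg.eigvals(1j * OMEGA @ sigma_pt)).min()

def is_separable(sigma):
    return min_ptranspose_symplectic_eig(sigma) >= 0.5

def fidelity(sigma1, sigma2):
    """Fidelity between two zero-mean two-mode Gaussian states (formula above)."""
    d12 = np.linalg.det(sigma1 + sigma2)
    e1 = (np.linalg.det(OMEGA @ sigma1 @ OMEGA @ sigma2 - 0.25 * np.eye(4)) + 0j) / d12
    e2 = np.linalg.det(sigma1 + 0.5j * OMEGA) * np.linalg.det(sigma2 + 0.5j * OMEGA) / d12
    x = 2 * np.sqrt(e1) + 2 * np.sqrt(e2) + 0.5
    return float((((np.sqrt(x) + np.sqrt(x - 1)) ** 2) / np.sqrt(d12)).real)

s_ent = sts2_cm(2.5, 0.20, 0.5)   # entangled target of Fig. [f:f5]
s_sep = sts2_cm(1.0, 0.13, 0.5)   # separable target of Fig. [f:f5]
print(is_separable(s_ent), is_separable(s_sep), fidelity(s_ent, s_ent))
```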
In the left panel of Fig. \[f:f5\] we show the separability region in terms of the three parameters $N$, $\beta$ and $\gamma$, together with the balloons of states having ${F}_{N\beta\gamma}>0.99$ with respect to two target states: an entangled ${{\hbox{STS}_2}}$ with parameters $N=2.5$, $\beta=0.2$, $\gamma=0.5$ and a separable one with $N=1$, $\beta=0.13$ and $\gamma=0.5$. As is apparent from the plot, both balloons cross the separability border and have a considerable overlap with both regions, thus making fidelity of little use for assessing entanglement in this kind of system.
![(Left): Separability region of ${{\hbox{STS}_2}}$ in terms of the three parameters $N$, $\beta$ and $\gamma$, together with the balloons of states having ${F}_{N\beta\gamma}>0.99$ with respect to two target states: an entangled ${{\hbox{STS}_2}}$ with parameters $N=2.5$, $\beta=0.2$, $\gamma=0.5$ and a separable one with $N=1$, $\beta=0.13$ and $\gamma=0.5$. (Right): the region of states having a fidelity in the range $0.95 < F_{N\beta\gamma}<0.99$ to a two-mode squeezed vacuum, $N=1$ and $\beta=1$. We also show the stripe of states having a mean photon number in the range $0.9<N<1.1$.](f5a_regs.pdf "fig:"){width="0.49\columnwidth"} ![](f5b_targ.pdf "fig:"){width="0.49\columnwidth"}
Another phenomenon arising from benchmarking with fidelity is illustrated in the right panel of Fig. \[f:f5\], where we report the region of states having a fidelity in the range $0.95 < F_{N\beta\gamma}<0.99$ to a [*two-mode squeezed vacuum*]{}, i.e. a maximally entangled state with $N=1$ and $\beta=1$. The emphasized sector corresponds to states that also have a mean photon number differing by no more than $10\%$ from the target, i.e. in the range $0.9<N<1.1$. As a matter of fact, the total photon number $N$ and the squeezing fraction $\beta$ in this region may be considerably different from the targeted ones and, in addition, the states with comparable energy are the least entangled in the region. Finally, in Fig. \[f:f6\] we show the range of variation of Gaussian B-discord compatible with high values of fidelity. In the left panel we consider a non-separable target state with discord $D(\rho_{2,0.2,0.5})=0.22$ and a region of $\text{STS}_2$ states with fidelity $F_{N\beta\gamma}>0.95$. The region of separability (green) is crossed by a non-negligible set of states, and the relative variation of the discord is considerably large, ranging from $0.38$ to $1.88$. In the right panel of Fig. \[f:f6\] we show again the wide range of variation of Gaussian B-discord for a set of $\text{STS}_2$ states with fidelity $0.95<F(\rho_{N\beta\gamma})<0.99$ to a target two-mode squeezed vacuum state with $N=2$. The large spread in the relative discord can only be partially limited by constraining the mean photon number $N$ to within $10\%$ fluctuations. Notice that, also in the case of two modes, it is full Gaussian tomography [@FullCM; @bla12] that imposes a suitable set of constraints making fidelity a meaningful figure of merit summarizing the overall quality of the reconstruction.
![(Left): Contour lines of Gaussian B-discord in the region of $\text{STS}_2$ having fidelity $F_{N\beta\gamma}>0.95$ to an entangled target state with $N=2$, $\beta=0.2$ and $\gamma=0.5$. The relative discord, rescaled to that of the target state ($D(\rho_{2,0.2,0.5})=0.22$), ranges from $0.38$ to $1.88$. (Right): Variations of the relative Gaussian B-discord in a region of $\text{STS}_2$ with fidelity $0.95 < F_{N\beta\gamma} < 0.99$ to a two-mode squeezed vacuum state ($N=2$ and $\beta=1$). The constrained region of states having $10\%$ energy fluctuations around $N=2$ is highlighted.](f6a_dcon.pdf "fig:"){width="0.49\columnwidth"} ![](f6b_dtwb.pdf "fig:"){width="0.49\columnwidth"}
Conclusions {#s:out}
===========
In conclusion, we have shown by examples that being close in the Hilbert space may not imply being close in terms of quantum resources. In particular, we have provided quantitative examples for qubits and CV systems showing that pairs of states with high fidelity may include separable and entangled states, classical and nonclassical ones, and states with very different values of quantum or Gaussian discord.
Our results make it apparent that, in view of its wide use in quantum technology, fidelity is a quantity that should be employed with caution to assess quantum resources. In some cases it may be used in conjunction with additional constraints, whereas in the general situation it should mostly be used as an overall figure of merit, summarizing the findings of a full tomographic reconstruction.
This work has been supported by the MIUR project FIRB-LiCHIS-RBFR10YQ3H. MGAP thanks Claudia Benedetti for useful discussions.
[99]{}
A. Uhlmann, Rep. Math. Phys. [**9**]{}, 273 (1976).
C. A. Fuchs, J. van de Graaf, IEEE Trans. Inf. Theory [**45**]{}, 1216 (1999).
C. Benedetti, A. P. Shurupov, M. G. A. Paris, G. Brida, M. Genovese, Phys. Rev. A [**87**]{}, 052136 (2013).
V. Dodonov, J. Phys. A [**45**]{}, 032002 (2012).
S. Luo, Phys. Rev. A [**77**]{}, 042303 (2008).
A. Miranowicz and A. Grudka, J. Opt. B [**6**]{}, 542 (2004).
C. F. Roos, G. P. T. Lancaster, M. Riebe, H. Häffner, W. Hänsel, S. Gulde, C. Becher, J. Eschner, F. Schmidt-Kaler and R. Blatt, Phys. Rev. Lett. [**92**]{}, 220402 (2004).
J. Fulconis, O. Alibart, J. L. O’Brien, W. J. Wadsworth and J. G. Rarity, Phys. Rev. Lett. [**99**]{}, 120501 (2007).
D. Riste, M. Dukalski, C. A. Watson, G. de Lange, M. J. Tiggelman, Ya. M. Blanter, K. W. Lehnert, R. N. Schouten, L. DiCarlo, Nature [**502**]{}, 350 (2013).
L. Steffen, Y. Salathe, M. Oppliger, P. Kurpiers, M. Baur, C. Lang, C. Eichler, G. Puebla-Hellmann, A. Fedorov, A. Wallraff, Nature [**500**]{}, 319 (2013).
C. T. Lee, Phys. Rev. A [**44**]{}, R2775 (1991).
J. Twamley, J. Phys. A [**29**]{}, 3723 (1996).
H. Scutaru, J. Phys. A [**31**]{}, 3659 (1998).
J. Řeháček, S. Olivares, D. Mogilevtsev, Z. Hradil, M. G. A. Paris, S. Fornaro, V. D’Auria, A. Porzio, S. Solimeno, Phys. Rev. A [**79**]{}, 032111 (2009).
H. Paul, Rev. Mod. Phys. [**54**]{}, 1061 (1982).
P. Giorda, M. G. A. Paris, Phys. Rev. Lett. [**105**]{}, 020503 (2010).
Gh.-S. Paraoanu, H. Scutaru, Phys. Rev. A [**61**]{}, 022306 (2000).
P. Marian, T. A. Marian, Phys. Rev. A [**86**]{}, 022340 (2012).
S. Olivares, Eur. Phys. J. ST [**203**]{}, 3 (2012).
V. D’Auria, S. Fornaro, A. Porzio, S. Solimeno, S. Olivares, M. G. A. Paris, Phys. Rev. Lett. [**102**]{}, 020502 (2009); D. Buono, G. Nocerino, V. D’Auria, A. Porzio, S. Olivares, M. G. A. Paris, J. Opt. Soc. Am. B [**27**]{}, 110 (2010).
R. Blandino, M. G. Genoni, J. Etesse, M. Barbieri, M. G. A. Paris, P. Grangier, R. Tualle-Brouri, Phys. Rev. Lett. [**109**]{}, 180402 (2012).
|
---
abstract: 'The systematic uncertainty on the $W$ mass and width measurement resulting from the imperfect knowledge of electroweak radiative corrections is discussed. The intrinsic uncertainty in the 4-$f$ generator used by the DELPHI Collaboration is studied following the guidelines of the authors of [YFSWW]{}, on which its radiative corrections part is based. The full DELPHI simulation, reconstruction and analysis chain is used for the uncertainty assessment. A comparison with the other available 4-$f$ calculation implementing DPA $\mathcal{O}(\alpha)$ corrections, [RacoonWW]{}, is also presented. The uncertainty on the $W$ mass is found to be below 10 MeV for all the $WW$ decay channels used in the measurement.'
author:
- |
Fabio Cossutti\
[*INFN, Sezione di Trieste, I-34127 Trieste, Italy*]{}
title: 'ELECTROWEAK CORRECTIONS UNCERTAINTY ON THE $\mathbf{W}$ MASS MEASUREMENT AT LEP'
date: 9 May 2005
---
Introduction
============
Precision tests of the Standard Model in the $W$ sector have been one of the main issues of the LEP2 physics program. In this context the measurement of the $W$ mass is one of the most interesting tests. Due to the high precision which is experimentally achievable, about 0.05% in the LEP combination, it is important to have a robust estimate of all the possible systematic uncertainties.
Electroweak radiative corrections to $WW$ events, which are used for the $W$ mass and width measurements, and more generally to 4-$f$ events, have been an important issue since the beginning of LEP2. After the LEP2 Workshop of 1995 [@lep2] it became clear that a simple radiative-correction approach based on the Improved Born Approximation (IBA) is not sufficient to reach a theoretical precision smaller than the foreseen experimental one in precision $W$ physics measurements.
At the 2000 LEP2 Monte Carlo workshop [@lep2mcws], calculations implementing full $\mathcal{O}(\alpha)$ electroweak radiative corrections for 4-$f$ events in the so-called Double Pole Approximation (DPA) [@dpa0; @dpa1; @dpa2], i.e. reliable around the double resonant $W$ pole, became available as the result of an effort of the theory community. Two Monte Carlo generators implement these calculations, [YFSWW]{} [@Yfsww] and [RacoonWW]{} [@RacoonWW].
Initially, the studies of the theoretical precision of these calculations were devoted to the inclusive $WW$ cross section, showing a satisfactory 0.4% agreement between the two codes. Studies of differential distributions at generator level have been shown both by the theoretical groups and by others (for instance [@fabio]), but a full attempt at assessing the theoretical precision on $W$-related observables was presented only later, for the $W$ mass [@wmasssys] and for the TGC [@tgcsys].
In the TGC-related study the possible sources of uncertainty in both generators are considered and the calculations are compared to each other. Moreover, a parameterization of detector effects (based on the ALEPH simulation and analysis) is used to mimic the dominant effects beyond the pure electroweak generator.
The $W$ mass study is a pure 4-$f + \gamma$ generator-level one, based on a pseudo-observable (the $\mu\nu$ invariant mass with some photon recombination) which is not directly comparable with the real observable measured by the experiments. It is based on an internal precision study of [YFSWW]{} plus a comparison with [RacoonWW]{}.
These studies provide a complete discussion of all the basic ingredients of the systematic uncertainty related to electroweak corrections, but the authors themselves recognize that for the $W$ mass a study at full analysis level is needed for a complete final determination to be used by LEP experiments.
The purpose of the present work is to use the above mentioned studies as a guideline to perform a complete estimation of this systematic uncertainty for the $W$ mass analysis in the frame of the full DELPHI event and detector simulation, reconstruction and analysis chain.
In section \[sec:delphi4f\] the study of the intrinsic uncertainty of the DELPHI 4-$f$ generator [@delphi4f], based on [YFSWW]{} as far as radiative corrections are concerned, is discussed. In section \[sec:racoonww\] the comparison with [RacoonWW]{} is presented. Section \[sec:results\] shows the global results and conclusions on the systematic uncertainty on the $W$ mass and width.
Although the target of the present study is the assessment of the uncertainty on the $W$ mass, the techniques and the Monte Carlo samples presented can be used for similar studies on other observables, in particular the TGC.
The uncertainty of the DELPHI 4-$f$ generator {#sec:delphi4f}
=============================================
Description of the setups and samples {#sec:delphisetup}
-------------------------------------
The 4-$f$ generator used for this study is the standard DELPHI one, based on [WPHACT]{} [@wphact] with the YFS-exponentiated ISR from [KoralW]{} [@KW1.42] and with additional radiative corrections implemented for $WW$ like events through [YFSWW]{}, using a reweighting technique as in the [KandY]{} “Monte Carlo tandem” [@KandY]: IBA based events are reweighted in order to reproduce with good approximation the result of the DPA calculation. For simplicity it will be referred to as [WandY]{}. For single $W$ events and non $WW$-like final states an IBA approach is adopted, using the [QEDPS]{} parton shower generator [@qedps] in order to describe ISR, suitably adapted in the energy scale used for the radiation.
The version used for this study, as well as for the final DELPHI $W$ mass analysis (internal DELPHI version 2.4), differs from [@delphi4f] in the treatment of the final state radiation (FSR) from leptons, which is implemented with [PHOTOS]{} [@photos]: [PHOTOS]{} version 2.5 is used, implementing non-leading-logarithm (NLL) corrections which bring it quite close to the full matrix element calculation [@photosnll].
The study has been performed at the centre of mass energy of $\sqrt{s}
= 188.6$ GeV, corresponding to the 1998 data sample. It has been chosen since it represents the highest single-energy data statistics available.
The wide range of sources of systematic uncertainties and possible studies discussed in [@wmasssys] implies the need for several distinct Monte Carlo samples. Several sources can in fact be studied by simple event reweighting, applying as event weight the ratio of the modified matrix element squared and the standard one, where the modifications are related to the uncertainty source to be studied. All the possible weights have been implemented in the production of the standard $WW$-like 4-$f$ samples.
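As an illustration of this reweighting technique, a minimal sketch is given below (hypothetical arrays and toy numbers, not the DELPHI production code): each event of the standard sample is reused with a weight given by the ratio of the modified to the standard squared matrix element.

```python
# Minimal sketch of matrix-element reweighting: events generated with the
# standard calculation are reused for a modified one via per-event weights
# w = |M_mod|^2 / |M_std|^2. All inputs below are toy placeholders.
import numpy as np

def reweight(me2_standard, me2_modified):
    """Per-event weights turning the standard sample into the modified one."""
    return np.asarray(me2_modified) / np.asarray(me2_standard)

def weighted_histogram(observable, weights, bins):
    """Distribution of a reconstructed observable under the modified calculation."""
    return np.histogram(observable, bins=bins, weights=weights)

rng = np.random.default_rng(0)
m_rec = rng.normal(80.4, 2.1, size=10_000)         # toy reconstructed masses [GeV]
me2_std = rng.uniform(0.9, 1.1, size=10_000)       # toy |M|^2, standard setup
me2_mod = me2_std * (1.0 + 0.01 * (m_rec - 80.4))  # toy modified |M|^2
counts, edges = weighted_histogram(m_rec, reweight(me2_std, me2_mod), bins=50)
```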
Some studies cannot be performed by event reweighting and require dedicated samples. In the standard [WandY]{} the Leading Pole Approximation (LPA) expansion around the double resonant pole is made using the approach that in [YFSWW]{} is called the $\mbox{LPA}_A$ scheme [@lpaa]; the other available approach, the so called $\mbox{LPA}_B$ scheme [@lpab], must be generated directly with [YFSWW]{}. Another case is the possible change of order in leptonic FSR: this would require distinct samples with $\mathcal{O}(\alpha)$ and $\mathcal{O}$$(\alpha^2)$ matrix elements.
Furthermore, the need to compare [WandY]{} to [RacoonWW]{}, which has some remarkable differences with respect to the normal DELPHI code, has suggested producing a dedicated [WandY]{} sample suitably modified to be as close as possible to [RacoonWW]{} itself. Since [RacoonWW]{} cannot directly produce samples with several final states at the same time, and the statistical precision needed for a meaningful comparison ($\Delta m_W(\mbox{\tt WandY - RacoonWW}) \simeq \mathcal{O}(5 \, \mbox{MeV})$) requires about 1 million events per channel to be produced, two final states have been chosen as representatives of the fully hadronic and semileptonic channels for these special event samples.
In order to minimize as much as possible the 4-$f$ background contamination to CC03 diagrams, CC11 final states have been selected; the 4-$f$ background effect is better studied in the standard [WandY]{} sample, with massive kinematics and dedicated radiative corrections not present in [RacoonWW]{}, and where inter-channel migration effects, in which the 4-$f$ background can also play a role, can be studied. For the fully hadronic channel the $udsc$ final state has been chosen, and for the semileptonic channel $ud\mu\nu$ has been preferred due to the presumably higher sensitivity to FSR corrections: photons are likely to be seen, while in final states with electrons most of them are merged in the calorimetric shower of the electron itself, and in taus they are generally merged in the jet of particles coming from the decay, which plays a dominant role and makes all the studies more complex.
In order to be directly comparable with [RacoonWW]{}, these dedicated samples have been produced with the following modifications (compared to the standard settings):
- diagonal CKM matrix;
- fixed $W$ and $Z$ widths;
- $\mathcal{O}(\alpha)$ final state radiation from leptons with [PHOTOS]{} version 2.5; lacking higher-order FSR, it is closer to [RacoonWW]{} than the original version;
- no standard Coulomb correction; the Khoze-Chapovsky ansatz Coulomb correction [@KCansatz] is implemented through reweighting.
Since in the normal production the standard Coulomb correction is already included, the reweighting would only allow studying the difference between this correction and the approximated version of the full non-factorizable $\mathcal{O}(\alpha)$ correction, the so-called Coulomb correction in the Khoze-Chapovsky ansatz. In order to study the net $\mathcal{O}(\alpha)$ correction effect with respect to the tree level (known to be significantly smaller than the previously mentioned difference), no Coulomb correction is implemented in the generation of the special samples.
The main concern about possible systematic differences between the results from the dedicated samples and the standard ones is linked to the treatment of the propagator widths. A test has been performed with a small (100k events) dedicated $ud\mu\nu$ sample produced with the above modifications but with the $W$ and $Z$ widths kept running. The $W$ mass difference with respect to the main $ud\mu\nu$ sample was: $$\begin{aligned}
\Delta(\mbox{running} \, \Gamma_W - \mbox{fixed} \, \Gamma_W) & = & -28 \pm 16
\, \mbox{MeV}\end{aligned}$$ well compatible with the known simple shift of $-27$ MeV of the mass value when moving from the fixed to the running width definition [@lep2; @runwidth]. This known shift has been verified at generator level with a precision of about 2 MeV.
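As a side remark, the magnitude of this shift is consistent with the standard relation between the two mass definitions, $|\Delta m_W| \simeq \Gamma_W^2/(2\, m_W)$; a quick numerical check (the nominal values below are used purely for illustration):

```python
# Quick check of the fixed- vs running-width mass-definition shift,
# assuming |Delta m| ~ Gamma_W^2 / (2 m_W). Nominal values are illustrative.
m_w, gamma_w = 80.4, 2.085          # GeV
print(f"{gamma_w**2 / (2.0 * m_w) * 1e3:.0f} MeV")   # about 27 MeV
```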
The [WandY]{} code has been extensively compared to [YFSWW]{} (see [@delphi4f]), and for CC03 events it has been shown to be equivalent to [KandY]{}. Anyway, as a further consistency cross check, in order to allow the generalization of the results of this study, a dedicated [YFSWW]{} $udsc$ sample using $\mbox{LPA}_A$ scheme has been produced at pure “4-$f + n~\gamma$” level (including FSR from quarks) to compare with a similar [WandY]{} sample and with [RacoonWW]{} at a corresponding level. In appendix \[sec:app1\] the input parameters set for [YFSWW]{}, equivalent to what used in [WandY]{}, is given.
In the cross check only the CC03 part of [WandY]{} has been used, to be consistent with [YFSWW]{}. The total cross sections are found to be in agreement at the $(0.03 \pm 0.06)\,\%$ level. In the event analysis, photons forming an angle with the beam axis smaller than 2 degrees are discarded, and those with a larger angle are recombined with the charged fermion with which they form the smallest invariant mass if their energy is below 300 MeV or if this mass is below 5 GeV. Several observables have been checked, among which the most interesting ones for this study are invariant mass distributions. They have been fitted using a fixed-width Breit-Wigner-like function: $$\begin{aligned}
BW(s) & = & \frac{P_3~s}{(s-(P_1+80.4)^2)^2+(P_1+80.4)^2P_2^2}
\label{eq:bw}\end{aligned}$$ where the parameters $P_1$ and $P_2$ are the (shifted) $W$ mass and width ($P_1$ actually represents the shift of the $W$ mass with respect to 80.4 GeV/c$^2$). The absolute value obtained in the fit depends on the fit function form and it is not particularly relevant. What matters for this check is the level of agreement between different codes when using the same analysis and fit procedure.
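For concreteness, here is a minimal sketch (toy data, not the actual analysis code) of such a fit of an invariant-mass distribution with the Breit-Wigner form above, using a generic least-squares fitter:

```python
# Sketch of the fixed-width Breit-Wigner fit described above, on toy data.
# P1 is the W mass shift w.r.t. 80.4 GeV/c^2, P2 the width, P3 a normalization.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import cauchy

def bw(s, p1, p2, p3):
    m2 = (p1 + 80.4) ** 2
    return p3 * s / ((s - m2) ** 2 + m2 * p2 ** 2)

# Toy invariant-mass sample (GeV/c^2); in the real check the inputs are the
# generator-level di-fermion masses after the photon recombination above.
masses = cauchy.rvs(loc=80.35, scale=1.0, size=200_000, random_state=1)
masses = masses[(masses > 70.0) & (masses < 90.0)]
counts, edges = np.histogram(masses, bins=200)
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(bw, centers ** 2, counts, p0=[0.0, 2.0, 4.0 * counts.max()])
print(f"P1 = {popt[0] * 1e3:+.0f} MeV/c^2, P2 = {popt[1]:.2f} GeV")
```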
Fig. \[fig1\] shows the result of the fits on the average of the $ud$ and $sc$ invariant masses. The agreement both in the mass and in the width is satisfactory. An approach closer to the real analysis is to look at the average of the masses from the pairing in which the difference of the di-fermion masses is smallest (a criterion inspired by the equal masses constraint used in constrained fits); the result is shown in fig. \[fig2\], and also here the agreement is good. An observable that is very interesting, as will be seen in the comparison with [RacoonWW]{}, is the invariant mass rescaled by the ratio of the beam energy and di-fermion energy: it is the simplest way to mimic at pure generator level the energy-momentum conservation which is usually imposed in constrained fits and which is responsible for the sensitivity of the results to photon radiation, ISR in particular. Differences in the radiation structure are likely to cause visible effects in this kind of mass distributions, even if the previous ones are in good agreement. In fig. \[fig3\] the average of the invariant masses computed as in fig. \[fig2\] but rescaled by the ratio $E_{beam}/E_{f\bar{f}}$ is shown: also in this case, despite the sizeable effect of the rescaling on the fitted parameters compared to the previous fits, the agreement is very satisfactory.
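A minimal sketch (hypothetical four-momenta, not the analysis code) of the pairing criterion and of the beam-energy rescaling just described:

```python
# Pairing by smallest di-fermion mass difference and rescaling of each
# invariant mass by E_beam / E_ff, as described above. Toy four-momenta.
import numpy as np

E_BEAM = 188.6 / 2.0   # GeV

def inv_mass(pa, pb):
    e, px, py, pz = np.add(pa, pb)
    return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

def paired_rescaled_mass(p1, p2, p3, p4):
    pairings = [((p1, p2), (p3, p4)), ((p1, p3), (p2, p4)), ((p1, p4), (p2, p3))]
    best = min(pairings, key=lambda pr: abs(inv_mass(*pr[0]) - inv_mass(*pr[1])))
    rescaled = []
    for pa, pb in best:
        e_ff = pa[0] + pb[0]
        rescaled.append(inv_mass(pa, pb) * E_BEAM / e_ff)
    return 0.5 * sum(rescaled)
```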
This check proves that the results based on [WandY]{} can be considered valid for similar analysis using [YFSWW]{} (possibly except for specific non CC03 diagrams related features).
Technique of the uncertainty study {#sec:systechnique}
----------------------------------
The systematic uncertainty on the $W$ mass and width measurement due to the electroweak radiative corrections is the effect of the approximations and of the missing terms in the theoretical calculation used for the analysis. Its exact knowledge would imply the full computation of the missing corrections. The evaluation of the systematic uncertainty means estimating the order of magnitude of the effect of these not yet computed terms on the analysis.
This goal is practically achieved by splitting the calculations into different parts (ISR, FSR, etc.), whose limited knowledge introduces a source of uncertainty in the electroweak radiative corrections as implemented in [WandY]{}. The size of the uncertainty from each of these sources can be estimated by repeating the full $W$ mass (and width) analysis with changes in the part of the radiative corrections related to this source, whose effect should reasonably be of the same order of magnitude as (or bigger than) the missing terms, and comparing with the standard calculation. This study can be performed both on the dedicated high statistics samples and on the standard ones.
The purely numerical precision from the fit algorithm is 0.1 MeV for the mass value and 0.3 MeV for the mass error. On the width, due to the very slow variation of the likelihood curve around the minimum, the numerical accuracy on the fit result is about 1 MeV.
As already mentioned in the previous section, for several sources of uncertainty it is possible to use a reweighting technique, which allows the same event sample to be reused for several studies, minimizing the simulation needed. When using the reweighting technique, the statistical error on the difference between the results of the fits on the standard and the modified sample has to take into account the correlation existing between the samples: the same events are used, simply with a different weight in the fit. This correlation allows a strong reduction of the error on the difference itself, with respect to comparisons of statistically uncorrelated samples.
In order to take the correlation into account, the total sample for one channel has been divided into several subsamples, and the difference has been computed for each subsample. The RMS of the distribution of subsample differences, divided by the square root of the number of subsamples, is an estimate of the uncertainty which naturally includes the correlation between the original and reweighted samples. This way of computing the errors has been cross checked for the mass (where numerical fluctuations are generally negligible compared to the statistical ones) with the “jackknife” method [@jackknife], removing one subsample at a time, and very good agreement in the error estimate has been found.
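A minimal sketch (toy numbers) of this error estimate and of the jackknife cross-check on the per-subsample mass differences:

```python
# Subsample-based error estimate described above: the (standard - reweighted)
# mass difference is computed per subsample; the spread of these differences
# estimates the error while keeping the full correlation between the samples.
import numpy as np

def subsample_error(differences):
    """RMS of per-subsample differences divided by sqrt(n_subsamples)."""
    d = np.asarray(differences, dtype=float)
    return d.std(ddof=1) / np.sqrt(len(d))

def jackknife_error(differences):
    """Cross-check: spread of the leave-one-out means of the same differences."""
    d = np.asarray(differences, dtype=float)
    n = len(d)
    loo = np.array([np.delete(d, i).mean() for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

diffs = np.random.default_rng(2).normal(-0.7, 0.5, size=20)  # toy MeV shifts
print(subsample_error(diffs), jackknife_error(diffs))        # the two agree
```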
The study has been performed only on 4-$f$ $WW$-like events, omitting all the remaining background processes. The rate and nature of the total selected events which are discarded in this way strongly depends on the channel [@delphiwmass]:
: $\simeq$ 5%
: $<$ 1%
: $\simeq$ 9%
: $\simeq$ 24%
For semileptonic events they are both $q\bar{q'}ll$ and $q\bar{q}\gamma$, with the relative rate depending on the channel, while for fully hadronic events practically only the latter class of events contributes and is not considered. Other processes give a negligible contribution anyway. The uncertainty from the radiative corrections on these events is taken into account in the uncertainty on the background.
Analysis of the sources of systematic uncertainties {#sec:syssources}
---------------------------------------------------
Following the approach of ref. [@wmasssys], several distinct categories of uncertainty sources common to all $WW$ channels can be identified, corresponding to different parts of the electroweak corrections:
- $WW$ production: initial state radiation (ISR);
- $W$ decay: final state radiation (FSR);
- Non-factorizable QED interference (NF) $\mathcal{O}(\alpha)$ corrections;
- Ambiguities in LPA definition: non leading factorizable (NL) $\mathcal{O}(\alpha)$ corrections.
Moreover, due to the importance of the single $W$ diagrams in the semileptonic electron channel and the relatively sizeable uncertainty on the radiative corrections on them, a dedicated study has been performed for semileptonic channels.
The uncertainty for each of the categories is studied by testing the effect of activating/deactivating or modifying the corresponding corrections, in order to have an estimate of the potential effect of the approximations used and of the non-calculated missing terms.
Tables \[tab:specmw\] and \[tab:specgw\] show the results of the studies for $m_W$ and $\Gamma_W$, respectively, on the dedicated samples, while tables \[tab:stdmw\] and \[tab:stdgw\] show the results on the standard samples.
[|l|c|c|]{}\
Numerical test & $ud\mu\nu$ & $udsc$\
\
Best - IBA & $-10.6 \pm 0.7$ & $-10.1 \pm 1.0$\
\
Best - $\mathcal{O}$$(\alpha^2)$ & $< -0.1$ & $< -0.1$\
Best - $\mathcal{O}(\alpha)$ & $-0.7 \pm 0.1$ & $-0.3 \pm 0.1$\
\
Best - LL FSR & $< -0.1$ & -\
\
Best - no KC Coulomb & $-0.7 \pm 0.1$ & $-1.9 \pm 1.0$\
\
Best - EW scheme B & $0.1 \pm 0.1$ & $< 0.1$\
Best - no NL ($\mbox{LPA}_A$) & $-9.9 \pm 0.7$ & $-8.2 \pm 1.0$\
NL $\Delta (\mbox{no LPA}_A - \mbox{no LPA}_B)$ & $0.0 \pm 1.1$ & $1.3 \pm 1.0$\
[|l|c|c|]{}\
Numerical test & $ud\mu\nu$ & $udsc$\
\
Best - IBA & $-9.4 \pm 1.4$ & $-17.0 \pm 1.0$\
\
Best - $\mathcal{O}$$(\alpha^2)$ & $< -0.1$ & $< -0.1$\
Best - $\mathcal{O}(\alpha)$ & $-1.0 \pm 0.1$ & $-0.7 \pm 0.1$\
\
Best - LL FSR & $-0.5 \pm 0.1$ & -\
\
Best - no KC Coulomb & $1.6 \pm 0.1$ & $-0.4 \pm 0.1$\
\
Best - EW scheme B & $-0.1 \pm 0.1$ & $0.1 \pm 0.1$\
Best - no NL ($\mbox{LPA}_A$) & $-11.1 \pm 1.4$ & $-16.6 \pm 1.0$\
NL $\Delta (\mbox{no LPA}_A - \mbox{no LPA}_B)$ & $3.9 \pm 2.8$ & $-1.6 \pm 4.0$\
### $WW$ production: initial state radiation
ISR plays a key role in the $W$ mass analysis since it is one of the main sources of the bias on the fit result with respect to the true value, due to the energy-momentum conservation constraint used in the kinematically constrained fits. The ISR is computed in the YFS exponentiation approach, using a leading logarithm (LL) $\mathcal{O}(\alpha^3)$ matrix element.
The difference between the best result, implementing the $\mathcal{O}(\alpha^3)$ ISR matrix element, and the $\mathcal{O}(\alpha^2)$ one gives an order of magnitude of the effect of the missing higher orders in the matrix element, i.e. of using a wrong description of events with more than three hard photons or more than one photon with high $p_t$. As can be seen from the tables, this effect is below the fit sensitivity for all the channels.
The difference between the best result and the $\mathcal{O}(\alpha)$ includes the previous study, and can be used for estimating an upper limit of the effect of the missing non leading logarithm (NLL) terms at $\mathcal{O}$$(\alpha^2)$, which should be smaller than the LL component removed. From the tables it is seen that the effect is below 1 MeV both for the mass and the width in all the channels.
Taking into account also the study performed in [@wmasssys], the ISR related uncertainty can be conservatively estimated at 1 MeV for the mass and 2 MeV on the width.
### $W$ decay: final state radiation
The FSR description and its uncertainty are tightly linked to the final state considered. QED FSR from quarks is embedded in the parton shower describing the first phase of the hadronization process. It is therefore essentially impossible to separate it from the rest of the hadronization process, and the related uncertainty is considered to be included in the jet- and fragmentation-related ones.
FSR from leptons is described by [PHOTOS]{}. The difference between the best result, based on the new NLL treatment, and the previous LL one can give an estimate of the effect of the missing part of the $\mathcal{O}(\alpha)$ FSR correction. It depends on the semileptonic channel, but it is always within 1 MeV.
In [@wmasssys] the effect of the missing higher orders beyond $\mathcal{O}$$(\alpha^2)$ has been found to be negligible at generator level. Since a full study of this uncertainty would require a high statistics dedicated simulation, and simple perturbative QED considerations suggest that the size of the effect should not exceed the size of the previous one, conservatively the previous error can be doubled to take into account also this component of the uncertainty.
[|l|c|c|c|c|c|]{}\
------------------------------------------------------------------------
Numerical test & $q\bar{q'}e\nu$ & $q\bar{q'}\mu\nu$ & $q\bar{q'}\tau\nu$ & $q\bar{q'}l\nu$ & $q\bar{q'}Q\bar{Q'}$\
\
Best - IBA & $2.1 \pm 2.9$ & $6.3 \pm 2.0$ & $1.6 \pm 3.4$ & $4.0 \pm 1.6$ & $5.6 \pm 1.0$\
\
Best - $\mathcal{O}$$(\alpha^2)$ & $< -0.1$ & $< -0.1$ & $< -0.1$ & $<
-0.1$ & $< -0.1$\
Best - $\mathcal{O}(\alpha)$ & $-0.8 \pm 0.1$ & $-0.6 \pm 0.1$ & $-0.9 \pm 0.1$ & $-0.8 \pm 0.1$ & $-0.3 \pm 0.1$\
\
Best - LL FSR & $< -0.1$ & $< -0.1$ & $-0.6 \pm
0.1$ & $-0.2 \pm 0.1$ & -\
\
Best - no KC Coulomb & $16.5 \pm 0.2$ & $15.6 \pm 0.1$ & $17.6 \pm 0.2$ & $16.3 \pm 0.1$ & $13.3 \pm 0.1$\
\
Best - EW scheme B & $0.2 \pm 0.1$ & $0.1 \pm 0.1$ & $0.1 \pm 0.1$ & $0.1 \pm 0.1$ & $0.1 \pm 0.1$\
Best - no NL ($\mbox{LPA}_A$) & $-14.4 \pm 2.9$ & $-9.6 \pm 2.0$ & $-16.1 \pm 3.4$ & $-12.3 \pm 1.6$ & $-7.7 \pm 1.0$\
[|l|c|c|c|c|c|]{}\
------------------------------------------------------------------------
Numerical test & $q\bar{q'}e\nu$ & $q\bar{q'}\mu\nu$ & $q\bar{q'}\tau\nu$ & $q\bar{q'}l\nu$ & $q\bar{q'}Q\bar{Q'}$\
\
Best - IBA & $-16.3 \pm 7.7$ & $-17.7 \pm 5.3$ & $-23.0 \pm 7.5$ & $-18.8 \pm 3.7$ & $-4.3 \pm 1.0$\
\
Best - $\mathcal{O}$$(\alpha^2)$ & $< -0.1$ & $< -0.1$ & $< -0.1$ & $< -0.1$ & $< -0.1$\
Best - $\mathcal{O}(\alpha)$ & $-1.0 \pm 0.1$ & $-1.0 \pm 0.1$ & $-1.4 \pm 0.1$ & $-1.1 \pm 0.1$ & $-0.8 \pm 0.1$\
\
Best - LL FSR & $-0.3 \pm 0.1$ & $-0.4 \pm 0.1$ & $-0.9 \pm
0.2$ & $-0.4 \pm 0.1$ & -\
\
Best - no KC Coulomb & $-9.8 \pm 0.3$ & $-10.3 \pm 0.3$ & $-10.2 \pm 0.4$ & $-9.7 \pm 0.2$ & $2.9 \pm 0.2$\
\
Best - EW scheme B & $-0.1 \pm 0.1$ & $-0.1 \pm 0.1$ & $0.0 \pm 0.1$ & $-0.1 \pm 1.1$ & $0.1 \pm 0.1$\
Best - no NL ($\mbox{LPA}_A$) & $-6.8 \pm 7.7$ & $-7.9 \pm 5.3$ & $-14.0 \pm 7.5$ & $-8.6 \pm 3.7$ & $-7.2 \pm 1.0$\
### Non-factorizable QED interference: NF $\mathcal{O}(\alpha)$ corrections
Non factorizable $\mathcal{O}(\alpha)$ corrections have to be treated with care. It is known (see for instance [@fabio; @wmasssys; @KCansatz]) that the net effect of the $\mathcal{O}(\alpha)$ QED interference between $W$s on the $W$ mass is small if compared with Born level, and the apparent sizeable effect seen when comparing new DPA calculations with the old IBA ones is an artifact due to the use of the standard Coulomb correction.
This can be seen by comparing the results in tables \[tab:specmw\] and \[tab:specgw\], where the effective implementation of DPA NF corrections through the Khoze-Chapovsky (KC) ansatz is compared to the Born level (i.e. no correction at all), and the results in tables \[tab:stdmw\] and \[tab:stdgw\]. Here the comparison is done with the standard Coulomb correction, part of the traditional IBA setup used before DPA.
The effect of using the KC ansatz with respect to Born can be considered as an upper limit on the missing part of the full $\mathcal{O}(\alpha)$ calculation and of the higher order terms. Since the effect on the $W$ mass and width, when comparing with the standard Coulomb correction, is approximately the same for all the channels, the values found on the special samples are used for all the final states without further studies.
### Ambiguities in LPA definition: NL $\mathcal{O}(\alpha)$ corrections
The effect of the NL factorizable $\mathcal{O}(\alpha)$ corrections in LPA is shown in all the tables. As it is seen, its almost complete compensation with the change from standard Coulomb to KC Coulomb correction is the reason for the small net effect of the full DPA correction on the $W$ mass in comparison to the IBA. For the $W$ width on the contrary the effects are in the same sense and add up.
Two sources of uncertainty are considered, following the study in [@wmasssys]. The effect of missing higher orders can be, at least partly, evaluated by changing the electroweak scheme used in the $\mathcal{O}(\alpha)$ calculation. The standard one in [YFSWW]{} and [WandY]{}, conventionally called A, corresponds to the $G_{\mu}$ scheme; the other available one is called B, and it corresponds to the choice of [RacoonWW]{}. This essentially means changing the definition of the QED fine structure constant used in the $\mathcal{O}(\alpha)$ matrix element (see for instance the explanation in [@Yfsww]). The effect is very small, at the limit of the fit sensitivity, both for the mass and the width.
It is worthwhile to notice here that in [YFSWW]{} and [WandY]{} the $\mathcal{O}(\alpha)$ implementation beyond the standard IBA can technically be split into two stages, the first one involving the introduction of the WSR and ISR-WSR interference in the YFS form factor and infrared $\tilde{S}$ factors, and the second one where the electroweak virtual and soft $\mathcal{O}(\alpha)$ corrections and the hard $\mathcal{O}(\alpha)$ matrix element are used to replace the pure QED LL calculation. In this context it is interesting to notice that the effect on the $W$ mass of the second stage is quite small when compared to the total effect of the LPA correction, at most $\mathcal{O}(5-10\%)$ of it. This allows one to conclude that the introduction of the ISR-WSR interference in the YFS form factor and infrared $\tilde{S}$ factors plays a key role. For the $W$ width, on the contrary, the effect of the second stage is found to be much more important.
The second, more relevant, source of uncertainty connected to the LPA is its possible definition, i.e. the ambiguity present in the way of expanding the amplitude around the double resonant $W$ pole. The standard [YFSWW]{} and [WandY]{} use the so-called $\mbox{LPA}_A$ definition; a comparison with the $\mbox{LPA}_B$ one can give an estimate of the effect from the intrinsic ambiguity in the LPA definition. Unfortunately $\mbox{LPA}_B$ cannot be reproduced through reweighting, and it gives sizeable changes in comparison to $\mbox{LPA}_A$ already at Born (or IBA) level. Therefore, in order to evaluate only the effect on the $\mathcal{O}(\alpha)$ correction, a separate $\mbox{LPA}_B$ sample has been generated with [YFSWW]{}, and the effect has been estimated as the double difference: $$\begin{aligned}
\Delta \mathcal{O}(\alpha) & \! \! \! (\mbox{LPA}_{A} - \mbox{LPA}_{B}) = & \Delta (\mbox{Best
LPA}_A - \mbox{no NL LPA}_A) - \Delta (\mbox{Best
LPA}_B - \mbox{no NL LPA}_B) \end{aligned}$$ on the special samples. The size of the effect is within 1 MeV for the mass, within 4 MeV for the width, dominated by the statistical uncertainty (statistically independent samples are used). This result will be used for all the final states and channels, since LPA is applied on the CC03 part of the matrix element and therefore the estimate obtained here should be approximately valid for all the final states.
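Schematically, the double difference and its error can be evaluated as below (toy numbers; the first input is taken from table \[tab:specmw\], the second is hypothetical). The errors of the two differences are combined in quadrature since the LPA$_A$ and LPA$_B$ samples are statistically independent.

```python
# Double difference isolating the O(alpha) LPA-scheme ambiguity; the errors of
# the two (independent) differences are combined in quadrature. Toy numbers.
import math

def double_difference(delta_a, delta_b):
    """delta_a, delta_b: (value, error) of 'Best - no NL' in LPA_A and LPA_B."""
    return delta_a[0] - delta_b[0], math.hypot(delta_a[1], delta_b[1])

print(double_difference((-9.9, 0.7), (-9.9, 0.9)))   # MeV; second input hypothetical
```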
### Radiative corrections on 4-$f$ background diagrams: single $W$
At Born level the full 4-$f$ diagrams set for $WW$-like final states is computed with a very high precision, at least for LEP2 energies and in the phase space regions relevant for the $W$ mass and width measurements. This was shown already by the studies in [@lep2]. Therefore the systematic uncertainties associated to it are linked essentially to the electroweak corrections.
The DPA is known to be valid in an interval of a few $\Gamma_W$ around the double resonant pole. The study of the previous section takes into account the ambiguity in its definition and the effects caused by this ambiguity far from the pole. Since the so-called “additive approach” is used in [WandY]{} for the DPA implementation through reweighting, i.e. the DPA correction is applied only to the CC03 part of the matrix element (and partly to the interference, see [@delphi4f]), the contributions of non-CC03 diagrams are not directly affected by the DPA uncertainty (except for possible effects in the interference term, which is relevant for the electron channel).
It is clear that this still leaves the problem of the approximated radiative corrections treatment for the non CC03 part of the matrix element (and the interference).
The ISR studies previously discussed can reasonably cover the most relevant part of the electroweak radiative correction uncertainties present also for the $WW$-like 4-$f$ background diagrams, i.e. the non-CC03 part. There is a noticeable exception, represented by the so-called single $W$ diagrams for the $q\bar{q'}e\nu$ final state (see [@lep2; @lep2mcws] for their definition and a basic discussion of the problem).
The bulk of single $W$ events is rejected in the $W$ mass and width analysis, since the electron in these events is lost in the beam pipe. But the CC03 - single $W$ interference is sizeable, and it has a strong impact on the $W$ mass result in the electron channel. This can be easily seen from the variation of the $W$ mass result for the electron channel when only the CC03 part of the matrix element is used in the simulation (inter-final state cross talk is included): $$\begin{aligned}
\Delta m_W \, (\mbox{electron}) \, \mbox{Best - CC03 only} & = & 106.6 \pm 1.9
\, \mbox{MeV}\end{aligned}$$ and comparing with the variation when only the CC03/non CC03 interference is excluded from the simulation: $$\begin{aligned}
\Delta m_W \, (\mbox{electron}) \, \mbox{Best - no interference} & = & 106.3 \pm 2.2
\, \mbox{MeV}\end{aligned}$$
It can be noticed that the big effect of moving from a full 4-$f$ calculation to the CC03 only is almost entirely due to the interference between the CC03 and the non CC03 part.
The situation is different in the $W$ width analysis, where in $qqe\nu$ events reconstructed as electrons the effects of non CC03 diagrams and the CC03 - non CC03 interference are opposite in sign and almost completely canceling.
The situation is made even more complex by the cross talk between channels, i.e. events belonging in reality to one channel but reconstructed as belonging to another one. This cross talk is particularly relevant between electrons and taus, which explains why the $\tau$ channel is also sensitive to this uncertainty source.
The effect is particularly relevant for the width, where variations of the non CC03 parts of the $qqe\nu$ matrix element give different results with respect to the electron channel: the pure non CC03 diagrams give again an effect opposite in sign to the interference, but much bigger, so in the width analysis the tau channel is more sensitive to this systematic effect than the electron one: $$\begin{aligned}
\Delta \Gamma_W \, (\mbox{tau}) \, \mbox{Best - CC03 only} & = & 190.7 \pm 12.3
\, \mbox{MeV} \\
\Delta \Gamma_W \, (\mbox{tau}) \, \mbox{Best - no interference} & = & -9.8 \pm 10.7
\, \mbox{MeV} \end{aligned}$$
Studying separately real $qq\tau\nu$ events from the $qqe\nu$ ones reconstructed as taus clearly shows that this behaviour is due to the cross talk.
Theoretical studies [@lep2mcws] show that the standard IBA calculations suffer from several problems for the single $W$ process, ranging from gauge invariance issues to the scale to be used for the ISR (the $t$-channel scale should be preferred to the $s$-channel one), problems which can globally lead to a $\mathcal{O}$$(4\%)$ uncertainty on the cross section.
It should be noticed that [WandY]{} implements several improvements in this sector with respect to fixed width based IBA calculations (see [@delphi4f; @wphact]). Nevertheless, in order to give an estimate of the uncertainty related to the radiative corrections for the single $W$ part, the non CC03 part of the matrix element, assumed dominated by the single $W$ contribution, has been scaled by a factor 1.04 for $q\bar{q'}e\nu$ final states.
The effect on the mass and width measurement is shown in table \[tab:singlew\].
Another possible source of uncertainty related to 4-$f$ background is represented by partly applying the DPA correction to the interference term (see the discussion in [@delphi4f]). The effect of this way of computing the corrections is shown in table \[tab:singlew\], and can be considered as another estimate of the uncertainty related to the 4-$f$ background presence.
[|l|c|c|c|c|c|]{} Numerical test & $q\bar{q'}e\nu$ & $q\bar{q'}\mu\nu$ & $q\bar{q'}\tau\nu$ & $q\bar{q'}l\nu$ & $q\bar{q'}Q\bar{Q'}$\
\
Best - non CC03 $\times$ 1.04 & $-4.2 \pm 0.1$ & $< -0.1$ & $0.6 \pm
0.1$ & $-1.2 \pm 0.1$ & -\
Best - no DPA in int. & $-1.3 \pm 0.2$ & $0.2 \pm 0.1$ & $0.1 \pm 0.3$ & $-0.3 \pm 0.1$ & $< 0.1$\
\
Best - non CC03 $\times$ 1.04 & $0.2 \pm 0.2$ & $< -0.1$ & $-6.4 \pm
0.4$ & $-1.2 \pm 0.1$ & -\
Best - no DPA in int. & $1.8 \pm 0.5$ & $-0.4 \pm 0.1$ & $0.5 \pm 0.7$ & $0.5 \pm 0.2$ & $< 0.1$\
The DELPHI 4-$f$ generator - RacoonWW comparison {#sec:racoonww}
================================================
The generator chosen by the LEP collaborations for implementing electroweak radiative corrections in $WW$-like events is [YFSWW]{}, used together with another full 4-$f$ generator (either [KoralW]{} or [WPHACT]{}). [RacoonWW]{} is the other, completely independent Monte Carlo generator which implements radiative corrections in DPA on top of a (massless) 4-$f$ generator.
Its use has been fundamental in assessing the DPA precision on the $WW$ cross section, by comparing it with [YFSWW]{}. It looks therefore interesting to try to use it also for a completely independent cross check of the [YFSWW]{} based results on the $W$ mass and width (and possibly on other $W$ related measurements). This check has been already done in [@wmasssys], finding a good agreement between the two codes, but as previously explained on an observable which is not directly linked to the real analysis.
In appendix \[sec:app2\] the input options set used for [ RacoonWW]{} in this study is shown, and the output of one of the runs is given to show the values of all the relevant parameters adopted for the tuned comparison with [WandY]{} and [YFSWW]{}. The phase space slicing approach has been adopted for the implementation of the radiative corrections, in the version suggested for unweighted events production ([smc = 3]{}). The DELPHI version of [PYTHIA]{} has been used for the quark hadronization.
There are, in any case, a number of challenges to be taken into account in this test. Real photon emission is handled in a completely different way with respect to [YFSWW]{}. In particular, real emission in the detector acceptance (i.e. with finite $p_t$) is computed only at $\mathcal{O}(\alpha)$, although with a full 4-$f + \gamma$ matrix element. Higher order ISR is present only through collinear structure functions on events where there is no hard $\mathcal{O}(\alpha)$ emission, a very different situation compared to the YFS exponentiation for ISR and WSR and the $\mathcal{O}(\alpha^3)$ LL ISR matrix element. No FSR beyond the one already included in the $\mathcal{O}(\alpha)$ is present, while in [YFSWW]{} the FSR is independent of the remaining part of the $\mathcal{O}(\alpha)$ calculation and is introduced at $\mathcal{O}(\alpha^2)$ for leptons through [PHOTOS]{} and, merged with gluon emission, in the parton shower for quarks. These differences have been investigated in the literature (see for instance [@lep2mcws; @RacoonWW; @fabio]) and are known to give sizeable discrepancies in the photon-related observables.
Therefore it is difficult to disentangle differences arising from a different way of computing the same corrections from those due to the use of different sets of corrections.
Since it is known that [RacoonWW]{} in its DPA mode does not compare well with [YFSWW]{} on photonic spectra, the [RacoonWW]{} authors have developed a 4-$f + \gamma$ IBA mode which combines the $\mathcal{O}(\alpha)$ matrix element and collinear structure functions. The photonic energy and angular spectra produced in this mode are in much better agreement with the [YFSWW]{} ones at LEP2 energies, but it is not possible at present to combine it with the DPA corrections for the virtual and soft emission part in a consistent way.
Moreover, the energy and angle cutoffs for the soft/hard photon emission separation in [RacoonWW]{} are in practice considerably higher than the [YFSWW]{} ones, due to the quite different techniques adopted in the two calculations. The phase space slicing approach for matching virtual, soft and hard corrections has been used for this test, and these cutoffs are an integral part of the approach itself. The values used, shown in appendix \[sec:app2\], correspond roughly to a minimum real photon energy of about 95 MeV and a minimum real photon-fermion angle of about 1.8 degrees, and are a compromise between the reliability of the calculation and the attempt to avoid merging with the fermions photons which could be detected separately by the detector. Moreover, in contrast to what has been suggested by the authors in order to avoid results which depend on the specific cutoffs chosen, no further photon recombination is applied in the sample production. This choice is motivated by the fact that in a realistic simulation of a detector any recombination has to be determined by the detector granularity and the analysis procedure itself, and, due to the already large values of the cutoffs adopted, any further recombination would risk suppressing photons that would be detectable.
For final states with quarks, where the hadronization phase has to be described on top of the electroweak radiative corrections, the use of a full 4-$f + \gamma$ matrix element, in principle more correct than a parton shower, creates a problem in practice: photons are systematically emitted before gluons, which is unphysical and most probably incompatible with the hadronization package tunings used ([PYTHIA]{} [@pythia] is the standard choice for the analysis and this study).
The suggestion of the authors of [RacoonWW]{} to switch off the photon radiation in the parton shower, to compensate for the photon emission in the matrix element, has been adopted in this study, but it does not seem to be a real solution to the problem, and of course it can potentially spoil the validity of the hadronization tuning used. In case of need this problem might be studied with the [WandY]{} setup, trying to emulate the [RacoonWW]{} situation, i.e. calling [PHOTOS]{} also for quark pairs before the call to [PYTHIA]{}, and switching off photon emission inside [PYTHIA]{} itself. This would presumably overestimate the effect of FSR, since the photon emission would be performed independently for the two fermion pairs.
A third potential problem in the comparison is represented by [RacoonWW]{} generating massless fermions in the final state. Fermion masses are added [*a posteriori*]{} using the routine provided by the authors, which obviously conserves the total 4-momentum and the di-fermion masses. It is clear that when a mass which is sizeable compared to the fermion energy is added, as in the case of the $cs$ quark pair, this could lead to distortions in the final state distributions.
All these features suggest that the comparison results must be considered with care if serious discrepancies are found (as is the case). On the other hand, no special tuning has been prepared for the hadronization package, in order to avoid mixing problems concerning different sectors of the event description.
Table \[tab:racoonww\] shows the result of the comparison between [WandY]{} and [RacoonWW 1.3]{}. A sizeable discrepancy can be seen for the mass in the $ud\mu\nu$ channel, and, to a minor extent, for the width in the $udsc$ channel.
[|l|c|c|]{} Numerical test & $ud\mu\nu$ & $udsc$\
\
$\Delta m_W$: [WandY]{} - [RacoonWW 1.3]{} & $-38 \pm 5$ & $-4 \pm 5$\
\
$\Delta \Gamma_W$: [WandY]{} - [RacoonWW 1.3]{} & $4 \pm 10$ & $27 \pm 10$\
Extensive studies have been performed in order to investigate the discrepancies, in particular the one on the $W$ mass.
The different hadronization due to the treatment of FSR from quarks in [RacoonWW]{} has of course an influence on the jet characteristics, and can affect the results, in particular the ones for the width. Optimizing the interface of the hadronization with the electroweak full matrix element to circumvent possible problems arising from the simple minded approach followed goes beyond the scope of this study.
A generator level analysis analogous to the one whose results are shown in fig. \[fig1\], \[fig2\] and \[fig3\], has been used for a 4-$f + \gamma$ level comparison of [WandY]{} with [RacoonWW 1.3]{} for the $ud\mu\nu$ channel (all the 4-$f$ diagrams are included here, not only the CC03 part). This study has been used to investigate the discrepancy on the $W$ mass trying to disentangle the genuine electroweak part from possible problems connected to the implementation of the hadronization phase.
This study has clearly shown the crucial role played by the photon clustering, in particular around the muon. The different treatment of the soft, but mainly of the collinear, photons in the two codes implies a strong difference in the radiation around the fermions. In [RacoonWW]{} no visible photon is generated in a cone of 1.8 degrees around a fermion, whatever its energy, and the radiation is reassociated to the lepton. This is not true in [WandY]{}, where the energy and angle cutoffs (for FSR from leptons, the [PHOTOS]{} ones) are much smaller, closer to a real situation.
For quarks this is not a big problem since experimentally FSR photons cannot be disentangled from jets, and they are naturally clustered to the jets themselves. But the treatment of photons around leptons is a different problem. While in the reconstruction of high energy electrons a clustering of photons is done in order to take into account the bremsstrahlung due to the interaction with the detector, muons can be quite cleanly separated from photons, unless they are strictly collinear. In the latter case the photon energy is anyway lost, since the muon momentum is used, not the energy deposited in the calorimeters possibly associated to it. $ud\mu\nu$ is therefore a good final state to study in detail differences in the visible photon radiation, mainly FSR.
In the real analysis, visible photons which have passed the quality selection criteria are clustered to the muon if they lie in a cone of 3 degrees around it, otherwise they are associated to the jets. This procedure can partly reabsorb the difference in the collinear radiation mentioned above, even if not completely, because of limited photon reconstruction efficiency, resolution, selection cutoffs, etc. The effect of this photon clustering is to improve the agreement between the two calculations on the fitted mass; without it, the difference in table \[tab:racoonww\] would be about -50 MeV.
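A minimal sketch (hypothetical three-momenta, not the DELPHI reconstruction code) of this clustering rule:

```python
# Sketch of the photon clustering rule described above: a visible photon is
# clustered to the muon if it lies within a 3-degree cone around it,
# otherwise it is associated to the jets. Toy three-vectors are assumed.
import numpy as np

CONE_DEG = 3.0

def angle_deg(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def cluster_photons(muon_p3, photon_p3s):
    to_muon, to_jets = [], []
    for p in photon_p3s:
        (to_muon if angle_deg(muon_p3, p) < CONE_DEG else to_jets).append(p)
    return to_muon, to_jets
```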
The $W$ mass difference obtained on the beam energy rescaled average mass (like in fig. \[fig3\]) is -6 MeV if photons are clustered to the charged fermion with which they have the smallest $p_t$. If on the contrary the clustering to the muon is done only for photons in a 3 degrees cone around it, associating all the others to the quarks, the difference becomes -23 MeV.
Increasing the opening angle of the cone for the clustering improves the agreement, but of course in the real analysis such a procedure would rapidly cluster photons coming from the hadronization of the quarks (mainly $\pi^0$ decay products). Although the opening angle might be tuned to minimize the rate of photons from jets clustered and optimize the [WandY]{} - [RacoonWW]{} agreement, such a procedure would introduce further systematic uncertainties due to the imperfect knowledge of the photon distributions in jets.
The residual discrepancy is presumably linked to the known differences between the two calculations in the description of the radiation, beyond the treatment of the strictly collinear region studied here. The good agreement for the mass found in the hadronic channel seems due to the smaller sensitivity of the analysis to the detailed description of the photonic radiation, since the photon clustering is implicit in the analysis procedure itself. This is in any case an encouraging result for the general confidence in the study.
In this situation, using the difference between the predictions of the two calculations to estimate the systematic uncertainty on the $W$ mass and width does not seem appropriate.
Results and conclusions {#sec:results}
=======================
The results of all the studies presented have to be combined into a single uncertainty for each channel. Tables \[tab:deltamw\] and \[tab:deltagw\] present an estimate of the different sources of uncertainty, as deduced from the studies presented in section \[sec:delphisetup\]. Where the numerical or statistical uncertainty on an estimate is comparable with the estimate itself, the two are added linearly, in order to take it into account conservatively.
Table \[tab:deltamw\]: estimated systematic uncertainties on the $W$ mass, in MeV, per channel.

| Uncertainty source       | $q\bar{q'}e\nu$ | $q\bar{q'}\mu\nu$ | $q\bar{q'}\tau\nu$ | $q\bar{q'}Q\bar{Q'}$ |
|--------------------------|-----------------|-------------------|--------------------|----------------------|
| ISR                      | 1               | 1                 | 1                  | 1                    |
| FSR                      | 0.5             | 0.5               | 1                  | -                    |
| NF $\mathcal{O}(\alpha)$ | 1               | 1                 | 1                  | 2                    |
| NL $\mathcal{O}(\alpha)$ | 1               | 1                 | 1                  | 1                    |
| 4-$f$ background         | 5.5             | 0.5               | 1                  | 0.5                  |
| Total                    | 9               | 4                 | 5                  | 4.5                  |
Table \[tab:deltagw\]: estimated systematic uncertainties on the $W$ width, in MeV, per channel.

| Uncertainty source       | $q\bar{q'}e\nu$ | $q\bar{q'}\mu\nu$ | $q\bar{q'}\tau\nu$ | $q\bar{q'}Q\bar{Q'}$ |
|--------------------------|-----------------|-------------------|--------------------|----------------------|
| ISR                      | 2               | 2                 | 2                  | 2                    |
| FSR                      | 1               | 1                 | 2                  | -                    |
| NF $\mathcal{O}(\alpha)$ | 2               | 2                 | 2                  | 2                    |
| NL $\mathcal{O}(\alpha)$ | 4               | 4                 | 4                  | 4                    |
| 4-$f$ background         | 2               | 1                 | 6                  | 1                    |
| Total                    | 11              | 10                | 16                 | 9                    |
The total uncertainty per channel is computed by summing linearly the values of the contributions. This conservative choice is motivated by the fact that several contributions are upper limits rather than statistical errors. All the numbers have been rounded to 0.5 MeV.
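To make the combination explicit, the short sketch below reproduces the linear sums of tables \[tab:deltamw\] and \[tab:deltagw\] (values in MeV; the channel labels are shorthand, the "-" entries are treated as zero, and this is only an illustrative check, not part of the analysis code):

```python
# Contributions (MeV) per channel, in the order: ISR, FSR, NF O(alpha), NL O(alpha), 4-f background.
dmw = {
    "qqev":   [1, 0.5, 1, 1, 5.5],
    "qqmuv":  [1, 0.5, 1, 1, 0.5],
    "qqtauv": [1, 1.0, 1, 1, 1.0],
    "qqQQ":   [1, 0.0, 2, 1, 0.5],
}
dgw = {
    "qqev":   [2, 1, 2, 4, 2],
    "qqmuv":  [2, 1, 2, 4, 1],
    "qqtauv": [2, 2, 2, 4, 6],
    "qqQQ":   [2, 0, 2, 4, 1],
}
for name, table in (("Delta m_W", dmw), ("Delta Gamma_W", dgw)):
    for channel, contributions in table.items():
        total = sum(contributions)  # linear (not quadratic) combination, as described in the text
        print(f"{name:>13s}  {channel:>7s}: total = {total:g} MeV")
```

Running the sketch reproduces the "Total" rows of the two tables (9, 4, 5, 4.5 MeV for the mass and 11, 10, 16, 9 MeV for the width).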
As can be seen, the uncertainty on the $W$ mass is within the 10 MeV level.
Acknowledgments
===============
I am grateful to the authors of [YFSWW]{}, S. Jadach, W. Placzek, M. Skrzypek, B. F. L. Ward and Z. Was, and to those of [RacoonWW]{}, A. Denner, S. Dittmaier, M. Roth and D. Wackeroth, for the useful discussions and for their help in understanding both the theoretical problems connected to this study and the way of performing it correctly. I would also like to thank the authors of [RacoonWW]{} for their help in cross-checking my results. I want to thank the DELPHI colleagues involved in the $W$ mass measurement for the useful feedback and for reading this manuscript, and in particular J. D’Hondt for extracting the results of this study for the fully hadronic channel. Finally I want to thank A. Ballestrero and R. Chierici for the fruitful, long-lasting discussions and exchange of ideas on these problems.
Appendix: [YFSWW]{} input parameters {#sec:app1}
====================================
The [YFSWW]{} samples used for the study, whose settings are the same as those used in the [WandY]{} special samples, have been generated with version 3-1.17. The input for the $LPA_a$ sample ($udsc$ final state) is:
*//////////////////////////////////////////////////////////////////////////////
*// //
*// Input data for YFSWW3: ISR + EW + FSR //
*// For Simple DEMO Program //
*// //
*//////////////////////////////////////////////////////////////////////////////
BeginX
*<-i><----data-----><-------------------comments------------------------------>
1 188.6d0 CMSEne =xpar( 1) ! CMS total energy [GeV]
2 1.16639d-5 Gmu =xpar( 2) ! Fermi Constant
4 91.187d0 aMaZ =xpar( 4) ! Z mass
5 2.506693d0 GammZ =xpar( 5) ! Z width
6 80.400d0 aMaW =xpar( 6) ! W mass
7 -2.08699d0 GammW =xpar( 7) ! W with, For gammW<0 it is RECALCULATED
11 115d0 amh =xpar(11) ! Higgs mass
13 0.1255d0 alpha_s=xpar(13) ! QCD coupling const.
111 1d0 vckm(1:1)
112 0d0 vckm(1:2)
113 0d0 vckm(1:3)
114 0d0 vckm(2:1)
115 1d0 vckm(2:2)
116 0d0 vckm(2:3)
117 0d0 vckm(3:1)
118 0d0 vckm(3:2)
119 1d0 vckm(3:3)
*<-i><----data-----><-------------------comments------------------------------>
* YFSWW3 SPECIFIC PARAMETERS !!!
*=============================================================================
2001 5d0 KeyCor =xpar(2001) Radiative Correction switch
* KeyCor =0: Born
* =1: Above + ISR
* =2: Above + Coulomb Correction
* =3: Above + YFS Full Form-Factor Correction
* =4: Above + Radiation from WW
* =5: Above + Exact O(alpha) EWRC (BEST!)
* =6: As Above but Apporoximate EWRC (faster)
2002 0d0 KeyLPA =0: LPA_a
*=============================================================================
1011 1d0 KeyISR =0,1 initial state radiation off/on (default=1)
*
1013 1d0 KeyNLL =0 sets next-to leading alpha/pi terms to zero
* =1 alpha/pi in yfs formfactor is kept (default)
1014 2d0 KeyCul =xpar(1014)
* =0 No Coulomb correction
* =1 "Normal" Coulomb correction
* =2 "Screened-Coulomb" Ansatz for Non-Factorizable Corr.
1021 2d0 KeyBra =xpar(1021)
* = 0 Born branching ratios, no mixing
* = 1 branching ratios from input
* = 2 branching ratios with mixing and naive QCD
* calculated in IBA from the CKM matrix (PDG 2000);
* see routine filexp for more details (file filexp.f)
1023 1d0 KeyZet =xpar(1023)
* = 0, Z width in z propagator: s/m_z *gamm_z
* = 1, Z width in z propagator: m_z *gamm_z
* = 2, Z zero width in z propagator.
1026 1d0 KeyWu =xpar(1026)
* = 0 w width in w propagator: s/m_w *gamm_w
* = 1 w width in w propagator: m_w *gamm_w
* = 2 no (0) w width in w propagator.
1031 0d0 KeyWgt =xpar(1031)
* =0, unweighted events (wt=1), for apparatus Monte Carlo
* =1, weighted events, option faster and safer
1041 1d0 KeyMix =xpar(1041)
* KeyMix EW "Input Parameter Scheme" choices.
* =0 "LEP2 Workshop '95" scheme (for Born and ISR only!)
* =1 G_mu scheme (RECOMMENDED)
* W decays: 1=ud, 2=cd, 3=us, 4=cs, 5=ub, 6=cb, 7=e, 8=mu, 9=tau, 0=all chan.
1055 4d0 KeyDWm =xpar(1055) W- decay: 7=(ev), 0=all ch.
1056 1d0 KeyDWp =xpar(1056) W+ decay: 7=(ev), 0=all ch.
1057 16d0 Nout =xpar(1057) Output unit no, for Nout<0, Nout=16
*=============================================================================
* TAUOLA, PHOTOS, JETSET
* >>> If you want to switch them OFF, uncomment the lines below <<<
*<-i><----data-----><-------------------comments------------------------------>
1071 0d0 Jak1 =xpar(1071) Decay mode tau+
1072 0d0 Jak2 =xpar(1072) Decay mode tau-
1073 0d0 Itdkrc =xpar(1073) Bremsstrahlung switch in Tauola
1074 2d0 IfPhot =xpar(1074) PHOTOS switch
1075 0d0 IfHadM =xpar(1075) Hadronization W-
1076 0d0 IfHadP =xpar(1076) Hadronization W+
516 0.01d0 mass [GeV] (3-9 MeV in PDG)
526 0.005d0 mass [GeV] (1.5-5 MeV in PDG)
536 0.2d0 mass [GeV] (60-170 MeV in PDG)
546 1.30d0 mass [GeV] (1.1-1.4 GeV in PDG)
556 4.8d0 mass [GeV] (4.1-4.4 GeV in PDG)
566 175.0d0 mass [GeV] (174.3 GeV in PDG 1999)
EndX
*//////////////////////////////////////////////////////////////////////////////
The $LPA_b$ sample input, used for fully simulated events, differed in the following parameters:
2002 1d0 KeyLPA =0: LPA_a
1073 1d0 Itdkrc =xpar(1073) Bremsstrahlung switch in Tauola
1074 1d0 IfPhot =xpar(1074) PHOTOS switch
1075 1d0 IfHadM =xpar(1075) Hadronization W-
1076 1d0 IfHadP =xpar(1076) Hadronization W+
The [YFSWW]{} version used for the full simulation implemented the same [PYTHIA]{} version and tuning, and the same [TAUOLA]{} version, as used in [@delphi4f].
Appendix: [RacoonWW]{} input options and parameters {#sec:app2}
===================================================
The [RacoonWW]{} samples used for the study, with input options and parameters tuned to give the best agreement with the [WandY]{} and [YFSWW]{} samples used, have been generated with both versions 1.2 and 1.3. The input for the 1.3 sample ($udsc$ final state) is:
udsc.out ! name of output file
188.6d0 ! energy: CMF energy (in GeV)
100000 ! neventsw: number of weighted events
3 ! smc: choice of MC branch: 1(or 3):slicing 2:subtraction
1 ! sborn4: include Born ee->4f: 0:no 1-3:yes
1 ! sborn5: include Born ee->4f+photon: 0:no 1:yes
0 ! sborng5: include Born ee->4f+gluon: 0:no 1:yes
1 ! sisr: include higher-order ISR: 0:no 1:yes
1 ! src: include radiative corrections: 0:no 1:DPA 2:IBA-4f 3:IBA-4fa
0 ! scoul5: Coulomb singularity for ee->4f+photon: 0:no 1,2:yes
3 ! qnf: Coulomb singularity for ee->4f: 1,2, or 3
0 ! qreal: neglect imaginary part of virt. corr.: 0:no 1:yes
2 ! qalp: choice of input-parameter scheme: 0,1, or 2
4 ! qgw: calculate the W-boson width: 0:no 1-4:yes
1 ! qprop: choice of width scheme: 0,1,2,3 or 4
0 ! ssigepem4: choice of diag. for Born ee->4f: 0:all 1-5:subsets
5 ! ssigepem5: choice of diag. for Born ee->4f+ga: 0:all 1-5:subsets
0 ! ssigepemg5: choice of diag. for Born ee->4f+gl: 0:all 1,5:subsets
2 ! qqcd: include QCD radiative corr.: 0:no 1:CC03 2:naive 3:CC11
0 ! sqcdepem: include gluon-exch. diag. in Born: 0:no 1:yes 2:only
u ! fermion 3
d ! anti-fermion 4
s ! fermion 5
c ! anti-fermion 6
0d0 ! pp: degree of positron beam polarization [-1d0:1d0]
0d0 ! pm: degree of electron beam polarization [-1d0:1d0]
0 ! srecomb: recombination cuts: 0:no 1:TH 2:EXP
1.7d0 ! precomb(1): angular rec. cut between photon and beam
0.1d0 ! precomb(2): rec. cut on photon energy
1.32d0 ! precomb(3): inv.-mass rec.(TH) or angular rec. cut for lept.(EXP)
0d0 ! precomb(4): angular rec. cut for quarks(EXP)
0 ! srecombg: gluon recombination cuts: 0:no 1:TH 2:EXP
0d0 ! precombg(1): rec. cut on gluon energy
0d0 ! precombg(2): inv.-mass (TH) or angular (EXP) recombination cut
0 ! satgc: anomalous triple gauge couplings (TGC): 0:no 1:yes
0d0 ! TGC Delta g_1^A
0d0 ! TGC Delta g_1^Z
0d0 ! TGC Delta kappa^A
0d0 ! TGC Delta kappa^Z
0d0 ! TGC lambda^A
0d0 ! TGC lambda^Z
0d0 ! TGC g_4^A
0d0 ! TGC g_4^Z
0d0 ! TGC g_5^A
0d0 ! TGC g_5^Z
0d0 ! TGC tilde kappa^A
0d0 ! TGC tilde kappa^Z
0d0 ! TGC tilde lambda^A
0d0 ! TGC tilde lambda^Z
0d0 ! TGC f_4^A
0d0 ! TGC f_4^Z
0d0 ! TGC f_5^A
0d0 ! TGC f_5^Z
0d0 ! TGC h_1^A
0d0 ! TGC h_1^Z
0d0 ! TGC h_3^A
0d0 ! TGC h_3^Z
0 ! qaqgc: anomalous quartic gauge couplings (QGC): 0:no 1:yes
0d0 ! QGC a_0/Lambda^2
0d0 ! QGC a_c/Lambda^2
0d0 ! QGC a_n/Lambda^2
0d0 ! QGC tilde a_0/Lambda^2
0d0 ! QGC tilde a_n/Lambda^2
10 ! scuts: separation cuts: 0:no 1,2:default(ADLO,LC) 10,11:input
0d0 ! photon(gluon) energy cut
1d0 ! charged-lepton energy cut
2d0 ! quark energy cut
2d0 ! quark-quark invariant mass cut
0d0 ! angular cut between photon and beam
0d0 ! angular cut between photon and charged lepton
0d0 ! angular cut between photon(gluon) and quark
0d0 ! angular cut between charged leptons
0d0 ! angular cut between quarks
0d0 ! angular cut between charged lepton and quark
0d0 ! angular cut between charged lepton and beam
0d0 ! angular cut between quark and beam
and the corresponding output is:
smc= 3: Phase-space-slicing branch of RacoonWW
======================================
technical cutoff parameters (photon):
delta_s = 1.0000000000000000E-003
delta_c = 5.0000000000000000E-004
Input parameters:
-----------------
CMF energy = 188.60000 GeV, Number of events = 100000,
alpha(0) = 1/ 137.0359895, alpha(MZ) = 1/128.88700, alpha_s = 0.12550,
GF = .1166390E-04,
MW = 80.40000, MZ = 91.18700, MH = 115.00000,
GW = 2.09372, GZ = 2.50669,
me = .51099906E-03, mmu = 0.105658300, mtau = 1.77700,
mu = 0.00500, mc = 1.30000, mt = 175.00000,
md = 0.01000, ms = 0.20000, mb = 4.80000.
Effective branching ratios:
leptonic BR = 0.32476, hadronic BR = 0.67524, total BR = 1.00000
Process: anti-e e -> u anti-d s anti-c (+ photon)
pp= 0.0: degree of positron beam polarization.
pm= 0.0: degree of electron beam polarization.
qalp= 2: GF-parametrization scheme.
qgw= 4: one-loop W-boson width calculated (with QCD corr.).
qprop= 1: constant width.
sborn4= 1: tree-level process ee -> 4f.
ssigepem4= 0: all electroweak diagrams included.
qqcd= 2: naive QCD corrections included.
src= 1: virtual corrections in DPA and real corrections included.
ssigepem5= 5: real photon corr. : only CC11 class of diagrams included.
qqcd= 2: naive QCD corrections included
qreal= 0: imaginary part of virtual corrections included.
qnf= 3: off-shell Coulomb singularity with off-shell Born included.
sisr= 1: initial-state radiation up to order alpha^3 included.
scuts=10: with separation cuts:
energy(3) > 2.00000 GeV
energy(4) > 2.00000 GeV
energy(5) > 2.00000 GeV
energy(6) > 2.00000 GeV
mass(3,4) > 2.00000 GeV
mass(3,5) > 2.00000 GeV
mass(3,6) > 2.00000 GeV
mass(4,5) > 2.00000 GeV
mass(4,6) > 2.00000 GeV
mass(5,6) > 2.00000 GeV
events : intermediary results : preliminary results
1000000 : 1846.54692 +- 11.43978 : 1846.54692 +- 11.43978
2000000 : 1833.80408 +- 11.11287 : 1840.17550 +- 7.97440
3000000 : 1852.08809 +- 11.09372 : 1844.14636 +- 6.47590
4000000 : 1839.58697 +- 11.05121 : 1843.00651 +- 5.58773
5000000 : 1834.96514 +- 10.89130 : 1841.39824 +- 4.97266
6000000 : 1843.06512 +- 11.03453 : 1841.67605 +- 4.53366
7000000 : 1840.81013 +- 10.93956 : 1841.55235 +- 4.18847
8000000 : 1832.80339 +- 10.87736 : 1840.45873 +- 3.90900
Warning: weight=-1 1 19685
9000000 : 1836.94865 +- 10.94773 : 1840.06872 +- 3.68143
Warning: weight>weighttotmax 1 21733
weight/weighttotmax=.21913D+01
Redefining weighttotmax=weight
10000000 : 1844.61537 +- 11.08916 : 1840.52339 +- 3.49394
11000000 : 1851.84340 +- 11.08708 : 1841.55248 +- 3.33239
12000000 : 1841.10316 +- 10.94892 : 1841.51504 +- 3.18804
13000000 : 1837.82755 +- 10.94003 : 1841.23138 +- 3.06077
14000000 : 1846.96380 +- 10.97894 : 1841.64084 +- 2.94835
15000000 : 1830.80012 +- 10.84000 : 1840.91813 +- 2.84510
16000000 : 1854.14538 +- 11.11172 : 1841.74483 +- 2.75621
17000000 : 1841.29479 +- 10.91541 : 1841.71836 +- 2.67237
18000000 : 1841.08998 +- 10.99218 : 1841.68345 +- 2.59673
19000000 : 1844.23691 +- 10.97313 : 1841.81784 +- 2.52694
20000000 : 1859.00710 +- 11.16467 : 1842.67730 +- 2.46465
21000000 : 1834.41218 +- 10.92653 : 1842.28373 +- 2.40426
22000000 : 1854.21070 +- 11.12994 : 1842.82586 +- 2.35007
Warning: weight=-1 2 35285
23000000 : 1841.83930 +- 10.95617 : 1842.78297 +- 2.29782
24000000 : 1828.49319 +- 10.88035 : 1842.18756 +- 2.24825
25000000 : 1849.61925 +- 11.05529 : 1842.48483 +- 2.20316
26000000 : 1840.99837 +- 10.98930 : 1842.42766 +- 2.16018
Warning: weight=-1 3 39352
27000000 : 1839.23779 +- 10.87842 : 1842.30951 +- 2.11883
28000000 : 1850.36220 +- 10.97517 : 1842.59711 +- 2.08042
29000000 : 1842.48313 +- 10.89208 : 1842.59318 +- 2.04349
30000000 : 1843.99336 +- 11.02637 : 1842.63985 +- 2.00928
31000000 : 1846.22236 +- 10.88435 : 1842.75542 +- 1.97591
32000000 : 1844.43879 +- 10.92402 : 1842.80802 +- 1.94436
33000000 : 1855.61815 +- 10.91623 : 1843.19621 +- 1.91424
34000000 : 1838.22596 +- 10.89012 : 1843.05002 +- 1.88535
35000000 : 1838.40827 +- 10.94610 : 1842.91740 +- 1.85799
36000000 : 1854.60202 +- 11.09578 : 1843.24197 +- 1.83249
Result:
-------
Number of weighted events = 36942025
Average = 1843.0071014423 fb
Standard deviation = 1.8086070041 fb
Maximal weight = 0.0442221449 fb
Tree-level four-fermion cross section:
Average = 1932.5509695232 fb
Standard deviation = 2.9415853629 fb
Number of events
----------------
Unweighted events = 50000
Events with weight=-1 = 3
Events with weight>weightmax = 1
Physics at LEP2, G. Altarelli, T. Sjöstrand and F. Zwirner eds., CERN 96-01 (1996).

M. Grünewald [*et al.*]{}, [*Four-Fermion Production in Electron-Positron Collisions*]{}, in [*Report of the Working Groups on precision calculations for LEP2 physics*]{}, S. Jadach, G. Passarino and R. Pittau eds., CERN 2000-009 (2000) 1 \[hep-ph/0005309\].

M. Veltman, Physica [**29**]{} (1963) 186; R.G. Stuart, Phys. Lett. [**B262**]{} (1991) 113; A. Aeppli, G.J. van Oldenborgh and D. Wyler, Nucl. Phys. [**B428**]{} (1994) 126.

W. Beenakker, F.A. Berends and A.P. Chapovsky, Nucl. Phys. [**B548**]{} (1999) 3.

A. Denner, Fortschr. Phys. [**41**]{} (1993) 307; A. Denner, G. Weiglein and S. Dittmaier, Nucl. Phys. [**B440**]{} (1995) 95; A. Denner, S. Dittmaier and M. Roth, Nucl. Phys. [**B519**]{} (1998) 39.

S. Jadach, W. Placzek, M. Skrzypek and B.F.L. Ward, Phys. Rev. [**D54**]{} (1996) 5434; S. Jadach, W. Placzek, M. Skrzypek, B.F.L. Ward and Z. Was, Phys. Lett. [**B417**]{} (1998) 326; S. Jadach, W. Placzek, M. Skrzypek, B.F.L. Ward and Z. Was, Phys. Rev. [**D61**]{} (2000) 113010; S. Jadach, W. Placzek, M. Skrzypek, B.F.L. Ward and Z. Was, Comp. Phys. Commun. [**140**]{} (2001) 432; S. Jadach, W. Placzek, M. Skrzypek, B.F.L. Ward and Z. Was, Phys. Rev. [**D65**]{} (2002) 093010.

A. Denner, S. Dittmaier, M. Roth and D. Wackeroth, Nucl. Phys. [**B560**]{} (1999) 33; A. Denner, S. Dittmaier, M. Roth and D. Wackeroth, Nucl. Phys. [**B587**]{} (2000) 67; A. Denner, S. Dittmaier, M. Roth and D. Wackeroth, Eur. Phys. J. [**C20**]{} (2001) 201; A. Denner, S. Dittmaier, M. Roth and D. Wackeroth, Comp. Phys. Commun. [**153**]{} (2003) 462.

R. Chierici and F. Cossutti, Eur. Phys. J. [**C23**]{} (2002) 65.

S. Jadach, W. Placzek, M. Skrzypek, B.F.L. Ward and Z. Was, Phys. Lett. [**B523**]{} (2001) 117.

R. Brunelière [*et al.*]{}, Phys. Lett. [**B533**]{} (2002) 75.

A. Ballestrero, R. Chierici, F. Cossutti and E. Migliore, Comp. Phys. Commun. [**152**]{} (2003) 175.

E. Accomando and A. Ballestrero, Comp. Phys. Commun. [**99**]{} (1997) 270; E. Accomando, A. Ballestrero and E. Maina, Comp. Phys. Commun. [**150**]{} (2003) 166.

S. Jadach, W. Placzek, M. Skrzypek, B.F.L. Ward and Z. Was, Comp. Phys. Commun. [**119**]{} (1999) 272.

S. Jadach, W. Placzek, M. Skrzypek, B.F.L. Ward and Z. Was, Comp. Phys. Commun. [**140**]{} (2001) 475.

Y. Kurihara, J. Fujimoto, T. Munehisa and Y. Shimizu, Progress of Theoretical Physics [**96**]{} (1996) 1223.

E. Barberio and Z. Was, Comp. Phys. Commun. [**79**]{} (1994) 291.

G. Nanava and Z. Was, Acta Phys. Polon. [**B34**]{} (2003) 4561.

R.G. Stuart, Nucl. Phys. [**B498**]{} (1997) 28, and references therein.

W. Beenakker [*et al.*]{}, [*WW Cross-Sections and Distributions*]{}, in Ref. [@lep2], Vol. 1, 79.

A.P. Chapovsky and V.A. Khoze, Eur. Phys. J. [**C9**]{} (1999) 449.

D. Bardin, A. Leike, T. Riemann and M. Sachwitz, Phys. Lett. [**B206**]{} (1988) 539.

B. Efron, “Computers and the Theory of Statistics”, SIAM Rev. [**21**]{} (1979) 460; B. Efron and R. Tibshirani, “An Introduction to the Bootstrap”, Chapman & Hall 1993, ISBN 0-412-04231-2.

DELPHI Collab., P. Abreu [*et al.*]{}, Phys. Lett. [**B511**]{} (2001) 159.

T. Sjöstrand [*et al.*]{}, Comp. Phys. Commun. [**135**]{} (2001) 238.
---
---
\
[**J.N. Alonso Álvarez$^{1}$, J.M. Fernández Vilaboa$^{2}$, R. González Rodríguez$^{3}$**]{}
\
$^{1}$ Departamento de Matemáticas, Universidad de Vigo, Campus Universitario Lagoas-Marcosende, E-36280 Vigo, Spain (e-mail: [email protected]) \
$^{2}$ Departamento de Álxebra, Universidad de Santiago de Compostela. E-15771 Santiago de Compostela, Spain (e-mail: [email protected]) \
$^{3}$ Departamento de Matemática Aplicada II, Universidad de Vigo, Campus Universitario Lagoas-Marcosende, E-36310 Vigo, Spain (e-mail: [email protected]) \
[**Abstract**]{} In this paper we show that weak Hopf (co)quasigroups can be characterized by a Galois-type condition. Taking into account that this notion generalizes the ones of Hopf (co)quasigroup and weak Hopf algebra, we obtain as a consequence the first fundamental theorem for Hopf (co)quasigroups and a characterization of weak Hopf algebras in terms of bijectivity of a Galois-type morphism (also called fusion morphism).
[**Keywords.**]{} Hopf algebra, weak Hopf algebra, Hopf (co)quasigroup, weak Hopf (co)quasigroup, Galois extension.
[**MSC 2010:**]{} 18D10, 16T05, 17A30, 20N05.
Introduction
============
The notion of Hopf algebra and its generalizations appeared as useful tools in relation with many branches of mathematics, such as algebraic geometry, number theory, Lie theory, Galois theory, quantum group theory and so on. A common principle to obtain generalizations of the original notion of Hopf algebra is to weaken some of the axioms of its definition. For example, if one does not force the coalgebra structure to respect the unit of the algebra structure, one is led to weak Hopf algebras. In a different way, the weakening of associativity leads to Hopf quasigroups and quasi-Hopf algebras.
Weak Hopf algebras (or quantum groupoids in the terminology of Nikshych and Vainerman [@NV]) were introduced by Böhm, Nill and Szlachányi [@bohm] as a new generalization of Hopf algebras and groupoid algebras. A weak Hopf algebra $H$ in a braided monoidal category [@IND] is an object that has both a monoid and a comonoid structure, with some compatibility relations between them. The main difference with other Hopf algebraic constructions is that weak Hopf algebras are coassociative but the coproduct is not required to preserve the unit or, equivalently, the counit is not required to be a monoid morphism. Some motivations to study weak Hopf algebras come from the following facts: firstly, just as group algebras and their duals are the natural examples of Hopf algebras, groupoid algebras and their duals provide examples of weak Hopf algebras; secondly, these algebraic structures have a remarkable connection with the theory of algebra extensions, important applications in the study of dynamical twists of Hopf algebras, and a deep link with quantum field theories and operator algebras [@NV].
On the other hand, Hopf (co)quasigroups were introduced in [@Majidesfera] in order to understand the structure and relevant properties of the algebraic $7$-sphere. They are non-(co)associative generalizations of Hopf algebras. As in the quasi-Hopf setting, Hopf quasigroups are not associative, but the lack of this property is compensated by some axioms involving the antipode. The concept of Hopf quasigroup is a particular instance of the notion of unital coassociative $H$-bialgebra introduced in [@PI2].
Recently [@AFG-Weak-quasi], the authors have introduced a new generalization of Hopf algebras (called weak Hopf (co)quasigroups) which encompasses weak Hopf algebras and Hopf (co)quasigroups. A family of non-trivial examples of weak Hopf quasigroups can be obtained by working with bigroupoids, i.e. bicategories where every $1$-cell is an equivalence and every $2$-cell is an isomorphism. Moreover, many properties of these algebraic structures remain valid under this unified approach (in particular, the Fundamental Theorem of Hopf Modules associated to a weak Hopf quasigroup [@AFG-Weak-quasi]), and it is very natural to ask for other well-known properties related to Hopf algebras. In particular, Nakajima [@N] gave a characterization of ordinary Hopf algebras in terms of bijectivity of right or left Galois maps (also called fusion morphisms in [@Street]). This result was extended by Schauenburg [@S] to weak Hopf algebras, and by Brzeziński [@Brz] to Hopf (co)quasigroups. The main purpose of this work is to give a similar characterization in the weak Hopf (co)quasigroup setting. More precisely, we show that a weak Hopf (co)quasigroup satisfies right and left Galois-type conditions whose Galois morphisms admit almost right and left (co)linear inverses, and conversely. As a consequence we get the characterization of Hopf (co)quasigroups given by Brzeziński [@Brz] (called the First Fundamental Theorem for Hopf (co)quasigroups), and the one obtained by Schauenburg [@S] for weak Hopf algebras.
A characterization of weak Hopf quasigroups
===========================================
Throughout this paper $\mathcal C$ denotes a strict monoidal category with tensor product ${\otimes}$ and unit object $K$. For each object $M$ in $ \mathcal
C$, we denote the identity morphism by $id_{M}:M\rightarrow M$ and, for simplicity of notation, given objects $M$, $N$, $P$ in $\mathcal
C$ and a morphism $f:M\rightarrow N$, we write $P{\otimes}f$ for $id_{P}{\otimes}f$ and $f {\otimes}P$ for $f{\otimes}id_{P}$.
From now on we also assume that $\mathcal C$ admits equalizers and coequalizers. Then every idempotent morphism splits, i.e., for every morphism $\nabla_{Y}:Y\rightarrow Y$ such that $\nabla_{Y}=\nabla_{Y}{\circ}\nabla_{Y}$, there exist an object $Z$ and morphisms $i_{Y}:Z\rightarrow Y$ and $p_{Y}:Y\rightarrow Z$ such that $\nabla_{Y}=i_{Y}{\circ}p_{Y}$ and $p_{Y}{\circ}i_{Y} =id_{Z}$.
Also we assume that $\mathcal C$ is braided, that is: for all $M$ and $N$ objects in $\mathcal C$, there is a natural isomorphism $c_{M, N}:M{\otimes}N\rightarrow N{\otimes}M$, called the braiding, satisfying the Hexagon Axiom (see [@JS] for generalities). If the braiding satisfies $c_{N,M}{\circ}c_{M,N}=id_{M{\otimes}N}$, the category $\mathcal C$ will be called symmetric.
By a unital magma in $\mathcal C$ we understand a triple $A=(A, \eta_{A}, \mu_{A})$ where $A$ is an object in $\mathcal C$ and $\eta_{A}:K\rightarrow A$ (unit), $\mu_{A}:A{\otimes}A \rightarrow A$ (product) are morphisms in $\mathcal C$ such that $\mu_{A}{\circ}(A{\otimes}\eta_{A})=id_{A}=\mu_{A}{\circ}(\eta_{A}{\otimes}A)$. If $\mu_{A}$ is associative, that is, $\mu_{A}{\circ}(A{\otimes}\mu_{A})=\mu_{A}{\circ}(\mu_{A}{\otimes}A)$, the unital magma will be called a monoid in $\mathcal C$. Given two unital magmas (monoids) $A= (A, \eta_{A}, \mu_{A})$ and $B=(B, \eta_{B}, \mu_{B})$, $f:A\rightarrow B$ is a morphism of unital magmas (monoids) if $\mu_{B}{\circ}(f{\otimes}f)=f{\circ}\mu_{A}$ and $ f{\circ}\eta_{A}= \eta_{B}$.
By duality, a counital comagma in $\mathcal C$ is a triple ${D} = (D, \varepsilon_{D}, \delta_{D})$ where $D$ is an object in $\mathcal C$ and $\varepsilon_{D}: D\rightarrow K$ (counit), $\delta_{D}:D\rightarrow D{\otimes}D$ (coproduct) are morphisms in $\mathcal C$ such that $(\varepsilon_{D}{\otimes}D){\circ}\delta_{D}= id_{D}=(D{\otimes}\varepsilon_{D}){\circ}\delta_{D}$. If $\delta_{D}$ is coassociative, that is, $(\delta_{D}{\otimes}D){\circ}\delta_{D}= (D{\otimes}\delta_{D}){\circ}\delta_{D}$, the counital comagma will be called a comonoid. If ${D} = (D, \varepsilon_{D}, \delta_{D})$ and ${ E} = (E, \varepsilon_{E}, \delta_{E})$ are counital comagmas (comonoids), $f:D\rightarrow E$ is a morphism of counital comagmas (comonoids) if $(f{\otimes}f){\circ}\delta_{D} =\delta_{E}{\circ}f$ and $\varepsilon_{E}{\circ}f =\varepsilon_D.$
If $A$, $B$ are unital magmas (monoids) in $\mathcal C$, the object $A{\otimes}B$ is a unital magma (monoid) in $\mathcal C$ where $\eta_{A{\otimes}B}=\eta_{A}{\otimes}\eta_{B}$ and $\mu_{A{\otimes}B}=(\mu_{A}{\otimes}\mu_{B}){\circ}(A{\otimes}c_{B,A}{\otimes}B).$ In a dual way, if $D$, $E$ are counital comagmas (comonoids) in $\mathcal C$, $D{\otimes}E$ is a counital comagma (comonoid) in $\mathcal C$ where $\varepsilon_{D{\otimes}E}=\varepsilon_{D}{\otimes}\varepsilon_{E}$ and $\delta_{D{\otimes}E}=(D{\otimes}c_{D,E}{\otimes}E){\circ}( \delta_{D}{\otimes}\delta_{E}).$
Moreover, if $D$ is a comagma and $A$ a magma, given two morphisms $f,g:D\rightarrow A$ we will denote by $f\ast g$ its convolution product in $\mathcal C$, that is $f\ast g=\mu_{A}{\circ}(f{\otimes}g){\circ}\delta_{D}$.
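When ${\mathcal C}$ is the category of vector spaces over a field and $c$ is the flip, this is the usual convolution product; in Sweedler notation, writing $\delta_{D}(d)=d_{(1)}{\otimes}d_{(2)}$, it reads (a sketch under these assumptions) $$(f\ast g)(d)=f(d_{(1)})\,g(d_{(2)}).$$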
Let $A$ be a monoid. The pair $(M, \phi_M)$ is a right $A$-module if $M$ is an object in $\mathcal C$ and $\phi_M:A{\otimes}M\rightarrow M$ is a morphism in $\mathcal C$ such that $\phi_M{\circ}(\eta_A{\otimes}M)=id_M$ and $\phi_M{\circ}(A{\otimes}\phi_M)=\phi_M{\circ}(\mu_A{\otimes}M)$. Given two right $A$-modules $(M, \phi_M)$ and $(N, \phi_N)$, a map $f:M\rightarrow N$ is a morphism of right $A$-modules if $\phi_N{\circ}(A{\otimes}f)=f{\circ}\phi_M$. We shall denote by $\mathcal {C}_A$ the category of right $A$-modules. In an analogous way we can define the category of left $A$-modules and we denote it by $_A\mathcal {C}$.
Let $D$ be a comonoid. The pair $(M, \rho_M)$ is a right $D$-comodule if $M$ is an object in $\mathcal C$ and $\rho_M:M\rightarrow M{\otimes}D$ is a morphism in $\mathcal C$ satisfying that $(M{\otimes}\varepsilon_D){\circ}\rho_M=id_M$ and $(\rho_M{\otimes}D){\circ}\rho_M=(M{\otimes}\delta_D){\circ}\rho_M$. Given two right $D$-comodules $(M, \rho_M)$ and $(N, \rho_N)$, a map $f:M\rightarrow N$ is a morphism of right $D$-comodules if $(f{\otimes}D){\circ}\rho_M=\rho_N{\circ}f$. We shall denote by $\mathcal {C}^D$ the category of right $D$-comodules. In an analogous way we can define the category of left $D$-comodules and we denote it by $^D\mathcal {C}$.
Now we recall the notion of weak Hopf quasigroup we introduced in [@AFG-Weak-quasi].
\[Weak-Hopf-quasigroup\]
A weak Hopf quasigroup $H$ in ${\mathcal
C}$ is a unital magma $(H, \eta_H, \mu_H)$ and a comonoid $(H,\varepsilon_H, \delta_H)$ such that the following axioms hold:
- (a1) $\delta_{H}{\circ}\mu_{H}=(\mu_{H}{\otimes}\mu_{H}){\circ}\delta_{H{\otimes}H}.$
- (a2) $\varepsilon_{H}{\circ}\mu_{H}{\circ}(\mu_{H}{\otimes}H)=\varepsilon_{H}{\circ}\mu_{H}{\circ}(H{\otimes}\mu_{H})$
- $= ((\varepsilon_{H}{\circ}\mu_{H}){\otimes}(\varepsilon_{H}{\circ}\mu_{H})){\circ}(H{\otimes}\delta_{H}{\otimes}H)$
- $=((\varepsilon_{H}{\circ}\mu_{H}){\otimes}(\varepsilon_{H}{\circ}\mu_{H})){\circ}(H{\otimes}(c_{H,H}^{-1}{\circ}\delta_{H}){\otimes}H).$
- (a3) $(\delta_{H}{\otimes}H){\circ}\delta_{H}{\circ}\eta_{H}=(H{\otimes}\mu_{H}{\otimes}H){\circ}((\delta_{H}{\circ}\eta_{H}) {\otimes}(\delta_{H}{\circ}\eta_{H}))$
- $=(H{\otimes}(\mu_{H}{\circ}c_{H,H}^{-1}){\otimes}H){\circ}((\delta_{H}{\circ}\eta_{H}) {\otimes}(\delta_{H}{\circ}\eta_{H})).$
- (a4) There exists a morphism $\lambda_{H}:H\rightarrow H$ (called the antipode of $H$) such that, if we denote by $\Pi_{H}^{L}$ (target morphism) and by $\Pi_{H}^{R}$ (source morphism) the morphisms $$\Pi_{H}^{L}=((\varepsilon_{H}{\circ}\mu_{H}){\otimes}H){\circ}(H{\otimes}c_{H,H}){\circ}((\delta_{H}{\circ}\eta_{H}){\otimes}H),$$ $$\Pi_{H}^{R}=(H{\otimes}(\varepsilon_{H}{\circ}\mu_{H})){\circ}(c_{H,H}{\otimes}H){\circ}(H{\otimes}(\delta_{H}{\circ}\eta_{H})),$$ then:
- (a4-1) $\Pi_{H}^{L}=id_{H}\ast \lambda_{H}.$
- (a4-2) $\Pi_{H}^{R}=\lambda_{H}\ast id_{H}.$
- (a4-3) $\lambda_{H}\ast \Pi_{H}^{L}=\Pi_{H}^{R}\ast \lambda_{H}= \lambda_{H}.$
- (a4-4) $\mu_H\circ (\lambda_H{\otimes}\mu_H)\circ (\delta_H{\otimes}H)=\mu_{H}{\circ}(\Pi_{H}^{R}{\otimes}H).$
- (a4-5) $\mu_H\circ (H{\otimes}\mu_H)\circ (H{\otimes}\lambda_H{\otimes}H)\circ (\delta_H{\otimes}H)=\mu_{H}{\circ}(\Pi_{H}^{L}{\otimes}H).$
- (a4-6) $\mu_H\circ(\mu_H{\otimes}\lambda_H)\circ (H{\otimes}\delta_H)=\mu_{H}{\circ}(H{\otimes}\Pi_{H}^{L}).$
- (a4-7) $\mu_H\circ (\mu_H{\otimes}H)\circ (H{\otimes}\lambda_H{\otimes}H)\circ (H{\otimes}\delta_H)=\mu_{H}{\circ}(H{\otimes}\Pi_{H}^{R}).$
Note that, if in the previous definition the triple $(H, \eta_H, \mu_H)$ is a monoid, we obtain the notion of weak Hopf algebra in a braided category introduced in [@AFG1] (see also [@IND]). Under this assumption, if ${\mathcal C}$ is symmetric, we have the monoidal version of the original definition of weak Hopf algebra introduced by Böhm, Nill and Szlachányi in [@bohm]. On the other hand, if $\varepsilon_H$ and $\delta_H$ are morphisms of unital magmas, (equivalently, $\eta_{H}$, $\mu_{H}$ are morphisms of counital comagmas), $\Pi_{H}^{L}=\Pi_{H}^{R}=\eta_{H}{\otimes}\varepsilon_{H}$ and, as a consequence, we have the notion of Hopf quasigroup defined by Klim and Majid in [@Majidesfera] in the category of vector spaces over a field ${\Bbb F}$. (Note that in this case there is no difference between the definitions for the symmetric and the braided settings).
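For orientation, when ${\mathcal C}$ is the category of vector spaces with the flip braiding and we use the Sweedler notation introduced above, writing $\delta_{H}(\eta_{H}(1))=1_{(1)}{\otimes}1_{(2)}$, the target and source morphisms take the familiar form (a sketch under these assumptions) $$\Pi_{H}^{L}(h)=\varepsilon_{H}(1_{(1)}h)\,1_{(2)},\qquad \Pi_{H}^{R}(h)=1_{(1)}\,\varepsilon_{H}(h 1_{(2)}).$$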
Now we recall some properties related to weak Hopf quasigroups that we will need in the sequel. The proofs are identical to the ones given in [@AFG-Weak-quasi], because condition (a4) of Definition \[Weak-Hopf-quasigroup\] is not needed for them.
\[propertieswithouta4\] Let $H$ be a unital magma and comonoid such that conditions (a1), (a2) and (a3) of Definition \[Weak-Hopf-quasigroup\] hold. Define $\overline{\Pi}_{H}^{L}$ and $\overline{\Pi}_{H}^{R}$ by $$\overline{\Pi}_{H}^{L}=(H{\otimes}(\varepsilon_{H}{\circ}\mu_{H})){\circ}((\delta_{H}{\circ}\eta_{H}){\otimes}H)$$ and $$\overline{\Pi}_{H}^{R}=((\varepsilon_{H}{\circ}\mu_{H}){\otimes}H){\circ}(H{\otimes}(\delta_{H}{\circ}\eta_{H})).$$ Then the morphisms $\Pi_{H}^{L}$, $\Pi_{H}^{R}$, $\overline{\Pi}_{H}^{L}$ and $\overline{\Pi}_{H}^{R}$ are idempotent. Moreover, the following equalities $$\label{unidadpi}
\Pi_{H}^{L}{\circ}\eta_H=\Pi_{H}^{R}{\circ}\eta_H=\overline{\Pi}_{H}^{L}{\circ}\eta_H=\overline{\Pi}_{H}^{R}{\circ}\eta_H=\eta_H,$$ $$\label{counidadpi}
\varepsilon_H{\circ}\Pi_{H}^{L}=\varepsilon_H{\circ}\Pi_{H}^{R}=\varepsilon_H{\circ}\overline{\Pi}_{H}^{L}=\varepsilon_H{\circ}\overline{\Pi}_{H}^{R}=\varepsilon_H,$$ $$\label{pi-l}
\Pi_{H}^{L}\ast id_{H}=id_{H}\ast \Pi_{H}^{R}=id_{H},$$ $$\label{mu-pi-l}
\mu_{H}{\circ}(H{\otimes}\Pi_{H}^{L})=((\varepsilon_{H}{\circ}\mu_{H}){\otimes}H){\circ}(H{\otimes}c_{H,H}){\circ}(\delta_{H}{\otimes}H),$$ $$\label{mu-pi-r}
\mu_{H}{\circ}(\Pi_{H}^{R}{\otimes}H)=(H{\otimes}(\varepsilon_{H}{\circ}\mu_{H})){\circ}(c_{H,H}{\otimes}H){\circ}(H{\otimes}\delta_{H}),$$ $$\label{mu-pi-l-var}
\mu_{H}{\circ}(H{\otimes}\overline{\Pi}_{H}^{L})=(H{\otimes}(\varepsilon_{H}{\circ}\mu_{H})){\circ}(\delta_{H}{\otimes}H),$$ $$\label{mu-pi-r-var}
\mu_{H}{\circ}(\overline{\Pi}_{H}^{R}{\otimes}H)=((\varepsilon_{H}{\circ}\mu_{H}){\otimes}H){\circ}(H{\otimes}\delta_{H}),$$ $$\label{delta-pi-l}
(H{\otimes}\Pi_{H}^{L}){\circ}\delta_{H}=(\mu_{H}{\otimes}H){\circ}(H{\otimes}c_{H,H}){\circ}((\delta_{H}{\circ}\eta_{H}){\otimes}H),$$ $$\label{delta-pi-r}
(\Pi_{H}^{R}{\otimes}H){\circ}\delta_{H}=(H{\otimes}\mu_{H}){\circ}(c_{H,H}{\otimes}H){\circ}(H{\otimes}(\delta_{H}{\circ}\eta_{H})),$$ $$\label{pi-l-mu-pi-l}
\Pi_{H}^{L}{\circ}\mu_{H}{\circ}(H{\otimes}\Pi_{H}^{L})=\Pi_{H}^{L}{\circ}\mu_{H}=\Pi_{H}^{L}{\circ}\mu_{H}{\circ}(H{\otimes}\overline{\Pi}_{H}^{L}),$$ $$\label{pi-delta-mu-pi-3}
(H{\otimes}\Pi^{L}_{H})\circ \delta_{H}\circ\Pi^{L}_{H}=\delta_{H}\circ \Pi^{L}_{H}=(H{\otimes}\overline{\Pi}^{R}_{H})\circ \delta_{H}\circ\Pi^{L}_{H},$$ $$\label{pi-l-barra-delta}
(\overline{\Pi}_{H}^{L}{\otimes}H){\circ}\delta_{H}=(H{\otimes}\mu_{H}){\circ}((\delta_{H}{\circ}\eta_{H}){\otimes}H),$$ $$\label{pi-r-barra-delta}
(H{\otimes}\overline{\Pi}_{H}^{R}){\circ}\delta_{H}=(\mu_{H}{\otimes}H){\circ}(H{\otimes}(\delta_{H}{\circ}\eta_{H})),$$ $$\label{pi-delta-mu-pi-4}
(\Pi^{R}_{H}{\otimes}H)\circ \delta_{H}\circ \Pi^{R}_{H}=\delta_{H}\circ \Pi^{R}_{H}=(\overline{\Pi}^{L}_{H}{\otimes}H){\circ}\delta_H{\circ}\Pi^{R}_{H},$$ $$\label{doblepiLmu}
\mu_H{\circ}(\Pi^{L}_{H}{\otimes}\Pi^{L}_{H})=\Pi^{L}_{H}{\circ}\mu_H{\circ}(\Pi^{L}_{H}{\otimes}\Pi^{L}_{H}),$$ $$\label{doblepiRmu}
\mu_H{\circ}(\Pi^{R}_{H}{\otimes}\Pi^{R}_{H})=\Pi^{R}_{H}{\circ}\mu_H{\circ}(\Pi^{R}_{H}{\otimes}\Pi^{R}_{H}),$$ $$\label{pi-composition-2}
\overline{\Pi}_{H}^{R}{\circ}\Pi_{H}^{L}=\Pi_{H}^{L},\;\;\; \overline{\Pi}_{H}^{L}{\circ}\Pi_{H}^{R}=\Pi_{H}^{R},\;\;\;
\Pi_{H}^{L}{\circ}\overline{\Pi}_{H}^{L}=\Pi_{H}^{L},\;\;\; \Pi_{H}^{R}{\circ}\overline{\Pi}_{H}^{R}=\Pi_{H}^{R},$$ hold.
The following properties are also proved in [@AFG-Weak-quasi], but we give a slightly different proof without using (a4) of Definition \[Weak-Hopf-quasigroup\].
\[otherproperties\] Let $H$ be a unital magma and comonoid such that conditions (a1), (a2) and (a3) of Definition \[Weak-Hopf-quasigroup\] hold. Then we have that $$\label{PiLRconvolution}
\Pi_{H}^{L}\ast \Pi_{H}^{L}=\Pi_{H}^{L},\;\;\; \Pi_{H}^{R}\ast \Pi_{H}^{R}=\Pi_{H}^{R}$$ $$\label{aux-1-monoid-hl}
\delta_{H}{\circ}\mu_{H}{\circ}(\Pi_{H}^{L}{\otimes}H)=(\mu_{H}{\otimes}H){\circ}(\Pi_{H}^{L}{\otimes}\delta_{H}),$$ $$\label{aux-2-monoid-hl}
\delta_{H}{\circ}\mu_{H}{\circ}(H{\otimes}\Pi_{H}^{L})=(\mu_{H}{\otimes}H){\circ}(H{\otimes}c_{H,H}){\circ}(\delta_{H}{\otimes}\Pi_{H}^{L}).$$ $$\label{aux-1-monoid-hr}
\delta_{H}{\circ}\mu_{H}{\circ}(H{\otimes}\Pi_{H}^{R})=(H{\otimes}\mu_{H}){\circ}(\delta_{H}{\otimes}\Pi_{H}^{R}),$$ $$\label{aux-2-monoid-hr}
\delta_{H}{\circ}\mu_{H}{\circ}(\Pi_{H}^{R}{\otimes}H)=(H{\otimes}\mu_{H}){\circ}(c_{H,H}{\otimes}H){\circ}(\Pi_{H}^{R}{\otimes}\delta_{H}).$$ $$\label{monoid-hl-1}
\mu_{H}{\circ}((\mu_{H}{\circ}(\Pi_{H}^{L}{\otimes}H)){\otimes}H)=\mu_{H}{\circ}(\Pi_{H}^{L}{\otimes}\mu_{H}),$$ $$\label{monoid-hl-2}
\mu_{H}{\circ}(H{\otimes}(\mu_{H}{\circ}(\Pi_{H}^{L}{\otimes}H)))=\mu_{H}{\circ}((\mu_{H}{\circ}(H{\otimes}\Pi_{H}^{L})){\otimes}H),$$ $$\label{monoid-hl-3}
\mu_{H}{\circ}(H{\otimes}(\mu_{H}{\circ}(H{\otimes}\Pi_{H}^{L})))=\mu_{H}{\circ}(\mu_{H}{\otimes}\Pi_{H}^{L}),$$ and similar equalities to (\[monoid-hl-1\]), (\[monoid-hl-2\]) and (\[monoid-hl-3\]) with $\Pi_{H}^{R}$ instead of $\Pi_{H}^{L}$ also hold.
We begin by showing the first equality of (\[PiLRconvolution\]), the second one is similar. Using the definition of $\Pi_{H}^{L}$, the naturalness of $c$, and (a2) and (a1) of Definition \[Weak-Hopf-quasigroup\],
- $\hspace{0.38cm} \Pi_{H}^{L}\ast \Pi_{H}^{L}$
- $=(\varepsilon_{H}{\otimes}H){\circ}\mu_{H{\otimes}H}{\circ}(H{\otimes}H{\otimes}((\mu_H{\otimes}H){\circ}(H{\otimes}c_{H,H}))){\circ}((\delta_H{\circ}\eta_H){\otimes}(\delta_H{\circ}\eta_H){\otimes}H)$
- $=((\varepsilon_{H}{\circ}\mu_H){\otimes}H){\circ}(H{\otimes}c_{H,H}){\circ}(((\mu_{H{\otimes}H}{\circ}(\delta_H{\otimes}\delta_H)){\otimes}H){\circ}(\eta_H{\otimes}\eta_H{\otimes}H))$
- $=\Pi_{H}^{L}.$
The proof of (\[aux-1-monoid-hl\]) and (\[aux-2-monoid-hl\]) is the same as the one given in [@AFG-Cleftwhq], and the equalities (\[aux-1-monoid-hr\]) and (\[aux-2-monoid-hr\]) follow a similar pattern. As for the last equalities, the proof is somewhat different from the one given in [@AFG-Cleftwhq] because in this case we cannot use the antipode. We only show (\[monoid-hl-2\]), the others being analogous. Using that $H$ is a comonoid, condition (a1) of Definition \[Weak-Hopf-quasigroup\] (twice), (\[aux-2-monoid-hl\]), condition (a2) of Definition \[Weak-Hopf-quasigroup\] and (\[aux-1-monoid-hl\]),
- $\hspace{0.38cm} \mu_{H}{\circ}(H{\otimes}(\mu_{H}{\circ}(\Pi_{H}^{L}{\otimes}H)))$
- $=(\varepsilon_H{\otimes}H){\circ}\delta_H{\circ}\mu_{H}{\circ}(H{\otimes}(\mu_{H}{\circ}(\Pi_{H}^{L}{\otimes}H)))$
- $=(\varepsilon_H{\otimes}H){\circ}\mu_{H{\otimes}H}{\circ}((\delta_H{\circ}\mu_H{\circ}(H{\otimes}\Pi_{H}^{L})){\otimes}\delta_H)$
- $=((\varepsilon_H{\circ}\mu_{H}{\circ}(\mu_H{\otimes}H)){\otimes}\mu_H){\circ}(H{\otimes}H{\otimes}c_{H,H}{\otimes}H){\circ}(H{\otimes}c_{H,H}{\otimes}H{\otimes}H){\circ}(\delta_H{\otimes}\Pi_{H}^{L} {\otimes}\delta_H)$
- $=(\varepsilon_H{\otimes}H){\circ}\mu_{H{\otimes}H}{\circ}(\delta_H{\otimes}((\mu_H{\otimes}H){\circ}(\Pi_{H}^{L}{\otimes}\delta_H)))$
- $=(\varepsilon_H{\otimes}H){\circ}\mu_{H{\otimes}H}{\circ}(\delta_H{\otimes}\delta_H){\circ}(H{\otimes}(\mu_H{\circ}(\Pi_{H}^{L}{\otimes}H)))$
- $=\mu_{H}{\circ}((\mu_{H}{\circ}(H{\otimes}\Pi_{H}^{L})){\otimes}H).$
\[HsubLmonoide\]
Let $H$ be a unital magma and comonoid such that conditions (a1), (a2) and (a3) of Definition \[Weak-Hopf-quasigroup\] hold. Denote by $H_L=Im (\Pi_{H}^{L})$ and let $p_{L}:H\rightarrow H_L$, $i_{L}:H_L\rightarrow H$ be the morphisms such that $i_L{\circ}p_L=\Pi_{H}^{L}$ and $p_{L}{\circ}i_{L}=id_{H_L}$. Then the equalities (\[monoid-hl-1\]), (\[monoid-hl-2\]) and (\[monoid-hl-3\]) imply that $(H_L, \eta_{H_L}=p_L{\circ}\eta_H, \mu_{H_L}=p_L{\circ}\mu_H{\circ}(i_L{\otimes}i_L))$ is a monoid. Therefore we can consider the category of right (left) $H_L$-modules, denoted by $\mathcal C_{H_L}$ ($_{H_L}\mathcal C$). In particular, $(H, \phi_{H}^{L}=\mu_H{\circ}(H{\otimes}i_L))$ is in $\mathcal C_{H_L}$ and $(H, \varphi_{H}^{L}=\mu_H{\circ}(i_L{\otimes}H))$ is in $_{H_L}\mathcal C$. Moreover, if $(M, \phi_M)$ is a right $H_L$-module, we define $M{\otimes}_{H_L}H$ by the following coequalizer diagram:
$$\setlength{\unitlength}{1mm}
\begin{picture}(101.00,10.00)
\put(22.00,8.00){\vector(1,0){40.00}}
\put(22.00,4.00){\vector(1,0){40.00}}
\put(75.00,6.00){\vector(1,0){21.00}}
\put(43.00,11.00){\makebox(0,0)[cc]{$M{\otimes}\varphi_{H}^{L}$ }}
\put(43.00,0.00){\makebox(0,0)[cc]{$\phi_M{\otimes}H$ }}
\put(85.00,9.00){\makebox(0,0)[cc]{$n_M$ }}
\put(10.00,6.00){\makebox(0,0)[cc]{$ M{\otimes}H_L{\otimes}H$ }}
\put(70.00,6.00){\makebox(0,0)[cc]{$M{\otimes}H$ }}
\put(105.00,6.00){\makebox(0,0)[cc]{$M{\otimes}_{H_L}H$ }}
\end{picture}$$
Similar considerations can be done for $H_R=Im (\Pi_{H}^{R})$.
Now let $H$ be a unital magma and comonoid such that conditions (a1), (a2) and (a3) of Definition \[Weak-Hopf-quasigroup\] hold. Define the morphisms, called $\Omega$-morphisms, $$\Omega_{L}^{1}=(\mu_H{\otimes}H){\circ}(H{\otimes}\Pi_{H}^{L}{\otimes}H){\circ}(H{\otimes}\delta_H),$$ $$\Omega_{R}^{1}=(\mu_H{\otimes}H){\circ}(H{\otimes}\Pi_{H}^{R}{\otimes}H){\circ}(H{\otimes}\delta_H),$$ $$\Omega_{L}^{2}=(H{\otimes}\mu_H){\circ}(H{\otimes}\Pi_{H}^{L}{\otimes}H){\circ}(\delta_H{\otimes}H)$$ and $$\Omega_{R}^{2}=(H{\otimes}\mu_H){\circ}(H{\otimes}\Pi_{H}^{R}{\otimes}H){\circ}(\delta_H{\otimes}H).$$
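In the vector-space setting used for illustration above, writing $\delta_{H}(h)=h_{(1)}{\otimes}h_{(2)}$, the $\Omega$-morphisms act as (a sketch under these assumptions) $$\Omega_{L}^{1}(h{\otimes}g)=h\,\Pi_{H}^{L}(g_{(1)}){\otimes}g_{(2)},\qquad \Omega_{R}^{1}(h{\otimes}g)=h\,\Pi_{H}^{R}(g_{(1)}){\otimes}g_{(2)},$$ $$\Omega_{L}^{2}(h{\otimes}g)=h_{(1)}{\otimes}\Pi_{H}^{L}(h_{(2)})\,g,\qquad \Omega_{R}^{2}(h{\otimes}g)=h_{(1)}{\otimes}\Pi_{H}^{R}(h_{(2)})\,g.$$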
By Proposition \[otherproperties\], it is not difficult to see that these morphisms are idempotent. As a consequence, there exist objects $H\times_{L}^{1} H$, $H\times_{R}^{1} H$, $H\times_{L}^{2} H$ and $H\times_{R}^{2} H$ and morphisms $$q_{L}^{1}:H{\otimes}H \rightarrow H\times_{L}^{1} H\;\;\;, j_{L}^{1}:H\times_{L}^{1} H\rightarrow H{\otimes}H,$$ $$q_{R}^{1}:H{\otimes}H \rightarrow H\times_{R}^{1} H\;\;\;, j_{R}^{1}:H\times_{R}^{1} H\rightarrow H{\otimes}H,$$ $$q_{L}^{2}:H{\otimes}H \rightarrow H\times_{L}^{2} H\;\;\;, j_{L}^{2}:H\times_{L}^{2} H\rightarrow H{\otimes}H,$$ and $$q_{R}^{2}:H{\otimes}H \rightarrow H\times_{R}^{2} H\;\;\;, j_{R}^{2}:H\times_{R}^{2} H\rightarrow H{\otimes}H,$$ such that, for $\sigma\in \{L,R\}$ and $\alpha \in \{1,2\}$,
$$\label{Omegas} j_{\sigma}^{\alpha}{\circ}q_{\sigma}^{\alpha}=\Omega_{\sigma}^{\alpha} ,\;\;\; q_{\sigma}^{\alpha}{\circ}j_{\sigma}^{\alpha}=id_{H\times_{\sigma}^{\alpha}H}.$$
Finally, by conditions (\[monoid-hl-1\]) and (\[monoid-hl-3\]), it is easy to see that $$\label{muconomega}
(\mu_H{\otimes}H){\circ}(H{\otimes}\Omega_{\sigma}^{1})=(H{\otimes}\Omega_{\sigma}^{1}){\circ}(\mu_H{\otimes}H)$$ and $$\label{omegaconmu}
(H{\otimes}\mu_H){\circ}(\Omega_{\sigma}^{2}{\otimes}H)=(\Omega_{\sigma}^{2}{\otimes}H){\circ}(H{\otimes}\mu_H), \sigma\in \{L,R\}.$$
Note that the morphism $\Omega_{R}^{1}$ is the same as the one defined in [@AFG-Cleftwhq] under the name $\nabla_H$. The following lemma explains the meaning of the objects $H\times_{L}^{1} H$, $H\times_{R}^{1} H$, $H\times_{L}^{2} H$ and $H\times_{R}^{2} H$ by means of equalizer and coequalizer diagrams.
\[diagrams\] Let $H$ be a unital magma and comonoid such that conditions (a1), (a2) and (a3) of Definition \[Weak-Hopf-quasigroup\] hold. Then we have that:
- The diagrams $$\setlength{\unitlength}{1mm}
\begin{picture}(101.00,10.00)
\put(22.00,8.00){\vector(1,0){40.00}}
\put(22.00,4.00){\vector(1,0){40.00}}
\put(75.00,6.00){\vector(1,0){21.00}}
\put(43.00,11.00){\makebox(0,0)[cc]{$\phi_{H}^{L}{\otimes}H$ }}
\put(43.00,0.00){\makebox(0,0)[cc]{$H{\otimes}\varphi_{H}^{L}$ }}
\put(85.00,9.00){\makebox(0,0)[cc]{$q_{L}^{1}$ }}
\put(10.00,6.00){\makebox(0,0)[cc]{$ H{\otimes}H_L{\otimes}H$ }}
\put(70.00,6.00){\makebox(0,0)[cc]{$H{\otimes}H$ }}
\put(105.00,6.00){\makebox(0,0)[cc]{$H\times_{L}^{1} H$ }}
\end{picture}$$ and $$\setlength{\unitlength}{1mm}
\begin{picture}(101.00,10.00)
\put(22.00,8.00){\vector(1,0){40.00}}
\put(22.00,4.00){\vector(1,0){40.00}}
\put(75.00,6.00){\vector(1,0){21.00}}
\put(43.00,11.00){\makebox(0,0)[cc]{$\phi_{H}^{R}{\otimes}H$ }}
\put(43.00,0.00){\makebox(0,0)[cc]{$H{\otimes}\varphi_{H}^{R}$ }}
\put(85.00,9.00){\makebox(0,0)[cc]{$q_{R}^{2}$ }}
\put(10.00,6.00){\makebox(0,0)[cc]{$ H{\otimes}H_R{\otimes}H$ }}
\put(70.00,6.00){\makebox(0,0)[cc]{$H{\otimes}H$ }}
\put(105.00,6.00){\makebox(0,0)[cc]{$H\times_{R}^{2} H$ }}
\end{picture}$$ are coequalizer diagrams. By Remark \[HsubLmonoide\], we have that $H\times_{L}^{1} H\cong H{\otimes}_{H_L}H$ and $H\times_{R}^{2} H\cong H{\otimes}_{H_R}H$.
- The diagrams
$$\setlength{\unitlength}{3mm}
\begin{picture}(30,4)
\put(3,2){\vector(1,0){4}}
\put(11,2.5){\vector(1,0){15}}
\put(11,1.5){\vector(1,0){15}}
\put(1,2){\makebox(0,0)[cc]{$H\times_{L}^{2} H$}}
\put(9,2){\makebox(0,0)[cc]{$H{\otimes}H$}}
\put(30,2){\makebox(0,0)[cc]{$H{\otimes}H_L{\otimes}H$}}
\put(5.5,3){\makebox(0,0)[cc]{$j_{L}^{2}$}}
\put(19,3.5){\makebox(0,0)[cc]{$((H{\otimes}p_L){\circ}\delta_{H}){\otimes}H$}}
\put(19,0.5){\makebox(0,0)[cc]{$H{\otimes}((p_L{\otimes}H){\circ}\delta_{H})$}}
\end{picture}$$ and $$\setlength{\unitlength}{3mm}
\begin{picture}(30,4)
\put(3,2){\vector(1,0){4}}
\put(11,2.5){\vector(1,0){15}}
\put(11,1.5){\vector(1,0){15}}
\put(1,2){\makebox(0,0)[cc]{$H\times_{R}^{1} H$}}
\put(9,2){\makebox(0,0)[cc]{$H{\otimes}H$}}
\put(30,2){\makebox(0,0)[cc]{$H{\otimes}H_R{\otimes}H$}}
\put(5.5,3){\makebox(0,0)[cc]{$j_{R}^{1}$}}
\put(19,3.5){\makebox(0,0)[cc]{$((H{\otimes}p_R){\circ}\delta_{H}){\otimes}H$}}
\put(19,0.5){\makebox(0,0)[cc]{$H{\otimes}((p_R{\otimes}H){\circ}\delta_{H})$}}
\end{picture}$$ are equalizer diagrams.
$(i).$ We will give the computations for the first diagram; the proof for the other one is similar. First of all,
- $\hspace{0.38cm} \Omega_{L}^{1}{\circ}(H{\otimes}\varphi_{H}^{L})$
- $=((\mu_H{\circ}(H{\otimes}\Pi_{H}^{L})){\otimes}H){\circ}(H{\otimes}(\delta_H{\circ}\mu_H{\circ}(i_L{\otimes}H)))$
- $=((\mu_H{\circ}(H{\otimes}\Pi_{H}^{L})){\otimes}H){\circ}(H{\otimes}((\mu_H{\otimes}H){\circ}(i_L{\otimes}\delta_H)))$
- $=((\varepsilon_H{\circ}\mu_H{\circ}(H{\otimes}\mu_H)){\otimes}H{\otimes}H){\circ}(H{\otimes}H{\otimes}c_{H,H}{\otimes}H){\circ}(H{\otimes}c_{H,H}{\otimes}\delta_H){\circ}(\delta_H{\otimes}i_L{\otimes}H)$
- $=((\varepsilon_H{\circ}\mu_H{\circ}(\mu_H{\otimes}H)){\otimes}H{\otimes}H){\circ}(H{\otimes}H{\otimes}c_{H,H}{\otimes}H){\circ}(H{\otimes}c_{H,H}{\otimes}\delta_H){\circ}(\delta_H{\otimes}i_L{\otimes}H)$
- $=((\varepsilon_H{\circ}\mu_H){\otimes}H{\otimes}H){\circ}\delta_{H{\otimes}H}{\circ}((\mu_H{\circ}(H{\otimes}i_L)){\otimes}H)$
- $=\Omega_{L}^{1}{\circ}(\phi_{H}^{L}{\otimes}H),$
where the first equality follows by the definition of $\Omega_{L}^{1}$; the second one by (\[aux-1-monoid-hl\]); in the third and the last ones we use (\[mu-pi-l\]); the fourth one relies on (a2) of Definition \[Weak-Hopf-quasigroup\]; finally, the fifth equality follows by (\[aux-2-monoid-hl\]).
By composing on the left with $q_{L}^{1}$, we have that $q_{L}^{1}{\circ}(H{\otimes}\varphi_{H}^{L})=q_{L}^{1}{\circ}(\phi_{H}^{L}{\otimes}H)$. Now assume that $r:H{\otimes}H\rightarrow Q$ is a morphism such that $r{\circ}(H{\otimes}\varphi_{H}^{L})=r{\circ}(\phi_{H}^{L}{\otimes}H)$. Then the morphism $r{\circ}j_{L}^{1}:H\times_{L}^{1} H\rightarrow Q$ satisfies that $$r{\circ}j_{L}^{1}{\circ}q_{L}^{1}=r{\circ}\Omega_{L}^{1}=r{\circ}((\mu_H{\circ}(H{\otimes}(i_L{\circ}p_L))){\otimes}H){\circ}(H{\otimes}\delta_H)=r{\circ}(H{\otimes}(\Pi_{H}^{L}\ast id_H))=r,$$ and if $s:H\times_{L}^{1} H\rightarrow Q$ is such that $s{\circ}q_{L}^{1}=r$, then $s=s{\circ}q_{L}^{1}{\circ}j_{L}^{1}=r{\circ}j_{L}^{1}$.
$(ii).$ We only give the computations for the first diagram. Composing on the right with $q_{L}^{2}$ and on the left with $H{\otimes}i_L{\otimes}H$,
- $\hspace{0.38cm} (H{\otimes}\Pi_{H}^{L}{\otimes}H){\circ}(H{\otimes}\delta_H){\circ}\Omega_{L}^{2}$
- $=(H{\otimes}\Pi_{H}^{L}{\otimes}H){\circ}(H{\otimes}(\delta_H{\circ}\mu_H{\circ}(\Pi_{H}^{L}{\otimes}H))){\circ}(\delta_H{\otimes}H)$
- $=(H{\otimes}\Pi_{H}^{L}{\otimes}H){\circ}(H{\otimes}((\mu_H{\otimes}H){\circ}(\Pi_{H}^{L}{\otimes}\delta_H))){\circ}(\delta_H{\otimes}H)$
- $=(H{\otimes}(\Pi_{H}^{L}{\circ}\mu_H{\circ}(\Pi_{H}^{L}{\otimes}\overline{\Pi}_{H}^{L})){\otimes}H){\circ}(\delta_H{\otimes}\delta_H)$
- $=(H{\otimes}(\Pi_{H}^{L}{\circ}\mu_H){\otimes}H){\circ}(H{\otimes}\Pi_{H}^{L}{\otimes}((H{\otimes}\mu_H){\circ}((\delta_H{\circ}\eta_H){\otimes}H))){\circ}(\delta_H{\otimes}H)$
- $=(H{\otimes}\Pi_{H}^{L}{\otimes}\mu_H){\circ}(H{\otimes}((H{\otimes}\overline{\Pi}_{H}^{R}){\circ}\delta_H{\circ}\Pi_{H}^{L}){\otimes}H){\circ}(\delta_H{\otimes}H)$
- $=(H{\otimes}\Pi_{H}^{L}{\otimes}\mu_H){\circ}(H{\otimes}(\delta_H{\circ}\Pi_{H}^{L}){\otimes}H){\circ}(\delta_H{\otimes}H)$
- $=(H{\otimes}\Pi_{H}^{L}{\otimes}\mu_H){\circ}(H{\otimes}(((\varepsilon_H{\circ}\mu_H){\otimes}H){\circ}(H{\otimes}c_{H,H}){\circ}(\delta_H{\otimes}H)){\otimes}H{\otimes}H)$
- $\hspace{0.38cm} {\circ}(H{\otimes}H{\otimes}c_{H,H}{\otimes}H){\circ}(H{\otimes}(\delta_H{\circ}\eta_H){\otimes}H{\otimes}H){\circ}(\delta_H{\otimes}H)$
- $=(H{\otimes}(\Pi_{H}^{L}{\circ}\mu_H){\otimes}\mu_H){\circ}(H{\otimes}H{\otimes}c_{H,H}{\otimes}H){\circ}(H{\otimes}(\delta_H{\circ}\eta_H){\otimes}H{\otimes}H){\circ}(\delta_H{\otimes}H)$
- $=(H{\otimes}\Pi_{H}^{L}{\otimes}\mu_H){\circ}(H{\otimes}((H{\otimes}\Pi_{H}^{L}){\circ}\delta_H){\otimes}H){\circ}(\delta_H{\otimes}H)$
- $=(H{\otimes}\Pi_{H}^{L}{\otimes}H){\circ}(\delta_H{\otimes}H){\circ}\Omega_{L}^{2},$
where the first equality follows by the definition of $\Omega_{L}^{2}$; the second one by (\[aux-1-monoid-hl\]); in the third one we use (\[pi-l-mu-pi-l\]); the fourth equality relies on (\[pi-l-barra-delta\]); the fifth one follows by (\[pi-r-barra-delta\]); the sixth one by (\[pi-delta-mu-pi-3\]); the seventh one uses coassociativity and the definition of $\Pi_{H}^{L}$, the eighth one relies on (\[mu-pi-l\]); the ninth one uses (\[delta-pi-l\]); finally, the last one follows by coassociativity.
As a consequence, $(H{\otimes}((p_L{\otimes}H){\circ}\delta_H)){\circ}j_{L}^{2}=(((H{\otimes}p_L){\circ}\delta_H){\otimes}H){\circ}j_{L}^{2}$ and, if $r:Q\rightarrow H{\otimes}H$ is a morphism such that $(H{\otimes}((p_L{\otimes}H){\circ}\delta_H)){\circ}r=(((H{\otimes}p_L){\circ}\delta_H){\otimes}H){\circ}r$, it is easy to see that the morphism $q_{L}^{2}{\circ}r$ satisfies that $j_{L}^{2}{\circ}q_{L}^{2}{\circ}r=r$, and it is unique because, if $s:Q\rightarrow H\times_{L}^{2} H$ is such that $j_{L}^{2}{\circ}s=r$, then $s=q_{L}^{2}{\circ}j_{L}^{2}{\circ}s=q_{L}^{2}{\circ}r$.
The following definition is inspired by [@Brz].
\[almostlinear\]
Let $H$ be a magma. We say that a morphism $\phi:H{\otimes}H\rightarrow H{\otimes}H$ is:
- Almost left $H$-linear, if $\phi=(\mu_H{\otimes}H){\circ}(H{\otimes}\phi){\circ}(H{\otimes}\eta_H{\otimes}H)$.
- Almost right $H$-linear, if $\phi=(H{\otimes}\mu_H){\circ}(\phi{\otimes}H){\circ}(H{\otimes}\eta_H{\otimes}H)$.
By dualization, if $H$ is a comagma, we will say that a morphism $\phi$ is almost left $H$-colinear if $\phi=(H{\otimes}\varepsilon_H{\otimes}H){\circ}(H{\otimes}\phi){\circ}(\delta_H{\otimes}H)$, and almost right $H$-colinear if $\phi=(H{\otimes}\varepsilon_H{\otimes}H){\circ}(\phi{\otimes}H){\circ}(H{\otimes}\delta_H)$.
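In element notation (vector-space case, as above), almost left $H$-linearity says that $\phi$ is determined by its values on $\eta_{H}{\otimes}H$: writing $\phi(1{\otimes}g)=\sum g^{[1]}{\otimes}g^{[2]}$, one has (a sketch under these assumptions) $$\phi(h{\otimes}g)=\sum h\,g^{[1]}{\otimes}g^{[2]},$$ and, analogously, almost right $H$-linearity means $\phi(h{\otimes}g)=\sum h^{[1]}{\otimes}h^{[2]}g$, where now $\phi(h{\otimes}1)=\sum h^{[1]}{\otimes}h^{[2]}$.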
\[examplesalmost\] Let $H$ be a magma and comagma. The following assertions hold.
- The right Galois morphism, defined as $\beta=(\mu_H{\otimes}H){\circ}(H{\otimes}\delta_H)$ is almost left $H$-linear and almost right $H$-colinear.
- The left Galois morphism, defined as $\gamma=(H{\otimes}\mu_H){\circ}(\delta_H{\otimes}H)$ is almost right $H$-linear and almost left $H$-colinear.
- The morphisms $\Omega_{L}^{1}$ and $\Omega_{R}^{1}$ are almost left $H$-linear and almost right $H$-colinear.
- The morphisms $\Omega_{L}^{2}$ and $\Omega_{R}^{2}$ are almost right $H$-linear and almost left $H$-colinear.
Moreover, if $H$ is a unital magma and comonoid such that conditions (a1), (a2) and (a3) of Definition \[Weak-Hopf-quasigroup\] hold:
- The morphism $\Omega_{L}^{1}$ is almost right $H$-linear and it is almost left $H$-colinear if and only if $\Pi_{H}^{L}=\overline{\Pi}_{H}^{L}$.
- The morphism $\Omega_{R}^{1}$ is almost left $H$-colinear and it is almost right $H$-linear if and only if $\overline{\Pi}_{H}^{L}=\Pi_{H}^{R}$.
- The morphism $\Omega_{L}^{2}$ is almost right $H$-colinear and it is almost left $H$-linear if and only if $\Pi_{H}^{L}=\overline{\Pi}_{H}^{R}$.
- The morphism $\Omega_{R}^{2}$ is almost left $H$-linear and it is almost right $H$-linear if and only if $\Pi_{H}^{R}=\overline{\Pi}_{H}^{R}$.
It is easy to see assertions (i)-(iv). As far as (v), we get the almost right $H$-linearity by using (\[pi-l-barra-delta\]) and (\[pi-composition-2\]). Indeed, $$(H{\otimes}\mu_H){\circ}(\Omega_{L}^{1}{\otimes}H){\circ}(H{\otimes}\eta_H{\otimes}H)=(\mu_H{\otimes}H){\circ}(H{\otimes}(\Pi_{H}^{L}{\circ}\overline{\Pi}_{H}^{L}){\otimes}H){\circ}(H{\otimes}\delta_H)=\Omega_{L}^{1}.$$ On the other hand, using (\[mu-pi-l\]) and (\[mu-pi-l-var\]), $$(H{\otimes}\varepsilon_H{\otimes}H){\circ}(H{\otimes}\Omega_{L}^{1}){\circ}(\delta_H{\otimes}H)=(H{\otimes}(\varepsilon_H{\circ}\mu_H){\otimes}H){\circ}(\delta_H{\otimes}\delta_H)=
(\mu_H{\otimes}H){\circ}(H{\otimes}\overline{\Pi}_{H}^{L}{\otimes}H){\circ}(H{\otimes}\delta_H),$$ and as a consequence we have that $\Omega_{L}^{1}$ is almost left $H$-colinear if and only if $\Pi_{H}^{L}=\overline{\Pi}_{H}^{L}$.
To get (vi), the morphism $\Omega_{R}^{1}$ is almost left $H$-colinear because by (\[mu-pi-l-var\]) and (\[pi-composition-2\]),
- $\hspace{0.38cm} (H{\otimes}\varepsilon_H{\otimes}H){\circ}(H{\otimes}\Omega_{R}^{1}){\circ}(\delta_H{\otimes}H)=(H{\otimes}(\varepsilon_H{\circ}\mu_H){\otimes}H){\circ}(\delta_H{\otimes}((\Pi_{H}^{R}{\otimes}H){\circ}\delta_H))$
- $=(\mu_H{\otimes}H){\circ}(H{\otimes}(\overline{\Pi}_{H}^{L}{\circ}\Pi_{H}^{R}){\otimes}H){\circ}(H{\otimes}\delta_H)=\Omega_{R}^{1}.$
Moreover, using (\[delta-pi-r\]) and (\[pi-l-barra-delta\]), $$(H{\otimes}\mu_H){\circ}(\Omega_{R}^{1}{\otimes}H){\circ}(H{\otimes}\eta_H{\otimes}H)=(\mu_H{\otimes}\mu_H){\circ}(H{\otimes}(\delta_H{\circ}\eta_H){\otimes}H)=(\mu_H{\otimes}H){\circ}(H{\otimes}\overline{\Pi}_{H}^{L}{\otimes}H){\circ}(H{\otimes}\delta_H),$$ and then $\Omega_{R}^{1}$ is almost right $H$-linear if and only if $\overline{\Pi}_{H}^{L}=\Pi_{H}^{R}$. We leave to the reader the proofs for (vii) and (viii).
[Note that, as we showed in Propositions 1.5 and 1.6 of [@AFGLV] (the proofs use neither associativity nor the antipode), $\Pi_{H}^{L}=\overline{\Pi}_{H}^{L}$ iff $\Pi_{H}^{R}=\overline{\Pi}_{H}^{R}$, and $\overline{\Pi}_{H}^{L}=\Pi_{H}^{R}$ iff $\Pi_{H}^{L}=\overline{\Pi}_{H}^{R}$. Therefore, the morphism $\Omega_{L}^{1}$ is almost left $H$-colinear if and only if $\Omega_{R}^{2}$ is almost right $H$-linear (that is the case, for example, if $H$ is cocommutative, i.e., $\delta_H=c_{H,H}{\circ}\delta_H$), and the morphism $\Omega_{R}^{1}$ is almost right $H$-linear if and only if $\Omega_{L}^{2}$ is almost left $H$-linear (for example, if $H$ is commutative, i.e., $\mu_H=\mu_H{\circ}c_{H,H}$). ]{}
[Note that, if $H$ is a weak Hopf quasigroup, we can express the $\Omega$-morphisms as compositions of the Galois maps. Actually, by (a4) we have that $$\label{equalitiesomega}
\Omega_{L}^{1}=\overline{\beta}{\circ}\beta,\;\;\;\Omega_{R}^{1}=\beta{\circ}\overline{\beta},\;\;\;\Omega_{L}^{2}=\gamma{\circ}\overline{\gamma},\;\;\;\Omega_{R}^{2}=\overline{\gamma}{\circ}\gamma,$$ where $\overline{\beta}=(\mu_H{\otimes}H){\circ}(H{\otimes}\lambda_H{\otimes}H){\circ}(H{\otimes}\delta_H)$ and $\overline{\gamma}=(H{\otimes}\mu_H){\circ}(H{\otimes}\lambda_H{\otimes}H){\circ}(\delta_H{\otimes}H)$. Moreover, if the weak Hopf quasigroup $H$ is a Hopf quasigroup, $\Pi_{H}^{L}=\Pi_{H}^{R}=\overline{\Pi}_{H}^{L}=\overline{\Pi}_{H}^{R}=\varepsilon_H{\otimes}\eta_H$ and then the $\Omega$-morphisms are identities. As a consequence we have that in this case the Galois maps $\beta$ and $\gamma$ are isomorphisms with inverses $\overline{\beta}$ and $\overline{\gamma}$, respectively. ]{}
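In the element notation used above, the Galois maps and the morphisms $\overline{\beta}$, $\overline{\gamma}$ read (a sketch in the vector-space case) $$\beta(h{\otimes}g)=h g_{(1)}{\otimes}g_{(2)},\qquad \overline{\beta}(h{\otimes}g)=h\,\lambda_{H}(g_{(1)}){\otimes}g_{(2)},$$ $$\gamma(h{\otimes}g)=h_{(1)}{\otimes}h_{(2)}g,\qquad \overline{\gamma}(h{\otimes}g)=h_{(1)}{\otimes}\lambda_{H}(h_{(2)})\,g,$$ in agreement with the observation above that, for a Hopf quasigroup, $\overline{\beta}$ and $\overline{\gamma}$ invert $\beta$ and $\gamma$.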
Now we give the main result of this paper, which characterizes weak Hopf quasigroups in terms of compositions involving the Galois maps.
\[characterization\] Let $H$ be a unital magma and comonoid such that conditions (a1), (a2) and (a3) of Definition \[Weak-Hopf-quasigroup\] hold. The following assertions are equivalent:
- $H$ is a weak Hopf quasigroup.
- The morphisms $f=q_{R}^{1}{\circ}\beta{\circ}j_{L}^{1}:H\times_{L}^{1} H\rightarrow H\times_{R}^{1} H$ and $g=q_{L}^{2}{\circ}\gamma{\circ}j_{R}^{2}:H\times_{R}^{2} H\rightarrow H\times_{L}^{2} H$ are isomorphisms, the morphism $j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}$ is almost left $H$-linear, and $j_{R}^{2}{\circ}g^{-1}{\circ}q_{L}^{2}$ is almost right $H$-linear.
$(i)\Rightarrow(ii).$ Assume that $H$ is a weak Hopf quasigroup. Define $f^{-1}=q_{L}^{1}{\circ}\overline{\beta}{\circ}j_{R}^{1}$ and $g^{-1}=q_{R}^{2}{\circ}\overline{\gamma}{\circ}j_{L}^{2}$. Then $f^{-1}$ and $g^{-1}$ are the inverses of $f$ and $g$, respectively. Indeed,
$$f{\circ}f^{-1}=q_{R}^{1}{\circ}\beta{\circ}\Omega_{L}^{1}{\circ}\overline{\beta}{\circ}j_{R}^{1}
=q_{R}^{1}{\circ}\beta{\circ}\overline{\beta}{\circ}\beta{\circ}\overline{\beta}{\circ}j_{R}^{1}=q_{R}^{1}{\circ}\Omega_{R}^{1}{\circ}\Omega_{R}^{1}{\circ}j_{R}^{1}
=q_{R}^{1}{\circ}\Omega_{R}^{1}{\circ}j_{R}^{1}=q_{R}^{1}{\circ}j_{R}^{1}{\circ}q_{R}^{1}{\circ}j_{R}^{1}=id_{H\times_{R}^{1} H}.$$
On the other hand,
$$f^{-1}{\circ}f=q_{L}^{1}{\circ}\overline{\beta}{\circ}\Omega_{R}^{1}{\circ}\beta{\circ}j_{L}^{1}
=q_{L}^{1}{\circ}\overline{\beta}{\circ}\beta{\circ}\overline{\beta}{\circ}\beta{\circ}j_{L}^{1}
=q_{L}^{1}{\circ}\Omega_{L}^{1}{\circ}\Omega_{L}^{1}{\circ}j_{L}^{1}=q_{L}^{1}{\circ}\Omega_{L}^{1}{\circ}j_{L}^{1}
=q_{L}^{1}{\circ}j_{L}^{1}{\circ}q_{L}^{1}{\circ}j_{L}^{1}=id_{H\times_{L}^{1} H},$$
and then $f^{-1}$ is the inverse of $f$. In a similar way it is easy to see that $g^{-1}$ is the inverse of $g$. To see the almost left and right $H$-linearity, we will show that $j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}=\overline{\beta}$ and $j_{R}^{2}{\circ}g^{-1}{\circ}q_{L}^{2}=\overline{\gamma}$. We only show the first equality, the second one following a similar pattern. Indeed, using the definition of $f^{-1}$, the idempotent character of $\Omega_{L}^{1}$, equality (\[monoid-hl-2\]) for $\Pi_{H}^{R}$, coassociativity and (a4-3) of Definition \[Weak-Hopf-quasigroup\], we obtain that
- $\hspace{0.38cm} j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}=\Omega_{L}^{1}{\circ}\overline{\beta}{\circ}\Omega_{R}^{1}
=\overline{\beta}{\circ}\beta{\circ}\overline{\beta}{\circ}\beta{\circ}\overline{\beta}=\overline{\beta}{\circ}\beta{\circ}\overline{\beta}=\overline{\beta}{\circ}\Omega_{R}^{1}$
- $=((\mu_H{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}\Pi_{H}^{R}{\otimes}H)){\otimes}H){\circ}(H{\otimes}H{\otimes}((\lambda_H{\otimes}H){\circ}\delta_H)){\circ}(H{\otimes}\delta_H)$
- $=(\mu_H{\otimes}H){\circ}(H{\otimes}(\Pi_{H}^{R}\ast \lambda_H){\otimes}H){\circ}(H{\otimes}\delta_H)=\overline{\beta}.$
$(ii)\Rightarrow (i).$ First of all, note that $$\label{betaandgammaexpressions}
j_{R}^{1}{\circ}f{\circ}q_{L}^{1}=\beta\;\;\; j_{L}^{2}{\circ}g{\circ}q_{R}^{2}=\gamma .$$
Indeed,
- $\hspace{0.38cm} j_{R}^{1}{\circ}f{\circ}q_{L}^{1}$
- $=(\mu_H{\otimes}H){\circ}(H{\otimes}\Pi_{H}^{R}{\otimes}H){\circ}(H{\otimes}\delta_H){\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}\delta_H){\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}\Pi_{H}^{L}{\otimes}H){\circ}(H{\otimes}\delta_H)$
- $=(\mu_H{\otimes}H){\circ}(H{\otimes}\Pi_{H}^{R}{\otimes}H){\circ}(H{\otimes}\delta_H){\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}(\Pi_{H}^{L}\ast id_H){\otimes}H){\circ}(H{\otimes}\delta_H)$
- $=(\mu_H{\otimes}H){\circ}(H{\otimes}\Pi_{H}^{R}{\otimes}H){\circ}(H{\otimes}\delta_H){\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}\delta_H)$
- $=(\mu_H{\otimes}H){\circ}(H{\otimes}(id_H\ast \Pi_{H}^{R}){\otimes}H){\circ}(H{\otimes}\delta_H)$
- $=\beta,$
where the first equality follows by the definitions of $f$, $\Omega_{L}^{1}$ and $\Omega_{R}^{1}$; the second and the fourth ones by (\[monoid-hl-2\]) and (\[monoid-hl-3\]); and in the third and the last ones we use (a4-3) of Definition \[Weak-Hopf-quasigroup\]. In a similar way, we get the second equality. As a consequence, we obtain the following expressions for $\mu_H$ and $\delta_H$: $$\label{muexpression}
\mu_H=(H{\otimes}\varepsilon_H){\circ}j_{R}^{1}{\circ}f{\circ}q_{L}^{1}=(\varepsilon_H{\otimes}H){\circ}j_{L}^{2}{\circ}g{\circ}q_{R}^{2}.$$ $$\label{deltaexpression}
\delta_H=j_{R}^{1}{\circ}f{\circ}q_{L}^{1}{\circ}(\eta_H{\otimes}H)=j_{L}^{2}{\circ}g{\circ}q_{R}^{2}{\circ}(H{\otimes}\eta_H).$$ Now define $\lambda_H=(H{\otimes}\varepsilon_H){\circ}j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}{\circ}(\eta_H{\otimes}H)$ and $\overline{\lambda_H}=(\varepsilon_H{\otimes}H){\circ}j_{R}^{2}{\circ}g^{-1}{\circ}q_{L}^{2} {\circ}(H{\otimes}\eta_H)$. To obtain that $H$ is a weak Hopf quasigroup we will show that $\lambda_H=\overline{\lambda_H}$ and they satisfy (a4) of Definition \[Weak-Hopf-quasigroup\]. We begin showing that $id_H\ast \lambda_H=\Pi_{H}^{L}$. Indeed, by the almost left $H$-linearity and (\[deltaexpression\]),
- $\hspace{0.38cm} id_H\ast \lambda_H=(H{\otimes}\varepsilon_H){\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}(j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}{\circ}(\eta_H{\otimes}H))){\circ}\delta_H$
- $=(H{\otimes}\varepsilon_H){\circ}j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}{\circ}\delta_H=(H{\otimes}\varepsilon_H){\circ}j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}{\circ}j_{R}^{1}{\circ}f{\circ}q_{L}^{1}{\circ}(\eta_H{\otimes}H)$
- $=(H{\otimes}\varepsilon_H){\circ}\Omega_{L}^{1}{\circ}(\eta_H{\otimes}H)=\Pi_{H}^{L}.$
In a similar way, but using the almost right $H$-linearity, we get that $\overline{\lambda_H}\ast id_H=\Pi_{H}^{R}$. On the other hand, note that $(\beta{\otimes}H){\circ}(H{\otimes}\delta_H)=(H{\otimes}\delta_H){\circ}\beta$ holds and, by (\[betaandgammaexpressions\]) it is easy to see that $$\label{betaequality}
(H{\otimes}\delta_H){\circ}j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}=((j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}){\otimes}H){\circ}(H{\otimes}\delta_H).$$ Moreover, taking into account that $(H{\otimes}\gamma){\circ}(\delta_H{\otimes}H)=(\delta_H{\otimes}H){\circ}\gamma$, we get $$\label{gammaequality}
(\delta_H{\otimes}H){\circ}j_{R}^{2}{\circ}g^{-1}{\circ}q_{L}^{2}=(H{\otimes}(j_{R}^{2}{\circ}g^{-1}{\circ}q_{L}^{2})){\circ}(\delta_H{\otimes}H).$$ Therefore, using (\[muexpression\]), we obtain that
- $\hspace{0.38cm} \lambda_H\ast id_H=\mu_H{\circ}(H{\otimes}((\varepsilon_H{\otimes}H){\circ}\delta_H)){\circ}j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}{\circ}(\eta_H{\otimes}H)$
- $=(H{\otimes}\varepsilon_H){\circ}j_{R}^{1}{\circ}f{\circ}q_{L}^{1}{\circ}j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}{\circ}(\eta_H{\otimes}H)
=(H{\otimes}\varepsilon_H){\circ}\Omega_{R}^{1}{\circ}(\eta_H{\otimes}H)=\Pi_{H}^{R},$
and by similar computations, but using (\[gammaequality\]), we have that $id_H\ast \overline{\lambda_H}=\Pi_{H}^{L}$.
To get (a4-3) of Definition \[Weak-Hopf-quasigroup\], $$\lambda_H\ast \Pi_{H}^{L}=\mu_H{\circ}(H{\otimes}\Pi_{H}^{L}){\circ}j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}{\circ}(\eta_H{\otimes}H)
=(H{\otimes}\varepsilon_H){\circ}\Omega_{L}^{1}{\circ}j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}{\circ}(\eta_H{\otimes}H)=\lambda_H,$$
where the first equality follows by almost right $H$-linearity; the second one because $H$ is a comonoid; in the third one we use that $\mu_H{\circ}(H{\otimes}\Pi_{H}^{L})=(H{\otimes}\varepsilon_H){\circ}\Omega_{L}^{1}$; finally, the last one follows because $\Omega_{L}^{1}{\circ}j_{L}^{1}=j_{L}^{1}$. By similar computations but using almost left $H$-linearity and that $(\Pi_{H}^{L}{\otimes}H){\circ}\delta_H=\Omega_{R}^{1}{\circ}(\eta_H{\otimes}H)$, it is not difficult to see that $\Pi_{H}^{R}\ast \lambda_H=\lambda_H$, and the same ideas can be used to show that $\overline{\lambda_H}= \overline{\lambda_H}\ast \Pi_{H}^{L}$. Now we prove (a4-4)-(a4-7) of Definition \[Weak-Hopf-quasigroup\]. Firstly, by almost right $H$-linearity and (\[betaandgammaexpressions\])
- $\hspace{0.38cm} \mu_H\circ (\overline{\lambda_H}{\otimes}\mu_H)\circ (\delta_H{\otimes}H)
=(\varepsilon_H{\otimes}H){\circ}(H{\otimes}\mu_H){\circ}((j_{R}^{2}{\circ}g^{-1}{\circ}q_{L}^{2}){\otimes}H){\circ}(H{\otimes}\eta_H{\otimes}H){\circ}\gamma$
- $=(\varepsilon_H{\otimes}H){\circ}j_{R}^{2}{\circ}g^{-1}{\circ}q_{L}^{2}{\circ}j_{L}^{2}{\circ}g{\circ}q_{R}^{2}=(\varepsilon_H{\otimes}H){\circ}\Omega_{R}^{2}
=\mu_{H}{\circ}(\Pi_{H}^{R}{\otimes}H),$
and using almost right $H$-linearity, (\[gammaequality\]) and (\[muexpression\]),
- $\hspace{0.38cm} \mu_H\circ (H{\otimes}\mu_H){\circ}(H{\otimes}\overline{\lambda_H}{\otimes}H){\circ}(\delta_H{\otimes}H)
=\mu_H{\circ}(H{\otimes}\varepsilon_H{\otimes}H){\circ}(H{\otimes}(j_{R}^{2}{\circ}g^{-1}{\circ}q_{L}^{2})){\circ}(\delta_H{\otimes}H)$
- $=\mu_H{\circ}(((H{\otimes}\varepsilon_H){\circ}\delta_H){\otimes}H){\circ}j_{R}^{2}{\circ}g^{-1}{\circ}q_{L}^{2}
=(\varepsilon_H{\otimes}H){\circ}j_{L}^{2}{\circ}g{\circ}q_{R}^{2} {\circ}j_{R}^{2}{\circ}g^{-1}{\circ}q_{L}^{2}=(\varepsilon_H{\otimes}H){\circ}\Omega_{L}^{2}$
- $=\mu_{H}{\circ}(\Pi_{H}^{L}{\otimes}H).$
By similar ideas but using almost left $H$-linearity and (\[betaequality\]), we show that
$$\mu_H{\circ}(\mu_H{\otimes}\lambda_H){\circ}(H{\otimes}\delta_H)=\mu_{H}{\circ}(H{\otimes}\Pi_{H}^{L}),$$ and $$\mu_H{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}\lambda_H{\otimes}H){\circ}(H{\otimes}\delta_H)=\mu_{H}{\circ}(H{\otimes}\Pi_{H}^{R}).$$
To finish the proof, it only remains to see that $\lambda_H=\overline{\lambda_H}$. Indeed,
- $\hspace{0.38cm} \lambda_H=\lambda_H\ast \Pi_{H}^{L}
=\mu_H{\circ}(H{\otimes}\Pi_{H}^{L}){\circ}(\lambda_H{\otimes}H){\circ}\delta_H
=\mu_H{\circ}(\mu_H{\otimes}\lambda_H){\circ}(H{\otimes}\delta_H){\circ}(\lambda_H{\otimes}H){\circ}\delta_H$
- $=\mu_H{\circ}(\Pi_{H}^{R}{\otimes}H){\circ}(H{\otimes}\lambda_H){\circ}\delta_H=\mu_H{\circ}(\overline{\lambda_H}{\otimes}\mu_H){\circ}(\delta_H{\otimes}H){\circ}(H{\otimes}\lambda_H){\circ}\delta_H
=\overline{\lambda_H}\ast \Pi_{H}^{L}=\overline{\lambda_H},$
and the proof is complete.
As we have said in the Introduction, the notion of weak Hopf quasigroup generalizes the ones of Hopf quasigroups and weak Hopf algebras. To finish this section we particularize our main theorem to these settings. Note that the first result is assertion $(1)$ of Theorem 2.5 (called the first fundamental theorem for Hopf (co)quasigroups) given by Brzeziński in [@Brz].
\[corolarioHqg\] Let $H$ be a unital magma and comonoid such that $\varepsilon_H$ and $\delta_H$ are morphisms of unital magmas (equivalently, $\eta_H$ and $\mu_H$ are morphisms of counital comagmas). Then $H$ is a Hopf quasigroup if and only if the right and left Galois morphisms $\beta$ and $\gamma$ are isomorphisms and they have almost left $H$-linear and almost right $H$-linear inverses, respectively.
First of all, note that conditions (a2) and (a3) of Definition \[Weak-Hopf-quasigroup\] trivialize because $\varepsilon_H$ and $\delta_H$ are morphisms of unital magmas. Moreover, $\Pi_{H}^{L}=\Pi_{H}^{R}=\overline{\Pi}_{H}^{L}=\overline{\Pi}_{H}^{R}=\varepsilon_H{\otimes}\eta_H$ and then the $\Omega$-morphisms are identities. As a consequence, $f=\beta$ and $g=\gamma$.
As far as weak Hopf algebras are concerned, we will prove that it is possible to remove the conditions about almost $H$-linearity. First we need to show the following technical Lemma:
\[fbaixaporvarphi\] Let $H$ be a unital magma and comonoid such that conditions (a1), (a2) and (a3) of Definition \[Weak-Hopf-quasigroup\] hold. Let $f$ and $g$ be the maps defined in Theorem \[characterization\] and define the morphisms: $$\label{expressionsvarphi-1}
\varphi_{H\times_{R}^{1}H}=q_{R}^{1}{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}j_{R}^{1}),\;\;\;\varphi_{H\times_{L}^{1} H}=q_{L}^{1}{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}j_{L}^{1}),$$ $$\label{expressionsvarphi-2}
\psi_{H\times_{R}^{2}H}=q_{R}^{2}{\circ}(H{\otimes}\mu_H){\circ}(j_{R}^{2}{\otimes}H),\;\;\;\psi_{H\times_{L}^{2}H}=q_{L}^{2}{\circ}(H{\otimes}\mu_H){\circ}(j_{L}^{2}{\otimes}H).$$ Then the following assertions are equivalent:
- $H$ is a monoid.
- The morphism $f$ satisfies that $f{\circ}\varphi_{H\times_{L}^{1} H}=\varphi_{H\times_{R}^{1}H}{\circ}(H{\otimes}f)$.
- The morphism $g$ satisfies that $g{\circ}\psi_{H\times_{R}^{2} H}=\psi_{H\times_{L}^{2} H}{\circ}(g{\otimes}H)$.
$(i)\Rightarrow (ii).$ Assume that $H$ is a monoid. Then,
- $\hspace{0.38cm} f{\circ}\varphi_{H\times_{L}^{1} H}=q_{R}^{1}{\circ}\beta{\circ}\Omega_{L}^{1}{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}j_{L}^{1})=q_{R}^{1}{\circ}\beta{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}j_{L}^{1})$
- $=q_{R}^{1}{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}\beta){\circ}(H{\otimes}j_{L}^{1})=q_{R}^{1}{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}(\Omega_{R}^{1}{\circ}\beta)){\circ}(H{\otimes}j_{L}^{1})=\varphi_{H\times_{R}^{1}H}{\circ}(H{\otimes}f),$
where the first and the last equalities are consequences of (\[expressionsvarphi-1\]); the second and the fourth ones rely on (\[muconomega\]). Finally, the third equality follows because $H$ is associative and then $(\mu_H{\otimes}H){\circ}(H{\otimes}\beta)=\beta{\circ}(\mu_H{\otimes}H)$.
To get $(ii)\Rightarrow (i)$, we will show that $(\mu_H{\otimes}H){\circ}(H{\otimes}\beta)=\beta{\circ}(\mu_H{\otimes}H)$. By composing with $H{\otimes}\varepsilon_H$ we obtain that $H$ is associative. First of all, note that, by (\[monoid-hl-2\]) and (\[pi-l\]), $$\beta{\circ}\Omega_{L}^{1}=(\mu_H{\otimes}H){\circ}(H{\otimes}(\Pi_{H}^{L}\ast id_H){\otimes}H){\circ}(H{\otimes}\delta_H)=\beta,$$ and in a similar way but using (\[monoid-hl-2\]) for $\Pi_{H}^{R}$ we get that $\Omega_{R}^{1}{\circ}\beta=\beta$. Then, by (\[expressionsvarphi-1\]) and (\[muconomega\]),
$$f{\circ}\varphi_{H\times_{L}^{1} H}=q_{R}^{1}{\circ}\beta{\circ}\Omega_{L}^{1}{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}j_{L}^{1})=q_{R}^{1}{\circ}\beta{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}j_{L}^{1})$$ and $$\varphi_{H\times_{R}^{1}H}{\circ}(H{\otimes}f)=q_{R}^{1}{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}\Omega_{R}^{1}){\circ}(H{\otimes}(\beta{\circ}j_{L}^{1}))=q_{R}^{1}{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}(\beta{\circ}j_{L}^{1})).$$
Composing with $j_{R}^{1}$ on the left and with $H{\otimes}q_{L}^{1}$ on the right, and using (\[muconomega\]) and (\[omegaconmu\]) we obtain that $(\mu_H{\otimes}H){\circ}(H{\otimes}\beta)=\beta{\circ}(\mu_H{\otimes}H)$.
The proof for the equivalence between $(i)$ and $(iii)$ is similar and we leave the details to the reader.
Now we can give our characterization for weak Hopf algebras. Note that the equivalence between $(i)$ and $(ii)$ is the result given by Schauenburg in [@S] (Theorem 6.1).
\[corolarioHweak\] Let $H$ be a monoid and comonoid such that conditions (a1), (a2) and (a3) of Definition \[Weak-Hopf-quasigroup\] hold. The following assertions are equivalent.
- $H$ is a weak Hopf algebra.
- The morphism $f$ defined in Theorem \[characterization\] is an isomorphism.
- The morphism $g$ defined in Theorem \[characterization\] is an isomorphism.
By Theorem \[characterization\], $(i)\Rightarrow (ii)$ and $(i)\Rightarrow (iii)$. To get $(ii)\Rightarrow (i)$, we will begin by showing that, if $f$ is an isomorphism, the morphism $j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}$ is always almost left $H$-linear. Indeed, note that by Lemma \[fbaixaporvarphi\], $f{\circ}\varphi_{H\times_{L}^{1} H}=\varphi_{H\times_{R}^{1}H}{\circ}(H{\otimes}f)$. By the suitable compositions, we obtain that $\varphi_{H\times_{L}^{1} H}{\circ}(H{\otimes}f^{-1})=f^{-1}{\circ}\varphi_{H\times_{R}^{1}H}$ and then
- $\hspace{0.38cm} (\mu_H{\otimes}H){\circ}(H{\otimes}(j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1})){\circ}(H{\otimes}\eta_H{\otimes}H)
=\Omega_{L}^{1}{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}(j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1})){\circ}(H{\otimes}\eta_H{\otimes}H)$
- $=j_{L}^{1}{\circ}\varphi_{H\times_{L}^{1} H}{\circ}(H{\otimes}(f^{-1}{\circ}q_{R}^{1})){\circ}(H{\otimes}\eta_H{\otimes}H)
=j_{L}^{1}{\circ}f^{-1}{\circ}\varphi_{H\times_{R}^{1}H}{\circ}(H{\otimes}q_{R}^{1}){\circ}(H{\otimes}\eta_H{\otimes}H)$
- $=j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}{\circ}(\mu_H{\otimes}H){\circ}(H{\otimes}\Omega_{R}^{1}){\circ}(H{\otimes}\eta_H{\otimes}H)=j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}{\circ}\Omega_{R}^{1}=j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}.$
Now we can follow the proof given in Theorem \[characterization\] to see that the morphism $\lambda_H=(H{\otimes}\varepsilon_H){\circ}j_{L}^{1}{\circ}f^{-1}{\circ}q_{R}^{1}{\circ}(\eta_H{\otimes}H)$ is the antipode of $H$ (in this case, by associativity of $H$, conditions (a4-4)-(a4-7) of Definition \[Weak-Hopf-quasigroup\] trivialize). The proof for $(iii)\Rightarrow (i)$ follows a similar pattern and we leave the details to the reader.
A characterization for weak Hopf coquasigroups
==============================================
The notions of weak Hopf quasigroup and weak Hopf coquasigroup are entirely dual, i.e., we can obtain one of them by reversing arrows in the definition of the other. As a consequence, by dualizing the results given in the previous section we get a characterization for weak Hopf coquasigroups. The proofs follow the same ideas, and for the sake of brevity they will be omitted. First of all we introduce the notion of weak Hopf coquasigroup.
\[Weak-Hopf-coquasigroup\]
A weak Hopf coquasigroup $H$ in ${\mathcal
C}$ is a monoid $(H, \eta_H, \mu_H)$ and a counital comagma $(H,\varepsilon_H, \delta_H)$ such that the following axioms hold:
- $\delta_{H}{\circ}\mu_{H}=(\mu_{H}{\otimes}\mu_{H}){\circ}\delta_{H{\otimes}H}.$
- $\varepsilon_{H}{\circ}\mu_{H}{\circ}(\mu_{H}{\otimes}H)=((\varepsilon_{H}{\circ}\mu_{H}){\otimes}(\varepsilon_{H}{\circ}\mu_{H})){\circ}(H{\otimes}\delta_{H}{\otimes}H)$
- $=((\varepsilon_{H}{\circ}\mu_{H}){\otimes}(\varepsilon_{H}{\circ}\mu_{H})){\circ}(H{\otimes}(c_{H,H}^{-1}{\circ}\delta_{H}){\otimes}H).$
- $(\delta_{H}{\otimes}H){\circ}\delta_{H}{\circ}\eta_{H}=(H{\otimes}\delta_{H}){\circ}\delta_{H}{\circ}\eta_{H}=(H{\otimes}\mu_{H}{\otimes}H){\circ}((\delta_{H}{\circ}\eta_{H}) {\otimes}(\delta_{H}{\circ}\eta_{H}))$
- $=(H{\otimes}(\mu_{H}{\circ}c_{H,H}^{-1}){\otimes}H){\circ}((\delta_{H}{\circ}\eta_{H}) {\otimes}(\delta_{H}{\circ}\eta_{H})).$
- There exists $\lambda_{H}:H\rightarrow H$ in ${\mathcal C}$ (called the antipode of $H$) such that, if we denote by $\Pi_{H}^{L}$ (target morphism) and by $\Pi_{H}^{R}$ (source morphism) the morphisms $$\Pi_{H}^{L}=((\varepsilon_{H}{\circ}\mu_{H}){\otimes}H){\circ}(H{\otimes}c_{H,H}){\circ}((\delta_{H}{\circ}\eta_{H}){\otimes}H),$$ $$\Pi_{H}^{R}=(H{\otimes}(\varepsilon_{H}{\circ}\mu_{H})){\circ}(c_{H,H}{\otimes}H){\circ}(H{\otimes}(\delta_{H}{\circ}\eta_{H})),$$ then:
- $\Pi_{H}^{L}=id_{H}\ast \lambda_{H}.$
- $\Pi_{H}^{R}=\lambda_{H}\ast id_{H}.$
- $\lambda_{H}\ast \Pi_{H}^{L}=\Pi_{H}^{R}\ast \lambda_{H}= \lambda_{H}.$
- $(\mu_H{\otimes}H){\circ}(\lambda_H{\otimes}\delta_H){\circ}\delta_H=(\Pi_{H}^{R}{\otimes}H){\circ}\delta_H.$
- $(\mu_H{\otimes}H){\circ}(H{\otimes}\lambda_H{\otimes}H){\circ}(H{\otimes}\delta_H){\circ}\delta_H=(\Pi_{H}^{L}{\otimes}H){\circ}\delta_H.$
- $(H{\otimes}\mu_H){\circ}(\delta_H{\otimes}\lambda_H){\circ}\delta_H=(H{\otimes}\Pi_{H}^{L}){\circ}\delta_H.$
- $(H{\otimes}\mu_H){\circ}(H{\otimes}\lambda_H{\otimes}H){\circ}(\delta_H{\otimes}H){\circ}\delta_H=(H{\otimes}\Pi_{H}^{R}){\circ}\delta_H.$
Note that, if $\eta_H$ and $\mu_H$ are morphisms of counital comagmas, (equivalently, $\varepsilon_{H}$, $\delta_{H}$ are morphisms of unital magmas), $\Pi_{H}^{L}=\Pi_{H}^{R}=\eta_{H}{\otimes}\varepsilon_{H}$ and, as a consequence, we have the notion of Hopf coquasigroup.
Note that, when reversing arrows, the morphisms $\Pi_{H}^{L}$ and $\Pi_{H}^{R}$ are exactly the same as in the previous section, while the morphism $\overline{\Pi}_{H}^{L}$ changes into $\overline{\Pi}_{H}^{R}$ and vice versa. As for the $\Omega$-morphisms, we must replace $\Omega_{L}^{1}$, $\Omega_{R}^{1}$, $\Omega_{L}^{2}$ and $\Omega_{R}^{2}$ by $\Omega_{L}^{2}$, $\Omega_{R}^{2}$, $\Omega_{L}^{1}$ and $\Omega_{R}^{1}$, respectively. Therefore the characterization of weak Hopf coquasigroups is given by the following result:
\[characterizationcoquasi\] Let $H$ be a monoid and counital comagma such that conditions (b1), (b2) and (b3) of Definition \[Weak-Hopf-coquasigroup\] hold. The following assertions are equivalent:
- $H$ is a weak Hopf coquasigroup.
- The morphisms $h=q_{R}^{2}{\circ}\gamma{\circ}j_{L}^{2}:H\times_{L}^{2} H\rightarrow H\times_{R}^{2} H$ and $s=q_{L}^{1}{\circ}\beta{\circ}j_{R}^{1}:H\times_{R}^{1} H\rightarrow H\times_{L}^{1} H$ are isomorphisms. Moreover, the morphism $j_{L}^{2}{\circ}h^{-1}{\circ}q_{R}^{2}$ is almost left $H$-colinear and $j_{R}^{1}{\circ}s^{-1}{\circ}q_{L}^{1}$ is almost right $H$-colinear.
When particularizing to Hopf coquasigroups, we get assertion $(2)$ of Theorem 2.5 given by Brzeziński in [@Brz].
\[corolarioHcoqg\] Let $H$ be a monoid and counital comagma such that $\varepsilon_H$ and $\delta_H$ are morphisms of unital magmas (equivalently, $\eta_H$ and $\mu_H$ are morphisms of counital comagmas). Then $H$ is a Hopf coquasigroup if and only if the right and left Galois morphisms $\beta$ and $\gamma$ are isomorphisms and they have almost right $H$-colinear and almost left $H$-colinear inverses, respectively.
We will finish this paper by giving the corresponding characterization for weak Hopf algebras.
\[corolarioHweakcoquasi\] Let $H$ be a monoid and comonoid such that conditions (b1), (b2) and (b3) of Definition \[Weak-Hopf-coquasigroup\] hold. The following assertions are equivalent.
- $H$ is a weak Hopf algebra.
- The morphism $h$ defined in Theorem \[characterizationcoquasi\] is an isomorphism.
- The morphism $s$ defined in Theorem \[characterizationcoquasi\] is an isomorphism.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors were supported by Ministerio de Economía y Competitividad of Spain (European Feder support included). Grant MTM2013-43687-P: Homología, homotopía e invariantes categóricos en grupos y álgebras no asociativas.
[99]{}
J.N. Alonso Álvarez, J.M. Fernández Vilaboa, R. González Rodríguez, M. P. López López, E. Villanueva Novoa, *Weak Hopf algebras with projection and weak smash bialgebra structures*, J. Algebra, **269** (2003), 701-725.
J.N. Alonso Álvarez, J.M. Fernández Vilaboa, R. González Rodríguez, *Weak Hopf algebras and weak Yang-Baxter operators*, J. Algebra, **320** (2008), 2101-2143.
J. N. Alonso Álvarez, J.M. Fernández Vilaboa, R. González Rodríguez, *Weak braided Hopf algebras*, Indiana Univ. Math. J. **57** (2008), 2423-2458.
J.N. Alonso Álvarez, J.M. Fernández Vilaboa, R. González Rodríguez, *Weak Hopf quasigroups*, Asian J. Math., to appear (available in arXiv:1410.2180 (2014)).
J.N. Alonso Álvarez, J.M. Fernández Vilaboa, R. González Rodríguez, *Cleft and Galois extensions associated to a weak Hopf quasigroup*, J. Pure Appl. Algebra, to appear (available in arXiv:1412.1622 (2014)).
G. Böhm, F. Nill, K. Szlachányi, *Weak Hopf algebras, I. Integral theory and $C^{\ast}$-structure*, J. Algebra **221** (1999), 385-438.
T. Brzeziński, *Hopf modules and the fundamental theorem for Hopf (co)quasigroups*, Internat. Elec. J. Algebra, **8** (2010), 114-128.
A. Joyal, R. Street, *Braided tensor categories*, Adv. Math. **102** (1993), 20-78.
J. Klim, S. Majid, *Hopf quasigroups and the algebraic 7-sphere*, J. Algebra **323** (2010), 3067-3110.
A. Nakajima, *Bialgebras and Galois extensions*, Math. J. Okayama Univ. **33** (1991), 37-46.
D. Nikshych, L. Vainerman, *Finite quantum groupoids and their applications*, New directions in Hopf algebras, MSRI Publications **43** (2002), 211-262.
J.M. Pérez-Izquierdo, *Algebras, hyperalgebras, nonassociative bialgebras and loops*, Adv. Math. **208** (2007), 834-876.
P. Schauenburg, *Weak Hopf algebras and quantum groupoids*, Noncommutative geometry and quantum groups (Warsaw, 2001), 171-188, Polish Acad. Sci., Warsaw, (2003).
R. Street, *Fusion operators and cocycloids in monoidal categories*, Appl. Categor. Struct. **6** (1998), 177-191.
---
abstract: 'We consider one of the fundamental debates concerning the relativity theory, namely, the ether and the relativity points of view, in a way intended to aid the learning of the subjects. In addition, we present our views and prospects while describing the issues in a manner that is accessible to many physicists and allows broader views. Also, we very briefly review two relatively recent observations, the Webb redshift and the ultra high–energy cosmic rays, and the modified relativity models that have been presented to justify them, wherein we point out that these justifications have not been performed via a single model with a single mechanism.'
author:
- |
Mehrdad Farhoudi[^1] and Maysam Yousefian[^2]\
[Department of Physics, Shahid Beheshti University, Evin, Tehran 19839, Iran]{}
date: 'November 29, 2015'
title: '**Ether and Relativity**'
---
In commemorating the first century of the discovery of general relativity by Albert Einstein, which was recognized as a triumph of the human intellect, it would be instructive to look through one of its fundamental debates, namely, between the [*ether*]{} and the [*relativity*]{} points of view. Certainly, a very vast amount of work has been performed on these subjects and the references given in this compact survey are obviously not a complete bibliography on these topics, and although we provide adequate references, it is a self–contained work. However, while we are trying to spell out some basic issues behind the subject, the work essentially provides a brief review, from a different perspective, of the long history and the situation of the ether and relativity up to the present day. Nevertheless, it is not aimed only at giving a motivation to research on the issue, and we propose, at various points during the work, to introduce our points of view and prospects on these subjects in a way that is accessible to many physicists and allows broader views.
It seems that it was Descartes who first introduced into science the concept of ether as a space–filling material in the manner of a container and a transmitter between distant bodies (similar to what, nowadays, we call a field) in the first half of the seventeenth century [@ref27-2]. About one generation after him, perhaps one can consider Newton as one of the ether theory pioneers who practically introduced ether into physics. Actually, Newton presented the concept of [*inertia*]{} in the first law (i.e., the inertia law) of his famous three laws of mechanics [@Newton], and in this respect, he considered an inertial frame as a rigid frame in which free particles move with constant speed in straight lines. On the other hand, a free particle is a particle that moves with constant speed in a straight line in an inertial frame; and obviously, this is a [*vicious circle*]{} (or, a logical loop). In other words, it is ambiguous what distinguishes or singles out the class of inertial frames as criteria or standards of non–acceleration from all other frames. Newton, who was also aware of this difficulty, in order to specify the inertial frames from a theoretical point of view, employed the idea of [*absolute*]{} space with the aid of the notion of ether. He considered absolute space as a [*rest*]{} inertial frame (or the Newtonian ether) which actually is a very thin motionless medium with nearly zero density, perfect luminosity and strong elasticity character, which is also a conveyer for force transmission. To Newton’s contemporaries, like Hooke and Huyghens, the ether’s main function was just to carry light waves and thus, it could also be acted on [@ref27-2]. However, the Newton idea of the ether was based on it being an acting substance which does not accept reaction, whereas Leibniz (or, Leibnitz) insisted that space is an order of [*coexistences*]{}[^3] [@Erlichson-1967; @Capek-1976]. He argued against the Newton idea of [*substantival ontology of space*]{}, and believed that this idea leads to a conflict with the [*principle of sufficient reason*]{} [@Earman1987]. Also, Berkeley presented some arguments against the Newton absolute space in his work named [*De Motu*]{} (On Motion) [@Berkley-1721]. However, when the luminiferous ether evolved into a cornerstone of the Maxwell theory [@ref000], it became a plausible marker for the Newton absolute space.
In Newtonian physics, space is a pre–existing stage on which material particles are the characters acting out the drama of physical events. This point of view is on the contrary to the Aristotelian[^4] view that space is a [*plenum*]{} (i.e., occupied by matter) and [*inseparably*]{} associated with the material substance [@Hardie-Aristotle; @Aristotle]. In fact, the Newton view is a return to the Democritus view that space is a [*void*]{} with the properties which are [*independent*]{} of the material bodies that move [*in*]{} it [@Adler-1966]. While in relativistic gravitational physics, again, space cannot be considered apart from the matter that is in it, and, as the mathematician E.T. Whittaker [@Adler-1966] points out, in this case, the characters create the stage as they walk about on it, i.e., gravitation has become part of the stage instead of being a player. In another word, the properties of space in gravitational theories are inseparable from the matter that is in it. Indeed, it has been pointed out [@Mashhoon-1994] that a basic problem of Newtonian mechanics is that the [*extrinsic*]{} state of a point particle, i.e. its appearance in space and time (that usually characterized by its position and velocity), is a *priori* independent of its [*intrinsic*]{} state (that usually characterized by its mass). However in quantum physics, each coordinate (or in another word, position that is the notion of geometry) does not commute with its corresponding momentum (or in another word, dynamics that can be considered as the notion of, moving, mass); or in other words, for each object, these two characters are not simultaneously compatible from an observer view. That is, analogous to the [*complementarity*]{} principle of the particle–wave duality, the issue may be interpreted as in confrontation with everything, it either represents the aspect of geometry or the aspect of matter in one instant depending on the experimental arrangements and/or the initial conditions.
Nevertheless, and principally, the innovation of absolute space came about while the Galilean transformations do not distinguish among the inertial frames either, and thus Newton, in confronting the query of how absolute space can be specified, presented the famous idea of the Newton [*bucket*]{} from the practical point of view [@d'Inverno-1992; @Janssen2005]. However, the Newton bucket provides the distinction of a non–inertial frame, and does not distinguish the inertial frames from each other. That is, any curve or change in the horizontal level of the bucket water does represent the acceleration of the bucket with respect to a frame which is itself either an inertial or a non–inertial one. However, Newton accounted it with respect to an inertial frame (as a criterion to distinguish acceleration), and actually with respect to a specified inertial one, i.e., absolute space. Although in this regard, Mach also interpreted the changes with respect to the average motions of all particles in the universe (or, the distant fixed stars) [@Mach1977]. Indeed, and in a more accurate expression, the Newton discussion was that any curve of the horizontal level of the water cannot be because of its relative rotation with respect to the bucket, however, Mach did consider it as the relative motion between them [@Barbour-1995]. Incidentally, and up to the available experiments, one may also not be able to locally detect an accelerated frame in the large scales, e.g., the rotation of the earth around the sun and/or the rotation of the solar system around the center of our local galaxy, merely by the idea of the Newton bucket.
Before we continue our discussions, it would be also instructive to review the following well–known proposed experiment on the issue. Consider the rotation of the plane of a swinging Foucault pendulum at the earth’s north pole. Within the limits of experimental accuracy,[^5] the remarkable fact is that the times taken for the earth to rotate a complete round with respect to absolute space, and relative to the fixed stars are the same. In Newtonian view, there is nothing a [*priori*]{} to predict this result, and it is just a [*coincidence*]{}. In other words, the result indicates that the fixed stars are not rotating (or, do not have acceleration) relative to absolute space, and can be employed as a criterion to specify the class of inertial frames. However in Machian view, one precisely expects that the two time durations of the measurements must be the same regardless of the accuracy of the instruments, for, in his view, the detected criteria of acceleration are exactly the fixed stars.
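Quantitatively, and only to fix the comparison (this is the textbook Newtonian result, added here for orientation), the plane of a Foucault pendulum precesses, relative to the laboratory, at the rate $$\omega_{\rm prec}=\Omega_{\oplus}\sin\lambda ,$$ where $\Omega_{\oplus}$ is the angular velocity of the earth's rotation and $\lambda$ the latitude; hence, at the pole the precession period equals one sidereal day, about $23\,{\rm h}\,56\,{\rm min}$, which is exactly the rotation period of the earth measured relative to the fixed stars.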
In the historical process, Maxwell offered another way to specify the Newton ether. In the ether theory presented by him [@ref000], absolute space is regarded as a medium for the light propagation, and is specified via the Maxwell equations (or from his point of view, the ether equations). Regarding this theory, due to the [*Fresnel dragging*]{} effect [@French-1966; @Resnick-1968], the speed of light in the other inertial frames is different from the ether one, as the Maxwell equations are not invariant under the Galilean transformations. To investigate the Maxwell ether, the [*Michelson–Morley experiment*]{} [@ref00-1]-[@Shankland2] was performed in the year 1887, and the Maxwell ether was not confirmed. However, to explain the null result of the Michelson–Morley experiment, Fitzgerald proposed a hypothesis in the year 1889 [@ref00-1-1]. According to his hypothesis, when a body moves with a constant velocity with respect to the ether, it will be (really) contracted in the direction of motion.
Lorentz, like Maxwell, believed that the light propagation, similar to the sound, requires a medium, which is the ether, as the characteristic of absolute rest. Thus, he also proposed [@ref00-2; @ref00-3] a hypothesis (although independent, it was actually an elaboration of the Fitzgerald hypothesis) to explain the Michelson–Morley experiment. According to the Lorentz ether theory [@Lorentz1909], a body with a constant velocity with respect to the ether is also contracted[^6] in the direction of motion (the [*Lorentz–Fitzgerald contraction*]{}) and its clocks are slowed down [@Lorentz1904] (i.e., the clock retardation and/or (real) time dilation) when moving through the ether. By his theory, the results of the Michelson–Morley and even the [*Kennedy–Thorndike*]{} [@ref27-1] experiments, the latter being a broader and more general one than the former, are also explained. Incidentally, based on the Lorentz ether theory, and similar to the Newtonian mechanics, the inertial reference frames are related to each other by the Galilean transformations; however, according to the Lorentz–Fitzgerald contraction hypothesis, the form of the Maxwell equations still remains invariant. In this case, as the earth is surrounded by the ether, there are two choices. Either the ether must be dragged by the earth and remains at rest with respect to it, a choice which the [*aberration observation*]{} [@ref0-1; @Stewart-1964] has rejected. Or, the ether must not be dragged by the earth and has a velocity with respect to it, in a way that the Fresnel dragging should be observed for light. This subject was investigated via the [*Fizeau experiment*]{} [@ref0-3]–[@ref0-4], and the Lorentz ether theory has also been able to explain the result of this experiment by the aid of the local time dilation hypothesis. Regarding these facts, there have been some debates, comparisons and investigations about the originality and the equivalence of the Lorentz ether theory with the special theory of relativity ([**STR**]{}); see, e.g., Refs. [@ref27-2; @Ives]–[@Erlichson] and references therein. However, the two theories are logically independent, because obviously, the choice of different postulates principally leads to theories which differ in their simplicity and appeal, although they may observationally be equivalent.
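Quantitatively, and only as a reminder of the standard formulae (not a quotation from the original papers), these two hypotheses state that a rod of rest length $L_{0}$ and a clock of rest period $\Delta t_{0}$, moving with speed $v$ through the ether, satisfy $$L=L_{0}\sqrt{1-v^{2}/c^{2}}\qquad {\rm and}\qquad \Delta t=\frac{\Delta t_{0}}{\sqrt{1-v^{2}/c^{2}}},$$ which are numerically the same relations as in the STR, although here they are interpreted as real dynamical effects of the motion through the ether.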
However, Einstein, and Mach before him, were also among the opponents of the Newton absolute space, and in this regard, they raised two main objections. First, how absolute space, as an inertial frame, can be theoretically distinguished and singled out from all other inertial frames in a unique way. Second, how absolute space can act on every particle and distinguish free particles from other ones, but cannot be acted upon. In general, from the Einstein point of view, the existence of a matter which is completely transparent, whose nature is unspecified and obscure, and whose existence cannot be proven,[^7] was not required. Eventually, in 1905, Einstein, by proposing the STR [@Einstein-1905], attracted most of the attention towards this theory. Actually, Einstein presented his theory by generalizing the Newton absolute space to the Minkowski spacetime [@Minkowski] and extending Newtonian relativity with the Galilean transformations to special relativity with the Lorentz transformations, and could explain all the mentioned experiments very well. However, in the STR, the inertial frames, as the preferred ones, are still the references and the criteria for the absoluteness of the concept of acceleration,[^8] and yet, the difficulty remains in theoretically distinguishing this class of frames from all other frames. Indeed, the Galilean relativity principle, in general, contains the four–dimensional special relativity formulation [@Petkov2005]. Nevertheless, in the STR, as the equivalence of the inertial frames is valid for all physical laws (including the Maxwell equations), the Maxwell ether hypothesis is rejected. However, it cannot verify absolute space (although the mentioned objections to it still hold), nor can it deny its existence even though absolute space cannot be distinguished by intrinsic properties from all other inertial frames. On this point, Einstein was not satisfied with the theory either.
It may be surprising, but, perhaps due to some points (like the mentioned ones), even Mach, whose critiques of the Newtonian mechanics smoothed the way to the relativity theory (at least philosophically), was suspicious of the Einstein theory.[^9] However, due to many experimental verifications of the STR obtained from the wide diversity of different phenomena, the skeptics had to give up; indeed, special relativity is probably the most firmly based and reliably tested theory in contemporary physics. Nevertheless, and despite the enormous experimental robustness of the STR, in the last two decades, due to theoretically posited questions, scientists are eagerly looking for experimental findings that somehow violate the STR [@Iorio2006]. In this regard, the research is particularly focused on experiments that indicate violation of the Lorentz symmetry.[^10] However, and as a rough estimation, the quantum gravity induced Lorentz violation can only be pursued as a theoretical purpose, for the natural scale that one would expect (in this respect and as a strong violation) is the Planck energy of about $10^{19}$ GeV, while the highest known particle energies are those of the ultra high–energy cosmic rays ([**UHECR**]{}) [@ref5]–[@ref6], of about $10^{11}$ GeV, and the present accelerator energies are about $10^{3}$ GeV, which precludes any direct observation of the Planck scale Lorentz violation.
On the other hand, on the way to rejecting absolute space and the absolute concept of acceleration, Einstein himself also attempted to propose the general theory of relativity ([**GTR**]{}) in 1915 [@Janssen2005; @Einstein-1915; @Einstein-1916], inspired by the Mach ideas [@ref27-2; @Barbour-1995; @Lichtenegger-2005] and with the aid of the [*principle of equivalence*]{} of gravitation and inertia [@d'Inverno-1992; @CiufoliniWheeler1995; @Straumann2004] as a guiding principle.[^11] Of course, in the GTR, no way or solution has been provided for determining the inertial frames either, and in fact in this theory, no preference or reference, as a criterion of non–acceleration, is given to these frames. That is, it appears that in the GTR, the question of how to determine the inertial frames has been removed. Nonetheless, in the GTR, contrary to the [*strong version*]{} of the Mach principle (i.e., space is not expressed as an independent essence/substance, but merely as an abstraction from the totality of distance–relations between matter), the spacetime is attained as an independent essence/substance which both acts on the matter and is reacted upon (indeed, it has the weak version of the Mach principle). Meanwhile, the confusion surrounding the principle of equivalence led a physicist like Synge to suggest, in the preface of his book about the GTR [@Synge1960], that this principle has to be set aside and the facts of absolute spacetime be faced. Although, to clear some of the ambiguities about the principle of equivalence, it is emphasized that its statement must be made [*locally*]{}, wherein by locally it is meant a region over which the variation of the gravitational field cannot be detected [@d'Inverno-1992]. Nonetheless, even with this type of statement, it seems that still the ambiguities have not been completely eliminated.[^12]
However, about the two main raised objections on absolute space, in the GTR for elimination of the first objection, there is no need or preference requirement to propose an absolute space. Resolution of the second objection is also expressed by accepting an [*independent essence*]{} (or, substance) for geometry (i.e., space), and actually, by appealing to the weak version of the Mach principle. On the other hand, while forming the GTR, it was specified [@Stachel-1912] via the [*hole argument*]{} (or, problem)[^13] that the point events of the spacetime manifold had been incorrectly thought of as individuated independently of the field itself. That is, it is impossible to drag the metric field away from a physical point in empty spacetime and leave that physical point behind. As Einstein himself wrote [@Einstein-Ehrenfest; @Einstein-Besso] that nothing is physically real but the totality of spacetime point coincidences, and placed [@Einstein-1952] great stress on the inseparability of the metric and the manifold. Hence, the spacetime continuum (i.e., physical events) is the same as space points (or, manifold) that are not separated from the metric (i.e., geometry of space).[^14] In fact, a key lesson raised from the Einstein gravitational theory is that the gravitational field has been inseparably twisted/intricated with the geometry of spacetime, and thus, geometry itself is an impellent essence (or, dynamic).
In this regard, it is worth mentioning that most of the leading relativists in the early twentieth century, for example, Eddington [@Eddington-1921] and even Einstein himself [@ref27-4], claimed that, in principle, the GTR is merely an ether theory.[^15] On this issue, Trautman has asserted [@Trautman-1966] that he has presented the mathematical demonstration of such a claim by obtaining a form of the GTR without spatial curvature. And recently, by employing a combination of Lorentz’s and Kelvin’s conception of the ether,[^16] and actually by using the Lorentz–Kelvin ether theory [@Whittaker-1954; @Whittaker-1951; @Schaffner-1972], the Einstein field equations have been obtained [@Dupre-2012].[^17] Meanwhile, it has also been claimed in Ref. [@Gautreau-2000] that there is an underlying relationship between the GTR and Newton’s absolute time and space (via the existence of a preferred set of coordinates in general relativity[^18] that is equivalent to Newton’s absolute time and space). And even it has been asserted [@Savickas] that, in terms of the Newton laws within 4–dimensional curved geometries, the GTR can be exactly described.
Also, in relevance to the Mach ideas, some physicists just believe that, in his ideas, the inertial frames have been replaced by the average motions of all particles in the universe and the influence of the distribution of matter in the immediate vicinity of any particle, as well as the other distant bodies, determines the inertial frames. Nonetheless, in this kind of belief, this new reference adopts its nature from the whole matter of the universe. On the other hand, in a few recent decades, some novel ideas and theories have been proposed as “geometrical description of physical forces”, “geometrical base of material content of the universe”, “geometrical curvature induces matter” and “induced–matter theory” which usually connect extra dimensions (i.e., geometry) to the properties of the matter [@Salam-1980]–[@Rasouli-2014]. Even some gravitational theories have been considered in which the Lagrangian of the geometry, which is usually supposed to be the characteristic of the geometry alone, just from the beginning, and indeed [*a priori*]{}, is presented as a function of the geometry and the matter [@Harko-2008]–[@ZareFarhoudi]. Now, taking inspiration from these types of ideas and theories, and knowing that, in the Mach ideas, the inertia of a body is not just the intrinsic property of that body, but is [*caused*]{} by the cosmic masses via some interactions (where the influence of the distant bodies preponderates),[^19] one can perhaps attribute this [*cause*]{} to the cosmic background (e.g., the whole geometry of the universe).
Nevertheless, to make the GTR more consistent with even the strong version of the Mach principle, Einstein inserted[^20] [@EinsteinCC] the well–known term of the cosmological constant[^21] into his equations.[^22] However, when de Sitter achieved [@deSitter] his solution for the vacuum GTR plus the cosmological constant term,[^23] Einstein vehemently retracted the inclusion of such a term while describing it as the biggest mistake he ever made [@Gamov1970]. Even in this regard, realizing that the metric field is not a phenomenon resulting from matter but has its own independent existence, Einstein, near the end of his life, gradually decreased his enthusiasm for the Mach principles. Indeed in 1954, he wrote to Pirani that one should no longer speak of the Mach principles at all [@Pais; @Pirani]. Perhaps the main point of the issue is rooted in considering spacetime as a new inertial standard which is directly influenced by the active gravitational mass through the Einstein equations, although, in the absence of mass and other disturbances, still spacetime would straighten itself out into the class of extended inertial frames, contrary to the idea that all inertia is caused by the cosmic masses.
Nonetheless, by considering the necessity of [*conformal symmetry breaking*]{}, the inclusion of the cosmological constant term is still proposed to remedy the inconsistency of the Einstein gravitational theory with the strong version of the Mach principle [@NamFar]. On the other hand, besides confronting the cosmic gravitational collapse (due to gravity among them), there needs to be a kind of repulsive force to explain the recently discovered acceleration of the universe [@Riess-1998]-[@Riess2004], which seems to originate from a “property” of geometry itself or spacetime in global scales, contrary to the well–known forces up to now. In this respect, the Einstein equations including the cosmological constant term have again been considered, and this term is interpreted as the vacuum fluid and the vacuum energy density,[^24] see, e.g., Refs. [@WeinbergBook; @Barrow2011]. Incidentally in this regard, the ether energy–momentum tensor introduced in Ref. [@Dupre-2012] is not dissimilar to this term. Also in the last two decades, in the [*dark energy*]{} issue [@Peebles-2003]–[@Bamba-2012] (an energy that constitutes nearly $69\%$ of the matter density of the universe [@Ade-2013; @Ade-2015]), it seems as if the geometry (or in other words, space) on the cosmological scale has an anti–gravity type of interaction. In essence, in these ideas, both the geometry and matter (in its general meaning, including material and radiation) are [*different aspects*]{} of a “thing” (or in other words, existence), although, even by accepting an independent entity for each one, they would somehow relate to the other one as well (at least, through that “thing”); see, e.g., Ref. [@Farhoudi-2006] and references therein.
In addition to the dark energy issue, the other cosmological observations have indicated [@dmatt1]–[@dmatt5] that there should also be another kind of matter besides the usual baryonic matter, i.e. an exotic fluid called [*dark matter*]{}, which constitutes [@Ade-2013; @Ade-2015] nearly $26\%$ of the matter density of the universe. These two important cosmological problems and, on the other hand, the quantizing difficulty [@bida; @bos; @Farhoudi-2006; @farc] of the Einstein gravity (in spite of its impressive successes) are, in general, the main reasons that have raised the need to investigate generalized or alternative gravitational theories. In this respect, and for instance, one of the alternative theories is the Brans–Dicke gravitational theory [@Brans-1961], which is more consistent with the Mach ideas. In connection to our discussion, also in this theory, there is a kind of matter in the form of a scalar field in the whole space in addition to the usual matter (or, the baryonic matter) [@Dicke-1962]. Actually, while the Brans–Dicke gravity is regarded as the generalized Einstein gravity, its Lagrangian can be converted to the Einstein gravitational Lagrangian plus a scalar field term via the [*conformal transformation*]{} [@Fujii-2004; @Farajollahi-2010]. Meanwhile, in the other gravitational theories of the type of the Brans–Dicke gravity (or in general, the scalar–tensor gravitational theories [@Fujii-2004; @Faraoni-2004; @Capozziello-2011]), in particular, the [*chameleonic*]{} gravitational theories [@Khoury-2004]–[@SabaFarhoudi], by the coupling of a scalar field with the metric (or in other words, space), the dynamics of the scalar field depends on the surrounding background density, which requires the interaction of this scalar field with the usual matter to be of gravitational type. Among different types of the modified gravitational theories, one can also mention the Einstein–ether [@Mattingly-2005; @JacMat2001]–[@Haghani2014] (and references therein) and non–minimal [æ]{}ther–modified [@Furtado-2013] gravity theories. In these theories, in general, the coupling of the Einstein gravity with a dynamical timelike vector field (representing a preferred rest frame, i.e., ether) is considered.
Essentially, one of the three probable assumptions that Brans and Dicke stated in their work [@Brans-1961] is that physical space has intrinsic geometry and inertial properties beyond those that can be achieved from the matter contained therein; however, in their work [@Brans-1961], they proceeded with another assumption, which leads to the Brans–Dicke gravitational theory. Nonetheless, and also according to the Dicke view [@Dicke-1962], the introduced scalar field in this theory is a field that, along with the metric, is described as the gravitational field (or in other words, geometry). In this regard, in ancient times, although Plato did not accept the view of [*void*]{} space and believed that space is a [*plenum*]{} (i.e., a general assembly), his view was also different from the Aristotelian one. In the Plato view, space is an entity that bodies are made out of and cannot exist without [@Adler-1966].[^25] In other words, Plato identified space as that in which things come to be [@Archer-Plato; @Plato].
Moreover, in the last two decades, another two observations, namely the Webb redshift [@Webb-1999; @ref1] and the UHECR [@ref5]–[@ref6; @Albert], have been reported, while the standard Einstein relativity theory is not capable of explaining these two cosmological phenomena. Hence, it was required that some modifications and generalizations be performed on the Einstein relativity. In this respect and up to now, several models have been presented to describe the Webb redshift, including the models for varying the constants that participate in determining the atomic structure [@Barrow-1998]. Among these types of models, one can mention the varying electric charge [@ref2] and the varying speed of light ([**VSL**]{}) [@Barrow-Magueijo-1998]–[@ref4-5] (and references therein) models, where the comparison of these two kinds of models has also been performed in Ref. [@ref3]. On the other side, along with theories such as the non–commutative field theory [@ref18; @ref19], the most reliable models, which attempt to explain the observed UHECR, are known as the doubly–special–relativity or deformed–special–relativity ([**DSR**]{}) [@ref10]–[@ref17]. However, all the available modified models on these subjects have been unsuccessful in justifying these two phenomena via [*a single*]{} model with a single mechanism.
To clarify the latter expression, let us very briefly review how these modified models work. Actually, it would be instructive to present a concise description of these two recent phenomena and an overall explanation of the VSL and the DSR regarding the justification of these two observations.
During the observations of galaxies and distant stars covering the redshift range $0.5<z<3.5$, the Australian group of Webb noticed redshifts that can be justified with a variable fine structure constant [@Webb-1999; @ref1]. Actually, in the standard cosmology, the ratio of the cosmological redshift (due to the expansion of the universe) of the absorption lines spectra of atoms on distant galaxies to the ones of the same atoms in laboratory is predicted to be the same for different amounts of energies of the absorption bands. However, Webb [*et al.*]{} observed that this ratio depends on the quantum numbers and the atomic and the molecular structures of the materials that radiate the corresponding rays, and hence, the structure of the absorption bands should vary due to the redshift caused by the expansion of the universe. As in the standard cosmology, the redshift usually means distant past, thus, the explanation of such a phenomenon has been based on having different atomic structure (and hence, the absorption bands) in distant past with respect to its present structure. In this regard, among the models that aim to explain the Webb redshift, the VSL models are the best option.
In the relativity area and in general, the VSL models can usually be classified into two methods. In one method, new scalar fields are added to the Einstein–Hilbert Lagrangian, and another method is mainly based on changing this Lagrangian itself. In general, the appearance of any scalar field can be arranged somehow to produce a variation in the speed of light, for, naively, it is analogous to having light rays pass through a dielectric medium. It means that the appearance of any dielectric medium causes a variation in the speed of light, and if there is no dielectric in the matter medium, the speed of light remains constant, as in the vacuum. Thus, in these models, the speed of light practically depends on the appearance of the scalar field, by which also, the other cosmological issues, such as inflation, flatness and dark energy, are usually described.
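For comparison, recall the elementary relation for light in an ordinary (non-dispersive) dielectric medium, $$v_{\rm phase}=\frac{c}{n},\qquad n=\sqrt{\epsilon_{r}\mu_{r}},$$ where $n$ is the refractive index; in the VSL models just described, the scalar field plays a role loosely analogous to such a medium, with the absence of the field corresponding, roughly speaking, to $n=1$.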
As a simple prototype, although general, a scalar–tensor action for the VSL models, analogous to the one used in Ref. [@ref2], can be written as $$\label{eq0}
S=\int d^{4}x\sqrt{-g} \left(
L^{[g]}+L^{[\psi]}+L^{[m]}e^{-2\psi}\right),$$ where $L^{[g]}\equiv R/16\pi G$ is the Einstein–Hilbert Lagrangian, $L^{[\psi]}\equiv-\omega(\psi)\partial^{\alpha}\psi\,\partial_{\alpha}\psi/2+V(\psi)$, $V(\psi)$ is a self–interacting potential and $L^{[m]}$ is the matter Lagrangian. Also, $R$ is the Ricci scalar, $\omega(\psi)$ is a varying dimensionless coupling coefficient of the scalar field $\psi$, $g$ is the determinant of the metric and, for simplicity, we have set the speed of light, in the absence of the scalar field, to be $c\left(\psi=0\right)=1$. Variations of this action, with respect to the metric and the scalar field, yield $$\label{eq0-a}
\Square\,
\psi=\frac{1}{\omega}\left(2e^{-2\psi}L^{[m]}-\frac{\omega
'}{2}\partial^{\alpha}\psi\partial_{\alpha}\psi-V'\right)$$ and $$\label{eq0-b}
G_{\mu\nu}=8\pi G\left(
T^{[\psi]}_{\mu\nu}+T^{[m]}_{\mu\nu}e^{-2\psi}\right),$$ where the prime denotes the ordinary derivative with respect to the argument, $\Square\equiv {}_{;\,\rho}{}^{\rho}$ and $T^{[i]}_{\mu\nu}\equiv -(2/\sqrt{-g})\delta (\sqrt{-g}\,
L^{[i]})/\delta g^{\mu\nu}$. Now, by employing the spatially flat homogeneous and isotropic metric of the Friedmann–Lemaître–Robertson–Walker ([**FLRW**]{}) $$\label{eq-metric-FLRW}
ds^{2}=dt^{2}-a^{2}(t)\left( dr^{2}+r^{2}d\Omega^{2}\right)$$ that includes the scale factor $a(t)$, the Friedmann–like equation for a perfect fluid is obtained as $$\label{eq0-c}
\left( \frac{\dot{a}}{a}\right) ^{2}=\frac{8\pi G}{3}\left(
\rho^{[m]}e^{-2\psi}+\rho^{[\psi]}\right),$$ where $\rho^{[m]}$ is the matter density and with the assumption of the scalar field being also homogeneous, we have $\rho^{[\psi]}=\omega\dot{\psi}^{2}/2+V$. Finally, using the cosmological considerations, the resulting equations specify the way that the scalar field and, in turn, the speed of light vary.
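For completeness, and as a straightforward reduction with the above conventions, for a homogeneous scalar field $\psi=\psi(t)$ in the metric (\[eq-metric-FLRW\]) equation (\[eq0-a\]) becomes $$\ddot{\psi}+3\frac{\dot{a}}{a}\dot{\psi}=\frac{1}{\omega}\left(2e^{-2\psi}L^{[m]}-\frac{\omega'}{2}\dot{\psi}^{2}-V'\right),$$ which, together with (\[eq0-c\]), determines the evolution of $\psi$, and hence of the speed of light, once $L^{[m]}$, $\omega(\psi)$ and $V(\psi)$ are specified.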
Meanwhile, in the VSL models, one should note that if the metric is assumed to be $$\label{eq-metric-FLRW-mofat}
ds^{2}=c^{2}(t)dt^{2}-a^{2}(t)\left(
dr^{2}+r^{2}d\Omega^{2}\right),$$ it cannot by itself be used as indicating a variation of the local speed of light (i.e., of the one that travels along the null geodesics), and hence, as a local violation of the Lorentz symmetry. This is because the speed of light, in the absence of matter, is then just a criterion of the variation of time with respect to place, and this kind of variation does not have an interesting meaning. Indeed, the variation of the speed of light and the local violation of the Lorentz symmetry, on the Riemannian manifold, can be considered only in two cases. Either the assumption is that light rays travel on a manifold and observers are on another one, in which case light rays obviously do not travel along the null geodesics of the observers. As an example of this case, we can mention the “induced–matter” models [@Wesson-1999; @Wesson-2006] and some of the multi–metric models [@Alexander-2000]. Or, as another case, there exist some fields on the Riemannian manifold that, by interaction with light, prevent light rays from traveling along the null geodesics [@ref23]–[@ref24].
Among the VSL models, it is worth mentioning the bimetric model, e.g. Refs. [@ref23]–[@ref24], in which the effective metric of light and matter, $\breve{g}_{\alpha\beta}$, differs from the spacetime metric $g_{\alpha\beta}$ as $$\label{eq0-d}
\breve{g}_{\alpha\beta}\equiv
g_{\alpha\beta}+B\partial_{\alpha}\psi\,\partial_{\beta}\psi ,$$ where $B$ is a constant coefficient with the dimension of the inverse of the energy density. The corresponding action is $$\label{eq0-e}
S=\int d^{4}x\sqrt{-g} \left(
L^{[g]}+L^{[\psi]}+\frac{\sqrt{-\breve{g}}}{\sqrt{-g}}\breve{L}^{[m]}\right),$$ where all the terms are as in action (\[eq0\]) except that here, $\omega$ is constant and the matter Lagrangian is a function of the effective metric $\breve{g}_{\alpha\beta}$. In this model, light rays do not travel along the null geodesics of the spacetime metric, and thus, there exist local variations of the speed of light with respect to the speed of gravitons [@ref23]–[@ref24]. Then, variations of this action, with respect to the metric and the scalar field, yield $$\label{eq0-g}
\Square\,
\psi=\frac{1}{\omega}\left(B\frac{\sqrt{-\breve{g}}}{\sqrt{-g}}\breve{T}^{[m]\,\mu\nu}\,
\breve{\nabla}_{\mu}\breve{\nabla}_{\nu}\psi-V'\right)$$ and $$\label{eq0-h}
G_{\mu\nu}=8\pi G\left(
T^{[\psi]}_{\mu\nu}+\frac{\sqrt{-\breve{g}}}{\sqrt{-g}}\breve{T}_{\mu\nu}^{[m]}\right).$$ And again, under the same conditions as before, the corresponding Friedmann–like equation for a perfect fluid is $$\label{eq0-i}
\left( \frac{\dot{a}}{a}\right)^{2}=\frac{8\pi G}{3}\left(
\frac{\sqrt{-\breve{g}}}{\sqrt{-g}}\breve{\rho}^{[m]}+\rho^{[\psi]}\right),$$ which eventually, with the aid of the cosmological considerations, specifies how the scalar field and, in turn, the speed of light vary with time.
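As a quick illustrative check (a sketch of our own, assuming a homogeneous scalar field in the FLRW background (\[eq-metric-FLRW\]), and not taken from Refs. [@ref23]–[@ref24]), the only modified component of the effective metric (\[eq0-d\]) is then $\breve{g}_{00}=1+B\dot{\psi}^{2}$, and a radial light ray, being null with respect to $\breve{g}_{\alpha\beta}$, has its coordinate speed determined by $$0=\breve{g}_{\alpha\beta}\,dx^{\alpha}dx^{\beta}=\left(1+B\dot{\psi}^{2}\right)dt^{2}-a^{2}(t)\,dr^{2}\qquad\Longrightarrow\qquad a\,\frac{dr}{dt}=\sqrt{1+B\dot{\psi}^{2}}\,,$$ so light propagates faster than gravitons (which follow the null cone of $g_{\alpha\beta}$ with unit speed) whenever $B\dot{\psi}^{2}>0$.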
As mentioned, the other important recent observation that is considered a challenge for the STR and the Lorentz symmetry is the UHECR [@ref5]–[@ref6; @Albert]. At first, let us briefly describe this phenomenon. When particles reach certain energies, then, due to interaction with the background radiation (which can be the cosmic infrared background [@ref7-1; @ref7] and/or the cosmic microwave background [@ref7-2]), they either are significantly absorbed through pair–production (as for the high–energy photons), or their energies are reduced via photon–pion production (as for the ultra high–energy protons and neutrons). Hence, the ultra high–energy cosmic particles have limited life–times, and thus can travel only limited distances. In this regard, in the year 1966, in two distinct papers [@ref8; @ref9], the threshold energies were specified for the distances that can be traveled by the ultra high–energy cosmic particles (depending on the amount of their energies), using calculations based on the quantum field theory and the Lorentz symmetry. These threshold energies, derived via the STR and denoted $E_{\rm th-SR}$, are known as the [**GZK**]{} threshold after Greisen–Zatsepin–Kuzmin. According to these calculations, the threshold energy for the high–energy photons is about $10^{4}$ GeV, and for the ultra high–energy protons about $5\times 10^{10}$ GeV [@ref5]. Nevertheless, ultra high–energy protons and photons have been observed whose energies exceed the corresponding calculated threshold energies [@ref5]–[@ref6]. On the other hand, as there is no source for such ultra high–energy particles inside our galaxy, these particles have, in principle, been able to travel extragalactic distances. Therefore, this observation implies longer life–times for these particles than the ones calculated based on the Lorentz symmetry and the standard quantum field theory [@ref5-1; @ref6].
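As a rough consistency check of the quoted numbers (our estimate, with an assumed typical soft–photon energy): for head–on pair–production $\gamma\gamma_{\rm IR}\rightarrow e^{+}e^{-}$ the STR threshold follows from $s=4EE_{\rm IR}\geq 4m_{e}^{2}$, i.e. $$E_{\rm th-SR}\simeq\frac{m_{e}^{2}}{E_{\rm IR}}\sim\frac{(0.511\ {\rm MeV})^{2}}{2.5\times 10^{-2}\ {\rm eV}}\approx 10^{4}\ {\rm GeV},$$ in agreement with the value quoted above for the high–energy photons.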
To explain the observation of the UHECR, as mentioned, the most reliable approach is the DSR [@ref10]–[@ref17], which, inspired by the notion of quantum gravity, contains two invariant scales, e.g., the speed of light and the Planck energy. Of course, within the realm of the DSR, different models have been presented. Some of these models are based on quantum groups[^26] and non–commutative geometry [@ref11; @ref16], where the corresponding quantum groups are related to Hopf algebras. In this type of model, either the corresponding non–commutative geometry relation is one of the results obtained from the quantum–group relations [@ref11], or, starting from the non–commutative geometry assumption, the corresponding quantum–group relations are derived [@ref16]. Some other models of the DSR are based on the projective linear group [@ref12; @ref13; @ref14]. As a prototype of these models, the Magueijo–Smolin model [@ref12] can be mentioned, in which, by substituting the Fock–Lorentz [@Fock1964] inertial transformations for the Lorentz ones, new transformations, named Magueijo–Smolin, have been defined on the energy–momentum space. Then, through these new transformations, a specified energy scale (for instance, the Planck energy) is set as an invariant for different inertial observers. Also, some other models of the DSR are based on deformations of the generators of the Lorentz group [@ref10; @ref17].
However, in general, the common key point in all of the models [@ref18]–[@ref17] that explain the observation of the UHECR is that the linear field equations and also the energy–momentum relations are somehow replaced by non–linear ones. For instance, in Ref. [@ref10], the dispersion relation (which usually specifies the connection between the energy, momentum and mass through the Klein–Gordon equation, and is also the indicator of a linear wave equation) is modified by a length parameter (for instance, the Planck length, $\ell_{\rm p}$) acting as a Lorentz violation parameter. Thus, in this way, via the modified dispersion relation and the conservation laws of energy and momentum, the value of the threshold energy, $E_{\rm th-SR}$, increases to a new value, $E_{\rm th}$. For example, in Refs. [@ref10; @ref17], the change in the dispersion relation for an ultra–relativistic (i.e., $E\gg m$) particle of mass $m$ and energy $E$ has been considered, via the Lorentz violation parameter and to leading order in the Planck length, as the non–linear form $$\label{eq0-1}
E^{2}\simeq p^{2}+m^{2}+\varepsilon\,\ell_{\rm p}\, p^{2}E,$$ where $p$ is the momentum of the particle and $\varepsilon=\pm 1$ depending on the model under consideration. Note that, as we have set $c=1=\hbar$, the Planck length has the dimension of inverse energy, and in fact the Planck energy is the invariant (observer–independent) maximum energy scale, analogous to the speed of light for the speed of particles. Also, although relation (\[eq0-1\]) is not invariant under the Lorentz transformations, it is invariant under a sort of amended Lorentz transformations depending on the considered model of the DSR. Then, as the threshold energy of a high–energy particle is the energy at which the particle can interact with the background radiation, with the aid of relation (\[eq0-1\]) and the conservation laws of energy and momentum before and after the collision, the threshold energy obtained in the case of the Lorentz symmetry, $E_{\rm th-SR}$, is amended, up to first order in the Planck length, to [@ref17] $$\label{eq0-2}
E_{\rm th}+\varepsilon\,\ell_{\rm p}\frac{E_{\rm th}^{3}}{8E_{\rm
IR}}\simeq E_{\rm th-SR},$$ where $E_{\rm IR}$ is the background infrared (soft–photon) energy and $E_{\rm th}$ is the physical (amended) threshold energy. Nonetheless, we should also mention that there are some problems in the models of the DSR, such as the lack of a standard approach for deriving the DSR and the absence of a unique type of spacetime transformations, which have been discussed in Ref. [@ref17]. Moreover, it has been argued [@Hossenfelder] that a DSR with an energy–dependent speed of light has some inconsistencies, and that present–day observations in particle physics rule out its first–order modification of the speed of light.
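For clarity, we spell out the small inversion step that is implicit here (our addition): writing $E_{\rm th}=E_{\rm th-SR}+\delta$ with $\delta=\mathcal{O}(\ell_{\rm p})$ in relation (\[eq0-2\]) and keeping only the first order in $\ell_{\rm p}$ gives $$E_{\rm th}\simeq E_{\rm th-SR}-\varepsilon\,\ell_{\rm p}\,\frac{E_{\rm th-SR}^{3}}{8E_{\rm IR}}\,,$$ so that, for $\varepsilon=-1$, the physical threshold is indeed shifted upward, and the shift grows cubically with the unmodified threshold energy.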
Now, let us consider our earlier remarks about these two types of models. First, in the VSL models, the scalar (or vector) fields that are introduced to describe the variation of the speed of light essentially need to be assumed almost constant (or very slowly varying) over small cosmological intervals, in order to comply with the recent cosmological considerations. Thus, these models are of no use for justifying the observed UHECR, which operate precisely over such intervals. On the other hand, although the DSR has emerged as an effective VSL model, it accounts only for the observational implications of the UHECR [@Albert; @Magueijo; @ref23; @ref41]. Indeed, with the speed of light as a function of energy, it predicts a variation of the speed of light only in the range of ultra high energies. Thus, such a variation of the speed of light in the DSR is of no use for justifying the observed Webb redshift, which indicates variations of the speed of light at low energies. Therefore, there is no single model that justifies these two observed phenomena via a single mechanism. In this regard, in another work [@YouFar], we have defined and introduced a new type of ether model, consistent with the Mach ideas, that can justify both of the observed phenomena.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank the Research Office of Shahid Beheshti University for the financial support.
[1]{} W. Rindler, “*Relativity: Special, General and Cosmological*", (Oxford University Press, Oxford, 2nd Ed. 2006). I. Newton, “[*Philosophi[æ]{} Naturalis Principia Mathematica*]{}", (Streater, London, 1st Ed. 1687), Final Ed. in English by: A. Motte, 1729, Revised by: A. Cajori, “[*Sir Isaac Newton’s Mathematical Principles of Natural Philosophy and His System of the World*]{}", (University of California Press, Berkeley, 1962). F. Michael, “*Leibniz’s Metaphysics of Time and Space*", (Springer, Heidelberg, 2008). R. Dean, “*Symmetry, Structure and Spacetime*", (Elsevier, Oxford, 2008). E. Erlichson, “The Leibniz–Clarke controversy: Absolute versus relative space and time", *Am. J. Phys.* **35** (1967), 89. M. Čapek \[Editor\], “*The Concepts of Space and Time, Their Structure and Their Development*", Boston Studies in The Philosophy of Science, Vol. **74**, (Reidel Publishing Company, Boston, 1976). J. Earman and J. Norton, “What price spacetime substantivalism? The hole story”, *British J. Phil. Sci.* **38** (1987), 515. G. Berkley, “De Motu or The principle and nature of motion and the cause of the communication of motions", (1721). J.C. Maxwell, “A dynamical theory of the electromagnetic field", *Roy. Soc.* **155** (1865), 459. This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society. I. Adler, “*A New Look at Geometry*", (John Day Company, NewYork, 1966). R.P. Hardie and R.K. Gaye \[Translators\], “*The Works of Aristotle*", Vol. **2** “*Physica*", (Clarendon Press, Oxford, 1930). Aristotle, “*Physics*" (Oxford World’s Classics), Edited by: D. Bostock, Translated by: R. Waterfield, (Oxford University Press, Oxford, 2008). B. Mashhoon, H. Liu and P.S. Wesson, “*Space–time–matter*", In: “*Proceedings 7th Marcel Grossmann Meeting*", Stanford (1994), pp. 333–335. R. d’Inverno, “*Introducing Einstein’s Relativity*", (Clarendon Press, Oxford, 1992). M. Janssen, “Of pots and holes: Einstein’s bumpy road to general relativity”, *Ann. Phys. (Berlin)* **14** Supplement, (2005), 58. E. Mach, “*La Meccanica nel suo Sviluppo Storico–Critico (Mechanics in Its Development Historical–Critical)*”, (Boringhieri, Torino, 1977), Italian translation from the original 9th German Ed. of 1933 (1st Ed. 1883). Also published as “[*The Science of Mechanics: A Critical and Historical Account of Its Development*]{}”, (Open Court, Illinois, 1960). J. Barbour and H. Pfister \[Editors\], “*Mach’s Principle: From Newton’s Bucket to Quantum Gravity*”, Einstein Studies, Vol. **6**, (Birkhäuser, Boston, 1995). J. Lense and H. Thirring, “Über den Einfluss der Eigenrotation der Zentralkörper auf die Bewegung der Planeten und Monde nach der Einsteinschen Gravitationstheorie (About the influence of the self–rotation of cenral body to the movement of planets and moons according to Einstein’s theory of gravitation)”, [*Physik. Z.*]{} [**19**]{} (1918), 156. B. Mashhoon, F.W. Hehl and D.S. Theiss, “On the gravitational effects of rotating masses: The Thirring–Lense papers”, *Gen. Rel. Grav.* [**16**]{} (1984), 711. I. Ciufolini, “The 1995–99 measurements of the Lense–Thirring effect using laser–ranged satellites”, *Class. Quant. Grav.* (2000), 2369. F. Everitt, *et al.*, “Gravity Probe B: Final results of a space experiment to test general relativity”, *Phys. Rev. Lett.* (2011), 221101. L. Iorio, “Some considerations on the present–day results for the detection of frame–dragging after the final outcome of GP–B”, *Europhys. Lett.* [**96**]{} (2011), 30001. A.P. 
French, “*Special Relativity*", (W.W. Norton, New York, 1966). R. Resnick, “*Introduction to Special Relativity*", (Wiley, New York, 1968). A.A. Michelson and E.W. Morley, “On the relative motion of the earth and the luminiferous ether", *Am. J. Sci.* **34** (1887), 333. R.S. Shankland, S.W. McCuskey, F.C. Leone and G. Kuerti, “New analysis of the interferometer observations of Dayton C. Miller", *Rev. Mod. Phys.* **27** (1955), 167. R.S. Shankland, “Michelson–Morley experiment", *Am. J. Phys.* **32** (1964), 16. F. Fitzgerald, “The ether and the earth’s atmosphere", *Sci.* **13** (1889), 390. H.A. Lorentz, “La théorie électromagnétique de Maxwell et son application aux corps mouvants (The electromagnetic theory of Maxwell and its application to moving bodies)", *Arch. Néerl. Sci. Ex. Nat.* **25** (1892), 363. H.A. Lorentz, “The relative motion of the earth and the aether", *Zitt. Akad. V. Wet.* **1** (1892), 74. H.A. Lorentz, “[*The Theory of Electrons and Its Applications to The Phenomena of Light and Radiatiant Heat*]{}", (Columbia University Press, New York, 1909; Dover Publications, New York, 1952). H.A. Lorentz, “Electromagnetic phenomena in a system moving with any velocity less than that of light", *Proc. Acad. Sci. Amsterdam* **6** (1904), 809. Reprinted in: “[*The Principle of Relativity: A Collection of Original Memoirs on The Special and General Theory of Relativity*]{}”, by: H.A. Lorentz, A. Einstein, H. Minkowski and H. Weyl, Translated by: W. Perrett and G.B. Jeffery, (Dover Publications, New York, 1952), pp. 9–34. R.J. Kennedy and E.M. Thorndike, “Experimental establishment of the relativity of time", *Phys. Rev.* **42** (1932), 400. J. Bradley, “New discovered motion of the fixed stars", *Phil. Trans. Roy. Soc.* **35** (1727), 637. A.B. Stewart, “The discovery of stellar aberration", *Sci. Am.* **210** (March 1964), 100. A.A. Michelson and E.W. Morley, “Influence of motion of the medium on the velocity of light", *Am. J. Sci.* **31** (1886), 377. H.R. Bilger and W.K. Stowell, “Light drag in a ring laser: An improved determination of the drag coefficient", *Phys. Rev. A* **16** (1977), 313. G.A. Sanders and S. Ezekiel, “Measurement of Fresnel drag in moving media using a ring resonator technique", *J. Opt. Soc. Am. B* **5** (1988), 674. H.E. Ives, “Historical note on the rate of a moving atomic clock", *J. Opt. Soc. Am.* **37** (1947), 810. E.T. Whittaker, “*A History of The Theories of [Æ]{}ther and Electricity: The Modern Theories 1900–1926*", (Nelson, London, 1953; Harper, New York, 1960; Humanities Press, London, 1973). G. Holton, “On the origins of the special theory of relativity", *Am. J. Phys.* **28** (1960), 627. W. Rindler, “Einstein’s priority in recognizing time dilation physically", *Am. J. Phys.* **38** (1970), 1111. H. Erlichson, “The rod contraction–clock retardation ether theory and the special theory of relativity", *Am. J. Phys.* **41** (1973), 1068. A.G. Riess, *et al.*, “Observational evidence from supernovae for an accelerating universe and a cosmological constant", *Astron. J.* **116** (1998), 1009. S. Perlmutter, *et al.* \[The Supernova Cosmology Project\], “Measurements of Omega and Lambda from $42$ high–redshift supernovae", *Astrophys. J.* **517** (1999), 565. A.G. Riess, [*et al.*]{}, “BV RI light curves for $22$ type Ia supernovae", *Astron. J.* **117** (1999), 707. A.G. Riess, *et al.*, “Type Ia supernova discoveries at $ z>1$ from the Hubble space telescope: Evidence for past deceleration and constraints on dark energy evolution", *Astrophys. 
J.* **607** (2004), 665. A. Einstein, “Zur Elektrodynamik bewegter Körper", *Ann. Phys. (Berlin)* **322** (1905), 891. Its English version: “On the electrodynamics of moving bodies", In: “[*The Principle of Relativity: A Collection of Original Memoirs on The Special and General Theory of Relativity*]{}”, by: H.A. Lorentz, A. Einstein, H. Minkowski and H. Weyl, Translated by: W. Perrett and G.B. Jeffery, (Dover Publications, New York, 1952), pp. 35–65. H. Minkowski, “Raum und Zeit”, *Jber. Deutsch. Math.–Verein.* **18** (1909), 75. Address delivered at the 80th Assembly of German Natural Scientists and Physicians, Cologne, Sept. 21, 1908. Its English version: “Space and time”, In: “[*The Principle of Relativity: A Collection of Original Memoirs on The Special and General Theory of Relativity*]{}”, by: H.A. Lorentz, A. Einstein, H. Minkowski and H. Weyl, Translated by: W. Perrett and G.B. Jeffery, (Dover Publications, New York, 1952), pp. 73–91. M. Born, “Die Theorie des starren Elektrons in der Kinematik des Relativitätsprinzips (The theory of rigid electron in the kinematics of principle of relativity)”, *Ann. Phys. (Berlin)* **335** (1909), 1. P. Ehrenfest, “Gleichförmige Rotation starrer Körper und Relativitätstheorie (Uniform rotation of rigid bodies and theory of relativity.)”, *Physik. Z.* **10** (1909), 918. V. Petkov, “*Relativity and The Nature of Spacetime*”, (Springer, Berlin, 2005). A. Iorio, “Three questions on Lorentz violation”, *J. Phys. Conf. Ser.* **67** (2007), 012008. D. Mattingly, “Modern tests of Lorentz invariance", *Living Rev. Rel.* **8** (2005), 5. K. Shinozaki, [*et al.*]{} \[AGASA Collaboration\], “AGASA results", *Nucl. Phys. B* **136** (2004), 18. F.W. Stecker, M.A. Malkan and S.T. Scully, “Intergalactic photon spectra from the far–IR to the UV Lyman limit for $0<z<6$ and the optical depth of the universe to high–energy gamma rays", *Astrophys. J.* **648** (2006), 774. R.U. Abbasi, [*et al.*]{}, “First observation of the Greisen–Zatsepin–Kuzmin suppression", *Phys. Rev. Lett.* **100** (2008), 101101. J. Drożdżyński, “Evidence for an invalidity of the principle of relativity”, *J. Mod. Phys.* **2** (2011), 1247. O. Sela, B. Tamir, S. Dolev and A.C. Elitzur, “Can special relativity be derived from Galilean mechanics alone?”, *Found. Phys.* **39** (2009), 499. S. Coleman and S.L. Glashow, “High–energy tests of Lorentz invariance", *Phys. Rev. D* **59** (1999), 116008. A. Einstein, “Die Feldgleichungen der Gravitation (The field equations of gravitation)", *Preuss. Akad. Wiss. Berlin Sitz.* **17** (1915), 844. A. Einstein, “Die Grundlage der allgemeinen Relativitätstheorie", *Ann. Phys. (Berlin)* **354** (1916), 769. Its English version: “The foundation of the general theory of relativity", In: “[*The Principle of Relativity: A Collection of Original Memoirs on The Special and General Theory of Relativity*]{}”, by: H.A. Lorentz, A. Einstein, H. Minkowski and H. Weyl, Translated by: W. Perrett and G.B. Jeffery, (Dover Publications, New York, 1952), pp. 109–164. H. Lichtenegger and B. Mashhoon, “Mach’s principle", In: “*The Measurment of Gravitomagnetism: A Challenging Enterprise*", Edited by: L. Iorio, (NOVA Science, Hauppage, New York, 2005), pp. 13–27, *arXiv: physics/0407078*. I. Ciufolini and J.A. Wheeler, “[*Gravitation and Inertia*]{}”, (Princeton University Press, Princeton, 1995). N. Straumann, “[*General Relativity With Applications to Astrophysics*]{}”, (Springer, Berlin, 2004). J. 
Mehra, “[*Einstein, Hilbert, and The Theory of Gravitation*]{}”, (Reidel Publishing Company, Holland, 1974). A. Pais, “[*Subtle Is The Lord, The Science and The Life of Albert Einstein*]{}”, (Oxford University Press, Oxford, 1982). J. Stachel, “Einstein’s struggle with general covariance, 1912–1915", Presented at General Relativity and Gravitation 9th, 1980 at Jena, Germany; Reprinted as “Einstein’s search for general covariance, 1912–1915", In: “*Einstein and The History of General Relativity*", based on the Proceedings of May 1986, Osgood Hill Conference, Massachusetts, Edited by: D. Howard and J. Stachel, (The Center for Einstein Studies, Boston University, 1989), pp. 63–100. J. Stachel, “What a physicist can learn from the discovery of general relativity”, In: “[*Proceedings of The Fourth Marcel Grossmann Meeting on General Relativity*]{}”, Edited by: R. Ruffini, (North–Holland, Amsterdam, 1986), pp. 1857–1862. J. Norton, “How Einstein found his field equations, 1912–1915”, In: “[*Einstein and The History of General Relativity*]{}”, based on the Proceedings of May 1986, Osgood Hill Conference, Massachusetts, Edited by: D. Howard and J. Stachel, (The Center for Einstein Studies, Boston University, 1989), pp. 101–159. It is reprinted from “[*Historical Studies in The Physical Sciences*]{}”, Vol. **14**, Part 2, Edited by: J.L. Heilbron, (The Regents of The University of California, Berkeley, 1984), pp. 253–316. J.L. Synge, “[*Relativity: The General Theory*]{}”, (North–Holland, Amsterdam, 1960). N.D. Birrell and P.C.W. Davies, “[*Quantum Fields in Curved Space*]{}”, (Cambridge University Press, Cambridge, 1982). I.L. Buchbinder, S.D. Odintsov and I.L. Shapiro, “[ *Effective Action in Quantum Gravity*]{}”, (Institute of Physics Publishing, Bristol, 1992). A. Einstein and M. Grossmann, “Entwurf einer verallgemeinerten Relativitätstheorie und einer Theorie der Gravitation (Draft of a generalized relativity theory and a theory of gravitation)", *Z. Math. Phys.* **62** (1913), 225. A. Einstein and M. Grossmann, “Kovarianzeigenschaften der Feldgleichungen der auf die verallgemeinerte Relativitätstheorie gegründeten Gravitationstheorie (Covariance properties of the field equations of the gravitational theory based on generalized relativity)", *Z. Math. Phys.* **63** (1914), 215. A. Einstein wrote to: P. Ehrenfest, on 26th December, 1915, EA 9–363. A. Einstein wrote to: M. Besso, on 3rd January, 1916, In: “*Albert Einstein, Michele Besso Correspondence 1903–1955*”, Edited by: P. Speziali, (Hermann, Paris, 1972), pp. 63–64. A. Einstein, “*Relativity and The Problem of Space (1952)*", Appendix $5$, In: “*Relativity, The Special and The General Theory: A Popular Exposition*", Translated by: R.W. Lawson, (Methuen, London, 15th Ed. 1954), pp. 135–157. A.S. Eddington, “ ‘Space’ or ‘[Æ]{}ther’?", *Nature* **107** (1921), 201. A. Einstein, “*Äther und Relativitätstheorie (Ether and Relativity Theory)*", (Springer, Berlin, 1920), reprinted as “*Sidelights on Relativity*", (Dover Publications, New York, 1983). A. Trautman,“Comparison of Newtonian and relativistic theories of space–time", In: “*Perspectives in Geometry and Relativity*", Edited by: B. Hoffmann, (Indiana University Press, Bloomington, 1966), pp. 413–425. E.T. Whittaker, “*A History of The Theories of [Æ]{}ther and Electricity: The Classical Theories*", (Nelson, London, 2nd Ed. 1951; Tomash Publishers, New York, 1987). K.F. Schaffner, “*Nineteenth–Century [Æ]{}ther Theories*", (Pergamon Press, New York, 1972). M.J. Dupré and F.J. 
Tipler, “General relativity as an [æ]{}ther theory", *Int. J. Mod. Phys. D* **21** (2012), 1250011. R. Gautreau, “Newton’s absolute time and space in general relativity", *Am. J. Phys.* **68** (2000), 350. D. Savickas, “General relativity exactly described in terms of Newton’s laws within curved geometries”, *Int. J. Mod. Phys. D* **23** (2014), 1430018. A. Salam, “Gauge unification of fundamental forces", *Rev. Mod. Phys.* **52** (1980), 525. P.S. Wesson and J. Ponce de Leon, “Kaluza–Klein equations, Einstein’s equations, and an effective energy–momentum tensor", *J. Math. Phys.* **33** (1992), 3883. C. Romero, R. Tavakol and R. Zalaletdinov, “The embedding of general relativity in five dimensions", *Gen. Rel. Grav.* **28** (1996), 365. J.M. Overduin and P.S. Wesson, “Kaluza–Klein gravity", *Phys. Rep.* **283** (1997), 303. P.S. Wesson, “*Space–Time–Matter: Modern Kaluza–Klein Theory*", (World Scientific, Singapore, 1999). P.S. Wesson, “*Five–Dimensional Physics: Classical and Quantum Consequences of Kaluza–Klein Cosmology*", (World Scientific, Singapore, 2006). A.F. Bahrehbakhsh, M. Farhoudi and H. Shojaie, “FRW cosmology from five dimensional vacuum Brans–Dicke theory", *Gen. Rel. Grav.* **43** (2010), 847. S.M.M. Rasouli, M. Farhoudi and H.R. Sepangi “Anisotropic cosmological model in modified Brans–Dicke theory", *Class. Quant. Grav.* **28** (2011), 155004. A.F. Bahrehbakhsh, M. Farhoudi and H. Vakili, “Dark energy from fifth dimensional Brans–Dicke theory", *Int. J. Mod. Phys. D* **22** (2013), 1350070. S.M.M. Rasouli, M. Farhoudi and P.V. Moniz, “Modified Brans–Dicke theory in arbitrary dimensions", *Class. Quant. Grav.* **31** (2014), 115002. T. Harko, “Modified gravity with arbitrary coupling between matter and geometry", *Phys. Lett. B* **669** (2008), 376. T. Harko, F.S.N. Lobo, S. Nojiri and S.D. Odintsov, “$f(R,T)$ gravity", *Phys. Rev. D* **84** (2011), 024020. Y. Bisabr, “Modified gravity with a nonminimal gravitational coupling to matter", *Phys. Rev. D* **86** (2012), 044025. M. Jamil, D. Momeni, R. Muhammad and M. Ratbay, “Reconstruction of some cosmological models in $f(R,T)$ gravity", *Eur. Phys. J. C* **72** (2012), 1999. F.G. Alvarenga, A. de la Cruz–Dombriz, M.J.S. Houndjo, M.E. Rodrigues and D. Saez–Gomez, “Dynamics of scalar perturbations in $f(R,T)$ gravity", *Phys. Rev. D* **87** (2013), 103526. Z. Haghani, T. Harko, F.S.N. Lobo, H.R. Sepangi and S. Shahidi, “Further matters in space–time geometry: $f(R,T,R_{\mu\nu}T^{\mu\nu})$ gravity", *Phys. Rev. D* **88** (2013), 044023. H. Shabani and M. Farhoudi, “$f(R,T)$ cosmological models in phase–space", *Phys. Rev. D* **88** (2013), 044048. H. Shabani and M. Farhoudi, “Cosmological and solar system consequences of $f(R,T)$ gravity models”, *Phys. Rev. D* **90** (2014), 044031. R. Zaregonbadi and M. Farhoudi, “Late time acceleration from matter–curvature coupling”, submitted to journal. A. Einstein, “Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie”, [*Preuss. Akad. Wiss. Berlin, Sitz.*]{} (1917), 142. Its English version: “Cosmological considerations on the general theory of relativity”, In: “[*The Principle of Relativity: A Collection of Original Memoirs on The Special and General Theory of Relativity*]{}”, by: H.A. Lorentz, A. Einstein, H. Minkowski and H. Weyl, Translated by: W. Perrett and G.B. Jeffery, (Dover Publications, New York, 1952), pp. 175–188. W. de Sitter, “On the curvature of space", *Proc. Kon. Ned. Acad. Wet.* **20** (1918), 229. E.P. 
Hubble, “A relation between distance and radial velocity among extragalactic nebulae”, *Proc. Nat. Acad. Sci. USA* **15** (1929), 169. G. Gamow, “*My World Line, An Informal Autobiography*”, (Viking, New York, 1970). A. Einstein wrote to: F. Pirani, 1954, EA 17–448. N. Namavarian and M. Farhoudi, “Cosmological constant implementing Mach principle in general relativity", submitted to journal. S. Weinberg, “The cosmological constant problem”, *Rev. Mod. Phys.* **61** (1989), 1. S.M. Carroll, “The cosmological constant", *Living. Rev. Rel.* **4** (2001), 1. V. Sahni, “The cosmological constant problem and quintessence", *Class. Quant. Grav.* **19** (2002), 3435. S. Nobbenhuis, “Categorizing different approaches to the cosmological constant problem”, *Found. Phys.* **36** (2006), 613. H. Padmanabhan and T. Padmanabhan, “CosMIn: The solution to the cosmological constant problem", *Int. J. Mod. Phys. D* **22** (2013), 1342001. D. Bernard and A. LeClair, “Scrutinizing the cosmological constant problem and a possible resolution", *Phys. Rev. D* **87** (2013), 063010. S. Weinberg, “*Cosmology*”, (Oxford University Press, Oxford, 2008). J.D. Barrow and D.J. Shaw, “The value of the cosmological constant", *Gen. Rel. Grav.* **43** (2011), 2555. P.J.E. Peebles and B. Ratra, “The cosmological constant and dark energy", *Rev. Mod. Phys.* **75** (2003), 559. T. Padmanabhan, “Cosmological constant–the weight of the vacuum”, *Phys. Rep.* **380** (2003), 235. D. Polarski, “Dark energy: Current issues", *Ann. Phys. (Berlin)* **15** (2006), 342. E.J. Copeland, M. Sami and S. Tsujikawa, “Dynamics of dark energy", *Int. J. Mod. Phys. D* **15** (2006), 1753. R. Durrer and R. Maartens, “Dark energy and dark gravity: Theory overview", *Gen. Rel. Grav.* **40** (2008), 301. K. Bamba, S. Capozziello, S. Nojiri and S.D. Odintsov, “Dark energy cosmology: The equivalent description via different theoretical models and cosmography tests", *Astrophys. Space Sci.* **342** (2012), 155. P.A.R. Ade, [*et al.*]{} \[Planck Collaboration\], “Planck 2013 results. XVI. Cosmological parameters", *Astron. Astrophys.* **571** (2014), A16. P.A.R. Ade, [*et al.*]{} \[Planck Collaboration\], “Planck 2015 results. XIII. Cosmological parameters", *arXiv: 1502.01589*. M. Farhoudi, “On higher order gravities, their analogy to GR, and dimensional dependent version of Duff’s trace anomaly relation", *Gen. Rel. Grav.* **38** (2006), 1261. G. Bertonea, D. Hooperb and J. Silk, “Particle dark matter: Evidence, candidates and constraints", *Phys. Rep.* **405** (2005), 279. J. Silk, “Dark matter and galaxy formation", *Ann. Phys. (Berlin)* **15** (2006), 75. J.L. Feng, “Dark matter candidates from particle physics and methods of detection", *Annu. Rev. Astron. Astrophys.* **48** (2010), 495. L. Bergström, “Dark matter evidence, particle physics candidates and detection methods", *Ann. Phys. (Berlin)* **524** (2012), 479. C.S. Frenk and S.D.M. White, “Dark matter and cosmic structure", *Ann. Phys. (Berlin)* **524** (2012), 507. M. Farhoudi, “Classical trace anomaly”, [*Int. J. Mod. Phys. D*]{} [**14**]{} (2005), 1233. C. Brans and R.H. Dicke, “Mach’s principle and a relativistic theory of gravitation", *Phys. Rev.* **124** (1961), 925. R.H. Dicke, “Mach’s principle and invariance under transformation of units", *Phys. Rev.* **125** (1962), 2163. Y. Fujii and K. Maeda, “*The Scalar–Tensor Theory of Gravitation*", (Cambridge University Press, Cambridge 2004). H. Farajollahi, M. Farhoudi and H. 
Shojaie, “On dynamics of Brans–Dicke theory of gravitation", *Int. J. Theor. Phys.* **49** (2010), 2558. V. Faraoni, “*Cosmology in Scalar Tensor Gravity*", (Kluiwer Academic Publishers, Netherlands, 2004). S. Capozziello and V. Faraoni, “*Beyond Einstein Gravity: A Survey of Gravitational Theories for Cosmology and Astrophysics*", (Springer, Heidelberg, 2011). J. Khoury and A. Weltman, “Chameleon cosmology", *Phys. Rev. D* **69** (2004), 044026. P. Brax, C. Burrage, A.–C. Davis, D. Seery and A. Weltman, “Higgs production as a probe of chameleon dark energy", *Phys. Rev. D* **81** (2010), 103524. H. Farajollahi, M. Farhoudi, A. Salehi and H. Shojaie, “Chameleonic generalized Brans–Dicke model and late–time acceleration", *Astrophys. Space Sci.* **337** (2012), 415. N. Saba and M. Farhoudi, “Chameleonic inflation in the light of Planck 2015”, work in progress. T. Jacobson and D. Mattingly, “Gravity with a dynamical preferred frame", *Phys. Rev. D* **64** (2001), 024028. C. Eling, T. Jacobson and D. Mattingly, “Einstein–[æ]{}ther theory", *arXiv: gr–qc/0410001*. T. Jacobson, “Einstein–[æ]{}ther gravity: A status report", *PoS QG–Ph* (2007), 020, *arXiv: 0801.1547*. J.D. Barrow, “Some inflationary Einstein–aether cosmologies”, *Phys. Rev. D* **85** (2012), 047503. H. Wei, X.–P. Yan and Y.–N. Zhou, “Cosmological evolution of Einstein–aether models with power–law–like potential”, *Gen. Rel. Grav.* **46** (2014), 1719. Z. Haghani, T. Harko, H.R. Sepangi and S. Shahidi, “Scalar Einstein-aether theory”, *arXiv: 1404.7689*. C. Furtado, J.R. Nascimento, A.Y. Petrov and A.F. Santos, “The [æ]{}ther–modified gravity and the Gödel metric", *arXiv: 1109.5654*. “*The Timaeus of Plato*", Edited with Introduction and Notes by: R.D. Archer–Hind, (Macmillan, London, 1888). Plato, “*Timaeus*", Translated by: B. Jowett, (Echo Library, United Kingdom, 2006). J.K. Webb, [*et al.*]{}, “A search for time variation of the fine structure constant", *Phys. Rev. Lett.* **82** (1999), 884. M.T. Murphy, [*et al.*]{}, “Possible evidence for a variable fine structure constant from QSO absorption lines: Motivations, analysis and results", *Mon. Not. Roy. Astron. Soc.* **327** (2001), 1208. J. Albert, *et al.* \[MAGIC Collaboration\], “Probing quantum gravity using photons from a flare of the active galactic nucleus Markarian 501 observed by the MAGIC telescope", *Phys. Lett. B* **668** (2008), 253. J.D. Barrow and J. Magueijo, “Varying–$\alpha$ theories and solutions to the cosmological problems", *Phys. Lett. B* **443** (1998), 104. H.B. Sandvik, J.D. Barrow and J. Magueijo, “A simple varying–alpha cosmology", *Phys. Rev. Lett.* **88** (2002), 031302. J.D. Barrow and J. Magueijo, “Solutions to the quasi–flatness and quasi–lambda problems”, *Phys. Lett. B* [**447**]{} (1998), 246. M.A. Clayton and J.W. Moffat, “Dynamical mechanism for varying light velocity as a solution to cosmological problem”, *Phys. Lett. B* [**480**]{} (1998), 263. A. Albrecht and J. Magueijo, “A time varying speed of light as a solution to cosmological puzzles”, *Phys. Rev. D* [**59**]{} (1999), 043516. J. Magueijo, “New varying speed of light theories”, *Rep. Prog. Phys.* [**66**]{} (2003), 2025. H. Shojaie and M. Farhoudi, “A cosmology with variable c", *Can. J. Phys.* **84** (2006), 933. H. Shojaie and M. Farhoudi, “A varying–c cosmology", *Can. J. Phys.* **85** (2007), 1395. J. Magueijo, J.D. Barrow and H.B. Sandvik, “Is it e or is it c? Experimental tests of varying alpha", *Phys. Lett. B* **549** (2002), 284. P. Castorina and D. 
Zappala, “Noncommutative electrodynamics and ultra high energy gamma rays", *Europhys. Lett.* **64** (2003), 641. R. Horvat, D. Kekez, P. Schupp, J. Trampeti and J. You, “Photon–neutrino interaction in $\theta$–exact covariant noncommutative field theory", *Phys. Rev. D* **84** (2011), 045004. G. Amelino–Camelia, “Relativity in space–times with short–distance structure governed by an observer–independent (Planckian) length scale", *Int. J. Mod. Phys. D* **11** (2002), 35. J. Magueijo and L. Smolin, “Lorentz invariance with an invariant energy scale", *Phys. Rev. Lett.* **88** (2002), 190403. J. Kowalski–Glikman and S. Nowak, “Non–commutative space–time of doubly special relativity theories", *Int. J. Mod. Phys. D* **12** (2003), 299. H.–Y. Guo, C.–G. Huang, Z. Xu and B. Zhou, “On de Sitter invariant special relativity and cosmological constant as origin of inertia", *Mod. Phys. Lett. A* **19** (2004), 1701. A. Agostini, G. Amelino–Camelia and F. D’Andrea, “Hopf–algebra description of noncommutative–spacetime symmetries", *Int. J. Mod. Phys. A* **19** (2004), 5187. H.–Y. Guo, H.–T. Wu and B. Zhou, “The principle of relativity and the special relativity triple", *Phys. Lett. B* **670** (2009), 437. G. Amelino–Camelia, “Doubly–special relativity: Facts, myths and some key open issues", *Symmetry* **2** (2010), 230. S. Alexander, “On the varying speed of light in a brane–induced FRW universe", *J. High Energy Phys.* **0011** (2000), 017. I.T. Drummond and S.J. Hathrell, “QED vacuum polarization in a background gravitational field and its effect on the velocity of photons", *Phys. Rev. D* **22** (1980), 343. M.A. Clayton and J.W. Moffat, “Dynamical mechanism for varying light velocity as a solution to cosmological problems", *Phys. Lett. B* **460** (1999), 263. M.A. Clayton and J.W. Moffat, “Scalar–tensor gravity theory for dynamical light velocity", *Phys. Lett. B* **477** (2000), 269. J. Magueijo, “Bimetric varying speed of light theories and primordial fluctuations", *Phys. Rev. D* **79** (2009), 043525. D. Finkbeiner, M. Davis and D. Schlegel, “Detection of a far IR excess with DIRBE at 60 and 100 microns", *Astrophys. J.* **544** (2000), 81. D. Mazin and M. Raue, “New limits on the density of the extragalactic background light in the optical to the far infrared from the spectra of all known TeV blazars", *Astron. Astrophys.* **471** (2007), 439. A.A. Penzias and R.H. Wilson, “A measurement of excess antenna temperature at 4080 Mc/s", *Astrophys. J.* **142** (1965), 419. K. Greisen, “End to the cosmic–ray spectrum", *Phys. Rev. Lett.* **16** (1966), 748. G.T. Zatsepin and V.A. Kuzmin, “Upper limit of the spectrum of cosmic rays", *J. Exp. Theor. Phys. Lett.* **4** (1966), 78. S. Majid, “Foundations of Quantum Group Theory”, (Cambridge University Press, Cambridge, 2000). V.A. Fock, “The Theory of Space–Time and Gravitation”, (Pergamon Press, New York, 1964). S. Hossenfelder, “The box–problem in deformed special relativity”, [*arXiv: 0912.0090*]{}. S.T. Scully and F.W. Stecker, “Lorentz invariance violation and the observed spectrum of ultrahigh energy cosmic rays", *Astropart. Phys.* **31** (2009), 220. M. Yousefian and M. Farhoudi, “Justification of Webb’s redshift and ultra high energy cosmic rays via an ether model", work in progress.
[^1]: E-mail: [email protected]
[^2]: E-mail: M\[email protected]
[^3]: This point of view is a return to the Aristotle idea. Incidentally, this idea is related to [*relational physics*]{} [@Michael-2008; @Dean-2008], which means that, in a physical system, positions and other properties of things have meaning only with respect to other things. This point of view is a prelude to the Mach ideas, particularly their [*weak version*]{}.
[^4]: The Aristotle point of view on space was asserted in his definition of [*place*]{} as [*the limit between the surrounding and the surrounded body*]{} [@Adler-1966], and also as [*the innermost motionless boundary of that which surrounds it*]{} [@Hardie-Aristotle; @Aristotle].
[^5]: Also, the [*Lense–Thirring precession*]{} effect (see, e.g., Refs. [@LenseThirring]–[@Ciufolini2000]), or actually the [*frame–dragging*]{} effect [@GP-B; @Iorio2011], must be neglected.
[^6]: Such a contraction was accounted for in terms of the Lorentz electron theory [@Lorentz1909]; however, it is believed that some other results predicted by his theory could not be confirmed experimentally [@Resnick-1968], and the theory has some philosophical deficits, such as its basic assumptions being unverifiable [@d'Inverno-1992].
[^7]: However, nowadays, the discovery of the acceleration of the universe [@Riess-1998]-[@Riess2004] can be regarded as a possible way of arguing against such a claim.
[^8]: Thus, the Newton first law is consistent with special relativity. However, for distinguishing the inertial frames from the rigid frames, instead of the existing content of the Newton first law, one can employ, e.g., the law of light propagation. In fact, the usual definition of rigid bodies cannot be applied in special relativity, although, to be consistent with it, some new definitions have been stated. For instance, the characteristic of rigidity is assigned to a body as relative–rigidity, namely that [*any length element of the body on the move remains invariant with respect to the comoving observer*]{} [@Born], or that [*a body on the move deforms continuously in such a way that each of its infinitesimal elements has just the Lorentz contraction with respect to the instantaneous rest observer*]{} [@Ehrenfest].
[^9]: His critical view on the STR has been explicitly expressed in the foreword to the ninth edition of his book [@Mach1977].
[^10]: Among the reviews on the Lorentz violation, Ref. [@Mattingly-2005] includes the theoretical approaches as well as the phenomenological analysis.
[^11]: Meanwhile, for more on the contents and points that led to the advent of the GTR, see Refs. [@Mehra]–[@Norton].
[^12]: Principally, in gravitational physics, energy itself acts as a source of gravity and cannot simply be discarded; nor can its zero point easily be rescaled [@bida; @bos].
[^13]: It had been thought that generally invariant field equations cannot uniquely determine the gravitational field generated by certain distributions of source masses, in contradiction with the requirement of physical causality [@d'Inverno-1992; @Janssen2005; @Einstein-1913; @Einstein-1914].
[^14]: This is known as the [*point–coincidence*]{} discussion.
[^15]: In this respect; see also Ref. [@Janssen2005] and references therein.
[^16]: That is, the ether as a substance of some kind, and not a type of vacuum without any properties intrinsic to itself (e.g., the ether would have the property of ponderability, which is to say, it has the power to gravitate or to generate curvature).
[^17]: By adopting that the ether gravitates [*only*]{} in the presence of matter.
[^18]: Note that, in general, any relativistic gravitational equation, including the Einstein equations of the GTR, needs to be non–linear; hence, the number of its independent solutions is not finite and the [*superposition principle*]{} does not hold for it either.
[^19]: However, in the Mach ideas, there is also no explanation of why the interaction should be velocity–independent but acceleration–dependent, or, indeed, why there is such a distinction between unaccelerated and accelerated motion in nature.
[^20]: Incidentally, by this insertion, he also provided the possibility of having a [*static*]{} solution for the universe (as the universe was thought to be at that time), as an appropriate condition on the GTR.
[^21]: Even a sufficiently small value of the cosmological constant can have very important effects on the evolution of the universe; and although the implications of this term are cosmological, the origin of it is probably to be found in the quantum theory rather than cosmology.
[^22]: The Einstein equations are $G_{\mu\nu}=(8\pi G/c^{4})T_{\mu\nu}$, where $G_{\mu\nu}$ is the Einstein tensor as a function of the metric and its derivatives, $G$ is the Newtonian gravitational constant, $T_{\mu\nu}$ is the energy–momentum tensor and the lower case Greek indices run from zero to three. The Einstein equations plus the cosmological constant are $G_{\mu\nu}-\Lambda g_{\mu\nu}=(8\pi
G/c^{4})T_{\mu\nu}$, where $g_{\mu\nu}$ is the metric tensor and $\Lambda$ is a constant.
[^23]: Meanwhile, almost around the same time, [*non–static*]{} closed solutions of the GTR (corresponding to an expanding distribution of matter) were discovered, and it was also established that the universe is not static, but rather is expanding on large scales (a result officially published a few years later [@Hubble1929]).
[^24]: Incidentally, according to quantum theory, the vacuum has [*vacuum fluctuations*]{} and an energy tensor (zero–point energy) that the only form of it (being the same in all inertial frames) is a constant multiple of the metric, i.e. the same as the cosmological constant term. However, the calculations based on theories of elementary particles yield a value for the corresponding cosmological constant term to be orders of magnitude far larger than the observations allow. This discrepancy is known as the [*cosmological constant problem*]{}; see, e.g., Refs. [@Cos.pro1]-[@Bernard] and references therein.
[^25]: This point of view seems not unrelated to the strong version of the Mach principle.
[^26]: For this subject; see, e.g., Ref. [@Majid2000].
---
author:
- |
Duong Quoc Viet and Truong Thi Hong Thanh\
Department of Mathematics, Hanoi National University of Education\
136 Xuan Thuy Street, Hanoi, Vietnam\
Emails: [email protected] and [email protected]\
date:
-
-
title: |
**ON SOME MULTIPLICITY AND MIXED\
MULTIPLICITY FORMULAS $^1$\
(Forum Math. 26(2014), 413-442)**
---
**1. Introduction**
$\\$ Let $(A,\frak{m})$ be an artinian local ring with maximal ideal $\frak{m}$, infinite residue field $k = A/\frak{m}.$ Let $S=\bigoplus_{n_1,\ldots,n_d\ge 0}S_{(n_1,\ldots,n_d)}$ $(d > 0)$ be a finitely generated standard $\mathbb{N}^d$-graded algebra over $A$ (i.e., $S$ is generated over $A$ by elements of total degree 1) and let $M=\bigoplus_{n_1,\ldots,n_d\ge 0}M_{(n_1,\ldots,n_d)}$ be a finitely generated $\mathbb{N}^d$-graded $S$-module such that $M_{(n_1,\ldots,n_d)}=S_{(n_1,\ldots,n_d)}M_{(0,\ldots,0)}$ for all $n_1,\ldots,n_d \ge 0.$ Throughout this paper, put $S_i=S_{(0,\ldots,{\underbrace{1}_i},\ldots,0)}$ for all $i=1,\ldots,d$ and
$$\begin{array}{lll}
&S^\triangle&=\bigoplus_{n\ge 0}\;S_{(n,\ldots,n)},\;S_{++}=\bigoplus_{\;n_1,\ldots,n_d> 0}S_{(n_1,\ldots,n_d)},\\
&S^\triangle_+&=\bigoplus_{n> 0}\;S_{(n,\ldots,n)},\;S_+=\bigoplus _{n_1+\cdots+n_d> 0}S_{(n_1,\ldots,n_d)},\\&M^\triangle &=\bigoplus_{n\ge 0}M_{(n,\ldots,n)},\;
{\frak a}:{\frak b}^\infty=\bigcup_{n\ge 0}(\frak a:{\frak b}^n)
.\end{array}$$
Denote by $\text{Proj}\; S$ the set of the homogeneous prime ideals of $S$ which do not contain $S_{++}$. Set $\dim M^\triangle = \ell$ and $$\text{Supp}_{++}M=\Big\{P\in \text{Proj}\; S\;|\;M_P\ne 0\Big\}.$$ By [@VM Remark 3.1], $\dim\text{Supp}_{++}M = \ell-1.$ And by [@HHRT Theorem 4.1], $\ell_A[M_{(n_1,\ldots,n_d)}]$ is a polynomial of degree $\ell-1$ for all large $n_1,\ldots,n_d.$ The terms of total degree $\ell-1$ in this polynomial have the form $$\sum_{k_1\:+\:\cdots\:+\:k_d\;=\;\ell-1}e(M;k_1,\ldots,k_d)\dfrac{n_1^{k_1}\cdots n_d^{k_d}}{k_1!\cdots k_d!}.$$ Then $e(M;k_1,\ldots,k_d)$ is called the [*mixed multiplicity of type $(k_1,\ldots,k_d)$ of $M$*]{} [@HHRT]. If $(R, \frak n)$ is a noetherian local ring with maximal ideal $\frak{n},$ $J$ is an $\frak n$-primary ideal, $I_1,\ldots, I_d$ are ideals of $R,$ and $N$ is a finitely generated $R$-module, then it is easily seen that $$F_J(J,I_1,\ldots,I_d;N) =\bigoplus_{n_0, n_1,\ldots,n_d\ge 0}\dfrac{J^{n_0}I_1^{n_1}\cdots I_d^{n_d}{N}}{J^{n_0+1}I_1^{n_1}\cdots I_d^{n_d}{N}}$$ is a finitely generated graded $F_J(J,I_1,\ldots,I_d;R)$-module. Mixed multiplicities of $F_J(J,I_1,\ldots,I_d;N)$ are denoted by $e\big(J^{[k_0+1]},I_1^{[k_1]},\ldots,I_d^{[k_d]};N\big)$ and are called [*mixed multiplicities of $N$ with respect to ideals $J,I_1,\ldots,I_d$*]{} (see [@MV; @Ve]).
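As a minimal illustration of these definitions (our addition, not part of the original paper), let $R=k[[x,y]]$, $N=R$, $d=1$ and $J=I_{1}=\frak n=(x,y)$. Then $$\ell\Big[\dfrac{J^{n_0}I_1^{n_1}N}{J^{n_0+1}I_1^{n_1}N}\Big]=\ell\Big[\dfrac{\frak n^{\,n_0+n_1}}{\frak n^{\,n_0+n_1+1}}\Big]=n_0+n_1+1,$$ whose part of total degree $1$ is $n_{0}+n_{1}$, so that $e\big(J^{[2]},I_{1}^{[0]};R\big)=e\big(J^{[1]},I_{1}^{[1]};R\big)=1.$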
Although the problems of expressing the multiplicity of graded modules in terms of mixed multiplicities and of relating mixed multiplicities to the Hilbert-Samuel multiplicity have attracted much attention in past years (the citations will be mentioned in the next sections), properties analogous to those of the Hilbert-Samuel multiplicity (for instance, the additivity on exact sequences as in [@HS Lemma 17.4.4] and the additivity and reduction formula [@HS Theorem 17.4.8] for mixed multiplicities of $\frak n$-primary ideals), as well as other properties, are not yet known for mixed multiplicities of arbitrary ideals and multi-graded modules.
In the present paper, by a new approach we give additivity and reduction formulas for mixed multiplicities of multi-graded modules and mixed multiplicities of arbitrary ideals. And we establish the recursion formulas for the sum of all the mixed multiplicities of multi-graded modules.
As one might expect, we first obtain the following result for mixed multiplicities of multi-graded modules.
0.2cm
[**Theorem 3.1.**]{}
We would like to emphasize that, although Theorem 3.1 is a general result for mixed multiplicities of multi-graded modules, which in turn generalize mixed multiplicities of ideals, up to now we cannot prove the following theorem by using Theorem 3.1.
0.2cm
[**Theorem 3.2.**]{} [*Let $(R, \frak n)$ be a noetherian local ring with maximal ideal $\frak{n},$ infinite residue field $k = R/\frak{n},$ ideals $I_1,\ldots,I_d,$ and an $\frak n$-primary ideal $J.$ Let $N$ be a finitely generated $R$-module. Assume that $I=I_1\cdots I_d$ is not contained in $ \sqrt{\mathrm{Ann}{N}}.$ Set $\overline{N}=\dfrac{N}{0_N: I^\infty}.$ Denote by $\Pi$ the set of all prime ideals $\frak p $ of $R$ such that $\frak p \in \mathrm{Min}(R/\mathrm{Ann}\overline{N})$ and $\dim R/\frak p = \dim \overline{N}.$ Then we have $$e(J^{[k_0+1]},I_1^{[k_1]},\ldots,I_d^{[k_d]}; N)= \sum_{\frak p \in \Pi}\ell({N}_{\frak p})e(J^{[k_0+1]},I_1^{[k_1]},\ldots,I_d^{[k_d]}; R/\frak p).$$*]{} It is natural to suppose that the proof of Theorem 3.2 will have to use the additivity of mixed multiplicities on exact sequences. But in fact, this approach seems to become an obstruction in proving Theorem 3.2. This motivated us to give a different approach to the proof of Theorem 3.2 in this paper (see the proof of Theorem 3.2, Section 3). Conversely, from Theorem 3.2 we show that mixed multiplicities of arbitrary ideals are additive on exact sequences (see Corollary 3.9, Section 3), which covers [@HS Lemma 17.4.4].
Our approach is based on multiplicity formulas of multi-graded Rees modules with respect to powers of ideals (see Proposition 2.4 and Corollary 2.5, Section 2) via linking minimal homogeneous prime ideals of maximal coheight of the Rees module $\mathfrak R(I_1,\ldots,I_d; N)= \bigoplus_{n_1,\ldots,n_d\ge 0}I_1^{n_1}\cdots I_d^{n_d}N$ and minimal prime ideals of maximal coheight of $N$ (see Lemma 3.4, Section 3).
Set $$S_{\widehat{i}} = \bigoplus_{n_1,\ldots,n_{i-1}, n_{i+1},\ldots, n_d \ge 0\;;\;n_i=0}S_{(n_1,\ldots,n_d)}\; \text {and}\; M_{\widehat{i}} = S_{\widehat{i}}M_{(0,\ldots,0)}.$$
Next, we establish the recursion formulas for the sum of all the mixed multiplicities of the $\mathbb{N}^{d}$-graded module $M:$ $\widetilde{e}(M)= \sum_{k_1\:+\:\cdots\:+\:k_d\;=\;\ell-1}e(M;k_1,\ldots,k_d)$ which express $\widetilde{e}(M)$ as a sum $$\widetilde{e}(M) = \widetilde{e}(M/xM) + \widetilde{e}(W),$$ where $$\dim \text{Supp}_{++}(M/xM) = \dim \text{Supp}_{++}M-1$$ and $W$ is an $\mathbb{N}^{d-1}$-graded module. This result can be stated as follows.
0.2cm
[**Theorem 5.2.**]{}
As consequences of Theorem 5.2, we get the recursion formulas for the multiplicity of multi-graded Rees modules (see Theorem 5.5; Corollary 5.6; Corollary 5.7 and Corollary 5.8, Section 5).
0.2cm The main results of this paper yield many interesting consequences, such as the additivity and reduction formulas for mixed multiplicities of ideals of positive height, which cover [@HS Theorem 17.4.8] for the case of $\frak n$-primary ideals; the additive property on exact sequences for mixed multiplicities of ideals and the multiplicity of multi-graded Rees modules; the recursion formulas for the multiplicity of multi-graded Rees modules; and the multiplicity formulas of Rees modules.
0.2cm
This paper is divided into five sections. Section 2 is devoted to the discussion of mixed multiplicities of multi-graded Rees modules and the multiplicity of Rees modules with respect to powers of ideals (Proposition 2.4 and Corollary 2.5) that will be used as a tool in the proofs of the paper. Section 3 gives the additivity and reduction formulas for mixed multiplicities of multi-graded modules and mixed multiplicities of arbitrary ideals. Section 4 investigates the relationship between filter-regular sequences of multi-graded $F_J(J,I_1,\ldots,I_d;R)$-module $ F_J(J,I_1,\ldots,I_d;N)$ and weak-(FC)-sequences of ideals that will be used in the proofs of Section 5 (Proposition 4.5). Section 5 introduces the recursion formulas for the sum of all the mixed multiplicities of multi-graded modules. And as an application, we obtain the recursion formulas for the multiplicity of multi-graded Rees modules.
0.2cm
**2. Multiplicity of multi-graded Rees modules**
$\\$ This section studies mixed multiplicities and the multiplicity of multi-graded modules. We will give multiplicity and mixed multiplicity formulas of Rees modules with respect to powers of ideals that will be used as a tool in the proofs of the paper.
Set $\dim M^\triangle = \ell.$ By [@HHRT Theorem 4.1], $\ell_A[M_{(n_1,\ldots,n_d)}]$ is a polynomial of degree $\dim \text{Supp}_{++}M$ for all large $n_1,\ldots,n_d.$ Remember that $\dim \text{Supp}_{++}M=\ell-1$ by [@VM Remark 3.1]. The terms of total degree $\ell-1$ in this polynomial have the form $$B_M(n_1,n_2,\ldots,n_d)= \sum_{k_1\:+\:\cdots\:+\:k_d\;=\;\ell-1}e(M;k_1,\ldots,k_d)\dfrac{n_1^{k_1}\cdots n_d^{k_d}}{k_1!\cdots k_d!}.$$
Then $e(M;k_1,\ldots,k_d)$ are non-negative integers not all zero, called the [*mixed multiplicity of type $(k_1,\ldots,k_d)$ of $M$*]{} [@HHRT]. And from now on $B_M(n_1,n_2,\ldots,n_d)$ is called the [*Bahattacharya homogeneous polynomial*]{} of $M$ [@Bh].
Set $\mathrm{\bf k}= k_1,\ldots,k_d$ and $\mid\mathrm{\bf k}\mid = k_1+\cdots+k_d.$ Denote by $\widetilde{e}(M)$ the sum of all the mixed multiplicities of $M,$ i.e., $\widetilde{e}(M):=\sum_{\mid \mathrm{\bf k}\mid=\:\ell-1}e(M;\mathrm{\bf k}).$ It is well known that, in general, the multiplicity $e(M)$ of $M$ and $\widetilde{e}(M)$ are different invariants of $M$.
Let $(R, \frak n)$ be a noetherian local ring with maximal ideal $\frak{n},$ infinite residue field $k = R/\frak{n}$ and let $N$ be a finitely generated $R$-module. Let $I_1,\ldots,I_d$ be ideals of $R$ such that $I_1\cdots I_d$ is not contained in $ \sqrt{\mathrm{Ann}{N}}.$
Put $\mathrm{\bf I}= I_1,\ldots,I_d;$ $\mathrm{\bf n}= n_1,\ldots,n_d;$ $\mathbb{I}^{\mathrm{\bf n}}= I_1^{n_1},\ldots,I_d^{n_d};$ $\mathrm{\bf I}^{[\mathrm{\bf k}]}= I_1^{[k_1]},\ldots,I_d^{[k_d]}.$
Denote by $$\mathfrak R(\mathrm{\bf I}; R) = \mathfrak R(I_1,\ldots,I_d;R)= \bigoplus_{n_1,\ldots,n_d\ge 0}I_1^{n_1}\cdots I_d^{n_d}$$ the Rees algebra of ideals $I_1,\ldots,I_d$ and by $$\;\;\;\mathfrak R(\mathrm{\bf I}; N) = \mathfrak R(I_1,\ldots,I_d;N)= \bigoplus_{n_1,\ldots,n_d\ge 0}I_1^{n_1}\cdots I_d^{n_d}N$$ the Rees module of ideals $I_1,\ldots,I_d$ with respect to $N.$ Let $J$ be an $\frak n$-primary ideal. Set $$F_J(J,\mathrm{\bf I}; R)= F_J(J,I_1,\ldots,I_d; R) =\bigoplus_{n_0, n_1,\ldots,n_d\ge 0}\dfrac{J^{n_0}I_1^{n_1}\cdots I_d^{n_d}}{J^{n_0+1}I_1^{n_1}\cdots I_d^{n_d}}$$ and $$\;\;\;\;\;F_J(J,\mathrm{\bf I}; N)= F_J(J,I_1,\ldots,I_d;N) =\bigoplus_{n_0, n_1,\ldots,n_d\ge 0}\dfrac{J^{n_0}I_1^{n_1}\cdots I_d^{n_d}{N}}{J^{n_0+1}I_1^{n_1}\cdots I_d^{n_d}{N}}.$$ Then $F_J(J,\mathrm{\bf I};R)$ is a finitely generated standard multi-graded algebra over an artinian local ring $R/J$ and $F_J(J,\mathrm{\bf I}; N)$ is a finitely generated multi-graded $F_J(J,\mathrm{\bf I}; R)$-module. Set $ I = I_1\cdots I_d.$ Denote by $B_N\big(J, \mathrm{\bf I};n_0, \mathrm{\bf n}\big)= B_N\big(J, \mathrm{\bf I};n_0, n_1,\ldots, n_d\big)$ the Bahattacharya homogeneous polynomial of $ F_J(J,\mathrm{\bf I}; N).$ Then remember that $$\deg B_N\big(J, \mathrm{\bf I};n_0, \mathrm{\bf n}\big) =\dim \dfrac{N}{0_N: I^\infty}-1$$ by [@Vi1 Proposition 3.1] (see [@MV Proposition 3.1]). And by [@VM Remark 3.1], $\deg B_N\big(J, \mathrm{\bf I};n_0,\mathrm{\bf n}\big)= \dim F_J(J,\mathrm{\bf I}; N)^\triangle-1.$ Hence $\dim F_J(J,\mathrm{\bf I}; N)^\triangle = \dim \dfrac{N}{0_N: I^\infty}.$ In the case that $\mathrm{ht} \dfrac{I+\mathrm{Ann}N}{\mathrm{Ann} N} > 0,$ $\dim \dfrac{N}{0_N: I^\infty} = \dim N.$ The above facts yield:
0.2cm [**Note 2.1.**]{} $\dim F_J(J,\mathrm{\bf I}; N)^\triangle = \dim \dfrac{N}{0_N: I^\infty},$ and if $\;\mathrm{ht} \dfrac{I+\mathrm{Ann}N}{\mathrm{Ann} N} > 0$ then $$\dim F_J(J,\mathrm{\bf I}; N)^\triangle = \dim {N}.$$ Set $\dim \dfrac{N}{0_N:{ I}^\infty} = q$ and $$e\big(F_J(J,\mathrm{\bf I}; N);k_0, k_1,\ldots,k_d\big) = e\big(J^{[k_0+1]},I_1^{[k_1]},\ldots,I_d^{[k_d]};N\big):= e\big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]};N\big)$$ $(k_0+k_1+\cdots+k_d = k_0 + \mid\mathrm{\bf k}\mid = q-1).$ Then $e\big(J^{[k_0+1]},I_1^{[k_1]},\ldots,I_d^{[k_d]};N\big) $ is called the [*mixed multiplicity of $N$ with respect to ideals $J,I_1,\ldots,I_d$ of type $(k_0,k_1,\ldots,k_d)$*]{} (see [@MV; @Ve]).
0.2cm [**Note 2.2.**]{} Recall that by [@MV Proposition 3.1] which is a generalized result of [@Vi1 Proposition 3.1], we have $e\big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; N\big)= e\Big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]};\dfrac{N}{0_N: I^\infty}\Big),$ and hence $$\widetilde{e}\big(F_J(J,\mathrm{\bf I}; {N})\big)= \widetilde{e}\Big(F_J(J,\mathrm{\bf I}; \dfrac{N}{0_N: I^\infty})\Big).$$
0.2cm [**Note 2.3.**]{} By [@HHRT Corollary 4.6], it follows that $$e\big(\big(J,\mathfrak R(\mathrm{\bf I}; R)_+\big); \mathfrak R(\mathrm{\bf I}; N)\big)= e\big(F_J(J,\mathrm{\bf I}; N)\big).$$
Now, assume that $\mathrm{ht} \dfrac{I+\mathrm{Ann}N}{\mathrm{Ann} N} > 0.$ Then $\dim \dfrac{N}{0_N: I^\infty} = \dim N.$ In this case,
$$B_N\big(J, \mathrm{\bf I};n_0,\mathrm{\bf n}\big)
= \sum_{k_0\:+\mid\mathrm{\bf k}\mid\;=\;q-1}e\big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]};N\big)
\dfrac{n_0^{k_0}n_1^{k_1}\cdots n_d^{k_d}}{k_0!k_1!\cdots k_d!}\eqno(1)$$ and $$e\big(\big(J,\mathfrak R(\mathrm{\bf I}; R)_+\big); \mathfrak R(\mathrm{\bf I}; N)\big)= \sum_{k_0\:+\:\mid\mathrm{\bf k}\mid
=\;q-1}e\big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]};N\big)\eqno(2)$$ by [@HHRT Theorem 4.4] which is a generalized version of [@Ve Theorem 1.4]. Next, let $u_1,\ldots,u_d$ be positive integers. Set $\mathrm{\bf u}^\mathrm{\bf k}= u_1^{k_1}\cdots u_d^{k_d}.$ From (1) we have $$\begin{aligned}
&B_N\big(J, \mathrm{\bf I}^\mathrm{\bf u}, n_0 , \mathrm{\bf n}\big)
= \sum_{k_0\:+\mid\mathrm{\bf k}\mid\;=\;q-1}e\big({J}^{[k_0+1]},{\mathrm{\bf I}^{\mathrm{\bf u}}}^{[\mathrm{\bf k}]};N\big)
\dfrac{n_0^{k_0}n_1^{k_1}\cdots n_d^{k_d}}{k_0!k_1!\cdots k_d!}\;\;\; \mathrm{and}\\
&B_N\big(J, \mathrm{\bf I}^\mathrm{\bf u}, n_0 , \mathrm{\bf n}\big)
= \sum_{k_0\:+\mid\mathrm{\bf k}\mid\;=\;q-1}e\big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]};N\big)
\dfrac{n_0^{k_0}(u_1n_1)^{k_1}\cdots (u_dn_d)^{k_d}}{k_0!k_1!\cdots k_d!}. \end{aligned}$$ Consequently, $e\big({J}^{[k_0+1]},{\mathrm{\bf I}^{\mathrm{\bf u}}}^{[\mathrm{\bf k}]};N\big)=
e\big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; N\big)\mathrm{\bf u}^\mathrm{\bf k}.$ Hence by (2), $$e\big(\big(J,\mathfrak R(\mathrm{\bf I}^\mathrm{\bf u}; R)_+\big); \mathfrak R(\mathrm{\bf I}^\mathrm{\bf u}; N)\big)= \sum_{k_0\:+\mid\mathrm{\bf k}\mid\;
=\;q-1}e\big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; N\big)\mathrm{\bf u}^\mathrm{\bf k}.$$ We obtain the following result. 0.2cm [**Proposition 2.4.**]{} [*Assume that $\mathrm{ht} \dfrac{I+\mathrm{Ann}N}{\mathrm{Ann} N} > 0.$ Then $$e\big(\big(J,\mathfrak R(\mathrm{\bf I}^\mathrm{\bf u}; R)_+\big); \mathfrak R(\mathrm{\bf I}^\mathrm{\bf u}; N)\big)= \sum_{k_0\:+\mid\mathrm{\bf k}\mid\;=\;q-1}e\big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; N\big)\mathrm{\bf u}^\mathrm{\bf k}.$$*]{} Set $\overline{N} = \dfrac{N}{0_N: I^\infty}.$ It can be verified that $\mathrm{ht} \dfrac{I+\mathrm{Ann}\overline{N}}{\mathrm{Ann} \overline{N}} > 0.$ By Note 2.2, $$e\big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]};N\big)= e\Big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]};\dfrac{N}{0_N: I^\infty}\Big).$$ Then as an immediate consequence of Proposition 2.4 we get the following. 0.2cm [**Corollary 2.5.**]{}
Set $\mathbb{S}= F_J(J,\mathrm{\bf I}; R)$ and $\mathbb{M}= F_J(J,\mathrm{\bf I};N).$ Recall that $$\widetilde{e}(\mathbb{M}) = \sum_{k_0\:+\:\mid\mathrm{\bf k}\mid
=\;q-1}e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]};N).$$ Hence combining this fact with Note 2.3 and Corollary 2.5 yields: 0.2cm [**Corollary 2.6.**]{} $e\Big(F_J(J,\mathrm{\bf I}; \dfrac{N}{0_N: I^\infty})\Big)= \widetilde{e}(\mathbb{M})= e\Big(\big(J,\mathfrak R(\mathrm{\bf I}; R)_+\big); \mathfrak R(\mathrm{\bf I}; \dfrac{N}{0_N: I^\infty})\Big).$\
0.2cm [**Remark 2.7.**]{} If $\mathrm{ht} \dfrac{I+\mathrm{Ann}N}{\mathrm{Ann} N} > 0,$ then $e\big(\mathbb{M}\big)$ is the sum of all the mixed multiplicities of $\mathbb{M}$ by [@HHRT; @Ve]. Hence $e\big(\mathbb{M}\big)= \widetilde{e}\big(\mathbb{M}\big).$ Thus $e\big(F_J(J,\mathrm{\bf I}; {N})\big)= e\Big(F_J(J,\mathrm{\bf I}; \dfrac{N}{0_N: I^\infty})\Big)$ and $e\big(\big(J,\mathfrak R(\mathrm{\bf I}; R)_+\big); \mathfrak R(\mathrm{\bf I}; {N})\big)= e\Big(\big(J,\mathfrak R(\mathrm{\bf I}; R)_+\big); \mathfrak R(\mathrm{\bf I}; \dfrac{N}{0_N: I^\infty})\Big)$ by Corollary 2.6.
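0.2cm As a quick check of the scaling relation $e\big({J}^{[k_0+1]},{\mathrm{\bf I}^{\mathrm{\bf u}}}^{[\mathrm{\bf k}]};N\big)= e\big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; N\big)\mathrm{\bf u}^\mathrm{\bf k}$ obtained before Proposition 2.4 (again only in a toy example), take $R = N = k[[x,y]],$ $d = 1,$ $J = I_1 = \frak m = (x,y)$ and $u_1 = 2.$ Then $$\ell\Big(\dfrac{\frak m^{n_0}(\frak m^{2})^{n_1}}{\frak m^{n_0+1}(\frak m^{2})^{n_1}}\Big) = n_0+2n_1+1,$$ so $e\big(J^{[1]},(I_1^{2})^{[1]};R\big) = 2 = e\big(J^{[1]},I_1^{[1]};R\big)u_1,$ as predicted.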
**3. Additivity and reduction formulas for mixed multiplicities**
In this section, we prove additivity and reduction formulas for mixed multiplicities. As an application of these formulas, we show that mixed multiplicities of arbitrary ideals are additive on short exact sequences.
First, we have the following result for $\mathbb{N}^d$-graded $S$-modules. 0.2cm [**Theorem 3.1.**]{}
Denote by $B_M(\mathrm{\bf n})$ the Bhattacharya homogeneous polynomial of $M.$ Recall that since $S_{(1,1,\ldots,1)}\nsubseteq \; \sqrt{\mathrm{Ann} M},$ $\deg B_M(\mathrm{\bf n}) = \dim\text{Supp}_{++}M $ by [@HHRT Theorem 4.1] (see [@VM Remark 3.1]). Let $$0 = M_0 \subseteq M_1 \subseteq M_2 \subseteq\cdots \subseteq M_u=M$$ be a prime filtration of $M$, i.e., $M_{i+1}/M_i \cong S/P_i$ where $P_i$ is a homogeneous prime ideal for all $0 \le i \le u-1.$ Since $S_{(1,1,\ldots,1)}\nsubseteq \sqrt{\mathrm{Ann} M},\;\emptyset \ne \Lambda \subseteq \mathrm{Min}(S/\mathrm{Ann}M)$ by [@HHRT Lemma 1.1]. Consequently, $\Lambda \subseteq \{P_0,P_1,\ldots,P_{u-1}\}.$ Note that $$\{P_0,P_1,\ldots,P_{u-1}\} \subseteq \text{Supp}M.$$ Hence if $P_i \notin \mathrm{Supp}_{++}M$ then $P_i \supseteq S_{++}.$ In this case, $\Big(\dfrac{S}{P_i}\Big)_{\mathrm{\bf n}}= 0$ for all $\mathrm{\bf n} \gg 0$ by [@VM Proposition 2.7]. Therefore $B_{S/P_i} (\mathrm{\bf n})= 0.$ If $\dim \mathrm{Proj}(S/P_i) < \dim \mathrm{Supp}_{++}M ,$ we have $\deg B_{S/P_i} (\mathrm{\bf n}) = \dim \mathrm{Proj}(S/P_i) < \dim \mathrm{Supp}_{++}M$ by [@HHRT Theorem 4.1]. From the above facts, it follows that $$\deg B_{S/P_i} (\mathrm{\bf n}) < \dim \mathrm{Supp}_{++}M$$ for all $P_i \notin \Lambda.$ Hence, up to terms of total degree less than $\dim \mathrm{Supp}_{++}M,$ $B_M(\mathrm{\bf n})$ is the sum of all the $B_{S/P}(\mathrm{\bf n})$ for $P \in \Lambda,$ counted as many times as $S/P$ appears as some $\dfrac{M_{i+1}}{M_i}.$ This number is exactly the length of $M_P$ because $\Lambda \subseteq \mathrm{Min}(S/\mathrm{Ann}M).$ Therefore $B_M(\mathrm{\bf n}) = \sum_{P \in \Lambda}\ell(M_P)B_{S/P}(\mathrm{\bf n})$ up to terms of total degree less than $\dim \mathrm{Supp}_{++}M.$ Set $\dim \mathrm{Supp}_{++}M = s.$ Remember that $\mathrm{\bf n}^\mathrm{\bf k}:= n_1^{k_1}\cdots n_d^{k_d}.$ Now since $$\;\;B_{S/P}(\mathrm{\bf n})=\sum_{\mid \mathrm{\bf k}\mid =\;s}
e(S/P;\mathrm{\bf k})\dfrac{\mathrm{\bf n}^\mathrm{\bf k}}{k_1!\cdots k_d!} \;\;\mathrm {for \;\; any} \; P \in \Lambda,$$ $$B_M(\mathrm{\bf n})= \sum_{\mid \mathrm{\bf k}\mid\;=\;s}\Big[\sum_{P \in \Lambda}\ell(M_P)e(S/P;\mathrm{\bf k})\Big]\dfrac{\mathrm{\bf n}^\mathrm{\bf k}}{k_1!\cdots k_d!}.$$ Hence $$\begin{array}{l}\sum_{\mid\mathrm{\bf k}\mid\;
=\;s}e(M;\mathrm{\bf k})\dfrac{\mathrm{\bf n}^\mathrm{\bf k}}{k_1!\cdots k_d!}\\
= \sum_{\mid \mathrm{\bf k}\mid\;=\;s}\Big[\sum_{P \in \Lambda}\ell(M_P)e(S/P;\mathrm{\bf k})\Big]\dfrac{\mathrm{\bf n}^\mathrm{\bf k}}{k_1!\cdots k_d!}.\end{array}$$ Thus, $$e(M;\mathrm{\bf k})= \sum_{P \in \Lambda}\ell(M_P)e(S/P;\mathrm{\bf k}).\;\blacksquare$$
Although Theorem 3.1 is a general result for mixed multiplicities of multi-graded modules, which are more general objects than mixed multiplicities of ideals, up to now we cannot deduce the case of mixed multiplicities of ideals in the following result from this theorem. 0.2cm [**Theorem 3.2.**]{} [*Let $(R, \frak n)$ be a noetherian local ring with maximal ideal $\frak{n},$ infinite residue field $k = R/\frak{n},$ ideals $I_1,\ldots,I_d$ of $R$ and an $\frak n$-primary ideal $J.$ Let $N$ be a finitely generated $R$-module. Assume that $I=I_1\cdots I_d$ is not contained in $ \sqrt{\mathrm{Ann}{N}}.$ Set $\overline{N}=\dfrac{N}{0_N: I^\infty}.$ Denote by $\Pi$ the set of all prime ideals $\frak p $ of $R$ such that $\frak p \in \mathrm{Min}(R/\mathrm{Ann}\overline{N})$ and $\dim R/\frak p = \dim \overline{N}.$ Then we have $$e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]};N)= \sum_{\frak p \in \Pi}\ell({N}_{\frak p})e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]};R/\frak p).$$*]{} 0.2cm [**Remark 3.3.**]{} Recall that $\Pi$ is the set of prime ideals $\frak p$ such that $\frak p \in \mathrm{Min}(R/\mathrm{Ann}\overline{N})$ and $\dim R/\frak p = \dim \overline{N}.$ It is easily seen that $$\Pi = \Big\{\frak p \in \mathrm{Ass}\Big(\frac{R}{\mathrm{Ann}\overline{N}}\Big) \mid \; \dim R/\frak p = \dim \overline{N} \Big\}.$$ Since $\mathrm{Ann}\overline{N} = \mathrm{Ann}N:I^\infty,$ $\frac{R}
{\mathrm{Ann}\overline{N}} = \frac{R}{\mathrm{Ann}N:I^\infty}.$ Consequently $$\begin{aligned}
\Pi &= \Big\{\frak p \in \mathrm{Ass}\Big(\frac{R}{\mathrm{Ann}N:I^\infty}\Big) \mid \; \dim R/\frak p = \dim \overline{N} \Big\}\\
&= \Big\{\frak p \in \mathrm{Ass}\Big(\frac{R}{\mathrm{Ann}N}\Big) \mid \; \frak p \nsupseteq I \; \mathrm{and}\; \dim R/\frak p = \dim \overline{N} \Big\}. \end{aligned}$$ If $\frak p \in \Pi$, $\overline{N}_{\frak p} = N_{\frak p}$ because $I \nsubseteq \frak p.$ Since $\ell({N}_{\frak p})= \ell(\overline{N}_{\frak p}) < +\infty,$ $ \frak p \in \mathrm{Min}(\frac{R}{\mathrm{Ann}N}).$ Hence $\Pi = \Big\{\frak p \in \mathrm{Min}\Big(\frac{R}{\mathrm{Ann}N}\Big) \mid \; \frak p \nsupseteq I \; \mathrm{and}\; \dim R/\frak p = \dim \overline{N} \Big\}.$ In the case that $\mathrm{ht}\dfrac{I + \mathrm{Ann}N}{\mathrm{Ann}N}> 0,$ $\dim \overline{N} = \dim N$ and $\frak p \nsupseteq I$ for any $\frak p \in \mathrm{Min}(\frac{R}{\mathrm{Ann}N}).$ Consequently $\Pi = \Big\{\frak p \in \mathrm{Min}\Big(\frac{R}{\mathrm{Ann}N}\Big) \mid \;\; \dim R/\frak p = \dim N \Big\}. $
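0.2cm To illustrate these descriptions of $\Pi$ in a small example (chosen only for illustration), let $R = k[[x,y]],$ $d = 1,$ $I = I_1 = (y)$ and $N = R/(x) \oplus R/(x,y).$ Then $0_N : I^\infty$ is the summand $R/(x,y),$ so $\overline{N} \cong R/(x)$ and $\mathrm{Ann}\overline{N} = \mathrm{Ann}N = (x).$ All three descriptions give $\Pi = \{(x)\}$: indeed $(x) \in \mathrm{Min}(R/\mathrm{Ann}N),$ $(x) \nsupseteq I$ and $\dim R/(x) = \dim \overline{N} = 1.$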
Our approach is based on the multiplicity formulas for multi-graded Rees modules with respect to powers of ideals given in Proposition 2.4, together with the following lemma, which links the homogeneous prime ideals in $\mathrm{Min}(\mathfrak R(\mathbf{I}; R)/\mathrm{Ann}\;\mathfrak R(\mathbf{I}; N))$ of maximal coheight with the prime ideals in $\Pi$.
0.2cm
[**Lemma 3.4.**]{} [*Let $N$ be a finitely generated $R$-module and let $I_1, \ldots, I_d$ be ideals of $R$ such that $\mathrm{ht}\dfrac{I + \mathrm{Ann}N}{\mathrm{Ann}N}> 0$ $(I = I_1\cdots I_d)$. Denote by $\Lambda$ the set of homogeneous prime ideals $P$ of the Rees algebra $\mathfrak R(\mathbf{I}; R)$ such that $P \in \mathrm{Min}(\mathfrak R(\mathbf{I}; R)/\mathrm{Ann}\mathfrak R(\mathbf{I}; N))$ and $\dim \mathfrak R(\mathbf{I}; R)/P = \dim \mathfrak R(\mathbf{I}; N),$ and denote by $\Pi$ the set of prime ideals $\frak p$ of $R$ such that $\frak p \in \mathrm{Min}(R/\mathrm{Ann}N)$ and $\dim R/\frak p = \dim N$. Then there is a one-to-one correspondence between the set of prime ideals $\Pi$ and the set of prime ideals $\Lambda$ given by $$\frak p \mapsto
P = \bigoplus_{n_1, \ldots n_d\geq 0}(\frak p \cap I_1^{n_1}\cdots I_d^{n_d}).$$*]{}
First, remember that since $\mathrm{ht}\dfrac{I + \mathrm{Ann}N}{\mathrm{Ann}N}> 0,$ $\dim \mathfrak R(\mathbf{I}; N) = \dim N +d.$ Note that $\Lambda \subseteq \mathrm{Ass}_{\mathfrak R(\mathbf{I}; R)}\mathfrak R(\mathbf{I}; N)$ and $\Pi \subseteq \mathrm{Ass}_RN$ and $$\mathrm{Ann }\;\mathfrak R(\mathbf{I}; N) = \bigoplus_{n_1, \ldots n_d\geq 0}(\mathrm{Ann}N \cap I_1^{n_1}\cdots I_d^{n_d}).$$ Now, if $\frak p$ is an ideal in $\Pi$, then it can be verified that $$P = \bigoplus_{n_1, \ldots n_d\geq 0}(\frak p \cap I_1^{n_1}\cdots I_d^{n_d})$$ is a homogeneous prime ideal of $\mathfrak R(\mathbf{I}; R)$ and $\mathrm{Ann}\;\mathfrak R(\mathbf{I}; N) \subseteq P.$
[**Note 3.5.**]{} If $\frak q$ is a prime ideal of $R$ and $I \nsubseteq \frak q$ then $\dfrac{I+ \frak q}{\frak q}\ne 0.$ Since $R/ \frak q$ is an integral domain and $\dfrac{I+ \frak q}{\frak q}\ne 0,$ $\mathrm{ht}\dfrac{I+ \frak q}{\frak q} > 0.$ Therefore for any $\frak p \in \Pi,$ $\mathrm{ht}\dfrac{I+\frak p}{\frak p} > 0$ because $I \nsubseteq \frak p$ by Remark 3.3.
It is easily seen that $$\begin{aligned}
\mathfrak R(\mathbf{I}; R)/P &= \bigoplus_{n_1, \ldots n_d\geq 0}\frac{I_1^{n_1}\cdots I_d^{n_d}}{\frak p \cap I_1^{n_1}\cdots I_d^{n_d}}\\
&\cong \bigoplus_{n_1, \ldots n_d\geq 0}\frac{I_1^{n_1}\cdots I_d^{n_d}+\frak p}{\frak p }
= \mathfrak R(\mathbf{I}; R/\frak p).\end{aligned}$$ Since $\mathrm{ht}\dfrac{I + \frak p }{\frak p}> 0$ by Note 3.5, $ \dim \mathfrak R(\mathbf{I}; R/\frak p) = \dim R/\frak p + d.$ Since $\frak p \in \Pi,$ $\dim R/\frak p =\dim N.$ Hence $\dim \mathfrak R(\mathbf{I}; R)/P = \dim \mathfrak R(\mathbf{I}; N).$ So $P \in \Lambda.$
Next, suppose that $P$ is an ideal in $\Lambda$. Then $P$ is an associated prime ideal of $\mathfrak R(\mathbf{I}; N)$. Hence $P$ is homogeneous and there is a homogeneous element $x \in \mathfrak R(\mathbf{I}; N)$ such that $P = 0:x$. Set $\frak p = P\cap R$. Then $\frak p = \{a \in R \mid ax = 0\}$ and $\mathrm{Ann}N \subseteq \frak p .$ Writing $P = \bigoplus_{n_1, \ldots, n_d \geq 0}P_{(n_1, \ldots, n_d)},$ we have $$P_{(n_1, \ldots, n_d)} = \{ a \in I_1^{n_1}\cdots I_d^{n_d} \mid ax = 0\}.$$ This implies that $P_{(n_1, \ldots, n_d)} = \frak p \cap I_1^{n_1}\cdots I_d^{n_d}.$ Therefore $P$ has the form $$P = \bigoplus_{n_1, \ldots n_d\geq 0}(\frak p \cap I_1^{n_1}\cdots I_d^{n_d}).$$ Consequently $\mathfrak R(\mathbf{I}; R)/P \cong \mathfrak R(\mathbf{I}; R/\frak p).$ Since $P \in \Lambda$ and $\mathrm{ht}\dfrac{I + \mathrm{Ann}N}{\mathrm{Ann}N}> 0,$ $$\dim \mathfrak R(\mathbf{I}; R)/P = \dim \mathfrak R(\mathbf{I}; N) = \dim N +d.$$ Note that $$\dim \mathfrak R(\mathbf{I}; R/\frak p) \leqslant \dim R/\frak p +d.$$ Consequently $\dim R/\frak p \geqslant \dim N.$ Hence since $\mathrm{Ann}N \subseteq \frak p,$ $\dim R/\frak p =\dim N.$ Thus, $\frak p \in \Pi.$ The above facts show that there is a bijection between the set $\Pi$ and the set $\Lambda$ given by $\frak p \mapsto
P = \bigoplus_{n_1, \ldots n_d\geq 0}(\frak p \cap I_1^{n_1}\cdots I_d^{n_d}).$ $\blacksquare$
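0.2cm For example (again only an illustration), let $R = k[[x,y]],$ $d = 1,$ $I_1 = \frak m = (x,y)$ and $N = R/(x).$ Then $\Pi = \{(x)\},$ and the corresponding homogeneous prime ideal of $\mathfrak R(\frak m; R)$ is $$P = \bigoplus_{n\geq 0}\big((x) \cap \frak m^{n}\big), \qquad (x) \cap \frak m^{n} = x\frak m^{n-1} \;\text{ for } n \geq 1,$$ with $\mathfrak R(\frak m; R)/P \cong \mathfrak R(\frak m; R/(x))$ of dimension $\dim R/(x) + 1 = 2 = \dim \mathfrak R(\frak m; N).$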
[**The proof of Theorem 3.2:**]{} Let $u_1, \ldots, u_d$ be positive integers. Remember that $\mathbf{I^u} = I_1^{u_1}, \ldots, I_d^{u_d}$. Set $\overline{N} = \dfrac{N}{0_N : I^\infty}$ and $q = \dim \overline{N}$. Denote by $\Lambda_\mathbf{u}$ the set of homogeneous prime ideals $P$ of the Rees algebra $\mathfrak R(\mathbf{I^u}; R)= \mathfrak R(I_1^{u_1}, \ldots, I_d^{u_d}; R)$ such that $P \in \mathrm{Min}(\mathfrak R(\mathbf{I^u}; R)/\mathrm{Ann}\mathfrak R(\mathbf{I^u}; \overline{N}))$ and $\dim \mathfrak R(\mathbf{I^u}; R)/P = \dim \mathfrak R(\mathbf{I^u}; \overline{N})$. Recall that $$\Pi = \Big\{\frak p \in \mathrm{Min}\Big(\frac{R}{\mathrm{Ann}\overline{N}}\Big) \mid \;\;\; \dim R/\frak p = \dim \overline{N} \Big\}.$$ By [@HS Theorem 11.2.4], we have $$\begin{array}{l}e\big(\big(J, \mathfrak R(\mathbf{I^u}; R)_+\big); \mathfrak R(\mathbf{I^u}; \overline{N})\big)\\
= \sum_{P\in \Lambda_\mathbf{u}}\ell(\mathfrak R(\mathbf{I^u}; \overline{N})_P)e\big(\big(J, \mathfrak R(\mathbf{I^u}; R)_+\big); \mathfrak R(\mathbf{I^u}; R)/P\big).\end{array} \eqno(3)$$ Remember that $\mathrm{ht}\dfrac{I+ \mathrm{Ann}\overline{N}}{\mathrm{Ann}\overline{N}} > 0.$ In this case, if $P \in \Lambda_\mathbf{u}$ and $\frak p = P \cap R$, we have $$P = \bigoplus_{n_1, \ldots n_d\geq 0}(\frak p \cap {(I_1^{u_1})}^{n_1}\cdots {(I_d^{u_d})}^{n_d})$$ and $\frak p \in \Pi$ by Lemma 3.4. Next, we prove that $\ell(\mathfrak R(\mathbf{I^u};\overline{N})_P) = \ell(\overline{N}_{\frak p}).$ Indeed, since $\mathfrak R(\mathbf{I^u};R)_P/P\mathfrak R(\mathbf{I^u};R)_P \cong (\mathfrak R(\mathbf{I^u};R)/P)_P \cong \mathfrak R(\mathbf{I^u};R/{\frak p})_P,$ it follows that $\mathfrak R(\mathbf{I^u};R/{\frak p})_P$ is a simple $\mathfrak R(\mathbf{I^u};R)_P$-module. Now assume that $\ell_{R_{\frak p}}(\overline{N}_{\frak p}) = t.$ Then there exists a sequence of submodules of the $R$-module $\overline{N}:$ $$\overline{N} = N_0 \supset N_1 \supset \cdots \supset N_t= \{0\}$$ such that $(N_i/N_{i+1})_{\frak p} \cong R_{\frak p}/
{\frak p} R_{\frak p}$ $(0 \leq i \leq t-1).$ It can be verified that $$\mathfrak R(\mathbf{I^u};N_i/N_{i+1})_P \cong \mathfrak R(\mathbf{I^u};(N_i/N_{i+1})_{\frak p})_P \cong \mathfrak R(\mathbf{I^u};R_{\frak p}/
{\frak p} R_{\frak p})_P \cong \mathfrak R(\mathbf{I^u};R/{\frak p})_P.$$ So $\mathfrak R(\mathbf{I^u};N_i/N_{i+1})_P$ is a simple $\mathfrak R(\mathbf{I^u};R)_P$-module $(0 \leq i \leq t-1).$ By the above facts, we get a composition series of the $\mathfrak R(\mathbf{I^u};R)_P$-module $\mathfrak R(\mathbf{I^u};\overline{N})_P:$ $$\mathfrak R(\mathbf{I^u};\overline{N})_P = \mathfrak R(\mathbf{I^u};N_0)_P \supset \mathfrak R(\mathbf{I^u};N_1)_P \supset \cdots \supset \mathfrak R(\mathbf{I^u};N_t)_P= \{0\}.$$ Consequently $\ell(\mathfrak R(\mathbf{I^u};\overline{N})_P) = \ell(\overline{N}_{\frak p}).$ Hence, since $$\mathfrak R(\mathbf{I^u}; R)/P \cong \mathfrak R(\mathbf{I^u}; R/\frak p),$$ by (3) we obtain
$$e\big((J, \mathfrak R(\mathbf{I^u}; R)_+); \mathfrak R(\mathbf{I^u}; \overline{N})\big)
= \sum_{\frak p\in \Pi}\ell(\overline{N}_\frak p)e\big((J, \mathfrak R(\mathbf{I^u}; R/\frak p)_+); \mathfrak R(\mathbf{I^u}; R/\frak p)\big). \eqno(4)$$ Recall that for any $\frak p \in \Pi,$ $\mathrm{ht}\dfrac{I+\frak p}{\frak p} > 0$ by Note 3.5. Hence by Corollary 2.5(ii) and Proposition 2.4, we respectively get
$$\begin{array}{l}e\big((J, \mathfrak R(\mathbf{I^u}; R)_+); \mathfrak R(\mathbf{I^u}; \overline{N})\big)
= \sum_{k_0+ \mid\mathbf k\mid = q-1}e(J^{[k_0+1]}, \mathbf{I}^{[{\mathbf k}]}; N)\mathbf{u^k}
\end{array}\eqno(5)$$ and
$$\begin{array}{l}e\big((J, \mathfrak R(\mathbf{I^u}; R/\frak p)_+); \mathfrak R(\mathbf{I^u}; R/\frak p)\big)
= \sum_{k_0+ \mid \mathbf {k}\mid = q-1}e(J^{[k_0+1]}, \mathbf{I}^{[{\mathbf k}]}; R/\frak p)\mathbf{u^k}. \end{array}\eqno(6)$$ From (4), (5) and (6), it follows that $$\begin{aligned}
& \sum_{k_0+\mid\mathbf k\mid = q-1}e(J^{[k_0+1]}, \mathbf{I}^{[{\mathbf k}]}; N)\mathbf{u^k} \\
&= \sum_{\frak p \in \Pi}\ell(\overline{N}_\frak p)\Big( \sum_{k_0+\mid\mathbf k\mid = q-1}e(J^{[k_0+1]}, \mathbf{I}^{[{\mathbf k}]}; R/\frak p)\mathbf{u^k}\Big)\\
&= \sum_{k_0+\mid\mathbf k\mid = q-1}\Big(\sum_{\frak p\in \Pi}\ell(\overline{N}_\frak p) e(J^{[k_0+1]}, \mathbf{I}^{[{\mathbf k}]}; R/\frak p)\Big)\mathbf{u^k}.\end{aligned}$$ Therefore $$e(J^{[k_0+1]}, \mathbf{I}^{[{\mathbf k}]}; N) = \sum_{\frak p\in \Pi}\ell(\overline{N}_\frak p) e(J^{[k_0+1]}, \mathbf{I}^{[{\mathbf k}]}; R/\frak p).$$ Recall that $\ell(\overline{N}_\frak p) = \ell({N}_\frak p)
$ by Remark 3.3. Thus $$e(J^{[k_0+1]}, \mathbf{I}^{[{\mathbf k}]}; N) = \sum_{\frak p\in \Pi}\ell({N}_\frak p) e(J^{[k_0+1]}, \mathbf{I}^{[{\mathbf k}]}; R/\frak p).
\;\blacksquare$$
Note that if $\mathrm{ht}\dfrac{I + \mathrm{Ann}N}{\mathrm{Ann}N}> 0$ then $\Pi = \big\{\frak p \in \mathrm{Min}\Big(\frac{R}{\mathrm{Ann}N}\Big) \mid \dim R/\frak p = \dim N \big\}$ by Remark 3.3. Hence by Theorem 3.2, we obtain the following result. 0.2cm [**Corollary 3.6.**]{} [*Let $(R, \frak n)$ be a noetherian local ring with maximal ideal $\frak{n}$ and infinite residue field $k = R/\frak{n},$ ideals $I_1,\ldots,I_d$ and an $\frak n$-primary ideal $J.$ Let $N$ be a finitely generated $R$-module. Set $I=I_1\cdots I_d.$ Assume that $\mathrm{ht}\dfrac{I + \mathrm{Ann}N}{\mathrm{Ann}N}> 0.$ Denote by $\Pi$ the set of all prime ideals $\frak p $ of $R$ such that $\frak p \in \mathrm{Min}(R/\mathrm{Ann}N)$ and $\dim R/\frak p = \dim N.$ Then we have $$e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]};N)= \sum_{\frak p \in \Pi}\ell(N_{\frak p})e(J^{[k_0+1]}, \mathrm{\bf I}^{[\mathrm{\bf k}]};R/\frak p).$$*]{}
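0.2cm As a simple sanity check of Corollary 3.6 (in an example chosen by hand), take $R = k[[x,y]],$ $d = 1,$ $J = I_1 = \frak m = (x,y)$ and $N = R/(x) \oplus R/(y).$ Here $\dim N = 1,$ so the only type is $(k_0,k_1) = (0,0),$ and $e(J^{[1]},I_1^{[0]};N) = e(\frak m;N) = 2$ (for the type $(0,\ldots,0)$ the mixed multiplicity reduces to the Samuel multiplicity, cf. [@MV Lemma 3.2]). On the other hand, $\Pi = \{(x),(y)\},$ $\ell(N_{(x)}) = \ell(N_{(y)}) = 1$ and $e(J^{[1]},I_1^{[0]};R/(x)) = e(\frak m;R/(x)) = 1 = e(J^{[1]},I_1^{[0]};R/(y)),$ so the right-hand side also equals $2.$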
Let $I_1,\ldots, I_d$ be $\frak n$-primary ideals of $R$. Set $\dim N = q.$ Denote by $P(n_1,\ldots,n_d)$ the Hilbert-Samuel polynomial of the Hilbert-Samuel function $\ell_R\Big(\frac{N}{I_1^{n_1}\cdots I_d^{n_d}N}\Big).$ For any $1\leq i\leq d,$ denote by $Q_i(n_1,\ldots,n_d)$ the Hilbert-Samuel polynomial of the Hilbert-Samuel function $\ell_R\Big(\frac{I_1^{n_1}\cdots I_i^{n_i}\cdots I_d^{n_d}N}{I_1^{n_1}\cdots I_i^{n_i+1}\cdots I_d^{n_d}N}\Big).$ Then we have $\deg P(n_1,\ldots,n_d) = q$ and $$P(n_1,\ldots,n_i+1,\ldots,n_d) -P(n_1,\ldots,n_i,\ldots,n_d) = Q_i(n_1,\ldots, n_i, \ldots,n_d).$$ Write the terms of total degree $q$ in $P(\mathrm{\bf n})$ in the form $\sum_{\mid\mathrm{\bf k}\mid = q} e(\mathrm{\bf I}^{[\mathrm{\bf k}]}; N)\frac{\mathrm{\bf n}^\mathrm{\bf k}}{k_1! \cdots k_d!}.$ Since $k_1+\cdots+k_d= \mid\mathrm{\bf k}\mid = q >0,$ there exists $1 \le j \le d$ such that $k_j>0.$ It is easy to check that $\dfrac{e(\mathrm{\bf I}^{[\mathrm{\bf k}]}; N)}{k_1!\cdots (k_j-1)!\cdots k_d!}n_1^{k_1}\cdots n_j^{k_j-1}\cdots n_d^{k_d}$ is a term of total degree $q-1$ in $Q_j(\mathrm{\bf n}).$ So $e(\mathrm{\bf I}^{[\mathrm{\bf k}]}; N)$ as in [@HS] is exactly the mixed multiplicity of $N$ with respect to $(I_1,\ldots,I_j,\ldots, I_d)$ of the type $(k_1,\ldots,k_j,\ldots,k_d)$ defined in Section 2 with $I_j$ playing the role of $J$. Therefore, for any non-negative integers $k_1,\ldots,k_d$ with $k_1+\cdots+k_d = \;\mid\mathrm{\bf k}\mid \;= q$, one also calls $e(\mathrm{\bf I}^{[\mathrm{\bf k}]}; N)$ the mixed multiplicity of $N$ with respect to $(I_1,\ldots, I_d)$ of the type $(k_1,\ldots,k_d).$
Then as a consequence of Corollary 3.6, we get the following result.
0.2cm
[**Corollary 3.7**]{} [@HS Theorem 17.4.8]. [*Let $(R, \frak n)$ be a noetherian local ring with maximal ideal $\frak{n}$ and infinite residue field $k = R/\frak{n},$ and $\frak n$-primary ideals $I_1,\ldots,I_d.$ Let $N$ be a finitely generated $R$-module of Krull dimension $\dim N >0$. Denote by $\Pi$ the set of all prime ideals $\frak p $ of $R$ such that $\frak p \in \mathrm{Min}(R/\mathrm{Ann}N)$ and $\dim R/\frak p = \dim N.$ Assume that $k_1,\ldots, k_d$ are non-negative integers with $k_1+\cdots+k_d = \dim N.$ Then we have $$e(\mathrm{\bf I}^{[\mathrm{\bf k}]};N)= \sum_{\frak p \in \Pi}\ell(N_{\frak p})e(\mathrm{\bf I}^{[\mathrm{\bf k}]};R/\frak p).$$*]{}
Since $\dim N >0$ and $I = I_1\cdots I_d$ is an $\frak n$-primary ideal, $\mathrm{ht}\dfrac{I + \mathrm{Ann}N}{\mathrm{Ann}N}> 0.$ Hence the proof is immediate from Corollary 3.6. $\blacksquare$
0.2cm [**Remark 3.8.**]{} Let $ W_1, W_2, W_3$ be finitely generated $R$-modules and let $I_1,\ldots,I_d$ be ideals of $R$ such that $I=I_1\cdots I_d \nsubseteq \sqrt{\mathrm{Ann}{W_i}}$ for all $i = 1, 2, 3.$ Let $$0\longrightarrow W_1 \longrightarrow W_3 \longrightarrow W_2\longrightarrow 0$$ be a short exact sequence of $R$-modules. For any $i = 1, 2, 3,$ set $\overline{W}_i= \dfrac{W_i}{0_{W_i}: I^\infty}$ and $p_i = \dim \overline{W}_i.$ Denote by $\Pi_i$ the set of prime ideals such that $\frak p \in \mathrm{Min}(R/\mathrm{Ann}\overline{W}_i)$ and $\dim R/\frak p = p_i$. Set $\Omega = \Pi_1 \cup\Pi_2\cup\Pi_3$. For any $\frak p \in \Omega,$ we always have the short exact sequence $$0\longrightarrow (W_1)_\frak p \longrightarrow (W_3)_\frak p \longrightarrow (W_2)_\frak p\longrightarrow 0.$$ If $p_j < p_i$ and $k$ is the remaining index, so that $\{i,\;j,\;k\} = \{1,\;2,\;3\},$ then for any $\frak p \in \Pi_i,$ we get $\;\dim \overline{W}_j < \dim R/\frak p$, and hence $\frak p \nsupseteq \mathrm{Ann}\overline{W}_j$. In this case, $(\overline{W}_j)_\frak p = 0.$ By Remark 3.3, $(W_j)_\frak p = (\overline{W}_j)_\frak p.$ Hence $(W_j)_\frak p = 0$. Thus $0 \ne (W_i)_\frak p = (W_k)_\frak p.$ This argument proves that if $p_j < p_i$ then $p_i = p_k$ and $\Pi_i = \Pi_k,$ moreover, $p_3 = \max\{p_1, p_2\}$.
Using Theorem 3.2, we now prove in the following result that the mixed multiplicities of arbitrary ideals are additive on short exact sequences.
0.2cm [**Corollary 3.9.**]{}
*Keep the notations as in Remark $3.8$. Let $J$ be an $\frak n$-primary ideal. Set $ \frak J = (J,\mathfrak R(\mathrm{\bf I}; R)_+).$ Assume that $\dim \overline{W}_3= k_0+k_1+\cdots+k_d+1.$ Then the following statements hold.*
- If $\dim \overline{W}_1 =\dim \overline{W}_2=\dim \overline{W}_3$ then $$\begin{aligned}
(a )&: e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; W_3)=
e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; W_1)+
e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; W_2);\\
(b)&: e\big(\frak J; \mathfrak R(\mathrm{\bf I}; \overline{W}_3)\big)
= e\big(\frak J; \mathfrak R(\mathrm{\bf I}; \overline{W}_1)\big)+ e\big(\frak J; \mathfrak R(\mathrm{\bf I}; \overline{W}_2)\big).\end{aligned}$$
- If $\{h,\;k\} = \{1,\;2\}$ and $\dim \overline{W}_3 > \dim \overline{W}_h$ then $$\begin{aligned}
(a)&: e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; W_3)=
e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; W_k);\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\\
(b)&: e\big(\frak J; \mathfrak R(\mathrm{\bf I}; \overline{W}_3)\big)
= e\big(\frak J; \mathfrak R(\mathrm{\bf I}; \overline{W}_k)\big).\end{aligned}$$
The proof of (i): Since $p_1 = p_2 = p_3,$ by Theorem 3.2 we have $$e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; W_i)
= \sum_{\frak p\in \Pi_i}\ell(W_i)_\frak p e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; R/\frak p)$$ for $i = 1, 2, 3.$ Let $\frak p \in \Omega\setminus \Pi_i.$ Since $\dim R/\frak p = p_i$ and $\frak p \notin \Pi_i,$ $\frak p \nsupseteq \mathrm{Ann}\overline{W}_i.$ Consequently, $(\overline{W}_i)_\frak p = 0.$ By Remark 3.3, $(W_i)_\frak p = (\overline{W}_i)_\frak p.$ So $(W_i)_\frak p = 0.$ From this it follows that $$e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; W_i) = \sum_{\frak p\in \Pi_i}\ell(W_i)_\frak p e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; R/\frak p)
= \sum_{\frak p\in \Omega}\ell(W_i)_\frak p e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; R/\frak p)$$ for all $i = 1, 2, 3.$ Therefore by Theorem 3.2, we obtain $$\begin{aligned}
e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; W_3) &= \sum_{\frak p\in \Omega}\ell(W_3)_\frak p e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; R/\frak p)\\
&= \sum_{\frak p\in \Omega}(\ell(W_1)_\frak p + \ell(W_2)_\frak p) e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; R/\frak p)\\
&= \sum_{\frak p\in \Omega}\ell(W_1)_\frak p e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; R/\frak p)
+ \sum_{\frak p\in \Omega}\ell(W_2)_\frak p e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; R/\frak p)\\
& = e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; W_1) + e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; W_2).\end{aligned}$$ Hence we get (a) of (i). By Corollary 2.5(ii) we have (b) of (i). The case that $p_3 > p_h:$ By Remark 3.8, $p_3 = p_k;$ $\Pi_3 = \Pi_k$ and $(W_3)_\frak p = (W_k)_\frak p$ for all $\frak p \in \Pi_3 = \Pi_k.$ Consequently, we obtain (ii) by Corollary 2.5(ii) and since $$\begin{aligned}
e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; W_3) &= \sum_{\frak p\in \Pi_3}\ell(W_3)_\frak p e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; R/\frak p)\\
&= \sum_{\frak p\in \Pi_k}\ell(W_k)_\frak p e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; R/\frak p)\\
&= e(J^{[k_0+1]}, \mathbf{I}^{[\mathbf{k}]}; W_k).
\; \blacksquare \end{aligned}$$
[**Remark 3.10.**]{} Now, if we set the mixed multiplicities of the modules $W_i$ to be $$e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; W_i) = 0$$ whenever $ k_0+\cdots+k_d > \dim \overline{W}_i-1,$ then from Corollary 3.9 we immediately get that if $k_0 + \mid \mathrm{\bf k}\mid = \dim\overline{W}_3-1$ then $$e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; W_3)=
e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; W_1)+
e(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]}; W_2).$$ It would be natural to expect a proof of Theorem 3.2 to rely on Corollary 3.9; hence Corollary 3.9 itself is one of the obstructions to proving Theorem 3.2, and this motivated the proof of Theorem 3.2 given in this paper.\
**4. Filter-regular sequences of multi-graded modules**
0.2cm In this section, we explore the relationship between filter-regular sequences of the multi-graded $F_J(J,\mathrm{\bf I}; R)$-module $ F_J(J,\mathrm{\bf I}; N)$ and weak-(FC)-sequences of ideals; this relationship will be used in the proofs of Section 5. 0.2cm The concept of filter-regular sequences was introduced by Stuckrad and Vogel in [@SV]. The theory of filter-regular sequences became an important tool to study some classes of singular rings and has been continually developed (see e.g. [@BS; @Hy; @Tr1; @Tr2; @VM]). 0.2cm [**Definition 4.1.**]{} Let $S=\bigoplus_{n_1,\ldots,n_d\ge 0}S_{(n_1,\ldots,n_d)}$ be a finitely generated standard $\mathbb{N}^d$-graded algebra over an artinian local ring $A$ and let $M=\bigoplus_{n_1,\ldots,n_d\ge 0}M_{(n_1,\ldots,n_d)}$ be a finitely generated $\mathbb{N}^d$-graded $S$-module. Assume that $S_{(1,1,\ldots,1)}$ is not contained in $ \sqrt{\mathrm{Ann}M}$. Then a homogeneous element $x\in S$ is called an [*$S_{++}$-filter-regular element with respect to $M$*]{} if $(0_M:x)_{(n_1,\ldots,n_d)}=0$ for all large $n_1,\ldots,n_d.$ Let $x_1,\ldots, x_t$ be homogeneous elements in $S$. We say that $x_1,\ldots, x_t$ is an [*$S_{++}$-filter-regular sequence with respect to $M$*]{} if $x_i$ is an $S_{++}$-filter-regular element with respect to $\dfrac{M}{(x_1,\ldots, x_{i-1})M}$ for all $i = 1,\ldots, t.$ 0.2cm [**Remark 4.2.**]{} If $S_{(1,1,\ldots,1)}\subseteq \sqrt{\mathrm{Ann} M}$ then $(0_M:x)_{(n_1,\ldots,n_d)} \subseteq M_{(n_1,\ldots,n_d)}= 0$ for all large $n_1,\ldots,n_d.$ Hence every homogeneous element of $S$ would trivially have the defining property of an $S_{++}$-filter-regular element, which carries no useful information. That is why the case $S_{(1,1,\ldots,1)}\subseteq \sqrt{\mathrm{Ann} M}$ is excluded in Definition 4.1 when defining $S_{++}$-filter-regular elements.
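0.2cm For example (a toy case illustrating Definition 4.1), let $A = k$ be a field, let $S = k[x_1,x_2]$ carry the standard $\mathbb{N}^2$-grading with $\deg x_1 = (1,0)$ and $\deg x_2 = (0,1),$ and let $M = S \oplus S/(x_1,x_2).$ Then $S_{(1,1)} \nsubseteq \sqrt{\mathrm{Ann}M},$ and $0_M : x_1$ is the summand $S/(x_1,x_2),$ which is concentrated in degree $(0,0).$ Hence $(0_M:x_1)_{(n_1,n_2)} = 0$ for all large $n_1,n_2,$ so $x_1$ is an $S_{++}$-filter-regular element with respect to $M,$ although it is not a regular element on $M.$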
0.2cm [**Note 4.3.**]{} If $S_{(1,1,\ldots,1)}\nsubseteq \sqrt{\mathrm{Ann} M},$ then by [@VM], a homogeneous element $x\in S$ is an $S_{++}$-filter-regular element with respect to $M$ if and only if $x\notin P$ for every $P\in \mathrm{Ass}_SM$ with $S_{++}\nsubseteq P.$ That means $x\notin\bigcup_{S_{++}\nsubseteq P,\; P\in \mathrm{Ass}_SM}P.$ In this case, for any $1\le i \le d,$ there exists an $S_{++}$-filter-regular element $x \in S_i \setminus\frak m S_i.$
Remember that the positivity of mixed multiplicities and their relationship to Hilbert-Samuel multiplicities of ideals have attracted much attention (see e.g. [@KV; @KR1; @KR2; @MV; @Ro; @Sw; @Tr2; @Vi1; @Vi2; @Vi3; @VT]). Over the years, mixed multiplicities have been expressed in terms of Hilbert-Samuel multiplicities via different kinds of sequences: by Risler-Teissier in 1973 [@Te] via superficial sequences, by Rees in 1984 [@Re] via joint reductions, and by Viet in 2000 [@Vi1] via (FC)-sequences (see e.g. [@DV; @MV; @VT]).
0.2cm [**Definition 4.4**]{} [@Vi1]. Let $(R, \frak n)$ be a noetherian local ring with maximal ideal $\frak{n},$ infinite residue field $k = R/\frak{n}$ and let $N$ be a finitely generated $R$-module. Let $I_1,\ldots,I_d$ be ideals such that $I_1\cdots I_d$ is not contained in $ \sqrt{\mathrm{Ann}{N}}.$ Set $I=I_1\cdots I_d.$ An element $x \in R$ is called an [*$(FC)$-element of $N$ with respect to $(I_1,\ldots, I_d)$*]{} if there exists $i \in \{ 1, \ldots, d\}$ such that $x \in I_i$ and the following conditions are satisfied:
- $x$ is an $I$-filter-regular element with respect to $N,$ i.e., $0_N:x \subseteq 0_N: I^{\infty}.$
- $x{N}\bigcap {I_1}^{n_1} \cdots I_i^{n_i+1}\cdots I_d^{n_d}{N}
= x{I_1}^{n_1}\cdots I_i^{n_i}\cdots I_d^{n_d}{N}$ for all $n_1,\ldots,n_d\gg0.$
- $\dim N/(xN:I^\infty)=\dim N/(0_N:I^\infty)-1.$
We call $x$ a [*weak-$(FC)$-element of $N$ with respect to $(I_1,\ldots, I_d)$*]{} if $x$ satisfies the conditions (i) and (ii).
Let $x_1, \ldots, x_t$ be a sequence in $R$. For any $0\le i < t,$ set ${N}_i = \dfrac{N}{(x_1, \ldots, x_{i})N}$. Then $x_1, \ldots, x_t$ is called a [*weak-$(FC)$-sequence of $N$ with respect to $(I_1,\ldots, I_d)$*]{} if $x_{i + 1}$ is a weak-(FC)-element of ${N}_i$ with respect to $(I_1,\ldots, I_d)$ for all $i = 0, \ldots, t - 1$.
$x_1, \ldots, x_t$ is called an [*$(FC)$-sequence of $N$ with respect to $(I_1,\ldots, I_d)$*]{} if $x_{i + 1}$ is an (FC)-element of ${N}_i$ with respect to $(I_1,\ldots, I_d)$ for all $i = 0, \ldots, t - 1$.
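0.2cm For example, let $R = N = k[[x,y]],$ $d = 1$ and $I_1 = I = \frak m = (x,y).$ Then $x$ is an $(FC)$-element of $N$ with respect to $(\frak m)$: condition (i) holds because $0_R : x = 0;$ condition (ii) holds because $xR \cap \frak m^{n+1} = x\frak m^{n}$ for all $n;$ and condition (iii) holds because $\dim R/\big((x):\frak m^\infty\big) = \dim R/(x) = \dim R - 1.$ On the other hand, $x^{2} \in \frak m$ fails condition (ii), since $x^{2}y^{n-1} \in x^{2}R \cap \frak m^{n+1}$ but $x^{2}y^{n-1} \notin x^{2}\frak m^{n}.$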
Recall that $$\begin{array}{l}\widetilde{e}(M)=\sum_{\mid \mathrm{\bf k}\mid=\:\ell-1}e(M;\mathrm{\bf k});\\ S_i=S_{(0,\ldots,{\underbrace{1}_i},\ldots,0)}
\mathrm {\;\;for \;\; all\;\;} i=1,\ldots,d;\\ \;\mathbb{S}= F_J(J,\mathrm{\bf I}; R) =\bigoplus_{n_0, n_1,\ldots,n_d\ge 0}\dfrac{J^{n_0}I_1^{n_1}\cdots I_d^{n_d}}{J^{n_0+1}I_1^{n_1}\cdots I_d^{n_d}};\\
\mathbb{M}= F_J(J,\mathrm{\bf I};N) =\bigoplus_{n_0, n_1,\ldots,n_d\ge 0}\dfrac{J^{n_0}I_1^{n_1}\cdots I_d^{n_d}{N}}{J^{n_0+1}I_1^{n_1}\cdots I_d^{n_d}{N}}.\end{array}$$ Set $S_{\widehat{i}} = \bigoplus_{n_1,\ldots,n_{i-1}, n_{i+1},\ldots, n_d \ge 0\;;\;n_i=0}S_{(n_1,\ldots,n_d)}\; \text {and}\; M_{\widehat{i}} = S_{\widehat{i}}M_{(0,\ldots,0)}.$
0.2cm [**Proposition 4.5.**]{} [*Let $x \in I_i$ be a weak-(FC)-element of $N$ with respect to $(J, I_1,\ldots, I_d)$ and denote by $\bar x$ the image of $x$ in $\mathbb{S}_i.$ Then the following statements hold.*]{}

- *$\bar x$ is an $\mathbb{S}_{++}$-filter-regular element with respect to $\mathbb{M}.$*

- *$\dim (\mathbb{M}/\bar x\mathbb{M})^\triangle = \dim \dfrac{{N}}{x{N}: I^\infty}$ and $\widetilde{e}(\mathbb{M}/\bar x\mathbb{M}) =\widetilde{e}\big(F_J(J,\mathrm{\bf I};\dfrac{{N}}{x{N}})\big).$*

- $\begin{array}{l} \mathbb{S}/\bar I_i\mathbb{S}\cong F_J(J,I_1,\ldots,I_{i-1},I_{i+1},\ldots,I_d;R)\cong \mathbb{S}_{\widehat{i}}\;\text{and}\\
\mathbb{M}/\bar I_i\mathbb{M}\cong F_J(J,I_1,\ldots,I_{i-1},I_{i+1},\ldots,I_d;N)\cong\mathbb{M}_{\widehat{i}}.\end{array}$
We have $(0_{N}: I^\infty)\bigcap J^mI_1^{m_1}\cdots I_d^{m_d}{N}= 0$ for all $m, m_1,\ldots, m_d \gg 0$ by the Artin-Rees lemma. Since $x$ is an $ I$-filter-regular element with respect to $N,$ $$(0_N:x) \bigcap J^mI_1^{m_1}\cdots I_d^{m_d}{N} \subseteq (0_N: {I}^{\infty})\bigcap J^mI_1^{m_1}\cdots I_d^{m_d}{N}= 0$$ for all $m, m_1,\ldots, m_d \gg 0.$ From this it follows that $$\begin{aligned}
&\big(J^{n+1}I_1^{n_1}\cdots I_i^{n_i+1}\cdots I_d^{n_d}N:x\big)\bigcap J^{n}I_1^{n_1}\cdots I_i^{n_i}\cdots I_d^{n_d}N\\
&= \Big[\big(x{N }\bigcap J^{n+1}{I_1}^{n_1} \cdots I_i^{n_i+1}\cdots I_d^{n_d}{N }\big):x\Big]\bigcap J^{n}I_1^{n_1}\cdots I_i^{n_i}\cdots I_d^{n_d}N\\
&= \Big[xJ^{n+1}{I_1}^{n_1} \cdots I_i^{n_i}\cdots I_d^{n_d}{N}:x\Big]\bigcap J^{n}I_1^{n_1}\cdots I_i^{n_i}\cdots I_d^{n_d}N\\
&= \Big[J^{n+1}{I_1}^{n_1} \cdots I_i^{n_i}\cdots I_d^{n_d}{N}+0_{N}:x\Big]\bigcap J^{n}I_1^{n_1}\cdots I_i^{n_i}\cdots I_d^{n_d}N\\
&= J^{n+1}{I_1}^{n_1} \cdots I_i^{n_i}\cdots I_d^{n_d}{N}+\big(0_{N}:x\big)\bigcap J^{n}I_1^{n_1}\cdots I_i^{n_i}\cdots I_d^{n_d}N\\
&= J^{n+1}{I_1}^{n_1} \cdots I_i^{n_i}\cdots I_d^{n_d}{N}\end{aligned}$$ for all $n, n_1,\ldots,n_d\gg0.$ Hence $[0_{\mathbb{M}}: \bar x]_{(n, n_1,\ldots,n_d)} = 0$ for all $n, n_1,\ldots,n_d\gg0.$ Thus, $\bar x$ is an $\mathbb{S}_{++}$-filter-regular element. We get (i). It can be verified that
$[\mathbb{M}/\bar x\mathbb{M}]_{(m, m_1,\ldots,m_d)}\cong\dfrac{J^mI_1^{m_1}\cdots I_d^{m_d}{N}}{J^{m+1}I_1^{m_1}\cdots I_d^{m_d}{N}+xJ^{m}I_1^{m_1}\cdots I_i^{m_i-1}\cdots I_d^{m_d}{N}} \;\;\; \text{and } $ $$\begin{array}{l}\Big[F_J(J,\mathrm{\bf I};\dfrac{{N}}{x{N}})\Big]_{(m, m_1,\ldots,m_d)}=\bigg[\bigoplus_{n, n_1,\ldots,n_d\ge 0}\dfrac{J^nI_1^{n_1}\cdots I_d^{n_d}({N}/x{N})}{J^{n+1}I_1^{n_1}\cdots I_d^{n_d}({N}/x{N})}\bigg]_{(m, m_1,\ldots,m_d)}\\\cong \dfrac{J^mI_1^{m_1}\cdots I_d^{m_d}{N}+ x{N}}{J^{m+1}I_1^{m_1}\cdots I_d^{m_d}{N}+x{N}}
\vspace{6pt}\cong \dfrac{J^mI_1^{m_1}\cdots I_d^{m_d}{N}}{J^{m+1}I_1^{m_1}\cdots I_d^{m_d}{N} + x{N}\bigcap J^mI_1^{m_1}\cdots I_d^{m_d}{N}}.\end{array}$$ Since $x$ is a weak-(FC)-element, $$x{N}\bigcap J^mI_1^{m_1}\cdots I_d^{m_d}{N}= xJ^{m}I_1^{m_1}\cdots I_i^{m_i-1}\cdots I_d^{m_d}{N}$$ for all $m, m_1,\ldots,m_d\gg0.$ Hence $[\mathbb{M}/\bar{x}\mathbb{M}]_{(m, m_1,\ldots,m_d)} \cong \Big[F_J(J,\mathrm{\bf I};\dfrac{{N}}{x{N}})\Big]_{(m, m_1,\ldots,m_d)}$ for all $m, m_1,\ldots,m_d\gg0.$ From this it follows that $$\dim \big(\mathbb{M}/\bar x\mathbb{M}\big)^\triangle = \dim \Big[F_J(J,\mathrm{\bf I};\dfrac{{N}}{x{N}})\Big]^\triangle= \dim \dfrac{{N}}{x{N}: { I}^\infty}$$ by Note 2.1 and $\widetilde{e}(\mathbb{M}/\bar x\mathbb{M}) =\widetilde{e}\Big(F_J(J,\mathrm{\bf I};\dfrac{{N}}{x{N}})\Big).$ We get (ii). Since $\bar I_i= \mathbb{S}_i,$ (iii) is obvious. $\blacksquare$
**5. Recursion formulas for multiplicities of graded modules**
This section gives recursion formulas for the sum of all the mixed multiplicities of multi-graded modules. As an application, we obtain recursion formulas for the multiplicity of multi-graded Rees modules. Recall that $\widetilde{e}(M)$ denotes the sum of all the mixed multiplicities of $M,$ i.e., $\widetilde{e}(M)=\sum_{\mid \mathrm{\bf k}\mid=\:\ell-1}e(M;\mathrm{\bf k});$ $$\begin{aligned}
S_i&=S_{(0,\ldots,{\underbrace{1}_i},\ldots,0)}
\;\;\text {for all}\;\; i=1,\ldots,d; \\
S_{\widehat{i}} &= \bigoplus_{n_1,\ldots,n_{i-1}, n_{i+1},\ldots, n_d \ge 0\;;\;n_i=0}S_{(n_1,\ldots,n_d)}\; \text {and}\; M_{\widehat{i}} = S_{\widehat{i}}M_{(0,\ldots,0)}.\end{aligned}$$
We have the following comment. 0.2cm [**Remark 5.1.**]{} For any $m \geqslant 0,$ $S_i^{m} M_{\widehat{i}}$ is a finitely generated $\mathbb{N}^{d-1}$-graded $S_{\widehat{i}}\,$-module. Since $0 : S_i^uM_{\widehat{i}} = 0 : S_i^{v} M_{\widehat{i}}$ for all $u, v \gg 0,$ there exists $h$ such that $\dim \mathrm{Supp}_{++}S_i^uM_{\widehat{i}}= \dim \mathrm{Supp}_{++}S_i^vM_{\widehat{i}}$ for all $u, v \ge h.$ Hence by [@VM Remark 3.1], $\dim_{{S_{\widehat{i}}}^\triangle}[S_i^uM_{\widehat{i}}]^\triangle = \dim_{{S_{\widehat{i}}}^\triangle}[S_i^vM_{\widehat{i}}]^\triangle$ for all $u, v \ge h.$\
The main result of this section is the following theorem.\
[**Theorem 5.2.**]{}
Since $x\in S_i$ is an $S_{++}$-filter-regular element with respect to $M,$ we have $$\ell_A\Big[\Big(\dfrac{M}{xM}\Big)_{(n_1,\ldots,n_d)}\Big]=
\ell_A[M_{(n_1,\ldots,n_{d})}]-\ell_A[M_{(n_1,\ldots,n_i-1,\ldots,n_{d})}]\eqno(7)$$ for all large $n_1,\ldots,n_d$ by [@VM Remark 2.6]. Denote by $P(n_1,\ldots,n_i,\ldots,n_d)$ the polynomial of $\ell_A[M_{(n_1,\ldots,n_{d})}]$ and by $Q(n_1,\ldots,n_d)$ the polynomial of $\ell_A\Big[\Big(\dfrac{M}{xM}\Big)_{(n_1,\ldots,n_d)}\Big].$ Then from (7) we have $$Q(n_1,\ldots,n_d)=P(n_1,\ldots,n_i,\ldots,n_d)-
P(n_1,\ldots,n_i-1,\ldots,n_d).\eqno(8)$$ Since $e(M;k_1,\ldots,k_d)\ne 0 $ and $k_i > 0,$ by (8) we get $\deg Q =\deg P -1$ and $$e(M;h_1,\ldots,h_d)=e\Big(\dfrac{M}{xM};h_1,\ldots,h_i-1,\ldots,h_d\Big)\; \text {for all } h_i > 0. \eqno(9)$$ By $(9)$, $$\begin{array}{l}\sum_{\mid\mathrm{\bf h}\mid\:=\:\ell-1;\; h_i >0}e(M;\mathrm{\bf h})
=\sum_{\mid \mathrm{\bf h}\mid\:=\:\ell-1;\; h_i >0}e\Big(\dfrac{M}{xM};h_1,\ldots,h_i-1,\ldots, h_d\Big).\end{array}$$ Since $\widetilde{e}\Big(\dfrac{M}{xM}\Big) = \sum_{\mid \mathrm{\bf h}\mid\:=\:\ell-1;\; h_i >0}e\Big(\dfrac{M}{xM};h_1,\ldots,h_i-1,\ldots, h_d\Big),$ $$\widetilde{e}\Big(\dfrac{M}{xM}\Big) = \sum_{\mid \mathrm{\bf h}\mid\:=\:\ell-1;\; h_i >0}e(M;\mathrm{\bf h}).$$ We have (i). Remember that $$\begin{array}{l}\widetilde{e}(M)=\sum_{\mid \mathrm{\bf h}\mid\:=\:\ell-1}
e(M;\mathrm{\bf h})
=\sum_{\mid\mathrm{\bf h}\mid\:=\:\ell-1;\; h_i >0}e(M;\mathrm{\bf h})
+\sum_{\mid \mathrm{\bf h}\mid\:=\:\ell-1;\; h_i =0}e(M;\mathrm{\bf h}).
\end{array}$$ Thus, $$\widetilde{e}(M) = \widetilde{e}\Big(\dfrac{M}{xM}\Big)+\sum_{\mid\mathrm{\bf h}\mid\:=\:\ell-1;\; h_i =0}e(M;\mathrm{\bf h}).\eqno(10)$$ Now, we prove (ii). Choose $v \gg 0$ such that $$P(n_1,\ldots,n_d)= \ell_A[M_{(n_1,\ldots,n_{d})}]$$ for all $n_1,\ldots,n_d \ge v.$ Then $P(n_1,\ldots,v, \ldots,n_d)= \ell_A[M_{(n_1,\ldots,v,\ldots,n_{d})}]$ for all $$n_1,\ldots,n_{i-1},n_{i+1},\ldots, n_d \ge v\; \text{and}\; n_i = v.$$ Note that $$\ell_A[M_{(n_1,\ldots,v,\ldots,n_{d})}] = \ell_A[S_i^v{M_{\widehat{i}}}_{(n_1,\ldots,0,\ldots,n_{d})}]$$ and the terms of total degree $\ell-1$ in the polynomial $$P(n_1,\ldots,v, \ldots,n_d)= \ell_A[S_i^v{M_{\widehat{i}}}_{(n_1,\ldots,0,\ldots,n_{d})}]$$ have the form $$\sum_{h_1\:+\:\cdots+0+\:\cdots+\:h_d\;=\;\ell-1}e(M;h_1,\ldots,0,\ldots, h_d)\dfrac{n_1^{h_1}\cdots v^0\cdots n_d^{h_d}}{h_1!\cdots 0!\cdots h_d!}.$$ This follows that $\sum_{\mid\mathrm{\bf h}\mid\:=\:\ell-1;\; h_i =0}e(M;\mathrm{\bf h})\ne 0$ if and only if $\dim_{{S_{\widehat{i}}}^\triangle}[S_i^vM_{\widehat{i}}]^\triangle =\ell$ for some $v \gg 0.$ In this case, $$e(M;h_1,\ldots,h_{i-1},0,h_{i+1},\ldots, h_d) = e(S_i^vM_{\widehat{i}};h_1,\ldots,h_{i-1},h_{i+1},\ldots, h_d)$$ for all $v \gg 0$ by Remark 5.1. Therefore $\widetilde{e}\big({S_i^vM_{\widehat{i}}}\big)=
\sum_{\mid\mathrm{\bf h}\mid\:=\:\ell-1;\; h_i =0}e(M;\mathrm{\bf h})$ for all $v \gg 0.$ (ii) is proved. By $(10)$ and (ii) we immediately get (iii) and (iv). $\blacksquare$
0.2cm We will now discuss how particular cases of Theorem 5.2 can be treated.\
Remember that if the multiplicities of $M,$ $\dfrac{M}{xM}$ and $S_i^vM_{\widehat{i}}$ are each expressed as the sum of all their mixed multiplicities, then $$e(M) = \widetilde{e}(M);\; e\Big(\dfrac{M}{xM}\Big)=\widetilde{e}\Big(\dfrac{M}{xM}\Big);\; e(S_i^vM_{\widehat{i}})= \widetilde{e}(S_i^vM_{\widehat{i}}).$$\
Hence as an immediate consequence of Theorem 5.2, we have the following result. 0.2cm [**Corollary 5.3.**]{}
Remember that $\mathbb{S}= F_J(J,\mathrm{\bf I}; R)$ and $\mathbb{M}= F_J(J,\mathrm{\bf I};N);$ $ I = I_1\cdots I_d$ is not contained in $ \sqrt{\mathrm{Ann}{N}};$ $\dim \dfrac{N}{0_N:{ I}^\infty} = q.$ For any $i=1,\ldots,d,$ set $$\mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; N) = \mathfrak R(I_1,\ldots,I_{i-1},I_{i+1},\ldots,I_d;N).$$ Recall that by Proposition 4.5(iii), $$\mathbb{M}_{\widehat{i}}
\cong F_J(J, I_1,\ldots,I_{i-1},I_{i+1},\ldots,I_d;N).$$ Upon simple computation, we get $$\mathbb{S}_i^v\mathbb{M}_{\widehat{i}}
\cong F_J(J, I_1,\ldots,I_{i-1},I_{i+1},\ldots,I_d;I_i^vN).$$ Set $$\overline{N} = \dfrac{N}{0_N: {I}^\infty}\;\; \text{ and}\;
\overline{\mathbb{M}}= F_J(J,\mathrm{\bf I};\overline{N}).$$ Then since $\text{ht} \dfrac{ I+\text{Ann}\overline{N}}{\text{Ann}\overline{N}} > 0,$ we have $$\dim I_i^v\overline{N} = \dim \overline{N} > \dim \dfrac{\overline{N}}{I_i^v\overline{N}}$$ for any $1\leqslant i\leqslant d$ and for all $v >0.$ Hence from short exact sequences $$0\longrightarrow I_i^v\overline{N} \longrightarrow \overline{N}\longrightarrow \dfrac{\overline{N}}{I_i^v\overline{N}}\longrightarrow 0,$$ by Corollary 3.9(ii)(b) we get $$e\big(\big(J,\mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; R)_+\big) ; \mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; \overline{N})\big) = e\big(\big(J, \mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; R)_+\big) ;\mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; I_i^v\overline{N})\big).$$ On the other hand $$\widetilde{e}
\big(\mathbb{S}_i^v{\mathbb{M}}_{\widehat{i}}\big)= e\big(\big(J,\mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; R)_+\big); \mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; I_i^v\overline{N})\big)$$ by Corollary 2.6. Hence $$\widetilde{e}
\big(\mathbb{S}_i^v{\mathbb{M}}_{\widehat{i}}\big)= e\big(\big(J,\mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; R)_+\big) ;\mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; \overline{N})\big).$$ This fact yields: 0.2cm [**Note 5.4.**]{} We have $$\widetilde{e}
\big(\mathbb{S}_i^v{\mathbb{M}}_{\widehat{i}}\big)= e\big(\big(J,\mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; R)_+\big) ; \mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,;\overline{N})\big).$$
0.2cm Put $\frak J = (J,\mathfrak R(\mathrm{\bf I};R)_+)$ and $\frak J_{\widehat{i}}= (J,\mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; R)_+).$ Then as a consequence of Theorem 5.2 and Proposition 4.5 we obtain the following results. 0.2cm [**Theorem 5.5.**]{}
- $e\Big({\frak J};\mathfrak R\big(\mathrm{\bf I};\dfrac{N}{xN:{ I}^\infty}\big)\Big)=
\sum_{h_0+\mid\mathrm{\bf h}\mid =\:q-1;\; h_i >0}e\big(J^{[h_0+1]},\mathrm{\bf I}^{[\mathrm{\bf h}]}; N\big).$
- $e\Big(\frak J; \mathfrak R\big(\mathrm{\bf I};\dfrac{N}{0_N:{ I}^\infty}\big)\Big)= e\Big(\frak J; \mathfrak R\big(\mathrm{\bf I};\dfrac{N}{xN:{ I}^\infty}\big)\Big)+ e\Big(\frak J_{\widehat{i}}; \mathfrak R\big(\mathrm{\bf I}_{\widehat{i}}\,;\dfrac{N}{0_N:{ I}^\infty}\big)\Big).$
- $e\Big(\mathfrak R\big(\mathrm{\bf I};\dfrac{N}{0_N:{ I}^\infty}\big)\Big)= e\Big(\mathfrak R\big(\mathrm{\bf I};\dfrac{N}{xN:{ I}^\infty}\big)\Big)+
e\Big(\mathfrak R\big(\mathrm{\bf I}_{\widehat{i}}\,;\dfrac{N}{0_N:{I}^\infty}\big)\Big).
$
Denote by $\bar x$ the image of $x$ in $\mathbb{S}_i.$ Since $x \in I_i$ is a weak-(FC)-element of $N$ with respect to $(J, I_1,\ldots, I_d),$ $\bar x$ is an $\mathbb{S}_{++}$-filter-regular element with respect to $\mathbb{M}$ by Proposition 4.5(i). By Proposition 4.5(ii), $\widetilde{e}\big(\mathbb{M}/\bar x\mathbb{M}\big) =\widetilde{e}\Big(F_J\big(J,\mathrm{\bf I};\dfrac{{N}}{x{N}}\big)\Big).$ Hence $$\begin{aligned}
\widetilde{e}\big(\mathbb{M}/\bar x\mathbb{M}\big) &=e\Big(F_J\big(J,\mathrm{\bf I};\dfrac{N}{xN: I^\infty}\big)\Big)\\&= e\Big(\big(J,\mathfrak R(\mathrm{\bf I}; R)_+\big); \mathfrak R\big(\mathrm{\bf I}; \dfrac{N}{xN: I^\infty}\big)\Big)\\&= e\Big({\frak J};\mathfrak R\big(\mathrm{\bf I};\dfrac{N}{xN:{ I}^\infty}\big)\Big) \end{aligned}$$ by Corollary 2.6. Thus, we get (i) by Theorem 5.2(i). Now, since $$\widetilde{e}\big(\mathbb{M}\big)= e\Big(\frak J; \mathfrak R\big(\mathrm{\bf I};\dfrac{N}{0_N:{ I}^\infty}\big)\Big)\;\; \text{and}\;\;
\widetilde{e}\big(\mathbb{M}/\bar x\mathbb{M}\big) =e\Big(\frak J; \mathfrak R\big(\mathrm{\bf I};\dfrac{N}{xN:{ I}^\infty}\big)\Big)$$ by Corollary 2.6, and $$\widetilde{e}
\big(\mathbb{S}_i^v{\mathbb{M}}_{\widehat{i}}\big)= e\big(\big(J,\mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; R)_+\big) ;\mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; \overline{N})\big)= e\Big(\frak J_{\widehat{i}}; \mathfrak R(\mathrm{\bf I}_{\widehat{i}}\,; \dfrac{N}{0_N:{ I}^\infty})\Big)$$ by Note 5.4, we have (ii) by Theorem 5.2(iii). Choosing $J = \frak n,$ we get (iii) from (ii). $\blacksquare$
Remember that if $\mathrm{ht} \dfrac{I+\text{Ann}N}{\text{Ann}N} > 0,$ then $$e\Big({\frak J};\mathfrak R\big(\mathrm{\bf I};\dfrac{N}{0_N:{ I}^\infty}\big)\Big)= e\big({\frak J};\mathfrak R\big(\mathrm{\bf I}; N\big)\big)$$ by Remark 2.7. Hence as an immediate consequence of Theorem 5.5, we have the following result.
[**Corollary 5.6.**]{}
- $e\Big(\frak J; \mathfrak R\big(\mathrm{\bf I};\dfrac{N}{xN:{ I}^\infty}\big)\Big)=
\sum_{h_0+\mid\mathrm{\bf h}\mid =\:q-1;\; h_i >0}e\big(J^{[h_0+1]},\mathrm{\bf I}^{[\mathrm{\bf h}]}; N\big).$
- $e\big(\frak J; \mathfrak R\big(\mathrm{\bf I}; N\big)\big)= e\Big(\frak J; \mathfrak R\big(\mathrm{\bf I};\dfrac{N}{xN:{ I}^\infty}\big)\Big)+ e\big(\frak J_{\widehat{i}} ; \mathfrak R\big(\mathrm{\bf I}_{\widehat{i}}\,; {N}\big)\big).$
- $e\big(\mathfrak R\big(\mathrm{\bf I}; {N}\big)\big)= e\Big(\mathfrak R\big(\mathrm{\bf I};\dfrac{N}{xN:{ I}^\infty}\big)\Big)+
e\big(\mathfrak R\big(\mathrm{\bf I}_{\widehat{i}}\,; {N}\big)\big).
$
Suppose that $e\big(J^{[k_0+1]},\mathrm{\bf I}^{[\mathrm{\bf k}]};N\big) \ne 0$ and $x_1,\ldots, x_p$ $(p \leqslant k_i)$ is a weak-(FC)-sequence in $I_i.$ By Theorem 5.5 and by induction on $p$, we get the following corollary. 0.2cm [**Corollary 5.7.**]{} $$\begin{aligned}
e\Big(\frak J; \mathfrak R\big(\mathrm{\bf I};\dfrac{N}{0_N: I^\infty}\big)\Big)
&= e\Big(\frak J; \mathfrak R\big(\mathrm{\bf I};\dfrac{N}{(x_1,\ldots, x_p)N:{ I}^\infty}\big)\Big)\\ &+ \sum_{j=0}^{p-1}e\Big(\frak J_{\widehat{i}}; \mathfrak R\big(\mathrm{\bf I}_{\widehat{i}}\,;\dfrac{N}{(x_1,\ldots, x_j)N:{I}^\infty}\big)\Big). \end{aligned}$$
In particular, if $d=1$ then $I= I_1.$ Put $p = \max\{i \;|\; e(J^{[q-i]},I^{[i]};N) \ne 0\}$ and assume that $x_1,\ldots, x_p$ is a weak-(FC)-sequence of $N$ with respect to $(J,I).$ Then by [@Vi1; @Vi2](see [@MV Proposition 3.3(iii) and Theorem 3.4(iii)]), $x_1,\ldots, x_p$ is a maximal (FC)-sequence of $N$ with respect to $(J,I).$ By Corollary 5.7, $$e\Big(\frak J; \mathfrak R( I;\dfrac{N}{0:{ I}^\infty})\Big)= e\Big(\frak J; \mathfrak R(I;\dfrac{N}{(x_1,\ldots, x_p)N:{I}^\infty})\Big)+ \sum_{i=0}^{p-1}e\Big( J;\dfrac{N}{(x_1,\ldots,x_i)N:{ I}^\infty}\Big).$$ Since $p$ is maximal, $e(J^{[q-i]},I^{[i]};N) \ne 0$ if and only if $ 0 \leqslant i \leqslant p$ by [@Vi1]. Consequently by [@Vi1](see [@MV Proposition 3.3(i)]), $$e(J^{[q-p-i]},I^{[p+i]};N)= e\Big(J^{[q-p-i]},I^{[i]};\dfrac{N}{(x_1,\ldots, x_p)N}\Big) \ne 0$$ if and only if $i=0$. Therefore by Corollary 2.5(ii),
$$e\Big(\frak J; \mathfrak R(I;\dfrac{N}{(x_1,\ldots, x_p)N:{I}^\infty})\Big)= e\Big(J^{[q-p]},I^{[0]};\dfrac{N}{(x_1,\ldots, x_p)N}\Big).$$ On the other hand $e\Big(J^{[q-p]},I^{[0]};\dfrac{N}{(x_1,\ldots, x_p)N}\Big)= e\Big(J;\dfrac{N}{(x_1,\ldots, x_p)N:{I}^\infty}\Big)$ by [@MV Lemma 3.2]. Hence $e\Big(\frak J;\mathfrak R(I;\dfrac{N}{(x_1,\ldots, x_p)N:{I}^\infty})\Big)=
e\Big(J;\dfrac{N}{(x_1,\ldots, x_p)N:{I}^\infty}\Big).$ Thus, $$e\Big(\frak J; \mathfrak R(I;\dfrac{N}{0_N: { I}^\infty})\Big)=\sum_{j=0}^pe\Big(J;\dfrac{N}{(x_1,\ldots, x_j)N:{I}^\infty}\Big).$$ Then we have the following corollary. 0.2cm [**Corollary 5.8.**]{} $e\Big(\frak J; \mathfrak R(I;\dfrac{N}{0_N: { I}^\infty})\Big)=\sum_{j=0}^pe\Big(J;\dfrac{N}{(x_1,\ldots, x_j)N:{I}^\infty}\Big).$\
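0.2cm As a simple check of Corollary 5.8 (in a hand-picked example), let $R = N = k[[x,y]],$ $J = I = \frak m = (x,y)$ and $x_1 = x,$ which is a weak-(FC)-element of $R$ with respect to $(\frak m,\frak m).$ Here $q = 2$ and $e(J^{[2]},I^{[0]};R) = e(J^{[1]},I^{[1]};R) = 1,$ so $p = 1$ and, by (2), the left-hand side equals $1 + 1 = 2.$ The right-hand side is $e(\frak m;R) + e\big(\frak m;R/((x):\frak m^\infty)\big) = 1 + e(\frak m;R/(x)) = 2,$ as expected.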
In the case that $$\text{ht}\dfrac{ I+\text{Ann}N}{\text{Ann}N}>0,\;
e\Big(\frak J; \mathfrak R(I;\dfrac{N}{0_N: { I}^\infty})\Big)= e\big(\frak J; \mathfrak R(I;N)\big)$$ by Remark 2.7. We get the following result, which was proved in [@MV]. 0.2cm [**Corollary 5.9**]{} [@MV Theorem 4.2].
[99]{}
P. B. Bhattacharya, [*The Hilbert-functions of two ideals*]{}, Proc. Cambridge Philos. Soc. 53(1957), 568-575.
M. Brodmann and R. Y. Sharp, [*Local cohomology: an algebraic introduction with geometric applications*]{}, Cambridge Studies in Advanced Mathematics, No 60, Cambridge University Press, 1998.
L. V. Dinh and D. Q. Viet, [*On two results of mixed multiplicities*]{}, Int. J. Algebra 4(1) 2010, 19-23.
M. Herrmann, E. Hyry, J. Ribbe, Z. Tang, [*Reduction numbers and multiplicities of multigraded structures*]{}, J. Algebra 197(1997), 311-341.
C. Huneke and I. Swanson, [*Integral Closure of Ideals, Rings, and Modules*]{}, London Mathematical Society Lecture Note Series 336, Cambridge University Press (2006).
E. Hyry, [*The diagonal subring and the Cohen-Macaulay property of a multigraded ring*]{}, Trans. Amer. Math. Soc. 351(1999), 2213-2232.
D. Katz and J. K. Verma, [*Extended Rees algebras and mixed multiplicities*]{}, Math. Z. 202(1989), 111-128.
D. Kirby and D. Rees, [*Multiplicities in graded rings I: the general theory*]{}, Contemporary Mathematics 159(1994), 209-267.
D. Kirby and D. Rees, [*Multiplicities in graded rings II: integral equivalence and the Buchsbaum-Rim multiplicity*]{}, Math. Proc. Cambridge Phil. Soc. 119(1996), 425-445.
S. Kleiman and A. Thorup, [*Mixed Buchsbaum-Rim multiplicities*]{}, Amer. J. Math. 118(1996), 529-569.
N. T. Manh and D. Q. Viet, [*Mixed multiplicities of modules over noetherian local rings*]{}, Tokyo J. Math. 29(2006), 325-345.
D. G. Northcott and D. Rees, [*Reduction of ideals in local rings*]{}, Proc. Cambridge Phil. Soc. 50(1954), 145-158.
D. Rees, [*Generalizations of reductions and mixed multiplicities*]{}, J. London Math. Soc. 29(1984), 397-414.
P. Roberts, [*Local Chern classes, multiplicities and perfect complexes*]{}, Memoire Soc. Math. France 38(1989), 145-161.
J. Stuckrad and W. Vogel, [*Buchsbaum rings and applications*]{}, VEB Deutscher Verlag der Wissenschaften, Berlin, 1986.
I. Swanson, [*Mixed multiplicities, joint reductions and quasi-unmixed local rings*]{}, J. London Math. Soc. 48(1993), no.1, 1-14.
B. Teissier, [*Cycles évanescents, sections planes, et conditions de Whitney*]{}, Singularités à Cargèse, 1972. Astérisque, 7-8(1973), 285-362.
N. V. Trung, [*Reduction exponents and degree bound for the defining equation of graded rings*]{}, Proc. Amer. Math. Soc. 101(1987), 229-234.
N. V. Trung, [*Positivity of mixed multiplicities*]{}, Math. Ann. 319(2001), 33-63.
J. K. Verma, [*Multigraded Rees algebras and mixed multiplicities*]{}, J. Pure and Appl. Algebra 77(1992), 219-228.
D. Q. Viet, [*Mixed multiplicities of arbitrary ideals in local rings*]{}, Comm. Algebra 28(2000), 3803-3821.
D. Q. Viet, [*Sequences determining mixed multiplicities and reductions of ideals*]{}, Comm. Algebra 31(2003), 5047-5069.
D. Q. Viet, [*Reductions and mixed multiplicities of ideals*]{}, Comm. Algebra 32(2004), 4159-4178.
D. Q. Viet and N. T. Manh, [*Mixed multiplicities of multigraded modules*]{}, Forum Math. 25(2013), 337-361.
D. Q. Viet and T. T. H. Thanh, [*On $(FC)$-sequences and mixed multiplicities of multi-graded algebras*]{}, Tokyo J. Math. 34(2011), 185-202.
---
abstract: 'Reducing the noise below the shot-noise limit in sensing devices is one of the key promises of quantum technologies. Here, we study quantum plasmonic sensing based on an attenuated total reflection configuration with single photons as input. Our sensor is the Kretschmann configuration with a gold film, and a blood protein in an aqueous solution with different concentrations serves as an analyte. The estimation of the refractive index is performed using heralded single photons. We also determine the estimation error from a statistical analysis over a number of repetitions of identical and independent experiments. We show that the errors of our plasmonic sensor with single photons are below the shot-noise limit even in the presence of various experimental imperfections. Our results demonstrate a practical application of quantum plasmonic sensing is possible given certain improvements are made to the setup investigated, and pave the way for a future generation of quantum plasmonic applications based on similar techniques.'
address: |
Department of Physics, Hanyang University, Seoul, 04763, Korea\
School of Chemistry and Physics, University of KwaZulu-Natal, Durban 4001, South Africa\
National Institute for Theoretical Physics, University of KwaZulu-Natal, Durban 4001, South Africa\
Institute of Theoretical Solid State Physics, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany\
Institute of Nanotechnology, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany\
[email protected]\
[email protected]
author:
- |
Joong-Sung Lee, Seung-Jin Yoon, Hyungju Rah,\
Mark Tame, Carsten Rockstuhl, Seok Ho Song,\
Changhyoup Lee, and Kwang-Geol Lee
bibliography:
- 'reference.bib'
title: Quantum plasmonic sensing using single photons
---
Introduction
============
Plasmonic effects are successfully exploited in practical photonic sensors, providing much higher sensitivities than conventional photonic sensing platforms [@Homola99a; @Lal07; @Anker08]. The huge improvement in sensitivity results from the increased optical density of states, given by the strong electromagnetic field enhancement near a metallic surface [@Raether88]. This is linked to the excitation of propagating surface plasmon polaritons (SPPs) at spatially extended interfaces – hybrid states whose excitation is shared between the electromagnetic field and the charge density oscillation in the metal. The details of the surface plasmon resonances (SPRs) used in photonic sensors and their sensitivity depend on the geometrical and material configuration, forming a large variety of different sensing platforms [@Rothenhausler88; @Jorgenson93; @Homola99b; @Dostalek05; @Sepulveda06; @Leung07; @Svedendahl09; @Mayer11]. The most widely used plasmonic sensor is the attenuated total reflection (ATR) setup using the Kretschmann configuration. The simplicity of this configuration has led to its great success in the commercialization of classical biosensing [@Bahadir15].
The Kretschmann configuration consists of a high index glass material on which a thin metal film is coated. The analyte to be detected is deposited on the other side of the metallic film. The interface is illuminated from the glass side with an incident plane wave in TM polarization (p-polarized) that has a wave vector component parallel to the interface that is larger than the wavenumber in the medium adjacent to the metallic film on the opposite side. The incident field therefore experiences total internal reflection. However, the reflection is attenuated when a propagating SPP is excited at the interface between the metal and the analyte. The excitation conditions depend sensitively on the optical properties of the analyte. Precise sensing in the ATR setup is performed by measuring the variation in either the intensity or the phase of the reflected light at different angles (or different wavelengths) as the refractive index $n_{\text{analyte}}$ of the analyte changes. The measured reflectance curve yields the so-called SPR dip at resonance, across which the phase abruptly changes. These measurements offer a good estimation of the refractive index of the analyte with high sensitivity once the setup is calibrated. However, the statistical error $\Delta n_{\text{analyte}}$ of the estimation must also be taken into account in evaluating the sensing performance. Most importantly, this error quantifies how precise or reliable the estimated value is. It is known that when the experiment is performed with a classical laser source, the ultimate estimation error, when all technical noises are removed, is inversely proportional to the square root of the intensity of the laser light [@Ran06; @Piliarik09; @Wang11], i.e., $\Delta n_{\text{analyte}}\propto N^{-1/2}$, where $N$ is the average photon number. This limit is often called the shot-noise limit (SNL) or standard quantum limit. The error, of course, can be reduced by simply increasing the power, but this is not always an acceptable strategy since optical damage might occur when the specimens under investigation are vulnerable [@Neuman99; @Peterman03; @Taylor15; @Taylor16]. Therefore, for sensing in cases where photodamage forces operation in a low-power regime, or where $N$ is limited to a small number of photons, other strategies have to be put in place in order to go beyond the SNL.
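In a simple idealized picture (a back-of-the-envelope model that neglects all technical noise and assumes intensity interrogation at a fixed angle with a known input photon number), this scaling follows from error propagation: writing $R(n_{\text{analyte}})$ for the reflectance, shot-noise-limited detection of coherent light with average photon number $N$ gives $$\Delta R \simeq \sqrt{\frac{R}{N}}, \qquad \Delta n_{\text{analyte}} = \frac{\Delta R}{\left|\partial R/\partial n_{\text{analyte}}\right|} \propto \frac{1}{\sqrt{N}},$$ which is the SNL referred to above.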
Over the last two decades, the advantages of exploiting quantum resources have been extensively and intensively studied in the field of plasmonics [@Tame13]. Such studies not only provide a better understanding of fundamental quantum plasmonic features, but they also unlock potential applications. One promising application is quantum plasmonic sensing [@Kalashnikov14; @Fan15; @Pooser15; @Lee16; @Lee17; @Dowran18; @Chen18]. In recent years, researchers have introduced quantum techniques using particular quantum states of light for plasmonic sensing in order to beat the SNL in the context of quantum metrology [@Giovannetti04; @Boto00; @Giovannetti06; @Giovannetti11]. Kalashnikov [*et al.*]{} experimentally demonstrated the use of frequency-entangled photons in transmission spectroscopy for refractive index sensing in an array of gold nanoparticles with a noise level 70 times lower than the signal [@Kalashnikov14]. Pooser [*et al.*]{} also experimentally measured a sensitivity that is $5$ dB better than its classical counterpart by using two-mode intensity squeezed states in the Kretschmann configuration [@Fan15; @Pooser15]. Lee [*et al.*]{} studied more fundamentally the role of quantum resources combined with plasmonic features in quantum plasmonic sensing and their potential use [@Lee16; @Lee17]. Very recently, Dowran [*et al.*]{} used bright entangled twin beams to experimentally demonstrate a $56\%$ quantum enhancement in sensitivity, compared to state-of-the-art classical plasmonic sensors [@Dowran18]. Also, Chen [*et al.*]{} evaluated the usefulness of their taper-fiber-nanowire coupled system with two-photon plasmonic N00N states for quantum sensing [@Chen18].
Most of the aforementioned quantum plasmonic sensing schemes rely on transmission or absorption spectroscopy. The change of intensity of the transmitted (or reflected) light after propagation through the sensing platform is analyzed with a variation in sensing samples. It is known that the photon number state ${\left\vert{N}\right\rangle}$ is the optimal state for single-mode transmission spectroscopy, leading to a maximal enhancement in precision compared to the classical benchmark [@Monras07; @Adesso09; @Alipour14; @Meda17]. More interestingly, when the state ${\left\vert{N}\right\rangle}$ is used, the quantum-to-classical noise ratio for the same average photon number used – quantifying the amount of quantum enhancement – does not depend on the photon number $N$, but only on the total transmittance $T_{\text{total}}$. For an absolute comparison with state-of-the-art classical plasmonic sensing, a much higher $N$ photon state is desired to match the high mean photon number of the coherent states used. However, the use of single photons is sufficient to demonstrate the same relative enhancement as obtainable by higher photon number states ${\left\vert{N}\right\rangle}$ in transmission spectroscopy.
In this work, we use single photons as inputs in a plasmonic ATR sensor with the Kretschmann configuration. For our sensor to be understood in the context of transmission spectroscopy, we treat the actual reflection of a single photon from the ATR setup as a transmission through the ATR setup, as in Ref. [@Lee17]. The generation of the single-photon state (the signal) is heralded by a detection of its twin photon (the idler), due to the quantum correlation of photon pairs initially produced via spontaneous parametric down-conversion (SPDC). As a sample to analyze, a blood protein in an aqueous solution with different concentrations is chosen. Out of $\nu$ single photons sent to the ATR setup, we measure the number $N_{\text{t}}$ of transmitted single photons. We repeat the independent and identical sampling $\mu$ times, assumed to be large enough, to calculate the standard deviation $\langle \Delta N_{\text{t}}\rangle$, where $\langle \cdot\rangle$ denotes the average over $\mu$ repetitions. These statistical quantities are exploited to quantify the error of estimation in our transmission spectroscopy. We show that the measured estimation errors beat the SNL that would be obtainable by a coherent state of light with the same average photon number as the single photon. Here, the comparison to the SNL is made for the same input photon number $N$ and sampling size $\nu$, allowing us to focus more on fundamental aspects of using single photons. The quantum enhancement in the error is achieved even in the presence of significant losses, including all experimental imperfections. All of these imperfections diminish the total transmittance $T_{\text{total}}$, subsequently reducing the enhancement. We discuss how the enhancement could be further improved in our setup in a systematic way according to our theoretical analysis, which also explains the experimental results well.
Experimental scheme
===================
The schematic of our experiment is shown in Fig. \[setup\](a). A continuous wave diode laser (MDL-III-400, CNI) at $401.5$ nm pumps a nonlinear crystal (periodically poled potassium titanyl phosphate, PPKTP) in a temperature-tunable oven. Its temperature is set to $20^{\circ}$C. The crystal produces pairs of orthogonally polarized photons at $799.16$ nm and $803.47$ nm with FWHMs of $6.67$ nm and $5.01$ nm, respectively, in the same spatial mode via phase-matching for collinear type-II SPDC. The measured spectra of the generated photon pairs are shown in Fig. \[setup\](b). The produced photon pairs can be approximately written as ${\left\vert{\text{SPDC}}\right\rangle}\approx {\left\vert{00}\right\rangle}+\epsilon{\left\vert{11}\right\rangle}$ with $\epsilon\ll1$. The photon pairs are split into two spatial modes via a polarization beam splitter. One of the photons, the idler photon, is directly sent to an avalanche photodiode single-photon detector (APD, SPCM-AQR-15, PerkinElmer), while the other photon, the signal photon, is fed into the ATR sensing setup. When an idler photon is detected by the APD, it heralds the existence of a twin single photon in the signal mode due to the quantum correlation in photon numbers. In the ATR setup, mounted on a rotation stage for angular modulation, the prism is coated with a gold film of about $57$ nm thickness, where we also install a container made of acrylic glass to hold the fluidic analyte, as depicted in Fig. \[setup\](a). For an evaluation of our quantum plasmonic sensor, we choose bovine serum albumin (BSA) in aqueous solution with different concentrations [@Peters75]. The acrylic container is cleaned with deionized (DI) water before and after measurements for each concentration.
![ (a) Experimental setup. A continuous wave pump beam at $401.5$ nm is filtered to be a single mode with a particular polarization that maximizes the rate of the photon pair generation through the nonlinear crystal. This initial filtering is carried out before the beam is injected into the nonlinear crystal (PPKTP). The output beams from the crystal are also filtered via a band pass filter (Thorlabs FBH 800-40) centered at $800$ nm with a width of $40$ nm and then collimated by an iris. The orthogonally polarized pair of photons is split into separate arms through the polarization beam splitter. The photon in the idler mode is directly sent to an APD with a temporal resolution of about $1$ ns, where the detection of a photon heralds the presence of a single photon in the signal mode, which is used as a signal for sensing. This heralded signal photon is sent to the ATR setup, which consists of a prism, a gold layer of about $57$ nm, and an acrylic box that contains the fluidic analyte (see the inset for the layered structure). We then count the number of single photons in the signal mode over the sampling with a size of $\nu=10^4$, conditioned on the cases when a detection event is triggered in the idler mode within the time window of $25$ ns (The time window of the coincidence detection is determined by the FPGA used. The count rate of the idler photon is about $2\times10^{5}$ cps, so that the probability for the twin photons to be detected in different time windows is nearly zero). We repeat the sampling $\mu=10^3$ times to extract the statistical features of the estimation. (b) The spectra of the output beams are measured by a spectrometer. The central wavelengths of the photon pairs are located at $799.16$ nm and $803.47$ nm with FWHMs of $6.67$ nm and $5.01$ nm, respectively. The wavelengths can be tuned by controlling the temperature of the oven. []{data-label="setup"}](setup.pdf){width="11cm"}
Two kinds of experiments are performed in this work. First, we carry out an incident angular modulation from $66.5^{\circ}$ to $69^{\circ}$ using the heralded single-photon source for BSA concentrations of $0\%$ and $2\%$ as analytes. Here, the concentration $C$ of the BSA is calculated as a ratio of the weight (g) of the BSA powder to $100$ ml of DI water-BSA solution, e.g., $1\%=1~{\text{g}}/100~{\text{ml}}$ [@Singh05]. The weight is measured by an electronic scale that has a resolution of $0.01$ g. Second, we measure the change of the transmittance at a fixed incident angle for different concentrations of BSA ranging from $0\%$ to $2\%$ in $0.25\%$ steps.
For each kind of experiment, we post-select the cases when a detection is triggered in the idler mode from the time-tagged table of detections given by a coincidence detection scheme. This constitutes a scheme for a heralded single-photon source. Out of $\nu$ successive post-selected detections in the idler mode (or equivalently out of $\nu$ single photons sent to the signal mode), we count how many transmitted photons are found in the signal mode, yielding the sample mean $T_{\text{total}}=N_{\text{t}}/\nu$. We set the sample size as $\nu=10^4$ in our experiment. The measured transmittance $T_{\text{total}}$ would be exact in the limit $\nu\rightarrow \infty$, but in reality, where $\nu$ is finite, it fluctuates over repetitions of an identical measurement. The amount of fluctuation, the standard deviation (SD) $\langle \Delta T_{\text{total}}\rangle$ of the sample mean in our case, determines the estimation error of transmittance for a given sample of size $\nu$. To measure this quantity experimentally, we repeat the identical experiment $\mu=10^3$ times, which we assume to be large enough to extract statistical features of interest. From $\mu$ samplings with a size of $\nu$, we calculate the SD of $T_{\text{total}}$ as $$\begin{aligned}
\langle \Delta T_{\text{total}}^{\text{(meas)}}\rangle=\sqrt{\frac{1}{\mu}\sum_{j=1}^{\mu} \left(T_{\text{total}}(j)-\langle T_{\text{total}}\rangle\right)^{2}},
\label{Ttotal}\end{aligned}$$ where $T_{\text{total}}(j)$ denotes the transmittance measured in the $j$th sample of size $\nu$ and $\langle T_{\text{total}}\rangle=\sum_{j=1}^{\mu}T_{\text{total}}(j)/\mu$ denotes the mean of the sample means. To estimate the refractive index $n_{\text{BSA}}$ of the BSA in the ATR setup, a further data-processing step applied to the distribution of $T_{\text{total}}(j)$ is required, which will be explained in the next section.
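The sampling procedure just described can be mimicked by a short numerical sketch (illustration only, not the analysis code used for the data; the Bernoulli transmission model and the value of the true transmittance are assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

nu, mu = 10_000, 1_000   # sample size and number of repetitions, as in the experiment
T_true = 0.4             # hypothetical true total transmittance (assumed value)

# Each of the mu samples: nu heralded single photons, each transmitted with probability T_true.
N_t = rng.binomial(nu, T_true, size=mu)   # transmitted counts N_t(j)
T_total = N_t / nu                        # sample means T_total(j)

# Standard deviation of the sample mean over the mu repetitions, as defined in the text
sd_meas = np.sqrt(np.mean((T_total - T_total.mean()) ** 2))
print(sd_meas, np.sqrt(T_true * (1 - T_true) / nu))   # close to the binomial prediction
```

The last line illustrates that the measured spread of the sample means approaches $\sqrt{T(1-T)/\nu}$, the binomial (single-photon) prediction used below.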
Results and discussions
=======================
We aim to estimate the refractive index $n_{\text{BSA}}$ of the BSA for given concentrations in the ATR setup by fitting our measured data to a well-known formula for the reflectance $R_{\text{sp}}$ of the Kretschmann configuration [@Raether88]. The reflectance is written as $$\begin{aligned}
R_{\text{sp}}={\left\vert \frac{e^{ i 2 k_{2} d} r_{23}+ r_{12}}{e^{ i 2 k_{2} d} r_{23}r_{12} + 1}\right\vert}^{2},
\label{rsp}\end{aligned}$$ where $r_{lm}=\left(\frac{k_{l}}{\varepsilon_{l}} - \frac{k_{m}}{\varepsilon_{m}}\right)\Big/\left(\frac{k_{l}}{\varepsilon_{l}}+\frac{k_{m}}{\varepsilon_{m}}\right)$ for $l,m\in \{1,2,3 \}$, $k_{l}$ denotes the normal-to-surface component of the wave vector in the $l$th layer, $\varepsilon_{l}$ is the respective permittivity, and $d$ is the thickness of the second layer. Here, the first layer is the prism, the second layer is the gold film, and the third layer is the analyte \[see the inset in Fig. \[setup\](a)\]. The associated quantum theory for the ATR setup has been discussed in Refs. [@Tame08; @Ballester09]. What we measure in the experiment is the light reflected from the ATR setup, but we shall regard the reflected light as the transmitted light through the transducer that consists of the ATR setup, as mentioned before. The transmittance being measured in our experiment is the total transmittance $T_{\text{total}}$. This, unfortunately, is not equal to the reflectance $R_{\text{sp}}$ since photon losses can occur before and after the ATR setup. Some losses even depend on the incident angle, since the optical paths are not identically aligned for all incident angles. Therefore, we normalize $T_{\text{total}}$ by the transmittance $T_{\text{total, air}}$ measured for the case of air used as an analyte medium, which is far off resonance from the plasmonic excitation across the entire range of incident angles considered. Then, the normalized transmittance, showing only the transmittance through the prism setup, is given as $$\begin{aligned}
T_{\text{prism}}=\frac{T_{\text{total}}}{\langle T_{\text{total,air}}\rangle},
\label{Tprism}\end{aligned}$$ where the averaged value $\langle T_{\text{total,air}}\rangle=\sum_{j=1}^{\mu}T_{\text{total,air}}(j)/\mu$ is taken into account. Such normalization is expected to remove all unwanted contributions of losses, i.e., $\langle T_{\text{prism}}\rangle \approx R_{\text{sp}}$.
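For orientation, Eq. (\[rsp\]) can be evaluated with a few lines of code; the sketch below is a minimal implementation rather than the fitting code used in the analysis, and the prism refractive index (a BK7-like value of $1.515$) is an assumed parameter that is not quoted in the text:

```python
import numpy as np

def R_sp(theta_deg, n_prism, eps_gold, d_nm, n_analyte, wavelength_nm=799.0):
    """Three-layer (prism/gold/analyte) TM reflectance, Eq. (rsp) in the text."""
    k0 = 2 * np.pi / wavelength_nm
    eps = [n_prism**2, eps_gold, n_analyte**2]           # permittivities of layers 1, 2, 3
    kx = k0 * n_prism * np.sin(np.radians(theta_deg))    # in-plane wave-vector component
    k = [np.sqrt(e * k0**2 - kx**2 + 0j) for e in eps]   # normal components (complex allowed)

    def r(l, m):  # TM Fresnel coefficient between layers l and m
        a, b = k[l] / eps[l], k[m] / eps[m]
        return (a - b) / (a + b)

    phase = np.exp(2j * k[1] * d_nm)                     # round-trip phase in the gold film
    return abs((phase * r(1, 2) + r(0, 1)) / (phase * r(1, 2) * r(0, 1) + 1)) ** 2

# Illustrative call with the film parameters fitted below and an assumed prism index
print(R_sp(67.5, 1.515, -18.2484 + 0.8096j, 57.41, 1.3284))
```

With the film parameters fitted below ($\varepsilon_{\text{gold}}=-18.2484+0.8096i$, $d=57.41$ nm) and an aqueous analyte, such a function produces a reflectance dip near $67$–$68^{\circ}$, consistent with Fig. \[TransmissionNoise\](a), provided the assumed prism index is approximately correct.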
![ (a) Dots show measured transmittances $\langle T_{\text{prism}}^{\text{(meas)}}\rangle$ of Eq. (\[Tprism\]) over the incident angles from $66.5^{\circ}$ to $69^{\circ}$ for BSA concentrations of $0\%$ and $2\%$. Solid lines represent the fitted curves using Eq. (\[rsp\]). The error bars are measured as a standard deviation of $T_{\text{prism}}$ in the histogram over $\mu$ repetitions at each incident angle; see the inset for an example. (b) The errors $\langle \Delta T_{\text{total}}^{\text{(meas)}}\rangle$, corresponding to the errors $\langle \Delta T_{\text{prism}}^{\text{(meas)}}\rangle$ shown in (a), are represented as a function of $\langle T_{\text{total}}^{\text{(meas)}}\rangle$. The measured errors are compared with theoretically expected errors for classical and quantum sensing when $N=1$, which are given as $\sqrt{T_{\text{total}}^{\text{(true)}}/\nu}$ and $\sqrt{T_{\text{total}}^{\text{(true)}} (1-T_{\text{total}}^{\text{(true)}})/\nu}$, respectively. The comparison for the same input power of $N$ and the sampling size of $\nu$ clearly demonstrates that the measured errors are below the SNL, defined as the error that would be obtained in classical sensing using a coherent state of light with $N=1$. As the total transmittance $\langle T_{\text{total}}\rangle$ approaches zero, the enhancement is not so significant, but the quantum enhancement nevertheless always exists at any value of transmittance. []{data-label="TransmissionNoise"}](TransmissionNoise.pdf){width="9cm"}
In Fig. \[TransmissionNoise\](a), the measured transmittances $\langle T_{\text{prism}}\rangle$ are shown for DI water (i.e., $C=0\%$) and the BSA concentration of $2\%$ over the incident angles from $66.5^{\circ}$ to $69.0^{\circ}$. We fit Eq. (\[rsp\]) to the transmission curves to first obtain the electric permittivity and thickness of the gold film. From a simultaneous fitting to both curves, we obtain $\varepsilon_{\text{gold}}'=-18.2484$ and $\varepsilon_{\text{gold}}''=0.8096$ for the electric permittivity ($\varepsilon_{\text{gold}}=\varepsilon_{\text{gold}}'+i\varepsilon_{\text{gold}}''$) of the gold film at $\lambda=799$ nm, and a thickness of $d=57.41$ nm. Also, the refractive indices of the $2\%$ BSA solution and the DI water are inferred as $n_{{\text{BSA}},2\%}=1.3325$ and $n_{\text{DI water}}=1.3284$, respectively. The latter is in good agreement with the value ($1.3285$) measured in Ref. [@Daimon07]. The error bars shown are obtained as the SD of the histogram of $T_{\text{prism}}$ over $\mu$ repetitions \[see the inset in Fig. \[TransmissionNoise\](a), for an example\]. It is of great importance to examine whether these errors are below the SNL at the same input power considered. To this end, let us consider a coherent state ${\left\vert{\alpha}\right\rangle}$ with an average photon number of $N$ and the $N$-photon number state for classical and quantum sensing, respectively. For both cases, we suppose that photon-number-resolving detection is made at the end of the signal channel. When $\mu$ is large enough, it is expected that $\langle\Delta T_{\text{total}}^{\text{(meas)}}\rangle\approx \sqrt{\sigma^{2}/\nu}$, where $\sigma^{2}$ is the variance of the population distribution of the measurement outcomes. Provided that the true value of transmittance is given as $T_{\text{total}}^{\text{(true)}}$, it can be shown that the variances $\sigma^{2}$ are given as $\sigma_{\text{(C)}}^{2} = T_{\text{total}}^{\text{(true)}}N$, and $\sigma_{\text{(Q)}}^{2} = T_{\text{total}}^{\text{(true)}}(1-T_{\text{total}}^{\text{(true)}}) N$, for classical and quantum sensing, respectively [@Loudonbook]. These follow from the fact that the population distributions of the measurement outcomes follow the Poisson and binomial statistics, respectively [@Loudonbook]. In our experiment, $N=1$, for which the APD approximately serves as a photon-number-resolving detector for quantum sensing. The estimator we use is the sample mean, and it is a locally unbiased estimator, so that $\langle T_{\text{total}}^{\text{(meas)}}\rangle = T_{\text{total}}^{\text{(true)}}$. Therefore, the theoretically expected SDs are written as $$\begin{aligned}
\Delta T_{\text{total}}^{\text{(C)}}&=\sqrt{\frac{T_{\text{total}}^{\text{(true)}}}{\nu}},\label{errorC}\\
\Delta T_{\text{total}}^{\text{(Q)}}&=\sqrt{\frac{T_{\text{total}}^{\text{(true)}}\left(1-T_{\text{total}}^{\text{(true)}}\right)}{\nu}}\label{errorQ},\end{aligned}$$ respectively. The corresponding SDs for the normalized transmittance $T_{\text{prism}}^{\text{(true)}}$ are also given as $\Delta T_{\text{prism}}^{\text{(C)}}=\sqrt{T_{\text{prism}}^{\text{(true)}}/\nu}$ and $\Delta T_{\text{prism}}^{\text{(Q)}}=\sqrt{T_{\text{prism}}^{\text{(true)}}(1-T_{\text{total}}^{\text{(true)}}) /\nu}$, where Eq. (\[Tprism\]) is taken into account. It is apparent that the noise for classical sensing depends only on the normalized transmittance, whereas the noise for quantum sensing has an additional dependence on the total transmittance. The quantum enhancement can be quantified as a ratio of $\Delta T_{\text{prism}}^{\text{(C)}}$ to $\Delta T_{\text{prism}}^{\text{(Q)}}$, written as $$\begin{aligned}
{\cal R}=\frac{\Delta T_{\text{prism}}^{\text{(C)}}}{\Delta T_{\text{prism}}^{\text{(Q)}}}=\frac{1}{\sqrt{1-T_{\text{total}}^{\text{(true)}}}},\end{aligned}$$ which is always greater than unity. This implies that a quantum enhancement is achieved for any value of $T_{\text{total}}^{\text{(true)}}$. It is interesting that the amount of enhancement is also independent of the average photon number $N$ [@Whittaker17], and the use of a Fock state is always beneficial in reducing the estimation error for any $T_{\text{total}}$ as compared to the classical benchmark. Note that the quantum enhancement depends on the total transmittance $T_{\text{total}}$, not only on the normalized transmittance $T_{\text{prism}}$. Also note that the enhancement is minimal at the resonant point in the SPR curve, where the transmission is attenuated the most, i.e., $T_{\text{prism}}\approx 0$.
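As a quick numerical illustration of Eqs. (\[errorC\]) and (\[errorQ\]) and of the ratio ${\cal R}$ (the transmittance value used is hypothetical):

```python
import numpy as np

nu = 10_000
T_true = 0.5                                      # hypothetical total transmittance

dT_classical = np.sqrt(T_true / nu)               # coherent state with N = 1 (SNL)
dT_quantum = np.sqrt(T_true * (1 - T_true) / nu)  # single-photon Fock state
print(dT_classical, dT_quantum, dT_classical / dT_quantum)  # ratio = 1/sqrt(1-T) ~ 1.41
```

At $T_{\text{total}}^{\text{(true)}}=0.5$, for example, the single-photon error is about $29\%$ smaller than the SNL.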
![ (a) At a fixed incident angle of $\theta_{\text{in}}=67.5^{\circ}$, the transmittance through the prism changes with the refractive index $n_{\text{BSA}}$ of the BSA sample. By measuring the transmittance $\langle T_{\text{prism}}^{\text{(meas)}}\rangle$, one may infer the refractive index. However, the measured transmittance has a fluctuation represented by $\langle \Delta T_{\text{prism}}^{\text{(meas)}}\rangle$, limiting the precision of estimating the refractive index $n_{\text{BSA}}$. The dots represent the average of $T_{\text{prism}}$ and the estimated refractive index $n_{\text{BSA}}$, whereas the error bars in the horizontal and vertical directions denote the SDs of the histograms for $T_{\text{prism}}$ and the estimated $n_{\text{BSA}}$, respectively. Here the BSA concentration varies from $0\%$ to $2\%$ in $0.25\%$ steps. The inset shows a magnified region where the measured data are presented. (b) Over $\mu$ repetitions of the experiment, one constructs the histogram of the estimated refractive index $n_{\text{BSA}}$ for given concentrations. Dots and error bars show the mean and standard deviation of the histogram for the estimated $n_{\text{BSA}}$ over $\mu$ repetitions, respectively. The solid line represents the averaged dependence of the refractive index with respect to the BSA concentration, yielding the slope of $d\langle n_{\text{BSA}}\rangle/dC=(1.933\pm0.107)\times10^{-3}$. (c) The errors taken from (b) are compared with the theoretically expected errors for classical and quantum sensing, each of which is obtained by using the linear error propagation method where $\langle \Delta T_{\text{prism}}^{\text{(meas)}}\rangle$ and the derivative of $R_{\text{sp}}$ with respect to $n_{\text{BSA}}$ are used. The comparison clearly shows that the estimation errors are below the SNL, implying that the estimation of the refractive index $n_{\text{BSA}}$ is more precise when quantum resources are employed. []{data-label="RefractiveIndexNoise"}](RefractiveIndexNoise.pdf){width="11cm"}
Because the errors shown in Fig. \[TransmissionNoise\](a) depend on the total transmittance, it is more informative to plot the experimentally measured errors as a function of the total transmittance, as done in Fig. \[TransmissionNoise\](b). The errors are compared with the theoretically expected errors of Eqs. (\[errorC\]) and (\[errorQ\]). The comparison clearly demonstrates not only that the error bars are in good agreement with quantum theory, but also that they are below the SNL. It is also known that when the population distributions follow the Poisson or binomial distribution, the Fisher information is given as $F=1/\sigma^{2}$. Since the sample-mean estimator is locally unbiased, the SD of the histogram equals the square root of the so-called mean-squared error, which is lower bounded by the Cramér-Rao bound [@Cramer46]. The Cramér-Rao inequality is written as $\langle \Delta T_{\text{total}}\rangle\ge (\nu F)^{-1/2}$, where the equality holds only when an optimal estimator is employed. This indicates that the measured SD above can be treated as the ultimate estimation error when photon-number-resolving measurement is considered.
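Spelled out for the single-photon case considered here ($N=1$, binomial statistics), the per-photon Fisher information and the resulting Cramér-Rao bound read $$F_{\text{(Q)}}=\frac{1}{\sigma_{\text{(Q)}}^{2}}=\frac{1}{T_{\text{total}}^{\text{(true)}}\left(1-T_{\text{total}}^{\text{(true)}}\right)},
\qquad
\langle \Delta T_{\text{total}}\rangle \ge \frac{1}{\sqrt{\nu F_{\text{(Q)}}}}=\sqrt{\frac{T_{\text{total}}^{\text{(true)}}\left(1-T_{\text{total}}^{\text{(true)}}\right)}{\nu}},$$ which coincides with $\Delta T_{\text{total}}^{\text{(Q)}}$ of Eq. (\[errorQ\]); in this sense the sample-mean estimator saturates the bound.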
In the second experiment, we fix the incident angle and vary the BSA concentration from $0\%$ to $2\%$ in $0.25\%$ steps. When the concentration $C$ changes, $T_{\text{prism}}$ subsequently changes, from which we infer the refractive index of the BSA. In Fig. \[RefractiveIndexNoise\](a), the relation between the normalized transmittance $T_{\text{prism}}$ and the refractive index $n_{\text{BSA}}$ of the sample at an incident angle $\theta_{\text{in}}=67.5^{\circ}$ is shown (see the solid line) by using Eq. (\[rsp\]), with the parameters found from the fitting used in Fig. \[TransmissionNoise\](a). This fitting represents the calibration of the sensor, where the transmittance is linked to a given refractive index. The transmittance $\langle T_{\text{prism}}\rangle$ for different BSA concentrations is measured and the errors are also obtained from the respective histograms \[see dots and error bars in Fig. \[RefractiveIndexNoise\](a)\]. Due to the fluctuation in the transmittance, one cannot estimate the refractive index with certainty, but rather with a statistical error $\langle \Delta n_{\text{BSA}}\rangle$, clearly shown in the inset of Fig. \[RefractiveIndexNoise\](a). Including those estimation errors, the measured relation between the refractive index $n_{\text{BSA}}$ and the BSA concentration $C$ is displayed in Fig. \[RefractiveIndexNoise\](b), where the error bars are obtained from the histogram of the individual estimation of the refractive index over $\mu$ repetitions. The sensitivity of our sensor is calculated as the slope of the linear function fitted to the experimental data, yielding $d\langle n_{\text{BSA}}\rangle/dC=(1.933\pm0.107)\times10^{-3}$. Note that the measured sensitivity is in good agreement with the value of $1.82\times 10^{-3}$ previously reported at $\lambda=578$ nm [@Barer54]. We also investigate whether the errors in the estimation of the refractive index are below the SNL for the same input power ($N=1$) considered. We compare the estimation error measured as the SD of the histogram of the estimated refractive indices with the errors calculated using the linear error propagation method [@Braunstein94], written as $$\begin{aligned}
\langle\Delta n_{\text{BSA}}^{\text{(LEPM)}}\rangle
=\frac{\langle \Delta T_{\text{prism}}\rangle}{{\left\vert \frac{\partial \langle T_{\text{prism}}\rangle }{\partial n_{\text{BSA}}}\right\vert}}.
\label{LEPM}\end{aligned}$$ This method clearly indicates that the high sensitivity provided by plasmonic features enters in the denominator as the derivative of $\langle T_{\text{prism}}\rangle$ with respect to $n_{\text{BSA}}$, whereas the photon-number statistics of the input state of light used for sensing is responsible for the numerator $\langle \Delta T_{\text{prism}}\rangle$. At the incident angle we have chosen, it is clear that the denominator ${\left\vert \frac{\partial \langle T_{\text{prism}}\rangle }{\partial n_{\text{BSA}}}\right\vert}$ is large when the BSA concentration varies from $0\%$ to $2\%$ \[see the slope in Fig. \[RefractiveIndexNoise\](a)\]. This part is the same for both classical and quantum sensing, whereas the different photon-number statistics lead to a difference in $\langle \Delta T_{\text{prism}}\rangle$ between classical and quantum sensing. In Fig. \[RefractiveIndexNoise\](c), we compare the experimentally measured $\langle\Delta n_{\text{BSA}}\rangle$ with the errors $\langle\Delta n_{\text{BSA}}^{\text{(LEPM)}}\rangle$, in which the Poisson and binomial statistics are considered for classical and quantum sensing, respectively. It is shown that the estimation error of the refractive index using single photons ${\left\vert{1}\right\rangle}$ is lower than that obtainable by a coherent state ${\left\vert{\alpha}\right\rangle}$ of light with ${\left\vert \alpha\right\vert}^{2}=1$, and in line with that expected from quantum theory.
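A minimal numerical sketch of Eq. (\[LEPM\]) is given below; it reuses the hypothetical `R_sp` function from the earlier sketch, the film parameters are the fitted values quoted above, and the prism index and the value of $\langle \Delta T_{\text{prism}}\rangle$ are illustrative assumptions:

```python
# Assumes R_sp() from the earlier sketch; eps_gold and d are the fitted values quoted
# in the text, while n_prism and dT_prism below are illustrative assumptions.
theta, n_prism, eps_gold, d = 67.5, 1.515, -18.2484 + 0.8096j, 57.41

def dn_lepm(n_bsa, dT_prism, h=1e-5):
    # central-difference estimate of the derivative of T_prism (~ R_sp) with respect to n
    dRdn = (R_sp(theta, n_prism, eps_gold, d, n_bsa + h)
            - R_sp(theta, n_prism, eps_gold, d, n_bsa - h)) / (2 * h)
    return dT_prism / abs(dRdn)                # Eq. (LEPM)

print(dn_lepm(n_bsa=1.330, dT_prism=5e-3))     # estimation error in refractive-index units
```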
Discussion
==========
As before, the quantum enhancement depends not just on the normalized transmittance $T_{\text{prism}}$, but rather on the total transmittance $T_{\text{total}}$. Achieving a larger enhancement requires one to increase the total transmittance as much as possible for a given $\langle T_{\text{prism}}\rangle$ purely from the sensing prism setup. When the total transmittance is decomposed into successive transmittances as $T_{\text{total}}=T_{\text{before}}T_{\text{prism}}T_{\text{after}}$, where $T_{\text{before}}$ and $T_{\text{after}}$ denote the transmittances before and after the prism setup, the imperfections of the SPDC source reduce $T_{\text{before}}$, the finite bandwidth of the source also affects $T_{\text{prism}}$, while the detection part is responsible for $T_{\text{after}}$. In our experiment, the APD used has a detection efficiency of $\eta_{\text{d}}\approx 0.5$ at around $800$ nm, but this could be improved by using a single-photon detector with a higher detection efficiency, e.g., as in Ref. [@Slussarenko17]. In the source part, the broadening of the output spectrum of the generated photon pairs affects $T_{\text{prism}}$ as it modulates the signal in the SPR curve. Such broadening can be reduced by using a longer nonlinear crystal than the one used in this experiment, which has a length of $10$ mm. Furthermore, the state of photon pairs produced from the SPDC is expected to be ${\left\vert{11}\right\rangle}$, upon which the heralding scheme works perfectly, but this is not the case in this experiment since the nonlinear crystal used does not have an anti-reflection coating, and so a reflection of the twin photon can occur even when a photon is found in the idler mode, i.e., the heralded signal state is most likely a mixture of ${\left\vert{1}\right\rangle}$ and ${\left\vert{0}\right\rangle}$, thus further decreasing $T_{\text{before}}$. All of these aspects are points of departure for future improvement. Nevertheless, despite all these deficiencies, a reduction of the estimation error has been successfully demonstrated by exploiting quantum resources in our plasmonic sensor.
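To make this concrete with purely illustrative numbers (the product $T_{\text{before}}T_{\text{prism}}$ below is an assumption, not a measured value), the enhancement ${\cal R}=1/\sqrt{1-T_{\text{total}}}$ reacts to the detection efficiency as follows:

```python
# Illustrative only: assume the source and prism contribute T_before * T_prism = 0.6,
# and compare the enhancement R = 1/sqrt(1 - T_total) for two detection efficiencies.
T_source_prism = 0.6
for eta_d in (0.5, 0.9):                     # current APD vs. a higher-efficiency detector
    T_total = T_source_prism * eta_d
    print(eta_d, (1 - T_total) ** -0.5)      # ~1.20 for eta_d=0.5, ~1.47 for eta_d=0.9
```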
Conclusion
==========
We have used single photons, known to be optimal states in single-mode transmission spectroscopy, as an input source for a plasmonic sensor using the ATR setup. A quantum enhancement has been observed in a comparison with a classical benchmark obtainable by using a classical state of light with a photon-number-resolving detector. The amount of relative enhancement will be the same even if a higher photon number state ${\left\vert{N}\right\rangle}$ is used since it only depends on the total transmittance. We have discussed how our sensing setup could be further improved so as to increase the total transmittance, consequently increasing the quantum enhancement.
As future work, exploiting two-mode sensing schemes would also help to further increase the quantum enhancement, where a quantum correlation is expected to play a crucial role in enhancing the sensing performance [@Meda17]. One may also consider slightly different sensing platforms that have also been promising for practical purposes, such as using Bloch surface waves in a periodic dielectric stack [@Toma13], or using a guided mode resonance configuration [@Sahoo17]. Our work indicates that sensing using $N$ photons would be more favorable when the overall transmission is close to unity, i.e., $T_{\text{total}}\approx 1$, resulting in a much higher enhancement. We believe that our experimental results emphasize the usefulness of single photons or $N$ photons in plasmonic sensing. We hope that this work will help open up future directions in plasmonic sensing, e.g., using sub-Poissonian light sources at a higher optical power regime to directly beat state-of-the-art classical plasmonic sensors.
Funding {#funding .unnumbered}
=======
National Research Foundation of Korea (2016R1A2B4014370, 2014R1A2A1A10050117); Institute for Information & communications Technology Promotion (IITP-2016-R0992-16-1017).\
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Jinhyoung Lee for stimulating discussions. This work is supported by the Basic Science Research Program through the National Research Foundation (NRF) of Korea funded by the Ministry of Science, ICT & Future Planning (MISP), the Information Technology Research Center (ITRC) support program supervised by the Institute for Information & communications Technology Promotion (IITP), and the South African National Research Foundation and the National Laser Centre.\
---
abstract: 'The purpose of this paper is to give homological descent theorems for motivic homology theories (for example Suslin homology) and motivic Borel-Moore homology theories (for example higher Chow groups) for certain hypercoverings.'
author:
- 'Thomas <span style="font-variant:small-caps;">Geisser</span>[^1]'
title: Homological descent for motivic homology theories
---
Introduction
============
We consider covariant functors ${{\mathcal F}}$ from the category of schemes separated and of finite type over a fixed noetherian scheme $X$ and proper morphisms to the category of homologically positive complexes of abelian groups. We assume that for every abstract blow-up square, there is a distinguished triangle $$\label{blowup}
{{\mathcal F}}(Z')\to {{\mathcal F}}(X')\oplus {{\mathcal F}}(Z) \to {{\mathcal F}}(X) \to {{\mathcal F}}(Z')[1]$$ in the derived category of abelian groups. Here an abstract blow-up square is a diagram of the form $$\label{blbl}
\begin{CD}
Z'@>>> X'\\
@VVV @V\pi VV\\
Z@>i>> X
\end{CD}$$ with $\pi$ proper, $i$ a closed embedding, and such that $\pi$ induces an isomorphism $X'-Z' \to X-Z$. The morphism $Z\coprod X'\to X$ is then called an abstract blow-up. In particular (taking $X'=\emptyset$), every closed embedding $Z\to X$ defined by a nilpotent ideal induces a quasi-isomorphism ${{\mathcal F}}(Z)\to {{\mathcal F}}(X)$.
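A basic example, recorded only for orientation: if $X$ is an integral curve whose only singularity is one node $x_0$, $X'\to X$ its normalization, $Z=\{x_0\}$ and $Z'\subset X'$ the preimage of the node (two reduced points), then (\[blbl\]) is an abstract blow-up square, and the triangle (\[blowup\]) expresses ${{\mathcal F}}(X)$ in terms of ${{\mathcal F}}$ of the normalization and of the finitely many points involved.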
Recall that a proper cdh-cover is a proper map $p:X'\to X$ such that for every point $x\in X$, there is a point $x'\in X'$ with $p(x')=x$ and $p^*:k(x)\stackrel{\sim}{\to} k(x')$, and a hyperenvelope is an augmented simplicial scheme $a:X_\bullet \to X$ such that for every $n$, the map $X_{n+1} \to ({\operatorname{cosk}}_n X_\bullet)_{n+1}$ is a proper cdh-cover. For a simplicial scheme $X_\bullet$, we simply write ${{\mathcal F}}(X_\bullet)$ for the total complex of the simplicial complex of abelian groups.
\[main1\] For any functor as above, and for any hyperenvelope $a:X_\bullet\to X$, the augmentation map induces a quasi-isomorphism $${{\mathcal F}}(X_\bullet)\to {{\mathcal F}}(X).$$
In characteristic $p$, smooth hyperenvelopes are only known to exist under resolution of singularities. To remove this hypothesis, and to be able to use Gabber’s refinement of de Jong’s theorem on alterations, we consider $l$-hyperenvelopes. A proper ldh-cover is a proper surjection $p:X'\to X$ such that for every point $x\in X$ there is a point $x'\in X'$ with $p(x')=x$ and such that $k(x')$ is a finite extension of degree prime to $l$ of $p^*(k(x))$. An $l$-hyperenvelope is an augmented simplicial scheme $a:X_\bullet\to X$ such that $X_{n+1} \to ({\operatorname{cosk}}_n X_\bullet)_{n+1}$ is a proper ldh-cover for all $n$.
\[main2\] Assume that ${{\mathcal F}}$ is a functor to the category of complexes of ${{{\mathbb Z}}}_{(l)}$-modules, which satisfies in addition to the above the following property:
For any finite flat map $p:X\to Y$ of degree $d$ prime to $l$, there is a functorial pull-back map $p^*:{{\mathcal F}}(Y)\to {{\mathcal F}}(X)$ such that $p_*p^*$ induces multiplication by $d$ on homology, and which is compatible with base-change.
Then for any $l$-hyperenvelope $a:X_\bullet\to X$, the augmentation map induces a quasi-isomorphism $${{\mathcal F}}(X_\bullet)\to {{\mathcal F}}(X).$$
One can see that some hypothesis on the coefficients is necessary by considering the Čech-nerve ${\operatorname{cosk}}_0(L/k)$ of a finite field extension $L/k$ of degree $d$: The map $H_0^S(L,{{{\mathbb Z}}})\to H_0^S(k,{{{\mathbb Z}}})$ is multiplication by $d$ on ${{{\mathbb Z}}}$, hence descent does not hold. The proof of the theorems is along the lines of SGA 4 Vbis §3. Gillet [@gillet] used a similar argument to prove descent for higher Chow groups and $K'$-homology, but, using the notes of B. Conrad [@brian], we give a self-contained proof which in addition does not require the localization property.
As an application, we obtain the following descent theorem for the motivic homology groups $H_{i}(X,A(n))$, motivic Borel-Moore homology groups $H_i^{BM}(X,A(n))$, and higher Chow groups:
\[desccor\] Let $X$ be of finite type over a perfect field, and $A$ be an abelian group. Suppose that $a:X_\bullet\to X$ is a hyperenvelope and resolution of singularities holds, or that $a:X_\bullet\to X$ is a $l$-hyperenvelope and that $A$ is a ${{{\mathbb Z}}}_{(l)}$-module. Then we have spectral sequences $$\begin{aligned}
E^1_{p,q} = H_q(X_p,A(r)) &\Rightarrow H_{p+q}(X,A(r));\\
E^1_{p,q} = H_q^{BM}(X_p,A(r)) &\Rightarrow H_{p+q}^{BM}(X,A(r)).
\end{aligned}$$
For a scheme essentially of finite type over a Dedekind ring, we define higher Chow groups as the Zariski hypercohomology of Bloch’s cycle complex (for schemes over fields, this agrees with the homology of the global sections of the cycle complex).
\[desccorchow\] Let $X$ be of finite type over a field or a Dedekind ring, and $A$ be an abelian group. Suppose that $a:X_\bullet\to X$ is a hyperenvelope, or that $ A$ is a ${{{\mathbb Z}}}_{(l)}$-module and $a:X_\bullet\to X$ an $l$-hyperenvelope. Then we have spectral sequences for any $n\in {{{\mathbb Z}}}$, $$E^1_{p,q} = CH_n(X_p,q,A) \Rightarrow CH_n(X,p+q,A).$$
The analogous result for $K'$-theory can be proven by the same method.
We note that $l$-hyperenvelopes exist by a theorem of Gabber:
\[hyperexist\] For every scheme $U$ of finite type over a perfect field $k$ and any $l\not= {\operatorname{char}}k$, there exists an $l$-hyperenvelope $U_\bullet\to U$ such that $U_\bullet$ is an open simplicial subscheme of a simplicial scheme consisting of smooth projective schemes over $k$.
We thank S.Kelly for helpful comments, discussions and explanation of his work.
Simplicial schemes and hyperenvelopes
=====================================
Let ${\mathcal C}$ be a category with finite limits. A simplicial object in ${\mathcal C}$ is a functor $X_\bullet: \Delta^{\mathrm{{op}}}\to {\mathcal C}$, where $\Delta$ is the simplicial category of finite ordered sets $[i]=\{ 0,\ldots ,i\}$ with non-decreasing maps. As usual, we write $X_n$ for $X_\bullet([n])$ and $\alpha^*:X_j\to X_i$ instead of $X_\bullet(\alpha)$ for $\alpha: [i]\to [j]$. If $\Delta_{\leq n}$ is the full subcategory of $\Delta$ consisting of $[0],\ldots , [n]$, then the restriction functor $i_n^*$ from simplicial objects to restricted simplicial objects, i.e. functors $\Delta^{\mathrm{{op}}}_{\leq n}\to {\mathcal C}$, has a left adjoint skeleton ${\operatorname{sk}}_n$ and a right adjoint coskeleton $(i_n)_*= {\operatorname{cosk}}_n$. We note that in the literature, the notation ${\operatorname{sk}}_n$ appears both as the name of the restriction functor [@deligne] and as the name of its left adjoint (e.g. SGA 4 V 7). By abuse of notation, we also denote the composition $(i_n)_*i_n^*$ by ${\operatorname{cosk}}_n$. In this notation, the adjunction map takes the form $X_\bullet \to {\operatorname{cosk}}_nX_\bullet$. Concretely, $$({\operatorname{cosk}}_nX_\bullet)_m = {\operatornamewithlimits{lim}}_{D_m'}X_\phi$$ where $D_m'$ is the category of non-decreasing maps $\phi:[i]\to [m]$ with $i\leq n$, $X_\phi= X_i$, and morphisms $\alpha: \phi\to \phi'$ the maps $[i]\to [i']$ compatible with the maps to $[m]$. This can also be expressed as follows [@brian Cor.3.10]: Let $D_m$ be the full subcategory of $D_m'$ with objects increasing [*injections*]{} $\phi: [i]\to [m]$ for $i\leq n$ (which implies that morphisms $\alpha: \phi\to \phi'$ are also injective). Then $({\operatorname{cosk}}_nX_\bullet)_m=X_m $ for $m\leq n$ and, for $m>n$, $ ({\operatorname{cosk}}_nX_\bullet)_m$ is the equalizer of the maps $$\label{equali}
s, t: \prod_{\phi \in ob\; D_m}
X_\phi \to \prod_{\alpha \in mor\; D_m}X_\alpha$$ where $X_\alpha= X_\phi$ for $\alpha :\phi\to \phi'$, and on the component indexed by $\alpha$, $s$ is the projection from $X_\phi$, whereas $t$ is the projection from $X_{\phi'}$ composed with $\alpha^*$. In particular, we obtain $${\operatorname{cosk}}_n \stackrel{\sim}{\longrightarrow} {\operatorname{cosk}}_n{\operatorname{cosk}}_m$$ for $n\leq m$. Similarly, for $n\leq m$, the restriction functor $(i^m_n)^*$ from $m$-truncated simplicial sets to $n$-truncated simplicial sets has a left adjoint and a right adjoint $(i^m_n)_*$. If $n\leq m$, then $$\label{simp}
{\operatorname{cosk}}_n \stackrel{\sim}{\longrightarrow} {\operatorname{cosk}}_m{\operatorname{cosk}}_n$$ (this is wrongly stated in SGA 4 V 7.1.2). Indeed, we just apply the following to $i^*_n$: $$(i_n)_* = (i_m)_*(i^m_n)_* = (i_m)_*i_m^* (i_n)_*.$$
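For orientation, the simplest case $n=0$ makes this limit explicit: if ${\mathcal C}$ is the category of schemes over a base $X$ and $X_0$ is a scheme over $X$, viewed as a $0$-truncated object, then $$({\operatorname{cosk}}_0 X_0)_m = X_0\times_X \cdots \times_X X_0 \qquad ((m+1)\ \text{factors}),$$ with face maps the projections and degeneracies the diagonals; this is the Čech nerve of $X_0\to X$, written ${\operatorname{cosk}}_0(X_0/X)$ below.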
A simplicial map $\Delta[1]\times X_\bullet \to Y_\bullet$ can be described as a collection of maps $h_\tau :X_j\to Y_j$ for every $\tau:[j]\to [1]$, such that for $\alpha:[i]\to [j]$ one has $\alpha^*h_\tau = h_{\tau\alpha}\alpha^*$. A simplicial homotopy between two maps $f,g:X_\bullet\to Y_\bullet$ is a simplicial map $\Delta[1]\times X_\bullet \to Y_\bullet$ such that $h_{c_0^j}=f$ and $h_{c_1^j}=g$, where $c_\epsilon^j:[j]\to [1]$ is the constant map to $\epsilon\in [1]$.
\[brianl\][@sga4 Lemma 3.0.2.4] Let $f: A_\bullet\to B_\bullet$ and $g:B_\bullet \to A_\bullet$ be maps of simplicial schemes such that
1. $f_p:A_p\to B_p$ is inverse to $g_p$ for $p<n$;
2. $g_n$ is a section to $f_n: A_n\to B_n$;
3. $A_\bullet \cong {\operatorname{cosk}}_n A_\bullet$, and $B_\bullet \cong {\operatorname{cosk}}_n B_\bullet$.
Then $f$ and $g$ are simplicial homotopy inverse to each other.
It suffices to prove that two simplicial maps $f,g:X_\bullet \to Y_\bullet$ on $n$-truncated schemes which are equal on the $(n-1)$-truncation induce homotopic maps on ${\operatorname{cosk}}_n$. For $i\leq n$ and $\tau:[i]\to [1]$, we define $h_\tau=f_i$ if $\tau=c_0$ and $h_\tau=g_i$ otherwise. It easily follows from the first condition that for $i,j\leq n$ and $\alpha:[i]\to [j]$ we have $\alpha^*h_\tau = h_{\tau\alpha}\alpha^*$. In degrees $p>n$ we define the map $h_\tau$ for $\tau:[p]\to [1]$ by the following diagram $$\begin{CD}
X_p @> h_\tau >>Y_p\\
@| @| \\
{\operatornamewithlimits{lim}}_{\phi\in ob D_p'}X_\phi @>h_{\tau\phi}>> {\operatornamewithlimits{lim}}_{\phi\in ob D_p'}
Y_\phi
\end{CD}$$ where $X_\phi=X_i$ for $\phi: [i]\to [p]$ and $Y_\phi=Y_i$. The maps $h_{\tau\phi}:X_i\to Y_i$ are compatible with the inverse system because for $\alpha:[j]\to [i]$ we have $\alpha^*h_{\tau\phi}= h_{\tau\phi\alpha}\alpha^*:X_i\to Y_j$ by definition of $h$. It is now easy to see that this induces a map of simplicial objects, i.e. that it is compatible with the maps induced by $[q]\to [p]$. On the other hand, taking $h_{c_0^p}$ we recover $({\operatorname{cosk}}_n f)_p$, and taking $h_{c_1^p}$ we recover $({\operatorname{cosk}}_n g)_p$. Indeed, the maps $h_{\tau\phi}$ between inverse systems will be the system of maps $f_i$ and $g_i$, respectively. [$\square$\
]{}
We will apply this in combination with the following lemma:
[@sga4 Lemma 3.0.2.3] Let $f,g:X_\bullet \to Y_\bullet$ be simplicially homotopic maps and ${{\mathcal F}}$ a functor to an abelian category. Then ${{\mathcal F}}(f)$ and ${{\mathcal F}}(g)$ are homotopic maps of the associated chain complexes.
We apply the above mostly to the category of schemes over a fixed scheme $X$. If we want to emphasize the dependence on the base scheme $X$, we write ${\operatorname{cosk}}_n(X_\bullet/X)$.
We often identify simplicial or multi-simplicial objects $A_\bullet$ in an abelian category with its corresponding chain complex without notice.
By [@svbk Lemma 5.8], every proper cdh-cover can be dominated by a composition of abstract blow-ups. Similarly, we have
\[refine\] Every proper ldh-cover $Y\to X$ of a noetherian scheme can be dominated by a composition $S\to T\to X$, where $S\to T$ is a finite flat map of degree prime to $l$, and $T\to X$ a composition of abstract blow-ups.
The proof is in the spirit of [@ichweilII], [@svbk], and [@kellyK Prop.2.4]. We proceed by induction on the dimension of $X$. Base-changing with a proper cdh-cover, we can assume that $X$ is reduced and integral. Let $\eta$ be a point of $Y$ which maps to the generic point of $X$ and such that $[k(\eta):k(X)]$ is prime to $l$. Let $\tilde Y$ be the closure of $\eta$ in $Y$; $\tilde Y$ is generically finite of degree prime to $l$ over $X$. By the flatification theorem [@rg], there is a blow-up $X'\to X$ with center $Z$ of smaller dimension such that the strict transform $Y'\to X'$ of $\tilde Y\to X$ is flat, hence finite flat surjective of degree prime to $l$. The induction hypothesis applied to the base change $Y\times_XZ\to Z$ gives a factorization $S'\to T'\to Z$ whose union with $Y' \to X'\to X$ is a factorization as required. [$\square$\
]{}
Proof of the main theorem
=========================
Čech covers
-----------
We assume that ${{\mathcal F}}$ satisfies the hypothesis of Theorem \[main1\] or \[main2\].
\[base\] Let $f:X_0\to X$ be a proper cdh-covering. Then the augmentation map ${\operatorname{cosk}}_0(X_0/X)\to X$ induces a quasi-isomorphism on ${{\mathcal F}}(-)$. The same holds for proper ldh-coverings if ${{\mathcal F}}$ takes values in complexes of ${{{\mathbb Z}}}_{(l)}$-modules.
We give the proof for the ldh-case and complexes of ${{{\mathbb Z}}}_{(l)}$-modules; the cdh-case follows by omitting part b) of the proof.
a\) Given an abstract blow-up square (\[blbl\]), if the statement of the proposition holds for the pull-back to $Z'$, $X'$ and $Z$, then it also holds for $X$. Indeed, we obtain proper ldh-coverings $Z_0=Z\times_XX_0\to Z$, $Z_0'=Z'\times_XX_0\to Z'$ and $X'_0=X'\times_XX_0\to X'$. Since the functor ${\operatorname{cosk}}_n$ commutes with fiber products, we obtain on each level an abstract blow-up square upon applying the coskeleton functor. Thus we obtain a map of distinguished triangles $$\begin{CD}
{{\mathcal F}}({\operatorname{cosk}}_0(Z'_0/Z')) @>>> {{\mathcal F}}({\operatorname{cosk}}_0(Z_0/Z)) \oplus {{\mathcal F}}({\operatorname{cosk}}_0(X'_0/X')) @>>>
{{\mathcal F}}({\operatorname{cosk}}_0(X_0/X))\\
@VVV @VVV @VVV \\
{{\mathcal F}}(Z') @>>> {{\mathcal F}}(Z) \oplus {{\mathcal F}}(X') @>>>
{{\mathcal F}}(X)
\end{CD}$$ and if two maps are quasi-isomorphisms then so is the third.
b\) If $p: X'\to X$ is finite flat of degree $d$ prime to $l$, and the statement of the theorem holds for the pull-back to $X'$, then it also holds for $X$. Indeed consider the diagram $$\begin{CD}
{{\mathcal F}}({\operatorname{cosk}}_0(X'_0/X')) @>p'_*>>
{{\mathcal F}}({\operatorname{cosk}}_0(X_0/X))\\
@V\cong Vf_*'V @VVf_*V \\
{{\mathcal F}}(X') @>p_*>> {{\mathcal F}}(X).
\end{CD}$$ By hypothesis, $p_*p^*$ induces multiplication by the invertible number $d$, hence $p_*$ is split surjective on homology. Since ${\operatorname{cosk}}_0(X_0/X)\times_XX' \cong {\operatorname{cosk}}_0(X'_0/X')$, the pull-back along $p$ is compatible with all the simplicial structure maps, hence compatibly split by $\frac{1}{d}p_*$ on each level. This implies that $f'_*(p')^*= p^*f_*$, and that $p'_*(p')^*$ induces multiplication by the invertible number $d$ on homology, so that $(p')^*$ is split injective on homology. Finally, since $p_*f_*'= f_*p'_*$ is surjective on homology, so is $f_*$, and since $f_*' (p')^*= p^*f_*$ is injective on homology, so is $f_*$.
c\) If $f$ has a section $s:X\to X_0$, then the proposition follows using the contracting homotopy $ s\times {\operatorname{id}}: X_0^{\times p} \to X_0^{\times p+1} $, where the fiber product is taken over $X$.
d\) In general, by Prop. \[refine\], we can dominate $X_0\to X$ by a sequence $X'\to X$ of abstract blow-ups and finite flat maps of degree prime to $l$. By a) and b) and induction on the dimension, it suffices to prove the theorem after base change to $X'$. But then the map $X_0\times_XX'\to X'$ has a section induced by the map $X'\to X_0$. [$\square$\
]{}
The general case
----------------
We now give the proof of Theorem \[main1\] and Theorem \[main2\] in the spirit of SGA 4 Vbis §3 and [@gillet]. Given the hyperenvelope or $l$-hyperenvelope $X_\bullet\to X$, let $X_\bullet^n= {\operatorname{cosk}}_nX_\bullet$ and consider the sequence $$\label{skeleton}
X_\bullet \stackrel{u_n}{\longrightarrow}
X_\bullet^n \stackrel{v_n}{\longrightarrow} X_\bullet^{n-1}
\stackrel{v_{n-1}}{\longrightarrow} \ldots \stackrel{v_2}{\longrightarrow}
X_\bullet^1 \stackrel{v_1}{\longrightarrow} X_\bullet^0
\stackrel{v_0}{\longrightarrow}X.$$ The map $u_n$ is an isomorphism in degrees $\leq n$, so by boundedness of ${{\mathcal F}}(-)$ it suffices to show that the maps $v_n$ induce quasi-isomorphisms on ${{\mathcal F}}(-)$. The case $v_0$ is Proposition \[base\]. By , $v_{n}$ satisfies the condition of the following proposition for $n\geq 1$:
[@sga4 lemma 3.3.3.2] \[sgadesc\] Let $f: K_\bullet \to L_\bullet $ be a map of simplicial schemes such that
1. $K_p\to L_p$ is an isomorphism for $p<n$;
2. $K_n\to L_n$ is a proper cdh-covering (ldh-covering);
3. $K_\bullet \cong {\operatorname{cosk}}_n K_\bullet$, and $L_\bullet \cong {\operatorname{cosk}}_n L_\bullet$.
Then ${{\mathcal F}}(K_\bullet)\to {{\mathcal F}}(L_\bullet)$ is a quasi-isomorphism.
(see [@brian Theorem 7.17]) Let $[K/L]^p_\bullet$ be the $p$-fold fiber product of $K_\bullet$ over $L_\bullet$. Consider the bisimplicial scheme $Z_{\bullet, \bullet}$ with the $(q+1)$-fold fiber product $Z_{p,q}= K_p\times_{L_p} \cdots \times_{L_p} K_p$ in bidegree $(p,q)$ such that the $p$th column $Z_{ p,\bullet}$ is ${\operatorname{cosk}}_0(K_p/L_p)$, and the $q$th row $Z_{\bullet,q}$ is $[K/L]^{q+1}_\bullet$. In particular, the $p$th column is $K_p$ for $p<n$ by hypothesis. We have the vertical augmentation $\tilde f:Z_{\bullet,\bullet} \to L_\bullet$ induced by $f: Z_{\bullet,0} = K_\bullet\to L_\bullet$. $$\begin{CD}
@VVV @VVV @VVV \\
K_0\times_{L_0}K_0 @<<< K_1\times_{L_1}K_1 @<<< K_2\times_{L_2}K_2@<<< \cdots \\
@VVV @VVV @VVV \\
K_0 @<<< K_1 @<<< K_2 @<<< \cdots \\
@Vf_0VV @Vf_1VV @Vf_2VV \\
L_0 @<<< L_1 @<<< L_2 @<<< \cdots \\
\end{CD}$$
By Proposition \[base\] and the following lemma, we can see column by column that $\tilde f$ induces a quasi-isomorphism on ${{\mathcal F}}(-)$:
[@sga4 Lemma 3.3.3.3] Under the assumptions of Prop. \[sgadesc\], all maps $K_m\to L_m$ are proper cdh-coverings (resp. ldh-coverings).
By (\[equali\]), we have a map of equalizers: $$\begin{CD}
K_m @>>> \prod_{\phi \in ob\; D_m} K_\phi @>s,t>>
\prod_{\alpha \in mor\; D_m}K_\alpha \\
@VVV @VfVV @VfVV \\
L_m@>>> \prod_{\phi \in ob\; D_m} L_\phi @>s,t>>
\prod_{\alpha \in mor\; D_m}L_\alpha,
\end{CD}$$ where $\phi$ runs through the injections $\phi:[i]\to [m]$, $i\leq n$, in $D_m$, and $K_\phi:= K_i$, $L_\phi:= L_i$. It suffices to show that the fiber product of the left square $P= L_m\times_{\prod L_\phi} \prod K_\phi$ has the universal property of the equalizer, because then $K_m\to L_m$ is a base-change of a proper cdh-covering (ldh-covering), hence is itself a proper cdh-cover (ldh-covering).
Given a map $u:T\to \prod_{\phi \in D_m} K_\phi $ with $su=tu$, we have to show that there is a unique map $T\to P$ such that composition with the projection $P\to \prod_{\phi \in D_m} K_\phi$ is $u$. By definition of $L_m$ and $P$, it suffices to show that $sfu=tfu$, or alternatively that $fsu=ftu$, and we can do this factor by factor. Given $\alpha:[i]\to [i']$, there are two cases: If $i<n$, then the two maps agree because $K_i\to L_i$ is an isomorphism by hypothesis. If $i=n$, then also $i'=n$ and $\alpha$ is the identity, hence $s=t$ trivially. [$\square$\
]{}
We apply Lemma \[brianl\] with $f$ any of the face (i.e. projection) maps $[K/L]^{p+1}_\bullet\to [K/L]^p_\bullet$, and $g$ a degeneracy (i.e. diagonal) map which is a section of this face map. Note that the hypotheses $K_\bullet \cong {\operatorname{cosk}}_n K_\bullet$ and $L_\bullet \cong {\operatorname{cosk}}_n L_\bullet$ are preserved under fiber products (since ${\operatorname{cosk}}_n$ is a right adjoint), hence $[K/L]^p_\bullet\cong {\operatorname{cosk}}_n [K/L]^p_\bullet$. Lemma \[brianl\] implies that all face maps $[K/L]^{p+1}_\bullet\to [K/L]^p_\bullet$ induce quasi-isomorphisms on ${{\mathcal F}}(-)$ which are equal on homology; hence, taking the alternating sum of the projection maps, we see that the maps between rows of $Z_{\bullet,\bullet}$ induce alternately the zero map on homology and quasi-isomorphisms on ${{\mathcal F}}(-)$. This implies that the inclusion $K_\bullet \to Z_{\bullet,\bullet}$ as the $0$th row induces a quasi-isomorphism on ${{\mathcal F}}(-)$, hence so does the composition $$f:K_\bullet \to Z_{\bullet,\bullet}\stackrel{\tilde f}{\to}L_\bullet.$$ [$\square$\
]{}
Applications
============
Existence of $l$-hyperenvelopes
-------------------------------
For $X$ a noetherian scheme, and $l$ a prime number, a morphism $h : X' \to X$ is called an $l$-alteration if $h$ is proper, surjective, generically finite, sends each maximal point to a maximal point, and the degrees of the residual extensions $k(x')/k(x)$ over each maximal point $x$ of $X$ are prime to $l$.
[@gabber X Theorem 2.1] (Gabber). Let $k$ be a field, $l$ a prime number different from the characteristic of $k$, $X$ separated and finite type over $k$. Then there exists a finite extension $k'$ of $k$ of degree prime to $l$, and a projective $l$-alteration $h : \tilde X \to X$ above ${\operatorname{Spec}}k' \to {\operatorname{Spec}}k$, such that $\tilde X$ is smooth and quasi-projective over $k'$.
\[canfindcover\] In the situation of the theorem, there is a proper ldh-cover $\tilde X\to X$ with $\tilde X$ regular, quasi-projective, separated and of finite type over $k$.
Covering $X$ by its reduced irreducible components, we can assume that $X$ is integral, and proceed by noetherian induction. Let $\tilde X$ be as in the theorem, and take $X'$ to be the closure of a point of $\tilde X$ mapping to the generic point of $X$ such that the degree of the residue field extension is finite and prime to $l$. Let $Z$ be the closed subscheme where $X'\to X$ is not flat; then by the induction hypothesis we can find a proper ldh-cover $Z'\to Z$ with $Z'$ regular, and $Z'\coprod X'\to X$ is the required ldh-covering. [$\square$\
]{}
To prove Theorem \[hyperexist\], we apply the method of [@deligne 6.2.5] to the Corollary.
Motivic theories
----------------
Let $k$ be a perfect field. For a presheaf with transfers ${{{\mathcal P}}}$ on ${\text{\rm Sm}}/k$, recall that $\underbar C_*({{{\mathcal P}}})$ is the complex of presheaves with transfers given by $\underbar C_i({{{\mathcal P}}})(U)={{{\mathcal P}}}(U\times \Delta^i)$ and boundary maps given by alternating sum of pull-backs along embeddings of faces. The complex of abelian groups $\underbar C_*({{{\mathcal P}}})(k)$ is denoted by $C_*({{{\mathcal P}}})$.
Recall that the cdh-topology is the coarsest Grothendieck topology generated by Nisnevich covers and proper cdh-covers, and the ldh-topology is the coarsest Grothendieck topology generated by cdh-covers and finite flat maps of degree prime to $l$.
\[fvtheorem\] 1) Let ${{{\mathcal P}}}$ be a presheaf with transfers such that ${{{\mathcal P}}}_{cdh}=0$. Then under resolution of singularities, the complex of Nisnevich sheaves with transfers $\underbar C_*({{{\mathcal P}}})_{Nis}$ is acyclic.
2\) If ${{{\mathcal P}}}$ is a presheaf of ${{{\mathbb Z}}}[\frac{1}{p}]$-modules with transfers such that ${{{\mathcal P}}}_{ldh}=0$, then $\underbar C_*({{{\mathcal P}}})_{Nis}$ is acyclic.
This is [@fv Theorem 5.5(2)], and its extension by Kelly [@kellythesis Thm. 5.3.1]. In loc.cit. ${{{\mathcal P}}}$ is supposed to be a presheaf with transfers on all schemes, but replacing ${{{\mathcal P}}}$ by its left Kan extension to all schemes does not change ${{{\mathcal P}}}$ on smooth schemes. Moreover, loc.cit. assumes that ${{{\mathcal P}}}_{cdh}=0$, but in fact for a sheaf of ${{{\mathbb Z}}}[\frac{1}{p}]$-modules, ${{{\mathcal P}}}_{ldh} = 0$ implies that ${{{\mathcal P}}}_{cdh} = 0$. [$\square$\
]{}
Recall that a morphism $f:X\to Y$ is called equidimensional of relative dimension $r$ if it is of finite type, if every irreducible component of $X$ dominates an irreducible component of $Y$, and if $\dim_x(p^{-1}p(x))=r$ for every point $x\in X$. A morphism is equidimensional if and only if it can be locally factored as $X\to \mathbb A^r_Y\to Y$, with the first map quasi-finite and dominant on each irreducible component. A useful criterion is that a flat morphism of finite type is equidimensional of dimension $r$ if all irreducible components of all generic fibers have dimension $r$. For any scheme $X$ over $k$, let $z_{equi}(X,r)$ be the presheaf with transfers on ${\text{\rm Sm}}/k$ which associates to $U$ the free abelian group on those closed integral subschemes of $U\times X$ which are equidimensional of relative dimension $r$ over $U$, and $c_{equi}(X,0)$ the subpresheaf with transfers of $z_{equi}(X,0)$ generated by those subschemes which are finite over $U$.
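Two standard examples of the equidimensionality condition, recorded only for orientation: the projection $\mathbb A^r_Y\to Y$ is equidimensional of relative dimension $r$, whereas the blow-up of a closed point on a smooth surface is not equidimensional of relative dimension $0$, since its generic fiber is a point while the fiber over the blown-up point has dimension $1$.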
\[svlemma\] For any scheme $W$, the functors $C_*(z_{equi}(-\times W,r))$ and $C_*(c_{equi}(-\times W,0))$ satisfy the hypothesis of Theorems \[main1\] and \[main2\].
By the existence of transfers, a finite flat map $f:X\to Y$ induces a pull-back map such that the composition with push-forward is multiplication by the degree, and which is compatible with base change. To show the exactness of the triangle resulting from an abstract blow-up, it suffices by Theorem \[fvtheorem\] to show that the functors ${{{\mathcal P}}}=z_{equi}(-\times W,r)$ and ${{{\mathcal P}}}=c_{equi}(-\times W,0)$ send abstract blow-up squares to short exact sequences of ldh-sheaves (or cdh-sheaves under resolution of singularities). Replacing $X,X',Z,Z'$ by its product with $W$, we can drop $W$ from the notation. Only the surjectivity of ${{{\mathcal P}}}(X')\oplus {{{\mathcal P}}}(Z)\to {{{\mathcal P}}}(X)$ is difficult.
We repeat the platification argument of Suslin-Voevodsky [@svbk Theorem 4.7] and Friedlander-Voevodsky [@fv Theorem 5.11]. Given a section $S\in {{{\mathcal P}}}(X)(U)$, we need to find a cdh-covering (ldh-covering) $V\to U$ such that $S|_V$ is in the image of ${{{\mathcal P}}}(X')(V)\oplus {{{\mathcal P}}}(Z)(V)$. This is clear if $S\subseteq U\times Z$. Otherwise let $T$ be the closure of $S\cap U\times (X-Z) \subseteq U\times (X-Z)\cong U\times (X'-X'\times_XZ)$ in $U\times X'$.
Then $T$ may not be equidimensional, but by the flatification theorem [@rg], we can find a blow-up $U'\to U$ such that the proper transform $T'$ of $T$ in $U'\times X'$ is flat over $U'$. By Corollary \[canfindcover\], we can find a cdh-cover (respectively ldh-cover) $V\to U'$ with $V$ smooth, and let $T^*$ be the pull-back of $T'$ to $V$. Then $T^*\to V$ is flat, hence equidimensional. [$\square$\
]{}
We define motivic Borel-Moore homology to be $$H_i^{BM}(X,A(n)) =
\begin{cases}
H_{i-2n}C_*(z_{equi}(X,n)\otimes A)& n\geq 0;\\
H_{i-2n}C_*(z_{equi}(X\times \mathbb A^{-n},0)\otimes A) &n<0.
\end{cases}$$ Under resolution of singularities, this agrees with the definition of Friedlander-Voevodsky [@fv §9] by [@fv Thm.5.5(1)]. Recall that motivic homology is defined by $$H_i(X,A(n)) =
\begin{cases}
H^{2n-i}_{\{0\}}
(\mathbb A^n,\underline C_*(c_{equi}(X,0)\otimes A))& n\geq 0;\\
H_{i-2n-1}C_*(\frac{c_{equi}(X\times (\mathbb A^{-n}-\{0\}),0)}
{c_{equi}(X\times \{1\},0)} \otimes A) &n<0.
\end{cases}$$
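For orientation (this special case is not needed below): for $n=0$ the first formula reduces to Suslin homology, since $\mathbb A^0={\operatorname{Spec}}k$ and cohomology with support in $\{0\}$ is then ordinary cohomology, so that $$H_i(X,A(0)) = H_i\, C_*(c_{equi}(X,0)\otimes A) = H_i^S(X,A).$$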
[*Proof of theorem*]{} \[desccor\]: For Borel-Moore homology, the theorem follows from Prop. \[svlemma\]. For homology in negative degrees, the theorem follows because the inclusion $c_{equi}(X\times \{1\},0)\to c_{equi}(X\times (\mathbb A^{-n}-\{0\}),0)$ is canonically split by the structure map, so that the conclusion of Proposition \[svlemma\] applies to this situation.
Finally, motivic homology in positive degrees is the homology of the cone of the split injection $$C_*(c_{equi}(X,0))\otimes A\to R\Gamma(\mathbb A^n,
\underline C_*(c_{equi}(X,0)\otimes A)).$$ Consider the Godement resolution for the Nisnevich topology of the affine space. We can bound it, because the cohomological dimension of $\mathbb A^n$ is $n$. Furthermore, by [@mvw Example 6.20], the terms of this resolution are still presheaves with transfers. Since the Godement resolution is functorial and sends short exact sequences of sheaves to short exact sequences of sheaves, it inherits the hypothesis of Theorems \[main1\] and \[main2\] from $c_{equi}(X,0)$. [$\square$\
]{}
[*Proof of theorem*]{} \[desccorchow\]: This follows easily from the localization property of higher Chow groups [@bloch][@levine].
Other applications
------------------
In [@ichsuslin] we show that Parshin’s conjecture implies that for a smooth variety over a finite field, higher Suslin homology vanishes rationally, i.e. $H_p^S(Y,{{{\mathbb Q}}})=0$ for smooth $Y$ and $p>0$. This implies that for every $l$-hyperenvelope consisting of smooth schemes there is an isomorphism $$H_p^S(X,{{{\mathbb Q}}}) \cong H_p(H_0^S(X_\bullet,{{{\mathbb Q}}})).$$
In [@ichrojtman], we use Corollary \[desccor\] to show that for a normal connected variety $X$ over an algebraically closed field, the Albanese map $$alb_X: H_0(X,{{{\mathbb Z}}})^0\to {\operatorname{Alb}}_X(k)$$ is an isomorphism on torsion groups away from the characteristic, and at the characteristic under resolution of singularities.
[99]{} , Algebraic cycles and higher $K$-theory. Adv. in Math. 61 (1986), no. 3, 267–304.
, Cohomological descent, Preprint 2003.
, Théorie de Hodge III, Inst. Hautes Études Sci. Publ. Math. No. 44 (1974), 5-77.
, Bivariant cycle homology. Cycles, transfers, and motivic homology theories, 138-187, Ann. of Math. Stud., 143, Princeton Univ. Press, Princeton, NJ, 2000.
, Arithmetic cohomology over finite fields and special values of $\zeta$-functions. Duke Math. J. 133 (2006), no. 1, 27–57.
, Rojtman’s theorem for normal schemes, Preprint 2014.
, On Suslin’s singular homology and cohomology. Doc. Math. (2010), Extra volume: Andrei A. Suslin sixtieth birthday, 223-249.
, Homological descent for the K-theory of coherent sheaves. Algebraic K-theory, number theory, geometry and analysis (Bielefeld, 1982), 80-103, Lecture Notes in Math., 1046, Springer, Berlin, 1984.
, Travaux de Gabber sur l’uniformisation locale et la cohomologie étale des schémas quasi-excellents. Séminaire à l’École polytechnique 2006-2008.
, Triangulated categories of motives in positive characteristics, arXiv:1305.5349.
, Vanishing of negative $K$-theory in positive characteristic, arXiv:1112.5206v4.
, Techniques of localization in the theory of algebraic cycles. J. Algebraic Geom. 10 (2001), no. 2, 299–363.
, Lecture notes on motivic cohomology. Clay Mathematics Monographs, 2. American Mathematical Society, Providence, RI; Clay Mathematics Institute, Cambridge, MA, 2006. ISBN: 978-0-8218-3847-1; 0-8218-3847-4
, Critères de platitude et de projectivité. Techniques de “platification” d’un module. Invent. Math. 13 (1971), 1-89.
, Techniques de descente cohomologique, in: Artin, Michael, Grothendieck, Alexandre, Verdier, J.-L., Théorie des topos et cohomologie étale des schémas. Lecture Notes in Mathematics, Vol. 270. Springer-Verlag, Berlin-New York, 1972.
, Singular homology of abstract algebraic varieties. Invent. Math. 123 (1996), no. 1, 61–94.
, Bloch-Kato conjecture and motivic cohomology with finite coefficients. The arithmetic and geometry of algebraic cycles (Banff, AB, 1998), 117-189, NATO Sci. Ser. C Math. Phys. Sci., 548, Kluwer Acad. Publ., Dordrecht, 2000.
[^1]: Graduate School of Mathematics, Nagoya University, Furucho, Nagoya 464-8602, Japan. e-mail: `[email protected]`
---
abstract: 'We define and study $C^1-$solutions of the Aronsson equation (AE), a second order quasilinear equation. We show that such super/subsolutions make the Hamiltonian monotone on the trajectories of the closed loop Hamiltonian dynamics. We give a short, general proof that $C^1-$solutions are absolutely minimizing functions. We discuss how $C^1-$supersolutions of (AE) become special Lyapunov functions of symmetric control systems, and allow one to find continuous feedbacks driving the system to a target in finite time, except on a singular manifold. A consequence is a simple proof that the corresponding minimum time function is locally Lipschitz continuous away from the singular manifold, although classical results only guarantee Hölder continuity unless appropriate conditions hold. We provide two examples, for Hörmander and Grushin families of vector fields, where we construct $C^1-$solutions (even classical) explicitly.'
author:
- |
Pierpaolo Soravia[^1]\
Dipartimento di Matematica\
Università di Padova, via Trieste 63, 35121 Padova, Italy
title: 'The Aronsson equation, Lyapunov functions and local Lipschitz regularity of the minimum time function'
---
[*2010 Mathematics Subject Classification:* 49L20; Secondary 35F21, 35D40, 93B05.]{}
Introduction
============
In this note we want to describe a possible new, non standard way of using the Aronsson equation, a second order partial differential equation, to obtain controllability properties of deterministic control systems. We investigate a symmetric control system $$\label{eqsystem}
\left\{
\begin{array}{ll}
\dot x_t=f(x_t,a_t),\\
x_0=x_o\in\Omega,
\end{array}\right.$$ where $-f(x,A)\subset f(x,A)$, $A$ is a nonempty and compact subset of a metric space. We define the Hamiltonian $$H(x,p)=\max_{a\in A}\{-f(x,a)\cdot p\},$$ which is therefore nonnegative and positively one-homogeneous in the adjoint variable, and we want to drive the system to a target, which for the moment we take to be the origin. We are interested in the relationship of (\[eqsystem\]) with the Aronsson equation (AE) $$-\nabla\left(H(x,\nabla U(x))\right)\cdot H_p(x,\nabla U(x))=0,$$ which is a quasilinear degenerate elliptic equation. Ideally, if everything is smooth, when we are given a classical solution $U$ of (AE) and we consider a trajectory $x_t$ of the Hamiltonian dynamics $$\dot x_t=-H_p(x_t,\nabla U(x_t)),$$ which is a closed loop dynamics for the original control system, we find out that (AE) can be rewritten as $$\frac d{dt}H(x_t,\nabla U(x_t))=0.$$ Therefore $H(x_t,\nabla U(x_t))$ is constant. This is a very desirable property of the control system, since it allows us to use $U$ as a control Lyapunov function, despite the presence of a possibly nonempty singular set $$\label{eqsing}{\mathcal H}=\{x:H(x,\nabla U(x))=0\},$$ which possibly contains the origin. Indeed if $x_o$ is outside the singular set and $U$ has a unique global minimum at the origin, then the trajectory of the Hamiltonian dynamics will reach the origin in finite time.
In general, however, several steps of this path break down. First of all, (AE) does not have $C^2$ classical solutions in general. Even in the case where $f=a$, $A=B_1(0)\subset{\mathbb R}^n$ is the closed unit ball, $H(p)=|p|$ and (AE) becomes the well known infinity Laplace equation $$-\Delta_\infty U(x)=-D^2U(x)\nabla U(x)\cdot \nabla U(x)=0,$$ solutions are not classical, although known regularity results show that they are $C^{1,\alpha}$. Therefore solutions of (AE) have to be meant in some weak sense, as viscosity solutions. For generic viscosity solutions, we can find counterexamples to the fact that the Hamiltonian is constant along trajectories of the Hamiltonian dynamics, as we show later. For an introduction to the theory of viscosity solutions in optimal control, we refer the reader to the book by Bardi, Capuzzo-Dolcetta [@bcd].
In this paper we will first characterize when, for a given super or subsolution of (AE), the Hamiltonian is monotone on the trajectories of the Hamiltonian dynamics (i.e. satisfies the [*monotonicity property*]{}). To this end we introduce the notion of $C^1-$super/subsolution and prove that such functions satisfy the monotonicity property of the Hamiltonian. We emphasize the fact that not all viscosity solutions that are $C^1$ functions are $C^1-$solutions according to our definition. Moreover, as a side result, we also show that our $C^1-$solutions are absolutely minimizing functions, i.e. local minimizers of the functional that computes the $L^\infty$ norm of the Hamiltonian. Being absolutely minimizing is known to be equivalent to being a viscosity solution of (AE), at least when $H$ is coercive and possibly in some Carnot-Caratheodory spaces, but this equivalence is not completely understood in general. Therefore the notion of $C^1-$solution appears to be an appropriate one.
We then prove that if (AE) admits a $C^1-$supersolution $U$ having a unique minimum at the origin, then our control system can be driven to the origin in finite time with a continuous feedback, starting at every initial point outside the singular set $\mathcal H$. If moreover $U$ satisfies an appropriate decay in a neighborhood of the origin, only at points where the Hamiltonian $H$ stays away from zero, then we show that the corresponding minimum time function is locally Lipschitz continuous outside the singular set, despite the fact that, even when the origin is small time locally attainable, the minimum time function can in general only be proved to be Hölder continuous in its domain under appropriate conditions. Thus the loss of regularity of the minimum time function is concentrated at points of the singular set. Finally, for two explicit well known examples, where the system has a Hörmander-type or a Grushin family of vector fields, we exhibit two explicit, previously unknown classical solutions of (AE), namely their gauge functions, providing examples of smooth absolute minimizers for such systems and a proof that their minimum time function is locally Lipschitz continuous outside the singular set. We remark that neither in the general statement nor in the examples is the family of vector fields supposed to span the whole space at the origin; therefore the classical sufficient attainability condition ensuring that the minimum time function is locally Lipschitz continuous will not be satisfied in general. Indeed, in the explicit examples that we illustrate in Section 5, the minimum time function is known to be locally only $1/2-$Hölder continuous in its domain.
Small time local attainability and the regularity of the minimum time function are important subjects in optimal control. Classical results by Petrov [@pe] show sufficient conditions for attainability at a single point by requiring that the convex hull of the vector fields at the point contains the origin in its interior. This result was later improved by Liverovskii [@li], who augmented the vector fields with the family of their Lie brackets; see also the paper by the author [@so7]. More recently, such results have been extended in several directions in the work of Krastanov and Quincampoix [@kr3] and Marigonda, Rigo and Le [@ma; @ma3; @ma4]. Our regularity results rather go in the direction of those contained in two recent papers by Albano, Cannarsa and Scarinci [@alcasc; @alcasc1], where they show, by completely different methods, that if a family of smooth vector fields satisfies the Hörmander condition, then the set where the local Lipschitz continuity of the minimum time function fails is the union of singular trajectories, and that it is analytic except on a subset of null measure. Our approach is instead more direct and comes as a consequence of constructing Lyapunov functions as $C^1-$supersolutions of the Aronsson equation. We finally mention the paper by Motta and Rampazzo [@mr], where the authors study higher order Hamiltonians obtained by adding iterated Lie brackets as additional vector fields, in order to prove global asymptotic controllability to a target. While we do not study asymptotic controllability in this paper, their idea of constructing a higher order Hamiltonian may be seen as complementary to ours, which instead uses the equation (AE).
Equation (AE) was introduced by Aronsson [@ar0], as the Euler-Lagrange equation for absolute minimizers, i.e. local minimizers of $L^\infty$ functionals, typically the $L^\infty$ norm of the gradient. There has been a lot of work in more recent years to develop that theory using viscosity solutions, by authors like Jensen [@jen1], Barron-Jensen-Wang [@bjw], Juutinen [@ju], Crandall [@cr]. For the main results on the infinity Laplace equation, we refer the reader to the paper [@acj] and the references therein. For results on equation (AE), especially in the $x$-dependent case, we also refer to the paper by the author [@soae] and the references therein; see also [@so2; @so6]. In particular we mention that equation (AE) has been studied in Carnot groups by Bieske-Capogna [@bica], by Bieske [@bie] in the Grushin space, and by Wang [@wa] in the case of $C^2$ and homogeneous Hamiltonians with a Carnot-Caratheodory structure.
The structure of the paper is as follows. In Section 2 we introduce the problem and give a motivating example. In Section 3 we introduce $C^1-$solutions of (AE) and show some of their important properties: monotonicity of the Hamiltonian along the Hamiltonian dynamics, an equivalent definition, and the fact that they are absolutely minimizing functions. In Section 4, we use $C^1-$solutions of (AE) as Lyapunov functions for nonlinear control systems and obtain local Lipschitz regularity of the minimum time function away from the singular set. In Section 5 we provide two new examples of explicit classical solutions of (AE) in two important cases of nonlinear control systems where the results of Section 4 apply.
Control theory and the Aronsson equation
========================================
As we mentioned in the introduction, throughout the paper we consider the controlled dynamical system (\[eqsystem\]) where $\Omega\subset{\mathbb R}^n$ is open, $A$ is a nonempty, compact subset of some metric space, $a_\cdot \in L^\infty((0,+\infty);A)$ and $f:\Omega\times A\to{\mathbb R}^n$ is a continuous function, continuously differentiable and uniformly Lipschitz continuous in the first group of variables, i.e. $$|f(x^1,a)-f(x^2,a)|\leq L|x^1-x^2|\quad\mbox{for all }x^1,x^2\in\Omega,\;a\in A.$$ We suppose moreover that $f(x,A)$ is convex for every $x\in\Omega$ and that the system is symmetric, i.e. $-f(x,A)\subset f(x,A)$ for all $x\in {\mathbb R}^n$ and define the Hamiltonian $$\label{eqhamiltonian}
H(x,p)=\max_{a\in A}\{-f(x,a)\cdot p\}\in C(\Omega\times {\mathbb R}^n),$$ so that $H\geq0$ and $H(x,-p)=H(x,p)$ by symmetry. Notice that $H$ is at least locally Lipschitz continuous, and $H(x,\cdot)$ is positively homogeneous of degree one by compactness of $A$. We will also assume that $H$ is continuously differentiable on $\{(x,p)\in\Omega\times {\mathbb R}^{n}:H(x,p)>0\}$.
The case we are mostly interested in the following sections is when $$\label{eqsigma}
f(x,a)=\sigma(x)a, \quad\sigma:{\mathbb R}^n\to M_{n\times m}$$ where $M_{n\times m}$ is the set of $n\times m$ matrices and $A=B_1(0)\subset{\mathbb R}^m$ is the closed unit ball. In this case $H(x,p)=|p\sigma(x)|$.
Given a smooth function $U\in C^1(\Omega)$ and $x_o\in \Omega\backslash{\mathcal H}$, where $\mathcal H$ is the singular set as in (\[eqsing\]), we consider the hamiltonian dynamics $$\label{eqhd}
\left\{\begin{array}{ll}
\dot x_t=-H_p(x_t,\nabla U(x_t)),\\
x_0=x_o\in\Omega,
\end{array}\right.$$ where $H_p$ indicates the gradient of the Hamiltonian $H=H(x,p)$ with respect to the group of [*adjoint*]{} variables $p$.
When the Hamiltonian $H(x,\nabla U(x))$ is differentiable, notice that for $a_{x}\in A$ such that $-f(x,a_x)\cdot \nabla U(x)=H(x,\nabla U(x))$ we have that $$-H_p(x,\nabla U(x))=f(x,a_x).$$ Therefore trajectories of (\[eqhd\]) are indeed trajectories of the system (\[eqsystem\]) and moreover (\[eqhd\]) is a closed loop system of (\[eqsystem\]) with feedback $a_x$. If in particular $f(x,a)$ is as in (\[eqsigma\]), then, for $|p\sigma(x)|\neq0$, $$H(x,p)=|p\sigma(x)|, \quad H_p(x,p)=\sigma(x)\frac{^t\sigma(x) \;^tp}{H(x,p)}, \quad a_x=-\frac{^t\sigma(x)\nabla U(x)}{H(x,\nabla U(x))}\in B_1(0).$$ Therefore in this case the feedback control is at least continuous on $\Omega\backslash{\mathcal H}$ and the closed loop system always has a well defined local solution starting out on that set.
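As a quick illustration of the feedback just described, the following minimal sketch (our own illustration, not taken from the paper, with the hypothetical choice $\sigma(x)=I$ and $U(x)=|x|$, so that $H(x,\nabla U(x))\equiv 1$ and $a_x=-x/|x|$) integrates the closed loop system with explicit Euler steps:

```python
import numpy as np

def feedback(x, sigma, gradU):
    # a_x = - sigma(x)^T grad U(x) / |sigma(x)^T grad U(x)|, defined where H > 0
    w = sigma(x).T @ gradU(x)
    return -w / np.linalg.norm(w)

# hypothetical data for illustration only: sigma = identity, U(x) = |x|
sigma = lambda x: np.eye(len(x))
gradU = lambda x: x / np.linalg.norm(x)

x, t, dt = np.array([3.0, 4.0]), 0.0, 1e-3
while np.linalg.norm(x) > 1e-2:
    x = x + dt * sigma(x) @ feedback(x, sigma, gradU)   # dx/dt = sigma(x) a_x = -H_p
    t += dt
print(t)   # ~ 5 = |x_0| = U(x_0), since H(x, grad U(x)) = 1 along the trajectory
```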
We want to discuss when $H(x_t,\nabla U(x_t))$ is monotone on a trajectory $x_t$ of (\[eqhd\]). If we can compute derivatives, then we need to discuss the sign of $$\frac d{dt}H(x_t,\nabla U(x_t))=\nabla(H(x_t,\nabla U(x_t)))\cdot \dot x_t=-\nabla(H(x_t,\nabla U(x_t)))\cdot H_p(x_t,\nabla U(x_t)).$$ Therefore a sufficient condition is that $U\in C^2(\Omega\backslash{\mathcal H})$ is a super or subsolution of the following pde $$\label{eqae}
-\nabla(H(x,\nabla U(x)))\cdot H_p(x,\nabla U(x))=0,\quad x\in \Omega\backslash{\mathcal H},$$ which is known as the Aronsson equation in the literature. Notice that $H(x_t,\nabla U(x_t))$ is actually constant if $U$ is a classical solution of (\[eqae\]). The above computation is correct only under the assumed regularity of $U$; unfortunately, if such regularity is not satisfied and we interpret super/subsolutions of (\[eqae\]) as viscosity solutions, this is no longer true in general, as the following example shows. Notice that if $H$ is not differentiable at a point $(x_o,\nabla U(x_o))$ where $H(x_o,\nabla U(x_o))=0$, then $H_p(x_o,\nabla U(x_o))$ is multivalued, precisely the closed convex subgradient of the Lipschitz function $H(x_o,\cdot)$ computed at $\nabla U(x_o)$, and it contains the origin by the symmetry of the system. Therefore the dynamics (\[eqhd\]) has at least the constant solution also in this case. In some statements below it will sometimes be more convenient to look at (AE) for $H^2$ in order to gain regularity at points where $H$ vanishes.
\[exinfinity\] In the plane, suppose that $H^2(x,y,p_x,p_y)=(|p_x|^2+|p_y|^2)/2$, hence it is smooth and independent of the state variables. In this case (AE) becomes the well known infinity Laplace equation $$-\Delta_\infty U(x)=-D^2U(x)\nabla U(x)\cdot\nabla U(x)=0.$$ It is easy to check that a viscosity solution of the equation is $U(x,y)=|x|^{4/3}-|y|^{4/3}$. The function $U\in C^{1,1/3}({\mathbb R}^2)\backslash C^2({\mathbb R}^2)$. Among solutions of the Hamiltonian dynamics $(\dot x_t,\dot y_t)=-\nabla U(x_t,y_t)$, we can find the following two trajectories $$(x^{(1)}_t,y^{(1)}_t)=\left(\left(1-\frac89t\right)^{3/2},0\right),\quad(x^{(2)}_t,y^{(2)}_t)=\left(0,\left(1+\frac89t\right)^{3/2}\right),$$ defined in a neighborhood of $t=0$. Clearly the Hamiltonian along the two trajectories is $$H(\nabla U(x_t^{(1)},y_t^{(1)}))=\frac{2\sqrt{2}}3\sqrt{1-\frac89t},\quad H(\nabla U(x_t^{(2)},y_t^{(2)}))=\frac{2\sqrt{2}}3\sqrt{1+\frac89t},$$ it is strictly decreasing in the first case and strictly increasing in the second, but it is never constant. Therefore the remark that we made at the beginning fails in this example. In the next section we are going to understand the reason.
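A quick numerical check of the example (a short sketch of our own, based only on the formulas displayed above) evaluates $H=|\nabla U|/\sqrt2$ along the two trajectories and confirms that it decreases along the first one and increases along the second:

```python
import numpy as np

def gradU(x, y):                    # U(x, y) = |x|^(4/3) - |y|^(4/3)
    return np.array([(4/3) * np.sign(x) * abs(x)**(1/3),
                     -(4/3) * np.sign(y) * abs(y)**(1/3)])

def H(p):                           # H(p) = sqrt((p_x^2 + p_y^2)/2)
    return np.linalg.norm(p) / np.sqrt(2)

for t in (0.0, 0.25, 0.5):
    x1, y1 = (1 - 8*t/9)**1.5, 0.0  # first trajectory
    x2, y2 = 0.0, (1 + 8*t/9)**1.5  # second trajectory
    print(t, H(gradU(x1, y1)), H(gradU(x2, y2)))
# first column of H values: ~0.943, 0.832, 0.703 (decreasing);
# second column: ~0.943, 1.042, 1.133 (increasing), matching the closed formulas above
```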
Monotonicity of the Hamiltonian along the Hamiltonian dynamics
==============================================================
Throughout this section, we consider a Hamiltonian not necessarily with the structure as in (\[eqhamiltonian\]) but satisfying the following: $$\tag{H1}\label{eqh1}
\begin{array}{c}
\;H:\Omega\times{\mathbb R}^n\to{\mathbb R}\hbox{ is continuous and }H(x,-p)=H(x,p),\\
H_p(x,p)\hbox{ exists and is continuous for all }
(x,p)\in \Omega\times{\mathbb R}^n\hbox{ if }H(x,p)>0.
\end{array}$$ We will also refer to the following property: $$\tag{H2}\label{eqh2}
H(x,\cdot) \hbox{ is positively }r>0 \hbox{ homogeneous, for all }x\in\Omega.$$ Given $U\in C^1(\Omega)$, the monotonicity of the Hamiltonian along trajectories of (\[eqhd\]) is the object of this section. It is a consequence of the following known general result.
\[propmonotone\] Let $\Omega\subset{\mathbb R}^n$ be an open set and $F:\Omega\to{\mathbb R}^n$ be a continuous vector field. The following are equivalent:
- [$V:\Omega\to{\mathbb R}$ is a continuous viscosity solution of $-F(x)\cdot\nabla V(x)\leq 0$ in $\Omega$.]{}
- [The system $(V,F)$ is forward weakly increasing, i.e. for every $x_o\in\Omega$, there is a solution of the differential equation $\dot x_t=F(x_t)$, for $t\in [0,{\varepsilon})$, $x_0=x_o$ such that $V(x_s)\leq V(x_t)$ for $0\leq s\leq t$.]{}
Moreover the following are also equivalent
- [$V:\Omega\to{\mathbb R}$ is a continuous viscosity solution of $F(x)\cdot\nabla V(x)\geq 0$ in $\Omega$.]{}
- [The system $(V,F)$ is backward weakly increasing, i.e. for every $x_o\in\Omega$, there is a solution of the differential equation $\dot x_t=F(x_t)$, for $t\in (-{\varepsilon},0]$, $x_0=x_o$ such that $V(x_s)\leq V(x_t)$ for $s\leq t\leq0$.]{}
\[corclarke\] Let $\Omega\subset{\mathbb R}^n$ be an open set and $F:\Omega\to{\mathbb R}^n$ be a continuous vector field. The following are equivalent:
- [$V:\Omega\to{\mathbb R}$ is a continuous viscosity solution of $-F(x)\cdot\nabla V(x)\leq 0$ and of $F(x)\cdot\nabla V(x)\geq 0$ in $\Omega$.]{}
- [The system $(V,F)$ is weakly increasing, i.e. for every $x_o\in\Omega$, there is a solution of the differential equation $\dot x_t=F(x_t)$, for $t\in (-{\varepsilon},{\varepsilon})$, $x_0=x_o$ such that $V(x_s)\leq V(x_t)$ for $s\leq t$.]{}
The proof of the previous statement can be found in [@clsw0], see also [@clsw]. When $F\in C^1$ another proof can be found in Proposition 5.18 of [@bcd] or can be deduced from the optimality principles in optimal control proved in [@soopt], when $F$ is locally Lipschitz continuous. In the case when $F$ is locally Lipschitz, the two differential inequalities in (i) of Corollary \[corclarke\] turn out to be equivalent and of course there is also uniqueness of the trajectory of the dynamical system $\dot x=F(x)$, $x(0)=x_o$. When (ii) in the Corollary is satisfied by all trajectories of the dynamical system then the system is said to be strongly monotone. This occurs in particular if there is at most one trajectory, as when $F$ is locally Lipschitz continuous. More general sufficient conditions for strong monotonicity can be found in [@clsw], see also [@drw].
In view of the above result, we introduce the following definition.
\[defcone\] Let $\Omega\subset{\mathbb R}^n$ be open and let $H:\Omega\times{\mathbb R}^n\to{\mathbb R}$ satisfying (\[eqh1\]). We say that a function $U\in C^1(\Omega)$ is a $C^1-$supersolution (resp. subsolution) of the Aronsson equation (\[eqae\]) in $\Omega$, if setting $V(x)=H(x,\nabla U(x))$ and $F(x)=-H_p(x,\nabla U(x))$ we have that $V$ is a viscosity subsolution (resp. supersolution) of $-F(x)\cdot\nabla V(x)=0$ and a supersolution (resp. a subsolution) of $F(x)\cdot\nabla V(x)=0$.
It is worth pointing out explicitly the consequence we have reached via Proposition \[propmonotone\].
Let $U\in C^1(\Omega)$ be a $C^1-$supersolution (resp. subsolution) of (\[eqae\]). For every $x_o\in\Omega\backslash{\mathcal H}$, there is a trajectory $x_t$ of the Hamiltonian dynamics (\[eqhd\]) such that $H(x_t,\nabla U(x_t))$ is nondecreasing (resp. nonincreasing).
- [Notice that if $U$ is a $C^1-$solution of (\[eqae\]) and the Hamiltonian dynamics (\[eqhd\]) is both strongly decreasing and strongly increasing, as happens for instance if it has a unique solution for a given initial condition, then for all trajectories $x_t$ of (\[eqhd\]), $H(x_t,\nabla U(x_t))$ is constant. ]{}
- [Coming back to Example \[exinfinity\], notice that while $U(x,y)=|x|^{4/3}-|y|^{4/3}$ is a $C^1$ function, nevertheless, as easily checked, $V(x,y)=H^2(\nabla U(x,y))=8(|x|^{2/3}+|y|^{2/3})/9$ is only a viscosity subsolution but not a supersolution of $$-\nabla V(x)\cdot (-H_p^2(\nabla U(x)))=0,$$ while it is a viscosity solution of $\nabla V(x)\cdot (-H_p^2(\nabla U(x)))=0$. Then it turns out that the Hamiltonian is weakly increasing on the trajectories of the Hamiltonian dynamics. Indeed there is another trajectory of the Hamiltonian dynamics such that $(x^{(3)}(0),y^{(3)}(0))=(1,0)=(x^{(1)}(0),y^{(1)}(0))$, namely $$(x^{(3)}(t),y^{(3)}(t))=\left(\left(1-\frac89t\right)^{3/2},\left(\frac89t\right)^{3/2}\right)$$ along which the Hamiltonian is actually constant, as long as the trajectory is well defined. ]{}
- [It is clear by Example \[exinfinity\] that while classical $C^2$ solutions of (\[eqae\]) are $C^1-$solutions, continuous or even $C^1$ viscosity solutions in general are not. The definition of $C^1-$solution that we introduced is meant to preserve the monotonicity property of the Hamiltonian on the trajectories of the Hamiltonian dynamics. ]{}
- [Observe that if $U$ is a $C^1-$solution, then $-U$ is a $C^1-$solution as well, since the Hamiltonian is unchanged and the vector field in the Hamiltonian dynamics becomes the opposite. ]{}
It may look unpleasant that Definition \[defcone\] of solution of (\[eqae\]) refers to a property that is not formulated directly for the function $U$. Therefore in the next statement we will reformulate the above definition. The property (ED) below will give an equivalent definition of a $C^1-$solution.
Let $U\in C^1(\Omega)$ and $H$ satisfying (\[eqh1\]), (\[eqh2\]). The following two statements are equivalent:
- [ for all $x_o\in\Omega\backslash{\mathcal H}$, there is a trajectory $x_t$ of the Hamiltonian dynamics (\[eqhd\]), such that if $\varphi\in C^2([0,{\varepsilon}))\cup C^2((-{\varepsilon},0])$ is a test function and $U(x_t)-\varphi(t)$ has a minimum (respectively maximum) at $0$ and $\dot\varphi(0)=\frac{d}{dt}U(x_t)|_{t=0}$, then we have that $$-\ddot \varphi(0)\geq0\;(\hbox{resp. }\leq0).$$]{}
- [$U$ is a $C^1-$supersolution (resp. subsolution) of (\[eqae\]).]{}
In particular, if $H$ is $C^1$ on $\{(x,p):H(x,p)\neq0\}$, a $C^1-$supersolution (resp. subsolution) is a viscosity supersolution (resp. subsolution) of (\[eqae\]).
In the statement of (ED), when the hamiltonian vector field $F(x)=-H_p(x,\nabla U(x))$ is locally Lipschitz continuous, we may restrict the test functions to $\varphi\in C^2(-{\varepsilon},{\varepsilon})$.
We only prove the statement for supersolutions, the other case being similar. Let $U\in C^1(\Omega)$.
Suppose first that (ED) holds true. Let $V(x)=H(x,\nabla U(x))$ and let $\Phi\in C^1(\Omega)$ be such that $V-\Phi$ has a maximum at $x_o$ with $V(x_o)=\Phi(x_o)$. Therefore if $x_t$ is a solution of the Hamiltonian dynamics (\[eqhd\]) that satisfies (ED), we have that, by homogeneity of $H(x,\cdot)$ and for $F(x)=-H_p(x,\nabla U(x))$, $$r\Phi(x_t)\geq rV(x_t)=rH(x_t,\nabla U(x_t))=-\nabla U(x_t)\cdot F(x_t)=-\frac d{dt}U(x_t).$$ Thus integrating for small $t>0$ we get $$\label{eqmis}
\varphi(t):=U(x_o)-r\int_0^t\Phi(x_s)\;ds\leq U(x_t),$$ and thus $U(x_t)-\varphi(t)$ has a minimum at $t=0$ on $[0,{\varepsilon})$ for ${\varepsilon}$ small and $\dot\varphi(0)=-r\Phi(x_o)=\frac{d}{dt}U(x_t)|_{t=0}$. If instead $V-\Phi$ had a minimum at $x_o$, then integrating on $(t,0]$ for $t<0$ small enough, we would still obtain the same as in (\[eqmis\]). By (ED), from (\[eqmis\]) we get in both cases $$0\geq\ddot\varphi(0)=-r\frac d{dt}\Phi(x_t)|_{t=0}=-r\nabla \Phi(x_o)\cdot F(x_o),$$ where $F(x)=-H_p(x,\nabla U(x))$. Therefore we conclude that $V$ is a viscosity subsolution of $-\nabla V\cdot F\leq 0$ (or a supersolution of $\nabla V\cdot F\geq 0$ when $V-\Phi$ has a minimum at $x_o$). Finally by definition, $U$ is a $C^1-$supersolution of (\[eqae\]).
Suppose now that $U$ is a $C^1-$supersolution of (\[eqae\]). Then by Proposition \[propmonotone\], for all $x_o\in \Omega\backslash{\mathcal H}$, we can find a trajectory $x_t$ of the dynamics (\[eqhd\]) such that $rV(x_t)=-\frac d{dt}U(x_t)$ is nondecreasing. Therefore $U(x_t)$ is a concave function of $t$. Let $\varphi\in C^2((-{\varepsilon},0])\cup C^2([0,{\varepsilon}))$ be such that $U(x_t)-\varphi(t)$ has a minimum at $t=0$, $U(x_o)=\varphi(0)$ and $\frac d{dt}U(x_t)|_{t=0}=\dot\varphi(0)$. If we had $\ddot\varphi(0)>0$ then $\varphi$ would be strictly convex in its domain. Therefore for $t\neq0$ small enough, and in the domain of $\varphi$, $$U(x_t)\geq\varphi(t)>\varphi(0)+\dot\varphi(0)t=U(x_o)+\frac d{dt}U(x_t)|_{t=0}t\geq U(x_t),$$ by concavity of $U(x_t)$. This is a contradiction.
We prove the last statement on the fact that a $C^1-$solution is a viscosity solution. Therefore for a $C^1-$supersolution $U$ of (\[eqae\]) let now $\Phi\in C^2(\Omega)$ be such that $U-\Phi$ has a minimum at $x_o$. By (ED), for a suitable solution $x_t$ of (\[eqhd\]) we have that $U(x_t)-\varphi(t)$ has a minimum at $t=0$ if $\varphi(t)=\Phi(x_t)$, in particular $\dot\varphi(0)=\frac{d}{dt}U(x_t)_{t=0}$. By (ED) and homogeneity of $H(x,\cdot)$, $$\begin{array}{l}
0\leq -\ddot\varphi(0)=\frac d{dt}\left[\nabla\Phi(x_t)\cdot H_p(x_t,\nabla\Phi(x_t))\right]|_{t=0}=r\frac d{dt}H(x_t,\nabla \Phi(x_t))|_{t=0}
\\=-r\nabla(H(x_o,\nabla \Phi(x_o)))\cdot H_p(x_o,\nabla \Phi(x_o)).
\end{array}$$ Therefore $U$ is a viscosity supersolution of (\[eqae\]). The case of subsolutions is similar and we skip it.
We end this section by proving another important property of $C^1-$solutions of (\[eqae\]), which in the literature was the main motivation for the study of (AE).
\[teoam\] Let $\Omega\subset{\mathbb R}^n$ be open and bounded, let $H$ satisfy (\[eqh1\]) and have the structure (\[eqhamiltonian\]), and let $U\in C^1(\Omega)\cap C({\overline\Omega})$ be a $C^1-$solution of (\[eqae\]). If $W\in C({\overline\Omega})$ is any function such that $$\label{eqam}\left\{\begin{array}{ll}
H(x,\nabla W(x))\leq k\in{\mathbb R},\quad& x\in\Omega,\\
W(x)=U(x),&x\in\partial\Omega
\end{array}\right.$$ in the viscosity sense, then $H(x,\nabla U(x))\leq k$ in $\Omega$.
When $D\subset{\mathbb R}^n$ is an open set and the property of a function $U\in C^1(D)$ stated in Theorem \[teoam\] holds for all open subsets $\Omega\subset D$, then we say that $U$ is an [*absolutely minimizing function*]{} in $D$ for the Hamiltonian $H$. This means that $U$ is a local minimizer of $\|H(\cdot,\nabla U(\cdot))\|_{L^\infty}$. It is well known that for the infinity Laplace equation, where we minimize the Lipschitz constant of $U$, being a viscosity solution and being an absolutely minimizing function are equivalent. Such equivalence is also known for coercive Hamiltonians and for the norm of the horizontal gradient in some Carnot-Caratheodory spaces. For more general Hamiltonians this equivalence is not known. Here we prove one implication at least for $C^1-$solutions of (\[eqae\]).
Let $U,W$ be as in the statement and suppose for convenience that $H(x,\cdot)$ is positively 1-homogeneous. We define $V(x)=H(x,\nabla U(x))\geq0$ and look at solutions $x_t$ of the Hamiltonian dynamics (\[eqhd\]). If $V(x_o)=0$, then clearly $V(x_o)\leq k$ and we have nothing left to show. If otherwise $V(x_o)>0$, since $U$ is a $C^1-$solution of (\[eqae\]) we already know that we can construct a solution of (\[eqhd\]) starting out at $x_o\in\Omega$ such that $V(x_t)$ is nondecreasing for $t\geq0$ and nonincreasing for $t\leq0$ (by a concatenation of two trajectories of (\[eqhd\]) with monotone Hamiltonian). Since $\Omega$ is bounded, the curve $x_t$ will not stay indefinitely in $\Omega$ because, as we already observed, $$U(x_t)-U(x_o)\leq -\int_0^tV(x_s)\;ds\leq-t V(x_o),\quad \hbox{for }t\geq 0,$$ and $$U(x_t)-U(x_o)\geq -t V(x_o),\quad \hbox{for }t\leq 0.$$ Hence $x_t$ will hit $\partial\Omega$ forward and backward in finite time. Let $t_1<0<t_2$ be such that $x_{t_1},x_{t_2}\in\partial\Omega$ and $x_t\in \Omega$ for $t\in(t_1,t_2)$. Therefore $$\label{eqab}
U(x_{t_2})+t_2V(x_o)\leq U(x_o)\leq U(x_{t_1})+t_1V(x_o)$$ and then $$\label{eqaa}
W(x_{t_1})-W(x_{t_2})=U(x_{t_1})-U(x_{t_2})\geq (t_2-t_{1})V(x_o).$$ Now we use the differential inequality (\[eqam\]) in the viscosity sense and the lower optimality principle in control theory as in [@soopt] for subsolutions of the Hamilton-Jacobi equation. Therefore since $x_t$ is a trajectory of the control system (\[eqsystem\]) we have that for all ${\varepsilon}>0$ and $t_1+{\varepsilon}<t<t_2$, as $x_s\in\Omega$ for $s\in[t_1+{\varepsilon},t]$, $$W(x_{t_1+{\varepsilon}})\leq k(t-t_1-{\varepsilon})+W(x_{t}).$$ By letting $t\to t_2-$ and ${\varepsilon}\to0+$ we conclude, by continuity of $W$ at the boundary of $\Omega$ and (\[eqaa\]), $$V(x_o)(t_2-t_1)\leq W(x_{t_1})-W(x_{t_2})\leq k(t_2-t_1)$$ which is what we want.
Notice that in (\[eqab\]) equalities hold if $V$ is constant on a given trajectory of (\[eqhd\]) and we obtain that $$\frac{U(x_o)-U(x_{t_1})}{t_1}=\frac{U(x_o)-U(x_{t_2})}{t_2}$$ and then $$U(x_o)=\frac{t_2}{t_2-t_1}U(x_{t_1})-\frac{t_1}{t_2-t_1}U(x_{t_2}),$$ which is an implicit representation formula for $U$ through its boundary values, since the points $x_{t_1},x_{t_2}$ depend on the Hamiltonian dynamics (\[eqhd\]) and $U$ itself.
Lyapunov functions and (AE)
===========================
In this section, we go back to the structure (\[eqhamiltonian\]) for $H$ and want to discuss the classical idea of control Lyapunov function. Let ${\mathcal T}\subset{\mathbb R}^n$ be a closed target set; we want to find $U:{\mathbb R}^n\to[0,+\infty)$ at least lower semicontinuous and such that: $U(x)=0$ if and only if $x\in{\mathcal T}$ and such that for all $x\in{\mathbb R}^n\backslash{\mathcal T}$ there exists a control $a_\cdot\in L^\infty(0,+\infty)$ and $t_x\leq+\infty$ such that the corresponding trajectory of (\[eqsystem\]) satisfies: $$U(x_t)\mbox{ is nonincreasing and }U(x_t)\to0,\quad\hbox{as }t\to t_x.$$ Classical necessary and sufficient conditions lead one to look for strict supersolutions of the Hamilton-Jacobi equation, namely to find $U$ such that $$\label{eqlyap}
H(x,\nabla U(x))\geq l(x),$$ with $l:{\mathbb R}^n\to[0,+\infty)$ continuous and such that $l(x)=0$ if and only if $x\in{\mathcal T}$. The case ${\mathcal T}=\{0\}$ is already quite interesting for the theory.
Here we will apply the results of the previous section and plan to consider Lyapunov functions built as follows. We analyse the existence of $U\in C^1(\Omega\backslash({\mathcal T}\cap{\mathcal H}))\cap {C(\overline{\Omega\backslash{\mathcal T}})}$ such that $U$ is a $C^1-$supersolution of (AE), i.e. satisfies $$\label{eqaei}
-\nabla(H(x,\nabla U(x)))\cdot H_p(x,\nabla U(x))\geq0,\quad x\in\Omega\backslash({\mathcal T}\cap{\mathcal H}).$$
To study (\[eqaei\]) in the case when $H$ is as in (\[eqhamiltonian\]) and $f$ as in (\[eqsigma\]), it is sometimes more convenient to write it for the Hamiltonian squared $H^2(x,\nabla U(x))=|\nabla U(x)\sigma(x)|^2$. Thus $$\begin{array}{ll}
-\nabla(H^2(x,\nabla U(x)))\cdot (H^2)_p(x,\nabla U(x))=-4\;^tD(\nabla U\sigma(x))\;^t(\nabla U(x)\sigma(x))\cdot \left(\sigma(x)\;^t(\nabla U(x)\sigma(x))\right)\\
\quad=-4S^*\;^t(\nabla U(x)\sigma(x))\cdot \;^t(\nabla U(x)\sigma(x)),
\end{array}$$ where we have set $$S=\;^t\sigma(x)^tD(\nabla U\sigma(x))=\;^t\sigma(x)D^2U(x)\sigma(x)+\left(D\sigma_j\sigma_i(x)\cdot \nabla U(x)\right)_{i,j=1,\dots,m},$$ $\sigma_j$, $j=1,\dots,m$, are the columns of $\sigma$, and $S^*=(S+\;^tS)/2$. Therefore a special sufficient condition for $U$ to satisfy (\[eqaei\]) is that $S^*$ is negative semidefinite, which means that $U$ is $\sigma-$concave with respect to the family of vector fields $\sigma_j$, in the sense of Bardi-Dragoni [@badr]. We recall that the matrix $S$ also appears in [@so3] to study second order controllability conditions for symmetric control systems.
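The identity behind the computation above can be checked symbolically. The following sketch (with hypothetical data chosen only for illustration: a Grushin-type $\sigma$ in the plane and a generic quadratic test function $U$) verifies that the Aronsson expression for $H^2$ coincides with $-4S^*\,^t(\nabla U\sigma)\cdot\,^t(\nabla U\sigma)$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
X = [x1, x2]

# hypothetical data, for illustration only: Grushin-type sigma and a quadratic U
sigma = sp.Matrix([[1, 0], [0, x1]])
U = x1**2 - x1*x2 + 3*x2**2

gradU = sp.Matrix([[sp.diff(U, v) for v in X]])      # row vector (1 x n)
w = (gradU * sigma).T                                # w = ^t(grad U sigma)   (m x 1)
A = sigma * sigma.T
H2 = (gradU * A * gradU.T)[0, 0]                     # H^2(x, grad U(x))

# left: -grad_x(H^2(x, grad U(x))) . (H^2)_p(x, grad U(x)), with (H^2)_p = 2 A p
lhs = -sum(sp.diff(H2, v) * e for v, e in zip(X, 2 * A * gradU.T))

# right: -4 S* w . w, with S = ^t(sigma) ^tD(grad U sigma)
S = sigma.T * w.jacobian(X).T
Sstar = (S + S.T) / 2
rhs = (-4 * (Sstar * w).T * w)[0, 0]

print(sp.expand(lhs - rhs))                          # expect 0
```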
Define the minimum time function for system (\[eqsystem\]) as $$T(x)=\inf_{a\in L^\infty(0,+\infty)}t_x(a),$$ where $t_x(a)=\inf\{t\geq0:x_t\in{\mathcal T},\;x_t \mbox{ solution of }(\ref{eqsystem})\}\leq+\infty$. We prove the following result; recall that ${\mathcal H}=\{x:H(x,\nabla U(x))=0\}$ is the singular set.
\[propfeed\] Let $\Omega\subset{\mathbb R}^n$ be open and ${\mathcal T}\subset\Omega$ a closed target. Let $H$ have the structure (\[eqhamiltonian\]). Assume that $U\in C(\overline{\Omega\backslash{\mathcal T}})\cap C^1(\Omega\backslash({\mathcal T}\cap{\mathcal H}))$ is nonnegative and a $C^1-$solution of (\[eqaei\]) in $\Omega\backslash({\mathcal T}\cap{\mathcal H})$ and that $U(x)=0$ for $x\in{\mathcal T}$, $U(x)=M$ for $x\in\partial\Omega$ and $U(x)\in (0,M)$ for $x\in\Omega\backslash{\mathcal T}$ and some $M>0$. For any $x_o\in\Omega\backslash({\mathcal T}\cup{\mathcal H})$ there exists a solution of the closed loop system (\[eqhd\]) such that
- [$H(x_t,\nabla U(x_t))$ is a nondecreasing function of $t$; ]{}
- [$U(x_t)$ is a strictly decreasing function of $t$ ]{}
- [The trajectory $(x_t)_{t\geq0}$ reaches the target in finite time and the minimum time function for system (\[eqsystem\]) satisfies the estimate $$\label{eqmte}
T(x_o)\leq \frac {U(x_o)}{H(x_o,\nabla U(x_o))}.$$ ]{}
Statement (i) follows from the results of the previous section, since $U$ is a supersolution of (AE). Let $x_o$ be a point where $H(x_o,\nabla U(x_o))>0$. By homogeneity of the Hamiltonian we get, for $t\geq0$, $$0<H(x_o,\nabla U(x_o))\leq H(x_t,\nabla U(x_t))=\nabla U(x_t)\cdot H_p(x_t,\nabla U(x_t))=-\frac d{dt}U(x_t)$$ and (ii) follows. Integrating now the last inequality we obtain $$0\leq U(x_t)\leq U(x_o)-H(x_o,\nabla U(x_o))t$$ and thus the solution of (\[eqhd\]) reaches the target before time $$\label{eqtime}
\bar t=\frac{U(x_o)}{H(x_o,\nabla U(x_o))}.$$ Therefore (\[eqmte\]) follows by definition.
The estimate (\[eqmte\]) can be used to obtain local regularity of the minimum time function. The proof of regularity now follows a more standard path, although under weaker assumptions than in the usual literature, and will allow us to obtain a new regularity result. We emphasize that nothing in the next statement is assumed on the structure of the vectogram $f(x,A)$ when $x\in{\mathcal T}$. In particular the target need not even be small time locally attainable.
\[thmregularity\] Let $\Omega\subset{\mathbb R}^n$ be open and ${\mathcal T}\subset\Omega$ a closed target. Assume that $U\in C(\overline{\Omega\backslash{\mathcal T}})\cap C^1(\Omega\backslash({\mathcal T}\cap{\mathcal H}))$ is nonnegative and a $C^1-$solution of (\[eqaei\]) in $\Omega\backslash({\mathcal T}\cap{\mathcal H})$ and that $U(x)=0$ for $x\in{\mathcal T}$, $U(x)=M$ for $x\in\partial\Omega$ and $U(x)\in (0,M)$ for $x\in\Omega\backslash{\mathcal T}$ and some $M>0$. Let $d(x)=\mbox{dist}(x,{\mathcal T})$ be the distance function from the target. Suppose that $U$ satisfies the following: for all ${\varepsilon}>0$ there are $\delta,c>0$ such that $$\label{eqexcond}
U(x)\leq c\; d(x),\quad \mbox{if }H(x,\nabla U(x))\geq{\varepsilon},\;d(x)<\delta.$$ Then the minimum time function $T$ for system (\[eqsystem\]) to reach the target is finite and locally Lipschitz continuous in $\Omega\backslash({\mathcal T}\cup{\mathcal H})$.
Let $x_o\in\Omega$, $x_o\notin({\mathcal T}\cup{\mathcal H})$ and $r,{\varepsilon}>0$ be such that $H(x,\nabla U(x))\geq{\varepsilon}$, for all $x\in B_{r}(x_o)$. The parameter $r$ will be chosen small enough below. We apply the assumption (\[eqexcond\]) and find $\delta,c>0$ correspondingly. The fact that $T$ is finite in $B_r(x_o)$, for $r$ sufficiently small, follows from Proposition \[propfeed\].
Take $x^1,x^2\in B_r(x_o)$ and suppose that $x^1_t,x^2_t$ are the trajectories, solutions of (\[eqsystem\]), corresponding to the initial conditions $x_0=x^1,x^2$ respectively. To fix ideas we may suppose that $T(x^2)\leq T(x^1)<+\infty$, and for any $\rho\in(0,1]$ we choose a control $a^\rho$ and a time $t_2=t_{x^2}(a^{\rho})\leq T(x^2)+\rho$ such that $d(x^2_{t_2})=0$. Note that by (\[eqmte\]), $t_2\leq \frac{U(x^2)}{{\varepsilon}}+\rho\leq M_{\varepsilon}$, for all $x^2\in B_r(x_o)$. Moreover by the Gronwall inequality for system (\[eqsystem\]) and since $d(x^2_{t_2})=0$, $$d(x^1_{t_2})\leq|x^1_{t_2}-x^2_{t_2}|\leq|x^1-x^2|e^{Lt_2}\leq|x^1-x^2|e^{LM_{\varepsilon}}$$ and the right hand side is smaller than $\delta$ if $r$ is small enough. Now we can estimate, by the dynamic programming principle and by (\[eqmte\]), (\[eqexcond\]), $$0\leq T(x^1)-T(x^2)\leq (t_2+T(x^1_{t_2}))-t_2+\rho\leq \frac{U(x^1_{t_2})}{\varepsilon}+\rho \leq\frac c{\varepsilon}d(x^1_{t_2})+\rho\leq \frac{ce^{LM_{\varepsilon}}}{\varepsilon}|x^1-x^2|+\rho.$$ As $\rho\to0+$, the result follows.
The extra estimate (\[eqexcond\]) is crucial for the sought regularity of the minimum time function but, contrary to the existing literature, it is only required on a possibly proper subset of a neighborhood of the target. We will show in the examples of the next section how it may follow from (AE) as well. In order to achieve small time local attainability of the target, one needs in addition that the system can escape from $\mathcal H$.
In addition to the assumptions of Theorem \[thmregularity\] suppose that ${\mathcal H}$ is a manifold of codimension at least one and that for all $x_o\in{\mathcal H}\cap(\Omega\backslash{\mathcal T})$ we have $f(x_o,A)\not\subset T_{x_o}({\mathcal H})$, the tangent space of $\mathcal H$ at $x_o$. Then for any $x_o\in \Omega\backslash{\mathcal T}$ we can reach the target in finite time.
By following the vector field $f(x_o,a)\notin T_{x_o}({\mathcal H})$, we immediately exit the singular set.
Some smooth explicit solutions of the Aronsson equation
=======================================================
In this section we show two examples of well known nonlinear systems where we can find an explicit smooth solution of (AE) and then apply Theorem \[thmregularity\] to obtain local Lipschitz regularity of the minimum time function. Our system will be in the form (\[eqhamiltonian\]), (\[eqsigma\]) and ${\mathcal T}=\{0\}$.
Hörmander-like vector fields.
-----------------------------
We consider the case where $x=(x_h,x_v)\in{\mathbb R}^{m+1}$ and $$\label{eqhormander}
\sigma(x)=\left(\begin{array}{cc}
I_m\\^t(Bx_h)\end{array}\right),$$ where $I_m$ is the $m\times m$ identity matrix and $B$ is a nonsingular $m\times m$ matrix with $^tB=-B=B^{-1}$. In particular $m$ is an even number and $|Bx_h|=|x_h|$. It is known that the corresponding symmetric control system is globally controllable to the origin and that its minimum time function is locally $1/2-$Hölder continuous. We want to prove higher regularity except on its singular set.
We consider the two functions $$\label{eqgauge}
u(x)=|x_h|^4+4x_v^2,\quad U(x)=(u(x))^{1/4},$$ and want to show that $U$ is a solution of (AE) for $H^2$ in ${\mathbb R}^{m+1}\backslash\{0\}$. $U$ is a so called gauge function for the family of vector fields. We easily check that, after denoting $A(x)=\sigma(x)\;^t\sigma(x)$, $$\begin{array}{c}
\nabla u(x)=(4|x_h|^2x_h,8x_v),\quad
A(x)\;^t\nabla u(x)=\left(\begin{array}{cc}
4|x_h|^2x_h+8x_vBx_h\\8x_v|x_h|^2\end{array}\right),\\
H^2(x,\nabla u(x))=|\nabla u(x)\sigma(x)|^2=A(x)\;^t\nabla u(x)\cdot \;^t\nabla u(x)=16|x_h|^6+64x_v^2|Bx_h|^2=16|x_h|^2u(x),\\
H(x,\nabla U(x))=\frac{|x_h|}{U(x)}.
\end{array}$$ Notice in particular that $H(x,\nabla U(x))=0$ if and only if $x_h=0$ and thus the singular set $\{x:H(x,\nabla U(x))=0\}$ contains the target and is a smooth manifold, being the $x_v$ axis. As a consequence of the last displayed equation we have $$U(x)\leq\frac{|x_h|}{\varepsilon}\leq\frac{|x|}{\varepsilon},\quad\hbox{in }H(x,\nabla U(x))\geq{\varepsilon},$$ which is an information that we need to apply Theorem \[thmregularity\]. Finally, if $x\neq0$, $$\begin{array}{l}-\nabla(H^2(x,\nabla U(x)))\cdot (H^2)_p(x,\nabla U(x))
=-2\left(\frac{(x_h,0)}{U^2(x)}-\frac{|x_h|^2}{U^3(x)}\nabla U(x)\right)\cdot A(x)\;^t\nabla U(x)\\
=-\frac{2}{U^3(x)}\left(4U(x)\frac{|x_h|^4}{4U^3(x)}-|x_h|^2\frac{|x_h|^2}{U^2(x)}\right)=0.
\end{array}$$ Therefore $U$ is even a classical $C^2$ solution of (AE) for Hamiltonian $H^2$ in ${\mathbb R}^{m+1}\backslash\{0\}$ and then $H$ is constant along the trajectories of the closed loop system (\[eqhd\]). Hence, by Theorem \[thmregularity\], the system (\[eqsystem\]) is controllable in finite time to the origin from $$\{x:H(x,\nabla U(x))>0\}={\mathbb R}^{m+1}\backslash\{(0,x_v):x_v\in{\mathbb R}\}$$ and the corresponding minimum time function is locally Lipschitz continuous on that set. Notice that, for ${\varepsilon}<1$, $\{x:H(x,\nabla U(x))\geq{\varepsilon}\}=\{x:4x_v^2\leq(1/{\varepsilon}^4-1)|x_h|^4\}$. Also the last Corollary applies.
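The computation above can also be spot-checked by machine. The following sketch (our own illustration, restricted to the hypothetical case $m=2$ with $Bx_h=(-x_2,x_1)$) evaluates numerically the Aronsson expression for $H^2$ and the identity $H(x,\nabla U(x))=|x_h|/U(x)$ at a few points away from the singular set:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
X = [x1, x2, x3]

# m = 2 with B x_h = (-x2, x1), so that B^T = -B = B^{-1}
sigma = sp.Matrix([[1, 0], [0, 1], [-x2, x1]])       # rows: I_2 and ^t(B x_h)
A = sigma * sigma.T

u = (x1**2 + x2**2)**2 + 4*x3**2                     # gauge u = |x_h|^4 + 4 x_v^2
U = u**sp.Rational(1, 4)
gradU = sp.Matrix([sp.diff(U, v) for v in X])

H2 = (gradU.T * A * gradU)[0, 0]                     # H^2(x, grad U(x))
gradH2 = sp.Matrix([sp.diff(H2, v) for v in X])
AE = -(gradH2.T * (2 * A * gradU))[0, 0]             # Aronsson expression for H^2

f_AE = sp.lambdify(X, AE)
f_H = sp.lambdify(X, sp.sqrt(H2) - sp.sqrt(x1**2 + x2**2) / U)
for p in [(1.0, 0.5, -0.3), (0.2, -1.1, 2.0), (3.0, -0.7, 0.1)]:
    print(f_AE(*p), f_H(*p))                         # both ~ 0 away from {x_h = 0}
```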
Consider the symmetric control system $$\label{eqssystem}
\left\{\begin{array}{ll}
\dot x_t=\sigma(x_t)a_t,&\quad t>0,\\
x_o\in{\mathbb R}^n,
\end{array}\right.$$ where $\sigma$ is given in (\[eqhormander\]). Then the gauge function (\[eqgauge\]) is a solution of the Aronsson equation (\[eqae\]) for $H^2$ in ${\mathbb R}^{m+1}\backslash\{0\}$, it is an absolutely minimizing function for the corresponding $L^\infty$ norm of the subelliptic gradient and the minimum time function to reach the origin is locally Lipschitz continuous in $\{x=(x_h,x_v)\in{\mathbb R}^{m+1}:x_h\neq0\}$. The system is small time locally controllable and there is a continuous feedback leading the system to the target outside the singular set.
Grushin vector fields.
----------------------
We consider the system where $x=(x_h,x_v)\in {\mathbb R}^{m+1}$ and $$\label{eqgrushin}
\sigma(x)=\left(\begin{array}{cc}
I_m\quad &0_m\\0&^tx_h\end{array}\right),$$ where $\sigma(x)$ is $(m+1)\times 2m$ matrix. Also in this case it is known that the corresponding symmetric control system is globally controllable to the origin and that its minimum time function is locally $1/2-$Hölder continuous. We consider $u,\;U$ as before in (\[eqgauge\]) want to show that $U$ is a solution of (AE) in ${\mathbb R}^{m+1}\backslash\{0\}$. In this case we can check that, $$A(x)\;^t\nabla u(x)=\left(\begin{array}{cc}
4|x_h|^2x_h\\8x_v|x_h|^2\end{array}\right),\quad H^2(x,\nabla u(x))=16|x_h|^2u(x),
\quad H(x,\nabla U(x))=\frac{|x_h|}{U(x)},$$ and again we have, for ${\varepsilon}>0$, $$U(x)\leq\frac{|x_h|}{\varepsilon}\leq\frac{|x|}{\varepsilon},\quad\hbox{in }H(x,\nabla U(x))\geq{\varepsilon}.$$ Finally, if $x\neq0$, $$\begin{array}{l}-\nabla(H^2(x,\nabla U(x)))\cdot (H^2)_p(x,\nabla U(x))
=-\frac{2}{U^3(x)}\left(U(x)(x_h,0)-|x_h|^2\nabla U(x)\right)\cdot A(x)\;^t\nabla U(x)\\
=-\frac{2}{U^3(x)}\left(4U(x)\frac{|x_h|^4}{4U^3(x)}-|x_h|^2\frac{|x_h|^2}{U^2(x)}\right)=0.
\end{array}$$ Therefore $U$ is a solution of (AE) for Hamiltonian $H^2$ and hence the system (\[eqsystem\]) is controllable in finite time to the origin from $\{x:H(x,\nabla U(x))>0\}$ and we prove the following result.
Consider the symmetric control system (\[eqssystem\]) where $\sigma$ is given in (\[eqgrushin\]). Then the gauge function (\[eqgauge\]) is a solution of (AE) for $H^2$ in ${\mathbb R}^{m+1}\backslash\{0\}$, it is an absolutely minimizing function for the corresponding $L^\infty$ norm of the subelliptic gradient and the minimum time function to reach the origin is locally Lipschitz continuous in $\{x=(x_h,x_v)\in{\mathbb R}^{m+1}:x_h\neq0\}$.
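A numerical companion to this statement (a sketch of our own, for the hypothetical planar case $m=1$, using explicit Euler steps for the closed loop dynamics (\[eqhd\])) shows the finite time arrival and compares it with the bound (\[eqmte\]); here $H$ is constant along the closed loop trajectory, so the bound is essentially attained:

```python
import numpy as np

def U(x):                            # gauge U = (x_h^4 + 4 x_v^2)^(1/4), m = 1
    return (x[0]**4 + 4.0*x[1]**2)**0.25

def gradU(x):
    return np.array([x[0]**3, 2.0*x[1]]) / U(x)**3

def H(x, p):                         # H(x, p) = |p sigma(x)|, sigma = [[1, 0], [0, x_h]]
    return np.hypot(p[0], x[0]*p[1])

x = np.array([1.0, 0.5])             # initial point outside the singular set {x_h = 0}
H0, U0 = H(x, gradU(x)), U(x)
print("bound (eqmte):", U0 / H0)     # ~ 1.414

t, dt = 0.0, 1e-4
while U(x) > 1e-2:                   # closed loop dx/dt = -H_p = -A(x) grad U(x) / H
    p = gradU(x)
    x = x - dt * np.array([p[0], x[0]**2 * p[1]]) / H(x, p)
    t += dt
print("arrival time:", t, " H still ~", H(x, gradU(x)))   # t ~ 1.40, H ~ H0 ~ 0.84
```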
[99]{}
, *Minimization problems for the functional ${\rm sup}\sb{x}\,F(x,\,f(x),\,f\sp{\prime} (x))$*, Ark. Mat. [**6**]{} (1965), 33–53.
, *A tour of the theory of absolutely minimizing functions*, Bull. Amer. Math. Soc. [**41**]{} (2004), no. 4, 439–505.
, *Partial regularity for solutions to subelliptic eikonal equations,* C. R. Math. Acad. Sci. Paris [**356**]{} (2018), no. 2, 172–176.
, *Regularity results for the minimum time function with Hörmander vector fields,* J. Differential Equations [**264**]{} (2018), no. 5, 3312–3335.
*Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations. With appendices by Maurizio Falcone and Pierpaolo Soravia,* Systems & Control: Foundations & Applications. Birkhäuser Boston, Inc., Boston, MA, 1997.
*Convexity and semiconvexity along vector fields,* Calc. Var. Partial Differential Equations 42 (2011), no. 3-4, 405–427.
, *The Euler equation and absolute minimizers of L$^{\infty}$ functionals*, Arch. Ration. Mech. Anal. [**157**]{} (2001), no. 4, 255–283.
, *Properties of infinite harmonic functions of Grushin-type spaces*, Rocky Mountain J. Math. [**39**]{} (2009), 729–756.
, *The Aronsson-Euler equation for absolutely minimizing Lipschitz extensions with respect to Carnot-Caratheodory metrics*, Trans. Am. Math. Soc. [**357**]{} (2005), 795-823.
, *Qualitative properties of trajectories of control systems: a survey*, J. Dynam. Control Systems 1 (1995), no. 1, 1–48.
, *Nonsmooth analysis and control theory*, Graduate Texts in Mathematics, 178. Springer-Verlag, New York, 1998.
, *An efficient derivation of the Aronsson equation*, Arch. Ration. Mech. Anal. [**167**]{} (2003), no. 4, 271–279.
, *Strong invariance and one-sided Lipschitz multifunctions*, Nonlinear Anal. 60 (2005), no. 5, 849–862.
, *Uniqueness of Lipschitz extensions: minimizing the sup norm of the gradient*, Arch. Rational Mech. Anal. [**123**]{} (1993), no. 1, 51–74.
, *Minimization problems for Lipschitz functions via viscosity solutions*, Dissertation, University of Jyväskylä, Jyväskylä, 1998. Ann. Acad. Sci. Fenn. Math. Diss. [**115**]{} (1998), 53 pp.
, *On the small-time controllability of discontinuous piece-wise linear systems*, [*Systems Control Lett.*]{}, 62(2):218–223, 2013.
, *A [H]{}ölder condition for [B]{}ellman’s function*, [*Differencial’nye Uravnenija*]{}, 13(12):2180–2187, 2301, 1977.
, *Second order conditions for the controllability of nonlinear systems with drift*, [*Commun. Pure Appl. Anal.*]{}, 5(4):861–885, 2006.
, *Sufficient conditions for small time local attainability for a class of control systems*, In [*Large-scale scientific computing*]{}, [*Lect. Notes Comput. Sci.*]{} 9374, 117-125. Springer, Cham, 2015.
, *Small-time local attainability for a class of control systems with state constraints*, [*ESAIM Control Optim. Calc. Var.*]{}, 23(3):1003–1021, 2017.
, *Asymptotic controllability and Lyapunov-like functions determined by Lie brackets*, SIAM J. Control Optim. 56 (2018), no. 2, 1508–1534.
, *Controllability of autonomous systems.*, , 4:606–617, 1968.
, *H[ö]{}lder continuity of the minimum-time function for $C^1$-manifold targets*, [*J. Optim. Theory Appl.*]{}, 75(2):401–421, 1992.
*Optimality principles and representation formulas for viscosity solutions of Hamilton-Jacobi equations. II. Equations of control problems with state constraints*, Differential Integral Equations 12 (1999), no. 2, 275–293.
*Existence of absolute minimizers for noncoercive Hamiltonians and viscosity solutions of the Aronsson equation,* Math. Control Relat. Fields 2 (2012), no. 4, 399–427.
, *Absolute minimizers, Aronsson equation and Eikonal equations with Lipschitz continuous vector fields*, In: International conference for the 25th anniversary of viscosity solutions. Tokyo, 4–6 June 2007, Gakuto Int. Series, Gakkotosho Co., Ltd., (2008), 30, 175–19.
*On Aronsson equation and deterministic optimal control,* Appl. Math. Optim. 59 (2009), no. 2, 175–201.
, *Existence of absolute minimizers for noncoercive Hamiltonians and viscosity solutions of the Aronsson equation*, Math. Control Relat. Fields 2 (2012), no. 4, 399–427.
*Some results on second order controllability conditions*, to appear.
, *The Aronsson equation for absolute minimizers of $L\sp \infty$-functionals associated with vector fields satisfying Hörmander’s condition*, Trans. Amer. Math. Soc. [**359**]{} (2007), 91–113.
[^1]: email: [email protected].
---
author:
- 'E. Iodice'
- 'M. Arnaboldi'
- 'M. Rejkuba'
- 'M. J. Neeser'
- 'L. Greggio'
- 'O.A. Gonzalez'
- 'M. Irwin'
- 'J.P. Emerson'
bibliography:
- 'NGC253.bib'
date: 'Received 2014 January 21; accepted 2014 May 26'
title: 'The NIR structure of the barred galaxy NGC 253 from VISTA[^1]'
---
Introduction {#intro}
============
NGC 253 is a southern[^2], barred, edge-on ($i \simeq$ 74 degrees), spiral galaxy in the Sculptor group at a distance of 3.47 Mpc [@RS11], which yields an image scale of 16.8 parsecs per arcsecond ($\sim$1 kpc/arcminute). It is one of the best nearby examples of a nuclear starburst galaxy. Even though its overall gas and stellar morphology is typical of a spiral galaxy, several photometric and kinematical studies of this object have revealed that NGC 253 has a rather complicated structure. The deep image of @MH97, reaching 28 mag/arcsec$^2$, shows the presence of an extended, asymmetrical stellar halo with a semi-major axis radius of about 34 kpc, plus a southern spur. The stellar disk is much more extended than the HI disk [@Boo05], contrary to what is normally observed in spiral galaxies. Furthermore, the HI distribution in NGC 253 presents two other features. The HI disk is less extended on the NE side than on the SW side, and on the same side a plume is observed, elongated perpendicular to the disk major axis and extending for about 12 kpc. This HI plume borders the X-ray halo emission [@Pie00] and the $H\alpha$ emission [@Hoo96] on their northern side. Given the spatial connection, such a feature has been related to the central starburst or, alternatively, to a minor merger and a gas accretion event [@Boo05].
Previous photometric studies in the near infrared ($1-2$ $\mu m$) revealed the presence of a bar extending 150 arcsec from the nucleus [@Sco85; @Forbes92], in addition to strong nuclear emission. These studies are confined within a 3 arcmin radius ($\sim 3$ kpc) from the center and do not cover the whole disk, which extends out to 30 arcmin ($\sim 30$ kpc).
Toward the nuclear regions, an intense starburst is powering the observed outflow of expanding gas shells along the minor axis [@SW92]. Recently, extraplanar molecular gas was detected by ALMA [@ALMA] which closely tracks the $H\alpha$ emission. The nuclear outflow is also responsible for the extended X-ray plume [@FB84].
The $H\alpha$ rotation curve along the disk major axis [@AM95] is asymmetric inside 100 arcsec ($\sim$1.7 kpc) from the center and the steep velocity gradient for $R\le10$ arcsec on the NE side suggests the presence of a nuclear ring, which may be responsible for the gas supply to the nuclear starburst. From this analysis of the bar dynamics one expects an Inner Lindblad Resonance (ILR) at the scale of the observed nuclear ring [@AM95]. A nuclear ring of a comparable size has been detected by @MS10 based on SINFONI photometry and 2D kinematics in the Ks band. Both kinematical studies cited above have shown that there is an offset between the kinematic center and the brightest location in the nuclear region of NGC 253 [see Fig.1b and Fig.5 in @AM95; @MS10 respectively]. The puzzle of the nucleus in NGC 253 was discussed in detail by @MS10: the SINFONI data have revealed that the IR peak is about 2.6 arcsec away from the center of the 2D velocity map, while it seems to be consistent with the location of the strongest compact radio source TH2. This radio source has no optical, IR or X-ray counterpart, which led to it being excluded as an AGN. Alternatively, since the kinematic center is very close to TH2, they suggested the presence of a dormant black hole in the center of NGC 253, like SgrA\* in the MW.
Taking into account that NGC 253 is a nearby extended object, one limitation of all previous imaging data is the absence of high angular resolution covering the entire extent of the galaxy in a single image. This fact has hampered the study of the fine substructures and the ability to correlate them with the outer disk and halo. As we shall discuss in detail in the next sections, this issue is overcome thanks to the advent of the new generation Wide-Field Imaging (WFI) cameras. In fact, NGC 253 has been the target of the Science Verification (SV) for the new ESO survey telescopes VST and VISTA. The primary goal of SV is to test the expected performance of the telescope, camera, and of the data reduction pipeline. NGC 253 was chosen as an SV target for several reasons. First, its extent fills most of the VISTA and VST field, such that one can check for possible reflections within the camera optics and establish the most suitable techniques for background subtraction. Second, as NGC 253 is very dusty, NIR imaging is a requisite for studying the underlying structure of the disk. And, finally, a wealth of data is available in the ESO archive (narrow band $H\alpha$, broad bands from ESO/MPI-2.2WFI, imaging and spectra of the nucleus from SINFONI at ESO/VLT). The main scientific goals of the SV extragalactic mini-survey[^3] are: 1) detecting the Red Giant Branch stars in the faint outer halo, by using the deep exposures, and 2) studying the disk and bulge structure with the shallow exposures. The former science case is presented in @Greggio. In this paper, we focus on the latter science case and show the major results on the structure of NGC 253 derived from the VISTA data in the NIR J and Ks bands. In particular, we derive new and more accurate estimates for the bar length and strength, and discuss the connection between the observed features in the disk and the Lindblad resonances predicted by the bar/disk kinematics.
This paper is structured as follows: in Sec. \[data\] we present the observations and data reduction; in Sec. \[morph\] we describe the morphology of NGC 253 in the J and Ks bands; in Sec. \[phot\] and Sec. \[galfit\] we carry out the surface photometry and the two-dimensional model of the light distribution for the whole system, respectively. Results are discussed in Sec. \[result\] and concluding remarks are drawn in Sec. \[concl\].
Observations & data reduction {#data}
==============================
The [*Visible and Infrared Survey Telescope for Astronomy (VISTA)*]{} [@Em04; @Em10], located at the Paranal Observatory, in Chile, is a 4 meter telescope equipped with the wide-field, near-infrared camera VIRCAM [@da04]. This instrument consists of 16 $2048 \times 2048$ Raytheon VIRGO HgCdTe detectors non-contiguously covering a $1.29 \times 1.02$ deg$^2$ field of view, in a wavelength range from 0.85 to 2.4 micron. Because of the large gaps between the VIRCAM detectors [$90\%$ and $42.5\%$ of the x and y axes, respectively, see @Em04], a single VIRCAM exposure, the so-called [*pawprint*]{}, covers only 0.6 deg$^2$. The contiguous area of 1.65 deg$^2$, the [*tile*]{}, is obtained by combining a minimum of 6 offset pawprints whose offsets, in units of the detector size, are 0.475 twice along the y axis and 0.95 once along the x axis [see Fig. 5 in @Em04]. The mean pixel scale is 0.34 arcsec/pixel. In order to account for the variable sky fluctuations and bad pixels, several images are taken by offsetting the telescope in right ascension and declination ([*jitter*]{}), and the series of $DIT \times NDIT$ exposures is repeated at each jitter position, where the Detector Integration Time (DIT) is a short exposure on the target. The jitter offsets are $\sim20$ arcsec in size (always $< 30$ arcsec).
For NGC 253, the observations were collected in October 2009 and consist of two datasets: the [*deep data*]{}, taken with the J, Z and NB118[^4] filters, and the [*shallow data*]{}, taken with all the broad-band filters, i.e. Z, Y, J, H and Ks. The deep data in the J and Z bands are presented in @Greggio, where the resolved stellar population is discussed. Here we present and analyze the surface photometry in the J and Ks bands.
The observing strategy adopted for the shallow data consisted of positioning the galaxy at the center of the pawprint field, almost filling the central detectors, for three pointings of the six-pointing tile sequence, and positioning the galaxy in the gaps between the VIRCAM detectors in the other three pointings. In all pointings, the major axis of the galaxy is aligned parallel to the short side of the tile. Since the gaps between the detectors are $\sim4.5$ arcmin in one direction and $\sim10$ arcmin in the other [see Fig.5 in @Em04], positioning the galaxy in the gap yields an offset-sky exposure. The jitter sequences of these frames were median-combined to create the sky frame; before combining them, the offset-sky exposures were scaled to a reference frame. The final sky image was used for the background subtraction of each pointing. The observing log for the J and Ks shallow data of NGC 253 is listed in Tab. \[obslog\].
As described also in @Greggio, the data reduction of shallow and deep datasets is carried out using the dedicated CASU pipeline, developed specifically for the reduction of the VISTA data [@Irw04].
-------- ------------------- -------- ---------------- ----------
Filter $DIT \times NDIT$ Jitter Tot. Exp. time seeing
(seconds) (hours) (arcsec)
J $10 \times 6$ 24 0.6 1.1
Ks $12 \times 6$ 24 0.72 0.9
-------- ------------------- -------- ---------------- ----------
: The observing log for the J and Ks data of NGC 253.[]{data-label="obslog"}
Integrated magnitudes and limits of the VISTA data {#2mass}
--------------------------------------------------
In order to quantify the limiting surface brightness of the new VISTA data for NGC 253, we adopted the method described by @PhT06. On the sky-subtracted tiles for the shallow J and Ks and the deep J imaging data, we extracted the azimuthally-averaged intensity profile with the IRAF task ELLIPSE. The major axis of the ellipses increases linearly with a step of 50 pixels out to the edges of the frame. The Position Angle (P.A.) and ellipticity ($\epsilon$) of the ellipses are fixed at $P.A.= 52\deg$ and $\epsilon
= 0.8$, which are the disk’s average values for $R \ge 200$ arcsec in NGC 253. In Fig. \[sky\], we show the intensity profile as a function of the semi-major axis for the VISTA data (top panel) and for the 2MASS data (bottom panel) for NGC 253. From these intensity profiles, we estimated the distance from the center where the galaxy’s light blends into the background at zero counts per pixel on average. This radius sets the surface brightness limit of the VISTA and 2MASS photometry. In the J and Ks 2MASS images, this limit is at $R=600$ arcsec, corresponding to a limiting surface brightness of $\mu_J = 21.50$ mag arcsec$^{-2}$ and $\mu_{Ks} = 19.05$ mag arcsec$^{-2}$, respectively. The outer limits for the VISTA shallow J and Ks images are at $R=1034$ arcsec and $R=830$ arcsec, respectively, while for the deep J band image it is at $R=1305$ arcsec. The limiting magnitudes corresponding to these radii are $\mu_J = 23.0 \pm 0.4$ mag arcsec$^{-2}$ and $\mu_{Ks} = 22.6 \pm 0.6$ mag arcsec$^{-2}$ for the shallow data, and $\mu_J = 25 \pm 1$ mag arcsec$^{-2}$ for the deep J band image. The error estimates on the above quantities take the uncertainties on the photometric calibration ($\sim
0.01$ mag) and sky subtraction ($\sim 0.1$ ADU) into account.
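The azimuthal averaging described above was done with the IRAF ELLIPSE task; the following minimal numpy sketch reproduces the same idea for a sky-subtracted image, assuming the fixed geometry quoted in the text ($P.A. = 52$ degrees, $\epsilon = 0.8$, 50-pixel steps). The function and variable names are illustrative, and converting the limiting counts into mag arcsec$^{-2}$ would additionally require the photometric zero point and pixel scale of the frame.

```python
import numpy as np

def elliptical_radius(shape, x0, y0, pa_deg, eps):
    """Elliptical semi-major-axis coordinate of every pixel for a fixed
    position angle (measured from the +x axis of the array) and ellipticity."""
    y, x = np.indices(shape)
    pa = np.radians(pa_deg)
    dx, dy = x - x0, y - y0
    xr = dx * np.cos(pa) + dy * np.sin(pa)      # along the major axis
    yr = -dx * np.sin(pa) + dy * np.cos(pa)     # along the minor axis
    return np.hypot(xr, yr / (1.0 - eps))

def azimuthal_profile(img, x0, y0, pa_deg=52.0, eps=0.8, step=50):
    """Mean counts in elliptical annuli of constant width (in pixels)."""
    sma = elliptical_radius(img.shape, x0, y0, pa_deg, eps)
    edges = np.arange(0.0, sma.max(), step)
    centers, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (sma >= lo) & (sma < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            means.append(np.nanmean(img[mask]))
    return np.array(centers), np.array(means)

def limiting_radius(r, counts, sky_level):
    """First annulus whose mean blends into the residual background level."""
    below = np.where(counts <= sky_level)[0]
    return r[below[0]] if below.size else r[-1]
```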
We also measured the integrated magnitudes in two circular apertures centered on NGC 253. The first aperture is within 300 arcsec, for both the VISTA and 2MASS J and Ks images; the second aperture corresponds to the outer limit of the deep J and Ks VISTA data derived above. Values are listed in Table \[mag\]. When the 2MASS magnitudes are transformed into the VISTA system[^5], the magnitudes inside 300 arcsec are consistent within the photometric errors, in both the J and Ks bands.
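For reference, the comparison uses the colour terms quoted in footnote 5; a short helper implementing that transformation might look as follows (a sketch, with illustrative names).

```python
def twomass_to_vista(j_2mass, ks_2mass):
    """Apply the 2MASS -> VISTA colour terms quoted in footnote 5."""
    color = j_2mass - ks_2mass
    return j_2mass - 0.065 * color, ks_2mass + 0.010 * color

# Example with the 300-arcsec aperture magnitudes from Table [mag]:
print(twomass_to_vista(5.08, 4.00))  # ~ (5.01, 4.01)
```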
-------- --------------- ----------------- --------------- ------------------ -------------------- ------------------
Radius $m_J$ (2MASS) $m^c_J$ (2MASS) $m_J$ (VISTA) $m_{Ks}$ (2MASS) $m^c_{Ks}$ (2MASS) $m_{Ks} $(VISTA)
$\pm 0.01$ $\pm 0.011$ $\pm 0.01$ $\pm 0.011$
(1) (2) (3) (4) (5) (6) (7)
300 5.08 5.01 4.986 4.00 4.01 4.001
830 3.93
1305 4.69
-------- --------------- ----------------- --------------- ------------------ -------------------- ------------------
\[mag\]
![Azimuthally-averaged intensity profiles (counts) as a function of the semi-major axis for the VISTA (top panel) and 2MASS data of NGC 253, in both J (blue points) and Ks (red points) bands. For the VISTA data, the intensity profile is also derived for the deep J band image (green points). The vertical lines indicate the outer radii corresponding to the ellipse of the limiting surface brightnesses for the imaging data in each band. They are $R=600$ arcsec for the 2MASS images in both J and Ks bands (black vertical line in the bottom panel), and $R=1034$ arcsec and $R=830$ arcsec for the shallow VISTA data (top panel) in the J (blue vertical line) and Ks (red vertical line) bands, respectively, while for the deep J band image the limit is at $R=1305$ arcsec (green vertical line). The horizontal lines indicate the residual background counts: $\sim 0.08$ in the 2MASS images (black horizontal line) and, for the VISTA data, $\sim -0.08$ in the Ks image (red horizontal line) and $\sim 0.002$ and $\sim 0.06$ in the shallow and deep J images (blue and green horizontal lines).[]{data-label="sky"}](sky.ps){width="9cm"}
Morphology of NGC 253 in the NIR J and Ks bands {#morph}
================================================
[*Disk structure: bar, ring and spiral arms -*]{} The morphology of the Sculptor Galaxy NGC 253 changes dramatically from optical to near-infrared wavelengths[^6]. In the optical, the galaxy structure resembles that of an Sc spiral: the disk is very dusty and star formation regions dominate the spiral arms [see @Iod12]. In the J and Ks images of NGC 253, taken at VISTA, the most prominent features of the galaxy are [*i)*]{} the bright and almost round nucleus with a diameter of about 1 arcmin ($\sim$1 kpc); [*ii)*]{} the bar, with a typical peanut shape ending with very bright edges; [*iii)*]{} a ring-like structure, located in the main disk, enclosing the bar, and [*iv)*]{} the spiral arms, which start at the end of the bar, and dominate the disk. Such structures are already evident in the J-band image (Fig. \[tileJ\]) but become clearer in the Ks image (see Fig. \[tileKs\] and Fig. \[imaK\_dp\]), because of weaker dust absorption.
The multi-component structure of the NGC 253 disk is emphasized by the unsharp masked Ks-band image, shown in Fig. \[FM\]. It is obtained by using the [*FMEDIAN*]{} task in IRAF, with a smoothing box of $150 \times 150$ pixels, and taking the ratio of the Ks-band image to its [*FMEDIAN*]{} smoothed version. In particular, the ring-like structure at the end of the bar can be seen. It has a radius of about 180 arcsec ($\sim$3 kpc) and appears very bright in the SE and NW sides, close to the edges of the bar.
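The same unsharp-masking step can be reproduced outside IRAF; the sketch below uses scipy's median filter in place of FMEDIAN, with the 150-pixel smoothing box quoted above. It is a minimal illustration of the ratio-image technique, not the exact IRAF implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def unsharp_mask(image, box=150):
    """Ratio of the image to its median-smoothed version; values above 1
    highlight structures sharper than the smoothing box (ring, arms)."""
    smooth = median_filter(image, size=box)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(smooth > 0, image / smooth, np.nan)
    return ratio
```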
[*The nuclear region -* ]{} The zoomed view of the nuclear region of NGC 253 given in Fig. \[zoom\] reveals the presence of a nuclear ring of about 30 arcsec diameter ($\sim$0.5 kpc). This feature was already detected by @MS10 in the Ks data obtained with SINFONI at the VLT. Its morphology and extension are consistent with those derived from the Ks VISTA image, with the latter providing additional evidence for several luminous peaks distributed along this structure, which generate a clumpy azimuthal light distribution. Furthermore, as already found by @MS10, the Ks VISTA image confirms that the brightest peak is not coincident with the kinematic center, and that it is located to the SW, about 5.5 arcsec from the kinematic center (both are marked in Fig. \[zoom\]). This value was derived by computing image statistics on the Ks image (with the [*IMEXAM*]{} task in IRAF) inside the central 30 arcsec area.
{width="14cm"}
\[!ht\] {width="14cm"}
{width="14cm"}
{width="13cm"}
Surface Photometry {#phot}
===================
In this section we describe the surface photometry of the whole system; the two-dimensional model of the light distribution is presented in Sec. \[galfit\]. As already mentioned in Sec. \[intro\] and Sec. \[morph\], previous kinematical studies showed that the brightest location in the nuclear region does not coincide with the kinematic center of the galaxy. For the following analysis, we adopted as the center of the galaxy the kinematic center found by @MS10 at $\alpha = 00^h 47^m 33.17^s$ and $\delta = -25^{\circ} 17' 17.1''$ (shown in Fig. \[zoom\]), which is consistent with the center of the nuclear ring.
Fit of the isophotes {#ellfit}
---------------------
We used the [*ELLIPSE*]{} task in IRAF to perform the isophotal analysis of NGC 253. The average surface brightness profiles in J and Ks shallow data, as well as that derived from the deep J band image, are shown in Fig. \[fit\_log\].
[*Disk structure: bar, ring and spiral arms -*]{} According to the outer limits of the surface photometry in the VISTA images, derived in Sec. \[2mass\], the average surface brightness profiles extend out to 830 arcsec ($\sim$14 kpc) in the Ks band (see Fig. \[fit\_log\] top panel), and out to 1034 arcsec ($\sim$17 kpc) and 1305 arcsec ($\sim$22 kpc) in the J band from shallow and deep images, respectively (see Fig. \[fit\_log\] bottom panel).
The main features are: [*i)*]{} inner flat profiles for $R \le 12$ arcsec ($\sim$0.2 kpc); [*ii)*]{} the bulge plus bar plateau, $12 \le R \le 120$ arcsec ($0.2 \le R \le 2$ kpc); [*iii)*]{} the very extended exponential disk for $R\ge 120$ arcsec ($R \ge 2$ kpc) (see Fig. \[fit\_log\]).
Position Angle (P.A.) and ellipticity ($\epsilon = 1-b/a$, where $b/a$ is the axial ratio of the ellipses) profiles, in the Ks band where the dust absorption is minimal, are shown in the left panel of Fig. \[ellipse\]. The deviations of the isophotes from pure ellipses are estimated by the $a4$ and $b4$ coefficients that are related to the fourth harmonic term of the Fourier series. The $a4$ and $b4$ profiles as function of radius are shown in the right panel of Fig. \[ellipse\]. In the regions where the bar dominates the light ($12 \le R \le 120$ arcsec) the ellipticity increases from 0.4 to 0.7 at $R=87.5$ arcsec ($\sim$1.5 kpc), and decreases to about 0.55 at $R=150$ arcsec ($\sim$2.5 kpc).
In this range of radii, a large isophotal twisting is observed: the P.A. varies by about 10 degrees, from $\sim$60 to $\sim$70 degrees, with the maximum coincident with the peak in ellipticity at $R=87.5$ arcsec. At larger radii ($R \ge 150$ arcsec), in the disk region, the P.A. decreases to 52 degrees, and both the ellipticity and P.A. remain almost constant out to the outermost measured radii, at values of 0.8 and $\sim$52 degrees, respectively. This implies that the apparent axial ratio of the disk is $b/a \sim 0.2$ and the disk inclination is $74 \pm 3$ degrees. This value is consistent with the $72 \pm 2$ degrees derived by @P91 from the HI data of NGC 253.
The $a4$ and $b4$ profiles (see Fig. \[ellipse\], right panel) suggest that the isophotes in the inner regions of the bar ($20 \le R \le 50$ arcsec, $\sim 0.34 - 0.84$ kpc) have a boxy shape, with $b4 \sim -0.04$ and $a4 \sim 0.045$, while they become more disk-like at larger radii ($50 \le R \le 150$ arcsec, $\sim 0.84 - 2.5$ kpc), where $b4$ reaches a maximum of $0.06$ and $a4$ a minimum of about $-0.06$ ($70 \le R \le 90$ arcsec, $\sim 1.2 - 1.5$ kpc). Between 150 and 300 arcsec (i.e., $\sim 2.5 - 5$ kpc), where the bar connects with the spiral arms, the isophotes are once more boxy ($b4 \sim -0.04$). In the disk regions, for $R \ge 300$ arcsec ($\ge 5$ kpc), the fit of the isophotes is consistent with pure ellipses, i.e. $a4$ and $b4 \simeq 0$.
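The fourth-order coefficients come directly from the ELLIPSE fits. As a rough cross-check, one can measure the $\cos 4\theta$ and $\sin 4\theta$ deviations of the intensity sampled along a fixed ellipse, as in the numpy sketch below; note that the normalization used here (the mean intensity on the ellipse) is not identical to the IRAF definition of $a4$ and $b4$, so only the sign and relative behaviour are comparable.

```python
import numpy as np

def fourth_harmonics(img, x0, y0, sma, eps, pa_deg, n_theta=360):
    """Sample the image along one ellipse and measure the 4th-order
    Fourier deviations of the intensity (boxy: b4 < 0, disky: b4 > 0).
    The position angle is measured from the +x axis of the array."""
    pa = np.radians(pa_deg)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    # points on the ellipse, rotated into the image frame
    xe = sma * np.cos(theta)
    ye = sma * (1.0 - eps) * np.sin(theta)
    x = x0 + xe * np.cos(pa) - ye * np.sin(pa)
    y = y0 + xe * np.sin(pa) + ye * np.cos(pa)
    # nearest-pixel sampling keeps the sketch dependency-free
    vals = img[np.clip(np.rint(y).astype(int), 0, img.shape[0] - 1),
               np.clip(np.rint(x).astype(int), 0, img.shape[1] - 1)]
    # least-squares fit of I(theta) = I0 + a4 sin(4 theta) + b4 cos(4 theta)
    design = np.column_stack([np.ones_like(theta),
                              np.sin(4 * theta), np.cos(4 * theta)])
    i0, a4, b4 = np.linalg.lstsq(design, vals, rcond=None)[0]
    return a4 / i0, b4 / i0
```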
[*The nuclear region -* ]{} In Fig. \[ellipse\_zoom\] we show an enlargement of the central 50 arcsec region for the P.A. and $\epsilon$ profiles (left panels) and for the $a4$ and $b4$ profiles (right panels) derived from the fit of the isophotes. Inside $R \le 15$ arcsec ($\simeq$0.25 kpc), where the nuclear ring is observed (see Fig. \[zoom\]), the ellipticity reaches a maximum of about 0.6 at $R=10$ arcsec ($\sim$0.17 kpc), and subsequently decreases to about 0.4 at $R=20$ arcsec ($\sim$0.34 kpc). In this region, the P.A. increases from 45 to 58 degrees. This indicates that the nuclear ring is almost as flat as the bar, but its P.A. differs from the bar P.A. by about 16 degrees (as measured from their outer isophotes, i.e. $R=15$ arcsec for the nuclear ring and $R=150$ arcsec for the bar). Inside these regions, the isophotes are more boxy, with a maximum of $b4 \sim 0.03$ at $R \sim 15$ arcsec ($\sim$0.25 kpc) (see Fig. \[ellipse\_zoom\], right panels).
[*The outer disk -* ]{} In the region of the outer disk, both the J and Ks profiles show abrupt changes of slope for $R \ge 550$ arcsec ($\sim$9.2 kpc). As shown in the right panel of Fig. \[fit\_log\], the surface brightness profile has a sharp decline with respect to the “inner” regions of the disk, producing a [*down-bending Type II profile*]{}, according to the classification of light profiles in disk galaxies [see e.g. @PhT06; @Erw08]. This feature can reasonably be ascribed to the effect of disk truncation, as will be discussed in Sec. \[disk\].
In order to derive the break radius $R_{br}$ of the disk in NGC 253, we performed a least-square fit of the azimuthally averaged surface brightness profiles, restricted to the region where the contribution of the bulge plus bar to the light is negligible, i.e. for $R\ge 120$ arcsec ($\ge$2 kpc) (see Fig. \[fit\_log\], right panel). Two exponential functions are used to describe the [*inner*]{} and [*outer*]{} disk: $$\mu^{in,out}(R)= \mu^{in,out}_{0} + 1.086 \times R/r^{in,out}_{h}$$ where $R$ is the galactocentric distance, and $\mu^{in,out}_{0}$ and $r^{in,out}_{h}$ are the central surface brightness and scale length of each of the two exponential components. The best-fit structural parameters in the Ks band are summarized in Table \[tabgalfit\]. For the J band profiles, we found $\mu^{in}_0=16.67 \pm 0.05$ mag/arcsec$^2$ and $r^{in}_h = 193 \pm 5$ arcsec ($\sim$3.2 kpc) for the inner disk, and $\mu^{out}_0=16.00 \pm 0.07$ mag/arcsec$^2$ and $r^{out}_h = 159 \pm 1$ arcsec ($\sim$2.7 kpc) for the outer disk.
The inner disk scale length is larger than the outer one, the ratio being $r^{in}_h/r^{out}_h = 1.87 \pm 0.02$ in the Ks band and $r^{in}_h/r^{out}_h = 1.22 \pm 0.03$ in the J band. This is also observed for other galaxies with Type II profiles [@Kim2014]. The break radius $R_{br}$ corresponds to the radius at which the magnitudes of the two exponential laws are the same, within $2\sigma$. The best fits are shown in the right panel of Fig. \[fit\_log\], and the values of $R_{br}$ are: $R^J_{br} = 553.20 \pm 0.01$ arcsec and $R^{Ks}_{br} = 554.05 \pm 0.01$ arcsec (at $\sim9.3$ kpc).
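The break radius follows from equating the two exponential laws; the sketch below fits each radial range with scipy and evaluates the crossing point analytically. The J-band parameters quoted above are used purely as an illustration: they give $\sim$557 arcsec, close to the quoted 553 arcsec, the small difference presumably reflecting the exact fitting windows, weights, and the $2\sigma$ criterion used in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_disk(R, mu0, rh):
    """Exponential disk in magnitude units: mu(R) = mu0 + 1.086 R / rh."""
    return mu0 + 1.086 * R / rh

def fit_disk(R, mu, rmin, rmax):
    """Fit one exponential law over a restricted radial range."""
    sel = (R >= rmin) & (R <= rmax)
    popt, _ = curve_fit(exp_disk, R[sel], mu[sel], p0=(16.0, 150.0))
    return popt  # (mu0, rh)

def break_radius(mu0_in, rh_in, mu0_out, rh_out):
    """Radius where the inner and outer exponential laws give the same
    surface brightness."""
    return (mu0_out - mu0_in) / (1.086 * (1.0 / rh_in - 1.0 / rh_out))

# Illustration with the J-band parameters quoted in the text:
print(break_radius(16.67, 193.0, 16.00, 159.0))  # ~557 arcsec
```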
{width="9cm"} {width="9cm"}
{width="9cm"} {width="9cm"}
{width="9cm"} {width="9cm"}
Deprojected image of NGC 253 {#ellfit_dp}
-----------------------------
We computed the deprojected image of NGC 253 in both the J and Ks bands to measure the bar structure; these measurements are discussed in detail in Sec. \[bar\]. According to Gadotti et al. (2007), an efficient way to deproject an image is to stretch it along the direction perpendicular to the line of nodes, using the IRAF task IMLINTRAN with the constraint that flux is conserved. For NGC 253, we adopted the P.A. of the outer disk (i.e. P.A.= 52 degrees) as the P.A. of the line of nodes and an inclination angle of $i=74$ degrees (see Sec. \[ellfit\]). The deprojected image of NGC 253 in the Ks-band is shown in Fig. \[imaK\_dp\]. It is very similar to that derived by @Dav10.
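A minimal version of this deprojection can be written with scipy, as sketched below: the frame is rotated so that the line of nodes lies along the image x axis, the apparent minor axis is stretched by $1/\cos i$, and the intensities are rescaled so that the total flux is conserved. The sign of the rotation angle depends on how the image is oriented on the sky, so it is an assumption here rather than a prescription.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def deproject(img, pa_deg, incl_deg):
    """Deproject a disk image: align the line of nodes (disk major axis)
    with the x axis, then stretch the apparent minor axis by 1/cos(i).
    Dividing by the stretch factor keeps the total flux unchanged.
    The rotation sign may need to be flipped for real data."""
    stretch = 1.0 / np.cos(np.radians(incl_deg))
    aligned = rotate(img, angle=pa_deg - 90.0, reshape=True, order=1)
    stretched = zoom(aligned, zoom=(stretch, 1.0), order=1)
    return stretched / stretch
```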
In order to derive the deprojected ellipticity for the bar, an important parameter to constrain the bar strength and bar length (see Sec.\[bar\] for a detailed discussion), we applied the isophotal analysis to the deprojected image in the Ks-band. The deprojected ellipticity radial profile is shown in Fig. \[ellipse\] (middle-left panel, red points). In the bar region, the pattern is very similar to that of the ellipticity radial profile for the observed image, although, as expected, the ellipticity value and the position of the peak change. The deprojected bar appears longer and less eccentric. The ellipticity reaches its maximum of about 0.65 at larger distances from the center, i.e. R=160 arcsec ($\sim$2.7 kpc). As expected, the outer disk becomes more circular once deprojected.
![Deprojected image of NGC 253 in Ks band. The image size is $39.5 \times 26.6$ arcmin. The dashed green line indicates the outer isophote, which is almost circular.[]{data-label="imaK_dp"}](imaKdp_tile_SZ.eps){width="9cm"}
Light and Dust distribution {#color}
---------------------------
To describe the light distribution of the major structural components of NGC 253, we extracted the one-dimensional (1D) light profiles along the disk major axis (P.A. = 52 degrees) and along the bar major axis (P.A. = 72 degrees) in the J and Ks-band shallow images. These are derived by averaging the counts in a wedge with an opening angle of 5 degrees that extends out to the outer limiting radius derived in Sec. \[2mass\]. Furthermore, we produced the 2D J-Ks color map and extracted the J-Ks color profiles along P.A. = 52 and 72 degrees. Although the light profiles in the J band are much more affected by dust, the same components are detected in both the J and Ks-band images.
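The wedge-averaged profiles can be extracted with a few lines of numpy, as in the sketch below. It assumes position angles measured from the image +x axis (translating the on-sky P.A. into this frame depends on the image orientation) and returns one side of the profile, so the NE and SW sides correspond to calls with P.A. and P.A.+180 degrees.

```python
import numpy as np

def wedge_profile(img, x0, y0, pa_deg, half_open_deg=2.5, rmax=None, step=2.0):
    """Average counts in a wedge of given half opening angle centred on a
    position angle, as a function of radius (in pixels)."""
    y, x = np.indices(img.shape)
    dx, dy = x - x0, y - y0
    r = np.hypot(dx, dy)
    ang = np.degrees(np.arctan2(dy, dx))
    dang = (ang - pa_deg + 180.0) % 360.0 - 180.0   # wrapped angular offset
    in_wedge = np.abs(dang) <= half_open_deg
    rmax = rmax if rmax is not None else r[in_wedge].max()
    edges = np.arange(0.0, rmax, step)
    prof = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = in_wedge & (r >= lo) & (r < hi)
        prof.append(np.nanmean(img[sel]) if sel.any() else np.nan)
    return 0.5 * (edges[:-1] + edges[1:]), np.array(prof)
```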
[*Disk structure: bar, ring and spiral arms -*]{} The J and Ks surface brightness profiles along the disk major axis and along the bar major axis (Fig. \[profK\], left and right panel, respectively) are characterized by a bright and rather concentrated bulge. Superimposed on the smooth light distribution, several peaks are observed. On the NE side, two peaks are detected for $100 \le R \le 200$ arcsec ($1.7 \lesssim R \lesssim 3.4$ kpc); on the SW side another peak is located at $R \sim 158$ arcsec ($\sim$2.6 kpc). They are related to the ring-like component enclosing the bar (see Sec. \[morph\] and Fig. \[tileKs\]). At larger radii in the disk regions, as already detected in the average surface brightness profile derived from the isophote fit (see Sec. \[ellfit\]), the surface brightness has the typical behavior of a down-bending Type-II profile, observed for disk galaxies [@PhT06; @Erw08], with a break radius of $R_{br} \simeq 553$ arcsec ($\sim 9.3$ kpc).
The signature of the bar in the light profiles along $P.A. = 72$ degrees (Fig. \[profK\] right panel) appears on both sides, between $60 \le R \le 120$ arcsec ($\sim 1 - 2$ kpc), as a plateau followed by a steep decline compared to the underlying exponential disk [@Ph00]. This is most evident in the Ks-band. The distance from the galaxy center at which the turn-over in the profile slope occurs (about 118 arcsec, or $\sim$2 kpc) can be considered a rough estimate of the projected bar length [@Lut00]. An estimate of the bulge length $R_{B}$ is given by the region where the light rises above the contributions of the bar and the exponential disk; we derived $R_{B} \sim 60$ arcsec ($\sim$1 kpc).
Fig. \[profJ\_K\] shows the J-Ks color profiles along the major axes of the disk (left panel) and bar (right panel). On average, both profiles show that: [*i)*]{} at small radii, close to the center, the galaxy is redder ($J-Ks \sim 2.2$ mag along the major axis) with respect to the outer regions; [*ii)*]{} the color profiles decrease rapidly and reach a value of $J-Ks \sim 1$ mag at $R = 50$ arcsec ($\sim$0.84 kpc) as we move away from the center; and [*iii)*]{} the color gradient becomes almost linear, with $J-Ks$ decreasing from $\sim$1 to $\sim$0.5 mag toward the outermost radii. On both the E and W sides along the disk major axis, we associate the peaks in the surface brightness profiles of Fig. \[profK\] (left panel) in the range $100 \le R \le 300$ arcsec ($\sim 1.7 - 5$ kpc) with the ring and spiral arms. Projected on the disk major axis (see Fig. \[profJ\_K\] - left panel), we see that these features are redder than the underlying disk, with $J-Ks \sim$1.2 mag for the ring and $J-Ks \sim 0.8$ mag for the spiral arms. At larger radii, in particular for $R \ge R_{br}$, the disk tends to be bluer, as $J-Ks$ varies from 0.9 to about 0.7 mag on the NE side, and from 0.7 to about 0.5 mag on the SW side. Between $50 \le R \le 120$ arcsec ($\sim 0.84 - 2$ kpc), the color profile along the bar major axis is on average redder ($1.0 \le J-Ks \le 1.3$ mag) than the disk. We also report the presence of two redder ($J-Ks \sim 1.2$ mag) peaks, which correspond to the bright edges of the bar (see Sec. \[bar\] for a detailed discussion).
The two-dimensional J-Ks color distribution is shown in Fig. \[colormap\]. The reddest regions, i.e. $(J-Ks) \ge 1.5$ mag, correspond to the NW arm, the edges of the bar along the EW direction, and the nuclear regions. From the J-Ks color map, one can infer that the dust is confined to the spiral arms and the ring in NGC 253. Dust lanes are curved with their concave side toward the bar major axis and they curl around the nucleus at small radii.
[*The nuclear region -* ]{} Fig. \[profK\_zoom\] shows an enlarged view of both light and color profiles within the nuclear region at $R\le 50$ arcsec ($\le$0.84 kpc). Inside 15 arcsec ($\le$0.25 kpc), the light profiles along the disk and bar major axes are very irregular and asymmetric with respect to the center[^7], and reflect the peculiar nuclear structure of NGC 253. Several small peaks of light are observed and for $R \le 10 $ arcsec, the surface brightness remains almost constant at about $\mu_J \sim 15.2$ mag arcsec$^{-2}$ and $\mu_{Ks} \sim 13$ mag arcsec$^{-2}$ (see Fig. \[profK\_zoom\], left panel). Along the bar major axis, the light profiles are much more peaked and reach a maximum value of $\mu_{Ks} \sim 13.8$ mag arcsec$^{-2}$. Several compact sources, brighter than the underlying diffuse light distribution in the nuclear ring (see Sec. \[morph\] and Fig. \[zoom\]), can be identified as being responsible for the observed peaks of the surface brightness profiles. The flat regions can be due to dust absorption that is still quite high even in the Ks-band. This is suggested by the J-Ks color profile which increases by more than one magnitude within 20 arcsec from the galaxy center (see Fig. \[profK\_zoom\], right panel). Inside $R \le 5$ arcsec ($\le$0.08 kpc), the J-Ks color profile along the bar major axis shows a rapid decrease toward bluer colors, where the minimum is $(J-Ks) \sim 0.2$ mag. This corresponds to a “blue hole” in the J-Ks color map (see the bottom panel of Fig. \[colormap\]). The color map also shows redder filamentary structures, with $(J-Ks) \sim 1$ mag, that are most evident in the SE regions of the galaxy, starting from the galaxy nucleus and extending in the orthogonal direction. Such “polar” filaments were observed by @SW92, and recently confirmed by ALMA observations [@ALMA]; they could be made of dust associated with the outflow material from the burst of newly formed stars. Similar features are observed in the edge-on spiral galaxy NGC 891 [see @Wh09 and references therein].
{width="9cm"} {width="9cm"}
{width="9cm"} {width="9cm"}
{width="16cm"} {width="16cm"}
{width="9cm"} {width="9cm"}
Two dimensional model of the light distribution in the Ks band {#galfit}
==============================================================
The current morphological analysis suggests that NGC 253 is dominated by three main structures at different distances from the galaxy center: the bright nucleus, the bar and the extended disk. In addition to the smooth light distribution of these main components we observe the following substructures (see Sec. \[morph\] and Sec. \[phot\], and Fig. \[tileKs\], Fig. \[FM\] and Fig. \[zoom\]): a nuclear ring, a ring at the end of the bar and spiral arms. In order to measure the structural parameters of the main galaxy components and quantify the observed substructures, in terms of their total luminosity and radial extensions, we adopted the following approach in the Ks band, which is less affected by dust: [*i)*]{} we performed a 1D least-squares fit of the light profiles along the disk and bar major axes, with the substructures masked; [*ii)*]{} we produced a “maximum” symmetric two-dimensional (2D) model by assuming that the galaxy light distribution arises solely from the above three components (i.e. bulge, bar and disk); and [*iii)*]{} we subtracted the 2D model from the original image to create a residual image containing only the substructures. We then measured the luminosity and extension of the substructures on this residual image.
The light distribution of the bulge and bar in NGC 253 is modeled by a Sersic law [@Sersic; @Kim2014] $$\mu (R) = \mu_e + k(n) \left[ \left( \frac{R}{r_e}\right)^{1/n} -1\right]$$ where $R$ is the galactocentric distance, $r_e$ and $\mu_e$ are the effective radius and effective surface brightness, and $k(n)=2.17 n - 0.355$. For the disk, as described in Sec. \[ellfit\], we adopted a double exponential law to describe the down-bending profile of this component (see also Fig. \[fit\_log\]). The maximum symmetric 2D model is made by using the GALFIT package [@Peng02], where the scale radii ($r_e$ and $r_h$) and the shape parameters $n$ are fixed to those values of the 1D fit, while the total magnitudes, axial ratios and P.A.s are left free. A summary of the structural parameters for each component is listed in Table \[tabgalfit\] and the results are shown in Fig. \[2dmod\]. In the 2D residual image (bottom-left panel of Fig. \[2dmod\]) all the substructures characterising NGC 253 are clearly visible as regions where the galaxy is brighter than the model: the nuclear ring, the bright regions at the end of the bar from which the two prominent spiral arms emerge, and the ring which encloses the bar.
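In one dimension the adopted parametrization can be written down compactly; the sketch below evaluates the Sersic law with the linear $k(n)$ approximation quoted above and the down-bending double-exponential disk, and sums the components in flux. It is only a 1D illustration of the parametrization: the actual 2D model was built with GALFIT.

```python
import numpy as np

def sersic_mag(R, mu_e, r_e, n):
    """Sersic profile in mag/arcsec^2 with the linear approximation
    k(n) = 2.17 n - 0.355 used in the text."""
    k = 2.17 * n - 0.355
    return mu_e + k * ((R / r_e) ** (1.0 / n) - 1.0)

def double_exp_mag(R, mu0_in, rh_in, mu0_out, rh_out, r_break):
    """Down-bending disk: inner exponential inside the break radius,
    outer exponential beyond it."""
    inner = mu0_in + 1.086 * R / rh_in
    outer = mu0_out + 1.086 * R / rh_out
    return np.where(R < r_break, inner, outer)

def total_mag(bulge, bar, disk):
    """Sum component surface brightnesses in flux and convert back to
    magnitudes; each argument is an array in mag/arcsec^2."""
    flux = sum(10.0 ** (-0.4 * comp) for comp in (bulge, bar, disk))
    return -2.5 * np.log10(flux)
```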
The difference between the observed and fitted light profiles, along the disk major axis and along the bar, is shown in the right panels of Fig. \[2dmod\]. Along the disk major axis (bottom panel), the largest deviations ($\Delta \mu \ge 1$ mag) are in the regions corresponding to the ring at $150\le R \le 200$ arcsec ($\sim$3 kpc), where the bright bumps are $\sim1-2$ mag brighter than the underlying average light distribution. For $300 \le R \le 700$ arcsec ($\sim 5 - 11.8$ kpc), the extended bumps, $\sim 1.5 - 2.5$ mag brighter than the double-exponential fit, are due to the spiral arms. The bulge is less prominent with respect to the other components in NGC 253, since the Bulge-to-Total ratio $B/T = 0.043$.
Along the bar’s major axis (Fig. \[2dmod\], upper panel) the fit is very good (better than 0.2 mag), except for the nuclear regions $R \le 20$ arcsec ($\sim$0.34 kpc), where the peaks in the galaxy brighter than the model are associated with the nuclear ring (see Sec. \[morph\] and Fig. \[zoom\]), and for $100 \le R \le 180$ arcsec ($\sim$2.3 kpc) where the largest deviations of about 2 mag occur close to the edges of the bar.
------------ -------- ----------- ------------------- ------------------- --------------- ------- ---------------- ------- -----------------
Component Model $m_{tot}$ $\mu_{e}$ $\mu_{0}$ $r_e$ $r_e$ $r_h$ $r_h$ n
mag mag arcsec$^{-2}$ mag arcsec$^{-2}$ arcsec kpc arcsec kpc
Bulge Sersic 7.31 $15.49 \pm 0.05$ $9.1 \pm 0.5$ 0.15 $0.76 \pm 0.01$
Bar Sersic 6.45 $19.38 \pm 0.05$ $81 \pm 1$ 1.4 $0.30 \pm 0.01$
Inner Disk exp 4.45 $15.58 \pm 0.05$ $173 \pm 4$ 2.9
Outer Disk exp 5.19 $12.56 \pm 0.07$ $92.6 \pm 0.5$ 1.5
------------ -------- ----------- ------------------- ------------------- --------------- ------- ---------------- ------- -----------------
{width="9cm"} {width="9cm"}
Results: the structure of NGC 253 {#result}
=================================
The study of the bar structure is a key issue in addressing the evolution of self-gravitating disk galaxies. Bars play a major role in shaping the present properties of disks, since they are responsible for the redistribution of angular momentum and matter within the disk. The response of a disk to a bar can result in the formation of pseudo-bulges, of a 3D boxy bar, and of substructures such as spiral arms and rings [see the review by @At12]. In this framework, we discuss the case of NGC 253 and explore the connections between the observed substructures in the disk and the orbital resonances predicted by the disk response to the perturbation by the bar.
The bar structure in NGC 253 {#bar}
----------------------------
The analysis of the light distribution in the shallow J and Ks band images of NGC 253 (in Sec. \[morph\] and \[phot\]) indicates the presence of an extended bar with very bright edges that connect to the outer spiral arms (see Fig. \[tileKs\]). The isophotal analysis (see Fig. \[ellipse\], right panel) suggests that the bar is boxy in the inner region and tends to be disky at larger radii, giving a typical peanut-shaped end. On average, the bar is redder than the disk with $1.0 \le J-Ks \le 1.3$ mag (see Fig. \[profJ\_K\]).
In this section, we give more accurate estimates of the length of the bar and of its strength using the isophote fits and the light and color distributions. Furthermore, we discuss whether the bright features occurring at the edges of the bar are [*ansae*]{}, i.e. the typical symmetric enhancements observed at the ends of stellar bars in many barred galaxies.
[*The estimate of the length of the bar -*]{} The previous estimate of the bar length, about 150 arcsec, was obtained by @Forbes92 from H-band images, by measuring the bar extension on the image from the center out to the luminous edge. Taking advantage of the very low dust absorption in the Ks image, we can derive a new and accurate estimate of the deprojected length of the bar ($l_b$).
According to the method by Gadotti et al. (2007), the observed bar length ($l_{obs}$) is 1.2 times the radius at which the ellipticity reaches its maximum. Given that the ellipticity peaks at $R=87.5$ arcsec ($\sim$1.5 kpc, see Fig. \[ellipse\]), we estimate $l_{obs} = 104$ arcsec ($\sim$1.7 kpc). This value is very similar to the preliminary estimate of the projected bar length derived from the light profiles in Sec. \[color\] (see also Fig. \[profK\]). The deprojected bar length $l_b$ is related to the observed bar length by $l_{obs}=l_b\sqrt{\cos^2\phi + \sin^2\phi\,\cos^2 i}$, where $\phi$ is the angle between the bar and the disk major axis, and $i$ is the inclination angle. For a 2D bar, given the angle projected on the sky ($\phi'$), one has $\tan\phi' = \tan\phi \, \cos i$. From the isophote fitting, we estimate $i=74^\circ$ and $\phi' = 17.2^\circ$, which give $\phi = 48.3^\circ$ and $l_b=1.44 \times l_{obs}=151 \pm 12$ arcsec, i.e. $\sim$2.5 kpc. The uncertainty on this quantity takes into account the errors on the ellipticity and P.A. from the fit of the isophotes, which are 0.002 and 4 degrees, respectively.
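The numbers above follow from the two relations for a flat bar; a short sketch making the arithmetic explicit (the function name is illustrative):

```python
import numpy as np

def deprojected_bar_length(l_obs, phi_sky_deg, incl_deg):
    """Deproject the bar length assuming a flat (2D) bar.
    phi_sky_deg is the bar-to-disk-major-axis angle projected on the sky;
    the intrinsic angle follows from tan(phi) = tan(phi') / cos(i)."""
    i = np.radians(incl_deg)
    phi = np.arctan(np.tan(np.radians(phi_sky_deg)) / np.cos(i))
    factor = np.sqrt(np.cos(phi) ** 2 + np.sin(phi) ** 2 * np.cos(i) ** 2)
    return l_obs / factor, np.degrees(phi)

# Values quoted in the text: l_obs = 104 arcsec, phi' = 17.2 deg, i = 74 deg
l_b, phi = deprojected_bar_length(104.0, 17.2, 74.0)
print(f"{l_b:.0f} arcsec, phi = {phi:.1f} deg")
# ~149 arcsec and ~48.3 deg, consistent with l_b = 151 +/- 12 arcsec above
```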
[*The bar strength -*]{} Theoretical predictions [@At92a; @At92b] suggest that the degree of dust-lane curvature in barred galaxies is inversely proportional to the bar strength, i.e. dust lanes with greater curvature are found in the weaker bars. This theoretical expectation was confirmed by observations [@Com09; @Kn02]. For a sample of barred galaxies with a known value of the bar’s strength parameter $Q_b$, Knapen et al. (2002) quantified the degree of the dust-lane curvature by the ratio $\Delta \alpha$ (in units of degrees per kpc), with higher values of $\Delta \alpha$ corresponding to dust lanes with higher curvature. The diagram of $Q_b$ versus $\Delta \alpha$ [see Fig. 11 of @Kn02] shows a clear trend, where higher values of $\Delta \alpha$ correspond to lower values of $Q_b$, i.e. weaker bars. Following the technique described by @Kn02, we estimated the curvature of the dust lane in NGC 253. From the deprojected J-Ks color map (shown in Fig. \[colormap\]), we measured the change in the angle of the tangent to the dust lane. To do this, we chose two locations: one where the tangent is almost parallel to the bar’s major axis and a second one where the dust lane curves toward the end of the bar. The locations where the dust lanes connect to the nuclear regions are excluded, since the curvature changes too abruptly there. For NGC 253 we derived $\Delta \alpha \sim 25$ degrees per kpc. When compared with the barred galaxies in the sample studied by Knapen et al. (2002), the measured value is in the typical range for weak bars, which are characterized by the strength parameter $Q_b \le 0.2$ [see Fig. 11 of @Kn02].
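The curvature indicator is simply the change of the tangent angle per unit length along the deprojected dust lane; the helper below formalizes that ratio, with the two positions and tangent angles read off the colour map by eye, as done in the text (the names and the kpc-per-arcsec scale are illustrative).

```python
import numpy as np

def dust_lane_curvature(p1, p2, tangent1_deg, tangent2_deg, kpc_per_arcsec):
    """Curvature indicator used by Knapen et al. (2002): change of the
    tangent angle between two points on the deprojected dust lane,
    divided by their separation in kpc.  p1 and p2 are (x, y) positions
    in arcsec; tangent angles are measured in the same frame."""
    dtheta = abs(tangent2_deg - tangent1_deg)
    ds = np.hypot(p2[0] - p1[0], p2[1] - p1[1]) * kpc_per_arcsec
    return dtheta / ds  # degrees per kpc
```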
[*The edges of the bar -*]{} The J and Ks images of NGC 253 show round-like and luminous blobs at the ends of the bar (see Fig. \[tileKs\]). These are also evident as high-frequency substructures in the unsharp masked Ks image (Fig. \[FM\]). When fitting the surface brightness profiles in the Ks-band along the bar major axis (P.A.=72 degrees), we find that the peak flux at the edges of the bar is symmetrically located at $R \sim
90$ arcsec ($\sim$1.5 kpc) and is 0.5 mag brighter than the average bar surface brightness (see Fig. \[2dmod\]). The morphology and light distribution of these luminous blobs resemble those typically observed for the ansae in other galaxies [see Fig. 1 and Fig. 2 in @MV07].
These regions ($(J-Ks) \sim 1.2$ mag) are redder than the average J-Ks color along the bar major axis (see Fig. \[profJ\_K\]): this observational evidence, together with the weakness of the bar in NGC 253, is not consistent with the [*ansae*]{} interpretation, since ansae do not show any color enhancement and appear mostly in strong bars [@MV07]. An alternative explanation comes from the $H\alpha$ map of NGC 253 [see Fig. 2 of @Hoo96], which supports the identification of these bright regions with areas of star formation. The redder colors are then well explained by dust absorption.
Rings in NGC 253: the role of the orbital resonances {#bar_kin}
----------------------------------------------------
The high angular resolution and the large field of view of the VISTA images, together with the lower dust absorption in the J and Ks bands, provide robust evidence for the existence of two rings within the disk of NGC 253. There is a nuclear ring with a radius of about 15 arcsec ($\sim$0.2 kpc); it is not uniform, with several bright knots, mostly concentrated on the NE side (see Fig. \[zoom\]). The brightest peak of the whole galaxy light in the Ks band is located on the SW side of this structure, $\sim$5.5 arcsec ($\sim$0.09 kpc) from the kinematic center of the galaxy (see Sec. \[morph\]). An offset between the kinematic and photometric centers has also been observed in other barred galaxies hosting a nuclear ring [@Maz11], and it varies in the range $0.01 - 0.2$ kpc. Such an offset is used as a test of the shape of the potential, since its magnitude is an indication of non-circular motions [@Franx94]. Thus, in the case of NGC 253, a dynamical model is needed to test whether such a difference is due to a strong deviation from an axisymmetric potential.
A second ring, enclosing the bar, is located in the main disk (see Fig. \[FM\]), in the range $158 \le R \le 183$ arcsec ($2.6 \lesssim R \lesssim 3.1$ kpc). This component hosts active star formation, as suggested by the $H\alpha$ map published by Hoopes et al. (1996; see Fig. 2 of their paper).
We now discuss the origin of these two rings in turn. Rings within the central kiloparsec are frequently observed in normal disks and barred galaxies [@BC93; @BC96; @Kn05; @Com10] and are most likely associated with resonance orbits. In barred galaxies, the bar plays a crucial role in the redistribution of the angular momentum. The gas is pushed into orbits near dynamical resonances by the bar’s torque and the star formation is triggered by the high gas density in these regions [see @At12 for a review]. For barred spiral galaxies with a bar pattern speed $\Omega_b$ and an angular velocity $\Omega(R)$, there are two basic resonance regions at different distances from the galaxy center [@Lid74]. These are the Inner Lindblad Resonance (ILR), where $\Omega (R_{ILR}) - K/2 = \Omega_b$, with $K$ the epicyclic frequency, and the Outer Lindblad Resonance (OLR), where $\Omega (R_{OLR}) + K/2 = \Omega_b$. Rings can form in the vicinity of the ILRs and at the Ultra-Harmonic Resonance (UHR), where $\Omega (R_{UHR}) - K/4 = \Omega_b$ [@BC96]. The radius where the UHR occurs lies between the ILR and the corotation radius, for which $\Omega (R_{CR}) = \Omega_b$.
In NGC 253 we can estimate the radii at which the resonances occur by using the rotation curve along the disk major axis measured by @AM95, and verify whether the nuclear ring and the second ring correspond to any of them. A first attempt was made by @AM95 who, based on the earlier and rather uncertain bar length [@Forbes92], derived a corotation radius $R\sim 4.5$ kpc and $\Omega_b = 48$ km/s/kpc, and thus predicted an ILR at about 1 kpc and an OLR outside the optical disk. In Sec. \[bar\] we measured the intrinsic bar length to an accuracy of $\sim 10\%$, obtaining $l_b=151 \pm 12$ arcsec $\simeq 2.5$ kpc. From $l_b$, the corotation radius is $R_{CR}= 1.2\times l_b = 181$ arcsec ($\sim$3 kpc) [@CG89]. Unlike the measured bar length, this relation carries a much larger uncertainty ($\sim$20$\%$); including also the error on $l_b$, $R_{CR}$ has a total uncertainty of about 50 arcsec ($\sim0.8$ kpc). The measured circular velocity at $R_{CR}$ is $V_{circ} = 183.8$ km/s [@AM95], hence the bar pattern speed is $\Omega_b =V_{circ}(R_{CR})/ R_{CR} \simeq 61.3$ km/s/kpc. In Fig. \[omega\] we plot the angular velocity $\Omega (R)$ in the disk of NGC 253 and the curves $\Omega(R) -K(R)/2 $, $\Omega(R) + K(R)/2$ and $\Omega(R) -K(R)/4 $. Estimates of the corresponding radii for the ILR and OLR are given by the loci where $\Omega_b$ intersects the $\Omega(R) -K(R)/2 $ and $\Omega(R) + K(R)/2$ curves, respectively. We find that the ILR falls at $0.3 \le R \le 0.4 $ kpc and the OLR at $R \sim 4.9$ kpc. From the $\Omega(R) -K(R)/4 $ curve we estimate the location of the UHR at $1 \le R \le 1.5$ kpc. Taking into account the error on $R_{CR}$, $\Omega_b$ could vary between the lower and upper limits $\Omega_b^{MIN} =44$ km/s/kpc and $\Omega_b^{MAX} = 76$ km/s/kpc. This implies that the uncertainty on the ILR is about 0.1 kpc. The OLR could fall at $\sim$4 kpc for $\Omega_b = \Omega_b^{MAX}$ and at $\sim$7 kpc for $\Omega_b = \Omega_b^{MIN}$. Finally, the UHR could be in the range $0.6 \le R \le 0.9$ kpc for $\Omega_b = \Omega_b^{MAX}$ and at $\sim$2.4 kpc for $\Omega_b = \Omega_b^{MIN}$. The estimated radius of the ILR is comparable with that of the nuclear ring, within the errors.
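Given a tabulated rotation curve, the resonance radii can be located numerically as sketched below: $\Omega = V_c/R$, $K^2 = R\,d\Omega^2/dR + 4\Omega^2$, and each resonance is the radius where the relevant combination crosses $\Omega_b$. The rotation curve itself would have to be taken from @AM95; the arrays in the example are placeholders, not the published data.

```python
import numpy as np

def resonance_radii(R, Vc, omega_b):
    """Locate Lindblad resonances from a rotation curve.
    R in kpc, Vc in km/s, omega_b in km/s/kpc."""
    omega = Vc / R
    kappa = np.sqrt(R * np.gradient(omega ** 2, R) + 4.0 * omega ** 2)
    curves = {"ILR": omega - kappa / 2.0,
              "UHR": omega - kappa / 4.0,
              "CR":  omega,
              "OLR": omega + kappa / 2.0}
    radii = {}
    for name, curve in curves.items():
        f = curve - omega_b
        hits = []
        # look for sign changes and interpolate linearly between samples
        for i in np.where(np.sign(f[:-1]) * np.sign(f[1:]) < 0)[0]:
            hits.append(R[i] - f[i] * (R[i + 1] - R[i]) / (f[i + 1] - f[i]))
        radii[name] = hits
    return radii

# Example with a flat 184 km/s rotation curve (placeholder, not the AM95 data):
R = np.linspace(0.2, 10.0, 500)
print(resonance_radii(R, np.full_like(R, 184.0), omega_b=61.3))
```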
The OLR also lies inside the optical disk, which extends out to 14 kpc (see Sec. \[phot\] and Fig. \[tileKs\]). The OLR at $R\sim4.8$ arcmin ($\sim$4.9 kpc) is very close to the peak of the HI surface density observed in the radial range $2.5 \lesssim R \lesssim 4.5$ arcmin ($\sim 2.5 - 4.5$ kpc) [@P91]. Both the dynamics of spiral patterns and simulations of bar perturbations predict that gas piles up at the OLR [see e.g. @B96], and this prediction has been confirmed by observations of other spiral galaxies [@CA01]. This observational evidence further supports the reliability of the OLR estimate from the bar pattern speed $\Omega_b = 61.3$ km/s/kpc.
We now evaluate whether the ring at $\sim$2.9 kpc, enclosing the bar, may also originate from orbital resonances. The ring is located very close to the corotation radius. However, we do not expect a ring to form at corotation, as stellar orbits do not accumulate at this radius [@Lid74]. One might also consider this second ring as originating at the UHR; however, the latter occurs at a smaller radius ($1 \le R \le 1.5$ kpc), and even the large uncertainties on $R_{CR}$, and thus on $\Omega_b$, allow a maximum UHR radius of only 2.4 kpc. Thus, bar perturbation theory seems to exclude a resonance origin for this ring. An alternative origin for this ring may be a merger event or a transient structure formed during an intermediate stage of bar formation [see @Com2013 and references therein]. Simulations by @Atha97 show that the impact of a small satellite on a barred spiral galaxy can generate a pseudo-ring that encloses the bar. A merging event has already been considered as a possible explanation for the extra-planar distribution of HI gas in NGC 253 [@Boo05] and its connection to the $H\alpha$ and X-ray emission (see Sec. \[intro\]). The resolved stellar population studies of the deep VISTA data for NGC 253 show a disturbed disk with extra-planar stars and substructures in the inner halo, also supporting a possible merger event [@Greggio and references therein].
Alternatively, the bar in NGC 253 could still be forming, and we would be looking at an intermediate phase in the evolution of the bar. The N-body simulations that study the role of gas accretion on bar formation and renewal can also account for the formation of rings surrounding the bar [@BC02]. These are only speculative suggestions that need to be tested by detailed dynamical models of bar and disk evolution.
![Angular velocity curve for the disk of NGC 253 from @AM95: $\Omega(R)$ (solid black curve) and the curves $\Omega(R) -K(R)/2$ (short-dashed red line), $\Omega(R) + K(R)/2$ (long-dashed blue line) and $\Omega(R) -K(R)/4$ (long-dashed-dotted magenta line). The horizontal line corresponds to the value of the bar pattern speed $\Omega_b \simeq 61.3$ km/s/kpc. The two vertical segments correspond to the ILR (at $R \sim 0.33$ kpc) and OLR (at $R \sim 4.9$ kpc) radii. The two dotted vertical segments bracket the range of radii where the UHR lies (around $R \sim 1.3$ kpc). The two vertical arrows indicate the lower and upper limits for $\Omega_b$.[]{data-label="omega"}](omega_new.ps){width="9cm"}
The steep outer disk profile in NGC 253 {#disk}
---------------------------------------
The most extended component in the shallow images of NGC 253 is the main disk that dominates the J and Ks light out to 1034 arcsec ($\sim$17 kpc) and 830 arcsec ($\sim$14 kpc) respectively. In the deep J-band images, the disk extends further out to 1305 arcsec ($\sim$22 kpc), see Fig. \[fit\_log\].
The light distribution in the disk is characterized by a down-bending Type II profile, with a break at $R_{br} \simeq 554$ arcsec, i.e. $\sim$9.3 kpc (see Sec. \[ellfit\] and the right panel of Fig. \[fit\_log\]). Type II down-bending surface brightness profiles are frequently observed in normal and barred galaxies, where the radial slope may deviate from a pure exponential decline beyond 3-5 disk scale lengths (van der Kruit 1979; Pohlen et al. 2002; Erwin et al. 2008; Pohlen & Trujillo 2006; Munoz-Mateos et al. 2013). For NGC 253 the break is observed at about 3 times the scale length of the inner disk, precisely $R^{Ks}_{br}/r^{in}_h= 3.20 \pm 0.02$ in the Ks band, and $R^{J}_{br}/r^{in}_h= 2.87 \pm 0.03$ in the J band.
Two possible mechanisms are proposed to explain the observed outer break in the disks of galaxies: an angular momentum exchange and a threshold in star formation. Theoretical studies also showed that bars can generate such disk breaks [@PfF91; @Deb06; @Foy08], and in such cases, the break resides within the OLR. Observations are in good agreement with the above simulations, showing that the location of the break radius is consistent with the location of the estimated OLR [@Erw08; @MM13] in many disks with down-bending profiles.
@PhT06 investigated the location of the break in a sample of spiral galaxies, by dividing the Type II class of profiles into two subcategories, classical or OLR break (named Type II-CT and Type II-OLR, respectively), according to the physical origin of the break. They found that for Type II-CT the break radius is, on average, at $9.2 \pm 2.4$ kpc, and spans a range from 5.1 kpc to 14.7 kpc. For the Type II-OLR the location of the break spans a wider range, from 2.4 kpc to 25 kpc, with an average value of the break radius of $9.5 \pm 6.5$ kpc. @MM13 recently investigated the connection between the bars and the location of breaks. In general, most breaks can be found anywhere in the range of surface brightness $\mu_{br}
\sim 22 - 25$ AB mag arcsec$^{-2}$ or, equivalently, $\Sigma \sim 5
\times 10^7 - 10^8$ $M_{\sun}$ kpc$^{-2}$. They also found that the range of possible break-to-bar radii $R_{br}/R_{bar}$ is a function of the total stellar mass, i.e. the most massive disks ($M \ge 10^{10}
M_{\odot}$) have $R_{br}/R_{bar} \sim 2 - 3$, consistent with the range of OLRs for these galaxies. Galaxies with larger break radii tend to host weaker bars (i.e. with $\epsilon \le 0.5$).
By studying the dependence of the star formation rate (SFR) on the density and dynamics of the interstellar gas in disks, @K89 proposed a different mechanism for the down-bending profiles of exponential disks: beyond a critical radius star formation is inhibited, producing a visible change in slope or a truncation of the luminosity profile at this distance from the galaxy center. The critical radius for star formation can be estimated from the break in the H$\alpha$ surface brightness profile [@MK01] and compared with the break radius $R_{br}$ estimated from the light profiles of the underlying stellar population.
What is a realistic explanation for the observed break in the disk of NGC 253? On the basis of the joint analysis of the new NIR VISTA and previous kinematical data for the NGC 253 disk, we place the OLR of the bar at $R \sim 4.9$ arcmin ($\sim$4.9 kpc). Taking into account the large uncertainties on $R_{CR}$, and thus on $\Omega_b$, the maximum value for the OLR radius is $\sim$7 kpc. The break radius in the light profiles of NGC 253 is at $\sim$9.3 kpc. Even if this value is comparable with both estimates derived by @PhT06 for Type II-CT and Type II-OLR profiles, it lies much further out than the location of the OLR. This suggests that the angular momentum exchange driven by the bar may not be the origin of the down-bending profile for the disk of NGC 253. This is further supported by the ratio $R_{br}/R_{bar} \sim 6$ for NGC 253, which is more than a factor of two greater than the range of values estimated by Munoz-Mateos et al. (2013) for galaxies with breaks in their light profiles consistent with the OLRs. On the other hand, considering the average deprojected ellipticity of the bar in NGC 253 ($\sim$0.55, see Fig. \[ellipse\]), the estimated break radius turns out to be consistent with those measured in low-mass disk galaxies hosting weak bars and showing large break radii. This is consistent with previous observational studies indicating that “classical breaks” at larger radii are more common in late-type disk galaxies, while the “OLR breaks” are more frequent in early-type disks [@PhT06; @Erw08; @MM13].
On the basis of the above discussion, it is unlikely that the orbital resonances are responsible for the observed break in the surface brightness profile of the outer disk in NGC 253. Hence we consider the alternative origin, a threshold in the star formation. For NGC 253, the H$\alpha$ surface brightness profile was published by Hoopes et al. (1996; see Fig. 6 of that paper). For $9 \le R \le 10$ arcmin ($9.1 \le R \le 10.1$ kpc) the slope changes and shows a sharper decline with respect to smaller radii. The observed break in the H$\alpha$ profile falls in the same range where the break in the NIR light profiles is observed. This suggests that some mechanism has inhibited the star formation in the disk of NGC 253 from this radius outward. Both the H$\alpha$ and the HI distributions are less extended than the stellar counterpart, reaching a distance from the center of the galaxy of $\sim$11 kpc [@Boo05]. In particular, the deep data in the J band show that the “outer disk” is a factor of two more extended than the H$\alpha$ and HI, reaching a distance of $\sim$22 kpc from the center (see Fig. \[fit\_log\]). This is also confirmed by deep optical observations by @MH97. As shown by @Greggio, the halo of the galaxy starts at about 20 arcmin ($\sim 20.2$ kpc). The HI distribution [@P91] exhibits a peak in the range $2.5
\lesssim R \lesssim 4.5$ arcmin ($2.53 \le R \le 4.54$ kpc). At this HI peak, the neutral gas surface density also has a peak of $\Sigma
\sim 7$ M$_\odot pc^{-2}$, from which it then declines to a value of $\Sigma \sim 2.5$ M$_\odot pc^{-2}$ at the break radius $R_{br} = 9.3$ kpc. This value is significantly lower than the critical surface density at this radius, which is $\Sigma \sim 9.7$ M$_\odot pc^{-2}$ according to the Kennicutt law [@K89; @MK01]. This clearly suggests that some past mechanism has redistributed the gaseous disk and subsequently inhibited the star formation. As already suggested by Boomsma et al. (2005), there are several mechanisms that may be responsible for the truncation of the gaseous disk in NGC 253 and the consequent inhibition of the star formation: the gas in the outer layers may have been ionized by the hot stars and starburst in the disk of NGC 253; the gas may have been removed by the ram pressure stripping of other galaxies in the Sculptor group or, finally, a merger event may have led to significant disturbances in the disk and halo.
Summary {#concl}
========
We have performed a detailed photometric analysis of the barred spiral galaxy NGC 253 from the deep and shallow data in the J band and the shallow data in the Ks band, taken during the Science Verification run of the new VISTA telescope on Paranal. The disk, which extends out to 22 kpc in the deep J band image, hosts three prominent features (see Fig. \[tileKs\]): [*i)*]{} the bright and almost round bulge with a diameter of about 1 kpc; [*ii)*]{} the bar, extending out to 1.7 kpc, with a typical peanut shape ending in very bright edges; and [*iii)*]{} the spiral arms, which dominate the disk at larger radii. In addition to the average light distribution of these main components, the following two substructures are observed (see Fig. \[FM\] and Fig. \[zoom\]): a nuclear ring with a diameter of $\sim$0.5 kpc, and a second ring, with an average radius of $\sim$2.9 kpc, enclosing the bar and located in the disk. All the above components are already visible in the J-band image (Fig. \[tileJ\]), but are more clearly identified in the Ks image (see Fig. \[tileKs\]). In Table \[tab\_n253\] we list the main parameters of the structural components in the disk of NGC 253 (radial extension, ellipticity, P.A., and average colors), based on our photometric analysis.
Main components in NGC 253
---------------------------- ---------- ------------ -------- -----------
$length$ $\epsilon$ P.A. J-Ks
kpc degree mag
bulge 0.6 0.4 58 1 - 2
bar 1.7 0.4 - 0.7 72 1 -1.3
inner disk 2.9 0.7 - 0.8 52 0.8 - 1
outer disk 1.5 0.8 52 0.3 - 0.8
substructures in NGC 253
nuclear ring 0.2 0.6 52 1.5 - 1.8
ring 2.9 0.8 52 1.2
: Photometric parameters, derived by the analysis in the Ks band, which characterize the main components and substructures in NGC 253.[]{data-label="tab_n253"}
On the basis of the results from the quantitative photometry carried out in this study, we have described the structure of the nucleus, bar and disk, and the connection between the observed substructures in the disk and the Lindblad resonances predicted by the bar/disk kinematics (Sec. \[result\]). The main results of this analysis are:
- from the degree of curvature of the dust lane in the J-Ks color map, we obtain an indication of a weak bar in NGC 253 (see Sec. \[bar\]). Since the bright knots at the ends of the bar are redder than the inner regions, and given the late-type morphology and the weakness of the bar, these bright regions are not the ansae typically observed in other barred galaxies, but rather regions of local star formation (see Sec. \[bar\]);
- from the measurement of the bar’s deprojected length on the new Ks image, we derive a new value for the corotation radius (CR) in NGC 253, located at $R_{CR} \sim 3$ kpc. We then estimate the bar pattern speed, $\Omega_b = 61.3$ km/s/kpc, and the corresponding radii of the Lindblad resonances. We find that the ILR is at $0.3 \le R \le 0.4$ kpc, the OLR is at $R \sim 4.9$ kpc, and the UHR is in the range $1 \le R \le 1.5$ kpc;
- the nuclear ring observed in NGC 253 is located at the ILR. Its morphology and radius are similar to those of other nuclear rings with a resonant origin observed in several other barred galaxies [@BC93; @Com10; @Maz11];
- the presence of the OLR at $R \sim 4.9$ kpc is consistent with the peak of the HI surface density observed at similar radii. We cannot associate the ring at 2.9 kpc with the UHR, which is expected at smaller radii. The ring may, in fact, be the result of a minor merger event or, alternatively, a transient structure formed during an intermediate stage of bar formation;
- the disk of NGC 253 has a down-bending profile with a break at $R \sim$9.3 kpc, which corresponds to about 3 times the scale length of the inner disk. We conclude that such a break most likely arises from a threshold in star formation. The exact mechanism responsible for the truncation of the gaseous disk in NGC 253, and the consequent inhibition of the star formation, is still to be identified. Three possible mechanisms are presented: ionization of the gas in the outer layers by the hot stars and the starburst in NGC 253, ram pressure stripping of the gas by other galaxies in the Sculptor group, or a merger event.
A merger event has been invoked several times, and by independent authors, to explain the current morphology of NGC 253 [@Boo05]. It may in fact account for the extended asymmetric stellar halo and the southern spur seen in deep optical images and recently confirmed by the new deep VISTA data [@Greggio], for the HI off-plane plume elongated perpendicular to the disk major axis, for the truncation of the gaseous disk with the consequent inhibition of the star formation, and for the presence of the ring at the bar's edges, which is clearly detected in the new VISTA shallow images.
To conclude, the new VISTA imaging data presented in this paper provide a detailed and more complete view of NGC 253, and illustrate the amazing capability of the VISTA telescope for studies that require high angular resolution on a large field of view.
This work is based on observations taken at the ESO La Silla Paranal Observatory within the VISTA Science Verification Program ID 60.A-9285(A). We are very grateful to the referee, Michael Pohlen, for his comments and suggestions that improved this work. We thank Jorge Melnick for initiating the VISTA Science Verification and its organization, Thomas Szeifert and Monika Petr-Gotzens for the assistance and help during the observing run, and Jim Lewis, Simon Hodgkin and Eduardo Gonzalez-Solares from CASU for their expert contribution to the VISTA data processing. E.I. wishes to thank ESO for the financial support and hospitality given during her several visits in 2011 and 2012 to work on the SV data. The authors wish to thank M. Capaccioli, E.M. Corsini, T. de Zeeuw, E. Emsellem, K.C. Freeman and O. Gerhard for useful comments and discussions.
[^1]: This work is based on observations taken at the ESO La Silla Paranal Observatory within the VISTA Science Verification Program ID 60.A-9285(A).
[^2]: RA(J2000)=00h 47m 33s; DEC(J2000)=-25d 17m 18s
[^3]: The full SV proposal is available at http://www.eso.org/sci/activities/vltsv/vista/index.html
[^4]: NB118 is a narrow-band filter centered close to 1.18 micron. See the VIRCAM user manual at http://www.eso.org/sci/facilities/paranal/instruments/vircam/doc/ and @MilJen2013.
[^5]: To compare 2MASS magnitudes with VISTA ones we have applied the following transformation between 2MASS and VISTA systems: $m_{J}^{VISTA} = m_{J}^{2MASS} -0.065
\left[m_{J}^{2MASS} - m_{Ks}^{2MASS}\right]$, $m_{Ks}^{VISTA} =
m_{Ks}^{2MASS} +0.01 \left[m_{J}^{2MASS} - m_{Ks}^{2MASS}\right]$. See the following link: http://casu.ast.cam.ac.uk/surveys-projects/vista/technical/photometric-properties.
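    For convenience, the transformation above can be applied as in the following minimal Python sketch (the function name and the example magnitudes are ours; the colour coefficients are those quoted in this footnote):

```python
def twomass_to_vista(j_2mass, ks_2mass):
    """Convert 2MASS J, Ks magnitudes to the VISTA system using the
    colour terms quoted above (CASU photometric properties page)."""
    colour = j_2mass - ks_2mass            # 2MASS (J - Ks) colour
    j_vista = j_2mass - 0.065 * colour     # J-band term
    ks_vista = ks_2mass + 0.010 * colour   # Ks-band term
    return j_vista, ks_vista

# illustrative star with J = 14.20 and Ks = 13.50 in the 2MASS system
print(twomass_to_vista(14.20, 13.50))      # -> (14.1545, 13.507)
```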
[^6]: For visualisation purposes, look at the video that cross fades between VISTA and optical images of NGC253, available at the following link: http://www.eso.org/public/videos/eso1025b/
[^7]: The central pixel adopted to extract the light and color profiles coincides with the kinematic center found by @MS10, given at the beginning of the Sec. \[phot\].
|
---
abstract: 'We study direct and inverse eigenvalue problems for a pair of harmonic functions with a spectral parameter in boundary and coupling conditions. The direct problem is relevant to sloshing frequencies of free oscillations of a two-layer fluid in a container. The upper fluid occupies a layer bounded above by a free surface and below by a layer of fluid of greater density. Both fluids are assumed to be inviscid, incompressible, and heavy, whereas the free surface and the interface between fluids are supposed to be bounded.'
author:
- Nikolay Kuznetsov
title: |
**On direct and inverse spectral problems\
for sloshing of a two-layer fluid\
in an open container**
---
Laboratory for Mathematical Modelling of Wave Phenomena,\
Institute for Problems in Mechanical Engineering, Russian Academy of Sciences,\
V.O., Bol’shoy pr. 61, St. Petersburg 199178, Russian Federation\
E-mail: [email protected]
Introduction
============
Linear water wave theory is a widely used approach for describing the behaviour of surface waves in the presence of rigid boundaries. In particular, this theory is a common tool for determining sloshing frequencies and modes in containers occupied by a homogeneous fluid, that is, a fluid of constant density. The corresponding boundary spectral problem, usually referred to as the sloshing problem, has been the subject of a great number of studies over more than two centuries (a historical review can be found, for example, in [@FK]). In the comprehensive book [@KK], an advanced technique based on the spectral theory of operators in a Hilbert space was presented for studying this problem.
In the framework of the mathematical theory of linear water waves, substantial work has been done in the past two decades for understanding the difference between the results valid for homogeneous and two-layer fluids (in the latter case the upper fluid occupies a layer bounded above by a free surface and below by a layer of fluid whose density is greater than that in the upper one). These results concern wave/structure interactions and trapping of waves by immersed bodies (see, for example, [@CL], [@LC], [@KMM] and references cited therein), but much less is known about the difference between sloshing in containers occupied by homogeneous and two-layer fluids. To the author’s knowledge, there is only one related paper [@KS] with rigorous results for multilayered fluids, but it deals only with the spectral asymptotics in a closed container. Thus, the first aim of the present paper is to fill in this gap at least partially.
Another aim is to consider the so-called inverse sloshing problem; that is, the problem of recovering some physical parameters from known spectral data. The parameters to be recovered are the depth of the interface between the two layers and the density ratio that characterises stratification. It is demonstrated that, for determining these two characteristics for fluids occupying a vertical-walled container with a horizontal bottom, one not only has to measure the two smallest sloshing eigenfrequencies, which must satisfy certain inequalities, but also has to analyse the corresponding free surface elevations.
Statement of the direct problem
-------------------------------
Let two immiscible, inviscid, incompressible, heavy fluids occupy an open container whose walls and bottom are rigid surfaces. We choose rectangular Cartesian coordinates $(x_1,x_2,y)$ so that their origin lies in the mean free surface of the upper fluid and the $y$-axis is directed upwards. Then the whole fluid domain $W$ is a subdomain of the lower half-space $\{ -\infty < x_1, x_2 < +\infty,\, y<0 \}$. The boundary $\partial W$ is assumed to be piece-wise smooth and such that every two adjacent smooth pieces of $\partial W$ are not tangent along their common edge. We also suppose that each horizontal cross-section of $W$ is a bounded two-dimensional domain; that is, a connected, open set in the corresponding plane. (The latter assumption is made for the sake of simplicity because it excludes the possibility of two or more interfaces between fluids at different levels.) The free surface $F$ bounding above the upper fluid of density $\rho_1>0$ is the non-empty interior of $\partial W \cap \{ y=0 \}$. The interface $I=W\cap \{y=-h\}$, where $0<h< \max \{
|y|:\,(x_1,x_2,y)\in \partial W \}$, separates the upper fluid from the lower one of density $\rho_2>\rho_1$. We denote by $W_1$ and $W_2$ the domains $W\cap \{y>-h\}$ and $W\cap \{y<-h\}$ respectively; they are occupied by the upper and lower fluids respectively. The surface tension is neglected and we suppose the fluid motion to be irrotational and of small amplitude. Therefore, the boundary conditions on $F$ and $I$ may be linearised. With a time-harmonic factor, say $\cos \omega t$, removed, the velocity potentials $u^{(1)} (x_1,x_2,y)$ and $u^{(2)} (x_1,x_2,y)$ (they may be taken to be real functions) for the flow in $W_1$ and $W_2$ respectively must satisfy the following coupled boundary value problem: $$\begin{aligned}
&& u^{(j)}_{x_1 x_1} + u^{(j)}_{x_2 x_2} + u^{(j)}_{yy} = 0 \quad \mbox{in} \
W_j,\quad j=1,2, \label{lap} \\ && u^{(1)}_y = \nu u^{(1)}\quad \mbox{on}\ F,
\label{nuf} \\ && \rho \left( u^{(2)}_y - \nu u^{(2)} \right) = u^{(1)}_y - \nu
u^{(1)}\quad \mbox{on}\ I, \label{nui} \\&& u^{(2)}_y = u^{(1)}_y \quad \mbox{on}\ I,
\label{yi} \\ && \partial u^{(j)}/\partial n = 0\quad \mbox{on}\
B_j\quad j=1,2. \label{nc}\end{aligned}$$ Here $\rho=\rho_2/\rho_1 >1$ is the non-dimensional measure of stratification, the spectral parameter $\nu$ is equal to $\omega^2/g$, where $\omega$ is the radian frequency of the water oscillations and $g$ is the acceleration due to gravity; $B_j=\partial W_j\setminus (\bar F\cup \bar I)$ is the rigid boundary of $W_j$. By combining and , we get another form of the spectral coupling condition : $$(\rho -1) u^{(2)}_y = \nu \left( \rho u^{(2)} - u^{(1)} \right) \quad \mbox{on}\ I.
\label{nui2}$$ We also suppose that the orthogonality conditions $$\int_F u^{(1)}\,{\mathrm{d}\kern0.2pt}x = 0\quad \mbox{and}\quad \int_I \left( \rho u^{(2)} - u^{(1)}
\right)\,{\mathrm{d}\kern0.2pt}x = 0,\quad {\mathrm{d}\kern0.2pt}x = {\mathrm{d}\kern0.2pt}x_1 {\mathrm{d}\kern0.2pt}x_2,
\label{ort}$$ hold, thus excluding the zero eigenvalue of –.
When $\rho =1$, conditions and mean that the functions $u^{(1)}$ and $u^{(2)}$ are harmonic continuations of each other across the interface $I$. Then problem – complemented by the first orthogonality condition (the second condition is trivial) becomes the usual sloshing problem for a homogeneous fluid. It is well-known since the 1950s that the latter problem has a positive discrete spectrum. This means that there exists a sequence of positive eigenvalues $\{ \nu_n^W \}_1^\infty$ of finite multiplicity (the superscript $W$ is used here and below for distinguishing the sloshing eigenvalues that correspond to the case, when a homogeneous fluid occupies the whole domain $W$, from those corresponding to a two-layer fluid which will be denoted simply by $\nu_n$). In this sequence the eigenvalues are written in increasing order and repeated according to their multiplicity; moreover, $\nu_n^W\to
\infty$ as $n\to \infty$. The corresponding eigenfunctions $\{ u_n \}_1^\infty
\subset H^1 (W)$ form a complete system in an appropriate Hilbert space. These results can be found in many sources, for example, in the book [@KK].
Variational principle
=====================
Let $W$ be bounded. It is well known that the sloshing problem in $W$ for homogeneous fluid can be cast into the form of a variational problem and the corresponding Rayleigh quotient is as follows: $$R_W (u) = \frac{\int_W |\nabla u|^2 \,{\mathrm{d}\kern0.2pt}x {\mathrm{d}\kern0.2pt}y}{\int_F u^2 \,{\mathrm{d}\kern0.2pt}x}.
\label{Rayhom}$$ For obtaining the fundamental eigenvalue $\nu_1^W$ one has to minimize $R_W (u)$ over the subspace of the Sobolev space $H^1 (W)$ consisting of functions that satisfy the first orthogonality condition . In order to find $\nu_n^W$ for $n>1$, one has to minimize over the subspace of $H^1 (W)$ such that each its element $u$ satisfies the first condition along with the following equalities $\int_F u\,u_j\,{\mathrm{d}\kern0.2pt}x = 0$, where $u_j$ is either of the eigenfunctions $u_1,\dots,u_{n-1}$ corresponding to the eigenvalues $\nu_1^W, \dots,
\nu_{n-1}^W$.
In the case of a two-layer fluid we suppose that the usual embedding theorems hold for both subdomains $W_j$, $j=1,2$ (the theorem about traces on smooth pieces of the boundary for elements of $H^1$ included). This imposes some restrictions on $\partial
W$, in particular, on the character of the intersections of $F$ and $I$ with $\partial W\cap \{ y<0 \}$. Then using , it is easy to verify that the Rayleigh quotient for the two-layer sloshing problem has the following form: $$R (u^{(1)},u^{(2)}) = \frac{\int_{W_1} \left| \nabla u^{(1)}
\right|^2\,{\mathrm{d}\kern0.2pt}x {\mathrm{d}\kern0.2pt}y + \rho \int_{W_2} \left| \nabla u^{(2)}
\right|^2\,{\mathrm{d}\kern0.2pt}x {\mathrm{d}\kern0.2pt}y }{\int_F \left[ u^{(1)} \right]^2\,{\mathrm{d}\kern0.2pt}x +
(\rho - 1)^{-1} \int_I \left[ \rho u^{(2)} - u^{(1)} \right]^2\,{\mathrm{d}\kern0.2pt}x}. \label{Raytwo}$$ To determine the fundamental sloshing eigenvalue $\nu_1$ one has to minimize $R
(u^{(1)},u^{(2)})$ over the subspace of $H^1 (W_1) \oplus H^1 (W_2)$ defined by both orthogonality conditions . In order to find $\nu_n$ for $n>1$, one has to minimize over the subspace of $H^1 (W_1) \oplus H^1 (W_2)$ such that every element $\left( u^{(1)},\,u^{(2)} \right)$ of this subspace satisfies the equalities $$\int_F u^{(1)}\,u_j^{(1)}\,{\mathrm{d}\kern0.2pt}x = 0 \quad \mbox{and} \quad \int_I \left[ \rho
u^{(2)} - u^{(1)} \right]\, \left[ \rho u_j^{(2)} - u_j^{(1)} \right] {\mathrm{d}\kern0.2pt}x = 0 ,
\quad j=1, \dots, n-1 ,$$ along with both conditions . Here $\big( u_j^{(1)}, \, u_j^{(2)} \big)$ is either of the eigensolutions corresponding to $\nu_1, \dots, \nu_{n-1}$.
Now we are in a position to prove the following assertion.
\[prop1\] Let $\nu_1^W$ and $\nu_1$ be the fundamental eigenvalues of the sloshing problem in the bounded domain $W$ for homogeneous and two-layer fluids respectively. Then the inequality $\nu_1 < \nu_1^W$ holds.
The restriction that $W$ is bounded is essential as the example considered in Proposition 4 below demonstrates.
If $u_1$ is an eigenfunction corresponding to $\nu_1^W$, then $$\nu_1^W = \frac{\int_W |\nabla u_1|^2\,{\mathrm{d}\kern0.2pt}x {\mathrm{d}\kern0.2pt}y}{\int_F u_1^2\,{\mathrm{d}\kern0.2pt}x} \, .$$ Let $u^{(1)}$ and $u^{(2)}$ be equal to the restrictions of $\rho u_1$ and $u_1$ to $W_1$ and $W_2$, respectively. Then the pair $\left( u^{(1)},\,u^{(2)} \right)$ is an admissible element for the Rayleigh quotient . Substituting it into , we obtain that $$R (\rho u_1, u_1) = \frac{\int_{W_1} \left| \nabla u_1 \right|^2\,{\mathrm{d}\kern0.2pt}x {\mathrm{d}\kern0.2pt}y +
\rho^{-1} \int_{W_2} \left| \nabla u_1 \right|^2\,{\mathrm{d}\kern0.2pt}x {\mathrm{d}\kern0.2pt}y }{\int_F u_1^2\,{\mathrm{d}\kern0.2pt}x}.$$ Comparing this equality with the previous one and taking into account that $\rho
>1$, one finds that $R (\rho u_1, u_1) < \nu_1^W$. Since $\nu_1$ is the minimum of , we conclude that $\nu_1 < \nu_1^W$.
Containers with vertical walls\
and horizontal bottoms
===============================
Let us consider the fluid domain $W = \{ x = (x_1,x_2) \in D, \, y\in (-d,0) \}$, where $D$ is a piece-wise smooth two-dimensional domain (the container’s horizontal cross-section) and $d \in (0, \infty]$ is the container’s constant depth. Thus, the container’s side wall $\partial D\times (-d,0)$ is vertical, the bottom $\{ x \in D,
\, y=-d \}$ is horizontal, whereas the free surface and the interface are $F=\{ x
\in D, \, y=0 \}$ and $I=\{ x \in D, \, y=-h \}$ respectively, $0<h<d$.
For a homogeneous fluid occupying such a container, the sloshing problem is equivalent to the free membrane problem. Indeed, putting $$u(x,y) = v(x) \cosh k(y+d) \quad \big( \, u(x,y) = v(x) \, {\textrm{e}}^{k y} \ \mbox{when}
\ d = \infty \, \big) ,$$ one reduces problem – with $\rho = 1$, complemented by the first orthogonality condition to the following spectral problem: $$\nabla_x^2 v + k^2 v = 0\ \ \mbox{in}\ \ D,\quad \partial v / \partial n_x = 0 \ \
\mbox{on}\ \ \partial D,\ \ \int_D v\,{\mathrm{d}\kern0.2pt}x = 0, \label{fm}$$ where $\nabla_x = (\partial/\partial x_1,\partial/\partial x_2)$ and $n_x$ is a unit normal to $\partial D$ in ${{\varmathbb{R}}}^2$. It is clear that $\nu^W$ is an eigenvalue of the former problem if and only if $k^2$ is an eigenvalue of (\[fm\]) and $$\nu^W = k\tanh kd\quad \mbox{when}\ d<\infty \quad \big( \, \nu^W = k \quad
\mbox{when} \ d=\infty \, \big) , \quad k>0. \label{nuW}$$ It is well-known that problem (\[fm\]) has a sequence of positive eigenvalues $\{
k_n^2 \}_1^\infty$ written in increasing order and repeated according to their finite multiplicity, and such that $k_n^2\to \infty$ as $n\to \infty$. The corresponding eigenfunctions form a complete system in $H^1 (D)$.
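As a quick numerical illustration (not part of the original text), the following Python sketch evaluates the first few homogeneous sloshing eigenvalues $\nu^W = k\tanh kd$ for a circular cross-section of radius $a$, for which the non-zero Neumann eigenvalues of the disc are $k^2 = (j'_{m,s}/a)^2$ with $j'_{m,s}$ the positive zeros of $J_m'$; the values of $a$ and $d$ below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.special import jnp_zeros

def sloshing_eigenvalues_disc(a, d, n_modes=5):
    """nu^W = k*tanh(k*d) for a homogeneous fluid in a vertical circular
    cylinder of radius a and depth d; k = j'_{m,s}/a are the square roots
    of the non-zero Neumann eigenvalues of the disc (multiplicities of the
    m >= 1 modes are not repeated here)."""
    ks = np.sort(np.concatenate([jnp_zeros(m, 4) / a for m in range(5)]))
    ks = ks[:n_modes]
    return ks, ks * np.tanh(ks * d)

k, nuW = sloshing_eigenvalues_disc(a=1.0, d=2.0)
print(k)     # starts with j'_{1,1} ~ 1.8412 for a unit disc
print(nuW)   # the corresponding sloshing eigenvalues nu_n^W
```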
Let us describe the same reduction procedure in the case when $W$ is occupied by a two-layer fluid and $d<\infty$. Putting $$\begin{aligned}
&& u^{(1)} (x,y) = v (x)\,[ A \cosh k(y+h) + B \sinh k(y+h) ], \label{u1} \\ &&
u^{(2)} (x,y) = v (x)\,C \cosh k(y+d), \label{u2}\end{aligned}$$ where $A,B$ and $C$ are constants, one reduces problem – and , $\rho > 1$, to problem combined with the following quadratic equation: $$\begin{gathered}
\nu^2 \cosh kd - \nu k\, [ \sinh kd + (\rho -1) \cosh kh\, \sinh k(d-h) ] \\ + k^2
(\rho -1) \sinh kh\, \sinh k(d-h) = 0 , \quad k > 0 .
\label{qe} \end{gathered}$$ Thus $\nu$ is an eigenvalue of the former problem if and only if $\nu$ satisfies , where $k^2$ is an eigenvalue of (\[fm\]).
Indeed, the quadratic polynomial in $\nu$ on the left-hand side of is the determinant of the following linear algebraic system for $A$, $B$ and $C$: $$\begin{aligned}
A=C\,\left[ \cosh k(d-h) - \nu^{-1} (\rho -1)\,k\,\sinh k(d-h) \right],\ \
B=C\,\sinh k(d-h), \label{ABC} \\ A\, ( k\,\sinh kh - \nu\,\cosh kh ) + C\,\sinh
k(d-h)\, ( k\,\cosh kh - \nu\,\sinh kh) = 0 . \label{AC}\end{aligned}$$ The latter arises when one substitutes expressions and into the boundary condition and the coupling conditions and . This homogeneous system defines eigensolutions of the sloshing problem provided there exists a non-trivial solution, and so the determinant must vanish which is expressed by .
Let us show that the roots $\nu^{(+)}$ and $\nu^{(-)}$ of are real in which case $$\nu^{(\pm)} = k\,\frac{b \pm \sqrt \mathcal{D}}{2 \,\cosh kd} > 0 \, ,
\label{nupm}$$ where the inequality is a consequence of the formulae $$\begin{aligned}
&& b = \sinh kd + (\rho -1)\,\cosh kh\, \sinh k(d-h), \label{b} \\ && \mathcal{D} =
b^2 - 4\,(\rho -1)\,\cosh kd\,\sinh kh\,\sinh k(d-h) . \label{D}\end{aligned}$$
Since $\mathcal{D}$ is a quadratic polynomial of $\rho -1$, it is a simple application of calculus to demonstrate that it attains the minimum at $$\rho -1 = \frac{2 \, \cosh kd \, \sinh kh - \sinh kd \, \cosh kh}{\cosh^2 kh
\, \sinh k(d-h)} \, ,$$ and after some algebra one finds that this minimum is equal to $$\frac{4\,\cosh kd\,\sinh kh\,\sinh k(d-h)}{\cosh^2 kh} >0,$$ which proves the assertion. Thus we arrive at the following.
\[prop2\] If $W$ is a vertical cylinder with horizontal bottom, then the sloshing problem for a two-layer fluid occupying $W$ has two sequences of eigenvalues $$\left\{ \nu_n^{(+)} \right\}_1^\infty \quad and \quad \left\{ \nu_n^{(-)}
\right\}_1^\infty$$ defined by $\eqref{nupm}$ with $k=k_n > 0$, where $k_n^2$ is an eigenvalue of problem $(\ref{fm})$.
The same eigensolution $(u^{(1)}, u^{(2)})$ corresponds to both $\nu_n^{(+)}$ and $\nu_n^{(-)}$, where $u^{(1)}$ and $u^{(2)}$ $($sloshing modes in $W_1$ and $W_2$ respectively$)$ are defined by formulae $\eqref{u1}$ and $\eqref{u2}$ with $v$ belonging to the set of eigenfunctions of problem $(\ref{fm})$ that correspond to $k^2_n;$ furthermore, $C$ is an arbitrary non-zero real constant, whereas $A$ and $B$ depend on $C$ through $\eqref{ABC}$.
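To make the formulae above easy to experiment with, here is a minimal Python sketch of $\nu^{(\pm)}$ computed via $b$ and $\mathcal{D}$ (the function name and the sample values of $k$, $d$, $h$ and $\rho$ are ours); the printed output also illustrates numerically the inequalities $\nu^{(-)} < k\tanh kd < \nu^{(+)}$ stated in Corollary 1 below:

```python
import numpy as np

def nu_pm(k, d, h, rho):
    """Two branches nu^(-), nu^(+) of the two-layer sloshing eigenvalues
    for a vertical cylinder, computed from b and D as defined above."""
    b = np.sinh(k*d) + (rho - 1.0)*np.cosh(k*h)*np.sinh(k*(d - h))
    D = b*b - 4.0*(rho - 1.0)*np.cosh(k*d)*np.sinh(k*h)*np.sinh(k*(d - h))
    return (k*(b - np.sqrt(D))/(2.0*np.cosh(k*d)),
            k*(b + np.sqrt(D))/(2.0*np.cosh(k*d)))

# illustrative values: k = 1.8412 (unit disc), depth d = 2, interface h = 0.5
k, d, h, rho = 1.8412, 2.0, 0.5, 1.2
nu_minus, nu_plus = nu_pm(k, d, h, rho)
print(nu_minus, k*np.tanh(k*d), nu_plus)   # expect nu^(-) < nu^W < nu^(+)
```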
Next we analyse the behaviour of $\nu_n^{(\pm)}$ as a function of $\rho$.
\[prop3\] For every $n=1,2,\dots$ the functions $\nu_n^{(-)}$ and $\nu_n^{(+)}$ are monotonically increasing as $\rho$ goes from $1$ to infinity. Their ranges are $$(0,\, k_n\tanh k_n h) \quad and \quad (k_n\tanh k_n d,\,\infty)$$ respectively.
In order to prove the proposition it is sufficient to show that $$\begin{aligned}
\frac{\partial (b\pm \sqrt{\mathcal{D}}\,)}{\partial \rho} = \sinh k(d-h) \Big\{
\cosh kh \pm \mathcal{D}^{-1/2} \big[ \cosh kh\, \sinh kd \nonumber \\ + (\rho -1)
\cosh^2 kh\, \sinh k(d-h) - 2 \cosh kd\, \sinh kh \big] \Big\} > 0 \, . \label{drho}\end{aligned}$$ Since $$\frac{\partial (b + \sqrt{\mathcal{D}}\,)}{\partial \rho} \Bigg|_{\rho =1} =
\frac{2\sinh^2 k(d-h)}{\sinh kd} > 0 \quad \mbox{and}\quad \frac{\partial (b -
\sqrt{\mathcal{D}}\,)}{\partial \rho} \Bigg|_{\rho \to \infty} = 0 \, ,$$ inequality is a consequence of the following one: $$\pm \frac{\partial^2 (b\pm \sqrt{\mathcal{D}})}{\partial \rho^2} =
\frac{4\, \cosh kd \, \sinh kh \, \sinh^3 k(d-h)}{\mathcal{D}^{3/2}} > 0 \quad
\mbox{for all}\ \rho > 1 .$$ The second assertion immediately follows from the first one and formulae –.
Combining Proposition 3 and formula , we arrive at the following assertion.
\[corol1\] The inequalities $\nu_n^{(-)} < \nu_n^W < \nu_n^{(+)}$ hold for each $n=1,2,\dots$ and every $\rho >1$.
Dividing by $k$ and letting $k=k_n$ tend to infinity, it is straightforward to obtain the following.
\[lemma1\] For every $\rho >1$ the asymptotic formula $$\nu_n^{(\pm)} \sim \frac{\rho + 1 \pm |\rho - 3|}{4}\,k_n \quad as\ n \to \infty,$$ holds with an exponentially small remainder term; here $k^2_n$ is an eigenvalue of $(\ref{fm})$.
In other words there are three cases: $$\begin{aligned}
&& \mbox{(i)\ if}\ \rho =3,\ \mbox{then}\ \nu_n^{(\pm)} \sim k_n\ \mbox{as}\ n\to
\infty; \\ && \mbox{(ii)\ if}\ \rho >3,\ \mbox{then}\ \nu_n^{(-)}\sim k_n\
\mbox{and}\ \nu_n^{(+)} \sim (\rho - 1) \, k_n / 2 \ \mbox{as}\ n\to \infty; \\ &&
\mbox{(iii)\ if}\ \rho \in (1,3),\ \mbox{then}\ \nu_n^{(-)} \sim (\rho - 1) \, k_n /
2 \ \mbox{and}\ \nu_n^{(+)} \sim k_n\ \mbox{as}\ n\to \infty.\end{aligned}$$ Combining these relations and the asymptotic formula $\nu_n^W \sim k_n$ as $n\to
\infty$ (it is a consequence of formula (\[nuW\]) defining $\nu_n^W$ when a homogeneous fluid occupies $W$), we obtain the following.
\[corol2\] As $n\to \infty$, we have that $\nu_n^{(-)} \sim \nu_n^W$ when $\rho \geq 3$, whereas $\nu_n^{(+)} \sim \nu_n^W$ provided $\rho\in (1,3]$.
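These limiting ratios are easy to verify numerically; the short sketch below (illustrative only, re-using the explicit formula for $\nu^{(\pm)}$ so that it is self-contained, with arbitrary values of $d$ and $h$) prints $\nu^{(\pm)}/k$ at a moderately large $k$ next to the limits $(\rho+1\pm|\rho-3|)/4$:

```python
import numpy as np

def nu_over_k(k, d, h, rho):
    """Ratios nu^(-)/k and nu^(+)/k from the explicit two-layer formula."""
    b = np.sinh(k*d) + (rho - 1.0)*np.cosh(k*h)*np.sinh(k*(d - h))
    D = b*b - 4.0*(rho - 1.0)*np.cosh(k*d)*np.sinh(k*h)*np.sinh(k*(d - h))
    return ((b - np.sqrt(D))/(2.0*np.cosh(k*d)),
            (b + np.sqrt(D))/(2.0*np.cosh(k*d)))

for rho in (2.0, 3.0, 5.0):                    # cases (iii), (i), (ii)
    limits = ((rho + 1 - abs(rho - 3))/4, (rho + 1 + abs(rho - 3))/4)
    print(rho, nu_over_k(k=15.0, d=1.0, h=0.4, rho=rho), limits)
```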
Another corollary of Lemma 1 concerns the distribution function $\mathcal{N} (\nu)$ for the spectrum of problem – and . This function is equal to the total number of eigenvalues $\nu_n$ that do not exceed $\nu$. An asymptotic formula for $\mathcal{N} (\nu)$ immediately follows from Lemma 1 and the asymptotic formula for the distribution of the spectrum for the Neumann Laplacian (see [@CH], Chapter 6).
\[corol3\] The distribution function $\mathcal{N} (\nu)$ of the spectrum for the sloshing of a two-layer fluid in a vertical cylinder of cross-section $D$ has the following asymptotics $$\mathcal{N} (\nu) \sim \left[ \frac{4}{(\rho -1)^2} + 1 \right]
\frac{|D|\,\nu^2}{4\pi} \quad as \ \nu \to \infty .$$ Here $|D|$ stands for the area of $D$.
It should be also mentioned that in [@KS] the asymptotics for $\mathcal{N}
(\nu)$ was obtained for a multi-layer fluid occupying a bounded closed container.
It follows from Lemma 1 and Corollary 2 that the asymptotic formula for the distribution function of the spectrum $\left\{ \nu_n^W \right\}_1^\infty$ is similar to the above one, but the first term in the square brackets must be deleted. Moreover, in the case of a homogeneous fluid the same asymptotic formula (up to the remainder term) holds for arbitrarily shaped fluid domains (see [@KK], Section 3.3). Since the first term in the square brackets tends to infinity as $\rho\to 1$, the transition from the two-layer fluid to the homogeneous one in the asymptotic formula for $\mathcal{N} (\nu)$ is a singular limit in the sense described in [@Ber]. A similar effect occurs for modes trapped by submerged bodies in two-layer and homogeneous fluids, as was noted in [@LC].
In conclusion of this section, it should be noted that in the case of an infinitely deep vertical cylinder it is easy to verify that $\nu = k$ is an eigenvalue of the sloshing problem for a two-layer fluid if and only if $k^2$ is an eigenvalue of problem (\[fm\]). Comparing this assertion with that at the beginning of this section we obtain the following.
\[prop4\] In an infinitely deep vertical-walled container, the sloshing problem for a two-layer fluid has the same set of eigenvalues and the same eigenfunctions of the form $v(x) \, {\textrm{e}}^{k y}$, $k>0$, as the sloshing problem for a homogeneous fluid in the same container; here $k^2$ is an eigenvalue and $v$ is the corresponding eigenfunction of problem $(\ref{fm})$.
Inverse problem
===============
Let a given container $W$ be occupied by a two-layer fluid, but now we assume that the position of the interface between layers and the density of the lower layer are unknown. The density of the upper layer is known because one can measure it directly. The sequence of eigenvalues $\left\{ \nu_n^W \right\}_1^\infty$ corresponding to the homogeneous fluid is also known because it depends only on the domain $W$. The inverse problem we are going to consider is to recover the ratio of densities $\rho$ and the depth of the interface $h$ from measuring some sloshing frequencies on the free surface. Say, the fundamental eigenvalue $\nu_1$ is known along with the smallest eigenvalue exceeding it.
The formulated inverse problem is not always solvable. Indeed, according to Proposition 4, [*it has no solution when $W$ is an infinitely deep container with vertical walls*]{}. Moreover, the inverse problem is trivial for all domains when it occurs that $\nu_1 = \nu_1^W$. In this case Proposition 1 implies that the fluid is homogeneous, that is, $\rho =1$ and $h=d$. Therefore, we restrict ourselves to the case of vertically-walled containers having the finite depth $d$ in what follows.
Reduction to transcendental equations
-------------------------------------
In view of what was said above, the inverse problem for $W=D\times (-d,0)$ can be stated as follows. Find conditions that allow us to determine $\rho > 1$ and $h \in
(0,d)$ when the following two eigenvalues are known: the fundamental one $\nu_1$ and the smallest eigenvalue $\nu_N$ that is greater than $\nu_1$. Thus $N$ is such that $k^2_{n} = k^2_1$ for all $n=1,\dots,N-1$, which means that the fundamental eigenvalue $k_1^2$ of problem (\[fm\]) is of multiplicity $N-1$ (of course, $\nu_1$ has the same multiplicity). For example, if $D$ is a disc, then the multiplicity of $k_1^2$ is two (see [@Bandle], Section 3.1), and so $\nu_N=\nu_3$ in this case.
According to formula (\[nupm\]), we have that $\nu_1 = \nu_1^{(-)}$. Hence the first equation for $\rho$ and $h$ is as follows: $$b_1 - \sqrt {\mathcal{D}_1} = \frac{2\,\nu_1}{k_1} \cosh k_1 d . \label{eq1}$$ Here $b_1$ and $\mathcal{D}_1$ are given by formulae and respectively with $k=k_1$.
To write down the second equation for $\rho$ and $h$ we have the dilemma whether $$\nu_N=\nu_N^{(-)} \quad \mbox{or} \quad \nu_N=\nu_1^{(+)} \, ? \label{dil}$$ Let us show that either of these options is possible. Indeed, Proposition 3 implies that $\nu_N = \nu_N^{(-)}$ provided $\rho -1$ is sufficiently small. On the other hand, let us demonstrate that there exists a triple $(\rho, d, h)$ for which $\nu_N
= \nu_1^{(+)}$. For this purpose we have to demonstrate that the inequality $$\nu_N^{(-)} = k_N \frac{b_N - \sqrt {\mathcal{D}_N}}{2\cosh k_N d}
\geq k_1 \frac{b_1 + \sqrt {\mathcal{D}_1}}{2\cosh k_1 d} = \nu_1^{(+)}$$ holds for some $\rho$, $d$ and $h$. As above $b_j$ and $\mathcal{D}_j$, $j=1,N$, are given by formulae and , respectively, with $k=k_j$.
Let $h=d/2$, then we have $$4\,\nu_j^{(\pm)} = k_j \left\{ (\rho +1)\,\tanh k_j d \pm \left[ (\rho +1)^2 \,
\tanh^2 k_j d + 8\,(\rho -1) \frac{1 - \cosh k_j d} {\cosh k_j d} \right]^{1/2}
\right\} ,$$ and so $$4\, \left[ \nu_N^{(-)} - \nu_1^{(+)} \right] \to k_N \left( \rho + 1 -
|\rho -3| \right) - k_1 \left( \rho + 1 + |\rho -3| \right) \quad \mbox{as} \ d \to
\infty .$$ The limit is a piecewise linear function of $\rho$; it attains its maximum value $4(k_N -
k_1)$ at $\rho =3$ and is positive for $\rho\in (1 + 2\,(k_1/k_N),\, 1 +
2\,(k_N/k_1))$.
Summarising, we arrive at the following.
\[prop5\] Let $k_N^2$ be the smallest eigenvalue of problem $(\ref{fm})$ other than $k_1^2$, and let $\nu_N^{(-)}$ be the sloshing eigenvalue defined by $\eqref{nupm}$–$\eqref{D}$ with $k=k_N$. Then
[(i)]{} $\nu_N^{(-)} < \nu_1^{(+)}$ when $\rho -1 > 0$ is sufficiently small $($of course, its value depends on $d$, $h$ and the domain $D);$
[(ii)]{} $\nu_N^{(-)} > \nu_1^{(+)}$ when $\rho\in (1 + 2\,(k_1/k_N),\, 1 +
2\,(k_N/k_1))$, $h=d/2$ and $d$ is sufficiently large $($of course, its value depends on $\rho$ and $D)$.
Obviously, assertion (ii) can be extended to values of $h$ that are sufficiently close to $d/2$.
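As a numerical illustration of assertion (ii) (our own check, using the simplified expression for $h=d/2$ given above and the first two distinct Neumann values of the unit disc, $k_1\approx 1.8412$ and $k_N\approx 3.0542$):

```python
import numpy as np

def nu_pm_half_depth(k, d, rho):
    """nu^(-), nu^(+) for the interface at h = d/2, using the simplified
    expression for 4*nu^(+-) given above."""
    t = np.tanh(k*d)
    disc = (rho + 1.0)**2 * t*t + 8.0*(rho - 1.0)*(1.0 - np.cosh(k*d))/np.cosh(k*d)
    return (k*((rho + 1.0)*t - np.sqrt(disc))/4.0,
            k*((rho + 1.0)*t + np.sqrt(disc))/4.0)

k1, kN, rho, d = 1.8412, 3.0542, 3.0, 10.0      # rho = 3, large depth
nuN_minus = nu_pm_half_depth(kN, d, rho)[0]
nu1_plus = nu_pm_half_depth(k1, d, rho)[1]
print(nuN_minus > nu1_plus)                      # True: nu_N^(-) exceeds nu_1^(+)
```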
Options for the second equation
-------------------------------
Let us develop a procedure for determining which of the two equalities can be chosen to complement equation in order to find $\rho$ and $h$. Our procedure is based on an analysis of the free surface elevations corresponding to the measured values $\nu_1$ and $\nu_N$. Indeed, when a two-layer fluid oscillates at the frequency defined by some $\nu_j$, the free surface elevation is proportional to the trace $u^{(1)}_j (x,0)$ (see, for example, [@Lamb], Section 227).
According to formula , the trace $u^{(1)}_1 (x,0)$ is a linear combination of linearly independent eigenfunctions $v_1 (x),\dots,v_{N-1} (x)$ corresponding to the fundamental eigenvalue $k_1^2$ of problem ; of course, its multiplicity is taken into account. By Proposition 2 the free surface elevation associated with $\nu_1^{(+)}$ is also proportional to a linear combination of $v_1,\dots,v_{N-1}$. Since these functions are known, one has to determine whether the measured free-surface elevation corresponding to $\nu_N$ can be represented in the form of such a combination and only in such a form. If this is the case, then $\nu_N = \nu_1^{(+)} < \nu_N^{(-)}$ and the following equation $$b_1 + \sqrt {\mathcal{D}_1} = \frac{2\,\nu_N}{k_1} \cosh k_1 d \label{eq+}$$ forms the system for $\rho$ and $h$ together with .
Besides, it can occur that the measured free-surface elevation corresponding to $\nu_N$ can be represented in two forms, one of which is a linear combination of $v_1,\dots,v_{N-1}$, whereas the other one involves the function $v_{N}$ as well as other eigenfunctions that correspond to the eigenvalue $k_N^2$ of problem along with $v_1,\dots,v_{N-1}$. It is clear that this happens when $\nu_N =
\nu_1^{(+)} = \nu_N^{(-)}$. Indeed, if all coefficients at the former functions vanish, then the profile is represented by $v_1,\dots,v_{N-1}$, otherwise not. In this case, equation can be complemented by either equation or the following one: $$b_N - \sqrt {\mathcal{D}_N} = \frac{2\,\nu_N}{k_N} \cosh k_N d . \label{eq2}$$ Of course, it is better to use the system that comprises equations and because the right-hand side terms in these equations are proportional.
If the measured free-surface elevation corresponding to $\nu_N$ cannot be represented as a linear combination of $v_1,\dots,v_{N-1}$, then $\nu_N =
\nu_N^{(-)} < \nu_1^{(+)}$, in which case the elevation is a linear combination of eigenfunctions that correspond to the eigenvalue $k_N^2$ of problem , the smallest one exceeding $k_1^2$. In this case, equation must be complemented by .
Thus we arrive at the following procedure for reducing the inverse sloshing problem to a system of two equations.
[**Procedure.**]{} [*Let $v_1,\dots,v_{N-1}$ be the set of linearly independent eigenfunctions of problem corresponding to $k_1^2$. If the observed elevation of the free surface that corresponds to the measured value $\nu_N$ has a representation as a linear combination of $v_1,\dots,v_{N-1}$, then $\rho$ and $d$ must be determined from equations and . Otherwise, equations and must be used.*]{}
The simplest case is when the fundamental eigenvalue of problem is simple, that is, $N=2$. Then the above procedure reduces to examining whether the free surface elevations corresponding to $\nu_1$ and $\nu_2$ are proportional or not. In the case of proportionality, equations and must be used. Equations and are applicable when there is no proportionality.
Solution of the transcendental systems
======================================
In this section we consider the question how to solve systems and , and and for finding $\rho$ and $h$.
System and
-----------
Equations and can be easily simplified. Indeed, the sum and difference of these equations are as follows: $$b_1 = \frac{\nu_N + \nu_1}{k_1}\, \cosh k_1 d \quad \mbox{and} \quad \mathcal{D}_1
= \left( \frac{\nu_N - \nu_1}{k_1} \right)^2 \cosh^2 k_1 d \, .$$ Substituting the first expression into the second equation (see formulae and ), we obtain $$(\rho - 1)\, \sinh k_1 h\, \sinh k_1 (d-h) = \frac{\nu_N\, \nu_1}{k_1^2} \, \cosh
k_1 d \, , \label{rho1+}$$ whereas the first equation itself has the following form: $$(\rho - 1)\, \cosh k_1 h\, \sinh k_1 (d-h) = \frac{\nu_N + \nu_1}{k_1}\, \cosh k_1 d
- \sinh k_1 d \, . \label{rho2+}$$ The last two equations immediately yield $$\tanh k_1 h = \frac{\nu_N \, \nu_1}{k_1 \, (\nu_N + \nu_1 - \nu_1^W)} \, ,$$ where formula is applied. Thus we are in a position to formulate the following.
\[prop6\] Let $\nu_1$ and $\nu_N \neq \nu_1$ be the smallest two sloshing eigenvalues measured for a two-layer fluid occupying $W=D\times (-d,0)$. Let also $$0 < \frac{\nu_N \, \nu_1}{k_1\, (\nu_N + \nu_1 - \nu_1^W)} < \tanh k_1 d \, ,$$ where $k_1^2$ is the fundamental eigenvalue of problem $\eqref{fm}$ in $D$ and $\nu_1^W$ is defined by formula $\eqref{nuW}$ with $k=k_1$. If Procedure guarantees that $\rho$ and $h$ satisfy equations $\eqref{eq1}$ and $\eqref{eq+}$, then $$h = \frac{1}{k_1} \tanh^{-1} \, \frac{\nu_N\, \nu_1}{k_1\, (\nu_N + \nu_1 -
\nu_1^W)} \, ,$$ whereas $\rho$ is determined either by $\eqref{rho1+}$ or by $\eqref{rho2+}$ with this $h$.
We recall that $\tanh^{-1} z = \frac{1}{2} \ln \frac{1+z}{1-z}$ (see [@AS], Section 4.6).
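A minimal Python sketch of this recovery (our own round-trip check on synthetic rather than measured data; the function name and numerical values are illustrative) first builds $\nu_1=\nu_1^{(-)}$ and $\nu_N=\nu_1^{(+)}$ for a known pair $(h,\rho)$ and then recovers that pair from the formulae of Proposition 6:

```python
import numpy as np

def recover_h_rho(nu1, nuN, k1, d):
    """Interface depth h and density ratio rho from the two measured
    eigenvalues, in the case nu_1 = nu_1^(-), nu_N = nu_1^(+) treated
    in Proposition 6."""
    nu1W = k1*np.tanh(k1*d)
    h = np.arctanh(nu1*nuN/(k1*(nu1 + nuN - nu1W)))/k1
    rho = 1.0 + nu1*nuN*np.cosh(k1*d)/(k1**2*np.sinh(k1*h)*np.sinh(k1*(d - h)))
    return h, rho

# synthetic data for k1 = 1.84, d = 2, true interface h = 0.6, true rho = 1.5
k1, d, h_true, rho_true = 1.84, 2.0, 0.6, 1.5
b = np.sinh(k1*d) + (rho_true - 1)*np.cosh(k1*h_true)*np.sinh(k1*(d - h_true))
D = b*b - 4*(rho_true - 1)*np.cosh(k1*d)*np.sinh(k1*h_true)*np.sinh(k1*(d - h_true))
nu1 = k1*(b - np.sqrt(D))/(2*np.cosh(k1*d))      # plays the role of nu_1
nuN = k1*(b + np.sqrt(D))/(2*np.cosh(k1*d))      # plays the role of nu_N = nu_1^(+)
print(recover_h_rho(nu1, nuN, k1, d))            # -> approximately (0.6, 1.5)
```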
System and
-----------
Since equations and have the same form, we treat them simultaneously. Eliminating square roots, we get $$\begin{aligned}
(\rho -1)\, \sinh k_j (d-h) \left( \nu_j \cosh k_j h - k_j \sinh k_j h \right) \\
= \frac{\nu_j}{k_j} \left( \nu_j\cosh k_j d - k_j \sinh k_j d \right), \quad j=1,N,\end{aligned}$$ which is linear with respect to $\rho -1$. Taking into account formula , we write this system in the form: $$\begin{aligned}
&& \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! (\rho -1)\, \sinh k_j (d-h)
\left( k_j \sinh k_j h - \nu_j \cosh k_j h \right) \nonumber \\ && \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ = \frac{\nu_j}{k_j} \left( \nu_j^W - \nu_j \right) \cosh k_j d ,
\quad j=1,N,
\label{sys1} \end{aligned}$$ where the right-hand side term is positive in view of Corollary 1. We eliminate $\rho -1$ from system , thus obtaining the following equation for $h$: $$\begin{aligned}
&& \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \frac{\nu_1}{k_1}
\left( \nu_1^W - \nu_1 \right) \cosh k_1 d\, \sinh k_N (d-h)
\left( k_N \sinh k_N h - \nu_N \cosh k_N h \right) \nonumber
\\ && \!\!\!\!\!\!\!\!\!\!\!\!\!\! - \frac{\nu_N}{k_N}
\left( \nu_N^W - \nu_N \right) \cosh k_N d \,\sinh k_1 (d-h)
\left( k_1 \sinh k_1 h - \nu_1 \cosh k_1 h \right) = 0.
\label{eqh1} \end{aligned}$$ Let us denote by $U (h)$ the expression on the left-hand side and investigate its behaviour for $h\geq 0$, because solving equation is equivalent to finding zeroes of $U (h)$ that belong to $(0,d)$.
It is obvious that $U (d) = 0$, and we have that $$U (0) = - \nu_N\,\nu_1 \left( \frac{\nu_1^W - \nu_1}{k_1} \, \cosh k_1 d \,
\sinh k_N d - \frac{\nu_N^W - \nu_N}{k_N}\, \cosh k_N d\, \sinh k_1 d \right).$$ After applying formula , this takes the form: $$U (0) = \left( \nu_N^W\, \nu_1 - \nu_N\, \nu_1^W \right) \frac{\nu_N\,
\nu_1}{k_N\,k_1} \cosh k_N d \, \cosh k_1 d \, ,
\label{-0}$$ and so $U (0)$ is positive, negative or zero simultaneously with $\nu_N^W \, \nu_1
- \nu_N \, \nu_1^W$.
We have that $$\begin{aligned}
U' (h) = \frac{\nu_1 \, k_N \, \cosh k_1 d}{k_1} \, (\nu_1^W - \nu_1) \, \left[ k_N
\, \sinh k_N (d-2h) + \nu_N \, \cosh k_N (d-2h) \right] \\ - \frac{\nu_N \, k_1 \,
\cosh k_N d}{k_N} \, (\nu_N^W - \nu_N) \, \left[ k_1\, \sinh k_1 (d-2h) + \nu_1 \,
\cosh k_1 (d-2h) \right] \, , \\ \frac{U'' (h)}{2} = \frac{\nu_N \, k_1^2 \, \cosh
k_N d}{k_N} \,(\nu_N^W - \nu_N) \, \left[ k_1 \, \cosh k_1 (d-2h) + \nu_1 \, \sinh
k_1 (d-2h) \right] \\ - \frac{\nu_1 \, k_N^2 \, \cosh k_1 d}{k_1} \, (\nu_1^W -
\nu_1) \, \left[ k_N \, \cosh k_N (d-2h) + \nu_N\, \sinh k_N (d-2h) \right] \, .\end{aligned}$$ Then formula yields the following asymptotic formula: $$\begin{aligned}
U (h) \sim (d-h) \, (\nu_N^W - \nu_N) \, (\nu_1^W - \nu_1) \left[ \frac{\nu_1 \,
k_N}{k_1} - \frac{\nu_N \, k_1}{k_N} \right] \cosh k_N d \, \cosh k_1 d \nonumber \\
\mbox{as} \ d-h \to +0 . \label{asym}\end{aligned}$$ Since equation is obtained under the assumption that $\nu_N =
\nu_N^{(-)}$ and $\nu_1 = \nu_1^{(-)}$, Corollary 1 yields that each factor in the asymptotic formula is positive except for the difference in the square brackets.
The next lemma gives a condition providing a relationship between the value $U (0)$ and the behaviour of $U (h)$ for $h < d$ and sufficiently close to $d$.
\[lemma2\] If the following inequality holds: $$\frac{\nu_1\,k_N}{k_1} - \frac{\nu_N\,k_1}{k_N} \leq 0, \label{-}$$ then $U (0) < 0$ and $U (h) < 0$ when $h < d$ and sufficiently close to $d$.
Let us prove the inequality $U (0) < 0$ first. According to formula , we have $$\nu_N^W \, \nu_1 - \nu_N \, \nu_1^W = \nu_1\, k_N \, \tanh k_N d - \nu_N \, k_1
\, \tanh k_1 d .$$ Furthermore, it follows from that $$\nu_N^W \, \nu_1 - \nu_N \, \nu_1^W \leq \nu_N \, k_1^2 \, d \left[ \frac{\tanh k_N
d}{k_N d} - \frac{\tanh k_1 d}{k_1 d} \right] < 0 , \label{par}$$ because $z^{-1} \tanh z$ is a monotonically decreasing function on $(0,+\infty)$ and $k_1 < k_N$. Then implies that $U (0) < 0$.
If inequality is strict, then the second assertion immediately follows from the asymptotic formula .
In the case of equality in , the asymptotic formula must be extended to include the second-order term with respect to $d-h$ (see the second derivative above). Thus we obtain that $$\begin{aligned}
&& U (h) \sim (d-h)^2 \Bigg\{ \frac{\nu_N \, k_1^2 \, \cosh k_N d}{k_N} \,(\nu_N^W -
\nu_N) \, \left[ \, k_1 \, \cosh k_1 d - \nu_1 \, \sinh k_1 d \, \right] \\ && \ \ \
\ - \frac{\nu_1 \, k_N^2 \, \cosh k_1 d}{k_1} \, (\nu_1^W - \nu_1) \, \left[ \, k_N
\, \cosh k_N d - \nu_N \, \sinh k_N d \, \right] \Bigg\} \ \ \ \mbox{as} \ d-h \to
+0 .\end{aligned}$$ Applying the equality $\nu_N=\nu_1\,(k_N/k_1)^2$ along with formula , we write the expression in braces as follows: $$\nu_1 \, k_N \, k_1^{-1} \cosh k_N d \, \cosh k_1 d \left[
(\nu_N^W - \nu_N) \, (k_1^2 - \nu_1 \nu_1^W) - (\nu_1^W - \nu_1)\, (k_N^2 - \nu_N
\nu_N^W) \right] \, ,$$ and we have in the square brackets $$k_1^2 \, \nu_N^W - k_N^2 \, \nu_1^W + \nu_N^W \, \nu_1^W \, \nu_N - \nu_N^W \,
\nu_1^W \, \nu_1 + \nu_1^W \, \nu_N \, \nu_1 - \nu_N^W \, \nu_N \, \nu_1 \, .$$ Substituting $\nu_N=\nu_1\,(k_N/k_1)^2$, we see that this expression is the following quadratic polynomial in $\nu_1$: $$\left( \nu_1^W - \nu_N^W \right) (k_N/k_1)^2 \, \nu_1^2 + \nu_N^W \, \nu_1^W
\left[ (k_N/k_1)^2 - 1 \right] \nu_1 + \nu_N^W\, k_1^2 - \nu_1^W \, k_N^2 \, .$$ Its first and third coefficients are negative (for the latter one this follows from formula because it is equal to the expression in the square brackets multiplied by a positive coefficient). On the other hand, the second coefficient is positive. Therefore, the last expression is negative when $\nu_1 >0$, which implies that the right-hand side of the last asymptotic formula is negative. This completes the proof of the second assertion.
Immediate consequences of Lemma 2 are the following two corollaries.
\[corol4\] If inequality $\eqref{-}$ holds, then equation $\eqref{eqh1}$ for $h$ $($and the inverse sloshing problem for a two-layer fluid occupying $W)$ either has no solution or has more than one solution.
Inequality implies that $U (0) < 0$ and $U (h) < 0$ for $h < d$, but sufficiently close to $d$. Hence $U (h)$ either has no zeroes on $(0,d)$, or has more than one zero.
\[corol5\] Let $\nu_1$ and $\nu_N \in (\nu_1, \, \nu_N^W)$ be the smallest two measured sloshing eigenvalues for a two-layer fluid occupying $W=D\times (-d,0)$. Then a necessary condition that equation $\eqref{eqh1}$ has a unique solution $h$ is the simultaneous validity of the following two inequalities: $$\frac{\nu_1 \,k_N}{k_1} - \frac{\nu_N \,k_1}{k_N} > 0 \quad and \quad \nu_N^W \,
\nu_1 - \nu_N \, \nu_1^W < 0 . \label{NC}$$
Let equation have a unique solution on $(0,d)$. According to Corollary 4, inequality contradicts this assumption, and so the first inequality must hold. Then the asymptotic formula implies that $U (h) >0$ when $h < d$ and is sufficiently close to $d$. Hence the assumption that equation has a unique solution on $(0,d)$ implies that either the second inequality is true or $\nu_N^W \, \nu_1 = \nu_N\, \nu_1^W$. Let us show that this equality is impossible, which completes the proof.
Indeed, according to formula , the latter equality means that $U (0) =
0$, and so $$U (h) \sim h \left( \nu_N^W \, \nu_1^W - \nu_N \, \nu_1 \right) \left(
\frac{\nu_1 \, k_N}{k_1} - \frac{\nu_N \, k_1}{k_N} \right) \cosh k_N d \, \cosh k_1
d \ \ \mbox{as} \ h \to +0.$$ Here the formula for $U'$ is used along with and the fact that $\nu_N^W
\, \nu_1 = \nu_N \, \nu_1^W$. Since the first inequality is already shown to be true, we have that $U (h) >0$ when $h \neq 0$, but is sufficiently close to $+0$. Since we also have that $U (h) >0$ when $h < d$ and is sufficiently close to $d$, we arrive at a contradiction to the assumption that equation has a unique solution on $(0,d)$.
Now we are in a position to formulate the following assertion.
\[prop7\] Let $\nu_1$ and $\nu_N \in (\nu_1,\, \nu_N^W)$ be the smallest two sloshing eigenvalues measured for a two-layer fluid occupying $W=D\times (-d,0)$. If inequalities hold for $\nu_1$ and $\nu_N$, then either of the following two conditions is sufficient for equation $\eqref{eqh1}$ to have a unique solution $h\in (0,d):$
[(i)]{} $U' (h)$ vanishes only once for $h\in (0,d);$
[(ii)]{} $U'' (h) < 0$ on $(0,d)$.
Inequalities and formulae and imply that $U (0) <
0$ and $U (h) > 0$ for $h < d$ and sufficiently close to $d$. Then either of the formulated conditions is sufficient to guarantee that equation $\eqref{eqh1}$ has a unique solution on $(0,d)$.
It is an open question whether equation $\eqref{eqh1}$ can have more than one solution (consequently, at least three solutions), when inequalities are fulfilled.
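For completeness, here is a Python sketch of this root-finding step (ours, not part of the original analysis); it uses synthetic eigenvalues generated from a known pair $(h,\rho)$ with small $\rho-1$, so that $\nu_1=\nu_1^{(-)}$ and $\nu_N=\nu_N^{(-)}$, then locates a zero of $U(h)$ on $(0,d)$ and recovers $\rho$ from the linear system above:

```python
import numpy as np
from scipy.optimize import brentq

def nu_minus(k, d, h, rho):
    """Lower branch nu^(-) of the two-layer eigenvalues."""
    b = np.sinh(k*d) + (rho - 1)*np.cosh(k*h)*np.sinh(k*(d - h))
    D = b*b - 4*(rho - 1)*np.cosh(k*d)*np.sinh(k*h)*np.sinh(k*(d - h))
    return k*(b - np.sqrt(D))/(2*np.cosh(k*d))

def U(h, nu1, nuN, k1, kN, d):
    """Left-hand side of the transcendental equation for the interface depth h."""
    nu1W, nuNW = k1*np.tanh(k1*d), kN*np.tanh(kN*d)
    return ((nu1/k1)*(nu1W - nu1)*np.cosh(k1*d)*np.sinh(kN*(d - h))
            * (kN*np.sinh(kN*h) - nuN*np.cosh(kN*h))
            - (nuN/kN)*(nuNW - nuN)*np.cosh(kN*d)*np.sinh(k1*(d - h))
            * (k1*np.sinh(k1*h) - nu1*np.cosh(k1*h)))

# synthetic data: unit disc (k1, kN), depth d = 2, interface 0.7, rho = 1.1
k1, kN, d, h_true, rho_true = 1.8412, 3.0542, 2.0, 0.7, 1.1
nu1, nuN = nu_minus(k1, d, h_true, rho_true), nu_minus(kN, d, h_true, rho_true)

h = brentq(U, 1e-3, d - 1e-3, args=(nu1, nuN, k1, kN, d))   # zero of U on (0, d)
rho = 1.0 + (nu1/k1)*(k1*np.tanh(k1*d) - nu1)*np.cosh(k1*d)/(
      np.sinh(k1*(d - h))*(k1*np.sinh(k1*h) - nu1*np.cosh(k1*h)))
print(h, rho)   # expected to be close to (0.7, 1.1)
```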
Conclusions
===========
We have considered direct and inverse sloshing problems for a two-layer fluid occupying an open container. Several results obtained for the direct problem include:
\(i) A variational principle and its corollary concerning the inequality between the fundamental sloshing eigenvalues for homogeneous and two-layer fluids occupying the same bounded domain.
\(ii) An analysis of the behaviour of eigenvalues for containers with vertical walls and horizontal bottoms. It demonstrates that there are two sequences of eigenvalues with the same eigenfunctions corresponding to eigenvalues having the same number in each of these sequences. The elements of these sequences are expressed in terms of eigenvalues of the Neumann Laplacian in the two-dimensional domain which is the horizontal cross-section of the container.
\(iii) In the particular case of an infinitely deep container with vertical boundary, the eigenvalues and eigenfunctions for homogeneous and two-layer fluids are the same for any depth of the interface. This renders the inverse sloshing problem meaningless for a two-layer fluid occupying such a container.
The inverse sloshing problem for a two-layer fluid that occupies a container of finite constant depth with vertical walls is formulated as the problem of finding the depth of the interface and the ratio of fluid densities from the two smallest eigenvalues measured by observation at the free surface. This problem is reduced to two transcendental equations depending on the measured eigenvalues. There are two systems of such equations, and to obtain the appropriate one, the behaviour of the observed free surface elevation has to be taken into account. Sufficient conditions for the solvability of both systems have been found.
[99]{}
Abramowitz, M., Stegun, I.A. [*Handbook of Mathematical Functions*]{}. Dover, Mineola, NY: 1965. 1046 pp.
Bandle, C. [*Isoperimetric Inequalities and Applications*]{}. Pitman, London: 1980. 228 pp.
Berry, M. Singular limits. [*Physics Today*]{}. 2002. N 5. 10–11.
Cadby, J.R., Linton, C.M. Three-dimensional water-wave scattering in two-layer fluids. [*J. Fluid Mech.*]{} 2000. [**423**]{}. 155–173.
Courant, R., Hilbert, D. [*Methods of Mathematical Physics*]{}. Vol. **1**. Interscience, NY: 1953. xv+561 pp.
Fox, D.W., Kuttler, J.R. Sloshing frequencies. [*Z. angew. Math. Phys.*]{} 1983. [**34**]{}. 668–696.
Kopachevsky, N.D., Krein, S.G. [*Operator Approach to Linear Problems of Hydrodynamics*]{}. Birkhäuser, Basel–Boston–Berlin: 2001. xxiv+384 pp.
Karazeeva, N.A., Solomyak, M.Z. Asymptotics of the spectrum of the contact problem for elliptic equations of the second order. [*Selecta Math. Sovietica*]{} 1987. [**6**]{} (1). 151–161.
Kuznetsov, N., McIver, M., McIver, P. Wave interaction with two-dimensional bodies floating in a two-layer fluid: uniqueness and trapped modes. [*J. Fluid Mech.*]{} 2003. [**490**]{}. 321–331.
Lamb, H. [*Hydrodynamics*]{}. Cambridge University Press, Cambridge: 1932. xv+738 pp.
Linton, C.M., Cadby, J.R. Trapped modes in a two-layer fluid. [*J. Fluid Mech.*]{} 2003. [**481**]{}. 215–234.
|
---
abstract: 'We report the values of some thermal and electrical properties of Candelilla Wax (Euphorbia cerifera). The open-cell photoacoustic technique and another photothermal technique, based on measuring the temperature decay of a heated sample, were employed to obtain the thermal diffusivity ($\alpha_{s} = 0.026 \pm 0.00095 \, \mbox{cm}^{2}\mbox{/sec}$) as well as the thermal conductivity ($k=2.132 \pm 0.16 \, \mbox{W/mK}$) of this wax. The Kelvin null method was used to measure the dark decay of the surface potential of the sample after a Corona Discharge, obtaining a resistivity of $\rho_e=5.98 \pm 0.19 \times 10^{17} \, \mbox{ohm-cm}$.'
---
Some Thermal and Electrical Properties of Candelilla Wax
V. Dossetti-Romero, J. A. Méndez-Bermúdez, and E. López-Cruz
Instituto de Física, Universidad Autónoma de Puebla, Apartado Postal J-48, Puebla 72570, México
Introduction
============
In recent years a growing interest in the electronic and optical properties of organic materials has been shown [@1]. Some of these works have centered on particular aspects of the material under study [@2] as well as on certain applications [@3]. At the very beginning of insulator and electret research, one of the phenomena studied extensively was the Costa-Ribeiro effect [@4], investigated in materials that include natural and synthetic waxes as well as organic semiconductors [@5].
In this work we are interested in studying some electrical and thermal properties of Candelilla wax, a natural wax from a bush wildly grown in northern Mexico. We measured the thermal diffusivity, conductivity, and capacity of this material using the standard method of photoacoustic spectroscopy [@6] combined with the measurement of thermal decay of a cooling process in vacuum [@Hatta] to obtain the value of the product: mass density and thermal capacity. The electric conductivity is studied employing the Corona Discharge and the Kelvin null method [@7].
Experimental Details
====================
The samples were small pieces of candelilla wax fused and cooled in order to shape them as platelets of $\sim 500$ microns in thickness and with an area of $\sim 1\,\mbox{cm}^2$. The measured fusion temperature of this wax is $T_c\simeq 69^\circ \mbox{C}$, which is in good agreement with the $67-69^\circ \mbox{C}$ reported in the literature [@11].
Open-cell photoacoustic technique
---------------------------------
The photoacoustic technique used in this work was the open cell method widely reported in the literature [@8]. The thermal diffusivity $\alpha_s$, was measured using the experimental arrangement shown in Fig. 1. The sample is directly mounted onto a commercial electret microphone. The beam of a 170 W tungsten lamp was focused onto the sample and mechanically chopped. As a result of the periodic heating of the sample by the absorption of the modulated light, the microphone produces a signal that was monitored with a lock-in amplifier as a function of modulation frequency. The temperature reached by the periodically illuminated sample was $44^\circ \mbox{C}$, which is far below the fusion temperature $T_c$.
Description of the method used for measuring $\rho c$
-----------------------------------------------------
Fig. 2 shows the experimental set-up used for measuring the product of the mass density and the specific heat [@Hatta]. Prior to the measurements, both faces of the sample are sprayed with black paint in order to make its emissivity approximately equal to one. The sample is positioned inside the vacuum chamber with one of its faces (which we will call the front face) illuminated by the properly focused light beam of a 60 W tungsten lamp. The temperature of the back face (the non-illuminated face) of the sample is monitored with a Cu-constantan thermocouple connected to a temperature monitor while it increases up to its equilibrium value, about 26 degrees above room temperature. Later on, the light is interrupted and the temperature is recorded while the sample cools down to its equilibrium value at room temperature.
Corona discharge and Kelvin null method
--------------------------------------
The experimental set up for measuring the resistivity is a standard one [@7] as seen on Fig. 3. The sample was provided with an ohmic contact on the back face and mounted with the free surface upward, as shown in the figure. It was placed on an arm that could be moved on the horizontal plane. In one position (Fig. 3a) it was charged by a negative 30 kV corona discharge in air, and in the other (Fig. 3b) the surface potential was measured by a Kelvin method. The discharge was driven from the tip of a fine wire using a DC voltage amplifier. The tip was positioned about 5 mm above the sample. After charging the sample, this was moved into the measuring position under the tracing electrode. The electrode consists of a conducting metal plate of about the same size of the sample and was driven back and forth in a vertical motion by a mechanical setup at a rate of approximately 1 Hz over a path of 1 mm. The sample was positioned in such a way as to be 1 mm away from the tracing electrode plate at the closest approach. This vibrating capacitor gave an output signal that was detected using an electrometer. Then the signal was balanced to a null voltage when the two plates (electrode and sample) are at the same potential. The balancing voltage was driven by a variable high DC voltage power supply. All the system was shielded by a Faraday box.
Results and Discussion
======================
Measurement of the thermal diffusivity
--------------------------------------
In the open-cell photoacoustic technique, it is well known that the acoustic signal has two main contributions, one coming from the thermal diffusion phenomenon and the other one from the thermoelastic bending effect [@Rosencwaig; @new; @Perondi]. In order to differentiate which one of these two contributions dominates in generating the photoacoustic (PA) signal, one has to compare the experimental measurement with the expressions that describe those contributions. By means of this comparison we have found that the thermoelastic bending effect is predominant in the generation of the PA signal for samples of candelilla wax. Once the origin of the main contribution to the PA signal is identified, one can calculate the thermal diffusivity $\alpha_s$ from the modulation frequency dependence of the signal phase. For a thermally thick sample, the expression for the pressure fluctuations inside the PA chamber induced by the thermoelastic bending effect is $$p_{el} \simeq \frac{3 \alpha_{T} R^{4} \gamma P_{0} I_{0} \alpha_{s}}{4 \pi R_{c}^{2} l_{s}^{2} l_{g} k_{s} f}
\left[ \left( 1 - \frac{1}{x} \right)^{2} + \frac{1}{x^{2}} \right]^{1/2} e^{\, j \left[ \omega t \, + \,
\left( \pi/2 \right) \, + \, \phi \right]}
\label{eq:pressure}$$ where $\alpha_{T}$ is the linear thermal expansion coefficient, $R$ is the microphone inlet hole radius, $\gamma$ is the air specific heat ratio, $P_{0}$ is the ambient pressure, $I_{0}$ is the absorbed light intensity, $R_{c}$ is the radius of the PA chamber in front of the diaphragm, $f$ is the modulation frequency, $x = l_{s}(\pi f/\alpha_{s})^{1/2}$, $l_{i}$, $k_{i}$ and $\alpha_{i}$ are the length, thermal conductivity, and the thermal diffusivity of material $i$, with subscripts $g$ and $s$ standing for gas media and sample respectively, and $\tan \phi = 1/(x-1)$.
From equation (\[eq:pressure\]) one gets that the thermoelastic contribution to the PA signal amplitude, at high modulation frequencies ($x \gg 1$), varies as $f^{-1}$ and its phase $\phi_{el}$ approaches $90^{o}$ as $$\phi_{el} \simeq \pi/2 + \arctan[1/(\sqrt{b_s f}-1)],
\label{eq:phase}$$ here, $b_s$ is a fitting parameter. The other condition that must be fulfilled for an optically opaque sample to be thermally thick is $f \gg f_{c}$, where the cutoff frequency $f_{c}$ is given by $f_{c} = \alpha_{s} / (\pi l_{s}^{2})$. In a process where the main contribution to the PA signal comes from the thermal diffusion phenomenon, the almplitude and phase of the signal have a dependency on the frequency of the form $(1/f)\exp{-a_s\sqrt{f}}$ and $\phi_{th}=-(\pi/2)-a_s\sqrt{f}$, respectively, see Ref. [@Perondi]. Where $a_s$ is a fitting parameter which is related to the diffusivity $\alpha_s$ by $a_s = l_s\sqrt{\pi/\alpha_s}$. Figure 4 shows the amplitude and the phase of the PA signal for the candelilla wax sample. We can notice a good correspondence to the models presented before corresponding to the thermoelastic bending effect. In the case of the amplitude of the PA signal (Fig. 4a), it reproduces very well the $f^{-1}$ dependency with an exponent equal to $-1.0866$. Equation (\[eq:phase\]) was used to fit the results shown in Fig. 4b (continuous line) together with our measured values (clear circles) for the phase of the PA signal. It is possible to estimate the thermal diffusivity $\alpha_{s}$ from the fitting parameter $b_s$ considering the relationship $$\alpha_{s} = \frac{\pi l_s^2}{b_s},
\label{eq:alpha}$$ and using $l_{s}=643 \, \mu \mbox{m}$. In this case $f_{c} \simeq 2 \mbox{Hz}$, then from Fig. 4 we can see that we are in the thermally-thick-sample regime.
Finally we obtained the value $\alpha_{s} = 0.026 \pm 0.00095 \, \mbox{cm}^{2}\mbox{/sec}$ for the thermal diffusivity of candelilla wax.
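As an illustration of how equations (\[eq:phase\]) and (\[eq:alpha\]) yield $\alpha_{s}$ in practice, here is a minimal Python sketch; the phase data are synthetic (generated from the model itself with a little noise), standing in for the measured PA phase, and the noise level and frequency grid are our own choices:

```python
import numpy as np
from scipy.optimize import curve_fit

def phase_model(f, b_s):
    """Thermoelastic phase of eq. (phase): pi/2 + arctan[1/(sqrt(b_s f) - 1)]."""
    return np.pi/2 + np.arctan(1.0/(np.sqrt(b_s*f) - 1.0))

l_s = 643e-4                           # sample thickness, cm (643 micron)
b_true = np.pi*l_s**2/0.026            # b_s corresponding to 0.026 cm^2/s

rng = np.random.default_rng(0)
f = np.linspace(20.0, 300.0, 40)       # modulation frequencies, Hz (f >> f_c)
phase = phase_model(f, b_true) + rng.normal(0.0, 0.005, f.size)

(b_fit,), _ = curve_fit(phase_model, f, phase, p0=[1.0])
alpha_s = np.pi*l_s**2/b_fit           # eq. (alpha)
print(alpha_s)                         # ~0.026 cm^2/s
```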
Measurement of $\rho c$ and computation of the thermal conductivity
-------------------------------------------------------------------
When one of the faces of the sample is illuminated with a constant flux of light, as shown in Fig. 2, a departure from equilibrium between the frontal (illuminated) and back (monitored) faces of the sample is established. For the case where the thickness of the sample (including the two coats of black paint) $l$ is smaller than its transversal dimension, which is our case, this phenomenon can be described by a 1D equation. Thus, the energy conservation condition is
$$J_0 - \sigma T_1^4 - \sigma T_2^4 = \frac{d}{dt} \int_0^l \rho c T(x,t) dx,
\label{eq3}$$
where $J_0$ is the flux of incident light over the frontal face, $\sigma$ is the Stefan-Boltzmann constant, $T_1$ is the temperature of the frontal face, $T_2$ is the temperature of the back face, $\rho$ is the mass density of the sample and $c$ its specific heat at a constant pressure. In this equation we use explicitly the fact that the sample is painted with a thin coat of black paint that has an emissivity coefficient approximately equal to one [@Leon].
We define $\Delta T_i(t) = T_{i,max} - T_i(t)$, $(i=1,2)$, where $T_{1,max}$ and $T_{2,max}$ are the maximum temperatures reached by the frontal and back faces of the sample respectively, for long times when the equilibrium is reached. Substituting $\Delta T_i(t)$ in equation (\[eq3\]) and linearizing the resultant equation in terms of $\Delta T_i/T_i$ we obtain
$$J_0 - \sigma T_{1,max}^4 - \sigma T_{2,max}^4 + 4\sigma T_{1,max}^3\Delta T_1(t) + 4\sigma T_{2,max}^3\Delta T_2(t) = \frac{d}{dt} \int_0^l \rho c T(x,t) dx.
\label{linear}$$
The sum of the first three terms of this equation is equal to zero, since for long times the flux of incident radiation and the flux of emitted radiation cancel out each other. The integral on the right hand side can be written as
$$\frac{d}{dt} \int_0^l \rho c T(x,t) dx \approx \frac{\rho c l}{2} \frac{d}{dt} \left[ T_1(t)+T_2(t) \right] = -\frac{\rho c l}{2} \frac{d}{dt} \left[ \Delta T_1(t)+\Delta T_2(t) \right]
\label{integ}$$
using the fact that $c$ does not depend on position and is practically constant in the interval of a few degrees above room temperature. It is also a fact that for the values of $l$ and $J_0$ used in the laboratory it holds that $l \frac{dT(x,t)}{dt} \ll T_1(t) \cong T_2(t)$. From this condition we can assume that $\Delta T_1(t) \cong \Delta T_2(t)$; then, for a decrease of the temperature from $T_{2,max}$ to $T_{2,0}$ after the light is interrupted, equation (\[eq3\]) can be written as
$$8 \sigma T_{2,0}^3 \Delta T_2(t) = -\rho c l \frac{d\Delta T_2(t)}{dt}.$$
Substituting the definition for $\Delta T_2(t)$ and using the boundary conditions $\Delta T_2(0) = 0$ and $\Delta T_2(\infty)=T_{2,max}-T_{2,0}$, we obtain the solution
$$T_2(t) = T_{2,0} + \left( T_{2,max}-T_{2,0} \right) \exp(-t/\tau_d)
\label{solution}$$
for the decay of the temperature immediately after the illumination of the sample is interrupted. In this case the relaxation mean time $\tau_d$ is given by
$$\tau_d = \frac{\rho c l}{8 \sigma T_{2,0}^3}.
\label{rmt}$$
Figure 5 presents the evolution of the temperature of the sample as a function of time in a typical thermal decay experiment. From the time constant $\tau_d = 26 \pm 1 \, \mbox{s}$, which fits very well to the relationship given by equation (\[solution\]), one obtains the value of the product $\rho c$ from equation (\[rmt\]). In our case $\rho c = 820030.44 \, \mbox{J/m}^3\mbox{K}$ with an error of $\pm 3.89$ percent, where $T_{2,0} = 23.88^\circ \mbox{C}$ and $l=377 \mu \mbox{m}$.
In order to obtain the thermal conductivity $k$, we can use the well-known relationship
$$\alpha_s = \frac{k}{\rho c},
\label{relation}$$
which yields $k=2.132 \pm 0.16 \, \mbox{W/mK}$.
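As a quick illustration of this procedure, the minimal sketch below fits a (hypothetical) back-face temperature decay to equation (\[solution\]) and then evaluates $\rho c$ and $k$ from equations (\[rmt\]) and (\[relation\]); the sample data, initial guesses and variable names are placeholders, not the measured data of this work.

```python
import numpy as np
from scipy.optimize import curve_fit

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def decay_model(t, T_0, T_max, tau_d):
    """Equation (solution): exponential relaxation of the back-face temperature."""
    return T_0 + (T_max - T_0) * np.exp(-t / tau_d)

# Hypothetical measured decay of the back-face temperature (time in s, T in K)
t_data = np.linspace(0.0, 150.0, 50)
T_data = decay_model(t_data, 297.03, 301.5, 26.0) + np.random.normal(0.0, 0.05, t_data.size)

# Fit tau_d, as in the discussion of Fig. 5
(T_0, T_max, tau_d), _ = curve_fit(decay_model, t_data, T_data, p0=(297.0, 301.0, 20.0))

# Equation (rmt): rho*c = 8*sigma*T_{2,0}^3*tau_d / l
l = 377e-6                                # sample thickness in m
rho_c = 8.0 * SIGMA * T_0**3 * tau_d / l  # J m^-3 K^-1

# Equation (relation): k = alpha_s * rho * c
alpha_s = 0.026e-4                        # thermal diffusivity in m^2/s (0.026 cm^2/s)
k = alpha_s * rho_c                       # W m^-1 K^-1
print(f"tau_d = {tau_d:.1f} s, rho*c = {rho_c:.3g} J/m^3K, k = {k:.3g} W/mK")
```

With the placeholder values above the sketch reproduces the order of magnitude reported here ($\rho c \approx 8.2 \times 10^{5}\,\mbox{J/m}^3\mbox{K}$ and $k \approx 2.1\,\mbox{W/mK}$).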
Measurement of electrical resistivity
-------------------------------------
We consider a laminar sample with parallel surfaces (of thickness $l$ and surface area $A$) which, after being corona charged, behaves as an $RC$ circuit with decay time constant $\tau_e = RC$.
The resistance $R$ and the capacitance $C$ of the sample are given by $R=\rho_e(l/A)$ and $C=\xi(A/4\pi l) \times (1.1\times 10^{-12})$, where $\rho_e$ is the electrical resistivity and $\xi$ is the dielectric constant ($R$ and $C$ are given in ohms and farads, respectively; $A$ and $l$ are in centimeters). Then we have that $\tau_e = (\rho_e \xi /4\pi) \times (1.1\times 10^{-12}) \, \mbox{s}$. We can make the approximation $\xi \approx \pi$, valid for many materials [@10], and obtain
$$\rho_e = \tau_e \times 10^{12} \, \mbox{ohm-cm}.
\label{density}$$
It is known that the discharge of an $RC$ circuit as a function of time follows an exponential decay of the charge, $q(t) = q_0 \exp(-t/\tau_e)$. A typical experiment of the dark decay of the surface potential in a negatively charged candelilla wax sample is shown in Fig. 6, which is a plot of the surface potential as a function of time. Since the surface potential is proportional to the charge ($V=q/C$), the experimental results in Fig. 6 can be fitted to the relationship
$$V(t) = V_0 \exp(-t/\tau_e).
\label{sp}$$
The time constant obtained from these results was $\tau_e=166.17 \pm 5.48 \, \mbox{hours}$, and using equation (\[density\]) one obtains a resistivity of $\rho_e=(5.98 \pm 0.19) \times 10^{17} \, \mbox{ohm-cm}$.
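A similar minimal sketch, again with hypothetical measurements rather than the data of Fig. 6, fits the dark decay of the surface potential to equation (\[sp\]) and converts the fitted time constant into a resistivity via equation (\[density\]).

```python
import numpy as np
from scipy.optimize import curve_fit

def surface_potential(t, V_0, tau_e):
    """Equation (sp): exponential dark decay of the surface potential."""
    return V_0 * np.exp(-t / tau_e)

# Hypothetical surface-potential decay (time in hours, potential in volts)
t_h = np.linspace(0.0, 500.0, 60)
V = surface_potential(t_h, 1000.0, 166.0) + np.random.normal(0.0, 5.0, t_h.size)

(V_0, tau_e_hours), _ = curve_fit(surface_potential, t_h, V, p0=(900.0, 100.0))

# Equation (density): rho_e [ohm-cm] ~ tau_e [s] * 1e12
tau_e_seconds = tau_e_hours * 3600.0
rho_e = tau_e_seconds * 1e12
print(f"tau_e = {tau_e_hours:.1f} h  ->  rho_e = {rho_e:.2e} ohm-cm")
```

For a time constant of about 166 hours this gives $\rho_e \approx 6 \times 10^{17}\,\mbox{ohm-cm}$, consistent with the value quoted above.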
Conclusions
===========
Although candelilla wax has a wide range of industrial applications, some of its electric and thermal properties are not well understood. In some handbooks one can find the value of its dielectric constant, but not its thermal diffusivity, its thermal conductivity, nor its resistivity [@11]. The standard open-cell photoacoustic and thermal decay techniques very easily provide some of the above-mentioned physical properties. Using equations (\[eq:alpha\]) and (\[rmt\]) we obtained the thermal diffusivity $\alpha_s$ and the product $\rho c$, respectively, starting from the values obtained by fitting the data to equations (\[eq:phase\]) and (\[solution\]). One can calculate the heat capacity once the mass density $\rho$ of the sample has been measured. In our case the heat capacity was measured to be $c=754.71 \pm 29.35 \, \mbox{J/kgK}$ and the thermal diffusivity $\alpha_{s} = 0.026 \pm 0.00095 \, \mbox{cm}^{2}\mbox{/sec}$. The density measured in this work is $\rho = 1086.54 \, \mbox{kg/m}^3$. The thermal conductivity was obtained from relation (\[relation\]) to be $k=2.132 \pm 0.16 \, \mbox{W/mK}$.
Concerning the electric properties of candelilla wax, the time constant for the dark decay of the surface potential found in this work is evidence that we are dealing with a very high resistivity material. As we can see from Fig. 6, the dark decay of the surface potential follows quite well the behaviour of an insulator, modelled as a parallel plate capacitor as presented in Section 3.3. In this work we found, by fitting the data to equation (\[sp\]), that the time constant $\tau_e=166.17 \pm 5.48 \, \mbox{hours}$ corresponds to the resistivity given by equation (\[density\]).
We can also say that, for high resistivity materials, the combination of photoacoustic spectroscopy, the thermal decay method, and the dark decay of the surface potential is highly recommended, since several physical properties of this kind of material can be obtained in a very simple way.\
[**Acknowledgements.**]{} The authors thank Dr. J. L. Martínez for kindly providing the candelilla wax. This work was partially supported by CONACyT.
[99]{}
Proceedings of the [*$5^{th}$ International Symposium on Electrets*]{}, Heidelberg, 1985. Edited by [G. M. Sessler]{} and [R. Gerhardt-Multhaupt]{} (Available from IEEE Service Center, Piscataway, NJ, USA).
and [J. A. Giacometti]{}, Appl. Phys. Lett. [**32**]{}, 794 (1978).
and [J. West]{}, J. Acoust. Soc. Am. [**34**]{}, 1782 (1962); and [P. Murphy]{} and [F. Fraim]{}, J. Aud. Eng. Soc. [**16**]{}, 450 (1968).
, La Revue Scientifique [**86**]{}, 229 (1948).
Proceedings of the [*International Symposium on Electrets and Dielectrics*]{}, São Carlos, SP, Brasil, 1975. Edited by Academia Brasileira de Ciências, Rio de Janeiro, RJ, 1977, p. 413.
, [*Photoacoustic and Thermal Wave Phenomena in Semiconductors*]{}, North Holland, New York, 1987.
, Rev. Sci. Instrum. [**3**]{}, 292 (1979).
and [P. Willis]{}, J. Appl. Phys. [**39**]{}, 3731 (1968).
, 66th Edition, CRC Press Inc., USA, 1985.
, and [J. González]{}, J. of Food Science [**60**]{} No. 2, 1-5 (1995).
and [A. Gersho]{}, J. Appl. Phys. [**47**]{}, 64 (1976).
, and [J. L. Carrillo]{}, Ferroelectrics [**270**]{}, 93 (2002).
and [L. C. M. Miranda]{}, J. Appl. Phys. [**62**]{}, 2955 (1987).
and [L. Villaseñor]{}, Rev. Mex. Fís. [**44**]{}, 506 (1998).
and [R. Becker]{}, [*Electricity and Magnetism*]{}, Blackie and Son, London, 1952.
|
---
author:
- 'Yuan-Ting Hu$^{1}$'
- 'Jia-Bin Huang$^{2}$'
- 'Alexander G. Schwing$^{1}$'
bibliography:
- 'egbib.bib'
- 'alex.bib'
title: 'VideoMatch: Matching based Video Object Segmentation'
---
|
---
abstract: 'Predictive power allocation is conceived for power-efficient video streaming over mobile networks using deep reinforcement learning. The goal is to minimize the accumulated energy consumption over a complete video streaming session for a mobile user under the quality of service constraint that avoids video playback interruptions. To handle the continuous state and action spaces, we resort to the deep deterministic policy gradient (DDPG) algorithm for solving the formulated problem. In contrast to previous predictive resource policies that first predict future information with historical data and then optimize the policy based on the predicted information, the proposed policy operates in an on-line and end-to-end manner. By judiciously designing the action and state so that they only depend on slowly-varying average channel gains, the signaling overhead between the edge server and the base stations can be reduced, and the dynamics of the system can be learned effortlessly. To improve the robustness of streaming and accelerate learning, we further exploit the partially known dynamics of the system by integrating the concepts of safe layer, post-decision state, and virtual experience into the basic DDPG algorithm. Our simulation results show that the proposed policies converge to the optimal policy derived based on perfect prediction of the future large-scale channel gains and outperform the first-predict-then-optimize policy in the presence of prediction errors. By harnessing the partially known model of the system dynamics, the convergence speed can be dramatically improved.'
author:
- |
\
[^1]
bibliography:
- 'dongbib.bib'
title: 'Accelerating Deep Reinforcement Learning With the Aid of a Partial Model: Power-Efficient Predictive Video Streaming'
---
Deep reinforcement learning, energy efficiency, video streaming
Introduction
============
Mobile video traffic is expected to account for more than $75\%$ of the global mobile data by 2021, and video-on-demand (VoD) services represent the main contributor [@index2017global]. Video streaming over cellular networks enables mobile users to watch the requested video while downloading [@hanzo2007video; @yang2018dynamic; @choi2019markov]. To avoid video stalling for a user experiencing hostile channel conditions, a base station (BS) can increase its transmit power for ensuring that the video segment is downloaded before being played. This, however, may cause a significant increase in energy consumption, hence degrading one of the most important design metrics of cellular networks, namely energy efficiency (EE).
The dynamic nature of the wireless environment is mainly due to the user behavior, which has long been regarded as random and hence remains unexploited in the design of wireless systems. However, with the advent of big data analysis, the user behavior becomes predictable to some degree and hence can be exploited for predictive resource allocation (PRA). For example, by predicting the user trajectory [@zhang2018trajectory] and constructing a radio coverage map [@kasparick2015kernel], the future average channel gains in each *time frame*[^2] (TF) can be predicted up to a minute-level time horizon. Based on the predicted future channel gains, the BS can transmit more data in advance to the user’s buffer during the instances of good channel conditions.
By harnessing various kinds of future information, PRA has been shown to provide a remarkable gain in improving the EE of mobile networks during video streaming [@tsilimantos2016anticipatory; @abou2014energy; @atawia2017robust; @she2015context; @mobility; @GY18]. Assuming perfectly known future instantaneous channel gains, the trade-off between the required resources and the video stalling duration was investigated in [@tsilimantos2016anticipatory]. Assuming perfectly known future instantaneous data rates in each *time slot*[^3] (TS), the total number of TS for video streaming was minimized in [@abou2014energy] to save energy. Considering that future data rates cannot be predicted without errors, the predicted data rate is modeled as random variables with known average values and bounded prediction errors in [@atawia2017robust] for optimizing PRA. Since the future data rate of a user depends on the resource allocation, the rate prediction is coupled with PRA and the energy-saving potential of PRA cannot be fully exploited by the policy advocated in [@abou2014energy; @atawia2017robust]. Assuming known future average channel gains, the optimal PRA was derived in [@she2015context] for maximizing the EE of video streaming, and was extended to hybrid scenarios, where both real-time and VoD services coexist [@scy].
To employ these optimized PRA policies, an immediate approach is to first predict the future information by machine learning and then allocate resources by solving optimization problems based on the predicted information [@mobility; @GY18]. Such an approach operates in four phases. The first phase trains a predictor (say for predicting the future average channel gains) in an off-line manner using historical data [@zhang2018trajectory]. The second phase gathers data (say the locations along the user trajectory) for making a prediction after a user initiates a request. The third phase assigns radio resources to all the TFs or TSs in a prediction window at the start of the window. Finally, the BS allocates resources and transmits to the user in each TS according to the pre-assigned resources. When a user starts to play a video file, a central unit in the network has to gather data for making a prediction. Before future resources have been allocated with the predicted information, however, the BS has to serve the user in a non-predictive manner. Furthermore, the first-predict-then-optimize procedure is tedious, and the resultant PRA policy cannot be accurately matched to the dynamically fluctuating wireless environment. Moreover, the prediction accuracy degrades as the prediction horizon increases. A natural question is: can we optimize PRA in an on-line and end-to-end manner?
Reinforcement learning (RL) [@sutton1998reinforcement] can be invoked for on-line learning by interacting with dynamic environments. Recently, deep learning [@lecun2015deep] has been introduced as a breakthrough solution, heralding a new era for RL, namely deep reinforcement learning (DRL), which relies on the powerful family of deep neural networks (DNNs). With the aid of the new paradigm of mobile edge computing (MEC) [@hu2015mobile], DRL becomes capable of addressing various challenging problems [@DRL; @zhao2019deep; @zhang2019proactive; @liu2019DRL]. Yet, standard DRL algorithms are designed for dealing with entirely model-free tasks, whose convergence speed may still be unsatisfactory even upon adopting DNNs. Fortunately, for many wireless problems, a part of the dynamic model is known. Nevertheless, how to exploit such knowledge for accelerating DRL is an open question at the time of writing.
Against this background, we propose a DRL framework for optimizing predictive power allocation and illustrate how to accelerate DRL with the aid of a partial model. We consider a scenario where users travel across multiple cells covered by a MEC server during video streaming. The objective is to minimize the total average energy consumed for streaming under the throughput constraint that avoids video stalling. We formulate a Markov decision process (MDP) by judiciously designing the *action* and *state*, so that the policy can exploit the dynamics of the system without explicit prediction, whilst imposing a significantly reduced signaling overhead. To cope with the continuous nature of the state and action spaces, we rely on the deep deterministic policy gradient (DDPG) [@DDPG] to solve the MDP. To improve the robustness and accelerate the learning procedure, we tailor the basic DDPG algorithm for exploiting the partially known dynamics, by integrating the concepts of safe layer, post-decision state (PDS) and virtual experiences introduced in [@dalal2018safe; @mastronarde2011fast]. Our simulation results show that the proposed DRL-based policies converge to the optimal policy derived based on perfect future channel prediction and achieve lower energy consumption compared with the optimal PRA in the presence of prediction errors. By exploiting the partial knowledge on the dynamics, the interactions between agent and environment can be significantly reduced. Our major contributions are summarized as follows:
- Instead of directly regarding the transmit power in each TS as the action of the RL agent, we first derive the optimal power allocation policy in closed-form for an arbitrary given average data rate in each TF by exploiting the knowledge on the distribution of small-scale fading. In this way, we can set the average data rate as the action without loss of optimality, and hence the system’s state only depends on the slowly-varying average channel gains. This avoids millisecond-level information exchange between the MEC server and BSs, and makes it easier for the agent to learn the dynamics of the system.
- Inspired by the idea of safe RL designed for the situations where the safety of the agent (say a robot) is particularly important [@dalal2018safe], we design a safe layer for the actor network in DDPG for satisfying the throughput constraint. This avoids introducing a penalty term in the reward function, eliminates a hyper-parameter that requires sophisticated tuning, and hence improves the robustness of the policy learned. In contrast to [@dalal2018safe] that is designed for completely unknown environments, the safe layer in this work is derived in closed-form by exploiting the system’s partially known dynamics.
- Inspired by the idea of introducing PDS to accelerate Q-learning by dividing the system’s dynamics into known and unknown components [@mastronarde2011fast], we integrate PDS into DDPG and propose the amalgamated PDS-DDPG algorithm, which significantly reduces the number of unknown parameters in the DNNs. In contrast to [@mastronarde2011fast], the proposed PDS-DDPG algorithm beneficially harnesses DNNs and it becomes eminently suitable for learning in continuous state and action spaces.
- We further exploit the property that the unknown dynamics are independent of the known dynamics by generating virtual experiences based on historical data. By training relying on both virtual and real experiences, the convergence speed can be dramatically boosted.
- Our technique of integrating the PDS, safe layer and virtual experience into DRL is applicable to numerous wireless tasks for accelerating RL by exploiting a partial model of the system dynamics, provided that certain properties are satisfied.
The rest of the paper is organized as follows. In Section II, we introduce the system model. In Section III, we formulate the RL problem and solve it using DDPG. In Section IV, we exploit the partially known dynamics by integrating the concepts of safe layer, PDS and virtual experience into the basic DDPG. Our simulation results are provided in Section V, and finally, Section VI concludes the paper.
System Model
============
We consider a learning-aided network architecture [@calabrese2018learning] relying on MEC as shown in Fig. \[fig:arch\], where the BSs in an area are connected to a MEC server. The MEC server monitors and records the status of mobile users (say the channel conditions and buffer status) and BSs (say the consumed energy), which are then sent to the cloud server and stored in the database as training samples. A centralized learner within the cloud server learns the transmission policy based on the training samples stored in the database, and issues the learned policy to the MEC servers. Each MEC server stores the learned policy, based on which the MEC server sends instructions to the BSs to implement the transmission policy.
![Learning-aided cellular network architecture.[]{data-label="fig:arch"}](arch){width="60.00000%"}
Each user requesting a video file from the content sever may travel across multiple cells during the video streaming process. We assume that each user is associated with the BS with the strongest average channel gain, and each BS serves the associated users over orthogonal time-frequency resources. Since all the users in the considered network share the same network topology (e.g., BS locations), system configurations (e.g., maximal transmit power and transmission bandwidth), wireless channels (e.g., path loss and small-scaling fading distribution), and road topology, we consider a randomly chosen user and the learned policy is applicable to every user.
Transmission and Channel Models
-------------------------------
Each video file is partitioned into $N_{\rm v}$ segments, each of which is the minimal unit for video playback. The playback duration of each segment is further partitioned into $L_{\rm v}$ TFs, each with duration $\Delta T$. Each TF is divided into $N_{\rm s}$ TSs, each having a duration of $\tau$, i.e., $\tau = \Delta T/N_{\rm s} $, as shown in Fig. \[fig:time\]. The large-scale channel gains (i.e., average channel gains) are assumed to remain constant within each TF, but they may naturally change from one TF to another due to user mobility. The small-scale channel gains are assumed to remain constant within each TS, but they are independently and identically distributed (i.i.d.) among TSs.
![Video segment playback duration and channel variation.[]{data-label="fig:time"}](time){width="80.00000%"}
Let $\alpha_t g_{ti}$ denote the instantaneous channel gain between a user and its associated BS in the $i$th TS of the $t$th TF, where $\alpha_t$ and $g_{ti}$ denote the large-scale channel gain and the small-scale fading gain, respectively. Upon assuming perfect capacity achieving coding, the instantaneous data rate in the $i$th TS of the $t$th TF can be expressed as $$R_{ti} = W \log_2 \left(1 + \frac{\alpha_t g_{ti}}{\sigma^2} p_{ti}\right), \label{eqn:rate}$$ where $W$ is the transmission bandwidth, $\sigma^2$ is the noise power, and $p_{ti}$ is the transmit power in the $i$th TS of the $t$th TF.
Video Streaming and Power Consumption Model
-------------------------------------------
The video playback starts after the user has received the first video segment. To avoid stalling, each segment should be downloaded to the user’s buffer before playback. We assume that the buffer capacity is higher than the video file size, which is reasonable for contemporary mobile devices. Hence, no buffer overflow is considered. Then, the following throughput constraint should be satisfied $$\sum_{n=1}^{m} \sum_{t=(n-1)L_{\rm v} + 1}^{nL_{\rm v}} \sum_{i=1}^{N_{\rm s}} \tau R_{ti}\geq \sum_{n=2}^{m+1} S_n, ~ m = 1,\cdots,N_{\rm v} -1, \label{eqn:QoS}$$ where $\sum_{t=(n-1)L_{\rm v} + 1}^{nL_{\rm v}} \sum_{i=1}^{N_{\rm s}} \tau R_{ti}$ is the amount of data transmitted to the user during the playback of the $n$th segment, and $S_n$ is the size of the $n$th segment, which is known after the user issues a request.
The energy consumed by a BS for video transmission during the $t$th TF is modeled as [@energy] $$\label{energy-con}
E_t = \frac{1}{\rho} \sum_{i=1}^{N_{\rm s}} \tau p_{ti} + \Delta T P_{\rm c},$$ where $\rho$ reflects the power-efficiency of the power amplifier, and $P_{\rm c}$ is the power dissipated by the baseband and radio frequency circuits as well as by the cooling and power supply.
To find the best power allocation among TSs that minimizes the average energy consumption subject to the throughput and maximal power constraints, the problem can be formulated as
\[eqn:p0\] $$\begin{aligned}
{\sf P1}: \quad \min_{\{p_{ti}\}}~& \mathbb{E}_{\alpha_t, g} \left[ \sum_{t=1}^{(N_{\rm v} - 1)L_{\rm v}} \left(\frac{1}{\rho} \sum_{i=1}^{N_{\rm s}} \tau p_{ti} + \Delta T P_{\rm c} \right)\right] \label{eqn:obj}\\
s.t.~ & \sum_{n=1}^{m} \sum_{t=(n-1)L_{\rm v} + 1}^{nL_{\rm v}} \sum_{i=1}^{N_{\rm s}} \tau R_{ti}\geq \sum_{n=2}^{m+1} S_n, ~ m = 1,\cdots,N_{\rm v} -1 \\
& p_{ti} \leq P_{\max} ,~ \forall ~t, i,
\end{aligned}$$
where $\mathbb{E}_{\alpha_t, g}[\cdot]$ denotes the expectation taken over both the large-scale and small-scale fadings, while $P_{\max}$ is the maximal transmit power of each BS. The distribution of $\alpha_t$ depends on the user’s mobility pattern and $R_{ti}$ in the throughput constraint depends on $g_{ti}$ and $\alpha_t$, all of which are unknown in advance. Without prediction, it is impossible to solve problem $\sf P1$ at the beginning of video streaming. In the sequel, we resort to RL to find the solution.
Energy-Saving Power Allocation Based on DDPG
============================================
In this section, we establish a RL framework for $\sf P1$ and propose a policy learning algorithm.
A standard RL problem can be formulated as an MDP, where an agent learns how to achieve a goal from its interactions with the environment in a sequence of discrete time steps $t = 1, 2,\cdots, T$ [@sutton1998reinforcement]. At each time step $t$, the agent observes the state $\mathbf s_t$ of the environment and executes an action $ a_t$. Then, the agent receives a reward $r_t$ from the environment and transitions to a new state $\mathbf s_{t+1}$. The interaction of the agent with the environment is then captured by an experience vector $\mathbf e_t = [\mathbf s_t, a_t, r_t, \mathbf s_{t+1}]$. The agent learns a policy from its experiences for maximizing an expected return, which reflects the cumulative reward received by the agent during the $T$-time-step episode. The policy (denoted by $\pi$) determines which action should be executed in which state. The expected return is defined as $\mathbb E \left[\sum_{t=1}^{T-1} \gamma^{t-1} r_t\right]$, where $\gamma$ denotes the discount factor.
Reinforcement Learning Framework
--------------------------------
In our learning-aided network, the MEC and cloud servers jointly serve as the agent. A direct formulation of the RL problem is to regard the instantaneous power $p_{ti}$ allocated to the $i$th TS as the action. Then, the action depends on the instantaneous channel gain $\alpha_tg_{ti}$, which therefore should be included into the state. However, this incurs millisecond-level information exchange between the MEC server and each BS, which yields excessive signaling overhead. Furthermore, this makes it hard for the agent to learn a good policy, because $g_{ti}$ is hard to predict beyond the channel’s coherence time (i.e., the duration of a TS in the model considered).
In fact, $g_{ti}$ can be regarded as a multiplicative impairment imposed on $\alpha_t$ and hence $\alpha_tg_{ti}$ has a much higher dynamic range than $\alpha_t$. This inspires us to find the action and the state that only depend on $\alpha_t$.
### Action
In practice, it is not hard to evaluate the distribution of small-scale fading. Based on the distribution, we can first derive the optimal power allocation policy for each TF to minimize the average energy consumed in the TF to achieve an arbitrarily given average data rate. Then, by optimizing the average rate for each TF, we can obtain the optimal power allocation for the whole video streaming session to minimize the overall energy consumption. This suggests that we can select the average data rate of each TF as the action. In this way, the action and state for the RL agent are independent of $g_{ti}$.
Based on , the average energy consumption in the $t$th TF can be expressed as $$\label{aver-enegry}
\bar E_t = \mathbb{E}_{g}\left[\frac{1}{\rho} \sum_{i=1}^{N_{\rm s}} \tau p_{ti} \right] + \Delta T P_{\rm c},$$ where $\mathbb{E}_{g}[\cdot]$ denotes the expectation taken over small-scale fading. Then, the objective function of problem $\sf P1$ can be rewritten as $\mathbb{E}_{\alpha_t} \left[ \sum_{t=1}^{(N_{\rm v}- 1)L_{\rm v} } \bar E_t \right]$. For the $t$th TF, to achieve an arbitrarily given average rate $\bar R_t$, the optimal power allocation minimizing the average energy consumption in the $t$th TF can be found by solving the following problem,
$$\begin{aligned}
{\sf P2}:\forall t, \quad \min_{\{p_{ti}\}}~& \bar E_t \\
s.t. ~& \mathbb{E}_{g}\left[W\log_2\left(1 + \frac{\alpha_t}{\sigma^2}p_{ti} g_{ti}\right)\right] = \bar R_t \\
& 0 \leq p_{ti} \leq P_{\max}, ~\forall i,\end{aligned}$$
where $\mathbb{E}_{g}\left[W\log_2\left(1 + \frac{\alpha_t}{\sigma^2}p_{ti} g_{ti}\right)\right]$ is the average data rate in the $t$th TF. The optimal solution of $\sf P2$ is formulated in the following proposition.
The optimal power allocation policy in the $t$th TF is $$\begin{aligned}
p^{\rm opt}(\alpha_t g_{ti}; \xi_t) = \left\{\begin{array}{ll}
0, ~ \alpha_tg_{ti} \leq \frac{\sigma^2}{ \xi_t} \\
\xi_t - \frac{\sigma^2}{\alpha_tg_{ti}},~ \frac{\sigma^2}{ \xi_t} < \alpha_tg_{ti} < \frac{\sigma^2}{\xi_t - P_{\max}} \\
P_{\max}, ~\alpha_tg_{ti}\geq \frac{\sigma^2}{\xi_t - P_{\max}},
\end{array}
\right. \label{eqn:popt}\end{aligned}$$ where the parameter $\xi_t$ can be obtained by solving the following equation $$\bar R_t = \int_{0}^{\infty} W \log_2\left( 1 + \frac{\alpha_t}{\sigma^2} p^{\rm opt}(\alpha_t g, \xi_t) g \right) \rho(g){\rm d}g \label{eqn:relation}$$ via bisection search, and $\rho(g)$ denotes the probability density function (PDF) of $g_{ti}$.
See Appendix A.
Let the function $\xi^{\rm opt}(\bar R_t)$ denote the relationship between $\xi_t$ and $\bar R_t$ found from , i.e., $\xi_t \triangleq \xi^{\rm opt}(\bar R_t)$, whose expression can be obtained for a special case in the following corollary.
For Rayleigh fading and a large value of $P_{\max}$, we have $$\xi^{\rm opt}(\bar R_t) = \frac{\sigma^2}{\alpha_t}\left[{\rm E}_1^{-1} \left(\frac{\bar R_t \ln 2}{W} \right)\right]^{-1}, \label{eqn:xiopt}$$ where ${\rm E}_1^{-1}(x)$ denotes the inverse function of the exponential integral function ${\rm E}_1(x) \triangleq \int_{x}^{\infty}\frac{e^{-t}}{t} \,{\rm d}t$.
See Appendix B.
Since we have obtained the optimal power allocation policy in the $t$th TF at an arbitrarily given average data rate $\bar R_t$, the original problem $\sf P1$ can be solved equivalently by first optimizing the average rate for each TF, i.e., $\{\bar R_t\}_{t=1,\cdots,(N_{\rm v} - 1)L_{\rm v}}$, and then obtaining the optimal power allocation for each TF using . Given the relationship between $\xi_t$ and $\bar R_t$ (i.e., $\xi^{\rm opt}(\bar R_t)$) under the optimal power allocation policy (i.e., $p^{\rm opt} (\alpha_tg_{ti}; \xi_t)$), the MEC server only has to decide the [**action**]{} as $$a_t = \bar R_t.$$ Upon determining the action, the MEC server can compute $\xi_t$ by bisection search based on for the general case or by for the special case given in Corollary 1, followed by sending $\xi_t$ to the specific BS that the user is associated with. According to $p_{ti} = p^{\rm opt} (\alpha_tg_{ti}; \xi_t)$, the BS can adjust the transmit power in each TS of the $t$th TF. In this way, the agent interacts with the environment on a frame-by-frame basis (i.e., the time step is set as a TF on a second-level timescale as shown in Fig. \[fig:time\]) and the computational load of the MEC server can be reduced, while the BS can adjust the transmit power for each TS on a millisecond-level timescale.
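To make the mapping from the chosen action $\bar R_t$ to the per-TS transmit power concrete, the following minimal sketch finds $\xi_t$ by bisection over a Monte-Carlo estimate of the right-hand side of (\[eqn:relation\]) and then applies the policy (\[eqn:popt\]); Rayleigh fading is assumed, and the bandwidth, noise power, channel gain and target rate are placeholders rather than the simulation settings of this paper.

```python
import numpy as np

W = 10e6        # bandwidth in Hz (placeholder)
SIGMA2 = 1e-13  # noise power (placeholder)
P_MAX = 10.0    # maximal transmit power in W (placeholder)

def p_opt(alpha_g, xi):
    """Per-TS power allocation of Proposition 1 (eqn:popt): water-filling clipped to [0, P_MAX]."""
    return np.clip(xi - SIGMA2 / alpha_g, 0.0, P_MAX)

def avg_rate(alpha, xi, g_samples):
    """Monte-Carlo estimate of the average rate on the right-hand side of (eqn:relation)."""
    p = p_opt(alpha * g_samples, xi)
    return np.mean(W * np.log2(1.0 + alpha * g_samples * p / SIGMA2))

def xi_opt(alpha, rate_target, n_samples=100_000, n_iter=60):
    """Bisection search for xi_t such that the average rate matches rate_target
    (the target is assumed to be achievable under P_MAX)."""
    g = np.random.exponential(1.0, n_samples)  # Rayleigh fading -> exponential power gains
    lo, hi = 0.0, 1.0
    while avg_rate(alpha, hi, g) < rate_target and hi < 1e12:
        hi *= 2.0                              # grow the bracket until it covers the target
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if avg_rate(alpha, mid, g) < rate_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), g

# Example: the MEC server picks the average rate (the action); the BS derives per-TS powers
alpha_t = 1e-9                 # large-scale channel gain (placeholder)
rate_t = 5e6                   # action a_t = average rate in bit/s (placeholder)
xi_t, g = xi_opt(alpha_t, rate_t)
powers = p_opt(alpha_t * g[:5], xi_t)   # transmit powers for the first few TSs
print(xi_t, powers)
```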
### State
Since the average power consumed by video transmission depends on the large-scale channel gain, $\alpha_t$ should be included into the state. To help the agent implicitly predict the user’s mobility pattern, the state should also include the channel gains in the past $N_t$ TFs. Since different locations of a user may result in the same large-scale channel gain between the user and its associated BS, we further include the large-scale channel gains between the user and $(N_b-1)$ adjacent BSs. Let us now define a vector $\bm \alpha_{t} \triangleq [\alpha_{1, t}, \cdots, \alpha_{N_b, t}]$, where $\alpha_{n,t}$ is the large-scale channel gain between the user and the BS with the $n$th largest large-scale channel gain, and $\alpha_{1,t}$ is the large-scale channel gain between the user device and its associated BS (i.e., $\alpha_{1,t} = \alpha_t$). To meet the throughput requirement, the buffer status at the user should also be incorporated into the state. Let $B_t$ denote the amount of data remaining in the user’s buffer at the $t$th TF. The transition of $B_{t}$ obeys: $$B_{t+1} = B_t + \sum_{i=1}^{N_s}\tau R_{ti} - I (l_t = L_v)S_{n_t}, \label{eqn:B}$$ where $\sum_{i=1}^{N_s}\tau R_{ti}$ is the amount of data transmitted during the $t$th TF, $l_t \in [1, L_v]$ denotes the number of TFs that the current segment $S_{n_t}$ has been played without stalling (which reflects the playback progress of the current segment), $I(\cdot)$ is an indicator function that equals $1$ if its argument is true and $0$ otherwise. When $l_t = L_v$, the segment is completely played within the $t$th TF and the next segment should be played in the $(t+1)$th TF. We use $n_t$ to denote the index of the segment played in the $t$th TF and hence the size of the $n_t$th segment is $S_{n_t}$. The last term $-I (l_t = L_v)S_{n_t} $ of means that the $n_t$th segment is removed from the buffer when its playback is completed. The evolution of the playback process obeys: $$l_{t+1} =\left\{\begin{array}{ll}
l_t, &~\text{if}~S_{n_{t+1}} > B_{t+1}\text {, i.e., video stalls} \\
{\rm mod}(l_t, L_{\rm v}) + 1, &~\text{otherwise.} \\
\end{array}
\right. \label{eqn:lt}$$
As shown in , both $l_t$ and $S_{n_t}$ affect the transition of $B_t$ to $B_{t+1}$ and hence they should be included into the state. Finally, the [**state**]{} vector is designed as $$\mathbf s_t = [B_t, S_{n_t}, l_t, \bm \alpha_{t}, \bm \alpha_{t-1}\cdots, \bm \alpha_{t-N_t}]. \label{eqn:st}$$
### Reward
The [**reward**]{} for the agent is designed as $$r_t = - \sum_{i=1}^{N_{\rm s}} \tau p_{ti} - \lambda \max\{S_{n_{t+1}} - B_{t+1} , 0\}, \label{eqn:reward}$$ where $\sum_{i=1}^{N_{\rm s}} \tau p_{ti}$ is the transmit energy consumed in the $t$th TF, while $n_{t+1}$ is the index of the segment to be played in the next TF. The term $-\lambda \max\{S_{n_{t+1}} - B_{t+1} , 0\} $ imposes a penalty on the reward, when the amount of data in the user’s buffer is less than the size of the segment to be played (i.e., when playback stalls), $\lambda$ is the penalty coefficient, and $(S_{n_{t+1}} - B_{t+1})$ increases the impact of penalty, when there is less data in the buffer.
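The per-TF dynamics described above can be collected into a small environment-step routine. The sketch below is a simplified illustration that mirrors equations (\[eqn:B\]), (\[eqn:lt\]) and (\[eqn:reward\]) for a single TF; the function name, argument names and the omission of the channel-gain history update are assumptions made for the sake of the example.

```python
def env_step(B_t, S, n_t, l_t, served_bits, energy, L_v, lam):
    """One TF of the buffer/playback dynamics (eqn:B), (eqn:lt) and the reward (eqn:reward).

    B_t         : bits currently in the user's buffer
    S           : segment sizes [S_1, ..., S_Nv] (n_t is 1-based)
    n_t, l_t    : index of the segment being played and its playback progress in TFs
    served_bits : sum_i tau * R_ti delivered during this TF
    energy      : sum_i tau * p_ti consumed during this TF
    """
    # Buffer update (eqn:B): the segment is removed once its playback completes
    B_next = B_t + served_bits - (S[n_t - 1] if l_t == L_v else 0.0)

    # Segment to be played in the next TF (guarded at the end of the file)
    n_next = min(n_t + 1, len(S)) if l_t == L_v else n_t

    # Playback progress (eqn:lt): the video stalls if the segment is not fully buffered
    l_next = l_t if S[n_next - 1] > B_next else l_t % L_v + 1

    # Reward (eqn:reward): negative transmit energy plus a stalling penalty
    r_t = -energy - lam * max(S[n_next - 1] - B_next, 0.0)
    return B_next, n_next, l_next, r_t
```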
Transmission Policy Based on DDPG
---------------------------------
The state vector defined in lies in the continuous space. If $\mathbf s_t $ is discretized, then the number of possible states will be huge due to the combinatorial nature of the process, and the tabular-based RL (such as Q-learning [@sutton1998reinforcement]) encounters the curse of dimensionality. Moreover, the action $\bar R_t$ also lies in the continuous space. Value-based DRL methods such as deep Q-networks (DQNs) [@mnih2015human] are designed for discrete action space and hence they are not suitable. DDPG [@DDPG] is designed based on the actor-critic architecture and it is able to learn a continuous policy. In contrast to other actor-critic-based RL algorithms that employ a stochastic policy gradient, DDPG employs a deterministic policy gradient so that the gradient of the policy can be estimated more efficiently [@DDPG]. Therefore, we apply DDPG for solving the RL problem.
DDPG maintains two DNNs, namely the actor network $\mu (\mathbf s_t;\bm \theta_{\mu})$ and the critic network $Q(\mathbf s_t, a_t; \bm \theta_Q)$. The DNNs’ architecture in our framework is shown in Fig. \[fig:ddpg\]. The actor network at the MEC server specifies the current policy by deterministically mapping each state into a specific continuous action. The output of the actor network is then used for computing the parameter $\xi_t = \xi^{\rm opt} (\bar R_t)$ according to Proposition 1 or Corollary 1. Upon receiving $\xi_t$ from the MEC server, the BS controls the transmit power at each TS within the $t$th TF according to the policy $p^{\rm opt} (\alpha_tg_{ti}; \xi_t)$ based on the current instantaneous channel gain $\alpha_t g_{ti}$. The critic network stored at the cloud server is used for approximating the *action-value function*, $Q_{\mu}(\mathbf s_t, a_t) \triangleq \mathbb E \left[ \sum_{i=t}^{T} \gamma^{i-t} r_i \big| \mathbf s_t , a_t, \mu\right]$, which is the expected return achieved by policy $\mu$, when taking action $a_t$ under state $\mathbf s_t$.
During the interactions with the environment, the MEC server collects the experience $\mathbf e_t = [\mathbf s_t, a_t, r_t, \mathbf s_{t+1}]$ from the BSs and stores it in the database at the cloud server as $\mathcal{D} = \{\mathbf e_1, \cdots, \mathbf e_t \}$. Every $\Delta t$ TFs, a mini-batch of the experiences $\mathcal{B}$ is sampled from $\mathcal{D}$ to update the network parameters, representing an *experience replay* [@mnih2015human]. The parameters of the critic network are updated using batch gradient descent as $$\bm \theta_{Q} \leftarrow \bm \theta_{Q} - \frac{\delta_Q}{|\mathcal{B}|} \nabla_{\bm \theta_{Q}} \sum_{j\in \mathcal{B}} \left[ y_j - Q(\mathbf s_j, a_j;\bm \theta_Q) \right]^2, \label{eqn:Q}$$ where $\delta_Q$ is the learning rate of the critic network, while we have $y_j = r_j$ if all the segments have been transmitted to the user and $y_j = r_j + \gamma Q'(\mathbf s_{j+1}, \mu'(\mathbf s_{j+1}; \bm \theta_{\mu}'); \bm \theta_{Q}')$ otherwise. Furthermore, $Q'(\mathbf s, a; \bm\theta_{Q}')$ and $\mu'(\mathbf s;\bm \theta_{\mu}')$ are the target critic network and target actor network, respectively, which have the same structure as $Q(\cdot)$ and $\mu(\cdot)$. They are respectively updated by $\bm \theta_Q' \leftarrow \omega \bm\theta_Q + (1 - \omega) \bm \theta_{Q}'$ and $\bm \theta_\mu' \leftarrow \omega \bm \theta_\mu + (1 - \omega) \bm \theta_\mu'$ using a very small value of $\omega$ to stabilize the learning [@DDPG].
The parameters of the actor network are updated using the sampled policy gradient as $$\bm \theta_{\mu} \leftarrow \bm \theta_{\mu} + \frac{\delta_\mu}{|\mathcal{B}|} \sum_{j\in\mathcal{B}} \nabla_{a} Q(\mathbf s_j, a;\bm \theta_Q)|_{a = \mu(s_j;\bm \theta_\mu)} \nabla_{\bm \theta_\mu} \mu (\mathbf s_j;\bm \theta_{\mu}), \label{eqn:mu}$$ where $\delta_\mu$ is the learning rate of the actor network.
To find the optimal policy, the agent has to explore the action space during the interactions with the environment. A noise term is added to the output of the actor network [@DDPG] to encourage exploration, which is formulated as $a_t = \mu(\mathbf s_t; \bm \theta_{\mu}) + \mathcal{N}_t$. The detailed procedure of learning the transmission policy is formulated in Algorithm 1.
\[alg1\]
1. Initialize the critic network $Q(\mathbf s, a;\bm \theta_Q)$ and the actor network $\mu (\mathbf s;\bm \theta_{\mu})$ with random weights $\bm \theta_{Q}$, $\bm \theta_{\mu}$.
2. Initialize the target networks $Q'$ and $\mu'$ with weights $\bm \theta_{Q}' \leftarrow \bm \theta_{Q}$, $\bm \theta_{\mu}' \leftarrow \bm \theta_{\mu}$.
3. Initialize the replay memory $\mathcal{D}$, ${\tt done} \leftarrow 0$, ${\tt step\_cnt} \leftarrow 0$.
4. Observe the initial state $\mathbf s_1$.
5. Select action $a_t = \mu(\mathbf s_t; \bm \theta_{\mu}) + \mathcal{N}_t$, set $\bar {R}_t = a_t$ and $\xi_t = \xi^{\rm opt}(\bar R_t)$.
6. Allocate the transmit power according to .
7. Observe the reward $r_t$ and the new state $\mathbf s_{t+1}$; if all the segments have been transmitted, set ${\tt done} \leftarrow 1$.
8. Store the experience $[\mathbf{s}_t, a_t, r_t, \mathbf s_{t+1} ]$ in $\mathcal D$, $\tt step\_cnt \leftarrow step\_cnt + 1$.
9. Every $\Delta t$ TFs, randomly sample a mini-batch of experiences from $\mathcal{D}$ as $\mathcal{B} = \{ [\mathbf{s}_j, a_j, r_j,\mathbf s_{j+1}]\}$, and update the actor and critic networks according to and , respectively.
10. Update the target networks: $\bm \theta_{\mu}' \leftarrow \omega {\bm\theta}_{\mu} + (1-\omega) {\bm \theta}_{\mu}'$, ${\bm \theta}_{Q}' \leftarrow \omega {\bm\theta}_{Q} + (1-\omega) {\bm \theta}_{Q}'$.
11. Repeat steps 5-10 until ${\tt done} = 1$.
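For concreteness, a minimal PyTorch-style sketch of the critic update (\[eqn:Q\]) and the actor update (\[eqn:mu\]) is given below. The network sizes, learning rates and the shape of the mini-batch tensors are placeholders, and the safe layer and PDS refinements of Section IV are not yet included.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

state_dim, gamma, omega = 18, 0.99, 0.005   # placeholder dimensions/hyper-parameters
actor,  actor_target  = mlp(state_dim, 1), mlp(state_dim, 1)
critic, critic_target = mlp(state_dim + 1, 1), mlp(state_dim + 1, 1)
actor_target.load_state_dict(actor.state_dict())
critic_target.load_state_dict(critic.state_dict())
actor_opt  = torch.optim.Adam(actor.parameters(),  lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(batch):
    """One mini-batch update of the critic (eqn:Q) and the actor (eqn:mu)."""
    s, a, r, s_next, done = batch   # tensors of shape [B, state_dim], [B, 1], [B, 1], ...

    # Critic: regress Q(s, a) towards the bootstrapped target y_j
    with torch.no_grad():
        a_next = actor_target(s_next)
        y = r + gamma * (1.0 - done) * critic_target(torch.cat([s_next, a_next], dim=1))
    q = critic(torch.cat([s, a], dim=1))
    critic_loss = nn.functional.mse_loss(q, y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: deterministic policy gradient, i.e. ascend Q(s, mu(s))
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft update of the target networks with a small omega
    for net, tgt in ((actor, actor_target), (critic, critic_target)):
        for p, p_t in zip(net.parameters(), tgt.parameters()):
            p_t.data.mul_(1.0 - omega).add_(omega * p.data)
```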
PDS-DDPG with Safe Layer and Virtual Experience
===============================================
In this section, we exploit the partial knowledge concerning the dynamics of the system for improving the robustness and learning efficiency of the DDPG-based energy-saving power allocation, respectively by introducing the safe layer concept into the actor network and that of the post-decision state into the critic network.
We first characterize the knowledge available concerning the state transition and the corresponding contribution to the reward in the following.
When $\tau \ll \Delta T $, we have $$\mathrm{Pr} \left(\sum_{i=1}^{N_s} \tau p_{ti} = \Delta T \bar p_t\right) = 1~~\text{and}~~ \mathrm{Pr} \left(\sum_{i=1}^{N_s} \tau R_{ti} = \Delta T\bar R_t\right) = 1,$$ where $\bar p_{t}$ denotes the expectation of transmit power in the $t$th TF over the small-scale fading. Under the optimal power allocation policy, the relationship between $\bar p_{t}$ and $\bar R_t$ is $$\label{ave-pt}
\bar p_t = \bar{p}(\bar R_t) = \int_{\frac{\sigma^2}{\alpha_t\xi^{\rm opt}(\bar R_t)}}^{\frac{\sigma^2}{\alpha_t(\xi^{\rm opt}(\bar R_t) - P_{\max})}} \left(\xi^{\rm opt}(\bar R_t) - \frac{\sigma^2}{\alpha_tg_{ti}}\right) \rho(g) {\rm d}g + P_{\max}\int_{\frac{\sigma^2}{\alpha_t(\xi^{\rm opt}(\bar R_t) - P_{\max})}}^{\infty} \rho(g) {\rm d} g.$$ Particularly, for Rayleigh fading and a large value of $P_{\max}$, we have $$\label{ave-pt-Ray}
\bar p(\bar R_t) = \frac{\sigma^2}{\alpha_t }\left[ e^{-{\rm E}_1^{-1} \left(\frac{\bar R_t}{W} \ln 2\right)} \left[{\rm E}_1^{-1} \left(\frac{\bar R_t}{W} \ln 2\right)\right]^{-1} - \frac{\bar R_t}{W}\ln 2\right].$$
See Appendix C.
For mobile users in wireless networks, the small-scale fading gains change much faster than average channel gains, hence the condition of $\tau \ll \Delta T$ holds. Proposition 2 indicates that the energy to be consumed by the BS in the $t$th TF (i.e., $\sum_{i=1}^{N_s} \tau p_{ti}$) and the amount of data to be received by the user within the $t$th TF (i.e., $\sum_{i=1}^{N_s} \tau R_{ti}$) converge almost surely to their expectations (i.e., the ensemble-average) $\Delta T \bar p_t$ and $\Delta T \bar R_t$, respectively. Further considering or , the sums of $\sum_{i=1}^{N_s} \tau R_{ti}$ and $\sum_{i=1}^{N_s} \tau p_{ti}$, which respectively contribute to a part of the state transition and a part of the reward as shown in and , can be pre-computed at the beginning of TF $t$, given an arbitrary action $\bar R_t$.
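Under the Rayleigh-fading special case, the closed-form expressions in Corollary 1 and Proposition 2 only require the inverse of the exponential integral ${\rm E}_1$, which can be evaluated numerically. The sketch below computes $\xi^{\rm opt}(\bar R_t)$ from (\[eqn:xiopt\]) and $\bar p(\bar R_t)$ from (\[ave-pt-Ray\]); the parameter values and the bracket used for the inversion are placeholders.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import brentq

def exp1_inv(y):
    """Numerical inverse of the exponential integral E_1 (E_1 is strictly decreasing)."""
    return brentq(lambda x: exp1(x) - y, 1e-12, 50.0)   # bracket assumed wide enough

def xi_and_avg_power(rate, alpha, W, sigma2):
    """Corollary 1 (eqn:xiopt) and the Rayleigh-fading average power (ave-pt-Ray), large P_max."""
    u = exp1_inv(rate * np.log(2.0) / W)                                  # E_1^{-1}(R ln2 / W)
    xi = sigma2 / (alpha * u)                                             # eqn:xiopt
    p_bar = (sigma2 / alpha) * (np.exp(-u) / u - rate * np.log(2.0) / W)  # ave-pt-Ray
    return xi, p_bar

xi_t, p_bar_t = xi_and_avg_power(rate=5e6, alpha=1e-9, W=10e6, sigma2=1e-13)
print(xi_t, p_bar_t)
```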
Safe Layer for Actor Network
----------------------------
In the basic DDPG algorithm of Section III, a penalty term is added to the reward function for ensuring the throughput constraint . This introduces an extra hyper-parameter $\lambda$, which has to be fine-tuned for striking a tradeoff between the energy minimization against the throughput guarantee. As a result, the performance of the learned policy is sensitive to the value of $\lambda$, which has to be re-tuned for re-balancing the tradeoff, whenever the segment size changes or the user moves along a different trajectory.[^4] To improve the robustness of the policy, we try to meet the throughput constraint without the need for such an accurately-tuned hyper-parameter by exploiting our knowledge concerning the transitions of the user’s buffer state.
According to Proposition 2, by setting the average data rate as $\bar R_t$, the amount of data to be received by the user within TF $t$ can be pre-computed at the beginning of the TF as $\sum_{i=1}^{N_s} \tau R_{ti} =\Delta T \bar R_t$. Therefore, given the amount of data $\sum_{i=1}^{N_s} \tau R_{ti}$ that the user should receive within the $t$th TF for meeting the throughput requirement, the action in the $t$th frame is set as $$\bar{R_t} = \frac{\sum_{i=1}^{N_s} \tau R_{ti}}{\Delta T}. \label{eqn:least}$$
To guarantee the throughput constraint , we should ensure that the amount of data in the user’s buffer cover the size of the video segment to be played. According to , to ensure $B_{t+1} \geq S_{n_{t+1}}$, the least amount of data that should be received by the user within the $t$th TF is $\sum_{i=1}^{N_s} \tau R_{ti} \geq\max\{S_{n_{t+1}}- B_t + I(l_t = L_{\rm v})S_{n_t}, 0\}$, which yields $$\bar R_t \geq \frac{1}{\Delta T}\max\{S_{n_{t+1}}- B_t + I(l_t = L_{\rm v})S_{n_t}, 0\} \label{eqn:QoS2}$$ considering . To ensure that the executed action is “safe" in terms of satisfying the constraint $B_{t+1} \geq S_{n_{t+1}}$, we add an additional layer (termed as the *safe layer* [@dalal2018safe]) to the output of the original actor network $\mu (\mathbf s_t; \bm \theta_{\mu})$ to adjust the action as follows: $$a_t = \max\left\{\mu (\mathbf s_t; \bm \theta_{\mu}) + \mathcal{N}_t, \frac{1}{\Delta T}\max\{S_{n_{t+1}}- B_t + I(l_t = L_{\rm v})S_{n_t}, 0\} \right\}. \label{eqn:SL}$$ In this way, the penalty term in can be removed, and hence the hyper-parameter $\lambda$ is no longer needed.
Such a safe layer can also be inserted into other RL problems for satisfying the constraints, as long as the constraints can be equivalently transformed into the constraints on the action of each time step with known expressions, as exemplified by .[^5]
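In code, the safe layer amounts to a simple element-wise lower bound on the noisy actor output. A minimal sketch of (\[eqn:SL\]), with hypothetical variable names, is:

```python
def safe_action(mu_out, noise, B_t, S_next, S_cur, l_t, L_v, delta_T):
    """Safe layer (eqn:SL): clip the exploratory action to the minimum average rate
    that keeps the next segment fully buffered before its playback."""
    data_needed = max(S_next - B_t + (S_cur if l_t == L_v else 0.0), 0.0)
    r_min = data_needed / delta_T      # minimum average rate, cf. (eqn:QoS2)
    return max(mu_out + noise, r_min)
```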
Post-Decision State for Critic Network
--------------------------------------
General RL/DRL algorithms are applicable to the scenarios where the dynamics of the system, including the state transition probability distribution and the reward distribution, are completely unknown. However, for many problems in wireless networks, these dynamics are indeed partially known, as exemplified in Proposition 2 for the problem at hand. To exploit the knowledge available for accelerating learning, we introduce PDS to decompose the dynamics of the system into a known part and an unknown part. Specifically, PDS describes the intermediate state after the known dynamic takes place, but before the unknown dynamic takes place [@mastronarde2011fast].
### Estimating the action-value function with the post-decision state
Let $\tilde{\mathbf s}_t$ denote the PDS at the $t$th TF. For the problem considered, $\tilde{\mathbf s}_t$ is defined to characterize the buffer’s state, the size of the video segment to be played, and the playback progress of the video segment after the user receives the transmitted data within the $t$th TF, and to characterize the large-scale channel gains before transition. To augment our exposition, we rewrite the state vector in in the $t$th and $(t+1)$th TF before and after the defined PDS as follows:
\[eqn:PDS\] $$\begin{aligned}
\text{State at TF $t$: }&\mathbf s_t = [B_t, S_{n_t}, l_t, \bm \alpha_t, \cdots, \bm \alpha_{t-N_t}], \\
\text{PDS at TF $t$: }&\tilde{\mathbf s}_{t} \triangleq [B_{t+1}, S_{n_{t+1}}, l_{t+1}, \bm \alpha_t, \cdots, \bm \alpha_{t-N_t}], \label{eqn:PDS0}\\
\text{State at TF $t+1$: }&\mathbf s_{t+1} = [B_{t+1}, S_{n_{t+1}}, l_{t+1}, \bm \alpha_{t+1}, \cdots, \bm \alpha_{t-N_t + 1}].\end{aligned}$$
By introducing the PDS, the reward can be decomposed into two parts formulated as $r_t = r^{\rm k}_{t} + r^{\rm u}_{t}$, where $r^{\rm k}_{t} $ is the reward received from transition $\mathbf{s}_{t} \to \tilde{\mathbf s}_t$ and $r^{\rm u}_{t}$ is the reward received from transition $\tilde{\mathbf s}_t \to \mathbf{s}_{t+1}$. Let $\rho (\mathbf s_{t+1}, r_t | \mathbf s_t, a_t)$ denote the joint conditional PDF of $\mathbf s_{t+1}$ and $r_t$ when taking action $a_t$ at state $\mathbf s_t$, which characterizes the transition $\mathbf s_t \to \mathbf s_{t+1}$. If the transition $\tilde{\mathbf s}_t \to \mathbf{s}_{t+1}$ and $r^{\rm u}_t$ are independent from the action $a_t$ (which is true for the problem considered since the transition of large-scale channel gains is independent from $a_t$), we can decompose the joint conditional PDF into known and unknown components as $$\rho ( \mathbf s_{t+1}, r_t | \mathbf s_t, a_t) = \iint_{(\tilde{\mathbf s}_t, r_{t}^{\rm k})}\rho^{\rm k}\!\left(\tilde{\mathbf s}_t, r^{\rm k}_{t} \big| \mathbf s_t, a_t\right) {\rm d} r^{\rm k}_{t} \rho^{\rm u}\!\left(\mathbf{s}_{t+1}, r_t- r^{\rm k}_{t} \big| \tilde{\mathbf s}_t \right) {\rm d} \tilde{\mathbf s}_t,$$ where the conditional PDF accounting for the transition $\mathbf s_t \to\tilde{\mathbf{s}}_{t}$ (i.e., $\rho^{\rm k}(\tilde{\mathbf s}_t, r^{\rm k}_t |\mathbf{s}_{t}, a_t)$) is known (to be derived later), and the conditional PDF accounting for the transition $\tilde{\mathbf s}_t\to \mathbf{s}_{t+1}$ (i.e., $\rho^{\rm u} (\mathbf{s}_{t+1}, r_t- r^{\rm k}_{t} | \tilde{\mathbf s}_t )$) is unknown.
Let us define the *PDS-value function* of $\tilde{\mathbf s}_t$ as the expected accumulated reward achieved by policy $\mu$ started from $\tilde{\mathbf s}_t$, i.e., $V_{\mu}(\tilde{\mathbf s}_t) \triangleq \mathbb E \left[r^{\rm u}_t + \sum_{i=t+1}^{T-1}\gamma^{i-t} r_i \big| \tilde{\mathbf s}_t \right]$. Then, based on the factorization of the state transition by PDS as well as on the definitions of the action-value and PDS-value functions, the relationship between the PDS-value function $V_\mu(\cdot)$ and the action-value function $Q_\mu(\cdot)$ can be expressed as $$\begin{aligned}
V_{\mu }(\tilde{\mathbf s}_t) & = \iint_{(\mathbf s_{t+1}, r^{\rm u}_t)}\left[r_t^{\rm u} + \gamma Q_{\mu} (\mathbf s_{t+1}, \mu(\mathbf s_{t+1}))\right] \rho^{\rm u} \!\left(\mathbf{s}_{t+1}, r^{\rm u}_{t} \big| \tilde{\mathbf s}_t \right) {\rm d}r^{\rm u}_{t}{\rm d}\mathbf{s}_{t+1}, \label{eqn:VQ}\\
Q_{\mu}(\mathbf s_t, a_t) & = \iint_{(\tilde{\mathbf s}_t, r^{\rm k}_t)} \left[r^{\rm k}_{t} + V_{\mu}(\tilde{\mathbf s}_t) \right] \rho^{\rm k} \!\left(\tilde{\mathbf s}_t, r^{\rm k}_t |\mathbf s_t, a_t\right) {\rm d}r^{\rm k}_t {\rm d}\tilde{\mathbf s}_t. \label{eqn:QV}\end{aligned}$$
By substituting into and considering $r_t = r^{\rm k}_t + r^{\rm u}_t$ as well as $\rho(\mathbf{s}_{t+1}, r_{t} \big| \mathbf s_t, a_t ) = \iint_{(r^{\rm u}_t, \tilde{\mathbf s}_t)} \rho^{\rm k} (\tilde{\mathbf s}_t, r_t - r^{\rm u}_t |\mathbf s_t, a_t) \rho^{\rm u} (\mathbf{s}_{t+1}, r^{\rm u}_{t} \big| \tilde{\mathbf s}_t ) {\rm d} r^{\rm u}_t {\rm d} \tilde{\mathbf s}_t $, we arrive at $$Q_{\mu}(\mathbf s_t, a_t) = \iint_{(\mathbf s_{t+1}, r_t)}\left[r_t + \gamma Q_{\mu} (\mathbf s_{t+1}, \mu(\mathbf s_{t+1}))\right] \rho\left(\mathbf{s}_{t+1}, r_{t} \big| \mathbf s_t, a_t \right) {\rm d}r_{t}{\rm d}\mathbf{s}_{t+1}, \label{eqn:bellman}$$ which is actually the Bellman equation with respect to $Q_{\mu}$, based on which the critic network parameter $\bm \theta_{Q}$ is updated by the DDPG. Considering that can be derived from and , we can directly develop corresponding RL algorithm based on and rather than .
Specifically, since the transition PDF $\rho^{\rm k} \!\left(\tilde{\mathbf s}_t, r^{\rm k}_t |\mathbf s_t, a_t\right)$ can be derived, the action-value function $Q_{\mu}(\mathbf s_t, a_t)$ can be obtained by estimating the PDS-value function $V_{\mu}(\tilde{\mathbf s}_t)$ in the right-hand-side (RHS) of . Compared to directly estimating $Q_{\mu}(\mathbf s_t, a_t)$, estimating $V_{\mu}(\tilde{\mathbf s}_t)$ is more sample-efficient, since $V_{\mu}(\tilde{\mathbf s}_t)$ does not depend on the action and hence it has a lower dimension.
### Deriving the conditional PDF accounting for the transition $\mathbf s_t \to\tilde{\mathbf{s}}_{t}$
In what follows, we derive $\rho^{\rm k} (\tilde{\mathbf s}_t, r^{\rm k}_t |\mathbf s_t, a_t)$. In particular, we show that $ \tilde{\mathbf s}_t$ and $r^{\rm k}_t$ become deterministic given that the agent executes action $a_t$ at state $\mathbf s_t$. According to Proposition 2, we have $\sum_{i=1}^{N_s}\tau R_{ti} = \Delta T \bar R_t = \Delta T a_t$. Upon substituting this into , the first element of $\tilde{\mathbf s}_t$, i.e., the buffer state $B_{t+1}$, can be expressed as $$\tilde{\mathbf s}_t[1] = B_{t+1} = B_t + \Delta T a_t - I(l_t = L_{\rm v})S_{n_t} = \mathbf s_{t}[1] + \Delta T a_t - I(\mathbf{s}_{t}[3] = L_{\rm v})\mathbf{s}_{t}[2] \label{eqn:s1}$$ which is deterministic, given $\mathbf s_t$ and $a_t$. The second element of $\tilde{\mathbf s}_t$ is $\tilde{\mathbf s}_t[2] = S_{n_{t+1}}$, i.e., the size of the video segment to be played at the $(t+1)$th TF, which is also deterministic given $\mathbf s_t$, because the size of each segment is known after the user issues the video request. The third element of $\tilde{\mathbf s}_t$ is $\tilde{\mathbf s}_t[3] = l_{t+1}$, i.e., the playback progress of the video segment to be played at $t+1$. By substituting into , $l_{t+1}$ can be expressed as a deterministic function of $\mathbf s_t$ and $a_t$ as $$f_{\rm l}(\mathbf s_t, a_t) =\left\{\begin{array}{ll}
l_t, &~\text{if}~S_{n_{t+1}} > \mathbf s_{t}[1] + \Delta T a_t - I(\mathbf{s}_{t}[3] = L_{\rm v})\mathbf{s}_{t}[2] \\
{\rm mod}(l_t, L_{\rm v}) + 1, &~\text{otherwise.} \\
\end{array}
\right.$$ The rest of the elements in $\tilde{\mathbf s}_t$ are the same as $\mathbf s_t$. Finally, $\tilde{\mathbf s}_t$ can be expressed as a function of $\mathbf s_t$ and $a_t$ as $$\begin{aligned}
\tilde{\mathbf s}_t = \mathbf{f}_{\rm PDS} (\mathbf s_t, a_t) = [\mathbf s_{t}[1] + \Delta T a_t - I(\mathbf{s}_{t}[3] = L_{\rm v})\mathbf{s}_{t}[2], S_{n_{t+1}}, f_{\rm l}(\mathbf{s}_t, a_t), \mathbf{s}_t [4\!:]], \label{eqn:fpds}\end{aligned}$$ where $\mathbf s_t [x\!:]$ denotes the sliced vector containing the $x$th, $(x+1)$th element to the last element of $\mathbf s_{t}$.
Again, according to the definition of $\tilde{\mathbf s}_t$, we can obtain $r^{\rm k}_{t} = r_t$ and $r^{\rm u}_t = 0$. Furthermore, considering Proposition 2, we have $$\left\{\begin{array}{ll}r^{\rm k}_{t} & = r_t = \sum_{i=1}^{N_s} \tau p_{ti} = \Delta T \bar p ( a_t) \\
r^{\rm u}_t &= 0,
\end{array}\right. \label{eqn:r}$$ where both $r^{\rm k}_{t}$ and $r^{\rm u}_t$ are deterministic given $\mathbf{s}_t$ and $a_t$. Thus, the transition PDF $\rho^{\rm k} \!\left(\tilde{\mathbf s}_t, r^{\rm k}_t \big|\mathbf s_t, a_t\right)$ degenerates into $$\rho^{\rm k} \left(\tilde{\mathbf s}_t, r^{\rm k}_t \big| \mathbf{s}_t, a_t \right) = \delta\left(\tilde{\mathbf s}_t - \mathbf{f}_{\rm PDS} (\mathbf s_t, a_t) , r^{\rm k}_t - \Delta T \bar p ( a_t )\right), \label{eqn:rhok}$$ where $\delta(\tilde{\mathbf s}_t -x, r^{\rm k}_t- y)$ denotes the two-dimensional Dirac delta function defined as $\int_{\tilde{\mathbf s}_t} \int_{r^{\rm k}_t} \delta(\tilde{\mathbf s}_t -x , r^{\rm k}_t -y ) {\rm d} \tilde{\mathbf s}_t {\rm d} r^{\rm k}_t = 1$, and $\delta(\tilde{\mathbf s}_t - x, r^{\rm k}_t - y) = 0$ if $ \tilde{\mathbf s}_t \neq x$ or $r^{\rm k}_t \neq y$.
Finally, by substituting and into and , we can obtain the relationship between $V_\mu(\cdot)$ and $Q_\mu(\cdot)$ for the video streaming problem considered as $$\begin{aligned}
V_{\mu}(\tilde{\mathbf s}_t) & = \gamma\int_{\mathbf s_{t+1}} Q_{\mu} (\mathbf s_{t+1}, \mu(\mathbf s_{t+1})) \rho^{\rm u}\left(\mathbf{s}_{t+1} \big| \tilde{\mathbf s}_t \right){\rm d}\mathbf{s}_{t+1} \label{eqn:VQ2}\\
Q_{\mu}(\mathbf s_t, a_t) & = \Delta T \bar p \left( a_t \right) + V_{\mu}\left( \mathbf{f}_{\rm PDS} (\mathbf s_t, a_t) \right). \label{eqn:QV2}\end{aligned}$$ Observe from that multiple state-action pairs may lead to the same PDS.[^6] This suggests that if we directly estimate the PDS-value function on the RHS of , the action-value of multiple state-action pairs on the left-hand-side can be updated accordingly based on , which indicates the potential for accelerating the learning procedure. In Fig. \[fig:PDS\], we summarize the relationship between $\mathbf s_t$, $\tilde{\mathbf s}_t$ and $\mathbf{s}_{t+1}$.
![The relations between states $\mathbf s_{t}$, $\mathbf s_{t+1}$ and the PDS $\tilde{\mathbf s}_{t}$.[]{data-label="fig:PDS"}](PDS){width="80.00000%"}
### PDS-based DDPG algorithm
Again, we parameterize the transmission policy $\mu$ by a DNN having the parameter $\bm \theta_{\mu}$. Upon adding the safe layer defined by , the structure of the modified actor network $\mu_{\rm s}(\mathbf s_t, \mathcal{N}_t; \bm \theta_{\mu})$ is shown in Fig \[fig:pdsddpg-a\]. Based on , we use a DNN $V(\tilde{\mathbf s}_t;\bm \theta_{V})$ to approximate $V_{\mu}(\tilde{\mathbf s}_t)$ and then obtain the approximated $Q_{\mu }(\mathbf s_t, a_t)$ as $$Q(\mathbf s_t, a_t; \bm \theta_V) = \Delta T \bar p \left( a_t \right) + V\left( \mathbf{f}_{\rm PDS} (\mathbf s_t, a_t);\bm \theta_{V} \right), \label{eqn:Qv}$$ whose structure is shown in Fig. \[fig:pdsddpg-c\].
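A minimal PyTorch-style sketch of the modified critic (\[eqn:Qv\]) is given below; `f_pds` and `p_bar` stand for the known components $\mathbf{f}_{\rm PDS}(\mathbf s_t, a_t)$ and $\bar p(a_t)$ derived above, the class name and network size are assumptions, and the sign convention of the reward follows the text.

```python
import torch
import torch.nn as nn

class PDSCritic(nn.Module):
    """Modified critic of (eqn:Qv): only the PDS-value network V has unknown parameters;
    f_pds and p_bar are the known components from (eqn:fpds) and Proposition 2."""

    def __init__(self, pds_dim, f_pds, p_bar, delta_T, hidden=64):
        super().__init__()
        self.V = nn.Sequential(nn.Linear(pds_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))
        self.f_pds, self.p_bar, self.delta_T = f_pds, p_bar, delta_T

    def forward(self, s, a):
        s_tilde = self.f_pds(s, a)     # known transition s_t -> PDS s_tilde_t
        return self.delta_T * self.p_bar(a) + self.V(s_tilde)
```

For the actor gradient (\[eqn:dQv\]) to be computed by automatic differentiation, `f_pds` and `p_bar` should be implemented as differentiable tensor operations with respect to the action.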
Upon comparing Fig. \[fig:pdsddpg\] and Fig. \[fig:ddpg\], we can see that by exploiting the partially known dynamics, the structures of the actor and critic networks incorporate known components tailored to the system considered, i.e., the safe layer , $\mathbf{f}_{\rm PDS}(\mathbf s_t, a_t)$ and $\Delta T\bar p(a_t)$, instead of consisting solely of fully-connected layers with unknown parameters to be learned.
The benefits are twofold: 1) the PDS exploits partial information about the environment, so that fewer parameters of the critic network have to be learned; 2) updating a single PDS value provides information about the value of many state-action pairs, which further accelerates the learning procedure.
In Section V, we will show that given the modified structure, the number of nodes in the fully-connected layers can be significantly reduced, and hence the convergence can be accelerated.
Analogous to the update rule of $\bm \theta_{Q}$ in based on the Bellman equation , the update rule of $\bm \theta_V $ for the modified critic network can be obtained based on as $$\bm \theta_{V} \leftarrow \bm \theta_{V} - \frac{\delta_{V}}{|\mathcal{B}|}\nabla_{\bm \theta_{V}}\sum_{j\in \mathcal{B}}\left[y_j - V(\mathbf{f}_{\rm PDS} (\mathbf s_j, a_j);\bm \theta_{V})\right]^2, \label{eqn:Q2}$$ where we have substituted $\tilde{\mathbf s}_t = \mathbf{f}_{\rm PDS} (\mathbf s_t, a_t)$, $y_j = 0$ if all the segments have been transmitted to the user, while $y_j = \gamma Q'(\mathbf s_{j+1}, \mu'_{\rm s}(\mathbf s_{j+1}, 0;\bm \theta'_{\mu}); \bm \theta_{V}') $ otherwise. Again, $Q'(\cdot; \bm\theta_{V}')$ and $\mu'_{\rm s}(\cdot;\bm \theta_{\mu}')$ are the target critic network and target actor network, respectively, which have the same structure as $Q(\cdot; \bm\theta_{V})$ and $\mu_{\rm s}(\cdot; \bm\theta_{\mu} )$, and are respectively updated by $\bm \theta_V' \leftarrow \omega \bm\theta_V + (1 - \omega) \bm \theta_{V}'$ and $\bm \theta_\mu' \leftarrow \omega \bm \theta_\mu + (1 - \omega) \bm \theta_\mu'$.
From , we can arrive at $$\nabla_{a} Q(\mathbf s_t, a;\bm \theta_{V}) = \Delta T\nabla_{a}\bar p(a) + \nabla_{\tilde{\mathbf s}} V(\tilde{\mathbf s};\bm \theta_{V})\big|_{\tilde{\mathbf s} = \mathbf{f}_{\rm PDS} (\mathbf s_t, a)} \nabla_{a}\mathbf{f}_{\rm PDS} (\mathbf s_t, a). \label{eqn:dQv}$$ By substituting into , we can derive the update rule of $\bm \theta_{\mu}$ for the modified actor network as $$\begin{aligned}
\bm \theta_{\mu} \leftarrow \bm \theta_{\mu} + \frac{\delta_\mu}{|\mathcal{B}|} \sum_{j\in\mathcal{B}} \Big[&\Delta T\nabla_{a}\bar p(a)+ \nonumber \\
& \nabla_{\tilde{\mathbf s}} V(\tilde{\mathbf s};\bm \theta_{V}) \nabla_{a}\mathbf{f}_{\rm PDS} (\mathbf s_j, a)\Big]\Big|_{a = \mu_{\rm s}(\mathbf s_j,0; \bm \theta_{\mu}),~ \tilde{\mathbf s} = \mathbf{f}_{\rm PDS} (\mathbf s_j, a)} \nabla_{\bm \theta_\mu} \mu_{\rm s} (\mathbf s_j, 0; \bm \theta_{\mu}). \label{eqn:mu2}\end{aligned}$$
The relationship between the PDS-value function and the action-value function, i.e., and , are also applicable to other RL problems as long as: 1) Parts of the state transition are known; 2) The transition from PDS $\tilde{\mathbf s}_t$ to the next state $\mathbf{s}_{t+1}$ is independent from $a_{t}$. Therefore, the proposed approach of incorporating PDS into the DRL algorithm can be extended to other RL problems.
Virtual Experiences
-------------------
The analysis in the previous subsection suggests that the state transition $\mathbf s_t[1\!:\!3] \to \mathbf s_{t+1}[1\!:\!3]$ and the reward can be obtained in advance, given $\mathbf s_t$ and $a_t$. Further considering that the transition of the average channel gain does not depend on $\mathbf s_t[1\!:\!3]$ and $a_t$, we are able to generate virtual experiences based on historical traces of average channel gains recorded for previously served users. The virtual experiences can then be used for training the DNNs, thereby further accelerating the learning procedure while relying on fewer interactions with the environment.
Specifically, let $\bm h^{(j)} = [\bm \alpha_t^{(j)}]_{t=1-N_t, \cdots, T}$ denote a trace of the average channel gains of a previously served user recorded during a video streaming episode. From $\bm h^{(j)}$ we can generate an initial state for a virtual user as $\mathbf s_1 = [B_1, S_{n_1}, l_1, \bm \alpha_1^{(j)}, \cdots, \bm \alpha_{1-N_t}^{(j)}]$ and obtain the action output by the actor network as $a_1 = \mu_{\rm s}(\mathbf s_1, \mathcal{N}_1; \bm \theta_{\mu})$. Assuming that the average channel gains of the virtual user evolve in the same way as recorded in the historical trace $\bm h^{(j)}$, the next state and the reward can be directly computed as $\mathbf s_2 = [\mathbf{f}_{\rm PDS} (\mathbf s_1, a_1)[1\! : \!3], \bm \alpha_{2}^{(j)}, \cdots, \bm \alpha_{2-N_t}^{(j)}]$ and $r_1 = \Delta T \bar p \left( a_1 \right)$ based on and , respectively. This suggests that the agent can deduce how the episode continues for a given transmission policy and channel trace, and hence it can generate virtual experiences accordingly. Since $\bm h^{(j)}$ is a true channel trace sampled from the same wireless environment, the virtual experiences can be used for training both the actor and the critic networks.
To generate and exploit the virtual experiences, every time a real episode terminates, we store the channel trace into the channel trace memory $\mathcal{H}$ and randomly sample $K$ traces from $\mathcal{H}$ to generate $K$ virtual episodes. The virtual experiences are stored into the experience replay memory $\mathcal{D}$ so that they can be sampled together with the real experiences for training both the actor and critic networks. The whole learning procedure of PDS-DDPG using virtual experiences is shown in Algorithm \[alg2\].
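The following sketch illustrates how a virtual episode could be rolled out from a stored channel trace. It assumes the `f_pds`, `p_bar`, and `actor` callables of the earlier sketches; `make_initial_state` and `episode_finished` are hypothetical helpers, and the trace is assumed to be a NumPy array of recorded average channel gains.

```python
# Roll out one virtual episode: the known part of the state evolves through f_PDS,
# while the channel part is replayed from the recorded trace.
import numpy as np

def generate_virtual_episode(trace, actor, f_pds, p_bar, dT, n_hist):
    """trace: recorded average channel gains [alpha_{1-n_hist}, ..., alpha_T]."""
    experiences = []
    s = make_initial_state(trace[:n_hist + 1])           # hypothetical helper
    for t in range(n_hist, len(trace) - 1):
        a = actor(s)                                      # policy output (optionally with noise)
        s_known = f_pds(s, a)[:3]                         # buffer/segment/playback part
        s_next = np.concatenate([s_known, trace[t + 1 - n_hist:t + 2][::-1]])
        r = dT * p_bar(a)                                 # reward computed from the model
        experiences.append((s, a, r, s_next))
        s = s_next
        if episode_finished(s):                           # hypothetical termination check
            break
    return experiences
```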
\[alg2\]
Initialize modified critic and actor networks $Q(\mathbf s, a;\bm \theta_V)$ and $\mu_{\rm s} (\mathbf s, \mathcal{N}; \bm \theta_{\mu})$ with random weights $\bm \theta_{V}$, $\bm \theta_{\mu}$.
Initialize target networks $Q'$ and $\mu'$ with weights $\bm \theta_{V}' \leftarrow \bm \theta_{V}$, $\bm \theta_{\mu}' \leftarrow \bm \theta_{\mu}$.
Initialize replay memory $\mathcal{D}$, channel trace memory $\mathcal{H}$, $ \tt done \leftarrow 0$, ${\tt step\_cnt} \leftarrow 0$.
Observe initial state $\mathbf s_1$ from the environment, and initialize channel trace $\bm h \leftarrow [\bm \alpha_{1-N_t}, \cdots, \bm \alpha_1]$.
Select action $a_t = \mu(\mathbf s_t, \mathcal{N}_t; \bm \theta_{\mu}) $ with exploration noise, and set $\bar {R}_t = a_t$ and $\xi_t = \xi^{\rm opt}(\bar R_t)$.
Allocate transmit power according to .
Observe reward $r_t$, new state $\mathbf s_{t+1}$, and add $\bm \alpha_{t+1}$ into channel trace $\bm h \leftarrow [\bm h, \bm \alpha_{t+1}]$.
If the episode terminates, $ {\tt done} \leftarrow 1$.
Store $\mathbf e_t = [\mathbf{s}_t, a_t, \mathbf s_{t+1}]$ in $\mathcal D$, $\tt step\_cnt \leftarrow step\_cnt + 1 $.
Randomly sample a batch from $\mathcal{D}$, update the actor and critic networks according to and .
Update the target networks $\bm \theta_{\mu}' \leftarrow \omega {\bm\theta}_{\mu} + (1-\omega) {\bm \theta}_{\mu}'$, ${\bm \theta}_{Q}' \leftarrow \omega {\bm\theta}_{Q} + (1-\omega) {\bm \theta}_{Q}'$.
If ${\tt done} = 1$: store channel trace $\bm h$ in $\mathcal{H}$, set $\tt done \leftarrow 0$ and [**break**]{}.
Randomly sample a channel trace $\bm h^{(j)}$ from $\mathcal{H}$ and generate initial state $\mathbf s_1$.
Select action $a_t = \mu(\mathbf s_t, \mathcal{N}_t; \bm \theta_{\mu})$ with exploration noise.
Obtain $\bm \alpha_{t+1}$ from $\bm h^{(j)}$, and obtain next state $\mathbf s_{t+1} = [\mathbf{f}_{\rm PDS} (\mathbf s_t, a_t)[1\!:\!3], \bm \alpha_{t+1}^{(j)}, \cdots, \bm \alpha_{t-N_t + 1}^{(j)}]$.
If the virtual episode terminates, $ {\tt done} \leftarrow 1$.
Store $\mathbf e_t = [\mathbf{s}_t, a_t, \mathbf s_{t+1}]$ in $\mathcal D$, $\tt step\_cnt \leftarrow step\_cnt + 1 $.
Repeat steps 14$\sim$16.
If ${\tt done} = 1$: set ${\tt done} \leftarrow 0$ and [**break**]{}.
The way we generate virtual experiences can be extended to other RL problems, as long as the unknown dynamics are independent of the known dynamics. This is true for many problems in wireless networks, where the dynamics of the wireless channel depend neither on the action nor on the transition of the other elements in the state.
Simulation Results
==================
In this section, we evaluate the performance of the proposed DRL-based policies by comparing them to the benchmark policies via simulations.
Simulation Setup
----------------
We consider several scenarios, where users are moving along one or multiple roads across multiple cells, as shown in Fig. \[fig:layout\]. The distance between the adjacent BSs is $500$ m and the maximal transmit power of each BS is $46$ dBm. The noise power is $-95$ dBm/Hz and the transmission bandwidth of each user is $2$ MHz. Since the circuit power consumption is the same for all the policies considered, we only evaluate the transmit energy consumed by video streaming. The path loss is modeled as $35.3+37.6\log_{10} (d)$ in dB where $d$ is the distance between the user and BS in meters. The small-scale channel is Rayleigh fading. The playback duration of each video file is $150$ s and that of each segment is $10$ s. Each video segment has a size of $1$ MByte. The duration of each TF is $\Delta T = 1$ s and the duration of each TS is $\tau = 1$ ms.
Fine-Tuned Parameters in Algorithm 1 and Algorithm 2
----------------------------------------------------
The DNNs in our algorithms are tuned to achieve the best performance, which are as follows:
1. Algorithm 1: For the actor network, $\mu(\cdot)$ has four fully-connected layers, each with $600$ nodes. For the critic network, the state first goes through three fully-connected layers each having $600$ nodes and the action goes through a single fully-connected layer having $600$ nodes; their outputs are then concatenated and followed by two fully-connected layers each having $600$ nodes.
2. Algorithm 2: For the modified actor and critic networks, both $\mu(\cdot)$ and $V(\cdot)$ have three fully-connected layers, each with $200$ nodes. Based on these numbers of layers and nodes, the total number of unknown parameters to be learned in the modified networks is reduced roughly by a factor of $10$ compared to the original actor and critic networks of Algorithm 1. This indicates the potential of employing PDS and the safe layer in improving the convergence speed and reducing the computational complexity.
All the hidden layers of the above DNNs use rectified linear units (ReLU) as the activation function. The output layer of the critic and modified critic networks has no activation function. The network $\mu(\cdot)$ of both the actor and the modified actor networks employs $0.5\times [\tanh(x) + 1]$ as the output activation function, which normalizes the output within $[0, 1]$. For the state representation, we set $N_b = 2$ and $N_t = 2$. Since the elements of the input have different ranges and units, we normalize the large-scale channel gain as $45 + \log_2(\alpha_{b,t})$ before it is fed into the DNNs and employ batch normalization [@batch] for each fully-connected layer.
The Adam method of [@adam] is used for adjusting the learning rate during training, and the initial learning rates are $\delta_\mu = 10^{-3}$ and $\delta_Q = \delta_V = 10^{-4}$. The update rate for the target networks is $\omega = 2.5\times 10^{-3}$. We also add an $L_2$-norm regularization term with weight $10^{-3}$ to the loss function when training the critic network to avoid over-fitting. The mini-batch size for gradient descent is $|\mathcal B| = 128$. The update interval is set to $\Delta t = 4$. The discount factor is set to $\gamma= 1$ and the penalty coefficient for Algorithm 1 is set to $\lambda = 30$. The noise term $n_t$ obeys the Gaussian distribution with zero mean and a variance reduced linearly from $0.1$ to zero during the training phase, and remains zero during the testing phase.
Performance Evaluation
----------------------
We compare the proposed DRL-based policies to the following baselines:
- *Predict & Optimize*: This is the existing first-predict-then-optimize PRA policy, which first predicts the future average channel gains within a prediction window, and then determines the target average data rate $\bar R_t$ for each frame based on the predicted average channel gains for minimizing the accumulated energy consumption [@she2015context]. When the future average channel gains are perfectly predicted, this policy serves as the performance upper bound (and is hence called the “optimal policy” in the sequel). To obtain the predicted average channel gains, we train a fully-connected DNN for predicting the user locations and compute the future average channel gains based on the path loss model.
- *Non-predictive*: This is the existing power allocation policy operating without exploiting any future information. To satisfy the throughput constraint, the BS maintains the average data rate as $\bar R_t = S_{n_{t+1}}/L_{\rm v}$ so that the segment to be played in the next TF is downloaded in the current TF. Without using the predicted information, this policy can only minimize the average energy consumption in the current TF via the power allocation given by .
In what follows, we compare the performance of the policies in different scenarios.
### Same Road & Constant Speed
We first consider the scenario shown in Fig. \[fig:layout1\], where each user moves along the same road at a constant velocity of $16$ m/s. In this case, a well-trained DNN can perfectly predict the future locations of users and hence “*Predict & Optimize*" can achieve the optimal performance (with legend “*Optimal*").
![Impact of the safe layer, post-decision state and virtual experience on the learning curves, $K$ is the number of virtual episodes. The results are averaged over $30$ Monte Carlo trials with random initial user locations and over $300$ successive episodes.[]{data-label="fig:learning"}](learning){width="50.00000%"}
In Fig. \[fig:learning\], we show the learning curves of the proposed DRL-based policies. Considering that the major concern regarding the RL convergence speed lies in the number of interactions between the agent and the environment, the $x$-axis represents the number of real episodes. Since there is no video stalling after convergence (and hence no penalty on the reward), the negative of the converged return is the accumulated transmit energy consumption. We can see that the proposed DRL-based policies converge to the optimal policy and they outperform “*Non-predictive*" after convergence. By exploiting our knowledge concerning the system dynamics, PDS-DDPG (i.e., Algorithm 2) outperforms the basic DDPG (i.e., Algorithm 1) both in terms of the convergence speed and the average return achieved. Specifically, by employing PDS and the safe layer, “*PDS-DDPG $K=0$*" converges $2.5$ times faster than “*Basic DDPG*". By further training using both the real and virtual experiences with $K=1$ and $K=10$, “*PDS-DDPG $K=1$*" and “*PDS-DDPG $K=10$*" converge five times and $20$ times faster than “*Basic DDPG*", respectively.
In Fig. \[fig:constant\], we compare how the proposed policy and the baseline policies behave over time. The result is obtained from an episode after Algorithm 2 has converged. Observe from Fig. \[fig:constant\_policy\] that the large-scale channel gains vary periodically due to the change of the user-to-BS distance as the user moves along the road. The non-predictive policy has to maintain a constant average data rate for satisfying the throughput constraint, since no future information is exploited. By contrast, the DRL-based policy transmits more data when the large-scale channel gain is higher in order to save energy and behaves similarly to the optimal policy. In Fig. \[fig:constant\_power\], we compare the energy consumptions of the different policies. To maintain a constant average data rate, the non-predictive policy has to increase the transmit power in order to compensate for the decrease of the large-scale channel gain, which results in a higher accumulated energy consumption. By contrast, the DRL-based policy and the optimal policy allocate less power when the large-scale channel gain decreases, because more data have been transmitted to the user’s buffer in advance when the large-scale channel gains are higher.
### Multiple Roads & Random Acceleration
To show the applicability of the DRL-based policy in more complex scenarios, we consider the scenario of Fig. \[fig:layout2\], where the initial location of each user is randomly chosen from two roads at different distances from the BSs, and the users travel with random acceleration. The initial velocity of each user is $16$ m/s and each user’s acceleration in each TF is drawn from the Gaussian distribution with zero mean and standard deviation of $0.1$ m/s$^2$. The legitimate velocity of each user is restricted to $12$ m/s $\sim$ $20$ m/s. For “*Predict & Optimize*", a longer prediction window (say $150$ s, the same as the duration of video playback) incurs larger prediction error, which degrades the gain of PRA. For a fair comparison, the prediction window of “*Predict & Optimize*" is set to $60$ s.
In Fig. \[fig:100200\], we compare the performance and the policy behavior. Observe from Fig. \[fig:cdf\_100m\] and \[fig:cdf\_200m\] that the DRL-based policy achieves lower accumulated energy consumption than “*Predict & Optimize*" on both roads. Moreover, the CDF curve of the DRL-based policy is steeper than that of “*Predict & Optimize*", which suggests that the proposed policy is less sensitive to random user behavior. This is because the pre-determined average data rate of “*Predict & Optimize*" cannot promptly adapt to the real evolution of the large-scale channel gains caused by the user’s random acceleration, as shown in Fig. \[fig:100m\] and \[fig:200m\]. By contrast, “*PDS-DDPG*" learns a policy that can adjust the average data rate on-line in order to adapt to the channel variation on both roads. Upon comparing Fig. \[fig:cdf\_100m\] and \[fig:cdf\_200m\], we can see that the gain of “*PDS-DDPG*" over “*Predict & Optimize*" is higher when the user is moving along the 1st road. This is because the 1st road is closer to the BSs and hence the channel variation along the user trajectory is more significant. Consequently, “*Predict & Optimize*" is more sensitive to the prediction errors.
### Random Stop
Let us now consider the scenario of Fig. \[fig:layout3\], where the users may encounter a traffic light on the road. In this scenario, the initial locations of the users are uniformly distributed along the road and they may stop for $0\sim 60$ s upon encountering a red traffic light. Since both the instant at which and the duration for which a user stops are random, it is much harder to predict the future locations of a user over a minute-level prediction window.
In Fig. \[fig:stop\], we compare the performance and behavior of the proposed policy to that of the baseline policies. We can see from Fig. \[fig:cdf\_stop\] that “*Predict & Optimize*" consumes even more energy than “*Non-Predictive*", which is due to large prediction errors caused by the random stop, as shown in Fig. \[fig:stop\_policy\]. By contrast, “PDS-DDPG" can still learn a good policy to adapt to the channel variation.
Conclusions
===========
In this paper, we proposed a DRL-based policy for optimizing predictive power allocation for video streaming over mobile networks, aimed at minimizing the average energy consumption under the throughput constraint. We formulated the problem in an RL framework and resorted to DDPG to learn the policy. To reduce the signaling overhead between the MEC server and each BS, we judiciously designed the action and the state by exploiting the knowledge of the small-scale fading distribution. To improve the robustness and accelerate learning, we integrated the concepts of the safe layer, post-decision state, and virtual experience into the basic DDPG algorithm by exploiting the partially known dynamics of the system. We have also shown when these accelerating techniques can be extended to other RL problems. Our simulation results have shown that the proposed policy can converge to the optimal policy derived based on perfect future large-scale channel gains. When prediction errors exist, the proposed policy outperforms the *first-predict-then-optimize policy*. By exploiting the partial knowledge of the dynamics, the convergence speed can be dramatically improved.
Appendix A: Proof of Proposition 1 {#appendix-a-proof-of-proposition-1 .unnumbered}
==================================
Since the large-scale channel gain remains constant within a TF and the power allocation policy within a TF should adapt to the small-scale fading, $p_{ti}$ can be expressed as a function of $g_{ti}$ as $p_{ti} = p(g_{ti})$. Considering that $g_{ti}$ is i.i.d. among TSs, can be rewritten as $$\bar E_t =\mathbb{E}_{g_{ti}}\left[ \sum_{i=1}^{N_{\rm s}} \tau p (g_{ti}) \right] + \Delta T P_{\rm c} = \Delta T \left( \mathbb E_{g_{ti}} \left[p(g_{ti})\right] + P_{\rm c} \right), \label{eqn:Et}$$ Since the second term $P_{\rm c}$ in does not depend on the power allocation ${p}(g_{ti})$, problem ${\sf P2}$ is equivalent to the following problem,
$$\begin{aligned}
{\sf P3}:\forall t, \quad \min_{p(g_{ti})} ~ & \mathbb{E} \left[p(g_{ti})\right] \\
s.t. ~& \mathbb{E}_{g_{ti}}\left[W\log_2\left(1 + \frac{\alpha_t}{\sigma^2}p(g_{ti}) g_{ti}\right)\right] = \bar R_t \label{eqn:con1}\\
& 0 \leq p(g_{ti}) \leq P_{\max}, ~\forall g_{ti}, \label{eqn:con2}\end{aligned}$$
The Lagrangian function of problem $\sf P3$ can be expressed as $$\begin{aligned}
&\mathcal{L}(p(g_{ti}), \lambda_1(g_{ti}), \lambda_2(g_{ti}), \mu_t)=\nonumber \\
& \mathbb{E}_{g_{ti}}\left[ p(g_{ti}) - \lambda_1(g_{ti})p(g_{ti}) + \lambda_2(g_{ti})(p(g_{ti})\! - \!P_{\max}) + \mu_t \! \left(\bar R_t - W \log_2\left(1 + \tfrac{\alpha_t}{\sigma^2}p(g_{ti}) g_{ti}\right)\right) \right], \!\!\end{aligned}$$ where $\lambda_1(g_{ti}), \lambda_2(g_{ti})$ and $\mu_t$ are the multipliers associated with the inequality and equality constraints, respectively. The Karush-Kuhn-Tucker (KKT) conditions of problem ${\sf P3}$ are:
$$\begin{aligned}
\frac{\partial\mathcal{L}}{\partial p(g_{ti})} = \mathbb{E}_{g_{ti}}\left[ 1 - \lambda_1(g_{ti}) + \lambda_2(g_{ti}) - \frac{\xi_t}{\sigma^2(\alpha_tg_{ti})^{-1} + p(g_{ti})}\right] = 0 & \label{eqn:sta}\\
\lambda_1(g_{ti})p(g_{ti}) = 0&, ~\forall g_{ti} \label{eqn:com1}\\
\lambda_2(g_{ti})(p(g_{ti})-P_{\max}) = 0&, ~\forall g_{ti} \label{eqn:com2}\\
\eqref{eqn:con1}, \eqref{eqn:con2}, \lambda_1(g_{ti}) \geq 0, \lambda_2(g_{ti})\geq 0&, ~ \forall g_{ti}\end{aligned}$$
where $\xi_t = \frac{\mu_t W}{\ln 2}$. The stationary condition can be simplified to $$1 - \lambda_1(g_{ti}) + \lambda_2(g_{ti}) - \frac{\xi_t}{\sigma^2(\alpha_tg_{ti})^{-1} + p(g_{ti})} = 0, \label{eqn:sta2}$$
We first prove that $\xi_t > 0$. Assuming that $\xi_t \leq 0$, we can obtain $\lambda_1(g_{ti}) \geq 1 + \lambda_2(g_{ti})$ according to . Then, since $\lambda_2(g_{ti})\geq 0$, we have $\lambda_1(g_{ti}) \geq 1 > 0$. Based on , we have $p(g_{ti}) = 0, \forall g_{ti}$, which contradicts . Therefore, we have $\xi_t > 0$.
When $g_{ti}<\frac{\sigma^2}{\alpha_t \xi_t}$, we have $1 - \frac{\xi_t}{\sigma^2(\alpha_tg_{ti})^{-1} + p(g_{ti})}> 0$. In this case, according to , we can obtain $\lambda_1(g_{ti}) > \lambda_2(g_{ti})$. Further considering that $\lambda_2(g_{ti})\geq 0$, we have $\lambda_1 (g_{ti}) > 0$. Then, according to , we obtain $p(g_{ti}) = 0$. When $g_{ti} = \frac{\sigma^2}{\alpha_t \xi_t}$, we have $1 - \frac{\xi_t}{\sigma^2(\alpha_tg_{ti})^{-1} + p(g_{ti})}> 0$ if $p(g_{ti}) > 0$. However, from $1 - \frac{\xi_t}{\sigma^2(\alpha_tg_{ti})^{-1} + p(g_{ti})}> 0$, we can obtain $p(g_{ti}) = 0$ again based on and $\lambda_2(g_{ti})\geq 0$, which contradicts $p(g_{ti}) > 0$. Therefore, we have $p(g_{ti}) = 0$.
When $g_{ti} > \frac{\sigma^2}{\alpha_t(\xi_t - P_{\max})}$, we have $1 - \frac{\xi_t}{\sigma^2(\alpha_tg_{ti})^{-1} + p(g_{ti})} < 0$. In this case, according to , we can obtain $\lambda_2(g_{ti}) > \lambda_1(g_{ti})$. Further considering $\lambda_1(g_{ti})\geq 0$, we have $\lambda_2 (g_{ti}) > 0$. Then, according to , we can obtain $p(g_{ti}) = P_{\max}$. When $g_{ti} = \frac{\sigma^2}{\alpha_t(\xi_t - P_{\max})}$, we have $1 - \frac{\xi_t}{\sigma^2(\alpha_tg_{ti})^{-1} + p(g_{ti})} < 0$ if $p(g_{ti}) < P_{\max}$. However, from $1 - \frac{\xi_t}{\sigma^2(\alpha_tg_{ti})^{-1} + p(g_{ti})} < 0$ we can obtain $p(g_{ti}) = P_{\max}$ again based on and $\lambda_1(g_{ti})\geq 0$, which contradicts $p(g_{ti}) < P_{\max}$. Therefore, $p(g_{ti}) = P_{\max}$.
When $\frac{\sigma^2}{\alpha_t \xi_t} < g_{ti} < \frac{\sigma^2}{\alpha_t(\xi_t - P_{\max})}$, we have $\frac{\sigma^2}{\alpha_t g_{ti}} <\xi_t < \frac{\sigma^2}{\alpha_t g_{ti}} + P_{\max} $. In this case, if $p(g_{ti}) = 0$, according to , we can obtain $\lambda_2(g_{ti})>\lambda_1(g_{ti})$. Further considering that $\lambda_1(g_{ti})\geq 0$, we have $\lambda_2 (g_{ti}) > 0$. Then, according to , we have $p(g_{ti}) = P_{\max}$, which contradicts $p(g_{ti}) = 0$, and hence $p(g_{ti}) > 0$. Similarly, if $p(g_{ti}) = P_{\max}$, according to , we can obtain $\lambda_1(g_{ti})>\lambda_2(g_{ti})$. Further considering that $\lambda_2(g_{ti})\geq 0$, we have $\lambda_1 (g_{ti}) > 0$. Then, according to , we have $p(g_{ti}) = 0$, which contradicts $p(g_{ti}) = P_{\max}$. Hence, $p(g_{ti}) < P_{\max}$. Therefore, we have $0 < p(g_{ti}) < P_{\max}$. Consequently, we can obtain $\lambda_1(g_{ti}) = \lambda_2(g_{ti}) =0$. By substituting $\lambda_1(g_{ti}) = \lambda_2(g_{ti}) =0$ into , we have $p(g_{ti}) = \xi_t - \frac{\sigma^2}{\alpha_t g_{ti}}$.
Finally, by summarizing the above results and further considering the average rate constraint , Proposition 1 has been proved.
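For illustration, the piecewise power allocation implied by the KKT analysis above (zero below the lower threshold, water-filling in between, $P_{\max}$ above the upper threshold) can be evaluated numerically as follows; the default values of `sigma2` and `p_max` are assumptions of this sketch, not parameters from the paper.

```python
# Minimal numerical sketch of the capped water-filling rule derived above.
import numpy as np

def p_opt(g, alpha_t, xi_t, sigma2=1.0, p_max=10.0):
    """Optimal transmit power versus small-scale channel gain g (elementwise)."""
    water = xi_t - sigma2 / (alpha_t * np.asarray(g, dtype=float))
    # clipping to [0, p_max] reproduces the three regimes of the KKT solution
    return np.clip(water, 0.0, p_max)
```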
Appendix B: Proof of Corollary 1 {#appendix-b-proof-of-corollary-1 .unnumbered}
================================
For Rayleigh fading, we have $\rho(g) = e^{-g}$. When $P_{\max}$ is sufficiently high for the maximal transmit power constraints to be neglected, the optimal power allocation degenerates into $$\begin{aligned}
p^{\rm opt}(g_{ti}; \alpha_t, \xi_t) = \left\{\begin{array}{ll}
0, ~ g_{ti} \leq \frac{\sigma^2}{\alpha_t \xi_t} \\
\xi_t - \frac{\sigma^2}{\alpha_t g_{ti}},~ g_{ti} > \frac{\sigma^2}{\alpha_t \xi_t}, \\
\end{array}
\right.\end{aligned}$$ Then, degenerates into $
\bar R_t = W \int_{\frac{\sigma^2}{\alpha_t \xi_t}}^{\infty} \log_2 \left( \frac{\alpha_t\xi_t}{\sigma^2}g\right) e^{-g} dg = \frac{W}{\ln 2} {\rm E}_1\left(\frac{\sigma^2}{\alpha_t\xi_t}\right)
$, from which Corollary 1 can be proved.
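Since ${\rm E}_1(\sigma^2/(\alpha_t\xi_t))$ is monotone in $\xi_t$, the relation above can be inverted numerically to obtain $\xi^{\rm opt}(\bar R_t)$. A minimal sketch is shown below, assuming SciPy; the bracketing interval is an assumption chosen for illustration.

```python
# Invert R_bar = (W/ln 2) * E1(sigma2/(alpha*xi)) for xi by root finding.
import numpy as np
from scipy.special import exp1
from scipy.optimize import brentq

def xi_opt(R_bar, W, alpha, sigma2=1.0):
    f = lambda xi: (W / np.log(2)) * exp1(sigma2 / (alpha * xi)) - R_bar
    return brentq(f, 1e-9, 1e9)   # f is increasing in xi, so a wide bracket suffices
```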
Appendix C: Proof of Proposition 2 {#appendix-c-proof-of-proposition-2 .unnumbered}
==================================
Upon considering that $\Delta T = N_s \tau$, we obtain $
\sum_{i=1}^{N_s} \tau p_{ti} = \sum_{i=1}^{N_s} \frac{\Delta T}{N_s} p_{ti} =\Delta T \sum_{i=1}^{N_s} \frac{p_{ti}}{N_s}
$. When $\tau \ll \Delta T$, we have $N_s = \frac{\Delta T}{\tau} \to \infty$. Since $p_{ti}$ is a function of $g_{ti}$, which is i.i.d. among TSs, we can apply the law of large numbers to obtain $ \sum_{i=1}^{N_s} \frac{p_{ti}}{N_s} \overset{a.s.}{\rightarrow} \bar p_t$ and hence $\mathrm{Pr} \big(\sum_{i=1}^{N_s} \tau p_{ti} = \Delta T\bar p_t\big) = 1$. Similarly, we can obtain $\mathrm{Pr} \big(\sum_{i=1}^{N_s} \tau R_{ti} = \Delta T \bar R_t\big) = 1$. The average transmit power $\bar p_{t} = \int_{0}^{\infty} p^{\rm opt}(g; \alpha_t, \xi^{\rm opt}(\bar R_t)) \rho(g) dg$ can be derived from Proposition 1. Specifically, for Rayleigh fading and large transmit power, $\bar p_t$ can be expressed as $$\begin{aligned}
\bar p_t & = \int_{\frac{\sigma^2}{\alpha_t\xi^{\rm opt}(\bar R_t)}}^{\infty} \left(\xi^{\rm opt}(\bar R_t) - \frac{\sigma^2}{\alpha_t g}\right) e^{-g} {\rm d}g = \xi_t^{\rm opt}(\bar R_t) e^{-\frac{\sigma^2}{\alpha_t \xi^{\rm opt}(\bar R_t)}} - \frac{\sigma^2}{\alpha_t} {\rm E}_1 \left( \frac{\sigma^2}{\alpha_t \xi^{\rm opt}(\bar R_t)} \right). \label{eqn:barpt}\end{aligned}$$ By substituting into , Proposition 2 has been proved.
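The closed form above can be checked by a quick Monte Carlo experiment: for Rayleigh fading, $g\sim{\rm Exp}(1)$, so the sample mean of the capped-free power allocation should approach $\xi e^{-s/\xi} - s\,{\rm E}_1(s/\xi)$ with $s=\sigma^2/\alpha_t$. The numerical values below are assumptions chosen only for illustration.

```python
# Monte Carlo sanity check of the closed-form average transmit power.
import numpy as np
from scipy.special import exp1

rng = np.random.default_rng(0)
alpha_t, xi, sigma2 = 1e-6, 5.0, 1e-6       # assumed example values
s = sigma2 / alpha_t                        # equals 1.0 in this example
g = rng.exponential(1.0, size=10**6)
p_mc = np.mean(np.maximum(xi - s / g, 0.0))
p_cf = xi * np.exp(-s / xi) - s * exp1(s / xi)
print(p_mc, p_cf)                           # the two values should be close
```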
[^1]: D. Liu and L. Hanzo are with the University of Southampton, Southampton SO17 1BJ, UK (email: [d.liu, hanzo]{}@soton.ac.uk). J. Zhao and C. Yang are with Beihang University, Beijing 100191, China (e-mail: jianyuzhao\[email protected], [email protected]). This paper was presented in part at IEEE Globecom 2019 [@dongGC19].
[^2]: In this paper, a time frame refers to the duration of time, say one second, where the large-scale channel gain can be regarded as a constant, instead of the “video frames" that compose a video segment.
[^3]: A time slot typically has a duration of milliseconds within the channel’s coherence time, where the small-scale fading can be regarded as a constant.
[^4]: For conciseness, those simulation results are not provided.
[^5]: When the expressions are unknown, some approximation methods can be used as in [@dalal2018safe].
[^6]: For example, state $[B_t, S_{n_t}, l_t, \bm \alpha_t, \cdots, \bm \alpha_{t-N_t}]$ and $[B_t + \Delta T x, S_{n_t}, l_t, \bm \alpha_t, \cdots, \bm \alpha_{t-N_t}] $ transit to the same PDS if executing action $a_t$ and $a_t - x$, respectively.
---
author:
- 'Ladislav Kristoufek, Jiri Skuhrovec'
bibliography:
- 'procurement.bib'
title: Exponential and power laws in public procurement markets
---
Analyzing distributional properties of different phenomena in social and economic systems has become popular in recent years, ranging from the historically most popular wealth and income distributions [@Pareto1896; @Mandelbrot1961; @Slanina2004; @Coelho2008; @Fiaschi2012] to productivity [@Aoyama2010], city size [@Benguigui2007; @Cordoba2008; @Levy2009; @Giesen2010], firm size [@Stanley1995], growth [@Salinger1996] and bankruptcy [@Fujiwara2004], the internet [@Adamic2002], financial returns and volatility [@Mantegna1995; @Gabaix2003; @Gabaix2006], and traded volume [@Souza2006]; most recently, several social and economic phenomena have been analyzed with Internet-based measures [@Saavedra2011; @Preis2012; @Vespignani2009]. See [@Farmer2008; @Gabaix2008; @Lux2008] for recent reviews. One of the topics absolutely untouched by such a statistical analysis is the public procurement market. Analysis of this market is crucial from an economic, political and social point of view because huge sums of public money (collected from taxes) flow from the state to private firms every year. According to the OECD[^1] [@OECD2011], public procurements totaled to an average of 17 % of GDP[^2] in the OECD member countries, making the government and state-owned enterprises the most significant buyers in virtually every developed economy. Hence, the topic has very high economic relevance, yet the related research has been very sparse so far, mainly due to a low availability or quality of the relevant data. We have overcome this problem to some extent and have obtained a broad and reliable dataset. This paper takes a natural first step in its examination, one characteristic of statistical physics – while studying data distributions and statistical properties, we obtain economically relevant findings and directions for further research.
We start with several intuitive definitions. A *public procurement* (PP) is a specific procedure of purchasing goods and services, which is mandatory for various public institutions – municipalities, government bodies, state-owned enterprises, etc., jointly called *contracting authorities*. During the PP procedure (*a tender*), various companies place their *bids* – offers to provide the goods requested by the contracting authority for a specific price. One of these bids is then chosen by the contracting authority; we call the company which placed the winning bid either a *supplier* or a *winner*. In this paper, we study the distributional properties of three important quantities in the PP – the number of bidders, the total revenues of the individual suppliers and the total spendings of the contracting authorities.
We focus on two laws standardly observed across scientific disciplines – the exponential and power laws. Let us define a cumulative distribution function, *cdf*, as $F(x)=P(X\ge x)$. The power law is described as $F(x) \propto x^{-\alpha}$ with a power-law exponent $\alpha$ and is usually labeled as the Pareto law or distribution. The corresponding probability density function, $pdf$, is defined as $f(x)=\partial F(x)/\partial x \propto x^{-(\alpha+1)}$. The exponential law is then characterized by $F(x) \propto \exp(-\beta x)$ with an exponent $\beta$ and is often labeled as the Maxwell-Boltzmann distribution with an inverse temperature $\beta$ and the corresponding $pdf$ of $f(x)\propto \exp(-\beta x)$. The Pareto distribution is connected to the extensively analyzed Zipf’s law, which is a power law between a rank and some other variable important for the analyzed system. If $F_i$ is a magnitude of some variable and $r$ is a corresponding rank, then $F_i\propto r^{-\gamma}$ is the Zipf’s law. The Zipf’s law is usually considered only for the special case when $\gamma=1$. It turns out that the power-law exponent $\alpha$ and the Zipf’s law exponent $\gamma$ are inverse, i.e. $\alpha=1/\gamma$ [@Adamic2002]. As will be shown later, these two distributions are particularly important from an economic point of view because they are the entropy-maximizing distributions of extensive and non-extensive systems, which can be well connected to the PP market.
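As a purely illustrative sketch (synthetic data, not the procurement dataset), the power-law exponent $\alpha$ and the Zipf exponent $\gamma$ can be estimated from a sample by straight-line fits in log-log scale, and their inverse relation $\alpha=1/\gamma$ can be checked numerically:

```python
# Fit a power-law exponent from the empirical ccdf and a Zipf exponent from the
# rank-size plot; on Pareto-distributed data, alpha_hat should be close to 1/gamma_hat.
import numpy as np

def empirical_ccdf(x):
    x = np.sort(x)
    return x, 1.0 - np.arange(len(x)) / len(x)        # P(X >= x)

rng = np.random.default_rng(1)
sample = rng.pareto(1.2, 5000) + 1                    # synthetic Pareto(alpha = 1.2) data

x, F = empirical_ccdf(sample)
alpha_hat = -np.polyfit(np.log(x), np.log(F), 1)[0]   # slope of log F vs log x

ranks = np.arange(1, len(sample) + 1)
sizes = np.sort(sample)[::-1]
gamma_hat = -np.polyfit(np.log(ranks), np.log(sizes), 1)[0]
print(alpha_hat, gamma_hat, 1 / gamma_hat)            # expect alpha_hat ~ 1/gamma_hat
```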
The basic dataset[^3] has been obtained using web crawlers as a complete image of the public database ISVZ[^4], which contains all the Czech public tenders above a threshold of an expected price of $6*10^6$ CZK ($\approx$ €$240*10^3$ or \$$317*10^3$) for construction services and $2*10^6$ CZK ($\approx$ €$80*10^3$ or \$$106*10^3$) for all other procured goods or services. The full dataset underwent both automated and manual validity checks, assuring mostly proper identification of contracting authorities and winners, as well as validity of other data fields. The data has been cross-checked against the company registry and further enriched using other public databases. The dataset covers over 40,000 tenders from the period between 6/2006 and 8/2011. Due to features like the inclusion of small tenders, the coverage of a nationwide set of various tenders and, most importantly, good data quality (which is highly above the European standards of publication in TED[^5]), the robustness of the results is ensured. Additionally, since the dataset is of an almost unique quality in this field and since the examined procurements follow the standard EU directives[^6], our results are relevant at least Europe-wide, but due to similarities in various procurement regulations possibly also outside the EU – including the USA and Japan.
Let us now focus on the results for the number of bidders, total winner revenues and total contracting authority spendings.
The cumulative distribution function for the number of bidders is shown in Fig. \[fig1\]. Almost a perfect fit in a linear-log scale indicates that the $cdf$ of the number of bidders is very well described by the exponential distribution with $\beta \approx 0.27$, which is supported for both the $cdf$ and $pdf$. The most probable (the most frequent) number of bidders is a single bidder and the probability decreases exponentially. Approximately 95% of the public procurements have 10 or fewer bidders. However, there is no intuitive or even basic economic reason for such a distribution to occur. Later, we propose that such a distribution emerges in extensive systems with suitable constraints related to this specific problem.
Compared to the number of bidders for a specific contract, the total revenues and total spendings range widely[^7]. For the total revenues, the sums range from $2*10^6$ CZK up to $4*10^{10}$ CZK, and for the total spendings, the sums range from $2*10^6$ CZK to $1.5*10^{11}$ CZK. As the power law is defined only for one of the tails (it cannot hold for the whole distribution), we analyze the potential power law for the values above one standard deviation for both the total revenues and total spendings.
In Fig. \[fig2\], we show the $cdf$ and Zipf plot for the total revenues. Both the log-log specified charts imply power-law scaling with $\alpha \approx 1.24$, again with a practically perfect fit. Scaling in the Zipf plot indicates that, at least for the top 100 companies (with the highest revenues), the total revenues can be very well described by the Zipf’s law with $\gamma \approx 0.79$ and the distribution of revenues is hence not uniform. The emergence of such a scaling law indicates that the process is governed by a complex dynamics and interactions between competing agents. Such an interpretation is further developed later in the text. Similar behavior is observed for the total spendings on the public procurement contracts. Fig. \[fig3\] uncovers that the total spendings actually follow the exact Zipf’s law with $\alpha \approx \gamma \approx 1$, i.e. the authority with the second highest spendings has half the amount of the most spending authority, the third authority spends one third of the highest spendings, etc. Such a precise power law distribution again indicates that the whole process is governed by a complex dynamics and interactions between participating companies. Note that the documented power law exponents for revenues and spendings are not markedly different from the power laws observed for the income and wealth distributions [@Gabaix2008]. Also, as the Zipf’s exponent is higher for the total spendings than for the total revenues, we can state that the distribution of money related to the public procurement is less equal for the contracting authorities than for the competing firms, which is rather unexpected. This is also well documented by the Lorenz curve (not shown here), which uncovers that the top 10% of the competing firms obtain around 80% of the total public procurement money (and the top 1% of the firms still gets around 45% of the total amount), whereas for the contracting authorities, the top 10% of the authorities spent around 87% of the total amount (and the top 1% of the authorities is responsible for approximately 60% of spendings). These are well above the standard Pareto’s “80-20 rule” [@Pareto1971], where the top 20% of members of a specific group possess 80% of the total money amount (or, more generally, 20% of the causes are responsible for 80% of the results). Both the spendings on and the revenues from the public procurement programs are strongly concentrated.
The statistical accuracy of the power-law fits and the actual closeness of the empirical distributions to the power laws in the right tail have been tested with the procedure proposed by Preis *et al.* [@Preis2011]. Both for the total revenues and the total spendings, we simulate samples with the same number of observations, the same cut-off points and the estimated $\alpha$. The samples are simulated 10,000 times and, for each sample, the Kolmogorov-Smirnov test [@Stephens1974] is applied to the $cdf$. By doing so, we obtain the critical values and p-values to test whether the distributions of the revenues and spendings are close to being power-law distributed. The test statistics are 0.0007 and 0.0014 for the total spendings and revenues, respectively. The corresponding p-values are 0.7541 and 0.3820, respectively. Therefore, we cannot reject that the total revenues and total spendings follow the power-law distribution with the cut-off point at a unit of the corresponding standard deviation. This statistically supports the graphical evidence for the power laws presented in Figs. \[fig2\]-\[fig3\].
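A compact sketch of this bootstrap Kolmogorov-Smirnov procedure is given below. It is illustrative only: `tail` stands for the standardized data above the cut-off, and the sample counts and names are assumptions of this sketch rather than the exact implementation used for the results above.

```python
# Simulate Pareto samples with the estimated exponent, same size and cut-off, and
# compare the observed KS statistic against the simulated ones.
import numpy as np
from scipy import stats

def powerlaw_ks_pvalue(tail, alpha, x_min, n_sim=1000, seed=0):
    rng = np.random.default_rng(seed)
    cdf = lambda x: 1 - (x / x_min) ** (-alpha)          # Pareto cdf above the cut-off
    ks_obs = stats.kstest(tail, cdf).statistic
    ks_sim = np.empty(n_sim)
    for i in range(n_sim):
        sim = x_min * (1 - rng.random(len(tail))) ** (-1 / alpha)   # inverse-cdf sampling
        ks_sim[i] = stats.kstest(sim, cdf).statistic
    return np.mean(ks_sim >= ks_obs)   # p-value: share of simulated statistics >= observed
```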
There are several possible explanations for the observed distributions, which might be a subject of further research. As an example, the distribution of the spendings may partially follow from the population of the contracting authorities, 48% of which are municipalities whose population is well documented to follow the power law distribution [@Benguigui2007; @Cordoba2008]. Since the population is tightly connected with an economic turnover and thus the PP volume, this subpopulation may have a substantial effect on the overall distribution. However, the municipalities make roughly only 20% of the total spendings and have only 5 representatives among the 100 largest authorities, making it a rather weak explanation for the power law tail. Further on, we propose a more general mechanism which might cause the emergence of the power law scaling also across other authorities. The distribution of winners is, however, a much more interesting result as it emphasizes a massive inequality in the distribution of public money that has no straightforward economic justification. The underlying mechanics are likely to be connected with the fact that past won contracts contribute to the chance of winning a new contract – either formally as a reference, or through some informal advantage such as emerging clientelism or corruption ties. Studying this phenomenon on a more detailed level is certainly a fruitful area of research. In the rest of this paper, we propose an approach which results in the observed distribution through the much simpler means of entropy maximization given reasonable constraints.
Let $M$, $C$ and $Z$ stand for a total amount of money spent on PP, a total number of the firms with at least one won contract and a total number of the contracting authorities, respectively[^8]. Let us further define $C=\sum_{n=1}^N{c_n}$ and $Z=\sum_{k=1}^K{z_k}$, where $c_n$ is a number of companies with total revenues from the public procurements of some specific level $n$ and $z_k$ is a number of authorities with total spendings on the public procurements of some specific level $k$. Here, $n=1,\ldots,N$ and $k=1,\ldots,K$ are discrete levels of obtained or spent money, respectively. We denote $m_{c_n}$ as a specific amount of revenue of a supplier in $c_n$ so that $\sum_{n=1}^N{c_nm_{c_n}}=M$ and, in a similar way, $m_{z_k}$ is a specific amount spent on the public procurements for an authority in $z_k$ so that $\sum_{k=1}^K{z_km_{z_k}}=M$. Now, we can define a probability that a firm has a total revenue $m_{c_n}$ as $p(c_n)=c_n/C$ and a probability that an authority has spent a total of $m_{z_k}$ on the public procurements as $p(z_k)=z_k/Z$. Obviously, it holds that $\sum_{n=1}^N{p(c_n)}=\sum_{k=1}^K{p(z_k)}=1$, $p(c_n)\ge 0$ and $p(z_k)\ge 0$ for all $k$ and $n$, which is needed for a probability measure.
Such a framework provides enough information to analyze the probability distributions maximizing the entropy of the system. For simplicity, we choose the supplier side of the transaction so that we work with the variables $M$, $C$, $c_n$, $m_{c_n}$, $p(c_n)$ and $N$. Using the definition of $p(c_n)$, we can rewrite the restrictions on $c_n$ as restrictions on probabilities, i.e. $\sum_{n=1}^N{p(c_n)}=1$ and $\sum_{n=1}^N{p(c_n)m_{c_n}}=M/C$. In economics, it is usually assumed that the system is in equilibrium or very close to it. In physics, such a system can be analyzed with the use of entropy, and the entropy-maximizing (the most probable) configuration of the distribution is found through a solution of a Lagrangian given constraints, which is parallel to the maximum likelihood approach used in economics [@Aoyama2010]. An important aspect of the system's description is extensivity, i.e. whether the parts of the system are independent (or only weakly dependent/interacting) or strongly dependent/interacting. Therefore, we consider both the extensive and non-extensive systems to see whether the optimization under the given constraints leads to the distributions observed in the public procurement market. For the extensive systems, we utilize the Shannon's entropy and for the non-extensive systems, i.e. systems with strongly interacting particles, we utilize the Tsallis' entropy.
Starting with the extensive systems, we maximize the Shannon's entropy [@Shannon1948] $S=-\sum_{n=1}^N{p(c_n)\log(p(c_n))}$ with constraints $\sum_{n=1}^N{p(c_n)}=1$ and $\sum_{n=1}^N{p(c_n)m_{c_n}}=M/C$, yielding the Lagrangian $L_1$:
$$\begin{gathered}
\label{L1}
L_1=-\sum_{n=1}^N{p(c_n)\log(p(c_n))}-\lambda_1\left(\sum_{n=1}^N{p(c_n)}-1\right)-\\
\kappa_1\left(\sum_{n=1}^N{p(c_n)m_{c_n}}-\frac{M}{C}\right)\end{gathered}$$
The maximization of $L_1$ with respect to $p(c_n)$ gives $p(c_n)=e^{-\kappa_1 m_{c_n}+\lambda_1-1}$, where $\kappa_1$ and $\lambda_1$ are Lagrange multipliers respecting the restrictions or, in economic terms, the sensitivities with respect to the given constraints. Interestingly, $\kappa_1$ characterizes the sensitivity to changes in the average revenue earned by the firms. Therefore, the maximization of the entropy of the extensive system yields the Maxwell-Boltzmann (exponential) distribution with an inverse temperature given as the Lagrange multiplier $\kappa_1$.
Considering the non-extensive systems, we maximize the Tsallis' entropy [@Havrda1967; @Tsallis1988] $S_q=(1-\sum_{n=1}^N{p(c_n)^q})/(q-1)$, where $q$ is an entropic index, with the same constraints, and the Lagrangian $L_2$ is given as:
$$\begin{gathered}
L_2=\frac{1-\sum_{n=1}^N{p(c_n)^q}}{q-1}-\lambda_2\left(\sum_{n=1}^N{p(c_n)}-1\right)-\\
\kappa_2\left(\sum_{n=1}^N{p(c_n)m_{c_n}}-\frac{M}{C}\right)\end{gathered}$$
Here, the maximization of $L_2$ with respect to $p(c_n)$ yields $p(c_n)=\left(\frac{q-1}{q}\lambda_2+\kappa_2 m_{c_n}\right)^{-\frac{1}{q-1}}$, where again $\kappa_2$ and $\lambda_2$ are the Lagrange multipliers respecting the restrictions. Therefore, the maximization of the entropy in the non-extensive system yields the Pareto (power law) distribution. This is indeed what we have observed for the total revenues of the participating firms, and this process can thus be well described as emerging from a non-extensive system with strongly interacting particles. In the same way, this can be shown for the contracting authorities and the distribution of their total spendings, where we also found the power law distribution. Note that $q>0$ is a measure of non-extensiveness and the further $q$ is from one, the more non-extensive the system is. For $q\rightarrow 1$, the Shannon's entropy is recovered (an extensive system) and the resulting distribution is also exponential as for $L_1$ in Eq. \[L1\].
In a very similar way, we can approach the distribution of the number of bidders. We have $W$ tenders and $K$ distinct values of the number of competing firms for a single tender. Let $w_k$ be the number of tenders with $b_k$ bidders, where $k=1,\ldots,K$, so that $\sum_{k=1}^K{w_k}=W$. The probability that there are $b_k$ bidders for a tender is $p_k=w_k/W$ and it obviously holds that $\sum_{k=1}^K{p_k}=1$ and $p_k\ge 0$ for all $k$. Adding a constraint on the average number of bidders for a single tender, $\sum_{k=1}^K{p_kb_k}=F/W$, which follows from a restriction on the total number of bidding firms for all contracts $F$ defined as $\sum_{k=1}^K{w_kb_k}=F$, we can again construct the Lagrangian form $L_3$ maximizing the Shannon's entropy
$$\begin{gathered}
L_3=-\sum_{k=1}^K{p_k\log p_k}-\lambda_3\left(\sum_{k=1}^K{p_k}-1\right)-\\
\kappa_3\left(\sum_{k=1}^K{p_kb_k}-\frac{F}{W}\right),\end{gathered}$$
yielding a probability distribution function $p_k=e^{-\kappa_3b_k-1+\lambda_3}$, i.e. the Maxwell-Boltzmann distribution, which is indeed observed for the number of bidders in our dataset. The number of bidding firms for a contract thus seems to be generated from an extensive system with no or only weak interactions between participants.
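The derivation above can also be checked numerically: maximizing the Shannon entropy subject to a fixed mean reproduces the exponential form, i.e. $\log p_k$ becomes affine in $b_k$. The sketch below is illustrative only; the levels $b_k$ and the target mean are assumptions, not values estimated from the dataset.

```python
# Constrained entropy maximization over discrete bidder levels; the fitted slope of
# log p_k versus b_k approximates -kappa of the Maxwell-Boltzmann form derived above.
import numpy as np
from scipy.optimize import minimize

b = np.arange(1, 21)                  # bidder levels b_k = 1..20 (assumed)
mean_target = 1 / 0.27                # roughly the mean implied by beta ~ 0.27

def neg_entropy(p):
    return np.sum(p * np.log(p + 1e-12))

cons = ({'type': 'eq', 'fun': lambda p: p.sum() - 1},
        {'type': 'eq', 'fun': lambda p: p @ b - mean_target})
res = minimize(neg_entropy, np.full(len(b), 1 / len(b)),
               bounds=[(1e-9, 1)] * len(b), constraints=cons)
print(np.polyfit(b, np.log(res.x), 1))   # slope ~ -kappa, i.e. p_k ~ exp(-kappa*b_k)
```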
To further expand the analogy between the extensive and non-extensive systems in physics and our specific socio-economic problem, consider the particles in the physical systems as market participants listing as well as competing for the public procurements. As we can hardly describe the behavior of each individual firm, it suffices to analyze only the aggregate behavior. Market participants (particles) can either act independently (or weakly interact) or strongly interact, which is a parallel to the physical extensive and non-extensive systems, respectively. The analogy can also be expanded to potential frictions in the market (collisions of particles), which further increase the entropy and drive the system away from equilibrium. Therefore, the market situations when the market participants do not (or only weakly) cooperate and/or there are no barriers to enter the market can be taken as the extensive system, which, as we have shown, would lead to the Maxwell-Boltzmann distribution of the revenues and spendings. Conversely, the market situation when the market participants cooperate and/or there are barriers and frictions in the market can be taken as the non-extensive system, which we have shown to yield the Pareto distribution for the spendings and revenues. This even leads us to a policy implication – the closer the distribution of the revenues or spendings is to the Maxwell-Boltzmann distribution, the more transparent and competitive the whole system is, and conversely, the closer the distribution of the revenues and spendings is to the Pareto distribution, the less transparent and competitive the whole system is. Moreover, getting closer to the extensive system characteristics requires the procurement process to be more transparent, with less cooperation between interacting participants (agents) and fewer market frictions (barriers). Removing or at least suppressing these inefficiencies shall lead to the Maxwell-Boltzmann distribution of the total revenues and spendings, which is characteristic for the extensive systems. Even though such a policy advice seems obvious, it is quite strong as it is based on a well defined statistical analysis.
Taking the results for the number of bidders into consideration as well, we argue that this process is similar to an extensive system in physics. From an economic standpoint, it seems that companies competing for the procurements do not cooperate on the level of bidding, which results in the Maxwell-Boltzmann distribution for the number of bidders. Therefore, the forces driving the total revenues away from the “ideal situation” seem to be caused more by the customer–supplier cooperation than by the supplier–supplier cooperation. This is indeed a rather disturbing result which indicates potential corruption in the procurement process in the Czech Republic, which is of course illegal and should be a warning for the authorities. However, there might be different causes of such results. The fact that the Pareto distribution is found for both the revenues and spendings indicates that the distribution form might be inherited by the suppliers from the contracting authorities. As the contracting authorities are usually politically influenced and interconnected, it seems obvious that there are strong interactions between them leading to the power-law distribution. The policy implications would thus be much stronger if we found the Maxwell-Boltzmann distribution for the spendings and the Pareto distribution for the revenues. Connected to this, the volume of specific procurements might be complicating the interpretation as well.
To conclude, we have shown that the public procurement market can be analyzed with the tools standardly used in statistical physics, and not only do these tools give technically interesting results such as the exponential and power laws, but they can even lead us to specific policy implications (even though these should be taken with caution). These basic results can be used for further analysis and modeling of processes connected to the public procurements. Note that the analysis presented here is far from being complete and there are other issues which should be analyzed in the future – the relationship between the number of bidders and the final price of a contract, between the number of won contracts and the final price of a contract, the concentration of firms with respect to a specific contracting authority, and others. Indeed, depending on data availability, it would be interesting to see whether the properties presented here are found for other countries as well.\
The authors acknowledge financial support of the Grant Agency of the Czech Republic (grant numbers P402/11/0948 and 402/09/0965), the Technological Agency of the Czech Republic (grant number TD010133), the Grant Agency of the Charles University (grant number 118310) and project SVV 265 504.
![*Distribution function of number of bidders.* Obvious exponential scaling of both $cdf$ (left) and $pdf$ (right) is shown with $\beta \approx 0.27$. As $cdf$ has to equal 1 for the number of bidders equal to 1, the fit is based on a fixed intercept. \[fig1\]](bidders_cdf.png "fig:"){width="3.3in"} ![](bidders_pdf.png "fig:"){width="3.3in"}

![*Distribution function for total supplier revenues.* Revenues are standardized so that they are shown in a number of standard deviations. The power law fit is based on the standardized revenues above a single standard deviation. The power law exponent of $\alpha=1.236$ fits almost perfectly for the right tail (left). The parameter is supported in the Zipf’s law plot (right) with $\gamma=0.789$. \[fig2\]](revenue_cdf.png "fig:"){width="3.3in"} ![](revenue_zipf.png "fig:"){width="3.3in"}

![*Distribution function for total spendings of contracting authorities.* Spendings are standardized so that they are shown in a number of standard deviations. The power law fit is based on the standardized spendings above a single standard deviation in the same way as for the revenues. The power law exponent of $\alpha=0.993$ fits almost perfectly for the right tail and holds well even for the lower values of the spendings (left). The parameter is supported in the Zipf’s law plot (right) with $\gamma=0.977$. \[fig3\]](cost_cdf.png "fig:"){width="3.3in"} ![](cost_zipf.png "fig:"){width="3.3in"}
[^1]: Organization for Economic Co-Operation and Development.
[^2]: Gross Domestic Product – the total value of all final goods and services produced in a country during a given time period.
[^3]: Full dataset is available upon request from the authors; for its examination, at least rough knowledge of the European procurement law is necessary.
[^4]: Information System about Public Procurement, isvzus.cz.
[^5]: Tenders Electronic Daily, the official database of tenders in the EU.
[^6]: As described by the EU directive 2004/18 on the procurement of public works, public supply and public services contracts [@eu2004].
[^7]: Note that we consider only subjects with total spendings or revenues of at least $2\times10^6$ CZK – the floor amount above which a procurement has to be publicly listed.
[^8]: For simplicity, we assume that $M$, $C$ and $Z$ are exogenous. For $M$ and $Z$, this assumption is very reasonable because the total amount of money spent on public procurements as well as the number of contracting authorities are mainly a political decision. For $C$, this might be an oversimplification since the number of firms that won at least one procurement arises as the solution of some optimization problem. The value of $C$ is a consequence of an economic friction caused by the limited number of firms and the costs of entering the PP market.
|
---
abstract: 'This paper analyzes a time-stepping discontinuous Galerkin method for fractional diffusion-wave problems. The method uses piecewise constant functions in the temporal discretization and continuous piecewise linear functions in the spatial discretization. A nearly optimal convergence rate with respect to the regularity of the solution is established when the source term is nonsmooth, and a nearly optimal convergence rate $ \ln(1/\tau)(\sqrt{\ln(1/h)}h^2+\tau) $ is derived under an appropriate regularity assumption on the source term. Convergence is also established without any smoothness assumption on the initial value. Finally, numerical experiments are performed to verify the theoretical results.'
author:
- |
Binjie Li [^1], Tao Wang [^2], Xiaoping Xie [^3]\
[School of Mathematics, Sichuan University, Chengdu 610064, China]{}
title: ' **Analysis of a time-stepping discontinuous Galerkin method for fractional diffusion-wave equation with nonsmooth data** '
---
[**Keywords:**]{} fractional diffusion-wave problem, discontinuous Galerkin method, discrete Laplace transform, convergence, nonsmooth data.
Introduction
============
This paper considers the following time fractional diffusion-wave problem: $$\label{eq:model}
\left\{
\begin{aligned}
u' - \Delta \operatorname{D}_{0+}^{-\alpha} u & = f & & \text{in $~~ \Omega\times(0,T)$,} \\
u & = 0 & & \text{on $ \partial\Omega \times (0,T) $,} \\
u(0) & = u_0 & & \text{in $~~ \Omega $,}
\end{aligned}
\right.$$ where $ 0<\alpha<1 $, $ 0 < T < \infty $, $ \Omega \subset \mathbb R^d $ ($d=1,2,3$) is a convex $ d $-polytope, $ \operatorname{D}_{0+}^{-\alpha} $ is the Riemann-Liouville fractional integral operator of order $ \alpha $, and $ f $ and $ u_0 $ are two given functions. The above fractional diffusion-wave equation also belongs to the class of evolution equations with a positive-type memory term (or integro-differential equations with a weakly singular convolution kernel), which have attracted considerable attention in the past thirty years.
Let us first briefly summarize some works devoted to the numerical treatment of problem \[eq:model\]. McLean and Thomée [@McLean1993] proposed and analyzed two discretizations: the first uses the backward Euler method to approximate the first-order time derivative and a first-order integration rule to approximate the fractional integral; the second uses a second-order backward difference scheme to approximate the first-order time derivative and a second-order integration rule to approximate the fractional integral. Then McLean et al. [@McLean1996] analyzed two discretizations with variable time steps: the first is a simple variant of the first one analyzed in [@McLean1993]; the second combines the Crank-Nicolson scheme with two integration rules to approximate the fractional integral (but the temporal accuracy is not better than $ \mathcal O(\tau^{1+\alpha}) $). Combining the first-order and second-order backward difference schemes and the convolution quadrature rules [@Lubich1986], Lubich et al. [@Lubich1996] proposed and analyzed two discretizations for problem \[eq:model\], where optimal order error bounds were derived for positive times without any spatial regularity assumption on the data. Cuesta et al. [@Cuesta2006] proposed and studied a second-order discretization for problem \[eq:model\] and its semilinear version.
Representing the solution as a contour integral by the Laplace transform technique and approximating this contour integral, McLean and Thomée [@McLean2010-B; @McLean2010] developed and analyzed three numerical methods for problem \[eq:model\]. These methods use $ 2N+1 $ quadrature points; the first method possesses temporal accuracy $ \mathcal O(e^{-cN}) $ away from $ t=0 $, while the second and third have temporal accuracy $ \mathcal O(e^{-c\sqrt N}) $.
McLean and Mustapha [@McLean2007] studied a generalized Crank-Nicolson scheme for problem \[eq:model\], and they obtained accuracy order $ \mathcal O(h^2 +
\tau^2) $ on appropriately graded temporal grids under the condition that the solution and the forcing term satisfy some growth estimates. Mustapha and McLean [@Mustapha2009Discontinuous] applied the famous time-stepping discontinuous Galerkin (DG) method [@Thomee2006 Chapter 12] to an evolution equation with a memory term of positive type. For the low-order DG method, they derived the accuracy order $ \mathcal O(\ln(1/\tau)h^2 + \tau) $ on appropriately graded temporal grids under the condition that the time derivatives of the solution satisfy some growth estimates. We notice that this low-order DG method is identical to the first-order discretization analyzed in the aforementioned work [@McLean1996]. They also analyzed an $hp$-version of the DG method in [@mustapha2014well-posedness]. To the best of our knowledge, the convergence of this algorithm with nonsmooth data has not been established so far.
This paper analyzes the convergence of the aforementioned low-order DG method, which is a further development of the works in [@McLean1996; @Mustapha2009Discontinuous]. For $ f = 0 $, we derive the error estimate $${\lVert {u(t_j) - U_j} \rVert}_{L^2(\Omega)} \leqslant
C(h^2 t_j^{-\alpha-1} + \tau t_j^{-1}) {\lVert {u_0} \rVert}_{L^2(\Omega)}.$$ For $ u_0 = 0 $, we obtain the following error estimates: $$\begin{aligned}
{\lVert {u-U} \rVert}_{L^\infty(0,T;L^2(\Omega))} & \leqslant
C \big(h + \sqrt{\ln(1/h)}\,\tau^{1/2}\big)
{\lVert {f} \rVert}_{L^2(0,T;\dot H^{\alpha/(\alpha+1)}(\Omega))}, \\
{\lVert {u-U} \rVert}_{L^\infty(0,T;L^2(\Omega))} & \leqslant
C \ln(T/\tau) \big(\sqrt{\ln(1/h)}h^2 + \tau\big)
{\lVert {f} \rVert}_{{}_0H^{\alpha+1/2}(0,T;L^2(\Omega))},\end{aligned}$$ where the first estimate is nearly optimal with respect to the regularity of the solution and the second is of nearly optimal order; we also notice that, since $ \alpha/(\alpha+1) < 1/2 $, the first estimate imposes no boundary condition on $ f $. In addition, to investigate the effect of a nonvanishing $ f(0) $ on the accuracy of the numerical solution, we establish the error estimate $${\lVert {u(t_j) - U_j} \rVert}_{L^2(\Omega)} \leqslant
C (t_j^{-\alpha} h^2 + \tau) {\lVert {v} \rVert}_{L^2(\Omega)},$$ in the case that $ u_0 = 0 $ and $ f(t) = v \in L^2(\Omega) $, $ 0 \leqslant t
\leqslant T $.
The rest of this paper is organized as follows. \[sec:pre\] introduces some Sobolev spaces, the fractional calculus operators, a time-stepping discontinuous Galerkin method, and the weak solution of problem \[eq:model\] together with its regularity. \[sec:disc\_regu\] investigates the discretizations of two fractional ordinary equations. \[sec:main\] establishes the convergence of the numerical method. \[sec:numer\] performs four numerical experiments to confirm the theoretical results. Finally, \[sec:conclusion\] provides some concluding remarks.
Preliminaries {#sec:pre}
=============
Sobolev spaces
--------------
Assume that $ -\infty < a < b < \infty $. For each $ m \in \mathbb N $, define $$\begin{aligned}
{}_0H^m(a,b) & := \{v\in H^m(a,b): v^{(k)}(a)=0,\,\, 0\leqslant k<m\}, \\
{}^0H^m(a,b) & := \{v\in H^m(a,b): v^{(k)}(b)=0,\,\, 0\leqslant k<m\},\end{aligned}$$ where $ H^m(a,b) $ is a usual Sobolev space [@Tartar2007] and $ v^{(k)} $ is the $ k $-th weak derivative of $ v $. We equip the above two spaces with the norms $$\begin{aligned}
{\lVert {v} \rVert}_{{}^0H^m(a,b)} &:= {\lVert {v^{(m)}} \rVert}_{L^2(a,b)}
\quad \forall v \in {}^0H^m(a,b), \\
{\lVert {v} \rVert}_{{}_0H^m(a,b)} &:= {\lVert {v^{(m)}} \rVert}_{L^2(a,b)}
\quad \forall v \in {}_0H^m(a,b),\end{aligned}$$ respectively. For any $ m \in \mathbb N_{>0} $ and $ 0 < \theta < 1 $, define $$\begin{aligned}
{}_0H^{m-\theta}(a,b) & := [ {}_0H^{m-1}(a,b),\ {}_0H^m(a,b) ]_{1-\theta,2}, \\
{}^0H^{m-\theta}(a,b) & := [ {}^0H^{m-1}(a,b),\ {}^0H^m(a,b) ]_{1-\theta,2},\end{aligned}$$ where $ [\cdot, \cdot]_{\theta,2} $ denotes interpolation by the $ K $-method [@Tartar2007 Chapter 22]. For $ 0 < \gamma < \infty $, we use $ {}^0H^{-\gamma}(a,b) $ and $
{}_0H^{-\gamma}(a,b) $ to denote the dual spaces of $ {}_0H^\gamma(a,b) $ and $
{}^0H^\gamma(a,b) $, respectively. Conversely, since $ {}_0H^\gamma(a,b) $ and $
{}^0H^\gamma(a,b) $ are reflexive, they are the dual spaces of $ {}^0H^{-\gamma}(a,b)
$ and $ {}_0H^{-\gamma}(a,b) $, respectively. Moreover, for any $ 0 < \gamma < 1/2 $, $ {}_0H^\gamma(a,b) = {}^0H^\gamma(a,b) = H^\gamma(a,b) $ with equivalent norms (cf. [@Lions1972 Chapter 1]), and hence $ {}_0H^{-\gamma}(a,b) =
{}^0H^{-\gamma}(a,b) $ with equivalent norms.
It is well known that there exists an orthonormal basis $\{\phi_n: n \in \mathbb N
\}$ of $ L^2(\Omega) $ such that $$\left\{
\begin{aligned}
-\Delta \phi_n ={} &\lambda_n \phi_n&&\,
{\rm~in~}~\,\,\Omega,\\
\phi_n={}&0&&{\rm~on~}\partial\Omega,
\end{aligned}
\right.$$ where $ \{ \lambda_n: n \in \mathbb N \} $ is a positive non-decreasing sequence and $\lambda_n\to\infty$ as $n\to\infty$. For any $ -\infty< \beta < \infty $, define
$
\dot H^\beta(\Omega) := \Big\{
\sum_{n=0}^\infty v_n \phi_n:\
\sum_{n=0}^\infty \lambda_n^\beta v_n^2 < \infty
\Big\}
$,
and endow this space with the norm
$
\big\|\sum_{n=0}^\infty v_n \phi_n \big\|_{\dot H^\beta(\Omega)}
:= \Big(
\sum_{n=0}^\infty \lambda_n^\beta v_n^2
\Big)^{1/2}
$.
For any $ \beta,\gamma \in \mathbb R $, define $${}^0H^\gamma(a,b;\dot H^\beta(\Omega)) := \bigg\{
\sum_{n=0}^\infty c_n \phi_n:\
\sum_{n=0}^\infty \lambda_n^\beta {\lVert {c_n} \rVert}_{{}^0H^\gamma(a,b)}^2 < \infty
\bigg\},$$ and equip this space with the norm $$\Big\| \sum_{n=0}^\infty c_n \phi_n \Big\|_{
{}^0H^\gamma(a,b;\dot H^\beta(\Omega))
} :=
\bigg(
\sum_{n = 0}^\infty \lambda_n^\beta
{\lVert {c_n} \rVert}_{{}^0H^\gamma(a,b)}^2
\bigg)^{1/2}.$$ The space $ {}_0H^\gamma(a,b;\dot H^\beta(\Omega)) $ is analogously defined, and it is evident that $ {}^0H^{-\gamma}(a,b;\dot H^{-\beta}(\Omega)) $ is the dual space of $ {}_0H^\gamma(a,b;\dot H^\beta(\Omega)) $ in the sense that $${\left\langle {
\sum_{n=0}^\infty c_n \phi_n, \sum_{n=0}^\infty d_n \phi_n
} \right\rangle}_{{}_0H^\gamma(a,b;\dot H^\beta(\Omega))} :=
\sum_{n=0}^\infty {\langle {c_n, d_n} \rangle}_{{}_0H^\gamma(a,b)}$$ for all $ \sum_{n=0}^\infty c_n \phi_n \in {}^0H^{-\gamma}(a,b;\dot H^{-\beta}(\Omega))
$ and $ \sum_{n=0}^\infty d_n \phi_n \in {}_0H^\gamma(a,b;\dot H^\beta(\Omega)) $. Since $ {}_0H^\gamma(a,b;\dot H^\beta(\Omega)) $ is reflexive, it is the dual space of $ {}^0H^{-\gamma}(a,b;\dot H^{-\beta}(\Omega)) $. Above and throughout, for any Banach space $ W $, the notation $ {\langle {\cdot,\cdot} \rangle}_W $ means the duality pairing between $ W^* $ (the dual space of $ W $) and $ W $.
Fractional calculus operators
-----------------------------
This section introduces fractional calculus operators on a domain $ (a,b) $, $
-\infty < a < b < \infty $, and summarizes several properties of these operators used in this paper. Assume that $ X $ is a separable Hilbert space.
\[def:frac\_calc\] For $ -\infty < \gamma < 0 $, define $$\begin{aligned}
\left(\operatorname{D}_{a+}^\gamma v\right)(t) &:=
\frac1{ \Gamma(-\gamma) }
\int_a^t (t-s)^{-\gamma-1} v(s) \, \mathrm{d}s, \quad t\in(a,b), \\
\left(\operatorname{D}_{b-}^\gamma v\right)(t) &:=
\frac1{ \Gamma(-\gamma) }
\int_t^b (s-t)^{-\gamma-1} v(s) \, \mathrm{d}s, \quad t\in(a,b),
\end{aligned}$$ for all $ v \in L^1(a,b;X) $, where $ \Gamma(\cdot) $ is the gamma function. In addition, let $ \operatorname{D}_{a+}^0 $ and $ \operatorname{D}_{b-}^0 $ be the identity operator on $
L^1(a,b;X) $. For $ j - 1 < \gamma \leqslant j $ with $ j \in \mathbb N_{>0} $, define $$\begin{aligned}
\operatorname{D}_{a+}^\gamma v & := \operatorname{D}^j \operatorname{D}_{a+}^{\gamma-j}v, \\
\operatorname{D}_{b-}^\gamma v & := (-\operatorname{D})^j \operatorname{D}_{b-}^{\gamma-j}v,
\end{aligned}$$ for all $ v \in L^1(a,b;X) $, where $ \operatorname{D}$ is the first-order differential operator in the distribution sense.
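For readers who wish to experiment numerically, the Riemann-Liouville integral $ \operatorname{D}_{0+}^{-\alpha} v $ of a function that is piecewise constant on a uniform grid can be evaluated exactly at the grid points. The following Python sketch is illustrative only and is not part of the analysis; all function and variable names are ours.

```python
import numpy as np
from math import gamma

def rl_integral_pw_constant(v, tau, alpha):
    """Exact values at t_k = k*tau of D_{0+}^{-alpha} applied to the function
    equal to v[j] on the cell (t_j, t_{j+1}); v is a 1-D array of cell values."""
    J = len(v)
    m = np.arange(J + 1)
    # w[i] = (i+1)^alpha - i^alpha: exact cell weights of the kernel (t - s)^{alpha-1}
    w = m[1:] ** alpha - m[:-1] ** alpha
    out = np.zeros(J + 1)
    for k in range(1, J + 1):
        out[k] = tau ** alpha / gamma(1 + alpha) * np.dot(w[:k][::-1], v[:k])
    return out

# sanity check against D_{0+}^{-alpha} 1 = t^alpha / Gamma(1 + alpha)
tau, alpha, J = 0.01, 0.4, 200
approx = rl_integral_pw_constant(np.ones(J), tau, alpha)
exact = (tau * np.arange(J + 1)) ** alpha / gamma(1 + alpha)
assert np.allclose(approx, exact)
```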
Let $ \{e_n:n \in \mathbb N\} $ be an orthonormal basis of $ X $. For any $ \beta \in
\mathbb R $, define
$$\begin{aligned}
{}^0H^\beta(a,b;X) := \bigg\{
\sum_{n=0}^\infty c_n e_n:\
\sum_{n=0}^\infty {\lVert {c_n} \rVert}_{{}^0H^\beta(a,b)}^2 < \infty
\bigg\}\end{aligned}$$
and endow this space with the norm
$$\Big\|\sum_{n=0}^\infty c_n e_n\Big\|_{{}^0H^\beta(a,b;X)} :=
\bigg( \sum_{n=0}^\infty {\lVert {c_n} \rVert}_{{}^0H^\beta(a,b)}^2 \bigg)^{1/2}.$$ The space $ {}_0H^\beta(a,b;X) $ is analogously defined. It is standard that $
{}_0H^{-\beta}(a,b;X) $ is the dual space of $ {}^0H^\beta(a,b;X) $ in the sense that $${\left\langle {
\sum_{n=0}^\infty c_n e_n,
\sum_{n=0}^\infty d_n e_n
} \right\rangle}_{{}^0H^\beta(a,b;X)} :=
\sum_{n=0}^\infty {\langle {c_n, d_n} \rangle}_{{}^0H^\beta(a,b)}$$
for all $ \sum_{n=0}^\infty c_n e_n \in {}_0H^{-\beta}(a,b;X) $ and $
\sum_{n=0}^\infty d_n e_n \in {}^0H^\beta(a,b;X) $.
\[rem:equiv\_frac\_space\] For any $ 0 < \beta < 1 $, a simple calculation gives that $ {}_0H^\beta(a,b;X) $ is identical to $ [L^2(a,b;X), {}_0H^1(a,b;X)]_{\beta,2} $, and $${\lVert {v} \rVert}_{{}_0H^\beta(a,b;X)} \leqslant
\sqrt2\,{\lVert {v} \rVert}_{[L^2(a,b;X), {}_0H^1(a,b;X)]_{\beta,2}} \leqslant
2{\lVert {v} \rVert}_{{}_0H^\beta(a,b;X)}$$ for all $ v \in {}_0H^\beta(a,b;X) $.
\[lem:regu-basic\] If $ 0 \leqslant \beta < \infty $ and $ -\infty < \gamma \leqslant \beta $, then
$$\begin{aligned}
C_1 {\lVert {v} \rVert}_{{}_0H^\beta(a,b;X)} \leqslant
{\lVert {\operatorname{D}_{a+}^\gamma v} \rVert}_{{}_0H^{\beta-\gamma}(a,b;X )} \leqslant
C_2 {\lVert {v} \rVert}_{{}_0H^\beta(a,b;X)}
\,\forall v \in {}_0H^\beta(a,b;X), \\
C_1 {\lVert {v} \rVert}_{{}^0H^\beta(a,b;X)} \leqslant
{\lVert {\operatorname{D}_{b-}^\gamma v} \rVert}_{{}^0H^{\beta-\gamma}(a,b;X )} \leqslant
C_2 {\lVert {v} \rVert}_{{}^0H^\beta(a,b;X)}
\,\forall v \in {}^0H^\beta(a,b;X),
\end{aligned}$$
where $ C_1 $ and $ C_2 $ are two positive constants depending only on $ \beta $ and $ \gamma $.
\[lem:coer\] If $ -1/2 < \gamma < 1/2 $, then $$\begin{aligned}
\cos(\gamma\pi) {\lVert {\operatorname{D}_{a+}^\gamma v} \rVert}_{L^2(a,b;X)}^2 \leqslant
(\operatorname{D}_{a+}^\gamma v, \operatorname{D}_{b-}^\gamma v)_{L^2(a,b;X)} \leqslant
\sec(\gamma\pi) {\lVert {\operatorname{D}_{a+}^\gamma v} \rVert}_{L^2(a,b;X)}^2, \\
\cos(\gamma\pi) {\lVert {\operatorname{D}_{b-}^\gamma v} \rVert}_{L^2(a,b;X)}^2 \leqslant
(\operatorname{D}_{a+}^\gamma v, \operatorname{D}_{b-}^\gamma v)_{L^2(a,b;X)} \leqslant
\sec(\gamma\pi) {\lVert {\operatorname{D}_{b-}^\gamma v} \rVert}_{L^2(a,b;X)}^2,
\end{aligned}$$ for all $ v \in {}_0H^\gamma(a,b;X) $ (equivalent to $ {}^0H^\gamma(a,b;X) $), where $ (\cdot,\cdot)_{L^2(a,b;X)} $ is the usual inner product in $ L^2(a,b;X) $.
By \[lem:regu-basic\], we can extend the domain of $ \operatorname{D}_{a+}^\gamma $, $ -\infty <
\gamma < 0 $, as follows. Assume that $ v \in {}_0H^\beta(a,b;X) $ with $ -\infty <
\beta < 0 $. If $ \beta \leqslant \gamma $, then define $ \operatorname{D}_{a+}^\gamma v \in
{}_0H^{\beta-\gamma}(a,b;X) $ by that $$\label{eq:extended_frac_int}
{\langle {\operatorname{D}_{a+}^\gamma v, w} \rangle}_{{}^0H^{\gamma-\beta}(a,b;X)} :=
{\langle {v, \operatorname{D}_{b-}^\gamma w} \rangle}_{{}^0H^{-\beta}(a,b;X)}$$ for all $ w \in {}^0H^{\gamma-\beta}(a,b;X) $. If $ \beta > \gamma $, then define $
\operatorname{D}_{a+}^\gamma v \in {}_0H^{\beta-\gamma}(a,b;X) $ by that $ \operatorname{D}_{a+}^\gamma v =
\operatorname{D}_{a+}^{\gamma-\beta} \operatorname{D}_{a+}^\beta v $. The domain of the operator $ \operatorname{D}_{b-}^\gamma
$ can be extended analogously.
\[lem:regu\] If $ -\infty < \beta < \infty $ and $ -\infty < \gamma \leqslant \max\{0,\beta\} $, then $$\begin{aligned}
C_1 {\lVert {v} \rVert}_{{}_0H^\beta(a,b;X)} \leqslant
{\lVert {\operatorname{D}_{a+}^\gamma v} \rVert}_{{}_0H^{\beta-\gamma}(a,b;X )} \leqslant
C_2 {\lVert {v} \rVert}_{{}_0H^\beta(a,b;X)}
\,\forall v \in {}_0H^\beta(a,b;X), \\
C_1 {\lVert {v} \rVert}_{{}^0H^\beta(a,b;X)} \leqslant
{\lVert {\operatorname{D}_{b-}^\gamma v} \rVert}_{{}^0H^{\beta-\gamma}(a,b;X )} \leqslant
C_2 {\lVert {v} \rVert}_{{}^0H^\beta(a,b;X)}
\,\forall v \in {}^0H^\beta(a,b;X),
\end{aligned}$$ where $ C_1 $ and $ C_2 $ are two positive constants depending only on $ \beta $ and $ \gamma $.
If $ -\infty < \beta < \gamma < \beta+1/2 $, then $${\langle {\operatorname{D}_{a+}^\gamma v, w} \rangle}_{{}^0H^{\gamma-\beta}(a,b;X)} =
{\langle {\operatorname{D}_{a+}^\beta v, \operatorname{D}_{b-}^{\gamma-\beta} w} \rangle}_{(a,b;X)}$$ for all $ v \in {}_0H^\beta(a,b;X) $ and $ w \in {}^0H^{\gamma-\beta}(a,b;X) $.
For the proofs of the above lemmas, we refer the reader to [@Luo2018Convergence Section 3].
Algorithm definition
--------------------
Given $ J \in \mathbb N_{>0} $, set $ \tau := T/J $ and $ t_j := j\tau $, $ 0
\leqslant j \leqslant J $, and we use $ I_j $ to denote the interval $ (t_{j-1},t_j)
$ for each $ 1 \leqslant j \leqslant J $. Let $ \mathcal K_h $ be a shape-regular triangulation of $\Omega $ consisting of $ d $-simplexes, and we use $ h $ to denote the maximum diameter of the elements in $ \mathcal K_h $. Define $$\begin{aligned}
S_h & := \Big\{
v_h \in \dot H^1(\Omega):\
v_h \text{ is linear on each } K \in \mathcal K_h
\Big\},\\
W_{\tau,h} &:= \Big\{
V \in L^2(0,T;S_h):\ V \text{ is constant on } I_j,\,
1 \leqslant j \leqslant J
\Big\}.\end{aligned}$$ For any $ V \in W_{\tau,h} $, we set $$\begin{aligned}
V_j &:= \lim\limits_{t\to t_{j}-}V(t),
\quad 1 \leqslant j \leqslant J, \\
V^+_j &:= \lim\limits_{t\to t_{j}+}V(t),
\quad 0 \leqslant j \leqslant J-1, \\
{{[\![ {V_j} ]\!]}} &:= V_j^{+} - V_j, \quad 0 \leqslant j \leqslant J,\end{aligned}$$ where the value of $ V_0 $ or $ V_J^{+} $ will be explicitly specified whenever needed.
Assuming that $ u_0 \in S_h^* $ and $ f \in (W_{\tau,h})^* $, we define a numerical solution $ U \in W_{\tau,h} $ to problem \[eq:model\] by that $ U_0 = P_hu_0 $ and $$\label{eq:numer_sol}
\sum_{j=0}^{J-1}{\langle {{{[\![ {U_j} ]\!]}},V^+_j} \rangle}_\Omega +
{\langle {\nabla \operatorname{D}_{0+}^{-\alpha} U,\nabla V} \rangle}_{\Omega \times (0,T)} =
{\langle {f,V} \rangle}_{W_{\tau,h}}$$ for all $ V \in W_{\tau,h} $, where $ P_h $ is the $ L^2 $-orthogonal projection onto $ S_h $. Above and afterwards, for a Lebesgue measurable set $ \omega $ of $ \mathbb
R^l $ ($ l= 1,2,3,4 $), the symbol $ {\langle {p,q} \rangle}_\omega $ means $ \int_\omega pq $ whenever $ pq \in L^1(\omega) $. In addition, the symbol $ C_\times $ means a positive constant depending only on its subscript(s), and its value may differ at each occurrence.
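For orientation (this reduction is implicit in the above definition), testing \[eq:numer\_sol\] with $ V = v_h \chi_{I_k} $ for a fixed $ v_h \in S_h $, so that only the jump at $ t_{k-1} $ survives and $ U_{k-1}^+ = U_k $, shows that, when $ f $ is (say) an integrable function, the method is the time-stepping scheme $${\langle {U_k - U_{k-1}, v_h} \rangle}_\Omega +
\int_{I_k} {\langle {\nabla (\operatorname{D}_{0+}^{-\alpha} U)(t), \nabla v_h} \rangle}_\Omega \, \mathrm{d}t =
\int_{I_k} {\langle {f(t), v_h} \rangle}_\Omega \, \mathrm{d}t,
\quad 1 \leqslant k \leqslant J;$$ evaluating the fractional integral of the piecewise constant $ U $ over $ I_k $ yields the convolution coefficients $ b_j $ used for the scalar discretizations in \[sec:disc\_regu\].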
\[thm:stab\] Assume that $ u_0 \in L^2(\Omega) $. If $ f \in L^1(0,T;L^2(\Omega)) $, then $$\label{eq:stab-1}
{\lVert {U} \rVert}_{L^\infty(0,T;L^2(\Omega))}
\leqslant \sqrt2\ {\lVert {u_0} \rVert}_{L^2(\Omega)} +
2{\lVert {f} \rVert}_{L^1(0,T;L^2(\Omega))}.$$ If $ f \in {}_0H^{\alpha/2}(0,T;\dot H^{-1}(\Omega)) $, then $$\label{eq:stab-2}
\begin{aligned}
& {\lVert {U} \rVert}_{L^\infty(0,T;L^2(\Omega))} +
{\lVert {U} \rVert}_{{}_0H^{-\alpha/2}(0,T;\dot H^1(\Omega))} \\
\leqslant{} &
C_\alpha \big(
{\lVert {u_0} \rVert}_{L^2(\Omega)} +
{\lVert {f} \rVert}_{{}_0H^{\alpha/2}(0,T;\dot H^{-1}(\Omega))}
\big).
\end{aligned}$$
For the proof of \[eq:stab-1\], we refer the reader to [@Mustapha2009Discontinuous Theorem 2.1]. By the techniques used in the proof of \[thm:conv\_f\_L2\] (in \[sec:main\]), the proof of \[eq:stab-2\] is trivial and hence omitted.
Weak solution and regularity {#ssec:regu}
----------------------------
Following [@Luo2018Convergence], we introduce the weak solution to problem \[eq:model\] as follows. Define $$\begin{aligned}
W & := {}_0H^{(\alpha+1)/4}(0,T;L^2(\Omega))
\cap {}_0H^{-(\alpha+1)/4}(0,T;\dot H^1(\Omega)), \\
\widehat W & := {}^0H^{(3-\alpha)/4}(0,T;L^2(\Omega))
\cap {}^0H^{(1-3\alpha)/4}(0,T;\dot H^1(\Omega)),\end{aligned}$$ and endow them with the norms $$\begin{aligned}
{\lVert {\cdot} \rVert}_W &:= \max\left\{
{\lVert {\cdot} \rVert}_{{}_0H^{(\alpha+1)/4}(0,T;L^2(\Omega)) },\
{\lVert {\cdot} \rVert}_{{}_0H^{-(\alpha+1)/4}(0,T;\dot H^1(\Omega))}
\right\}, \\
{\lVert {\cdot} \rVert}_{\widehat W} &:= \max\left\{
{\lVert {\cdot} \rVert}_{{}^0H^{(3-\alpha)/4}(0,T;L^2(\Omega)) },\
{\lVert {\cdot} \rVert}_{{}^0H^{(1-3\alpha)/4}(0,T;\dot H^1(\Omega))}
\right\},\end{aligned}$$ respectively. Assuming that $ u_0 t^{-(\alpha+1)/2} \in W^* $ and $ f \in \widehat
W^* $, we call $ u \in W $ a weak solution to problem \[eq:model\] if $$\label{eq:weak_sol_f}
\begin{aligned}
{}& {\left\langle {
\operatorname{D}_{0+}^{(\alpha\!+\!1)/2} u, v
} \right\rangle}_{ {}^0\!H^{(\alpha\!+\!1)/4}(0,T;L^2(\Omega)\!) } \!+\!
{\left\langle {
\nabla\! \operatorname{D}_{0+}^{\!-(\alpha\!+\!1)/4}u,
\nabla\! \operatorname{D}_{T-}^{\!-(\alpha\!+\!1)/4} v
} \right\rangle}_{ \Omega\times(0,T)} \\
={}&
{\left\langle { f,\ \operatorname{D}_{T-}^{(\alpha-1)/2} v } \right\rangle}_{\widehat W} +
{\left\langle {\frac{t^{-(\alpha+1)/2}}{\Gamma((1-\alpha)/2)} u_0,\ v} \right\rangle}_W
\end{aligned}$$ for all $ v \in W $. In the above definition we have used the fact that, by \[lem:regu-basic,lem:coer\], $${}^0H^{(\alpha+1)/4}(0,T;L^2(\Omega)) =
{}_0H^{(\alpha+1)/4}(0,T;L^2(\Omega))
\quad\text{with equivalent norms,}$$ and $${}^0H^{-(\alpha+1)/4}(0,T;\dot H^1(\Omega)) =
{}_0H^{-(\alpha+1)/4}(0,T;\dot H^1(\Omega))
\quad\text{with equivalent norms.}$$ By the well-known Lax-Milgram theorem and \[lem:coer,lem:regu\], a routine argument yields that the above weak solution is well-defined and admits the stability estimate $${\lVert {u} \rVert}_W \leqslant C_\alpha \Big(
{\lVert {f} \rVert}_{\widehat W^*} +
{\lVert {t^{-(\alpha+1)/2} u_0} \rVert}_{W^*}
\Big).$$ Furthermore, by a trivial modification of the proof of [@Luo2018Convergence Theorem 4.2], we readily obtain the following regularity results.
\[thm:regu-pde\] If $ u_0 = 0 $ and $ f \in {}_0H^\gamma(0,T;\dot H^\beta(\Omega)) $ with $
(\alpha-3)/4 \leqslant \gamma < \infty $ and $ 0 \leqslant \beta < \infty $, then the solution $ u $ to problem \[eq:weak\_sol\_f\] satisfies that $$\begin{aligned}
& \operatorname{D}_{0+}^{\gamma+1} u - \Delta \operatorname{D}_{0+}^{\gamma-\alpha} u =
\operatorname{D}_{0+}^\gamma f,
\label{eq:strong-form} \\
& {\lVert {u} \rVert}_{{}_0H^{\gamma+1}(0,T;\dot H^\beta(\Omega))} +
{\lVert {u} \rVert}_{{}_0H^{\gamma-\alpha}(0,T;\dot H^{2+\beta}(\Omega))} \leqslant
C_{\alpha,\gamma} {\lVert {f} \rVert}_{{}_0H^\gamma(0,T;\dot H^\beta(\Omega))}.
\label{eq:regu-pde}
\end{aligned}$$ Moreover, if $ 0 \leqslant \gamma < \alpha+1/2 $ then $$\label{eq:regu-pde-C}
{\lVert {u} \rVert}_{C([0,T];\dot H^{\beta+(2\gamma+1)/(\alpha+1)}(\Omega))}
\leqslant C_{\alpha,\gamma}
{\lVert {f} \rVert}_{{}_0H^\gamma(0,T;\dot H^\beta(\Omega))},$$ and if $ \gamma = \alpha + 1/2 $ then $${\lVert {u} \rVert}_{
C([0,T];\dot H^{\beta+2(1-\epsilon)}(\Omega))
} \leqslant \frac{C_\alpha}{\sqrt\epsilon}
{\lVert {f} \rVert}_{{}_0H^{\alpha+1/2}(0,T;\dot H^\beta(\Omega))}$$ for all $ 0 < \epsilon < 1 $.
For any $ v \in W $, since [@Tartar2007 Lemma 33.2] implies
$$\sqrt{
\int_0^T t^{-(\alpha+1)/2} {\lVert {v(t)} \rVert}_{L^2(\Omega)}^2
\, \mathrm{d}t
}\, \leqslant C_\alpha
{\lVert {v} \rVert}_{{}_0H^{(\alpha+1)/4}(0,T;L^2(\Omega))}
\leqslant C_\alpha {\lVert {v} \rVert}_W,$$
we have
$$\begin{aligned}
& {\left\lvert {
\int_0^T t^{-(\alpha+1)/2}
{\langle {u_0, v(t)} \rangle}_\Omega \, \mathrm{d}t
} \right\rvert} \\
\leqslant{} &
{\lVert {u_0} \rVert}_{L^2(\Omega)} \sqrt{
\int_0^T t^{-(\alpha+1)/2} \, \mathrm{d}t
} \,\, \sqrt{
\int_0^T t^{-(\alpha+1)/2} {\lVert {v(t)} \rVert}_{L^2(\Omega)}^2 \, \mathrm{d}t
} \\
\leqslant{} &
C_\alpha {\lVert {u_0} \rVert}_{L^2(\Omega)} {\lVert {v} \rVert}_W.
\end{aligned}$$
Therefore, $ t^{-(\alpha+1)/2} u_0 \in W^* $ and hence the above weak solution is well-defined for the case $ u_0 \in L^2(\Omega) $.
Next, we briefly summarize two other methods to define the weak solution to problem \[eq:model\]. The first method uses the Mittag-Leffler function to define the weak solution to problem \[eq:model\] with $ f = 0 $ and $ u_0 \in \dot H^{r}(\Omega)
$, $ r \in \mathbb R $, by that [@McLean2007] $$u(t) = \sum_{n=0}^{\infty}
{\langle {u_0,\phi_n} \rangle}_{\dot H^{-r}(\Omega)}
E_{\alpha+1,1}\big(-\lambda_n t^{\alpha+1}\big) \phi_n,
\quad 0 \leqslant t \leqslant T,$$ where, for any $ \beta, \gamma > 0 $, the Mittag-Leffler function $
E_{\beta,\gamma} $ is defined by $$E_{\beta,\gamma}(z) := \sum_{n=0}^\infty
\frac{z^n}{\Gamma(n\beta+\gamma)},
\quad z \in \mathbb C.$$ Then we can investigate the regularity of this weak solution by a growth estimate [@Podlubny1998]: for any $ \beta,\gamma,t >0 $, $${\lvert {E_{\beta,\gamma}(-t)} \rvert} \leqslant{}
\frac{C_{\beta,\gamma}}{1+t}.$$ The second method uses the well-known transposition technique to define the weak solution to problem \[eq:model\] as follows. Define $$G := {}^0H^1(0,T;L^2(\Omega)) \cap {}^0H^{-\alpha}(0,T;\dot H^2(\Omega)),$$ and equip this space with the norm $${\lVert {\cdot} \rVert}_G := \max\left\{
{\lVert {\cdot} \rVert}_{{}^0H^1(0,T;L^2(\Omega))},\
{\lVert {\cdot} \rVert}_{{}^0H^{-\alpha}(0,T;\dot H^2(\Omega))}
\right\}.$$ Also, define $$G_\mathrm{tr} := \big\{ v(0):\ v \in G \big\},$$ and endow this space with the norm $${\lVert {v_0} \rVert}_{G_\mathrm{tr}} :=
\inf_{v \in G,\ v(0) = v_0} {\lVert {v} \rVert}_G
\quad \forall v_0 \in G_\mathrm{tr}.$$ Assuming that $ u_0 \in G_\mathrm{tr}^* $ and $ f \in G^* $, we call $ u $ a weak solution to problem \[eq:model\] if $${\langle {u, -v' - \Delta \operatorname{D}_{T-}^{-\alpha} v} \rangle}_{\Omega \times (0,T)} =
{\langle {f, v} \rangle}_G + {\langle {u_0, v(0)} \rangle}_{G_\mathrm{tr}}$$ for all $ v \in G $. By the symmetric version of \[thm:regu-pde\], applying the famous Babuška-Lax-Milgram theorem proves that the above weak solution is well-defined.
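To make the Mittag-Leffler representation above concrete, the following Python sketch evaluates it for a one-dimensional model with $ \Omega = (0,\pi) $, where $ \lambda_n = n^2 $ and $ \phi_n(x) = \sqrt{2/\pi}\sin(nx) $, and with $ u_0(x) = x(\pi-x) $, whose sine coefficients are known in closed form. The truncated series used for $ E_{\alpha+1,1} $ is adequate only for the moderate arguments appearing here; the sketch is illustrative and not part of the paper.

```python
import numpy as np
from math import gamma, pi

def ml(beta, gam, z, terms=80):
    # truncated Mittag-Leffler series; adequate for moderate |z| only
    return sum(z ** n / gamma(n * beta + gam) for n in range(terms))

alpha, t, N = 0.5, 0.1, 25                 # fractional order, time, retained modes
x = np.linspace(0.0, pi, 201)
u = np.zeros_like(x)
for n in range(1, N + 1, 2):               # even-n coefficients of u0 vanish
    c = np.sqrt(2 / pi) * 4 / n ** 3       # <u0, phi_n> for u0(x) = x*(pi - x)
    u += c * ml(1 + alpha, 1.0, -(n ** 2) * t ** (1 + alpha)) * np.sqrt(2 / pi) * np.sin(n * x)
# u now approximates u(t, .) for f = 0 and the above u0
```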
Discretizations of two fractional ordinary equations {#sec:disc_regu}
====================================================
An auxiliary function
---------------------
For any $ z \in \{x+iy:\ 0 < x < \infty, -\infty < y < \infty \} $, define $$\label{eq:def-psi}
\psi(z) := \frac{e^z-1}{\Gamma(2+\alpha)}
\sum_{k=1}^\infty k^{1+\alpha} e^{-kz}.$$ By the standard analytic continuation technique, $ \psi $ has a Hankel integral representation (cf. [@Wood1992 (12.1)] and [@McLean2015Time (21)]) $$\begin{aligned}
\psi(z) & = \frac{e^z-1}{2\pi i}
\int_{-\infty}^{({0+})} \frac{w^{-2-\alpha}}{e^{z-w}-1} \, \mathrm{d}w,
\quad z \in \mathbb C \setminus (-\infty,0],\end{aligned}$$ where $ \int_{-\infty}^{({0+})} $ means an integral on a piecewise smooth and non-self-intersecting path enclosing the negative real axis and orienting counterclockwise, $ 0 $ and $ \{z+2k\pi i \neq 0: k \in \mathbb Z\} $ lie on the different sides of this path, and $ w^{-2-\alpha} $ is evaluated in the sense that $$w^{-2-\alpha} = e^{-(2+\alpha) \operatorname{Log}w}.$$ By Cauchy’s integral theorem and Cauchy’s integral formula, it is clear that (cf. [@Wood1992 (13.1)]) $$\label{eq:psi}
\psi(z) = (e^z-1) \sum_{k \in \mathbb Z} (z+2k\pi i)^{-2-\alpha},$$ for all $ z \in \mathbb C \setminus (-\infty, 0] $ satisfying $ -2\pi <
\operatorname{Im} z < 2\pi $. From this series representation, it follows that $$\label{eq:psi-conj}
\psi(z) = \overline{\psi(\overline z)}
\quad \text{for all}\, z \in \mathbb C \setminus (-\infty,0]
\text{ with } {\lvert {\operatorname{Im}z} \rvert} < 2\pi.$$ Moreover, $$\label{eq:shit-7}
\psi(z) - (e^z-1)z^{-2-\alpha} \quad
\text{is analytic on }
\{w \in \mathbb C:\ {\lvert {\operatorname{Im} w} \rvert} < 2\pi\},$$ and hence $$\label{eq:psi-singu}
\begin{aligned}
& \lim_{r \to 0+} \frac{\psi(re^{i\theta})}{
r^{-1-\alpha}\big(
\cos((1+\alpha)\theta) - i \sin((1+\alpha)\theta)
\big)
} = 1 \\
& \text{ uniformly for all } -\pi < \theta < \pi.
\end{aligned}$$
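As a quick numerical sanity check of the equivalence between the defining series \[eq:def-psi\] and the representation \[eq:psi\] (not needed for the analysis; the truncation lengths below are illustrative), one may compare the two partial sums at a point with $ \operatorname{Re} z > 0 $:

```python
import numpy as np
from math import gamma

alpha, z = 0.5, 0.5 + 0.3j
# defining series (eq:def-psi): converges geometrically since Re z > 0
s1 = (np.exp(z) - 1) / gamma(2 + alpha) * sum(
    k ** (1 + alpha) * np.exp(-k * z) for k in range(1, 400))
# symmetrized series (eq:psi): principal-branch powers, slow algebraic decay
ks = np.arange(-5000, 5001)
s2 = (np.exp(z) - 1) * np.sum((z + 2j * np.pi * ks) ** (-2 - alpha))
print(abs(s1 - s2))   # should be small (limited by the truncation of s2)
```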
\[lem:1+mupsi\] There exist $ \pi/2 < \theta_\alpha \leqslant (\alpha+3)/(4\alpha+4)\pi $, depending only on $ \alpha $, and $ 0 < \delta_{\alpha,\mu} < \infty $, depending only on $ \alpha $ and $ \mu $, such that $$\label{eq:1+mupsi}
\begin{aligned}
& 1 + \mu \psi(z) \neq 0 \quad\text{ for all }
z \in \big\{
w \in \mathbb C \setminus \{0\}:
-\pi \leqslant \operatorname{Im} w \leqslant \pi
\big\} \bigcap {} \\
& \qquad\qquad\qquad \big\{
w \in \mathbb C:\
0 < \operatorname{Re} w \leqslant \delta_{\alpha,\mu}
\text{ or } \pi/2 \leqslant
{\lvert {\operatorname{Arg} w} \rvert} \leqslant \theta_\alpha
\big\},
\end{aligned}$$ where $ \mu $ is a nonnegative constant.
By \[eq:psi-singu\], there exists $ 0 < \delta_\alpha < \pi $, depending only on $ \alpha $, such that $ \operatorname{Im} \psi(z) < 0 $ and hence
$$\label{eq:shit-4}
1+\mu\psi(z) \neq 0 \text{ for all }
z \in \Big\{
w \in \mathbb C:\ \pi/2
\leqslant \operatorname{Arg} w \leqslant
\frac{\alpha+3}{4(\alpha+1)}\pi,\
0 < \operatorname{Im} w \leqslant \delta_\alpha
\Big\}.$$
For $ 0 < y \leqslant \pi $, by \[eq:psi\] we have $$\begin{aligned}
\psi(iy) & = (e^{iy}-1) \sum_{k=-\infty}^\infty
(iy+2k\pi i)^{-2-\alpha} \notag \\
& =
(e^{iy}-1) \Big(
\sum_{k=-\infty}^{-1} (-2k\pi - y)^{-2-\alpha}
(-i)^{-2-\alpha} +
\sum_{k=0}^\infty (2k\pi+y)^{-2-\alpha} i^{-2-\alpha}
\Big) \notag \\
&=
(1-e^{iy})\Big(
\sum_{k=1}^\infty (2k\pi - y)^{-2-\alpha}
e^{i\alpha\pi/2} + \sum_{k=0}^\infty (2k\pi+y)^{-2-\alpha} e^{-i\alpha\pi/2}
\Big) \notag \\
&= (1-e^{iy}) (A + iB), \label{eq:psi-A-B}
\end{aligned}$$ where $$\begin{aligned}
A &:= \cos(\alpha\pi/2)
\sum_{k=0}^\infty \Big(
(2k\pi + 2\pi - y)^{-2-\alpha} +
(2k\pi+y)^{-2-\alpha}
\Big), \\
B &:= \sin(\alpha\pi/2) \sum_{k=0}^\infty
\Big(
(2k\pi+2\pi-y)^{-2-\alpha} -
(2k\pi+y)^{-2-\alpha}
\Big).
\end{aligned}$$ It follows that $$\begin{aligned}
\operatorname{Re} \psi(iy) = A (1-\cos y) + B \sin y, \\
\operatorname{Im} \psi(iy) = B(1-\cos y) - A \sin y.
\end{aligned}$$ A straightforward computation then gives $$\begin{aligned}
\operatorname{Re} \psi(i\pi) = 4\pi^{-2-\alpha} \cos(\alpha\pi/2)
\sum_{k=1}^\infty (2k-1)^{-2-\alpha} > 0, \label{eq:psipi+} \\
\operatorname{Im} \psi(iy) < 0, \quad 0 < y < \pi, \label{eq:psipi+2}
\end{aligned}$$ and hence, by the continuity of $ \psi $ in $$\left\{
z \in \mathbb C \setminus (-\infty,0]:
-2\pi < \operatorname{Im} z < 2\pi
\right\},$$ a routine argument yields that there exists $ 0 < r_\alpha \leqslant
\delta_\alpha\tan((1-\alpha)/(4\alpha+4)\pi) $, depending only on $ \alpha $, such that $$\label{eq:426-2}
1+\mu\psi(z) \neq 0 \text{ for all }
z \in \left\{
w \in \mathbb C:\
-r_\alpha \leqslant \operatorname{Re} w \leqslant 0,\
\delta_\alpha \leqslant \operatorname{Im} w \leqslant \pi
\right\}.$$ By \[eq:shit-4,eq:426-2\], letting $ \theta_\alpha := \pi/2 +
\operatorname{arctan}(r_\alpha/\pi) $ yields $$\label{eq:shit-10}
1 + \mu\psi(z) \neq 0 \text{ for all }
z \in \{w \in \mathbb C:\
\pi/2 \leqslant \operatorname{Arg}w \leqslant \theta_\alpha,\
0 < \operatorname{Im} w \leqslant \pi\}.$$ In addition, by \[eq:psipi+\], \[eq:psipi+2\], \[eq:psi-singu\] and the continuity of $ \psi $ in $$\left\{
z \in \mathbb C \setminus (-\infty,0]:\
-2\pi < \operatorname{Im} z < 2\pi
\right\},$$ there exists $ \delta_{\alpha,\mu} > 0 $ depending only on $ \alpha $ and $ \mu $ such that $$\label{eq:426-3}
1+\mu\psi(z) \neq 0 \text{ for all }
z \in \{
w \in \mathbb C \setminus \{0\}:\
0 \leqslant \operatorname{Re} w \leqslant \delta_{\alpha,\mu},\
0 \leqslant \operatorname{Im} w \leqslant \pi
\}.$$ Finally, by \[eq:psi-conj\], combining \[eq:shit-10,eq:426-3\] proves \[eq:1+mupsi\] and hence this lemma.
\[lem:psi-growth\] For any $ \mu > 0 $ and $ 0 < y \leqslant \pi $, $$\label{eq:psi-growth}
{\lvert {1+\mu\psi(iy)} \rvert} > C_\alpha(1+\mu y^{-1-\alpha}).$$
By \[eq:psi-singu,eq:psipi+,eq:psipi+2\], there exists $ 0 < y_\alpha < \pi $, depending only on $ \alpha $, such that $$\begin{aligned}
\operatorname{Re} \psi(iy) > C_\alpha y^{-1-\alpha}
\quad\forall\, y_\alpha \leqslant y \leqslant \pi, \\
\operatorname{Im} \psi(iy) < -C_\alpha y^{-1-\alpha}
\quad\forall\, 0 < y \leqslant y_\alpha.
\end{aligned}$$ It follows that $${\lvert {1+\mu\psi(iy)} \rvert} > C_\alpha \mu y^{-1-\alpha}
\quad\forall\, 0 < y \leqslant \pi,$$ and hence $$\begin{aligned}
\inf_{
\substack{
0 < y \leqslant \pi \\
y^{1+\alpha} \leqslant \mu < \infty
}
} \frac{{\lvert {1+\mu\psi(iy)} \rvert}}{1+\mu y^{-1-\alpha}} \geqslant
\inf_{
\substack{
0 < y \leqslant \pi \\
y^{1+\alpha} \leqslant \mu < \infty
}
} \frac{y^{1+\alpha}}{2\mu} {\lvert {1+\mu\psi(iy)} \rvert} > C_\alpha.
\end{aligned}$$ It remains therefore to prove $$\label{eq:psi-growth-1}
\inf_{
\substack{
0 < \mu \leqslant \pi^{1+\alpha} \\
\mu^{1/(1+\alpha)} \leqslant y \leqslant \pi
}
} \frac{{\lvert {1+\mu\psi(iy)} \rvert}}{1+\mu y^{-1-\alpha}}
> C_\alpha.$$
To this end, we proceed as follows. By \[eq:psi\], there exists a continuous function $ g $ on $ [0,\pi] $ such that $ g(0) = 0 $ and $$\psi(iy) = (iy)^{-1-\alpha} + y^{-1-\alpha} g(y),
\quad 0 < y \leqslant \pi.$$ A straightforward computation gives $$\begin{aligned}
& 2{\lvert {1+\mu\psi(iy)} \rvert}^2 \\
={} & 2{\lvert {
1+\mu(iy)^{-1-\alpha} +
\mu y^{-1-\alpha} g(y)
} \rvert}^2 \\
\geqslant{} &
{\lvert {1+\mu(iy)^{-1-\alpha}} \rvert}^2 -
2\mu^2 y^{-2-2\alpha} {\lvert {g(y)} \rvert}^2 \\
={} &
1 + \mu^2 y^{-2-2\alpha} +
2\mu y^{-1-\alpha} \cos((1+\alpha)\pi/2) -
2\mu^2 y^{-2-2\alpha} {\lvert {g(y)} \rvert}^2 \\
={} &
\Big( \mu y^{-1-\alpha} + \cos\big((1+\alpha)\pi/2\big) \Big)^2 +
\sin^2\big((1+\alpha)\pi/2\big) -
2\mu^2 y^{-2-2\alpha}{\lvert {g(y)} \rvert}^2 \\
\geqslant{} &
\sin^2\big((1+\alpha)\pi/2\big) -
2\mu^2 y^{-2-2\alpha}{\lvert {g(y)} \rvert}^2,
\end{aligned}$$ so that, by the fact $ g(0) = 0 $, there exists $ 0 < y_\alpha < \pi $, depending only on $ \alpha $, such that $$\inf_{
\substack{
0 < \mu \leqslant y_\alpha^{1+\alpha} \\
\mu^{1/(1+\alpha)} \leqslant y \leqslant y_\alpha
}
} {\lvert {1+\mu\psi(iy)} \rvert} > C_\alpha.$$ In addition, applying the extreme value theorem yields, by \[eq:1+mupsi\], that $$\inf_{
\substack{
0 \leqslant \mu \leqslant \pi^{1+\alpha} \\
y_\alpha \leqslant y \leqslant \pi
}
} {\lvert {1+\mu\psi(iy)} \rvert} > C_\alpha.$$ Using the above two estimates yields \[eq:psi-growth-1\], by the estimate $$\inf_{
\substack{
0 < \mu \leqslant \pi^{1+\alpha} \\
\mu^{1/(1+\alpha)} \leqslant y \leqslant \pi
}
} \frac{{\lvert {1+\mu\psi(iy)} \rvert}}{1+\mu y^{-1-\alpha}}
\geqslant \frac12 \inf_{
\substack{
0 < \mu \leqslant \pi^{1+\alpha} \\
\mu^{1/(1+\alpha)} \leqslant y \leqslant \pi
}
}
{\lvert {1+\mu\psi(iy)} \rvert}.$$ This completes the proof.
\[lem:g’\] For any $ \mu > 0 $ and $ 0 < y \leqslant \pi $, $$\label{eq:g'}
{\lvert {g'(y)} \rvert} < C_\alpha
\frac{\mu y^{-2-\alpha}}{(1+\mu y^{-1-\alpha})^2},$$ where $ g(y) := (1+\mu\psi(iy))^{-1} $.
By \[eq:psi-A-B\], $ \psi(iy) $ can be expressed in the form $$\psi(iy) = F(y) + G(y), \quad 0 < y \leqslant \pi,$$ where $ F $ is analytic on $ [0,\pi] $ and $$G(y) = (1-e^{iy}) y^{-2-\alpha}
\big( \cos(\alpha\pi/2) - i \sin(\alpha\pi/2) \big).$$ A direct calculation gives $${\lvert {G'(y)} \rvert} < C_\alpha y^{-2-\alpha},
\quad 0 < y \leqslant \pi,$$ so that $${\left\lvert {i\psi'(iy)} \right\rvert} = {\lvert {F'(y) + G'(y)} \rvert}
< C_\alpha y^{-2-\alpha}, \quad 0 < y \leqslant \pi.$$ In addition, \[lem:psi-growth\] implies $${\lvert {1+\mu\psi(iy)} \rvert}^{-2} < C_\alpha(1+\mu y^{-1-\alpha})^{-2},
\quad 0 < y \leqslant \pi.$$ Therefore, \[eq:g’\] follows from the equality $$g'(y) = \frac{i\mu \psi'(iy)}{(1+\mu\psi(iy))^2}.$$ This completes the proof.
In the next two subsections, we use $ \theta $ to abbreviate $ \theta_\alpha $, defined in \[lem:1+mupsi\], define $$\Upsilon := (\infty,0]e^{-i\theta} \cup [0,\infty)e^{i\theta},$$ and let $ \Upsilon $ be oriented so that $ \operatorname{Im} z $ increases along $
\Upsilon $. In addition, $ \Upsilon_1 := \{z \in \Upsilon:\ {\lvert {\operatorname{Im} z} \rvert}
\leqslant \pi\} $ and it inherits the orientation of $ \Upsilon $.
The first fractional ordinary equation {#ssec:first_ode}
--------------------------------------
This subsection considers the fractional ordinary equation $$\label{eq:ode-y}
\xi'(t) + \lambda \operatorname{D}_{0+}^{-\alpha} \xi(t) = 0,
\quad t > 0,$$ subjected to the initial value condition $ \xi(0) = \xi_0 $, where $ \lambda $ is a positive constant and $ \xi_0 \in \mathbb R $. By [@Lubich1996 (2.1)], the solution $ \xi $ of equation \[eq:ode-y\] is expressed by a contour integral $$\label{eq:y}
\xi(t) = \frac{\xi_0}{2\pi i} \int_\Upsilon
e^{tz} z^\alpha(z^{1+\alpha}+\lambda)^{-1} \, \mathrm{d}z,
\quad t > 0.$$ Applying the temporal discretization used in \[eq:numer\_sol\] to equation \[eq:ode-y\] yields the following discretization: let $ Y_0 = \xi_0 $; for $ k \in
\mathbb N $, the value of $ Y_{k+1} $ is determined by that $$\mu \Big(
\sum_{j=1}^k Y_j \big(
b_{k-j+2} - 2b_{k-j+1} + b_{k-j}
\big) + b_1 Y_{k+1}
\Big) + Y_{k+1} - Y_k = 0,$$ where $ \mu := \lambda \tau^{1+\alpha} $ and $ b_j := j^{1+\alpha}/\Gamma(2+\alpha)
$, $ j \in \mathbb N $.
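A direct implementation of this recursion is straightforward; the following Python sketch (illustrative only, with our own naming) computes $ Y_k $ and compares it with the exact solution, which by \[eq:y\] (equivalently, by the Mittag-Leffler representation in \[ssec:regu\]) is $ \xi(t_k) = \xi_0 E_{1+\alpha,1}(-\lambda t_k^{1+\alpha}) $; the products $ k\,|\xi(t_k)-Y_k| $ should stay bounded, in agreement with \[eq:y-Y\].

```python
import numpy as np
from math import gamma

def ml(beta, gam, z, terms=80):
    # truncated Mittag-Leffler series; adequate for moderate |z| only
    return sum(z ** n / gamma(n * beta + gam) for n in range(terms))

def dg_ode_homogeneous(alpha, lam, xi0, tau, K):
    """Y_0,...,Y_K from mu*(sum_{j=1}^k Y_j*(b_{k-j+2}-2b_{k-j+1}+b_{k-j})
    + b_1*Y_{k+1}) + Y_{k+1} - Y_k = 0, with mu = lam*tau^(1+alpha)."""
    mu = lam * tau ** (1 + alpha)
    b = np.array([j ** (1 + alpha) / gamma(2 + alpha) for j in range(K + 2)])
    Y = np.zeros(K + 1)
    Y[0] = xi0
    for k in range(K):                       # solve for Y_{k+1}
        conv = sum(Y[j] * (b[k - j + 2] - 2 * b[k - j + 1] + b[k - j])
                   for j in range(1, k + 1))
        Y[k + 1] = (Y[k] - mu * conv) / (1 + mu * b[1])
    return Y

alpha, lam, xi0, tau, K = 0.5, 1.0, 1.0, 1e-2, 100
Y = dg_ode_homogeneous(alpha, lam, xi0, tau, K)
t = tau * np.arange(K + 1)
xi = np.array([xi0 * ml(1 + alpha, 1.0, -lam * s ** (1 + alpha)) for s in t])
print(np.arange(1, K + 1) * np.abs(xi[1:] - Y[1:]))   # bounded, cf. (eq:y-Y)
```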
\[thm:Y-jump\] For any $ k \in \mathbb N_{>0} $ we have $$\label{eq:Y-jump}
{\lvert {Y_{k+1} - Y_k} \rvert} \leqslant C_\alpha k^{-1} {\lvert {\xi_0} \rvert}.$$
\[thm:y-Y\] For any $ k \in \mathbb N_{>0} $ we have $$\begin{aligned}
{\lvert {\xi(t_k) - Y_k} \rvert} &\leqslant C_\alpha k^{-1} {\lvert {\xi_0} \rvert}
\label{eq:y-Y}.
\end{aligned}$$
The main task of the rest of this subsection is to prove the above two theorems by the well-known Laplace transform method (the basic idea comes from [@Lubich1996; @McLean2015Time; @Jin2015]). We introduce the discrete Laplace transform of $ (Y_k)_{k=0}^\infty $ by that $$\label{eq:Y-laplace}
\widehat Y(z) := \sum_{k=0}^\infty Y_k e^{-kz}
\quad \forall z \in H,$$ where $ H := \{x+iy: 0 < x \leqslant \delta_{\alpha,\mu},\, -\pi \leqslant y
\leqslant \pi \} $, with $ \delta_{\alpha,\mu} $ being defined in \[lem:1+mupsi\]. By the definition of the sequence $ (Y_k)_{k=0}^\infty $, a straightforward computation gives $$\begin{aligned}
\mu (\widehat Y(z)-\xi_0) (e^z-1)^2\, \widehat b(z) +
(\widehat Y(z) - \xi_0) e^z - \widehat Y(z) = 0,
\quad z \in H,\end{aligned}$$ where $ \widehat b $ is the discrete Laplace transform of the sequence $ (b_k)_{k=0}^\infty
$, namely, $$\widehat b(z) = \sum_{k=1}^\infty
\frac{k^{1+\alpha}}{\Gamma(2+\alpha)} e^{-kz}.$$ For any $ z \in H $, combining like terms yields $$(e^z-1 + \mu(e^z-1)^2 \widehat b(z)) \widehat Y(z) -
\big( e^z + \mu(e^z-1)^2\,\widehat b(z) \big) \xi_0 = 0,$$ so that $$\begin{aligned}
\widehat Y(z) &=
\frac{ e^z + \mu(e^z-1)^2\, \widehat b(z) }{
e^z-1 + \mu(e^z-1)^2\,\widehat b(z)
} \, \xi_0 \\
&= \Big(
1 + \frac1{e^z-1 + \mu(e^z-1)^2\,\widehat b(z)}
\Big) \xi_0 \\
&= \Big(
1 + \frac{(e^z-1)^{-1}}{1+\mu\psi(z)}
\Big) \xi_0,\end{aligned}$$ by \[eq:def-psi,lem:1+mupsi\]. Therefore, a routine calculation (cf. [@McLean2015Time (28)]) yields that, for any $ 0 < a \leqslant
\delta_{\alpha,\mu} $ and $ k \in \mathbb N_{>0} $, $$\begin{aligned}
Y_k & = \frac{\xi_0}{2\pi i} \int_{a-i\pi}^{a+i\pi}
\widehat Y(z) e^{kz} \, \mathrm{d}z =
\frac{\xi_0}{2\pi i} \int_{a-i\pi}^{a+i\pi}
\frac{e^{kz}}{1+\mu\psi(z)} \frac{\mathrm{d}z}{e^z-1}.
$$ By \[eq:psi-singu,eq:1+mupsi\], letting $ a \to {0+} $ and applying Lebesgue’s dominated convergence theorem then yields $$\label{eq:Y_k}
Y_k = \frac{\xi_0}{2\pi i}\,
\int_{-i\pi}^{i\pi} \frac{e^{kz}}{1+\mu\psi(z)}
\, \frac{\mathrm{d}z}{e^z-1}.$$
By \[eq:1+mupsi\] we have that the integrand in \[eq:Y\_k\] is analytic on $$\omega := \left\{
z \in \mathbb C:\
0 < {\lvert {\operatorname{Im} z} \rvert} < \pi,\
\pi/2 < {\lvert {\operatorname{Arg} z} \rvert} < \theta
\right\},$$ this integrand is continuous on $ \partial\omega \setminus \{0\} $, and \[eq:psi-singu\] implies that $$\lim_{\omega \ni z \to 0} {\lvert {z} \rvert}^{-\alpha}
{\lvert {e^{kz}(1+\mu\psi(z))^{-1}(e^z-1)^{-1}} \rvert} = \mu^{-1}.$$ Additionally, $$\frac{e^{kz}}{(1+\mu\psi(z))(e^z-1)} =
\frac{e^{k(z+2\pi i)}}{(1+\mu\psi(z+2\pi i))(e^{z+2\pi i}-1)}$$ for all $ z = x-i\pi $, $ -\pi \tan\theta \leqslant x \leqslant 0 $. Therefore, an elementary calculation yields $$\label{eq:Y_k-3}
Y_k = \frac{\xi_0}{2\pi i} \int_{\Upsilon_1}
\frac{e^{kz}}{1+\mu\psi(z)} \frac{\mathrm{d}z}{e^z-1},$$ by \[eq:Y\_k\] and Cauchy’s integral theorem.
By the techniques used in the proof of \[thm:stab\], it is easy to obtain that $
{\lvert {Y_k} \rvert} \leqslant {\lvert {\xi_0} \rvert} $ for all $ k \in \mathbb N_{>0} $. Therefore, the series in \[eq:Y-laplace\] converges absolutely for all $ z \in H $.
Finally, we present the proofs of \[thm:Y-jump,thm:y-Y\] as follows.
[**Proof of \[thm:Y-jump\].**]{} Firstly, let us prove $$\label{eq:cos-g}
\Big\lvert \int_0^\pi \cos(ky) g(y) \, \mathrm{d}y \Big\rvert
\leqslant C_\alpha k^{-1},$$ where $ g(y) := (1+\mu\psi(iy))^{-1} $, $ 0 < y \leqslant \pi $. A straightforward computation gives $$\begin{aligned}
\int_0^\pi \cos(ky) g(y) \, \mathrm{d}y &=
\sum_{j=1}^k \int_{(j-1)\pi/k}^{j\pi/k}
\cos(ky) g(y) \, \mathrm{d}y \\
&= \sum_{j=1}^k \int_{(j-1)\pi/k}^{j\pi/k}
\cos(ky) \big( g(y)-g((j-1)\pi/k) \big) \, \mathrm{d}y \\
&= \sum_{j=1}^k \int_{(j-1)\pi/k}^{j\pi/k}
\cos(ky) \int_{(j-1)\pi/k}^y g'(s) \, \mathrm{d}s \, \mathrm{d}y.
\end{aligned}$$ It follows that
$$\begin{aligned}
& {\left\lvert {\int_0^\pi \cos(ky) g(y) \, \mathrm{d}y} \right\rvert}
\leqslant \sum_{j=1}^k \int_{(j-1)\pi/k}^{j\pi/k}
\int_{(j-1)\pi/k}^y {\lvert {g'(s)} \rvert} \, \mathrm{d}s \, \mathrm{d}y \\
<{} & \pi k^{-1} \int_0^\pi {\lvert {g'(y)} \rvert} \, \mathrm{d}y
< C_\alpha k^{-1} \int_0^\pi
\frac{\mu y^{-2-\alpha}}{(1+\mu y^{-1-\alpha})^2} \, \mathrm{d}y
\quad\text{(by \cref{lem:g'})} \\
<{} & C_\alpha k^{-1} \bigg(
\int_0^{\mu^{1/(1+\alpha)}}
\frac{\mu y^{-2-\alpha}}{(1+\mu y^{-1-\alpha})^2} \, \mathrm{d}y +
\int_{\mu^{1/(1+\alpha)}}^{\max\{\mu^{1/(1+\alpha)}, \pi\}}
\frac{\mu y^{-2-\alpha}}{(1+\mu y^{-1-\alpha})^2} \, \mathrm{d}y
\bigg) \\
<{} &
C_\alpha k^{-1} \bigg(
\int_0^{\mu^{1/(1+\alpha)}}
\mu^{-1} y^\alpha \, \mathrm{d}y +
\int_{\mu^{1/(1+\alpha)}}^{\max\{\mu^{1/(1+\alpha)},\pi\}}
\mu y^{-2-\alpha} \, \mathrm{d}y
\bigg) < C_\alpha k^{-1},
\end{aligned}$$
which proves \[eq:cos-g\].
Secondly, let us prove $$\label{eq:sin-g}
\Big\lvert \int_0^\pi \sin(ky) g(y) \, \mathrm{d}y \Big\rvert
< C_\alpha k^{-1}.$$ If $ k = 1 + 2m $, $ m \in \mathbb N $, then a similar argument as that to derive \[eq:cos-g\] yields $$\begin{aligned}
\Big\lvert \int_0^{2m\pi/(1+2m)} \sin(ky) g(y) \, \mathrm{d}y \Big\rvert
< C_\alpha k^{-1},
\end{aligned}$$ and hence \[eq:sin-g\] follows from the estimate $$\begin{aligned}
\Big\lvert \int_{2m\pi/(1+2m)}^\pi \sin(ky) g(y) \, \mathrm{d}y \Big\rvert
< C_\alpha k^{-1},
\end{aligned}$$ which is evident by \[lem:psi-growth\]. If $ k = 2m $, $ m \in \mathbb N_{>0} $, then a simple modification of the above analysis proves that \[eq:sin-g\] still holds.
Finally, combining \[eq:cos-g,eq:sin-g\] yields $$\Big\lvert \int_0^\pi e^{iky} g(y) \, \mathrm{d}y \Big\rvert
\leqslant C_\alpha k^{-1},$$ so that $$\Big\lvert \operatorname{Re} \int_0^\pi e^{iky} g(y) \, \mathrm{d}y \Big\rvert
\leqslant C_\alpha k^{-1}.$$ Therefore, \[eq:Y-jump\] follows from $$Y_{k+1} - Y_k = \frac{\xi_0}\pi\,
\operatorname{Re} \int_0^\pi e^{iky}g(y) \, \mathrm{d}y,$$ which is evident by \[eq:psi-conj,eq:Y\_k\]. This concludes the proof of \[thm:Y-jump\]. $\blacksquare$
[**Proof of \[thm:y-Y\].**]{} Substituting $ \eta:= \tau z $ into \[eq:y\] yields $$\xi(t_k) = \frac{\xi_0}{2\pi i} \int_\Upsilon
e^{k\eta}(\eta + \mu\eta^{-\alpha})^{-1} \, \mathrm{d}\eta,$$ and then subtracting \[eq:Y\_k-3\] from this equation gives $$\label{eq:y_k-Y_k}
\xi(t_k) - Y_k = \mathbb I_1 + \mathbb I_2,$$ where $$\begin{aligned}
\mathbb I_1 &:= \frac{\xi_0}{2\pi i}
\int_{\Upsilon \setminus \Upsilon_1} e^{kz}
(z+\mu z^{-\alpha})^{-1} \, \mathrm{d}z, \\
\mathbb I_2 &:=
\frac{\xi_0}{2\pi i} \int_{\Upsilon_1} e^{kz} \Big(
(z + \mu z^{-\alpha})^{-1} -
(1+\mu\psi(z))^{-1} (e^z-1)^{-1}
\Big) \, \mathrm{d}z.
\end{aligned}$$ Since $\mathbb{I}_1$ is a real number, a simple calculation gives $$\begin{aligned}
\mathbb I_1 &=
\frac{\xi_0}\pi
\operatorname{Im} \int_{\pi/\sin\theta}^\infty
e^{kre^{i\theta}} (re^{i\theta} + \mu (re^{i\theta})^{-\alpha})^{-1}
e^{i\theta} \,\mathrm{d}r \\
&= \frac{\xi_0}\pi
\operatorname{Im} \int_{\pi/\sin\theta}^\infty
e^{kre^{i\theta}} \frac{(re^{i\theta})^\alpha}{
(re^{i\theta})^{1+\alpha} + \mu
} e^{i\theta} \,\mathrm{d}r,
\end{aligned}$$ and the fact $ \pi/2 < \theta < (\alpha+3)/(4\alpha+4)\pi $ implies $${\left\lvert {\frac{(re^{i\theta})^\alpha}{(re^{i\theta})^{1+\alpha}+\mu}} \right\rvert} =
\frac{r^\alpha}{
{\lvert {
r^{1+\alpha}\cos((1+\alpha)\theta) +
\mu + i r^{1+\alpha}\sin((1+\alpha)\theta)
} \rvert}
} < C_\alpha r^{-1}.$$ Consequently, $$\begin{aligned}
{\lvert {\mathbb I_1} \rvert}
& \leqslant C_\alpha {\lvert {\xi_0} \rvert} \int_{\pi/\sin\theta}^\infty
e^{kr\cos\theta} r^{-1} \, \mathrm{d}r
\leqslant C_\alpha {\lvert {\xi_0} \rvert} \int_{\pi/\sin\theta}^\infty
e^{kr\cos\theta} \, \mathrm{d}r \\
& \leqslant C_\alpha k^{-1} e^{k\pi\cot\theta} {\lvert {\xi_0} \rvert}.
\end{aligned}$$
Then let us estimate $ \mathbb I_2 $. For any $ z \in \Upsilon_1 \setminus \{0\} $, since $$\begin{aligned}
& z + \mu z^{-\alpha} = z^{-\alpha}(z^{1+\alpha} + \mu) \\
={} &
{\lvert {z} \rvert}^{-\alpha} e^{-i\alpha\theta}
\Big(
{\lvert {z} \rvert}^{1+\alpha} \cos\big((1+\alpha)\theta\big) + \mu +
i {\lvert {z} \rvert}^{1+\alpha} \sin\big((1+\alpha)\theta\big)
\Big),
\end{aligned}$$ from the fact $ \pi/2 < \theta < (\alpha+3)/(4\alpha+4)\pi $ it follows that $$ {\lvert {z+\mu z^{-\alpha}} \rvert} > C_\alpha {\lvert {z} \rvert}.$$ By \[eq:psi\], a routine calculation gives $$ {\lvert {(1+\mu\psi(z))(e^z-1) - (z+\mu z^{-\alpha})} \rvert}
\leqslant C_\alpha\big( {\lvert {z} \rvert}^2 + \mu {\lvert {z} \rvert}^{1-\alpha} \big),$$ and, similar to \[eq:psi-growth\], we have $$ {\lvert {1+\mu\psi(z)} \rvert} > C_\alpha (1+\mu {\lvert {z} \rvert}^{-1-\alpha}).$$ In addition, it is clear that $${\lvert {e^z-1} \rvert} > C_\alpha {\lvert {z} \rvert}, \quad z \in \Upsilon_1 \setminus \{0\}.$$ Using the above four estimates, we obtain $$\begin{aligned}
& {\lvert {
(z+\mu z^{-\alpha})^{-1} - (1+\mu\psi(z))^{-1}(e^z-1)^{-1}
} \rvert} \\
={} &
{\left\lvert {
\frac{
(1+\mu\psi(z))(e^z-1) - (z+\mu z^{-\alpha})
}{
(z+\mu z^{-\alpha})(1+\mu\psi(z))(e^z-1)
}
} \right\rvert} \\
<{} &
C_\alpha \frac{
{\lvert {z} \rvert}^2 + \mu {\lvert {z} \rvert}^{1-\alpha}
}{
{\lvert {z} \rvert}^2(1+\mu{\lvert {z} \rvert}^{-1-\alpha})
} = C_\alpha
\end{aligned}$$ for all $ z \in \Upsilon_1 \setminus \{0\} $. Therefore,
$$\begin{aligned}
{\lvert {\mathbb I_2} \rvert} &= {\left\lvert {
\frac{\xi_0}\pi \operatorname{Im} \int_0^{\pi\!/\!\sin\theta}
\! e^{kre^{i\theta}} \! \Big(
\big( re^{i\theta} \!+\! \mu (re^{i\theta})^{-\alpha} \big)^{-1}
\!-\! \big(1\!+\!\mu\psi(re^{i\theta})\big)^{-1}
\big( e^{re^{i\theta}} \!-\! 1 \big)^{-1}
\Big) e^{i\theta} \mathrm{d}r
} \right\rvert} \\
& \leqslant C_\alpha {\lvert {\xi_0} \rvert} \int_0^{\pi/\sin\theta}
e^{kr\cos\theta} \, \mathrm{d}r
\leqslant C_\alpha k^{-1} {\lvert {\xi_0} \rvert}.
\end{aligned}$$
Finally, combining \[eq:y\_k-Y\_k\] and the above estimates for $ \mathbb I_1 $ and $ \mathbb I_2 $ proves \[eq:y-Y\] and thus concludes the proof. $\blacksquare$
The second fractional ordinary equation
---------------------------------------
This subsection considers the fractional ordinary equation $$\xi'(t) + \lambda \operatorname{D}_{0+}^{-\alpha} \xi(t) = 1, \quad t > 0,$$ subjected to the initial value condition $ \xi(0) = 0 $. Applying the temporal discretization in \[eq:numer\_sol\] yields the following discretization: let $ Y_0
= 0 $; for $ k \in \mathbb N $, the value of $ Y_{k+1} $ is determined by that $$\mu \bigg(
\sum_{j=1}^k Y_j(b_{k-j+2} - 2b_{k-j+1} + b_{k-j}) +
b_1 Y_{k+1}
\bigg) + Y_{k+1} - Y_k = \tau.$$ Similar to \[eq:y,eq:Y\_k-3\], we have $$\begin{aligned}
\xi(t) &= \frac1{2\pi i}
\int_{\Upsilon} e^{tz} (z^2+\lambda z^{1-\alpha})^{-1} \, \mathrm{d}z,
\quad t > 0, \label{eq:y2} \\
Y_k &= \frac\tau{2\pi i} \int_{\Upsilon_1}
\frac{e^{kz+z}}{1+\mu\psi(z)} \frac{\mathrm{d}z}{(e^z-1)^2},
\quad k \in \mathbb N_{>0}.
\label{eq:Z_k}\end{aligned}$$
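The inhomogeneous recursion differs from the one in \[ssec:first\_ode\] only through the right-hand side $ \tau $ and the zero initial value. The following Python sketch (our naming, illustrative only) mirrors the sketch given there; a convenient exact reference, obtained from \[eq:y2\] via the standard Laplace pair for the Mittag-Leffler function, is $ \xi(t) = t\,E_{1+\alpha,2}(-\lambda t^{1+\alpha}) $ (this identity is our own check and is not used in the paper).

```python
import numpy as np
from math import gamma

def dg_ode_constant_rhs(alpha, lam, tau, K):
    # same weights b and parameter mu as in dg_ode_homogeneous,
    # but with Y_0 = 0 and right-hand side tau in every step
    mu = lam * tau ** (1 + alpha)
    b = np.array([j ** (1 + alpha) / gamma(2 + alpha) for j in range(K + 2)])
    Y = np.zeros(K + 1)
    for k in range(K):
        conv = sum(Y[j] * (b[k - j + 2] - 2 * b[k - j + 1] + b[k - j])
                   for j in range(1, k + 1))
        Y[k + 1] = (Y[k] + tau - mu * conv) / (1 + mu * b[1])
    return Y
```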
\[thm:z-Z\] For any $ k \in \mathbb N_{>0} $, $$\label{eq:z-Z}
{\lvert {\xi(t_k) - Y_k} \rvert} < C_\alpha \tau.$$
Since the proof of this theorem is similar to that of \[thm:y-Y\], we only highlight the differences. Proceeding as in the proof of \[thm:y-Y\] yields $$\xi(t_k) - Y_k = \mathbb I_1 + \mathbb I_2,$$ where $$\begin{aligned}
\mathbb I_1 &:= \frac\tau{2\pi i}
\int_{\Upsilon \setminus \Upsilon_1} e^{kz}
(z^2+\mu z^{1-\alpha})^{-1} \, \mathrm{d}z, \\
\mathbb I_2 &:=
\frac\tau{2\pi i} \int_{\Upsilon_1} e^{kz} \Big(
(z^2 + \mu z^{1-\alpha})^{-1} -
(1+\mu\psi(z))^{-1} (e^z-1)^{-2} e^z
\Big) \, \mathrm{d}z.
\end{aligned}$$ Moreover, $$\begin{aligned}
{\lvert {\mathbb I_1} \rvert} < C_\alpha \tau \int_{\pi/\sin\theta}^\infty
e^{kr\cos\theta} r^{-2} \, \mathrm{d}r
< C_\alpha \tau \int_{\pi/\sin\theta}^\infty
e^{kr\cos\theta} \, \mathrm{d}r
< C_\alpha \tau k^{-1} e^{k\pi\cot\theta}.
\end{aligned}$$ For any $ z \in \Upsilon_1 \setminus \{0\} $, since $$\begin{aligned}
& z^2 + \mu z^{1-\alpha} = z^{1-\alpha}(z^{1+\alpha} + \mu) \\
={} &
{\lvert {z} \rvert}^{1-\alpha} e^{i(1-\alpha)\theta}
\Big(
{\lvert {z} \rvert}^{1+\alpha} \cos\big((1+\alpha)\theta\big) + \mu +
i {\lvert {z} \rvert}^{1+\alpha} \sin\big((1+\alpha)\theta\big)
\Big),
\end{aligned}$$ from the fact $ \pi/2 < \theta < (\alpha+3)/(4\alpha+4)\pi $ it follows that there exists a positive constant $ c $, depending only on $ \alpha $, such that $${\lvert {z^2 + \mu z^{1-\alpha}} \rvert} >
\begin{cases}
C_\alpha \mu {\lvert {z} \rvert}^{1-\alpha} &
\text{if}\quad 0 < {\lvert {z} \rvert} \leqslant c\mu^{1/(1+\alpha)} , \\
C_\alpha {\lvert {z} \rvert}^2 &
\text{if}\quad c \mu^{1/(1+\alpha)} \leqslant
{\lvert {z} \rvert} \leqslant \pi/\sin\theta.
\end{cases}$$ By \[eq:psi\], a routine calculation gives $$ {\lvert {(1+\mu\psi(z))(e^z-1)^2 - (z^2+\mu z^{1-\alpha}) e^z} \rvert}
< C_\alpha \big( {\lvert {z} \rvert}^4 + \mu{\lvert {z} \rvert}^{2-\alpha} \big),$$ and, similar to \[eq:psi-growth\], we have $$ {\lvert {1+\mu\psi(z)} \rvert} > C_\alpha (1+\mu {\lvert {z} \rvert}^{-1-\alpha}).$$ Using the above three estimates, we obtain $$\begin{aligned}
& {\lvert {
(z^2+\mu z^{1-\alpha})^{-1} - (1+\mu\psi(z))^{-1}(e^z-1)^{-2}e^z
} \rvert} \\
={} &
{\left\lvert {
\frac{
(1+\mu\psi(z))(e^z-1)^2 - (z^2+\mu z^{1-\alpha})e^z
}{
(z^2+\mu z^{1-\alpha})(1+\mu\psi(z))(e^z-1)^2
}
} \right\rvert} \\
<{} &
\left\{
\begin{aligned}
& C_\alpha \frac{{\lvert {z} \rvert}^4 + \mu{\lvert {z} \rvert}^{2-\alpha}}
{\mu({\lvert {z} \rvert}^{3-\alpha} + \mu{\lvert {z} \rvert}^{2-2\alpha})}
\qquad\quad\text{ if }\,\, 0 < {\lvert {z} \rvert} \leqslant c\mu^{1/(1+\alpha)}, \\
& C_\alpha \frac{{\lvert {z} \rvert}^4 + \mu{\lvert {z} \rvert}^{2-\alpha}}
{{\lvert {z} \rvert}^4 + \mu{\lvert {z} \rvert}^{3-\alpha}}
\qquad\qquad\qquad\text{ if}\,\, c\mu^{1/(1+\alpha)} < {\lvert {z} \rvert}
\leqslant \pi/\sin\theta,
\end{aligned}
\right. \\
<{} &
\left\{
\begin{aligned}
& C_\alpha \big(
1 + \mu^{-1} {\lvert {z} \rvert}^\alpha
\big) \qquad\qquad\qquad\,\,\text{ if }
\,\, 0 < {\lvert {z} \rvert} \leqslant c\mu^{1/(1+\alpha)}, \\
& C_\alpha \big( 1 + \mu {\lvert {z} \rvert}^{-2-\alpha} \big)
\qquad\qquad\qquad\text{ if}
\,\, c\mu^{1/(1+\alpha)} < {\lvert {z} \rvert} < \pi/\sin\theta,
\end{aligned}
\right.
\end{aligned}$$ for all $ z \in \Upsilon_1 \setminus \{0\} $. Therefore, if $ c\mu^{1/(1+\alpha)}
\leqslant \pi/\sin\theta $ then $$\begin{aligned}
{\lvert {\mathbb I_2} \rvert} & < C_\alpha \tau \bigg(
\int_0^{c\mu^{1/(1+\alpha)}}
e^{kr\cos\theta} \big(
1 + \mu^{-1} r^\alpha
\big) \, \mathrm{d}r \\
& \qquad\qquad {} +
\int_{ c\mu^{1/(1+\alpha)} }^{ \pi/\sin\theta }
e^{kr\cos\theta}(1 + \mu r^{-2-\alpha}) \, \mathrm{d}r
\bigg) \\
& < C_\alpha \tau \Big(
\int_0^{c\mu^{1/(1+\alpha)}}
1 + \mu^{-1} r^\alpha \, \mathrm{d} r +
\int_{c\mu^{1/(1+\alpha)}}^{\pi/\sin\theta}
1 + \mu r^{-2-\alpha} \, \mathrm{d}r
\Big) \\
& < C_\alpha \tau,
\end{aligned}$$ and if $ c\mu^{1/(1+\alpha)} > \pi/\sin\theta $ then $${\lvert {\mathbb I_2} \rvert} < C_\alpha \tau
\int_0^{\pi/\sin\theta}
e^{kr\cos\theta} \big(
1 + \mu^{-1} r^\alpha
\big) \, \mathrm{d}r < C_\alpha\tau.$$
Finally, combining the above estimates for $ \mathbb I_1 $ and $ \mathbb I_2 $ proves \[eq:z-Z\] and hence this theorem.
Main results {#sec:main}
============
In the rest of this paper, we assume that $ h < e^{-2(1+\alpha)} $ and $ \tau < T/e
$. The symbol $ a\lesssim b $ means that there exists a positive constant $ C $, depending only on $ \alpha $, $ T $, $ \Omega $, the shape-regularity parameter of $
\mathcal K_h $ and the ratio of $ h $ to the minimum diameter of the elements in $
\mathcal K_h $, unless otherwise specified, such that $ a \leqslant Cb $. Additionally, since the following properties are frequently used in the forthcoming analysis, we will use them implicitly (cf. [@Samko1993]): $$\begin{aligned}
& \operatorname{D}_{a+}^\beta \operatorname{D}_{a+}^\gamma =
\operatorname{D}_{a+}^{\beta+\gamma}, \quad
\operatorname{D}_{b-}^\beta \operatorname{D}_{b-}^\gamma = \operatorname{D}_{b-}^{\beta+\gamma},
\text{ and } \\
& {\langle {\operatorname{D}_{a+}^\beta v, w} \rangle}_{(a,b)} =
{\langle {v, \operatorname{D}_{b-}^\beta w} \rangle}_{(a,b)},
\quad v,w \in L^2(a,b),\end{aligned}$$ where $ -\infty < a < b < \infty $ and $ -\infty < \beta, \gamma < 0 $.
\[thm:conv-u0\] If $ u_0 \in L^2(\Omega) $ and $ f = 0 $, then $$\label{eq:conv-u0}
{\lVert {u(t_j) - U_j} \rVert}_{L^2(\Omega)} \lesssim
\big(
h^2 t_j^{-\alpha-1} + \tau t_j^{-1}
\big) {\lVert {u_0} \rVert}_{L^2(\Omega)}$$ for all $ 1 \leqslant j \leqslant J $.
Let $ u_h $ be the solution of the spatially discrete problem: $$u_h'(t) - \Delta_h \operatorname{D}_{0+}^{-\alpha} u_h(t) = 0,
\quad t > 0,$$ subjected to the initial value condition $ u_h(0) = P_h u_0 $, where the discrete Laplace operator $ \Delta_h:S_h \to S_h $ is defined by that $${\langle {-\Delta_h v_h,w_h} \rangle}_\Omega :=
{\langle {\nabla v_h, \nabla w_h} \rangle}_\Omega
\quad\text{for all}\, v_h, w_h \in S_h.$$ By [@Lubich1996 Theorem 2.1] we have $${\lVert {u(t) - u_h(t)} \rVert}_{L^2(\Omega)} \lesssim
h^2 t^{-\alpha-1} {\lVert {u_0} \rVert}_{L^2(\Omega)},
\quad t > 0,$$ and by \[thm:y-Y\] we obtain $${\lVert {U_j - u_h(t_j)} \rVert}_{L^2(\Omega)}
\lesssim \tau t_j^{-1} {\lVert {u_0} \rVert}_{L^2(\Omega)}.$$ Combining the above two estimates proves \[eq:conv-u0\] and hence this theorem.
\[thm:f-const\] If $ u_0 = 0 $ and $ f(t) = v \in L^2(\Omega) $, $ 0 < t < T $, then $$\label{eq:f-const}
{\lVert {u(t_j) - U_j} \rVert}_{L^2(\Omega)} \lesssim
\big( t_j^{-\alpha} h^2 + \tau \big) {\lVert {v} \rVert}_{L^2(\Omega)}$$ for all $ 1 \leqslant j \leqslant J $.
Let $ u_h $ be the solution of the spatially discrete problem: $$u_h'(t) - \Delta_h \operatorname{D}_{0+}^{-\alpha} u_h(t) = P_h v,
\quad t > 0,$$ subjected to the initial value condition $ u_h(0) = 0 $. By [@Lubich1996 Theorem 2.2] we have $${\lVert {u(t) - u_h(t)} \rVert}_{L^2(\Omega)} \lesssim
t^{-\alpha} h^2 {\lVert {v} \rVert}_{L^2(\Omega)},
\quad t > 0,$$ and \[thm:z-Z\] implies $${\lVert {U_j - u_h(t_j)} \rVert}_{L^2(\Omega)} \lesssim
\tau {\lVert {P_hv} \rVert}_{L^2(\Omega)} \lesssim
\tau {\lVert {v} \rVert}_{L^2(\Omega)}.$$ Combining the above two estimates proves \[eq:f-const\] and hence this theorem.
\[thm:conv\_f\_L2\] If $ u_0 = 0 $ and $ f \in L^2(0,T;\dot H^{\alpha/(\alpha+1)}(\Omega)\!) $, then $$\label{eq:conv_f_L2}
{\lVert {u-U} \rVert}_{L^\infty(0,T;L^2(\Omega))}
\lesssim \left( h + \sqrt{\ln(1/h)} \tau^{1/2} \right)
{\lVert {f} \rVert}_{L^2(0,T;\dot H^{\alpha/(\alpha+1)}(\Omega)\!)}.$$
Since \[thm:regu-pde\] implies $$\begin{aligned}
{\lVert {u} \rVert}_{{}_0H^1(0,T;\dot H^{\alpha/(\alpha+1)}(\Omega)\!)} +
{\lVert {u} \rVert}_{C([0,T];\dot H^1(\Omega)\!)} & \leqslant
C_\alpha {\lVert {f} \rVert}_{L^2(0,T;\dot H^{\alpha/(\alpha+1)}(\Omega)\!)},
\end{aligned}$$ error estimate \[eq:conv\_f\_L2\] is nearly optimal with respect to the regularity of $ u $.
\[thm:conv\_f\_higher\] If $ u_0 = 0 $ and $ f \in {}_0H^{\alpha+1/2}(0,T;L^2(\Omega)) $, then $$\label{eq:conv_f_higher}
{\lVert {u\!-\!U} \rVert}_{L^\infty(0,T;L^2(\Omega))} \lesssim
\ln(T/\tau) \left( \sqrt{\ln(1/h)}\, h^2 \!+\! \tau \right)
{\lVert {f} \rVert}_{{}_0H^{\alpha+1/2}(0,T;L^2(\Omega))}.$$
Assume that $ u $ satisfies the following regularity assumption: for any $ 0 < t
\leqslant T $, $$\begin{aligned}
{\lVert {u(t)} \rVert}_{\dot H^2(\Omega)} + t{\lVert {u'(t)} \rVert}_{\dot H^2(\Omega)}
& \leqslant M, \\
{\lVert {u'(t)} \rVert}_{L^2(\Omega)} + t{\lVert {u''(t)} \rVert}_{L^2(\Omega)}
& \leqslant M t^{\sigma-1}, \\
t{\lVert {u'(t)} \rVert}_{\dot H^2(\Omega)} +
t^2 {\lVert {u''(t)} \rVert}_{\dot H^2(\Omega)} & \leqslant Mt^{\sigma-1},
\end{aligned}$$ where $ M $ and $ \sigma $ are two positive constants. Letting $$t_j = (j/J)^\gamma T \text{ for all } 1 \leqslant j \leqslant J,
\quad \gamma > 1/\sigma,$$ Mustapha and McLean [@Mustapha2009Discontinuous] obtained $${\lVert {u(t_j)-U_j} \rVert}_{L^2(\Omega)}
\lesssim {\lVert {u_0 - U_0} \rVert}_{L^2(\Omega)} + M \big( \ln(t_j/t_1)h^2 + T/J \big),$$ and hence in the case $ u_0 \in L^2(\Omega) $ no convergence rate was derived. Besides, under the condition $ u_0 = 0 $ and $ f \in
{}_0H^{\alpha+1/2}(0,T;L^2(\Omega)) $, by \[thm:regu-pde\] we have only $${\lVert {u} \rVert}_{{}_0H^{\alpha+3/2}(0,T;L^2(\Omega))} +
{\lVert {u} \rVert}_{{}_0H^{1/2}(0,T;\dot H^2(\Omega))}
\leqslant C_\alpha {\lVert {f} \rVert}_{{}_0H^{\alpha+1/2}(0,T;L^2(\Omega))},$$ so that $ u $ does not necessarily satisfy the above regularity assumption.
\[thm:f-const,thm:conv\_f\_higher\] imply that if $ f \in
H^{\alpha+1/2}(0,T;L^2(\Omega)) $ and $ f(0) \neq 0 $, then
$${\lVert {u(t_j) - U_j} \rVert}_{L^2(\Omega)} \lesssim
\left(
\left( \ln(T/\tau) \sqrt{\ln(1/h)} + t_j^{-\alpha} \right) h^2 +
\ln(T/\tau) \tau
\right) {\lVert {f} \rVert}_{H^{\alpha+1/2}(0,T;L^2(\Omega))}$$
for all $ 1 \leqslant j \leqslant J $, where $ H^{\alpha+1/2}(0,T;L^2(\Omega)) $ is defined analogously to the space $ {}_0H^{\alpha+1/2}(0,T;L^2(\Omega)) $. Furthermore, \[thm:conv-u0,thm:f-const\] imply that if the accuracy of $ U $ near $ t = 0 $ is unimportant, then using graded temporal grids to tackle the singularity caused by nonsmooth $ u_0 $ and $ f(0) $ is unnecessary.
The rest of this section is devoted to the proofs of \[thm:conv\_f\_L2,thm:conv\_f\_higher\]. Let $ X $ be a separable Hilbert space. For any $ v \in C((0,T];X) $ we define $$(P_\tau v)|_{I_j} \equiv v(t_j),
\quad 1 \leqslant j \leqslant J,$$ and for any $ v \in L^1(0,T;X) $ we define $$\label{eq:def-Q}
(Q_\tau v)|_{I_j} \equiv \tau^{-1} \int_{I_j} v,
\quad 1 \leqslant j \leqslant J.$$ The operator $ Q_\tau $ possesses the standard estimates $$\begin{aligned}
{\lVert {(I-Q_\tau)v} \rVert}_{L^2(0,T;X)} &\leqslant {\lVert {v} \rVert}_{L^2(0,T;X)}
\phantom{\tau{}_0}\quad\forall v \in L^2(0,T;X), \\
{\lVert {(I-Q_\tau)v} \rVert}_{L^2(0,T;X)} &\lesssim \tau {\lVert {v} \rVert}_{{}_0H^1(0,T;X)}
\quad\forall v \in {}_0H^1(0,T;X).\end{aligned}$$ Hence, for any $ v \in {}_0H^\beta(0,T;X) $ with $ 0 < \beta < 1 $, applying [@Tartar2007 Lemma 22.3] yields $${\lVert {(I-Q_\tau)v} \rVert}_{[L^2(0,T;X),\ L^2(0,T;X)]_{\beta,2}}
\lesssim \tau^\beta {\lVert {v} \rVert}_{{}_0H^\beta(0,T;X)},$$ so that [@Tartar2007 (23.11)] implies $$\label{eq:Q_tau}
{\lVert {(I-Q_\tau)v} \rVert}_{L^2(0,T;X)} \lesssim
\tau^\beta \sqrt{\beta(1-\beta)} \, {\lVert {v} \rVert}_{{}_0H^\beta(0,T;X)}.$$ Here we have used the fact that $ {}_0H^\beta(0,T;X) = [L^2(0,T;X),
{}_0H^1(0,T;X)]_{\beta,2} $ with equivalent norms (cf. \[rem:equiv\_frac\_space\]). Similarly, for any $ v \in {}^0H^\beta(0,T;X) $ with $ 0 < \beta < 1 $, we have $$\label{eq:Q_tau-sys}
{\lVert {(I-Q_\tau)v} \rVert}_{L^2(0,T;X)} \lesssim
\tau^\beta \sqrt{\beta(1-\beta)}
\, {\lVert {v} \rVert}_{{}^0H^\beta(0,T;X)}.$$ Moreover, the following three well-known estimates follow from [@Tartar2007 Lemmas 12.4, 16.3, 22.3, 23.1].
\[lem:Hs\] If $ v \in {}_0H^\beta(0,1) $ with $ 0 < \beta < 1 $, then $$\label{eq:Hs-1}
\left(
\int_0^1 \int_0^1 \frac{{\lvert {v(t)-v(s)} \rvert}^2}{{\lvert {t-s} \rvert}^{1+2\beta}}
\, \mathrm{d}t \, \mathrm{d}s
\right)^{1/2} \leqslant C {\lVert {v} \rVert}_{{}_0H^\beta(0,1)},$$ and if, in addition, $ 1/2 < \beta < 1 $ then $$\label{eq:Hs-2}
{\lVert {v} \rVert}_{C[0,1]} \leqslant C
\sqrt{\frac{1-\beta}{2\beta-1}} {\lVert {v} \rVert}_{{}_0H^\beta(0,1)},$$
$$\label{eq:Hs-3}
{\lVert {v} \rVert}_{C[0,1]} \leqslant
\frac{C}{\sqrt{2\beta\!-\!1}} \left(
{\lVert {v} \rVert}_{L^2(0,1)} \!+\! \sqrt{1\!-\!\beta} \left(
\int_0^1\!\!\int_0^1 \frac{{\lvert {v(t)\!-\!v(s)} \rvert}^2}{{\lvert {t\!-\!s} \rvert}^{1+2\beta}}
\, \mathrm{d}s \, \mathrm{d}t
\right)^{1/2}
\right),$$
where $ C $ is a positive constant independent of $ \beta $ and $ v $.
\[lem:P\_tau\] If $ v \in {}_0H^\beta(0,T) $ with $ 1/2 < \beta < 1 $, then $$\label{eq:P_tau}
{\lVert {(I-P_\tau)v} \rVert}_{L^2(0,T)} \lesssim
\tau^\beta \sqrt{\frac{1-\beta}{2\beta-1}}\,
{\lVert {v} \rVert}_{{}_0H^\beta(0,T)}.$$
By the definition of $ P_\tau $ and \[eq:Hs-3\], a scaling argument yields
$$\begin{aligned}
& {\lVert {(I-P_\tau)v} \rVert}_{L^2(I_j)}^2 \\
\lesssim{} &
\frac1{2\beta-1} \left(
{\lVert {(I-Q_\tau)v} \rVert}_{L^2(I_j)}^2 +
(1-\beta)\tau^{2\beta} \int_{I_j} \int_{I_j}
\frac{{\lvert {v(t)-v(s)} \rvert}^2}{{\lvert {t-s} \rvert}^{1+2\beta}}
\, \mathrm{d}s \, \mathrm{d}t
\right),
\end{aligned}$$
so that
$$\begin{aligned}
& \sqrt{2\beta-1}
{\lVert {(I-P_\tau)v} \rVert}_{L^2(0,T)} \\
\lesssim{} &
{\lVert {(I-Q_\tau)v} \rVert}_{L^2(0,T)} +
\sqrt{1-\beta}\, \tau^\beta \left(
\int_0^T \int_0^T \frac{{\lvert {v(t)-v(s)} \rvert}^2}{{\lvert {t-s} \rvert}^{1+2\beta}}
\, \mathrm{d}s \, \mathrm{d}t
\right)^{1/2} \\
\lesssim{} &
\tau^\beta \sqrt{1-\beta}\left(
{\lVert {v} \rVert}_{{}_0H^\beta(0,T)} + \left(
\int_0^T \int_0^T \frac{{\lvert {v(t)-v(s)} \rvert}^2}{{\lvert {t-s} \rvert}^{1+2\beta}}
\, \mathrm{d}s \, \mathrm{d}t
\right)^{1/2}
\right),
\end{aligned}$$
by \[eq:Q\_tau\]. Another scaling argument gives, by \[eq:Hs-1\], that
$$\left(
\int_0^T \int_0^T \frac{{\lvert {v(t)-v(s)} \rvert}^2}{{\lvert {t-s} \rvert}^{1+2\beta}}
\, \mathrm{d}s \, \mathrm{d}t
\right)^{1/2} \lesssim {\lVert {v} \rVert}_{{}_0H^\beta(0,T)}.$$
Combining the above two estimates proves \[eq:P\_tau\] and thus concludes this proof.
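As a quick illustration of the $ \tau^\beta $ scaling in \[eq:P\_tau\] (a sanity check, not part of the proof), consider $ v(t)=t^{0.1} $ on $ (0,1) $, which belongs to $ {}_0H^\beta(0,1) $ for every $ \beta < 0.6 $; the $ L^2 $ error of the right-endpoint interpolant $ P_\tau $ on a uniform grid should then decay roughly like $ \tau^{0.6} $. A minimal sketch, computing the per-interval integrals exactly:

```python
# Sanity check of the tau^beta decay for v(t) = t^0.1 (exact per-interval integrals).
import numpy as np

def p_tau_error(gamma, J):
    # || v - P_tau v ||_{L2(0,1)} for v(t) = t^gamma and J uniform intervals,
    # with (P_tau v)|_{I_j} = v(t_j) as defined above
    t = np.linspace(0.0, 1.0, J + 1)
    tl, tr = t[:-1], t[1:]
    m0 = tr - tl
    m1 = (tr ** (gamma + 1) - tl ** (gamma + 1)) / (gamma + 1)              # int_{I_j} t^gamma dt
    m2 = (tr ** (2 * gamma + 1) - tl ** (2 * gamma + 1)) / (2 * gamma + 1)  # int_{I_j} t^{2 gamma} dt
    err2 = np.sum(m2 - 2.0 * tr ** gamma * m1 + tr ** (2 * gamma) * m0)
    return np.sqrt(err2)

errors = [p_tau_error(0.1, 2 ** n) for n in range(4, 10)]
rates = np.log2(np.array(errors[:-1]) / np.array(errors[1:]))
print(rates)  # the observed rates approach 0.6 = 0.1 + 1/2, in line with beta < 0.6
```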
\[lem:interp\] Assume that $ -\infty < \beta,\gamma,r,s < \infty $ and $ 0 < \theta < 1 $. If $ v
\in {}_0H^\beta(0,T;\dot H^r(\Omega)) \cap {}_0H^\gamma(0,T;\dot H^s(\Omega)) $, then $$\label{eq:interp-1}
\begin{aligned}
& {\lVert {v} \rVert}_{
{}_0H^{(1-\theta)\beta + \theta \gamma}
(0,T;\dot H^{(1-\theta)r+\theta s}(\Omega))
} \\
\leqslant{} &
C_{\beta,\gamma,\theta}
{\lVert {v} \rVert}_{{}_0H^\beta(0,T;\dot H^r(\Omega))}^{1-\theta}
{\lVert {v} \rVert}_{{}_0H^\gamma(0,T;\dot H^s(\Omega))}^\theta.
\end{aligned}$$ In particular, if $ \beta = 0 $ and $ \gamma = 1 $ then $${\lVert {v} \rVert}_{{}_0H^\theta(0,T;\dot H^{(1-\theta)r + \theta s}(\Omega))}
\leqslant \frac1{\sqrt{2\theta(1-\theta)}}
{\lVert {v} \rVert}_{L^2(0,T;\dot H^r(\Omega))}^{1-\theta}
{\lVert {v} \rVert}_{{}_0H^1(0,T;\dot H^s(\Omega))}^\theta$$ for all $ v \in L^2(0,T;\dot H^r(\Omega)) \cap {}_0H^1(0,T;\dot H^s(\Omega)) $.
\[lem:VV’\] If $ V \in W_{\tau,h} $ and $ 0 \leqslant i < k \leqslant J $, then $$\sum_{j=i}^k {\langle {{{[\![ {V_j} ]\!]}}, V_j^{+}} \rangle}_\Omega \geqslant
\frac12 \big(
{\lVert {V_k^{+}} \rVert}_{L^2(\Omega)}^2 - {\lVert {V_i} \rVert}_{L^2(\Omega)}^2
\big) \geqslant
\sum_{j=i}^k {\langle {V_j, {{[\![ {V_j} ]\!]}}} \rangle}_\Omega.$$
Proof of \[thm:conv\_f\_L2\]
----------------------------
Let us first prove $$\label{eq:inf-theta}
\begin{aligned}
& {\lVert {U-P_\tau P_hu} \rVert}_{L^\infty(0,T;L^2(\Omega))} \\
\lesssim{} &
{\lVert {(I-P_h)\operatorname{D}_{0+}^{-\alpha/2} u} \rVert}_{L^2(0,T;\dot H^{1}(\Omega)\!)} +
{\lVert {(I-P_\tau)P_hu} \rVert}_{L^2(0,T;\dot H^1(\Omega)\!)}.
\end{aligned}$$ For any $ 1 \leqslant j \leqslant J $, by \[eq:numer\_sol,eq:strong-form\] we have $$\sum_{i=1}^{j} {\langle {u',\theta} \rangle}_{\Omega\times I_i} +
{\langle {
\nabla \operatorname{D}_{0+}^{-\alpha} (u-U),\nabla \theta
} \rangle}_{\Omega \times (0,t_j)} =
\sum_{i=0}^{j-1}{\langle {{{[\![ {U_i} ]\!]}},\theta^+_i} \rangle}_\Omega,$$ where $ \theta:=U-P_\tau P_hu $ and we set $ (P_\tau P_h u)_0 = 0 $. By the definitions of $ P_h $ and $ P_\tau $, a routine calculation (see [@Thomee2006 Chapter 12]) then yields $$\begin{aligned}
{}&
\sum_{i=0}^{j-1}{\langle {{{[\![ {\theta_i} ]\!]}},\theta^+_i} \rangle}_\Omega+
\big\langle
\nabla \operatorname{D}_{0+}^{-\alpha} \theta,\nabla \theta
\big\rangle_{\Omega \times (0,t_j)} \\
={} &
{\langle {
\nabla\operatorname{D}_{0+}^{-\alpha}(u-P_\tau P_hu), \nabla\theta
} \rangle}_{\Omega \times (0,t_j)} \\
={} &
{\langle {
\nabla\operatorname{D}_{0+}^{-\alpha}(I-P_h)u, \nabla\theta
} \rangle}_{\Omega \times (0,t_j)} +
{\langle {
\nabla \operatorname{D}_{0+}^{-\alpha}(I-P_\tau)P_hu, \nabla\theta
} \rangle}_{\Omega \times (0,t_j)} \\
={} &
{\langle {
\nabla(I-P_h)\operatorname{D}_{0+}^{-\alpha/2}u,
\nabla \operatorname{D}_{t_j-}^{-\alpha/2}\theta
} \rangle}_{\Omega \times (0,t_j)} +
{\langle {
\nabla (I-P_\tau)P_hu, \operatorname{D}_{t_j-}^{-\alpha} \nabla\theta
} \rangle}_{\Omega \times (0,t_j)},\end{aligned}$$ so that using \[lem:VV’\], \[lem:coer\], the Sobolev inequality and Young’s inequality with $ \epsilon $ gives $$\begin{aligned}
{}& {\lVert {\theta_j} \rVert}_{L^2(\Omega)}+
{\lVert {\theta_1} \rVert}_{L^2(\Omega)}+
{\lVert { \operatorname{D}_{0+}^{-\alpha/2} \theta } \rVert}_{L^{2}(0,t_j;\dot H^1(\Omega)\!)} \\
\lesssim {}&
{\lVert {(I-P_h)\operatorname{D}_{0+}^{-\alpha/2} u} \rVert}_{L^2(0,T;\dot H^{1}(\Omega)\!)} +
{\lVert {(I-P_\tau)P_hu} \rVert}_{L^2(0,T;\dot H^1(\Omega)\!)}.\end{aligned}$$ Since $ 1 \leqslant j \leqslant J $ is arbitrary, this implies \[eq:inf-theta\].
Next, let us prove $$\label{eq:inf-theta-2}
{\lVert {U - P_\tau P_hu} \rVert}_{L^\infty(0,T;L^2(\Omega))}
\lesssim \big( h + \sqrt{\ln(1/h)} \, \tau^{1/2} \big)
{\lVert {f} \rVert}_{ L^2(0,T;\dot H^{\alpha/(\alpha+1)}(\Omega)) }.$$ By the inverse estimate and \[lem:P\_tau\], a straightforward computation gives that, for any $ 0 < \epsilon < 1/(\alpha+1) $, $$\begin{aligned}
& {\lVert {(I-P_\tau)P_hu} \rVert}_{L^2(0,T;\dot H^1(\Omega)\!)} \lesssim
h^{-\epsilon} {\lVert {(I-P_\tau)P_hu} \rVert}_{L^2(0,T;\dot H^{1-\epsilon}(\Omega)\!)} \\
\lesssim{} &
h^{-\epsilon} {\lVert {(I-P_\tau)u} \rVert}_{L^2(0,T;\dot H^{1-\epsilon}(\Omega))} \\
\lesssim{} &
h^{-\epsilon} \tau^{(1+\epsilon+\epsilon\alpha)/2}
\sqrt{ \frac{1-(1+\alpha)\epsilon}\epsilon } \,
{\lVert {u} \rVert}_{
{}_0H^{(1+\epsilon+\epsilon\alpha)/2}
(0,T;\dot H^{1-\epsilon}(\Omega))
},\end{aligned}$$ and hence letting $ \epsilon = (2\ln(1/h)\!)^{-1} $ yields $${\lVert {(I-P_\tau)P_hu} \rVert}_{L^2(0,T;\dot H^1(\Omega)\!)}
\lesssim \sqrt{\ln(1/h)} \, \tau^{1/2}
{\lVert {u} \rVert}_{
{}_0H^{(1+\epsilon+\epsilon\alpha)/2}
(0,T;\dot H^{1-\epsilon}(\Omega))
}.$$ Moreover, by \[lem:regu\] we have $$\begin{aligned}
{\lVert {(I-P_h)\operatorname{D}_{0+}^{-\alpha/2} u} \rVert}_{L^2(0,T;\dot H^1(\Omega)\!)}
& \lesssim h {\lVert {\operatorname{D}_{0+}^{-\alpha/2} u} \rVert}_{L^2(0,T;\dot H^2(\Omega)\!)} \\
& \lesssim
h {\lVert {u} \rVert}_{{}_0H^{-\alpha/2}(0,T;\dot H^2(\Omega)\!)}.\end{aligned}$$ Therefore, by \[thm:regu-pde,lem:interp\], combining \[eq:inf-theta\] and the above two estimates yields \[eq:inf-theta-2\].
Finally, a routine calculation gives $$\begin{aligned}
& {\lVert {u-P_\tau P_hu} \rVert}_{L^\infty(0,T;L^2(\Omega))} \\
\leqslant{} &
{\lVert {(I-P_h)u} \rVert}_{L^\infty(0,T;L^2(\Omega))} +
{\lVert {P_h(I-P_\tau)u} \rVert}_{L^\infty(0,T;L^2(\Omega))} \\
\leqslant{} &
{\lVert {(I-P_h)u} \rVert}_{L^\infty(0,T;L^2(\Omega))} +
{\lVert {(I-P_\tau)u} \rVert}_{L^\infty(0,T;L^2(\Omega))} \\
\lesssim{} &
h {\lVert {u} \rVert}_{C([0,T];\dot H^1(\Omega))} +
\tau^{1/2} {\lVert {u} \rVert}_{H^1(0,T;L^2(\Omega))} \\
\lesssim{} &
\big( h + \tau^{1/2} \big)
{\lVert {f} \rVert}_{L^2(0,T;\dot H^{\alpha/(\alpha+1)}(\Omega))}
\quad\text{(by \cref{thm:regu-pde})},\end{aligned}$$ so that \[eq:conv\_f\_L2\] follows from \[eq:inf-theta-2\] and the triangle inequality
$${\lVert {u-U} \rVert}_{L^\infty(0,T;L^2(\Omega))} \leqslant
{\lVert {U-P_\tau P_hu} \rVert}_{L^\infty(0,T;L^2(\Omega))} +
{\lVert {u-P_\tau P_hu} \rVert}_{L^\infty(0,T;L^2(\Omega))}.$$
This completes the proof of \[thm:conv\_f\_L2\].
From the above proof, it is easy to see that \[thm:conv\_f\_L2\] still holds for the case of variable time steps.
Proof of \[thm:conv\_f\_higher\]
--------------------------------
\[thm:discrete\_regu\] If $ W \in W_{\tau,h} $ satisfies that $ W_0 := v_h \in S_h $ and $$\label{eq:W}
\sum_{j=0}^{J-1} {\langle {{{[\![ {W_j} ]\!]}}, V_j^{+}} \rangle}_\Omega +
{\langle {\nabla\operatorname{D}_{0+}^{-\alpha} W, \nabla V} \rangle}_{\Omega \times (0,T)} = 0
\quad \forall V \in W_{\tau,h},$$ then $$\begin{aligned}
{\lVert {W} \rVert}_{{}_0H^{-\alpha/2}(0,T;\dot H^1(\Omega))} &
\leqslant C_\alpha {\lVert {v_h} \rVert}_{L^2(\Omega)},
\label{eq:disc_regu_1} \\
{\lVert {
Q_\tau\operatorname{D}_{0+}^{-\alpha}(-\Delta_h W)
} \rVert}_{L^1(0,T;L^2(\Omega))} &
\leqslant C_\alpha \ln(T/\tau) {\lVert {v_h} \rVert}_{L^2(\Omega)}.
\label{eq:disc_regu_2}
\end{aligned}$$
Since \[lem:VV’\] implies $$\sum_{j=0}^{J-1} {\langle {{{[\![ {W_j} ]\!]}}, W_j^{+}} \rangle}_\Omega \geqslant
\frac12 \big(
{\lVert {W_J} \rVert}_{L^2(\Omega)}^2 - {\lVert {v_h} \rVert}_{L^2(\Omega)}^2
\big),$$ inserting $ V = W $ into \[eq:W\] yields $$\frac12 {\lVert {W_J} \rVert}_{L^2(\Omega)}^2 +
{\langle {\nabla\operatorname{D}_{0+}^{-\alpha} W, \nabla W} \rangle}_{\Omega \times (0,T)}
\leqslant \frac12 {\lVert {v_h} \rVert}_{L^2(\Omega)}^2.$$ Hence, using \[lem:coer,lem:regu\] proves \[eq:disc\_regu\_1\].
Now let us prove \[eq:disc\_regu\_2\]. Let $ \{\phi_{n,h}: 1 \leqslant n \leqslant
N\} $ be an orthonormal basis of $ S_h $ with respect to the $ L^2(\Omega) $ inner product such that $$-\Delta_h \phi_{n,h} = \lambda_{n,h} \phi_{n,h},$$ where $ \{\lambda_{n,h}: 1 \leqslant n \leqslant N \} $ is the set of all eigenvalues of $ -\Delta_h $. For each $ 1 \leqslant n \leqslant N $, define $
(Y_k^n)_{k=0}^\infty $ as described in the first paragraph of \[ssec:first\_ode\] with $ \xi_0 $ replaced by $ {\langle {v_h,\phi_{n,h}} \rangle}_\Omega $ and $ \lambda $ replaced by $ \lambda_{n,h} $. We also define $ W^n(t) :=
{\langle {W(t), \phi_{n,h}} \rangle}_\Omega $, $ 0 < t < T $, and it is easy to verify that $$W^n = Y_j^n \quad \text{ on } I_j,\quad 1 \leqslant j \leqslant J.$$ Hence, \[thm:Y-jump\] implies $${\lVert {{{[\![ {W_j} ]\!]}}} \rVert}_{L^2(\Omega)} \leqslant
C_\alpha j^{-1} {\lVert {v_h} \rVert}_{L^2(\Omega)},\quad 1 \leqslant j < J,$$ and then it follows that $$\label{eq:disc_regu_3}
\sum_{j=1}^{J-1} {\lVert {{{[\![ {W_j} ]\!]}}} \rVert}_{L^2(\Omega)}
\leqslant C_\alpha {\lVert {v_h} \rVert}_{L^2(\Omega)}
\sum_{j=1}^{J-1} j^{-1}
\leqslant C_\alpha \ln(T/\tau) {\lVert {v_h} \rVert}_{L^2(\Omega)}.$$ In addition, inserting $ V = W \chi_{(0,t_1)} $ into \[eq:W\] yields, by \[lem:coer\], that $${\lVert {W_1} \rVert}_{L^2(\Omega)} \leqslant {\lVert {W_0} \rVert}_{L^2(\Omega)},$$ which implies $$\label{eq:disc_regu_4}
{\lVert {{{[\![ {W_0} ]\!]}}} \rVert}_{L^2(\Omega)} \leqslant 2 {\lVert {W_0} \rVert}_{L^2(\Omega)} =
2{\lVert {v_h} \rVert}_{L^2(\Omega)}.$$ Consequently, since \[eq:W\] implies $$\tau Q_\tau \operatorname{D}_{0+}^{-\alpha}(-\Delta_h W) =
{{[\![ {W_{j-1}} ]\!]}} \quad \text{ on } I_j,
\quad 1 \leqslant j \leqslant J,$$ combining \[eq:disc\_regu\_3,eq:disc\_regu\_4\] proves \[eq:disc\_regu\_2\] and hence this lemma.
\[lem:731\] If $ f \in {}_0H^{\alpha/2}(0,T;L^2(\Omega)) $, then $$\label{eq:731}
\begin{aligned}
{\lVert {(U-P_\tau P_hu)_j} \rVert}_{L^2(\Omega)}
& \lesssim
\ln(T/\tau) {\lVert {R_hu - P_\tau P_hu} \rVert}_{L^\infty(0,T;L^2(\Omega))} \\
& \qquad{} +
\tau^{\alpha/2} {\lVert {(I-Q_\tau)u} \rVert}_{L^2(0,T;\dot H^1(\Omega))}
\end{aligned}$$ for each $ 1 \leqslant j \leqslant J $.
Let $ \theta = U - P_\tau P_h u $ and set $ (P_\tau P_hu)_0 = 0 $. Define $ W \in W_{\tau,h} $ by $ W_J^{+} = \theta_J $ and $$-\sum_{j=1}^J {\langle {V_j, {{[\![ {W_j} ]\!]}}} \rangle}_\Omega +
{\langle {\nabla V, \nabla\operatorname{D}_{T-}^{-\alpha} W} \rangle}_{\Omega \times (0,T)} = 0
\quad \forall V \in W_{\tau,h}.$$ A simple calculation then yields
$$\begin{aligned}
& {\lVert {\theta_J} \rVert}_{L^2(\Omega)}^2 =
{\langle {\theta_J, W_J^{+}} \rangle}_\Omega =
\sum_{j=0}^{J-1} {\langle {{{[\![ {\theta_j} ]\!]}}, W_j^{+}} \rangle}_\Omega +
\sum_{j=1}^J {\langle {\theta_j, {{[\![ {W_j} ]\!]}}} \rangle}_\Omega \\
={} &
\sum_{j=0}^{J-1} {\langle {{{[\![ {\theta_j} ]\!]}}, W_j^{+}} \rangle}_\Omega +
{\langle {\nabla\theta, \nabla\operatorname{D}_{T-}^{-\alpha} W} \rangle}_{\Omega \times (0,T)} \\
={} &
\sum_{j=0}^{J-1} {\langle {{{[\![ {\theta_j} ]\!]}}, W_j^{+}} \rangle}_\Omega +
{\langle {\nabla\operatorname{D}_{0+}^{-\alpha}\theta, \nabla W} \rangle}_{\Omega \times (0,T)},
\end{aligned}$$
and proceeding as in the proof of \[thm:conv\_f\_L2\] yields
$$\begin{aligned}
\sum_{j=0}^{J-1} {\langle {{{[\![ {\theta_j} ]\!]}}, W_j^{+}} \rangle}_\Omega +
{\langle {\nabla\operatorname{D}_{0+}^{-\alpha} \theta, \nabla W} \rangle}_{\Omega \times (0,T)} =
{\langle {\nabla\operatorname{D}_{0+}^{-\alpha} (u-P_\tau P_hu), \nabla W} \rangle}_{\Omega \times (0,T)}.
\end{aligned}$$
Consequently, $$\begin{aligned}
{\lVert {\theta_J} \rVert}_{L^2(\Omega)}^2 &=
{\langle {
\nabla(u-P_\tau P_hu), \nabla \operatorname{D}_{T-}^{-\alpha} W
} \rangle}_{\Omega \times (0,T)} \notag \\
&= {\langle {
\nabla(R_hu - P_\tau P_hu),\ \nabla\operatorname{D}_{T-}^{-\alpha} W
} \rangle}_{\Omega \times (0,T)} \notag \\
& = {\langle {
R_hu - P_\tau P_h u,\ \operatorname{D}_{T-}^{-\alpha}(-\Delta_h W)
} \rangle}_{\Omega \times (0,T)} \notag \\
&= \mathbb I_1 + \mathbb I_2, \label{eq:theta_J}
\end{aligned}$$ where $$\begin{aligned}
\mathbb I_1 & := {\langle {
R_hu - P_\tau P_hu,\ Q_\tau\operatorname{D}_{T-}^{-\alpha}(-\Delta_h W)
} \rangle}_{\Omega \times (0,T)}, \\
\mathbb I_2 &:=
{\langle {
R_hu-P_\tau P_hu,\ (I-Q_\tau)\operatorname{D}_{T-}^{-\alpha}(-\Delta_h W)
} \rangle}_{\Omega \times (0,T)}.
\end{aligned}$$
Next, it is evident that $$\label{eq:shit-21}
\mathbb I_1 \leqslant
{\lVert {R_hu - P_\tau P_hu} \rVert}_{L^\infty(0,T;L^2(\Omega))}
{\lVert {Q_\tau\operatorname{D}_{T-}^{-\alpha}(-\Delta_h W)} \rVert}_{L^1(0,T;L^2(\Omega))}.$$ By the definitions of $ Q_\tau $ and $ R_h $,
$$\begin{aligned}
\mathbb I_2 &=
{\langle {
R_hu, (I-Q_\tau) \operatorname{D}_{T-}^{-\alpha}(-\Delta_h W)
} \rangle}_{\Omega \times (0,T)} \\
&=
{\langle {
\nabla R_hu, \nabla(I-Q_\tau)\operatorname{D}_{T-}^{-\alpha}W
} \rangle}_{\Omega \times (0,T)} \\
&=
{\langle {
\nabla u, \nabla(I-Q_\tau)\operatorname{D}_{T-}^{-\alpha}W
} \rangle}_{\Omega \times (0,T)} \\
&=
{\langle {
\nabla(I-Q_\tau)u, \nabla(I-Q_\tau)\operatorname{D}_{T-}^{-\alpha}W
} \rangle}_{\Omega \times (0,T)} \\
&\leqslant
{\lVert {(I-Q_\tau)u} \rVert}_{L^2(0,T;\dot H^1(\Omega))}
{\lVert {(I-Q_\tau)\operatorname{D}_{T-}^{-\alpha}W} \rVert}_{L^2(0,T;\dot H^1(\Omega))}.
\end{aligned}$$
In addition, $$\begin{aligned}
& {\lVert {(I-Q_\tau)\operatorname{D}_{T-}^{-\alpha}W} \rVert}_{L^2(0,T;\dot H^1(\Omega))} \\
\lesssim{} & \tau^{\alpha/2}
{\lVert {\operatorname{D}_{T-}^{-\alpha} W} \rVert}_{{}^0H^{\alpha/2}(0,T;\dot H^1(\Omega))}
\quad\text{(by \cref{eq:Q_tau-sys})} \\
\lesssim{} & \tau^{\alpha/2}
{\lVert {W} \rVert}_{{}^0H^{-\alpha/2}(0,T;\dot H^1(\Omega))}
\quad\text{(by \cref{lem:regu}).}
\end{aligned}$$ Consequently, $$\label{eq:shit-22}
\mathbb I_2 \lesssim
\tau^{\alpha/2} {\lVert {(I-Q_\tau)u} \rVert}_{L^2(0,T;\dot H^1(\Omega))}
{\lVert {W} \rVert}_{{}^0H^{-\alpha/2}(0,T;\dot H^1(\Omega))}.$$
Finally, by the symmetric version of \[thm:discrete\_regu\] we have $$\begin{aligned}
{\lVert {W} \rVert}_{{}^0H^{-\alpha/2}(0,T;\dot H^1(\Omega))}
\leqslant C_\alpha {\lVert {\theta_J} \rVert}_{L^2(\Omega)}, \\
{\lVert {Q_\tau\operatorname{D}_{T-}^{-\alpha}(-\Delta_h W)} \rVert}_{L^1(0,T;L^2(\Omega))}
\leqslant C_\alpha\ln(T/\tau) {\lVert {\theta_J} \rVert}_{L^2(\Omega)},
\end{aligned}$$ and hence combining \[eq:theta\_J,eq:shit-21,eq:shit-22\] yields that \[eq:731\] holds for $ j = J $. Since the case $ 1 \leqslant j < J $ can be proved analogously, this completes the proof.
Finally, we conclude the proof of \[thm:conv\_f\_higher\] as follows. By \[lem:731\], a straightforward computation yields
$$\begin{aligned}
& {\lVert {u-U} \rVert}_{L^\infty(0,T;L^2(\Omega))} \notag \\
\lesssim{} &
\tau^{\alpha/2} {\lVert {(I-Q_\tau)u} \rVert}_{L^2(0,T;\dot H^1(\Omega))} +
\ln(T/\tau) \Big(
{\lVert {(I-R_h)u} \rVert}_{L^\infty(0,T;L^2(\Omega))} \notag \\
& \quad{} +
{\lVert {(I-P_h)u} \rVert}_{L^\infty(0,T;L^2(\Omega))} +
{\lVert {(I-P_\tau)u} \rVert}_{L^\infty(0,T;L^2(\Omega))}
\Big). \label{eq:shit-1}\end{aligned}$$
By \[thm:regu-pde\] we have $$\begin{aligned}
& {\lVert {(I-R_h)u} \rVert}_{L^\infty(0,T;L^2(\Omega))} +
{\lVert {(I-P_h)u} \rVert}_{L^\infty(0,T;L^2(\Omega))} \\
\lesssim{} &
h^{2(1-\epsilon)} {\lVert {u} \rVert}_{C([0,T];\dot H^{2(1-\epsilon)}(\Omega))} \\
\lesssim{} &
\frac{h^{2(1-\epsilon)}}{\sqrt\epsilon}
{\lVert {f} \rVert}_{{}_0H^{\alpha+1/2}(0,T;L^2(\Omega))}\end{aligned}$$ for all $ 0 < \epsilon < 1/2 $, so that, by the assumption $ h < e^{-2(1+\alpha)} $ (cf. the first paragraph of \[sec:main\]), letting $ \epsilon := (\ln(1/h))^{-1} $ yields $$\label{eq:shit-2}
\begin{aligned}
& {\lVert {(I-R_h)u} \rVert}_{L^\infty(0,T;L^2(\Omega))} +
{\lVert {(I-P_h)u} \rVert}_{L^\infty(0,T;L^2(\Omega))} \\
\lesssim{} &
\sqrt{\ln(1/h)}\, h^2
{\lVert {f} \rVert}_{ {}_0H^{\alpha+1/2}(0,T;L^2(\Omega)) }.
\end{aligned}$$ In addition, by \[thm:regu-pde,lem:interp\], it is standard that $$\begin{aligned}
& {\lVert {(I-Q_\tau)u} \rVert}_{L^2(0,T;\dot H^1(\Omega))} +
{\lVert {(I-P_\tau)u} \rVert}_{L^\infty(0,T;L^2(\Omega))} \notag \\
\lesssim{} &
\tau {\lVert {f} \rVert}_{{}_0H^{\alpha+1/2}(0,T;L^2(\Omega))}.
\label{eq:shit-3}\end{aligned}$$ Combining \[eq:shit-1,eq:shit-2,eq:shit-3\] proves \[eq:conv\_f\_higher\] and thus concludes the proof of \[thm:conv\_f\_higher\].
Numerical experiments {#sec:numer}
=====================
This section performs four numerical experiments in one-dimensional space to verify \[thm:conv-u0,thm:f-const,thm:conv\_f\_L2,thm:conv\_f\_higher\], respectively. Throughout this section, $ \Omega = (0,1) $, $ T = 1 $, the spatial and temporal grids are both uniform, and $ U^{m,n} $ is the numerical solution with $ h = 2^{-m} $ and $ \tau = 2^{-n} $. Additionally, $ {\lVert {\cdot} \rVert}_{L^\infty(0,T;L^2(\Omega))} $ is abbreviated to $ {\lVert {\cdot} \rVert} $ for convenience, and, for any $ \beta > 0 $, $${\lVert {v} \rVert}_{\beta,n} := \max_{1 \leqslant j \leqslant 2^n}
(j/2^n)^\beta {\lVert {v((j/2^n)-)} \rVert}_{L^2(\Omega)},$$ where $ v((j/2^n)-) $ means the left limit of $ v $ at $ j/2^n $.
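For reference, the “Order” columns in the tables below report the observed convergence order between consecutive rows, $ \log_2(E_n/E_{n+1}) $, since each refinement halves the grid. A tiny helper (not the code used for the experiments) reads:

```python
# Not the code used for the experiments; just the formula behind the "Order" columns.
import math

def empirical_orders(errors):
    """Observed orders log2(E_n / E_{n+1}) for a sequence of errors on halved grids."""
    return [math.log2(errors[k] / errors[k + 1]) for k in range(len(errors) - 1)]

# e.g. the first error column of Experiment 1 below:
print(empirical_orders([9.07e-3, 4.58e-3, 2.30e-3, 1.15e-3]))
# -> approximately [0.99, 0.99, 1.00]; small deviations from the tabulated orders
#    come from rounding of the printed errors
```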
[**Experiment 1.**]{} This experiment verifies \[thm:conv-u0\] in the setting $$u_0(x) = x^{-0.49}, \quad x \in \Omega,$$ which is slightly smoother than $ L^2(\Omega) $. \[tab:ex3-time\] validates the theoretical prediction that the convergence behavior of $ U $ is close to $ \mathcal
O(\tau) $ when $ h $ is fixed and sufficiently small. \[tab:ex3-space\] confirms the theoretical prediction that the convergence behavior of $ U $ is close to $
\mathcal O(h^2) $ when $ \tau $ is fixed and sufficiently small.
-------------------------------------- -------------------------------------- ------- -------------------------------------- ------- -------------------------------------- -------
$n$ $ \|U^{11,n}\!-\!U^{11,16}\|_{1,n} $ Order $ \|U^{11,n}\!-\!U^{11,16}\|_{1,n} $ Order $ \|U^{11,n}\!-\!U^{11,16}\|_{1,n} $ Order
$6$ 9.07e-3 – 1.44e-2 – 7.05e-2 –
$7$ 4.58e-3 0.98 7.27e-3 0.98 3.93e-2 0.84
$8$ 2.30e-3 0.99 3.66e-3 0.99 2.10e-2 0.91
$9$ 1.15e-3 1.00 1.83e-3 1.00 1.09e-2 0.95
-------------------------------------- -------------------------------------- ------- -------------------------------------- ------- -------------------------------------- -------
: Convergence behavior with respect to $ \tau $.[]{data-label="tab:ex3-time"}
-------------------------------------- ----------------------------------------- ------- ----------------------------------------- ------- ----------------------------------------- -------
$m$ $ \|U^{m,16}\!-\!U^{11,16}\|_{1.2,16} $ Order $ \|U^{m,16}\!-\!U^{11,16}\|_{1.4,16} $ Order $ \|U^{m,16}\!-\!U^{11,16}\|_{1.8,16} $ Order
$3$ 1.43e-3 – 4.51e-3 – 7.06e-2 –
$4$ 3.62e-4 1.98 1.13e-3 1.99 2.37e-2 1.57
$5$ 9.13e-5 1.99 2.83e-4 2.00 6.76e-3 1.81
$6$ 2.30e-5 1.99 7.09e-5 2.00 1.74e-3 1.96
-------------------------------------- ----------------------------------------- ------- ----------------------------------------- ------- ----------------------------------------- -------
: Convergence behavior with respect to $ h $.[]{data-label="tab:ex3-space"}
[**Experiment 2.**]{} This experiment verifies \[thm:f-const\] in the setting $$v(x) = x^{-0.49}, \quad x \in \Omega.$$ \[tab:ex4-time\] confirms the theoretical prediction that the convergence behavior of $ U $ is close to $ \mathcal O(\tau) $ when $ h $ is fixed and sufficiently small. \[tab:ex4-space\] confirms the theoretical prediction that the accuracy of $ U(T-)
$ (the left limit of $ U $ at $ T $) in the norm $ {\lVert {\cdot} \rVert}_{L^2(\Omega)} $ is close to $ \mathcal O(h^2) $ when $ \tau $ is fixed and sufficiently small.
-------------------------------------- -------------------------------- ------- -------------------------------- ------- -------------------------------- -------
$n$ $ \|U^{11,n}\!-\!U^{11,16}\| $ Order $ \|U^{11,n}\!-\!U^{11,16}\| $ Order $ \|U^{11,n}\!-\!U^{11,16}\| $ Order
$6$ 4.53e-3 – 5.45e-3 – 1.01e-2 –
$7$ 2.31e-3 0.97 2.77e-3 0.97 5.36e-3 0.91
$8$ 1.17e-3 0.99 1.40e-3 0.99 2.78e-3 0.95
$9$ 5.85e-4 1.00 7.00e-4 1.00 1.41e-3 0.97
-------------------------------------- -------------------------------- ------- -------------------------------- ------- -------------------------------- -------
: Convergence behavior with respect to $ \tau $.[]{data-label="tab:ex4-time"}
--------------------------- ------------------------------------------------ ------- ---------------------------------------------------- -------
$m$ $ \|(U^{m,16}-U^{11,16})(T-)\|_{L^2(\Omega)} $ Order $ \|(U^{m,16}\!-\!U^{11,16})(T-)\|_{L^2(\Omega)} $ Order
$3$ 2.71e-3 – 7.76e-3 –
$4$ 7.21e-4 1.91 2.04e-3 1.93
$5$ 1.90e-4 1.92 5.09e-4 2.00
$6$ 4.97e-5 1.93 1.27e-4 2.00
--------------------------- ------------------------------------------------ ------- ---------------------------------------------------- -------
: Convergence behavior with respect to $ h $.[]{data-label="tab:ex4-space"}
[**Experiment 3.**]{} This experiment verifies \[thm:conv\_f\_L2\] in the setting $$f(x,t) = x^{\alpha/(\alpha+1)-0.49} t^{-0.49},
\quad (x,t) \in \Omega \times (0,T),$$ which has slightly higher regularity than $ L^2(0,T;\dot
H^{\alpha/(\alpha+1)}(\Omega)) $. \[thm:conv\_f\_L2\] predicts that the convergence behavior of $ U $ is close to $ \mathcal O(h) $ when $ \tau $ is fixed and sufficiently small, and this is in good agreement with the numerical results in \[tab:ex1-space\]. Moreover, \[thm:conv\_f\_L2\] predicts that the convergence behavior of $ U $ is close to $ \mathcal O(\tau^{1/2}) $ when $ h $ is fixed and sufficiently small, which agrees well with the numerical results in \[tab:ex1-time\].
-------------------------------------- -------------------------------- ------- -------------------------------- ------- -------------------------------- -------
$m$ $ \|U^{m,16}\!-\!U^{11,16}\| $ Order $ \|U^{m,16}\!-\!U^{11,16}\| $ Order $ \|U^{m,16}\!-\!U^{11,16}\| $ Order
$3$ 3.53e-2 – 3.84e-2 – 4.95e-2 –
$4$ 1.70e-2 1.05 1.85e-2 1.06 2.41e-2 1.04
$5$ 8.22e-3 1.05 8.89e-3 1.05 1.17e-2 1.04
$6$ 3.95e-3 1.06 4.29e-3 1.05 5.69e-3 1.04
-------------------------------------- -------------------------------- ------- -------------------------------- ------- -------------------------------- -------
: Convergence behavior with respect to $ h $.[]{data-label="tab:ex1-space"}
-------------------------------------- -------------------------------- ------- -------------------------------- ------- -------------------------------- -------
$n$ $ \|U^{11,n}\!-\!U^{11,16}\| $ Order $ \|U^{11,n}\!-\!U^{11,16}\| $ Order $ \|U^{11,n}\!-\!U^{11,16}\| $ Order
$6$ 2.75e-1 – 2.58e-1 – 2.32e-1 –
$7$ 2.04e-1 0.43 1.86e-1 0.47 1.63e-1 0.51
$8$ 1.48e-1 0.47 1.32e-1 0.50 1.13e-1 0.53
$9$ 1.05e-1 0.50 9.21e-2 0.52 7.78e-2 0.54
-------------------------------------- -------------------------------- ------- -------------------------------- ------- -------------------------------- -------
: Convergence behavior with respect to $ \tau $.[]{data-label="tab:ex1-time"}
[**Experiment 4.**]{} This experiment verifies \[thm:conv\_f\_higher\] in the setting $$f(x,t) = x^{-0.49} t^{\alpha+0.01},
\quad (x,t) \in \Omega \times (0,T),$$ which is slightly smoother than $ {}_0H^{\alpha+1/2}(0,T;L^2(\Omega)) $. \[tab:ex2-space\] confirms the theoretical prediction that the convergence behavior of $ U $ is close to $ \mathcal O(h^2) $ when $ \tau $ is fixed and sufficiently small, and \[tab:ex2-time\] confirms the theoretical prediction that the convergence behavior of $ U $ is close to $ \mathcal O(\tau) $ when $ h $ is fixed and sufficiently small.
-------------------------------------- -------------------------------- ------- -------------------------------- ------- -------------------------------- -------
$m$ $ \|U^{m,16}\!-\!U^{11,16}\| $ Order $ \|U^{m,16}\!-\!U^{11,16}\| $ Order $ \|U^{m,16}\!-\!U^{11,16}\| $ Order
$3$ 2.90e-3 – 3.14e-3 – 4.46e-3 –
$4$ 7.70e-4 1.91 8.23e-4 1.93 1.15e-3 1.95
$5$ 2.03e-4 1.92 2.15e-4 1.94 2.97e-4 1.96
$6$ 5.36e-5 1.92 5.59e-5 1.94 7.75e-5 1.94
-------------------------------------- -------------------------------- ------- -------------------------------- ------- -------------------------------- -------
: Convergence behavior with respect to $ h $.[]{data-label="tab:ex2-space"}
-------------------------------------- -------------------------------- ------- -------------------------------- ------- -------------------------------- -------
$n$ $ \|U^{11,n}\!-\!U^{11,16}\| $ Order $ \|U^{11,n}\!-\!U^{11,16}\| $ Order $ \|U^{11,n}\!-\!U^{11,16}\| $ Order
$6$ 8.98e-3 – 6.15e-3 – 5.24e-3 –
$7$ 4.54e-3 0.98 3.10e-3 0.99 2.64e-3 0.99
$8$ 2.28e-3 0.99 1.56e-3 0.99 1.33e-3 1.00
$9$ 1.14e-3 1.00 7.78e-4 1.00 6.62e-4 1.00
-------------------------------------- -------------------------------- ------- -------------------------------- ------- -------------------------------- -------
: Convergence behavior with respect to $ \tau $.[]{data-label="tab:ex2-time"}
Conclusion {#sec:conclusion}
==========
A time-stepping discontinuous Galerkin method is analyzed in this paper. A nearly optimal error estimate with respect to the regularity of the solution is derived for a nonsmooth source term, a nearly optimal error estimate is derived when the source term satisfies some regularity assumptions, and an error estimate for nonsmooth initial value is derived by the Laplace transform technique. In addition, the effect of the nonvanishing $ f(0) $ on the accuracy of the numerical solution is also investigated. Finally, numerical results are provided to verify the theoretical results.
[10]{} E. Cuesta, C. Lubich, and C. Palencia. Convolution quadrature time discretization of fractional diffusion-wave equations. , 65(213):1–17, 1996.
B. Jin, R. Lazarov, and Z. Zhou. An analysis of the L1 scheme for the subdiffusion equation with nonsmooth data. , 36(1):197–221, 2016.
J. Lions and E. Magenes. . 1972.
C. Lubich. Discretized fractional calculus. , 17(3):704–719, 1986.
C. Lubich, I. Sloan, and V. Thomée. Nonsmooth data error estimates for approximations of an evolution equation with a positive-type memory term. , 65(213):1–17, 1996.
H. Luo, B. Li, and X. Xie. Convergence analysis of a Petrov-Galerkin method for fractional wave problems with nonsmooth data. , arXiv:1901.02799, 2018.
W. McLean and V. Thomée. Maximum-norm error analysis of a numerical solution via Laplace transformation and quadrature of a fractional-order evolution equation. , 30(1):208-230, 2010.
W. McLean and V. Thomée. Numerical solution via Laplace transforms of a fractional order evolution equation. , 22(1):57-94, 2010.
W. McLean, V. Thomée, and L. B. Wahlbin. Numerical solution of an evolution equation with a positive type memory term. , 35(1):23-70, 1993.
W. McLean, V. Thomée, and L.B. Wahlbin. Discretization with variable time steps of an evolution equation with a positive-type memory term. , 69(1):49 – 69, 1996.
W. McLean and K. Mustapha. A second-order accurate numerical method for a fractional wave equation. , 105(3):481–510, Jan 2007.
W. McLean and K. Mustapha. Time-stepping error bounds for fractional diffusion problems with non-smooth initial data. , 293(C):201–217, 2015.
K. Mustapha and D. Schötzau. Well-posedness of hp-version discontinuous Galerkin methods for fractional diffusion wave equations. , 34(4):1426–1446, 2014.
K. Mustapha and W. McLean. Discontinuous Galerkin method for an evolution equation with a memory term of positive type. , 78(268):1975–1995, 2009.
I. Podlubny. . 1998.
S. Samko, A. Kilbas, and O. Marichev. . 1993.
L. Tartar. . 2007.
V. Thomée. . 2006.
D. Wood. The computation of polylogarithms, technical report 15-29. 1992.
[^1]: Email: [email protected]
[^2]: Corresponding author. Email: [email protected]
[^3]: Email: [email protected]
|
---
abstract: 'Mixed integer predictive control deals with optimizing integer and real control variables over a receding horizon. The mixed integer nature of controls might be a cause of intractability for instances of larger dimensions. To tackle this issue, we propose a decomposition method which turns the original $n$-dimensional problem into $n$ independent scalar problems of lot sizing form. Each scalar problem is then reformulated as a shortest path one and solved through linear programming over a receding horizon. This last reformulation step mirrors a standard procedure in mixed integer programming. The approximation introduced by the decomposition can be lowered if we operate in accordance with the predictive control technique: i) optimize controls over the horizon, ii) apply the first control, iii) provide measurement updates of other states and repeat the procedure.'
author:
- 'Dario Bauso[^1]'
title: Mixed integer predictive control and shortest path reformulation
---
Introduction
============
Mixed integer predictive control arises when optimizing integer and real control variables in a receding horizon context [@AVH2010]. For this reason, many authors see it as a specific field in the broader area of optimal hybrid control [@BBM98]. Optimal integer control problems have been receiving growing attention and are often categorized under different names; see, for instance, the literature on finite alphabet control [@GQ03; @TMD06]. Integer control requires more than standard convex optimization techniques, as new properties come into play. As an example, consider *multimodularity*, presented as the counterpart of convexity in discrete action spaces [@DS00]. When talking about mixed integer variables, one cannot fail to mention the vast literature on mixed integer programming [@NW88]. It is exactly in this context that we have found inspiration, as clarified in more detail next.
In this paper, we follow the line of [@PW93], which surveys solution methods for mixed integer lot sizing models. Indeed, decomposing an $n$-dimensional dynamic system into $n$ independent lot sizing systems is essentially what this paper is centered around. The approximation introduced by the decomposition can be reduced if we operate in accordance with the predictive control technique: i) optimize controls for each independent system over a prediction horizon, ii) apply the first control to each independent system, iii) provide measurement updates of the other states and repeat the procedure. The main contribution of this work is to reformulate the mixed integer problem of point i) as a shortest path problem and solve the latter through linear programming. This approach mirrors the method surveyed in [@PW93], with the difference that here the shortest path problems run iteratively forward in time over a receding horizon. Reframing the method in a receding horizon context is an element of novelty and presents some additional issues which are discussed and overcome throughout the paper.
This paper differs from [@AVH2010] in that we focus on a smaller class of problems that can be solved exactly and do not require the advanced relaxation methods which, in turn, are a main topic in [@AVH2010]. To bring our discussion back to hybrid control, the lot-sizing-like model used here has much to do with the inventory example briefly mentioned in [@BBM98]. There, the authors simply include the example in the large list of hybrid optimal control problems but do not address the issue of how to fit general methods to this specific problem. In contrast, this work cannot emphasize enough the computational benefits deriving from the “nice structure” of the lot sizing constraint matrix. Binary variables, used to model impulses, were matched with linear programming in a previous work of the same author [@B09]. There, the linear reformulation is a straightforward derivation of the *(inverse) dwell time* conditions that first appeared in [@HLT05]. Analogies with [@B09] include, for instance, the use of total unimodularity to prove the exactness of the linear programming reformulation. Differences lie in the procedure upon which the linear program is built. The shortest path model is an additional element which distinguishes the present approach from [@B09].
This paper is organized as follows. We state the problem in Section \[sec:problem statement\]. We then present the decomposition method in Section \[sec:robust decomposition\]. In Section \[sec:shortest path\], we introduce the shortest path reformulation and the linear program. We dedicate the last Section \[sec:numerical example\] to supporting our theoretical analysis with some numerical results.
Mixed integer predictive control {#sec:problem statement}
================================
In mixed integer control we usually have continuous state $x(k) \in \mathbb R^n$, continuous controls $u(k) \in \mathbb R^n$ and disturbances $w(k)\in \mathbb R^n$, discrete controls $y(k) \in \{0,1\}^n$ (see e.g., [@AVH2010]). Evolution of the state over a finite horizon of length $N$ is described by a linear discrete time dynamics in the general form (\[dynamics\]), where $A$ and $E$ are matrices of compatible dimensions: $$\begin{aligned}
\label{dynamics} x(k+1)=Ax(k) + E w(k) + u(k) \geq 0, \quad x(0)=x(N)=0.
\end{aligned}$$ The above dynamics is characterized by one discrete and one continuous control variable per state, reflecting the idea that we may wish to control each state component independently. Also, starting from an initial state at zero, we wish to drive the final state to zero, which is a typical requirement when controlling a system over a finite horizon. For this purpose, we have added equality constraints on the final state. Furthermore, we force the states to remain confined within a desired region, here the positive orthant, which may describe a safety region in engineering applications or the desire to prevent shortages in inventory applications.
Continuous and discrete controls are linked together by general *capacity constraints* (\[capconst\]), where the parameter $C$ is an upper bound on control: $$\begin{aligned}
\label{capconst} 0 \leq u(k) \leq C y(k), \quad y(k)\in \{0,1\}^n.
\end{aligned}$$ For clarity reasons, $y(k)$ is the decision of controlling or not the system, and $u(k)$ is the control action. So if we decide not to control the system then the control action is null, otherwise this last is any value between zero and its upper bound $C$.
The following assumption helps us to describe the common situation where the disturbance seeks to push the state out of the desired region.
$$\label{ue}E w(k) < 0.$$
At this point, the nonnegative nature of the controls $u(k)$ should become clearer. Control actions are used to push the state away from the boundary into the positive orthant, thus counterbalancing the destabilizing effects of disturbances over a certain period to come. However, controlling the system has a cost, and “over acting” on it is penalized by introducing a cost/objective function as explained next.
The objective function to minimize with respect to $y(k)$ and $u(k)$ is a linear one including proportional, holding and fixed cost terms expressed by parameters $p^k$, $h^k$, and $f^k$ respectively: $$\begin{aligned}
\label{obj} \sum_{k=0}^{N-1} \left( p^k u(k) + h^k x(k) + f^k y(k)\right).
\end{aligned}$$
Conditions (\[dynamics\])-(\[obj\]) introduced so far concisely describe the problem of interest. In the next section, we recall a standard method to convert problem (\[dynamics\])-(\[obj\]) into a mixed integer linear program returning the exact solution in terms of optimal control actions $u(k)$ and $y(k)$.
For the sake of simplicity, disturbances $w(k)$ are deterministic and known a priori. The approach presented below is still valid if we drop this assumption and consider unknown disturbances. We should only carefully restate problem (\[dynamics\])-(\[obj\]) in a receding horizon form, with iterative measurement updates and control optimization forward in time over the horizon.
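Schematically, the receding horizon procedure just described takes the usual form below; this is a purely illustrative sketch, where `optimize_over_horizon`, `apply_control` and `measure_state` are hypothetical placeholders for the problem-specific steps developed in the rest of the paper.

```python
# Schematic receding horizon loop; `optimize_over_horizon`, `apply_control` and
# `measure_state` are hypothetical placeholders, not functions defined in the paper.
def receding_horizon(x0, N, optimize_over_horizon, apply_control, measure_state):
    x, applied = x0, []
    for tau in range(N):
        u_plan, y_plan = optimize_over_horizon(x, tau, N)  # i) optimize over [tau, N]
        apply_control(u_plan[0], y_plan[0])                # ii) apply the first control
        applied.append((u_plan[0], y_plan[0]))
        x = measure_state()                                # iii) measurement update, then repeat
    return applied
```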
Mixed integer linear program and exact solution.
------------------------------------------------
The mixed integer nature of the above program makes it intractable for an increasing number of variables and horizon length. So, the topic presented below is motivated mainly by comparison purposes and applies only to problems of relatively small dimensions.
Before introducing the mixed integer linear program we need to define the following notation. Let us start by collecting states, continuous and discrete controls, proportional, holding and fixed costs all in appropriate vectors as shown below: $$\begin{array}{lll}x=[x(0)^T\ldots x(N)^T]^T, & u=[u(0)^T\ldots u(N-1)^T]^T, & y=[y(0)^T\ldots y(N-1)^T]^T, \\
\\p=[(p^0)^T\ldots (p^{N-1})^T]^T, & h=[(h^0)^T\ldots (h^{N-1})^T]^T, & f=[(f^0)^T\ldots (f^{N-1})^T]^T.\end{array}$$ Furthermore, to put dynamics (\[dynamics\]) into “constraints” form, let us introduce matrices $\mathbf A$, $\mathbf B$ and vector $\mathbf b$ defined as $$\mathbf A = \left[\begin{array}{cccccc} -I & 0 & 0 & \hdots & 0 & 0 \\
A & -I & 0 & \ldots & 0 & 0 \\
0 & A & -I & \ldots & 0 & 0 \\
0 & 0 & A & \hdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \ldots & A & -I \\
0 & 0 & 0 & \hdots & 0 & -I \\\end{array}\right]; \; \mathbf B = \left[\begin{array}{cccccc} 0 & 0 & \hdots & 0 \\
B & 0 & \ldots & 0 \\
0 & B & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & B \\
0 & 0 & \hdots & 0 \\\end{array}\right];\; \mathbf b=\left[- \xi_0^T \, -\left(E w(0)\right)^T \, \ldots \, -\left(E w(N-1)\right)^T \,-\xi_f^T\right]^T.$$ Notice that once we take the value zero for $\xi_0$ and $\xi_f$, the first and last rows of the aforementioned matrices restate the constraints on the initial and final state of (\[dynamics\]).
Finally, we are in the condition to establish that problem (\[dynamics\])-(\[obj\]) can be solved exactly through the following mixed integer linear program:$$\begin{aligned}
(MIPC) \quad & \min_{u,y} \quad J(u,y)=p u + h x + f y \label{MIPC1}\\
& \mathbf A x + \mathbf B u = \mathbf b\label{eqc}\\ & 0 \leq u \leq C y, \quad y \in \{0,1\}^{nN}.\label{MIPC3}
\end{aligned}$$
The mixed integer linear program (\[MIPC1\])-(\[MIPC3\]) is the most natural mathematical programming representation of the problem of interest (\[dynamics\])-(\[obj\]). For this reason, throughout this paper we will almost always refer to (\[MIPC1\])-(\[MIPC3\]) when we wish to bring the discussion back to the source problem (\[dynamics\])-(\[obj\]) and its exact solution.
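For concreteness, a possible numpy assembly of $\mathbf A$, $\mathbf B$ and $\mathbf b$ is sketched below. This is only an illustration (not the implementation used later): the input blocks $B$ are taken equal to the identity, consistently with the fact that $u(k)$ enters (\[dynamics\]) directly.

```python
# Sketch (not the authors' code) of the block matrices A, B and vector b in A x + B u = b,
# with the blocks B equal to the identity since u(k) enters the dynamics directly.
import numpy as np

def build_constraint_blocks(A, E, w, N):
    """A: (n, n) state matrix, E: (n, m) disturbance matrix,
    w: array of shape (N, m) collecting w(0), ..., w(N-1)."""
    n = A.shape[0]
    I = np.eye(n)
    AA = np.zeros(((N + 2) * n, (N + 1) * n))  # acts on x = (x(0), ..., x(N))
    BB = np.zeros(((N + 2) * n, N * n))        # acts on u = (u(0), ..., u(N-1))
    AA[:n, :n] = -I                            # first row: -x(0) = -xi_0 = 0
    for k in range(N):                         # rows: A x(k) - x(k+1) + u(k) = -E w(k)
        r = slice((k + 1) * n, (k + 2) * n)
        AA[r, k * n:(k + 1) * n] = A
        AA[r, (k + 1) * n:(k + 2) * n] = -I
        BB[r, k * n:(k + 1) * n] = I
    AA[-n:, -n:] = -I                          # last row: -x(N) = -xi_f = 0
    b = np.concatenate([np.zeros(n)]
                       + [-E @ w[k] for k in range(N)]
                       + [np.zeros(n)])
    return AA, BB, b

# toy instance: n = 2 states, horizon N = 3, constant disturbance pushing the state down
A = np.array([[1.0, 0.1], [0.05, 1.0]])
AA, BB, b = build_constraint_blocks(A, np.eye(2), -0.5 * np.ones((3, 2)), 3)
print(AA.shape, BB.shape, b.shape)  # (10, 8) (10, 6) (10,)
```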
To overcome the intractability of the mixed integer linear program (\[MIPC1\])-(\[MIPC3\]), we propose a new method whose underlying idea is to bring dynamics (\[dynamics\]) back to the lot sizing model [@PW93]. To do this, we introduce some additional assumptions on the structure of matrix $A$ which improve tractability and in no way affect the generality of the results. This argument is dealt with in detail in the next section.
Introducing some structure on $A$
---------------------------------
Our main goal in this section is to rewrite (\[dynamics\]) in a “nice” form. By “nice form” we mean a form that emphasizes the analogies with standard lot sizing models [@PW93]. Without beating around the bush, we will henceforth refer to the following dynamics in place of (\[dynamics\]): $$\label{dynamics1}x(k+1)=x(k) + \Delta x(k) + E w(k) + u(k) \geq 0.$$ The reason why expression (\[dynamics1\]) is a nice one is that it isolates the dependence of each state component on the other ones. To put it differently, we have separated the influence of all other states on state $i$. It will soon become clearer that turning our attention to the new expression (\[dynamics1\]) is a prelude to the decomposition approach discussed later on.
Having clarified the reasons, we next need to clarify how to go from (\[dynamics\]) to (\[dynamics1\]) and what underlying assumption allows us to do that. Before doing this, let us denote by $I \in \mathbb R^{n \times n}$ the identity matrix and by $a_{ij}$ the dependence of state $i$ on state $j$. So, we can make the following assumption.
\[asm:1\] Matrix $A$ can be decomposed as $$A=I+\Delta, \quad \quad \Delta=\left[\begin{array}{cccccc} 0 & a_{12} & \hdots &a_{1,n-1} & a_{1n} \\ a_{21} & 0 & \hdots &a_{2,n-1} & a_{2n} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{n1} & a_{n2} & \hdots & a_{n,n-1} & 0\end{array}\right].$$
The reader may notice that (\[dynamics1\]) is a straightforward derivation of (\[dynamics\]) once we accept Assumption \[asm:1\].
Our secondary goal in this section is to preserve the nature of the game, which has stabilizing control actions playing against destabilizing disturbances. To do this, in our next assumption we consider the case where the influence of other states on state $i$ is relatively “weak” in comparison to the destabilizing effects of disturbances.
\[asm:wc\] $$\label{wc}\Delta x(k) + E w(k) < 0.$$
Notice that the above assumption preserves the nature of the game by bounding the effects of the mutual dependence of state components represented by the term $\Delta x(k)$. A closer look at (\[ue\]) and (\[wc\]) shows that the term $\Delta x(k)$ does not counterbalance the effects of $E w(k)$: the mutual dependence of the states only “weakly” amplifies or reduces the destabilizing effects of disturbances. We end this section by noticing that (\[dynamics1\]) is not yet in “lot sizing” form [@PW93]. In the next section, we present a decomposition approach that translates dynamics (\[dynamics1\]) into $n$ scalar dynamics in “lot sizing” form [@PW93].
Robust decomposition {#sec:robust decomposition}
====================
With the term “decomposition” we mean a mathematical manipulation through which the original dynamics (\[dynamics1\]) is replaced by $n$ independent dynamics of the form: $$\label{dynamics2} x_i(k+1)= x_i(k) - d_i(k) + u_i(k).$$ The above dynamics is in a typical lot sizing form in the sense that the (inventory) state tomorrow, $x_i(k+1)$, is equal to the (inventory) state today, $x_i(k)$, minus the discrepancy between today's demand $d_i(k)$ and today's reordered quantity $u_i(k)$. Replacing (\[dynamics1\]) with (\[dynamics2\]) is possible once we relate the demand $d_i(k)$ to the current values of all other state components and disturbances as expressed below: $$\label{d}\begin{array}{lll} d_i(k) & = & -\left[ \sum_{j=1, \, j \not = i}^n A_{ij} x_j(k) + \sum_{j=1}^n E_{ij} w_j(k) \right]\\ & = &
- \left[\Delta_{i \bullet} x(k) + E_{i \bullet} w(k) \right]. \end{array}$$ To put it differently, we assume that the influence that all other states have on state $i$ enters equation (\[dynamics2\]) through the demand $d_i(k)$ defined in (\[d\]). Our next step is to make the $n$ dynamics of the form (\[dynamics2\]) mutually independent. This is possible by replacing the current state values $x_j(k)$, $j\not = i$, with their estimated values on the part of agent $i$, which we denote by $\tilde x_j(k)$, $j\not = i$. Still with reference to (\[dynamics2\]), this implies replacing the current demand $d_i(k)$ by the “estimated” demand $\tilde d_i(k)$ defined as in (\[ed\]), where $X^k$ is the set of admissible state vectors $x(k)$: $$\label{ed}
\tilde d_i(k) =
\max_{\xi \in X^k} \left\{ - \Delta_{i \bullet} \xi - E_{i \bullet} w(k) \right\}.$$ The idea behind (\[ed\]) is to take as estimated value the worst admissible demand, i.e., the demand that would push the state out of the positive orthant in the shortest time, and such a demand is of course the maximal one (a small computational sketch is given after the formulation below). However, it must be noted that we see no drawbacks in combining other decomposition methods with the approach presented in the rest of the paper. To complete the decomposition, it is left to turn the objective function (\[obj\]) into $n$ independent components $$J_i(u_i,y_i)=\sum_{k=0}^{N-1} \left( p_i^k u_i(k) + h_i^k x_i(k) + f_i^k y_i(k)\right).$$ Note that, because of the linear structure of $J(u,y)$ in (\[MIPC1\]), it turns out that $J(u,y)=\sum_{i=1}^{n} J_i(u_i,y_i)$. So, in the end we have translated our original problem into $n$ independent mixed integer linear minimization problems of the form (\[objd\])-(\[ud\]), as requested at the beginning of this section. In the spirit of predictive control, each minimization problem is then solved forward in time over the horizon. So, for $\tau=0,\ldots,N-1$ we need to solve $$\begin{aligned}
\label{objd} \left(MIPC_i\right) \quad &\min_{u_i,y_i} \quad \sum_{k=\tau}^{N-1} \left( p_i^k u_i(k) + h_i^k x_i(k) + f_i^k y_i(k)\right)\\
\label{dynd} &x_i(k+1)= x_i(k) -\tilde d_i(k) + u_i(k) \geq 0, \quad x_i(\tau)=\xi_i^0, \, x_i(N)=0\\\label{ud} & 0 \leq u_i(k) \leq C y_i(k), \quad y_i(k)\in \{0,1\}.
\end{aligned}$$
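As anticipated, the estimated demand (\[ed\]) is easy to evaluate for simple admissible sets. The sketch below uses hypothetical data and takes $X^k$ to be a box, which is only one possible choice; the maximum of the linear function is then attained componentwise at one of the two bounds.

```python
# Hypothetical data; X^k is taken to be a box [lo, hi]^n only for illustration.
import numpy as np

def estimated_demand(Delta, E, w_k, lo, hi, i):
    """Worst-case demand of component i in (ed) over the box lo <= xi <= hi."""
    row = -Delta[i, :]                     # coefficients of xi in -Delta_{i.} xi
    worst_xi = np.where(row >= 0, hi, lo)  # a linear function is maximized at a vertex
    return float(row @ worst_xi - E[i, :] @ w_k)

Delta = np.array([[0.0, 0.1], [0.05, 0.0]])   # off-diagonal couplings (Assumption 1)
E = np.eye(2)
w_k = np.array([-0.5, -0.4])                  # disturbance pushing the state down
print(estimated_demand(Delta, E, w_k, lo=np.zeros(2), hi=np.ones(2), i=0))  # 0.5
```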
It is worth noting that non-null initial states, which materialize in values of $\xi_i^0$ strictly greater than zero in constraints (\[dynd\]), might induce infeasibility of $\left(MIPC_i\right)$. So, moving from $\left(MIPC\right)$ to $\left(MIPC_i\right)$ has this drawback, which we will discuss in more detail later on in Section \[subsec:rh\], together with some other issues concerning the receding horizon implementation of our method.
Shortest path and linear programming {#sec:shortest path}
====================================
So far, we have first formulated the problem of interest and then decomposed it into $n$ independent scalar problems. However, decomposition is only the first step of our solution approach: the mixed integer nature of the variables in (\[objd\])-(\[ud\]) is still an issue to be dealt with. This second part of the work focuses on the relaxation of the integer constraints $y_i(k)\in \{0,1\}$, which facilitates the tractability of the problem. It is well known that relaxation introduces, in general, some approximation in the solution. The main result of this work establishes that, for the problem at hand, relaxing and massaging the problem in a certain manner leads to a shortest path reformulation of the original problem. This is a valuable result, as it is well known that shortest path problems are in turn easily tractable and solvable through linear programming. Shortest path formulations are based on the notion of regeneration interval, discussed in detail in the next section.
Regeneration interval $[\alpha,\beta]$
--------------------------------------
Let us start by introducing a formal definition of *regeneration interval*, which represents the central topic of this section. The definition, available in the literature for scalar lot sizing models, is borrowed from [@PW93] and adapted to each single (scalar) dynamics $i$ of our decomposed $n$-dimensional model. So, with reference to the generic minimization problem $i$ expressed by (\[objd\])-(\[ud\]), let us state the following.
A pair of periods $[\alpha, \beta]$ form a *regeneration interval* for $(x_i,u_i,y_i)$ if $x_i(\alpha -1) = x_i(\beta)=0$ and $x_i(k)>0$ for $k=\alpha,\alpha+1, \ldots,\beta-1$.
Given a regeneration interval $[\alpha,\beta]$, we can define the accumulated demand over the interval $d_i^{\alpha \beta}$, and the residual demand $r_i^{\alpha \beta}$ as $$\label{eq:ad}d_i^{\alpha \beta}= \sum_{k=\alpha}^{\beta} \tilde d_i(k),\quad
r_i^{\alpha \beta}= d_i^{\alpha \beta} - \left\lfloor \frac{d_i^{\alpha \beta}}{C}\right\rfloor C.$$
Our idea is now to translate problem (\[objd\])-(\[ud\]) into new variables. More formally, let us consider the variables $y_i^{\alpha \beta} (k)$ and $\epsilon_i^{\alpha \beta} (k)$ defined in (\[ye\]) with the following meaning. Variable $y_i^{\alpha \beta} (k)$ is equal to one in the presence of a saturated control at time $k$ and zero otherwise. Similarly, variable $\epsilon_i^{\alpha \beta} (k)$ is equal to one in the presence of a non-saturated control at time $k$ and zero otherwise: $$\label{ye}y_i^{\alpha \beta} (k)=\left\{\begin{array}{ll}1 & \mbox{if $u_i(k)=C$}\\ 0 & \mbox{otherwise.}\end{array}\right. \quad \epsilon_i^{\alpha \beta} (k)=\left\{\begin{array}{ll}1 & \mbox{if $0 < u_i(k)< C$}\\ 0 & \mbox{otherwise.}\end{array}\right.$$ In a lot sizing context, these variables tell us in which periods full or partial batches are ordered.
At this point, and with the above variable transformation in mind, we can rely on well known results in the lot sizing literature which convert the original mixed integer problem (\[objd\])-(\[ud\]) into a number of linear programs $\left(LP_i^{\alpha \beta} \right)$, each one associated with a specific regeneration interval. Regeneration intervals and the associated linear programs are mutually related in a way that gives rise to a shortest path problem, which will be the central topic of the next section. For now, we simply state below the linear programming problem associated with a single regeneration interval $[\alpha,\beta]$. Setting $e_i^k=p_i^k + \sum_{j=k+1}^{N-1} h_i^j$ and after some standard manipulation, the linear program for a fixed regeneration interval $[\alpha,\beta]$ appears as: $$\begin{aligned}
\left(LP_i^{\alpha \beta} \right) \quad & \min_{y_i^{\alpha,\beta},u_i^{\alpha,\beta} } \quad & \sum_{k=\alpha}^{\beta} \left( Ce_i^k + f_i^k \right) y_i^{\alpha \beta} (k) + \sum_{k=\alpha}^{\beta} \left( r^{\alpha \beta} e_i^k + f_i^k \right) \epsilon_i^{\alpha \beta} (k) \label{objlp}\\
& & \sum_{k=\alpha}^{\beta} y_i^{\alpha \beta} (k) + \sum_{k=\alpha}^{\beta} \epsilon_i^{\alpha \beta} (k) = \left\lceil \frac{d_i^{\alpha\beta}}{C} \right\rceil \label{clp1}\\
&& \sum_{k=\alpha}^{t} y_i^{\alpha \beta} (k) + \sum_{k=\alpha}^{t} \epsilon_i^{\alpha \beta} (k) \geq \left\lceil \frac{d_i^{\alpha t}}{C} \right\rceil, & \quad t=\alpha,\ldots,\beta-1 \label{clp2}\\ && \sum_{k=\alpha}^{\beta} y_i^{\alpha \beta} (k) = \left\lceil \frac{d_i^{\alpha\beta}- r_i^{\alpha\beta}}{C} \right\rceil
\label{clp3}\\
&& \sum_{k=\alpha}^{t} y_i^{\alpha \beta} (k) \geq \left\lceil \frac{d_i^{\alpha t} - r_i^{\alpha t}}{C} \right\rceil, & \quad t=\alpha,\ldots,\beta-1 \label{clp4}\\
&&y_i^{\alpha \beta} (k), \, \epsilon_i^{\alpha \beta} (k) \geq 0, & \quad k=\alpha,\ldots,\beta.\label{clp5}
\end{aligned}$$ The above model is extensively used in the lot sizing context. We can limit ourselves to a few comments on the underlying idea of the constraints. Let us start by focusing on the equality constraints (\[clp1\]) and (\[clp3\]). These constraints tell us that the ordered quantity over the interval has to equal the accumulated demand over the same interval. This makes sense, as the initial and final states of a regeneration interval are null by definition. Let us turn our attention to the inequality constraints (\[clp2\]) and (\[clp4\]). There, we impose that the accumulated demand in any subinterval may not exceed the ordered quantity over the same subinterval. Again, this is due to the condition that the states are nonnegative in every period of a regeneration interval. Finally, the objective function (\[objlp\]) is simply a rearrangement of (\[objd\]) induced by the variable transformation seen above and specialized to the regeneration interval $[\alpha,\beta]$ rather than to the entire horizon $[0,N]$.
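To fix ideas, the sketch below (hypothetical data, not the authors' implementation) assembles and solves $ \left(LP_i^{\alpha \beta} \right) $ for a single regeneration interval with `scipy.optimize.linprog`; the variables are ordered as $[y_\alpha,\ldots,y_\beta,\epsilon_\alpha,\ldots,\epsilon_\beta]$ and the prefix constraints (\[clp2\]), (\[clp4\]) are passed as “$\leq$” rows.

```python
# Hypothetical data, not the authors' implementation.
# Variables: [y_alpha..y_beta, eps_alpha..eps_beta].
import numpy as np
from scipy.optimize import linprog

def solve_lp_interval(d, C, e, f):
    """d, e, f: demands and cost coefficients e_i^k, f_i^k for the periods alpha..beta."""
    K = len(d)
    e, f = np.asarray(e, dtype=float), np.asarray(f, dtype=float)
    cum = np.cumsum(d)                              # accumulated demands d^{alpha,t}
    full = np.floor(cum / C + 1e-9).astype(int)     # number of full batches
    resid = cum - full * C                          # residual demands r^{alpha,t}
    ceil_ = full + (resid > 1e-9)                   # ceil(d^{alpha,t}/C)
    r_ab = resid[-1]
    c = np.concatenate([C * e + f, r_ab * e + f])   # objective (objlp)
    A_eq = np.array([[1.0] * (2 * K),               # (clp1): all y and eps
                     [1.0] * K + [0.0] * K])        # (clp3): y only
    b_eq = [ceil_[-1], full[-1]]
    A_ub, b_ub = [], []                             # (clp2), (clp4) as "<=" rows
    for t in range(K - 1):
        pre = [1.0] * (t + 1) + [0.0] * (K - t - 1)
        A_ub.append([-x for x in pre + pre]); b_ub.append(-ceil_[t])
        A_ub.append([-x for x in pre] + [0.0] * K); b_ub.append(-full[t])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (2 * K), method="highs")
    return res.x[:K], res.x[K:], res.fun

y, eps, cost = solve_lp_interval(d=[3.0, 2.5, 4.0, 1.5], C=5.0,
                                 e=[4.0, 3.0, 2.0, 1.0], f=[1.0, 1.0, 1.0, 1.0])
print(y, eps, cost)  # unique optimum: y = (0,1,0,1), eps = (1,0,0,0), cost = 27,
                     # a 0-1 vertex as predicted by the theorem below
```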
We are ready to recall the following “nice property” of $(LP_i^{\alpha\beta})$ presented first by Pochet and Wolsey in [@PW93].
Any optimal extreme-point solution of $(LP_i^{\alpha\beta})$ is integer and hence feasible for the original mixed integer problem.
The proof is based on the observation that the constraint matrix of $(LP_i^{\alpha\beta})$ is a $0$-$1$ matrix. The constraints can be reordered so that the matrix has the consecutive 1’s property on each column and is therefore totally unimodular. It follows that $y_i^{\alpha\beta}$ and $\epsilon_i^{\alpha\beta}$ are $0$-$1$ in any extreme solution.
The above theorem represents a first step in the process of converting the mixed integer problem $(MIPC_i)$ into a linear programming one.
Shortest path
-------------
In the previous section we introduced a linear programming problem associated with a specific regeneration interval. In this section, we resort to well-known results on lot sizing to come up with a shortest path model which links together the linear programming problems of all possible regeneration intervals. Indeed, note that the solution of (\[objd\])-(\[ud\]) can be expressed either as a single regeneration interval $[0,N]$ or as a list of regeneration intervals.
So, let us define variables $z_i^{\alpha \beta} \in \{0,1\}$ which are equal to one if the regeneration interval $[\alpha,\beta]$ appears in the solution of (\[objd\])-(\[ud\]) and zero otherwise. The linear programming problem solving (\[objd\])-(\[ud\]) then takes the form below. For $\tau=0,\ldots,N-1$, solve
$$\begin{aligned}
\label{objsp}\left(LP_i\right) \quad & \min_{y_i^{\alpha\beta},u_i^{\alpha\beta}, z_i^{\alpha\beta}} \quad & \sum_{\alpha=\tau+1}^{N-1} \sum_{\beta=\alpha}^{N-1} \sum_{k=\alpha}^{\beta} \left[ \left( Ce_i^k + f_i^k \right) y_i^{\alpha \beta} (k) + \left( r^{\alpha \beta} e_i^k + f_i^k \right) \epsilon_i^{\alpha \beta} (k) \right]\end{aligned}$$
$$\begin{aligned}
&& \sum_{\beta=\tau+1}^N z_i^{\tau+1\beta}=1
\label{csp0}\\
&& \sum_{\alpha=\tau+1}^{t-1} z_i^{\alpha,t-1} - \sum_{\beta=t}^N z_i^{t\beta}=0 & \quad t=\tau+2,\ldots,N, \quad \tau+1 \leq \alpha \leq \beta \leq N
\label{csp1}\\
& & \sum_{k=\alpha}^{\beta} y_i^{\alpha \beta} (k) + \sum_{k=\alpha}^{\beta} \epsilon_i^{\alpha \beta} (k) = \left\lceil \frac{d_i^{\alpha\beta}}{C} \right\rceil z_i^{\alpha\beta},& \quad \tau+1 \leq \alpha \leq \beta \leq N \label{csp2}\\
&& \sum_{k=\alpha}^{t} y_i^{\alpha \beta} (k) + \sum_{k=\alpha}^{t} \epsilon_i^{\alpha \beta} (k) \geq \left\lceil \frac{d_i^{\alpha t}}{C} \right\rceil z_i^{\alpha \beta}, & \quad t=\alpha,\ldots,\beta-1, \quad \tau+1 \leq \alpha \leq \beta \leq N \label{csp3}\\ && \sum_{k=\alpha}^{\beta} y_i^{\alpha \beta} (k) = \left\lceil \frac{d_i^{\alpha\beta}- r_i^{\alpha\beta}}{C} \right\rceil
z_i^{\alpha \beta} & \quad \tau+1 \leq \alpha \leq \beta \leq N \label{csp4}\\
&& \sum_{k=\alpha}^{t} y_i^{\alpha \beta} (k) \geq \left\lceil \frac{d_i^{\alpha t} - r_i^{\alpha t}}{C} \right\rceil z_i^{\alpha\beta}, & \quad t=\alpha,\ldots,\beta-1, \quad \tau+1 \leq \alpha \leq \beta \leq N \label{csp5}\\
&&y_i^{\alpha \beta} (k), \, \epsilon_i^{\alpha \beta} (k),\, z_i^{\alpha \beta} \geq 0, & \quad k=\alpha,\ldots,\beta.\label{csp6}
\end{aligned}$$
Let us comment briefly on the meaning of the above linear program. Constraints (\[csp2\])-(\[csp6\]) should be familiar to the reader, as they already appeared in (\[clp1\])-(\[clp5\]). The only difference is that, because of the presence of $z_i^{\alpha \beta}$ on the right-hand side, the constraints referring to a given regeneration interval now come into play only if that interval is chosen as part of the solution, that is, whenever $z_i^{\alpha \beta}$ is set equal to one. Furthermore, a new class of constraints appears in (\[csp0\])-(\[csp1\]). These constraints are typical of shortest path problems and in this specific case force the variables $z_i^{\alpha \beta}$ to describe a path from $0$ to $N$. Finally, note that for $\tau=0$ the linear program $(LP_i)$ coincides with the linear program presented by Pochet and Wolsey in [@PW93].
At this point, we are in a position to recall the crucial result established in [@PW93].
The linear program $(LP_i)$ solves $(MIPC_i)$.
(Sketch) It turns out that the linear program $(LP_i)$ is a shortest path problem in the variables $z_i^{\alpha\beta}$. Each arc is associated with a different regeneration interval $[\alpha,\beta]$, and its cost is the optimal value of the objective function of the corresponding linear program $(LP_i^{\alpha\beta})$. We refer the reader to [@PW93] for further details.
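The recombination of the interval subproblems can be illustrated with a small dynamic program over the period boundaries, which is equivalent to solving the shortest path problem on the acyclic graph just described. The sketch below assumes that a routine `lp_cost(alpha, beta)` returning the optimal value of $(LP_i^{\alpha\beta})$ is available (for instance, the one sketched in the previous subsection).

```python
def shortest_path_intervals(lp_cost, tau, N):
    """Sketch of the shortest-path recombination underlying (LP_i).

    Nodes are the period boundaries tau, tau+1, ..., N; an arc from node
    alpha-1 to node beta carries the optimal cost lp_cost(alpha, beta) of
    covering [alpha, beta] as a single regeneration interval.  Since the
    graph is acyclic and ordered, a forward dynamic program solves the
    shortest path problem.
    """
    best = {tau: 0.0}
    pred = {}
    for node in range(tau + 1, N + 1):
        best[node] = float("inf")
        for prev in range(tau, node):
            cost = best[prev] + lp_cost(prev + 1, node)
            if cost < best[node]:
                best[node], pred[node] = cost, prev
    # recover the chosen list of regeneration intervals
    intervals, node = [], N
    while node > tau:
        intervals.append((pred[node] + 1, node))
        node = pred[node]
    return best[N], list(reversed(intervals))
```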
Receding horizon implementation of $(LP_i)$ {#subsec:rh}
-------------------------------------------
This section is dedicated to certain issues concerning the implementation of $(LP_i)$ in a receding horizon context, as is typical of predictive control. As the reader may know, in predictive control we solve $(LP_i)$ iteratively and forward in time over the whole horizon. In the formulation of $(LP_i)$, this is made explicit when we specify that $\tau$ goes from $0$ to $N-1$ and that for each value of $\tau$ we obtain a new linear program of type $(LP_i)$. After solving $(LP_i)$ for $\tau=0$, we apply the first control to the system, update the initial states according to the last available measurements at time $\tau=1$, and move on to solve a new $(LP_i)$ starting at $\tau=1$. We repeat this procedure until the end of the horizon, $\tau=N-1$. Consecutive linear programs are thus linked together by the initial state condition expressed in (\[dynd\]), which we rewrite below $$x_i(\tau)=\xi_i^0.$$ At this point, we emphasize that dealing with non-null initial states is a main difference between the linear program $(LP_i)$ and the linear program used in the lot sizing literature [@PW93]. To handle this issue, we need to elaborate on how to compute the accumulated demand in (\[eq:ad\]). Take for $[\tau,t]$ any interval with $x(\tau)=\xi_i^0 > 0$. Then, condition (\[eq:ad\]) needs to be revised as $$\label{rf}d_i^{\tau t}= \max\left\{\sum_{k=\tau}^{t} \tilde d_i(k)- \xi_i^0,0\right\}.$$ The rationale behind the above formula has an immediate interpretation in the lot sizing context: the effective demand over an interval is the accumulated demand reduced by the inventory initially stored and available at the warehouse. From a computational standpoint, the revised formula (\[rf\]) has a different effect depending on whether the accumulated demand exceeds the initial state or not, as discussed next.
1. $\sum_{k=\alpha}^{\beta} \tilde d_i(k) \geq \xi_i^0$: the mixed integer linear program $(MPC_i)$ with initial state $x(\tau)=\xi_i^0 > 0$ and accumulated demand $\sum_{k=\alpha}^{\beta} \tilde d_i(k)$ is turned into an $(LP_i)$ characterized by null initial state $x(\alpha-1)=0$ and effective demand $d_i^{\alpha \beta}=\sum_{k=\alpha}^{\beta} \tilde d_i(k) - \xi_i^0$ as in the example below: $$\begin{aligned}
(MPC_i) \quad \sum_{k=\alpha}^{\beta} \tilde d_i(k)=12, \quad x(\tau)=\xi_i^0=10 & \Longrightarrow &
(LP_i) \quad x(\alpha-1)=0, \quad d_i^{\alpha \beta}=2;\end{aligned}$$
2. $\sum_{k=\alpha}^{\beta} \tilde d_i(k) < \xi_i^0$: the mixed integer linear program $(MPC_i)$ with initial state $x(\tau)=\xi_i^0 > 0$ and accumulated demand $\sum_{k=\alpha}^{\beta} \tilde d_i(k)$ is infeasible. The solution obtained at the previous period $\tau-1$ applies. A second example is shown next: $$\begin{aligned}
(MPC_i) \quad \sum_{k=\alpha}^{\beta} \tilde d_i(k)=7, \quad x(\tau)=\xi_i^0=10 & \Longrightarrow &
(LP_i) \text{ infeasible.} \end{aligned}$$
In both cases, the revised formula (\[rf\]) allows us to generalize the linear program $(LP_i)$ to cases where the initial state is non-null, which is a crucial point when applying the lot sizing model in a receding horizon fashion.
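The sketch below shows how these pieces fit together in a receding horizon loop, with the effective demand computed according to (\[rf\]). The routine `solve_lp(tau, xi0)` is a hypothetical placeholder for building and solving $(LP_i)$ from period $\tau$ onwards with initial state $\xi_i^0$, and, for simplicity, the realized demand is taken equal to its estimate.

```python
def effective_demand(demand_estimate, xi0, tau, t):
    """Effective demand d_i^{tau t} of Eq. (rf): accumulated estimated demand
    over [tau, t] reduced by the inventory xi0 initially in stock."""
    return max(sum(demand_estimate[k] for k in range(tau, t + 1)) - xi0, 0.0)

def receding_horizon(solve_lp, demand_estimate, x0, N):
    """Sketch of the receding horizon loop around (LP_i).

    solve_lp(tau, xi0) is a placeholder: it should build (LP_i) for periods
    tau..N-1 with the effective demands above and return the planned control
    sequence [u(tau), ..., u(N-1)].
    """
    x, plan, controls = x0, [], []
    for tau in range(N):
        if sum(demand_estimate[k] for k in range(tau, N)) >= x:
            plan = list(solve_lp(tau, x))          # replan from tau onwards
        # otherwise (LP_i) is infeasible: keep the plan computed at tau - 1
        u = plan.pop(0) if plan else 0.0
        controls.append(u)
        x = x - demand_estimate[tau] + u           # lot sizing state update
    return controls
```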
Numerical example {#sec:numerical example}
=================
In this specific example, dynamics (\[dynamics\]) takes the form expressed below. This dynamics is particularly significant as it reproduces the typical coupling between position and velocity in a sampled second-order system. Initial and final states are null and the state values must remain in the positive quadrant over the whole horizon. More specifically, denoting by $x_1(k)$ the position and by $x_2(k)$ a velocity of opposite sign, the dynamics reads: $$\label{exdyn}\left[\begin{array}{ll} x_1(k+1)\\x_2(k+1)\end{array}\right]=\left[\begin{array}{lc} 1 & - \kappa\\\kappa & 1\end{array}\right] \left[\begin{array}{ll} x_1(k)\\x_2(k)\end{array}\right] - \left[\begin{array}{ll} w_1(k)\\w_2(k)\end{array}\right]+\left[\begin{array}{ll} u_1(k)\\u_2(k)\end{array}\right]\geq 0, \quad \left[\begin{array}{ll} x_1(0)\\x_2(0)\end{array}\right]=\left[\begin{array}{ll} x_1(N)\\x_2(N)\end{array}\right]=0.$$ A closer look at the first equation reveals that a greater velocity $x_2(k)$ translates into a faster decrease of the position $x_1(k+1)$. Similarly, the second equation tells us that a greater position $x_1(k)$ induces a faster increase of the velocity $x_2(k+1)$ because of some elastic reaction. In both equations, the nonnegative disturbances $w_i(k) \geq 0$ seek to push the states $x_i(k)$ out of the positive quadrant, in accordance with Assumption \[ue\]. Their effect is counterbalanced by the positive control actions $u_i$. Notice that matrix $A$ can be decomposed as described in Assumption \[asm:1\]. Also, by acting on the parameter $\kappa$ we can easily guarantee the “weak coupling” condition expressed in Assumption \[asm:wc\].
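As a quick sanity check, the state evolution (\[exdyn\]) can be simulated directly for a candidate pair of control sequences. This is only a sketch of the plant model: the terminal condition $x(N)=0$ is not enforced here, and the control and disturbance sequences are assumed to be given as arrays.

```python
import numpy as np

def simulate(kappa, u, w, N):
    """Sketch of the example dynamics (exdyn): x(k+1) = A x(k) - w(k) + u(k)
    with A = [[1, -kappa], [kappa, 1]] and null initial state.  u and w are
    (N, 2) arrays; the positive-quadrant constraint is checked along the way."""
    A = np.array([[1.0, -kappa], [kappa, 1.0]])
    x = np.zeros((N + 1, 2))
    for k in range(N):
        x[k + 1] = A @ x[k] - w[k] + u[k]
        if np.any(x[k + 1] < 0):
            raise ValueError(f"state left the positive quadrant at period {k + 1}")
    return x
```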
Turning to the capacity constraints (\[capconst\]), for this two-dimensional example, these constraints can be rewritten as: $$0 \leq \left[\begin{array}{ll} u_1(k)\\u_2(k)\end{array}\right] \leq C \left[\begin{array}{ll} y_1(k)\\y_2(k)\end{array}\right], \quad
\left[\begin{array}{ll} y_1(k)\\y_2(k)\end{array}\right]\in \{0,1\}^2.$$ It remains to comment on the objective function (\[obj\]). We consider the case where fixed costs are much more relevant than proportional and holding ones. This translates into choosing a high value for $f^k$ in comparison with the values of the parameters $p^k$, $h^k$, as shown in the next linear objective function: $$J(u,y)= \sum_{k=0}^{N-1} \left( \mathbf 1^n u(k) + \mathbf 1^n x(k) + 100 \cdot \mathbf 1^n y(k)\right).$$ This choice makes sense for two reasons. First, all the work is centered around issues deriving from the integer nature of $y(k)$, so high values of $f^k$ emphasize the role of the integer variables in the objective function. Second, high fixed costs favor solutions with the fewest control actions, which facilitates the validation and interpretation of the simulated results.
The next step is to decompose the dynamics (\[exdyn\]) into the scalar lot sizing form (\[dynd\]), which we rewrite below: $$x_i(k+1)=x_i(k) - \tilde d_i(k) + u_i(k).$$ When it comes to computing the estimated demand $\tilde d_i$, a natural choice is to set $\tilde d_i$ as below, where we have denoted by $\tilde x_1(k)$ (respectively $\tilde x_2(k)$) the estimated value of state $x_1(k)$ (respectively $x_2(k)$) available to agent $2$ (agent $1$): $$\label{td}\left[\begin{array}{ll} \tilde d_1(k)\\ \tilde d_2(k) \end{array}\right]=\left[\begin{array}{cc} 0 & \kappa\\ -\kappa & 0 \end{array}\right] \left[\begin{array}{ll} \tilde x_1(k)\\ \tilde x_2(k)\end{array}\right] + \left[\begin{array}{ll} w_1(k)\\w_2(k)\end{array}\right].$$ Now, the question is: which expression should we use to represent the set of admissible state vectors $X^k$ appearing in equation (\[ed\])? This question is closely related to another one: how does agent 1 predict $\tilde x_2$, and likewise agent 2 predict $\tilde x_1$? A possible answer is shown next: $$\label{est}\left[\begin{array}{ll} \tilde x_1(k+1)\\ \tilde x_2(k+1)\end{array}\right]=\left[\begin{array}{ll} \tilde x_1(k)\\ \tilde x_2(k)\end{array}\right] +\left[\begin{array}{ll} 0 \\ \kappa \bar x_1\end{array}\right] - \left[\begin{array}{ll} 0 \\ w_2(k) \end{array}\right]+ \left[\begin{array}{ll} 0 \\ C \end{array}\right] ,\quad \left[\begin{array}{ll} \tilde x_1(0)\\ \tilde x_2(0)\end{array}\right]= \left[\begin{array}{ll} x_1(0)\\ \tilde x_2(0)\end{array}\right].$$ Let us elaborate on the above equations. Regarding variable $\tilde x_2(k)$, it enters the evolution of $\tilde d_1(k)$ through the first equation of (\[td\]). Because of the positive contribution of the term $\kappa \tilde x_2(k)$ to $\tilde d_1(k)$, a conservative approach suggests taking for $\tilde x_2(k)$ an upper bound of $x_2(k)$, and this is exactly the spirit behind the evolution of $\tilde x_2(k)$ expressed in the second equation of (\[est\]). Here, $\bar x_1$ is an average value of $x_1$. A similar reasoning applies to $\tilde x_1(k)$, which enters the evolution of $\tilde d_2(k)$ through the second equation of (\[td\]). We now observe a negative contribution of the term $-\kappa \tilde x_1(k)$ to $\tilde d_2(k)$ and therefore take for $\tilde x_1(k)$ a lower bound of $x_1(k)$, as shown in the first equation of (\[est\]).
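For completeness, a small sketch of the estimate-driven demand computation of Eqs. (\[td\]) and (\[est\]) is given below. The initial value chosen for the estimates and the indexing conventions (demand computed at period $k$ before updating the predictor) are assumptions of the sketch.

```python
import numpy as np

def estimated_demands(kappa, w, C, x_tilde0, xbar1, N):
    """Sketch of the demand estimates (td) driven by the conservative state
    predictors (est): the estimate of x_2 is pushed to an upper bound, while
    the estimate of x_1 is kept at its (constant) lower bound.

    w is an (N, 2) array of disturbances; x_tilde0 is the assumed
    initialization of the estimates; xbar1 is the average value of x_1 used
    in the predictor of x_2."""
    x1_t, x2_t = float(x_tilde0[0]), float(x_tilde0[1])
    d_tilde = np.zeros((N, 2))
    for k in range(N):
        # Eq. (td): estimated demands from the cross-coupling terms
        d_tilde[k, 0] = kappa * x2_t + w[k, 0]
        d_tilde[k, 1] = -kappa * x1_t + w[k, 1]
        # Eq. (est): x_tilde_1 left unchanged; x_tilde_2 pushed to an upper
        # bound by assuming maximal control C at every period
        x2_t += kappa * xbar1 - w[k, 1] + C
    return d_tilde
```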
We can now present and comment on our simulation results. We have carried out two different sets of experiments whose parameters are displayed in Table \[t:data1\]. In line with the weak coupling assumption (see Assumption \[asm:wc\]), we have set $\kappa$ small enough, in the range from $0.01$ to $0.225$. This range works well since, as we will see, $|\kappa x_i|$ is always less than $w_i$, which also means $\Delta x(k) + E w(k) < 0$. For the sake of simplicity and without loss of generality, the capacity $C$ is set to three, the disturbances $w_i$ are unitary and $\bar x_1$ is equal to one. Unitary disturbances facilitate the checking and interpretation of the results, as the accumulated demand over the horizon turns out to be very close to the horizon length. The two experiments also differ in the horizon length $N$, for the reasons clarified next.
The first set of experiments aims at analysing the computational benefits of the decomposition and relaxation upon which our solution method is based. We consider horizon lengths $N$ from one to ten. Larger values of $N$ are not needed since, even in this small range, the differences in computational time are already evident, as clearly illustrated in Fig. \[fig:Time22\]. There, we plot the average computational time vs. the horizon length $N$ for the mixed integer predictive control problem (solid diamonds), the decomposed problem $(MIPC_i)$ (dashed squares), and the linear program $(LP_i)$. Average computational time means the average time for one agent to make a single decision (the total time is about $2N$ times the average one). As the reader may notice, the computational time of the linear program $(LP_i)$ is a fraction of the time required by either the $(MPC)$ or the $(MIPC_i)$.
         $N$     $\kappa$                      $C$   $w_1(k)$   $w_2(k)$   $\bar x_1$
  ----- ------- ----------------------------- ----- ---------- ---------- ------------
  I      1…10    0.1                           3     1          1          1
  II     6       $\{0.01,\; 0.2,\; 0.225\}$    3     1          1          1
: Simulation parameters chosen for the two experiments.[]{data-label="t:data1"}
(Figure \[fig:Time22\] about here)
In a second set of simulations, we have inspected how the percentage error $$\epsilon \%=\frac{\text{optimal cost of $(MPC_i)$}- \text{optimal cost of $(MPC)$}}{\text{optimal cost of $(MPC)$}}\%$$ varies with the value of the elastic coefficient $\kappa$. The role of $\kappa$ is crucial, as $\kappa$ describes the tightness of the coupling between the states $x_1(k)$ and $x_2(k)$. We expect small values of $\kappa$, i.e., weak coupling of the state components, to lead to small errors $\epsilon \%$. Conversely, high values of $\kappa$, describing a strong coupling between the state components, are expected to induce higher values of $\epsilon \%$.
This is in line with what we observe in Fig. \[fig:Fig2\], where we plot the error $\epsilon \%$ as a function of the coefficient $\kappa$. For relatively small values of $\kappa$ in the range from $0$ to $0.2$, we observe a percentage error not exceeding one percent, $\epsilon \% \leq 1$. A discontinuity at around $\kappa=0.2$ causes the error $\epsilon \%$ to jump from about $1 \%$ to $20 \%$.
(Figure \[fig:Fig2\] about here)
We should not be surprised, as discontinuity of errors is typical of mixed integer programs; we try to clarify this in more detail in the plot of Fig. \[fig:Fig3\]. Here, for a horizon length $N=6$ and a relatively high value $\kappa=0.225$, we display the exact solution (dashed squares) and the approximate solution (solid triangles) returned by the mixed integer linear program $(MIPC)$ and by the linear program $(LP_i)$, respectively. The solution is shown as the time plot of the states $x_i(k)$, the continuous controls $u_i(k)$ and the discrete controls $y_i(k)$. Dotted lines represent trajectories predicted in earlier periods of the receding horizon implementation. At a first check, and in accordance with what we expect, we note that the controls $u_i(k)$ never exceed the capacity and are always associated with unitary control actions $y_i(k)$. Looking at the behaviour of the discrete controls $y_1(k)$, it can be observed that the approximate solution presents four control actions (four peaks at one), whereas in the exact solution the control $y_1(k)$ acts on the system only three times (three peaks at one). One extra peak out of four represents an increase in the use of control actions of about $25$ percent, which is reflected in an increase of the percentage error of approximately $20 \%$. A last observation concerning the exact plot of $y_i(k)$ is that the number of control actions is as small as possible, i.e., three for $y_1(k)$ and two for $y_2(k)$. This makes sense as the accumulated demand over the horizon slightly exceeds the horizon length, so the minimum number of control actions can be roughly obtained by dividing the accumulated demand (slightly above six) by the capacity $C$ (equal to three) and rounding the fractional result up to the next integer.
(Figure \[fig:Fig3\] about here)
Let us now compare exact and approximate solutions for a smaller value of $\kappa=0.2$. With reference to Fig. \[fig:Fig4\], we observe that, differently from above, the discrete controls $y_i(k)$ coincide. However, we still have notable differences in the plot of the continuous controls $u_1(k)$, which cause distinct state trajectories for $x_1(k)$. Small differences can be noted for $u_2(k)$ and $x_2(k)$ as well. The observed differences still cause a small percentage error $\epsilon \%=1$.
(Figure \[fig:Fig4\] about here)
We conclude our simulations by showing that the percentage error $\epsilon \%$ is around zero when we further reduce the value of $\kappa$ to $0.01$. This is evident in Fig. \[fig:Fig5\], where the plots of different styles overlap, which means that the exact and approximate solutions coincide.
(Figure \[fig:Fig5\] about here)
[99]{}
D. Axehill, L. Vandenberghe, and A. Hansson, “Relaxations applicable to mixed integer predictive control — Comparisons and efficient computations”, in *Proc. of the 46th IEEE Conference on Decision and Control*, New Orleans, USA, pp. 4103–4109, 2007.
D. Bauso, “Boolean-controlled systems via receding horizon and linear programing”, *Mathematics of Control, Signals, and Systems (MCSS)*, vol. 21, no. 1, 2009, pp. 69–91.
M. S. Branicky, V. S. Borkar and S. K. Mitter, “A Unified Framework for Hybrid Control: Model and Optimal Control Theory”, *IEEE Trans. on Automatic Control*, vol. 43, no. 1, 1998, pp. 31–45.
P. R. De Waal and J. H. Van Schuppen, “A class of team problems with discrete action spaces: optimality conditions based on multimodularity”, *SIAM Journal on Control and Optimization*, vol. 38, pp. 875–892, 2000.
G.C. Goodwin and D.E. Quevedo, “Finite alphabet control and estimation”, *International Journal of Control, Automation, and Systems*, vol. 1, no. 4, pp. 412–430, 2003.
J. Hespanha, D. Liberzon, A. Teel, “Lyapunov Characterizations of Input-to-State Stability for Impulsive Systems”, *Automatica*, vol. 44, no. 11, 2008, pp. 2735–2744.
G. L. Nemhauser, and L. A. Wolsey, *Integer and Combinatorial Optimization*, John Wiley $\&$ Sons Ltd, New York, 1988.
Y. Pochet, and L. A. Wolsey, “Lot Sizing with constant batches: Formulations and valid inequalities”, *Mathematics of Operations Research*, vol. 18, no. 4, pp. 767–785, 1993.
D. C. Tarraf, A. Megretski and M. A. Dahleh, “A Framework for Robust Stability of Systems Over Finite Alphabets”, *IEEE Transactions on Automatic Control*, vol. 53, no. 5, pp. 1133– 1146, June 2008.
![Average computational time vs. horizon length $N$ of the mixed integer predictive control problem (solid diamonds), of the decomposed problem $(MIPC_i)$ (dashed squares), and of the linear program $(LP_i)$.[]{data-label="fig:Time22"}](Time22.eps){width="12cm"}
![Percentage error $\epsilon \%$ for different values of the elastic coefficient $\kappa$.[]{data-label="fig:Fig2"}](Fig2.eps){width="12cm"}
![Elastic coefficient $\kappa=0.225$. Exact solution (dashed squares) and approximate solution (solid triangles) returned by the mixed integer linear program $(MIPC)$ and by the linear program $(LP_i)$ respectively. Horizon length $N=6$. Time plot of states $x_i(k)$, continuous controls $u_i(k)$ and discrete controls $y_i(k)$.[]{data-label="fig:Fig3"}](Fig3.eps){width="12cm"}
![Elastic coefficient $\kappa=0.20$. Exact solution (dashed squares) and approximate solution (solid triangles) returned by the mixed integer linear program $(MIPC)$ and by the linear program $(LP_i)$ respectively. Horizon length $N=6$. Time plot of states $x_i(k)$, continuous controls $u_i(k)$ and discrete controls $y_i(k)$.[]{data-label="fig:Fig4"}](Fig4.eps){width="12cm"}
![Elastic coefficient $\kappa=0.001$. Exact solution (dashed squares) and approximate solution (solid triangles) returned by the mixed integer linear program $(MIPC)$ and by the linear program $(LP_i)$ respectively. Horizon length $N=6$. Time plot of states $x_i(k)$, continuous controls $u_i(k)$ and discrete controls $y_i(k)$.[]{data-label="fig:Fig5"}](Fig5.eps){width="12cm"}
[^1]: Dipartimento di Ingegneria Informatica, Università di Palermo, V.le delle Scienze, 90128 Palermo, ITALY - [email protected]
|
---
abstract: 'Within the frame of macroscopic quantum electrodynamics in causal media, the van der Waals interaction between an atomic system and an arbitrary arrangement of dispersing and absorbing dielectric bodies including metals is studied. It is shown that the minimal-coupling scheme and the multipolar-coupling scheme lead to essentially the same formula for the van der Waals potential. As an application, the vdW potential of an atom in the presence of a sphere is derived. Closed expressions for the long-distance (retardation) and short-distance (non-retardation) limits are given, and the effect of material absorption is discussed.'
author:
- Stefan Yoshi Buhmann
- Ho Trung Dung
- 'Dirk-Gunnar Welsch'
title: The van der Waals energy of atomic systems near absorbing and dispersing bodies
---
Introduction {#Sec:Intro}
============
The involvement of van der Waals (vdW) forces in a variety of physicochemical processes and promising potential applications (such as the construction of atomic-force microscopes [@Binnig86] or reflective atom-optical elements [@Shimizu02]) has created the need for a very detailed understanding and control of them. Casimir and Polder [@CasimirPolder] were the first to study the vdW interaction within the frame of rigorous quantum electrodynamics (QED). Investigating the interaction between an atom and a perfectly reflecting semi-infinite (planar) body, they found that in the short-distance (non-retardation) limit the interaction potential $U(z)$ behaves like $z^{-3}$ ($z$, distance between the atom and the interface), whereas in the long-distance (retardation) limit it behaves like $z^{-4}$. In their theory, Casimir and Polder quantized the (transverse) vector potential of the electromagnetic field (outside the perfectly reflecting body) in terms of normal modes and coupled it to the atom according to the minimal-coupling Hamiltonian, with its Coulomb part being determined by means of the method of image charges. They then calculated the (lowest-order) change of the ground-state energy of the system arising from this coupling, which is a function of the atomic position and thus plays the role of the potential energy that determines the force acting on the atom.
In the earlier experiments [@Raskin69], which studied the deflection of thermal atomic beams by conducting surfaces, the observed signal was extremely low. Nevertheless, qualitative trends in agreement with the $z^{-3}$ law were observed. Only recent progress in experimental techniques has rendered it possible to detect vdW forces with sufficiently high precision [@AndersonHarocheHinds; @GrisentiSchoellkopfToennies; @Shimizu01; @LandraginCourtoisLabeyrie; @SandoghdarSukenikHinds]. In particular, by using atomic passage between two parallel plates, the $z^{-4}$ retarded potential could be verified [@AndersonHarocheHinds]. Other methods for measuring vdW forces have been based on transmission grating diffraction of molecular beams [@GrisentiSchoellkopfToennies], atomic quantum reflection [@Shimizu01], evanescent-wave atomic mirror techniques [@LandraginCourtoisLabeyrie], and (indirect) measurements via spectroscopic means [@SandoghdarSukenikHinds]. Proposals have been made on improvements of monitoring the vdW interaction by using atomic interferometry [@GorlickiFeronLorentDucloy].
Since the appearance of Casimir’s and Polder’s pioneering article in 1948 there has been a large body of work on the vdW interaction (see, e.g., [@Dzyaloshinskii61; @Langbein74; @Mahanty76; @Hinds91; @Milonni94] and references therein). Roughly speaking, there have been two routes to treat the problem. In the first, which closely follows the ideas of Casimir and Polder, explicit field quantization is performed by applying standard concepts of QED, such as normal-mode techniques [@Bullough70; @Renne; @Milloni; @Tikochinski; @Zhou; @Bostrom00; @MarvinToigo; @Wu]. In particular, extensions of the theory to one [@Milloni; @Renne] and two semi-infinite dielectric walls [@Tikochinski; @Zhou], thin metallic films [@Bostrom00], and cylindrical and spherical dielectric bodies [@MarvinToigo] have been given, and the problem of force fluctuations on short time scales has been studied [@Wu]. The calculations have typically been based on macroscopic QED, by applying a normal-mode decomposition and including the bodies via the well-known continuity conditions at the surfaces of discontinuity. Since in such an approach the frequency dependence of the bodies’ response to the field cannot be properly taken into account, material dispersion and absorption are ignored. The problem does of course not occur in microscopic QED, where the bodies are treated on a microscopic level by adopting, e.g., harmonic-oscillator models (see, e.g., [@Renne]). Apart from the fact that the calculations are rather involved, the results are model-dependent.
To overcome the difficulties mentioned, in the second route, the calculations are based on linear response theory, without (explicitly) quantizing the electromagnetic field [@McLachlan; @Argawal2; @WylieSipe; @Girard; @GirardGirardet; @Fichet; @GirardMaghezzi; @Boustimi]. All the relevant entities are expressed in terms of correlation functions which in turn are related, via the fluctuation-dissipation theorem, to response functions. The method has been employed to investigate the vdW energy of an atom near a semi-infinite (planar) body made of dielectric [@McLachlan; @Argawal2], metallic [@WylieSipe; @Girard], ionic-crystalline [@GirardGirardet], and birefringent material [@Fichet]. Further, the problem of an atom near a dielectric plate [@WylieSipe], a metallic sphere [@GirardMaghezzi], and a nanowire [@Boustimi] has been considered. Since the method borrows from equilibrium statistical mechanics, its applicability is restricted to the vacuum and thermal quantum states.
In this article, we give a unified QED approach to the vdW interaction between an atomic system (such as an atom or a molecule) and dispersing and absorbing dielectric bodies including metals. Starting from the quantized version of the macroscopic Maxwell field, with the medium being described in terms of a spatially varying, Kramers-Kronig consistent (complex) permittivity, we derive an expression for the vdW potential that applies to arbitrary body configurations. The formalism can be regarded as being a generalization of the normal-mode formalism of macroscopic QED, so that it can be applied to other than the vacuum and thermal states, and it also allows extensions like the inclusion of dynamical interactions between the atomic system and the medium-assisted electromagnetic field. Roughly speaking, the mode expansion is replaced with some source-quantity representation in terms of the Green tensor of the macroscopic Maxwell equations, in which material dispersion and absorption are automatically included. Further, we close the gap between the minimal-coupling scheme and the multipolar-coupling scheme by showing that both approaches lead to equivalent results.
To give an application, we consider the vdW potential of an atom in the vicinity of a dispersing and absorbing microsphere. Microspheres may be interesting candidates for QED experiments (see, e.g., [@Buck02; @Ho01] and references therein) with atomic beams. The atom-surface distances should be adjusted so that the atoms fly close enough to the surface to facilitate a strong coupling with the microsphere resonances, but not get adsorbed on the microsphere surface because of the attractive vdW force.
The article is organized as follows. In Section \[Sec:Basic\_eqs\] the formalism is outlined and an expression for the vdW potential is derived. In Section \[Sec:Appl\] the formalism is applied to an atom near a sphere. A summary and some conclusions are given in Section \[Sec:Concl\].
The van der Waals energy {#Sec:Basic_eqs}
========================
The quantization scheme {#Subsec:mincH}
-----------------------
Let us consider an atomic system (such as an atom or a molecule) interacting with the quantized electromagnetic field in the presence of macroscopic, dispersing and absorbing dielectric bodies. In the nonrelativistic limit, the minimal-coupling Hamiltonian in Coulomb gauge reads [@Knoll01; @Scheel99] $$\begin{aligned}
\label{E3}
\hat{H}&=&
\int_0^{\infty}{\,\mathrm{d}}\omega\,\hbar\omega
\int{\,\mathrm{d}}^3r\,\hat{\bf f}^{\dagger}({\bf r},\omega)
\hat{\bf f}({\bf r},\omega)
\nonumber\\
&&+\sum_{\alpha}\frac{1}{2 m_{\alpha}}
\left[\hat{\bf p}_{\alpha}
-q_{\alpha}\hat{\bf A}({{\bf r}_{\rm A}}+\hat{\bf r}_{\alpha})\right]^2
\nonumber\\
&&
+ {\textstyle\frac{1}{2}}\int{\,\mathrm{d}}^3r\,\hat{\rho}_A({\bf r})
\hat{\varphi}_{\rm A}({\bf r})
+\int{\,\mathrm{d}}^3r\,\hat{\rho}_A({\bf r})
\hat{\varphi}_{\rm M}({\bf r}),
\quad\end{aligned}$$ where $m_\alpha$ and $q_\alpha$ are respectively the masses and charges of the particles constituting the atomic system, while $\hat{\bf r}_\alpha$ and $\hat{\bf p}_{\alpha}$ are respectively their coordinates (relative to the center of mass ${{\bf r}_{\rm A}}$) and canonically conjugated momenta. The first term in the Hamiltonian describes the combined system of the electromagnetic field plus the macroscopic bodies (including dissipative systems) in terms of bosonic vector fields $\hat{\bf f}({\bf r},\omega)$, which satisfy the commutation relations $$\label{E4}
\left[\hat{f}_i({\bf r},\omega),
\hat{f}_j^{\dagger}({\bf r'},\omega')\right]
=\delta_{ij}\delta({\bf r}-{\bf r}')\delta(\omega-\omega'),$$ $$\label{E5}
\left[\hat{f}_i({\bf r},\omega),
\hat{f}_j({\bf r'},\omega')\right]
=\left[\hat{f}_i^{\dagger}({\bf r},\omega),
\hat{f}_j^{\dagger}({\bf r'},\omega')\right]
=0.$$ The second term is the kinetic energy of the charged particles, while the third term describes their mutual Coulomb interaction, where $$\label{E6}
\hat{\varphi}_{\rm A}({\bf r})
= \frac{1}{4\pi\varepsilon_0} \int {\,\mathrm{d}}^3{r}'\,
\frac{\hat{\rho}_{\rm A}({\bf r}')}{|{\bf r}-{\bf r}'|}$$ and $$\label{E7}
\hat{\rho}_{\rm A}({\bf r})
=\sum_{\alpha}q_{\alpha}\delta[{\bf r}
-({{\bf r}_{\rm A}}+\hat{\bf r}_{\alpha})]$$ are respectively the scalar potential and the charge density of the atomic system. The last term accounts for the Coulomb interaction of the particles with the medium. The vector potential $\hat{\bf A}({\bf r})$ and the scalar potential $\hat{\varphi}_{\rm M}({\bf r})$ of the medium-assisted electromagnetic field are given by $$\begin{aligned}
\label{E8}
\hat{\bf A}({\bf r}) =
\int_0^\infty {\rm d} \omega \, (i\omega)^{-1}
\underline{\hat{\bf E}}{^\perp}({\bf r},\omega)
+ {\rm H.c.},
\end{aligned}$$ $$\begin{aligned}
\label{E9}
-\bm{\nabla} \hat{\varphi}_{\rm M}({\bf r})
= \int_0^\infty {\rm d} \omega \,
\underline{\hat{\bf E}}{^\parallel}({\bf r},\omega)
+ {\rm H.c.},\end{aligned}$$ where the symbols $\perp$ and $\parallel$ are used to distinguish transverse and longitudinal vector fields, respectively. In particular, $\hat{\underline{\bf E}}{^{\perp}}({\bf r},\omega)$ and $\hat{\underline{\bf E}}{^{\parallel}}({\bf r},\omega)$ read $$\label{E10}
\hat{\underline{\bf E}}{^{\perp(\parallel)}}({\bf r},\omega)
= \int {\rm d}^3 r' \, \mbox{\boldmath $\delta$}
^{\perp(\parallel)}({\bf r}-{\bf r}')
{} \hat{\underline{\bf E}}({\bf r}',\omega),$$ where $\mbox{\boldmath $\delta$}^\perp({\bf r})$ and $\mbox{\boldmath $\delta$}^\parallel({\bf r})$ are the transverse and longitudinal dyadic $\delta$-functions, respectively, and the medium-assisted electric field in the $\omega$-domain, $\hat{\underline{\bf E}}({\bf r},\omega)$, is expressed in terms of the basic variables $\hat{\bf f}({\bf r},\omega)$ as $$\label{E11}
\begin{split}
\underline{\hat{\bf E}}({\bf r},\omega)
= i \sqrt{\frac{\hbar}{\pi\varepsilon_0}}\,\frac{\omega^2}{c^2}
\int & {\rm d}^3r'\,\sqrt{{\rm Im}\,\varepsilon({\bf r}',\omega)}
\\[1ex]&\times\;
\bm{G}({\bf r},{\bf r}',\omega)
{}\hat{\bf f}({\bf r}',\omega)
\end{split}$$. Here, $\bm{G}({\bf r},{\bf r}',\omega)$ is the classical Green tensor and ${\rm Im}\,\varepsilon({\bf r},\omega)$ is the imaginary part of the complex, space- and frequency-dependent (relative) permittivity $$\label{E12}
\varepsilon({\bf r},\omega)
= {\rm Re}\,\varepsilon({\bf r},\omega)
+ i{\rm Im}\,\varepsilon({\bf r},\omega).$$ The Green tensor, which obeys the inhomogeneous partial differential equation $$\label{E19}
\left[
\bm{\nabla}\times\bm{\nabla}\times
-\frac{\omega^2}{c^2}\,\varepsilon({\bf r},\omega)
\right]
\bm{G}({\bf r},{\bf r}',\omega)
= \bm{\delta}({\bf r}-{\bf r}')$$ together with the boundary condition at infinity, has the following general properties (see, e.g., [@Knoll01]): $$\label{E20}
\bm{G}^{\ast}({\bf r},{\bf r}',\omega)
=\bm{G}({\bf r},{\bf r}',-\omega^{\ast}),$$ $$\label{E21}
G_{ij}({\bf r},{\bf r}',\omega)
=G_{ji}({\bf r}',{\bf r},\omega),$$ $$\label{E22}
\frac{\omega^2}{c^2}\!
\int\!{\,\mathrm{d}}^3{s}\,\varepsilon_{\rm I}({\bf s},\omega)
\bm{G}({\bf r},{\bf s},\omega)\bm{G}^{\ast}({\bf s},{\bf r}',\omega)
={\,\mathrm{Im}}\bm{G}({\bf r},{\bf r}',\omega).$$
In this way, all electromagnetic-field quantities can be expressed in terms of the fundamental fields $\hat{\bf f}({\bf r},\omega)$. In particular, the operator for the electric field reads $$\begin{aligned}
\label{E13}
\hat{\bf E}({\bf r})
&=& -\frac{1}{i\hbar}\left[\hat{\bf A}({\bf r}),\hat{H}\right]
- \bm{\nabla}\hat{\varphi}_{\rm M}({\bf r})
- \bm{\nabla}\hat{\varphi}_{\rm A}({\bf r})
\nonumber\\
&=& \hat{\bf E}_{\rm M}({\bf r})
+ \sum_\alpha \frac{q_\alpha[{\bf r}-({\bf r}_{\rm A}+\hat{\bf r}_\alpha)]}
{4\pi\varepsilon_0|{\bf r}-({\bf r}_{\rm A}+\hat{\bf r}_\alpha)|^3}\,,\end{aligned}$$ where $$\label{E14}
\hat{\bf E}_{\rm M}({\bf r})
= \int_0^{\infty}{\,\mathrm{d}}\omega\,
\underline{\hat{\bf E}}({\bf r},\omega) + {\rm H.c.},$$ and the polarization field associated with the dielectric medium is given by $$\label{E15}
\hat{\bf P}_{\rm M}({\bf r})
= \int_0^{\infty}{\,\mathrm{d}}\omega\,\underline{\hat{\bf P}}
({\bf r},\omega)+{\rm H.c.},$$ where $$\label{E16}
\underline{\hat{\bf P}}({\bf r},\omega)
= \chi({\bf r},\omega)\varepsilon_0
\underline{\hat{\bf E}}({\bf r},\omega)
+ \hat{\bf P}_{\rm N}({\bf r},\omega).$$ Here, $$\label{E17}
\chi({\bf r},\omega) = \varepsilon({\bf r},\omega)-1$$ is the dielectric susceptibility and $$\label{E18}
\hat{\bf P}_{\rm N}({\bf r},\omega)
=i\sqrt{\frac{\hbar\varepsilon_0}{\pi}\,
{\rm Im}\,\varepsilon({\bf r},\omega)}
\,\hat{\bf f}({\bf r},\omega)$$ is the so-called noise polarization.
For the following it is convenient to decompose the Hamiltonian (\[E3\]) as $$\label{E23}
\hat{H} = \hat{H}_{\rm F} + \hat{H}_{\rm A} + \hat{H}_{\rm AF},$$ where $$\label{E24}
\hat{H}_{\rm F} \equiv \int_0^{\infty}{\,\mathrm{d}}\omega\,\hbar\omega
\int{\,\mathrm{d}}^3 {r}\,
\hat{\bf f}^{\dagger}({\bf r},\omega)
\hat{\bf f}({\bf r},\omega),$$ $$\begin{aligned}
\label{E25}
\hat{H}_{\rm A}
\hspace{-1ex}&\equiv&\hspace{-1ex}
\sum_{\alpha}
\frac{\hat{{\bf p}}_{\alpha}^2}{2m_{\alpha}}
+\textstyle{\frac{1}{2}}\int {\rm d}^3 r\,
\hat{\rho}_{\rm A}({\bf r})\hat{\varphi}_{\rm A}({\bf r})
\nonumber\\[1ex]
\hspace{-1ex}&=&\hspace{-1ex}
\sum_{\alpha}
\frac{\hat{{\bf p}}_{\alpha}^2}{2m_{\alpha}}
+\sum_{\alpha<\beta} \frac{q_{\alpha}q_{\beta}}
{4\pi\varepsilon_0\left|\hat{{\bf r}}_{\alpha}
-\hat{{\bf r}}_{\beta}\right|}\,,\end{aligned}$$ $$\begin{aligned}
\label{E26}
\hat{H}_{\rm AF} &\equiv&
\int{\,\mathrm{d}}^3{r}\,\hat{\rho}_A({\bf r})
\hat{\varphi}_{\rm M}({\bf r})-\sum_{\alpha}
\frac{q_{\alpha}}{m_{\alpha}}\,
\hat{\bf A}({{\bf r}_{\rm A}}+\hat{\bf r}_{\alpha})
\nonumber\\&&\times
\left[\hat{\bf p}_{\alpha}
-\textstyle{\frac{1}{2}}q_{\alpha}
\hat{\bf A}({{\bf r}_{\rm A}}+\hat{\bf r}_{\alpha})\right].
\quad\end{aligned}$$ Obviously, $\hat{H}_{\rm F}$ is the Hamiltonian of the medium-assisted electromagnetic field, $\hat{H}_{\rm A}$ is the Hamiltonian of the atomic system with eigenstates ${|n\rangle}$ and eigenvalues $E_n$ according to $$\label{E27}
\hat{H}_{\rm A}{|n\rangle}=E_n{|n\rangle},$$ and $\hat{H}_{\rm AF}$ is the interaction energy between them. For a neutral atomic system in the electric-dipole approximation, the latter simplifies to $$\label{E28}
\hat{H}_{\rm AF} = \hat{H}_{\rm AF}^{\rm (I)}
+ \hat{H}_{\rm AF}^{\rm (II)}\,,$$ $$\label{E29}
\hat{H}_{\rm AF}^{\rm (I)} \equiv
-\sum_{\alpha}\frac{q_{\alpha}}{m_{\alpha}}
\,\hat{\bf p}_{\alpha}\hat{\bf A}({{\bf r}_{\rm A}})
+\hat{\bf d}\bm{\nabla}\hat{\varphi}_{\rm M}({{\bf r}_{\rm A}}),$$ $$\label{E30}
\hat{H}_{\rm AF}^{\rm (II)} \equiv
\sum_{\alpha}\frac{q_{\alpha}^2}{2m_{\alpha}}
\,\hat{\bf A}^2({{\bf r}_{\rm A}}),$$ where $\hat{\bf d}$ is the electric dipole operator of the atomic system $$\label{E31}
\hat{\bf d}= \sum_{\alpha}q_{\alpha}\hat{\bf r}_{\alpha}.$$
The vdW energy in the minimal-coupling scheme {#Subsec:vdW_mincf}
---------------------------------------------
Following the original line of Casimir and Polder [@CasimirPolder], we calculate the vdW energy of an atomic system as the position-dependent part of the leading-order correction to the unperturbed ground state energy due to the perturbation according to Eq. (\[E28\]) \[together with Eqs. (\[E29\]) and (\[E30\])\]. Since the diagonal matrix elements of $\hat{H}_{\rm AF}^{\rm (I)}$, Eq. (\[E29\]), are zero, the non-vanishing contribution to the energy correction from first-order perturbation theory is due to $\hat{H}_{\rm AF}^{\rm (II)}$, Eq. (\[E30\]). Inspection of Eq. (\[E30\]) shows that this contribution is of linear order in the fine structure constant $\alpha$. We thus have to include the contribution from second-order perturbation theory which is of the same order in $\alpha$. Therefore, we apply first-order perturbation theory for $\hat{H}_{\rm AF}^{\rm (II)}$ and second-order perturbation theory for $\hat{H}_{\rm AF}^{\rm (I)}$. The energy correction to the ground state thus reads $$\label{E32}
\Delta E \simeq \Delta_1 E + \Delta_2 E,$$ where $$\begin{aligned}
\label{E33}
\Delta_1 E
&\equiv&{\langle 0 |}{\langle \{0\} |}
\hat{H}_{AF}^{\rm (II)}{|\{0\}\rangle}{|0\rangle}
\nonumber \\[1ex]
&=&{\langle 0 |}{\langle \{0\} |}
\sum_{\alpha}\frac{q_{\alpha}^2}{2m_{\alpha}}
\,\hat{\bf A}^2({{\bf r}_{\rm A}})
{|\{0\}\rangle}{|0\rangle}\end{aligned}$$ and $$\begin{aligned}
\label{E34}
\lefteqn{
\Delta_2 E
\equiv
\sum_n\int_0^{\infty}\!\!{\,\mathrm{d}}\omega\!\int\!{\,\mathrm{d}}^3{r}
\;\frac{|{\langle 0 |}{\langle \{0\} |}
\hat{H}_{\rm AF}^{\rm (I)}
{|\{{\bf 1}({\bf r},\omega)\}\rangle}{|n\rangle}|^2}
{E_0-(E_n+\hbar\omega)}
}
\nonumber\\[1ex]&&
=\frac{1}{\hbar}\sum_n\int_0^{\infty}\!\!
\frac{{\,\mathrm{d}}\omega}{\omega_n+\omega} \int\!{\,\mathrm{d}}^3{r}
\,\Bigl
|{\langle 0 |}{\langle \{0\} |}\sum_{\alpha}\frac{q_{\alpha}}{m_{\alpha}}
\hat{\bf p}_{\alpha}\hat{\bf A}({{\bf r}_{\rm A}})
\nonumber\\[1ex]&&\hspace{10ex}
-\,\hat{\bf d}\bm{\nabla}\hat{\varphi}_{\rm M}({{\bf r}_{\rm A}})
{|\{{\bf 1}({\bf r},\omega)\}\rangle}{|n\rangle}
\Bigr
|^2.\end{aligned}$$ Here, $|0\rangle$ and $|\{0\}\rangle$ are respectively the ground state of the atomic system and the ground state of the medium-assisted electromagnetic field, and $$\label{E35}
|\{{\bf 1}({\bf r},\omega)\}\rangle
\equiv \hat{\bf f}{^{\dagger}}({\bf r},\omega)|\{0\}\rangle$$ denotes single-quantum Fock states of the fundamental fields. Further, $$\label{E36}
\omega_n = (E_n-E_0)/\hbar$$ are the transition frequencies between the excited atomic states and the ground state. Note that due to the linear dependence of the vector potential and the gradient of the scalar potential on $\hat{\bf f}({\bf r},\omega)$ \[and $\hat{\bf f}^\dagger({\bf r},
\omega)$\], the nonvanishing matrix elements of $\hat{H}_{\rm AF}^{\rm (I)}$ in the Fock-state basis (which is defined as the set of eigenvectors of $\hat{H}_{\rm F}$) are those between Fock states that just differ in one quantum.
Expressing in Eq. (\[E33\]) $\hat{\bf A}({{\bf r}_{\rm A}})$ in terms of $\hat{\bf f}({\bf r})$ \[and $\hat{\bf f}^\dagger({\bf r})$\], on using Eqs. (\[E8\]), (\[E10\]), and (\[E11\]), recalling Eq. (\[E35\]), and exploiting the integral relation (\[E22\]) for the contraction of two Green tensors, we derive after some lengthy but straightforward calculation $$\label{E37}
\Delta_1 E =
\frac{\hbar\mu_0}{\pi}\sum_{\alpha}
\frac{q_{\alpha}^2}{2m_{\alpha}}
\int_0^{\infty}{\,\mathrm{d}}\omega
{\,\mathrm{Im}}^\perp G^\perp_{ii}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega),$$ where $$\label{E38}
\begin{split}
&^{\perp(\parallel)}\bm{G}^{\perp(\parallel)}({\bf r},{\bf r}',\omega)
\\&\quad
\equiv \!\int\!{\,\mathrm{d}}^3{s}\!\int\!{\,\mathrm{d}}^3{s}'\,
\bm{\delta}^{\perp(\parallel)}({\bf r}-{\bf s})
\bm{G}({\bf s},{\bf s}',\omega)
\bm{\delta}^{\perp(\parallel)}({\bf s'}-{\bf r}').
\quad
\end{split}$$ In Eq. (\[E37\]) and below, summation over repeated vector indices is understood. Using the sum rule $$\label{E39}
\sum_{\alpha}\frac{q_{\alpha}^2}{2m_{\alpha}}\delta_{ij}
= \frac{1}{2\hbar}\sum_n\omega_n
(d_{0n,i} d_{n0,j} + d_{0n,j} d_{n0,i})$$ (for a proof, see Appendix \[Sec:sumrule\]), where $$\label{E42}
d_{0n,i} \equiv \langle 0| \hat{d}_i |n\rangle,$$ denote the matrix elements of the electric-dipole operator (\[E31\]), we may equivalently represent Eq. (\[E37\]) in the form of $$\begin{aligned}
\label{E40}
\lefteqn{
\Delta_1 E =
\frac{\hbar\mu_0}{\pi}\sum_{\alpha}
\frac{q_{\alpha}^2}{2m_{\alpha}} \delta_{ij}
\int_0^{\infty}{\,\mathrm{d}}\omega
\,{\,\mathrm{Im}}^\perp G^\perp_{ij}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)
}
\nonumber\\[1ex]&&
=\frac{\mu_0}{\pi}
\sum_n \int_0^{\infty}{\,\mathrm{d}}\omega\, \omega_n
{\bf d}_{0n}
{\,\mathrm{Im}}^{\perp}\bm{G}^{\perp}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)
{\bf d}_{n0}.
\qquad\end{aligned}$$ In a similar fashion, we find that Eq. (\[E34\]) leads to $$\label{E41}
\begin{split}
&\Delta_2 E
=
- \frac{\mu_0}{\pi}\sum_n
\int_0^{\infty}{\,\mathrm{d}}\omega\,
\biggl[
\frac{\omega_n^2}{\omega_n+\omega}
\int{\,\mathrm{d}}^3{s}
\\[1ex]
&\times\int{\,\mathrm{d}}^3{s}'
\,{\bf d}_{0n} \bm{\mu}({\bf s},\omega)
{\rm Im}\,\bm{G} ({\bf s},{\bf s}',\omega) \bm{\mu}({\bf s}',\omega)
{\bf d}_{n0}\biggr],
\end{split}$$ where, on using the relation (\[A3\]), the matrix elements of the electric-dipole moment (\[E42\]) have been introduced and the abbreviating notation $$\label{E43}
\bm{\mu}({\bf r},\omega) \equiv
\bm{\delta}^\perp({\bf r}-{{\bf r}_{\rm A}})
- \frac{\omega}{\omega_n}\,
\bm{\delta}^\parallel({\bf r}-{{\bf r}_{\rm A}})$$ has been used.
We substitute Eqs. (\[E40\]) and (\[E41\]) into Eq. (\[E32\]), and obtain, on recalling Eq. (\[E43\]), $$\begin{aligned}
\label{E44}
\lefteqn{
\Delta E =
-\frac{\mu_0}{\pi}
\sum_n \int_0^{\infty} \frac{{\,\mathrm{d}}\omega}{\omega_n+\omega}
\,{\bf d}_{0n}\Bigl\{\omega^2
{\,\mathrm{Im}}^\parallel\bm{G}^\parallel({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)
}\quad
\nonumber\\[1ex]&&\hspace{3ex}
-\,\omega_n\omega\bigl[
{\,\mathrm{Im}}^\perp\bm{G}^\perp({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)
+{\,\mathrm{Im}}^\perp\bm{G}^\parallel({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)
\nonumber\\[1ex]&&\hspace{3ex}
+\,{\,\mathrm{Im}}^\parallel\bm{G}^\perp({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},\omega) \bigr]
\Bigr\} {\bf d}_{n0}.\end{aligned}$$ Let ${\cal R}$ be the (small) region of space where the atom is situated and the permittivity can be regarded as being effectively not varying with space, i.e., $\varepsilon({\bf r},\omega)=\varepsilon({{\bf r}_{\rm A}},\omega)$ if ${\bf r}\in{\cal R}$. For ${\bf r},{\bf r}'\in{\cal R}$, the Green tensor can then be given in the form $$\label{E45}
\bm{G}({\bf r},{\bf r}',\omega)
=\bm{G}^{(0)}({\bf r},{\bf r}',\omega)
+\bm{G}^{(1)}({\bf r},{\bf r}',\omega),$$ with $\bm{G}^{(0)}({\bf r},{\bf r}',\omega)$ denoting the (translationally invariant) bulk Green tensor that corresponds to $\varepsilon({{\bf r}_{\rm A}},\omega)$ and $\bm{G}^{(1)}({\bf r},
{\bf r}',\omega)$ being the scattering Green tensor that accounts for the spatial variation of the permittivity. In practice, the atom is typically situated in a free-space region, so that $\bm{G}^{(0)}$ is simply the vacuum Green tensor. According to the decomposition of the Green tensor in Eq. (\[E45\]), the energy correction $\Delta E$ given by Eq. (\[E44\]) consists of two terms, $$\label{E46}
\Delta E = \Delta E^{(0)} + \Delta E^{(1)}({{\bf r}_{\rm A}}),$$ where the ${{\bf r}_{\rm A}}$-independent term $\Delta E^{(0)}$, which is related to the bulk Green tensor, gives rise to the (vacuum) Lamb shift, whereas the ${{\bf r}_{\rm A}}$-dependent term $\Delta E^{(1)}({{\bf r}_{\rm A}})$, which is related to the scattering Green tensor, is just the vdW energy sought: $$\begin{aligned}
\label{E47}
&&U({{\bf r}_{\rm A}})\equiv\Delta E^{(1)}({{\bf r}_{\rm A}})
= -\frac{\mu_0}{\pi}
\sum_n \int_0^{\infty} \frac{{\,\mathrm{d}}\omega}{\omega_n+\omega}
\,{\bf d}_{0n}
\nonumber\\[1ex]&&\quad\times
\Bigl\{\omega^2
{\,\mathrm{Im}}^\parallel\bm{G}^{(1)\parallel}({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},\omega)
-\,\omega_n\omega
\nonumber\\[1ex]&&\quad
\times\bigl[
{\,\mathrm{Im}}^\perp\bm{G}^{(1)\perp}({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},\omega)
+{\,\mathrm{Im}}^\perp\bm{G}^{(1)\parallel}({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},\omega)
\nonumber\\[1ex]&&\quad
+\,{\,\mathrm{Im}}^\parallel\bm{G}^{(1)\perp}({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},\omega) \bigr]
\Bigr\} {\bf d}_{n0}.\end{aligned}$$
To further evaluate this expression, it is convenient to express it in terms of the whole scattering Green tensor rather than its imaginary part. For this purpose, we write ${\,\mathrm{Im}}\,\bm{G}^{(1)}=\bigl(\bm{G}^{(1)}-\bm{G}^{(1)\ast}\bigr)/(2i)$, recall the relation (\[E20\]), and change the integration variable from $-\omega$ to $\omega$. Equation (\[E47\]) then changes to
$$\begin{aligned}
\label{E48}
\hspace{-3ex}
U({{\bf r}_{\rm A}})
&=& \frac{\mu_0}{2i\pi}
\sum_n {\bf d}_{0n}
\biggl(\int_0^{\infty}\frac{{\,\mathrm{d}}\omega}{\omega_n+\omega}
\Bigl\{ \omega_n\omega \bigl[
{}^\perp\bm{G}^{(1)\perp}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)
+ {}^\perp\bm{G}^{(1)\parallel}({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},\omega)
+ {}^\parallel\bm{G}^{(1)\perp}({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},\omega) \bigr]
\nonumber\\&&
-\omega^2
{}^\parallel\bm{G}^{(1)\parallel}({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},\omega)
\Bigr\}
+ \int^0_{-\infty}\frac{{\,\mathrm{d}}\omega}{\omega_n-\omega}
\Bigl\{ \omega_n\omega \bigl[
{}^\perp\bm{G}^{(1)\perp}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)
+ {}^\perp\bm{G}^{(1)\parallel}({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},\omega)
+ {}^\parallel\bm{G}^{(1)\perp}({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},\omega) \bigr]\nonumber\\
&&+\omega^2
{}^\parallel\bm{G}^{(1)\parallel}({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},\omega)
\Bigr\} \biggr) {\bf d}_{n0}\,.\end{aligned}$$
This equation can be greatly simplified by using contour-integral techniques. Note that $\bm{G}^{(1)}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)$ is an analytic function of $\omega$ in the upper complex half plane (${\,\mathrm{Im}}\,\omega>0$). Further, $^\perp\bm{G}^{(1)\perp}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)$, $^\perp\bm{G}^{(1)\|}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)$, and $^\|\bm{G}^{(1)\perp}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)$ tend to zero as $\omega$ approaches zero, because the zero-frequency asymptote of the Green tensor contains no transverse components \[cf. Eq. (\[B4\])\]. Finally, the term $\omega^2\,{^\|}\bm{G}^{(1)\|}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)$ is also well-behaved for vanishing $\omega$, as can be seen from Eq. (\[B8\]). Consequently, the integrands of the $\omega$-integrals in Eq. (\[E48\]) are analytic functions without poles in the whole upper complex half plane, including the real axis. We may therefore apply Cauchy’s theorem, and replace the integral over the positive (negative) real half axis by a contour integral along the positive imaginary half axis (introducing the purely imaginary coordinate $\omega=iu$, $u$ real) and along a quarter circle with infinite radius in the first (second) quadrant of the complex frequency plane. Since the integrals along the infinitely large quarter circles vanish \[cf. Eq. (\[B3\])\], we finally arrive at $$\label{E49}
U({{\bf r}_{\rm A}})
= \frac{\mu_0}{\pi}
\sum_n
\int_0^{\infty}\!\! {\,\mathrm{d}}u\,
\frac{\omega_n u^2}{\omega_n^2 + u^2}
{\bf d}_{0n} \bm{G}^{(1)}({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},iu){\bf d}_{n0}\,,$$ where the identity $\bm{G}^{(1)}$ $\!=$ $\!{}^{\perp}\bm{G}^{(1)\perp}$ $\!+$ $\!{}^{\perp}\bm{G}^{(1)\parallel}$ $\!+$ $\!{}^{\parallel}\bm{G}^{(1)\perp}$ $\!+$ $\!{}^{\parallel}\bm{G}^{(1)\parallel}$ has been taken into account.
Introducing the (lowest-order) ground-state polarizability tensor $$\label{E50}
\bm{\alpha}(\omega)= \lim_{\eta\to 0+}
\frac{2}{\hbar}\sum_n
\frac{\omega_n}{\omega_n^2-\omega^2-i\eta\omega}
\,{\bf d}_{0n}\otimes{\bf d}_{n0}$$ of the atomic system (see, e.g., [@Davydov]), we may represent Eq. (\[E49\]) in the equivalent form of $$\label{E51}
U({{\bf r}_{\rm A}})
= \frac{\hbar\mu_0}{2\pi}
\int_0^{\infty} {\,\mathrm{d}}u \,u^2 \alpha_{ij}(iu)
\,G^{(1)}_{ij}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},iu).$$ It is worth noting that Eq. (\[E51\]) directly follows from QED in causal media, without the need of additional assumptions borrowed from other fields. Equation (\[E51\]) expresses the vdW potential of an arbitrary atomic system (such as an atom or a molecule) in the presence of an arbitrary configuration of dispersing and absorbing macroscopic dielectric bodies in terms of the polarizability tensor of the atomic system in lowest order of perturbation theory and the scattering Green tensor of the macroscopic Maxwell equations.
In particular for an atom, one can make use of the spherical symmetry and reduce Eq. (\[E51\]) to $$\label{E52}
U({{\bf r}_{\rm A}})
= \frac{\hbar\mu_0}{2\pi}
\int_0^{\infty} {\,\mathrm{d}}u \,u^2 \alpha(iu)
\,G^{(1)}_{ii}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},iu),$$ where $$\label{E53}
\alpha(\omega) =
\lim_{\eta\to 0+}
\frac{2}{3\hbar}\sum_n
\frac{\omega_n}{\omega_n^2-\omega^2-i\eta\omega}
\,|{\bf d}_{0n}|^2.$$ This result agrees with the results inferred from classical linear response theory. Note that the field susceptibility introduced in Refs. [@McLachlan] and [@WylieSipe] differs from the scattering Green tensor by a factor of $\omega^2$. Needless to say, in the special case of a two-level atom Eq. (\[E52\]) reduces to the result given, e.g., in Ref. [@Argawal2]. The derivation of Eq. (\[E52\]) shows that it can be regarded as the natural extension of the QED results obtained on the basis of the normal-mode formalism, which ignores material absorption.
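To indicate how Eq. (\[E52\]) together with the polarizability (\[E53\]) can be evaluated numerically, the sketch below performs the quadrature over imaginary frequencies. It is only an illustration: `trace_G1` is a hypothetical placeholder for the trace of the scattering Green tensor $G^{(1)}_{ii}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},iu)$ of the geometry under consideration (for instance the sphere treated in the next section), and the transition frequencies and dipole matrix elements are assumed to be supplied in SI units.

```python
import numpy as np
from scipy.integrate import quad

HBAR = 1.054571817e-34          # reduced Planck constant [J s]
MU0 = 4e-7 * np.pi              # vacuum permeability [N A^-2]

def alpha_iu(u, omegas, dipoles):
    """Ground-state polarizability alpha(iu) of Eq. (E53) at imaginary
    frequency omega = iu; the limit eta -> 0 is unproblematic here since
    omega_n^2 + u^2 > 0."""
    omegas = np.asarray(omegas, dtype=float)
    d2 = np.abs(np.asarray(dipoles, dtype=float)) ** 2
    return (2.0 / (3.0 * HBAR)) * np.sum(omegas * d2 / (omegas ** 2 + u ** 2))

def vdw_potential(trace_G1, omegas, dipoles, u_max=np.inf):
    """Numerical sketch of Eq. (E52),
        U(r_A) = (hbar mu_0 / 2 pi) int_0^infty du u^2 alpha(iu) G^(1)_ii(r_A, r_A, iu),
    where trace_G1(u) is a user-supplied placeholder returning the trace of
    the scattering Green tensor at imaginary frequency iu for the chosen
    geometry and atomic position."""
    integrand = lambda u: u ** 2 * alpha_iu(u, omegas, dipoles) * trace_G1(u)
    value, _ = quad(integrand, 0.0, u_max, limit=200)
    return HBAR * MU0 / (2.0 * np.pi) * value
```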
The vdW energy in the multipolar-coupling scheme {#Subsec:vdW_mulcf}
------------------------------------------------
Let us turn to the multipolar-coupling scheme widely used for studying the interaction of electromagnetic fields with atoms and molecules. Just as in standard QED, so in the present formalism the multipolar-coupling Hamiltonian can be obtained from the minimal-coupling Hamiltonian by means of a Power–Zienau transformation, $$\label{E54}
\hat{\cal{H}} = \hat{U}^\dagger \hat{H} \hat{U},$$ where $$\begin{aligned}
\label{E55}
\hat{U} &=&
\exp\!\left[
\frac{i}{\hbar}\int\! {\rm d}^3{r}\,
\hat{{\bf P}}_{\rm A}({\bf r})\hat{\bf A}({\bf r})\right]\end{aligned}$$ with $$\label{E56}
\hat{\bf P}_{\rm A}({\bf r}) = \sum_{\alpha}
q_{\alpha}\hat{\bf r}_{\alpha}
\int_0^1 {\rm d}\lambda \,
\delta\!\left[{\bf r}\!-\!\left({{\bf r}_{\rm A}}\!+\!\lambda
\hat{\bf r}_{\alpha} \right)\right]$$ being the polarization associated with the (neutral) atomic system. Using $\hat{H}$ from Eq. (\[E3\]), we derive [@Knoll01; @Ho02] $$\begin{aligned}
\label{E57}
\lefteqn{
\hat{\cal H} = \int\! {\rm d}^3{r} \int_0^\infty\! {\rm d}\omega
\,\hbar\omega\,\hat{\bf f}^{\dagger}({\bf r},\omega)
{}\hat{\bf f}({\bf r},\omega)
+ \sum_{\alpha}\frac{1}{2m_{\alpha}}
\bigg\{\hat{{\bf p}}_{\alpha}
}
\nonumber\\[1ex]&& +\, q_{\alpha}\int_0^1\!{\rm d}\lambda\,\lambda
\hat{\bf r}_{\alpha}
\times
\hat{\bf B}\left[ {{\bf r}_{\rm A}}\!+\!\lambda
\hat{\bf r}_{\alpha}
\right]
\bigg\}^2
\nonumber\\[1ex]&& +\,\frac{1}{2\varepsilon_0}
\!\int\! {\rm d}^3{r} \,
\hat{\bf P}_{\rm A}({\bf r}) \hat{\bf P}_{\rm A}({\bf r})
- \int\! {\rm d}^3{r}\,
\hat{{\bf P}}_{\rm A}({\bf r})
\hat{\bf E}_{\rm M}({\bf r}),
\quad
\end{aligned}$$ where $\hat{\bf B}({\bf r})=\bm{\nabla}\times\hat{\bf A}({\bf r})$, with $\hat{\bf A}({\bf r})$ from Eq. (\[E8\]) \[together with Eqs. (\[E10\]) and (\[E11\])\], and $\hat{\bf E}_{\rm M}({\bf r})$ is defined by Eq. (\[E14\]) \[together with Eq. (\[E11\])\]. Note that in the multipolar-coupling scheme the operator of the electric field strength is defined according to $$\begin{aligned}
\label{E58}
\hat{\bf E}({\bf r})
&=& - \frac{1}{i\hbar}
\left[\hat{\bf A}({\bf r}),\hat{\cal H}\right]
- \bm{\nabla} \hat{\varphi}_{\rm M}({\bf r})
- \bm{\nabla} \hat{\varphi}_{\rm A}({\bf r})
\nonumber\\[1ex]
&=& \hat{\bf E}_{\rm M}({\bf r})
- \frac{1}{\varepsilon_0}\,\hat{\bf P}_{\rm A}({\bf r}),\end{aligned}$$ i.e., $$\begin{aligned}
\label{E59}
\varepsilon_0
\hat{\bf E}_{\rm M}({\bf r})
= \varepsilon_0
\,\,\hat{\bf E}({\bf r})
+ \hat{{\bf P}}_{\rm A}({\bf r}).\end{aligned}$$ Hence, $\varepsilon_0\hat{\bf E}_{\rm M}({\bf r})$ has the meaning of the displacement field with respect to the polarization of the atomic system.
In the electric-dipole approximation, Eq. (\[E57\]) simplifies to $$\label{E60}
\hat{\cal H} = \hat{\cal H}_{\rm F}
+ \hat{\cal H}_{\rm A} + \hat{\cal H}_{\rm AF}\,,$$ where $$\label{E61}
\hat{\cal H}_{\rm F}
= \int {\rm d}^3{r} \int_0^\infty {\rm d} \omega
\,\hbar\omega\,\hat{\bf f}^{\dagger}({\bf r},\omega)
\hat{\bf f}({\bf r},\omega)$$ and $$\begin{aligned}
\label{E62}
\hat{\cal H}_{\rm A} = \sum_{\alpha}
\frac{\hat{{\bf p}}_{\alpha}^2}{2m_{\alpha}}
+ \frac{1}{2\varepsilon_0}
\int {\rm d}^3{r} \,
\hat{\bf P}_{\rm A}({\bf r}) \hat{\bf P}_{\rm A}({\bf r}),
\end{aligned}$$ respectively, are the unperturbed Hamiltonians of the medium-assisted electromagnetic field and the atomic system, and $$\label{E63}
\hat{\cal H}_{\rm AF}
= - \hat{\bf d}\hat{\bf E}_{\rm M}({{\bf r}_{\rm A}})$$ is the interaction energy between them, where $\hat{\bf d}$ is the atomic dipole operator given by Eq. (\[E31\]). Recall that $\hat{\bf E}_{\rm M}({{\bf r}_{\rm A}})$ must be thought of a being expressed in terms of the fundamental field variables $\hat{\bf f}({{\bf r}_{\rm A}})$ \[and $\hat{\bf f}^\dagger({{\bf r}_{\rm A}})$\]. Comparing Eq. (\[E62\]) with Eq. (\[E25\]), we see that, on taking into account the the relationship $$\label{E63-1}
{\textstyle\frac{1}{2}}\int {\rm d}^3 r\,
\hat{\rho}_{\rm A}({\bf r})\hat{\varphi}_{\rm A}({\bf r})
= \frac{1}{2\varepsilon_0}\int {\rm d}^3 r\,
\hat{\bf P}^\parallel_{\rm A}({\bf r}) \hat{\bf P}^\parallel_{\rm A}({\bf r}),$$ the atomic Hamiltonians $\hat{\cal H}_{\rm A}$ and $\hat{H}_{\rm A}$ are different from each other, so that the solution of the eigenvalue problem $$\label{E27a}
\hat{\cal H}_{\rm A}{|n'\rangle}=E_n'{|n'\rangle}$$ may be different from that defined by Eq. (\[E27\]). Keeping in mind this difference, we drop the primes denoting the atomic eigenvalues and eigenstates from here on.
In contrast to the interaction energy in the minimal-coupling scheme, Eq. (\[E28\]), the interaction energy in the multipolar-coupling scheme, Eq. (\[E63\]), is linear in $\hat{\bf f}({{\bf r}_{\rm A}})$ and $\hat{\bf f}^\dagger({{\bf r}_{\rm A}})$. As a consequence of the latter, there is no first-order correction to the ground state energy. We thus have $$\label{E64}
\Delta E \simeq \Delta_2 E,$$ where $$\begin{aligned}
\label{E65}
\Delta_2 E &=& \sum_n\int_0^{\infty}{\,\mathrm{d}}\omega\int{\,\mathrm{d}}^3{r}
\nonumber\\&&\times
\;\frac{|{\langle 0 |}{\langle \{0\} |}
\hat{\bf d}\hat{\bf E}_{\rm M}({\bf r})
{|\{{\bf 1}({\bf r},\omega)\}\rangle}{|n\rangle}|^2}
{E_0-(E_n+\hbar\omega)}
\quad\end{aligned}$$ \[cf. the first line in Eq. (\[E34\]) with $\hat{\cal H}_{\rm AF}$ instead of $\hat{H}_{\rm AF}^{\rm (I)}$\]. In complete analogy to the derivation of Eq. (\[E41\]), we find that $$\label{E67}
\Delta_2 E
= - \frac{\mu_0}{\pi}\sum_n
\int_0^{\infty}\!\!{\,\mathrm{d}}\omega\,
\frac{\omega^2}{\omega_n+\omega}
\,{\bf d}_{0n}
{\rm Im}\,\bm{G} ({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)
{\bf d}_{n0}\,.$$
To further evaluate the ${{\bf r}_{\rm A}}$-dependent part $U({\bf r}_{\rm A})$ $\!=$ $\!\Delta_2^{(1)}E({\bf r}_{\rm A})$ of $\Delta_2 E$, which results from the scattering part of the Green tensor, $\bm{G}^{(1)}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)$, and gives the vdW energy, we again write ${\rm Im}\,\bm{G}^{(1)}$ $\!=$ $\!\bigl(\bm{G}^{(1)}-\bm{G}^{(1)\ast}\bigr)/(2i)$, use the relation (\[E20\]), and change the integration variable from $-\omega$ to $\omega$. After some algebra we arrive at $$\begin{aligned}
\label{E68}
U({{\bf r}_{\rm A}})
&=& -\frac{\mu_0}{2i\pi}
\sum_n {\bf d}_{0n}
\biggl[\int_0^{\infty}{\,\mathrm{d}}\omega
\frac{\omega^2}{\omega_n+\omega}
\,\bm{G}^{(1)}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)
\nonumber\\&&
- \int^0_{-\infty}{\,\mathrm{d}}\omega
\frac{\omega^2}{\omega_n-\omega}
\,\bm{G}^{(1)}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},\omega)
\biggr] {\bf d}_{n0}\,.
\qquad\end{aligned}$$ As we already know, the integrands of the two frequency integrals appearing in Eq. (\[E68\]) are analytic functions in the upper half of the complex frequency plane, including the real axis \[cf. Eq. (\[B8\])\]. We therefore can apply contour integral techniques in a similar way as in the derivation of Eq. (\[E49\]) from Eq. (\[E48\]). It is not difficult to see that the result reads $$\label{E69}
U({{\bf r}_{\rm A}})
= \frac{\mu_0}{\pi}
\sum_n
\int_0^{\infty}\!\! {\,\mathrm{d}}u\,
\frac{\omega_n u^2}{\omega_n^2 + u^2}
{\bf d}_{0n} \bm{G}^{(1)}({{\bf r}_{\rm A}},
{{\bf r}_{\rm A}},iu){\bf d}_{n0}\,,$$ which has exactly the same form as the minimal-coupling result (\[E49\]), so that it can also be given in the form of Eq. (\[E51\]). Recall that the values of $\omega_n$ and ${\bf d}_{0n}$ obtained in the minimal-coupling scheme may be different from those obtained in the multipolar-coupling scheme, because of the somewhat different eigenvalue equations (\[E27\]) and (\[E27a\]).
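As a brief sketch of that contour step (our shorthand; the careful justification parallels the derivation of Eq. (\[E49\])): rotating both frequency integrals in Eq. (\[E68\]) onto the positive imaginary axis, $\omega$ $\!=$ $\!iu$, with the quarter-circle contributions vanishing by virtue of Eq. (\[B3\]), the square bracket becomes $$-i\int_0^{\infty}{\,\mathrm{d}}u\, u^2\left(\frac{1}{\omega_n+iu}+\frac{1}{\omega_n-iu}\right)\bm{G}^{(1)}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},iu)
=-2i\int_0^{\infty}{\,\mathrm{d}}u\,\frac{\omega_n u^2}{\omega_n^2+u^2}\,\bm{G}^{(1)}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},iu),$$ which, multiplied by the prefactor $-\mu_0/(2i\pi)$ in Eq. (\[E68\]), reproduces Eq. (\[E69\]).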
Application: an atom near a sphere {#Sec:Appl}
==================================
Let us apply the theory to an atom near a dispersing and absorbing dielectric (micro-)sphere surrounded by vacuum. The material of the sphere of radius $R$ is assumed to be homogeneous and isotropic, having a permittivity $\varepsilon(\omega)$. The coordinate system is chosen such that its origin lies at the center of the sphere. The scattering Green tensor can be given by [@Li94] $$\begin{aligned}
\label{E70}
\lefteqn{
\bm{G}^{(1)}({{\bf r}_{\rm A}},{{\bf r}_{\rm A}},iu)
=\frac{u}{4\pi c}\sum_{n=1}^{\infty}
\sum_{m=0}^{n}
\!\left(2-\delta_{m0}\right)
\frac{2n+1}{n(n+1)}
}
\nonumber\\[1ex]&&\times\;
\frac{(n-m)!}{(n+m)!}
\biggl[B^M_n\sum_{p=-1,1}{\bf M}_{nm,p}({{\bf r}_{\rm A}})
\otimes{\bf M}_{nm,p}({{\bf r}_{\rm A}})
\nonumber\\[1ex]&&\hspace{4ex}
+\,B^N_n\sum_{p=-1,1}{\bf N}_{nm,p}({{\bf r}_{\rm A}})
\otimes{\bf N}_{nm,p}({{\bf r}_{\rm A}})\biggr],\end{aligned}$$ where ${\bf M}_{nm,p}({{\bf r}_{\rm A}})$ and ${\bf N}_{nm,p}
({{\bf r}_{\rm A}})$ are even and odd spherical wave vector functions, which can be expressed in terms of spherical Hankel functions of the first kind, $h_n^{(1)}(r)$, and associated Legendre functions, $P_n^m(\cos\theta)$, in the following way: $$\begin{aligned}
\label{E71}
\lefteqn{
{\bf M}_{nm,\pm1}({{\bf r}_{\rm A}})
=\mp\frac{m}{\sin(\theta)}\,h_n^{(1)}(k_0 r_{\rm A}) P_n^m(\cos\theta)
}
\nonumber\\[1ex]&&\times\;
\begin{array}{c}\sin\\
\cos
\end{array}
(m\phi){\bf e}_{\theta}
-h_n^{(1)}(k_0 r_{\rm A})\,
\frac{{\,\mathrm{d}}P_n^m(\cos\theta)}{{\,\mathrm{d}}\theta}\,
\begin{array}{c}\cos\\
\sin
\end{array}
(m\phi){\bf e}_{\phi},
\nonumber\\&&
\\
\label{E72}
\lefteqn{
{\bf N}_{nm,\pm1}({{\bf r}_{\rm A}})
=n(n+1)\,\frac{h_n^{(1)}(k_0 r_{\rm A})}{k_0 r_{\rm A}}\,P_n^m(\cos\theta)
}
\nonumber\\&&\times\;
\begin{array}{c}
\cos\\
\sin
\end{array}(m\phi){\bf e}_r
+\frac{1}{k_0 r_{\rm A}}
\frac{{\,\mathrm{d}}[r_{\rm A}h_n^{(1)}(k_0 r_{\rm A})]}{{\,\mathrm{d}}r_{\rm A}}
\biggl[\frac{{\,\mathrm{d}}P_n^m(\cos\theta)}{{\,\mathrm{d}}\theta}
\nonumber\\&&\times\;
\begin{array}{c}
\cos\\
\sin
\end{array}
(m\phi){\bf e}_{\theta}
\mp\frac{m}{\sin\theta}\,P_n^m(\cos\theta)
\begin{array}{c}
\sin\\
\cos
\end{array}
(m\phi){\bf e}_{\phi}\biggr].\end{aligned}$$ Here, $k_0$ $\!=$ $\!iu/c$ is the vacuum wave number, and ${\bf e}_r$, ${\bf e}_{\theta}$, ${\bf e}_{\phi}$ are the mutually orthogonal unit vectors pointing in the directions of $r$, $\theta$, and $\phi$, respectively. The coefficients $B^M_n$ and $B^N_n$ in Eq. (\[E70\]) read $$\begin{aligned}
\label{E73}
\lefteqn{
B^M_n = B^M_n(iu)
}
\nonumber\\[1ex]&&
= - \frac
{\bigl[ z_1j_n(z_1)\bigr]' j_n(z_0)
- \bigl[ z_0j_n(z_0)\bigr]' j_n(z_1) }
{\bigl[ z_1j_n(z_1)\bigr]' h_n^{(1)}(z_0)
- \bigl[z_0 h_n^{(1)}(z_0)\bigr]' j_n(z_1) }\,,
\quad\end{aligned}$$ $$\begin{aligned}
\label{E74}
\lefteqn{
B^N_n = B^N_n(iu)
}
\nonumber\\[1ex]&&
= - \frac
{ \varepsilon(iu)
j_n(z_1) \bigl[z_0 j_n(z_0)\bigr]'
- j_n(z_0) \bigl[z_1 j_n(z_1)\bigr]' }
{ \varepsilon(iu)
j_n(z_1) \bigl[ z_0 h_n^{(1)}(z_0)\bigr]'
-h_n^{(1)}(z_0)\bigl[z_1 j_n(z_1)\bigr]' }\,,
\quad
\nonumber\\&&\end{aligned}$$ where $z_0$ $\!=$ $\!k_0R$ and $z_1$ $\!=$ $\!kR$, with $k$ $\!=$ $\!k_0\sqrt{\varepsilon(iu)}$ being the wave number inside the sphere, and $j_n(z)$ is the spherical Bessel function of the first kind. The primes indicate differentiations with respect to $z_0$ or $z_1$, respectively. The coefficients $B^M_n$ represent contributions from transverse electric (TE) waves reflected at the surface of the sphere, while the coefficients $B^N_n$ represent those from transverse magnetic (TM) waves.
Substituting the trace of $\bm{G}^{(1)}({\bf r}_{\rm A},
{\bf r}_{\rm A},\omega)$ from Eq. (\[E70\]) \[together with Eqs. (\[E71\]) and (\[E72\])\] into Eq. (\[E52\]) yields the vdW energy sought. The sums over $p$ can then easily be performed using the orthogonality of the unit vectors ${\bf e}_r$, ${\bf e}_{\theta}$, and ${\bf
e}_{\phi}$, and the sum over $m$ can be performed with the aid of the summation formulas in Appendix \[AppC\]. So after a lengthy, but straightforward calculation we arrive at the following result: $$\begin{aligned}
\label{E75}
\lefteqn{
U({\bf r}_{\rm A})
= - \frac{\hbar\mu_0}{8\pi^2 c}
\int_0^{\infty}{\,\mathrm{d}}u
\Biggl(
u^3 \alpha(iu)\sum_{n=1}^{\infty}(2n+1)
}
\nonumber\\&&\times\,
\Biggl\{ B^M_n \left[h^{(1)}_n(k_0 r_{\rm A})\right]^2
+ n(n+1)B^N_n \left[\frac{h^{(1)}_n(k_0 r_{\rm A})}
{k_0 r_{\rm A}}\right]^2
\nonumber\\&&\hspace{4ex}
+\,B^N_n \left[\frac{1}{k_0 r_{\rm A}}
\frac{{\,\mathrm{d}}[r_{\rm A} h^{(1)}_n(k_0 r_{\rm A})]}
{{\,\mathrm{d}}r_{\rm A}}\right]^2
\Biggr\}\Biggr).\end{aligned}$$ Note that the vdW potential does not depend on the angle variables of the atomic position, but only on the distance of the atom from the center of the sphere, as can be anticipated from the symmetry of the system. Recall that the terms proportional to $B^{M(N)}_n$ represent the contributions from the TE (TM) waves. Equation (\[E75\]) applies to an arbitrary dielectric sphere. In particular, when material absorption is omitted, then the result in Ref. [@MarvinToigo] can be recovered.
### Long-distance limit {#Long_range}
A detailed analysis of Eq. (\[E75\]) requires numerical computation. Here, however, we would like to focus our attention on two interesting limiting cases, where the atom is very far from or very close to the sphere. Let us first consider the limit of the atom being far away from the sphere, $$\label{E76}
r_{\rm A}\gg R.$$ In this case, Eq. (\[E75\]) reduces to $$\begin{aligned}
\label{E77}
\lefteqn{
U({\bf r}_{\rm A})
\simeq -\frac{\hbar cR^3}{4\pi^2\varepsilon_0}
\frac{1}{r_{\rm A}^7}
\int_0^\infty {\,\mathrm{d}}z\,
\alpha(icz/r_{\rm A})\,
\frac{\varepsilon(icz/r_{\rm A})-1}{\varepsilon(icz/r_{\rm A})+2}
}
\nonumber\\[1ex]&&\hspace{10ex}\times\,
\left[2\left(1+z\right)^2+\left(1+z+z^2\right)^2\right]e^{-2z}
\qquad\end{aligned}$$ (see Appendix \[AppD\]), where it turns out that the TE waves do not contribute. Since the inequalities $\alpha(icz/r_{\rm A})$ $\!>$ $\!0$ and $\varepsilon(icz/r_{\rm A})$ $\!>$ $\!1$ are valid, the vdW potential is negative, and the resulting force between the atom and the sphere is attractive.
As is seen from Eq. (\[E77\]), the main contribution to the integral comes from the region where $z$ $\!\lesssim$ $\!1$. Therefore, for sufficiently large distances, the contributions from small frequencies dominate, and we can (approximately) replace the atomic polarizability and the material permittivity in Eq. (\[E77\]) with their static values $\alpha^{(0)}$ $\!=$ $\!\alpha(0)$ and $\varepsilon^{(0)}$ $\!=$ $\!\varepsilon(0)$, respectively. The integration can then be performed in closed form to yield the asymptotic distance law $$\begin{aligned}
\label{E78}
U({\bf r}_{\rm A})
= -\frac{23\hbar cR^3\alpha^{(0)}}{16\pi^2\varepsilon_0}
\frac{\varepsilon^{(0)}-1}{\varepsilon^{(0)}+2}\,
\frac{1}{r_{\rm A}^7}
\quad
\left(\frac{r_A}{R}\to\infty\right),
\quad\end{aligned}$$ in this so-called retarded limit. Note that in the opposite nonretarded limit, where the contributions of $\alpha(\omega)$ and $\varepsilon(\omega)$ at all frequencies have to be retained, Eq. (\[E77\]) reduces to the result given in Ref. [@MarvinToigo], where a $r_{\rm A}^{-6}$ law was found. The (formal) limit $\varepsilon^{(0)}$ $\!\to$ $\!\infty$ in Eq. (\[E78\]) obviously corresponds to a metallic sphere $$\begin{aligned}
\label{E78.1}
U({\bf r}_{\rm A})
= -\frac{23\hbar cR^3\alpha^{(0)}}{16\pi^2\varepsilon_0}
\frac{1}{r_{\rm A}^7}
\quad
\left(\frac{r_A}{R}\to\infty\right).
\quad\end{aligned}$$ Note that the force decreases with the distance three powers faster than in the case of the atom being near a planar body.
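As a quick check of the numerical factor (not spelled out in the text), using $\int_0^{\infty}z^n e^{-2z}\,{\,\mathrm{d}}z$ $\!=$ $\!n!/2^{n+1}$ one finds $$\int_0^{\infty}\left[2(1+z)^2+(1+z+z^2)^2\right]e^{-2z}\,{\,\mathrm{d}}z
=\int_0^{\infty}\left(3+6z+5z^2+2z^3+z^4\right)e^{-2z}\,{\,\mathrm{d}}z
=\frac{23}{4}\,,$$ which, combined with the prefactor $1/(4\pi^2)$ in Eq. (\[E77\]), gives the factor $23/(16\pi^2)$ in Eqs. (\[E78\]) and (\[E78.1\]).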
In particular, if we introduce the static polarizability of the sphere (see, e.g., [@Jackson]) $$\label{E79}
\alpha_{\mathrm{sph}}^{(0)}=
4\pi\varepsilon_0
\,\frac{\varepsilon^{(0)}-1}{\varepsilon^{(0)}+2}\,R^3,$$ we may rewrite Eq. (\[E78\]) as $$\begin{aligned}
\label{E80}
U({\bf r}_{\rm A})
= -\frac{\alpha^{(0)}_{\mathrm{sph}}\alpha^{(0)}}
{(4\pi\varepsilon_0)^2}
\,\frac{23\hbar c}{4\pi}\, \frac{1}{r_{\rm A}^7}
\quad
\left(\frac{r_A}{R}\to\infty\right).
\quad\end{aligned}$$ Interestingly, Eq. (\[E80\]) also applies to the vdW potential between two atoms [@CasimirPolder], if the (static) polarizability of the sphere is replaced with the polarizability of the second atom.
### Short-distance limit {#Short_range}
Let us now proceed to the short-distance limit of the atom being located at a position very close to the sphere, i.e., $$\label{E81}
\frac{\Delta r_{\rm A}}{R} \ll 1$$ ($\Delta r_{\rm A}$ $\!\equiv$ $\!r_{\rm A}$ $\!-$ $\!R$). In this case, from Eq. (\[E75\]) it follows that $$\label{E82}
U({\bf r}_{\rm A})
\simeq
-\frac{\hbar}{16\pi^2\varepsilon_0}
\,\frac{1}{(\Delta r_{\rm A})^3}
\int_0^{\infty}\!{\,\mathrm{d}}u\,\alpha(iu)
\,\frac{\varepsilon(iu)-1}{\varepsilon(iu)+1}$$ (see Appendix \[AppE\]). Note that again the TE waves do not contribute to $U({\bf r}_{\rm A})$.
As expected, the dependence on distance of the (attractive) vdW potential corresponds to that obtained in the case of the atom being near a planar body. In fact, it exactly looks like that derived in Ref. [@Zhou] for an atom in the vicinity of a planar, semi-infinite, non-absorbing dielectric. In particular, if the (model) assumption $[\varepsilon(iu)$ $\!-$ $\!1]/[\varepsilon(iu)$ $\!+$ $\!1]$ $\!=$ $\!1$ were made for all values of $u$, then Eq. (\[E82\]) would lead, on using Eq. (\[E53\]), to the result [@CasimirPolder] $$\begin{aligned}
\label{E83}
U({\bf r}_{\rm A})
\simeq -\frac{\langle 0|\hat{\bf d}^2|0\rangle}
{48\pi\varepsilon_0}\,
\frac{1}{(\Delta r_{\rm A})^3}
\,.\end{aligned}$$
### Material absorption {#absorption}
To explore the effect of material absorption, we may assume a permittivity of Drude-Lorentz type, $$\label{E84}
\varepsilon(\omega)=1+\sum_l
\frac{\Omega_{l}^2}{\omega_{l}^2-\omega^2-i\omega\gamma_l}\,,$$ where $\omega_{l}$ and $\gamma_l$ are respectively the (transverse) resonance frequencies and the associated absorption constants, and the frequencies $\Omega_{l}$ are proportional to the so-called oscillator strengths. From Eq. (\[E84\]) it is seen that in the limit $\omega$ $\!\to$ $\!0$ the resulting static permittivity $$\label{E85}
\varepsilon^{(0)}= 1+\sum_l
\frac{\Omega_{l}^2}{\omega_{l}^2}$$ is independent of the absorption parameters. Since it is the static permittivity that enters Eq. (\[E78\]), we see that the long-distance asymptote of the vdW potential is not influenced by material absorption.
With decreasing distance the range of frequency that must be taken into account increases. Thus, the frequency response of the permittivity becomes crucial to the strength of the vdW force. Let us consider the short-distance law (\[E82\]). From Eq. (\[E84\]) it follows that $$\label{E86}
\frac{\partial}{\partial \gamma_l}
\frac{\varepsilon(iu)-1}{\varepsilon(iu)+1}
= - \frac{1}{[\varepsilon(iu)+1]^2}
\frac{2u\Omega_{l}^2}{(\omega_{l}^2+u^2+u\gamma_l)^2}\,,
$$ that is to say, $$\label{E87}
\frac{\partial}{\partial \gamma_l}
\frac{\varepsilon(iu)-1}{\varepsilon(iu)+1}
< 0
\quad\mbox{if}\quad
u > 0.$$ With regard to Eq. (\[E82\]), we therefore find that, on recalling that $\alpha(iu)$ $\!>$ $\!0$, $$\label{E88}
\frac{\partial}{\partial \gamma_l}
\left|\frac{\partial U({\bf r}_{\rm A})}{\partial r_{\rm A}}\right|
< 0.$$ Hence, the vdW force monotonically decreases with increasing absorption constants.
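The following small Python sketch merely illustrates this monotonicity numerically; it is not part of the analysis above, and the Drude-type permittivity, the single-resonance polarizability and all parameter values are illustrative assumptions (prefactors are dropped).

```python
import numpy as np
from scipy.integrate import quad

# Dimensionless illustration of Eqs. (E86)-(E88): the frequency integral entering
# the short-distance potential (E82) shrinks as the absorption constant grows.
# Frequencies are measured in units of Omega_l; all parameters are assumptions.
Omega, omega_n = 1.0, 0.7

def ratio(u, gamma):
    # [eps(iu)-1]/[eps(iu)+1] for a Drude-type eps(iu) = 1 + Omega^2/(u^2 + u*gamma),
    # rewritten so that the u -> 0 limit (= 1) causes no division by zero
    return Omega**2 / (2.0 * (u**2 + u * gamma) + Omega**2)

def integrand(u, gamma):
    alpha_iu = 1.0 / (omega_n**2 + u**2)   # single-resonance polarizability, prefactor dropped
    return alpha_iu * ratio(u, gamma)

for gamma in (1e-2, 1e-1, 1.0):
    value, _ = quad(integrand, 0.0, np.inf, args=(gamma,))
    print(f"gamma/Omega_l = {gamma:g}: integral = {value:.4f}")
```

Since the integrand decreases pointwise as $\gamma_l$ grows \[cf. Eq. (\[E87\])\], such a run shows the integral, and hence the magnitude of the short-distance force, shrinking with increasing absorption, in accordance with Eq. (\[E88\]).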
![ Absolute value of the normalized van der Waals force $C|\partial U({\bf r}_{\rm A})/\partial r_{\rm A} |$ \[$C$ $\!=$ $\!16\pi^2\varepsilon_0/(\sum_n|{\bf d}_{0n}|^2\lambda_l^4)$, $\lambda_l$ $\!=$ $\!2\pi c/\Omega_l$\] as a function of the atom-surface distance for various strengths of material absorption. In the calculation, a metallic permittivity of Drude type according to Eq. (\[E84\]) and a (degenerate) single-resonance atomic polarizability are assumed with , $\omega_n/\Omega_l$ $\!=$ $\!7\times10^{-1}$, and $\gamma_l/\Omega_l$ $\!=$ $\!10^{-2}$ (solid line), (dashed line), and $\gamma_l/\Omega_l$ $\!=$ $\!1$ (dotted line). []{data-label="force"}](Figure1.eps){width="1.\linewidth"}
Figure \[force\] illustrates the influence of material absorption on the vdW force acting on an atom located near a metallic sphere in the short-distance limit. It can be seen that the effect of material absorption increases with decreasing atom-surface distance. In particular, at a distance of $\Delta r_{\rm A}$ $\!\simeq$ $10^{-2}\lambda_l$, an increase of the relative absorption parameter from $\gamma_l/\Omega_l$ $\!=$ $\!10^{-2}$ to $\gamma_l/\Omega_l$ $\!=$ $\!1$ would reduce the magnitude of the force by nearly thirty percent.
Conclusions {#Sec:Concl}
===========
Within the frame of macroscopic QED, we have derived an expression for the vdW potential of an atomic system near an arbitrary configuration of dispersing and absorbing bodies. It generalizes the results obtained by means of normal-mode expansion and may be regarded as a foundation of the results inferred from linear response theory. We have performed the calculations for both the minimal-coupling scheme and the multipolar-coupling scheme and shown that the results are essentially the same.
We have applied the theory to the vdW interaction between an atom and a sphere. From the integral expression, we have derived the correct long-distance law corresponding to the retardation limit and recovered the short-distance law corresponding to the non-retardation limit. In particular, replacing in the long-distance law the polarizability of the sphere with that of an atom just yields the vdW potential between two atoms. On the other hand, for sufficiently small distances of the atom from the sphere the vdW potential approaches the potential observed for an atom near a planar body.
It is worth noting that in the long-distance limit it is the static permittivity that enters the vdW potential. Hence material absorption has no effect on it. However, with decreasing distance of the atom from the sphere the relevant frequencies extend over a finite (increasing) interval and material absorption becomes substantial, thereby diminishing the strength of the force.
In this article, we have restricted our attention to ground-state systems and calculated the vdW potential in lowest order of perturbation theory with respect to the interaction of the atomic system with the medium-assisted electromagnetic field. The theory of course allows for extensions in several respects. As a consequence of the lowest-order perturbation theory, the energy denominators that enter the polarizability of the atomic system are the unperturbed ones, without consideration of the level shift and broadening caused by the presence of the bodies. In fact, the polarizability of an atomic system is expected to change drastically when it comes close to a macroscopic body and the spontaneous decay thus becomes purely radiationless, with the decay rate being proportional to $\Delta r_{\rm A}^{-3}$ [@Ho01]. Since the level broadening is essentially determined by the spontaneous-decay rate, the polarizability becomes distance-dependent, an effect that needs careful consideration.
Since the electromagnetic field in (linear) magnetic media can be quantized analogously [@Knoll01], another interesting extension of the theory would be the inclusion of composite materials characterized by both a complex permittivity and a complex permeability. Interestingly, such materials, which have been fabricated recently, are left-handed. Last but not least, the underlying quantization scheme also renders it possible to extend the theory to atoms and molecules in excited states and to treat the motion of driven atomic systems.
We would like to thank Ludwig Knöll for valuable discussions. This work was supported by the Deutsche Forschungsgemeinschaft.
Derivation of Eq. (\[E39\]) {#Sec:sumrule}
===========================
From the commutation relation $$\label{A1}
\left[\hat{r}_{\alpha,i},\hat{H}_A\right]
=\biggl[\hat{r}_{\alpha,i}\,,
\sum_{\beta} \frac{\hat{{\bf p}}_{\beta}^2}{2m_{\beta}}\biggr]
=\frac{i\hbar}{m_{\alpha}}\,\hat{p}_{\alpha,i}$$ together with the eigenvalue equation (\[E27\]) we find that $$\begin{aligned}
\label{A2}
{\langle 0 |}\hat{p}_{\alpha,i}{|n\rangle}
&\!=&\!
-\frac{im_{\alpha}}{\hbar}{\langle 0 |}
\big[\hat{r}_{\alpha,i},\hat{H}_A\big]{|n\rangle}
\nonumber\\[1ex]
&\!=&\!
-\frac{im_{\alpha}}{\hbar}{\langle 0 |}\hat{r}_{\alpha,i}\hat{H}_A
-\hat{H}_A\hat{r}_{\alpha,i}{|n\rangle}
\nonumber\\[1ex]
&\!=&\!
-\frac{im_{\alpha}}{\hbar}(E_n-E_0)
{\langle 0 |}\hat{r}_{\alpha,i}{|n\rangle}
\nonumber\\[1ex]
&\!=&\!
-im_{\alpha}\omega_n{\langle 0 |}\hat{r}_{\alpha,i}{|n\rangle}.\end{aligned}$$ Thus $$\begin{aligned}
\label{A3}
\sum_{\alpha}\frac{q_{\alpha}}{m_{\alpha}}
{\langle 0 |}\hat{p}_{\alpha,i}{|n\rangle}
&\!=&\!
-i\omega_n\sum_{\alpha}q_{\alpha}{\langle 0 |}
\hat{r}_{\alpha,i}{|n\rangle}
\nonumber\\[1ex]
&\!=&\!
-i\omega_n{\langle 0 |}\hat{d}_i{|n\rangle}.\end{aligned}$$ Using Eq. (\[A3\]), we derive $$\begin{aligned}
\label{A4}
\lefteqn{
\frac{1}{2\hbar}\sum_n\omega_n
\left({\langle 0 |}\hat{d}_i{|n\rangle}{\langle n |}\hat{d}_j{|0\rangle}
+{\langle 0 |}\hat{d}_j{|n\rangle}{\langle n |}\hat{d}_i{|0\rangle}\right)
}
\nonumber \\[1ex]&&\hspace{2ex}
=\frac{i}{2\hbar}\sum_{\alpha}\frac{q_{\alpha}}{m_{\alpha}}
\sum_n\left({\langle 0 |}\hat{p}_{\alpha,i}{|n\rangle}{\langle n |}
\hat{d}_j{|0\rangle}\right.
\nonumber \\&&\hspace{20ex}
\left.- {\langle 0 |}\hat{d}_j{|n\rangle}{\langle n |}
\hat{p}_{\alpha,i}{|0\rangle}\right)
\qquad
\nonumber\\[1ex]&&\hspace{2ex}
=\frac{i}{2\hbar}\sum_{\alpha}\frac{q_{\alpha}}{m_{\alpha}}
{\langle 0 |}\big[\hat{p}_{\alpha,i},\hat{d}_j\big]{|0\rangle}
\nonumber\\[1ex]&&\hspace{2ex}
=\frac{i}{2\hbar}\sum_{\alpha}\frac{q_{\alpha}}{m_{\alpha}}
{\langle 0 |}\Big[\hat{p}_{\alpha,i},
\sum_{\beta}q_{\beta}\hat{r}_{\beta,j}\Big]{|0\rangle}
\nonumber\\[1ex]&&\hspace{2ex}
=\sum_{\alpha}\frac{q_{\alpha}^2}{2m_{\alpha}}\delta_{ij},\end{aligned}$$ which is just Eq. (\[E39\]).
Asymptotic behavior of the Green tensor {#AppB}
=======================================
The asymptotic behavior of the Green tensor for large frequencies reads [@Knoll01] $$\label{B1}
\lim_{|\omega|\rightarrow \infty}
\frac{\omega^2}{c^2} \bm{G}({\bf r},{\bf r}',\omega)
=-\bm{\delta}({\bf r}-{\bf r}'),$$ $$\label{B2}
\lim_{|\omega|\rightarrow \infty}
\frac{\omega^2}{c^2} \bm{G}^{(0)}({\bf r},{\bf r}',\omega)
=-\bm{\delta}({\bf r}-{\bf r}').$$ If ${\bf r}$ and ${\bf r}'$ lie in a common region of constant permittivity, we can use Eq. (\[E45\]), and subtract the two equations (\[B1\]) and (\[B2\]) to obtain $$\label{B3}
\lim_{|\omega|\rightarrow \infty}
\frac{\omega^2}{c^2} \bm{G}^{(1)}({\bf r},{\bf r}',\omega)
=0.$$ In the low-frequency limit we have [@Knoll01] $$\label{B4}
\lim_{|\omega|\rightarrow 0}\frac{\omega^2}{c^2}
\bm{G}({\bf r},{\bf r}',\omega)
= - {}^{\parallel}\bm{L}^{-1}{}^{\parallel}({\bf r},{\bf r}'),$$ where $$\label{B4-1}
\bm{L}({\bf r},{\bf r}')
= \lim_{|\omega|\rightarrow 0}
\int{\,\mathrm{d}}^3{s}\, \bm{\delta}^\parallel({\bf r}-\bm{s})
\varepsilon({\bf s},\omega)
\bm{\delta}^\parallel(\bm{s}-{\bf r}').$$ Recalling that, as $\omega$ $\!\rightarrow$ $\!0$, $$\label{B6}
\varepsilon({\bf r},\omega) \sim
\left\{
\begin{array}{l@{\quad}l}
\omega^0 & \mbox{for dielectrics},\\[1ex]
(i\omega)^{-1} & \mbox{for metals}
\end{array}
\right.$$ \[cf. Eq. (\[E84\])\], from Eqs. (\[B4\]) and (\[B4-1\]) we see that $$\label{B8}
\lim_{|\omega|\to 0}\omega^2\bm{G}({\bf r},{\bf r}',\omega) =
M, \qquad M<\infty.$$ Needless to say, Eq. (\[B8\]) is also valid for the scattering part of the Green tensor, $\bm{G}^{(1)}({\bf r},{\bf r}',\omega)$.
Summation formulas for Legendre polynomials {#AppC}
===========================================
The Legendre polynomials obey the relation [@Abramowitz] $$\label{C1}
\sum_{m=0}^n C_{nm}
\cos(m\lambda)P_n^m(x)P_n^m(y)
= P_n(\xi),$$ where $$\begin{aligned}
\label{C1.1}
&\displaystyle
C_{nm} = \left(2\!-\!\delta_{m0}\right) \frac{(n\!-\!m)!}{(n\!+\!m)!},
\\
\label{C2}
&\displaystyle
\xi\equiv xy+\sqrt{(1-x^2)(1-y^2)}\,\cos\lambda.\end{aligned}$$ For $x$ $\!=$ $\!y$ $\!=$ $\!\cos\theta$ and $\lambda$ $\!=$ $\!0$, Eq. (\[C1\]) reduces to $$\label{C3}
\sum_{m=0}^n
C_{nm}P_n^m(\cos\theta)^2=1.$$ Differentiating Eq. (\[C1\]) twice with respect to $\lambda$ and putting $\lambda$ $\!=$ $\!0$ and $x$ $\!=$ $\!y$ $\!=$ $\!\cos\theta$ afterwards yields $$\label{C4}
\sum_{m=0}^n
C_{nm}\frac{m^2}{\sin^2\theta}\,P_n^m(\cos\theta)^2
=\frac{n(n+1)}{2}\,.$$ Finally, subsequent differentiations of Eq. (\[C1\]) with respect to $x$ and $y$ and again putting $\lambda$ $\!=$ $\!0$ and $x$ $\!=$ $\!y$ $\!=$ $\!\cos\theta$ afterwards yield $$\label{C5}
\sum_{m=0}^n C_{nm}
\left[\frac{{\,\mathrm{d}}P_n^m(\cos\theta)}{{\,\mathrm{d}}\theta}\right]^2
=\frac{n(n+1)}{2}\,.$$
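As an elementary consistency check (not needed for the derivation), for $n$ $\!=$ $\!1$ one has $P_1(\cos\theta)$ $\!=$ $\!\cos\theta$, $[P_1^1(\cos\theta)]^2$ $\!=$ $\!\sin^2\theta$ and $C_{10}$ $\!=$ $\!C_{11}$ $\!=$ $\!1$, so that Eqs. (\[C3\]) – (\[C5\]) reduce to $$\cos^2\theta+\sin^2\theta=1,\qquad
\frac{1}{\sin^2\theta}\,\sin^2\theta=1,\qquad
\sin^2\theta+\cos^2\theta=1,$$ in agreement with $n(n+1)/2$ $\!=$ $\!1$.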
Derivation of Eq. (\[E77\]) {#AppD}
===========================
In Eq. (\[E75\]), the spherical Hankel functions $h^{(1)}_n(k_0r_{\rm A})$ $\!=$ $\!h^{(1)}_n(iur_{\rm A}/c)$ can be written in the form of [@Abramowitz] $$\label{hexpansion}
h^{(1)}_n(k_0r_{\rm A})
=\sum_{j=0}^n h_j e^{-ur_{\rm A}/c}
\left(\frac{c}{ur_{\rm A}}\right)^{j+1}$$ with some complex coefficients $h_j$. From inspection of Eqs. (\[E73\]) and (\[E74\]) it is seen that $B^{M}_n$ and $B^{N}_n$ can be expanded in powers of $u$ at , $$\begin{aligned}
\label{ABexpansion}
B^{M,N}_n=\sum_{j=0}^{\infty}b^{M,N}_j
\left(\frac{uR}{c}\right)^j.\end{aligned}$$ From Eqs. (\[hexpansion\]) and (\[ABexpansion\]) it then follows that the integrand of the (imaginary) frequency integral in Eq. (\[E75\]) is a sum of terms, which are all of the same general structure $$\label{CC3}
f_{jk}(u)=\alpha(iu)u^3\left(\frac{uR}{c}\right)^j
\left(\frac{c}{ur_{\rm A}}\right)^{k+2}e^{-2ur_{\rm A}/c}$$ ($j,k$ are nonnegative integers). For $j$ $\!>$ $\!(k$ $\!-$ $\!1)$ this is a polynomial in $u$ times an exponentially decaying function. The only relevant contributions to the frequency integral come from the maximum of $f_{jk}(u)$ at a frequency $u_0$ satisfying $$\label{CC3-1}
\left.\frac{{\,\mathrm{d}}}{{\,\mathrm{d}}u}f_{jk}(u)\right|_{u=u_0}=0,$$ thus $$\label{CC4}
u \approx u_0 \simeq \frac{(j+1-k)c}{2r_{\rm A}}\,,$$ where we have used the fact that $\alpha(iu)$ can be regarded as almost constant for the small frequencies considered here. For $j\le(k$ $\!-$ $\!1)$, $f_{jk}(u)$ is a monotonically decreasing function, and relevant contributions to the frequency integral can only come from regions, where $$\label{CC5}
u\le\frac{c}{2r_{\rm A}}\,,$$ because for larger frequencies the exponentially decaying factor becomes too small. Combining Eqs. (\[CC4\]) and (\[CC5\]), it can be said that the relevant contributions to the frequency integral come from regions, where $u$ $\!\lesssim$ $\!c/r_{\rm A}$. In these regions we have $$\label{C6}
|k_0R|,\ |kR| \sim \frac{uR}{c}
\lesssim \frac{R}{r_{\rm A}}\,.$$ This means that in the long-distance limit $r_{\rm A}$ $\!\gg$ $\!R$ the main contributions to the integral come from regions, where $|k_0R|,\ |kR|\ll 1$. We may therefore expand the coefficients $B^{M,N}_n$ in Eqs. (\[E73\]) and (\[E74\]) in powers of $k_0R$, on exploiting useful relations in Ref. [@Abramowitz], and retain only the leading terms: $$\label{C7}
B^M_n = o\left[(k_0R)^{2n+3}\right],$$ $$\label{C8}
B^N_n \simeq i\frac{(n+1)(2n+1)}{[(2n+1)!!]^2}
\frac{\varepsilon(iu)-1} {\varepsilon(iu) n +n+1}(k_0R)^{2n+1}.$$ In this way we find that the leading term in Eq. (\[E75\]) comes from the two terms containing $B^N_n$ with $n$ $\!=$ $\!1$. Keeping only these terms, using [@Abramowitz] $$\label{C9}
h^{(1)}_1(z)=-\left(\frac{1}{z}+\frac{i}{z^2}\right)e^{iz},$$ and changing the integration variable according to $u$ $\!\to$ $\!z$ $\!=$ $\!ur_{\rm A}/c$, we arrive at Eq. (\[E77\]).
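As a side remark (not used in what follows), setting $n$ $\!=$ $\!1$ in Eq. (\[C8\]) gives $$B^N_1 \simeq \frac{2i}{3}\,\frac{\varepsilon(iu)-1}{\varepsilon(iu)+2}\,(k_0R)^3,$$ in which the Clausius-Mossotti factor familiar from the static sphere polarizability, Eq. (\[E79\]), can be recognized; this is the term responsible for the $(\varepsilon-1)/(\varepsilon+2)$ dependence of Eqs. (\[E77\]) and (\[E78\]).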
Derivation of Eq. (\[E82\]) {#AppE}
===========================
Provided that $$\label{D1}
n\gg \frac{|z|^2}{4}\,,$$ the spherical Bessel and Hankel functions appearing in Eqs. (\[E73\]) – (\[E75\]) can be approximated by [@Abramowitz] $$\label{jbign}
j_n(z)\simeq \frac{z^n}{(2n+1)!!}\,,$$ and $$\label{ybign}
h_n^{(1)}(z)\simeq -i \frac{(2n-1)!!}{z^{(n+1)}}\,,$$ respectively. As we know from Appendix \[AppD\], the main contribution to the frequency integral in Eq. (\[E75\]) is from those values satisfying the condition (\[CC5\]). Hence, the condition (\[D1\]) becomes $$\label{D2}
n\gg 1,$$ because $z$ $\!\sim$ $\!uR/c$ $\simeq\!ur_{\rm A}/c$ in the short-distance limit. Substituting in Eqs. (\[E73\]) – (\[E75\]) for $j_n(z)$ and $h_n^{(1)}(z)$ the expressions (\[jbign\]) and (\[ybign\]), we derive after some algebra $$\label{D5}
B^M_n \simeq 0,$$ $$\begin{aligned}
\label{D6}
&&n(n+1)(2n+1)B^N_n \left[\frac{h^{(1)}_n(k_0r_{\rm A})}{k_0r_{\rm A}}\right]^2
\nonumber\\&&
+ (2n+1)B^N_n \left[\frac{{\,\mathrm{d}}[r_{\rm A} h^{(1)}_n(k_0r_{\rm A})]}{k_0r_{\rm A}{\,\mathrm{d}}r_{\rm A}}\right]^2
\nonumber\\&&
\simeq
-i\frac{1}{(k_0r_{\rm A})^3}
\frac{\varepsilon(iu)-1}{\varepsilon(iu)+1}n(n+1)
\left(\frac{R}{r_{\rm A}}\right)^{2n+1}
\nonumber\\&&\quad
-i\frac{1}{(k_0r_{\rm A})^3}
\frac{\varepsilon(iu)-1}{\varepsilon(iu)+1}
\frac{(2n+1)^2}{4}
\left(\frac{R}{r_{\rm A}}\right)^{2n+1}
\nonumber\\&&
\simeq
-2i\frac{1}{(k_0r_{\rm A})^3}
\frac{\varepsilon(iu)-1}{\varepsilon(iu)+1}n(n+1)
\left(\frac{R}{r_{\rm A}}\right)^{2n+1}.\end{aligned}$$ Whereas in the long-distance limit we could neglect all terms but the $n$ $\!=$ $\!1$ one (Appendix \[AppD\]), in the short-distance limit, as can be seen from Eq. (\[D6\]), the parameter $R/r_{\rm A}$ being very close to one, we encounter the opposite extreme, where the main contribution comes from those terms corresponding to high orders $n$. The main contribution to the sum over $n$ in Eq. (\[E75\]) comes from the peak at $n_1$ determined by $$\begin{aligned}
\label{D7}
\lefteqn{
\left.\frac{{\,\mathrm{d}}}{{\,\mathrm{d}}n}
\left[n(n+1)\left(\frac{R}{r_{\rm A}}\right)^{2n+1}\right]\right|_{n=n_1}
}
\nonumber\\[1ex]&&
\approx\left.\frac{{\,\mathrm{d}}}{{\,\mathrm{d}}n}\left(n^2e^{2n
\left(\ln R-\ln r_{\rm A}\right)}\right)\right|_{n=n_1}
\nonumber\\[1ex]&&
=2n_1\left(\frac{R}{r_{\rm A}}\right)^{2n_1}\left[1+n_1
\left(\ln R-\ln r_{\rm A}\right)\right] =0,\end{aligned}$$ from which we find $$\label{D8}
n_1=\frac{1}{\ln r_{\rm A}-\ln R}
\simeq \frac{R}{\Delta r_{\rm A}}\,,$$ because of $\Delta r_{\rm A}/R$ $\!\ll$ $\!1$. Since the main contribution to the sum over $n$ in Eq. (\[E75\]) comes from values of $n$ of the order of $n_1$ $\!\gg$ $\!1$, where the approximate formulas (\[D5\]) and (\[D6\]) are valid \[cf. Eq. (\[D2\])\], we introduce only a small error if we extrapolate these formulas to the terms with small $n$. Then the sum over $n$ is equal to the second derivative with respect to $(R/r_{\rm A})^2$ of a geometric sum, which can be performed in closed form to yield Eq. (\[E82\]).
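Explicitly, this last step reads (a short sketch in our notation): with $q$ $\!=$ $\!R/r_{\rm A}$ and $s$ $\!=$ $\!q^2$, $$\sum_{n=1}^{\infty}n(n+1)q^{2n+1}
=q^3\left.\frac{{\,\mathrm{d}}^2}{{\,\mathrm{d}}s^2}\,\frac{s^2}{1-s}\right|_{s=q^2}
=\frac{2q^3}{(1-q^2)^3}
\simeq\frac{R^3}{4(\Delta r_{\rm A})^3}\,,$$ since $1-q^2$ $\!\simeq$ $\!2\Delta r_{\rm A}/R$ for $\Delta r_{\rm A}/R$ $\!\ll$ $\!1$; inserting this into Eq. (\[E75\]) together with Eqs. (\[D5\]) and (\[D6\]) leads to Eq. (\[E82\]).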
[99]{} G. Binnig, C. F. Quate, and Ch. Gerber, Phys. Rev. Lett. [**56**]{}, 930 (1986).
F. Shimizu and J. Fujita, Phys. Rev. Lett. [**88**]{}, 123201 (2002).
H. B. G. Casimir and D. Polder, Phys. Rev. [**73**]{}, 360 (1948).
D. Raskin and P. Kusch, Phys. Rev. [**179**]{}, 712 (1969); A. Shih, D. Raskin, and P. Kusch, Phys. Rev. A [**9**]{}, 652 (1974); A. Shih and V. A. Parsegian, Phys. Rev. A [**12**]{}, 835 (1975).
A. Anderson, S. Haroche, E. A. Hinds, W. Jhe, and D. Meschede, Phys. Rev. A [**37**]{}, 3594 (1988); C. I. Sukenik, M. G. Boshier, D. Cho, V. Sandoghdar, and E. A. Hinds, Phys. Rev. Lett. [**70**]{}, 560 (1993).
R. E. Grisenti, W. Schöllkopf, J. P. Toennies, G. C. Hegerfeldt, and T. Köhler, Phys. Rev. Lett. [**83**]{}, 1755 (1999).
F. Shimizu, Phys. Rev. Lett. [**86**]{}, 987 (2001); V. Druzhinina and M. DeKieviet, eprint quant-ph/0212076.
A. Landragin, J.-Y. Courtois, G. Labeyrie, N. Vansteenkiste, C. I. Westbrook, and A. Aspect, Phys. Rev. Lett. [**77**]{}, 1464 (1996).
M. Oria, M. Chrevrollier, D. Bloch, M. Fichet, and M. Ducloy, Europhys. Lett. [**14**]{}, 527 (1991); V. Sandoghdar, C. I. Sukenik, E. A. Hinds, and S. Haroche, Phys. Rev. Lett. [**68**]{}, 3432 (1992); M. Fichet, F. Schuller, D. Bloch, and M. Ducloy, Phys. Rev. A [**51**]{}, 1553 (1995); M. Marrocco, M. Weidinger, R. T. Sang, and H. Walther, Phys. Rev. Lett. [**81**]{}, 5784 (1998); H. Failache, S. Saltiel, M. Fichet, D. Bloch, and M. Ducloy, [*ibid.*]{} [**82**]{}, 5467 (1999).
M. Gorlicki, S. Feron, V. Lorent, and M. Ducloy, Phys. Rev. A [**61**]{}, 013603 (1999); R. Marani, L. Cognet, V. Savalli, N. Westbrook, C. I. Westbrook, and A. Aspect, [*ibid.*]{} [**61**]{}, 053402 (2000).
I. E. Dzyaloshinskii, E. M. Lifshitz, and L. P. Pitaevskii, Adv. Phys. [**10**]{}, 165 (1961).
D. Langbein, Springer Tracks Mod. Phys. [**72**]{}, 1 (1974).
J. Mahanty and B. W. Ninham, [*Dispersion Forces*]{} (Academic, London, 1976).
E. A. Hinds, in [*Advances in Atomic, Molecular, and Optical Physics*]{}, edited by D. Bates and B. Bederson (Academic, New York, 1991), Vol. 28, p. 237.
P. W. Milonni, [*The Quantum Vacuum: An Introduction to Quantum Electrodynamics*]{} (Academic, San Diego, 1994).
R. K. Bullough and B. V. Thompson, J. Phys. C [**3**]{}, 1780 (1970).
M. J. Renne, Physica [**53**]{}, 193 (1971); [*ibid.*]{} [**56**]{}, 124 (1971).
P. W. Milonni and M.-L. Shih, Phys. Rev. A [**45**]{}, 4241 (1992). See also J. Schwinger, L. L. DeRaad, and K. A. Milton, Ann. Phys. (New York) [**115**]{}, 1 (1978) for a related treatment.
Y. Tikochinski and L. Spruch, Phys. Rev. A [**48**]{}, 4223 (1993).
F. Zhou and L. Spruch, Phys. Rev. A [**52**]{}, 297 (1995).
M. Boström and B. E. Sernelius, Phys. Rev. A [**61**]{}, 052703 (2000). Note that the formulas derived in Ref. [@Zhou] for frequency-independent, real permittivities are used for studying metals, by putting complex permittivities in them, without any proof.
A. M. Marvin and F. Toigo, Phys. Rev. A [**25**]{}, 782 (1982). In fact an energy formula based on a normal-mode expansion is combined with elements of the linear-response theory.
C. H. Wu, C.-I Kuo, and L. H. Ford, Phys. Rev. A [**65**]{}, 062102 (2002).
A. D. McLachlan, Proc. R. Soc. London Ser. A [**271**]{}, 387 (1963); Mol. Phys. [**7**]{}, 381 (1963).
G. S. Agarwal, Phys. Rev. A [**11**]{}, 243 (1975).
J. M. Wylie and J. E. Sipe, Phys. Rev. A [**30**]{}, 1185 (1984).
C. Girard, J. Chem. Phys. [**85**]{}, 6750 (1986).
C. Girard and C. Girardet, J. Chem. Phys. [**86**]{}, 6531 (1987).
M. Fichet, F. Schuller, D. Bloch, and M. Ducloy, Phys. Rev. A [**51**]{}, 1553 (1995); M.-P. Gorza, S. Saltiel. H. Failache, and M. Ducloy, Eur. Phys. J. D [**15**]{}, 113 (2001).
C. Girard, S. Maghezzi, and F. Hache, J. Chem. Phys. [**91**]{}, 5509 (1989).
M. Boustimi, J. Baudon, P. Candori, and J. Robert, Phys. Rev. B [**65**]{}, 155402 (2002).
Ho Trung Dung, L. Knöll, and D.-G. Welsch, Phys. Rev. A [**64**]{}, 013804 (2001); Ho Trung Dung, S. Scheel, L. Knöll, and D.-G. Welsch, J. Opt. B: Quantum Semiclass. Opt. [**4**]{}, 169 (2002).
J. R. Buck and J. Kimble, Phys. Rev. A [**67**]{}, 033806 (2003).
S. Scheel, L. Knöll, and D.-G. Welsch, Phys. Rev. A [**60**]{}, 4094 (1999); Ho Trung Dung, L. Knöll, and D.-G. Welsch, [*ibid.*]{} [**62**]{}, 053804 (2000).
L. Knöll, S. Scheel, and D.-G. Welsch, in [*Coherence and Statistics of Photons and Atoms*]{}, edited by J. Peřina (John Wiley & Son, New York, 2001), p. 1.
A. S. Davydov, [*Quantum Mechanics*]{} (NEO, Ann Arbor, MI, 1967), pp.317–319.
Ho Trung Dung, L. Knöll, and D.-G. Welsch, Phys. Rev. A [**65**]{}, 043813 (2002).
L. W. Li, P. S. Kooi, M. S. Leong, and T. S. Yeo, IEEE Trans. Microwave Theory Tech. [**42**]{}, 2302 (1994); C.-T. Tai, [*Dyadic Green Functions in Electromagnetic Theory*]{} (IEEE Press, New York, 1994).
J. D. Jackson, [*Classical Electrodynamics*]{} (John Wiley & Sons, New York, 1998).
[*Handbook of Mathematical Functions*]{}, edited by M. Abramowitz and I. A. Stegun (Dover, New York, 1973).
|
Introduction and main result
============================
We recall here the basics about the $\Lambda$-Wright-Fisher process with selection. This process represents the evolution of the frequency of a deleterious allele. When no selection is taken into account, we refer the reader to Bertoin-Le Gall [@LGB2] and Dawson-Li [@Dawson] who have introduced this process as a solution to some specific stochastic differential equation driven by a random Poisson measure. Recently Bah and Pardoux [@Bah] have considered a lookdown approach to construct a particle system whose empirical distribution converges to the strong solution to $$\label{SDE}
X_{t}=x+\int_{[0,t]\times [0,1] \times [0,1]}z\left(1_{u\leq X_{s-}}-X_{s-}\right)\bar{\mathcal{M}}(ds,du,dz)-\alpha\int_{0}^{t}X_{s}(1-X_{s})ds$$ where $\bar{\mathcal{M}}$ is the compensated version of a Poisson measure $\mathcal{M}$ on $\mathbb{R}_{+}\times [0,1]\times [0,1]$ whose intensity is $ds\otimes du\otimes z^{-2}\Lambda(dz)$. Strong uniqueness of the solution to (\[SDE\]) follows from an application of Theorem 2.1 in [@Dawson]. The process $(X_{t}, t\geq 0)$ should be interpreted as follows: it represents the frequency of a deleterious allele as time passes. When $\alpha>0$, the logistic term $-\alpha X_{t}(1-X_{t})dt$ makes the frequency of the allele decrease; this is the phenomenon of selection. Heuristically, the equation (\[SDE\]) can be understood as follows:
- Denote the frequency of the allele just before time $s$ by $X_{s-}$. If $(s,u,z)$ is an atom of the measure $\mathcal{M}$, then, at time $s$,
- if $u\leq X_{s-}$, the frequency of the allele increases by a fraction $z(1-X_{s-})$
- if $u>X_{s-}$, the frequency of the allele decreases by a fraction $zX_{s-}$.
- Continuously in time, the frequency decreases due to the deterministic selection mechanism.
Note that we are dealing with a two-allele model: at any time $t$, the *advantageous* allele has frequency $1-X_{t}$. The purely diffusive case is well understood (this is the classical Wright-Fisher diffusion, see e.g. Chapters 3 and 5 of Etheridge’s monograph [@etheridge2011] for a complete study). We mention that Section 5 of Bah and Pardoux [@Bah] incorporates a diffusion term in the SDE (\[SDE\]). In such cases, the measure $\Lambda$ has an atom at $0$ and it has already been established in [@Bah] that these processes are absorbed in finite time. We therefore focus on measures $\Lambda$ carried on $]0,1]$ (see Remark \[rem\]). Lastly, the process $(X_{t}, t\geq 0)$ should be interpreted as one of the simplest models introducing natural selection together with random genetic drift (that is, the random resampling governed by $\Lambda$).\
Plainly, the process $(X_{t}, t\geq 0)$ lies in $[0,1]$ and is a supermartingale. Therefore, the process $(X_{t}, t\geq 0)$ has an almost-sure limit denoted by $X_{\infty}$. This random variable is the frequency at equilibrium. Since $0$ and $1$ are the only absorbing states, the random variable $X_{\infty}$ lies in $\{0,1\}$. Moreover if $\alpha>0$, the supermartingale property yields that for all $x$ in $(0,1)$, $$\mathbb{P}[X_{\infty}=1|X_{0}=x]=\mathbb{E}[X_{\infty}|X_{0}=x]<x.$$ Our main result is the following theorem.
\[main\]Let $\alpha^{\star}:=-\int_{0}^{1}\log(1-x)\frac{\Lambda(dx)}{x^{2}}\in (0,\infty]$. Then,
- if $\alpha<\alpha^{\star}$ then for all $x\in (0,1)$, $0<\mathbb{P}[X_{\infty}=0|X_{0}=x]<1$,
- if $\alpha^{\star}<\infty$ and $\alpha>\alpha^{\star}$ then $X_{\infty}=0$ a.s.
<!-- -->
- As already mentioned, some $\Lambda$-Wright-Fisher processes with selection are absorbed in finite time (for instance the diffusive one). Such processes verify $\alpha^{\star}=\infty$. More precisely, Bah and Pardoux in Section 4.2 of [@Bah] show that they are related to measures $\Lambda$ satisfying the criterion of coming down from infinity.
- The condition $\int_{0}^{1}x^{-1}\Lambda(dx)=\infty$ implies that $-\int_{0}^{1}\log(1-x)x^{-2}\Lambda(dx)=\infty$. One can recognize the first integral condition as the dust-free criterion (see Lemma 25 and Proposition 26 in Pitman’s article [@Pitman]). In other words, the dust-free condition ensures that the deleterious allele does not disappear with probability one. Namely, it may survive in the long run with positive probability. It is worth observing that some measures $\Lambda$ satisfy $-\int_{0}^{1}\log(1-x)x^{-2}\Lambda(dx)=\infty$ and $\int_{0}^{1}x^{-1}\Lambda(dx)<\infty$. An example is provided in the proof of Corollary 4.2 of Möhle and Herriger [@MohleHerriger].
- Bah and Pardoux in Section 4.3 of [@Bah] have obtained a first result on the impact of selection. Namely they show that if $\alpha>\mu:=\int_{0}^{1}\frac{1}{x(1-x)}\Lambda(dx)$ then $X_{\infty}=0$ almost surely. We highlight that the quantity $\mu$ is strictly larger than $\alpha^{\star}$ and that our method does not rely on the look-down construction.
- Der, Epstein and Plotkin [@Der] and [@Der1] obtain several results in the framework of finite populations with selection. They announce the results of Theorem \[main\] in [@Der1]. However their proofs treat only the case when $\Lambda$ is a Dirac mass. Their method is based on a study of the generator of $(X_{t}, t\geq 0)$ and differs from ours.
Except in the case of simple measures $\Lambda$, the expression of $\alpha^{\star}$ is rather complicated. We provide a few examples.
- Let $x \in ]0,1]$ and $c>0$, and consider $\Lambda=c\delta_{x}$. We have $$\alpha^{\star}(x)= -c\log(1-x)/x^{2}.$$ The limit case $x=0$ corresponds to the Wright-Fisher diffusion and we have $\alpha^{\star}(0)=\infty$. When $x=1$, we also have $\alpha^{\star}(1)=\infty$ (this is the so-called star-shaped mechanism). Note that the map $\alpha^{\star}$ is convex and has a local minimum in $(0,1)$. Thus, in this model (called the Eldon-Wakeley model, see e.g. Birkner and Blath [@MR2562160]) the selection pressure which ensures the extinction of the disadvantaged allele is not a monotonic function of $x$.
- Let $a>0, b>0$, consider $\Lambda=Beta(a,b)$ where $Beta(a,b)$ is the unnormalized Beta measure with density $f(x)=x^{a-1}(1-x)^{b-1}$.
- If $a=2$, one can easily compute $\alpha^{\star}(b)= \int_{0}^{\infty}\frac{te^{-bt}}{1-e^{-t}}dt=\zeta(2,b)$ (where $\zeta$ denotes the Hurwitz Zeta function).
- If $b=1$ and $a>1$, we have $\alpha^{\star}(a)=\int_{0}^{\infty}te^{-t}(1-e^{-t})^{a-3}dt$. If $a\leq 1, \alpha^{\star}(a)=\infty$.
The computation is more involved for general measures $Beta$, see Gnedin *et al.* [@MR2896672] page 1442.
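For the record, here is a brief sketch of these two computations (our rewriting). Substituting $1-x=e^{-t}$ in the definition of $\alpha^{\star}$ gives, for $\Lambda=Beta(a,b)$, $$\alpha^{\star}=-\int_{0}^{1}\log(1-x)\,x^{a-3}(1-x)^{b-1}dx=\int_{0}^{\infty}t\,e^{-bt}\left(1-e^{-t}\right)^{a-3}dt.$$ For $a=2$, expanding $(1-e^{-t})^{-1}=\sum_{k\geq 0}e^{-kt}$ and integrating term by term yields $\sum_{k\geq 0}(b+k)^{-2}=\zeta(2,b)$; for $b=1$ one recovers the formula displayed above.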
A direct study of the process $(X_t, t\geq 0)$ and its limit based on the SDE (\[SDE\]) seems a priori rather involved. The key tool that will allow us to get some information about $X_{\infty}$ is a duality between $(X_{t}, t\geq 0)$ and a continuous-time Markov chain with values in $\mathbb{N}:=\{1,2,...\}$. Namely consider $(R_{t}, t\geq 0)$ with generator $\mathcal{L}$ defined as follows. For every $g: \mathbb{N}\rightarrow \mathbb{R}$: $$\label{generator} \mathcal{L}g(n)=\sum_{k=2}^{n}\binom{n}{k}\lambda_{n,k}[g(n-k+1)-g(n)]+\alpha n[g(n+1)-g(n)]$$ with $$\lambda_{n,k}=\int_{0}^{1}x^{k}(1-x)^{n-k}x^{-2}\Lambda(dx).$$ We have the following duality lemma:
\[dual\]For all $x\in [0,1], n\geq 1$, $$\mathbb{E}[X_{t}^{n}|X_{0}=x]=\mathbb{E}[x^{R_{t}}|R_{0}=n].$$
When no selection is taken into account, this duality is well-known (see for instance the recent survey concerning duality methods of Jansen and Kurt [@Jansen:arXiv1210.7193]). Several works incorporate selection and study the dual process. We mention for instance the work of Neuhauser and Krone [@krone1997anc] in which the Wright-Fisher diffusion case is studied. For a proof of Lemma \[dual\], which relies on standard generator calculations, see Equation 3.11 page 21 in Bah and Pardoux [@Bah].\
The process $(R_{t}, t\geq 0)$ is clearly irreducible and its properties are related to those of $(X_{t}, t\geq 0)$. The following lemma is crucial in our study.
\[lemma\]
- If $(R_{t}, t\geq 0)$ is positive recurrent then the law of $X_{\infty}$ charges both $0$ and $1$.
- If $(R_{t}, t\geq 0)$ is transient then $X_{\infty}=0$ almost surely.
Recall that $(X_{t}, t\geq 0)$ is nonnegative, bounded and converges almost surely. We first establish 1). Assume that the process $(R_t,t\geq 0)$ is positive recurrent. To conclude that the law of $X_{\infty}$ charges both $0$ and $1$, we use Lemma \[dual\]. Hence, we have $$\mathbb{P}[X_{\infty}=1|X_{0}=x]=\mathbb{E}[X_{\infty}|X_{0}=x]\geq \mathbb{E}[X_{\infty}^{n}|X_{0}=x]=\mathbb{E}[x^{R_{\infty}}|R_{0}=n]\geq \frac{x^{n_{0}}}{\mathbb{E}_{n_{0}}[T_{n_{0}}]}>0,$$ where $R_{\infty}$ is a random variable whose law is the stationary distribution of $(R_{t}, t\geq 0)$ and $T_{n_{0}}$ is the first return time to state $n_0$ of the chain $(R_t, t\geq 0)$. Moreover, since here $\alpha>0$ (otherwise $(R_{t}, t\geq 0)$ would not be irreducible), the strict supermartingale inequality recalled in the Introduction gives $\mathbb{P}[X_{\infty}=1|X_{0}=x]<x<1$, so that the law of $X_{\infty}$ also charges $0$. We now prove 2). Assume that the process $(R_t, t\geq 0)$ is transient. Plainly, applying the dominated convergence theorem in Lemma \[dual\] with $n=1$, we have $$\mathbb{E}[X_{\infty}|X_{0}=x]=\underset{t\rightarrow \infty}\lim \mathbb{E}[x^{R_{t}}|R_{0}=1]=0, \text{ since } R_{t} \underset{t\rightarrow \infty}\longrightarrow \infty \text{ a.s. }$$ Thus, $X_\infty=0$ almost surely.
Similarly to the block counting process of a $\Lambda$-coalescent, the process $(R_{t}, t\geq 0)$ has a genealogical interpretation. Roughly speaking, it counts the number of ancestors of a sample of individuals as time goes towards the past. Two kinds of events can occur:
- A coalescence of lineages. When there are $n$ lineages, it occurs at rate $$\label{ratecoal} \phi(n)=\sum_{k=2}^{n}\binom{n}{k}\lambda_{n,k},$$
- A branching (a birth) event (modelling selection). When there are $n$ lineages, the process jumps to $n+1$ at rate $\alpha n$.
When a lineage splits in two, this should be understood as two potential ancestors. We refer the reader to Sections 5.2 and 5.4 of [@etheridge2011], and also to Etheridge, Griffiths and Taylor [@Etheridge201077] where a dual coalescing-branching process is defined for a general $\Lambda$ mechanism.
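To make these dynamics concrete, the following small simulation sketch (illustrative only, and not used anywhere in the proofs) implements the chain $(R_{t}, t\geq 0)$ in the Eldon-Wakeley case $\Lambda=c\delta_{x_{0}}$, for which $\lambda_{n,k}=c\,x_{0}^{k-2}(1-x_{0})^{n-k}$; the parameter values are arbitrary assumptions.

```python
import numpy as np
from math import comb, log

rng = np.random.default_rng(0)

def simulate_R(n0, alpha, c=1.0, x0=0.5, t_max=5.0, n_cap=500):
    """Gillespie sketch of the dual chain (R_t) for Lambda = c*delta_{x0}."""
    n, t = n0, 0.0
    while t < t_max and n < n_cap:
        # coalescence rates: n -> n-k+1 at rate binom(n,k)*lambda_{n,k}, k = 2,...,n
        lam = np.array([comb(n, k) * c * x0**(k - 2) * (1 - x0)**(n - k)
                        for k in range(2, n + 1)])
        birth = alpha * n                     # selection: n -> n+1 at rate alpha*n
        total = lam.sum() + birth
        t += rng.exponential(1.0 / total)
        if rng.random() < birth / total:
            n += 1
        else:
            k = 2 + rng.choice(len(lam), p=lam / lam.sum())
            n = n - k + 1
    return n

alpha_star = -log(1 - 0.5) / 0.5**2           # = 4*log(2), threshold for c=1, x0=0.5
for alpha in (0.5 * alpha_star, 2.0 * alpha_star):
    print(f"alpha = {alpha:.2f}:", [simulate_R(20, alpha) for _ in range(5)])
```

In typical runs, the chain keeps returning to small values when $\alpha<\alpha^{\star}$, while it drifts off to large values when $\alpha>\alpha^{\star}$, in line with Lemmas \[martingale\] and \[transience\] below.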
Coming down from infinity and study of $(R_{t}, t\geq 0)$
=========================================================
Rather than working with the process satisfying the SDE (\[SDE\]), we will work on the continuous-time Markov chain $(R_{t}, t\geq 0)$. Denote $\nu(dx):=x^{-2}\Lambda(dx)$ and define for all $n\geq 2$, $$\label{delta} \delta(n):= -n\int_{0}^{1}\log\left(1-\frac{1}{n}[np-1+(1-p)^{n}]\right)\nu(dp).$$ The maps $n\mapsto \delta(n)$ and $n\mapsto \delta(n)/n$ are both non-decreasing and $\delta(n)/n\uparrow \alpha^{\star}.$ For the proof of these monotonicity properties we refer the reader to the proof of Lemma 4.1 and to Corollary 4.2 in [@MohleHerriger].\
Firstly, we need to say a word about coalescents and coming down from infinity. Then, we deal with the proof of Theorem \[main\]. We will adapt some arguments due to Möhle and Herriger [@MohleHerriger] and use Lemma \[lemma\].
Revisiting the coming-down from infinity for the $\Lambda$-coalescent
---------------------------------------------------------------------
A nice introduction to the $\Lambda$-coalescent processes is given in Chapter 3 of Berestycki [@Beres2]. Denote the number of blocks in a $\Lambda$-coalescent by $(R_{t}, t\geq 0)$. Started from $n$, this process has the generator $\mathcal{L}$, defined in (\[generator\]), with $\alpha=0$. An interesting property is that this process can start from infinity. We say that the coming down from infinity occurs if almost surely for any time $t>0$, $R_{t}<\infty$, while $R_{0}=\infty$. In this case, $(R_{t}, t\geq 0)$ will actually be absorbed at $1$ in finite time. The arguments that we use to establish Theorem \[main\] are mostly adapted from techniques due to Möhle and Herriger [@MohleHerriger]. They have established a new condition for $\Xi$-coalescents (meaning coalescents with simultaneous and multiple collisions) to come down from infinity. Their criterion is based on a new function which corresponds to $\delta$ in the particular case of $\Lambda$-coalescents. Their work relies mostly on linear random recurrences. We give here a proof in a “martingale fashion” for the simpler setting of $\Lambda$-coalescents.\
The next lemma is lifted from Lemma 4.1 in [@MohleHerriger]; however, we provide a proof for the sake of completeness. Let $n\geq 2$ and $x\in (0,1)$. We consider the auxiliary random variable $Y_{n}(x)$ with law:
$\mathbb{P}[Y_{n}(x)=l]=1_{l=n}(1-x)^{n}+\binom{n}{l-1}(1-x)^{l-1}x^{n-l+1}$ for every $l\in \{1,...,n\}$.
\[majorationdelta\]
- $\mathbb{E}[Y_{n}(x)]=n(1-x)+1-(1-x)^{n}$,
- $\frac{\delta(n)}{n}=\int_{0}^{1}-\log \mathbb{E}[Y_{n}(x)/n]\nu(dx)\leq \sum_{j=2}^{n}-\log\left(\frac{n-j+1}{n}\right)\binom{n}{j}\lambda_{n,j}.$
The first statement is obtained by binomial calculations and is left to the reader, see Remark 7.2.2 for $\Lambda$-coalescent and Equation (2) in [@MohleHerriger]. We focus on the second statement. We have $$\begin{aligned}
\frac{\delta(n)}{n}&=\int_{0}^{1}-\log \mathbb{E}[Y_{n}(x)/n]\nu(dx)\\
&\leq \int_{0}^{1}\mathbb{E}[-\log (Y_{n}(x)/n)]\nu(dx) \text{ by the Jensen inequality } (-\log \text{ is convex})\\
&= \sum_{k=1}^{n-1}-\log\left(\frac{k}{n}\right)\int_{0}^{1}\mathbb{P}[Y_{n}(x)=k]\nu(dx)\\
&= \sum_{k=1}^{n-1}-\log\left(\frac{k}{n}\right)\binom{n}{n-k+1}\lambda_{n,n-k+1}\\
&= \sum_{k=2}^{n}-\log\left(\frac{n-k+1}{n}\right)\binom{n}{k}\lambda_{n,k}.\\\end{aligned}$$
\[CDI\] Let $\Lambda$ be a finite measure on $[0,1]$ without mass at $0$. The $\Lambda$-coalescent comes down from infinity if and only if $$\sum_{k\geq 2}\frac{1}{\delta(k)}<\infty.$$ Furthermore, we have $$\mathbb{E}[T]\leq 2\sum_{k=2}^{\infty}\frac{1}{\delta(k)},$$ where $T:=\inf\{t\geq 0; R_{t}=1\}$.
Schweinsberg [@CDI] established that a necessary and sufficient condition for the coming down from infinity is the convergence of the series $\sum_{l\geq 2}\frac{1}{\psi(l)}$ where $$\label{psi} \psi(l):=\sum_{k=2}^{l}\binom{l}{k}\lambda_{l,k}(k-1)=\int_{0}^{1}[lx-1+(1-x)^{l}]x^{-2}\Lambda(dx).$$ We easily observe that for all $n\geq 2$, $\delta(n)\geq \psi(n)$. Therefore the divergence of the series $\sum \frac{1}{\delta(n)}$ entails that of $\sum \frac{1}{\psi(n)}$ and we just have to focus on the sufficient part (for a proof of the necessary part based on martingale arguments, we refer to Section 6 of [@coaldist]). Assume $\sum \frac{1}{\delta(n)}<\infty$, consider the function $$f(l):=\sum_{k=l+1}^{\infty}\frac{k}{\delta(k)}\log\left(\frac{k}{k-1}\right).$$ This function is well defined since $\frac{k}{\delta(k)}\log\left(\frac{k}{k-1}\right)\underset{k\rightarrow \infty}{\sim}1/\delta(k)$. The generator of the block counting process corresponds to $\mathcal{L}$ with $\alpha=0$, thus we study $$\mathcal{L}f(l)=\sum_{k=2}^{l}\binom{l}{k}\lambda_{l,k}[f(l-k+1)-f(l)].$$ We have $$f(l-k+1)-f(l)\geq \frac{l}{\delta(l)}\sum_{j=l-k+2}^{l}\log \left(\frac{j}{j-1}\right)=\frac{l}{\delta(l)}[\log(l)-\log(l-k+1)]$$ and then $$\mathcal{L}f(l)\geq \frac{l}{\delta(l)}\sum_{k=2}^{l}\binom{l}{k}\lambda_{l,k}\left[-\log\left(\frac{l-k+1}{l}\right)\right].$$ By Lemma \[majorationdelta\], we have $$\sum_{k=2}^{l}\binom{l}{k}\lambda_{l,k}\left[-\log\left(\frac{l-k+1}{l}\right)\right]\geq \delta(l)/l.$$ We deduce that $\mathcal{L}f(l)\geq 1$ for every $l\geq 2$. Then, since $f(R_{t})-\int_{0}^{t}\mathcal{L}f(R_{s})ds$ is a martingale, by applying the optional stopping theorem at time $T_{n}\wedge k$ where $T_{n}:=\inf\{t; R_{t}=1\}$ when $R_{0}=n$, we get: $$\mathbb{E}[f(R_{T_{n}\wedge k})]=f(n)+\mathbb{E}\left[\int_{0}^{T_{n}\wedge k}\mathcal{L}f(R_{s})ds\right]\geq f(n)+\mathbb{E}[T_{n}\wedge k]$$ Letting $k \rightarrow \infty$ and using the fact that $f$ is decreasing, we obtain that $$\mathbb{E}[T_{n}]\leq f(1)-f(n).$$ Recall that $T_n \uparrow T$ a.s when $n\rightarrow \infty$. The result follows by the monotone convergence theorem.
Proof of Theorem \[main\]
-------------------------
The proof is based on three lemmas. The first lemma states that the process is non-explosive. In the next two lemmas, we rely on martingale arguments. We highlight that Lemmas \[nonexplosion\], \[functionf\] and \[martingale\] below are valid for $\alpha^{\star}\in (0,\infty]$. By convention, if $\alpha^{\star}=\infty$, then $1/\alpha^{\star}=0$.
\[nonexplosion\] The process $(R_{t},t\geq 0)$ is non-explosive.
We show that the only non-negative bounded solution of $\mathcal{L}f=cf$ for $c>0$ is the trivial solution $f=0$. Reuter’s criterion (see e.g. Corollary 2.7.3 in Norris’s book [@Norris] or [@Reuter]) then ensures that the process is non-explosive. If $\mathcal{L}f(n)=cf(n)$ then we have $$\alpha nf(n+1)=cf(n)-\sum_{k=2}^{n}\binom{n}{k}\lambda_{n,k}f(n-k+1)+(\phi(n)+\alpha n)f(n).$$ This yields $$\begin{aligned}
f(n+1)-f(n)&=\frac{c}{\alpha n}f(n)-\sum_{k=2}^{n}\frac{\binom{n}{k}\lambda_{n,k}}{\alpha n}f(n-k+1)+\frac{\phi(n)}{\alpha n}f(n)\\
&= \frac{cf(n)}{\alpha n}+\sum_{k=2}^{n}\frac{\binom{n}{k}\lambda_{n,k}}{\alpha n}(f(n)-f(n-k+1)).\end{aligned}$$ If $f(1)=0$ then $f(2)=0$ and by an easy induction $f(i)=0$ for all $i\geq 1$. Therefore, if there exists a non-trivial non-negative solution, then necessarily $f(1)>0$. Letting $n=1$ in the last equality gives $f(2)-f(1)>0$ (note that when $n=1$, the sum is empty and $\phi(1)=0$). Assume that $f(n)\geq f(i)$ for all $i\leq n$. The last equality above implies that $f(n+1)-f(n)\geq 0$. By induction, we thus have $f(n+1)\geq f(i)$ for all $i\leq n+1$. Finally, $f$ is non-decreasing and one has $$f(n)=f(1)+\sum_{k=1}^{n-1}(f(k+1)-f(k))\geq \sum_{k=1}^{n-1}\frac{cf(1)}{\alpha k},$$ so that $f$ is unbounded. Hence the only non-negative bounded solution is the trivial one.
\[functionf\] Define the function $$f(l):=\sum_{k=2}^{l}\frac{k}{\delta(k)}\log\left(\frac{k}{k-1}\right).$$ Then, with the generator $\mathcal{L}$ of $(R_{t}, t\geq 0)$ defined in (\[generator\]), we have for all $l\geq 2$ $$\mathcal{L}f(l)\leq -1+\alpha l/\delta(l).$$
By definition, $$\mathcal{L}f(l)=\sum_{k=2}^{l}\binom{l}{k}\lambda_{l,k}[f(l-k+1)-f(l)]+\alpha l[f(l+1)-f(l)].$$ We have $f(l-k+1)-f(l)=-\sum_{j=l-k+2}^{l}\frac{j}{\delta(j)}\log\left(\frac{j}{j-1}\right)$, and since $(j/\delta(j), j\geq 2)$ is decreasing, for all $j\leq l$, $j/\delta(j)\geq l/\delta(l)$. Therefore $$f(l-k+1)-f(l)\leq -\frac{l}{\delta(l)}\sum_{j=l-k+2}^{l}\log\left(\frac{j}{j-1}\right)=-\frac{l}{\delta(l)}\log\left(\frac{l}{l-k+1}\right).$$ We deduce that $$\begin{aligned}
\mathcal{L}f(l)&\leq -\frac{l}{\delta(l)}\sum_{k=2}^{l}\binom{l}{k}\lambda_{l,k}\log\left(\frac{l}{l-k+1}\right)+\alpha \frac{l+1}{\delta(l+1)} \underbrace{l\log\left(1+\frac{1}{l}\right)}_{\leq 1}\\
&\leq \frac{l}{\delta(l)}\underbrace{\sum_{k=2}^{l}\binom{l}{k}\lambda_{l,k}\log\left(\frac{l-k+1}{l}\right)}_{\leq -\delta(l)/l}+\alpha \frac{l+1}{\delta(l+1)}\\
&\leq -1+\alpha \frac{l}{\delta(l)}.\end{aligned}$$ The second inequality holds by Lemma \[majorationdelta\].
The following lemma tells us that if $\alpha<\alpha^{\star}\in (0,\infty]$, then $(R_{t}, t\geq 0)$ is positive recurrent. Applying Lemma \[lemma\] yields the first part of Theorem \[main\].
\[martingale\] Assume $\alpha<\alpha^{\star}$. Then, there exists $n_{0}$, such that for all $n\geq n_{0}$, $\mathbb{E}_{n}[T^{n_{0}}]<\infty$, where $$T^{n_{0}}:= \inf\{s\geq 0; R_{s}<n_{0}\}.$$ Thus, the process $(R_{t}, t\geq 0)$ is positive recurrent.
For every $N\in \mathbb{N}$, define $$f_{N}(l):=f(l)1_{l\leq N+1}.$$ By Dynkin’s formula, the process $$\left(f_{N}(R_{t})-\int_{0}^{t}\mathcal{L}f_{N}(R_{s})ds, t\geq 0\right)$$ is a martingale. One can easily check that $\mathcal{L}f_N(l)=\mathcal{L}f(l) \mbox{ if } l\leq N$. For any $\epsilon >0$ there exists $n_{0}$ such that for all $l\geq n_{0}$, $$\label{epsilon}
\frac{l}{\delta(l)}\leq \frac{1}{\alpha^{\star}}+\epsilon.$$ Let $n_0\leq n \leq N$ and consider the stopping time $S_N:=\inf \{s\geq 0; R(s)\geq N+1\}$. We apply the optional stopping theorem to the bounded stopping time $T^{n_{0}}\wedge S_{N}\wedge k$ and obtain $$\begin{aligned}
\mathbb{E}_{n}[f_{N}(R_{T^{n_{0}}\wedge S_{N}\wedge k})]&= f_{N}(n) + \mathbb{E}\left[\int_{0}^{T^{n_{0}}\wedge S_{N} \wedge k}\mathcal{L}f_{N}(R_{s})ds\right]\\
&\leq f_{N}(n)+\mathbb{E}\left[\int_{0}^{T^{n_{0}}\wedge S_{N} \wedge k}\left( -1+ \alpha \frac{R_{s}}{\delta(R_{s})}\right)ds\right]\\
&\leq f_{N}(n)+\mathbb{E}\left[\int_{0}^{T^{n_{0}}\wedge S_{N}\wedge k}\left(-1+\alpha(\frac{1}{\alpha^{\star}}+\epsilon) \right)ds\right]\\
&= f_{N}(n)+\left(\frac{\alpha}{\alpha^{\star}}-1+\epsilon \alpha \right)\mathbb{E}[T^{n_{0}}\wedge S_{N} \wedge k].\end{aligned}$$ The first inequality follows from the equality $\mathcal{L}f_{N}(l)=\mathcal{L}f(l)$ when $l\leq N$ and from Lemma \[functionf\]. The second inequality follows from (\[epsilon\]). For small enough $\epsilon$, $1-\frac{\alpha}{\alpha^{\star}}-\epsilon \alpha >0$, thus $$\underbrace{(1-\frac{\alpha}{\alpha^{\star}}-\epsilon \alpha)}_{>0}\mathbb{E}[T^{n_{0}}\wedge S_{N} \wedge k]\leq f_{N}(n)-\mathbb{E}_{n}[f_{N}(R_{T^{n_{0}}\wedge S_{N}\wedge k})]\leq f_{N}(n),$$ On the one hand, since the process is non-explosive, $S_{N}\underset{N\rightarrow \infty}\longrightarrow \infty$ almost surely and therefore, for all $n\geq n_{0}$ $$(1-\frac{\alpha}{\alpha^{\star}}-\epsilon \alpha)\mathbb{E}[T^{n_{0}}\wedge k]\leq f(n).$$ On the other hand, by letting $k\rightarrow \infty$ we get
$\mathbb{E}[T^{n_{0}}]\leq Cf(n)$ for all $n\geq n_{0}$
with $C$ a constant depending only on $\epsilon$.
In order to get statement 2) of Theorem \[main\], we will apply the second part of Lemma \[lemma\]. Namely, we show that if $\alpha>\alpha^{\star}$, then $(R_t, t\geq 0)$ is transient.
\[transience\] If $\alpha>\alpha^{\star}$ then $R_t \underset{t\rightarrow \infty}{\longrightarrow} \infty$ almost surely.
If $g$ is a bounded function such that $\mathcal{L}g(n)<0$ for all $n>n_{0}$, the process $(g(R_{t\wedge T^{n_{0}}}), t\geq 0)$ when starting from $n>n_{0}$, is a supermartingale; with $T^{n_{0}}:=\inf\{t>0, R_{t}<n_{0}\}$. Applying the martingale convergence theorem yields that $\mathbb{P}_{n}(T^{n_{0}}<\infty)<1$. Therefore the process $(R_{t}, t\geq 0)$ is not recurrent, and by irreducibility is transient. We show that the function $g(n):=\frac{1}{\log(n+1)}$ fulfills these conditions. One has $$\mathcal{L}g(n)=\sum_{k=2}^{n}\binom{n}{k}\lambda_{n,k}\left[\frac{1}{\log(n-k+2)}-\frac{1}{\log(n+1)}\right]+\alpha n \left[\frac{1}{\log(n+2)}-\frac{1}{\log(n+1)}\right].$$ On the one hand, one can easily check that $$\alpha n \left[\frac{1}{\log(n+2)}-\frac{1}{\log(n+1)}\right]= \alpha n \frac{\log\left(\frac{n+1}{n+2}\right)}{\log(n+2)\log(n+1)}=-\alpha\frac{1}{\log(n+2)\log(n+1)}(1+o(1)).$$ On the other hand, denote by $B_{n}(x)$ a random variable with a binomial law $(n,x)$. We have $$\begin{aligned}
\mathcal{L}^{0}g(n):=\sum_{k=2}^{n}&\binom{n}{k}\lambda_{n,k}\left[\frac{1}{\log(n-k+2)}- \frac{1}{\log(n+1)}\right]\\
&=\int_{0}^{1}\frac{\Lambda(dx)}{x^{2}}\mathbb{E}\left[\frac{\log\left(\frac{n+1}{n-B_{n}(x)+2}\right)}{\log(n+1)\log(n-B_{n}(x)+2)}\right]\\
&=\frac{1}{\log(n+1)}\int_{0}^{1}\frac{\Lambda(dx)}{x^{2}}\mathbb{E}\left[\frac{-\log\left(1-\frac{B_{n}(x)-1}{n+1}\right)}{\log(n+2)+\log\left(1-\frac{B_{n}(x)}{n+2}\right)}\right].\end{aligned}$$ The last equality holds true since for all $2 \leq k \leq n$, $\log\left(\frac{n+1}{n-k+2}\right)=-\log \left(1-\frac{k-1}{n+1}\right)$ and $\log(n-k+2)=\log(n+2)+\log\left(1-\frac{k}{n+2}\right)$. Moreover $$|(n+1)x-(B_{n}(x)-1)|\leq |nx-B_{n}(x)|+|x+1|$$ and by Chebyshev’s inequality, we have $$\begin{aligned}
\mathbb{P}\left[\left| x- \frac{B_{n}(x)-1}{n+1} \right|>(n+1)^{-1/3}\right] &\leq \frac{\text{Var}(B_{n}(x))}{\big((n+1)^{2/3}-(1+x)\big)^{2}}\\
&\leq \frac{nx(1-x)}{\big((n+1)^{2/3}-2\big)^{2}}. \end{aligned}$$ Notice that $n\big((n+1)^{2/3}-2\big)^{-2}\underset{n\rightarrow \infty}{\sim}n^{-1/3}$ and $\int_{0}^{1}\frac{\Lambda(dx)}{x^{2}}x(1-x)<\infty$. Therefore, from the last expression of $\mathcal{L}^{0}g(n)$ above, we have $$\mathcal{L}^{0}g(n)=\frac{1}{\log(n+1)\log(n+2)}(\alpha^\star+o(1)),$$ and thus, since $\alpha>\alpha^\star$ $$\mathcal{L}g(n)=\frac{1}{\log(n+1)\log(n+2)}\big(\alpha^\star-\alpha+o(1)\big)<0, \text{ for } n \text{ large enough.}$$
\[rem\] Bah and Pardoux [@Bah] have established (Theorem 4.3) that the absorption of the process $(X_{t}, t\geq 0)$ in finite time is almost sure if and only if the underlying $\Lambda$-coalescent comes down from infinity. Furthermore, Proposition 4.4 in [@Bah] states that for all $x\in (0,1)$, $0<\mathbb{P}[X_{\zeta}=0|X_{0}=x]<1$ where $\zeta$ is the absorption time. In such cases, Theorem \[CDI\] plainly yields that $\alpha^{\star}=\infty$. We stress that our arguments still hold when $\Lambda(\{0\})>0$; one only has to add the quadratic term $\Lambda(\{0\})\binom{n}{2}$ to the function $\delta$ (Equation (\[delta\])).
We end this article by observing a link between the threshold $\alpha^{\star}$ and the first moment of a subordinator.
Assume $\alpha^{\star}<\infty$. Then, the corresponding $\Lambda$-coalescent process has dust, meaning that it has infinitely many singleton blocks at any time. As time passes, the total asymptotic frequency of the singleton blocks is given by a process $(D(t), t\geq 0)$ with values in $]0,1]$ such that $$(D(t), t\geq 0)=(\exp(-\xi_{t}), t\geq 0)$$ where $\xi$ is a subordinator with Laplace exponent $$\phi(q)=\int_{0}^{1}[1-(1-x)^{q}]x^{-2}\Lambda(dx).$$ We refer the reader to Proposition 26 in Pitman [@Pitman]. An interesting feature, easily checked, is that $\alpha^{\star}=\mathbb{E}[\xi_{1}]$. Hence one could expect some fluctuations in $(R_{t},t\geq 0)$ when considering the critical case $\alpha=\alpha^{\star}$. We note, in this context, that R. Griffiths proved in [@Griffiths] that $X_{\infty}=0$ almost surely when $\alpha=\alpha^{\star}$ by a different analytical method.
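The identity $\alpha^{\star}=\mathbb{E}[\xi_{1}]$ can be seen by differentiating the Laplace exponent at the origin: $$\mathbb{E}[\xi_{1}]=\phi'(0^{+})=\int_{0}^{1}\frac{-\log(1-x)}{x^{2}}\Lambda(dx),$$ and the right-hand side is exactly the limiting quantity $\lim_{n\rightarrow\infty}\log(n+1)\log(n+2)\,\mathcal{L}^{0}g(n)$ identified with $\alpha^{\star}$ in the proof of transience above, the interchange of limit and integral being justified there by the Chebyshev estimate.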
Let us also mention that several authors (Gnedin et al. [@MR2896672] and Lager[å]{}s [@Lageras] for instance) have studied coalescents with a dust component through the theory of regenerative compositions.
[10]{}
B. Bah and E. Pardoux. $\Lambda$-look-down model with selection. , 2012. <http://arxiv.org/abs/1303.1953>.
N. Berestycki. [*Recent progress in coalescent theory*]{}, volume 16 of [*Ensaios Matemáticos \[Mathematical Surveys\]*]{}. Sociedade Brasileira de Matemática, Rio de Janeiro, 2009.
J. Bertoin and J.-F. Le Gall. Stochastic flows associated to coalescent processes. [II]{}. [S]{}tochastic differential equations. , 41(3):307–333, 2005.
M. Birkner and J. Blath. Measure-valued diffusions, general coalescents and population genetic inference. In [*Trends in stochastic analysis*]{}, volume 353 of [*London Math. Soc. Lecture Note Ser.*]{}, pages 329–363. Cambridge Univ. Press, Cambridge, 2009.
D. A. Dawson and Z. Li. Stochastic equations, flows and measure-valued processes. , 40(2):813–857, 2012.
R. Der, C. Epstein, and J. Plotkin. . , May 2012.
R. Der, C. L. Epstein, and J. B. Plotkin. Generalized population models and the nature of genetic drift. , 80(2):80 – 99, 2011.
A. Etheridge. [*Some mathematical models from population genetics*]{}. Number 2012 in Lecture Notes in Mathematics / [É]{}cole d’[É]{}t[é]{} de Probabilit[é]{}s de Saint-Flour. Springer, 2011.
A. M. Etheridge, R. C. Griffiths, and J. E. Taylor. A coalescent dual process in a Moran model with genic selection, and the lambda coalescent limit. , 78(2):77 – 92, 2010.
C. Foucart. Distinguished exchangeable coalescents and generalized [F]{}leming-[V]{}iot processes with immigration. , 43(2):348–374, 2011.
A. Gnedin, A. Iksanov, and A. Marynych. On [$\Lambda$]{}-coalescents with dust component. , 48(4):1133–1151, 2011.
R. Griffiths. The $\Lambda$-Fleming-Viot process and a connection with Wright-Fisher diffusion.
S. Jansen and N. Kurt. On the notion(s) of duality for markov processes. Preprint 1210.7193, 2012. <http://arxiv.org/abs/1210.7193>.
S. Krone and C. Neuhauser. Ancestral processes with selection. , 51(3):210–37, 1997.
A. N. Lager[å]{}s. A population model for [$\Lambda$]{}-coalescents with neutral mutations. , 12:9–20 (electronic), 2007.
M. Möhle and P. Herriger. Conditions for exchangeable coalescents to come down from infinity. , 9:637–665, 2012.
J. Pitman. Coalescents with multiple collisions. , 27(4):1870–1902, 1999.

J. Schweinsberg. A necessary and sufficient condition for the [$\Lambda$]{}-coalescent to come down from infinity. , 5:1–11 (electronic), 2000.
J. R. Norris. , volume 2 of [*Cambridge Series in Statistical and Probabilistic Mathematics*]{}. Cambridge University Press, Cambridge, 1998. Reprint of 1997 original.
G. E. H. Reuter. Denumerable [M]{}arkov processes. [IV]{}. [O]{}n [C]{}. [T]{}. [H]{}ou’s uniqueness theorem for [$Q$]{}-semigroups. , 33(4):309–315, 1975/76.
|
---
abstract: 'The asymptotic behavior of the correlator for Polyakov loop operators separated by a large distance $R$ is determined for high temperature QCD. It is dominated by nonperturbative effects related to the exchange of magnetostatic gluons. To analyze the asymptotic behavior, the problem is formulated in terms of the effective field theory of QCD in 3 space dimensions. The Polyakov loop operator is expanded in terms of local gauge-invariant operators constructed out of the magnetostatic gauge field, with coefficients that can be calculated using resummed perturbation theory. The asymptotic behavior of the correlator is $\exp(-MR)/R$, where $M$ is the mass of the lowest-lying glueball in $(2+1)$-dimensional QCD. This result implies that existing lattice calculations of the Polyakov loop correlator at the highest temperatures available do not probe the true asymptotic region in $R$.'
---
Eric Braaten and Agustin Nieto
*Department of Physics and Astronomy, Northwestern University, Evanston, IL 60208*
One of the basic characteristics of a plasma is the screening of electric fields. The field created by a static charge falls off exponentially beyond the screening radius, whose inverse is called the Debye mass $m_D$. In a quark-gluon plasma at high temperature $T$, chromoelectric fields are believed to be screened in a similar way. However it has proven to be difficult to give a precise definition to the Debye mass in perturbation theory. At leading order in the coupling constant $g$, $m_D$ is proportional to $g T$. The next-to-leading order correction to $m_D$ has been calculated using a resummed perturbation theory in which gluon propagator corrections of order $g^2T^2$ are summed up to all orders [@rebhan; @bn]. The correction is gauge-invariant but infrared-divergent, indicating a sensitivity to nonperturbative effects involving the scale $g^2 T$.
Debye screening can also be studied nonperturbatively using lattice simulations [@attig; @gao; @kark]. One of the simplest probes of Debye screening is the correlator of two Polyakov loop operators as a function of their separation $R$. At lowest order in resummed perturbation theory, the behavior of the correlator is predicted to be $\exp (- 2 m_D R)/R^2$. By fitting the measured correlator to this form, one can extract a value for $m_D$. However the resummed perturbation expansion is known to break down at higher orders, due to contributions that involve the exchange of magnetostatic gluons. These contributions, which are inherently nonperturbative, can be expected to dominate at distances $R$ much greater than $1/m_D$. This raises questions about the utility of determining $m_D$ by fitting the correlator to a leading order expression.
In this paper, we determine the true asymptotic behavior of the Polyakov loop correlator at large $R$. We use effective field theory methods to express the Polyakov loop operator in terms of operators in 3-dimensional QCD. The asymptotic behavior of the Polyakov loop correlator is then given by a simple correlator in this effective theory. It has the form $\exp(- M_H R)/R$, where $M_H$ is the mass of the lowest glueball state in (2+1)-dimensional QCD. The coefficient of the exponential is of order $g^{12}$. Our result implies that the asymptotic behavior of the Polyakov loop correlator is dominated by magnetostatic effects which have little to do with Debye screening.
We wish to study the correlator of Polyakov loop operators in thermal QCD in 4 space-time dimensions with temperature $T$. The fundamental fields are the gauge field $A_\mu({\bf x},\tau)$, which takes values in the $SU(N_c)$ algebra, and the quark fields $\psi^i({\bf x}, \tau)$, whose indices range over $N_c$ colors and $N_f$ flavors. The gluon fields satisfy periodic boundary conditions in the Euclidean time $\tau$ with period $\beta = 1/T$, while the quark fields obey antiperiodic boundary conditions. The Polyakov loop operator is given by the trace of a path-ordered exponential: $$L({\bf x}) \;=\; {1 \over N_c} {\rm tr} \;
{\cal P} \exp \left( - i g \int_0^\beta d \tau A_0({\bf x},\tau) \right) .
\label{PLO}$$ The connected part of the correlator of two Polyakov loop operators separated by a distance $R$ is $$C(R) \;=\;
\langle L^\dagger({\bf 0}) \; L({\bf R}) \rangle
\;-\; \langle L({\bf 0}) \rangle^2 .
\label{CPL}$$ The Polyakov loop operator (\[PLO\]) creates two or more electric gluons ($A_0$ fields). The diagram for the Polyakov loop correlator (\[CPL\]) which is leading order in a naive expansion in powers of $g$ is the 1–loop diagram in which two electrostatic gluons are exchanged. It gives the asymptotic behavior $C(R) \sim 1/R^2$. This is not the correct asymptotic behavior for two reasons. First, thermal loop corrections generate a Debye screening mass for electrostatic gluons, so that the potential due to the exchange of an electrostatic gluon actually falls off exponentially, rather than like $1/R$ as in naive perturbation theory. Secondly, the true asymptotic behavior at sufficiently large temperature actually comes from higher order diagrams that involve the exchange of magnetostatic gluons. Our problem is to determine this asymptotic behavior.
An elegant way to solve this problem is to construct a sequence of two effective field theories which reproduce static correlation functions at successively longer distances. Thermal QCD can be used to calculate the static correlator (\[CPL\]) for any separation $R$. Ordinary perturbation theory in $g^2$ is accurate only for $R$ of order $1/T$ or smaller, but the correlator can be calculated for larger $R$ by using nonperturbative methods such as lattice gauge theory simulations. The first of the two effective field theories is constructed so that it reproduces static correlators at distances $R$ of order $1/(gT)$ or larger. In this effective theory, perturbation theory in $g$ can be used to calculate correlators at distances of order $1/(gT)$, but lattice simulations are required at larger $R$. The second effective field theory is constructed so that it reproduces correlators at distances of order $1/(g^2 T)$ or larger. This field theory is completely nonperturbative and the correlators must be calculated by lattice simulations. Nevertheless, we can use this field theory to determine unambiguously the asymptotic behavior of the Polyakov loop correlator.
The first effective field theory, which we call electrostatic QCD (EQCD), is a 3-dimensional Euclidean field theory that contains the electrostatic gluon field $A_0({\bf x})$ and the magnetostatic gluon field $A_i({\bf x})$. Up to a normalization, they can be identified with the zero-frequency modes of the gluon field $A_\mu({\bf x}, \tau)$ for thermal QCD in a static gauge [@nadkarni1]. The action for the effective field theory is $$S_{\rm EQCD} \;=\; \int d^3x \bigg\{
{1 \over 2} {\rm tr} (G_{ij} G_{ij})
\;+\; {\rm tr} (D_i A_0 D_i A_0)
\;+\; m_{\rm el}^2 \; {\rm tr} (A_0 A_0)
\;+\; \delta {\cal L}_{\rm EQCD} \bigg\} ,
\label{EQCD}$$ where $G_{ij} = \partial_i A_j - \partial_j A_i + ig_3 [A_i,A_j]$ is the magnetostatic field strength, $D_i A_0 = \partial_i A_0 + i g_3 [A_i,A_0]$, and $g_3$ is the coupling constant of the 3-dimensional gauge theory. The action for this effective field theory is invariant under static $SU(N_c)$ gauge transformations. If the fields $A_0$ and $A_i$ are assigned dimension $1/2$, then the operators shown explicitly in (\[EQCD\]) have dimensions 3, 3, and 1. The term $\delta {\cal L}_{\rm EQCD}$ in (\[EQCD\]) includes all other local gauge-invariant operators of dimension 2 and higher that can be constructed out of $A_0$ and $A_i$. The effective theory EQCD is completely equivalent to thermal QCD at distance scales of order $1/(gT)$ or larger. The gauge coupling constant $g_3$, the mass parameter $m_{\rm el}^2$, and the parameters in $\delta {\cal L}_{\rm EQCD}$ can be tuned as functions of $T$ so that correlators of gauge invariant operators in EQCD agree with the corresponding static correlators in thermal QCD to any desired accuracy for $R \gg 1/T$ [@lepage]. By matching the two theories at tree level, we find that the gauge coupling constant is $g_3 = g(T) \sqrt{T}$, where $g(T)$ is the running coupling constant at the momentum scale $T$. The mass parameter $m_{\rm el}^2$ in (\[EQCD\]) is the contribution to the square of the Debye screening mass from short distances of order $1/T$. At leading order in $g$, it is $$m_{\rm el}^2 \;=\; {2 N_c + N_f \over 6} g^2(T) T^2 .
\label{mel}$$ The coefficients of some of the higher dimension operators in $\delta {\cal L}_{\rm EQCD}$ were recently calculated to leading order by Chapman [@chapman]. Since the parameters in EQCD only take into account the effects of the momentum scale $T$, they can be calculated as perturbation series in $g^2(T)$. For example, the next-to-leading order correction to $m_{\rm el}^2$ is of order $g^4(T)$. The Debye screening mass $m_D$ defined by the location of the pole in the gluon propagator is also given at leading order by (\[mel\]), but $m_D^2$ has corrections of order $g^3$ that arise from the momentum scale $gT$ [@rebhan; @bn].
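To get a feeling for the magnitudes involved, note that for pure glue ($N_c = 3$, $N_f = 0$) the leading-order expression (\[mel\]) reduces to $$m_{\rm el} \;=\; g(T)\, T ,$$ while for $N_c = 3$ with $N_f = 3$ light flavors it gives $m_{\rm el} = \sqrt{3/2}\; g(T)\, T$.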
The effective field theory EQCD was used by Nadkarni to study the Polyakov loop correlator beyond leading order in the coupling constant [@nadkarni2]. In EQCD, the Polyakov loop operator is given by a simple exponential: $$L({\bf x}) \;=\; {1 \over N_c}
{\rm tr} \; \exp \left( - i g A_0({\bf x}) / \sqrt{T} \right).$$ The 1–loop diagram involving the exchange of two electrostatic gluons gives a correlator that falls exponentially due to electric screening: $$C(R) \;\longrightarrow\;
{(N_c^2 - 1) g^4 \over 8 N_c^2 T^2}
\left( {e^{- m_{\rm el} R} \over 4 \pi R} \right)^2 .
\label{CEas}$$ The corrections to this correlator of order $g^6$ were calculated by Nadkarni [@nadkarni2]. They have the asymptotic behavior $e^{-2 m_{\rm el} R} \log(R)/R$. In a recent reexamination of this calculation [@bn], it has been shown that one can extract from it the correction of order $g^3$ to the square of the Debye mass that was first obtained by Rebhan from the pole in the gluon propagator [@rebhan].
As pointed out by Nadkarni [@nadkarni2], the asymptotic behavior of the Polyakov loop correlator comes not from the 1–loop diagram in which two electrostatic gluons are exchanged, but instead from higher–loop diagrams that involve the exchange of magnetostatic gluons. The simplest such diagrams are the 3–loop diagrams in Fig. \[fig1\]. In perturbation theory, magnetostatic gluons remain massless in EQCD, and the diagrams in Fig. \[fig1\] give a contribution to the correlator that falls like $1/R^6$. However this is not the correct asymptotic behavior, since nonperturbative effects become important at a distance $R$ of order $1/(g^2T)$.
In order to determine the true asymptotic behavior of the Polyakov loop correlator, it is useful to construct a second effective field theory that reproduces static correlators at distances $R$ of order $1/(g^2T)$ or larger. This effective theory, which we call magnetostatic QCD (MQCD), is a pure $SU(N_c)$ gauge theory in 3 space dimensions. The only fields are the magnetostatic gluon fields $A_i({\bf x})$. The action is $$S_{\rm MQCD} \;=\; \int d^3x \left\{
{1 \over 2} {\rm tr} (G_{ij} G_{ij})
\;+\; \delta {\cal L}_{\rm MQCD} \right\} ,
\label{MQCD}$$ where $\delta {\cal L}_{\rm MQCD}$ includes all local gauge-invariant operators that can be constructed out of $A_i$. The gauge coupling constant of MQCD and the parameters in $\delta {\cal L}_{\rm MQCD}$ can be tuned so that MQCD is completely equivalent to EQCD, and therefore to thermal QCD, at distances of order $1/(g^2T)$ or larger. If $g(T)$ is sufficiently small, the parameters of MQCD can be obtained by perturbative calculations in EQCD. The expansion parameter is $g_3^2/m_{\rm el}$, which is of order $g(T)$. The gauge theory MQCD itself is inherently nonperturbative. Any perturbative expansion in powers of $g_3$ is hopelessly plagued with infrared divergences. Thus the correlation functions in this theory must be calculated nonperturbatively using lattice simulations.
The static correlators of gauge-invariant operators in EQCD are reproduced at distances $R \gg 1/m_{\rm el}$ by the corresponding operators in MQCD. In EQCD, the Polyakov loop operator creates electrostatic gluons only. It couples to magnetostatic gluons through loop diagrams involving electrostatic gluons, such as the 1–loop diagrams in Fig. \[fig2\]. Because of screening, electrostatic gluons can only propagate over distances of order $1/m_{\rm el}$. Thus for magnetostatic gluons with wavelengths much greater than $1/m_{\rm el}$, the Polyakov loop operator behaves like a point-like operator that creates magnetostatic gluons. It can therefore be expanded out in terms of local gauge-invariant operators constructed out of the field $A_i$: $$L({\bf x}) \;=\;
\lambda_1(g) \; 1
\;+\; {\lambda_{G^2}(g) \over m_{\rm el}^3} \; G^2({\bf x}) \;+\; \ldots,
\label{PLope}$$ where $G^2 \equiv {\rm tr} (G_{ij} G_{ij})$ and $\ldots$ represents operators of dimension 5 or larger, such as $G^3 \equiv g_3 {\rm tr} (G_{ij}G_{jk}G_{ki})$.
Like the parameters in the effective action (\[MQCD\]), the coefficients in the operator expansion (\[PLope\]) are computable in terms of the parameters of the EQCD action using perturbation theory in $g$. Both EQCD and MQCD reproduce the nonperturbative dynamics of thermal QCD at distances much greater than $1/m_{\rm el}$. Their perturbation expansions also give equivalent (although incorrect) descriptions of the long-distance dynamics. Since the coefficients in the operator expansion (\[PLope\]) are insensitive to the long-distance dynamics, the equivalence between perturbative EQCD and perturbative MQCD can be exploited as a device to compute the coefficients.
We proceed to calculate the coefficient $\lambda_{G^2}$ in (\[PLope\]) to leading order in $g$. The simplest quantity that can be used to calculate $\lambda_{G^2}$ is the coupling of the Polyakov loop operator to two long-wavelength magnetostatic gluons, which we denote by $\langle 0 | L({\bf 0}) | gg \rangle$. We take the gluons to have momenta ${\bf k}_1$ and ${\bf k}_2$, vector indices $i$ and $j$, and color indices $a$ and $b$. In perturbative MQCD, we can read off the coupling of the operator $L({\bf 0})$ to the two gluons directly from the expression (\[PLope\]) for the Polyakov loop operator: $$\langle 0| L({\bf 0}) | gg \rangle
\;=\; {2 \lambda_{G^2} \over m_{\rm el}^3} \; \delta^{ab}
\left(- {\bf k}_1 \cdot {\bf k}_2 \delta^{ij} + k_2^i k_1^j \right).
\label{Omgg}$$ In perturbative EQCD, the coupling is given by the sum of the 1–loop diagrams in Fig. \[fig2\]: $$\begin{aligned}
\langle 0 | L({\bf 0}) | gg \rangle \;=\;
{g^4 \over 2} \delta^{ab} \int {d^3p \over (2 \pi)^3}
{1 \over {\bf p}^2 + m_{\rm el}^2}
{1 \over ({\bf p} + {\bf k}_1 + {\bf k}_2)^2 + m_{\rm el}^2}
\nonumber \\
\left( \delta^{ij} + { (2 {\bf p} + {\bf k}_1)^i
(2 {\bf p} + 2 {\bf k}_1 + {\bf k}_2)^j
\over ({\bf p} + {\bf k}_1)^2 + m_{\rm el}^2} \right) .
\label{OmggE}\end{aligned}$$ Expanding the integrand out to second order in ${\bf k}_1$ and ${\bf k}_2$ and evaluating the loop integrals, we find that (\[OmggE\]) reduces to $$\langle 0 | L({\bf 0}) | gg \rangle \;=\;
{g^4 \over 192 \pi m_{\rm el}^3} \delta^{ab}
\left(- {\bf k}_1 \cdot {\bf k}_2 \delta^{ij} + k_2^i k_1^j \right).
\label{Omggk}$$ Comparing (\[Omgg\]) and (\[Omggk\]), we can read off the coefficient $\lambda_{G^2}$: $$\lambda_{G^2} \;=\; {g^4 \over 384 \pi}.
\label{C3}$$ Having determined the coefficient $\lambda_{G^2}$ in the operator expansion (\[PLope\]) to leading order in $g$, we can now express the Polyakov loop correlator (\[CPL\]) in terms of correlators in MQCD: $$C(R) \;=\; \left( {\lambda_{G^2} \over m_{\rm el}^3} \right)^2
\langle G^2({\bf 0}) G^2({\bf R}) \rangle \;+\; \ldots \; .
\label{CGG}$$ The $\ldots$ in (\[CGG\]) represents the contributions of higher dimension operators in the operator expansion (\[PLope\]), such as $G^3({\bf x})$.
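As an aid to the reader who wishes to reproduce (\[Omggk\]) from (\[OmggE\]): after the integrand is expanded to second order in ${\bf k}_1$ and ${\bf k}_2$, odd powers of ${\bf p}$ are dropped and the tensor structures are reduced by symmetric integration ($p^i p^j \rightarrow \delta^{ij}\,{\bf p}^2/3$, etc.), the remaining scalar integrals are all of the elementary form $$\int {d^3p \over (2 \pi)^3}\, {1 \over ({\bf p}^2 + m_{\rm el}^2)^a}
\;=\; {\Gamma(a - \textstyle{3 \over 2}) \over (4\pi)^{3/2}\, \Gamma(a)}\; m_{\rm el}^{3-2a} ,$$ e.g. $1/(8 \pi m_{\rm el})$ for $a=2$.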
The asymptotic behavior of a correlator in MQCD, such as the one in (\[CGG\]), is related to the spectrum of QCD in (2+1) space-time dimensions. This is a confining gauge theory with a dynamically generated mass gap $M_H$ between the vacuum and the state of next lowest energy. The single particle states in the spectrum are bound states of gluons (glueballs), and $M_H$ is the mass of the lightest glueball. Assuming that the lowest glueball $H$ is a scalar particle, the Fourier transform of the correlator $\langle G^2({\bf 0}) G^2({\bf R}) \rangle$ has a pole at $k^2 = - M_H^2$, as well as poles and branch cuts that are farther from the real $k$ axis. The asymptotic behavior in $R$ is dominated by the pole at $k^2 = - M_H^2$. Denoting the residue of the pole by $|\langle 0 | G^2({\bf 0}) | H \rangle|^2$, the asymptotic behavior is $$C(R) \;\longrightarrow\; \left( {\lambda_{G^2} \over m_{\rm el}^3} \right)^2
|\langle 0 | G^2({\bf 0}) | H \rangle |^2 {e^{- M_H R} \over 4 \pi R}.
\label{CMas}$$ Both the mass $M_H$ and the coupling strength $\langle 0 | G^2({\bf 0}) | H \rangle$ can be calculated using lattice simulations of MQCD. By dimensional analysis, $M_H$ is proportional to $g^2 T$, while $\langle 0 | G^2({\bf 0}) | H \rangle$ is proportional to $(g^2T)^{5/2}$. The overall coefficient of the exponential in (\[CMas\]) is therefore proportional to $g^{12}$.
It is interesting to compare the asymptotic behavior (\[CMas\]) with the result one would obtain at leading order in perturbation theory in MQCD. The leading order diagram is the 1–loop diagram in which two magnetostatic gluons are exchanged. The possibility of a dynamically-generated magnetic screening mass $m_{\rm mag}$ can be taken into account by replacing the propagators $1/{\bf p}^2$ of the gluons by $1/({\bf p}^2 + m_{\rm mag}^2)$. The resulting expression for the correlator is $$C(R) \;\approx\; 2 (N_c^2 -1)
\left( {\lambda_{G^2} \over m_{\rm el}^3} \right)^2
\left( {e^{- m_{\rm mag} R} \over 4 \pi R^3} \right)^2
\left[ 6 + 12 m_{\rm mag} R + 10 (m_{\rm mag} R)^2 + 4 (m_{\rm mag} R)^3
\right] .$$ The perturbative result, which is obtained by setting $m_{\rm mag} = 0$, falls like $1/R^6$. In the presence of magnetic screening, the asymptotic behavior is $e^{-2 m_{\rm mag} R}/R^3$. Thus this model does not give the same asymptotic behavior as (\[CMas\]), even if we make the identification $M_H = 2 m_{\rm mag}$.
The correlator of Polyakov loops has been calculated nonperturbatively using lattice simulations [@attig; @gao; @kark]. Recent studies have found that, at temperatures well above the quark–gluon plasma phase transition, the behavior of the correlator at large $R$ is consistent with the form $e^{- \mu R}/R^n$ with $n$ in the range $1 < n < 2$. At the highest temperatures available, the preferred value of $n$ is close to 2, as predicted by the leading electrostatic contribution (\[CEas\]). At lower temperatures, the preferred value of $n$ is closer to 1, consistent with the true asymptotic result (\[CMas\]). These numerical results have a simple interpretation. The electrostatic contribution (\[CEas\]) falls off like $e^{-2 m_{\rm el} R}/(T R)^2$, with a coefficient proportional to $g^4$. The magnetostatic contribution (\[CMas\]) falls off more slowly like $e^{-M_H R}/(T R)$, but its coefficient is proportional to $g^{12}$. At the highest temperatures that have been probed by lattice simulations, the running coupling constant $g(T)$ is quite small. Because it has a very small coefficient, the magnetostatic contribution will probably not dominate over the electrostatic contribution until $R$ is much larger than the size of present lattices. Thus the measured correlator behaves like $e^{- \mu R}/R^2$, and the mass $\mu$ extracted by fitting the correlator to this form can be interpreted as twice the Debye screening mass. At lower temperatures, the magnetostatic contribution is not so strongly suppressed and the true asymptotic behavior $e^{-\mu R}/R$ is probably observed on the lattice. If this interpretation of the numerical simulations is correct, then the mass $\mu$ extracted from fitting the correlator to the form $e^{- \mu R}/R$ has nothing to do with Debye screening, but instead is related to magnetostatic effects. This interpretation could be verified by calculating $M_H$ and $\langle0|G^2({\bf 0})|H\rangle$ using a lattice simulation of MQCD. One could then use (\[CMas\]) to predict quantitatively how large $R$ must be in order to reach the truly asymptotic region of the Polyakov loop correlator.
This work was supported in part by the U.S. Department of Energy, Division of High Energy Physics, under Grant DE-FG02-91-ER40684, and by the Ministerio de Educación y Ciencia of Spain. We are grateful to A. Rebhan for pointing out an error in a draft of this paper which significantly altered the conclusions.
[99]{}
A. Rebhan, Phys. Rev. [**D47**]{}, R3967 (1993).
E. Braaten and A. Nieto, Northwestern preprint NUHEP-94-18 (August 1994), to be published in Phys. Rev. Lett.
N. Attig [*et al.*]{}, Phys. Lett. [**B209**]{}, 65 (1988); J. Engels, F. Karsch, and H. Satz, Nucl. Phys. [**B315**]{}, 419 (1989).
F.R. Brown [*et al.*]{}, Phys. Rev. Lett. [**61**]{}, 2058 (1988); M. Gao, Phys. Rev. [**D41**]{}, 626 (1990).
A. Irbäck [*et al.*]{}, Nucl. Phys. [**B363**]{}, 34 (1991); L. Kärkkäinen [*et al.*]{}, Phys. Lett. [**B282**]{}, 121 (1992); Nucl. Phys. [**B395**]{}, 733 (1993).
G.P. Lepage, “What is Renormalization”, in [*From Actions to Answers*]{}, edited by T. DeGrand and D. Toussaint (World Scientific, 1989).
S. Nadkarni, Phys. Rev. [**D27**]{}, 917 (1983).
S. Chapman, Regensburg preprint (hep-ph/9407313).
S. Nadkarni, Phys. Rev. [**D33**]{}, 3738 (1986).
Figure Captions {#figure-captions .unnumbered}
===============
1. \[fig1\] Three–loop diagrams in EQCD that contribute to the correlator of two Polyakov loop operators. The solid lines are electrostatic gluons and the wavy lines are magnetostatic gluons.
2. \[fig2\] One–loop diagrams in EQCD that couple a Polyakov loop operator to two magnetostatic gluons.
|
---
abstract: 'Our numerical simulations show that axisymmetric, torsional, magneto-elastic oscillations of magnetars with a superfluid core can explain the whole range of observed quasi-periodic oscillations (QPOs) in the giant flares of soft gamma-ray repeaters. There exist constant phase, magneto-elastic QPOs at both low ($f<150$Hz) and high frequencies ($f>500$Hz), in full agreement with observations. The range of magnetic field strengths required to match the observed QPO frequencies agrees with that from spin-down estimates. These results strongly suggest that neutrons in magnetar cores are superfluid.'
author:
- Michael Gabler
- 'Pablo Cerdá-Durán'
- Nikolaos Stergioulas
- 'José A. Font'
- Ewald Müller
bibliography:
- 'magnetar.bib'
title: 'Imprints of superfluidity on magneto-elastic QPOs of SGRs'
---
Neutron stars are perfect astrophysical laboratories to study the equation of state (EoS) of matter at supra-nuclear densities, i.e., at conditions impossible to replicate on Earth. Giant flares of Soft Gamma-ray Repeaters (SGRs) are very promising events that can be used to obtain information about the structure of neutron stars, since it is believed that their sources are highly magnetized neutron stars (magnetars) [@Duncan1992] suffering a global rearrangement of the magnetic field, and possibly involving a fracture of the solid crust. In the X-ray light curves of two of the three giant flares detected so far, SGR 1806-20 and SGR 1900+14, a number of Quasi-Periodic Oscillations (QPOs) have been observed [@Israel2005; @Watts2007]. This may have been the first detection of neutron star oscillations, which provide a possibility for studying such compact stars through asteroseismology. The observed frequencies fall into two categories, [*low frequency QPOs*]{} between a few tens of Hz and up to $150\,$Hz observed in both events, and [*high frequency QPOs*]{} above $500\,$Hz, which are only observed in the 2004 giant flare. Some QPO frequencies roughly match those of discrete crustal shear modes in non-magnetized stars, namely $n=0$ torsional modes (nodeless in the radial direction) for the low frequency QPOs and $n\geq1$ modes for the high frequency QPOs (see [@Duncan1998; @Strohmayer2005; @Piro2005; @Sotani2007; @Samuelsson2007] and references therein). However, these crustal modes are quickly damped by the magnetic field in the core [@Levin2007; @Gabler2011letter; @Gabler2012; @Colaiuda2011; @vanHoven2011; @vanHoven2012; @Gabler2013].
On the other hand, torsional Alfvén oscillations (fundamental mode $\sim30\,$Hz), i.e. QPOs trapped at turning-points or edges of the Alfvén continuum of the highly magnetized core, can also have frequencies similar to those of the observed QPOs for magnetar field strengths of order $B\sim10^{15}\,$G, with the additional attractive feature of overtones appearing at near-integer ratios [@Sotani2008; @Cerda2009; @Colaiuda2009].
The Alfvén QPO model extended to magneto-elastic QPOs and different types of magnetic fields [@Gabler2012; @Gabler2013] explains the observed low frequency QPOs as excitations of a fundamental turning-point QPO and of several overtones. However, the observation of high frequency QPOs poses a problem for this model, because the first overtone ($n=1$) crustal shear mode is quickly absorbed into the Alfvén continuum [@Gabler2012; @vanHoven2012] and because there is no known mechanism to excite a specific high-order overtone of the turning-point magneto-elastic QPOs with the appropriate frequencies. A model explaining both low- and high-frequency QPOs would thus be a significant step towards a better understanding of neutron star interiors.
Previous models have considered a normal fluid (i.e., non-superfluid) consisting of neutrons, protons, and electrons in the core of the neutron star. This is a valid approach if the interaction between the different species is strong. However, theoretical calculations favor the presence of superfluid neutrons [@Baym1969]. This idea is supported by the theory of pulsar glitches [@Anderson1975] and by the fact that the cooling curve of CasA is consistent with a phase transition to superfluid neutrons [@Shternin2011; @Page2011]. In this case the matter in the core of neutron stars cannot be described by a single-fluid approach. The effect of superfluidity in the oscillation spectrum of unmagnetized stars has been estimated in [@Mendell1991; @Mendell1998; @Andersson2002; @Andersson2004; @Prix2002; @Chamel2008b] and in the context of magnetars in [@Glampedakis2011a; @Passamonti2013; @vanHoven2008; @vanHoven2011; @vanHoven2012; @Andersson2009]. The main consequence of a superfluid core is an increase in frequency of the Alfvén continuum bands by a factor of several with respect to the normal fluid, for the same magnetic field strength. It was suggested in [@Passamonti2013] that such an increase (in conjunction with stratification) could account for the observed high frequency QPOs as fundamental, polar ($m=2$) non-axisymmetric Alfvén modes, although this model cannot simultaneously accommodate the lowest observed frequency QPOs. How superfluid neutrons in the crust would affect the spectrum of shear oscillations was studied both for magnetized and unmagnetized models in [@Andersson2009; @Samuelsson2009; @Sotani2013].
Here, we investigate the effect of a superfluid core on the turning-point magneto-elastic QPOs of magnetars. Superfluidity is handled in our model by decoupling the superfluid neutrons in the core of the neutron star completely, i.e., we assume that there is no entrainment between neutrons and protons, and no direct interaction between both species. Hence, neutrons affect protons only through their gravitational interaction. Protons are expected to be superconducting in the core of neutron stars [@Baym1969], but the magnetic field inside a magnetar may suppress superconductivity beyond a critical field strength that is estimated to be in the range $10^{15}\,$G $\lesssim B_\mathrm{core}\lesssim10^{16}\,$G [@Glampedakis2011a]. Therefore, we consider normal (non-superconducting) protons in the core. In addition, since magnetars are slow rotators with periods of $\sim10\,$s, we neglect effects due to rotation that could create superfluid vortices.
The results presented here are obtained with the numerical code [MCOCOA]{} that solves the general-relativistic MHD equations [@Cerda2008; @Cerda2009] including a treatment of elastic terms for the neutron star crust [@Gabler2011letter; @Gabler2012; @Gabler2013]. The influence of a superfluid phase of neutrons coexisting with a normal fluid can be described by the entrainment, a measure of the interaction of the different species. In the crust the interaction of the superfluid neutron component with the nuclei of the lattice due to Bragg reflection is so strong [@Chamel2012] that the perturbation of the lattice will carry along most of the superfluid neutrons. Therefore, we assume complete entrainment and treat the crust as if it was a single fluid with shear, including the total mass of all constituents. In the core we assume for simplicity that the neutrons are completely decoupled, i.e., only protons are dynamically linked to the magneto-elastic oscillations. This extreme approximation (complete decoupling) complements the one in our previous work, where we assumed complete coupling. The proton fraction in the core has been estimated to be $X_p\sim0.05$ [@Glendenning1985; @Wiringa1988; @Akmal1998; @Douchin2001; @Hebeler2010], and we have assumed this value in all our calculations (a more detailed treatment would consider a particular stratification). The dynamical behavior of electrons can be neglected because of their small mass.
For the evolution we have to solve the momentum and the induction equation. The latter remains unchanged compared to our previous work [@Cerda2009; @Gabler2012], while the former one holds now for protons only. Effectively, we change the momentum of the fluid in the core in the $\varphi$-direction by replacing the total rest-mass density $\rho$ by the rest-mass density of protons $\rho_{p} = X_p
\rho$ only. The superfluid neutrons are not influenced by the torsional magneto-elastic oscillations.
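A simple estimate, used repeatedly in what follows, quantifies the effect of this decoupling: since $v_A^2 \sim B^2/\rho$ and only the protons (a fraction $X_p$ of the rest-mass density) participate, the Alfvén speed in the core at fixed magnetic field increases by $$\sqrt{1/X_p} = \sqrt{1/0.05} \approx 4.5$$ with respect to the normal fluid case, and the frequencies of the core Alfvén oscillations scale up accordingly.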
Since the system under consideration consists of crust and core that have different properties, it is not obvious whether there exist discrete eigenmodes. Hence, to differentiate between discrete and continuum oscillations we use the phase of the Fourier transform of the time evolution. For discrete modes the whole star oscillates with the same phase. In contrast, the continuum of torsional Alfvén oscillations gives rise to a continuous phase change as one crosses field lines, because the eigenfrequencies of neighboring field lines are slightly different, i.e., the oscillations at these lines are out of phase, leading to phase mixing and damping of the oscillations.
For a magnetar model with a normal fluid core, a crust, and a poloidal magnetic field there exist three distinct types of torsional magneto-elastic QPOs [@Gabler2012]: For weak surface magnetic fields, $B \lesssim 10^{15}\,$G, the QPOs are reflected at the core-crust interface and the different field lines are weakly coupled through this boundary. At strong magnetic fields, $B > 5\times
10^{15}\,$G, the field dominates over the crustal shear modulus, i.e., the magneto-elastic QPOs reach the surface and individual field lines are coupled by the entire crust instead of only at the core-crust interface. For intermediate field strengths the magneto-elastic QPOs change from being reflected at the core-crust interface to being reflected at the surface of the star. In all these cases no discrete modes exist.
If all superfluid neutrons in the core are decoupled we find that there are still no discrete modes for weak magnetic fields of $B
\lesssim \mathrm{few} \times 10^{14}\,$G. However, at typical magnetar surface field strengths of $B \sim 10^{15}\,$G there exist QPOs with an almost constant phase in nearly the whole open field line region. This transition is exemplified in Fig.\[fig\_modes\] for the lowest frequency QPO, but holds for other QPOs as well. The particular model shown in Fig.\[fig\_modes\] has a mass of $1.4~\mathrm{M}_\odot$ and was computed with the APR EoS in the core [@Akmal1998] and the DH EoS in the crust [@Douchin2001].
![Effective amplitude [*top row*]{} and phase [*bottom row*]{} of a particular QPO (see text for details) ranging from white-blue (minimum) to orange-red (maximum), and from $\theta=-\pi/2$ (blue) to $\theta=\pi/2$ (orange-red), respectively. The crust is indicated by the dashed black line, and magnetic field lines are given by the magenta lines. The bottom right panel demonstrates that for a typical magnetar surface field strength a discrete QPO with a constant phase (modulo numerical inaccuracies at a nodal line) exists in the whole open field line region. One can also recognize that the phase changes in regions with vanishing amplitudes only.[]{data-label="fig_modes"}](modes){width=".46\textwidth"}
For $B \gg 10^{15}\,$G we expect the continuum to appear again, which is present in simulations without crust.
The different regimes can be explained with the speed of perturbations propagating along magnetic field lines, which exhibits a discontinuity at the core-crust interface. For weak magnetic fields $B \lesssim
10^{14}\,$G, the shear speed is much higher than the Alfvén speed at the base of the crust, i.e., there is a large jump in the propagation speed at the crust-core interface. This leads to a significant reflection of the QPOs at the core-crust interface, confining the QPOs mostly to the core (Fig.\[fig\_modes\], top left panel). In the superfluid case, the Alfvén speed in the core is a factor $\sqrt{1/X_p}$ higher than for a normal fluid, because $v_A^2
\sim B^2 / \rho$ and only protons (a fraction $X_p$ of the total mass) take part in the magneto-elastic oscillations. Hence, the jump in propagation speed at the crust-core interface is significantly smaller giving rise to less reflection and stronger penetration of the magneto-elastic oscillations into the crust. At $ B \sim 10^{15}\,$G the jump vanishes, and the Alfvén speed in the core approaches the shear speed at the base of the crust. The strong coupling of the magnetic field lines by the crust then leads to the appearance of oscillations with constant phase in the region of open field lines. Similar effects were observed in [@Levin2007] and [@Cerda2009] for strong (numerical) viscosity.
The above effect is less pronounced in the normal fluid case, because there the transition from reflection at the core-crust interface to dominance of magnetic over shear effects in the crust occurs between $10^{15}\,$G$\,< B < 5\times 10^{15}\,$G, while in the superfluid case the transition already starts at a few $10^{14}\,$G. In addition, a more massive core takes part in the magneto-elastic oscillations in the normal fluid case, i.e., the coupling to the crust is weaker.
We now turn to the high frequency QPOs with $f>500\,$Hz, whose preferential excitation could not easily be justified in the magneto-elastic model with a normal fluid core. In a first attempt to include the effects of superfluidity, VanHoven & Levin [@vanHoven2012] assumed that only 5% of the core takes part in the magneto-elastic oscillations. In their simulations, the $n=1$ crustal shear modes are absorbed very efficiently into the core when initially only the crust is excited. In Fig.\[fig\_damp\] we show the corresponding overlap integral (a measure for the excitation of a given crustal mode, see [@Gabler2012]) for a simulation with $B =
10^{15}\,$G and with a $n=1$, $l=2$ crustal shear mode as initial perturbation. We obtain initial damping time scales of a few milliseconds for the superfluid and normal fluid cases, in broad agreement with [@vanHoven2012]. However, the damping does not continue at the initial rate (see the inset in Fig.\[fig\_damp\]). After about $10\,$ms almost stable oscillations with modulating amplitudes persist at a similar frequency for both fluid models with much lower damping rates.
![Overlap integral with the $n=1$ crustal shear mode at $B=10^{15}\,$G for normal and superfluid models. The inset shows a magnification of the amplitude from $10$ to $50\,$ms.[]{data-label="fig_damp"}](damp){width=".48\textwidth"}
Fourier transforming the data for evolution times of about $1\,$s we find that the crustal $n=1$ shear mode ($f \sim 760\,$Hz) excites a global magneto-elastic QPO with $f \sim 893\,$Hz in the superfluid case. For the normal fluid we find three magneto-elastic QPOs in the crust with $f \sim 782,\,806,$ and $829\,$Hz, respectively. In Fig.\[fig\_FFT\] we show the Fourier amplitude of the (azimuthal) velocity inside the crust close to the equator for the normal fluid and close to the pole for the superfluid model. The corresponding spatial structures of the strongest QPOs of both models are displayed in Fig.\[fig\_structure\].
![Fourier transform of the velocity for a superfluid model near the polar axis ($\theta=0.1$; black line), and for a normal fluid model near the equator ($\theta=1.5$; red line). The dashed magenta line indicates the frequency $f=760\,$Hz of the $n=1$, $l=2$ crustal shear mode that was used as initial perturbation.[]{data-label="fig_FFT"}](FFT_n){width=".48\textwidth"}
![Spatial distributions of the Fourier amplitudes of the velocity at the peak frequencies in Fig.\[fig\_FFT\]. Left panel: QPO at $f \sim 893\,$Hz and $B=5.4\times 10^{14}\,$G for decoupled superfluid neutrons. Right panel: strongest shear dominated magneto-elastic ($n=1$) QPO in the crust at $f \sim
782\,$Hz and $B=10^{15}\,$G for a normal fluid core. Magnetic field lines are shown by magenta lines.[]{data-label="fig_structure"}](modes_n1){width=".48\textwidth"}
In both cases the radial structure of the $n=1$ QPO remains similar to that of the pure crustal shear mode inside the crust. However, its angular dependence differs considerably from that of the original spherical harmonic one due to the interaction with the core (see also [@Gabler2012] for the normal fluid case).
The $n=1$ crustal shear modes propagate in radial direction. Because the magnetic field lines are almost orthogonal to this direction close to the equator, the coupling to the core is very weak in that region. This explains the structure of the QPO for the normal fluid case, together with the fact that magneto-elastic oscillations in the core are strongly reflected at the core-crust interface, which does not allow for a resonance between crust and core oscillations. At stronger magnetic fields, the Alfvén character of the magneto-elastic oscillations dominates, before the jump in propagation speed at the core-crust interface disappears. In contrast, in the superfluid case the strongest QPO at $f\sim
893\,$Hz has its maximum close to the polar axis. Here, the shear terms dominate in the crust, and a higher magneto-elastic overtone in the core can enter in resonance.
Overall, our results allow for a better understanding of the observed frequencies in SGR giant flares. The inclusion of superfluidity seems to be a key ingredient which helps in several ways: Firstly, the observed high frequency QPOs can be explained as global magneto-elastic QPOs resulting from a resonance between the crust and a high ($\sim 40$) magneto-elastic overtone. This is only possible if there are superfluid neutrons in the core. There still may exist oscillations at frequencies above $500\,$Hz in models with normal fluid cores, but since these QPOs are limited to a region close to the equator they can only affect a small region of the magnetosphere close to the star. This makes it difficult to explain why QPOs are observed at different rotational phases [@Strohmayer2006]. Secondly, the phase of the magneto-elastic QPOs becomes constant for magnetic fields between several $10^{14}$G to several $10^{15}\,$G. Due to the absence of phase mixing we expect that these QPOs are longer lived than magneto-elastic QPOs of normal fluid cores. We plan to investigate this in forthcoming work. Thirdly, the necessary magnetic field to match the low frequency QPOs $f \sim 30\,$Hz decreases by a factor of $\sqrt{1/X_p}$ which reduces our previous estimates $B \sim 1 - 4
\times 10^{15}\,$G [@Gabler2012] to $B \sim 2\times 10^{14} -
10^{15}\,$G, in good agreement with [@vanHoven2008; @vanHoven2012] and current spin down estimates for magnetars showing giant flares ($6\times 10^{14} \lesssim B \lesssim 2.1\times 10^{15}\,$G). A more realistic treatment of the entrainment is likely to further decrease our magnetic field estimates slightly [@Andersson2009].
These results not only indicate the presence of a superfluid phase of neutrons in the core of SGRs, but may also constrain the EoS of the crust significantly. The high frequency QPO and the threshold for the outbreak of the low frequency QPOs [@Gabler2012] give independent limits on the shear modulus of the crust, and hence on the EoS. We plan to investigate this in detail in forthcoming work. For the first time, our magnetar model that includes the effects of the crust, the magnetic field, and superfluidity can accommodate simultaneously all types of observed QPO frequencies, low ($f<
150\,$Hz) and high ($f> 500\,$Hz), in the giant flares of SGRs. For a particular model with a surface magnetic field strength of $B \approx
1.4\times 10^{15}\,$G we find low frequency oscillations at $21$, $30$, $43$, $58$, $70$, $74$, $84$, ${89}$, $98$, $119$, $129$, $135$, ${149}$, and $162\,$Hz that are in broad agreement with the QPOs observed in SGR 1806-20 at $18$, $26$, $30$, $92$, and $150\,$Hz.
More details of the theoretical framework and a careful analysis will be provided in forthcoming papers. The next major step towards a complete model for giant flare QPOs consists in finding a modulation mechanism of the emission in the magnetosphere.
Work supported by the Collaborative Research Center on Gravitational Wave Astronomy of the Deutsche Forschungsgemeinschaft (DFG SFB/Transregio 7), the Spanish [*Ministerio de Educación y Ciencia*]{} (AYA 2010-21097-C03-01) the [*Generalitat Valenciana*]{} (PROMETEO-2009-103), the ERC Starting Grant CAMAP-259276, an IKY-DAAD exchange grant (IKYDA 2012) and by CompStar, a Research Networking Programme of the European Science Foundation. N.S. also acknowledges support by an Excellence Grant for Basic Research (Research Committee of the Aristotle University of Thessaloniki, 2012). Computations were performed at the [*Servei d’Informàtica de la Universitat de València*]{}.
|
---
abstract: 'We describe a bordered version of totally twisted Khovanov homology. We first twist Roberts’s type $D$ structure by adding a “vertical" type $D$ structure which generalizes the vertical map in twisted tangle homology. One of the distinct advantages of our type $D$ structure is that it is homotopy equivalent to a type $D$ structure supported on “spanning tree" generators. We also describe how to twist Roberts’s type $A$ structure for a left tangle in such a way that pairing our type $A$ and type $D$ structures will result in the totally twisted Khovanov homology.'
author:
- 'Nguyen D. Duong'
title: TWISTING BORDERED KHOVANOV HOMOLOGY
---
|
---
author:
- 'Rui Peng Liu[^1]'
title: 'On Feasibility of Sample Average Approximation Solutions[^2]'
---
[^1]: School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA ().
[^2]: Submitted to the editors DATE.
|
---
abstract: 'We prove uniqueness of representations of Nica-Toeplitz algebras associated to product systems of $C^*$-correspondences over right LCM semigroups by applying our previous abstract uniqueness results developed for $C^*$-precategories. Our results provide an interpretation of conditions identified in work of Fowler and Fowler-Raeburn, and apply also to their crossed product twisted by a product system, in the new context of right LCM semigroups, as well as to a new, Doplicher-Roberts type $C^*$-algebra associated to the Nica-Toeplitz algebra. As a derived construction we develop Nica-Toeplitz crossed products by actions with completely positive maps. This provides a unified framework for Nica-Toeplitz semigroup crossed products by endomorphisms and by transfer operators. We illustrate these two classes of examples with semigroup $C^*$-algebras of right and left semidirect products.'
address:
- 'Institute of Mathematics, University of Białystok, ul. K. Ciołkowskiego 1M, 15-245 Białystok, Poland'
- 'Department of Mathematics, University of Oslo, PO Box 1053 Blindern, 0316 Oslo, Norway '
author:
- 'Bartosz K. Kwaśniewski'
- 'Nadia S. Larsen'
date: '15 June 2017. Revised 17 November 2017 and 18 September 2018.'
title: |
Nica-Toeplitz algebras associated with\
product systems over right LCM semigroups
---
Introduction {#introduction .unnumbered}
============
Product systems of $C^*$-correspondences were introduced by Fowler following ideas of Arveson. As spelled out in [@F99], Fowler’s construction served as motivation for his investigation with Raeburn into uniqueness theorems for $C^*$-algebras arising as certain twisted crossed products over positive cones in quasi-lattice ordered groups, [@FR]. Three uniqueness theorems in this context have dominated the attention: [@Fow-Rae Theorem 2.1], [@F99 Theorem 7.2] and [@FR Theorem 5.1]. All three give a necessary condition for faithfulness of a representation $\pi$ of a Toeplitz-type $C^*$-algebra ${\mathcal T}_X$ or ${\mathcal{NT}}(X)$, where $X$ is generic notation for a single $C^*$-correspondence or a product system of such over a semigroup, and $\pi$ arises from a representation of $X$.
Two aspects are striking here: the first is that the necessary condition, to which we choose to refer as *condition (C)* - for compression or for Coburn, who proved the archetypical result of this form - is only sufficient when the left action in each correspondence is by generalized compacts. The second is that, in the aforementioned results on product systems, an auxiliary $C^*$-algebra is involved. It has the structural appearance of a crossed product twisted by a product system and a somewhat unaccountable involvement in the uniqueness of representations of ${\mathcal{NT}}(X)$.
The first main point we make in the present paper is that there is another $C^*$-algebra for which uniqueness of representations coming from $X$ is precisely encoded, as a *necessary and sufficient condition*, by condition (C). This $C^*$-algebra, which we generically denote $\mathcal{DR}({\mathcal{NT}}(X))$, bears the flavor of a Doplicher-Roberts algebra for ${\mathcal{NT}}(X)$. The second point we make is that uniqueness of a representation $\pi$ of ${\mathcal{NT}}(X)$ can, in good situations, be precisely encoded by a weaker condition than (C) which we call *Toeplitz covariance*. The third point we make is that the strategy for proving these results relies on our previous work on $C^*$-precategories developed in [@kwa-larI], and as a very satisfactory bonus provides a clear picture of how ${\mathcal{NT}}(X)$, the Fowler-Raeburn crossed product twisted by a product system and $\mathcal{DR}({\mathcal{NT}}(X))$ are included in each other, respectively, and how uniqueness of representations on $\mathcal{DR}({\mathcal{NT}}(X))$ sieves down to corresponding results on the smaller subalgebras. Together, these three uniqueness results offer a different picture of endeavors by many hands over several decades. In addition we extend these results beyond the scope of quasi-lattice ordered pairs, which is non-trivial as LCM semigroups allow invertible elements and might not be cancellative, cf. [@kwa-larI].
As an application of our uniqueness results we define a Nica-Toeplitz crossed product for a dynamical system involving a semigroup action of completely positive maps on a $C^*$-algebra. For a single completely positive map, a similar construction was proposed by the first named author in [@kwa-exel]. In this new setup of semigroup actions by completely positive maps our construction models Toeplitz-type crossed products in two important contexts: actions by endomorphisms, see e.g. [@F99] (where the assumptions on the acting semigroup and the conventions on covariance are different), and actions by transfer operators, see e.g. [@Larsen] where the acting semigroup is abelian. We formulate uniqueness theorems for our crossed products, and illustrate the two classes of actions with semigroup $C^*$-algebras as in Li [@Li], through the perspective of algebraic dynamical systems developed by the second named author in collaboration with Brownlowe and Stammeier [@bls2]. The left-semidirect product semigroup $C^*$-algebras coming from [@bls2] will serve to motivate crossed products by transfer operators, and, somehow unexpectedly though in hindsight not that surprisingly, right semidirect product semigroup $C^*$-algebras will motivate crossed products by endomorphisms.
The paper is organized as follows. In a preliminaries section we review briefly the basics of $C^*$-correspondences and product systems of these, after which we collect the main ingredients needed about $C^*$-precategories and their $C^*$-algebras from [@kwa-larI]. In section \[technical subsection\] we associate a $C^*$-precategory to a product system $X$ of $C^*$-correspondences over a right LCM semigroup $P$ and in section \[product systems over LCMs\] we use it to construct a Doplicher-Roberts version $\operatorname{\mathcal{DR}}({\mathcal{NT}}(X))$ and a reduced version of ${\mathcal{NT}}(X)$. We introduce conditions under which representations of $X$ give rise to faithful representations of the core subalgebras of ${\mathcal{NT}}(X)$ and $\operatorname{\mathcal{DR}}({\mathcal{NT}}(X))$. In subsection \[subsect:uniqueness theorems\] we prove uniqueness results for ${\mathcal{NT}}(X)$ and $\operatorname{\mathcal{DR}}({\mathcal{NT}}(X))$ and discuss some implications. In subsection \[Fowler-Raeburn section\] we extend this discussion by introducing $C^*$-algebras $\operatorname{\mathcal{FR}}(X)$ generalizing semigroup $C^*$-algebras twisted by product systems studied by Fowler and Raeburn, see [@FR], [@F99]. In section \[section:NT-cp-ccp-maps\] we introduce Nica-Toeplitz crossed products of a $C^*$-algebra by the action of a right LCM semigroup of completely positive maps, and prove uniqueness results for the two major types of examples, crossed products by endomorphisms and by transfer operators. Finally, in section \[section:semigroupCstar alg\] we show that the Nica-Toeplitz crossed products by endomorphisms and by transfer operators can be perfectly embodied by semigroup $C^*$-algebras associated to right and left semidirect products of semigroups, respectively. By specializing the uniqueness results to these contexts we generalize and complement earlier results from [@LR] and [@bls2].
While we were putting the finishing touches on this article, another paper appeared [@Fle] in which Fletcher sets out to clarify the uniqueness result [@F99 Theorem 7.2] for ${\mathcal{NT}}(X)$ in the context of quasi-lattice ordered pairs.
Acknowledgements
----------------
The research leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) under grant agreement number 621724. B.K. was partially supported by the NCN (National Centre of Science) grant number 2014/14/E/ST1/00525. Part of the work was carried out while B.K. participated in the Simons Semester at IMPAN (Simons Foundation grant 346300 and the Polish Government MNiSW 2015-2019 matching fund), and during the participation of both authors in the program “Classification of operator algebras: complexity, rigidity, and dynamics” at the Mittag-Leffler Institute (Sweden) in January 2016.
Preliminaries
=============
LCM semigroups
--------------
We refer to [@BRRW] and [@bls] and the references therein for basic facts about right LCM semigroups. All semigroups considered in this paper will have an identity $e$. We let $P^*$ be the group of *units*, or invertible elements, in $P$. A *principal right ideal* in $P$ is a right ideal in $P$ of the form $pP=\{ps: s\in P\}$ for some $p\in P$. The relation of inclusion on the principal right ideals induces a left invariant *preorder* on $P$ given by $p \leq q$ when $qP\subseteq pP$. Clearly $\leq$ is a partial order if and only if $P^*=\{e\}$.
A semigroup $P$ is a *right LCM semigroup* if the family $\{pP\}_{p\in P}$ of principal right ideals extended by the empty set is closed under intersections, that is if for every pair of elements $p, q\in P$ we have $pP\cap qP=\emptyset$ or $pP\cap qP=rP$ for some $r\in P$. In the case that $pP\cap qP=rP$, the element $r$ is a *right least common multiple (LCM)* of $p$ and $q$. If $P$ is a right LCM semigroup then we refer to $
J(P):=\{pP\}_{p\in P}\cup\{\emptyset\}
$ as the *semilattice of principal right ideals* of $P$. Right LCMs in a right LCM semigroup are determined up to multiplication from the right by an invertible element. Namely, if $pP\cap qP=rP$, then $pP\cap qP=tP$ if and only if there is $h\in P^*$ such that $t=rh$.
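Two elementary examples may help fix these notions; they are standard and recorded here only for orientation. In the additive semigroup $({\mathbb N},+)$ and in the multiplicative semigroup $({\mathbb N}^{\times},\cdot)$ of positive integers one has $$(p+{\mathbb N})\cap(q+{\mathbb N})=\max(p,q)+{\mathbb N},\qquad p\,{\mathbb N}^{\times}\cap q\,{\mathbb N}^{\times}=\operatorname{lcm}(p,q)\,{\mathbb N}^{\times},$$ so both are right LCM semigroups with trivial group of units, and in the second case right LCMs are the ordinary least common multiples. Similarly, in the free monoid ${\mathbb F}_\Lambda^+$ on a set $\Lambda$ two words $p,q$ satisfy $p{\mathbb F}_\Lambda^+\cap q{\mathbb F}_\Lambda^+\neq\emptyset$ if and only if one of them is a prefix of the other, in which case the intersection is the principal right ideal generated by the longer word.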
[(a)]{.nodecor} One of the best known and most studied classes of examples of right LCM semigroups is that of positive cones in quasi-lattice ordered groups, introduced by Nica [@N]. In fact, $P$ is a positive cone in a weakly quasi-lattice ordered group $(G,P)$ if and only if $P$ is an LCM subsemigroup of a group $G$ such that $P^*=\{e\}$.
[(b)]{.nodecor} In semigroup theory, notions similar to right LCM have been known for some time, see e.g. [@Law2]. New large classes of right LCM semigroups with relevance to $C^*$-algebraic context were identified in [@BRRW]. Semidirect product semigroups which are right LCM semigroups were studied in [@bls; @bls2]. More on this in section \[section:semigroupCstar alg\].
We recall from [@kwa-larI Definition 2.4] that a *controlled map of right LCM semigroups* is an identity preserving homomorphism $\theta:P\to {\mathcal{P}}$ between right LCM semigroups $P,{\mathcal{P}}$ such that $\theta(P^*)={\mathcal{P}}^*$ and for all $s,t\in P$ with $sP\cap tP =rP$ we have $
\theta(s){\mathcal{P}}\cap \theta(t){\mathcal{P}}= \theta(r){\mathcal{P}}$ and $\theta(s)=\theta(t)$ only if $s=t$.
Let $P_i$, $i\in I$, be a family of right LCM semigroups. Put $P:=\prod^\ast_{i\in I} P_i$ and ${\mathcal{P}}:=\bigoplus_{i\in I} P_i$. The homomorphism $\theta:P\to {\mathcal{P}}$ which is the identity on each $P_i$, $i\in I$, is a controlled map of right LCM semigroups, by [@kwa-larI Proposition 2.3].
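For instance, taking $P_i={\mathbb N}$ for every $i$ in an index set $\Lambda$ one obtains the free monoid $P={\mathbb F}_\Lambda^+=\prod^\ast_{i\in\Lambda}{\mathbb N}$ and ${\mathcal{P}}=\bigoplus_{i\in\Lambda}{\mathbb N}$, and the resulting controlled map is the abelianization sending a word to its multidegree: $$\theta(\lambda_1\lambda_2\cdots\lambda_n)=e_{\lambda_1}+e_{\lambda_2}+\cdots+e_{\lambda_n}\in\bigoplus_{i\in\Lambda}{\mathbb N},$$ where $e_\lambda$ denotes the canonical generator of the $\lambda$-th copy of ${\mathbb N}$. Since $\bigoplus_{i\in\Lambda}{\mathbb N}$ sits inside the amenable group $\bigoplus_{i\in\Lambda}{\mathbb Z}$, this special case is relevant, for example, to product systems over free semigroups considered later in the paper.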
$C^*$-correspondences and product systems
-----------------------------------------
The notion of a $C^*$-correspondence $X$ over a $C^*$-algebra $A$ and its associated Toeplitz algebra ${\mathcal T}(X)$ are standard, and we refer to [@Pim; @KPW; @Fow-Rae] for details. We recall from [@F99]\[Fowler’s definition\] that a *product system* over a semigroup $P$ with coefficients in a $C^*$-algebra $A$ is a semigroup $X= \bigsqcup_{p\in P}X_{p}$, with each $X_p$ a $C^*$-correspondence over $A$, equipped with a semigroup homomorphism $d\colon X \to P$ such that $X_p = d^{-1}(p)$ is a $C^*$-correspondence over $A$ for each $p\in P$, $X_e$ is the standard bimodule $_AA_A$, and the multiplication on $X$ extends to isomorphisms $X_p \otimes_A X_q \cong X_{pq}$ for $p,q \in P \setminus \{e\}$ and coincides with the right and left actions of $X_e = A$ on each $X_p$. For each $p\in P$ we write $\langle\cdot,\cdot\rangle$ for the $A$-valued inner product on $X_p$ and we denote $\phi_p$ the homomorphism from $A$ into ${\mathcal L}(X_p)$ which implements the left action of $A$ on $X_p$.
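The simplest case to keep in mind, recorded here only as an illustration, is $P={\mathbb N}$: the multiplication isomorphisms show that a product system over ${\mathbb N}$ is determined, up to isomorphism, by the single $C^*$-correspondence $E:=X_1$, namely $$X_0=A,\qquad X_n\cong E^{\otimes_A n}=\underbrace{E\otimes_A\cdots\otimes_A E}_{n\ \text{factors}},\qquad n\ge 1,$$ with multiplication implemented by the internal tensor product. In this way the theory discussed below specializes, for $P={\mathbb N}$, to that of a single $C^*$-correspondence.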
A *Hilbert $A$-bimodule* is a $C^*$-correspondence which is also a left Hilbert module such that ${}_A\langle x,y\rangle\cdot z = x\cdot\langle y, z\rangle_A$ for all $x,y,z\in X$. An *equivalence $A$-bimodule* is a Hilbert bimodule which is full both as a left and as a right Hilbert module. We say that two Hilbert $A$-bimodules $X$, $Y$ are Morita equivalent if there is an equivalence $A$-bimodule $E$ such that $X\otimes_A E\cong Y\otimes_A E$.
\[rem:on essentiality and Fell bundles\] A product system $X$ is (left) *essential* if each $C^*$-correspondence $X_p$, $p\in P$, is essential. We claim that $X$ is automatically essential whenever the group $P^*$ of units in $P$ is non-trivial. Indeed, for any $h\in P^*\setminus\{e\}$ and $p\in P$ we have natural isomorphisms $$X_p= X_{hh^{-1}p}\cong X_h \otimes_A X_{h^{-1}}\otimes_A X_p \cong X_e \otimes_A X_p \cong \phi_p(A)X_p$$ that give $X_p=\phi_p(A)X_p$. Moreover, isomorphisms $X_h \otimes_A X_{h^{-1}}\cong A_A$ and $X_{h^{-1}} \otimes_A X_{h}\cong A_A$ imply that $X_h$ and $X_{h^{-1}}$ are mutually adjoint Hilbert bimodules, i.e. there is an antilinear isometric bijection $\flat_h:X_{h}\to X_{h^{-1}}$ such that $\flat_h(ab)=\flat_h(b)a$ and $\flat_h(ba)=a\flat_h(b)$ for all $a\in A$ and $b\in X_h$, cf. [@bls2 Remark 6.2]. In particular, the family of Banach spaces $\{X_{h}\}_{h\in P^*}$ together with multiplication inherited from $X$ and involution defined by $b^*:= \flat_h(b)$, for $b\in X_{h}$, $h\in P^*$, is a saturated Fell bundle over the (discrete) group $P^*$, cf. [@exel-book].
Given a product system $X$ and $p, q \in P$ with $p \not= e$, there is a homomorphism $\iota^{pq}_p \colon {\mathcal L}(X_p) \to {\mathcal L}(X_{pq})$ characterized by $$\label{iotapq def}
\iota^{pq}_p(S)(xy) = (Sx)y\text{ for all $x \in X_p$, $y \in
X_{q}$ and $S \in {\mathcal L}(X_p)$.}$$ For each $p\in P$, ${\mathcal K}(A,X_p)$ is a $C^*$-correspondence with $A$-valued inner product $\langle T,S\rangle_A=T^*S$ and pointwise actions. In fact, see [@RaeWill Lemma 2.32], there is a $C^*$-correspondence isomorphism $X_p\cong {\mathcal K}(A,X_p)$ implemented by the map $$\label{C-correspondence isomorphism}
X_p\ni x\mapsto t_x \in {\mathcal K}(A,X_p) \qquad \textrm{ where } t_x(a)=x\cdot a.$$ One defines $\iota^p_e \colon {\mathcal K}(X_e)\to
{\mathcal L}(X_{p})$ by letting $\iota^p_e(t_a)=\phi_p(a)$ for $p\in P$, $a\in A$, see [@SY Section 2.2].
A *representation of the product system* $X$ in a $C^*$-algebra $B$ is a semigroup homomorphism $\psi:X\to B$, where $B$ is viewed as a semigroup with multiplication, such that $(\psi_e,\psi_p)$ is a representation of the $C^*$-correspondence $X_p$, for all $p\in P$, where we put $\psi_p:=\psi|_{X_p}$ for all $p\in P$. The Toeplitz algebra ${\mathcal T}(X)$ is the $C^*$-algebra generated by a universal representation of $X$.
In the case of a quasi-lattice ordered pair $(G, P)$, Fowler introduced in [@F99] the notions of compactly aligned product system over $P$ and Nica covariant representation of it. In [@bls2], these concepts were extended to the case when $P$ is a right LCM semigroup. Given a right LCM semigroup $P$, a product system $X$ over $P$ is called *compactly aligned* if, for all $p,q\in P$ admitting a right LCM $r$, we have $\iota^{r}_p(S) \iota^{r}_q(T) \in {\mathcal K}(X_{r})$ whenever $S \in {\mathcal K}(X_p)$ and $T \in {\mathcal K}(X_q)$. Assume $X$ is a compactly aligned product system over $P$ and let $\psi$ be a representation of $X$ in a $C^*$-algebra. For each $p\in P$, denote by $\psi^{(p)}$ the Pimsner $*$-homomorphism defined on ${\mathcal K}(X_p)$ by $\psi^{(p)}(\Theta_{x,y})=
\psi_p(x)\psi_p(y)^*$ for $x,y\in X_p$. Then $\psi$ is *Nica covariant* if $$\displaystyle \psi^{(p)}(S)\psi^{(q)}(T) =
\begin{cases}
\psi^{(r)}\big(\iota^{r}_p(S)\iota^{r}_q(T)\big)
& \text{if $pP\cap qP=rP$} \\
0 &\text{otherwise}
\end{cases}$$ for all $S \in {\mathcal K}(X_p)$ and $T \in {\mathcal K}(X_q)$ (see also [@F99 Definition 5.7]). The *Nica-Toeplitz algebra* ${\mathcal{NT}}(X)$ is the $C^*$-algebra generated by a Nica covariant representation $i_X$ which is universal in the following sense: if $\psi$ is a Nica covariant Toeplitz representation of $X$ in $B$, then there is a $*$-homomorphism $\psi_* : {\mathcal{NT}}(X) \to B$ such that $\psi_*\circ i_X=\psi.$
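When $P={\mathbb N}$, both notions impose no extra restrictions; we record the (standard) computation only for orientation. For $p\le q$ one may take $r=q$, and then $\iota^{q}_p(S)\,T\in{\mathcal L}(X_q)\,{\mathcal K}(X_q)\subseteq{\mathcal K}(X_q)$, so every product system over ${\mathbb N}$ is compactly aligned. Moreover, for $0<p\leq q$ a routine computation on rank-one operators, $$\psi^{(p)}(\Theta_{x,y})\,\psi^{(q)}(\Theta_{u'u'',w})=\psi_q\big(x\langle y,u'\rangle_A u''\big)\psi_q(w)^*=\psi^{(q)}\big(\iota^q_p(\Theta_{x,y})\Theta_{u'u'',w}\big),\qquad u'\in X_p,\ u''\in X_{q-p},$$ together with the fact that such products $u'u''$ span a dense subspace of $X_q$, shows that every representation of such a product system is automatically Nica covariant (the cases involving $p=0$ or $q=0$ are immediate). The two conditions become genuinely restrictive only when the semilattice $J(P)$ is not totally ordered.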
$C^*$-precategories
-------------------
$C^*$-precategories should be regarded as non-unital versions of $C^*$-categories, cf. [@glr], [@dr]. We give here a very brief account; for more details and background material on $C^*$-precategories, see [@kwa-doplicher], [@kwa-larI].
Recall that a *$C^*$-precategory* ${\mathcal L}$ with object set $P$ is identified with a collection of Banach spaces $\{{\mathcal L}(p,q)\}_{p,q\in P}$, viewed as morphisms, equipped with bilinear maps, viewed as composition of morphisms, $
{\mathcal L}(p,q)\times {\mathcal L}(q,r)\ni(a, b)\mapsto ab\in {\mathcal L}(p,r)$, $p,q,r\in P,
$ satisfying $\|ab\|\leq \|a\|\cdot \|b\|$, and an antilinear involutive contravariant mapping $^*:{\mathcal L}\to{\mathcal L}$ such that if $a \in {\mathcal L}(p,q)$, then $a^* \in {\mathcal L}(q,p)$ and the $C^*$-equality $\|a^* a\|=\|a\|^2$ holds. In particular, ${\mathcal L}(p,p)$ is naturally a $C^*$-algebra, and we require that for every $a \in {\mathcal L}(q,p)$ the element $a^*a$ is positive in the $C^*$-algebra ${\mathcal L}(p,p)$.
An *ideal in a $C^*$-precategory* ${\mathcal L}$ is a collection ${\mathcal K}=\{{\mathcal K}(p,q)\}_{p,q\in P}$ of closed linear subspaces ${\mathcal K}(p,q)$ of ${\mathcal L}(p,q)$, $ p,q \in P$, such that $${\mathcal L}(p,q){\mathcal K}(q,r) \subseteq {\mathcal K}(p,r)\quad \textrm{ and } \quad {\mathcal K}(p,q){\mathcal L}(q,r)\, \subseteq {\mathcal K}(p,r),$$ for all $p,q,r\in P$. Then ${\mathcal K}$ is automatically selfadjoint and hence a $C^*$-precategory. An ideal ${\mathcal K}$ in ${\mathcal L}$ is uniquely determined by the $C^*$-algebras $\{{\mathcal K}(p,p)\}_{p\in P}$, which are in fact ideals in the corresponding $C^*$-algebras $\{{\mathcal L}(p,p)\}_{p\in P}$. We say that ${\mathcal K}$ is an *essential ideal* in ${\mathcal L}$ if ${\mathcal K}(p,p)$ is an essential ideal in ${\mathcal L}(p,p)$, for every $p\in P$.
A *representation $\Psi:{\mathcal L}\to B$ of a $C^*$-precategory* ${\mathcal L}$ in a $C^*$-algebra $B$ is a family $\Psi=\{ \Psi_{p,q}\}_{p,q\in P}$ of linear operators $\Psi_{p,q}:{\mathcal L}(p,q)\to B$ such that $$\Psi_{p,q}(a)^*=\Psi_{q,p}(a^*), \quad \textrm{ and } \quad \Psi_{p,r}(ab)=\Psi_{p,q}(a)\Psi_{q,r}(b),$$ for all $a\in {\mathcal L}(p,q)$, $b\in {\mathcal L}(q,r)$. Then automatically all the maps $\Psi_{p,q}$, $p,q\in P$, are contractions, and they all are isometries if and only if all the maps $\Psi_{p,p}$, $p\in P$, are injective. In the latter case we say that $\Psi$ is *injective*. We denote by $C^*(\Psi({\mathcal L}))$ the $C^*$-algebra generated by the spaces $\Psi({\mathcal L}(p,q))$, $p,q\in P$. A *representation* $\Psi$ of ${\mathcal L}$ *on a Hilbert space* $H$ is a representation of ${\mathcal L}$ in the $C^*$-algebra ${\mathcal B}(H)$ of all bounded operators on $H$. If in addition $C^*(\Psi({\mathcal L}))H=H$ we say that the representation $\Psi$ is *nondegenerate*.
If ${\mathcal K}$ is an ideal in a $C^*$-precategory ${\mathcal L}$ and $\Psi=\{ \Psi_{p,q}\}_{p,q\in P}$ is a representation of ${\mathcal K}$ on a Hilbert space $H$, then there is a unique extension $\overline{\Psi}=\{\overline{\Psi}_{p,q}\}_{p,q\in P}$ of $\Psi$ to a representation of ${\mathcal L}$ such that the essential subspace of $\overline{\Psi}_{p,q}$ is contained in the essential subspace of $\Psi_{p,q}$, for every $p,q\in P$. Namely, we have $$\label{formula defining extensions of right tensor representations}
\overline{\Psi}_{p,q}(a)({\mathcal K}(q,q) H)^\bot =0,\quad \text{ and }\quad \overline{\Psi}_{p,q}(a) \Psi_{q,q}(b)h = \Psi_{p,q}(ab)h$$ for all $ a\in {\mathcal L}(p,q)$, $b \in {\mathcal K}(q,q)$, $h \in H$. Moreover, $\overline{\Psi}$ is injective if and only if $\Psi$ is injective and ${\mathcal K}$ is an essential ideal in ${\mathcal L}$.
Right tensor $C^*$-precategories and their $C^*$-algebras
---------------------------------------------------------
We recall the basic definitions and facts from [@kwa-larI Section 3]. A *right-tensor $C^*$-precategory* is a $C^*$-precategory ${\mathcal L}=\{{\mathcal L}(p,q)\}_{p,q\in P}$ whose objects form a semigroup $P$ with identity $e$ and which is equipped with a semigroup $\{\otimes 1_r\}_{r\in P}$ of endomorphisms of ${\mathcal L}$ sending $p$ to $pr$, for all $p,r\in P$, and $\otimes 1_e=\operatorname{id}$. More precisely, we have linear operators ${\mathcal L}(p,q)\ni a \mapsto a\otimes 1_r \in {\mathcal L}(pr,qr)$ such that for each $ a\in {\mathcal L}(p,q)$, $ b\in {\mathcal L}(q,s)$, and $p,q,r, s\in P$ we have $$((a\otimes 1_r)\otimes 1_s) = a\otimes 1_{rs},
\qquad
(a\otimes 1_r)^*=a^*\otimes 1_r,\qquad (a \otimes 1_r) (b\otimes 1_r)= (ab)\otimes 1_r.$$ We refer to $\{\otimes 1_r\}_{r\in P}$ as a *right tensoring* on ${\mathcal L}=\{{\mathcal L}(p,q)\}_{p,q\in P}$.
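A minimal example, included here only for orientation, is obtained from a Hilbert space $H$ and $P={\mathbb N}$ by setting $${\mathcal L}(m,n):={\mathcal B}(H^{\otimes n},H^{\otimes m}),\qquad a\otimes 1_r:=a\otimes\operatorname{id}_{H^{\otimes r}}\in {\mathcal B}(H^{\otimes(n+r)},H^{\otimes(m+r)}),$$ with the convention $H^{\otimes 0}:={\mathbb C}$. Composition of operators and the usual adjoint make $\{{\mathcal L}(m,n)\}_{m,n\in{\mathbb N}}$ a $C^*$-precategory, the three displayed axioms are immediate, and the spaces of compact operators $\{{\mathcal K}(H^{\otimes n},H^{\otimes m})\}_{m,n\in{\mathbb N}}$ form an essential ideal in it. Up to the obvious identifications this is a special case of the right-tensor $C^*$-precategory constructed in section \[technical subsection\], for the product system over ${\mathbb N}$ with fibers $X_n=H^{\otimes n}$ and $A={\mathbb C}$.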
If ${\mathcal K}$ is an ideal in a right-tensor $C^*$-precategory ${\mathcal L}$, we say that ${\mathcal K}$ is $\otimes 1$-*invariant*, and write ${\mathcal K}\otimes 1\subseteq {\mathcal K}$, if ${\mathcal K}(p,p)\otimes 1_r \subseteq {\mathcal K}(pr,pr)$ for all $p,r\in P$. One can show that ${\mathcal K}\otimes 1\subseteq {\mathcal K}$ if and only if ${\mathcal K}(p,q)\otimes 1_r \subseteq {\mathcal K}(pr,qr)$ for all $p,q,r\in P$. Right tensor representations and the corresponding Toeplitz algebras are defined for all ideals, $\otimes 1$-invariant or not, in some ${\mathcal L}$.
Let ${\mathcal K}$ be an ideal in a right-tensor $C^*$-precategory ${\mathcal L}$. We say that a representation $\Psi:{\mathcal K}\to B$ of ${\mathcal K}$ in a $C^*$-algebra $B$ is a *right-tensor representation* if for all $a\in {\mathcal K}(p,q)$, $b\in {\mathcal K}(s,t) $ such that $sP\subseteq qP$ we have $$\label{right tensor representation condition}
\Psi(a)\Psi(b)
=
\Psi \left((a \otimes 1_{q^{-1}s}) b\right).$$ Note that, since ${\mathcal K}$ is an ideal, the right hand side of \[right tensor representation condition\] makes sense. One can show there is an injective right-tensor representation $t_{{\mathcal K}}: {\mathcal K}\to {\mathcal T}_{{\mathcal L}}({\mathcal K}) $ with the universal property that for every right-tensor representation $\Psi$ of ${\mathcal K}$ there is a homomorphism $\Psi\times P$ of ${\mathcal T}_{{\mathcal L}}({\mathcal K})$ such that $(\Psi\times P)\circ t_{{\mathcal K}} =\Psi$, and ${\mathcal T}_{{\mathcal L}}({\mathcal K})=C^*(t_{{\mathcal K}}({\mathcal K}))$. We call ${\mathcal T}_{{\mathcal L}}({\mathcal K})$ the *Toeplitz algebra* of ${\mathcal K}$. We write ${\mathcal T}({\mathcal L})$ for the Toeplitz algebra ${\mathcal T}_{{\mathcal L}}({\mathcal L})
$ associated to ${\mathcal L}$, viewed as an ideal in itself.
If the underlying semigroup is right LCM, then for well-aligned ideals we can make sense of a condition of Nica type, which is stronger than \[right tensor representation condition\].
Let $({\mathcal L}, \{\otimes 1_r\}_{r\in P})$ be a right-tensor $C^*$-precategory over a right LCM semigroup $P$. An ideal ${\mathcal K}$ in ${\mathcal L}$ is *well-aligned* in ${\mathcal L}$ if for all $a\in {\mathcal K}(p,p)$, $b\in {\mathcal K}(q,q) $ we have $$\label{compact alignment relation}
(a\otimes 1_{p^{-1}r}) (b\otimes 1_{q^{-1}r}) \in {\mathcal K}(r,r)\qquad \textrm{whenever}\quad pP\cap qP=rP.$$ By [@kwa-larI Lemma 3.7], for any ideal ${\mathcal K}$ condition \[compact alignment relation\] implies the formally stronger condition that for every $a\in {\mathcal K}(p,q)$, $b\in {\mathcal K}(s,t) $ we have $$\label{compact alignment relation2}
(a\otimes 1_{q^{-1}r}) (b\otimes 1_{s^{-1}r}) \in {\mathcal K}(pq^{-1}r,ts^{-1}r)\qquad \textrm{whenever}\quad qP\cap sP=rP.$$
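For orientation we note that over $P={\mathbb N}$ every ideal is automatically well-aligned: if $p\le q$ then $(p+{\mathbb N})\cap(q+{\mathbb N})=q+{\mathbb N}$, so in \[compact alignment relation\] one may take $r=q$ and then, for $a\in{\mathcal K}(p,p)$ and $b\in{\mathcal K}(q,q)$, $$(a\otimes 1_{q-p})\,b\in{\mathcal L}(q,q)\,{\mathcal K}(q,q)\subseteq{\mathcal K}(q,q),$$ simply because ${\mathcal K}$ is an ideal. Thus well-alignedness, like compact alignment of product systems, is a genuine condition only when the semilattice $J(P)$ is not totally ordered.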
For every well-aligned ideal ${\mathcal K}$ in ${\mathcal L}$, in this paper we will also assume the following two conditions:
- ${\mathcal K}$ is $\otimes 1$-*nondegenerate*, cf. [@kwa-larI Definition 9.6], that is $$\label{non-degeneracy condition}
({\mathcal K}(p,p)\otimes 1_{r}){\mathcal K}(pr,pr)={\mathcal K}(pr,pr)\textrm{ for every } p\in P\setminus{P^*} \textrm{ and } r\in P.$$
- ${\mathcal K}$ satisfies condition (7.6) in [@kwa-larI Proposition 7.6] for $t=e$, that is $$\label{condition for reducing Fock-reps}
\overline{{\mathcal K}(p,e){\mathcal K}(e,p)}\text{ is an essential ideal in the $C^*$-algebra } {\mathcal L}(p,p), \textrm{ for every } p\in P.$$
These conditions will be satisfied by right-tensor $C^*$-precategories arising from product systems.
Nica-Toeplitz algebras associated with right-tensor $C^*$-precategories
-----------------------------------------------------------------------
Let us fix a right-tensor $C^*$-precategory $({\mathcal L}, \{\otimes 1_r\}_{r\in P})$ over an LCM semigroup $P$, and a well-aligned ideal ${\mathcal K}$ in ${\mathcal L}$. A representation $\Psi:{\mathcal K}\to B$ of ${\mathcal K}$ in a $C^*$-algebra $B$ is *Nica covariant* if for all $a\in {\mathcal K}(p,q)$, $b\in {\mathcal K}(s,t) $ we have $$\label{Nica covariance}
\Psi(a)\Psi(b)
=\begin{cases}
\Psi \left((a \otimes 1_{q^{-1}r}) (b\otimes 1_{s^{-1}r})\right) & \textrm{ if } qP\cap sP=rP \textrm{ for some } r\in P,
\\
0 & \textrm{ otherwise}.
\end{cases}$$ Note that by \[compact alignment relation2\] the right hand side of \[Nica covariance\] makes sense. By [@kwa-larI] there is an injective Nica covariant representation $i_{{\mathcal K}}: {\mathcal K}\to {\mathcal{NT}}_{{\mathcal L}}({\mathcal K}) $ with the universal property: for every Nica covariant representation $\Psi$ of ${\mathcal K}$ there is a homomorphism $\Psi\rtimes P$ of ${\mathcal{NT}}_{{\mathcal L}}({\mathcal K})$ such that $(\Psi\rtimes P)\circ i_{{\mathcal K}} =\Psi$, and ${\mathcal{NT}}_{{\mathcal L}}({\mathcal K})=C^*(i_{{\mathcal K}}({\mathcal K}))$. We call ${\mathcal{NT}}_{{\mathcal L}}({\mathcal K})$ the *Nica-Toeplitz algebra* of ${\mathcal K}$. We write ${\mathcal{NT}}({\mathcal L})$ for the Nica-Toeplitz algebra ${\mathcal{NT}}_{{\mathcal L}}({\mathcal L})
$ associated to ${\mathcal L}$, viewed as a well-aligned ideal in itself (in particular, in this paper we assume that ${\mathcal L}$ satisfies the analogues of \[non-degeneracy condition\] and \[condition for reducing Fock-reps\]). By these assumptions and [@kwa-larI Lemma 11.1] we have a natural embedding $${\mathcal{NT}}_{{\mathcal L}}({\mathcal K})\hookrightarrow {\mathcal{NT}}({\mathcal L}).$$
The Fock representation of ${\mathcal K}$ constructed in [@kwa-larI] is a direct sum of Nica covariant representations. By [@kwa-larI Proposition 7.6] and \[condition for reducing Fock-reps\], here we may use its $e$-th summand. We recall the relevant construction. For $s\in P$, the space $X_{s}:={\mathcal K}(s,e)$ is naturally equipped with the structure of a right Hilbert module over $A:={\mathcal K}(e,e)$ inherited from the $C^*$-precategory structure of ${\mathcal K}$: we put $
x \cdot a:=xa$, $\langle x, y\rangle:=x^*y$, for $x,y \in X_{s}$, $a\in A$. Thus we may consider the direct sum Hilbert $A$-module: $
{\mathcal F}_{{\mathcal K}}:=\bigoplus_{s\in P} X_{s}.
$ By [@kwa-larI Remark 4.3 and Proposition 5.2] we have an injective Nica covariant representation ${\mathbb{L}}:{\mathcal K}\to {\mathcal L}({\mathcal F}_{{\mathcal K}})$, there denoted $T^e$, determined by $$\label{Toeplitz representation definition}
{\mathbb{L}}_{p,q}(a)x =\begin{cases}
(a \otimes 1_{q^{-1}s}) x & \textrm{ if } s\in qP ,
\\
0 & \textrm{ otherwise},
\end{cases}$$ for $a \in {\mathcal K}(p,q)$, $x\in X_{s}$ and $p,q,s\in P$. We call ${\mathbb{L}}$ given by \[Toeplitz representation definition\] the *Fock representation* of ${\mathcal K}$. The *reduced Nica-Toeplitz algebra* of ${\mathcal K}$ is the $C^*$-algebra $
{\mathcal{NT}}^{r}_{{\mathcal L}}({\mathcal K}):=C^*({\mathbb{L}}({\mathcal K}))$. When ${\mathcal K}={\mathcal L}$, we also write $
{\mathcal{NT}}^{r}({\mathcal L}):={\mathcal{NT}}^{r}_{{\mathcal L}}({\mathcal L}).
$ By \[condition for reducing Fock-reps\] and [@kwa-larI Proposition 7.6], the $C^*$-algebra ${\mathcal{NT}}^{r}_{{\mathcal L}}({\mathcal K})$ defined above is naturally isomorphic to the one introduced in [@kwa-larI Definition 5.3]. Hence the two definitions are consistent. We refer to ${\mathbb{L}}\rtimes P:{\mathcal{NT}}_{\mathcal L}({\mathcal K})\to {\mathcal{NT}}^{r}_{{\mathcal L}}({\mathcal K})$ as *the regular representation* of ${\mathcal{NT}}_{{\mathcal L}}({\mathcal K})$. We say that ${\mathcal K}$ is *amenable* when ${\mathbb{L}}\rtimes P$ is an isomorphism. A number of amenability criteria are given in [@kwa-larI Section 8].
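To see the Fock representation in the simplest possible situation (included only as an illustration, and assuming for simplicity that $P$ is left cancellative, so that the element $q^{-1}s$ is unambiguous), take ${\mathcal K}={\mathcal L}$ with ${\mathcal L}(p,q):={\mathbb C}$ for all $p,q\in P$ and $\otimes 1_r:=\operatorname{id}$; all standing assumptions are then trivially satisfied. In this case $X_s={\mathbb C}$ for every $s\in P$, ${\mathcal F}_{{\mathcal K}}\cong\ell^2(P)$, and \[Toeplitz representation definition\] reads $${\mathbb{L}}_{p,q}(1)\,\delta_s=\begin{cases}
\delta_{p(q^{-1}s)} & \textrm{ if } s\in qP,
\\
0 & \textrm{ otherwise},
\end{cases}$$ where $\{\delta_s\}_{s\in P}$ is the canonical orthonormal basis. In particular ${\mathbb{L}}_{p,e}(1)$ is the isometry $\delta_s\mapsto\delta_{ps}$ and ${\mathbb{L}}_{e,q}(1)$ is its adjoint, so ${\mathcal{NT}}^{r}_{{\mathcal L}}({\mathcal K})$ is the $C^*$-algebra generated by the left regular isometric representation of $P$ on $\ell^2(P)$, a reduced semigroup $C^*$-algebra of Toeplitz type.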
$C^*$-algebras associated to product systems {#section:Cstar-algebras-product-systems}
============================================
In this section we construct and analyze a canonical right-tensor $C^*$-precategory associated to an arbitrary product system $X$. We employ it to prove the announced uniqueness results.
Right tensor $C^*$-precategories associated to product systems {#technical subsection}
--------------------------------------------------------------
Let $X$ be a product system over a semigroup $P$ with coefficients in a $C^*$-algebra $A$. We will associate to $X$ a right-tensor $C^*$-precategory. In the case $P={\mathbb N}$, it was constructed in [@kwa-doplicher Example 3.2], and in the case $P$ is arbitrary, but the product system $X$ is regular, it was introduced in [@kwa-szym Definition 3.1]. For $p,q\in P$ we put $${\mathcal L}_X(p,q):=\begin{cases} {\mathcal L}(X_{q},X_{p}), & \text{ if }p,q\in P\setminus\{e\},
\\
{\mathcal K}(X_{q},X_{p}), & \text{ otherwise}.
\end{cases}$$ With operations inherited from the corresponding spaces, ${\mathcal L}_X$ forms a $C^*$-precategory. The reason for considering smaller spaces than ${\mathcal L}(X_{q},X_{p})$ when $p$ or $q$ is the unit $e$ is that in general it is not clear how to define right tensoring on such spaces, cf. Remark \[extending the right tensoring\] below. On the other hand, using the isomorphism \[C-correspondence isomorphism\], for all $p,q\in P$, we have the following isomorphisms of $C^*$-correspondences over $A$: $${\mathcal L}_X(p,e)\cong X_p,\qquad {\mathcal L}_X(e,q)\cong \widetilde{X}_q$$ where $\widetilde{X}_q$ is a (left) $C^*$-correspondence dual to $X_q$. In particular, ${\mathcal L}_X(e,e)=A$.
We will describe a right tensoring structure on ${\mathcal L}_X$ by introducing a family of mappings $\iota_{{p},{q}}^{{pr},{qr}}: {\mathcal L}(X_{q},X_{p}) \to {\mathcal L}(X_{qr},X_{pr})$, $p,q,r\in P$, which extends the standard family of diagonal homomorphisms $\iota_{q}^{qp}$, see \[iotapq def\]. If $q\neq e$ we put $$\iota^{pr,qr}_{p,q}(T)(xy):=(Tx)y,\,\, \,\,\,\,\,\textrm{ where } x\in X_q,\, y\in X_r \textrm{ and } T\in {\mathcal L}(X_{q},X_{p}).$$ Note that under the canonical isomorphisms $ X_{qr}\cong X_q \otimes_A X_r$ and $X_{pr}\cong X_p \otimes_A X_r$, the operator $\iota^{pr,qr}_{p,q}(T)$ corresponds to $T\otimes 1_{r}$, where $1_r$ is the identity in ${\mathcal L}(X_r)$, and in particular $\iota^{pr,qr}_{p,q}(T)\in {\mathcal L}(X_{qr},X_{pr})$. In the case $q=e$, using \[C-correspondence isomorphism\], the formula $$\iota^{pr,r}_{p,e}(t_x)(y):=xy,\,\, \,\,\,\,\,\textrm{ where }\, y\in X_r \textrm{ and } t_x\in {\mathcal K}(X_{e},X_{p}), x\in X_p,$$ yields a well defined map. As above, under natural identifications, the operator $\iota^{pr,r}_{p,e}(t_x)$ corresponds to $t_x\otimes 1_{r}\in {\mathcal L}(X_e \otimes_A X_r, X_p \otimes_A X_r)$ and therefore $\iota^{pr,r}_{p,e}(t_x)\in {\mathcal L}(X_{r},X_{pr})$. Note that $\iota_{p,p}^{pr,pr}=\iota_{p}^{pr}$.
The linear maps $\iota_{{p},{q}}^{{pr},{qr}}: {\mathcal L}_X(p,q) \to {\mathcal L}_X(pr,qr)$, $p,q,r\in P$, yield a right tensoring on the $C^*$-precategory ${\mathcal L}_X$. We write $$T\otimes 1_r:=\iota^{pr,qr}_{p,q}(T),\qquad \quad T\in{\mathcal L}_X(p,q),\,\, p,q \in P.$$
It suffices to check that $
\iota_{{p},{q}}^{{pr},{qr}}(T)^*=\iota_{{q},{p}}^{{qr},{pr}}(T^*)$, $\iota_{{p},{q}}^{{pr},{qr}}(T) \iota_{{q},{s}}^{{qr},{sr}}(S)= \iota_{{p},{s}}^{{pr},{sr}}(TS)$, and $
\iota_{{pr},{qr}}^{{prs},{qrs}}(\iota_{{p},{q}}^{{pr},{qr}}(T)) = \iota_{{p},{q}}^{{prs},{qrs}}(T)$, for all $T \in {\mathcal L}_X(p,q)$, $S \in {\mathcal L}_X(q,s)$, $p,q,r,s\in P$. Viewing operators $\iota^{pr,qr}_{p,q}(T)$ as $T\otimes 1_{r}$, see discussion above, this is straightforward.
\[right tensor C-precategory\] We call the pair $({\mathcal L}_X,\{\otimes 1_r\}_{r\in P})$ constructed above, *the right-tensor $C^*$-precategory associated to the product system* $X$. We also put $${\mathcal K}_X(p,q):={\mathcal K}(X_{q},X_{p}) \qquad p,q\in P.$$ Clearly, ${\mathcal K}_X:=\{{\mathcal K}_X(p,q)\}_{p,q\in P}$ is an essential ideal in the $C^*$-precategory ${\mathcal L}_X$.
\[extending the right tensoring\] If each $C^*$-correspondence $X_p$, $p\in P$, is left essential (which is automatic when $P^*\neq \{e\}$) then the formula $$(T\otimes 1_r)(ax):=T(a)x, \qquad T\in{\mathcal L}(A,X_{p}), \,\, a\in A, \,\,x \in X_r, \,\, p,r \in P,$$ allows one to extend the right tensoring from ${\mathcal L}_X$ to the whole $C^*$-category $\{{\mathcal L}(X_q,X_p)\}_{p,q\in P}$. Note that ${\mathcal L}_X$ is a $C^*$-category if and only if $A$ is unital, if and only if ${\mathcal L}_X=\{{\mathcal L}(X_q,X_p)\}_{p,q\in P}$.
\[non-degeneracy of K\_X\] The ideal ${\mathcal K}_X$ in the right-tensor $C^*$-precategory $({\mathcal L}_X,\{\otimes 1_r\}_{r\in P})$ associated to the product system $X$ is $\otimes 1$-nondegenerate. Moreover, ${\mathcal K}_X \otimes 1\subseteq {\mathcal K}_X$ if and only if $\phi_p(A)\subseteq {\mathcal K}(X_p)$ for every $p\in P$.
Let $x,y,z \in X_p$, $u\in X_r$ and $v\in X_{pr}$ for some $p,r\in P$, $p\neq e$. Then $
(\Theta_{x,y}\otimes 1_r) \Theta_{zu,v}=\Theta_{x\langle y,z\rangle_{X_p} u, v}.
$ Since elements of the form $x\langle y,z\rangle_{X_p}$ span $X_p$ and since $X_pX_r=X_{pr}$, we conclude that elements $(\Theta_{x,y}\otimes 1_r) \Theta_{zu,v}$ span ${\mathcal K}(X_{pr})$. Hence ${\mathcal K}_X$ is $\otimes 1$-nondegenerate. It is clear that ${\mathcal K}_X \otimes 1\subseteq {\mathcal K}_X$ implies $\phi_p(A)=A\otimes 1_p \subseteq {\mathcal K}(X_p)$ for $p\in P$. The converse implication follows from the standard fact [@lance Proposition 4.7], cf. [@kwa-szym Lemma 3.2].
\[rem:ideals in C\*-precategories of Hilbert modules\] We claim that ${\mathcal K}_X$ and $A$ may be regarded as being Morita equivalent as $C^*$-precategories, see also Lemma \[representations of C\*-precategories of Hilbert modules\]. To make this more precise, we introduce some notation first. If $Z$ and $Y$ are two right Hilbert $A$-modules and $I$ is an ideal in $A$, we let $
{\mathcal L}_I(Z,Y):=\{a\in {\mathcal L}(Z,Y): a(Z)\subseteq YI\}.
$ Note that $a(Z)\subseteq YI$ is equivalent to $a^*(Y)\subseteq ZI$. We also put ${\mathcal K}_I(Z,Y):={\mathcal L}_I(Z,Y)\cap {\mathcal K}(Z,Y)$, cf. [@kwa-doplicher Lemmas 1.1 and 1.2]. Now, given a product system $X$ over $P$, for any ideal $J$ in $A$ the formulas $${\mathcal K}_X(J):=\{{\mathcal K}_{J}(X_q,X_p)\}_{p,q \in P},\qquad {\mathcal L}_X(J):=\{{\mathcal L}_{J}(X_q,X_p)\}_{p,q \in P}$$ define ideals in ${\mathcal L}_X=\{{\mathcal L}(X_q,X_p)\}_{p,q \in P}$, as the reader may readily verify. We claim that every ideal in ${\mathcal K}_X=\{{\mathcal K}(X_q,X_p)\}_{p,q\in P}$ is of the form ${\mathcal K}_X(J)$ for some ideal $J$ in $A$. Indeed, this is proved in [@kwa-doplicher Proposition 2.17] in the case $P={\mathbb N}$ but the proof works for general $P$.
\[representations of C\*-precategories of Hilbert modules\] We have a one-to-one correspondence, established by the formula $$\label{precategory representation}
\Psi_{p,q}(\Theta_{x,y})=
\psi_p(x)\psi_q(y)^*\,\,\, \; \text{for}\; x\in X_p,\,\, y \in X_q, \,\, p,q\in P,$$ between representations $\Psi=\{\Psi_{p,q}\}_{p,q\in P}$ of ${\mathcal K}_X$ and families $\psi=\{(\psi_e,\psi_{p})\}_{p\in P}$ where, for each $p\in P$, $(\psi_e,\psi_{p})$ is a representation of the Hilbert $A$-module $X_p$.
Moreover, if $\Psi$ is a representation of ${\mathcal K}_X$ on a Hilbert space $H$ and $\overline{\Psi}$ is its extension to ${\mathcal L}_X$ determined by \[formula defining extensions of right tensor representations\], then with $\psi_e=\Psi_{e,e}$ we have $$\ker\Psi= {\mathcal K}_X(\ker\psi_e)\qquad\text{ and }\qquad \ker \overline{\Psi}= {\mathcal L}_X(\ker\psi_e).$$
Let $\psi=\{(\psi_e,\psi_{p})\}_{p\in P}$ be a family of representations of right-Hilbert modules $X_p$ for $p\in P$ in a $C^*$-algebra $B$. Let $p,q,r \in P$. It is well known that \[precategory representation\] determines uniquely a linear contraction $\Psi_{p,q} : {\mathcal K}(X_q,X_p) \longrightarrow B$ which is isometric if $\psi_e$ is injective, see for instance [@KPW Lemma 2.2, Remark 2.3]. It is straightforward to see that the relations $$\Psi_{p,q}(S)^*= \Psi_{q,p}(S^*)\qquad \text{and}\qquad \Psi_{p,q}(S)\Psi_{q,r}(T)=\Psi_{p,r} (ST)$$ hold for ’rank one’ operators $S=\Theta_{x,y}$, $T=\Theta_{u,w}$. Hence these formulas hold for arbitrary $S\in {\mathcal K}(X_q,X_p)$, $T \in {\mathcal K}(X_r,X_q)$. Thus $\Psi=\{\Psi_{p,q}\}_{p,q\in P}$ is a representation of ${\mathcal K}_X$.
If now $\Psi:=\{\Psi_{p,q}\}_{p,q\in P}$ is an arbitrary representation of ${\mathcal K}_X$ in a $C^*$-algebra $B$, then by \[precategory representation\] we can define a family of maps $\psi_p:X_p\to B$ by $
\psi_p(x):=\Psi_{p,e}(t_x)$, $x\in X_p, \,\, p\in P.
$ A routine verification shows that $(\psi_e,\psi_p):X_p\to B$ is a right-Hilbert module representation.
The equality $\ker\Psi= {\mathcal K}_X(\ker\psi_e)$ follows from Remark \[rem:ideals in C\*-precategories of Hilbert modules\], as ${\mathcal K}_X(\ker\psi_e)(e,e)=\ker\psi_e=(\ker\Psi)(e,e)$. By [@kwa-larI Proposition 2.13] we have $(\ker\overline{\Psi})(p,q)=\{a\in {\mathcal L}_X(p,q): a{\mathcal K}_X(q,q)\subseteq \ker\Psi_{p,q}\}$. Thus the inclusion ${\mathcal L}_X(\ker\psi_e)(p,q) \subseteq (\ker \overline{\Psi}) (p,q)$ is immediate. For the reverse let $a\in (\ker \overline{\Psi}) (p,q)$ and $x\in X_q$. Note that $x$ may be written as $x=bx'$ where $b\in {\mathcal K}(X_q)$ and $x'\in X_q$. Hence $ax= (ab)x'\in (\ker\Psi)(p,q)\,x'={\mathcal K}_X(\ker\psi_e)(p,q)\,x'\subseteq X_p(\ker\psi_e)$, and $a\in {\mathcal L}_X(\ker\psi_e)(p,q)$.
The semigroup operation in $P$ is irrelevant for the assertions in Remark \[rem:ideals in C\*-precategories of Hilbert modules\] and Lemma \[representations of C\*-precategories of Hilbert modules\] – they remain true when $P$ is any set with a distinguished element $e\in P$ and $\{X_p\}_{p \in P}$ is a family of right Hilbert modules over a $C^*$-algebra $A$ such that $X_e=A_A$.
\[going forward cor\] Let $X$ be a product system over an arbitrary semigroup $P$. The bijective correspondence in Lemma \[representations of C\*-precategories of Hilbert modules\] restricts to a one-to-one correspondence between representations $\psi=\{\psi_{p}\}_{p\in P}$ of $X$ and right-tensor representations $\Psi:=\{\Psi_{p,q}\}_{p,q\in P}$ of ${\mathcal K}_X$. In particular, ${\mathcal T}(X)$ is isomorphic to ${\mathcal T}_{{\mathcal L}_X}({\mathcal K}_X)$, the Toeplitz algebra of ${\mathcal K}_X$.
Let $\psi$ and $\Psi$ be the corresponding objects in Lemma \[representations of C\*-precategories of Hilbert modules\]. Suppose first that $\Psi$ is a right-tensor representation of ${\mathcal K}_X$. For any $x\in X_p$ and $y\in X_q$, $p,q\in P$, we have $(t_x\otimes 1_q) t_y=t_{xy}$. Thus $$\psi_p(x)\psi_q(y)=\Psi_{p,e}(t_x) \Psi_{q,e}(t_y)=\Psi_{pq,e}((t_x\otimes 1_q)t_y)=\psi_{pq}(xy),$$ so $\psi=\{\psi_{p}\}_{p\in P}$ is a representation of the product system $X$. Suppose now that $\psi$ is a representation of $X$. Let $p,q,s,t\in P$ with $s\geq q$. Consider $S=\Theta_{x,y}\in {\mathcal K}(X_q,X_p)$ and $T=\Theta_{u'u,w}\in {\mathcal K}(X_t,X_s)$ where $u'\in X_{q}$ and $u\in X_{q^{-1}s}$. Then $$\begin{aligned}
\Psi_{p,q}(S) \Psi_{s,t}(T)&=\psi(x) \psi(y)^*\psi(u'u)\psi(w)^*= \psi(x\langle y,u'\rangle_A u)\psi(w)^*
= \Psi_{pq^{-1}s,t}(\Theta_{x\langle y,u'\rangle_A u, w})
\\
&=\Psi_{pq^{-1}s,t}(\iota^{pq^{-1}s,s}_{p,q}(\Theta_{x, y})\Theta_{u'u,w})= \Psi_{pq^{-1}s,t}((S\otimes 1_{q^{-1}s}) T).\end{aligned}$$ Hence, by linearity and continuity, $\Psi$ is a right-tensor representation.
$C^*$-algebras associated with product systems over LCM semigroups {#product systems over LCMs}
------------------------------------------------------------------
For the remainder of this section we assume that $P$ is a right LCM semigroup.
A product system $X$ over $P$ is compactly-aligned if and only if the ideal ${\mathcal K}_X$ is well-aligned in the associated right-tensor $C^*$-precategory $({\mathcal L}_X,\{\otimes 1_r\}_{r\in P})$. In particular, ${\mathcal K}_X$ satisfies \[non-degeneracy condition\] and \[condition for reducing Fock-reps\].
${\mathcal K}_X$ satisfies \[non-degeneracy condition\] by Lemma \[non-degeneracy of K\_X\]. The remaining claims are immediate.
The next proposition generalizes [@kwa-doplicher Proposition 3.14] from ${\mathbb N}$ to right LCM semigroups.
\[going forward prop\] If $X$ is a compactly-aligned product system over a right LCM semigroup $P$, then the bijective correspondence in Proposition \[going forward cor\] preserves Nica covariance of representations and, hence, it gives rise to a canonical isomorphism $${\mathcal{NT}}_{{\mathcal L}_X}({\mathcal K}_X)\cong {\mathcal{NT}}(X).$$
If $\Psi:=\{\Psi_{p,q}\}_{p,q\in P}$ is a Nica covariant representation of ${\mathcal K}_X$, then it is also a right-tensor representation and therefore $\psi=\{\psi_{p}\}_{p\in P}$ is a representation of the product system $X$. Since $\psi^{(p)}=\Psi_{p,p}$ and $\iota^{pq}_p(S)=S\otimes 1_q$, for $S\in {\mathcal K}(X_p)$ and $p, q\in P$, Nica covariance of $\Psi$ implies Nica covariance of $\psi$.
Let $\psi=\{\psi_{p}\}_{p\in P}$ be a Nica covariant representation of $X$ on a Hilbert space $H$. Let $\Psi=\{\Psi_{p,q}\}_{p,q\in P}$ be the representation of ${\mathcal K}_X$ given by \[precategory representation\]. To see that $\Psi$ is Nica covariant, let $S=\Theta_{x,y}\in {\mathcal K}(X_q,X_p)$ and $T=\Theta_{u,w}\in {\mathcal K}(X_t,X_s)$ where $p,q,s,t\in P$. Express $y=Yy'$ with $Y\in {\mathcal K}(X_q)$ and $y'\in X_q$, and similarly $u=Uu'$ with $U\in {\mathcal K}(X_s)$ and $u'\in X_s$. Then $\psi(y)^*\psi(u)=\psi(y')^*\psi^{(q)}(Y^*)\psi^{(s)}(U)\psi(u')$. Therefore, by Nica covariance of $\psi$, if $qP\cap sP=\emptyset$, then $\psi(y)^*\psi(u)=0$ and hence $\Psi_{p,q}(S)\Psi_{s,t}(T)=0$. Assume that $qP\cap sP=rP$, for some $r\in P$. Again by Nica covariance of $\psi$ we get $$\psi(y)^*\psi(u)= \psi(y')^*\psi^{(r)}\Big( (Y^*\otimes {1_{q^{-1}r}}) (U\otimes {1_{s^{-1}r}})\Big)\psi(u').$$ We claim that $\psi(y)^*\psi(u)\in \Psi_{q^{-1}r,s^{-1}r }\big({\mathcal K}(X_{s^{-1}r},X_{q^{-1}r})\big)$. Indeed, the operator $\psi^{(r)}\Big( (Y^*\otimes{1_{q^{-1}r}}) (U\otimes{1_{s^{-1}r}})\Big)$ can be approximated by finite sums of elements of the form $\psi_r(v'v)\psi_r(z'z)^*$ where $v' \in X_{q}$, $v\in X_{q^{-1}r}$, and $z' \in X_{s}$, $z\in X_{s^{-1}r}$. Since $$\psi_q(y')^*\psi_r(v'v)\psi_r(z'z)^*\psi_s(u')=\psi_{q^{-1}r}(\langle y',v'\rangle_q v)\psi_{s^{-1}r}(\langle u',z'\rangle_s z)^*$$ is an element of $\Psi_{q^{-1}r,s^{-1}r }\big({\mathcal K}(X_{s^{-1}r},X_{q^{-1}r})\big)$, so is $\psi(y)^*\psi(u)$. Accordingly, $$\Psi_{p,q}(S)\Psi_{s,t}(T)=\psi_p(x)\psi_q(y)^*\psi_s(u)\psi_t(w)^* \in \Psi_{pq^{-1}r,ts^{-1}r }\big({\mathcal K}(X_{ts^{-1}r},X_{pq^{-1}r})\big).$$ Hence the product $\Psi_{p,q}(S)\Psi_{s,t}(T)$ acts as zero on the orthogonal complement of the space $
H_{ts^{-1}r }:=\overline{\psi_{ts^{-1}r }(X_{ts^{-1}r})H}.
$ Clearly, the same is true for the operator $\Psi_{pq^{-1}r,ts^{-1}r }\big((S\otimes 1_{q^{-1}r})(T\otimes 1_{s^{-1}r})\big)$. Consider an element $\psi_{ts^{-1}r }(w_0u_0)h$ where $w_0\in X_t$, $u_0\in X_{s^{-1}r}$, $h\in H$. The linear span of such elements is in $H_{ts^{-1}r }$ and we have $$\begin{aligned}
\Psi_{s,t}(T)\psi_{ts^{-1}r }(w_0u_0)&= \psi_s(u)\psi_t(w)^* \psi_{t}(w_0)\psi_{s^{-1}r}(u_0)=\psi_{r}(u\langle w, w_0\rangle_{t} u_0)
\\
&
=\psi_{r}\big((\Theta_{u,w}\otimes 1_{s^{-1}r}) w_0 u_0\big)=\Psi_{r,ts^{-1}r}(T\otimes 1_{s^{-1}r})\psi_{ts^{-1}r }(w_0u_0).\end{aligned}$$ Hence $\Psi_{s,t}(T)$ and $\Psi_{r,ts^{-1}r }\big(T\otimes 1_{s^{-1}r}\big)$ coincide on the space $H_{ts^{-1}r }$, and they map this space into the space $
H_{r }:=\overline{\psi_{r}(X_{r})H}$. Consider an element $\psi_{r }(y_0x_0)h$ where $x_0\in X_{q^{-1}r}$, $y_0\in X_{q}$, $h\in H$. The linear span of such elements is in $H_{r}$ and we have $$\begin{aligned}
\Psi_{p,q}(S)\psi_{r }(y_0x_0)&= \psi_p(x)\psi_q(y)^* \psi_{q}(y_0)\psi_{q^{-1}r}(x_0)=\psi_{pq^{-1}r}(x \langle y, y_0\rangle_{q} x_0)
\\
&
=\psi_{pq^{-1}r}\big((\Theta_{x,y}\otimes 1_{q^{-1}r}) y_0 x_0\big)=\Psi_{pq^{-1}r,r}(S\otimes 1_{q^{-1}r})\psi_{r }(y_0x_0).\end{aligned}$$ Hence $\Psi_{p,q}(S)$ and $\Psi_{pq^{-1}r,r}(S\otimes 1_{q^{-1}r})$ coincide when restricted to $H_r$. Combining these two observations we get $$\begin{aligned}
\Psi_{p,q}(S)\Psi_{s,t}(T) &= \Psi_{pq^{-1}r,r}(S\otimes 1_{q^{-1}r})\Psi_{r,ts^{-1}r}(T\otimes 1_{s^{-1}r})
\\
&=\Psi_{pq^{-1}r,ts^{-1}r }\big((S\otimes 1_{q^{-1}r})(T\otimes 1_{s^{-1}r})\big).\end{aligned}$$ Thus $\Psi:=\{\Psi_{p,q}\}_{p,q\in P}$ is Nica covariant.
The above result motivates the following definition.
Let $X$ be a compactly-aligned product system over a right LCM semigroup $P$. We let ${\mathcal{NT}}^r(X):={\mathcal{NT}}_{{\mathcal L}_X}^r({\mathcal K}_X)$ and call it *the reduced Nica-Toeplitz algebra* of $X$. We also put $$\mathcal{DR}({\mathcal{NT}}(X)):={\mathcal{NT}}({\mathcal L}_X)\qquad\text{and}\qquad \mathcal{DR}^r({\mathcal{NT}}(X)):={\mathcal{NT}}^r({\mathcal L}_X)$$ ($\mathcal{DR}$ stands for Doplicher-Roberts). We denote by $\overline{\Lambda}:\mathcal{DR}({\mathcal{NT}}(X))\to \mathcal{DR}^r({\mathcal{NT}}(X))$ and $\Lambda:{\mathcal{NT}}(X)\to {\mathcal{NT}}^r(X)$ the canonical epimorphisms.
Let $X$ be a compactly-aligned product system. In [@F99] Fowler constructed the *Fock representation* $l:X\to {\mathcal L}(\mathcal{F}(X))$ of $X$. The Fock spaces for $X$, ${\mathcal K}_X$ and ${\mathcal L}_X$ coincide and are equal to the Hilbert $A$-module direct sum $\mathcal{F}(X)=\bigoplus_{p\in P} X_p$. The Fock representation $\overline{{\mathbb{L}}}:{\mathcal L}_X\to {\mathcal L}(\mathcal{F}(X))$ of ${\mathcal L}_X$ is an extension of the Fock representation ${\mathbb{L}}:{\mathcal K}_X\to {\mathcal L}(\mathcal{F}(X))$ of ${\mathcal K}_X$ which is in turn an extension of $l$. This in particular leads to the inclusion $${\mathcal{NT}}^r(X)=\operatorname{\overline{span}}\{ l(x)l(y)^*: x, y\in X\} \subseteq \mathcal{DR}^r({\mathcal{NT}}(X))$$ By [@kwa-larI Lemma 11.1], there is a commutative diagram $$\begin{xy}
\xymatrix{
{\mathcal{NT}}(X) \ar[d]_{\Lambda} \ar[rr]^{\hookrightarrow }& & \mathcal{DR}({\mathcal{NT}}(X)) \ar[d]^{\overline{\Lambda}}
\\
{\mathcal{NT}}^r(X) \ar[rr]^{\hookrightarrow} & & \mathcal{DR}^r({\mathcal{NT}}(X))}
\end{xy}$$ in which the horizontal maps are embeddings. The map $\overline{\Lambda}$ is injective on the core subalgebra $B_e^{i_{{\mathcal L}_X}}=\operatorname{\overline{span}}\{i_{{\mathcal L}_X}(a):a\in {\mathcal L}_X(p,p), p \in P\}$ of $\operatorname{\mathcal{DR}}({\mathcal{NT}}(X))$ and $\Lambda$ is injective on $$B_e^X:=B_e^{i_{{\mathcal K}_X}}=\operatorname{\overline{span}}\{i_X(x) i_X(y)^*: x,y \in X, d(x)=d(y)\},$$ see [@kwa-larI Corollary 6.4]. Clearly, $B_e^X\subseteq B_e^{i_{{\mathcal L}_X}}$. We will characterize representations of $X$ that give rise to injective representations of $B_e^X$ and $B_e^{i_{{\mathcal L}_X}}$ respectively. To this end, we introduce canonical projections associated to a representation of $X$.
If $\psi:X\to {\mathcal B}(H)$ is a representation of a compactly-aligned product system $X$, for every $p\in P$ we denote by $Q^\psi_{p}\in {\mathcal B}(H)$ the projection such that $$Q^\psi_{p}H=\begin{cases}
\psi^{(p)}({\mathcal K}(X_p))H & \textrm{ if } p\in P\setminus \{e\},
\\
\overline{\psi(X_e)H} & \textrm{ if } p=e.
\end{cases}$$
\[properties of family projections2\] If $\Psi:=\{\Psi_{p,q}\}_{p,q\in P}$ is the representation of ${\mathcal K}_X$ given by Proposition \[going forward cor\], then $Q^\psi_{p}$ equals the projection $Q^\Psi_{p}$ associated to $\Psi$ in [@kwa-larI Definition 9.1] for $p\in P$. In particular, if $\overline{\Psi}:=\{\overline{\Psi}_{p,q}\}_{p,q\in P}$ is the extension of $\Psi$ to ${\mathcal L}_X$, then $Q^\Psi_p=\overline{\Psi}_{p,p}(1_{p})$, where $1_p$ denotes the identity of ${\mathcal L}_X(p,p)={\mathcal L}(X_p)$, for all $p\in P\setminus \{e\}$.
\[Nica relation Lemma\] A representation $\psi:X\to {\mathcal B}(H)$ is Nica covariant if and only if the projections $\{Q^\psi_{p}\}_{p\in P}\in {\mathcal B}(H)$ satisfy the Nica covariance relation $$\label{Nica equation for projections}
Q^\psi_{p} Q_{q}^\psi=
\begin{cases}
Q_r^\psi, & \text{if } pP\cap qP=rP \text{ for some }r\in P,
\\
0, & \text{if } pP\cap qP=\emptyset.
\end{cases}$$ Moreover, if $\psi:X\to {\mathcal B}(H)$ is Nica covariant, the representation $\psi\rtimes P:{\mathcal{NT}}(X)\to B(H)$ extends uniquely to a representation $\overline{\psi\rtimes P}$ of $\mathcal{DR}({\mathcal{NT}}(X))$ such that $$\label{extensions relations}
(\overline{\psi\rtimes P})(i_{{\mathcal L}_X}(a))= Q^\psi_{p} (\overline{\psi\rtimes P})(i_{{\mathcal L}_X}(a))Q^\psi_{q}, \quad a\in {\mathcal L}(X_q,X_p), p,q\in P\setminus\{e\}.$$ In fact, $\overline{\psi\rtimes P}=\overline{\Psi}\rtimes P$ where $\Psi:=\{\Psi_{p,q}\}_{p,q\in P}$ is the associated representation of ${\mathcal K}_X$.
If $\psi:X\to {\mathcal B}(H)$ is Nica covariant then \[Nica equation for projections\] holds by [@kwa-larI Propositions 9.4 and 9.7]. Conversely, if \[Nica equation for projections\] holds then for $a \in {\mathcal K}(X_p)$ and $b\in {\mathcal K}(X_q)$ the product $\psi^{(p)}(a)\psi^{(q)}(b)$ equals $\psi^{(p)}(a)Q^\psi_{p}Q^\psi_{q}\psi^{(q)}(b)$, which is zero if $pP\cap qP=\emptyset$ or is $\psi^{(p)}(a)Q^\psi_{r}Q^\psi_{q}\psi^{(q)}(b)$ if $pP\cap qP=rP$. In the latter case, we have $Q^\psi_{r}\leq Q^\psi_{q}$ and so by [@kwa-larI Proposition 9.4] applied to $\overline{\Psi}:=\{\overline{\Psi}_{p,q}\}_{p,q\in P}$, we obtain $\psi^{(p)}(a)Q^\psi_{r}\psi^{(q)}(b)=\psi^{(r)}\big(\iota^{r}_p(a)\iota^{r}_q(b)\big)$, using also that $\iota^{r}_p(a)\iota^{r}_q(b)\in {\mathcal K}_X(r,r)$.
If $\psi:X\to {\mathcal B}(H)$ is Nica covariant, then $\overline{\Psi}:=\{\overline{\Psi}_{p,q}\}_{p,q\in P}$ is Nica covariant by [@kwa-larI Proposition 9.5]. Putting $\overline{\psi\rtimes P}:=\overline{\Psi}\rtimes P$, for any $a\in {\mathcal L}(X_q,X_p)$, $p,q\in P\setminus\{e\}$ we get $(\overline{\psi\rtimes P})(i_{{\mathcal L}_X}(a))= \overline{\Psi}_{p,q}(a)$, and therefore relations \[extensions relations\] are satisfied. Conversely, if $\overline{\psi\rtimes P}$ is any representation of $\mathcal{DR}({\mathcal{NT}}(X))$ that extends $\psi\rtimes P$, we get a Nica covariant representation $\Phi$ of ${\mathcal L}_X$ that extends $\Psi$. The relations \[extensions relations\] imply that $\Phi=\overline{\Psi}$ and hence $\overline{\psi\rtimes P}=\overline{\Psi}\rtimes P$.
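As an illustration (a standard special case, with $\psi_e$ assumed unital), consider the one-dimensional product system over $P$ given by $X_p:={\mathbb C}$ for all $p\in P$, with $A={\mathbb C}$. A Nica covariant representation $\psi$ with $\psi_e(1)=1$ is then determined by the isometries $v_p:=\psi_p(1)$, which satisfy $v_pv_q=v_{pq}$, and $Q^{\psi}_p=v_pv_p^*$ is the range projection of $v_p$. In this case \[Nica equation for projections\] becomes $$v_pv_p^*\,v_qv_q^*=\begin{cases}
v_rv_r^* & \text{if } pP\cap qP=rP \text{ for some } r\in P,
\\
0 & \text{if } pP\cap qP=\emptyset,
\end{cases}$$ which is the right LCM analogue of Nica's covariance condition for isometric representations of $P$, cf. [@N].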
Via the bijective correspondence in \[precategory representation\], we transport the notions of Toeplitz representation and condition (C) for representations of ${\mathcal K}_X$, see [@kwa-larI Definitions 6.2 and 10.1], to representations of the product system.
A Nica covariant representation $\psi:X\to {\mathcal B}(H)$ is *Toeplitz covariant* or *Nica-Toeplitz covariant* if for each finite family $q_1,\ldots ,q_n\in P\setminus P^*$, $n\in {\mathbb N}$, we have $$\label{Toeplitz condition666}
\psi_e(A)\cap \operatorname{\overline{span}}\{ \psi^{(q_i)}({\mathcal K}(X_{q_i})):i=1,\ldots,n\} =\{0\}.$$ The representation $\psi$ *satisfies condition (C)* if for each finite family $q_1,\ldots ,q_n\in P\setminus P^*$, $n\in {\mathbb N}$, $$\label{Coburn condition666}
\text{the map $A\ni a \longmapsto \psi_e(a) \prod_{i=1}^{n}(1-Q^\psi_{q_i})$ is injective.}$$
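To indicate what these conditions mean in the simplest case (a standard illustration, with $\psi_e$ assumed unital), let $P={\mathbb N}$, $A={\mathbb C}$ and $X_n={\mathbb C}$ for all $n$, so that a Nica covariant representation $\psi$ with $\psi_e(1)=1$ is determined by the single isometry $v:=\psi_1(1)$ and $Q^\psi_n=v^n(v^*)^n$. Since $v^n(v^*)^n\le vv^*$ for $n\ge 1$, condition \[Coburn condition666\] reduces to $$1-vv^*\neq 0,$$ that is, $v$ is a non-unitary isometry, which is precisely the hypothesis of Coburn's uniqueness theorem. A similar computation shows that \[Toeplitz condition666\] is equivalent to the same requirement, in accordance with the fact that here $\phi_n({\mathbb C})\subseteq{\mathcal K}(X_n)$.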
\[conditions T and C for product systems\] Let $\psi:X\to {\mathcal B}(H)$ be a Nica covariant representation of a compactly-aligned product system $X$ and $\Psi:=\{\Psi_{p,q}\}_{p,q\in P}$ the associated representation of ${\mathcal K}_X$.
- $\psi$ is Toeplitz covariant if and only if $\Psi$ is Toeplitz covariant.
- $\psi$ satisfies condition (C) if and only if $\Psi$ satisfies condition (C).
(i). Since $\psi^{(q)}({\mathcal K}(X_q))=\psi_q(X_q)\psi_q(X_q)^*=\Psi_{q,q}({\mathcal K}(X_q))$, Toeplitz covariance of $\Psi$ immediately implies that of $\psi$. The converse is left to the reader.
(ii). In one direction, it is immediate that $\psi$ satisfies condition (C) when $\Psi$ does. For the converse, let $p\in P$ and $q_1,\ldots,q_n\in P$ be such that $p\not\geq q_i$, for $i=1,\ldots,n$ for some $n\in {\mathbb N}$. We must show that ${\mathcal K}(X_p)\ni a \longrightarrow \psi^{(p)}(a)\prod_{i=1}^{n}(1-Q^\psi_{q_i})=\Psi_{p,p}(a)\prod_{i=1}^{n}(1-Q^\Psi_{q_i})$ is injective.
It follows from \[Nica equation for projections\] that $Q^\psi_{p}\prod_{i=1}^{n}(1-Q^\psi_{q_i})=\prod_{i=1}^{k}(Q^\psi_{p}-Q^\psi_{s_i})$ for some $k\geq n$ and $p \leq s_i \not\leq p$. Then $p^{-1}s_i\notin P^*$, for all $i=1,\ldots,k$. Hence the space $K=\prod_{i=1}^{k}(1-Q^\psi_{p^{-1}s_i})H$ is invariant for $\psi_e$ and $\psi_e|_{K}$ is faithful. By [@Fow-Rae Proposition 1.6(2)], this implies that $\overline{\psi_p(X_p)K}$ is invariant for $\psi^{(p)}$ and $\psi^{(p)}|_{\psi_p(X_p)K}$ is faithful. For any $q\in P$, applying [@kwa-larI Proposition 9.4] twice we get $$\psi_p(X_p)Q^\psi_q=\Psi_{p,e}(X_p)Q^\Psi_q=\overline{\Psi}_{pq,q}(X_p\otimes 1_q)=Q^\Psi_{pq}\Psi_{p,e}(X_p)=Q^\psi_{pq}\psi_p(X_p).$$ Therefore $$\psi_p(X_p)\prod_{i=1}^{k}(1-Q^\psi_{p^{-1}s_i})= \prod_{i=1}^{k}(Q^\psi_{p}-Q^\psi_{s_i}) \psi_p(X_p)= Q^\psi_{p}\prod_{i=1}^{n}(1-Q^\psi_{q_i}) \psi_p(X_p),$$ which gives the desired injectivity.
\[thm:faithulness on the core subalgebras\] Let $\psi:X\to {\mathcal B}(H)$ be a Nica covariant representation of a compactly-aligned product system $X$. Let $\overline{\psi\rtimes P}:\mathcal{DR}({\mathcal{NT}}(X))\to B(H)$ be the extension of $\psi\rtimes P:{\mathcal{NT}}(X)\to B(H)$ described in Lemma \[Nica relation Lemma\].
- $\psi\rtimes P$ is faithful on the core $B_e^X$ of ${\mathcal{NT}}(X)$ if and only if $\psi$ is injective and Toeplitz covariant.
- $\overline{\psi\rtimes P}$ is faithful on the core $B_e^{i_{{\mathcal L}_X}}$ of $\mathcal{DR}({\mathcal{NT}}(X))$ if and only if $\psi$ satisfies condition (C).
Moreover, if $\phi_p(A)\subseteq {\mathcal K}(X_p)$ for every $p\in P$, then the equivalent conditions in (i) are satisfied if and only if those in (ii) hold.
Item (i) follows from Lemma \[conditions T and C for product systems\](i) and [@kwa-larI Corollary 6.3] applied to $\Psi$. By Lemma \[conditions T and C for product systems\](ii) and [@kwa-larI Corollary 10.5], $\psi$ satisfies condition (C) if and only if $\overline{\Psi}$ is Nica-Toeplitz covariant and injective. Hence (ii) follows from [@kwa-larI Corollary 6.3] applied to $\overline{\Psi}$. The last claim of the theorem follows from Lemma \[non-degeneracy of K\_X\] and [@kwa-larI Proposition 10.4].
Uniqueness theorems {#subsect:uniqueness theorems}
-------------------
We aim to prove a uniqueness result for ${\mathcal{NT}}(X)$. Our result, see Theorem \[Uniqueness Theorem for product systems I\], may be considered a far-reaching generalization of [@Fow-Rae Theorem 2.1] and [@LR Theorem 3.7], and was motivated in part by the need to better understand both hypotheses and claims of [@F99 Theorem 7.2]. The proof will employ our abstract uniqueness theorem for $C^*$-algebras associated to well-aligned ideals in $C^*$-precategories, cf. [@kwa-larI Corollary 10.14]. As spin-offs of our strategy of proof we will obtain a uniqueness result in a new context, see Theorem \[Uniqueness for product systems II\], and a generalization of [@FR Theorem 5.1].
We start with some preparation. We recall that aperiodicity for the group $\{\otimes 1_h\}_{h\in P^*}$ of right-tensoring automorphisms of a $C^*$-precategory was introduced in [@kwa-larI Definition 10.8]. The notion of aperiodic Fell bundle is from [@KS]. Further, for any product system $X$ the spaces $\{X_h\}_{h\in P^*}$ form a saturated Fell bundle over the discrete group of units $P^*$, see Remark \[rem:on essentiality and Fell bundles\]. By [@KM Theorem 9.8], $\{X_h\}_{h\in P^*}$ is *aperiodic* if and only if its dual action on the spectrum $\widehat{A}$ is *topologically free*, at least when $A$ contains an essential ideal which is separable or of Type I.
\[lem: aperiodicity for product systems\] If $X$ is a product system over a semigroup $P$, then the group $\{\otimes 1_{h}\}_{h\in P^*}$ of automorphisms of ${\mathcal K}_X$ is aperiodic if and only if the Fell bundle $\{X_h\}_{h\in P^*}$ is aperiodic.
The only if part is trivial as $X_h={\mathcal K}_{X}(h,e)$, $h\in P^*$. For the converse, let $p\in P$ and $h\in P^*\setminus\{e\}$. We may view $X_p$ as an equivalence ${\mathcal K}_X(p,p)$-$A$-bimodule, in an obvious way. Also we may view ${\mathcal K}_X(ph,p)$ as a $C^*$-correspondence over ${\mathcal K}_X(p,p)$ with left action implemented by $\otimes 1_h$. With $\widetilde{X}_{p}$ denoting the dual correspondence, we clearly have isomorphisms of $C^*$-correspondences $${\mathcal K}_X(ph,p)\cong X_{ph}\otimes_A \widetilde{X}_{p}\cong X_{p}\otimes_A X_{h}\otimes_A \widetilde{X}_{p}.$$ Hence ${\mathcal K}_X(ph,p)$ is an equivalence bimodule Morita equivalent to the equivalence $A$-bimodule $X_h$, cf. [@KM Lemma 6.4]. Thus the assertion follows from [@KM Corollary 6.3].
We recall that various criteria for amenability of ideals in right-tensor $C^*$-precategories are given in [@kwa-larI Section 8]. For example, any such ideal is amenable when the underlying semigroup admits a controlled map into an amenable group, see [@kwa-larI Theorem 8.4] in conjunction with the fact that any Fell bundle over an amenable group has amenable full sectional $C^*$-algebra.
\[Uniqueness Theorem for product systems I\] Let $X$ be a compactly-aligned product system over a right LCM semigroup $P$ such that ${\mathcal K}_X$ is amenable. Suppose that either $P^*=\{e\}$ or that the Fell bundle $\{X_h\}_{h\in P^*}$ is aperiodic.
Consider the following conditions on a Nica covariant representation $\psi:X\to B(H)$:
- $\psi$ satisfies condition (C);
- $\psi\rtimes P$ is an isomorphism from ${\mathcal{NT}}(X)$ onto $\operatorname{\overline{span}}\{\psi(x)\psi(y)^*: x,y \in X \}
$;
- $\psi$ is injective and Toeplitz covariant.
Then (i)$\Rightarrow$(ii)$\Rightarrow$(iii), and if $\phi_p(A)\subseteq {\mathcal K}(X_p)$ for every $p\in P$, then all three conditions are equivalent.
Taking into account Lemmas \[non-degeneracy of K\_X\], \[conditions T and C for product systems\] and Proposition \[going forward prop\], the assertion follows from [@kwa-larI Corollary 10.14].
In general, condition (i) in Theorem \[Uniqueness Theorem for product systems I\] is stronger than (iii), cf. Example \[ex:DR-Oinfty\]. It is an open problem whether, under the assumptions of Theorem \[Uniqueness Theorem for product systems I\], conditions (ii) and (iii) are always equivalent. We believe the answer to be affirmative and in the next result we confirm this under an assumption of aperiodicity.
\[Abstract uniqueness\] Suppose that the LCM semigroup $P$ is a subsemigroup of a group $G$. Let $X$ be a compactly-aligned product system over $P$, and let ${\mathcal B}^\theta=\{B_g^\theta\}_{g\in G}$ be the Fell bundle associated to ${\mathcal K}_X$ and $\theta=\operatorname{id}$ in [@kwa-larI Theorem 8.4]. If ${\mathcal B}^\theta$ is amenable and aperiodic, then for any Nica covariant representation $\psi:X\to B(H)$, the representation $\psi\rtimes P$ of ${\mathcal{NT}}(X)$ is faithful if and only if $\psi$ is injective and Toeplitz covariant.
By [@kwa-larI Proposition 12.10], see also [@KS Corollary 4.3], $\psi\rtimes P$ is faithful on ${\mathcal{NT}}(X)$ if and only if it is faithful on the core subalgebra $B_e^\theta= B_e^X$. By Theorem \[thm:faithulness on the core subalgebras\], this holds if and only if $\psi$ is injective and Toeplitz covariant. See also [@kwa-larI Remark 10.13].
We can use condition (C) in its full force by exploiting the Doplicher-Roberts version of the Nica-Toeplitz algebra.
\[Uniqueness for product systems II\] Let $X$ be a compactly-aligned product system over a right LCM semigroup $P$. Suppose that either $P^*=\{e\}$ or that the Fell bundle $\{X_h\}_{h\in P^*}$ is aperiodic. Assume also that ${\mathcal L}_X$ is amenable. Then for a Nica covariant representation $\psi:X\to B(H)$ the following are equivalent:
- $\psi$ satisfies condition (C);
- $\overline{\psi\rtimes P}$ is an isomorphism from $\mathcal{DR}({\mathcal{NT}}(X))$ onto the closed linear span of operators $T$ satisfying $T\in \psi(X_e)\cup \psi(X_e)^*$ or $$T\in Q_{p}^{\psi}B(H)Q_{q}^{\psi}\, \text{ where }\, T\psi(X_q)\subseteq \psi(X_p)\, \text{ and } \,T^{*}\psi(X_p)\subseteq \psi(X_q), \text{ for } p,q\in P\setminus\{e\}.$$
The isomorphism in item (ii) restricts, under the embedding ${\mathcal{NT}}(X)\hookrightarrow \mathcal{DR}({\mathcal{NT}}(X))$, to a natural isomorphism ${\mathcal{NT}}(X) \cong \operatorname{\overline{span}}\{\psi(x)\psi(y)^*: x,y \in X \}.$
In view of Lemmas \[non-degeneracy of K\_X\], \[conditions T and C for product systems\], we may apply [@kwa-larI Theorem 10.15]. To finish the proof, we need to show that for any $p,q\in P\setminus\{e\}$, we have $$\overline{\Psi}({\mathcal L}(X_q,X_p))=\{T\in Q_{p}^{\psi}B(H)Q_{q}^{\psi}:T\psi(X_q)\subseteq \psi(X_p)\,\text{ and } \, T^{*}\psi(X_p)\subseteq \psi(X_q)\}$$ where $\overline{\Psi}:{\mathcal L}_X\to B(H)$ is the extension of the Nica-Toeplitz representation $\Psi:=\{\Psi_{p,q}\}_{p,q\in P}$ of ${\mathcal K}_X$. This equality readily follows from the fact that $\psi$ is injective (and hence isometric on each fiber) and for $a\in {\mathcal L}(X_q,X_p)$, $\overline{\Psi}(a)$ is determined by the formulas $\overline{\Psi}(a)\psi_q(x)=\psi_p(ax)$ and $\overline{\Psi}(a) (Q_{q}^{\psi})^\bot=0$, where $x\in X_q$.
\[ex:DR-Oinfty\] We will now illustrate the above uniqueness results for $C^*$-algebras of product systems in the case of the Cuntz algebra ${\mathcal O}_\infty$. This in particular will explain the results and phenomena encountered in [@Fow-Rae].
Let $\{u_i: i\in {\mathbb N}\}\subset {\mathcal B}(H)$ be a family of isometries with orthogonal ranges, so $u_i^*u_j=\delta_{i,j} 1$ for all $i,j \in {\mathbb N}$. For $n>0$ and any finite collection $i_1,\dots, i_n$ of indices, let $u_{i_1i_2\dots i_n}:=u_{i_1}u_{i_2}\dots u_{i_n}$ and define $$X_n:=\operatorname{\overline{span}}\{u_{i_1i_2\dots i_n}: i_1, \dots, i_n\in {\mathbb N}\}, \qquad Q_n:=\sum_{i_1, \dots, i_n\in {\mathbb N}} u_{i_1i_2\dots i_n} u_{i_1i_2\dots i_n}^*.$$ We also put $X_0={\mathbb C}I$ and $Q_0=1$. The family $X=\{X_n\}_{n\in {\mathbb N}}$ with operations inherited from ${\mathcal B}(H)$ becomes a product system over the semigroup ${\mathbb N}$ with coefficient algebra $A={\mathbb C}$. For each $n>0$, $Q_n$ is the orthogonal projection onto the space $X_nH$. Note that \[Coburn condition666\], which is our geometric condition (C), is equivalent to asking that $Q_1$ is not equal to $1$, i.e. $$\label{eq:ranges dont span}
\sum_{i\in {\mathbb N}}u_iu_i^* < 1,$$ where the infinite sum is defined using the strong operator topology. Since ${\mathcal{NT}}(X)$ is generated by an infinite family of isometries with orthogonal ranges given by $\{i_X(u_j):j\in {\mathbb N}\}$, [@Cu77 Theorem 1.12] gives an isomorphism $$\label{normal O_infty}
{\mathcal{NT}}(X)\cong \operatorname{\overline{span}}\{ X_n X_m^*: n,m\in {\mathbb N}\}\cong {\mathcal O}_\infty.$$ In particular, every countably infinite family of isometries with orthogonal ranges gives rise to an injective Nica-Toeplitz representation of $X$; the algebraic condition \[Toeplitz condition666\] is satisfied automatically. We denote by $\mathcal{DR}({\mathcal O}_\infty)$ the Doplicher-Roberts algebra associated to $(X_n)_{n\in {\mathbb N}}$. Theorem \[Uniqueness for product systems II\] implies that \[eq:ranges dont span\] is equivalent to having an isomorphism $$\label{Doplicher-Roberts O_infty}
\mathcal{DR}({\mathcal O}_\infty)\cong \operatorname{\overline{span}}\left\{\bigcup_{n,m\in {\mathbb N}} \{T \in Q_mB(H)Q_n: \,\,TX_n \subseteq X_m \,\, \text{ and } \,\,T^{*}X_m\subseteq X_n\}\right\}.$$ Without condition \[eq:ranges dont span\], all we can say is that there is a surjective homomorphism from $\mathcal{DR}({\mathcal O}_\infty)$ onto the right-hand side of \[Doplicher-Roberts O\_infty\], obtained from the universal property of $\mathcal{DR}({\mathcal{NT}}(X))$.
This example illustrates the fact that condition (C) captures uniqueness of the Doplicher-Roberts algebra $\mathcal{DR}({\mathcal O}_\infty)$, which is a $C^*$-algebra containing ${\mathcal O}_\infty$, and that uniqueness of ${\mathcal O}_\infty$ as a $C^*$-algebra generated by isometries with orthogonal ranges is independent of condition (C). This phenomenon is consistent with our Theorem \[Uniqueness Theorem for product systems I\], as the left action of $A={\mathbb C}1$ on $X_1\cong \ell^2({\mathbb N})$ is not by generalized compacts.
In order to get an efficient uniqueness theorem for ${\mathcal O}_\infty$ one needs to view it as a Nica-Toeplitz algebra over the free semigroup ${\mathbb F}_{{\mathbb N}}^+$. This idea, in disguise, was exploited in [@Fow-Rae]. With our results in hand we can make it formal and explicit. Note that any product system over a free semigroup ${\mathbb F}_\Lambda^+$ is automatically compactly-aligned.
\[bla bla lemma for free product systems\] Let $Y:=\bigoplus_{\lambda \in \Lambda} Y_\lambda$ be a direct sum of $C^*$-correspondences $Y_\lambda$, $\lambda \in \Lambda$, over a $C^*$-algebra $A$. There is a product system $X=\{X_p\}_{p\in {\mathbb F}_\Lambda^+}$ over $A$ such that for any word $p=\lambda_1\dots \lambda_n\in {\mathbb F}_\Lambda^+$ we have $$X_{p}:=Y_{\lambda_1}\otimes Y_{\lambda_2}\otimes\dots\otimes Y_{\lambda_n}$$ and the product in $X$ is given by the iterated internal tensor product. We have a one-to-one correspondence between Nica covariant representations $\Psi$ of $X=\{X_p\}_{p\in {\mathbb F}_\Lambda^+}$ and representations $(\pi,\psi)$ of the $C^*$-correspondence $Y$ where $$\Psi(y_{\lambda_1}\otimes y_{\lambda_2}\otimes\dots\otimes y_{\lambda_n})=\psi(y_{\lambda_1})\psi(y_{\lambda_2})\dots\psi(y_{\lambda_n}) ,\qquad y_{\lambda_i}\in Y_{\lambda_i}, i=1,\dots,n.$$ Thus we have a natural isomorphism ${\mathcal T}_Y\cong {\mathcal{NT}}(X)$.
The proof is straightforward. We leave the details to the reader.
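Let us nevertheless sketch two elementary observations which the interested reader may find useful when verifying the statement. First, for a representation $(\pi,\psi)$ of $Y$ the summands of $Y$ are mapped to spaces with orthogonal ranges: $$\psi(y_\lambda)^*\psi(y_\mu)=\pi(\langle y_\lambda, y_\mu\rangle)=0, \qquad y_\lambda\in Y_\lambda,\ y_\mu\in Y_\mu,\ \lambda\neq\mu.$$ Second, for words $p,q\in {\mathbb F}_\Lambda^+$ one has $p{\mathbb F}_\Lambda^+\cap q{\mathbb F}_\Lambda^+\neq\emptyset$ precisely when one of $p,q$ is a prefix of the other, in which case the intersection is the principal right ideal generated by the longer word. Combining these two facts one checks that the displayed multiplicativity rule determines a representation $\Psi$ of $X$ which is automatically Nica covariant.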
\[Fowler-Raeburn result\] Let $Y=\bigoplus_{\lambda \in \Lambda} Y_\lambda$ be a direct sum of $C^*$-correspondences $Y_\lambda$, $\lambda \in \Lambda$, over a $C^*$-algebra $A$. Consider the following conditions that a representation $(\pi,\psi)$ of the $C^*$-correspondence $Y$ in a Hilbert space $H$ may satisfy:
- (i) $A$ acts, via $\pi$, faithfully on $(\psi(\bigoplus_{\lambda \in F} Y_\lambda)H)^\bot$ for every finite subset $F$ of $\Lambda$;
- (ii) The $C^*$-algebra generated by $\pi(A)\cup \psi(Y)$ is naturally isomorphic to ${\mathcal T}_Y$;
- (iii) $\pi(A)\cap \operatorname{\overline{span}}\{\psi(x)\psi(y)^*: x,y\in Y_\lambda, \lambda \in F\}=\{0\}
$ for every finite $F\subseteq \Lambda$.
Then (i)$\Rightarrow$(ii)$\Rightarrow$(iii). Moreover, if $A$ acts by generalized compacts on the left of each $Y_\lambda$, $\lambda\in \Lambda$, then all the above conditions are equivalent.
Let $X=\{X_p\}_{p\in {\mathbb F}_\Lambda^+}$ be the product system described in Lemma \[bla bla lemma for free product systems\]. Since ${\mathcal{NT}}(X)$ and ${\mathcal{NT}}^r(X)$ are isomorphic by [@kwa-larI Corollary 8.6] and $({\mathbb F}_\Lambda^+)^*=\{e\}$, we may apply Theorem \[Uniqueness Theorem for product systems I\] to $X$. Translating the result, using Lemma \[bla bla lemma for free product systems\], to $C^*$-correspondences $Y_\lambda$, $\lambda\in \Lambda$, we get the assertions.
The relationship between conditions (i) and (ii) in Corollary \[Fowler-Raeburn result\] was established in [@Fow-Rae Theorem 3.1]. The algebraic condition (iii), which is what we call Toeplitz covariance, in general does not imply (i), see Example \[ex:DR-Oinfty\]. We have already seen pieces of evidence that in general Toeplitz covariance could be the right condition for characterizing uniqueness of ${\mathcal{NT}}(X)$. Further evidence for this is provided again by the case of ${\mathcal O}_\infty$, as we shall now explain.
If we specialize Corollary \[Fowler-Raeburn result\] to the $C^*$-correspondence $X_1\cong \ell^2({\mathbb N})$ over ${\mathbb C}$ from Example \[ex:DR-Oinfty\], then we may view $X_1$ as a direct sum over ${\mathbb N}$ of finite-dimensional (even one-dimensional) spaces $Y_n$. It is readily seen that for every representation of $X$ coming from a countably infinite family of isometries with orthogonal ranges, both conditions (i) and (iii) in Corollary \[Fowler-Raeburn result\] are satisfied. Since the left action on each $Y_n$, $n\geq 1$, is by compacts, we may use either of these conditions to recover the uniqueness of ${\mathcal O}_\infty$.
Semigroup $C^*$-algebras twisted by product systems {#Fowler-Raeburn section}
---------------------------------------------------
Let $X$ be a compactly-aligned product system over a right LCM semigroup $P$. For each $p\in P$, let $\mathds{1}_p \in \ell^\infty(P)$ be the characteristic function of $pP$. Since the product $\mathds{1}_p \mathds{1}_q$ is either $\mathds{1}_r$ (if $pP\cap qP=rP$) or $0$, we have that $B_P:= \operatorname{\overline{span}}\{\mathds{1}_p:p\in P\}$ is a $C^*$-subalgebra of $\ell^\infty(P)$. Moreover, the projections $\mathds{1}_p$ form a semilattice isomorphic to $J(P)$. Recall that $1_r$ denotes the identity in ${\mathcal L}(X_r)$ for every $r\in P$. If $1$ is the identity in the unitization $\operatorname{\mathcal{DR}}({\mathcal{NT}(X)})^{\sim}$ of $\operatorname{\mathcal{DR}}({\mathcal{NT}(X)})$, then the projections $\{i_{{\mathcal L}_X}(1_p)\}_{p\in P\setminus\{e\}}\cup \{1\}$ form a semilattice isomorphic to $J(P)$, cf. [@kwa-larI Lemma 5.8]. Since the family $J(P)$ is independent, see [@bls Corollary 3.6], it follows from [@Li2 Proposition 2.4] that the assignment $$B_P \ni \mathds{1}_p \longmapsto i_{{\mathcal L}_X}(1_{p})\in \operatorname{\mathcal{DR}}({\mathcal{NT}(X)}), \qquad p\in P\setminus \{e\},$$ extends uniquely to an injective unital homomorphism $B_P \hookrightarrow \operatorname{\mathcal{DR}}({\mathcal{NT}(X)})^{\sim}$. We will use it to identify $B_P$ with a $C^*$-subalgebra of $\operatorname{\mathcal{DR}}({\mathcal{NT}(X)})^{\sim}$.
Let $X$ be a compactly-aligned product system. We call the $C^*$-algebra $$\operatorname{\mathcal{FR}}(X):=C^*\left(B_P\cdot {\mathcal{NT}}(X)\right)\subseteq \operatorname{\mathcal{DR}}({\mathcal{NT}(X)}),$$ generated by elements $i_{{\mathcal L}_X}(1_p)i_{{\mathcal K}_X}(a) $ where $a\in {\mathcal K}(q,r), p,q,r\in P$, $p\neq e$, the *Fowler-Raeburn algebra* of $X$ or the *semigroup $C^*$-algebra of $P$ twisted by $X$*.
\[Fowler-Raeburn algebra form\] We have $\operatorname{\mathcal{FR}}(X)=\operatorname{\overline{span}}\{i_{{\mathcal K}_X}(x) i_{{\mathcal L}_X}(1_p)i_{{\mathcal K}_X}(y)^* : x,y\in X, p \in P\}$. In particular, $B_P \subseteq M(\operatorname{\mathcal{FR}}(X))$. Moreover, $\operatorname{\mathcal{FR}}(X)={\mathcal{NT}(X)}$ if and only if the left action of $A$ on each fiber $X_p$ is by generalized compacts.
For any $x\in X$ and $p\in P$, using Nica covariance of $i_{{\mathcal L}_X}$ twice, we get $
i_{{\mathcal K}_X}(x) i_{{\mathcal L}_X}(1_p)=i_{{\mathcal L}_X}(x\otimes 1_{p})= i_{{\mathcal L}_X}(1_{d(x)p})i_{{\mathcal K}_X}(x),
$ and similarly $$\label{Fowlers proof relation}
i_{{\mathcal L}_X}(1_p) i_{{\mathcal K}_X}(x)=
\begin{cases}
i_{{\mathcal K}_X}(x)i_{{\mathcal L}_X}(1_{d(x)^{-1}r}) & \text{if }pP\cap d(x)P=rP,
\\
0, & \text{otherwise}.
\end{cases}$$ This implies that $B_P\cdot {\mathcal{NT}}(X)\subseteq \operatorname{\overline{span}}\{i_{{\mathcal K}_X}(x) i_{{\mathcal L}_X}(1_p)i_{{\mathcal K}_X}(y)^* : x,y\in X, p \in P\}\subseteq \operatorname{\mathcal{FR}}(X)$. Hence to prove the first part of the assertion, it suffices to show that the product of two elements of the form $i_{{\mathcal K}_X}(x) i_{{\mathcal L}_X}(1_p)i_{{\mathcal K}_X}(y)^*$ and $ i_{{\mathcal K}_X}(z) i_{{\mathcal L}_X}(1_s)i_{{\mathcal K}_X}(w)^*$, $x,y,z,w \in X$, $p,s\in P$, can be approximated by a finite sum of elements of that form. The product $i_{{\mathcal K}_X}(y)^* i_{{\mathcal K}_X}(z)$ can be approximated by a finite sum of elements of the form $i_{{\mathcal K}_X}(f) i_{{\mathcal K}_X}(g)^*$, $f,g\in X$. Applying \eqref{Fowlers proof relation} twice, we see that the product $$i_{{\mathcal K}_X}(x) i_{{\mathcal L}_X}(1_p) i_{{\mathcal K}_X}(f) i_{{\mathcal K}_X}(g)^* i_{{\mathcal L}_X}(1_s)i_{{\mathcal K}_X}(w)^*$$ is either zero or of the form $i_{{\mathcal K}_X}(xf) i_{{\mathcal L}_X}(1_t)i_{{\mathcal K}_X}(wg)^*$. Thus $\operatorname{\mathcal{FR}}(X)$ is the closed linear span of $\{i_{{\mathcal K}_X}(x) i_{{\mathcal L}_X}(1_p)i_{{\mathcal K}_X}(y)^* : x,y\in X, p \in P\}$. Now, \eqref{Fowlers proof relation} implies $B_P \subseteq M(\operatorname{\mathcal{FR}}(X))$.
If the left action of $A$ on each fiber $X_p$ is by compact operators, then ${\mathcal K}_X\otimes 1 \subseteq {\mathcal K}_X$, by Lemma \[non-degeneracy of K\_X\]. Hence $i_{{\mathcal K}_X}(x) i_{{\mathcal L}_X}(1_p)i_{{\mathcal K}_X}(y)^*= i_{{\mathcal K}_X}(x\otimes 1_{p})i_{{\mathcal K}_X}(y)^*\in {\mathcal{NT}(X)}$ for every $x,y\in X$ and $p\in P$. Therefore $\operatorname{\mathcal{FR}}(X)\subseteq {\mathcal{NT}(X)}$ and the reverse inclusion is obvious.
Conversely, if $\operatorname{\mathcal{FR}}(X)\subseteq {\mathcal{NT}(X)}$, then for any $a\in A$ and $p\in P\setminus\{e\}$ we have that $i_{{\mathcal L}_X}(\phi_p(a))=i_{{\mathcal L}_X}(1_p)i_{{\mathcal K}_X}(a)\in {\mathcal{NT}}(X)$. Hence for any $\varepsilon >0$ there is a finite sum of the form $S=\sum_{s,t} i_{{\mathcal K}_X}(a_{s,t})$ where $a_{s,t}\in {\mathcal K}(X_t,X_s)$ such that $\|i_{{\mathcal L}_X}(\phi_p(a)) -S\|< \varepsilon$. Hence $\|E^{\mathbb{L}}(\Lambda (i_{{\mathcal L}_X}(\phi_p(a))))- E^{\mathbb{L}}(\Lambda (S))\|< \varepsilon$, where $E^{\mathbb{L}}$ is the transcendental conditional expectation on ${\mathcal{NT}}^r(X)$ constructed in [@kwa-larI]. By [@kwa-larI Proposition 5.4], cf. also [@kwa-larI Remark 5.7], we have $E^{{\mathbb{L}}}\Big({\mathbb{L}}(a_{p,q})\Big)= \bigoplus_{w\in pP\cap qP,\atop p^{-1}w=q^{-1}w}{\mathbb{L}}_{p,q}^{(w)}(a_{p,q})$, for $a_{p,q}\in {\mathcal K}(X_q,X_p)$, $p,q \in P$, where ${\mathbb{L}}^{(p)}_{p,p}=\operatorname{id}$ for every $p\in P$. This implies that $\|\phi_p(a)-a_{p,p} \|< \varepsilon$. Thus $\phi_p(a)\in {\mathcal K}(X_p)$.
Left translation on $\ell^\infty(P)$ restricts to a unital semigroup homomorphism $\tau:P \to \operatorname{End}(B_P)$, determined by $\tau_q(\mathds{1}_p) = \mathds{1}_{qp}$ for $p,q\in P$. The isometric crossed product $B_P \rtimes _\tau P$ is naturally isomorphic to the semigroup $C^*$-algebra $C^*(P)$, see [@Li Lemma 2.14], so $\operatorname{\mathcal{FR}}(X)$ may be viewed as a version of $C^*(P)$ twisted by $X$, see [@FR], [@F99]. We make this explicit in our setting.
\[induced\_endomorphic\_action\] Let $\psi$ be a nondegenerate representation of $X$ on a Hilbert space $H$. For each $p\in P\setminus \{e\}$ there is a unique endomorphism $\alpha^\psi_p$ of $\psi_e(A)'$ such that $$\alpha^\psi_p(S) \psi_p(x) =\psi_p(x)S, \qquad \text{for all } S \in \psi_e(A)', \,\, x\in X_p,$$ and $\alpha^\psi_p(1)$ vanishes on $(\psi_p(X_p)H)^\bot$. We put $\alpha_e^\psi=\operatorname{id}$. Then $\alpha^\psi: P \to \operatorname{End}(\psi_e(A)')$ is a unital semigroup homomorphism.
The existence of $\alpha^\psi_p$ for each $p\in P\setminus\{e\}$ is proved in [@F99 Proposition 4.1 (1)]. The semigroup law $\alpha^\psi_p\circ \alpha^\psi_q=\alpha^\psi_{pq}$ for $p,q\in P\setminus\{e\}$ is proved in [@F99 Proposition 4.1 (2)]. To allow $p=e$, Fowler assumes all $X_p$ are essential. With our definition of $\alpha_e^\psi$, the semigroup law follows by a direct verification if one or both of $p,q$ equal $e$.
A *covariant representation* of the quadruple $(B_P, P, \tau, X)$ on a Hilbert space $H$ is a pair $(\pi,\psi)$ consisting of a nondegenerate representation $\pi:B_P \to B(H)$ and a nondegenerate representation $\psi:X\to B(H)$ such that $
\pi(B_P)\subseteq \psi_e(A)'$ and $\pi \circ \tau_p = \alpha_p^\psi \circ \pi$, $p \in P,$ where $\alpha^\psi: P \to \operatorname{End}(\psi_e(A)')$ is defined in Lemma \[induced\_endomorphic\_action\].
There is a bijective correspondence between covariant representations $(\pi,\psi)$ of $(B_P, P, \tau, X)$ and Nica covariant representations $\psi$ of $X$ implemented by $\pi(\mathds{1}_p)= \alpha_p^\psi(1)$ for $p\in P$.
In particular, there is a covariant representation $(i_{B_P}, i_X)$ of $(B_P, P, \tau, X)$ such that
- $\operatorname{\mathcal{FR}}(X)=C^*(i_{B_P}(B_P) i_X(X))$
- for every covariant representation $(\pi,\psi)$ of $(B_P, P, \tau, X)$ there is a representation $\pi\rtimes \psi$ of $\operatorname{\mathcal{FR}}(X)$ such that $\overline{\pi\rtimes \psi} \circ i_{B_P} = \pi$ and $\overline{\pi\rtimes \psi} \circ i_{X} = \psi$.
If $(\pi,\psi)$ is a covariant representation of $(B_P, P, \tau, X)$, then $\pi(\mathds{1}_p)= \alpha_p^\psi(1)$, $p\in P$, and this relation determines $\pi$. Moreover, we have $\pi(\mathds{1}_p)= \alpha_p^\psi(1)=Q_p^\psi$, and therefore $\psi$ is Nica covariant by Lemma \[Nica relation Lemma\]. Conversely, if $\psi$ is a Nica covariant representation of $X$, then the projections $\alpha_p^\psi(1)=Q_p^\psi$ satisfy the required semilattice relations and belong to $\psi_e(A)'$, cf. [@kwa-larI Proposition 9.5]. Hence there is a representation $\pi:B_P\to \psi_e(A)'$ determined by $\pi(\mathds{1}_p)= \alpha_p^\psi(1)$, $p\in P$, by [@Li2 Proposition 2.4]. Since $
\pi (\tau_p (\mathds{1}_{q})) = \pi(\mathds{1}_{pq})= \alpha_{pq}^\psi(1)= \alpha_{p}^\psi(\alpha_{q}^\psi(1))= \alpha_{p}^\psi(\pi(\mathds{1}_q))
$ for every $p,q\in P$, we conclude that $(\pi,\psi)$ is a covariant representation of $(B_P, P, \tau, X)$.
The second part of the assertion is now immediate (by representing $\operatorname{\mathcal{FR}}(X)$ faithfully and nondegenerately on a Hilbert space).
\[Uniqueness for product systems III\] Retaining the assumptions of Theorem \[Uniqueness for product systems II\], each of the conditions (i) and (ii) therein is equivalent to the following one:
- (iii) $\operatorname{\mathcal{FR}}(X)\cong \operatorname{\overline{span}}\{\psi(x) Q_p^\psi \psi(y)^*: x,y\in X, p \in P\}$. In particular, $\pi\rtimes \psi$ is faithful.
The implication (ii)$\Rightarrow$(iii) is obvious as $\operatorname{\mathcal{FR}}(X)\subseteq \operatorname{\mathcal{DR}}({\mathcal{NT}(X)})$. The implication (iii)$\Rightarrow$(i) follows because the conditions in question involve only elements lying in the image of the corresponding representation of $\operatorname{\mathcal{FR}}(X)$; hence if they are satisfied in $\operatorname{\mathcal{DR}}({\mathcal{NT}(X)})$, they are also satisfied in $\operatorname{\mathcal{FR}}(X)$.
If $X$ is compactly aligned, $P$ is a positive cone in a quasi-lattice ordered group and all the fibers $X_p$, $p\in P$, are essential, then $\operatorname{\mathcal{FR}}(X)$ coincides with the algebra denoted by $B_P\rtimes_{\tau,X}P$ in [@F99], see also [@FR]. In this case the equivalence of (i) and (iii) in Theorem \[Uniqueness for product systems III\] is [@F99 Theorem 7.2], which in turn is a generalization of [@FR Theorem 5.1].
\[ex:FR-Oinfty\] Retain the notation of Example \[ex:DR-Oinfty\]. We noticed there that conditions \eqref{eq:ranges dont span} and \eqref{Doplicher-Roberts O_infty} are equivalent. Denoting by $\mathcal{FR}({\mathcal O}_\infty)$ the Fowler-Raeburn algebra $\operatorname{\mathcal{FR}}(X)$ associated to $(X_n)_{n\in {\mathbb N}}$, we see now, using Theorem \[Uniqueness for product systems III\], that these equivalent conditions are further equivalent to having an isomorphism $$\label{Fowler-Raburn O_infty}
\mathcal{FR}({\mathcal O}_\infty)\cong \operatorname{\overline{span}}\{ X_n Q_k X_m^*: n,m,k\in {\mathbb N}\}.$$ In particular, ${\mathcal O}_\infty$ embeds as a proper subalgebra of $\mathcal{FR}({\mathcal O}_\infty)$ by \eqref{normal O_infty} and the second part of Proposition \[Fowler-Raeburn algebra form\], cf. [@FR Example 5.6(2)]. The algebra $\mathcal{FR}({\mathcal O}_\infty)$ is separable while $\mathcal{DR}({\mathcal O}_\infty)$ is not.
Nica-Toeplitz crossed products by completely positive maps {#section:NT-cp-ccp-maps}
==========================================================
In this section we will introduce a general definition of a Nica-Toeplitz $C^*$-algebra for an action of an LCM semigroup by completely positive maps. We will do it in two steps. First we introduce a Toeplitz $C^*$-algebra, and then obtain a Nica-Toeplitz $C^*$-algebra as a quotient by ‘eliminating redundancies’. In the subsequent subsections we will analyze these $C^*$-algebras in more detail in two special cases, namely when the action is by endomorphisms or by transfer operators.
General construction
--------------------
Let $P$ be a right LCM semigroup. Let ${\textrm{CP}}(A)$ denote the semigroup of completely positive maps on a $C^*$-algebra $A$ (with the semigroup operation given by composition).
\[C\*-dynamical system\] Let $\varrho:P \ni p \mapsto \varrho_p\in {\textrm{CP}}(A)$ be a unital semigroup antihomomorphism, i.e. $\varrho_e=\operatorname{id}$ and $\varrho_q\circ \varrho_p=\varrho_{pq}$ for all $p,q\in P$. We call $(A,P,\varrho)$ a *$C^*$-dynamical system*.
A *representation of the semigroup* $P$ in a Hilbert space $H$ is a unital semigroup homomorphism $S:P\to {\mathcal B}(H)$ into the multiplicative semigroup of ${\mathcal B}(H)$. The following is an obvious semigroup generalization of [@kwa-exel Definition 3.1].
A *representation* of a $C^*$-dynamical system $(A,P,\varrho)$ on a Hilbert space $H$ is a pair $(\pi, S)$ consisting of a nondegenerate representation $\pi:A\to {\mathcal B}(H)$ and a homomorphism $S:P\to {\mathcal B}(H)$ such that $$\label{cp map representation relation}
S_p^*\pi(a)S_p=\pi(\varrho_p(a))$$ for all $p\in P$ and $a\in A$. We put $C^*(\pi,S):=C^*(\bigcup_{p\in P} \pi(A)S_p)$. Exactly as in the proof of [@kwa-exel Lemma 3.2], one can prove that there is a universal representation $(i_A, \hat{t})$ of $(A,P,\varrho)$; universal in the sense that for any other representation $ (\pi,S)$ of $(A,P,\varrho)$ the maps $$\label{toeplitz epimorphism}
i_A(a)\longmapsto \pi(a), \qquad i_A(a)\hat{t}_p \longmapsto \pi(a)S_p, \qquad a\in A,\,\, p\in P,$$ give rise to an epimorphism from $C^*(i_A,\hat{t})$ onto $C^*(\pi,S)$. Up to a natural isomorphism the $C^*$-algebra ${\mathcal T}(A,P,\varrho):=C^*(i_A(A),\hat{t})$ is uniquely determined by $(A,P,\varrho)$, and we call it the *Toeplitz algebra of $(A,P,\varrho)$*.
For any $C^*$-algebra $C$ we denote by $\operatorname{\mathcal{RM}}(C)$, $\operatorname{\mathcal{LM}}(C)$ and $\operatorname{\mathcal{M}}(C)$ the algebras of right, left and two-sided multipliers of $C$, respectively, cf. [@pedersen 3.12]. We say that a map $\varrho$ on $C$ is *strict* if for any approximate unit $\{\mu_\lambda\}$ in $C$, the net $\{\varrho(\mu_\lambda)\}$ converges strictly to a multiplier of $C$.
\[lem: Toeplitz algebra for completely positives\] We have ${\mathcal T}(A,P,\varrho)=C^*(\bigcup_{p\in P} i_{A}(A)\hat{t}_p i_{A}(A))$. Hence $i_{A}(A)$ is a nondegenerate subalgebra of ${\mathcal T}(A,P,\varrho)$ and $\{\hat{t}_p\}_{p\in P}\subseteq \operatorname{\mathcal{RM}}({\mathcal T}(A,P,\varrho))$. If every $\varrho_p$, $p\in P$, is strict then ${\mathcal T}(A,P,\varrho)=C^*(\bigcup_{p\in P} \hat{t}_p i_{A}(A))$ and $\{\hat{t}_p\}_{p\in P}\subseteq \operatorname{\mathcal{M}}({\mathcal T}(A,P,\varrho))$.
Suppose that ${\mathcal T}(A,P,\varrho)$ acts in a nondegenerate way on a Hilbert space $H$. By [@kwa-exel Proposition 3.10 and Lemma 3.8], for any $p\in P$ and an approximate unit $\{\mu_\lambda\}_{\lambda\in \Lambda}$ in $A$ we have $\textrm{s-}\lim_{\lambda\in \Lambda} i_{A}(\mu_\lambda)\hat{t}_p=\hat{t}_p$ and $i_{A}(A)\hat{t}_p \subseteq i_{A}(A)\hat{t}_p i_{A}(A)$. In particular, $\hat{t}_p\in {\mathcal T}(A,P,\varrho)''$ and $ {\mathcal T}(A,P,\varrho)\subseteq C^*(\bigcup_{p\in P} i_{A}(A)\hat{t}_p i_{A}(A))$. The reverse inclusion $ C^*(\bigcup_{p\in P} i_{A}(A)\hat{t}_p i_{A}(A)) \subseteq {\mathcal T}(A,P,\varrho)$ is clear since $i_{A}(A)=i_{A}(A)\hat{t}_e\subseteq {\mathcal T}(A,P,\varrho)$. Thus $i_{A}(A)$ is a nondegenerate subalgebra of ${\mathcal T}(A,P,\varrho)$. Every $b\in{\mathcal T}(A,P,\varrho)$ is of the form $b'i_{A}(a)$, where $b'\in{\mathcal T}(A,P,\varrho)$, $a\in A$, and $
b \hat{t}_p= b'i_{A}(a)\hat{t}_p\in b' i_{A}(A)\hat{t}_p i_{A}(A) \subseteq {\mathcal T}(A,P,\varrho).
$ Hence $\hat{t}_p \in \operatorname{\mathcal{RM}}({\mathcal T}(A,P,\varrho))$, for every $p\in P$.
Suppose now that every map $\varrho_p$, $p\in P$, is strict. By [@kwa-exel Proposition 3.10 and Remark 3.9], we get $\hat{t}_p i_{A}(A) \subseteq i_{A}(A)\hat{t}_p i_{A}(A)$, for every $p\in P$. Using this, similarly as above, one gets that ${\mathcal T}(A,P,\varrho)=C^*(\bigcup_{p\in P} \hat{t}_p i_{A}(A)) $ and $\hat{t}_p \in \operatorname{\mathcal{LM}}({\mathcal T}(A,P,\varrho))$, for every $p\in P$.
Let $(\pi, S)$ be a representation of $(A,P,\varrho)$. In view of \eqref{cp map representation relation}, the Banach spaces $${\mathcal K}_{(\pi,S)}(p,q):=\overline{\pi(A)S_p\pi(A)S_q^*\pi(A)}, \qquad p,q\in P,$$ form a $C^*$-precategory. In general, it is not obvious that there exists a right-tensor $C^*$-precategory containing ${\mathcal K}_{(\pi,S)}$ as an ideal. Nevertheless, we can mimic the definition of Nica covariance to define a Nica-Toeplitz algebra as follows, where we also draw on inspiration from [@exel3].
\[redundancy definition\] Let $(\pi, S)$ be a representation of $(A,P,\varrho)$. We say that a pair $(a \cdot b,k)$ is a *redundancy for $(\pi, S)$* if $a\in {\mathcal K}_{(\pi,S)}(p,q)$, $b\in {\mathcal K}_{(\pi,S)}(s,t) $ and $k\in {\mathcal K}_{(\pi,S)}(pq^{-1}r,ts^{-1}r)$, for some $p,q,s,t,r\in P$ with $qP\cap sP=rP$, are such that $$\label{eq:redundancy condition}
ab \pi(c)S_{ts^{-1}r}=k \pi(c)S_{ts^{-1}r} \,\, \textrm{ for all }c \in A.$$ We say that $(\pi, S)$ is *Nica covariant* if
- for every redundancy $(a \cdot b,k)$ we have $a \cdot b=k$;
- ${\mathcal K}_{(\pi,S)}(p,q) {\mathcal K}_{(\pi,S)}(s,t)=\{0\}$ whenever $qP\cap sP=\emptyset$.
\[rem:uniqueness of redundancy\] Note that if $(a \cdot b,k)$ is a redundancy for $(\pi, S)$, then $a \cdot b$ determines $k$ uniquely, via \eqref{eq:redundancy condition}. Indeed, just note that the essential subspace for $k$ is $$\overline{{\mathcal K}_{(\pi,S)}(ts^{-1}r,ts^{-1}r)H}=\overline{\pi(A)S_{ts^{-1}r}\pi(A)H}=\overline{\pi(A)S_{ts^{-1}r}H}.$$
We define the *Nica-Toeplitz algebra of the $C^*$-dynamical system $(A,P,\varrho)$* to be the $C^*$-algebra ${\mathcal{NT}}(A,P,\varrho):=C^*(j_A,\hat{s})$ generated by a universal Nica covariant representation $(j_A,\hat{s})$ of $(A,P,\varrho)$. As the next result shows, there is an alternate way to justify its existence.
We have ${\mathcal{NT}}(A,P,\varrho)\cong {\mathcal T}(A,P,\varrho)/{\mathcal{N}}$ where ${\mathcal{N}}$ is the ideal of ${\mathcal T}(A,P,\varrho)$ generated by the differences $$a\cdot b -k \quad \text{ where } \quad (a \cdot b,k) \text{ is a redundancy for }(i_A,\hat{t})$$ and products $$a\cdot b \quad \text{ where } a\in {\mathcal K}_{(i_A,\hat{t})}(p,q),\,\, b\in {\mathcal K}_{(i_A,\hat{t})}(s,t) \text{ and } qP\cap sP=\emptyset.$$
It is straightforward and therefore left to the reader.
We aim to investigate uniqueness of representations of ${\mathcal{NT}}(A, P, \varrho)$. We will do this by specializing to two classes of actions where ${\mathcal{NT}}(A, P, \varrho)$ admits realizations as a Nica-Toeplitz algebra of a right-tensor $C^*$-precategory.
\[well-aligned dynamical system\] Even though in the greatest generality of an action by completely positive maps there is no obvious right-tensor $C^*$-precategory structure available, it is still possible to define a notion similar to well-alignedness for $C^*$-precategories and to show that it provides a structural description of ${\mathcal{NT}}(A, P, \varrho)$ similar to that of the Nica-Toeplitz algebra of a $C^*$-precategory, cf. [@kwa-larI Remark 3.9]. We include the details here for two reasons: first, because the classes of examples we consider exhibit this additional feature, and second, because we believe the observation may be of use in future investigations.
We say that a $C^*$-dynamical system $(A,P,\varrho)$ is *well-aligned* if for every representation $(\pi,S)$ of $(A,P,\varrho)$ and all pairs $a \in {\mathcal K}_{(\pi,S)}(p,q)$ and $b\in {\mathcal K}_{(\pi,S)}(s,t)$ with $qP\cap sP=
rP$ there is $k\in {\mathcal K}_{(\pi,S)}(pq^{-1}r,ts^{-1}r)$ such that $(a \cdot b,k)$ is a redundancy for ${(\pi,S)}$ (obviously it suffices to check this requirement only for the universal representation $(i_A,\hat{t})$).
We now claim that if a $C^*$-dynamical system $(A,P,\varrho)$ is well-aligned, then $${\mathcal{NT}}(A,P,\varrho)=\operatorname{\overline{span}}\{\bigcup_{p,q\in P} {\mathcal K}_{(j_A,\hat{s})}(p,q) \}.$$ Indeed, the Banach space $\operatorname{\overline{span}}\{\bigcup_{p,q\in P} {\mathcal K}_{(j_A,\hat{s})}(p,q) \}$ is closed under taking adjoints. Thus we only need to check that it is closed under multiplication. Let $a \in {\mathcal K}_{(j_A,\hat{s})}(p,q)$ and $b\in {\mathcal K}_{(j_A,\hat{s})}(s,t)$. If $qP\cap sP=\emptyset$, then $a\cdot b= 0$ by Nica covariance of $(j_A,\hat{s})$. Assume then that $qP\cap sP=rP$. By well-alignment there is $k\in {\mathcal K}_{(j_A,\hat{s})}(pq^{-1}r,ts^{-1}r)$ such that $(a \cdot b,k)$ is a redundancy for $(j_A,\hat{s})$. Hence $a\cdot b= k\in {\mathcal K}_{(j_A,\hat{s})}(pq^{-1}r,ts^{-1}r)$, again by Nica covariance.
Nica-Toeplitz crossed products by endomorphisms {#subsection:NT-cp-endo}
-----------------------------------------------
Throughout this subsection we let $P$ be a right LCM semigroup and denote by $\alpha:P \ni p \mapsto \alpha_p\in \operatorname{End}(A)$ a unital semigroup antihomomorphism, i.e. each $\alpha_p$ is an endomorphism of $A$, $\alpha_e=\operatorname{id}$ and $\alpha_q\circ \alpha_p=\alpha_{pq}$ for all $p,q\in P$. Since $*$-homomorphisms are completely positive maps, $(A,P,\alpha)$ is a $C^*$-dynamical system in the sense of Definition \[C\*-dynamical system\].
Earlier approaches to associating a Toeplitz-type crossed product to $(A,P,\alpha)$ involve a product system over $P$, see e.g. [@F99 Section 3]. Along the same lines, for each $p\in P$, let $E_p:=\alpha_p(A)A$ be the $C^*$-correspondence over $A$ where $$\quad \langle x, y\rangle_p :=x^*y, \qquad
a\cdot x\cdot b:=\alpha_p(a)xb, \quad x,y \in E_p,\,\, a,b \in A.$$ We define multiplication on $E_\alpha=\bigsqcup_{p\in P} E_p$ by $$\label{multiplication-Ealpha}
E_p\times E_q\ni(x,y)\longmapsto \alpha_q(x)y\in E_{pq}.$$ It is readily seen that the above map induces an isomorphism $E_p\otimes E_q\cong E_{pq}$ and hence $E_\alpha$ is a product system, cf. [@F99 Lemma 3.2]. The left action on each fiber is by generalized compacts, cf. [@kwa-exel Lemma 3.25]. Hence by Lemma \[non-degeneracy of K\_X\] we have a right-tensor $C^*$-precategory ${\mathcal K}_{E_\alpha}=\{{\mathcal K}(E_p,E_q)\}_{p,q\in P}$ which is a well-aligned ideal in the $C^*$-precategory associated to $E_\alpha$.
We next describe another right-tensor $C^*$-precategory constructed from $(A,P,\alpha)$, which will be useful in proving that our dynamical system is well-aligned. As in [@kwa-doplicher Example 3.4], ${\mathcal K}_\alpha:=\{\alpha_p(A)A\alpha_q(A)\}_{p,q\in P}$ is a $C^*$-precategory with multiplication, involution and norm inherited from $A$. There is a right tensoring on ${\mathcal K}_\alpha$ given by $${\alpha}_p(A)A{\alpha}_q(A)\ni a \longmapsto a \otimes 1_r := \alpha_r(a) \in {\alpha}_{pr}(A)A{\alpha}_{qr}(A).$$ Our first observation is that ${\mathcal K}_\alpha$ and ${\mathcal K}_{E_\alpha}$ are the same, up to isomorphism.
\[isomorphism of categories for endomorphisms\] The right-tensor $C^*$-precategories ${\mathcal K}_{E_\alpha}$ and ${\mathcal K}_{\alpha}$ are isomorphic with the isomorphism given by ${\mathcal K}(E_p,E_q) \ni\Theta_{x,y}\longmapsto xy^*\in {\mathcal K}_{\alpha}(p,q)$.
Let $x_i\in E_q$ and $y_i\in E_p$, for $i=1,\dots ,n$. Since $\sum_{i=1}^n x_iy_i^*\in \alpha_q(A)A\alpha_p(A)$ we have $$\|\sum_{i=1}^n x_iy_i^*\|=\sup_{y\in\alpha_p(A)A \atop\|y\|=1}\|\sum_{i=1}^n x_iy_i^*y\|=\sup_{y\in E_p\atop\|y\|=1}\|\sum_{i=1}^n x_i\langle y_i, y\rangle\|
=\|\sum_{i=1}^n \Theta_{x_i,y_i}\|.$$ Thus ${\mathcal K}(E_p,E_q) \ni\Theta_{x,y}\mapsto xy^*\in {\mathcal K}_{\alpha}(p,q)$ extends to an isometric isomorphism, and straightforward calculations show that these maps form an isomorphism of $C^*$-precategories ${\mathcal K}_{E_\alpha}$ and ${\mathcal K}_{\alpha}$. Further, the maps intertwine right tensoring because for $x\in E_q$, $y, z\in E_p$, $w\in E_r$ we have $\alpha_r(x)\in E_{qr}$ and $\alpha_r(y)\in E_{pr}$, thus $$(\Theta_{x,y}\otimes 1_r) (z \cdot w)=x \cdot \langle y,z\rangle_p\cdot w=\alpha_r(x)\alpha_r(y^*z)w=\Theta_{\alpha_r(x),\alpha_r(y)} z \cdot w.$$
\[properties of representations of endomorphisms\] Let $(\pi,S)$ be a representation of $(A,P,\alpha)$ on a Hilbert space $H$.
- (i) For every $p\in P$, $S_p$ is a partial isometry and $$\pi(a) S_p=S_p \pi(\alpha_p(a)),\qquad \textrm{ for all }a\in A.$$ In particular, the projection $S_pS_p^*$ belongs to the commutant of $\pi(A)$ and $${\mathcal K}_{(\pi,S)}(p,q)=S_p\pi\big(\alpha_p(A)A\alpha_q(A)\big)S_q^*,\qquad\text{ for all }p,q\in P;$$
- (ii) For any approximate unit $\{\mu_\lambda\}$ in $A$ and all $p\in P$ we have $
S_p^*S_p=\text{s-}\lim_{\lambda\in \Lambda} \pi(\alpha_p(\mu_\lambda))$. In particular, $\pi(a)S_p^*S_p=\pi(a)$ for all $a\in A\alpha_p(A)$;
- (iii) The family of projections $\{S_p^*S_p\}_{p\in P^{op}}$ forms a decreasing net, that is $$q=tp \,\,\, \Longrightarrow \,\,\, S_q^*S_q \leq S_p^*S_p;$$
- (iv) Let $a\in {\mathcal K}_{(\pi,S)}(p,q)$, $b\in {\mathcal K}_{(\pi,S)}(s,t) $ and $k\in {\mathcal K}_{(\pi,S)}(pq^{-1}r,ts^{-1}r)$, where $p,q,s,t,r\in P$ with $qP\cap sP=rP$. The pair $(a\cdot b,k)$ is a redundancy if and only if $$k=S_{pq^{-1}r} \pi(\alpha_{q^{-1}r}(a_0)\alpha_{s^{-1}r}(b_0) )S_{ts^{-1}r}^*$$ where $a_0\in\alpha_p(A)A\alpha_q(A)$ and $b_0\in\alpha_s(A)A\alpha_t(A)$ are such that $a=S_p\pi(a_0)S_q^*$, $b=S_s\pi(b_0)S_t^*$.
Part (i) follows from [@kwa-exel Proposition 3.12]. Part (ii) follows from \eqref{cp map representation relation}, because $\pi$ is nondegenerate and therefore $\pi(\mu_\lambda)$ converges strongly to the identity. To see part (iii), assume that $q=tp$ and notice that by part (ii) we have $$\begin{aligned}
(S_q^*S_q)(S_p^*S_p)&=\text{s-}\lim_{\lambda\in \Lambda}\text{s-} \lim_{\lambda'\in \Lambda} \pi\big(\alpha_q(\mu_\lambda) \alpha_p(\mu_\lambda')\big)= \text{s-}\lim_{\lambda\in \Lambda}\text{s-} \lim_{\lambda'\in \Lambda}\pi\big(\alpha_p(\alpha_t(\mu_\lambda)\mu_\lambda')\big)
\\
&=\text{s-}\lim_{\lambda\in \Lambda}\pi\big(\alpha_p(\alpha_t(\mu_\lambda))\big)=\text{s-}\lim_{\lambda\in \Lambda}\pi\big(\alpha_q(\mu_\lambda)\big)=S_q^*S_q.\end{aligned}$$ Let now $a,b$ and $k$ be as in part (iv). By part (i) there are $a_0\in\alpha_p(A)A\alpha_q(A)$ and $b_0\in\alpha_s(A)A\alpha_t(A)$ such that $a=S_p\pi(a_0)S_q^*$, $b=S_s\pi(b_0)S_t^*$. By Remark \[rem:uniqueness of redundancy\], condition \eqref{eq:redundancy condition} determines $k$ uniquely. Thus it suffices to show that for $k:=S_{pq^{-1}r} \pi(\alpha_{q^{-1}r}(a_0)\alpha_{s^{-1}r}(b_0) )S_{ts^{-1}r}^*$, the pair $(a\cdot b,k)$ is a redundancy. This follows from the following computation: $$\begin{aligned}
a\cdot b \pi(c)S_{ts^{-1}r}&
=\big(S_p\pi(a_0)S_q^*\big) \big(S_s\pi(b_0)S_t^*\big) \pi(c)S_{t}S_{s^{-1}r}
\stackrel{\eqref{cp map representation relation}}{=}S_p\pi(a_0)S_q^* S_s\pi(b_0 \alpha_t(c))S_{s^{-1}r}
\\
&\stackrel{(i)}{=}S_p\pi(a_0)S_q^* S_r\pi\big(\alpha_{s^{-1}r}(b_0 \alpha_t(c))\big)
=S_p\pi(a_0)S_q^* S_q S_{q^{-1}r}\pi\big(\alpha_{s^{-1}r}(b_0 \alpha_t(c))\big)
\\
&\stackrel{(ii)}{=}S_p\pi(a_0) S_{q^{-1}r}\pi\big(\alpha_{s^{-1}r}(b_0 \alpha_t(c))\big)
\\
&\stackrel{(i)}{=}S_{pq^{-1}r}\pi\big(\alpha_{q^{-1}r}(a_0)\alpha_{s^{-1}r}(b_0 \alpha_t(c))\big)
\\
&=
S_{pq^{-1}r}\pi\big(\alpha_{q^{-1}r}(a_0)\alpha_{s^{-1}r}(b_0)\big) \pi(\alpha_{ts^{-1}r}(c))\stackrel{\eqref{cp map representation relation}}{=}k \pi(c)S_{ts^{-1}r}.\end{aligned}$$
\[representations of product systems for endomorphisms\] There are bijective correspondences between:
- (i) representations $(\pi,S)$ of the $C^*$-dynamical system $(A,P,\alpha)$;
- (ii) nondegenerate right-tensor representations $\Psi$ of ${\mathcal K}_\alpha$ on a Hilbert space;
- (iii) nondegenerate representations $\psi$ of the product system $E_\alpha$ on a Hilbert space.
Explicitly, these correspondences are determined by $$\label{representation of category from endomorphisms}
\Psi_{p,q}(a)=S_p\pi(a)S_q^*, \qquad\text{ for } a\in {\mathcal K}_\alpha(p,q), \,\, p,q\in P$$ and $$\label{representation of system from endomorphisms}
\psi_{p}(x)=S_p\pi(x), \qquad\text{ for } x\in E_{p}=\alpha_p(A)A \text{ and } S_p=\text{s-}\lim_{\lambda\in \Lambda} \psi_{p}(\alpha_p(\mu_\lambda)),$$ where $\{\mu_\lambda\}$ is an approximate unit in $A$ and $p,q\in P$. In particular, there are canonical isomorphisms $
{\mathcal T}(E_\alpha)\cong {\mathcal T}({\mathcal K}_\alpha)\cong {\mathcal T}(A,P,\alpha).
$
By [@kwa-exel Proposition 3.10], modulo [@kwa-exel Lemma 3.25], for each $p\in P$ the formula for $\psi_p$ in \eqref{representation of system from endomorphisms} yields a bijective correspondence between representations $(\psi_p,\psi_e)$ of the $C^*$-correspondence $E_p$ and representations $(\pi,S_p)$ of the single endomorphism $\alpha_p$. Thus to establish the bijective correspondence between representations in (i) and (iii) it suffices to check the equivalence of semigroup laws. Suppose that $\psi$ is given by \eqref{representation of system from endomorphisms} for a representation $(\pi,S)$ of $(A,P,\alpha)$. By Lemma \[properties of representations of endomorphisms\], $$\psi_{p}(x)\psi_q(y)=S_p\pi(x)S_q\pi(y)=S_pS_q\pi(\alpha_q(x))\pi(y)=S_{pq}\pi( \alpha_q(x)y)=\psi_{pq}(x\cdot y).$$ Hence, $\psi$ is a representation of $E_\alpha$. Conversely, if $\psi$ is a representation of $E_\alpha$ and $S$ is given by the strong limits in \eqref{representation of system from endomorphisms}, then $$\begin{aligned}
S_pS_q&=\text{s-}\lim_{\lambda\in \Lambda}\text{s-} \lim_{\lambda'\in \Lambda} \psi_{p}(\alpha_p(\mu_\lambda))\psi_{q}(\alpha_q(\mu_\lambda'))= \text{s-}\lim_{\lambda\in \Lambda}\text{s-} \lim_{\lambda'\in \Lambda}\psi_{pq}(\alpha_q(\alpha_p(\mu_\lambda)\mu_\lambda'))
\\
&=\text{s-}\lim_{\lambda\in \Lambda}\psi_{pq}(\alpha_q(\alpha_p(\mu_\lambda)))=S_{pq}.\end{aligned}$$ This proves the bijective correspondence between representations in (i) and (iii). By virtue of Lemma \[isomorphism of categories for endomorphisms\], the correspondence between representations in (ii) and (iii) is given by Corollary \[going forward cor\]. In particular, under this correspondence \eqref{representation of system from endomorphisms} translates to \eqref{representation of category from endomorphisms}.
\[prop:Nica covariance of various representations\] The system $(A,P,\alpha)$ is well-aligned and the bijective correspondences in Proposition \[representations of product systems for endomorphisms\] respect Nica covariance of representations. In particular, $${\mathcal{NT}}(E_\alpha)\cong {\mathcal{NT}}({\mathcal K}_\alpha)\cong {\mathcal{NT}}(A,P,\alpha).$$ Moreover, a representation $(\pi,S)$ of $(A,P,\alpha)$ is Nica covariant if and only if $S$ is Nica covariant as a representation of $P$, i.e.: $$\label{Nica covariance for semigroups}
(S_pS_p^*) (S_qS_q^*)=
\begin{cases}
S_r S_r^*, & \text{if } pP\cap qP=rP \text{ for some }r\in P,
\\
0, & \text{if } pP\cap qP=\emptyset.
\end{cases}$$
The first claim in the proposition follows by applying Proposition \[representations of product systems for endomorphisms\], Lemma \[properties of representations of endomorphisms\] (iv), and Proposition \[going forward prop\]. Chasing universal properties gives the claimed isomorphisms of Nica-Toeplitz algebras.
To prove the last claim of the proposition, let $(\pi,S)$, $\psi$ and $\Psi$ be in the correspondence described in Proposition \[representations of product systems for endomorphisms\]. Assume that $(\pi,S)$, and therefore also $\Psi$, is Nica covariant. Then for every $p,q\in P$ and $a\in \alpha_p(A)A\alpha_p(A)$, $b \in \alpha_q(A)A\alpha_q(A)$ we have $$S_p \pi(a)S_p^* S_q \pi(b)S_q^*=
\begin{cases}
S_r \pi(\alpha_{p^{-1}r}(a)\alpha_{q^{-1}r}(b))S_r^*, & \text{if } pP\cap qP=rP \text{ for some }r\in P,
\\
0, & \text{if } pP\cap qP=\emptyset.
\end{cases}$$ Inserting $a=\alpha_p(\mu_\lambda)$ and $b=\alpha_q(\mu_\lambda)$ in the above formula, where $\{\mu_\lambda\}$ is an approximate unit in $A$, and passing to the strong limit gives $$(S_pS_p^*) (S_qS_q^*)=
\begin{cases}
S_r (S_{p^{-1}r}^*S_{p^{-1}r}) (S_{q^{-1}r}^*S_{q^{-1}r}) S_r^*, & \text{if } pP\cap qP=rP \text{ for some }r\in P,
\\
0, & \text{if } pP\cap qP=\emptyset,
\end{cases}$$ by Lemma \[properties of representations of endomorphisms\] (ii). By Lemma \[properties of representations of endomorphisms\] we have $S_r (S_{p^{-1}r}^*S_{p^{-1}r})=S_r$ and $(S_{q^{-1}r}^*S_{q^{-1}r})S_r^*=S_r^*$. Hence we get \eqref{Nica covariance for semigroups}.
Conversely, suppose that \eqref{Nica covariance for semigroups} holds. Let $a\in \alpha_p(A)A\alpha_p(A)$ and $b \in \alpha_q(A)A\alpha_q(A)$ for some $p,q\in P$. If $pP\cap qP=\emptyset$, then $S_p \pi(a)S_p^* S_q \pi(b)S_q^*=0$ because $S_p$ and $S_q$ have orthogonal ranges. Assume that $pP\cap qP=rP$. By appealing to Lemma \[properties of representations of endomorphisms\] (ii) and (i), we get $$\begin{aligned}
S_p \pi(a)S_p^* S_q \pi(b)S_q^*&
\stackrel{\eqref{Nica covariance for semigroups}}{=}S_p \pi(a)S_p^* S_r S_r^*S_q \pi(b)S_q^*
\\
&=S_p \pi(a)(S_p^* S_p)S_{p^{-1}r} S_{q^{-1}r}^*(S_q^* S_q) \pi(b)S_q^*
\\
&{=}S_p \pi(a)S_{p^{-1}r} S_{q^{-1}r}^*\pi(b)S_q^*
\\
&{=}S_r \pi(\alpha_{p^{-1}r}(a)\alpha_{q^{-1}r}(b))S_r^*.\end{aligned}$$ This, in conjunction with Lemma \[isomorphism of categories for endomorphisms\], proves Nica covariance of $\psi$. Hence $(\pi,S)$ is Nica covariant, by the first part of the proposition.
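For illustration (this observation will not be needed in the sequel): if $P={\mathbb N}$, then $pP\cap qP=\max(p,q)P$ for all $p,q$, so condition \eqref{Nica covariance for semigroups} reduces to $$(S_pS_p^*)(S_qS_q^*)=S_{\max(p,q)}S_{\max(p,q)}^*,\qquad p,q\in{\mathbb N},$$ i.e. to the requirement that the range projections $S_pS_p^*$ form a decreasing sequence; compare this with the behaviour of the initial projections $S_p^*S_p$ in Lemma \[properties of representations of endomorphisms\] (iii).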
Let us notice that for $h\in P^*$ the endomorphism $\alpha_h$ is in fact an automorphism. Thus we have a group action $(P^*)^{op}\ni h \mapsto \alpha_h\in \operatorname{Aut}(A)$ of the opposite group to $P^*$. Recall, cf. [@KM Definition 2.15], that the group $\{\alpha_{h}\}_{h\in P^{*op}}$ of automorphisms of $A$ is *aperiodic* if for every $h\in P^*\setminus \{e\}$ and every non-zero hereditary subalgebra $D$ of $A$ we have $
\inf \{\|\alpha_h(a) a\| : a\in D^+,\,\, \|a\|=1\}=0.
$
\[aperiodicity for endomorphisms\] The group $\{\alpha_{h}\}_{h\in P^{*op}}$ is aperiodic if and only if the Fell bundle $\{E_{\alpha_{h}}\}_{h\in P^*}$ is aperiodic if and only if the group of automorphisms $\{\otimes 1_{h}\}_{h\in P^*}$ of ${\mathcal K}_\alpha$ is aperiodic.
It is known, see [@KM Theorem 2.9], that aperiodicity of $\{\alpha_{h}\}_{h\in P^*}$ is equivalent to the following condition: for every $h\in P^*\setminus \{e\}$, every $b\in A$ and every non-zero hereditary subalgebra $D$ of $A$ we have $
\inf \{\|\alpha_h(a)b a\| : a\in D^+,\,\, \|a\|=1\}=0.
$ The latter is exactly aperiodicity of $\{E_{\alpha_{h}}\}_{h\in P^*}$. Aperiodicity of $\{E_{\alpha_{h}}\}_{h\in P^*}$ is equivalent to aperiodicity of $\{\otimes 1_{h}\}_{h\in P^*}$ on ${\mathcal K}_\alpha$, by Proposition \[lem: aperiodicity for product systems\] and Lemma \[isomorphism of categories for endomorphisms\].
We are now ready to state the uniqueness theorem for Nica-Toeplitz crossed products associated to $(A,P,\alpha)$.
\[Uniqueness Theorem for crossed products by endomorphisms\] Let $(A,P,\alpha)$ be a $C^*$-dynamical system where each $\alpha_p$, $p\in P$, is an endomorphism, and $P$ is a right LCM semigroup. Suppose that either $P^*=\{e\}$ or that the group $\{\alpha_{h}\}_{h\in P^{*op}}$ of automorphisms of $A$ is aperiodic. Assume moreover that ${\mathcal K}_\alpha$ is amenable. Then for a Nica covariant representation $(\pi,S)$ of $(A,P,\alpha)$, i.e. a representation satisfying \eqref{Nica covariance for semigroups}, the canonical epimorphism $${\mathcal{NT}}(A,P,\alpha) \longrightarrow
\operatorname{\overline{span}}\{S_p\pi(a)S_q^*: a\in \alpha_p(A)A\alpha_q(A), p,q\in P\}$$ is an isomorphism if and only if for any finite family $q_1,\ldots,q_n\in P\setminus P^*$ the representation $A\ni a \mapsto \pi(a) \prod_{i=1}^{n}(1-S_{q_i}S_{q_i}^*)$ is faithful.
By Proposition \[prop:Nica covariance of various representations\], we may view ${\mathcal{NT}}(A,P,\alpha)$ as the Nica-Toeplitz algebra ${\mathcal{NT}}(E_\alpha)$ of the compactly-aligned product system $E_\alpha$, where the left action on each fiber is by compacts. Thus the assertion follows from Theorem \[Uniqueness Theorem for product systems I\] modulo Lemma \[aperiodicity for endomorphisms\] and the observation that for a representation $\Psi$ of ${\mathcal K}_\alpha$ associated to $(\pi,S)$ we have, due to Lemma \[properties of representations of endomorphisms\] (ii), $
Q_p^\Psi H=\Psi_{p,p}({\mathcal K}_\alpha(p,p))H=S_p\pi({\mathcal K}_\alpha(p,p))S_p^*H=S_pS_p^*H.
$
Nica-Toeplitz crossed products by transfer operators {#subsection:NT-cp-transfer}
----------------------------------------------------
Throughout this section we assume that $L:P \ni p \mapsto L_p\in {\textrm{Pos}}(A)$ is a unital semigroup antihomomorphism taking values in the semigroup of positive maps on a $C^*$-algebra $A$. We additionally assume that for each $p\in P$ the map $L_p:A\to A$ admits a ‘multiplicative section’, i.e. a $*$-homomorphism $\alpha_p:A\to M(A)$ such that $$\label{transfer operator equality}
L_p(a\alpha_p(b))=L_p(a)b, \qquad a,b \in A.$$ Thus $(A,\alpha_p,L_p)$ is a so-called Exel system and $L_p$ is a (generalized) *transfer operator* for the endomorphism $\alpha_p$ [@exel3], [@exel-royer]. We emphasize that the choice of endomorphisms $\{\alpha_p\}_{p\in P}$ in general is far from being unique, cf. [@kwa-exel]. In particular, we do not assume that the family $\{\alpha_p\}_{p\in P}$ forms a semigroup. Nevertheless, we show that we may associate to $(A,P,L)$ a product system, mimicking [@Larsen]. We also note that \eqref{transfer operator equality} implies that each $L_p$ is not only positive but in fact a completely positive map, cf. [@kwa-exel Lemma 4.1]. Thus $(A,P,L)$ is a $C^*$-dynamical system in the sense of Definition \[C\*-dynamical system\].
Let $p\in P$. The *$C^*$-correspondence $M_{p}$ associated to the transfer operator* $L_p$ is the completion of the space $A_{p}:=A$ endowed with a right semi-inner-product $A$-bimodule structure given by $$a\cdot x \cdot b:=ax\alpha_p(b)\,\, \textrm{ for }\,\,a,b\in A \,\,\textrm{ and } \,\,\langle x, y \rangle_p:=L_p(x^*y)\,\, \textrm{ for all }x,y\in A_{p}.$$ The image of $x \in A_{p}=A$ in $M_{p}$ will be denoted by $(p, x)$.
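To indicate how the transfer identity \eqref{transfer operator equality} enters this definition, note for instance that it yields the compatibility of the semi-inner product with the right $A$-action: $$\langle x, y\cdot b\rangle_p=L_p\big(x^*y\alpha_p(b)\big)=L_p(x^*y)\,b=\langle x,y\rangle_p\, b, \qquad x,y\in A_p,\ b\in A.$$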
\[remark\_on\_KSGNS\_vs\_transfers\] By [@kwa-exel Lemma 4.4], for each $p\in P$ the map $ (p,a\alpha_p(b)) \mapsto a\otimes_{L_p} b$, $a, b\in A$, determines an isomorphism of $C^*$-correspondences from $M_p$ onto the KSGNS-correspondence $X_{L_p}$ of the completely positive map $L_p$ ([@kwa-exel Lemma 4.4] is stated under the assumption that $\alpha_p(A)\subseteq A$, but a quick inspection of the proof shows that this assumption is not needed, cf. also Lemma \[lemma before Product system for transfer operators\] below).
\[lemma before Product system for transfer operators\] Let $p,q\in P$. For any $x\in A_{pq}$ and any approximate unit $\{\mu_\lambda\}$ in $A$, the elements $(pq,x\alpha_p(\mu_\lambda))$ converge to $(pq,x)$ in $M_{pq}$.
This follows from taking limits in the equalities $$\begin{aligned}
\|(pq,x\alpha_p(\mu_\lambda)-x)\|^2&=\|L_{pq}(\alpha_p(\mu_\lambda)x^*x\alpha_p(\mu_\lambda)- \alpha_p(\mu_\lambda)x^*x - x^*x\alpha_p(\mu_\lambda) +x^*x)\|
\\
&=\|L_{q}\Big( \mu_\lambda L_p(x^*x) \mu_\lambda- \mu_\lambda L_p(x^*x) - L_p(x^*x)\mu_\lambda +L_p(x^*x)\Big)\|.\end{aligned}$$
\[Product system for transfer operators\] The disjoint union of $C^*$-correspondences $M_L=\bigsqcup_{p\in P} M_p$ is a product system over the semigroup $P$, with multiplication determined by $$\label{product system multiplication}
(p,x)(q,y):=(pq,x\alpha_p(y)), \qquad x,y\in A,\,\, p,q\in P.$$
For all $x,y,x',y'\in A,\,\, p,q\in P$ we have $$\begin{aligned}
\langle (p,x)\otimes_A (q,y),(p,x')\otimes_A (q,y') \rangle&=\langle (q,y),\langle(p,x),(p,x')\rangle (q,y')\rangle
\\
&=L_{q}( y^*L_p(x^*x') y')= L_{pq}\big(\alpha_p(y^*)x^*x'\alpha_p(y')\big)
\\
&=\langle (pq,x\alpha_p(y)),(pq,x'\alpha_p(y')) \rangle
\\
&=\langle (p,x)(q,y) ,(p,x')(q,y') \rangle.\end{aligned}$$ Thus we see that \eqref{product system multiplication} extends uniquely to a multiplication $M_p\times M_q\to M_{pq}$ that factors through to an isometric $C^*$-correspondence map $M_p\otimes_A M_q\to M_{pq}$, which is also surjective by Lemma \[lemma before Product system for transfer operators\]. What is left to be shown is that the multiplication is associative. To this end, we note that for any $x,y,z\in A$ and $p,q,r\in P$ we have $$\big( (p,x) (q,y)\big)(r,z)=(pqr,x\alpha_p(y)\alpha_{pq}(z)),\qquad (p,x) \big( (q,y)(r,z)\big)=(pqr,x\alpha_p\big(y\alpha_{q}(z)\big)).$$ Thus it suffices to show that $\|x\alpha_p(y)\alpha_{pq}(z)-x\alpha_p\big(y\alpha_{q}(z)\big)\|_{M_{pqr}}^2=0$. This however follows from the transfer property of $L$ since for any $a\in A$ we have $$\begin{aligned}
L_{pqr}\left(ax\alpha_p\big(y\alpha_{q}(z)\big)\right)&=L_{qr}\big(L_{p}(ax)y\alpha_{q}(z)\big)=L_{r}\Big(L_q\big(L_{p}(ax\alpha_p(y))\big) z\Big)
\\
&=L_{r}\Big(L_{pq}\big(ax\alpha_p(y)\alpha_{pq}(z)\big)\Big)=L_{pqr}\big(ax\alpha_p(y) \alpha_{pq}(z)\big).\end{aligned}$$
\[prop:representations of Exel systems and product systems\] Let $M_L$ be the product system constructed above. We have a one-to-one correspondence between representations $(\pi,S)$ of $(A,P, L)$ and nondegenerate representations $\psi$ of $M_L$ on Hilbert spaces, given by $$\label{associated correspondence representation}
\psi_p(p,x)= \pi(x)S_p,\qquad x\in A,$$ $$\label{conjugate by limit}
S_p=\textrm{s-}\lim_{\lambda\in \Lambda} \psi_p(p,\mu_\lambda)$$ where $\{\mu_\lambda\}_{\lambda\in \Lambda}$ is an approximate unit in $A$. For the corresponding representations, we have $C^*(\pi,S)=C^*(\psi(M_L))$, and in particular $
{\mathcal T}(A,P, L)\cong {\mathcal T}(M_L).
$
In view of Remark \[remark\_on\_KSGNS\_vs\_transfers\], it follows from [@kwa-exel Proposition 3.10] that relations \eqref{associated correspondence representation} and \eqref{conjugate by limit} establish a bijective correspondence between representations $(\pi,S_p)$ of $(A,L_p)$ and nondegenerate representations $(\psi_e, \psi_p)$ of $M_p$. Thus we only need to check the semigroup laws. Assume first that $(\pi,S)$ is a representation of $(A,P,L)$ and let $\psi$ be given by \eqref{associated correspondence representation}. The isomorphism $M_p\cong X_{L_p}$ in Remark \[remark\_on\_KSGNS\_vs\_transfers\] implies that $\pi(a\alpha_p(b))S_p=\psi_p(p,a\alpha_p(b))=\pi(a)S_p \pi(b)$ for any $a,b\in A$. Using this, for $x,y\in A, p,q \in P$, we get $$\begin{aligned}
\psi(p,x)\psi(q,y)&= \pi(x)S_p \pi(y) S_q=\pi(x \alpha_p(y)) S_{pq}
=\psi(pq,x\alpha_p(y)).\end{aligned}$$ Hence $\psi:M_L\to B(H)$ is a semigroup homomorphism.\
Now assume that $\psi:M_L\to B(H)$ is a representation. By \eqref{conjugate by limit} and Lemma \[lemma before Product system for transfer operators\] we have $$\begin{aligned}
S_p S_q &= \textrm{s-}\lim_{\lambda\in \Lambda} \Big( \textrm{s-}\lim_{\lambda'\in \Lambda'} \psi(p,\mu_\lambda) \psi(q,\mu'_{\lambda'})\Big)=\textrm{s-}\lim_{\lambda\in \Lambda} \Big( \textrm{s-}\lim_{\lambda'\in \Lambda'} \psi(pq,\mu_\lambda \alpha_p(\mu'_{\lambda'}))\Big)
\\
&=\textrm{s-}\lim_{\lambda\in \Lambda} \psi(pq,\mu_\lambda)=\textrm{s-}\lim_{\lambda\in \Lambda} \pi(\mu_\lambda) S_{pq}=S_{pq}.\end{aligned}$$
If for each $p\in P$ we may choose $\alpha_p$ to take values in $A$, then each $L_p$ extends to a strictly continuous map $\overline{L}_p:\operatorname{\mathcal{M}}(A)\to \operatorname{\mathcal{M}}(A)$, see [@kwa-exel Proposition 4.2]. This implies that the limit in \eqref{conjugate by limit} may be taken in the strict topology of $\operatorname{\mathcal{M}}(C^*(\pi,S))$, and so the multiplier $S_p$ is determined by $S_p\pi(a)=\psi_p(p, \alpha_p(a))$, cf. [@kwa-exel Proposition 3.10] and Lemma \[lem: Toeplitz algebra for completely positives\].
\[Nica-Toeplitz algebras for transfer operators vs product systems\] Suppose that the product system $M_L$ is compactly aligned. The bijective correspondence in Proposition \[prop:representations of Exel systems and product systems\] restricts to a bijective correspondence between Nica-Toeplitz representations $(\pi,S)$ of $(A,P, L)$ and nondegenerate Nica-Toeplitz representations $\psi$ of $M_L$. In particular, $${\mathcal{NT}}(A,P, L)=\operatorname{\overline{span}}\{j_A(a)\hat{s}_p\hat{s}_q^*j_A(b): a\in \alpha_p(A)A, b\in \alpha_q(A)A, p,q\in P\}\cong {\mathcal{NT}}(M_L).$$
Let $(\pi, S)$ be a representation of $(A,P,L)$ and $\psi$ a representation of $M_L$ such that \eqref{associated correspondence representation} and \eqref{conjugate by limit} hold. By Proposition \[going forward prop\], $\psi$ is Nica covariant if and only if the associated representation $\Psi:=\{\Psi_{p,q}\}_{p,q\in P}$ of ${\mathcal K}_{M_L}$ is Nica covariant. Note that for every $p,q\in P$ the map $\Psi_{p,q}:{\mathcal K}(M_{q}, M_{p})\to \overline{\pi(A)S_p\pi(A)S_q^*\pi(A)}$ is surjective. Thus if $a\in {\mathcal K}_{(\pi,S)}(p,q)$, $b\in {\mathcal K}_{(\pi,S)}(s,t)$ for some $p,q,s,t\in P$, then there are $a'\in {\mathcal K}_{M_L}(p,q)$, $b'\in {\mathcal K}_{M_L}(s,t)$ such that $\Psi_{p,q}(a')=a$ and $\Psi_{s,t}(b')=b$. It suffices to show that if $qP\cap sP=rP$ for some $r\in P$, then $(a\cdot b, k)$, where $k=\Psi_{pq^{-1}r,ts^{-1}r}\Big((a' \otimes 1_{q^{-1}r}) (b'\otimes 1_{s^{-1}r})\Big)$, is a redundancy for $(\pi,S)$. For any $c\in A$ we have $$\begin{aligned}
b\pi(c)S_{ts^{-1}r}&=b\pi(c)S_{t}S_{s^{-1}r} = b\psi_{t}((t,c))S_{s^{-1}r}=\psi_{s}\big(b'(t,c)\big)S_{s^{-1}r}
\\
& \stackrel{\eqref{conjugate by limit}}{=}
\textrm{s-}\lim_{\lambda\in \Lambda} \psi_{s}\big(b'(t,c)\big) \psi_{s^{-1}r}\big( (s^{-1}r,\mu_\lambda)\big)
\\
&=\textrm{s-}\lim_{\lambda\in \Lambda} \psi_{r}\big(b'(t,c) (s^{-1}r,\mu_\lambda)\big)
\\
&=\textrm{s-}\lim_{\lambda\in \Lambda}\overline{\Psi}_{r,ts^{-1}r}\Big(b'\otimes 1_{s^{-1}r}\Big)\psi_{ts^{-1}r}\big((t,c) (s^{-1}r,\mu_\lambda)\big)
\\
&=\textrm{s-}\lim_{\lambda\in \Lambda}\overline{\Psi}_{r,ts^{-1}r}\Big(b'\otimes 1_{s^{-1}r}\Big)
\pi(c)S_{t}\psi_{s^{-1}r}\big( (s^{-1}r,\mu_\lambda)\big)
\\
&=\overline{\Psi}_{r,ts^{-1}r}\Big(b'\otimes 1_{s^{-1}r}\Big)\pi(c)S_{ts^{-1}r}.\end{aligned}$$ Hence for any $c\in A$ we have $b\pi(c)S_{ts^{-1}r}=\overline{\Psi}_{r,ts^{-1}r}\Big(b'\otimes 1_{s^{-1}r}\Big)\pi(c)S_{ts^{-1}r}\in \psi_{r}(M_r)$. Since for any $x\in M_r$ we have $a \psi_{r}(x)=\overline{\Psi}_{pq^{-1}r,r}\Big(a'\otimes 1_{q^{-1}r}\Big)\psi_{r}(x)$, we conclude that for any $c\in A$ we have $a\cdot b\,\pi(c)S_{ts^{-1}r}=\Psi_{pq^{-1}r,ts^{-1}r}\Big((a' \otimes 1_{q^{-1}r}) (b'\otimes 1_{s^{-1}r})\Big)\pi(c)S_{ts^{-1}r}$. Hence $(a\cdot b, \Psi_{pq^{-1}r,ts^{-1}r}\Big((a' \otimes 1_{q^{-1}r}) (b'\otimes 1_{s^{-1}r})\Big))$ is a redundancy for $(\pi,S)$.
Using the above result we can apply Theorem \[Uniqueness Theorem for product systems I\] to the product system $M_L$ to get a uniqueness theorem for the Nica-Toeplitz crossed product ${\mathcal{NT}}(A,P, L)$. Nevertheless, in this generality we cannot simplify the assertion of Theorem \[Uniqueness Theorem for product systems I\] in a meaningful way. Therefore we will specialize to the case of ‘transfer operators of finite type’.
Nica-Toeplitz crossed products by transfer operators of finite type {#subsection:NT-cp-transfer-finitetype}
-------------------------------------------------------------------
As in the previous subsection, we let $L:P \ni p \mapsto L_p\in {\textrm{Pos}}(A)$ be a unital semigroup antihomomorphism. We recall that if $\varrho:A\to A$ is a positive map, then the *multiplicative domain* of $\varrho$ is the $C^*$-subalgebra of $A$ given by $$MD(\varrho):=\{a\in A: \varrho(b)\varrho(a)=\varrho(ba) \text{ and }\varrho(a)\varrho(b)=\varrho(ab)\text{ for every } b\in A\}.$$
Throughout this subsection for every $p\in P$ we make the following standing assumptions:
- (A1) $L_p$ is faithful, i.e. $L_p(a^*a)=0$ implies $a=0$;
- (A2) $L_p$ maps its multiplicative domain onto $A$.
We note that in the presence of axiom (A1), axiom (A2) is equivalent to the following two conditions:
- (A2a) there is an endomorphism $\alpha_p:A\to A$ such that $L_p$ is a transfer operator for $\alpha_p$ as in [@exel3] and $\operatorname{\bold{E}}_p:=\alpha_p\circ L_p$ is a conditional expectation onto the range of $\alpha_p$;
- (A2b) $L_p(\mu_\lambda)$ converges strictly to $1\in \operatorname{\mathcal{M}}(A)$ for any approximate unit $\{\mu_\lambda\}$ in $A$.
Specifically, (A1) and (A2) imply that $L_{p}|_{MD(L_p)}$ is a $*$-isomorphism onto $A$ and its inverse: $$\label{endomorphism definition}
\alpha_p:=(L_{p}|_{MD(L_p)})^{-1}$$ defines a monomorphism $\alpha_p$ with properties as in (A2a), cf. [@kwa-exel Proposition 4.16]. Then property (A2b) follows from [@kwa-exel Proposition 4.13], and we also have $\|L_p\|=1$, cf. [@kwa-exel Lemma 2.1]. Conversely, properties (A2a), (A2b) imply (A2) by [@kwa-exel Proposition 4.16], and then (A1) implies that $\alpha_p$ in (A2a) has to be of the form \eqref{endomorphism definition}, see [@kwa-exel Proposition 4.18].
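For later use we record two identities which are immediate from \eqref{endomorphism definition} and which will be used below without further comment: $$\alpha_p(A)=MD(L_p)\qquad\text{and}\qquad L_p(\alpha_p(a))=a,\qquad a\in A,\ p\in P.$$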
The maps in \eqref{endomorphism definition} form an action of $P$ by endomorphisms of $A$.
We need to prove that $
\alpha_{pq}=\alpha_{p}\circ \alpha_q$ for all $p,q\in P$. We claim first that $L_p(MD(L_{pq}))\subseteq MD(L_q)$. Let $a\in A$ and $b\in MD(L_{pq})$. Then $$L_q(L_p(b)a)=L_q(L_p(b\alpha_p(a)))=L_{pq}(b\alpha_p(a))=L_{pq}(b)L_{pq}(\alpha_p(a))=L_q(L_p(b))L_q(a),$$ and similarly one gets $L_q(aL_p(b))=L_q(a)L_q(L_p(b))$.
Secondly, we show that $MD(L_{pq})\subseteq MD(L_p)$. Indeed, since $L_{p}$ is a contractive completely positive map, we have $$MD(L_p)=\{a\in A: L_p(a^*)L_p(a)=L_p(a^*a) \text{ and } L_p(a)L_p(a^*)=L_p(aa^*)\},$$ cf. [@kwa-exel Proposition 2.6]. Now, since $L_p(MD(L_{pq}))\subseteq MD(L_q)$ and $L_{pq}=L_q\circ L_p$, for any $a\in MD(L_{pq})$ we get $$L_q(L_p(a^*)L_p(a))= L_{pq}(a^*)L_{pq}(a)=L_{pq}(a^*a)=L_q(L_p(a^*a)).$$ Faithfulness of $L_q$ implies that $L_p(a^*)L_p(a)=L_p(a^*a)$. Replacing $a$ with $a^*$, we get $L_p(a)L_p(a^*)=L_p(aa^*)$. Hence $MD(L_{pq})\subseteq MD(L_p)$.
Using the above inclusions, we conclude that $L_p$ restricts to a monomorphism $L_p:MD (L_{pq}) \to MD(L_q)$. In fact, since $L_{pq}=L_q\circ L_p$ restricts to an isomorphism from $MD(L_{pq})$ onto $A$ and $L_q$ restricts to an isomorphism from $MD(L_q)$ onto $A$, we see that $L_p:MD (L_{pq}) \to MD(L_q)$ is an isomorphism. It is the restriction of the isomorphism $L_p:MD (L_{p}) \to A$. Hence $(L_{p}|_{MD(L_{p})})^{-1}|_{MD(L_q)}=(L_{p}|_{MD(L_{pq})})^{-1}$. Thus we obtain $$\begin{aligned}
\alpha_{pq}&=(L_{pq}|_{MD(L_{pq})})^{-1}=(L_{q}|_{MD(L_{q})}\circ L_{p}|_{MD(L_{pq})})^{-1}=(L_{p}|_{MD(L_{pq})})^{-1}\circ (L_{q}|_{MD(L_{q})})^{-1}
\\
&= (L_{p}|_{MD(L_{p})})^{-1}|_{MD(L_q)}\circ (L_{q}|_{MD(L_{q})})^{-1}=\alpha_p\circ \alpha_q.\end{aligned}$$
For each $p\in P$, $\operatorname{\bold{E}}_p=\alpha_p\circ L_p$ is a faithful conditional expectation onto the multiplicative domain $MD(L_p)=\alpha_p(A)$ of $L_p$. We will assume that each $\operatorname{\bold{E}}_p$ for $p\in P$ is of *index-finite type* as in [@Wat]. Namely, for every $p\in P$ we assume that
- (A3) there is a finite quasi-basis $\{u_{1}^p,...,u_{m_p}^{p}\}\subseteq A$ for $\operatorname{\bold{E}}_p$, i.e. we have $$\label{finite-index equality}
a =\sum_{i=1}^{m_p} u_i^p \operatorname{\bold{E}}_p((u_i^p)^*a),\quad \text{ for all }a\in A.$$
Associated to $(A,P,L)$ we have the product system $M_L$ of Proposition \[Product system for transfer operators\]. Axiom (A1) implies that the map $A\ni a \mapsto (p,a)\in M_p$ is injective, for each $p\in P$. Axiom (A3) implies that the left action of $A$ on each $M_p$, $p \in P$, is by compacts because for $a,x\in A$, $p\in P$, a simple calculation using \eqref{finite-index equality} gives $$\sum_{i=1}^{m_p}\Theta_{(p,u_i^p), (p,a^* u_i^p)}(p,x)= (p,ax).$$ Therefore the left action of $a$ on $M_p$ is given by the operator $\sum_{i=1}^{m_p}\Theta_{(p,u_i^p), (p,a^* u_i^p)}\in {\mathcal K}(M_p)$. By Lemma \[non-degeneracy of K\_X\], the ideal ${\mathcal K}_{M_L}$ in ${\mathcal L}_{M_L}$ is invariant under right tensoring. Hence ${\mathcal K}_{M_L}$ is a right-tensor $C^*$-precategory itself.
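For the reader's convenience, here is a sketch of the calculation behind the display in the preceding paragraph; it uses only the module operations defining $M_p$, the identity $\operatorname{\bold{E}}_p=\alpha_p\circ L_p$ and \eqref{finite-index equality}: $$\Theta_{(p,u_i^p),\,(p,a^*u_i^p)}(p,x)=(p,u_i^p)\cdot\big\langle (p,a^*u_i^p),(p,x)\big\rangle_p =\big(p,\,u_i^p\,\alpha_p\big(L_p((u_i^p)^*ax)\big)\big) =\big(p,\,u_i^p\,\operatorname{\bold{E}}_p((u_i^p)^*ax)\big),$$ so summing over $i$ and applying \eqref{finite-index equality} to the element $ax$ gives $(p,ax)$.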
Under the assumptions (A1)–(A3) it is possible to describe a new right-tensor $C^*$-precategory, isomorphic to ${\mathcal K}_{M_L}$, but admitting an explicit formula for the right tensoring. With this at hand, after invoking Propositions \[Nica-Toeplitz algebras for transfer operators vs product systems\] and \[going forward prop\], we give a more explicit characterization of Nica covariance of a representation $(\pi, S)$ of $(A, P, L)$.
For each $p\in P$ denote by ${\mathcal K}_p$ the *reduced $C^*$-basic construction* associated to the conditional expectation $\operatorname{\bold{E}}_p$ cf. [@Wat Subsection 2.1]. Thus ${\mathcal K}_p:={\mathcal K}(\mathcal{E}_p)$ where $\mathcal{E}_p$ is the right Hilbert $\alpha_p(A)$-module obtained by completion of $A$ with respect to the norm induced by the sesquilinear form $\langle x,y \rangle_{\alpha_p(A)}= \operatorname{\bold{E}}_p(x^*y)$, $x,y \in A$. We recall that there is an injective left action of $A$ on $\mathcal{E}_p$, induced by multiplication in $A$. Thus we identify $A$ as a subalgebra of ${\mathcal L}(\mathcal{E}_p)$. The operator $\operatorname{\bold{E}}_p:A\to A$ extends to an idempotent $e_p\in {\mathcal L}(\mathcal{E}_p)$, and then $${\mathcal K}_p=\operatorname{\overline{span}}\{ a e_p b: a,b \in A\}.$$ For each $p,q\in P$ we equip the algebraic tensor product $A\odot A$ with the ${\mathcal K}_q$-valued sesquilinear form determined by $$\langle a\odot b, c\odot d\rangle_{p,q}:=b^* \alpha_q (L_p(a^*c))e_q d, \qquad a,b,c,d\in A.$$ We let ${\mathcal K}_L(p,q)$ be the Hilbert ${\mathcal K}_q$-module arising as the completion of $A\odot A$ with the semi-norm associated to the above sesquilinear form. We denote by $a\otimes_{p,q} b$ the image of a simple tensor $a\odot b$ in the space ${\mathcal K}_L(p,q)$.
\[C\^\*-category associated to transfer operators\] The family of Banach spaces ${\mathcal K}_L:=\{{\mathcal K}_L(p,q)\}_{p,q\in P}$ defined above form a right-tensor $C^*$-precategory where $$\label{C*-category relations to be checked}
(a\otimes_{p,q} b)^*:=b\otimes_{q,p} a, \qquad (a\otimes_{p,q} b)\cdot (c\otimes_{q,r} d) :=a\alpha_p(L_q(bc))\otimes_{p,r} d,$$ $$\label{right tensoring to be checked}
(a\otimes_{p,q} b)\otimes 1_r:= \sum_{i=1}^{m_r} a\alpha_p(u_{i}^{r}) \otimes_{pr,qr} \alpha_q(u_{i}^{r})^* b,$$ for all $a,b,c,d \in A$, $p,q,r \in P$. Moreover, if $M_L$ is the product system associated to $L$, then the map $$\label{iso of compacts}
a\otimes_{p,q} b\longmapsto \Theta_{(p,a), \, (q,b^*)}, \qquad a,b \in A,$$ establishes an isomorphism of right-tensor $C^*$-precategories from ${\mathcal K}_L$ onto the right-tensor $C^*$-precategory ${\mathcal K}_{M_L}=\{{\mathcal K}(M_q,M_p)\}_{p,q\in P}$.
The strategy of the proof is to show that (\[iso of compacts\]) yields an isometric isomorphism ${\mathcal K}_L(p,q)\cong {\mathcal K}(M_q,M_p)$ under which the right-tensor $C^*$-precategory operations from ${\mathcal K}_{M_L}$ translate to the prescribed formulas for ${\mathcal K}_L$. To this end, note that for any $p, q\in P$ the maps $
{\mathcal H}:=\alpha_q\circ L_p \textrm{ and } {\mathcal V}:=\alpha_p\circ L_q
$ form an interaction in the sense of [@exel-inter Definition 3.1]. Indeed, we have $
{\mathcal H}\circ {\mathcal V}=\alpha_q \circ L_p\circ \alpha_p \circ L_q =\alpha_q \circ L_q =\operatorname{\bold{E}}_q,
$ and thus ${\mathcal H}\circ {\mathcal V}\circ {\mathcal H}= \operatorname{\bold{E}}_q\circ {\mathcal H}={\mathcal H}$. For any $a,b\in A$ we get $${\mathcal H}({\mathcal V}(a)b)=\alpha_q\Big(L_p\big(\alpha_p(L_q(a))b\big)\Big)=\alpha_q\big(L_q(a)\big) \cdot \alpha_q\big(L_p(b)\big)={\mathcal H}({\mathcal V}(a)){\mathcal H}(b).$$ The other relations follow by symmetric arguments. Now, by [@exel-inter Proposition 5.4], for $x=\sum_{i=1}^{n}a_i\otimes_{p,q} b_i$, $a_i,b_i\in A$, we have $$\begin{aligned}
\|x\|_{{\mathcal K}(p,q)}&=\|[{\mathcal H}(a_i^*a_j)]_{i,j}^{\frac{1}{2}} [{\mathcal H}({\mathcal V}(b_ib_j^*))]_{i,j}^{\frac{1}{2}}\|_{M_n(A)}
=\|[\alpha_q(L_p(a_i^*a_j)]_{i,j}^{\frac{1}{2}} [\alpha_q(L_q(b_ib_j^*))]_{i,j}^{\frac{1}{2}}\|_{M_n(A)}.\end{aligned}$$ Using the fact that $\alpha_q$ amplifies to an isometric $*$-homomorphism on $M_{n}(A)$ we get $$\begin{aligned}
\|x\|_{{\mathcal K}(p,q)}&=\|[L_p(a_i^*a_j)]_{i,j}^{\frac{1}{2}} [L_q(b_ib_j^*)]_{i,j}^{\frac{1}{2}}\|_{M_n(A)}
\\
&=\|[\langle (p, a_i), (p,a_j)\rangle_p]_{i,j}^{\frac{1}{2}} [\langle (q,b_i^*), (q,b_j^*)\rangle_q]_{i,j}^{\frac{1}{2}}\|_{M_n(A)}.\end{aligned}$$ Comparing this with the norm of the operator $\sum_{i=1}^{n}\Theta_{(p,a_i), (q,b_i^*)}$ described in [@KPW Lemma 2.1], we finally arrive at $
\|\sum_{i=1}^{n}a_i\otimes_{p,q} b_i\|_{{\mathcal K}_L(p,q)}=\|\sum_{i=1}^{n}\Theta_{(p,a_i), (q,b_i^*)}\|_{{\mathcal K}(M_q,M_p)}.
$ Thus (\[iso of compacts\]) defines a linear isometry.
The standard formulas $\Theta_{x,y}^*=\Theta_{y,x}$ and $\Theta_{x,y}\circ \Theta_{z,v}=\Theta_{x\langle y, z\rangle_q, v}$, for $x\in M_p$, $y,z\in M_q$, and $v\in M_r$, translate via (\[iso of compacts\]) to (\[C\*-category relations to be checked\]). Hence the relations (\[C\*-category relations to be checked\]) indeed define a $C^*$-precategory structure on ${\mathcal K}_L$, and ${\mathcal K}_L$ is isomorphic to ${\mathcal K}_{M_L}$ as a $C^*$-precategory. Thus it remains to show that the right tensoring in ${\mathcal K}_{M_L}$ translates to (\[right tensoring to be checked\]) on the level of ${\mathcal K}_L$.
Note that the product system $M_L$ is (left) essential. Let $a,b,x,y\in A$, $p,q,r\in P$, and $T=\Theta_{(p,a), \,(q, b^*)} $. Taking into account that $(p,x)\otimes_A (q,y)=(pq,x\alpha_p(y))$ we get $$\begin{aligned}
(T\otimes 1_r) (q,x)\otimes_A (r,y)&
=\big(p, a\alpha_p(L_q(bx))\big)\otimes_A (r,y)=\Big(pr,a\alpha_p\big(L_q(bx)y\big)\Big)\\
&= \Big(pr,a\alpha_p\Big(\sum_{i=1}^{m_r} u_i^r (\alpha_r\circ L_r)\big((u_i^r)^*L_q(bx\alpha_q(y))\big) \Big)\Big)\\
&=\Big(pr,\sum_{i=1}^{m_r} a\alpha_p(u_i^r) \alpha_{pr}\Big(L_{qr}\big(\alpha_q(u_i^r)^*bx\alpha_q(y)\big)\Big)\Big)\\
&=\Big(pr,\Big(\sum_{i=1}^{m_r} \Theta_{(pr,a\alpha_p(u_i^r)),(qr, b^*\alpha_q(u_i^r))}\Big) x\alpha_q(y) \Big)
\\
&=\Big(\sum_{i=1}^{m_r} \Theta_{(pr,a\alpha_p(u_i^r)), (qr,b^*\alpha_q(u_i^r))}\Big) (q,x)\otimes_A (r,y).\end{aligned}$$ Thus $\Theta_{(p,a), (q, b^*)}\otimes 1_r=\sum_{i=1}^{m_r} \Theta_{(pr,a\alpha_p(u_i^r)), (qr,b^*\alpha_q(u_i^r))}$, and therefore (\[right tensoring to be checked\]) defines the desired right tensoring on ${\mathcal K}_L$.
\[Not important remark\] In view of Proposition \[C\^\*-category associated to transfer operators\], since the image of $A\alpha_p(A)$ in $M_p$ is a dense subspace of $M_p$, cf. Lemma \[lemma before Product system for transfer operators\], we have that $
{\mathcal K}_L(p,q)=\operatorname{\overline{span}}\{a\otimes_{p,q} b: a\in A\alpha_p(A), b\in \alpha_q(A)A\} $.
\[relation for Nica covariance of transfers\] Let $pP\cap qP=rP$. For any $a,b,c,d\in A$ we have $$\label{eq:product in K_L}
\bigl((a\otimes_{p,p} b)\otimes 1_{p^{-1}r}\bigr) \cdot \bigl((c\otimes_{q,q} d)\otimes 1_{q^{-1}r}\bigr)= \sum_{i=1}^{m_{q^{-1}r}} a\operatorname{\bold{E}}_p\bigl(bc\alpha_q(u_i^{q^{-1}r})\bigr) \otimes_{r,r} \alpha_q(u_i^{q^{-1}r})^* d.$$
By (\[right tensoring to be checked\]) and (\[C\*-category relations to be checked\]), the left hand side of (\[eq:product in K\_L\]) is equal to $$\sum_{i=1}^{m_{p^{-1}r}} \bigl(a\alpha_p(u_{i}^{p^{-1}r}) \otimes_{r,r} \alpha_p(u_{i}^{p^{-1}r})^* b\bigr) \cdot \sum_{j=1}^{m_{q^{-1}r}} \bigl(c\alpha_q(u_{j}^{q^{-1}r}) \otimes_{r,r} \alpha_q(u_{j}^{q^{-1}r})^* d\bigr)$$ $$\,\,\,\,\,\,\,\,\,\,\, =\sum_{i=1,j=1}^{m_{p^{-1}r}, m_{q^{-1}r}} a\alpha_p(u_{i}^{p^{-1}r}) \operatorname{\bold{E}}_{r} \Big(\alpha_p(u_{i}^{p^{-1}r})^* b c\alpha_q(u_{j}^{q^{-1}r})\Big) \otimes_{r,r} \alpha_q(u_{j}^{q^{-1}r})^* d.$$ However, using that $\operatorname{\bold{E}}_r=\alpha_p\circ \operatorname{\bold{E}}_{p^{-1}r} \circ L_p$, for any $f\in A$, it follows that $$\begin{aligned}
\sum_{i=1}^{m_{p^{-1}r}} \alpha_p (u_{i}^{p^{-1}r}) \operatorname{\bold{E}}_{r} \Big(\alpha_p(u_{i}^{p^{-1}r})^* f\Big)&=\sum_{i=1}^{m_{p^{-1}r}} \alpha_p\Big(u_{i}^{p^{-1}r} \operatorname{\bold{E}}_{p^{-1}r} \big(u_{i}^{p^{-1}r*} L_p( f)\big)\Big)
\\
&= \alpha_p\big( L_p( f)\big)=\operatorname{\bold{E}}_p(f).\end{aligned}$$ Now inserting $f=b c\alpha_q(u_{j}^{q^{-1}r})$ in the computations above gives the assertion.
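Let us also note that the identity $\operatorname{\bold{E}}_r=\alpha_p\circ \operatorname{\bold{E}}_{p^{-1}r} \circ L_p$ invoked above is a direct consequence of the composition rules recalled at the beginning of this subsection: writing $r=p(p^{-1}r)$ and using $\alpha_{pq}=\alpha_p\circ\alpha_q$ and $L_{pq}=L_q\circ L_p$, we get $$\alpha_p\circ \operatorname{\bold{E}}_{p^{-1}r} \circ L_p=\alpha_p\circ \alpha_{p^{-1}r}\circ L_{p^{-1}r}\circ L_p=\alpha_{r}\circ L_{r}=\operatorname{\bold{E}}_r.$$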
\[Nica covariance for transfers characterised\] A representation $(\pi,S)$ of $(A,P, L)$ is Nica covariant if and only if for every $p,q\in P$, and $a\in \alpha_p(A)A\alpha_q(A)$ the following are satisfied: $$\label{orthogonality for Nica for transfers}
S_p^*\pi(a)S_q = 0 \quad\text{ if }\quad pP\cap qP=\emptyset$$ and $$\label{eq:Nica covariance as Wick ordering}
S_pS_p^*\pi(a)S_qS_q^*
=\sum_{i=1}^{m_{q^{-1}r}} \pi\big(\operatorname{\bold{E}}_p(a\alpha_q(u_i^{q^{-1}r}))\big)S_r S_r^*\pi(u_i^{q^{-1}r})^* \quad\text{ if } \quad pP\cap qP=rP.$$
By Propositions \[going forward cor\], \[Nica-Toeplitz algebras for transfer operators vs product systems\] and \[C\^\*-category associated to transfer operators\], there is a one-to-one correspondence between representations $(\pi,S)$ of $(A,P, L)$ and right-tensor representations $\Psi$ of ${\mathcal K}_L$ determined by $$\label{correspondence of representations for transfer operators}
\Psi(a\otimes_{p,q} b)=\pi(a)S_p S_q^*\pi(b), \qquad a,b\in A.$$ Moreover, $(\pi,S)$ is Nica covariant if and only if $\Psi$ is, cf. Proposition \[going forward prop\].
Let $(\pi,S)$ be a Nica covariant representation. Thus the associated $\Psi$ in (\[correspondence of representations for transfer operators\]) is Nica covariant, too. Let $p,q\in P$, and $a\in \alpha_p(A)A\alpha_q(A)$. If $pP\cap qP=\emptyset$, then for any $b,c,d\in A$ we have $$0=\Psi(b\otimes_{p,p} a) \Psi(c\otimes_{q,q} d)=\pi(b) S_pS_p^*\pi(a c)S_qS_q^*\pi(d).$$ Letting $b,c,d\in A$ run through an approximate unit in $A$ and taking (strong) limit, we get $S_pS_p^*\pi(a)S_qS_q^* = 0$, which is equivalent to $S_p^*\pi(a)S_q = 0$. Let now $pP\cap qP=rP$. Invoking Lemma \[relation for Nica covariance of transfers\], with the roles of $a$ and $b$ exchanged, we get $$\pi(b)S_pS_p^*\pi(ac)S_qS_q^* \pi(d)
=\Psi\big((b\otimes_{p,p} a) \,\cdot\, (c\otimes_{q,q} d)\big)
= \Psi\Big(((b\otimes_{p,p} a)\otimes 1_{p^{-1}r}) \cdot ((c\otimes_{q,q} d)\otimes 1_{q^{-1}r}) \Big)$$ $$=\pi(b)\sum_{i=1}^{m_{q^{-1}r}} \pi\big(\operatorname{\bold{E}}_p(ac\alpha_q(u_i^{q^{-1}r}))\big)S_r S_r^*\pi(u_i^{q^{-1}r})^*\pi(d).$$ Letting $b,c,d\in A$ run through an approximate unit in $A$ and taking (strong) limit gives (\[eq:Nica covariance as Wick ordering\]).
Conversely, relations (\[orthogonality for Nica for transfers\]) and (\[eq:Nica covariance as Wick ordering\]) imply Nica covariance, as a reversal of the above arguments shows.
For $h\in P^*$, the mapping $L_h$ is invertible and hence by (A1) and (A2) we have $MD(L_h)=A$, which means that $L_h$ is an automorphism of $A$: we have $L_h=\alpha_{h^{-1}}= \alpha_{h}^{-1}$.
\[Uniqueness Theorem for crossed products by transfers\] Let $(A,P,L)$ be a $C^*$-dynamical system satisfying (A1), (A2), (A3) above. Suppose that either $P^*=\{e\}$ or that the group $\{\alpha_{h}\}_{h\in P^*}$ of automorphisms of $A$ is aperiodic. Assume also that ${\mathcal K}_L$ is amenable. Let $(\pi,S)$ be a Nica covariant representation of $(A,P,L)$. For each $p\in P$, let $Q_p$ be the projection onto the space $\overline{\pi(A)S_pH}$. Then the canonical surjective $*$-homomorphism $${\mathcal{NT}}(A,P,L) \longrightarrow
\operatorname{\overline{span}}\{\pi(a)S_pS_q^*\pi(b): a,b\in A, p,q\in P\}$$ is an isomorphism if and only if for all finite families $q_1,\dots ,q_n\in P\setminus P^*$ the representation $A\ni a \mapsto \pi(a) \prod_{i=1}^{n}(1-Q_{q_i})$ is faithful.
Note that $\{\pi(a)S_pS_q^*\pi(b): a,b\in A, p,q\in P\}$ is closed under multiplication due to Proposition \[Nica covariance for transfers characterised\]. By Proposition \[Nica-Toeplitz algebras for transfer operators vs product systems\], ${\mathcal{NT}}(A,P,L)$ may be viewed as ${\mathcal{NT}}(M_L)$. For $h\in P^*$, the automorphism $L_h$ of $A$ induces an isomorphism of $C^*$-correspondences $E_{\alpha_{h}}=E_{L_{h^{-1}}}\cong M_{L_{h}}$. Thus by Lemma \[aperiodicity for endomorphisms\], the group $\{\alpha_{h}\}_{h\in P^*}=\{L_h\}_{h\in P^{*op}}$ of automorphisms of $A$ is aperiodic if and only if the Fell bundle $\{M_{L_{h}}\}_{h\in P^*}$ is aperiodic. Recalling that under assumption (A3), the left action is by generalized compacts in each fiber, the assertion follows from Theorem \[Uniqueness Theorem for product systems I\] modulo Proposition \[C\^\*-category associated to transfer operators\] and the fact that for the representation $\Psi$ of ${\mathcal K}_L$ associated to $(\pi,S)$ and every $p\in P$ we have $
\Psi_{p,p}({\mathcal K}_L(p,p))H=\pi(A)S_pS_p^*\pi(A)H=\overline{\pi(A)S_pH}.
$
$C^*$-algebras associated to right LCM semigroups {#section:semigroupCstar alg}
=================================================
Throughout this section we use the notation $S$ for a generic right LCM semigroup and reserve $P$ for semigroups in semidirect products as in [@bls2]. Associated to any right LCM semigroup $S$ there is a universal $C^*$-algebra $C^*(S)=\overline{\operatorname{span}}\{v_s^{\phantom{*}}v_t^*: s,t\in S\}$ generated by an isometric representation $v$ of $S$ such that $$(v_sv_s^*) (v_t v_t^*)
=\begin{cases}
v_rv_r^*
& \text{if $sS\cap tS=rS$} \\
0 &\text{if $sS\cap tS=\emptyset$}.
\end{cases}$$ See [@Li] for the abstract construction of $C^*(S)$ valid for arbitrary left cancellative semigroups, and [@bls] or [@No0] for the case of right LCM semigroups. When $S$ is a right LCM semigroup, [@bls2 Corollary 7.11] implies that $C^*(S)$ is isomorphic to the Nica-Toeplitz algebra ${\mathcal{NT}}(X)$ for the compactly aligned product system $X$ over $S$ with fibers $X_s\cong\mathbb{C}$ for all $s\in S$. It is therefore natural to ask if Theorem \[Uniqueness Theorem for product systems I\] can be applied. On one hand, since the left action is by generalized compact operators in every fiber $X_s$, for $s\in S$, we will have equivalence of the three assertions (i)-(iii) if ${\mathcal K}_X$ is amenable and $S^*=\{e\}$. This in particular recovers the case (1) in [@bls Theorem 4.3]. On the other hand, in case that $S^*\neq \{e\}$, the Fell bundle $\{X_h\}_{h\in S^*}$ can never be aperiodic because $X_h=\mathbb{C}$ for all $h\in S^*$. Therefore viewing $C^*(S)$ as a Nica-Toeplitz $C^*$-algebra associated to the product system $X$ with trivial (thus small) fibers we can not apply Theorem \[Uniqueness Theorem for product systems I\] when $S^*\neq \{e\}$. A possible solution to this obstacle is to consider $C^*(S)$ as a Nica-Toeplitz $C^*$-algebra associated to another product system with larger fibers, so that we can detect aperiodicity. For instance, given a controlled function into a group one could obtain a uniqueness result in terms of Fell bundles based on Proposition \[Abstract uniqueness\]. However, in general the fibers of the arising Fell bundle will be very large. We propose an intermediate approach in the case that $S$ is a semidirect product of an LCM semigroup and a group. We will show useful alternative realizations of $C^*(S)$ as Nica-Toeplitz algebras associated to product systems with larger fibers over a smaller semigroup, which lead to efficient uniqueness results.
Semidirect products of LCM semigroups
-------------------------------------
Even though left and right semidirect products are equivalent as abstract constructions, it turns out that right semidirect products have rather different properties from the left semidirect products of a group by a semigroup considered for instance in [@bls; @bls2]. To exemplify, we will use right semidirect products for actions of semigroups on groups to construct right LCM semigroups $S$ with non-trivial group of units $S^*$ and which are not necessarily right cancellative (see Proposition \[construction of LCM’s\] below). Moreover, for these examples the constructible right ideals depend only on the acting semigroup, unlike the case of left semidirect products.
We begin by fixing our conventions for the two constructions. For a semigroup $T$ we let $\operatorname{End}T$ denote the semigroup of all semigroup homomorphisms $T\to T$ that preserve the identity $e_T$ in $T$. The identity endomorphism in $\operatorname{End}T$ is $\operatorname{id}_T$. A *left action* $P\stackrel{\theta}{\curvearrowright} T$ of a semigroup $P$ on $T$ is a unital semigroup homomorphism $\theta:P \to \operatorname{End}T$. A *right action* $T\stackrel{\vartheta}{\curvearrowleft} P$ is a unital semigroup antihomomorphism $\vartheta:P \to \operatorname{End}T$, i.e. $\vartheta_p\vartheta_q=\vartheta_{qp}$ for all $p,q\in P$.
\[semidirect products definition\] Let $T, P$ be semigroups. The *(left) semidirect product* of $T$ by $P$ with respect to a left action $P
\stackrel{\theta}{\curvearrowright} T$, denoted $T{\rtimes_\theta}P$, is the semigroup $T\times P$ with composition given by $$(g,p)(h,q) = (g\theta_{p}(h),pq),\qquad \text{ for }g,h\in T \text{ and }p, q\in P.$$ The *(right) semidirect product* of $T$ by $P$ with respect to a right action $T\stackrel{\vartheta}{\curvearrowleft} P$, denoted $P{_\vartheta\ltimes}T$, is the semigroup $P\times T$ with composition given by $$(p,g)(q,h) = (pq, \vartheta_{q}(g)h), \qquad \text{ for }g,h\in T \text{ and }p, q\in P.$$
The opposite semigroup $P^{op}$ to a semigroup $P$ coincides with $P$ as a set but has multiplication defined by reversing the factors. Treating the corresponding endomorphisms as maps on the same set, we have $\operatorname{End}P=\operatorname{End}P^{op}$. Thus every right action $T\stackrel{\vartheta}{\curvearrowleft} P$ can be treated as the left action $P^{op} \stackrel{\vartheta}{\curvearrowright} T^{op}$, and there is an isomorphism of semigroups $$P{_\vartheta\ltimes}T\ni (p,g) \to (g,p) \in \left(T^{op}{\rtimes_\vartheta}P^{op}\right)^{op}.$$
The following proposition should be compared with [@bls Lemma 2.4] proved for left semidirect products. It shows that right semidirect products in the realm of right LCM semigroups are always left cancellative and have a simpler structure of principal right ideals.
\[construction of LCM’s\] Suppose that $G\stackrel{\vartheta}{\curvearrowleft} P$ is a right action of a right LCM semigroup $P$ with identity on a group $G$. Then $P{_\vartheta\ltimes}G$ is a right LCM semigroup such that $$J(P)\cong J(P{_\vartheta\ltimes}G) \quad \textrm{ and } \quad (P{_\vartheta\ltimes}G)^*=P^*{_\vartheta\ltimes}G.$$ Moreover, $P{_\vartheta\ltimes}G$ is cancellative if and only if $P$ is cancellative and every $\vartheta_p$ is injective, $p\in P$.
The element $(e_P,e_G)$ is the identity of $P{_\vartheta\ltimes}G$. If $(p,g)(q,h)=(p,g)(q',h')$ then $pq=pq'$ and $\vartheta_{q}(g)h=\vartheta_{q'}(g)h'$. By left cancellation in $P$ we get $q=q'$ and therefore also $h=h'$. Thus $P{_\vartheta\ltimes}G$ is left cancellative. Since the action of the group $G$ on itself is transitive, we have $(p,g)(P{_\vartheta\ltimes}G)=(pP)\times G$ for every $(p,g)\in P{_\vartheta\ltimes}G$. Hence $P{\ltimes_\vartheta}G$ is a right LCM semigroup with the semilattice of principal right ideals isomorphic to that of $P$, with isomorphism given by $
pP\mapsto (pP)\times G
$. Plainly, the relations $(p,g)(q,h)= (q,h)(p,g)=(e_P,e_G)$ hold if and only if $p\in P^*$, $q=p^{-1}$ and $h=\vartheta_{p^{-1}}(g^{-1})$. This immediately gives $(P{_\vartheta\ltimes}G)^*=P^*{_\vartheta\ltimes}G$.
The claim about $P{_\vartheta\ltimes}G$ being right cancellative follows by noting that $
(p,g)(q,h)=(p',g')(q,h)$ if and only if $pq=p'q$ and $\vartheta_{q}(g)=\vartheta_{q}(g')$.
Using the last part of Proposition \[construction of LCM’s\] it is easy to construct examples of not cancellative LCM semigroups from cancellative ones.
In general the (left) semidirect product of a group $G$ by a right LCM semigroup $P$ with respect to a left action $P
\stackrel{\theta}{\curvearrowright} G$ is not an LCM semigroup. As introduced in [@bls2 Definition 2.1], an *algebraic dynamical system* is a triple $(G, P,\theta)$ where $G$ is a group, $P$ is a right LCM semigroup, and $P
\stackrel{\theta}{\curvearrowright} G$ is a left action by injective endomorphisms of $G$ which respects the order, i.e. $\theta_p(G)\cap \theta_q(G)=\theta_r(G)$ whenever for $p,q\in P$ there is $r\in P$ such that $pP\cap qP=rP$. By [@bls Proposition 8.2 and Lemma 2.4], whenever $(G, P,\theta)$ is an algebraic dynamical system, the left semidirect product $P{\rtimes_\theta}G$ is a right LCM semigroup and $(P{\rtimes_\theta}G)^*=P^*{\rtimes_\theta}G$.
Semigroup $C^*$-algebras associated to right semidirect products $P{_\vartheta\ltimes}G$ {#subsection:semigroupCstar-right}
----------------------------------------------------------------------------------------
Here we assume that ${\vartheta}$ is a right action of a right LCM semigroup $P$ on a group $G$. We let $\delta_g$ for $g\in G$ be the generating unitaries in $C^*(G)$.
\[prop:right semidirect products\] Let ${\vartheta}$ be a right action of a right LCM semigroup $P$ on a group $G$. There is an antihomomorphism $\alpha$ of $P$ into $\operatorname{End}C^*(G)$ given by $\alpha_p(\delta_g)=\delta_{\vartheta_p(g)}$ for $g\in G$ and $p\in P$. Further, $(C^*(G), P,\alpha)$ is a $C^*$-dynamical system as in subsection \[subsection:NT-cp-endo\], and $C^*(P{_\vartheta\ltimes}G)\cong {\mathcal{NT}}(C^*(G), P,\alpha)$.
We only prove the last assertion as the rest is routine. Proposition \[prop:Nica covariance of various representations\] provides natural isomorphisms between three different $C^*$-algebras associated to $(C^*(G), P,\alpha)$. We aim to show that $C^*(P{_\vartheta\ltimes}G)$ is isomorphic to the Nica-Toeplitz algebra ${\mathcal{NT}}(E_\alpha)$ associated to the product system $E_\alpha$, with multiplication as in subsection \[subsection:NT-cp-endo\].
Let $i_{E_\alpha}$ be the universal Nica covariant representation of $E_\alpha$. For each $p\in P$, denote by $i_p$ the restriction of $i_{E_\alpha}$ to $E_p$. We claim that $w_{(p,g)}:=i_p(\delta_g)$ for $(p,g)\in P{_\vartheta\ltimes}G$ is a Li-family in ${\mathcal{NT}}(E_\alpha)$. Let $(p,g), (q,h)\in P{_\vartheta\ltimes}G$. Then $$w_{(p,g)}w_{(q,h)}=i_p(\delta_g)i_q(\delta_h)=i_{pq}(\alpha_q(\delta_g)\delta_h)=w_{(pq,\vartheta_q(g)h)},$$ and since $w_{(e,e)}=1$, we have a representation of $P{_\vartheta\ltimes}G$. Each $w_{(p,g)}$ is an isometry because $w_{(p,g)}^*w_{(p,g)}=i_p(\delta_g)^*i_p(\delta_g)=i_e(\langle \delta_g, \delta_g\rangle_p)=1$. Next we compute $w_{(p,g)}^*w_{(q,h)}$. Note that $(p,g)(P{_\vartheta\ltimes}G)\cap (q,h)(P{_\vartheta\ltimes}G)=\emptyset$ if and only if $pP\cap qP=\emptyset$, in which case $i_p(\delta_g)^*i_q(\delta_h)=0$ by Nica covariance of $i_{E_\alpha}$. Thus $w_{(p,g)}^*w_{(q,h)}=0$ when $(p,g)(P{_\vartheta\ltimes}G)\cap (q,h)(P{_\vartheta\ltimes}G)=\emptyset$. Now assume the intersection is non-empty, and write $pp'=qq'=r$ for some $p',q',r$ in $P$. Pick a right LCM for $(p,g)$ and $(q,h)$, which we may assume of the form $(r,j)$ for $j\in G$, and write $
(p,g)(p',k)=(q,h)(q',l)=(r,j)$ where $k,l$ in $G$ are determined by $j=\vartheta_{q'}(h)l=\vartheta_{p'}(g)k$. Then $
w_{(p,g)}^*w_{(q,h)}=w_{(p',k)}w_{(q',l)}^*$, and this readily implies the Li-relation $e_Ie_J=e_{I\cap J}$ for $I=(p,g)(P{_\vartheta\ltimes}G)$ and $J=(q,h)(P{_\vartheta\ltimes}G)$. The remaining relations are easy to see, hence there is a $*$-homomorphism $C^*(P{_\vartheta\ltimes}G)\to {\mathcal{NT}}(E_\alpha)$ which sends a generating isometry $v_{(p,g)}$ to $w_{(p,g)}$. Conversely, for $p\in P$ we let $$\label{eq:psi from v}
\psi_p(\delta_l)=v_{(p,l)}.$$ We claim that the maps $\psi_p:E_p\to C^*(P{_\vartheta\ltimes}G)$ give rise to a Nica covariant representation of $E_\alpha$. Indeed, this follows from routine calculations. For example, for $\delta_g\in C^*(G)$ and $x=\vartheta_p(k)l\in E_p$ we have $$\psi_p(\delta_g\cdot x)=\psi_p\big(\alpha_p(\delta_g)\delta_{\vartheta_p(k)l}\big)=\psi_p\big(\delta_{\vartheta_p(gk)l}\big),$$ which is $\psi_e(\delta_g)\psi_p(x)$. As a consequence, there is a $*$-homomorphism ${\mathcal{NT}}(E_\alpha)\to C^*(P{_\vartheta\ltimes}G)$ sending $i_{E_\alpha}(x)$ for $x=\alpha_p(\delta_k)\delta_l\in E_p$ to $v_{(e,k)}v_{(p,l)}$. This is an inverse to the homomorphism $C^*(P{_\vartheta\ltimes}G)\to {\mathcal{NT}}(E_\alpha)$ obtained in the first half of the proof, hence the result follows.
\[Uniqueness Theorem for $C^*(P{_\vartheta\ltimes}G)$\]\[uniqueness for right semidirect products\] Let $S=P{_\vartheta\ltimes}G$ where ${\vartheta}$ is a right action of a right LCM semigroup $P$ on a group $G$. Suppose that either $P^*=\{e\}$ or that the action of $\{\alpha_{h}\}_{h\in P^{*op}}$ on $C^*(G)$ is aperiodic. Assume that ${\mathcal K}_\alpha$ is amenable. For a Nica covariant representation $(\pi, W)$ of $(C^*(G), P, \alpha)$, cf. Proposition \[prop:Nica covariance of various representations\], we have a canonical surjective homomorphism $$\label{homomorphism to be isomorphism for right actions}
C^*(S)\longrightarrow \overline{\operatorname{span}}\{W_p\pi(a)W_q^*: a\in \alpha_p(C^*(G))C^*(G)\alpha_q(C^*(G)), p,q\in P\}$$ which is an isomorphism if and only if for every finite family $q_1,\dots, q_n$ in $P\setminus P^*$, the representation $a\mapsto \pi(a)\Pi_{i=1}^n(1-W_{q_i}^{\phantom{*}}W_{q_i}^*)$ of $C^*(G)$ is faithful.
Combine Proposition \[prop:right semidirect products\] and Theorem \[Uniqueness Theorem for crossed products by endomorphisms\].
The above result recovers the Laca-Raeburn uniqueness theorem [@LR] for Nica-Toeplitz algebras in the context of quasi-lattice ordered groups: just take $G=\{e\}$ and $P$ to be any weakly quasi-lattice ordered monoid. It also improves the case (1) in [@bls Theorem 4.3], as we do not require (right) cancellativity.
\[Right wreath product\] Let $\Gamma$ be a group and $P$ a right LCM semigroup. We form the right wreath product $$S:=P\wr \Gamma=P{_\vartheta\ltimes} \bigl(\prod_{p\in P}\Gamma \bigr)$$ with the action given by left shifts $
\vartheta_p\bigl((\gamma_r)_{r\in P}\bigr):=(\gamma_{rp})_{r\in P}$ for $p\in P$ and $(\gamma_r)_{r\in P}\in G=\prod_{p\in P}\Gamma$. Clearly, $\vartheta_p\circ \vartheta_q=\vartheta_{qp}$ for all $p,q\in P$, as required for a right action. For any $a\in C^*(\Gamma)$ and $q\in P$ we let $a \delta_q$ be the element of $C^*(G)=\prod_{p\in P} C^*(\Gamma)$ corresponding to the sequence with $a$ in the $q$-th coordinate and zeros elsewhere. The action $\alpha$ by endomorphisms of $C^*(G)$ is determined by $\alpha_p(a\delta_q)=a\sum_{rp=q}\delta_{r}$ (if $P$ is right cancellative this sum has at most one summand; if the sum is infinite we understand it as a series convergent in the weak topology). In particular, if $h\in P^*$, then $\alpha_h(a\delta_q)=a\delta_{qh^{-1}}$ and therefore $\|\alpha_h(a\delta_q)a\delta_q\|=0$ unless $qh^{-1}=q$ which is equivalent to $h=e$, by left cancellation. Since every non-zero hereditary subalgebra of $C^*(G)$ contains a non-zero element of the form $a\delta_q$, we conclude that the action $\{\alpha_h\}_{h\in P^{*op}}$ on $C^*(G)$ is always aperiodic. Assuming, for instance, that there is a controlled function from $P$ into an amenable group, we get ${\mathcal K}_\alpha$ amenable. Therefore, if $\{W_p\}_{p\in P}$ is a semigroup of isometries on a Hilbert space $H$ satisfying Nica relations and $\pi:C^*(G)\to B(H)$ is a nondegenerate representation such that $$W_p^*\pi(a\delta_q)W_p=\sum_{rp=q}\pi(a\delta_{r}), \qquad \text{ for all }a\in C^*(\Gamma), q\in P,$$ where the sum (if infinite) is convergent in the strong operator topology, then by Corollary \[uniqueness for right semidirect products\] the surjective homomorphism in (\[homomorphism to be isomorphism for right actions\]) is an isomorphism if and only if for every $q_1,\dots, q_n \in P\setminus P^*$ and $q\in P$, the representation $C^*(\Gamma)\ni a\mapsto \pi(a\delta_q )\Pi_{i=1}^n(1-W_{q_i}^{\phantom{*}}W_{q_i}^*)$ is faithful (then the corresponding representation of $C^*(G)$ is faithful as well).
Semigroup $C^*$-algebras associated to left semidirect products $G\rtimes_\theta P$ {#subsection:semigroupCstar-left}
-----------------------------------------------------------------------------------
Let $(G, P, \theta)$ be an algebraic dynamical system. The authors of [@bls2] associated to $(G, P, \theta)$ a $C^*$-algebra $\mathcal{A}[G,P, \theta]$ universal for a unitary representation of $G$ and a Nica covariant isometric representation of $P$ subject to relations that model $\theta$ and the condition of preservation of order. In fact, there is a canonical isomorphism $\mathcal{A}[G,P, \theta]\cong C^*(G\rtimes_\theta P)$, see [@bls2 Theorem 4.4]. It was also shown in [@bls2] that $\mathcal{A}[G,P, \theta]$ is naturally isomorphic to the Nica-Toeplitz algebra for a compactly-aligned product system $M$ over $P$ with fibers obtained as completions of $C^*(G)$. Specifically, denote by $\delta_g$ for $g\in G$ the generating unitaries in $C^*(G)$. We have two actions $\alpha:P\to \operatorname{End}(C^*(G))$ and $L:P^{op}\to {\textrm{Pos}}(C^*(G))$ given by $\alpha_p(\delta_g)=\delta_{\theta_p(g)}$ and $$\label{def:Lp}
L_p(\delta_g)=\chi_{\theta_p(G)}(g)\delta_{\theta_p^{-1}(g)},\quad \text{ for }p\in P \text{ and }g\in G.$$ For every $p\in P$, $L_p$ is a transfer operator for $\alpha_p$. The product system constructed in [@bls2 Section 7] coincides with the product system $M_L$ we defined in subsection \[subsection:NT-cp-transfer\] (for general semigroup actions by transfer operators). By [@bls2 Proposition 7.8], $M_L$ is compactly-aligned. Summarizing we get:
\[prop:left semidirect products\] Let $(G, P,\theta)$ be an algebraic dynamical system and consider the associated right LCM semigroup $G\rtimes_\theta P$. Let $L$ be the action of $P^{op}$ by transfer operators on $C^*(G)$ described in (\[def:Lp\]). Then $(C^*(G),P, L)$ is a $C^*$-dynamical system as in subsection \[subsection:NT-cp-transfer\] and there are natural isomorphisms $$\mathcal{A}[G,P,\theta]\cong C^*(G\rtimes_\theta P)\cong {\mathcal{NT}}(C^*(G),P, L).$$
Since $\mathcal{A}[G,P,\theta]\cong {\mathcal{NT}}(M)$ by [@bls2 Theorem 7.9], the assertions follow by an application of Proposition \[Nica-Toeplitz algebras for transfer operators vs product systems\].
\[cor:Uniqueness Theorem for ...\] Let $S=G\rtimes_\theta P $ where $(G, P,\theta)$ is an algebraic dynamical system. Suppose that either $P^*=\{e\}$ or that the action of $\{\alpha_h\}_{h\in P^*}$ on $C^*(G)$ is aperiodic. Assume also that ${\mathcal K}_{M_L}$ is amenable. Let $(\pi, W)$ be a Nica covariant representation of $(C^*(G), P, L)$, and let $Q_p$ be the projection onto the space $\overline{\pi(A)W_pH}$, $p\in P$. We have a surjective homomorphism $$\label{homomorphism to be isomorphism for left actions}
C^*(S)\longrightarrow \operatorname{\overline{span}}\{\pi(a)W_pW_q^*\pi(b): a\in \alpha_p(A)A, b\in \alpha_q(A)A, p,q\in P\},$$ which is an isomorphism if for every finite family $q_1,\dots, q_n$ in $P\setminus P^*$, the representation $a\mapsto \pi(a)\Pi_{i=1}^n(1-Q_{q_i})$ of $C^*(G)$ is faithful. If in addition $G/\theta_p(G)$ is finite for every $p\in P$, then the latter condition is also necessary for (\[homomorphism to be isomorphism for left actions\]) to be an isomorphism.
$C^*(S)$ is isomorphic to the Nica-Toeplitz crossed product ${\mathcal{NT}}(C^*(G),P, L)$ due to Proposition \[prop:left semidirect products\]. Thus by Proposition \[Nica-Toeplitz algebras for transfer operators vs product systems\] combined with Lemma \[aperiodicity for endomorphisms\] we may apply Theorem \[Uniqueness Theorem for product systems I\] to get the sufficiency claim for the isomorphism in (\[homomorphism to be isomorphism for left actions\]). For the last part, note that the left action of $C^*(G)$ on each fiber of $M_L$ is by compacts if and only if $G/\theta_p(G)$ is finite for every $p\in P$, see [@bls2 Proposition 7.3].
Let $P$ and $\Gamma$ be as in Example \[Right wreath product\]. Form the standard (left) wreath product $$S:=\Gamma\wr P=\bigl(\prod_{p\in P} \Gamma \bigr)\rtimes_\theta P,$$ where $\theta$ acts by right shifts on $G:=\prod_{p\in P} \Gamma$, i.e. $\bigl(\theta_p\bigl((\gamma_q)_{q\in P}\bigr)\bigr)_{r}=\chi_{pP}(r)\gamma_{p^{-1}r}$ for all $r\in P$. Then $(G, P, \theta)$ is an algebraic dynamical system, cf. [@bls2 Proposition 8.8]. As in Example \[Right wreath product\], for any $a\in C^*(\Gamma)$ and $q\in P$ we denote by $a \delta_q$ the corresponding element of $C^*(G)=\prod_{p\in P} C^*(\Gamma)$. For $h\in P^*$ we have $\alpha_h(a\delta_q)=a\delta_{hq}$ and therefore $\|\alpha_h(a\delta_q)a\delta_q\|=0$ unless $hq=q$. Using this, cf. also Example \[Right wreath product\], we get that $$\text{ the action $\{\alpha_h\}_{h\in P^*}$ on $C^*(G)$ is aperiodic } \,\,\Longleftrightarrow \,\,\,\, \left(\forall_{h\in P^*}\,\, \forall_{q\in P\setminus P^*} \,\,\, hq=q\,\, \Longrightarrow\,\, h=e \right).$$ In particular, $\{\alpha_h\}_{h\in P^*}$ is aperiodic when $P^*=\{e\}$ or when $P$ is right cancellative. If it is aperiodic we can get a uniqueness criterion for $C^*(S)$ using Corollary \[cor:Uniqueness Theorem for ...\]. If in addition $G/\theta_p(G)$ is finite for every $p\in P$, then the action $L:P^{op}\to {\textrm{Pos}}(C^*(G))$ given by (\[def:Lp\]) satisfies assumptions (A1), (A2), (A3) in subsection \[subsection:NT-cp-transfer-finitetype\]. Therefore in this case also Theorem \[Uniqueness Theorem for crossed products by transfers\] applies.
[00]{}
N. Brownlowe, N. S. Larsen and N. Stammeier, *On $C^*$-algebras associated to right LCM semigroups*, Trans. Amer. Math. Soc. [**369**]{} (2017), no. 1, 31–68.
N. Brownlowe, N. S. Larsen and N. Stammeier, *$C^*$-algebras of algebraic dynamical systems and right LCM semigroups*, to appear in Indiana Univ. Math. J., preprint arXiv:1503.01599v1.
N. Brownlowe, J. Ramagge, D. Robertson and M. F. Whittaker, [*Zappa-Szép products of semigroups and their $C^*$-algebras*]{}, J. Funct. Anal. [**266**]{} (2014), 3937–3967.
J. Cuntz, *Simple $C^*$-algebras generated by isometries*, Comm. Math. Phys. [**57**]{} (1977), 173–185.
S. Doplicher and J. E. Roberts, *A new duality theory for compact groups*, Invent. Math. **98** (1989), 157–218.
R. Exel, *A new look at the crossed-product of a $C^*$-algebra by an endomorphism*, Ergodic Theory Dynam. Systems, **23** (2003), 1733–1750.
R. Exel, *Interactions*, J. Funct. Anal. **244** (2007), 26–62.
R. Exel, Partial dynamical systems, Fell bundles and applications, Mathematical Surveys and Monographs, 224. American Mathematical Society, Providence, RI, 2017.
R. Exel and D. Royer, *The crossed product by a partial endomorphism*, Bull. Braz. Math. Soc. (N.S.) 38 (2007), no. 2, 219–261.
J. Fletcher, *A uniqueness theorem for the Nica-Toeplitz algebra of a compactly aligned product system*, preprint arXiv:1705.00775.
N. J. Fowler, *Discrete product systems of Hilbert bimodules*, Pacific J. Math. [**204**]{} (2002), 335–375.
N. J. Fowler and I. Raeburn, *Discrete product systems and twisted crossed products by semigroups*, J. Funct. Anal., [**155**]{} (1998), 171–204.
N. J. Fowler and I. Raeburn, *The Toeplitz algebra of a Hilbert bimodule*, Indiana Univ. Math. J. [**48**]{} (1999), 155–181.
P. Ghez, R. Lima and J. E. Roberts, *$W^*$-categories*, Pacific. J. Math. **120** (1985), 79–109.
T. Kajiwara, C. Pinzari and Y. Watatani, *Hilbert $C^*$-bimodules and countably generated Cuntz-Krieger algebras*, J. Operator Theory [**45**]{} (2001), 3–18.
B. K. Kwaśniewski, *$C^*$-algebras generalizing both relative Cuntz-Pimsner and Doplicher-Roberts algebras*, Trans. Amer. Math. Soc. [**365**]{} (2013), 1809–1873.
B. K. Kwaśniewski, *Exel’s crossed products and crossed products by completely positive maps*, Houston J. Math. **43** (2017), 509–567.
B. K. Kwaśniewski and N. S. Larsen, *Nica-Toeplitz algebras associated with right tensor $C^*$-precategories over right LCM semigroups*, submitted, arXiv:1611.08525.
B. K. Kwaśniewski and W. Szymański, *Topological aperiodicity for product systems over semigroups of Ore type*, J. Funct. Anal. 270 (2016), no. 9, 3453-3504.
B. K. Kwaśniewski and W. Szymański, *Pure infiniteness and ideal structure of $C^*$-algebras associated to Fell bundles*, J. Math. Anal. Appl. **445** (2017), no. 1, 898-943.
B. K. Kwaśniewski and R. Meyer, *Aperiodicity, topological freeness and pure outerness: from group actions to Fell bundles* Studia Math. **241** (2018), 257–302.
M. Laca and I. Raeburn, *Semigroup crossed products and the Toeplitz algebras of nonabelian groups*, J. Funct. Anal., **139** (1996), 415–440.
E. C. Lance, Hilbert $C^*$-Modules: A Toolkit for Operator Algebraists, Cambridge University Press, Cambridge, 1995.
N. S. Larsen, *Crossed products by abelian semigroups via transfer operators*, Ergodic Theory Dynam. Systems [**30**]{} (2010), 1147–1164.
M. V. Lawson, *Non-commutative Stone duality: inverse semigroups, topological groupoids and C$^*$-algebras*, Internat. J. Algebra Comput. **22** (2012), no. 6, 1250058, 47 pp.
X. Li, *Semigroup $C^*$-algebras and amenability of semigroups*, J. Funct. Anal. [**262**]{} (2012), 4302–4340.
X. Li, *Nuclearity of semigroup $C^*$-algebras and the connection to amenability*, Adv. Math. **244** (2013), 626–662.
A. Nica, *$C^*$-algebras generated by isometries and Wiener-Hopf operators*, J. Operator Theory, **27** (1992), 17–52.
M. D. Norling, [*Inverse semigroup $C^*$-algebras associated with left cancellative semigroups*]{}, Proc. Edinb. Math. Soc. **57** (2014), no. 2, 533–564.
G. K. Pedersen, $C^*$-algebras and their automorphism groups, London Mathematical Society Monographs, vol. 14, Academic Press, London, 1979.
M. V. Pimsner, *A class of [$C\sp*$]{}-algebras generalizing both [C]{}untz-[K]{}rieger algebras and crossed products by [${\bf Z}$]{}*, in Free probability theory (Waterloo, ON, 1995), Amer. Math. Soc., Providence, RI, 1997, 189–212.
I. Raeburn and D. P. Williams, Morita equivalence and continuous-trace $C^*$-algebras, Math. Surveys and Monographs, vol. 60, Amer. Math. Soc., Providence, RI, 1998.
A. Sims and T. Yeend, *$C^*$-algebras associated to product systems of Hilbert bimodules*, J. Operator Theory [**64**]{} (2010), 349–376.
Y. Watatani, Index for C\*-subalgebras, Mem. Amer. Math. Soc. No. 424, 1990.
|
IEM-FT-152/96\
hep-ph/9703326\
**ELECTROWEAK PHASE TRANSITION AND**
**BARYOGENESIS IN THE MSSM [^1]**
MARIANO QUIROS
*Instituto de Estructura de la Materia (CSIC), Serrano 123*
*28006-Madrid, Spain*
E-mail: [email protected]
Introduction
============
The option of generating the cosmological baryon asymmetry [@baryogenesis] at the electroweak phase transition is not necessarily the one chosen by Nature, but it is certainly fascinating, and has recently deserved a lot of attention [@reviews]. At the quantitative level, the Standard Model (SM) meets the basic requirements for a successful implementation of this scenario due to the presence of anomalous processes [@anomaly]. However, the electroweak phase transition is too weakly first order to assure the preservation of the generated baryon asymmetry at the electroweak phase transition [@first], as perturbative [@improvement; @twoloop] and non-perturbative [@nonpert] analyses have shown. On the other hand, CP-violating processes are suppressed by powers of $m_f/M_W$, where $m_f$ are the light-quark masses. These suppression factors are sufficiently strong to severely restrict the possible baryon number generation [@fs; @huet]. Therefore, if the baryon asymmetry is generated at the electroweak phase transition, it will require the presence of new physics at the electroweak scale.
Low energy supersymmetry is a well motivated possibility, and it is hence highly interesting to test under which conditions there is room for electroweak baryogenesis in this scenario [@early; @mariano1; @mariano2]. It was recently shown [@CQW] that the phase transition can be sufficiently strongly first order in a restricted region of parameter space: The lightest stop must be lighter than the top quark, the ratio of vacuum expectation values $\tan\beta \simlt 3$, while the lightest Higgs must be within the reach of LEP2. Similar results were independently obtained by the authors of Ref. [@Delepine]. These results have been confirmed by explicit sphaleron calculations in the Minimal Supersymmetric Standard Model (MSSM) [@MOQ], while two-loop calculations have the general tendency to strengthen the phase transition [@CEQW; @JoseR] thus making the previous bounds very conservative ones. On the other hand, the MSSM contains, on top of the Cabibbo-Kobayashi-Maskawa matrix phase, additional sources of CP-violation and can account for the observed baryon asymmetry. New CP-violating phases can arise from the soft supersymmetry breaking parameters associated with the stop mixing angle.
In this talk I will review the computation of the baryon asymmetry and the strength of the first order phase transition in the MSSM. I will identify the region in the supersymmetric parameter space where baryon asymmetry is consistent with the observed value and, furthermore, it is not washed out inside the bubbles after the phase transition.
The phase transition in the MSSM
================================
A strongly first order electroweak phase transition can be achieved in the presence of a top squark lighter than the top quark [@CQW]. In order to naturally suppress its contribution to the parameter $\Delta\rho$, and hence preserve a good agreement with the precision measurements at LEP, it should be mainly right handed. This can be achieved if the left handed stop soft supersymmetry breaking mass $m_Q$ is much larger than $M_Z$. For moderate mixing, the lightest stop mass is then approximately given by $$m_{\widetilde{t}}^2 \simeq m_U^2 + D_R^2 + m_t^2\left( 1 - \frac{\widetilde{A}_t^2}{m_Q^2}\right),
\label{masastop}$$ where $\widetilde{A}_t = A_t - \mu/\tan\beta$ is the particular combination appearing in the off-diagonal terms of the left-right stop squared mass matrix and $m_U^2$ is the soft supersymmetry breaking squared mass parameter of the right handed stop.
In order to overcome the Standard Model constraints, the stop contribution must be large. The stop contribution strongly depends on the value of $m_U^2$, which must be small in magnitude, and negative, in order to induce a sufficiently strong first order phase transition. Indeed, large stop contributions are always associated with small values of the right handed stop plasma mass $$\left(m^{\rm eff}_{\;\widetilde{t}}\right)^2 = -\widetilde{m}_U^2 + \Pi_R(T)
\label{plasm}$$ where $\widetilde{m}_U^2 = - m_U^2$, $\Pi_R(T) \simeq 4 g_3^2 T^2/9+h_t^2/6\left[2-\widetilde{A}_t^2/m_Q^2\right]T^2$ [@CQW; @CE] is the finite temperature self-energy contribution to the right-handed squarks, and $h_t$ and $g_3$ are the top quark Yukawa and strong gauge couplings, respectively. We are considering heavy (decoupled from the thermal bath) gluinos. For light gluinos, their contribution to the squark self-energies, $2g_3^2 T^2/9$, should be added to $\Pi_R(T)$ [@mariano1]. Moreover, the trilinear mass term $\widetilde{A}_t$ must satisfy $\widetilde{A}_t^2 \ll m_Q^2$ in order to avoid the suppression of the stop contribution to $v(T_c)/T_c$. The dependence of the order parameter $v(T_c)/T_c$ on $\widetilde{m}_U$ is illustrated in Fig. 1a, where we plot it as a function of the light stop mass (\[masastop\]). We see from it a dramatic increase in $v(T_c)/T_c$ as $\widetilde{m}_U$ increases.
Although large values of $\widetilde{m}_U$, of order of the critical temperature, are useful to achieve a strongly first order phase transition, they may also induce charge and color breaking minima. Indeed, if the effective plasma mass at the critical temperature vanished, the universe would be driven to a charge and color breaking minimum at $T \geq T_c$ [@CQW]. A conservative bound on $\widetilde{m}_U$ may be obtained by demanding that the electroweak symmetry breaking minimum be lower than any color-breaking minima induced by the presence of $\widetilde{m}_U$ at zero temperature, which yields the condition $$\widetilde{m}_U\simlt \widetilde{m}_U^{\rm crit}
\equiv \left(\frac{m_h^2 v^2 g_3^2}{12}\right)^{1/4}.
\label{colorbound}$$ It can be shown that this condition is sufficient to prevent dangerous color breaking minima at zero and finite temperature for any value of the mixing parameter $\widetilde{A}_t$ [@CQW]. In this work, we shall use this conservative bound.
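To give a feeling for the size of this bound, the following minimal sketch evaluates (\[colorbound\]) numerically. Only $v=246$ GeV is fixed by the electroweak scale; the values chosen below for $m_h$ and for the strong coupling are illustrative assumptions, not values taken from the analysis presented in this talk.

```python
import math

# Illustrative evaluation of the colour-breaking bound (colorbound):
#   m_U^crit = (m_h^2 * v^2 * g3^2 / 12)**(1/4)
# v = 246 GeV is the electroweak scale; m_h and alpha_s below are
# assumed, illustrative inputs (not values taken from the text).

v = 246.0          # GeV, electroweak scale
m_h = 70.0         # GeV, illustrative light Higgs mass
alpha_s = 0.118    # strong coupling; the renormalization scale is an assumption
g3_sq = 4.0 * math.pi * alpha_s

m_U_crit = (m_h**2 * v**2 * g3_sq / 12.0) ** 0.25
print(f"m_U^crit ~ {m_U_crit:.0f} GeV")
```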
Fig. 1a corresponds to a large value of the mass of the pseudoscalar Higgs, for which the strength of the phase transition is maximized [@mariano2]. However, for the purpose of generating the baryon asymmetry, as we will see in the next section, smaller values of $m_A$ should be used. In Fig. 1b we present plots of $v(T_c)/T_c$ as a function of $m_A$ for different values of $\tan\beta$. Each line stops at the lower value of $m_A$ at which the experimental LEP bound on the Higgs mass is reached. The region to the left of the dashed line in Fig. 1b is excluded by LEP searches for the Higgs boson.
The requirement of not washing out, after the phase transition, the previously generated baryon asymmetry provides the condition [@BKS] $E_{\rm sph}(T_c)/T_c \simgt 45$, which translates, in the Standard Model, into the condition $$\frac{v(T_c)}{T_c} \simgt 1.
\label{condition}$$ In the MSSM the condition (\[condition\]) should hold provided $E_{\rm sph}^{\rm MSSM}(T_c) \sim E_{\rm sph}^{\rm SM}(T_c)$. In particular this will hold if the scaling law $$E_{\rm sph}^{\rm MSSM}(T_c)=\frac{v(T_c)}{v}\, E_{\rm sph}^{\rm MSSM}(0)
\label{scaling}$$ is approximately satisfied, and at zero temperature $E_{\rm sph}^{\rm MSSM}\sim E_{\rm sph}^{\rm SM}(m_{\rm eff})$, where $$m_{\rm eff}^2=\sin^2(\beta-\alpha)\, m_h^2+\cos^2(\beta-\alpha)\, m_H^2,$$ $m_{h,H}$ being the light/heavy CP-even mass eigenstates, and $\alpha$ the mixing angle in the Higgs sector, where all radiative correction effects corresponding to the chosen supersymmetric parameters have been incorporated.
In Fig. 2a we compare $E_{\rm sph}^{\rm MSSM}$ (solid line) with $E_{\rm sph}^{\rm SM}$ (dashed line) for a Higgs mass equal to $m_{\rm eff}$. In Fig. 2b we compare the value of $E_{\rm sph}^{\rm MSSM}(T)$ (solid line) with the corresponding scaling value given by Eq. (\[scaling\]). We can see that the differences are $\simlt$ 5 % [@MOQ] which makes the use of condition (\[condition\]) reasonable.
Baryogenesis in the MSSM
========================
Baryogenesis is fueled by CP-violating sources which are locally induced by the passage of the bubble wall [@thick; @thicknoi]. These sources should be inserted into a set of classical Boltzmann equations describing particle distribution densities and permitting to take into account Debye screening of induced gauge charges [@deb], particle number changing reactions [@cha] and to trace the crucial role played by diffusion [@tra]. Indeed, transport effects allow CP-violating charges to efficiently diffuse in front of the advancing bubble wall where anomalous electroweak baryon violating processes are unsuppressed.
Following [@newmethod1; @newmethod2], we are interested in the generation of charges which are approximately conserved in the symmetric phase, so that they can efficiently diffuse in front of the bubble where baryon number violation is fast, and non-orthogonal to baryon number, so that the generation of a non-zero baryon charge is energetically favoured. Charges with these characteristics are the axial stop ($\widetilde{t}$) charge and the Higgsino ($\widetilde{H}$) charge, which may be produced from the interactions of squarks and charginos and/or neutralinos with the bubble wall, provided a source of CP-violation is present in these sectors. CP-violating sources $\gamma_Q(z)$ (per unit volume and unit time) of a generic charge density $J^0$ associated with the current $J^\mu(z)$ and accumulated by the moving wall at a point $z^\mu$ of the plasma can then be constructed from $J^\mu(z)$ [@Toni] as $\gamma_Q(z)=\partial_0 J^0(z)$. The detailed calculation of $\gamma_{\widetilde{q}}$ and $\gamma_{\widetilde{H}}$ has been recently performed [@bau]. It was proven that $\gamma_{\widetilde{q}}\ll
\gamma_{\widetilde{H}}$, due essentially to the chosen region in the supersymmetric parameter space. Moreover, we have found that the Higgsino current is given by $$\label{current}
\langle J_{\widetilde{H}}^0(z)\rangle = \left| \mu\right| \sin\phi_{\mu}
\: \left[H^2(z) \Delta\beta/L_{\omega} \right]
\left[ 3 M_2 \; g_2^2 \; {\cal G}^{\widetilde{W}}_{\widetilde{H}}
+ M_1 \; g_1^2 \; {\cal G}^{\widetilde{B}}_{\widetilde{H}}
\right],$$ where ${\cal G}^{\widetilde{W}(\widetilde{B})}_{\widetilde{H}}$ are integrals over the momentum space of the corresponding Feynman diagrams, $\Delta\beta$ is the variation of the angle $\beta$ through the bubble wall and $L_{\omega}$ is the bubble wall thickness. The integrand of ${\cal G}^{\widetilde{W}(\widetilde{B})}_{\widetilde{H}}$ depends on the masses $\mu$, $M_2$ and $M_1$, as well as on the temperature and on the widths (damping rates) that are taken to be $\Gamma_{\widetilde{H}}\sim\Gamma_{\widetilde{W}}\sim\Gamma_{\widetilde{B}}\sim\alpha_W T$.
We can now solve the set of coupled differential equations describing the effects of diffusion, particle number changing reactions and CP-violating source terms. We will closely follow the approach taken in Ref. [@bau] where the reader is referred to for more details. The final baryon-to-entropy ratio is found to be given by, $$\frac{n_B}{s}=-g(k_i)\frac{{\cal A}\overline{D}\Gamma_{{\rm ws}}}
{v_{\omega}^2 s},
\label{baryon}$$ where $v_{\omega}$ is the wall velocity, $$\label{higgs2}
{\cal A}=
\frac{1}{\overline{D} \; \lambda_{+}} \int_0^{\infty} du\;
\widetilde \gamma(u)
e^{-\lambda_+ u},$$ $\overline{D}$ is the effective diffusion constant, $$\lambda_{+} = \frac{ v_{\omega} +
\sqrt{v_{\omega}^2 + 4 \widetilde{\Gamma}
\overline{D}}}{2 \overline{D}},$$ $\widetilde{\Gamma}$ is the effective decay constant, $\widetilde \gamma(z) = v_{\omega} \partial_{z}
J^0(z) f(k_i)$, and $f(k_i),g(k_i)$ are numerical coefficients depending upon the light degrees of freedom.
From Eq. (\[baryon\]) one can see that the whole effect is proportional to $\Gamma_{\rm ws}\sim 6\kappa \alpha_w^4\; T$, the weak sphaleron rate in the symmetric phase. We have taken $\kappa\sim 1$ [@AK] although its precise value is at present under debate [@ASY]. We can also see from Eq. (\[current\]) that the final baryon-to-entropy ratio depends on the parameter $\Delta\beta$. This parameter should go to zero as $m_A\rightarrow\infty$, which makes it necessary to consider not too large values of $m_A$. We present in Fig. 3a a plot of $\Delta\beta$ as a function of $m_A$ which confirms our expectations. In Fig. 3b we plot $\sin\phi_{\mu}$ versus $m_A$ by fixing the value of $n_B/s$ to its lower bound $4\times 10^{-11}$ for the case $M_2=M_1=100$ GeV. The values of the effective diffusion and decay constants are $\overline{D}\sim 0.8\ {\rm
GeV}^{-1}$, $\widetilde{\Gamma}\sim 1.7$ GeV. We see, as anticipated, that for large values of $m_A$, $\Delta\beta$ becomes very small and, correspondingly, $\sin\phi_{\mu}$ approaches 1.
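As a simple illustration of the transport formulas above, the sketch below evaluates $\lambda_{+}$ using the quoted values of $\overline{D}$ and $\widetilde{\Gamma}$; the wall velocity $v_{\omega}$ is an assumed, illustrative input (natural units are used throughout).

```python
import math

# Illustrative evaluation of lambda_+ = (v_w + sqrt(v_w^2 + 4*Gamma*D)) / (2*D),
# using the effective diffusion and decay constants quoted in the text.
# The wall velocity v_w is an assumed value, not taken from the text.

D_bar = 0.8      # GeV^-1, effective diffusion constant
Gamma_eff = 1.7  # GeV, effective decay constant
v_w = 0.1        # wall velocity (assumption, in units of c)

lam_plus = (v_w + math.sqrt(v_w**2 + 4.0 * Gamma_eff * D_bar)) / (2.0 * D_bar)
print(f"lambda_+ ~ {lam_plus:.2f} GeV")
```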
We conclude, from Fig. 3b, that the phase $\phi_{\mu}$ is never much smaller than 0.05. These relatively large values of the phases are only consistent with the constraints from the electric dipole moment of the neutron if the squarks of the first and second generation have masses of the order of a few TeV [@CKNlast]. Moreover, the baryon asymmetry is not washed out inside the bubbles provided that the light stop is lighter than the top quark, the pseudoscalar Higgs boson heavier than $\sim$ 130 GeV and the lightest Higgs boson lighter than $\sim$ 80 GeV.
Acknowledgements
================
Work supported in part by the European Union (contract CHRX-CT92-0004) and CICYT of Spain (contract AEN95-0195). I wish to thank my collaborators A. Brignole, M. Carena, J.R. Espinosa, A. Riotto, I. Vilja, C. Wagner and F. Zwirner.
References
==========
[9]{} A.D. Sakharov, [*JETP Lett.*]{} [**91B**]{} (1967) 24. For recent reviews, see: A.G. Cohen, D.B. Kaplan and A.E. Nelson, 10 [*Annu. Rev. Nucl. Part. Sci.*]{} [**43**]{} (1993) 27; M. Quir[ó]{}s, ; V.A. Rubakov and M.E. Shaposhnikov, e-print \[hep-ph/9603208\]. G. t’Hooft, ; . M. Shaposhnikov, [*JETP Lett.*]{} [**44**]{} (1986) 465; and [**B299**]{} (1988) 797. M.E. Carrington, ; M. Dine, R.G. Leigh, P. Huet, A. Linde and D. Linde, ; ; P. Arnold, ; J.R. Espinosa, M. Quir[ó]{}s and F. Zwirner, ; W. Buchm[ü]{}ller, Z. Fodor, T. Helbig and D. Walliser, . J. Bagnasco and M. Dine, ; P. Arnold and O. Espinosa, ; Z. Fodor and A. Hebecker, . K. Kajantie, K. Rummukainen and M.E. Shaposhnikov, ; Z. Fodor, J. Hein, K. Jansen, A. Jaster and I. Montvay, ; K. Kajantie, M. Laine, K. Rummukainen and M.E. Shaposhnikov, ; K. Jansen, e-print \[hep-lat/9509018\]. G.R. Farrar and M.E. Shaposhnikov, , ([**E**]{}): [**71**]{} (1993) 210 and . M.B. Gavela et al., ; ; P. Huet and E. Sather, . G.F. Giudice, ; S. Myint, . J.R. Espinosa, M. Quir[ó]{}s and F. Zwirner, . A. Brignole, J.R. Espinosa, M. Quir[ó]{}s and F. Zwirner, . M. Carena, M. Quiros and C.E.M. Wagner, . D. Delepine, J.M. Gerard, R. Gonzalez Felipe and J. Weyers, . J.M. Moreno, D.H. Oaknin and M. Quir[ó]{}s, , and \[hep-ph/9612212\] to appear in [*Phys. Lett.*]{} [**B**]{}. M. Carena, J.R. Espinosa, M. Quir[ó]{}s and C.E.M. Wagner, ; M. Carena, M. Quir[ó]{}s and C.E.M. Wagner, ; H.E. Haber, R. Hempfling and A.H. Hoang, e-print \[hep-ph/9609331\]. J.R. Espinosa, ; B. de Carlos and J.R. Espinosa, e-print \[hep-ph/9703212\]. D. Comelli and J.R. Espinosa, e-print \[hep-ph/9606438\]. A.I. Bochkarev, S.V. Kuzmin and M.E. Shaposhnikov, . M. Carena, P. Zerwas and the Higgs Physics Working Group, in Vol. 1 of Physics at LEP2, G. Altarelli, T. Sj[ö]{}strand and F. Zwirner, eds., Report CERN 96-01, Geneva (1996). L. Mc Lerran [*et al.*]{}, [*Phys. Lett.*]{} [**B256**]{} (1991) 451; M. Dine, P. Huet and R. Singleton Jr., [*Nucl. Phys.*]{} [**B375**]{} (1992) 625; M. Dine [*et al.*]{}, [*Phys. Lett.*]{} [**B257**]{} (1991) 351; A.G. Cohen and A.E. Nelson, [*Phys. Lett.*]{} [**B297**]{} (1992) 111. D. Comelli, M. Pietroni and A. Riotto, [*Phys. Lett.*]{} [**B354**]{} (1995) 91 and [*Phys. Rev.*]{} [**D53**]{} (1996) 4668. S.Yu. Khlebnikov, [*Phys. Lett.*]{} [**B300**]{} (1993) 376; A.G. Cohen, D.B. Kaplan, and A.E. Nelson, [*Phys. Lett.*]{} [**B294**]{} (1992) 57; J.M. Cline and K. Kainulainen, [*Phys. Lett.*]{} [**B356**]{} (1995) 19. A.G. Cohen, D.B. Kaplan, and A.E. Nelson, [*Phys. Lett.*]{} [**336**]{} (1994) 41. M. Joyce, T. Prokopec and N. Turok, [*Phys. Rev. Lett.*]{} [**75**]{} (1995) 1695, (E): [*ibidem*]{} 3375; D. Comelli, M. Pietroni and A. Riotto, [*Astropart. Phys.*]{} [**4**]{} (1995) 71. P. Huet and A.E. Nelson, . P. Huet and A.E. Nelson, . A. Riotto, [*Phys. Rev.*]{} [**D53**]{} (1996) 5834. M. Carena, M. Quir[ó]{}s, A. Riotto, I. Vilja and C.E.M. Wagner, e-print \[hep-ph/9702409\]. J. Ambj[ø]{}rn and A. Krasnitz, . P. Arnold, D. Son and L.G. Yaffe, e-print \[hep-ph/9609481\]. A.G. Cohen, D.B. Kaplan and A.E. Nelson, .
[^1]: To appear in the Proceedings of the Workshop on [*The Higgs puzzle– What can we learn from LEP II, LHC, NLC and FMC?*]{}, Ringberg Castle, Germany, December 8-13, 1996. Ed. B. Kniehl, World Scientific, Singapore.
|
---
abstract: 'It is well known that solutions to the Fourier-Galerkin truncation of the inviscid Burgers equation (and other hyperbolic conservation laws) do not converge to the physically relevant entropy solution after the formation of the first shock. This loss of convergence was recently studied in detail in \[S. S. Ray *et al.*, *Phys. Rev. E* **84**, 016301 (2011)\], and traced back to the appearance of a spatially localized resonance phenomenon perturbing the solution. In this work, we propose a way to remove this resonance by filtering a wavelet representation of the Galerkin-truncated equations. A method previously developed with a complex-valued wavelet frame is applied and expanded to embrace the use of real-valued orthogonal wavelet basis, which we show to yield satisfactory results only under the condition of adding a safety zone in wavelet space. We also apply the complex-valued wavelet based method to the 2D Euler equation problem, showing that it is able to filter the resonances in this case as well.'
author:
- 'R. M. Pereira'
- 'R. Nguyen van yen'
- 'M. Farge'
- 'K. Schneider'
bibliography:
- 'biblio.bib'
title: |
Wavelet methods to eliminate resonances\
in the Galerkin-truncated Burgers and Euler equations
---
Introduction
============
Due to the intrinsic limitations of computers, solving a nonlinear partial differential equation numerically actually means solving its truncation to a finite number of modes, where, in favorable cases, the truncated system closely approaches its continuous counterpart. But sometimes the truncation has drastic effects which completely destroy the desired approximation. The first historical example for which this happened was probably the symmetric finite difference scheme designed by von Neumann in the 1940s for nonlinear conservation laws. As recalled in [@Hou1991], it was indeed shown in the 1980s that, when applying this scheme even to the simplest case of the 1D inviscid Burgers equations, convergence to the correct solution is lost at the appearance of the first shock. Other schemes, specifically designed to dissipate kinetic energy at the location of shocks, do not suffer from this limitation and yield the desired solution.
This matter of convergence was investigated in [@Tadmor1989] for another important scheme, namely Fourier-Galerkin truncation, where only the equations for Fourier modes with wavenumbers below a certain cut-off are solved, the other modes being set to zero. Using the conservative character of the truncation and the nonlinear structure of the equations, the author was able to prove that even weak convergence to the physical solutions was ruled out once the latter started to be dissipative. This loss of convergence was scrutinized more closely in the recent work [@Ray2011], which showed that in the truncated system shocks become sources of waves that perturb the numerical solution throughout its spatial domain. This is possible because Fourier-Galerkin truncation is a non-local operator in physical space, instantaneously removing all modes above the truncation wavenumber. Furthermore, these waves resonantly interact with the flow at locations where the velocity is the same as their phase velocity, giving rise to strong perturbations localized around these positions which eventually spread and corrupt the numerical solution.
The aim of the present work is to show how the resonances can be eliminated by filtering the solution in a wavelet basis, a possibility which was already pointed out in [@Ray2011]. The Burgers equation has been chosen as a toy model because its entropy solutions can be computed analytically, enabling direct comparison with numerical results. An important point to keep in mind though is that the analytical solutions are dissipative even in the inviscid limit, a phenomenon known as dissipative anomaly, while the Galerkin-truncated ones never dissipate energy if the viscosity is set to zero. Therefore a numerical solution can approach the exact solution only if it finds a way to dissipate energy, as is achieved by our method through the filtering process described further down. In fact, as discussed in [@Nguyenvanyen2008; @Nguyenvanyen2009] and references therein, many filtering mechanisms are known empirically to achieve this task (see also the recent review in [@Gottlieb2011]). However, the precise effect of these filtering methods on the resonances shown by [@Ray2011] has not been fully clarified yet.
To get insight into the formation of the resonances we start by performing a continuous wavelet analysis of the Galerkin-truncated solutions to the inviscid Burgers equation. Such a representation unfolds the solution in both space and scale in a continuous fashion. It thus allows us to visualize at which wavenumbers and positions the resonances are generated and subsequently propagated.
Afterwards, the wavelet filtering method analogous to Coherent Vorticity Simulation (CVS), already proposed to solve Burgers equation [@Nguyenvanyen2008; @Nguyenvanyen2009], is applied here with the same initial conditions used in [@Ray2011]. To demonstrate that the method is well suited for regularizing the solution, the equation is solved in Fourier space using a pseudo-spectral approach, but after each time step the solution is expanded over a frame of complex-valued wavelets, filtered with an iterative procedure introduced in [@AAMF04], and then reprojected onto the Fourier basis for computing the next time step.
We then go further and propose the use of real-valued orthogonal wavelets instead of the redundant complex-valued wavelets. Since the former do not enjoy the translational invariance property of the latter, satisfactory solutions can only be obtained by keeping the neighbors of the retained coefficients, *i.e.*, adding a safety zone in wavelet coefficient space to account for the translation of shocks and the generation of small scales, a procedure successfully applied in previous works for 2D and 3D flows [@Froehlich1999; @Schneider2006; @Okamoto2011]. The quality of the approximations obtained for the different filtering methods is assessed by computing a global error estimate.
Finally, since [@Ray2011] also discusses the presence of resonances in the Galerkin-truncated 2D incompressible Euler equations, we accordingly study the effect of the complex-valued wavelet method in this case. First results in that same direction can be found in [@Nguyenvanyen2009].
1D inviscid Burgers equation
============================
Continuous wavelet analysis {#seq:CWT}
---------------------------
Our starting point is the inviscid Burgers equation, written in conservative form [$$\partial_t u +\frac{1}{2} \partial_x u^2 = 0, \label{burgers}$$]{} $u$ being the velocity, $t$ time and $x$ space, supplemented with periodic boundary conditions, and taking the same harmonic initial condition as in [@Ray2011] (the domain size being normalized to 1): [$$u_0(x) = \sin(2\pi x) + \sin(4\pi x + 0.9) + \sin(6\pi x). \label{init_cond}$$]{} In [@Ray2011] the authors observed that, when solving the Galerkin-truncated version of (\[burgers\]) with a pseudo-spectral code, fine-scale oscillations appear all over the solution right after the formation of the first singularity in the exact solution, followed by the emergence of two bulges around the points where the velocity equals the shock velocity and the velocity gradient is positive. These bulges then grow and start to perturb the solution, initiating the equipartition process predicted by T.D. Lee [@Lee1952]. As explained in [@Ray2011], the bulges are due to a resonant interaction between a truncation wave, excited by the Gibbs oscillations coming from the Galerkin truncation, and the locations where the velocities are close to the phase velocity of the wave.
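For readers who wish to reproduce this behavior, a minimal Python/NumPy sketch of the Fourier-Galerkin truncated version of (\[burgers\]) with the initial condition (\[init\_cond\]) is given below. The resolution and final time are small illustrative choices of our own (the production runs analyzed in the following use $K=8192$); the time step and the $3K$ dealiased collocation grid follow the description given below.

```python
import numpy as np

def truncated_burgers(K=256, T=0.05):
    """Fourier-Galerkin truncated inviscid Burgers equation on [0,1), RK4 in time.
    Only modes |k| <= K are kept; the quadratic term is evaluated pseudo-spectrally
    on a 3K collocation grid (illustrative sketch)."""
    N = 3 * K                                   # collocation points
    x = np.arange(N) / N
    kint = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers
    ik = 2j * np.pi * kint                      # spectral derivative factor
    mask = np.abs(kint) <= K                    # Galerkin truncation
    u0 = np.sin(2*np.pi*x) + np.sin(4*np.pi*x + 0.9) + np.sin(6*np.pi*x)
    uhat = np.fft.fft(u0) * mask
    dt = 0.125 / N                              # time step, following the text

    def rhs(uh):
        u = np.real(np.fft.ifft(uh * mask))
        # d/dt uhat_k = -(i k / 2) (u^2)hat_k, truncated to |k| <= K
        return -0.5 * ik * np.fft.fft(u * u) * mask

    t = 0.0
    while t < T:
        k1 = rhs(uhat)
        k2 = rhs(uhat + 0.5 * dt * k1)
        k3 = rhs(uhat + 0.5 * dt * k2)
        k4 = rhs(uhat + dt * k3)
        uhat = (uhat + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0) * mask
        t += dt
    return x, np.real(np.fft.ifft(uhat))

# x, u = truncated_burgers()  # spurious oscillations and bulges appear once the
#                             # first shock has formed (t ~ 0.035 for this u0)
```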
To follow the formation of resonances and the subsequent spreading of the fluctuations, let us first consider the continuous wavelet transform (CWT) of the numerical solution at different time instants. All computations were performed using a 4$^\mathrm{th}$ order Runge-Kutta time evolution scheme with time step $\delta t = 0.125 N^{-1}$ and Galerkin truncation wavenumber $K = 8192$. For efficiency, the nonlinear term is computed pseudo-spectrally on a collocation grid having $3K$ points, which ensures full dealiasing. The CWT coefficients are calculated as the inner products of the velocity $u(x)$ at a given instant $t$ with a set of wavelet functions $\psi_{\ell,x'}(x)$ of scale $\ell$ centered around positions $x'$, where for the mother wavelet we have chosen the complex-valued Morlet wavelet for its good analysis properties [@Farge92]. The results, presented in Fig. \[fig:cwt\], show the logarithm of the modulus of the wavelet coefficients at different positions $x'$ and scales $\ell$ (represented by the equivalent wavenumbers $k =\frac{k_\psi}{\ell}$, $k_\psi$ being the centroid wavenumber of the chosen wavelet [@Ruppert-Felsot2009]). The horizontal black line indicates the Galerkin truncation wavenumber, and the velocity fields themselves are also shown at the top of each figure for convenience.
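A scale-space map of this kind is simple to generate; the sketch below computes Morlet wavelet coefficients of a periodic signal by multiplication in Fourier space. The centroid parameter, the normalization and the scale sampling are illustrative choices of our own, not the exact analysis parameters behind Fig. \[fig:cwt\].

```python
import numpy as np

def morlet_cwt(u, scales, omega0=6.0):
    """Continuous wavelet transform of a periodic signal u on [0,1) with an
    analytic Morlet wavelet, evaluated scale by scale via the FFT.
    Returns complex coefficients of shape (len(scales), len(u))."""
    N = len(u)
    uhat = np.fft.fft(u)
    omega = 2.0 * np.pi * np.fft.fftfreq(N, d=1.0 / N)   # angular wavenumbers
    coeffs = np.empty((len(scales), N), dtype=complex)
    for j, s in enumerate(scales):
        # analytic Morlet: energy essentially confined to positive wavenumbers
        psi_hat = np.where(omega > 0.0, np.exp(-0.5 * (s * omega - omega0) ** 2), 0.0)
        coeffs[j] = np.fft.ifft(uhat * psi_hat)           # coefficients at all positions x'
    return coeffs

# equivalent wavenumber of scale s: k = omega0 / (2*pi*s), so to cover k = 1..K use
# scales = omega0 / (2*np.pi*np.geomspace(1.0, K, 64)); then plot np.log(np.abs(W))
```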
Figures \[fig:cwt\_0\] and \[fig:cwt\_02749\] show respectively the harmonic initial condition and how the precursors of the shocks develop. Figure \[fig:cwt\_03505\] shows the solution when the first preshock reaches the cut-off scale and becomes a shock, *i.e.*, when non-negligible energy reaches the scale indicated by the horizontal black line. We observe that the first resonances appear immediately after that (note the small time interval between Figs. \[fig:cwt\_03505\] and \[fig:cwt\_03538\]) and then spread all over space. Figure \[fig:cwt\_03648\] shows the formation of the bulges around the resonant locations. They stretch until they reach the Galerkin scale and then generate more truncation waves, as shown in Fig. \[fig:cwt\_03998\]. After that, perturbations at all scales start to spread throughout the solution, and even more so when the second shock is formed, as in Fig. \[fig:cwt\_05897\]. At much longer times the solution becomes very noisy (Fig. \[fig:cwt\_19989\]), on its way towards equipartition[^1].
Elimination of resonances using complex-valued Kingslets {#sec:kingslet}
--------------------------------------------------------
As explained in [@Ray2011], and as we have seen from the wavelet analysis of the previous section, the failure of the Fourier-Galerkin scheme to reproduce the correct solution can be traced back to the amplification of truncation waves by a resonance mechanism. To suppress these resonances, a dissipation mechanism has to be introduced in the numerical scheme, in a way which does not affect the nonlinear dynamics. This procedure is sometimes called regularization of the solution. In this section, we show by numerical experiments how the resonances are suppressed by the CVS-filtering method, which was first applied to the inviscid Burgers equation in [@Nguyenvanyen2008], and recall its interpretation in terms of denoising.
The algorithm proposed by [@Nguyenvanyen2008] is as follows. Starting from the Fourier coefficients of the velocity field $\widehat{u}_k$ for $\vert k \vert \leq K$ at $t=t_n$:
1. *Time integration*. The Fourier coefficients of the velocity field are advanced in time to $t=t_{n+1}$ using the $4^{th}$ order Runge-Kutta scheme described in Sec. \[seq:CWT\].
2. *Inverse Fourier transform*. The velocity field at $t=t_{n+1}$ is reconstructed from its Fourier coefficients on a grid with $N=2K$ points.
3. *Forward wavelet transform*. The velocity field is written in wavelet space as [$$u(x) = {\left< \phi \right| \left. u \right>} \phi(x) + \sum_{j=0}^{J-1}\sum_{i=1}^{2^j} {\left< \psi_{ji} \right| \left. u \right>} \psi_{ji}(x), \label{wavelet_expansion}$$]{} where $\psi_{ji}$ are the wavelet functions, $\phi$ the associated scaling function and the indexes $j$ and $i$ denote scale and position respectively. Each inner product, defined as ${\left< f \right| \left. g \right>} \equiv \int_0^1 f(x)^*g(x)dx$, corresponds to a wavelet coefficient.
4. *Application of the CVS filter*. The coefficients whose moduli are below a threshold $T$, the so-called incoherent coefficients, are discarded, and $T$ is determined at each time step in an iterative way following [@AAMF04]. It is initialized as $T_0 = q\sqrt{E/N}$, $q$ being a compression parameter and $E$ the total energy; successive filterings are then made as $T$ is recalculated in sub-step $n+1$ as [$$T_{n+1} = q\, \sigma\!\left[ \tilde u^{(n)}_{ji} \right] \label{threshold},$$]{} until $T_{n+1}=T_n$. Here $\tilde u_{ji}^{(n)}$ are the wavelet coefficients below the threshold $T_n$ and $\sigma[\cdot]$ represents the standard deviation of the set of coefficients between brackets (a minimal sketch of this thresholding loop is given after the list).
5. *Inverse wavelet transform*. The coefficients above the final threshold represent the coherent part of the signal and are used as input to an inverse fast wavelet transform.
6. *Forward Fourier transform*. The Fourier coefficients of the filtered velocity field are computed, and the cycle can proceed onward.
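The thresholding of step 4 is the only slightly non-standard ingredient of this loop, so a possible implementation is sketched below. It acts on a flat array of wavelet coefficients and is agnostic to the wavelet family; the function name, the iteration cap and the convergence test are our own.

```python
import numpy as np

def cvs_threshold(coeffs, q, E, N, max_iter=100):
    """Iterative threshold of step 4: start from T0 = q*sqrt(E/N), then repeatedly
    set T to q times the standard deviation of the coefficients currently below T,
    until the threshold stops changing.  Returns (T, mask of coherent coefficients)."""
    a = np.abs(coeffs)
    T = q * np.sqrt(E / N)
    for _ in range(max_iter):
        incoherent = a < T
        if not incoherent.any():
            break
        T_new = q * np.std(coeffs[incoherent])
        if np.isclose(T_new, T):
            T = T_new
            break
        T = T_new
    return T, a >= T

# T, coherent = cvs_threshold(wavelet_coeffs, q=5, E=total_energy, N=len(u))
# filtered = np.where(coherent, wavelet_coeffs, 0.0)
```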
There are two choices left to be made in this algorithm: the wavelet basis used in steps 3 and 5, and the parameter $q$ in step 4. As shown in [@Nguyenvanyen2009], this version of the algorithm performs badly if real-valued orthogonal wavelets are used, but works very well when using translation invariant complex-valued wavelets called Kingslets, introduced in [@NK01] and first proposed in [@Nguyenvanyen2008] for this application. Note that Kingslets were constructed to have almost vanishing energy in the negative wavenumber range, which (as explained in [@NK01]) implies that filtering in wavelet space is almost a translation invariant operator (i.e., it commutes with spatial translations of the signal). This is a desired feature for the Burgers equation since shocks translate and cannot be properly tracked with a real-valued wavelet basis, whose coefficients are not stable enough due to the loss of translational invariance, giving poor filtering results. Therefore, we stick to this choice in this section, but we will discuss below how the algorithm can be modified to allow other choices.
Concerning the dimensionless number $q$ in step 4 of the algorithm, it controls the severity of the filter, since increasing $q$ enlarges the set of discarded coefficients. Its value defines a certain balance between regularization and approximation quality, and also influences the compression rate. Here, we follow [@Nguyenvanyen2008] and use $q=5$ with Kingslets. A discussion of the effect of varying $q$ would be of interest but is out of the scope of the present work.
The added complexity of running this algorithm, as compared to the standard Fourier-Galerkin method, comes from the forward and inverse Fourier and wavelet transforms, and the iterations required to determine the threshold. Since the standard 4-th order Runge-Kutta scheme already requires $12$ Fourier transforms per timestep, the additional Fourier transforms represent an increase of computational cost of about 17% in total. The cost of each wavelet transform is proportional to $S\log_2(N)$ where $S$ is the length of the wavelet filters, and for efficient implementations it is lower than the cost of a Fourier transform. Finally, the cost of the iterations is more difficult to evaluate since their number is not known a priori, but we observe in practice that it is low compared to the other costs.
In Fig. \[fig:kings\_nofil\_037\] we show the solutions a few time steps after the appearance of the resonances, which do not occur for the CVS-filtered solution (shown in black). Figures \[fig:kings\_nofil\_048\] and \[fig:kings\_nofil\_129\] show that the evolution is stable and we still have no trace of resonances, even for longer integration times when the Galerkin-truncated solution becomes perturbed, although after the formation of shocks the Gibbs phenomenon is intense (as discussed in [@Nguyenvanyen2008; @Nguyenvanyen2009]). In Fig. \[fig:kings\_tyg\] we show in detail how the resonances are completely filtered out by the CVS method.
![ Zoom of resonance at $t = 0.037$. Green (gray): Galerkin-truncated solutions. Black: CVS-filtered with Kingslets.[]{data-label="fig:kings_tyg"}](kings_tyg_0037){width="40.00000%"}
To demonstrate that the whole dynamics of the Burgers equation is preserved by CVS filtering, we plot in Fig. \[fig:kings\_ref\] the filtered profile along with the analytical solution as a reference, calculated using a Lagrangian map method [@Vergassola1994].
One sees a very good agreement with only small discrepancies at the shocks due to the Gibbs phenomenon.
Overall it appears that this implementation of the CVS filtering method achieves sufficient energy dissipation at shock locations to keep the numerical solution close to the desired entropy solution. It would be interesting to understand which element in the algorithm is essential for this beneficial dissipative effect, but unfortunately there are several competing influences which are difficult to disentangle. The filtering operation in itself (discarding the incoherent coefficients) is certainly an important source of dissipation, but it is difficult to quantify a priori since the complex-valued Kingslet wavelets are not an orthogonal basis, but merely a tight frame (see [@NK01]). Moreover, the alternating projections between the Fourier basis and a wavelet basis, which do not commute with each other, also introduce some dissipation. A first step in order to better understand the process by which this filter achieves dissipation is to move from a wavelet frame to an orthogonal wavelet basis, as we discuss in the next section.
Elimination of resonances using real-valued orthogonal wavelets {#sec:ROW}
---------------------------------------------------------------
Although the Kingslet frame is well suited to suppress resonances as we have recalled in the previous section, it is appealing to be able to use a non-redundant real-valued orthogonal wavelet basis. Due to its lack of translation invariance, this kind of basis does not perform well in the context of the algorithm described in the previous section [@Nguyenvanyen2009]. Following previous work on CVS filtering of the 2D and 3D Navier-Stokes equations [@Froehlich1999; @Schneider2006; @Okamoto2011], we introduce the concept of a safety zone in wavelet space, that is, after computing the coherent coefficients as in the $4^\mathrm{th}$ step of the CVS algorithm, we also keep the neighboring wavelet coefficients in space and in scale. The aim is to account for translation of shocks to neighboring positions and generation of finer scale structures from coarser ones. Hence, we have to add a step 4b. to the algorithm described in section \[sec:kingslet\] as follows:
1. *Definition of the safety zone in wavelet space*. We create an index set $\Lambda$ containing pairs $\lambda = (j,i)$ indexing each coherent wavelet coefficient in scale $j$ and position $i$ kept in step 4. We then define an expanded index set $\Lambda_*$ including the neighboring coefficients in position and scale, namely, for each pair $(j,i)$, the pairs depicted in Fig. \[fig:safetyzone\] [@Schneider1996]. Finally, all the coefficients not present in $\Lambda_*$ are set to zero (a minimal sketch of this bookkeeping is given after the figure).
[Fig. \[fig:safetyzone\]: the safety zone of a coherent coefficient $(j,i)$ consists of its position neighbors $(j,i-1)$ and $(j,i+1)$, its coarser-scale neighbor $(j-1,\lfloor i/2\rfloor)$, and its finer-scale neighbors $(j+1,2i)$ and $(j+1,2i+1)$.]
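A possible implementation of this index bookkeeping is sketched below, using a Python set of $(j,i)$ pairs; the periodic wrapping of the position index at the boundaries is an assumption of ours, as the boundary treatment is not specified above.

```python
def add_safety_zone(coherent, J):
    """Expand an index set of coherent wavelet coefficients by their neighbours in
    position and scale (step 4b).  `coherent` contains pairs (j, i) with
    scale 0 <= j < J and position 0 <= i < 2**j."""
    expanded = set(coherent)
    for (j, i) in coherent:
        n_j = 2 ** j
        expanded.add((j, (i - 1) % n_j))     # left neighbour in position (periodic)
        expanded.add((j, (i + 1) % n_j))     # right neighbour in position
        if j > 0:
            expanded.add((j - 1, i // 2))    # neighbour at the next coarser scale
        if j < J - 1:
            expanded.add((j + 1, 2 * i))     # the two neighbours at the next finer scale
            expanded.add((j + 1, 2 * i + 1))
    return expanded

# Lambda_star = add_safety_zone(Lambda, J); coefficients outside Lambda_star are set to zero
```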
This additional step is able to generate a more stable solution, but the fluctuation level is still high when compared to Kingslets. In order to smooth out these fluctuations we need a higher threshold in the CVS filter step of the algorithm, so we choose $q=8$ in equation (\[threshold\]), changing accordingly the start-up value $T_0$.
As examples we employ two different wavelet bases that are widely available in numerical analysis packages, the Daubechies 12 wavelet, which has compact support, and the Spline 6 wavelet, which has exponential decay [@Daubechies1992]. If we simply apply the CVS filtering procedure from section \[sec:kingslet\] with these bases, the solution becomes highly oscillatory as soon as the resonances appear, and we end up with poor results. But once the safety zone in wavelet space is implemented as described above, the dynamics is properly preserved. In Figs. \[fig:daub12\_safenosafe\] (Daubechies 12) and \[fig:spline\_safenosafe\] (Spline 6) we see the significant improvement in the filtering capability of the code, comparing the cases with and without safety zone along with the analytical solution.
The naturally oscillating character of real-valued wavelets and their lack of translation invariance still play a role, generating small perturbations (which get worse next to regions affected by the Gibbs phenomenon). But while the dynamics is lost when there is no safety zone, with huge oscillations corrupting the phase coherence of the shocks, after the introduction of the safety zone it is very well preserved. Considering the time evolution of energy (Fig. \[fig:energy\]), it appears that in the absence of a safety zone not all the necessary energy is dissipated. This could be an explanation for the poor performance of the filtering scheme in that case.
![ Time evolution of energy. Energies of the CVS filtered solutions with safety zone collapse to the analytical energy evolution.[]{data-label="fig:energy"}](energy){width="45.00000%"}
To give a quantitative aspect to the idea of “good filtering” we consider the global energy error estimate [$$\varepsilon = \frac{\int_0^1 \left[v(x) - v_{\mathrm{ref}}(x)\right]^2 dx}{\int_0^1 v_{\mathrm{ref}}(x)^2 dx},$$]{} where $v_{\mathrm{ref}}$ is the reference analytical solution. This allows us not only to evaluate how close to the reference the CVS-filtered solutions are, but also to compare the efficiencies of different wavelet bases. In Fig. \[fig:errors1\] we plot the time evolution of $\varepsilon$ for all runs.
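On a uniform periodic grid this diagnostic reduces to a ratio of discrete sums, as in the small helper below (the grid spacing cancels between numerator and denominator):

```python
import numpy as np

def global_energy_error(v, v_ref):
    """Normalised error between a filtered solution v and the reference entropy
    solution v_ref, both sampled on the same uniform grid on [0,1)."""
    diff = v - v_ref
    return np.sum(diff ** 2) / np.sum(v_ref ** 2)
```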
![\[fig:incoh\] Evolution of the fraction of incoherent wavelet coefficients.](incoherent_coeffs){width="40.00000%"}
For the unfiltered Galerkin-truncated solution, the error grows very fast as soon as resonances appear. The growth is slower for the CVS solutions without safety zone, but the solutions are still eventually destroyed. Due to their much smaller values, the error estimates for the Kingslets and for the real-valued wavelets with safety zone are barely seen in this plot. So in Fig. \[fig:errors2\] we change scales to find that they are of order $10^{-3}$ and stabilize once the influence of the resonances has been damped. We see that the errors of the real-valued orthogonal wavelets stabilize very close to the Kingslets value. This makes their use attractive, a fact further reinforced when we compare the level of compression along the time evolution (Fig. \[fig:incoh\]), *i.e.*, the percentage of discarded coefficients at each time step.
Indeed, during a large fraction of the evolution, Kingslet-based CVS filtering keeps many more coefficients than its counterparts based on orthogonal wavelets. The level of compression tends to stabilize at a slightly smaller value than the average of the other cases, but since the Kingslets frame has twice as many coefficients as real-valued orthogonal wavelet bases, this result shows the strong potential of the latter for the development of fully adaptive methods, provided a safety zone is implemented.
2D Euler equation
=================
The emergence of resonances in Galerkin-truncated solutions of the 2D Euler equation was also shown in [@Ray2011]. The fact that the CVS solutions, filtered with a 2D version of the Kingslets, are similar to the ones obtained from 2D Navier-Stokes with small viscosity [@Nguyenvanyen2009] suggests that CVS might be suitable to filter the resonances in this case as well. Therefore, in the same spirit as in section \[sec:kingslet\], we apply the CVS method using Kingslets to the same initial condition used in the 2D example of [@Ray2011]: $$\label{eq:euler_initial_condition}
\widehat{\omega}_{\mathbf{k}}= \frac{2\vert k \vert^{7/2}}{N_k} e^{-k^2/4 + i\theta_{\mathbf{k}}},$$ where $\theta_{\mathbf{k}}$ is a realization of a random variable uniformly distributed in $[0,2\pi]$, $k$ is the integer part of $\vert {\mathbf{k}}\vert$, and $N_k$ is the number of distinct vectors ${\mathbf{k}}$ such that $ k \leq \vert {\mathbf{k}}\vert < k+1$. The particular realization used in [@Ray2011] as well as here can be retrieved online [^2]. The 2D Galerkin-truncated Euler equations are solved using a fully dealiased pseudo-spectral method at resolution $N^2 = 1024^2$ with a low storage third order Runge-Kutta scheme for time discretization. The time step is adjusted dynamically to satisfy the CFL stability criterion. For more details on the numerical method, we refer the reader to [@Nguyenvanyen2009].
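For completeness, a sketch of how such an initial vorticity field can be synthesized is given below. It draws its own random phases, so it is only statistically equivalent to the particular realization used in [@Ray2011], and it takes the real part of the inverse transform, which amounts to Hermitian-symmetrizing the coefficients so that the vorticity is real (a detail left implicit above).

```python
import numpy as np

def euler_initial_vorticity(N=1024, seed=0):
    """Random vorticity field with spectral amplitudes 2*k^{7/2}/N_k * exp(-k^2/4),
    k being the integer part of |kvec| and N_k the population of the shell [k, k+1)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(N, d=1.0 / N)                 # integer wavenumbers
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kint = np.floor(np.sqrt(KX ** 2 + KY ** 2)).astype(int)
    Nk = np.maximum(np.bincount(kint.ravel()), 1)     # shell populations, guarded against zeros
    amp = 2.0 * kint ** 3.5 / Nk[kint] * np.exp(-kint ** 2 / 4.0)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(N, N))
    what = amp * np.exp(1j * theta)
    # the real part of the inverse FFT corresponds to Hermitian-symmetrized coefficients;
    # the overall normalization depends on the FFT convention adopted by the solver
    return np.real(np.fft.ifft2(what))
```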
In contrast to the Burgers case previously presented, here we do not have an analytical solution to compare with and to compute an error estimate, but a qualitative visual comparison is sufficient to check whether CVS filters out the resonances while preserving the dynamics. Resonances are well exhibited in plots of the Laplacian of the vorticity, so, following [@Ray2011], we show contours of this quantity at $t=0.71$. Figure \[fig:euler\] shows the contours for the whole domain, and we can easily see that the CVS solutions do not show the resonances but keep the same general aspect.
{width="40.00000%"} {width="40.00000%"}
A more precise comparison can be made from Fig. \[fig:comp\_euler\_zoom\], where the contours of both cases at $t=0.71$, zoomed-in around a region of intense resonance, are plotted together (left panel), as well as a cut as a function of $x_1$ along a segment near $x_2=3$ (right panel). One sees very well how the resonances are suppressed and how the profiles are strikingly similar, indicating that the filter is able to maintain the physical aspects of the solutions.
{width="40.00000%"} {width="40.00000%"}
\[fig:comp\_euler\_lap\]
Finally, the dissipative character of the CVS filter is confirmed when considering the time evolution of the enstrophy $Z = \frac{1}{2} \int \omega^2$, as shown in Fig. \[fig:comp\_euler\_enstrophy\].
![Time evolution of enstrophy for the Galerkin-truncated and CVS-filtered Euler equations.[]{data-label="fig:comp_euler_enstrophy"}](euler_enstrophy){width="42.00000%"}
Conclusion
==========
The continuous wavelet transform allowed us to get further insight into the scale-space dynamics of resonance phenomena in Galerkin-truncated inviscid equations. We showed that oscillations appear in a non-local fashion as soon as a shock affects the cut-off scale, and that the resonant points and the shock act as sources of perturbations at the cut-off scale. We could also see that despite the fact that the resonances first appear at small scales, large-scale structures develop at the resonant points and are stretched into smaller scales until they reach the cut-off and start acting as new sources of truncation waves. These new perturbations spread and reach the shocks, leading to energy equipartition.
For the 1D inviscid Burgers equation, the results presented here confirm that the CVS filtering method we have previously proposed in \[5\], using a dual-tree complex wavelet frame (Kingslets), is well suited for eliminating all spurious oscillations present in the Galerkin-truncated solution as reported in [@Ray2011]. The resonances, which are not due to the dynamics of the original equation but rather to its discretization by a Galerkin method, are completely suppressed in this approach. Their ‘incoherent’ character in relation to the system evolution is established. In order to better understand the dissipative process characteristic of CVS filtering, we have sought to replace Kingslets by standard real-valued orthogonal wavelets. We have obtained satisfactory results under the condition that the coefficients which are adjacent to those whose moduli are above the threshold value are preserved. Such a safety zone is only necessary with orthogonal wavelets, to compensate for their lack of translation invariance, as originally introduced for CVS filtering of the 2D and 3D Navier-Stokes equations [@Froehlich1999; @Schneider2006; @Okamoto2011]. For the 2D Euler equation we have shown that CVS filtering with Kingslets is also capable of filtering the resonances without perturbing the dynamics. The filtered solutions match the unfiltered ones but for the non-physical oscillations which are eliminated. The authors of [@Ray2011] asserted that many features of the resonance phenomena were also observed in the 3D Galerkin-truncated Euler equations, though these results have not been reported yet. It is an interesting perspective to test if in this case CVS filtering is still able to eliminate the resonances.
A limitation of the approach presented here is that the solution is transformed back and forth at each timestep between the wavelet and the Fourier truncations, which do not commute with each other. These alternating projections are likely to introduce a weak dissipation in addition to the filtering operation *per se*. Therefore from the present results it cannot yet be determined whether the observed elimination of resonances could be achieved solely with wavelet filtering, or whether the interleaved truncations in Fourier space play a crucial role. This question could be answered by applying the filtering method to the Wavelet-Galerkin truncation of the equations, instead of the Fourier-Galerkin truncation that was considered here, offering an appealing perspective for future work.
Acknowledgments {#acknowledgments .unnumbered}
===============
RMP thanks the Brazilian National Scientific and Technological Research Council (CNPq) for support. RNVY thanks the ANR Geofluids and the Humboldt foundation for supporting this research through post-doctoral fellowships. RNVY, MF and KS acknowledge support from the contract Euratom-FR-FCM n°2TT.FR.1215 and from the PEPS program of CNRS-INSMI. They are also grateful to M. Domingues and O. Mendes for their kind hospitality in Brazil while revising this paper.
[^1]: Videos with the time evolution of the coefficients were made available on-line for the interested reader as supplementary material to this paper, and also at <http://www.youtube.com/watch?v=WX2YIHGR7LA> and <http://www.youtube.com/watch?v=j4VfBGgSy30>.
[^2]: <http://www.kyoryu.scphys.kyoto-u.ac.jp/%7Etakeshi/populated>
|
---
abstract: 'We present results for prompt photon and inclusive $\pi^0$ production in p-p and A-A collisions at RHIC and LHC energies. We include the full next-to-leading order radiative corrections and nuclear effects, such as nuclear shadowing and parton energy loss. We find the next-to-leading order corrections to be large and $p_T$ dependent. We show how measurements of $\pi^0$ production at RHIC and LHC, at large $p_T$, can provide valuable information about the nature of parton energy loss. We calculate the ratio of prompt photons to neutral pions and show that at RHIC energies this ratio increases with $p_T$, approaching one at $p_T \sim 10$ GeV, due to the large suppression of $\pi^0$ production. We show that at the LHC this ratio has a steep $p_T$ dependence and approaches the $10\%$ level at $p_T \sim 20$ GeV.'
address:
- ' Department of Physics, McGill University, Montreal, QC H3A-2T8, Canada'
- ' RIKEN-BNL Research Center, Upton, NY 11973-5000, USA'
- ' Physics Department, Brookhaven National Laboratory, Upton, NY 11973-5000, USA'
- ' Department of Physics, University of Arizona, Tucson, Arizona 85721, USA'
author:
- 'S. Jeon, J. Jalilian-Marian, I. Sarcevic[^1]'
title: ' Prompt Photon and Inclusive $\pi^0$ Production at RHIC and LHC '
---
INTRODUCTION
============
In high-energy heavy-ion collisions hard scatterings of partons occur in the early stages of the reaction, well before a quark-gluon plasma might have been formed, producing fast partons that propagate through the hot and dense medium and lose their energy. It has been predicted that parton energy loss would result in the suppression of pion production in heavy-ion collisions relative to hadron-hadron collisions [@wang]. Recent data on inclusive $\pi^0$ production at the RHIC energy of $\sqrt s=200$ GeV [@phenix] and at large $p_T$, $3$ GeV $\le p_T \le 8$ GeV, confirm this prediction; however, the observed suppression was found to become stronger with increasing $p_T$, providing a new challenge for theoretical models.
In addition to being of special interest for studying parton energy loss effects, large-$p_T$ $\pi^0$ mesons form a significant background for prompt photons. Theoretical predictions for prompt photon production [@jos] and for the ratio of prompt photons to pions at RHIC and LHC energies are crucial for studying possible quark-gluon plasma formation via photons.
INCLUSIVE $\pi^0$ AND PROMPT PHOTON PRODUCTION AT RHIC AND LHC
==============================================================
In perturbative QCD, the inclusive cross section for pion production in a hadronic collision is given by:
$$\begin{aligned}
E_\pi \frac{d^3\sigma}{d^3p_\pi}(\sqrt s,p_\pi)
&=&
\int dx_{a}\int dx_{b} \int dz \sum_{i,j}F_{i}(x_{a},Q^{2})
F_{j}(x_{b},Q^{2}) D_{c/\pi}(z,Q^2_f) E_c
{d^3\hat{\sigma}_{ij\rightarrow c X}\over d^3p_c}
\label{eq:factcs}\end{aligned}$$
where $F_{i}(x,Q^{2})$ is the $i$-th parton distribution in a nucleon, $D_{c/\pi}(z,Q^2_f)$ is the pion fragmentation function and ${d^3\hat{\sigma}_{ij\rightarrow c X}/ d^3p_c}$ are the parton-parton cross sections. Prompt photon production is obtained using a similar expression, except that it has contributions from both direct and bremsstrahlung processes, where only the bremsstrahlung processes are convoluted with the photon fragmentation function, $D_{c/\gamma}(z,Q^2_f)$.
We calculate inclusive pion production in proton-proton collisions using MRS99 parton distributions [@mrs] and the BKK pion fragmentation functions [@bkk], and we include the leading-order, $O(\alpha_s^2)$, and next-to-leading order, $O(\alpha_s^3)$, subprocesses [@se]. Our prediction for inclusive $\pi^0$ production at the RHIC energy of $\sqrt s=200$ GeV and for $p_T>3$ GeV [@jjs] was found to be in excellent agreement with the PHENIX data [@phenix], indicating that the perturbative QCD approach is justified in this kinematic region.
To calculate the inclusive cross section for pion production in heavy-ion collisions, we use Eq. (1) with the parton distributions modified to include the nuclear shadowing effect [@eks98] and the fragmentation function modified to incorporate parton energy loss [@hsw]. We consider constant parton energy loss [@jjs2] as well as energy-dependent energy loss [@jjs]. In Fig. 1 we show our results for the ratio of the inclusive cross section for pion production in Au-Au collisions to the one in proton-proton collisions, $R_{AA}(p_T)$. We find that for constant energy loss the ratio increases with $p_T$, while for the energy-dependent case, $\epsilon=\kappa E$, the ratio decreases with $p_T$. For $\kappa=0.06$ we find excellent agreement with the recent PHENIX data [@rob]. In Fig. 1 we also show our prediction for $R_{AA}(p_T)$ in the case of prompt photon production. The suppression of prompt photons produced in heavy-ion collisions at RHIC is much weaker than the $\pi^0$ suppression. This is due to the fact that only the bremsstrahlung processes are affected by parton energy loss; they contribute $24\%$ to the cross section at $p_T=3$ GeV and $6\%$ at $p_T=12$ GeV. For the same reason, prompt photons are not very sensitive to the choice of parton energy loss.
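The qualitative difference between the two energy-loss prescriptions can be illustrated with the toy Monte Carlo sketched below, which compares a quenched and an unquenched hadron spectrum. It contains none of the actual ingredients of our calculation (no NLO matrix elements, parton distributions, shadowing or realistic fragmentation functions); the power-law spectrum, the toy fragmentation function and the loss parameters are arbitrary choices, meant only to show how $R_{AA}(p_T)$ is extracted from the two spectra.

```python
import numpy as np

def toy_raa(loss, pt_bins, n_events=10**6, n_spec=7.0, pt_min=2.0, pt_max=100.0, seed=0):
    """Toy Monte Carlo: partons drawn from a power-law spectrum ~ pT^-n_spec lose
    energy according to loss(E), then fragment with momentum fraction z ~ 2(1-z).
    Returns the ratio of quenched to unquenched hadron spectra in pt_bins."""
    rng = np.random.default_rng(seed)
    # importance sampling: draw pT uniformly in log(pT), reweight by pT^(1-n_spec)
    pt = pt_min * (pt_max / pt_min) ** rng.random(n_events)
    w = pt ** (1.0 - n_spec)
    z = 1.0 - np.sqrt(rng.random(n_events))        # toy fragmentation, pdf 2(1-z)
    h_pp = z * pt                                  # hadron pT without medium
    h_aa = z * np.maximum(pt - loss(pt), 0.0)      # hadron pT after energy loss
    spec_pp, _ = np.histogram(h_pp, bins=pt_bins, weights=w)
    spec_aa, _ = np.histogram(h_aa, bins=pt_bins, weights=w)
    return spec_aa / spec_pp

# bins = np.linspace(3.0, 40.0, 20)
# raa_const = toy_raa(lambda E: np.full_like(E, 1.0), bins)   # constant loss of 1 GeV
# raa_frac  = toy_raa(lambda E: 0.06 * E, bins)               # epsilon = 0.06 E
```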
In Fig. 2 we show the ratio of prompt photon and $\pi^0$ cross sections at $\sqrt s=200$ GeV. We find that for constant energy loss this ratio increases slowly with $p_T$, similar to the p-p case, while for $\epsilon=0.06E$, the ratio has strong $p_T$ dependence, approaching one at $p_T \sim 10$GeV. This is due to the strong suppression of $\pi^0$ production at large $p_T$.
![The $\gamma/\pi^0$ ratio as a function of $p_T$ at $\sqrt s=200$ GeV.[]{data-label="fig2"}](qm2002rhic_gr_pr.eps "fig:"){width="75mm"}

![The $\gamma/\pi^0$ ratio as a function of $p_T$ at $\sqrt s=200$ GeV.[]{data-label="fig2"}](qm2002rhic_goverp.eps "fig:"){width="75mm"}
In Fig. 3 we present $R_{AA}(p_T)$ for inclusive $\pi^0$ production and for prompt photon production at the LHC. We find that with a constant parton energy loss per collision, $\epsilon=1$ GeV, the pion suppression decreases from $80\%$ at $p_T=5$ GeV to $20\%$ at $p_T=40$ GeV, while for $\epsilon=0.06E$ the suppression increases from $70\%$ at $p_T=5$ GeV to $80\%$ at $p_T=40$ GeV. Prompt photon production in Pb-Pb collisions at the LHC is slightly less suppressed than $\pi^0$ production for the constant energy loss, while for $\epsilon=0.06E$ the suppression decreases more slowly, from $60\%$ at low $p_T$ to $30\%$ at $p_T=40$ GeV. Nuclear shadowing effects are very small, less than $10\%$. At this energy we note that the suppression of prompt photons is similar to the $\pi^0$ case, because at the LHC energy prompt photon production has a $60\%$ contribution from bremsstrahlung processes, which are modified by the energy loss in a similar way to the $\pi^0$ case. We find that $\pi^0$ production is very sensitive to the parton energy loss parameters.
In Fig. 4 we show the ratio of prompt photons to pions at the LHC. We find that for constant energy loss this ratio increases slowly with $p_T$, similar to the p-p case, while for $\epsilon=0.06E$ the ratio increases rapidly, approaching $0.2$ at $p_T=35$ GeV.
{width="75mm"}
{width="75mm"} \[fig4\]
SUMMARY
=======
We have calculated inclusive pion and prompt photon production cross sections in proton-proton and in heavy-ion collisions at RHIC and LHC energies. We have incorporated next-to-leading order contributions, initial-state parton distribution functions in a nucleus, and medium-induced parton energy loss through a modification of the final-state pion and photon fragmentation functions.
We find the nuclear K-factor, which quantifies the higher-order corrections, to be large and $p_T$ dependent, and the shape of the $p_T$ distribution to be insensitive to the choice of scales [@jjs2]. The nuclear shadowing effects are small at RHIC and LHC energies. We show that the $\pi^0$ suppression observed at RHIC can be attributed to parton energy loss. We also present results for the suppression of prompt photon production at RHIC and the LHC, and for the ratio of prompt photon and $\pi^0$ cross sections, which is of relevance for separating different sources of photon production.
We are indebted to P. Aurenche and J. P. Guillet for providing us with the fortran routines for calculating $\pi^0$ and photon production in hadronic collisions and for many useful discussions. We thank D. d’Enterria and M. Tannenbaum for many helpful discussions and suggestions. I.S. is supported in part through U.S. Department of Energy Grants Nos. DE-FG03-93ER40792 and DE-FG02-95ER40906. S.J. is supported in part by the Natural Sciences and Engineering Research Council of Canada. J.J-M. is supported in part by a PDF from BSA and by U.S. Department of Energy under Contract No. DE-AC02-98CH10886.
[9]{}
X. N. Wang, Phys. Rev. [C61]{}, 064910 (2000); M. Gyulassy, P. Levai and I. Vitev, Nucl. Phys. [**A698**]{}, 631 (2002); I. Vitev, Nucl. Phys. [**B594**]{}, 371 (2001).
D. d’Enterria, \[PHENIX Collaboration\], these proceedings; S. Mioduszewski, \[PHENIX Collaboration\], these proceedings.
J. Jalilian-Marian, K. Orginos and I. Sarcevic, Phys. Rev. [**C63**]{}, 041901 (2001); Nucl. Phys. [**A700**]{}, 523 (2002).
A.D. Martin, R.G. Roberts, W.J. Stirling and R.S. Thorne, Eur. Phys. J. [**C14**]{}, 133 (2000).
J. Binnewies, B. A. Kniehl, G. Kramer, Z. Phys. [**C65**]{} (1995); Phys. Rev. [**D52**]{} (1995).
P. Aurenche, M. Fontanaz, J. Ph. Guillet, B. Kniehl, E. Pilon, and M. Werlen, Eur. Phys. J. [**C 13**]{}, 347 (2001).
S. Jeon, J. Jalilian-Marian and I. Sarcevic, nucl-th/0208012.
K. Eskola, V. Kolhinen and P. Ruuskanen, Nucl. Phys. [**B535**]{}, 351 (1998); K. Eskola, V. Kolhinen and C. Salgado, Eur. Phys. J. [**C9**]{}, 61 (1999).
X-N. Wang, Z. Huang and I. Sarcevic, Phys. Rev. Lett. [**77**]{}, 231 (1996).
S. Jeon, J. Jalilian-Marian and I. Sarcevic, hep-ph/0207120.
R. Pisarski, these proceedings.
R. Baier, Y. Dokshitzer, A. Mueller, S. Peigne and D. Schiff, Nucl. Phys. [**B483**]{}, 291 (1997); [*ibid.*]{} [**B484**]{}, 265 (1997); R. Baier, D. Schiff and B.G. Zakharov, Ann. Rev. Nucl. Part. Sci. [**50**]{}, 37 (2000).
[^1]: talk presented by I. Sarcevic.
|
---
author:
- |
<span style="font-variant:small-caps;">Mihailo Stojnic</span>\
\
[School of Industrial Engineering]{}\
[Purdue University, West Lafayette, IN 47907]{}\
[e-mail: [[email protected]]{}]{}
bibliography:
- 'HopBndsRefs.bib'
title: Bounding ground state energy of Hopfield models
---
[**Abstract**]{}
In this paper we look at a class of random optimization problems that arise in the forms typically known as Hopfield models. We consider two scenarios, which we term the positive Hopfield form and the negative Hopfield form. For both of these scenarios we define binary optimization problems that essentially emulate what is typically known as the ground state energy of these models. We then present a simple mechanism that can be used to create a set of rigorous theoretical bounds for these energies. In addition to the purely theoretical bounds, we also present a couple of fast optimization algorithms that can be used to provide solid (albeit somewhat weaker) algorithmic bounds for the ground state energies.
[**Index Terms: Hopfield models; ground-state energy**]{}.
Introduction {#sec:back}
============
We start by looking at what is typically known in mathematical physics as the Hopfield model. The model was popularized in [@Hop82] (or, if viewed in a different context, one could say in [@PasFig78; @Hebb49]). It essentially looks at a Hamiltonian of the following type $$\cH(H,\x)=\sum_{i\neq j}^{n} A_{ij}\x_i\x_j,\label{eq:ham}$$ where $$A_{ij}(H)=\sum_{l=1}^{m} H_{il}H_{lj},\label{eq:hamAij}$$ are the so-called quenched interactions and $H$ is an $m\times n$ matrix that can also be viewed as the matrix of the so-called stored patterns (we will typically consider the scenario where $m$ and $n$ are large and $\frac{m}{n}=\alpha$, where $\alpha$ is a constant independent of $n$; however, many of our results will hold even for fixed $m$ and $n$). Each pattern is essentially a row of the matrix $H$, while $\x$ is a vector from $R^n$ that emulates the neuron states. Typically, one assumes that the patterns are binary and that each neuron can have two states (spins), and hence the elements of the matrix $H$ as well as the elements of the vector $\x$ are typically assumed to be from the set $\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}$. In the physics literature one usually follows the convention of introducing a minus sign in front of the Hamiltonian given in (\[eq:ham\]). Since our main concern is not really the physical interpretation of the given Hamiltonian but rather the mathematical properties of such forms, we will avoid the minus sign and keep the form as in (\[eq:ham\]).
To characterize the behavior of physical interpretations that can be described through the above Hamiltonian one then looks at the partition function $$Z(\beta,H)=\sum_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}e^{\beta\cH(H,\x)},\label{eq:partfun}$$ where $\beta>0$ is what is typically called the inverse temperature. Depending on what one is interested in studying, one can also look at an appropriately scaled $\log$ version of $Z(\beta,H)$ (typically called the free energy) $$f_p(n,\beta,H)=\frac{\log{(Z(\beta,H)})}{\beta n}=\frac{\log{(\sum_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}e^{\beta\cH(H,\x)})}}{\beta n}.\label{eq:logpartfun}$$ Studying the behavior of the partition function or the free energy of the Hopfield model of course has a long history. Since we will not focus on the entire function in this paper, we just briefly mention that a long line of results can be found in, e.g., the excellent references [@PasShchTir94; @ShchTir93; @BarGenGueTan10; @BarGenGueTan12; @Tal98]. In this paper, though, we will focus on studying optimization/algorithmic aspects of $\frac{\log{(Z(\beta,H)})}{\beta n}$. More specifically, we will look at a particular regime $\beta,n\rightarrow\infty$ (typically called the zero-temperature thermodynamic limit regime or, as we will occasionally call it, the ground state regime). In such a regime one has $$\hspace{-.3in}\lim_{\beta,n\rightarrow\infty}f_p(n,\beta,H)=
\lim_{\beta,n\rightarrow\infty}\frac{\log{(Z(\beta,H)})}{\beta n}=\lim_{n\rightarrow\infty}\frac{\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\cH(H,\x)}{n}
=\lim_{n\rightarrow\infty}\frac{\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2}{n},\label{eq:limlogpartfun}$$ which essentially renders the following form (often called the ground state energy) $$\lim_{\beta,n\rightarrow\infty}f_p(n,\beta,H)=\lim_{n\rightarrow\infty}\frac{\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2}{n},\label{eq:posham}$$ which will be one of the main subjects that we will study in this paper. We will refer to the optimization part of (\[eq:posham\]) as the positive Hopfield form.
In addition to this form we will also study its negative counterpart. Namely, instead of the partition function given in (\[eq:partfun\]) one can look at the corresponding partition function of the negative of the Hamiltonian from (\[eq:ham\]) (alternatively, one can say that instead of looking at the partition function defined for positive temperatures/inverse temperatures one can also look at the corresponding partition function defined for negative temperatures/inverse temperatures). In that case (\[eq:partfun\]) becomes $$Z(\beta,H)=\sum_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}e^{-\beta\cH(H,\x)},\label{eq:partfunneg}$$ and if one then looks at the analogue of (\[eq:limlogpartfun\]) one obtains $$\hspace{-.3in}\lim_{\beta,n\rightarrow\infty}f_n(n,\beta,H)=\lim_{\beta,n\rightarrow\infty}\frac{\log{(Z(\beta,H)})}{\beta n}=\lim_{n\rightarrow\infty}\frac{\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}-\cH(H,\x)}{n}
=\lim_{n\rightarrow\infty}\frac{\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2}{n}.\label{eq:limlogpartfunneg}$$ This then ultimately renders the following form which is in a way a negative counterpart to (\[eq:posham\]) $$\lim_{\beta,n\rightarrow\infty}f_n(n,\beta,H)=\lim_{n\rightarrow\infty}\frac{\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2}{n}.\label{eq:negham}$$ We will then correspondingly refer to the optimization part of (\[eq:negham\]) as the negative Hopfield form.
In the following sections we will present a collection of results that relate to behavior of the forms given in (\[eq:posham\]) and (\[eq:negham\]) when they are viewed in a statistical scenario. The results that we will present will essentially correspond to what is called the ground state energies of these models. As it will turn out, in the statistical scenario that we will consider, (\[eq:posham\]) and (\[eq:negham\]) will be almost completely characterized by their corresponding average values $$\lim_{\beta,n\rightarrow\infty}Ef_p(n,\beta,H)=\lim_{n\rightarrow\infty}\frac{E\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2}{n}\label{eq:poshamavg}$$ and $$\lim_{\beta,n\rightarrow\infty}Ef_n(n,\beta,H)=\lim_{n\rightarrow\infty}\frac{E\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2}{n}.\label{eq:neghamavg}$$
Before proceeding further with our presentation we will be a little bit more specific about the organization of the paper. In Section \[sec:poshop\] we will present a few results that relate to the behavior of the positive Hopfield form in a statistical scenario. We will then in Section \[sec:neghop\] present the corresponding results for the negative Hopfield form. In Section \[sec:alghop\] we will present several algorithmic results that will in a way complement our findings from Sections \[sec:poshop\] and \[sec:neghop\]. Finally, in Section \[sec:conc\] we will give a few concluding remarks.
Positive Hopfield form {#sec:poshop}
======================
In this section we will look at the following optimization problem (which clearly is the key component in estimating the ground state energy in the thermodynamic limit) $$\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2.\label{eq:posham1}$$ For a deterministic (fixed, given) $H$ this problem is of course known to be NP-hard (it essentially falls into the class of binary quadratic optimization problems). Instead of looking at the problem in (\[eq:posham1\]) in a deterministic way, i.e. assuming that the matrix $H$ is deterministic, we will look at it in a statistical scenario (this is of course a typical scenario in statistical physics). Within the framework of statistical physics and neural networks the problem in (\[eq:posham1\]) is studied assuming that the stored patterns (essentially the rows of the matrix $H$) are comprised of Bernoulli $\{-1,1\}$ i.i.d. random variables; see, e.g., [@Tal98; @PasShchTir94; @ShchTir93]. While our results will turn out to hold in such a scenario as well, we will present them in a different scenario: namely, we will assume that the elements of the matrix $H$ are i.i.d. standard normals. We will then call the form (\[eq:posham1\]) with Gaussian $H$ the Gaussian positive Hopfield form. On the other hand, we will call the form (\[eq:posham1\]) with Bernoulli $H$ the Bernoulli positive Hopfield form. In the remainder of this section we will look at possible ways to estimate the optimal value of the optimization problem in (\[eq:posham1\]). In the first part below we will introduce a strategy that can be used to obtain an upper bound on the optimal value, and in the second part we will then create a corresponding lower-bounding strategy.
Upper-bounding ground state energy of the positive Hopfield form {#sec:poshopub}
----------------------------------------------------------------
As we just mentioned above, in this section we will look at the problem from (\[eq:posham1\]). In fact, to be a bit more precise, in order to make the exposition as simple as possible, we will look at a slightly modified version of it, given below $$\xi_p=\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2.\label{eq:sqrtposham1}$$ As mentioned above, we will assume that the elements of $H$ are i.i.d. standard normal random variables. Before proceeding further with the analysis of (\[eq:sqrtposham1\]) we will recall several well-known results that relate to Gaussian random variables and the processes they create.
We start by recalling the following results from [@Gordon88], which relate to statistical properties of such Gaussian processes.
([@Gordon88]) \[thm:Gordonmesh1\] Let $X_{ij}$ and $Y_{ij}$, $1\leq i\leq n,1\leq j\leq m$, be two centered Gaussian processes which satisfy the following inequalities for all choices of indices
1. $E(X_{ij}^2)=E(Y_{ij}^2)$
2. $E(X_{ij}X_{ik})\geq E(Y_{ij}Y_{ik})$
3. $E(X_{ij}X_{lk})\leq E(Y_{ij}Y_{lk}), i\neq l$.
Then $$P(\bigcap_{i}\bigcup_{j}(X_{ij}\geq \lambda_{ij}))\leq P(\bigcap_{i}\bigcup_{j}(Y_{ij}\geq \lambda_{ij})).$$
The following simpler version of the above theorem relates to the expected values.
([@Gordon88]) \[thm:Gordonmesh2\] Let $X_{ij}$ and $Y_{ij}$, $1\leq i\leq n,1\leq j\leq m$, be two centered Gaussian processes which satisfy the following inequalities for all choices of indices
1. $E(X_{ij}^2)=E(Y_{ij}^2)$
2. $E(X_{ij}X_{ik})\geq E(Y_{ij}Y_{ik})$
3. $E(X_{ij}X_{lk})\leq E(Y_{ij}Y_{lk}), i\neq l$.
Then $$E(\min_{i}\max_{j}(X_{ij}))\leq E(\min_i\max_j(Y_{ij})).$$
When $m=1$ both of the above theorems simplify to what is called Slepian’s lemma (see, e.g. [@Slep62]). In fact, to be completely chronologically exact, the two above theorems actually extended Slepian’s lemma which, for completeness, we include below in the form of two theorems that are effective analogues of Theorems \[thm:Gordonmesh1\] and \[thm:Gordonmesh2\].
([@Slep62; @Gordon88]) \[thm:Slepian1\] Let $X_{i}$ and $Y_{i}$, $1\leq i\leq n$, be two centered Gaussian processes which satisfy the following inequalities for all choices of indices
1. $E(X_{i}^2)=E(Y_{i}^2)$
2. $E(X_{i}X_{l})\leq E(Y_{i}Y_{l}), i\neq l$.
Then $$P(\bigcap_{i}(X_{i}\geq \lambda_{i}))\leq P(\bigcap_{i}(Y_{i}\geq \lambda_{i}))
\Leftrightarrow P(\bigcup_{i}(X_{i}\geq \lambda_{i}))\leq P(\bigcup_{i}(Y_{i}\geq \lambda_{i})).$$
The following simpler version of the above theorem relates to the expected values.
([@Slep62; @Gordon88]) \[thm:Slepian2\] Let $X_{i}$ and $Y_{i}$, $1\leq i\leq n$, be two centered Gaussian processes which satisfy the following inequalities for all choices of indices
1. $E(X_{i}^2)=E(Y_{i}^2)$
2. $E(X_{i}X_{l})\leq E(Y_{i}Y_{l}), i\neq l$.
Then $$E(\min_{i}(X_{i}))\leq E(\min_i(Y_{i})) \Leftrightarrow E(\max_{i}(X_{i}))\geq E(\max_i(Y_{i})).$$
Now, to create an upper-bounding strategy for the positive Hopfield form we will rely on Theorems \[thm:Slepian1\] and \[thm:Slepian2\]. We start by reformulating the problem in (\[eq:sqrtposham1\]) in the following way (using that $\|H\x\|_2=\max_{\|\y\|_2=1}\y^TH\x$, by the Cauchy-Schwarz inequality) $$\xi_p=\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}\y^TH\x.\label{eq:sqrtposham2}$$ We will first focus on the expected value of $\xi_p$ and then on its more general probabilistic properties. The following is then a direct application of Theorem \[thm:Slepian2\].
Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $\g$ and $\h$ be $m\times 1$ and $n\times 1$ vectors, respectively, with i.i.d. standard normal components. Also, let $g$ be a standard normal random variable. Then $$E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\y^T H\x +\|\x\|_2 g))\leq E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x)).\label{eq:posexplemma}$$\[lemma:posexplemma\]
As mentioned above, the proof is a standard/direct application of Theorem \[thm:Slepian2\]. We will sketch it for completeness. Namely, one starts by defining processes $X_i$ and $Y_i$ in the following way $$Y_i=(\y^{(i)})^T H\x^{(i)} +\|\x^{(i)}\|_2 g\quad X_i=\|\x^{(i)}\|_2\g^T\y^{(i)}+\h^T\x^{(i)}.\label{eq:posexplemmaproof1}$$ Then clearly $$EY_i^2=EX_i^2=2\|\x^{(i)}\|_2^2=2.\label{eq:posexplemmaproof2}$$ One then further has $$\begin{aligned}
EY_iY_l & = & (\y^{(i)})^T\y^{(l)}(\x^{(l)})^T\x^{(i)}+\|\x^{(i)}\|_2\|\x^{(l)}\|_2 \nonumber \\
EX_iX_l & = & (\y^{(i)})^T\y^{(l)}\|\x^{(i)}\|_2\|\x^{(l)}\|_2+(\x^{(l)})^T\x^{(i)}.\label{eq:posexplemmaproof3}\end{aligned}$$ And after a small algebraic transformation $$\begin{aligned}
EY_iY_l-EX_iX_l & = & \|\x^{(i)}\|_2\|\x^{(l)}\|_2(1-(\y^{(i)})^T\y^{(l)})-(\x^{(l)})^T\x^{(i)}(1-(\y^{(i)})^T\y^{(l)}) \nonumber \\
& = & (\|\x^{(i)}\|_2\|\x^{(l)}\|_2-(\x^{(l)})^T\x^{(i)})(1-(\y^{(i)})^T\y^{(l)})\nonumber \\
& \geq & 0.\label{eq:posexplemmaproof4}\end{aligned}$$ Combining (\[eq:posexplemmaproof2\]) and (\[eq:posexplemmaproof4\]) and using results of Theorem \[thm:Slepian2\] one then easily obtains (\[eq:posexplemma\]).
Using results of Lemma \[lemma:posexplemma\] we then have $$\begin{gathered}
E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2) =E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\y^T H\x +\|\x\|_2 g))\\\leq E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x))=E\|\x\|_2\|\g\|_2+E\sum_{i=1}^{n}|\h_i|\leq \sqrt{m}+\sqrt{\frac{2}{\pi}}\sqrt{n}.\label{eq:poshopaftlemma2}\end{gathered}$$ Connecting beginning and end of (\[eq:poshopaftlemma2\]) we finally have an upper bound on $E\xi_p$ from (\[eq:sqrtposham1\]), i.e. $$E\xi_p=E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2) \leq \sqrt{m}+\sqrt{\frac{2}{\pi}}\sqrt{n}=\sqrt{n}(\sqrt{\alpha}+\sqrt{\frac{2}{\pi}}),\label{eq:poshopubexp}$$ or in a scaled (possibly) more convenient form $$\frac{E\xi_p}{\sqrt{n}}=\frac{E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2)}{\sqrt{n}} \leq \sqrt{\alpha}+\sqrt{\frac{2}{\pi}}.\label{eq:poshopubexp1}$$
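Although (\[eq:poshopubexp1\]) is stated for large $n$, the finite-$n$ inequality (\[eq:poshopubexp\]) can easily be checked numerically for small instances by enumerating all $2^n$ vectors $\x$. The following Python sketch performs such a sanity check; the values of $n$, $\alpha$ and the number of trials are arbitrary small choices.

```python
import numpy as np
from itertools import product

def xi_p(H):
    """Exact positive Hopfield form: max of ||H x||_2 over x in {-1/sqrt(n), 1/sqrt(n)}^n."""
    m, n = H.shape
    best = 0.0
    for signs in product([-1.0, 1.0], repeat=n):
        x = np.array(signs) / np.sqrt(n)
        best = max(best, np.linalg.norm(H @ x))
    return best

def check_positive_bound(n=16, alpha=0.5, trials=10, seed=0):
    """Empirical mean of xi_p/sqrt(n) over random Gaussian H versus the upper bound."""
    rng = np.random.default_rng(seed)
    m = int(alpha * n)
    vals = [xi_p(rng.standard_normal((m, n))) / np.sqrt(n) for _ in range(trials)]
    return np.mean(vals), np.sqrt(alpha) + np.sqrt(2.0 / np.pi)

# check_positive_bound() returns an empirical mean that stays below the bound
```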
We now turn to deriving a more general probabilistic result related to $\xi_p$. Before doing so we mention that, since the ground state energies concentrate in the thermodynamic limit (more on a much more general approach in this direction can be found in, e.g., [@GiuGen12]), their expected values considered above are typically the hardest objects to study. In that regard the probabilistic results that we will present below may not be viewed as essential. However, although here, for ease of exposition, we often assume a large $n$ scenario, many of the concepts that we present work just fine even for finite $n$. One should then keep in mind that the strategy we present below has an importance that goes beyond a likelihood-type generalization of the means studied above.
Now, we will present this more general probabilistic estimate of the ground state energy through the following lemma.
Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $\g$ and $\h$ be $m\times 1$ and $n\times 1$ vectors, respectively, with i.i.d. standard normal components. Also, let $g$ be a standard normal random variable and let $\zeta_{\x}$ be a function of $\x$. Then $$P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\y^T H\x +\|\x\|_2 g-\zeta_{\x})\geq 0)\leq
P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x-\zeta_{\x})\geq 0).\label{eq:posproblemma}$$\[lemma:posproblemma\]
The proof is basically the same as the proof of Lemma \[lemma:posexplemma\]. The only difference is that instead of Theorem \[thm:Slepian2\] it relies on Theorem \[thm:Slepian1\].
Let $\zeta_{\x}=-\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2+\xi_p^{(u)}$ with $\epsilon_{5}^{(g)}>0$ being an arbitrarily small constant independent of $n$. We will first look at the right-hand side of the inequality in (\[eq:posproblemma\]). The following is then the probability of interest $$P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x+\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2)\geq \xi_p^{(u)}).\label{eq:probanal0}$$ After solving the maximization over $\x$ and $\y$ one obtains $$\hspace{-.3in}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x+\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2)\geq \xi_p^{(u)})=P(\|\g\|_2+\sum_{i=1}^{n}|\h_i|/\sqrt{n}+\epsilon_{5}^{(g)}\sqrt{n}\geq \xi_p^{(u)}).\label{eq:probanal1}$$ Since $\g$ is a vector of $m$ i.i.d. standard normal variables it is rather trivial that $P(\|\g\|_2<(1+\epsilon_{1}^{(m)})\sqrt{m})\geq 1-e^{-\epsilon_{2}^{(m)} m}$ where $\epsilon_{1}^{(m)}>0$ is an arbitrarily small constant and $\epsilon_{2}^{(m)}$ is a constant dependent on $\epsilon_{1}^{(m)}$ but independent of $n$. Along the same lines, since $\h$ is a vector of $n$ i.i.d. standard normal variables it is rather trivial that $P(\sum_{i=1}^{n}|\h_i|<(1+\epsilon_{1}^{(n)})\sqrt{\frac{2}{\pi}}n)\geq 1-e^{-\epsilon_{2}^{(n)} n}$ where $\epsilon_{1}^{(n)}>0$ is an arbitrarily small constant and $\epsilon_{2}^{(n)}$ is a constant dependent on $\epsilon_{1}^{(n)}$ but independent of $n$. Then from (\[eq:probanal1\]) one obtains $$\begin{gathered}
P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x+\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2)\geq \xi_p^{(u)})\\\leq
(1-e^{-\epsilon_{2}^{(m)} m})(1-e^{-\epsilon_{2}^{(n)} n})
P((1+\epsilon_{1}^{(m)})\sqrt{m}+(1+\epsilon_{1}^{(n)})\sqrt{n}\sqrt{\frac{2}{\pi}}+\epsilon_{5}^{(g)}\sqrt{n}\geq \xi_p^{(u)})
+e^{-\epsilon_{2}^{(m)} m}+e^{-\epsilon_{2}^{(n)} n}.\label{eq:probanal2}\end{gathered}$$ If $$\begin{aligned}
& & (1+\epsilon_{1}^{(m)})\sqrt{m}+(1+\epsilon_{1}^{(n)})\sqrt{n}\sqrt{\frac{2}{\pi}}+\epsilon_{5}^{(g)}\sqrt{n}<\xi_p^{(u)}\nonumber \\
& \Leftrightarrow & (1+\epsilon_{1}^{(m)})\sqrt{\alpha}+(1+\epsilon_{1}^{(n)})\sqrt{\frac{2}{\pi}}+\epsilon_{5}^{(g)}<\frac{\xi_p^{(u)}}{\sqrt{n}},\label{eq:condxipu}\end{aligned}$$ one then has from (\[eq:probanal2\]) $$\lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x+\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2)\geq \xi_p^{(u)})\leq 0.\label{eq:probanal3}$$
We will now look at the left-hand side of the inequality in (\[eq:posproblemma\]). The following is then the probability of interest $$P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\y^TH\x+\|\x\|_2g+\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2-\xi_p^{(u)})\geq 0).\label{eq:leftprobanal0}$$ Since $P(g\geq -\epsilon_{5}^{(g)}\sqrt{n})\geq 1-e^{-\epsilon_{6}^{(g)} n}$ (where $\epsilon_{6}^{(g)}$ is, as all other $\epsilon$’s in this paper are, independent of $n$) from (\[eq:leftprobanal0\]) we have $$\begin{gathered}
P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\y^TH\x+\|\x\|_2g+\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2-\xi_p^{(u)})\geq 0)
\\\geq (1-e^{-\epsilon_{6}^{(g)} n})P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\y^TH\x-\xi_p^{(u)})\geq 0).\label{eq:leftprobanal1}\end{gathered}$$ When $n$ is large from (\[eq:leftprobanal1\]) we then have $$\begin{gathered}
\hspace{-.4in}\lim_{n\rightarrow \infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\y^TH\x+\|\x\|_2g+\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2-\xi_p^{(u)})\geq 0)
\geq \lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\y^TH\x-\xi_p^{(u)})\geq 0)\\
= \lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\y^TH\x)\geq \xi_p^{(u)})
= \lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(\|H\x\|_2)\geq \xi_p^{(u)}).\label{eq:leftprobanal2}\end{gathered}$$ Assuming that (\[eq:condxipu\]) holds, then a combination of (\[eq:posproblemma\]), (\[eq:probanal3\]), and (\[eq:leftprobanal2\]) gives $$\lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(\|H\x\|_2)\geq \xi_p^{(u)})\leq \lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x+\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2)\geq \xi_p^{(u)})\leq 0.\label{eq:leftprobanal3}$$
We summarize our results from this subsection in the following lemma.
Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $n$ be large and let $m=\alpha n$, where $\alpha>0$ is a constant independent of $n$. Let $\xi_p$ be as in (\[eq:sqrtposham1\]). Let all $\epsilon$’s be arbitrarily small constants independent of $n$ and let $\xi_p^{(u)}$ be a scalar such that $$(1+\epsilon_{1}^{(m)})\sqrt{\alpha}+(1+\epsilon_{1}^{(n)})\sqrt{\frac{2}{\pi}}+\epsilon_{5}^{(g)}<\frac{\xi_p^{(u)}}{\sqrt{n}}.\label{eq:condxipuposgenlemma}$$ Then $$\begin{aligned}
& & \lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(\|H\x\|_2)\leq \xi_p^{(u)})\geq 1\nonumber \\
& \Leftrightarrow & \lim_{n\rightarrow\infty}P(\xi_p\leq \xi_p^{(u)})\geq 1 \nonumber \\
& \Leftrightarrow & \lim_{n\rightarrow\infty}P(\xi_p^2\leq (\xi_p^{(u)})^2)\geq 1, \label{eq:posgenproblemma}\end{aligned}$$ and $$\frac{E\xi_p}{\sqrt{n}}=\frac{E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2)}{\sqrt{n}} \leq \sqrt{\alpha}+\sqrt{\frac{2}{\pi}}.\label{eq:posgenexplemma}$$ \[lemma:posgenlemma\]
The proof follows from the above discussion, (\[eq:poshopubexp1\]), and (\[eq:leftprobanal3\]).
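To get a feeling for the bound in (\[eq:posgenexplemma\]), the following minimal Python sketch (added here purely as an illustration; the values of $n$, the number of trials, and the random seed are arbitrary choices of ours) estimates $E\xi_p/\sqrt{n}$ by exhaustive enumeration over $\x$ for a small $n$ and compares it with $\sqrt{\alpha}+\sqrt{2/\pi}$. At such small $n$ one should of course expect only rough agreement with the asymptotic bound.

```python
import numpy as np
from itertools import product

def xi_p(H):
    # exhaustive maximization of ||H x||_2 over x in {-1/sqrt(n), 1/sqrt(n)}^n
    m, n = H.shape
    best = 0.0
    for signs in product((-1.0, 1.0), repeat=n):
        x = np.asarray(signs) / np.sqrt(n)
        best = max(best, np.linalg.norm(H @ x))
    return best

rng = np.random.default_rng(0)
n, alpha, trials = 12, 1.0, 100
m = int(alpha * n)
estimate = np.mean([xi_p(rng.standard_normal((m, n))) for _ in range(trials)]) / np.sqrt(n)
print(estimate, np.sqrt(alpha) + np.sqrt(2.0 / np.pi))  # empirical E(xi_p)/sqrt(n) vs. the asymptotic upper bound
```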
Lower-bounding ground state energy of the positive Hopfield form {#sec:poshoplb}
----------------------------------------------------------------
In this subsection we will create the corresponding lower-bound results. To create a lower-bounding strategy for the positive Hopfield form we will again (as in previous subsection) rely on Theorems \[thm:Slepian1\] and \[thm:Slepian2\]. We start by recalling that the problem of interest is the one in (\[eq:sqrtposham2\]) and we rewrite it in the following way $$\xi_p=\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}\y^TH\x.\label{eq:sqrtposham2lb}$$ As in the previous subsection, we will first focus on the expected value of $\xi_p$ and then on its more general probabilistic properties. The following is then a direct application of Theorem \[thm:Slepian2\].
Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $H^{(1)}$ and $H^{(2)}$ be $m\times m$ and $n\times n$ matrices, respectively, with i.i.d. standard normal components. Then $$E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\y^T H\x))\geq E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\frac{1}{\sqrt{2}}\y^TH^{(1)}\y+\frac{1}{\sqrt{2}}\x^TH^{(2)}\x)).\label{eq:posexplemmalb}$$\[lemma:posexplemmalb\]
As was the case with the corresponding proof in the previous subsection, the proof is a direct application of Theorem \[thm:Slepian2\]. Namely, one starts by defining processes $X_i$ and $Y_i$ in the following way $$Y_i=(\y^{(i)})^T H\x^{(i)} \quad X_i=\frac{1}{\sqrt{2}}(\y^{(i)})^TH^{(1)}\y^{(i)}+\frac{1}{\sqrt{2}}(\x^{(i)})^TH^{(2)}\x^{(i)}.\label{eq:posexplemmaproof1lb}$$ Then clearly $$EY_i^2=EX_i^2=\|\x^{(i)}\|_2^2=1.\label{eq:posexplemmaproof2lb}$$ One then further has $$\begin{aligned}
EY_iY_l & = & (\y^{(i)})^T\y^{(l)}(\x^{(l)})^T\x^{(i)} \nonumber \\
EX_iX_l & = & \frac{1}{2}((\y^{(i)})^T\y^{(l)})^2+\frac{1}{2}((\x^{(l)})^T\x^{(i)})^2.\label{eq:posexplemmaproof3lb}\end{aligned}$$ And after a small algebraic transformation $$\begin{aligned}
EX_iX_l-EY_iY_l & = & \frac{1}{2}((\y^{(i)})^T\y^{(l)})^2+\frac{1}{2}((\x^{(l)})^T\x^{(i)})^2-(\y^{(i)})^T\y^{(l)}(\x^{(l)})^T\x^{(i)} \nonumber \\
& = & \frac{1}{2}((\x^{(l)})^T\x^{(i)}-(\y^{(i)})^T\y^{(l)})^2\nonumber \\
& \geq & 0.\label{eq:posexplemmaproof4lb}\end{aligned}$$ Combining (\[eq:posexplemmaproof2lb\]) and (\[eq:posexplemmaproof4lb\]) and using results of Theorem \[thm:Slepian2\] one then easily obtains (\[eq:posexplemmalb\]).
Using results of Lemma \[lemma:posexplemmalb\] we then have $$\begin{gathered}
E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2) =E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\y^T H\x))\\\geq
E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}\frac{1}{\sqrt{2}}\y^TH^{(1)}\y+\frac{1}{\sqrt{2}}\x^TH^{(2)}\x)\\
=E(\max_{\|\y\|_2=1}\frac{1}{\sqrt{2}}\y^TH^{(1)}\y)+E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\frac{1}{\sqrt{2}}\x^TH^{(2)}\x),\label{eq:poshopaftlemma2lb}\end{gathered}$$ and after scaling $$\begin{gathered}
\frac{E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2)}{\sqrt{n}} =\frac{E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\y^T H\x))}{\sqrt{n}}\\\geq
\frac{E(\max_{\|\y\|_2=1}(\y^TH^{(1)}\y))}{\sqrt{2n}}+\frac{E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\x^TH^{(2)}\x)}{\sqrt{2n}}.\label{eq:poshopaftlemma3lb}\end{gathered}$$ Now, clearly, $\max_{\|\y\|_2=1}(\y^TH^{(1)}\y)$ is the maximum eigenvalue of the symmetrized matrix $\frac{1}{2}(H^{(1)}+(H^{(1)})^T)$, where $H^{(1)}$ is a Gaussian $m\times m$ matrix. From the theory of large Gaussian random matrices one easily has $$\lim_{m\rightarrow \infty}\frac{E(\max_{\|\y\|_2=1}(\y^TH^{(1)}\y))}{\sqrt{2m}}=1.\label{eq:singvallimit}$$ Moreover, using incredible results of [@Parisi80; @Tal06; @Guerra03] one has $$\lim_{n\rightarrow \infty}\frac{E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\x^TH^{(2)}\x)}{\sqrt{2n}}=\xi_{SK}\approx 0.763,\label{eq:skmodel}$$ where $\xi_{SK}$ is the average ground state energy of the so-called Sherrington-Kirkpatrick (SK) model in the thermodynamic limit. More on the SK model can be found in excellent references [@Parisi80; @Tal06; @Guerra03; @SheKir72]. We do mention that the work of [@Parisi80; @Tal06; @Guerra03] indeed settled the thermodynamic behavior of the SK model. However, the characterization of $\xi_{SK}$ in [@Parisi80; @Tal06; @Guerra03] is not explicit and the value we give in (\[eq:skmodel\]) is a numerical estimate (it is quite likely though, that the estimate we give is a bit conservative; the true value is probably more around $0.7632$). Connecting (\[eq:poshopaftlemma3lb\]), (\[eq:singvallimit\]), and (\[eq:skmodel\]) one then has the following lower-bounding limiting counterpart to (\[eq:poshopubexp1\]) $$\lim_{n\rightarrow\infty}\frac{E\xi_p}{\sqrt{n}}=\lim_{n\rightarrow\infty}\frac{E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2)}{\sqrt{n}} \geq \sqrt{\alpha}+\xi_{SK}\approx\sqrt{\alpha}+0.763.\label{eq:poshopubexplb}$$
We now turn to deriving a more general probabilistic result related to $\xi_p$. We will do so through the following lemma (essentially a lower-bounding counterpart to Lemma \[lemma:posproblemma\]).
Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $H^{(1)}$ and $H^{(2)}$ be $m\times m$ and $n\times n$ matrices, respectively, with i.i.d. standard normal components. Let $\zeta$ be a scalar. Then $$P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\y^T H\x-\zeta)\geq 0)\geq
P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\frac{1}{\sqrt{2}}\y^TH^{(1)}\y+\frac{1}{\sqrt{2}}\x^TH^{(2)}\x)-\zeta)\geq 0).\label{eq:posproblemmalb}$$\[lemma:posproblemmalb\]
As in the previous subsection, the proof is basically the same as the proof of Lemma \[lemma:posexplemmalb\]. The only difference is that instead of Theorem \[thm:Slepian2\] it relies on Theorem \[thm:Slepian1\].
Let $\zeta=\xi_p^{(l)}$. We will first look at the right-hand side of the inequality in (\[eq:posproblemmalb\]). The following is then the probability of interest $$P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\frac{1}{\sqrt{2}}\y^TH^{(1)}\y+\frac{1}{\sqrt{2}}\x^TH^{(2)}\x)-\zeta\geq 0).\label{eq:probanal0lb}$$ From the theory of large Gaussian random matrices we then have $$\lim_{n\rightarrow\infty}P(\max_{\|\y\|_2=1}(\frac{1}{\sqrt{2}}\y^TH^{(1)}\y)\geq (1-\epsilon_1^{(m_s)})\sqrt{m})\geq 1,\label{eq:probanal1lb}$$ where $\epsilon_1^{(m_s)}$ is an arbitrarily small constant independent of $n$. The powerful results of [@Parisi80; @Tal06; @Guerra03] also give $$\lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(\frac{1}{\sqrt{2n}}\x^TH^{(2)}\x)\geq (1-\epsilon_1^{(n_{sk})})\xi_{SK})\geq 1,\label{eq:probanal2lb}$$ where $\epsilon_1^{(n_{sk})}$ is an arbitrarily small constant independent of $n$. If one then assumes that $$\xi_p^{(l)}= (1-\epsilon_1^{(m_s)})\sqrt{m}+(1-\epsilon_1^{(n_{sk})})\xi_{SK}\sqrt{n},\label{eq:probanal3lb}$$ then a combination of (\[eq:probanal0lb\]), (\[eq:probanal1lb\]), and (\[eq:probanal2lb\]) gives $$\begin{gathered}
\lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\frac{1}{\sqrt{2}}\y^TH^{(1)}\y+\frac{1}{\sqrt{2}}\x^TH^{(2)}\x)-\zeta\geq 0)\\
=\lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\frac{1}{\sqrt{2}}\y^TH^{(1)}\y+\frac{1}{\sqrt{2}}\x^TH^{(2)}\x)-
((1-\epsilon_1^{(m_s)})\sqrt{m}+(1-\epsilon_1^{(n_{sk})})\xi_{SK}\sqrt{n})\geq 0)\\
\geq \lim_{n\rightarrow\infty}P(\max_{\|\y\|_2=1}(\frac{1}{\sqrt{2}}\y^TH^{(1)}\y)-
(1-\epsilon_1^{(m_s)})\sqrt{m}\geq 0)\\\times \lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(\frac{\x^TH^{(2)}\x}{\sqrt{2}})-(1-\epsilon_1^{(n_{sk})})\xi_{SK}\sqrt{n}\geq 0)\geq 1.\label{eq:probanal4lb}\end{gathered}$$ Assuming that (\[eq:probanal3lb\]) holds then a further combination of (\[eq:posproblemmalb\]) and (\[eq:probanal4lb\]) gives $$\begin{gathered}
\lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2\geq \xi_p^{(l)})=\lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\y^T H\x)\geq \xi_p^{(l)})\\\geq
\lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}(\frac{1}{\sqrt{2}}\y^TH^{(1)}\y+\frac{1}{\sqrt{2}}\x^TH^{(2)}\x)-\xi_p^{(l)}\geq 0)\geq 1.\label{eq:probanal5lb}\end{gathered}$$
We summarize our results from this subsection in the following lemma.
Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $n$ be large and let $m=\alpha n$, where $\alpha>0$ is a constant independent of $n$. Let $\xi_p$ be as in (\[eq:sqrtposham1\]). Let $\xi_{SK}$ be the average ground state energy in the thermodynamic limit of the SK model as defined in (\[eq:skmodel\]). Further, let all $\epsilon$’s be arbitrarily small constants independent of $n$ and let $\xi_p^{(l)}$ be a scalar such that $$\frac{\xi_p^{(l)}}{\sqrt{n}}= (1-\epsilon_1^{(m_s)})\sqrt{\alpha}+(1-\epsilon_1^{(n_{sk})})\xi_{SK}.\label{eq:condxipuposgenlemmalb}$$ Then $$\begin{aligned}
& & \lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(\|H\x\|_2)\geq \xi_p^{(l)})\geq 1\nonumber \\
& \Leftrightarrow & \lim_{n\rightarrow\infty}P(\xi_p\geq \xi_p^{(l)})\geq 1 \nonumber \\
& \Leftrightarrow & \lim_{n\rightarrow\infty}P(\xi_p^2\geq (\xi_p^{(l)})^2)\geq 1, \label{eq:posgenproblemmalb}\end{aligned}$$ and $$\lim_{n\rightarrow\infty}\frac{E\xi_p}{\sqrt{n}}=\lim_{n\rightarrow\infty}\frac{E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2)}{\sqrt{n}} \geq \sqrt{\alpha}+\xi_{SK}\approx \sqrt{\alpha}+0.763.\label{eq:posgenexplemmalb}$$ \[lemma:posgenlemmalb\]
The proof follows from the above discussion, (\[eq:poshopubexplb\]), and (\[eq:probanal5lb\]).
A combination of the results obtained in Lemmas \[lemma:posgenlemma\] and \[lemma:posgenlemmalb\] then gives $$\sqrt{\alpha}+0.763\approx \sqrt{\alpha}+\xi_{SK} \leq \lim_{n\rightarrow\infty}\frac{E\xi_p}{\sqrt{n}}=\lim_{n\rightarrow\infty}\frac{E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2)}{\sqrt{n}} \leq \sqrt{\alpha}+\sqrt{\frac{2}{\pi}}\approx \sqrt{\alpha}+0.798.\label{eq:posublb}$$ Although we don’t go into further analytical considerations as to what happens with the above bounds as $\alpha$ changes, we do mention that as $\alpha\rightarrow 0$ the upper bound is expected to be close to the true value. On the other hand, as $\alpha\rightarrow \infty$ the lower bound is expected to be close to the true value (for more in this direction see, e.g. [@JYZhao11]). A massive set of numerical experiments that we performed (and that we will report on in a forthcoming paper) seems to indicate that this indeed is a trend. In other words, as $\alpha$ grows from zero to $\infty$ the true value of $\lim_{n\rightarrow\infty}\frac{E\xi_p}{\sqrt{n}}$ seems to slowly transition from the leftmost to the rightmost quantity given in (\[eq:posublb\]).
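As a quick numerical illustration of (\[eq:posublb\]) (a small addition of ours, with arbitrarily chosen values of $\alpha$), the snippet below simply tabulates the lower and upper bounds.

```python
import numpy as np

xi_SK = 0.763  # numerical estimate of the SK ground state energy used in eq. (skmodel)
for alpha in (0.25, 0.5, 1.0, 2.0, 4.0):
    lower = np.sqrt(alpha) + xi_SK                  # left-hand side of eq. (posublb)
    upper = np.sqrt(alpha) + np.sqrt(2.0 / np.pi)   # right-hand side of eq. (posublb)
    print(f"alpha={alpha:5.2f}  lower={lower:.4f}  upper={upper:.4f}  gap={upper - lower:.4f}")
```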
Negative Hopfield form {#sec:neghop}
======================
In this section we will look at the following optimization problem (which again clearly is the key component in estimating the corresponding ground state energy of what we call the negative Hopfield model in the thermodynamic limit) $$\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2.\label{eq:negham1}$$ Similarly to what was the case when we studied the positive form in the previous section, for a deterministic (given fixed) $H$ the above problem is of course known to be NP-hard. Of course, this is the same as was the case for (\[eq:posham1\]) as it again essentially falls under the class of binary quadratic optimization problems. Consequently, we will again adopt a strategy similar to the one that we considered when we studied the positive form in the previous section. Namely, instead of looking at the problem in (\[eq:negham1\]) in a deterministic way, i.e. in a way that assumes that matrix $H$ is deterministic, we will look at it in a statistical scenario. Also, as in the previous section, we will assume that the elements of matrix $H$ are i.i.d. standard normals. We will then call the form (\[eq:negham1\]) with Gaussian $H$, the Gaussian negative Hopfield form. On the other hand, we will call the form (\[eq:negham1\]) with Bernoulli $H$, the Bernoulli negative Hopfield form. In the remainder of this section we will look at possible ways to estimate the optimal value of the optimization problem in (\[eq:negham1\]). In fact we will introduce a strategy similar to the one presented in the previous section to create a lower-bound on the optimal value of (\[eq:negham1\]).
Lower-bounding ground state energy of the negative Hopfield form {#sec:neghoplb}
----------------------------------------------------------------
In this subsection we will look at the problem from (\[eq:negham1\]). In fact, to be a bit more precise, as in the previous section, in order to make the exposition as simple as possible, we will look at a slight variant of it given below $$\xi_n=\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2.\label{eq:sqrtnegham1}$$ As mentioned above, we will assume that the elements of $H$ are i.i.d. standard normal random variables. Now, to create a lower-bounding strategy for the negative Hopfield form we will rely on Theorems \[thm:Gordonmesh1\] and \[thm:Gordonmesh2\]. We start by reformulating the problem in (\[eq:sqrtnegham1\]) in the following way $$\xi_n=\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}\y^TH\x.\label{eq:sqrtnegham2}$$ As in the previous section, we will first focus on the expected value of $\xi_n$ and then on its more general probabilistic properties. The following is then a direct application of Theorem \[thm:Gordonmesh2\].
Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $\g$ and $\h$ be $m\times 1$ and $n\times 1$ vectors, respectively, with i.i.d. standard normal components. Also, let $g$ be a standard normal random variable. Then $$E(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\y^T H\x +\|\x\|_2 g))\geq E(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x)).\label{eq:negexplemma}$$\[lemma:negexplemma\]
As mentioned above, the proof is a standard/direct application of Theorem \[thm:Gordonmesh2\]. We will sketch it for completeness. Namely, one starts by defining processes $X_{ij}$ and $Y_{ij}$ in the following way $$Y_{ij}=(\y^{(j)})^T H\x^{(i)} +\|\x^{(i)}\|_2 g\quad X_{ij}=\|\x^{(i)}\|_2\g^T\y^{(j)}+\h^T\x^{(i)}.\label{eq:negexplemmaproof1}$$ Then clearly $$EY_{ij}^2=EX_{ij}^2=2\|\x^{(i)}\|_2^2=2.\label{eq:negexplemmaproof2}$$ One then further has $$\begin{aligned}
EY_{ij}Y_{ik} & = & (\x^{(i)})^T\x^{(i)}(\y^{(k)})^T\y^{(j)}+\|\x^{(i)}\|_2\|\x^{(i)}\|_2 \nonumber \\
EX_{ij}X_{ik} & = & \|\x^{(i)}\|_2\|\x^{(i)}\|_2(\y^{(k)})^T\y^{(j)}+(\x^{(i)})^T\x^{(i)},\label{eq:negexplemmaproof3}\end{aligned}$$ and clearly $$EX_{ij}X_{ik}=EY_{ij}Y_{ik}.\label{eq:negexplemmaproof31}$$ Moreover, $$\begin{aligned}
EY_{ij}Y_{lk} & = & (\y^{(j)})^T\y^{(k)}(\x^{(i)})^T\x^{(l)}+\|\x^{(i)}\|_2\|\x^{(l)}\|_2 \nonumber \\
EX_{ij}X_{lk} & = & (\y^{(j)})^T\y^{(k)}\|\x^{(i)}\|_2\|\x^{(l)}\|_2+(\x^{(i)})^T\x^{(l)}.\label{eq:negexplemmaproof32}\end{aligned}$$ And after a small algebraic transformation $$\begin{aligned}
EY_{ij}Y_{lk}-EX_{ij}X_{lk} & = & \|\x^{(i)}\|_2\|\x^{(l)}\|_2(1-(\y^{(j)})^T\y^{(k)})-(\x^{(i)})^T\x^{(l)}(1-(\y^{(j)})^T\y^{(k)}) \nonumber \\
& = & (\|\x^{(i)}\|_2\|\x^{(l)}\|_2-(\x^{(i)})^T\x^{(l)})(1-(\y^{(j)})^T\y^{(k)})\nonumber \\
& \geq & 0.\label{eq:negexplemmaproof4}\end{aligned}$$ Combining (\[eq:negexplemmaproof2\]), (\[eq:negexplemmaproof31\]), and (\[eq:negexplemmaproof4\]) and using results of Theorem \[thm:Gordonmesh2\] one then easily obtains (\[eq:negexplemma\]).
Using results of Lemma \[lemma:negexplemma\] we then have $$\begin{gathered}
E(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2) =E(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\y^T H\x +\|\x\|_2g))\\ \geq E(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x))=E\|\g\|_2-E\sum_{i=1}^{n}|\h_i|/\sqrt{n}\geq (\sqrt{m}-\frac{1}{4\sqrt{m}})-\sqrt{\frac{2}{\pi}}\sqrt{n}.\label{eq:neghopaftlemma2}\end{gathered}$$ Connecting beginning and end of (\[eq:neghopaftlemma2\]) we finally have a lower bound on $E\xi_n$ from (\[eq:sqrtnegham1\]), i.e. $$E\xi_n=E(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2) \geq (\sqrt{m}-\frac{1}{4\sqrt{m}})-\sqrt{\frac{2}{\pi}}\sqrt{n}=\sqrt{n}(\sqrt{\alpha}-\frac{1}{4\sqrt{mn}}-\sqrt{\frac{2}{\pi}}),\label{eq:neghopubexp}$$ or in a scaled (possibly) more convenient form $$\frac{E\xi_n}{\sqrt{n}}=\frac{E(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2)}{\sqrt{n}} \geq \sqrt{\alpha}-\frac{1}{4\sqrt{mn}}-\sqrt{\frac{2}{\pi}}.\label{eq:neghopubexp1}$$ Of course, the above result will be useful as long as the rightmost quantity is positive.
Following what was done in the previous section we will now turn to deriving a more general probabilistic result related to $\xi_n$ (all the comments related to this type of result that we made in the previous section apply here as well). We will do so through the following lemma.
Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $\g$ and $\h$ be $m\times 1$ and $n\times 1$ vectors, respectively, with i.i.d. standard normal components. Also, let $g$ be a standard normal random variable and let $\zeta_{\x}$ be a function of $\x$. Then $$P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\y^T H\x+\|\x\|_2g-\zeta_{\x})\geq 0)\geq
P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x-\zeta_{\x})\geq 0).\label{eq:negproblemma}$$\[lemma:negproblemma\]
The proof is basically the same as the proof of Lemma \[lemma:negexplemma\]. The only difference is that instead of Theorem \[thm:Gordonmesh2\] it relies on Theorem \[thm:Gordonmesh1\].
Let $\zeta_{\x}=\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2+\xi_n^{(l)}$ with $\epsilon_{5}^{(g)}>0$ being an arbitrarily small constant independent of $n$. We will first look at the right-hand side of the inequality in (\[eq:negproblemma\]). The following is then the probability of interest $$P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x-\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2)\geq \xi_n^{(l)}).\label{eq:negprobanal0}$$ After solving the minimization over $\x$ and the maximization over $\y$ one obtains $$\hspace{-.3in}P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x-\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2)\geq \xi_n^{(l)})=P(\|\g\|_2-\sum_{i=1}^{n}|\h_i|/\sqrt{n}-\epsilon_{5}^{(g)}\sqrt{n}\geq \xi_n^{(l)}).\label{eq:negprobanal1}$$ We recall that as earlier, since $\g$ is a vector of $m$ i.i.d. standard normal variables it is rather trivial that $P(\|\g\|_2>(1-\epsilon_{1}^{(m)})\sqrt{m})\geq 1-e^{-\epsilon_{2}^{(m)} m}$ where $\epsilon_{1}^{(m)}>0$ is an arbitrarily small constant and $\epsilon_{2}^{(m)}$ is a constant dependent on $\epsilon_{1}^{(m)}$ but independent of $n$. Along the same lines, since $\h$ is a vector of $n$ i.i.d. standard normal variables it is rather trivial that $P(\sum_{i=1}^{n}|\h_i|<(1+\epsilon_{1}^{(n)})n\sqrt{\frac{2}{\pi}})\geq 1-e^{-\epsilon_{2}^{(n)} n}$ where $\epsilon_{1}^{(n)}>0$ is an arbitrarily small constant and $\epsilon_{2}^{(n)}$ is a constant dependent on $\epsilon_{1}^{(n)}$ but independent of $n$. Then from (\[eq:negprobanal1\]) one obtains $$\begin{gathered}
P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x-\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2)\geq \xi_n^{(l)})\\\geq
(1-e^{-\epsilon_{2}^{(m)} m})(1-e^{-\epsilon_{2}^{(n)} n})
P((1-\epsilon_{1}^{(m)})\sqrt{m}-(1+\epsilon_{1}^{(n)})\sqrt{n}\sqrt{\frac{2}{\pi}}-\epsilon_{5}^{(g)}\sqrt{n}\geq \xi_n^{(l)}).
\label{eq:negprobanal2}\end{gathered}$$ If $$\begin{aligned}
& & (1-\epsilon_{1}^{(m)})\sqrt{m}-(1+\epsilon_{1}^{(n)})\sqrt{n}\sqrt{\frac{2}{\pi}}-\epsilon_{5}^{(g)}\sqrt{n}>\xi_n^{(l)}\nonumber \\
& \Leftrightarrow & (1-\epsilon_{1}^{(m)})\sqrt{\alpha}-(1+\epsilon_{1}^{(n)})\sqrt{\frac{2}{\pi}}-\epsilon_{5}^{(g)}>\frac{\xi_n^{(l)}}{\sqrt{n}},\label{eq:negcondxipu}\end{aligned}$$ one then has from (\[eq:negprobanal2\]) $$\lim_{n\rightarrow\infty}P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x-\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2)\geq \xi_n^{(l)})\geq 1.\label{eq:negprobanal3}$$
We will now look at the left-hand side of the inequality in (\[eq:negproblemma\]). The following is then the probability of interest $$P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\y^TH\x+\|\x\|_2g-\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2-\xi_n^{(l)})\geq 0).\label{eq:leftnegprobanal0}$$ Since $P(g\geq\epsilon_{5}^{(g)}\sqrt{n})<e^{-\epsilon_{6}^{(g)} n}$ (where $\epsilon_{6}^{(g)}$ is, as all other $\epsilon$’s in this paper are, independent of $n$) from (\[eq:leftnegprobanal0\]) we have $$\begin{gathered}
P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\y^TH\x+\|\x\|_2g-\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2-\xi_n^{(l)})\geq 0)
\\\leq P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\y^TH\x-\xi_n^{(l)})\geq 0)+e^{-\epsilon_{6}^{(g)} n}.\label{eq:leftnegprobanal1}\end{gathered}$$ When $n$ is large from (\[eq:leftnegprobanal1\]) we then have $$\begin{gathered}
\hspace{-.65in}\lim_{n\rightarrow \infty}P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\y^TH\x+\|\x\|_2g-\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2-\xi_n^{(l)})\geq 0)
\leq \lim_{n\rightarrow\infty}P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\y^TH\x-\xi_n^{(l)})\geq 0)\\
= \lim_{n\rightarrow\infty}P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\y^TH\x)\geq \xi_n^{(l)})
= \lim_{n\rightarrow\infty}P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(\|H\x\|_2)\geq \xi_n^{(l)}).\label{eq:leftnegprobanal2}\end{gathered}$$ Assuming that (\[eq:negcondxipu\]) holds, then a combination of (\[eq:negproblemma\]), (\[eq:negprobanal3\]), and (\[eq:leftnegprobanal2\]) gives $$\lim_{n\rightarrow\infty}P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(\|H\x\|_2)\geq \xi_n^{(l)})\geq \lim_{n\rightarrow\infty}P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(\|\x\|_2\g^T\y+\h^T\x-\epsilon_{5}^{(g)}\sqrt{n}\|\x\|_2)\geq \xi_n^{(l)})\geq 1.\label{eq:leftnegprobanal3}$$
We summarize our results from this subsection in the following lemma.
Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $n$ be large and let $m=\alpha n$, where $\alpha>0$ is a constant independent of $n$. Let $\xi_n$ be as in (\[eq:sqrtnegham1\]). Let all $\epsilon$’s be arbitrarily small constants independent of $n$ and let $\xi_n^{(l)}$ be a scalar such that $$(1-\epsilon_{1}^{(m)})\sqrt{\alpha}-(1+\epsilon_{1}^{(n)})\sqrt{\frac{2}{\pi}}-\epsilon_{5}^{(g)}>\frac{\xi_n^{(l)}}{\sqrt{n}}.\label{eq:negcondxipuneggenlemma}$$ Then $$\begin{aligned}
& & \lim_{n\rightarrow\infty}P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(\|H\x\|_2)\geq \xi_n^{(l)})\geq 1\nonumber \\
& \Leftrightarrow & \lim_{n\rightarrow\infty}P(\xi_n\geq \xi_n^{(l)})\geq 1 \nonumber \\
& \Leftrightarrow & \lim_{n\rightarrow\infty}P(\xi_n^2\geq (\xi_n^{(l)})^2)\geq 1, \label{eq:neggenproblemma}\end{aligned}$$ and $$\frac{E\xi_n}{\sqrt{n}}=\frac{E(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n} \|H\x\|_2)}{\sqrt{n}} \geq \sqrt{\alpha}-\frac{1}{4\sqrt{mn}}-\sqrt{\frac{2}{\pi}}.\label{eq:neggenexplemma}$$ \[lemma:neggenlemma\]
The proof follows from the above discussion, (\[eq:neghopubexp1\]), and (\[eq:leftnegprobanal3\]).
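Analogously to the positive case, one can numerically probe the bound in (\[eq:neggenexplemma\]) by exhaustive enumeration for a small $n$. The sketch below is an illustration of ours, with $\alpha$ chosen large enough that the bound is positive and with arbitrary values of $n$, the number of trials, and the seed; at such small $n$ only rough agreement should be expected.

```python
import numpy as np
from itertools import product

def xi_n(H):
    # exhaustive minimization of ||H x||_2 over x in {-1/sqrt(n), 1/sqrt(n)}^n
    m, n = H.shape
    best = np.inf
    for signs in product((-1.0, 1.0), repeat=n):
        x = np.asarray(signs) / np.sqrt(n)
        best = min(best, np.linalg.norm(H @ x))
    return best

rng = np.random.default_rng(0)
n, alpha, trials = 12, 4.0, 50            # alpha chosen large enough that the bound is positive
m = int(alpha * n)
estimate = np.mean([xi_n(rng.standard_normal((m, n))) for _ in range(trials)]) / np.sqrt(n)
bound = np.sqrt(alpha) - 1.0 / (4.0 * np.sqrt(m * n)) - np.sqrt(2.0 / np.pi)
print(estimate, bound)                    # empirical E(xi_n)/sqrt(n) vs. the lower bound
```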
Algorithmic aspects of Hopfield forms {#sec:alghop}
=====================================
In this section we look at a couple of simple algorithms that can be used to approximately solve optimization problems we studied in the previous sections. The algorithms are clearly not the best possible but are fairly simple. Given their simple structure it will turn out to be possible to provide an analytical characterization of the optimal values that they achieve. In return these values would automatically become bounds on the true optimal values. These bounds won’t be as good as those we presented in the previous sections but will in a way be their algorithmic complements. As earlier in the paper, we will start with the positive Hopfield form and then we will present the corresponding results for the negative Hopfield form.
Simple approximate algorithms for the positive Hopfield forms {#sec:alghoppos}
-------------------------------------------------------------
We recall that our goal in this subsection will be to present algorithms that provide an approximate solution to (\[eq:posham1\]) (or alternatively (\[eq:sqrtposham1\])). Before proceeding further we recall that in the previous couple of sections it was a bit easier to focus on (\[eq:sqrtposham1\]) instead of focusing on (\[eq:posham1\]). In this section though, it will be the other way around, i.e. we will focus on the original problem (\[eq:posham1\]) which we restate below $$\xi_p^2=\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2.\label{eq:posham1alg}$$ In this section we will present two simple approximate algorithms that can be used to approximately solve (\[eq:posham1alg\]). We will first present an iterative algorithm that fixes the components of $\x$ one at a time and then an algorithm based on the properties of eigenvalues and eigenvectors of Gaussian random matrices.
### An iterative approximate algorithm for the positive Hopfield forms {#sec:alghopposit}
In this section we present an iterative algorithm that approximately solves (\[eq:posham1alg\]). The algorithm is very simple and probably well known. However, we are not aware of any analytical results related to its quality of performance when applied in the statistical scenario considered in this paper. The analysis is actually fairly simple and we think it would be useful to have such a result recorded. Also, since it will make the exposition a bit easier to present and follow, we will, until the end of this subsection, assume that everything is rescaled so that $\x_i\in\{-1,1\}$. Now, going back to the algorithm: as we just stated, the algorithm is fairly simple. It starts by setting $\x_1=1$ and then fixing $\x_2$ so that $\|H_{:,1:2}\x_{1:2}\|_2^2$ is maximized ($H_{:,1:2}$ stands for the first two columns of $H$ and $\x_{1:2}$ stands for the first two components of $\x$). After $\x_2$ is fixed the algorithm then proceeds by fixing $\x_3$ so that $\|H_{:,1:3}\x_{1:3}\|_2^2$ is maximized ($H_{:,1:3}$ stands for the first three columns of $H$ and $\x_{1:3}$ stands for the first three components of $\x$) and so on until one fixes all components of $\x$.
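The following minimal Python sketch (our own illustration; the function name, the value of $n$, and the seed are arbitrary) implements the iterative procedure just described, in the rescaled convention $\x_i\in\{-1,1\}$.

```python
import numpy as np

def greedy_positive(H):
    """Fix x_1 = 1, then choose each x_k in {-1, 1} so that ||H[:, :k] x[:k]||_2^2 is maximized."""
    m, n = H.shape
    x = np.ones(n)
    partial = H[:, 0].copy()              # running sum H[:, :k] @ x[:k]
    for k in range(1, n):
        # the squared norm grows by 2*x_k*(H[:, k]^T partial) + ||H[:, k]||^2,
        # so the maximizing choice is the sign of the inner product
        x[k] = 1.0 if H[:, k] @ partial >= 0.0 else -1.0
        partial += x[k] * H[:, k]
    return x, partial @ partial           # r_n = ||H x||_2^2

rng = np.random.default_rng(1)
n = 500
H = rng.standard_normal((n, n))           # alpha = 1, i.e. m = n
x_hat, r_n = greedy_positive(H)
print(r_n / n**2)                         # for large n this typically lands in the vicinity of 2.5259 (see below)
```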
To analyze the algorithm we will set $\hat{\x}_1=1$, $r_1=\|H_{:,1}\|_2^2$, and for any $2\leq k\leq n$ $$\begin{aligned}
\hat{\x}_k & = & \mbox{argmax}_{\x_k\in\{-1,1\}}\|H_{:,1:k}\begin{bmatrix} \hat{\x}_{1:k-1}\\ \x_k\end{bmatrix} \|_2^2\nonumber \\
r_k & = & \max_{\x_k\in\{-1,1\}}\|H_{:,1:k}\begin{bmatrix} \hat{\x}_{1:k-1}\\ \x_k\end{bmatrix} \|_2^2=\|H_{:,1:k}\hat{\x}_{1:k}\|_2^2.\label{eq:defrposit}\end{aligned}$$ Our goal will be to compute $Er_n$. We will do so in a recursive fashion. To that end we will start with $Er_2$ $$\begin{gathered}
Er_2=E\max_{\x_2\in\{-1,1\}}\|H_{:,1:2}\begin{bmatrix} \hat{\x}_{1}\\ \x_2\end{bmatrix} \|_2^2
=E\max_{\x_2\in\{-1,1\}}\|H_{:,1:2}\begin{bmatrix} 1\\ \x_2\end{bmatrix} \|_2^2\\=E\|H_{:,1}\|_2^2+2E(\max_{\x_2\in\{-1,1\}}\x_2(H_{:,2}^TH_{:,1}))+E\|H_{:,2}\|_2^2
=Er_1+2\sqrt{\frac{2}{\pi}}E\sqrt{r_1}+m.\label{eq:rposit1}\end{gathered}$$ One can then apply a similar strategy to obtain for a general $2\leq k\leq n$ $$\begin{gathered}
Er_k=E\max_{\x_k\in\{-1,1\}}\|H_{:,1:k}\begin{bmatrix} \hat{\x}_{1:k-1}\\ \x_k\end{bmatrix} \|_2^2
=E\max_{\x_k\in\{-1,1\}}\|\begin{bmatrix}H_{:,1:k-1} & H_{:,k} \end{bmatrix}\begin{bmatrix} \hat{\x}_{1:k-1}\\ \x_k\end{bmatrix} \|_2^2\\=E\|H_{:,1:k-1}\hat{\x}_{1:k-1}\|_2^2+2E(\max_{\x_k\in\{-1,1\}}\x_k(H_{:,k}^TH_{:,1:k-1}\hat{\x}_{1:k-1}))+E\|H_{:,k}\|_2^2
=Er_{k-1}+2\sqrt{\frac{2}{\pi}}E\sqrt{r_{k-1}}+m.\label{eq:rposit2}\end{gathered}$$ To make the exposition easier we will assume that $n$ is large and switch to the limiting behavior of $Er$’s. Assuming concentration of $r_k$’s (for $k$ proportional to $n$) around their mean values gives $\lim_{n\rightarrow\infty} \frac{E\sqrt{r_k}}{n}=\lim_{n\rightarrow\infty} \frac{\sqrt{Er_k}}{n}$. Based on (\[eq:defrposit\]), (\[eq:rposit1\]), and (\[eq:rposit2\]) one can then establish the following recursion for finding $Er_n$ $$\phi_k=\phi_{k-1}+2\sqrt{\frac{2}{\pi}}\sqrt{\phi_{k-1}}+m,\label{eq:rposit3}$$ with $\phi_1=m$ and $\lim_{n\rightarrow\infty}\frac{Er_n}{n^2}=\lim_{n\rightarrow\infty}\frac{\phi_n}{n^2}$. Computing the last limit can then be done to a fairly high precision for any given $m$. We do mention, for example, that for $m=n$ (i.e. $\alpha=1$) one has $$\lim_{n\rightarrow\infty}\frac{Er_n}{n^2}=\lim_{n\rightarrow\infty}\frac{\phi_n}{n^2}\approx 2.5259.\label{eq:numpositlb}$$ One can also compare this result to the results of the previous section to get $$\lim_{n\rightarrow\infty}\frac{E\xi_p}{\sqrt{n}}\geq\lim_{n\rightarrow\infty}\frac{E\sqrt{r_n}}{n}=\lim_{n\rightarrow\infty}\frac{\sqrt{\phi_n}}{n}\approx \sqrt{2.5259}\approx 1.5893.\label{eq:numpositlb1}$$ This is a bit worse than the $1.763$ bound one would get in Subsection \[sec:poshoplb\] when $\alpha=1$ (i.e. $m=n$). However, the bound in (\[eq:numpositlb1\]) is algorithmic, i.e. there is an algorithm (in fact a very simple one with a quadratic complexity) that achieves it, whereas the bound from Subsection \[sec:poshoplb\] is purely theoretical and is given without any polynomial algorithm that achieves it.
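For completeness, the recursion (\[eq:rposit3\]) can be iterated directly; the short sketch below (ours; the value of $n$ used for the iteration is arbitrary, it only needs to be large) recovers the value quoted in (\[eq:numpositlb\]) for $\alpha=1$.

```python
import numpy as np

def phi_ratio(alpha, n=200000):
    """Iterate phi_k = phi_{k-1} + 2*sqrt(2/pi)*sqrt(phi_{k-1}) + m with phi_1 = m, and return phi_n / n^2."""
    m = alpha * n
    phi = m
    c = 2.0 * np.sqrt(2.0 / np.pi)
    for _ in range(n - 1):
        phi += c * np.sqrt(phi) + m
    return phi / n**2

print(phi_ratio(1.0))   # approximately 2.5259 for alpha = 1, matching eq. (numpositlb)
```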
### A dominating eigenvector algorithm for the positive Hopfield forms {#sec:alghopposeig}
In this section we present another simple algorithm that approximately solves (\[eq:posham1alg\]). This algorithm is also probably well known, but we think that it would be a good idea to collect in one place the technical results related to the objective value one can get through it. In that way it will be easier to know how far away from the optimum its performance is.
As the name suggests the algorithm operates on eigenvectors of $H$. The idea is to decompose $H^TH$ through the eigen-decomposition in the following way $$H^TH=Q\Lambda Q^T,\label{eq:eigdec}$$ where obviously $Q$ is an $n\times n$ matrix such that $Q^TQ=I$ and $\Lambda$ is a diagonal matrix of all eigenvalues of matrix $H^TH$. Now, without loss of generality, we will assume that the elements of the diagonal matrix $\Lambda$ (essentially the eigenvalues of $H^TH$) are sorted in decreasing order, i.e. $\Lambda_{1,1}\geq \Lambda_{2,2}\geq \dots\geq \Lambda_{n,n}$. The algorithm then works in the following simple way: take $\x$ as the signs of the components of vector $Q_{:,1}$, i.e. $$\hat{\x}^{(eig)}=\mbox{sign}(Q_{:,1}).\label{eq:eigoptx}$$ Let $$r^{(eig)}=\|H\hat{\x}^{(eig)}\|_2^2=(\mbox{sign}(Q_{:,1}))^TQ\Lambda Q^T\mbox{sign}(Q_{:,1}).\label{eq:eigoptr}$$ One then further has $$r^{(eig)}=(\mbox{sign}(Q_{:,1}))^TQ\Lambda Q^T\mbox{sign}(Q_{:,1})\geq \Lambda_{1,1}(\sum_{i=1}^{n}|Q_{i,1}|)^2.\label{eq:eigoptr1}$$ Using the theory of random Gaussian matrices one then has that all quantities of interest concentrate and $$\lim_{n\rightarrow\infty}\frac{E\Lambda_{1,1}}{n}=(\sqrt{\alpha}+1)^2.\label{eq:eigoptr2}$$ Furthermore, one can think of all components of $Q_{:,1}$ as being standard normal scaled by the norm-2 of the vector they comprise. Since everything concentrates when $n$ is large one then has $$\lim_{n\rightarrow\infty}E(\frac{\sum_{i=1}^{n}|Q_{i,1}|}{\sqrt{n}})^2=(\sqrt{\frac{2}{\pi}})^2=\frac{2}{\pi}.\label{eq:eigoptr3}$$ A combination of (\[eq:eigoptr1\]), (\[eq:eigoptr2\]), and (\[eq:eigoptr3\]) then gives $$\lim_{n\rightarrow\infty}\frac{Er^{(eig)}}{n^2}\geq \lim_{n\rightarrow\infty}\frac{E(\Lambda_{1,1}(\sum_{i=1}^{n}|Q_{i,1}|)^2)}{n^2}=(\sqrt{\alpha}+1)^2\frac{2}{\pi}.\label{eq:eigoptr4}$$ One can also compare this result to the results of the previous section. For example, for $\alpha=1$ one has $$\lim_{n\rightarrow\infty}\frac{E\xi_p}{\sqrt{n}}\geq\lim_{n\rightarrow\infty}\frac{E\sqrt{r^{(eig)}}}{n}\geq \sqrt{\frac{8}{\pi}}\approx \sqrt{2.5465}\approx 1.5958.\label{eq:numposeiglb1}$$ This is again somewhat worse than the $1.763$ bound one would get in Subsection \[sec:poshoplb\] when $\alpha=1$ (i.e. $m=n$) but a bit better than what one can get through the mechanism of the previous subsection and ultimately (\[eq:numpositlb1\]). However, the bound in (\[eq:numposeiglb1\]) is again algorithmic. The corresponding algorithm though is a bit more complex than the one from the previous subsection since it involves performing the eigen-decomposition of $H^TH$. However, we should mention that the value given in (\[eq:numposeiglb1\]) is substantially lower than what the algorithm will indeed give in practice. The reason is of course the cross-correlation of components of different eigenvectors and the fact that the cross products between $\hat{\x}^{(eig)}$ and the vectors $Q_{:,i},2\leq i\leq n$, coupled with the corresponding eigenvalues, will also contribute to the true value of $r^{(eig)}$. To obtain the exact value of $\lim_{n\rightarrow\infty}\frac{Er^{(eig)}}{n^2}$ one would have to account for this as well. This is not so easy and we do not pursue it further. However, practically speaking we do mention that, roughly, one can expect that $\lim_{n\rightarrow\infty}\frac{Er^{(eig)}}{n^2}\approx 2.9$ or stated differently $\lim_{n\rightarrow\infty}\frac{E\sqrt{r^{(eig)}}}{n}\approx 1.7$.
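A minimal sketch of the eigenvector-based choice (\[eq:eigoptx\]) is given below (again our own illustration, with arbitrary $n$ and seed); as just discussed, its empirical objective value is typically noticeably larger than the analytical estimate (\[eq:eigoptr4\]).

```python
import numpy as np

def eig_sign_positive(H):
    """Take x as the signs of the eigenvector of H^T H with the largest eigenvalue (eq. (eigoptx))."""
    eigvals, Q = np.linalg.eigh(H.T @ H)   # eigh returns eigenvalues in ascending order
    x = np.sign(Q[:, -1])
    x[x == 0] = 1.0                        # guard against (unlikely) exact zeros
    return x, np.linalg.norm(H @ x) ** 2   # r^(eig) = ||H x||_2^2

rng = np.random.default_rng(2)
n = 500
H = rng.standard_normal((n, n))            # alpha = 1
x_hat, r_eig = eig_sign_positive(H)
print(r_eig / n**2)                        # in practice roughly 2.9, above the 8/pi analytical estimate
```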
On the other hand, to be completely fair to the algorithm given in the previous subsection, we should mention that its various adaptations are possible as well. For example, among the simplest ones would be to also keep sorting the columns of $H$ and in each step instead of choosing the first next column choose the column with the largest norm-2. Evaluating the performance of such an algorithm precisely is again not super easy. We do mention from practical experience that it provides a similar objective value as does the eigenvector mechanism presented in this subsection.
A simple approximate algorithm for the negative Hopfield forms {#sec:alghopneg}
--------------------------------------------------------------
We recall that our goal in this subsection will be to present algorithms that provide an approximate solution to (\[eq:negham1\]) (or alternatively (\[eq:sqrtnegham1\])). Before proceeding further we note that in Section \[sec:neghoplb\] it was a bit easier to focus on (\[eq:sqrtnegham1\]) instead of focusing on (\[eq:negham1\]). In this section though, it will be the other way around, i.e. we will focus on the original problem (\[eq:negham1\]) which we restate below $$\xi_n^2=\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2.\label{eq:negham1alg}$$ Below we will present a simple approximate algorithm that can be used to approximately solve (\[eq:negham1alg\]). The algorithm will be a negative-form counterpart to the iterative algorithm given in Section \[sec:alghopposit\] for the positive Hopfield form.
### An iterative approximate algorithm for the negative Hopfield forms {#sec:alghopnegit}
As mentioned above, in this section we present a counterpart to the iterative algorithm given in Subsection \[sec:alghopposit\]. Clearly, the algorithm that we will present here approximately solves (\[eq:negham1alg\]). In fact as when we looked at the positive form we will again assume that everything is scaled so that $\x_i\in\{-1,1\}$. In fact, the algorithm is almost the same as the algorithm from Subsection \[sec:alghopposit\]: it starts by setting $\x_1=1$ and then fixing $\x_2$ so that $\|H_{:,1:2}\x_{1:2}\|_2^2$ is now *minimized* (as in Subsection \[sec:alghopposit\], $H_{:,1:2}$ stands for the first two columns of $H$ and $\x_{1:2}$ stands for the first two components of $\x$). After $\x_2$ is fixed the algorithm then proceeds by fixing $\x_3$ so that $\|H_{:,1:3}\x_{1:3}\|_2^2$ is *minimized* ($H_{:,1:3}$ stands for the first three columns of $H$ and $\x_{1:3}$ stands for the first three components of $\x$) and so on until one fixes all components of $\x$.
Similarly to what we did when we analyzed the positive counterpart, to analyze the algorithm we will set $\hat{\x}_1=1$, $r_1^{(neg)}=\|H_{:,1}\|_2^2$, and for any $2\leq k\leq n$ $$\begin{aligned}
\hat{\x}_k & = & \mbox{argmin}_{\x_k\in\{-1,1\}}\|H_{:,1:k}\begin{bmatrix} \hat{\x}_{1:k-1}\\ \x_k\end{bmatrix} \|_2^2\nonumber \\
r_k^{(neg)} & = & \min_{\x_k\in\{-1,1\}}\|H_{:,1:k}\begin{bmatrix} \hat{\x}_{1:k-1}\\ \x_k\end{bmatrix} \|_2^2=\|H_{:,1:k}\hat{\x}_{1:k}\|_2^2.\label{eq:defrnegit}\end{aligned}$$ Our goal will be to compute $Er_n^{(neg)}$. We will do so in a recursive fashion. To that end we will start with $Er_2^{(neg)}$ $$\begin{gathered}
Er_2^{(neg)}=E\min_{\x_2\in\{-1,1\}}\|H_{:,1:2}\begin{bmatrix} \hat{\x}_{1}\\ \x_2\end{bmatrix} \|_2^2
=E\min_{\x_2\in\{-1,1\}}\|H_{:,1:2}\begin{bmatrix} 1\\ \x_2\end{bmatrix} \|_2^2\\=E\|H_{:,1}\|_2^2+2E(\min_{\x_2\in\{-1,1\}}\x_2(H_{:,2}^TH_{:,1}))+E\|H_{:,2}\|_2^2
=Er_1^{(neg)}-2\sqrt{\frac{2}{\pi}}E\sqrt{r_1^{(neg)}}+m.\label{eq:rnegit1}\end{gathered}$$ One can then apply a similar strategy to obtain for a general $2\leq k\leq n$ $$\begin{gathered}
Er_k^{(neg)}=E\min_{\x_k\in\{-1,1\}}\|H_{:,1:k}\begin{bmatrix} \hat{\x}_{1:k-1}\\ \x_k\end{bmatrix} \|_2^2
=E\min_{\x_k\in\{-1,1\}}\|\begin{bmatrix}H_{:,1:k-1} & H_{:,k} \end{bmatrix}\begin{bmatrix} \hat{\x}_{1:k-1}\\ \x_k\end{bmatrix} \|_2^2\\=E\|H_{:,1:k-1}\hat{\x}_{1:k-1}\|_2^2+2E(\min_{\x_k\in\{-1,1\}}\x_k(H_{:,k}^TH_{:,1:k-1}\hat{\x}_{1:k-1}))+E\|H_{:,k}\|_2^2
=Er_{k-1}^{(neg)}-2\sqrt{\frac{2}{\pi}}E\sqrt{r_{k-1}^{(neg)}}+m.\label{eq:rnegit2}\end{gathered}$$ As earlier, to make the exposition easier we will assume that $n$ is large and switch to the limiting behavior of $Er^{(neg)}$’s. Again, assuming concentration of $r_k^{(neg)}$’s (for $k$ proportional to $n$) around their mean values will then give $\lim_{n\rightarrow\infty} \frac{E\sqrt{r_k^{(neg)}}}{n}=\lim_{n\rightarrow\infty} \frac{\sqrt{Er_k^{(neg)}}}{n}$. One then based on (\[eq:defrnegit\]), (\[eq:rnegit1\]), and (\[eq:rnegit2\]) can establish the following recursion for finding $Er_n^{(neg)}$ $$\phi_k=\phi_{k-1}-2\sqrt{\frac{2}{\pi}}\sqrt{\phi_{k-1}}+m,\label{eq:rnegit3}$$ with $\phi_1=m$ and $\lim_{n\rightarrow\infty}\frac{Er_n^{(neg)}}{n^2}=\lim_{n\rightarrow\infty}\frac{\phi_n}{n^2}$. Computing the last limit can then be done to a fairly high precision for any different $m$. Following the example we chose in the positive case, we note that for $m=n$ (i.e. $\alpha=1$) one has $$\lim_{n\rightarrow\infty}\frac{Er_n^{(neg)}}{n^2}=\lim_{n\rightarrow\infty}\frac{\phi_n}{n^2}\approx .3072.\label{eq:numnegitlb}$$ One can also compare this result to the results of the previous section to get $$\lim_{n\rightarrow\infty}\frac{E\xi_n}{\sqrt{n}}\geq\lim_{n\rightarrow\infty}\frac{E\sqrt{r_n^{(neg)}}}{n}=\lim_{n\rightarrow\infty}\frac{\sqrt{\phi_n}}{n}\approx \sqrt{0.3072}\approx 0.55.\label{eq:numnegitlb1}$$ This is substantially away from the lower bound $0.2021$ one would get in Subsection \[sec:neghoplb\] when $\alpha=1$ (i.e. $m=n$)). However, as was the case with the positive form in earlier sections, the bound given above is algorithmic.
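As in the positive case, the recursion (\[eq:rnegit3\]) can be iterated directly; the short sketch below (ours, with an arbitrary choice of $n$) recovers the value quoted in (\[eq:numnegitlb\]) for $\alpha=1$.

```python
import numpy as np

def phi_ratio_neg(alpha, n=200000):
    """Iterate phi_k = phi_{k-1} - 2*sqrt(2/pi)*sqrt(phi_{k-1}) + m with phi_1 = m, and return phi_n / n^2."""
    m = alpha * n
    phi = m
    c = 2.0 * np.sqrt(2.0 / np.pi)
    for _ in range(n - 1):
        phi += m - c * np.sqrt(phi)
    return phi / n**2

print(phi_ratio_neg(1.0))   # approximately 0.3072 for alpha = 1, matching eq. (numnegitlb)
```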
Conclusion {#sec:conc}
==========
In this paper we looked at classic Hopfield forms. We first viewed the standard positive Hopfield form and then defined its negative counterpart. We were interested in their behavior in the zero-temperature limit which essentially amounts to the behavior of their ground state energies. We then sketched mechanisms that can be used to provide upper and lower bounds for the ground state energies of both models.
To be a bit more specific, we first provided purely theoretical bounds on the expected values of the ground state energy of the positive Hopfield model. These bounds appear to be fairly close to each other (moreover, the upper bounds actually don’t even require the thermodynamic regime). In addition to that we also presented two very simple (certainly well known) algorithms that can be used to approximately determine the ground state energy of the positive Hopfield model. For both algorithms we then sketched how one can determine their performance guarantees. As it turned out, these algorithms provide fairly good approximations (while the analytical results that we provided demonstrated that they are in certain scenarios about $10\%$ away from the optimal values, in practice, in these same scenarios, their objective values are not more than $5\%$ away from the optimal value).
We then translated our results related to the positive Hopfield form to the case of the negative Hopfield form. We again targeted the ground state regime and provided a theoretical lower bound for the expected behavior of the ground state energy. We also showed how one of the algorithms that we designed for the positive form can easily be adapted to fit the negative form. This enabled us to get an algorithmic upper bound for the ground state energy of the negative form. While the bounds we obtained for the negative form are not as good as the ones we obtained for the positive form, they are obtained in a very simple manner and provide in a way a quick assessment of how the ground state energies of these forms behave.
For several results that relate to the behavior of the expected ground state energies, we also showed that the corresponding (more general) probabilistic results hold in the thermodynamic limit.
Moreover, the purely theoretical results we presented are for the so-called Gaussian Hopfield models. Often though a binary Hopfield model may be a more preferred option. However, all results that we presented can easily be extended to the case of binary Hopfield models (and for that matter to an array of other statistical models as well). Proving that is not that hard. In fact there are many ways it can be done, but they typically boil down to repetitive use of the central limit theorem. For example, a particularly simple and elegant approach would be the one of Lindeberg [@Lindeberg22]. Adapting our exposition to fit into the framework of the Lindeberg principle is relatively easy and, in fact, if one uses the elegant approach of [@Chatterjee06], pretty much routine. Since we did not create these techniques we chose not to do these routine generalizations. However, to make sure that the interested reader has a full grasp of the generality of the results presented here, we do emphasize again that pretty much any distribution that can be pushed through the Lindeberg principle would work in place of the Gaussian one that we used.
We should also mention that the algorithms we presented are simple and certainly not the best known. One can design algorithms that in practice achieve a much better performance for both Hopfield forms. However, since their performance analysis is not easy we leave their detailed exposition for an algorithmic presentation. We do mention though, that our idea here was not to introduce the best possible algorithms but rather to show how one can use the simple ones to get results related to the behavior of the optimal objective value.
It is also important to emphasize that in this paper we presented a collection of very simple observations. One can improve many of the results that we presented here, but at the expense of introducing a more complicated theory. We will present results in many such directions elsewhere. We do recall though, that in this paper we were mostly concerned with the behavior of the ground state energies. The vast majority of our results can be translated to characterize the behavior of the free energy when viewed at any temperature. However, as mentioned above, this requires a far more detailed exposition and we will present it elsewhere.
---
abstract: 'In this paper we propose a generalization of the counterflowing-streams modelling of self-driven objects. Our modelling interpolates between two opposite situations: (I) a completely random scenario of independent particles of negligible excluded volume and (II) a regime of rigid interacting objects. Considering the system divided into spatial cells which have a maximum occupation level, the probability of any given object moving to a neighboring cell depends on the occupation level of that cell according to a Fermi-Dirac-like distribution, which involves a parameter that governs the randomness of the system. We show that at a certain critical value of this randomness the system abruptly transits from an increasingly mobile scenario to a clogged state. We numerically describe the structure of this transition by using coupled partial differential equations (PDE) and Monte Carlo (MC) simulations, which are in good agreement.'
address: 'Institute of Physics, Federal University of Rio Grande do Sul, Porto Alegre - RS, 91501-970, Brazil. '
author:
- 'Roberto da Silva, Eduardo V. Stock'
title: 'Mobile-clogging transition in a Fermi like model of the counterflowing streams'
---
The directed motion of particles in random environments due to impurities can be observed in many contexts in Physics and in a large number of applications, such as the capture/decapture of electrons in micro, nano, and meso devices [@Machlup1954; @Kirton1989; @noisesemiconductors], the erratic, but also directed, motion of molecules in chromatographic columns [@Cromatograph], and many others. However, looking at the interaction among the particles, other situations can be explored. Among these phenomena we can focus on the ones in which two different species of particles move against each other.
The patterns arising from counterflowing streams of particles can be studied considering apparently very different systems, such as pedestrian dynamics [@Pinho2016] and the motion of charged colloids [@Vissers2011; @VissersPRL2011], which suggests more similarities than we can imagine between the micro and macro systems in this kind of modelling. For this reason, the formation of straight lanes and distillation, originating from the complex emergent process of self-propelled and/or field-directed objects/particles, have raised a lot of interesting questions in the context of statistical mechanics and the physics of stochastic processes modelled by Monte Carlo (MC) simulations or Partial Differential Equations (PDE) [@Stock2017; @rdasilva2015].
Similarly, systems that collapse due to clogging effects, associated with the typical concentration phenomena of objects under counterflowing streams, lead to a fundamental question: what is the importance of the environmental randomness, i.e., the concentration of impurities (micro) or obstacles (macro) in the medium, as opposed to the importance of the size of the mobile objects, for the occurrence of such clogging/jamming phenomena?
In order to understand this interesting problem, in this work we propose to consider a general modelling of counterflowing streams of objects interpolating between two very distinct/extreme situations. **Situation I**: Objects of negligible size move in environments with many impurities. The objects interact only with the impurities, since their excluded volume is not an important parameter. In this situation the system is entirely random: a particle performs a biased random walk that goes to the next cell with probability 1/2 or remains stopped at the same cell with the same probability. It works as if this randomness of the motion were due to the resistance offered by the impurities and not to the interaction among the objects. This randomness is not affected by the size of the particles. **Situation II**: The second extreme situation considers hard objects for which the excluded volume is important, and whose stream occurs in an environment without impurities. It works as if rigid bodies interact only with each other in a clean environment. In this situation the system is essentially deterministic and an object does not move to a cell that does not have enough space to allocate it.
So our intention in this study is to explore the transition between these two very different scenarios by changing only one external parameter: $\alpha $. In our modelling, by changing $\alpha $, we want to be able to continuously map the system from situation I, which corresponds to $\alpha =0$, to situation II, corresponding to $\alpha \rightarrow \infty $. Thus, before mathematically formulating the problem, we need to consider some important motivations.
Thus, let us consider a simple model of two species of objects, here denoted by $A$ and $B$, moving in opposite directions in a ring. In a good analogy, we can look at the two species as oppositely charged colloids. The application of a strong electric field along the longitudinal direction of the ring would make species $A$ drift, let us say, in a counter-clockwise fashion while species $B$ would drift in the opposite direction. From a slightly different point of view, we can picture objects entering and leaving both extremities of a thin tape (tube) at a constant rate (periodic boundary conditions). This can also mimic, for example, typical situations of pedestrians walking in subway corridors.
Additionally, we establish that all the cells have the same maximum occupation level, denoted by $\sigma_{\max}$. In situation II, which corresponds to $\alpha \rightarrow \infty $, $\sigma _{\max }$ means that no more than $\sigma _{\max }$ objects are allowed per cell and the probability of an object occupying the next cell is $p=1$ if the occupation of this cell is smaller than $\sigma_{\max }$, since there are no obstacles in the environment in this case. On the other hand, in this same limit, $p=1/2$ when the occupation of the cell is exactly $\sigma _{\max }$, and $p=0$ otherwise.
The opposite situation ($\alpha \rightarrow 0$) works as if an infinite number of objects were allowed per cell; thus the motion does not depend on $\sigma_{\max }$, and $p=1/2$ occurs regardless of the concentration of the next cell, since the randomness is only due to the distribution of random obstacles in the environment, and the interaction among the objects is not considered in this limit.
An alternative interpretation can be taken from another direction: $\alpha $ can be imagined as a field that drives the oppositely charged objects. For low $\alpha $ the environment is important due to the low momenta of the objects. On the other hand, when $\alpha $ is large the objects have high momenta and the environment effects are not important enough to change the velocities of these objects. In this case the interaction among the objects has an important role.
Therefore, for the sake of simplicity, we illustrate our idea in Fig. \[Fig:all\_plots\]. Two species of particles drift in counterflow in an annular system composed of $L$ cells, each one with the same limiting factor $\sigma_{\max }$, regarding the distinct situations previously described.
![Section of particles under counterflowing streams in a ring topology. The two distinct (extreme) regimes are illustrated: $\protect\alpha \rightarrow
0 $, showing that objects are independent of each other but interact with the stochastic environment, and $\protect\alpha \rightarrow \infty $ where interacting rigid bodies with high momenta ignore the impurities of the environment (represented by the stars). Intermediate values of $\protect\alpha $ correspond to some situation between these two extremities. []{data-label="Fig:all_plots"}](union_of_all_figures.png){width="1.0\columnwidth"}
Considering that the concentration of particles (of whatever species) in the following cell affects its locomotion, the concentration of target objects, according to our prescription, can be written through the recurrence relation $A_{m,n}=p_{m-1,m}^{(n-1)}A_{m-1,n-1}+p_{m,m}^{(n-1)}A_{m,n-1}$, where $A_{j,k}$ is the density of particles of species $A$ in cell $j$ at time $k$ and, by construction, $p_{m,m}^{(n-1)}+p_{m,m+1}^{(n-1)}=1$, since here $p_{i,j}^{(n)}$ denotes the probability of a particle in cell $i$ (position $x=i\varepsilon$) transiting to cell $j$ (position $x=j\varepsilon $) at $t=n\tau $, where $\tau $ is the time needed to perform such a transition and $\varepsilon $ is the length of the step.
Combining the equations, one has$$A_{m,n}-A_{m,n-1}=p_{m-1,m}^{(n-1)}A_{m-1,n-1}-p_{m,m+1}^{(n-1)}A_{m,n-1}$$
Since the occupation is bounded by a maximum level, it is natural to invoke the idea of the Fermi level in the context of conductor/semiconductor models; here it plays a similar role, with $\alpha ^{-1}$ acting as the temperature. If the desired cell has a number of objects above this level, the occupation probability behaves according to the Fermi-Dirac occupation function: $$p_{i,j}^{(n)}=(1+\exp [\alpha (\sigma _{j,n}-\sigma _{\max })])^{-1}\text{,}$$where $\sigma _{j,n}=A_{j,n}+B_{j,n}$ denotes the total number of objects in cell $j$ at time $n$, i.e., the sum of the number of objects of the target species and the number of objects of the opposite species, $B_{j,n}$. The choice of the Fermi-Dirac function to model the stochastic process of cell occupation is very natural at this point: if the concentration of the arrival cell, $A_{j,n}+B_{j,n}$, is greater than $\sigma _{\max }$, the transition is hampered; otherwise the transition is facilitated. How much it is hampered or facilitated depends only on $\alpha $, which is not exactly an inverse temperature; nevertheless, the matching between the Fermi-Dirac distribution and our desired mapping is surprisingly meaningful, since the objects are able to occupy the cell even when $A_{j,n}+B_{j,n}>\sigma _{\max }$. The only case where this does not occur is $\alpha \rightarrow \infty $, for which $p_{i,j}^{(n)}=1$ if $\sigma _{j,n}=A_{j,n}+B_{j,n}<\sigma _{\max }$, $p_{i,j}^{(n)}=1/2$ if $\sigma _{j,n}=\sigma _{\max }$, and $p_{i,j}^{(n)}=0$ when $\sigma _{j,n}>\sigma _{\max }$. When $\alpha \rightarrow 0$, the low-field regime, $p_{i,j}^{(n)}=1/2$ means that objects do not interact with other objects, only with the environment.
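As an illustration, the snippet below (a minimal Python sketch written for this exposition, not taken from the original code) evaluates the transition probability above and checks its two limiting regimes; the function and variable names are our own choices.

```python
import numpy as np

def transition_probability(sigma_target, sigma_max, alpha):
    """Fermi-Dirac-like probability of moving into a cell with total
    occupation `sigma_target`, given the limiting factor `sigma_max`."""
    return 1.0 / (1.0 + np.exp(alpha * (sigma_target - sigma_max)))

sigma_max = 1.0
for sigma in (0.0, 1.0, 2.0):
    # alpha -> 0: probability tends to 1/2 regardless of the occupation.
    p_low = transition_probability(sigma, sigma_max, alpha=1e-6)
    # large alpha: step-like behavior (1 below sigma_max, 1/2 at it, 0 above).
    p_high = transition_probability(sigma, sigma_max, alpha=50.0)
    print(f"sigma={sigma}: p(alpha->0)={p_low:.3f}, p(alpha large)={p_high:.3f}")
```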
Thus, for the objects $A$ one has the recurrence relation $A_{m,n}=A_{m,n-1}+a_{m-1,n-1}-a_{m,n-1}$ and, similarly, for the objects $B$ the relation $B_{m,n}=B_{m,n-1}+b_{m+1,n-1}-b_{m,n-1}$, with $a_{m,n}\equiv A_{m,n}/\left[ 1+e^{\alpha(A_{m+1,n}+B_{m+1,n}-\sigma_{\max} )}\right] $ and $b_{m,n}\equiv B_{m,n}/[1+e^{\alpha(A_{m-1,n}+B_{m-1,n}-\sigma_{\max} )}]$.
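To make the update rule concrete, here is a minimal numerical integration of these recurrence relations on a ring (our own illustrative Python sketch; the parameter values are hypothetical, and the single-defect initial condition anticipates the one described later in the text).

```python
import numpy as np

def integrate_rec(L=64, steps=1000, alpha=0.3, sigma_max=1.0):
    """Iterate the coupled recurrence relations for species A and B
    on a ring of L cells with periodic boundary conditions."""
    A = np.ones(L)
    B = np.ones(L)
    A[L // 2] = B[L // 2] = 0.0  # single empty cell to seed the dynamics
    for _ in range(steps):
        sigma = A + B
        # Fermi-Dirac factor evaluated at the arrival cell of each species:
        # A moves to the right (cell m+1), B moves to the left (cell m-1).
        a = A / (1.0 + np.exp(alpha * (np.roll(sigma, -1) - sigma_max)))
        b = B / (1.0 + np.exp(alpha * (np.roll(sigma, +1) - sigma_max)))
        A = A + np.roll(a, 1) - a   # gain from cell m-1, loss from cell m
        B = B + np.roll(b, -1) - b  # gain from cell m+1, loss from cell m
    return A, B

A, B = integrate_rec(alpha=0.3)
print("mean densities:", A.mean(), B.mean())
```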
We can of course solve the recurrence relations directly, as we indeed do in this paper, but we also analyze a differential equation. Considering the situation $A_{m+1,n}+B_{m+1,n}\approx A_{m-1,n}+B_{m-1,n}\approx A_{m,n}+B_{m,n}$, one arrives at a system of two coupled equations: $$\frac{\partial A(B)(x,t)}{\partial t}=+(-)C\frac{\partial }{\partial x}\left[
\frac{A(B)(x,t)}{1+e^{\alpha (A(x,t)+B(x,t)-\sigma )}}\right] \label{Eq:EDP}$$with $C=\lim_{\tau ,\varepsilon \rightarrow 0}\frac{\varepsilon }{\tau }$. It is important to notice that when $\alpha \rightarrow 0$ the equations become uncoupled. In this situation, the solutions are expected to satisfy $\frac{\partial A(x,t)}{\partial t}=-C\frac{\partial A(x,t)}{\partial x}$ and $\frac{\partial B(x,t)}{\partial t}=C\frac{\partial B(x,t)}{\partial x}$. For $C=1$, for example, under periodic boundary conditions $A(x=L,t)=A(x=0,t)$ and $B(x=L,t)=B(x=0,t)$, it is easy to verify that the only possibility is $A(x,t)=1$ and $B(x,t)=1$ for the initial conditions $A(x,t=0)=1$ and $B(x,t=0)=1$.
In the discrete formulation this means $p_{i,j}^{(n)}=1/2$, which corresponds to ballistic behavior of the objects: if one considers $A(x,t=0)=L\delta _{x,0}$ and $B(x,t=0)=L\delta _{x,L}$, the solutions of the recurrence relations during the first round in the ring are expected to behave as $A_{m,n}\approx L\binom{n-1}{m-1}2^{-n}$ and $B_{m,n}\approx L\binom{n-1}{L-m-1}2^{-n}$, since in order to accumulate $m$ successes (i.e., to be at position $x=m\varepsilon $) a particle must perform $n>m$ trials, according to a negative binomial distribution [@Feller1966]. Here we take the number of cells to be exactly the length of the tube (or simply $\varepsilon =\tau =1$).
After several rounds, however, we expect that $A_{m,n}$, $B_{m,n}\rightarrow 1$. But what happens when $\alpha $ increases and the interactions between particles start to become important? Do we still expect $A_{m,n}$, $B_{m,n}\rightarrow 1$, which would mean that particles keep transiting without clogging the channel?
In order to analyse this point we also perform Monte Carlo (MC) simulations to support the numerical integration of the recurrence equations. This makes it possible to study the clogging dynamics in this kind of system through an order parameter that measures a kind of current of objects along the annular tube, here called the mobility, defined at time $t$ for $N$ particles as $M(t)=\frac{1}{N}\sum_{i=1}^{N}\xi _{i}(t)$, where $\xi _{i}(t)$ is a binary variable associated with particle $i$ that assumes the value 0 if the particle stays stopped at time $t$ and 1 if it moves to the next cell at that time. This quantity cannot be obtained from the solution of the recurrence relations, but it is easily measured in MC simulations. Some authors call this quantity simply the current.
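A minimal Monte Carlo sketch of this measurement is shown below. It is our own illustration, assuming random sequential updates and the Fermi-Dirac acceptance rule defined above; the original simulations may differ in update scheme and bookkeeping details.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_step(pos, species, occupancy, L, alpha, sigma_max):
    """One MC sweep: every particle attempts one move; returns the mobility."""
    moved = 0
    for i in rng.permutation(len(pos)):
        step = 1 if species[i] == 0 else -1        # A moves right, B moves left
        target = (pos[i] + step) % L
        p = 1.0 / (1.0 + np.exp(alpha * (occupancy[target] - sigma_max)))
        if rng.random() < p:
            occupancy[pos[i]] -= 1
            occupancy[target] += 1
            pos[i] = target
            moved += 1
    return moved / len(pos)

L, alpha, sigma_max = 64, 0.3, 1
pos = rng.integers(0, L, size=L)                   # one particle per cell on average
species = np.concatenate([np.zeros(L // 2, int), np.ones(L - L // 2, int)])
occupancy = np.bincount(pos, minlength=L).astype(float)
mobility = [mc_step(pos, species, occupancy, L, alpha, sigma_max) for _ in range(2000)]
print("stationary mobility (last 100 sweeps):", np.mean(mobility[-100:]))
```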
First, let us explore the variation of $\alpha $ in more detail. We solve the recurrence relations starting from the initial conditions $A_{m,0}=1$ and $B_{m,0}=1$, but with one single empty site at $m=L/2$, i.e., $A_{L/2,0}=B_{L/2,0}=0$. Differently from the MC simulations, when we numerically integrate the recurrence relations we need an initial defect to promote the time evolution of the system. We thus expect that for low $\alpha $, $A_{m,n}\rightarrow 1$ (in our particular case, $A_{m,n}\rightarrow \frac{(L-1)}{L}\approx 1$ for $L$ large). For higher values of $\alpha $, on the other hand, we ask: for which values of $\alpha $ does the system break down into a clogging situation?
Initially let us consider the simplest case, $\sigma _{\max }=1$, and start by observing the density of particles $A$ and $B$ obtained with both methods: MC simulations and numerical integration of the recurrence relations. A summary of our main results can be seen in Fig. [Fig:Different\_frames\_for\_jamming]{}. Fig. [Fig:Different\_frames\_for\_jamming]{} (a) shows that for $\alpha =0.3$ the system is freely flowing, since, averaging over a large number of runs ($N_{run}=100$), both species satisfy $A(x)\approx B(x)\approx 1$. It is important to observe that for $N_{run}=1$ the fluctuations overwhelm the expected behavior.
![Exploring the dynamics for different values of $\protect\alpha $, methods and time steps. []{data-label="Fig:Different_frames_for_jamming"}](detailing_transition_MC_EDP_REC.pdf){width="1.0\columnwidth"}
The numerical solution was also obtained via two methods: solution of the PDE in Eq. \[Eq:EDP\] and numerical integration of the recurrence relations (REC) for the same value of $\alpha $. These are not obliged to agree exactly, but we expect them to show at least the same qualitative behavior. The results are shown in Fig. \[Fig:Different\_frames\_for\_jamming\] (b). We observe that, although the index was shifted in the recurrence relation in order to obtain the PDE, both methods show curves around $A(x)=B(x)=1 $ for an intermediate time, $t=10^{3}$. Moreover, in the same plot, for $t=10^{5}$ steps, the straight gray line represents all curves obtained from the PDE and REC solutions, which coincide, indicating exact agreement with $A(x)=B(x)=1$. From now on we will use only REC, since the PDE presents only slight differences with respect to it and can be considered a good representation of the model via partial differential equations. Other mathematical properties of the PDE in these counterflowing-stream problems deserve future exploration.
But what happens when $\alpha $ increases? For example, for $\alpha =1$ both MC and REC indicate points of clogging characterized by a high density of particles, $14\lesssim A\approx B\lesssim 16$, according to Figs. [Fig:Different\_frames\_for\_jamming]{} (c) and (d), showing that both methods capture this situation. Interestingly, the jamming then occurs through many bottleneck situations but with lower intensity, $3\lesssim A\approx B\lesssim 4$, exactly as shown in Figs. [Fig:Different\_frames\_for\_jamming]{} (e) and (f). We therefore raise the question of the existence of $0<\alpha _{c}<\infty $ for which the system transits from a mobile to a clogged situation in the case $\sigma _{\max }=1$.
Here it is important to mention that, differently from other works, our results address a transition in the randomness of the system ($\alpha $). Other works do not consider such a parameter $\alpha $; in pedestrian dynamics, for instance, authors work with the transition of some quantity, such as the average system velocity or the probability of clogging, versus the density of pedestrians (see, for example, the interesting works [@Wei2015] and [@Marroquin2014], respectively).
![Density of particles $A$ from the REC solutions. Clearly the system is mobile for $\protect\alpha =0.4$ and jammed for $\protect\alpha =0.8$ (peaks of bottlenecks). We can observe a strange behavior in the vicinity of the transition ($0.5\leq \protect\alpha \leq 0.7$). []{data-label="Fig:Vicinity_of_transition"}](distribution_different_alfa_values_REC.pdf){width="0.55\columnwidth"}
Considering first the vicinity of the transition, we look at the density of $A$ for five different values of $\alpha $ in the stationary situation, $t=10^{5}$ time units. For example, we have a mobile system for $\alpha =0.4$ and a completely jammed system for $\alpha =0.8$ (two pronounced peaks), but for intermediate values of $\alpha $ (0.5, 0.6, and 0.7) the REC solutions show that the system seems to be in a metastable situation, where $A\approx 1$ but slight numerical differences are able to deform the solution, leading to strange shapes, until the clogged situation is reached starting from the mobile one.
At this point the mobility defined previously comes into play, allowing us to better explore some aspects of these phenomena. We look at the time evolution of the mobility over a large number of steps ($t_{\max }=10^{9}$ MC steps), large enough for our stopping criterion to fail if it were inadequate (which does not happen). Basically, the mobility reaches a steady state $M_{\infty }$ (stationary mobility). We use the following criterion to decide when the system has reached this stationary mobility. First, we inspect the system visually, which corresponds to a qualitative preliminary analysis. Second, we compute the slope of the mobility over lags of 10$^{3}$ MC steps and consider the stationary state reached when the slope is smaller, in absolute value, than $\eta $. We use $\eta =10^{-7}$ and check these cases against our previous visual analysis. After this, we analyse the behavior of $M(t)$ for fixed values of the density as a function of $\alpha $, taking the stationary value for each case.
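A sketch of this stopping rule is given below (our own illustrative Python code; the lag of $10^{3}$ steps and the threshold $\eta =10^{-7}$ follow the text, while the least-squares slope estimate is an assumption on our part).

```python
import numpy as np

def is_stationary(mobility, lag=10**3, eta=1e-7):
    """Declare stationarity when the slope of M(t) over the last `lag`
    MC steps is smaller than `eta` in absolute value."""
    if len(mobility) < lag:
        return False
    window = np.asarray(mobility[-lag:])
    t = np.arange(lag)
    slope = np.polyfit(t, window, 1)[0]   # least-squares linear fit
    return abs(slope) < eta

# Usage: append M(t) after every MC sweep and stop once the test passes.
# while not is_stationary(mobility_history):
#     mobility_history.append(mc_step(...))
```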
![(a) Time evolution of mobility for $L=8$. (b) Time evolution of mobility for $L=256$. (c) Finite size scaling of stationary mobility as function of $\protect\alpha $. (d) Stationary mobility as function of $\protect\alpha $ for fixed densities. An abrupt transition from mobile state to clogging state for $\protect\alpha =\protect\alpha _{c}$, can be observed which depends on density. The inset plot (in log-log scale) shows the dependence of $\protect\alpha _{c}$ as function of $\protect\rho $. []{data-label="Fig:mobility"}](L8.png "fig:"){width="0.5\columnwidth"}![(a) Time evolution of mobility for $L=8$. (b) Time evolution of mobility for $L=256$. (c) Finite size scaling of stationary mobility as function of $\protect\alpha $. (d) Stationary mobility as function of $\protect\alpha $ for fixed densities. An abrupt transition from mobile state to clogging state for $\protect\alpha =\protect\alpha _{c}$, can be observed which depends on density. The inset plot (in log-log scale) shows the dependence of $\protect\alpha _{c}$ as function of $\protect\rho $. []{data-label="Fig:mobility"}](L256.png "fig:"){width="0.5\columnwidth"} ![(a) Time evolution of mobility for $L=8$. (b) Time evolution of mobility for $L=256$. (c) Finite size scaling of stationary mobility as function of $\protect\alpha $. (d) Stationary mobility as function of $\protect\alpha $ for fixed densities. An abrupt transition from mobile state to clogging state for $\protect\alpha =\protect\alpha _{c}$, can be observed which depends on density. The inset plot (in log-log scale) shows the dependence of $\protect\alpha _{c}$ as function of $\protect\rho $. []{data-label="Fig:mobility"}](fss.png "fig:"){width="0.5\columnwidth"}![(a) Time evolution of mobility for $L=8$. (b) Time evolution of mobility for $L=256$. (c) Finite size scaling of stationary mobility as function of $\protect\alpha $. (d) Stationary mobility as function of $\protect\alpha $ for fixed densities. An abrupt transition from mobile state to clogging state for $\protect\alpha =\protect\alpha _{c}$, can be observed which depends on density. The inset plot (in log-log scale) shows the dependence of $\protect\alpha _{c}$ as function of $\protect\rho $. []{data-label="Fig:mobility"}](transition_mobility.png "fig:"){width="0.5\columnwidth"}
We performed simulations for several system sizes $L$. Figs. [Fig:mobility]{} (a) and (b) show, respectively, the time evolution of the mobility for $L=8$ and $L=256$. Here we average the mobility over a considerable number of runs, $N_{run}=L^{-1}10^{6}$. We can observe that the plots are quite different, so it is interesting to check the stationary mobility for different system sizes as a function of $\alpha $, which can be observed in Fig. [Fig:mobility]{} (c). We can see that the system is deeply sensitive to the system size, but for $L\geq 128$ no numerical differences were observed, and Fig. \[Fig:mobility\] (d) shows the results for $L=256$ considering different densities, from $\rho =0.062$ up to 1.
These results show an abrupt transition from a mobile phase ($M_{\infty }>0 $) to a clogging phase ($M_{\infty }=0$). This transition is preceded by an initial slip of the mobility. This occurs because, when the interaction of the environment with the objects decreases, i.e., as $\alpha $ grows, the objects initially gain mobility owing to their increased momenta, producing the initial slip. But as $\alpha $ grows even more, the interaction among the objects becomes increasingly important, until it finally destroys the mobility. In this regime the motion is random only when the cell occupation assumes exactly the value $\sigma _{\max }$. The high momenta of objects that ignore the environment, combined with the strong interaction effects among the objects, ultimately drive the system into the clogging situation. It is interesting to observe that the abrupt transition to the clogging phase occurs with a high peak of density immediately followed by a large number of smaller bottleneck peaks, as suggested by Figs. [Fig:Different\_frames\_for\_jamming]{} and \[Fig:Vicinity\_of\_transition\].
This analysis concerned $\sigma _{\max }=1$. The question, then, is whether we should observe anomalous effects for $\sigma _{\max }>1$, which in our system means considering smaller objects or, simply, more particles occupying the same orbital. In this case, can we observe a clogging transition at some $\alpha _{c}^{(1)}$ and a recovery of the mobility of the system at some $\alpha _{c}^{(2)}>$ $\alpha _{c}^{(1)}$? Yes, this occurs. We therefore analyze mobility simulations considering now $\sigma _{\max }>1$. In this case it is important to distinguish between density and occupation. We define the density as $\rho =\frac{N}{L}$, where $N$ is the number of particles and $L$ the system size, or simply the number of cells. The occupation is different and is defined here as $o=\frac{N}{\sigma _{\max }L}$. Thus we prepared two experiments: in one we change $\sigma _{\max }$ keeping the density constant, and in the other we change $\sigma _{\max }$ keeping the occupation constant. Two surprising results are observed.
![Stationary mobility in two situations, $\protect\rho =1$ and $o=1$, for different values of the maximal occupation $\protect\sigma _{\max }=1,2,...,6 $. []{data-label="Fig:different_sigmas"}](different_sigmas.pdf){width="1.0\columnwidth"}
Fig. \[Fig:different\_sigmas\] shows two distinct situations in which we vary $\sigma _{\max }$: first keeping the density constant at $\rho =1$ and, in a second case, keeping the occupation constant at $o=1$. In the first case we can observe a recovery of the mobility for $\sigma _{\max }=3$ and $\sigma _{\max }=4$, while for $\sigma _{\max }>4$ the system does not present a clogging regime at all; i.e., we can pass from a situation of objects interacting randomly with the environment to a situation where the objects interact strongly among themselves without the influence of the environment, and no bottleneck is observed, since the object size in relation to the cell size allows such a situation. However, in the anomalous cases $\sigma _{\max }=3$ and $\sigma _{\max }=4$, the clogging occurs as in the case $\sigma _{\max }=1$, but the mobility is recovered, owing to the cleaning of the environment combined with the intermediate relation between object size and cell size.
On the other hand, by keeping $o=1$ we did not expect a change in the critical value $\alpha _{c}$, since we enlarge $\sigma _{\max }$ while enlarging the number of objects so as to maintain $o$ constant. This nonlinear response is characteristic of the Fermi-Dirac distribution used for the transition probability of the cells. Such effects deserve further investigation. In summary, in this work we present a different model, governed by a parameter that controls the randomness of the system by changing how the objects interact with the environment and among themselves. We observe a transition between a mobile phase and a clogged phase at an $\alpha _{c}$ that depends on the occupation of the objects. The mobile phase can be recovered from a clogged regime when the objects only interact among themselves and the system is under lower occupation.
[99]{} S. Machlup, J. Appl. Phys. **35**, 341 (1954).
M. J. Kirton, M. J. Uren, Adv. Phys. **38**, 367 (1989).
R. da Silva, L. C. Lamb, G. I. Wirth, Philos. T. Roy. Soc. A, **369**, 307-321(2011), R. da Silva, L. Brusamarello, G. Wirth, Physica. A **389**, 2687-2699 (2010), R. da Silva, G. I. Wirth, J. Stat. Mech., P04025 (2010), R. da Silva, G. I. Wirth, L. Brusamarello, Int. J. Mod. Phys. B, **24**, 5885 (2010), R. da Silva, G. I. Wirth, L. Brusamarello, J. Stat. Mech., P10015 (2008).
R. da Silva, L. C. Lamb, Eder C. Lima, J. Dupont, Physica A, **391**, 1-7 (2012)
C. L. N. Oliveira, A. P. Vieira, D. Helbing, J. S. Andrade Jr., H. J. Herrmann, Phys. Rev. X, **6** 011003 (2016)
T. Vissers, A. Wysocki, M. Rex, H. Lowen, C. P. Royall, A. Imhof, A. van Blaaderen, Soft Matter, **7**, 2352 (2011)
T. Vissers, A. van Blaaderen, A. Imhof, Phys. Rev. Lett. **106**, 228303 (2011)
E. V. Stock, R. da Silva, and H. A. Fernandes, Phys. Rev. E **96**, 012155 (2017)
R. da Silva, A. Hentz, A. Alves, Physica A, **437** 139 (2015)
W. Feller, An Introduction to Probability Theory and Its Applications, New York, J. Wiley (1966)
J. Wei, H. Zhang, Y. Guo, M. Gu, Phys. Lett. A **379**, 1081–1086 (2015)
F. Alonso-Marroquin, J. Busch, C. Chiew, C. Lozano, A. Ramirez-Gomez, Phys. Rev. E **90**, 063305 (2014)
|
---
abstract: 'Spectral characterization is a fundamental step in the development of useful quantum technology platforms. Here, we study an ensemble of interacting qubits coupled to a single quantized field mode, an extended Dicke model that might be at the heart of Bose-Einstein condensate in a cavity or circuit-QED experiments for large and small ensemble sizes, respectively. We present a semi-classical and quantum analysis of the model. In the semi-classical regime, we show analytic results that reveal the existence of a third regime, in addition to the two characteristic of the standard Dicke model, characterized by one logarithmic and two jump discontinuities in the derivative of the density of states. We show that the finite quantum system shows two different types of clustering at the jump discontinuities, signaling precursors of two excited quantum phase transitions. These are confirmed using Peres lattices, where unexpected order arises around the new precursor. Interestingly, Peres'' conjecture regarding the relation between spectral characteristics of the quantum model and the onset of chaos in its semi-classical equivalent is valid in this model, as a revival of order in the semi-classical dynamics occurs around the new phase transition.'
author:
- 'J. P. J. Rodriguez'
- 'S. A. Chilingaryan'
- 'B. M. Rodríguez-Lara'
title: Critical phenomena in an extended Dicke model
---
Introduction
============
The Dicke model describes an ensemble of non-interacting qubits coupled to a single boson mode [@Dicke1954p99; @Garraway2011]. It predicts a zero-temperature transition at a critical coupling parameter where the ground state of the model goes from a so-called normal to superradiant phase in the thermodynamical limit [@Hepp1973p360; @Wang1973p832]. In finite systems, the so-called ground state quantum phase transition (GSQPT) becomes a continuous cross-over where entanglement arises near the critical coupling [@Lambert2004; @Bakemeier2012; @Bao2015]. Such a transition is very hard to observe in the original proposal of non-interacting, two-level neutral atoms coupled to a single electromagnetic field mode at zero temperature, due to restrictions on the achievable coupling strength with respect to the atomic energy gap. Conveniently, theory and experiments involving a Bose-Einstein condensate (BEC) coupled to a high-finesse optical cavity, plus some external standing-wave driving, provide a highly tunable quantum simulation platform to explore the GSQPT as self-organization of the BEC in the optical lattice created by the cavity and driving fields [@Domokos2002p253003; @Nagy2008127137; @Baumann2010p1301; @Keeling2010; @Nagy2010130401]. In addition, it is also possible to create a simulation of the open Dicke model, with a wider range of independently tunable parameter regimes, coupling two-hyperfine ground states of a BEC using two cavity-assisted Raman transitions [@Dimer2007p013804; @Baden2014p020408; @Zhiqiang2018]. The Dicke model also presents an excited-state quantum phase transition (ESQPT) related to singularities in the spectrum that translate into a logarithmic-type singularity of the semi-classical [@Perez2011p033802; @Perez2011p046208; @Puebla2013] and quantum [@Brandes2013; @Bastarrachea2014p012004] density of states. The finite size model spectral characteristics might signal a transition from quasi-integrability to non-integrability caused by the quantum precursors of the phase transition that translates into the onset of chaos in the semi-classical equivalent [@Emary2003p044101; @Emary2003p066203; @Bastarrachea2014p032101]. These results impact the dynamics that can be simulated in the circuit and ion-trap quantum electrodynamics (QED) platforms [@Chen2007p055803; @Mlynek2014; @Mezzacapo2015; @Barberena2017; @Lamata2017; @Aedo2018]
Here, we are interested in an extended Dicke model where the qubits are allowed to interact, $$\begin{aligned}
\label{DLMG}
H = \omega a^{\dagger} a + \omega_{0} J_{z} + \frac{\gamma}{\sqrt{N_{q}}} \left(a + a^{\dagger}\right) \left( J_{+}+J_{-}\right) +\frac{\eta}{N_{q}}J_{z}^{2}\end{aligned}$$ where the total number of qubits in the ensemble is $N_{q}$ and their energy gap is given by $\omega_{0}$. The atomic ensemble is described in the orbital angular momentum representation, $J_{i}$ with $ i = x, y, z$, such that $[J_{a},J_{b}] = i \epsilon_{abc} J_{c}$, in terms of a pseudospin length $j= N_{q} / 2$. The field is taken as a boson mode of frequency $\omega$ described by the annihilation (creation) operators, $a$ ($a^{\dagger}$). The qubit ensemble interaction with the boson mode is given by the parameter $\gamma$ and the qubit-qubit interaction is taken as dipole-dipole with nonlinear coupling strength $\eta$. This model might arise in BEC-cavity realizations [@Yuan20177404] as well as circuit- and ion-trap-QED. The semi-classical model shows a transition from Rabi to Josephson dynamics where the field is found to break the symmetry of initial symmetric states [@RodriguezLara2011p016225]. Under the rotating wave approximation, the finite size model shows a precursor of the GSQPT that provides entanglement and its semi-classical analog shows a transition from order to disorder at a critical energy related to the spectral characteristics [@Robles2015p033819]. In the following, we will conduct a detailed analysis of the full model, paying particular attention to the (ESQPT) which has not been looked at so far. For this, we will find the semi-classical density of states and compare it with the quantum density of states for an ensemble composed of $200$ qubits. Our results show the existence of a new third spectral regime outlined by the existence of one logarithmic and two jump discontinuities in the derivative of the density of states. Then, we will look at the Peres lattice for the $z$-component of the quantum angular momentum in the three regimes to find the critical energies that may signal a transition from order to disorder in the semi-classical dynamics. Interestingly, the semi-classical dynamics in the third regime show a transition from order to disorder and, then, additional islands of order appear. We close this manuscript with our conclusions.
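For readers who want to experiment with the finite-size model, the following is a minimal sketch (our own, not the authors' code) of how the Hamiltonian in Eq. (\[DLMG\]) can be assembled in a plain truncated Fock basis with NumPy; the extended bosonic coherent basis used later in the paper is more efficient, and the truncation $n_{\max}$ below is an illustrative choice.

```python
import numpy as np

def dicke_hamiltonian(nq=10, n_max=40, omega=1.0, omega0=1.0, gamma=0.6, eta=2.1):
    """Extended Dicke Hamiltonian in the basis |n> x |j, m>, with j = nq/2."""
    j = nq / 2.0
    m = np.arange(-j, j + 1)                       # J_z eigenvalues
    dim_s = m.size
    Jz = np.diag(m)
    Jp = np.diag(np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1)), -1)  # raising operator
    Jm = Jp.T                                      # lowering operator
    a = np.diag(np.sqrt(np.arange(1, n_max)), 1)   # boson annihilation operator
    ad = a.T
    id_f, id_s = np.eye(n_max), np.eye(dim_s)
    H = (omega * np.kron(ad @ a, id_s)
         + omega0 * np.kron(id_f, Jz)
         + gamma / np.sqrt(nq) * np.kron(a + ad, Jp + Jm)
         + eta / nq * np.kron(id_f, Jz @ Jz))
    return H, np.kron(id_f, Jz)

nq = 10
H, Jz_full = dicke_hamiltonian(nq=nq)
evals, evecs = np.linalg.eigh(H)
print("scaled ground-state energy E/(omega0 j):", evals[0] / (nq / 2))
```

Diagonalizing this matrix for modest $N_{q}$ already reproduces the qualitative features discussed below; the production-scale calculation in the paper ($N_{q}=200$, $n_{max}=600$) requires the more sophisticated basis and parity projection described in the text.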
Semi-classical critical analysis
================================
The extended Dicke model in Eq.(\[DLMG\]) is not analytically solvable, but we can provide analytical closed-form expressions for the fixed points of the energy surfaces, critical energies, and density of states (DoS) of its semi-classical equivalent. These structures will serve as a reference for the numerical analysis of the finite size quantum model. We obtain the semi-classical Hamiltonian, $$\begin{aligned}
\label{eq:2}
H_{cl} = \frac{\omega}{2}\left( q^{2} + p^{2} \right) + \omega_{0} j_{z} + 2 \gamma q \left(\frac{j^{2} - j_{z}^2}{j}\right)^{1/2} \cos \phi + \frac{\eta}{2 j} j_{z}^2,\end{aligned}$$ in the usual way [@Aguir1992p291]. We replace the angular momentum operators with their classical counterparts: $j$ for the total momentum, $j_{z}$ for its projection on the $z$-direction, and $j_{x} = \sqrt{j^2 - j_{z}^2}~ \cos \phi $ for the projection in the $x$-direction. For the field we use the classical analogue of the field quadratures, $\hat{q} = \left(\hat{a}^{\dagger} + \hat{a} \right)/\sqrt{2}$ and $\hat{p} = i \left(\hat{a}^{\dagger} - \hat{a} \right)/\sqrt{2}$. The semi-classical equations of motion, $$\begin{aligned}
&& \frac{dq}{dt} = \omega p, \\
&& \frac{dp}{dt} = - \omega q - 2 \gamma \left(\frac{j^{2} - j_{z}^2}{j}\right)^{1/2} \cos \phi, \\
&& \frac{d \phi}{d t} = \omega_{0} + \frac{j_{z}}{j} \left[ \eta - 2 \gamma q \left(\frac{j^{2} - j_{z}^2}{j}\right)^{-1/2} \cos \phi \right], \\
&& \frac{d j_{z}}{dt} = 2 \gamma q \left(\frac{j^{2} - j_{z}^2}{j}\right)^{1/2} \sin \phi,\end{aligned}$$ yield six fixed points on the energy surfaces. Two fixed points occur for any given set of parameters $\{\eta,\gamma\}$, $$\begin{aligned}
\left\lbrace q, p, j_{z} \right\rbrace = \left\lbrace 0, 0, \pm j \right\rbrace, \end{aligned}$$ whose nature depends on the auxiliary parameter $$\begin{aligned}
f = \frac{4 \gamma^2 + \eta \omega}{\omega \omega_{0}}.\end{aligned}$$ The fixed point $\left\lbrace q, p, j_{z} \right\rbrace = \left\lbrace 0, 0, j \right\rbrace$ is not stable for any given value of $f$, while the fixed point $\left\lbrace q, p, j_{z} \right\rbrace = \left\lbrace 0, 0, -j \right\rbrace$ is stable for $f<1$ and becomes unstable for $f\geq1$. In the case when $f\geq1$ and $\eta<\omega_{0}$, we find two well-known stable fixed points [@Robles2015p033819], $$\begin{aligned}
\left\lbrace q, p, j_{z}, \phi \right\rbrace = \left\lbrace - q_{(s)}, 0, -j f^{-1}, 0 \right\rbrace, \left\lbrace q_{(s)}, 0, -j f^{-1}, \pi \right\rbrace,\end{aligned}$$ with $q_{(s)} =2 \gamma \left( j - j f^{-2} \right)^{1/2} / \omega$. The final two fixed points are obtained for $f\geq1$ and $\eta\geq \omega_{0}$, $$\begin{aligned}
\left\lbrace q, p, j_{z}, \phi \right\rbrace = \left\lbrace 0, 0, - \frac{\omega_{0}\, j}{\eta} , \pm \frac{\pi}{2} \right\rbrace.\end{aligned}$$ We can use these six fixed points to divide the parameter space provided by the ensemble-field and qubit-qubit couplings into three regions as shown in Fig.\[fig:Figure1\]. In region I, $f<1$, there are two fixed points, a local maximum and a global minimum. The global minimum becomes a saddle point and two degenerate minima emerge in region II for $f\geq1$ and $\eta<\omega_{0}$. In the final region, $f\geq1$ and $\eta\geq \omega_{0}$, the saddle point from region II transforms into a local maximum and two degenerate saddle points appear.
![Regions in parameter space classified via types of fixed points of the classical Hamiltonian in Eq.(\[eq:2\]). Three regions with different fixed point structured are found: Region I, $ f < 1 $, region II, $ f \geq 1 $ and $ \eta < \omega_{0} $, and region III, $ f \geq 1 $ and $ \eta \geq \omega_{0} $.[]{data-label="fig:Figure1"}](Fig1.pdf)
The energy minima in these three regions can be obtained by evaluating the Hamiltonian at the stable fixed points, $$\begin{aligned}
\epsilon_{min} \equiv \frac{E_{min}}{\omega_{0} j} =
\left\{ \begin{array}{ll}
-1 + \frac{\eta}{2 \omega_{0}} , & f < 1, \\
-\frac{1}{2} \left( f + f^{-1} \right) + \frac{\eta}{2 \omega_{0}} , & f \geq 1,
\end{array} \right.\end{aligned}$$ Figure \[fig:Figure2\] shows contour plots of the energy surface for the model in the three regions defined by the fixed points. In region I, $f < 1$, there is a local maximum with energy, $$\begin{aligned}
\epsilon_{+} = 1 + \frac{\eta}{2 \omega_{0}},\end{aligned}$$ that is above the global minimum, $\epsilon_{min} < \epsilon_{+}$, Fig.\[fig:Figure2\](a). In the parameter region II, $ f \geq 1 $ and $ \eta < \omega_{0} $, there is a saddle point with energy, $$\begin{aligned}
\epsilon_{-} = - 1 + \frac{\eta}{2 \omega_{0}},\end{aligned}$$ and one local maximum with energy $\epsilon_{+}$. It is straightforward to check that $ \epsilon_{min} < \epsilon_{-} < \epsilon_{+} $ in this region, Fig.\[fig:Figure2\](b). Finally, in region III, $f \geq 1 $ and $ \eta \geq \omega_{0}$, a saddle point with energy, $$\begin{aligned}
\epsilon_{s} \equiv - \frac{\omega_{0}}{2 \eta},\end{aligned}$$ emerges along with two local maxima with energies $\epsilon_{\pm}$. Again, it is straightforward to see that $ \epsilon_{min} < \epsilon_{s} < \epsilon_{-} < \epsilon_{+} $ in this region, Fig.\[fig:Figure2\](c).
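This classification is easy to automate; the short sketch below (our own illustration) evaluates $f$ and the scaled critical energies for a given parameter set, from which the orderings quoted above can be checked.

```python
def critical_energies(omega=1.0, omega0=1.0, gamma=0.6, eta=2.1):
    """Return the region label and the scaled critical energies of the model."""
    f = (4 * gamma**2 + eta * omega) / (omega * omega0)
    e_plus = 1 + eta / (2 * omega0)
    e_minus = -1 + eta / (2 * omega0)
    e_min = e_minus if f < 1 else -0.5 * (f + 1 / f) + eta / (2 * omega0)
    if f < 1:
        return "I", {"e_min": e_min, "e_plus": e_plus}
    if eta < omega0:
        return "II", {"e_min": e_min, "e_minus": e_minus, "e_plus": e_plus}
    e_s = -omega0 / (2 * eta)
    return "III", {"e_min": e_min, "e_s": e_s, "e_minus": e_minus, "e_plus": e_plus}

print(critical_energies())                       # region III parameters of Fig. 2(c)
print(critical_energies(gamma=0.3, eta=0.2))     # region I parameters of Fig. 2(a)
```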
![Energy surfaces of the model on resonance, $\omega= \omega_{0}$, in (a) region I, $ f < 1 $ with $\eta = 0.2 ~\omega_{0}$ and $\gamma = 0.3 ~\omega_{0}$ (b) region II, $ f \geq 1 $ and $ \eta < ~\omega_{0}$ with $\eta = 0.2 ~\omega_{0}$ and $\gamma = 0.8 ~\omega_{0}$, and (c) region III, $ f \geq 1 $ and $ \eta \geq ~\omega_{0} $ with $\eta = 2.1 ~\omega_{0}$ and $\gamma = 0.6 ~\omega_{0}$. []{data-label="fig:Figure2"}](Fig2.pdf)
The critical energies calculated above help us calculating an analytic density of states (DoS) for the classical model in terms of the so-called Weyl’s law [@Gutzwiller], $$\begin{aligned}
\nu (E) = \frac{1}{(2 \pi)^2} \int dq ~ dp ~d\phi ~dj_{z} ~ \delta \left( E - H_{cl}\left(q, p, \phi, j_{z}\right) \right), \end{aligned}$$ that determines the allowed phase space volume for a given energy $E$. The integration over the bosonic canonical pair, $q$ and $p$, is readily performed and yields a constant equal to $ 2 \pi / \omega $. The pseudo spin part of the integral is restricted by the following condition, $$\begin{aligned}
(1 - z^{2}) \cos^{2} \phi \geq \frac{\omega \omega_{0}}{2 \gamma^{2}} \left( \frac{\eta}{2 \omega_{0}} z^{2} + z - \epsilon \right),\end{aligned}$$ where we defined the ratio of the $z$-projection to the total orbital angular momentum as the new integration variable, $ z = j_{z} / j $, and a scaled energy, $\epsilon = E / (\omega_{0} j)$. In region I, we recover a DoS with two subregions, $$\begin{aligned}
\frac{\omega}{2 j}\nu(\epsilon) =
\left\{ \begin{array}{ll}
\frac{1}{\pi} \int_{z_{2}}^{z_{+}} \phi_{0} (z, \epsilon) dz + \frac{z_{2} + 1}{2}, & \epsilon_{-} \leq \epsilon \leq \epsilon_{+}, \\
1 , & \epsilon_{+} < \epsilon,
\end{array} \right.\end{aligned}$$ whose derivative shows a discontinuity of the so-called jump-type at the critical energy $\epsilon_{+}$, Fig.\[fig:Figure3\](a). This might be taken as a semi-classical signature of the ESQPT. In region II, three different DoS subregions are identified, $$\begin{aligned}
\frac{\omega}{2 j}\nu(\epsilon) =
\left\{ \begin{array}{ll}
\frac{1}{\pi} \int_{z_{-}}^{z_{+}} \phi_{0} (z, \epsilon) dz, & \epsilon_{min} \leq \epsilon \leq \epsilon_{-}, \\
\frac{1}{\pi} \int_{z_{2}}^{z_{+}} \phi_{0} (z, \epsilon) dz + \frac{z_{2} + 1}{2}, & \epsilon_{-} < \epsilon \leq \epsilon_{+}, \\
1 , & \epsilon_{+} < \epsilon.
\end{array} \right.\end{aligned}$$ At the critical energy $\epsilon_{-}$, the DoS derivative shows a logarithmic-type discontinuity and the jump-type discontinuity remains at $\epsilon_{+}$, Fig.\[fig:Figure3\](b). This behavior is characteristic of the Dicke model and signals the existence of two essentially different ESQPT at energies $\epsilon_{\pm}$ [@Bastarrachea2014p012004]. In region III, $ f \geq 1 $ and $ \eta \geq \omega_{0} $, we find a behavior different from the standard Dicke model. Four different DoS subregions appear, $$\begin{aligned}
\frac{\omega}{2 j}\nu(\epsilon) =
\left\{ \begin{array}{ll}
\frac{1}{\pi} \int_{z_{-}}^{z_{+}} \phi_{0} (z, \epsilon) dz, & \epsilon_{min} \leq \epsilon \leq \epsilon_{s}, \\
\frac{1}{\pi} \left[ \int_{z_{-}}^{z_{1}} \phi_{0}(z, \epsilon) dz + \int_{z_{2}}^{z_{+}} \phi_{0}(z, \epsilon) dz \right] + \frac{z_{2}-z_{1}}{2}, & \epsilon_{s} < \epsilon \leq \epsilon_{-}, \\
\frac{1}{\pi} \int_{z_{2}}^{z_{+}} \phi_{0} (z, \epsilon) dz + \frac{z_{2} + 1}{2}, & \epsilon_{-} < \epsilon \leq \epsilon_{+}, \\
1 , & \epsilon_{+} < \epsilon.
\end{array} \right.\end{aligned}$$ The logarithmic-type discontinuity relocates at the critical energy $\epsilon_{s}$, related to the new saddle points in the energy surface, a new jump-type discontinuity in the DoS derivative appears at $\epsilon_{-}$, and the jump-type discontinuity at $\epsilon_{+}$ remains signaling three possible ESQPT in region III, Fig.\[fig:Figure3\](c).
In all these expressions, the auxiliary parameters $z_{\pm}$, fulfilling $z_{-} \leq z_{+}$, are the real roots of the following quadratic equation, $$\begin{aligned}
(1 - z^{2}) = \frac{\omega \omega_{0}}{2 \gamma^{2}} \left( \frac{\eta}{2 \omega_{0}} z^{2} + z - \epsilon \right),\end{aligned}$$ the parameters $z_{1,2}$, with $z_{1} \leq z_{2}$, are the real roots of the quadratic equation, $$\begin{aligned}
\frac{\eta}{2 \omega_{0}} z^{2} + z - \epsilon = 0.\end{aligned}$$ and the function $\phi_{0}(z, \epsilon)$ is given by, $$\begin{aligned}
\phi_{0}(z, \epsilon) = \arccos \left( \left[ \frac{\omega \omega_{0}}{2 \gamma^{2}} \frac{ \frac{\eta}{2 \omega_{0}} z^{2} + z -\epsilon }{1-z^{2}} \right]^{1/2} \right).\end{aligned}$$
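Rather than coding the piecewise expressions, the scaled DoS can be cross-checked numerically by integrating the phase-space constraint directly, as in the sketch below (our own check; the prefactor follows from the $2\pi /\omega$ factor of the bosonic integration together with the measure of the allowed $\phi$ interval, and reproduces the plateau of value 1 seen in Fig. \[fig:Figure3\]).

```python
import numpy as np

def scaled_dos(eps, omega=1.0, omega0=1.0, gamma=0.6, eta=2.1, nz=2000, nphi=2000):
    """Scaled semi-classical DoS, (omega / 2j) * nu(eps), from Weyl's law."""
    z = np.linspace(-1 + 1e-9, 1 - 1e-9, nz)
    phi = np.linspace(0.0, 2 * np.pi, nphi, endpoint=False)
    Z, PHI = np.meshgrid(z, phi, indexing="ij")
    lhs = (1 - Z**2) * np.cos(PHI) ** 2
    rhs = omega * omega0 / (2 * gamma**2) * (eta / (2 * omega0) * Z**2 + Z - eps)
    allowed = lhs >= rhs
    measure = allowed.mean() * 2.0 * 2 * np.pi   # integral of the indicator over dz dphi
    return measure / (4 * np.pi)

for eps in (-0.2, 0.5, 2.0, 3.5):
    print(eps, scaled_dos(eps))
```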
![ Scaled semi-classical DoS, $\omega \nu (\epsilon) / (2 j)$ (red), and its first derivative (black) in terms of the scaled energy $\epsilon \equiv E / (\omega_{0} j) $ for parameters identical to those in Fig.\[fig:Figure2\] []{data-label="fig:Figure3"}](Fig3.pdf){width="\columnwidth"}
Now we have well-defined semi-classical signatures of possible ESQPTs. An ESQPT refers to a singularity in the energy spectrum caused by a change in the clustering of excited states at a critical energy [@Capiro2008p1106]. Therefore, it is directly manifested in the density of states as discontinuities or divergences [@Bastarrachea2014p032101]. Unfortunately, the extended Dicke model remains an unsolved model and we are restricted to a numerical analysis of finite, truncated, computational realizations. Finite models do not show sharp quantum phase transitions but smooth crossovers between different spectral configurations. Nevertheless, the semi-classical results provide a valuable starting point to search for precursors of ESQPTs in the finite extended Dicke model. In our numerics, we use an ensemble composed of two hundred qubits, $N_{q}=200$, and an extended bosonic coherent basis [@Chen2008p051801] with a maximum of six hundred bosons in the field mode, $n_{max}=600$. We restrict our analysis to the positive parity subspace of the model and obtain about fifty thousand converged eigenstates, with a wavefunction convergence criterion of less than $10^{-18}$ [@Bastarrachea2014p012004]. This allows us to calculate a numerically averaged quantum DoS, $$\begin{aligned}
\bar{\nu}(\bar{\epsilon}) = \frac{\Delta \bar{n}}{\Delta \bar{E}},\end{aligned}$$ as a function of the scaled energy, $$\begin{aligned}
\bar{\epsilon} = \frac{1}{\omega N_{q}} \left[ \bar{E}(\bar{n}+1) - \bar{E}(\bar{n})\right],\end{aligned}$$ where the average energy, $\bar{E}(\bar{n})$, and number of photons, $\bar{n}$, are taken over twenty eigenvalues. Figure \[fig:Figure4\] shows the averaged quantum DoS as blue dots with its semi-classical analogue as a solid red line for comparison. It is possible to see that the averaged quantum DoS follows the trend of its semi-classical equivalent, and shows clustering in the spectrum near the critical scaled energies where the ESQPT is expected.
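Assuming an array of converged eigenvalues (such as the one produced by the diagonalization sketch above), the windowed average can be computed as follows; the window of twenty eigenvalues matches the text, everything else is our own illustrative choice.

```python
import numpy as np

def averaged_quantum_dos(evals, window=20):
    """Averaged DoS as Delta n / Delta E over consecutive windows of eigenvalues."""
    evals = np.sort(np.asarray(evals))
    edges = evals[::window]                       # first eigenvalue of each window
    centers = 0.5 * (edges[1:] + edges[:-1])
    dos = window / np.diff(edges)                 # states per unit energy
    return centers, dos
```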
![Scaled averaged quantum DoS, $\frac{\omega}{2 j} \bar{\nu}(\bar{\epsilon})$ (blue dots), and its semi-classical analogue (red) as a function of the scaled energy, $\bar{\epsilon}$ for parameters identical to those in Fig.\[fig:Figure3\].[]{data-label="fig:Figure4"}](Fig4.pdf){width="\columnwidth"}
Peres lattices are an alternative qualitative method to find the precursors of ESQPTs. Originally conceived as a visual test for the competition between regular and chaotic features in the semi-classical equivalent of quantum models [@Peres1984p1711], the idea behind this method is simple. If we consider an integrable quantum system described by the Hamiltonian $H_{0}$ and a constant of motion $I$, such that $ \left[ H_{0}, I \right] = 0 $, and plot the mean value of the constant of motion for each and every spectral state versus its energy, we will observe a lattice formed by regularly distributed points because each spectral state can be labeled by the quantum number associated with the observable. Introducing a perturbation, $H^{\prime}$, may render the system non-integrable. In such a case, the observable $I$ is no longer a constant of motion and the spectrum states cannot be labeled uniquely by a combination of the energy and the mean value of the observable. However, a weak perturbation might not entirely destroy the regular lattice obtained before but, as the perturbation grows, the regular part of the lattice will disappear gradually and disorder will dominate. Thus, the method of Peres lattices serves as an indicator of the changing structures inside the quantum spectrum of the system and has proven a useful method for identifying the various types of ESQPT in the standard Dicke model [@Bastarrachea2014p032102].
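In practice, a Peres lattice only requires the eigenpairs and the matrix of the chosen observable in the same basis. A minimal sketch is given below (our own code, assuming `evals`, `evecs`, and `Jz_full` as returned by the Hamiltonian sketch above; the scaling by $j$ is our choice, in the spirit of Fig. \[fig:Figure5\]).

```python
import numpy as np
import matplotlib.pyplot as plt

def peres_lattice(evals, evecs, observable, omega0=1.0, j=5.0):
    """Expectation value of the observable per eigenstate versus scaled energy."""
    expvals = np.einsum("ik,ij,jk->k", evecs.conj(), observable, evecs).real
    return evals / (omega0 * j), expvals / j

eps, jz_mean = peres_lattice(evals, evecs, Jz_full)
plt.plot(eps, jz_mean, ".", markersize=2)
plt.xlabel(r"$E/(\omega_0 j)$")
plt.ylabel(r"$\langle J_z\rangle / j$")
plt.show()
```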
![Peres lattice of the scaled quantum angular momentum $ \langle J_{z} \rangle / ( \omega_{0} j ) $ for parameter values identical to those in Fig.\[fig:Figure4\].[]{data-label="fig:Figure5"}](Fig5.pdf){width="\columnwidth"}
When we look at the Peres lattice for the $z$-projection of the angular momentum operator in the extended Dicke model, Fig.\[fig:Figure5\], the lattices for regions I and II are phenomenologically identical to those found in the standard Dicke model [@Bastarrachea2014p032102]. The precursor of the so-called static ESQPT, associated with a maximum of the scaled quantum angular momentum, occurs around the critical scaled energy $\epsilon_{+}$ in region I, Fig.\[fig:Figure5\](a), II, Fig.\[fig:Figure5\](b), and III, Fig.\[fig:Figure5\](c). The precursor of the dynamic ESQPT, associated with a minimum of the scaled angular momentum, appears only in regions II and III, around the critical energy $\epsilon_{-}$. Region III deviates from the standard Dicke model behavior: here the large values of the nonlinear coupling restore regularity to the Peres lattice around the critical energy of the precursor of the static ESQPT for large values of the scaled angular momenta, Fig.\[fig:Figure5\](c). The Peres lattice in region III tells us that, for small energies, the available phase space in the semi-classical analogue, described in terms of the scaled angular momentum projection, will be highly restricted and asymmetric below the critical energy value $\epsilon_{s}$, Fig.\[fig:Figure6\](a). It will become symmetric around $\epsilon_{-}$, Fig.\[fig:Figure6\](b), and start expanding, Fig.\[fig:Figure6\](c-d), until it reaches its maximum near $\epsilon_{+}$, Fig.\[fig:Figure6\](e-f). As expected from Peres' conjecture, the trajectories in the semi-classical analogue will be chaotic for parameter values in the irregular lattice, Fig.\[fig:Figure5\](a-c), and regular for those corresponding to ordered sections of the lattice, Fig.\[fig:Figure6\](e-f). This revival of the regular domain induced by the nonlinear interaction is the most striking effect of the model.
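The Poincaré sections of Fig. \[fig:Figure6\] can be reproduced qualitatively by integrating the semi-classical equations of motion given above and recording the crossings of the plane $p=0$. The sketch below is our own minimal implementation, with initial conditions placed on the energy shell by solving the quadratic condition for $q$; solver tolerances, sampling, and integration time are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

W, W0, GAMMA, ETA, J = 1.0, 1.0, 0.6, 2.1, 100.0   # region III parameters of Fig. 6

def eom(t, y):
    q, p, phi, jz = y
    root = np.sqrt(max((J**2 - jz**2) / J, 1e-12))
    dq = W * p
    dp = -W * q - 2 * GAMMA * root * np.cos(phi)
    dphi = W0 + (jz / J) * (ETA - 2 * GAMMA * q * np.cos(phi) / root)
    djz = 2 * GAMMA * q * root * np.sin(phi)
    return [dq, dp, dphi, djz]

def q_on_shell(E, phi, jz):
    """One real root of the quadratic energy condition at p = 0, if it exists."""
    b = 2 * GAMMA * np.sqrt((J**2 - jz**2) / J) * np.cos(phi)
    c = W0 * jz + 0.5 * ETA * jz**2 / J - E
    disc = b**2 - 2 * W * c
    return None if disc < 0 else (-b + np.sqrt(disc)) / W

def poincare_section(E, n_init=20, t_max=500.0):
    crossings = []
    hit = lambda t, y: y[1]                  # section at p(t) = 0
    for jz in np.linspace(-0.9 * J, 0.9 * J, n_init):
        q0 = q_on_shell(E, 0.0, jz)
        if q0 is None:
            continue
        sol = solve_ivp(eom, (0, t_max), [q0, 0.0, 0.0, jz],
                        events=hit, max_step=0.05, rtol=1e-8, atol=1e-8)
        for _, _, phi, jz_c in sol.y_events[0]:
            crossings.append((1 + jz_c / J, phi % (2 * np.pi)))
    return np.array(crossings)

section = poincare_section(E=2.0 * W0 * J)   # scaled energy E/(omega0 j) = 2
print(section.shape)
```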
![Poincaré sections in the semi-classical equivalent of the extended Dicke model for the phase space $ \left\lbrace r, \phi \right\rbrace$ at $ p(t) = 0 $ with $ r = 1 + j_{z} / j $ and initial scaled energies (a) $ E / {(\omega_{0} j)} = -0.3 $, (b) $ E / {(\omega_{0} j)} = -0.15 $, (c) $ E / {(\omega_{0} j)} = 0 $, (d) $ E / {(\omega_{0} j)} = 0.15 $, (e) $ E / {(\omega_{0} j)} = 2 $ and (f) $E / {(\omega_{0} j)} = 2.1 $. The parameters belong to region III: $\omega = \omega_{0} = 1, $ $\eta = 2.1 $, $ \gamma = 0.6 $, $ j = 100 $. []{data-label="fig:Figure6"}](Fig6.pdf)
Conclusion
==========
We studied a Dicke model with dipole-dipole interacting qubits. The semi-classical equivalent of the quantum model allowed us to provide a detailed analysis of the energy landscape, where three structurally distinct regions can be identified. These regions show two, three, and four critical energies at which minima, maxima, and saddle points of the energy manifold appear. The semi-classical model allowed us to calculate a closed-form density of states that shows a jump-type discontinuity in the first region, a logarithmic- and jump-type in the second region, and a logarithmic- and two jump-type discontinuities in the third region at the critical energies. Our results served as pointers to focus the search of precursors of ground and excited quantum phase transitions in the quantum model.
We diagonalized the finite-size quantum model using an extended coherent state basis in the positive parity sector of the related Hilbert space. Our numerical realization considered an ensemble of two hundred qubits with a maximum of six hundred excitations in the boson field and yielded approximately fifty thousand converged eigenstates and their respective eigenvalues. The resulting averaged quantum density of states followed, in good agreement, the trend provided by the semi-classical analytic result, signaling the precursors of quantum phase transitions. The first two regions yield results phenomenologically identical to those of the Dicke model, with critical energies displaced by the nonlinear coupling. In the third region, a large nonlinear coupling produces an irregular Peres lattice for the $z$-component of the angular momentum at low energies, differing from the behavior of the standard Dicke model, and a small regular section arises for large values of the angular momentum projection near the critical energy related to the second jump-type discontinuity in the derivative of the semi-classical density of states. The parameters associated with this region produce a revival of regular semi-classical trajectories in an otherwise chaotic system.
We hope that this semi-classical and quantum analysis of the Dicke model with interacting qubits might shed light on the dynamical regimes available for simulations of the model in cavity, ion-trap, and circuit quantum electrodynamics platforms.
B.M.R.L. acknowledges fruitful discussion with Félix Humberto Maldonado Villamizar and Benjamín Raziel Jaramillo Ávila. J.P.J.R acknowledges funding from Consejo Nacional de Ciencia y Tecnología (CONACYT) (CB-2015-01-255230) and B.M.R.L from Consejo Nacional de Ciencia y Tecnología (CONACYT) (CB-2015-01-255230, FORDECYT-296355).
[38]{} <https://doi.org/10.1103/PhysRev.93.99>
<https://doi.org/10.1098/rsta.2010.0333>
<https://doi.org/10.1016/0003-4916(73)90039-0>
<https://doi.org/10.1103/PhysRevA.7.831>
<https://doi.org/10.1103/PhysRevLett.92.073602>
<https://doi.org/10.1103/PhysRevA.85.043821>
<https://doi.org/10.3390/e17075022>
<https://doi.org/10.1103/PhysRevLett.89.253003>
<https://doi.org/10.1140/epjd/e2008-00074-6>
<https://doi.org/10.1038/nature09009>
<https://doi.org/10.1103/PhysRevLett.105.043001>
<https://doi.org/10.1103/PhysRevLett.104.130401>
<https://doi.org/10.1103/PhysRevA.75.013804>
<https://doi.org/10.1103/PhysRevLett.113.020408>
<https://doi.org/10.1364/OPTICA.4.000424>
<https://doi.org/10.1103/PhysRevA.83.033802>
<https://doi.org/10.1103/PhysRevE.83.046208>
<https://doi.org/10.1103/PhysRevA.87.023819>
<https://doi.org/10.1103/PhysRevE.88.032133>
<https://doi.org/10.1088/1742-6596/512/1/012004>
<https://doi.org/10.1103/PhysRevLett.90.044101>
<https://doi.org/10.1103/PhysRevE.67.066203>
<https://doi.org/10.1103/PhysRevA.89.032101>
<https://doi.org/10.1103/PhysRevA.76.055803>
<https://doi.org/10.1038/ncomms6186>
<https://doi.org/10.1038/srep07482>
<https://doi.org/10.1038/s41598-017-09110-7>
<https://doi.org/10.1038/srep43768>
<https://doi.org/10.1103/PhysRevA.97.042317>
<https://doi.org/10.1038/s41598-017-07899-x>
<https://doi.org/10.1103/PhysRevE.84.016225>
<https://doi.org/10.1103/PhysRevA.91.033819>
<https://doi.org/10.1016/0003-4916(92)90178-O>
M. C. Gutzwiller, *Chaos in Classical and Quantum Mechanics* (Springer, New York, 1990).
<https://doi.org/10.1016/j.aop.2007.06.011>
<https://doi.org/10.1103/PhysRevA.78.051801>
<https://doi.org/10.1103/PhysRevLett.53.1711>
<https://doi.org/10.1103/PhysRevA.89.032102>
|
---
abstract: 'Most state-of-the-art approaches for named-entity recognition (NER) use semi supervised information in the form of word clusters and lexicons. Recently neural network-based language models have been explored, as they as a byproduct generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003—significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data.'
author:
- |
Alexandre Passos, Vineet Kumar, Andrew McCallum\
School of Computer Science\
University of Massachusetts, Amherst\
[{apassos,vineet,mccallum}@cs.umass.edu]{}
bibliography:
- 'refs.bib'
title: Lexicon Infused Phrase Embeddings for Named Entity Resolution
---
Introduction {#sec:introduction}
============
In many natural language processing tasks, such as named-entity recognition or coreference resolution, syntax alone is not enough to build a high performance system; some external source of information is required. In most state-of-the-art systems for named-entity recognition (NER) this knowledge comes in two forms: domain-specific lexicons (lists of word types related to the desired named entity types) and word representations (either clusterings or vectorial representations of word types which capture some of their syntactic and semantic behavior and allow generalizing to unseen word types).
Current state-of-the-art named entity recognition systems use Brown clusters as the form of word representation [@Ratinov:2009; @Turian:2010; @Miller:2004; @Brown:1992], or other cluster-based representations computed from private data [@lin2009phrase]. While very attractive due to their simplicity, generality, and hierarchical structure, Brown clusters are limited because the computational complexity of fitting a model scales quadratically with the number of words in the corpus, or the number of “base clusters” in some efficient implementations, making it infeasible to train it on large corpora or with millions of word types.
Although some attempts have been made to train named-entity recognition systems with other forms of word representations, most notably those obtained from training neural language models [@Turian:2010; @Collobert:2008], these systems have historically underperformed simple applications of Brown clusters. A disadvantage of neural language models is that, while they are inherently more scalable than Brown clusters, training large neural networks is still often expensive; for example, Turian et al report that some models took multiple days or weeks to produce acceptable representations. Moreover, language embeddings learned from neural networks tend to behave in a “nonlinear” fashion, as they are trained to encourage a many-layered neural network to assign high probability to the data. These neural networks can detect nonlinear relationships between the embeddings, which is not possible in a log-linear model such as a conditional random field, therefore limiting how much information from the embeddings can actually be leveraged.
Recently Mikolov et al [@Mikolov:2013; @Mikolov:2013b] proposed two simple log-linear language models, the CBOW model and the Skip-Gram model, that are simplifications of neural language models, and which can be very efficiently trained on large amounts of data. For example it is possible to train a Skip-gram model over more than a billion tokens with a single machine in less than half a day. These embeddings can also be trained on phrases instead of individual word types, allowing for fine granularity of meaning.
In this paper we make the following contributions. (1) We show how to extend the Skip-Gram language model by injecting supervisory training signal from a collection of curated lexicons—effectively encouraging training to learn similar embeddings for phrases which occur in the same lexicons. (2) We demonstrate that this method outperforms a simple application of the Skip-Gram model on the semantic similarity task on which it was originally tested. (3) We show that a linear-chain CRF is able to successfully use these log-linearly-trained embeddings better than the other neural-network-trained embeddings. (4) We show that lexicon-infused embeddings let us easily build a new highest-performing named entity recognition system on CoNLL 2003 data [@conll2003] which is trained using only publicly available data. (5) We also present results on the relatively under-studied Ontonotes NER task [@weischedel2011ontonotes], where we show that our embeddings outperform Brown clusters.
Background and Related Work {#sec:backgr-relat-work}
===========================
Language models and word embeddings {#sec:language-models-word}
-----------------------------------
A statistical language model is a way to assign probabilities to all possible documents in a given language. Most such models can be classified in one of two categories: they can directly assign probabilities to sequences of word types, such as is done in $n$-gram models, or they can operate in a lower-dimensional latent space, to which word types are mapped. While most state-of-the-art language models are $n$-gram models, the representations used in models of the latter category, henceforth referred to as “embeddings,” have been found to be useful in many NLP applications which don’t actually need a language model. The underlying intuition is that when language models compress the information about the word types in a latent space they capture much of the commonalities and differences between word types. Hence features extracted from these models then can generalize better than features derived from the word types themselves. One simple language model that discovers useful embeddings is known as [*Brown clustering*]{} [@Brown:1992]. A Brown clustering is a class-based bigram model in which (1) the probability of a document is the product of the probabilities of its bigrams, (2) the probability of each bigram is the product of the probability of a bigram model over latent classes and the probability of each class generating the actual word types in the bigram, and (3) each word type has non-zero probability only on a single class. Given a one-to-one assignment of word types to classes, then, and a corpus of text, it is easy to estimate these probabilities with maximum likelihood by counting the frequencies of the different class bigrams and the frequencies of word tokens of each type in the corpus. The Brown clustering algorithm works by starting with an initial assignment of word types to classes (which is usually either one unique class per type or a small number of seed classes corresponding to the most frequent types in the corpus), and then iteratively selecting the pair of classes to merge that would lead to the highest post-merge log-likelihood, doing so until all classes have been merged. This process produces a hierarchical clustering of the word types in the corpus, and these clusterings have been found useful in many applications [@Ratinov:2009; @koo2008simple; @Miller:2004]. There are other similar models of distributional clustering of English words which can be similarly effective [@pereira1993distributional].
One limitation of Brown clusters is their computational complexity, as training takes $O(kV^2 + N)$ time, where $k$ is the number of base clusters, $V$ the size of the vocabulary, and $N$ the number of tokens. This is infeasible for large corpora with millions of word types.
Another family of language models that produces embeddings is the [ *neural language models*]{}. Neural language models generally work by mapping each word type to a vector in a low-dimensional vector space and assigning probabilities to $n$-grams by processing their embeddings in a neural network. Many different neural language models have been proposed [@Bengio:2003; @Morin:2005; @Bengio:2008; @Mnih:2008; @Collobert:2008; @mikolov2010recurrent]. While they can capture the semantics of word types, and often generalize better than $n$-gram models in terms of perplexity, applying them to NLP tasks has generally been less successful than Brown clusters [@Turian:2010].
Finally, there are algorithms for computing word embeddings which do not use language models at all. A popular example is the CCA family of word embeddings [@dhillon2012two; @dhillon2011multi], which work by choosing embeddings for a word type that capture the correlations between the embeddings of word types which occur before and after this type.
The Skip-gram Model {#skip-gram-model}
-------------------
A main limitation of neural language models is that they often have many parameters and slow training times. To mitigate this, Mikolov et al. recently proposed a family of log-linear language models inspired by neural language models but designed for efficiency. These models operate on the assumption that, even though they are trained as language models, users will only look at their embeddings, and hence all they need is to produce good embeddings, and not high-accuracy language models.
The most successful of these models is the [*skip-gram model*]{}, which computes the probability of each $n$-gram as the product of the conditional probabilities of each context word in the $n$-gram conditioned on its central word. For example, the probability for the $n$-gram “the cat ate my homework” is represented as $P(the|ate)P(cat|ate)P(my|ate)P(homework|ate)$.
![A binary Huffman tree. Circles represent binary classifiers. Rectangles represent tokens, which can be multi-word.[]{data-label="HuffmanTree"}](figures/HuffmanTree){width="50.00000%"}
To compute these conditional probabilities the model assigns an embedding to each word type and defines a binary tree of logistic regression classifiers with each word type as a leaf. Each classifier takes a word embedding as input and produces a probability for a binary decision corresponding to a branch in the tree. Each leaf in the tree has a unique path from the root, which can be interpreted as a set of (classifier,label) pairs. The skip-gram model then computes a probability of a context word given a target word as the product of the probabilities, given the target word’s embeddings, of all decisions on a path from the root to the leaf corresponding to the context word. Figure \[HuffmanTree\] shows such a tree structured model.
The likelihood of the data, then, given a set $N$ of $n$-grams, with $m_n$ being $n$-gram $n$’s middle-word, $c_n$ each context word, $w^{c_n}_i$ the parameters of the $i$-th classifier in the path from the root to $c_n$ in the tree, $l^{c_n}_i$ its label (either $1$ or $-1$), $e_f$ the embedding of word type $f$, and $\sigma$ is the logistic sigmoid function, is $$\label{eq:1}
\prod_{n \in N} \prod_{c_n \in n} \prod_i \sigma(l^{c_n}_i {w^{c_n}_i}^T e_{m_n}).$$
Given a tree, then, choosing embeddings $e_{m_n}$ and classifier parameters $w^{c_n}_i$ to maximize equation is a non-convex optimization problem which can be solved with stochastic gradient descent.
The binary tree used in the model is commonly estimated by computing a Huffman coding tree [@huffman1952method] of the word types and their frequencies. We experimented with other tree estimation schemes but found no perceptible improvement in the quality of the embeddings.
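To make the hierarchical structure concrete, the following sketch (our own illustration, not the authors' implementation) builds a Huffman coding tree from word frequencies and extracts, for each word, the sequence of (internal node, binary label) pairs that appears in Equation (1).

```python
import heapq
from collections import namedtuple

Node = namedtuple("Node", "id left right")   # internal node of the binary tree

def huffman_paths(freqs):
    """Return {word: [(node_id, label), ...]} for a Huffman tree over `freqs`."""
    heap = [(f, i, w) for i, (w, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    next_id = 0
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        node = Node(next_id, left, right)
        heapq.heappush(heap, (f1 + f2, len(freqs) + node.id, node))
        next_id += 1
    paths = {}

    def walk(tree, path):
        if isinstance(tree, Node):
            walk(tree.left, path + [(tree.id, -1)])   # left branch -> label -1
            walk(tree.right, path + [(tree.id, +1)])  # right branch -> label +1
        else:
            paths[tree] = path
    walk(heap[0][2], [])
    return paths

print(huffman_paths({"the": 50, "cat": 10, "ate": 8, "my": 20, "homework": 3}))
```

With one embedding per word type and one weight vector per internal node, the probability of a context word given a target word is then the product over its path of $\sigma(l\, w^T e)$, as in the likelihood above.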
It is possible to extend these embeddings to model phrases as well as tokens. To do so, Mikolov et al use a phrase-building criterion based on the pointwise mutual information of bigrams. They perform multiple passes over a corpus to estimate trigrams and higher-order phrases. We instead consider candidate trigrams for all pairs of bigrams which have a high PMI and share a token.
Named Entity Recognition {#sec:named-entity-recogn}
------------------------
Named Entity Recognition (NER) is the task of finding all instances of explicitly named entities and their types in a given document. While detecting named entities is superficially simple, since most sequences of capitalized words are named entities (excluding headlines, sentence beginnings, and a few other exceptions), finding all entities is non-trivial, and determining the correct named entity type can sometimes be surprisingly hard. Performing the task well often requires external knowledge of some form.
In this paper we evaluate our system on two labeled datasets for NER: CoNLL 2003 [@conll2003] and Ontonotes [@weischedel2011ontonotes]. The CoNLL dataset has approximately 320k tokens, divided into 220k tokens for training, 55k tokens for development, and 50k tokens for testing. While the training and development sets are quite similar, the test set is substantially different, and performance on it depends strongly on how much external knowledge the systems have. The CoNLL dataset has four entity types: [Person, Location, Organization, and Miscellaneous]{}. The Ontonotes dataset is substantially larger: it has 1.6M tokens total, with 1.4M for training, 100k for development, and 130k for testing. It also has eighteen entity types, a much larger set than the CoNLL dataset, including works of art, dates, cardinal numbers, languages, and events.
The performance of NER systems is commonly measured in terms of precision, recall, and F1 on the sets of entities in the ground truth and returned by the system.
### Baseline System {#sec:baseline-system}
In this section we describe in detail the baseline NER system we use. It is inspired by the system described in Ratinov and Roth .
Because NER annotations are commonly not nested (for example, in the text “the US Army”, “US Army” is treated as a single entity, instead of the location “US” and the organization “US Army”) it is possible to treat NER as a sequence labeling problem, where each token in the sentence receives a label which depends on which entity type it belongs to and its position in the entity. Following Ratinov and Roth we use the BILOU encoding, where each token can either [Begin]{} an entity, be [Inside]{} an entity, be the [Last]{} token in an entity, be [Outside]{} an entity, or be the single [Unique]{} token in an entity.
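As a small illustration of the BILOU scheme, the following sketch (a hypothetical helper, not part of any of the cited systems) converts entity spans over token indices into BILOU labels.

```python
def bilou_encode(num_tokens, spans):
    """spans: list of (start, end_exclusive, entity_type) over token indices."""
    labels = ["O"] * num_tokens                      # Outside by default
    for start, end, etype in spans:
        if end - start == 1:
            labels[start] = "U-" + etype             # Unique single-token entity
        else:
            labels[start] = "B-" + etype             # Begin
            for i in range(start + 1, end - 1):
                labels[i] = "I-" + etype             # Inside
            labels[end - 1] = "L-" + etype           # Last
    return labels

tokens = ["the", "US", "Army", "attacked", "."]
print(bilou_encode(len(tokens), [(1, 3, "ORG")]))
# ['O', 'B-ORG', 'L-ORG', 'O', 'O']
```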
Our baseline architecture is a stacked linear-chain CRF [@lafferty2001conditional] system: we train two CRFs, where the second CRF can condition on the predictions made by the first CRF as well as features of the data. Both CRFs, following Zhang and Johnson , have roughly similar features.
While local features capture a lot of the clues used in text to highlight named entities, they cannot necessarily disambiguate entity types or detect named entities in special positions, such as the first tokens in a sentence. To solve these problems most NER systems incorporate some form of external knowledge. In our baseline system we use lexicons of months, days, person names, companies, job titles, places, events, organizations, books, films, and some minor others. These lexicons were gathered from US Census data, Wikipedia category pages, and Wikipedia redirects (and will be made publicly available upon publication).
Following Ratinov and Roth , we also compare the performance of our system with a system using features based on the Brown clusters of the word types in a document. Since, as seen in section \[sec:language-models-word\], Brown clusters are hierarchical, we use features corresponding to prefixes of the path from the root to the leaf for each word type.
More specifically, the feature templates of the baseline system are as follows. First for each token we compute:
-   its word type;
-   word type, after excluding digits and lower-casing it;
-   its capitalization pattern;
-   whether it is punctuation;
-   4-character prefixes and suffixes;
-   character $n$-grams from length 2 to 5;
-   whether it is in a wikipedia-extracted lexicon of person names (first, last, and honorifics), dates (months, years), place names (country, US state, city, place suffixes, general location words), organizations, and man-made things;
-   whether it is a demonym.
For each token’s label we have feature templates considering all of the token’s features, all neighboring tokens’ features (up to distance 2), and bags of words of features of tokens in a window of size 8 around each token. We also add a feature marking whether a token is the first occurrence of its word type in a document.
When using Brown clusters we add as token features all prefixes of lengths 4, 6, 10, and 20 of its Brown cluster.
For the second-layer model we use all these features, as well as the label predicted for each token by the first-layer model.
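A minimal sketch of how a few of these token-level templates might be realized is shown below; the feature names and the helper itself are illustrative assumptions, not the actual feature extractor of the baseline system.

```python
import re
import string

def token_features(token, brown_path=None):
    """A few of the surface feature templates described above.

    brown_path: the word type's Brown-cluster bit string, if available.
    """
    feats = {
        "word=" + token,
        "lower_nodigit=" + re.sub(r"\d", "", token).lower(),
        "shape=" + "".join("X" if c.isupper() else "x" if c.islower()
                           else "d" if c.isdigit() else c for c in token),
        "is_punct=%s" % all(c in string.punctuation for c in token),
        "prefix4=" + token[:4],
        "suffix4=" + token[-4:],
    }
    feats.update("char_%d_gram=%s" % (n, token[i:i + n])
                 for n in range(2, 6) for i in range(len(token) - n + 1))
    if brown_path is not None:                       # Brown-cluster prefixes
        feats.update("brown_prefix=%s" % brown_path[:k] for k in (4, 6, 10, 20))
    return feats

print(sorted(token_features("Obama", brown_path="0110101101"))[:5])
```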
As seen in the Experiments Section, our baseline system is competitive with state-of-the-art systems which use similar forms of information.
We train this system with stochastic gradient ascent, using the AdaGrad RDA algorithm [@duchi2011adaptive], with both $\ell_1$ and $\ell_2$ regularization, automatically tuned for each experimental setting by measuring performance on the development set.
NER with Phrase Embeddings {#sec:ner-phrase-embeddings}
--------------------------
In this section we describe how to extend our baseline NER system to use word embeddings as features.
![Chain CRF model for a NER system with three tokens. Filled rectangles represent factors. Circles at top represent labels, circles at bottom represent binary token based features. Filled circles indicate the phrase embeddings for each token.[]{data-label="fig:nerModel"}](figures/ner-model){width="50.00000%"}
First we group the tokens into phrases, assigning to each token a single phrase greedily. We prefer shorter phrases over longer ones, since our embeddings are often more reliable for the shorter phrases, and since the longer phrases in our dictionary are mostly extracted from Wikipedia page titles, which are not always semantically meaningful when seen in free text. We then add factors connecting each token’s label with the embedding for its phrase.
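One plausible reading of this greedy grouping is sketched below; the dictionary lookup, the maximum phrase length, and the tie-breaking rule are assumptions rather than the exact procedure used in the system.

```python
def group_into_phrases(tokens, phrase_dict, max_len=3):
    """Greedily assign each token to exactly one phrase, preferring shorter
    dictionary phrases over longer ones (an assumed tie-breaking rule)."""
    phrases, i = [], 0
    while i < len(tokens):
        chosen = (tokens[i],)                      # default: the token itself
        for n in range(2, max_len + 1):            # try shorter spans first
            cand = tuple(tokens[i:i + n])
            if len(cand) == n and " ".join(cand) in phrase_dict:
                chosen = cand
                break                              # shortest dictionary match wins
        phrases.append(" ".join(chosen))
        i += len(chosen)
    return phrases

phrase_dict = {"new york", "new york city"}
print(group_into_phrases(["visit", "new", "york", "city", "today"], phrase_dict))
# ['visit', 'new york', 'city', 'today']
```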
Figure \[fig:nerModel\] shows how phrase embeddings are plugged into a chain-CRF based NER system. Following Turian , we scale the embedding vector by a real number, which is a hyper-parameter tuned on the development data. Connecting tokens to phrase embeddings of their neighboring tokens did not improve performance for phrase embeddings, but it was mildly beneficial for token embeddings.
Lexicon-infused Skip-gram Models {#sec:semi-supervised-skip}
================================
The Skip-gram model as defined in Section \[skip-gram-model\] is fundamentally trained in unsupervised fashion using simply words and their n-gram contexts. Injecting some NER-specific supervision into the embeddings can make them more relevant to the NER task.
Lexicons are a simple yet powerful way to provide task-specific supervisory information to the model without the burden of labeling additional data. However, while lexicons have proven useful in various NLP tasks, a small amount of noise in a lexicon can severely impair its usefulness as a feature in log-linear models. For example, even legitimate data, such as the Chinese last name “He” occurring in a lexicon of person last names, can cause the lexicon feature to fire spuriously for many training tokens that are not labeled [Person]{}, and then this lexicon feature may be given low or even negative weight.
We propose to address both these problems by employing lexicons as part of the word embedding training. The skip-gram model can be trained to predict not only neighboring words but also lexicon membership of the central word (or phrase). The resulting embedding training will thus be somewhat supervised by tending to bring together the vectors of words sharing a lexicon membership. Furthermore, this type of training can effectively “clean” the influence of noisy lexicons because even if “He” appears in the [Person]{} lexicon, it will have a sufficiently different context distribution than labeled named person entities ([*e.g.*]{} a lack of preceding honorifics, etc) that the presence of this noise in the lexicon will not be as problematic as it was previously.
Furthermore, while Skip-gram models can be trained on billions of tokens to learn word embeddings for over a million word types in a single day, this might not be enough data to capture reliable embeddings of all relevant named entity phrases. Certain sets of word types, such as names of famous scientists, can occur infrequently enough that the Skip-gram model will not have enough contextual examples to learn embeddings that highlight their relevant similarities.
In this section we describe how to extend the Skip-gram model to incorporate auxiliary information from lexicons, or lists of related words, encouraging the model to assign similar embeddings to word types in similar lexicons.
![The middle phrase “New York” predicts the context word “state” as well as its lexicon classes: Business, US-State, and Wiki-Location.[]{data-label="lexiconExample"}]{width="50.00000%"}
In the basic Skip-gram model, as seen in Section \[skip-gram-model\], the likelihood is, for each n-gram, a product of the probability of the embedding associated with the middle word conditioned on each context word. We can inject supervision in this model by also predicting, given the embedding of the middle word, whether it is a member of each lexicon. Figure \[lexiconExample\] shows an example, where the word “New York” predicts “state”, and also its lexicon classes: Business, US-State and Wiki-Location.
Hence, with subscript $s$ iterating over each lexicon (or set of related words), and $l_s^{m_n}$ being a label for whether each word is in the set, and $w_s$ indicating the parameters of its classifier, the full likelihood of the model is
$$\label{eq:2}
\prod_{n \in N} \left(\prod_{c_n \in n} \prod_i \sigma(l^{c_n}_i {w^{c_n}_i}^T e_{m_n})\right)
\left( \prod_s \sigma(l_s^{m_n} w_s^T e_{m_n}) \right).$$
This is a simple modification to equation \[eq:1\] that also predicts the lexicon memberships. Note that the parameters $w_s$ of the auxiliary per-lexicon classifiers are also learned. The lexicons are not inserted in the binary tree with the words; instead, each lexicon gets its own binary classifier.
In practice, a very small fraction of words are present in a lexicon-class and this creates skewed training data, with overwhelmingly many negative examples. We address this issue by aggressively sub-sampling negative training data for each lexicon class. We do so by randomly selecting only 1% of the possible negative lexicons for each token.
A Skip-gram model has $V$ binary classifiers. A lexicon-infused Skip-gram model predicts an additional $K$ classes, and thus has $V+K$ binary classifiers. If the number of classes $K$ is large, we can induce a tree over the classes, similarly to what is done over words in the vocabulary. In our trained models, however, we have one million words in the vocabulary and twenty-two lexicons, so this is not necessary.
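A minimal sketch of the extra lexicon factor in equation \[eq:2\], including the aggressive sub-sampling of negative lexicon classes, is given below. The function, its argument names, and the example sub-sampling rate are illustrative assumptions; the real training loop, gradients, and data pipeline are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lexicon_log_likelihood(word_emb, lexicon_ids, lexicon_weights,
                           num_lexicons, neg_keep_prob):
    """Log of the extra lexicon factor of the objective for one word/phrase.

    word_emb        : embedding of the middle word or phrase
    lexicon_ids     : set of lexicon classes the word belongs to (positives)
    lexicon_weights : (num_lexicons x dim) matrix of per-lexicon classifiers w_s
    neg_keep_prob   : fraction of negative lexicon classes kept per token
    """
    ll = 0.0
    for s in range(num_lexicons):
        if s in lexicon_ids:
            label = +1.0
        elif rng.random() < neg_keep_prob:
            label = -1.0                       # sub-sampled negative example
        else:
            continue                           # negative example dropped
        ll += np.log(sigmoid(label * np.dot(lexicon_weights[s], word_emb)))
    return ll

emb = rng.normal(size=50)
W = rng.normal(size=(22, 50))                  # K = 22 lexicon classifiers
# neg_keep_prob=0.01 is an illustrative value; see the sub-sampling rates in the text
print(lexicon_log_likelihood(emb, {3, 17}, W, num_lexicons=22, neg_keep_prob=0.01))
```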
Experiments {#sec:experiments}
===========
Our phrase embeddings are learned on the combination of English Wikipedia and the RCV1 Corpus [@Rcv1]. Wikipedia contains 8M articles, and RCV1 contains 946K. To get candidate phrases we first select bigrams which have a pointwise mutual information score larger than 1000. We discard bigrams with stopwords from a manually selected list. If two bigrams share a token we add its corresponding trigram to our phrase list. We further add page titles from the English Wikipedia to the list of candidate phrases, as well as all word types. We get a total of about 10M phrases. We restrict the vocabulary to the most frequent 1M phrases. All our reported experiments are on 50-dimensional embeddings. Longer embeddings, while performing better on the semantic similarity task, as seen in Mikolov et al , did not perform as well on NER.
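The candidate-phrase selection can be sketched as follows. The PMI estimate below is the textbook definition over corpus counts; the exact scoring, discounting, and thresholding used to obtain the reported numbers are not specified here, so the helper should be read as an assumption-laden illustration.

```python
import math
from collections import Counter

def pmi_bigrams(tokens, threshold, stopwords=frozenset()):
    """Return bigrams whose pointwise mutual information exceeds `threshold`."""
    unigram, bigram = Counter(tokens), Counter(zip(tokens, tokens[1:]))
    total_uni, total_bi = sum(unigram.values()), sum(bigram.values())
    selected = set()
    for (a, b), n_ab in bigram.items():
        if a in stopwords or b in stopwords:
            continue
        pmi = math.log((n_ab / total_bi) /
                       ((unigram[a] / total_uni) * (unigram[b] / total_uni)))
        if pmi > threshold:
            selected.add((a, b))
    return selected

def trigram_candidates(bigrams):
    """Join two selected bigrams that share a token into a candidate trigram."""
    return {(a, b, d) for (a, b) in bigrams for (c, d) in bigrams if b == c}

bigrams = {("new", "york"), ("york", "city")}
print(trigram_candidates(bigrams))   # {('new', 'york', 'city')}
```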
To train phrase embeddings, we use a context of length 21. We use lexicons derived from Wikipedia categories and data from the US Census, totaling $K=22$ lexicon classes. We use a randomly selected 0.01% of negative training examples for lexicons.
We perform two sets of experiments. First, we validate our lexicon-infused phrase embeddings on a semantic similarity task, similar to Mikolov et al [@Mikolov:2013]. Then we evaluate their utility on two named-entity recognition tasks.
For the NER Experiments, we use the baseline system as described in Section \[sec:baseline-system\]. NER systems marked as “Skip-gram” consider phrase embeddings; “LexEmb” consider lexicon-infused embeddings; “Brown” use Brown clusters, and “Gaz” use our lexicons as features.
Syntactic and Semantic Similarity {#sec:synt-semant-simil}
---------------------------------
Mikolov et al. introduce a test set to measure syntactic and semantic regularities for words. This set contains 8869 semantic and 10675 syntactic questions. Each question consists of four words, such as big, biggest, small, smallest. It asks questions of the form “What is the word that is similar to *small* in the same sense as *biggest* is similar to *big*?”. To test this, we compute the vector $X = vector(\textrm{``biggest''}) - vector(\textrm{``big''}) + vector(\textrm{``small''})$. Next, we search for the word closest to $X$ in terms of cosine distance (excluding “biggest”, “small”, and “big”). This question is considered correctly answered only if the closest word found is “smallest”. As in Mikolov et al [@Mikolov:2013], we only search over words which are among the 30K most frequent words in the vocabulary.
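The evaluation procedure can be written compactly as below; this is a toy version with random vectors, while the real evaluation uses the trained embeddings and restricts the search to the 30K most frequent words.

```python
import numpy as np

def answer_analogy(a, b, c, embeddings, exclude=None):
    """Return the word whose vector is closest (cosine) to v(b) - v(a) + v(c)."""
    exclude = set(exclude or ()) | {a, b, c}
    target = embeddings[b] - embeddings[a] + embeddings[c]
    target = target / np.linalg.norm(target)
    best_word, best_sim = None, -np.inf
    for word, vec in embeddings.items():
        if word in exclude:
            continue
        sim = np.dot(vec, target) / np.linalg.norm(vec)
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# toy usage with random vectors (real embeddings come from the trained model)
rng = np.random.default_rng(0)
vocab = ["big", "biggest", "small", "smallest", "cat"]
embeddings = {w: rng.normal(size=50) for w in vocab}
print(answer_analogy("big", "biggest", "small", embeddings))
```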
  Model          Accuracy
  ------------ ------------
  Skip-Gram       29.89
  Lex-0.05        30.37
  Lex-0.01      **30.72**
  ------------ ------------
: \[skip-gram\] Accuracy for Semantic-Syntactic task, when restricted to Top 30K words. Lex-0.01 refers to a model trained with lexicons, where 0.01% of negative examples were used for training.
Table \[skip-gram\] depicts the accuracy on the Semantic-Syntactic task for models trained with 50 dimensions. We find that lexicon-infused embeddings perform better than Skip-gram. Further, Lex-0.01 performs the best, and we use this model for further NER experiments. There was no perceptible difference in computation cost from learning lexicon-infused embeddings versus learning standard Skip-gram embeddings.
CoNLL 2003 NER {#section:conll2003-results}
--------------
We applied our models to the CoNLL 2003 NER data set. All hyperparameters were tuned by training on the training set and evaluating on the development set. The best hyperparameter values were then used to train on the combination of training and development data and were applied to the test set to obtain the final results.
  System                            Dev         Test
  ---------------------------- ----------- -----------
  Baseline                        92.22       87.93
  Baseline + Brown                93.39       90.05
  Baseline + Skip-gram            93.68       89.68
  Baseline + LexEmb               93.81       89.56
  Baseline + Gaz                  93.69       89.27
  Baseline + Gaz + Brown          93.88       90.67
  Baseline + Gaz + Skip-gram      94.23       90.33
  Baseline + Gaz + LexEmb       **94.46**   **90.90**
  Ando and Zhang                  93.15       89.31
  Suzuki and Isozaki            **94.48**     89.92
  Ratinov and Roth                93.50       90.57
  Lin and Wu                        -        **90.90**
  ---------------------------- ----------- -----------
: \[ner:conll\] Final NER F1 scores for the CoNLL 2003 shared task. On the top are the systems presented in this paper, and on the bottom are previously published systems. The best results within each area are highlighted in bold. Lin and Wu 2009 use massive private industrial query-log data in training.
Table \[ner:conll\] shows the phrase F1 scores of all systems we implemented, as well as state-of-the-art results from the literature. Note that using traditional unsupervised Skip-gram embeddings is worse than using Brown clusters. In contrast, our lexicon-infused phrase embeddings **Lex-0.01** achieve 90.90, a state-of-the-art F1 score for the test set. This result matches the highest F1 previously reported, in Lin and Wu , but ours is the first system to do so without using massive private data. Our result is significantly better than the previous best using public data.
Ontonotes 5.0 NER {#sec:ontonotes-5.0-ner}
-----------------
Similarly to the CoNLL NER setup, we tuned the hyperparameters on the development set. We use the same list of lexicons as for CoNLL NER.
Table \[ner:ontonotes\] summarizes our results. We found that both Skip-gram and lexicon-infused embeddings give better results than using Brown clusters as features. However, in this case Skip-gram embeddings give marginally better results. (So as not to jeopardize our ability to fairly do further research on this task, we did not analyze the test set errors that may explain this.) These are, to the best of our knowledge, the first published performance numbers on the Ontonotes NER task.
  System                            Dev        Test
  ---------------------------- ----------- ----------
  Baseline                        79.04      79.85
  Baseline + Brown                79.95      81.38
  Baseline + Skip-gram            80.59      81.91
  Baseline + LexEmb               80.65      81.82
  Baseline + Gaz                  79.85      81.31
  Baseline + Gaz + Brown          80.53      82.05
  Baseline + Gaz + Skip-gram      80.70    **82.30**
  Baseline + Gaz + LexEmb       **80.81**    82.24
  ---------------------------- ----------- ----------
: \[ner:ontonotes\] Final NER F1 scores for Ontonotes 5.0 dataset. The results in bold face are the best on each evaluation set.
Conclusions
===========
We have shown how to inject external supervision to a Skip-gram model to learn better phrase embeddings. We demonstrate the quality of phrase embeddings on three tasks: Syntactic-semantic similarity, CoNLL 2003 NER, and Ontonotes 5.0 NER. In the process, we provide a new public state-of-the-art NER system for the widely contested CoNLL 2003 shared task.
We demonstrate how we can plug phrase embeddings into an existing log-linear CRF system.
This work demonstrates that it is possible to learn high-quality phrase embeddings and fine-tune them with external supervision from billions of tokens within one day of computation time. We further demonstrate that learning embeddings is important and key to improving NLP tasks such as NER.
In future work, we want to explore applying embeddings to other NLP tasks such as dependency parsing and coreference resolution. We also want to explore improving the embeddings using error gradients from NER.
|
---
abstract: 'We present a new first-order approach to strain-engineering of graphene’s electronic structure where no continuous displacement field $\mathbf{u}(x,y)$ is required. The approach is valid for negligible curvature. The theory is directly expressed in terms of atomic displacements under mechanical load, such that one can determine if mechanical strain is varying smoothly at each unit cell, and the extent to which sublattice symmetry holds. Since strain deforms lattice vectors at each unit cell, orthogonality between lattice and reciprocal lattice vectors leads to renormalization of the reciprocal lattice vectors as well, making the $K$ and $K'$ points shift in opposite directions. From this observation we conclude that no $K-$dependent gauges enter on a first-order theory. In this formulation of the theory the deformation potential and pseudo-magnetic field take discrete values at each graphene unit cell. We illustrate the formalism by providing strain-generated fields and local density of electronic states on graphene membranes with large numbers of atoms. The present method complements and goes beyond the prevalent approach, where strain engineering in graphene is based upon first-order continuum elasticity.'
address:
- 'Department of Physics. University of Arkansas. Fayetteville, AR 72701, USA'
- 'Departamento de Ingenier[í]{}a Mecánica. Universidad del Norte. Km. 5 V[í]{}a Puerto Colombia. Barranquilla, Colombia'
- 'Department of Materials Science and Engineering. University of Utah. Salt Lake City, UT 84112, USA'
- 'Department of Physics, University of Belgrade. Studentski trg 12, 11158 Belgrade, Serbia'
author:
- 'Salvador Barraza-Lopez'
- 'Alejandro A. Pacheco Sanjuan'
- Zhengfei Wang
- Mihajlo Vanević
date: 'Available online: 14 May 2013'
title: 'Strain-engineering of graphene’s electronic structure beyond continuum elasticity'
---
Introduction
============
The interplay between mechanical and electronic effects in carbon nanostructures has been studied for a long time (e.g., [@Ando2002; @GuineaNatPhys2010; @castroRMP; @Pereira1; @Vozmediano; @deJuanPRL2012; @Asgari; @r2; @Peeters1; @Peeters2; @Peeters3]). The mechanics in those studies invariably enters within the context of continuum elasticity. One of the most interesting predictions of the theory is the creation of large, and roughly uniform pseudo-magnetic fields and deformation potentials under strain conformations having a three-fold symmetry [@GuineaNatPhys2010]. Those theoretical predictions have been successfully verified experimentally [@Crommie; @Gomes2012].
Nevertheless, different theoretical approaches to strain engineering in graphene possess subtle points and apparent discrepancies [@deJuanPRL2012; @Kitt2012], which may hinder progress in the field. This motivated us to develop an approach [@us] which does not suffer from limitations inherent to continuum elasticity. This new formulation accommodates numerical verifications to determine when arbitrary mechanical deformations preserve sublattice symmetry. Contrary to the conclusions of Ref. [@Kitt2012], with this formulation one can also demonstrate in an explicit manner the absence of $K-$point dependent gauge fields on a first-order theory (see Refs. [@us] and [@arxiv; @Kitt2] as well). The formalism takes as its only direct input [*raw*]{} atomistic data, such as the data obtained from molecular dynamics runs. The goal of this paper is to present the method, making the derivation manifest. We illustrate the formalism by computing the gauge fields and the density of states in a graphene membrane under central load.
![Gauge fields from first-order continuum elasticity are defined regardless of spatial scale. A unit cell is shown in (b) and (c) for comparison. In this work, we define the pseudospin Hamiltonian for each unit cell using space-modulated, low-energy expansions of a tight-binding Hamiltonian in reciprocal space. As a result, in our approach the gauge fields will become discrete.[]{data-label="fig:F1"}](Fig1v2.pdf){width="45.00000%"}
Motivation
----------
The theory of strain-engineered electronic effects in graphene is semi-classical. One seeks to determine the effects of mechanical strain across a graphene membrane in terms of spatially-modulated pseudospin Hamiltonians $\mathcal{H}_{ps}$; these pseudospin Hamiltonians $\mathcal{H}_{ps}(\mathbf{q})$ are low-energy expansions of a Hamiltonian formally defined in reciprocal space. Under “long range” mechanical strain (extending over many unit cells and preserving sublattice symmetry [@Ando2002; @GuineaNatPhys2010; @castroRMP]) $\mathcal{H}_{ps}$ also become continuous and slowly-varying local functions of strain-derived gauges, so that $\mathcal{H}_{ps}\to\mathcal{H}_{ps}(\mathbf{q},\mathbf{r})$. Within this first-order approach, the salient effect of strain is a local shift of the $K$ and $K'$ points in opposite directions, similar to a shift induced by a magnetic field [@GuineaNatPhys2010; @castroRMP]. In the usual formulation of the theory [@Ando2002; @GuineaNatPhys2010; @castroRMP; @Pereira1; @Vozmediano; @deJuanPRL2012], this dependency on position leads to a [*continuous*]{} dependence of strain-induced fields $\mathbf{B}_s(\mathbf{r})$ and $E_s(\mathbf{r})$. Such continuous fields are customarily superimposed to a discrete lattice, as in Figure \[fig:F1\] [@GuineasSSC2012].
When expressed in terms of continuous functions, a pseudospin Hamiltonian $\mathcal{H}_{ps}$ is defined down to arbitrarily small spatial scales and it spans a zero area. In reality, however, the pseudospin Hamiltonian can only be defined per unit cell, so it should take a single value at an area of order $\sim a_0^2$ ($a_0$ is the lattice constant in the absence of strain).
This observation tells us already that the scale of the mechanical deformation with respect to a given unit cell is inherently lost in a description based on a continuum model. For this reason, it is important to develop an approach which is directly related to the atomic lattice, as opposed to its idealization as a continuum medium. In the present paper we show that in following this program one gains a deeper understanding of the interrelation between the mechanics and the electronic structure of graphene. Indeed, within this approach we are able to quantitatively analyze whether the proper phase conjugation of the pseudospin Hamiltonian holds at each unit cell. The approach presented here will give (for the first time) the possibility to explicitly check on any given graphene membrane under arbitrary strain whether mechanical strain varies smoothly on the scale of interatomic distances. Consistency in the present formalism will also lead to the conclusion that in such a scenario strain will not break the sublattice symmetry but the Dirac cones at the $K$ and $K'$ points will be shifted in opposite directions [@GuineaNatPhys2010; @castroRMP].
Clearly, for a reciprocal space to exist one has to preserve crystal symmetry, so that when crystal symmetry is strongly perturbed, the reciprocal space representation starts to lack physical meaning, presenting a limitation to the semiclassical theory. The lack of sublattice symmetry –observed on actual unit cells in this formulation beyond first-order continuum elasticity– may not allow proper phase conjugation of pseudospin Hamiltonians at unit cells undergoing very large mechanical deformations. Nevertheless this check cannot proceed –and hence has never been discussed– within a description of the theory on a continuum medium, because by construction there is no direct reference to actual atoms on a continuum.
As it is well-known, it is also possible to determine the electronic properties directly from a tight-binding Hamiltonian $\mathcal{H}$ in real space, without resorting to the semiclassical approximation and without imposing an [*a priori*]{} sublattice symmetry. That is, while the semiclassical $\mathcal{H}_{ps}(\mathbf{q},\mathbf{r})$ is defined in reciprocal space (thus assuming some reasonable preservation of crystalline order), the tight-binding Hamiltonian $\mathcal{H}$ in real space is more general and can be used for membranes with arbitrary spatial distribution and magnitude of the strain.
In addition, contrary to the claim of Ref. [@Kitt2012], the purported existence of $K-$point dependent gauge fields does not hold on a first-order formalism [@us; @arxiv]. What we find instead, is a shift in opposite directions of the $K$ and $K'$ points upon strain [@GuineaNatPhys2010].
Theory
======
Sublattice symmetry
-------------------
The continuum theories of strain engineering in graphene, being semiclassical in nature, require sublattice symmetry to hold [@Ando2002; @GuineaNatPhys2010]. On the other hand, no measure exists in the continuum theories [@Ando2002; @GuineaNatPhys2010; @castroRMP; @Pereira1; @Vozmediano; @deJuanPRL2012] to test sublattice symmetry on actual unit cells under a mechanical deformation. For this reason, sublattice symmetry is an implicit assumption embedded in the continuum approach.
![(a) Definitions of geometrical parameters in a unit cell. (b) Sublattice symmetry relates to how [*pairs*]{} of nearest-neighbor vectors (either in thick, or dashed lines) are modified due to strain. These vectors change by $\Delta \mathbf{\tau}_j$ and $\Delta \mathbf{\tau}_j'$ upon strain ($j=1,2$). Relative displacements of neighboring atoms lead to modified lattice vectors; the choice of renormalized lattice vectors will be unique [*only*]{} to the extent to which sublattice symmetry is preserved: $\Delta \mathbf{\tau}_j'\simeq \Delta \mathbf{\tau}_j$.[]{data-label="fig:F2"}](Fig2v2.pdf){width="45.00000%"}
To address the problem beyond the continuum approach, let us start by considering the unit cell before (Fig. \[fig:F2\](a)) and after arbitrary strain has been applied (Fig. \[fig:F2\](b)). For easy comparison of our results, we make the zigzag direction parallel to the $x-$axis, which is the choice made in Refs. [@GuineaNatPhys2010] and [@Vozmediano]. (Arbitrary choices of relative orientation are clearly possible; in Ref. [@us] we chose the zigzag direction to be parallel to the y-axis.)
The lattice vectors before the deformation are given by (Fig. \[fig:F2\](a)): $$\label{eq:defa}
\mathbf{a}_1=\left(1/2,\sqrt{3}/2\right)a_0,\text{ }\mathbf{a}_2=\left(-{1}/{2},{\sqrt{3}}/{2}\right)a_0,$$ $$\label{eq:deft}
\boldsymbol{\tau}_1=\left(\frac{\sqrt{3}}{2},\frac{1}{2}\right)\frac{a_0}{\sqrt{3}},\text{ } \boldsymbol{\tau}_2=\left(-\frac{\sqrt{3}}{2},\frac{1}{2}\right)\frac{a_0}{\sqrt{3}},\text{ }
\boldsymbol{\tau}_3=\left(0,-1\right)\frac{a_0}{\sqrt{3}}.$$ (Note that $\mathbf{a}_1=\boldsymbol{\tau}_1-\boldsymbol{\tau}_3$, and $\mathbf{a}_2=\boldsymbol{\tau}_2-\boldsymbol{\tau}_3$.)
After mechanical strain is applied (Fig. \[fig:F2\](b)), each local pseudospin Hamiltonian will only have physical meaning at the unit cells where: $$\label{eq:applicabilitycondition}
\Delta \boldsymbol{\tau}_j'\simeq\Delta \boldsymbol{\tau}_j \text{ (j=1,2)}.$$ Condition (\[eq:applicabilitycondition\]) can be re-expressed in terms of changes of angles $\Delta \alpha_j$ or lengths $\Delta L_j$ for pairs of nearest-neighbor vectors $\boldsymbol{\tau}_j$ and $\boldsymbol{\tau}_j'$ \[$j=1$ is shown in thick solid and $j=2$ in thin dashed lines in Fig. \[fig:F2\](b)\]: $$\label{eq:beta}
\small(\boldsymbol{\tau}_j+\Delta \boldsymbol{\tau}_j)\cdot(\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}'_j)=
|\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j||\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}'_j|\cos(\Delta\alpha_j),$$ $$\label{eq:sign}
\small\text{sgn}(\Delta \alpha_j)=\text{sgn}\left([(\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j)
\times(\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}'_j)]\cdot \hat{k}\right),$$ where $\hat{k}$ is a unit vector along the z-axis, $sgn$ is the sign function ($sgn(a)=+1$ if $a\ge 0$ and $sgn(a)=-1$ if $a <0$), and: $$\label{eq:L}
\small
\Delta L_j\equiv |\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j|-|\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}'_j|.$$
Even though in the problems of practical interest the deviations from the sublattice symmetry do tend to be small [@us], it is important to bear in mind that the sublattice symmetry [*does not hold a priori*]{} [@GuineaNatPhys2010]. It is therefore important to have a method to quantify such deviations and check whether the sublattice symmetry holds at the problem at hand. Forcing the sublattice symmetry to hold from the start amounts to introducing an artificial mechanical constraint on the membrane which is not justified on physical grounds [@Ericksen]. For this reason the method we propose is discrete and directly related to the actual lattice; it does not resort to the approximation of the membrane as a continuum medium [@Ando2002; @GuineaNatPhys2010; @castroRMP; @Pereira1; @Vozmediano; @deJuanPRL2012; @arxiv; @Kitt2]. Being expressed in terms of the actual atomic displacements, our formalism holds beyond the linear elastic regime where the first-order continuum elasticity may fail. The continuum formalism is recovered as a special case of the one presented here in the limit when $|\Delta\mathbf{\tau}_j|/a_0\to 0$.
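As a concrete illustration, the following numerical sketch evaluates the measures of Eqns. (4-6) for a single unit cell, taking the displaced nearest-neighbor vectors as input. The helper, the toy displacements, and the choice of units are ours, for illustration only.

```python
import numpy as np

def sublattice_symmetry_check(tau, dtau, dtau_prime):
    """Delta alpha_j (degrees) and Delta L_j of Eqns. (4)-(6) for j = 1, 2.

    tau        : (2, 2) array with rows tau_1, tau_2 (undeformed, in-plane)
    dtau       : (2, 2) array with rows Delta tau_1, Delta tau_2
    dtau_prime : (2, 2) array with rows Delta tau'_1, Delta tau'_2
    """
    d_alpha, d_length = [], []
    for j in range(2):
        u = tau[j] + dtau[j]
        v = tau[j] + dtau_prime[j]
        cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        cross_z = u[0] * v[1] - u[1] * v[0]          # z-component of u x v
        sign = 1.0 if cross_z >= 0 else -1.0         # sgn convention of Eq. (5)
        d_alpha.append(sign * np.arccos(np.clip(cos_a, -1.0, 1.0)))
        d_length.append(np.linalg.norm(u) - np.linalg.norm(v))
    return np.degrees(d_alpha), np.array(d_length)

a0 = 2.46                                            # lattice constant (angstrom)
tau = a0 / np.sqrt(3) * np.array([[np.sqrt(3) / 2, 0.5],
                                  [-np.sqrt(3) / 2, 0.5]])
dtau = 0.01 * np.array([[1.0, 0.0], [0.0, 1.0]])
dtau_prime = 0.01 * np.array([[1.0, 0.1], [0.0, 1.0]])   # slight asymmetry
print(sublattice_symmetry_check(tau, dtau, dtau_prime))
```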
Renormalization of the lattice and reciprocal lattice vectors {#sec:3}
-------------------------------------------------------------
In the absence of mechanical strain, the reciprocal lattice vectors $\mathbf{b}_1$ and $\mathbf{b}_2$ are obtained by standard methods: We define $\mathcal{A}\equiv(\mathbf{a}_1^T,\mathbf{a}_2^T)$, with $\mathbf{a}_1$ and $\mathbf{a}_2$ given in Eq. (\[eq:defa\]) and shown in Fig. \[fig:F2\](a). The reciprocal lattice vectors $\mathcal{B}\equiv(\mathbf{b}_1^T,\mathbf{b}_2^T)$ are related to the lattice vectors by [@MartinBook]: $$\label{eq:realreciprocal}
\mathcal{B}^T=2\pi\mathcal{A}^{-1}.$$ With the choice we made for $\mathbf{a}_1$ and $\mathbf{a}_2$ we get: $$\mathbf{b}_1=\left(1,\frac{1}{\sqrt{3}}\right)\frac{2\pi}{a_0} \text{, and }
\mathbf{b}_2=\left(-1,\frac{1}{\sqrt{3}}\right)\frac{2\pi}{a_0}.$$ As seen in Fig. \[fig:F3\](a) the $K-$points on the first Brillouin zone are defined by: $$\mathbf{K}_1=\frac{2\mathbf{b}_1+\mathbf{b}_2}{3}, \text{ }\mathbf{K}_2=\frac{\mathbf{b}_1-\mathbf{b}_2}{3} \text{, and } \mathbf{K}_3=-\frac{\mathbf{b}_1+2\mathbf{b}_2}{3},$$ and: $$\mathbf{K}_4=-\mathbf{K}_1,\text{ } \mathbf{K}_5=-\mathbf{K}_2, \text{ and }\mathbf{K}_6=-\mathbf{K}_3.$$
![First Brillouin zone (a) before and (b) after mechanical strain is applied. The reciprocal lattice vectors are shown, as well as the changes of the high-symmetry points at the corners of the Brillouin zone. Note that independent $K$ points ($K$ and $K'$) move in the opposite directions. The dashed hexagon in (b) represents the boundary of the first Brillouin zone in the absence of strain.[]{data-label="fig:F3"}](Fig3v2.pdf){width="45.00000%"}
The relative positions between atoms change when strain is applied: $\boldsymbol{\tau}_j\to \boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j$ ($j=1,2,3)$, and $-\boldsymbol{\tau}_j\to -\boldsymbol{\tau}_j-\Delta\boldsymbol{\tau}_j'$ ($j=1,2$). For negligible curvature, one may assume that $\Delta\boldsymbol{\tau}_j\cdot\hat{z}=\Delta z_j\sim 0$ (and similar for the primed displacements $\Delta \boldsymbol{\tau}_j'$). We present here a formulation of the theory strictly valid for in-plane strain (it would also be valid for membranes with negligible curvature).
We wish to find out how reciprocal lattice vectors change to first order in displacements under mechanical load. In order for reciprocal lattice vectors to make sense at each unit cell, Eqn. \[eq:applicabilitycondition\] must hold. In terms of numerical quantities one would need that $\Delta \alpha_j$ and $\Delta L_j$ are all close to zero. In that case we set $\Delta \boldsymbol{\tau}_j'\to \Delta \boldsymbol{\tau}_j$ for j=1,2, and continue our program.
For this purpose we define: $$\Delta \mathbf{a}_1\equiv\Delta \boldsymbol{\tau}_1-\Delta \boldsymbol{\tau}_3 \text{, and }
\Delta \mathbf{a}_2\equiv\Delta \boldsymbol{\tau}_2-\Delta \boldsymbol{\tau}_3,$$ or in terms of (two-dimensional) components: $$\Delta \mathcal{A}\equiv
\left(
\begin{matrix}
\Delta \tau_{1x}-\Delta \tau_{3x}& \Delta \tau_{2x}-\Delta \tau_{3x}\\
\Delta \tau_{1y}-\Delta \tau_{3y}& \Delta \tau_{2y}-\Delta \tau_{3y}
\end{matrix}
\right).$$ The matrix $\mathcal{A}$ changes to $\mathcal{A}'=\mathcal{A}+\Delta\mathcal{A}$, and we must modify $\mathcal{B}$ so that Eqn. \[eq:realreciprocal\] still holds under mechanical load. To first order in displacements $\mathcal{A}'^{-1}$ becomes: $$\label{eq:correction}
\mathcal{A}'^{-1}=(1+\mathcal{A}^{-1}\Delta\mathcal{A})^{-1}\mathcal{A}^{-1}\simeq \mathcal{A}^{-1}-\mathcal{A}^{-1}\Delta\mathcal{A}\mathcal{A}^{-1}.$$ By comparing Eqns. \[eq:realreciprocal\] and \[eq:correction\], the reciprocal lattice vectors in Fig. \[fig:F3\](b) must be renormalized by: $$\Delta\mathcal{B}=-2\pi\left(\mathcal{A}^{-1}\Delta\mathcal{A}\mathcal{A}^{-1}\right)^T.$$ We note that the existence of this additional term is quite evident when working directly on the atomic lattice, but it was missed in Ref. [@Kitt2012], where the theory was expressed on a continuum. Let us now calculate some shifts of the $K-$points due to strain. For example, $\mathbf{K}_2$ ($=K$ in Fig. \[fig:F3\](a)) requires an additional contribution, which we find by explicit calculation to be: $$\Delta K=\Delta\mathbf{K}_2=-\frac{4\pi}{3a_0^2}
\left(\Delta\tau_{1x}-\Delta\tau_{2x},\frac{\Delta \tau_{1x}+\Delta \tau_{2x}-2\Delta \tau_{3x}}{\sqrt{3}}\right),$$ and using Eqn. (10) one immediately sees that $\Delta K'=-\Delta\mathbf{K}_2$, so that the $K$ ($\mathbf{K}_2$) and $K'$ ($-\mathbf{K}_2$) points shift in opposite directions, as expected [@GuineaNatPhys2010; @castroRMP].
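These expressions are straightforward to evaluate numerically for each unit cell. The sketch below is illustrative only (the units and the toy displacements are ours); it builds $\Delta\mathcal{A}$ from the three displaced bond vectors and returns $\Delta\mathcal{B}$ together with the shift of $\mathbf{K}_2$, reproducing the explicit expression for $\Delta K$ above for the test displacement.

```python
import numpy as np

def reciprocal_shift(a0, dtau):
    """First-order shift of the reciprocal lattice vectors and of K = K_2.

    dtau : (3, 2) array with rows Delta tau_1, Delta tau_2, Delta tau_3,
           the in-plane displacements of the three bond vectors of one cell.
    """
    # columns of A are the lattice vectors a_1, a_2 (zigzag along x)
    A = a0 * np.array([[0.5, -0.5],
                       [np.sqrt(3) / 2, np.sqrt(3) / 2]])
    dA = np.column_stack((dtau[0] - dtau[2], dtau[1] - dtau[2]))
    Ainv = np.linalg.inv(A)
    B = 2 * np.pi * Ainv.T                       # columns are b_1, b_2
    dB = -2 * np.pi * (Ainv @ dA @ Ainv).T       # first-order correction
    K2 = (B[:, 0] - B[:, 1]) / 3.0
    dK2 = (dB[:, 0] - dB[:, 1]) / 3.0
    return dB, K2, dK2

a0 = 2.46                                        # lattice constant (angstrom)
dtau = 0.01 * np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 0.0]])
dB, K2, dK2 = reciprocal_shift(a0, dtau)
print("Delta K =", dK2, "; Delta K' = -Delta K to first order")
```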
Gauge fields
------------
Equation \[eq:applicabilitycondition\] gives a condition under which mechanical strain that varies smoothly on the scale of interatomic distances does not break the sublattice symmetry [@GuineaNatPhys2010]. On the other hand, arbitrary strain breaks down to some extent the periodicity of the lattice, and “short-range” strain can be identified to occur at unit cells where $\Delta \alpha_j$ and $\Delta L_j$ cease to be zero by significant margins.
This observation provides the rationale for expressing the gauge fields without ever leaving the atomic lattice: When $\Delta \boldsymbol{\tau}_j'\simeq\Delta \boldsymbol{\tau}_j$ at each unit cell a mechanical distortion can be considered “long-range,” and the first-order theory is valid. The process to lay down the gauge terms to first order is straightforward. Local gauge fields can be computed as low energy approximations to the following $2\times 2$ pseudospin Hamiltonian: $$\label{eq:tbh}
\left(
\begin{matrix}
E_{s,A} & g^*\\
g & E_{s,B}
\end{matrix}
\right),$$ with $g\equiv -\sum_{j=1}^3(t+\delta t_j)e^{i(\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j)\cdot(\mathbf{K}_n+\Delta\mathbf{K}_n+\mathbf{q})}$, and $n=1,...,6$. We defer discussion of the diagonal terms for now.
Keeping exponents to first order we have: $$\small
(\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j)\cdot(\mathbf{K}_n+\Delta\mathbf{K}_n+\mathbf{q})\simeq
\boldsymbol{\tau}_j\cdot\mathbf{K}_n+\boldsymbol{\tau}_j\cdot\Delta\mathbf{K}_n+\Delta\boldsymbol{\tau}_j\cdot\mathbf{K}_n+
\boldsymbol{\tau}_j\cdot\mathbf{q}.$$ The exponent is next expressed to first-order: $$\begin{aligned}
e^{i(\boldsymbol{\tau}_j\cdot\mathbf{K}_n+\boldsymbol{\tau}_j\cdot\Delta\mathbf{K}_n+\Delta\boldsymbol{\tau}_j\cdot\mathbf{K}_n+
\boldsymbol{\tau}_j\cdot\mathbf{q})}\simeq \nonumber\\
ie^{i\boldsymbol{\tau}_j\cdot\mathbf{K}_n}\boldsymbol{\tau}_j\cdot\mathbf{q}+
e^{i\boldsymbol{\tau}_j\cdot\mathbf{K}_n}[1+i(\boldsymbol{\tau}_j\cdot\Delta\mathbf{K}_n+\Delta\boldsymbol{\tau}_j\cdot\mathbf{K}_n)].\end{aligned}$$ Carrying out explicit calculations, one can see that: $$\label{eq:cancellation}
\sum_{j=1}^3e^{i\boldsymbol{\tau}_j\cdot\mathbf{K}_n}[1+i(\boldsymbol{\tau}_j\cdot\Delta\mathbf{K}_n+\Delta\boldsymbol{\tau}_j\cdot\mathbf{K}_n)]=0.$$
For example, at $K=\mathbf{K}_2$ we have: $$\left[1+\frac{4i\pi(\Delta \tau_{1x}+\Delta \tau_{2x}+\Delta \tau_{3x})}{9a_0}\right](1+e^{\frac{2\pi i}{3}}-e^{\frac{\pi i}{3}}),$$ with phasors adding up to zero. Similar phasor cancellations occur at every other $K-$point.
The term linear in $\Delta \mathbf{K}_n$ in Eqn. \[eq:cancellation\] cancels out the fictitious $K-$point dependent gauge fields proposed in Ref. [@Kitt2012], which originated from the term linear in $\Delta \mathbf{\tau}_j$ in this same equation. This observation constitutes yet another reason for the formulation of the theory directly on the atomic lattice. With this we have demonstrated that gauges will not depend explicitly on $K-$points, so we now continue formulating the theory considering the $\mathbf{K}_2$ point only [@GuineaNatPhys2010; @Vozmediano; @castroRMP].
Equation \[eq:tbh\] takes the following form to first order at $\mathbf{K}_2$ in the low-energy regime: $$\begin{aligned}
\label{eq:ps1}
\mathcal{H}_{ps}=&
\left(
\begin{smallmatrix}
0 & t\sum_{j=1}^3ie^{-i\mathbf{K}_2\cdot\boldsymbol{\tau}_j}\boldsymbol{\tau}_j\cdot\mathbf{q}\\
-t\sum_{j=1}^3ie^{i\mathbf{K}_2\cdot\boldsymbol{\tau}_j}\boldsymbol{\tau}_j\cdot\mathbf{q} & 0
\end{smallmatrix}
\right)\nonumber\\
+&\left(
\begin{smallmatrix}
E_{s,A} & -\sum_{j=1}^3\delta t_je^{-i\mathbf{K}_2\cdot\boldsymbol{\tau}_j}\\
-\sum_{j=1}^3\delta t_je^{i\mathbf{K}_2\cdot\boldsymbol{\tau}_j} & E_{s,B}
\end{smallmatrix}
\right),\end{aligned}$$ with the first term on the right-hand side reducing to the standard pseudospin Hamiltonian in the absence of strain. The change of the hopping parameter $t$ is related to the variation of length, as explained in Refs. [@Ando2002] and [@Vozmediano]: $$\delta t_j=-\frac{|\beta| t}{a_0^2} \boldsymbol{\tau}_j\cdot\Delta\boldsymbol{\tau}_j.$$ This way Eqn. \[eq:ps1\] becomes: $$\begin{aligned}
\mathcal{H}_{ps}=
\hbar v_F\boldsymbol{\sigma}\cdot \mathbf{q}
+\left(
\begin{smallmatrix}
E_{s,A} & f_1^*\\
f_1 & E_{s,B}
\end{smallmatrix}
\right),\end{aligned}$$ with $f_1^*=\frac{|\beta|t}{2a_0^2}
[2\boldsymbol{\tau}_3\cdot\Delta\boldsymbol{\tau}_3
-\boldsymbol{\tau}_1\cdot\Delta\boldsymbol{\tau}_1
-\boldsymbol{\tau}_2\cdot\Delta\boldsymbol{\tau}_2
+\sqrt{3}i(\boldsymbol{\tau}_2\cdot\Delta\boldsymbol{\tau}_2-\boldsymbol{\tau}_1\cdot\Delta\boldsymbol{\tau}_1)]$, and $\hbar v_F\equiv
\frac{\sqrt{3}a_0t}{2}$. The parameter $f_1$ can be expressed in terms of a vector potential $A_s$ as $f_1=-\hbar v_F\frac{eA_s}{\hbar}$. This way: $$\begin{aligned}
\label{eq:Asdiscrete}
\small
A_s&=-\frac{|\beta|\phi_0}{\pi a_0^3}[
\frac{2\boldsymbol{\tau}_3\cdot\Delta\boldsymbol{\tau}_3
-\boldsymbol{\tau}_1\cdot\Delta\boldsymbol{\tau}_1
-\boldsymbol{\tau}_2\cdot\Delta\boldsymbol{\tau}_2}{\sqrt{3}}\nonumber\\
&-i(
\boldsymbol{\tau}_2\cdot\Delta\boldsymbol{\tau}_2
-\boldsymbol{\tau}_1\cdot\Delta\boldsymbol{\tau}_1)].\end{aligned}$$
We finally analyze the diagonal entries in Eqn. , which are given as follows [@us]: $$\label{eq:EsA}
E_{s,A}=-\frac{0.3 eV}{0.12}\frac{1}{3}\sum_{j=1}^3\frac{|\boldsymbol{\tau}_j-\Delta\boldsymbol{\tau}_j|-a_0/\sqrt{3}}{a_0/\sqrt{3}},$$ and $$\label{eq:EsB}
E_{s,B}=-\frac{0.3 eV}{0.12}\frac{1}{3}\sum_{j=1}^3\frac{|\boldsymbol{\tau}_j-\Delta\boldsymbol{\tau}'_j|-a_0/\sqrt{3}}{a_0/\sqrt{3}}.$$ These entries represent the scalar deformation potential which we take to linear order in the average bond increase [@YWSon].
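For a membrane given as a list of atomic positions, Eqns. (\[eq:Asdiscrete\])-(\[eq:EsB\]) can be evaluated cell by cell. The sketch below is a minimal illustration: the default values of $|\beta|$ and $\phi_0$ and the toy displacements are placeholders, and the signs inside the norms follow the expressions for $E_{s,A/B}$ exactly as written above.

```python
import numpy as np

def unit_cell_gauge_fields(tau, dtau, dtau_prime, a0, beta=3.0, phi0=1.0):
    """Discrete A_s, E_{s,A} and E_{s,B} for one unit cell.

    tau, dtau, dtau_prime : (3, 2) arrays of bond vectors and their displacements
    beta, phi0            : |beta| and the flux quantum; the defaults are
                            placeholders and simply set the units of A_s
    """
    td = [np.dot(tau[j], dtau[j]) for j in range(3)]
    A_s = -beta * phi0 / (np.pi * a0**3) * (
        (2 * td[2] - td[0] - td[1]) / np.sqrt(3) - 1j * (td[1] - td[0]))
    bond = a0 / np.sqrt(3)
    E_sA = -(0.3 / 0.12) / 3 * sum(
        (np.linalg.norm(tau[j] - dtau[j]) - bond) / bond for j in range(3))
    E_sB = -(0.3 / 0.12) / 3 * sum(
        (np.linalg.norm(tau[j] - dtau_prime[j]) - bond) / bond for j in range(3))
    return A_s, E_sA, E_sB          # E_s values in eV

a0 = 2.46
tau = a0 / np.sqrt(3) * np.array([[np.sqrt(3) / 2, 0.5],
                                  [-np.sqrt(3) / 2, 0.5],
                                  [0.0, -1.0]])
dtau = 0.01 * np.array([[0.5, 0.0], [-0.5, 0.0], [0.0, -0.3]])
print(unit_cell_gauge_fields(tau, dtau, dtau, a0))
```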
Relation to the formalism from first-order continuum elasticity
---------------------------------------------------------------
We next establish how the theory based on a continuum relates to the present formalism. In the absence of significant curvature, the continuum limit is achieved when $\frac{|\Delta\boldsymbol{\tau}_j|}{a_0}\to 0$ (for $j=1,2,3$). We have then (Cauchy-Born rule): $\boldsymbol{\tau}_j\cdot \Delta \boldsymbol{\tau}_j\to \boldsymbol{\tau}_j\left(
\begin{smallmatrix}
u_{xx}&u_{xy}\\
u_{xy}&u_{yy}
\end{smallmatrix}\right)\boldsymbol{\tau}_j^T$, where $u_{ij}$ are the entries of the strain tensor.
This way Eqn. becomes: $$\label{eq:limit}
A_s\to \frac{|\beta|\phi_0}{2\sqrt{3}\pi a_0}(u_{xx}-u_{yy}-2iu_{xy}),$$ as expected [@GuineaNatPhys2010; @Vozmediano].
Equation \[eq:limit\] confirms that if the zigzag direction is parallel to the $x-$axis the vector potential we have obtained is consistent with known results in the proper limit [@GuineaNatPhys2010; @Vozmediano]. Besides representing a consistent first-order formalism, the present approach is exceptionally suited for the analysis of “raw” atomistic data –obtained, for example, from molecular dynamics simulations– as there is no need to determine the strain tensor explicitly: the relevant equations (\[eq:Asdiscrete\], \[eq:EsA\], \[eq:EsB\]) take as input the changes in atomic positions upon strain. Within the present approach $N/2$ space-modulated pseudospinor Hamiltonians can be built for a graphene membrane having $N$ atoms.
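This limit can also be verified symbolically; the short script below substitutes the Cauchy-Born rule into Eqn. (\[eq:Asdiscrete\]) and checks that the difference with Eqn. (\[eq:limit\]) vanishes. It is an illustrative consistency check, not part of the original derivation.

```python
import sympy as sp

a0, beta, phi0 = sp.symbols('a0 beta phi0', positive=True)
uxx, uyy, uxy = sp.symbols('u_xx u_yy u_xy', real=True)

U = sp.Matrix([[uxx, uxy], [uxy, uyy]])
taus = [sp.Matrix([sp.sqrt(3) / 2, sp.Rational(1, 2)]) * a0 / sp.sqrt(3),
        sp.Matrix([-sp.sqrt(3) / 2, sp.Rational(1, 2)]) * a0 / sp.sqrt(3),
        sp.Matrix([0, -1]) * a0 / sp.sqrt(3)]

# Cauchy-Born substitution: tau_j . Delta tau_j -> tau_j^T U tau_j
td = [(t.T * U * t)[0, 0] for t in taus]

A_s = -beta * phi0 / (sp.pi * a0**3) * (
    (2 * td[2] - td[0] - td[1]) / sp.sqrt(3) - sp.I * (td[1] - td[0]))

expected = beta * phi0 / (2 * sp.sqrt(3) * sp.pi * a0) * (uxx - uyy - 2 * sp.I * uxy)
print(sp.simplify(A_s - expected))   # prints 0, recovering the continuum limit
```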
Applying the formalism to rippled graphene membranes
====================================================
We finish the present contribution by briefly illustrating the formalism on two experimentally relevant case examples. The developments presented here are motivated by recent experiments where freestanding graphene membranes are studied by local probes [@usold; @stmNanoscale2012; @stroscio]. (One must keep in mind, nevertheless, that the theory provided up to this point is rather general.)
Rippled membranes with no external mechanical load
--------------------------------------------------
It is an established fact that graphene membranes will be naturally rippled due to a number of physical processes, including temperature-induced (i.e., dynamic) structural distortions [@Fasolino1], and static structural distortions created by the mechanical and electrostatic interaction with a substrate, a deposition process [@Nature2007], or line stress at the edges of finite-size membranes [@us].
In reference [@deJuanPRB] it is argued that the rippled texture of freestanding graphene leads to observable consequences, the strongest being a sizeable velocity renormalization. In order to demonstrate such a statement, one must take a closer look at the underlying mechanics of the problem. The model [@deJuanPRB] assumes that a graphene membrane is originally pre-strained (to draw an analogy, one would say that the membrane is an “ironed tablecloth”), so that curvature due to a single wrinkle directly leads to increases in interatomic distances. Those distance increases directly modify the metric on the curved space. In practice, an external electrostatic field can be used to realize such a pre-strained configuration [@Fogler].
To improve the treatment of the mechanics beyond first-order continuum elasticity, let us consider what happens if this pre-strained assumption is relaxed (continuing our analogy, the rippled membrane in Fig. \[fig:F4\](a) would then be akin to a “wrinkled tablecloth prior to ironing”): How do the gauge fields look in such a scenario? With our formalism, we can probe the interrelation between mechanics and the electronic structure directly. In Figure \[fig:F4\](a) we display a graphene membrane with three million atoms at 1 Kelvin after relaxing strain at the edges. The strain relaxation proceeds by the formation of ripples or wrinkles on the membrane. This initial configuration is already different from a flat (“pre-strained”) configuration within the continuum formalism, customarily enforced prior to the application of strain.
*The ripples must be “ironed out” before any significant increase in interatomic distances can occur:* “Isometric deformations” lead to curvature without any increase in interatomic distances [@us] (continuing our analogy, this is usually what happens with clothing). We believe that a local determination of the metric tensor from atomic displacements alone will definitely be useful in continuing to make a case for velocity renormalization [@deJuanPRL2012; @arxiv; @deJuanPRB]; this is presently work in progress [@us2].
![A finite-size graphene membrane at 1 Kelvin. (a) The membrane forms ripples to relieve mechanical strain originating from its finite size. (b) We could not discern changes on the LDOS (which relates to renormalization of the Fermi velocity) on a completely flat membrane and after line strain is relieved. (c) Measures for changes in angles and lengths at individual unit cells (Eqns. 4-6) displaying noise on a small scale, and consistent with the formation of ripples. (d) The deformation potential, mass term and (e) the pseudo-magnetic field are inherently noisy as well.[]{data-label="fig:F4"}](Fig4v2.pdf){width="48.00000%"}
The local density of electronic states is obtained directly from the Hamiltonian of the membrane in configuration space $\mathcal{H}$, and shown in Fig. \[fig:F4\](b). When compared to the DOS from a completely flat membrane, no observable variation on the slope of the DOS appears, and hence, no renormalization of the Fermi velocity either.
One can determine the extent to which nearest-neighbor vectors will preserve sublattice symmetry in terms of $\Delta\alpha_j$ and $\Delta L_j$, Eqns. (4-6). We observe small and apparently random fluctuations on those measures in Fig. \[fig:F4\](c): $\Delta L_j\lesssim $ 1% and $\Delta \alpha_j\lesssim 2^{o}$.
We display the deformation potential in Figure \[fig:F4\](d) in terms of the average ($E_{def}$) and difference ($E_{mass}$) between $E_{s,A}$ and $E_{s,B}$ (Eqns. (\[eq:EsA\]) and (\[eq:EsB\])) at any given unit cell: $$E_{def}=\frac{1}{2}(E_{s,A}+E_{s,B}), \text{ and } E_{mass}=\frac{1}{2}(E_{s,A}-E_{s,B}).$$ Both quantities are of the order of tens of meVs.
The ripples lead to the random-looking pseudo-magnetic field shown in Fig. \[fig:F4\](e), reminiscent of the electron density plots created by random charge puddles [@Rossi1; @Rossi2]. We next consider how strain by a sharp probe modifies the results in Fig. \[fig:F4\].
Rippled membranes under mechanical load
---------------------------------------
In what follows we consider a central extruder creating strain on the freestanding membrane. For this, we placed the membrane shown in Fig. \[fig:F4\] on top of a substrate (shown in blue/light gray in Fig. \[fig:F5\](a)) with a triangular-shaped hole (in green/dark gray in Fig. \[fig:F5\](a)). The membrane is held fixed in position when on the substrate, and pushed down by a sharp tip at its geometrical center, down to a distance $\Gamma$=10 nm.
![Strained membrane: (a) The section in blue (light gray) is kept fixed, and strain is applied by pushing down the triangular section in green (dark gray) with a sharp extruder, located at the geometrical center. (b) Deviations from proper sublattice symmetry are concentrated at the section directly underneath the sharp tip, where the deformation is the largest and strain is the most inhomogeneous. (c-d) Gauge fields.[]{data-label="fig:F5"}](Fig5v2.pdf){width="49.00000%"}
As indicated earlier, sublattice symmetry is not exactly satisfied right underneath the tip, where $\Delta\alpha_j$ and $\Delta L_j$ take their largest values (Fig. \[fig:F5\](b)). While $\Delta L_j$ still displays some fluctuations, this is not the case for $\Delta \alpha_j$ (the scale for $\Delta \alpha_j$ is identical to that from Fig. \[fig:F4\](c)). The large white areas tell us that fluctuations in $\Delta\alpha_j$ are wiped out upon load as the extruder removes wrinkles. This observation stems from the lattice-explicit consideration of the mechanics.
We have presented a detailed discussion of the problem along these lines [@us]. We found that for small magnitudes of load a rippled membrane will adapt to an extruding tip isometrically. This observation is important in the context of the formulation with curvature [@deJuanPRB; @deJuanPRL2012], because in that formulation there is the assumption that distances between atoms increase as soon as graphene deviates from a perfect 2-dimensional plate.
The gauge fields given in Fig. \[fig:F5\](c-d) reflect the circular symmetry induced by the circular shape of the extruding tip [@us].
![Local density of states on the membrane under strain shown in Fig. \[fig:F5\]. The locations where the DOS is computed are shown in the insets (the most symmetric line patterns are displayed in yellow).[]{data-label="fig:F6"}](Fig6v2.pdf){width="48.00000%"}
We finish the discussion by probing the local density of states at many locations in Fig. \[fig:F6\], which may relate to the discussion of confinement by gauge fields [@Blanter]. $E_s$ was not included in computing the DOS curves.
Some generic features of the DOS are clearly visible: (i) Near the extruder, the deformation is already beyond the linear regime, and the DOS is indeed renormalized for locations close to the mechanical extruder [@deJuanPRL2012; @arxiv; @deJuanPRB]. (ii) A sequence of features appears in the DOS farther away from the extruder. Because the field is not homogeneous and perhaps due to energy broadening we are unable to identify a central peak. As indicated on the insets, the plots in Fig. 6(b) and 6(d) are obtained along high-symmetry lines (the colors on the DOS subplots correspond with the colored lines on the insets). For this reason they look almost identical, and the three sets of curves (corresponding to the DOS along different lines) overlap. Due to lower symmetry, the LDOS in Fig. 6(a) and 6(c) appear symmetric in pairs, with the exception of the plots highlighted in gray. (The light ‘v’-shaped curve in all subplots is the reference DOS in the absence of strain.)
LDOS curves complement the insight obtained from gauge field plots. Hence, they should also be reported in discussing strain engineering of graphene’s electronic structure, particularly in situations where gauge fields are inhomogeneous.
Conclusions
===========
We presented a novel framework to study the relation between mechanical strain and the electronic structure of graphene membranes. Gauge fields are expressed directly in terms of changes in atomic positions upon strain. Within this approach, it is possible to determine the extent to which the sublattice symmetry is preserved. In addition, we find that there are no $K-$dependent gauge fields in the first-order theory. We have illustrated the method by computing the strain-induced gauge fields on a rippled graphene membrane with and without mechanical load. In doing so, we have initiated a necessary discussion of mechanical effects falling beyond a description within first-order continuum elasticity. Such analysis is relevant for accurate determination of gauge fields and has not received proper attention yet.\
[**Acknowledgments**]{}\
We acknowledge conversations with B. Uchoa, and computer support from HPC at Arkansas (*RazorII*), and XSEDE (TG-PHY090002, *Blacklight*, and *Stampede*). M.V. acknowledges support by the Serbian Ministry of Science, Project No. 171027.
|
---
abstract: 'The problem of scattering of the background radiation on relic cosmological wormholes is considered. It is shown that static wormholes do not perturb the spectrum at all. The presence of peculiar velocities of wormholes results in a distortion of the CMB spectrum which is analogous to the kinematic Sunyaev-Zel’dovich effect. In the first order in $v/c$ the distortion of the CMB cannot be separated from the Compton scattering on electrons. In the next orders the scattering on wormholes exhibits some differences from the Compton scattering. The spectrum of high-energy cosmic-ray particles does not change its form under the KSZ effect, but undergoes a common Doppler shift. Such features may give a new tool to detect the presence of relic wormholes in our Universe.'
author:
- 'A.A. Kirillov and E.P. Savelova'
title: 'On distortion of the background radiation spectrum by wormholes: kinematic Sunyaev-Zel’dovich effect'
---
Introduction
============
As it was recently shown, some basic difficulties of cold dark matter models ($\Lambda $CDM) can be cured by the presence of relic cosmological wormholes [@KS11; @ks16; @KS17]. To avoid misunderstanding we point out that relic wormholes are not going to replace completely the dark matter paradigm, since there exist phenomena related to dark matter which wormholes are unable to explain. The existence of relic wormholes however is not in conflict with the simultaneous existence of dark matter particles, the so-called WIMPs (weakly interacting massive particles). Besides the dark matter phenomena observed in astrophysics (dark matter halos in galaxies, the CMB spectrum, observed structures, etc.), the presence of WIMPs is well motivated by numerous problems of the Standard Model in particle physics, e.g., see the list in [@Feng10]. In particular, the observed high-energy cosmic-ray electrons and positrons [@RS] may enable the observation of phenomena such as dark-matter particle annihilation or decay [@grib]. Relic wormholes do not produce such an effect, though spherically symmetric wormholes collapse and form black holes and may produce all astrophysical phenomena related to them.
We however point out that WIMPs may have a direct relation to virtual wormholes. Such wormholes have a virtual character and describe quantum topology fluctuations [@S15; @S16]. It was shown recently in [@KS15] that for all types of relativistic fields, the scattering on virtual wormholes leads to the appearance of additional very heavy particles, which play the role of auxiliary fields in the invariant scheme of Pauli–Villars regularization. In the simplest picture the mass spectrum of such additional particles starts from the Planck value $M_{pl}$ and is completely determined by parameters of the vacuum distribution of virtual wormholes. It is important that such additional particles are generated for all sorts of particles in the Standard Model and have a discrete spectrum of increasing masses. For example, standard massless photons are accompanied by massive photons with masses $M_{i}=a_{i}M_{pl}$, where the coefficients $a_{i}$ ($a_{1}<a_{2}<...$) are expressed via the distribution of virtual wormholes [@KS15]. At the very early stages of the development of the Universe such particles were in abundance in the primordial hot plasma. During the expansion the Universe cools and most of such particles decay. At least, all such particles decay if they are involved in the strong or electromagnetic interactions. However weakly interacting particles may survive till the present days and they indeed may play the role of dark matter (e.g., extremely massive gravitons, neutrinos, etc.). The decay of such superheavy particles into unstable particles with large mass is described by [@grib], while their subsequent decay into quarks and leptons leads to events in cosmic rays. In particular, the detected break in the teraelectronvolt cosmic-ray spectrum of electrons and positrons [@RS] can be interpreted as the trace of the decay of two sorts of such particles with different masses $M_{1}\ll M_{2}$. The values $M_{i}$ determine the absolute boundaries of the respective spectra, while the factor $\exp \left( -\frac{\Delta M}{T_{c}}\right)$ (where $T_{c}$ is the temperature at which the primordial content of such particles had been tempered) determines the ratio of the respective number of events. One may expect that an analogous break should be observed at higher energies as well.
In the astrophysical picture relic wormholes also produce a number of effects analogous to the effects of dark matter particles and, therefore, the number density of such particles in galactic halos may change essentially when the presence of relic wormholes is taken into account. Indeed, as was demonstrated in [@KS11], at very large scales wormholes contribute to the matter density perturbations exactly like standard cold dark matter particles and do not destroy the predictions of $\Lambda $CDM models. However, at smaller sub-galactic scales wormholes strongly interact with all existing particles. They scatter photons, baryons, and dark matter particles and, therefore, they smooth away the cusps predicted by numerical simulations at galactic centers [@NFW]. Recall that cold heavy particles unavoidably form cusps $\rho _{DM}\sim 1/r$, while observations [@G04; @B03; @W03] show no such feature. This may be considered an essential indirect argument in favor of the existence of relic wormholes, since all other known mechanisms of removing cusps are not efficient.
We also point out that strong theoretical arguments for the existence of relic wormholes come from lattice quantum gravity [@AJL05]. Indeed, it is assumed that at Planckian scales the topological structure of our Universe should have fractal properties. During the inflationary stage the topological structure of space should temper and may survive until the present day in the form of relic cosmological wormholes. The problem of the formation of relic wormholes has not been described rigorously yet and we do not discuss it here. Nevertheless, some hints of such a picture can be found in the distribution of galaxies. On scales below $100\,$Mpc the distribution of galaxies definitely shows fractal features [@L98], see also the more recent results in [@CIR14]. Such a structure may serve as a direct trace of the actual topological structure of space. Indeed, if we assume the homogeneous distribution of galaxies, then the number counts $N(R)\sim R^D$ (where $N(R)$ is the number of galaxies within the radius $R$ and $D$ is the dimension) reflect the behavior of the physical volume of space. It crosses over to homogeneity only on larger scales [@Planck18; @Planck24], which however cannot rule out the possibility of the existence of relic wormholes. Indeed, as it was shown in [@B16], in the absence of peculiar velocities wormholes do not perturb the spectrum of the cosmic background radiation and, therefore, they cannot be distinguished on the sky. The detection of relic wormholes requires studying more subtle effects. Of primary interest are those effects which can be disentangled from effects produced by black holes and other forms of matter.
In the present paper we consider the scattering of the background radiation on wormholes and show that they can, in principle, be observed by means of an effect analogous to the kinematic Sunyaev-Zel’dovich effect (KSZ) [@ZS; @ZSa]. The KSZ signal is based on the inverse Compton scattering of relic photons on a moving electron gas. It represents one of the main tools in studying peculiar motions of clusters and groups of galaxies, e.g., see [@KSZ1; @KSZ2; @KSZ3], and see also more applications in a recent review [@B16]. It is actually produced by any kind of matter which scatters the CMB (not only by a hot electron gas). As it was shown in [@B16], in the first order in $V/c$ the contribution of wormholes to KSZ cannot be separated from that of the electron gas in clusters and groups. Therefore, there are two possibilities. The first one is to look for such an effect in those spots on the sky where the baryonic matter is absent, e.g., in voids, where the leading contribution will come from wormholes alone. In this case, however, we also need some additional independent effects to be sure that the signal comes from a void and not from the last scattering sphere. The second possibility is to study next order corrections and peculiar features of the scattering of the background radiation on wormholes. In the case of the CMB it turns out that already in the second order the KSZ effect on wormholes differs from that on other sorts of matter. In the case of high-energy cosmic rays KSZ simply produces a shift of the spectrum without a change of its form.
The simplest wormhole is described by a spherically symmetric configuration. Spherical wormholes can be made stable only by the presence of exotic matter [@HVis98]. Since natural sources of exotic matter have not been found, we should state that all relic spherical wormholes collapse very rapidly and hardly survive until the present day. If this occurs at a relatively late time compared to the time of photon decoupling, then emission from the collapsed structures may contribute to the cosmic background radiation different from the CMB (e.g., infrared, X-ray, etc.). Remnants of such spherical wormholes cannot be distinguished from ordinary primordial black holes and we do not discuss them here. However, as it was shown recently in [@ks16], stable relic wormholes may exist without exotic matter, if they have a less symmetric structure. The rate of evolution of such wormholes is comparable with the rate of cosmological expansion and, therefore, such wormholes may survive until the present day. The less symmetric wormholes have throat sections in the form of a torus or even more complicated surfaces [@ks16]. We use a torus-like wormhole in considering some peculiar features of the scattering on a single wormhole in Section 4. Torus-like wormholes have random orientations in space. Upon averaging over orientations the torus-like wormhole acquires the features of a spherically symmetric configuration. This allows us to use spherical wormholes in considering estimates for KSZ and the second order corrections to KSZ.
Cross-sections and KSZ effect
=============================
The scattering of electromagnetic waves on a spherical wormhole was first considered in [@sct; @sct2]. There are two important features of such a scattering: the generation of a specific interference picture upon scattering on a single wormhole [@KSWS] and the generation of a diffuse halo around any discrete source [@KSS]. If a wormhole is not very big, the interference picture gives too weak a signal and, therefore, it can be used only in future observations. The generation of the diffuse halo around discrete sources may have various interpretations, and this makes it difficult to disentangle the effects of wormholes from, for example, the effects of scattering on dust.
The simplest model of a spherical wormhole is given by a couple of conjugated spherical mirrors: when a relic photon falls on one mirror it is emitted, upon the scattering, from the second (conjugated) mirror. Such mirrors represent two different entrances into the wormhole throat and they can be separated by an arbitrarily large distance in the outer space. The cross-section of such a process has been described by [@KSWS] and can be summarized as follows. Let an incident plane wave (a set of photons) fall on one throat. Then the scattered signal has two components. The first component represents the standard diffraction (which corresponds to the absorption of CMB photons on the throat) and forms a very narrow beam along the direction of propagation. This is described by the cross-section $$\frac{d\sigma _{absor}}{d\Omega }=\sigma _{0}\frac{\left( ka\right) ^{2}}{4\pi }\left\vert \frac{2J_{1}\left( ka\sin \chi \right) }{ka\sin \chi }\right\vert ^{2}, \label{abs}$$ where $\sigma _{0}=\pi a^{2}$, $a$ is the radius of the throat, $k$ is the wave vector, $\chi $ is the angle from the direction of propagation of the incident photons, and $J_{1}$ is the Bessel function. Together with this part the second throat emits an omnidirectional isotropic flux with the cross-section $$\frac{d\sigma _{emit}}{d\Omega }=\sigma _{0}\frac{1}{4\pi }. \label{flux}$$ Both total cross-sections coincide, $$\int \frac{d\sigma _{absor}}{d\Omega }d\Omega =\int \frac{d\sigma _{emit}}{d\Omega }d\Omega =\sigma _{0},$$ which expresses the conservation law for the number of absorbed and emitted photons. In the absence of peculiar motions (a static gas of wormholes) every wormhole throat end absorbs photons as an absolutely black body, while the second end re-radiates them in an isotropic manner (\[flux\]) with the same black-body spectrum. It is clear that no distortion of the CMB spectrum will appear at all. In the presence of peculiar motions, the motion of one end of the wormhole throat with respect to the CMB causes an angular dependence of the incident radiation with the temperature $$T_{1}=\frac{T_{CMB}}{\sqrt{1-\beta _{1}^{2}}\left( 1+\beta _{1}\cos \theta _{1}\right) }\simeq T_{CMB}\left( 1-\beta _{1}\cos \theta _{1}+\frac{1}{2}\left( 1+2\cos ^{2}\theta _{1}\right) \beta _{1}^{2}+...\right) ,$$ where $\beta _{1}=V_{1}/c$ is the velocity of the throat end and $\beta _{1}\cos \theta _{1}=\left( \vec{\beta}_{1}\vec{n}\right) $, $\vec{n}$ being the direction of the incident photons. Therefore, the absorbed radiation has the spectrum $$\rho \left( T_{1}\right) =\rho \left( T_{CMB}\right) +\frac{d\rho \left( T_{CMB}\right) }{dT}\Delta T_{1}+\frac{1}{2}\frac{d^{2}\rho \left( T_{CMB}\right) }{dT^{2}}\Delta T_{1}^{2}+...,$$ where $\rho \left( T\right) $ is the Planckian spectrum and $\Delta T_{1}=T_{1}-T_{CMB}$. As was discussed previously in [@B16], in the first order in $\beta _{1}$ the above anisotropy does not contribute to the re-radiation of relic photons from the second end according to (\[flux\]). Indeed, integration over the incident angle $\theta _{1}$ gives $\left\langle \Delta T\right\rangle =-\frac{1}{4\pi }\int \beta _{1}\cos \theta _{1}d\Omega =0$. In other words, in the first order in $\beta _{1}$ the peculiar motions of the absorbing ends of wormholes can be ignored. In this case the KSZ effects caused by wormholes and by the standard baryonic matter mix and cannot be disentangled. The difference however appears in the second order in $\beta _{1}$. Indeed, considering the second order we find $$\left( \Delta \rho \right) _{2}=T_{CMB}\frac{d\rho \left( T_{CMB}\right) }{dT}\frac{\left( 1+2\left\langle \cos ^{2}\theta _{1}\right\rangle \right) }{2}\beta _{1}^{2}+\frac{T_{CMB}^{2}}{2}\frac{d^{2}\rho \left( T_{CMB}\right) }{dT^{2}}\left\langle \cos ^{2}\theta _{1}\right\rangle \beta _{1}^{2},$$ where $( \Delta \rho ) _{2}=\left\langle \rho \left( T_{1}\right) -\rho \left( T_{CMB}\right) \right\rangle _{2}$, which gives $$\left( \Delta \rho \right) _{2}=\frac{\beta _{1}^{2}}{6T_{CMB}^{3}}\frac{d}{dT}\left( \frac{d\rho \left( T_{CMB}\right) }{dT}T_{CMB}^{5}\right) ,$$ where we used $\left\langle \cos ^{2}\theta _{1}\right\rangle =\frac{1}{3}$. This means that together with the standard Planckian spectrum $I(T_{CMB})=c\rho (x) =I_{0}\frac{x^{3}}{e^{x}-1}$, where $I_{0}=\frac{2h}{c^{2}}\left( \frac{k_{B}T_{CMB}}{h}\right) ^{3}$ and $x=h\nu /k_{B}T_{CMB}$, every wormhole emits an additional isotropic flux of photons with the spectrum $$\left( \frac{\Delta I}{I_{0}}\right) _{2}=f(x)\beta _{1}^{2}, \label{sp}$$ where $f(x)=\frac{1}{6}\frac{x^{4}e^{x}\left( 3e^{x}-3+x\left( e^{x}+1\right) \right) }{\left( e^{x}-1\right) ^{3}}$. We point out that in this case the distortion of the spectrum does not reduce to a frequency-invariant shift of the temperature. For the sake of comparison we plot the function $f(x)$ (dotted line, for $\beta _{1}^{2}=0.05$), the standard Planckian spectrum (in circles) and the sum (solid line) in Fig. 1.
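As a quick cross-check of Eq. (\[sp\]), the dimensionless spectra can be evaluated directly. The following minimal Python sketch (assuming the standard numpy/matplotlib stack; the value $\beta_1^2=0.05$ is only the illustrative choice used for the dotted curve) reproduces the qualitative content of Fig. 1.

```python
import numpy as np
import matplotlib.pyplot as plt

def planck(x):
    # dimensionless Planck spectrum I/I0 = x^3/(e^x - 1)
    return x**3 / np.expm1(x)

def f(x):
    # distortion shape f(x) entering Eq. (sp)
    ex = np.exp(x)
    return x**4 * ex * (3.0*ex - 3.0 + x*(ex + 1.0)) / (6.0 * np.expm1(x)**3)

beta1_sq = 0.05                      # illustrative value, as for the dotted curve of Fig. 1
x = np.linspace(0.05, 15.0, 500)

plt.plot(x, planck(x), 'o', ms=2, label=r'Planck $x^3/(e^x-1)$')
plt.plot(x, beta1_sq * f(x), ':', label=r'distortion $f(x)\,\beta_1^2$')
plt.plot(x, planck(x) + beta1_sq * f(x), '-', label='sum')
plt.xlabel(r'$x=h\nu/k_B T_{CMB}$')
plt.ylabel(r'$I/I_0$')
plt.legend()
plt.show()
```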
The estimate of the relative integrated amplitude of the radiation emitted by a single wormhole is given by $$\frac{\left( \Delta I\right) _{2}}{I_{CMB}}=5.33\times \beta _{1}^{2}. \label{2ksz}$$
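The coefficient in (\[2ksz\]) follows from integrating the distortion shape $f(x)$ against the Planck shape $x^{3}/(e^{x}-1)$ over all frequencies. A short numerical check (a sketch assuming scipy is available) recovers the quoted value:

```python
import numpy as np
from scipy.integrate import quad

f      = lambda x: x**4*np.exp(x)*(3*np.exp(x) - 3 + x*(np.exp(x) + 1))/(6*(np.exp(x) - 1)**3)
planck = lambda x: x**3/(np.exp(x) - 1)

num, _ = quad(f, 1e-8, 50)       # integral of the distortion shape f(x)
den, _ = quad(planck, 1e-8, 50)  # integral of the Planck shape (= pi^4/15)
print(num/den)                   # ~5.33, the coefficient of Eq. (2ksz)
```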
When considering a cloud of wormhole throats, in addition to the standard CMB every throat radiates photons with the flux $(\Delta I)_{2}$. In the presence of peculiar velocities the CMB part undergoes the Doppler shift (which is the complete analog of the KSZ effect) $$\frac{\Delta T_{KSZ}}{T_{CMB}}=\beta _{p}\tau _{w}.$$ Here $\beta _{p}$ is the projection of the peculiar velocity of the cloud along the line of sight and the optical depth $\tau _{w}$ is defined as $$\tau _{w}=\int \pi a^{2}n(r,a)\,da\,d\ell , \label{tau}$$ where the integration is taken along the line of sight and $n(r,a)$ is the number density of wormholes measured from the center of the cloud and depending on the throat radius $a$. The optical depth $\tau _{w}$ is interpreted as follows. Let $L$ be the characteristic size of the cloud of wormholes. Then on the sky it will cover the surface $S\sim L^{2}$, while the portion of this surface covered by wormhole throats is given by $$\tau _{w}=\frac{N\pi \overline{a^{2}}}{L^{2}}=\pi \overline{a^{2}}\,\overline{n}L,$$ where $N$ is the number of wormhole throats in the cloud and $\overline{n}$ is the mean density. In a sufficiently dense cloud, $\tau _{w}\sim 1$, this effect simply produces a hot or a cold (depending on the sign of $\beta _{p}$) spot on the CMB maps. It is important that KSZ corresponds to a frequency-invariant temperature shift which leaves the primary CMB spectrum unchanged.
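To get a feeling for the magnitude of $\tau _{w}$, one can insert numbers into the thin-cloud estimate above. The following sketch uses purely hypothetical cloud parameters (the values of $N$, $\overline{a}$, $L$ and $\beta _{p}$ below are illustrative assumptions, not fitted quantities):

```python
import numpy as np

# Purely hypothetical cloud parameters (illustrative assumptions only):
N      = 1.0e12      # number of wormhole throats in the cloud
a_mean = 3.1e16      # mean throat radius [m] (~1 pc)
L      = 3.1e22      # characteristic cloud size [m] (~1 Mpc)
beta_p = 1.0e-3      # peculiar velocity of the cloud in units of c

n_mean = N / L**3                          # mean number density of throats
tau_w  = np.pi * a_mean**2 * n_mean * L    # optical depth, Eq. (tau)
dT_over_T = beta_p * tau_w                 # KSZ-like temperature shift of the CMB part

print(tau_w, dT_over_T)   # an order-unity tau_w and a ~1e-3 shift for this arbitrary choice
```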
The second order effect discussed earlier does not depend on the velocities of the throats in the cloud. It depends, however, on the velocities of the conjugated entrances into the throats and is given by $$\frac{\left( \Delta I\right) _{2KSZ}}{I_{CMB}}=5.33\times \left\langle \beta _{1}^{2}\right\rangle \tau _{w}, \label{rt}$$ where $\left\langle \beta _{1}^{2}\right\rangle \tau _{w}=\int \beta _{1}^{2}\pi a^{2}n(r,a,\beta _{1}^{2})\,da\,d\ell \,d\beta _{1}$. In general, such an effect is very small, since the typical values do not exceed $\left\langle \beta _{1}^{2}\right\rangle \sim 10^{-4}$. It is however measurable for sufficiently dense clouds, $\tau _{w}\sim 1$, and, what is important, it cannot be reduced to a shift of the CMB temperature and, therefore, it slightly changes the primary CMB spectrum according to (\[sp\]). This gives a new tool which allows one to distinguish the contribution of wormholes to the KSZ effect from that of the rest of the matter.
Cosmic-ray spectrum and KSZ effect
==================================
Measurements of the high-energy cosmic-ray spectrum of electrons and positrons are described by a smoothly broken power-law model, e.g., see [@RS], $$\Phi(E)=\Phi_0\left(\frac{E_0}{E}\right)^{\gamma _1 }\left[1+\left(\frac{E}{bE_0}\right)^{\frac{\gamma _2-\gamma _1}{\Delta}}\right]^{-\Delta},
\label{pl}$$ where $\Delta=0.1$, $\Phi_0=A/E_0$, $E_0=100$ GeV, $A=(1.64 \pm 0.01)\times 10^{-2}$ $\rm m^{-2}\, s^{-1}\, sr^{-1}$, $b=9.14 \pm 0.98$, $\gamma _1 = 3.09 \pm 0.01$, and $\gamma _2 =3.92 \pm 0.20$. It shows that at energies $E\simeq E_b=bE_0$ the spectral index changes from $\gamma _1 \approx 3.1$ to $\gamma _2\approx 3.9$ [@RS]. The cross-section described in the previous section works in the case of cosmic rays as well. First we point out that the presence of relic wormholes leads to the formation of a diffuse halo (of a low intensity) around any discrete source. When wormholes do not move in space, they do not change the spectrum at all [@KSZ08]. Consider now a particle incident on a wormhole. In the rest frame of the wormhole the energy of the incident particle changes according to the standard Lorentz transformation. For high-energy particles this gives $$E^{\prime }=\frac{E+\left( Vp\right) }{\sqrt{1-\beta ^{2}}}\simeq E\left( 1+\beta \frac{cp}{E}\cos \theta \right) \simeq E\left( 1+\beta \cos \theta \right) .$$ Here $\cos \theta $ is the angle between the direction of the incident particle and the velocity of the wormhole entrance. Thus the change of the energy of the particle is given by $$\frac{\Delta E^{\prime }}{E}=\frac{1}{\sqrt{1-\beta ^{2}}}\left( 1+\beta \sqrt{1-\frac{m^{2}c^{4}}{E^{2}}}\cos \theta \right) -1\simeq \beta \cos \theta .$$ According to (\[flux\]) the incident particles produce an isotropic flux from the second entrance into the wormhole throat (in the rest frame of the second entrance) with the same energy $E^{\prime }$. In the case of an isotropic distribution of incident particles the mean change of the energy vanishes, $\left\langle \cos \theta \right\rangle =0$. This means that for the isotropic background we have the same situation as in the case of the CMB: the motion of the absorbing end of the wormhole does not matter. The peculiar motion of the emitting throat entrance produces an effect analogous to the KSZ effect, namely the common Doppler shift of the energy $\Delta E^{\prime }/E=\beta _{p}$, where $\beta _{p}=V_{p}/c$ is the projection of the wormhole velocity on the direction pointing to the observer. For the spectrum (\[pl\]) it can be described in terms of the respective shift of the value $\Delta E_0/E_0=\beta _{p}$, which admits both signs. The basic property of the spectrum (\[pl\]) is that such a shift does not change the form of the spectrum. In the case $\tau _{w}\ll 1$ the form of the spectrum also does not change in the next orders in $\beta _{p}$. For sufficiently dense clouds of wormholes, $\tau _{w}\sim 1$, relativistic corrections include also terms of the form $\Delta E^{\prime }/E\sim m^2c^4/E^2$ which do produce distortions of the form of the spectrum, but they are too small at high energies and can be neglected. The Doppler shift appears also in the case when the source of radiation moves, and both effects merge. KSZ however somewhat smoothes the break in the spectral index at the energy $E\simeq E_b\simeq 0.9$ TeV. Thus the basic effect of relic wormholes which admits observation is a small shift (positive or negative) of the value $E_b$ in high-energy cosmic rays.
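For illustration, the broken power law (\[pl\]) and the effect of a common Doppler shift can be coded directly; the best-fit parameters below are the ones quoted above, while the peculiar velocity $\beta _{p}$ is an arbitrary illustrative assumption:

```python
import numpy as np

# Best-fit parameters of the smoothly broken power law, Eq. (pl), quoted from [@RS]
E0, b, Delta   = 100.0, 9.14, 0.1        # E0 in GeV
gamma1, gamma2 = 3.09, 3.92
A    = 1.64e-2                           # m^-2 s^-1 sr^-1
Phi0 = A / E0

def Phi(E):
    """Cosmic-ray e+ + e- flux of Eq. (pl); E in GeV."""
    return Phi0 * (E0 / E)**gamma1 * (1.0 + (E / (b * E0))**((gamma2 - gamma1) / Delta))**(-Delta)

# A common Doppler shift E -> E/(1 + beta_p) only rescales the energy axis,
# so the spectral shape is preserved and the break moves by Delta E_b / E_b = beta_p.
beta_p = 1e-3                            # illustrative peculiar velocity (assumption)
E = np.logspace(1.5, 4.0, 200)           # 30 GeV .. 10 TeV
ratio = Phi(E / (1.0 + beta_p)) / Phi(E)
print("break energy E_b =", b * E0, "GeV, shifted by", beta_p * b * E0, "GeV")
print("flux ratio varies smoothly between", ratio.min(), "and", ratio.max())
```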
To conclude this section, we point out that such a mechanism (the generation of a shift of the spectrum) works during the whole period of the evolution of the Universe. In particular, it also works at the time of photon decoupling, and if there were processes such as dark-matter particle annihilation or decay, the effects of scattering on wormholes should be imprinted in the spectrum.
The scattering of CMB on a single torus-like wormhole
=====================================================
In the case of a single wormhole we should account for two important features. The first feature is the fact that a stable cosmological wormhole has a throat section in the form of a torus [@ks16]. The simplest model of a torus-like wormhole is given by a couple of conjugated torus-like mirrors. Therefore, if such a wormhole is sufficiently big, then the simplest way to find it is to look for direct imprints on CMB maps. Indeed, by means of the KSZ effect a wormhole should produce a ring on the CMB map that has a temperature which is slightly different from the background temperature. In particular, it was reported recently in [@MNR] that there are, with a confidence level of 99.7 per cent, such ring-type structures in the observed cosmic microwave background. We hope that such structures could indeed be imprints of cosmological wormholes. In this case, however, such structures should more frequently have an elliptical form, since tori (wormhole throats) have random orientations in space.
The second important feature is that the forward scattering (i.e., the absorption of CMB photons (\[abs\])) produces a much bigger effect (since $kR\gg 1$ and $ka\gg 1$, where $k$ is the wave vector, and $R$ and $a$ are the largest and the smallest radii of the torus, respectively). This effect corresponds to the standard diffraction on a torus-like obstacle. In the approximation $\mu =a/R\ll 1$ we may use the flat screen approximation.
Let the orientation of the torus (the direction of the normal to the torus) be along the $Oz$ axis, i.e., $m=(0,0,1)$. The cross-section depends on two groups of angular variables, i.e., the two unit vectors $n_{0}(\phi _{0},\theta _{0})$ and $n(\phi ,\theta )$. The vector $n_{0}=(\cos \phi _{0}\sin \theta _{0},\sin \phi _{0}\sin \theta _{0},\cos \theta _{0})$ points in the direction of the incident photon (i.e., the wave vector is $k_{0}=\frac{\omega }{c}n_{0}$), while the vector $n$ corresponds to the scattered photons. Then the cross-section is given by $$\frac{d\sigma }{d\Omega }=\sigma _{R}\sin ^{2}\theta _{0}\frac{\left( kR\right) ^{2}}{4\pi }\left( \frac{1+\cos ^{2}\theta }{2}\right) \left\vert F\right\vert ^{2},$$ where $\sigma _{R}=\pi R^{2}$ and the function $F$ is $$F=\left( 1+\mu \right) ^{2}\frac{2J_{1}\left( \left( 1+\mu \right) y\right) }{\left( 1+\mu \right) y}-\left( 1-\mu \right) ^{2}\frac{2J_{1}\left( \left( 1-\mu \right) y\right) }{\left( 1-\mu \right) y},$$ where $y=kR\xi $. We also denote $$\xi =\left( \sin ^{2}\theta +\sin ^{2}\theta _{0}-2\sin \theta \sin \theta _{0}\cos \left( \phi -\phi _{0}\right) \right) ^{1/2}$$ and $J_{n}(y)$ are the Bessel functions. We have also averaged $\sigma $ over polarizations. Let us expand the kernel $F$ in the small parameter $\mu \ll 1$, which gives $$F\approx 2\mu \left( y\left( \frac{2J_{1}\left( y\right) }{y}\right) ^{\prime }+2\frac{2J_{1}\left( y\right) }{y}\right) .$$ Using the property $\left( J_{\nu }(y)/y^{\nu }\right) ^{\prime }=-J_{\nu +1}(y)/y^{\nu }$ and the identity $J_{2}\left( y\right) =\frac{2}{y}J_{1}(y)-J_{0}(y)$ we get $F\approx 4\mu J_{0}(y) $, which gives $$\frac{d\sigma }{d\Omega }=8\sigma _{R}\frac{\left( ka\right) ^{2}}{4\pi }\left( 1-\cos ^{2}\theta _{0}\right) \left( 1+\cos ^{2}\theta \right) \left\vert J_{0}(kR\xi )\right\vert ^{2} .$$ The intensity of the scattered radiation in the solid angle $d\Omega $ and in the interval of frequencies $d\nu $ is given by $$\frac{1}{I_{\nu } }\frac{d\Delta I_{\nu }}{d\Omega }=\frac{2\sigma _{R}\left( ka\right) ^{2}}{\pi }\left( 1+\cos ^{2}\theta \right) \int \left\vert J_{0}(kR\xi )\right\vert ^{2}\sin ^{2}\theta _{0}d\Omega _{0},$$ where $I_{\nu }=c\rho (x)$ is the intensity of the incident black-body radiation and $x=h\nu /k_{B}T_{CMB}$. The above expression improves the absorption part given by (\[abs\]). Since $kR\gg 1$, it shows the presence of specific ring-type oscillations in the cross-section. Indeed, if we consider normal incidence of the photons, i.e., $\theta _{0}=0$, then we find $\frac{d\sigma }{d\Omega } \thicksim \left\vert J_{0}(kR\theta )\right\vert ^{2}$. For sufficiently remote throats the value $R\theta$ is small and such oscillations should be imprinted in the diffraction picture of CMB in the form of rings.
Conclusion
==========
In conclusion we point out that in searching for the KSZ signal from wormholes we meet two basic problems. The first one is the need for independent observational effects related to wormholes which can be compared to KSZ. The simplest effect of this kind is found if we consider the propagation of cosmic rays (of any origin) through the same region of space where we expect to observe KSZ. Wormholes were shown to produce an additional damping in cosmic rays [@KSZ08] which is determined by the same optical depth (\[tau\]) $\tau _{w}$. Thus, if there is a discrete source of a standard intensity, the optical depth can be directly measured. The damping is caused by the capture of some part of the particles by wormholes. Captured particles are re-emitted (by the second entrance into the wormhole throats) in an isotropic way, which forms a diffuse halo around any discrete source. In the absence of peculiar motions of wormholes such a halo has the same energy spectrum. Peculiar motions cause a shift of the initial cosmic-ray spectrum without a change of its form. For example, random motions should somewhat smooth the detected break in the teraelectronvolt cosmic-ray spectrum of electrons and positrons [@RS], while common peculiar motions simply produce an additional shift of the threshold value $E_b$. The Doppler shift of the spectrum can also be attributed to the motion of the source itself. Therefore, disentangling KSZ from the motion of the source represents a very difficult problem, and such subtle effects require further investigation.
Another possibility is to extract the basic parameters (such as the density of wormholes $n_{w}$ and the characteristic cross-section $\sigma _{0}=\pi \overline{a^{2}}$) from the distribution of dark matter. For example, the behavior of dark matter in galaxies may fix these two parameters, e.g., see [@PSS; @KT], by means of measuring the empirical Newton potential. Indeed, in galaxies the distributions of dark and luminous matter strongly correlate [@D04]. This means the existence of a rigid relation $\rho _{DM}(k)=b(k)\rho _{vis}(k)$, where $\rho (k)$ are the Fourier transforms of the dark and visible matter densities. Then from the observed distribution of dark matter in galaxies, e.g., see [@G04; @W03], we may retrieve the empirical Newton potential as [@KT] $$\phi _{emp}=-\frac{4\pi Gb(k)}{k^{2}} ,$$ which describes the deviation from Newton's law. Observations of rotation curves are fitted by the simple function $b(k)=\left( 1+(Rk)^{-\alpha }\right)$. At small scales ($Rk\gg 1$) it gives the standard Newton's law, while at large scales $Rk\ll 1$ it transforms into the fractal law, i.e., the logarithmic behavior. We point out that the correction observed in galaxies corresponds to the values $\alpha \approx 1$ and $R\sim 5\,$kpc. These parameters can be related to the distribution of wormholes [@KS17], but this problem requires further study.
The second problem is that in galaxies and clusters (as well as in the hot X-ray gas) the KSZ effect on the CMB produced by wormholes mixes with that produced by other sorts of matter (dust, hot gas, etc.). The difference appears only in the second order in $V/c$ (\[rt\]), which requires a sufficiently high accuracy of observations.
[Perez Bergliaffa & Hibberd(2000)]{} Ambjorn J., Jurkiewicz J., Loll R., 2005, *Phys. Rev. Lett.*, 95, 171301.
Ambrosi, G. et al. 2017, *Nature*, **552**, 24475.
Battistelli E. S. et.al., 2016, *Int. J. Mod. Phys. D*, **25**, 1630023.
Borriello A., Salucci P., Danese L., (2003) Mon. Not. R. Astron. Soc. **341** 1109.
Clement G., 1984, *Int. Journ. Theor. Phys.*, **23**, 335.
Conde-Saavedra G., Iribarrem A., Ribeiro M.B., 2015, *Physica A*, **417**, 332-344.
Donato F., Gentile G., Salucci P., 2004, MNRAS, 353, L17
Feng J.L., 2010, Annu. Rev. Astron. Astrophys. **48**, 495–545
Gentile G., Salucci P., Klein U., Vergani D., Kalberla P., 2004, MNRAS, 351, 903
Grib A.A., Pavlov Yu.V., 2009, *Gravitation and Cosmology*, **15**, 44–48.
Hand N., et.al., 2012, *Phys. Rev. Lett.* **109**, 041101.
Hochberg, D.; Visser, M., 1998, *Phys. Rev. Lett.* **81**, 746–749.
Kashlinsky A., Atrio-Barandela F., Ebeling H., 2011, , **732**, 1.
Kirillov A.A., Turaev D., 2006, , **371** L31.
Kirillov A.A., Savelova E.P., Shamshutdinova G.D., 2009, *JETP Lett.*, **90**, 599.
Kirillov A.A. & Savelova E.P., 2011, , 412, 1710.
Kirillov A.A. & Savelova E.P., 2012, *Phys. Lett.*, **B 710**, 516.
Kirillov A.A. & Savelova E.P., 2015, *Physics of Atomic Nuclei*, **78**, 1069–1073.
Kirillov A.A. & Savelova E.P., 2016, *Int. J. Mod. Phys. D*, **25**, 1650075.
Kirillov A.A. & Savelova E.P., 2017, *Int. J. Mod. Phys. D*, **26** 1750145.
Kirillov A.A., Savelova E.P., Zolotarev P.S., 2008, *Physics Letters B*, **663** 372–376.
Labini S. F., Montuori M., Pietronero L., 1998, Phys. Rep. 293, 66
Meissner K.A., Nurowski P., & Ruszczycki B., 2013, *Proc R Soc A*, **469** 20130116.
Navarro J. F., Frenk C. S., White S. D. M., 1996, ApJ, 462, 563
Perez Bergliaffa S.E., Hibberd K.E., 2000, *Phys. Rev.* **D62** 044045.
Persic M., Salucci P., Stel F., 1996, , **281**, 27.
Planck Collaboration, 2016, A&A, 594, A18
Planck Collaboration, 2016, A&A, 594, A24
Sayers J., et. al., 2013, , **778**, 52.
Savelova E.P., 2015, *Gravitation and Cosmology*, ** 21**, 48–56.
Savelova E.P., 2016, Gen Relativ Gravit, **48**:85.
Sunyaev R.A., Zeldovich Ya.B., 1980a, , **190**, 413.
Sunyaev R.A., Zeldovich Ya.B., 1980b, *Ann. Rev. Astron. Astrophys.*, **18**, 537.
Weldrake D. T. F., de Blok W. J. G., Walter F., 2003, MNRAS, 340, 12
|
---
abstract: 'We propose an original test of Lorentz invariance in the interaction between a particle spin and an electromagnetic field and report on a first measurement using ultracold neutrons. We used a high sensitivity neutron electric dipole moment (nEDM) spectrometer and searched for a direction dependence of a nEDM signal leading to a modulation of its magnitude at periods of 12 and 24 hours. We constrain such a modulation to $d_{12} < 15 \times 10^{-25} \ e\,{\rm cm}$ and $d_{24} < 10 \times 10^{-25} \ e\,{\rm cm}$ at 95 % C.L. The result translates into a limit on the energy scale for this type of Lorentz violation effect at the level of ${\cal E}_{LV} > 10^{10}$ GeV.'
author:
- 'I. Altarev'
- 'C. A. Baker'
- 'G. Ban'
- 'K. Bodek'
- 'M. Daum'
- 'M. Fertl'
- 'B. Franke'
- 'P. Fierlinger'
- 'P. Geltenbort'
- 'K. Green'
- 'M. G. D. van der Grinten'
- 'P. G. Harris'
- 'R. Henneck'
- 'M. Horras'
- 'P. Iaydjiev'
- 'S. N. Ivanov'
- 'N. Khomutov'
- 'K. Kirch'
- 'S. Kistryn'
- 'A. Knecht'
- 'A. Kozela'
- 'F. Kuchler'
- 'B. Lauss'
- 'T. Lefort'
- 'Y. Lemière'
- 'A. Mtchedlishvili'
- 'O. Naviliat-Cuncic'
- 'J. M. Pendlebury'
- 'G. Petzoldt'
- 'E. Pierre'
- 'F. M. Piegsa'
- 'G. Pignol'
- 'G. Quéméner'
- 'D. Rebreyend'
- 'S. Roccia'
- 'P. Schmidt-Wellenburg'
- 'N. Severijns'
- 'D. Shiers'
- 'K. F. Smith'
- 'J. Zejma'
- 'J. Zenner'
- 'G. Zsigmond'
title: New constraints on Lorentz invariance violation from the neutron electric dipole moment
---
The Standard Model of particle physics (SM) on the one hand and the theory of General Relativity on the other, are the two cornerstones on which our current understanding of the Universe relies. Although of seemingly irreconcilable natures, the principle of Lorentz invariance is at the foundation of both theories. Unification of these two theories, including a consistent description of the four known interactions, is one of the main challenges of contemporary physics. Among the many directions being explored, one of the most radical is to abandon spacetime invariance under Lorentz transformations. A general framework to parameterize such Lorentz violating (LV) effects has recently been proposed [@Colladay]. It is based on the idea that diluted traces from primordial symmetry breaking can be observed via high precision experiments at low energies.
Numerous such experiments have been performed over the last century. A first category of tests probes the photon sector, with a broad range of techniques from laboratory scale experiments to cosmological observations [@Kostelecky2002]. A second category deals with particles, including clock comparison experiments, spin polarized torsion pendula and accelerator based experiments. The current constraints on Lorentz violating vector and tensor background fields obtained from these experiments have recently been reviewed [@Kostelecky2008].
In this letter we report on an experimental limit for an interaction between a particle and an electromagnetic field resulting from a fundamental anisotropy of the universe as recently proposed [@Bolokhov2006]. A nonrelativistic framework will first be developed followed by the description of the experimental procedure and the obtained results.
Consider a nonrelativistic spin 1/2 particle in the presence of electric and magnetic fields. Assuming rotational invariance, the form of the interaction potential is restricted to the simple form $V = - \mu \sigma_i B_i - d \sigma_i E_i$, when considering only the linear terms in the magnetic and electric fields $B_i$ and $E_i$. Throughout this letter we adopt Einstein’s repeated index convention and denote by $\sigma_i$ the Pauli matrices. Thus the interaction is described by only two quantities: the magnetic and electric dipole moments $\mu$ and $d$, respectively. Hence, allowing for Lorentz violating background vector and tensor fields, in the spirit of [@Colladay] and taking into account only linear terms in the electric and magnetic fields, the general form of the interaction potential becomes $$\label{listeCourses}
V = b_i \sigma_i - d_{ij} \sigma_i E_j - \mu_{ij} \sigma_i B_j.$$ The first term $b_i$ is sometimes referred to as the cosmic axial field, with the dimension of an energy. The most stringent limit is $b < 10^{-22}$ eV [@Bear2000] but it has been searched for in numerous clock comparison experiments using different particles, including free neutrons [@Altarev2009Mod]. The next terms $d_{ij}$ and $\mu_{ij}$ in Eq. (\[listeCourses\]) have the dimensions of an electric and magnetic dipole moment respectively. We will refer to $d_{ij}$ ($\mu_{ij}$) as the cosmic electric (magnetic) dipole tensor. They both violate rotation invariance because they define privileged directions in the universe.
The electric term leads to effects analogous to the electro-optical behavior of anisotropic media. If an electric field is applied to a non-centrosymmetric medium, the latter becomes birefringent for light. This is known as the Pockels effect [@Pockels]. In our case, the vacuum itself is the medium and the particle spin corresponds to the polarization of the light.
We probed these couplings by observing the spin precession of ultracold neutrons in the presence of a strong electric field and a weak magnetic field, using the RAL/Sussex/ILL spectrometer [@Altarev2009] dedicated to the search for the neutron electric dipole moment [@Baker]. Under regular experimental conditions, a vertical $B_0 = 1 \, \mu$T magnetic field is applied parallel or antiparallel to a $8.3 \times 10^5$ V/m electric field. The sensitivity to the electric term is $10^{4}$ times larger than to the magnetic one due to dimensional considerations. Thus, from now on we disregard the anisotropic magnetic moment and focus on the cosmic electric dipole tensor.
Ramsey’s method of separated oscillating fields was used to measure the Larmor frequency of stored spin-polarized ultracold neutrons. Fluctuations of the magnetic field are corrected by means of a spin-polarized $^{199}$Hg vapor used as a comagnetometer [@Green]. Both spin-polarized species (ultracold neutrons and mercury atoms) are stored in a cylindrical storage bottle (height $h=~$12 cm, radius $r=~$23.5 cm) during a measurement under vacuum conditions with a duration of 130 s. The storage bottle is composed of top and bottom electrodes coated with diamond-like carbon and of an insulating ring coated with deuterated polystyrene [@Bodek2008]. The homogeneous magnetic field $B_0$ is generated by a coil inside a four-layer mu-metal magnetic shield. At the beginning of the precession time, transverse magnetic pulses are applied to flip the polarization of both species by $\pi/2$ onto a plane normal to $B_0$. The spin precession of mercury is monitored online by optical means. For neutrons a second coherent $\pi/2$ pulse is applied at the end of the precession time. The polarization is measured by sequential counting of the number of spin up and down neutrons leaving the storage volume.
The cosmic electric dipole tensor has in general 9 components and it is convenient to split it into three parts: $$d_{ij} = d^0 I_{ij} + d_{ij}^{S} + d_{ij}^{A},$$ where $I$ is the identity matrix, $d^{S}$ is the traceless, symmetric part of the tensor and $d^{A}$ is the antisymmetric part. The first term $d^0$ is nothing else than the intrinsic EDM which actually does not violate rotational symmetry. The antisymmetric tensor is of rank 2 and dimension 3, thus has $3$ degrees of freedom. We define it as the axial vector $$d^A_i = \frac{1}{2} \epsilon_{ijk} d^{A}_{jk},$$ where $\epsilon$ is the completely antisymmetric Levi-Civita tensor. Then the antisymmetric part of the interaction potential becomes $$V^{A} = - d_{ij}^{A} \sigma_i E_j = ( {\bf d}^A \times {\bf E} ) \cdot {\bf \sigma}.$$ This potential acts like a magnetic field orthogonal to the electric field. It is thus orthogonal to the main magnetic field $B_0$ in the apparatus. To first order this additional field does not change the Larmor precession frequency and will therefore not be considered any further. This is not the case for the five terms arising from the symmetric part of the cosmic EDM tensor which are defined by $$d_{ij}^S =
\begin{pmatrix}
d_{XX} & d_{XY} & d_{XZ} \\
d_{XY} & d_{YY} & d_{YZ} \\
d_{XZ} & d_{YZ} & d_{ZZ}
\end{pmatrix},$$ with $d_{ZZ}=-(d_{XX}+d_{YY})$. These terms contribute to the Larmor precession frequency to first order. The $Z$ axis is defined as the Earth rotation axis. While the Earth is rotating together with the vertical quantization axis and the applied electric field, these five contributions would show themselves as an EDM signal in three different ways: a steady shift $d_{\rm steady}$ of the value of the EDM, a sidereal modulated part $d_{\rm 24}$ coming from the sidereal modulation of the direction of either the quantization axis or the electric field axis with respect to the static background tensor, and a part modulated at twice the frequency, $d_{\rm 12}$, due to the combined effect of the modulation of both axes. Taking into account the intrinsic EDM, we can write these three contributions as: $$\begin{aligned}
d_{\rm steady} & = & d_0 + \sin^2 \lambda d_{ZZ} + \frac{ \cos^2 \lambda}{2} (d_{XX} + d_{YY}) \\
d_{\rm 12} & = & \cos^2 \lambda \sqrt{\frac{1}{4}(d_{XX} - d_{YY})^2 + d_{XY}^2} \\
d_{\rm 24} & = & 2 \cos \lambda \sin \lambda \sqrt{d_{XZ}^2 + d_{YZ}^2}, \end{aligned}$$ where $\lambda$ is the latitude of the experimental site. Then $$\label{param}
d(t) = d_{\rm steady} + d_{12} \cos(2 \Omega t+\phi_{12}) + d_{24} \cos(\Omega t+\phi_{24})$$ where $\Omega=2 \pi / 23.934$ rad/hour is the sidereal frequency and $\phi_{12}$ and $\phi_{24}$ are phases which depend on the definition of the $X$ and $Y$ axes.
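For orientation, the mapping from a given symmetric tensor $d^{S}_{ij}$ to the three observable amplitudes can be evaluated directly; in the sketch below all tensor components and the site latitude are illustrative assumptions only:

```python
import numpy as np

lam = np.deg2rad(45.2)    # latitude of the experimental site (assumed value for ILL, Grenoble)
d0  = 0.0                 # intrinsic EDM set to zero for illustration

# Hypothetical symmetric cosmic EDM tensor components (units of e*cm), illustration only
dXX, dYY, dXY, dXZ, dYZ = 1e-25, -2e-25, 0.5e-25, 1e-25, 0.3e-25
dZZ = -(dXX + dYY)        # tracelessness condition

d_steady = d0 + np.sin(lam)**2 * dZZ + 0.5*np.cos(lam)**2 * (dXX + dYY)
d_12 = np.cos(lam)**2 * np.sqrt(0.25*(dXX - dYY)**2 + dXY**2)
d_24 = 2*np.cos(lam)*np.sin(lam) * np.sqrt(dXZ**2 + dYZ**2)
print(d_steady, d_12, d_24)   # steady, 12 h and 24 h amplitudes entering Eq. (param)
```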
Following the standard practice in the measurement of the neutron EDM using a comagnetometer, one considers the ratio $R=f_{\rm n}/f_{\rm Hg} \approx 30\,{\rm Hz}/8\,{\rm Hz}$ between the neutron and the mercury precession frequencies. In the presence of a homogeneous magnetic field and an electric field, this ratio depends on the direction of the latter according to: $$R(t) = \left| \frac{\gamma_{\rm n}}{\gamma_{\rm Hg}} + \frac{2E}{h f_{\rm Hg}} \ d(t) \right|$$ where $\gamma_{\rm n}$ and $\gamma_{\rm Hg}$ are the gyromagnetic ratios and $d$ is the neutron EDM. We neglect a possible contribution from a time-dependent mercury EDM since the Hg nucleus is subject to Schiff screening of the electric field inside the atom [@Schiff].
We studied the time evolution of the correlation between $R$ and the electric field $E$ during 5.6 days in December 2008 at the PF2 ultracold neutron beamline at the Institut Laue-Langevin (ILL), Grenoble. An overview of the data is presented in Fig. \[RvsT\], where the variation $\Delta R$ of $R$ around the mean value is plotted. While the main $B_0$ field was pointing downwards, the electric field was reversed every 2 hours and some additional data were taken without electric field to check for systematic effects. The statistical accuracy per cycle of the neutron frequency is given by [@Green]: $$\sigma f_{\rm n} = \frac{1}{2 \pi \sqrt{N} T \alpha_0 e^{-T/T_2}} = 30~\mu\rm{Hz},$$ where typically $N \approx 4600$ is the number of neutrons per cycle, $T=130$ s is the precession time and $\alpha_0=0.86 \pm 0.01$ is the neutron polarization at the beginning of the precession time. The transverse neutron spin depolarization time $T_2$ depends strongly on the magnetic field homogeneity, whereas the longitudinal depolarization time $T_1=690 \pm 80$ s is attributed to depolarization occurring at wall collisions. The field homogeneity can be adjusted by a set of correction coils; the presented data were in fact taken in three different configurations for the currents in these coils. For the best magnetic field configuration, the value $T_2 = 400 \pm 38$ s was obtained. In addition to the purely statistical error, we expect a fluctuation of the neutron frequency due to a random misalignment of the initial neutron spin after the mercury $\pi/2$ pulse at the level of 20 $\mu$Hz in the worst case. Using only data at zero electric field, we indeed observe an 18 $\mu$Hz non-statistical fluctuation. This error was added quadratically to the entire data set. The mercury cohabiting magnetometer performed with a typical accuracy of 0.3 $\mu$Hz or $40$ fT for an averaging time of 130 s. Although negligible, the mercury contribution to the individual errors $\sigma R$ was taken into account.
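As a consistency check, the per-cycle sensitivity quoted above can be reproduced from the numbers given in the text (a minimal sketch):

```python
import numpy as np

# Per-cycle statistical sensitivity of the neutron frequency, using the quoted values
N, T, alpha0, T2 = 4600, 130.0, 0.86, 400.0
sigma_f = 1.0 / (2*np.pi*np.sqrt(N)*T*alpha0*np.exp(-T/T2))
print(sigma_f * 1e6, "microHz")   # ~29-30 microHz, consistent with the quoted 30 microHz
```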
![Variation of $R$ as a function of time $T$ for electric field up (upwards pointing red triangles) and down (downwards pointing blue triangles) and in the case of a null electric field (black dots). For each set of data, the mean value has been subtracted. The data are folded modulo 24 h and then binned. []{data-label="RvsT"}](RvsT.eps){width="0.92\linewidth"}
From a set of $1586$ cycles, one can derive a value for $d_{\rm steady}$ from the difference between $R$ for the two different directions of the electric field: $$\label{dsteady}
d_{\rm steady} = (-3.4 \pm 2.7_{\rm stat}) \times 10^{-25}~e\,\rm{cm}.$$ Obviously, this $d_{\rm steady}$ term is better constrained by the preceding work [@Baker] using the same apparatus, i.e. $d_{\rm n} < 2.9\times 10^{-26}~e\,$cm (90 % C.L.), where statistics has been accumulated for several years but where the time evolution has not been studied. However the fact that the present result was obtained in only about 5 days of data taking shows the high performance of the apparatus.
Further, a Bayesian analysis was applied to the data to search for a time variation $d(t)$, Eq. (\[param\]). First, the following chi-squared function is constructed: $$\chi^2 (d_{12}, \phi_{12}, d_{24}, \phi_{24}) =
\sum_{i = 1}^{1586} \left( \frac{ \Delta R_i - \alpha E_i d(t_i)}{\sigma R_i} \right)^2$$ where the sum runs over all data cycles and $\alpha = \frac{2}{h f_{\rm Hg}}$. Then the posterior probability density for $d_{12}, d_{24}$ is given by the likelihood function: $$\label{likelihood}
L(d_{12}, d_{24}) = \frac{1}{N} \iint \exp(-\chi^2/2) \ d\phi_{12} \ d\phi_{24}$$ where $N$ is a normalization coefficient. This function is plotted in Fig. \[CL2D\], from which we deduce the following bounds: $$\begin{aligned}
\label{limits}
\nonumber
d_{12} < & 15 \times 10^{-25} \ e\,{\rm cm} \ & (95 \ \% \ \ \rm{C.L.}) \\
d_{24} < & 10 \times 10^{-25} \ e\,{\rm cm} \ & (95 \ \% \ \ \rm{C.L.})\end{aligned}$$
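The phase-marginalized posterior of Eq. (\[likelihood\]) can be illustrated on synthetic data. The sketch below is a toy reconstruction only: the cycle structure, uncertainties and the conversion factor are rough stand-ins for the real analysis, and no Lorentz-violating signal is injected, so the resulting limits merely indicate the order of magnitude of the sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- Toy data set loosely resembling the run described in the text (illustrative numbers) ----
n_cyc = 1586
t     = np.sort(rng.uniform(0.0, 5.6*24*3600, n_cyc))       # cycle times over 5.6 days [s]
E     = 8.3e5 * rng.choice([-1.0, 1.0], n_cyc)              # reversed electric field [V/m]
sigR  = 4e-6 * np.ones(n_cyc)                               # per-cycle error on R (~30 muHz / 8 Hz)
dR    = rng.normal(0.0, sigR)                               # null data: no LV signal injected
f_Hg  = 8.0                                                 # Hz
alpha = 2.0/(6.626e-34*f_Hg) * 1.602e-19*1e-2               # converts d [e*cm] * E [V/m] to Delta R
Omega = 2*np.pi/(23.934*3600.0)                             # sidereal angular frequency [rad/s]

# ---- Posterior L(d12, d24): chi^2 marginalized over the two phases on a coarse grid ----
d_grid = np.linspace(0.0, 40e-25, 21)                       # e*cm
phis   = np.linspace(0.0, 2*np.pi, 12, endpoint=False)

def chi2(d12, p12, d24, p24):
    model = alpha*E*(d12*np.cos(2*Omega*t + p12) + d24*np.cos(Omega*t + p24))
    return np.sum(((dR - model)/sigR)**2)

c2 = np.array([[[[chi2(d12, p1, d24, p2) for p2 in phis] for p1 in phis]
                for d24 in d_grid] for d12 in d_grid])
post = np.exp(-0.5*(c2 - c2.min())).mean(axis=(2, 3))       # subtract min to avoid underflow
post /= post.sum()

for axis, name in [(1, 'd12'), (0, 'd24')]:
    marg = post.sum(axis=axis)
    cum  = np.cumsum(marg)/marg.sum()
    print(name, '<', d_grid[np.searchsorted(cum, 0.95)], 'e cm (95% C.L., toy data)')
```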
![Isodensity lines for the posterior probability density function Eq. (\[likelihood\]). The probability inside the dashed (blue) line is 68 % and 95 % inside the solid (red) line. []{data-label="CL2D"}](CL2D.eps){width="0.92\linewidth"}
This statistical limit could be affected by the main systematic effect namely, a geometrical phase shift of the mercury precession frequency proportional to the electric field and the vertical gradient [@Pendlebury; @Lamoreaux]: $$\Delta f_{\rm Hg} = \frac{E}{2} \left( \frac{\partial B_0}{\partial z} \right) \left( \frac{\gamma_{\rm Hg}^2 r^2}{c^2} \right) \left[ 1- \left( \frac{\omega_{0}}{\omega_{r}^{\dag}} \right)^2 \right]^{-1}
\label{GeomPhase}$$ with $\omega_{0}= |\gamma_{\rm Hg} B_0|$ the Larmor angular frequency and $\omega_{r}^{\dag} = 0.65 \left( v_{xy}/r \right)^2$ the effective radial velocity. In principle, modulations of the vertical gradient at periods of 12 or 24 h would mimic the signal associated with new physics. The magnitude of the vertical gradients can be assessed from the value of $R_0$, the ratio of neutron to mercury precession frequency without electric field: $$R_0 = \left|\frac{f_{\rm n}}{f_{\rm Hg}} \right | = \left| \frac{\gamma_{\rm n}}{\gamma_{\rm Hg}} \left ( 1 - \frac{\partial B_0/\partial z \; \Delta h}{B_0} \right ) \right |
\label{GravShift}$$ which originates from the vertical shift $\Delta h$ of the neutron center of mass with respect to the mercury. Due to this gravitational effect, the neutrons and the mercury atoms do not average exactly the same magnetic field in the presence of a vertical gradient. By dedicated measurements of $R_0$ and using Eqs. (\[GeomPhase\]) and (\[GravShift\]), it is possible to predict a false electric dipole moment signal: $d_{\rm false} = (1.2 \pm 0.2) \times 10^{-25}~e\,$cm. This shift in $d_{\rm steady}$ is too small to be seen in the data with the given statistics. The uncertainty in $d_{\rm false}$ has been calculated from the spread in the measured values $R_0$. These fluctuations are compatible with statistical fluctuations, in agreement with the previous measurement [@Altarev2009Mod]. In order to place an upper limit on the contribution of a modulated gradient to our extracted limits $d_{12}$ and $d_{24}$, one can take the uncertainty in $R_0$ as the maximal amount of gradient fluctuations according to Eq. (\[GravShift\]). This then translates via Eq. (\[GeomPhase\]) into an upper limit of $2 \times 10^{-26}~e\,$cm as the systematic error in our limits due to gradient modulations. Given the current statistical sensitivity, this effect is negligible.
Our result Eq. (\[limits\]) can be simply interpreted on the basis of merely dimensional arguments. We denote by ${\cal E}_{LV}$ the energy scale associated with a violation of Lorentz invariance. It is expected that $d_{12}, d_{24} \approx e \hbar c / {\cal E}_{LV}$. This simple argument is supported by more sophisticated arguments in a quantum field theory framework [@Bolokhov2006]. The limits in Eq. (\[limits\]) correspond to a lower bound on the energy scale for Lorentz violation effects of $10^{10}$ GeV. This is far beyond energies accessible at particle colliders ($10^{3}$ GeV) but still below the Grand Unification scale ($10^{16}$ GeV). Given that new physics is in general expected to be associated with a large energy scale, the proposed observables $d_{ij}$ are indeed stringent tests of the Lorentz invariance complementary to the search for a cosmic axial field.
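The translation of the limits (\[limits\]) into an energy scale is a one-line dimensional estimate, ${\cal E}_{LV} \approx e\hbar c/d$:

```python
hbar_c = 197.327e6 * 1e-13       # hbar*c in eV*cm (= 197.327 MeV fm)
for d in (15e-25, 10e-25):       # limits of Eq. (limits), in units of e*cm
    print('E_LV >', hbar_c/d/1e9, 'GeV')   # both exceed 1e10 GeV, as stated in the text
```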
A significantly improved sensitivity is expected in the near future with the same experimental installation, which has recently been moved to the Paul Scherrer Institute. There it will benefit from a more intense ultracold neutron source [@Anghel2009], and upgrades will allow for an even better control of systematic effects [@Altarev2009].
We are grateful to the ILL staff for providing us with excellent running conditions and in particular acknowledge the support of T. Brenner. We also benefited from the technical support throughout the collaboration. This work was partially supported by Polish Ministry of Science and Higher Education, grant No. N202 065436, the Swiss National Science Foundation, grant No. 200021-126562 and by the DFG cluster of excellence “Origin and Structure of the Universe”.
[99]{} D. Colladay and V. A. Kostelecky, Phys. Rev. [**D 55**]{}, 6760 (1997).
For a review see V. A. Kostelecky and M. Mewes, Phys. Rev. [**D 66**]{}, 056005 (2002).
V. A. Kostelecky and N. Russell, Proceedings of the Fourth Meeting on CPT and Lorentz Symmetry, World Scientific, Singapore (2008).
P. A. Bolokhov, M. Pospelov and M. Romalis, Phys. Rev. D [**78**]{}, 057702 (2008).
D. Bear [*et al.*]{}, Phys. Rev. Lett. [**85**]{}, 5038 (2000).
I. Altarev [*et al.*]{}, Phys. Rev. Lett. [**103**]{}, 081602 (2009).
F. Pockels, Abhandl. Gesell. Wiss. Göttingen 39, 1 (1894).
I. Altarev [*et al.*]{}, Nucl. Instrum. Methods Phys. Res., Sect. [**A 611**]{}, 133-136 (2009).
C. A. Baker [*et al.*]{}, Phys. Rev. Lett. [**97**]{}, 131801 (2006).
K. Green [*et al.*]{}, Nucl. Instrum. Methods Phys. Res., Sect. [**A 404**]{}, 381 (1997).
K. Bodek [*et al.*]{}, Nucl. Instrum. Methods Phys. Res., Sect. [**A 597**]{}, 222 (2008).
I. Schiff, Phys. Rev. [**132**]{}, 2194 (1963).
J. M. Pendlebury [*et al.*]{}, Phys. Rev. [**A 70**]{}, 032102 (2004).
S. K. Lamoreaux and R. Golub, Phys. Rev. [**A 71**]{}, 032104 (2005).
A. Anghel [*et al.*]{}, Nucl. Instrum. Methods Phys. Res., Sect. [**A 611**]{}, 272-275 (2009).
|
---
abstract: 'We analyze, by finite-difference time-domain numerical methods, several ways to enhance the directional emission from photonic crystal waveguides through the beaming effect recently predicted by Moreno [*et al.*]{} \[Phys. Rev. B [**69**]{}, 121402(R) (2004)\], by engineering the surface modes and corrugation of the photonic crystal surface. We demonstrate that a substantial enhancement of the light emission can be achieved by [*increasing*]{} the refractive index of the surface layer. We also measure the power of surface modes and the reflected power and confirm that the enhancement of the directional emission is related to the manipulation of the photonic crystal surface modes.'
address: 'Nonlinear Physics Centre and Centre for Ultra-high bandwidth Devices for Optical Systems (CUDOS), Research School of Physical Sciences and Engineering, Australian National University, Canberra, ACT 0200, Australia'
author:
- 'Steven K. Morrison and Yuri S. Kivshar'
title: Engineering of directional emission from photonic crystal waveguides
---
[99]{}
E. Moreno, F.J. Garc[í]{}a-Vidal, and L. Mart[í]{}n-Moreno, “Enhanced transmission and beaming of light via photonic crystal surface modes," Phys. Rev. B [**69**]{}, 121402(R) (2004).
P. Kramper, M. Agio, C.M. Soukoulis, A. Birner, F. M[ü]{}ller, R.B. Wehrspohn, U. G[ö]{}sele, and V. Sandoghdar, “Highly directional emission from photonic crystal waveguides of subwavelength width," Phys. Rev. Lett. [**92**]{}, 113903 (2004).
T.W. Ebbesen, H.J. Lezec, H.F. Ghaemi, T. Thio, and P.A. Wolff, “Extraordinary optical transmission through sub-wavelength hole arrays,” Nature (London) [**391**]{}, 667-669 (1998).
H.J. Lezec, A. Degiron, E. Devaux, R.A. Linke, L. Mart[í]{}n-Moreno, F.J. Garc[í]{}a-Vidal, and T.W. Ebbesen, “Beaming light from a subwavelength aperture,” Science [**297**]{}, 820-822 (2002).
One of the recent advances in the physics of photonic crystals is the discovery of enhanced transmission and highly directional emission from photonic crystal waveguides predicted theoretically by Moreno [*et al.*]{} [@moreno] and demonstrated independently in experiment by Kramper [*et al.*]{} [@costas]. These results provide a new twist in the study of surface modes in photonic crystals. Indeed, it is generally believed that surfaces and surface modes are a highly undesirable feature of photonic crystals, unlike point defects which are useful for creating efficient waveguides with mini-band gaps inside the photonic band gaps of a periodic structure. However, appropriate corrugation of the surface layer may lead to coherent enhancement of the radiating surface modes and highly directional emission of the light from a truncated waveguide [@moreno; @costas].
As already mentioned by Moreno [*et al.*]{} [@moreno], the major motivation for the discovery of highly directional emission from photonic crystal waveguides is largely provided by the physics of extraordinary optical transmission through subwavelength hole arrays in metallic thin films [@ebbesen] and beaming of light from single nanoscopic apertures flanked by periodic corrugations [@lezec]. In both those cases, an incident light beam couples to the surface plasmon oscillations via corrugations in a metallic film, and is then emitted from the other side of the film, being enhanced by its other corrugated surface. For photonic crystal waveguides, properties of the surface layer [@moreno] or terminated surface [@costas] provide a key physical mechanism for the excitation of surface modes, their constructive interference, and subsequent highly directed emission.
In this paper, we study, by means of the finite-difference time-domain (FDTD) numerical method, the directional emission from a photonic crystal waveguide achieved by appropriate corrugation of the photonic crystal interface, following the original suggestion [@moreno]. We analyze several strategies for enhancing the light beaming effect by varying the surface properties and by engineering the surface modes of a semi-infinite two-dimensional photonic crystal created by a square lattice of cylinders in vacuum. In particular, we optimize the corrugation at the surface, as well as vary the refractive index of the surface layer. We demonstrate that, in comparison with the previously published results [@moreno], the substantial enhancement of the light emission and improved beaming effect can be achieved by [*increasing*]{} the refractive index of the surface layer while using a positive (i.e. opposite to that employed in Ref. [@moreno]) corrugation displacement. We also measure the power of surface modes and reflected power and confirm that the enhancement of the directional emission through the beaming effect links closely to the manipulation of the surface modes supported by the photonic crystal interface.
We consider a photonic crystal slab created by a square lattice of cylinders with dielectric constant $\epsilon_{r}=11.56$ (e.g. GaAs at a wavelength of 1.5 $\mu$m) and radius $r=0.18 \, a$, where $a$ is the lattice period. A row of cylinders removed along the plane $x=0$ forms a single-mode waveguide (see Fig. \[Poynting\_2\_Layers\]) that supports a guided mode with frequencies between $\omega =0.30 \times 2\pi c/a$ and $\omega =
0.44\times 2\pi c/a$ propagating in the plane normal to the cylinders, with the electric field parallel to them.
![Spatial distribution of the Poynting vector for the light emitted from a photonic waveguide: (a) unchanged surface; (b) surface cylinders with $r_{\rm s}=0.09$ and $N=9$ even-numbered cylinders displaced by $\Delta z=-0.3 \, a$ (see [@moreno]); (c) surface cylinders with $r_{\rm s}=0.09$, refractive index $n_{s}=3.6$, and $N=9$ odd-numbered cylinders displaced by $\Delta
z=+0.4 \, a$; in addition, the radius of the cylinders in the layer prior to the surface layer is reduced to $r_{\rm s-1}=0.135
\, a$; (d) surface cylinders with $r_{\rm s}=0.09$, refractive index $n_{\rm s}=4.5$, and $N=9$ odd-numbered cylinders displaced by $\Delta z=+0.4 \, a$.[]{data-label="Poynting_2_Layers"}](P2L.eps){width="10cm"}
When a source is placed in the waveguide at the point $z=0$, it excites waves that propagate along the waveguide and are then emitted at the waveguide exit (at $z =9a$). Since no surface modes are supported by a simple truncated slab, the light radiating from the waveguide undergoes uniform angular diffraction as demonstrated in Fig. \[Poynting\_2\_Layers\](a) for the spatial distribution of the Poynting vector calculated for the source frequency $\omega = 0.408 \times 2\pi c/a$.
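For reference, the normalized source frequency translates into the wavelengths used below as follows (the 1.5 $\mu$m operating wavelength is taken from the GaAs example above and is only meant to set the physical scale):

```python
# Normalized source frequency omega = 0.408 (2*pi*c/a)  =>  a/lambda = 0.408
a_over_lambda = 0.408
print('lambda =', 1.0/a_over_lambda, 'a')          # ~2.45 a, as used later in the text
# If the structure operates at the 1.5 um wavelength quoted for the GaAs rods (an assumption
# about the intended physical scale), the corresponding lattice period would be:
lam_phys = 1.5e-6                                  # m
print('a =', a_over_lambda*lam_phys*1e9, 'nm')     # ~612 nm
```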
To characterize the transmission from the photonic crystal waveguide, we measure the directed power $P_{\rm D}$, normalized to the input power, incident upon a cross-sectional length of $2a$ centered at $x=0$ and $z=45\,a$. A likewise normalized measure is taken of the reflected power $P_{\rm R}$ incident upon a cross-sectional length of $20 \, a$ centered at the input to the waveguide, $x=0$ and $z=-a$. This reflected power is considered a close measure of all reflected power. For the bulk photonic crystal with standard surface layer the directed power is $P_{\rm
D}=0.0123$, and the reflected power is $P_{\rm R}=0.0158$.
Distribution of the Poynting vector for the directional emission from the photonic crystal waveguide demonstrated by Moreno [*et al.*]{} [@moreno] is shown in Fig. \[Poynting\_2\_Layers\](b). These results are produced by altering the surface layer geometry in two ways. Firstly, by reducing the radius of the surface cylinders to the value $r_{\rm s}=0.5r=0.09\,a$, and thereby creating the conditions for a surface mode to exist at the truncated surface. And secondly, by displacing $N=9$ even-numbered cylinders (numbered consecutively away from the waveguide) on both sides of the waveguide by $\Delta z=-0.3\,a$ along the $z-$axis of the crystal, thus enhancing radiation of surface modes. Our calculations show that the directed power for such a structure is $P_{\rm D}=0.0723$, while the reflected power is substantially large, $P_{\rm R}=0.2635$. To further characterize the enhanced beaming effect, we measure one half of the total surface mode power, $P_{\rm S}$, incident upon a cross-sectional length $2\,a$ positioned centrally at $x=24\,a$, $z=9\,a$; again normalized to the input power. Moreover, to characterize the containment of the directed power we measure the width of the central lobe of the directed emission $w_{\rm L}$ between the first nulls at $z=45\,a$. For the geometry considered in Ref. [@moreno], the surface mode power is $P_{\rm S}=0.0030$, while the width of the central lobes is $w_{\rm L} =18.1 \, a$.
A significant drawback of the surface-layer design suggested in Ref. [@moreno] is the large amount of reflected power. We find that the reflected power can be reduced by trapping the electric field mostly within the surface layer, as occurs for the uncorrugated surface. Increasing the applied wavelength by $4.4\%$, from $\lambda = 2.45\,a$ to $\lambda=2.55\,a$, to account for the proportionally increased distance resulting from the corrugated surface cylinders allows us to decrease the reflected power to $P_{\rm R}=0.048$, while increasing the directed power and surface-mode power marginally to $P_{\rm D}=0.0768$ and $P_{\rm S}=0.0484$, respectively. A measure of the average wave impedance in the vicinity of the waveguide shows that the increased wavelength reduces the impedance from $\sim 1000\,\Omega$ to $\sim 320\,\Omega$.
In order to increase the directional power, we alter the surface-layer structure by shifting the even-numbered cylinders [*forward*]{} by the distance $\Delta z=0.4\,a$, while leaving the odd-numbered cylinders on their lattice sites (i.e. no displacement). Since this corrugation increases the relevant distance by $7.7\%$ over that of the uncorrugated surface, the applied wavelength is increased proportionally to $\lambda=2.63865\,a$. This new surface produces a directed power of $P_{\rm D}=0.15418$ and decreased reflected and surface-mode powers of $P_{\rm R}=0.0318$ and $P_{\rm S}=0.0100$, respectively. Furthermore, the central lobe of the directed emission is now contained within $w_{\rm L} = 7.79\,a$.
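For reference, the quoted value of the wavelength is simply the unperturbed value scaled by the $7.7\%$ increase in path length,
$$\lambda = 2.45\,a \times 1.077 = 2.63865\,a.$$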
![Power density incident upon the cross-section at $z=45\,a$ for (a) the unchanged surface; (b) the surface configuration from Ref. [@moreno]; (c) the optimal surface configuration with maximized beaming; and (d) the configuration with the surface refractive index $n=4.5$. []{data-label="PvZ_2_Layers"}](PZ.eps){width="9cm"}
Our analysis shows that a substantial improvement of the directed power can be achieved by [*increasing*]{} the refractive index of the surface layer from $n_{\rm s}=3.4$ to $n_{\rm s}=3.6$. This increases the directed power to $P_{\rm D}=0.1689$, while decreasing the reflected and surface-mode powers to $P_{\rm R}=0.0295$ and $P_{\rm S}=0.0023$, respectively. The resulting width of the central lobe of the directed beam is $w_{\rm L} =9.553\,a$. The increase in power is achieved by decreasing the light-line slope, thus placing the surface mode closer to the continuum of radiative modes.
Additional improvement of the directed power can be achieved by decreasing the radius of the cylinders in the layer just before the surface layer (at $z=8\,a$) to $r_{\rm s-1}=0.135\,a$. This change induces a near-surface defect mode that leaks coherently into the surface layer before being radiated, increasing the directed power to $P_{\rm D}=0.2104$ and the reflected power to $P_{\rm R}=0.1028$, while decreasing the surface power to $P_{\rm S}=0.0078$. The width of the central lobe of the directed emission becomes $w_{\rm L} =8.642\,a$. The spatial distribution of the Poynting vector for this optimal design is shown in Fig. \[Poynting\_2\_Layers\](c). A comparison of the significantly enhanced beaming with that of the standard interface and of Ref. [@moreno] is provided in Fig. \[PvZ\_2\_Layers\], which shows a cross-section of the power density measured at $z=45\,a$.
![Normalized power density incident upon a cross-sectional length of $2\,a$ centered at $x=0$ and $z=45\,a$ as the normalized refractive index of the surface cylinders varies. Top: the surface layer used in Ref. [@moreno]. Bottom: with the surface cylinders’ radius reduced to $r_{\rm s}=0.09\,a$ and the refractive index $n_{\rm s}=3.6$, with $N=9$ odd-numbered cylinders displaced by $\Delta z=+0.4\,a$, and the radius of the cylinders in the layer prior to the surface layer reduced to $r_{\rm s-1}=0.135\,a$. []{data-label="PvN_2_Layers"}](PN.eps){width="8cm"}
Control of the directed emission is achieved through the manipulation of the refractive index of the surface-layer cylinders. This is illustrated by the attenuation of the directed power shown in Fig. \[Poynting\_2\_Layers\](d), where the refractive index of the surface cylinders is increased to the value $n=4.5$. In this case, the outgoing beam splits, the directed power vanishes, and the surface mode is cut off, with a localized state formed within the first two surface cylinders next to the waveguide exit. Figure \[PvZ\_2\_Layers\](d) shows a cross-section of the power density measured at $z=45\,a$ for the beam splitting depicted in Fig. \[Poynting\_2\_Layers\](d).
![Normalized power density incident upon a cross-section of length $2\,a$ centered at $x=0$ and $z=45\,a$ as the radius of the surface cylinders varies. Top: the surface layer from Ref. [@moreno] but for different values of $r_{\rm s}$. Bottom: with the radius of the surface cylinders reduced to $r_{\rm s}=0.09\,a$ and their refractive index increased to $n_{\rm s}=3.6$, with $N=9$ odd-numbered cylinders displaced by $\Delta z=+0.4\,a$.[]{data-label="PvR_1_Layer"}](PR.eps){width="8cm"}
The effect produced by a change of the surface refractive index is demonstrated in Fig. \[PvN\_2\_Layers\], where the index is varied from $n=2.4$ to $n=4.4$. As already mentioned, the refractive index of the surface layer has a profound effect on both the directed and reflected powers, suggesting that it could be used not only for achieving control over the beaming effect but also for matching the waveguide to the surrounding media.
The influence of the radius of the surface cylinders on the beaming effect is summarized in Fig. \[PvR\_1\_Layer\], where the radius is varied from $r_{\rm s}=0.045\,a$ to $r_{\rm s}=0.2\,a$. The radius is the key parameter for inducing the surface mode, and these results clearly illustrate that the optimum radius is indeed close to the value $r_{\rm s}=0.09\,a$ used in Ref. [@moreno].
In conclusion, we have implemented different strategies for enhancing the light-beaming effect by engineering the surface modes of photonic crystals. In particular, we have shown that, in comparison with previous studies, a substantial enhancement of the light emission and improved light beaming can be achieved by increasing the refractive index of the surface layer. We have also linked the observed enhancement of the directional emission to the properties of the surface modes supported by the photonic crystal interface.
We acknowledge partial support from the Australian Research Council and useful discussions with Sergei Mingaleev and Costas Soukoulis.
---
abstract: 'We explore the ultraviolet continuum regime of causal dynamical triangulations, as probed by the flow of the spectral dimension. We set up a framework in which one can find continuum theories that can in principle fully reproduce the behaviour of the latter in this regime. In particular, we show that, in $2+1$ dimensions, Hořava–Lifshitz gravity can mimic the flow of the spectral dimension in causal dynamical triangulations to high accuracy and over a wide range of scales. This seems to provide evidence for an important connection between the two theories.'
author:
- 'Thomas P. Sotiriou,$^{1,2}$ Matt Visser,$^3$ and Silke Weinfurtner$^{1}$'
title: Spectral dimension as a probe of the ultraviolet continuum regime of causal dynamical triangulations
---
The first serious effort to implement discretized geometries into the framework of general relativity dates back to 1961 and the development of Regge calculus [@Regge:1961lr]. Since then there has been persistent interest in a variety of discrete quantum gravity models [@Williams:2006kp]. One particularly interesting variant is causal dynamical triangulations (CDT) [@Ambjorn:1998xu], in which a geometry emerges as the sum of all possible triangulations (modulo diffeomorphisms) obeying a global time foliation. The sum is evaluated using the path-integral formalism, such that every history is weighted using a variant of Regge calculus, where the edge lengths of the fundamental building blocks, the $d+1$ dimensional simplices, are kept fixed. The different histories can be viewed as all possible fluctuations in the geometry; they differ in the number of simplices and in the manner in which the simplices are glued together.
In a discrete model, such as CDT, part of the challenge is to find suitable probes for the continuum limit. The latter has been investigated numerically, working at a fixed volume (fixed number of simplices) and using a Monte Carlo algorithm [@Ambjorn:2004qm; @Benedetti:2009ge]. The [*spectral dimension*]{} has been proposed as a probe in CDT and has a longer history in lattice quantum gravity, see Refs. [@Ambjorn:2005db; @Horava:2009if] and references therein. This can be thought of as the effective dimension as probed by an appropriately defined (fictitious) diffusion process (random walk). More concretely, a diffusion process from point $\bf{x}$ to point ${\bf x}'$ in (fictitious) diffusion time $s$ is characterized by the probability density $\rho({\bf x},{\bf x}', s)$. The average return probability $P(s)$ is defined as $\rho({\bf x},{\bf x}, s)$ averaged over all points in space. The spectral dimension is then defined as $$d_s=-2\frac{\mathrm{d} \ln P(s)}{\mathrm{d}\ln s}.$$ In CDT the diffusion is a discretized, stochastic process. The spectral dimension is not constant, but instead changes with $s$, and consequently with the length scale.
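As a toy illustration of this definition (and not of the CDT algorithm itself, which runs on the triangulated geometries), the return probability of a lazy random walk on a flat two-dimensional lattice can be evolved exactly and differentiated numerically; the lattice size and number of steps below are arbitrary choices, and the estimated $d_s$ should settle near the flat-space value of $2$.

```python
# Toy sketch: spectral dimension of a flat 2D lattice from a (lazy) random
# walk, via d_s = -2 dlnP/dlns.  This only illustrates the definition above.
import numpy as np

L, steps = 201, 400
p = np.zeros((L, L))
p[L // 2, L // 2] = 1.0                 # walker starts at the centre

P = []
for _ in range(steps):
    # lazy walk: stay with probability 1/2, otherwise hop to a neighbour
    p = 0.5 * p + 0.125 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                           + np.roll(p, 1, 1) + np.roll(p, -1, 1))
    P.append(p[L // 2, L // 2])         # return probability P(s)

s = np.arange(1, steps + 1)
d_s = -2.0 * np.gradient(np.log(P), np.log(s))
print(d_s[50], d_s[-1])                 # both should be close to 2
```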
Clearly, the concept of the spectral dimension is not limited to CDT or discrete theories. One could consider a diffusion process on a manifold, as was proposed in Ref. [@Horava:2009if]. Then the spectral dimension is defined in the same way as above and $\rho({\bf x},{\bf x}', s)$ is determined by the diffusion equation $$\label{Eq:DiffusionProcess}
\frac{\partial \rho(\mathbf{x},\mathbf{x}',s)}{\partial s} + \hat{D} \rho(\mathbf{x},\mathbf{x}',s) = 0.$$ The choice of the differential operator $\hat{D}$ corresponds to the “type” of diffusion process being considered.
For example, when $\hat{D}$ is the 3-dimensional Laplacian and $s$ is identified with real time Eq. (\[Eq:DiffusionProcess\]) becomes the heat equation. Instead, when a diffusion process is to be used as a probe of geometry (or kinematics), $s$ becomes a fictitious diffusion time and ${\bf x}$ represents a point in spacetime (in analogy with what was mentioned above for CDT). A natural choice for $\hat{D}$ then seems to be the propagator of perturbations in spacetime. For instance, one could have $\hat{D}=g^{\alpha\beta}\nabla_{\alpha}\nabla_{\beta}$, where $g_{\alpha\beta}$ is the (Lorentzian) metric on a manifold (Greek indices run from $0$ to $d$, which denotes the number of spatial dimensions) and we have performed a Wick rotation ($t\to -i t$). In this case, for large $s$ the corresponding spectral dimension will probe the geometry associated with $g_{\alpha\beta}$. For small $s$, given that spacetime is flat in a sufficiently small neighborhood of each point, the spectral dimension will actually probe the kinematics associated with $\hat{D}$ [@SVW1]. So, when $\hat{D}$ reduces to the flat d’Alembertian at small scales, then $d_s \to d+1$. Note that if $\hat{D}$ is a more complicated differential operator at small scales ([*e.g.*]{} due to ultraviolet corrections) this will be encoded in the small $s$ behaviour of $d_s$.
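As a quick check of the last statement, for the flat Wick-rotated d'Alembertian the return probability is just the coincidence limit of the standard heat kernel in $d+1$ Euclidean dimensions, so that $$P(s)=\frac{1}{(4\pi s)^{(d+1)/2}}, \qquad d_s=-2\frac{\mathrm{d}\ln P(s)}{\mathrm{d}\ln s}=d+1.$$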
Given the above, it is very tempting to use the spectral dimension as a probe of the continuum limit of discrete theories, and CDT in particular, or as a potential link to continuum theories. In fact, in Ref. [@Benedetti:2009ge] the large $s$ behaviour of $d_s$ was matched to the outcome of a diffusion process in a curved manifold with the geometry of a stretched sphere. However, the small $s$ (ultraviolet) behaviour of $d_s$ in CDT is, to date, not well understood. This regime is believed to encode the influence of quantum corrections and is expected to provide the most prominent hint towards the effective field theory arising from CDT. Our goal is to demonstrate that one can indeed extract considerable information regarding the ultraviolet continuum regime of CDT from the small $s$ behaviour of $d_s$, and to take a first crucial step towards identifying the characteristics of an (effective) continuum theory that could reproduce this behaviour.
The key idea is that $d_s(s)$ can be used to determine $\hat{D}$, which in turn characterizes (to some extent) a continuum theory. In Ref. [@SVW1] we argue that in principle one can indeed determine the dispersion relation associated with $\hat{D}$ when $d_s(s)$ is known in closed form. However, when the latter is known only in tabulated form, as is the case for CDT, one should instead rely on the technique of non-linear regression to reconstruct the dispersion relation from $d_s(s)$. This requires some educated guess for the general form of the dispersion relation.
Consequently, what one can do in practice is to choose a continuum theory of gravity that shares some fundamental characteristic(s) with CDT and check if it can reproduce the behaviour of $d_s(s)$. Clearly, this would not prove that the theory in question is the continuum limit of CDT: the spectral dimension does not carry all the information about the theory (as we will see in more detail below). Additionally, due to the differences in the definitions of the spectral dimension in the discretium and in the continuum there is still some ambiguity on the nature of the linkage it provides between discrete and continuum theories. Nonetheless, even with these caveats in mind, identifying a continuum theory that could fully reproduce the behaviour of the spectral dimension for small $s$ in CDT certainly seems to be a crucial step in understanding its ultraviolet continuum regime.
A suggestion for the candidate continuum theory has been made in Ref. [@Horava:2009if]. It was noticed there that the preferred causal structure of CDT, imposed by a preferred foliation by slices of constant time, is reminiscent of the symmetries of Hořava–Lifshitz (HL) gravity [@Horava:2009uw]. (See Ref. [@Sotiriou:2010wn] for a brief review including viability constraints.) Indeed, the latter is a theory with a preferred spacelike foliation, described by a scalar field. The existence of the preferred foliation allows for higher order spatial derivatives in the theory without having higher order time derivatives. This leads to significantly improved ultraviolet behaviour which actually renders the theory power-counting renormalizable [@Horava:2009uw; @Visser:2009ul], at the expense of giving up Lorentz invariance.
The action of HL gravity is, in the preferred foliation, $$S=\frac{M_\mathrm{pl}^2}{2} \int \mathrm{d}^d x \, \mathrm{d}t \, N \sqrt{g} \left( K^{ij} K_{ij} - \lambda K^{2} + \mathcal{V} \right),$$ where $K_{ij}=\left( \dot{g}_{ij} - \nabla_i N_j - \nabla_j N_i \right)/(2N) $ is the extrinsic curvature, $N$ is the lapse, $N_j$ the shift and $g_{ij}$ is the induced metric on the spacelike hypersurfaces (Latin indices run from $1$ to $d$). $M_\mathrm{pl}$ is the Planck mass and $\lambda$ is a dimensionless running coupling. $\mathcal{V}$ is the part of the Lagrangian which contains only spatial derivatives. Power-counting renormalizability requires that it includes terms with at least $2d$ derivatives. Generally, $\mathcal{V}$ should include all terms compatible with the symmetries of the theory [@Sotiriou:2009gy; @Blas:2009qj]. (For $\lambda=1$, $\mathcal{V}=R$, where $R$ is the Ricci scalar of $g_{ij}$, the theory reduces to general relativity.)
We will focus here on $2+1$ dimensions ($d=2$) mainly because, due to computational limitations, lower dimensional simulations are expected to produce more accurate data sets for our purposes. For $d=2$ we have [@Sotiriou:2011dr] $$\begin{aligned}
{\mathcal V}&=&\xi R +\eta\, a^2 +g_1 \,R^2+g_2\, \nabla^2R+g_3\,a^4+g_4\, Ra^2\nonumber\\&&
+g_5 a^2 (\nabla^j a_j) + g_6 (\nabla^i a_i)^2 + g_7 (\nabla_i a_j) (\nabla^i a^j),\;\end{aligned}$$ where $a_{i}=\partial_i \ln N$, $a^2=a_ia^i$, and $\eta$ and $\xi$ are dimensionless couplings, while the $g_{i}$ couplings have dimensions of an inverse mass squared. We have neglected the cosmological constant as it is irrelevant for our purposes.
We are interested in small length scales where curvature effects are negligible and quantum effects are expected to be important. So, for the diffusion process through which we define $d_s$ for HL gravity it is sufficient to use linearized propagators around flat space for $\hat{D}$. As we have shown in Ref. [@Sotiriou:2011dr], in $2+1$ dimensional HL gravity, the foliation-defining scalar with dispersion relation $$\label{Eq:DispRelMostGen2+1}
\omega^2= \frac{P_\mathrm{1}(k^2)}{P_\mathrm{2}(k^2)}=\mathtt{A}k^2 \frac{1+ \mathtt{B}k^2 + \mathtt{C}k^4}{1+\mathtt{D}k^2},$$ is the only propagating mode. Here $$\begin{aligned}
\label{Eq:A} &&\mathtt{A}=\frac{1-\lambda}{1-2\lambda} \frac{\xi^2}{\eta}, \quad
\mathtt{B}= \frac{2\left( 2 \eta \, g_1 + \xi \, g_2 \right)}{\xi^2}, \\
&&\mathtt{C}= \frac{g_2^2 - 4g_1(g_6+g_7)}{\xi^2},\quad \mathtt{D}= \frac{g_{6}+g_7}{2\eta}. \nonumber\end{aligned}$$ (The dispersion relation for the scalar mode in $3+1$-dimensional HL gravity is qualitatively the same [@Blas:2009qj].)
In Ref. [@SVW1] we discuss in detail how one can define the spectral dimension associated with a general dispersion relation of the form $\omega^2=f(k^2)$. In this case the formal solution of Eq. (\[Eq:DiffusionProcess\]) after Wick rotation is $$\label{Eq:ProbabilityDensity}
\rho(\mathbf{x},\mathbf{x}',s)=\int{ \frac{\mathrm{d}^2 k \, \mathrm{d} \omega}{(2\pi)^{3}}
\mathrm{e}^{i(\mathbf{k}\cdot(\mathbf{x}-\mathbf{x}'))} \, \mathrm{e}^{-s(\omega^2 + f(k^2))} \, .}$$ A straightforward calculation yields $$\label{Eq:SpecDimHL}
d_s=1+ 2s \, \frac{\int{f(k^2) \, k^{d-1} \mathrm{e}^{-s\, f(k^2)} \; \mathrm{d} k }}{\int{ k^{d-1} \mathrm{e}^{-s\, f(k^2)} \; \mathrm{d} k }} \, .$$ This formula is directly applicable to the dispersion relation in Eq. (\[Eq:DispRelMostGen2+1\]).
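For concreteness, the integrals in Eq. (\[Eq:SpecDimHL\]) can be evaluated with standard numerical quadrature; the following minimal Python sketch (not the code used for our fits; the values of $\mathtt{B}$, $\mathtt{C}$, $\mathtt{D}$ are placeholders chosen only to exhibit the crossover) does so for $d=2$ and $\mathtt{A}=1$.

```python
# Numerical evaluation of d_s(s) from Eq. (SpecDimHL) for the rational
# dispersion relation omega^2 = k^2 (1 + B k^2 + C k^4)/(1 + D k^2), A = 1.
# B, C, D below are placeholder values, not the best-fit parameters.
import numpy as np
from scipy.integrate import quad

d = 2
B, C, D = -1.0, 300.0, 10.0

def f(k2):
    return k2 * (1.0 + B * k2 + C * k2**2) / (1.0 + D * k2)

def spectral_dimension(s):
    num = quad(lambda k: f(k**2) * k**(d - 1) * np.exp(-s * f(k**2)),
               0.0, np.inf, limit=200)[0]
    den = quad(lambda k: k**(d - 1) * np.exp(-s * f(k**2)),
               0.0, np.inf, limit=200)[0]
    return 1.0 + 2.0 * s * num / den

for s in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(s, spectral_dimension(s))   # flows from ~2 (small s) to ~3 (large s)
```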
We have now laid out all of the technical tools that allow us to fit the spectral dimension for CDT in the ultraviolet continuum regime in $2+1$ dimensions using the simulations of Ref. [@Benedetti:2009ge]. These simulations were repeated for $6$ different values of the number of simplices, $N_s=[40, 50, 70, 100, 140, 200]\times 10^3$. For each independent simulation $N_s$ was held fixed, and the sum was taken over $1000$ different histories (geometries). On each history the (fictitious) diffusion process was implemented and used to calculate the corresponding spectral dimension $d_s$.
In Ref. [@Horava:2009if] the spectral dimensions of CDT and HL gravity were compared at the two limits of the ultraviolet continuum regime and they were found to be in good agreement. Both theories seem to predict that at small scales $d_s$ flows to 2, whereas at length scales large enough for the quantum corrections to become largely subdominant, but small enough for curvature corrections to be unimportant, $d_s$ flows to 3, the number of topological dimensions. Encouraging as it may be, this result is far from conclusive, given that there are infinitely many curves joining two points.
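On the HL side these two limits follow directly from Eq. (\[Eq:SpecDimHL\]): for a pure power-law dispersion $f(k^2)=k^{2n}$, the substitution $u=s^{1/2n}k$ together with $\int_0^\infty u^{m-1}\mathrm{e}^{-u^{2n}}\,\mathrm{d}u=\Gamma\!\left(\tfrac{m}{2n}\right)/(2n)$ gives $$d_s = 1 + 2\,\frac{\Gamma\!\left(\frac{d+2n}{2n}\right)}{\Gamma\!\left(\frac{d}{2n}\right)} = 1+\frac{d}{n},$$ so the dispersion relation (\[Eq:DispRelMostGen2+1\]), which interpolates between $\omega^2\propto k^2$ in the infrared and $\omega^2\propto k^4$ in the ultraviolet, indeed yields $d_s\to 3$ and $d_s\to 2$, respectively, for $d=2$.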
Our goal is far more ambitious: we numerically evaluate the spectral dimension in Eq. (\[Eq:SpecDimHL\]) for a dispersion relation of the kind (\[Eq:DispRelMostGen2+1\]), and fit the CDT data using the method of non-linear regression. This will allow us to obtain the parameters of the dispersion relation that provide the best fit for the [*whole*]{} ultraviolet continuum region, and to assess how good such a fit can be. This will provide a conclusive answer to whether HL gravity can reproduce the behaviour of $d_s$ for CDT in this regime.
This procedure will not uniquely determine all of the parameters of HL gravity. First of all, it is already clear from Eqs. (\[Eq:A\]) that $g_3$, $g_4$ and $g_5$ do not enter into the linearized propagator and, therefore, they cannot be determined without taking into account curvature corrections. Secondly, there are $7$ couplings that do enter in the propagator, but only in $4$ combinations. This implies that any observable that probes the dispersion relation will give us some but not full insight into the fundamental theory. Finally, without loss of generality, we can choose to work in units where the infrared light speed is one, i.e. $\mathtt{A}=1$. This amounts to the rescaling $\omega\to \omega/\sqrt{\mathtt{A}}$.
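Schematically, the regression step can be implemented with standard least-squares routines. The sketch below is only an illustration of the procedure, not the analysis pipeline actually used: the arrays `s_data` and `ds_data` are hypothetical placeholders for the tabulated CDT values in the fit window $s\in[15\,,\,614]$, and the actual fit may involve additional normalisations.

```python
# Hedged sketch of the non-linear regression: fit B, C, D (with A = 1) of the
# rational dispersion relation to tabulated spectral-dimension data.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

d = 2

def ds_model(s, B, C, D):
    def f(k2):
        return k2 * (1.0 + B * k2 + C * k2**2) / (1.0 + D * k2)
    def one(si):
        num = quad(lambda k: f(k**2) * k**(d - 1) * np.exp(-si * f(k**2)),
                   0.0, np.inf, limit=200)[0]
        den = quad(lambda k: k**(d - 1) * np.exp(-si * f(k**2)),
                   0.0, np.inf, limit=200)[0]
        return 1.0 + 2.0 * si * num / den
    return np.array([one(si) for si in np.atleast_1d(s)])

# s_data, ds_data: hypothetical arrays holding the tabulated CDT values
# restricted to the fit window s in [15, 614].
# popt, pcov = curve_fit(ds_model, s_data, ds_data, p0=[-1.0, 300.0, 10.0])
# perr = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties on B, C, D
```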
In Fig. \[fig:subfig1\] we plot the spectral dimension $d_s$ resulting from the CDT simulation with the largest number of simplices. As illustrated by the differently shaded areas, the diffusion process can be divided into three physically different regions. For very small $s\in[1 \, , \, 15]$ the spectral dimension shows discrete behaviour, as expected. For very large $s\in[614 \, , \, 29999]$, the behaviour of the spectral dimension is expected to be dominated by curvature effects. This is the region where the flow of $d_s$ is compatible with a diffusion process on a continuous manifold with the geometry of a stretched sphere [@Benedetti:2009ge]. The intermediate region, where continuous behaviour emerges but quantum corrections are predominant, is the one we are interested in. Our model fit for this region ($s\in[15 \, , \, 614]$) is represented by a red solid line. Fig. \[fig:subfig2\] zooms into this region.
We have chosen to present a fit for the simulation with the largest number of simplices because the higher the number of simplices, the larger the volume of the universe in the simulations. If the volume is not large enough, curvature effects can kick in before quantum effects become subdominant. There might then be no regime where both effects can be neglected, and hence no regime where the diffusion effectively takes place in flat space and the spectral dimension flows to $3$. This undesirable behaviour is indeed observed in some of the simulations with fewer simplices.
In Fig. \[fig:subfig1\] we see that, for the $200,000$-simplices simulation, $d_s$ starts at around $2$ in the ultraviolet continuum regime and first reaches $3$ in a regime where quantum effects are apparently still non-negligible and push $d_s$ slightly over $3$. Eventually, $d_s$ reaches $3$ for a second time and then drops to lower values under the influence of curvature corrections. Remarkably, our model can reproduce this behaviour very well in all of the region where curvature corrections are negligible, including the part of the curve that exceeds $3$. The corresponding values of the model parameters, in units where $\mathtt{A}=1$, are $\mathtt{B}=-1.18\pm0.22$, $\mathtt{C}=344.47\pm3.11$, $\mathtt{D}=10.08\pm0.30$, which are compatible with technically natural values for the various couplings of HL gravity.
As an indication of how good our fit is, we present in Fig. \[fig:subfig3\] the individual residuals (the difference between the data and the best fit) for $3$ simulations involving different numbers of simplices: $N_s=[50, 100, 200]\times 10^{3}$. For all $3$ cases the residuals grow for very small values of $s$. We attribute this to the fact that at sufficiently small $s$ the discrete nature of CDT becomes important. Note that, even though we do not attempt to fit the lightly-shaded area of Fig. \[fig:subfig1\] where the data points violently oscillate, the absence of obvious oscillations in $d_s$ does not imply the complete absence of discreteness effects.
Moving to intermediate values of $s$, a somewhat worrisome feature of the residuals is that they are not randomly scattered, but instead exhibit an oscillatory pattern. This indicates that the fit might be missing a systematic effect. Nevertheless, the fact that the amplitude of the oscillations clearly decreases as one moves to larger numbers of simplices is very encouraging, as it indicates that this effect is likely to become negligible as the size of the simulations increases. Additionally, this unknown effect might well be a subdominant contribution of the aforementioned discreteness effects.
The absolute magnitude of the residuals for all simulations becomes large again for sufficiently large values of $s$. This is expected and signals the scale at which curvature corrections, which we neglect, become important. Consequently, the fact that the residuals start growing at a value of $s$ that is lower than the value for which $d_s=3$ for the second time (or even for the first time, for lower numbers of simplices) seems to indicate that, even for the largest number of simplices, the universe is not large enough to allow one to comfortably define a patch of spacetime that is large enough for quantum effects to be negligible and, at the same time, small enough for curvature corrections to be largely subdominant.
It should be pointed out that we have neglected the running of couplings, even though $d_s$ is not necessarily a universal observable throughout the region considered. We are not fitting the deep UV, and so it is not unreasonable to assume that the running is not very significant. Our results [*a posteriori*]{} justify this expectation.
We have attempted to fit the CDT data with other types of dispersion relations, either inspired by lattice field theory, or just simple polynomials, which typically arise in projectable HL gravity where the lapse function is taken to be space independent [@Horava:2009uw; @Sotiriou:2009gy]. However, the independent residual test largely favours rational dispersion relations, as in Eq. (\[Eq:DispRelMostGen2+1\]) [^1]. The flow of $d_s$ as defined in the discretium in CDT appears to match that of the foliation-defining scalar in HL gravity in the ultraviolet continuum regime, and not that of a test scalar field.
To summarize, we have shown that in $2+1$ dimensions HL gravity provides a very good fit (which is expected to improve as simulations include a larger number of simplices) to the flow of the spectral dimension in CDT throughout the ultraviolet continuum regime. Even though this is by no means enough to argue that the former is the continuum limit of the latter, it does provide evidence for an important connection between the two theories. It also demonstrates beyond doubt that the spectral dimension is a powerful tool for relating discrete and continuum theories and for gaining insight into their behaviour.
We are indebted to D. Benedetti, J. Henson, and R. Loll for providing the CDT data and for useful discussions. We also thank P. Hořava for enlightening comments and discussions. TPS and SW were supported by Marie Curie Fellowships. MV was supported by the Marsden Fund.
T. Regge, Nuovo Cimento A [**19**]{}, 558 (1961).
R. M. Williams, J. Phys. Conf. Ser. [**33**]{}, 38 (2006).
J. Ambjorn and R. Loll, Nucl. Phys. B [**536**]{}, 407 (1998).
J. Ambjorn, J. Jurkiewicz and R. Loll, Phys. Rev. Lett. [**93**]{}, 131301 (2004); Phys. Rev. D [**72**]{}, 064014 (2005).
D. Benedetti and J. Henson, Phys. Rev. D [**80**]{}, 124036 (2009).
J. Ambjorn, J. Jurkiewicz and R. Loll, Phys. Rev. Lett. [**95**]{}, 171301 (2005).
P. Hořava, Phys. Rev. Lett. [**102**]{}, 161301 (2009).
T. P. Sotiriou, M. Visser and S. Weinfurtner, arXiv:1105.6098 \[hep-th\].
P. Hořava, Phys. Rev. D [**79**]{}, 084008 (2009).
T. P. Sotiriou, J. Phys. Conf. Ser. [**283**]{}, 012034 (2011).
M. Visser, Phys. Rev. D [**80**]{}, 025011 (2009).
T. P. Sotiriou, M. Visser and S. Weinfurtner, Phys. Rev. Lett. [**102**]{}, 251601 (2009); JHEP [**0910**]{}, 033 (2009).
D. Blas, O. Pujolas and S. Sibiryakov, Phys. Rev. Lett. [**104**]{}, 181302 (2010).
T. P. Sotiriou, M. Visser and S. Weinfurtner, arXiv:1103.3013 \[hep-th\].
[^1]: Projectable HL gravity type dispersion relations in $2+1$ dimensions could also be ruled out analytically as they are of the form $\omega^2=k^4$ [@Sotiriou:2011dr] and $d_s$ remains $2$ throughout.