---
abstract: |
This paper provides the basis for new methods of inference for max-stable processes $\xi$ on general spaces that admit a certain incremental representation, which, in important cases, has a much simpler structure than the max-stable process itself. A corresponding peaks-over-threshold approach will incorporate all single events that are extreme in some sense and will therefore rely on a substantially larger amount of data in comparison to estimation procedures based on block maxima.\
Conditioning a process $\eta$ in the max-domain of attraction of $\xi$ on being *extremal*, several convergence results for the increments of $\eta$ are proved. In a similar way, the shape functions of mixed moving maxima (M3) processes can be extracted from suitably conditioned single events $\eta$. Connecting the two approaches, transformation formulae for processes that admit both an incremental and an M3 representation are identified.
bibliography:
- 'HREstimation.bib'
title: |
Representations of max-stable processes\
based on single extreme events
---
Introduction
============
The joint extremal behavior at multiple locations of some random process $\{\eta(t): t\in T\}$, $T$ an arbitrary index set, can be captured via its limiting *max-stable process*, assuming the latter exists and is non-trivial everywhere. Then, for independent copies $\eta_i$ of $\eta$, $i\in{\mathbb N}$, the functions $b_n: T \to {\mathbb R}$, $c_n : T\to (0,\infty)$ can be chosen such that the convergence $$\begin{aligned}
\label{MDA}
\xi(t) = \lim_{n\to\infty} c_n(t) \Big(\max_{i=1}^n \eta_i(t) - b_n(t)\Big),
\quad t\in T,\end{aligned}$$ holds in the sense of finite-dimensional distributions. The process $\xi$ is said to be *max-stable* and $\eta$ is in its max-domain of attraction (MDA). The theory of max-stable processes is mainly concerned with the dependence structure while the marginals are usually assumed to be known. Even for finite-dimensional max-stable distributions, the space of possible dependence structures is uncountably infinite-dimensional and parametric models are required to find a balance between flexibility and analytical tractability [@deh2006a; @res2008].
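The normalization in \[MDA\] can be illustrated numerically in the simplest univariate case. For $\eta$ standard Fréchet, max-stability implies that $\max_{i=1}^n \eta_i / n$ is again standard Fréchet, so the normalized block maxima have distribution function $\exp(-1/x)$ for every $n$. The following minimal Python sketch (sample sizes are arbitrary choices) checks this at $x=1$:

```python
import numpy as np

rng = np.random.default_rng(5)

# Standard Frechet samples: eta = -1/log(U) with U uniform has cdf exp(-1/x).
n_blocks, n = 100_000, 50
eta = -1.0 / np.log(rng.uniform(size=(n_blocks, n)))

# Normalized block maxima with c_n = 1/n and b_n = 0, as in the text.
m = np.max(eta, axis=1) / n

# By max-stability, P(m <= 1) = exp(-1/n)^n = exp(-1) for every n.
p_hat = np.mean(m <= 1.0)
```

The empirical probability `p_hat` should be close to $e^{-1} \approx 0.368$, reflecting that here the limit in \[MDA\] is attained exactly.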
A general construction principle for max-stable processes was provided by [@deh1984; @smi1990]: Let $\sum_{i\in{\mathbb N}} \delta_{(U_i, S_i)}$ be a Poisson point process (PPP) on $(0,\infty)\times{\mathcal S}$ with intensity measure $u^{-2}\rd u\cdot \nu(\rd s)$, where $({\mathcal S},
\mathfrak S)$ is an arbitrary measurable space and $\nu$ a positive measure on ${\mathcal S}$. Further, let $f:{\mathcal S}\times T \to [0, \infty)$ be a non-negative function with $\int_{{\mathcal S}} f(s,t) \nu(\rd s) = 1$ for all $t\in T$. Then the process $$\begin{aligned}
\xi(t) = \max_{i\in{\mathbb N}} U_i f(S_i, t), \quad t\in T,\label{constr_max_stable}\end{aligned}$$ is max-stable and has standard Fréchet margins with distribution function $\exp(-1/x)$ for $x \geq 0$. In this paper, we restrict ourselves to two specific choices for $f$ and $({\mathcal S}, \mathfrak S, \nu)$ and consider processes that admit one of the resulting representations. First, let $\{W(t) : t\in T\}$ be a non-negative stochastic process with $\sE W(t) = 1$, $t\in T$, and $W(t_0) = 1$ a.s. for some point $t_0 \in T$. The latter condition means that $W(t)$ simply describes the multiplicative increment of $W$ w.r.t. the location $t_0$. For $({\mathcal S}, \mathfrak S, \nu)$ being the canonical probability space for the sample paths of $W$ and with $f(w,t)=w(t)$, $w\in{\mathcal S}$, $t\in T$, we refer to $$\begin{aligned}
\label{def_xi}
\xi(t) = \max_{i\in{\mathbb N}} U_i W_i(t), \quad t\in T,\end{aligned}$$ as the *incremental representation* of $\xi$, where $\{W_i\}_{i\in{\mathbb N}}$ are independent copies of $W$. Since $T$ is an arbitrary index set, the above definition covers multivariate extreme value distributions, i.e. $T=\{t_1,\dots,t_k\}$, as well as max-stable random fields, i.e. $T = {\mathbb R}^d$.\
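The representation \[def\_xi\] suggests a direct simulation scheme: realize the Fréchet point process as $U_i = 1/\Gamma_i$, where $\Gamma_i$ are the arrival times of a unit-rate Poisson process, and truncate after finitely many atoms. The following Python sketch approximates $\xi$ on a grid in $[0,1]$; the grid, the truncation level and the particular log-Brownian choice of $W$ (which satisfies $\sE W(t)=1$ and $W(0)=1$ a.s.) are illustrative assumptions, not part of the theory above.

```python
import numpy as np

rng = np.random.default_rng(0)

grid = np.linspace(0.0, 1.0, 50)

def brownian_W(n):
    # W(t) = exp(B(t) - t/2) with B standard Brownian motion:
    # E W(t) = 1 and W(0) = 1 almost surely.
    dt = np.diff(grid, prepend=0.0)
    B = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n, grid.size)), axis=1)
    return np.exp(B - grid / 2)

def simulate_incremental(W_sampler, n_points=5_000):
    # Frechet point process with intensity u^{-2} du: U_i = 1/Gamma_i for
    # unit-rate Poisson arrivals Gamma_i; truncating after n_points keeps
    # the largest atoms, so the approximation error is small.
    gamma = np.cumsum(rng.exponential(size=n_points))
    U = 1.0 / gamma
    W = W_sampler(n_points)                 # shape (n_points, len(grid))
    return np.max(U[:, None] * W, axis=0)   # xi(t) = max_i U_i W_i(t)

xi = simulate_incremental(brownian_W)
```

Any other non-negative process with unit means and $W(t_0)=1$ a.s. could be plugged in via `W_sampler`.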
For the second specification, let $\{F(t): \ t \in {\mathbb R}^d\}$ be a stochastic process with sample paths in the space $C({\mathbb R}^d)$ of non-negative continuous functions, such that $$\begin{aligned}
\label{assumption_integral}
\textstyle \sE \int_{{\mathbb R}^d} F(t)
\rd t = 1.\end{aligned}$$ With $S_i = (T_i,F_i)$, $i\in{\mathbb N}$, in ${\mathcal S}= {\mathbb R}^d\times C({\mathbb R}^d)$, intensity measure $\nu(\rd t \times \rd g)=\rd t\sP_F(\rd g)$ and $f((t,g), s)=g(s-t)$, $(t,g)\in{\mathcal S}$, we obtain the class of *mixed moving maxima (M3) processes* $$\begin{aligned}
\xi(t) = \max_{i\in{\mathbb N}} U_i F_i(t- T_i), \quad t\in{\mathbb R}^d. \label{def_M3}\end{aligned}$$ These processes are max-stable and stationary on ${\mathbb R}^d$ (see for instance [@wan2010]). The function $F$ is called *shape function of $\xi$* and can also be deterministic (e.g., in case of the Smith process). In Smith’s “rainfall-storm” interpretation [@smi1990], $U_i$ and $T_i$ are the strength and center point of the $i$th storm, respectively, and $U_i F_i(t- T_i)$ represents the corresponding amount of rainfall at location $t$. In this case, $\xi(t)$ is the process of extremal precipitation.
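An M3 process can be simulated in the same spirit by restricting the PPP to a bounded window of storm centers. The sketch below (in Python; the window extension, truncation level and the deterministic Gaussian-density shape, i.e. a one-dimensional Smith model with $\int F = 1$, are illustrative assumptions) approximates \[def\_M3\] on a one-dimensional grid.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_m3(grid, n_points=20_000, margin=5.0, tau=1.0):
    # Storm centres T_i are uniform on the observation window extended by
    # `margin` on both sides; multiplying the Frechet atoms by the window
    # length matches the intensity u^{-2} du dt restricted to that window.
    lo, hi = grid[0] - margin, grid[-1] + margin
    gamma = np.cumsum(rng.exponential(size=n_points))
    U = (hi - lo) / gamma
    T = rng.uniform(lo, hi, size=n_points)
    # Deterministic Gaussian-density shape F with scale tau: int F = 1.
    F = np.exp(-(grid[None, :] - T[:, None]) ** 2 / (2 * tau ** 2)) \
        / (tau * np.sqrt(2 * np.pi))
    return np.max(U[:, None] * F, axis=0)   # M(t) = max_i U_i F(t - T_i)

grid = np.linspace(0.0, 10.0, 101)
M = simulate_m3(grid)
```

Storms centered outside the extended window are discarded; with a light-tailed shape their contribution on the grid is negligible.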
When i.i.d. realizations $\eta_1, \ldots, \eta_n$ of $\eta$ in the MDA of a max-stable process $\xi$ are observed, a classical approach for parametric inference on $\xi$ is based on generating (approximate) realizations of $\xi$ out of the data $\eta_1, \ldots, \eta_n$ via componentwise block maxima and applying maximum likelihood (ML) estimation afterwards. A clear drawback of this method is that it ignores all information on large values that is contained in
the order statistics below the within-block maximum. Further, ML estimation needs to evaluate the multivariate densities while for many max-stable models only the bivariate densities are known in closed form. Thus, composite likelihood approaches have been proposed [@pad2010; @dav2012].\
In univariate extreme-value theory, the second standard procedure estimates parameters by fitting a certain PPP to the *peaks-over-threshold* (POT) data, i.e., to the empirical process of exceedances over a certain critical value [@lea1991; @emb1997]. Also in the multivariate framework we can expect to profit from using all extremal data via generalized POT methods instead of aggregated data. In contrast to the ML approach, in this paper, we assume that $\xi$ admits one of the two representations \[def\_xi\] and \[def\_M3\], and we aim at extracting realizations of the processes $W$ and $F$, respectively, from *single extreme events*. Here, the specification of a single extreme event will depend on the respective representation.\
In [@eng2012a], this concept is applied to derive estimators for the class of Brown-Resnick processes [@bro1977; @kab2009], which have the form \[def\_xi\] by construction. With $a(n)$ being a sequence of positive numbers with $\lim_{n\to\infty} a(n) = \infty$, the convergence in distribution $$\begin{aligned}
\Bigg( \frac{\eta(t_1)}{\eta(t_0)}, \ldots,
\frac{\eta(t_k)}{\eta(t_0)}
\ \Bigg|\
\eta(t_0) > a(n) \Bigg)\cvgdist \bigl(
W(t_1),\dots, W(t_k)
\bigr),
\label{cond_incr_conv}\end{aligned}$$ $t_0,t_1,\dots,t_k\in T$, $k\in {\mathbb N}$, is established for $\eta$ being in the MDA of a Brown-Resnick process and with $W$ being the corresponding log-Gaussian random field. A similar approach exists in the theory of homogeneous discrete-time Markov chains. For instance, [@seg2007] and [@ehl2011] investigate the behavior of a Markov chain $\{M(t): t\in {\mathbb Z}\}$ conditional on the event that $M(0)$ is large. The resulting extremal process is coined the *tail chain* and turns out to be Markovian again. In this paper, the convergence result \[cond\_incr\_conv\] is generalized in several respects. Arbitrary non-negative processes $\{W(t) : t\in T\}$ with $\sE W(t) = 1$, $t\in T$, are considered, and convergence of the conditional increments of $\eta$ in the sense of finite-dimensional distributions as well as weak convergence in continuous function spaces is shown (Theorems \[theo\_cond\_increments\_general\] and \[theo\_cond\_increments\_cont\]). Moreover, in Section \[M3representation\], similar results are established for M3 processes by considering realizations of $\eta$ around their (local) maxima. Since one and the same max-stable process $\xi$ might admit both representations \[def\_xi\] and \[def\_M3\], we provide formulae for switching between them in Section \[sec:switching\]. Section \[sec:application\] gives an exemplary outlook on how our results can be applied for statistical inference.
Incremental representation {#examples_increment_representation}
==========================
Throughout
this section, we suppose that $\{\xi(t): \ t\in T\}$, where $T$ is an arbitrary index set, is normalized to standard Fréchet margins and admits a representation $$\begin{aligned}
\label{def_xi2}
\xi(t) = \max_{i\in{\mathbb N}} U_i V_i(t), \quad t\in T,\end{aligned}$$ where $\sum_{i\in{\mathbb N}}\delta_{U_i}$ is a PPP on $(0,\infty)$ with intensity $u^{-2}\rd u$, which we call *Fréchet point process* in the following. The $\{V_i\}_{i\in{\mathbb N}}$ are independent copies of a non-negative stochastic process $\{V(t): \ t\in T\}$ with $\sE V(t) =
1$, $t\in T$. Note that \[def\_xi2\] is slightly less restrictive than the representation \[def\_xi\] in that we do not require that $V(t_0)=1$ a.s. for some $t_0\in T$. For any fixed $t_0\in T$, we have $$\begin{aligned}
\label{decomp_V}
\xi(t) \eqdist \max_{i\in{\mathbb N}} U_i
\left({\mathbf 1}_{P_i=0}V^{(1)}_i(t) + {\mathbf 1}_{P_i=1}V^{(2)}_i(t)\right),
\quad t\in T,\end{aligned}$$ where $\{P_i\}_{i\in{\mathbb N}}$ are i.i.d. Bernoulli variables with parameter $p=\sP(V(t_0) = 0)$ and the $V^{(1)}_i$ and $V^{(2)}_i$ are independent copies of the process $\{V(t): \ t\in T\}$, conditioned on the events $\{V(t_0) > 0\}$ and $\{V(t_0)= 0\}$, respectively.
Note that for $k\in{\mathbb N}$, $t_0, \ldots, t_k\in T$, the vector $\Xi =
(\xi(t_0),\dots,\xi(t_k))$ follows a $(k+1)$-variate extreme-value distribution and its distribution function $G$ can therefore be written as $$\begin{aligned}
\label{def_mu}
G(\mathbf{x}) = \exp( -\mu( [{\mathbf 0},\mathbf{x}]^C) ),
\quad
\mathbf{x} \in {\mathbb R}^{k+1},\end{aligned}$$ where $\mu$ is a measure on $E = [0,\infty)^{k+1}\setminus\{{\mathbf 0}\}$, the so-called *exponent measure* of $G$ [@res2008 Prop. 5.8], and $[{\mathbf 0},\mathbf{x}]^C = E\setminus
[{\mathbf 0},\mathbf{x}]$.
The following convergence result provides the theoretical foundation for statistical inference based on the incremental process $V$.
\[theo\_cond\_increments\_general\] Let $\{\eta(t): \ t\in T\}$ be non-negative and in the MDA of some max-stable process $\xi$ that admits a representation \[def\_xi2\], and suppose that $\eta$ is normalized such that \[MDA\] holds with $c_n(t) = 1/n$ and $b_n(t) = 0$ for $n\in{\mathbb N}$ and $t\in T$. Let $a(n)\to\infty$ as $n\to\infty$. For $k\in {\mathbb N}$ and $t_0,\dots,t_k\in T$ we have the convergence in distribution on ${\mathbb R}^{k+1}$ $$\begin{aligned}
\left(\frac{\eta(t_0)}{a(n)}, \frac{\eta(t_1)}{\eta(t_0)} ,\dots,
\frac{\eta(t_k)}{\eta(t_0)}
\ \Bigg|\
\eta(t_0) > a(n)\right) \cvgdist
\left(Z, \Delta\mathbf{\tilde{V}}^{(1)}\right),\quad n\to \infty,
\end{aligned}$$ where the distribution of $\Delta\mathbf{\tilde{V}}^{(1)}$ is given by $$\begin{aligned}
\sP(\Delta\mathbf{\tilde{V}}^{(1)}\in d \mathbf z) =
(1-p)\sP(\Delta\mathbf V^{(1)}\in d \mathbf z)
\sE\bigl( V^{(1)}(t_0) \big|
\Delta\mathbf V^{(1)}=\mathbf z \bigr), \quad \mathbf{z} \geq {\mathbf 0}.
\label{density_increment}
\end{aligned}$$ Here, $\Delta\mathbf V^{(1)}$ denotes the vector of increments $\left(\frac{V^{(1)}(t_1)}{V^{(1)}(t_0)}, \ldots,
\frac{V^{(1)}(t_k)}{V^{(1)}(t_0)}\right)$ with respect to $t_0$, and $Z$ is an independent standard Pareto variable, i.e., $\sP(Z > z) = z^{-1}$ for $z \geq 1$.
Note that any process $\eta$ that satisfies the convergence in \[MDA\] for a process $\xi$ with standard Fréchet margins can be normalized such that the norming functions in \[MDA\] become $c_n(t) = 1/n$ and $b_n(t) = 0$, $n\in{\mathbb N}$, $t\in T$ [@res2008 Prop. 5.10].
For $\mathbf{X} = (\eta(t_0),\dots,\eta(t_k))$, which is in the MDA of the random vector $\Xi=(\xi(t_0),\dots,\xi(t_k))$, it follows from [@res2008 Prop. 5.17] that $$\begin{aligned}
\label{conv_resnick}
\lim_{m\to\infty} m \sP( \mathbf{X}/m \in B ) = \mu(B),\end{aligned}$$ for all elements $B$ of the Borel $\sigma$-algebra $\mathcal B(E)$ of $E$ bounded away from $\{{\mathbf 0}\}$ with $\mu(\partial B)=0$, where $\mu$ is defined by \[def\_mu\]. For $s_0> 0$ and ${\mathbf s}=(s_1, \ldots,
s_k)\in [0, \infty)^{k}$, we consider the sets $A_{s_0}=(s_0,\infty)\times [0, \infty)^k$, $A=A_1$ and $B_{\mathbf{s}} = \{\mathbf{x} \in [0, \infty)^{k+1} :
(x^{(1)},\dots,x^{(k)}) \leq x^{(0)}\mathbf{s}\}$ for ${\mathbf s}$ satisfying $\sP( \Delta\tilde {\mathbf V}^{(1)}\in \partial
[{\mathbf 0},{\mathbf s}])=0$. Then $$\begin{aligned}
\left\{ \eta(t_0) > s_0 a(n),\,
\big( \eta(t_1) / \eta(t_0) ,\dots, \eta(t_k) / \eta(t_0) \big)
\leq \mathbf{s} \right\}
= \{ \mathbf{X} / a(n) \in B_{\mathbf{s}}\cap A_{s_0} \},\end{aligned}$$ since $B_{\mathbf{s}}$ is invariant under multiplication, i.e., $B_{\mathbf s}=cB_{\mathbf s}$ for any $c>0$. Thus, we obtain $$\begin{aligned}
\notag
\sP&\left( \eta(t_0) > s_0 a(n),
\, \left( \eta(t_1) / \eta(t_0) ,\dots,
\eta(t_k) / \eta(t_0) \right) \leq \mathbf{s} \,\Big|\,
\eta(t_0) > a(n) \right) \\
\notag&= \frac{
{a(n)} \sP( \mathbf{X} / a(n)
\in
B_{\mathbf{s}} \cap A \cap A_{s_0} )}{
{a(n)} \sP( \mathbf{X} / a(n) \in A)} \\
\label{eq:01}
& \longrightarrow
\frac{\mu(B_{\mathbf{s}} \cap A \cap A_{s_0})}{\mu(A)},\quad (n\to\infty),\end{aligned}$$ where the convergence follows from \[conv\_resnick\], as long as $\mu\{ \partial (B_{\mathbf{s}} \cap A \cap A_{s_0})\} = 0$.\
Let $$\begin{aligned}
\label{def_xi3}
\xi^{(1)}(t) = \max_{i\in{\mathbb N}} U_i^{(1)} V^{(1)}_i(t),
\quad t\in T,
\end{aligned}$$ where $\sum_{i\in{\mathbb N}} \delta_{U_i^{(1)}}$ is a Poisson point process with intensity $(1-p)u^{-2}\rd u$ and let $\mu^{(1)}$ be the exponent measure of the associated max-stable random vector $(\xi^{(1)}(t_0), \ldots, \xi^{(1)}(t_k))$. Then the choice $A =
(1,\infty)\times [0,\infty)^k$ guarantees that $\mu(\cdot \cap A) =
\mu^{(1)}(\cdot \cap A)$. Comparing the construction of $\xi^{(1)}$ in \[def\_xi3\] with the definition \[def\_mu\] of the exponent measure, we see that $\mu^{(1)}$ is the intensity measure of the Poisson point process $\sum_{i\in{\mathbb N}} \delta_{(U_i^{(1)}
V_i^{(1)}(t_0),\, \ldots,\, U_i^{(1)} V_i^{(1)}(t_k))}$ on $E$. Hence, $$\begin{aligned}
\mu(A) &= \int_0^\infty (1-p)u^{-2} \sP(u V^{(1)}(t_0) > 1) \rd u \notag\\
&= (1-p)\int_0^\infty u^{-2} \int_{[u^{-1}, \infty)}
\sP(V^{(1)}(t_0) \in \rd y) \rd u \notag\\
&= (1-p)\int_0^\infty y \sP(V^{(1)}(t_0) \in \rd y)
= (1-p)\sE V^{(1)}(t_0) = 1,
\end{aligned}$$ where the last equality follows from $\sE V^{(1)}(t_0) = \sE
V(t_0)/(1-p)$. Furthermore, for $s_0\geq 1$ and ${\mathbf s}\in[0,\infty)^k$ with $\sP(\Delta\mathbf{\tilde V}^{(1)} \in \partial [{\mathbf 0},{\mathbf s}])=0$, $$\begin{aligned}
&\mu(B_{\mathbf{s}}
\cap A \cap A_{s_0}) / ((1-p)\mu(A)) \notag\\
&= \int_0^\infty u^{-2}
\sP\Bigl(u V^{(1)}(t_0) > s_0,\,
\big(u V^{(1)}(t_1),\dots, u V^{(1)}(t_k)\big) \leq
\mathbf{s} u V^{(1)}(t_0) \Bigr) \rd u\notag\\
&=
\int_0^\infty \int_{[s_0 u^{-1},\, \infty)} u^{-2}
\sP\Bigl(V^{(1)}(t_0)\in \rd y
\Big|\Delta\mathbf V^{(1)} \leq \mathbf{s} \Bigr)
\sP(\Delta\mathbf V^{(1)} \leq \mathbf{s} )\rd u
\notag\\
&=
\int_{[{\mathbf 0}, \mathbf s]} \int_{[0,\infty)} y s_0^{-1} \cdot
\sP\Bigl(V^{(1)}(t_0)\in \rd y \Big|
\Delta\mathbf V^{(1)}=\mathbf z \Bigr)
\sP(\Delta\mathbf V^{(1)}\in \rd{\mathbf z})
\notag\\
&=
s_0^{-1}\int_{[{\mathbf 0}, \mathbf s]} \sE\Bigl( V^{(1)}(t_0) \Big|
\Delta\mathbf V^{(1)}=\mathbf z \Bigr)
\sP(\Delta\mathbf V^{(1)}\in \rd{\mathbf z}).
\label{mu_expl}
\end{aligned}$$ Equation \[mu\_expl\] shows that the convergence in \[eq:01\] holds for all continuity points ${\mathbf s}\in [0, \infty)^{k}$ of the distribution function of $\Delta{\mathbf V}^{(1)}$. Since $s_0\geq 1$ was arbitrary, this concludes the proof.
1. If $V^{(1)}(t_0)$ is stochastically independent of the increments $\Delta\mathbf V^{(1)}$, we simply have $\sP(\Delta\mathbf{\tilde{V}}^{(1)}\in d \mathbf z) =
\sP(\Delta\mathbf{{V}}^{(1)}\in d \mathbf z)$.
2. If $p=\sP(V(t_0) = 0)=0$, the exponent measure $\mu$ of any finite-dimensional vector $\Xi=(\xi(t_0), \ldots, \xi(t_k))$, $t_0,
\ldots, t_k\in T$, $k\in{\mathbb N}$, satisfies the condition $\mu\left( \{0\}\times [0,\infty)^k \right)=0,$ and following Proposition \[calculateW\], the incremental representation of $\Xi$ according to \[def\_xi\] is given by $\Xi = \max_{i\in{\mathbb N}}
U_i \cdot (1, \Delta\mathbf{\tilde{V}}_i)^\top$, where $\Delta\mathbf{\tilde{V}}_i$, $i\in{\mathbb N}$, are independent copies of $\Delta\mathbf{\tilde{V}}=\Delta\mathbf{\tilde{V}}^{(1)}$.

3. If $\xi$ admits a representation \[def\_xi\], we have $\sP(\Delta\mathbf{\tilde{V}}^{(1)}\in d \mathbf z) =
\sP(\Delta\mathbf{{V}}\in d \mathbf z)$, which shows that \[cond\_incr\_conv\] is indeed a special case of Theorem \[theo\_cond\_increments\_general\].
\[rem\_thres\] In the above theorem, the sequence $a(n)$ of thresholds is only assumed to converge to $\infty$, as $n\to\infty$, ensuring that $\{\eta(t_0) > a(n)\}$ becomes a rare event. For statistical applications $a(n)$ should also be chosen such that the number of exceedances $$\begin{aligned}
N(n) = \sum_{i=1}^n {\mathbf 1}\{ \eta_i(t_0) > a(n) \}
\end{aligned}$$ converges to $\infty$ almost surely, where $(\eta_i)_{i\in{\mathbb N}}$ is a sequence of independent copies of $\eta$. By the Poisson limit theorem, this is equivalent to the additional assumption that $\lim_{n\to\infty} a(n)/n = 0$, since in that case $n\sP(\eta(t_0) >
a(n)) = n / a(n) \to \infty$, as $n\to\infty$.
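The effect of the condition $a(n)/n \to 0$ can be seen numerically. With the illustrative choice $a(n) = \sqrt{n}$ and $\eta(t_0)$ standard Fréchet, the number of exceedances grows like $n/a(n) = \sqrt{n}$, as the following Python sketch shows:

```python
import numpy as np

rng = np.random.default_rng(2)

counts = {}
for n in (10_000, 1_000_000):
    # Standard Frechet samples for eta(t_0).
    eta0 = -1.0 / np.log(rng.uniform(size=n))
    a_n = np.sqrt(n)                      # a(n) -> infty while a(n)/n -> 0
    counts[n] = int(np.sum(eta0 > a_n))   # N(n), roughly n/a(n) = sqrt(n)
```

Here `counts[10_000]` is close to $100$ and `counts[1_000_000]` close to $1000$, so the number of usable extreme events grows with $n$ even though the threshold diverges.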
[@eng2012a] consider Hüsler-Reiss distributions [@hue1989; @kab2011] and obtain their limiting results by conditioning on certain extremal events $A\subset E$. They show that various choices of $A$ are sensible in the Hüsler-Reiss case, leading to different limiting distributions of the increments of $\eta$. In case $\xi$ is a Brown-Resnick process and $A =
(1,\infty)\times [0, \infty)^{k}$ the assertions of Theorem \[theo\_cond\_increments\_general\] and [@eng2012a Thm. 3.3] coincide.
A commonly used class of
stationary yet non-ergodic max-stable processes on ${\mathbb R}^d$ is defined by $$\begin{aligned}
\label{schlather_model}
\xi(t) = \max_{i\in{\mathbb N}} U_i Y_i(t), \quad t\in{\mathbb R}^d,\end{aligned}$$ where $\sum_{i\in{\mathbb N}} \delta_{U_i}$ is a Fréchet point process, $Y_i(t)=\max(0, \tilde Y_i(t))$, $i \in {\mathbb N}$, and the $\tilde Y_i$ are i.i.d. stationary, centered Gaussian processes with $\sE(\max(0,
\tilde Y_i(t))) =1$ for all $t\in{\mathbb R}^d$ [@sch2002; @bla2011]. Note that in general, a $t_0\in{\mathbb R}^d$ s.t. $Y_i(t_0)=1$ a.s. does not exist, i.e., the process admits representation \[def\_xi2\] but not representation \[def\_xi\]. In particular, for the extremal Gaussian process \[schlather\_model\] we have $p=\sP(V(t_0)=0)=1/2$ and the distribution of the increments in \[density\_increment\] becomes $$\begin{aligned}
\sP(\Delta\mathbf{\tilde{V}}^{(1)} \! \in \rd \mathbf z)
&= \frac12 \sE\Bigl[ Y(t_0) \, \Big|\,
(Y(t_1)/Y(t_0), \ldots, Y(t_k)/Y(t_0)) = \mathbf z,
\, Y(t_0)>0\Bigr]\\
& \qquad \cdot\sP\Bigl( \bigl(Y(t_1)/Y(t_0), \ldots,
Y(t_k)/Y(t_0)\bigr)
\in \rd{\mathbf z}\, \Big|\, Y(t_0)>0 \Bigr).\end{aligned}$$
While the Hüsler-Reiss distribution is already given by the incremental representation \[def\_xi\], cf. [@kab2011], other distributions can be suitably rewritten, provided that the cumulative distribution function and hence the respective exponent measure $\mu$ is known.
\[calculateW\] Let $\Xi = (\xi(t_0),\dots,\xi(t_k))$ be a max-stable process on $T = \{t_0, \ldots, t_k \}$ with standard Fréchet margins and suppose that its exponent measure $\mu$ is concentrated on $(0, \infty) \times
[0, \infty)^{k}$. Define a random vector ${\mathbf W}=(W^{(1)}, \ldots, W^{(k)})$ via its cumulative distribution function $$\begin{aligned}
\label{def_W}
\sP( {\mathbf W}\leq \mathbf{s}) = \mu(B_{\mathbf{s}} \cap A),
\quad \mathbf{s}\in [0,\infty)^{k},
\end{aligned}$$ where $A = (1,\infty)\times [0, \infty)^{k}$ and $B_{\mathbf s} =
\{{\mathbf x}\in [0,\infty)^{k+1}: \, (x^{(1)},\ldots,x^{(k)}) \leq
x^{(0)} {\mathbf s}\}$. Then, $\Xi$ allows for an incremental representation \[def\_xi\] with ${\mathbf W}_i$, $i\in{\mathbb N}$, being independent copies of ${\mathbf W}$.
First, we note that \[def\_W\] indeed defines a valid cumulative distribution function. To this end, consider the measurable transformation $$\begin{aligned}
T: (0, \infty)\times [0, \infty)^{k}
\to (0, \infty)\times [0, \infty)^{k}, \ (x_0,\dots, x_k)
\mapsto \left(x_0, \frac{x_1}{x_0}, \dots, \frac{x_k}{x_0}\right).
\end{aligned}$$ Then, $ T(B_{\mathbf{s}} \cap A) = (1, \infty) \times [{\mathbf 0}, {\mathbf s}]$ and the measure $\mu^T(\cdot) = \mu(T^{-1}((1,\infty)\times \,\cdot\,))$ is a probability measure on $[0, \infty)^{k}$. Since $$\begin{aligned}
\mu(B_{\mathbf{s}} \cap A)
= \mu(T^{-1}((1,\infty)\times[{\mathbf 0}, {\mathbf s}]))
= \mu^T([{\mathbf 0}, {\mathbf s}]),
\end{aligned}$$ the random vector ${\mathbf W}$ is well-defined and has law $\mu^T$.
By definition of the exponent measure, we have $\Xi \eqdist \max_{i
\in {\mathbb N}} {\mathbf X}_i$, where $\Pi = \sum_{i\in{\mathbb N}} \delta_{{\mathbf X}_i}$ is a PPP on $E$ with intensity measure $\mu$. Then, the transformed point process $T\Pi = \sum_{i\in{\mathbb N}} \delta_{(X_i^{(0)},\, X_i^{(1)}/X_i^{(0)},\, \ldots,\, X_i^{(k)} /X_i^{(0)})}$ has intensity measure $$\begin{aligned}
\tilde \mu((c,\infty) \times [{\mathbf 0},{\mathbf s}])
={}& \mu\left(T^{-1}\left( (c,\infty) \times [{\mathbf 0},{\mathbf s}] \right)
\right)\\
={} & \mu(B_{\mathbf{s}} \cap ((c,\infty) \times [0,\infty)^k)) {}
={} c^{-1} \mu(B_{\mathbf{s}} \cap A)
\end{aligned}$$ for any $c > 0$, $\mathbf{s} \in [0,\infty)^k$, where we use the fact that $\mu$, as an exponent measure, has the homogeneity property $c^{-1}\mu(\rd{\mathbf x})=\mu(\rd(c{\mathbf x}))$. Thus, $T\Pi$ has the same intensity as $\sum_{i\in{\mathbb N}} \delta_{(U_i, {\mathbf W}_i)}$, where $\sum_{i\in{\mathbb N}} \delta_{U_i}$ is a Fréchet point process and ${\mathbf W}_i$, $i \in {\mathbb N}$, are i.i.d. vectors with law $\sP({\mathbf W}\leq
\mathbf{s}) = \mu(B_{\mathbf{s}} \cap A)$. Hence, we have $$\begin{aligned}
\Xi\eqdist{}& \max_{i \in {\mathbb N}} T^{-1}\left(\big(X_i^{(0)}, X_i^{(1)} / X_i^{(0)},
\ldots, X_i^{(k)} / X_i^{(0)}\big)\right)\\
\eqdist{}& \max_{i \in {\mathbb N}} T^{-1}\left(\big(U_i,{\mathbf W}_i\big)\right) {}
={} \max_{i \in {\mathbb N}} U_i {\mathbf W}_i,
\end{aligned}$$ which completes the proof.
\[ex:symm\_log\] For $T=\{t_0,\dots,t_k\}$, the symmetric logistic distribution is given by $$\begin{aligned}
\sP(\xi(t_0) \leq x_0,\dots, \xi(t_k) \leq x_k) =
\exp\left[ - \left( x_0^{-q}+ \dots + x_k^{-q}\right)^{1/q} \right],
\label{eq:cdf_symm_log}
\end{aligned}$$ for $x_0,\dots,x_k>0$ and $q > 1$. Hence, the density of the exponent measure is $$\begin{aligned}
\mu(\rd x_0,\dots,\rd x_k) =
\left(\sum_{i=0}^k x_i^{-q}\right)^{1/q -(k+1)}
\left(\prod_{i=1}^k(iq-1)\right)
\prod_{i=0}^k x_i^{-q-1} \rd x_0\dots \rd x_k.
\end{aligned}$$ Applying Proposition \[calculateW\], the incremental process $W$ in the representation \[def\_xi\] is given by $$\begin{aligned}
\sP(W(t_1) \leq s_1, \dots, W(t_k) \leq s_k)
= \left(1 + \sum_{i=1}^k s_i^{-q}\right)^{1/q - 1}.
\end{aligned}$$
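As a quick numerical sanity check, the right-hand side is indeed a distribution function: it is increasing in each argument and tends to $1$ as all arguments grow. The helper `incr_cdf` below is a hypothetical name introduced only for this sketch:

```python
import numpy as np

def incr_cdf(s, q):
    # P(W(t_1) <= s_1, ..., W(t_k) <= s_k) = (1 + sum_i s_i^{-q})^{1/q - 1}
    s = np.asarray(s, dtype=float)
    return float((1.0 + np.sum(s ** (-q))) ** (1.0 / q - 1.0))

q = 2.0
total_mass = incr_cdf([1e8, 1e8], q)   # tends to 1: W is a proper random vector
low = incr_cdf([0.5, 2.0], q)
high = incr_cdf([1.0, 2.0], q)         # monotone in each argument
```

For $q = 2$ one obtains, e.g., $\sP(W(t_1)\leq 1,\, W(t_2)\leq 2) = (1 + 1 + 1/4)^{-1/2} = 2/3$.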
Continuous sample paths
-----------------------
In this subsection, we provide an analogue of Theorem \[theo\_cond\_increments\_general\] in which convergence in the sense of finite-dimensional distributions is replaced by weak convergence on function spaces. In the following, for a Borel set $U\subset{\mathbb R}^d$, we denote by $C(U)$ and $C^+(U)$ the spaces of non-negative and strictly positive continuous functions on $U$, respectively, equipped with the topology of uniform convergence on compact sets.
\[theo\_cond\_increments\_cont\] Let $K$ be a compact subset of ${\mathbb R}^d$ and $\{\eta(t): \ t\in K\}$ be a process with positive and continuous sample paths in the MDA of a max-stable process $\{\xi(t): \ t\in K\}$ as in \[def\_xi\], in the sense of weak convergence on $C(K)$. In particular, suppose that $$\frac 1n \max_{i=1}^n \eta_i(\cdot) \cvgdist \xi(\cdot), \quad
n\to\infty.$$ Let $W$ be the incremental process from and $Z$ a Pareto random variable, independent of $W$. Then, for any sequence $a(n)$ of real numbers with $a(n) \to \infty$, we have the weak convergence on $(0,\infty)\times C(K)$ $$\begin{aligned}
\left(\frac{\eta(t_0)}{a(n)},
\frac{\eta(\cdot)}{\eta(t_0)} \ \Big|\ \eta(t_0) > a(n) \right)
\cvgdist
(Z, W(\cdot)),
\end{aligned}$$ as $n$ tends to $\infty$.
\[weak\_conv\_Rd\] Analogously to [@whi1970 Thm. 5], weak convergence of a sequence of probability measures $P_n$, $n\in{\mathbb N}$, to some probability measure $P$ on $C({\mathbb R}^d)$ is equivalent to weak convergence of $P_n r_j^{-1}$ to $P r_j^{-1}$ on $C([-j, j]^d)$ for all $j\geq 1$, where $r_j :
C({\mathbb R}^d) \to C([-j,j]^d)$ denotes the restriction of a function to the cube $[-j, j]^d$. Hence the assertion of Theorem \[theo\_cond\_increments\_cont\] remains valid if the compact set $K$ is replaced by ${\mathbb R}^d$.
As the process $\xi$ is max-stable and $\eta\in\text{MDA}(\xi)$, similarly to the case of multivariate max-stable distributions (cf. Theorem \[theo\_cond\_increments\_general\]), we have that $$\begin{aligned}
\label{conv_dehaan}
\lim_{u \to \infty} u\sP(\eta / u \in B) = \mu(B)
\end{aligned}$$ for any Borel set $B \subset C(K)$ bounded away from $0^K$, i.e., $\inf\{\sup_{s\in K} f(s) : \ f\in B\} > 0$, and with $\mu(\partial
B) = 0$ [@deh2006a Cor. 9.3.2], where $\mu$ is the *exponent measure* of $\xi$, defined by $$\begin{aligned}
& \sP(\xi(s) \leq x_j, \ s \in K_j, \ j=1,\ldots,m) \nonumber \\
&={} \exp\left[-\mu\left(\left\{ f \in C(K): \ \textstyle\sup_{s \in K_j} f(s) > x_j \textrm{ for some } j \in \{1,\ldots,m\}
\right\}\right)\right]
\end{aligned}$$ for $x_j \geq 0$, | 1 | member_52 |
$K_j \subset K$ compact. Thus, $\mu$ equals the intensity measure of the Poisson point process $\sum_{i \in {\mathbb N}}
\delta_{U_i W_i(\cdot)}$. For $z>0$ and $D\subset C(K)$ Borel, we consider the sets $$\begin{aligned}
A_{z} &= \{f \in C(K): \ f(t_0) > z\}\\
B_D &= \{f \in C(K) : f(\cdot)/f(t_0)\in D\}\end{aligned}$$ and $A=A_1$. Note that $B_D$ is invariant w.r.t. multiplication by any positive constant. Then, as $W(t_0) = 1$ a.s., we have $\mu(A_{z}) = \int_{z}^\infty u^{-2} \sd u = z^{-1}$ and for $s_0\geq 1$ and any Borel set $D \subset C(K)$ with $\sP(W \in \partial D) = 0$, by \[conv\_dehaan\], we get $$\begin{aligned}
&\sP\left\{\eta(t_0) / a(n) > s_0,\ \eta(\cdot)/\eta(t_0) \in D \, \Big|\, \eta(t_0) > a(n) \right\}\\
&= \frac{a(n) \sP\bigl\{\eta(\cdot) / a(n) \in A_{s_0} \cap B_D \cap A\bigr\}}{a(n) \sP\bigl\{\eta(\cdot) / a(n) \in A\bigr\}}\\
&\stackrel{n \to \infty}{\longrightarrow}{} \frac{\mu(B_D \cap A_{s_0})}{\mu(A)}\\
&={} \int_{s_0}^\infty u^{-2}\sP\bigl\{u W(\cdot) \in B_D\bigr\} \sd u\\
&={} s_0^{-1} \sP\bigl\{W(\cdot) \in D\bigr\},
\end{aligned}$$ which is the joint distribution of $Z$ and $W(\cdot)$.
\[BRproc\] For $T={\mathbb R}^d$, $d\geq 1$, let $\{Y(t): \ t\in T\}$ be a centered Gaussian process with stationary increments, continuous sample paths and $Y(t_0) = 0$ for some $t_0\in{\mathbb R}^d$. Note that by [@adl2007 Thm. 1.4.1] it is sufficient for
the continuity of $Y$ that there exist constants $C,\alpha,\delta > 0$, such that $$\begin{aligned}
\sE |Y(s) - Y(t)|^2 \leq \frac{C}{|\log \|s-t\| |^{1+\alpha}}
\end{aligned}$$ for all $s,t\in{\mathbb R}^d$ with $\|s-t\|<\delta$. Further let $\gamma(t) =
\sE(Y(t) - Y(0))^2$ and $\sigma^2(t) = \sE(Y(t))^2$, $t \in {\mathbb R}^d$, denote the variogram and the variance of $Y$, respectively. Then, with a Fréchet point process $\sum_{i\in{\mathbb N}} \delta_{U_i}$ and independent copies $Y_i$ of $Y$, $i\in{\mathbb N}$, the process $$\begin{aligned}
\label{BR_proc}
\xi(t) = \max_{i\in{\mathbb N}} U_i \exp\left(Y_i(t) - \sigma^2(t) / 2\right),
\quad t\in{\mathbb R}^d,
\end{aligned}$$ is stationary and its distribution only depends on the variogram $\gamma$. Comparing with the incremental representation , the distribution of the increments is given by the log-Gaussian random field $W(t) = \exp\left(Y(t) -
\sigma^2(t) / 2\right)$, $t\in{\mathbb R}^d$, and Theorem \[theo\_cond\_increments\_cont\] applies.
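A Monte-Carlo illustration of Theorem \[theo\_cond\_increments\_general\] in this setting: take a toy one-dimensional Brown-Resnick-type field on $T=\{0,1\}$ with $Y(t) = tN$, $N\sim\mathcal N(0,1)$, so that $\sigma^2(t)=t^2$ and $W(1)=\exp(N - 1/2)$ is lognormal with log-mean $-1/2$. Sample sizes, the truncation level and the threshold are arbitrary choices for this Python sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

n_paths, n_pp = 50_000, 50
# Frechet point process per path: U_i = 1/Gamma_i, truncated at n_pp atoms.
gamma = np.cumsum(rng.exponential(size=(n_paths, n_pp)), axis=1)
U = 1.0 / gamma
N = rng.normal(size=(n_paths, n_pp))
xi0 = U[:, 0]                              # xi(0) = max_i U_i, since W_i(0) = 1
xi1 = np.max(U * np.exp(N - 0.5), axis=1)  # xi(1) = max_i U_i W_i(1)

a = np.quantile(xi0, 0.99)                 # high threshold for the conditioning
ratio = xi1[xi0 > a] / xi0[xi0 > a]        # conditional increments
log_mean = float(np.mean(np.log(ratio)))   # should be near log-mean -1/2 of W(1)
```

The empirical mean of $\log(\eta(1)/\eta(0))$ over the exceedances is close to $-1/2$, in agreement with the limiting log-Gaussian increment distribution.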
Mixed moving maxima representation {#M3representation}
==================================
A large and commonly used class of max-stable processes is the class of M3 processes \[def\_M3\]. Let $$\begin{aligned}
\label{pi0}
\Pi_0 = \sum_{i\in{\mathbb N}} \delta_{(U_i,T_i,F_i)}
\end{aligned}$$ be the corresponding PPP on $(0,\infty)\times{\mathbb R}^d\times C({\mathbb R}^d)$ with intensity $u^{-2}\rd u \,\rd t \,\sP_F(\rd f)$. In the sequel, M3 processes are denoted by $$\begin{aligned}
M(t) = \max_{i\in{\mathbb N}} U_i F_i(t- T_i), \quad t\in{\mathbb R}^d.\end{aligned}$$ The finite-dimensional distributions of $M$ are given by $$\begin{aligned}
& \sP(M(t_0)\leq s_0, \ldots, M(t_k)\leq s_k) \notag\\
&= \sP\left[ \Pi_0
\left(\left\{(u,t,f):
\max_{l=0}^k u f(t_l-t)/s_l > 1\right\}\right) = 0\right] \notag\\
&= \exp\left(- \int_{C({\mathbb R}^d)} \int_{{\mathbb R}^d}
\max_{l=0}^k (f(t_l-t)/s_l)\, \rd t \, \sP_F(\rd f) \right),\label{M3_marginal}\end{aligned}$$ $t_0, \ldots, t_k\in{\mathbb R}^d$, $s_0, \ldots, s_k\geq 0$, $k\in{\mathbb N}$.
In Section \[examples\_increment\_representation\], we were interested in recovering the incremental process $W$ from processes in the MDA of a max-stable process with incremental representation. In case of M3 processes, the object of interest is clearly the distribution of the shape function $F$. Thus, in what follows, we provide the corresponding convergence results for processes $\eta$ in the MDA of an M3 process. We distinguish between processes on ${\mathbb R}^d$ with continuous sample paths and processes on a grid (${\mathbb Z}^d$). The main idea is to consider $\eta$ in the neighborhood of its own (local) maximum, conditional on this maximum being large.
Continuous Case
---------------
Let $\{\eta(t): \, t \in {\mathbb R}^d\}$ be strictly positive and in the MDA of a mixed moving maxima process $M$ in the sense of weak convergence in $C({\mathbb R}^d)$. We assume that $\eta$ is normalized such that the norming functions in \[MDA\] are given by $c_n(t) = 1 / n$ and $b_n(t) = 0$, for any $n\in{\mathbb N}$ and $t\in{\mathbb R}^d$. Further suppose that the shape function $F$ of $M$ is sample-continuous and satisfies $$\begin{aligned}
\begin{split}
F({\mathbf 0}) &= \lambda \quad a.s., \\
F(t) & \in [0,\lambda) \ \forall t \in {\mathbb R}^d \setminus \{{\mathbf 0}\} \quad a.s.
\label{eq:Fmaxatorigin}
\end{split}\end{aligned}$$ for some $\lambda > 0$ and $$\int_{{\mathbb R}^d} \sE\left\{ \max_{t_0 \in K} F(t_0 - t) \right\} \sd t < \infty
\label{eq:sup-integrability-cont}$$ for any compact set $K \subset {\mathbb R}^d$. Under these assumptions, there is an analog result to Theorem \[theo\_cond\_increments\_cont\].
\[thm\_conv\_mmm\] Let ${Q}, K \subset {\mathbb R}^d$ be compact such that $\partial {Q}$ is a Lebesgue null set and let $$\tau_{Q}: \ C({Q}) \to {\mathbb R}^d, \ f \mapsto \inf\left( \operatorname*{arg\,max}_{t \in {Q}}
f(t) \right),$$ where “inf” is understood in the lexicographic sense. Then, under the above assumptions, for any Borel set $B \subset C(K)$ with $\sP(F / \lambda \in \partial B) = 0$, and any sequence $a(n)$ with $a(n) \to \infty$ as $n \to \infty$, we have $$\begin{aligned}
& \lim_{\substack{\{{\mathbf}0\} \in L \nearrow {\mathbb R}^d\\ {\rm compact}}}
\limsup_{n \to \infty} \sP\Big\{ \eta\big(\tau_{Q}(\eta|_{Q})+\cdot\big) \big/ \eta(\tau_{Q}(\eta|_{Q})) \in B \ \Big| \\[-1em]
& \hspace{2.5cm} \max_{t\in {Q}}\eta(t) = \max_{t \in {Q}\oplus L} | 1 | member_52 |
\eta(t),
\ \max_{t\in {Q}}\eta(t) \geq a(n)\Big\} \hfill {}={} \hfill \sP\big\{F(\cdot) / \lambda \in B\big\},
\end{aligned}$$ where $\oplus$ denotes morphological dilation, i.e. ${Q}\oplus L = \{q + l : \, q \in {Q}, \ l \in L\}$.
The same result holds true if we replace $\limsup_{n \to \infty}$ by $\liminf_{n \to \infty}$.
First, we consider a fixed compact set $L\subset{\mathbb R}^d$ large enough such that $K \cup \{{\bf 0}\} \subset L$ and define $$\begin{aligned}
A_L = \left\{ f \in C({Q}\oplus L): \ \max_{t\in {Q}}f(t) \geq 1, \ \max_{t\in {Q}}f(t) = \max_{t \in {Q}\oplus L} f(t)\right\}
\end{aligned}$$ and $$\begin{aligned}
C_B = \left\{f \in C({Q}\oplus L):
\ f\big(\tau_{Q}(f|_{Q}) + \,\cdot\,\big) \big/ f(\tau_{Q}(f|_{Q})) \in B\right\}
\end{aligned}$$ for any Borel set $B \subset C(K)$. Note that $C_B$ is invariant w.r.t. multiplication by any positive constant. Thus, we get $$\begin{aligned}
& \sP\Big\{\eta\big(\tau_{Q}(\eta|_{Q}) + \cdot\big)
\big/ \eta(\tau_{Q}(\eta|_{Q})) \in B \ \Big|\ \max_{t\in {Q}}\eta(t) = \max_{t \in {Q}\oplus L} \eta(t) \geq a(n)\Big\}
\nonumber \\
& ={} \sP\big\{\eta / a(n) \in C_B \,\big|\, \eta / a(n) \in A_L\big\}
\nonumber\\
& ={} \frac{a(n) \sP\big\{\eta/a(n) \in C_B,\,\eta/a(n)
\in A_L \big\}}{a(n) \sP\big\{\eta/a(n) \in A_L \big\}}.
\label{eq:expand-cont}
\end{aligned}$$
By [@deh2006 Cor. 9.3.2] and [@res2008 Prop. 3.12] we have $$\begin{aligned}
\limsup_{u \to \infty} u\sP(\eta / u \in C) \leq{}& \mu(C), \quad C \subset C({Q}\oplus L) \text{ closed},\\
\liminf_{u \to \infty} | 1 | member_52 |
u\sP(\eta / u \in O) \geq{}& \mu(O), \quad O \subset C({Q}\oplus L) \text{ open},
\end{aligned}$$ where $C$ and $O$ are bounded away from $0^K$. Here, $\mu$ is the intensity measure of the PPP $\sum_{i \in {\mathbb N}} \delta_{U_i F_i(\,\cdot\, -
T_i)}$ restricted to $C({Q}\oplus L)$. Thus, by adding or removing the boundary, we see that all the limit points of Equation lie in the interval $$\label{eq:liminterval}
\left[ \frac{\mu(C_B \cap A_L) - \mu(\partial (C_B \cap A_L))}{\mu(A_L) + \mu(\partial A_L)}, \frac{\mu(C_B \cap A_L) + \mu(\partial (C_B \cap A_L))}{\mu(A_L) - \mu(\partial A_L)}\right].$$ We note that $A_L$ is closed and the set $$\begin{aligned}
A_L^* ={} & \bigg\{ f \in C({Q}\oplus L): \\[-.5em]
&\quad \, \tau_{Q}(f|_{Q}) \in {Q}^o, \ \max_{t\in {Q}} f(t) > \max\big\{1, f(t)\big\} \ \forall t \in {Q}\oplus L \setminus\{\tau_{Q}(f|_{Q})\} \bigg\}
\end{aligned}$$ is in the interior of $A_L$ (Lemma \[lem:AL\]). Hence, we can estimate $$\begin{aligned}
\mu(\partial A_L) \leq{} & \quad \ \mu(\{f \in C({Q}\oplus L): \ \max_{t\in {Q}}f(t) = 1\}) \notag\\
& + \mu\bigg( \bigg( \{f \in C({Q}\oplus L): \ \tau_{Q}(f|_{Q}) \in \partial {Q}\} \notag\\
& \hspace{1.55cm} \cup \left\{f \in C({Q}\oplus L): \ \operatorname*{arg\,max}_{t \in {Q}\oplus L} f(t) \text{ is not unique}\right\}\bigg) \notag\\
& \qquad \cap \left\{f \in C({Q}\oplus L): \ | 1 | member_52 |
\max_{t\in {Q}}f(t) = \max_{t \in {Q}\oplus L} f(t) \geq 1 \right\}\bigg) \notag \\
\leq{} & 0 + \int_{\partial {Q}} \int_{\lambda^{-1}}^\infty u^{-2} \sd u \sd t_0 \notag\\
& \phantom{0} + \int_{{\mathbb R}^d \setminus ({Q}\oplus L)} \int_{\lambda^{-1}}^\infty u^{-2} \sP\left\{u \max_{t_0 \in {Q}} F(t_0 -x) \geq 1\right\} \sd u \sd x \label{eq:partAL}.
\end{aligned}$$ Here, the equality $\mu(\{f \in C({Q}\oplus L): \ \max_{t\in {Q}}f(t) = 1\}) = 0$ holds as $\max_{t \in {Q}} M(t)$ is Fréchet distributed (cf. [@deh2006 Lemma 9.3.4]). Since $\partial {Q}$ is a Lebesgue null set, the second term on the right-hand side of also vanishes. Thus, $$\begin{aligned}
\mu(\partial A_L) \leq{} & \int_{{\mathbb R}^d \setminus ({Q}\oplus L)} \int_{\lambda^{-1}}^\infty u^{-2} \sP\left\{u \max_{t_0 \in {Q}} F(t_0 -x) \geq 1\right\} \sd u \sd x =: c(L) \label{eq:partAL2}.
\end{aligned}$$
Now, let $B \subset C(K)$ be a Borel set such that $\sP(F / \lambda \in \partial B) = 0$. For the set $C_B$, we obtain that the set $$\begin{aligned}
C_B^* ={} & \bigg\{ f \in C({Q}\oplus L): \ \operatorname*{arg\,max}_{t \in {Q}} f(t) \text{ is unique},\ \frac{f\big(\tau_{Q}(f|_{Q}) + \cdot\big)}{f(\tau_{Q}(f|_{Q}))} \in B^o \bigg\}
\end{aligned}$$ is in the interior of $C_B$ and that the closure of $C_B$ is a subset of $$\begin{aligned}
C_B^* \cup{} & \left\{f \in C({Q}\oplus L): | 1 | member_52 |
\, \operatorname*{arg\,max}_{t \in {Q}} f(t) \text{ is not unique}\right\}\\
\cup{} & \left\{f \in C({Q}\oplus L): \, f\big(\tau_{Q}(f|_{Q}) + \cdot\big) \big/ f(\tau_{Q}(f|_{Q})) \in \partial B\right\}
\end{aligned}$$ (Lemma \[lem:interCB\] and Lemma \[lem:CB\]). Thus, by , we can estimate $$\begin{aligned}
\mu(\partial (C_B \cap A_L)) \leq{} & \mu(\partial A_L) + \mu(\partial C_B \cap A_L) \notag\\
\leq{} & c(L) + \int_{{\mathbb R}^d \setminus ({Q}\oplus L)} \int_{\lambda^{-1}}^\infty u^{-2}
\sP\left\{u \max_{t_0 \in {Q}} F(t_0 -x) \geq 1\right\}\sd u \sd x \notag\\
& \hspace{0.7cm} + \int_{{Q}} \int_{\lambda^{-1}}^\infty u^{-2} \sP(F / \lambda \in \partial B) \sd u \sd t \quad
{}={} \quad 2 c(L). \label{eq:partCB}
\end{aligned}$$
Furthermore, we get $$\begin{aligned}
& \mu(C_B \cap A_L) \nonumber \\
={} & \int_{Q}\int_{\lambda^{-1}}^\infty u^{-2} \sP\Big\{F(\cdot) / \lambda \in B\Big\}
\sd u \sd t_0 \nonumber \\
& + \int_{{\mathbb R}^d \setminus({Q}\oplus L)} \int_{\lambda^{-1}}^\infty u^{-2}
\sP\bigg\{u \max_{t_0 \in {Q}} F(t_0 -x) \geq 1,\ \nonumber \\
& \hspace{2.5cm}
F\left(\Big(\tau_{Q}(F(\cdot-x)|_{Q})\Big)+\cdot-x\right) \Big/ \max_{t_0 \in {Q}} F(t_0-x) \in B,\nonumber \\
& \hspace{2.5cm} F(t-x) / \max_{t_0 \in {Q}} F(t_0-x) \leq 1
\ \forall t \in {Q}\oplus L \bigg\} \sd u \sd x. \label{eq:CB}
\end{aligned}$$ The second term in is positive and can be bounded from above by $c(L)$. Setting $B= C(K)$, $\mu(A_L)$ can be expressed in an analogous way. Now, | 1 | member_52 |
---
abstract: 'We derive the mean squared error convergence rates of kernel density-based plug-in estimators of mutual information measures between two multidimensional random variables $\mathbf{X}$ and $\mathbf{Y}$ for two cases: 1) $\X$ and $\Y$ are both continuous; 2) $\X$ is continuous and $\Y$ is discrete. Using the derived rates, we propose an ensemble estimator of these information measures for the second case by taking a weighted sum of the plug-in estimators with varied bandwidths. The resulting ensemble estimator achieves the $1/N$ parametric convergence rate when the conditional densities of the continuous variables are sufficiently smooth. To the best of our knowledge, this is the first nonparametric mutual information estimator known to achieve the parametric convergence rate for this case, which frequently arises in applications (e.g. variable selection in classification). The estimator is simple to implement as it uses the solution to an offline convex optimization problem and simple plug-in estimators. A central limit theorem is also derived for the ensemble estimator. Ensemble estimators that achieve the parametric rate are also derived for the first case ($\X$ and $\Y$ are both continuous) and another case 3) $\X$ and $\Y$ may have any mixture of discrete and continuous components.'
author:
- 'Kevin | 1 | member_53 |
R. Moon[^1]'
- 'Kumar Sricharan[^2]'
- 'Alfred O. Hero III[^3]'
bibliography:
- 'References.bib'
title: Ensemble Estimation of Mutual Information
---
Introduction {#sec:intro}
============
Mutual information (MI) estimation has many applications in machine learning: MI has been used in fMRI data processing [@chai2009fMRI], structure learning [@structure2016], independent subspace analysis [@pal2010estimation], forest density estimation [@liu2012exponential], clustering [@lewi2006real], neuron classification [@schneidman2003information], and intrinsically motivated reinforcement learning [@mohamed2015variational; @salge2014changing]. Another particularly common application is feature selection or extraction, where features are chosen to maximize the MI between the chosen features $\mathbf{X}$ and the outcome variables $\mathbf{Y}$ [@torkkola2003feature; @vergara2014review; @peng2005feature; @kwak2002input]. In many of these applications, the predictor labels have discrete components (e.g. classification labels) while the input variables have continuous components. To the best of our knowledge, there are currently no nonparametric MI estimators that are known to achieve the parametric mean squared error (MSE) convergence rate $1/N$ when $\X$ and/or $\Y$ contain discrete components. Also, while many nonparametric estimators of MI exist, most can only be applied to specific information measures (e.g. Shannon or Rényi information). In this paper, we provide a framework for
nonparametric estimation of a large class of MI measures where only a finite population of i.i.d. samples is available. We separately consider three cases: 1) $\X$ and $\Y$ are both continuous; 2) $\X$ is continuous and $\Y$ is discrete; 3) $\X$ and $\Y$ may have any mixture of discrete and continuous components. We focus primarily on the second case, which includes the problem of feature selection in classification. We derive an MI estimator for this case that achieves the parametric MSE rate when the conditional densities of the continuous variables are sufficiently smooth. We also show how these estimators are extended to the first and third cases.
Our estimation method applies to other MI measures in addition to Shannon information, several of which have been the focus of much interest. The authors of [@torkkola2003feature] defined an information measure based on a quadratic divergence that could be estimated more efficiently than Shannon information. An MI measure based on the Pearson divergence was considered in [@sugiyama2012machine] for computational efficiency and numerical stability. The authors of [@costa2004geodesic] and [@pal2010estimation] used minimal spanning tree and generalized nearest-neighbor graph approaches, respectively, to estimate Rényi information.
Related Work
------------
Many estimators for Shannon MI between continuous random | 1 | member_53 |
variables have been developed. A popular $k$-nn-based estimator was proposed in [@kraskov2004estimating], which is a modification of the entropy estimator derived in [@kozachenko1987sample]. However, these estimators only achieve the parametric convergence rate when the dimension of each of the random variables is less than 3 [@gao2016demystifying]. Similarly, the Rényi information estimator in [@pal2010estimation] does not achieve the parametric rate. Some other estimators are based on maximum likelihood estimation of the likelihood ratio [@suzuki2008approximating] and minimal spanning trees [@khan2007relative].
Recent work has focused on nonparametric divergence estimation for purely continuous random variables. One approach [@krishnamurthy2014divergence; @kandasamy2015nonparametric; @singh2014exponential; @singh2014renyi] uses an optimal kernel density estimator (KDE) to achieve the parametric convergence rate when the densities are at least $d$ [@singh2014exponential; @singh2014renyi] or $d/2$ [@krishnamurthy2014divergence; @kandasamy2015nonparametric] times differentiable where $d$ is the dimension of the data. These optimal KDEs require knowledge of the density support boundary and are difficult to construct near the boundary. Numerical integration may also be required for estimating some divergence functionals under this approach, which can be computationally expensive. In contrast, our approach to MI estimation does not require numerical integration and can be performed without knowledge of the support boundary.
More closely related work [@sricharan2013ensemble; @moon2014isit; @moon2014nips; @moon2016arxiv; | 1 | member_53 |
@moon2016isit] uses an ensemble approach to estimate entropy or divergence functionals. These works construct an ensemble of simple plug-in estimators by varying the neighborhood size of the density estimators. They then take a weighted average of the estimators where the weights are chosen to decrease the bias with only a small increase in the variance. The parametric rate of convergence is achieved when the densities are either $d$ [@sricharan2013ensemble; @moon2014isit; @moon2014nips] or $(d+1)/2$ [@moon2016arxiv; @moon2016isit] times differentiable. These approaches are simple to implement as they only require simple plug-in estimates and the solution of an offline convex optimization problem. These estimators have also performed well in various applications [@szabo2012distributed; @gliske2015intrinsic; @moon2015Bayes; @moon2015partI; @moon2015partII].
Finally, the authors of [@gao2015efficient] showed that $k$-nn or KDE based approaches underestimate the MI when the MI is large. As MI increases, the dependencies between random variables increase which results in less smooth densities. Thus a common approach to overcome this issue is to require the densities to be smooth [@krishnamurthy2014divergence; @kandasamy2015nonparametric; @singh2014exponential; @singh2014renyi; @sricharan2013ensemble; @moon2014isit; @moon2014nips; @moon2016arxiv; @moon2016isit].
Contributions
-------------
In the context of this related work, we make the following novel contributions in this paper: (1) For continuous random variables (case 1), we extend | 1 | member_53 |
the asymptotic bias and variance results for divergence estimators [@moon2016isit; @moon2016arxiv] to kernel density plug-in MI estimators without boundary correction [@karunamuni2005boundary] by incorporating machinery to handle the dependence between the product of marginal density estimators (Section \[sec:MI\_est\]), (2) we extend the theory to handle discrete random variables in the mixed cases (cases 2 and 3) by reformulating the densities as a mixture of the conditional density of the continuous variables given the discrete variables (Section \[sec:mixed\]), and (3) we leverage this theory for the mixed cases in conjunction with the generalized theory of ensemble estimators [@moon2016arxiv; @moon2016isit] to derive, to the best of our knowledge, the first non-parametric estimator that achieves a parametric rate of MSE convergence of $O\left(1/N\right)$ for the mixed cases (Section \[sec:mixed\_ensemble\]), where $N$ is the number of samples available from each distribution. We also derive a central limit theorem for the ensemble estimators (Section \[subsec:clt\]). We verify the theory through experiments (Section \[sec:experiments\]).
Continuous Random Variables {#sec:MI_est}
===========================
In this section, we obtain MSE convergence rates of plug-in MI estimators when $\X$ and $\Y$ are continuous (case 1 in Section \[sec:intro\]). This will enable us to derive the MSE convergence rates of plug-in MI estimators when | 1 | member_53 |
$\X$ is continuous and $\Y$ is discrete and when $\X$ and $\Y$ may have any mixture of continuous and discrete components (respectively, cases 2 and 3 in Section \[sec:intro\]). These rates can then be used to derive ensemble estimators that achieve the parametric MSE rate. Let $f_{X}(x)$, $f_{Y}(y)$, and $f_{XY}(x,y)$ be $d_{X}$, $d_{Y}$, and $d_{X}+d_{Y}=d$-dimensional densities. Let $g(t_{1},t_{2})=g\left(\frac{t_{1}}{t_{2}}\right)$ (e.g. $g(t_{1},t_{2})=\log(t_{1}/t_{2})$ for Shannon information). We define a family of MIs as $$G_{1}(\mathbf{X};\mathbf{Y})=\int g\left(\frac{f_{X}(x)f_{Y}(y)}{f_{XY}(x,y)}\right)f_{XY}(x,y)dxdy.\label{eq:MI}$$
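For $g=\log$, the functional $G_{1}$ is the negative Shannon MI, $G_{1}(\mathbf{X};\mathbf{Y})=-I(\mathbf{X};\mathbf{Y})$; for a standard bivariate Gaussian with correlation $\rho$ this equals $\tfrac{1}{2}\log(1-\rho^{2})$. A minimal numerical check by midpoint quadrature (our sketch, not part of the paper):

```python
import math

# Check (our sketch): G_1 with g = log for a standard bivariate Gaussian with
# correlation rho equals 0.5 * log(1 - rho^2) = -I(X; Y).
rho = 0.5

def phi(x):                       # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def phi2(x, y):                   # bivariate normal density, correlation rho
    det = 1.0 - rho * rho
    q = (x * x - 2.0 * rho * x * y + y * y) / det
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(det))

h = 0.05                          # midpoint rule on [-5, 5]^2
pts = [-5.0 + h * (i + 0.5) for i in range(200)]
G1 = sum(math.log(phi(x) * phi(y) / phi2(x, y)) * phi2(x, y) * h * h
         for x in pts for y in pts)
# G1 should be close to 0.5 * log(0.75) ~= -0.1438
```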
The KDE Plug-in Estimator
-------------------------
When both $\mathbf{X}$ and $\mathbf{Y}$ are continuous with marginal densities $f_{X}$ and $f_{Y}$, the MI functional $G_{1}(\mathbf{X};\mathbf{Y})$ can be estimated using KDEs. Assume that $N$ i.i.d. samples $\left\{ \mathbf{Z}_{1},\dots,\mathbf{Z}_{N}\right\} $ are available from the joint density $f_{XY}$ with $\mathbf{Z}_{i}=\left(\mathbf{X}_{i},\mathbf{Y}_{i}\right)^{T}$. Let $M=N-1$ and let $h_{X}$, $h_{Y}$ be kernel bandwidths. Let $K_{X}(\cdot)$ and $K_{Y}(\cdot)$ be kernel functions with $||K_{X}||_{\infty},\,||K_{Y}||_{\infty}<\infty$ where $||K||_{\infty}=\sup_{x}|K(x)|$. The KDE for $f_{X}$ is $$\begin{aligned}
\ft X(\mathbf{X}_{j}) & = & \frac{1}{Mh_{X}^{d_{X}}}\sum_{\substack{i=1\\
i\neq j
}
}^{N}K_{X}\left(\frac{\mathbf{X}_{j}-\mathbf{X}_{i}}{h_{X}}\right).\label{eq:fx}\end{aligned}$$ The KDEs $\ft Y(\Y_{j})$ and $\ft Z(\mathbf{X}_{j},\mathbf{Y}_{j})$ (where $h_{Z}=(h_{X},h_{Y})$) for estimating $f_{Y}$ and $f_{XY}$, respectively, are defined similarly using $K_{Y}$ and the product kernel $K_{X}\cdot K_{Y}$. Then $G_{1}(\mathbf{X};\mathbf{Y})$ is estimated as $$\gt=\frac{1}{N}\sum_{i=1}^{N}g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right).\label{eq:Gest}$$
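A direct (unoptimized) numerical sketch of this plug-in estimator for $d_{X}=d_{Y}=1$ with $g(t_{1},t_{2})=\log(t_{1}/t_{2})$ and uniform kernels; the sample model, bandwidth, and all names are illustrative assumptions, not the paper's experimental setup:

```python
import math, random

# O(N^2) sketch (ours) of the leave-one-out KDE plug-in estimator with g = log,
# d_X = d_Y = 1, uniform kernels, and a correlated-Gaussian sample model.
random.seed(0)
N, rho, h = 500, 0.8, 0.5              # sample size, correlation, h_X = h_Y = h

pairs = []
for _ in range(N):
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    pairs.append((u, rho * u + math.sqrt(1 - rho * rho) * v))

def K(u):                              # uniform (boxcar) kernel on [-1/2, 1/2]
    return 1.0 if abs(u) <= 0.5 else 0.0

def kde_marginal(j, coord):            # leave-one-out marginal KDE at sample j
    s = sum(K((pairs[j][coord] - pairs[i][coord]) / h) for i in range(N) if i != j)
    return s / ((N - 1) * h)

def kde_joint(j):                      # leave-one-out joint KDE (product kernel)
    s = sum(K((pairs[j][0] - pairs[i][0]) / h) * K((pairs[j][1] - pairs[i][1]) / h)
            for i in range(N) if i != j)
    return s / ((N - 1) * h * h)

terms = []
for j in range(N):
    fx, fy, fz = kde_marginal(j, 0), kde_marginal(j, 1), kde_joint(j)
    if fz > 0:                         # fz > 0 implies fx, fy > 0 for product kernels
        terms.append(math.log(fx * fy / fz))
gt = sum(terms) / len(terms)           # estimates G_1 = -I(X; Y) for g = log
```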
Convergence Rates
-----------------
To derive the convergence rates of $\gt$ we assume that | 1 | member_53 |
1) $f_{X}$, $f_{Y}$, $f_{XY}$, and $g$ are smooth; 2) $f_{X}$ and $f_{Y}$ have bounded support sets $\mathcal{S}_{X}$ and $\mathcal{S}_{Y}$; 3) $f_{X}$, $f_{Y}$, and $f_{XY}$ are strictly lower bounded on their support sets. More specifically, we assume that the densities belong to the bounded Hölder class $\Sigma(s,H)$ (the precise definition is included in the appendices) which implies that the densities are $r=\left\lfloor s\right\rfloor $ times differentiable. These assumptions are comparable to those in similar studies on asymptotic convergence analysis [@moon2016isit; @moon2014nips; @moon2014isit; @singh2014renyi; @singh2014exponential; @sricharan2013ensemble; @krishnamurthy2014divergence; @kandasamy2015nonparametric]. To derive the convergence rates without boundary corrections, we also assume that 4) the boundary of the support set is smooth with respect to the corresponding kernels. The full assumptions are
- $(\mathcal{A}.0)$: The kernels $K_{X}$ and $K_{Y}$ are symmetric product kernels with bounded support.
- $(\mathcal{A}.1)$: There exist constants $\epsilon_{0},\epsilon_{\infty}$ such that $0<\epsilon_{0}\leq f_{X}(x)\leq\epsilon_{\infty}<\infty,\,\forall x\in\mathcal{S}_{X}$, $\epsilon_{0}\leq f_{Y}(y)\leq\epsilon_{\infty},\,\forall y\in\mathcal{S}_{Y}$, and $\epsilon_{0}\leq f_{XY}(x,y)\leq\epsilon_{\infty},\,\forall(x,y)\in\mathcal{S}_{X}\times\mathcal{S}_{Y}$.
- $(\mathcal{A}.2)$: Each of the densities belong to $\Sigma(s,H)$ in the interior of their support sets with $s\geq2$.
- $(\mathcal{A}.3)$: $g\left(t_{1}/t_{2}\right)$ has an infinite number of mixed derivatives with respect to $t_{1}$ and $t_{2}$.
- $(\mathcal{A}.4)$: $\left|\frac{\partial^{k+l}g(t_{1},t_{2})}{\partial t_{1}^{k}\partial t_{2}^{l}}\right|/(k!l!)$, $k,l=0,1,\ldots$ are strictly upper bounded for $\epsilon_{0}\leq t_{1},t_{2}\leq\epsilon_{\infty}$.
- $(\mathcal{A}.5)$: Let $K$ be either $K_{X}$ | 1 | member_53 |
or $K_{Y}$, $\mathcal{S}$ either $\mathcal{S}_{X}$ or $\mathcal{S}_{Y}$, $h$ either $h_{X}$ or $h_{Y}$. Let $p_{x}(u):\mathbb{R}^{d}\rightarrow\mathbb{R}$ be a polynomial in $u$ of order $q\leq r=\left\lfloor s\right\rfloor $ whose coefficients are a function of $x$ and are $r-q$ times differentiable. For any positive integer $t$ $$\int_{x\in\mathcal{S}}\left(\int_{u:K(u)>0,\,x+uh\notin\mathcal{S}}K(u)p_{x}(u)du\right)^{t}dx=v_{t}(h),$$ where $v_{t}(h)$ admits the expansion $$v_{t}(h)=\sum_{i=1}^{r-q}e_{i,q,t}h^{i}+o\left(h^{r-q}\right),$$ for some constants $e_{i,q,t}$.
Assumption $(\mathcal{A}.5)$ states that the support of the density is smooth with respect to the kernel $K$ in the sense that the expectation with respect to any random variable $u$ of the area of the kernel that falls outside the support $\mathcal{S}$ is a smooth function of the bandwidth $h$ provided that the distribution function $p_{x}(u)$ of $u$ is smooth (e.g. $s\geq2$). The inner integral captures this expectation while the outer integral averages this inner integral over all points near the boundary of the support. The $v_{t}(h)$ term captures the fact that the smoothness of this expectation is proportional to the smoothness of the function $p_{x}(u)$. As an example, this smoothness assumption is satisfied when the support is rectangular and the kernel is the uniform rectangular kernel [@moon2016arxiv; @moon2016isit]. Note that this boundary assumption does not result in parametric convergence rates for the plug-in estimator $\gt$, which | 1 | member_53 |
is in contrast with the boundary assumptions in [@singh2014exponential; @singh2014renyi; @krishnamurthy2014divergence; @kandasamy2015nonparametric]. However, the estimators in [@singh2014exponential; @singh2014renyi; @krishnamurthy2014divergence; @kandasamy2015nonparametric] perform boundary correction, which requires knowledge of the density support boundary and complex calculations at the boundary in addition to the boundary assumptions, to achieve the parametric convergence rates. In contrast, we use ensemble methods to improve the resulting convergence rates of $\gt$ without boundary correction.
\[thm:bias\](Bias) Under assumptions $\mathcal{A}.0-\mathcal{A}.5$ and for general $g$, the bias of $\gt$ is $$\begin{aligned}
\bias\left[\gt\right] & = & \sum_{\substack{j=0\\
i+j\neq0
}
}^{r}\sum_{i=0}^{r}c_{10,i,j}h_{X}^{i}h_{Y}^{j}+\frac{c_{11}}{Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}}\nonumber \\
& & +O\left(h_{X}^{s}+h_{Y}^{s}+\frac{1}{Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}}\right).\label{eq:bias1}\end{aligned}$$ If $g\left(t_{1},t_{2}\right)$ also has $j,l$-th order mixed derivatives $\frac{\partial^{j+l}}{\partial t_{1}^{j}\partial t_{2}^{l}}$ that depend on $t_{1}$ and $t_{2}$ only through $t_{1}^{\alpha_{1}}t_{2}^{\alpha_{2}}$ for some $\alpha_{1},\alpha_{2}\in\mathbb{R}$ for each $1\leq j,l\leq\lambda$, the bias of $\gt$ is $$\begin{aligned}
& & \lefteqn{\bias\left[\gt\right]}\nonumber \\
& & =\sum_{\substack{m,n=0\\
i+j+m+n\neq0
}
}^{\left\lfloor \lambda/2\right\rfloor }\sum_{i,j=0}^{r}c_{11,j,i,m,n}\frac{h_{X}^{i}h_{Y}^{j}}{\left(Nh_{X}^{d_{X}}\right)^{m}\left(Nh_{Y}^{d_{Y}}\right)^{n}}\nonumber \\
& & +\sum_{m=1}^{\left\lfloor \lambda/2\right\rfloor }\sum_{i=0}^{r}\sum_{j=0}^{r}c_{13,m,i,j}h_{X}^{i}h_{Y}^{j}/\left(Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}\right)^{m}\nonumber \\
& & +O\left(h_{X}^{s}+h_{Y}^{s}+1/\left(Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}\right)^{\lambda/2}\right).\label{eq:bias2}\end{aligned}$$
The constants in both (\[eq:bias1\]) and (\[eq:bias2\]) depend only on the densities and their derivatives, the functional $g$ and its derivatives, and the kernels. They are independent of $N,$ $h_{X}$, and $h_{Y}.$
The purpose of Theorem \[thm:bias\] is two-fold. First, we use Theorem \[thm:bias\] to derive the bias expressions for the MI plug-in | 1 | member_53 |
estimators when $\mathbf{X}$ and $\mathbf{Y}$ may have a mixture of discrete and continuous components (cases 2 and 3) in Section \[sec:mixed\]. Second, in conjunction with Theorem \[thm:variance\], which follows, the results in Theorem \[thm:bias\] can be used to derive MI ensemble estimators in Appendix \[subsec:cont\_ensemble\] that achieve the parametric MSE convergence rate when the densities are sufficiently smooth. The expression in (\[eq:bias2\]) enables us to achieve the parametric rate under less restrictive smoothness assumptions on the densities ($s>d/2$ for (\[eq:bias2\]) compared to $s\geq d$ for (\[eq:bias1\])). The extra condition required on the mixed derivatives of $g$ to obtain the expression in (\[eq:bias2\]) is satisfied, for example, for Shannon and Rényi information measures.
\[thm:variance\](Variance) If the functional $g$ is Lipschitz continuous in both of its arguments with Lipschitz constant $C_{g}$, then the variance of $\gt$ satisfies $$\var\left[\gt\right]\leq\frac{22C_{g}^{2}||K_{X}\cdot K_{Y}||_{\infty}^{2}}{N}.$$
Similar to Theorem \[thm:bias\], Theorem \[thm:variance\] is used to derive variance expressions for the MI plug-in estimators under cases 2 and 3. Theorem \[thm:variance\] is also necessary to derive optimally weighted ensemble estimators. The proofs of Theorems \[thm:bias\] and \[thm:variance\] are similar to the proofs of the bias and variance results for the divergence functional estimators in [@moon2016arxiv]. The primary difference is in | 1 | member_53 |
handling certain products of the marginal KDEs that appear in the expansion of the MSE. See Appendix \[sec:biasProof\] and \[sec:VarProof\] for details.
Theorems \[thm:bias\] and \[thm:variance\] indicate that for the MSE of the plug-in estimator to go to zero for case 1, we require $h_{X},h_{Y}\rightarrow0$ and $Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}\rightarrow\infty$. The Lipschitz assumption on $g$ is comparable to that of other nonparametric estimators of distributional functionals [@kandasamy2015nonparametric; @singh2014exponential; @singh2014renyi; @moon2016arxiv; @krishnamurthy2014divergence]. Specifically, assumption $\mathcal{A}.1$ ensures that functionals such as those for Shannon and Rényi information are Lipschitz on the interval $[\epsilon_{0},\epsilon_{\infty}]$.
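To see why an ensemble is needed, consider a rough balancing of the leading terms in (\[eq:bias1\]) with $h_{X}=h_{Y}=h$ (a back-of-envelope sketch of ours, assuming the first-order bias term dominates):

```latex
\bias\left[\gt\right] = \Theta\!\left(h + \frac{1}{N h^{d}}\right)
\quad\Longrightarrow\quad
h^{*} \propto N^{-1/(d+1)},
\qquad
\bias\left[\gt\right] = O\!\left(N^{-1/(d+1)}\right).
```

Combined with the $O(1/N)$ variance of Theorem \[thm:variance\], the plug-in MSE is then $O\left(N^{-2/(d+1)}\right)$, which degrades quickly with $d$; removing the low-order bias terms by ensemble weighting recovers the $1/N$ rate.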
Mixed Random Variables {#sec:mixed}
======================
In this section, we extend the results of Section \[sec:MI\_est\] to MI estimation when $\mathbf{X}$ and $\mathbf{Y}$ may have a mixture of discrete and continuous components. For simplicity, we focus primarily on the important case when $\mathbf{X}$ is continuous and $\mathbf{Y}$ is discrete (case 2 in Section \[sec:intro\]). The more general case when $\X$ and $\Y$ may have any mixture of continuous and discrete components (case 3 in Section \[sec:intro\]) is discussed in Section \[subsec:general\_rates\]. As an example of the former case, if $\mathbf{Y}$ is a predictor variable (e.g. classification labels), then the MI between $\mathbf{X}$ and $\mathbf{Y}$ indicates the value of $\mathbf{X}$ as a predictor of | 1 | member_53 |
$\mathbf{Y}$. Although $\mathbf{Y}$ is discrete, $f_{XY}=f_{Z}$ is also a density. Let $\mathcal{S}_{X}$ be the support of the density $f_{X}$ and $\mathcal{S}_{Y}$ be the support of the probability mass function $f_{Y}$. The MI is $$\begin{aligned}
& \lefteqn{G_{2}\left(\mathbf{X};\mathbf{Y}\right)}\nonumber \\
& = & \sum_{y\in\mathcal{S}_{Y}}\int g\left(\frac{f_{X}(x)f_{Y}(y)}{f_{XY}(x,y)}\right)f_{XY}(x,y)dx\label{eq:MI_cond}\\
& = & \sum_{y\in\mathcal{S}_{Y}}f_{Y}(y)\int g\left(\frac{f_{X}(x)}{f_{X|Y}(x|y)}\right)f_{X|Y}(x|y)dx.\nonumber \end{aligned}$$
Let $\mathbf{N}_{y}=\sum_{i=1}^{N}1_{\left\{ \mathbf{Y}_{i}=y\right\} }$ where $y\in\mathcal{S}_{Y}$. Let $\ft X$ be as in (\[eq:fx\]) and define $\mathcal{X}_{y}=\left\{ \mathbf{X}_{i}\in\left\{ \mathbf{X}_{1},\dots,\mathbf{X}_{N}\right\} |\mathbf{Y}_{i}=y\right\} $. Then if $\mathbf{X}_{i}\in\mathcal{X}_{y}$, the KDE of $f_{X|Y}(x|y)$ is $$\begin{aligned}
\ft{X|y}(\mathbf{X}_{i}) & = & \frac{1}{\left(\mathbf{N}_{y}-1\right)h_{X|y}^{d_{X}}}\sum_{\substack{\mathbf{X}_{j}\in\mathcal{X}_{y}\\
i\neq j
}
}K_{X}\left(\frac{\mathbf{X}_{i}-\mathbf{X}_{j}}{h_{X|y}}\right).\end{aligned}$$ We define the plug-in estimator $\g{h_{X},h_{X|Y}}$ of (\[eq:MI\_cond\]) as $$\begin{aligned}
\g{h_{X},h_{X|y}} & =\frac{1}{\mathbf{N}_{y}}\sum_{\mathbf{X}\in\mathcal{X}_{y}}g\left(\ft X(\mathbf{X})/\ft{X|y}(\mathbf{X})\right),\nonumber \\
\implies\g{h_{X},h_{X|Y}} & =\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}\g{h_{X},h_{X|y}}.\label{eq:mixed_est}\end{aligned}$$
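A compact numerical sketch of this mixed-case estimator for continuous $X$ ($d_{X}=1$), binary $Y$, $g=\log$, and uniform kernels; the sample model $X\,|\,Y=y \sim N(y,1)$, bandwidths, and $\beta$ are illustrative assumptions:

```python
import math, random

# Sketch (ours) of the mixed-case plug-in estimator with a binary label Y.
random.seed(2)
N = 600
data = [(random.gauss(float(y), 1.0), y)
        for y in (random.randint(0, 1) for _ in range(N))]

def K(u):                                   # uniform kernel on [-1/2, 1/2]
    return 1.0 if abs(u) <= 0.5 else 0.0

def loo_kde(j, idx, h):                     # leave-one-out KDE at sample j over idx
    s = sum(K((data[j][0] - data[i][0]) / h) for i in idx if i != j)
    return s / ((len(idx) - 1) * h)

h_X, beta = 0.4, 0.25                       # beta < 1/d_X as required above
all_idx = list(range(N))
G = 0.0
for y in (0, 1):
    idx_y = [i for i in all_idx if data[i][1] == y]
    h_y = len(idx_y) ** (-beta)             # h_{X|y} = N_y^{-beta}
    s, used = 0.0, 0
    for j in idx_y:
        f_x = loo_kde(j, all_idx, h_X)      # estimate of f_X
        f_xy = loo_kde(j, idx_y, h_y)       # estimate of f_{X|Y=y}
        if f_x > 0 and f_xy > 0:
            s += math.log(f_x / f_xy)
            used += 1
    G += (len(idx_y) / N) * (s / used)      # N_y / N weighting
# G estimates G_2 = -I(X; Y) for g = log
```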
Convergence Rates {#subsec:mixed_conv}
-----------------
To apply the theory of optimally weighted ensemble estimation to $\g{h_{X},h_{X|Y}}$, we need to know its MSE as a function of the bandwidths and the sample size.
\[thm:bias\_mixed\](Bias) Assume that assumptions $\mathcal{A}.0-\mathcal{A}.5$ apply to the functional $g$, the kernel $K_{X}$, and the densities $f_{X}$ and $f_{X|Y}$. Assume that $\mathbf{h}_{X|y}=l\mathbf{N}_{y}^{-\beta}$ with $0<\beta<\frac{1}{d_{X}}$ and $l$ a positive number. Then the bias of $\g{h_{X},h_{X|Y}}$ is $$\begin{aligned}
& \lefteqn{\bias\left[\g{h_{X},h_{X|Y}}\right]}\nonumber \\
& =\sum_{\substack{j=0\\
i+j\neq0
}
}^{r}\sum_{i=0}^{r}c_{13,i,j}h_{X}^{i}l^{j}N^{-j\beta}+\frac{c_{14,X}}{Nh_{X}^{d_{X}}}+\frac{c_{14,Y}}{l^{d_{X}}N^{1-\beta d_{X}}}\nonumber \\
& +O\left(h_{X}^{s}+N^{-s\beta}+\frac{1}{Nh_{X}^{d_{X}}}+\frac{1}{N^{1-\beta d_{X}}}\right).\label{eq:bias_mixed1}\end{aligned}$$ If $g\left(t_{1},t_{2}\right)$ also has $j,l$-th order mixed derivatives $\frac{\partial^{j+l}}{\partial t_{1}^{j}\partial t_{2}^{l}}$ that depend on $t_{1}$ and | 1 | member_53 |
$t_{2}$ only through $t_{1}^{\alpha_{1}}t_{2}^{\alpha_{2}}$ for some $\alpha_{1},\alpha_{2}\in\mathbb{R}$ for each $1\leq j,l\leq\lambda$, then the bias is $$\begin{aligned}
& \lefteqn{\bias\left[\g{h_{X},h_{X|Y}}\right]}\nonumber \\
& =\sum_{\substack{m,n=0\\
i+j+m+n\neq0
}
}^{\left\lfloor \lambda/2\right\rfloor }\sum_{i,j=0}^{r}c_{14,j,i,m,n}\frac{h_{X}^{i}l^{j}N^{-j\beta}}{\left(Nh_{X}^{d_{X}}\right)^{m}\left(l^{d_{X}}N^{1-\beta d_{X}}\right)^{n}}\nonumber \\
& +O\left(h_{X}^{s}+N^{-s\beta}+\frac{1}{\left(Nh_{X}^{d_{X}}\right)^{\lambda/2}}+\frac{1}{\left(N^{1-\beta d_{X}}\right)^{\lambda/2}}\right).\label{eq:bias_mixed2}\end{aligned}$$
We focus on (\[eq:bias\_mixed1\]) as (\[eq:bias\_mixed2\]) follows similarly. It can be shown that $$\bias\left[\g{h_{X},h_{X|Y}}\right]=\bE\left[\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}\bias\left[\left.\g{h_{X},h_{X|y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]\right].$$ The conditional bias of $\g{h_{X},h_{X|y}}$ given $\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}$ can then be obtained from Theorem \[thm:bias\] as $$\begin{aligned}
& \lefteqn{\bias\left[\left.\g{h_{X},h_{X|y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]}\\
& =\sum_{\substack{i,j=0\\
i+j\neq0
}
}^{r}c_{10,i,j}h_{X}^{i}\mathbf{h}_{X|y}^{j}\\
& +O\left(h_{X}^{s}+\mathbf{h}_{X|y}^{s}+\frac{1}{\mathbf{N}_{y}h_{X}^{d_{X}}}+\frac{1}{\mathbf{N}_{y}\mathbf{h}_{X|y}^{d_{X}}}\right).\end{aligned}$$ Then, given that $\mathbf{h}_{X|y}\propto\mathbf{N}_{y}^{-\beta}$, (\[eq:mixed\_est\]) gives terms of the form $\mathbf{N}_{y}^{1-\gamma}$ with $\gamma>0$. $\mathbf{N}_{y}$ is a binomial random variable with parameter $f_{Y}(y)$, $N$ trials, and mean $Nf_{Y}(y)$. Thus we need to compute the fractional moments of a binomial random variable. By the generalized binomial theorem, we have that $$\begin{aligned}
\mathbf{N}_{y}^{\alpha} & =\left(\mathbf{N}_{y}-Nf_{Y}(y)+Nf_{Y}(y)\right)^{\alpha}\nonumber \\
& =\sum_{i=0}^{\infty}\left(\begin{array}{c}
\alpha\\
i
\end{array}\right)\left(Nf_{Y}(y)\right)^{\alpha-i}\left(\mathbf{N}_{y}-Nf_{Y}(y)\right)^{i},\nonumber \\
& \lefteqn{\implies\bE\left[\mathbf{N}_{y}^{\alpha}\right]}\nonumber \\
& =\sum_{i=0}^{\infty}\left(\begin{array}{c}
\alpha\\
i
\end{array}\right)\left(Nf_{Y}(y)\right)^{\alpha-i}\bE\left[\left(\mathbf{N}_{y}-Nf_{Y}(y)\right)^{i}\right].\label{eq:fractional_moment}\end{aligned}$$ From [@riordan1937moment], the $i$-th central moment of $\mathbf{N}_{y}$ has the form $$\bE\left[\left(\mathbf{N}_{y}-Nf_{Y}(y)\right)^{i}\right]=\sum_{n=0}^{\left\lfloor i/2\right\rfloor }c_{n,i}(f_{Y}(y))N^{n}.$$ Thus $\bE\left[\mathbf{N}_{y}^{1-\gamma}\right]$ has terms proportional to $N^{1-\gamma-i+n}\leq N^{1-\gamma-\left\lfloor i/2\right\rfloor }$ for $i=0,1,\dots$ since $n\leq\left\lfloor i/2\right\rfloor $. Then, since there is an $N$ in the denominator of (\[eq:mixed\_est\]), this leaves terms of the form $N^{-\gamma}$ when $i=0,1$ and $N^{-1}$ for $i\geq2$. This completes the proof for the
bias. See Appendix \[sec:MixedProofs\] for more details.
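The fractional-moment expansion in (\[eq:fractional\_moment\]) is easy to check numerically. The sketch below (the parameter values ${N=1000}$, ${f_Y(y)=0.4}$, ${\alpha=0.7}$ and the truncation depth are illustrative) compares a six-term truncation of the series, using exact central moments, against the directly summed expectation $\bE[\mathbf{N}_{y}^{\alpha}]$:

```python
import math

def pmf(n, k, p):
    """Binomial pmf computed via log-gamma to avoid underflow for large n."""
    return math.exp(math.lgamma(n + 1) - math.lgamma(k + 1)
                    - math.lgamma(n - k + 1)
                    + k * math.log(p) + (n - k) * math.log(1 - p))

def gen_binom(alpha, i):
    """Generalized binomial coefficient C(alpha, i) for real alpha."""
    out = 1.0
    for j in range(i):
        out *= (alpha - j) / (j + 1)
    return out

def frac_moment_direct(n, p, alpha):
    """E[N_y^alpha] summed directly over the binomial distribution."""
    return sum(k ** alpha * pmf(n, k, p) for k in range(n + 1))

def frac_moment_series(n, p, alpha, terms=6):
    """Truncation of (eq:fractional_moment): expand around the mean N*p and
    weight the exact central moments by generalized binomial coefficients."""
    mu = n * p
    central = [sum((k - mu) ** i * pmf(n, k, p) for k in range(n + 1))
               for i in range(terms)]
    return sum(gen_binom(alpha, i) * mu ** (alpha - i) * central[i]
               for i in range(terms))

n, p, alpha = 1000, 0.4, 0.7
direct = frac_moment_direct(n, p, alpha)
series = frac_moment_series(n, p, alpha)
```

For large $N$ the two values should agree closely, consistent with the higher-order series terms being negligible.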
\[thm:var\_mixed\]If the functional $g$ is Lipschitz continuous in both of its arguments and $\mathcal{S}_{Y}$ is finite, then the variance of $\g{h_{X},h_{X|Y}}$ is $O(1/N)$.
By the law of total variance, we have $$\begin{aligned}
\var\left[\g{h_{X},h_{X|Y}}\right] & =\bE\left[\var\left[\left.\g{h_{X},h_{X|Y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]\right]\\
& +\var\left[\bE\left[\left.\g{h_{X},h_{X|Y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]\right].\end{aligned}$$ Given all of the $\mathbf{Y}_{i}$’s, the estimators $\g{h_{X},h_{X|y}}$ are all independent since they use different sets of $\mathbf{X}_{i}$’s for each $y$. From Theorem \[thm:variance\], we know that $\var\left[\left.\g{h_{X},h_{X|Y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]=O\left(\sum_{y\in\mathcal{S}_{Y}}\mathbf{N}_{y}/N^{2}\right)$. Taking the expectation then yields $O(1/N)$.
For the second term, we know from the proof of Theorem \[thm:bias\_mixed\] that $\bE\left[\left.\g{h_{X},h_{X|Y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]$ yields a sum of terms of the form $\mathbf{N}_{y}^{\gamma}/N$ for $0<\gamma\leq1$. Taking the variance of the sum of these terms yields a sum of terms of the form $\var\left[\mathbf{N}_{y}^{\gamma}\right]/N^{2}$ (the covariance terms can be bounded by the Cauchy–Schwarz inequality to yield similar terms). Then $\var\left[\mathbf{N}_{y}^{\gamma}\right]$ can be bounded by taking a Taylor series expansion of the functions $\mathbf{N}_{y}^{\gamma}$ and $\mathbf{N}_{y}^{2\gamma}$ at the point $Nf_{Y}(y)$, which yields an expression that depends on the central moments of $\mathbf{N}_{y}$. From this, we obtain $\var\left[\mathbf{N}_{y}^{\gamma}\right]=O(N)$, which completes the proof. See Appendix \[sec:MixedProofs\] for details.
Theorems \[thm:bias\_mixed\] and \[thm:var\_mixed\] provide exact expressions for the bias and bounds on the variance of the | 1 | member_53 |
plug-in MI estimator, respectively. It is shown in Section \[sec:mixed\_ensemble\] that the MSE of the plug-in estimator converges very slowly to zero under this setting. However, Theorems \[thm:bias\_mixed\] and \[thm:var\_mixed\] provide us with the necessary information for applying the theory of optimally weighted ensemble estimation to obtain estimators with improved rates. This is done in Section \[sec:mixed\_ensemble\].
Extension to Other Cases {#subsec:general_rates}
------------------------
The results in Section \[subsec:mixed\_conv\] can be extended to the case where $\mathbf{X}$ and/or $\mathbf{Y}$ may have a mixture of continuous and discrete components (case 3 in Section \[sec:intro\]). This scenario can be divided further into three different cases: A) $\X$ is continuous and $\Y$ has a mixture of discrete and continuous components; B) $\X$ and $\Y$ both have a mixture of discrete and continuous components; C) $\Y$ is discrete and $\X$ has a mixture of discrete and continuous components. Consider case A first. Denote the discrete and continuous components of $\mathbf{Y}$ as $\mathbf{Y}_{1}$ and $\mathbf{Y}_{2}$, respectively. Denote the respective support sets as $\mathcal{S}_{Y_{1}}$ and $\mathcal{S}_{Y_{2}}$. We can then write $$\begin{aligned}
& \lefteqn{G_{3A}(\mathbf{X};\mathbf{Y})}\nonumber \\
& =\sum_{y_{1}\in\mathcal{S}_{Y_{1}}}\int g\left(\frac{f_{X}(x)f_{Y}(y_{1},y_{2})}{f_{XY}(x,y_{1},y_{2})}\right)f_{XY}(x,y_{1},y_{2})dxdy_{2}\nonumber \\
& =\sum_{y_{1}\in\mathcal{S}_{Y_{1}}}f_{Y_{1}}(y_{1})\int g\left(\frac{f_{X}(x)f_{Y_{2}|Y_{1}}(y_{2}|y_{1})}{f_{XY_{2}|Y_{1}}(x,y_{2}|y_{1})}\right)\nonumber \\
& \times f_{XY_{2}|Y_{1}}(x,y_{2}|y_{1})dxdy_{2}.\label{eq:mixed_general}\end{aligned}$$ The subscript $3A$ indicates that we are considering case A under the | 1 | member_53 |
third case described in the introduction. The expression in (\[eq:mixed\_general\]) is very similar to the expression in (\[eq:MI\_cond\]). After plugging in KDEs for the corresponding densities and conditional densities, a nearly identical procedure to that in Section \[subsec:mixed\_conv\] can be followed to derive the bias and variance of the corresponding plug-in estimator.
Now consider case B. Denote the discrete and continuous components of $\mathbf{X}$ as $\mathbf{X}_{1}$ and $\mathbf{X}_{2}$, respectively. If $\mathbf{Y}_{1}$ is the discrete component of $\mathbf{Y}$, then the expression inside the $g$ functional in (\[eq:mixed\_general\]) includes $f_{X_{1}}(x_{1})f_{Y_{1}}(y_{1})/f_{X_{1}Y_{1}}(x_{1},y_{1})$. Thus the plug-in estimator must include estimators for $f_{X_{1}}(x_{1}),$ $f_{Y_{1}}(y_{1})$, and $f_{X_{1}Y_{1}}(x_{1},y_{1})$. Define $\mathbf{N}_{y_{1}}=\sum_{i=1}^{N}1_{\{\mathbf{Y}_{1,i}=y_{1}\}}$ where $\mathbf{Y}_{1,i}$ is the discrete component of $\mathbf{Y}_{i}$. Then the estimator we use for $f_{Y_{1}}(y_{1})$ is $\mathbf{N}_{y_{1}}/N$. The estimators for $f_{X_{1}}(x_{1})$ and $f_{X_{1}Y_{1}}(x_{1},y_{1})$ are defined similarly. The bias and variance expressions of this plug-in estimator can then be derived with some slight modifications of Theorems \[thm:bias\] and \[thm:variance\]. See Appendix \[subsec:generalCase\] for an expression for $G_{3B}(\mathbf{X};\mathbf{Y})$ in this case and a sketch of these modifications. Case C follows similarly as the expression inside the $g$ functional in (\[eq:mixed\_general\]) includes $f_{X_{1}}(x_{1})f_{Y}(y)/f_{X_{1}Y}(x_{1},y)$ where all the terms are probability mass functions.
The resulting bias and variance expressions in these settings | 1 | member_53 |
are analogous to those in Theorems \[thm:bias\], \[thm:variance\], and \[thm:bias\_mixed\] as the variance will be $O(1/N)$ and the bias will depend on expansions of the bandwidths for the various KDEs. Ensemble methods can then be applied to improve the MSE convergence rates as described in the next section.
Ensemble Estimation of MI\[sec:mixed\_ensemble\]
================================================
Mixed Random Variables {#subsec:mixed_ensemble}
----------------------
We again focus on the case where $\mathbf{X}$ is continuous and $\mathbf{Y}$ is discrete (case 2 in Section \[sec:intro\]). If no bias correction is performed, then Theorem \[thm:bias\_mixed\] shows that the optimal bias rate of the plug-in estimator $\g{h_{X},h_{X|Y}}$ is $O\left(1/N^{1/(d_{X}+1)}\right)$, which converges very slowly to zero when $d_{X}$ is not small. We use the theory of optimally weighted ensemble estimation to improve this rate. An ensemble of estimators is formed by choosing different bandwidth values. Consider first the case where (\[eq:bias\_mixed1\]) applies. Let $\mathcal{L}$ be a set of real positive numbers with $|\mathcal{L}|=L$. This set will parameterize the bandwidths for $\ft X$ and $\ft{X|y}$ resulting in $L$ estimators in the ensemble. While different parameter sets for $\ft X$ and $\ft{X|y}$ can be chosen, we only use one set here for simplicity of exposition. To ensure that the final terms in (\[eq:bias\_mixed1\]) | 1 | member_53 |
are $O(1/\sqrt{N})$ when $s\geq d_{X}$, for each estimator in the ensemble we choose $h_{X}(l)=lN^{-1/(2d_{X})}$ and $\mathbf{h}_{X|y}(l)=l\mathbf{N}_{y}^{-1/(2d_{X})}$, where $l\in\mathcal{L}$. Define $w$ to be a weight vector parameterized by $l\in\mathcal{L}$ with $\sum_{l\in\mathcal{L}}w(l)=1$ and define $$\g{w,1}=\sum_{l\in\mathcal{L}}w(l)\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}\g{h_{X}(l),h_{X|y}(l)}.\label{eq:ensemble}$$ From Theorem \[thm:bias\_mixed\], the bias of $\g{w,1}$ is $$\begin{aligned}
\bias\left[\g{w,1}\right] & =\sum_{l\in\mathcal{L}}\sum_{i=1}^{r}\theta\left(w(l)l^{i}N^{\frac{-i}{2d_{X}}}\right)\nonumber \\
& +O\left(\sqrt{L}||w||_{2}\left(N^{\frac{-s}{2d_{X}}}+N^{\frac{-1}{2}}\right)\right),\label{eq:weight_bias}\end{aligned}$$ where we use $\theta$ notation to omit the constants.
We use the general theory of optimally weighted ensemble estimation in [@moon2016isit] to improve the MSE convergence rate of the plug-in estimator by using the weights to cancel the lower order terms in (\[eq:weight\_bias\]). The theory is as follows. Let $\left\{ \hat{\mathbf{E}}_{l}\right\} _{l\in\mathcal{L}}$ be an indexed ensemble of estimators with the weighted ensemble estimator $\hat{\mathbf{E}}_{w}=\sum_{l\in\mathcal{L}}w(l)\hat{\mathbf{E}}_{l}$ satisfying:
- $\mathcal{C}.1$. Let $c_{i}$ be constants depending on the underlying density, $J=\{i_{1},\dots,i_{I}\}$ a finite index set with $I<L$, $\psi_{i}(l)$ basis functions depending only on the parameter $l$ and not on $N$, $\phi_{i}(N)$ functions of the sample size $N$ that are independent of $l$. Assume the bias is $$\bias\left[\hat{\mathbf{E}}_{l}\right]=\sum_{i\in J}c_{i}\psi_{i}(l)\phi_{i}(N)+O\left(\frac{1}{\sqrt{N}}\right).$$
- $\mathcal{C}.2$. Assume the variance is $$\var\left[\hat{\mathbf{E}}_{l}\right]=c_{v}\left(\frac{1}{N}\right)+o\left(\frac{1}{N}\right).$$
[@moon2016isit] \[thm:opt\_weight\]If conditions $\mathcal{C}.1$ and $\mathcal{C}.2$ hold for an ensemble of estimators $\left\{ \hat{\mathbf{E}}_{l}\right\} _{l\in\mathcal{L}}$, then there exists a weight vector $w_{0}$ such that the MSE of $\hat{\mathbf{E}}_{w_{0}}$ attains the | 1 | member_53 |
parametric rate of convergence of $O\left(1/N\right)$. The weight $w_{0}$ is the solution to the offline convex optimization problem *$$\begin{array}{rl}
\min_{w} & ||w||_{2}\\
subject\,to & \sum_{l\in\mathcal{L}}w(l)=1,\\
& \gamma_{w}(i)=\sum_{l\in\mathcal{L}}w(l)\psi_{i}(l)=0,\,i\in J.
\end{array}\label{eq:optimize}$$*
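The minimizer of the equality-constrained program above has a closed form: stacking the constraints ${\sum_{l}w(l)=1}$ and ${\gamma_{w}(i)=0}$ into a linear system ${Aw=b}$, the minimum-$\ell_2$-norm solution is given by the Moore–Penrose pseudoinverse. A minimal sketch (the parameter grid and index set $J$ are illustrative; the paper's experiments use 40 values in $[1.2,3]$):

```python
import numpy as np

# Candidate bandwidth parameters l and the index set J of bias terms to cancel.
l_vals = np.linspace(1.2, 3.0, 40)
J = range(1, 5)

# Stack the equality constraints of (eq:optimize):
#   sum_l w(l) = 1   and   gamma_w(i) = sum_l w(l) * l**i = 0 for i in J.
A = np.vstack([np.ones_like(l_vals)] + [l_vals ** i for i in J])
b = np.zeros(A.shape[0])
b[0] = 1.0

# For a full-row-rank underdetermined system, the pseudoinverse returns the
# exact solution of A w = b with minimum l2 norm, i.e. the w_0 of (eq:optimize).
w0 = np.linalg.pinv(A) @ b
```

The relaxed variant in (\[eq:relaxed\]) replaces the exact cancellation ${\gamma_w(i)=0}$ by an $\epsilon$-bound and additionally caps $\left\Vert w\right\Vert _{2}^{2}$, trading a little residual bias for lower variance.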
To apply Theorem \[thm:opt\_weight\] to an ensemble of estimators, all $\phi_{i}(N)$ functions that converge to zero slower than $1/\sqrt{N}$ and the corresponding $\psi_{i}(l)$ functions must be known for the base estimator. Otherwise, Theorem \[thm:opt\_weight\] can only be guaranteed to improve the bias up to the slowest unknown bias rate. This theorem was applied in [@moon2016isit] to the problem of divergence functional estimation where the plug-in estimator has slowly converging bias but the resulting ensemble estimator achieves the parametric rate for sufficiently smooth densities.
We apply Theorem \[thm:opt\_weight\] to the ensemble estimator $\g{w,1}$ as conditions $\mathcal{C}.1$ and $\mathcal{C}.2$ are satisfied with $\phi_{i}(N)=N^{-i/(2d_{X})}$ and $\psi_{i}(l)=l^{i}$ for $i\in\{1,\dots,r\}$ as seen in (\[eq:bias\_mixed1\]) and (\[eq:weight\_bias\]). If $s\geq d_{X}$, then the MSE of the optimally weighted estimator $\g{w_{0},1}$ is $O(1/N)$. A similar approach can be used for the case where $\X$ contains a mixture of continuous and discrete components and $\Y$ is discrete (or vice versa). To the best of our knowledge, these are the first nonparametric estimators to achieve the MSE parametric rate in
this setting of mixed random variables.
If the mixed derivatives of the functional $g$ satisfy the extra condition required for (\[eq:bias\_mixed2\]), we can define an ensemble estimator $\g{w_{0},2}$ that achieves the parametric MSE rate if $s>d_{X}/2$. For simplicity, we focus primarily on $\g{w_{0},1}$. See Appendix \[subsec:Odin2\] for details on $\g{w_{0},2}$.
In practice, the optimization problem in (\[eq:optimize\]) typically results in a very large increase in variance. Thus we follow the lead of [@moon2016arxiv; @moon2014isit; @moon2014nips; @sricharan2013ensemble] and use a relaxed version of (\[eq:optimize\]): $$\begin{array}{rl}
\min_{w} & \epsilon\\
subject\,to & \sum_{l\in\mathcal{L}}w(l)=1,\\
& \left|\gamma_{w}(i)N^{\frac{1}{2}}\phi_{i}(N)\right|\leq\epsilon,\,\,i\in J,\\
& \left\Vert w\right\Vert _{2}^{2}\leq\eta.
\end{array}\label{eq:relaxed}$$ As shown in [@moon2016arxiv; @moon2014isit; @moon2014nips; @sricharan2013ensemble], the ensemble estimator $\g{w_{0},1}$ using the resulting weight vector from the optimization problem in (\[eq:relaxed\]) still achieves the parametric MSE convergence rate under the same assumptions as described previously. It was also shown in [@moon2016arxiv] that the heuristic of setting $\eta=\epsilon$ works well in practice. Algorithm \[alg:estimator\] summarizes the estimator $\g{w_{0},1}$.
$L$ positive real numbers $\mathcal{L}$, samples $\left\{ \mathbf{Z}_{1},\dots,\mathbf{Z}_{N}\right\} $ from $f_{XY}$, dimension $d_{X}$, function $g$, kernel $K_{X}$
The optimally weighted MI estimator $\g{w_{0},1}$
Solve for $w_{0}$ using (\[eq:relaxed\]) with basis functions $\psi_{i}(l)=l^{i}$, $\phi_{i}(N)=N^{-i/(2d_{X})},$ $l\in\mathcal{L}$, and $1\leq i\leq d_{X}$.
$\mathbf{N}_{y}\leftarrow\sum_{i=1}^{N}1_{\{\mathbf{Y}_{i}=y\}}$
$h_{X}(l)\leftarrow lN^{-1/(2d_{X})},$ $\mathbf{h}_{X|y}(l)\leftarrow l\mathbf{N}_{y}^{-1/(2d_{X})}$
Calculate | 1 | member_53 |
$\ftl X(\mathbf{X}_{i})$, $\ftl{X|y}(\mathbf{X}_{i})$ as described in the text
$\g{h_{X}(l),\mathbf{h}_{X|y}(l)}\leftarrow\frac{1}{\mathbf{N}_{y}}\sum_{\mathbf{X}\in\mathcal{X}_{y}}g\left(\frac{\ftl X(\mathbf{X})}{\ftl{X|y}(\mathbf{X})}\right)$
$\g{w_{0},1}\leftarrow\sum_{l\in\mathcal{L}}w_{0}(l)\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}\g{h_{X}(l),\mathbf{h}_{X|y}(l)}$
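The steps of Algorithm \[alg:estimator\] can be sketched compactly. The code below is a simplified illustration (Gaussian product kernel, no leave-one-out correction, and a hand-picked weight vector rather than the solution of the optimization problem), not the exact implementation used in the experiments:

```python
import numpy as np

def kde(points, queries, h):
    """Gaussian product-kernel density estimate evaluated at query points."""
    d = points.shape[1]
    z = (queries[:, None, :] - points[None, :, :]) / h
    k = np.exp(-0.5 * (z ** 2).sum(axis=2)) / ((2 * np.pi) ** (d / 2) * h ** d)
    return k.mean(axis=1)

def ensemble_mi(X, Y, g, L_params, w):
    """Sketch of the weighted ensemble estimator G_{w,1}: for each l, use
    bandwidths h_X(l) = l N^{-1/(2 d_X)} and h_{X|y}(l) = l N_y^{-1/(2 d_X)},
    average g(f_X / f_{X|y}) over each class, then combine with weights w."""
    N, dX = X.shape
    est = 0.0
    for l, wl in zip(L_params, w):
        hX = l * N ** (-1.0 / (2 * dX))
        acc = 0.0
        for y in np.unique(Y):
            Xy = X[Y == y]
            Ny = len(Xy)
            hXy = l * Ny ** (-1.0 / (2 * dX))
            ratio = kde(X, Xy, hX) / kde(Xy, Xy, hXy)
            acc += (Ny / N) * g(ratio).mean()
        est += wl * acc
    return est

# Sanity run on independent X and Y: the ratio f_X / f_{X|y} is then ~1,
# so for g(x) = sqrt(x) the estimate should be close to g(1) = 1.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(600, 2))
Y = rng.integers(0, 2, size=600)
est = ensemble_mi(X, Y, np.sqrt, [1.0], [1.0])
```

Because numerator and denominator KDEs share the same boundary bias, their ratio is fairly well behaved even without boundary correction.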
A similar approach can be used to derive an ensemble estimator $\g{w_{0},1}^{cont}$ for the case when $\X$ and $\Y$ are continuous (case 1 in Section \[sec:intro\]). See Appendix \[subsec:cont\_ensemble\] for details. The case where $\X$ and $\Y$ both contain a mixture of discrete and continuous components follows similarly.
Parameter Selection
-------------------
In theory, the results of the previous sections hold for any choice of the bandwidth vectors as determined by $\mathcal{L}$. In practice, we find that the following rules of thumb for tuning the parameters lead to high-quality estimates in the finite-sample regime.
1. Select the minimum and maximum bandwidth parameters to produce density estimates that satisfy the following: first, the minimum bandwidth should not lead to a zero-valued density estimate at any sample point; second, the maximum bandwidth should be smaller than the diameter of the support.
2. Ensure the bandwidths are sufficiently distinct. Similar bandwidth values lead to negligible decrease in bias and many bandwidth values may increase $||w_{0}||_{2}$ resulting in an increase in variance [@sricharan2013ensemble].
3. Select $L=|\mathcal{L}|>|J|=I$ to obtain a feasible solution for the optimization problems in (\[eq:optimize\]) and (\[eq:relaxed\]). We find that choosing a value | 1 | member_53 |
of $30\leq L\leq60$, and setting $\mathcal{L}$ to be $L$ linearly spaced values between the minimum and maximum values described above works well in practice.
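These rules of thumb can be sketched as follows. The nearest-neighbor criterion for the lower end is one concrete (assumed) way to realize rule 1 for a compactly supported kernel, and the scaling factors are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 2))     # toy sample on the unit square

# Rule 1, lower end: with a compactly supported (uniform/boxcar) kernel the
# KDE is nonzero at every sample iff each point has another sample within h,
# so the largest nearest-neighbor distance is a natural bandwidth floor.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
np.fill_diagonal(D, np.inf)
h_min = D.min(axis=1).max()

# Rule 1, upper end: stay below the diameter of the support.
h_max = np.sqrt(X.shape[1])                  # diameter of the unit cube

# Rules 2-3: L distinct, linearly spaced values with 30 <= L <= 60.
L = 40
bandwidths = np.linspace(1.05 * h_min, 0.5 * h_max, L)
```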
The resulting ensemble estimators are robust in the sense that they are not sensitive to the exact choice of the bandwidths or the number of estimators as long as the rough rules of thumb given above are followed. Moon et al. [@moon2016arxiv; @moon2016isit] give more details on ensemble estimator parameter selection for continuous divergence estimation. These details also apply to the continuous parts of the mixed cases for MI estimation in this paper.
Since the optimal weight $w_{0}$ can be calculated offline, the computational complexity of the estimators is dominated by the construction of the KDEs which has a complexity of $O\left(N^{2}\right)$ using the standard implementation. For very large datasets, more efficient KDE implementations (e.g. [@raykar2010fast]) can be used to reduce the computational burden.
Central Limit Theorem {#subsec:clt}
---------------------
We finish this section with central limit theorems for the ensemble estimators. This enables us to perform hypothesis testing on the mutual information.
\[thm:clt\] Let $\g w^{cont}$ be a weighted ensemble estimator when $\X$ and $\Y$ are continuous with bandwidths $h_{X}(l_{X})$ and $h_{Y}(l_{Y})$ for each estimator in the | 1 | member_53 |
ensemble. Assume that the functional $g$ is Lipschitz in both arguments with Lipschitz constant $C_{g}$ and that $h_{X}(l_{X}),\,h_{Y}(l_{Y})=o(1)$, $N\rightarrow\infty$, and $Nh_{X}^{d_{X}}(l_{X}),\,Nh_{Y}^{d_{Y}}(l_{Y})\rightarrow\infty$ for each $l_{X}\in\mathcal{L}_{X}$ and $l_{Y}\in\mathcal{L}_{Y}$. Then for fixed $\mathcal{L}_{X}$ and $\mathcal{L}_{Y}$, and if $\mathbf{S}$ is a standard normal random variable, $$\Pr\left(\left(\g w^{cont}-\bE\left[\g w^{cont}\right]\right)/\sqrt{\var\left[\g w^{cont}\right]}\leq t\right)\rightarrow\Pr\left(\mathbf{S}\leq t\right).$$
The proof is based on an application of Slutsky’s Theorem preceded by an application of the Efron-Stein inequality (see Appendix \[sec:cltProof\]).
If the space $\mathcal{S}_{Y}$ is finite, then the ensemble estimators for the mixed component case also obey a central limit theorem. The proof follows by an application of Slutsky’s Theorem combined with Theorem \[thm:clt\].
Let $\g w$ be a weighted ensemble estimator when $\X$ is continuous and $\Y$ is discrete with bandwidths $h_{X}(l)$ and $h_{X|y}(l)$ for each estimator in the ensemble. Assume that the functional $g$ is Lipschitz in both arguments and that $h_{X}(l),\,h_{X|y}(l)=o(1)$, $N\rightarrow\infty$, and $Nh_{X}^{d_{X}}(l),\,Nh_{X|y}^{d_{X}}(l)\rightarrow\infty$ for each $l\in\mathcal{L}$ and all $y\in\mathcal{S}_{Y}$ with $\mathcal{S}_{Y}$ finite. Then for fixed $\mathcal{L}$, $$\Pr\left(\left(\g w-\bE\left[\g w\right]\right)/\sqrt{\var\left[\g w\right]}\leq t\right)\rightarrow\Pr\left(\mathbf{S}\leq t\right).$$
Experimental Validation {#sec:experiments}
=======================
In this section, we validate our theory by estimating the Rényi-$\alpha$ MI integral (i.e. $g(x)=x^{\alpha}$ in (\[eq:MI\_cond\]); see [@principe2010information]) where $\mathbf{X}$ is a mixture of truncated Gaussian random variables restricted to the | 1 | member_53 |
unit cube and $\mathbf{Y}$ is a categorical random variable. We choose Rényi MI as it has received recent interest (e.g. [@pal2010estimation]) and the estimation problem does not reduce to entropy estimation in contrast with Shannon MI. Thus this is a clear case where there are no other nonparametric estimators that are known to achieve the parametric MSE rate.
We consider two cases. In the first case, $\mathbf{Y}$ has three possible outcomes (i.e. $|\mathcal{S}_{Y}|=3$) and respective probabilities $\Pr(\mathbf{Y}=0)=\Pr(\mathbf{Y}=1)=2/5$ and $\Pr(\mathbf{Y}=2)=1/5$. The conditional covariance matrices are all $0.1\times I_{d}$ and the conditional means are, respectively, $\bar{\mu}_{0}=0.25\times\bar{1}_{d}$, $\bar{\mu}_{1}=0.75\times\bar{1}_{d}$, and $\bar{\mu}_{2}=0.5\times\bar{1}_{d}$, where $I_{d}$ is the $d\times d$ identity matrix and $\bar{1}_{d}$ is a $d$-dimensional vector of ones. This experiment can be viewed as the problem of estimating MI (e.g. for feature selection or Bayes error bounds) of a classification problem where each discrete value corresponds to a distinct class, the distribution of each class overlaps slightly with others, and the class probabilities are unequal. We use $\alpha=0.5$. We set $\mathcal{L}$ to be 40 linearly spaced values between 1.2 and 3. The bandwidth in the KDE plug-in estimator is also set to $2.1N^{-1/(2d)}$.
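The sampling scheme of this first experiment can be sketched as follows (rejection sampling of the truncated Gaussians onto the unit cube; the random seed, sample size, and dimension are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 3, 1000
probs = np.array([0.4, 0.4, 0.2])        # Pr(Y=0), Pr(Y=1), Pr(Y=2)
means = np.array([0.25, 0.75, 0.5])      # conditional means (times 1_d)

Y = rng.choice(3, size=N, p=probs)
X = np.empty((N, d))
for y in range(3):
    idx = np.where(Y == y)[0]
    filled = 0
    while filled < len(idx):
        # Draw from N(mu_y, 0.1 * I_d) and keep only points in [0,1]^d.
        cand = rng.normal(means[y], np.sqrt(0.1), size=(len(idx), d))
        ok = cand[np.all((cand >= 0.0) & (cand <= 1.0), axis=1)]
        take = min(len(ok), len(idx) - filled)
        X[idx[filled:filled + take]] = ok[:take]
        filled += take
```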
The top three plots in Figure \[fig:mseplot\] show the MSE (200 trials)
---
abstract: 'We study static, spherically symmetric vacuum solutions to Quadratic Gravity, extending considerably our previous Rapid Communication \[Phys. Rev. D 98, 021502(R) (2018)\] on this topic. Using a conformal-to-Kundt metric ansatz, we arrive at a much simpler form of the field equations in comparison with their expression in the standard spherically symmetric coordinates. We present details of the derivation of this compact form of two ordinary differential field equations for two metric functions. Next, we apply analytical methods and express their solutions as infinite power series expansions. We systematically derive all possible cases admitted by such an ansatz, arriving at six main classes of solutions, and provide recurrent formulas for all the series coefficients. These results allow us to identify the classes containing the Schwarzschild black hole as a special case. It turns out that one class contains only the Schwarzschild black hole, three classes admit the Schwarzschild solution as a special subcase, and two classes are not compatible with the Schwarzschild solution at all since they have strictly nonzero Bach tensor. In our analysis, we naturally focus on the classes containing the Schwarzschild spacetime, in particular on a new family of the Schwarzschild–Bach black holes which possesses one | 1 | member_54 |
additional non-Schwarzschild parameter corresponding to the value of the Bach tensor invariant on the horizon. We study its geometrical and physical properties, such as basic thermodynamical quantities and tidal effects on free test particles induced by the presence of the Bach tensor. We also compare our results with previous findings in the literature obtained using the standard spherically symmetric coordinates.'
author:
- |
J. Podolský$^\star$, R. Švarc$^\star$, V. Pravda$^\diamond$, A. Pravdová$^\diamond$\
\
\
[$^\star$ Institute of Theoretical Physics, Faculty of Mathematics and Physics,]{}\
[Charles University, V Holešovičkách 2, 180 00 Prague 8, Czech Republic.]{}\
[$^\diamond$ Institute of Mathematics, Academy of Sciences of the Czech Republic]{},\
[Žitná 25, 115 67 Prague 1, Czech Republic.]{}\
[E-mail: `[email protected], [email protected], `]{}\
[`[email protected], [email protected]`]{}
title: Black holes and other exact spherical solutions in Quadratic Gravity
---
PACS numbers: 04.20.Jb, 04.50.–h, 04.70.Bw, 04.70.Dy, 11.25.–w
Keywords: black holes, exact solutions, Quadratic Gravity, Einstein–Weyl gravity, Schwarzschild metric, Bach tensor, Robinson–Trautman spacetimes, Kundt spacetimes
Introduction {#intro}
============
Soon after Albert Einstein formulated his General Relativity in November 1915 and David Hilbert found an elegant procedure for deriving Einstein’s field equations from the variational principle, various attempts were made to extend and generalize this gravity theory. One possible
road, suggested by Theodor Kaluza exactly a century ago in 1919, was to consider higher dimensions in an attempt to unify the field theories of gravitation and electromagnetism. In the same year, another road was proposed by Hermann Weyl. In this case, the idea was to derive alternative field equations of a metric theory of gravity by starting with a different action. Instead of using the Einstein–Hilbert Lagrangian of General Relativity, which is simply the Ricci curvature scalar $R$ (a double contraction of a single Riemann tensor), Weyl proposed a Lagrangian containing *contractions of a product of two curvature tensors*. Such a Lagrangian is thus not linear in curvature — it is quadratic so that this theory can be naturally called “quadratic gravity”. Einstein was well aware of these attempts to formulate such alternative theories of gravity, and for some time he also worked on them. Interestingly, expressions for the quadratic gravity theory can be found even in his last writing pad (at the bottom of its last but one page) which he used in spring 1955.
Although it turned out rather quickly that these original classical theories extending General Relativity led to specific conceptual, mathematical and physical problems, the | 1 | member_54 |
nice ideas have been so appealing that — the whole century after their conception — they are still very actively investigated. Both the higher dimensions of the Kaluza–Klein theory and Weyl’s higher-order curvature terms in an effective action are now incorporated into the foundations of string theory. Quadratic Gravity (QG) also plays an important role in contemporary studies of relativistic quantum field theories.
Quadratic Gravity is a very natural and quite “conservative” extension of the Einstein theory, the most precise gravity theory today. Quadratic terms in the QG Lagrangian can be understood as corrections to General Relativity, which may play a crucial role at extremely high energies. In the search for a consistent quantum gravity theory, which could be applicable near the Big Bang or near spacetime singularities inside black holes, it is important to understand the role of these higher-order curvature corrections.
Interestingly, it was suggested by Weinberg and Deser, and then proved by Stelle [@Stelle:77] already in the 1970s that adding the terms quadratic in the curvature to the Einstein–Hilbert action renders gravity renormalizable, see the very recent review [@Salvio]. This property is also preserved in the general coupling with a generic quantum field theory. However, due to | 1 | member_54 |
the presence of higher derivatives, “massive ghosts” also appear (the corresponding classical Hamiltonian is unbounded from below). Nevertheless, there is a possibility that these ghosts could be benign [@Smilga]. For all these reasons, this QG theory has attracted considerable attention in recent years.
In our work, we are interested in *classical solutions to QG in four dimensions*. It can be easily shown that all Einstein spacetimes obey the vacuum field equations of this theory. However, QG also admits additional vacuum solutions with nontrivial Ricci tensor. In this paper, we focus on such *static, spherically symmetric vacuum solutions* without a cosmological constant. They were first studied in the seminal work [@Stelle:1978], in which three families of such spacetimes were identified by using a power expansion of the metric functions around the origin. The failure of the Birkhoff theorem in Quadratic Gravity has also been pointed out therein. Spherically symmetric solutions were further studied in [@Holdom:2002], where the numbers of free parameters for some of the above-mentioned classes were also determined. Recently it has been pointed out in [@LuPerkinsPopeStelle:2015; @LuPerkinsPopeStelle:2015b; @PerkinsPhD] that, apart from the Schwarzschild black hole and other spherical solutions, QG admits *non-Schwarzschild* spherically symmetric static black holes.
The | 1 | member_54 |
field equations of a generic Quadratic Gravity theory form a highly complicated system of fourth-order nonlinear PDEs. Only a few nontrivial exact solutions are thus known so far, and various approximative and numerical methods have had to be used in their studies. Specifically, in the new class of black holes presented in [@LuPerkinsPopeStelle:2015], the two unknown metric functions of the standard form of spherically symmetric metric were given in terms of two complicated coupled ODEs which were (apart from the first few orders in the power expansion) solved and analyzed numerically. Interestingly, all QG corrections to the four-dimensional vacuum Einstein equations for constant Ricci scalar are nicely combined into a conformally well-behaved Bach tensor. Together with a conformal-to-Kundt metric ansatz [@PravdaPravdovaPodolskySvarc:2017], this leads to a considerably simpler autonomous system of the field equations. We employed this approach in our recent letters [@PodolskySvarcPravdaPravdova:2018] and [@SvarcPodolskyPravdaPravdova:2018] for vanishing and nonvanishing cosmological constant, respectively. In [@PodolskySvarcPravdaPravdova:2018] we were thus able to present an explicit form of the corresponding nontrivial black-hole spacetimes — the so-called *Schwarzschild–Bach black holes* with two parameters, a position of the horizon and an additional Bach parameter. By setting this additional Bach parameter to zero, the Schwarzschild metric of General | 1 | member_54 |
Relativity is directly recovered. In the present considerably longer paper, we now give the details of the derivation summarized in [@PodolskySvarcPravdaPravdova:2018], together with a survey and analysis of other classes of spherically symmetric solutions to Quadratic Gravity.
Our paper is organized as follows. In Sec. \[QGandEWtheory\] we recall the Quadratic Gravity and the Einstein–Weyl theory, and we put the corresponding field equations into a convenient form in which the Ricci tensor is proportional to the Bach tensor. In Sec. \[BHmetricsec\] we introduce a suitable spherically symmetric metric ansatz in the conformal-to-Kundt form, and we give relations to the standard metric form. In Sec. \[derivingFE\] we overview the derivation of the field equations with various technical details and thorough discussion being postponed to Appendices A–C. In Sec. \[invariants\] expressions for curvature invariants are derived. In Sec. \[integration\] expansions in powers of ${\Delta \equiv r-r_0}$ around a fixed point $r_0$, and for $r \rightarrow \infty$ are introduced. In Sec. \[expansiont\_0\] the leading orders in ${\Delta }$ of the field equations are solved and four main classes of solutions are obtained. For these solutions, in Sec. \[description\] all coefficients of the metric functions in the power expansions in $\Delta$ are given in the | 1 | member_54 |
form of recurrent formulas, convenient gauge choices are found, and various aspects of the solutions are discussed. Sections \[expansiont\_INF\] and \[description\_INF\] focus on the same topics as Secs. \[expansiont\_0\] and \[description\], respectively, but this time for expansions $r \rightarrow \infty$. In Sec. \[summary\] the relation of the solutions obtained in Secs. \[expansiont\_0\]–\[description\_INF\] (including their special subcases) to the solutions given in the literature is discussed, and summarized in Table \[tab:3\]. Mathematical and physical aspects (specific tidal effects and thermodynamical quantities) of the Schwarzschild–Bach solutions are discussed in Sections \[discussion-and-figures\] and \[physics\], respectively. Finally, concluding remarks are given in Sec. \[conclusions\].
Quadratic Gravity and the Einstein–Weyl theory {#QGandEWtheory}
==============================================
Quadratic Gravity (QG) is a natural generalization of Einstein’s theory that includes higher derivatives of the metric. Its action in four dimensions contains additional quadratic terms, namely the square of the Ricci scalar $R$ and a contraction of the Weyl tensor $C_{abcd}$ with itself [@Weyl1919; @Bach1921]. In the absence of matter, the most general QG action generalizing the Einstein–Hilbert action reads[@PravdaPravdovaPodolskySvarc:2017][^1] $$S = \int {{\rm{d}}}^4 x\, \sqrt{-g}\, \Big( \gamma\,(R-2\Lambda) + \beta\,R^2 - \alpha\, C_{abcd}\, C^{abcd} \Big), \label{actionQG}$$ where ${\gamma=1/G}$ ($G$ is the Newtonian constant), $\Lambda$ is the cosmological constant, and $\alpha$, $\beta$ are additional QG theory parameters.
The Einstein–Weyl theory is contained as a special case by setting ${\beta=0}$.
*Vacuum field equations* corresponding to the action (\[actionQG\]) are $$\begin{aligned}
&\gamma \left(R_{ab} - \tfrac{1}{2} R\, g_{ab}+\Lambda\,g_{ab}\right)-4 \alpha\,B_{ab} \nonumber \\
&\quad +2\beta\left(R_{ab}-\tfrac{1}{4}R\, g_{ab}+ g_{ab}\, \Box - \nabla_b \nabla_a\right) R = 0 \,, \label{GenQGFieldEq}\end{aligned}$$ where $B_{ab}$ is the *Bach tensor* defined as $$B_{ab} \equiv \left( \nabla^c \nabla^d + \tfrac{1}{2} R^{cd} \right) C_{acbd} \,. \label{defBach}$$ It is traceless, symmetric, and conserved: $$g^{ab}B_{ab}=0 \,, \qquad B_{ab}=B_{ba} \,, \qquad \nabla^b B_{ab}=0 \,,
\label{Bachproperties}$$ and also conformally well-behaved (see expression (\[OmBach\]) below).
Now, *assuming* ${R=\hbox{const.}}$, the last two terms in (\[GenQGFieldEq\]) containing covariant derivatives of $R$ vanish. Using (\[Bachproperties\]), the trace of the field equations thus immediately implies $$R=4\Lambda\,. \label{R=4Lambda}$$ By substituting this relation into the field equations (\[GenQGFieldEq\]), they simplify considerably to $$R_{ab}-\Lambda\,g_{ab}=4k\, B_{ab}\,, \qquad k \equiv \frac{\alpha}{\gamma+8\beta\Lambda}\,. \label{fieldeqsgen}$$
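For completeness, the trace step can be spelled out. Contracting (\[GenQGFieldEq\]) with $g^{ab}$, the Bach term drops out by (\[Bachproperties\]), the algebraic $\beta$ terms cancel since ${g^{ab}\big(R_{ab}-\tfrac{1}{4}R\,g_{ab}\big)R=(R-R)\,R=0}$, and with ${g^{ab}g_{ab}=4}$ the derivative terms contract to ${2\beta\,(4\Box-\Box)R=6\beta\,\Box R}$. Hence $$\gamma\,(4\Lambda-R)+6\beta\,\Box R=0\,,$$ and for ${R=\hbox{const.}}$ the d’Alembertian term vanishes, giving (\[R=4Lambda\]).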
In this paper, *we restrict ourselves to* investigation of solutions with *vanishing cosmological constant* $\Lambda$ (see [@SvarcPodolskyPravdaPravdova:2018] for the study of a more general case ${\Lambda\ne0}$). In view of (\[R=4Lambda\]), this implies a vanishing Ricci scalar, $$R=0\,, \label{R=0}$$ and the field equations (\[fieldeqsgen\]) further reduce to the simpler form $$R_{ab}=4k\, B_{ab}\,, \label{fieldeqsEWmod}$$ where the constant $k$ is now a shorthand for the combination of the theory parameters ${ k \equiv
\alpha/\gamma= G\alpha}$. For ${k=0}$ we recover vacuum Einstein’s equations of General Relativity. Interestingly, all solutions of (\[fieldeqsEWmod\]) in *Einstein–Weyl gravity* (${\beta=0}$) with ${R=0}$ *are also solutions to general Quadratic Gravity* (${\beta\ne0}$) since for ${\Lambda=0}$ the QG parameter $\beta$ does not contribute to the constant $k$ defined by (\[fieldeqsgen\]).
Black hole metrics {#BHmetricsec}
==================
For studying static, nonrotating black holes, it is a common approach to employ the canonical form of a general spherically symmetric metric $${{\rm{d}}}s^2 = -h(\bar r)\,{{\rm{d}}}t^2+\frac{{{\rm{d}}}\bar r^2}{f(\bar r)}+\bar r^2({{\rm{d}}}\theta^2+\sin^2\theta\,{{\rm{d}}}\phi^2) \,.
\label{Einstein-WeylBH}$$ In particular, for the famous *Schwarzschild solution* of Einstein’s General Relativity [@Schwarzschild:1916] (and also of QG), the two metric functions *are the same* and take the well-known form $$f(\bar{r}) = h(\bar{r})=1-\frac{2m}{\bar{r}} \,.
\label{SchwarzschildBH}$$ The metric (\[Einstein-WeylBH\]) was also used in the seminal papers [@LuPerkinsPopeStelle:2015; @LuPerkinsPopeStelle:2015b] to investigate generic spherical black holes in Quadratic Gravity, in which it was surprisingly shown, mostly by numerical methods, that such a class contains further black-hole solutions *distinct* from the Schwarzschild solution (\[SchwarzschildBH\]). It turned out that while the Schwarzschild black hole has ${f=h}$, this non-Schwarzschild black hole is characterized by ${f\not=h}$. However, due to the complexity of the QG field equations (\[GenQGFieldEq\]) for the classical metric form (\[Einstein-WeylBH\]), it has | 1 | member_54 |
not been possible to find an explicit analytic form of the metric functions ${f(\bar{r}), h(\bar{r})}$.
A new convenient metric form of the black hole geometry {#BH metric}
-------------------------------------------------------
As demonstrated in our previous works [@PodolskySvarcPravdaPravdova:2018; @SvarcPodolskyPravdaPravdova:2018], it is much more convenient to employ an *alternative metric form* of the spacetimes represented by (\[Einstein-WeylBH\]). This is obtained by performing the transformation $$\bar{r} = \Omega(r)\,, \qquad t = u - \int\! \frac{{{\rm{d}}}r}{\H(r)} \,, \label{to static}$$ resulting in $${{\rm{d}}}s^2 = \Omega^2(r) \Big[\, {{\rm{d}}}\theta^2+\sin^2\theta\,{{\rm{d}}}\phi^2 - 2\,{{\rm{d}}}u\,{{\rm{d}}}r + \H(r)\,{{\rm{d}}}u^2 \,\Big] \,. \label{BHmetric}$$ The two new metric functions $\Omega(r)$ and $\H(r)$ are related to $f(\bar r)$ and $h(\bar r)$ via the simple relations $$h = -\Omega^2\, \H \,, \qquad f = -\left(\frac{\Omega'}{\Omega}\right)^2 \H \,, \label{rcehf}$$ where prime denotes the derivative with respect to $r$. Of course, the argument $r$ of both functions $\Omega$ and $\H$ must be expressed in terms of $\bar{r}$ using the inverse of the relation ${\bar{r} = \Omega(r)}$.
The metric admits a *gauge freedom* given by a constant rescaling and a shift of $r$, $$r \to \lambda\, r + \nu \,, \qquad u \to \lambda^{-1}\, u \,. \label{scalingfreedom}$$
More importantly, this new black hole metric is *conformal* to a much simpler Kundt-type metric, $${{\rm{d}}}s^2 = \Omega^2(r)\, {{\rm{d}}}s^2_{\hbox{\tiny Kundt}} \,. \label{confrelation}$$ Indeed, ${{{\rm{d}}}s^2_{\hbox{\tiny Kundt}}}$ belongs to the famous class of *Kundt geometries*, which are nonexpanding, shear-free and
twist-free, see [@Stephanietal:2003; @GriffithsPodolsky:2009]. In fact, it is a subclass of Kundt spacetimes which is the *direct-product of two 2-spaces*, and is of Weyl algebraic type D and Ricci type II [@GriffithsPodolsky:2009; @PravdaPravdovaPodolskySvarc:2017]. The first part of $${{\rm{d}}}s^2_{\hbox{\tiny Kundt}} = {{\rm{d}}}\theta^2+\sin^2\theta\,{{\rm{d}}}\phi^2 - 2\,{{\rm{d}}}u\,{{\rm{d}}}r+\H(r)\,{{\rm{d}}}u^2 \label{Kundt seed}$$ spanned by ${\theta, \phi}$ is a round 2-sphere of Gaussian curvature ${K=1}$, while the second part spanned by ${u, r}$ is a 2-dim Lorentzian spacetime. With the usual stereographic representation of a 2-sphere given by ${x+\hbox{i}\, y = 2\tan(\theta/2)\exp(\hbox{i}\phi)}$, this *Kundt seed* metric can be rewritten as $${{\rm{d}}}s^2_{\hbox{\tiny Kundt}} = \frac{{{\rm{d}}}x^2+{{\rm{d}}}y^2}{\big(1+\tfrac{1}{4}(x^2+y^2)\big)^2} - 2\,{{\rm{d}}}u\,{{\rm{d}}}r+\H(r)\,{{\rm{d}}}u^2 \,. \label{Kundt seed xy}$$
The black hole horizon {#BH horizon}
----------------------
In the usual metric form (\[Einstein-WeylBH\]), the Schwarzschild horizon is defined by the zeros of the two (here identical) metric functions ${h({\bar r})=f({\bar r})}$. Due to (\[SchwarzschildBH\]), it is located at ${{\bar r}_h=2m}$, where $m$ denotes the total mass of the black hole.
In a general case, such a horizon can be defined as the *Killing horizon* associated with the vector field ${\partial_t}$. Its norm is determined by the metric function $-h({\bar r})$. In the regions where ${h({\bar r})>0}$, the spacetime is static and $t$ is the corresponding temporal coordinate. The Killing horizon is generated by the *null vector field* | 1 | member_54 |
${\partial_t}$, and it is thus located at a specific radius ${\bar r}_h$ satisfying $$h \big|_{{\bar r}={\bar r}_h}=0\,. \label{standardhorizon}$$
In terms of the new metric form (\[BHmetric\]), we may similarly employ the vector field ${\partial_u}$ which coincides with ${\partial_t}$ everywhere. Its norm is given by $\Omega^2\, \H$. Since the conformal factor $\Omega$ is nonvanishing throughout the spacetime, the Killing horizon is uniquely located at a specific radius $r_h$ satisfying the condition $$\H \big|_{r=r_h}=0\,. \label{horizon}$$ Interestingly, via the relations (\[rcehf\]) this automatically implies ${h({\bar r_h})=0=f({\bar r_h})}$.
It is also important to recall that there is a *time-scaling freedom* of the metric (\[Einstein-WeylBH\]), $$t \to t/\sigma \,, \label{scaling-t}$$ where ${\sigma \ne 0}$ is any constant, which implies ${h\to h\,\sigma^2}$. This freedom can be used to adjust an appropriate value of $h$ at a chosen radius ${\bar r}$. Alternatively, in an asymptotically flat spacetime such as (\[SchwarzschildBH\]) it could be used to achieve ${h \to 1}$ as ${{\bar r}\to \infty}$, thus enabling us to determine the mass of the black hole.
The Kundt seed of the Schwarzschild solution {#Kundt seed of the Schwarzschild}
--------------------------------------------
It is also important to explicitly identify the Kundt seed geometry (\[Kundt seed\]) which, via the conformal relation (\[confrelation\]), generates the well-known vacuum *Schwarzschild solution*. | 1 | member_54 |
This is simply given by $$\bar{r}=\Omega(r)=-\frac{1}{r}\,,\qquad
\H(r) = -r^2-2m\, r^3 \,.
\label{Schw}$$ Indeed, the first relation implies ${r=-1/\bar{r}}$, so that ${\H(\bar{r}) = - (1-2m/\bar{r})/\bar{r}^2}$. Using (\[rcehf\]), we easily obtain (\[SchwarzschildBH\]). It should be emphasized that the standard physical range ${\bar{r}>0}$ corresponds to ${r<0}$. Also, the auxiliary Kundt coordinate $r$ *increases from negative values to* $0$, as $\bar{r}$ increases to $\infty$.
Notice that ${\cal H}$ given by (\[Schw\]) is simply a *cubic* in the coordinate $r$ of the Kundt geometry. For ${m=0}$, the Kundt seed with ${{\cal H} = -\,r^2}$ is the Bertotti–Robinson spacetime with the geometry ${S^2\times AdS_2}$ (see chapter 7 of [@GriffithsPodolsky:2009]), and the corresponding conformally related metric (\[confrelation\]) is just the flat space. It should also be emphasized that, while the Schwarzschild and Minkowski spacetimes are (the simplest) vacuum solutions in Einstein’s theory, their Kundt seeds (\[Schw\]) *are not vacuum solutions* in Einstein’s theory since their Ricci tensor is nonvanishing. In fact, the Bertotti–Robinson geometry is an electrovacuum space of Einstein’s theory.
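As a quick consistency check (ours, not part of the original text), one can verify with SymPy that the Kundt seed (\[Schw\]) reproduces the Schwarzschild metric functions, assuming the relations ${h=-\Omega^2\H}$ and ${f=-(\Omega'/\Omega)^2\H}$ between the two metric forms, and that the horizon condition ${\H(r_h)=0}$ gives ${\bar r_h=2m}$:

```python
# Consistency check (not from the paper's sources): the Kundt seed
#   Omega(r) = -1/r,  H(r) = -r^2 - 2*m*r^3
# should reproduce h = f = 1 - 2m/rbar through the assumed relations
#   h = -Omega^2 * H,   f = -(Omega'/Omega)^2 * H,   rbar = Omega(r).
import sympy as sp

r, rbar, m = sp.symbols('r rbar m')
Omega = -1/r
H = -r**2 - 2*m*r**3

h = sp.simplify(-Omega**2 * H)
f = sp.simplify(-(sp.diff(Omega, r) / Omega)**2 * H)

# express both in terms of rbar using the inverse relation r = -1/rbar
h_bar = sp.simplify(h.subs(r, -1/rbar))
f_bar = sp.simplify(f.subs(r, -1/rbar))
print(h_bar, f_bar)             # both: 1 - 2*m/rbar

# horizon: H vanishes at r_h = -1/(2m), i.e. rbar_h = Omega(r_h) = 2m
print(sp.simplify(H.subs(r, -sp.Rational(1, 2)/m)))  # 0
```

The substitution ${r=-1/\bar r}$ implements the inverse of ${\bar r=\Omega(r)}$ for this particular seed.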
Since conformal transformations preserve the Weyl tensor, both ${{\rm{d}}}s^2$ and ${{\rm{d}}}s^2_{\hbox{\tiny Kundt}} $ are of the *same algebraic type*. Indeed, in the null frame ${{\mbox{\boldmath$k$}}= \mathbf{\partial}_r}$, ${{\mbox{\boldmath$l$}}= {\textstyle\frac{1}{2}}{\cal H}\,\mathbf{\partial}_r+\mathbf{\partial}_u}$, ${{\mbox{\boldmath$m$}}_i = \big(1+\ctvrt(x^2+y^2)\big)\mathbf{\partial}_i}$, the only Newman–Penrose Weyl | 1 | member_54 |
scalar for (\[Kundt seed xy\]) is ${\Psi_2=-\frac{1}{12}({\cal H}''+2)}$, and both ${\mbox{\boldmath$k$}}$ and ${\mbox{\boldmath$l$}}$ are double principal null directions. For the specific function (\[Schw\]), ${\Psi_2=m\,r}$. The Kundt seed geometry for the Schwarzschild solution is thus of algebraic type D. It is conformally flat if, and only if, ${m=0}$, in which case it is the Bertotti–Robinson spacetime.
The Robinson–Trautman form of the black hole metrics {#RT}
----------------------------------------------------
Recently, we have proven in [@PravdaPravdovaPodolskySvarc:2017] that *any metric conformal to a Kundt geometry must belong to the class of expanding Robinson–Trautman geometries* (or it remains in the Kundt class). Indeed, performing a simple transformation ${r(\tilde r)}$ of (\[confrelation\]), (\[Kundt seed xy\]), such that $$r = \int\!\!\frac{{{\rm{d}}}\tilde r}{\Omega^2(\tilde r)}\, , \qquad
{{H}}\equiv \Omega^{2}\, \H \,,
\label{guu_RT}$$ we obtain $${{\rm{d}}}s^2_{\hbox{\tiny RT}} = \Omega^2(\tilde r)\,\frac{{{\rm{d}}}x^2 + {{\rm{d}}}y^2}{\big(1+\ctvrt(x^2+y^2)\big)^2}
-2\,{{\rm{d}}}u\,{{\rm{d}}}\tilde r+{{H}}(\tilde r)\,{{\rm{d}}}u^2 \,. \label{confRT}$$ This has the canonical form of the Robinson–Trautman class [@Stephanietal:2003; @GriffithsPodolsky:2009] with the identification $$\Omega_{,\tilde r} = \sqrt{\frac{f}{h}} \,, \qquad {{H}} = - h \,.$$ The Schwarzschild black hole is recovered for ${\Omega(\tilde r)=\tilde r}$, that is ${\Omega_{,\tilde r}=1}$, equivalent to ${f(\bar{r}) = h(\bar{r})}$. Other distinct non-Schwarzschild black hole solutions are identified by ${f(\bar{r}) \ne h(\bar{r})}$. The Killing horizon is obviously given by ${{{H}}(\tilde r_h)=0}$, corresponding to ${\H(r_h)=0=h(\bar r_h)}$ and ${f(\bar
r_h)=0}$.
The field equations {#derivingFE}
===================
The conformal approach to describing and studying black holes and other spherical solutions in Einstein–Weyl gravity and fully general Quadratic Gravity, based on the new form of the metric (\[BHmetric\]), is very convenient. Due to (\[confrelation\]), it enables us to easily evaluate the Ricci and Bach tensors entering the field equations (\[fieldeqsEWmod\]) from the Ricci and Bach tensors of the much simpler Kundt seed metric ${{{\rm{d}}}s^2_{\hbox{\tiny Kundt}}}$. In particular, to derive the explicit form of the field equations, it is possible to proceed as follows:
1. Calculate all components of the Ricci and Bach tensors $R_{ab}^{{\hbox{\tiny Kundt}}}$ and $B_{ab}^{{\hbox{\tiny Kundt}}}$ for the Kundt seed metric $g_{ab}^{{\hbox{\tiny Kundt}}}$. Since such a metric (\[Kundt seed xy\]) is simple, containing only one general metric function of one variable $\H(r)$, its key curvature tensors are also simple. Their explicit form is presented in Appendix A.
2. Use the well-known geometric relations for the Ricci and Bach tensors of conformally related metrics $g_{ab}^{{\hbox{\tiny Kundt}}}$ and ${g_{ab}=\Omega^2 \,g_{ab}^{{\hbox{\tiny Kundt}}}}$. Thus it is straightforward to evaluate the curvature tensors $R_{ab}$ and $B_{ab}$ for spherically symmetric geometries, starting from their forms of the Kundt seed calculated in the first step. In particular, since | 1 | member_54 |
the Bach tensor trivially rescales under the conformal transformation as ${B_{ab} = \Omega^{-2}\,B_{ab}^{{\hbox{\tiny Kundt}}}}$, it remains simple. These calculations are performed in Appendix B.
3. These explicit components of the Ricci and Bach tensors are substituted into the field equations of Quadratic Gravity, which we already reduced to the expression ${R_{ab}=4k\, B_{ab}}$, see (\[fieldeqsEWmod\]). This immediately leads to a very simple and compact form of these field equations. Moreover, using the Bianchi identities, it can be shown that the whole system reduces just to two equations (\[Eq1\]), (\[Eq2\]) for the metric functions $\Omega(r)$ and $\H(r)$, see Appendix C.
By this procedure, we thus arrive at a remarkably simple form of the field equations (\[fieldeqsEWmod\]) for spherically symmetric vacuum spacetimes in Einstein–Weyl gravity and general Quadratic Gravity with ${R=0}$, namely *two ordinary differential equations* for the *two metric functions* $\Omega(r)$ and ${\cal H}(r)$: $$\begin{aligned}
\Omega\Omega''-2{\Omega'}^2 = &\ \tfrac{1}{3}k\, \B_1 \H^{-1} \,, \label{Eq1}\\
\Omega\Omega'{\cal H}'+3\Omega'^2{\cal H}+\Omega^2
= &\ \tfrac{1}{3}k \,\B_2 \,. \label{Eq2}\end{aligned}$$ The functions $\B_1(r)$ and $\B_2(r)$ denote *two independent components of the Bach tensor*, $$\begin{aligned}
&& \B_1 \equiv \H\H'''' \,, \label{B1}\\
&& \B_2 \equiv \H'\H''' - \tfrac{1}{2}{\H''}^2 + 2 \,. \label{B2}\end{aligned}$$
Recall also the relation (\[R=0\]), that is ${R=0}$, which is a trace of the field equations . This relation | 1 | member_54 |
takes the explicit form $${\cal H}\Omega''+{\cal H}'\Omega'+{\textstyle \frac{1}{6}} ({\cal H}''+2)\Omega = 0 \,,
\label{trace}$$ see (\[barR\]). Indeed, it immediately follows from (\[Eq1\]), (\[Eq2\]): just subtract from the derivative of the second equation the first equation multiplied by $\H'$ (and divide the result by $6\Omega'$).
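The trace constraint can also be checked mechanically; the following SymPy sketch (our own check) confirms that the Schwarzschild seed functions from (\[Schw\]) satisfy it:

```python
# Check that Omega = -1/r, H = -r^2 - 2*m*r^3 (the Schwarzschild seed) satisfies
# the trace constraint  H*Omega'' + H'*Omega' + (1/6)*(H'' + 2)*Omega = 0.
import sympy as sp

r, m = sp.symbols('r m')
Omega = -1/r
H = -r**2 - 2*m*r**3

trace = (H * sp.diff(Omega, r, 2)
         + sp.diff(H, r) * sp.diff(Omega, r)
         + sp.Rational(1, 6) * (sp.diff(H, r, 2) + 2) * Omega)
print(sp.simplify(trace))  # 0
```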
It is a great advantage of our conformal approach with the convenient form of the new metric (\[BHmetric\]) that the field equations (\[Eq1\]), (\[Eq2\]) are *considerably simpler* than the previously used field equations for the standard metric (\[Einstein-WeylBH\]). Moreover, they form an *autonomous system*, which means that the differential equations *do not explicitly depend on the radial variable $r$*. This will be essential for solving such a system and finding its analytic solutions in the generic forms (\[rozvojomeg0\]), (\[rozvojcalH0\]) or (\[rozvojomegINF\]), (\[rozvojcalHINF\]) in the subsequent Section \[integration\].
Fundamental scalar invariants and geometric classification {#invariants}
==========================================================
For a geometrical and physical interpretation of spacetimes that are solutions to the field equations (\[Eq1\]), (\[Eq2\]), it will be crucial to investigate the behaviour of scalar curvature invariants constructed from the Ricci, Bach, and Weyl tensors themselves. A direct calculation yields $$\begin{aligned}
R_{ab}\, R^{ab} &= 16k^2\, B_{ab} B^{ab} \,, \label{invR}\\
B_{ab}\, B^{ab} &= \tfrac{1}{72}\,\Omega^{-8}\,\big[(\B_1)^2 + 2(\B_1+\B_2)^2\big] \,,\label{invB}\\
C_{abcd}\, C^{abcd} &= \tfrac{1}{3}\,\Omega^{-4}\,\big({\cal H}'' +2\big)^2 \,. | 1 | member_54 |
\label{invC}\end{aligned}$$ To derive these expressions, we have used the field equations, the quantities (\[RT\_R rr\])–(\[RT\_R xx\]), (\[Bach rr\])–(\[Bach xx\]), (\[WeyliK\])–(\[WeylfK\]), and relations (\[contraEinstein-WeylBHC\]), (\[confrel\]), (\[OmBach\]) together with ${C_{abcd}\,C^{abcd}=\Omega^{-4}\, C_{abcd}^{{\hbox{\tiny Kundt}}}\, C^{abcd}_{{\hbox{\tiny Kundt}}}}$ which follows from the invariance of the Weyl tensor under conformal transformations.
It is interesting to observe from (\[invB\]) and (\[Bach rr\])–(\[Bach xx\]) with (\[OmBach\]) that $$B_{ab}=0 \quad\Leftrightarrow\quad B_{ab}\,B^{ab} = 0 \,. \label{Bach=0iffINV=0}$$ Moreover, $$C_{abcd}\,C^{abcd}=0 \quad\Rightarrow\quad B_{ab} = 0 \,, \label{Weylinv=0thenBach=0}$$ because the relation ${{\cal H}'' +2=0}$ substituted into (\[B1\]), (\[B2\]) gives ${B_{ab}\,B^{ab} =0}$, i.e., ${B_{ab} =0}$ due to (\[Bach=0iffINV=0\]).
Notice also that the *first Bach component* ${\B_1=\H \H''''}$ *always vanishes on the horizon* where ${\H=0}$, see the condition (\[horizon\]).
In view of the key invariant , there are *two geometrically distinct classes of solutions* to (\[Eq1\]), (\[Eq2\]), depending on the Bach tensor ${B_{ab}}$. The first simple case corresponds to ${B_{ab}=0}$, while the much more involved second case, not allowed in General Relativity, arises when ${B_{ab}\ne0}$. This invariant classification has geometrical and physical consequences. In particular, the distinction of spacetimes with ${B_{ab}=0}$ and with ${B_{ab}\ne0}$ can be detected by measuring geodesic deviation of test particles, see Section \[geodeviation\] below.
${B_{ab}=0}$: Uniqueness of Schwarzschild {#integration:Schw}
-----------------------------------------
First, let us assume the metrics (\[BHmetric\]) such that ${B_{ab}=0}$ everywhere. In view
of (\[invR\])–(\[invB\]) and (\[Bach=0iffINV=0\]), this condition requires ${\B_1=0=\B_2}$, that is $$\H\H''''=0 \,, \qquad \H'\H''' - \tfrac{1}{2}{\H''}^2 + 2 = 0 \,. \label{Bab=0-RHS}$$ Therefore, all left-hand sides and right-hand sides of equations (\[Eq1\]) and (\[Eq2\]) *vanish separately*, i.e., $$\Omega\Omega''=2{\Omega'}^2 \,, \qquad \Omega\Omega'\H' + 3\Omega'^2\H + \Omega^2 = 0 \,. \label{Bab=0-LHS}$$ The first equations of (\[Bab=0-RHS\]) and (\[Bab=0-LHS\]) imply that ${\cal H}$ must be *at most cubic*, and $\Omega^{-1}$ must be *at most linear* in $r$. Using the coordinate freedom (\[scalingfreedom\]) of the metric (\[BHmetric\]), without loss of generality we obtain ${\Omega=-1/r}$. The remaining equations (\[Bab=0-RHS\]), (\[Bab=0-LHS\]) then admit a unique solution $$\Omega(r)=-\frac{1}{r}\,,\qquad
{\cal H}(r) = -r^2-2m\, r^3 \,.
\label{IntegrSchwAdS}$$ Not surprisingly, this is exactly the Schwarzschild solution of General Relativity, see equation (\[Schw\]). Thus we have verified that the *Schwarzschild black hole spacetime is the only possible solution with vanishing Bach tensor*. Its corresponding scalar invariants (\[invR\])–(\[invC\]) are $$R_{ab}\, R^{ab} = 0 = B_{ab}\, B^{ab} \,, \qquad C_{abcd}\, C^{abcd} = 48\, m^2 r^6 \,. \label{SchwarzInvariants}$$ Clearly, for ${m\not=0}$ there is a curvature singularity at ${r\to\infty}$ corresponding to ${\bar{r}=\Omega(r)=0}$.[^2]
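The stated invariants can be reproduced symbolically; the following SymPy sketch (our own check) uses the Bach components ${\B_1=\H\H''''}$, ${\B_2=\H'\H'''-\tfrac{1}{2}\H''^2+2}$ together with the Weyl invariant (\[invC\]):

```python
# For the Schwarzschild seed Omega = -1/r, H = -r^2 - 2*m*r^3, verify that both
# Bach components vanish and that  C_abcd C^abcd = (1/3) * Omega^(-4) * (H'' + 2)^2
# evaluates to 48*m^2*r^6, as stated in the text.
import sympy as sp

r, m = sp.symbols('r m')
Omega = -1/r
H = -r**2 - 2*m*r**3

B1 = H * sp.diff(H, r, 4)
B2 = (sp.diff(H, r) * sp.diff(H, r, 3)
      - sp.Rational(1, 2) * sp.diff(H, r, 2)**2 + 2)
weyl2 = sp.Rational(1, 3) * Omega**(-4) * (sp.diff(H, r, 2) + 2)**2

print(sp.simplify(B1), sp.simplify(B2), sp.expand(weyl2))
# 0 0 48*m**2*r**6
```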
${B_{ab}\ne0}$: New types of solutions to QG {#integration:nonSchw}
--------------------------------------------
Many other spherically symmetric vacuum solutions to Quadratic Gravity and Einstein–Weyl gravity exist when the Bach tensor is nontrivial. They are *much more involved, and do not exist in General Relativity*. Indeed, the field equations (\[fieldeqsEWmod\]) imply ${R_{ab}=4k\, | 1 | member_54 |
B_{ab}\ne0}$, which is in contradiction with vacuum Einstein’s equations ${R_{ab}=0}$.
In the rest of this paper, we now concentrate on these new spherical spacetimes in QG, in particular on black holes generalizing the Schwarzschild solution. First, we integrate the field equations (\[Eq1\]), (\[Eq2\]) for the metric functions $\Omega(r)$ and ${\cal H}(r)$. Actually, we demonstrate that there are several classes of such solutions with ${B_{ab}\ne0}$. After their explicit identification and description, we will analyze their geometrical and physical properties.
Solving the field equations {#integration}
===========================
For a nontrivial Bach tensor (${\B_1, \B_2 \ne0}$), the right-hand sides of the field equations (\[Eq1\]), (\[Eq2\]) are nonzero, so the nonlinear system of two ordinary differential equations for $\Omega(r)$, ${\cal H}(r)$ is coupled in a complicated way. Finding its general solution explicitly seems hopeless. However, *it is possible to write the admitted solutions analytically, in terms of (infinite) mathematical series expressed in powers of the radial coordinate $r$*.
In fact, there are *two natural possibilities*. The first is the expansion in powers of the parameter ${\Delta \equiv r-r_0}$ which expresses the solution around any finite value $r_0$ (including ${r_0=0}$). The second possibility is the expansion in powers of $r^{-1}$ which is applicable for | 1 | member_54 |
large values of $r$. Let us now investigate both these cases.
Expansion in powers of ${\Delta \equiv r-r_0}$ {#expansio_DElta}
----------------------------------------------
It is a great advantage that (\[Eq1\]), (\[Eq2\]) is an *autonomous system*. Thus we can find the metric functions in the form of an *expansion in powers of $r$ around any fixed value* ${r_0}$, $$\begin{aligned}
\Omega(r) {\!\!\!& = &\!\!\!}\Delta^n \sum_{i=0}^\infty a_i \,\Delta^{i}\,, \label{rozvojomeg0}\\
\H(r) {\!\!\!& = &\!\!\!}\Delta^p \,\sum_{i=0}^\infty c_i \,\Delta^{i}\,, \label{rozvojcalH0}\end{aligned}$$ where $$\Delta \equiv r-r_0 \,, \label{DElta}$$ and $r_0$ is *any real constant*.[^3] In particular, in some cases this allows us to find solutions close to any black hole horizon $r_h$ by choosing ${r_0=r_h}$.
It is assumed that ${i=0, 1, 2, \ldots}$ are integers, so that the metric functions are expanded in integer steps of ${\Delta=r-r_0}$. On the other hand, the *dominant real powers* $n$ and $p$ in the expansions (\[rozvojomeg0\]) and (\[rozvojcalH0\]) *need not be* positive integers. We only assume that ${a_0\not=0}$ and ${c_0\not=0}$, so that the coefficients $n$ and $p$ are uniquely defined as the leading powers.
By inserting (\[rozvojomeg0\])–(\[rozvojcalH0\]) into the field equations (\[Eq1\]), (\[Eq2\]), we prove in Section \[expansiont\_0\] that *only 4 classes of solutions of this form are allowed*, namely $$[n,p]=[-1,2]\,,\qquad [n,p]=[0,1]\,,\qquad [n,p]=[0,0]\,,\qquad [n,p]=[1,0]\,. \label{4classes}$$ In subsequent Section \[description\], it will turn
out that the only possible solution in the class ${[n,p]=[-1,2]}$ is the Schwarzschild black hole, for which the Bach tensor vanishes. Explicit Schwarzschild–Bach black holes with ${B_{ab}\ne0}$ are contained in the classes ${[0,1]}$ and ${[0,0]}$. The fourth class ${[n,p]=[1,0]}$ represents singular solutions without a horizon, and it is equivalent to the class ${(s,t)=(2,2)}$ identified previously in [@Stelle:1978; @LuPerkinsPopeStelle:2015b; @PerkinsPhD].
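As a sanity check (ours), the Schwarzschild seed indeed sits in the class ${[n,p]=[-1,2]}$ when expanded around ${r_0=0}$, with the leading coefficients ${a_0=-1}$ and ${c_0=-1}$:

```python
# Leading behaviour of the Schwarzschild seed around r0 = 0 (so Delta = r):
#   Omega = -1/r        -> Delta^(-1) * (-1)           => n = -1, a0 = -1
#   H = -r^2 - 2*m*r^3  -> Delta^2 * (-1 - 2*m*Delta)  => p =  2, c0 = -1
import sympy as sp

r, m = sp.symbols('r m')
Omega = -1/r
H = -r**2 - 2*m*r**3

a0 = sp.limit(Omega * r, r, 0)   # coefficient of Delta^(-1)
c0 = sp.limit(H / r**2, r, 0)    # coefficient of Delta^2
print(a0, c0)  # -1 -1
```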
Expansion in powers of $r^{-1}$ {#expansion_INF}
-------------------------------
Analogously, we may study and classify all possible solutions to the QG field equations for an asymptotic expansion as ${r\rightarrow \infty}$. Instead of (\[rozvojomeg0\]), (\[rozvojcalH0\]) with (\[DElta\]), for very large $r$ we can assume that the metric functions $\Omega(r)$, $\mathcal{H}(r)$ are expanded in *negative powers* of $r$ as $$\begin{aligned}
\Omega(r) {\!\!\!& = &\!\!\!}r^N \sum_{i=0}^\infty A_i \,r^{-i}\,, \label{rozvojomegINF}\\
\mathcal{H}(r) {\!\!\!& = &\!\!\!}r^P \,\sum_{i=0}^\infty C_i \,r^{-i}\,. \label{rozvojcalHINF}\end{aligned}$$
Inserting the series (\[rozvojomegINF\]), (\[rozvojcalHINF\]) into the field equations (\[Eq1\]), (\[Eq2\]), it can be shown that *only 2 classes of such solutions are allowed*, namely $$[N,P]=[-1,3]^\infty\,,\qquad [N,P]=[-1,2]^\infty\,, \label{2classes}$$ see Section \[expansiont\_INF\]. In subsequent Section \[description\_INF\], it will be shown that the class ${[N,P]=[-1,3]^\infty}$ represents the Schwarzschild–Bach black holes, whereas the class ${[N,P]=[-1,2]^\infty}$ is a specific Bachian generalization of flat space which does not correspond to a black hole.
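Similarly (our own check), the asymptotic class ${[N,P]=[-1,3]^\infty}$ is consistent with the Schwarzschild seed, whose leading asymptotic coefficients are ${A_0=-1}$ and ${C_0=-2m}$:

```python
# Asymptotic (r -> infinity) leading behaviour of the Schwarzschild seed:
#   Omega = -1/r        ~ r^(-1) * (-1)    => N = -1, A0 = -1
#   H = -r^2 - 2*m*r^3  ~ r^3 * (-2*m)     => P =  3, C0 = -2*m
import sympy as sp

r, m = sp.symbols('r m', positive=True)
Omega = -1/r
H = -r**2 - 2*m*r**3

A0 = sp.limit(Omega * r, r, sp.oo)
C0 = sp.limit(H / r**3, r, sp.oo)
print(A0, C0)  # -1 -2*m
```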
Discussion of solutions | 1 | member_54 |
using the expansion in powers of $\Delta$ {#expansiont_0}
=================================================================
By inserting the series (\[rozvojomeg0\]), (\[rozvojcalH0\]) into the first field equation (\[Eq1\]), the following key relation is obtained $$\begin{aligned}
&\sum_{l=2n-2}^{\infty}\Delta^{l}\sum^{l-2n+2}_{i=0}a_i\, a_{l-i-2n+2}\,(l-i-n+2)(l-3i-3n+1) \nonumber \\
& \hspace{35.0mm}=\tfrac{1}{3}k \sum^{\infty}_{l=p-4}\Delta^{l}\,c_{l-p+4}\,(l+4)(l+3)(l+2)(l+1) \,.
\label{KeyEq1}\end{aligned}$$ The second field equation (\[Eq2\]) puts further constraints on the admitted solutions, namely $$\begin{aligned}
&\sum_{l=2n+p-2}^{\infty}\Delta^{l}\sum^{l-2n-p+2}_{j=0}\sum^{j}_{i=0}a_i\,a_{j-i}\,c_{l-j-2n-p+2}\,(j-i+n)(l-j+3i+n+2)
+\sum_{l=2n}^{\infty}\Delta^{l}\sum^{l-2n}_{i=0}a_i\,a_{l-i-2n}
\nonumber \\
& = \tfrac{1}{3}k \bigg[2+\sum^{\infty}_{l=2p-4}\Delta^{l}\sum^{l-2p+4}_{i=0}c_{i}\,c_{l-i-2p+4}\,(i+p)(l-i-p+4)(l-i-p+3)(l-\tfrac{3}{2}i-\tfrac{3}{2}p+\tfrac{5}{2})\bigg]\,.
\label{KeyEq2}\end{aligned}$$ Considerably simpler is the additional (necessary but not sufficient) condition following from the trace equation (\[trace\]), which reads $$\begin{aligned}
&\sum_{l=n+p-2}^{\infty}\Delta^{l}\sum^{l-n-p+2}_{i=0}c_i\,a_{l-i-n-p+2}\,\big[(l-i-p+2)(l+1)+\tfrac{1}{6}(i+p)(i+p-1)\big] =-\tfrac{1}{3}\sum^{\infty}_{l=n}\Delta^{l}\,a_{l-n}
\,.
\label{KeyEq3}\end{aligned}$$
Now we analyze the consequences of the equations (\[KeyEq1\])–(\[KeyEq3\]).
First, by comparing the corresponding coefficients of the same powers of $\Delta^l$ on both sides of the key relation (\[KeyEq1\]), we can express the coefficients $c_j$ in terms of (products of) $a_j$. Moreover, the *terms with the lowest order* put further restrictions. In particular, comparing the lowest orders on both sides (that is ${l=2n-2}$ and ${l=p-4}$) it is obvious that *we have to discuss three distinct cases*, namely:
- **Case I**: ${\ \ 2n-2<p-4}$, i.e., ${\ p>2n+2}$,
- **Case II**: ${\ 2n-2>p-4}$, i.e., ${\ p<2n+2}$,
- **Case III**: ${2n-2=p-4}$, i.e., ${\ p=2n+2}$.
Now let us systematically derive all possible solutions in these three distinct cases.
**Case I**
| 1 | member_54 |
----------
In this case, ${2n-2<p-4}$, so that the *lowest* order in the key equation (\[KeyEq1\]) is on the *left hand* side, namely $\Delta^l$ with ${l=2n-2}$, and this yields the condition $$n(n+1)=0 \,.
\label{KeyEq1CaseI}$$ There are thus only two possible cases, namely ${n=0}$ and ${n=-1}$. Next, it is convenient to apply equation (\[KeyEq3\]), whose lowest orders on both sides are $$\big[6n(n+p-1)+p(p-1)\big]c_0\,\Delta^{n+p-2}+\cdots=-2\,\Delta^{n}+\cdots \,.
\label{KeyEq3CaseI}$$ For ${n=0}$, these powers are ${\Delta^{p-2}}$ and ${\Delta^{0}}$, respectively, but ${p-2>2n=0}$ by the definition of Case I. The lowest order ${0=-2\Delta^{0}}$ thus leads to a contradiction. Only the possibility ${n=-1}$ remains, for which (\[KeyEq3CaseI\]) reduces to $$(p-3)(p-4)c_0\,\Delta^{p-3}+\cdots =-2\,\Delta^{-1}+\cdots
\,.
\label{KeyEq3CaseIn=-1}$$ Since ${c_0\ne0}$, the only possibility is ${p=2}$, in which case ${c_0=-1}$.
**To summarize**: The only possible class of solutions in Case I is given by $$[n,p]=[-1,2]\qquad \hbox{with}\quad c_0=-1\,.
\label{CaseI_summary}$$
**Case II**
-----------
In this case, ${2n-2>p-4}$, so that the *lowest* order in the key equation (\[KeyEq1\]) is on the *right hand* side, namely $\Delta^l$ with ${l=p-4}$, and this gives the condition $$p(p-1)(p-2)(p-3)=0 \,.
\label{KeyEq1CaseII}$$ Thus there are four possible cases, namely ${p=0}$, ${p=1}$, ${p=2}$, and ${p=3}$. Equation (\[KeyEq3\]) has the same lowest orders on both sides as given by equation (\[KeyEq3CaseI\]), that is $$\begin{aligned}
\hbox{for}\quad p=0:\qquad &
| 1 | member_54 |
---
abstract: 'In this note, we show that the normalized Hochschild co–chains of an associative algebra with a non–degenerate, symmetric, invariant inner product are an algebra over a chain model of the framed little discs operad which is given by cacti. In particular, in this sense they are a BV algebra up to homotopy and the Hochschild cohomology of such an algebra is a BV algebra whose induced bracket coincides with Gerstenhaber’s bracket. To show this, we use a cellular chain model for the framed little disc operad in terms of normalized cacti. This model is given by tensoring our chain model for the little discs operad in terms of spineless cacti with natural chain models for $(S^1)^{\times n}$ adapted to cacti.'
address: 'University of Connecticut, Department of Mathematics, Storrs, CT 06269'
author:
- 'Ralph M. Kaufmann'
title: 'A proof of a cyclic version of Deligne’s conjecture via Cacti'
---
Introduction {#introduction .unnumbered}
============
In this note, we expand our chain model of the little discs operad which we gave in terms of spineless cacti to a chain model for the framed little discs operad in terms of normalized cacti. Extending the philosophy of [@del], we then show that the | 1 | member_55 |
chain model for the framed little discs operad naturally acts on the normalized Hochschild cochains of a unital associative algebra with a non–degenerate, symmetric, invariant bi–linear pairing. In fact, as in [@del], this operation can again be seen as a discretization of the calculations for the relations of a BV algebra up to homotopy on the chains of the operad $\Arc$ of [@KLP]. In [@cact] it is proven that the operad of framed little discs is equivalent to the operad of cacti. Moreover, we gave a description of cacti in terms of a bi–crossed product of spineless cacti and an operad built on the monoid $S^1$, which we showed to be homotopy equivalent to the semi–direct product of these operads [@cact]. Furthermore, we gave a chain model for spineless cacti in terms of normalized spineless cacti, which we showed to give a natural solution to Deligne’s conjecture [@del]. Using the description in terms of the bi–crossed and semi–direct products, we obtain a chain model for the operad of framed little discs by tensoring the chains of normalized spineless cacti with the chains for the operad built on the monoid $S^1$. In order to prove the necessary relations on the chain
level one can translate the respective relations from the relations in the $\Arc$ operad using the method described in [@cact; @KLP]. As it turns out, in order to translate the relations and thus to establish the homotopy BV structure on the chain level, one needs a refinement of the cell decomposition on the semi-direct product to be able to accommodate all the operations which were used in the $\Arc$ operad picture. This refinement uses cell decompositions on the $S^1$ factors which are induced by regarding them as the lobe they represent. This leads to a combinatorial description in terms of planar planted black and white (b/w) bipartite trees with additional data called spines. In the language of cacti [@cact], the additional data keeps track of the position of the local zeros. On these trees, there are linear orders at each vertex, which may differ from the induced linear order of the planar planted trees. This forces us to look at non–rooted trees or equivalently to invert the orientation of edges. According to the general calculus for “correlation functions” defined by trees, to achieve such an inversion one needs to have a non–degenerate pairing, which is symmetric and invariant. This is | 1 | member_55 |
the assumption we have to make on our algebra. With this assumption, we can rewrite the action of the cellular chains as “operadic correlation functions” for decorated trees. In this description the operation of the chains of the framed little discs operad becomes apparent.
The results and techniques we present below can also be employed in other situations, which we comment on at the end of the paper. Notably one can use it to obtain an action of cells of a ribbon graph cell decomposition of moduli space on cyclic complexes. This should ultimately lead to string topology like operations of the cells of moduli space of decorated bordered surfaces on the free loop space of a compact manifold extending the operations of the string PROP or dioperad. The basic constructions for this are announced below.
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank Alain Connes for an enlightening discussion and Jim Stasheff for his valuable comments. We also thank the Max–Planck–Institute for Mathematics in Bonn for providing the atmosphere and stimulus to conceptualize and complete this paper.
Background
==========
Graphs {#Graphs}
------
In this section, we formally introduce the graphs and the operations on graphs which we will | 1 | member_55 |
use in our analysis of cacti. This is the approach given in Appendix B of [@cact], in which cacti are characterized as a certain type of ribbon graph. Namely, a cactus is a marked treelike ribbon graph with a metric.
### Graphs {#graphs}
A graph $\Gamma$ is a tuple $(V_{\Gamma},F_{\Gamma}, \imath_{\Gamma}: F_{\Gamma}\rightarrow
F_{\Gamma},\del_{\Gamma}:F_{\Gamma} \rightarrow V_{\Gamma})$ where $\imath_{\Gamma}$ is an involution $\imath_{\Gamma}^2=id$ without fixed points. We call $V_{\Gamma}$ the vertices of $\Gamma$ and $F_{\Gamma}$ the flags of $\Gamma$. The edges $E_{\Gamma}$ of $\Gamma$ are the orbits of the flags under the involution $\imath_{\Gamma}$. A directed edge is an edge together with an order of the two flags which define it. In case there is no risk of confusion, we will drop the subscripts $\Gamma$. Notice that $f\mapsto (f,\imath(f))$ gives a bijection between flags and directed edges.
We also call $F_v(\Gamma):=\del^{-1}(v)\subset F_{\Gamma}$ the set of flags of the vertex $v$ and call $|F_v({\Gamma})|$ the valence of $v$ and denote it by $\val(v)$. We also let $E(v)=\{\{f,\imath(f)\}|f\in F_{v}\}$ and call these edges the edges incident to $v$.
The geometric realization of a graph is given by considering each flag as a half-edge and gluing the half-edges together using the involution $\imath$. This | 1 | member_55 |
yields a one-dimensional CW complex, which we call the realization of the graph.
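The definition above translates directly into a small data structure; the following Python sketch (illustrative only, not from the paper) encodes a graph as $(V,F,\imath,\del)$, recovers the edges as orbits of the involution, and computes valences:

```python
# A graph as (V, F, involution i: F -> F without fixed points, boundary d: F -> V).
# Edges are the orbits {f, i(f)}; val(v) = |d^{-1}(v)|.
from dataclasses import dataclass

@dataclass
class Graph:
    vertices: set
    flags: set
    inv: dict   # the involution i on flags
    bd: dict    # the boundary map d from flags to vertices

    def __post_init__(self):
        # i must be a fixed-point-free involution
        assert all(self.inv[self.inv[f]] == f and self.inv[f] != f
                   for f in self.flags)

    def edges(self):
        return {frozenset({f, self.inv[f]}) for f in self.flags}

    def valence(self, v):
        return sum(1 for f in self.flags if self.bd[f] == v)

# a single edge joining two vertices: flags a, b with i(a) = b
g = Graph({'v', 'w'}, {'a', 'b'}, {'a': 'b', 'b': 'a'}, {'a': 'v', 'b': 'w'})
print(len(g.edges()), g.valence('v'))  # 1 1
```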
### Trees
A graph is connected if its realization is. A graph is a tree if it is connected and its realization is contractible.
A rooted tree is a pair $(\t,v_0)$ where $\t$ is a tree and $v_0\in V_{\t}$ is a distinguished vertex called the root. In a rooted tree there is a natural orientation for edges, in which each edge points toward the root: we say $(f,\imath (f))$ is naturally oriented if $\del(\imath(f))$ is on the unique shortest path from $\del(f)$ to the root. This means that the set $E(v)$ splits up into incoming and outgoing edges. Given a vertex $v$, we let $|v|$ be the number of incoming edges and call it the arity of $v$. A vertex $v$ is called a leaf if $|v|=0$. Notice that the root is the only vertex for which $|v_0|=\val(v_0)$; for all other vertices $v\neq v_0$ one has $|v|=\val(v)-1$.
A bi-colored or black and white (b/w) tree is a tree $\t$ together with a map $\color:V\rightarrow \mathbb{Z}/2\mathbb{Z}$. Such a tree is called bipartite if for all $f\in
F_{\t}:\color(\del(f))+\color(\del(\imath(f)))=1$, that is edges are only between black and white vertices. We | 1 | member_55 |
call the set $V_w:=\color^{-1}(1)$ the white vertices. If $(f,\imath (f))$ is a naturally oriented edge, we call the edge white if $\del(\imath(f))\in V_w$ and denote the set of white edges by $E_w$. Likewise we call $V_b:=\color^{-1}(0)$ the black vertices and let $E_b$ be the set of black edges, where a naturally oriented edge $(f,\imath (f))$ is called black if $\del(\imath(f))\in V_b$.
The black leaves in a rooted black and white tree are called tails. The edges incident to the tails are called tail edges and are denoted $E_{tail}$. For tails, we will only consider those flags of the tail edges which are not incident to the tail vertices and call them $F_{tail}$.
### Planar trees and Ribbon graphs
A ribbon graph is a connected graph whose vertices all have valence at least two, together with a cyclic order on the set of flags $F_v$ for every vertex $v$.
A graph with a cyclic order of the flags at each vertex gives rise to bijections $N_v:F_v\rightarrow F_v$ where $N_v(f)$ is the next flag in the cyclic order. Since $F=\amalg F_v$ one obtains a map $N:F\rightarrow F$. The orbits of the map $N \circ \imath$ are called the cycles | 1 | member_55 |
or the boundaries of the graph. These sets have the induced cyclic order.
Notice that each boundary can be seen as a cyclic sequence of directed edges. The directions are as follows. Start with any flag $f$ in the orbit. In the geometric realization go along this half-edge starting from the vertex $\del(f)$, continue along the second half-edge $\imath(f)$ until you reach the vertex $\del(\imath(f))$ then continue starting along the flag $N(\imath(f))$ and repeat.
A tree with a cyclic order of the flags at each vertex is called planar. A planar tree has only one cycle $c_0$.
Planar planted trees
--------------------
A planted planar tree is a rooted planar tree $(\t,v_0)$ together with a linear order of the set of flags at $v_0$. Such a tree carries a linear order on all of its flags: let $f$ be the smallest element of $\del^{-1}(v_0)$; since every flag appears in the cycle $c_0$, declaring $f$ to be the smallest element upgrades the cyclic order of $c_0$ to a linear order on the set of all flags. This linear order induces a linear order on all oriented edges and on all un-oriented edges, by restricting to the edges in the orientation opposite the natural orientation, i.e. pointing away from
the root. We denote the latter by $\prec$ and its restriction to $E(v)$ or $F(v)$ by $\prec_v$.
We will equivalently consider planar planted trees as defined above or as rooted planar trees whose root vertex has valence one. The bijection in one direction is given by adding a new root vertex and one new edge such that the induced linear structure on the old root is the given one. This tree is called the realization of the planar planted tree. In the other direction the bijection is simply given by contracting the unique edge incident to the root, but retaining the linear order. In the realization of a planar planted tree, we call the unique edge incident to the (new) root $v_{root}$ the root edge and denote it by $e_{root}$ and set $f_{root}$ to be the flag of the root edge which is not incident to the root. Also $E_{root}=\{e_{root}\}, F_{root}=\{f_{root}\}$.
An angle at a vertex $v$ in a planar tree is a pair of two flags incident to $v$ of which one is the immediate successor of the other in the cyclic order of $F_v$. There is a bijection between angles, flags and edges by associating to an | 1 | member_55 |
angle its bigger flag and to the latter the unique edge defined by it.
The genus of a ribbon graph and its surface
-------------------------------------------
The genus $g(\Gamma)$ of a ribbon graph $\Gamma$ is given by its Euler characteristic: $2-2g(\Gamma)=|V_\Gamma|-|E_{\Gamma}|+\#cycles$.
The surface $\Sigma(\Gamma)$ of a ribbon graph $\Gamma$ is the surface obtained from the realization of $\Gamma$ by thickening the edges to ribbons. I.e. replace each 0-simplex $v$ by a closed oriented disc $D(v)$ and each 1-simplex $e$ by $e\times I$ oriented in the standard fashion. Now glue the boundaries of $e\times I$ to the appropriate discs in their cyclic order according to the orientations. Notice that the genus of $\Sigma(\Gamma)$ is $g(\Gamma)$ and that $\Gamma$ is naturally embedded as the spine of this surface.
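To make the boundary cycles and the genus concrete, here is a hedged sketch in the same toy dictionary encoding as above (our assumption, not the paper's): the boundary cycles are the orbits of $N\circ\imath$, traversed exactly as described, and the genus follows from the Euler-characteristic relation $2-2g=|V|-|E|+\#cycles$.

```python
# Hypothetical sketch; `nxt` encodes N, the next flag in the cyclic order.
def boundary_cycles(flags, inv, nxt):
    """Orbits of N o inv: cross the edge of a flag, then turn at the vertex."""
    seen, cycles = set(), []
    for f in flags:
        if f in seen:
            continue
        cyc, g = [], f
        while g not in seen:
            seen.add(g)
            cyc.append(g)
            g = nxt[inv[g]]  # continue along the flag N(inv(g))
        cycles.append(cyc)
    return cycles

def genus(n_vertices, n_edges, n_cycles):
    """From 2 - 2g = |V| - |E| + #cycles."""
    return (2 - (n_vertices - n_edges + n_cycles)) // 2

# One vertex with two loops {a,A}, {b,B} in the interleaved cyclic order (a,b,A,B):
flags = ['a', 'b', 'A', 'B']
inv = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
nxt = {'a': 'b', 'b': 'A', 'A': 'B', 'B': 'a'}

assert len(boundary_cycles(flags, inv, nxt)) == 1
assert genus(1, 2, 1) == 1  # the interleaved figure-eight is the spine of a torus
```

With the non-interleaved cyclic order $(a,A,b,B)$ the same underlying graph has three boundary cycles and genus $0$, matching the sphere.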
### Treelike and marked ribbon graphs
A ribbon graph together with a distinguished cycle $c_0$ is called [*treelike*]{} if
- the graph is of genus $0$ and
- for all cycles $c_i\neq c_0$: if $f\in c_i$ then $\imath(f)\in c_0$.
In other words each edge is traversed by the cycle $c_0$. Therefore there is a cyclic order on all (non-directed) edges, namely the cyclic order of $c_0$.
A [*marked ribbon graph*]{} is a ribbon graph together with a map $\mk:\{cycles\} | 1 | member_55 |
\rightarrow
F_{\Gamma}$ satisfying the conditions
- For every cycle $c$ the directed edge $\mk(c)$ belongs to the cycle.
- All vertices of valence two are in the image of $\mk$, that is $\forall v,\val(v)=2$ implies $v\in Im(\del\circ\mk)$.
Notice that on a marked treelike ribbon graph there is a linear order on each of the cycles $c_i$. This order is defined by upgrading the cyclic order to the linear order $\prec_i$ in which $\mk(c_i)$ is the smallest element.
### Dual b/w tree of a marked ribbon graph
Given a marked treelike ribbon graph $\G$, we define its dual tree to be the colored graph whose black vertices are given by $V_{\G}$ and whose set of white vertices is the set of cycles $c_i$ of $\G$. The set of flags at $c_i$ are the flags $f$ with $f\in c_i$ and the set of flags at $v$ are the flags $\{f:f \in c_0,
\del(f)=v\}$. The involution is given by $\imath_{\t}(f)=N(f)$ if $f\in c_0$ and $\imath_{\t}(f)=N^{-1}(f)$ else.
This graph is a tree and is b/w and bipartite by construction. It is also planar, since the $c_i$ and the sets $F(v)$ have a cyclic order and therefore also $F_v\cap c_0$. It is furthermore rooted by | 1 | member_55 |
declaring $\del(\mk(c_0))$ to be the root vertex and declaring $\mk(c_0)$ to be the smallest element makes it into a planted tree.
An equivalent definition is obtained by declaring that there is an edge between a black vertex $b$ and a white vertex $c_i$ if and only if the vertex of $\G$ corresponding to $b$ lies on the boundary of the cycle $c_i$, i.e. $b\in \del(c_i):= \{\del(f):f\in c_i\}$.
### Spineless marked ribbon graphs {#spinlessgraph}
A marked treelike ribbon graph is called [*spineless*]{}, if
- There is at most one vertex of valence $2$. If there is such a vertex $v_0$ then $\del(\mk(c_0))=v_{0}$.
- The induced linear orders on the $c_i$ are compatible with that of $c_0$, i.e. $f\prec_i f'$ if and only if $\imath(f')\prec_0 \imath(f)$.
### Graphs with a metric
A metric $w_{\Gamma}$ for a graph is a map $E_{\Gamma}\rightarrow \mathbb{R}_{>0}$. The (global) re-scaling of a metric $w$ by $\lambda$ is the metric $\lambda w$ defined by $(\lambda w)(e)=\lambda\, w(e)$. The length of a cycle $c$ is the sum of the lengths of its edges $length(c)=\sum_{f\in c} w(\{f,\imath(f)\})$. A metric for a treelike ribbon graph is called normalized if the length of each non-distinguished cycle is $1$.
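Normalization can be sketched cycle by cycle, assuming the edge sets of the non-distinguished cycles are disjoint (as for the arcs of a cactus, where every arc lies on exactly one lobe); the list-of-lists representation below is our assumption, not the paper's.

```python
# Hedged sketch: rescale each non-distinguished cycle to length 1.
def normalize(cycle_edge_lengths):
    """cycle_edge_lengths: one list of edge lengths per non-distinguished cycle;
    the lists are assumed disjoint, so per-cycle rescaling is well-defined."""
    return [[L / sum(cycle) for L in cycle] for cycle in cycle_edge_lengths]

assert normalize([[2.0, 2.0], [1.0, 3.0]]) == [[0.5, 0.5], [0.25, 0.75]]
```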
### Marked ribbon graphs with metric and | 1 | member_55 |
maps of circles.
For a marked ribbon graph with a metric, let $c_i$ be its cycles, let $|c_i|$ be their image in the realization and let $r_i$ be the length of $c_i$. Then there are natural maps $\phi_i:S^1\rightarrow |c_i|$ which map $S^1$ onto the cycle by starting at the vertex $v_i:=\del(\mk(c_i))$ and going around the cycle mapping each point $\theta\in S^1$ to the point at distance $\frac{\theta}{2\pi}r_i$ from $v_i$ along the cycle $c_i$.
### Contracting edges
The contraction $(\bar V_{\Gamma}, \bar F_{\Gamma},\bar
\imath,\bar \del)$ of a graph $(V_{\Gamma},F_{\Gamma},\imath,\del)$ with respect to an edge $e=\{f,\imath(f)\}$ is defined as follows. Let $\sim$ be the equivalence relation induced by $\del(f)\sim\del(\imath(f))$. Then let $\bar V_{\Gamma}:=V_{\Gamma}/\sim$, $\bar
F_{\Gamma}=F_{\Gamma}\setminus\{f,\imath(f)\}$ and $\bar \imath:
\bar F_{\Gamma}\rightarrow \bar F_{\Gamma}, \bar\del: \bar
F_{\Gamma}\rightarrow \bar V_{\Gamma}$ be the induced maps.
For a marked ribbon graph, we define the marking of $(\bar
V_{\Gamma}, \bar F_{\Gamma},\bar \imath,\bar \del)$ to be $\overline{\mk}(\bar c)=\overline{\mk(c)}$ if $\mk(c)\notin\{f,\imath(f)\}$ and $\overline{\mk}(\bar
c)=\overline{N\circ \imath(\mk (c))}$ if $\mk(c)\in\{f,\imath(f)\}$, viz. the image of the next flag in the cycle.
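In the toy encoding used earlier, the contraction of an edge can be sketched as follows (hypothetical helper, assuming the contracted edge is not a loop, i.e. its two endpoints are distinct):

```python
# Hypothetical sketch of edge contraction: remove the two flags of the edge
# and merge its endpoints, following the definition above.
def contract(flags, inv, bdy, f):
    """Contract the edge {f, inv(f)}; the merged vertex keeps the name bdy[f].
    Assumes bdy[f] != bdy[inv(f)] (no loop contraction)."""
    g = inv[f]
    v, w = bdy[f], bdy[g]
    new_flags = [x for x in flags if x not in (f, g)]
    new_inv = {x: inv[x] for x in new_flags}
    new_bdy = {x: (v if bdy[x] == w else bdy[x]) for x in new_flags}
    return new_flags, new_inv, new_bdy

# Path with two edges on vertices 0-1-2; contract the first edge.
flags = ['a', 'A', 'b', 'B']
inv = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
bdy = {'a': 0, 'A': 1, 'b': 1, 'B': 2}

nf, ni, nb = contract(flags, inv, bdy, 'a')
assert nf == ['b', 'B']
assert nb == {'b': 0, 'B': 2}  # vertex 1 has been merged into vertex 0
```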
### Labelling graphs
By a labelling of the edges of a graph $\Gamma$ by a set $S$, we simply mean a map $E_{\Gamma}\rightarrow S$. A labelling of a ribbon graph $\Gamma$ by | 1 | member_55 |
a set $S$ is a map $\lab:\{$cycles of $\Gamma\}\rightarrow S$; we will write $c_i:=\lab^{-1}(i)$. By a labelling of a black and white tree by a set $S$ we mean a map $\lab:E_w\rightarrow S$. Again we will write $v_i:=\lab^{-1}(i)$.
### Planar planted bipartite labelled trees with white leaves
We set $\wlbptree(n)$ to be the set of planar planted bipartite trees which are labelled from $\{1,\dots,n\}$ with white leaves only. To avoid cluttered notation, we also denote the respective free Abelian group and the $k$-vector space with basis $\wlbptree(n)$ by the same name and let $\wlbptree$ be their union respectively direct sum.
Cacti
-----
A cactus with $n$ lobes is a $\{0,1, \dots ,n\}$-labelled marked treelike ribbon graph with a metric; the set of these graphs is denoted $\Cacti(n)$. $\Cact(n)\subset \Cacti(n)$ is the subset of spineless graphs, called spineless cacti or cacti without spines. $\Cacti^1(n)\subset \Cacti(n)$ is the subset of normalized graphs, called normalized cacti, and finally $\Cact^1(n)=\Cact(n)\cap\Cacti^1(n)$ is the set of normalized spineless cacti.
### Cactus terminology
The edges of a cactus are traditionally called arcs or segments and the cycles of a cactus are traditionally called lobes. The vertices are sometimes called the marked | 1 | member_55 |
or special points. Furthermore the distinguished cycle $c_0$ is called the outside circle or the perimeter, and the vertex $\del(\mk(c_0))$ is called the global zero. The vertices $\del(\mk(c_i)), i\neq 0$, are called the local zeros. In pictures these are represented by lines rather than fat dots.
\[setrem\] It is clear that as sets $\Cacti(n)=\Cact(n)\times
(S^1)^{\times n}$ and $\cact(n)= \cact^1(n)\times
\mathbb{R}_{>0}^{\times n}$.
For the first statement one notices that for each lobe $v_i$ there is a unique lowest intersection point $b$, which is the vertex of the outgoing edge of $v_i$. Thus there is a canonical map $\phi'_i:S^1\rightarrow |c_i|$ which starts at $b$ and goes around the cycle opposite its natural orientation. So to each cycle we associate $(\phi'_i)^{-1}(\del(\mk(c_i)))$, that is the co-ordinate of the spine as measured by $\phi'_i$. This gives the projection onto the factors $(S^1)^{\times n}$. The projection onto the first factor is given by forgetting the spines, i.e. contracting the edges $\mk(c_i)$ if $\val(\del(\mk(c_i)))=2$ and changing the marking to the unique marking which makes the graph spineless.
For the second statement the first projection is given by homogeneously scaling the weights of the edges of each non-marked cycle so that their lengths are one. The projection to the | 1 | member_55 |
factors of $\mathbb{R}_{>0}$ are given by associating to each lobe its length. In both cases the inverse map is clear.
The topological type of a spineless cactus in $\cact^1(n)$ is defined to be its dual b/w tree $\t \in \wlbptree(n)$.
\[arctoedge\] Notice that the arcs of a cactus correspond to the set $E_{arcs}=E(\t)\setminus (\{e_{root}\})$. This bijection can be defined as follows. To a given $e\in E_{arcs}, e=\{w,b\}$ with $b$ black and $w$ white, we associate the unique arc between the points corresponding to the black vertices $b$ and $b-$ where $b-$ is the black vertex immediately preceding $b$ in the cyclic order of $v$. In other words if $e=\{f,\imath(f)\}$ with $f\in F_v$. Let $f-$ be the flag immediately preceding $f$ in the cyclic order at $v$, then $b-=\del(\imath(f-))$. Notice that if $|v|=0$ then and only then $f-=f$.
\[typelemma\] A spineless cactus is uniquely determined by its topological type and the lengths of the segments.
The CW complex of normalized spineless cacti
--------------------------------------------
We recall from [@del] the CW complexes $\CWcact(n)$. For more details and pictures the reader is referred to [@del; @cact].
\[lengthrem\] For a normalized spineless cactus the lengths of the arcs have to sum up to the radius | 1 | member_55 |
of the lobe and the number of arcs on a given lobe represented by a white vertex $v$ is $\val(v)=|v|+1$. Hence the lengths of the arcs lying on the lobe represented by a vertex $v$ are in 1-1 correspondence with points of the simplex $|\Delta^{|v|}|$. The coordinates of $|\Delta^{|v|}|$ naturally correspond to the arcs of the lobe represented by $v$ on one hand and on the other hand in the dual b/w graph to the edges incident to $v$.
### The tree differential in the spineless case {#diffdef}
Let $\t\in \wlbptree$. We set $E_{angle}=E(\t)\setminus
(E_{leaf}(\t)\cup \{e_{root}\})$ and we denote by $\num_E:E_{angle} \rightarrow \{1,\dots,N\}$ the bijection which is induced by the linear order $\prec^{(\t,p)}$.
Let $\t\in \wlbptree$, $e\in E_{angle}$, $e=\{w,b\}$, with $w\in V_w$ and $b\in V_b$. Let $e-=\{w,b-\}$ be the edge preceding $e$ in the cyclic order $\prec^{\t}_w$ at $w$. Then $\del_e(\tau)$ is defined to be the planar tree obtained by collapsing the angle between the edge $e$ and its predecessor in the cyclic order of $w$ by identifying $b$ with $b-$ and $e$ with $e-$. Formally $w=\whitevert(e), e-=\prec^{\t}_w(e),\{b-\}= \del(e-)\cap
V_b(\t)$, $V_{\del_e(\tau)}=V(\t)/(b\sim b-)$, $E_{\del_e(\tau)}=E_{\tau}/(e\sim e-)$. The linear order of $\del_e(\t)$ is given by keeping the linear order at all vertices which | 1 | member_55 |
are not equal to $\bar b$ where $\bar
b$ is the image of $b$ and $b-$. For $\bar b$ the order is given by extending the linear order $(\In(\bar b), \prec_{\bar
b}^{\del_e(\t)}) =(\In(b-)\amalg\In(b), \prec^{\t}_{b-}\amalg
\prec^{\t}_{b}) $ (the usual order on the union of totally ordered sets) to $E(\bar b)$ by declaring the image of $e$ and $e-$ to be the minimal element.
We define the operator $\del$ on the space $\wlbptree$ to be given by the following formula: $\del(\t) := \sum_{e\in E_{angle}}
(-1)^{\num_E(e)-1} \del_e (\tau) $.
### The Cell Complex
We define $\wlbptree(n)^k$ to be the elements of $\wlbptree(n)$ with $|E_w|=k$.
For $\t \in \wlbptree$ we define $\D(\t):=\times_{v \in V_w(\tau)}\D^{|v|}$. We define $C(\t)=|\D(\t)|$. Notice that $\dim(C(\t))=|E_w(\t)|$.
Given $\D(\t)$ and a vertex $x$ of any of the constituting simplices of $\D(\t)$ we define the $x$-th face of $C(\t)$ to be the subset of $|\D(\t)|$ whose points have the $x$-th coordinate equal to zero.
We let $\CWcact(n)$ be the CW complex whose k-cells are indexed by $\t \in \wlbptree(n)^k$ with the cell $C(\t)=|\D(\t)|$ and the attaching maps $e_{\t}$ defined as follows. We identify the $x$-th face of $C(\t)$ with $C(\t')$ where $\t'=\del_x(\t)$. This corresponds to contracting an edge of the cactus if | 1 | member_55 |
its weight goes to zero (see Remark \[arctoedge\]) so that $\Delta(\del \t)$ is identified with $\del (\Delta(\t))$.
We define the topology of $\cact^1(n)$ to be that induced by the bijection with $\CWcact(n)$. Via Remark \[setrem\] this gives a topology to the spaces $\Cact(n),\cacti(n)$ and $\cacti^1(n)$.
The (quasi)-operad structure
----------------------------
### The operad of cacti
The gluing maps for cacti $$\circ_i:\cacti(n)\otimes \cacti(m)\rightarrow \cacti(n+m-1)$$ are defined on elements $(c,c')\mapsto c\circ_i c'$ as follows
- Scaling the weight function $w'$ of $c'$ by the factor $\frac{r_i}{R}$, where $r_i$ is the length of the cycle $c_i$ of the cactus $c$ and $R$ is the length of the cycle $c_0$ of $c'$.
- Identifying the realization of the cycle $c_0$ of $c'$ with the cycle $c_i$ of $c$ via the maps $\phi_0(c')$ and $\phi_i(c)$, with the orientation on the second $S^1$ reversed, as usual.
These maps together with the $\Sn$ action permuting the labels turn the collection $\{\cacti(n)\}$ into an operad $\cacti$. The collection $\{\cact(n)\}$ forms the suboperad $\cact$.
### The quasi-operad of normalized cacti
We recall from [@cact] that a quasi-operad is the generalization of a (pseudo)-operad in which the axiom of associativity is omitted and the others are kept.
The gluing maps for | 1 | member_55 |
normalized cacti $$\circ_i:\cacti^1(n)\otimes \cacti^1(m)\rightarrow \cacti^1(n+m-1)$$ are defined on elements $(c,c') \mapsto c\circ_i c'$ simply by identifying the realization of the cycle $c_0$ of $c'$ with the cycle $c_i$ of $c$ via the maps $\phi_0(c')$ and $\phi_i(c)$ again with the orientation on the second $S^1$ reversed.
These maps together with the $\Sn$ action permuting the labels turn the collection $\{\cacti^1(n)\}$ into a homotopy associative quasi-operad $\cacti^1$. The collection $\{\cact^1(n)\}$ forms a homotopy associative quasi-suboperad $\cact^1$ of $\cacti^1$ [@cact].
Relations among cacti
---------------------
\[cactthm\] [@cact] Normalized cacti are homotopy equivalent through quasi-operads to the cacti. The same holds for the (quasi)-suboperads of normalized spineless cacti and spineless cacti.
[@cact] Normalized cacti are quasi-isomorphic as quasi-operads to cacti and normalized spineless cacti are quasi-isomorphic as quasi-operads to spineless cacti. In particular in both cases the homology quasi-operads are operads and are isomorphic as operads.
### Remarks on the bi-crossed product
In this section we recall the construction of the bi-crossed product as it was given in [@cact] to which we refer the reader for more details.
First notice that there is an action of $S^1$ on $\Cact(n)$ given by rotating the base point [*clockwise*]{} (i.e. in the orientation opposite the usual one of | 1 | member_55 |
$c_0$) around the perimeter. We denote this action by $$\rho^{S^1}: S^1 \times \Cact(n) \rightarrow \Cact(n)$$ With this action we can define the twisted gluing $$\begin{aligned}
\label{circtheta}
\circ_i^{S^1}:\Cact(n) \times (S^1)^{\times n} \times \Cact(m) &\rightarrow& \Cact(n+m-1)\nn\\
(C,\theta,C')&\mapsto& C \circ \rho^{S^1}(\theta_i,C') =: C
\circ_i^{\theta_i}C'\end{aligned}$$
Given a cactus without spines $C\in \Cact(n)$ the orientation reversed perimeter (i.e. going around the outer circle [*clockwise*]{} i.e. reversing the orientation of the source of $\phi_0$) gives a map $\Delta_C: S^1 \rightarrow (S^1)^n$.
As one goes around the perimeter the map goes around each circle once and thus the map $\Delta_C$ is homotopic to the diagonal $ \Delta_C (S^1) \sim \Delta(S^1)$.
We can use the map $\Delta_C$ to give an action of $S^1$ on $(S^1)^{\times n}$. $$\rho^C: S^1 \times(S^1)^{\times n}\stackrel{\Delta_C}
{\rightarrow} (S^1)^{\times n} \times (S^1)^{\times n}
\stackrel{\mu^n}{\rightarrow}(S^1)^{\times n}$$ Here $\mu^n$ is the diagonal multiplication in $(S^1)^{\times n}$, and $\bar \circ_i$ below is the operation which forgets the $i$-th factor and shuffles the last $m$ factors to the $i$-th, …, $(i+m-1)$-st places. Set $$\begin{gathered}
\label{perturbdef} \circ_i^C:(S^1)^{\times n} \times (S^1)^{\times
m} \stackrel{(id \times \pi_i)(\Delta) \times id}
{\longrightarrow} (S^1)^{\times n} \times
S^1\times (S^1)^{\times m}\\
\stackrel{id \times \rho^C}{\longrightarrow} (S^1)^{\times n}
\times (S^1)^{\times m}
\stackrel{\bar\circ_i}{\longrightarrow}(S^1)^{\times n+m-1}\end{gathered}$$ These maps are to be understood as perturbations of | 1 | member_55 |
the usual maps $$\begin{gathered}
\circ_i:(S^1)^{\times n} \times (S^1)^{\times
m} \stackrel{(id \times \pi_i)(\Delta) \times id}
{\longrightarrow} (S^1)^{\times n} \times
S^1\times (S^1)^{\times m}\\
\stackrel{id \times \rho}{\longrightarrow} (S^1)^{\times n} \times
(S^1)^{\times m}
\stackrel{\bar\circ_i}{\longrightarrow}(S^1)^{\times n+m-1}\end{gathered}$$ where now $\rho$ is the diagonal action of $S^1$ on $(S^1)^{\times
n}$. The maps $\circ_i$ and the permutation action on the factors give the collection $\{\mathcal{S}^1(n)\}$, with $\mathcal{S}^1(n)=(S^1)^{\times n}$, the structure of an operad. In fact this is exactly the usual construction of an operad built on a monoid.
\[cactbicross\] [@cact] The operad of cacti is the bi–crossed product of the operad $\cact$ of spineless cacti with the operad $\mathcal {S}^1$ based on $S^1$. Furthermore this bi–crossed product is homotopic to the semi–direct product of the operad of cacti without spines with the circle group $S^1$. $$\cacti \cong \cact \bowtie {\mathcal S}^1 \simeq \cact \rtimes
{\mathcal S}^1$$ The multiplication in the bi-crossed product is given by $$(C,\theta) \circ_i (C',\theta') = (C\circ_i^{\theta_i} C',
\theta\circ_{i}^{C'}\theta')$$ The multiplication in the semi-direct product is given by $$(C,\theta) \circ_i (C',\theta') = (C\circ_i^{\theta_i} C',
\theta\circ_{i}\theta')$$ Also, normalized cacti are homotopy equivalent to cacti which are homotopy equivalent to the bi-crossed product of normalized cacti with $\mathcal{S}^1$ and the semi-direct product with $\mathcal{S}^1$, where all equivalences are as | 1 | member_55 |
quasi-operads $$\cacti^1 \sim \cacti \cong \cact \bowtie {\mathcal S}^1
\sim\cact^1 \bowtie {\mathcal S}^1\sim \cact^1 \rtimes {\mathcal
S}^1$$
The proof of the first statement is given by verifying that the two operad structures coincide. For the second statement one notices that the homotopy diagonal is homotopy equivalent to the usual one and that one can find homotopies to the diagonal which continuously depend on the cactus. The third statement follows from contracting the factors $\mathbb{R}^n_{>0}$ and using Theorem \[cactthm\].
The homology operad of $\cacti$ is the semi-direct product of the homology operad of $\cact$ and the homology of the operad $\mathcal{S}^1$ built on the monoid $S^1$.
Relation to (framed) little discs
---------------------------------
[@cact] The operad $\cact$ is equivalent to the little discs operad and the operad $\cacti$ is equivalent to the framed little discs operad.
The latter result has been first stated by Voronov in [@Vor].
A CW decomposition for $\cacti^1$ and a chain model for the framed little discs
===============================================================================
A $\Zz$ decoration for a black and white bipartite tree is a map $\Zdec: V_w \rightarrow \Zz$.
\[firstcells\] The quasi–operad of normalized cacti $\cacti^1$ has a CW–decomposition whose cells are indexed by planar planted bi–partite trees with a $\Zz$ decoration. The $k$-cells are indexed by trees with $k-i$ white edges and $i$ vertices marked by $1$.
Moreover cellular chains are a chain model for the framed little discs operad and form an operad. This operad is isomorphic to the semi–direct product of the chain model of the little discs operad given by $CC_*(\cact)$ of [@del] and the cellular chains of the operad built on the monoid $S^1$.
For the CW decomposition we note that as spaces $\cacti^1(n)=
\cact^1(n) \times (S^1)^{\times n}$ see Remark \[setrem\]. Now viewing $S^1=[0,1]/0\sim1$ as a 1-cell together with the 0-cell given by $0\in S^1$ the first part of the proposition follows immediately, by viewing the decoration by 1 as indicating the presence of the 1-cell of $S^1$ for that labelled component in the product of cells.
To show that the cellular chains indeed form an operad, we use the fact that the bi–crossed product is homotopy equivalent to the semi–direct product in such a way, that the action of a cell $S^1$ in the bi–crossed product is homotopic to the diagonal action. This is just the observation that the diagonal and the diagonal defined by a cactus are homotopic. Since a semi-direct product of a monoid with | 1 | member_55 |
an operad is an operad the statement follows. Alternatively one could just remark, that there is also an obvious functorial map induced by the diagonal for these cells.
The chains are a chain model for the framed little discs operad since $\cacti^1(n)$ and $\cacti(n)$ are homotopy equivalent and the latter is equivalent to the framed little discs operad.
Although the above chain model is the one one would expect to use for the framed little discs, it does not have enough cells for our purposes. In order to translate the proofs in the arc complex given in [@KLP] into statements about the Hochschild complex, we will need a slightly finer cell structure than the one above. After having used the larger structure one can reduce to the cell model with fewer cells, as the two are obviously equivalent.
A spine decoration $\sdec$ for a planted planar bi–partite tree is a $\Zz$ decoration together with the marking of one angle at each vertex labelled by one and of a flag at each vertex labelled by zero. We denote the set of such trees which are $n$-labelled by $\swlbptree(n)$ and again use this notation as well for the free Abelian group and the $k$ vector